author    Linus Torvalds <torvalds@linux-foundation.org>  2019-07-11 10:55:49 -0700
committer Linus Torvalds <torvalds@linux-foundation.org>  2019-07-11 10:55:49 -0700
commit    237f83dfbe668443b5e31c3c7576125871cca674 (patch)
tree      11848a8d0aa414a1d3ce2024e181071b1d9dea08 /drivers
parent    8f6ccf6159aed1f04c6d179f61f6fb2691261e84 (diff)
parent    1ff2f0fa450ea4e4f87793d9ed513098ec6e12be (diff)
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next
Pull networking updates from David Miller:
 "Some highlights from this development cycle:

  1) Big refactoring of ipv6 route and neigh handling to support nexthop objects configurable as units from userspace. From David Ahern.

  2) Convert explored_states in BPF verifier into a hash table, significantly decreased state held for programs with bpf2bpf calls, from Alexei Starovoitov.

  3) Implement bpf_send_signal() helper, from Yonghong Song.

  4) Various classifier enhancements to mvpp2 driver, from Maxime Chevallier.

  5) Add aRFS support to hns3 driver, from Jian Shen.

  6) Fix use after free in inet frags by allocating fqdirs dynamically and reworking how rhashtable dismantle occurs, from Eric Dumazet.

  7) Add act_ctinfo packet classifier action, from Kevin Darbyshire-Bryant.

  8) Add TFO key backup infrastructure, from Jason Baron.

  9) Remove several old and unused ISDN drivers, from Arnd Bergmann.

  10) Add devlink notifications for flash update status to mlxsw driver, from Jiri Pirko.

  11) Lots of kTLS offload infrastructure fixes, from Jakub Kicinski.

  12) Add support for mv88e6250 DSA chips, from Rasmus Villemoes.

  13) Various enhancements to ipv6 flow label handling, from Eric Dumazet and Willem de Bruijn.

  14) Support TLS offload in nfp driver, from Jakub Kicinski, Dirk van der Merwe, and others.

  15) Various improvements to axienet driver including converting it to phylink, from Robert Hancock.

  16) Add PTP support to sja1105 DSA driver, from Vladimir Oltean.

  17) Add mqprio qdisc offload support to dpaa2-eth, from Ioana Radulescu.

  18) Add devlink health reporting to mlx5, from Moshe Shemesh.

  19) Convert stmmac over to phylink, from Jose Abreu.

  20) Add PTP PHC (Physical Hardware Clock) support to mlxsw, from Shalom Toledo.

  21) Add nftables SYNPROXY support, from Fernando Fernandez Mancera.

  22) Convert tcp_fastopen over to use SipHash, from Ard Biesheuvel.

  23) Track spill/fill of constants in BPF verifier, from Alexei Starovoitov.

  24) Support bounded loops in BPF, from Alexei Starovoitov.

  25) Various page_pool API fixes and improvements, from Jesper Dangaard Brouer.

  26) Just like ipv4, support ref-countless ipv6 route handling. From Wei Wang.

  27) Support VLAN offloading in aquantia driver, from Igor Russkikh.

  28) Add AF_XDP zero-copy support to mlx5, from Maxim Mikityanskiy.

  29) Add flower GRE encap/decap support to nfp driver, from Pieter Jansen van Vuuren.

  30) Protect against stack overflow when using act_mirred, from John Hurley.

  31) Allow devmap map lookups from eBPF, from Toke Høiland-Jørgensen.

  32) Use page_pool API in netsec driver, Ilias Apalodimas.

  33) Add Google gve network driver, from Catherine Sullivan.

  34) More indirect call avoidance, from Paolo Abeni.

  35) Add kTLS TX HW offload support to mlx5, from Tariq Toukan.

  36) Add XDP_REDIRECT support to bnxt_en, from Andy Gospodarek.

  37) Add MPLS manipulation actions to TC, from John Hurley.

  38) Add sending a packet to connection tracking from TC actions, and then allow flower classifier matching on conntrack state. From Paul Blakey.

  39) Netfilter hw offload support, from Pablo Neira Ayuso"

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (2080 commits)
  net/mlx5e: Return in default case statement in tx_post_resync_params
  mlx5: Return -EINVAL when WARN_ON_ONCE triggers in mlx5e_tls_resync().
  net: dsa: add support for BRIDGE_MROUTER attribute
  pkt_sched: Include const.h
  net: netsec: remove static declaration for netsec_set_tx_de()
  net: netsec: remove superfluous if statement
  netfilter: nf_tables: add hardware offload support
  net: flow_offload: rename tc_cls_flower_offload to flow_cls_offload
  net: flow_offload: add flow_block_cb_is_busy() and use it
  net: sched: remove tcf block API
  drivers: net: use flow block API
  net: sched: use flow block API
  net: flow_offload: add flow_block_cb_{priv, incref, decref}()
  net: flow_offload: add list handling functions
  net: flow_offload: add flow_block_cb_alloc() and flow_block_cb_free()
  net: flow_offload: rename TCF_BLOCK_BINDER_TYPE_* to FLOW_BLOCK_BINDER_TYPE_*
  net: flow_offload: rename TC_BLOCK_{UN}BIND to FLOW_BLOCK_{UN}BIND
  net: flow_offload: add flow_block_cb_setup_simple()
  net: hisilicon: Add an tx_desc to adapt HI13X1_GMAC
  net: hisilicon: Add an rx_desc to adapt HI13X1_GMAC
  ...
Diffstat (limited to 'drivers')
-rw-r--r--drivers/bluetooth/Kconfig12
-rw-r--r--drivers/bluetooth/bpa10x.c3
-rw-r--r--drivers/bluetooth/btbcm.c1
-rw-r--r--drivers/bluetooth/btmtkuart.c51
-rw-r--r--drivers/bluetooth/btqca.c47
-rw-r--r--drivers/bluetooth/btqca.h10
-rw-r--r--drivers/bluetooth/btrtl.c28
-rw-r--r--drivers/bluetooth/btrtl.h6
-rw-r--r--drivers/bluetooth/btsdio.c1
-rw-r--r--drivers/bluetooth/btusb.c584
-rw-r--r--drivers/bluetooth/hci_bcsp.c5
-rw-r--r--drivers/bluetooth/hci_ldisc.c8
-rw-r--r--drivers/bluetooth/hci_ll.c109
-rw-r--r--drivers/bluetooth/hci_mrvl.c72
-rw-r--r--drivers/bluetooth/hci_qca.c73
-rw-r--r--drivers/bluetooth/hci_uart.h1
-rw-r--r--drivers/i2c/i2c-core-acpi.c3
-rw-r--r--drivers/infiniband/core/roce_gid_mgmt.c5
-rw-r--r--drivers/infiniband/hw/cxgb4/cm.c9
-rw-r--r--drivers/infiniband/hw/i40iw/i40iw_cm.c7
-rw-r--r--drivers/infiniband/hw/i40iw/i40iw_main.c6
-rw-r--r--drivers/infiniband/hw/i40iw/i40iw_utils.c12
-rw-r--r--drivers/infiniband/hw/mlx5/cq.c13
-rw-r--r--drivers/infiniband/hw/mlx5/devx.c18
-rw-r--r--drivers/infiniband/hw/mlx5/flow.c13
-rw-r--r--drivers/infiniband/hw/mlx5/ib_rep.c39
-rw-r--r--drivers/infiniband/hw/mlx5/ib_rep.h4
-rw-r--r--drivers/infiniband/hw/mlx5/main.c79
-rw-r--r--drivers/infiniband/hw/mlx5/mlx5_ib.h3
-rw-r--r--drivers/infiniband/hw/mlx5/mr.c10
-rw-r--r--drivers/infiniband/hw/mlx5/odp.c33
-rw-r--r--drivers/infiniband/hw/mlx5/qp.c2
-rw-r--r--drivers/infiniband/hw/nes/nes.c8
-rw-r--r--drivers/infiniband/hw/qedr/main.c25
-rw-r--r--drivers/infiniband/hw/qedr/qedr.h2
-rw-r--r--drivers/infiniband/hw/usnic/usnic_ib_main.c15
-rw-r--r--drivers/infiniband/ulp/ipoib/ipoib_main.c1
-rw-r--r--drivers/isdn/Kconfig51
-rw-r--r--drivers/isdn/Makefile6
-rw-r--r--drivers/isdn/capi/Kconfig29
-rw-r--r--drivers/isdn/capi/Makefile2
-rw-r--r--drivers/isdn/capi/capidrv.c2525
-rw-r--r--drivers/isdn/capi/capidrv.h140
-rw-r--r--drivers/isdn/divert/Makefile10
-rw-r--r--drivers/isdn/divert/divert_init.c82
-rw-r--r--drivers/isdn/divert/divert_procfs.c336
-rw-r--r--drivers/isdn/divert/isdn_divert.c846
-rw-r--r--drivers/isdn/divert/isdn_divert.h132
-rw-r--r--drivers/isdn/gigaset/i4l.c692
-rw-r--r--drivers/isdn/hardware/Kconfig8
-rw-r--r--drivers/isdn/hardware/Makefile1
-rw-r--r--drivers/isdn/hardware/mISDN/Kconfig7
-rw-r--r--drivers/isdn/hardware/mISDN/Makefile2
-rw-r--r--drivers/isdn/hardware/mISDN/isdnhdlc.c (renamed from drivers/isdn/i4l/isdnhdlc.c)2
-rw-r--r--drivers/isdn/hardware/mISDN/isdnhdlc.h69
-rw-r--r--drivers/isdn/hardware/mISDN/netjet.c2
-rw-r--r--drivers/isdn/hisax/Kconfig423
-rw-r--r--drivers/isdn/hisax/Makefile60
-rw-r--r--drivers/isdn/hisax/amd7930_fn.c794
-rw-r--r--drivers/isdn/hisax/amd7930_fn.h37
-rw-r--r--drivers/isdn/hisax/arcofi.c131
-rw-r--r--drivers/isdn/hisax/arcofi.h27
-rw-r--r--drivers/isdn/hisax/asuscom.c423
-rw-r--r--drivers/isdn/hisax/avm_a1.c307
-rw-r--r--drivers/isdn/hisax/avm_a1p.c267
-rw-r--r--drivers/isdn/hisax/avm_pci.c904
-rw-r--r--drivers/isdn/hisax/avma1_cs.c162
-rw-r--r--drivers/isdn/hisax/bkm_a4t.c358
-rw-r--r--drivers/isdn/hisax/bkm_a8.c433
-rw-r--r--drivers/isdn/hisax/bkm_ax.h119
-rw-r--r--drivers/isdn/hisax/callc.c1792
-rw-r--r--drivers/isdn/hisax/config.c1993
-rw-r--r--drivers/isdn/hisax/diva.c1282
-rw-r--r--drivers/isdn/hisax/elsa.c1245
-rw-r--r--drivers/isdn/hisax/elsa_cs.c218
-rw-r--r--drivers/isdn/hisax/elsa_ser.c659
-rw-r--r--drivers/isdn/hisax/enternow_pci.c420
-rw-r--r--drivers/isdn/hisax/fsm.c161
-rw-r--r--drivers/isdn/hisax/fsm.h61
-rw-r--r--drivers/isdn/hisax/gazel.c691
-rw-r--r--drivers/isdn/hisax/hfc4s8s_l1.c1584
-rw-r--r--drivers/isdn/hisax/hfc4s8s_l1.h89
-rw-r--r--drivers/isdn/hisax/hfc_2bds0.c1078
-rw-r--r--drivers/isdn/hisax/hfc_2bds0.h128
-rw-r--r--drivers/isdn/hisax/hfc_2bs0.c591
-rw-r--r--drivers/isdn/hisax/hfc_2bs0.h60
-rw-r--r--drivers/isdn/hisax/hfc_pci.c1755
-rw-r--r--drivers/isdn/hisax/hfc_pci.h235
-rw-r--r--drivers/isdn/hisax/hfc_sx.c1517
-rw-r--r--drivers/isdn/hisax/hfc_sx.h196
-rw-r--r--drivers/isdn/hisax/hfc_usb.c1594
-rw-r--r--drivers/isdn/hisax/hfc_usb.h208
-rw-r--r--drivers/isdn/hisax/hfcscard.c261
-rw-r--r--drivers/isdn/hisax/hisax.h1352
-rw-r--r--drivers/isdn/hisax/hisax_cfg.h66
-rw-r--r--drivers/isdn/hisax/hisax_debug.h80
-rw-r--r--drivers/isdn/hisax/hisax_fcpcipnp.c1024
-rw-r--r--drivers/isdn/hisax/hisax_fcpcipnp.h58
-rw-r--r--drivers/isdn/hisax/hisax_if.h66
-rw-r--r--drivers/isdn/hisax/hisax_isac.c895
-rw-r--r--drivers/isdn/hisax/hisax_isac.h46
-rw-r--r--drivers/isdn/hisax/hscx.c277
-rw-r--r--drivers/isdn/hisax/hscx.h41
-rw-r--r--drivers/isdn/hisax/hscx_irq.c294
-rw-r--r--drivers/isdn/hisax/icc.c680
-rw-r--r--drivers/isdn/hisax/icc.h72
-rw-r--r--drivers/isdn/hisax/ipac.h29
-rw-r--r--drivers/isdn/hisax/ipacx.c913
-rw-r--r--drivers/isdn/hisax/ipacx.h162
-rw-r--r--drivers/isdn/hisax/isac.c681
-rw-r--r--drivers/isdn/hisax/isac.h70
-rw-r--r--drivers/isdn/hisax/isar.c1910
-rw-r--r--drivers/isdn/hisax/isar.h222
-rw-r--r--drivers/isdn/hisax/isdnl1.c930
-rw-r--r--drivers/isdn/hisax/isdnl1.h32
-rw-r--r--drivers/isdn/hisax/isdnl2.c1839
-rw-r--r--drivers/isdn/hisax/isdnl2.h25
-rw-r--r--drivers/isdn/hisax/isdnl3.c594
-rw-r--r--drivers/isdn/hisax/isdnl3.h42
-rw-r--r--drivers/isdn/hisax/isurf.c305
-rw-r--r--drivers/isdn/hisax/ix1_micro.c316
-rw-r--r--drivers/isdn/hisax/jade.c305
-rw-r--r--drivers/isdn/hisax/jade.h134
-rw-r--r--drivers/isdn/hisax/jade_irq.c238
-rw-r--r--drivers/isdn/hisax/l3_1tr6.c932
-rw-r--r--drivers/isdn/hisax/l3_1tr6.h164
-rw-r--r--drivers/isdn/hisax/l3dss1.c3227
-rw-r--r--drivers/isdn/hisax/l3dss1.h124
-rw-r--r--drivers/isdn/hisax/l3ni1.c3182
-rw-r--r--drivers/isdn/hisax/l3ni1.h136
-rw-r--r--drivers/isdn/hisax/lmgr.c50
-rw-r--r--drivers/isdn/hisax/mic.c235
-rw-r--r--drivers/isdn/hisax/netjet.c985
-rw-r--r--drivers/isdn/hisax/netjet.h69
-rw-r--r--drivers/isdn/hisax/niccy.c380
-rw-r--r--drivers/isdn/hisax/nj_s.c294
-rw-r--r--drivers/isdn/hisax/nj_u.c258
-rw-r--r--drivers/isdn/hisax/q931.c1513
-rw-r--r--drivers/isdn/hisax/s0box.c260
-rw-r--r--drivers/isdn/hisax/saphir.c296
-rw-r--r--drivers/isdn/hisax/sedlbauer.c873
-rw-r--r--drivers/isdn/hisax/sedlbauer_cs.c209
-rw-r--r--drivers/isdn/hisax/sportster.c267
-rw-r--r--drivers/isdn/hisax/st5481.h529
-rw-r--r--drivers/isdn/hisax/st5481_b.c380
-rw-r--r--drivers/isdn/hisax/st5481_d.c780
-rw-r--r--drivers/isdn/hisax/st5481_init.c221
-rw-r--r--drivers/isdn/hisax/st5481_usb.c659
-rw-r--r--drivers/isdn/hisax/tei.c465
-rw-r--r--drivers/isdn/hisax/teleint.c334
-rw-r--r--drivers/isdn/hisax/teles0.c364
-rw-r--r--drivers/isdn/hisax/teles3.c498
-rw-r--r--drivers/isdn/hisax/teles_cs.c201
-rw-r--r--drivers/isdn/hisax/telespci.c349
-rw-r--r--drivers/isdn/hisax/w6692.c1085
-rw-r--r--drivers/isdn/hisax/w6692.h184
-rw-r--r--drivers/isdn/i4l/Kconfig129
-rw-r--r--drivers/isdn/i4l/Makefile20
-rw-r--r--drivers/isdn/i4l/isdn_audio.c711
-rw-r--r--drivers/isdn/i4l/isdn_audio.h44
-rw-r--r--drivers/isdn/i4l/isdn_bsdcomp.c930
-rw-r--r--drivers/isdn/i4l/isdn_common.c2368
-rw-r--r--drivers/isdn/i4l/isdn_common.h47
-rw-r--r--drivers/isdn/i4l/isdn_concap.c99
-rw-r--r--drivers/isdn/i4l/isdn_concap.h11
-rw-r--r--drivers/isdn/i4l/isdn_net.c3198
-rw-r--r--drivers/isdn/i4l/isdn_net.h151
-rw-r--r--drivers/isdn/i4l/isdn_ppp.c3046
-rw-r--r--drivers/isdn/i4l/isdn_ppp.h41
-rw-r--r--drivers/isdn/i4l/isdn_tty.c3756
-rw-r--r--drivers/isdn/i4l/isdn_tty.h120
-rw-r--r--drivers/isdn/i4l/isdn_ttyfax.c1123
-rw-r--r--drivers/isdn/i4l/isdn_ttyfax.h17
-rw-r--r--drivers/isdn/i4l/isdn_v110.c625
-rw-r--r--drivers/isdn/i4l/isdn_v110.h29
-rw-r--r--drivers/isdn/i4l/isdn_x25iface.c332
-rw-r--r--drivers/isdn/i4l/isdn_x25iface.h30
-rw-r--r--drivers/isdn/isdnloop/Makefile6
-rw-r--r--drivers/isdn/isdnloop/isdnloop.c1528
-rw-r--r--drivers/isdn/isdnloop/isdnloop.h112
-rw-r--r--drivers/media/dvb-frontends/tua6100.c22
-rw-r--r--drivers/media/rc/bpf-lirc.c30
-rw-r--r--drivers/net/bonding/bond_3ad.c222
-rw-r--r--drivers/net/bonding/bond_alb.c30
-rw-r--r--drivers/net/bonding/bond_main.c388
-rw-r--r--drivers/net/bonding/bond_netlink.c14
-rw-r--r--drivers/net/bonding/bond_options.c101
-rw-r--r--drivers/net/bonding/bond_procfs.c2
-rw-r--r--drivers/net/bonding/bond_sysfs.c13
-rw-r--r--drivers/net/can/softing/softing_main.c4
-rw-r--r--drivers/net/dsa/Kconfig24
-rw-r--r--drivers/net/dsa/Makefile4
-rw-r--r--drivers/net/dsa/b53/b53_common.c4
-rw-r--r--drivers/net/dsa/microchip/Kconfig1
-rw-r--r--drivers/net/dsa/microchip/ksz9477.c229
-rw-r--r--drivers/net/dsa/microchip/ksz9477_spi.c114
-rw-r--r--drivers/net/dsa/microchip/ksz_common.c8
-rw-r--r--drivers/net/dsa/microchip/ksz_common.h169
-rw-r--r--drivers/net/dsa/microchip/ksz_priv.h25
-rw-r--r--drivers/net/dsa/microchip/ksz_spi.h69
-rw-r--r--drivers/net/dsa/mt7530.c46
-rw-r--r--drivers/net/dsa/mt7530.h4
-rw-r--r--drivers/net/dsa/mv88e6xxx/chip.c269
-rw-r--r--drivers/net/dsa/mv88e6xxx/chip.h18
-rw-r--r--drivers/net/dsa/mv88e6xxx/global1.c35
-rw-r--r--drivers/net/dsa/mv88e6xxx/global1.h16
-rw-r--r--drivers/net/dsa/mv88e6xxx/global1_atu.c11
-rw-r--r--drivers/net/dsa/mv88e6xxx/global1_vtu.c64
-rw-r--r--drivers/net/dsa/mv88e6xxx/global2.c46
-rw-r--r--drivers/net/dsa/mv88e6xxx/global2.h14
-rw-r--r--drivers/net/dsa/mv88e6xxx/hwtstamp.c28
-rw-r--r--drivers/net/dsa/mv88e6xxx/phy.c4
-rw-r--r--drivers/net/dsa/mv88e6xxx/port.c77
-rw-r--r--drivers/net/dsa/mv88e6xxx/port.h14
-rw-r--r--drivers/net/dsa/mv88e6xxx/ptp.c32
-rw-r--r--drivers/net/dsa/mv88e6xxx/serdes.c24
-rw-r--r--drivers/net/dsa/mv88e6xxx/smi.c25
-rw-r--r--drivers/net/dsa/qca8k.c15
-rw-r--r--drivers/net/dsa/qca8k.h2
-rw-r--r--drivers/net/dsa/sja1105/Kconfig9
-rw-r--r--drivers/net/dsa/sja1105/Makefile4
-rw-r--r--drivers/net/dsa/sja1105/sja1105.h54
-rw-r--r--drivers/net/dsa/sja1105/sja1105_clocking.c100
-rw-r--r--drivers/net/dsa/sja1105/sja1105_dynamic_config.c296
-rw-r--r--drivers/net/dsa/sja1105/sja1105_dynamic_config.h11
-rw-r--r--drivers/net/dsa/sja1105/sja1105_main.c868
-rw-r--r--drivers/net/dsa/sja1105/sja1105_ptp.c393
-rw-r--r--drivers/net/dsa/sja1105/sja1105_ptp.h64
-rw-r--r--drivers/net/dsa/sja1105/sja1105_spi.c70
-rw-r--r--drivers/net/dsa/sja1105/sja1105_static_config.c88
-rw-r--r--drivers/net/dsa/sja1105/sja1105_static_config.h37
-rw-r--r--drivers/net/dsa/vitesse-vsc73xx-core.c (renamed from drivers/net/dsa/vitesse-vsc73xx.c)206
-rw-r--r--drivers/net/dsa/vitesse-vsc73xx-platform.c164
-rw-r--r--drivers/net/dsa/vitesse-vsc73xx-spi.c203
-rw-r--r--drivers/net/dsa/vitesse-vsc73xx.h29
-rw-r--r--drivers/net/ethernet/Kconfig1
-rw-r--r--drivers/net/ethernet/Makefile1
-rw-r--r--drivers/net/ethernet/allwinner/sun4i-emac.c5
-rw-r--r--drivers/net/ethernet/amazon/ena/ena_admin_defs.h61
-rw-r--r--drivers/net/ethernet/amazon/ena/ena_com.c145
-rw-r--r--drivers/net/ethernet/amazon/ena/ena_com.h19
-rw-r--r--drivers/net/ethernet/amazon/ena/ena_eth_com.c54
-rw-r--r--drivers/net/ethernet/amazon/ena/ena_eth_com.h73
-rw-r--r--drivers/net/ethernet/amazon/ena/ena_ethtool.c35
-rw-r--r--drivers/net/ethernet/amazon/ena/ena_netdev.c389
-rw-r--r--drivers/net/ethernet/amazon/ena/ena_netdev.h42
-rw-r--r--drivers/net/ethernet/aquantia/atlantic/aq_cfg.h7
-rw-r--r--drivers/net/ethernet/aquantia/atlantic/aq_drvinfo.c2
-rw-r--r--drivers/net/ethernet/aquantia/atlantic/aq_drvinfo.h2
-rw-r--r--drivers/net/ethernet/aquantia/atlantic/aq_filters.c2
-rw-r--r--drivers/net/ethernet/aquantia/atlantic/aq_filters.h2
-rw-r--r--drivers/net/ethernet/aquantia/atlantic/aq_main.c34
-rw-r--r--drivers/net/ethernet/aquantia/atlantic/aq_nic.c28
-rw-r--r--drivers/net/ethernet/aquantia/atlantic/aq_nic.h2
-rw-r--r--drivers/net/ethernet/aquantia/atlantic/aq_ring.c4
-rw-r--r--drivers/net/ethernet/aquantia/atlantic/aq_ring.h9
-rw-r--r--drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_a0.c2
-rw-r--r--drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c62
-rw-r--r--drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0_internal.h7
-rw-r--r--drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh.c16
-rw-r--r--drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh.h5
-rw-r--r--drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh_internal.h18
-rw-r--r--drivers/net/ethernet/aquantia/atlantic/ver.h5
-rw-r--r--drivers/net/ethernet/atheros/Kconfig10
-rw-r--r--drivers/net/ethernet/atheros/Makefile1
-rw-r--r--drivers/net/ethernet/atheros/ag71xx.c1898
-rw-r--r--drivers/net/ethernet/atheros/atl1c/atl1c_main.c2
-rw-r--r--drivers/net/ethernet/broadcom/Kconfig2
-rw-r--r--drivers/net/ethernet/broadcom/bcm63xx_enet.c1
-rw-r--r--drivers/net/ethernet/broadcom/bcmsysport.c20
-rw-r--r--drivers/net/ethernet/broadcom/bcmsysport.h4
-rw-r--r--drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c7
-rw-r--r--drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c4
-rw-r--r--drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c33
-rw-r--r--drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.h3
-rw-r--r--drivers/net/ethernet/broadcom/bnxt/bnxt.c125
-rw-r--r--drivers/net/ethernet/broadcom/bnxt/bnxt.h21
-rw-r--r--drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c2
-rw-r--r--drivers/net/ethernet/broadcom/bnxt/bnxt_debugfs.c6
-rw-r--r--drivers/net/ethernet/broadcom/bnxt/bnxt_dim.c9
-rw-r--r--drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c8
-rw-r--r--drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c18
-rw-r--r--drivers/net/ethernet/broadcom/bnxt/bnxt_tc.h4
-rw-r--r--drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c4
-rw-r--r--drivers/net/ethernet/broadcom/bnxt/bnxt_vfr.c29
-rw-r--r--drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c144
-rw-r--r--drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h7
-rw-r--r--drivers/net/ethernet/broadcom/genet/bcmgenet.c18
-rw-r--r--drivers/net/ethernet/broadcom/genet/bcmgenet.h4
-rw-r--r--drivers/net/ethernet/broadcom/tg3.c2
-rw-r--r--drivers/net/ethernet/cadence/Kconfig10
-rw-r--r--drivers/net/ethernet/cadence/macb.h12
-rw-r--r--drivers/net/ethernet/cadence/macb_main.c143
-rw-r--r--drivers/net/ethernet/cadence/macb_ptp.c7
-rw-r--r--drivers/net/ethernet/calxeda/xgmac.c4
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4/Makefile2
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4/cxgb4.h62
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c49
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.h2
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c240
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4/cxgb4_mps.c241
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c22
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.h6
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c21
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h2
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4/t4_hw.c79
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4/t4_regs.h4
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4/t4fw_api.h28
-rw-r--r--drivers/net/ethernet/chelsio/libcxgb/libcxgb_ppm.c47
-rw-r--r--drivers/net/ethernet/chelsio/libcxgb/libcxgb_ppm.h7
-rw-r--r--drivers/net/ethernet/freescale/dpaa2/Kconfig3
-rw-r--r--drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c147
-rw-r--r--drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h9
-rw-r--r--drivers/net/ethernet/freescale/dpaa2/dpaa2-ptp.c242
-rw-r--r--drivers/net/ethernet/freescale/dpaa2/dprtc-cmd.h48
-rw-r--r--drivers/net/ethernet/freescale/dpaa2/dprtc.c191
-rw-r--r--drivers/net/ethernet/freescale/dpaa2/dprtc.h62
-rw-r--r--drivers/net/ethernet/freescale/enetc/Kconfig10
-rw-r--r--drivers/net/ethernet/freescale/enetc/enetc.c216
-rw-r--r--drivers/net/ethernet/freescale/enetc/enetc.h18
-rw-r--r--drivers/net/ethernet/freescale/enetc/enetc_ethtool.c31
-rw-r--r--drivers/net/ethernet/freescale/enetc/enetc_hw.h25
-rw-r--r--drivers/net/ethernet/freescale/enetc/enetc_pf.c2
-rw-r--r--drivers/net/ethernet/freescale/enetc/enetc_ptp.c5
-rw-r--r--drivers/net/ethernet/freescale/enetc/enetc_vf.c2
-rw-r--r--drivers/net/ethernet/freescale/fec_main.c16
-rw-r--r--drivers/net/ethernet/freescale/fec_ptp.c2
-rw-r--r--drivers/net/ethernet/freescale/fman/fman_keygen.c3
-rw-r--r--drivers/net/ethernet/google/Kconfig27
-rw-r--r--drivers/net/ethernet/google/Makefile5
-rw-r--r--drivers/net/ethernet/google/gve/Makefile4
-rw-r--r--drivers/net/ethernet/google/gve/gve.h459
-rw-r--r--drivers/net/ethernet/google/gve/gve_adminq.c387
-rw-r--r--drivers/net/ethernet/google/gve/gve_adminq.h217
-rw-r--r--drivers/net/ethernet/google/gve/gve_desc.h113
-rw-r--r--drivers/net/ethernet/google/gve/gve_ethtool.c245
-rw-r--r--drivers/net/ethernet/google/gve/gve_main.c1232
-rw-r--r--drivers/net/ethernet/google/gve/gve_register.h27
-rw-r--r--drivers/net/ethernet/google/gve/gve_rx.c446
-rw-r--r--drivers/net/ethernet/google/gve/gve_tx.c584
-rw-r--r--drivers/net/ethernet/hisilicon/Kconfig10
-rw-r--r--drivers/net/ethernet/hisilicon/hip04_eth.c142
-rw-r--r--drivers/net/ethernet/hisilicon/hns/hns_enet.c1
-rw-r--r--drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h2
-rw-r--r--drivers/net/ethernet/hisilicon/hns3/hnae3.c26
-rw-r--r--drivers/net/ethernet/hisilicon/hns3/hnae3.h27
-rw-r--r--drivers/net/ethernet/hisilicon/hns3/hns3_dcbnl.c12
-rw-r--r--drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c6
-rw-r--r--drivers/net/ethernet/hisilicon/hns3/hns3_enet.c455
-rw-r--r--drivers/net/ethernet/hisilicon/hns3/hns3_enet.h27
-rw-r--r--drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c60
-rw-r--r--drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c70
-rw-r--r--drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h43
-rw-r--r--drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c2
-rw-r--r--drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c95
-rw-r--r--drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c799
-rw-r--r--drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.h21
-rw-r--r--drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c1348
-rw-r--r--drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h62
-rw-r--r--drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c32
-rw-r--r--drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c15
-rw-r--r--drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c170
-rw-r--r--drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h3
-rw-r--r--drivers/net/ethernet/hisilicon/hns3/hns3vf/Makefile2
-rw-r--r--drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c59
-rw-r--r--drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.h14
-rw-r--r--drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c286
-rw-r--r--drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h9
-rw-r--r--drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c3
-rw-r--r--drivers/net/ethernet/huawei/hinic/Makefile2
-rw-r--r--drivers/net/ethernet/huawei/hinic/hinic_dev.h28
-rw-r--r--drivers/net/ethernet/huawei/hinic/hinic_ethtool.c762
-rw-r--r--drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c12
-rw-r--r--drivers/net/ethernet/huawei/hinic/hinic_hw_dev.h56
-rw-r--r--drivers/net/ethernet/huawei/hinic/hinic_hw_io.c60
-rw-r--r--drivers/net/ethernet/huawei/hinic/hinic_hw_qp_ctxt.h5
-rw-r--r--drivers/net/ethernet/huawei/hinic/hinic_hw_wqe.h53
-rw-r--r--drivers/net/ethernet/huawei/hinic/hinic_main.c339
-rw-r--r--drivers/net/ethernet/huawei/hinic/hinic_port.c638
-rw-r--r--drivers/net/ethernet/huawei/hinic/hinic_port.h371
-rw-r--r--drivers/net/ethernet/huawei/hinic/hinic_rx.c82
-rw-r--r--drivers/net/ethernet/huawei/hinic/hinic_rx.h7
-rw-r--r--drivers/net/ethernet/huawei/hinic/hinic_tx.c25
-rw-r--r--drivers/net/ethernet/huawei/hinic/hinic_tx.h1
-rw-r--r--drivers/net/ethernet/intel/e1000/e1000_main.c6
-rw-r--r--drivers/net/ethernet/intel/e1000e/80003es2lan.c2
-rw-r--r--drivers/net/ethernet/intel/e1000e/82571.c2
-rw-r--r--drivers/net/ethernet/intel/e1000e/defines.h3
-rw-r--r--drivers/net/ethernet/intel/e1000e/e1000.h5
-rw-r--r--drivers/net/ethernet/intel/e1000e/ethtool.c14
-rw-r--r--drivers/net/ethernet/intel/e1000e/ich8lan.c20
-rw-r--r--drivers/net/ethernet/intel/e1000e/mac.c2
-rw-r--r--drivers/net/ethernet/intel/e1000e/netdev.c111
-rw-r--r--drivers/net/ethernet/intel/e1000e/nvm.c2
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e.h32
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_adminq.c8
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_common.c43
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_debugfs.c9
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_ethtool.c86
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_main.c672
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_prototype.h4
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_ptp.c3
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_txrx.c2
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c118
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_xsk.c13
-rw-r--r--drivers/net/ethernet/intel/iavf/Makefile2
-rw-r--r--drivers/net/ethernet/intel/iavf/i40e_adminq_cmd.h530
-rw-r--r--drivers/net/ethernet/intel/iavf/iavf.h13
-rw-r--r--drivers/net/ethernet/intel/iavf/iavf_adminq.c (renamed from drivers/net/ethernet/intel/iavf/i40e_adminq.c)267
-rw-r--r--drivers/net/ethernet/intel/iavf/iavf_adminq.h (renamed from drivers/net/ethernet/intel/iavf/i40e_adminq.h)80
-rw-r--r--drivers/net/ethernet/intel/iavf/iavf_adminq_cmd.h528
-rw-r--r--drivers/net/ethernet/intel/iavf/iavf_alloc.h17
-rw-r--r--drivers/net/ethernet/intel/iavf/iavf_client.c127
-rw-r--r--drivers/net/ethernet/intel/iavf/iavf_client.h104
-rw-r--r--drivers/net/ethernet/intel/iavf/iavf_common.c499
-rw-r--r--drivers/net/ethernet/intel/iavf/iavf_ethtool.c16
-rw-r--r--drivers/net/ethernet/intel/iavf/iavf_main.c868
-rw-r--r--drivers/net/ethernet/intel/iavf/iavf_osdep.h11
-rw-r--r--drivers/net/ethernet/intel/iavf/iavf_prototype.h58
-rw-r--r--drivers/net/ethernet/intel/iavf/iavf_status.h136
-rw-r--r--drivers/net/ethernet/intel/iavf/iavf_trace.h4
-rw-r--r--drivers/net/ethernet/intel/iavf/iavf_txrx.c41
-rw-r--r--drivers/net/ethernet/intel/iavf/iavf_type.h4
-rw-r--r--drivers/net/ethernet/intel/iavf/iavf_virtchnl.c77
-rw-r--r--drivers/net/ethernet/intel/ice/ice.h63
-rw-r--r--drivers/net/ethernet/intel/ice/ice_adminq_cmd.h49
-rw-r--r--drivers/net/ethernet/intel/ice/ice_common.c250
-rw-r--r--drivers/net/ethernet/intel/ice/ice_common.h11
-rw-r--r--drivers/net/ethernet/intel/ice/ice_controlq.c2
-rw-r--r--drivers/net/ethernet/intel/ice/ice_controlq.h2
-rw-r--r--drivers/net/ethernet/intel/ice/ice_dcb.c35
-rw-r--r--drivers/net/ethernet/intel/ice/ice_dcb.h12
-rw-r--r--drivers/net/ethernet/intel/ice/ice_dcb_lib.c230
-rw-r--r--drivers/net/ethernet/intel/ice/ice_dcb_lib.h5
-rw-r--r--drivers/net/ethernet/intel/ice/ice_ethtool.c1027
-rw-r--r--drivers/net/ethernet/intel/ice/ice_hw_autogen.h4
-rw-r--r--drivers/net/ethernet/intel/ice/ice_lib.c477
-rw-r--r--drivers/net/ethernet/intel/ice/ice_lib.h14
-rw-r--r--drivers/net/ethernet/intel/ice/ice_main.c362
-rw-r--r--drivers/net/ethernet/intel/ice/ice_nvm.c35
-rw-r--r--drivers/net/ethernet/intel/ice/ice_sched.c4
-rw-r--r--drivers/net/ethernet/intel/ice/ice_status.h1
-rw-r--r--drivers/net/ethernet/intel/ice/ice_switch.c9
-rw-r--r--drivers/net/ethernet/intel/ice/ice_switch.h7
-rw-r--r--drivers/net/ethernet/intel/ice/ice_txrx.c16
-rw-r--r--drivers/net/ethernet/intel/ice/ice_txrx.h35
-rw-r--r--drivers/net/ethernet/intel/ice/ice_type.h13
-rw-r--r--drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c301
-rw-r--r--drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h33
-rw-r--r--drivers/net/ethernet/intel/igb/e1000_82575.c2
-rw-r--r--drivers/net/ethernet/intel/igb/e1000_regs.h2
-rw-r--r--drivers/net/ethernet/intel/igb/igb_ethtool.c75
-rw-r--r--drivers/net/ethernet/intel/igb/igb_main.c47
-rw-r--r--drivers/net/ethernet/intel/igc/igc_base.c49
-rw-r--r--drivers/net/ethernet/intel/igc/igc_defines.h18
-rw-r--r--drivers/net/ethernet/intel/igc/igc_hw.h3
-rw-r--r--drivers/net/ethernet/intel/igc/igc_mac.c23
-rw-r--r--drivers/net/ethernet/intel/igc/igc_main.c22
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe.h14
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c3
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c3
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_main.c36
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h1
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c181
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c2
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_type.h14
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c97
-rw-r--r--drivers/net/ethernet/intel/ixgbevf/ethtool.c10
-rw-r--r--drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c3
-rw-r--r--drivers/net/ethernet/intel/ixgbevf/vf.c5
-rw-r--r--drivers/net/ethernet/marvell/mvmdio.c11
-rw-r--r--drivers/net/ethernet/marvell/mvneta.c38
-rw-r--r--drivers/net/ethernet/marvell/mvneta_bm.c4
-rw-r--r--drivers/net/ethernet/marvell/mvpp2/mvpp2.h39
-rw-r--r--drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c400
-rw-r--r--drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h43
-rw-r--r--drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c244
-rw-r--r--drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c3
-rw-r--r--drivers/net/ethernet/mediatek/Makefile3
-rw-r--r--drivers/net/ethernet/mediatek/mtk_eth_path.c352
-rw-r--r--drivers/net/ethernet/mediatek/mtk_eth_soc.c138
-rw-r--r--drivers/net/ethernet/mediatek/mtk_eth_soc.h199
-rw-r--r--drivers/net/ethernet/mediatek/mtk_sgmii.c105
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/Kconfig53
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/Makefile24
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/accel/ipsec.c9
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/accel/ipsec.h7
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/accel/tls.c45
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/accel/tls.h51
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/cmd.c4
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/cq.c21
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/dev.c9
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/devlink.c118
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/devlink.h14
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/diag/crdump.c115
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/diag/fs_tracepoint.h4
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c139
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.h20
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/ecpf.c27
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/ecpf.h4
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en.h285
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en/params.c108
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en/params.h118
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c293
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.h43
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_geneve.c335
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_gre.c95
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_vxlan.c151
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h208
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c231
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h37
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en/xsk/Makefile1
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c192
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h27
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c223
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.h25
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en/xsk/tx.c111
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en/xsk/tx.h15
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en/xsk/umem.c267
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en/xsk/umem.h31
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h1
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.h1
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.c93
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h97
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c460
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls.c17
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls.h11
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls_rxtx.c7
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls_rxtx.h1
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c2
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_dim.c14
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c66
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c20
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_main.c845
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_rep.c323
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_rep.h8
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_rx.c132
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c2
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_stats.c143
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_stats.h44
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_tc.c139
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_tc.h9
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_tx.c105
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c54
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/eq.c507
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/eswitch.c233
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/eswitch.h114
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c786
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c277
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/events.c4
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/fpga/conn.c8
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/fpga/ipsec.c8
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/fpga/ipsec.h75
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c13
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/fs_core.c76
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/fs_core.h1
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c10
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/fw.c237
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/health.c569
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/ipoib/ethtool.c9
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c31
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.h2
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib_vlan.c5
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/lag.c4
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c33
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/lib/crypto.c72
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/lib/eq.h14
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/lib/geneve.c157
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/lib/geneve.h33
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h8
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/lib/mpfs.c33
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c316
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.h32
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/main.c114
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h26
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/mr.c27
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c334
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/rdma.c6
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/sriov.c52
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/vport.c43
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/wq.h5
-rw-r--r--drivers/net/ethernet/mellanox/mlxfw/mlxfw.h11
-rw-r--r--drivers/net/ethernet/mellanox/mlxfw/mlxfw_fsm.c57
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/Kconfig2
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/Makefile1
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/cmd.h12
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/core.c57
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/core.h30
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.c18
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.h22
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/core_env.c27
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/core_hwmon.c143
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/core_thermal.c248
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/i2c.c76
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/minimal.c18
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/pci.c49
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/pci_hw.h3
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/reg.h522
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/spectrum.c584
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/spectrum.h35
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c9
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_flex_keys.c10
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c80
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/spectrum_ptp.c1111
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/spectrum_ptp.h186
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c273
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/switchx2.c2
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/trap.h6
-rw-r--r--drivers/net/ethernet/mscc/Makefile2
-rw-r--r--drivers/net/ethernet/mscc/ocelot.c26
-rw-r--r--drivers/net/ethernet/mscc/ocelot.h11
-rw-r--r--drivers/net/ethernet/mscc/ocelot_ace.c782
-rw-r--r--drivers/net/ethernet/mscc/ocelot_ace.h232
-rw-r--r--drivers/net/ethernet/mscc/ocelot_board.c1
-rw-r--r--drivers/net/ethernet/mscc/ocelot_flower.c363
-rw-r--r--drivers/net/ethernet/mscc/ocelot_police.c227
-rw-r--r--drivers/net/ethernet/mscc/ocelot_police.h22
-rw-r--r--drivers/net/ethernet/mscc/ocelot_regs.c11
-rw-r--r--drivers/net/ethernet/mscc/ocelot_s2.h64
-rw-r--r--drivers/net/ethernet/mscc/ocelot_tc.c197
-rw-r--r--drivers/net/ethernet/mscc/ocelot_tc.h22
-rw-r--r--drivers/net/ethernet/mscc/ocelot_vcap.h403
-rw-r--r--drivers/net/ethernet/netronome/Kconfig1
-rw-r--r--drivers/net/ethernet/netronome/nfp/Makefile6
-rw-r--r--drivers/net/ethernet/netronome/nfp/abm/cls.c22
-rw-r--r--drivers/net/ethernet/netronome/nfp/abm/main.h2
-rw-r--r--drivers/net/ethernet/netronome/nfp/bpf/jit.c115
-rw-r--r--drivers/net/ethernet/netronome/nfp/bpf/main.c30
-rw-r--r--drivers/net/ethernet/netronome/nfp/bpf/main.h2
-rw-r--r--drivers/net/ethernet/netronome/nfp/bpf/verifier.c12
-rw-r--r--drivers/net/ethernet/netronome/nfp/ccm.c3
-rw-r--r--drivers/net/ethernet/netronome/nfp/ccm.h60
-rw-r--r--drivers/net/ethernet/netronome/nfp/ccm_mbox.c743
-rw-r--r--drivers/net/ethernet/netronome/nfp/crypto/crypto.h27
-rw-r--r--drivers/net/ethernet/netronome/nfp/crypto/fw.h84
-rw-r--r--drivers/net/ethernet/netronome/nfp/crypto/tls.c522
-rw-r--r--drivers/net/ethernet/netronome/nfp/flower/action.c260
-rw-r--r--drivers/net/ethernet/netronome/nfp/flower/cmsg.h57
-rw-r--r--drivers/net/ethernet/netronome/nfp/flower/lag_conf.c4
-rw-r--r--drivers/net/ethernet/netronome/nfp/flower/main.h18
-rw-r--r--drivers/net/ethernet/netronome/nfp/flower/match.c149
-rw-r--r--drivers/net/ethernet/netronome/nfp/flower/metadata.c30
-rw-r--r--drivers/net/ethernet/netronome/nfp/flower/offload.c339
-rw-r--r--drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c3
-rw-r--r--drivers/net/ethernet/netronome/nfp/nfp_main.c4
-rw-r--r--drivers/net/ethernet/netronome/nfp/nfp_net.h73
-rw-r--r--drivers/net/ethernet/netronome/nfp/nfp_net_common.c212
-rw-r--r--drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.c15
-rw-r--r--drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h21
-rw-r--r--drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c26
-rw-r--r--drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp.c7
-rw-r--r--drivers/net/ethernet/ni/nixge.c2
-rw-r--r--drivers/net/ethernet/pasemi/pasemi_mac.c2
-rw-r--r--drivers/net/ethernet/qlogic/Kconfig1
-rw-r--r--drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c8
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed.h24
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_cxt.c5
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_debug.c2
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_dev.c1276
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_dev_api.h113
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_fcoe.c26
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_hsi.h16
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_hw.c44
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_init_ops.c9
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_int.c8
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_iscsi.c35
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_iwarp.c67
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_iwarp.h4
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_l2.c4
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_ll2.c406
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_main.c157
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_mcp.c65
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_mcp.h16
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_ptp.c11
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_rdma.c75
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_reg_addr.h6
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_sp_commands.c2
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_sriov.c3
-rw-r--r--drivers/net/ethernet/qlogic/qede/qede.h4
-rw-r--r--drivers/net/ethernet/qlogic/qede/qede_ethtool.c1
-rw-r--r--drivers/net/ethernet/qlogic/qede/qede_filter.c2
-rw-r--r--drivers/net/ethernet/qlogic/qede/qede_main.c42
-rw-r--r--drivers/net/ethernet/qlogic/qede/qede_ptp.c37
-rw-r--r--drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c5
-rw-r--r--drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_pf.c2
-rw-r--r--drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h25
-rw-r--r--drivers/net/ethernet/realtek/Makefile1
-rw-r--r--drivers/net/ethernet/realtek/r8169_firmware.c231
-rw-r--r--drivers/net/ethernet/realtek/r8169_firmware.h39
-rw-r--r--drivers/net/ethernet/realtek/r8169_main.c (renamed from drivers/net/ethernet/realtek/r8169.c)1212
-rw-r--r--drivers/net/ethernet/rocker/rocker_main.c4
-rw-r--r--drivers/net/ethernet/rocker/rocker_ofdpa.c25
-rw-r--r--drivers/net/ethernet/sfc/efx.c6
-rw-r--r--drivers/net/ethernet/sis/sis900.c24
-rw-r--r--drivers/net/ethernet/socionext/Kconfig1
-rw-r--r--drivers/net/ethernet/socionext/netsec.c577
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/Kconfig16
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/Makefile2
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/common.h20
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac-mediatek.c8
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c118
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c42
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac1000.h1
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c22
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac1000_dma.c8
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac100_core.c13
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac100_dma.c8
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac4.h7
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c86
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac4_descs.c13
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c9
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c4
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwxgmac2.h20
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c29
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwxgmac2_descs.c4
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c41
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/hwif.c9
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/hwif.h25
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/mmc.h4
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/mmc_core.c13
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/stmmac.h41
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c96
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/stmmac_main.c816
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c104
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c1
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c26
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c850
-rw-r--r--drivers/net/ethernet/sun/niu.c2
-rw-r--r--drivers/net/ethernet/ti/Kconfig2
-rw-r--r--drivers/net/ethernet/ti/cpsw.c561
-rw-r--r--drivers/net/ethernet/ti/cpsw_ethtool.c97
-rw-r--r--drivers/net/ethernet/ti/cpsw_priv.h8
-rw-r--r--drivers/net/ethernet/ti/cpts.c88
-rw-r--r--drivers/net/ethernet/ti/cpts.h2
-rw-r--r--drivers/net/ethernet/ti/davinci_cpdma.c187
-rw-r--r--drivers/net/ethernet/ti/davinci_cpdma.h9
-rw-r--r--drivers/net/ethernet/ti/davinci_emac.c4
-rw-r--r--drivers/net/ethernet/ti/netcp_ethss.c9
-rw-r--r--drivers/net/ethernet/toshiba/ps3_gelic_net.h2
-rw-r--r--drivers/net/ethernet/via/via-velocity.h2
-rw-r--r--drivers/net/ethernet/wiznet/w5100-spi.c24
-rw-r--r--drivers/net/ethernet/xilinx/Kconfig6
-rw-r--r--drivers/net/ethernet/xilinx/ll_temac.h5
-rw-r--r--drivers/net/ethernet/xilinx/ll_temac_main.c258
-rw-r--r--drivers/net/ethernet/xilinx/ll_temac_mdio.c20
-rw-r--r--drivers/net/ethernet/xilinx/xilinx_axienet.h35
-rw-r--r--drivers/net/ethernet/xilinx/xilinx_axienet_main.c678
-rw-r--r--drivers/net/ethernet/xilinx/xilinx_axienet_mdio.c111
-rw-r--r--drivers/net/fddi/skfp/drvfbi.c3
-rw-r--r--drivers/net/fddi/skfp/h/skfbi.h231
-rw-r--r--drivers/net/fjes/fjes_debugfs.c15
-rw-r--r--drivers/net/gtp.c37
-rw-r--r--drivers/net/loopback.c78
-rw-r--r--drivers/net/macsec.c6
-rw-r--r--drivers/net/macvlan.c2
-rw-r--r--drivers/net/netdevsim/dev.c44
-rw-r--r--drivers/net/netdevsim/netdev.c29
-rw-r--r--drivers/net/netdevsim/netdevsim.h1
-rw-r--r--drivers/net/phy/Kconfig6
-rw-r--r--drivers/net/phy/Makefile1
-rw-r--r--drivers/net/phy/aquantia_main.c8
-rw-r--r--drivers/net/phy/bcm87xx.c20
-rw-r--r--drivers/net/phy/broadcom.c2
-rw-r--r--drivers/net/phy/dp83867.c193
-rw-r--r--drivers/net/phy/lxt.c6
-rw-r--r--drivers/net/phy/nxp-tja11xx.c403
-rw-r--r--drivers/net/phy/phy-core.c4
-rw-r--r--drivers/net/phy/phy.c128
-rw-r--r--drivers/net/phy/phy_device.c109
-rw-r--r--drivers/net/phy/phylink.c288
-rw-r--r--drivers/net/phy/sfp-bus.c14
-rw-r--r--drivers/net/phy/sfp.c72
-rw-r--r--drivers/net/plip/plip.c4
-rw-r--r--drivers/net/tap.c5
-rw-r--r--drivers/net/team/team.c25
-rw-r--r--drivers/net/tun.c8
-rw-r--r--drivers/net/usb/asix_devices.c6
-rw-r--r--drivers/net/usb/r8152.c101
-rw-r--r--drivers/net/veth.c61
-rw-r--r--drivers/net/virtio_net.c2
-rw-r--r--drivers/net/vmxnet3/vmxnet3_drv.c20
-rw-r--r--drivers/net/vmxnet3/vmxnet3_ethtool.c10
-rw-r--r--drivers/net/vmxnet3/vmxnet3_int.h7
-rw-r--r--drivers/net/vrf.c5
-rw-r--r--drivers/net/vxlan.c131
-rw-r--r--drivers/net/wan/hdlc_cisco.c11
-rw-r--r--drivers/net/wan/x25_asy.c4
-rw-r--r--drivers/net/wireless/ath/Kconfig2
-rw-r--r--drivers/net/wireless/ath/Makefile2
-rw-r--r--drivers/net/wireless/ath/ar5523/Kconfig2
-rw-r--r--drivers/net/wireless/ath/ar5523/Makefile2
-rw-r--r--drivers/net/wireless/ath/ath10k/Kconfig2
-rw-r--r--drivers/net/wireless/ath/ath10k/ahb.c2
-rw-r--r--drivers/net/wireless/ath/ath10k/core.c80
-rw-r--r--drivers/net/wireless/ath/ath10k/core.h27
-rw-r--r--drivers/net/wireless/ath/ath10k/coredump.c4
-rw-r--r--drivers/net/wireless/ath/ath10k/debug.c58
-rw-r--r--drivers/net/wireless/ath/ath10k/debug.h25
-rw-r--r--drivers/net/wireless/ath/ath10k/debugfs_sta.c7
-rw-r--r--drivers/net/wireless/ath/ath10k/hif.h15
-rw-r--r--drivers/net/wireless/ath/ath10k/htc.c1
-rw-r--r--drivers/net/wireless/ath/ath10k/htt.c2
-rw-r--r--drivers/net/wireless/ath/ath10k/htt.h76
-rw-r--r--drivers/net/wireless/ath/ath10k/htt_rx.c401
-rw-r--r--drivers/net/wireless/ath/ath10k/htt_tx.c38
-rw-r--r--drivers/net/wireless/ath/ath10k/hw.c6
-rw-r--r--drivers/net/wireless/ath/ath10k/hw.h13
-rw-r--r--drivers/net/wireless/ath/ath10k/mac.c223
-rw-r--r--drivers/net/wireless/ath/ath10k/pci.c27
-rw-r--r--drivers/net/wireless/ath/ath10k/qmi.c61
-rw-r--r--drivers/net/wireless/ath/ath10k/qmi.h1
-rw-r--r--drivers/net/wireless/ath/ath10k/sdio.c35
-rw-r--r--drivers/net/wireless/ath/ath10k/snoc.c19
-rw-r--r--drivers/net/wireless/ath/ath10k/swap.c4
-rw-r--r--drivers/net/wireless/ath/ath10k/testmode.c17
-rw-r--r--drivers/net/wireless/ath/ath10k/trace.c1
-rw-r--r--drivers/net/wireless/ath/ath10k/trace.h6
-rw-r--r--drivers/net/wireless/ath/ath10k/txrx.c3
-rw-r--r--drivers/net/wireless/ath/ath10k/usb.c4
-rw-r--r--drivers/net/wireless/ath/ath10k/wmi-tlv.c61
-rw-r--r--drivers/net/wireless/ath/ath10k/wmi-tlv.h20
-rw-r--r--drivers/net/wireless/ath/ath10k/wmi.c37
-rw-r--r--drivers/net/wireless/ath/ath10k/wmi.h23
-rw-r--r--drivers/net/wireless/ath/ath5k/Kconfig2
-rw-r--r--drivers/net/wireless/ath/ath5k/Makefile2
-rw-r--r--drivers/net/wireless/ath/ath6kl/Kconfig2
-rw-r--r--drivers/net/wireless/ath/ath6kl/cfg80211.c4
-rw-r--r--drivers/net/wireless/ath/ath6kl/debug.c3
-rw-r--r--drivers/net/wireless/ath/ath6kl/htc_pipe.c3
-rw-r--r--drivers/net/wireless/ath/ath6kl/trace.h2
-rw-r--r--drivers/net/wireless/ath/ath6kl/wmi.c13
-rw-r--r--drivers/net/wireless/ath/ath9k/Kconfig2
-rw-r--r--drivers/net/wireless/ath/ath9k/Makefile2
-rw-r--r--drivers/net/wireless/ath/ath9k/ar9003_phy.c24
-rw-r--r--drivers/net/wireless/ath/ath9k/eeprom.c2
-rw-r--r--drivers/net/wireless/ath/ath9k/eeprom_4k.c1
-rw-r--r--drivers/net/wireless/ath/ath9k/hw.c40
-rw-r--r--drivers/net/wireless/ath/ath9k/hw.h1
-rw-r--r--drivers/net/wireless/ath/ath9k/init.c2
-rw-r--r--drivers/net/wireless/ath/ath9k/recv.c6
-rw-r--r--drivers/net/wireless/ath/ath9k/xmit.c18
-rw-r--r--drivers/net/wireless/ath/carl9170/mac.c2
-rw-r--r--drivers/net/wireless/ath/carl9170/main.c9
-rw-r--r--drivers/net/wireless/ath/carl9170/rx.c2
-rw-r--r--drivers/net/wireless/ath/carl9170/usb.c39
-rw-r--r--drivers/net/wireless/ath/dfs_pattern_detector.c2
-rw-r--r--drivers/net/wireless/ath/regd.h1
-rw-r--r--drivers/net/wireless/ath/wcn36xx/Kconfig2
-rw-r--r--drivers/net/wireless/ath/wcn36xx/Makefile2
-rw-r--r--drivers/net/wireless/ath/wil6210/Kconfig2
-rw-r--r--drivers/net/wireless/ath/wil6210/Makefile2
-rw-r--r--drivers/net/wireless/ath/wil6210/cfg80211.c26
-rw-r--r--drivers/net/wireless/ath/wil6210/debugfs.c238
-rw-r--r--drivers/net/wireless/ath/wil6210/fw.h11
-rw-r--r--drivers/net/wireless/ath/wil6210/fw_inc.c148
-rw-r--r--drivers/net/wireless/ath/wil6210/interrupt.c67
-rw-r--r--drivers/net/wireless/ath/wil6210/main.c37
-rw-r--r--drivers/net/wireless/ath/wil6210/pcie_bus.c3
-rw-r--r--drivers/net/wireless/ath/wil6210/rx_reorder.c33
-rw-r--r--drivers/net/wireless/ath/wil6210/txrx.c35
-rw-r--r--drivers/net/wireless/ath/wil6210/txrx_edma.c26
-rw-r--r--drivers/net/wireless/ath/wil6210/txrx_edma.h2
-rw-r--r--drivers/net/wireless/ath/wil6210/wil6210.h39
-rw-r--r--drivers/net/wireless/ath/wil6210/wmi.c141
-rw-r--r--drivers/net/wireless/ath/wil6210/wmi.h47
-rw-r--r--drivers/net/wireless/broadcom/b43/dma.c69
-rw-r--r--drivers/net/wireless/broadcom/b43/main.c7
-rw-r--r--drivers/net/wireless/broadcom/b43legacy/dma.c57
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/Kconfig52
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/Makefile14
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/Kconfig50
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/Makefile14
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcdc.c13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcdc.h13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/btcoex.c13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/btcoex.h13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/bus.h13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.h13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/chip.c13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/chip.h13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c15
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.h16
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/commonring.c16
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/commonring.h16
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.h13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/debug.c13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/debug.h13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/dmi.c13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/feature.c13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/feature.h13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.h13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/flowring.c16
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/flowring.h16
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/fweh.c13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/fweh.h13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil.c15
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil.h13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil_types.h13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.c13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.h14
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.c16
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.h16
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.c13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.h13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.c13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.h13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c16
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.h16
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/pno.c13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/pno.h13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/proto.c13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/proto.h13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.h13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/tracepoint.c13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/tracepoint.h13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.c13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.h13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/vendor.c13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/vendor.h13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_cmn.c13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_hal.h13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_int.h13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.c13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.h13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_n.c13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_qmath.c13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_qmath.h13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_radio.h13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phyreg_n.h13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phytbl_lcn.c13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phytbl_lcn.h13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phytbl_n.c13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phytbl_n.h13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmutil/Makefile13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmutil/utils.c13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/include/brcm_hw_ids.h13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/include/brcmu_d11.h13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/include/brcmu_utils.h13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/include/brcmu_wifi.h13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/include/chipcommon.h13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/include/defs.h13
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/include/soc.h13
-rw-r--r--drivers/net/wireless/cisco/Kconfig2
-rw-r--r--drivers/net/wireless/cisco/airo.c57
-rw-r--r--drivers/net/wireless/intel/iwlegacy/3945-rs.c17
-rw-r--r--drivers/net/wireless/intel/iwlegacy/3945.h3
-rw-r--r--drivers/net/wireless/intel/iwlegacy/4965-rs.c35
-rw-r--r--drivers/net/wireless/intel/iwlegacy/common.h4
-rw-r--r--drivers/net/wireless/intel/iwlwifi/cfg/22000.c144
-rw-r--r--drivers/net/wireless/intel/iwlwifi/dvm/lib.c3
-rw-r--r--drivers/net/wireless/intel/iwlwifi/dvm/rs.c4
-rw-r--r--drivers/net/wireless/intel/iwlwifi/fw/acpi.c28
-rw-r--r--drivers/net/wireless/intel/iwlwifi/fw/acpi.h5
-rw-r--r--drivers/net/wireless/intel/iwlwifi/fw/api/dbg-tlv.h22
-rw-r--r--drivers/net/wireless/intel/iwlwifi/fw/api/location.h11
-rw-r--r--drivers/net/wireless/intel/iwlwifi/fw/api/power.h12
-rw-r--r--drivers/net/wireless/intel/iwlwifi/fw/api/scan.h15
-rw-r--r--drivers/net/wireless/intel/iwlwifi/fw/dbg.c427
-rw-r--r--drivers/net/wireless/intel/iwlwifi/fw/dbg.h133
-rw-r--r--drivers/net/wireless/intel/iwlwifi/fw/error-dump.h111
-rw-r--r--drivers/net/wireless/intel/iwlwifi/fw/file.h17
-rw-r--r--drivers/net/wireless/intel/iwlwifi/fw/init.c7
-rw-r--r--drivers/net/wireless/intel/iwlwifi/fw/runtime.h28
-rw-r--r--drivers/net/wireless/intel/iwlwifi/fw/smem.c12
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-config.h14
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-csr.h1
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c33
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-drv.c35
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-trans.h75
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/constants.h1
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/d3.c14
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c66
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/fw.c72
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c16
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c66
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/mvm.h12
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/nvm.c9
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/ops.c26
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c25
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/rs.c4
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/scan.c12
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/sta.h4
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/tx.c16
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/utils.c20
-rw-r--r--drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c10
-rw-r--r--drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c2
-rw-r--r--drivers/net/wireless/intel/iwlwifi/pcie/drv.c241
-rw-r--r--drivers/net/wireless/intel/iwlwifi/pcie/internal.h29
-rw-r--r--drivers/net/wireless/intel/iwlwifi/pcie/rx.c68
-rw-r--r--drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c11
-rw-r--r--drivers/net/wireless/intel/iwlwifi/pcie/trans.c204
-rw-r--r--drivers/net/wireless/intersil/p54/main.c9
-rw-r--r--drivers/net/wireless/intersil/p54/p54usb.c43
-rw-r--r--drivers/net/wireless/intersil/p54/txrx.c11
-rw-r--r--drivers/net/wireless/mac80211_hwsim.c2
-rw-r--r--drivers/net/wireless/marvell/libertas/if_usb.c2
-rw-r--r--drivers/net/wireless/marvell/libertas_tf/if_usb.c2
-rw-r--r--drivers/net/wireless/marvell/mwifiex/11n.c53
-rw-r--r--drivers/net/wireless/marvell/mwifiex/11n.h5
-rw-r--r--drivers/net/wireless/marvell/mwifiex/11n_aggr.c26
-rw-r--r--drivers/net/wireless/marvell/mwifiex/11n_aggr.h2
-rw-r--r--drivers/net/wireless/marvell/mwifiex/11n_rxreorder.c125
-rw-r--r--drivers/net/wireless/marvell/mwifiex/cfg80211.c37
-rw-r--r--drivers/net/wireless/marvell/mwifiex/cmdevt.c103
-rw-r--r--drivers/net/wireless/marvell/mwifiex/fw.h12
-rw-r--r--drivers/net/wireless/marvell/mwifiex/init.c32
-rw-r--r--drivers/net/wireless/marvell/mwifiex/main.c35
-rw-r--r--drivers/net/wireless/marvell/mwifiex/main.h2
-rw-r--r--drivers/net/wireless/marvell/mwifiex/pcie.c5
-rw-r--r--drivers/net/wireless/marvell/mwifiex/scan.c76
-rw-r--r--drivers/net/wireless/marvell/mwifiex/sta_cmdresp.c5
-rw-r--r--drivers/net/wireless/marvell/mwifiex/sta_event.c10
-rw-r--r--drivers/net/wireless/marvell/mwifiex/sta_ioctl.c4
-rw-r--r--drivers/net/wireless/marvell/mwifiex/tdls.c68
-rw-r--r--drivers/net/wireless/marvell/mwifiex/txrx.c5
-rw-r--r--drivers/net/wireless/marvell/mwifiex/uap_txrx.c10
-rw-r--r--drivers/net/wireless/marvell/mwifiex/usb.c10
-rw-r--r--drivers/net/wireless/marvell/mwifiex/util.c15
-rw-r--r--drivers/net/wireless/marvell/mwifiex/wmm.c111
-rw-r--r--drivers/net/wireless/mediatek/mt76/dma.c1
-rw-r--r--drivers/net/wireless/mediatek/mt76/mac80211.c62
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt76.h24
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7603/core.c2
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7603/debugfs.c30
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7603/dma.c29
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7603/eeprom.h2
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7603/init.c26
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7603/mac.c191
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7603/main.c8
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7603/mcu.c2
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7603/mt7603.h15
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7603/regs.h6
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7615/dma.c23
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7615/eeprom.c97
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7615/eeprom.h61
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7615/init.c77
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7615/mac.c85
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7615/mac.h5
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7615/main.c52
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7615/mcu.c1265
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7615/mcu.h56
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h16
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7615/pci.c7
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt76x0/init.c5
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt76x0/main.c2
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt76x0/phy.c13
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt76x0/usb.c2
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt76x02.h1
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt76x02_beacon.c4
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt76x02_debugfs.c10
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt76x02_dfs.c18
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt76x02_dfs.h2
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt76x02_eeprom.h1
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt76x02_mac.c106
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt76x02_mac.h2
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt76x02_mmio.c18
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt76x02_regs.h3
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt76x02_txrx.c9
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt76x02_usb_core.c11
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt76x2/init.c9
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt76x2/pci_main.c16
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt76x2/pci_phy.c8
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt76x2/usb_init.c2
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt76x2/usb_main.c23
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt76x2/usb_phy.c7
-rw-r--r--drivers/net/wireless/mediatek/mt76/usb.c66
-rw-r--r--drivers/net/wireless/mediatek/mt7601u/dma.c54
-rw-r--r--drivers/net/wireless/mediatek/mt7601u/tx.c4
-rw-r--r--drivers/net/wireless/quantenna/qtnfmac/commands.c5
-rw-r--r--drivers/net/wireless/ralink/rt2x00/rt2800lib.c96
-rw-r--r--drivers/net/wireless/ralink/rt2x00/rt2800lib.h11
-rw-r--r--drivers/net/wireless/ralink/rt2x00/rt2800mmio.c31
-rw-r--r--drivers/net/wireless/ralink/rt2x00/rt2800mmio.h2
-rw-r--r--drivers/net/wireless/ralink/rt2x00/rt2800pci.c3
-rw-r--r--drivers/net/wireless/ralink/rt2x00/rt2800soc.c3
-rw-r--r--drivers/net/wireless/ralink/rt2x00/rt2800usb.c11
-rw-r--r--drivers/net/wireless/ralink/rt2x00/rt2x00.h10
-rw-r--r--drivers/net/wireless/ralink/rt2x00/rt2x00debug.c35
-rw-r--r--drivers/net/wireless/ralink/rt2x00/rt2x00dev.c10
-rw-r--r--drivers/net/wireless/ralink/rt2x00/rt2x00link.c15
-rw-r--r--drivers/net/wireless/ralink/rt2x00/rt2x00queue.h6
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/btcoexist/halbtcoutsrc.c35
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/btcoexist/halbtcoutsrc.h1
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/btcoexist/rtl_btc.c3
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/efuse.c5
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rc.c3
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8188ee/hw.c2
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8192de/dm.c695
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8821ae/dm.c8
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8821ae/trx.c253
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8821ae/trx.h708
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/usb.c5
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/wifi.h1
-rw-r--r--drivers/net/wireless/realtek/rtw88/hci.h2
-rw-r--r--drivers/net/wireless/realtek/rtw88/mac.c8
-rw-r--r--drivers/net/wireless/realtek/rtw88/mac80211.c32
-rw-r--r--drivers/net/wireless/realtek/rtw88/main.c36
-rw-r--r--drivers/net/wireless/realtek/rtw88/main.h38
-rw-r--r--drivers/net/wireless/realtek/rtw88/pci.c10
-rw-r--r--drivers/net/wireless/realtek/rtw88/phy.c1265
-rw-r--r--drivers/net/wireless/realtek/rtw88/phy.h18
-rw-r--r--drivers/net/wireless/realtek/rtw88/regd.c69
-rw-r--r--drivers/net/wireless/realtek/rtw88/regd.h4
-rw-r--r--drivers/net/wireless/realtek/rtw88/rtw8822c.c436
-rw-r--r--drivers/net/wireless/realtek/rtw88/rtw8822c.h23
-rw-r--r--drivers/net/wireless/realtek/rtw88/rtw8822c_table.c799
-rw-r--r--drivers/net/wireless/realtek/rtw88/tx.c2
-rw-r--r--drivers/net/wireless/ti/wl18xx/main.c38
-rw-r--r--drivers/net/xen-netback/interface.c2
-rw-r--r--drivers/nfc/st-nci/i2c.c2
-rw-r--r--drivers/pci/pcie/aspm.c20
-rw-r--r--drivers/ptp/Kconfig2
-rw-r--r--drivers/ptp/ptp_clock.c3
-rw-r--r--drivers/s390/net/qeth_core.h109
-rw-r--r--drivers/s390/net/qeth_core_main.c1013
-rw-r--r--drivers/s390/net/qeth_core_mpc.h51
-rw-r--r--drivers/s390/net/qeth_l2_main.c276
-rw-r--r--drivers/s390/net/qeth_l3_main.c249
-rw-r--r--drivers/scsi/cxgbi/cxgb3i/cxgb3i.c10
-rw-r--r--drivers/scsi/cxgbi/cxgb4i/cxgb4i.c17
-rw-r--r--drivers/scsi/cxgbi/libcxgbi.c15
-rw-r--r--drivers/scsi/cxgbi/libcxgbi.h9
-rw-r--r--drivers/scsi/qedf/qedf_main.c39
-rw-r--r--drivers/scsi/qedi/qedi_main.c34
-rw-r--r--drivers/ssb/driver_gpio.c6
-rw-r--r--drivers/staging/Kconfig2
-rw-r--r--drivers/staging/Makefile1
-rw-r--r--drivers/staging/isdn/Kconfig12
-rw-r--r--drivers/staging/isdn/Makefile8
-rw-r--r--drivers/staging/isdn/TODO22
-rw-r--r--drivers/staging/isdn/avm/Kconfig (renamed from drivers/isdn/hardware/avm/Kconfig)0
-rw-r--r--drivers/staging/isdn/avm/Makefile (renamed from drivers/isdn/hardware/avm/Makefile)0
-rw-r--r--drivers/staging/isdn/avm/avm_cs.c (renamed from drivers/isdn/hardware/avm/avm_cs.c)0
-rw-r--r--drivers/staging/isdn/avm/avmcard.h (renamed from drivers/isdn/hardware/avm/avmcard.h)0
-rw-r--r--drivers/staging/isdn/avm/b1.c (renamed from drivers/isdn/hardware/avm/b1.c)0
-rw-r--r--drivers/staging/isdn/avm/b1dma.c (renamed from drivers/isdn/hardware/avm/b1dma.c)0
-rw-r--r--drivers/staging/isdn/avm/b1isa.c (renamed from drivers/isdn/hardware/avm/b1isa.c)0
-rw-r--r--drivers/staging/isdn/avm/b1pci.c (renamed from drivers/isdn/hardware/avm/b1pci.c)0
-rw-r--r--drivers/staging/isdn/avm/b1pcmcia.c (renamed from drivers/isdn/hardware/avm/b1pcmcia.c)0
-rw-r--r--drivers/staging/isdn/avm/c4.c (renamed from drivers/isdn/hardware/avm/c4.c)0
-rw-r--r--drivers/staging/isdn/avm/t1isa.c (renamed from drivers/isdn/hardware/avm/t1isa.c)0
-rw-r--r--drivers/staging/isdn/avm/t1pci.c (renamed from drivers/isdn/hardware/avm/t1pci.c)0
-rw-r--r--drivers/staging/isdn/gigaset/Kconfig (renamed from drivers/isdn/gigaset/Kconfig)9
-rw-r--r--drivers/staging/isdn/gigaset/Makefile (renamed from drivers/isdn/gigaset/Makefile)10
-rw-r--r--drivers/staging/isdn/gigaset/asyncdata.c (renamed from drivers/isdn/gigaset/asyncdata.c)0
-rw-r--r--drivers/staging/isdn/gigaset/bas-gigaset.c (renamed from drivers/isdn/gigaset/bas-gigaset.c)0
-rw-r--r--drivers/staging/isdn/gigaset/capi.c (renamed from drivers/isdn/gigaset/capi.c)0
-rw-r--r--drivers/staging/isdn/gigaset/common.c (renamed from drivers/isdn/gigaset/common.c)0
-rw-r--r--drivers/staging/isdn/gigaset/dummyll.c (renamed from drivers/isdn/gigaset/dummyll.c)0
-rw-r--r--drivers/staging/isdn/gigaset/ev-layer.c (renamed from drivers/isdn/gigaset/ev-layer.c)0
-rw-r--r--drivers/staging/isdn/gigaset/gigaset.h (renamed from drivers/isdn/gigaset/gigaset.h)0
-rw-r--r--drivers/staging/isdn/gigaset/interface.c (renamed from drivers/isdn/gigaset/interface.c)0
-rw-r--r--drivers/staging/isdn/gigaset/isocdata.c (renamed from drivers/isdn/gigaset/isocdata.c)0
-rw-r--r--drivers/staging/isdn/gigaset/proc.c (renamed from drivers/isdn/gigaset/proc.c)0
-rw-r--r--drivers/staging/isdn/gigaset/ser-gigaset.c (renamed from drivers/isdn/gigaset/ser-gigaset.c)0
-rw-r--r--drivers/staging/isdn/gigaset/usb-gigaset.c (renamed from drivers/isdn/gigaset/usb-gigaset.c)0
-rw-r--r--drivers/staging/isdn/hysdn/Kconfig (renamed from drivers/isdn/hysdn/Kconfig)0
-rw-r--r--drivers/staging/isdn/hysdn/Makefile (renamed from drivers/isdn/hysdn/Makefile)0
-rw-r--r--drivers/staging/isdn/hysdn/boardergo.c (renamed from drivers/isdn/hysdn/boardergo.c)0
-rw-r--r--drivers/staging/isdn/hysdn/boardergo.h (renamed from drivers/isdn/hysdn/boardergo.h)0
-rw-r--r--drivers/staging/isdn/hysdn/hycapi.c (renamed from drivers/isdn/hysdn/hycapi.c)0
-rw-r--r--drivers/staging/isdn/hysdn/hysdn_boot.c (renamed from drivers/isdn/hysdn/hysdn_boot.c)0
-rw-r--r--drivers/staging/isdn/hysdn/hysdn_defs.h (renamed from drivers/isdn/hysdn/hysdn_defs.h)0
-rw-r--r--drivers/staging/isdn/hysdn/hysdn_init.c (renamed from drivers/isdn/hysdn/hysdn_init.c)0
-rw-r--r--drivers/staging/isdn/hysdn/hysdn_net.c (renamed from drivers/isdn/hysdn/hysdn_net.c)6
-rw-r--r--drivers/staging/isdn/hysdn/hysdn_pof.h (renamed from drivers/isdn/hysdn/hysdn_pof.h)0
-rw-r--r--drivers/staging/isdn/hysdn/hysdn_procconf.c (renamed from drivers/isdn/hysdn/hysdn_procconf.c)0
-rw-r--r--drivers/staging/isdn/hysdn/hysdn_proclog.c (renamed from drivers/isdn/hysdn/hysdn_proclog.c)0
-rw-r--r--drivers/staging/isdn/hysdn/hysdn_sched.c (renamed from drivers/isdn/hysdn/hysdn_sched.c)0
-rw-r--r--drivers/staging/isdn/hysdn/ince1pc.h (renamed from drivers/isdn/hysdn/ince1pc.h)0
-rw-r--r--drivers/target/iscsi/cxgbit/cxgbit_ddp.c6
-rw-r--r--drivers/vhost/net.c2
1178 files changed, 57664 insertions, 98766 deletions
diff --git a/drivers/bluetooth/Kconfig b/drivers/bluetooth/Kconfig
index b9c34ff9a0d3..aae665a3a254 100644
--- a/drivers/bluetooth/Kconfig
+++ b/drivers/bluetooth/Kconfig
@@ -52,6 +52,17 @@ config BT_HCIBTUSB_BCM
Say Y here to compile support for Broadcom protocol.
+config BT_HCIBTUSB_MTK
+ bool "MediaTek protocol support"
+ depends on BT_HCIBTUSB
+ default n
+ help
+ The MediaTek protocol support enables firmware download
+ support and chip initialization for MediaTek Bluetooth
+ USB controllers.
+
+ Say Y here to compile support for MediaTek protocol.
+
config BT_HCIBTUSB_RTL
bool "Realtek protocol support"
depends on BT_HCIBTUSB
@@ -237,6 +248,7 @@ config BT_HCIUART_AG6XX
config BT_HCIUART_MRVL
bool "Marvell protocol support"
depends on BT_HCIUART
+ depends on BT_HCIUART_SERDEV
select BT_HCIUART_H4
help
Marvell is a serial protocol for communication between Bluetooth
diff --git a/drivers/bluetooth/bpa10x.c b/drivers/bluetooth/bpa10x.c
index a346ccb5450d..a0e84538cec8 100644
--- a/drivers/bluetooth/bpa10x.c
+++ b/drivers/bluetooth/bpa10x.c
@@ -359,7 +359,8 @@ static int bpa10x_set_diag(struct hci_dev *hdev, bool enable)
return 0;
}
-static int bpa10x_probe(struct usb_interface *intf, const struct usb_device_id *id)
+static int bpa10x_probe(struct usb_interface *intf,
+ const struct usb_device_id *id)
{
struct bpa10x_data *data;
struct hci_dev *hdev;
diff --git a/drivers/bluetooth/btbcm.c b/drivers/bluetooth/btbcm.c
index 3fe941539a1f..124ef0a3e1dd 100644
--- a/drivers/bluetooth/btbcm.c
+++ b/drivers/bluetooth/btbcm.c
@@ -335,6 +335,7 @@ static const struct bcm_subver_table bcm_uart_subver_table[] = {
{ 0x230f, "BCM4356A2" }, /* 001.003.015 */
{ 0x220e, "BCM20702A1" }, /* 001.002.014 */
{ 0x4217, "BCM4329B1" }, /* 002.002.023 */
+ { 0x6106, "BCM4359C0" }, /* 003.001.006 */
{ }
};
diff --git a/drivers/bluetooth/btmtkuart.c b/drivers/bluetooth/btmtkuart.c
index f5dbeec8e274..e11169ad8247 100644
--- a/drivers/bluetooth/btmtkuart.c
+++ b/drivers/bluetooth/btmtkuart.c
@@ -115,10 +115,12 @@ struct btmtk_hci_wmt_params {
struct btmtkuart_dev {
struct hci_dev *hdev;
struct serdev_device *serdev;
- struct clk *clk;
+ struct clk *clk;
+ struct clk *osc;
struct regulator *vcc;
struct gpio_desc *reset;
+ struct gpio_desc *boot;
struct pinctrl *pinctrl;
struct pinctrl_state *pins_runtime;
struct pinctrl_state *pins_boot;
@@ -911,6 +913,19 @@ static int btmtkuart_parse_dt(struct serdev_device *serdev)
return err;
}
+ bdev->osc = devm_clk_get_optional(&serdev->dev, "osc");
+ if (IS_ERR(bdev->osc)) {
+ err = PTR_ERR(bdev->osc);
+ return err;
+ }
+
+ bdev->boot = devm_gpiod_get_optional(&serdev->dev, "boot",
+ GPIOD_OUT_LOW);
+ if (IS_ERR(bdev->boot)) {
+ err = PTR_ERR(bdev->boot);
+ return err;
+ }
+
bdev->pinctrl = devm_pinctrl_get(&serdev->dev);
if (IS_ERR(bdev->pinctrl)) {
err = PTR_ERR(bdev->pinctrl);
@@ -919,8 +934,10 @@ static int btmtkuart_parse_dt(struct serdev_device *serdev)
bdev->pins_boot = pinctrl_lookup_state(bdev->pinctrl,
"default");
- if (IS_ERR(bdev->pins_boot)) {
+ if (IS_ERR(bdev->pins_boot) && !bdev->boot) {
err = PTR_ERR(bdev->pins_boot);
+ dev_err(&serdev->dev,
+ "Should assign RXD to LOW at boot stage\n");
return err;
}
@@ -996,13 +1013,25 @@ static int btmtkuart_probe(struct serdev_device *serdev)
set_bit(HCI_QUIRK_NON_PERSISTENT_SETUP, &hdev->quirks);
if (btmtkuart_is_standalone(bdev)) {
- /* Switch to the specific pin state for the booting requires */
- pinctrl_select_state(bdev->pinctrl, bdev->pins_boot);
+ err = clk_prepare_enable(bdev->osc);
+ if (err < 0)
+ return err;
+
+ if (bdev->boot) {
+ gpiod_set_value_cansleep(bdev->boot, 1);
+ } else {
+ /* Switch to the specific pin state that booting
+ * requires.
+ */
+ pinctrl_select_state(bdev->pinctrl, bdev->pins_boot);
+ }
/* Power on */
err = regulator_enable(bdev->vcc);
- if (err < 0)
+ if (err < 0) {
+ clk_disable_unprepare(bdev->osc);
return err;
+ }
/* Reset if the reset-gpios property is available; otherwise the
* board-level design should guarantee the reset.
@@ -1017,6 +1046,10 @@ static int btmtkuart_probe(struct serdev_device *serdev)
* mode the device requires for UART transfers.
*/
msleep(50);
+
+ if (bdev->boot)
+ devm_gpiod_put(&serdev->dev, bdev->boot);
+
pinctrl_select_state(bdev->pinctrl, bdev->pins_runtime);
/* A standalone device doesn't depend on a power domain on the SoC,
@@ -1037,10 +1070,8 @@ static int btmtkuart_probe(struct serdev_device *serdev)
return 0;
err_regulator_disable:
- if (btmtkuart_is_standalone(bdev)) {
- pinctrl_select_state(bdev->pinctrl, bdev->pins_boot);
+ if (btmtkuart_is_standalone(bdev))
regulator_disable(bdev->vcc);
- }
return err;
}
@@ -1050,9 +1081,9 @@ static void btmtkuart_remove(struct serdev_device *serdev)
struct btmtkuart_dev *bdev = serdev_device_get_drvdata(serdev);
struct hci_dev *hdev = bdev->hdev;
- if (btmtkuart_is_standalone(bdev)) {
- pinctrl_select_state(bdev->pinctrl, bdev->pins_boot);
+ if (btmtkuart_is_standalone(bdev)) {
regulator_disable(bdev->vcc);
+ clk_disable_unprepare(bdev->osc);
}
hci_unregister_dev(hdev);
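
A minimal sketch (not from the patch; placeholder names) of the optional-resource pattern the btmtkuart hunks above rely on: devm_clk_get_optional() and devm_gpiod_get_optional() return NULL rather than an error when the "osc" clock or "boot" GPIO is simply absent, so probe only bails out on real lookup failures and the power-up path stays conditional. struct my_bt_dev, my_parse_dt() and my_power_up() are assumptions for the example, not driver symbols.

#include <linux/clk.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/gpio/consumer.h>

struct my_bt_dev {
	struct clk *osc;		/* optional external oscillator */
	struct gpio_desc *boot;		/* optional boot-strapping pin */
};

static int my_parse_dt(struct device *dev, struct my_bt_dev *bdev)
{
	bdev->osc = devm_clk_get_optional(dev, "osc");
	if (IS_ERR(bdev->osc))
		return PTR_ERR(bdev->osc);	/* real error, not "missing" */

	bdev->boot = devm_gpiod_get_optional(dev, "boot", GPIOD_OUT_LOW);
	if (IS_ERR(bdev->boot))
		return PTR_ERR(bdev->boot);

	return 0;
}

static int my_power_up(struct my_bt_dev *bdev)
{
	int err;

	err = clk_prepare_enable(bdev->osc);	/* a NULL clk is a no-op */
	if (err)
		return err;

	if (bdev->boot)				/* only drive the pin if present */
		gpiod_set_value_cansleep(bdev->boot, 1);

	return 0;
}
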
diff --git a/drivers/bluetooth/btqca.c b/drivers/bluetooth/btqca.c
index aff1d22223bd..8b33128dccee 100644
--- a/drivers/bluetooth/btqca.c
+++ b/drivers/bluetooth/btqca.c
@@ -131,6 +131,7 @@ static void qca_tlv_check_data(struct rome_config *config,
* In case VSE is skipped, only the last segment is acked.
*/
config->dnld_mode = tlv_patch->download_mode;
+ config->dnld_type = config->dnld_mode;
BT_DBG("Total Length : %d bytes",
le32_to_cpu(tlv_patch->total_size));
@@ -251,6 +252,31 @@ out:
return err;
}
+static int qca_inject_cmd_complete_event(struct hci_dev *hdev)
+{
+ struct hci_event_hdr *hdr;
+ struct hci_ev_cmd_complete *evt;
+ struct sk_buff *skb;
+
+ skb = bt_skb_alloc(sizeof(*hdr) + sizeof(*evt) + 1, GFP_KERNEL);
+ if (!skb)
+ return -ENOMEM;
+
+ hdr = skb_put(skb, sizeof(*hdr));
+ hdr->evt = HCI_EV_CMD_COMPLETE;
+ hdr->plen = sizeof(*evt) + 1;
+
+ evt = skb_put(skb, sizeof(*evt));
+ evt->ncmd = 1;
+ evt->opcode = QCA_HCI_CC_OPCODE;
+
+ skb_put_u8(skb, QCA_HCI_CC_SUCCESS);
+
+ hci_skb_pkt_type(skb) = HCI_EVENT_PKT;
+
+ return hci_recv_frame(hdev, skb);
+}
+
static int qca_download_firmware(struct hci_dev *hdev,
struct rome_config *config)
{
@@ -284,11 +310,22 @@ static int qca_download_firmware(struct hci_dev *hdev,
ret = qca_tlv_send_segment(hdev, segsize, segment,
config->dnld_mode);
if (ret)
- break;
+ goto out;
segment += segsize;
}
+ /* The latest Qualcomm chipsets do not send a command complete event
+ * for every firmware packet sent. They only respond with a vendor
+ * specific event for the last packet. This optimization in the chip
+ * decreases the BT initialization time. Here we inject a command
+ * complete event to avoid a command timeout error message.
+ */
+ if (config->dnld_type == ROME_SKIP_EVT_VSE_CC ||
+ config->dnld_type == ROME_SKIP_EVT_VSE)
+ return qca_inject_cmd_complete_event(hdev);
+
+out:
release_firmware(fw);
return ret;
@@ -319,7 +356,8 @@ int qca_set_bdaddr_rome(struct hci_dev *hdev, const bdaddr_t *bdaddr)
EXPORT_SYMBOL_GPL(qca_set_bdaddr_rome);
int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
- enum qca_btsoc_type soc_type, u32 soc_ver)
+ enum qca_btsoc_type soc_type, u32 soc_ver,
+ const char *firmware_name)
{
struct rome_config config;
int err;
@@ -352,7 +390,10 @@ int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
/* Download NVM configuration */
config.type = TLV_TYPE_NVM;
- if (qca_is_wcn399x(soc_type))
+ if (firmware_name)
+ snprintf(config.fwname, sizeof(config.fwname),
+ "qca/%s", firmware_name);
+ else if (qca_is_wcn399x(soc_type))
snprintf(config.fwname, sizeof(config.fwname),
"qca/crnv%02x.bin", rom_ver);
else
diff --git a/drivers/bluetooth/btqca.h b/drivers/bluetooth/btqca.h
index e9c999959603..6a291a7a5d96 100644
--- a/drivers/bluetooth/btqca.h
+++ b/drivers/bluetooth/btqca.h
@@ -28,6 +28,9 @@
#define QCA_WCN3990_POWERON_PULSE 0xFC
#define QCA_WCN3990_POWEROFF_PULSE 0xC0
+#define QCA_HCI_CC_OPCODE 0xFC00
+#define QCA_HCI_CC_SUCCESS 0x00
+
enum qca_baudrate {
QCA_BAUDRATE_115200 = 0,
QCA_BAUDRATE_57600,
@@ -69,6 +72,7 @@ struct rome_config {
char fwname[64];
uint8_t user_baud_rate;
enum rome_tlv_dnld_mode dnld_mode;
+ enum rome_tlv_dnld_mode dnld_type;
};
struct edl_event_hdr {
@@ -127,7 +131,8 @@ enum qca_btsoc_type {
int qca_set_bdaddr_rome(struct hci_dev *hdev, const bdaddr_t *bdaddr);
int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
- enum qca_btsoc_type soc_type, u32 soc_ver);
+ enum qca_btsoc_type soc_type, u32 soc_ver,
+ const char *firmware_name);
int qca_read_soc_version(struct hci_dev *hdev, u32 *soc_version);
int qca_set_bdaddr(struct hci_dev *hdev, const bdaddr_t *bdaddr);
static inline bool qca_is_wcn399x(enum qca_btsoc_type soc_type)
@@ -142,7 +147,8 @@ static inline int qca_set_bdaddr_rome(struct hci_dev *hdev, const bdaddr_t *bdad
}
static inline int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
- enum qca_btsoc_type soc_type, u32 soc_ver)
+ enum qca_btsoc_type soc_type, u32 soc_ver,
+ const char *firmware_name)
{
return -EOPNOTSUPP;
}
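
For illustration only (not part of the patch): the synthetic Command Complete event built by qca_inject_cmd_complete_event() in btqca.c above occupies exactly sizeof(*hdr) + sizeof(*evt) + 1 = 6 bytes, which the hypothetical array below spells out, assuming HCI_EV_CMD_COMPLETE is 0x0e as defined in <net/bluetooth/hci.h> and using the QCA_HCI_CC_* values added here.

#include <linux/types.h>

/* Hypothetical byte-level view of the injected event; the opcode is
 * little-endian on the wire.
 */
static const u8 qca_fake_cc_event[] = {
	0x0e,		/* hci_event_hdr.evt  = HCI_EV_CMD_COMPLETE */
	0x04,		/* hci_event_hdr.plen = sizeof(hci_ev_cmd_complete) + 1 */
	0x01,		/* hci_ev_cmd_complete.ncmd */
	0x00, 0xfc,	/* hci_ev_cmd_complete.opcode = QCA_HCI_CC_OPCODE (0xfc00) */
	0x00,		/* trailing status byte = QCA_HCI_CC_SUCCESS */
};
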
diff --git a/drivers/bluetooth/btrtl.c b/drivers/bluetooth/btrtl.c
index 208feef63de4..4f75a9b61d09 100644
--- a/drivers/bluetooth/btrtl.c
+++ b/drivers/bluetooth/btrtl.c
@@ -21,6 +21,7 @@
#define RTL_ROM_LMP_3499 0x3499
#define RTL_ROM_LMP_8723A 0x1200
#define RTL_ROM_LMP_8723B 0x8723
+#define RTL_ROM_LMP_8723D 0x8873
#define RTL_ROM_LMP_8821A 0x8821
#define RTL_ROM_LMP_8761A 0x8761
#define RTL_ROM_LMP_8822B 0x8822
@@ -107,6 +108,13 @@ static const struct id_table ic_id_table[] = {
.fw_name = "rtl_bt/rtl8723ds_fw.bin",
.cfg_name = "rtl_bt/rtl8723ds_config" },
+ /* 8723DU */
+ { IC_INFO(RTL_ROM_LMP_8723D, 0x826C),
+ .config_needed = true,
+ .has_rom_version = true,
+ .fw_name = "rtl_bt/rtl8723d_fw.bin",
+ .cfg_name = "rtl_bt/rtl8723d_config" },
+
/* 8821A */
{ IC_INFO(RTL_ROM_LMP_8821A, 0xa),
.config_needed = false,
@@ -637,6 +645,26 @@ int btrtl_setup_realtek(struct hci_dev *hdev)
}
EXPORT_SYMBOL_GPL(btrtl_setup_realtek);
+int btrtl_shutdown_realtek(struct hci_dev *hdev)
+{
+ struct sk_buff *skb;
+ int ret;
+
+ /* According to the vendor driver, BT must be reset on close to avoid
+ * firmware crash.
+ */
+ skb = __hci_cmd_sync(hdev, HCI_OP_RESET, 0, NULL, HCI_INIT_TIMEOUT);
+ if (IS_ERR(skb)) {
+ ret = PTR_ERR(skb);
+ bt_dev_err(hdev, "HCI reset during shutdown failed");
+ return ret;
+ }
+ kfree_skb(skb);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(btrtl_shutdown_realtek);
+
static unsigned int btrtl_convert_baudrate(u32 device_baudrate)
{
switch (device_baudrate) {
diff --git a/drivers/bluetooth/btrtl.h b/drivers/bluetooth/btrtl.h
index f1676144fce8..10ad40c3e42c 100644
--- a/drivers/bluetooth/btrtl.h
+++ b/drivers/bluetooth/btrtl.h
@@ -55,6 +55,7 @@ void btrtl_free(struct btrtl_device_info *btrtl_dev);
int btrtl_download_firmware(struct hci_dev *hdev,
struct btrtl_device_info *btrtl_dev);
int btrtl_setup_realtek(struct hci_dev *hdev);
+int btrtl_shutdown_realtek(struct hci_dev *hdev);
int btrtl_get_uart_settings(struct hci_dev *hdev,
struct btrtl_device_info *btrtl_dev,
unsigned int *controller_baudrate,
@@ -83,6 +84,11 @@ static inline int btrtl_setup_realtek(struct hci_dev *hdev)
return -EOPNOTSUPP;
}
+static inline int btrtl_shutdown_realtek(struct hci_dev *hdev)
+{
+ return -EOPNOTSUPP;
+}
+
static inline int btrtl_get_uart_settings(struct hci_dev *hdev,
struct btrtl_device_info *btrtl_dev,
unsigned int *controller_baudrate,
diff --git a/drivers/bluetooth/btsdio.c b/drivers/bluetooth/btsdio.c
index 83748b7b2033..fd9571d5fdac 100644
--- a/drivers/bluetooth/btsdio.c
+++ b/drivers/bluetooth/btsdio.c
@@ -286,6 +286,7 @@ static int btsdio_probe(struct sdio_func *func,
switch (func->device) {
case SDIO_DEVICE_ID_BROADCOM_43341:
case SDIO_DEVICE_ID_BROADCOM_43430:
+ case SDIO_DEVICE_ID_BROADCOM_4356:
return -ENODEV;
}
}
diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
index 50aed5259c2b..3876fee6ad13 100644
--- a/drivers/bluetooth/btusb.c
+++ b/drivers/bluetooth/btusb.c
@@ -11,6 +11,7 @@
#include <linux/usb.h>
#include <linux/usb/quirks.h>
#include <linux/firmware.h>
+#include <linux/iopoll.h>
#include <linux/of_device.h>
#include <linux/of_irq.h>
#include <linux/suspend.h>
@@ -55,6 +56,7 @@ static struct usb_driver btusb_driver;
#define BTUSB_BCM2045 0x40000
#define BTUSB_IFNUM_2 0x80000
#define BTUSB_CW6622 0x100000
+#define BTUSB_MEDIATEK 0x200000
static const struct usb_device_id btusb_table[] = {
/* Generic Bluetooth USB device */
@@ -264,7 +266,9 @@ static const struct usb_device_id blacklist_table[] = {
{ USB_DEVICE(0x04ca, 0x3015), .driver_info = BTUSB_QCA_ROME },
{ USB_DEVICE(0x04ca, 0x3016), .driver_info = BTUSB_QCA_ROME },
{ USB_DEVICE(0x04ca, 0x301a), .driver_info = BTUSB_QCA_ROME },
+ { USB_DEVICE(0x13d3, 0x3491), .driver_info = BTUSB_QCA_ROME },
{ USB_DEVICE(0x13d3, 0x3496), .driver_info = BTUSB_QCA_ROME },
+ { USB_DEVICE(0x13d3, 0x3501), .driver_info = BTUSB_QCA_ROME },
/* Broadcom BCM2035 */
{ USB_DEVICE(0x0a5c, 0x2009), .driver_info = BTUSB_BCM92035 },
@@ -346,6 +350,10 @@ static const struct usb_device_id blacklist_table[] = {
{ USB_VENDOR_AND_INTERFACE_INFO(0x0bda, 0xe0, 0x01, 0x01),
.driver_info = BTUSB_REALTEK },
+ /* MediaTek Bluetooth devices */
+ { USB_VENDOR_AND_INTERFACE_INFO(0x0e8d, 0xe0, 0x01, 0x01),
+ .driver_info = BTUSB_MEDIATEK },
+
/* Additional Realtek 8723AE Bluetooth devices */
{ USB_DEVICE(0x0930, 0x021d), .driver_info = BTUSB_REALTEK },
{ USB_DEVICE(0x13d3, 0x3394), .driver_info = BTUSB_REALTEK },
@@ -426,6 +434,7 @@ static const struct dmi_system_id btusb_needs_reset_resume_table[] = {
#define BTUSB_DIAG_RUNNING 10
#define BTUSB_OOB_WAKE_ENABLED 11
#define BTUSB_HW_RESET_ACTIVE 12
+#define BTUSB_TX_WAIT_VND_EVT 13
struct btusb_data {
struct hci_dev *hdev;
@@ -449,6 +458,7 @@ struct btusb_data {
struct usb_anchor bulk_anchor;
struct usb_anchor isoc_anchor;
struct usb_anchor diag_anchor;
+ struct usb_anchor ctrl_anchor;
spinlock_t rxlock;
struct sk_buff *evt_skb;
@@ -1202,6 +1212,7 @@ static void btusb_stop_traffic(struct btusb_data *data)
usb_kill_anchored_urbs(&data->bulk_anchor);
usb_kill_anchored_urbs(&data->isoc_anchor);
usb_kill_anchored_urbs(&data->diag_anchor);
+ usb_kill_anchored_urbs(&data->ctrl_anchor);
}
static int btusb_close(struct hci_dev *hdev)
@@ -2437,6 +2448,568 @@ static int btusb_shutdown_intel_new(struct hci_dev *hdev)
return 0;
}
+#ifdef CONFIG_BT_HCIBTUSB_MTK
+
+#define FIRMWARE_MT7663 "mediatek/mt7663pr2h.bin"
+#define FIRMWARE_MT7668 "mediatek/mt7668pr2h.bin"
+
+#define HCI_WMT_MAX_EVENT_SIZE 64
+
+enum {
+ BTMTK_WMT_PATCH_DWNLD = 0x1,
+ BTMTK_WMT_FUNC_CTRL = 0x6,
+ BTMTK_WMT_RST = 0x7,
+ BTMTK_WMT_SEMAPHORE = 0x17,
+};
+
+enum {
+ BTMTK_WMT_INVALID,
+ BTMTK_WMT_PATCH_UNDONE,
+ BTMTK_WMT_PATCH_DONE,
+ BTMTK_WMT_ON_UNDONE,
+ BTMTK_WMT_ON_DONE,
+ BTMTK_WMT_ON_PROGRESS,
+};
+
+struct btmtk_wmt_hdr {
+ u8 dir;
+ u8 op;
+ __le16 dlen;
+ u8 flag;
+} __packed;
+
+struct btmtk_hci_wmt_cmd {
+ struct btmtk_wmt_hdr hdr;
+ u8 data[256];
+} __packed;
+
+struct btmtk_hci_wmt_evt {
+ struct hci_event_hdr hhdr;
+ struct btmtk_wmt_hdr whdr;
+} __packed;
+
+struct btmtk_hci_wmt_evt_funcc {
+ struct btmtk_hci_wmt_evt hwhdr;
+ __be16 status;
+} __packed;
+
+struct btmtk_tci_sleep {
+ u8 mode;
+ __le16 duration;
+ __le16 host_duration;
+ u8 host_wakeup_pin;
+ u8 time_compensation;
+} __packed;
+
+struct btmtk_hci_wmt_params {
+ u8 op;
+ u8 flag;
+ u16 dlen;
+ const void *data;
+ u32 *status;
+};
+
+static void btusb_mtk_wmt_recv(struct urb *urb)
+{
+ struct hci_dev *hdev = urb->context;
+ struct btusb_data *data = hci_get_drvdata(hdev);
+ struct hci_event_hdr *hdr;
+ struct sk_buff *skb;
+ int err;
+
+ if (urb->status == 0 && urb->actual_length > 0) {
+ hdev->stat.byte_rx += urb->actual_length;
+
+ /* WMT event shouldn't be fragmented and the size should be
+ * less than HCI_WMT_MAX_EVENT_SIZE.
+ */
+ skb = bt_skb_alloc(HCI_WMT_MAX_EVENT_SIZE, GFP_ATOMIC);
+ if (!skb) {
+ hdev->stat.err_rx++;
+ goto err_out;
+ }
+
+ hci_skb_pkt_type(skb) = HCI_EVENT_PKT;
+ skb_put_data(skb, urb->transfer_buffer, urb->actual_length);
+
+ hdr = (void *)skb->data;
+ /* Fix up the vendor event id with 0xff for vendor specific
+ * instead of 0xe4 so that events sent via the monitoring socket
+ * can be parsed properly.
+ */
+ hdr->evt = 0xff;
+
+ /* When someone is waiting for the WMT event, the skb is cloned
+ * and the event is then processed from the clone.
+ */
+ if (test_bit(BTUSB_TX_WAIT_VND_EVT, &data->flags)) {
+ data->evt_skb = skb_clone(skb, GFP_KERNEL);
+ if (!data->evt_skb)
+ goto err_out;
+ }
+
+ err = hci_recv_frame(hdev, skb);
+ if (err < 0)
+ goto err_free_skb;
+
+ if (test_and_clear_bit(BTUSB_TX_WAIT_VND_EVT,
+ &data->flags)) {
+ /* Barrier to sync with other CPUs */
+ smp_mb__after_atomic();
+ wake_up_bit(&data->flags,
+ BTUSB_TX_WAIT_VND_EVT);
+ }
+err_out:
+ return;
+err_free_skb:
+ kfree_skb(data->evt_skb);
+ data->evt_skb = NULL;
+ return;
+ } else if (urb->status == -ENOENT) {
+ /* Avoid a suspend failure while the URB is being killed via usb_kill_urb() */
+ return;
+ }
+
+ usb_mark_last_busy(data->udev);
+
+ /* The URB complete handler is still called with urb->actual_length = 0
+ * when the event is not available, so we should keep re-submitting the
+ * URB until the WMT event returns. Also, it's necessary to wait some
+ * time between two consecutive control URBs to give the target device
+ * time to generate the event. Otherwise, the WMT event cannot be
+ * returned from the device successfully.
+ */
+ udelay(100);
+
+ usb_anchor_urb(urb, &data->ctrl_anchor);
+ err = usb_submit_urb(urb, GFP_ATOMIC);
+ if (err < 0) {
+ /* -EPERM: urb is being killed;
+ * -ENODEV: device got disconnected
+ */
+ if (err != -EPERM && err != -ENODEV)
+ bt_dev_err(hdev, "urb %p failed to resubmit (%d)",
+ urb, -err);
+ usb_unanchor_urb(urb);
+ }
+}
+
+static int btusb_mtk_submit_wmt_recv_urb(struct hci_dev *hdev)
+{
+ struct btusb_data *data = hci_get_drvdata(hdev);
+ struct usb_ctrlrequest *dr;
+ unsigned char *buf;
+ int err, size = 64;
+ unsigned int pipe;
+ struct urb *urb;
+
+ urb = usb_alloc_urb(0, GFP_KERNEL);
+ if (!urb)
+ return -ENOMEM;
+
+ dr = kmalloc(sizeof(*dr), GFP_KERNEL);
+ if (!dr) {
+ usb_free_urb(urb);
+ return -ENOMEM;
+ }
+
+ dr->bRequestType = USB_TYPE_VENDOR | USB_DIR_IN;
+ dr->bRequest = 1;
+ dr->wIndex = cpu_to_le16(0);
+ dr->wValue = cpu_to_le16(48);
+ dr->wLength = cpu_to_le16(size);
+
+ buf = kmalloc(size, GFP_KERNEL);
+ if (!buf) {
+ kfree(dr);
+ return -ENOMEM;
+ }
+
+ pipe = usb_rcvctrlpipe(data->udev, 0);
+
+ usb_fill_control_urb(urb, data->udev, pipe, (void *)dr,
+ buf, size, btusb_mtk_wmt_recv, hdev);
+
+ urb->transfer_flags |= URB_FREE_BUFFER;
+
+ usb_anchor_urb(urb, &data->ctrl_anchor);
+ err = usb_submit_urb(urb, GFP_KERNEL);
+ if (err < 0) {
+ if (err != -EPERM && err != -ENODEV)
+ bt_dev_err(hdev, "urb %p submission failed (%d)",
+ urb, -err);
+ usb_unanchor_urb(urb);
+ }
+
+ usb_free_urb(urb);
+
+ return err;
+}
+
+static int btusb_mtk_hci_wmt_sync(struct hci_dev *hdev,
+ struct btmtk_hci_wmt_params *wmt_params)
+{
+ struct btusb_data *data = hci_get_drvdata(hdev);
+ struct btmtk_hci_wmt_evt_funcc *wmt_evt_funcc;
+ u32 hlen, status = BTMTK_WMT_INVALID;
+ struct btmtk_hci_wmt_evt *wmt_evt;
+ struct btmtk_hci_wmt_cmd wc;
+ struct btmtk_wmt_hdr *hdr;
+ int err;
+
+ /* Submit control IN URB on demand to process the WMT event */
+ err = btusb_mtk_submit_wmt_recv_urb(hdev);
+ if (err < 0)
+ return err;
+
+ /* Send the WMT command and wait until the WMT event returns */
+ hlen = sizeof(*hdr) + wmt_params->dlen;
+ if (hlen > 255)
+ return -EINVAL;
+
+ hdr = (struct btmtk_wmt_hdr *)&wc;
+ hdr->dir = 1;
+ hdr->op = wmt_params->op;
+ hdr->dlen = cpu_to_le16(wmt_params->dlen + 1);
+ hdr->flag = wmt_params->flag;
+ memcpy(wc.data, wmt_params->data, wmt_params->dlen);
+
+ set_bit(BTUSB_TX_WAIT_VND_EVT, &data->flags);
+
+ err = __hci_cmd_send(hdev, 0xfc6f, hlen, &wc);
+
+ if (err < 0) {
+ clear_bit(BTUSB_TX_WAIT_VND_EVT, &data->flags);
+ return err;
+ }
+
+ /* The vendor specific WMT commands are all answered by a vendor
+ * specific event and will not have the Command Status or Command
+ * Complete as with usual HCI command flow control.
+ *
+ * After sending the command, wait for BTUSB_TX_WAIT_VND_EVT
+ * state to be cleared. The driver specific event receive routine
+ * will clear that state and with that indicate completion of the
+ * WMT command.
+ */
+ err = wait_on_bit_timeout(&data->flags, BTUSB_TX_WAIT_VND_EVT,
+ TASK_INTERRUPTIBLE, HCI_INIT_TIMEOUT);
+ if (err == -EINTR) {
+ bt_dev_err(hdev, "Execution of wmt command interrupted");
+ clear_bit(BTUSB_TX_WAIT_VND_EVT, &data->flags);
+ return err;
+ }
+
+ if (err) {
+ bt_dev_err(hdev, "Execution of wmt command timed out");
+ clear_bit(BTUSB_TX_WAIT_VND_EVT, &data->flags);
+ return -ETIMEDOUT;
+ }
+
+ /* Parse and handle the return WMT event */
+ wmt_evt = (struct btmtk_hci_wmt_evt *)data->evt_skb->data;
+ if (wmt_evt->whdr.op != hdr->op) {
+ bt_dev_err(hdev, "Wrong op received %d expected %d",
+ wmt_evt->whdr.op, hdr->op);
+ err = -EIO;
+ goto err_free_skb;
+ }
+
+ switch (wmt_evt->whdr.op) {
+ case BTMTK_WMT_SEMAPHORE:
+ if (wmt_evt->whdr.flag == 2)
+ status = BTMTK_WMT_PATCH_UNDONE;
+ else
+ status = BTMTK_WMT_PATCH_DONE;
+ break;
+ case BTMTK_WMT_FUNC_CTRL:
+ wmt_evt_funcc = (struct btmtk_hci_wmt_evt_funcc *)wmt_evt;
+ if (be16_to_cpu(wmt_evt_funcc->status) == 0x404)
+ status = BTMTK_WMT_ON_DONE;
+ else if (be16_to_cpu(wmt_evt_funcc->status) == 0x420)
+ status = BTMTK_WMT_ON_PROGRESS;
+ else
+ status = BTMTK_WMT_ON_UNDONE;
+ break;
+ }
+
+ if (wmt_params->status)
+ *wmt_params->status = status;
+
+err_free_skb:
+ kfree_skb(data->evt_skb);
+ data->evt_skb = NULL;
+
+ return err;
+}
+
+static int btusb_mtk_setup_firmware(struct hci_dev *hdev, const char *fwname)
+{
+ struct btmtk_hci_wmt_params wmt_params;
+ const struct firmware *fw;
+ const u8 *fw_ptr;
+ size_t fw_size;
+ int err, dlen;
+ u8 flag;
+
+ err = request_firmware(&fw, fwname, &hdev->dev);
+ if (err < 0) {
+ bt_dev_err(hdev, "Failed to load firmware file (%d)", err);
+ return err;
+ }
+
+ fw_ptr = fw->data;
+ fw_size = fw->size;
+
+ /* The size of the patch header is 30 bytes, which should be skipped */
+ if (fw_size < 30)
+ goto err_release_fw;
+
+ fw_size -= 30;
+ fw_ptr += 30;
+ flag = 1;
+
+ wmt_params.op = BTMTK_WMT_PATCH_DWNLD;
+ wmt_params.status = NULL;
+
+ while (fw_size > 0) {
+ dlen = min_t(int, 250, fw_size);
+
+ /* Tell the device the position in the sequence */
+ if (fw_size - dlen <= 0)
+ flag = 3;
+ else if (fw_size < fw->size - 30)
+ flag = 2;
+
+ wmt_params.flag = flag;
+ wmt_params.dlen = dlen;
+ wmt_params.data = fw_ptr;
+
+ err = btusb_mtk_hci_wmt_sync(hdev, &wmt_params);
+ if (err < 0) {
+ bt_dev_err(hdev, "Failed to send wmt patch dwnld (%d)",
+ err);
+ goto err_release_fw;
+ }
+
+ fw_size -= dlen;
+ fw_ptr += dlen;
+ }
+
+ wmt_params.op = BTMTK_WMT_RST;
+ wmt_params.flag = 4;
+ wmt_params.dlen = 0;
+ wmt_params.data = NULL;
+ wmt_params.status = NULL;
+
+ /* Activate the function that the firmware provides */
+ err = btusb_mtk_hci_wmt_sync(hdev, &wmt_params);
+ if (err < 0) {
+ bt_dev_err(hdev, "Failed to send wmt rst (%d)", err);
+ return err;
+ }
+
+ /* Wait a few moments for firmware activation to complete */
+ usleep_range(10000, 12000);
+
+err_release_fw:
+ release_firmware(fw);
+
+ return err;
+}
+
+static int btusb_mtk_func_query(struct hci_dev *hdev)
+{
+ struct btmtk_hci_wmt_params wmt_params;
+ int status, err;
+ u8 param = 0;
+
+ /* Query whether the function is enabled */
+ wmt_params.op = BTMTK_WMT_FUNC_CTRL;
+ wmt_params.flag = 4;
+ wmt_params.dlen = sizeof(param);
+ wmt_params.data = &param;
+ wmt_params.status = &status;
+
+ err = btusb_mtk_hci_wmt_sync(hdev, &wmt_params);
+ if (err < 0) {
+ bt_dev_err(hdev, "Failed to query function status (%d)", err);
+ return err;
+ }
+
+ return status;
+}
+
+static int btusb_mtk_reg_read(struct btusb_data *data, u32 reg, u32 *val)
+{
+ int pipe, err, size = sizeof(u32);
+ void *buf;
+
+ buf = kzalloc(size, GFP_KERNEL);
+ if (!buf)
+ return -ENOMEM;
+
+ pipe = usb_rcvctrlpipe(data->udev, 0);
+ err = usb_control_msg(data->udev, pipe, 0x63,
+ USB_TYPE_VENDOR | USB_DIR_IN,
+ reg >> 16, reg & 0xffff,
+ buf, size, USB_CTRL_SET_TIMEOUT);
+ if (err < 0)
+ goto err_free_buf;
+
+ *val = get_unaligned_le32(buf);
+
+err_free_buf:
+ kfree(buf);
+
+ return err;
+}
+
+static int btusb_mtk_id_get(struct btusb_data *data, u32 *id)
+{
+ return btusb_mtk_reg_read(data, 0x80000008, id);
+}
+
+static int btusb_mtk_setup(struct hci_dev *hdev)
+{
+ struct btusb_data *data = hci_get_drvdata(hdev);
+ struct btmtk_hci_wmt_params wmt_params;
+ ktime_t calltime, delta, rettime;
+ struct btmtk_tci_sleep tci_sleep;
+ unsigned long long duration;
+ struct sk_buff *skb;
+ const char *fwname;
+ int err, status;
+ u32 dev_id;
+ u8 param;
+
+ calltime = ktime_get();
+
+ err = btusb_mtk_id_get(data, &dev_id);
+ if (err < 0) {
+ bt_dev_err(hdev, "Failed to get device id (%d)", err);
+ return err;
+ }
+
+ switch (dev_id) {
+ case 0x7663:
+ fwname = FIRMWARE_MT7663;
+ break;
+ case 0x7668:
+ fwname = FIRMWARE_MT7668;
+ break;
+ default:
+ bt_dev_err(hdev, "Unsupported support hardware variant (%08x)",
+ dev_id);
+ return -ENODEV;
+ }
+
+ /* Query whether the firmware is already downloaded */
+ wmt_params.op = BTMTK_WMT_SEMAPHORE;
+ wmt_params.flag = 1;
+ wmt_params.dlen = 0;
+ wmt_params.data = NULL;
+ wmt_params.status = &status;
+
+ err = btusb_mtk_hci_wmt_sync(hdev, &wmt_params);
+ if (err < 0) {
+ bt_dev_err(hdev, "Failed to query firmware status (%d)", err);
+ return err;
+ }
+
+ if (status == BTMTK_WMT_PATCH_DONE) {
+ bt_dev_info(hdev, "firmware already downloaded");
+ goto ignore_setup_fw;
+ }
+
+ /* Setup a firmware which the device definitely requires */
+ err = btusb_mtk_setup_firmware(hdev, fwname);
+ if (err < 0)
+ return err;
+
+ignore_setup_fw:
+ err = readx_poll_timeout(btusb_mtk_func_query, hdev, status,
+ status < 0 || status != BTMTK_WMT_ON_PROGRESS,
+ 2000, 5000000);
+ /* -ETIMEDOUT happens */
+ if (err < 0)
+ return err;
+
+ /* The other errors happen in btusb_mtk_func_query */
+ if (status < 0)
+ return status;
+
+ if (status == BTMTK_WMT_ON_DONE) {
+ bt_dev_info(hdev, "function already on");
+ goto ignore_func_on;
+ }
+
+ /* Enable Bluetooth protocol */
+ param = 1;
+ wmt_params.op = BTMTK_WMT_FUNC_CTRL;
+ wmt_params.flag = 0;
+ wmt_params.dlen = sizeof(param);
+ wmt_params.data = &param;
+ wmt_params.status = NULL;
+
+ err = btusb_mtk_hci_wmt_sync(hdev, &wmt_params);
+ if (err < 0) {
+ bt_dev_err(hdev, "Failed to send wmt func ctrl (%d)", err);
+ return err;
+ }
+
+ignore_func_on:
+ /* Apply the low power environment setup */
+ tci_sleep.mode = 0x5;
+ tci_sleep.duration = cpu_to_le16(0x640);
+ tci_sleep.host_duration = cpu_to_le16(0x640);
+ tci_sleep.host_wakeup_pin = 0;
+ tci_sleep.time_compensation = 0;
+
+ skb = __hci_cmd_sync(hdev, 0xfc7a, sizeof(tci_sleep), &tci_sleep,
+ HCI_INIT_TIMEOUT);
+ if (IS_ERR(skb)) {
+ err = PTR_ERR(skb);
+ bt_dev_err(hdev, "Failed to apply low power setting (%d)", err);
+ return err;
+ }
+ kfree_skb(skb);
+
+ rettime = ktime_get();
+ delta = ktime_sub(rettime, calltime);
+ duration = (unsigned long long)ktime_to_ns(delta) >> 10;
+
+ bt_dev_info(hdev, "Device setup in %llu usecs", duration);
+
+ return 0;
+}
+
+static int btusb_mtk_shutdown(struct hci_dev *hdev)
+{
+ struct btmtk_hci_wmt_params wmt_params;
+ u8 param = 0;
+ int err;
+
+ /* Disable the device */
+ wmt_params.op = BTMTK_WMT_FUNC_CTRL;
+ wmt_params.flag = 0;
+ wmt_params.dlen = sizeof(param);
+ wmt_params.data = &param;
+ wmt_params.status = NULL;
+
+ err = btusb_mtk_hci_wmt_sync(hdev, &wmt_params);
+ if (err < 0) {
+ bt_dev_err(hdev, "Failed to send wmt func ctrl (%d)", err);
+ return err;
+ }
+
+ return 0;
+}
+
+MODULE_FIRMWARE(FIRMWARE_MT7663);
+MODULE_FIRMWARE(FIRMWARE_MT7668);
+#endif
+
#ifdef CONFIG_PM
/* Configure an out-of-band gpio as wake-up pin, if specified in device tree */
static int marvell_config_oob_wake(struct hci_dev *hdev)
@@ -3044,6 +3617,7 @@ static int btusb_probe(struct usb_interface *intf,
init_usb_anchor(&data->bulk_anchor);
init_usb_anchor(&data->isoc_anchor);
init_usb_anchor(&data->diag_anchor);
+ init_usb_anchor(&data->ctrl_anchor);
spin_lock_init(&data->rxlock);
if (id->driver_info & BTUSB_INTEL_NEW) {
@@ -3157,6 +3731,15 @@ static int btusb_probe(struct usb_interface *intf,
if (id->driver_info & BTUSB_MARVELL)
hdev->set_bdaddr = btusb_set_bdaddr_marvell;
+#ifdef CONFIG_BT_HCIBTUSB_MTK
+ if (id->driver_info & BTUSB_MEDIATEK) {
+ hdev->setup = btusb_mtk_setup;
+ hdev->shutdown = btusb_mtk_shutdown;
+ hdev->manufacturer = 70;
+ set_bit(HCI_QUIRK_NON_PERSISTENT_SETUP, &hdev->quirks);
+ }
+#endif
+
if (id->driver_info & BTUSB_SWAVE) {
set_bit(HCI_QUIRK_FIXUP_INQUIRY_MODE, &hdev->quirks);
set_bit(HCI_QUIRK_BROKEN_LOCAL_COMMANDS, &hdev->quirks);
@@ -3184,6 +3767,7 @@ static int btusb_probe(struct usb_interface *intf,
#ifdef CONFIG_BT_HCIBTUSB_RTL
if (id->driver_info & BTUSB_REALTEK) {
hdev->setup = btrtl_setup_realtek;
+ hdev->shutdown = btrtl_shutdown_realtek;
/* Realtek devices lose their updated firmware over suspend,
* but the USB hub doesn't notice any status change.
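
A minimal sketch, with placeholder names, of the wait-bit handshake that the btusb_mtk_hci_wmt_sync() comment above describes: the sender sets a flag bit before transmitting and parks on it, while the receive path clears the bit and wakes the waiter once the vendor event shows up. FLAG_WAIT_VND_EVT, struct my_data and my_send_cmd() are assumptions for the example, not driver symbols.

#include <linux/atomic.h>
#include <linux/bitops.h>
#include <linux/errno.h>
#include <linux/jiffies.h>
#include <linux/sched.h>
#include <linux/wait_bit.h>

#define FLAG_WAIT_VND_EVT	0	/* stand-in for BTUSB_TX_WAIT_VND_EVT */

struct my_data {
	unsigned long flags;
};

/* Placeholder transmit step; the real driver sends an HCI vendor command. */
static int my_send_cmd(struct my_data *d)
{
	return 0;
}

/* Receive/completion path: clear the flag and wake the sleeping sender. */
static void my_vendor_event_arrived(struct my_data *d)
{
	if (test_and_clear_bit(FLAG_WAIT_VND_EVT, &d->flags)) {
		/* Order the clear before the wake-up, as the driver does */
		smp_mb__after_atomic();
		wake_up_bit(&d->flags, FLAG_WAIT_VND_EVT);
	}
}

/* Command path: arm the flag, send, then sleep until cleared or timed out. */
static int my_send_and_wait(struct my_data *d)
{
	int err;

	set_bit(FLAG_WAIT_VND_EVT, &d->flags);

	err = my_send_cmd(d);
	if (err) {
		clear_bit(FLAG_WAIT_VND_EVT, &d->flags);
		return err;
	}

	err = wait_on_bit_timeout(&d->flags, FLAG_WAIT_VND_EVT,
				  TASK_INTERRUPTIBLE,
				  msecs_to_jiffies(2000));
	if (err) {
		clear_bit(FLAG_WAIT_VND_EVT, &d->flags);
		return err == -EINTR ? err : -ETIMEDOUT;
	}

	return 0;
}
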
diff --git a/drivers/bluetooth/hci_bcsp.c b/drivers/bluetooth/hci_bcsp.c
index 82b13faa9422..fe2e307009f4 100644
--- a/drivers/bluetooth/hci_bcsp.c
+++ b/drivers/bluetooth/hci_bcsp.c
@@ -744,6 +744,11 @@ static int bcsp_close(struct hci_uart *hu)
skb_queue_purge(&bcsp->rel);
skb_queue_purge(&bcsp->unrel);
+ if (bcsp->rx_skb) {
+ kfree_skb(bcsp->rx_skb);
+ bcsp->rx_skb = NULL;
+ }
+
kfree(bcsp);
return 0;
}
diff --git a/drivers/bluetooth/hci_ldisc.c b/drivers/bluetooth/hci_ldisc.c
index c84f985f348d..8950e07889fe 100644
--- a/drivers/bluetooth/hci_ldisc.c
+++ b/drivers/bluetooth/hci_ldisc.c
@@ -178,6 +178,7 @@ restart:
goto restart;
clear_bit(HCI_UART_SENDING, &hu->tx_state);
+ wake_up_bit(&hu->tx_state, HCI_UART_SENDING);
}
void hci_uart_init_work(struct work_struct *work)
@@ -213,6 +214,13 @@ int hci_uart_init_ready(struct hci_uart *hu)
return 0;
}
+int hci_uart_wait_until_sent(struct hci_uart *hu)
+{
+ return wait_on_bit_timeout(&hu->tx_state, HCI_UART_SENDING,
+ TASK_INTERRUPTIBLE,
+ msecs_to_jiffies(2000));
+}
+
/* ------- Interface to HCI layer ------ */
/* Reset device */
static int hci_uart_flush(struct hci_dev *hdev)
diff --git a/drivers/bluetooth/hci_ll.c b/drivers/bluetooth/hci_ll.c
index c04f5f9e1ed0..285706618f8a 100644
--- a/drivers/bluetooth/hci_ll.c
+++ b/drivers/bluetooth/hci_ll.c
@@ -128,6 +128,7 @@ static int ll_open(struct hci_uart *hu)
if (hu->serdev) {
struct ll_device *lldev = serdev_device_get_drvdata(hu->serdev);
+
if (!IS_ERR(lldev->ext_clk))
clk_prepare_enable(lldev->ext_clk);
}
@@ -162,6 +163,7 @@ static int ll_close(struct hci_uart *hu)
if (hu->serdev) {
struct ll_device *lldev = serdev_device_get_drvdata(hu->serdev);
+
gpiod_set_value_cansleep(lldev->enable_gpio, 0);
clk_disable_unprepare(lldev->ext_clk);
@@ -227,7 +229,8 @@ static void ll_device_want_to_wakeup(struct hci_uart *hu)
break;
default:
/* any other state is illegal */
- BT_ERR("received HCILL_WAKE_UP_IND in state %ld", ll->hcill_state);
+ BT_ERR("received HCILL_WAKE_UP_IND in state %ld",
+ ll->hcill_state);
break;
}
@@ -256,7 +259,8 @@ static void ll_device_want_to_sleep(struct hci_uart *hu)
/* sanity check */
if (ll->hcill_state != HCILL_AWAKE)
- BT_ERR("ERR: HCILL_GO_TO_SLEEP_IND in state %ld", ll->hcill_state);
+ BT_ERR("ERR: HCILL_GO_TO_SLEEP_IND in state %ld",
+ ll->hcill_state);
/* acknowledge device sleep */
if (send_hcill_cmd(HCILL_GO_TO_SLEEP_ACK, hu) < 0) {
@@ -289,7 +293,8 @@ static void ll_device_woke_up(struct hci_uart *hu)
/* sanity check */
if (ll->hcill_state != HCILL_ASLEEP_TO_AWAKE)
- BT_ERR("received HCILL_WAKE_UP_ACK in state %ld", ll->hcill_state);
+ BT_ERR("received HCILL_WAKE_UP_ACK in state %ld",
+ ll->hcill_state);
/* send pending packets and change state to HCILL_AWAKE */
__ll_do_awake(ll);
@@ -338,7 +343,8 @@ static int ll_enqueue(struct hci_uart *hu, struct sk_buff *skb)
skb_queue_tail(&ll->tx_wait_q, skb);
break;
default:
- BT_ERR("illegal hcill state: %ld (losing packet)", ll->hcill_state);
+ BT_ERR("illegal hcill state: %ld (losing packet)",
+ ll->hcill_state);
kfree_skb(skb);
break;
}
@@ -438,6 +444,7 @@ static int ll_recv(struct hci_uart *hu, const void *data, int count)
static struct sk_buff *ll_dequeue(struct hci_uart *hu)
{
struct ll_struct *ll = hu->priv;
+
return skb_dequeue(&ll->txq);
}
@@ -449,7 +456,8 @@ static int read_local_version(struct hci_dev *hdev)
struct sk_buff *skb;
struct hci_rp_read_local_version *ver;
- skb = __hci_cmd_sync(hdev, HCI_OP_READ_LOCAL_VERSION, 0, NULL, HCI_INIT_TIMEOUT);
+ skb = __hci_cmd_sync(hdev, HCI_OP_READ_LOCAL_VERSION, 0, NULL,
+ HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
bt_dev_err(hdev, "Reading TI version information failed (%ld)",
PTR_ERR(skb));
@@ -469,11 +477,38 @@ static int read_local_version(struct hci_dev *hdev)
version = le16_to_cpu(ver->lmp_subver);
out:
- if (err) bt_dev_err(hdev, "Failed to read TI version info: %d", err);
+ if (err)
+ bt_dev_err(hdev, "Failed to read TI version info: %d", err);
kfree_skb(skb);
return err ? err : version;
}
+static int send_command_from_firmware(struct ll_device *lldev,
+ struct hci_command *cmd)
+{
+ struct sk_buff *skb;
+
+ if (cmd->opcode == HCI_VS_UPDATE_UART_HCI_BAUDRATE) {
+ /* Ignore the remote change
+ * baud rate HCI VS command.
+ */
+ bt_dev_warn(lldev->hu.hdev,
+ "change remote baud rate command in firmware");
+ return 0;
+ }
+ if (cmd->prefix != 1)
+ bt_dev_dbg(lldev->hu.hdev, "command type %d", cmd->prefix);
+
+ skb = __hci_cmd_sync(lldev->hu.hdev, cmd->opcode, cmd->plen,
+ &cmd->speed, HCI_INIT_TIMEOUT);
+ if (IS_ERR(skb)) {
+ bt_dev_err(lldev->hu.hdev, "send command failed");
+ return PTR_ERR(skb);
+ }
+ kfree_skb(skb);
+ return 0;
+}
+
/**
* download_firmware -
* internal function which parses through the .bts firmware
@@ -486,7 +521,6 @@ static int download_firmware(struct ll_device *lldev)
unsigned char *ptr, *action_ptr;
unsigned char bts_scr_name[40]; /* 40 char long bts scr name? */
const struct firmware *fw;
- struct sk_buff *skb;
struct hci_command *cmd;
version = read_local_version(lldev->hu.hdev);
@@ -528,23 +562,9 @@ static int download_firmware(struct ll_device *lldev)
case ACTION_SEND_COMMAND: /* action send */
bt_dev_dbg(lldev->hu.hdev, "S");
cmd = (struct hci_command *)action_ptr;
- if (cmd->opcode == HCI_VS_UPDATE_UART_HCI_BAUDRATE) {
- /* ignore remote change
- * baud rate HCI VS command
- */
- bt_dev_warn(lldev->hu.hdev, "change remote baud rate command in firmware");
- break;
- }
- if (cmd->prefix != 1)
- bt_dev_dbg(lldev->hu.hdev, "command type %d", cmd->prefix);
-
- skb = __hci_cmd_sync(lldev->hu.hdev, cmd->opcode, cmd->plen, &cmd->speed, HCI_INIT_TIMEOUT);
- if (IS_ERR(skb)) {
- bt_dev_err(lldev->hu.hdev, "send command failed");
- err = PTR_ERR(skb);
+ err = send_command_from_firmware(lldev, cmd);
+ if (err)
goto out_rel_fw;
- }
- kfree_skb(skb);
break;
case ACTION_WAIT_EVENT: /* wait */
/* no need to wait as command was synchronous */
@@ -601,6 +621,13 @@ static int ll_setup(struct hci_uart *hu)
serdev_device_set_flow_control(serdev, true);
+ if (hu->oper_speed)
+ speed = hu->oper_speed;
+ else if (hu->proto->oper_speed)
+ speed = hu->proto->oper_speed;
+ else
+ speed = 0;
+
do {
/* Reset the Bluetooth device */
gpiod_set_value_cansleep(lldev->enable_gpio, 0);
@@ -612,6 +639,20 @@ static int ll_setup(struct hci_uart *hu)
return err;
}
+ if (speed) {
+ __le32 speed_le = cpu_to_le32(speed);
+ struct sk_buff *skb;
+
+ skb = __hci_cmd_sync(hu->hdev,
+ HCI_VS_UPDATE_UART_HCI_BAUDRATE,
+ sizeof(speed_le), &speed_le,
+ HCI_INIT_TIMEOUT);
+ if (!IS_ERR(skb)) {
+ kfree_skb(skb);
+ serdev_device_set_baudrate(serdev, speed);
+ }
+ }
+
err = download_firmware(lldev);
if (!err)
break;
@@ -636,25 +677,7 @@ static int ll_setup(struct hci_uart *hu)
}
/* Operational speed if any */
- if (hu->oper_speed)
- speed = hu->oper_speed;
- else if (hu->proto->oper_speed)
- speed = hu->proto->oper_speed;
- else
- speed = 0;
-
- if (speed) {
- __le32 speed_le = cpu_to_le32(speed);
- struct sk_buff *skb;
- skb = __hci_cmd_sync(hu->hdev, HCI_VS_UPDATE_UART_HCI_BAUDRATE,
- sizeof(speed_le), &speed_le,
- HCI_INIT_TIMEOUT);
- if (!IS_ERR(skb)) {
- kfree_skb(skb);
- serdev_device_set_baudrate(serdev, speed);
- }
- }
return 0;
}
@@ -676,7 +699,9 @@ static int hci_ti_probe(struct serdev_device *serdev)
serdev_device_set_drvdata(serdev, lldev);
lldev->serdev = hu->serdev = serdev;
- lldev->enable_gpio = devm_gpiod_get_optional(&serdev->dev, "enable", GPIOD_OUT_LOW);
+ lldev->enable_gpio = devm_gpiod_get_optional(&serdev->dev,
+ "enable",
+ GPIOD_OUT_LOW);
if (IS_ERR(lldev->enable_gpio))
return PTR_ERR(lldev->enable_gpio);
diff --git a/drivers/bluetooth/hci_mrvl.c b/drivers/bluetooth/hci_mrvl.c
index 50212ac629e3..f98e5cc343b2 100644
--- a/drivers/bluetooth/hci_mrvl.c
+++ b/drivers/bluetooth/hci_mrvl.c
@@ -13,6 +13,8 @@
#include <linux/firmware.h>
#include <linux/module.h>
#include <linux/tty.h>
+#include <linux/of.h>
+#include <linux/serdev.h>
#include <net/bluetooth/bluetooth.h>
#include <net/bluetooth/hci_core.h>
@@ -40,6 +42,10 @@ struct mrvl_data {
u8 id, rev;
};
+struct mrvl_serdev {
+ struct hci_uart hu;
+};
+
struct hci_mrvl_pkt {
__le16 lhs;
__le16 rhs;
@@ -49,6 +55,7 @@ struct hci_mrvl_pkt {
static int mrvl_open(struct hci_uart *hu)
{
struct mrvl_data *mrvl;
+ int ret;
BT_DBG("hu %p", hu);
@@ -62,7 +69,18 @@ static int mrvl_open(struct hci_uart *hu)
set_bit(STATE_CHIP_VER_PENDING, &mrvl->flags);
hu->priv = mrvl;
+
+ if (hu->serdev) {
+ ret = serdev_device_open(hu->serdev);
+ if (ret)
+ goto err;
+ }
+
return 0;
+err:
+ kfree(mrvl);
+
+ return ret;
}
static int mrvl_close(struct hci_uart *hu)
@@ -71,6 +89,9 @@ static int mrvl_close(struct hci_uart *hu)
BT_DBG("hu %p", hu);
+ if (hu->serdev)
+ serdev_device_close(hu->serdev);
+
skb_queue_purge(&mrvl->txq);
skb_queue_purge(&mrvl->rawq);
kfree_skb(mrvl->rx_skb);
@@ -339,7 +360,14 @@ static int mrvl_setup(struct hci_uart *hu)
return -EINVAL;
}
- hci_uart_set_baudrate(hu, 3000000);
+ /* Let the final ack go out before switching the baudrate */
+ hci_uart_wait_until_sent(hu);
+
+ if (hu->serdev)
+ serdev_device_set_baudrate(hu->serdev, 3000000);
+ else
+ hci_uart_set_baudrate(hu, 3000000);
+
hci_uart_set_flow_control(hu, false);
err = mrvl_load_firmware(hu->hdev, "mrvl/uart8897_bt.bin");
@@ -362,12 +390,54 @@ static const struct hci_uart_proto mrvl_proto = {
.dequeue = mrvl_dequeue,
};
+static int mrvl_serdev_probe(struct serdev_device *serdev)
+{
+ struct mrvl_serdev *mrvldev;
+
+ mrvldev = devm_kzalloc(&serdev->dev, sizeof(*mrvldev), GFP_KERNEL);
+ if (!mrvldev)
+ return -ENOMEM;
+
+ mrvldev->hu.serdev = serdev;
+ serdev_device_set_drvdata(serdev, mrvldev);
+
+ return hci_uart_register_device(&mrvldev->hu, &mrvl_proto);
+}
+
+static void mrvl_serdev_remove(struct serdev_device *serdev)
+{
+ struct mrvl_serdev *mrvldev = serdev_device_get_drvdata(serdev);
+
+ hci_uart_unregister_device(&mrvldev->hu);
+}
+
+#ifdef CONFIG_OF
+static const struct of_device_id mrvl_bluetooth_of_match[] = {
+ { .compatible = "mrvl,88w8897" },
+ { },
+};
+MODULE_DEVICE_TABLE(of, mrvl_bluetooth_of_match);
+#endif
+
+static struct serdev_device_driver mrvl_serdev_driver = {
+ .probe = mrvl_serdev_probe,
+ .remove = mrvl_serdev_remove,
+ .driver = {
+ .name = "hci_uart_mrvl",
+ .of_match_table = of_match_ptr(mrvl_bluetooth_of_match),
+ },
+};
+
int __init mrvl_init(void)
{
+ serdev_device_driver_register(&mrvl_serdev_driver);
+
return hci_uart_register_proto(&mrvl_proto);
}
int __exit mrvl_deinit(void)
{
+ serdev_device_driver_unregister(&mrvl_serdev_driver);
+
return hci_uart_unregister_proto(&mrvl_proto);
}
diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
index 9d273cdde563..9a5c9c1f9484 100644
--- a/drivers/bluetooth/hci_qca.c
+++ b/drivers/bluetooth/hci_qca.c
@@ -17,6 +17,7 @@
#include <linux/kernel.h>
#include <linux/clk.h>
+#include <linux/completion.h>
#include <linux/debugfs.h>
#include <linux/delay.h>
#include <linux/device.h>
@@ -53,6 +54,7 @@
enum qca_flags {
QCA_IBS_ENABLED,
+ QCA_DROP_VENDOR_EVENT,
};
/* HCI_IBS transmit side sleep protocol states */
@@ -97,6 +99,7 @@ struct qca_data {
struct work_struct ws_rx_vote_off;
struct work_struct ws_tx_vote_off;
unsigned long flags;
+ struct completion drop_ev_comp;
/* For debugging purpose */
u64 ibs_sent_wacks;
@@ -156,6 +159,7 @@ struct qca_serdev {
struct qca_power *bt_power;
u32 init_speed;
u32 oper_speed;
+ const char *firmware_name;
};
static int qca_power_setup(struct hci_uart *hu, bool on);
@@ -177,6 +181,17 @@ static enum qca_btsoc_type qca_soc_type(struct hci_uart *hu)
return soc_type;
}
+static const char *qca_get_firmware_name(struct hci_uart *hu)
+{
+ if (hu->serdev) {
+ struct qca_serdev *qsd = serdev_device_get_drvdata(hu->serdev);
+
+ return qsd->firmware_name;
+ } else {
+ return NULL;
+ }
+}
+
static void __serial_clock_on(struct tty_struct *tty)
{
/* TODO: Some chipset requires to enable UART clock on client
@@ -478,6 +493,7 @@ static int qca_open(struct hci_uart *hu)
INIT_WORK(&qca->ws_tx_vote_off, qca_wq_serial_tx_clock_vote_off);
qca->hu = hu;
+ init_completion(&qca->drop_ev_comp);
/* Assume we start with both sides asleep -- extra wakes OK */
qca->tx_ibs_state = HCI_IBS_TX_ASLEEP;
@@ -872,6 +888,35 @@ static int qca_recv_acl_data(struct hci_dev *hdev, struct sk_buff *skb)
return hci_recv_frame(hdev, skb);
}
+static int qca_recv_event(struct hci_dev *hdev, struct sk_buff *skb)
+{
+ struct hci_uart *hu = hci_get_drvdata(hdev);
+ struct qca_data *qca = hu->priv;
+
+ if (test_bit(QCA_DROP_VENDOR_EVENT, &qca->flags)) {
+ struct hci_event_hdr *hdr = (void *)skb->data;
+
+ /* For the WCN3990 the vendor command for a baudrate change
+ * isn't sent as a synchronous HCI command, because the
+ * controller sends the corresponding vendor event with the
+ * new baudrate. The event is received and properly decoded
+ * after changing the baudrate of the host port. It needs to
+ * be dropped, otherwise it can be misinterpreted as a
+ * response to a later firmware download command (also a
+ * vendor command).
+ */
+
+ if (hdr->evt == HCI_EV_VENDOR)
+ complete(&qca->drop_ev_comp);
+
+ kfree(skb);
+
+ return 0;
+ }
+
+ return hci_recv_frame(hdev, skb);
+}
+
#define QCA_IBS_SLEEP_IND_EVENT \
.type = HCI_IBS_SLEEP_IND, \
.hlen = 0, \
@@ -896,7 +941,7 @@ static int qca_recv_acl_data(struct hci_dev *hdev, struct sk_buff *skb)
static const struct h4_recv_pkt qca_recv_pkts[] = {
{ H4_RECV_ACL, .recv = qca_recv_acl_data },
{ H4_RECV_SCO, .recv = hci_recv_frame },
- { H4_RECV_EVENT, .recv = hci_recv_frame },
+ { H4_RECV_EVENT, .recv = qca_recv_event },
{ QCA_IBS_WAKE_IND_EVENT, .recv = qca_ibs_wake_ind },
{ QCA_IBS_WAKE_ACK_EVENT, .recv = qca_ibs_wake_ack },
{ QCA_IBS_SLEEP_IND_EVENT, .recv = qca_ibs_sleep_ind },
@@ -1091,6 +1136,7 @@ static int qca_check_speeds(struct hci_uart *hu)
static int qca_set_speed(struct hci_uart *hu, enum qca_speed_type speed_type)
{
unsigned int speed, qca_baudrate;
+ struct qca_data *qca = hu->priv;
int ret = 0;
if (speed_type == QCA_INIT_SPEED) {
@@ -1110,6 +1156,11 @@ static int qca_set_speed(struct hci_uart *hu, enum qca_speed_type speed_type)
if (qca_is_wcn399x(soc_type))
hci_uart_set_flow_control(hu, true);
+ if (soc_type == QCA_WCN3990) {
+ reinit_completion(&qca->drop_ev_comp);
+ set_bit(QCA_DROP_VENDOR_EVENT, &qca->flags);
+ }
+
qca_baudrate = qca_get_baudrate_value(speed);
bt_dev_dbg(hu->hdev, "Set UART speed to %d", speed);
ret = qca_set_baudrate(hu->hdev, qca_baudrate);
@@ -1121,6 +1172,20 @@ static int qca_set_speed(struct hci_uart *hu, enum qca_speed_type speed_type)
error:
if (qca_is_wcn399x(soc_type))
hci_uart_set_flow_control(hu, false);
+
+ if (soc_type == QCA_WCN3990) {
+ /* Wait for the controller to send the vendor event
+ * for the baudrate change command.
+ */
+ if (!wait_for_completion_timeout(&qca->drop_ev_comp,
+ msecs_to_jiffies(100))) {
+ bt_dev_err(hu->hdev,
+ "Failed to change controller baudrate\n");
+ ret = -ETIMEDOUT;
+ }
+
+ clear_bit(QCA_DROP_VENDOR_EVENT, &qca->flags);
+ }
}
return ret;
@@ -1182,6 +1247,7 @@ static int qca_setup(struct hci_uart *hu)
struct qca_data *qca = hu->priv;
unsigned int speed, qca_baudrate = QCA_BAUDRATE_115200;
enum qca_btsoc_type soc_type = qca_soc_type(hu);
+ const char *firmware_name = qca_get_firmware_name(hu);
int ret;
int soc_ver = 0;
@@ -1232,7 +1298,8 @@ static int qca_setup(struct hci_uart *hu)
bt_dev_info(hdev, "QCA controller version 0x%08x", soc_ver);
/* Setup patch / NVM configurations */
- ret = qca_uart_setup(hdev, qca_baudrate, soc_type, soc_ver);
+ ret = qca_uart_setup(hdev, qca_baudrate, soc_type, soc_ver,
+ firmware_name);
if (!ret) {
set_bit(QCA_IBS_ENABLED, &qca->flags);
qca_debugfs_init(hdev);
@@ -1426,6 +1493,8 @@ static int qca_serdev_probe(struct serdev_device *serdev)
qcadev->serdev_hu.serdev = serdev;
data = of_device_get_match_data(&serdev->dev);
serdev_device_set_drvdata(serdev, qcadev);
+ device_property_read_string(&serdev->dev, "firmware-name",
+ &qcadev->firmware_name);
if (data && qca_is_wcn399x(data->soc_type)) {
qcadev->btsoc_type = data->soc_type;
qcadev->bt_power = devm_kzalloc(&serdev->dev,
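
For reference, a minimal sketch of the completion-based handshake added above: the receive path signals the completion when the expected vendor event arrives (and drops the frame), while qca_set_speed() blocks with a timeout. All demo_* names are illustrative and not part of the patch.

#include <linux/completion.h>
#include <linux/errno.h>
#include <linux/jiffies.h>

struct demo_sync {
	struct completion vendor_ev;	/* completed by the RX path */
};

static void demo_sync_init(struct demo_sync *s)
{
	init_completion(&s->vendor_ev);
}

/* RX path: the expected vendor event arrived, wake the waiter. */
static void demo_rx_vendor_event(struct demo_sync *s)
{
	complete(&s->vendor_ev);
}

/* TX path: re-arm, send the command, then wait up to 100 ms. */
static int demo_wait_vendor_event(struct demo_sync *s)
{
	reinit_completion(&s->vendor_ev);
	/* ... send the vendor command and switch the host baudrate ... */
	if (!wait_for_completion_timeout(&s->vendor_ev,
					 msecs_to_jiffies(100)))
		return -ETIMEDOUT;
	return 0;
}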
diff --git a/drivers/bluetooth/hci_uart.h b/drivers/bluetooth/hci_uart.h
index d8cf005e3c5d..f11af3912ce6 100644
--- a/drivers/bluetooth/hci_uart.h
+++ b/drivers/bluetooth/hci_uart.h
@@ -100,6 +100,7 @@ int hci_uart_register_device(struct hci_uart *hu, const struct hci_uart_proto *p
void hci_uart_unregister_device(struct hci_uart *hu);
int hci_uart_tx_wakeup(struct hci_uart *hu);
+int hci_uart_wait_until_sent(struct hci_uart *hu);
int hci_uart_init_ready(struct hci_uart *hu);
void hci_uart_init_work(struct work_struct *work);
void hci_uart_set_baudrate(struct hci_uart *hu, unsigned int speed);
diff --git a/drivers/i2c/i2c-core-acpi.c b/drivers/i2c/i2c-core-acpi.c
index f86065e16772..1969bfdfe6a4 100644
--- a/drivers/i2c/i2c-core-acpi.c
+++ b/drivers/i2c/i2c-core-acpi.c
@@ -335,7 +335,7 @@ static int i2c_acpi_find_match_device(struct device *dev, void *data)
return ACPI_COMPANION(dev) == data;
}
-static struct i2c_adapter *i2c_acpi_find_adapter_by_handle(acpi_handle handle)
+struct i2c_adapter *i2c_acpi_find_adapter_by_handle(acpi_handle handle)
{
struct device *dev;
@@ -343,6 +343,7 @@ static struct i2c_adapter *i2c_acpi_find_adapter_by_handle(acpi_handle handle)
i2c_acpi_find_match_adapter);
return dev ? i2c_verify_adapter(dev) : NULL;
}
+EXPORT_SYMBOL_GPL(i2c_acpi_find_adapter_by_handle);
static struct i2c_client *i2c_acpi_find_client_by_adev(struct acpi_device *adev)
{
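
With the symbol exported, other modules can resolve an ACPI handle to its I2C adapter. A minimal, hypothetical caller (the demo_* name and error handling are illustrative only):

#include <linux/acpi.h>
#include <linux/i2c.h>

/* Hypothetical helper: look up the i2c_adapter behind an ACPI handle. */
static struct i2c_adapter *demo_adapter_from_handle(acpi_handle handle)
{
	struct i2c_adapter *adap;

	adap = i2c_acpi_find_adapter_by_handle(handle);
	if (!adap)
		return NULL;	/* no adapter bound to this handle */

	/* The lookup takes a device reference; release it with
	 * put_device(&adap->dev) once the adapter is no longer needed.
	 */
	return adap;
}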
diff --git a/drivers/infiniband/core/roce_gid_mgmt.c b/drivers/infiniband/core/roce_gid_mgmt.c
index 558de0b9895c..2860def84f4d 100644
--- a/drivers/infiniband/core/roce_gid_mgmt.c
+++ b/drivers/infiniband/core/roce_gid_mgmt.c
@@ -330,6 +330,7 @@ static void bond_delete_netdev_default_gids(struct ib_device *ib_dev,
static void enum_netdev_ipv4_ips(struct ib_device *ib_dev,
u8 port, struct net_device *ndev)
{
+ const struct in_ifaddr *ifa;
struct in_device *in_dev;
struct sin_list {
struct list_head list;
@@ -349,7 +350,7 @@ static void enum_netdev_ipv4_ips(struct ib_device *ib_dev,
return;
}
- for_ifa(in_dev) {
+ in_dev_for_each_ifa_rcu(ifa, in_dev) {
struct sin_list *entry = kzalloc(sizeof(*entry), GFP_ATOMIC);
if (!entry)
@@ -359,7 +360,7 @@ static void enum_netdev_ipv4_ips(struct ib_device *ib_dev,
entry->ip.sin_addr.s_addr = ifa->ifa_address;
list_add_tail(&entry->list, &sin_list);
}
- endfor_ifa(in_dev);
+
rcu_read_unlock();
list_for_each_entry_safe(sin_iter, sin_temp, &sin_list, list) {
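
The for_ifa()/endfor_ifa() pairs are replaced throughout this series by the new in_dev_for_each_ifa_rcu()/_rtnl() iterators. A minimal sketch of the RCU variant (example_walk_ipv4 is an illustrative name, not from the patch):

#include <linux/inetdevice.h>
#include <linux/netdevice.h>
#include <linux/rcupdate.h>

/* Walk every IPv4 address configured on a net_device under RCU. */
static void example_walk_ipv4(struct net_device *ndev)
{
	struct in_device *in_dev;
	const struct in_ifaddr *ifa;

	rcu_read_lock();
	in_dev = __in_dev_get_rcu(ndev);
	if (in_dev) {
		in_dev_for_each_ifa_rcu(ifa, in_dev) {
			pr_debug("%s: addr %pI4\n", ndev->name,
				 &ifa->ifa_address);
		}
	}
	rcu_read_unlock();
}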
diff --git a/drivers/infiniband/hw/cxgb4/cm.c b/drivers/infiniband/hw/cxgb4/cm.c
index 0f3b1193d5f8..09fcfc9e052d 100644
--- a/drivers/infiniband/hw/cxgb4/cm.c
+++ b/drivers/infiniband/hw/cxgb4/cm.c
@@ -3230,17 +3230,22 @@ static int pick_local_ipaddrs(struct c4iw_dev *dev, struct iw_cm_id *cm_id)
int found = 0;
struct sockaddr_in *laddr = (struct sockaddr_in *)&cm_id->m_local_addr;
struct sockaddr_in *raddr = (struct sockaddr_in *)&cm_id->m_remote_addr;
+ const struct in_ifaddr *ifa;
ind = in_dev_get(dev->rdev.lldi.ports[0]);
if (!ind)
return -EADDRNOTAVAIL;
- for_primary_ifa(ind) {
+ rcu_read_lock();
+ in_dev_for_each_ifa_rcu(ifa, ind) {
+ if (ifa->ifa_flags & IFA_F_SECONDARY)
+ continue;
laddr->sin_addr.s_addr = ifa->ifa_address;
raddr->sin_addr.s_addr = ifa->ifa_address;
found = 1;
break;
}
- endfor_ifa(ind);
+ rcu_read_unlock();
+
in_dev_put(ind);
return found ? 0 : -EADDRNOTAVAIL;
}
diff --git a/drivers/infiniband/hw/i40iw/i40iw_cm.c b/drivers/infiniband/hw/i40iw/i40iw_cm.c
index 8233f5a4e623..700a5d06b60c 100644
--- a/drivers/infiniband/hw/i40iw/i40iw_cm.c
+++ b/drivers/infiniband/hw/i40iw/i40iw_cm.c
@@ -1773,8 +1773,11 @@ static enum i40iw_status_code i40iw_add_mqh_4(
if ((((rdma_vlan_dev_vlan_id(dev) < I40IW_NO_VLAN) &&
(rdma_vlan_dev_real_dev(dev) == iwdev->netdev)) ||
(dev == iwdev->netdev)) && (dev->flags & IFF_UP)) {
+ const struct in_ifaddr *ifa;
+
idev = in_dev_get(dev);
- for_ifa(idev) {
+
+ in_dev_for_each_ifa_rtnl(ifa, idev) {
i40iw_debug(&iwdev->sc_dev,
I40IW_DEBUG_CM,
"Allocating child CM Listener forIP=%pI4, vlan_id=%d, MAC=%pM\n",
@@ -1819,7 +1822,7 @@ static enum i40iw_status_code i40iw_add_mqh_4(
cm_parent_listen_node->cm_core->stats_listen_nodes_created--;
}
}
- endfor_ifa(idev);
+
in_dev_put(idev);
}
}
diff --git a/drivers/infiniband/hw/i40iw/i40iw_main.c b/drivers/infiniband/hw/i40iw/i40iw_main.c
index 10932baee279..d44cf33df81a 100644
--- a/drivers/infiniband/hw/i40iw/i40iw_main.c
+++ b/drivers/infiniband/hw/i40iw/i40iw_main.c
@@ -1222,8 +1222,10 @@ static void i40iw_add_ipv4_addr(struct i40iw_device *iwdev)
if ((((rdma_vlan_dev_vlan_id(dev) < 0xFFFF) &&
(rdma_vlan_dev_real_dev(dev) == iwdev->netdev)) ||
(dev == iwdev->netdev)) && (dev->flags & IFF_UP)) {
+ const struct in_ifaddr *ifa;
+
idev = in_dev_get(dev);
- for_ifa(idev) {
+ in_dev_for_each_ifa_rtnl(ifa, idev) {
i40iw_debug(&iwdev->sc_dev, I40IW_DEBUG_CM,
"IP=%pI4, vlan_id=%d, MAC=%pM\n", &ifa->ifa_address,
rdma_vlan_dev_vlan_id(dev), dev->dev_addr);
@@ -1235,7 +1237,7 @@ static void i40iw_add_ipv4_addr(struct i40iw_device *iwdev)
true,
I40IW_ARP_ADD);
}
- endfor_ifa(idev);
+
in_dev_put(idev);
}
}
diff --git a/drivers/infiniband/hw/i40iw/i40iw_utils.c b/drivers/infiniband/hw/i40iw/i40iw_utils.c
index 337410f40860..016524683e17 100644
--- a/drivers/infiniband/hw/i40iw/i40iw_utils.c
+++ b/drivers/infiniband/hw/i40iw/i40iw_utils.c
@@ -174,10 +174,14 @@ int i40iw_inetaddr_event(struct notifier_block *notifier,
rcu_read_lock();
in = __in_dev_get_rcu(upper_dev);
- if (!in->ifa_list)
- local_ipaddr = 0;
- else
- local_ipaddr = ntohl(in->ifa_list->ifa_address);
+ local_ipaddr = 0;
+ if (in) {
+ struct in_ifaddr *ifa;
+
+ ifa = rcu_dereference(in->ifa_list);
+ if (ifa)
+ local_ipaddr = ntohl(ifa->ifa_address);
+ }
rcu_read_unlock();
} else {
diff --git a/drivers/infiniband/hw/mlx5/cq.c b/drivers/infiniband/hw/mlx5/cq.c
index 2e2e65f00257..4efbbd2fce0c 100644
--- a/drivers/infiniband/hw/mlx5/cq.c
+++ b/drivers/infiniband/hw/mlx5/cq.c
@@ -37,7 +37,7 @@
#include "mlx5_ib.h"
#include "srq.h"
-static void mlx5_ib_cq_comp(struct mlx5_core_cq *cq)
+static void mlx5_ib_cq_comp(struct mlx5_core_cq *cq, struct mlx5_eqe *eqe)
{
struct ib_cq *ibcq = &to_mibcq(cq)->ibcq;
@@ -522,9 +522,9 @@ repoll:
case MLX5_CQE_SIG_ERR:
sig_err_cqe = (struct mlx5_sig_err_cqe *)cqe64;
- read_lock(&dev->mdev->priv.mkey_table.lock);
- mmkey = __mlx5_mr_lookup(dev->mdev,
- mlx5_base_mkey(be32_to_cpu(sig_err_cqe->mkey)));
+ xa_lock(&dev->mdev->priv.mkey_table);
+ mmkey = xa_load(&dev->mdev->priv.mkey_table,
+ mlx5_base_mkey(be32_to_cpu(sig_err_cqe->mkey)));
mr = to_mibmr(mmkey);
get_sig_err_item(sig_err_cqe, &mr->sig->err_item);
mr->sig->sig_err_exists = true;
@@ -537,7 +537,7 @@ repoll:
mr->sig->err_item.expected,
mr->sig->err_item.actual);
- read_unlock(&dev->mdev->priv.mkey_table.lock);
+ xa_unlock(&dev->mdev->priv.mkey_table);
goto repoll;
}
@@ -891,6 +891,7 @@ struct ib_cq *mlx5_ib_create_cq(struct ib_device *ibdev,
int entries = attr->cqe;
int vector = attr->comp_vector;
struct mlx5_ib_dev *dev = to_mdev(ibdev);
+ u32 out[MLX5_ST_SZ_DW(create_cq_out)];
struct mlx5_ib_cq *cq;
int uninitialized_var(index);
int uninitialized_var(inlen);
@@ -958,7 +959,7 @@ struct ib_cq *mlx5_ib_create_cq(struct ib_device *ibdev,
if (cq->create_flags & IB_UVERBS_CQ_FLAGS_IGNORE_OVERRUN)
MLX5_SET(cqc, cqc, oi, 1);
- err = mlx5_core_create_cq(dev->mdev, &cq->mcq, cqb, inlen);
+ err = mlx5_core_create_cq(dev->mdev, &cq->mcq, cqb, inlen, out, sizeof(out));
if (err)
goto err_cqb;
diff --git a/drivers/infiniband/hw/mlx5/devx.c b/drivers/infiniband/hw/mlx5/devx.c
index 80b42d069328..931f587dfb8f 100644
--- a/drivers/infiniband/hw/mlx5/devx.c
+++ b/drivers/infiniband/hw/mlx5/devx.c
@@ -1043,13 +1043,10 @@ static int devx_handle_mkey_indirect(struct devx_obj *obj,
struct mlx5_ib_dev *dev,
void *in, void *out)
{
- struct mlx5_mkey_table *table = &dev->mdev->priv.mkey_table;
struct mlx5_ib_devx_mr *devx_mr = &obj->devx_mr;
- unsigned long flags;
struct mlx5_core_mkey *mkey;
void *mkc;
u8 key;
- int err;
mkey = &devx_mr->mmkey;
mkc = MLX5_ADDR_OF(create_mkey_in, in, memory_key_mkey_entry);
@@ -1062,11 +1059,8 @@ static int devx_handle_mkey_indirect(struct devx_obj *obj,
mkey->pd = MLX5_GET(mkc, mkc, pd);
devx_mr->ndescs = MLX5_GET(mkc, mkc, translations_octword_size);
- write_lock_irqsave(&table->lock, flags);
- err = radix_tree_insert(&table->tree, mlx5_base_mkey(mkey->key),
- mkey);
- write_unlock_irqrestore(&table->lock, flags);
- return err;
+ return xa_err(xa_store(&dev->mdev->priv.mkey_table,
+ mlx5_base_mkey(mkey->key), mkey, GFP_KERNEL));
}
static int devx_handle_mkey_create(struct mlx5_ib_dev *dev,
@@ -1117,12 +1111,8 @@ static void devx_free_indirect_mkey(struct rcu_head *rcu)
*/
static void devx_cleanup_mkey(struct devx_obj *obj)
{
- struct mlx5_mkey_table *table = &obj->mdev->priv.mkey_table;
- unsigned long flags;
-
- write_lock_irqsave(&table->lock, flags);
- radix_tree_delete(&table->tree, mlx5_base_mkey(obj->devx_mr.mmkey.key));
- write_unlock_irqrestore(&table->lock, flags);
+ xa_erase(&obj->mdev->priv.mkey_table,
+ mlx5_base_mkey(obj->devx_mr.mmkey.key));
}
static int devx_obj_cleanup(struct ib_uobject *uobject,
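
The mkey radix tree plus rwlock is converted to an XArray, whose xa_store()/xa_erase() do their own internal locking. A minimal sketch of the resulting insert/remove pattern (demo_* names are illustrative):

#include <linux/xarray.h>

static int demo_mkey_insert(struct xarray *mkeys, unsigned long base_key,
			    void *mkey)
{
	/* xa_store() returns the previous entry or an xa_err()-encoded
	 * error; xa_err() extracts the errno (0 on success).
	 */
	return xa_err(xa_store(mkeys, base_key, mkey, GFP_KERNEL));
}

static void demo_mkey_remove(struct xarray *mkeys, unsigned long base_key)
{
	xa_erase(mkeys, base_key);
}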
diff --git a/drivers/infiniband/hw/mlx5/flow.c b/drivers/infiniband/hw/mlx5/flow.c
index 1fc302d41a53..b8841355fcd5 100644
--- a/drivers/infiniband/hw/mlx5/flow.c
+++ b/drivers/infiniband/hw/mlx5/flow.c
@@ -65,11 +65,12 @@ static const struct uverbs_attr_spec mlx5_ib_flow_type[] = {
static int UVERBS_HANDLER(MLX5_IB_METHOD_CREATE_FLOW)(
struct uverbs_attr_bundle *attrs)
{
- struct mlx5_flow_act flow_act = {.flow_tag = MLX5_FS_DEFAULT_FLOW_TAG};
+ struct mlx5_flow_context flow_context = {.flow_tag = MLX5_FS_DEFAULT_FLOW_TAG};
struct mlx5_ib_flow_handler *flow_handler;
struct mlx5_ib_flow_matcher *fs_matcher;
struct ib_uobject **arr_flow_actions;
struct ib_uflow_resources *uflow_res;
+ struct mlx5_flow_act flow_act = {};
void *devx_obj;
int dest_id, dest_type;
void *cmd_in;
@@ -172,17 +173,19 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_CREATE_FLOW)(
arr_flow_actions[i]->object);
}
- ret = uverbs_copy_from(&flow_act.flow_tag, attrs,
+ ret = uverbs_copy_from(&flow_context.flow_tag, attrs,
MLX5_IB_ATTR_CREATE_FLOW_TAG);
if (!ret) {
- if (flow_act.flow_tag >= BIT(24)) {
+ if (flow_context.flow_tag >= BIT(24)) {
ret = -EINVAL;
goto err_out;
}
- flow_act.flags |= FLOW_ACT_HAS_TAG;
+ flow_context.flags |= FLOW_CONTEXT_HAS_TAG;
}
- flow_handler = mlx5_ib_raw_fs_rule_add(dev, fs_matcher, &flow_act,
+ flow_handler = mlx5_ib_raw_fs_rule_add(dev, fs_matcher,
+ &flow_context,
+ &flow_act,
counter_id,
cmd_in, inlen,
dest_id, dest_type);
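
The flow tag moves from struct mlx5_flow_act into the new mlx5_flow_context embedded in mlx5_flow_spec. A minimal sketch of setting a tag under the new layout (demo_set_flow_tag is an illustrative name):

#include <linux/mlx5/fs.h>

static void demo_set_flow_tag(struct mlx5_flow_spec *spec, u32 tag)
{
	/* Tags are limited to 24 bits, as checked in the handler above. */
	spec->flow_context.flow_tag = tag;
	spec->flow_context.flags |= FLOW_CONTEXT_HAS_TAG;
}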
diff --git a/drivers/infiniband/hw/mlx5/ib_rep.c b/drivers/infiniband/hw/mlx5/ib_rep.c
index 269b24a3baa1..74ce9249e75a 100644
--- a/drivers/infiniband/hw/mlx5/ib_rep.c
+++ b/drivers/infiniband/hw/mlx5/ib_rep.c
@@ -14,9 +14,10 @@ mlx5_ib_set_vport_rep(struct mlx5_core_dev *dev, struct mlx5_eswitch_rep *rep)
int vport_index;
ibdev = mlx5_ib_get_uplink_ibdev(dev->priv.eswitch);
- vport_index = ibdev->free_port++;
+ vport_index = rep->vport_index;
ibdev->port[vport_index].rep = rep;
+ rep->rep_data[REP_IB].priv = ibdev;
write_lock(&ibdev->port[vport_index].roce.netdev_lock);
ibdev->port[vport_index].roce.netdev =
mlx5_ib_get_rep_netdev(dev->priv.eswitch, rep->vport);
@@ -28,7 +29,7 @@ mlx5_ib_set_vport_rep(struct mlx5_core_dev *dev, struct mlx5_eswitch_rep *rep)
static int
mlx5_ib_vport_rep_load(struct mlx5_core_dev *dev, struct mlx5_eswitch_rep *rep)
{
- int num_ports = MLX5_TOTAL_VPORTS(dev);
+ int num_ports = mlx5_eswitch_get_total_vports(dev);
const struct mlx5_ib_profile *profile;
struct mlx5_ib_dev *ibdev;
int vport_index;
@@ -50,7 +51,7 @@ mlx5_ib_vport_rep_load(struct mlx5_core_dev *dev, struct mlx5_eswitch_rep *rep)
}
ibdev->is_rep = true;
- vport_index = ibdev->free_port++;
+ vport_index = rep->vport_index;
ibdev->port[vport_index].rep = rep;
ibdev->port[vport_index].roce.netdev =
mlx5_ib_get_rep_netdev(dev->priv.eswitch, rep->vport);
@@ -60,7 +61,7 @@ mlx5_ib_vport_rep_load(struct mlx5_core_dev *dev, struct mlx5_eswitch_rep *rep)
if (!__mlx5_ib_add(ibdev, profile))
return -EINVAL;
- rep->rep_if[REP_IB].priv = ibdev;
+ rep->rep_data[REP_IB].priv = ibdev;
return 0;
}
@@ -68,15 +69,18 @@ mlx5_ib_vport_rep_load(struct mlx5_core_dev *dev, struct mlx5_eswitch_rep *rep)
static void
mlx5_ib_vport_rep_unload(struct mlx5_eswitch_rep *rep)
{
- struct mlx5_ib_dev *dev;
+ struct mlx5_ib_dev *dev = mlx5_ib_rep_to_dev(rep);
+ struct mlx5_ib_port *port;
- if (!rep->rep_if[REP_IB].priv ||
- rep->vport != MLX5_VPORT_UPLINK)
- return;
+ port = &dev->port[rep->vport_index];
+ write_lock(&port->roce.netdev_lock);
+ port->roce.netdev = NULL;
+ write_unlock(&port->roce.netdev_lock);
+ rep->rep_data[REP_IB].priv = NULL;
+ port->rep = NULL;
- dev = mlx5_ib_rep_to_dev(rep);
- __mlx5_ib_remove(dev, dev->profile, MLX5_IB_STAGE_MAX);
- rep->rep_if[REP_IB].priv = NULL;
+ if (rep->vport == MLX5_VPORT_UPLINK)
+ __mlx5_ib_remove(dev, dev->profile, MLX5_IB_STAGE_MAX);
}
static void *mlx5_ib_vport_get_proto_dev(struct mlx5_eswitch_rep *rep)
@@ -84,16 +88,17 @@ static void *mlx5_ib_vport_get_proto_dev(struct mlx5_eswitch_rep *rep)
return mlx5_ib_rep_to_dev(rep);
}
+static const struct mlx5_eswitch_rep_ops rep_ops = {
+ .load = mlx5_ib_vport_rep_load,
+ .unload = mlx5_ib_vport_rep_unload,
+ .get_proto_dev = mlx5_ib_vport_get_proto_dev,
+};
+
void mlx5_ib_register_vport_reps(struct mlx5_core_dev *mdev)
{
struct mlx5_eswitch *esw = mdev->priv.eswitch;
- struct mlx5_eswitch_rep_if rep_if = {};
-
- rep_if.load = mlx5_ib_vport_rep_load;
- rep_if.unload = mlx5_ib_vport_rep_unload;
- rep_if.get_proto_dev = mlx5_ib_vport_get_proto_dev;
- mlx5_eswitch_register_vport_reps(esw, &rep_if, REP_IB);
+ mlx5_eswitch_register_vport_reps(esw, &rep_ops, REP_IB);
}
void mlx5_ib_unregister_vport_reps(struct mlx5_core_dev *mdev)
diff --git a/drivers/infiniband/hw/mlx5/ib_rep.h b/drivers/infiniband/hw/mlx5/ib_rep.h
index 8336e0517a5c..de43b423bafc 100644
--- a/drivers/infiniband/hw/mlx5/ib_rep.h
+++ b/drivers/infiniband/hw/mlx5/ib_rep.h
@@ -28,7 +28,7 @@ struct net_device *mlx5_ib_get_rep_netdev(struct mlx5_eswitch *esw,
#else /* CONFIG_MLX5_ESWITCH */
static inline u8 mlx5_ib_eswitch_mode(struct mlx5_eswitch *esw)
{
- return SRIOV_NONE;
+ return MLX5_ESWITCH_NONE;
}
static inline
@@ -72,6 +72,6 @@ struct net_device *mlx5_ib_get_rep_netdev(struct mlx5_eswitch *esw,
static inline
struct mlx5_ib_dev *mlx5_ib_rep_to_dev(struct mlx5_eswitch_rep *rep)
{
- return (struct mlx5_ib_dev *)rep->rep_if[REP_IB].priv;
+ return rep->rep_data[REP_IB].priv;
}
#endif /* __MLX5_IB_REP_H__ */
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index 340290b883fe..ba312bf59c7a 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -2666,11 +2666,15 @@ int parse_flow_flow_action(struct mlx5_ib_flow_action *maction,
}
}
-static int parse_flow_attr(struct mlx5_core_dev *mdev, u32 *match_c,
- u32 *match_v, const union ib_flow_spec *ib_spec,
+static int parse_flow_attr(struct mlx5_core_dev *mdev,
+ struct mlx5_flow_spec *spec,
+ const union ib_flow_spec *ib_spec,
const struct ib_flow_attr *flow_attr,
struct mlx5_flow_act *action, u32 prev_type)
{
+ struct mlx5_flow_context *flow_context = &spec->flow_context;
+ u32 *match_c = spec->match_criteria;
+ u32 *match_v = spec->match_value;
void *misc_params_c = MLX5_ADDR_OF(fte_match_param, match_c,
misc_parameters);
void *misc_params_v = MLX5_ADDR_OF(fte_match_param, match_v,
@@ -2989,8 +2993,8 @@ static int parse_flow_attr(struct mlx5_core_dev *mdev, u32 *match_c,
if (ib_spec->flow_tag.tag_id >= BIT(24))
return -EINVAL;
- action->flow_tag = ib_spec->flow_tag.tag_id;
- action->flags |= FLOW_ACT_HAS_TAG;
+ flow_context->flow_tag = ib_spec->flow_tag.tag_id;
+ flow_context->flags |= FLOW_CONTEXT_HAS_TAG;
break;
case IB_FLOW_SPEC_ACTION_DROP:
if (FIELDS_NOT_SUPPORTED(ib_spec->drop,
@@ -3084,7 +3088,8 @@ is_valid_esp_aes_gcm(struct mlx5_core_dev *mdev,
return VALID_SPEC_NA;
return is_crypto && is_ipsec &&
- (!egress || (!is_drop && !(flow_act->flags & FLOW_ACT_HAS_TAG))) ?
+ (!egress || (!is_drop &&
+ !(spec->flow_context.flags & FLOW_CONTEXT_HAS_TAG))) ?
VALID_SPEC_VALID : VALID_SPEC_INVALID;
}
@@ -3464,6 +3469,37 @@ free:
return ret;
}
+static void mlx5_ib_set_rule_source_port(struct mlx5_ib_dev *dev,
+ struct mlx5_flow_spec *spec,
+ struct mlx5_eswitch_rep *rep)
+{
+ struct mlx5_eswitch *esw = dev->mdev->priv.eswitch;
+ void *misc;
+
+ if (mlx5_eswitch_vport_match_metadata_enabled(esw)) {
+ misc = MLX5_ADDR_OF(fte_match_param, spec->match_value,
+ misc_parameters_2);
+
+ MLX5_SET(fte_match_set_misc2, misc, metadata_reg_c_0,
+ mlx5_eswitch_get_vport_metadata_for_match(esw,
+ rep->vport));
+ misc = MLX5_ADDR_OF(fte_match_param, spec->match_criteria,
+ misc_parameters_2);
+
+ MLX5_SET_TO_ONES(fte_match_set_misc2, misc, metadata_reg_c_0);
+ } else {
+ misc = MLX5_ADDR_OF(fte_match_param, spec->match_value,
+ misc_parameters);
+
+ MLX5_SET(fte_match_set_misc, misc, source_port, rep->vport);
+
+ misc = MLX5_ADDR_OF(fte_match_param, spec->match_criteria,
+ misc_parameters);
+
+ MLX5_SET_TO_ONES(fte_match_set_misc, misc, source_port);
+ }
+}
+
static struct mlx5_ib_flow_handler *_create_flow_rule(struct mlx5_ib_dev *dev,
struct mlx5_ib_flow_prio *ft_prio,
const struct ib_flow_attr *flow_attr,
@@ -3473,7 +3509,7 @@ static struct mlx5_ib_flow_handler *_create_flow_rule(struct mlx5_ib_dev *dev,
{
struct mlx5_flow_table *ft = ft_prio->flow_table;
struct mlx5_ib_flow_handler *handler;
- struct mlx5_flow_act flow_act = {.flow_tag = MLX5_FS_DEFAULT_FLOW_TAG};
+ struct mlx5_flow_act flow_act = {};
struct mlx5_flow_spec *spec;
struct mlx5_flow_destination dest_arr[2] = {};
struct mlx5_flow_destination *rule_dst = dest_arr;
@@ -3504,8 +3540,7 @@ static struct mlx5_ib_flow_handler *_create_flow_rule(struct mlx5_ib_dev *dev,
}
for (spec_index = 0; spec_index < flow_attr->num_of_specs; spec_index++) {
- err = parse_flow_attr(dev->mdev, spec->match_criteria,
- spec->match_value,
+ err = parse_flow_attr(dev->mdev, spec,
ib_flow, flow_attr, &flow_act,
prev_type);
if (err < 0)
@@ -3519,19 +3554,15 @@ static struct mlx5_ib_flow_handler *_create_flow_rule(struct mlx5_ib_dev *dev,
set_underlay_qp(dev, spec, underlay_qpn);
if (dev->is_rep) {
- void *misc;
+ struct mlx5_eswitch_rep *rep;
- if (!dev->port[flow_attr->port - 1].rep) {
+ rep = dev->port[flow_attr->port - 1].rep;
+ if (!rep) {
err = -EINVAL;
goto free;
}
- misc = MLX5_ADDR_OF(fte_match_param, spec->match_value,
- misc_parameters);
- MLX5_SET(fte_match_set_misc, misc, source_port,
- dev->port[flow_attr->port - 1].rep->vport);
- misc = MLX5_ADDR_OF(fte_match_param, spec->match_criteria,
- misc_parameters);
- MLX5_SET_TO_ONES(fte_match_set_misc, misc, source_port);
+
+ mlx5_ib_set_rule_source_port(dev, spec, rep);
}
spec->match_criteria_enable = get_match_criteria_enable(spec->match_criteria);
@@ -3572,11 +3603,11 @@ static struct mlx5_ib_flow_handler *_create_flow_rule(struct mlx5_ib_dev *dev,
MLX5_FLOW_CONTEXT_ACTION_FWD_NEXT_PRIO;
}
- if ((flow_act.flags & FLOW_ACT_HAS_TAG) &&
+ if ((spec->flow_context.flags & FLOW_CONTEXT_HAS_TAG) &&
(flow_attr->type == IB_FLOW_ATTR_ALL_DEFAULT ||
flow_attr->type == IB_FLOW_ATTR_MC_DEFAULT)) {
mlx5_ib_warn(dev, "Flow tag %u and attribute type %x isn't allowed in leftovers\n",
- flow_act.flow_tag, flow_attr->type);
+ spec->flow_context.flow_tag, flow_attr->type);
err = -EINVAL;
goto free;
}
@@ -3947,6 +3978,7 @@ _create_raw_flow_rule(struct mlx5_ib_dev *dev,
struct mlx5_ib_flow_prio *ft_prio,
struct mlx5_flow_destination *dst,
struct mlx5_ib_flow_matcher *fs_matcher,
+ struct mlx5_flow_context *flow_context,
struct mlx5_flow_act *flow_act,
void *cmd_in, int inlen,
int dst_num)
@@ -3969,6 +4001,7 @@ _create_raw_flow_rule(struct mlx5_ib_dev *dev,
memcpy(spec->match_criteria, fs_matcher->matcher_mask.match_params,
fs_matcher->mask_len);
spec->match_criteria_enable = fs_matcher->match_criteria_enable;
+ spec->flow_context = *flow_context;
handler->rule = mlx5_add_flow_rules(ft, spec,
flow_act, dst, dst_num);
@@ -4033,6 +4066,7 @@ static bool raw_fs_is_multicast(struct mlx5_ib_flow_matcher *fs_matcher,
struct mlx5_ib_flow_handler *
mlx5_ib_raw_fs_rule_add(struct mlx5_ib_dev *dev,
struct mlx5_ib_flow_matcher *fs_matcher,
+ struct mlx5_flow_context *flow_context,
struct mlx5_flow_act *flow_act,
u32 counter_id,
void *cmd_in, int inlen, int dest_id,
@@ -4085,7 +4119,8 @@ mlx5_ib_raw_fs_rule_add(struct mlx5_ib_dev *dev,
dst_num++;
}
- handler = _create_raw_flow_rule(dev, ft_prio, dst, fs_matcher, flow_act,
+ handler = _create_raw_flow_rule(dev, ft_prio, dst, fs_matcher,
+ flow_context, flow_act,
cmd_in, inlen, dst_num);
if (IS_ERR(handler)) {
@@ -4457,7 +4492,7 @@ static void mlx5_ib_handle_internal_error(struct mlx5_ib_dev *ibdev)
* lock/unlock above locks Now need to arm all involved CQs.
*/
list_for_each_entry(mcq, &cq_armed_list, reset_notify) {
- mcq->comp(mcq);
+ mcq->comp(mcq, NULL);
}
spin_unlock_irqrestore(&ibdev->reset_flow_resource_lock, flags);
}
@@ -6779,7 +6814,7 @@ static void *mlx5_ib_add(struct mlx5_core_dev *mdev)
printk_once(KERN_INFO "%s", mlx5_version);
if (MLX5_ESWITCH_MANAGER(mdev) &&
- mlx5_ib_eswitch_mode(mdev->priv.eswitch) == SRIOV_OFFLOADS) {
+ mlx5_ib_eswitch_mode(mdev->priv.eswitch) == MLX5_ESWITCH_OFFLOADS) {
if (!mlx5_core_mp_enabled(mdev))
mlx5_ib_register_vport_reps(mdev);
return mdev;
diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index 40eb8be482e4..ee73dc122d28 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -920,6 +920,7 @@ struct mlx5_ib_lb_state {
};
struct mlx5_ib_pf_eq {
+ struct notifier_block irq_nb;
struct mlx5_ib_dev *dev;
struct mlx5_eq *core;
struct work_struct work;
@@ -977,7 +978,6 @@ struct mlx5_ib_dev {
u16 devx_whitelist_uid;
struct mlx5_srq_table srq_table;
struct mlx5_async_ctx async_ctx;
- int free_port;
};
static inline struct mlx5_ib_cq *to_mibcq(struct mlx5_core_cq *mcq)
@@ -1316,6 +1316,7 @@ extern const struct uapi_definition mlx5_ib_devx_defs[];
extern const struct uapi_definition mlx5_ib_flow_defs[];
struct mlx5_ib_flow_handler *mlx5_ib_raw_fs_rule_add(
struct mlx5_ib_dev *dev, struct mlx5_ib_flow_matcher *fs_matcher,
+ struct mlx5_flow_context *flow_context,
struct mlx5_flow_act *flow_act, u32 counter_id,
void *cmd_in, int inlen, int dest_id, int dest_type);
bool mlx5_ib_devx_is_flow_dest(void *obj, int *dest_id, int *dest_type);
diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index 5f09699fab98..83b452d977d4 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -130,7 +130,7 @@ static void reg_mr_callback(int status, struct mlx5_async_work *context)
struct mlx5_cache_ent *ent = &cache->ent[c];
u8 key;
unsigned long flags;
- struct mlx5_mkey_table *table = &dev->mdev->priv.mkey_table;
+ struct xarray *mkeys = &dev->mdev->priv.mkey_table;
int err;
spin_lock_irqsave(&ent->lock, flags);
@@ -158,12 +158,12 @@ static void reg_mr_callback(int status, struct mlx5_async_work *context)
ent->size++;
spin_unlock_irqrestore(&ent->lock, flags);
- write_lock_irqsave(&table->lock, flags);
- err = radix_tree_insert(&table->tree, mlx5_base_mkey(mr->mmkey.key),
- &mr->mmkey);
+ xa_lock_irqsave(mkeys, flags);
+ err = xa_err(__xa_store(mkeys, mlx5_base_mkey(mr->mmkey.key),
+ &mr->mmkey, GFP_ATOMIC));
+ xa_unlock_irqrestore(mkeys, flags);
if (err)
pr_err("Error inserting to mkey tree. 0x%x\n", -err);
- write_unlock_irqrestore(&table->lock, flags);
if (!completion_done(&ent->compl))
complete(&ent->compl);
diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
index 91507a2e9290..831c450b271a 100644
--- a/drivers/infiniband/hw/mlx5/odp.c
+++ b/drivers/infiniband/hw/mlx5/odp.c
@@ -768,7 +768,7 @@ static int pagefault_single_data_segment(struct mlx5_ib_dev *dev,
bcnt -= *bytes_committed;
next_mr:
- mmkey = __mlx5_mr_lookup(dev->mdev, mlx5_base_mkey(key));
+ mmkey = xa_load(&dev->mdev->priv.mkey_table, mlx5_base_mkey(key));
if (!mkey_is_eq(mmkey, key)) {
mlx5_ib_dbg(dev, "failed to find mkey %x\n", key);
ret = -EFAULT;
@@ -1488,9 +1488,11 @@ static void mlx5_ib_eq_pf_process(struct mlx5_ib_pf_eq *eq)
mlx5_eq_update_ci(eq->core, cc, 1);
}
-static irqreturn_t mlx5_ib_eq_pf_int(int irq, void *eq_ptr)
+static int mlx5_ib_eq_pf_int(struct notifier_block *nb, unsigned long type,
+ void *data)
{
- struct mlx5_ib_pf_eq *eq = eq_ptr;
+ struct mlx5_ib_pf_eq *eq =
+ container_of(nb, struct mlx5_ib_pf_eq, irq_nb);
unsigned long flags;
if (spin_trylock_irqsave(&eq->lock, flags)) {
@@ -1553,20 +1555,26 @@ mlx5_ib_create_pf_eq(struct mlx5_ib_dev *dev, struct mlx5_ib_pf_eq *eq)
goto err_mempool;
}
+ eq->irq_nb.notifier_call = mlx5_ib_eq_pf_int;
param = (struct mlx5_eq_param) {
- .index = MLX5_EQ_PFAULT_IDX,
- .mask = 1 << MLX5_EVENT_TYPE_PAGE_FAULT,
+ .irq_index = 0,
.nent = MLX5_IB_NUM_PF_EQE,
- .context = eq,
- .handler = mlx5_ib_eq_pf_int
};
- eq->core = mlx5_eq_create_generic(dev->mdev, "mlx5_ib_page_fault_eq", &param);
+ param.mask[0] = 1ull << MLX5_EVENT_TYPE_PAGE_FAULT;
+ eq->core = mlx5_eq_create_generic(dev->mdev, &param);
if (IS_ERR(eq->core)) {
err = PTR_ERR(eq->core);
goto err_wq;
}
+ err = mlx5_eq_enable(dev->mdev, eq->core, &eq->irq_nb);
+ if (err) {
+ mlx5_ib_err(dev, "failed to enable odp EQ %d\n", err);
+ goto err_eq;
+ }
return 0;
+err_eq:
+ mlx5_eq_destroy_generic(dev->mdev, eq->core);
err_wq:
destroy_workqueue(eq->wq);
err_mempool:
@@ -1579,6 +1587,7 @@ mlx5_ib_destroy_pf_eq(struct mlx5_ib_dev *dev, struct mlx5_ib_pf_eq *eq)
{
int err;
+ mlx5_eq_disable(dev->mdev, eq->core, &eq->irq_nb);
err = mlx5_eq_destroy_generic(dev->mdev, eq->core);
cancel_work_sync(&eq->work);
destroy_workqueue(eq->wq);
@@ -1677,8 +1686,8 @@ static void num_pending_prefetch_dec(struct mlx5_ib_dev *dev,
struct mlx5_core_mkey *mmkey;
struct mlx5_ib_mr *mr;
- mmkey = __mlx5_mr_lookup(dev->mdev,
- mlx5_base_mkey(sg_list[i].lkey));
+ mmkey = xa_load(&dev->mdev->priv.mkey_table,
+ mlx5_base_mkey(sg_list[i].lkey));
mr = container_of(mmkey, struct mlx5_ib_mr, mmkey);
atomic_dec(&mr->num_pending_prefetch);
}
@@ -1697,8 +1706,8 @@ static bool num_pending_prefetch_inc(struct ib_pd *pd,
struct mlx5_core_mkey *mmkey;
struct mlx5_ib_mr *mr;
- mmkey = __mlx5_mr_lookup(dev->mdev,
- mlx5_base_mkey(sg_list[i].lkey));
+ mmkey = xa_load(&dev->mdev->priv.mkey_table,
+ mlx5_base_mkey(sg_list[i].lkey));
if (!mmkey || mmkey->key != sg_list[i].lkey) {
ret = false;
break;
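
For reference, a minimal sketch of the reworked generic-EQ flow used above: the event mask moves into param.mask[] and delivery happens through a notifier block attached with mlx5_eq_enable(). The demo_* names are illustrative, not part of the patch.

#include <linux/err.h>
#include <linux/mlx5/driver.h>
#include <linux/mlx5/eq.h>
#include <linux/notifier.h>

static int demo_eqe_handler(struct notifier_block *nb, unsigned long type,
			    void *data)
{
	/* 'data' points at the EQE for this notification. */
	return NOTIFY_OK;
}

static struct mlx5_eq *demo_create_eq(struct mlx5_core_dev *mdev,
				      struct notifier_block *nb)
{
	struct mlx5_eq_param param = {
		.irq_index = 0,
		.nent = 64,
	};
	struct mlx5_eq *eq;
	int err;

	param.mask[0] = 1ull << MLX5_EVENT_TYPE_PAGE_FAULT;
	nb->notifier_call = demo_eqe_handler;

	eq = mlx5_eq_create_generic(mdev, &param);
	if (IS_ERR(eq))
		return eq;

	err = mlx5_eq_enable(mdev, eq, nb);
	if (err) {
		mlx5_eq_destroy_generic(mdev, eq);
		return ERR_PTR(err);
	}
	return eq;
}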
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index f6623c77443a..768c7e81f688 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -6297,7 +6297,7 @@ static void handle_drain_completion(struct ib_cq *cq,
/* Run the CQ handler - this makes sure that the drain WR will
* be processed if wasn't processed yet.
*/
- mcq->mcq.comp(&mcq->mcq);
+ mcq->mcq.comp(&mcq->mcq, NULL);
}
wait_for_completion(&sdrain->done);
diff --git a/drivers/infiniband/hw/nes/nes.c b/drivers/infiniband/hw/nes/nes.c
index e00add6d78ec..29b324726ea6 100644
--- a/drivers/infiniband/hw/nes/nes.c
+++ b/drivers/infiniband/hw/nes/nes.c
@@ -183,7 +183,13 @@ static int nes_inetaddr_event(struct notifier_block *notifier,
rcu_read_lock();
in = __in_dev_get_rcu(upper_dev);
- nesvnic->local_ipaddr = in->ifa_list->ifa_address;
+ if (in) {
+ struct in_ifaddr *ifa;
+
+ ifa = rcu_dereference(in->ifa_list);
+ if (ifa)
+ nesvnic->local_ipaddr = ifa->ifa_address;
+ }
rcu_read_unlock();
} else {
nesvnic->local_ipaddr = ifa->ifa_address;
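
nes (and usnic below) now dereference ifa_list through rcu_dereference() instead of touching it directly, and cope with a NULL in_device. A minimal sketch of that read-side pattern (example_first_ipv4 is an illustrative name):

#include <linux/inetdevice.h>
#include <linux/netdevice.h>
#include <linux/rcupdate.h>

/* Return the first IPv4 address of a net_device, or 0 if none. */
static __be32 example_first_ipv4(struct net_device *ndev)
{
	struct in_device *in_dev;
	const struct in_ifaddr *ifa;
	__be32 addr = 0;

	rcu_read_lock();
	in_dev = __in_dev_get_rcu(ndev);
	if (in_dev) {
		ifa = rcu_dereference(in_dev->ifa_list);
		if (ifa)
			addr = ifa->ifa_address;
	}
	rcu_read_unlock();
	return addr;
}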
diff --git a/drivers/infiniband/hw/qedr/main.c b/drivers/infiniband/hw/qedr/main.c
index 083c2c00a8e9..5ebf3c53b3fb 100644
--- a/drivers/infiniband/hw/qedr/main.c
+++ b/drivers/infiniband/hw/qedr/main.c
@@ -312,7 +312,8 @@ static void qedr_free_mem_sb(struct qedr_dev *dev,
struct qed_sb_info *sb_info, int sb_id)
{
if (sb_info->sb_virt) {
- dev->ops->common->sb_release(dev->cdev, sb_info, sb_id);
+ dev->ops->common->sb_release(dev->cdev, sb_info, sb_id,
+ QED_SB_TYPE_CNQ);
dma_free_coherent(&dev->pdev->dev, sizeof(*sb_info->sb_virt),
(void *)sb_info->sb_virt, sb_info->sb_phys);
}
@@ -504,11 +505,13 @@ static irqreturn_t qedr_irq_handler(int irq, void *handle)
static void qedr_sync_free_irqs(struct qedr_dev *dev)
{
u32 vector;
+ u16 idx;
int i;
for (i = 0; i < dev->int_info.used_cnt; i++) {
if (dev->int_info.msix_cnt) {
- vector = dev->int_info.msix[i * dev->num_hwfns].vector;
+ idx = i * dev->num_hwfns + dev->affin_hwfn_idx;
+ vector = dev->int_info.msix[idx].vector;
synchronize_irq(vector);
free_irq(vector, &dev->cnq_array[i]);
}
@@ -520,6 +523,7 @@ static void qedr_sync_free_irqs(struct qedr_dev *dev)
static int qedr_req_msix_irqs(struct qedr_dev *dev)
{
int i, rc = 0;
+ u16 idx;
if (dev->num_cnq > dev->int_info.msix_cnt) {
DP_ERR(dev,
@@ -529,7 +533,8 @@ static int qedr_req_msix_irqs(struct qedr_dev *dev)
}
for (i = 0; i < dev->num_cnq; i++) {
- rc = request_irq(dev->int_info.msix[i * dev->num_hwfns].vector,
+ idx = i * dev->num_hwfns + dev->affin_hwfn_idx;
+ rc = request_irq(dev->int_info.msix[idx].vector,
qedr_irq_handler, 0, dev->cnq_array[i].name,
&dev->cnq_array[i]);
if (rc) {
@@ -866,6 +871,16 @@ static struct qedr_dev *qedr_add(struct qed_dev *cdev, struct pci_dev *pdev,
dev->user_dpm_enabled = dev_info.user_dpm_enabled;
dev->rdma_type = dev_info.rdma_type;
dev->num_hwfns = dev_info.common.num_hwfns;
+
+ if (IS_IWARP(dev) && QEDR_IS_CMT(dev)) {
+ rc = dev->ops->iwarp_set_engine_affin(cdev, false);
+ if (rc) {
+ DP_ERR(dev, "iWARP is disabled over a 100g device Enabling it may impact L2 performance. To enable it run devlink dev param set <dev> name iwarp_cmt value true cmode runtime\n");
+ goto init_err;
+ }
+ }
+ dev->affin_hwfn_idx = dev->ops->common->get_affin_hwfn_idx(cdev);
+
dev->rdma_ctx = dev->ops->rdma_get_rdma_ctx(cdev);
dev->num_cnq = dev->ops->rdma_get_min_cnq_msix(cdev);
@@ -926,6 +941,10 @@ static void qedr_remove(struct qedr_dev *dev)
qedr_stop_hw(dev);
qedr_sync_free_irqs(dev);
qedr_free_resources(dev);
+
+ if (IS_IWARP(dev) && QEDR_IS_CMT(dev))
+ dev->ops->iwarp_set_engine_affin(dev->cdev, true);
+
ib_dealloc_device(&dev->ibdev);
}
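
On CMT (two-hwfn) devices the RDMA CNQs now use the MSI-X vectors of the affinitized hwfn, hence the extra affin_hwfn_idx offset when indexing int_info.msix[] above. The arithmetic in isolation (demo_cnq_msix_idx is an illustrative name):

#include <linux/types.h>

static u16 demo_cnq_msix_idx(u16 cnq, u8 num_hwfns, u8 affin_hwfn_idx)
{
	/* One vector slot per hwfn per CNQ; pick the affinitized hwfn's. */
	return cnq * num_hwfns + affin_hwfn_idx;
}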
diff --git a/drivers/infiniband/hw/qedr/qedr.h b/drivers/infiniband/hw/qedr/qedr.h
index 6175d1e98717..a92ca22e5de1 100644
--- a/drivers/infiniband/hw/qedr/qedr.h
+++ b/drivers/infiniband/hw/qedr/qedr.h
@@ -157,6 +157,8 @@ struct qedr_dev {
u32 dp_module;
u8 dp_level;
u8 num_hwfns;
+#define QEDR_IS_CMT(dev) ((dev)->num_hwfns > 1)
+ u8 affin_hwfn_idx;
u8 gsi_ll2_handle;
uint wq_multiplier;
diff --git a/drivers/infiniband/hw/usnic/usnic_ib_main.c b/drivers/infiniband/hw/usnic/usnic_ib_main.c
index d88d9f8a7f9a..34c1f9d6c915 100644
--- a/drivers/infiniband/hw/usnic/usnic_ib_main.c
+++ b/drivers/infiniband/hw/usnic/usnic_ib_main.c
@@ -427,11 +427,16 @@ static void *usnic_ib_device_add(struct pci_dev *dev)
if (netif_carrier_ok(us_ibdev->netdev))
usnic_fwd_carrier_up(us_ibdev->ufdev);
- ind = in_dev_get(netdev);
- if (ind->ifa_list)
- usnic_fwd_add_ipaddr(us_ibdev->ufdev,
- ind->ifa_list->ifa_address);
- in_dev_put(ind);
+ rcu_read_lock();
+ ind = __in_dev_get_rcu(netdev);
+ if (ind) {
+ const struct in_ifaddr *ifa;
+
+ ifa = rcu_dereference(ind->ifa_list);
+ if (ifa)
+ usnic_fwd_add_ipaddr(us_ibdev->ufdev, ifa->ifa_address);
+ }
+ rcu_read_unlock();
usnic_mac_ip_to_gid(us_ibdev->netdev->perm_addr,
us_ibdev->ufdev->inaddr, &gid.raw[0]);
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c
index 9b5e11d3fb85..04ea7db08e87 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_main.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c
@@ -1998,6 +1998,7 @@ static int ipoib_get_vf_config(struct net_device *dev, int vf,
return err;
ivf->vf = vf;
+ memcpy(ivf->mac, dev->dev_addr, dev->addr_len);
return 0;
}
diff --git a/drivers/isdn/Kconfig b/drivers/isdn/Kconfig
index 1ca4d70d198a..be8387c0eeef 100644
--- a/drivers/isdn/Kconfig
+++ b/drivers/isdn/Kconfig
@@ -21,59 +21,8 @@ menuconfig ISDN
if ISDN
-menuconfig ISDN_I4L
- tristate "Old ISDN4Linux (deprecated)"
- depends on TTY
- ---help---
- This driver allows you to use an ISDN adapter for networking
- connections and as dialin/out device. The isdn-tty's have a built
- in AT-compatible modem emulator. Network devices support autodial,
- channel-bundling, callback and caller-authentication without having
- a daemon running. A reduced T.70 protocol is supported with tty's
- suitable for German BTX. On D-Channel, the protocols EDSS1
- (Euro-ISDN) and 1TR6 (German style) are supported. See
- <file:Documentation/isdn/README> for more information.
-
- ISDN support in the linux kernel is moving towards a new API,
- called CAPI (Common ISDN Application Programming Interface).
- Therefore the old ISDN4Linux layer will eventually become obsolete.
- It is still available, though, for use with adapters that are not
- supported by the new CAPI subsystem yet.
-
-source "drivers/isdn/i4l/Kconfig"
-
-menuconfig ISDN_CAPI
- tristate "CAPI 2.0 subsystem"
- help
- This provides CAPI (the Common ISDN Application Programming
- Interface) Version 2.0, a standard making it easy for programs to
- access ISDN hardware in a device independent way. (For details see
- <http://www.capi.org/>.) CAPI supports making and accepting voice
- and data connections, controlling call options and protocols,
- as well as ISDN supplementary services like call forwarding or
- three-party conferences (if supported by the specific hardware
- driver).
-
- Select this option and the appropriate hardware driver below if
- you have an ISDN adapter supported by the CAPI subsystem.
-
-if ISDN_CAPI
-
source "drivers/isdn/capi/Kconfig"
-source "drivers/isdn/hardware/Kconfig"
-
-endif # ISDN_CAPI
-
-source "drivers/isdn/gigaset/Kconfig"
-
-source "drivers/isdn/hysdn/Kconfig"
-
source "drivers/isdn/mISDN/Kconfig"
-config ISDN_HDLC
- tristate
- select CRC_CCITT
- select BITREVERSE
-
endif # ISDN
diff --git a/drivers/isdn/Makefile b/drivers/isdn/Makefile
index e7d3d8f2ad5a..63baf27a2c79 100644
--- a/drivers/isdn/Makefile
+++ b/drivers/isdn/Makefile
@@ -3,12 +3,6 @@
# Object files in subdirectories
-obj-$(CONFIG_ISDN_I4L) += i4l/
obj-$(CONFIG_ISDN_CAPI) += capi/
obj-$(CONFIG_MISDN) += mISDN/
obj-$(CONFIG_ISDN) += hardware/
-obj-$(CONFIG_ISDN_DIVERSION) += divert/
-obj-$(CONFIG_ISDN_DRV_HISAX) += hisax/
-obj-$(CONFIG_ISDN_DRV_LOOP) += isdnloop/
-obj-$(CONFIG_HYSDN) += hysdn/
-obj-$(CONFIG_ISDN_DRV_GIGASET) += gigaset/
diff --git a/drivers/isdn/capi/Kconfig b/drivers/isdn/capi/Kconfig
index abaadce376c5..573fea5500ce 100644
--- a/drivers/isdn/capi/Kconfig
+++ b/drivers/isdn/capi/Kconfig
@@ -1,4 +1,22 @@
# SPDX-License-Identifier: GPL-2.0-only
+menuconfig ISDN_CAPI
+ tristate "CAPI 2.0 subsystem"
+ help
+ This provides CAPI (the Common ISDN Application Programming
+ Interface) Version 2.0, a standard making it easy for programs to
+ access ISDN hardware in a device independent way. (For details see
+ <http://www.capi.org/>.) CAPI supports making and accepting voice
+ and data connections, controlling call options and protocols,
+ as well as ISDN supplementary services like call forwarding or
+ three-party conferences (if supported by the specific hardware
+ driver).
+
+ This subsystem requires a hardware specific driver.
+ See CONFIG_BT_CMTP for the last remaining regular driver
+ in the kernel that uses the CAPI subsystem.
+
+if ISDN_CAPI
+
config CAPI_TRACE
bool "CAPI trace support"
default y
@@ -27,15 +45,6 @@ config ISDN_CAPI_MIDDLEWARE
device. If you want to use pppd with pppdcapiplugin to dial up to
your ISP, say Y here.
-config ISDN_CAPI_CAPIDRV
- tristate "CAPI2.0 capidrv interface support"
- depends on ISDN_I4L
- help
- This option provides the glue code to hook up CAPI driven cards to
- the legacy isdn4linux link layer. If you have a card which is
- supported by a CAPI driver, but still want to use old features like
- ippp interfaces or ttyI emulation, say Y/M here.
-
config ISDN_CAPI_CAPIDRV_VERBOSE
bool "Verbose reason code reporting"
depends on ISDN_CAPI_CAPIDRV
@@ -43,3 +52,5 @@ config ISDN_CAPI_CAPIDRV_VERBOSE
If you say Y here, the capidrv interface will give verbose reasons
for disconnecting. This will increase the size of the kernel by 7 KB.
If unsure, say N.
+
+endif
diff --git a/drivers/isdn/capi/Makefile b/drivers/isdn/capi/Makefile
index 06da3ed2c40a..d299f3e75f89 100644
--- a/drivers/isdn/capi/Makefile
+++ b/drivers/isdn/capi/Makefile
@@ -13,3 +13,5 @@ obj-$(CONFIG_ISDN_CAPI_CAPIDRV) += capidrv.o
kernelcapi-y := kcapi.o capiutil.o capilib.o
kernelcapi-$(CONFIG_PROC_FS) += kcapi_proc.o
+
+ccflags-y += -I$(srctree)/$(src)/../include -I$(srctree)/$(src)/../include/uapi
diff --git a/drivers/isdn/capi/capidrv.c b/drivers/isdn/capi/capidrv.c
deleted file mode 100644
index e8949f3dcae1..000000000000
--- a/drivers/isdn/capi/capidrv.c
+++ /dev/null
@@ -1,2525 +0,0 @@
-/* $Id: capidrv.c,v 1.1.2.2 2004/01/12 23:17:24 keil Exp $
- *
- * ISDN4Linux Driver, using capi20 interface (kernelcapi)
- *
- * Copyright 1997 by Carsten Paeth <calle@calle.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#include <linux/compiler.h>
-#include <linux/module.h>
-#include <linux/errno.h>
-#include <linux/kernel.h>
-#include <linux/major.h>
-#include <linux/slab.h>
-#include <linux/fcntl.h>
-#include <linux/fs.h>
-#include <linux/signal.h>
-#include <linux/mm.h>
-#include <linux/timer.h>
-#include <linux/wait.h>
-#include <linux/skbuff.h>
-#include <linux/isdn.h>
-#include <linux/isdnif.h>
-#include <linux/proc_fs.h>
-#include <linux/seq_file.h>
-#include <linux/capi.h>
-#include <linux/kernelcapi.h>
-#include <linux/ctype.h>
-#include <linux/init.h>
-#include <linux/moduleparam.h>
-
-#include <linux/isdn/capiutil.h>
-#include <linux/isdn/capicmd.h>
-#include "capidrv.h"
-
-static int debugmode = 0;
-
-MODULE_DESCRIPTION("CAPI4Linux: Interface to ISDN4Linux");
-MODULE_AUTHOR("Carsten Paeth");
-MODULE_LICENSE("GPL");
-module_param(debugmode, uint, S_IRUGO | S_IWUSR);
-
-/* -------- type definitions ----------------------------------------- */
-
-
-struct capidrv_contr {
-
- struct capidrv_contr *next;
- struct module *owner;
- u32 contrnr;
- char name[20];
-
- /*
- * for isdn4linux
- */
- isdn_if interface;
- int myid;
-
- /*
- * LISTEN state
- */
- int state;
- u32 cipmask;
- u32 cipmask2;
- struct timer_list listentimer;
-
- /*
- * ID of capi message sent
- */
- u16 msgid;
-
- /*
- * B-Channels
- */
- int nbchan;
- struct capidrv_bchan {
- struct capidrv_contr *contr;
- u8 msn[ISDN_MSNLEN];
- int l2;
- int l3;
- u8 num[ISDN_MSNLEN];
- u8 mynum[ISDN_MSNLEN];
- int si1;
- int si2;
- int incoming;
- int disconnecting;
- struct capidrv_plci {
- struct capidrv_plci *next;
- u32 plci;
- u32 ncci; /* ncci for CONNECT_ACTIVE_IND */
- u16 msgid; /* to identfy CONNECT_CONF */
- int chan;
- int state;
- int leasedline;
- struct capidrv_ncci {
- struct capidrv_ncci *next;
- struct capidrv_plci *plcip;
- u32 ncci;
- u16 msgid; /* to identfy CONNECT_B3_CONF */
- int chan;
- int state;
- int oldstate;
- /* */
- u16 datahandle;
- struct ncci_datahandle_queue {
- struct ncci_datahandle_queue *next;
- u16 datahandle;
- int len;
- } *ackqueue;
- } *ncci_list;
- } *plcip;
- struct capidrv_ncci *nccip;
- } *bchans;
-
- struct capidrv_plci *plci_list;
-
- /* for q931 data */
- u8 q931_buf[4096];
- u8 *q931_read;
- u8 *q931_write;
- u8 *q931_end;
-};
-
-
-struct capidrv_data {
- struct capi20_appl ap;
- int ncontr;
- struct capidrv_contr *contr_list;
-};
-
-typedef struct capidrv_plci capidrv_plci;
-typedef struct capidrv_ncci capidrv_ncci;
-typedef struct capidrv_contr capidrv_contr;
-typedef struct capidrv_data capidrv_data;
-typedef struct capidrv_bchan capidrv_bchan;
-
-/* -------- data definitions ----------------------------------------- */
-
-static capidrv_data global;
-static DEFINE_SPINLOCK(global_lock);
-
-static void handle_dtrace_data(capidrv_contr *card,
- int send, int level2, u8 *data, u16 len);
-
-/* -------- convert functions ---------------------------------------- */
-
-static inline u32 b1prot(int l2, int l3)
-{
- switch (l2) {
- case ISDN_PROTO_L2_X75I:
- case ISDN_PROTO_L2_X75UI:
- case ISDN_PROTO_L2_X75BUI:
- return 0;
- case ISDN_PROTO_L2_HDLC:
- default:
- return 0;
- case ISDN_PROTO_L2_TRANS:
- return 1;
- case ISDN_PROTO_L2_V11096:
- case ISDN_PROTO_L2_V11019:
- case ISDN_PROTO_L2_V11038:
- return 2;
- case ISDN_PROTO_L2_FAX:
- return 4;
- case ISDN_PROTO_L2_MODEM:
- return 8;
- }
-}
-
-static inline u32 b2prot(int l2, int l3)
-{
- switch (l2) {
- case ISDN_PROTO_L2_X75I:
- case ISDN_PROTO_L2_X75UI:
- case ISDN_PROTO_L2_X75BUI:
- default:
- return 0;
- case ISDN_PROTO_L2_HDLC:
- case ISDN_PROTO_L2_TRANS:
- case ISDN_PROTO_L2_V11096:
- case ISDN_PROTO_L2_V11019:
- case ISDN_PROTO_L2_V11038:
- case ISDN_PROTO_L2_MODEM:
- return 1;
- case ISDN_PROTO_L2_FAX:
- return 4;
- }
-}
-
-static inline u32 b3prot(int l2, int l3)
-{
- switch (l2) {
- case ISDN_PROTO_L2_X75I:
- case ISDN_PROTO_L2_X75UI:
- case ISDN_PROTO_L2_X75BUI:
- case ISDN_PROTO_L2_HDLC:
- case ISDN_PROTO_L2_TRANS:
- case ISDN_PROTO_L2_V11096:
- case ISDN_PROTO_L2_V11019:
- case ISDN_PROTO_L2_V11038:
- case ISDN_PROTO_L2_MODEM:
- default:
- return 0;
- case ISDN_PROTO_L2_FAX:
- return 4;
- }
-}
-
-static _cstruct b1config_async_v110(u16 rate)
-{
- /* CAPI-Spec "B1 Configuration" */
- static unsigned char buf[9];
- buf[0] = 8; /* len */
- /* maximum bitrate */
- buf[1] = rate & 0xff; buf[2] = (rate >> 8) & 0xff;
- buf[3] = 8; buf[4] = 0; /* 8 bits per character */
- buf[5] = 0; buf[6] = 0; /* parity none */
- buf[7] = 0; buf[8] = 0; /* 1 stop bit */
- return buf;
-}
-
-static _cstruct b1config(int l2, int l3)
-{
- switch (l2) {
- case ISDN_PROTO_L2_X75I:
- case ISDN_PROTO_L2_X75UI:
- case ISDN_PROTO_L2_X75BUI:
- case ISDN_PROTO_L2_HDLC:
- case ISDN_PROTO_L2_TRANS:
- default:
- return NULL;
- case ISDN_PROTO_L2_V11096:
- return b1config_async_v110(9600);
- case ISDN_PROTO_L2_V11019:
- return b1config_async_v110(19200);
- case ISDN_PROTO_L2_V11038:
- return b1config_async_v110(38400);
- }
-}
-
-static inline u16 si2cip(u8 si1, u8 si2)
-{
- static const u8 cip[17][5] =
- {
- /* 0 1 2 3 4 */
- {0, 0, 0, 0, 0}, /*0 */
- {16, 16, 4, 26, 16}, /*1 */
- {17, 17, 17, 4, 4}, /*2 */
- {2, 2, 2, 2, 2}, /*3 */
- {18, 18, 18, 18, 18}, /*4 */
- {2, 2, 2, 2, 2}, /*5 */
- {0, 0, 0, 0, 0}, /*6 */
- {2, 2, 2, 2, 2}, /*7 */
- {2, 2, 2, 2, 2}, /*8 */
- {21, 21, 21, 21, 21}, /*9 */
- {19, 19, 19, 19, 19}, /*10 */
- {0, 0, 0, 0, 0}, /*11 */
- {0, 0, 0, 0, 0}, /*12 */
- {0, 0, 0, 0, 0}, /*13 */
- {0, 0, 0, 0, 0}, /*14 */
- {22, 22, 22, 22, 22}, /*15 */
- {27, 27, 27, 28, 27} /*16 */
- };
- if (si1 > 16)
- si1 = 0;
- if (si2 > 4)
- si2 = 0;
-
- return (u16) cip[si1][si2];
-}
-
-static inline u8 cip2si1(u16 cipval)
-{
- static const u8 si[32] =
- {7, 1, 7, 7, 1, 1, 7, 7, /*0-7 */
- 7, 1, 0, 0, 0, 0, 0, 0, /*8-15 */
- 1, 2, 4, 10, 9, 9, 15, 7, /*16-23 */
- 7, 7, 1, 16, 16, 0, 0, 0}; /*24-31 */
-
- if (cipval > 31)
- cipval = 0; /* .... */
- return si[cipval];
-}
-
-static inline u8 cip2si2(u16 cipval)
-{
- static const u8 si[32] =
- {0, 0, 0, 0, 2, 3, 0, 0, /*0-7 */
- 0, 3, 0, 0, 0, 0, 0, 0, /*8-15 */
- 1, 2, 0, 0, 9, 0, 0, 0, /*16-23 */
- 0, 0, 3, 2, 3, 0, 0, 0}; /*24-31 */
-
- if (cipval > 31)
- cipval = 0; /* .... */
- return si[cipval];
-}
-
-
-/* -------- controller management ------------------------------------- */
-
-static inline capidrv_contr *findcontrbydriverid(int driverid)
-{
- unsigned long flags;
- capidrv_contr *p;
-
- spin_lock_irqsave(&global_lock, flags);
- for (p = global.contr_list; p; p = p->next)
- if (p->myid == driverid)
- break;
- spin_unlock_irqrestore(&global_lock, flags);
- return p;
-}
-
-static capidrv_contr *findcontrbynumber(u32 contr)
-{
- unsigned long flags;
- capidrv_contr *p = global.contr_list;
-
- spin_lock_irqsave(&global_lock, flags);
- for (p = global.contr_list; p; p = p->next)
- if (p->contrnr == contr)
- break;
- spin_unlock_irqrestore(&global_lock, flags);
- return p;
-}
-
-
-/* -------- plci management ------------------------------------------ */
-
-static capidrv_plci *new_plci(capidrv_contr *card, int chan)
-{
- capidrv_plci *plcip;
-
- plcip = kzalloc(sizeof(capidrv_plci), GFP_ATOMIC);
-
- if (plcip == NULL)
- return NULL;
-
- plcip->state = ST_PLCI_NONE;
- plcip->plci = 0;
- plcip->msgid = 0;
- plcip->chan = chan;
- plcip->next = card->plci_list;
- card->plci_list = plcip;
- card->bchans[chan].plcip = plcip;
-
- return plcip;
-}
-
-static capidrv_plci *find_plci_by_plci(capidrv_contr *card, u32 plci)
-{
- capidrv_plci *p;
- for (p = card->plci_list; p; p = p->next)
- if (p->plci == plci)
- return p;
- return NULL;
-}
-
-static capidrv_plci *find_plci_by_msgid(capidrv_contr *card, u16 msgid)
-{
- capidrv_plci *p;
- for (p = card->plci_list; p; p = p->next)
- if (p->msgid == msgid)
- return p;
- return NULL;
-}
-
-static capidrv_plci *find_plci_by_ncci(capidrv_contr *card, u32 ncci)
-{
- capidrv_plci *p;
- for (p = card->plci_list; p; p = p->next)
- if (p->plci == (ncci & 0xffff))
- return p;
- return NULL;
-}
-
-static void free_plci(capidrv_contr *card, capidrv_plci *plcip)
-{
- capidrv_plci **pp;
-
- for (pp = &card->plci_list; *pp; pp = &(*pp)->next) {
- if (*pp == plcip) {
- *pp = (*pp)->next;
- card->bchans[plcip->chan].plcip = NULL;
- card->bchans[plcip->chan].disconnecting = 0;
- card->bchans[plcip->chan].incoming = 0;
- kfree(plcip);
- return;
- }
- }
- printk(KERN_ERR "capidrv-%d: free_plci %p (0x%x) not found, Huh?\n",
- card->contrnr, plcip, plcip->plci);
-}
-
-/* -------- ncci management ------------------------------------------ */
-
-static inline capidrv_ncci *new_ncci(capidrv_contr *card,
- capidrv_plci *plcip,
- u32 ncci)
-{
- capidrv_ncci *nccip;
-
- nccip = kzalloc(sizeof(capidrv_ncci), GFP_ATOMIC);
-
- if (nccip == NULL)
- return NULL;
-
- nccip->ncci = ncci;
- nccip->state = ST_NCCI_NONE;
- nccip->plcip = plcip;
- nccip->chan = plcip->chan;
- nccip->datahandle = 0;
-
- nccip->next = plcip->ncci_list;
- plcip->ncci_list = nccip;
-
- card->bchans[plcip->chan].nccip = nccip;
-
- return nccip;
-}
-
-static inline capidrv_ncci *find_ncci(capidrv_contr *card, u32 ncci)
-{
- capidrv_plci *plcip;
- capidrv_ncci *p;
-
- if ((plcip = find_plci_by_ncci(card, ncci)) == NULL)
- return NULL;
-
- for (p = plcip->ncci_list; p; p = p->next)
- if (p->ncci == ncci)
- return p;
- return NULL;
-}
-
-static inline capidrv_ncci *find_ncci_by_msgid(capidrv_contr *card,
- u32 ncci, u16 msgid)
-{
- capidrv_plci *plcip;
- capidrv_ncci *p;
-
- if ((plcip = find_plci_by_ncci(card, ncci)) == NULL)
- return NULL;
-
- for (p = plcip->ncci_list; p; p = p->next)
- if (p->msgid == msgid)
- return p;
- return NULL;
-}
-
-static void free_ncci(capidrv_contr *card, struct capidrv_ncci *nccip)
-{
- struct capidrv_ncci **pp;
-
- for (pp = &(nccip->plcip->ncci_list); *pp; pp = &(*pp)->next) {
- if (*pp == nccip) {
- *pp = (*pp)->next;
- break;
- }
- }
- card->bchans[nccip->chan].nccip = NULL;
- kfree(nccip);
-}
-
-static int capidrv_add_ack(struct capidrv_ncci *nccip,
- u16 datahandle, int len)
-{
- struct ncci_datahandle_queue *n, **pp;
-
- n = kmalloc(sizeof(struct ncci_datahandle_queue), GFP_ATOMIC);
- if (!n) {
- printk(KERN_ERR "capidrv: kmalloc ncci_datahandle failed\n");
- return -1;
- }
- n->next = NULL;
- n->datahandle = datahandle;
- n->len = len;
- for (pp = &nccip->ackqueue; *pp; pp = &(*pp)->next);
- *pp = n;
- return 0;
-}
-
-static int capidrv_del_ack(struct capidrv_ncci *nccip, u16 datahandle)
-{
- struct ncci_datahandle_queue **pp, *p;
- int len;
-
- for (pp = &nccip->ackqueue; *pp; pp = &(*pp)->next) {
- if ((*pp)->datahandle == datahandle) {
- p = *pp;
- len = p->len;
- *pp = (*pp)->next;
- kfree(p);
- return len;
- }
- }
- return -1;
-}
-
-/* -------- convert and send capi message ---------------------------- */
-
-static void send_message(capidrv_contr *card, _cmsg *cmsg)
-{
- struct sk_buff *skb;
- size_t len;
-
- if (capi_cmsg2message(cmsg, cmsg->buf)) {
- printk(KERN_ERR "capidrv::send_message: parser failure\n");
- return;
- }
- len = CAPIMSG_LEN(cmsg->buf);
- skb = alloc_skb(len, GFP_ATOMIC);
- if (!skb) {
- printk(KERN_ERR "capidrv::send_message: can't allocate mem\n");
- return;
- }
- skb_put_data(skb, cmsg->buf, len);
- if (capi20_put_message(&global.ap, skb) != CAPI_NOERROR)
- kfree_skb(skb);
-}
-
-/* -------- state machine -------------------------------------------- */
-
-struct listenstatechange {
- int actstate;
- int nextstate;
- int event;
-};
-
-static struct listenstatechange listentable[] =
-{
- {ST_LISTEN_NONE, ST_LISTEN_WAIT_CONF, EV_LISTEN_REQ},
- {ST_LISTEN_ACTIVE, ST_LISTEN_ACTIVE_WAIT_CONF, EV_LISTEN_REQ},
- {ST_LISTEN_WAIT_CONF, ST_LISTEN_NONE, EV_LISTEN_CONF_ERROR},
- {ST_LISTEN_ACTIVE_WAIT_CONF, ST_LISTEN_ACTIVE, EV_LISTEN_CONF_ERROR},
- {ST_LISTEN_WAIT_CONF, ST_LISTEN_NONE, EV_LISTEN_CONF_EMPTY},
- {ST_LISTEN_ACTIVE_WAIT_CONF, ST_LISTEN_NONE, EV_LISTEN_CONF_EMPTY},
- {ST_LISTEN_WAIT_CONF, ST_LISTEN_ACTIVE, EV_LISTEN_CONF_OK},
- {ST_LISTEN_ACTIVE_WAIT_CONF, ST_LISTEN_ACTIVE, EV_LISTEN_CONF_OK},
- {},
-};
-
-static void listen_change_state(capidrv_contr *card, int event)
-{
- struct listenstatechange *p = listentable;
- while (p->event) {
- if (card->state == p->actstate && p->event == event) {
- if (debugmode)
- printk(KERN_DEBUG "capidrv-%d: listen_change_state %d -> %d\n",
- card->contrnr, card->state, p->nextstate);
- card->state = p->nextstate;
- return;
- }
- p++;
- }
- printk(KERN_ERR "capidrv-%d: listen_change_state state=%d event=%d ????\n",
- card->contrnr, card->state, event);
-
-}
-
-/* ------------------------------------------------------------------ */
-
-static void p0(capidrv_contr *card, capidrv_plci *plci)
-{
- isdn_ctrl cmd;
-
- card->bchans[plci->chan].contr = NULL;
- cmd.command = ISDN_STAT_DHUP;
- cmd.driver = card->myid;
- cmd.arg = plci->chan;
- card->interface.statcallb(&cmd);
- free_plci(card, plci);
-}
-
-/* ------------------------------------------------------------------ */
-
-struct plcistatechange {
- int actstate;
- int nextstate;
- int event;
- void (*changefunc)(capidrv_contr *card, capidrv_plci *plci);
-};
-
-static struct plcistatechange plcitable[] =
-{
- /* P-0 */
- {ST_PLCI_NONE, ST_PLCI_OUTGOING, EV_PLCI_CONNECT_REQ, NULL},
- {ST_PLCI_NONE, ST_PLCI_ALLOCATED, EV_PLCI_FACILITY_IND_UP, NULL},
- {ST_PLCI_NONE, ST_PLCI_INCOMING, EV_PLCI_CONNECT_IND, NULL},
- {ST_PLCI_NONE, ST_PLCI_RESUMEING, EV_PLCI_RESUME_REQ, NULL},
- /* P-0.1 */
- {ST_PLCI_OUTGOING, ST_PLCI_NONE, EV_PLCI_CONNECT_CONF_ERROR, p0},
- {ST_PLCI_OUTGOING, ST_PLCI_ALLOCATED, EV_PLCI_CONNECT_CONF_OK, NULL},
- /* P-1 */
- {ST_PLCI_ALLOCATED, ST_PLCI_ACTIVE, EV_PLCI_CONNECT_ACTIVE_IND, NULL},
- {ST_PLCI_ALLOCATED, ST_PLCI_DISCONNECTING, EV_PLCI_DISCONNECT_REQ, NULL},
- {ST_PLCI_ALLOCATED, ST_PLCI_DISCONNECTING, EV_PLCI_FACILITY_IND_DOWN, NULL},
- {ST_PLCI_ALLOCATED, ST_PLCI_DISCONNECTED, EV_PLCI_DISCONNECT_IND, NULL},
- /* P-ACT */
- {ST_PLCI_ACTIVE, ST_PLCI_DISCONNECTING, EV_PLCI_DISCONNECT_REQ, NULL},
- {ST_PLCI_ACTIVE, ST_PLCI_DISCONNECTING, EV_PLCI_FACILITY_IND_DOWN, NULL},
- {ST_PLCI_ACTIVE, ST_PLCI_DISCONNECTED, EV_PLCI_DISCONNECT_IND, NULL},
- {ST_PLCI_ACTIVE, ST_PLCI_HELD, EV_PLCI_HOLD_IND, NULL},
- {ST_PLCI_ACTIVE, ST_PLCI_DISCONNECTING, EV_PLCI_SUSPEND_IND, NULL},
- /* P-2 */
- {ST_PLCI_INCOMING, ST_PLCI_DISCONNECTING, EV_PLCI_CONNECT_REJECT, NULL},
- {ST_PLCI_INCOMING, ST_PLCI_FACILITY_IND, EV_PLCI_FACILITY_IND_UP, NULL},
- {ST_PLCI_INCOMING, ST_PLCI_ACCEPTING, EV_PLCI_CONNECT_RESP, NULL},
- {ST_PLCI_INCOMING, ST_PLCI_DISCONNECTING, EV_PLCI_DISCONNECT_REQ, NULL},
- {ST_PLCI_INCOMING, ST_PLCI_DISCONNECTING, EV_PLCI_FACILITY_IND_DOWN, NULL},
- {ST_PLCI_INCOMING, ST_PLCI_DISCONNECTED, EV_PLCI_DISCONNECT_IND, NULL},
- {ST_PLCI_INCOMING, ST_PLCI_DISCONNECTING, EV_PLCI_CD_IND, NULL},
- /* P-3 */
- {ST_PLCI_FACILITY_IND, ST_PLCI_DISCONNECTING, EV_PLCI_CONNECT_REJECT, NULL},
- {ST_PLCI_FACILITY_IND, ST_PLCI_ACCEPTING, EV_PLCI_CONNECT_ACTIVE_IND, NULL},
- {ST_PLCI_FACILITY_IND, ST_PLCI_DISCONNECTING, EV_PLCI_DISCONNECT_REQ, NULL},
- {ST_PLCI_FACILITY_IND, ST_PLCI_DISCONNECTING, EV_PLCI_FACILITY_IND_DOWN, NULL},
- {ST_PLCI_FACILITY_IND, ST_PLCI_DISCONNECTED, EV_PLCI_DISCONNECT_IND, NULL},
- /* P-4 */
- {ST_PLCI_ACCEPTING, ST_PLCI_ACTIVE, EV_PLCI_CONNECT_ACTIVE_IND, NULL},
- {ST_PLCI_ACCEPTING, ST_PLCI_DISCONNECTING, EV_PLCI_DISCONNECT_REQ, NULL},
- {ST_PLCI_ACCEPTING, ST_PLCI_DISCONNECTING, EV_PLCI_FACILITY_IND_DOWN, NULL},
- {ST_PLCI_ACCEPTING, ST_PLCI_DISCONNECTED, EV_PLCI_DISCONNECT_IND, NULL},
- /* P-5 */
- {ST_PLCI_DISCONNECTING, ST_PLCI_DISCONNECTED, EV_PLCI_DISCONNECT_IND, NULL},
- /* P-6 */
- {ST_PLCI_DISCONNECTED, ST_PLCI_NONE, EV_PLCI_DISCONNECT_RESP, p0},
- /* P-0.Res */
- {ST_PLCI_RESUMEING, ST_PLCI_NONE, EV_PLCI_RESUME_CONF_ERROR, p0},
- {ST_PLCI_RESUMEING, ST_PLCI_RESUME, EV_PLCI_RESUME_CONF_OK, NULL},
- /* P-RES */
- {ST_PLCI_RESUME, ST_PLCI_ACTIVE, EV_PLCI_RESUME_IND, NULL},
- /* P-HELD */
- {ST_PLCI_HELD, ST_PLCI_ACTIVE, EV_PLCI_RETRIEVE_IND, NULL},
- {},
-};
-
-static void plci_change_state(capidrv_contr *card, capidrv_plci *plci, int event)
-{
- struct plcistatechange *p = plcitable;
- while (p->event) {
- if (plci->state == p->actstate && p->event == event) {
- if (debugmode)
- printk(KERN_DEBUG "capidrv-%d: plci_change_state:0x%x %d -> %d\n",
- card->contrnr, plci->plci, plci->state, p->nextstate);
- plci->state = p->nextstate;
- if (p->changefunc)
- p->changefunc(card, plci);
- return;
- }
- p++;
- }
- printk(KERN_ERR "capidrv-%d: plci_change_state:0x%x state=%d event=%d ????\n",
- card->contrnr, plci->plci, plci->state, event);
-}
-
-/* ------------------------------------------------------------------ */
-
-static _cmsg cmsg;
-
-static void n0(capidrv_contr *card, capidrv_ncci *ncci)
-{
- isdn_ctrl cmd;
-
- capi_fill_DISCONNECT_REQ(&cmsg,
- global.ap.applid,
- card->msgid++,
- ncci->plcip->plci,
- NULL, /* BChannelinformation */
- NULL, /* Keypadfacility */
- NULL, /* Useruserdata */ /* $$$$ */
- NULL /* Facilitydataarray */
- );
- plci_change_state(card, ncci->plcip, EV_PLCI_DISCONNECT_REQ);
- send_message(card, &cmsg);
-
- cmd.command = ISDN_STAT_BHUP;
- cmd.driver = card->myid;
- cmd.arg = ncci->chan;
- card->interface.statcallb(&cmd);
- free_ncci(card, ncci);
-}
-
-/* ------------------------------------------------------------------ */
-
-struct nccistatechange {
- int actstate;
- int nextstate;
- int event;
- void (*changefunc)(capidrv_contr *card, capidrv_ncci *ncci);
-};
-
-static struct nccistatechange nccitable[] =
-{
- /* N-0 */
- {ST_NCCI_NONE, ST_NCCI_OUTGOING, EV_NCCI_CONNECT_B3_REQ, NULL},
- {ST_NCCI_NONE, ST_NCCI_INCOMING, EV_NCCI_CONNECT_B3_IND, NULL},
- /* N-0.1 */
- {ST_NCCI_OUTGOING, ST_NCCI_ALLOCATED, EV_NCCI_CONNECT_B3_CONF_OK, NULL},
- {ST_NCCI_OUTGOING, ST_NCCI_NONE, EV_NCCI_CONNECT_B3_CONF_ERROR, n0},
- /* N-1 */
- {ST_NCCI_INCOMING, ST_NCCI_DISCONNECTING, EV_NCCI_CONNECT_B3_REJECT, NULL},
- {ST_NCCI_INCOMING, ST_NCCI_ALLOCATED, EV_NCCI_CONNECT_B3_RESP, NULL},
- {ST_NCCI_INCOMING, ST_NCCI_DISCONNECTED, EV_NCCI_DISCONNECT_B3_IND, NULL},
- {ST_NCCI_INCOMING, ST_NCCI_DISCONNECTING, EV_NCCI_DISCONNECT_B3_REQ, NULL},
- /* N-2 */
- {ST_NCCI_ALLOCATED, ST_NCCI_ACTIVE, EV_NCCI_CONNECT_B3_ACTIVE_IND, NULL},
- {ST_NCCI_ALLOCATED, ST_NCCI_DISCONNECTED, EV_NCCI_DISCONNECT_B3_IND, NULL},
- {ST_NCCI_ALLOCATED, ST_NCCI_DISCONNECTING, EV_NCCI_DISCONNECT_B3_REQ, NULL},
- /* N-ACT */
- {ST_NCCI_ACTIVE, ST_NCCI_ACTIVE, EV_NCCI_RESET_B3_IND, NULL},
- {ST_NCCI_ACTIVE, ST_NCCI_RESETING, EV_NCCI_RESET_B3_REQ, NULL},
- {ST_NCCI_ACTIVE, ST_NCCI_DISCONNECTED, EV_NCCI_DISCONNECT_B3_IND, NULL},
- {ST_NCCI_ACTIVE, ST_NCCI_DISCONNECTING, EV_NCCI_DISCONNECT_B3_REQ, NULL},
- /* N-3 */
- {ST_NCCI_RESETING, ST_NCCI_ACTIVE, EV_NCCI_RESET_B3_IND, NULL},
- {ST_NCCI_RESETING, ST_NCCI_DISCONNECTED, EV_NCCI_DISCONNECT_B3_IND, NULL},
- {ST_NCCI_RESETING, ST_NCCI_DISCONNECTING, EV_NCCI_DISCONNECT_B3_REQ, NULL},
- /* N-4 */
- {ST_NCCI_DISCONNECTING, ST_NCCI_DISCONNECTED, EV_NCCI_DISCONNECT_B3_IND, NULL},
- {ST_NCCI_DISCONNECTING, ST_NCCI_PREVIOUS, EV_NCCI_DISCONNECT_B3_CONF_ERROR, NULL},
- /* N-5 */
- {ST_NCCI_DISCONNECTED, ST_NCCI_NONE, EV_NCCI_DISCONNECT_B3_RESP, n0},
- {},
-};
-
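-/*
- * Walk the NCCI transition table. ST_NCCI_PREVIOUS is special: it
- * restores the state saved in ncci->oldstate (used when a
- * DISCONNECT_B3_REQ is answered with an error CONF).
- */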
-static void ncci_change_state(capidrv_contr *card, capidrv_ncci *ncci, int event)
-{
- struct nccistatechange *p = nccitable;
- while (p->event) {
- if (ncci->state == p->actstate && p->event == event) {
- if (debugmode)
- printk(KERN_DEBUG "capidrv-%d: ncci_change_state:0x%x %d -> %d\n",
- card->contrnr, ncci->ncci, ncci->state, p->nextstate);
- if (p->nextstate == ST_NCCI_PREVIOUS) {
- ncci->state = ncci->oldstate;
- ncci->oldstate = p->actstate;
- } else {
- ncci->oldstate = p->actstate;
- ncci->state = p->nextstate;
- }
- if (p->changefunc)
- p->changefunc(card, ncci);
- return;
- }
- p++;
- }
- printk(KERN_ERR "capidrv-%d: ncci_change_state:0x%x state=%d event=%d ????\n",
- card->contrnr, ncci->ncci, ncci->state, event);
-}
-
-/* ------------------------------------------------------------------- */
-
-static inline int new_bchan(capidrv_contr *card)
-{
- int i;
- for (i = 0; i < card->nbchan; i++) {
- if (card->bchans[i].plcip == NULL) {
- card->bchans[i].disconnecting = 0;
- return i;
- }
- }
- return -1;
-}
-
-/* ------------------------------------------------------------------- */
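-/*
- * Translate a CAPI 2.0 Info/Reason value into a readable string.
- * Compiled to a stub unless CONFIG_ISDN_CAPI_CAPIDRV_VERBOSE is set.
- */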
-static char *capi_info2str(u16 reason)
-{
-#ifndef CONFIG_ISDN_CAPI_CAPIDRV_VERBOSE
- return "..";
-#else
- switch (reason) {
-
-/*-- informative values (corresponding message was processed) -----*/
- case 0x0001:
- return "NCPI not supported by current protocol, NCPI ignored";
- case 0x0002:
- return "Flags not supported by current protocol, flags ignored";
- case 0x0003:
- return "Alert already sent by another application";
-
-/*-- error information concerning CAPI_REGISTER -----*/
- case 0x1001:
- return "Too many applications";
- case 0x1002:
- return "Logical block size too small, must be at least 128 Bytes";
- case 0x1003:
- return "Buffer exceeds 64 kByte";
- case 0x1004:
- return "Message buffer size too small, must be at least 1024 Bytes";
- case 0x1005:
- return "Max. number of logical connections not supported";
- case 0x1006:
- return "Reserved";
- case 0x1007:
- return "The message could not be accepted because of an internal busy condition";
- case 0x1008:
- return "OS resource error (no memory ?)";
- case 0x1009:
- return "CAPI not installed";
- case 0x100A:
- return "Controller does not support external equipment";
- case 0x100B:
- return "Controller does only support external equipment";
-
-/*-- error information concerning message exchange functions -----*/
- case 0x1101:
- return "Illegal application number";
- case 0x1102:
- return "Illegal command or subcommand or message length less than 12 bytes";
- case 0x1103:
- return "The message could not be accepted because of a queue full condition !! The error code does not imply that CAPI cannot receive messages directed to another controller, PLCI or NCCI";
- case 0x1104:
- return "Queue is empty";
- case 0x1105:
- return "Queue overflow, a message was lost !! This indicates a configuration error. The only recovery from this error is to perform a CAPI_RELEASE";
- case 0x1106:
- return "Unknown notification parameter";
- case 0x1107:
- return "The Message could not be accepted because of an internal busy condition";
- case 0x1108:
- return "OS Resource error (no memory ?)";
- case 0x1109:
- return "CAPI not installed";
- case 0x110A:
- return "Controller does not support external equipment";
- case 0x110B:
- return "Controller does only support external equipment";
-
-/*-- error information concerning resource / coding problems -----*/
- case 0x2001:
- return "Message not supported in current state";
- case 0x2002:
- return "Illegal Controller / PLCI / NCCI";
- case 0x2003:
- return "Out of PLCI";
- case 0x2004:
- return "Out of NCCI";
- case 0x2005:
- return "Out of LISTEN";
- case 0x2006:
- return "Out of FAX resources (protocol T.30)";
- case 0x2007:
- return "Illegal message parameter coding";
-
-/*-- error information concerning requested services -----*/
- case 0x3001:
- return "B1 protocol not supported";
- case 0x3002:
- return "B2 protocol not supported";
- case 0x3003:
- return "B3 protocol not supported";
- case 0x3004:
- return "B1 protocol parameter not supported";
- case 0x3005:
- return "B2 protocol parameter not supported";
- case 0x3006:
- return "B3 protocol parameter not supported";
- case 0x3007:
- return "B protocol combination not supported";
- case 0x3008:
- return "NCPI not supported";
- case 0x3009:
- return "CIP Value unknown";
- case 0x300A:
- return "Flags not supported (reserved bits)";
- case 0x300B:
- return "Facility not supported";
- case 0x300C:
- return "Data length not supported by current protocol";
- case 0x300D:
- return "Reset procedure not supported by current protocol";
-
-/*-- information about the clearing of a physical connection -----*/
- case 0x3301:
- return "Protocol error layer 1 (broken line or B-channel removed by signalling protocol)";
- case 0x3302:
- return "Protocol error layer 2";
- case 0x3303:
- return "Protocol error layer 3";
- case 0x3304:
- return "Another application got that call";
-/*-- T.30 specific reasons -----*/
- case 0x3311:
- return "Connecting not successful (remote station is no FAX G3 machine)";
- case 0x3312:
- return "Connecting not successful (training error)";
- case 0x3313:
- return "Disconnected before transfer (remote station does not support transfer mode, e.g. resolution)";
- case 0x3314:
- return "Disconnected during transfer (remote abort)";
- case 0x3315:
- return "Disconnected during transfer (remote procedure error, e.g. unsuccessful repetition of T.30 commands)";
- case 0x3316:
- return "Disconnected during transfer (local tx data underrun)";
- case 0x3317:
- return "Disconnected during transfer (local rx data overflow)";
- case 0x3318:
- return "Disconnected during transfer (local abort)";
- case 0x3319:
- return "Illegal parameter coding (e.g. SFF coding error)";
-
-/*-- disconnect causes from the network according to ETS 300 102-1/Q.931 -----*/
- case 0x3481: return "Unallocated (unassigned) number";
- case 0x3482: return "No route to specified transit network";
- case 0x3483: return "No route to destination";
- case 0x3486: return "Channel unacceptable";
- case 0x3487:
- return "Call awarded and being delivered in an established channel";
- case 0x3490: return "Normal call clearing";
- case 0x3491: return "User busy";
- case 0x3492: return "No user responding";
- case 0x3493: return "No answer from user (user alerted)";
- case 0x3495: return "Call rejected";
- case 0x3496: return "Number changed";
- case 0x349A: return "Non-selected user clearing";
- case 0x349B: return "Destination out of order";
- case 0x349C: return "Invalid number format";
- case 0x349D: return "Facility rejected";
- case 0x349E: return "Response to STATUS ENQUIRY";
- case 0x349F: return "Normal, unspecified";
- case 0x34A2: return "No circuit / channel available";
- case 0x34A6: return "Network out of order";
- case 0x34A9: return "Temporary failure";
- case 0x34AA: return "Switching equipment congestion";
- case 0x34AB: return "Access information discarded";
- case 0x34AC: return "Requested circuit / channel not available";
- case 0x34AF: return "Resources unavailable, unspecified";
- case 0x34B1: return "Quality of service unavailable";
- case 0x34B2: return "Requested facility not subscribed";
- case 0x34B9: return "Bearer capability not authorized";
- case 0x34BA: return "Bearer capability not presently available";
- case 0x34BF: return "Service or option not available, unspecified";
- case 0x34C1: return "Bearer capability not implemented";
- case 0x34C2: return "Channel type not implemented";
- case 0x34C5: return "Requested facility not implemented";
- case 0x34C6: return "Only restricted digital information bearer capability is available";
- case 0x34CF: return "Service or option not implemented, unspecified";
- case 0x34D1: return "Invalid call reference value";
- case 0x34D2: return "Identified channel does not exist";
- case 0x34D3: return "A suspended call exists, but this call identity does not";
- case 0x34D4: return "Call identity in use";
- case 0x34D5: return "No call suspended";
- case 0x34D6: return "Call having the requested call identity has been cleared";
- case 0x34D8: return "Incompatible destination";
- case 0x34DB: return "Invalid transit network selection";
- case 0x34DF: return "Invalid message, unspecified";
- case 0x34E0: return "Mandatory information element is missing";
- case 0x34E1: return "Message type non-existent or not implemented";
- case 0x34E2: return "Message not compatible with call state or message type non-existent or not implemented";
- case 0x34E3: return "Information element non-existent or not implemented";
- case 0x34E4: return "Invalid information element contents";
- case 0x34E5: return "Message not compatible with call state";
- case 0x34E6: return "Recovery on timer expiry";
- case 0x34EF: return "Protocol error, unspecified";
- case 0x34FF: return "Interworking, unspecified";
-
- default: return "No additional information";
- }
-#endif
-}
-
-static void handle_controller(_cmsg *cmsg)
-{
- capidrv_contr *card = findcontrbynumber(cmsg->adr.adrController & 0x7f);
-
- if (!card) {
- printk(KERN_ERR "capidrv: %s from unknown controller 0x%x\n",
- capi_cmd2str(cmsg->Command, cmsg->Subcommand),
- cmsg->adr.adrController & 0x7f);
- return;
- }
- switch (CAPICMD(cmsg->Command, cmsg->Subcommand)) {
-
- case CAPI_LISTEN_CONF: /* Controller */
- if (debugmode)
- printk(KERN_DEBUG "capidrv-%d: listenconf Info=0x%4x (%s) cipmask=0x%x\n",
- card->contrnr, cmsg->Info, capi_info2str(cmsg->Info), card->cipmask);
- if (cmsg->Info) {
- listen_change_state(card, EV_LISTEN_CONF_ERROR);
- } else if (card->cipmask == 0) {
- listen_change_state(card, EV_LISTEN_CONF_EMPTY);
- } else {
- listen_change_state(card, EV_LISTEN_CONF_OK);
- }
- break;
-
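- /*
- * ManuID 0x214D5641 ("AVM!" in little-endian ASCII) identifies AVM's
- * manufacturer-specific messages; Class 0 / Function 1 carries
- * D-channel trace data, see enable_dchannel_trace() below.
- */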
- case CAPI_MANUFACTURER_IND: /* Controller */
- if (cmsg->ManuID == 0x214D5641
- && cmsg->Class == 0
- && cmsg->Function == 1) {
- u8 *data = cmsg->ManuData + 3;
- u16 len = cmsg->ManuData[0];
- u16 layer;
- int direction;
- if (len == 255) {
- len = (cmsg->ManuData[1] | (cmsg->ManuData[2] << 8));
- data += 2;
- }
- len -= 2;
- layer = ((*(data - 1)) << 8) | *(data - 2);
- if (layer & 0x300)
- direction = (layer & 0x200) ? 0 : 1;
- else direction = (layer & 0x800) ? 0 : 1;
- if (layer & 0x0C00) {
- if ((layer & 0xff) == 0x80) {
- handle_dtrace_data(card, direction, 1, data, len);
- break;
- }
- } else if ((layer & 0xff) < 0x80) {
- handle_dtrace_data(card, direction, 0, data, len);
- break;
- }
- printk(KERN_INFO "capidrv-%d: %s from controller 0x%x layer 0x%x, ignored\n",
- card->contrnr,
- capi_cmd2str(cmsg->Command, cmsg->Subcommand),
- cmsg->adr.adrController, layer);
- break;
- }
- goto ignored;
- case CAPI_MANUFACTURER_CONF: /* Controller */
- if (cmsg->ManuID == 0x214D5641) {
- char *s = NULL;
- switch (cmsg->Class) {
- case 0: break;
- case 1: s = "unknown class"; break;
- case 2: s = "unknown function"; break;
- default: s = "unknown error"; break;
- }
- if (s)
- printk(KERN_INFO "capidrv-%d: %s from controller 0x%x function %d: %s\n",
- card->contrnr,
- capi_cmd2str(cmsg->Command, cmsg->Subcommand),
- cmsg->adr.adrController,
- cmsg->Function, s);
- break;
- }
- goto ignored;
- case CAPI_FACILITY_IND: /* Controller/plci/ncci */
- goto ignored;
- case CAPI_FACILITY_CONF: /* Controller/plci/ncci */
- goto ignored;
- case CAPI_INFO_IND: /* Controller/plci */
- goto ignored;
- case CAPI_INFO_CONF: /* Controller/plci */
- goto ignored;
-
- default:
- printk(KERN_ERR "capidrv-%d: got %s from controller 0x%x ???",
- card->contrnr,
- capi_cmd2str(cmsg->Command, cmsg->Subcommand),
- cmsg->adr.adrController);
- }
- return;
-
-ignored:
- printk(KERN_INFO "capidrv-%d: %s from controller 0x%x ignored\n",
- card->contrnr,
- capi_cmd2str(cmsg->Command, cmsg->Subcommand),
- cmsg->adr.adrController);
-}
-
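-/*
- * CONNECT_IND: allocate a free B channel and a PLCI, report the call to
- * the ISDN layer via ISDN_STAT_ICALL and answer it according to the
- * statcallb() return value (ignore, alert or reject).
- */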
-static void handle_incoming_call(capidrv_contr *card, _cmsg *cmsg)
-{
- capidrv_plci *plcip;
- capidrv_bchan *bchan;
- isdn_ctrl cmd;
- int chan;
-
- if ((chan = new_bchan(card)) == -1) {
- printk(KERN_ERR "capidrv-%d: incoming call, but no free bchan?\n", card->contrnr);
- return;
- }
- bchan = &card->bchans[chan];
- if ((plcip = new_plci(card, chan)) == NULL) {
- printk(KERN_ERR "capidrv-%d: incoming call: no memory, sorry.\n", card->contrnr);
- return;
- }
- bchan->incoming = 1;
- plcip->plci = cmsg->adr.adrPLCI;
- plci_change_state(card, plcip, EV_PLCI_CONNECT_IND);
-
- cmd.command = ISDN_STAT_ICALL;
- cmd.driver = card->myid;
- cmd.arg = chan;
- memset(&cmd.parm.setup, 0, sizeof(cmd.parm.setup));
- strncpy(cmd.parm.setup.phone,
- cmsg->CallingPartyNumber + 3,
- cmsg->CallingPartyNumber[0] - 2);
- strncpy(cmd.parm.setup.eazmsn,
- cmsg->CalledPartyNumber + 2,
- cmsg->CalledPartyNumber[0] - 1);
- cmd.parm.setup.si1 = cip2si1(cmsg->CIPValue);
- cmd.parm.setup.si2 = cip2si2(cmsg->CIPValue);
- cmd.parm.setup.plan = cmsg->CallingPartyNumber[1];
- cmd.parm.setup.screen = cmsg->CallingPartyNumber[2];
-
- printk(KERN_INFO "capidrv-%d: incoming call %s,%d,%d,%s\n",
- card->contrnr,
- cmd.parm.setup.phone,
- cmd.parm.setup.si1,
- cmd.parm.setup.si2,
- cmd.parm.setup.eazmsn);
-
- if (cmd.parm.setup.si1 == 1 && cmd.parm.setup.si2 != 0) {
- printk(KERN_INFO "capidrv-%d: patching si2=%d to 0 for VBOX\n",
- card->contrnr,
- cmd.parm.setup.si2);
- cmd.parm.setup.si2 = 0;
- }
-
- switch (card->interface.statcallb(&cmd)) {
- case 0:
- case 3:
- /* No device matched this call,
- * and isdn_common.c has sent a HANGUP command,
- * which is ignored in state ST_PLCI_INCOMING,
- * so we send a RESP to ignore the call.
- */
- capi_cmsg_answer(cmsg);
- cmsg->Reject = 1; /* ignore */
- plci_change_state(card, plcip, EV_PLCI_CONNECT_REJECT);
- send_message(card, cmsg);
- printk(KERN_INFO "capidrv-%d: incoming call %s,%d,%d,%s ignored\n",
- card->contrnr,
- cmd.parm.setup.phone,
- cmd.parm.setup.si1,
- cmd.parm.setup.si2,
- cmd.parm.setup.eazmsn);
- break;
- case 1:
- /* At least one device matched this call (RING on ttyI).
- * The HL driver may send ALERTING on the D-channel in
- * this case.
- * This really means: RING on ttyI, or a net interface
- * has already accepted this call.
- *
- * If the call was accepted, the state has already changed
- * and CONNECT_RESP has already been sent.
- */
- if (plcip->state == ST_PLCI_INCOMING) {
- printk(KERN_INFO "capidrv-%d: incoming call %s,%d,%d,%s tty alerting\n",
- card->contrnr,
- cmd.parm.setup.phone,
- cmd.parm.setup.si1,
- cmd.parm.setup.si2,
- cmd.parm.setup.eazmsn);
- capi_fill_ALERT_REQ(cmsg,
- global.ap.applid,
- card->msgid++,
- plcip->plci, /* adr */
- NULL,/* BChannelinformation */
- NULL,/* Keypadfacility */
- NULL,/* Useruserdata */
- NULL /* Facilitydataarray */
- );
- plcip->msgid = cmsg->Messagenumber;
- send_message(card, cmsg);
- } else {
- printk(KERN_INFO "capidrv-%d: incoming call %s,%d,%d,%s on netdev\n",
- card->contrnr,
- cmd.parm.setup.phone,
- cmd.parm.setup.si1,
- cmd.parm.setup.si2,
- cmd.parm.setup.eazmsn);
- }
- break;
-
- case 2: /* Call will be rejected. */
- capi_cmsg_answer(cmsg);
- cmsg->Reject = 2; /* reject call, normal call clearing */
- plci_change_state(card, plcip, EV_PLCI_CONNECT_REJECT);
- send_message(card, cmsg);
- break;
-
- default:
- /* An error happened. (Invalid parameters for example.) */
- capi_cmsg_answer(cmsg);
- cmsg->Reject = 8; /* reject call,
- destination out of order */
- plci_change_state(card, plcip, EV_PLCI_CONNECT_REJECT);
- send_message(card, cmsg);
- break;
- }
- return;
-}
-
-static void handle_plci(_cmsg *cmsg)
-{
- capidrv_contr *card = findcontrbynumber(cmsg->adr.adrController & 0x7f);
- capidrv_plci *plcip;
- isdn_ctrl cmd;
- _cdebbuf *cdb;
-
- if (!card) {
- printk(KERN_ERR "capidrv: %s from unknown controller 0x%x\n",
- capi_cmd2str(cmsg->Command, cmsg->Subcommand),
- cmsg->adr.adrController & 0x7f);
- return;
- }
- switch (CAPICMD(cmsg->Command, cmsg->Subcommand)) {
-
- case CAPI_DISCONNECT_IND: /* plci */
- if (cmsg->Reason) {
- printk(KERN_INFO "capidrv-%d: %s reason 0x%x (%s) for plci 0x%x\n",
- card->contrnr,
- capi_cmd2str(cmsg->Command, cmsg->Subcommand),
- cmsg->Reason, capi_info2str(cmsg->Reason), cmsg->adr.adrPLCI);
- }
- if (!(plcip = find_plci_by_plci(card, cmsg->adr.adrPLCI))) {
- capi_cmsg_answer(cmsg);
- send_message(card, cmsg);
- goto notfound;
- }
- card->bchans[plcip->chan].disconnecting = 1;
- plci_change_state(card, plcip, EV_PLCI_DISCONNECT_IND);
- capi_cmsg_answer(cmsg);
- plci_change_state(card, plcip, EV_PLCI_DISCONNECT_RESP);
- send_message(card, cmsg);
- break;
-
- case CAPI_DISCONNECT_CONF: /* plci */
- if (cmsg->Info) {
- printk(KERN_INFO "capidrv-%d: %s info 0x%x (%s) for plci 0x%x\n",
- card->contrnr,
- capi_cmd2str(cmsg->Command, cmsg->Subcommand),
- cmsg->Info, capi_info2str(cmsg->Info),
- cmsg->adr.adrPLCI);
- }
- if (!(plcip = find_plci_by_plci(card, cmsg->adr.adrPLCI)))
- goto notfound;
-
- card->bchans[plcip->chan].disconnecting = 1;
- break;
-
- case CAPI_ALERT_CONF: /* plci */
- if (cmsg->Info) {
- printk(KERN_INFO "capidrv-%d: %s info 0x%x (%s) for plci 0x%x\n",
- card->contrnr,
- capi_cmd2str(cmsg->Command, cmsg->Subcommand),
- cmsg->Info, capi_info2str(cmsg->Info),
- cmsg->adr.adrPLCI);
- }
- break;
-
- case CAPI_CONNECT_IND: /* plci */
- handle_incoming_call(card, cmsg);
- break;
-
- case CAPI_CONNECT_CONF: /* plci */
- if (cmsg->Info) {
- printk(KERN_INFO "capidrv-%d: %s info 0x%x (%s) for plci 0x%x\n",
- card->contrnr,
- capi_cmd2str(cmsg->Command, cmsg->Subcommand),
- cmsg->Info, capi_info2str(cmsg->Info),
- cmsg->adr.adrPLCI);
- }
- if (!(plcip = find_plci_by_msgid(card, cmsg->Messagenumber)))
- goto notfound;
-
- plcip->plci = cmsg->adr.adrPLCI;
- if (cmsg->Info) {
- plci_change_state(card, plcip, EV_PLCI_CONNECT_CONF_ERROR);
- } else {
- plci_change_state(card, plcip, EV_PLCI_CONNECT_CONF_OK);
- }
- break;
-
- case CAPI_CONNECT_ACTIVE_IND: /* plci */
-
- if (!(plcip = find_plci_by_plci(card, cmsg->adr.adrPLCI)))
- goto notfound;
-
- if (card->bchans[plcip->chan].incoming) {
- capi_cmsg_answer(cmsg);
- plci_change_state(card, plcip, EV_PLCI_CONNECT_ACTIVE_IND);
- send_message(card, cmsg);
- } else {
- capidrv_ncci *nccip;
- capi_cmsg_answer(cmsg);
- send_message(card, cmsg);
-
- nccip = new_ncci(card, plcip, cmsg->adr.adrPLCI);
-
- if (!nccip) {
- printk(KERN_ERR "capidrv-%d: no mem for ncci, sorry\n", card->contrnr);
- break; /* $$$$ */
- }
- capi_fill_CONNECT_B3_REQ(cmsg,
- global.ap.applid,
- card->msgid++,
- plcip->plci, /* adr */
- NULL /* NCPI */
- );
- nccip->msgid = cmsg->Messagenumber;
- plci_change_state(card, plcip,
- EV_PLCI_CONNECT_ACTIVE_IND);
- ncci_change_state(card, nccip, EV_NCCI_CONNECT_B3_REQ);
- send_message(card, cmsg);
- cmd.command = ISDN_STAT_DCONN;
- cmd.driver = card->myid;
- cmd.arg = plcip->chan;
- card->interface.statcallb(&cmd);
- }
- break;
-
- case CAPI_INFO_IND: /* Controller/plci */
-
- if (!(plcip = find_plci_by_plci(card, cmsg->adr.adrPLCI)))
- goto notfound;
-
- if (cmsg->InfoNumber == 0x4000) {
- if (cmsg->InfoElement[0] == 4) {
- cmd.command = ISDN_STAT_CINF;
- cmd.driver = card->myid;
- cmd.arg = plcip->chan;
- sprintf(cmd.parm.num, "%lu",
- (unsigned long)
- ((u32) cmsg->InfoElement[1]
- | ((u32) (cmsg->InfoElement[2]) << 8)
- | ((u32) (cmsg->InfoElement[3]) << 16)
- | ((u32) (cmsg->InfoElement[4]) << 24)));
- card->interface.statcallb(&cmd);
- break;
- }
- }
- cdb = capi_cmsg2str(cmsg);
- if (cdb) {
- printk(KERN_WARNING "capidrv-%d: %s\n",
- card->contrnr, cdb->buf);
- cdebbuf_free(cdb);
- } else
- printk(KERN_WARNING "capidrv-%d: CAPI_INFO_IND InfoNumber %x not handled\n",
- card->contrnr, cmsg->InfoNumber);
-
- break;
-
- case CAPI_CONNECT_ACTIVE_CONF: /* plci */
- goto ignored;
- case CAPI_SELECT_B_PROTOCOL_CONF: /* plci */
- goto ignored;
- case CAPI_FACILITY_IND: /* Controller/plci/ncci */
- goto ignored;
- case CAPI_FACILITY_CONF: /* Controller/plci/ncci */
- goto ignored;
-
- case CAPI_INFO_CONF: /* Controller/plci */
- goto ignored;
-
- default:
- printk(KERN_ERR "capidrv-%d: got %s for plci 0x%x ???",
- card->contrnr,
- capi_cmd2str(cmsg->Command, cmsg->Subcommand),
- cmsg->adr.adrPLCI);
- }
- return;
-ignored:
- printk(KERN_INFO "capidrv-%d: %s for plci 0x%x ignored\n",
- card->contrnr,
- capi_cmd2str(cmsg->Command, cmsg->Subcommand),
- cmsg->adr.adrPLCI);
- return;
-notfound:
- printk(KERN_ERR "capidrv-%d: %s: plci 0x%x not found\n",
- card->contrnr,
- capi_cmd2str(cmsg->Command, cmsg->Subcommand),
- cmsg->adr.adrPLCI);
- return;
-}
-
-static void handle_ncci(_cmsg *cmsg)
-{
- capidrv_contr *card = findcontrbynumber(cmsg->adr.adrController & 0x7f);
- capidrv_plci *plcip;
- capidrv_ncci *nccip;
- isdn_ctrl cmd;
- int len;
-
- if (!card) {
- printk(KERN_ERR "capidrv: %s from unknown controller 0x%x\n",
- capi_cmd2str(cmsg->Command, cmsg->Subcommand),
- cmsg->adr.adrController & 0x7f);
- return;
- }
- switch (CAPICMD(cmsg->Command, cmsg->Subcommand)) {
-
- case CAPI_CONNECT_B3_ACTIVE_IND: /* ncci */
- if (!(nccip = find_ncci(card, cmsg->adr.adrNCCI)))
- goto notfound;
-
- capi_cmsg_answer(cmsg);
- ncci_change_state(card, nccip, EV_NCCI_CONNECT_B3_ACTIVE_IND);
- send_message(card, cmsg);
-
- cmd.command = ISDN_STAT_BCONN;
- cmd.driver = card->myid;
- cmd.arg = nccip->chan;
- card->interface.statcallb(&cmd);
-
- printk(KERN_INFO "capidrv-%d: chan %d up with ncci 0x%x\n",
- card->contrnr, nccip->chan, nccip->ncci);
- break;
-
- case CAPI_CONNECT_B3_ACTIVE_CONF: /* ncci */
- goto ignored;
-
- case CAPI_CONNECT_B3_IND: /* ncci */
-
- plcip = find_plci_by_ncci(card, cmsg->adr.adrNCCI);
- if (plcip) {
- nccip = new_ncci(card, plcip, cmsg->adr.adrNCCI);
- if (nccip) {
- ncci_change_state(card, nccip, EV_NCCI_CONNECT_B3_IND);
- capi_fill_CONNECT_B3_RESP(cmsg,
- global.ap.applid,
- card->msgid++,
- nccip->ncci, /* adr */
- 0, /* Reject */
- NULL /* NCPI */
- );
- ncci_change_state(card, nccip, EV_NCCI_CONNECT_B3_RESP);
- send_message(card, cmsg);
- break;
- }
- printk(KERN_ERR "capidrv-%d: no mem for ncci, sorry\n", card->contrnr);
- } else {
- printk(KERN_ERR "capidrv-%d: %s: plci for ncci 0x%x not found\n",
- card->contrnr,
- capi_cmd2str(cmsg->Command, cmsg->Subcommand),
- cmsg->adr.adrNCCI);
- }
- capi_fill_CONNECT_B3_RESP(cmsg,
- global.ap.applid,
- card->msgid++,
- cmsg->adr.adrNCCI,
- 2, /* Reject */
- NULL /* NCPI */
- );
- send_message(card, cmsg);
- break;
-
- case CAPI_CONNECT_B3_CONF: /* ncci */
-
- if (!(nccip = find_ncci_by_msgid(card,
- cmsg->adr.adrNCCI,
- cmsg->Messagenumber)))
- goto notfound;
-
- nccip->ncci = cmsg->adr.adrNCCI;
- if (cmsg->Info) {
- printk(KERN_INFO "capidrv-%d: %s info 0x%x (%s) for ncci 0x%x\n",
- card->contrnr,
- capi_cmd2str(cmsg->Command, cmsg->Subcommand),
- cmsg->Info, capi_info2str(cmsg->Info),
- cmsg->adr.adrNCCI);
- }
-
- if (cmsg->Info)
- ncci_change_state(card, nccip, EV_NCCI_CONNECT_B3_CONF_ERROR);
- else
- ncci_change_state(card, nccip, EV_NCCI_CONNECT_B3_CONF_OK);
- break;
-
- case CAPI_CONNECT_B3_T90_ACTIVE_IND: /* ncci */
- capi_cmsg_answer(cmsg);
- send_message(card, cmsg);
- break;
-
- case CAPI_DATA_B3_IND: /* ncci */
- /* handled in handle_data() */
- goto ignored;
-
- case CAPI_DATA_B3_CONF: /* ncci */
- if (cmsg->Info) {
- printk(KERN_WARNING "CAPI_DATA_B3_CONF: Info %x - %s\n",
- cmsg->Info, capi_info2str(cmsg->Info));
- }
- if (!(nccip = find_ncci(card, cmsg->adr.adrNCCI)))
- goto notfound;
-
- len = capidrv_del_ack(nccip, cmsg->DataHandle);
- if (len < 0)
- break;
- cmd.command = ISDN_STAT_BSENT;
- cmd.driver = card->myid;
- cmd.arg = nccip->chan;
- cmd.parm.length = len;
- card->interface.statcallb(&cmd);
- break;
-
- case CAPI_DISCONNECT_B3_IND: /* ncci */
- if (!(nccip = find_ncci(card, cmsg->adr.adrNCCI)))
- goto notfound;
-
- card->bchans[nccip->chan].disconnecting = 1;
- ncci_change_state(card, nccip, EV_NCCI_DISCONNECT_B3_IND);
- capi_cmsg_answer(cmsg);
- ncci_change_state(card, nccip, EV_NCCI_DISCONNECT_B3_RESP);
- send_message(card, cmsg);
- break;
-
- case CAPI_DISCONNECT_B3_CONF: /* ncci */
- if (!(nccip = find_ncci(card, cmsg->adr.adrNCCI)))
- goto notfound;
- if (cmsg->Info) {
- printk(KERN_INFO "capidrv-%d: %s info 0x%x (%s) for ncci 0x%x\n",
- card->contrnr,
- capi_cmd2str(cmsg->Command, cmsg->Subcommand),
- cmsg->Info, capi_info2str(cmsg->Info),
- cmsg->adr.adrNCCI);
- ncci_change_state(card, nccip, EV_NCCI_DISCONNECT_B3_CONF_ERROR);
- }
- break;
-
- case CAPI_RESET_B3_IND: /* ncci */
- if (!(nccip = find_ncci(card, cmsg->adr.adrNCCI)))
- goto notfound;
- ncci_change_state(card, nccip, EV_NCCI_RESET_B3_IND);
- capi_cmsg_answer(cmsg);
- send_message(card, cmsg);
- break;
-
- case CAPI_RESET_B3_CONF: /* ncci */
- goto ignored; /* $$$$ */
-
- case CAPI_FACILITY_IND: /* Controller/plci/ncci */
- goto ignored;
- case CAPI_FACILITY_CONF: /* Controller/plci/ncci */
- goto ignored;
-
- default:
- printk(KERN_ERR "capidrv-%d: got %s for ncci 0x%x ???",
- card->contrnr,
- capi_cmd2str(cmsg->Command, cmsg->Subcommand),
- cmsg->adr.adrNCCI);
- }
- return;
-ignored:
- printk(KERN_INFO "capidrv-%d: %s for ncci 0x%x ignored\n",
- card->contrnr,
- capi_cmd2str(cmsg->Command, cmsg->Subcommand),
- cmsg->adr.adrNCCI);
- return;
-notfound:
- printk(KERN_ERR "capidrv-%d: %s: ncci 0x%x not found\n",
- card->contrnr,
- capi_cmd2str(cmsg->Command, cmsg->Subcommand),
- cmsg->adr.adrNCCI);
-}
-
-
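-/*
- * DATA_B3_IND: strip the CAPI header from the skb, hand the payload to
- * the ISDN layer and acknowledge the message with a DATA_B3_RESP.
- */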
-static void handle_data(_cmsg *cmsg, struct sk_buff *skb)
-{
- capidrv_contr *card = findcontrbynumber(cmsg->adr.adrController & 0x7f);
- capidrv_ncci *nccip;
-
- if (!card) {
- printk(KERN_ERR "capidrv: %s from unknown controller 0x%x\n",
- capi_cmd2str(cmsg->Command, cmsg->Subcommand),
- cmsg->adr.adrController & 0x7f);
- kfree_skb(skb);
- return;
- }
- if (!(nccip = find_ncci(card, cmsg->adr.adrNCCI))) {
- printk(KERN_ERR "capidrv-%d: %s: ncci 0x%x not found\n",
- card->contrnr,
- capi_cmd2str(cmsg->Command, cmsg->Subcommand),
- cmsg->adr.adrNCCI);
- kfree_skb(skb);
- return;
- }
- (void) skb_pull(skb, CAPIMSG_LEN(skb->data));
- card->interface.rcvcallb_skb(card->myid, nccip->chan, skb);
- capi_cmsg_answer(cmsg);
- send_message(card, cmsg);
-}
-
-static _cmsg s_cmsg;
-
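-/*
- * Entry point for all messages from the CAPI layer. DATA_B3_IND keeps
- * its skb and goes to handle_data(); everything else is dispatched by
- * CAPI address: controller, PLCI or NCCI.
- */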
-static void capidrv_recv_message(struct capi20_appl *ap, struct sk_buff *skb)
-{
- if (capi_message2cmsg(&s_cmsg, skb->data)) {
- printk(KERN_ERR "capidrv: applid=%d: received invalid message\n",
- ap->applid);
- kfree_skb(skb);
- return;
- }
- if (debugmode > 3) {
- _cdebbuf *cdb = capi_cmsg2str(&s_cmsg);
-
- if (cdb) {
- printk(KERN_DEBUG "%s: applid=%d %s\n", __func__,
- ap->applid, cdb->buf);
- cdebbuf_free(cdb);
- } else
- printk(KERN_DEBUG "%s: applid=%d %s not traced\n",
- __func__, ap->applid,
- capi_cmd2str(s_cmsg.Command, s_cmsg.Subcommand));
- }
- if (s_cmsg.Command == CAPI_DATA_B3
- && s_cmsg.Subcommand == CAPI_IND) {
- handle_data(&s_cmsg, skb);
- return;
- }
- if ((s_cmsg.adr.adrController & 0xffffff00) == 0)
- handle_controller(&s_cmsg);
- else if ((s_cmsg.adr.adrPLCI & 0xffff0000) == 0)
- handle_plci(&s_cmsg);
- else
- handle_ncci(&s_cmsg);
- /*
- * data of skb used in s_cmsg,
- * free data when s_cmsg is not used again
- * thanks to Lars Heete <hel@admin.de>
- */
- kfree_skb(skb);
-}
-
-/* ------------------------------------------------------------------- */
-
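-/* append one byte to the circular q931 trace buffer (read by if_readstat()) */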
-#define PUTBYTE_TO_STATUS(card, byte) \
- do { \
- *(card)->q931_write++ = (byte); \
- if ((card)->q931_write > (card)->q931_end) \
- (card)->q931_write = (card)->q931_buf; \
- } while (0)
-
-static void handle_dtrace_data(capidrv_contr *card,
- int send, int level2, u8 *data, u16 len)
-{
- u8 *p, *end;
- isdn_ctrl cmd;
-
- if (!len) {
- printk(KERN_DEBUG "capidrv-%d: handle_dtrace_data: len == %d\n",
- card->contrnr, len);
- return;
- }
-
- if (level2) {
- PUTBYTE_TO_STATUS(card, 'D');
- PUTBYTE_TO_STATUS(card, '2');
- PUTBYTE_TO_STATUS(card, send ? '>' : '<');
- PUTBYTE_TO_STATUS(card, ':');
- } else {
- PUTBYTE_TO_STATUS(card, 'D');
- PUTBYTE_TO_STATUS(card, '3');
- PUTBYTE_TO_STATUS(card, send ? '>' : '<');
- PUTBYTE_TO_STATUS(card, ':');
- }
-
- for (p = data, end = data + len; p < end; p++) {
- PUTBYTE_TO_STATUS(card, ' ');
- PUTBYTE_TO_STATUS(card, hex_asc_hi(*p));
- PUTBYTE_TO_STATUS(card, hex_asc_lo(*p));
- }
- PUTBYTE_TO_STATUS(card, '\n');
-
- cmd.command = ISDN_STAT_STAVAIL;
- cmd.driver = card->myid;
- cmd.arg = len * 3 + 5;
- card->interface.statcallb(&cmd);
-}
-
-/* ------------------------------------------------------------------- */
-
-static _cmsg cmdcmsg;
-
-static int capidrv_ioctl(isdn_ctrl *c, capidrv_contr *card)
-{
- switch (c->arg) {
- case 1:
- debugmode = (int)(*((unsigned int *)c->parm.num));
- printk(KERN_DEBUG "capidrv-%d: debugmode=%d\n",
- card->contrnr, debugmode);
- return 0;
- default:
- printk(KERN_DEBUG "capidrv-%d: capidrv_ioctl(%ld) called ??\n",
- card->contrnr, c->arg);
- return -EINVAL;
- }
- return -EINVAL;
-}
-
-/*
- * Handle leased lines (CAPI-Bundling)
- */
-
-struct internal_bchannelinfo {
- unsigned short channelalloc;
- unsigned short operation;
- unsigned char cmask[31];
-};
-
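-/*
- * Parse a leased-line number of the form "FV:[p|a] n[,n-m,...]" into a
- * bitmask of B channels (bits 1..30) and an active ('a') / passive
- * ('p') flag. Returns 0 on success, 1 if the string is not an "FV:"
- * specification, and a negative value on parse errors.
- */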
-static int decodeFVteln(char *teln, unsigned long *bmaskp, int *activep)
-{
- unsigned long bmask = 0;
- int active = !0;
- char *s;
- int i;
-
- if (strncmp(teln, "FV:", 3) != 0)
- return 1;
- s = teln + 3;
- while (*s && *s == ' ') s++;
- if (!*s) return -2;
- if (*s == 'p' || *s == 'P') {
- active = 0;
- s++;
- }
- if (*s == 'a' || *s == 'A') {
- active = !0;
- s++;
- }
- while (*s) {
- int digit1 = 0;
- int digit2 = 0;
- char *endp;
-
- digit1 = simple_strtoul(s, &endp, 10);
- if (s == endp)
- return -3;
- s = endp;
-
- if (digit1 <= 0 || digit1 > 30) return -4;
- if (*s == 0 || *s == ',' || *s == ' ') {
- bmask |= (1 << digit1);
- digit1 = 0;
- if (*s) s++;
- continue;
- }
- if (*s != '-') return -5;
- s++;
-
- digit2 = simple_strtoul(s, &endp, 10);
- if (s == endp)
- return -3;
- s = endp;
-
- if (digit2 <= 0 || digit2 > 30) return -4;
- if (*s == 0 || *s == ',' || *s == ' ') {
- if (digit1 > digit2)
- for (i = digit2; i <= digit1; i++)
- bmask |= (1 << i);
- else
- for (i = digit1; i <= digit2; i++)
- bmask |= (1 << i);
- digit1 = digit2 = 0;
- if (*s) s++;
- continue;
- }
- return -6;
- }
- if (activep) *activep = active;
- if (bmaskp) *bmaskp = bmask;
- return 0;
-}
-
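-/*
- * Build the CAPI BChannelinformation element for a leased line:
- * channel allocation mode, DTE/DCE operation depending on the active
- * flag, and a 31-byte channel mask with the D channel excluded.
- */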
-static int FVteln2capi20(char *teln, u8 AdditionalInfo[1 + 2 + 2 + 31])
-{
- unsigned long bmask;
- int active;
- int rc, i;
-
- rc = decodeFVteln(teln, &bmask, &active);
- if (rc) return rc;
- /* Length */
- AdditionalInfo[0] = 2 + 2 + 31;
- /* Channel: 3 => use channel allocation */
- AdditionalInfo[1] = 3; AdditionalInfo[2] = 0;
- /* Operation: 0 => DTE mode, 1 => DCE mode */
- if (active) {
- AdditionalInfo[3] = 0; AdditionalInfo[4] = 0;
- } else {
- AdditionalInfo[3] = 1; AdditionalInfo[4] = 0;
- }
- /* Channel mask array */
- AdditionalInfo[5] = 0; /* no D-Channel */
- for (i = 1; i <= 30; i++)
- AdditionalInfo[5 + i] = (bmask & (1 << i)) ? 0xff : 0;
- return 0;
-}
-
-static int capidrv_command(isdn_ctrl *c, capidrv_contr *card)
-{
- isdn_ctrl cmd;
- struct capidrv_bchan *bchan;
- struct capidrv_plci *plcip;
- u8 AdditionalInfo[1 + 2 + 2 + 31];
- int rc, isleasedline = 0;
-
- if (c->command == ISDN_CMD_IOCTL)
- return capidrv_ioctl(c, card);
-
- switch (c->command) {
- case ISDN_CMD_DIAL: {
- u8 calling[ISDN_MSNLEN + 3];
- u8 called[ISDN_MSNLEN + 2];
-
- if (debugmode)
- printk(KERN_DEBUG "capidrv-%d: ISDN_CMD_DIAL(ch=%ld,\"%s,%d,%d,%s\")\n",
- card->contrnr,
- c->arg,
- c->parm.setup.phone,
- c->parm.setup.si1,
- c->parm.setup.si2,
- c->parm.setup.eazmsn);
-
- bchan = &card->bchans[c->arg % card->nbchan];
-
- if (bchan->plcip) {
- printk(KERN_ERR "capidrv-%d: dial ch=%ld,\"%s,%d,%d,%s\" in use (plci=0x%x)\n",
- card->contrnr,
- c->arg,
- c->parm.setup.phone,
- c->parm.setup.si1,
- c->parm.setup.si2,
- c->parm.setup.eazmsn,
- bchan->plcip->plci);
- return 0;
- }
- bchan->si1 = c->parm.setup.si1;
- bchan->si2 = c->parm.setup.si2;
-
- strncpy(bchan->num, c->parm.setup.phone, sizeof(bchan->num));
- strncpy(bchan->mynum, c->parm.setup.eazmsn, sizeof(bchan->mynum));
- rc = FVteln2capi20(bchan->num, AdditionalInfo);
- isleasedline = (rc == 0);
- if (rc < 0)
- printk(KERN_ERR "capidrv-%d: WARNING: invalid leased line definition \"%s\"\n", card->contrnr, bchan->num);
-
- if (isleasedline) {
- calling[0] = 0;
- called[0] = 0;
- if (debugmode)
- printk(KERN_DEBUG "capidrv-%d: connecting leased line\n", card->contrnr);
- } else {
- calling[0] = strlen(bchan->mynum) + 2;
- calling[1] = 0;
- calling[2] = 0x80;
- strncpy(calling + 3, bchan->mynum, ISDN_MSNLEN);
- called[0] = strlen(bchan->num) + 1;
- called[1] = 0x80;
- strncpy(called + 2, bchan->num, ISDN_MSNLEN);
- }
-
- capi_fill_CONNECT_REQ(&cmdcmsg,
- global.ap.applid,
- card->msgid++,
- card->contrnr, /* adr */
- si2cip(bchan->si1, bchan->si2), /* cipvalue */
- called, /* CalledPartyNumber */
- calling, /* CallingPartyNumber */
- NULL, /* CalledPartySubaddress */
- NULL, /* CallingPartySubaddress */
- b1prot(bchan->l2, bchan->l3), /* B1protocol */
- b2prot(bchan->l2, bchan->l3), /* B2protocol */
- b3prot(bchan->l2, bchan->l3), /* B3protocol */
- b1config(bchan->l2, bchan->l3), /* B1configuration */
- NULL, /* B2configuration */
- NULL, /* B3configuration */
- NULL, /* BC */
- NULL, /* LLC */
- NULL, /* HLC */
- /* BChannelinformation */
- isleasedline ? AdditionalInfo : NULL,
- NULL, /* Keypadfacility */
- NULL, /* Useruserdata */
- NULL /* Facilitydataarray */
- );
- if ((plcip = new_plci(card, (c->arg % card->nbchan))) == NULL) {
- cmd.command = ISDN_STAT_DHUP;
- cmd.driver = card->myid;
- cmd.arg = (c->arg % card->nbchan);
- card->interface.statcallb(&cmd);
- return -1;
- }
- plcip->msgid = cmdcmsg.Messagenumber;
- plcip->leasedline = isleasedline;
- plci_change_state(card, plcip, EV_PLCI_CONNECT_REQ);
- send_message(card, &cmdcmsg);
- return 0;
- }
-
- case ISDN_CMD_ACCEPTD:
-
- bchan = &card->bchans[c->arg % card->nbchan];
- if (debugmode)
- printk(KERN_DEBUG "capidrv-%d: ISDN_CMD_ACCEPTD(ch=%ld) l2=%d l3=%d\n",
- card->contrnr,
- c->arg, bchan->l2, bchan->l3);
-
- capi_fill_CONNECT_RESP(&cmdcmsg,
- global.ap.applid,
- card->msgid++,
- bchan->plcip->plci, /* adr */
- 0, /* Reject */
- b1prot(bchan->l2, bchan->l3), /* B1protocol */
- b2prot(bchan->l2, bchan->l3), /* B2protocol */
- b3prot(bchan->l2, bchan->l3), /* B3protocol */
- b1config(bchan->l2, bchan->l3), /* B1configuration */
- NULL, /* B2configuration */
- NULL, /* B3configuration */
- NULL, /* ConnectedNumber */
- NULL, /* ConnectedSubaddress */
- NULL, /* LLC */
- NULL, /* BChannelinformation */
- NULL, /* Keypadfacility */
- NULL, /* Useruserdata */
- NULL /* Facilitydataarray */
- );
- if (capi_cmsg2message(&cmdcmsg, cmdcmsg.buf)) {
- printk(KERN_ERR "capidrv-%d: capidrv_command: parser failure\n",
- card->contrnr);
- return -EINVAL;
- }
- plci_change_state(card, bchan->plcip, EV_PLCI_CONNECT_RESP);
- send_message(card, &cmdcmsg);
- return 0;
-
- case ISDN_CMD_ACCEPTB:
- if (debugmode)
- printk(KERN_DEBUG "capidrv-%d: ISDN_CMD_ACCEPTB(ch=%ld)\n",
- card->contrnr,
- c->arg);
- return -ENOSYS;
-
- case ISDN_CMD_HANGUP:
- if (debugmode)
- printk(KERN_DEBUG "capidrv-%d: ISDN_CMD_HANGUP(ch=%ld)\n",
- card->contrnr,
- c->arg);
- bchan = &card->bchans[c->arg % card->nbchan];
-
- if (bchan->disconnecting) {
- if (debugmode)
- printk(KERN_DEBUG "capidrv-%d: chan %ld already disconnecting ...\n",
- card->contrnr,
- c->arg);
- return 0;
- }
- if (bchan->nccip) {
- bchan->disconnecting = 1;
- capi_fill_DISCONNECT_B3_REQ(&cmdcmsg,
- global.ap.applid,
- card->msgid++,
- bchan->nccip->ncci,
- NULL /* NCPI */
- );
- ncci_change_state(card, bchan->nccip, EV_NCCI_DISCONNECT_B3_REQ);
- send_message(card, &cmdcmsg);
- return 0;
- } else if (bchan->plcip) {
- if (bchan->plcip->state == ST_PLCI_INCOMING) {
- /*
- * just ignore; we are called from
- * isdn_status_callback(),
- * which will return 0 or 2; this is handled
- * by the CONNECT_IND handler
- */
- bchan->disconnecting = 1;
- return 0;
- } else if (bchan->plcip->plci) {
- bchan->disconnecting = 1;
- capi_fill_DISCONNECT_REQ(&cmdcmsg,
- global.ap.applid,
- card->msgid++,
- bchan->plcip->plci,
- NULL, /* BChannelinformation */
- NULL, /* Keypadfacility */
- NULL, /* Useruserdata */
- NULL /* Facilitydataarray */
- );
- plci_change_state(card, bchan->plcip, EV_PLCI_DISCONNECT_REQ);
- send_message(card, &cmdcmsg);
- return 0;
- } else {
- printk(KERN_ERR "capidrv-%d: chan %ld disconnect request while waiting for CONNECT_CONF\n",
- card->contrnr,
- c->arg);
- return -EINVAL;
- }
- }
- printk(KERN_ERR "capidrv-%d: chan %ld disconnect request on free channel\n",
- card->contrnr,
- c->arg);
- return -EINVAL;
-/* ready */
-
- case ISDN_CMD_SETL2:
- if (debugmode)
- printk(KERN_DEBUG "capidrv-%d: set L2 on chan %ld to %ld\n",
- card->contrnr,
- (c->arg & 0xff), (c->arg >> 8));
- bchan = &card->bchans[(c->arg & 0xff) % card->nbchan];
- bchan->l2 = (c->arg >> 8);
- return 0;
-
- case ISDN_CMD_SETL3:
- if (debugmode)
- printk(KERN_DEBUG "capidrv-%d: set L3 on chan %ld to %ld\n",
- card->contrnr,
- (c->arg & 0xff), (c->arg >> 8));
- bchan = &card->bchans[(c->arg & 0xff) % card->nbchan];
- bchan->l3 = (c->arg >> 8);
- return 0;
-
- case ISDN_CMD_SETEAZ:
- if (debugmode)
- printk(KERN_DEBUG "capidrv-%d: set EAZ \"%s\" on chan %ld\n",
- card->contrnr,
- c->parm.num, c->arg);
- bchan = &card->bchans[c->arg % card->nbchan];
- strncpy(bchan->msn, c->parm.num, ISDN_MSNLEN);
- return 0;
-
- case ISDN_CMD_CLREAZ:
- if (debugmode)
- printk(KERN_DEBUG "capidrv-%d: clearing EAZ on chan %ld\n",
- card->contrnr, c->arg);
- bchan = &card->bchans[c->arg % card->nbchan];
- bchan->msn[0] = 0;
- return 0;
-
- default:
- printk(KERN_ERR "capidrv-%d: ISDN_CMD_%d, Huh?\n",
- card->contrnr, c->command);
- return -EINVAL;
- }
- return 0;
-}
-
-static int if_command(isdn_ctrl *c)
-{
- capidrv_contr *card = findcontrbydriverid(c->driver);
-
- if (card)
- return capidrv_command(c, card);
-
- printk(KERN_ERR
- "capidrv: if_command %d called with invalid driverId %d!\n",
- c->command, c->driver);
- return -ENODEV;
-}
-
-static _cmsg sendcmsg;
-
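-/*
- * Transmit path from the ISDN layer: build a DATA_B3_REQ, push the CAPI
- * header in front of the payload (reallocating headroom if needed) and
- * record an ack entry so the later DATA_B3_CONF can be matched to the
- * transmitted length.
- */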
-static int if_sendbuf(int id, int channel, int doack, struct sk_buff *skb)
-{
- capidrv_contr *card = findcontrbydriverid(id);
- capidrv_bchan *bchan;
- capidrv_ncci *nccip;
- int len = skb->len;
- int msglen;
- u16 errcode;
- u16 datahandle;
- u32 data;
-
- if (!card) {
- printk(KERN_ERR "capidrv: if_sendbuf called with invalid driverId %d!\n",
- id);
- return 0;
- }
- if (debugmode > 4)
- printk(KERN_DEBUG "capidrv-%d: sendbuf len=%d skb=%p doack=%d\n",
- card->contrnr, len, skb, doack);
- bchan = &card->bchans[channel % card->nbchan];
- nccip = bchan->nccip;
- if (!nccip || nccip->state != ST_NCCI_ACTIVE) {
- printk(KERN_ERR "capidrv-%d: if_sendbuf: %s:%d: chan not up!\n",
- card->contrnr, card->name, channel);
- return 0;
- }
- datahandle = nccip->datahandle;
-
- /*
- * Here we copy the pointer skb->data into the 32-bit 'Data' field.
- * The 'Data' field is not used in practice in the Linux kernel
- * (neither in 32 nor in 64 bit), but it should have some value,
- * since a CAPI message trace will display it.
- *
- * The correct value in the 32-bit case is the address of the
- * data; in the 64-bit case it makes no sense, so we use 0 there.
- */
-
-#ifdef CONFIG_64BIT
- data = 0;
-#else
- data = (unsigned long) skb->data;
-#endif
-
- capi_fill_DATA_B3_REQ(&sendcmsg, global.ap.applid, card->msgid++,
- nccip->ncci, /* adr */
- data, /* Data */
- skb->len, /* DataLength */
- datahandle, /* DataHandle */
- 0 /* Flags */
- );
-
- if (capidrv_add_ack(nccip, datahandle, doack ? (int)skb->len : -1) < 0)
- return 0;
-
- if (capi_cmsg2message(&sendcmsg, sendcmsg.buf)) {
- printk(KERN_ERR "capidrv-%d: if_sendbuf: parser failure\n",
- card->contrnr);
- return -EINVAL;
- }
- msglen = CAPIMSG_LEN(sendcmsg.buf);
- if (skb_headroom(skb) < msglen) {
- struct sk_buff *nskb = skb_realloc_headroom(skb, msglen);
- if (!nskb) {
- printk(KERN_ERR "capidrv-%d: if_sendbuf: no memory\n",
- card->contrnr);
- (void)capidrv_del_ack(nccip, datahandle);
- return 0;
- }
- printk(KERN_DEBUG "capidrv-%d: only %d bytes headroom, need %d\n",
- card->contrnr, skb_headroom(skb), msglen);
- memcpy(skb_push(nskb, msglen), sendcmsg.buf, msglen);
- errcode = capi20_put_message(&global.ap, nskb);
- if (errcode == CAPI_NOERROR) {
- dev_kfree_skb(skb);
- nccip->datahandle++;
- return len;
- }
- if (debugmode > 3)
- printk(KERN_DEBUG "capidrv-%d: sendbuf putmsg ret(%x) - %s\n",
- card->contrnr, errcode, capi_info2str(errcode));
- (void)capidrv_del_ack(nccip, datahandle);
- dev_kfree_skb(nskb);
- return errcode == CAPI_SENDQUEUEFULL ? 0 : -1;
- } else {
- memcpy(skb_push(skb, msglen), sendcmsg.buf, msglen);
- errcode = capi20_put_message(&global.ap, skb);
- if (errcode == CAPI_NOERROR) {
- nccip->datahandle++;
- return len;
- }
- if (debugmode > 3)
- printk(KERN_DEBUG "capidrv-%d: sendbuf putmsg ret(%x) - %s\n",
- card->contrnr, errcode, capi_info2str(errcode));
- skb_pull(skb, msglen);
- (void)capidrv_del_ack(nccip, datahandle);
- return errcode == CAPI_SENDQUEUEFULL ? 0 : -1;
- }
-}
-
-static int if_readstat(u8 __user *buf, int len, int id, int channel)
-{
- capidrv_contr *card = findcontrbydriverid(id);
- int count;
- u8 __user *p;
-
- if (!card) {
- printk(KERN_ERR "capidrv: if_readstat called with invalid driverId %d!\n",
- id);
- return -ENODEV;
- }
-
- for (p = buf, count = 0; count < len; p++, count++) {
- if (put_user(*card->q931_read++, p))
- return -EFAULT;
- if (card->q931_read > card->q931_end)
- card->q931_read = card->q931_buf;
- }
- return count;
-
-}
-
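-/*
- * Ask an AVM controller to mirror D-channel traffic to us via
- * MANUFACTURER messages (handled in handle_controller() above);
- * manufacturer versions above 3.5 get the D2 trace request, older
- * ones the D3 request.
- */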
-static void enable_dchannel_trace(capidrv_contr *card)
-{
- u8 manufacturer[CAPI_MANUFACTURER_LEN];
- capi_version version;
- u16 contr = card->contrnr;
- u16 errcode;
- u16 avmversion[3];
-
- errcode = capi20_get_manufacturer(contr, manufacturer);
- if (errcode != CAPI_NOERROR) {
- printk(KERN_ERR "%s: can't get manufacturer (0x%x)\n",
- card->name, errcode);
- return;
- }
- if (strstr(manufacturer, "AVM") == NULL) {
- printk(KERN_ERR "%s: not from AVM, no d-channel trace possible (%s)\n",
- card->name, manufacturer);
- return;
- }
- errcode = capi20_get_version(contr, &version);
- if (errcode != CAPI_NOERROR) {
- printk(KERN_ERR "%s: can't get version (0x%x)\n",
- card->name, errcode);
- return;
- }
- avmversion[0] = (version.majormanuversion >> 4) & 0x0f;
- avmversion[1] = (version.majormanuversion << 4) & 0xf0;
- avmversion[1] |= (version.minormanuversion >> 4) & 0x0f;
- avmversion[2] = version.minormanuversion & 0x0f;
-
- if (avmversion[0] > 3 || (avmversion[0] == 3 && avmversion[1] > 5)) {
- printk(KERN_INFO "%s: D2 trace enabled\n", card->name);
- capi_fill_MANUFACTURER_REQ(&cmdcmsg, global.ap.applid,
- card->msgid++,
- contr,
- 0x214D5641, /* ManuID */
- 0, /* Class */
- 1, /* Function */
- (_cstruct)"\004\200\014\000\000");
- } else {
- printk(KERN_INFO "%s: D3 trace enabled\n", card->name);
- capi_fill_MANUFACTURER_REQ(&cmdcmsg, global.ap.applid,
- card->msgid++,
- contr,
- 0x214D5641, /* ManuID */
- 0, /* Class */
- 1, /* Function */
- (_cstruct)"\004\002\003\000\000");
- }
- send_message(card, &cmdcmsg);
-}
-
-
-static void send_listen(capidrv_contr *card)
-{
- capi_fill_LISTEN_REQ(&cmdcmsg, global.ap.applid,
- card->msgid++,
- card->contrnr, /* controller */
- 1 << 6, /* Infomask */
- card->cipmask,
- card->cipmask2,
- NULL, NULL);
- listen_change_state(card, EV_LISTEN_REQ);
- send_message(card, &cmdcmsg);
-}
-
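-/*
- * LISTEN watchdog: re-issue the LISTEN_REQ every 60 seconds and warn if
- * the previous one was never confirmed.
- */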
-static void listentimerfunc(struct timer_list *t)
-{
- capidrv_contr *card = from_timer(card, t, listentimer);
- if (card->state != ST_LISTEN_NONE && card->state != ST_LISTEN_ACTIVE)
- printk(KERN_ERR "%s: controller dead ??\n", card->name);
- send_listen(card);
- mod_timer(&card->listentimer, jiffies + 60 * HZ);
-}
-
-
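-/*
- * A CAPI controller came up: allocate the per-controller state,
- * register it as an ISDN4Linux driver, start listening for calls and
- * enable D-channel tracing.
- */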
-static int capidrv_addcontr(u16 contr, struct capi_profile *profp)
-{
- capidrv_contr *card;
- unsigned long flags;
- isdn_ctrl cmd;
- char id[20];
- int i;
-
- sprintf(id, "capidrv-%d", contr);
- if (!try_module_get(THIS_MODULE)) {
- printk(KERN_WARNING "capidrv: (%s) Could not reserve module\n", id);
- return -1;
- }
- if (!(card = kzalloc(sizeof(capidrv_contr), GFP_ATOMIC))) {
- printk(KERN_WARNING
- "capidrv: (%s) Could not allocate contr-struct.\n", id);
- return -1;
- }
- card->owner = THIS_MODULE;
- timer_setup(&card->listentimer, listentimerfunc, 0);
- strcpy(card->name, id);
- card->contrnr = contr;
- card->nbchan = profp->nbchannel;
- card->bchans = kmalloc_array(card->nbchan, sizeof(capidrv_bchan),
- GFP_ATOMIC);
- if (!card->bchans) {
- printk(KERN_WARNING
- "capidrv: (%s) Could not allocate bchan-structs.\n", id);
- module_put(card->owner);
- kfree(card);
- return -1;
- }
- card->interface.channels = profp->nbchannel;
- card->interface.maxbufsize = 2048;
- card->interface.command = if_command;
- card->interface.writebuf_skb = if_sendbuf;
- card->interface.writecmd = NULL;
- card->interface.readstat = if_readstat;
- card->interface.features =
- ISDN_FEATURE_L2_HDLC |
- ISDN_FEATURE_L2_TRANS |
- ISDN_FEATURE_L3_TRANS |
- ISDN_FEATURE_P_UNKNOWN |
- ISDN_FEATURE_L2_X75I |
- ISDN_FEATURE_L2_X75UI |
- ISDN_FEATURE_L2_X75BUI;
- if (profp->support1 & (1 << 2))
- card->interface.features |=
- ISDN_FEATURE_L2_V11096 |
- ISDN_FEATURE_L2_V11019 |
- ISDN_FEATURE_L2_V11038;
- if (profp->support1 & (1 << 8))
- card->interface.features |= ISDN_FEATURE_L2_MODEM;
- card->interface.hl_hdrlen = 22; /* len of DATA_B3_REQ */
- strncpy(card->interface.id, id, sizeof(card->interface.id) - 1);
-
-
- card->q931_read = card->q931_buf;
- card->q931_write = card->q931_buf;
- card->q931_end = card->q931_buf + sizeof(card->q931_buf) - 1;
-
- if (!register_isdn(&card->interface)) {
- printk(KERN_ERR "capidrv: Unable to register contr %s\n", id);
- kfree(card->bchans);
- module_put(card->owner);
- kfree(card);
- return -1;
- }
- card->myid = card->interface.channels;
- memset(card->bchans, 0, sizeof(capidrv_bchan) * card->nbchan);
- for (i = 0; i < card->nbchan; i++) {
- card->bchans[i].contr = card;
- }
-
- spin_lock_irqsave(&global_lock, flags);
- card->next = global.contr_list;
- global.contr_list = card;
- global.ncontr++;
- spin_unlock_irqrestore(&global_lock, flags);
-
- cmd.command = ISDN_STAT_RUN;
- cmd.driver = card->myid;
- card->interface.statcallb(&cmd);
-
- card->cipmask = 0x1FFF03FF; /* any */
- card->cipmask2 = 0;
-
- send_listen(card);
- mod_timer(&card->listentimer, jiffies + 60 * HZ);
-
- printk(KERN_INFO "%s: now up (%d B channels)\n",
- card->name, card->nbchan);
-
- enable_dchannel_trace(card);
-
- return 0;
-}
-
-static int capidrv_delcontr(u16 contr)
-{
- capidrv_contr **pp, *card;
- unsigned long flags;
- isdn_ctrl cmd;
-
- spin_lock_irqsave(&global_lock, flags);
- for (card = global.contr_list; card; card = card->next) {
- if (card->contrnr == contr)
- break;
- }
- if (!card) {
- spin_unlock_irqrestore(&global_lock, flags);
- printk(KERN_ERR "capidrv: delcontr: no contr %u\n", contr);
- return -1;
- }
-
- /* FIXME: possible race condition; the card should be removed
- * from the global list here /kkeil
- */
- spin_unlock_irqrestore(&global_lock, flags);
-
- del_timer(&card->listentimer);
-
- if (debugmode)
- printk(KERN_DEBUG "capidrv-%d: id=%d unloading\n",
- card->contrnr, card->myid);
-
- cmd.command = ISDN_STAT_STOP;
- cmd.driver = card->myid;
- card->interface.statcallb(&cmd);
-
- while (card->nbchan) {
-
- cmd.command = ISDN_STAT_DISCH;
- cmd.driver = card->myid;
- cmd.arg = card->nbchan - 1;
- cmd.parm.num[0] = 0;
- if (debugmode)
- printk(KERN_DEBUG "capidrv-%d: id=%d disable chan=%ld\n",
- card->contrnr, card->myid, cmd.arg);
- card->interface.statcallb(&cmd);
-
- if (card->bchans[card->nbchan - 1].nccip)
- free_ncci(card, card->bchans[card->nbchan - 1].nccip);
- if (card->bchans[card->nbchan - 1].plcip)
- free_plci(card, card->bchans[card->nbchan - 1].plcip);
- if (card->plci_list)
- printk(KERN_ERR "capidrv: bug in free_plci()\n");
- card->nbchan--;
- }
- kfree(card->bchans);
- card->bchans = NULL;
-
- if (debugmode)
- printk(KERN_DEBUG "capidrv-%d: id=%d isdn unload\n",
- card->contrnr, card->myid);
-
- cmd.command = ISDN_STAT_UNLOAD;
- cmd.driver = card->myid;
- card->interface.statcallb(&cmd);
-
- if (debugmode)
- printk(KERN_DEBUG "capidrv-%d: id=%d remove contr from list\n",
- card->contrnr, card->myid);
-
- spin_lock_irqsave(&global_lock, flags);
- for (pp = &global.contr_list; *pp; pp = &(*pp)->next) {
- if (*pp == card) {
- *pp = (*pp)->next;
- card->next = NULL;
- global.ncontr--;
- break;
- }
- }
- spin_unlock_irqrestore(&global_lock, flags);
-
- module_put(card->owner);
- printk(KERN_INFO "%s: now down.\n", card->name);
- kfree(card);
- return 0;
-}
-
-
-static int
-lower_callback(struct notifier_block *nb, unsigned long val, void *v)
-{
- capi_profile profile;
- u32 contr = (long)v;
-
- switch (val) {
- case CAPICTR_UP:
- printk(KERN_INFO "capidrv: controller %u up\n", contr);
- if (capi20_get_profile(contr, &profile) == CAPI_NOERROR)
- (void) capidrv_addcontr(contr, &profile);
- break;
- case CAPICTR_DOWN:
- printk(KERN_INFO "capidrv: controller %u down\n", contr);
- (void) capidrv_delcontr(contr);
- break;
- }
- return NOTIFY_OK;
-}
-
-/*
- * /proc/capi/capidrv:
- * nrecvctlpkt nrecvdatapkt nsendctlpkt nsenddatapkt
- */
-static int __maybe_unused capidrv_proc_show(struct seq_file *m, void *v)
-{
- seq_printf(m, "%lu %lu %lu %lu\n",
- global.ap.nrecvctlpkt,
- global.ap.nrecvdatapkt,
- global.ap.nsentctlpkt,
- global.ap.nsentdatapkt);
- return 0;
-}
-
-static void __init proc_init(void)
-{
- proc_create_single("capi/capidrv", 0, NULL, capidrv_proc_show);
-}
-
-static void __exit proc_exit(void)
-{
- remove_proc_entry("capi/capidrv", NULL);
-}
-
-static struct notifier_block capictr_nb = {
- .notifier_call = lower_callback,
-};
-
-static int __init capidrv_init(void)
-{
- capi_profile profile;
- u32 ncontr, contr;
- u16 errcode;
-
- global.ap.rparam.level3cnt = -2; /* twice the number of B channels */
- global.ap.rparam.datablkcnt = 16;
- global.ap.rparam.datablklen = 2048;
-
- global.ap.recv_message = capidrv_recv_message;
- errcode = capi20_register(&global.ap);
- if (errcode) {
- return -EIO;
- }
-
- register_capictr_notifier(&capictr_nb);
-
- errcode = capi20_get_profile(0, &profile);
- if (errcode != CAPI_NOERROR) {
- unregister_capictr_notifier(&capictr_nb);
- capi20_release(&global.ap);
- return -EIO;
- }
-
- ncontr = profile.ncontroller;
- for (contr = 1; contr <= ncontr; contr++) {
- errcode = capi20_get_profile(contr, &profile);
- if (errcode != CAPI_NOERROR)
- continue;
- (void) capidrv_addcontr(contr, &profile);
- }
- proc_init();
-
- return 0;
-}
-
-static void __exit capidrv_exit(void)
-{
- unregister_capictr_notifier(&capictr_nb);
- capi20_release(&global.ap);
-
- proc_exit();
-}
-
-module_init(capidrv_init);
-module_exit(capidrv_exit);
diff --git a/drivers/isdn/capi/capidrv.h b/drivers/isdn/capi/capidrv.h
deleted file mode 100644
index 4466b2e0176d..000000000000
--- a/drivers/isdn/capi/capidrv.h
+++ /dev/null
@@ -1,140 +0,0 @@
-/* $Id: capidrv.h,v 1.2.8.2 2001/09/23 22:24:33 kai Exp $
- *
- * ISDN4Linux Driver, using capi20 interface (kernelcapi)
- *
- * Copyright 1997 by Carsten Paeth <calle@calle.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#ifndef __CAPIDRV_H__
-#define __CAPIDRV_H__
-
-/*
- * LISTEN state machine
- */
-#define ST_LISTEN_NONE 0 /* L-0 */
-#define ST_LISTEN_WAIT_CONF 1 /* L-0.1 */
-#define ST_LISTEN_ACTIVE 2 /* L-1 */
-#define ST_LISTEN_ACTIVE_WAIT_CONF 3 /* L-1.1 */
-
-
-#define EV_LISTEN_REQ 1 /* L-0 -> L-0.1
- L-1 -> L-1.1 */
-#define EV_LISTEN_CONF_ERROR 2 /* L-0.1 -> L-0
- L-1.1 -> L-1 */
-#define EV_LISTEN_CONF_EMPTY 3 /* L-0.1 -> L-0
- L-1.1 -> L-0 */
-#define EV_LISTEN_CONF_OK 4 /* L-0.1 -> L-1
- L-1.1 -> L-1 */
-
-/*
- * per plci state machine
- */
-#define ST_PLCI_NONE 0 /* P-0 */
-#define ST_PLCI_OUTGOING 1 /* P-0.1 */
-#define ST_PLCI_ALLOCATED 2 /* P-1 */
-#define ST_PLCI_ACTIVE 3 /* P-ACT */
-#define ST_PLCI_INCOMING 4 /* P-2 */
-#define ST_PLCI_FACILITY_IND 5 /* P-3 */
-#define ST_PLCI_ACCEPTING 6 /* P-4 */
-#define ST_PLCI_DISCONNECTING 7 /* P-5 */
-#define ST_PLCI_DISCONNECTED 8 /* P-6 */
-#define ST_PLCI_RESUMEING 9 /* P-0.Res */
-#define ST_PLCI_RESUME 10 /* P-Res */
-#define ST_PLCI_HELD 11 /* P-HELD */
-
-#define EV_PLCI_CONNECT_REQ 1 /* P-0 -> P-0.1
- */
-#define EV_PLCI_CONNECT_CONF_ERROR 2 /* P-0.1 -> P-0
- */
-#define EV_PLCI_CONNECT_CONF_OK 3 /* P-0.1 -> P-1
- */
-#define EV_PLCI_FACILITY_IND_UP 4 /* P-0 -> P-1
- */
-#define EV_PLCI_CONNECT_IND 5 /* P-0 -> P-2
- */
-#define EV_PLCI_CONNECT_ACTIVE_IND 6 /* P-1 -> P-ACT
- */
-#define EV_PLCI_CONNECT_REJECT 7 /* P-2 -> P-5
- P-3 -> P-5
- */
-#define EV_PLCI_DISCONNECT_REQ 8 /* P-1 -> P-5
- P-2 -> P-5
- P-3 -> P-5
- P-4 -> P-5
- P-ACT -> P-5
- P-Res -> P-5 (*)
- P-HELD -> P-5 (*)
- */
-#define EV_PLCI_DISCONNECT_IND 9 /* P-1 -> P-6
- P-2 -> P-6
- P-3 -> P-6
- P-4 -> P-6
- P-5 -> P-6
- P-ACT -> P-6
- P-Res -> P-6 (*)
- P-HELD -> P-6 (*)
- */
-#define EV_PLCI_FACILITY_IND_DOWN 10 /* P-0.1 -> P-5
- P-1 -> P-5
- P-ACT -> P-5
- P-2 -> P-5
- P-3 -> P-5
- P-4 -> P-5
- */
-#define EV_PLCI_DISCONNECT_RESP 11 /* P-6 -> P-0
- */
-#define EV_PLCI_CONNECT_RESP 12 /* P-6 -> P-0
- */
-
-#define EV_PLCI_RESUME_REQ 13 /* P-0 -> P-0.Res
- */
-#define EV_PLCI_RESUME_CONF_OK 14 /* P-0.Res -> P-Res
- */
-#define EV_PLCI_RESUME_CONF_ERROR 15 /* P-0.Res -> P-0
- */
-#define EV_PLCI_RESUME_IND 16 /* P-Res -> P-ACT
- */
-#define EV_PLCI_HOLD_IND 17 /* P-ACT -> P-HELD
- */
-#define EV_PLCI_RETRIEVE_IND 18 /* P-HELD -> P-ACT
- */
-#define EV_PLCI_SUSPEND_IND 19 /* P-ACT -> P-5
- */
-#define EV_PLCI_CD_IND 20 /* P-2 -> P-5
- */
-
-/*
- * per ncci state machine
- */
-#define ST_NCCI_PREVIOUS -1
-#define ST_NCCI_NONE 0 /* N-0 */
-#define ST_NCCI_OUTGOING 1 /* N-0.1 */
-#define ST_NCCI_INCOMING 2 /* N-1 */
-#define ST_NCCI_ALLOCATED 3 /* N-2 */
-#define ST_NCCI_ACTIVE 4 /* N-ACT */
-#define ST_NCCI_RESETING 5 /* N-3 */
-#define ST_NCCI_DISCONNECTING 6 /* N-4 */
-#define ST_NCCI_DISCONNECTED 7 /* N-5 */
-
-#define EV_NCCI_CONNECT_B3_REQ 1 /* N-0 -> N-0.1 */
-#define EV_NCCI_CONNECT_B3_IND 2 /* N-0 -> N-1 */
-#define EV_NCCI_CONNECT_B3_CONF_OK 3 /* N-0.1 -> N-2 */
-#define EV_NCCI_CONNECT_B3_CONF_ERROR 4 /* N-0.1 -> N-0 */
-#define EV_NCCI_CONNECT_B3_REJECT 5 /* N-1 -> N-4 */
-#define EV_NCCI_CONNECT_B3_RESP 6 /* N-1 -> N-2 */
-#define EV_NCCI_CONNECT_B3_ACTIVE_IND 7 /* N-2 -> N-ACT */
-#define EV_NCCI_RESET_B3_REQ 8 /* N-ACT -> N-3 */
-#define EV_NCCI_RESET_B3_IND 9 /* N-3 -> N-ACT */
-#define EV_NCCI_DISCONNECT_B3_IND 10 /* N-4 -> N-5 */
-#define EV_NCCI_DISCONNECT_B3_CONF_ERROR 11 /* N-4 -> previous */
-#define EV_NCCI_DISCONNECT_B3_REQ 12 /* N-1 -> N-4
- N-2 -> N-4
- N-3 -> N-4
- N-ACT -> N-4 */
-#define EV_NCCI_DISCONNECT_B3_RESP 13 /* N-5 -> N-0 */
-
-#endif /* __CAPIDRV_H__ */
diff --git a/drivers/isdn/divert/Makefile b/drivers/isdn/divert/Makefile
deleted file mode 100644
index 07684fe53537..000000000000
--- a/drivers/isdn/divert/Makefile
+++ /dev/null
@@ -1,10 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0-only
-# Makefile for the dss1_divert ISDN module
-
-# Each configuration option enables a list of files.
-
-obj-$(CONFIG_ISDN_DIVERSION) += dss1_divert.o
-
-# Multipart objects.
-
-dss1_divert-y := isdn_divert.o divert_procfs.o divert_init.o
diff --git a/drivers/isdn/divert/divert_init.c b/drivers/isdn/divert/divert_init.c
deleted file mode 100644
index 267dede13bfd..000000000000
--- a/drivers/isdn/divert/divert_init.c
+++ /dev/null
@@ -1,82 +0,0 @@
-/* $Id divert_init.c,v 1.5.6.2 2001/01/24 22:18:17 kai Exp $
- *
- * Module init for DSS1 diversion services for i4l.
- *
- * Copyright 1999 by Werner Cornelius (werner@isdn4linux.de)
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#include <linux/module.h>
-#include <linux/init.h>
-#include <linux/kernel.h>
-
-#include "isdn_divert.h"
-
-MODULE_DESCRIPTION("ISDN4Linux: Call diversion support");
-MODULE_AUTHOR("Werner Cornelius");
-MODULE_LICENSE("GPL");
-
-/****************************************/
-/* structure containing interface to hl */
-/****************************************/
-isdn_divert_if divert_if = {
- DIVERT_IF_MAGIC, /* magic value */
- DIVERT_CMD_REG, /* register cmd */
- ll_callback, /* callback routine from ll */
- NULL, /* command still not specified */
- NULL, /* drv_to_name */
- NULL, /* name_to_drv */
-};
-
-/*************************/
-/* Module interface code */
-/* no cmd line parms */
-/*************************/
-static int __init divert_init(void)
-{
- int i;
-
- if (divert_dev_init()) {
- printk(KERN_WARNING "dss1_divert: cannot install device, not loaded\n");
- return (-EIO);
- }
- if ((i = DIVERT_REG_NAME(&divert_if)) != DIVERT_NO_ERR) {
- divert_dev_deinit();
- printk(KERN_WARNING "dss1_divert: error %d registering module, not loaded\n", i);
- return (-EIO);
- }
- printk(KERN_INFO "dss1_divert module successfully installed\n");
- return (0);
-}
-
-/**********************/
-/* Module deinit code */
-/**********************/
-static void __exit divert_exit(void)
-{
- unsigned long flags;
- int i;
-
- spin_lock_irqsave(&divert_lock, flags);
- divert_if.cmd = DIVERT_CMD_REL; /* release */
- if ((i = DIVERT_REG_NAME(&divert_if)) != DIVERT_NO_ERR) {
- printk(KERN_WARNING "dss1_divert: error %d releasing module\n", i);
- spin_unlock_irqrestore(&divert_lock, flags);
- return;
- }
- if (divert_dev_deinit()) {
- printk(KERN_WARNING "dss1_divert: device busy, remove cancelled\n");
- spin_unlock_irqrestore(&divert_lock, flags);
- return;
- }
- spin_unlock_irqrestore(&divert_lock, flags);
- deleterule(-1); /* delete all rules and free mem */
- deleteprocs();
- printk(KERN_INFO "dss1_divert module successfully removed \n");
-}
-
-module_init(divert_init);
-module_exit(divert_exit);
diff --git a/drivers/isdn/divert/divert_procfs.c b/drivers/isdn/divert/divert_procfs.c
deleted file mode 100644
index 342585e04fd3..000000000000
--- a/drivers/isdn/divert/divert_procfs.c
+++ /dev/null
@@ -1,336 +0,0 @@
-/* $Id: divert_procfs.c,v 1.11.6.2 2001/09/23 22:24:36 kai Exp $
- *
- * Filesystem handling for the diversion supplementary services.
- *
- * Copyright 1998 by Werner Cornelius (werner@isdn4linux.de)
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#include <linux/module.h>
-#include <linux/poll.h>
-#include <linux/slab.h>
-#ifdef CONFIG_PROC_FS
-#include <linux/proc_fs.h>
-#else
-#include <linux/fs.h>
-#endif
-#include <linux/sched.h>
-#include <linux/isdnif.h>
-#include <net/net_namespace.h>
-#include <linux/mutex.h>
-#include "isdn_divert.h"
-
-
-/*********************************/
-/* Variables for interface queue */
-/*********************************/
-ulong if_used = 0; /* number of interface users */
-static DEFINE_MUTEX(isdn_divert_mutex);
-static struct divert_info *divert_info_head = NULL; /* head of queue */
-static struct divert_info *divert_info_tail = NULL; /* pointer to last entry */
-static DEFINE_SPINLOCK(divert_info_lock);/* lock for queue */
-static wait_queue_head_t rd_queue;
-
-/*********************************/
-/* put an info buffer into queue */
-/*********************************/
-void
-put_info_buffer(char *cp)
-{
- struct divert_info *ib;
- unsigned long flags;
-
- if (if_used <= 0)
- return;
- if (!cp)
- return;
- if (!*cp)
- return;
- if (!(ib = kmalloc(sizeof(struct divert_info) + strlen(cp), GFP_ATOMIC)))
- return; /* no memory */
- strcpy(ib->info_start, cp); /* set output string */
- ib->next = NULL;
- spin_lock_irqsave(&divert_info_lock, flags);
- ib->usage_cnt = if_used;
- if (!divert_info_head)
- divert_info_head = ib; /* new head */
- else
- divert_info_tail->next = ib; /* follows existing messages */
- divert_info_tail = ib; /* new tail */
-
- /* delete old entrys */
- while (divert_info_head->next) {
- if ((divert_info_head->usage_cnt <= 0) &&
- (divert_info_head->next->usage_cnt <= 0)) {
- ib = divert_info_head;
- divert_info_head = divert_info_head->next;
- kfree(ib);
- } else
- break;
- } /* divert_info_head->next */
- spin_unlock_irqrestore(&divert_info_lock, flags);
- wake_up_interruptible(&(rd_queue));
-} /* put_info_buffer */
-
-#ifdef CONFIG_PROC_FS
-
-/**********************************/
-/* deflection device read routine */
-/**********************************/
-static ssize_t
-isdn_divert_read(struct file *file, char __user *buf, size_t count, loff_t *off)
-{
- struct divert_info *inf;
- int len;
-
- if (!(inf = *((struct divert_info **) file->private_data))) {
- if (file->f_flags & O_NONBLOCK)
- return -EAGAIN;
- wait_event_interruptible(rd_queue, (inf =
- *((struct divert_info **) file->private_data)));
- }
- if (!inf)
- return (0);
-
- inf->usage_cnt--; /* new usage count */
- file->private_data = &inf->next; /* next structure */
- if ((len = strlen(inf->info_start)) <= count) {
- if (copy_to_user(buf, inf->info_start, len))
- return -EFAULT;
- *off += len;
- return (len);
- }
- return (0);
-} /* isdn_divert_read */
-
-/**********************************/
-/* deflection device write routine */
-/**********************************/
-static ssize_t
-isdn_divert_write(struct file *file, const char __user *buf, size_t count, loff_t *off)
-{
- return (-ENODEV);
-} /* isdn_divert_write */
-
-
-/***************************************/
-/* select routines for various kernels */
-/***************************************/
-static __poll_t
-isdn_divert_poll(struct file *file, poll_table *wait)
-{
- __poll_t mask = 0;
-
- poll_wait(file, &(rd_queue), wait);
- /* mask = EPOLLOUT | EPOLLWRNORM; */
- if (*((struct divert_info **) file->private_data)) {
- mask |= EPOLLIN | EPOLLRDNORM;
- }
- return mask;
-} /* isdn_divert_poll */
-
-/****************/
-/* Open routine */
-/****************/
-static int
-isdn_divert_open(struct inode *ino, struct file *filep)
-{
- unsigned long flags;
-
- spin_lock_irqsave(&divert_info_lock, flags);
- if_used++;
- if (divert_info_head)
- filep->private_data = &(divert_info_tail->next);
- else
- filep->private_data = &divert_info_head;
- spin_unlock_irqrestore(&divert_info_lock, flags);
- /* start_divert(); */
- return nonseekable_open(ino, filep);
-} /* isdn_divert_open */
-
-/*******************/
-/* close routine */
-/*******************/
-static int
-isdn_divert_close(struct inode *ino, struct file *filep)
-{
- struct divert_info *inf;
- unsigned long flags;
-
- spin_lock_irqsave(&divert_info_lock, flags);
- if_used--;
- inf = *((struct divert_info **) filep->private_data);
- while (inf) {
- inf->usage_cnt--;
- inf = inf->next;
- }
- if (if_used <= 0)
- while (divert_info_head) {
- inf = divert_info_head;
- divert_info_head = divert_info_head->next;
- kfree(inf);
- }
- spin_unlock_irqrestore(&divert_info_lock, flags);
- return (0);
-} /* isdn_divert_close */
-
-/*********/
-/* IOCTL */
-/*********/
-static int isdn_divert_ioctl_unlocked(struct file *file, uint cmd, ulong arg)
-{
- divert_ioctl dioctl;
- int i;
- unsigned long flags;
- divert_rule *rulep;
- char *cp;
-
- if (copy_from_user(&dioctl, (void __user *) arg, sizeof(dioctl)))
- return -EFAULT;
-
- switch (cmd) {
- case IIOCGETVER:
- dioctl.drv_version = DIVERT_IIOC_VERSION; /* set version */
- break;
-
- case IIOCGETDRV:
- if ((dioctl.getid.drvid = divert_if.name_to_drv(dioctl.getid.drvnam)) < 0)
- return (-EINVAL);
- break;
-
- case IIOCGETNAM:
- cp = divert_if.drv_to_name(dioctl.getid.drvid);
- if (!cp)
- return (-EINVAL);
- if (!*cp)
- return (-EINVAL);
- strcpy(dioctl.getid.drvnam, cp);
- break;
-
- case IIOCGETRULE:
- if (!(rulep = getruleptr(dioctl.getsetrule.ruleidx)))
- return (-EINVAL);
- dioctl.getsetrule.rule = *rulep; /* copy data */
- break;
-
- case IIOCMODRULE:
- if (!(rulep = getruleptr(dioctl.getsetrule.ruleidx)))
- return (-EINVAL);
- spin_lock_irqsave(&divert_lock, flags);
- *rulep = dioctl.getsetrule.rule; /* copy data */
- spin_unlock_irqrestore(&divert_lock, flags);
- return (0); /* no copy required */
- break;
-
- case IIOCINSRULE:
- return (insertrule(dioctl.getsetrule.ruleidx, &dioctl.getsetrule.rule));
- break;
-
- case IIOCDELRULE:
- return (deleterule(dioctl.getsetrule.ruleidx));
- break;
-
- case IIOCDODFACT:
- return (deflect_extern_action(dioctl.fwd_ctrl.subcmd,
- dioctl.fwd_ctrl.callid,
- dioctl.fwd_ctrl.to_nr));
-
- case IIOCDOCFACT:
- case IIOCDOCFDIS:
- case IIOCDOCFINT:
- if (!divert_if.drv_to_name(dioctl.cf_ctrl.drvid))
- return (-EINVAL); /* invalid driver */
- if (strnlen(dioctl.cf_ctrl.msn, sizeof(dioctl.cf_ctrl.msn)) ==
- sizeof(dioctl.cf_ctrl.msn))
- return -EINVAL;
- if (strnlen(dioctl.cf_ctrl.fwd_nr, sizeof(dioctl.cf_ctrl.fwd_nr)) ==
- sizeof(dioctl.cf_ctrl.fwd_nr))
- return -EINVAL;
- if ((i = cf_command(dioctl.cf_ctrl.drvid,
- (cmd == IIOCDOCFACT) ? 1 : (cmd == IIOCDOCFDIS) ? 0 : 2,
- dioctl.cf_ctrl.cfproc,
- dioctl.cf_ctrl.msn,
- dioctl.cf_ctrl.service,
- dioctl.cf_ctrl.fwd_nr,
- &dioctl.cf_ctrl.procid)))
- return (i);
- break;
-
- default:
- return (-EINVAL);
- } /* switch cmd */
- return copy_to_user((void __user *)arg, &dioctl, sizeof(dioctl)) ? -EFAULT : 0;
-} /* isdn_divert_ioctl */
-
-static long isdn_divert_ioctl(struct file *file, uint cmd, ulong arg)
-{
- long ret;
-
- mutex_lock(&isdn_divert_mutex);
- ret = isdn_divert_ioctl_unlocked(file, cmd, arg);
- mutex_unlock(&isdn_divert_mutex);
-
- return ret;
-}
-
-static const struct file_operations isdn_fops =
-{
- .owner = THIS_MODULE,
- .llseek = no_llseek,
- .read = isdn_divert_read,
- .write = isdn_divert_write,
- .poll = isdn_divert_poll,
- .unlocked_ioctl = isdn_divert_ioctl,
- .open = isdn_divert_open,
- .release = isdn_divert_close,
-};
-
-/****************************/
-/* isdn subdir in /proc/net */
-/****************************/
-static struct proc_dir_entry *isdn_proc_entry = NULL;
-static struct proc_dir_entry *isdn_divert_entry = NULL;
-#endif /* CONFIG_PROC_FS */
-
-/***************************************************************************/
-/* divert_dev_init must be called before the proc filesystem may be used */
-/***************************************************************************/
-int
-divert_dev_init(void)
-{
-
- init_waitqueue_head(&rd_queue);
-
-#ifdef CONFIG_PROC_FS
- isdn_proc_entry = proc_mkdir("isdn", init_net.proc_net);
- if (!isdn_proc_entry)
- return (-1);
- isdn_divert_entry = proc_create("divert", S_IFREG | S_IRUGO,
- isdn_proc_entry, &isdn_fops);
- if (!isdn_divert_entry) {
- remove_proc_entry("isdn", init_net.proc_net);
- return (-1);
- }
-#endif /* CONFIG_PROC_FS */
-
- return (0);
-} /* divert_dev_init */
-
-/***************************************************************************/
-/* divert_dev_deinit must be called before leaving isdn when included as */
-/* a module. */
-/***************************************************************************/
-int
-divert_dev_deinit(void)
-{
-
-#ifdef CONFIG_PROC_FS
- remove_proc_entry("divert", isdn_proc_entry);
- remove_proc_entry("isdn", init_net.proc_net);
-#endif /* CONFIG_PROC_FS */
-
- return (0);
-} /* divert_dev_deinit */
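For reference, a hypothetical userspace reader of the interface above would wait on the proc entry with poll() and consume one status record per read(). This sketch assumes only the /proc/net/isdn/divert path created by divert_dev_init(); it is not part of the removed driver.

/* Sketch only: read diversion status lines produced by put_info_buffer()
 * through /proc/net/isdn/divert. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <poll.h>

int main(void)
{
	char buf[256];
	ssize_t n;
	struct pollfd pfd;

	pfd.fd = open("/proc/net/isdn/divert", O_RDONLY | O_NONBLOCK);
	if (pfd.fd < 0) {
		perror("open /proc/net/isdn/divert");
		return 1;
	}
	pfd.events = POLLIN;

	for (;;) {
		if (poll(&pfd, 1, -1) < 0)
			break;
		while ((n = read(pfd.fd, buf, sizeof(buf) - 1)) > 0) {
			buf[n] = '\0';
			fputs(buf, stdout);	/* one event record per read */
		}
	}
	close(pfd.fd);
	return 0;
}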
diff --git a/drivers/isdn/divert/isdn_divert.c b/drivers/isdn/divert/isdn_divert.c
deleted file mode 100644
index 5620fd2c6009..000000000000
--- a/drivers/isdn/divert/isdn_divert.c
+++ /dev/null
@@ -1,846 +0,0 @@
-/* $Id: isdn_divert.c,v 1.6.6.3 2001/09/23 22:24:36 kai Exp $
- *
- * DSS1 main diversion supplementary handling for i4l.
- *
- * Copyright 1999 by Werner Cornelius (werner@isdn4linux.de)
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#include <linux/proc_fs.h>
-#include <linux/slab.h>
-#include <linux/timer.h>
-#include <linux/jiffies.h>
-
-#include "isdn_divert.h"
-
-/**********************************/
-/* structure keeping calling info */
-/**********************************/
-struct call_struc {
- isdn_ctrl ics; /* delivered setup + driver parameters */
- ulong divert_id; /* Id delivered to user */
- unsigned char akt_state; /* actual state */
- char deflect_dest[35]; /* deflection destination */
- struct timer_list timer; /* timer control structure */
- char info[90]; /* device info output */
- struct call_struc *next; /* pointer to next entry */
- struct call_struc *prev;
-};
-
-
-/********************************************/
-/* structure keeping deflection table entry */
-/********************************************/
-struct deflect_struc {
- struct deflect_struc *next, *prev;
- divert_rule rule; /* used rule */
-};
-
-
-/*****************************************/
-/* variables for main diversion services */
-/*****************************************/
-/* diversion/deflection processes */
-static struct call_struc *divert_head = NULL; /* head of remembered entrys */
-static ulong next_id = 1; /* next info id */
-static struct deflect_struc *table_head = NULL;
-static struct deflect_struc *table_tail = NULL;
-static unsigned char extern_wait_max = 4; /* maximum wait in s for external process */
-
-DEFINE_SPINLOCK(divert_lock);
-
-/***************************/
-/* timer callback function */
-/***************************/
-static void deflect_timer_expire(struct timer_list *t)
-{
- unsigned long flags;
- struct call_struc *cs = from_timer(cs, t, timer);
-
- spin_lock_irqsave(&divert_lock, flags);
- del_timer(&cs->timer); /* delete active timer */
- spin_unlock_irqrestore(&divert_lock, flags);
-
- switch (cs->akt_state) {
- case DEFLECT_PROCEED:
- cs->ics.command = ISDN_CMD_HANGUP; /* cancel action */
- divert_if.ll_cmd(&cs->ics);
- spin_lock_irqsave(&divert_lock, flags);
- cs->akt_state = DEFLECT_AUTODEL; /* delete after timeout */
- cs->timer.expires = jiffies + (HZ * AUTODEL_TIME);
- add_timer(&cs->timer);
- spin_unlock_irqrestore(&divert_lock, flags);
- break;
-
- case DEFLECT_ALERT:
- cs->ics.command = ISDN_CMD_REDIR; /* protocol */
- strlcpy(cs->ics.parm.setup.phone, cs->deflect_dest, sizeof(cs->ics.parm.setup.phone));
- strcpy(cs->ics.parm.setup.eazmsn, "Testtext delayed");
- divert_if.ll_cmd(&cs->ics);
- spin_lock_irqsave(&divert_lock, flags);
- cs->akt_state = DEFLECT_AUTODEL; /* delete after timeout */
- cs->timer.expires = jiffies + (HZ * AUTODEL_TIME);
- add_timer(&cs->timer);
- spin_unlock_irqrestore(&divert_lock, flags);
- break;
-
- case DEFLECT_AUTODEL:
- default:
- spin_lock_irqsave(&divert_lock, flags);
- if (cs->prev)
- cs->prev->next = cs->next; /* forward link */
- else
- divert_head = cs->next;
- if (cs->next)
- cs->next->prev = cs->prev; /* back link */
- spin_unlock_irqrestore(&divert_lock, flags);
- kfree(cs);
- return;
-
- } /* switch */
-} /* deflect_timer_func */
-
-
-/*****************************************/
-/* handle call forwarding de/activations */
-/* 0 = deact, 1 = act, 2 = interrogate */
-/*****************************************/
-int cf_command(int drvid, int mode,
- u_char proc, char *msn,
- u_char service, char *fwd_nr, ulong *procid)
-{
- unsigned long flags;
- int retval, msnlen;
- int fwd_len;
- char *p, *ielenp, tmp[60];
- struct call_struc *cs;
-
- if (strchr(msn, '.')) return (-EINVAL); /* subaddress not allowed in msn */
- if ((proc & 0x7F) > 2) return (-EINVAL);
- proc &= 3;
- p = tmp;
- *p++ = 0x30; /* enumeration */
- ielenp = p++; /* remember total length position */
- *p++ = 0xa; /* proc tag */
- *p++ = 1; /* length */
- *p++ = proc & 0x7F; /* procedure to de/activate/interrogate */
- *p++ = 0xa; /* service tag */
- *p++ = 1; /* length */
- *p++ = service; /* service to handle */
-
- if (mode == 1) {
- if (!*fwd_nr) return (-EINVAL); /* destination missing */
- if (strchr(fwd_nr, '.')) return (-EINVAL); /* subaddress not allowed */
- fwd_len = strlen(fwd_nr);
- *p++ = 0x30; /* number enumeration */
- *p++ = fwd_len + 2; /* complete forward to len */
- *p++ = 0x80; /* fwd to nr */
- *p++ = fwd_len; /* length of number */
- strcpy(p, fwd_nr); /* copy number */
- p += fwd_len; /* pointer beyond fwd */
- } /* activate */
-
- msnlen = strlen(msn);
- *p++ = 0x80; /* msn number */
- if (msnlen > 1) {
- *p++ = msnlen; /* length */
- strcpy(p, msn);
- p += msnlen;
- } else
- *p++ = 0;
-
- *ielenp = p - ielenp - 1; /* set total IE length */
-
- /* allocate mem for information struct */
- if (!(cs = kmalloc(sizeof(struct call_struc), GFP_ATOMIC)))
- return (-ENOMEM); /* no memory */
- timer_setup(&cs->timer, deflect_timer_expire, 0);
- cs->info[0] = '\0';
- cs->ics.driver = drvid;
- cs->ics.command = ISDN_CMD_PROT_IO; /* protocol specific io */
- cs->ics.arg = DSS1_CMD_INVOKE; /* invoke supplementary service */
- cs->ics.parm.dss1_io.proc = (mode == 1) ? 7 : (mode == 2) ? 11 : 8; /* operation */
- cs->ics.parm.dss1_io.timeout = 4000; /* from ETS 300 207-1 */
- cs->ics.parm.dss1_io.datalen = p - tmp; /* total len */
- cs->ics.parm.dss1_io.data = tmp; /* start of buffer */
-
- spin_lock_irqsave(&divert_lock, flags);
- cs->ics.parm.dss1_io.ll_id = next_id++; /* id for callback */
- spin_unlock_irqrestore(&divert_lock, flags);
- *procid = cs->ics.parm.dss1_io.ll_id;
-
- sprintf(cs->info, "%d 0x%lx %s%s 0 %s %02x %d%s%s\n",
- (!mode) ? DIVERT_DEACTIVATE : (mode == 1) ? DIVERT_ACTIVATE : DIVERT_REPORT,
- cs->ics.parm.dss1_io.ll_id,
- (mode != 2) ? "" : "0 ",
- divert_if.drv_to_name(cs->ics.driver),
- msn,
- service & 0xFF,
- proc,
- (mode != 1) ? "" : " 0 ",
- (mode != 1) ? "" : fwd_nr);
-
- retval = divert_if.ll_cmd(&cs->ics); /* execute command */
-
- if (!retval) {
- cs->prev = NULL;
- spin_lock_irqsave(&divert_lock, flags);
- cs->next = divert_head;
- divert_head = cs;
- spin_unlock_irqrestore(&divert_lock, flags);
- } else
- kfree(cs);
- return (retval);
-} /* cf_command */
-
-
-/****************************************/
-/* handle a external deflection command */
-/****************************************/
-int deflect_extern_action(u_char cmd, ulong callid, char *to_nr)
-{
- struct call_struc *cs;
- isdn_ctrl ic;
- unsigned long flags;
- int i;
-
- if ((cmd & 0x7F) > 2) return (-EINVAL); /* invalid command */
- cs = divert_head; /* start of parameter list */
- while (cs) {
- if (cs->divert_id == callid) break; /* found */
- cs = cs->next;
- } /* search entry */
- if (!cs) return (-EINVAL); /* invalid callid */
-
- ic.driver = cs->ics.driver;
- ic.arg = cs->ics.arg;
- i = -EINVAL;
- if (cs->akt_state == DEFLECT_AUTODEL) return (i); /* no valid call */
- switch (cmd & 0x7F) {
- case 0: /* hangup */
- del_timer(&cs->timer);
- ic.command = ISDN_CMD_HANGUP;
- i = divert_if.ll_cmd(&ic);
- spin_lock_irqsave(&divert_lock, flags);
- cs->akt_state = DEFLECT_AUTODEL; /* delete after timeout */
- cs->timer.expires = jiffies + (HZ * AUTODEL_TIME);
- add_timer(&cs->timer);
- spin_unlock_irqrestore(&divert_lock, flags);
- break;
-
- case 1: /* alert */
- if (cs->akt_state == DEFLECT_ALERT) return (0);
- cmd &= 0x7F; /* never wait */
- del_timer(&cs->timer);
- ic.command = ISDN_CMD_ALERT;
- if ((i = divert_if.ll_cmd(&ic))) {
- spin_lock_irqsave(&divert_lock, flags);
- cs->akt_state = DEFLECT_AUTODEL; /* delete after timeout */
- cs->timer.expires = jiffies + (HZ * AUTODEL_TIME);
- add_timer(&cs->timer);
- spin_unlock_irqrestore(&divert_lock, flags);
- } else
- cs->akt_state = DEFLECT_ALERT;
- break;
-
- case 2: /* redir */
- del_timer(&cs->timer);
- strlcpy(cs->ics.parm.setup.phone, to_nr, sizeof(cs->ics.parm.setup.phone));
- strcpy(cs->ics.parm.setup.eazmsn, "Testtext manual");
- ic.command = ISDN_CMD_REDIR;
- if ((i = divert_if.ll_cmd(&ic))) {
- spin_lock_irqsave(&divert_lock, flags);
- cs->akt_state = DEFLECT_AUTODEL; /* delete after timeout */
- cs->timer.expires = jiffies + (HZ * AUTODEL_TIME);
- add_timer(&cs->timer);
- spin_unlock_irqrestore(&divert_lock, flags);
- } else
- cs->akt_state = DEFLECT_ALERT;
- break;
-
- } /* switch */
- return (i);
-} /* deflect_extern_action */
-
-/********************************/
-/* insert a new rule before idx */
-/********************************/
-int insertrule(int idx, divert_rule *newrule)
-{
- struct deflect_struc *ds, *ds1 = NULL;
- unsigned long flags;
-
- if (!(ds = kmalloc(sizeof(struct deflect_struc), GFP_KERNEL)))
- return (-ENOMEM); /* no memory */
-
- ds->rule = *newrule; /* set rule */
-
- spin_lock_irqsave(&divert_lock, flags);
-
- if (idx >= 0) {
- ds1 = table_head;
- while ((ds1) && (idx > 0))
- { idx--;
- ds1 = ds1->next;
- }
- if (!ds1) idx = -1;
- }
-
- if (idx < 0) {
- ds->prev = table_tail; /* previous entry */
- ds->next = NULL; /* end of chain */
- if (ds->prev)
- ds->prev->next = ds; /* last forward */
- else
- table_head = ds; /* is first entry */
- table_tail = ds; /* end of queue */
- } else {
- ds->next = ds1; /* next entry */
- ds->prev = ds1->prev; /* prev entry */
- ds1->prev = ds; /* backward chain old element */
- if (!ds->prev)
- table_head = ds; /* first element */
- }
-
- spin_unlock_irqrestore(&divert_lock, flags);
- return (0);
-} /* insertrule */
-
-/***********************************/
-/* delete the rule at position idx */
-/***********************************/
-int deleterule(int idx)
-{
- struct deflect_struc *ds, *ds1;
- unsigned long flags;
-
- if (idx < 0) {
- spin_lock_irqsave(&divert_lock, flags);
- ds = table_head;
- table_head = NULL;
- table_tail = NULL;
- spin_unlock_irqrestore(&divert_lock, flags);
- while (ds) {
- ds1 = ds;
- ds = ds->next;
- kfree(ds1);
- }
- return (0);
- }
-
- spin_lock_irqsave(&divert_lock, flags);
- ds = table_head;
-
- while ((ds) && (idx > 0)) {
- idx--;
- ds = ds->next;
- }
-
- if (!ds) {
- spin_unlock_irqrestore(&divert_lock, flags);
- return (-EINVAL);
- }
-
- if (ds->next)
- ds->next->prev = ds->prev; /* backward chain */
- else
- table_tail = ds->prev; /* end of chain */
-
- if (ds->prev)
- ds->prev->next = ds->next; /* forward chain */
- else
- table_head = ds->next; /* start of chain */
-
- spin_unlock_irqrestore(&divert_lock, flags);
- kfree(ds);
- return (0);
-} /* deleterule */
-
-/*******************************************/
-/* get a pointer to a specific rule number */
-/*******************************************/
-divert_rule *getruleptr(int idx)
-{
- struct deflect_struc *ds = table_head;
-
- if (idx < 0) return (NULL);
- while ((ds) && (idx >= 0)) {
- if (!(idx--)) {
- return (&ds->rule);
- break;
- }
- ds = ds->next;
- }
- return (NULL);
-} /* getruleptr */
-
-/*************************************************/
-/* called from common module on an incoming call */
-/*************************************************/
-static int isdn_divert_icall(isdn_ctrl *ic)
-{
- int retval = 0;
- unsigned long flags;
- struct call_struc *cs = NULL;
- struct deflect_struc *dv;
- char *p, *p1;
- u_char accept;
-
- /* first check the internal deflection table */
- for (dv = table_head; dv; dv = dv->next) {
- /* scan table */
- if (((dv->rule.callopt == 1) && (ic->command == ISDN_STAT_ICALLW)) ||
- ((dv->rule.callopt == 2) && (ic->command == ISDN_STAT_ICALL)))
- continue; /* call option check */
- if (!(dv->rule.drvid & (1L << ic->driver)))
- continue; /* driver not matching */
- if ((dv->rule.si1) && (dv->rule.si1 != ic->parm.setup.si1))
- continue; /* si1 not matching */
- if ((dv->rule.si2) && (dv->rule.si2 != ic->parm.setup.si2))
- continue; /* si2 not matching */
-
- p = dv->rule.my_msn;
- p1 = ic->parm.setup.eazmsn;
- accept = 0;
- while (*p) {
- /* complete compare */
- if (*p == '-') {
- accept = 1; /* call accepted */
- break;
- }
- if (*p++ != *p1++)
- break; /* not accepted */
- if ((!*p) && (!*p1))
- accept = 1;
- } /* complete compare */
- if (!accept) continue; /* not accepted */
-
- if ((strcmp(dv->rule.caller, "0")) ||
- (ic->parm.setup.phone[0])) {
- p = dv->rule.caller;
- p1 = ic->parm.setup.phone;
- accept = 0;
- while (*p) {
- /* complete compare */
- if (*p == '-') {
- accept = 1; /* call accepted */
- break;
- }
- if (*p++ != *p1++)
- break; /* not accepted */
- if ((!*p) && (!*p1))
- accept = 1;
- } /* complete compare */
- if (!accept) continue; /* not accepted */
- }
-
- switch (dv->rule.action) {
- case DEFLECT_IGNORE:
- return 0;
-
- case DEFLECT_ALERT:
- case DEFLECT_PROCEED:
- case DEFLECT_REPORT:
- case DEFLECT_REJECT:
- if (dv->rule.action == DEFLECT_PROCEED)
- if ((!if_used) || ((!extern_wait_max) && (!dv->rule.waittime)))
- return (0); /* no external deflection needed */
- if (!(cs = kmalloc(sizeof(struct call_struc), GFP_ATOMIC)))
- return (0); /* no memory */
- timer_setup(&cs->timer, deflect_timer_expire, 0);
- cs->info[0] = '\0';
-
- cs->ics = *ic; /* copy incoming data */
- if (!cs->ics.parm.setup.phone[0]) strcpy(cs->ics.parm.setup.phone, "0");
- if (!cs->ics.parm.setup.eazmsn[0]) strcpy(cs->ics.parm.setup.eazmsn, "0");
- cs->ics.parm.setup.screen = dv->rule.screen;
- if (dv->rule.waittime)
- cs->timer.expires = jiffies + (HZ * dv->rule.waittime);
- else if (dv->rule.action == DEFLECT_PROCEED)
- cs->timer.expires = jiffies + (HZ * extern_wait_max);
- else
- cs->timer.expires = 0;
- cs->akt_state = dv->rule.action;
- spin_lock_irqsave(&divert_lock, flags);
- cs->divert_id = next_id++; /* new sequence number */
- spin_unlock_irqrestore(&divert_lock, flags);
- cs->prev = NULL;
- if (cs->akt_state == DEFLECT_ALERT) {
- strcpy(cs->deflect_dest, dv->rule.to_nr);
- if (!cs->timer.expires) {
- strcpy(ic->parm.setup.eazmsn,
- "Testtext direct");
- ic->parm.setup.screen = dv->rule.screen;
- strlcpy(ic->parm.setup.phone, dv->rule.to_nr, sizeof(ic->parm.setup.phone));
- cs->akt_state = DEFLECT_AUTODEL; /* delete after timeout */
- cs->timer.expires = jiffies + (HZ * AUTODEL_TIME);
- retval = 5;
- } else
- retval = 1; /* alerting */
- } else {
- cs->deflect_dest[0] = '\0';
- retval = 4; /* only proceed */
- }
- snprintf(cs->info, sizeof(cs->info),
- "%d 0x%lx %s %s %s %s 0x%x 0x%x %d %d %s\n",
- cs->akt_state,
- cs->divert_id,
- divert_if.drv_to_name(cs->ics.driver),
- (ic->command == ISDN_STAT_ICALLW) ? "1" : "0",
- cs->ics.parm.setup.phone,
- cs->ics.parm.setup.eazmsn,
- cs->ics.parm.setup.si1,
- cs->ics.parm.setup.si2,
- cs->ics.parm.setup.screen,
- dv->rule.waittime,
- cs->deflect_dest);
- if ((dv->rule.action == DEFLECT_REPORT) ||
- (dv->rule.action == DEFLECT_REJECT)) {
- put_info_buffer(cs->info);
- kfree(cs); /* remove */
- return ((dv->rule.action == DEFLECT_REPORT) ? 0 : 2); /* nothing to do */
- }
- break;
-
- default:
- return 0; /* ignore call */
- } /* switch action */
- break; /* will break the 'for' looping */
- } /* scan_table */
-
- if (cs) {
- cs->prev = NULL;
- spin_lock_irqsave(&divert_lock, flags);
- cs->next = divert_head;
- divert_head = cs;
- if (cs->timer.expires) add_timer(&cs->timer);
- spin_unlock_irqrestore(&divert_lock, flags);
-
- put_info_buffer(cs->info);
- return (retval);
- } else
- return (0);
-} /* isdn_divert_icall */
-
-
-void deleteprocs(void)
-{
- struct call_struc *cs, *cs1;
- unsigned long flags;
-
- spin_lock_irqsave(&divert_lock, flags);
- cs = divert_head;
- divert_head = NULL;
- while (cs) {
- del_timer(&cs->timer);
- cs1 = cs;
- cs = cs->next;
- kfree(cs1);
- }
- spin_unlock_irqrestore(&divert_lock, flags);
-} /* deleteprocs */
-
-/****************************************************/
-/* put a address including address type into buffer */
-/****************************************************/
-static int put_address(char *st, u_char *p, int len)
-{
- u_char retval = 0;
- u_char adr_typ = 0; /* network standard */
-
- if (len < 2) return (retval);
- if (*p == 0xA1) {
- retval = *(++p) + 2; /* total length */
- if (retval > len) return (0); /* too short */
- len = retval - 2; /* remaining length */
- if (len < 3) return (0);
- if ((*(++p) != 0x0A) || (*(++p) != 1)) return (0);
- adr_typ = *(++p);
- len -= 3;
- p++;
- if (len < 2) return (0);
- if (*p++ != 0x12) return (0);
- if (*p > len) return (0); /* check number length */
- len = *p++;
- } else if (*p == 0x80) {
- retval = *(++p) + 2; /* total length */
- if (retval > len) return (0);
- len = retval - 2;
- p++;
- } else
- return (0); /* invalid address information */
-
- sprintf(st, "%d ", adr_typ);
- st += strlen(st);
- if (!len)
- *st++ = '-';
- else
- while (len--)
- *st++ = *p++;
- *st = '\0';
- return (retval);
-} /* put_address */
-
-/*************************************/
-/* report a successful interrogation */
-/*************************************/
-static int interrogate_success(isdn_ctrl *ic, struct call_struc *cs)
-{
- char *src = ic->parm.dss1_io.data;
- int restlen = ic->parm.dss1_io.datalen;
- int cnt = 1;
- u_char n, n1;
- char st[90], *p, *stp;
-
- if (restlen < 2) return (-100); /* frame too short */
- if (*src++ != 0x30) return (-101);
- if ((n = *src++) > 0x81) return (-102); /* invalid length field */
- restlen -= 2; /* remaining bytes */
- if (n == 0x80) {
- if (restlen < 2) return (-103);
- if ((*(src + restlen - 1)) || (*(src + restlen - 2))) return (-104);
- restlen -= 2;
- } else if (n == 0x81) {
- n = *src++;
- restlen--;
- if (n > restlen) return (-105);
- restlen = n;
- } else if (n > restlen)
- return (-106);
- else
- restlen = n; /* standard format */
- if (restlen < 3) return (-107); /* no procedure */
- if ((*src++ != 2) || (*src++ != 1) || (*src++ != 0x0B)) return (-108);
- restlen -= 3;
- if (restlen < 2) return (-109); /* list missing */
- if (*src == 0x31) {
- src++;
- if ((n = *src++) > 0x81) return (-110); /* invalid length field */
- restlen -= 2; /* remaining bytes */
- if (n == 0x80) {
- if (restlen < 2) return (-111);
- if ((*(src + restlen - 1)) || (*(src + restlen - 2))) return (-112);
- restlen -= 2;
- } else if (n == 0x81) {
- n = *src++;
- restlen--;
- if (n > restlen) return (-113);
- restlen = n;
- } else if (n > restlen)
- return (-114);
- else
- restlen = n; /* standard format */
- } /* result list header */
-
- while (restlen >= 2) {
- stp = st;
- sprintf(stp, "%d 0x%lx %d %s ", DIVERT_REPORT, ic->parm.dss1_io.ll_id,
- cnt++, divert_if.drv_to_name(ic->driver));
- stp += strlen(stp);
- if (*src++ != 0x30) return (-115); /* invalid enum */
- n = *src++;
- restlen -= 2;
- if (n > restlen) return (-116); /* enum length wrong */
- restlen -= n;
- p = src; /* one entry */
- src += n;
- if (!(n1 = put_address(stp, p, n & 0xFF))) continue;
- stp += strlen(stp);
- p += n1;
- n -= n1;
- if (n < 6) continue; /* no service and proc */
- if ((*p++ != 0x0A) || (*p++ != 1)) continue;
- sprintf(stp, " 0x%02x ", (*p++) & 0xFF);
- stp += strlen(stp);
- if ((*p++ != 0x0A) || (*p++ != 1)) continue;
- sprintf(stp, "%d ", (*p++) & 0xFF);
- stp += strlen(stp);
- n -= 6;
- if (n > 2) {
- if (*p++ != 0x30) continue;
- if (*p > (n - 2)) continue;
- n = *p++;
- if (!(n1 = put_address(stp, p, n & 0xFF))) continue;
- stp += strlen(stp);
- }
- sprintf(stp, "\n");
- put_info_buffer(st);
- } /* while restlen */
- if (restlen) return (-117);
- return (0);
-} /* interrogate_success */
-
-/*********************************************/
-/* callback for protocol specific extensions */
-/*********************************************/
-static int prot_stat_callback(isdn_ctrl *ic)
-{
- struct call_struc *cs, *cs1;
- int i;
- unsigned long flags;
-
- cs = divert_head; /* start of list */
- cs1 = NULL;
- while (cs) {
- if (ic->driver == cs->ics.driver) {
- switch (cs->ics.arg) {
- case DSS1_CMD_INVOKE:
- if ((cs->ics.parm.dss1_io.ll_id == ic->parm.dss1_io.ll_id) &&
- (cs->ics.parm.dss1_io.hl_id == ic->parm.dss1_io.hl_id)) {
- switch (ic->arg) {
- case DSS1_STAT_INVOKE_ERR:
- sprintf(cs->info, "128 0x%lx 0x%x\n",
- ic->parm.dss1_io.ll_id,
- ic->parm.dss1_io.timeout);
- put_info_buffer(cs->info);
- break;
-
- case DSS1_STAT_INVOKE_RES:
- switch (cs->ics.parm.dss1_io.proc) {
- case 7:
- case 8:
- put_info_buffer(cs->info);
- break;
-
- case 11:
- i = interrogate_success(ic, cs);
- if (i)
- sprintf(cs->info, "%d 0x%lx %d\n", DIVERT_REPORT,
- ic->parm.dss1_io.ll_id, i);
- put_info_buffer(cs->info);
- break;
-
- default:
- printk(KERN_WARNING "dss1_divert: unknown proc %d\n", cs->ics.parm.dss1_io.proc);
- break;
- }
-
- break;
-
- default:
- printk(KERN_WARNING "dss1_divert unknown invoke answer %lx\n", ic->arg);
- break;
- }
- cs1 = cs; /* remember structure */
- cs = NULL;
- continue; /* abort search */
- } /* id found */
- break;
-
- case DSS1_CMD_INVOKE_ABORT:
- printk(KERN_WARNING "dss1_divert unhandled invoke abort\n");
- break;
-
- default:
- printk(KERN_WARNING "dss1_divert unknown cmd 0x%lx\n", cs->ics.arg);
- break;
- } /* switch ics.arg */
- cs = cs->next;
- } /* driver ok */
- }
-
- if (!cs1) {
- printk(KERN_WARNING "dss1_divert unhandled process\n");
- return (0);
- }
-
- if (cs1->ics.driver == -1) {
- spin_lock_irqsave(&divert_lock, flags);
- del_timer(&cs1->timer);
- if (cs1->prev)
- cs1->prev->next = cs1->next; /* forward link */
- else
- divert_head = cs1->next;
- if (cs1->next)
- cs1->next->prev = cs1->prev; /* back link */
- spin_unlock_irqrestore(&divert_lock, flags);
- kfree(cs1);
- }
-
- return (0);
-} /* prot_stat_callback */
-
-
-/***************************/
-/* status callback from HL */
-/***************************/
-static int isdn_divert_stat_callback(isdn_ctrl *ic)
-{
- struct call_struc *cs, *cs1;
- unsigned long flags;
- int retval;
-
- retval = -1;
- cs = divert_head; /* start of list */
- while (cs) {
- if ((ic->driver == cs->ics.driver) &&
- (ic->arg == cs->ics.arg)) {
- switch (ic->command) {
- case ISDN_STAT_DHUP:
- sprintf(cs->info, "129 0x%lx\n", cs->divert_id);
- del_timer(&cs->timer);
- cs->ics.driver = -1;
- break;
-
- case ISDN_STAT_CAUSE:
- sprintf(cs->info, "130 0x%lx %s\n", cs->divert_id, ic->parm.num);
- break;
-
- case ISDN_STAT_REDIR:
- sprintf(cs->info, "131 0x%lx\n", cs->divert_id);
- del_timer(&cs->timer);
- cs->ics.driver = -1;
- break;
-
- default:
- sprintf(cs->info, "999 0x%lx 0x%x\n", cs->divert_id, (int)(ic->command));
- break;
- }
- put_info_buffer(cs->info);
- retval = 0;
- }
- cs1 = cs;
- cs = cs->next;
- if (cs1->ics.driver == -1) {
- spin_lock_irqsave(&divert_lock, flags);
- if (cs1->prev)
- cs1->prev->next = cs1->next; /* forward link */
- else
- divert_head = cs1->next;
- if (cs1->next)
- cs1->next->prev = cs1->prev; /* back link */
- spin_unlock_irqrestore(&divert_lock, flags);
- kfree(cs1);
- }
- }
- return (retval); /* not found */
-} /* isdn_divert_stat_callback */
-
-
-/********************/
-/* callback from ll */
-/********************/
-int ll_callback(isdn_ctrl *ic)
-{
- switch (ic->command) {
- case ISDN_STAT_ICALL:
- case ISDN_STAT_ICALLW:
- return (isdn_divert_icall(ic));
- break;
-
- case ISDN_STAT_PROT:
- if ((ic->arg & 0xFF) == ISDN_PTYPE_EURO) {
- if (ic->arg != DSS1_STAT_INVOKE_BRD)
- return (prot_stat_callback(ic));
- else
- return (0); /* DSS1 invoke broadcast */
- } else
- return (-1); /* protocol not euro */
-
- default:
- return (isdn_divert_stat_callback(ic));
- }
-} /* ll_callback */
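The msn and caller compare loops in isdn_divert_icall() above apply the same rule twice. Restated as a stand-alone helper for clarity (the function name is hypothetical; the semantics follow the loop above: '-' in the pattern accepts any remainder, otherwise pattern and number must match character for character and end together):

/* Sketch only: the pattern match used for rule.my_msn and rule.caller in
 * isdn_divert_icall() above, written as one helper.  Returns 1 on match. */
int divert_pattern_match(const char *pattern, const char *number)
{
	while (*pattern) {
		if (*pattern == '-')
			return 1;		/* wildcard: rest accepted */
		if (*pattern++ != *number++)
			return 0;		/* character mismatch */
		if (!*pattern && !*number)
			return 1;		/* both strings ended together */
	}
	return 0;				/* pattern exhausted before number */
}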
diff --git a/drivers/isdn/divert/isdn_divert.h b/drivers/isdn/divert/isdn_divert.h
deleted file mode 100644
index 55033dd872c0..000000000000
--- a/drivers/isdn/divert/isdn_divert.h
+++ /dev/null
@@ -1,132 +0,0 @@
-/* $Id: isdn_divert.h,v 1.5.6.1 2001/09/23 22:24:36 kai Exp $
- *
- * Header for the diversion supplementary ioctl interface.
- *
- * Copyright 1998 by Werner Cornelius (werner@ikt.de)
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#include <linux/ioctl.h>
-#include <linux/types.h>
-
-/******************************************/
-/* IOCTL codes for interface to user prog */
-/******************************************/
-#define DIVERT_IIOC_VERSION 0x01 /* actual version */
-#define IIOCGETVER _IO('I', 1) /* get version of interface */
-#define IIOCGETDRV _IO('I', 2) /* get driver number */
-#define IIOCGETNAM _IO('I', 3) /* get driver name */
-#define IIOCGETRULE _IO('I', 4) /* read one rule */
-#define IIOCMODRULE _IO('I', 5) /* modify/replace a rule */
-#define IIOCINSRULE _IO('I', 6) /* insert/append one rule */
-#define IIOCDELRULE _IO('I', 7) /* delete a rule */
-#define IIOCDODFACT _IO('I', 8) /* hangup/reject/alert/immediately deflect a call */
-#define IIOCDOCFACT _IO('I', 9) /* activate control forwarding in PBX */
-#define IIOCDOCFDIS _IO('I', 10) /* deactivate control forwarding in PBX */
-#define IIOCDOCFINT _IO('I', 11) /* interrogate control forwarding in PBX */
-
-/*************************************/
-/* states reported through interface */
-/*************************************/
-#define DEFLECT_IGNORE 0 /* ignore incoming call */
-#define DEFLECT_REPORT 1 /* only report */
-#define DEFLECT_PROCEED 2 /* deflect when externally triggered */
-#define DEFLECT_ALERT 3 /* alert and deflect after delay */
-#define DEFLECT_REJECT 4 /* reject immediately */
-#define DIVERT_ACTIVATE 5 /* diversion activate */
-#define DIVERT_DEACTIVATE 6 /* diversion deactivate */
-#define DIVERT_REPORT 7 /* interrogation result */
-#define DEFLECT_AUTODEL 255 /* only for internal use */
-
-#define DEFLECT_ALL_IDS 0xFFFFFFFF /* all drivers selected */
-
-typedef struct {
- ulong drvid; /* driver ids, bit mapped */
- char my_msn[35]; /* desired msn, subaddr allowed */
- char caller[35]; /* caller id, partial string with * + subaddr allowed */
- char to_nr[35]; /* deflected to number incl. subaddress */
- u_char si1, si2; /* service indicators, si1=bitmask, si1+2 0 = all */
- u_char screen; /* screening: 0 = no info, 1 = info, 2 = nfo with nr */
- u_char callopt; /* option for call handling:
- 0 = all calls
- 1 = only non waiting calls
- 2 = only waiting calls */
- u_char action; /* desired action:
- 0 = don't report call -> ignore
- 1 = report call, do not allow/proceed for deflection
- 2 = report call, send proceed, wait max waittime secs
- 3 = report call, alert and deflect after waittime
- 4 = report call, reject immediately
- actions 1-2 only take place if interface is opened
- */
- u_char waittime; /* maximum wait time for proceeding */
-} divert_rule;
-
-typedef union {
- int drv_version; /* return of driver version */
- struct {
- int drvid; /* id of driver */
- char drvnam[30]; /* name of driver */
- } getid;
- struct {
- int ruleidx; /* index of rule */
- divert_rule rule; /* rule parms */
- } getsetrule;
- struct {
- u_char subcmd; /* 0 = hangup/reject,
- 1 = alert,
- 2 = deflect */
- ulong callid; /* id of call delivered by ascii output */
- char to_nr[35]; /* destination when deflect,
- else uus1 string (maxlen 31),
- data from rule used if empty */
- } fwd_ctrl;
- struct {
- int drvid; /* id of driver */
- u_char cfproc; /* cfu = 0, cfb = 1, cfnr = 2 */
- ulong procid; /* process id returned when no error */
- u_char service; /* basically coded service, 0 = all */
- char msn[25]; /* desired msn, empty = all */
- char fwd_nr[35];/* forwarded to number + subaddress */
- } cf_ctrl;
-} divert_ioctl;
-
-#ifdef __KERNEL__
-
-#include <linux/isdnif.h>
-#include <linux/isdn_divertif.h>
-
-#define AUTODEL_TIME 30 /* timeout in s to delete internal entries */
-
-/**************************************************/
-/* structure keeping ascii info for device output */
-/**************************************************/
-struct divert_info {
- struct divert_info *next;
- ulong usage_cnt; /* number of files still to work */
- char info_start[2]; /* info string start */
-};
-
-
-/**************/
-/* Prototypes */
-/**************/
-extern spinlock_t divert_lock;
-
-extern ulong if_used; /* number of interface users */
-extern int divert_dev_deinit(void);
-extern int divert_dev_init(void);
-extern void put_info_buffer(char *);
-extern int ll_callback(isdn_ctrl *);
-extern isdn_divert_if divert_if;
-extern divert_rule *getruleptr(int);
-extern int insertrule(int, divert_rule *);
-extern int deleterule(int);
-extern void deleteprocs(void);
-extern int deflect_extern_action(u_char, ulong, char *);
-extern int cf_command(int, int, u_char, char *, u_char, char *, ulong *);
-
-#endif /* __KERNEL__ */
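A hypothetical userspace sketch of the control side, using the ioctl codes and the divert_ioctl union defined above; it assumes the non-__KERNEL__ part of this header is available to the program and that the module created /proc/net/isdn/divert.

/* Sketch only: query the interface version and fetch deflection rule 0
 * via the ioctls above.  Not part of the removed driver. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/ioctl.h>
#include "isdn_divert.h"	/* divert_ioctl and IIOC* codes shown above */

int main(void)
{
	divert_ioctl dio;
	int fd = open("/proc/net/isdn/divert", O_RDONLY);

	if (fd < 0) {
		perror("open /proc/net/isdn/divert");
		return 1;
	}
	if (ioctl(fd, IIOCGETVER, &dio) == 0)
		printf("interface version %d\n", dio.drv_version);

	dio.getsetrule.ruleidx = 0;		/* first configured rule, if any */
	if (ioctl(fd, IIOCGETRULE, &dio) == 0)
		printf("rule 0: msn '%s' -> '%s'\n",
		       dio.getsetrule.rule.my_msn, dio.getsetrule.rule.to_nr);

	close(fd);
	return 0;
}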
diff --git a/drivers/isdn/gigaset/i4l.c b/drivers/isdn/gigaset/i4l.c
deleted file mode 100644
index 335b8ce2bb06..000000000000
--- a/drivers/isdn/gigaset/i4l.c
+++ /dev/null
@@ -1,692 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-or-later
-/*
- * Stuff used by all variants of the driver
- *
- * Copyright (c) 2001 by Stefan Eilers,
- * Hansjoerg Lipp <hjlipp@web.de>,
- * Tilman Schmidt <tilman@imap.cc>.
- *
- * =====================================================================
- * =====================================================================
- */
-
-#include "gigaset.h"
-#include <linux/isdnif.h>
-#include <linux/export.h>
-
-#define SBUFSIZE 4096 /* sk_buff payload size */
-#define TRANSBUFSIZE 768 /* bytes per skb for transparent receive */
-#define HW_HDR_LEN 2 /* Header size used to store ack info */
-#define MAX_BUF_SIZE (SBUFSIZE - HW_HDR_LEN) /* max data packet from LL */
-
-/* == Handling of I4L IO =====================================================*/
-
-/* writebuf_from_LL
- * called by LL to transmit data on an open channel
- * inserts the buffer data into the send queue and starts the transmission
- * Note that this operation must not sleep!
- * When the buffer is processed completely, gigaset_skb_sent() should be called.
- * parameters:
- * driverID driver ID as assigned by LL
- * channel channel number
- * ack if != 0 LL wants to be notified on completion via
- * statcallb(ISDN_STAT_BSENT)
- * skb skb containing data to send
- * return value:
- * number of accepted bytes
- * 0 if temporarily unable to accept data (out of buffer space)
- * <0 on error (eg. -EINVAL)
- */
-static int writebuf_from_LL(int driverID, int channel, int ack,
- struct sk_buff *skb)
-{
- struct cardstate *cs = gigaset_get_cs_by_id(driverID);
- struct bc_state *bcs;
- unsigned char *ack_header;
- unsigned len;
-
- if (!cs) {
- pr_err("%s: invalid driver ID (%d)\n", __func__, driverID);
- return -ENODEV;
- }
- if (channel < 0 || channel >= cs->channels) {
- dev_err(cs->dev, "%s: invalid channel ID (%d)\n",
- __func__, channel);
- return -ENODEV;
- }
- bcs = &cs->bcs[channel];
-
- /* can only handle linear sk_buffs */
- if (skb_linearize(skb) < 0) {
- dev_err(cs->dev, "%s: skb_linearize failed\n", __func__);
- return -ENOMEM;
- }
- len = skb->len;
-
- gig_dbg(DEBUG_LLDATA,
- "Receiving data from LL (id: %d, ch: %d, ack: %d, sz: %d)",
- driverID, channel, ack, len);
-
- if (!len) {
- if (ack)
- dev_notice(cs->dev, "%s: not ACKing empty packet\n",
- __func__);
- return 0;
- }
- if (len > MAX_BUF_SIZE) {
- dev_err(cs->dev, "%s: packet too large (%d bytes)\n",
- __func__, len);
- return -EINVAL;
- }
-
- /* set up acknowledgement header */
- if (skb_headroom(skb) < HW_HDR_LEN) {
- /* should never happen */
- dev_err(cs->dev, "%s: insufficient skb headroom\n", __func__);
- return -ENOMEM;
- }
- skb_set_mac_header(skb, -HW_HDR_LEN);
- skb->mac_len = HW_HDR_LEN;
- ack_header = skb_mac_header(skb);
- if (ack) {
- ack_header[0] = len & 0xff;
- ack_header[1] = len >> 8;
- } else {
- ack_header[0] = ack_header[1] = 0;
- }
- gig_dbg(DEBUG_MCMD, "skb: len=%u, ack=%d: %02x %02x",
- len, ack, ack_header[0], ack_header[1]);
-
- /* pass to device-specific module */
- return cs->ops->send_skb(bcs, skb);
-}
-
-/**
- * gigaset_skb_sent() - acknowledge sending an skb
- * @bcs: B channel descriptor structure.
- * @skb: sent data.
- *
- * Called by hardware module {bas,ser,usb}_gigaset when the data in a
- * skb has been successfully sent, for signalling completion to the LL.
- */
-void gigaset_skb_sent(struct bc_state *bcs, struct sk_buff *skb)
-{
- isdn_if *iif = bcs->cs->iif;
- unsigned char *ack_header = skb_mac_header(skb);
- unsigned len;
- isdn_ctrl response;
-
- ++bcs->trans_up;
-
- if (skb->len)
- dev_warn(bcs->cs->dev, "%s: skb->len==%d\n",
- __func__, skb->len);
-
- len = ack_header[0] + ((unsigned) ack_header[1] << 8);
- if (len) {
- gig_dbg(DEBUG_MCMD, "ACKing to LL (id: %d, ch: %d, sz: %u)",
- bcs->cs->myid, bcs->channel, len);
-
- response.driver = bcs->cs->myid;
- response.command = ISDN_STAT_BSENT;
- response.arg = bcs->channel;
- response.parm.length = len;
- iif->statcallb(&response);
- }
-}
-EXPORT_SYMBOL_GPL(gigaset_skb_sent);
-
-/**
- * gigaset_skb_rcvd() - pass received skb to LL
- * @bcs: B channel descriptor structure.
- * @skb: received data.
- *
- * Called by hardware module {bas,ser,usb}_gigaset when user data has
- * been successfully received, for passing to the LL.
- * Warning: skb must not be accessed anymore!
- */
-void gigaset_skb_rcvd(struct bc_state *bcs, struct sk_buff *skb)
-{
- isdn_if *iif = bcs->cs->iif;
-
- iif->rcvcallb_skb(bcs->cs->myid, bcs->channel, skb);
- bcs->trans_down++;
-}
-EXPORT_SYMBOL_GPL(gigaset_skb_rcvd);
-
-/**
- * gigaset_isdn_rcv_err() - signal receive error
- * @bcs: B channel descriptor structure.
- *
- * Called by hardware module {bas,ser,usb}_gigaset when a receive error
- * has occurred, for signalling to the LL.
- */
-void gigaset_isdn_rcv_err(struct bc_state *bcs)
-{
- isdn_if *iif = bcs->cs->iif;
- isdn_ctrl response;
-
- /* if currently ignoring packets, just count down */
- if (bcs->ignore) {
- bcs->ignore--;
- return;
- }
-
- /* update statistics */
- bcs->corrupted++;
-
- /* error -> LL */
- gig_dbg(DEBUG_CMD, "sending L1ERR");
- response.driver = bcs->cs->myid;
- response.command = ISDN_STAT_L1ERR;
- response.arg = bcs->channel;
- response.parm.errcode = ISDN_STAT_L1ERR_RECV;
- iif->statcallb(&response);
-}
-EXPORT_SYMBOL_GPL(gigaset_isdn_rcv_err);
-
-/* This function will be called by LL to send commands
- * NOTE: LL ignores the returned value, for commands other than ISDN_CMD_IOCTL,
- * so don't put too much effort into it.
- */
-static int command_from_LL(isdn_ctrl *cntrl)
-{
- struct cardstate *cs;
- struct bc_state *bcs;
- int retval = 0;
- char **commands;
- int ch;
- int i;
- size_t l;
-
- gig_dbg(DEBUG_CMD, "driver: %d, command: %d, arg: 0x%lx",
- cntrl->driver, cntrl->command, cntrl->arg);
-
- cs = gigaset_get_cs_by_id(cntrl->driver);
- if (cs == NULL) {
- pr_err("%s: invalid driver ID (%d)\n", __func__, cntrl->driver);
- return -ENODEV;
- }
- ch = cntrl->arg & 0xff;
-
- switch (cntrl->command) {
- case ISDN_CMD_IOCTL:
- dev_warn(cs->dev, "ISDN_CMD_IOCTL not supported\n");
- return -EINVAL;
-
- case ISDN_CMD_DIAL:
- gig_dbg(DEBUG_CMD,
- "ISDN_CMD_DIAL (phone: %s, msn: %s, si1: %d, si2: %d)",
- cntrl->parm.setup.phone, cntrl->parm.setup.eazmsn,
- cntrl->parm.setup.si1, cntrl->parm.setup.si2);
-
- if (ch >= cs->channels) {
- dev_err(cs->dev,
- "ISDN_CMD_DIAL: invalid channel (%d)\n", ch);
- return -EINVAL;
- }
- bcs = cs->bcs + ch;
- if (gigaset_get_channel(bcs) < 0) {
- dev_err(cs->dev, "ISDN_CMD_DIAL: channel not free\n");
- return -EBUSY;
- }
- switch (bcs->proto2) {
- case L2_HDLC:
- bcs->rx_bufsize = SBUFSIZE;
- break;
- default: /* assume transparent */
- bcs->rx_bufsize = TRANSBUFSIZE;
- }
- dev_kfree_skb(bcs->rx_skb);
- gigaset_new_rx_skb(bcs);
-
- commands = kcalloc(AT_NUM, sizeof(*commands), GFP_ATOMIC);
- if (!commands) {
- gigaset_free_channel(bcs);
- dev_err(cs->dev, "ISDN_CMD_DIAL: out of memory\n");
- return -ENOMEM;
- }
-
- l = 3 + strlen(cntrl->parm.setup.phone);
- commands[AT_DIAL] = kmalloc(l, GFP_ATOMIC);
- if (!commands[AT_DIAL])
- goto oom;
- if (cntrl->parm.setup.phone[0] == '*' &&
- cntrl->parm.setup.phone[1] == '*') {
- /* internal call: translate ** prefix to CTP value */
- commands[AT_TYPE] = kstrdup("^SCTP=0\r", GFP_ATOMIC);
- if (!commands[AT_TYPE])
- goto oom;
- snprintf(commands[AT_DIAL], l,
- "D%s\r", cntrl->parm.setup.phone + 2);
- } else {
- commands[AT_TYPE] = kstrdup("^SCTP=1\r", GFP_ATOMIC);
- if (!commands[AT_TYPE])
- goto oom;
- snprintf(commands[AT_DIAL], l,
- "D%s\r", cntrl->parm.setup.phone);
- }
-
- l = strlen(cntrl->parm.setup.eazmsn);
- if (l) {
- l += 8;
- commands[AT_MSN] = kmalloc(l, GFP_ATOMIC);
- if (!commands[AT_MSN])
- goto oom;
- snprintf(commands[AT_MSN], l, "^SMSN=%s\r",
- cntrl->parm.setup.eazmsn);
- }
-
- switch (cntrl->parm.setup.si1) {
- case 1: /* audio */
- /* BC = 9090A3: 3.1 kHz audio, A-law */
- commands[AT_BC] = kstrdup("^SBC=9090A3\r", GFP_ATOMIC);
- if (!commands[AT_BC])
- goto oom;
- break;
- case 7: /* data */
- default: /* hope the app knows what it is doing */
- /* BC = 8890: unrestricted digital information */
- commands[AT_BC] = kstrdup("^SBC=8890\r", GFP_ATOMIC);
- if (!commands[AT_BC])
- goto oom;
- }
- /* ToDo: other si1 values, inspect si2, set HLC/LLC */
-
- commands[AT_PROTO] = kmalloc(9, GFP_ATOMIC);
- if (!commands[AT_PROTO])
- goto oom;
- snprintf(commands[AT_PROTO], 9, "^SBPR=%u\r", bcs->proto2);
-
- commands[AT_ISO] = kmalloc(9, GFP_ATOMIC);
- if (!commands[AT_ISO])
- goto oom;
- snprintf(commands[AT_ISO], 9, "^SISO=%u\r",
- (unsigned) bcs->channel + 1);
-
- if (!gigaset_add_event(cs, &bcs->at_state, EV_DIAL, commands,
- bcs->at_state.seq_index, NULL)) {
- for (i = 0; i < AT_NUM; ++i)
- kfree(commands[i]);
- kfree(commands);
- gigaset_free_channel(bcs);
- return -ENOMEM;
- }
- gigaset_schedule_event(cs);
- break;
- case ISDN_CMD_ACCEPTD:
- gig_dbg(DEBUG_CMD, "ISDN_CMD_ACCEPTD");
- if (ch >= cs->channels) {
- dev_err(cs->dev,
- "ISDN_CMD_ACCEPTD: invalid channel (%d)\n", ch);
- return -EINVAL;
- }
- bcs = cs->bcs + ch;
- switch (bcs->proto2) {
- case L2_HDLC:
- bcs->rx_bufsize = SBUFSIZE;
- break;
- default: /* assume transparent */
- bcs->rx_bufsize = TRANSBUFSIZE;
- }
- dev_kfree_skb(bcs->rx_skb);
- gigaset_new_rx_skb(bcs);
- if (!gigaset_add_event(cs, &bcs->at_state,
- EV_ACCEPT, NULL, 0, NULL))
- return -ENOMEM;
- gigaset_schedule_event(cs);
-
- break;
- case ISDN_CMD_HANGUP:
- gig_dbg(DEBUG_CMD, "ISDN_CMD_HANGUP");
- if (ch >= cs->channels) {
- dev_err(cs->dev,
- "ISDN_CMD_HANGUP: invalid channel (%d)\n", ch);
- return -EINVAL;
- }
- bcs = cs->bcs + ch;
- if (!gigaset_add_event(cs, &bcs->at_state,
- EV_HUP, NULL, 0, NULL))
- return -ENOMEM;
- gigaset_schedule_event(cs);
-
- break;
- case ISDN_CMD_CLREAZ: /* Do not signal incoming signals */
- dev_info(cs->dev, "ignoring ISDN_CMD_CLREAZ\n");
- break;
- case ISDN_CMD_SETEAZ: /* Signal incoming calls for given MSN */
- dev_info(cs->dev, "ignoring ISDN_CMD_SETEAZ (%s)\n",
- cntrl->parm.num);
- break;
- case ISDN_CMD_SETL2: /* Set L2 to given protocol */
- if (ch >= cs->channels) {
- dev_err(cs->dev,
- "ISDN_CMD_SETL2: invalid channel (%d)\n", ch);
- return -EINVAL;
- }
- bcs = cs->bcs + ch;
- if (bcs->chstate & CHS_D_UP) {
- dev_err(cs->dev,
- "ISDN_CMD_SETL2: channel active (%d)\n", ch);
- return -EINVAL;
- }
- switch (cntrl->arg >> 8) {
- case ISDN_PROTO_L2_HDLC:
- gig_dbg(DEBUG_CMD, "ISDN_CMD_SETL2: setting L2_HDLC");
- bcs->proto2 = L2_HDLC;
- break;
- case ISDN_PROTO_L2_TRANS:
- gig_dbg(DEBUG_CMD, "ISDN_CMD_SETL2: setting L2_VOICE");
- bcs->proto2 = L2_VOICE;
- break;
- default:
- dev_err(cs->dev,
- "ISDN_CMD_SETL2: unsupported protocol (%lu)\n",
- cntrl->arg >> 8);
- return -EINVAL;
- }
- break;
- case ISDN_CMD_SETL3: /* Set L3 to given protocol */
- gig_dbg(DEBUG_CMD, "ISDN_CMD_SETL3");
- if (ch >= cs->channels) {
- dev_err(cs->dev,
- "ISDN_CMD_SETL3: invalid channel (%d)\n", ch);
- return -EINVAL;
- }
-
- if (cntrl->arg >> 8 != ISDN_PROTO_L3_TRANS) {
- dev_err(cs->dev,
- "ISDN_CMD_SETL3: unsupported protocol (%lu)\n",
- cntrl->arg >> 8);
- return -EINVAL;
- }
-
- break;
-
- default:
- gig_dbg(DEBUG_CMD, "unknown command %d from LL",
- cntrl->command);
- return -EINVAL;
- }
-
- return retval;
-
-oom:
- dev_err(bcs->cs->dev, "out of memory\n");
- for (i = 0; i < AT_NUM; ++i)
- kfree(commands[i]);
- kfree(commands);
- gigaset_free_channel(bcs);
- return -ENOMEM;
-}
-
-static void gigaset_i4l_cmd(struct cardstate *cs, int cmd)
-{
- isdn_if *iif = cs->iif;
- isdn_ctrl command;
-
- command.driver = cs->myid;
- command.command = cmd;
- command.arg = 0;
- iif->statcallb(&command);
-}
-
-static void gigaset_i4l_channel_cmd(struct bc_state *bcs, int cmd)
-{
- isdn_if *iif = bcs->cs->iif;
- isdn_ctrl command;
-
- command.driver = bcs->cs->myid;
- command.command = cmd;
- command.arg = bcs->channel;
- iif->statcallb(&command);
-}
-
-/**
- * gigaset_isdn_icall() - signal incoming call
- * @at_state: connection state structure.
- *
- * Called by main module to notify the LL that an incoming call has been
- * received. @at_state contains the parameters of the call.
- *
- * Return value: call disposition (ICALL_*)
- */
-int gigaset_isdn_icall(struct at_state_t *at_state)
-{
- struct cardstate *cs = at_state->cs;
- struct bc_state *bcs = at_state->bcs;
- isdn_if *iif = cs->iif;
- isdn_ctrl response;
- int retval;
-
- /* fill ICALL structure */
- response.parm.setup.si1 = 0; /* default: unknown */
- response.parm.setup.si2 = 0;
- response.parm.setup.screen = 0;
- response.parm.setup.plan = 0;
- if (!at_state->str_var[STR_ZBC]) {
- /* no BC (internal call): assume speech, A-law */
- response.parm.setup.si1 = 1;
- } else if (!strcmp(at_state->str_var[STR_ZBC], "8890")) {
- /* unrestricted digital information */
- response.parm.setup.si1 = 7;
- } else if (!strcmp(at_state->str_var[STR_ZBC], "8090A3")) {
- /* speech, A-law */
- response.parm.setup.si1 = 1;
- } else if (!strcmp(at_state->str_var[STR_ZBC], "9090A3")) {
- /* 3,1 kHz audio, A-law */
- response.parm.setup.si1 = 1;
- response.parm.setup.si2 = 2;
- } else {
- dev_warn(cs->dev, "RING ignored - unsupported BC %s\n",
- at_state->str_var[STR_ZBC]);
- return ICALL_IGNORE;
- }
- if (at_state->str_var[STR_NMBR]) {
- strlcpy(response.parm.setup.phone, at_state->str_var[STR_NMBR],
- sizeof response.parm.setup.phone);
- } else
- response.parm.setup.phone[0] = 0;
- if (at_state->str_var[STR_ZCPN]) {
- strlcpy(response.parm.setup.eazmsn, at_state->str_var[STR_ZCPN],
- sizeof response.parm.setup.eazmsn);
- } else
- response.parm.setup.eazmsn[0] = 0;
-
- if (!bcs) {
- dev_notice(cs->dev, "no channel for incoming call\n");
- response.command = ISDN_STAT_ICALLW;
- response.arg = 0;
- } else {
- gig_dbg(DEBUG_CMD, "Sending ICALL");
- response.command = ISDN_STAT_ICALL;
- response.arg = bcs->channel;
- }
- response.driver = cs->myid;
- retval = iif->statcallb(&response);
- gig_dbg(DEBUG_CMD, "Response: %d", retval);
- switch (retval) {
- case 0: /* no takers */
- return ICALL_IGNORE;
- case 1: /* alerting */
- bcs->chstate |= CHS_NOTIFY_LL;
- return ICALL_ACCEPT;
- case 2: /* reject */
- return ICALL_REJECT;
- case 3: /* incomplete */
- dev_warn(cs->dev,
- "LL requested unsupported feature: Incomplete Number\n");
- return ICALL_IGNORE;
- case 4: /* proceeding */
- /* Gigaset will send ALERTING anyway.
- * There doesn't seem to be a way to avoid this.
- */
- return ICALL_ACCEPT;
- case 5: /* deflect */
- dev_warn(cs->dev,
- "LL requested unsupported feature: Call Deflection\n");
- return ICALL_IGNORE;
- default:
- dev_err(cs->dev, "LL error %d on ICALL\n", retval);
- return ICALL_IGNORE;
- }
-}
-
-/**
- * gigaset_isdn_connD() - signal D channel connect
- * @bcs: B channel descriptor structure.
- *
- * Called by main module to notify the LL that the D channel connection has
- * been established.
- */
-void gigaset_isdn_connD(struct bc_state *bcs)
-{
- gig_dbg(DEBUG_CMD, "sending DCONN");
- gigaset_i4l_channel_cmd(bcs, ISDN_STAT_DCONN);
-}
-
-/**
- * gigaset_isdn_hupD() - signal D channel hangup
- * @bcs: B channel descriptor structure.
- *
- * Called by main module to notify the LL that the D channel connection has
- * been shut down.
- */
-void gigaset_isdn_hupD(struct bc_state *bcs)
-{
- gig_dbg(DEBUG_CMD, "sending DHUP");
- gigaset_i4l_channel_cmd(bcs, ISDN_STAT_DHUP);
-}
-
-/**
- * gigaset_isdn_connB() - signal B channel connect
- * @bcs: B channel descriptor structure.
- *
- * Called by main module to notify the LL that the B channel connection has
- * been established.
- */
-void gigaset_isdn_connB(struct bc_state *bcs)
-{
- gig_dbg(DEBUG_CMD, "sending BCONN");
- gigaset_i4l_channel_cmd(bcs, ISDN_STAT_BCONN);
-}
-
-/**
- * gigaset_isdn_hupB() - signal B channel hangup
- * @bcs: B channel descriptor structure.
- *
- * Called by main module to notify the LL that the B channel connection has
- * been shut down.
- */
-void gigaset_isdn_hupB(struct bc_state *bcs)
-{
- gig_dbg(DEBUG_CMD, "sending BHUP");
- gigaset_i4l_channel_cmd(bcs, ISDN_STAT_BHUP);
-}
-
-/**
- * gigaset_isdn_start() - signal device availability
- * @cs: device descriptor structure.
- *
- * Called by main module to notify the LL that the device is available for
- * use.
- */
-void gigaset_isdn_start(struct cardstate *cs)
-{
- gig_dbg(DEBUG_CMD, "sending RUN");
- gigaset_i4l_cmd(cs, ISDN_STAT_RUN);
-}
-
-/**
- * gigaset_isdn_stop() - signal device unavailability
- * @cs: device descriptor structure.
- *
- * Called by main module to notify the LL that the device is no longer
- * available for use.
- */
-void gigaset_isdn_stop(struct cardstate *cs)
-{
- gig_dbg(DEBUG_CMD, "sending STOP");
- gigaset_i4l_cmd(cs, ISDN_STAT_STOP);
-}
-
-/**
- * gigaset_isdn_regdev() - register to LL
- * @cs: device descriptor structure.
- * @isdnid: device name.
- *
- * Return value: 0 on success, error code < 0 on failure
- */
-int gigaset_isdn_regdev(struct cardstate *cs, const char *isdnid)
-{
- isdn_if *iif;
-
- iif = kmalloc(sizeof *iif, GFP_KERNEL);
- if (!iif) {
- pr_err("out of memory\n");
- return -ENOMEM;
- }
-
- if (snprintf(iif->id, sizeof iif->id, "%s_%u", isdnid, cs->minor_index)
- >= sizeof iif->id) {
- pr_err("ID too long: %s\n", isdnid);
- kfree(iif);
- return -EINVAL;
- }
-
- iif->owner = THIS_MODULE;
- iif->channels = cs->channels;
- iif->maxbufsize = MAX_BUF_SIZE;
- iif->features = ISDN_FEATURE_L2_TRANS |
- ISDN_FEATURE_L2_HDLC |
- ISDN_FEATURE_L2_X75I |
- ISDN_FEATURE_L3_TRANS |
- ISDN_FEATURE_P_EURO;
- iif->hl_hdrlen = HW_HDR_LEN; /* Area for storing ack */
- iif->command = command_from_LL;
- iif->writebuf_skb = writebuf_from_LL;
- iif->writecmd = NULL; /* Don't support isdnctrl */
- iif->readstat = NULL; /* Don't support isdnctrl */
- iif->rcvcallb_skb = NULL; /* Will be set by LL */
- iif->statcallb = NULL; /* Will be set by LL */
-
- if (!register_isdn(iif)) {
- pr_err("register_isdn failed\n");
- kfree(iif);
- return -EINVAL;
- }
-
- cs->iif = iif;
- cs->myid = iif->channels; /* Set my device id */
- cs->hw_hdr_len = HW_HDR_LEN;
- return 0;
-}
-
-/**
- * gigaset_isdn_unregdev() - unregister device from LL
- * @cs: device descriptor structure.
- */
-void gigaset_isdn_unregdev(struct cardstate *cs)
-{
- gig_dbg(DEBUG_CMD, "sending UNLOAD");
- gigaset_i4l_cmd(cs, ISDN_STAT_UNLOAD);
- kfree(cs->iif);
- cs->iif = NULL;
-}
-
-/**
- * gigaset_isdn_regdrv() - register driver to LL
- */
-void gigaset_isdn_regdrv(void)
-{
- pr_info("ISDN4Linux interface\n");
- /* nothing to do */
-}
-
-/**
- * gigaset_isdn_unregdrv() - unregister driver from LL
- */
-void gigaset_isdn_unregdrv(void)
-{
- /* nothing to do */
-}
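
For reference, the incoming-call path removed above hands each RING to the ISDN4Linux line layer through iif->statcallb() and then acts on the numeric reply (0 = no takers, 1 = alerting, 2 = reject, 3 = incomplete number, 4 = proceeding, 5 = deflect). The sketch below shows how a hypothetical line-layer status callback could produce those replies; the filtering policy and the function name are invented for illustration, and only the isdn_ctrl fields and ISDN_STAT_* codes are taken from the deleted code above.

/* Hypothetical sketch only: a line-layer statcallb() answering the
 * ICALL events sent by the deleted gigaset_isdn_icall() above. */
#include <linux/isdnif.h>	/* old ISDN4Linux interface: isdn_ctrl, ISDN_STAT_* */

static int example_statcallb(isdn_ctrl *ctrl)
{
	switch (ctrl->command) {
	case ISDN_STAT_ICALL:
		/* accept plain speech calls (si1 == 1), reject the rest;
		 * returning 1 makes the driver set CHS_NOTIFY_LL and accept
		 * the call, returning 2 rejects it */
		return ctrl->parm.setup.si1 == 1 ? 1 : 2;
	case ISDN_STAT_ICALLW:
		/* call waiting, no free B channel: nothing we can do */
		return 0;
	default:
		return 0;
	}
}
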
diff --git a/drivers/isdn/hardware/Kconfig b/drivers/isdn/hardware/Kconfig
deleted file mode 100644
index 0d609b5fcf01..000000000000
--- a/drivers/isdn/hardware/Kconfig
+++ /dev/null
@@ -1,8 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0-only
-#
-# ISDN hardware drivers
-#
-comment "CAPI hardware drivers"
-
-source "drivers/isdn/hardware/avm/Kconfig"
-
diff --git a/drivers/isdn/hardware/Makefile b/drivers/isdn/hardware/Makefile
index a43760a0a4f5..96f9eb2e46ba 100644
--- a/drivers/isdn/hardware/Makefile
+++ b/drivers/isdn/hardware/Makefile
@@ -3,5 +3,4 @@
# Object files in subdirectories
-obj-$(CONFIG_CAPI_AVM) += avm/
obj-$(CONFIG_MISDN) += mISDN/
diff --git a/drivers/isdn/hardware/mISDN/Kconfig b/drivers/isdn/hardware/mISDN/Kconfig
index a7a34a85b970..304f50c08da2 100644
--- a/drivers/isdn/hardware/mISDN/Kconfig
+++ b/drivers/isdn/hardware/mISDN/Kconfig
@@ -79,11 +79,14 @@ config MISDN_NETJET
depends on PCI
depends on TTY
select MISDN_IPAC
- select ISDN_HDLC
- select ISDN_I4L
+ select MISDN_HDLC
help
Enable support for Traverse Technologies NETJet PCI cards.
+config MISDN_HDLC
+ tristate
+ select CRC_CCITT
+ select BITREVERSE
config MISDN_IPAC
tristate
diff --git a/drivers/isdn/hardware/mISDN/Makefile b/drivers/isdn/hardware/mISDN/Makefile
index 422f9fd8ab9a..3f50f8c4753f 100644
--- a/drivers/isdn/hardware/mISDN/Makefile
+++ b/drivers/isdn/hardware/mISDN/Makefile
@@ -15,3 +15,5 @@ obj-$(CONFIG_MISDN_NETJET) += netjet.o
# chip modules
obj-$(CONFIG_MISDN_IPAC) += mISDNipac.o
obj-$(CONFIG_MISDN_ISAR) += mISDNisar.o
+
+obj-$(CONFIG_MISDN_HDLC) += isdnhdlc.o
diff --git a/drivers/isdn/i4l/isdnhdlc.c b/drivers/isdn/hardware/mISDN/isdnhdlc.c
index 382a6b24e6a3..9fea16ed3dd8 100644
--- a/drivers/isdn/i4l/isdnhdlc.c
+++ b/drivers/isdn/hardware/mISDN/isdnhdlc.c
@@ -12,8 +12,8 @@
#include <linux/module.h>
#include <linux/init.h>
#include <linux/crc-ccitt.h>
-#include <linux/isdn/hdlc.h>
#include <linux/bitrev.h>
+#include "isdnhdlc.h"
/*-------------------------------------------------------------------*/
diff --git a/drivers/isdn/hardware/mISDN/isdnhdlc.h b/drivers/isdn/hardware/mISDN/isdnhdlc.h
new file mode 100644
index 000000000000..fe2c1279c139
--- /dev/null
+++ b/drivers/isdn/hardware/mISDN/isdnhdlc.h
@@ -0,0 +1,69 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * hdlc.h -- General purpose ISDN HDLC decoder.
+ *
+ * Implementation of a HDLC decoder/encoder in software.
+ * Necessary because some ISDN devices don't have HDLC
+ * controllers.
+ *
+ * Copyright (C)
+ * 2009 Karsten Keil <keil@b1-systems.de>
+ * 2002 Wolfgang Mües <wolfgang@iksw-muees.de>
+ * 2001 Frode Isaksen <fisaksen@bewan.com>
+ * 2001 Kai Germaschewski <kai.germaschewski@gmx.de>
+ */
+
+#ifndef __ISDNHDLC_H__
+#define __ISDNHDLC_H__
+
+struct isdnhdlc_vars {
+ int bit_shift;
+ int hdlc_bits1;
+ int data_bits;
+ int ffbit_shift; /* encoding only */
+ int state;
+ int dstpos;
+
+ u16 crc;
+
+ u8 cbin;
+ u8 shift_reg;
+ u8 ffvalue;
+
+ /* set if transferring data */
+ u32 data_received:1;
+ /* set if D channel (send idle instead of flags) */
+ u32 dchannel:1;
+ /* set if 56K adaptation */
+ u32 do_adapt56:1;
+ /* set if in closing phase (need to send CRC + flag) */
+ u32 do_closing:1;
+ /* set if data is bitreverse */
+ u32 do_bitreverse:1;
+};
+
+/* Feature Flags */
+#define HDLC_56KBIT 0x01
+#define HDLC_DCHANNEL 0x02
+#define HDLC_BITREVERSE 0x04
+
+/*
+ The return value from isdnhdlc_decode is
+ the frame length, 0 if no complete frame was decoded,
+ or a negative error number
+*/
+#define HDLC_FRAMING_ERROR 1
+#define HDLC_CRC_ERROR 2
+#define HDLC_LENGTH_ERROR 3
+
+extern void isdnhdlc_rcv_init(struct isdnhdlc_vars *hdlc, u32 features);
+
+extern int isdnhdlc_decode(struct isdnhdlc_vars *hdlc, const u8 *src,
+ int slen, int *count, u8 *dst, int dsize);
+
+extern void isdnhdlc_out_init(struct isdnhdlc_vars *hdlc, u32 features);
+
+extern int isdnhdlc_encode(struct isdnhdlc_vars *hdlc, const u8 *src,
+ u16 slen, int *count, u8 *dst, int dsize);
+
+#endif /* __ISDNHDLC_H__ */
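
Since the header above is new under mISDN, a minimal usage sketch of the declared decoder may help; it is illustrative only and not part of the patch. The example function, the buffer sizes and the origin of the raw line data are assumptions; the init/decode calls, feature flags and error codes are exactly those declared above, with *count taken as the number of source bytes consumed, as the netjet caller uses it.

#include <linux/types.h>
#include "isdnhdlc.h"

/* Illustrative only: push a chunk of raw line data through the decoder. */
static void example_hdlc_rx(const u8 *raw, int rawlen)
{
	struct isdnhdlc_vars hdlc;
	u8 frame[2048];
	int used, len;

	isdnhdlc_rcv_init(&hdlc, 0);	/* no 56K/D-channel/bitreverse features */

	while (rawlen > 0) {
		len = isdnhdlc_decode(&hdlc, raw, rawlen, &used,
				      frame, sizeof(frame));
		raw += used;		/* 'used' source bytes were consumed */
		rawlen -= used;

		if (len > 0) {
			/* a complete HDLC frame of 'len' bytes is in frame[] */
		} else if (len < 0) {
			/* -HDLC_FRAMING_ERROR, -HDLC_CRC_ERROR or
			 * -HDLC_LENGTH_ERROR */
		}
		/* len == 0: no complete frame yet, feed more data */
	}
}
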
diff --git a/drivers/isdn/hardware/mISDN/netjet.c b/drivers/isdn/hardware/mISDN/netjet.c
index 5c9e38ba52ea..4e30affd1a7c 100644
--- a/drivers/isdn/hardware/mISDN/netjet.c
+++ b/drivers/isdn/hardware/mISDN/netjet.c
@@ -16,7 +16,7 @@
#include "ipac.h"
#include "iohelper.h"
#include "netjet.h"
-#include <linux/isdn/hdlc.h>
+#include "isdnhdlc.h"
#define NETJET_REV "2.0"
diff --git a/drivers/isdn/hisax/Kconfig b/drivers/isdn/hisax/Kconfig
deleted file mode 100644
index 43d98ccf5ff6..000000000000
--- a/drivers/isdn/hisax/Kconfig
+++ /dev/null
@@ -1,423 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0-only
-
-menu "Passive cards"
-
-config ISDN_DRV_HISAX
- tristate "HiSax SiemensChipSet driver support"
- select CRC_CCITT
- ---help---
- This is a driver supporting the Siemens chipset on various
- ISDN-cards (like AVM A1, Elsa ISDN cards, Teles S0-16.0, Teles
- S0-16.3, Teles S0-8, Teles/Creatix PnP, ITK micro ix1 and many
- compatibles).
-
- HiSax is just the name of this driver, not the name of any hardware.
-
- If you have a card with such a chipset, you should say Y here and
- also to the configuration option of the driver for your particular
- card, below.
-
-if ISDN_DRV_HISAX
-
-comment "D-channel protocol features"
-
-config HISAX_EURO
- bool "HiSax Support for EURO/DSS1"
- help
- Say Y or N according to the D-channel protocol which your local
- telephone service company provides.
-
- The call control protocol E-DSS1 is used in most European countries.
- If unsure, say Y.
-
-config DE_AOC
- bool "Support for german chargeinfo"
- depends on HISAX_EURO
- help
-	  If you want the HiSax hardware driver to send messages to the
- upper level of the isdn code on each AOCD (Advice Of Charge, During
- the call -- transmission of the fee information during a call) and
- on each AOCE (Advice Of Charge, at the End of the call --
- transmission of fee information at the end of the call), say Y here.
- This works only in Germany.
-
-config HISAX_NO_SENDCOMPLETE
- bool "Disable sending complete"
- depends on HISAX_EURO
- help
- If you have trouble with some ugly exchanges or you live in
-	  Australia, select this option.
-
-config HISAX_NO_LLC
- bool "Disable sending low layer compatibility"
- depends on HISAX_EURO
- help
- If you have trouble with some ugly exchanges try to select this
- option.
-
-config HISAX_NO_KEYPAD
- bool "Disable keypad protocol option"
- depends on HISAX_EURO
- help
- If you like to send special dial strings including * or # without
- using the keypad protocol, select this option.
-
-config HISAX_1TR6
- bool "HiSax Support for german 1TR6"
- help
- Say Y or N according to the D-channel protocol which your local
- telephone service company provides.
-
- 1TR6 is an old call control protocol which was used in Germany
- before E-DSS1 was established. Nowadays, all new lines in Germany
- use E-DSS1.
-
-config HISAX_NI1
- bool "HiSax Support for US NI1"
- help
-	  Enable this if you would like to use ISDN in the US on a NI1 basic rate
- interface.
-
-config HISAX_MAX_CARDS
- int "Maximum number of cards supported by HiSax"
- default "8"
- help
- This option allows you to specify the maximum number of cards which
- the HiSax driver will be able to handle.
-
-comment "HiSax supported cards"
-
-config HISAX_16_0
- bool "Teles 16.0/8.0"
- depends on ISA
- help
- This enables HiSax support for the Teles ISDN-cards S0-16.0, S0-8
- and many compatibles.
-
- See <file:Documentation/isdn/README.HiSax> on how to configure it
- using the different cards, a different D-channel protocol, or
- non-standard IRQ/port/shmem settings.
-
-config HISAX_16_3
- bool "Teles 16.3 or PNP or PCMCIA"
- help
- This enables HiSax support for the Teles ISDN-cards S0-16.3 the
- Teles/Creatix PnP and the Teles PCMCIA.
-
- See <file:Documentation/isdn/README.HiSax> on how to configure it
- using the different cards, a different D-channel protocol, or
- non-standard IRQ/port settings.
-
-config HISAX_TELESPCI
- bool "Teles PCI"
- depends on PCI && (BROKEN || !(SPARC || PPC || PARISC || M68K || (MIPS && !CPU_LITTLE_ENDIAN) || (XTENSA && !CPU_LITTLE_ENDIAN)))
- help
- This enables HiSax support for the Teles PCI.
- See <file:Documentation/isdn/README.HiSax> on how to configure it.
-
-config HISAX_S0BOX
- bool "Teles S0Box"
- help
- This enables HiSax support for the Teles/Creatix parallel port
- S0BOX. See <file:Documentation/isdn/README.HiSax> on how to
- configure it.
-
-config HISAX_AVM_A1
- bool "AVM A1 (Fritz)"
- depends on ISA
- help
- This enables HiSax support for the AVM A1 (aka "Fritz").
-
- See <file:Documentation/isdn/README.HiSax> on how to configure it
- using the different cards, a different D-channel protocol, or
- non-standard IRQ/port settings.
-
-config HISAX_FRITZPCI
- bool "AVM PnP/PCI (Fritz!PnP/PCI)"
- depends on BROKEN || !PPC64
- help
- This enables HiSax support for the AVM "Fritz!PnP" and "Fritz!PCI".
- See <file:Documentation/isdn/README.HiSax> on how to configure it.
-
-config HISAX_AVM_A1_PCMCIA
- bool "AVM A1 PCMCIA (Fritz)"
- help
-	  This enables HiSax support for the AVM A1 ("Fritz!PCMCIA").
- See <file:Documentation/isdn/README.HiSax> on how to configure it.
-
-config HISAX_ELSA
- bool "Elsa cards"
- help
-	  This enables HiSax support for the Elsa MicroLink ISA cards, for the
- Elsa Quickstep series cards and Elsa PCMCIA.
-
- See <file:Documentation/isdn/README.HiSax> on how to configure it
- using the different cards, a different D-channel protocol, or
- non-standard IRQ/port settings.
-
-config HISAX_IX1MICROR2
- bool "ITK ix1-micro Revision 2"
- depends on ISA
- help
- This enables HiSax support for the ITK ix1-micro Revision 2 card.
-
- See <file:Documentation/isdn/README.HiSax> on how to configure it
- using the different cards, a different D-channel protocol, or
- non-standard IRQ/port settings.
-
-config HISAX_DIEHLDIVA
- bool "Eicon.Diehl Diva cards"
- help
-	  This enables HiSax support for the non-PRO versions of the
-	  Eicon.Diehl Diva passive ISDN cards.
-
- See <file:Documentation/isdn/README.HiSax> on how to configure it
- using the different cards, a different D-channel protocol, or
- non-standard IRQ/port settings.
-
-config HISAX_ASUSCOM
- bool "ASUSCOM ISA cards"
- depends on ISA
- help
-	  This enables HiSax support for the AsusCom passive ISDN ISA cards
-	  and their OEM versions.
-
- See <file:Documentation/isdn/README.HiSax> on how to configure it
- using the different cards, a different D-channel protocol, or
- non-standard IRQ/port settings.
-
-config HISAX_TELEINT
- bool "TELEINT cards"
- depends on ISA
- help
-	  This enables HiSax support for the TELEINT SA1 semiactive ISDN card.
-
- See <file:Documentation/isdn/README.HiSax> on how to configure it
- using the different cards, a different D-channel protocol, or
- non-standard IRQ/port settings.
-
-config HISAX_HFCS
- bool "HFC-S based cards"
- depends on ISA
- help
- This enables HiSax support for the HFC-S 2BDS0 based cards, like
-	  the Teles 16.3c.
-
- See <file:Documentation/isdn/README.HiSax> on how to configure it
- using the different cards, a different D-channel protocol, or
- non-standard IRQ/port settings.
-
-config HISAX_SEDLBAUER
- bool "Sedlbauer cards"
- help
- This enables HiSax support for the Sedlbauer passive ISDN cards.
-
- See <file:Documentation/isdn/README.HiSax> on how to configure it
- using the different cards, a different D-channel protocol, or
- non-standard IRQ/port settings.
-
-config HISAX_SPORTSTER
- bool "USR Sportster internal TA"
- depends on ISA
- help
- This enables HiSax support for the USR Sportster internal TA card.
-
- See <file:Documentation/isdn/README.HiSax> on how to configure it
- using a different D-channel protocol, or non-standard IRQ/port
- settings.
-
-config HISAX_MIC
- bool "MIC card"
- depends on ISA
- help
- This enables HiSax support for the ITH MIC card.
-
- See <file:Documentation/isdn/README.HiSax> on how to configure it
- using a different D-channel protocol, or non-standard IRQ/port
- settings.
-
-config HISAX_NETJET
- bool "NETjet card"
- depends on PCI && (BROKEN || !(PPC || PARISC || M68K || (MIPS && !CPU_LITTLE_ENDIAN) || (XTENSA && !CPU_LITTLE_ENDIAN) || MICROBLAZE))
- depends on VIRT_TO_BUS
- help
- This enables HiSax support for the NetJet from Traverse
- Technologies.
-
- See <file:Documentation/isdn/README.HiSax> on how to configure it
- using a different D-channel protocol, or non-standard IRQ/port
- settings.
-
-config HISAX_NETJET_U
- bool "NETspider U card"
- depends on PCI && (BROKEN || !(PPC || PARISC || M68K || (MIPS && !CPU_LITTLE_ENDIAN) || (XTENSA && !CPU_LITTLE_ENDIAN) || MICROBLAZE))
- depends on VIRT_TO_BUS
- help
- This enables HiSax support for the Netspider U interface ISDN card
- from Traverse Technologies.
- See <file:Documentation/isdn/README.HiSax> on how to configure it
- using a different D-channel protocol, or non-standard IRQ/port
- settings.
-
-config HISAX_NICCY
- bool "Niccy PnP/PCI card"
- help
- This enables HiSax support for the Dr. Neuhaus Niccy PnP or PCI.
-
- See <file:Documentation/isdn/README.HiSax> on how to configure it
- using a different D-channel protocol, or non-standard IRQ/port
- settings.
-
-config HISAX_ISURF
- bool "Siemens I-Surf card"
- depends on ISA
- help
- This enables HiSax support for the Siemens I-Talk/I-Surf card with
- ISAR chip.
- See <file:Documentation/isdn/README.HiSax> on how to configure it
- using a different D-channel protocol, or non-standard IRQ/port
- settings.
-
-config HISAX_HSTSAPHIR
- bool "HST Saphir card"
- depends on ISA
- help
- This enables HiSax support for the HST Saphir card.
-
- See <file:Documentation/isdn/README.HiSax> on how to configure it
- using a different D-channel protocol, or non-standard IRQ/port
- settings.
-
-config HISAX_BKM_A4T
- bool "Telekom A4T card"
- depends on PCI
- help
- This enables HiSax support for the Telekom A4T card.
-
- See <file:Documentation/isdn/README.HiSax> on how to configure it
- using a different D-channel protocol, or non-standard IRQ/port
- settings.
-
-config HISAX_SCT_QUADRO
- bool "Scitel Quadro card"
- depends on PCI
- help
- This enables HiSax support for the Scitel Quadro card.
-
- See <file:Documentation/isdn/README.HiSax> on how to configure it
- using a different D-channel protocol, or non-standard IRQ/port
- settings.
-
-config HISAX_GAZEL
- bool "Gazel cards"
- help
- This enables HiSax support for the Gazel cards.
-
- See <file:Documentation/isdn/README.HiSax> on how to configure it
- using a different D-channel protocol, or non-standard IRQ/port
- settings.
-
-config HISAX_HFC_PCI
- bool "HFC PCI-Bus cards"
- depends on PCI && (BROKEN || !(SPARC || PPC || PARISC || M68K || (MIPS && !CPU_LITTLE_ENDIAN) || (XTENSA && !CPU_LITTLE_ENDIAN)))
- help
- This enables HiSax support for the HFC-S PCI 2BDS0 based cards.
-
- For more information see under
- <file:Documentation/isdn/README.hfc-pci>.
-
-config HISAX_W6692
- bool "Winbond W6692 based cards"
- depends on PCI
- help
- This enables HiSax support for Winbond W6692 based PCI ISDN cards.
-
- See <file:Documentation/isdn/README.HiSax> on how to configure it
- using a different D-channel protocol, or non-standard IRQ/port
- settings.
-
-config HISAX_HFC_SX
- bool "HFC-S+, HFC-SP, HFC-PCMCIA cards"
- help
- This enables HiSax support for the HFC-S+, HFC-SP and HFC-PCMCIA
- cards. This code is not finished yet.
-
-config HISAX_ENTERNOW_PCI
- bool "Formula-n enter:now PCI card"
- depends on HISAX_NETJET && PCI && (BROKEN || !(SPARC || PPC || PARISC || M68K || (MIPS && !CPU_LITTLE_ENDIAN) || (XTENSA && !CPU_LITTLE_ENDIAN)))
- help
- This enables HiSax support for the Formula-n enter:now PCI
- ISDN card.
-
-config HISAX_DEBUG
- bool "HiSax debugging"
- help
- This enables debugging code in the new-style HiSax drivers, i.e.
- the ST5481 USB driver currently.
- If in doubt, say yes.
-
-comment "HiSax PCMCIA card service modules"
-
-config HISAX_SEDLBAUER_CS
- tristate "Sedlbauer PCMCIA cards"
- depends on PCMCIA && HISAX_SEDLBAUER
- help
- This enables the PCMCIA client driver for the Sedlbauer Speed Star
- and Speed Star II cards.
-
-config HISAX_ELSA_CS
- tristate "ELSA PCMCIA MicroLink cards"
- depends on PCMCIA && HISAX_ELSA
- help
- This enables the PCMCIA client driver for the Elsa PCMCIA MicroLink
- card.
-
-config HISAX_AVM_A1_CS
- tristate "AVM A1 PCMCIA cards"
- depends on PCMCIA && ISDN_DRV_HISAX
- help
- This enables the PCMCIA client driver for the AVM A1 / Fritz!Card
- PCMCIA cards.
-
-config HISAX_TELES_CS
- tristate "TELES PCMCIA cards"
- depends on PCMCIA && HISAX_16_3
- help
- This enables the PCMCIA client driver for the Teles PCMCIA cards.
-
-comment "HiSax sub driver modules"
-
-config HISAX_ST5481
- tristate "ST5481 USB ISDN modem"
- depends on USB
- select ISDN_HDLC
- select CRC_CCITT
- select BITREVERSE
- help
- This enables the driver for ST5481 based USB ISDN adapters,
- e.g. the BeWan Gazel 128 USB
-
-config HISAX_HFCUSB
- tristate "HFC USB based ISDN modems"
- depends on USB
- help
- This enables the driver for HFC USB based ISDN modems.
-
-config HISAX_HFC4S8S
- tristate "HFC-4S/8S based ISDN cards"
- help
- This enables the driver for HFC-4S/8S based ISDN cards.
-
-config HISAX_FRITZ_PCIPNP
- tristate "AVM Fritz!Card PCI/PCIv2/PnP support"
- depends on PCI
- help
- This enables the driver for the AVM Fritz!Card PCI,
- Fritz!Card PCI v2 and Fritz!Card PnP.
- (the latter also needs you to select "ISA Plug and Play support"
- from the menu "Plug and Play configuration")
-
-endif
-
-endmenu
-
diff --git a/drivers/isdn/hisax/Makefile b/drivers/isdn/hisax/Makefile
deleted file mode 100644
index 3eca9d23f1c2..000000000000
--- a/drivers/isdn/hisax/Makefile
+++ /dev/null
@@ -1,60 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0
-# Makefile for the hisax ISDN device driver
-
-# The target object and module list name.
-
-# Define maximum number of cards
-
-ccflags-y := -DHISAX_MAX_CARDS=$(CONFIG_HISAX_MAX_CARDS)
-
-obj-$(CONFIG_ISDN_DRV_HISAX) += hisax.o
-obj-$(CONFIG_HISAX_SEDLBAUER_CS) += sedlbauer_cs.o
-obj-$(CONFIG_HISAX_ELSA_CS) += elsa_cs.o
-obj-$(CONFIG_HISAX_AVM_A1_CS) += avma1_cs.o
-obj-$(CONFIG_HISAX_TELES_CS) += teles_cs.o
-obj-$(CONFIG_HISAX_ST5481) += hisax_st5481.o
-obj-$(CONFIG_HISAX_HFCUSB) += hfc_usb.o
-obj-$(CONFIG_HISAX_HFC4S8S) += hfc4s8s_l1.o
-obj-$(CONFIG_HISAX_FRITZ_PCIPNP) += hisax_isac.o hisax_fcpcipnp.o
-
-# Multipart objects.
-
-hisax_st5481-y := st5481_init.o st5481_usb.o st5481_d.o \
- st5481_b.o
-
-hisax-y := config.o isdnl1.o tei.o isdnl2.o isdnl3.o \
- lmgr.o q931.o callc.o fsm.o
-hisax-$(CONFIG_HISAX_EURO) += l3dss1.o
-hisax-$(CONFIG_HISAX_NI1) += l3ni1.o
-hisax-$(CONFIG_HISAX_1TR6) += l3_1tr6.o
-
-hisax-$(CONFIG_HISAX_16_0) += teles0.o isac.o arcofi.o hscx.o
-hisax-$(CONFIG_HISAX_16_3) += teles3.o isac.o arcofi.o hscx.o
-hisax-$(CONFIG_HISAX_TELESPCI) += telespci.o isac.o arcofi.o hscx.o
-hisax-$(CONFIG_HISAX_S0BOX) += s0box.o isac.o arcofi.o hscx.o
-hisax-$(CONFIG_HISAX_AVM_A1) += avm_a1.o isac.o arcofi.o hscx.o
-hisax-$(CONFIG_HISAX_AVM_A1_PCMCIA) += avm_a1p.o isac.o arcofi.o hscx.o
-hisax-$(CONFIG_HISAX_FRITZPCI) += avm_pci.o isac.o arcofi.o
-hisax-$(CONFIG_HISAX_ELSA) += elsa.o isac.o arcofi.o hscx.o
-hisax-$(CONFIG_HISAX_IX1MICROR2) += ix1_micro.o isac.o arcofi.o hscx.o
-hisax-$(CONFIG_HISAX_DIEHLDIVA) += diva.o isac.o arcofi.o hscx.o ipacx.o
-hisax-$(CONFIG_HISAX_ASUSCOM) += asuscom.o isac.o arcofi.o hscx.o
-hisax-$(CONFIG_HISAX_TELEINT) += teleint.o isac.o arcofi.o hfc_2bs0.o
-hisax-$(CONFIG_HISAX_SEDLBAUER) += sedlbauer.o isac.o arcofi.o hscx.o \
- isar.o
-hisax-$(CONFIG_HISAX_SPORTSTER) += sportster.o isac.o arcofi.o hscx.o
-hisax-$(CONFIG_HISAX_MIC) += mic.o isac.o arcofi.o hscx.o
-hisax-$(CONFIG_HISAX_NETJET) += nj_s.o netjet.o isac.o arcofi.o
-hisax-$(CONFIG_HISAX_NETJET_U) += nj_u.o netjet.o icc.o
-hisax-$(CONFIG_HISAX_HFCS) += hfcscard.o hfc_2bds0.o
-hisax-$(CONFIG_HISAX_HFC_PCI) += hfc_pci.o
-hisax-$(CONFIG_HISAX_HFC_SX) += hfc_sx.o
-hisax-$(CONFIG_HISAX_NICCY) += niccy.o isac.o arcofi.o hscx.o
-hisax-$(CONFIG_HISAX_ISURF) += isurf.o isac.o arcofi.o isar.o
-hisax-$(CONFIG_HISAX_HSTSAPHIR) += saphir.o isac.o arcofi.o hscx.o
-hisax-$(CONFIG_HISAX_BKM_A4T) += bkm_a4t.o isac.o arcofi.o jade.o
-hisax-$(CONFIG_HISAX_SCT_QUADRO) += bkm_a8.o isac.o arcofi.o hscx.o
-hisax-$(CONFIG_HISAX_GAZEL) += gazel.o isac.o arcofi.o hscx.o
-hisax-$(CONFIG_HISAX_W6692) += w6692.o
-hisax-$(CONFIG_HISAX_ENTERNOW_PCI) += enternow_pci.o amd7930_fn.o
-
diff --git a/drivers/isdn/hisax/amd7930_fn.c b/drivers/isdn/hisax/amd7930_fn.c
deleted file mode 100644
index 6c336366128c..000000000000
--- a/drivers/isdn/hisax/amd7930_fn.c
+++ /dev/null
@@ -1,794 +0,0 @@
-/* gerdes_amd7930.c,v 0.99 2001/10/02
- *
- * gerdes_amd7930.c Amd 79C30A and 79C32A specific routines
- * (based on HiSax driver by Karsten Keil)
- *
- * Author Christoph Ersfeld <info@formula-n.de>
- * Formula-n Europe AG (www.formula-n.com)
- * previously Gerdes AG
- *
- *
- * This file is (c) under GNU PUBLIC LICENSE
- *
- *
- * Notes:
- * Version 0.99 is the first release of this driver and there are
- * certainly a few bugs.
- *
- * Please don't report any malfunction to me without sending
- * (compressed) debug-logs.
- * It would be nearly impossible to retrace it.
- *
- * Log D-channel-processing as follows:
- *
- *	1. Load hisax with card-specific parameters, this example is for
- * Formula-n enter:now ISDN PCI and compatible
- * (f.e. Gerdes Power ISDN PCI)
- *
- * modprobe hisax type=41 protocol=2 id=gerdes
- *
- *	if you choose another value for id, you need to modify the
- * code below, too.
- *
- * 2. set debug-level
- *
- * hisaxctrl gerdes 1 0x3ff
- * hisaxctrl gerdes 11 0x4f
- * cat /dev/isdnctrl >> ~/log &
- *
- *	Please also take a look at /var/log/messages if there is
- *	anything important concerning HiSax.
- *
- *
- * Credits:
- *	Programming the driver for the Formula-n enter:now ISDN PCI and
- *	this driver for the Amd 7930 D-channel controller it uses
- *	was sponsored by Formula-n Europe AG.
- * Thanks to Karsten Keil and Petr Novak, who gave me support in
- * Hisax-specific questions.
- *	I want to say special thanks to Carl-Friedrich Braun, who had to
- *	answer a lot of questions about ISDN in general and about handling
- *	of the Amd chip.
- *
- */
-
-
-#include "hisax.h"
-#include "isdnl1.h"
-#include "isac.h"
-#include "amd7930_fn.h"
-#include <linux/interrupt.h>
-#include <linux/init.h>
-#include <linux/gfp.h>
-
-static void Amd7930_new_ph(struct IsdnCardState *cs);
-
-static WORD initAMD[] = {
- 0x0100,
-
- 0x00A5, 3, 0x01, 0x40, 0x58, // LPR, LMR1, LMR2
- 0x0086, 1, 0x0B, // DMR1 (D-Buffer TH-Interrupts on)
- 0x0087, 1, 0xFF, // DMR2
- 0x0092, 1, 0x03, // EFCR (extended mode d-channel-fifo on)
- 0x0090, 4, 0xFE, 0xFF, 0x02, 0x0F, // FRAR4, SRAR4, DMR3, DMR4 (address recognition )
- 0x0084, 2, 0x80, 0x00, // DRLR
- 0x00C0, 1, 0x47, // PPCR1
- 0x00C8, 1, 0x01, // PPCR2
-
- 0x0102,
- 0x0107,
- 0x01A1, 1,
- 0x0121, 1,
- 0x0189, 2,
-
- 0x0045, 4, 0x61, 0x72, 0x00, 0x00, // MCR1, MCR2, MCR3, MCR4
- 0x0063, 2, 0x08, 0x08, // GX
- 0x0064, 2, 0x08, 0x08, // GR
- 0x0065, 2, 0x99, 0x00, // GER
- 0x0066, 2, 0x7C, 0x8B, // STG
- 0x0067, 2, 0x00, 0x00, // FTGR1, FTGR2
- 0x0068, 2, 0x20, 0x20, // ATGR1, ATGR2
- 0x0069, 1, 0x4F, // MMR1
- 0x006A, 1, 0x00, // MMR2
- 0x006C, 1, 0x40, // MMR3
- 0x0021, 1, 0x02, // INIT
- 0x00A3, 1, 0x40, // LMR1
-
- 0xFFFF
-};
-
-
-static void /* macro wWordAMD */
-WriteWordAmd7930(struct IsdnCardState *cs, BYTE reg, WORD val)
-{
- wByteAMD(cs, 0x00, reg);
- wByteAMD(cs, 0x01, LOBYTE(val));
- wByteAMD(cs, 0x01, HIBYTE(val));
-}
-
-static WORD /* macro rWordAMD */
-ReadWordAmd7930(struct IsdnCardState *cs, BYTE reg)
-{
- WORD res;
- /* direct access register */
- if (reg < 8) {
- res = rByteAMD(cs, reg);
- res += 256 * rByteAMD(cs, reg);
- }
- /* indirect access register */
- else {
- wByteAMD(cs, 0x00, reg);
- res = rByteAMD(cs, 0x01);
- res += 256 * rByteAMD(cs, 0x01);
- }
- return (res);
-}
-
-
-static void
-Amd7930_ph_command(struct IsdnCardState *cs, u_char command, char *s)
-{
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "AMD7930: %s: ph_command 0x%02X", s, command);
-
- cs->dc.amd7930.lmr1 = command;
- wByteAMD(cs, 0xA3, command);
-}
-
-
-
-static BYTE i430States[] = {
-// to reset F3 F4 F5 F6 F7 F8 AR from
- 0x01, 0x02, 0x00, 0x00, 0x00, 0x07, 0x05, 0x00, // init
- 0x01, 0x02, 0x00, 0x00, 0x00, 0x07, 0x05, 0x00, // reset
- 0x01, 0x02, 0x00, 0x00, 0x00, 0x09, 0x05, 0x04, // F3
- 0x01, 0x02, 0x00, 0x00, 0x1B, 0x00, 0x00, 0x00, // F4
- 0x01, 0x02, 0x00, 0x00, 0x1B, 0x00, 0x00, 0x00, // F5
- 0x01, 0x03, 0x00, 0x00, 0x00, 0x06, 0x05, 0x00, // F6
- 0x11, 0x13, 0x00, 0x00, 0x1B, 0x00, 0x15, 0x00, // F7
- 0x01, 0x03, 0x00, 0x00, 0x00, 0x06, 0x00, 0x00, // F8
- 0x01, 0x03, 0x00, 0x00, 0x00, 0x09, 0x00, 0x0A}; // AR
-
-
-/* Row init - reset F3 F4 F5 F6 F7 F8 AR */
-static BYTE stateHelper[] = { 0x00, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08 };
-
-
-
-
-static void
-Amd7930_get_state(struct IsdnCardState *cs) {
- BYTE lsr = rByteAMD(cs, 0xA1);
- cs->dc.amd7930.ph_state = (lsr & 0x7) + 2;
- Amd7930_new_ph(cs);
-}
-
-
-
-static void
-Amd7930_new_ph(struct IsdnCardState *cs)
-{
- u_char index = stateHelper[cs->dc.amd7930.old_state] * 8 + stateHelper[cs->dc.amd7930.ph_state] - 1;
- u_char message = i430States[index];
-
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "AMD7930: new_ph %d, old_ph %d, message %d, index %d",
- cs->dc.amd7930.ph_state, cs->dc.amd7930.old_state, message & 0x0f, index);
-
- cs->dc.amd7930.old_state = cs->dc.amd7930.ph_state;
-
-	/* abort transmit if necessary */
- if ((message & 0xf0) && (cs->tx_skb)) {
- wByteAMD(cs, 0x21, 0xC2);
- wByteAMD(cs, 0x21, 0x02);
- }
-
- switch (message & 0x0f) {
-
- case (1):
- l1_msg(cs, HW_RESET | INDICATION, NULL);
- Amd7930_get_state(cs);
- break;
- case (2): /* init, Card starts in F3 */
- l1_msg(cs, HW_DEACTIVATE | CONFIRM, NULL);
- break;
- case (3):
- l1_msg(cs, HW_DEACTIVATE | INDICATION, NULL);
- break;
- case (4):
- l1_msg(cs, HW_POWERUP | CONFIRM, NULL);
- Amd7930_ph_command(cs, 0x50, "HW_ENABLE REQUEST");
- break;
- case (5):
- l1_msg(cs, HW_RSYNC | INDICATION, NULL);
- break;
- case (6):
- l1_msg(cs, HW_INFO4_P8 | INDICATION, NULL);
- break;
- case (7): /* init, Card starts in F7 */
- l1_msg(cs, HW_RSYNC | INDICATION, NULL);
- l1_msg(cs, HW_INFO4_P8 | INDICATION, NULL);
- break;
- case (8):
- l1_msg(cs, HW_POWERUP | CONFIRM, NULL);
- /* fall through */
- case (9):
- Amd7930_ph_command(cs, 0x40, "HW_ENABLE REQ cleared if set");
- l1_msg(cs, HW_RSYNC | INDICATION, NULL);
- l1_msg(cs, HW_INFO2 | INDICATION, NULL);
- l1_msg(cs, HW_INFO4_P8 | INDICATION, NULL);
- break;
- case (10):
- Amd7930_ph_command(cs, 0x40, "T3 expired, HW_ENABLE REQ cleared");
- cs->dc.amd7930.old_state = 3;
- break;
- case (11):
- l1_msg(cs, HW_INFO2 | INDICATION, NULL);
- break;
- default:
- break;
- }
-}
-
-
-
-static void
-Amd7930_bh(struct work_struct *work)
-{
- struct IsdnCardState *cs =
- container_of(work, struct IsdnCardState, tqueue);
- struct PStack *stptr;
-
- if (test_and_clear_bit(D_CLEARBUSY, &cs->event)) {
- if (cs->debug)
- debugl1(cs, "Amd7930: bh, D-Channel Busy cleared");
- stptr = cs->stlist;
- while (stptr != NULL) {
- stptr->l1.l1l2(stptr, PH_PAUSE | CONFIRM, NULL);
- stptr = stptr->next;
- }
- }
- if (test_and_clear_bit(D_L1STATECHANGE, &cs->event)) {
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "AMD7930: bh, D_L1STATECHANGE");
- Amd7930_new_ph(cs);
- }
-
- if (test_and_clear_bit(D_RCVBUFREADY, &cs->event)) {
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "AMD7930: bh, D_RCVBUFREADY");
- DChannel_proc_rcv(cs);
- }
-
- if (test_and_clear_bit(D_XMTBUFREADY, &cs->event)) {
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "AMD7930: bh, D_XMTBUFREADY");
- DChannel_proc_xmt(cs);
- }
-}
-
-static void
-Amd7930_empty_Dfifo(struct IsdnCardState *cs, int flag)
-{
-
- BYTE stat, der;
- BYTE *ptr;
- struct sk_buff *skb;
-
-
- if ((cs->debug & L1_DEB_ISAC) && !(cs->debug & L1_DEB_ISAC_FIFO))
- debugl1(cs, "Amd7930: empty_Dfifo");
-
-
- ptr = cs->rcvbuf + cs->rcvidx;
-
- /* AMD interrupts off */
- AmdIrqOff(cs);
-
- /* read D-Channel-Fifo*/
- stat = rByteAMD(cs, 0x07); // DSR2
-
- /* while Data in Fifo ... */
- while ((stat & 2) && ((ptr-cs->rcvbuf) < MAX_DFRAME_LEN_L1)) {
- *ptr = rByteAMD(cs, 0x04); // DCRB
- ptr++;
- stat = rByteAMD(cs, 0x07); // DSR2
- cs->rcvidx = ptr - cs->rcvbuf;
-
-		/* Packet ready? */
- if (stat & 1) {
-
- der = rWordAMD(cs, 0x03);
-
- /* no errors, packet ok */
- if (!der && !flag) {
- rWordAMD(cs, 0x89); // clear DRCR
-
- if ((cs->rcvidx) > 0) {
- if (!(skb = alloc_skb(cs->rcvidx, GFP_ATOMIC)))
- printk(KERN_WARNING "HiSax: Amd7930: empty_Dfifo, D receive out of memory!\n");
- else {
- /* Debugging */
- if (cs->debug & L1_DEB_ISAC_FIFO) {
- char *t = cs->dlog;
-
- t += sprintf(t, "Amd7930: empty_Dfifo cnt: %d |", cs->rcvidx);
- QuickHex(t, cs->rcvbuf, cs->rcvidx);
- debugl1(cs, "%s", cs->dlog);
- }
-					/* move received data into the sk_buff */
- skb_put_data(skb, cs->rcvbuf,
- cs->rcvidx);
- skb_queue_tail(&cs->rq, skb);
- }
- }
-
- }
- /* throw damaged packets away, reset receive-buffer, indicate RX */
- ptr = cs->rcvbuf;
- cs->rcvidx = 0;
- schedule_event(cs, D_RCVBUFREADY);
- }
- }
-	/* Packet too long, overflow */
- if (cs->rcvidx >= MAX_DFRAME_LEN_L1) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "AMD7930: empty_Dfifo L2-Framelength overrun");
- cs->rcvidx = 0;
- return;
- }
- /* AMD interrupts on */
- AmdIrqOn(cs);
-}
-
-
-static void
-Amd7930_fill_Dfifo(struct IsdnCardState *cs)
-{
-
- WORD dtcrr, dtcrw, len, count;
- BYTE txstat, dmr3;
- BYTE *ptr, *deb_ptr;
-
- if ((cs->debug & L1_DEB_ISAC) && !(cs->debug & L1_DEB_ISAC_FIFO))
- debugl1(cs, "Amd7930: fill_Dfifo");
-
- if ((!cs->tx_skb) || (cs->tx_skb->len <= 0))
- return;
-
- dtcrw = 0;
- if (!cs->dc.amd7930.tx_xmtlen)
- /* new Frame */
- len = dtcrw = cs->tx_skb->len;
- /* continue frame */
- else len = cs->dc.amd7930.tx_xmtlen;
-
-
- /* AMD interrupts off */
- AmdIrqOff(cs);
-
- deb_ptr = ptr = cs->tx_skb->data;
-
-	/* while free space is available in the tx-fifo and data remains in the sk_buff */
- txstat = 0x10;
- while ((txstat & 0x10) && (cs->tx_cnt < len)) {
- wByteAMD(cs, 0x04, *ptr);
- ptr++;
- cs->tx_cnt++;
- txstat = rByteAMD(cs, 0x07);
- }
- count = ptr - cs->tx_skb->data;
- skb_pull(cs->tx_skb, count);
-
-
- dtcrr = rWordAMD(cs, 0x85); // DTCR
- dmr3 = rByteAMD(cs, 0x8E);
-
- if (cs->debug & L1_DEB_ISAC) {
- debugl1(cs, "Amd7930: fill_Dfifo, DMR3: 0x%02X, DTCR read: 0x%04X write: 0x%02X 0x%02X", dmr3, dtcrr, LOBYTE(dtcrw), HIBYTE(dtcrw));
- }
-
-	/* writing dtcrw starts the transmit */
- if (!cs->dc.amd7930.tx_xmtlen) {
- wWordAMD(cs, 0x85, dtcrw);
- cs->dc.amd7930.tx_xmtlen = dtcrw;
- }
-
- if (test_and_set_bit(FLG_DBUSY_TIMER, &cs->HW_Flags)) {
- debugl1(cs, "Amd7930: fill_Dfifo dbusytimer running");
- del_timer(&cs->dbusytimer);
- }
- cs->dbusytimer.expires = jiffies + ((DBUSY_TIMER_VALUE * HZ) / 1000);
- add_timer(&cs->dbusytimer);
-
- if (cs->debug & L1_DEB_ISAC_FIFO) {
- char *t = cs->dlog;
-
- t += sprintf(t, "Amd7930: fill_Dfifo cnt: %d |", count);
- QuickHex(t, deb_ptr, count);
- debugl1(cs, "%s", cs->dlog);
- }
- /* AMD interrupts on */
- AmdIrqOn(cs);
-}
-
-
-void Amd7930_interrupt(struct IsdnCardState *cs, BYTE irflags)
-{
- BYTE dsr1, dsr2, lsr;
- WORD der;
-
- while (irflags)
- {
-
- dsr1 = rByteAMD(cs, 0x02);
- der = rWordAMD(cs, 0x03);
- dsr2 = rByteAMD(cs, 0x07);
- lsr = rByteAMD(cs, 0xA1);
-
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "Amd7930: interrupt: flags: 0x%02X, DSR1: 0x%02X, DSR2: 0x%02X, LSR: 0x%02X, DER=0x%04X", irflags, dsr1, dsr2, lsr, der);
-
- /* D error -> read DER and DSR2 bit 2 */
- if (der || (dsr2 & 4)) {
-
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "Amd7930: interrupt: D error DER=0x%04X", der);
-
- /* RX, TX abort if collision detected */
- if (der & 2) {
- wByteAMD(cs, 0x21, 0xC2);
- wByteAMD(cs, 0x21, 0x02);
- if (test_and_clear_bit(FLG_DBUSY_TIMER, &cs->HW_Flags))
- del_timer(&cs->dbusytimer);
- if (test_and_clear_bit(FLG_L1_DBUSY, &cs->HW_Flags))
- schedule_event(cs, D_CLEARBUSY);
- /* restart frame */
- if (cs->tx_skb) {
- skb_push(cs->tx_skb, cs->tx_cnt);
- cs->tx_cnt = 0;
- cs->dc.amd7930.tx_xmtlen = 0;
- Amd7930_fill_Dfifo(cs);
- } else {
- printk(KERN_WARNING "HiSax: Amd7930 D-Collision, no skb\n");
- debugl1(cs, "Amd7930: interrupt: D-Collision, no skb");
- }
- }
- /* remove damaged data from fifo */
- Amd7930_empty_Dfifo(cs, 1);
-
- if (test_and_clear_bit(FLG_DBUSY_TIMER, &cs->HW_Flags))
- del_timer(&cs->dbusytimer);
- if (test_and_clear_bit(FLG_L1_DBUSY, &cs->HW_Flags))
- schedule_event(cs, D_CLEARBUSY);
- /* restart TX-Frame */
- if (cs->tx_skb) {
- skb_push(cs->tx_skb, cs->tx_cnt);
- cs->tx_cnt = 0;
- cs->dc.amd7930.tx_xmtlen = 0;
- Amd7930_fill_Dfifo(cs);
- }
- }
-
- /* D TX FIFO empty -> fill */
- if (irflags & 1) {
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "Amd7930: interrupt: clear Timer and fill D-TX-FIFO if data");
-
- /* AMD interrupts off */
- AmdIrqOff(cs);
-
- if (test_and_clear_bit(FLG_DBUSY_TIMER, &cs->HW_Flags))
- del_timer(&cs->dbusytimer);
- if (test_and_clear_bit(FLG_L1_DBUSY, &cs->HW_Flags))
- schedule_event(cs, D_CLEARBUSY);
- if (cs->tx_skb) {
- if (cs->tx_skb->len)
- Amd7930_fill_Dfifo(cs);
- }
- /* AMD interrupts on */
- AmdIrqOn(cs);
- }
-
-
- /* D RX FIFO full or tiny packet in Fifo -> empty */
- if ((irflags & 2) || (dsr1 & 2)) {
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "Amd7930: interrupt: empty D-FIFO");
- Amd7930_empty_Dfifo(cs, 0);
- }
-
-
- /* D-Frame transmit complete */
- if (dsr1 & 64) {
- if (cs->debug & L1_DEB_ISAC) {
- debugl1(cs, "Amd7930: interrupt: transmit packet ready");
- }
- /* AMD interrupts off */
- AmdIrqOff(cs);
-
- if (test_and_clear_bit(FLG_DBUSY_TIMER, &cs->HW_Flags))
- del_timer(&cs->dbusytimer);
- if (test_and_clear_bit(FLG_L1_DBUSY, &cs->HW_Flags))
- schedule_event(cs, D_CLEARBUSY);
-
- if (cs->tx_skb) {
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "Amd7930: interrupt: TX-Packet ready, freeing skb");
- dev_kfree_skb_irq(cs->tx_skb);
- cs->tx_cnt = 0;
- cs->dc.amd7930.tx_xmtlen = 0;
- cs->tx_skb = NULL;
- }
- if ((cs->tx_skb = skb_dequeue(&cs->sq))) {
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "Amd7930: interrupt: TX-Packet ready, next packet dequeued");
- cs->tx_cnt = 0;
- cs->dc.amd7930.tx_xmtlen = 0;
- Amd7930_fill_Dfifo(cs);
- }
- else
- schedule_event(cs, D_XMTBUFREADY);
- /* AMD interrupts on */
- AmdIrqOn(cs);
- }
-
- /* LIU status interrupt -> read LSR, check statechanges */
- if (lsr & 0x38) {
- /* AMD interrupts off */
- AmdIrqOff(cs);
-
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "Amd: interrupt: LSR=0x%02X, LIU is in state %d", lsr, ((lsr & 0x7) + 2));
-
- cs->dc.amd7930.ph_state = (lsr & 0x7) + 2;
-
- schedule_event(cs, D_L1STATECHANGE);
- /* AMD interrupts on */
- AmdIrqOn(cs);
- }
-
- /* reads Interrupt-Register again. If there is a new interrupt-flag: restart handler */
- irflags = rByteAMD(cs, 0x00);
- }
-
-}
-
-static void
-Amd7930_l1hw(struct PStack *st, int pr, void *arg)
-{
- struct IsdnCardState *cs = (struct IsdnCardState *) st->l1.hardware;
- struct sk_buff *skb = arg;
- u_long flags;
-
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "Amd7930: l1hw called, pr: 0x%04X", pr);
-
- switch (pr) {
- case (PH_DATA | REQUEST):
- if (cs->debug & DEB_DLOG_HEX)
- LogFrame(cs, skb->data, skb->len);
- if (cs->debug & DEB_DLOG_VERBOSE)
- dlogframe(cs, skb, 0);
- spin_lock_irqsave(&cs->lock, flags);
- if (cs->tx_skb) {
- skb_queue_tail(&cs->sq, skb);
-#ifdef L2FRAME_DEBUG /* psa */
- if (cs->debug & L1_DEB_LAPD)
- Logl2Frame(cs, skb, "Amd7930: l1hw: PH_DATA Queued", 0);
-#endif
- } else {
- cs->tx_skb = skb;
- cs->tx_cnt = 0;
- cs->dc.amd7930.tx_xmtlen = 0;
-#ifdef L2FRAME_DEBUG /* psa */
- if (cs->debug & L1_DEB_LAPD)
- Logl2Frame(cs, skb, "Amd7930: l1hw: PH_DATA", 0);
-#endif
- Amd7930_fill_Dfifo(cs);
- }
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- case (PH_PULL | INDICATION):
- spin_lock_irqsave(&cs->lock, flags);
- if (cs->tx_skb) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "Amd7930: l1hw: l2l1 tx_skb exist this shouldn't happen");
- skb_queue_tail(&cs->sq, skb);
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- }
- if (cs->debug & DEB_DLOG_HEX)
- LogFrame(cs, skb->data, skb->len);
- if (cs->debug & DEB_DLOG_VERBOSE)
- dlogframe(cs, skb, 0);
- cs->tx_skb = skb;
- cs->tx_cnt = 0;
- cs->dc.amd7930.tx_xmtlen = 0;
-#ifdef L2FRAME_DEBUG /* psa */
- if (cs->debug & L1_DEB_LAPD)
- Logl2Frame(cs, skb, "Amd7930: l1hw: PH_DATA_PULLED", 0);
-#endif
- Amd7930_fill_Dfifo(cs);
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- case (PH_PULL | REQUEST):
-#ifdef L2FRAME_DEBUG /* psa */
- if (cs->debug & L1_DEB_LAPD)
- debugl1(cs, "Amd7930: l1hw: -> PH_REQUEST_PULL, skb: %s", (cs->tx_skb) ? "yes" : "no");
-#endif
- if (!cs->tx_skb) {
- test_and_clear_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- st->l1.l1l2(st, PH_PULL | CONFIRM, NULL);
- } else
- test_and_set_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- break;
- case (HW_RESET | REQUEST):
- spin_lock_irqsave(&cs->lock, flags);
- if (cs->dc.amd7930.ph_state == 8) {
- /* b-channels off, PH-AR cleared
- * change to F3 */
- Amd7930_ph_command(cs, 0x20, "HW_RESET REQUEST"); //LMR1 bit 5
- spin_unlock_irqrestore(&cs->lock, flags);
- } else {
- Amd7930_ph_command(cs, 0x40, "HW_RESET REQUEST");
- cs->dc.amd7930.ph_state = 2;
- spin_unlock_irqrestore(&cs->lock, flags);
- Amd7930_new_ph(cs);
- }
- break;
- case (HW_ENABLE | REQUEST):
- cs->dc.amd7930.ph_state = 9;
- Amd7930_new_ph(cs);
- break;
- case (HW_INFO3 | REQUEST):
- // automatic
- break;
- case (HW_TESTLOOP | REQUEST):
- /* not implemented yet */
- break;
- case (HW_DEACTIVATE | RESPONSE):
- skb_queue_purge(&cs->rq);
- skb_queue_purge(&cs->sq);
- if (cs->tx_skb) {
- dev_kfree_skb(cs->tx_skb);
- cs->tx_skb = NULL;
- }
- if (test_and_clear_bit(FLG_DBUSY_TIMER, &cs->HW_Flags))
- del_timer(&cs->dbusytimer);
- if (test_and_clear_bit(FLG_L1_DBUSY, &cs->HW_Flags))
- schedule_event(cs, D_CLEARBUSY);
- break;
- default:
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "Amd7930: l1hw: unknown %04x", pr);
- break;
- }
-}
-
-static void
-setstack_Amd7930(struct PStack *st, struct IsdnCardState *cs)
-{
-
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "Amd7930: setstack called");
-
- st->l1.l1hw = Amd7930_l1hw;
-}
-
-
-static void
-DC_Close_Amd7930(struct IsdnCardState *cs) {
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "Amd7930: DC_Close called");
-}
-
-
-static void
-dbusy_timer_handler(struct timer_list *t)
-{
- struct IsdnCardState *cs = from_timer(cs, t, dbusytimer);
- u_long flags;
- struct PStack *stptr;
- WORD dtcr, der;
- BYTE dsr1, dsr2;
-
-
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "Amd7930: dbusy_timer expired!");
-
- if (test_bit(FLG_DBUSY_TIMER, &cs->HW_Flags)) {
- spin_lock_irqsave(&cs->lock, flags);
- /* D Transmit Byte Count Register:
- * Counts down packet's number of Bytes, 0 if packet ready */
- dtcr = rWordAMD(cs, 0x85);
- dsr1 = rByteAMD(cs, 0x02);
- dsr2 = rByteAMD(cs, 0x07);
- der = rWordAMD(cs, 0x03);
-
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "Amd7930: dbusy_timer_handler: DSR1=0x%02X, DSR2=0x%02X, DER=0x%04X, cs->tx_skb->len=%u, tx_stat=%u, dtcr=%u, cs->tx_cnt=%u", dsr1, dsr2, der, cs->tx_skb->len, cs->dc.amd7930.tx_xmtlen, dtcr, cs->tx_cnt);
-
- if ((cs->dc.amd7930.tx_xmtlen - dtcr) < cs->tx_cnt) { /* D-Channel Busy */
- test_and_set_bit(FLG_L1_DBUSY, &cs->HW_Flags);
- stptr = cs->stlist;
- spin_unlock_irqrestore(&cs->lock, flags);
- while (stptr != NULL) {
- stptr->l1.l1l2(stptr, PH_PAUSE | INDICATION, NULL);
- stptr = stptr->next;
- }
-
- } else {
- /* discard frame; reset transceiver */
- test_and_clear_bit(FLG_DBUSY_TIMER, &cs->HW_Flags);
- if (cs->tx_skb) {
- dev_kfree_skb_any(cs->tx_skb);
- cs->tx_cnt = 0;
- cs->tx_skb = NULL;
- cs->dc.amd7930.tx_xmtlen = 0;
- } else {
- printk(KERN_WARNING "HiSax: Amd7930: D-Channel Busy no skb\n");
- debugl1(cs, "Amd7930: D-Channel Busy no skb");
-
- }
- /* Transmitter reset, abort transmit */
- wByteAMD(cs, 0x21, 0x82);
- wByteAMD(cs, 0x21, 0x02);
- spin_unlock_irqrestore(&cs->lock, flags);
- cs->irq_func(cs->irq, cs);
-
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "Amd7930: dbusy_timer_handler: Transmitter reset");
- }
- }
-}
-
-
-
-void Amd7930_init(struct IsdnCardState *cs)
-{
- WORD *ptr;
- BYTE cmd, cnt;
-
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "Amd7930: initamd called");
-
- cs->dc.amd7930.tx_xmtlen = 0;
- cs->dc.amd7930.old_state = 0;
- cs->dc.amd7930.lmr1 = 0x40;
- cs->dc.amd7930.ph_command = Amd7930_ph_command;
- cs->setstack_d = setstack_Amd7930;
- cs->DC_Close = DC_Close_Amd7930;
-
- /* AMD Initialisation */
- for (ptr = initAMD; *ptr != 0xFFFF; ) {
- cmd = LOBYTE(*ptr);
-
- /* read */
- if (*ptr++ >= 0x100) {
- if (cmd < 8)
- /* reset register */
- rByteAMD(cs, cmd);
- else {
- wByteAMD(cs, 0x00, cmd);
- for (cnt = *ptr++; cnt > 0; cnt--)
- rByteAMD(cs, 0x01);
- }
- }
- /* write */
- else if (cmd < 8)
- wByteAMD(cs, cmd, LOBYTE(*ptr++));
-
- else {
- wByteAMD(cs, 0x00, cmd);
- for (cnt = *ptr++; cnt > 0; cnt--)
- wByteAMD(cs, 0x01, LOBYTE(*ptr++));
- }
- }
-}
-
-void setup_Amd7930(struct IsdnCardState *cs)
-{
- INIT_WORK(&cs->tqueue, Amd7930_bh);
- timer_setup(&cs->dbusytimer, dbusy_timer_handler, 0);
-}
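
The initAMD[] word stream at the top of this deleted file is parsed by Amd7930_init() just above: each entry starts with a command word whose low byte is the register number, bit 8 set marks a read, registers below 8 are accessed directly, indirect accesses are followed by a count word (and, for writes, that many data words), and 0xFFFF terminates the stream. The standalone sketch below restates that layout for illustration only; it is not part of the patch and simply prints what a few copied entries mean.

#include <stdio.h>

/* Illustration only: decode a few initAMD[]-style entries. */
static void dump_amd_init(const unsigned int *ptr)
{
	unsigned int cmd, cnt;

	while (*ptr != 0xFFFF) {
		cmd = *ptr & 0xff;
		if (*ptr++ >= 0x100) {		/* read entry */
			if (cmd < 8)
				printf("read direct reg 0x%02X once\n", cmd);
			else {
				cnt = *ptr++;
				printf("read indirect reg 0x%02X, %u byte(s)\n",
				       cmd, cnt);
			}
		} else if (cmd < 8) {		/* direct write */
			printf("write 0x%02X to direct reg 0x%02X\n",
			       *ptr++ & 0xff, cmd);
		} else {			/* indirect write */
			cnt = *ptr++;
			printf("write %u byte(s) to indirect reg 0x%02X:",
			       cnt, cmd);
			while (cnt--)
				printf(" 0x%02X", *ptr++ & 0xff);
			printf("\n");
		}
	}
}

int main(void)
{
	/* first few entries, copied from the deleted initAMD[] table */
	static const unsigned int sample[] = {
		0x0100,				/* read direct reg 0 ("reset register") */
		0x00A5, 3, 0x01, 0x40, 0x58,	/* LPR, LMR1, LMR2 */
		0x0086, 1, 0x0B,		/* DMR1 */
		0x01A1, 1,			/* read 1 byte of indirect reg 0xA1 */
		0xFFFF
	};

	dump_amd_init(sample);
	return 0;
}
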
diff --git a/drivers/isdn/hisax/amd7930_fn.h b/drivers/isdn/hisax/amd7930_fn.h
deleted file mode 100644
index 1f4d80c5e5a6..000000000000
--- a/drivers/isdn/hisax/amd7930_fn.h
+++ /dev/null
@@ -1,37 +0,0 @@
-/* drivers/isdn/hisax/amd7930_fn.h
- *
- * gerdes_amd7930.h Header-file included by
- * gerdes_amd7930.c
- *
- * Author Christoph Ersfeld <info@formula-n.de>
- * Formula-n Europe AG (www.formula-n.com)
- * previously Gerdes AG
- *
- *
- * This file is (c) under GNU PUBLIC LICENSE
- */
-
-
-
-
-#define BYTE unsigned char
-#define WORD unsigned int
-#define rByteAMD(cs, reg) cs->readisac(cs, reg)
-#define wByteAMD(cs, reg, val) cs->writeisac(cs, reg, val)
-#define rWordAMD(cs, reg) ReadWordAmd7930(cs, reg)
-#define wWordAMD(cs, reg, val) WriteWordAmd7930(cs, reg, val)
-#define HIBYTE(w) ((unsigned char)((w & 0xff00) / 256))
-#define LOBYTE(w) ((unsigned char)(w & 0x00ff))
-
-#define AmdIrqOff(cs) cs->dc.amd7930.setIrqMask(cs, 0)
-#define AmdIrqOn(cs) cs->dc.amd7930.setIrqMask(cs, 1)
-
-#define AMD_CR 0x00
-#define AMD_DR 0x01
-
-
-#define DBUSY_TIMER_VALUE 80
-
-extern void Amd7930_interrupt(struct IsdnCardState *, unsigned char);
-extern void Amd7930_init(struct IsdnCardState *);
-extern void setup_Amd7930(struct IsdnCardState *);
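
The rWordAMD()/wWordAMD() macros above wrap the indirect 16-bit register access implemented by ReadWordAmd7930()/WriteWordAmd7930() in the deleted amd7930_fn.c: the register number is written to the command register (AMD_CR, offset 0x00) and the data then moves low byte first through the data register (AMD_DR, offset 0x01). Below is a small standalone model of the write side, for illustration only; io_write() and the EX_* names are stand-ins for the card-specific cs->writeisac() hook and are not part of the patch.

#include <stdint.h>

#define EX_AMD_CR 0x00
#define EX_AMD_DR 0x01

static void io_write(uint8_t offset, uint8_t val)
{
	/* stand-in for cs->writeisac(cs, offset, val) */
	(void)offset;
	(void)val;
}

static void example_write_word(uint8_t reg, uint16_t val)
{
	io_write(EX_AMD_CR, reg);		/* select indirect register */
	io_write(EX_AMD_DR, val & 0xff);	/* LOBYTE first */
	io_write(EX_AMD_DR, val >> 8);		/* then HIBYTE */
}
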
diff --git a/drivers/isdn/hisax/arcofi.c b/drivers/isdn/hisax/arcofi.c
deleted file mode 100644
index 2f784f96d439..000000000000
--- a/drivers/isdn/hisax/arcofi.c
+++ /dev/null
@@ -1,131 +0,0 @@
-/* $Id: arcofi.c,v 1.14.2.3 2004/01/13 14:31:24 keil Exp $
- *
- * Control of the ARCOFI 2165
- *
- * Author Karsten Keil
- * Copyright by Karsten Keil <keil@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#include <linux/sched.h>
-#include "hisax.h"
-#include "isdnl1.h"
-#include "isac.h"
-#include "arcofi.h"
-
-#define ARCOFI_TIMER_VALUE 20
-
-static void
-add_arcofi_timer(struct IsdnCardState *cs) {
- if (test_and_set_bit(FLG_ARCOFI_TIMER, &cs->HW_Flags)) {
- del_timer(&cs->dc.isac.arcofitimer);
- }
- cs->dc.isac.arcofitimer.expires = jiffies + ((ARCOFI_TIMER_VALUE * HZ) / 1000);
- add_timer(&cs->dc.isac.arcofitimer);
-}
-
-static void
-send_arcofi(struct IsdnCardState *cs) {
- add_arcofi_timer(cs);
- cs->dc.isac.mon_txp = 0;
- cs->dc.isac.mon_txc = cs->dc.isac.arcofi_list->len;
- memcpy(cs->dc.isac.mon_tx, cs->dc.isac.arcofi_list->msg, cs->dc.isac.mon_txc);
- switch (cs->dc.isac.arcofi_bc) {
- case 0: break;
- case 1: cs->dc.isac.mon_tx[1] |= 0x40;
- break;
- default: break;
- }
- cs->dc.isac.mocr &= 0x0f;
- cs->dc.isac.mocr |= 0xa0;
- cs->writeisac(cs, ISAC_MOCR, cs->dc.isac.mocr);
- (void) cs->readisac(cs, ISAC_MOSR);
- cs->writeisac(cs, ISAC_MOX1, cs->dc.isac.mon_tx[cs->dc.isac.mon_txp++]);
- cs->dc.isac.mocr |= 0x10;
- cs->writeisac(cs, ISAC_MOCR, cs->dc.isac.mocr);
-}
-
-int
-arcofi_fsm(struct IsdnCardState *cs, int event, void *data) {
- if (cs->debug & L1_DEB_MONITOR) {
- debugl1(cs, "arcofi state %d event %d", cs->dc.isac.arcofi_state, event);
- }
- if (event == ARCOFI_TIMEOUT) {
- cs->dc.isac.arcofi_state = ARCOFI_NOP;
- test_and_set_bit(FLG_ARCOFI_ERROR, &cs->HW_Flags);
- wake_up(&cs->dc.isac.arcofi_wait);
- return (1);
- }
- switch (cs->dc.isac.arcofi_state) {
- case ARCOFI_NOP:
- if (event == ARCOFI_START) {
- cs->dc.isac.arcofi_list = data;
- cs->dc.isac.arcofi_state = ARCOFI_TRANSMIT;
- send_arcofi(cs);
- }
- break;
- case ARCOFI_TRANSMIT:
- if (event == ARCOFI_TX_END) {
- if (cs->dc.isac.arcofi_list->receive) {
- add_arcofi_timer(cs);
- cs->dc.isac.arcofi_state = ARCOFI_RECEIVE;
- } else {
- if (cs->dc.isac.arcofi_list->next) {
- cs->dc.isac.arcofi_list =
- cs->dc.isac.arcofi_list->next;
- send_arcofi(cs);
- } else {
- if (test_and_clear_bit(FLG_ARCOFI_TIMER, &cs->HW_Flags)) {
- del_timer(&cs->dc.isac.arcofitimer);
- }
- cs->dc.isac.arcofi_state = ARCOFI_NOP;
- wake_up(&cs->dc.isac.arcofi_wait);
- }
- }
- }
- break;
- case ARCOFI_RECEIVE:
- if (event == ARCOFI_RX_END) {
- if (cs->dc.isac.arcofi_list->next) {
- cs->dc.isac.arcofi_list =
- cs->dc.isac.arcofi_list->next;
- cs->dc.isac.arcofi_state = ARCOFI_TRANSMIT;
- send_arcofi(cs);
- } else {
- if (test_and_clear_bit(FLG_ARCOFI_TIMER, &cs->HW_Flags)) {
- del_timer(&cs->dc.isac.arcofitimer);
- }
- cs->dc.isac.arcofi_state = ARCOFI_NOP;
- wake_up(&cs->dc.isac.arcofi_wait);
- }
- }
- break;
- default:
- debugl1(cs, "Arcofi unknown state %x", cs->dc.isac.arcofi_state);
- return (2);
- }
- return (0);
-}
-
-static void
-arcofi_timer(struct timer_list *t) {
- struct IsdnCardState *cs = from_timer(cs, t, dc.isac.arcofitimer);
- arcofi_fsm(cs, ARCOFI_TIMEOUT, NULL);
-}
-
-void
-clear_arcofi(struct IsdnCardState *cs) {
- if (test_and_clear_bit(FLG_ARCOFI_TIMER, &cs->HW_Flags)) {
- del_timer(&cs->dc.isac.arcofitimer);
- }
-}
-
-void
-init_arcofi(struct IsdnCardState *cs) {
- timer_setup(&cs->dc.isac.arcofitimer, arcofi_timer, 0);
- init_waitqueue_head(&cs->dc.isac.arcofi_wait);
- test_and_set_bit(HW_ARCOFI, &cs->HW_Flags);
-}
diff --git a/drivers/isdn/hisax/arcofi.h b/drivers/isdn/hisax/arcofi.h
deleted file mode 100644
index b9c77529fabf..000000000000
--- a/drivers/isdn/hisax/arcofi.h
+++ /dev/null
@@ -1,27 +0,0 @@
-/* $Id: arcofi.h,v 1.6.6.2 2001/09/23 22:24:46 kai Exp $
- *
- * Control of the ARCOFI 2165
- *
- * Author Karsten Keil
- * Copyright by Karsten Keil <keil@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#define ARCOFI_USE 1
-
-/* states */
-#define ARCOFI_NOP 0
-#define ARCOFI_TRANSMIT 1
-#define ARCOFI_RECEIVE 2
-/* events */
-#define ARCOFI_START 1
-#define ARCOFI_TX_END 2
-#define ARCOFI_RX_END 3
-#define ARCOFI_TIMEOUT 4
-
-extern int arcofi_fsm(struct IsdnCardState *cs, int event, void *data);
-extern void init_arcofi(struct IsdnCardState *cs);
-extern void clear_arcofi(struct IsdnCardState *cs);
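
arcofi.c above implements a small monitor-channel FSM: ARCOFI_START latches a message list and transmits, ARCOFI_TX_END either switches to RECEIVE or advances to the next message, ARCOFI_RX_END advances or finishes, and a timeout sets FLG_ARCOFI_ERROR; completion and errors are signalled through cs->dc.isac.arcofi_wait. The sketch below shows how a caller might drive one message exchange. The wait logic, error codes and function name are assumptions for illustration (locking is omitted); only arcofi_fsm(), the ARCOFI_* constants and the wait-queue/flag usage come from the code above.

#include <linux/wait.h>
#include <linux/bitops.h>
#include <linux/errno.h>
#include "hisax.h"
#include "arcofi.h"

/* Hypothetical caller, illustration only. */
static int example_run_arcofi(struct IsdnCardState *cs, void *msg_list)
{
	arcofi_fsm(cs, ARCOFI_START, msg_list);

	/* arcofi_fsm() wakes arcofi_wait when it returns to ARCOFI_NOP,
	 * either after the last message or on timeout */
	if (wait_event_interruptible(cs->dc.isac.arcofi_wait,
				     cs->dc.isac.arcofi_state == ARCOFI_NOP))
		return -ERESTARTSYS;	/* interrupted by a signal */

	if (test_and_clear_bit(FLG_ARCOFI_ERROR, &cs->HW_Flags))
		return -ETIMEDOUT;	/* monitor exchange timed out */
	return 0;
}
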
diff --git a/drivers/isdn/hisax/asuscom.c b/drivers/isdn/hisax/asuscom.c
deleted file mode 100644
index 74c871495e81..000000000000
--- a/drivers/isdn/hisax/asuscom.c
+++ /dev/null
@@ -1,423 +0,0 @@
-/* $Id: asuscom.c,v 1.14.2.4 2004/01/13 23:48:39 keil Exp $
- *
- * low level stuff for ASUSCOM NETWORK INC. ISDNLink cards
- *
- * Author Karsten Keil
- * Copyright by Karsten Keil <keil@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- * Thanks to ASUSCOM NETWORK INC. Taiwan and Dynalink NL for information
- *
- */
-
-#include <linux/init.h>
-#include <linux/isapnp.h>
-#include "hisax.h"
-#include "isac.h"
-#include "ipac.h"
-#include "hscx.h"
-#include "isdnl1.h"
-
-static const char *Asuscom_revision = "$Revision: 1.14.2.4 $";
-
-#define byteout(addr, val) outb(val, addr)
-#define bytein(addr) inb(addr)
-
-#define ASUS_ISAC 0
-#define ASUS_HSCX 1
-#define ASUS_ADR 2
-#define ASUS_CTRL_U7 3
-#define ASUS_CTRL_POTS 5
-
-#define ASUS_IPAC_ALE 0
-#define ASUS_IPAC_DATA 1
-
-#define ASUS_ISACHSCX 1
-#define ASUS_IPAC 2
-
-/* CARD_ADR (Write) */
-#define ASUS_RESET 0x80 /* bit 7: reset line */
-
-static inline u_char
-readreg(unsigned int ale, unsigned int adr, u_char off)
-{
- register u_char ret;
-
- byteout(ale, off);
- ret = bytein(adr);
- return (ret);
-}
-
-static inline void
-readfifo(unsigned int ale, unsigned int adr, u_char off, u_char *data, int size)
-{
- byteout(ale, off);
- insb(adr, data, size);
-}
-
-
-static inline void
-writereg(unsigned int ale, unsigned int adr, u_char off, u_char data)
-{
- byteout(ale, off);
- byteout(adr, data);
-}
-
-static inline void
-writefifo(unsigned int ale, unsigned int adr, u_char off, u_char *data, int size)
-{
- byteout(ale, off);
- outsb(adr, data, size);
-}
-
-/* Interface functions */
-
-static u_char
-ReadISAC(struct IsdnCardState *cs, u_char offset)
-{
- return (readreg(cs->hw.asus.adr, cs->hw.asus.isac, offset));
-}
-
-static void
-WriteISAC(struct IsdnCardState *cs, u_char offset, u_char value)
-{
- writereg(cs->hw.asus.adr, cs->hw.asus.isac, offset, value);
-}
-
-static void
-ReadISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- readfifo(cs->hw.asus.adr, cs->hw.asus.isac, 0, data, size);
-}
-
-static void
-WriteISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- writefifo(cs->hw.asus.adr, cs->hw.asus.isac, 0, data, size);
-}
-
-static u_char
-ReadISAC_IPAC(struct IsdnCardState *cs, u_char offset)
-{
- return (readreg(cs->hw.asus.adr, cs->hw.asus.isac, offset | 0x80));
-}
-
-static void
-WriteISAC_IPAC(struct IsdnCardState *cs, u_char offset, u_char value)
-{
- writereg(cs->hw.asus.adr, cs->hw.asus.isac, offset | 0x80, value);
-}
-
-static void
-ReadISACfifo_IPAC(struct IsdnCardState *cs, u_char *data, int size)
-{
- readfifo(cs->hw.asus.adr, cs->hw.asus.isac, 0x80, data, size);
-}
-
-static void
-WriteISACfifo_IPAC(struct IsdnCardState *cs, u_char *data, int size)
-{
- writefifo(cs->hw.asus.adr, cs->hw.asus.isac, 0x80, data, size);
-}
-
-static u_char
-ReadHSCX(struct IsdnCardState *cs, int hscx, u_char offset)
-{
- return (readreg(cs->hw.asus.adr,
- cs->hw.asus.hscx, offset + (hscx ? 0x40 : 0)));
-}
-
-static void
-WriteHSCX(struct IsdnCardState *cs, int hscx, u_char offset, u_char value)
-{
- writereg(cs->hw.asus.adr,
- cs->hw.asus.hscx, offset + (hscx ? 0x40 : 0), value);
-}
-
-/*
- * fast interrupt HSCX stuff goes here
- */
-
-#define READHSCX(cs, nr, reg) readreg(cs->hw.asus.adr, \
- cs->hw.asus.hscx, reg + (nr ? 0x40 : 0))
-#define WRITEHSCX(cs, nr, reg, data) writereg(cs->hw.asus.adr, \
- cs->hw.asus.hscx, reg + (nr ? 0x40 : 0), data)
-
-#define READHSCXFIFO(cs, nr, ptr, cnt) readfifo(cs->hw.asus.adr, \
- cs->hw.asus.hscx, (nr ? 0x40 : 0), ptr, cnt)
-
-#define WRITEHSCXFIFO(cs, nr, ptr, cnt) writefifo(cs->hw.asus.adr, \
- cs->hw.asus.hscx, (nr ? 0x40 : 0), ptr, cnt)
-
-#include "hscx_irq.c"
-
-static irqreturn_t
-asuscom_interrupt(int intno, void *dev_id)
-{
- struct IsdnCardState *cs = dev_id;
- u_char val;
- u_long flags;
-
- spin_lock_irqsave(&cs->lock, flags);
- val = readreg(cs->hw.asus.adr, cs->hw.asus.hscx, HSCX_ISTA + 0x40);
-Start_HSCX:
- if (val)
- hscx_int_main(cs, val);
- val = readreg(cs->hw.asus.adr, cs->hw.asus.isac, ISAC_ISTA);
-Start_ISAC:
- if (val)
- isac_interrupt(cs, val);
- val = readreg(cs->hw.asus.adr, cs->hw.asus.hscx, HSCX_ISTA + 0x40);
- if (val) {
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "HSCX IntStat after IntRoutine");
- goto Start_HSCX;
- }
- val = readreg(cs->hw.asus.adr, cs->hw.asus.isac, ISAC_ISTA);
- if (val) {
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "ISAC IntStat after IntRoutine");
- goto Start_ISAC;
- }
- writereg(cs->hw.asus.adr, cs->hw.asus.hscx, HSCX_MASK, 0xFF);
- writereg(cs->hw.asus.adr, cs->hw.asus.hscx, HSCX_MASK + 0x40, 0xFF);
- writereg(cs->hw.asus.adr, cs->hw.asus.isac, ISAC_MASK, 0xFF);
- writereg(cs->hw.asus.adr, cs->hw.asus.isac, ISAC_MASK, 0x0);
- writereg(cs->hw.asus.adr, cs->hw.asus.hscx, HSCX_MASK, 0x0);
- writereg(cs->hw.asus.adr, cs->hw.asus.hscx, HSCX_MASK + 0x40, 0x0);
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
-}
-
-static irqreturn_t
-asuscom_interrupt_ipac(int intno, void *dev_id)
-{
- struct IsdnCardState *cs = dev_id;
- u_char ista, val, icnt = 5;
- u_long flags;
-
- spin_lock_irqsave(&cs->lock, flags);
- ista = readreg(cs->hw.asus.adr, cs->hw.asus.isac, IPAC_ISTA);
-Start_IPAC:
- if (cs->debug & L1_DEB_IPAC)
- debugl1(cs, "IPAC ISTA %02X", ista);
- if (ista & 0x0f) {
- val = readreg(cs->hw.asus.adr, cs->hw.asus.hscx, HSCX_ISTA + 0x40);
- if (ista & 0x01)
- val |= 0x01;
- if (ista & 0x04)
- val |= 0x02;
- if (ista & 0x08)
- val |= 0x04;
- if (val)
- hscx_int_main(cs, val);
- }
- if (ista & 0x20) {
- val = 0xfe & readreg(cs->hw.asus.adr, cs->hw.asus.isac, ISAC_ISTA | 0x80);
- if (val) {
- isac_interrupt(cs, val);
- }
- }
- if (ista & 0x10) {
- val = 0x01;
- isac_interrupt(cs, val);
- }
- ista = readreg(cs->hw.asus.adr, cs->hw.asus.isac, IPAC_ISTA);
- if ((ista & 0x3f) && icnt) {
- icnt--;
- goto Start_IPAC;
- }
- if (!icnt)
- printk(KERN_WARNING "ASUS IRQ LOOP\n");
- writereg(cs->hw.asus.adr, cs->hw.asus.isac, IPAC_MASK, 0xFF);
- writereg(cs->hw.asus.adr, cs->hw.asus.isac, IPAC_MASK, 0xC0);
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
-}
-
-static void
-release_io_asuscom(struct IsdnCardState *cs)
-{
- int bytecnt = 8;
-
- if (cs->hw.asus.cfg_reg)
- release_region(cs->hw.asus.cfg_reg, bytecnt);
-}
-
-static void
-reset_asuscom(struct IsdnCardState *cs)
-{
- if (cs->subtyp == ASUS_IPAC)
- writereg(cs->hw.asus.adr, cs->hw.asus.isac, IPAC_POTA2, 0x20);
- else
- byteout(cs->hw.asus.adr, ASUS_RESET); /* Reset On */
- mdelay(10);
- if (cs->subtyp == ASUS_IPAC)
- writereg(cs->hw.asus.adr, cs->hw.asus.isac, IPAC_POTA2, 0x0);
- else
- byteout(cs->hw.asus.adr, 0); /* Reset Off */
- mdelay(10);
- if (cs->subtyp == ASUS_IPAC) {
- writereg(cs->hw.asus.adr, cs->hw.asus.isac, IPAC_CONF, 0x0);
- writereg(cs->hw.asus.adr, cs->hw.asus.isac, IPAC_ACFG, 0xff);
- writereg(cs->hw.asus.adr, cs->hw.asus.isac, IPAC_AOE, 0x0);
- writereg(cs->hw.asus.adr, cs->hw.asus.isac, IPAC_MASK, 0xc0);
- writereg(cs->hw.asus.adr, cs->hw.asus.isac, IPAC_PCFG, 0x12);
- }
-}
-
-static int
-Asus_card_msg(struct IsdnCardState *cs, int mt, void *arg)
-{
- u_long flags;
-
- switch (mt) {
- case CARD_RESET:
- spin_lock_irqsave(&cs->lock, flags);
- reset_asuscom(cs);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_RELEASE:
- release_io_asuscom(cs);
- return (0);
- case CARD_INIT:
- spin_lock_irqsave(&cs->lock, flags);
- cs->debug |= L1_DEB_IPAC;
- inithscxisac(cs, 3);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_TEST:
- return (0);
- }
- return (0);
-}
-
-#ifdef __ISAPNP__
-static struct isapnp_device_id asus_ids[] = {
- { ISAPNP_VENDOR('A', 'S', 'U'), ISAPNP_FUNCTION(0x1688),
- ISAPNP_VENDOR('A', 'S', 'U'), ISAPNP_FUNCTION(0x1688),
- (unsigned long) "Asus1688 PnP" },
- { ISAPNP_VENDOR('A', 'S', 'U'), ISAPNP_FUNCTION(0x1690),
- ISAPNP_VENDOR('A', 'S', 'U'), ISAPNP_FUNCTION(0x1690),
- (unsigned long) "Asus1690 PnP" },
- { ISAPNP_VENDOR('S', 'I', 'E'), ISAPNP_FUNCTION(0x0020),
- ISAPNP_VENDOR('S', 'I', 'E'), ISAPNP_FUNCTION(0x0020),
- (unsigned long) "Isurf2 PnP" },
- { ISAPNP_VENDOR('E', 'L', 'F'), ISAPNP_FUNCTION(0x0000),
- ISAPNP_VENDOR('E', 'L', 'F'), ISAPNP_FUNCTION(0x0000),
- (unsigned long) "Iscas TE320" },
- { 0, }
-};
-
-static struct isapnp_device_id *ipid = &asus_ids[0];
-static struct pnp_card *pnp_c = NULL;
-#endif
-
-int setup_asuscom(struct IsdnCard *card)
-{
- int bytecnt;
- struct IsdnCardState *cs = card->cs;
- u_char val;
- char tmp[64];
-
- strcpy(tmp, Asuscom_revision);
- printk(KERN_INFO "HiSax: Asuscom ISDNLink driver Rev. %s\n", HiSax_getrev(tmp));
- if (cs->typ != ISDN_CTYPE_ASUSCOM)
- return (0);
-#ifdef __ISAPNP__
- if (!card->para[1] && isapnp_present()) {
- struct pnp_dev *pnp_d;
- while (ipid->card_vendor) {
- if ((pnp_c = pnp_find_card(ipid->card_vendor,
- ipid->card_device, pnp_c))) {
- pnp_d = NULL;
- if ((pnp_d = pnp_find_dev(pnp_c,
- ipid->vendor, ipid->function, pnp_d))) {
- int err;
-
- printk(KERN_INFO "HiSax: %s detected\n",
- (char *)ipid->driver_data);
- pnp_disable_dev(pnp_d);
- err = pnp_activate_dev(pnp_d);
- if (err < 0) {
- printk(KERN_WARNING "%s: pnp_activate_dev ret(%d)\n",
- __func__, err);
- return (0);
- }
- card->para[1] = pnp_port_start(pnp_d, 0);
- card->para[0] = pnp_irq(pnp_d, 0);
- if (card->para[0] == -1 || !card->para[1]) {
- printk(KERN_ERR "AsusPnP:some resources are missing %ld/%lx\n",
- card->para[0], card->para[1]);
- pnp_disable_dev(pnp_d);
- return (0);
- }
- break;
- } else {
- printk(KERN_ERR "AsusPnP: PnP error card found, no device\n");
- }
- }
- ipid++;
- pnp_c = NULL;
- }
- if (!ipid->card_vendor) {
- printk(KERN_INFO "AsusPnP: no ISAPnP card found\n");
- return (0);
- }
- }
-#endif
- bytecnt = 8;
- cs->hw.asus.cfg_reg = card->para[1];
- cs->irq = card->para[0];
- if (!request_region(cs->hw.asus.cfg_reg, bytecnt, "asuscom isdn")) {
- printk(KERN_WARNING
- "HiSax: ISDNLink config port %x-%x already in use\n",
- cs->hw.asus.cfg_reg,
- cs->hw.asus.cfg_reg + bytecnt);
- return (0);
- }
- printk(KERN_INFO "ISDNLink: defined at 0x%x IRQ %d\n",
- cs->hw.asus.cfg_reg, cs->irq);
- setup_isac(cs);
- cs->BC_Read_Reg = &ReadHSCX;
- cs->BC_Write_Reg = &WriteHSCX;
- cs->BC_Send_Data = &hscx_fill_fifo;
- cs->cardmsg = &Asus_card_msg;
- val = readreg(cs->hw.asus.cfg_reg + ASUS_IPAC_ALE,
- cs->hw.asus.cfg_reg + ASUS_IPAC_DATA, IPAC_ID);
- if ((val == 1) || (val == 2)) {
- cs->subtyp = ASUS_IPAC;
- cs->hw.asus.adr = cs->hw.asus.cfg_reg + ASUS_IPAC_ALE;
- cs->hw.asus.isac = cs->hw.asus.cfg_reg + ASUS_IPAC_DATA;
- cs->hw.asus.hscx = cs->hw.asus.cfg_reg + ASUS_IPAC_DATA;
- test_and_set_bit(HW_IPAC, &cs->HW_Flags);
- cs->readisac = &ReadISAC_IPAC;
- cs->writeisac = &WriteISAC_IPAC;
- cs->readisacfifo = &ReadISACfifo_IPAC;
- cs->writeisacfifo = &WriteISACfifo_IPAC;
- cs->irq_func = &asuscom_interrupt_ipac;
- printk(KERN_INFO "Asus: IPAC version %x\n", val);
- } else {
- cs->subtyp = ASUS_ISACHSCX;
- cs->hw.asus.adr = cs->hw.asus.cfg_reg + ASUS_ADR;
- cs->hw.asus.isac = cs->hw.asus.cfg_reg + ASUS_ISAC;
- cs->hw.asus.hscx = cs->hw.asus.cfg_reg + ASUS_HSCX;
- cs->hw.asus.u7 = cs->hw.asus.cfg_reg + ASUS_CTRL_U7;
- cs->hw.asus.pots = cs->hw.asus.cfg_reg + ASUS_CTRL_POTS;
- cs->readisac = &ReadISAC;
- cs->writeisac = &WriteISAC;
- cs->readisacfifo = &ReadISACfifo;
- cs->writeisacfifo = &WriteISACfifo;
- cs->irq_func = &asuscom_interrupt;
- ISACVersion(cs, "ISDNLink:");
- if (HscxVersion(cs, "ISDNLink:")) {
- printk(KERN_WARNING
- "ISDNLink: wrong HSCX versions check IO address\n");
- release_io_asuscom(cs);
- return (0);
- }
- }
- return (1);
-}
diff --git a/drivers/isdn/hisax/avm_a1.c b/drivers/isdn/hisax/avm_a1.c
deleted file mode 100644
index 7dd74087ad72..000000000000
--- a/drivers/isdn/hisax/avm_a1.c
+++ /dev/null
@@ -1,307 +0,0 @@
-/* $Id: avm_a1.c,v 2.15.2.4 2004/01/13 21:46:03 keil Exp $
- *
- * low level stuff for AVM A1 (Fritz) isdn cards
- *
- * Author Karsten Keil
- * Copyright by Karsten Keil <keil@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#include <linux/init.h>
-#include "hisax.h"
-#include "isac.h"
-#include "hscx.h"
-#include "isdnl1.h"
-
-static const char *avm_revision = "$Revision: 2.15.2.4 $";
-
-#define AVM_A1_STAT_ISAC 0x01
-#define AVM_A1_STAT_HSCX 0x02
-#define AVM_A1_STAT_TIMER 0x04
-
-#define byteout(addr, val) outb(val, addr)
-#define bytein(addr) inb(addr)
-
-static inline u_char
-readreg(unsigned int adr, u_char off)
-{
- return (bytein(adr + off));
-}
-
-static inline void
-writereg(unsigned int adr, u_char off, u_char data)
-{
- byteout(adr + off, data);
-}
-
-
-static inline void
-read_fifo(unsigned int adr, u_char *data, int size)
-{
- insb(adr, data, size);
-}
-
-static void
-write_fifo(unsigned int adr, u_char *data, int size)
-{
- outsb(adr, data, size);
-}
-
-/* Interface functions */
-
-static u_char
-ReadISAC(struct IsdnCardState *cs, u_char offset)
-{
- return (readreg(cs->hw.avm.isac, offset));
-}
-
-static void
-WriteISAC(struct IsdnCardState *cs, u_char offset, u_char value)
-{
- writereg(cs->hw.avm.isac, offset, value);
-}
-
-static void
-ReadISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- read_fifo(cs->hw.avm.isacfifo, data, size);
-}
-
-static void
-WriteISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- write_fifo(cs->hw.avm.isacfifo, data, size);
-}
-
-static u_char
-ReadHSCX(struct IsdnCardState *cs, int hscx, u_char offset)
-{
- return (readreg(cs->hw.avm.hscx[hscx], offset));
-}
-
-static void
-WriteHSCX(struct IsdnCardState *cs, int hscx, u_char offset, u_char value)
-{
- writereg(cs->hw.avm.hscx[hscx], offset, value);
-}
-
-/*
- * fast interrupt HSCX stuff goes here
- */
-
-#define READHSCX(cs, nr, reg) readreg(cs->hw.avm.hscx[nr], reg)
-#define WRITEHSCX(cs, nr, reg, data) writereg(cs->hw.avm.hscx[nr], reg, data)
-#define READHSCXFIFO(cs, nr, ptr, cnt) read_fifo(cs->hw.avm.hscxfifo[nr], ptr, cnt)
-#define WRITEHSCXFIFO(cs, nr, ptr, cnt) write_fifo(cs->hw.avm.hscxfifo[nr], ptr, cnt)
-
-#include "hscx_irq.c"
-
-static irqreturn_t
-avm_a1_interrupt(int intno, void *dev_id)
-{
- struct IsdnCardState *cs = dev_id;
- u_char val, sval;
- u_long flags;
-
- spin_lock_irqsave(&cs->lock, flags);
- while (((sval = bytein(cs->hw.avm.cfg_reg)) & 0xf) != 0x7) {
- if (!(sval & AVM_A1_STAT_TIMER)) {
- byteout(cs->hw.avm.cfg_reg, 0x1E);
- sval = bytein(cs->hw.avm.cfg_reg);
- } else if (cs->debug & L1_DEB_INTSTAT)
- debugl1(cs, "avm IntStatus %x", sval);
- if (!(sval & AVM_A1_STAT_HSCX)) {
- val = readreg(cs->hw.avm.hscx[1], HSCX_ISTA);
- if (val)
- hscx_int_main(cs, val);
- }
- if (!(sval & AVM_A1_STAT_ISAC)) {
- val = readreg(cs->hw.avm.isac, ISAC_ISTA);
- if (val)
- isac_interrupt(cs, val);
- }
- }
- writereg(cs->hw.avm.hscx[0], HSCX_MASK, 0xFF);
- writereg(cs->hw.avm.hscx[1], HSCX_MASK, 0xFF);
- writereg(cs->hw.avm.isac, ISAC_MASK, 0xFF);
- writereg(cs->hw.avm.isac, ISAC_MASK, 0x0);
- writereg(cs->hw.avm.hscx[0], HSCX_MASK, 0x0);
- writereg(cs->hw.avm.hscx[1], HSCX_MASK, 0x0);
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
-}
-
-static inline void
-release_ioregs(struct IsdnCardState *cs, int mask)
-{
- release_region(cs->hw.avm.cfg_reg, 8);
- if (mask & 1)
- release_region(cs->hw.avm.isac + 32, 32);
- if (mask & 2)
- release_region(cs->hw.avm.isacfifo, 1);
- if (mask & 4)
- release_region(cs->hw.avm.hscx[0] + 32, 32);
- if (mask & 8)
- release_region(cs->hw.avm.hscxfifo[0], 1);
- if (mask & 0x10)
- release_region(cs->hw.avm.hscx[1] + 32, 32);
- if (mask & 0x20)
- release_region(cs->hw.avm.hscxfifo[1], 1);
-}
-
-static int
-AVM_card_msg(struct IsdnCardState *cs, int mt, void *arg)
-{
- u_long flags;
-
- switch (mt) {
- case CARD_RESET:
- return (0);
- case CARD_RELEASE:
- release_ioregs(cs, 0x3f);
- return (0);
- case CARD_INIT:
- spin_lock_irqsave(&cs->lock, flags);
- inithscxisac(cs, 1);
- byteout(cs->hw.avm.cfg_reg, 0x16);
- byteout(cs->hw.avm.cfg_reg, 0x1E);
- inithscxisac(cs, 2);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_TEST:
- return (0);
- }
- return (0);
-}
-
-int setup_avm_a1(struct IsdnCard *card)
-{
- u_char val;
- struct IsdnCardState *cs = card->cs;
- char tmp[64];
-
- strcpy(tmp, avm_revision);
- printk(KERN_INFO "HiSax: AVM driver Rev. %s\n", HiSax_getrev(tmp));
- if (cs->typ != ISDN_CTYPE_A1)
- return (0);
-
- cs->hw.avm.cfg_reg = card->para[1] + 0x1800;
- cs->hw.avm.isac = card->para[1] + 0x1400 - 0x20;
- cs->hw.avm.hscx[0] = card->para[1] + 0x400 - 0x20;
- cs->hw.avm.hscx[1] = card->para[1] + 0xc00 - 0x20;
- cs->hw.avm.isacfifo = card->para[1] + 0x1000;
- cs->hw.avm.hscxfifo[0] = card->para[1];
- cs->hw.avm.hscxfifo[1] = card->para[1] + 0x800;
- cs->irq = card->para[0];
- if (!request_region(cs->hw.avm.cfg_reg, 8, "avm cfg")) {
- printk(KERN_WARNING
- "HiSax: AVM A1 config port %x-%x already in use\n",
- cs->hw.avm.cfg_reg,
- cs->hw.avm.cfg_reg + 8);
- return (0);
- }
- if (!request_region(cs->hw.avm.isac + 32, 32, "HiSax isac")) {
- printk(KERN_WARNING
- "HiSax: AVM A1 isac ports %x-%x already in use\n",
- cs->hw.avm.isac + 32,
- cs->hw.avm.isac + 64);
- release_ioregs(cs, 0);
- return (0);
- }
- if (!request_region(cs->hw.avm.isacfifo, 1, "HiSax isac fifo")) {
- printk(KERN_WARNING
- "HiSax: AVM A1 isac fifo port %x already in use\n",
- cs->hw.avm.isacfifo);
- release_ioregs(cs, 1);
- return (0);
- }
- if (!request_region(cs->hw.avm.hscx[0] + 32, 32, "HiSax hscx A")) {
- printk(KERN_WARNING
- "HiSax: AVM A1 hscx A ports %x-%x already in use\n",
- cs->hw.avm.hscx[0] + 32,
- cs->hw.avm.hscx[0] + 64);
- release_ioregs(cs, 3);
- return (0);
- }
- if (!request_region(cs->hw.avm.hscxfifo[0], 1, "HiSax hscx A fifo")) {
- printk(KERN_WARNING
- "HiSax: AVM A1 hscx A fifo port %x already in use\n",
- cs->hw.avm.hscxfifo[0]);
- release_ioregs(cs, 7);
- return (0);
- }
- if (!request_region(cs->hw.avm.hscx[1] + 32, 32, "HiSax hscx B")) {
- printk(KERN_WARNING
- "HiSax: AVM A1 hscx B ports %x-%x already in use\n",
- cs->hw.avm.hscx[1] + 32,
- cs->hw.avm.hscx[1] + 64);
- release_ioregs(cs, 0xf);
- return (0);
- }
- if (!request_region(cs->hw.avm.hscxfifo[1], 1, "HiSax hscx B fifo")) {
- printk(KERN_WARNING
- "HiSax: AVM A1 hscx B fifo port %x already in use\n",
- cs->hw.avm.hscxfifo[1]);
- release_ioregs(cs, 0x1f);
- return (0);
- }
- byteout(cs->hw.avm.cfg_reg, 0x0);
- HZDELAY(HZ / 5 + 1);
- byteout(cs->hw.avm.cfg_reg, 0x1);
- HZDELAY(HZ / 5 + 1);
- byteout(cs->hw.avm.cfg_reg, 0x0);
- HZDELAY(HZ / 5 + 1);
- val = cs->irq;
- if (val == 9)
- val = 2;
- byteout(cs->hw.avm.cfg_reg + 1, val);
- HZDELAY(HZ / 5 + 1);
- byteout(cs->hw.avm.cfg_reg, 0x0);
- HZDELAY(HZ / 5 + 1);
-
- val = bytein(cs->hw.avm.cfg_reg);
- printk(KERN_INFO "AVM A1: Byte at %x is %x\n",
- cs->hw.avm.cfg_reg, val);
- val = bytein(cs->hw.avm.cfg_reg + 3);
- printk(KERN_INFO "AVM A1: Byte at %x is %x\n",
- cs->hw.avm.cfg_reg + 3, val);
- val = bytein(cs->hw.avm.cfg_reg + 2);
- printk(KERN_INFO "AVM A1: Byte at %x is %x\n",
- cs->hw.avm.cfg_reg + 2, val);
- val = bytein(cs->hw.avm.cfg_reg);
- printk(KERN_INFO "AVM A1: Byte at %x is %x\n",
- cs->hw.avm.cfg_reg, val);
-
- printk(KERN_INFO "HiSax: AVM A1 config irq:%d cfg:0x%X\n",
- cs->irq,
- cs->hw.avm.cfg_reg);
- printk(KERN_INFO
- "HiSax: isac:0x%X/0x%X\n",
- cs->hw.avm.isac + 32, cs->hw.avm.isacfifo);
- printk(KERN_INFO
- "HiSax: hscx A:0x%X/0x%X hscx B:0x%X/0x%X\n",
- cs->hw.avm.hscx[0] + 32, cs->hw.avm.hscxfifo[0],
- cs->hw.avm.hscx[1] + 32, cs->hw.avm.hscxfifo[1]);
-
- cs->readisac = &ReadISAC;
- cs->writeisac = &WriteISAC;
- cs->readisacfifo = &ReadISACfifo;
- cs->writeisacfifo = &WriteISACfifo;
- cs->BC_Read_Reg = &ReadHSCX;
- cs->BC_Write_Reg = &WriteHSCX;
- cs->BC_Send_Data = &hscx_fill_fifo;
- setup_isac(cs);
- cs->cardmsg = &AVM_card_msg;
- cs->irq_func = &avm_a1_interrupt;
- ISACVersion(cs, "AVM A1:");
- if (HscxVersion(cs, "AVM A1:")) {
- printk(KERN_WARNING
- "AVM A1: wrong HSCX versions check IO address\n");
- release_ioregs(cs, 0x3f);
- return (0);
- }
- return (1);
-}
diff --git a/drivers/isdn/hisax/avm_a1p.c b/drivers/isdn/hisax/avm_a1p.c
deleted file mode 100644
index bc52d54ff5e1..000000000000
--- a/drivers/isdn/hisax/avm_a1p.c
+++ /dev/null
@@ -1,267 +0,0 @@
-/* $Id: avm_a1p.c,v 2.9.2.5 2004/01/24 20:47:19 keil Exp $
- *
- * low level stuff for the following AVM cards:
- * A1 PCMCIA
- * FRITZ!Card PCMCIA
- * FRITZ!Card PCMCIA 2.0
- *
- * Author Carsten Paeth
- * Copyright by Carsten Paeth <calle@calle.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#include <linux/init.h>
-#include "hisax.h"
-#include "isac.h"
-#include "hscx.h"
-#include "isdnl1.h"
-
-/* register offsets */
-#define ADDRREG_OFFSET 0x02
-#define DATAREG_OFFSET 0x03
-#define ASL0_OFFSET 0x04
-#define ASL1_OFFSET 0x05
-#define MODREG_OFFSET 0x06
-#define VERREG_OFFSET 0x07
-
-/* address offsets */
-#define ISAC_FIFO_OFFSET 0x00
-#define ISAC_REG_OFFSET 0x20
-#define HSCX_CH_DIFF 0x40
-#define HSCX_FIFO_OFFSET 0x80
-#define HSCX_REG_OFFSET 0xa0
-
-/* read bits ASL0 */
-#define ASL0_R_TIMER 0x10 /* active low */
-#define ASL0_R_ISAC 0x20 /* active low */
-#define ASL0_R_HSCX 0x40 /* active low */
-#define ASL0_R_TESTBIT 0x80
-#define ASL0_R_IRQPENDING (ASL0_R_ISAC | ASL0_R_HSCX | ASL0_R_TIMER)
-
-/* write bits ASL0 */
-#define ASL0_W_RESET 0x01
-#define ASL0_W_TDISABLE 0x02
-#define ASL0_W_TRESET 0x04
-#define ASL0_W_IRQENABLE 0x08
-#define ASL0_W_TESTBIT 0x80
-
-/* write bits ASL1 */
-#define ASL1_W_LED0 0x10
-#define ASL1_W_LED1 0x20
-#define ASL1_W_ENABLE_S0 0xC0
-
-#define byteout(addr, val) outb(val, addr)
-#define bytein(addr) inb(addr)
-
-static const char *avm_revision = "$Revision: 2.9.2.5 $";
-
-static inline u_char
-ReadISAC(struct IsdnCardState *cs, u_char offset)
-{
- u_char ret;
-
- offset -= 0x20;
- byteout(cs->hw.avm.cfg_reg + ADDRREG_OFFSET, ISAC_REG_OFFSET + offset);
- ret = bytein(cs->hw.avm.cfg_reg + DATAREG_OFFSET);
- return ret;
-}
-
-static inline void
-WriteISAC(struct IsdnCardState *cs, u_char offset, u_char value)
-{
- offset -= 0x20;
- byteout(cs->hw.avm.cfg_reg + ADDRREG_OFFSET, ISAC_REG_OFFSET + offset);
- byteout(cs->hw.avm.cfg_reg + DATAREG_OFFSET, value);
-}
-
-static inline void
-ReadISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- byteout(cs->hw.avm.cfg_reg + ADDRREG_OFFSET, ISAC_FIFO_OFFSET);
- insb(cs->hw.avm.cfg_reg + DATAREG_OFFSET, data, size);
-}
-
-static inline void
-WriteISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- byteout(cs->hw.avm.cfg_reg + ADDRREG_OFFSET, ISAC_FIFO_OFFSET);
- outsb(cs->hw.avm.cfg_reg + DATAREG_OFFSET, data, size);
-}
-
-static inline u_char
-ReadHSCX(struct IsdnCardState *cs, int hscx, u_char offset)
-{
- u_char ret;
-
- offset -= 0x20;
- byteout(cs->hw.avm.cfg_reg + ADDRREG_OFFSET,
- HSCX_REG_OFFSET + hscx * HSCX_CH_DIFF + offset);
- ret = bytein(cs->hw.avm.cfg_reg + DATAREG_OFFSET);
- return ret;
-}
-
-static inline void
-WriteHSCX(struct IsdnCardState *cs, int hscx, u_char offset, u_char value)
-{
- offset -= 0x20;
- byteout(cs->hw.avm.cfg_reg + ADDRREG_OFFSET,
- HSCX_REG_OFFSET + hscx * HSCX_CH_DIFF + offset);
- byteout(cs->hw.avm.cfg_reg + DATAREG_OFFSET, value);
-}
-
-static inline void
-ReadHSCXfifo(struct IsdnCardState *cs, int hscx, u_char *data, int size)
-{
- byteout(cs->hw.avm.cfg_reg + ADDRREG_OFFSET,
- HSCX_FIFO_OFFSET + hscx * HSCX_CH_DIFF);
- insb(cs->hw.avm.cfg_reg + DATAREG_OFFSET, data, size);
-}
-
-static inline void
-WriteHSCXfifo(struct IsdnCardState *cs, int hscx, u_char *data, int size)
-{
- byteout(cs->hw.avm.cfg_reg + ADDRREG_OFFSET,
- HSCX_FIFO_OFFSET + hscx * HSCX_CH_DIFF);
- outsb(cs->hw.avm.cfg_reg + DATAREG_OFFSET, data, size);
-}
-
-/*
- * fast interrupt HSCX stuff goes here
- */
-
-#define READHSCX(cs, nr, reg) ReadHSCX(cs, nr, reg)
-#define WRITEHSCX(cs, nr, reg, data) WriteHSCX(cs, nr, reg, data)
-#define READHSCXFIFO(cs, nr, ptr, cnt) ReadHSCXfifo(cs, nr, ptr, cnt)
-#define WRITEHSCXFIFO(cs, nr, ptr, cnt) WriteHSCXfifo(cs, nr, ptr, cnt)
-
-#include "hscx_irq.c"
-
-static irqreturn_t
-avm_a1p_interrupt(int intno, void *dev_id)
-{
- struct IsdnCardState *cs = dev_id;
- u_char val, sval;
- u_long flags;
-
- spin_lock_irqsave(&cs->lock, flags);
- while ((sval = (~bytein(cs->hw.avm.cfg_reg + ASL0_OFFSET) & ASL0_R_IRQPENDING))) {
- if (cs->debug & L1_DEB_INTSTAT)
- debugl1(cs, "avm IntStatus %x", sval);
- if (sval & ASL0_R_HSCX) {
- val = ReadHSCX(cs, 1, HSCX_ISTA);
- if (val)
- hscx_int_main(cs, val);
- }
- if (sval & ASL0_R_ISAC) {
- val = ReadISAC(cs, ISAC_ISTA);
- if (val)
- isac_interrupt(cs, val);
- }
- }
- WriteHSCX(cs, 0, HSCX_MASK, 0xff);
- WriteHSCX(cs, 1, HSCX_MASK, 0xff);
- WriteISAC(cs, ISAC_MASK, 0xff);
- WriteISAC(cs, ISAC_MASK, 0x00);
- WriteHSCX(cs, 0, HSCX_MASK, 0x00);
- WriteHSCX(cs, 1, HSCX_MASK, 0x00);
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
-}
-
-static int
-AVM_card_msg(struct IsdnCardState *cs, int mt, void *arg)
-{
- u_long flags;
-
- switch (mt) {
- case CARD_RESET:
- spin_lock_irqsave(&cs->lock, flags);
- byteout(cs->hw.avm.cfg_reg + ASL0_OFFSET, 0x00);
- HZDELAY(HZ / 5 + 1);
- byteout(cs->hw.avm.cfg_reg + ASL0_OFFSET, ASL0_W_RESET);
- HZDELAY(HZ / 5 + 1);
- byteout(cs->hw.avm.cfg_reg + ASL0_OFFSET, 0x00);
- spin_unlock_irqrestore(&cs->lock, flags);
- return 0;
-
- case CARD_RELEASE:
- /* free_irq is done in HiSax_closecard(). */
- /* free_irq(cs->irq, cs); */
- return 0;
-
- case CARD_INIT:
- spin_lock_irqsave(&cs->lock, flags);
- byteout(cs->hw.avm.cfg_reg + ASL0_OFFSET, ASL0_W_TDISABLE | ASL0_W_TRESET | ASL0_W_IRQENABLE);
- clear_pending_isac_ints(cs);
- clear_pending_hscx_ints(cs);
- inithscxisac(cs, 1);
- inithscxisac(cs, 2);
- spin_unlock_irqrestore(&cs->lock, flags);
- return 0;
-
- case CARD_TEST:
- /* we really don't need it for the PCMCIA Version */
- return 0;
-
- default:
- /* all card drivers ignore others, so we do the same */
- return 0;
- }
- return 0;
-}
-
-int setup_avm_a1_pcmcia(struct IsdnCard *card)
-{
- u_char model, vers;
- struct IsdnCardState *cs = card->cs;
- char tmp[64];
-
-
- strcpy(tmp, avm_revision);
- printk(KERN_INFO "HiSax: AVM A1 PCMCIA driver Rev. %s\n",
- HiSax_getrev(tmp));
- if (cs->typ != ISDN_CTYPE_A1_PCMCIA)
- return (0);
-
- cs->hw.avm.cfg_reg = card->para[1];
- cs->irq = card->para[0];
-
-
- byteout(cs->hw.avm.cfg_reg + ASL1_OFFSET, ASL1_W_ENABLE_S0);
- byteout(cs->hw.avm.cfg_reg + ASL0_OFFSET, 0x00);
- HZDELAY(HZ / 5 + 1);
- byteout(cs->hw.avm.cfg_reg + ASL0_OFFSET, ASL0_W_RESET);
- HZDELAY(HZ / 5 + 1);
- byteout(cs->hw.avm.cfg_reg + ASL0_OFFSET, 0x00);
-
- byteout(cs->hw.avm.cfg_reg + ASL0_OFFSET, ASL0_W_TDISABLE | ASL0_W_TRESET);
-
- model = bytein(cs->hw.avm.cfg_reg + MODREG_OFFSET);
- vers = bytein(cs->hw.avm.cfg_reg + VERREG_OFFSET);
-
- printk(KERN_INFO "AVM A1 PCMCIA: io 0x%x irq %d model %d version %d\n",
- cs->hw.avm.cfg_reg, cs->irq, model, vers);
-
- setup_isac(cs);
- cs->readisac = &ReadISAC;
- cs->writeisac = &WriteISAC;
- cs->readisacfifo = &ReadISACfifo;
- cs->writeisacfifo = &WriteISACfifo;
- cs->BC_Read_Reg = &ReadHSCX;
- cs->BC_Write_Reg = &WriteHSCX;
- cs->BC_Send_Data = &hscx_fill_fifo;
- cs->cardmsg = &AVM_card_msg;
- cs->irq_flags = IRQF_SHARED;
- cs->irq_func = &avm_a1p_interrupt;
-
- ISACVersion(cs, "AVM A1 PCMCIA:");
- if (HscxVersion(cs, "AVM A1 PCMCIA:")) {
- printk(KERN_WARNING
- "AVM A1 PCMCIA: wrong HSCX versions check IO address\n");
- return (0);
- }
- return (1);
-}
diff --git a/drivers/isdn/hisax/avm_pci.c b/drivers/isdn/hisax/avm_pci.c
deleted file mode 100644
index b161456c942e..000000000000
--- a/drivers/isdn/hisax/avm_pci.c
+++ /dev/null
@@ -1,904 +0,0 @@
-/* $Id: avm_pci.c,v 1.29.2.4 2004/02/11 13:21:32 keil Exp $
- *
- * low level stuff for AVM Fritz!PCI and ISA PnP isdn cards
- *
- * Author Karsten Keil
- * Copyright by Karsten Keil <keil@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- * Thanks to AVM, Berlin for information
- *
- */
-
-#include <linux/init.h>
-#include "hisax.h"
-#include "isac.h"
-#include "isdnl1.h"
-#include <linux/pci.h>
-#include <linux/slab.h>
-#include <linux/isapnp.h>
-#include <linux/interrupt.h>
-
-static const char *avm_pci_rev = "$Revision: 1.29.2.4 $";
-
-#define AVM_FRITZ_PCI 1
-#define AVM_FRITZ_PNP 2
-
-#define HDLC_FIFO 0x0
-#define HDLC_STATUS 0x4
-
-#define AVM_HDLC_1 0x00
-#define AVM_HDLC_2 0x01
-#define AVM_ISAC_FIFO 0x02
-#define AVM_ISAC_REG_LOW 0x04
-#define AVM_ISAC_REG_HIGH 0x06
-
-#define AVM_STATUS0_IRQ_ISAC 0x01
-#define AVM_STATUS0_IRQ_HDLC 0x02
-#define AVM_STATUS0_IRQ_TIMER 0x04
-#define AVM_STATUS0_IRQ_MASK 0x07
-
-#define AVM_STATUS0_RESET 0x01
-#define AVM_STATUS0_DIS_TIMER 0x02
-#define AVM_STATUS0_RES_TIMER 0x04
-#define AVM_STATUS0_ENA_IRQ 0x08
-#define AVM_STATUS0_TESTBIT 0x10
-
-#define AVM_STATUS1_INT_SEL 0x0f
-#define AVM_STATUS1_ENA_IOM 0x80
-
-#define HDLC_MODE_ITF_FLG 0x01
-#define HDLC_MODE_TRANS 0x02
-#define HDLC_MODE_CCR_7 0x04
-#define HDLC_MODE_CCR_16 0x08
-#define HDLC_MODE_TESTLOOP 0x80
-
-#define HDLC_INT_XPR 0x80
-#define HDLC_INT_XDU 0x40
-#define HDLC_INT_RPR 0x20
-#define HDLC_INT_MASK 0xE0
-
-#define HDLC_STAT_RME 0x01
-#define HDLC_STAT_RDO 0x10
-#define HDLC_STAT_CRCVFRRAB 0x0E
-#define HDLC_STAT_CRCVFR 0x06
-#define HDLC_STAT_RML_MASK 0x3f00
-
-#define HDLC_CMD_XRS 0x80
-#define HDLC_CMD_XME 0x01
-#define HDLC_CMD_RRS 0x20
-#define HDLC_CMD_XML_MASK 0x3f00
-
-
-/* Interface functions */
-
-static u_char
-ReadISAC(struct IsdnCardState *cs, u_char offset)
-{
- register u_char idx = (offset > 0x2f) ? AVM_ISAC_REG_HIGH : AVM_ISAC_REG_LOW;
- register u_char val;
-
- outb(idx, cs->hw.avm.cfg_reg + 4);
- val = inb(cs->hw.avm.isac + (offset & 0xf));
- return (val);
-}
-
-static void
-WriteISAC(struct IsdnCardState *cs, u_char offset, u_char value)
-{
- register u_char idx = (offset > 0x2f) ? AVM_ISAC_REG_HIGH : AVM_ISAC_REG_LOW;
-
- outb(idx, cs->hw.avm.cfg_reg + 4);
- outb(value, cs->hw.avm.isac + (offset & 0xf));
-}
-
-static void
-ReadISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- outb(AVM_ISAC_FIFO, cs->hw.avm.cfg_reg + 4);
- insb(cs->hw.avm.isac, data, size);
-}
-
-static void
-WriteISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- outb(AVM_ISAC_FIFO, cs->hw.avm.cfg_reg + 4);
- outsb(cs->hw.avm.isac, data, size);
-}
-
-static inline u_int
-ReadHDLCPCI(struct IsdnCardState *cs, int chan, u_char offset)
-{
- register u_int idx = chan ? AVM_HDLC_2 : AVM_HDLC_1;
- register u_int val;
-
- outl(idx, cs->hw.avm.cfg_reg + 4);
- val = inl(cs->hw.avm.isac + offset);
- return (val);
-}
-
-static inline void
-WriteHDLCPCI(struct IsdnCardState *cs, int chan, u_char offset, u_int value)
-{
- register u_int idx = chan ? AVM_HDLC_2 : AVM_HDLC_1;
-
- outl(idx, cs->hw.avm.cfg_reg + 4);
- outl(value, cs->hw.avm.isac + offset);
-}
-
-static inline u_char
-ReadHDLCPnP(struct IsdnCardState *cs, int chan, u_char offset)
-{
- register u_char idx = chan ? AVM_HDLC_2 : AVM_HDLC_1;
- register u_char val;
-
- outb(idx, cs->hw.avm.cfg_reg + 4);
- val = inb(cs->hw.avm.isac + offset);
- return (val);
-}
-
-static inline void
-WriteHDLCPnP(struct IsdnCardState *cs, int chan, u_char offset, u_char value)
-{
- register u_char idx = chan ? AVM_HDLC_2 : AVM_HDLC_1;
-
- outb(idx, cs->hw.avm.cfg_reg + 4);
- outb(value, cs->hw.avm.isac + offset);
-}
-
-static u_char
-ReadHDLC_s(struct IsdnCardState *cs, int chan, u_char offset)
-{
- return (0xff & ReadHDLCPCI(cs, chan, offset));
-}
-
-static void
-WriteHDLC_s(struct IsdnCardState *cs, int chan, u_char offset, u_char value)
-{
- WriteHDLCPCI(cs, chan, offset, value);
-}
-
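-/*
- * Return the BCState currently bound to the given B channel, or NULL if
- * neither B-channel state is active on that channel.
- */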
-static inline
-struct BCState *Sel_BCS(struct IsdnCardState *cs, int channel)
-{
- if (cs->bcs[0].mode && (cs->bcs[0].channel == channel))
- return (&cs->bcs[0]);
- else if (cs->bcs[1].mode && (cs->bcs[1].channel == channel))
- return (&cs->bcs[1]);
- else
- return (NULL);
-}
-
-static void
-write_ctrl(struct BCState *bcs, int which) {
-
- if (bcs->cs->debug & L1_DEB_HSCX)
- debugl1(bcs->cs, "hdlc %c wr%x ctrl %x",
- 'A' + bcs->channel, which, bcs->hw.hdlc.ctrl.ctrl);
- if (bcs->cs->subtyp == AVM_FRITZ_PCI) {
- WriteHDLCPCI(bcs->cs, bcs->channel, HDLC_STATUS, bcs->hw.hdlc.ctrl.ctrl);
- } else {
- if (which & 4)
- WriteHDLCPnP(bcs->cs, bcs->channel, HDLC_STATUS + 2,
- bcs->hw.hdlc.ctrl.sr.mode);
- if (which & 2)
- WriteHDLCPnP(bcs->cs, bcs->channel, HDLC_STATUS + 1,
- bcs->hw.hdlc.ctrl.sr.xml);
- if (which & 1)
- WriteHDLCPnP(bcs->cs, bcs->channel, HDLC_STATUS,
- bcs->hw.hdlc.ctrl.sr.cmd);
- }
-}
-
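-/*
- * Switch a B channel between idle (L1_MODE_NULL), transparent and HDLC
- * framing; mode -1 is only used at init time to force a known state.
- */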
-static void
-modehdlc(struct BCState *bcs, int mode, int bc)
-{
- struct IsdnCardState *cs = bcs->cs;
- int hdlc = bcs->channel;
-
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "hdlc %c mode %d --> %d ichan %d --> %d",
- 'A' + hdlc, bcs->mode, mode, hdlc, bc);
- bcs->hw.hdlc.ctrl.ctrl = 0;
- switch (mode) {
- case (-1): /* used for init */
- bcs->mode = 1;
- bcs->channel = bc;
- bc = 0;
- /* fall through */
- case (L1_MODE_NULL):
- if (bcs->mode == L1_MODE_NULL)
- return;
- bcs->hw.hdlc.ctrl.sr.cmd = HDLC_CMD_XRS | HDLC_CMD_RRS;
- bcs->hw.hdlc.ctrl.sr.mode = HDLC_MODE_TRANS;
- write_ctrl(bcs, 5);
- bcs->mode = L1_MODE_NULL;
- bcs->channel = bc;
- break;
- case (L1_MODE_TRANS):
- bcs->mode = mode;
- bcs->channel = bc;
- bcs->hw.hdlc.ctrl.sr.cmd = HDLC_CMD_XRS | HDLC_CMD_RRS;
- bcs->hw.hdlc.ctrl.sr.mode = HDLC_MODE_TRANS;
- write_ctrl(bcs, 5);
- bcs->hw.hdlc.ctrl.sr.cmd = HDLC_CMD_XRS;
- write_ctrl(bcs, 1);
- bcs->hw.hdlc.ctrl.sr.cmd = 0;
- schedule_event(bcs, B_XMTBUFREADY);
- break;
- case (L1_MODE_HDLC):
- bcs->mode = mode;
- bcs->channel = bc;
- bcs->hw.hdlc.ctrl.sr.cmd = HDLC_CMD_XRS | HDLC_CMD_RRS;
- bcs->hw.hdlc.ctrl.sr.mode = HDLC_MODE_ITF_FLG;
- write_ctrl(bcs, 5);
- bcs->hw.hdlc.ctrl.sr.cmd = HDLC_CMD_XRS;
- write_ctrl(bcs, 1);
- bcs->hw.hdlc.ctrl.sr.cmd = 0;
- schedule_event(bcs, B_XMTBUFREADY);
- break;
- }
-}
-
-static inline void
-hdlc_empty_fifo(struct BCState *bcs, int count)
-{
- register u_int *ptr;
- u_char *p;
- u_char idx = bcs->channel ? AVM_HDLC_2 : AVM_HDLC_1;
- int cnt = 0;
- struct IsdnCardState *cs = bcs->cs;
-
- if ((cs->debug & L1_DEB_HSCX) && !(cs->debug & L1_DEB_HSCX_FIFO))
- debugl1(cs, "hdlc_empty_fifo %d", count);
- if (bcs->hw.hdlc.rcvidx + count > HSCX_BUFMAX) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "hdlc_empty_fifo: incoming packet too large");
- return;
- }
- p = bcs->hw.hdlc.rcvbuf + bcs->hw.hdlc.rcvidx;
- ptr = (u_int *)p;
- bcs->hw.hdlc.rcvidx += count;
- if (cs->subtyp == AVM_FRITZ_PCI) {
- outl(idx, cs->hw.avm.cfg_reg + 4);
- while (cnt < count) {
-#ifdef __powerpc__
- *ptr++ = in_be32((unsigned *)(cs->hw.avm.isac + _IO_BASE));
-#else
- *ptr++ = inl(cs->hw.avm.isac);
-#endif /* __powerpc__ */
- cnt += 4;
- }
- } else {
- outb(idx, cs->hw.avm.cfg_reg + 4);
- while (cnt < count) {
- *p++ = inb(cs->hw.avm.isac);
- cnt++;
- }
- }
- if (cs->debug & L1_DEB_HSCX_FIFO) {
- char *t = bcs->blog;
-
- if (cs->subtyp == AVM_FRITZ_PNP)
- p = (u_char *) ptr;
- t += sprintf(t, "hdlc_empty_fifo %c cnt %d",
- bcs->channel ? 'B' : 'A', count);
- QuickHex(t, p, count);
- debugl1(cs, "%s", bcs->blog);
- }
-}
-
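-/*
- * Push the next chunk of the pending tx_skb to the controller, at most one
- * 32-byte FIFO per call; the Fritz!PCI moves 32-bit words, the PnP variant
- * single bytes.
- */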
-static inline void
-hdlc_fill_fifo(struct BCState *bcs)
-{
- struct IsdnCardState *cs = bcs->cs;
- int count, cnt = 0;
- int fifo_size = 32;
- u_char *p;
- u_int *ptr;
-
- if ((cs->debug & L1_DEB_HSCX) && !(cs->debug & L1_DEB_HSCX_FIFO))
- debugl1(cs, "hdlc_fill_fifo");
- if (!bcs->tx_skb)
- return;
- if (bcs->tx_skb->len <= 0)
- return;
-
- bcs->hw.hdlc.ctrl.sr.cmd &= ~HDLC_CMD_XME;
- if (bcs->tx_skb->len > fifo_size) {
- count = fifo_size;
- } else {
- count = bcs->tx_skb->len;
- if (bcs->mode != L1_MODE_TRANS)
- bcs->hw.hdlc.ctrl.sr.cmd |= HDLC_CMD_XME;
- }
- if ((cs->debug & L1_DEB_HSCX) && !(cs->debug & L1_DEB_HSCX_FIFO))
- debugl1(cs, "hdlc_fill_fifo %d/%u", count, bcs->tx_skb->len);
- p = bcs->tx_skb->data;
- ptr = (u_int *)p;
- skb_pull(bcs->tx_skb, count);
- bcs->tx_cnt -= count;
- bcs->hw.hdlc.count += count;
- bcs->hw.hdlc.ctrl.sr.xml = ((count == fifo_size) ? 0 : count);
- write_ctrl(bcs, 3); /* sets the correct index too */
- if (cs->subtyp == AVM_FRITZ_PCI) {
- while (cnt < count) {
-#ifdef __powerpc__
- out_be32((unsigned *)(cs->hw.avm.isac + _IO_BASE), *ptr++);
-#else
- outl(*ptr++, cs->hw.avm.isac);
-#endif /* __powerpc__ */
- cnt += 4;
- }
- } else {
- while (cnt < count) {
- outb(*p++, cs->hw.avm.isac);
- cnt++;
- }
- }
- if (cs->debug & L1_DEB_HSCX_FIFO) {
- char *t = bcs->blog;
-
- if (cs->subtyp == AVM_FRITZ_PNP)
- p = (u_char *) ptr;
- t += sprintf(t, "hdlc_fill_fifo %c cnt %d",
- bcs->channel ? 'B' : 'A', count);
- QuickHex(t, p, count);
- debugl1(cs, "%s", bcs->blog);
- }
-}
-
-static void
-HDLC_irq(struct BCState *bcs, u_int stat) {
- int len;
- struct sk_buff *skb;
-
- if (bcs->cs->debug & L1_DEB_HSCX)
- debugl1(bcs->cs, "ch%d stat %#x", bcs->channel, stat);
- if (stat & HDLC_INT_RPR) {
- if (stat & HDLC_STAT_RDO) {
- if (bcs->cs->debug & L1_DEB_HSCX)
- debugl1(bcs->cs, "RDO");
- else
- debugl1(bcs->cs, "ch%d stat %#x", bcs->channel, stat);
- bcs->hw.hdlc.ctrl.sr.xml = 0;
- bcs->hw.hdlc.ctrl.sr.cmd |= HDLC_CMD_RRS;
- write_ctrl(bcs, 1);
- bcs->hw.hdlc.ctrl.sr.cmd &= ~HDLC_CMD_RRS;
- write_ctrl(bcs, 1);
- bcs->hw.hdlc.rcvidx = 0;
- } else {
- if (!(len = (stat & HDLC_STAT_RML_MASK) >> 8))
- len = 32;
- hdlc_empty_fifo(bcs, len);
- if ((stat & HDLC_STAT_RME) || (bcs->mode == L1_MODE_TRANS)) {
- if (((stat & HDLC_STAT_CRCVFRRAB) == HDLC_STAT_CRCVFR) ||
- (bcs->mode == L1_MODE_TRANS)) {
- if (!(skb = dev_alloc_skb(bcs->hw.hdlc.rcvidx)))
- printk(KERN_WARNING "HDLC: receive out of memory\n");
- else {
- skb_put_data(skb,
- bcs->hw.hdlc.rcvbuf,
- bcs->hw.hdlc.rcvidx);
- skb_queue_tail(&bcs->rqueue, skb);
- }
- bcs->hw.hdlc.rcvidx = 0;
- schedule_event(bcs, B_RCVBUFREADY);
- } else {
- if (bcs->cs->debug & L1_DEB_HSCX)
- debugl1(bcs->cs, "invalid frame");
- else
- debugl1(bcs->cs, "ch%d invalid frame %#x", bcs->channel, stat);
- bcs->hw.hdlc.rcvidx = 0;
- }
- }
- }
- }
- if (stat & HDLC_INT_XDU) {
-		/* Here we lost a TX interrupt, so
- * restart transmitting the whole frame.
- */
- if (bcs->tx_skb) {
- skb_push(bcs->tx_skb, bcs->hw.hdlc.count);
- bcs->tx_cnt += bcs->hw.hdlc.count;
- bcs->hw.hdlc.count = 0;
- if (bcs->cs->debug & L1_DEB_WARN)
- debugl1(bcs->cs, "ch%d XDU", bcs->channel);
- } else if (bcs->cs->debug & L1_DEB_WARN)
- debugl1(bcs->cs, "ch%d XDU without skb", bcs->channel);
- bcs->hw.hdlc.ctrl.sr.xml = 0;
- bcs->hw.hdlc.ctrl.sr.cmd |= HDLC_CMD_XRS;
- write_ctrl(bcs, 1);
- bcs->hw.hdlc.ctrl.sr.cmd &= ~HDLC_CMD_XRS;
- write_ctrl(bcs, 1);
- hdlc_fill_fifo(bcs);
- } else if (stat & HDLC_INT_XPR) {
- if (bcs->tx_skb) {
- if (bcs->tx_skb->len) {
- hdlc_fill_fifo(bcs);
- return;
- } else {
- if (test_bit(FLG_LLI_L1WAKEUP, &bcs->st->lli.flag) &&
- (PACKET_NOACK != bcs->tx_skb->pkt_type)) {
- u_long flags;
- spin_lock_irqsave(&bcs->aclock, flags);
- bcs->ackcnt += bcs->hw.hdlc.count;
- spin_unlock_irqrestore(&bcs->aclock, flags);
- schedule_event(bcs, B_ACKPENDING);
- }
- dev_kfree_skb_irq(bcs->tx_skb);
- bcs->hw.hdlc.count = 0;
- bcs->tx_skb = NULL;
- }
- }
- if ((bcs->tx_skb = skb_dequeue(&bcs->squeue))) {
- bcs->hw.hdlc.count = 0;
- test_and_set_bit(BC_FLG_BUSY, &bcs->Flag);
- hdlc_fill_fifo(bcs);
- } else {
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- schedule_event(bcs, B_XMTBUFREADY);
- }
- }
-}
-
-static inline void
-HDLC_irq_main(struct IsdnCardState *cs)
-{
- u_int stat;
- struct BCState *bcs;
-
- if (cs->subtyp == AVM_FRITZ_PCI) {
- stat = ReadHDLCPCI(cs, 0, HDLC_STATUS);
- } else {
- stat = ReadHDLCPnP(cs, 0, HDLC_STATUS);
- if (stat & HDLC_INT_RPR)
- stat |= (ReadHDLCPnP(cs, 0, HDLC_STATUS + 1)) << 8;
- }
- if (stat & HDLC_INT_MASK) {
- if (!(bcs = Sel_BCS(cs, 0))) {
- if (cs->debug)
- debugl1(cs, "hdlc spurious channel 0 IRQ");
- } else
- HDLC_irq(bcs, stat);
- }
- if (cs->subtyp == AVM_FRITZ_PCI) {
- stat = ReadHDLCPCI(cs, 1, HDLC_STATUS);
- } else {
- stat = ReadHDLCPnP(cs, 1, HDLC_STATUS);
- if (stat & HDLC_INT_RPR)
- stat |= (ReadHDLCPnP(cs, 1, HDLC_STATUS + 1)) << 8;
- }
- if (stat & HDLC_INT_MASK) {
- if (!(bcs = Sel_BCS(cs, 1))) {
- if (cs->debug)
- debugl1(cs, "hdlc spurious channel 1 IRQ");
- } else
- HDLC_irq(bcs, stat);
- }
-}
-
-static void
-hdlc_l2l1(struct PStack *st, int pr, void *arg)
-{
- struct BCState *bcs = st->l1.bcs;
- struct sk_buff *skb = arg;
- u_long flags;
-
- switch (pr) {
- case (PH_DATA | REQUEST):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- if (bcs->tx_skb) {
- skb_queue_tail(&bcs->squeue, skb);
- } else {
- bcs->tx_skb = skb;
- test_and_set_bit(BC_FLG_BUSY, &bcs->Flag);
- bcs->hw.hdlc.count = 0;
- bcs->cs->BC_Send_Data(bcs);
- }
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- break;
- case (PH_PULL | INDICATION):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- if (bcs->tx_skb) {
- printk(KERN_WARNING "hdlc_l2l1: this shouldn't happen\n");
- } else {
- test_and_set_bit(BC_FLG_BUSY, &bcs->Flag);
- bcs->tx_skb = skb;
- bcs->hw.hdlc.count = 0;
- bcs->cs->BC_Send_Data(bcs);
- }
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- break;
- case (PH_PULL | REQUEST):
- if (!bcs->tx_skb) {
- test_and_clear_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- st->l1.l1l2(st, PH_PULL | CONFIRM, NULL);
- } else
- test_and_set_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- break;
- case (PH_ACTIVATE | REQUEST):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- test_and_set_bit(BC_FLG_ACTIV, &bcs->Flag);
- modehdlc(bcs, st->l1.mode, st->l1.bc);
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- l1_msg_b(st, pr, arg);
- break;
- case (PH_DEACTIVATE | REQUEST):
- l1_msg_b(st, pr, arg);
- break;
- case (PH_DEACTIVATE | CONFIRM):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- test_and_clear_bit(BC_FLG_ACTIV, &bcs->Flag);
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- modehdlc(bcs, 0, st->l1.bc);
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- st->l1.l1l2(st, PH_DEACTIVATE | CONFIRM, NULL);
- break;
- }
-}
-
-static void
-close_hdlcstate(struct BCState *bcs)
-{
- modehdlc(bcs, 0, 0);
- if (test_and_clear_bit(BC_FLG_INIT, &bcs->Flag)) {
- kfree(bcs->hw.hdlc.rcvbuf);
- bcs->hw.hdlc.rcvbuf = NULL;
- kfree(bcs->blog);
- bcs->blog = NULL;
- skb_queue_purge(&bcs->rqueue);
- skb_queue_purge(&bcs->squeue);
- if (bcs->tx_skb) {
- dev_kfree_skb_any(bcs->tx_skb);
- bcs->tx_skb = NULL;
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- }
- }
-}
-
-static int
-open_hdlcstate(struct IsdnCardState *cs, struct BCState *bcs)
-{
- if (!test_and_set_bit(BC_FLG_INIT, &bcs->Flag)) {
- if (!(bcs->hw.hdlc.rcvbuf = kmalloc(HSCX_BUFMAX, GFP_ATOMIC))) {
- printk(KERN_WARNING
- "HiSax: No memory for hdlc.rcvbuf\n");
- return (1);
- }
- if (!(bcs->blog = kmalloc(MAX_BLOG_SPACE, GFP_ATOMIC))) {
- printk(KERN_WARNING
- "HiSax: No memory for bcs->blog\n");
- test_and_clear_bit(BC_FLG_INIT, &bcs->Flag);
- kfree(bcs->hw.hdlc.rcvbuf);
- bcs->hw.hdlc.rcvbuf = NULL;
- return (2);
- }
- skb_queue_head_init(&bcs->rqueue);
- skb_queue_head_init(&bcs->squeue);
- }
- bcs->tx_skb = NULL;
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- bcs->event = 0;
- bcs->hw.hdlc.rcvidx = 0;
- bcs->tx_cnt = 0;
- return (0);
-}
-
-static int
-setstack_hdlc(struct PStack *st, struct BCState *bcs)
-{
- bcs->channel = st->l1.bc;
- if (open_hdlcstate(st->l1.hardware, bcs))
- return (-1);
- st->l1.bcs = bcs;
- st->l2.l2l1 = hdlc_l2l1;
- setstack_manager(st);
- bcs->st = st;
- setstack_l1_B(st);
- return (0);
-}
-
-#if 0
-void __init
-clear_pending_hdlc_ints(struct IsdnCardState *cs)
-{
- u_int val;
-
- if (cs->subtyp == AVM_FRITZ_PCI) {
- val = ReadHDLCPCI(cs, 0, HDLC_STATUS);
- debugl1(cs, "HDLC 1 STA %x", val);
- val = ReadHDLCPCI(cs, 1, HDLC_STATUS);
- debugl1(cs, "HDLC 2 STA %x", val);
- } else {
- val = ReadHDLCPnP(cs, 0, HDLC_STATUS);
- debugl1(cs, "HDLC 1 STA %x", val);
- val = ReadHDLCPnP(cs, 0, HDLC_STATUS + 1);
- debugl1(cs, "HDLC 1 RML %x", val);
- val = ReadHDLCPnP(cs, 0, HDLC_STATUS + 2);
- debugl1(cs, "HDLC 1 MODE %x", val);
- val = ReadHDLCPnP(cs, 0, HDLC_STATUS + 3);
- debugl1(cs, "HDLC 1 VIN %x", val);
- val = ReadHDLCPnP(cs, 1, HDLC_STATUS);
- debugl1(cs, "HDLC 2 STA %x", val);
- val = ReadHDLCPnP(cs, 1, HDLC_STATUS + 1);
- debugl1(cs, "HDLC 2 RML %x", val);
- val = ReadHDLCPnP(cs, 1, HDLC_STATUS + 2);
- debugl1(cs, "HDLC 2 MODE %x", val);
- val = ReadHDLCPnP(cs, 1, HDLC_STATUS + 3);
- debugl1(cs, "HDLC 2 VIN %x", val);
- }
-}
-#endif /* 0 */
-
-static void
-inithdlc(struct IsdnCardState *cs)
-{
- cs->bcs[0].BC_SetStack = setstack_hdlc;
- cs->bcs[1].BC_SetStack = setstack_hdlc;
- cs->bcs[0].BC_Close = close_hdlcstate;
- cs->bcs[1].BC_Close = close_hdlcstate;
- modehdlc(cs->bcs, -1, 0);
- modehdlc(cs->bcs + 1, -1, 1);
-}
-
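-/*
- * Shared IRQ handler: STATUS0 at cfg_reg + 2 reports pending sources active
- * low. If none of them is ours, return IRQ_NONE; otherwise service ISAC and
- * HDLC and toggle the ISAC mask before returning.
- */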
-static irqreturn_t
-avm_pcipnp_interrupt(int intno, void *dev_id)
-{
- struct IsdnCardState *cs = dev_id;
- u_long flags;
- u_char val;
- u_char sval;
-
- spin_lock_irqsave(&cs->lock, flags);
- sval = inb(cs->hw.avm.cfg_reg + 2);
- if ((sval & AVM_STATUS0_IRQ_MASK) == AVM_STATUS0_IRQ_MASK) {
-		/* possibly a shared IRQ request */
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_NONE;
- }
- if (!(sval & AVM_STATUS0_IRQ_ISAC)) {
- val = ReadISAC(cs, ISAC_ISTA);
- isac_interrupt(cs, val);
- }
- if (!(sval & AVM_STATUS0_IRQ_HDLC)) {
- HDLC_irq_main(cs);
- }
- WriteISAC(cs, ISAC_MASK, 0xFF);
- WriteISAC(cs, ISAC_MASK, 0x0);
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
-}
-
-static void
-reset_avmpcipnp(struct IsdnCardState *cs)
-{
- printk(KERN_INFO "AVM PCI/PnP: reset\n");
- outb(AVM_STATUS0_RESET | AVM_STATUS0_DIS_TIMER, cs->hw.avm.cfg_reg + 2);
- mdelay(10);
- outb(AVM_STATUS0_DIS_TIMER | AVM_STATUS0_RES_TIMER | AVM_STATUS0_ENA_IRQ, cs->hw.avm.cfg_reg + 2);
- outb(AVM_STATUS1_ENA_IOM | cs->irq, cs->hw.avm.cfg_reg + 3);
- mdelay(10);
- printk(KERN_INFO "AVM PCI/PnP: S1 %x\n", inb(cs->hw.avm.cfg_reg + 3));
-}
-
-static int
-AVM_card_msg(struct IsdnCardState *cs, int mt, void *arg)
-{
- u_long flags;
-
- switch (mt) {
- case CARD_RESET:
- spin_lock_irqsave(&cs->lock, flags);
- reset_avmpcipnp(cs);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_RELEASE:
- outb(0, cs->hw.avm.cfg_reg + 2);
- release_region(cs->hw.avm.cfg_reg, 32);
- return (0);
- case CARD_INIT:
- spin_lock_irqsave(&cs->lock, flags);
- reset_avmpcipnp(cs);
- clear_pending_isac_ints(cs);
- initisac(cs);
- inithdlc(cs);
- outb(AVM_STATUS0_DIS_TIMER | AVM_STATUS0_RES_TIMER,
- cs->hw.avm.cfg_reg + 2);
- WriteISAC(cs, ISAC_MASK, 0);
- outb(AVM_STATUS0_DIS_TIMER | AVM_STATUS0_RES_TIMER |
- AVM_STATUS0_ENA_IRQ, cs->hw.avm.cfg_reg + 2);
- /* RESET Receiver and Transmitter */
- WriteISAC(cs, ISAC_CMDR, 0x41);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_TEST:
- return (0);
- }
- return (0);
-}
-
-static int avm_setup_rest(struct IsdnCardState *cs)
-{
- u_int val, ver;
-
- cs->hw.avm.isac = cs->hw.avm.cfg_reg + 0x10;
- if (!request_region(cs->hw.avm.cfg_reg, 32,
- (cs->subtyp == AVM_FRITZ_PCI) ? "avm PCI" : "avm PnP")) {
- printk(KERN_WARNING
- "HiSax: Fritz!PCI/PNP config port %x-%x already in use\n",
- cs->hw.avm.cfg_reg,
- cs->hw.avm.cfg_reg + 31);
- return (0);
- }
- switch (cs->subtyp) {
- case AVM_FRITZ_PCI:
- val = inl(cs->hw.avm.cfg_reg);
- printk(KERN_INFO "AVM PCI: stat %#x\n", val);
- printk(KERN_INFO "AVM PCI: Class %X Rev %d\n",
- val & 0xff, (val >> 8) & 0xff);
- cs->BC_Read_Reg = &ReadHDLC_s;
- cs->BC_Write_Reg = &WriteHDLC_s;
- break;
- case AVM_FRITZ_PNP:
- val = inb(cs->hw.avm.cfg_reg);
- ver = inb(cs->hw.avm.cfg_reg + 1);
- printk(KERN_INFO "AVM PnP: Class %X Rev %d\n", val, ver);
- cs->BC_Read_Reg = &ReadHDLCPnP;
- cs->BC_Write_Reg = &WriteHDLCPnP;
- break;
- default:
- printk(KERN_WARNING "AVM unknown subtype %d\n", cs->subtyp);
- return (0);
- }
- printk(KERN_INFO "HiSax: %s config irq:%d base:0x%X\n",
- (cs->subtyp == AVM_FRITZ_PCI) ? "AVM Fritz!PCI" : "AVM Fritz!PnP",
- cs->irq, cs->hw.avm.cfg_reg);
-
- setup_isac(cs);
- cs->readisac = &ReadISAC;
- cs->writeisac = &WriteISAC;
- cs->readisacfifo = &ReadISACfifo;
- cs->writeisacfifo = &WriteISACfifo;
- cs->BC_Send_Data = &hdlc_fill_fifo;
- cs->cardmsg = &AVM_card_msg;
- cs->irq_func = &avm_pcipnp_interrupt;
- cs->writeisac(cs, ISAC_MASK, 0xFF);
- ISACVersion(cs, (cs->subtyp == AVM_FRITZ_PCI) ? "AVM PCI:" : "AVM PnP:");
- return (1);
-}
-
-#ifndef __ISAPNP__
-
-static int avm_pnp_setup(struct IsdnCardState *cs)
-{
- return (1); /* no-op: success */
-}
-
-#else
-
-static struct pnp_card *pnp_avm_c = NULL;
-
-static int avm_pnp_setup(struct IsdnCardState *cs)
-{
- struct pnp_dev *pnp_avm_d = NULL;
-
- if (!isapnp_present())
- return (1); /* no-op: success */
-
- if ((pnp_avm_c = pnp_find_card(
- ISAPNP_VENDOR('A', 'V', 'M'),
- ISAPNP_FUNCTION(0x0900), pnp_avm_c))) {
- if ((pnp_avm_d = pnp_find_dev(pnp_avm_c,
- ISAPNP_VENDOR('A', 'V', 'M'),
- ISAPNP_FUNCTION(0x0900), pnp_avm_d))) {
- int err;
-
- pnp_disable_dev(pnp_avm_d);
- err = pnp_activate_dev(pnp_avm_d);
- if (err < 0) {
- printk(KERN_WARNING "%s: pnp_activate_dev ret(%d)\n",
- __func__, err);
- return (0);
- }
- cs->hw.avm.cfg_reg =
- pnp_port_start(pnp_avm_d, 0);
- cs->irq = pnp_irq(pnp_avm_d, 0);
- if (cs->irq == -1) {
- printk(KERN_ERR "FritzPnP:No IRQ\n");
- return (0);
- }
- if (!cs->hw.avm.cfg_reg) {
- printk(KERN_ERR "FritzPnP:No IO address\n");
- return (0);
- }
- cs->subtyp = AVM_FRITZ_PNP;
-
- return (2); /* goto 'ready' label */
- }
- }
-
- return (1);
-}
-
-#endif /* __ISAPNP__ */
-
-#ifndef CONFIG_PCI
-
-static int avm_pci_setup(struct IsdnCardState *cs)
-{
- return (1); /* no-op: success */
-}
-
-#else
-
-static struct pci_dev *dev_avm = NULL;
-
-static int avm_pci_setup(struct IsdnCardState *cs)
-{
- if ((dev_avm = hisax_find_pci_device(PCI_VENDOR_ID_AVM,
- PCI_DEVICE_ID_AVM_A1, dev_avm))) {
-
- if (pci_enable_device(dev_avm))
- return (0);
-
- cs->irq = dev_avm->irq;
- if (!cs->irq) {
- printk(KERN_ERR "FritzPCI: No IRQ for PCI card found\n");
- return (0);
- }
-
- cs->hw.avm.cfg_reg = pci_resource_start(dev_avm, 1);
- if (!cs->hw.avm.cfg_reg) {
- printk(KERN_ERR "FritzPCI: No IO-Adr for PCI card found\n");
- return (0);
- }
-
- cs->subtyp = AVM_FRITZ_PCI;
- } else {
- printk(KERN_WARNING "FritzPCI: No PCI card found\n");
- return (0);
- }
-
- cs->irq_flags |= IRQF_SHARED;
-
- return (1);
-}
-
-#endif /* CONFIG_PCI */
-
-int setup_avm_pcipnp(struct IsdnCard *card)
-{
- struct IsdnCardState *cs = card->cs;
- char tmp[64];
- int rc;
-
- strcpy(tmp, avm_pci_rev);
- printk(KERN_INFO "HiSax: AVM PCI driver Rev. %s\n", HiSax_getrev(tmp));
-
- if (cs->typ != ISDN_CTYPE_FRITZPCI)
- return (0);
-
- if (card->para[1]) {
- /* old manual method */
- cs->hw.avm.cfg_reg = card->para[1];
- cs->irq = card->para[0];
- cs->subtyp = AVM_FRITZ_PNP;
- goto ready;
- }
-
- rc = avm_pnp_setup(cs);
- if (rc < 1)
- return (0);
- if (rc == 2)
- goto ready;
-
- rc = avm_pci_setup(cs);
- if (rc < 1)
- return (0);
-
-ready:
- return avm_setup_rest(cs);
-}
diff --git a/drivers/isdn/hisax/avma1_cs.c b/drivers/isdn/hisax/avma1_cs.c
deleted file mode 100644
index baad94ec1f4a..000000000000
--- a/drivers/isdn/hisax/avma1_cs.c
+++ /dev/null
@@ -1,162 +0,0 @@
-/*
- * PCMCIA client driver for AVM A1 / Fritz!PCMCIA
- *
- * Author Carsten Paeth
- * Copyright 1998-2001 by Carsten Paeth <calle@calle.in-berlin.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#include <linux/module.h>
-
-
-#include <linux/kernel.h>
-#include <linux/init.h>
-#include <linux/ptrace.h>
-#include <linux/slab.h>
-#include <linux/string.h>
-#include <asm/io.h>
-
-#include <pcmcia/cistpl.h>
-#include <pcmcia/ds.h>
-#include "hisax_cfg.h"
-
-MODULE_DESCRIPTION("ISDN4Linux: PCMCIA client driver for AVM A1/Fritz!PCMCIA cards");
-MODULE_AUTHOR("Carsten Paeth");
-MODULE_LICENSE("GPL");
-
-
-/*====================================================================*/
-
-/* Parameters that can be set with 'insmod' */
-
-static int isdnprot = 2;
-
-module_param(isdnprot, int, 0);
-
-/*====================================================================*/
-
-static int avma1cs_config(struct pcmcia_device *link);
-static void avma1cs_release(struct pcmcia_device *link);
-static void avma1cs_detach(struct pcmcia_device *p_dev);
-
-static int avma1cs_probe(struct pcmcia_device *p_dev)
-{
- dev_dbg(&p_dev->dev, "avma1cs_attach()\n");
-
- /* General socket configuration */
- p_dev->config_flags |= CONF_ENABLE_IRQ | CONF_AUTO_SET_IO;
- p_dev->config_index = 1;
- p_dev->config_regs = PRESENT_OPTION;
-
- return avma1cs_config(p_dev);
-} /* avma1cs_attach */
-
-static void avma1cs_detach(struct pcmcia_device *link)
-{
- dev_dbg(&link->dev, "avma1cs_detach(0x%p)\n", link);
- avma1cs_release(link);
- kfree(link->priv);
-} /* avma1cs_detach */
-
-static int avma1cs_configcheck(struct pcmcia_device *p_dev, void *priv_data)
-{
- p_dev->resource[0]->end = 16;
- p_dev->resource[0]->flags &= ~IO_DATA_PATH_WIDTH;
- p_dev->resource[0]->flags |= IO_DATA_PATH_WIDTH_8;
- p_dev->io_lines = 5;
-
- return pcmcia_request_io(p_dev);
-}
-
-
-static int avma1cs_config(struct pcmcia_device *link)
-{
- int i = -1;
- char devname[128];
- IsdnCard_t icard;
- int busy = 0;
-
- dev_dbg(&link->dev, "avma1cs_config(0x%p)\n", link);
-
- devname[0] = 0;
- if (link->prod_id[1])
- strlcpy(devname, link->prod_id[1], sizeof(devname));
-
- if (pcmcia_loop_config(link, avma1cs_configcheck, NULL))
- return -ENODEV;
-
- do {
- /*
- * allocate an interrupt line
- */
- if (!link->irq) {
- /* undo */
- pcmcia_disable_device(link);
- break;
- }
-
- /*
- * configure the PCMCIA socket
- */
- i = pcmcia_enable_device(link);
- if (i != 0) {
- pcmcia_disable_device(link);
- break;
- }
-
- } while (0);
-
- /* If any step failed, release any partially configured state */
- if (i != 0) {
- avma1cs_release(link);
- return -ENODEV;
- }
-
- icard.para[0] = link->irq;
- icard.para[1] = link->resource[0]->start;
- icard.protocol = isdnprot;
- icard.typ = ISDN_CTYPE_A1_PCMCIA;
-
- i = hisax_init_pcmcia(link, &busy, &icard);
- if (i < 0) {
- printk(KERN_ERR "avma1_cs: failed to initialize AVM A1 "
- "PCMCIA %d at i/o %#x\n", i,
- (unsigned int) link->resource[0]->start);
- avma1cs_release(link);
- return -ENODEV;
- }
- link->priv = (void *) (unsigned long) i;
-
- return 0;
-} /* avma1cs_config */
-
-static void avma1cs_release(struct pcmcia_device *link)
-{
- unsigned long minor = (unsigned long) link->priv;
-
- dev_dbg(&link->dev, "avma1cs_release(0x%p)\n", link);
-
- /* now unregister function with hisax */
- HiSax_closecard(minor);
-
- pcmcia_disable_device(link);
-} /* avma1cs_release */
-
-static const struct pcmcia_device_id avma1cs_ids[] = {
- PCMCIA_DEVICE_PROD_ID12("AVM", "ISDN A", 0x95d42008, 0xadc9d4bb),
- PCMCIA_DEVICE_PROD_ID12("ISDN", "CARD", 0x8d9761c8, 0x01c5aa7b),
- PCMCIA_DEVICE_NULL
-};
-MODULE_DEVICE_TABLE(pcmcia, avma1cs_ids);
-
-static struct pcmcia_driver avma1cs_driver = {
- .owner = THIS_MODULE,
- .name = "avma1_cs",
- .probe = avma1cs_probe,
- .remove = avma1cs_detach,
- .id_table = avma1cs_ids,
-};
-module_pcmcia_driver(avma1cs_driver);
diff --git a/drivers/isdn/hisax/bkm_a4t.c b/drivers/isdn/hisax/bkm_a4t.c
deleted file mode 100644
index c360164bde1b..000000000000
--- a/drivers/isdn/hisax/bkm_a4t.c
+++ /dev/null
@@ -1,358 +0,0 @@
-/* $Id: bkm_a4t.c,v 1.22.2.4 2004/01/14 16:04:48 keil Exp $
- *
- * low level stuff for T-Berkom A4T
- *
- * Author Roland Klabunde
- * Copyright by Roland Klabunde <R.Klabunde@Berkom.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-
-#include <linux/init.h>
-#include "hisax.h"
-#include "isac.h"
-#include "hscx.h"
-#include "jade.h"
-#include "isdnl1.h"
-#include <linux/pci.h>
-#include "bkm_ax.h"
-
-static const char *bkm_a4t_revision = "$Revision: 1.22.2.4 $";
-
-
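-/*
- * All chip accesses go through the bridge's "post office" register: the
- * register offset is latched first (GCS_2), then the actual read or write is
- * issued on the chip select passed in 'ale'.
- */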
-static inline u_char
-readreg(unsigned int ale, unsigned long adr, u_char off)
-{
- register u_int ret;
- unsigned int *po = (unsigned int *) adr; /* Postoffice */
-
- *po = (GCS_2 | PO_WRITE | off);
- __WAITI20__(po);
- *po = (ale | PO_READ);
- __WAITI20__(po);
- ret = *po;
- return ((unsigned char) ret);
-}
-
-
-static inline void
-readfifo(unsigned int ale, unsigned long adr, u_char off, u_char *data, int size)
-{
- int i;
- for (i = 0; i < size; i++)
- *data++ = readreg(ale, adr, off);
-}
-
-
-static inline void
-writereg(unsigned int ale, unsigned long adr, u_char off, u_char data)
-{
- unsigned int *po = (unsigned int *) adr; /* Postoffice */
- *po = (GCS_2 | PO_WRITE | off);
- __WAITI20__(po);
- *po = (ale | PO_WRITE | data);
- __WAITI20__(po);
-}
-
-
-static inline void
-writefifo(unsigned int ale, unsigned long adr, u_char off, u_char *data, int size)
-{
- int i;
-
- for (i = 0; i < size; i++)
- writereg(ale, adr, off, *data++);
-}
-
-
-/* Interface functions */
-
-static u_char
-ReadISAC(struct IsdnCardState *cs, u_char offset)
-{
- return (readreg(cs->hw.ax.isac_ale, cs->hw.ax.isac_adr, offset));
-}
-
-static void
-WriteISAC(struct IsdnCardState *cs, u_char offset, u_char value)
-{
- writereg(cs->hw.ax.isac_ale, cs->hw.ax.isac_adr, offset, value);
-}
-
-static void
-ReadISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- readfifo(cs->hw.ax.isac_ale, cs->hw.ax.isac_adr, 0, data, size);
-}
-
-static void
-WriteISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- writefifo(cs->hw.ax.isac_ale, cs->hw.ax.isac_adr, 0, data, size);
-}
-
-static u_char
-ReadJADE(struct IsdnCardState *cs, int jade, u_char offset)
-{
- return (readreg(cs->hw.ax.jade_ale, cs->hw.ax.jade_adr, offset + (jade == -1 ? 0 : (jade ? 0xC0 : 0x80))));
-}
-
-static void
-WriteJADE(struct IsdnCardState *cs, int jade, u_char offset, u_char value)
-{
- writereg(cs->hw.ax.jade_ale, cs->hw.ax.jade_adr, offset + (jade == -1 ? 0 : (jade ? 0xC0 : 0x80)), value);
-}
-
-/*
- * fast interrupt JADE stuff goes here
- */
-
-#define READJADE(cs, nr, reg) readreg(cs->hw.ax.jade_ale, \
- cs->hw.ax.jade_adr, reg + (nr == -1 ? 0 : (nr ? 0xC0 : 0x80)))
-#define WRITEJADE(cs, nr, reg, data) writereg(cs->hw.ax.jade_ale, \
- cs->hw.ax.jade_adr, reg + (nr == -1 ? 0 : (nr ? 0xC0 : 0x80)), data)
-
-#define READJADEFIFO(cs, nr, ptr, cnt) readfifo(cs->hw.ax.jade_ale, \
- cs->hw.ax.jade_adr, (nr == -1 ? 0 : (nr ? 0xC0 : 0x80)), ptr, cnt)
-#define WRITEJADEFIFO(cs, nr, ptr, cnt) writefifo(cs->hw.ax.jade_ale, \
- cs->hw.ax.jade_adr, (nr == -1 ? 0 : (nr ? 0xC0 : 0x80)), ptr, cnt)
-
-#include "jade_irq.c"
-
-static irqreturn_t
-bkm_interrupt(int intno, void *dev_id)
-{
- struct IsdnCardState *cs = dev_id;
- u_char val = 0;
- u_long flags;
- I20_REGISTER_FILE *pI20_Regs;
-
- spin_lock_irqsave(&cs->lock, flags);
- pI20_Regs = (I20_REGISTER_FILE *) (cs->hw.ax.base);
-
- /* ISDN interrupt pending? */
- if (pI20_Regs->i20IntStatus & intISDN) {
- /* Reset the ISDN interrupt */
- pI20_Regs->i20IntStatus = intISDN;
- /* Disable ISDN interrupt */
- pI20_Regs->i20IntCtrl &= ~intISDN;
- /* Channel A first */
- val = readreg(cs->hw.ax.jade_ale, cs->hw.ax.jade_adr, jade_HDLC_ISR + 0x80);
- if (val) {
- jade_int_main(cs, val, 0);
- }
- /* Channel B */
- val = readreg(cs->hw.ax.jade_ale, cs->hw.ax.jade_adr, jade_HDLC_ISR + 0xC0);
- if (val) {
- jade_int_main(cs, val, 1);
- }
- /* D-Channel */
- val = readreg(cs->hw.ax.isac_ale, cs->hw.ax.isac_adr, ISAC_ISTA);
- if (val) {
- isac_interrupt(cs, val);
- }
- /* Reenable ISDN interrupt */
- pI20_Regs->i20IntCtrl |= intISDN;
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
- } else {
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_NONE;
- }
-}
-
-static void
-release_io_bkm(struct IsdnCardState *cs)
-{
- if (cs->hw.ax.base) {
- iounmap((void *) cs->hw.ax.base);
- cs->hw.ax.base = 0;
- }
-}
-
-static void
-enable_bkm_int(struct IsdnCardState *cs, unsigned bEnable)
-{
- if (cs->typ == ISDN_CTYPE_BKM_A4T) {
- I20_REGISTER_FILE *pI20_Regs = (I20_REGISTER_FILE *) (cs->hw.ax.base);
- if (bEnable)
- pI20_Regs->i20IntCtrl |= (intISDN | intPCI);
- else
- /* CAUTION: This disables the video capture driver too */
- pI20_Regs->i20IntCtrl &= ~(intISDN | intPCI);
- }
-}
-
-static void
-reset_bkm(struct IsdnCardState *cs)
-{
- if (cs->typ == ISDN_CTYPE_BKM_A4T) {
- I20_REGISTER_FILE *pI20_Regs = (I20_REGISTER_FILE *) (cs->hw.ax.base);
- /* Issue the I20 soft reset */
- pI20_Regs->i20SysControl = 0xFF; /* all in */
- mdelay(10);
- /* Remove the soft reset */
- pI20_Regs->i20SysControl = sysRESET | 0xFF;
- mdelay(10);
- /* Set our configuration */
- pI20_Regs->i20SysControl = sysRESET | sysCFG;
- /* Issue ISDN reset */
- pI20_Regs->i20GuestControl = guestWAIT_CFG |
- g_A4T_JADE_RES |
- g_A4T_ISAR_RES |
- g_A4T_ISAC_RES |
- g_A4T_JADE_BOOTR |
- g_A4T_ISAR_BOOTR;
- mdelay(10);
-
- /* Remove RESET state from ISDN */
- pI20_Regs->i20GuestControl &= ~(g_A4T_ISAC_RES |
- g_A4T_JADE_RES |
- g_A4T_ISAR_RES);
- mdelay(10);
- }
-}
-
-static int
-BKM_card_msg(struct IsdnCardState *cs, int mt, void *arg)
-{
- u_long flags;
-
- switch (mt) {
- case CARD_RESET:
- /* Disable ints */
- spin_lock_irqsave(&cs->lock, flags);
- enable_bkm_int(cs, 0);
- reset_bkm(cs);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_RELEASE:
- /* Sanity */
- spin_lock_irqsave(&cs->lock, flags);
- enable_bkm_int(cs, 0);
- reset_bkm(cs);
- spin_unlock_irqrestore(&cs->lock, flags);
- release_io_bkm(cs);
- return (0);
- case CARD_INIT:
- spin_lock_irqsave(&cs->lock, flags);
- clear_pending_isac_ints(cs);
- clear_pending_jade_ints(cs);
- initisac(cs);
- initjade(cs);
- /* Enable ints */
- enable_bkm_int(cs, 1);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_TEST:
- return (0);
- }
- return (0);
-}
-
-static int a4t_pci_probe(struct pci_dev *dev_a4t, struct IsdnCardState *cs,
- u_int *found, u_int *pci_memaddr)
-{
- u16 sub_sys;
- u16 sub_vendor;
-
- sub_vendor = dev_a4t->subsystem_vendor;
- sub_sys = dev_a4t->subsystem_device;
- if ((sub_sys == PCI_DEVICE_ID_BERKOM_A4T) && (sub_vendor == PCI_VENDOR_ID_BERKOM)) {
- if (pci_enable_device(dev_a4t))
- return (0); /* end loop & function */
- *found = 1;
- *pci_memaddr = pci_resource_start(dev_a4t, 0);
- cs->irq = dev_a4t->irq;
- return (1); /* end loop */
- }
-
- return (-1); /* continue looping */
-}
-
-static int a4t_cs_init(struct IsdnCard *card, struct IsdnCardState *cs,
- u_int pci_memaddr)
-{
- I20_REGISTER_FILE *pI20_Regs;
-
- if (!cs->irq) { /* IRQ range check ?? */
- printk(KERN_WARNING "HiSax: Telekom A4T: No IRQ\n");
- return (0);
- }
- cs->hw.ax.base = (long) ioremap(pci_memaddr, 4096);
-	/* Check for a suspicious address */
- pI20_Regs = (I20_REGISTER_FILE *) (cs->hw.ax.base);
- if ((pI20_Regs->i20IntStatus & 0x8EFFFFFF) != 0) {
- printk(KERN_WARNING "HiSax: Telekom A4T address "
- "%lx-%lx suspicious\n",
- cs->hw.ax.base, cs->hw.ax.base + 4096);
- iounmap((void *) cs->hw.ax.base);
- cs->hw.ax.base = 0;
- return (0);
- }
- cs->hw.ax.isac_adr = cs->hw.ax.base + PO_OFFSET;
- cs->hw.ax.jade_adr = cs->hw.ax.base + PO_OFFSET;
- cs->hw.ax.isac_ale = GCS_1;
- cs->hw.ax.jade_ale = GCS_3;
-
- printk(KERN_INFO "HiSax: Telekom A4T: Card configured at "
- "0x%lX IRQ %d\n",
- cs->hw.ax.base, cs->irq);
-
- setup_isac(cs);
- cs->readisac = &ReadISAC;
- cs->writeisac = &WriteISAC;
- cs->readisacfifo = &ReadISACfifo;
- cs->writeisacfifo = &WriteISACfifo;
- cs->BC_Read_Reg = &ReadJADE;
- cs->BC_Write_Reg = &WriteJADE;
- cs->BC_Send_Data = &jade_fill_fifo;
- cs->cardmsg = &BKM_card_msg;
- cs->irq_func = &bkm_interrupt;
- cs->irq_flags |= IRQF_SHARED;
- ISACVersion(cs, "Telekom A4T:");
- /* Jade version */
- JadeVersion(cs, "Telekom A4T:");
-
- return (1);
-}
-
-static struct pci_dev *dev_a4t = NULL;
-
-int setup_bkm_a4t(struct IsdnCard *card)
-{
- struct IsdnCardState *cs = card->cs;
- char tmp[64];
- u_int pci_memaddr = 0, found = 0;
- int ret;
-
- strcpy(tmp, bkm_a4t_revision);
- printk(KERN_INFO "HiSax: T-Berkom driver Rev. %s\n", HiSax_getrev(tmp));
- if (cs->typ == ISDN_CTYPE_BKM_A4T) {
- cs->subtyp = BKM_A4T;
- } else
- return (0);
-
- while ((dev_a4t = hisax_find_pci_device(PCI_VENDOR_ID_ZORAN,
- PCI_DEVICE_ID_ZORAN_36120, dev_a4t))) {
- ret = a4t_pci_probe(dev_a4t, cs, &found, &pci_memaddr);
- if (!ret)
- return (0);
- if (ret > 0)
- break;
- }
- if (!found) {
- printk(KERN_WARNING "HiSax: Telekom A4T: Card not found\n");
- return (0);
- }
- if (!pci_memaddr) {
- printk(KERN_WARNING "HiSax: Telekom A4T: "
- "No Memory base address\n");
- return (0);
- }
-
- return a4t_cs_init(card, cs, pci_memaddr);
-}
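
A reading aid for the probe loop removed above: a4t_pci_probe() distinguishes three outcomes. A negative return means "not this device, keep scanning", zero means "abort the whole setup" (for example when pci_enable_device() fails), and a positive return means "card found, stop scanning"; setup_bkm_a4t() checks exactly those cases. The following is a hedged, self-contained userspace sketch of that tri-state convention only; mock_dev, mock_probe() and the stand-in ID values are hypothetical and not part of the driver.

#include <stdio.h>

/* Mock "device": plain ints stand in for the real PCI subsystem fields. */
struct mock_dev { int sub_vendor; int sub_device; int broken; };

#define WANT_VENDOR 0x0871   /* hypothetical stand-in for PCI_VENDOR_ID_BERKOM */
#define WANT_DEVICE 0xffa4   /* hypothetical stand-in for PCI_DEVICE_ID_BERKOM_A4T */

/* Same convention as a4t_pci_probe():
 *   -1 -> not ours, continue the scan
 *    0 -> fatal problem, abort the whole setup
 *    1 -> match found, stop the scan
 */
static int mock_probe(const struct mock_dev *d)
{
	if (d->sub_vendor != WANT_VENDOR || d->sub_device != WANT_DEVICE)
		return -1;
	if (d->broken)          /* corresponds to pci_enable_device() failing */
		return 0;
	return 1;
}

int main(void)
{
	struct mock_dev devs[] = {
		{ 0x1234, 0x0001, 0 },
		{ WANT_VENDOR, WANT_DEVICE, 0 },
	};
	int i, ret, found = 0;

	for (i = 0; i < 2; i++) {
		ret = mock_probe(&devs[i]);
		if (!ret)          /* abort setup, like "return (0)" in setup_bkm_a4t() */
			return 1;
		if (ret > 0) {     /* card found */
			found = 1;
			break;
		}
		/* ret < 0: keep looping */
	}
	printf("found=%d at index %d\n", found, i);
	return 0;
}
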
diff --git a/drivers/isdn/hisax/bkm_a8.c b/drivers/isdn/hisax/bkm_a8.c
deleted file mode 100644
index dd663ea57ec6..000000000000
--- a/drivers/isdn/hisax/bkm_a8.c
+++ /dev/null
@@ -1,433 +0,0 @@
-/* $Id: bkm_a8.c,v 1.22.2.4 2004/01/15 14:02:34 keil Exp $
- *
- * low level stuff for Scitel Quadro (4*S0, passive)
- *
- * Author Roland Klabunde
- * Copyright by Roland Klabunde <R.Klabunde@Berkom.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-
-#include <linux/init.h>
-#include "hisax.h"
-#include "isac.h"
-#include "ipac.h"
-#include "hscx.h"
-#include "isdnl1.h"
-#include <linux/pci.h>
-#include "bkm_ax.h"
-
-#define ATTEMPT_PCI_REMAPPING /* Required for PLX rev 1 */
-
-static const char sct_quadro_revision[] = "$Revision: 1.22.2.4 $";
-
-static const char *sct_quadro_subtypes[] =
-{
- "",
- "#1",
- "#2",
- "#3",
- "#4"
-};
-
-
-#define wordout(addr, val) outw(val, addr)
-#define wordin(addr) inw(addr)
-
-static inline u_char
-readreg(unsigned int ale, unsigned int adr, u_char off)
-{
- register u_char ret;
- wordout(ale, off);
- ret = wordin(adr) & 0xFF;
- return (ret);
-}
-
-static inline void
-readfifo(unsigned int ale, unsigned int adr, u_char off, u_char *data, int size)
-{
- int i;
- wordout(ale, off);
- for (i = 0; i < size; i++)
- data[i] = wordin(adr) & 0xFF;
-}
-
-
-static inline void
-writereg(unsigned int ale, unsigned int adr, u_char off, u_char data)
-{
- wordout(ale, off);
- wordout(adr, data);
-}
-
-static inline void
-writefifo(unsigned int ale, unsigned int adr, u_char off, u_char *data, int size)
-{
- int i;
- wordout(ale, off);
- for (i = 0; i < size; i++)
- wordout(adr, data[i]);
-}
-
-/* Interface functions */
-
-static u_char
-ReadISAC(struct IsdnCardState *cs, u_char offset)
-{
- return (readreg(cs->hw.ax.base, cs->hw.ax.data_adr, offset | 0x80));
-}
-
-static void
-WriteISAC(struct IsdnCardState *cs, u_char offset, u_char value)
-{
- writereg(cs->hw.ax.base, cs->hw.ax.data_adr, offset | 0x80, value);
-}
-
-static void
-ReadISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- readfifo(cs->hw.ax.base, cs->hw.ax.data_adr, 0x80, data, size);
-}
-
-static void
-WriteISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- writefifo(cs->hw.ax.base, cs->hw.ax.data_adr, 0x80, data, size);
-}
-
-
-static u_char
-ReadHSCX(struct IsdnCardState *cs, int hscx, u_char offset)
-{
- return (readreg(cs->hw.ax.base, cs->hw.ax.data_adr, offset + (hscx ? 0x40 : 0)));
-}
-
-static void
-WriteHSCX(struct IsdnCardState *cs, int hscx, u_char offset, u_char value)
-{
- writereg(cs->hw.ax.base, cs->hw.ax.data_adr, offset + (hscx ? 0x40 : 0), value);
-}
-
-/* Set the specific ipac to active */
-static void
-set_ipac_active(struct IsdnCardState *cs, u_int active)
-{
- /* set irq mask */
- writereg(cs->hw.ax.base, cs->hw.ax.data_adr, IPAC_MASK,
- active ? 0xc0 : 0xff);
-}
-
-/*
- * fast interrupt HSCX stuff goes here
- */
-
-#define READHSCX(cs, nr, reg) readreg(cs->hw.ax.base, \
- cs->hw.ax.data_adr, reg + (nr ? 0x40 : 0))
-#define WRITEHSCX(cs, nr, reg, data) writereg(cs->hw.ax.base, \
- cs->hw.ax.data_adr, reg + (nr ? 0x40 : 0), data)
-#define READHSCXFIFO(cs, nr, ptr, cnt) readfifo(cs->hw.ax.base, \
- cs->hw.ax.data_adr, (nr ? 0x40 : 0), ptr, cnt)
-#define WRITEHSCXFIFO(cs, nr, ptr, cnt) writefifo(cs->hw.ax.base, \
- cs->hw.ax.data_adr, (nr ? 0x40 : 0), ptr, cnt)
-
-#include "hscx_irq.c"
-
-static irqreturn_t
-bkm_interrupt_ipac(int intno, void *dev_id)
-{
- struct IsdnCardState *cs = dev_id;
- u_char ista, val, icnt = 5;
- u_long flags;
-
- spin_lock_irqsave(&cs->lock, flags);
- ista = readreg(cs->hw.ax.base, cs->hw.ax.data_adr, IPAC_ISTA);
- if (!(ista & 0x3f)) { /* not this IPAC */
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_NONE;
- }
-Start_IPAC:
- if (cs->debug & L1_DEB_IPAC)
- debugl1(cs, "IPAC ISTA %02X", ista);
- if (ista & 0x0f) {
- val = readreg(cs->hw.ax.base, cs->hw.ax.data_adr, HSCX_ISTA + 0x40);
- if (ista & 0x01)
- val |= 0x01;
- if (ista & 0x04)
- val |= 0x02;
- if (ista & 0x08)
- val |= 0x04;
- if (val) {
- hscx_int_main(cs, val);
- }
- }
- if (ista & 0x20) {
- val = 0xfe & readreg(cs->hw.ax.base, cs->hw.ax.data_adr, ISAC_ISTA | 0x80);
- if (val) {
- isac_interrupt(cs, val);
- }
- }
- if (ista & 0x10) {
- val = 0x01;
- isac_interrupt(cs, val);
- }
- ista = readreg(cs->hw.ax.base, cs->hw.ax.data_adr, IPAC_ISTA);
- if ((ista & 0x3f) && icnt) {
- icnt--;
- goto Start_IPAC;
- }
- if (!icnt)
- printk(KERN_WARNING "HiSax: Scitel Quadro (%s) IRQ LOOP\n",
- sct_quadro_subtypes[cs->subtyp]);
- writereg(cs->hw.ax.base, cs->hw.ax.data_adr, IPAC_MASK, 0xFF);
- writereg(cs->hw.ax.base, cs->hw.ax.data_adr, IPAC_MASK, 0xC0);
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
-}
-
-static void
-release_io_sct_quadro(struct IsdnCardState *cs)
-{
- release_region(cs->hw.ax.base & 0xffffffc0, 128);
- if (cs->subtyp == SCT_1)
- release_region(cs->hw.ax.plx_adr, 64);
-}
-
-static void
-enable_bkm_int(struct IsdnCardState *cs, unsigned bEnable)
-{
- if (cs->typ == ISDN_CTYPE_SCT_QUADRO) {
- if (bEnable)
- wordout(cs->hw.ax.plx_adr + 0x4C, (wordin(cs->hw.ax.plx_adr + 0x4C) | 0x41));
- else
- wordout(cs->hw.ax.plx_adr + 0x4C, (wordin(cs->hw.ax.plx_adr + 0x4C) & ~0x41));
- }
-}
-
-static void
-reset_bkm(struct IsdnCardState *cs)
-{
- if (cs->subtyp == SCT_1) {
- wordout(cs->hw.ax.plx_adr + 0x50, (wordin(cs->hw.ax.plx_adr + 0x50) & ~4));
- mdelay(10);
- /* Remove the soft reset */
- wordout(cs->hw.ax.plx_adr + 0x50, (wordin(cs->hw.ax.plx_adr + 0x50) | 4));
- mdelay(10);
- }
-}
-
-static int
-BKM_card_msg(struct IsdnCardState *cs, int mt, void *arg)
-{
- u_long flags;
-
- switch (mt) {
- case CARD_RESET:
- spin_lock_irqsave(&cs->lock, flags);
- /* Disable ints */
- set_ipac_active(cs, 0);
- enable_bkm_int(cs, 0);
- reset_bkm(cs);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_RELEASE:
- /* Sanity */
- spin_lock_irqsave(&cs->lock, flags);
- set_ipac_active(cs, 0);
- enable_bkm_int(cs, 0);
- spin_unlock_irqrestore(&cs->lock, flags);
- release_io_sct_quadro(cs);
- return (0);
- case CARD_INIT:
- spin_lock_irqsave(&cs->lock, flags);
- cs->debug |= L1_DEB_IPAC;
- set_ipac_active(cs, 1);
- inithscxisac(cs, 3);
- /* Enable ints */
- enable_bkm_int(cs, 1);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_TEST:
- return (0);
- }
- return (0);
-}
-
-static int sct_alloc_io(u_int adr, u_int len)
-{
- if (!request_region(adr, len, "scitel")) {
- printk(KERN_WARNING
- "HiSax: Scitel port %#x-%#x already in use\n",
- adr, adr + len);
- return (1);
- }
- return (0);
-}
-
-static struct pci_dev *dev_a8 = NULL;
-static u16 sub_vendor_id = 0;
-static u16 sub_sys_id = 0;
-static u_char pci_bus = 0;
-static u_char pci_device_fn = 0;
-static u_char pci_irq = 0;
-
-int setup_sct_quadro(struct IsdnCard *card)
-{
- struct IsdnCardState *cs = card->cs;
- char tmp[64];
- u_int found = 0;
- u_int pci_ioaddr1, pci_ioaddr2, pci_ioaddr3, pci_ioaddr4, pci_ioaddr5;
-
- strcpy(tmp, sct_quadro_revision);
- printk(KERN_INFO "HiSax: T-Berkom driver Rev. %s\n", HiSax_getrev(tmp));
- if (cs->typ == ISDN_CTYPE_SCT_QUADRO) {
- cs->subtyp = SCT_1; /* Preset */
- } else
- return (0);
-
- /* Identify subtype by para[0] */
- if (card->para[0] >= SCT_1 && card->para[0] <= SCT_4)
- cs->subtyp = card->para[0];
- else {
- printk(KERN_WARNING "HiSax: Scitel Quadro: Invalid "
- "subcontroller in configuration, default to 1\n");
- return (0);
- }
- if ((cs->subtyp != SCT_1) && ((sub_sys_id != PCI_DEVICE_ID_BERKOM_SCITEL_QUADRO) ||
- (sub_vendor_id != PCI_VENDOR_ID_BERKOM)))
- return (0);
- if (cs->subtyp == SCT_1) {
- while ((dev_a8 = hisax_find_pci_device(PCI_VENDOR_ID_PLX,
- PCI_DEVICE_ID_PLX_9050, dev_a8))) {
-
- sub_vendor_id = dev_a8->subsystem_vendor;
- sub_sys_id = dev_a8->subsystem_device;
- if ((sub_sys_id == PCI_DEVICE_ID_BERKOM_SCITEL_QUADRO) &&
- (sub_vendor_id == PCI_VENDOR_ID_BERKOM)) {
- if (pci_enable_device(dev_a8))
- return (0);
- pci_ioaddr1 = pci_resource_start(dev_a8, 1);
- pci_irq = dev_a8->irq;
- pci_bus = dev_a8->bus->number;
- pci_device_fn = dev_a8->devfn;
- found = 1;
- break;
- }
- }
- if (!found) {
- printk(KERN_WARNING "HiSax: Scitel Quadro (%s): "
- "Card not found\n",
- sct_quadro_subtypes[cs->subtyp]);
- return (0);
- }
-#ifdef ATTEMPT_PCI_REMAPPING
-/* HACK: PLX revision 1 bug: PLX address bit 7 must not be set */
- if ((pci_ioaddr1 & 0x80) && (dev_a8->revision == 1)) {
- printk(KERN_WARNING "HiSax: Scitel Quadro (%s): "
- "PLX rev 1, remapping required!\n",
- sct_quadro_subtypes[cs->subtyp]);
- /* Restart PCI negotiation */
- pci_write_config_dword(dev_a8, PCI_BASE_ADDRESS_1, (u_int)-1);
- /* Move up by 0x80 byte */
- pci_ioaddr1 += 0x80;
- pci_ioaddr1 &= PCI_BASE_ADDRESS_IO_MASK;
- pci_write_config_dword(dev_a8, PCI_BASE_ADDRESS_1, pci_ioaddr1);
- dev_a8->resource[1].start = pci_ioaddr1;
- }
-#endif /* End HACK */
- }
- if (!pci_irq) { /* IRQ range check ?? */
- printk(KERN_WARNING "HiSax: Scitel Quadro (%s): No IRQ\n",
- sct_quadro_subtypes[cs->subtyp]);
- return (0);
- }
- pci_read_config_dword(dev_a8, PCI_BASE_ADDRESS_1, &pci_ioaddr1);
- pci_read_config_dword(dev_a8, PCI_BASE_ADDRESS_2, &pci_ioaddr2);
- pci_read_config_dword(dev_a8, PCI_BASE_ADDRESS_3, &pci_ioaddr3);
- pci_read_config_dword(dev_a8, PCI_BASE_ADDRESS_4, &pci_ioaddr4);
- pci_read_config_dword(dev_a8, PCI_BASE_ADDRESS_5, &pci_ioaddr5);
- if (!pci_ioaddr1 || !pci_ioaddr2 || !pci_ioaddr3 || !pci_ioaddr4 || !pci_ioaddr5) {
- printk(KERN_WARNING "HiSax: Scitel Quadro (%s): "
- "No IO base address(es)\n",
- sct_quadro_subtypes[cs->subtyp]);
- return (0);
- }
- pci_ioaddr1 &= PCI_BASE_ADDRESS_IO_MASK;
- pci_ioaddr2 &= PCI_BASE_ADDRESS_IO_MASK;
- pci_ioaddr3 &= PCI_BASE_ADDRESS_IO_MASK;
- pci_ioaddr4 &= PCI_BASE_ADDRESS_IO_MASK;
- pci_ioaddr5 &= PCI_BASE_ADDRESS_IO_MASK;
- /* Take over */
- cs->irq = pci_irq;
- cs->irq_flags |= IRQF_SHARED;
-	/* pci_ioaddr1 is common to all subdevices */
- /* pci_ioaddr2 is for the fourth subdevice only */
- /* pci_ioaddr3 is for the third subdevice only */
- /* pci_ioaddr4 is for the second subdevice only */
- /* pci_ioaddr5 is for the first subdevice only */
- cs->hw.ax.plx_adr = pci_ioaddr1;
- /* Enter all ipac_base addresses */
- switch (cs->subtyp) {
- case 1:
- cs->hw.ax.base = pci_ioaddr5 + 0x00;
- if (sct_alloc_io(pci_ioaddr1, 128))
- return (0);
- if (sct_alloc_io(pci_ioaddr5, 64))
- return (0);
- /* disable all IPAC */
- writereg(pci_ioaddr5, pci_ioaddr5 + 4,
- IPAC_MASK, 0xFF);
- writereg(pci_ioaddr4 + 0x08, pci_ioaddr4 + 0x0c,
- IPAC_MASK, 0xFF);
- writereg(pci_ioaddr3 + 0x10, pci_ioaddr3 + 0x14,
- IPAC_MASK, 0xFF);
- writereg(pci_ioaddr2 + 0x20, pci_ioaddr2 + 0x24,
- IPAC_MASK, 0xFF);
- break;
- case 2:
- cs->hw.ax.base = pci_ioaddr4 + 0x08;
- if (sct_alloc_io(pci_ioaddr4, 64))
- return (0);
- break;
- case 3:
- cs->hw.ax.base = pci_ioaddr3 + 0x10;
- if (sct_alloc_io(pci_ioaddr3, 64))
- return (0);
- break;
- case 4:
- cs->hw.ax.base = pci_ioaddr2 + 0x20;
- if (sct_alloc_io(pci_ioaddr2, 64))
- return (0);
- break;
- }
- /* For isac and hscx data path */
- cs->hw.ax.data_adr = cs->hw.ax.base + 4;
-
- printk(KERN_INFO "HiSax: Scitel Quadro (%s) configured at "
- "0x%.4lX, 0x%.4lX, 0x%.4lX and IRQ %d\n",
- sct_quadro_subtypes[cs->subtyp],
- cs->hw.ax.plx_adr,
- cs->hw.ax.base,
- cs->hw.ax.data_adr,
- cs->irq);
-
- test_and_set_bit(HW_IPAC, &cs->HW_Flags);
-
- cs->readisac = &ReadISAC;
- cs->writeisac = &WriteISAC;
- cs->readisacfifo = &ReadISACfifo;
- cs->writeisacfifo = &WriteISACfifo;
-
- cs->BC_Read_Reg = &ReadHSCX;
- cs->BC_Write_Reg = &WriteHSCX;
- cs->BC_Send_Data = &hscx_fill_fifo;
- cs->cardmsg = &BKM_card_msg;
- cs->irq_func = &bkm_interrupt_ipac;
-
- printk(KERN_INFO "HiSax: Scitel Quadro (%s): IPAC Version %d\n",
- sct_quadro_subtypes[cs->subtyp],
- readreg(cs->hw.ax.base, cs->hw.ax.data_adr, IPAC_ID));
- return (1);
-}
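
The Scitel Quadro accessors removed above all follow the same indirect, ALE-latched access pattern: the register offset is first written to the ALE port, then the data port is read or written, and the FIFO helpers simply repeat the data-port cycle without re-latching. Below is a small, self-contained userspace simulation of that access scheme using an in-memory register bank instead of inw()/outw(); it is an illustration only, and sim_readreg()/sim_writereg() are hypothetical names, not driver functions.

#include <stdio.h>

/* In-memory stand-in for the IPAC register bank and its two I/O ports. */
static unsigned char regs[256];
static unsigned char latched_off;       /* what the ALE port currently selects */

static void port_out(int ale_port, unsigned char val)
{
	if (ale_port)
		latched_off = val;          /* ALE port: latch the offset    */
	else
		regs[latched_off] = val;    /* data port: write selected reg */
}

static unsigned char port_in(void)
{
	return regs[latched_off];           /* data port: read selected reg  */
}

/* Same shape as readreg()/writereg() above: latch the offset, then move data. */
static unsigned char sim_readreg(unsigned char off)
{
	port_out(1, off);
	return port_in();
}

static void sim_writereg(unsigned char off, unsigned char data)
{
	port_out(1, off);
	port_out(0, data);
}

int main(void)
{
	sim_writereg(0x20, 0xc0);   /* e.g. unmask an interrupt mask register */
	printf("reg 0x20 = 0x%02x\n", sim_readreg(0x20));
	return 0;
}
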
diff --git a/drivers/isdn/hisax/bkm_ax.h b/drivers/isdn/hisax/bkm_ax.h
deleted file mode 100644
index 27ff8a88679b..000000000000
--- a/drivers/isdn/hisax/bkm_ax.h
+++ /dev/null
@@ -1,119 +0,0 @@
-/* $Id: bkm_ax.h,v 1.5.6.3 2001/09/23 22:24:46 kai Exp $
- *
- * low level decls for T-Berkom cards A4T and Scitel Quadro (4*S0, passive)
- *
- * Author Roland Klabunde
- * Copyright by Roland Klabunde <R.Klabunde@Berkom.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#ifndef __BKM_AX_H__
-#define __BKM_AX_H__
-
-/* Supported boards (subtypes) */
-#define SCT_1 1
-#define SCT_2 2
-#define SCT_3 3
-#define SCT_4 4
-#define BKM_A4T 5
-
-#define PLX_ADDR_PLX 0x14 /* Addr PLX configuration */
-#define PLX_ADDR_ISAC 0x18 /* Addr ISAC */
-#define PLX_ADDR_HSCX 0x1C /* Addr HSCX */
-#define PLX_ADDR_ALE 0x20 /* Addr ALE */
-#define PLX_ADDR_ALEPLUS 0x24 /* Next Addr behind ALE */
-
-#define PLX_SUBVEN 0x2C /* Offset SubVendor */
-#define PLX_SUBSYS 0x2E /* Offset SubSystem */
-
-
-/* Application specific registers I20 (Siemens SZB6120H) */
-typedef struct {
- /* Video front end horizontal configuration register */
- volatile u_int i20VFEHorzCfg; /* Offset 00 */
- /* Video front end vertical configuration register */
- volatile u_int i20VFEVertCfg; /* Offset 04 */
- /* Video front end scaler and pixel format register */
- volatile u_int i20VFEScaler; /* Offset 08 */
- /* Video display top register */
- volatile u_int i20VDispTop; /* Offset 0C */
- /* Video display bottom register */
- volatile u_int i20VDispBottom; /* Offset 10 */
- /* Video stride, status and frame grab register */
- volatile u_int i20VidFrameGrab;/* Offset 14 */
- /* Video display configuration register */
- volatile u_int i20VDispCfg; /* Offset 18 */
- /* Video masking map top */
- volatile u_int i20VMaskTop; /* Offset 1C */
- /* Video masking map bottom */
- volatile u_int i20VMaskBottom; /* Offset 20 */
- /* Overlay control register */
- volatile u_int i20OvlyControl; /* Offset 24 */
- /* System, PCI and general purpose pins control register */
- volatile u_int i20SysControl; /* Offset 28 */
-#define sysRESET 0x01000000 /* bit 24:Softreset (Low) */
- /* GPIO 4...0: Output fixed for our cfg! */
-#define sysCFG 0x000000E0 /* GPIO 7,6,5: Input */
- /* General purpose pins and guest bus control register */
- volatile u_int i20GuestControl;/* Offset 2C */
-#define guestWAIT_CFG 0x00005555 /* 4 PCI waits for all */
-#define guestISDN_INT_E 0x01000000 /* ISDN Int en (low) */
-#define guestVID_INT_E 0x02000000 /* Video interrupt en (low) */
-#define guestADI1_INT_R 0x04000000 /* ADI #1 int req (low) */
-#define guestADI2_INT_R 0x08000000 /* ADI #2 int req (low) */
-#define guestISDN_RES 0x10000000 /* ISDN reset bit (high) */
-#define guestADI1_INT_S 0x20000000 /* ADI #1 int pending (low) */
-#define guestADI2_INT_S 0x40000000 /* ADI #2 int pending (low) */
-#define guestISDN_INT_S 0x80000000 /* ISAC int pending (low) */
-
-#define g_A4T_JADE_RES 0x01000000 /* JADE Reset (High) */
-#define g_A4T_ISAR_RES 0x02000000 /* ISAR Reset (High) */
-#define g_A4T_ISAC_RES 0x04000000 /* ISAC Reset (High) */
-#define g_A4T_JADE_BOOTR 0x08000000 /* JADE enable boot SRAM (Low) NOT USED */
-#define g_A4T_ISAR_BOOTR 0x10000000 /* ISAR enable boot SRAM (Low) NOT USED */
-#define g_A4T_JADE_INT_S 0x20000000 /* JADE interrupt pnd (Low) */
-#define g_A4T_ISAR_INT_S 0x40000000 /* ISAR interrupt pnd (Low) */
-#define g_A4T_ISAC_INT_S 0x80000000 /* ISAC interrupt pnd (Low) */
-
- volatile u_int i20CodeSource; /* Offset 30 */
- volatile u_int i20CodeXferCtrl;/* Offset 34 */
- volatile u_int i20CodeMemPtr; /* Offset 38 */
-
- volatile u_int i20IntStatus; /* Offset 3C */
- volatile u_int i20IntCtrl; /* Offset 40 */
-#define intISDN 0x40000000 /* GIRQ1En (ISAC/ADI) (High) */
-#define intVID 0x20000000 /* GIRQ0En (VSYNC) (High) */
-#define intCOD 0x10000000 /* CodRepIrqEn (High) */
-#define intPCI 0x01000000 /* PCI IntA enable (High) */
-
- volatile u_int i20I2CCtrl; /* Offset 44 */
-} I20_REGISTER_FILE, *PI20_REGISTER_FILE;
-
-/*
- * Postoffice structure for A4T
- *
- */
-#define PO_OFFSET 0x00000200 /* Postoffice offset from base */
-
-#define GCS_0 0x00000000 /* Guest bus chip selects */
-#define GCS_1 0x00100000
-#define GCS_2 0x00200000
-#define GCS_3 0x00300000
-
-#define PO_READ 0x00000000 /* R/W from/to guest bus */
-#define PO_WRITE 0x00800000
-
-#define PO_PEND 0x02000000
-
-#define POSTOFFICE(postoffice) *(volatile unsigned int *)(postoffice)
-
-/* Wait unlimited (don't worry) */
-#define __WAITI20__(postoffice) \
- do { \
- while ((POSTOFFICE(postoffice) & PO_PEND)) ; \
- } while (0)
-
-#endif /* __BKM_AX_H__ */
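
The __WAITI20__() macro above is an unbounded poll: it spins on the memory-mapped postoffice register until the hardware clears PO_PEND. The sketch below is a minimal, self-contained userspace illustration of that polling idiom; fake_postoffice(), which clears the bit after a few reads, is purely hypothetical scaffolding standing in for the hardware and is not part of the driver.

#include <stdio.h>

#define PO_PEND 0x02000000u

/* Hypothetical stand-in for the hardware: the pending bit clears itself
 * after a few polls, so the loop below terminates in this simulation. */
static unsigned int fake_postoffice(void)
{
	static int polls_left = 3;
	return polls_left-- > 0 ? PO_PEND : 0;
}

/* Same shape as __WAITI20__(): spin while the postoffice transaction is
 * still pending. The real macro reads a memory-mapped register instead. */
static void wait_i20(void)
{
	int polls = 0;

	while (fake_postoffice() & PO_PEND)
		polls++;
	printf("postoffice ready after %d polls\n", polls);
}

int main(void)
{
	wait_i20();
	return 0;
}
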
diff --git a/drivers/isdn/hisax/callc.c b/drivers/isdn/hisax/callc.c
deleted file mode 100644
index 9ee06328784c..000000000000
--- a/drivers/isdn/hisax/callc.c
+++ /dev/null
@@ -1,1792 +0,0 @@
-/* $Id: callc.c,v 2.59.2.4 2004/02/11 13:21:32 keil Exp $
- *
- * Author Karsten Keil
- * Copyright by Karsten Keil <keil@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- * For changes and modifications please read
- * Documentation/isdn/HiSax.cert
- *
- * based on the teles driver from Jan den Ouden
- *
- * Thanks to Jan den Ouden
- * Fritz Elfert
- *
- */
-
-#include <linux/module.h>
-#include <linux/slab.h>
-#include <linux/init.h>
-#include "hisax.h"
-#include <linux/isdn/capicmd.h>
-
-const char *lli_revision = "$Revision: 2.59.2.4 $";
-
-extern struct IsdnCard cards[];
-
-static int init_b_st(struct Channel *chanp, int incoming);
-static void release_b_st(struct Channel *chanp);
-
-static struct Fsm callcfsm;
-static int chancount;
-
-/* experimental REJECT after ALERTING for CALLBACK to beat the 4s delay */
-#define ALERT_REJECT 0
-
-/* Value to delay the sending of the first B-channel packet after CONNECT.
- * No value is given by ITU, but experience shows that 300 ms will work on
- * many networks; if you or the other side is behind local exchanges, a
- * greater value may be recommended. If the delay is too short, the first
- * packet will be lost and autodetection on many commercial routers goes
- * wrong! You can adjust this value at runtime with
- * hisaxctrl <id> 2 <value>
- * value is in milliseconds
- */
-#define DEFAULT_B_DELAY 300
-
-/* Flags for remembering action done in lli */
-
-#define FLG_START_B 0
-
-/*
- * Find card with given driverId
- */
-static inline struct IsdnCardState *
-hisax_findcard(int driverid)
-{
- int i;
-
- for (i = 0; i < nrcards; i++)
- if (cards[i].cs)
- if (cards[i].cs->myid == driverid)
- return (cards[i].cs);
- return (struct IsdnCardState *) 0;
-}
-
-static __printf(3, 4) void
- link_debug(struct Channel *chanp, int direction, char *fmt, ...)
-{
- va_list args;
- char tmp[16];
-
- va_start(args, fmt);
- sprintf(tmp, "Ch%d %s ", chanp->chan,
- direction ? "LL->HL" : "HL->LL");
- VHiSax_putstatus(chanp->cs, tmp, fmt, args);
- va_end(args);
-}
-
-enum {
- ST_NULL, /* 0 inactive */
- ST_OUT_DIAL, /* 1 outgoing, SETUP send; awaiting confirm */
- ST_IN_WAIT_LL, /* 2 incoming call received; wait for LL confirm */
- ST_IN_ALERT_SENT, /* 3 incoming call received; ALERT send */
- ST_IN_WAIT_CONN_ACK, /* 4 incoming CONNECT send; awaiting CONN_ACK */
- ST_WAIT_BCONN, /* 5 CONNECT/CONN_ACK received, awaiting b-channel prot. estbl. */
- ST_ACTIVE, /* 6 active, b channel prot. established */
- ST_WAIT_BRELEASE, /* 7 call clear. (initiator), awaiting b channel prot. rel. */
- ST_WAIT_BREL_DISC, /* 8 call clear. (receiver), DISCONNECT req. received */
- ST_WAIT_DCOMMAND, /* 9 call clear. (receiver), awaiting DCHANNEL message */
- ST_WAIT_DRELEASE, /* 10 DISCONNECT sent, awaiting RELEASE */
- ST_WAIT_D_REL_CNF, /* 11 RELEASE sent, awaiting RELEASE confirm */
- ST_IN_PROCEED_SEND, /* 12 incoming call, proceeding send */
-};
-
-
-#define STATE_COUNT (ST_IN_PROCEED_SEND + 1)
-
-static char *strState[] =
-{
- "ST_NULL",
- "ST_OUT_DIAL",
- "ST_IN_WAIT_LL",
- "ST_IN_ALERT_SENT",
- "ST_IN_WAIT_CONN_ACK",
- "ST_WAIT_BCONN",
- "ST_ACTIVE",
- "ST_WAIT_BRELEASE",
- "ST_WAIT_BREL_DISC",
- "ST_WAIT_DCOMMAND",
- "ST_WAIT_DRELEASE",
- "ST_WAIT_D_REL_CNF",
- "ST_IN_PROCEED_SEND",
-};
-
-enum {
- EV_DIAL, /* 0 */
- EV_SETUP_CNF, /* 1 */
- EV_ACCEPTB, /* 2 */
- EV_DISCONNECT_IND, /* 3 */
- EV_RELEASE, /* 4 */
- EV_LEASED, /* 5 */
- EV_LEASED_REL, /* 6 */
- EV_SETUP_IND, /* 7 */
- EV_ACCEPTD, /* 8 */
- EV_SETUP_CMPL_IND, /* 9 */
- EV_BC_EST, /* 10 */
- EV_WRITEBUF, /* 11 */
- EV_HANGUP, /* 12 */
- EV_BC_REL, /* 13 */
- EV_CINF, /* 14 */
- EV_SUSPEND, /* 15 */
- EV_RESUME, /* 16 */
- EV_NOSETUP_RSP, /* 17 */
- EV_SETUP_ERR, /* 18 */
- EV_CONNECT_ERR, /* 19 */
- EV_PROCEED, /* 20 */
- EV_ALERT, /* 21 */
- EV_REDIR, /* 22 */
-};
-
-#define EVENT_COUNT (EV_REDIR + 1)
-
-static char *strEvent[] =
-{
- "EV_DIAL",
- "EV_SETUP_CNF",
- "EV_ACCEPTB",
- "EV_DISCONNECT_IND",
- "EV_RELEASE",
- "EV_LEASED",
- "EV_LEASED_REL",
- "EV_SETUP_IND",
- "EV_ACCEPTD",
- "EV_SETUP_CMPL_IND",
- "EV_BC_EST",
- "EV_WRITEBUF",
- "EV_HANGUP",
- "EV_BC_REL",
- "EV_CINF",
- "EV_SUSPEND",
- "EV_RESUME",
- "EV_NOSETUP_RSP",
- "EV_SETUP_ERR",
- "EV_CONNECT_ERR",
- "EV_PROCEED",
- "EV_ALERT",
- "EV_REDIR",
-};
-
-
-static inline void
-HL_LL(struct Channel *chanp, int command)
-{
- isdn_ctrl ic;
-
- ic.driver = chanp->cs->myid;
- ic.command = command;
- ic.arg = chanp->chan;
- chanp->cs->iif.statcallb(&ic);
-}
-
-static inline void
-lli_deliver_cause(struct Channel *chanp)
-{
- isdn_ctrl ic;
-
- if (!chanp->proc)
- return;
- if (chanp->proc->para.cause == NO_CAUSE)
- return;
- ic.driver = chanp->cs->myid;
- ic.command = ISDN_STAT_CAUSE;
- ic.arg = chanp->chan;
- if (chanp->cs->protocol == ISDN_PTYPE_EURO)
- sprintf(ic.parm.num, "E%02X%02X", chanp->proc->para.loc & 0x7f,
- chanp->proc->para.cause & 0x7f);
- else
- sprintf(ic.parm.num, "%02X%02X", chanp->proc->para.loc & 0x7f,
- chanp->proc->para.cause & 0x7f);
- chanp->cs->iif.statcallb(&ic);
-}
-
-static inline void
-lli_close(struct FsmInst *fi)
-{
- struct Channel *chanp = fi->userdata;
-
- FsmChangeState(fi, ST_NULL);
- chanp->Flags = 0;
- chanp->cs->cardmsg(chanp->cs, MDL_INFO_REL, (void *) (long)chanp->chan);
-}
-
-static void
-lli_leased_in(struct FsmInst *fi, int event, void *arg)
-{
- struct Channel *chanp = fi->userdata;
- isdn_ctrl ic;
- int ret;
-
- if (!chanp->leased)
- return;
- chanp->cs->cardmsg(chanp->cs, MDL_INFO_SETUP, (void *) (long)chanp->chan);
- FsmChangeState(fi, ST_IN_WAIT_LL);
- if (chanp->debug & 1)
- link_debug(chanp, 0, "STAT_ICALL_LEASED");
- ic.driver = chanp->cs->myid;
- ic.command = ((chanp->chan < 2) ? ISDN_STAT_ICALL : ISDN_STAT_ICALLW);
- ic.arg = chanp->chan;
- ic.parm.setup.si1 = 7;
- ic.parm.setup.si2 = 0;
- ic.parm.setup.plan = 0;
- ic.parm.setup.screen = 0;
- sprintf(ic.parm.setup.eazmsn, "%d", chanp->chan + 1);
- sprintf(ic.parm.setup.phone, "LEASED%d", chanp->cs->myid);
- ret = chanp->cs->iif.statcallb(&ic);
- if (chanp->debug & 1)
- link_debug(chanp, 1, "statcallb ret=%d", ret);
- if (!ret) {
- chanp->cs->cardmsg(chanp->cs, MDL_INFO_REL, (void *) (long)chanp->chan);
- FsmChangeState(fi, ST_NULL);
- }
-}
-
-
-/*
- * Dial out
- */
-static void
-lli_init_bchan_out(struct FsmInst *fi, int event, void *arg)
-{
- struct Channel *chanp = fi->userdata;
-
- FsmChangeState(fi, ST_WAIT_BCONN);
- if (chanp->debug & 1)
- link_debug(chanp, 0, "STAT_DCONN");
- HL_LL(chanp, ISDN_STAT_DCONN);
- init_b_st(chanp, 0);
- chanp->b_st->lli.l4l3(chanp->b_st, DL_ESTABLISH | REQUEST, NULL);
-}
-
-static void
-lli_prep_dialout(struct FsmInst *fi, int event, void *arg)
-{
- struct Channel *chanp = fi->userdata;
-
- FsmDelTimer(&chanp->drel_timer, 60);
- FsmDelTimer(&chanp->dial_timer, 73);
- chanp->l2_active_protocol = chanp->l2_protocol;
- chanp->incoming = 0;
- chanp->cs->cardmsg(chanp->cs, MDL_INFO_SETUP, (void *) (long)chanp->chan);
- if (chanp->leased) {
- lli_init_bchan_out(fi, event, arg);
- } else {
- FsmChangeState(fi, ST_OUT_DIAL);
- chanp->d_st->lli.l4l3(chanp->d_st, CC_SETUP | REQUEST, chanp);
- }
-}
-
-static void
-lli_resume(struct FsmInst *fi, int event, void *arg)
-{
- struct Channel *chanp = fi->userdata;
-
- FsmDelTimer(&chanp->drel_timer, 60);
- FsmDelTimer(&chanp->dial_timer, 73);
- chanp->l2_active_protocol = chanp->l2_protocol;
- chanp->incoming = 0;
- chanp->cs->cardmsg(chanp->cs, MDL_INFO_SETUP, (void *) (long)chanp->chan);
- if (chanp->leased) {
- lli_init_bchan_out(fi, event, arg);
- } else {
- FsmChangeState(fi, ST_OUT_DIAL);
- chanp->d_st->lli.l4l3(chanp->d_st, CC_RESUME | REQUEST, chanp);
- }
-}
-
-static void
-lli_go_active(struct FsmInst *fi, int event, void *arg)
-{
- struct Channel *chanp = fi->userdata;
- isdn_ctrl ic;
-
-
- FsmChangeState(fi, ST_ACTIVE);
- chanp->data_open = !0;
- if (chanp->bcs->conmsg)
- strcpy(ic.parm.num, chanp->bcs->conmsg);
- else
- ic.parm.num[0] = 0;
- if (chanp->debug & 1)
- link_debug(chanp, 0, "STAT_BCONN %s", ic.parm.num);
- ic.driver = chanp->cs->myid;
- ic.command = ISDN_STAT_BCONN;
- ic.arg = chanp->chan;
- chanp->cs->iif.statcallb(&ic);
- chanp->cs->cardmsg(chanp->cs, MDL_INFO_CONN, (void *) (long)chanp->chan);
-}
-
-
-/*
- * RESUME
- */
-
-/* incoming call */
-
-static void
-lli_deliver_call(struct FsmInst *fi, int event, void *arg)
-{
- struct Channel *chanp = fi->userdata;
- isdn_ctrl ic;
- int ret;
-
- chanp->cs->cardmsg(chanp->cs, MDL_INFO_SETUP, (void *) (long)chanp->chan);
- /*
-	 * Report incoming calls only once to the link level; use CallFlags,
-	 * which is set to 3 with each broadcast message in isdnl1.c
-	 * and reset if an interface answered the STAT_ICALL.
- */
- if (1) { /* for only one TEI */
- FsmChangeState(fi, ST_IN_WAIT_LL);
- if (chanp->debug & 1)
- link_debug(chanp, 0, (chanp->chan < 2) ? "STAT_ICALL" : "STAT_ICALLW");
- ic.driver = chanp->cs->myid;
- ic.command = ((chanp->chan < 2) ? ISDN_STAT_ICALL : ISDN_STAT_ICALLW);
-
- ic.arg = chanp->chan;
- /*
- * No need to return "unknown" for calls without OAD,
-		 * because that's handled in the link level now (replaced by '0')
- */
- memcpy(&ic.parm.setup, &chanp->proc->para.setup, sizeof(setup_parm));
- ret = chanp->cs->iif.statcallb(&ic);
- if (chanp->debug & 1)
- link_debug(chanp, 1, "statcallb ret=%d", ret);
-
- switch (ret) {
- case 1: /* OK, someone likes this call */
- FsmDelTimer(&chanp->drel_timer, 61);
- FsmChangeState(fi, ST_IN_ALERT_SENT);
- chanp->d_st->lli.l4l3(chanp->d_st, CC_ALERTING | REQUEST, chanp->proc);
- break;
- case 5: /* direct redirect */
- case 4: /* Proceeding desired */
- FsmDelTimer(&chanp->drel_timer, 61);
- FsmChangeState(fi, ST_IN_PROCEED_SEND);
- chanp->d_st->lli.l4l3(chanp->d_st, CC_PROCEED_SEND | REQUEST, chanp->proc);
- if (ret == 5) {
- memcpy(&chanp->setup, &ic.parm.setup, sizeof(setup_parm));
- chanp->d_st->lli.l4l3(chanp->d_st, CC_REDIR | REQUEST, chanp->proc);
- }
- break;
- case 2: /* Rejecting Call */
- break;
- case 3: /* incomplete number */
- FsmDelTimer(&chanp->drel_timer, 61);
- chanp->d_st->lli.l4l3(chanp->d_st, CC_MORE_INFO | REQUEST, chanp->proc);
- break;
- case 0: /* OK, nobody likes this call */
- default: /* statcallb problems */
- chanp->d_st->lli.l4l3(chanp->d_st, CC_IGNORE | REQUEST, chanp->proc);
- chanp->cs->cardmsg(chanp->cs, MDL_INFO_REL, (void *) (long)chanp->chan);
- FsmChangeState(fi, ST_NULL);
- break;
- }
- } else {
- chanp->d_st->lli.l4l3(chanp->d_st, CC_IGNORE | REQUEST, chanp->proc);
- chanp->cs->cardmsg(chanp->cs, MDL_INFO_REL, (void *) (long)chanp->chan);
- }
-}
-
-static void
-lli_send_dconnect(struct FsmInst *fi, int event, void *arg)
-{
- struct Channel *chanp = fi->userdata;
-
- FsmChangeState(fi, ST_IN_WAIT_CONN_ACK);
- chanp->d_st->lli.l4l3(chanp->d_st, CC_SETUP | RESPONSE, chanp->proc);
-}
-
-static void
-lli_send_alert(struct FsmInst *fi, int event, void *arg)
-{
- struct Channel *chanp = fi->userdata;
-
- FsmChangeState(fi, ST_IN_ALERT_SENT);
- chanp->d_st->lli.l4l3(chanp->d_st, CC_ALERTING | REQUEST, chanp->proc);
-}
-
-static void
-lli_send_redir(struct FsmInst *fi, int event, void *arg)
-{
- struct Channel *chanp = fi->userdata;
-
- chanp->d_st->lli.l4l3(chanp->d_st, CC_REDIR | REQUEST, chanp->proc);
-}
-
-static void
-lli_init_bchan_in(struct FsmInst *fi, int event, void *arg)
-{
- struct Channel *chanp = fi->userdata;
-
- FsmChangeState(fi, ST_WAIT_BCONN);
- if (chanp->debug & 1)
- link_debug(chanp, 0, "STAT_DCONN");
- HL_LL(chanp, ISDN_STAT_DCONN);
- chanp->l2_active_protocol = chanp->l2_protocol;
- chanp->incoming = !0;
- init_b_st(chanp, !0);
- chanp->b_st->lli.l4l3(chanp->b_st, DL_ESTABLISH | REQUEST, NULL);
-}
-
-static void
-lli_setup_rsp(struct FsmInst *fi, int event, void *arg)
-{
- struct Channel *chanp = fi->userdata;
-
- if (chanp->leased) {
- lli_init_bchan_in(fi, event, arg);
- } else {
- FsmChangeState(fi, ST_IN_WAIT_CONN_ACK);
-#ifdef WANT_ALERT
- chanp->d_st->lli.l4l3(chanp->d_st, CC_ALERTING | REQUEST, chanp->proc);
-#endif
- chanp->d_st->lli.l4l3(chanp->d_st, CC_SETUP | RESPONSE, chanp->proc);
- }
-}
-
-/* Call suspend */
-
-static void
-lli_suspend(struct FsmInst *fi, int event, void *arg)
-{
- struct Channel *chanp = fi->userdata;
-
- chanp->d_st->lli.l4l3(chanp->d_st, CC_SUSPEND | REQUEST, chanp->proc);
-}
-
-/* Call clearing */
-
-static void
-lli_leased_hup(struct FsmInst *fi, struct Channel *chanp)
-{
- isdn_ctrl ic;
-
- ic.driver = chanp->cs->myid;
- ic.command = ISDN_STAT_CAUSE;
- ic.arg = chanp->chan;
- sprintf(ic.parm.num, "L0010");
- chanp->cs->iif.statcallb(&ic);
- if (chanp->debug & 1)
- link_debug(chanp, 0, "STAT_DHUP");
- HL_LL(chanp, ISDN_STAT_DHUP);
- lli_close(fi);
-}
-
-static void
-lli_disconnect_req(struct FsmInst *fi, int event, void *arg)
-{
- struct Channel *chanp = fi->userdata;
-
- if (chanp->leased) {
- lli_leased_hup(fi, chanp);
- } else {
- FsmChangeState(fi, ST_WAIT_DRELEASE);
- if (chanp->proc)
- chanp->proc->para.cause = 0x10; /* Normal Call Clearing */
- chanp->d_st->lli.l4l3(chanp->d_st, CC_DISCONNECT | REQUEST,
- chanp->proc);
- }
-}
-
-static void
-lli_disconnect_reject(struct FsmInst *fi, int event, void *arg)
-{
- struct Channel *chanp = fi->userdata;
-
- if (chanp->leased) {
- lli_leased_hup(fi, chanp);
- } else {
- FsmChangeState(fi, ST_WAIT_DRELEASE);
- if (chanp->proc)
- chanp->proc->para.cause = 0x15; /* Call Rejected */
- chanp->d_st->lli.l4l3(chanp->d_st, CC_DISCONNECT | REQUEST,
- chanp->proc);
- }
-}
-
-static void
-lli_dhup_close(struct FsmInst *fi, int event, void *arg)
-{
- struct Channel *chanp = fi->userdata;
-
- if (chanp->leased) {
- lli_leased_hup(fi, chanp);
- } else {
- if (chanp->debug & 1)
- link_debug(chanp, 0, "STAT_DHUP");
- lli_deliver_cause(chanp);
- HL_LL(chanp, ISDN_STAT_DHUP);
- lli_close(fi);
- }
-}
-
-static void
-lli_reject_req(struct FsmInst *fi, int event, void *arg)
-{
- struct Channel *chanp = fi->userdata;
-
- if (chanp->leased) {
- lli_leased_hup(fi, chanp);
- return;
- }
-#ifndef ALERT_REJECT
- if (chanp->proc)
- chanp->proc->para.cause = 0x15; /* Call Rejected */
- chanp->d_st->lli.l4l3(chanp->d_st, CC_REJECT | REQUEST, chanp->proc);
- lli_dhup_close(fi, event, arg);
-#else
- FsmRestartTimer(&chanp->drel_timer, 40, EV_HANGUP, NULL, 63);
- FsmChangeState(fi, ST_IN_ALERT_SENT);
- chanp->d_st->lli.l4l3(chanp->d_st, CC_ALERTING | REQUEST, chanp->proc);
-#endif
-}
-
-static void
-lli_disconn_bchan(struct FsmInst *fi, int event, void *arg)
-{
- struct Channel *chanp = fi->userdata;
-
- chanp->data_open = 0;
- FsmChangeState(fi, ST_WAIT_BRELEASE);
- chanp->b_st->lli.l4l3(chanp->b_st, DL_RELEASE | REQUEST, NULL);
-}
-
-static void
-lli_start_disc(struct FsmInst *fi, int event, void *arg)
-{
- struct Channel *chanp = fi->userdata;
-
- if (chanp->leased) {
- lli_leased_hup(fi, chanp);
- } else {
- lli_disconnect_req(fi, event, arg);
- }
-}
-
-static void
-lli_rel_b_disc(struct FsmInst *fi, int event, void *arg)
-{
- struct Channel *chanp = fi->userdata;
-
- release_b_st(chanp);
- lli_start_disc(fi, event, arg);
-}
-
-static void
-lli_bhup_disc(struct FsmInst *fi, int event, void *arg)
-{
- struct Channel *chanp = fi->userdata;
-
- if (chanp->debug & 1)
- link_debug(chanp, 0, "STAT_BHUP");
- HL_LL(chanp, ISDN_STAT_BHUP);
- lli_rel_b_disc(fi, event, arg);
-}
-
-static void
-lli_bhup_rel_b(struct FsmInst *fi, int event, void *arg)
-{
- struct Channel *chanp = fi->userdata;
-
- FsmChangeState(fi, ST_WAIT_DCOMMAND);
- chanp->data_open = 0;
- if (chanp->debug & 1)
- link_debug(chanp, 0, "STAT_BHUP");
- HL_LL(chanp, ISDN_STAT_BHUP);
- release_b_st(chanp);
-}
-
-static void
-lli_release_bchan(struct FsmInst *fi, int event, void *arg)
-{
- struct Channel *chanp = fi->userdata;
-
- chanp->data_open = 0;
- FsmChangeState(fi, ST_WAIT_BREL_DISC);
- chanp->b_st->lli.l4l3(chanp->b_st, DL_RELEASE | REQUEST, NULL);
-}
-
-
-static void
-lli_rel_b_dhup(struct FsmInst *fi, int event, void *arg)
-{
- struct Channel *chanp = fi->userdata;
-
- release_b_st(chanp);
- lli_dhup_close(fi, event, arg);
-}
-
-static void
-lli_bhup_dhup(struct FsmInst *fi, int event, void *arg)
-{
- struct Channel *chanp = fi->userdata;
-
- if (chanp->debug & 1)
- link_debug(chanp, 0, "STAT_BHUP");
- HL_LL(chanp, ISDN_STAT_BHUP);
- lli_rel_b_dhup(fi, event, arg);
-}
-
-static void
-lli_abort(struct FsmInst *fi, int event, void *arg)
-{
- struct Channel *chanp = fi->userdata;
-
- chanp->data_open = 0;
- chanp->b_st->lli.l4l3(chanp->b_st, DL_RELEASE | REQUEST, NULL);
- lli_bhup_dhup(fi, event, arg);
-}
-
-static void
-lli_release_req(struct FsmInst *fi, int event, void *arg)
-{
- struct Channel *chanp = fi->userdata;
-
- if (chanp->leased) {
- lli_leased_hup(fi, chanp);
- } else {
- FsmChangeState(fi, ST_WAIT_D_REL_CNF);
- chanp->d_st->lli.l4l3(chanp->d_st, CC_RELEASE | REQUEST,
- chanp->proc);
- }
-}
-
-static void
-lli_rel_b_release_req(struct FsmInst *fi, int event, void *arg)
-{
- struct Channel *chanp = fi->userdata;
-
- release_b_st(chanp);
- lli_release_req(fi, event, arg);
-}
-
-static void
-lli_bhup_release_req(struct FsmInst *fi, int event, void *arg)
-{
- struct Channel *chanp = fi->userdata;
-
- if (chanp->debug & 1)
- link_debug(chanp, 0, "STAT_BHUP");
- HL_LL(chanp, ISDN_STAT_BHUP);
- lli_rel_b_release_req(fi, event, arg);
-}
-
-
-/* processing charge info */
-static void
-lli_charge_info(struct FsmInst *fi, int event, void *arg)
-{
- struct Channel *chanp = fi->userdata;
- isdn_ctrl ic;
-
- ic.driver = chanp->cs->myid;
- ic.command = ISDN_STAT_CINF;
- ic.arg = chanp->chan;
- sprintf(ic.parm.num, "%d", chanp->proc->para.chargeinfo);
- chanp->cs->iif.statcallb(&ic);
-}
-
-/* error procedures */
-
-static void
-lli_dchan_not_ready(struct FsmInst *fi, int event, void *arg)
-{
- struct Channel *chanp = fi->userdata;
-
- if (chanp->debug & 1)
- link_debug(chanp, 0, "STAT_DHUP");
- HL_LL(chanp, ISDN_STAT_DHUP);
-}
-
-static void
-lli_no_setup_rsp(struct FsmInst *fi, int event, void *arg)
-{
- struct Channel *chanp = fi->userdata;
-
- if (chanp->debug & 1)
- link_debug(chanp, 0, "STAT_DHUP");
- HL_LL(chanp, ISDN_STAT_DHUP);
- lli_close(fi);
-}
-
-static void
-lli_error(struct FsmInst *fi, int event, void *arg)
-{
- FsmChangeState(fi, ST_WAIT_DRELEASE);
-}
-
-static void
-lli_failure_l(struct FsmInst *fi, int event, void *arg)
-{
- struct Channel *chanp = fi->userdata;
- isdn_ctrl ic;
-
- FsmChangeState(fi, ST_NULL);
- ic.driver = chanp->cs->myid;
- ic.command = ISDN_STAT_CAUSE;
- ic.arg = chanp->chan;
- sprintf(ic.parm.num, "L%02X%02X", 0, 0x2f);
- chanp->cs->iif.statcallb(&ic);
- HL_LL(chanp, ISDN_STAT_DHUP);
- chanp->Flags = 0;
- chanp->cs->cardmsg(chanp->cs, MDL_INFO_REL, (void *) (long)chanp->chan);
-}
-
-static void
-lli_rel_b_fail(struct FsmInst *fi, int event, void *arg)
-{
- struct Channel *chanp = fi->userdata;
-
- release_b_st(chanp);
- lli_failure_l(fi, event, arg);
-}
-
-static void
-lli_bhup_fail(struct FsmInst *fi, int event, void *arg)
-{
- struct Channel *chanp = fi->userdata;
-
- if (chanp->debug & 1)
- link_debug(chanp, 0, "STAT_BHUP");
- HL_LL(chanp, ISDN_STAT_BHUP);
- lli_rel_b_fail(fi, event, arg);
-}
-
-static void
-lli_failure_a(struct FsmInst *fi, int event, void *arg)
-{
- struct Channel *chanp = fi->userdata;
-
- chanp->data_open = 0;
- chanp->b_st->lli.l4l3(chanp->b_st, DL_RELEASE | REQUEST, NULL);
- lli_bhup_fail(fi, event, arg);
-}
-
-/* *INDENT-OFF* */
-static struct FsmNode fnlist[] __initdata =
-{
- {ST_NULL, EV_DIAL, lli_prep_dialout},
- {ST_NULL, EV_RESUME, lli_resume},
- {ST_NULL, EV_SETUP_IND, lli_deliver_call},
- {ST_NULL, EV_LEASED, lli_leased_in},
- {ST_OUT_DIAL, EV_SETUP_CNF, lli_init_bchan_out},
- {ST_OUT_DIAL, EV_HANGUP, lli_disconnect_req},
- {ST_OUT_DIAL, EV_DISCONNECT_IND, lli_release_req},
- {ST_OUT_DIAL, EV_RELEASE, lli_dhup_close},
- {ST_OUT_DIAL, EV_NOSETUP_RSP, lli_no_setup_rsp},
- {ST_OUT_DIAL, EV_SETUP_ERR, lli_error},
- {ST_IN_WAIT_LL, EV_LEASED_REL, lli_failure_l},
- {ST_IN_WAIT_LL, EV_ACCEPTD, lli_setup_rsp},
- {ST_IN_WAIT_LL, EV_HANGUP, lli_reject_req},
- {ST_IN_WAIT_LL, EV_DISCONNECT_IND, lli_release_req},
- {ST_IN_WAIT_LL, EV_RELEASE, lli_dhup_close},
- {ST_IN_WAIT_LL, EV_SETUP_IND, lli_deliver_call},
- {ST_IN_WAIT_LL, EV_SETUP_ERR, lli_error},
- {ST_IN_ALERT_SENT, EV_SETUP_CMPL_IND, lli_init_bchan_in},
- {ST_IN_ALERT_SENT, EV_ACCEPTD, lli_send_dconnect},
- {ST_IN_ALERT_SENT, EV_HANGUP, lli_disconnect_reject},
- {ST_IN_ALERT_SENT, EV_DISCONNECT_IND, lli_release_req},
- {ST_IN_ALERT_SENT, EV_RELEASE, lli_dhup_close},
- {ST_IN_ALERT_SENT, EV_REDIR, lli_send_redir},
- {ST_IN_PROCEED_SEND, EV_REDIR, lli_send_redir},
- {ST_IN_PROCEED_SEND, EV_ALERT, lli_send_alert},
- {ST_IN_PROCEED_SEND, EV_ACCEPTD, lli_send_dconnect},
- {ST_IN_PROCEED_SEND, EV_HANGUP, lli_disconnect_reject},
- {ST_IN_PROCEED_SEND, EV_DISCONNECT_IND, lli_dhup_close},
- {ST_IN_ALERT_SENT, EV_RELEASE, lli_dhup_close},
- {ST_IN_WAIT_CONN_ACK, EV_SETUP_CMPL_IND, lli_init_bchan_in},
- {ST_IN_WAIT_CONN_ACK, EV_HANGUP, lli_disconnect_req},
- {ST_IN_WAIT_CONN_ACK, EV_DISCONNECT_IND, lli_release_req},
- {ST_IN_WAIT_CONN_ACK, EV_RELEASE, lli_dhup_close},
- {ST_IN_WAIT_CONN_ACK, EV_CONNECT_ERR, lli_error},
- {ST_WAIT_BCONN, EV_BC_EST, lli_go_active},
- {ST_WAIT_BCONN, EV_BC_REL, lli_rel_b_disc},
- {ST_WAIT_BCONN, EV_HANGUP, lli_rel_b_disc},
- {ST_WAIT_BCONN, EV_DISCONNECT_IND, lli_rel_b_release_req},
- {ST_WAIT_BCONN, EV_RELEASE, lli_rel_b_dhup},
- {ST_WAIT_BCONN, EV_LEASED_REL, lli_rel_b_fail},
- {ST_WAIT_BCONN, EV_CINF, lli_charge_info},
- {ST_ACTIVE, EV_CINF, lli_charge_info},
- {ST_ACTIVE, EV_BC_REL, lli_bhup_rel_b},
- {ST_ACTIVE, EV_SUSPEND, lli_suspend},
- {ST_ACTIVE, EV_HANGUP, lli_disconn_bchan},
- {ST_ACTIVE, EV_DISCONNECT_IND, lli_release_bchan},
- {ST_ACTIVE, EV_RELEASE, lli_abort},
- {ST_ACTIVE, EV_LEASED_REL, lli_failure_a},
- {ST_WAIT_BRELEASE, EV_BC_REL, lli_bhup_disc},
- {ST_WAIT_BRELEASE, EV_DISCONNECT_IND, lli_bhup_release_req},
- {ST_WAIT_BRELEASE, EV_RELEASE, lli_bhup_dhup},
- {ST_WAIT_BRELEASE, EV_LEASED_REL, lli_bhup_fail},
- {ST_WAIT_BREL_DISC, EV_BC_REL, lli_bhup_release_req},
- {ST_WAIT_BREL_DISC, EV_RELEASE, lli_bhup_dhup},
- {ST_WAIT_DCOMMAND, EV_HANGUP, lli_start_disc},
- {ST_WAIT_DCOMMAND, EV_DISCONNECT_IND, lli_release_req},
- {ST_WAIT_DCOMMAND, EV_RELEASE, lli_dhup_close},
- {ST_WAIT_DCOMMAND, EV_LEASED_REL, lli_failure_l},
- {ST_WAIT_DRELEASE, EV_RELEASE, lli_dhup_close},
- {ST_WAIT_DRELEASE, EV_DIAL, lli_dchan_not_ready},
- /* ETS 300-104 16.1 */
- {ST_WAIT_D_REL_CNF, EV_RELEASE, lli_dhup_close},
- {ST_WAIT_D_REL_CNF, EV_DIAL, lli_dchan_not_ready},
-};
-/* *INDENT-ON* */
-
-int __init
-CallcNew(void)
-{
- callcfsm.state_count = STATE_COUNT;
- callcfsm.event_count = EVENT_COUNT;
- callcfsm.strEvent = strEvent;
- callcfsm.strState = strState;
- return FsmNew(&callcfsm, fnlist, ARRAY_SIZE(fnlist));
-}
-
-void
-CallcFree(void)
-{
- FsmFree(&callcfsm);
-}
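
callc.c drives call control with a table-driven finite state machine: fnlist[] maps each {state, event} pair to a handler, CallcNew() registers that table with the generic Fsm framework, and FsmEvent() later dispatches incoming events through it. The fragment below is a hedged, self-contained userspace sketch of the same dispatch idea; it uses a hypothetical two-state machine and its own tiny linear lookup, and is not the HiSax Fsm implementation (which builds a jump table).

#include <stdio.h>
#include <stddef.h>

enum { MY_ST_NULL, MY_ST_ACTIVE };          /* hypothetical states */
enum { MY_EV_DIAL, MY_EV_HANGUP };          /* hypothetical events */

struct my_fsm { int state; };

struct my_node {                            /* mirrors the role of struct FsmNode */
	int state;
	int event;
	void (*routine)(struct my_fsm *fi);
};

static void go_active(struct my_fsm *fi)
{
	fi->state = MY_ST_ACTIVE;
	printf("NULL + DIAL -> ACTIVE\n");
}

static void go_idle(struct my_fsm *fi)
{
	fi->state = MY_ST_NULL;
	printf("ACTIVE + HANGUP -> NULL\n");
}

static const struct my_node table[] = {
	{ MY_ST_NULL,   MY_EV_DIAL,   go_active },
	{ MY_ST_ACTIVE, MY_EV_HANGUP, go_idle   },
};

/* Same dispatch idea as FsmEvent(): find the handler registered for the
 * current state and this event; unknown pairs are simply ignored. */
static void my_event(struct my_fsm *fi, int event)
{
	size_t i;

	for (i = 0; i < sizeof(table) / sizeof(table[0]); i++)
		if (table[i].state == fi->state && table[i].event == event) {
			table[i].routine(fi);
			return;
		}
	printf("state %d: event %d ignored\n", fi->state, event);
}

int main(void)
{
	struct my_fsm fi = { MY_ST_NULL };

	my_event(&fi, MY_EV_DIAL);
	my_event(&fi, MY_EV_HANGUP);
	my_event(&fi, MY_EV_HANGUP);   /* no entry for NULL + HANGUP: ignored */
	return 0;
}
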
-
-static void
-release_b_st(struct Channel *chanp)
-{
- struct PStack *st = chanp->b_st;
-
- if (test_and_clear_bit(FLG_START_B, &chanp->Flags)) {
- chanp->bcs->BC_Close(chanp->bcs);
- switch (chanp->l2_active_protocol) {
- case (ISDN_PROTO_L2_X75I):
- releasestack_isdnl2(st);
- break;
- case (ISDN_PROTO_L2_HDLC):
- case (ISDN_PROTO_L2_HDLC_56K):
- case (ISDN_PROTO_L2_TRANS):
- case (ISDN_PROTO_L2_MODEM):
- case (ISDN_PROTO_L2_FAX):
- releasestack_transl2(st);
- break;
- }
- }
-}
-
-static struct Channel
-*selectfreechannel(struct PStack *st, int bch)
-{
- struct IsdnCardState *cs = st->l1.hardware;
- struct Channel *chanp = st->lli.userdata;
- int i;
-
- if (test_bit(FLG_TWO_DCHAN, &cs->HW_Flags))
- i = 1;
- else
- i = 0;
-
- if (!bch) {
- i = 2; /* virtual channel */
- chanp += 2;
- }
-
- while (i < ((bch) ? cs->chanlimit : (2 + MAX_WAITING_CALLS))) {
- if (chanp->fi.state == ST_NULL)
- return (chanp);
- chanp++;
- i++;
- }
-
- if (bch) /* number of channels is limited */ {
- i = 2; /* virtual channel */
- chanp = st->lli.userdata;
- chanp += i;
- while (i < (2 + MAX_WAITING_CALLS)) {
- if (chanp->fi.state == ST_NULL)
- return (chanp);
- chanp++;
- i++;
- }
- }
- return (NULL);
-}
-
-static void stat_redir_result(struct IsdnCardState *cs, int chan, ulong result)
-{ isdn_ctrl ic;
-
- ic.driver = cs->myid;
- ic.command = ISDN_STAT_REDIR;
- ic.arg = chan;
- ic.parm.num[0] = result;
- cs->iif.statcallb(&ic);
-} /* stat_redir_result */
-
-static void
-dchan_l3l4(struct PStack *st, int pr, void *arg)
-{
- struct l3_process *pc = arg;
- struct IsdnCardState *cs = st->l1.hardware;
- struct Channel *chanp;
-
- if (!pc)
- return;
-
- if (pr == (CC_SETUP | INDICATION)) {
- if (!(chanp = selectfreechannel(pc->st, pc->para.bchannel))) {
- pc->para.cause = 0x11; /* User busy */
- pc->st->lli.l4l3(pc->st, CC_REJECT | REQUEST, pc);
- } else {
- chanp->proc = pc;
- pc->chan = chanp;
- FsmEvent(&chanp->fi, EV_SETUP_IND, NULL);
- }
- return;
- }
- if (!(chanp = pc->chan))
- return;
-
- switch (pr) {
- case (CC_MORE_INFO | INDICATION):
- FsmEvent(&chanp->fi, EV_SETUP_IND, NULL);
- break;
- case (CC_DISCONNECT | INDICATION):
- FsmEvent(&chanp->fi, EV_DISCONNECT_IND, NULL);
- break;
- case (CC_RELEASE | CONFIRM):
- FsmEvent(&chanp->fi, EV_RELEASE, NULL);
- break;
- case (CC_SUSPEND | CONFIRM):
- FsmEvent(&chanp->fi, EV_RELEASE, NULL);
- break;
- case (CC_RESUME | CONFIRM):
- FsmEvent(&chanp->fi, EV_SETUP_CNF, NULL);
- break;
- case (CC_RESUME_ERR):
- FsmEvent(&chanp->fi, EV_RELEASE, NULL);
- break;
- case (CC_RELEASE | INDICATION):
- FsmEvent(&chanp->fi, EV_RELEASE, NULL);
- break;
- case (CC_SETUP_COMPL | INDICATION):
- FsmEvent(&chanp->fi, EV_SETUP_CMPL_IND, NULL);
- break;
- case (CC_SETUP | CONFIRM):
- FsmEvent(&chanp->fi, EV_SETUP_CNF, NULL);
- break;
- case (CC_CHARGE | INDICATION):
- FsmEvent(&chanp->fi, EV_CINF, NULL);
- break;
- case (CC_NOSETUP_RSP):
- FsmEvent(&chanp->fi, EV_NOSETUP_RSP, NULL);
- break;
- case (CC_SETUP_ERR):
- FsmEvent(&chanp->fi, EV_SETUP_ERR, NULL);
- break;
- case (CC_CONNECT_ERR):
- FsmEvent(&chanp->fi, EV_CONNECT_ERR, NULL);
- break;
- case (CC_RELEASE_ERR):
- FsmEvent(&chanp->fi, EV_RELEASE, NULL);
- break;
- case (CC_PROCEED_SEND | INDICATION):
- case (CC_PROCEEDING | INDICATION):
- case (CC_ALERTING | INDICATION):
- case (CC_PROGRESS | INDICATION):
- case (CC_NOTIFY | INDICATION):
- break;
- case (CC_REDIR | INDICATION):
- stat_redir_result(cs, chanp->chan, pc->redir_result);
- break;
- default:
-	if (chanp->debug & 0x800) {
-		HiSax_putstatus(chanp->cs, "Ch",
-				"%d L3->L4 unknown primitive %#x",
- chanp->chan, pr);
- }
- }
-}
-
-static void
-dummy_pstack(struct PStack *st, int pr, void *arg) {
- printk(KERN_WARNING"call to dummy_pstack pr=%04x arg %lx\n", pr, (long)arg);
-}
-
-static int
-init_PStack(struct PStack **stp) {
- *stp = kmalloc(sizeof(struct PStack), GFP_KERNEL);
- if (!*stp)
- return -ENOMEM;
- (*stp)->next = NULL;
- (*stp)->l1.l1l2 = dummy_pstack;
- (*stp)->l1.l1hw = dummy_pstack;
- (*stp)->l1.l1tei = dummy_pstack;
- (*stp)->l2.l2tei = dummy_pstack;
- (*stp)->l2.l2l1 = dummy_pstack;
- (*stp)->l2.l2l3 = dummy_pstack;
- (*stp)->l3.l3l2 = dummy_pstack;
- (*stp)->l3.l3ml3 = dummy_pstack;
- (*stp)->l3.l3l4 = dummy_pstack;
- (*stp)->lli.l4l3 = dummy_pstack;
- (*stp)->ma.layer = dummy_pstack;
- return 0;
-}
-
-static int
-init_d_st(struct Channel *chanp)
-{
- struct PStack *st;
- struct IsdnCardState *cs = chanp->cs;
- char tmp[16];
- int err;
-
- err = init_PStack(&chanp->d_st);
- if (err)
- return err;
- st = chanp->d_st;
- st->next = NULL;
- HiSax_addlist(cs, st);
- setstack_HiSax(st, cs);
- st->l2.sap = 0;
- st->l2.tei = -1;
- st->l2.flag = 0;
- test_and_set_bit(FLG_MOD128, &st->l2.flag);
- test_and_set_bit(FLG_LAPD, &st->l2.flag);
- test_and_set_bit(FLG_ORIG, &st->l2.flag);
- st->l2.maxlen = MAX_DFRAME_LEN;
- st->l2.window = 1;
- st->l2.T200 = 1000; /* 1000 milliseconds */
- st->l2.N200 = 3; /* try 3 times */
- st->l2.T203 = 10000; /* 10000 milliseconds */
- if (test_bit(FLG_TWO_DCHAN, &cs->HW_Flags))
- sprintf(tmp, "DCh%d Q.921 ", chanp->chan);
- else
- sprintf(tmp, "DCh Q.921 ");
- setstack_isdnl2(st, tmp);
- setstack_l3dc(st, chanp);
- st->lli.userdata = chanp;
- st->l3.l3l4 = dchan_l3l4;
-
- return 0;
-}
-
-static __printf(2, 3) void
- callc_debug(struct FsmInst *fi, char *fmt, ...)
-{
- va_list args;
- struct Channel *chanp = fi->userdata;
- char tmp[16];
-
- va_start(args, fmt);
- sprintf(tmp, "Ch%d callc ", chanp->chan);
- VHiSax_putstatus(chanp->cs, tmp, fmt, args);
- va_end(args);
-}
-
-static int
-init_chan(int chan, struct IsdnCardState *csta)
-{
- struct Channel *chanp = csta->channel + chan;
- int err;
-
- chanp->cs = csta;
- chanp->bcs = csta->bcs + chan;
- chanp->chan = chan;
- chanp->incoming = 0;
- chanp->debug = 0;
- chanp->Flags = 0;
- chanp->leased = 0;
- err = init_PStack(&chanp->b_st);
- if (err)
- return err;
- chanp->b_st->l1.delay = DEFAULT_B_DELAY;
- chanp->fi.fsm = &callcfsm;
- chanp->fi.state = ST_NULL;
- chanp->fi.debug = 0;
- chanp->fi.userdata = chanp;
- chanp->fi.printdebug = callc_debug;
- FsmInitTimer(&chanp->fi, &chanp->dial_timer);
- FsmInitTimer(&chanp->fi, &chanp->drel_timer);
- if (!chan || (test_bit(FLG_TWO_DCHAN, &csta->HW_Flags) && chan < 2)) {
- err = init_d_st(chanp);
- if (err)
- return err;
- } else {
- chanp->d_st = csta->channel->d_st;
- }
- chanp->data_open = 0;
- return 0;
-}
-
-int
-CallcNewChan(struct IsdnCardState *csta) {
- int i, err;
-
- chancount += 2;
- err = init_chan(0, csta);
- if (err)
- return err;
- err = init_chan(1, csta);
- if (err)
- return err;
- printk(KERN_INFO "HiSax: 2 channels added\n");
-
- for (i = 0; i < MAX_WAITING_CALLS; i++) {
- err = init_chan(i + 2, csta);
- if (err)
- return err;
- }
- printk(KERN_INFO "HiSax: MAX_WAITING_CALLS added\n");
- if (test_bit(FLG_PTP, &csta->channel->d_st->l2.flag)) {
- printk(KERN_INFO "LAYER2 WATCHING ESTABLISH\n");
- csta->channel->d_st->lli.l4l3(csta->channel->d_st,
- DL_ESTABLISH | REQUEST, NULL);
- }
- return (0);
-}
-
-static void
-release_d_st(struct Channel *chanp)
-{
- struct PStack *st = chanp->d_st;
-
- if (!st)
- return;
- releasestack_isdnl2(st);
- releasestack_isdnl3(st);
- HiSax_rmlist(st->l1.hardware, st);
- kfree(st);
- chanp->d_st = NULL;
-}
-
-void
-CallcFreeChan(struct IsdnCardState *csta)
-{
- int i;
-
- for (i = 0; i < 2; i++) {
- FsmDelTimer(&csta->channel[i].drel_timer, 74);
- FsmDelTimer(&csta->channel[i].dial_timer, 75);
- if (i || test_bit(FLG_TWO_DCHAN, &csta->HW_Flags))
- release_d_st(csta->channel + i);
- if (csta->channel[i].b_st) {
- release_b_st(csta->channel + i);
- kfree(csta->channel[i].b_st);
- csta->channel[i].b_st = NULL;
- } else
- printk(KERN_WARNING "CallcFreeChan b_st ch%d already freed\n", i);
- if (i || test_bit(FLG_TWO_DCHAN, &csta->HW_Flags)) {
- release_d_st(csta->channel + i);
- } else
- csta->channel[i].d_st = NULL;
- }
-}
-
-static void
-lldata_handler(struct PStack *st, int pr, void *arg)
-{
- struct Channel *chanp = (struct Channel *) st->lli.userdata;
- struct sk_buff *skb = arg;
-
- switch (pr) {
- case (DL_DATA | INDICATION):
- if (chanp->data_open) {
- if (chanp->debug & 0x800)
- link_debug(chanp, 0, "lldata: %d", skb->len);
- chanp->cs->iif.rcvcallb_skb(chanp->cs->myid, chanp->chan, skb);
- } else {
- link_debug(chanp, 0, "lldata: channel not open");
- dev_kfree_skb(skb);
- }
- break;
- case (DL_ESTABLISH | INDICATION):
- case (DL_ESTABLISH | CONFIRM):
- FsmEvent(&chanp->fi, EV_BC_EST, NULL);
- break;
- case (DL_RELEASE | INDICATION):
- case (DL_RELEASE | CONFIRM):
- FsmEvent(&chanp->fi, EV_BC_REL, NULL);
- break;
- default:
- printk(KERN_WARNING "lldata_handler unknown primitive %#x\n",
- pr);
- break;
- }
-}
-
-static void
-lltrans_handler(struct PStack *st, int pr, void *arg)
-{
- struct Channel *chanp = (struct Channel *) st->lli.userdata;
- struct sk_buff *skb = arg;
-
- switch (pr) {
- case (PH_DATA | INDICATION):
- if (chanp->data_open) {
- if (chanp->debug & 0x800)
- link_debug(chanp, 0, "lltrans: %d", skb->len);
- chanp->cs->iif.rcvcallb_skb(chanp->cs->myid, chanp->chan, skb);
- } else {
- link_debug(chanp, 0, "lltrans: channel not open");
- dev_kfree_skb(skb);
- }
- break;
- case (PH_ACTIVATE | INDICATION):
- case (PH_ACTIVATE | CONFIRM):
- FsmEvent(&chanp->fi, EV_BC_EST, NULL);
- break;
- case (PH_DEACTIVATE | INDICATION):
- case (PH_DEACTIVATE | CONFIRM):
- FsmEvent(&chanp->fi, EV_BC_REL, NULL);
- break;
- default:
- printk(KERN_WARNING "lltrans_handler unknown primitive %#x\n",
- pr);
- break;
- }
-}
-
-void
-lli_writewakeup(struct PStack *st, int len)
-{
- struct Channel *chanp = st->lli.userdata;
- isdn_ctrl ic;
-
- if (chanp->debug & 0x800)
- link_debug(chanp, 0, "llwakeup: %d", len);
- ic.driver = chanp->cs->myid;
- ic.command = ISDN_STAT_BSENT;
- ic.arg = chanp->chan;
- ic.parm.length = len;
- chanp->cs->iif.statcallb(&ic);
-}
-
-static int
-init_b_st(struct Channel *chanp, int incoming)
-{
- struct PStack *st = chanp->b_st;
- struct IsdnCardState *cs = chanp->cs;
- char tmp[16];
-
- st->l1.hardware = cs;
- if (chanp->leased)
- st->l1.bc = chanp->chan & 1;
- else
- st->l1.bc = chanp->proc->para.bchannel - 1;
- switch (chanp->l2_active_protocol) {
- case (ISDN_PROTO_L2_X75I):
- case (ISDN_PROTO_L2_HDLC):
- st->l1.mode = L1_MODE_HDLC;
- break;
- case (ISDN_PROTO_L2_HDLC_56K):
- st->l1.mode = L1_MODE_HDLC_56K;
- break;
- case (ISDN_PROTO_L2_TRANS):
- st->l1.mode = L1_MODE_TRANS;
- break;
- case (ISDN_PROTO_L2_MODEM):
- st->l1.mode = L1_MODE_V32;
- break;
- case (ISDN_PROTO_L2_FAX):
- st->l1.mode = L1_MODE_FAX;
- break;
- }
- chanp->bcs->conmsg = NULL;
- if (chanp->bcs->BC_SetStack(st, chanp->bcs))
- return (-1);
- st->l2.flag = 0;
- test_and_set_bit(FLG_LAPB, &st->l2.flag);
- st->l2.maxlen = MAX_DATA_SIZE;
- if (!incoming)
- test_and_set_bit(FLG_ORIG, &st->l2.flag);
- st->l2.T200 = 1000; /* 1000 milliseconds */
- st->l2.window = 7;
- st->l2.N200 = 4; /* try 4 times */
- st->l2.T203 = 5000; /* 5000 milliseconds */
- st->l3.debug = 0;
- switch (chanp->l2_active_protocol) {
- case (ISDN_PROTO_L2_X75I):
- sprintf(tmp, "Ch%d X.75", chanp->chan);
- setstack_isdnl2(st, tmp);
- setstack_l3bc(st, chanp);
- st->l2.l2l3 = lldata_handler;
- st->lli.userdata = chanp;
- test_and_clear_bit(FLG_LLI_L1WAKEUP, &st->lli.flag);
- test_and_set_bit(FLG_LLI_L2WAKEUP, &st->lli.flag);
- st->l2.l2m.debug = chanp->debug & 16;
- st->l2.debug = chanp->debug & 64;
- break;
- case (ISDN_PROTO_L2_HDLC):
- case (ISDN_PROTO_L2_HDLC_56K):
- case (ISDN_PROTO_L2_TRANS):
- case (ISDN_PROTO_L2_MODEM):
- case (ISDN_PROTO_L2_FAX):
- st->l1.l1l2 = lltrans_handler;
- st->lli.userdata = chanp;
- test_and_set_bit(FLG_LLI_L1WAKEUP, &st->lli.flag);
- test_and_clear_bit(FLG_LLI_L2WAKEUP, &st->lli.flag);
- setstack_transl2(st);
- setstack_l3bc(st, chanp);
- break;
- }
- test_and_set_bit(FLG_START_B, &chanp->Flags);
- return (0);
-}
-
-static void
-leased_l4l3(struct PStack *st, int pr, void *arg)
-{
- struct Channel *chanp = (struct Channel *) st->lli.userdata;
- struct sk_buff *skb = arg;
-
- switch (pr) {
- case (DL_DATA | REQUEST):
- link_debug(chanp, 0, "leased line d-channel DATA");
- dev_kfree_skb(skb);
- break;
- case (DL_ESTABLISH | REQUEST):
- st->l2.l2l1(st, PH_ACTIVATE | REQUEST, NULL);
- break;
- case (DL_RELEASE | REQUEST):
- break;
- default:
- printk(KERN_WARNING "transd_l4l3 unknown primitive %#x\n",
- pr);
- break;
- }
-}
-
-static void
-leased_l1l2(struct PStack *st, int pr, void *arg)
-{
- struct Channel *chanp = (struct Channel *) st->lli.userdata;
- struct sk_buff *skb = arg;
- int i, event = EV_LEASED_REL;
-
- switch (pr) {
- case (PH_DATA | INDICATION):
- link_debug(chanp, 0, "leased line d-channel DATA");
- dev_kfree_skb(skb);
- break;
- case (PH_ACTIVATE | INDICATION):
- case (PH_ACTIVATE | CONFIRM):
- event = EV_LEASED;
- /* fall through */
- case (PH_DEACTIVATE | INDICATION):
- case (PH_DEACTIVATE | CONFIRM):
- if (test_bit(FLG_TWO_DCHAN, &chanp->cs->HW_Flags))
- i = 1;
- else
- i = 0;
- while (i < 2) {
- FsmEvent(&chanp->fi, event, NULL);
- chanp++;
- i++;
- }
- break;
- default:
- printk(KERN_WARNING
- "transd_l1l2 unknown primitive %#x\n", pr);
- break;
- }
-}
-
-static void
-distr_debug(struct IsdnCardState *csta, int debugflags)
-{
- int i;
- struct Channel *chanp = csta->channel;
-
- for (i = 0; i < (2 + MAX_WAITING_CALLS); i++) {
- chanp[i].debug = debugflags;
- chanp[i].fi.debug = debugflags & 2;
- chanp[i].d_st->l2.l2m.debug = debugflags & 8;
- chanp[i].b_st->l2.l2m.debug = debugflags & 0x10;
- chanp[i].d_st->l2.debug = debugflags & 0x20;
- chanp[i].b_st->l2.debug = debugflags & 0x40;
- chanp[i].d_st->l3.l3m.debug = debugflags & 0x80;
- chanp[i].b_st->l3.l3m.debug = debugflags & 0x100;
- chanp[i].b_st->ma.tei_m.debug = debugflags & 0x200;
- chanp[i].b_st->ma.debug = debugflags & 0x200;
- chanp[i].d_st->l1.l1m.debug = debugflags & 0x1000;
- chanp[i].b_st->l1.l1m.debug = debugflags & 0x2000;
- }
- if (debugflags & 4)
- csta->debug |= DEB_DLOG_HEX;
- else
- csta->debug &= ~DEB_DLOG_HEX;
-}
-
-static char tmpbuf[256];
-
-static void
-capi_debug(struct Channel *chanp, capi_msg *cm)
-{
- char *t = tmpbuf;
-
- t += QuickHex(t, (u_char *)cm, (cm->Length > 50) ? 50 : cm->Length);
- t--;
- *t = 0;
- HiSax_putstatus(chanp->cs, "Ch", "%d CAPIMSG %s", chanp->chan, tmpbuf);
-}
-
-static void
-lli_got_fac_req(struct Channel *chanp, capi_msg *cm) {
- if ((cm->para[0] != 3) || (cm->para[1] != 0))
- return;
- if (cm->para[2] < 3)
- return;
- if (cm->para[4] != 0)
- return;
- switch (cm->para[3]) {
- case 4: /* Suspend */
- strncpy(chanp->setup.phone, &cm->para[5], cm->para[5] + 1);
- FsmEvent(&chanp->fi, EV_SUSPEND, cm);
- break;
- case 5: /* Resume */
- strncpy(chanp->setup.phone, &cm->para[5], cm->para[5] + 1);
- if (chanp->fi.state == ST_NULL) {
- FsmEvent(&chanp->fi, EV_RESUME, cm);
- } else {
- FsmDelTimer(&chanp->dial_timer, 72);
- FsmAddTimer(&chanp->dial_timer, 80, EV_RESUME, cm, 73);
- }
- break;
- }
-}
-
-static void
-lli_got_manufacturer(struct Channel *chanp, struct IsdnCardState *cs, capi_msg *cm) {
- if ((cs->typ == ISDN_CTYPE_ELSA) || (cs->typ == ISDN_CTYPE_ELSA_PNP) ||
- (cs->typ == ISDN_CTYPE_ELSA_PCI)) {
- if (cs->hw.elsa.MFlag) {
- cs->cardmsg(cs, CARD_AUX_IND, cm->para);
- }
- }
-}
-
-
-/***************************************************************/
-/* Limit the available number of channels for the current card */
-/***************************************************************/
-static int
-set_channel_limit(struct IsdnCardState *cs, int chanmax)
-{
- isdn_ctrl ic;
- int i, ii;
-
- if ((chanmax < 0) || (chanmax > 2))
- return (-EINVAL);
- cs->chanlimit = 0;
- for (ii = 0; ii < 2; ii++) {
- ic.driver = cs->myid;
- ic.command = ISDN_STAT_DISCH;
- ic.arg = ii;
- if (ii >= chanmax)
- ic.parm.num[0] = 0; /* disabled */
- else
- ic.parm.num[0] = 1; /* enabled */
- i = cs->iif.statcallb(&ic);
- if (i) return (-EINVAL);
- if (ii < chanmax)
- cs->chanlimit++;
- }
- return (0);
-} /* set_channel_limit */
-
-int
-HiSax_command(isdn_ctrl *ic)
-{
- struct IsdnCardState *csta = hisax_findcard(ic->driver);
- struct PStack *st;
- struct Channel *chanp;
- int i;
- u_int num;
-
- if (!csta) {
- printk(KERN_ERR
- "HiSax: if_command %d called with invalid driverId %d!\n",
- ic->command, ic->driver);
- return -ENODEV;
- }
- switch (ic->command) {
- case (ISDN_CMD_SETEAZ):
- chanp = csta->channel + ic->arg;
- break;
- case (ISDN_CMD_SETL2):
- chanp = csta->channel + (ic->arg & 0xff);
- if (chanp->debug & 1)
- link_debug(chanp, 1, "SETL2 card %d %ld",
- csta->cardnr + 1, ic->arg >> 8);
- chanp->l2_protocol = ic->arg >> 8;
- break;
- case (ISDN_CMD_SETL3):
- chanp = csta->channel + (ic->arg & 0xff);
- if (chanp->debug & 1)
- link_debug(chanp, 1, "SETL3 card %d %ld",
- csta->cardnr + 1, ic->arg >> 8);
- chanp->l3_protocol = ic->arg >> 8;
- break;
- case (ISDN_CMD_DIAL):
- chanp = csta->channel + (ic->arg & 0xff);
- if (chanp->debug & 1)
- link_debug(chanp, 1, "DIAL %s -> %s (%d,%d)",
- ic->parm.setup.eazmsn, ic->parm.setup.phone,
- ic->parm.setup.si1, ic->parm.setup.si2);
- memcpy(&chanp->setup, &ic->parm.setup, sizeof(setup_parm));
- if (!strcmp(chanp->setup.eazmsn, "0"))
- chanp->setup.eazmsn[0] = '\0';
- /* this solution is dirty and may change once
- * we implement a call-reference based call manager */
- if (chanp->fi.state == ST_NULL) {
- FsmEvent(&chanp->fi, EV_DIAL, NULL);
- } else {
- FsmDelTimer(&chanp->dial_timer, 70);
- FsmAddTimer(&chanp->dial_timer, 50, EV_DIAL, NULL, 71);
- }
- break;
- case (ISDN_CMD_ACCEPTB):
- chanp = csta->channel + ic->arg;
- if (chanp->debug & 1)
- link_debug(chanp, 1, "ACCEPTB");
- FsmEvent(&chanp->fi, EV_ACCEPTB, NULL);
- break;
- case (ISDN_CMD_ACCEPTD):
- chanp = csta->channel + ic->arg;
- memcpy(&chanp->setup, &ic->parm.setup, sizeof(setup_parm));
- if (chanp->debug & 1)
- link_debug(chanp, 1, "ACCEPTD");
- FsmEvent(&chanp->fi, EV_ACCEPTD, NULL);
- break;
- case (ISDN_CMD_HANGUP):
- chanp = csta->channel + ic->arg;
- if (chanp->debug & 1)
- link_debug(chanp, 1, "HANGUP");
- FsmEvent(&chanp->fi, EV_HANGUP, NULL);
- break;
- case (CAPI_PUT_MESSAGE):
- chanp = csta->channel + ic->arg;
- if (chanp->debug & 1)
- capi_debug(chanp, &ic->parm.cmsg);
- if (ic->parm.cmsg.Length < 8)
- break;
- switch (ic->parm.cmsg.Command) {
- case CAPI_FACILITY:
- if (ic->parm.cmsg.Subcommand == CAPI_REQ)
- lli_got_fac_req(chanp, &ic->parm.cmsg);
- break;
- case CAPI_MANUFACTURER:
- if (ic->parm.cmsg.Subcommand == CAPI_REQ)
- lli_got_manufacturer(chanp, csta, &ic->parm.cmsg);
- break;
- default:
- break;
- }
- break;
- case (ISDN_CMD_IOCTL):
- switch (ic->arg) {
- case (0):
- num = *(unsigned int *) ic->parm.num;
- HiSax_reportcard(csta->cardnr, num);
- break;
- case (1):
- num = *(unsigned int *) ic->parm.num;
- distr_debug(csta, num);
- printk(KERN_DEBUG "HiSax: debugging flags card %d set to %x\n",
- csta->cardnr + 1, num);
- HiSax_putstatus(csta, "debugging flags ",
- "card %d set to %x", csta->cardnr + 1, num);
- break;
- case (2):
- num = *(unsigned int *) ic->parm.num;
- csta->channel[0].b_st->l1.delay = num;
- csta->channel[1].b_st->l1.delay = num;
- HiSax_putstatus(csta, "delay ", "card %d set to %d ms",
- csta->cardnr + 1, num);
- printk(KERN_DEBUG "HiSax: delay card %d set to %d ms\n",
- csta->cardnr + 1, num);
- break;
- case (5): /* set card in leased mode */
- num = *(unsigned int *) ic->parm.num;
- if ((num < 1) || (num > 2)) {
- HiSax_putstatus(csta, "Set LEASED ",
- "wrong channel %d", num);
- printk(KERN_WARNING "HiSax: Set LEASED wrong channel %d\n",
- num);
- } else {
- num--;
- chanp = csta->channel + num;
- chanp->leased = 1;
- HiSax_putstatus(csta, "Card",
- "%d channel %d set leased mode\n",
- csta->cardnr + 1, num + 1);
- chanp->d_st->l1.l1l2 = leased_l1l2;
- chanp->d_st->lli.l4l3 = leased_l4l3;
- chanp->d_st->lli.l4l3(chanp->d_st,
- DL_ESTABLISH | REQUEST, NULL);
- }
- break;
- case (6): /* set B-channel test loop */
- num = *(unsigned int *) ic->parm.num;
- if (csta->stlist)
- csta->stlist->l2.l2l1(csta->stlist,
- PH_TESTLOOP | REQUEST, (void *) (long)num);
- break;
- case (7): /* set card in PTP mode */
- num = *(unsigned int *) ic->parm.num;
- if (test_bit(FLG_TWO_DCHAN, &csta->HW_Flags)) {
- printk(KERN_ERR "HiSax PTP mode only with one TEI possible\n");
- } else if (num) {
- test_and_set_bit(FLG_PTP, &csta->channel[0].d_st->l2.flag);
- test_and_set_bit(FLG_FIXED_TEI, &csta->channel[0].d_st->l2.flag);
- csta->channel[0].d_st->l2.tei = 0;
- HiSax_putstatus(csta, "set card ", "in PTP mode");
- printk(KERN_DEBUG "HiSax: set card in PTP mode\n");
- printk(KERN_INFO "LAYER2 WATCHING ESTABLISH\n");
- csta->channel[0].d_st->lli.l4l3(csta->channel[0].d_st,
- DL_ESTABLISH | REQUEST, NULL);
- } else {
- test_and_clear_bit(FLG_PTP, &csta->channel[0].d_st->l2.flag);
- test_and_clear_bit(FLG_FIXED_TEI, &csta->channel[0].d_st->l2.flag);
- HiSax_putstatus(csta, "set card ", "in PTMP mode");
- printk(KERN_DEBUG "HiSax: set card in PTMP mode\n");
- }
- break;
- case (8): /* set card in FIXED TEI mode */
- num = *(unsigned int *)ic->parm.num;
- chanp = csta->channel + (num & 1);
- num = num >> 1;
- if (num == 127) {
- test_and_clear_bit(FLG_FIXED_TEI, &chanp->d_st->l2.flag);
- chanp->d_st->l2.tei = -1;
- HiSax_putstatus(csta, "set card ", "in VAR TEI mode");
- printk(KERN_DEBUG "HiSax: set card in VAR TEI mode\n");
- } else {
- test_and_set_bit(FLG_FIXED_TEI, &chanp->d_st->l2.flag);
- chanp->d_st->l2.tei = num;
- HiSax_putstatus(csta, "set card ", "in FIXED TEI (%d) mode", num);
- printk(KERN_DEBUG "HiSax: set card in FIXED TEI (%d) mode\n",
- num);
- }
- chanp->d_st->lli.l4l3(chanp->d_st,
- DL_ESTABLISH | REQUEST, NULL);
- break;
- case (11):
- num = csta->debug & DEB_DLOG_HEX;
- csta->debug = *(unsigned int *) ic->parm.num;
- csta->debug |= num;
- HiSax_putstatus(cards[0].cs, "l1 debugging ",
- "flags card %d set to %x",
- csta->cardnr + 1, csta->debug);
- printk(KERN_DEBUG "HiSax: l1 debugging flags card %d set to %x\n",
- csta->cardnr + 1, csta->debug);
- break;
- case (13):
- csta->channel[0].d_st->l3.debug = *(unsigned int *) ic->parm.num;
- csta->channel[1].d_st->l3.debug = *(unsigned int *) ic->parm.num;
- HiSax_putstatus(cards[0].cs, "l3 debugging ",
- "flags card %d set to %x\n", csta->cardnr + 1,
- *(unsigned int *) ic->parm.num);
- printk(KERN_DEBUG "HiSax: l3 debugging flags card %d set to %x\n",
- csta->cardnr + 1, *(unsigned int *) ic->parm.num);
- break;
- case (10):
- i = *(unsigned int *) ic->parm.num;
- return (set_channel_limit(csta, i));
- default:
- if (csta->auxcmd)
- return (csta->auxcmd(csta, ic));
- printk(KERN_DEBUG "HiSax: invalid ioctl %d\n",
- (int) ic->arg);
- return (-EINVAL);
- }
- break;
-
- case (ISDN_CMD_PROCEED):
- chanp = csta->channel + ic->arg;
- if (chanp->debug & 1)
- link_debug(chanp, 1, "PROCEED");
- FsmEvent(&chanp->fi, EV_PROCEED, NULL);
- break;
-
- case (ISDN_CMD_ALERT):
- chanp = csta->channel + ic->arg;
- if (chanp->debug & 1)
- link_debug(chanp, 1, "ALERT");
- FsmEvent(&chanp->fi, EV_ALERT, NULL);
- break;
-
- case (ISDN_CMD_REDIR):
- chanp = csta->channel + ic->arg;
- if (chanp->debug & 1)
- link_debug(chanp, 1, "REDIR");
- memcpy(&chanp->setup, &ic->parm.setup, sizeof(setup_parm));
- FsmEvent(&chanp->fi, EV_REDIR, NULL);
- break;
-
- /* protocol specific io commands */
- case (ISDN_CMD_PROT_IO):
- for (st = csta->stlist; st; st = st->next)
- if (st->protocol == (ic->arg & 0xFF))
- return (st->lli.l4l3_proto(st, ic));
- return (-EINVAL);
- break;
- default:
- if (csta->auxcmd)
- return (csta->auxcmd(csta, ic));
- return (-EINVAL);
- }
- return (0);
-}
-
-int
-HiSax_writebuf_skb(int id, int chan, int ack, struct sk_buff *skb)
-{
- struct IsdnCardState *csta = hisax_findcard(id);
- struct Channel *chanp;
- struct PStack *st;
- int len = skb->len;
- struct sk_buff *nskb;
-
- if (!csta) {
- printk(KERN_ERR
- "HiSax: if_sendbuf called with invalid driverId!\n");
- return -ENODEV;
- }
- chanp = csta->channel + chan;
- st = chanp->b_st;
- if (!chanp->data_open) {
- link_debug(chanp, 1, "writebuf: channel not open");
- return -EIO;
- }
- if (len > MAX_DATA_SIZE) {
- link_debug(chanp, 1, "writebuf: packet too large (%d bytes)", len);
- printk(KERN_WARNING "HiSax_writebuf: packet too large (%d bytes) !\n",
- len);
- return -EINVAL;
- }
- if (len) {
- if ((len + chanp->bcs->tx_cnt) > MAX_DATA_MEM) {
- /* Must return 0 here, since this is not an error
- * but a temporary lack of resources.
- */
- if (chanp->debug & 0x800)
- link_debug(chanp, 1, "writebuf: no buffers for %d bytes", len);
- return 0;
- } else if (chanp->debug & 0x800)
- link_debug(chanp, 1, "writebuf %d/%d/%d", len, chanp->bcs->tx_cnt, MAX_DATA_MEM);
- nskb = skb_clone(skb, GFP_ATOMIC);
- if (nskb) {
- nskb->truesize = nskb->len;
- if (!ack)
- nskb->pkt_type = PACKET_NOACK;
- if (chanp->l2_active_protocol == ISDN_PROTO_L2_X75I)
- st->l3.l3l2(st, DL_DATA | REQUEST, nskb);
- else {
- chanp->bcs->tx_cnt += len;
- st->l2.l2l1(st, PH_DATA | REQUEST, nskb);
- }
- dev_kfree_skb(skb);
- } else
- len = 0;
- }
- return (len);
-}
diff --git a/drivers/isdn/hisax/config.c b/drivers/isdn/hisax/config.c
deleted file mode 100644
index de965115a183..000000000000
--- a/drivers/isdn/hisax/config.c
+++ /dev/null
@@ -1,1993 +0,0 @@
-/* $Id: config.c,v 2.84.2.5 2004/02/11 13:21:33 keil Exp $
- *
- * Author Karsten Keil
- * Copyright by Karsten Keil <keil@isdn4linux.de>
- * by Kai Germaschewski <kai.germaschewski@gmx.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- * For changes and modifications please read
- * Documentation/isdn/HiSax.cert
- *
- * based on the teles driver from Jan den Ouden
- *
- */
-
-#include <linux/types.h>
-#include <linux/stddef.h>
-#include <linux/timer.h>
-#include <linux/init.h>
-#include "hisax.h"
-#include <linux/module.h>
-#include <linux/kernel_stat.h>
-#include <linux/workqueue.h>
-#include <linux/interrupt.h>
-#include <linux/slab.h>
-#define HISAX_STATUS_BUFSIZE 4096
-
-/*
- * This structure array contains one entry per card. An entry looks
- * like this:
- *
- * { type, protocol, p0, p1, p2, NULL }
- *
- * type
- * 1 Teles 16.0 p0=irq p1=membase p2=iobase
- * 2 Teles 8.0 p0=irq p1=membase
- * 3 Teles 16.3 p0=irq p1=iobase
- * 4 Creatix PNP p0=irq p1=IO0 (ISAC) p2=IO1 (HSCX)
- * 5 AVM A1 (Fritz) p0=irq p1=iobase
- * 6 ELSA PC [p0=iobase] or nothing (autodetect)
- * 7 ELSA Quickstep p0=irq p1=iobase
- * 8 Teles PCMCIA p0=irq p1=iobase
- * 9 ITK ix1-micro p0=irq p1=iobase
- * 10 ELSA PCMCIA p0=irq p1=iobase
- * 11 Eicon.Diehl Diva p0=irq p1=iobase
- * 12 Asuscom ISDNLink p0=irq p1=iobase
- * 13 Teleint p0=irq p1=iobase
- * 14 Teles 16.3c p0=irq p1=iobase
- * 15 Sedlbauer speed p0=irq p1=iobase
- * 15 Sedlbauer PC/104 p0=irq p1=iobase
- * 15 Sedlbauer speed pci no parameter
- * 16 USR Sportster internal p0=irq p1=iobase
- * 17 MIC card p0=irq p1=iobase
- * 18 ELSA Quickstep 1000PCI no parameter
- * 19 Compaq ISDN S0 ISA card p0=irq p1=IO0 (HSCX) p2=IO1 (ISAC) p3=IO2
- * 20 Travers Technologies NETjet-S PCI card
- * 21 TELES PCI no parameter
- * 22 Sedlbauer Speed Star p0=irq p1=iobase
- * 23 reserved
- * 24 Dr Neuhaus Niccy PnP/PCI card p0=irq p1=IO0 p2=IO1 (PnP only)
- * 25 Teles S0Box p0=irq p1=iobase (from isapnp setup)
- * 26 AVM A1 PCMCIA (Fritz) p0=irq p1=iobase
- * 27 AVM PnP/PCI p0=irq p1=iobase (PCI no parameter)
- * 28 Sedlbauer Speed Fax+ p0=irq p1=iobase (from isapnp setup)
- * 29 Siemens I-Surf p0=irq p1=iobase p2=memory (from isapnp setup)
- * 30 ACER P10 p0=irq p1=iobase (from isapnp setup)
- * 31 HST Saphir p0=irq p1=iobase
- * 32 Telekom A4T none
- * 33 Scitel Quadro p0=subcontroller (4*S0, subctrl 1...4)
- * 34 Gazel ISDN cards
- * 35 HFC 2BDS0 PCI none
- * 36 Winbond 6692 PCI none
- * 37 HFC 2BDS0 S+/SP p0=irq p1=iobase
- * 38 Travers Technologies NETspider-U PCI card
- * 39 HFC 2BDS0-SP PCMCIA p0=irq p1=iobase
- * 40 hotplug interface
- * 41 Formula-n enter:now ISDN PCI a/b none
- *
- * protocol can be either ISDN_PTYPE_EURO or ISDN_PTYPE_1TR6 or ISDN_PTYPE_NI1
- *
- *
- */
-
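(Editor's aside, not part of the deleted file: as a reading aid for the table above, and using the field order of the FIRST_CARD/DEFAULT_CFG initializers further down in this file -- { typ, protocol, para[4], cs } -- a statically configured Teles 16.3 on IRQ 15 at I/O 0x180 running EURO-ISDN would correspond to an entry roughly like the sketch below; the IRQ/I/O values are simply the DEFAULT_CFG values used for CONFIG_HISAX_16_3.)

    /* sketch only: { typ, protocol, para[4], cs } as in FIRST_CARD below */
    { ISDN_CTYPE_16_3, ISDN_PTYPE_EURO, { 15, 0x180, 0, 0 }, NULL },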
-const char *CardType[] = {
- "No Card", "Teles 16.0", "Teles 8.0", "Teles 16.3",
- "Creatix/Teles PnP", "AVM A1", "Elsa ML", "Elsa Quickstep",
- "Teles PCMCIA", "ITK ix1-micro Rev.2", "Elsa PCMCIA",
- "Eicon.Diehl Diva", "ISDNLink", "TeleInt", "Teles 16.3c",
- "Sedlbauer Speed Card", "USR Sportster", "ith mic Linux",
- "Elsa PCI", "Compaq ISA", "NETjet-S", "Teles PCI",
- "Sedlbauer Speed Star (PCMCIA)", "AMD 7930", "NICCY", "S0Box",
- "AVM A1 (PCMCIA)", "AVM Fritz PnP/PCI", "Sedlbauer Speed Fax +",
- "Siemens I-Surf", "Acer P10", "HST Saphir", "Telekom A4T",
- "Scitel Quadro", "Gazel", "HFC 2BDS0 PCI", "Winbond 6692",
- "HFC 2BDS0 SX", "NETspider-U", "HFC-2BDS0-SP PCMCIA",
- "Hotplug", "Formula-n enter:now PCI a/b",
-};
-
-#ifdef CONFIG_HISAX_ELSA
-#define DEFAULT_CARD ISDN_CTYPE_ELSA
-#define DEFAULT_CFG {0, 0, 0, 0}
-#endif
-
-#ifdef CONFIG_HISAX_AVM_A1
-#undef DEFAULT_CARD
-#undef DEFAULT_CFG
-#define DEFAULT_CARD ISDN_CTYPE_A1
-#define DEFAULT_CFG {10, 0x340, 0, 0}
-#endif
-
-#ifdef CONFIG_HISAX_AVM_A1_PCMCIA
-#undef DEFAULT_CARD
-#undef DEFAULT_CFG
-#define DEFAULT_CARD ISDN_CTYPE_A1_PCMCIA
-#define DEFAULT_CFG {11, 0x170, 0, 0}
-#endif
-
-#ifdef CONFIG_HISAX_FRITZPCI
-#undef DEFAULT_CARD
-#undef DEFAULT_CFG
-#define DEFAULT_CARD ISDN_CTYPE_FRITZPCI
-#define DEFAULT_CFG {0, 0, 0, 0}
-#endif
-
-#ifdef CONFIG_HISAX_16_3
-#undef DEFAULT_CARD
-#undef DEFAULT_CFG
-#define DEFAULT_CARD ISDN_CTYPE_16_3
-#define DEFAULT_CFG {15, 0x180, 0, 0}
-#endif
-
-#ifdef CONFIG_HISAX_S0BOX
-#undef DEFAULT_CARD
-#undef DEFAULT_CFG
-#define DEFAULT_CARD ISDN_CTYPE_S0BOX
-#define DEFAULT_CFG {7, 0x378, 0, 0}
-#endif
-
-#ifdef CONFIG_HISAX_16_0
-#undef DEFAULT_CARD
-#undef DEFAULT_CFG
-#define DEFAULT_CARD ISDN_CTYPE_16_0
-#define DEFAULT_CFG {15, 0xd0000, 0xd80, 0}
-#endif
-
-#ifdef CONFIG_HISAX_TELESPCI
-#undef DEFAULT_CARD
-#undef DEFAULT_CFG
-#define DEFAULT_CARD ISDN_CTYPE_TELESPCI
-#define DEFAULT_CFG {0, 0, 0, 0}
-#endif
-
-#ifdef CONFIG_HISAX_IX1MICROR2
-#undef DEFAULT_CARD
-#undef DEFAULT_CFG
-#define DEFAULT_CARD ISDN_CTYPE_IX1MICROR2
-#define DEFAULT_CFG {5, 0x390, 0, 0}
-#endif
-
-#ifdef CONFIG_HISAX_DIEHLDIVA
-#undef DEFAULT_CARD
-#undef DEFAULT_CFG
-#define DEFAULT_CARD ISDN_CTYPE_DIEHLDIVA
-#define DEFAULT_CFG {0, 0x0, 0, 0}
-#endif
-
-#ifdef CONFIG_HISAX_ASUSCOM
-#undef DEFAULT_CARD
-#undef DEFAULT_CFG
-#define DEFAULT_CARD ISDN_CTYPE_ASUSCOM
-#define DEFAULT_CFG {5, 0x200, 0, 0}
-#endif
-
-#ifdef CONFIG_HISAX_TELEINT
-#undef DEFAULT_CARD
-#undef DEFAULT_CFG
-#define DEFAULT_CARD ISDN_CTYPE_TELEINT
-#define DEFAULT_CFG {5, 0x300, 0, 0}
-#endif
-
-#ifdef CONFIG_HISAX_SEDLBAUER
-#undef DEFAULT_CARD
-#undef DEFAULT_CFG
-#define DEFAULT_CARD ISDN_CTYPE_SEDLBAUER
-#define DEFAULT_CFG {11, 0x270, 0, 0}
-#endif
-
-#ifdef CONFIG_HISAX_SPORTSTER
-#undef DEFAULT_CARD
-#undef DEFAULT_CFG
-#define DEFAULT_CARD ISDN_CTYPE_SPORTSTER
-#define DEFAULT_CFG {7, 0x268, 0, 0}
-#endif
-
-#ifdef CONFIG_HISAX_MIC
-#undef DEFAULT_CARD
-#undef DEFAULT_CFG
-#define DEFAULT_CARD ISDN_CTYPE_MIC
-#define DEFAULT_CFG {12, 0x3e0, 0, 0}
-#endif
-
-#ifdef CONFIG_HISAX_NETJET
-#undef DEFAULT_CARD
-#undef DEFAULT_CFG
-#define DEFAULT_CARD ISDN_CTYPE_NETJET_S
-#define DEFAULT_CFG {0, 0, 0, 0}
-#endif
-
-#ifdef CONFIG_HISAX_HFCS
-#undef DEFAULT_CARD
-#undef DEFAULT_CFG
-#define DEFAULT_CARD ISDN_CTYPE_TELES3C
-#define DEFAULT_CFG {5, 0x500, 0, 0}
-#endif
-
-#ifdef CONFIG_HISAX_HFC_PCI
-#undef DEFAULT_CARD
-#undef DEFAULT_CFG
-#define DEFAULT_CARD ISDN_CTYPE_HFC_PCI
-#define DEFAULT_CFG {0, 0, 0, 0}
-#endif
-
-#ifdef CONFIG_HISAX_HFC_SX
-#undef DEFAULT_CARD
-#undef DEFAULT_CFG
-#define DEFAULT_CARD ISDN_CTYPE_HFC_SX
-#define DEFAULT_CFG {5, 0x2E0, 0, 0}
-#endif
-
-#ifdef CONFIG_HISAX_NICCY
-#undef DEFAULT_CARD
-#undef DEFAULT_CFG
-#define DEFAULT_CARD ISDN_CTYPE_NICCY
-#define DEFAULT_CFG {0, 0x0, 0, 0}
-#endif
-
-#ifdef CONFIG_HISAX_ISURF
-#undef DEFAULT_CARD
-#undef DEFAULT_CFG
-#define DEFAULT_CARD ISDN_CTYPE_ISURF
-#define DEFAULT_CFG {5, 0x100, 0xc8000, 0}
-#endif
-
-#ifdef CONFIG_HISAX_HSTSAPHIR
-#undef DEFAULT_CARD
-#undef DEFAULT_CFG
-#define DEFAULT_CARD ISDN_CTYPE_HSTSAPHIR
-#define DEFAULT_CFG {5, 0x250, 0, 0}
-#endif
-
-#ifdef CONFIG_HISAX_BKM_A4T
-#undef DEFAULT_CARD
-#undef DEFAULT_CFG
-#define DEFAULT_CARD ISDN_CTYPE_BKM_A4T
-#define DEFAULT_CFG {0, 0x0, 0, 0}
-#endif
-
-#ifdef CONFIG_HISAX_SCT_QUADRO
-#undef DEFAULT_CARD
-#undef DEFAULT_CFG
-#define DEFAULT_CARD ISDN_CTYPE_SCT_QUADRO
-#define DEFAULT_CFG {1, 0x0, 0, 0}
-#endif
-
-#ifdef CONFIG_HISAX_GAZEL
-#undef DEFAULT_CARD
-#undef DEFAULT_CFG
-#define DEFAULT_CARD ISDN_CTYPE_GAZEL
-#define DEFAULT_CFG {15, 0x180, 0, 0}
-#endif
-
-#ifdef CONFIG_HISAX_W6692
-#undef DEFAULT_CARD
-#undef DEFAULT_CFG
-#define DEFAULT_CARD ISDN_CTYPE_W6692
-#define DEFAULT_CFG {0, 0, 0, 0}
-#endif
-
-#ifdef CONFIG_HISAX_NETJET_U
-#undef DEFAULT_CARD
-#undef DEFAULT_CFG
-#define DEFAULT_CARD ISDN_CTYPE_NETJET_U
-#define DEFAULT_CFG {0, 0, 0, 0}
-#endif
-
-#ifdef CONFIG_HISAX_1TR6
-#define DEFAULT_PROTO ISDN_PTYPE_1TR6
-#define DEFAULT_PROTO_NAME "1TR6"
-#endif
-#ifdef CONFIG_HISAX_NI1
-#undef DEFAULT_PROTO
-#define DEFAULT_PROTO ISDN_PTYPE_NI1
-#undef DEFAULT_PROTO_NAME
-#define DEFAULT_PROTO_NAME "NI1"
-#endif
-#ifdef CONFIG_HISAX_EURO
-#undef DEFAULT_PROTO
-#define DEFAULT_PROTO ISDN_PTYPE_EURO
-#undef DEFAULT_PROTO_NAME
-#define DEFAULT_PROTO_NAME "EURO"
-#endif
-#ifndef DEFAULT_PROTO
-#define DEFAULT_PROTO ISDN_PTYPE_UNKNOWN
-#define DEFAULT_PROTO_NAME "UNKNOWN"
-#endif
-#ifndef DEFAULT_CARD
-#define DEFAULT_CARD 0
-#define DEFAULT_CFG {0, 0, 0, 0}
-#endif
-
-#define FIRST_CARD { \
- DEFAULT_CARD, \
- DEFAULT_PROTO, \
- DEFAULT_CFG, \
- NULL, \
- }
-
-struct IsdnCard cards[HISAX_MAX_CARDS] = {
- FIRST_CARD,
-};
-
-#define HISAX_IDSIZE (HISAX_MAX_CARDS * 8)
-static char HiSaxID[HISAX_IDSIZE] = { 0, };
-
-static char *HiSax_id = HiSaxID;
-#ifdef MODULE
-/* Variables for insmod */
-static int type[HISAX_MAX_CARDS] = { 0, };
-static int protocol[HISAX_MAX_CARDS] = { 0, };
-static int io[HISAX_MAX_CARDS] = { 0, };
-#undef IO0_IO1
-#ifdef CONFIG_HISAX_16_3
-#define IO0_IO1
-#endif
-#ifdef CONFIG_HISAX_NICCY
-#undef IO0_IO1
-#define IO0_IO1
-#endif
-#ifdef IO0_IO1
-static int io0[HISAX_MAX_CARDS] = { 0, };
-static int io1[HISAX_MAX_CARDS] = { 0, };
-#endif
-static int irq[HISAX_MAX_CARDS] = { 0, };
-static int mem[HISAX_MAX_CARDS] = { 0, };
-static char *id = HiSaxID;
-
-MODULE_DESCRIPTION("ISDN4Linux: Driver for passive ISDN cards");
-MODULE_AUTHOR("Karsten Keil");
-MODULE_LICENSE("GPL");
-module_param_array(type, int, NULL, 0);
-module_param_array(protocol, int, NULL, 0);
-module_param_hw_array(io, int, ioport, NULL, 0);
-module_param_hw_array(irq, int, irq, NULL, 0);
-module_param_hw_array(mem, int, iomem, NULL, 0);
-module_param(id, charp, 0);
-#ifdef IO0_IO1
-module_param_hw_array(io0, int, ioport, NULL, 0);
-module_param_hw_array(io1, int, ioport, NULL, 0);
-#endif
-#endif /* MODULE */
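(Editor's aside: given the module parameters declared above, loading the driver for, say, a Teles 16.3 at IRQ 15 / I/O 0x180 would look roughly like the line below. The module name "hisax" and the numeric protocol value 2 for ISDN_PTYPE_EURO are assumptions not spelled out in this file; the type value 3 comes from the card table earlier in the file.)

    modprobe hisax type=3 protocol=2 irq=15 io=0x180 id=HiSax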
-
-int nrcards;
-
-char *HiSax_getrev(const char *revision)
-{
- char *rev;
- char *p;
-
- if ((p = strchr(revision, ':'))) {
- rev = p + 2;
- p = strchr(rev, '$');
- *--p = 0;
- } else
- rev = "???";
- return rev;
-}
-
-static void __init HiSaxVersion(void)
-{
- char tmp[64];
-
- printk(KERN_INFO "HiSax: Linux Driver for passive ISDN cards\n");
-#ifdef MODULE
- printk(KERN_INFO "HiSax: Version 3.5 (module)\n");
-#else
- printk(KERN_INFO "HiSax: Version 3.5 (kernel)\n");
-#endif
- strcpy(tmp, l1_revision);
- printk(KERN_INFO "HiSax: Layer1 Revision %s\n", HiSax_getrev(tmp));
- strcpy(tmp, l2_revision);
- printk(KERN_INFO "HiSax: Layer2 Revision %s\n", HiSax_getrev(tmp));
- strcpy(tmp, tei_revision);
- printk(KERN_INFO "HiSax: TeiMgr Revision %s\n", HiSax_getrev(tmp));
- strcpy(tmp, l3_revision);
- printk(KERN_INFO "HiSax: Layer3 Revision %s\n", HiSax_getrev(tmp));
- strcpy(tmp, lli_revision);
- printk(KERN_INFO "HiSax: LinkLayer Revision %s\n",
- HiSax_getrev(tmp));
-}
-
-#ifndef MODULE
-#define MAX_ARG (HISAX_MAX_CARDS * 5)
-static int __init HiSax_setup(char *line)
-{
- int i, j, argc;
- int ints[MAX_ARG + 1];
- char *str;
-
- str = get_options(line, MAX_ARG, ints);
- argc = ints[0];
- printk(KERN_DEBUG "HiSax_setup: argc(%d) str(%s)\n", argc, str);
- i = 0;
- j = 1;
- while (argc && (i < HISAX_MAX_CARDS)) {
- cards[i].protocol = DEFAULT_PROTO;
- if (argc) {
- cards[i].typ = ints[j];
- j++;
- argc--;
- }
- if (argc) {
- cards[i].protocol = ints[j];
- j++;
- argc--;
- }
- if (argc) {
- cards[i].para[0] = ints[j];
- j++;
- argc--;
- }
- if (argc) {
- cards[i].para[1] = ints[j];
- j++;
- argc--;
- }
- if (argc) {
- cards[i].para[2] = ints[j];
- j++;
- argc--;
- }
- i++;
- }
- if (str && *str) {
- if (strlen(str) < HISAX_IDSIZE)
- strcpy(HiSaxID, str);
- else
- printk(KERN_WARNING "HiSax: ID too long!");
- } else
- strcpy(HiSaxID, "HiSax");
-
- HiSax_id = HiSaxID;
- return 1;
-}
-
-__setup("hisax=", HiSax_setup);
-#endif /* MODULE */
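(Editor's aside: HiSax_setup() above consumes the comma-separated integers in the order typ, protocol, para[0], para[1], para[2] and keeps any trailing string as the driver ID, so the built-in equivalent of the module example earlier would be a boot option roughly like the line below; the numeric protocol value 2 is the same assumption as before.)

    hisax=3,2,15,0x180,MyHiSax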
-
-#if CARD_TELES0
-extern int setup_teles0(struct IsdnCard *card);
-#endif
-
-#if CARD_TELES3
-extern int setup_teles3(struct IsdnCard *card);
-#endif
-
-#if CARD_S0BOX
-extern int setup_s0box(struct IsdnCard *card);
-#endif
-
-#if CARD_TELESPCI
-extern int setup_telespci(struct IsdnCard *card);
-#endif
-
-#if CARD_AVM_A1
-extern int setup_avm_a1(struct IsdnCard *card);
-#endif
-
-#if CARD_AVM_A1_PCMCIA
-extern int setup_avm_a1_pcmcia(struct IsdnCard *card);
-#endif
-
-#if CARD_FRITZPCI
-extern int setup_avm_pcipnp(struct IsdnCard *card);
-#endif
-
-#if CARD_ELSA
-extern int setup_elsa(struct IsdnCard *card);
-#endif
-
-#if CARD_IX1MICROR2
-extern int setup_ix1micro(struct IsdnCard *card);
-#endif
-
-#if CARD_DIEHLDIVA
-extern int setup_diva(struct IsdnCard *card);
-#endif
-
-#if CARD_ASUSCOM
-extern int setup_asuscom(struct IsdnCard *card);
-#endif
-
-#if CARD_TELEINT
-extern int setup_TeleInt(struct IsdnCard *card);
-#endif
-
-#if CARD_SEDLBAUER
-extern int setup_sedlbauer(struct IsdnCard *card);
-#endif
-
-#if CARD_SPORTSTER
-extern int setup_sportster(struct IsdnCard *card);
-#endif
-
-#if CARD_MIC
-extern int setup_mic(struct IsdnCard *card);
-#endif
-
-#if CARD_NETJET_S
-extern int setup_netjet_s(struct IsdnCard *card);
-#endif
-
-#if CARD_HFCS
-extern int setup_hfcs(struct IsdnCard *card);
-#endif
-
-#if CARD_HFC_PCI
-extern int setup_hfcpci(struct IsdnCard *card);
-#endif
-
-#if CARD_HFC_SX
-extern int setup_hfcsx(struct IsdnCard *card);
-#endif
-
-#if CARD_NICCY
-extern int setup_niccy(struct IsdnCard *card);
-#endif
-
-#if CARD_ISURF
-extern int setup_isurf(struct IsdnCard *card);
-#endif
-
-#if CARD_HSTSAPHIR
-extern int setup_saphir(struct IsdnCard *card);
-#endif
-
-#if CARD_BKM_A4T
-extern int setup_bkm_a4t(struct IsdnCard *card);
-#endif
-
-#if CARD_SCT_QUADRO
-extern int setup_sct_quadro(struct IsdnCard *card);
-#endif
-
-#if CARD_GAZEL
-extern int setup_gazel(struct IsdnCard *card);
-#endif
-
-#if CARD_W6692
-extern int setup_w6692(struct IsdnCard *card);
-#endif
-
-#if CARD_NETJET_U
-extern int setup_netjet_u(struct IsdnCard *card);
-#endif
-
-#if CARD_FN_ENTERNOW_PCI
-extern int setup_enternow_pci(struct IsdnCard *card);
-#endif
-
-/*
- * Find card with given driverId
- */
-static inline struct IsdnCardState *hisax_findcard(int driverid)
-{
- int i;
-
- for (i = 0; i < nrcards; i++)
- if (cards[i].cs)
- if (cards[i].cs->myid == driverid)
- return cards[i].cs;
- return NULL;
-}
-
-/*
- * Find card with given card number
- */
-#if 0
-struct IsdnCardState *hisax_get_card(int cardnr)
-{
- if ((cardnr <= nrcards) && (cardnr > 0))
- if (cards[cardnr - 1].cs)
- return cards[cardnr - 1].cs;
- return NULL;
-}
-#endif /* 0 */
-
-static int HiSax_readstatus(u_char __user *buf, int len, int id, int channel)
-{
- int count, cnt;
- u_char __user *p = buf;
- struct IsdnCardState *cs = hisax_findcard(id);
-
- if (cs) {
- if (len > HISAX_STATUS_BUFSIZE) {
- printk(KERN_WARNING
- "HiSax: status overflow readstat %d/%d\n",
- len, HISAX_STATUS_BUFSIZE);
- }
- count = cs->status_end - cs->status_read + 1;
- if (count >= len)
- count = len;
- if (copy_to_user(p, cs->status_read, count))
- return -EFAULT;
- cs->status_read += count;
- if (cs->status_read > cs->status_end)
- cs->status_read = cs->status_buf;
- p += count;
- count = len - count;
- while (count) {
- if (count > HISAX_STATUS_BUFSIZE)
- cnt = HISAX_STATUS_BUFSIZE;
- else
- cnt = count;
- if (copy_to_user(p, cs->status_read, cnt))
- return -EFAULT;
- p += cnt;
- cs->status_read += cnt % HISAX_STATUS_BUFSIZE;
- count -= cnt;
- }
- return len;
- } else {
- printk(KERN_ERR
- "HiSax: if_readstatus called with invalid driverId!\n");
- return -ENODEV;
- }
-}
-
-int jiftime(char *s, long mark)
-{
- s += 8;
-
- *s-- = '\0';
- *s-- = mark % 10 + '0';
- mark /= 10;
- *s-- = mark % 10 + '0';
- mark /= 10;
- *s-- = '.';
- *s-- = mark % 10 + '0';
- mark /= 10;
- *s-- = mark % 6 + '0';
- mark /= 6;
- *s-- = ':';
- *s-- = mark % 10 + '0';
- mark /= 10;
- *s-- = mark % 10 + '0';
- return 8;
-}
-
-static u_char tmpbuf[HISAX_STATUS_BUFSIZE];
-
-void VHiSax_putstatus(struct IsdnCardState *cs, char *head, const char *fmt,
- va_list args)
-{
- /* if head == NULL the fmt contains the full info */
-
- u_long flags;
- int count, i;
- u_char *p;
- isdn_ctrl ic;
- int len;
- const u_char *data;
-
- if (!cs) {
- printk(KERN_WARNING "HiSax: No CardStatus for message");
- return;
- }
- spin_lock_irqsave(&cs->statlock, flags);
- if (head) {
- p = tmpbuf;
- p += jiftime(p, jiffies);
- p += sprintf(p, " %s", head);
- p += vsprintf(p, fmt, args);
- *p++ = '\n';
- *p = 0;
- len = p - tmpbuf;
- data = tmpbuf;
- } else {
- data = fmt;
- len = strlen(fmt);
- }
- if (len > HISAX_STATUS_BUFSIZE) {
- spin_unlock_irqrestore(&cs->statlock, flags);
- printk(KERN_WARNING "HiSax: status overflow %d/%d\n",
- len, HISAX_STATUS_BUFSIZE);
- return;
- }
- count = len;
- i = cs->status_end - cs->status_write + 1;
- if (i >= len)
- i = len;
- len -= i;
- memcpy(cs->status_write, data, i);
- cs->status_write += i;
- if (cs->status_write > cs->status_end)
- cs->status_write = cs->status_buf;
- if (len) {
- memcpy(cs->status_write, data + i, len);
- cs->status_write += len;
- }
-#ifdef KERNELSTACK_DEBUG
- i = (ulong) & len - current->kernel_stack_page;
- sprintf(tmpbuf, "kstack %s %lx use %ld\n", current->comm,
- current->kernel_stack_page, i);
- len = strlen(tmpbuf);
- for (p = tmpbuf, i = len; i > 0; i--, p++) {
- *cs->status_write++ = *p;
- if (cs->status_write > cs->status_end)
- cs->status_write = cs->status_buf;
- count++;
- }
-#endif
- spin_unlock_irqrestore(&cs->statlock, flags);
- if (count) {
- ic.command = ISDN_STAT_STAVAIL;
- ic.driver = cs->myid;
- ic.arg = count;
- cs->iif.statcallb(&ic);
- }
-}
-
-void HiSax_putstatus(struct IsdnCardState *cs, char *head, const char *fmt, ...)
-{
- va_list args;
-
- va_start(args, fmt);
- VHiSax_putstatus(cs, head, fmt, args);
- va_end(args);
-}
-
-int ll_run(struct IsdnCardState *cs, int addfeatures)
-{
- isdn_ctrl ic;
-
- ic.driver = cs->myid;
- ic.command = ISDN_STAT_RUN;
- cs->iif.features |= addfeatures;
- cs->iif.statcallb(&ic);
- return 0;
-}
-
-static void ll_stop(struct IsdnCardState *cs)
-{
- isdn_ctrl ic;
-
- ic.command = ISDN_STAT_STOP;
- ic.driver = cs->myid;
- cs->iif.statcallb(&ic);
- // CallcFreeChan(cs);
-}
-
-static void ll_unload(struct IsdnCardState *cs)
-{
- isdn_ctrl ic;
-
- ic.command = ISDN_STAT_UNLOAD;
- ic.driver = cs->myid;
- cs->iif.statcallb(&ic);
- kfree(cs->status_buf);
- cs->status_read = NULL;
- cs->status_write = NULL;
- cs->status_end = NULL;
- kfree(cs->dlog);
- cs->dlog = NULL;
-}
-
-static void closecard(int cardnr)
-{
- struct IsdnCardState *csta = cards[cardnr].cs;
-
- if (csta->bcs->BC_Close != NULL) {
- csta->bcs->BC_Close(csta->bcs + 1);
- csta->bcs->BC_Close(csta->bcs);
- }
-
- skb_queue_purge(&csta->rq);
- skb_queue_purge(&csta->sq);
- kfree(csta->rcvbuf);
- csta->rcvbuf = NULL;
- if (csta->tx_skb) {
- dev_kfree_skb(csta->tx_skb);
- csta->tx_skb = NULL;
- }
- if (csta->DC_Close != NULL) {
- csta->DC_Close(csta);
- }
- if (csta->cardmsg)
- csta->cardmsg(csta, CARD_RELEASE, NULL);
- if (csta->dbusytimer.function != NULL) // FIXME?
- del_timer(&csta->dbusytimer);
- ll_unload(csta);
-}
-
-static irqreturn_t card_irq(int intno, void *dev_id)
-{
- struct IsdnCardState *cs = dev_id;
- irqreturn_t ret = cs->irq_func(intno, cs);
-
- if (ret == IRQ_HANDLED)
- cs->irq_cnt++;
- return ret;
-}
-
-static int init_card(struct IsdnCardState *cs)
-{
- int irq_cnt, cnt = 3, ret;
-
- if (!cs->irq) {
- ret = cs->cardmsg(cs, CARD_INIT, NULL);
- return (ret);
- }
- irq_cnt = cs->irq_cnt = 0;
- printk(KERN_INFO "%s: IRQ %d count %d\n", CardType[cs->typ],
- cs->irq, irq_cnt);
- if (request_irq(cs->irq, card_irq, cs->irq_flags, "HiSax", cs)) {
- printk(KERN_WARNING "HiSax: couldn't get interrupt %d\n",
- cs->irq);
- return 1;
- }
- while (cnt) {
- cs->cardmsg(cs, CARD_INIT, NULL);
- /* Timeout 10ms */
- msleep(10);
- printk(KERN_INFO "%s: IRQ %d count %d\n",
- CardType[cs->typ], cs->irq, cs->irq_cnt);
- if (cs->irq_cnt == irq_cnt) {
- printk(KERN_WARNING
- "%s: IRQ(%d) getting no interrupts during init %d\n",
- CardType[cs->typ], cs->irq, 4 - cnt);
- if (cnt == 1) {
- free_irq(cs->irq, cs);
- return 2;
- } else {
- cs->cardmsg(cs, CARD_RESET, NULL);
- cnt--;
- }
- } else {
- cs->cardmsg(cs, CARD_TEST, NULL);
- return 0;
- }
- }
- return 3;
-}
-
-static int hisax_cs_setup_card(struct IsdnCard *card)
-{
- int ret;
-
- switch (card->typ) {
-#if CARD_TELES0
- case ISDN_CTYPE_16_0:
- case ISDN_CTYPE_8_0:
- ret = setup_teles0(card);
- break;
-#endif
-#if CARD_TELES3
- case ISDN_CTYPE_16_3:
- case ISDN_CTYPE_PNP:
- case ISDN_CTYPE_TELESPCMCIA:
- case ISDN_CTYPE_COMPAQ_ISA:
- ret = setup_teles3(card);
- break;
-#endif
-#if CARD_S0BOX
- case ISDN_CTYPE_S0BOX:
- ret = setup_s0box(card);
- break;
-#endif
-#if CARD_TELESPCI
- case ISDN_CTYPE_TELESPCI:
- ret = setup_telespci(card);
- break;
-#endif
-#if CARD_AVM_A1
- case ISDN_CTYPE_A1:
- ret = setup_avm_a1(card);
- break;
-#endif
-#if CARD_AVM_A1_PCMCIA
- case ISDN_CTYPE_A1_PCMCIA:
- ret = setup_avm_a1_pcmcia(card);
- break;
-#endif
-#if CARD_FRITZPCI
- case ISDN_CTYPE_FRITZPCI:
- ret = setup_avm_pcipnp(card);
- break;
-#endif
-#if CARD_ELSA
- case ISDN_CTYPE_ELSA:
- case ISDN_CTYPE_ELSA_PNP:
- case ISDN_CTYPE_ELSA_PCMCIA:
- case ISDN_CTYPE_ELSA_PCI:
- ret = setup_elsa(card);
- break;
-#endif
-#if CARD_IX1MICROR2
- case ISDN_CTYPE_IX1MICROR2:
- ret = setup_ix1micro(card);
- break;
-#endif
-#if CARD_DIEHLDIVA
- case ISDN_CTYPE_DIEHLDIVA:
- ret = setup_diva(card);
- break;
-#endif
-#if CARD_ASUSCOM
- case ISDN_CTYPE_ASUSCOM:
- ret = setup_asuscom(card);
- break;
-#endif
-#if CARD_TELEINT
- case ISDN_CTYPE_TELEINT:
- ret = setup_TeleInt(card);
- break;
-#endif
-#if CARD_SEDLBAUER
- case ISDN_CTYPE_SEDLBAUER:
- case ISDN_CTYPE_SEDLBAUER_PCMCIA:
- case ISDN_CTYPE_SEDLBAUER_FAX:
- ret = setup_sedlbauer(card);
- break;
-#endif
-#if CARD_SPORTSTER
- case ISDN_CTYPE_SPORTSTER:
- ret = setup_sportster(card);
- break;
-#endif
-#if CARD_MIC
- case ISDN_CTYPE_MIC:
- ret = setup_mic(card);
- break;
-#endif
-#if CARD_NETJET_S
- case ISDN_CTYPE_NETJET_S:
- ret = setup_netjet_s(card);
- break;
-#endif
-#if CARD_HFCS
- case ISDN_CTYPE_TELES3C:
- case ISDN_CTYPE_ACERP10:
- ret = setup_hfcs(card);
- break;
-#endif
-#if CARD_HFC_PCI
- case ISDN_CTYPE_HFC_PCI:
- ret = setup_hfcpci(card);
- break;
-#endif
-#if CARD_HFC_SX
- case ISDN_CTYPE_HFC_SX:
- ret = setup_hfcsx(card);
- break;
-#endif
-#if CARD_NICCY
- case ISDN_CTYPE_NICCY:
- ret = setup_niccy(card);
- break;
-#endif
-#if CARD_ISURF
- case ISDN_CTYPE_ISURF:
- ret = setup_isurf(card);
- break;
-#endif
-#if CARD_HSTSAPHIR
- case ISDN_CTYPE_HSTSAPHIR:
- ret = setup_saphir(card);
- break;
-#endif
-#if CARD_BKM_A4T
- case ISDN_CTYPE_BKM_A4T:
- ret = setup_bkm_a4t(card);
- break;
-#endif
-#if CARD_SCT_QUADRO
- case ISDN_CTYPE_SCT_QUADRO:
- ret = setup_sct_quadro(card);
- break;
-#endif
-#if CARD_GAZEL
- case ISDN_CTYPE_GAZEL:
- ret = setup_gazel(card);
- break;
-#endif
-#if CARD_W6692
- case ISDN_CTYPE_W6692:
- ret = setup_w6692(card);
- break;
-#endif
-#if CARD_NETJET_U
- case ISDN_CTYPE_NETJET_U:
- ret = setup_netjet_u(card);
- break;
-#endif
-#if CARD_FN_ENTERNOW_PCI
- case ISDN_CTYPE_ENTERNOW:
- ret = setup_enternow_pci(card);
- break;
-#endif
- case ISDN_CTYPE_DYNAMIC:
- ret = 2;
- break;
- default:
- printk(KERN_WARNING
- "HiSax: Support for %s Card not selected\n",
- CardType[card->typ]);
- ret = 0;
- break;
- }
-
- return ret;
-}
-
-static int hisax_cs_new(int cardnr, char *id, struct IsdnCard *card,
- struct IsdnCardState **cs_out, int *busy_flag,
- struct module *lockowner)
-{
- struct IsdnCardState *cs;
-
- *cs_out = NULL;
-
- cs = kzalloc(sizeof(struct IsdnCardState), GFP_KERNEL);
- if (!cs) {
- printk(KERN_WARNING
- "HiSax: No memory for IsdnCardState(card %d)\n",
- cardnr + 1);
- goto out;
- }
- card->cs = cs;
- spin_lock_init(&cs->statlock);
- spin_lock_init(&cs->lock);
- cs->chanlimit = 2; /* maximum B-channel number */
- cs->logecho = 0; /* No echo logging */
- cs->cardnr = cardnr;
- cs->debug = L1_DEB_WARN;
- cs->HW_Flags = 0;
- cs->busy_flag = busy_flag;
- cs->irq_flags = I4L_IRQ_FLAG;
-#if TEI_PER_CARD
- if (card->protocol == ISDN_PTYPE_NI1)
- test_and_set_bit(FLG_TWO_DCHAN, &cs->HW_Flags);
-#else
- test_and_set_bit(FLG_TWO_DCHAN, &cs->HW_Flags);
-#endif
- cs->protocol = card->protocol;
-
- if (card->typ <= 0 || card->typ > ISDN_CTYPE_COUNT) {
- printk(KERN_WARNING
- "HiSax: Card Type %d out of range\n", card->typ);
- goto outf_cs;
- }
- if (!(cs->dlog = kmalloc(MAX_DLOG_SPACE, GFP_KERNEL))) {
- printk(KERN_WARNING
- "HiSax: No memory for dlog(card %d)\n", cardnr + 1);
- goto outf_cs;
- }
- if (!(cs->status_buf = kmalloc(HISAX_STATUS_BUFSIZE, GFP_KERNEL))) {
- printk(KERN_WARNING
- "HiSax: No memory for status_buf(card %d)\n",
- cardnr + 1);
- goto outf_dlog;
- }
- cs->stlist = NULL;
- cs->status_read = cs->status_buf;
- cs->status_write = cs->status_buf;
- cs->status_end = cs->status_buf + HISAX_STATUS_BUFSIZE - 1;
- cs->typ = card->typ;
-#ifdef MODULE
- cs->iif.owner = lockowner;
-#endif
- strcpy(cs->iif.id, id);
- cs->iif.channels = 2;
- cs->iif.maxbufsize = MAX_DATA_SIZE;
- cs->iif.hl_hdrlen = MAX_HEADER_LEN;
- cs->iif.features =
- ISDN_FEATURE_L2_X75I |
- ISDN_FEATURE_L2_HDLC |
- ISDN_FEATURE_L2_HDLC_56K |
- ISDN_FEATURE_L2_TRANS |
- ISDN_FEATURE_L3_TRANS |
-#ifdef CONFIG_HISAX_1TR6
- ISDN_FEATURE_P_1TR6 |
-#endif
-#ifdef CONFIG_HISAX_EURO
- ISDN_FEATURE_P_EURO |
-#endif
-#ifdef CONFIG_HISAX_NI1
- ISDN_FEATURE_P_NI1 |
-#endif
- 0;
-
- cs->iif.command = HiSax_command;
- cs->iif.writecmd = NULL;
- cs->iif.writebuf_skb = HiSax_writebuf_skb;
- cs->iif.readstat = HiSax_readstatus;
- register_isdn(&cs->iif);
- cs->myid = cs->iif.channels;
-
- *cs_out = cs;
- return 1; /* success */
-
-outf_dlog:
- kfree(cs->dlog);
-outf_cs:
- kfree(cs);
- card->cs = NULL;
-out:
- return 0; /* error */
-}
-
-static int hisax_cs_setup(int cardnr, struct IsdnCard *card,
- struct IsdnCardState *cs)
-{
- int ret;
-
- if (!(cs->rcvbuf = kmalloc(MAX_DFRAME_LEN_L1, GFP_KERNEL))) {
- printk(KERN_WARNING "HiSax: No memory for isac rcvbuf\n");
- ll_unload(cs);
- goto outf_cs;
- }
- cs->rcvidx = 0;
- cs->tx_skb = NULL;
- cs->tx_cnt = 0;
- cs->event = 0;
-
- skb_queue_head_init(&cs->rq);
- skb_queue_head_init(&cs->sq);
-
- init_bcstate(cs, 0);
- init_bcstate(cs, 1);
-
- /* init_card only handles interrupts which are not */
- /* used here for the loadable driver */
- switch (card->typ) {
- case ISDN_CTYPE_DYNAMIC:
- ret = 0;
- break;
- default:
- ret = init_card(cs);
- break;
- }
- if (ret) {
- closecard(cardnr);
- goto outf_cs;
- }
- init_tei(cs, cs->protocol);
- ret = CallcNewChan(cs);
- if (ret) {
- closecard(cardnr);
- goto outf_cs;
- }
- /* ISAR needs firmware download first */
- if (!test_bit(HW_ISAR, &cs->HW_Flags))
- ll_run(cs, 0);
-
- return 1;
-
-outf_cs:
- kfree(cs);
- card->cs = NULL;
- return 0;
-}
-
-static int checkcard(int cardnr, char *id, int *busy_flag,
- struct module *lockowner, hisax_setup_func_t card_setup)
-{
- int ret;
- struct IsdnCard *card = cards + cardnr;
- struct IsdnCardState *cs;
-
- ret = hisax_cs_new(cardnr, id, card, &cs, busy_flag, lockowner);
- if (!ret)
- return 0;
-
- printk(KERN_INFO
- "HiSax: Card %d Protocol %s Id=%s (%d)\n", cardnr + 1,
- (card->protocol == ISDN_PTYPE_1TR6) ? "1TR6" :
- (card->protocol == ISDN_PTYPE_EURO) ? "EDSS1" :
- (card->protocol == ISDN_PTYPE_LEASED) ? "LEASED" :
- (card->protocol == ISDN_PTYPE_NI1) ? "NI1" :
- "NONE", cs->iif.id, cs->myid);
-
- ret = card_setup(card);
- if (!ret) {
- ll_unload(cs);
- goto outf_cs;
- }
-
- ret = hisax_cs_setup(cardnr, card, cs);
- goto out;
-
-outf_cs:
- kfree(cs);
- card->cs = NULL;
-out:
- return ret;
-}
-
-static void HiSax_shiftcards(int idx)
-{
- int i;
-
- for (i = idx; i < (HISAX_MAX_CARDS - 1); i++)
- memcpy(&cards[i], &cards[i + 1], sizeof(cards[i]));
-}
-
-static int __init HiSax_inithardware(int *busy_flag)
-{
- int foundcards = 0;
- int i = 0;
- int t = ',';
- int flg = 0;
- char *id;
- char *next_id = HiSax_id;
- char ids[20];
-
- if (strchr(HiSax_id, ','))
- t = ',';
- else if (strchr(HiSax_id, '%'))
- t = '%';
-
- while (i < nrcards) {
- if (cards[i].typ < 1)
- break;
- id = next_id;
- if ((next_id = strchr(id, t))) {
- *next_id++ = 0;
- strcpy(ids, id);
- flg = i + 1;
- } else {
- next_id = id;
- if (flg >= i)
- strcpy(ids, id);
- else
- sprintf(ids, "%s%d", id, i);
- }
- if (checkcard(i, ids, busy_flag, THIS_MODULE,
- hisax_cs_setup_card)) {
- foundcards++;
- i++;
- } else {
- /* make sure we don't oops the module */
- if (cards[i].typ > 0 && cards[i].typ <= ISDN_CTYPE_COUNT) {
- printk(KERN_WARNING
- "HiSax: Card %s not installed !\n",
- CardType[cards[i].typ]);
- }
- HiSax_shiftcards(i);
- nrcards--;
- }
- }
- return foundcards;
-}
-
-void HiSax_closecard(int cardnr)
-{
- int i, last = nrcards - 1;
-
- if (cardnr > last || cardnr < 0)
- return;
- if (cards[cardnr].cs) {
- ll_stop(cards[cardnr].cs);
- release_tei(cards[cardnr].cs);
- CallcFreeChan(cards[cardnr].cs);
-
- closecard(cardnr);
- if (cards[cardnr].cs->irq)
- free_irq(cards[cardnr].cs->irq, cards[cardnr].cs);
- kfree((void *) cards[cardnr].cs);
- cards[cardnr].cs = NULL;
- }
- i = cardnr;
- while (i <= last) {
- cards[i] = cards[i + 1];
- i++;
- }
- nrcards--;
-}
-
-void HiSax_reportcard(int cardnr, int sel)
-{
- struct IsdnCardState *cs = cards[cardnr].cs;
-
- printk(KERN_DEBUG "HiSax: reportcard No %d\n", cardnr + 1);
- printk(KERN_DEBUG "HiSax: Type %s\n", CardType[cs->typ]);
- printk(KERN_DEBUG "HiSax: debuglevel %x\n", cs->debug);
- printk(KERN_DEBUG "HiSax: HiSax_reportcard address 0x%px\n",
- HiSax_reportcard);
- printk(KERN_DEBUG "HiSax: cs 0x%px\n", cs);
- printk(KERN_DEBUG "HiSax: HW_Flags %lx bc0 flg %lx bc1 flg %lx\n",
- cs->HW_Flags, cs->bcs[0].Flag, cs->bcs[1].Flag);
- printk(KERN_DEBUG "HiSax: bcs 0 mode %d ch%d\n",
- cs->bcs[0].mode, cs->bcs[0].channel);
- printk(KERN_DEBUG "HiSax: bcs 1 mode %d ch%d\n",
- cs->bcs[1].mode, cs->bcs[1].channel);
-#ifdef ERROR_STATISTIC
- printk(KERN_DEBUG "HiSax: dc errors(rx,crc,tx) %d,%d,%d\n",
- cs->err_rx, cs->err_crc, cs->err_tx);
- printk(KERN_DEBUG
- "HiSax: bc0 errors(inv,rdo,crc,tx) %d,%d,%d,%d\n",
- cs->bcs[0].err_inv, cs->bcs[0].err_rdo, cs->bcs[0].err_crc,
- cs->bcs[0].err_tx);
- printk(KERN_DEBUG
- "HiSax: bc1 errors(inv,rdo,crc,tx) %d,%d,%d,%d\n",
- cs->bcs[1].err_inv, cs->bcs[1].err_rdo, cs->bcs[1].err_crc,
- cs->bcs[1].err_tx);
- if (sel == 99) {
- cs->err_rx = 0;
- cs->err_crc = 0;
- cs->err_tx = 0;
- cs->bcs[0].err_inv = 0;
- cs->bcs[0].err_rdo = 0;
- cs->bcs[0].err_crc = 0;
- cs->bcs[0].err_tx = 0;
- cs->bcs[1].err_inv = 0;
- cs->bcs[1].err_rdo = 0;
- cs->bcs[1].err_crc = 0;
- cs->bcs[1].err_tx = 0;
- }
-#endif
-}
-
-static int __init HiSax_init(void)
-{
- int i, retval;
-#ifdef MODULE
- int j;
- int nzproto = 0;
-#endif
-
- HiSaxVersion();
- retval = CallcNew();
- if (retval)
- goto out;
- retval = Isdnl3New();
- if (retval)
- goto out_callc;
- retval = Isdnl2New();
- if (retval)
- goto out_isdnl3;
- retval = TeiNew();
- if (retval)
- goto out_isdnl2;
- retval = Isdnl1New();
- if (retval)
- goto out_tei;
-
-#ifdef MODULE
- if (!type[0]) {
- /* We 'll register drivers later, but init basic functions */
- for (i = 0; i < HISAX_MAX_CARDS; i++)
- cards[i].typ = 0;
- return 0;
- }
-#ifdef CONFIG_HISAX_ELSA
- if (type[0] == ISDN_CTYPE_ELSA_PCMCIA) {
- /* we have to export and return in this case */
- return 0;
- }
-#endif
-#ifdef CONFIG_HISAX_SEDLBAUER
- if (type[0] == ISDN_CTYPE_SEDLBAUER_PCMCIA) {
- /* we have to export and return in this case */
- return 0;
- }
-#endif
-#ifdef CONFIG_HISAX_AVM_A1_PCMCIA
- if (type[0] == ISDN_CTYPE_A1_PCMCIA) {
- /* we have to export and return in this case */
- return 0;
- }
-#endif
-#ifdef CONFIG_HISAX_HFC_SX
- if (type[0] == ISDN_CTYPE_HFC_SP_PCMCIA) {
- /* we have to export and return in this case */
- return 0;
- }
-#endif
-#endif
- nrcards = 0;
-#ifdef MODULE
- if (id) /* If id= string used */
- HiSax_id = id;
- for (i = j = 0; j < HISAX_MAX_CARDS; i++) {
- cards[j].typ = type[i];
- if (protocol[i]) {
- cards[j].protocol = protocol[i];
- nzproto++;
- } else {
- cards[j].protocol = DEFAULT_PROTO;
- }
- switch (type[i]) {
- case ISDN_CTYPE_16_0:
- cards[j].para[0] = irq[i];
- cards[j].para[1] = mem[i];
- cards[j].para[2] = io[i];
- break;
-
- case ISDN_CTYPE_8_0:
- cards[j].para[0] = irq[i];
- cards[j].para[1] = mem[i];
- break;
-
-#ifdef IO0_IO1
- case ISDN_CTYPE_PNP:
- case ISDN_CTYPE_NICCY:
- cards[j].para[0] = irq[i];
- cards[j].para[1] = io0[i];
- cards[j].para[2] = io1[i];
- break;
- case ISDN_CTYPE_COMPAQ_ISA:
- cards[j].para[0] = irq[i];
- cards[j].para[1] = io0[i];
- cards[j].para[2] = io1[i];
- cards[j].para[3] = io[i];
- break;
-#endif
- case ISDN_CTYPE_ELSA:
- case ISDN_CTYPE_HFC_PCI:
- cards[j].para[0] = io[i];
- break;
- case ISDN_CTYPE_16_3:
- case ISDN_CTYPE_TELESPCMCIA:
- case ISDN_CTYPE_A1:
- case ISDN_CTYPE_A1_PCMCIA:
- case ISDN_CTYPE_ELSA_PNP:
- case ISDN_CTYPE_ELSA_PCMCIA:
- case ISDN_CTYPE_IX1MICROR2:
- case ISDN_CTYPE_DIEHLDIVA:
- case ISDN_CTYPE_ASUSCOM:
- case ISDN_CTYPE_TELEINT:
- case ISDN_CTYPE_SEDLBAUER:
- case ISDN_CTYPE_SEDLBAUER_PCMCIA:
- case ISDN_CTYPE_SEDLBAUER_FAX:
- case ISDN_CTYPE_SPORTSTER:
- case ISDN_CTYPE_MIC:
- case ISDN_CTYPE_TELES3C:
- case ISDN_CTYPE_ACERP10:
- case ISDN_CTYPE_S0BOX:
- case ISDN_CTYPE_FRITZPCI:
- case ISDN_CTYPE_HSTSAPHIR:
- case ISDN_CTYPE_GAZEL:
- case ISDN_CTYPE_HFC_SX:
- case ISDN_CTYPE_HFC_SP_PCMCIA:
- cards[j].para[0] = irq[i];
- cards[j].para[1] = io[i];
- break;
- case ISDN_CTYPE_ISURF:
- cards[j].para[0] = irq[i];
- cards[j].para[1] = io[i];
- cards[j].para[2] = mem[i];
- break;
- case ISDN_CTYPE_ELSA_PCI:
- case ISDN_CTYPE_NETJET_S:
- case ISDN_CTYPE_TELESPCI:
- case ISDN_CTYPE_W6692:
- case ISDN_CTYPE_NETJET_U:
- break;
- case ISDN_CTYPE_BKM_A4T:
- break;
- case ISDN_CTYPE_SCT_QUADRO:
- if (irq[i]) {
- cards[j].para[0] = irq[i];
- } else {
- /* QUADRO is a 4 BRI card */
- cards[j++].para[0] = 1;
- /* we need to check if further cards can be added */
- if (j < HISAX_MAX_CARDS) {
- cards[j].typ = ISDN_CTYPE_SCT_QUADRO;
- cards[j].protocol = protocol[i];
- cards[j++].para[0] = 2;
- }
- if (j < HISAX_MAX_CARDS) {
- cards[j].typ = ISDN_CTYPE_SCT_QUADRO;
- cards[j].protocol = protocol[i];
- cards[j++].para[0] = 3;
- }
- if (j < HISAX_MAX_CARDS) {
- cards[j].typ = ISDN_CTYPE_SCT_QUADRO;
- cards[j].protocol = protocol[i];
- cards[j].para[0] = 4;
- }
- }
- break;
- }
- j++;
- }
- if (!nzproto) {
- printk(KERN_WARNING
- "HiSax: Warning - no protocol specified\n");
- printk(KERN_WARNING "HiSax: using protocol %s\n",
- DEFAULT_PROTO_NAME);
- }
-#endif
- if (!HiSax_id)
- HiSax_id = HiSaxID;
- if (!HiSaxID[0])
- strcpy(HiSaxID, "HiSax");
- for (i = 0; i < HISAX_MAX_CARDS; i++)
- if (cards[i].typ > 0)
- nrcards++;
- printk(KERN_DEBUG "HiSax: Total %d card%s defined\n",
- nrcards, (nrcards > 1) ? "s" : "");
-
- /* Install only, if at least one card found */
- if (!HiSax_inithardware(NULL))
- return -ENODEV;
- return 0;
-
-out_tei:
- TeiFree();
-out_isdnl2:
- Isdnl2Free();
-out_isdnl3:
- Isdnl3Free();
-out_callc:
- CallcFree();
-out:
- return retval;
-}
-
-static void __exit HiSax_exit(void)
-{
- int cardnr = nrcards - 1;
-
- while (cardnr >= 0)
- HiSax_closecard(cardnr--);
- Isdnl1Free();
- TeiFree();
- Isdnl2Free();
- Isdnl3Free();
- CallcFree();
- printk(KERN_INFO "HiSax module removed\n");
-}
-
-int hisax_init_pcmcia(void *pcm_iob, int *busy_flag, struct IsdnCard *card)
-{
- u_char ids[16];
- int ret = -1;
-
- cards[nrcards] = *card;
- if (nrcards)
- sprintf(ids, "HiSax%d", nrcards);
- else
- sprintf(ids, "HiSax");
- if (!checkcard(nrcards, ids, busy_flag, THIS_MODULE,
- hisax_cs_setup_card))
- goto error;
-
- ret = nrcards;
- nrcards++;
-error:
- return ret;
-}
-EXPORT_SYMBOL(hisax_init_pcmcia);
-
-EXPORT_SYMBOL(HiSax_closecard);
-
-#include "hisax_if.h"
-
-EXPORT_SYMBOL(hisax_register);
-EXPORT_SYMBOL(hisax_unregister);
-
-static void hisax_d_l1l2(struct hisax_if *ifc, int pr, void *arg);
-static void hisax_b_l1l2(struct hisax_if *ifc, int pr, void *arg);
-static void hisax_d_l2l1(struct PStack *st, int pr, void *arg);
-static void hisax_b_l2l1(struct PStack *st, int pr, void *arg);
-static int hisax_cardmsg(struct IsdnCardState *cs, int mt, void *arg);
-static int hisax_bc_setstack(struct PStack *st, struct BCState *bcs);
-static void hisax_bc_close(struct BCState *bcs);
-static void hisax_bh(struct work_struct *work);
-static void EChannel_proc_rcv(struct hisax_d_if *d_if);
-
-static int hisax_setup_card_dynamic(struct IsdnCard *card)
-{
- return 2;
-}
-
-int hisax_register(struct hisax_d_if *hisax_d_if, struct hisax_b_if *b_if[],
- char *name, int protocol)
-{
- int i, retval;
- char id[20];
- struct IsdnCardState *cs;
-
- for (i = 0; i < HISAX_MAX_CARDS; i++) {
- if (!cards[i].typ)
- break;
- }
-
- if (i >= HISAX_MAX_CARDS)
- return -EBUSY;
-
- cards[i].typ = ISDN_CTYPE_DYNAMIC;
- cards[i].protocol = protocol;
- sprintf(id, "%s%d", name, i);
- nrcards++;
- retval = checkcard(i, id, NULL, hisax_d_if->owner,
- hisax_setup_card_dynamic);
- if (retval == 0) { // yuck
- cards[i].typ = 0;
- nrcards--;
- return -EINVAL;
- }
- cs = cards[i].cs;
- hisax_d_if->cs = cs;
- cs->hw.hisax_d_if = hisax_d_if;
- cs->cardmsg = hisax_cardmsg;
- INIT_WORK(&cs->tqueue, hisax_bh);
- cs->channel[0].d_st->l2.l2l1 = hisax_d_l2l1;
- for (i = 0; i < 2; i++) {
- cs->bcs[i].BC_SetStack = hisax_bc_setstack;
- cs->bcs[i].BC_Close = hisax_bc_close;
-
- b_if[i]->ifc.l1l2 = hisax_b_l1l2;
-
- hisax_d_if->b_if[i] = b_if[i];
- }
- hisax_d_if->ifc.l1l2 = hisax_d_l1l2;
- skb_queue_head_init(&hisax_d_if->erq);
- clear_bit(0, &hisax_d_if->ph_state);
-
- return 0;
-}
-
-void hisax_unregister(struct hisax_d_if *hisax_d_if)
-{
- cards[hisax_d_if->cs->cardnr].typ = 0;
- HiSax_closecard(hisax_d_if->cs->cardnr);
- skb_queue_purge(&hisax_d_if->erq);
-}
-
-#include "isdnl1.h"
-
-static void hisax_sched_event(struct IsdnCardState *cs, int event)
-{
- test_and_set_bit(event, &cs->event);
- schedule_work(&cs->tqueue);
-}
-
-static void hisax_bh(struct work_struct *work)
-{
- struct IsdnCardState *cs =
- container_of(work, struct IsdnCardState, tqueue);
- struct PStack *st;
- int pr;
-
- if (test_and_clear_bit(D_RCVBUFREADY, &cs->event))
- DChannel_proc_rcv(cs);
- if (test_and_clear_bit(E_RCVBUFREADY, &cs->event))
- EChannel_proc_rcv(cs->hw.hisax_d_if);
- if (test_and_clear_bit(D_L1STATECHANGE, &cs->event)) {
- if (test_bit(0, &cs->hw.hisax_d_if->ph_state))
- pr = PH_ACTIVATE | INDICATION;
- else
- pr = PH_DEACTIVATE | INDICATION;
- for (st = cs->stlist; st; st = st->next)
- st->l1.l1l2(st, pr, NULL);
-
- }
-}
-
-static void hisax_b_sched_event(struct BCState *bcs, int event)
-{
- test_and_set_bit(event, &bcs->event);
- schedule_work(&bcs->tqueue);
-}
-
-static inline void D_L2L1(struct hisax_d_if *d_if, int pr, void *arg)
-{
- struct hisax_if *ifc = (struct hisax_if *) d_if;
- ifc->l2l1(ifc, pr, arg);
-}
-
-static inline void B_L2L1(struct hisax_b_if *b_if, int pr, void *arg)
-{
- struct hisax_if *ifc = (struct hisax_if *) b_if;
- ifc->l2l1(ifc, pr, arg);
-}
-
-static void hisax_d_l1l2(struct hisax_if *ifc, int pr, void *arg)
-{
- struct hisax_d_if *d_if = (struct hisax_d_if *) ifc;
- struct IsdnCardState *cs = d_if->cs;
- struct PStack *st;
- struct sk_buff *skb;
-
- switch (pr) {
- case PH_ACTIVATE | INDICATION:
- set_bit(0, &d_if->ph_state);
- hisax_sched_event(cs, D_L1STATECHANGE);
- break;
- case PH_DEACTIVATE | INDICATION:
- clear_bit(0, &d_if->ph_state);
- hisax_sched_event(cs, D_L1STATECHANGE);
- break;
- case PH_DATA | INDICATION:
- skb_queue_tail(&cs->rq, arg);
- hisax_sched_event(cs, D_RCVBUFREADY);
- break;
- case PH_DATA | CONFIRM:
- skb = skb_dequeue(&cs->sq);
- if (skb) {
- D_L2L1(d_if, PH_DATA | REQUEST, skb);
- break;
- }
- clear_bit(FLG_L1_DBUSY, &cs->HW_Flags);
- for (st = cs->stlist; st; st = st->next) {
- if (test_and_clear_bit(FLG_L1_PULL_REQ, &st->l1.Flags)) {
- st->l1.l1l2(st, PH_PULL | CONFIRM, NULL);
- break;
- }
- }
- break;
- case PH_DATA_E | INDICATION:
- skb_queue_tail(&d_if->erq, arg);
- hisax_sched_event(cs, E_RCVBUFREADY);
- break;
- default:
- printk("pr %#x\n", pr);
- break;
- }
-}
-
-static void hisax_b_l1l2(struct hisax_if *ifc, int pr, void *arg)
-{
- struct hisax_b_if *b_if = (struct hisax_b_if *) ifc;
- struct BCState *bcs = b_if->bcs;
- struct PStack *st = bcs->st;
- struct sk_buff *skb;
-
- // FIXME use isdnl1?
- switch (pr) {
- case PH_ACTIVATE | INDICATION:
- st->l1.l1l2(st, pr, NULL);
- break;
- case PH_DEACTIVATE | INDICATION:
- st->l1.l1l2(st, pr, NULL);
- clear_bit(BC_FLG_BUSY, &bcs->Flag);
- skb_queue_purge(&bcs->squeue);
- bcs->hw.b_if = NULL;
- break;
- case PH_DATA | INDICATION:
- skb_queue_tail(&bcs->rqueue, arg);
- hisax_b_sched_event(bcs, B_RCVBUFREADY);
- break;
- case PH_DATA | CONFIRM:
- bcs->tx_cnt -= (long)arg;
- if (test_bit(FLG_LLI_L1WAKEUP, &bcs->st->lli.flag)) {
- u_long flags;
- spin_lock_irqsave(&bcs->aclock, flags);
- bcs->ackcnt += (long)arg;
- spin_unlock_irqrestore(&bcs->aclock, flags);
- schedule_event(bcs, B_ACKPENDING);
- }
- skb = skb_dequeue(&bcs->squeue);
- if (skb) {
- B_L2L1(b_if, PH_DATA | REQUEST, skb);
- break;
- }
- clear_bit(BC_FLG_BUSY, &bcs->Flag);
- if (test_and_clear_bit(FLG_L1_PULL_REQ, &st->l1.Flags)) {
- st->l1.l1l2(st, PH_PULL | CONFIRM, NULL);
- }
- break;
- default:
- printk("hisax_b_l1l2 pr %#x\n", pr);
- break;
- }
-}
-
-static void hisax_d_l2l1(struct PStack *st, int pr, void *arg)
-{
- struct IsdnCardState *cs = st->l1.hardware;
- struct hisax_d_if *hisax_d_if = cs->hw.hisax_d_if;
- struct sk_buff *skb = arg;
-
- switch (pr) {
- case PH_DATA | REQUEST:
- case PH_PULL | INDICATION:
- if (cs->debug & DEB_DLOG_HEX)
- LogFrame(cs, skb->data, skb->len);
- if (cs->debug & DEB_DLOG_VERBOSE)
- dlogframe(cs, skb, 0);
- Logl2Frame(cs, skb, "PH_DATA_REQ", 0);
- // FIXME lock?
- if (!test_and_set_bit(FLG_L1_DBUSY, &cs->HW_Flags))
- D_L2L1(hisax_d_if, PH_DATA | REQUEST, skb);
- else
- skb_queue_tail(&cs->sq, skb);
- break;
- case PH_PULL | REQUEST:
- if (!test_bit(FLG_L1_DBUSY, &cs->HW_Flags))
- st->l1.l1l2(st, PH_PULL | CONFIRM, NULL);
- else
- set_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- break;
- default:
- D_L2L1(hisax_d_if, pr, arg);
- break;
- }
-}
-
-static int hisax_cardmsg(struct IsdnCardState *cs, int mt, void *arg)
-{
- return 0;
-}
-
-static void hisax_b_l2l1(struct PStack *st, int pr, void *arg)
-{
- struct BCState *bcs = st->l1.bcs;
- struct hisax_b_if *b_if = bcs->hw.b_if;
-
- switch (pr) {
- case PH_ACTIVATE | REQUEST:
- B_L2L1(b_if, pr, (void *)(unsigned long)st->l1.mode);
- break;
- case PH_DATA | REQUEST:
- case PH_PULL | INDICATION:
- // FIXME lock?
- if (!test_and_set_bit(BC_FLG_BUSY, &bcs->Flag)) {
- B_L2L1(b_if, PH_DATA | REQUEST, arg);
- } else {
- skb_queue_tail(&bcs->squeue, arg);
- }
- break;
- case PH_PULL | REQUEST:
- if (!test_bit(BC_FLG_BUSY, &bcs->Flag))
- st->l1.l1l2(st, PH_PULL | CONFIRM, NULL);
- else
- set_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- break;
- case PH_DEACTIVATE | REQUEST:
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- skb_queue_purge(&bcs->squeue);
- /* fall through */
- default:
- B_L2L1(b_if, pr, arg);
- break;
- }
-}
-
-static int hisax_bc_setstack(struct PStack *st, struct BCState *bcs)
-{
- struct IsdnCardState *cs = st->l1.hardware;
- struct hisax_d_if *hisax_d_if = cs->hw.hisax_d_if;
-
- bcs->channel = st->l1.bc;
-
- bcs->hw.b_if = hisax_d_if->b_if[st->l1.bc];
- hisax_d_if->b_if[st->l1.bc]->bcs = bcs;
-
- st->l1.bcs = bcs;
- st->l2.l2l1 = hisax_b_l2l1;
- setstack_manager(st);
- bcs->st = st;
- setstack_l1_B(st);
- skb_queue_head_init(&bcs->rqueue);
- skb_queue_head_init(&bcs->squeue);
- return 0;
-}
-
-static void hisax_bc_close(struct BCState *bcs)
-{
- struct hisax_b_if *b_if = bcs->hw.b_if;
-
- if (b_if)
- B_L2L1(b_if, PH_DEACTIVATE | REQUEST, NULL);
-}
-
-static void EChannel_proc_rcv(struct hisax_d_if *d_if)
-{
- struct IsdnCardState *cs = d_if->cs;
- u_char *ptr;
- struct sk_buff *skb;
-
- while ((skb = skb_dequeue(&d_if->erq)) != NULL) {
- if (cs->debug & DEB_DLOG_HEX) {
- ptr = cs->dlog;
- if ((skb->len) < MAX_DLOG_SPACE / 3 - 10) {
- *ptr++ = 'E';
- *ptr++ = 'C';
- *ptr++ = 'H';
- *ptr++ = 'O';
- *ptr++ = ':';
- ptr += QuickHex(ptr, skb->data, skb->len);
- ptr--;
- *ptr++ = '\n';
- *ptr = 0;
- HiSax_putstatus(cs, NULL, cs->dlog);
- } else
- HiSax_putstatus(cs, "LogEcho: ",
- "warning Frame too big (%d)",
- skb->len);
- }
- dev_kfree_skb_any(skb);
- }
-}
-
-#ifdef CONFIG_PCI
-#include <linux/pci.h>
-
-static const struct pci_device_id hisax_pci_tbl[] __used = {
-#ifdef CONFIG_HISAX_FRITZPCI
- {PCI_VDEVICE(AVM, PCI_DEVICE_ID_AVM_A1) },
-#endif
-#ifdef CONFIG_HISAX_DIEHLDIVA
- {PCI_VDEVICE(EICON, PCI_DEVICE_ID_EICON_DIVA20) },
- {PCI_VDEVICE(EICON, PCI_DEVICE_ID_EICON_DIVA20_U) },
- {PCI_VDEVICE(EICON, PCI_DEVICE_ID_EICON_DIVA201) },
- {PCI_VDEVICE(EICON, PCI_DEVICE_ID_EICON_DIVA202) },
-#endif
-#ifdef CONFIG_HISAX_ELSA
- {PCI_VDEVICE(ELSA, PCI_DEVICE_ID_ELSA_MICROLINK) },
- {PCI_VDEVICE(ELSA, PCI_DEVICE_ID_ELSA_QS3000) },
-#endif
-#ifdef CONFIG_HISAX_GAZEL
- {PCI_VDEVICE(PLX, PCI_DEVICE_ID_PLX_R685) },
- {PCI_VDEVICE(PLX, PCI_DEVICE_ID_PLX_R753) },
- {PCI_VDEVICE(PLX, PCI_DEVICE_ID_PLX_DJINN_ITOO) },
- {PCI_VDEVICE(PLX, PCI_DEVICE_ID_PLX_OLITEC) },
-#endif
-#ifdef CONFIG_HISAX_SCT_QUADRO
- {PCI_VDEVICE(PLX, PCI_DEVICE_ID_PLX_9050) },
-#endif
-#ifdef CONFIG_HISAX_NICCY
- {PCI_VDEVICE(SATSAGEM, PCI_DEVICE_ID_SATSAGEM_NICCY) },
-#endif
-#ifdef CONFIG_HISAX_SEDLBAUER
- {PCI_VDEVICE(TIGERJET, PCI_DEVICE_ID_TIGERJET_100) },
-#endif
-#if defined(CONFIG_HISAX_NETJET) || defined(CONFIG_HISAX_NETJET_U)
- {PCI_VDEVICE(TIGERJET, PCI_DEVICE_ID_TIGERJET_300) },
-#endif
-#if defined(CONFIG_HISAX_TELESPCI) || defined(CONFIG_HISAX_SCT_QUADRO)
- {PCI_VDEVICE(ZORAN, PCI_DEVICE_ID_ZORAN_36120) },
-#endif
-#ifdef CONFIG_HISAX_W6692
- {PCI_VDEVICE(DYNALINK, PCI_DEVICE_ID_DYNALINK_IS64PH) },
- {PCI_VDEVICE(WINBOND2, PCI_DEVICE_ID_WINBOND2_6692) },
-#endif
-#ifdef CONFIG_HISAX_HFC_PCI
- {PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_2BD0) },
- {PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_B000) },
- {PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_B006) },
- {PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_B007) },
- {PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_B008) },
- {PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_B009) },
- {PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_B00A) },
- {PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_B00B) },
- {PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_B00C) },
- {PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_B100) },
- {PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_B700) },
- {PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_B701) },
- {PCI_VDEVICE(ABOCOM, PCI_DEVICE_ID_ABOCOM_2BD1) },
- {PCI_VDEVICE(ASUSTEK, PCI_DEVICE_ID_ASUSTEK_0675) },
- {PCI_VDEVICE(BERKOM, PCI_DEVICE_ID_BERKOM_T_CONCEPT) },
- {PCI_VDEVICE(BERKOM, PCI_DEVICE_ID_BERKOM_A1T) },
- {PCI_VDEVICE(ANIGMA, PCI_DEVICE_ID_ANIGMA_MC145575) },
- {PCI_VDEVICE(ZOLTRIX, PCI_DEVICE_ID_ZOLTRIX_2BD0) },
- {PCI_VDEVICE(DIGI, PCI_DEVICE_ID_DIGI_DF_M_IOM2_E) },
- {PCI_VDEVICE(DIGI, PCI_DEVICE_ID_DIGI_DF_M_E) },
- {PCI_VDEVICE(DIGI, PCI_DEVICE_ID_DIGI_DF_M_IOM2_A) },
- {PCI_VDEVICE(DIGI, PCI_DEVICE_ID_DIGI_DF_M_A) },
-#endif
- { } /* Terminating entry */
-};
-
-MODULE_DEVICE_TABLE(pci, hisax_pci_tbl);
-#endif /* CONFIG_PCI */
-
-module_init(HiSax_init);
-module_exit(HiSax_exit);
-
-EXPORT_SYMBOL(FsmNew);
-EXPORT_SYMBOL(FsmFree);
-EXPORT_SYMBOL(FsmEvent);
-EXPORT_SYMBOL(FsmChangeState);
-EXPORT_SYMBOL(FsmInitTimer);
-EXPORT_SYMBOL(FsmDelTimer);
-EXPORT_SYMBOL(FsmRestartTimer);
diff --git a/drivers/isdn/hisax/diva.c b/drivers/isdn/hisax/diva.c
deleted file mode 100644
index d23df7a7784d..000000000000
--- a/drivers/isdn/hisax/diva.c
+++ /dev/null
@@ -1,1282 +0,0 @@
-/* $Id: diva.c,v 1.33.2.6 2004/02/11 13:21:33 keil Exp $
- *
- * low level stuff for Eicon.Diehl Diva Family ISDN cards
- *
- * Author Karsten Keil
- * Copyright by Karsten Keil <keil@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- * For changes and modifications please read
- * Documentation/isdn/HiSax.cert
- *
- * Thanks to Eicon Technology for documents and information
- *
- */
-
-#include <linux/init.h>
-#include "hisax.h"
-#include "isac.h"
-#include "hscx.h"
-#include "ipac.h"
-#include "ipacx.h"
-#include "isdnl1.h"
-#include <linux/pci.h>
-#include <linux/isapnp.h>
-
-static const char *Diva_revision = "$Revision: 1.33.2.6 $";
-
-#define byteout(addr, val) outb(val, addr)
-#define bytein(addr) inb(addr)
-
-#define DIVA_HSCX_DATA 0
-#define DIVA_HSCX_ADR 4
-#define DIVA_ISA_ISAC_DATA 2
-#define DIVA_ISA_ISAC_ADR 6
-#define DIVA_ISA_CTRL 7
-#define DIVA_IPAC_ADR 0
-#define DIVA_IPAC_DATA 1
-
-#define DIVA_PCI_ISAC_DATA 8
-#define DIVA_PCI_ISAC_ADR 0xc
-#define DIVA_PCI_CTRL 0x10
-
-/* SUB Types */
-#define DIVA_ISA 1
-#define DIVA_PCI 2
-#define DIVA_IPAC_ISA 3
-#define DIVA_IPAC_PCI 4
-#define DIVA_IPACX_PCI 5
-
-/* CTRL (Read) */
-#define DIVA_IRQ_STAT 0x01
-#define DIVA_EEPROM_SDA 0x02
-
-/* CTRL (Write) */
-#define DIVA_IRQ_REQ 0x01
-#define DIVA_RESET 0x08
-#define DIVA_EEPROM_CLK 0x40
-#define DIVA_PCI_LED_A 0x10
-#define DIVA_PCI_LED_B 0x20
-#define DIVA_ISA_LED_A 0x20
-#define DIVA_ISA_LED_B 0x40
-#define DIVA_IRQ_CLR 0x80
-
-/* Siemens PITA */
-#define PITA_MISC_REG 0x1c
-#ifdef __BIG_ENDIAN
-#define PITA_PARA_SOFTRESET 0x00000001
-#define PITA_SER_SOFTRESET 0x00000002
-#define PITA_PARA_MPX_MODE 0x00000004
-#define PITA_INT0_ENABLE 0x00000200
-#else
-#define PITA_PARA_SOFTRESET 0x01000000
-#define PITA_SER_SOFTRESET 0x02000000
-#define PITA_PARA_MPX_MODE 0x04000000
-#define PITA_INT0_ENABLE 0x00020000
-#endif
-#define PITA_INT0_STATUS 0x02
-
-static inline u_char
-readreg(unsigned int ale, unsigned int adr, u_char off)
-{
- register u_char ret;
-
- byteout(ale, off);
- ret = bytein(adr);
- return (ret);
-}
-
-static inline void
-readfifo(unsigned int ale, unsigned int adr, u_char off, u_char *data, int size)
-{
- byteout(ale, off);
- insb(adr, data, size);
-}
-
-
-static inline void
-writereg(unsigned int ale, unsigned int adr, u_char off, u_char data)
-{
- byteout(ale, off);
- byteout(adr, data);
-}
-
-static inline void
-writefifo(unsigned int ale, unsigned int adr, u_char off, u_char *data, int size)
-{
- byteout(ale, off);
- outsb(adr, data, size);
-}
-
-static inline u_char
-memreadreg(unsigned long adr, u_char off)
-{
- return (*((unsigned char *)
- (((unsigned int *)adr) + off)));
-}
-
-static inline void
-memwritereg(unsigned long adr, u_char off, u_char data)
-{
- register u_char *p;
-
- p = (unsigned char *)(((unsigned int *)adr) + off);
- *p = data;
-}
-
-/* Interface functions */
-
-static u_char
-ReadISAC(struct IsdnCardState *cs, u_char offset)
-{
- return (readreg(cs->hw.diva.isac_adr, cs->hw.diva.isac, offset));
-}
-
-static void
-WriteISAC(struct IsdnCardState *cs, u_char offset, u_char value)
-{
- writereg(cs->hw.diva.isac_adr, cs->hw.diva.isac, offset, value);
-}
-
-static void
-ReadISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- readfifo(cs->hw.diva.isac_adr, cs->hw.diva.isac, 0, data, size);
-}
-
-static void
-WriteISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- writefifo(cs->hw.diva.isac_adr, cs->hw.diva.isac, 0, data, size);
-}
-
-static u_char
-ReadISAC_IPAC(struct IsdnCardState *cs, u_char offset)
-{
- return (readreg(cs->hw.diva.isac_adr, cs->hw.diva.isac, offset + 0x80));
-}
-
-static void
-WriteISAC_IPAC(struct IsdnCardState *cs, u_char offset, u_char value)
-{
- writereg(cs->hw.diva.isac_adr, cs->hw.diva.isac, offset | 0x80, value);
-}
-
-static void
-ReadISACfifo_IPAC(struct IsdnCardState *cs, u_char *data, int size)
-{
- readfifo(cs->hw.diva.isac_adr, cs->hw.diva.isac, 0x80, data, size);
-}
-
-static void
-WriteISACfifo_IPAC(struct IsdnCardState *cs, u_char *data, int size)
-{
- writefifo(cs->hw.diva.isac_adr, cs->hw.diva.isac, 0x80, data, size);
-}
-
-static u_char
-ReadHSCX(struct IsdnCardState *cs, int hscx, u_char offset)
-{
- return (readreg(cs->hw.diva.hscx_adr,
- cs->hw.diva.hscx, offset + (hscx ? 0x40 : 0)));
-}
-
-static void
-WriteHSCX(struct IsdnCardState *cs, int hscx, u_char offset, u_char value)
-{
- writereg(cs->hw.diva.hscx_adr,
- cs->hw.diva.hscx, offset + (hscx ? 0x40 : 0), value);
-}
-
-static u_char
-MemReadISAC_IPAC(struct IsdnCardState *cs, u_char offset)
-{
- return (memreadreg(cs->hw.diva.cfg_reg, offset + 0x80));
-}
-
-static void
-MemWriteISAC_IPAC(struct IsdnCardState *cs, u_char offset, u_char value)
-{
- memwritereg(cs->hw.diva.cfg_reg, offset | 0x80, value);
-}
-
-static void
-MemReadISACfifo_IPAC(struct IsdnCardState *cs, u_char *data, int size)
-{
- while (size--)
- *data++ = memreadreg(cs->hw.diva.cfg_reg, 0x80);
-}
-
-static void
-MemWriteISACfifo_IPAC(struct IsdnCardState *cs, u_char *data, int size)
-{
- while (size--)
- memwritereg(cs->hw.diva.cfg_reg, 0x80, *data++);
-}
-
-static u_char
-MemReadHSCX(struct IsdnCardState *cs, int hscx, u_char offset)
-{
- return (memreadreg(cs->hw.diva.cfg_reg, offset + (hscx ? 0x40 : 0)));
-}
-
-static void
-MemWriteHSCX(struct IsdnCardState *cs, int hscx, u_char offset, u_char value)
-{
- memwritereg(cs->hw.diva.cfg_reg, offset + (hscx ? 0x40 : 0), value);
-}
-
-/* IO-Functions for IPACX type cards */
-static u_char
-MemReadISAC_IPACX(struct IsdnCardState *cs, u_char offset)
-{
- return (memreadreg(cs->hw.diva.cfg_reg, offset));
-}
-
-static void
-MemWriteISAC_IPACX(struct IsdnCardState *cs, u_char offset, u_char value)
-{
- memwritereg(cs->hw.diva.cfg_reg, offset, value);
-}
-
-static void
-MemReadISACfifo_IPACX(struct IsdnCardState *cs, u_char *data, int size)
-{
- while (size--)
- *data++ = memreadreg(cs->hw.diva.cfg_reg, 0);
-}
-
-static void
-MemWriteISACfifo_IPACX(struct IsdnCardState *cs, u_char *data, int size)
-{
- while (size--)
- memwritereg(cs->hw.diva.cfg_reg, 0, *data++);
-}
-
-static u_char
-MemReadHSCX_IPACX(struct IsdnCardState *cs, int hscx, u_char offset)
-{
- return (memreadreg(cs->hw.diva.cfg_reg, offset +
- (hscx ? IPACX_OFF_B2 : IPACX_OFF_B1)));
-}
-
-static void
-MemWriteHSCX_IPACX(struct IsdnCardState *cs, int hscx, u_char offset, u_char value)
-{
- memwritereg(cs->hw.diva.cfg_reg, offset +
- (hscx ? IPACX_OFF_B2 : IPACX_OFF_B1), value);
-}
-
-/*
- * fast interrupt HSCX stuff goes here
- */
-
-#define READHSCX(cs, nr, reg) readreg(cs->hw.diva.hscx_adr, \
- cs->hw.diva.hscx, reg + (nr ? 0x40 : 0))
-#define WRITEHSCX(cs, nr, reg, data) writereg(cs->hw.diva.hscx_adr, \
- cs->hw.diva.hscx, reg + (nr ? 0x40 : 0), data)
-
-#define READHSCXFIFO(cs, nr, ptr, cnt) readfifo(cs->hw.diva.hscx_adr, \
- cs->hw.diva.hscx, (nr ? 0x40 : 0), ptr, cnt)
-
-#define WRITEHSCXFIFO(cs, nr, ptr, cnt) writefifo(cs->hw.diva.hscx_adr, \
- cs->hw.diva.hscx, (nr ? 0x40 : 0), ptr, cnt)
-
-#include "hscx_irq.c"
-
-static irqreturn_t
-diva_interrupt(int intno, void *dev_id)
-{
- struct IsdnCardState *cs = dev_id;
- u_char val, sval;
- u_long flags;
- int cnt = 5;
-
- spin_lock_irqsave(&cs->lock, flags);
- while (((sval = bytein(cs->hw.diva.ctrl)) & DIVA_IRQ_REQ) && cnt) {
- val = readreg(cs->hw.diva.hscx_adr, cs->hw.diva.hscx, HSCX_ISTA + 0x40);
- if (val)
- hscx_int_main(cs, val);
- val = readreg(cs->hw.diva.isac_adr, cs->hw.diva.isac, ISAC_ISTA);
- if (val)
- isac_interrupt(cs, val);
- cnt--;
- }
- if (!cnt)
- printk(KERN_WARNING "Diva: IRQ LOOP\n");
- writereg(cs->hw.diva.hscx_adr, cs->hw.diva.hscx, HSCX_MASK, 0xFF);
- writereg(cs->hw.diva.hscx_adr, cs->hw.diva.hscx, HSCX_MASK + 0x40, 0xFF);
- writereg(cs->hw.diva.isac_adr, cs->hw.diva.isac, ISAC_MASK, 0xFF);
- writereg(cs->hw.diva.isac_adr, cs->hw.diva.isac, ISAC_MASK, 0x0);
- writereg(cs->hw.diva.hscx_adr, cs->hw.diva.hscx, HSCX_MASK, 0x0);
- writereg(cs->hw.diva.hscx_adr, cs->hw.diva.hscx, HSCX_MASK + 0x40, 0x0);
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
-}
-
-static irqreturn_t
-diva_irq_ipac_isa(int intno, void *dev_id)
-{
- struct IsdnCardState *cs = dev_id;
- u_char ista, val;
- u_long flags;
- int icnt = 5;
-
- spin_lock_irqsave(&cs->lock, flags);
- ista = readreg(cs->hw.diva.isac_adr, cs->hw.diva.isac, IPAC_ISTA);
-Start_IPACISA:
- if (cs->debug & L1_DEB_IPAC)
- debugl1(cs, "IPAC ISTA %02X", ista);
- if (ista & 0x0f) {
- val = readreg(cs->hw.diva.isac_adr, cs->hw.diva.isac, HSCX_ISTA + 0x40);
- if (ista & 0x01)
- val |= 0x01;
- if (ista & 0x04)
- val |= 0x02;
- if (ista & 0x08)
- val |= 0x04;
- if (val)
- hscx_int_main(cs, val);
- }
- if (ista & 0x20) {
- val = 0xfe & readreg(cs->hw.diva.isac_adr, cs->hw.diva.isac, ISAC_ISTA + 0x80);
- if (val) {
- isac_interrupt(cs, val);
- }
- }
- if (ista & 0x10) {
- val = 0x01;
- isac_interrupt(cs, val);
- }
- ista = readreg(cs->hw.diva.isac_adr, cs->hw.diva.isac, IPAC_ISTA);
- if ((ista & 0x3f) && icnt) {
- icnt--;
- goto Start_IPACISA;
- }
- if (!icnt)
- printk(KERN_WARNING "DIVA IPAC IRQ LOOP\n");
- writereg(cs->hw.diva.isac_adr, cs->hw.diva.isac, IPAC_MASK, 0xFF);
- writereg(cs->hw.diva.isac_adr, cs->hw.diva.isac, IPAC_MASK, 0xC0);
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
-}
-
-static inline void
-MemwaitforCEC(struct IsdnCardState *cs, int hscx)
-{
- int to = 50;
-
- while ((MemReadHSCX(cs, hscx, HSCX_STAR) & 0x04) && to) {
- udelay(1);
- to--;
- }
- if (!to)
- printk(KERN_WARNING "HiSax: waitforCEC timeout\n");
-}
-
-
-static inline void
-MemwaitforXFW(struct IsdnCardState *cs, int hscx)
-{
- int to = 50;
-
- while (((MemReadHSCX(cs, hscx, HSCX_STAR) & 0x44) != 0x40) && to) {
- udelay(1);
- to--;
- }
- if (!to)
- printk(KERN_WARNING "HiSax: waitforXFW timeout\n");
-}
-
-static inline void
-MemWriteHSCXCMDR(struct IsdnCardState *cs, int hscx, u_char data)
-{
- MemwaitforCEC(cs, hscx);
- MemWriteHSCX(cs, hscx, HSCX_CMDR, data);
-}
-
-static void
-Memhscx_empty_fifo(struct BCState *bcs, int count)
-{
- u_char *ptr;
- struct IsdnCardState *cs = bcs->cs;
- int cnt;
-
- if ((cs->debug & L1_DEB_HSCX) && !(cs->debug & L1_DEB_HSCX_FIFO))
- debugl1(cs, "hscx_empty_fifo");
-
- if (bcs->hw.hscx.rcvidx + count > HSCX_BUFMAX) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "hscx_empty_fifo: incoming packet too large");
- MemWriteHSCXCMDR(cs, bcs->hw.hscx.hscx, 0x80);
- bcs->hw.hscx.rcvidx = 0;
- return;
- }
- ptr = bcs->hw.hscx.rcvbuf + bcs->hw.hscx.rcvidx;
- cnt = count;
- while (cnt--)
- *ptr++ = memreadreg(cs->hw.diva.cfg_reg, bcs->hw.hscx.hscx ? 0x40 : 0);
- MemWriteHSCXCMDR(cs, bcs->hw.hscx.hscx, 0x80);
- ptr = bcs->hw.hscx.rcvbuf + bcs->hw.hscx.rcvidx;
- bcs->hw.hscx.rcvidx += count;
- if (cs->debug & L1_DEB_HSCX_FIFO) {
- char *t = bcs->blog;
-
- t += sprintf(t, "hscx_empty_fifo %c cnt %d",
- bcs->hw.hscx.hscx ? 'B' : 'A', count);
- QuickHex(t, ptr, count);
- debugl1(cs, "%s", bcs->blog);
- }
-}
-
-static void
-Memhscx_fill_fifo(struct BCState *bcs)
-{
- struct IsdnCardState *cs = bcs->cs;
- int more, count, cnt;
- int fifo_size = test_bit(HW_IPAC, &cs->HW_Flags) ? 64 : 32;
- u_char *ptr, *p;
-
- if ((cs->debug & L1_DEB_HSCX) && !(cs->debug & L1_DEB_HSCX_FIFO))
- debugl1(cs, "hscx_fill_fifo");
-
- if (!bcs->tx_skb)
- return;
- if (bcs->tx_skb->len <= 0)
- return;
-
- more = (bcs->mode == L1_MODE_TRANS) ? 1 : 0;
- if (bcs->tx_skb->len > fifo_size) {
- more = !0;
- count = fifo_size;
- } else
- count = bcs->tx_skb->len;
- cnt = count;
- MemwaitforXFW(cs, bcs->hw.hscx.hscx);
- p = ptr = bcs->tx_skb->data;
- skb_pull(bcs->tx_skb, count);
- bcs->tx_cnt -= count;
- bcs->hw.hscx.count += count;
- while (cnt--)
- memwritereg(cs->hw.diva.cfg_reg, bcs->hw.hscx.hscx ? 0x40 : 0,
- *p++);
- MemWriteHSCXCMDR(cs, bcs->hw.hscx.hscx, more ? 0x8 : 0xa);
- if (cs->debug & L1_DEB_HSCX_FIFO) {
- char *t = bcs->blog;
-
- t += sprintf(t, "hscx_fill_fifo %c cnt %d",
- bcs->hw.hscx.hscx ? 'B' : 'A', count);
- QuickHex(t, ptr, count);
- debugl1(cs, "%s", bcs->blog);
- }
-}
-
-static void
-Memhscx_interrupt(struct IsdnCardState *cs, u_char val, u_char hscx)
-{
- u_char r;
- struct BCState *bcs = cs->bcs + hscx;
- struct sk_buff *skb;
- int fifo_size = test_bit(HW_IPAC, &cs->HW_Flags) ? 64 : 32;
- int count;
-
- if (!test_bit(BC_FLG_INIT, &bcs->Flag))
- return;
-
- if (val & 0x80) { /* RME */
- r = MemReadHSCX(cs, hscx, HSCX_RSTA);
- if ((r & 0xf0) != 0xa0) {
- if (!(r & 0x80))
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "HSCX invalid frame");
- if ((r & 0x40) && bcs->mode)
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "HSCX RDO mode=%d",
- bcs->mode);
- if (!(r & 0x20))
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "HSCX CRC error");
- MemWriteHSCXCMDR(cs, hscx, 0x80);
- } else {
- count = MemReadHSCX(cs, hscx, HSCX_RBCL) & (
- test_bit(HW_IPAC, &cs->HW_Flags) ? 0x3f : 0x1f);
- if (count == 0)
- count = fifo_size;
- Memhscx_empty_fifo(bcs, count);
- if ((count = bcs->hw.hscx.rcvidx - 1) > 0) {
- if (cs->debug & L1_DEB_HSCX_FIFO)
- debugl1(cs, "HX Frame %d", count);
- if (!(skb = dev_alloc_skb(count)))
- printk(KERN_WARNING "HSCX: receive out of memory\n");
- else {
- skb_put_data(skb, bcs->hw.hscx.rcvbuf,
- count);
- skb_queue_tail(&bcs->rqueue, skb);
- }
- }
- }
- bcs->hw.hscx.rcvidx = 0;
- schedule_event(bcs, B_RCVBUFREADY);
- }
- if (val & 0x40) { /* RPF */
- Memhscx_empty_fifo(bcs, fifo_size);
- if (bcs->mode == L1_MODE_TRANS) {
- /* receive audio data */
- if (!(skb = dev_alloc_skb(fifo_size)))
- printk(KERN_WARNING "HiSax: receive out of memory\n");
- else {
- skb_put_data(skb, bcs->hw.hscx.rcvbuf,
- fifo_size);
- skb_queue_tail(&bcs->rqueue, skb);
- }
- bcs->hw.hscx.rcvidx = 0;
- schedule_event(bcs, B_RCVBUFREADY);
- }
- }
- if (val & 0x10) { /* XPR */
- if (bcs->tx_skb) {
- if (bcs->tx_skb->len) {
- Memhscx_fill_fifo(bcs);
- return;
- } else {
- if (test_bit(FLG_LLI_L1WAKEUP, &bcs->st->lli.flag) &&
- (PACKET_NOACK != bcs->tx_skb->pkt_type)) {
- u_long flags;
- spin_lock_irqsave(&bcs->aclock, flags);
- bcs->ackcnt += bcs->hw.hscx.count;
- spin_unlock_irqrestore(&bcs->aclock, flags);
- schedule_event(bcs, B_ACKPENDING);
- }
- dev_kfree_skb_irq(bcs->tx_skb);
- bcs->hw.hscx.count = 0;
- bcs->tx_skb = NULL;
- }
- }
- if ((bcs->tx_skb = skb_dequeue(&bcs->squeue))) {
- bcs->hw.hscx.count = 0;
- test_and_set_bit(BC_FLG_BUSY, &bcs->Flag);
- Memhscx_fill_fifo(bcs);
- } else {
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- schedule_event(bcs, B_XMTBUFREADY);
- }
- }
-}
-
-static inline void
-Memhscx_int_main(struct IsdnCardState *cs, u_char val)
-{
-
- u_char exval;
- struct BCState *bcs;
-
- if (val & 0x01) { // EXB
- bcs = cs->bcs + 1;
- exval = MemReadHSCX(cs, 1, HSCX_EXIR);
- if (exval & 0x40) {
- if (bcs->mode == 1)
- Memhscx_fill_fifo(bcs);
- else {
- /* Here we lost a TX interrupt, so
- * restart transmitting the whole frame.
- */
- if (bcs->tx_skb) {
- skb_push(bcs->tx_skb, bcs->hw.hscx.count);
- bcs->tx_cnt += bcs->hw.hscx.count;
- bcs->hw.hscx.count = 0;
- }
- MemWriteHSCXCMDR(cs, bcs->hw.hscx.hscx, 0x01);
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "HSCX B EXIR %x Lost TX", exval);
- }
- } else if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "HSCX B EXIR %x", exval);
- }
- if (val & 0xf8) {
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "HSCX B interrupt %x", val);
- Memhscx_interrupt(cs, val, 1);
- }
- if (val & 0x02) { // EXA
- bcs = cs->bcs;
- exval = MemReadHSCX(cs, 0, HSCX_EXIR);
- if (exval & 0x40) {
- if (bcs->mode == L1_MODE_TRANS)
- Memhscx_fill_fifo(bcs);
- else {
- /* Here we lost a TX interrupt, so
- * restart transmitting the whole frame.
- */
- if (bcs->tx_skb) {
- skb_push(bcs->tx_skb, bcs->hw.hscx.count);
- bcs->tx_cnt += bcs->hw.hscx.count;
- bcs->hw.hscx.count = 0;
- }
- MemWriteHSCXCMDR(cs, bcs->hw.hscx.hscx, 0x01);
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "HSCX A EXIR %x Lost TX", exval);
- }
- } else if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "HSCX A EXIR %x", exval);
- }
- if (val & 0x04) { // ICA
- exval = MemReadHSCX(cs, 0, HSCX_ISTA);
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "HSCX A interrupt %x", exval);
- Memhscx_interrupt(cs, exval, 0);
- }
-}
-
-static irqreturn_t
-diva_irq_ipac_pci(int intno, void *dev_id)
-{
- struct IsdnCardState *cs = dev_id;
- u_char ista, val;
- int icnt = 5;
- u_char *cfg;
- u_long flags;
-
- spin_lock_irqsave(&cs->lock, flags);
- cfg = (u_char *) cs->hw.diva.pci_cfg;
- val = *cfg;
- if (!(val & PITA_INT0_STATUS)) {
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_NONE; /* other shared IRQ */
- }
- *cfg = PITA_INT0_STATUS; /* Reset pending INT0 */
- ista = memreadreg(cs->hw.diva.cfg_reg, IPAC_ISTA);
-Start_IPACPCI:
- if (cs->debug & L1_DEB_IPAC)
- debugl1(cs, "IPAC ISTA %02X", ista);
- if (ista & 0x0f) {
- val = memreadreg(cs->hw.diva.cfg_reg, HSCX_ISTA + 0x40);
- if (ista & 0x01)
- val |= 0x01;
- if (ista & 0x04)
- val |= 0x02;
- if (ista & 0x08)
- val |= 0x04;
- if (val)
- Memhscx_int_main(cs, val);
- }
- if (ista & 0x20) {
- val = 0xfe & memreadreg(cs->hw.diva.cfg_reg, ISAC_ISTA + 0x80);
- if (val) {
- isac_interrupt(cs, val);
- }
- }
- if (ista & 0x10) {
- val = 0x01;
- isac_interrupt(cs, val);
- }
- ista = memreadreg(cs->hw.diva.cfg_reg, IPAC_ISTA);
- if ((ista & 0x3f) && icnt) {
- icnt--;
- goto Start_IPACPCI;
- }
- if (!icnt)
- printk(KERN_WARNING "DIVA IPAC PCI IRQ LOOP\n");
- memwritereg(cs->hw.diva.cfg_reg, IPAC_MASK, 0xFF);
- memwritereg(cs->hw.diva.cfg_reg, IPAC_MASK, 0xC0);
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
-}
-
-static irqreturn_t
-diva_irq_ipacx_pci(int intno, void *dev_id)
-{
- struct IsdnCardState *cs = dev_id;
- u_char val;
- u_char *cfg;
- u_long flags;
-
- spin_lock_irqsave(&cs->lock, flags);
- cfg = (u_char *) cs->hw.diva.pci_cfg;
- val = *cfg;
- if (!(val & PITA_INT0_STATUS)) {
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_NONE; // other shared IRQ
- }
- interrupt_ipacx(cs); // handler for chip
- *cfg = PITA_INT0_STATUS; // Reset PLX interrupt
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
-}
-
-static void
-release_io_diva(struct IsdnCardState *cs)
-{
- int bytecnt;
-
- if ((cs->subtyp == DIVA_IPAC_PCI) ||
- (cs->subtyp == DIVA_IPACX_PCI)) {
- u_int *cfg = (unsigned int *)cs->hw.diva.pci_cfg;
-
- *cfg = 0; /* disable INT0/1 */
- *cfg = 2; /* reset pending INT0 */
- if (cs->hw.diva.cfg_reg)
- iounmap((void *)cs->hw.diva.cfg_reg);
- if (cs->hw.diva.pci_cfg)
- iounmap((void *)cs->hw.diva.pci_cfg);
- return;
- } else if (cs->subtyp != DIVA_IPAC_ISA) {
- del_timer(&cs->hw.diva.tl);
- if (cs->hw.diva.cfg_reg)
- byteout(cs->hw.diva.ctrl, 0); /* LED off, Reset */
- }
- if ((cs->subtyp == DIVA_ISA) || (cs->subtyp == DIVA_IPAC_ISA))
- bytecnt = 8;
- else
- bytecnt = 32;
- if (cs->hw.diva.cfg_reg) {
- release_region(cs->hw.diva.cfg_reg, bytecnt);
- }
-}
-
-static void
-iounmap_diva(struct IsdnCardState *cs)
-{
- if ((cs->subtyp == DIVA_IPAC_PCI) || (cs->subtyp == DIVA_IPACX_PCI)) {
- if (cs->hw.diva.cfg_reg) {
- iounmap((void *)cs->hw.diva.cfg_reg);
- cs->hw.diva.cfg_reg = 0;
- }
- if (cs->hw.diva.pci_cfg) {
- iounmap((void *)cs->hw.diva.pci_cfg);
- cs->hw.diva.pci_cfg = 0;
- }
- }
-
- return;
-}
-
-static void
-reset_diva(struct IsdnCardState *cs)
-{
- if (cs->subtyp == DIVA_IPAC_ISA) {
- writereg(cs->hw.diva.isac_adr, cs->hw.diva.isac, IPAC_POTA2, 0x20);
- mdelay(10);
- writereg(cs->hw.diva.isac_adr, cs->hw.diva.isac, IPAC_POTA2, 0x00);
- mdelay(10);
- writereg(cs->hw.diva.isac_adr, cs->hw.diva.isac, IPAC_MASK, 0xc0);
- } else if (cs->subtyp == DIVA_IPAC_PCI) {
- unsigned int *ireg = (unsigned int *)(cs->hw.diva.pci_cfg +
- PITA_MISC_REG);
- *ireg = PITA_PARA_SOFTRESET | PITA_PARA_MPX_MODE;
- mdelay(10);
- *ireg = PITA_PARA_MPX_MODE;
- mdelay(10);
- memwritereg(cs->hw.diva.cfg_reg, IPAC_MASK, 0xc0);
- } else if (cs->subtyp == DIVA_IPACX_PCI) {
- unsigned int *ireg = (unsigned int *)(cs->hw.diva.pci_cfg +
- PITA_MISC_REG);
- *ireg = PITA_PARA_SOFTRESET | PITA_PARA_MPX_MODE;
- mdelay(10);
- *ireg = PITA_PARA_MPX_MODE | PITA_SER_SOFTRESET;
- mdelay(10);
- MemWriteISAC_IPACX(cs, IPACX_MASK, 0xff); // Interrupts off
- } else { /* DIVA 2.0 */
- cs->hw.diva.ctrl_reg = 0; /* Reset On */
- byteout(cs->hw.diva.ctrl, cs->hw.diva.ctrl_reg);
- mdelay(10);
- cs->hw.diva.ctrl_reg |= DIVA_RESET; /* Reset Off */
- byteout(cs->hw.diva.ctrl, cs->hw.diva.ctrl_reg);
- mdelay(10);
- if (cs->subtyp == DIVA_ISA)
- cs->hw.diva.ctrl_reg |= DIVA_ISA_LED_A;
- else {
- /* Workaround PCI9060 */
- byteout(cs->hw.diva.pci_cfg + 0x69, 9);
- cs->hw.diva.ctrl_reg |= DIVA_PCI_LED_A;
- }
- byteout(cs->hw.diva.ctrl, cs->hw.diva.ctrl_reg);
- }
-}
-
-#define DIVA_ASSIGN 1
-
-static void
-diva_led_handler(struct timer_list *t)
-{
- struct IsdnCardState *cs = from_timer(cs, t, hw.diva.tl);
- int blink = 0;
-
- if ((cs->subtyp == DIVA_IPAC_ISA) ||
- (cs->subtyp == DIVA_IPAC_PCI) ||
- (cs->subtyp == DIVA_IPACX_PCI))
- return;
- del_timer(&cs->hw.diva.tl);
- if (cs->hw.diva.status & DIVA_ASSIGN)
- cs->hw.diva.ctrl_reg |= (DIVA_ISA == cs->subtyp) ?
- DIVA_ISA_LED_A : DIVA_PCI_LED_A;
- else {
- cs->hw.diva.ctrl_reg ^= (DIVA_ISA == cs->subtyp) ?
- DIVA_ISA_LED_A : DIVA_PCI_LED_A;
- blink = 250;
- }
- if (cs->hw.diva.status & 0xf000)
- cs->hw.diva.ctrl_reg |= (DIVA_ISA == cs->subtyp) ?
- DIVA_ISA_LED_B : DIVA_PCI_LED_B;
- else if (cs->hw.diva.status & 0x0f00) {
- cs->hw.diva.ctrl_reg ^= (DIVA_ISA == cs->subtyp) ?
- DIVA_ISA_LED_B : DIVA_PCI_LED_B;
- blink = 500;
- } else
- cs->hw.diva.ctrl_reg &= ~((DIVA_ISA == cs->subtyp) ?
- DIVA_ISA_LED_B : DIVA_PCI_LED_B);
-
- byteout(cs->hw.diva.ctrl, cs->hw.diva.ctrl_reg);
- if (blink) {
- cs->hw.diva.tl.expires = jiffies + ((blink * HZ) / 1000);
- add_timer(&cs->hw.diva.tl);
- }
-}
-
-static int
-Diva_card_msg(struct IsdnCardState *cs, int mt, void *arg)
-{
- u_int *ireg;
- u_long flags;
-
- switch (mt) {
- case CARD_RESET:
- spin_lock_irqsave(&cs->lock, flags);
- reset_diva(cs);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_RELEASE:
- release_io_diva(cs);
- return (0);
- case CARD_INIT:
- spin_lock_irqsave(&cs->lock, flags);
- reset_diva(cs);
- if (cs->subtyp == DIVA_IPACX_PCI) {
- ireg = (unsigned int *)cs->hw.diva.pci_cfg;
- *ireg = PITA_INT0_ENABLE;
- init_ipacx(cs, 3); // init chip and enable interrupts
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- }
- if (cs->subtyp == DIVA_IPAC_PCI) {
- ireg = (unsigned int *)cs->hw.diva.pci_cfg;
- *ireg = PITA_INT0_ENABLE;
- }
- inithscxisac(cs, 3);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_TEST:
- return (0);
- case (MDL_REMOVE | REQUEST):
- cs->hw.diva.status = 0;
- break;
- case (MDL_ASSIGN | REQUEST):
- cs->hw.diva.status |= DIVA_ASSIGN;
- break;
- case MDL_INFO_SETUP:
- if ((long)arg)
- cs->hw.diva.status |= 0x0200;
- else
- cs->hw.diva.status |= 0x0100;
- break;
- case MDL_INFO_CONN:
- if ((long)arg)
- cs->hw.diva.status |= 0x2000;
- else
- cs->hw.diva.status |= 0x1000;
- break;
- case MDL_INFO_REL:
- if ((long)arg) {
- cs->hw.diva.status &= ~0x2000;
- cs->hw.diva.status &= ~0x0200;
- } else {
- cs->hw.diva.status &= ~0x1000;
- cs->hw.diva.status &= ~0x0100;
- }
- break;
- }
- if ((cs->subtyp != DIVA_IPAC_ISA) &&
- (cs->subtyp != DIVA_IPAC_PCI) &&
- (cs->subtyp != DIVA_IPACX_PCI)) {
- spin_lock_irqsave(&cs->lock, flags);
- diva_led_handler(&cs->hw.diva.tl);
- spin_unlock_irqrestore(&cs->lock, flags);
- }
- return (0);
-}
-
-static int setup_diva_common(struct IsdnCardState *cs)
-{
- int bytecnt;
- u_char val;
-
- if ((cs->subtyp == DIVA_ISA) || (cs->subtyp == DIVA_IPAC_ISA))
- bytecnt = 8;
- else
- bytecnt = 32;
-
- printk(KERN_INFO
- "Diva: %s card configured at %#lx IRQ %d\n",
- (cs->subtyp == DIVA_PCI) ? "PCI" :
- (cs->subtyp == DIVA_ISA) ? "ISA" :
- (cs->subtyp == DIVA_IPAC_ISA) ? "IPAC ISA" :
- (cs->subtyp == DIVA_IPAC_PCI) ? "IPAC PCI" : "IPACX PCI",
- cs->hw.diva.cfg_reg, cs->irq);
- if ((cs->subtyp == DIVA_IPAC_PCI) ||
- (cs->subtyp == DIVA_IPACX_PCI) ||
- (cs->subtyp == DIVA_PCI))
- printk(KERN_INFO "Diva: %s space at %#lx\n",
- (cs->subtyp == DIVA_PCI) ? "PCI" :
- (cs->subtyp == DIVA_IPAC_PCI) ? "IPAC PCI" : "IPACX PCI",
- cs->hw.diva.pci_cfg);
- if ((cs->subtyp != DIVA_IPAC_PCI) &&
- (cs->subtyp != DIVA_IPACX_PCI)) {
- if (!request_region(cs->hw.diva.cfg_reg, bytecnt, "diva isdn")) {
- printk(KERN_WARNING
- "HiSax: %s config port %lx-%lx already in use\n",
- "diva",
- cs->hw.diva.cfg_reg,
- cs->hw.diva.cfg_reg + bytecnt);
- iounmap_diva(cs);
- return (0);
- }
- }
- cs->BC_Read_Reg = &ReadHSCX;
- cs->BC_Write_Reg = &WriteHSCX;
- cs->BC_Send_Data = &hscx_fill_fifo;
- cs->cardmsg = &Diva_card_msg;
- setup_isac(cs);
- if (cs->subtyp == DIVA_IPAC_ISA) {
- cs->readisac = &ReadISAC_IPAC;
- cs->writeisac = &WriteISAC_IPAC;
- cs->readisacfifo = &ReadISACfifo_IPAC;
- cs->writeisacfifo = &WriteISACfifo_IPAC;
- cs->irq_func = &diva_irq_ipac_isa;
- val = readreg(cs->hw.diva.isac_adr, cs->hw.diva.isac, IPAC_ID);
- printk(KERN_INFO "Diva: IPAC version %x\n", val);
- } else if (cs->subtyp == DIVA_IPAC_PCI) {
- cs->readisac = &MemReadISAC_IPAC;
- cs->writeisac = &MemWriteISAC_IPAC;
- cs->readisacfifo = &MemReadISACfifo_IPAC;
- cs->writeisacfifo = &MemWriteISACfifo_IPAC;
- cs->BC_Read_Reg = &MemReadHSCX;
- cs->BC_Write_Reg = &MemWriteHSCX;
- cs->BC_Send_Data = &Memhscx_fill_fifo;
- cs->irq_func = &diva_irq_ipac_pci;
- val = memreadreg(cs->hw.diva.cfg_reg, IPAC_ID);
- printk(KERN_INFO "Diva: IPAC version %x\n", val);
- } else if (cs->subtyp == DIVA_IPACX_PCI) {
- cs->readisac = &MemReadISAC_IPACX;
- cs->writeisac = &MemWriteISAC_IPACX;
- cs->readisacfifo = &MemReadISACfifo_IPACX;
- cs->writeisacfifo = &MemWriteISACfifo_IPACX;
- cs->BC_Read_Reg = &MemReadHSCX_IPACX;
- cs->BC_Write_Reg = &MemWriteHSCX_IPACX;
- cs->BC_Send_Data = NULL; // function located in ipacx module
- cs->irq_func = &diva_irq_ipacx_pci;
- printk(KERN_INFO "Diva: IPACX Design Id: %x\n",
- MemReadISAC_IPACX(cs, IPACX_ID) & 0x3F);
- } else { /* DIVA 2.0 */
- timer_setup(&cs->hw.diva.tl, diva_led_handler, 0);
- cs->readisac = &ReadISAC;
- cs->writeisac = &WriteISAC;
- cs->readisacfifo = &ReadISACfifo;
- cs->writeisacfifo = &WriteISACfifo;
- cs->irq_func = &diva_interrupt;
- ISACVersion(cs, "Diva:");
- if (HscxVersion(cs, "Diva:")) {
- printk(KERN_WARNING
- "Diva: wrong HSCX versions check IO address\n");
- release_io_diva(cs);
- return (0);
- }
- }
- return (1);
-}
-
-#ifdef CONFIG_ISA
-
-static int setup_diva_isa(struct IsdnCard *card)
-{
- struct IsdnCardState *cs = card->cs;
- u_char val;
-
- if (!card->para[1])
- return (-1); /* card not found; continue search */
-
- cs->hw.diva.ctrl_reg = 0;
- cs->hw.diva.cfg_reg = card->para[1];
- val = readreg(cs->hw.diva.cfg_reg + DIVA_IPAC_ADR,
- cs->hw.diva.cfg_reg + DIVA_IPAC_DATA, IPAC_ID);
- printk(KERN_INFO "Diva: IPAC version %x\n", val);
- if ((val == 1) || (val == 2)) {
- cs->subtyp = DIVA_IPAC_ISA;
- cs->hw.diva.ctrl = 0;
- cs->hw.diva.isac = card->para[1] + DIVA_IPAC_DATA;
- cs->hw.diva.hscx = card->para[1] + DIVA_IPAC_DATA;
- cs->hw.diva.isac_adr = card->para[1] + DIVA_IPAC_ADR;
- cs->hw.diva.hscx_adr = card->para[1] + DIVA_IPAC_ADR;
- test_and_set_bit(HW_IPAC, &cs->HW_Flags);
- } else {
- cs->subtyp = DIVA_ISA;
- cs->hw.diva.ctrl = card->para[1] + DIVA_ISA_CTRL;
- cs->hw.diva.isac = card->para[1] + DIVA_ISA_ISAC_DATA;
- cs->hw.diva.hscx = card->para[1] + DIVA_HSCX_DATA;
- cs->hw.diva.isac_adr = card->para[1] + DIVA_ISA_ISAC_ADR;
- cs->hw.diva.hscx_adr = card->para[1] + DIVA_HSCX_ADR;
- }
- cs->irq = card->para[0];
-
- return (1); /* card found */
-}
-
-#else /* if !CONFIG_ISA */
-
-static int setup_diva_isa(struct IsdnCard *card)
-{
- return (-1); /* card not found; continue search */
-}
-
-#endif /* CONFIG_ISA */
-
-#ifdef __ISAPNP__
-static struct isapnp_device_id diva_ids[] = {
- { ISAPNP_VENDOR('G', 'D', 'I'), ISAPNP_FUNCTION(0x51),
- ISAPNP_VENDOR('G', 'D', 'I'), ISAPNP_FUNCTION(0x51),
- (unsigned long) "Diva picola" },
- { ISAPNP_VENDOR('G', 'D', 'I'), ISAPNP_FUNCTION(0x51),
- ISAPNP_VENDOR('E', 'I', 'C'), ISAPNP_FUNCTION(0x51),
- (unsigned long) "Diva picola" },
- { ISAPNP_VENDOR('G', 'D', 'I'), ISAPNP_FUNCTION(0x71),
- ISAPNP_VENDOR('G', 'D', 'I'), ISAPNP_FUNCTION(0x71),
- (unsigned long) "Diva 2.0" },
- { ISAPNP_VENDOR('G', 'D', 'I'), ISAPNP_FUNCTION(0x71),
- ISAPNP_VENDOR('E', 'I', 'C'), ISAPNP_FUNCTION(0x71),
- (unsigned long) "Diva 2.0" },
- { ISAPNP_VENDOR('G', 'D', 'I'), ISAPNP_FUNCTION(0xA1),
- ISAPNP_VENDOR('G', 'D', 'I'), ISAPNP_FUNCTION(0xA1),
- (unsigned long) "Diva 2.01" },
- { ISAPNP_VENDOR('G', 'D', 'I'), ISAPNP_FUNCTION(0xA1),
- ISAPNP_VENDOR('E', 'I', 'C'), ISAPNP_FUNCTION(0xA1),
- (unsigned long) "Diva 2.01" },
- { 0, }
-};
-
-static struct isapnp_device_id *ipid = &diva_ids[0];
-static struct pnp_card *pnp_c = NULL;
-
-static int setup_diva_isapnp(struct IsdnCard *card)
-{
- struct IsdnCardState *cs = card->cs;
- struct pnp_dev *pnp_d;
-
- if (!isapnp_present())
- return (-1); /* card not found; continue search */
-
- while (ipid->card_vendor) {
- if ((pnp_c = pnp_find_card(ipid->card_vendor,
- ipid->card_device, pnp_c))) {
- pnp_d = NULL;
- if ((pnp_d = pnp_find_dev(pnp_c,
- ipid->vendor, ipid->function, pnp_d))) {
- int err;
-
- printk(KERN_INFO "HiSax: %s detected\n",
- (char *)ipid->driver_data);
- pnp_disable_dev(pnp_d);
- err = pnp_activate_dev(pnp_d);
- if (err < 0) {
- printk(KERN_WARNING "%s: pnp_activate_dev ret(%d)\n",
- __func__, err);
- return (0);
- }
- card->para[1] = pnp_port_start(pnp_d, 0);
- card->para[0] = pnp_irq(pnp_d, 0);
- if (card->para[0] == -1 || !card->para[1]) {
- printk(KERN_ERR "Diva PnP:some resources are missing %ld/%lx\n",
- card->para[0], card->para[1]);
- pnp_disable_dev(pnp_d);
- return (0);
- }
- cs->hw.diva.cfg_reg = card->para[1];
- cs->irq = card->para[0];
- if (ipid->function == ISAPNP_FUNCTION(0xA1)) {
- cs->subtyp = DIVA_IPAC_ISA;
- cs->hw.diva.ctrl = 0;
- cs->hw.diva.isac =
- card->para[1] + DIVA_IPAC_DATA;
- cs->hw.diva.hscx =
- card->para[1] + DIVA_IPAC_DATA;
- cs->hw.diva.isac_adr =
- card->para[1] + DIVA_IPAC_ADR;
- cs->hw.diva.hscx_adr =
- card->para[1] + DIVA_IPAC_ADR;
- test_and_set_bit(HW_IPAC, &cs->HW_Flags);
- } else {
- cs->subtyp = DIVA_ISA;
- cs->hw.diva.ctrl =
- card->para[1] + DIVA_ISA_CTRL;
- cs->hw.diva.isac =
- card->para[1] + DIVA_ISA_ISAC_DATA;
- cs->hw.diva.hscx =
- card->para[1] + DIVA_HSCX_DATA;
- cs->hw.diva.isac_adr =
- card->para[1] + DIVA_ISA_ISAC_ADR;
- cs->hw.diva.hscx_adr =
- card->para[1] + DIVA_HSCX_ADR;
- }
- return (1); /* card found */
- } else {
- printk(KERN_ERR "Diva PnP: PnP error card found, no device\n");
- return (0);
- }
- }
- ipid++;
- pnp_c = NULL;
- }
-
- return (-1); /* card not found; continue search */
-}
-
-#else /* if !ISAPNP */
-
-static int setup_diva_isapnp(struct IsdnCard *card)
-{
- return (-1); /* card not found; continue search */
-}
-
-#endif /* ISAPNP */
-
-#ifdef CONFIG_PCI
-static struct pci_dev *dev_diva = NULL;
-static struct pci_dev *dev_diva_u = NULL;
-static struct pci_dev *dev_diva201 = NULL;
-static struct pci_dev *dev_diva202 = NULL;
-
-static int setup_diva_pci(struct IsdnCard *card)
-{
- struct IsdnCardState *cs = card->cs;
-
- cs->subtyp = 0;
- if ((dev_diva = hisax_find_pci_device(PCI_VENDOR_ID_EICON,
- PCI_DEVICE_ID_EICON_DIVA20, dev_diva))) {
- if (pci_enable_device(dev_diva))
- return (0);
- cs->subtyp = DIVA_PCI;
- cs->irq = dev_diva->irq;
- cs->hw.diva.cfg_reg = pci_resource_start(dev_diva, 2);
- } else if ((dev_diva_u = hisax_find_pci_device(PCI_VENDOR_ID_EICON,
- PCI_DEVICE_ID_EICON_DIVA20_U, dev_diva_u))) {
- if (pci_enable_device(dev_diva_u))
- return (0);
- cs->subtyp = DIVA_PCI;
- cs->irq = dev_diva_u->irq;
- cs->hw.diva.cfg_reg = pci_resource_start(dev_diva_u, 2);
- } else if ((dev_diva201 = hisax_find_pci_device(PCI_VENDOR_ID_EICON,
- PCI_DEVICE_ID_EICON_DIVA201, dev_diva201))) {
- if (pci_enable_device(dev_diva201))
- return (0);
- cs->subtyp = DIVA_IPAC_PCI;
- cs->irq = dev_diva201->irq;
- cs->hw.diva.pci_cfg =
- (ulong) ioremap(pci_resource_start(dev_diva201, 0), 4096);
- cs->hw.diva.cfg_reg =
- (ulong) ioremap(pci_resource_start(dev_diva201, 1), 4096);
- } else if ((dev_diva202 = hisax_find_pci_device(PCI_VENDOR_ID_EICON,
- PCI_DEVICE_ID_EICON_DIVA202, dev_diva202))) {
- if (pci_enable_device(dev_diva202))
- return (0);
- cs->subtyp = DIVA_IPACX_PCI;
- cs->irq = dev_diva202->irq;
- cs->hw.diva.pci_cfg =
- (ulong) ioremap(pci_resource_start(dev_diva202, 0), 4096);
- cs->hw.diva.cfg_reg =
- (ulong) ioremap(pci_resource_start(dev_diva202, 1), 4096);
- } else {
- return (-1); /* card not found; continue search */
- }
-
- if (!cs->irq) {
- printk(KERN_WARNING "Diva: No IRQ for PCI card found\n");
- iounmap_diva(cs);
- return (0);
- }
-
- if (!cs->hw.diva.cfg_reg) {
- printk(KERN_WARNING "Diva: No IO-Adr for PCI card found\n");
- iounmap_diva(cs);
- return (0);
- }
- cs->irq_flags |= IRQF_SHARED;
-
- if ((cs->subtyp == DIVA_IPAC_PCI) ||
- (cs->subtyp == DIVA_IPACX_PCI)) {
- cs->hw.diva.ctrl = 0;
- cs->hw.diva.isac = 0;
- cs->hw.diva.hscx = 0;
- cs->hw.diva.isac_adr = 0;
- cs->hw.diva.hscx_adr = 0;
- test_and_set_bit(HW_IPAC, &cs->HW_Flags);
- } else {
- cs->hw.diva.ctrl = cs->hw.diva.cfg_reg + DIVA_PCI_CTRL;
- cs->hw.diva.isac = cs->hw.diva.cfg_reg + DIVA_PCI_ISAC_DATA;
- cs->hw.diva.hscx = cs->hw.diva.cfg_reg + DIVA_HSCX_DATA;
- cs->hw.diva.isac_adr = cs->hw.diva.cfg_reg + DIVA_PCI_ISAC_ADR;
- cs->hw.diva.hscx_adr = cs->hw.diva.cfg_reg + DIVA_HSCX_ADR;
- }
-
- return (1); /* card found */
-}
-
-#else /* if !CONFIG_PCI */
-
-static int setup_diva_pci(struct IsdnCard *card)
-{
- return (-1); /* card not found; continue search */
-}
-
-#endif /* CONFIG_PCI */
-
-int setup_diva(struct IsdnCard *card)
-{
- int rc, have_card = 0;
- struct IsdnCardState *cs = card->cs;
- char tmp[64];
-
- strcpy(tmp, Diva_revision);
- printk(KERN_INFO "HiSax: Eicon.Diehl Diva driver Rev. %s\n", HiSax_getrev(tmp));
- if (cs->typ != ISDN_CTYPE_DIEHLDIVA)
- return (0);
- cs->hw.diva.status = 0;
-
- rc = setup_diva_isa(card);
- if (!rc)
- return rc;
- if (rc > 0) {
- have_card = 1;
- goto ready;
- }
-
- rc = setup_diva_isapnp(card);
- if (!rc)
- return rc;
- if (rc > 0) {
- have_card = 1;
- goto ready;
- }
-
- rc = setup_diva_pci(card);
- if (!rc)
- return rc;
- if (rc > 0)
- have_card = 1;
-
-ready:
- if (!have_card) {
- printk(KERN_WARNING "Diva: No ISA, ISAPNP or PCI card found\n");
- return (0);
- }
-
- return setup_diva_common(card->cs);
-}
diff --git a/drivers/isdn/hisax/elsa.c b/drivers/isdn/hisax/elsa.c
deleted file mode 100644
index 0754c0743790..000000000000
--- a/drivers/isdn/hisax/elsa.c
+++ /dev/null
@@ -1,1245 +0,0 @@
-/* $Id: elsa.c,v 2.32.2.4 2004/01/24 20:47:21 keil Exp $
- *
- * low level stuff for Elsa isdn cards
- *
- * Author Karsten Keil
- * Copyright by Karsten Keil <keil@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- * For changes and modifications please read
- * Documentation/isdn/HiSax.cert
- *
- * Thanks to Elsa GmbH for documents and information
- *
- * Klaus Lichtenwalder (Klaus.Lichtenwalder@WebForum.DE)
- * for ELSA PCMCIA support
- *
- */
-
-#include <linux/init.h>
-#include <linux/slab.h>
-#include "hisax.h"
-#include "arcofi.h"
-#include "isac.h"
-#include "ipac.h"
-#include "hscx.h"
-#include "isdnl1.h"
-#include <linux/pci.h>
-#include <linux/isapnp.h>
-#include <linux/serial.h>
-#include <linux/serial_reg.h>
-
-static const char *Elsa_revision = "$Revision: 2.32.2.4 $";
-static const char *Elsa_Types[] =
-{"None", "PC", "PCC-8", "PCC-16", "PCF", "PCF-Pro",
- "PCMCIA", "QS 1000", "QS 3000", "Microlink PCI", "QS 3000 PCI",
- "PCMCIA-IPAC" };
-
-static const char *ITACVer[] =
-{"?0?", "?1?", "?2?", "?3?", "?4?", "V2.2",
- "B1", "A1"};
-
-#define byteout(addr, val) outb(val, addr)
-#define bytein(addr) inb(addr)
-
-#define ELSA_ISAC 0
-#define ELSA_ISAC_PCM 1
-#define ELSA_ITAC 1
-#define ELSA_HSCX 2
-#define ELSA_ALE 3
-#define ELSA_ALE_PCM 4
-#define ELSA_CONTROL 4
-#define ELSA_CONFIG 5
-#define ELSA_START_TIMER 6
-#define ELSA_TRIG_IRQ 7
-
-#define ELSA_PC 1
-#define ELSA_PCC8 2
-#define ELSA_PCC16 3
-#define ELSA_PCF 4
-#define ELSA_PCFPRO 5
-#define ELSA_PCMCIA 6
-#define ELSA_QS1000 7
-#define ELSA_QS3000 8
-#define ELSA_QS1000PCI 9
-#define ELSA_QS3000PCI 10
-#define ELSA_PCMCIA_IPAC 11
-
-/* PCI stuff */
-#define ELSA_PCI_IRQ_MASK 0x04
-
-/* ITAC register addresses (only Microlink PC) */
-#define ITAC_SYS 0x34
-#define ITAC_ISEN 0x48
-#define ITAC_RFIE 0x4A
-#define ITAC_XFIE 0x4C
-#define ITAC_SCIE 0x4E
-#define ITAC_STIE 0x46
-
-/*** ***
- *** Macros used as commands for the card registers ***
- *** (several commands are combined by bitwise OR) ***
- *** ***/
-
-/* Config register (read) */
-#define ELIRQF_TIMER_RUN 0x02 /* Bit 1 of the config register */
-#define ELIRQF_TIMER_RUN_PCC8 0x01 /* Bit 0 of the config register on PCC */
-#define ELSA_IRQ_IDX 0x38 /* Bits 3,4,5 of the config register */
-#define ELSA_IRQ_IDX_PCC8 0x30 /* Bits 4,5 of the config register */
-#define ELSA_IRQ_IDX_PC 0x0c /* Bits 2,3 of the config register */
-
-/* Control register (write) */
-#define ELSA_LINE_LED 0x02 /* Bit 1: yellow LED */
-#define ELSA_STAT_LED 0x08 /* Bit 3: green LED */
-#define ELSA_ISDN_RESET 0x20 /* Bit 5: reset line */
-#define ELSA_ENA_TIMER_INT 0x80 /* Bit 7: enable timer interrupt */
-
-/* ALE register (read) */
-#define ELSA_HW_RELEASE 0x07 /* Bits 0-2: hardware identification */
-#define ELSA_S0_POWER_BAD 0x08 /* Bit 3: S0 bus power missing */
-
-/* Status Flags */
-#define ELIRQF_TIMER_AKTIV 1
-#define ELSA_BAD_PWR 2
-#define ELSA_ASSIGN 4
-
-#define RS_ISR_PASS_LIMIT 256
-#define FLG_MODEM_ACTIVE 1
-/* IPAC AUX */
-#define ELSA_IPAC_LINE_LED 0x40 /* Bit 6: yellow LED */
-#define ELSA_IPAC_STAT_LED 0x80 /* Bit 7: green LED */
-
-#if ARCOFI_USE
-static struct arcofi_msg ARCOFI_XOP_F =
-{NULL,0,2,{0xa1,0x3f,0,0,0,0,0,0,0,0}}; /* Normal OP */
-static struct arcofi_msg ARCOFI_XOP_1 =
-{&ARCOFI_XOP_F,0,2,{0xa1,0x31,0,0,0,0,0,0,0,0}}; /* PWR UP */
-static struct arcofi_msg ARCOFI_SOP_F =
-{&ARCOFI_XOP_1,0,10,{0xa1,0x1f,0x00,0x50,0x10,0x00,0x00,0x80,0x02,0x12}};
-static struct arcofi_msg ARCOFI_COP_9 =
-{&ARCOFI_SOP_F,0,10,{0xa1,0x29,0x80,0xcb,0xe9,0x88,0x00,0xc8,0xd8,0x80}}; /* RX */
-static struct arcofi_msg ARCOFI_COP_8 =
-{&ARCOFI_COP_9,0,10,{0xa1,0x28,0x49,0x31,0x8,0x13,0x6e,0x88,0x2a,0x61}}; /* TX */
-static struct arcofi_msg ARCOFI_COP_7 =
-{&ARCOFI_COP_8,0,4,{0xa1,0x27,0x80,0x80,0,0,0,0,0,0}}; /* GZ */
-static struct arcofi_msg ARCOFI_COP_6 =
-{&ARCOFI_COP_7,0,6,{0xa1,0x26,0,0,0x82,0x7c,0,0,0,0}}; /* GRL GRH */
-static struct arcofi_msg ARCOFI_COP_5 =
-{&ARCOFI_COP_6,0,4,{0xa1,0x25,0xbb,0x4a,0,0,0,0,0,0}}; /* GTX */
-static struct arcofi_msg ARCOFI_VERSION =
-{NULL,1,2,{0xa0,0,0,0,0,0,0,0,0,0}};
-static struct arcofi_msg ARCOFI_XOP_0 =
-{NULL,0,2,{0xa1,0x30,0,0,0,0,0,0,0,0}}; /* PWR Down */
-
-static void set_arcofi(struct IsdnCardState *cs, int bc);
-
-#include "elsa_ser.c"
-#endif /* ARCOFI_USE */
-
-static inline u_char
-readreg(unsigned int ale, unsigned int adr, u_char off)
-{
- register u_char ret;
-
- byteout(ale, off);
- ret = bytein(adr);
- return (ret);
-}
-
-static inline void
-readfifo(unsigned int ale, unsigned int adr, u_char off, u_char *data, int size)
-{
- byteout(ale, off);
- insb(adr, data, size);
-}
-
-
-static inline void
-writereg(unsigned int ale, unsigned int adr, u_char off, u_char data)
-{
- byteout(ale, off);
- byteout(adr, data);
-}
-
-static inline void
-writefifo(unsigned int ale, unsigned int adr, u_char off, u_char *data, int size)
-{
- byteout(ale, off);
- outsb(adr, data, size);
-}
-
-/* Interface functions */
-
-static u_char
-ReadISAC(struct IsdnCardState *cs, u_char offset)
-{
- return (readreg(cs->hw.elsa.ale, cs->hw.elsa.isac, offset));
-}
-
-static void
-WriteISAC(struct IsdnCardState *cs, u_char offset, u_char value)
-{
- writereg(cs->hw.elsa.ale, cs->hw.elsa.isac, offset, value);
-}
-
-static void
-ReadISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- readfifo(cs->hw.elsa.ale, cs->hw.elsa.isac, 0, data, size);
-}
-
-static void
-WriteISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- writefifo(cs->hw.elsa.ale, cs->hw.elsa.isac, 0, data, size);
-}
-
-static u_char
-ReadISAC_IPAC(struct IsdnCardState *cs, u_char offset)
-{
- return (readreg(cs->hw.elsa.ale, cs->hw.elsa.isac, offset + 0x80));
-}
-
-static void
-WriteISAC_IPAC(struct IsdnCardState *cs, u_char offset, u_char value)
-{
- writereg(cs->hw.elsa.ale, cs->hw.elsa.isac, offset | 0x80, value);
-}
-
-static void
-ReadISACfifo_IPAC(struct IsdnCardState *cs, u_char *data, int size)
-{
- readfifo(cs->hw.elsa.ale, cs->hw.elsa.isac, 0x80, data, size);
-}
-
-static void
-WriteISACfifo_IPAC(struct IsdnCardState *cs, u_char *data, int size)
-{
- writefifo(cs->hw.elsa.ale, cs->hw.elsa.isac, 0x80, data, size);
-}
-
-static u_char
-ReadHSCX(struct IsdnCardState *cs, int hscx, u_char offset)
-{
- return (readreg(cs->hw.elsa.ale,
- cs->hw.elsa.hscx, offset + (hscx ? 0x40 : 0)));
-}
-
-static void
-WriteHSCX(struct IsdnCardState *cs, int hscx, u_char offset, u_char value)
-{
- writereg(cs->hw.elsa.ale,
- cs->hw.elsa.hscx, offset + (hscx ? 0x40 : 0), value);
-}
-
-static inline u_char
-readitac(struct IsdnCardState *cs, u_char off)
-{
- register u_char ret;
-
- byteout(cs->hw.elsa.ale, off);
- ret = bytein(cs->hw.elsa.itac);
- return (ret);
-}
-
-static inline void
-writeitac(struct IsdnCardState *cs, u_char off, u_char data)
-{
- byteout(cs->hw.elsa.ale, off);
- byteout(cs->hw.elsa.itac, data);
-}
-
-static inline int
-TimerRun(struct IsdnCardState *cs)
-{
- register u_char v;
-
- v = bytein(cs->hw.elsa.cfg);
- if ((cs->subtyp == ELSA_QS1000) || (cs->subtyp == ELSA_QS3000))
- return (0 == (v & ELIRQF_TIMER_RUN));
- else if (cs->subtyp == ELSA_PCC8)
- return (v & ELIRQF_TIMER_RUN_PCC8);
- return (v & ELIRQF_TIMER_RUN);
-}
-/*
- * fast interrupt HSCX stuff goes here
- */
-
-#define READHSCX(cs, nr, reg) readreg(cs->hw.elsa.ale, \
- cs->hw.elsa.hscx, reg + (nr ? 0x40 : 0))
-#define WRITEHSCX(cs, nr, reg, data) writereg(cs->hw.elsa.ale, \
- cs->hw.elsa.hscx, reg + (nr ? 0x40 : 0), data)
-
-#define READHSCXFIFO(cs, nr, ptr, cnt) readfifo(cs->hw.elsa.ale, \
- cs->hw.elsa.hscx, (nr ? 0x40 : 0), ptr, cnt)
-
-#define WRITEHSCXFIFO(cs, nr, ptr, cnt) writefifo(cs->hw.elsa.ale, \
- cs->hw.elsa.hscx, (nr ? 0x40 : 0), ptr, cnt)
-
-#include "hscx_irq.c"
-
-static irqreturn_t
-elsa_interrupt(int intno, void *dev_id)
-{
- struct IsdnCardState *cs = dev_id;
- u_long flags;
- u_char val;
- int icnt = 5;
-
- if ((cs->typ == ISDN_CTYPE_ELSA_PCMCIA) && (*cs->busy_flag == 1)) {
- /* The card tends to generate interrupts while being removed,
- causing us to crash the kernel. Bad. */
- printk(KERN_WARNING "Elsa: card not available!\n");
- return IRQ_NONE;
- }
- spin_lock_irqsave(&cs->lock, flags);
-#if ARCOFI_USE
- if (cs->hw.elsa.MFlag) {
- val = serial_inp(cs, UART_IIR);
- if (!(val & UART_IIR_NO_INT)) {
- debugl1(cs, "IIR %02x", val);
- rs_interrupt_elsa(cs);
- }
- }
-#endif
- val = readreg(cs->hw.elsa.ale, cs->hw.elsa.hscx, HSCX_ISTA + 0x40);
-Start_HSCX:
- if (val) {
- hscx_int_main(cs, val);
- }
- val = readreg(cs->hw.elsa.ale, cs->hw.elsa.isac, ISAC_ISTA);
-Start_ISAC:
- if (val) {
- isac_interrupt(cs, val);
- }
- val = readreg(cs->hw.elsa.ale, cs->hw.elsa.hscx, HSCX_ISTA + 0x40);
- if (val && icnt) {
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "HSCX IntStat after IntRoutine");
- icnt--;
- goto Start_HSCX;
- }
- val = readreg(cs->hw.elsa.ale, cs->hw.elsa.isac, ISAC_ISTA);
- if (val && icnt) {
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "ISAC IntStat after IntRoutine");
- icnt--;
- goto Start_ISAC;
- }
- if (!icnt)
- printk(KERN_WARNING"ELSA IRQ LOOP\n");
- writereg(cs->hw.elsa.ale, cs->hw.elsa.hscx, HSCX_MASK, 0xFF);
- writereg(cs->hw.elsa.ale, cs->hw.elsa.hscx, HSCX_MASK + 0x40, 0xFF);
- writereg(cs->hw.elsa.ale, cs->hw.elsa.isac, ISAC_MASK, 0xFF);
- if (cs->hw.elsa.status & ELIRQF_TIMER_AKTIV) {
- if (!TimerRun(cs)) {
- /* Timer Restart */
- byteout(cs->hw.elsa.timer, 0);
- cs->hw.elsa.counter++;
- }
- }
-#if ARCOFI_USE
- if (cs->hw.elsa.MFlag) {
- val = serial_inp(cs, UART_MCR);
- val ^= 0x8;
- serial_outp(cs, UART_MCR, val);
- val = serial_inp(cs, UART_MCR);
- val ^= 0x8;
- serial_outp(cs, UART_MCR, val);
- }
-#endif
- if (cs->hw.elsa.trig)
- byteout(cs->hw.elsa.trig, 0x00);
- writereg(cs->hw.elsa.ale, cs->hw.elsa.hscx, HSCX_MASK, 0x0);
- writereg(cs->hw.elsa.ale, cs->hw.elsa.hscx, HSCX_MASK + 0x40, 0x0);
- writereg(cs->hw.elsa.ale, cs->hw.elsa.isac, ISAC_MASK, 0x0);
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
-}
-
-static irqreturn_t
-elsa_interrupt_ipac(int intno, void *dev_id)
-{
- struct IsdnCardState *cs = dev_id;
- u_long flags;
- u_char ista, val;
- int icnt = 5;
-
- spin_lock_irqsave(&cs->lock, flags);
- if (cs->subtyp == ELSA_QS1000PCI || cs->subtyp == ELSA_QS3000PCI) {
- val = bytein(cs->hw.elsa.cfg + 0x4c); /* PCI IRQ */
- if (!(val & ELSA_PCI_IRQ_MASK)) {
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_NONE;
- }
- }
-#if ARCOFI_USE
- if (cs->hw.elsa.MFlag) {
- val = serial_inp(cs, UART_IIR);
- if (!(val & UART_IIR_NO_INT)) {
- debugl1(cs, "IIR %02x", val);
- rs_interrupt_elsa(cs);
- }
- }
-#endif
- ista = readreg(cs->hw.elsa.ale, cs->hw.elsa.isac, IPAC_ISTA);
-Start_IPAC:
- if (cs->debug & L1_DEB_IPAC)
- debugl1(cs, "IPAC ISTA %02X", ista);
- if (ista & 0x0f) {
- val = readreg(cs->hw.elsa.ale, cs->hw.elsa.hscx, HSCX_ISTA + 0x40);
- if (ista & 0x01)
- val |= 0x01;
- if (ista & 0x04)
- val |= 0x02;
- if (ista & 0x08)
- val |= 0x04;
- if (val)
- hscx_int_main(cs, val);
- }
- if (ista & 0x20) {
- val = 0xfe & readreg(cs->hw.elsa.ale, cs->hw.elsa.isac, ISAC_ISTA + 0x80);
- if (val) {
- isac_interrupt(cs, val);
- }
- }
- if (ista & 0x10) {
- val = 0x01;
- isac_interrupt(cs, val);
- }
- ista = readreg(cs->hw.elsa.ale, cs->hw.elsa.isac, IPAC_ISTA);
- if ((ista & 0x3f) && icnt) {
- icnt--;
- goto Start_IPAC;
- }
- if (!icnt)
- printk(KERN_WARNING "ELSA IRQ LOOP\n");
- writereg(cs->hw.elsa.ale, cs->hw.elsa.isac, IPAC_MASK, 0xFF);
- writereg(cs->hw.elsa.ale, cs->hw.elsa.isac, IPAC_MASK, 0xC0);
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
-}
-
-static void
-release_io_elsa(struct IsdnCardState *cs)
-{
- int bytecnt = 8;
-
- del_timer(&cs->hw.elsa.tl);
-#if ARCOFI_USE
- clear_arcofi(cs);
-#endif
- if (cs->hw.elsa.ctrl)
- byteout(cs->hw.elsa.ctrl, 0); /* LEDs Out */
- if (cs->subtyp == ELSA_QS1000PCI) {
- byteout(cs->hw.elsa.cfg + 0x4c, 0x01); /* disable IRQ */
- writereg(cs->hw.elsa.ale, cs->hw.elsa.isac, IPAC_ATX, 0xff);
- bytecnt = 2;
- release_region(cs->hw.elsa.cfg, 0x80);
- }
- if (cs->subtyp == ELSA_QS3000PCI) {
- byteout(cs->hw.elsa.cfg + 0x4c, 0x03); /* disable ELSA PCI IRQ */
- writereg(cs->hw.elsa.ale, cs->hw.elsa.isac, IPAC_ATX, 0xff);
- release_region(cs->hw.elsa.cfg, 0x80);
- }
- if (cs->subtyp == ELSA_PCMCIA_IPAC) {
- writereg(cs->hw.elsa.ale, cs->hw.elsa.isac, IPAC_ATX, 0xff);
- }
- if ((cs->subtyp == ELSA_PCFPRO) ||
- (cs->subtyp == ELSA_QS3000) ||
- (cs->subtyp == ELSA_PCF) ||
- (cs->subtyp == ELSA_QS3000PCI)) {
- bytecnt = 16;
-#if ARCOFI_USE
- release_modem(cs);
-#endif
- }
- if (cs->hw.elsa.base)
- release_region(cs->hw.elsa.base, bytecnt);
-}
-
-static void
-reset_elsa(struct IsdnCardState *cs)
-{
- if (cs->hw.elsa.timer) {
- /* Wait 1 Timer */
- byteout(cs->hw.elsa.timer, 0);
- while (TimerRun(cs));
- cs->hw.elsa.ctrl_reg |= 0x50;
- cs->hw.elsa.ctrl_reg &= ~ELSA_ISDN_RESET; /* Reset On */
- byteout(cs->hw.elsa.ctrl, cs->hw.elsa.ctrl_reg);
- /* Wait 1 Timer */
- byteout(cs->hw.elsa.timer, 0);
- while (TimerRun(cs));
- cs->hw.elsa.ctrl_reg |= ELSA_ISDN_RESET; /* Reset Off */
- byteout(cs->hw.elsa.ctrl, cs->hw.elsa.ctrl_reg);
- /* Wait 1 Timer */
- byteout(cs->hw.elsa.timer, 0);
- while (TimerRun(cs));
- if (cs->hw.elsa.trig)
- byteout(cs->hw.elsa.trig, 0xff);
- }
- if ((cs->subtyp == ELSA_QS1000PCI) || (cs->subtyp == ELSA_QS3000PCI) || (cs->subtyp == ELSA_PCMCIA_IPAC)) {
- writereg(cs->hw.elsa.ale, cs->hw.elsa.isac, IPAC_POTA2, 0x20);
- mdelay(10);
- writereg(cs->hw.elsa.ale, cs->hw.elsa.isac, IPAC_POTA2, 0x00);
- writereg(cs->hw.elsa.ale, cs->hw.elsa.isac, IPAC_MASK, 0xc0);
- mdelay(10);
- if (cs->subtyp != ELSA_PCMCIA_IPAC) {
- writereg(cs->hw.elsa.ale, cs->hw.elsa.isac, IPAC_ACFG, 0x0);
- writereg(cs->hw.elsa.ale, cs->hw.elsa.isac, IPAC_AOE, 0x3c);
- } else {
- writereg(cs->hw.elsa.ale, cs->hw.elsa.isac, IPAC_PCFG, 0x10);
- writereg(cs->hw.elsa.ale, cs->hw.elsa.isac, IPAC_ACFG, 0x4);
- writereg(cs->hw.elsa.ale, cs->hw.elsa.isac, IPAC_AOE, 0xf8);
- }
- writereg(cs->hw.elsa.ale, cs->hw.elsa.isac, IPAC_ATX, 0xff);
- if (cs->subtyp == ELSA_QS1000PCI)
- byteout(cs->hw.elsa.cfg + 0x4c, 0x41); /* enable ELSA PCI IRQ */
- else if (cs->subtyp == ELSA_QS3000PCI)
- byteout(cs->hw.elsa.cfg + 0x4c, 0x43); /* enable ELSA PCI IRQ */
- }
-}
-
-#if ARCOFI_USE
-
-static void
-set_arcofi(struct IsdnCardState *cs, int bc) {
- cs->dc.isac.arcofi_bc = bc;
- arcofi_fsm(cs, ARCOFI_START, &ARCOFI_COP_5);
- wait_event_interruptible(cs->dc.isac.arcofi_wait,
- cs->dc.isac.arcofi_state == ARCOFI_NOP);
-}
-
-static int
-check_arcofi(struct IsdnCardState *cs)
-{
- int arcofi_present = 0;
- char tmp[40];
- char *t;
- u_char *p;
-
- if (!cs->dc.isac.mon_tx)
- if (!(cs->dc.isac.mon_tx = kmalloc(MAX_MON_FRAME, GFP_ATOMIC))) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "ISAC MON TX out of buffers!");
- return (0);
- }
- cs->dc.isac.arcofi_bc = 0;
- arcofi_fsm(cs, ARCOFI_START, &ARCOFI_VERSION);
- wait_event_interruptible(cs->dc.isac.arcofi_wait,
- cs->dc.isac.arcofi_state == ARCOFI_NOP);
- if (!test_and_clear_bit(FLG_ARCOFI_ERROR, &cs->HW_Flags)) {
- debugl1(cs, "Arcofi response received %d bytes", cs->dc.isac.mon_rxp);
- p = cs->dc.isac.mon_rx;
- t = tmp;
- t += sprintf(tmp, "Arcofi data");
- QuickHex(t, p, cs->dc.isac.mon_rxp);
- debugl1(cs, "%s", tmp);
- if ((cs->dc.isac.mon_rxp == 2) && (cs->dc.isac.mon_rx[0] == 0xa0)) {
- switch (cs->dc.isac.mon_rx[1]) {
- case 0x80:
- debugl1(cs, "Arcofi 2160 detected");
- arcofi_present = 1;
- break;
- case 0x82:
- debugl1(cs, "Arcofi 2165 detected");
- arcofi_present = 2;
- break;
- case 0x84:
- debugl1(cs, "Arcofi 2163 detected");
- arcofi_present = 3;
- break;
- default:
- debugl1(cs, "unknown Arcofi response");
- break;
- }
- } else
- debugl1(cs, "undefined Monitor response");
- cs->dc.isac.mon_rxp = 0;
- } else if (cs->dc.isac.mon_tx) {
- debugl1(cs, "Arcofi not detected");
- }
- if (arcofi_present) {
- if (cs->subtyp == ELSA_QS1000) {
- cs->subtyp = ELSA_QS3000;
- printk(KERN_INFO
- "Elsa: %s detected modem at 0x%lx\n",
- Elsa_Types[cs->subtyp],
- cs->hw.elsa.base + 8);
- release_region(cs->hw.elsa.base, 8);
- if (!request_region(cs->hw.elsa.base, 16, "elsa isdn modem")) {
- printk(KERN_WARNING
- "HiSax: %s config port %lx-%lx already in use\n",
- Elsa_Types[cs->subtyp],
- cs->hw.elsa.base + 8,
- cs->hw.elsa.base + 16);
- }
- } else if (cs->subtyp == ELSA_PCC16) {
- cs->subtyp = ELSA_PCF;
- printk(KERN_INFO
- "Elsa: %s detected modem at 0x%lx\n",
- Elsa_Types[cs->subtyp],
- cs->hw.elsa.base + 8);
- release_region(cs->hw.elsa.base, 8);
- if (!request_region(cs->hw.elsa.base, 16, "elsa isdn modem")) {
- printk(KERN_WARNING
- "HiSax: %s config port %lx-%lx already in use\n",
- Elsa_Types[cs->subtyp],
- cs->hw.elsa.base + 8,
- cs->hw.elsa.base + 16);
- }
- } else
- printk(KERN_INFO
- "Elsa: %s detected modem at 0x%lx\n",
- Elsa_Types[cs->subtyp],
- cs->hw.elsa.base + 8);
- arcofi_fsm(cs, ARCOFI_START, &ARCOFI_XOP_0);
- wait_event_interruptible(cs->dc.isac.arcofi_wait,
- cs->dc.isac.arcofi_state == ARCOFI_NOP);
- return (1);
- }
- return (0);
-}
-#endif /* ARCOFI_USE */
-
-static void
-elsa_led_handler(struct timer_list *t)
-{
- struct IsdnCardState *cs = from_timer(cs, t, hw.elsa.tl);
- int blink = 0;
-
- if (cs->subtyp == ELSA_PCMCIA || cs->subtyp == ELSA_PCMCIA_IPAC)
- return;
- del_timer(&cs->hw.elsa.tl);
- if (cs->hw.elsa.status & ELSA_ASSIGN)
- cs->hw.elsa.ctrl_reg |= ELSA_STAT_LED;
- else if (cs->hw.elsa.status & ELSA_BAD_PWR)
- cs->hw.elsa.ctrl_reg &= ~ELSA_STAT_LED;
- else {
- cs->hw.elsa.ctrl_reg ^= ELSA_STAT_LED;
- blink = 250;
- }
- if (cs->hw.elsa.status & 0xf000)
- cs->hw.elsa.ctrl_reg |= ELSA_LINE_LED;
- else if (cs->hw.elsa.status & 0x0f00) {
- cs->hw.elsa.ctrl_reg ^= ELSA_LINE_LED;
- blink = 500;
- } else
- cs->hw.elsa.ctrl_reg &= ~ELSA_LINE_LED;
-
- if ((cs->subtyp == ELSA_QS1000PCI) ||
- (cs->subtyp == ELSA_QS3000PCI)) {
- u_char led = 0xff;
- if (cs->hw.elsa.ctrl_reg & ELSA_LINE_LED)
- led ^= ELSA_IPAC_LINE_LED;
- if (cs->hw.elsa.ctrl_reg & ELSA_STAT_LED)
- led ^= ELSA_IPAC_STAT_LED;
- writereg(cs->hw.elsa.ale, cs->hw.elsa.isac, IPAC_ATX, led);
- } else
- byteout(cs->hw.elsa.ctrl, cs->hw.elsa.ctrl_reg);
- if (blink) {
- cs->hw.elsa.tl.expires = jiffies + ((blink * HZ) / 1000);
- add_timer(&cs->hw.elsa.tl);
- }
-}
-
-static int
-Elsa_card_msg(struct IsdnCardState *cs, int mt, void *arg)
-{
- int ret = 0;
- u_long flags;
-
- switch (mt) {
- case CARD_RESET:
- spin_lock_irqsave(&cs->lock, flags);
- reset_elsa(cs);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_RELEASE:
- release_io_elsa(cs);
- return (0);
- case CARD_INIT:
- spin_lock_irqsave(&cs->lock, flags);
- cs->debug |= L1_DEB_IPAC;
- reset_elsa(cs);
- inithscxisac(cs, 1);
- if ((cs->subtyp == ELSA_QS1000) ||
- (cs->subtyp == ELSA_QS3000))
- {
- byteout(cs->hw.elsa.timer, 0);
- }
- if (cs->hw.elsa.trig)
- byteout(cs->hw.elsa.trig, 0xff);
- inithscxisac(cs, 2);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_TEST:
- if ((cs->subtyp == ELSA_PCMCIA) ||
- (cs->subtyp == ELSA_PCMCIA_IPAC) ||
- (cs->subtyp == ELSA_QS1000PCI)) {
- return (0);
- } else if (cs->subtyp == ELSA_QS3000PCI) {
- ret = 0;
- } else {
- spin_lock_irqsave(&cs->lock, flags);
- cs->hw.elsa.counter = 0;
- cs->hw.elsa.ctrl_reg |= ELSA_ENA_TIMER_INT;
- cs->hw.elsa.status |= ELIRQF_TIMER_AKTIV;
- byteout(cs->hw.elsa.ctrl, cs->hw.elsa.ctrl_reg);
- byteout(cs->hw.elsa.timer, 0);
- spin_unlock_irqrestore(&cs->lock, flags);
- msleep(110);
- spin_lock_irqsave(&cs->lock, flags);
- cs->hw.elsa.ctrl_reg &= ~ELSA_ENA_TIMER_INT;
- byteout(cs->hw.elsa.ctrl, cs->hw.elsa.ctrl_reg);
- cs->hw.elsa.status &= ~ELIRQF_TIMER_AKTIV;
- spin_unlock_irqrestore(&cs->lock, flags);
- printk(KERN_INFO "Elsa: %d timer ticks in 110 msec\n",
- cs->hw.elsa.counter);
- if ((cs->hw.elsa.counter > 10) &&
- (cs->hw.elsa.counter < 16)) {
- printk(KERN_INFO "Elsa: timer and irq OK\n");
- ret = 0;
- } else {
- printk(KERN_WARNING
- "Elsa: timer tic problem (%d/12) maybe an IRQ(%d) conflict\n",
- cs->hw.elsa.counter, cs->irq);
- ret = 1;
- }
- }
-#if ARCOFI_USE
- if (check_arcofi(cs)) {
- init_modem(cs);
- }
-#endif
- elsa_led_handler(&cs->hw.elsa.tl);
- return (ret);
- case (MDL_REMOVE | REQUEST):
- cs->hw.elsa.status &= 0;
- break;
- case (MDL_ASSIGN | REQUEST):
- cs->hw.elsa.status |= ELSA_ASSIGN;
- break;
- case MDL_INFO_SETUP:
- if ((long) arg)
- cs->hw.elsa.status |= 0x0200;
- else
- cs->hw.elsa.status |= 0x0100;
- break;
- case MDL_INFO_CONN:
- if ((long) arg)
- cs->hw.elsa.status |= 0x2000;
- else
- cs->hw.elsa.status |= 0x1000;
- break;
- case MDL_INFO_REL:
- if ((long) arg) {
- cs->hw.elsa.status &= ~0x2000;
- cs->hw.elsa.status &= ~0x0200;
- } else {
- cs->hw.elsa.status &= ~0x1000;
- cs->hw.elsa.status &= ~0x0100;
- }
- break;
-#if ARCOFI_USE
- case CARD_AUX_IND:
- if (cs->hw.elsa.MFlag) {
- int len;
- u_char *msg;
-
- if (!arg)
- return (0);
- msg = arg;
- len = *msg;
- msg++;
- modem_write_cmd(cs, msg, len);
- }
- break;
-#endif
- }
- if (cs->typ == ISDN_CTYPE_ELSA) {
- int pwr = bytein(cs->hw.elsa.ale);
- if (pwr & 0x08)
- cs->hw.elsa.status |= ELSA_BAD_PWR;
- else
- cs->hw.elsa.status &= ~ELSA_BAD_PWR;
- }
- elsa_led_handler(&cs->hw.elsa.tl);
- return (ret);
-}
-
-static unsigned char
-probe_elsa_adr(unsigned int adr, int typ)
-{
- int i, in1, in2, p16_1 = 0, p16_2 = 0, p8_1 = 0, p8_2 = 0, pc_1 = 0,
- pc_2 = 0, pfp_1 = 0, pfp_2 = 0;
-
- /* In case of the Elsa PCMCIA card, this region is in use and
- reserved for us by the card manager, so we do not check it
- here; the check would fail. */
- if (typ != ISDN_CTYPE_ELSA_PCMCIA) {
- if (request_region(adr, 8, "elsa card")) {
- release_region(adr, 8);
- } else {
- printk(KERN_WARNING
- "Elsa: Probing Port 0x%x: already in use\n", adr);
- return (0);
- }
- }
- for (i = 0; i < 16; i++) {
- in1 = inb(adr + ELSA_CONFIG); /* toggles on */
- in2 = inb(adr + ELSA_CONFIG); /* every access */
- p16_1 += 0x04 & in1;
- p16_2 += 0x04 & in2;
- p8_1 += 0x02 & in1;
- p8_2 += 0x02 & in2;
- pc_1 += 0x01 & in1;
- pc_2 += 0x01 & in2;
- pfp_1 += 0x40 & in1;
- pfp_2 += 0x40 & in2;
- }
- printk(KERN_INFO "Elsa: Probing IO 0x%x", adr);
- if (65 == ++p16_1 * ++p16_2) {
- printk(" PCC-16/PCF found\n");
- return (ELSA_PCC16);
- } else if (1025 == ++pfp_1 * ++pfp_2) {
- printk(" PCF-Pro found\n");
- return (ELSA_PCFPRO);
- } else if (33 == ++p8_1 * ++p8_2) {
- printk(" PCC8 found\n");
- return (ELSA_PCC8);
- } else if (17 == ++pc_1 * ++pc_2) {
- printk(" PC found\n");
- return (ELSA_PC);
- } else {
- printk(" failed\n");
- return (0);
- }
-}
-
-static unsigned int
-probe_elsa(struct IsdnCardState *cs)
-{
- int i;
- unsigned int CARD_portlist[] =
- {0x160, 0x170, 0x260, 0x360, 0};
-
- for (i = 0; CARD_portlist[i]; i++) {
- if ((cs->subtyp = probe_elsa_adr(CARD_portlist[i], cs->typ)))
- break;
- }
- return (CARD_portlist[i]);
-}
-
-static int setup_elsa_isa(struct IsdnCard *card)
-{
- struct IsdnCardState *cs = card->cs;
- u_char val;
-
- cs->hw.elsa.base = card->para[0];
- printk(KERN_INFO "Elsa: Microlink IO probing\n");
- if (cs->hw.elsa.base) {
- if (!(cs->subtyp = probe_elsa_adr(cs->hw.elsa.base,
- cs->typ))) {
- printk(KERN_WARNING
- "Elsa: no Elsa Microlink at %#lx\n",
- cs->hw.elsa.base);
- return (0);
- }
- } else
- cs->hw.elsa.base = probe_elsa(cs);
-
- if (!cs->hw.elsa.base) {
- printk(KERN_WARNING
- "No Elsa Microlink found\n");
- return (0);
- }
-
- cs->hw.elsa.cfg = cs->hw.elsa.base + ELSA_CONFIG;
- cs->hw.elsa.ctrl = cs->hw.elsa.base + ELSA_CONTROL;
- cs->hw.elsa.ale = cs->hw.elsa.base + ELSA_ALE;
- cs->hw.elsa.isac = cs->hw.elsa.base + ELSA_ISAC;
- cs->hw.elsa.itac = cs->hw.elsa.base + ELSA_ITAC;
- cs->hw.elsa.hscx = cs->hw.elsa.base + ELSA_HSCX;
- cs->hw.elsa.trig = cs->hw.elsa.base + ELSA_TRIG_IRQ;
- cs->hw.elsa.timer = cs->hw.elsa.base + ELSA_START_TIMER;
- val = bytein(cs->hw.elsa.cfg);
- if (cs->subtyp == ELSA_PC) {
- const u_char CARD_IrqTab[8] =
- {7, 3, 5, 9, 0, 0, 0, 0};
- cs->irq = CARD_IrqTab[(val & ELSA_IRQ_IDX_PC) >> 2];
- } else if (cs->subtyp == ELSA_PCC8) {
- const u_char CARD_IrqTab[8] =
- {7, 3, 5, 9, 0, 0, 0, 0};
- cs->irq = CARD_IrqTab[(val & ELSA_IRQ_IDX_PCC8) >> 4];
- } else {
- const u_char CARD_IrqTab[8] =
- {15, 10, 15, 3, 11, 5, 11, 9};
- cs->irq = CARD_IrqTab[(val & ELSA_IRQ_IDX) >> 3];
- }
- val = bytein(cs->hw.elsa.ale) & ELSA_HW_RELEASE;
- if (val < 3)
- val |= 8;
- val += 'A' - 3;
- if (val == 'B' || val == 'C')
- val ^= 1;
- if ((cs->subtyp == ELSA_PCFPRO) && (val == 'G'))
- val = 'C';
- printk(KERN_INFO
- "Elsa: %s found at %#lx Rev.:%c IRQ %d\n",
- Elsa_Types[cs->subtyp],
- cs->hw.elsa.base,
- val, cs->irq);
- val = bytein(cs->hw.elsa.ale) & ELSA_S0_POWER_BAD;
- if (val) {
- printk(KERN_WARNING
- "Elsa: Microlink S0 bus power bad\n");
- cs->hw.elsa.status |= ELSA_BAD_PWR;
- }
-
- return (1);
-}
-
-#ifdef __ISAPNP__
-static struct isapnp_device_id elsa_ids[] = {
- { ISAPNP_VENDOR('E', 'L', 'S'), ISAPNP_FUNCTION(0x0133),
- ISAPNP_VENDOR('E', 'L', 'S'), ISAPNP_FUNCTION(0x0133),
- (unsigned long) "Elsa QS1000" },
- { ISAPNP_VENDOR('E', 'L', 'S'), ISAPNP_FUNCTION(0x0134),
- ISAPNP_VENDOR('E', 'L', 'S'), ISAPNP_FUNCTION(0x0134),
- (unsigned long) "Elsa QS3000" },
- { 0, }
-};
-
-static struct isapnp_device_id *ipid = &elsa_ids[0];
-static struct pnp_card *pnp_c = NULL;
-#endif /* __ISAPNP__ */
-
-static int setup_elsa_isapnp(struct IsdnCard *card)
-{
- struct IsdnCardState *cs = card->cs;
-
-#ifdef __ISAPNP__
- if (!card->para[1] && isapnp_present()) {
- struct pnp_dev *pnp_d;
- while (ipid->card_vendor) {
- if ((pnp_c = pnp_find_card(ipid->card_vendor,
- ipid->card_device, pnp_c))) {
- pnp_d = NULL;
- if ((pnp_d = pnp_find_dev(pnp_c,
- ipid->vendor, ipid->function, pnp_d))) {
- int err;
-
- printk(KERN_INFO "HiSax: %s detected\n",
- (char *)ipid->driver_data);
- pnp_disable_dev(pnp_d);
- err = pnp_activate_dev(pnp_d);
- if (err < 0) {
- printk(KERN_WARNING "%s: pnp_activate_dev ret(%d)\n",
- __func__, err);
- return (0);
- }
- card->para[1] = pnp_port_start(pnp_d, 0);
- card->para[0] = pnp_irq(pnp_d, 0);
-
- if (card->para[0] == -1 || !card->para[1]) {
-					printk(KERN_ERR "Elsa PnP: some resources are missing %ld/%lx\n",
- card->para[0], card->para[1]);
- pnp_disable_dev(pnp_d);
- return (0);
- }
- if (ipid->function == ISAPNP_FUNCTION(0x133))
- cs->subtyp = ELSA_QS1000;
- else
- cs->subtyp = ELSA_QS3000;
- break;
- } else {
-				printk(KERN_ERR "Elsa PnP: PnP error, card found but no device\n");
- return (0);
- }
- }
- ipid++;
- pnp_c = NULL;
- }
- if (!ipid->card_vendor) {
- printk(KERN_INFO "Elsa PnP: no ISAPnP card found\n");
- return (0);
- }
- }
-#endif /* __ISAPNP__ */
-
- if (card->para[1] && card->para[0]) {
- cs->hw.elsa.base = card->para[1];
- cs->irq = card->para[0];
- if (!cs->subtyp)
- cs->subtyp = ELSA_QS1000;
- } else {
- printk(KERN_ERR "Elsa PnP: no parameter\n");
- }
- cs->hw.elsa.cfg = cs->hw.elsa.base + ELSA_CONFIG;
- cs->hw.elsa.ale = cs->hw.elsa.base + ELSA_ALE;
- cs->hw.elsa.isac = cs->hw.elsa.base + ELSA_ISAC;
- cs->hw.elsa.hscx = cs->hw.elsa.base + ELSA_HSCX;
- cs->hw.elsa.trig = cs->hw.elsa.base + ELSA_TRIG_IRQ;
- cs->hw.elsa.timer = cs->hw.elsa.base + ELSA_START_TIMER;
- cs->hw.elsa.ctrl = cs->hw.elsa.base + ELSA_CONTROL;
- printk(KERN_INFO
- "Elsa: %s defined at %#lx IRQ %d\n",
- Elsa_Types[cs->subtyp],
- cs->hw.elsa.base,
- cs->irq);
-
- return (1);
-}
-
-static void setup_elsa_pcmcia(struct IsdnCard *card)
-{
- struct IsdnCardState *cs = card->cs;
- u_char val;
-
- cs->hw.elsa.base = card->para[1];
- cs->irq = card->para[0];
- val = readreg(cs->hw.elsa.base + 0, cs->hw.elsa.base + 2, IPAC_ID);
- if ((val == 1) || (val == 2)) { /* IPAC version 1.1/1.2 */
- cs->subtyp = ELSA_PCMCIA_IPAC;
- cs->hw.elsa.ale = cs->hw.elsa.base + 0;
- cs->hw.elsa.isac = cs->hw.elsa.base + 2;
- cs->hw.elsa.hscx = cs->hw.elsa.base + 2;
- test_and_set_bit(HW_IPAC, &cs->HW_Flags);
- } else {
- cs->subtyp = ELSA_PCMCIA;
- cs->hw.elsa.ale = cs->hw.elsa.base + ELSA_ALE_PCM;
- cs->hw.elsa.isac = cs->hw.elsa.base + ELSA_ISAC_PCM;
- cs->hw.elsa.hscx = cs->hw.elsa.base + ELSA_HSCX;
- }
- cs->hw.elsa.timer = 0;
- cs->hw.elsa.trig = 0;
- cs->hw.elsa.ctrl = 0;
- cs->irq_flags |= IRQF_SHARED;
- printk(KERN_INFO
- "Elsa: %s defined at %#lx IRQ %d\n",
- Elsa_Types[cs->subtyp],
- cs->hw.elsa.base,
- cs->irq);
-}
-
-#ifdef CONFIG_PCI
-static struct pci_dev *dev_qs1000 = NULL;
-static struct pci_dev *dev_qs3000 = NULL;
-
-static int setup_elsa_pci(struct IsdnCard *card)
-{
- struct IsdnCardState *cs = card->cs;
-
- cs->subtyp = 0;
- if ((dev_qs1000 = hisax_find_pci_device(PCI_VENDOR_ID_ELSA,
- PCI_DEVICE_ID_ELSA_MICROLINK, dev_qs1000))) {
- if (pci_enable_device(dev_qs1000))
- return (0);
- cs->subtyp = ELSA_QS1000PCI;
- cs->irq = dev_qs1000->irq;
- cs->hw.elsa.cfg = pci_resource_start(dev_qs1000, 1);
- cs->hw.elsa.base = pci_resource_start(dev_qs1000, 3);
- } else if ((dev_qs3000 = hisax_find_pci_device(PCI_VENDOR_ID_ELSA,
- PCI_DEVICE_ID_ELSA_QS3000, dev_qs3000))) {
- if (pci_enable_device(dev_qs3000))
- return (0);
- cs->subtyp = ELSA_QS3000PCI;
- cs->irq = dev_qs3000->irq;
- cs->hw.elsa.cfg = pci_resource_start(dev_qs3000, 1);
- cs->hw.elsa.base = pci_resource_start(dev_qs3000, 3);
- } else {
- printk(KERN_WARNING "Elsa: No PCI card found\n");
- return (0);
- }
- if (!cs->irq) {
- printk(KERN_WARNING "Elsa: No IRQ for PCI card found\n");
- return (0);
- }
-
- if (!(cs->hw.elsa.base && cs->hw.elsa.cfg)) {
-		printk(KERN_WARNING "Elsa: No I/O address for PCI card found\n");
- return (0);
- }
- if ((cs->hw.elsa.cfg & 0xff) || (cs->hw.elsa.base & 0xf)) {
- printk(KERN_WARNING "Elsa: You may have a wrong PCI bios\n");
- printk(KERN_WARNING "Elsa: If your system hangs now, read\n");
- printk(KERN_WARNING "Elsa: Documentation/isdn/README.HiSax\n");
- }
- cs->hw.elsa.ale = cs->hw.elsa.base;
- cs->hw.elsa.isac = cs->hw.elsa.base + 1;
- cs->hw.elsa.hscx = cs->hw.elsa.base + 1;
- test_and_set_bit(HW_IPAC, &cs->HW_Flags);
- cs->hw.elsa.timer = 0;
- cs->hw.elsa.trig = 0;
- cs->irq_flags |= IRQF_SHARED;
- printk(KERN_INFO
- "Elsa: %s defined at %#lx/0x%x IRQ %d\n",
- Elsa_Types[cs->subtyp],
- cs->hw.elsa.base,
- cs->hw.elsa.cfg,
- cs->irq);
-
- return (1);
-}
-
-#else
-
-static int setup_elsa_pci(struct IsdnCard *card)
-{
- return (1);
-}
-#endif /* CONFIG_PCI */
-
-static int setup_elsa_common(struct IsdnCard *card)
-{
- struct IsdnCardState *cs = card->cs;
- u_char val;
- int bytecnt;
-
- switch (cs->subtyp) {
- case ELSA_PC:
- case ELSA_PCC8:
- case ELSA_PCC16:
- case ELSA_QS1000:
- case ELSA_PCMCIA:
- case ELSA_PCMCIA_IPAC:
- bytecnt = 8;
- break;
- case ELSA_PCFPRO:
- case ELSA_PCF:
- case ELSA_QS3000:
- case ELSA_QS3000PCI:
- bytecnt = 16;
- break;
- case ELSA_QS1000PCI:
- bytecnt = 2;
- break;
- default:
- printk(KERN_WARNING
- "Unknown ELSA subtype %d\n", cs->subtyp);
- return (0);
- }
-	/* In case of the Elsa PCMCIA card, this region is in use and
-	   reserved for us by the card manager, so we do not check it
-	   here; the check would fail. */
- if (cs->typ != ISDN_CTYPE_ELSA_PCMCIA && !request_region(cs->hw.elsa.base, bytecnt, "elsa isdn")) {
- printk(KERN_WARNING
- "HiSax: ELSA config port %#lx-%#lx already in use\n",
- cs->hw.elsa.base,
- cs->hw.elsa.base + bytecnt);
- return (0);
- }
- if ((cs->subtyp == ELSA_QS1000PCI) || (cs->subtyp == ELSA_QS3000PCI)) {
- if (!request_region(cs->hw.elsa.cfg, 0x80, "elsa isdn pci")) {
- printk(KERN_WARNING
- "HiSax: ELSA pci port %x-%x already in use\n",
- cs->hw.elsa.cfg,
- cs->hw.elsa.cfg + 0x80);
- release_region(cs->hw.elsa.base, bytecnt);
- return (0);
- }
- }
-#if ARCOFI_USE
- init_arcofi(cs);
-#endif
- setup_isac(cs);
- timer_setup(&cs->hw.elsa.tl, elsa_led_handler, 0);
-	/* Test the timer */
- if (cs->hw.elsa.timer) {
- byteout(cs->hw.elsa.trig, 0xff);
- byteout(cs->hw.elsa.timer, 0);
- if (!TimerRun(cs)) {
-			byteout(cs->hw.elsa.timer, 0); /* 2nd attempt */
- if (!TimerRun(cs)) {
- printk(KERN_WARNING
-				       "Elsa: timer does not start\n");
- release_io_elsa(cs);
- return (0);
- }
- }
- HZDELAY((HZ / 100) + 1); /* wait >=10 ms */
- if (TimerRun(cs)) {
-			printk(KERN_WARNING "Elsa: timer does not run down\n");
- release_io_elsa(cs);
- return (0);
- }
- printk(KERN_INFO "Elsa: timer OK; resetting card\n");
- }
- cs->BC_Read_Reg = &ReadHSCX;
- cs->BC_Write_Reg = &WriteHSCX;
- cs->BC_Send_Data = &hscx_fill_fifo;
- cs->cardmsg = &Elsa_card_msg;
- if ((cs->subtyp == ELSA_QS1000PCI) || (cs->subtyp == ELSA_QS3000PCI) || (cs->subtyp == ELSA_PCMCIA_IPAC)) {
- cs->readisac = &ReadISAC_IPAC;
- cs->writeisac = &WriteISAC_IPAC;
- cs->readisacfifo = &ReadISACfifo_IPAC;
- cs->writeisacfifo = &WriteISACfifo_IPAC;
- cs->irq_func = &elsa_interrupt_ipac;
- val = readreg(cs->hw.elsa.ale, cs->hw.elsa.isac, IPAC_ID);
- printk(KERN_INFO "Elsa: IPAC version %x\n", val);
- } else {
- cs->readisac = &ReadISAC;
- cs->writeisac = &WriteISAC;
- cs->readisacfifo = &ReadISACfifo;
- cs->writeisacfifo = &WriteISACfifo;
- cs->irq_func = &elsa_interrupt;
- ISACVersion(cs, "Elsa:");
- if (HscxVersion(cs, "Elsa:")) {
- printk(KERN_WARNING
-			       "Elsa: wrong HSCX versions; check the I/O address\n");
- release_io_elsa(cs);
- return (0);
- }
- }
- if (cs->subtyp == ELSA_PC) {
- val = readitac(cs, ITAC_SYS);
- printk(KERN_INFO "Elsa: ITAC version %s\n", ITACVer[val & 7]);
- writeitac(cs, ITAC_ISEN, 0);
- writeitac(cs, ITAC_RFIE, 0);
- writeitac(cs, ITAC_XFIE, 0);
- writeitac(cs, ITAC_SCIE, 0);
- writeitac(cs, ITAC_STIE, 0);
- }
- return (1);
-}
-
-int setup_elsa(struct IsdnCard *card)
-{
- int rc;
- struct IsdnCardState *cs = card->cs;
- char tmp[64];
-
- strcpy(tmp, Elsa_revision);
- printk(KERN_INFO "HiSax: Elsa driver Rev. %s\n", HiSax_getrev(tmp));
- cs->hw.elsa.ctrl_reg = 0;
- cs->hw.elsa.status = 0;
- cs->hw.elsa.MFlag = 0;
- cs->subtyp = 0;
-
- if (cs->typ == ISDN_CTYPE_ELSA) {
- rc = setup_elsa_isa(card);
- if (!rc)
- return (0);
-
- } else if (cs->typ == ISDN_CTYPE_ELSA_PNP) {
- rc = setup_elsa_isapnp(card);
- if (!rc)
- return (0);
-
- } else if (cs->typ == ISDN_CTYPE_ELSA_PCMCIA)
- setup_elsa_pcmcia(card);
-
- else if (cs->typ == ISDN_CTYPE_ELSA_PCI) {
- rc = setup_elsa_pci(card);
- if (!rc)
- return (0);
-
- } else
- return (0);
-
- return setup_elsa_common(card);
-}
diff --git a/drivers/isdn/hisax/elsa_cs.c b/drivers/isdn/hisax/elsa_cs.c
deleted file mode 100644
index 40f6fad79de3..000000000000
--- a/drivers/isdn/hisax/elsa_cs.c
+++ /dev/null
@@ -1,218 +0,0 @@
-/*======================================================================
-
- An elsa_cs PCMCIA client driver
-
- This driver is for the Elsa PCM ISDN Cards, i.e. the MicroLink
-
-
- The contents of this file are subject to the Mozilla Public
- License Version 1.1 (the "License"); you may not use this file
- except in compliance with the License. You may obtain a copy of
- the License at http://www.mozilla.org/MPL/
-
- Software distributed under the License is distributed on an "AS
- IS" basis, WITHOUT WARRANTY OF ANY KIND, either express or
- implied. See the License for the specific language governing
- rights and limitations under the License.
-
- The initial developer of the original code is David A. Hinds
- <dahinds@users.sourceforge.net>. Portions created by David A. Hinds
- are Copyright (C) 1999 David A. Hinds. All Rights Reserved.
-
- Modifications from dummy_cs.c are Copyright (C) 1999-2001 Klaus
- Lichtenwalder <Lichtenwalder@ACM.org>. All Rights Reserved.
-
- Alternatively, the contents of this file may be used under the
- terms of the GNU General Public License version 2 (the "GPL"), in
- which case the provisions of the GPL are applicable instead of the
- above. If you wish to allow the use of your version of this file
- only under the terms of the GPL and not to allow others to use
- your version of this file under the MPL, indicate your decision
- by deleting the provisions above and replace them with the notice
- and other provisions required by the GPL. If you do not delete
- the provisions above, a recipient may use your version of this
- file under either the MPL or the GPL.
-
- ======================================================================*/
-
-#include <linux/module.h>
-#include <linux/kernel.h>
-#include <linux/init.h>
-#include <linux/ptrace.h>
-#include <linux/slab.h>
-#include <linux/string.h>
-#include <linux/timer.h>
-#include <linux/ioport.h>
-#include <asm/io.h>
-
-#include <pcmcia/cistpl.h>
-#include <pcmcia/cisreg.h>
-#include <pcmcia/ds.h>
-#include "hisax_cfg.h"
-
-MODULE_DESCRIPTION("ISDN4Linux: PCMCIA client driver for Elsa PCM cards");
-MODULE_AUTHOR("Klaus Lichtenwalder");
-MODULE_LICENSE("Dual MPL/GPL");
-
-
-/*====================================================================*/
-
-/* Parameters that can be set with 'insmod' */
-
-static int protocol = 2; /* EURO-ISDN Default */
-module_param(protocol, int, 0);
-
-static int elsa_cs_config(struct pcmcia_device *link);
-static void elsa_cs_release(struct pcmcia_device *link);
-static void elsa_cs_detach(struct pcmcia_device *p_dev);
-
-typedef struct local_info_t {
- struct pcmcia_device *p_dev;
- int busy;
- int cardnr;
-} local_info_t;
-
-static int elsa_cs_probe(struct pcmcia_device *link)
-{
- local_info_t *local;
-
- dev_dbg(&link->dev, "elsa_cs_attach()\n");
-
- /* Allocate space for private device-specific data */
- local = kzalloc(sizeof(local_info_t), GFP_KERNEL);
- if (!local) return -ENOMEM;
-
- local->p_dev = link;
- link->priv = local;
-
- local->cardnr = -1;
-
- return elsa_cs_config(link);
-} /* elsa_cs_attach */
-
-static void elsa_cs_detach(struct pcmcia_device *link)
-{
- local_info_t *info = link->priv;
-
- dev_dbg(&link->dev, "elsa_cs_detach(0x%p)\n", link);
-
- info->busy = 1;
- elsa_cs_release(link);
-
- kfree(info);
-} /* elsa_cs_detach */
-
-static int elsa_cs_configcheck(struct pcmcia_device *p_dev, void *priv_data)
-{
- int j;
-
- p_dev->io_lines = 3;
- p_dev->resource[0]->end = 8;
- p_dev->resource[0]->flags &= IO_DATA_PATH_WIDTH;
- p_dev->resource[0]->flags |= IO_DATA_PATH_WIDTH_AUTO;
-
- if ((p_dev->resource[0]->end) && p_dev->resource[0]->start) {
- printk(KERN_INFO "(elsa_cs: looks like the 96 model)\n");
- if (!pcmcia_request_io(p_dev))
- return 0;
- } else {
- printk(KERN_INFO "(elsa_cs: looks like the 97 model)\n");
- for (j = 0x2f0; j > 0x100; j -= 0x10) {
- p_dev->resource[0]->start = j;
- if (!pcmcia_request_io(p_dev))
- return 0;
- }
- }
- return -ENODEV;
-}
-
-static int elsa_cs_config(struct pcmcia_device *link)
-{
- int i;
- IsdnCard_t icard;
-
- dev_dbg(&link->dev, "elsa_config(0x%p)\n", link);
-
- link->config_flags |= CONF_ENABLE_IRQ | CONF_AUTO_SET_IO;
-
- i = pcmcia_loop_config(link, elsa_cs_configcheck, NULL);
- if (i != 0)
- goto failed;
-
- if (!link->irq)
- goto failed;
-
- i = pcmcia_enable_device(link);
- if (i != 0)
- goto failed;
-
- icard.para[0] = link->irq;
- icard.para[1] = link->resource[0]->start;
- icard.protocol = protocol;
- icard.typ = ISDN_CTYPE_ELSA_PCMCIA;
-
- i = hisax_init_pcmcia(link, &(((local_info_t *)link->priv)->busy), &icard);
- if (i < 0) {
- printk(KERN_ERR "elsa_cs: failed to initialize Elsa "
- "PCMCIA %d with %pR\n", i, link->resource[0]);
- elsa_cs_release(link);
- } else
- ((local_info_t *)link->priv)->cardnr = i;
-
- return 0;
-failed:
- elsa_cs_release(link);
- return -ENODEV;
-} /* elsa_cs_config */
-
-static void elsa_cs_release(struct pcmcia_device *link)
-{
- local_info_t *local = link->priv;
-
- dev_dbg(&link->dev, "elsa_cs_release(0x%p)\n", link);
-
- if (local) {
- if (local->cardnr >= 0) {
- /* no unregister function with hisax */
- HiSax_closecard(local->cardnr);
- }
- }
-
- pcmcia_disable_device(link);
-} /* elsa_cs_release */
-
-static int elsa_suspend(struct pcmcia_device *link)
-{
- local_info_t *dev = link->priv;
-
- dev->busy = 1;
-
- return 0;
-}
-
-static int elsa_resume(struct pcmcia_device *link)
-{
- local_info_t *dev = link->priv;
-
- dev->busy = 0;
-
- return 0;
-}
-
-static const struct pcmcia_device_id elsa_ids[] = {
- PCMCIA_DEVICE_PROD_ID12("ELSA AG (Aachen, Germany)", "MicroLink ISDN/MC ", 0x983de2c4, 0x333ba257),
- PCMCIA_DEVICE_PROD_ID12("ELSA GmbH, Aachen", "MicroLink ISDN/MC ", 0x639e5718, 0x333ba257),
- PCMCIA_DEVICE_NULL
-};
-MODULE_DEVICE_TABLE(pcmcia, elsa_ids);
-
-static struct pcmcia_driver elsa_cs_driver = {
- .owner = THIS_MODULE,
- .name = "elsa_cs",
- .probe = elsa_cs_probe,
- .remove = elsa_cs_detach,
- .id_table = elsa_ids,
- .suspend = elsa_suspend,
- .resume = elsa_resume,
-};
-module_pcmcia_driver(elsa_cs_driver);
diff --git a/drivers/isdn/hisax/elsa_ser.c b/drivers/isdn/hisax/elsa_ser.c
deleted file mode 100644
index 999effd7a276..000000000000
--- a/drivers/isdn/hisax/elsa_ser.c
+++ /dev/null
@@ -1,659 +0,0 @@
-/* $Id: elsa_ser.c,v 2.14.2.3 2004/02/11 13:21:33 keil Exp $
- *
- * stuff for the serial modem on ELSA cards
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#include <linux/serial.h>
-#include <linux/serial_reg.h>
-#include <linux/slab.h>
-
-#define MAX_MODEM_BUF 256
-#define WAKEUP_CHARS (MAX_MODEM_BUF / 2)
-#define RS_ISR_PASS_LIMIT 256
-#define BASE_BAUD (1843200 / 16)
-
-//#define SERIAL_DEBUG_OPEN 1
-//#define SERIAL_DEBUG_INTR 1
-//#define SERIAL_DEBUG_FLOW 1
-#undef SERIAL_DEBUG_OPEN
-#undef SERIAL_DEBUG_INTR
-#undef SERIAL_DEBUG_FLOW
-#undef SERIAL_DEBUG_REG
-//#define SERIAL_DEBUG_REG 1
-
-#ifdef SERIAL_DEBUG_REG
-static u_char deb[32];
-const char *ModemIn[] = {"RBR", "IER", "IIR", "LCR", "MCR", "LSR", "MSR", "SCR"};
-const char *ModemOut[] = {"THR", "IER", "FCR", "LCR", "MCR", "LSR", "MSR", "SCR"};
-#endif
-
-static char *MInit_1 = "AT&F&C1E0&D2\r\0";
-static char *MInit_2 = "ATL2M1S64=13\r\0";
-static char *MInit_3 = "AT+FCLASS=0\r\0";
-static char *MInit_4 = "ATV1S2=128X1\r\0";
-static char *MInit_5 = "AT\\V8\\N3\r\0";
-static char *MInit_6 = "ATL0M0&G0%E1\r\0";
-static char *MInit_7 = "AT%L1%M0%C3\r\0";
-
-static char *MInit_speed28800 = "AT%G0%B28800\r\0";
-
-static char *MInit_dialout = "ATs7=60 x1 d\r\0";
-static char *MInit_dialin = "ATs7=60 x1 a\r\0";
-
-
-static inline unsigned int serial_in(struct IsdnCardState *cs, int offset)
-{
-#ifdef SERIAL_DEBUG_REG
- u_int val = inb(cs->hw.elsa.base + 8 + offset);
- debugl1(cs, "in %s %02x", ModemIn[offset], val);
- return (val);
-#else
- return inb(cs->hw.elsa.base + 8 + offset);
-#endif
-}
-
-static inline unsigned int serial_inp(struct IsdnCardState *cs, int offset)
-{
-#ifdef SERIAL_DEBUG_REG
-#ifdef ELSA_SERIAL_NOPAUSE_IO
- u_int val = inb(cs->hw.elsa.base + 8 + offset);
- debugl1(cs, "inp %s %02x", ModemIn[offset], val);
-#else
- u_int val = inb_p(cs->hw.elsa.base + 8 + offset);
- debugl1(cs, "inP %s %02x", ModemIn[offset], val);
-#endif
- return (val);
-#else
-#ifdef ELSA_SERIAL_NOPAUSE_IO
- return inb(cs->hw.elsa.base + 8 + offset);
-#else
- return inb_p(cs->hw.elsa.base + 8 + offset);
-#endif
-#endif
-}
-
-static inline void serial_out(struct IsdnCardState *cs, int offset, int value)
-{
-#ifdef SERIAL_DEBUG_REG
- debugl1(cs, "out %s %02x", ModemOut[offset], value);
-#endif
- outb(value, cs->hw.elsa.base + 8 + offset);
-}
-
-static inline void serial_outp(struct IsdnCardState *cs, int offset,
- int value)
-{
-#ifdef SERIAL_DEBUG_REG
-#ifdef ELSA_SERIAL_NOPAUSE_IO
- debugl1(cs, "outp %s %02x", ModemOut[offset], value);
-#else
- debugl1(cs, "outP %s %02x", ModemOut[offset], value);
-#endif
-#endif
-#ifdef ELSA_SERIAL_NOPAUSE_IO
- outb(value, cs->hw.elsa.base + 8 + offset);
-#else
- outb_p(value, cs->hw.elsa.base + 8 + offset);
-#endif
-}
-
-/*
- * This routine is called to set the UART divisor registers to match
- * the specified baud rate for a serial port.
- */
-static void change_speed(struct IsdnCardState *cs, int baud)
-{
- int quot = 0, baud_base;
- unsigned cval, fcr = 0;
-
-
- /* byte size and parity */
- cval = 0x03;
- /* Determine divisor based on baud rate */
- baud_base = BASE_BAUD;
- quot = baud_base / baud;
- /* If the quotient is ever zero, default to 9600 bps */
- if (!quot)
- quot = baud_base / 9600;
-
- /* Set up FIFO's */
- if ((baud_base / quot) < 2400)
- fcr = UART_FCR_ENABLE_FIFO | UART_FCR_TRIGGER_1;
- else
- fcr = UART_FCR_ENABLE_FIFO | UART_FCR_TRIGGER_8;
- serial_outp(cs, UART_FCR, fcr);
- /* CTS flow control flag and modem status interrupts */
- cs->hw.elsa.IER &= ~UART_IER_MSI;
- cs->hw.elsa.IER |= UART_IER_MSI;
- serial_outp(cs, UART_IER, cs->hw.elsa.IER);
-
- debugl1(cs, "modem quot=0x%x", quot);
- serial_outp(cs, UART_LCR, cval | UART_LCR_DLAB);/* set DLAB */
- serial_outp(cs, UART_DLL, quot & 0xff); /* LS of divisor */
- serial_outp(cs, UART_DLM, quot >> 8); /* MS of divisor */
- serial_outp(cs, UART_LCR, cval); /* reset DLAB */
- serial_inp(cs, UART_RX);
-}
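For reference, the divisor arithmetic used by change_speed() above is a plain integer division against BASE_BAUD (1843200 / 16 = 115200), with the result split across the DLL/DLM registers while DLAB is set. The snippet below is standalone and illustrative only, not part of the deleted driver; it just prints a few resulting divisors.

/* Illustrative only -- not from the HiSax sources. */
#include <stdio.h>

int main(void)
{
	const int base_baud = 1843200 / 16;	/* 115200, as in elsa_ser.c */
	const int rates[] = { 9600, 28800, 115200 };
	unsigned int i;

	for (i = 0; i < sizeof(rates) / sizeof(rates[0]); i++)
		printf("baud %6d -> divisor %d (DLL=0x%02x, DLM=0x%02x)\n",
		       rates[i], base_baud / rates[i],
		       (base_baud / rates[i]) & 0xff,
		       (base_baud / rates[i]) >> 8);
	return 0;
}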
-
-static int mstartup(struct IsdnCardState *cs)
-{
- int retval = 0;
-
- /*
- * Clear the FIFO buffers and disable them
- * (they will be reenabled in change_speed())
- */
- serial_outp(cs, UART_FCR, (UART_FCR_CLEAR_RCVR | UART_FCR_CLEAR_XMIT));
-
- /*
- * At this point there's no way the LSR could still be 0xFF;
- * if it is, then bail out, because there's likely no UART
- * here.
- */
- if (serial_inp(cs, UART_LSR) == 0xff) {
- retval = -ENODEV;
- goto errout;
- }
-
- /*
- * Clear the interrupt registers.
- */
- (void) serial_inp(cs, UART_RX);
- (void) serial_inp(cs, UART_IIR);
- (void) serial_inp(cs, UART_MSR);
-
- /*
- * Now, initialize the UART
- */
- serial_outp(cs, UART_LCR, UART_LCR_WLEN8); /* reset DLAB */
-
- cs->hw.elsa.MCR = 0;
- cs->hw.elsa.MCR = UART_MCR_DTR | UART_MCR_RTS | UART_MCR_OUT2;
- serial_outp(cs, UART_MCR, cs->hw.elsa.MCR);
-
- /*
- * Finally, enable interrupts
- */
- cs->hw.elsa.IER = UART_IER_MSI | UART_IER_RLSI | UART_IER_RDI;
- serial_outp(cs, UART_IER, cs->hw.elsa.IER); /* enable interrupts */
-
- /*
- * And clear the interrupt registers again for luck.
- */
- (void)serial_inp(cs, UART_LSR);
- (void)serial_inp(cs, UART_RX);
- (void)serial_inp(cs, UART_IIR);
- (void)serial_inp(cs, UART_MSR);
-
- cs->hw.elsa.transcnt = cs->hw.elsa.transp = 0;
- cs->hw.elsa.rcvcnt = cs->hw.elsa.rcvp = 0;
-
- /*
- * and set the speed of the serial port
- */
- change_speed(cs, BASE_BAUD);
- cs->hw.elsa.MFlag = 1;
-errout:
- return retval;
-}
-
-/*
- * This routine will shutdown a serial port; interrupts are disabled, and
- * DTR is dropped if the hangup on close termio flag is on.
- */
-static void mshutdown(struct IsdnCardState *cs)
-{
-
-#ifdef SERIAL_DEBUG_OPEN
- printk(KERN_DEBUG"Shutting down serial ....");
-	printk(KERN_DEBUG "Shutting down serial ....");
-
- /*
- * clear delta_msr_wait queue to avoid mem leaks: we may free the irq
-	 * here so the queue might never be woken up
- */
-
- cs->hw.elsa.IER = 0;
- serial_outp(cs, UART_IER, 0x00); /* disable all intrs */
- cs->hw.elsa.MCR &= ~UART_MCR_OUT2;
-
- /* disable break condition */
- serial_outp(cs, UART_LCR, serial_inp(cs, UART_LCR) & ~UART_LCR_SBC);
-
- cs->hw.elsa.MCR &= ~(UART_MCR_DTR | UART_MCR_RTS);
- serial_outp(cs, UART_MCR, cs->hw.elsa.MCR);
-
- /* disable FIFO's */
- serial_outp(cs, UART_FCR, (UART_FCR_CLEAR_RCVR | UART_FCR_CLEAR_XMIT));
- serial_inp(cs, UART_RX); /* read data port to reset things */
-
-#ifdef SERIAL_DEBUG_OPEN
- printk(" done\n");
-#endif
-}
-
-static inline int
-write_modem(struct BCState *bcs) {
- int ret = 0;
- struct IsdnCardState *cs = bcs->cs;
- int count, len, fp;
-
- if (!bcs->tx_skb)
- return 0;
- if (bcs->tx_skb->len <= 0)
- return 0;
- len = bcs->tx_skb->len;
- if (len > MAX_MODEM_BUF - cs->hw.elsa.transcnt)
- len = MAX_MODEM_BUF - cs->hw.elsa.transcnt;
- fp = cs->hw.elsa.transcnt + cs->hw.elsa.transp;
- fp &= (MAX_MODEM_BUF - 1);
- count = len;
- if (count > MAX_MODEM_BUF - fp) {
- count = MAX_MODEM_BUF - fp;
- skb_copy_from_linear_data(bcs->tx_skb,
- cs->hw.elsa.transbuf + fp, count);
- skb_pull(bcs->tx_skb, count);
- cs->hw.elsa.transcnt += count;
- ret = count;
- count = len - count;
- fp = 0;
- }
- skb_copy_from_linear_data(bcs->tx_skb,
- cs->hw.elsa.transbuf + fp, count);
- skb_pull(bcs->tx_skb, count);
- cs->hw.elsa.transcnt += count;
- ret += count;
-
- if (cs->hw.elsa.transcnt &&
- !(cs->hw.elsa.IER & UART_IER_THRI)) {
- cs->hw.elsa.IER |= UART_IER_THRI;
- serial_outp(cs, UART_IER, cs->hw.elsa.IER);
- }
- return (ret);
-}
-
-static inline void
-modem_fill(struct BCState *bcs) {
-
- if (bcs->tx_skb) {
- if (bcs->tx_skb->len) {
- write_modem(bcs);
- return;
- } else {
- if (test_bit(FLG_LLI_L1WAKEUP, &bcs->st->lli.flag) &&
- (PACKET_NOACK != bcs->tx_skb->pkt_type)) {
- u_long flags;
- spin_lock_irqsave(&bcs->aclock, flags);
- bcs->ackcnt += bcs->hw.hscx.count;
- spin_unlock_irqrestore(&bcs->aclock, flags);
- schedule_event(bcs, B_ACKPENDING);
- }
- dev_kfree_skb_any(bcs->tx_skb);
- bcs->tx_skb = NULL;
- }
- }
- if ((bcs->tx_skb = skb_dequeue(&bcs->squeue))) {
- bcs->hw.hscx.count = 0;
- test_and_set_bit(BC_FLG_BUSY, &bcs->Flag);
- write_modem(bcs);
- } else {
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- schedule_event(bcs, B_XMTBUFREADY);
- }
-}
-
-static inline void receive_chars(struct IsdnCardState *cs,
- int *status)
-{
- unsigned char ch;
- struct sk_buff *skb;
-
- do {
- ch = serial_in(cs, UART_RX);
- if (cs->hw.elsa.rcvcnt >= MAX_MODEM_BUF)
- break;
- cs->hw.elsa.rcvbuf[cs->hw.elsa.rcvcnt++] = ch;
-#ifdef SERIAL_DEBUG_INTR
- printk("DR%02x:%02x...", ch, *status);
-#endif
- if (*status & (UART_LSR_BI | UART_LSR_PE |
- UART_LSR_FE | UART_LSR_OE)) {
-
-#ifdef SERIAL_DEBUG_INTR
-			printk("handling exception....");
-#endif
- }
- *status = serial_inp(cs, UART_LSR);
- } while (*status & UART_LSR_DR);
- if (cs->hw.elsa.MFlag == 2) {
- if (!(skb = dev_alloc_skb(cs->hw.elsa.rcvcnt)))
- printk(KERN_WARNING "ElsaSER: receive out of memory\n");
- else {
- skb_put_data(skb, cs->hw.elsa.rcvbuf,
- cs->hw.elsa.rcvcnt);
- skb_queue_tail(&cs->hw.elsa.bcs->rqueue, skb);
- }
- schedule_event(cs->hw.elsa.bcs, B_RCVBUFREADY);
- } else {
- char tmp[128];
- char *t = tmp;
-
- t += sprintf(t, "modem read cnt %d", cs->hw.elsa.rcvcnt);
- QuickHex(t, cs->hw.elsa.rcvbuf, cs->hw.elsa.rcvcnt);
- debugl1(cs, "%s", tmp);
- }
- cs->hw.elsa.rcvcnt = 0;
-}
-
-static inline void transmit_chars(struct IsdnCardState *cs, int *intr_done)
-{
- int count;
-
- debugl1(cs, "transmit_chars: p(%x) cnt(%x)", cs->hw.elsa.transp,
- cs->hw.elsa.transcnt);
-
- if (cs->hw.elsa.transcnt <= 0) {
- cs->hw.elsa.IER &= ~UART_IER_THRI;
- serial_out(cs, UART_IER, cs->hw.elsa.IER);
- return;
- }
- count = 16;
- do {
- serial_outp(cs, UART_TX, cs->hw.elsa.transbuf[cs->hw.elsa.transp++]);
- if (cs->hw.elsa.transp >= MAX_MODEM_BUF)
- cs->hw.elsa.transp = 0;
- if (--cs->hw.elsa.transcnt <= 0)
- break;
- } while (--count > 0);
- if ((cs->hw.elsa.transcnt < WAKEUP_CHARS) && (cs->hw.elsa.MFlag == 2))
- modem_fill(cs->hw.elsa.bcs);
-
-#ifdef SERIAL_DEBUG_INTR
- printk("THRE...");
-#endif
- if (intr_done)
- *intr_done = 0;
- if (cs->hw.elsa.transcnt <= 0) {
- cs->hw.elsa.IER &= ~UART_IER_THRI;
- serial_outp(cs, UART_IER, cs->hw.elsa.IER);
- }
-}
-
-
-static void rs_interrupt_elsa(struct IsdnCardState *cs)
-{
- int status, iir, msr;
- int pass_counter = 0;
-
-#ifdef SERIAL_DEBUG_INTR
- printk(KERN_DEBUG "rs_interrupt_single(%d)...", cs->irq);
-#endif
-
- do {
- status = serial_inp(cs, UART_LSR);
- debugl1(cs, "rs LSR %02x", status);
-#ifdef SERIAL_DEBUG_INTR
- printk("status = %x...", status);
-#endif
- if (status & UART_LSR_DR)
- receive_chars(cs, &status);
- if (status & UART_LSR_THRE)
- transmit_chars(cs, NULL);
- if (pass_counter++ > RS_ISR_PASS_LIMIT) {
- printk("rs_single loop break.\n");
- break;
- }
- iir = serial_inp(cs, UART_IIR);
- debugl1(cs, "rs IIR %02x", iir);
- if ((iir & 0xf) == 0) {
- msr = serial_inp(cs, UART_MSR);
- debugl1(cs, "rs MSR %02x", msr);
- }
- } while (!(iir & UART_IIR_NO_INT));
-#ifdef SERIAL_DEBUG_INTR
- printk("end.\n");
-#endif
-}
-
-extern int open_hscxstate(struct IsdnCardState *cs, struct BCState *bcs);
-extern void modehscx(struct BCState *bcs, int mode, int bc);
-extern void hscx_l2l1(struct PStack *st, int pr, void *arg);
-
-static void
-close_elsastate(struct BCState *bcs)
-{
- modehscx(bcs, 0, bcs->channel);
- if (test_and_clear_bit(BC_FLG_INIT, &bcs->Flag)) {
- if (bcs->hw.hscx.rcvbuf) {
- if (bcs->mode != L1_MODE_MODEM)
- kfree(bcs->hw.hscx.rcvbuf);
- bcs->hw.hscx.rcvbuf = NULL;
- }
- skb_queue_purge(&bcs->rqueue);
- skb_queue_purge(&bcs->squeue);
- if (bcs->tx_skb) {
- dev_kfree_skb_any(bcs->tx_skb);
- bcs->tx_skb = NULL;
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- }
- }
-}
-
-static void
-modem_write_cmd(struct IsdnCardState *cs, u_char *buf, int len) {
- int count, fp;
- u_char *msg = buf;
-
- if (!len)
- return;
- if (len > (MAX_MODEM_BUF - cs->hw.elsa.transcnt)) {
- return;
- }
- fp = cs->hw.elsa.transcnt + cs->hw.elsa.transp;
- fp &= (MAX_MODEM_BUF - 1);
- count = len;
- if (count > MAX_MODEM_BUF - fp) {
- count = MAX_MODEM_BUF - fp;
- memcpy(cs->hw.elsa.transbuf + fp, msg, count);
- cs->hw.elsa.transcnt += count;
- msg += count;
- count = len - count;
- fp = 0;
- }
- memcpy(cs->hw.elsa.transbuf + fp, msg, count);
- cs->hw.elsa.transcnt += count;
- if (cs->hw.elsa.transcnt &&
- !(cs->hw.elsa.IER & UART_IER_THRI)) {
- cs->hw.elsa.IER |= UART_IER_THRI;
- serial_outp(cs, UART_IER, cs->hw.elsa.IER);
- }
-}
-
-static void
-modem_set_init(struct IsdnCardState *cs) {
- int timeout;
-
-#define RCV_DELAY 20
- modem_write_cmd(cs, MInit_1, strlen(MInit_1));
- timeout = 1000;
- while (timeout-- && cs->hw.elsa.transcnt)
- udelay(1000);
- debugl1(cs, "msi tout=%d", timeout);
- mdelay(RCV_DELAY);
- modem_write_cmd(cs, MInit_2, strlen(MInit_2));
- timeout = 1000;
- while (timeout-- && cs->hw.elsa.transcnt)
- udelay(1000);
- debugl1(cs, "msi tout=%d", timeout);
- mdelay(RCV_DELAY);
- modem_write_cmd(cs, MInit_3, strlen(MInit_3));
- timeout = 1000;
- while (timeout-- && cs->hw.elsa.transcnt)
- udelay(1000);
- debugl1(cs, "msi tout=%d", timeout);
- mdelay(RCV_DELAY);
- modem_write_cmd(cs, MInit_4, strlen(MInit_4));
- timeout = 1000;
- while (timeout-- && cs->hw.elsa.transcnt)
- udelay(1000);
- debugl1(cs, "msi tout=%d", timeout);
- mdelay(RCV_DELAY);
- modem_write_cmd(cs, MInit_5, strlen(MInit_5));
- timeout = 1000;
- while (timeout-- && cs->hw.elsa.transcnt)
- udelay(1000);
- debugl1(cs, "msi tout=%d", timeout);
- mdelay(RCV_DELAY);
- modem_write_cmd(cs, MInit_6, strlen(MInit_6));
- timeout = 1000;
- while (timeout-- && cs->hw.elsa.transcnt)
- udelay(1000);
- debugl1(cs, "msi tout=%d", timeout);
- mdelay(RCV_DELAY);
- modem_write_cmd(cs, MInit_7, strlen(MInit_7));
- timeout = 1000;
- while (timeout-- && cs->hw.elsa.transcnt)
- udelay(1000);
- debugl1(cs, "msi tout=%d", timeout);
- mdelay(RCV_DELAY);
-}
-
-static void
-modem_set_dial(struct IsdnCardState *cs, int outgoing) {
- int timeout;
-#define RCV_DELAY 20
-
- modem_write_cmd(cs, MInit_speed28800, strlen(MInit_speed28800));
- timeout = 1000;
- while (timeout-- && cs->hw.elsa.transcnt)
- udelay(1000);
- debugl1(cs, "msi tout=%d", timeout);
- mdelay(RCV_DELAY);
- if (outgoing)
- modem_write_cmd(cs, MInit_dialout, strlen(MInit_dialout));
- else
- modem_write_cmd(cs, MInit_dialin, strlen(MInit_dialin));
- timeout = 1000;
- while (timeout-- && cs->hw.elsa.transcnt)
- udelay(1000);
- debugl1(cs, "msi tout=%d", timeout);
- mdelay(RCV_DELAY);
-}
-
-static void
-modem_l2l1(struct PStack *st, int pr, void *arg)
-{
- struct BCState *bcs = st->l1.bcs;
- struct sk_buff *skb = arg;
- u_long flags;
-
- if (pr == (PH_DATA | REQUEST)) {
- spin_lock_irqsave(&bcs->cs->lock, flags);
- if (bcs->tx_skb) {
- skb_queue_tail(&bcs->squeue, skb);
- } else {
- bcs->tx_skb = skb;
- test_and_set_bit(BC_FLG_BUSY, &bcs->Flag);
- bcs->hw.hscx.count = 0;
- write_modem(bcs);
- }
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- } else if (pr == (PH_ACTIVATE | REQUEST)) {
- test_and_set_bit(BC_FLG_ACTIV, &bcs->Flag);
- st->l1.l1l2(st, PH_ACTIVATE | CONFIRM, NULL);
- set_arcofi(bcs->cs, st->l1.bc);
- mstartup(bcs->cs);
- modem_set_dial(bcs->cs, test_bit(FLG_ORIG, &st->l2.flag));
- bcs->cs->hw.elsa.MFlag = 2;
- } else if (pr == (PH_DEACTIVATE | REQUEST)) {
- test_and_clear_bit(BC_FLG_ACTIV, &bcs->Flag);
- bcs->cs->dc.isac.arcofi_bc = st->l1.bc;
- arcofi_fsm(bcs->cs, ARCOFI_START, &ARCOFI_XOP_0);
- wait_event_interruptible(bcs->cs->dc.isac.arcofi_wait,
- bcs->cs->dc.isac.arcofi_state == ARCOFI_NOP);
- bcs->cs->hw.elsa.MFlag = 1;
- } else {
- printk(KERN_WARNING "ElsaSer: unknown pr %x\n", pr);
- }
-}
-
-static int
-setstack_elsa(struct PStack *st, struct BCState *bcs)
-{
-
- bcs->channel = st->l1.bc;
- switch (st->l1.mode) {
- case L1_MODE_HDLC:
- case L1_MODE_TRANS:
- if (open_hscxstate(st->l1.hardware, bcs))
- return (-1);
- st->l2.l2l1 = hscx_l2l1;
- break;
- case L1_MODE_MODEM:
- bcs->mode = L1_MODE_MODEM;
- if (!test_and_set_bit(BC_FLG_INIT, &bcs->Flag)) {
- bcs->hw.hscx.rcvbuf = bcs->cs->hw.elsa.rcvbuf;
- skb_queue_head_init(&bcs->rqueue);
- skb_queue_head_init(&bcs->squeue);
- }
- bcs->tx_skb = NULL;
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- bcs->event = 0;
- bcs->hw.hscx.rcvidx = 0;
- bcs->tx_cnt = 0;
- bcs->cs->hw.elsa.bcs = bcs;
- st->l2.l2l1 = modem_l2l1;
- break;
- }
- st->l1.bcs = bcs;
- setstack_manager(st);
- bcs->st = st;
- setstack_l1_B(st);
- return (0);
-}
-
-static void
-init_modem(struct IsdnCardState *cs) {
-
- cs->bcs[0].BC_SetStack = setstack_elsa;
- cs->bcs[1].BC_SetStack = setstack_elsa;
- cs->bcs[0].BC_Close = close_elsastate;
- cs->bcs[1].BC_Close = close_elsastate;
- if (!(cs->hw.elsa.rcvbuf = kmalloc(MAX_MODEM_BUF,
- GFP_ATOMIC))) {
- printk(KERN_WARNING
- "Elsa: No modem mem hw.elsa.rcvbuf\n");
- return;
- }
- if (!(cs->hw.elsa.transbuf = kmalloc(MAX_MODEM_BUF,
- GFP_ATOMIC))) {
- printk(KERN_WARNING
- "Elsa: No modem mem hw.elsa.transbuf\n");
- kfree(cs->hw.elsa.rcvbuf);
- cs->hw.elsa.rcvbuf = NULL;
- return;
- }
- if (mstartup(cs)) {
-		printk(KERN_WARNING "Elsa: problem starting up the modem\n");
- }
- modem_set_init(cs);
-}
-
-static void
-release_modem(struct IsdnCardState *cs) {
-
- cs->hw.elsa.MFlag = 0;
- if (cs->hw.elsa.transbuf) {
- if (cs->hw.elsa.rcvbuf) {
- mshutdown(cs);
- kfree(cs->hw.elsa.rcvbuf);
- cs->hw.elsa.rcvbuf = NULL;
- }
- kfree(cs->hw.elsa.transbuf);
- cs->hw.elsa.transbuf = NULL;
- }
-}
diff --git a/drivers/isdn/hisax/enternow_pci.c b/drivers/isdn/hisax/enternow_pci.c
deleted file mode 100644
index e8d431a8302d..000000000000
--- a/drivers/isdn/hisax/enternow_pci.c
+++ /dev/null
@@ -1,420 +0,0 @@
-/* enternow_pci.c,v 0.99 2001/10/02
- *
- * enternow_pci.c Card-specific routines for
- * Formula-n enter:now ISDN PCI ab
- * Gerdes AG Power ISDN PCI
- * Woerltronic SA 16 PCI
- * (based on HiSax driver by Karsten Keil)
- *
- * Author Christoph Ersfeld <info@formula-n.de>
- * Formula-n Europe AG (www.formula-n.com)
- * previously Gerdes AG
- *
- *
- * This file is (c) under GNU PUBLIC LICENSE
- *
- * Notes:
- * This driver interfaces to netjet.c which performs B-channel
- * processing.
- *
- * Version 0.99 is the first release of this driver and there are
- * certainly a few bugs.
- * It isn't tested on Linux 2.4 yet, so consider this code to be
- * beta.
- *
- * Please don't report a malfunction to me without sending
- * (compressed) debug logs;
- * it would be nearly impossible to track it down otherwise.
- *
- * Log D-channel-processing as follows:
- *
- * 1. Load hisax with card-specific parameters; this example is for
- * the Formula-n enter:now ISDN PCI and compatibles
- * (e.g. Gerdes Power ISDN PCI)
- *
- * modprobe hisax type=41 protocol=2 id=gerdes
- *
- * If you choose another value for id, you need to modify the
- * code below, too.
- *
- * 2. set debug-level
- *
- * hisaxctrl gerdes 1 0x3ff
- * hisaxctrl gerdes 11 0x4f
- * cat /dev/isdnctrl >> ~/log &
- *
- * Please also take a look at /var/log/messages if there is
- * anything important concerning HiSax.
- *
- *
- * Credits:
- * Programming the driver for the Formula-n enter:now ISDN PCI and,
- * as a prerequisite, the driver for the Amd 7930 D-channel
- * controller it uses was sponsored by Formula-n Europe AG.
- * Thanks to Karsten Keil and Petr Novak, who gave me support on
- * HiSax-specific questions.
- * I want to say special thanks to Carl-Friedrich Braun, who had to
- * answer a lot of questions about ISDN in general and about handling
- * the Amd chip.
- *
- */
-
-
-#include "hisax.h"
-#include "isac.h"
-#include "isdnl1.h"
-#include "amd7930_fn.h"
-#include <linux/interrupt.h>
-#include <linux/ppp_defs.h>
-#include <linux/pci.h>
-#include <linux/init.h>
-#include "netjet.h"
-
-
-
-static const char *enternow_pci_rev = "$Revision: 1.1.4.5 $";
-
-
-/* for PowerISDN PCI */
-#define TJ_AMD_IRQ 0x20
-#define TJ_LED1 0x40
-#define TJ_LED2 0x80
-
-
-/* The window to the AMD chip:
- * from address hw.njet.base + TJ_AMD_PORT onwards, the AMD registers
- * are mapped 8 bits at a time into the TigerJet I/O space
- * -> register 0x01 of the AMD sits at hw.njet.base + 0xC4 */
-#define TJ_AMD_PORT 0xC0
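In other words (and as ReadByteAmd7930() further below implements it), a direct AMD register N is reached at hw.njet.base + TJ_AMD_PORT + 4 * N, which is how register 0x01 ends up at base + 0xC4. The tiny standalone example below only illustrates that address arithmetic; the base value is hypothetical and not from the driver.

/* Illustrative only -- not from the HiSax sources. */
#include <stdio.h>

int main(void)
{
	const unsigned long base = 0xd000;	/* hypothetical PCI I/O base */
	unsigned int reg;

	for (reg = 0; reg < 4; reg++)
		printf("AMD reg 0x%02x -> I/O port 0x%lx\n",
		       reg, base + 0xC0 + 4 * reg);
	return 0;
}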
-
-
-
-/* *************************** I/O-Interface functions ************************************* */
-
-
-/* cs->readisac, macro rByteAMD */
-static unsigned char
-ReadByteAmd7930(struct IsdnCardState *cs, unsigned char offset)
-{
- /* direct register */
- if (offset < 8)
- return (inb(cs->hw.njet.isac + 4 * offset));
-
- /* indirect register */
- else {
- outb(offset, cs->hw.njet.isac + 4 * AMD_CR);
- return (inb(cs->hw.njet.isac + 4 * AMD_DR));
- }
-}
-
-/* cs->writeisac, macro wByteAMD */
-static void
-WriteByteAmd7930(struct IsdnCardState *cs, unsigned char offset, unsigned char value)
-{
- /* direct register */
- if (offset < 8)
- outb(value, cs->hw.njet.isac + 4 * offset);
-
- /* indirect register */
- else {
- outb(offset, cs->hw.njet.isac + 4 * AMD_CR);
- outb(value, cs->hw.njet.isac + 4 * AMD_DR);
- }
-}
-
-
-static void
-enpci_setIrqMask(struct IsdnCardState *cs, unsigned char val) {
- if (!val)
- outb(0x00, cs->hw.njet.base + NETJET_IRQMASK1);
- else
- outb(TJ_AMD_IRQ, cs->hw.njet.base + NETJET_IRQMASK1);
-}
-
-
-static unsigned char dummyrr(struct IsdnCardState *cs, int chan, unsigned char off)
-{
- return (5);
-}
-
-static void dummywr(struct IsdnCardState *cs, int chan, unsigned char off, unsigned char value)
-{
-
-}
-
-
-/* ******************************************************************************** */
-
-
-static void
-reset_enpci(struct IsdnCardState *cs)
-{
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "enter:now PCI: reset");
-
- /* Reset on, (also for AMD) */
- cs->hw.njet.ctrl_reg = 0x07;
- outb(cs->hw.njet.ctrl_reg, cs->hw.njet.base + NETJET_CTRL);
- mdelay(20);
- /* Reset off */
- cs->hw.njet.ctrl_reg = 0x30;
- outb(cs->hw.njet.ctrl_reg, cs->hw.njet.base + NETJET_CTRL);
- /* 20ms delay */
- mdelay(20);
- cs->hw.njet.auxd = 0; // LED-status
- cs->hw.njet.dmactrl = 0;
- outb(~TJ_AMD_IRQ, cs->hw.njet.base + NETJET_AUXCTRL);
- outb(TJ_AMD_IRQ, cs->hw.njet.base + NETJET_IRQMASK1);
- outb(cs->hw.njet.auxd, cs->hw.njet.auxa); // LED off
-}
-
-
-static int
-enpci_card_msg(struct IsdnCardState *cs, int mt, void *arg)
-{
- u_long flags;
- unsigned char *chan;
-
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "enter:now PCI: card_msg: 0x%04X", mt);
-
- switch (mt) {
- case CARD_RESET:
- spin_lock_irqsave(&cs->lock, flags);
- reset_enpci(cs);
- Amd7930_init(cs);
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- case CARD_RELEASE:
- release_io_netjet(cs);
- break;
- case CARD_INIT:
- reset_enpci(cs);
- inittiger(cs);
- /* irq must be on here */
- Amd7930_init(cs);
- break;
- case CARD_TEST:
- break;
- case MDL_ASSIGN:
- /* TEI assigned, LED1 on */
- cs->hw.njet.auxd = TJ_AMD_IRQ << 1;
- outb(cs->hw.njet.auxd, cs->hw.njet.base + NETJET_AUXDATA);
- break;
- case MDL_REMOVE:
- /* TEI removed, LEDs off */
- cs->hw.njet.auxd = 0;
- outb(0x00, cs->hw.njet.base + NETJET_AUXDATA);
- break;
- case MDL_BC_ASSIGN:
- /* activate B-channel */
- chan = (unsigned char *)arg;
-
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "enter:now PCI: assign phys. BC %d in AMD LMR1", *chan);
-
- cs->dc.amd7930.ph_command(cs, (cs->dc.amd7930.lmr1 | (*chan + 1)), "MDL_BC_ASSIGN");
- /* at least one b-channel in use, LED 2 on */
- cs->hw.njet.auxd |= TJ_AMD_IRQ << 2;
- outb(cs->hw.njet.auxd, cs->hw.njet.base + NETJET_AUXDATA);
- break;
- case MDL_BC_RELEASE:
- /* deactivate B-channel */
- chan = (unsigned char *)arg;
-
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "enter:now PCI: release phys. BC %d in Amd LMR1", *chan);
-
- cs->dc.amd7930.ph_command(cs, (cs->dc.amd7930.lmr1 & ~(*chan + 1)), "MDL_BC_RELEASE");
- /* no b-channel active -> LED2 off */
- if (!(cs->dc.amd7930.lmr1 & 3)) {
- cs->hw.njet.auxd &= ~(TJ_AMD_IRQ << 2);
- outb(cs->hw.njet.auxd, cs->hw.njet.base + NETJET_AUXDATA);
- }
- break;
- default:
- break;
-
- }
- return (0);
-}
-
-static irqreturn_t
-enpci_interrupt(int intno, void *dev_id)
-{
- struct IsdnCardState *cs = dev_id;
- unsigned char s0val, s1val, ir;
- u_long flags;
-
- spin_lock_irqsave(&cs->lock, flags);
- s1val = inb(cs->hw.njet.base + NETJET_IRQSTAT1);
-
- /* AMD threw an interrupt */
- if (!(s1val & TJ_AMD_IRQ)) {
- /* read and clear interrupt-register */
- ir = ReadByteAmd7930(cs, 0x00);
- Amd7930_interrupt(cs, ir);
- s1val = 1;
- } else
- s1val = 0;
- s0val = inb(cs->hw.njet.base + NETJET_IRQSTAT0);
- if ((s0val | s1val) == 0) { // shared IRQ
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_NONE;
- }
- if (s0val)
- outb(s0val, cs->hw.njet.base + NETJET_IRQSTAT0);
-
- /* DMA-Interrupt: B-channel-stuff */
- /* set bits in sval to indicate which page is free */
- if (inl(cs->hw.njet.base + NETJET_DMA_WRITE_ADR) <
- inl(cs->hw.njet.base + NETJET_DMA_WRITE_IRQ))
- /* the 2nd write page is free */
- s0val = 0x08;
- else /* the 1st write page is free */
- s0val = 0x04;
- if (inl(cs->hw.njet.base + NETJET_DMA_READ_ADR) <
- inl(cs->hw.njet.base + NETJET_DMA_READ_IRQ))
- /* the 2nd read page is free */
- s0val = s0val | 0x02;
- else /* the 1st read page is free */
- s0val = s0val | 0x01;
- if (s0val != cs->hw.njet.last_is0) /* we have a DMA interrupt */
- {
- if (test_and_set_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
- }
- cs->hw.njet.irqstat0 = s0val;
- if ((cs->hw.njet.irqstat0 & NETJET_IRQM0_READ) !=
- (cs->hw.njet.last_is0 & NETJET_IRQM0_READ))
- /* we have a read dma int */
- read_tiger(cs);
- if ((cs->hw.njet.irqstat0 & NETJET_IRQM0_WRITE) !=
- (cs->hw.njet.last_is0 & NETJET_IRQM0_WRITE))
- /* we have a write dma int */
- write_tiger(cs);
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
- }
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
-}
-
-static int en_pci_probe(struct pci_dev *dev_netjet, struct IsdnCardState *cs)
-{
- if (pci_enable_device(dev_netjet))
- return (0);
- cs->irq = dev_netjet->irq;
- if (!cs->irq) {
- printk(KERN_WARNING "enter:now PCI: No IRQ for PCI card found\n");
- return (0);
- }
- cs->hw.njet.base = pci_resource_start(dev_netjet, 0);
- if (!cs->hw.njet.base) {
-		printk(KERN_WARNING "enter:now PCI: No I/O address for PCI card found\n");
- return (0);
- }
-	/* check the subsystem vendor ID; the system crashes with a Traverse card */
- if ((dev_netjet->subsystem_vendor != 0x55) ||
- (dev_netjet->subsystem_device != 0x02)) {
- printk(KERN_WARNING "enter:now: You tried to load this driver with an incompatible TigerJet-card\n");
- printk(KERN_WARNING "Use type=20 for Traverse NetJet PCI Card.\n");
- return (0);
- }
-
- return (1);
-}
-
-static void en_cs_init(struct IsdnCard *card, struct IsdnCardState *cs)
-{
- cs->hw.njet.auxa = cs->hw.njet.base + NETJET_AUXDATA;
-	cs->hw.njet.isac = cs->hw.njet.base + 0xC0; // window to the AMD
-
-	/* Reset on */
-	cs->hw.njet.ctrl_reg = 0x07;  // changed from 0xff
-	outb(cs->hw.njet.ctrl_reg, cs->hw.njet.base + NETJET_CTRL);
-	/* 20 ms pause */
- mdelay(20);
-
- cs->hw.njet.ctrl_reg = 0x30; /* Reset Off and status read clear */
- outb(cs->hw.njet.ctrl_reg, cs->hw.njet.base + NETJET_CTRL);
- mdelay(10);
-
-	cs->hw.njet.auxd = 0x00; // was 0xc0
- cs->hw.njet.dmactrl = 0;
-
- outb(~TJ_AMD_IRQ, cs->hw.njet.base + NETJET_AUXCTRL);
- outb(TJ_AMD_IRQ, cs->hw.njet.base + NETJET_IRQMASK1);
- outb(cs->hw.njet.auxd, cs->hw.njet.auxa);
-}
-
-static int en_cs_init_rest(struct IsdnCard *card, struct IsdnCardState *cs)
-{
- const int bytecnt = 256;
-
- printk(KERN_INFO
- "enter:now PCI: PCI card configured at 0x%lx IRQ %d\n",
- cs->hw.njet.base, cs->irq);
- if (!request_region(cs->hw.njet.base, bytecnt, "Fn_ISDN")) {
- printk(KERN_WARNING
- "HiSax: enter:now config port %lx-%lx already in use\n",
- cs->hw.njet.base,
- cs->hw.njet.base + bytecnt);
- return (0);
- }
-
- setup_Amd7930(cs);
- cs->hw.njet.last_is0 = 0;
- /* macro rByteAMD */
- cs->readisac = &ReadByteAmd7930;
- /* macro wByteAMD */
- cs->writeisac = &WriteByteAmd7930;
- cs->dc.amd7930.setIrqMask = &enpci_setIrqMask;
-
- cs->BC_Read_Reg = &dummyrr;
- cs->BC_Write_Reg = &dummywr;
- cs->BC_Send_Data = &netjet_fill_dma;
- cs->cardmsg = &enpci_card_msg;
- cs->irq_func = &enpci_interrupt;
- cs->irq_flags |= IRQF_SHARED;
-
- return (1);
-}
-
-static struct pci_dev *dev_netjet = NULL;
-
-/* called by config.c */
-int setup_enternow_pci(struct IsdnCard *card)
-{
- int ret;
- struct IsdnCardState *cs = card->cs;
- char tmp[64];
-
-#ifdef __BIG_ENDIAN
-#error "not running on big endian machines now"
-#endif
-
- strcpy(tmp, enternow_pci_rev);
- printk(KERN_INFO "HiSax: Formula-n Europe AG enter:now ISDN PCI driver Rev. %s\n", HiSax_getrev(tmp));
- if (cs->typ != ISDN_CTYPE_ENTERNOW)
- return (0);
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
-
- for (;;)
- {
- if ((dev_netjet = hisax_find_pci_device(PCI_VENDOR_ID_TIGERJET,
- PCI_DEVICE_ID_TIGERJET_300, dev_netjet))) {
- ret = en_pci_probe(dev_netjet, cs);
- if (!ret)
- return (0);
- } else {
- printk(KERN_WARNING "enter:now PCI: No PCI card found\n");
- return (0);
- }
-
- en_cs_init(card, cs);
- break;
- }
-
- return en_cs_init_rest(card, cs);
-}
diff --git a/drivers/isdn/hisax/fsm.c b/drivers/isdn/hisax/fsm.c
deleted file mode 100644
index 80ba82f77c63..000000000000
--- a/drivers/isdn/hisax/fsm.c
+++ /dev/null
@@ -1,161 +0,0 @@
-/* $Id: fsm.c,v 1.14.6.4 2001/09/23 22:24:47 kai Exp $
- *
- * Finite state machine
- *
- * Author Karsten Keil
- * Copyright by Karsten Keil <keil@isdn4linux.de>
- * by Kai Germaschewski <kai.germaschewski@gmx.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- * Thanks to Jan den Ouden
- * Fritz Elfert
- *
- */
-
-#include <linux/module.h>
-#include <linux/slab.h>
-#include <linux/init.h>
-#include "hisax.h"
-
-#define FSM_TIMER_DEBUG 0
-
-int
-FsmNew(struct Fsm *fsm, struct FsmNode *fnlist, int fncount)
-{
- int i;
-
- fsm->jumpmatrix =
- kzalloc(array3_size(sizeof(FSMFNPTR), fsm->state_count,
- fsm->event_count),
- GFP_KERNEL);
- if (!fsm->jumpmatrix)
- return -ENOMEM;
-
- for (i = 0; i < fncount; i++)
- if ((fnlist[i].state >= fsm->state_count) || (fnlist[i].event >= fsm->event_count)) {
- printk(KERN_ERR "FsmNew Error line %d st(%ld/%ld) ev(%ld/%ld)\n",
- i, (long)fnlist[i].state, (long)fsm->state_count,
- (long)fnlist[i].event, (long)fsm->event_count);
- } else
- fsm->jumpmatrix[fsm->state_count * fnlist[i].event +
- fnlist[i].state] = (FSMFNPTR)fnlist[i].routine;
- return 0;
-}
-
-void
-FsmFree(struct Fsm *fsm)
-{
- kfree((void *) fsm->jumpmatrix);
-}
-
-int
-FsmEvent(struct FsmInst *fi, int event, void *arg)
-{
- FSMFNPTR r;
-
- if ((fi->state >= fi->fsm->state_count) || (event >= fi->fsm->event_count)) {
- printk(KERN_ERR "FsmEvent Error st(%ld/%ld) ev(%d/%ld)\n",
- (long)fi->state, (long)fi->fsm->state_count, event, (long)fi->fsm->event_count);
- return (1);
- }
- r = fi->fsm->jumpmatrix[fi->fsm->state_count * event + fi->state];
- if (r) {
- if (fi->debug)
- fi->printdebug(fi, "State %s Event %s",
- fi->fsm->strState[fi->state],
- fi->fsm->strEvent[event]);
- r(fi, event, arg);
- return (0);
- } else {
- if (fi->debug)
- fi->printdebug(fi, "State %s Event %s no routine",
- fi->fsm->strState[fi->state],
- fi->fsm->strEvent[event]);
- return (!0);
- }
-}
-
-void
-FsmChangeState(struct FsmInst *fi, int newstate)
-{
- fi->state = newstate;
- if (fi->debug)
- fi->printdebug(fi, "ChangeState %s",
- fi->fsm->strState[newstate]);
-}
-
-static void
-FsmExpireTimer(struct timer_list *t)
-{
- struct FsmTimer *ft = from_timer(ft, t, tl);
-#if FSM_TIMER_DEBUG
- if (ft->fi->debug)
- ft->fi->printdebug(ft->fi, "FsmExpireTimer %lx", (long) ft);
-#endif
- FsmEvent(ft->fi, ft->event, ft->arg);
-}
-
-void
-FsmInitTimer(struct FsmInst *fi, struct FsmTimer *ft)
-{
- ft->fi = fi;
-#if FSM_TIMER_DEBUG
- if (ft->fi->debug)
- ft->fi->printdebug(ft->fi, "FsmInitTimer %lx", (long) ft);
-#endif
- timer_setup(&ft->tl, FsmExpireTimer, 0);
-}
-
-void
-FsmDelTimer(struct FsmTimer *ft, int where)
-{
-#if FSM_TIMER_DEBUG
- if (ft->fi->debug)
- ft->fi->printdebug(ft->fi, "FsmDelTimer %lx %d", (long) ft, where);
-#endif
- del_timer(&ft->tl);
-}
-
-int
-FsmAddTimer(struct FsmTimer *ft,
- int millisec, int event, void *arg, int where)
-{
-
-#if FSM_TIMER_DEBUG
- if (ft->fi->debug)
- ft->fi->printdebug(ft->fi, "FsmAddTimer %lx %d %d",
- (long) ft, millisec, where);
-#endif
-
- if (timer_pending(&ft->tl)) {
- printk(KERN_WARNING "FsmAddTimer: timer already active!\n");
- ft->fi->printdebug(ft->fi, "FsmAddTimer already active!");
- return -1;
- }
- ft->event = event;
- ft->arg = arg;
- ft->tl.expires = jiffies + (millisec * HZ) / 1000;
- add_timer(&ft->tl);
- return 0;
-}
-
-void
-FsmRestartTimer(struct FsmTimer *ft,
- int millisec, int event, void *arg, int where)
-{
-
-#if FSM_TIMER_DEBUG
- if (ft->fi->debug)
- ft->fi->printdebug(ft->fi, "FsmRestartTimer %lx %d %d",
- (long) ft, millisec, where);
-#endif
-
- if (timer_pending(&ft->tl))
- del_timer(&ft->tl);
- ft->event = event;
- ft->arg = arg;
- ft->tl.expires = jiffies + (millisec * HZ) / 1000;
- add_timer(&ft->tl);
-}
diff --git a/drivers/isdn/hisax/fsm.h b/drivers/isdn/hisax/fsm.h
deleted file mode 100644
index 8c7385619a46..000000000000
--- a/drivers/isdn/hisax/fsm.h
+++ /dev/null
@@ -1,61 +0,0 @@
-/* $Id: fsm.h,v 1.3.2.2 2001/09/23 22:24:47 kai Exp $
- *
- * Finite state machine
- *
- * Author Karsten Keil
- * Copyright by Karsten Keil <keil@isdn4linux.de>
- * by Kai Germaschewski <kai.germaschewski@gmx.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#ifndef __FSM_H__
-#define __FSM_H__
-
-#include <linux/timer.h>
-
-struct FsmInst;
-
-typedef void (*FSMFNPTR)(struct FsmInst *, int, void *);
-
-struct Fsm {
- FSMFNPTR *jumpmatrix;
- int state_count, event_count;
- char **strEvent, **strState;
-};
-
-struct FsmInst {
- struct Fsm *fsm;
- int state;
- int debug;
- void *userdata;
- int userint;
- void (*printdebug) (struct FsmInst *, char *, ...);
-};
-
-struct FsmNode {
- int state, event;
- void (*routine) (struct FsmInst *, int, void *);
-};
-
-struct FsmTimer {
- struct FsmInst *fi;
- struct timer_list tl;
- int event;
- void *arg;
-};
-
-int FsmNew(struct Fsm *fsm, struct FsmNode *fnlist, int fncount);
-void FsmFree(struct Fsm *fsm);
-int FsmEvent(struct FsmInst *fi, int event, void *arg);
-void FsmChangeState(struct FsmInst *fi, int newstate);
-void FsmInitTimer(struct FsmInst *fi, struct FsmTimer *ft);
-int FsmAddTimer(struct FsmTimer *ft, int millisec, int event,
- void *arg, int where);
-void FsmRestartTimer(struct FsmTimer *ft, int millisec, int event,
- void *arg, int where);
-void FsmDelTimer(struct FsmTimer *ft, int where);
-
-#endif
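The fsm.c/fsm.h pair above is a generic table-driven dispatcher: FsmNew() flattens the (state, event) -> handler list into a state_count x event_count jump matrix, and FsmEvent() indexes it with state_count * event + state. The sketch below uses hypothetical names (demo_*, ST_*, EV_*) and assumes the fsm.h declarations plus ordinary kernel headers (ARRAY_SIZE, -ENOMEM); it only illustrates how a caller wires the pieces together and is not taken from any HiSax user of the FSM.

/* Hypothetical usage sketch -- not from the HiSax sources. */
enum { ST_IDLE, ST_ACTIVE, ST_COUNT };
enum { EV_START, EV_STOP, EV_COUNT };

static char *demo_strState[] = { "IDLE", "ACTIVE" };
static char *demo_strEvent[] = { "START", "STOP" };

static void demo_start(struct FsmInst *fi, int event, void *arg)
{
	FsmChangeState(fi, ST_ACTIVE);
}

static struct FsmNode demo_fnlist[] = {
	{ ST_IDLE, EV_START, demo_start },
};

static struct Fsm demo_fsm;

static int demo_init(struct FsmInst *fi)
{
	demo_fsm.state_count = ST_COUNT;
	demo_fsm.event_count = EV_COUNT;
	demo_fsm.strState = demo_strState;
	demo_fsm.strEvent = demo_strEvent;
	if (FsmNew(&demo_fsm, demo_fnlist, ARRAY_SIZE(demo_fnlist)))
		return -ENOMEM;

	fi->fsm = &demo_fsm;
	fi->state = ST_IDLE;
	fi->debug = 0;
	return FsmEvent(fi, EV_START, NULL);	/* invokes demo_start() */
}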
diff --git a/drivers/isdn/hisax/gazel.c b/drivers/isdn/hisax/gazel.c
deleted file mode 100644
index a6d8af02354a..000000000000
--- a/drivers/isdn/hisax/gazel.c
+++ /dev/null
@@ -1,691 +0,0 @@
-/* $Id: gazel.c,v 2.19.2.4 2004/01/14 16:04:48 keil Exp $
- *
- * low level stuff for Gazel isdn cards
- *
- * Author BeWan Systems
- * based on source code from Karsten Keil
- * Copyright by BeWan Systems
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#include <linux/init.h>
-#include "hisax.h"
-#include "isac.h"
-#include "hscx.h"
-#include "isdnl1.h"
-#include "ipac.h"
-#include <linux/pci.h>
-
-static const char *gazel_revision = "$Revision: 2.19.2.4 $";
-
-#define R647 1
-#define R685 2
-#define R753 3
-#define R742 4
-
-#define PLX_CNTRL   0x50	/* PLX control register */
-#define RESET_GAZEL 0x4
-#define RESET_9050  0x40000000
-#define PLX_INCSR   0x4C	/* 9050 interrupt register */
-#define INT_ISAC_EN 0x8		/* 1 = enable ISAC interrupt */
-#define INT_ISAC    0x20	/* 1 = ISAC interrupt pending */
-#define INT_HSCX_EN 0x1		/* 1 = enable HSCX interrupt */
-#define INT_HSCX    0x4		/* 1 = HSCX interrupt pending */
-#define INT_PCI_EN  0x40	/* 1 = enable PCI interrupt */
-#define INT_IPAC_EN 0x3		/* enable IPAC interrupts */
-
-
-#define byteout(addr, val) outb(val, addr)
-#define bytein(addr) inb(addr)
-
-static inline u_char
-readreg(unsigned int adr, u_short off)
-{
- return bytein(adr + off);
-}
-
-static inline void
-writereg(unsigned int adr, u_short off, u_char data)
-{
- byteout(adr + off, data);
-}
-
-
-static inline void
-read_fifo(unsigned int adr, u_char *data, int size)
-{
- insb(adr, data, size);
-}
-
-static void
-write_fifo(unsigned int adr, u_char *data, int size)
-{
- outsb(adr, data, size);
-}
-
-static inline u_char
-readreg_ipac(unsigned int adr, u_short off)
-{
- register u_char ret;
-
- byteout(adr, off);
- ret = bytein(adr + 4);
- return ret;
-}
-
-static inline void
-writereg_ipac(unsigned int adr, u_short off, u_char data)
-{
- byteout(adr, off);
- byteout(adr + 4, data);
-}
-
-
-static inline void
-read_fifo_ipac(unsigned int adr, u_short off, u_char *data, int size)
-{
- byteout(adr, off);
- insb(adr + 4, data, size);
-}
-
-static void
-write_fifo_ipac(unsigned int adr, u_short off, u_char *data, int size)
-{
- byteout(adr, off);
- outsb(adr + 4, data, size);
-}
-
-/* Interface functions */
-
-static u_char
-ReadISAC(struct IsdnCardState *cs, u_char offset)
-{
- u_short off2 = offset;
-
- switch (cs->subtyp) {
- case R647:
- off2 = ((off2 << 8 & 0xf000) | (off2 & 0xf));
- /* fall through */
- case R685:
- return (readreg(cs->hw.gazel.isac, off2));
- case R753:
- case R742:
- return (readreg_ipac(cs->hw.gazel.ipac, 0x80 + off2));
- }
- return 0;
-}
-
-static void
-WriteISAC(struct IsdnCardState *cs, u_char offset, u_char value)
-{
- u_short off2 = offset;
-
- switch (cs->subtyp) {
- case R647:
- off2 = ((off2 << 8 & 0xf000) | (off2 & 0xf));
- /* fall through */
- case R685:
- writereg(cs->hw.gazel.isac, off2, value);
- break;
- case R753:
- case R742:
- writereg_ipac(cs->hw.gazel.ipac, 0x80 + off2, value);
- break;
- }
-}
-
-static void
-ReadISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- switch (cs->subtyp) {
- case R647:
- case R685:
- read_fifo(cs->hw.gazel.isacfifo, data, size);
- break;
- case R753:
- case R742:
- read_fifo_ipac(cs->hw.gazel.ipac, 0x80, data, size);
- break;
- }
-}
-
-static void
-WriteISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- switch (cs->subtyp) {
- case R647:
- case R685:
- write_fifo(cs->hw.gazel.isacfifo, data, size);
- break;
- case R753:
- case R742:
- write_fifo_ipac(cs->hw.gazel.ipac, 0x80, data, size);
- break;
- }
-}
-
-static void
-ReadHSCXfifo(struct IsdnCardState *cs, int hscx, u_char *data, int size)
-{
- switch (cs->subtyp) {
- case R647:
- case R685:
- read_fifo(cs->hw.gazel.hscxfifo[hscx], data, size);
- break;
- case R753:
- case R742:
- read_fifo_ipac(cs->hw.gazel.ipac, hscx * 0x40, data, size);
- break;
- }
-}
-
-static void
-WriteHSCXfifo(struct IsdnCardState *cs, int hscx, u_char *data, int size)
-{
- switch (cs->subtyp) {
- case R647:
- case R685:
- write_fifo(cs->hw.gazel.hscxfifo[hscx], data, size);
- break;
- case R753:
- case R742:
- write_fifo_ipac(cs->hw.gazel.ipac, hscx * 0x40, data, size);
- break;
- }
-}
-
-static u_char
-ReadHSCX(struct IsdnCardState *cs, int hscx, u_char offset)
-{
- u_short off2 = offset;
-
- switch (cs->subtyp) {
- case R647:
- off2 = ((off2 << 8 & 0xf000) | (off2 & 0xf));
- /* fall through */
- case R685:
- return (readreg(cs->hw.gazel.hscx[hscx], off2));
- case R753:
- case R742:
- return (readreg_ipac(cs->hw.gazel.ipac, hscx * 0x40 + off2));
- }
- return 0;
-}
-
-static void
-WriteHSCX(struct IsdnCardState *cs, int hscx, u_char offset, u_char value)
-{
- u_short off2 = offset;
-
- switch (cs->subtyp) {
- case R647:
- off2 = ((off2 << 8 & 0xf000) | (off2 & 0xf));
- /* fall through */
- case R685:
- writereg(cs->hw.gazel.hscx[hscx], off2, value);
- break;
- case R753:
- case R742:
- writereg_ipac(cs->hw.gazel.ipac, hscx * 0x40 + off2, value);
- break;
- }
-}
-
-/*
- * fast interrupt HSCX stuff goes here
- */
-
-#define READHSCX(cs, nr, reg) ReadHSCX(cs, nr, reg)
-#define WRITEHSCX(cs, nr, reg, data) WriteHSCX(cs, nr, reg, data)
-#define READHSCXFIFO(cs, nr, ptr, cnt) ReadHSCXfifo(cs, nr, ptr, cnt)
-#define WRITEHSCXFIFO(cs, nr, ptr, cnt) WriteHSCXfifo(cs, nr, ptr, cnt)
-
-#include "hscx_irq.c"
-
-static irqreturn_t
-gazel_interrupt(int intno, void *dev_id)
-{
-#define MAXCOUNT 5
- struct IsdnCardState *cs = dev_id;
- u_char valisac, valhscx;
- int count = 0;
- u_long flags;
-
- spin_lock_irqsave(&cs->lock, flags);
- do {
- valhscx = ReadHSCX(cs, 1, HSCX_ISTA);
- if (valhscx)
- hscx_int_main(cs, valhscx);
- valisac = ReadISAC(cs, ISAC_ISTA);
- if (valisac)
- isac_interrupt(cs, valisac);
- count++;
- } while ((valhscx || valisac) && (count < MAXCOUNT));
-
- WriteHSCX(cs, 0, HSCX_MASK, 0xFF);
- WriteHSCX(cs, 1, HSCX_MASK, 0xFF);
- WriteISAC(cs, ISAC_MASK, 0xFF);
- WriteISAC(cs, ISAC_MASK, 0x0);
- WriteHSCX(cs, 0, HSCX_MASK, 0x0);
- WriteHSCX(cs, 1, HSCX_MASK, 0x0);
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
-}
-
-
-static irqreturn_t
-gazel_interrupt_ipac(int intno, void *dev_id)
-{
- struct IsdnCardState *cs = dev_id;
- u_char ista, val;
- int count = 0;
- u_long flags;
-
- spin_lock_irqsave(&cs->lock, flags);
- ista = ReadISAC(cs, IPAC_ISTA - 0x80);
- do {
- if (ista & 0x0f) {
- val = ReadHSCX(cs, 1, HSCX_ISTA);
- if (ista & 0x01)
- val |= 0x01;
- if (ista & 0x04)
- val |= 0x02;
- if (ista & 0x08)
- val |= 0x04;
- if (val) {
- hscx_int_main(cs, val);
- }
- }
- if (ista & 0x20) {
- val = 0xfe & ReadISAC(cs, ISAC_ISTA);
- if (val) {
- isac_interrupt(cs, val);
- }
- }
- if (ista & 0x10) {
- val = 0x01;
- isac_interrupt(cs, val);
- }
- ista = ReadISAC(cs, IPAC_ISTA - 0x80);
- count++;
- }
- while ((ista & 0x3f) && (count < MAXCOUNT));
-
- WriteISAC(cs, IPAC_MASK - 0x80, 0xFF);
- WriteISAC(cs, IPAC_MASK - 0x80, 0xC0);
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
-}
-
-static void
-release_io_gazel(struct IsdnCardState *cs)
-{
- unsigned int i;
-
- switch (cs->subtyp) {
- case R647:
- for (i = 0x0000; i < 0xC000; i += 0x1000)
- release_region(i + cs->hw.gazel.hscx[0], 16);
- release_region(0xC000 + cs->hw.gazel.hscx[0], 1);
- break;
-
- case R685:
- release_region(cs->hw.gazel.hscx[0], 0x100);
- release_region(cs->hw.gazel.cfg_reg, 0x80);
- break;
-
- case R753:
- release_region(cs->hw.gazel.ipac, 0x8);
- release_region(cs->hw.gazel.cfg_reg, 0x80);
- break;
-
- case R742:
- release_region(cs->hw.gazel.ipac, 8);
- break;
- }
-}
-
-static int
-reset_gazel(struct IsdnCardState *cs)
-{
- unsigned long plxcntrl, addr = cs->hw.gazel.cfg_reg;
-
- switch (cs->subtyp) {
- case R647:
- writereg(addr, 0, 0);
- HZDELAY(10);
- writereg(addr, 0, 1);
- HZDELAY(2);
- break;
- case R685:
- plxcntrl = inl(addr + PLX_CNTRL);
- plxcntrl |= (RESET_9050 + RESET_GAZEL);
- outl(plxcntrl, addr + PLX_CNTRL);
- plxcntrl &= ~(RESET_9050 + RESET_GAZEL);
- HZDELAY(4);
- outl(plxcntrl, addr + PLX_CNTRL);
- HZDELAY(10);
- outb(INT_ISAC_EN + INT_HSCX_EN + INT_PCI_EN, addr + PLX_INCSR);
- break;
- case R753:
- plxcntrl = inl(addr + PLX_CNTRL);
- plxcntrl |= (RESET_9050 + RESET_GAZEL);
- outl(plxcntrl, addr + PLX_CNTRL);
- plxcntrl &= ~(RESET_9050 + RESET_GAZEL);
- WriteISAC(cs, IPAC_POTA2 - 0x80, 0x20);
- HZDELAY(4);
- outl(plxcntrl, addr + PLX_CNTRL);
- HZDELAY(10);
- WriteISAC(cs, IPAC_POTA2 - 0x80, 0x00);
- WriteISAC(cs, IPAC_ACFG - 0x80, 0xff);
- WriteISAC(cs, IPAC_AOE - 0x80, 0x0);
- WriteISAC(cs, IPAC_MASK - 0x80, 0xff);
- WriteISAC(cs, IPAC_CONF - 0x80, 0x1);
- outb(INT_IPAC_EN + INT_PCI_EN, addr + PLX_INCSR);
- WriteISAC(cs, IPAC_MASK - 0x80, 0xc0);
- break;
- case R742:
- WriteISAC(cs, IPAC_POTA2 - 0x80, 0x20);
- HZDELAY(4);
- WriteISAC(cs, IPAC_POTA2 - 0x80, 0x00);
- WriteISAC(cs, IPAC_ACFG - 0x80, 0xff);
- WriteISAC(cs, IPAC_AOE - 0x80, 0x0);
- WriteISAC(cs, IPAC_MASK - 0x80, 0xff);
- WriteISAC(cs, IPAC_CONF - 0x80, 0x1);
- WriteISAC(cs, IPAC_MASK - 0x80, 0xc0);
- break;
- }
- return (0);
-}
-
-static int
-Gazel_card_msg(struct IsdnCardState *cs, int mt, void *arg)
-{
- u_long flags;
-
- switch (mt) {
- case CARD_RESET:
- spin_lock_irqsave(&cs->lock, flags);
- reset_gazel(cs);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_RELEASE:
- release_io_gazel(cs);
- return (0);
- case CARD_INIT:
- spin_lock_irqsave(&cs->lock, flags);
- inithscxisac(cs, 1);
- if ((cs->subtyp == R647) || (cs->subtyp == R685)) {
- int i;
- for (i = 0; i < (2 + MAX_WAITING_CALLS); i++) {
- cs->bcs[i].hw.hscx.tsaxr0 = 0x1f;
- cs->bcs[i].hw.hscx.tsaxr1 = 0x23;
- }
- }
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_TEST:
- return (0);
- }
- return (0);
-}
-
-static int
-reserve_regions(struct IsdnCard *card, struct IsdnCardState *cs)
-{
- unsigned int i, j, base = 0, adr = 0, len = 0;
-
- switch (cs->subtyp) {
- case R647:
- base = cs->hw.gazel.hscx[0];
- if (!request_region(adr = (0xC000 + base), len = 1, "gazel"))
- goto error;
- for (i = 0x0000; i < 0xC000; i += 0x1000) {
- if (!request_region(adr = (i + base), len = 16, "gazel"))
- goto error;
- }
- if (i != 0xC000) {
- for (j = 0; j < i; j += 0x1000)
- release_region(j + base, 16);
- release_region(0xC000 + base, 1);
- goto error;
- }
- break;
-
- case R685:
- if (!request_region(adr = cs->hw.gazel.hscx[0], len = 0x100, "gazel"))
- goto error;
- if (!request_region(adr = cs->hw.gazel.cfg_reg, len = 0x80, "gazel")) {
- release_region(cs->hw.gazel.hscx[0], 0x100);
- goto error;
- }
- break;
-
- case R753:
- if (!request_region(adr = cs->hw.gazel.ipac, len = 0x8, "gazel"))
- goto error;
- if (!request_region(adr = cs->hw.gazel.cfg_reg, len = 0x80, "gazel")) {
- release_region(cs->hw.gazel.ipac, 8);
- goto error;
- }
- break;
-
- case R742:
- if (!request_region(adr = cs->hw.gazel.ipac, len = 0x8, "gazel"))
- goto error;
- break;
- }
-
- return 0;
-
-error:
- printk(KERN_WARNING "Gazel: io ports 0x%x-0x%x already in use\n",
- adr, adr + len);
- return 1;
-}
-
-static int setup_gazelisa(struct IsdnCard *card, struct IsdnCardState *cs)
-{
- printk(KERN_INFO "Gazel: ISA PnP card automatic recognition\n");
- // we got an irq parameter, assume it is an ISA card
- // R742 decodes the address even if not started...
- // R647 returns FF if not present or not started
- // eventually needs improvement
- if (readreg_ipac(card->para[1], IPAC_ID) == 1)
- cs->subtyp = R742;
- else
- cs->subtyp = R647;
-
- setup_isac(cs);
- cs->hw.gazel.cfg_reg = card->para[1] + 0xC000;
- cs->hw.gazel.ipac = card->para[1];
- cs->hw.gazel.isac = card->para[1] + 0x8000;
- cs->hw.gazel.hscx[0] = card->para[1];
- cs->hw.gazel.hscx[1] = card->para[1] + 0x4000;
- cs->irq = card->para[0];
- cs->hw.gazel.isacfifo = cs->hw.gazel.isac;
- cs->hw.gazel.hscxfifo[0] = cs->hw.gazel.hscx[0];
- cs->hw.gazel.hscxfifo[1] = cs->hw.gazel.hscx[1];
-
- switch (cs->subtyp) {
- case R647:
- printk(KERN_INFO "Gazel: Card ISA R647/R648 found\n");
- cs->dc.isac.adf2 = 0x87;
- printk(KERN_INFO
- "Gazel: config irq:%d isac:0x%X cfg:0x%X\n",
- cs->irq, cs->hw.gazel.isac, cs->hw.gazel.cfg_reg);
- printk(KERN_INFO
- "Gazel: hscx A:0x%X hscx B:0x%X\n",
- cs->hw.gazel.hscx[0], cs->hw.gazel.hscx[1]);
-
- break;
- case R742:
- printk(KERN_INFO "Gazel: Card ISA R742 found\n");
- test_and_set_bit(HW_IPAC, &cs->HW_Flags);
- printk(KERN_INFO
- "Gazel: config irq:%d ipac:0x%X\n",
- cs->irq, cs->hw.gazel.ipac);
- break;
- }
-
- return (0);
-}
-
-#ifdef CONFIG_PCI
-static struct pci_dev *dev_tel = NULL;
-
-static int setup_gazelpci(struct IsdnCardState *cs)
-{
- u_int pci_ioaddr0 = 0, pci_ioaddr1 = 0;
- u_char pci_irq = 0, found;
- u_int nbseek, seekcard;
-
- printk(KERN_WARNING "Gazel: PCI card automatic recognition\n");
-
- found = 0;
- seekcard = PCI_DEVICE_ID_PLX_R685;
- for (nbseek = 0; nbseek < 4; nbseek++) {
- if ((dev_tel = hisax_find_pci_device(PCI_VENDOR_ID_PLX,
- seekcard, dev_tel))) {
- if (pci_enable_device(dev_tel))
- return 1;
- pci_irq = dev_tel->irq;
- pci_ioaddr0 = pci_resource_start(dev_tel, 1);
- pci_ioaddr1 = pci_resource_start(dev_tel, 2);
- found = 1;
- }
- if (found)
- break;
- else {
- switch (seekcard) {
- case PCI_DEVICE_ID_PLX_R685:
- seekcard = PCI_DEVICE_ID_PLX_R753;
- break;
- case PCI_DEVICE_ID_PLX_R753:
- seekcard = PCI_DEVICE_ID_PLX_DJINN_ITOO;
- break;
- case PCI_DEVICE_ID_PLX_DJINN_ITOO:
- seekcard = PCI_DEVICE_ID_PLX_OLITEC;
- break;
- }
- }
- }
- if (!found) {
- printk(KERN_WARNING "Gazel: No PCI card found\n");
- return (1);
- }
- if (!pci_irq) {
- printk(KERN_WARNING "Gazel: No IRQ for PCI card found\n");
- return 1;
- }
- cs->hw.gazel.pciaddr[0] = pci_ioaddr0;
- cs->hw.gazel.pciaddr[1] = pci_ioaddr1;
- setup_isac(cs);
- pci_ioaddr1 &= 0xfffe;
- cs->hw.gazel.cfg_reg = pci_ioaddr0 & 0xfffe;
- cs->hw.gazel.ipac = pci_ioaddr1;
- cs->hw.gazel.isac = pci_ioaddr1 + 0x80;
- cs->hw.gazel.hscx[0] = pci_ioaddr1;
- cs->hw.gazel.hscx[1] = pci_ioaddr1 + 0x40;
- cs->hw.gazel.isacfifo = cs->hw.gazel.isac;
- cs->hw.gazel.hscxfifo[0] = cs->hw.gazel.hscx[0];
- cs->hw.gazel.hscxfifo[1] = cs->hw.gazel.hscx[1];
- cs->irq = pci_irq;
- cs->irq_flags |= IRQF_SHARED;
-
- switch (seekcard) {
- case PCI_DEVICE_ID_PLX_R685:
- printk(KERN_INFO "Gazel: Card PCI R685 found\n");
- cs->subtyp = R685;
- cs->dc.isac.adf2 = 0x87;
- printk(KERN_INFO
- "Gazel: config irq:%d isac:0x%X cfg:0x%X\n",
- cs->irq, cs->hw.gazel.isac, cs->hw.gazel.cfg_reg);
- printk(KERN_INFO
- "Gazel: hscx A:0x%X hscx B:0x%X\n",
- cs->hw.gazel.hscx[0], cs->hw.gazel.hscx[1]);
- break;
- case PCI_DEVICE_ID_PLX_R753:
- case PCI_DEVICE_ID_PLX_DJINN_ITOO:
- case PCI_DEVICE_ID_PLX_OLITEC:
- printk(KERN_INFO "Gazel: Card PCI R753 found\n");
- cs->subtyp = R753;
- test_and_set_bit(HW_IPAC, &cs->HW_Flags);
- printk(KERN_INFO
- "Gazel: config irq:%d ipac:0x%X cfg:0x%X\n",
- cs->irq, cs->hw.gazel.ipac, cs->hw.gazel.cfg_reg);
- break;
- }
-
- return (0);
-}
-#endif /* CONFIG_PCI */
-
-int setup_gazel(struct IsdnCard *card)
-{
- struct IsdnCardState *cs = card->cs;
- char tmp[64];
- u_char val;
-
- strcpy(tmp, gazel_revision);
- printk(KERN_INFO "Gazel: Driver Revision %s\n", HiSax_getrev(tmp));
-
- if (cs->typ != ISDN_CTYPE_GAZEL)
- return (0);
-
- if (card->para[0]) {
- if (setup_gazelisa(card, cs))
- return (0);
- } else {
-
-#ifdef CONFIG_PCI
- if (setup_gazelpci(cs))
- return (0);
-#else
- printk(KERN_WARNING "Gazel: Card PCI requested and NO_PCI_BIOS, unable to config\n");
- return (0);
-#endif /* CONFIG_PCI */
- }
-
- if (reserve_regions(card, cs)) {
- return (0);
- }
- if (reset_gazel(cs)) {
- printk(KERN_WARNING "Gazel: wrong IRQ\n");
- release_io_gazel(cs);
- return (0);
- }
- cs->readisac = &ReadISAC;
- cs->writeisac = &WriteISAC;
- cs->readisacfifo = &ReadISACfifo;
- cs->writeisacfifo = &WriteISACfifo;
- cs->BC_Read_Reg = &ReadHSCX;
- cs->BC_Write_Reg = &WriteHSCX;
- cs->BC_Send_Data = &hscx_fill_fifo;
- cs->cardmsg = &Gazel_card_msg;
-
- switch (cs->subtyp) {
- case R647:
- case R685:
- cs->irq_func = &gazel_interrupt;
- ISACVersion(cs, "Gazel:");
- if (HscxVersion(cs, "Gazel:")) {
- printk(KERN_WARNING
- "Gazel: wrong HSCX versions check IO address\n");
- release_io_gazel(cs);
- return (0);
- }
- break;
- case R742:
- case R753:
- cs->irq_func = &gazel_interrupt_ipac;
- val = ReadISAC(cs, IPAC_ID - 0x80);
- printk(KERN_INFO "Gazel: IPAC version %x\n", val);
- break;
- }
-
- return (1);
-}
diff --git a/drivers/isdn/hisax/hfc4s8s_l1.c b/drivers/isdn/hisax/hfc4s8s_l1.c
deleted file mode 100644
index e9bb8fb67ad0..000000000000
--- a/drivers/isdn/hisax/hfc4s8s_l1.c
+++ /dev/null
@@ -1,1584 +0,0 @@
-/*************************************************************************/
-/* $Id: hfc4s8s_l1.c,v 1.10 2005/02/09 16:31:09 martinb1 Exp $ */
-/* HFC-4S/8S low layer interface for Cologne Chip HFC-4S/8S isdn chips */
-/* The low layer (L1) is implemented as a loadable module for usage with */
-/* the HiSax isdn driver for passive cards. */
-/* */
-/* Author: Werner Cornelius */
-/* (C) 2003 Cornelius Consult (werner@cornelius-consult.de) */
-/* */
-/* Driver maintained by Cologne Chip */
-/* - Martin Bachem, support@colognechip.com */
-/* */
-/* This driver only works with chip revisions >= 1, older revision 0 */
-/* engineering samples (only first manufacturer sample cards) will not */
-/* work and are rejected by the driver. */
-/* */
-/* This file distributed under the GNU GPL. */
-/* */
-/* See Version History at the end of this file */
-/* */
-/*************************************************************************/
-
-#include <linux/module.h>
-#include <linux/init.h>
-#include <linux/pci.h>
-#include <linux/interrupt.h>
-#include <linux/delay.h>
-#include <linux/slab.h>
-#include <linux/timer.h>
-#include <linux/skbuff.h>
-#include <linux/wait.h>
-#include <asm/io.h>
-#include "hisax_if.h"
-#include "hfc4s8s_l1.h"
-
-static const char hfc4s8s_rev[] = "Revision: 1.10";
-
-/***************************************************************/
-/* adjustable transparent mode fifo threshold */
-/* The value defines the used fifo threshold with the equation */
-/* */
-/* notify number of bytes = 2 * 2 ^ TRANS_FIFO_THRES */
-/* */
-/* The default value is 5 which results in a buffer size of 64 */
-/* and an interrupt rate of 8ms. */
-/* The maximum value is 7 due to fifo size restrictions. */
-/* Values below 3-4 are not recommended due to high interrupt */
-/* load of the processor. For non critical applications the */
-/* value should be raised to 7 to reduce any interrupt overhead*/
-/***************************************************************/
-#define TRANS_FIFO_THRES 5
-
-/*************/
-/* constants */
-/*************/
-#define CLOCKMODE_0 0 /* ext. 24.576 MHz clk freq, int. single clock mode */
-#define CLOCKMODE_1 1 /* ext. 49.576 MHz clk freq, int. single clock mode */
-#define CHIP_ID_SHIFT 4
-#define HFC_MAX_ST 8
-#define MAX_D_FRAME_SIZE 270
-#define MAX_B_FRAME_SIZE 1536
-#define TRANS_TIMER_MODE (TRANS_FIFO_THRES & 0xf)
-#define TRANS_FIFO_BYTES (2 << TRANS_FIFO_THRES)
-#define MAX_F_CNT 0x0f
-
-#define CLKDEL_NT 0x6c
-#define CLKDEL_TE 0xf
-#define CTRL0_NT 4
-#define CTRL0_TE 0
-
-#define L1_TIMER_T4 2 /* minimum in jiffies */
-#define L1_TIMER_T3 (7 * HZ) /* activation timeout */
-#define L1_TIMER_T1 ((120 * HZ) / 1000) /* NT mode deactivation timeout */
-
-
-/******************/
-/* types and vars */
-/******************/
-static int card_cnt;
-
-/* private driver_data */
-typedef struct {
- int chip_id;
- int clock_mode;
- int max_st_ports;
- char *device_name;
-} hfc4s8s_param;
-
-static const struct pci_device_id hfc4s8s_ids[] = {
- {.vendor = PCI_VENDOR_ID_CCD,
- .device = PCI_DEVICE_ID_4S,
- .subvendor = 0x1397,
- .subdevice = 0x08b4,
- .driver_data =
- (unsigned long) &((hfc4s8s_param) {CHIP_ID_4S, CLOCKMODE_0, 4,
- "HFC-4S Evaluation Board"}),
- },
- {.vendor = PCI_VENDOR_ID_CCD,
- .device = PCI_DEVICE_ID_8S,
- .subvendor = 0x1397,
- .subdevice = 0x16b8,
- .driver_data =
- (unsigned long) &((hfc4s8s_param) {CHIP_ID_8S, CLOCKMODE_0, 8,
- "HFC-8S Evaluation Board"}),
- },
- {.vendor = PCI_VENDOR_ID_CCD,
- .device = PCI_DEVICE_ID_4S,
- .subvendor = 0x1397,
- .subdevice = 0xb520,
- .driver_data =
- (unsigned long) &((hfc4s8s_param) {CHIP_ID_4S, CLOCKMODE_1, 4,
- "IOB4ST"}),
- },
- {.vendor = PCI_VENDOR_ID_CCD,
- .device = PCI_DEVICE_ID_8S,
- .subvendor = 0x1397,
- .subdevice = 0xb522,
- .driver_data =
- (unsigned long) &((hfc4s8s_param) {CHIP_ID_8S, CLOCKMODE_1, 8,
- "IOB8ST"}),
- },
- {}
-};
-
-MODULE_DEVICE_TABLE(pci, hfc4s8s_ids);
-
-MODULE_AUTHOR("Werner Cornelius, werner@cornelius-consult.de");
-MODULE_DESCRIPTION("ISDN layer 1 for Cologne Chip HFC-4S/8S chips");
-MODULE_LICENSE("GPL");
-
-/***********/
-/* layer 1 */
-/***********/
-struct hfc4s8s_btype {
- spinlock_t lock;
- struct hisax_b_if b_if;
- struct hfc4s8s_l1 *l1p;
- struct sk_buff_head tx_queue;
- struct sk_buff *tx_skb;
- struct sk_buff *rx_skb;
- __u8 *rx_ptr;
- int tx_cnt;
- int bchan;
- int mode;
-};
-
-struct _hfc4s8s_hw;
-
-struct hfc4s8s_l1 {
- spinlock_t lock;
- struct _hfc4s8s_hw *hw; /* pointer to hardware area */
- int l1_state; /* actual l1 state */
- struct timer_list l1_timer; /* layer 1 timer structure */
- int nt_mode; /* set to nt mode */
- int st_num; /* own index */
- int enabled; /* interface is enabled */
- struct sk_buff_head d_tx_queue; /* send queue */
- int tx_cnt; /* bytes to send */
- struct hisax_d_if d_if; /* D-channel interface */
- struct hfc4s8s_btype b_ch[2]; /* B-channel data */
- struct hisax_b_if *b_table[2];
-};
-
-/**********************/
-/* hardware structure */
-/**********************/
-typedef struct _hfc4s8s_hw {
- spinlock_t lock;
-
- int cardnum;
- int ifnum;
- int iobase;
- int nt_mode;
- u_char *membase;
- u_char *hw_membase;
- void *pdev;
- int max_fifo;
- hfc4s8s_param driver_data;
- int irq;
- int fifo_sched_cnt;
- struct work_struct tqueue;
- struct hfc4s8s_l1 l1[HFC_MAX_ST];
- char card_name[60];
- struct {
- u_char r_irq_ctrl;
- u_char r_ctrl0;
- volatile u_char r_irq_statech; /* active isdn l1 status */
- u_char r_irqmsk_statchg; /* enabled isdn status ints */
- u_char r_irq_fifo_blx[8]; /* fifo status registers */
- u_char fifo_rx_trans_enables[8]; /* mask for enabled transparent rx fifos */
- u_char fifo_slow_timer_service[8]; /* mask for fifos needing slower timer service */
- volatile u_char r_irq_oview; /* contents of overview register */
- volatile u_char timer_irq;
- int timer_usg_cnt; /* number of channels using timer */
- } mr;
-} hfc4s8s_hw;
-
-
-
-/* inline functions io mapped */
-static inline void
-SetRegAddr(hfc4s8s_hw *a, u_char b)
-{
- outb(b, (a->iobase) + 4);
-}
-
-static inline u_char
-GetRegAddr(hfc4s8s_hw *a)
-{
- return (inb((volatile u_int) (a->iobase + 4)));
-}
-
-
-static inline void
-Write_hfc8(hfc4s8s_hw *a, u_char b, u_char c)
-{
- SetRegAddr(a, b);
- outb(c, a->iobase);
-}
-
-static inline void
-fWrite_hfc8(hfc4s8s_hw *a, u_char c)
-{
- outb(c, a->iobase);
-}
-
-static inline void
-fWrite_hfc32(hfc4s8s_hw *a, u_long c)
-{
- outl(c, a->iobase);
-}
-
-static inline u_char
-Read_hfc8(hfc4s8s_hw *a, u_char b)
-{
- SetRegAddr(a, b);
- return (inb((volatile u_int) a->iobase));
-}
-
-static inline u_char
-fRead_hfc8(hfc4s8s_hw *a)
-{
- return (inb((volatile u_int) a->iobase));
-}
-
-
-static inline u_short
-Read_hfc16(hfc4s8s_hw *a, u_char b)
-{
- SetRegAddr(a, b);
- return (inw((volatile u_int) a->iobase));
-}
-
-static inline u_long
-fRead_hfc32(hfc4s8s_hw *a)
-{
- return (inl((volatile u_int) a->iobase));
-}
-
-static inline void
-wait_busy(hfc4s8s_hw *a)
-{
- SetRegAddr(a, R_STATUS);
- while (inb((volatile u_int) a->iobase) & M_BUSY);
-}
-
-#define PCI_ENA_REGIO 0x01
-
-/******************************************************/
-/* function to read critical counter registers that */
-/* may be updated by the chip during read */
-/******************************************************/
-static u_char
-Read_hfc8_stable(hfc4s8s_hw *hw, int reg)
-{
- u_char ref8;
- u_char in8;
- ref8 = Read_hfc8(hw, reg);
- while (((in8 = Read_hfc8(hw, reg)) != ref8)) {
- ref8 = in8;
- }
- return in8;
-}
-
-static int
-Read_hfc16_stable(hfc4s8s_hw *hw, int reg)
-{
- int ref16;
- int in16;
-
- ref16 = Read_hfc16(hw, reg);
- while (((in16 = Read_hfc16(hw, reg)) != ref16)) {
- ref16 = in16;
- }
- return in16;
-}
-
-/*****************************/
-/* D-channel call from HiSax */
-/*****************************/
-static void
-dch_l2l1(struct hisax_d_if *iface, int pr, void *arg)
-{
- struct hfc4s8s_l1 *l1 = iface->ifc.priv;
- struct sk_buff *skb = (struct sk_buff *) arg;
- u_long flags;
-
- switch (pr) {
-
- case (PH_DATA | REQUEST):
- if (!l1->enabled) {
- dev_kfree_skb(skb);
- break;
- }
- spin_lock_irqsave(&l1->lock, flags);
- skb_queue_tail(&l1->d_tx_queue, skb);
- if ((skb_queue_len(&l1->d_tx_queue) == 1) &&
- (l1->tx_cnt <= 0)) {
- l1->hw->mr.r_irq_fifo_blx[l1->st_num] |=
- 0x10;
- spin_unlock_irqrestore(&l1->lock, flags);
- schedule_work(&l1->hw->tqueue);
- } else
- spin_unlock_irqrestore(&l1->lock, flags);
- break;
-
- case (PH_ACTIVATE | REQUEST):
- if (!l1->enabled)
- break;
- if (!l1->nt_mode) {
- if (l1->l1_state < 6) {
- spin_lock_irqsave(&l1->lock,
- flags);
-
- Write_hfc8(l1->hw, R_ST_SEL,
- l1->st_num);
- Write_hfc8(l1->hw, A_ST_WR_STA,
- 0x60);
- mod_timer(&l1->l1_timer,
- jiffies + L1_TIMER_T3);
- spin_unlock_irqrestore(&l1->lock,
- flags);
- } else if (l1->l1_state == 7)
- l1->d_if.ifc.l1l2(&l1->d_if.ifc,
- PH_ACTIVATE |
- INDICATION,
- NULL);
- } else {
- if (l1->l1_state != 3) {
- spin_lock_irqsave(&l1->lock,
- flags);
- Write_hfc8(l1->hw, R_ST_SEL,
- l1->st_num);
- Write_hfc8(l1->hw, A_ST_WR_STA,
- 0x60);
- spin_unlock_irqrestore(&l1->lock,
- flags);
- } else if (l1->l1_state == 3)
- l1->d_if.ifc.l1l2(&l1->d_if.ifc,
- PH_ACTIVATE |
- INDICATION,
- NULL);
- }
- break;
-
- default:
- printk(KERN_INFO
- "HFC-4S/8S: Unknown D-chan cmd 0x%x received, ignored\n",
- pr);
- break;
- }
- if (!l1->enabled)
- l1->d_if.ifc.l1l2(&l1->d_if.ifc,
- PH_DEACTIVATE | INDICATION, NULL);
-} /* dch_l2l1 */
-
-/*****************************/
-/* B-channel call from HiSax */
-/*****************************/
-static void
-bch_l2l1(struct hisax_if *ifc, int pr, void *arg)
-{
- struct hfc4s8s_btype *bch = ifc->priv;
- struct hfc4s8s_l1 *l1 = bch->l1p;
- struct sk_buff *skb = (struct sk_buff *) arg;
- long mode = (long) arg;
- u_long flags;
-
- switch (pr) {
-
- case (PH_DATA | REQUEST):
- if (!l1->enabled || (bch->mode == L1_MODE_NULL)) {
- dev_kfree_skb(skb);
- break;
- }
- spin_lock_irqsave(&l1->lock, flags);
- skb_queue_tail(&bch->tx_queue, skb);
- if (!bch->tx_skb && (bch->tx_cnt <= 0)) {
- l1->hw->mr.r_irq_fifo_blx[l1->st_num] |=
- ((bch->bchan == 1) ? 1 : 4);
- spin_unlock_irqrestore(&l1->lock, flags);
- schedule_work(&l1->hw->tqueue);
- } else
- spin_unlock_irqrestore(&l1->lock, flags);
- break;
-
- case (PH_ACTIVATE | REQUEST):
- case (PH_DEACTIVATE | REQUEST):
- if (!l1->enabled)
- break;
- if (pr == (PH_DEACTIVATE | REQUEST))
- mode = L1_MODE_NULL;
-
- switch (mode) {
- case L1_MODE_HDLC:
- spin_lock_irqsave(&l1->lock,
- flags);
- l1->hw->mr.timer_usg_cnt++;
- l1->hw->mr.
- fifo_slow_timer_service[l1->
- st_num]
- |=
- ((bch->bchan ==
- 1) ? 0x2 : 0x8);
- Write_hfc8(l1->hw, R_FIFO,
- (l1->st_num * 8 +
- ((bch->bchan ==
- 1) ? 0 : 2)));
- wait_busy(l1->hw);
- Write_hfc8(l1->hw, A_CON_HDLC, 0xc); /* HDLC mode, flag fill, connect ST */
- Write_hfc8(l1->hw, A_SUBCH_CFG, 0); /* 8 bits */
- Write_hfc8(l1->hw, A_IRQ_MSK, 1); /* enable TX interrupts for hdlc */
- Write_hfc8(l1->hw, A_INC_RES_FIFO, 2); /* reset fifo */
- wait_busy(l1->hw);
-
- Write_hfc8(l1->hw, R_FIFO,
- (l1->st_num * 8 +
- ((bch->bchan ==
- 1) ? 1 : 3)));
- wait_busy(l1->hw);
- Write_hfc8(l1->hw, A_CON_HDLC, 0xc); /* HDLC mode, flag fill, connect ST */
- Write_hfc8(l1->hw, A_SUBCH_CFG, 0); /* 8 bits */
- Write_hfc8(l1->hw, A_IRQ_MSK, 1); /* enable RX interrupts for hdlc */
- Write_hfc8(l1->hw, A_INC_RES_FIFO, 2); /* reset fifo */
-
- Write_hfc8(l1->hw, R_ST_SEL,
- l1->st_num);
- l1->hw->mr.r_ctrl0 |=
- (bch->bchan & 3);
- Write_hfc8(l1->hw, A_ST_CTRL0,
- l1->hw->mr.r_ctrl0);
- bch->mode = L1_MODE_HDLC;
- spin_unlock_irqrestore(&l1->lock,
- flags);
-
- bch->b_if.ifc.l1l2(&bch->b_if.ifc,
- PH_ACTIVATE |
- INDICATION,
- NULL);
- break;
-
- case L1_MODE_TRANS:
- spin_lock_irqsave(&l1->lock,
- flags);
- l1->hw->mr.
- fifo_rx_trans_enables[l1->
- st_num]
- |=
- ((bch->bchan ==
- 1) ? 0x2 : 0x8);
- l1->hw->mr.timer_usg_cnt++;
- Write_hfc8(l1->hw, R_FIFO,
- (l1->st_num * 8 +
- ((bch->bchan ==
- 1) ? 0 : 2)));
- wait_busy(l1->hw);
- Write_hfc8(l1->hw, A_CON_HDLC, 0xf); /* Transparent mode, 1 fill, connect ST */
- Write_hfc8(l1->hw, A_SUBCH_CFG, 0); /* 8 bits */
- Write_hfc8(l1->hw, A_IRQ_MSK, 0); /* disable TX interrupts */
- Write_hfc8(l1->hw, A_INC_RES_FIFO, 2); /* reset fifo */
- wait_busy(l1->hw);
-
- Write_hfc8(l1->hw, R_FIFO,
- (l1->st_num * 8 +
- ((bch->bchan ==
- 1) ? 1 : 3)));
- wait_busy(l1->hw);
- Write_hfc8(l1->hw, A_CON_HDLC, 0xf); /* Transparent mode, 1 fill, connect ST */
- Write_hfc8(l1->hw, A_SUBCH_CFG, 0); /* 8 bits */
- Write_hfc8(l1->hw, A_IRQ_MSK, 0); /* disable RX interrupts */
- Write_hfc8(l1->hw, A_INC_RES_FIFO, 2); /* reset fifo */
-
- Write_hfc8(l1->hw, R_ST_SEL,
- l1->st_num);
- l1->hw->mr.r_ctrl0 |=
- (bch->bchan & 3);
- Write_hfc8(l1->hw, A_ST_CTRL0,
- l1->hw->mr.r_ctrl0);
- bch->mode = L1_MODE_TRANS;
- spin_unlock_irqrestore(&l1->lock,
- flags);
-
- bch->b_if.ifc.l1l2(&bch->b_if.ifc,
- PH_ACTIVATE |
- INDICATION,
- NULL);
- break;
-
- default:
- if (bch->mode == L1_MODE_NULL)
- break;
- spin_lock_irqsave(&l1->lock,
- flags);
- l1->hw->mr.
- fifo_slow_timer_service[l1->
- st_num]
- &=
- ~((bch->bchan ==
- 1) ? 0x3 : 0xc);
- l1->hw->mr.
- fifo_rx_trans_enables[l1->
- st_num]
- &=
- ~((bch->bchan ==
- 1) ? 0x3 : 0xc);
- l1->hw->mr.timer_usg_cnt--;
- Write_hfc8(l1->hw, R_FIFO,
- (l1->st_num * 8 +
- ((bch->bchan ==
- 1) ? 0 : 2)));
- wait_busy(l1->hw);
- Write_hfc8(l1->hw, A_IRQ_MSK, 0); /* disable TX interrupts */
- wait_busy(l1->hw);
- Write_hfc8(l1->hw, R_FIFO,
- (l1->st_num * 8 +
- ((bch->bchan ==
- 1) ? 1 : 3)));
- wait_busy(l1->hw);
- Write_hfc8(l1->hw, A_IRQ_MSK, 0); /* disable RX interrupts */
- Write_hfc8(l1->hw, R_ST_SEL,
- l1->st_num);
- l1->hw->mr.r_ctrl0 &=
- ~(bch->bchan & 3);
- Write_hfc8(l1->hw, A_ST_CTRL0,
- l1->hw->mr.r_ctrl0);
- spin_unlock_irqrestore(&l1->lock,
- flags);
-
- bch->mode = L1_MODE_NULL;
- bch->b_if.ifc.l1l2(&bch->b_if.ifc,
- PH_DEACTIVATE |
- INDICATION,
- NULL);
- if (bch->tx_skb) {
- dev_kfree_skb(bch->tx_skb);
- bch->tx_skb = NULL;
- }
- if (bch->rx_skb) {
- dev_kfree_skb(bch->rx_skb);
- bch->rx_skb = NULL;
- }
- skb_queue_purge(&bch->tx_queue);
- bch->tx_cnt = 0;
- bch->rx_ptr = NULL;
- break;
- }
-
- /* timer is only used when at least one b channel */
- /* is set up to transparent mode */
- if (l1->hw->mr.timer_usg_cnt) {
- Write_hfc8(l1->hw, R_IRQMSK_MISC,
- M_TI_IRQMSK);
- } else {
- Write_hfc8(l1->hw, R_IRQMSK_MISC, 0);
- }
-
- break;
-
- default:
- printk(KERN_INFO
- "HFC-4S/8S: Unknown B-chan cmd 0x%x received, ignored\n",
- pr);
- break;
- }
- if (!l1->enabled)
- bch->b_if.ifc.l1l2(&bch->b_if.ifc,
- PH_DEACTIVATE | INDICATION, NULL);
-} /* bch_l2l1 */
-
-/**************************/
-/* layer 1 timer function */
-/**************************/
-static void
-hfc_l1_timer(struct timer_list *t)
-{
- struct hfc4s8s_l1 *l1 = from_timer(l1, t, l1_timer);
- u_long flags;
-
- if (!l1->enabled)
- return;
-
- spin_lock_irqsave(&l1->lock, flags);
- if (l1->nt_mode) {
- l1->l1_state = 1;
- Write_hfc8(l1->hw, R_ST_SEL, l1->st_num);
- Write_hfc8(l1->hw, A_ST_WR_STA, 0x11);
- spin_unlock_irqrestore(&l1->lock, flags);
- l1->d_if.ifc.l1l2(&l1->d_if.ifc,
- PH_DEACTIVATE | INDICATION, NULL);
- spin_lock_irqsave(&l1->lock, flags);
- l1->l1_state = 1;
- Write_hfc8(l1->hw, A_ST_WR_STA, 0x1);
- spin_unlock_irqrestore(&l1->lock, flags);
- } else {
- /* activation timed out */
- Write_hfc8(l1->hw, R_ST_SEL, l1->st_num);
- Write_hfc8(l1->hw, A_ST_WR_STA, 0x13);
- spin_unlock_irqrestore(&l1->lock, flags);
- l1->d_if.ifc.l1l2(&l1->d_if.ifc,
- PH_DEACTIVATE | INDICATION, NULL);
- spin_lock_irqsave(&l1->lock, flags);
- Write_hfc8(l1->hw, R_ST_SEL, l1->st_num);
- Write_hfc8(l1->hw, A_ST_WR_STA, 0x3);
- spin_unlock_irqrestore(&l1->lock, flags);
- }
-} /* hfc_l1_timer */
-
-/****************************************/
-/* a complete D-frame has been received */
-/****************************************/
-static void
-rx_d_frame(struct hfc4s8s_l1 *l1p, int ech)
-{
- int z1, z2;
- u_char f1, f2, df;
- struct sk_buff *skb;
- u_char *cp;
-
-
- if (!l1p->enabled)
- return;
- do {
- /* E/D RX fifo */
- Write_hfc8(l1p->hw, R_FIFO,
- (l1p->st_num * 8 + ((ech) ? 7 : 5)));
- wait_busy(l1p->hw);
-
- f1 = Read_hfc8_stable(l1p->hw, A_F1);
- f2 = Read_hfc8(l1p->hw, A_F2);
-
- if (f1 < f2)
- df = MAX_F_CNT + 1 + f1 - f2;
- else
- df = f1 - f2;
-
- if (!df)
- return; /* no complete frame in fifo */
-
- z1 = Read_hfc16_stable(l1p->hw, A_Z1);
- z2 = Read_hfc16(l1p->hw, A_Z2);
-
- z1 = z1 - z2 + 1;
- if (z1 < 0)
- z1 += 384;
-
- if (!(skb = dev_alloc_skb(MAX_D_FRAME_SIZE))) {
- printk(KERN_INFO
- "HFC-4S/8S: Could not allocate D/E "
- "channel receive buffer");
- Write_hfc8(l1p->hw, A_INC_RES_FIFO, 2);
- wait_busy(l1p->hw);
- return;
- }
-
- if (((z1 < 4) || (z1 > MAX_D_FRAME_SIZE))) {
- if (skb)
- dev_kfree_skb(skb);
- /* remove erroneous D frame */
- if (df == 1) {
- /* reset fifo */
- Write_hfc8(l1p->hw, A_INC_RES_FIFO, 2);
- wait_busy(l1p->hw);
- return;
- } else {
- /* read erroneous D frame */
- SetRegAddr(l1p->hw, A_FIFO_DATA0);
-
- while (z1 >= 4) {
- fRead_hfc32(l1p->hw);
- z1 -= 4;
- }
-
- while (z1--)
- fRead_hfc8(l1p->hw);
-
- Write_hfc8(l1p->hw, A_INC_RES_FIFO, 1);
- wait_busy(l1p->hw);
- return;
- }
- }
-
- cp = skb->data;
-
- SetRegAddr(l1p->hw, A_FIFO_DATA0);
-
- while (z1 >= 4) {
- *((unsigned long *) cp) = fRead_hfc32(l1p->hw);
- cp += 4;
- z1 -= 4;
- }
-
- while (z1--)
- *cp++ = fRead_hfc8(l1p->hw);
-
- Write_hfc8(l1p->hw, A_INC_RES_FIFO, 1); /* increment f counter */
- wait_busy(l1p->hw);
-
- if (*(--cp)) {
- dev_kfree_skb(skb);
- } else {
- skb->len = (cp - skb->data) - 2;
- if (ech)
- l1p->d_if.ifc.l1l2(&l1p->d_if.ifc,
- PH_DATA_E | INDICATION,
- skb);
- else
- l1p->d_if.ifc.l1l2(&l1p->d_if.ifc,
- PH_DATA | INDICATION,
- skb);
- }
- } while (1);
-} /* rx_d_frame */
-
-/*************************************************************/
-/* a B-frame has been received (perhaps not fully completed) */
-/*************************************************************/
-static void
-rx_b_frame(struct hfc4s8s_btype *bch)
-{
- int z1, z2, hdlc_complete;
- u_char f1, f2;
- struct hfc4s8s_l1 *l1 = bch->l1p;
- struct sk_buff *skb;
-
- if (!l1->enabled || (bch->mode == L1_MODE_NULL))
- return;
-
- do {
- /* RX Fifo */
- Write_hfc8(l1->hw, R_FIFO,
- (l1->st_num * 8 + ((bch->bchan == 1) ? 1 : 3)));
- wait_busy(l1->hw);
-
- if (bch->mode == L1_MODE_HDLC) {
- f1 = Read_hfc8_stable(l1->hw, A_F1);
- f2 = Read_hfc8(l1->hw, A_F2);
- hdlc_complete = ((f1 ^ f2) & MAX_F_CNT);
- } else
- hdlc_complete = 0;
- z1 = Read_hfc16_stable(l1->hw, A_Z1);
- z2 = Read_hfc16(l1->hw, A_Z2);
- z1 = (z1 - z2);
- if (hdlc_complete)
- z1++;
- if (z1 < 0)
- z1 += 384;
-
- if (!z1)
- break;
-
- if (!(skb = bch->rx_skb)) {
- if (!
- (skb =
- dev_alloc_skb((bch->mode ==
- L1_MODE_TRANS) ? z1
- : (MAX_B_FRAME_SIZE + 3)))) {
- printk(KERN_ERR
- "HFC-4S/8S: Could not allocate B "
- "channel receive buffer");
- return;
- }
- bch->rx_ptr = skb->data;
- bch->rx_skb = skb;
- }
-
- skb->len = (bch->rx_ptr - skb->data) + z1;
-
- /* HDLC length check */
- if ((bch->mode == L1_MODE_HDLC) &&
- ((hdlc_complete && (skb->len < 4)) ||
- (skb->len > (MAX_B_FRAME_SIZE + 3)))) {
-
- skb->len = 0;
- bch->rx_ptr = skb->data;
- Write_hfc8(l1->hw, A_INC_RES_FIFO, 2); /* reset fifo */
- wait_busy(l1->hw);
- return;
- }
- SetRegAddr(l1->hw, A_FIFO_DATA0);
-
- while (z1 >= 4) {
- *((unsigned long *) bch->rx_ptr) =
- fRead_hfc32(l1->hw);
- bch->rx_ptr += 4;
- z1 -= 4;
- }
-
- while (z1--)
- *(bch->rx_ptr++) = fRead_hfc8(l1->hw);
-
- if (hdlc_complete) {
- /* increment f counter */
- Write_hfc8(l1->hw, A_INC_RES_FIFO, 1);
- wait_busy(l1->hw);
-
- /* hdlc crc check */
- bch->rx_ptr--;
- if (*bch->rx_ptr) {
- skb->len = 0;
- bch->rx_ptr = skb->data;
- continue;
- }
- skb->len -= 3;
- }
- if (hdlc_complete || (bch->mode == L1_MODE_TRANS)) {
- bch->rx_skb = NULL;
- bch->rx_ptr = NULL;
- bch->b_if.ifc.l1l2(&bch->b_if.ifc,
- PH_DATA | INDICATION, skb);
- }
-
- } while (1);
-} /* rx_b_frame */
-
-/********************************************/
-/* a D-frame has been/should be transmitted */
-/********************************************/
-static void
-tx_d_frame(struct hfc4s8s_l1 *l1p)
-{
- struct sk_buff *skb;
- u_char f1, f2;
- u_char *cp;
- long cnt;
-
- if (l1p->l1_state != 7)
- return;
-
- /* TX fifo */
- Write_hfc8(l1p->hw, R_FIFO, (l1p->st_num * 8 + 4));
- wait_busy(l1p->hw);
-
- f1 = Read_hfc8(l1p->hw, A_F1);
- f2 = Read_hfc8_stable(l1p->hw, A_F2);
-
- if ((f1 ^ f2) & MAX_F_CNT)
- return; /* fifo is still filled */
-
- if (l1p->tx_cnt > 0) {
- cnt = l1p->tx_cnt;
- l1p->tx_cnt = 0;
- l1p->d_if.ifc.l1l2(&l1p->d_if.ifc, PH_DATA | CONFIRM,
- (void *) cnt);
- }
-
- if ((skb = skb_dequeue(&l1p->d_tx_queue))) {
- cp = skb->data;
- cnt = skb->len;
- SetRegAddr(l1p->hw, A_FIFO_DATA0);
-
- while (cnt >= 4) {
- SetRegAddr(l1p->hw, A_FIFO_DATA0);
- fWrite_hfc32(l1p->hw, *(unsigned long *) cp);
- cp += 4;
- cnt -= 4;
- }
-
- while (cnt--)
- fWrite_hfc8(l1p->hw, *cp++);
-
- l1p->tx_cnt = skb->truesize;
- Write_hfc8(l1p->hw, A_INC_RES_FIFO, 1); /* increment f counter */
- wait_busy(l1p->hw);
-
- dev_kfree_skb(skb);
- }
-} /* tx_d_frame */
-
-/******************************************************/
-/* a B-frame may be transmitted (or is not completed) */
-/******************************************************/
-static void
-tx_b_frame(struct hfc4s8s_btype *bch)
-{
- struct sk_buff *skb;
- struct hfc4s8s_l1 *l1 = bch->l1p;
- u_char *cp;
- int cnt, max, hdlc_num;
- long ack_len = 0;
-
- if (!l1->enabled || (bch->mode == L1_MODE_NULL))
- return;
-
- /* TX fifo */
- Write_hfc8(l1->hw, R_FIFO,
- (l1->st_num * 8 + ((bch->bchan == 1) ? 0 : 2)));
- wait_busy(l1->hw);
- do {
-
- if (bch->mode == L1_MODE_HDLC) {
- hdlc_num = Read_hfc8(l1->hw, A_F1) & MAX_F_CNT;
- hdlc_num -=
- (Read_hfc8_stable(l1->hw, A_F2) & MAX_F_CNT);
- if (hdlc_num < 0)
- hdlc_num += 16;
- if (hdlc_num >= 15)
- break; /* fifo still filled up with hdlc frames */
- } else
- hdlc_num = 0;
-
- if (!(skb = bch->tx_skb)) {
- if (!(skb = skb_dequeue(&bch->tx_queue))) {
- l1->hw->mr.fifo_slow_timer_service[l1->
- st_num]
- &= ~((bch->bchan == 1) ? 1 : 4);
- break; /* list empty */
- }
- bch->tx_skb = skb;
- bch->tx_cnt = 0;
- }
-
- if (!hdlc_num)
- l1->hw->mr.fifo_slow_timer_service[l1->st_num] |=
- ((bch->bchan == 1) ? 1 : 4);
- else
- l1->hw->mr.fifo_slow_timer_service[l1->st_num] &=
- ~((bch->bchan == 1) ? 1 : 4);
-
- max = Read_hfc16_stable(l1->hw, A_Z2);
- max -= Read_hfc16(l1->hw, A_Z1);
- if (max <= 0)
- max += 384;
- max--;
-
- if (max < 16)
- break; /* don't write too small amounts of bytes */
-
- cnt = skb->len - bch->tx_cnt;
- if (cnt > max)
- cnt = max;
- cp = skb->data + bch->tx_cnt;
- bch->tx_cnt += cnt;
-
- SetRegAddr(l1->hw, A_FIFO_DATA0);
- while (cnt >= 4) {
- fWrite_hfc32(l1->hw, *(unsigned long *) cp);
- cp += 4;
- cnt -= 4;
- }
-
- while (cnt--)
- fWrite_hfc8(l1->hw, *cp++);
-
- if (bch->tx_cnt >= skb->len) {
- if (bch->mode == L1_MODE_HDLC) {
- /* increment f counter */
- Write_hfc8(l1->hw, A_INC_RES_FIFO, 1);
- }
- ack_len += skb->truesize;
- bch->tx_skb = NULL;
- bch->tx_cnt = 0;
- dev_kfree_skb(skb);
- } else
- /* Re-Select */
- Write_hfc8(l1->hw, R_FIFO,
- (l1->st_num * 8 +
- ((bch->bchan == 1) ? 0 : 2)));
- wait_busy(l1->hw);
- } while (1);
-
- if (ack_len)
- bch->b_if.ifc.l1l2((struct hisax_if *) &bch->b_if,
- PH_DATA | CONFIRM, (void *) ack_len);
-} /* tx_b_frame */
-
-/*************************************/
-/* bottom half handler for interrupt */
-/*************************************/
-static void
-hfc4s8s_bh(struct work_struct *work)
-{
- hfc4s8s_hw *hw = container_of(work, hfc4s8s_hw, tqueue);
- u_char b;
- struct hfc4s8s_l1 *l1p;
- volatile u_char *fifo_stat;
- int idx;
-
- /* handle layer 1 state changes */
- b = 1;
- l1p = hw->l1;
- while (b) {
- if ((b & hw->mr.r_irq_statech)) {
- /* reset l1 event */
- hw->mr.r_irq_statech &= ~b;
- if (l1p->enabled) {
- if (l1p->nt_mode) {
- u_char oldstate = l1p->l1_state;
-
- Write_hfc8(l1p->hw, R_ST_SEL,
- l1p->st_num);
- l1p->l1_state =
- Read_hfc8(l1p->hw,
- A_ST_RD_STA) & 0xf;
-
- if ((oldstate == 3)
- && (l1p->l1_state != 3))
- l1p->d_if.ifc.l1l2(&l1p->
- d_if.
- ifc,
- PH_DEACTIVATE
- |
- INDICATION,
- NULL);
-
- if (l1p->l1_state != 2) {
- del_timer(&l1p->l1_timer);
- if (l1p->l1_state == 3) {
- l1p->d_if.ifc.
- l1l2(&l1p->
- d_if.ifc,
- PH_ACTIVATE
- |
- INDICATION,
- NULL);
- }
- } else {
- /* allow transition */
- Write_hfc8(hw, A_ST_WR_STA,
- M_SET_G2_G3);
- mod_timer(&l1p->l1_timer,
- jiffies +
- L1_TIMER_T1);
- }
- printk(KERN_INFO
- "HFC-4S/8S: NT ch %d l1 state %d -> %d\n",
- l1p->st_num, oldstate,
- l1p->l1_state);
- } else {
- u_char oldstate = l1p->l1_state;
-
- Write_hfc8(l1p->hw, R_ST_SEL,
- l1p->st_num);
- l1p->l1_state =
- Read_hfc8(l1p->hw,
- A_ST_RD_STA) & 0xf;
-
- if (((l1p->l1_state == 3) &&
- ((oldstate == 7) ||
- (oldstate == 8))) ||
- ((timer_pending
- (&l1p->l1_timer))
- && (l1p->l1_state == 8))) {
- mod_timer(&l1p->l1_timer,
- L1_TIMER_T4 +
- jiffies);
- } else {
- if (l1p->l1_state == 7) {
- del_timer(&l1p->
- l1_timer);
- l1p->d_if.ifc.
- l1l2(&l1p->
- d_if.ifc,
- PH_ACTIVATE
- |
- INDICATION,
- NULL);
- tx_d_frame(l1p);
- }
- if (l1p->l1_state == 3) {
- if (oldstate != 3)
- l1p->d_if.
- ifc.
- l1l2
- (&l1p->
- d_if.
- ifc,
- PH_DEACTIVATE
- |
- INDICATION,
- NULL);
- }
- }
- printk(KERN_INFO
- "HFC-4S/8S: TE %d ch %d l1 state %d -> %d\n",
- l1p->hw->cardnum,
- l1p->st_num, oldstate,
- l1p->l1_state);
- }
- }
- }
- b <<= 1;
- l1p++;
- }
-
- /* now handle the fifos */
- idx = 0;
- fifo_stat = hw->mr.r_irq_fifo_blx;
- l1p = hw->l1;
- while (idx < hw->driver_data.max_st_ports) {
-
- if (hw->mr.timer_irq) {
- *fifo_stat |= hw->mr.fifo_rx_trans_enables[idx];
- if (hw->fifo_sched_cnt <= 0) {
- *fifo_stat |=
- hw->mr.fifo_slow_timer_service[l1p->
- st_num];
- }
- }
- /* ignore fifo 6 (TX E fifo) */
- *fifo_stat &= 0xff - 0x40;
-
- while (*fifo_stat) {
-
- if (!l1p->nt_mode) {
- /* RX Fifo has data to read */
- if ((*fifo_stat & 0x20)) {
- *fifo_stat &= ~0x20;
- rx_d_frame(l1p, 0);
- }
- /* E Fifo has data to read */
- if ((*fifo_stat & 0x80)) {
- *fifo_stat &= ~0x80;
- rx_d_frame(l1p, 1);
- }
- /* TX Fifo completed send */
- if ((*fifo_stat & 0x10)) {
- *fifo_stat &= ~0x10;
- tx_d_frame(l1p);
- }
- }
- /* B1 RX Fifo has data to read */
- if ((*fifo_stat & 0x2)) {
- *fifo_stat &= ~0x2;
- rx_b_frame(l1p->b_ch);
- }
- /* B1 TX Fifo has send completed */
- if ((*fifo_stat & 0x1)) {
- *fifo_stat &= ~0x1;
- tx_b_frame(l1p->b_ch);
- }
- /* B2 RX Fifo has data to read */
- if ((*fifo_stat & 0x8)) {
- *fifo_stat &= ~0x8;
- rx_b_frame(l1p->b_ch + 1);
- }
- /* B2 TX Fifo has send completed */
- if ((*fifo_stat & 0x4)) {
- *fifo_stat &= ~0x4;
- tx_b_frame(l1p->b_ch + 1);
- }
- }
- fifo_stat++;
- l1p++;
- idx++;
- }
-
- if (hw->fifo_sched_cnt <= 0)
- hw->fifo_sched_cnt += (1 << (7 - TRANS_TIMER_MODE));
- hw->mr.timer_irq = 0; /* clear requested timer irq */
-} /* hfc4s8s_bh */
-
-/*********************/
-/* interrupt handler */
-/*********************/
-static irqreturn_t
-hfc4s8s_interrupt(int intno, void *dev_id)
-{
- hfc4s8s_hw *hw = dev_id;
- u_char b, ovr;
- volatile u_char *ovp;
- int idx;
- u_char old_ioreg;
-
- if (!hw || !(hw->mr.r_irq_ctrl & M_GLOB_IRQ_EN))
- return IRQ_NONE;
-
- /* read the currently selected register */
- old_ioreg = GetRegAddr(hw);
-
- /* Layer 1 State change */
- hw->mr.r_irq_statech |=
- (Read_hfc8(hw, R_SCI) & hw->mr.r_irqmsk_statchg);
- if (!
- (b = (Read_hfc8(hw, R_STATUS) & (M_MISC_IRQSTA | M_FR_IRQSTA)))
- && !hw->mr.r_irq_statech) {
- SetRegAddr(hw, old_ioreg);
- return IRQ_NONE;
- }
-
- /* timer event */
- if (Read_hfc8(hw, R_IRQ_MISC) & M_TI_IRQ) {
- hw->mr.timer_irq = 1;
- hw->fifo_sched_cnt--;
- }
-
- /* FIFO event */
- if ((ovr = Read_hfc8(hw, R_IRQ_OVIEW))) {
- hw->mr.r_irq_oview |= ovr;
- idx = R_IRQ_FIFO_BL0;
- ovp = hw->mr.r_irq_fifo_blx;
- while (ovr) {
- if ((ovr & 1)) {
- *ovp |= Read_hfc8(hw, idx);
- }
- ovp++;
- idx++;
- ovr >>= 1;
- }
- }
-
- /* queue the request to allow other cards to interrupt */
- schedule_work(&hw->tqueue);
-
- SetRegAddr(hw, old_ioreg);
- return IRQ_HANDLED;
-} /* hfc4s8s_interrupt */
-
-/***********************************************************************/
-/* reset the complete chip, don't release the chips irq but disable it */
-/***********************************************************************/
-static void
-chipreset(hfc4s8s_hw *hw)
-{
- u_long flags;
-
- spin_lock_irqsave(&hw->lock, flags);
- Write_hfc8(hw, R_CTRL, 0); /* use internal RAM */
- Write_hfc8(hw, R_RAM_MISC, 0); /* 32k*8 RAM */
- Write_hfc8(hw, R_FIFO_MD, 0); /* fifo mode 386 byte/fifo simple mode */
- Write_hfc8(hw, R_CIRM, M_SRES); /* reset chip */
- hw->mr.r_irq_ctrl = 0; /* interrupt is inactive */
- spin_unlock_irqrestore(&hw->lock, flags);
-
- udelay(3);
- Write_hfc8(hw, R_CIRM, 0); /* disable reset */
- wait_busy(hw);
-
- Write_hfc8(hw, R_PCM_MD0, M_PCM_MD); /* master mode */
- Write_hfc8(hw, R_RAM_MISC, M_FZ_MD); /* transmit fifo option */
- if (hw->driver_data.clock_mode == 1)
- Write_hfc8(hw, R_BRG_PCM_CFG, M_PCM_CLK); /* PCM clk / 2 */
- Write_hfc8(hw, R_TI_WD, TRANS_TIMER_MODE); /* timer interval */
-
- memset(&hw->mr, 0, sizeof(hw->mr));
-} /* chipreset */
-
-/********************************************/
-/* disable/enable hardware in nt or te mode */
-/********************************************/
-static void
-hfc_hardware_enable(hfc4s8s_hw *hw, int enable, int nt_mode)
-{
- u_long flags;
- char if_name[40];
- int i;
-
- if (enable) {
- /* save system vars */
- hw->nt_mode = nt_mode;
-
- /* enable fifo and state irqs, but not global irq enable */
- hw->mr.r_irq_ctrl = M_FIFO_IRQ;
- Write_hfc8(hw, R_IRQ_CTRL, hw->mr.r_irq_ctrl);
- hw->mr.r_irqmsk_statchg = 0;
- Write_hfc8(hw, R_SCI_MSK, hw->mr.r_irqmsk_statchg);
- Write_hfc8(hw, R_PWM_MD, 0x80);
- Write_hfc8(hw, R_PWM1, 26);
- if (!nt_mode)
- Write_hfc8(hw, R_ST_SYNC, M_AUTO_SYNC);
-
- /* enable the line interfaces and fifos */
- for (i = 0; i < hw->driver_data.max_st_ports; i++) {
- hw->mr.r_irqmsk_statchg |= (1 << i);
- Write_hfc8(hw, R_SCI_MSK, hw->mr.r_irqmsk_statchg);
- Write_hfc8(hw, R_ST_SEL, i);
- Write_hfc8(hw, A_ST_CLK_DLY,
- ((nt_mode) ? CLKDEL_NT : CLKDEL_TE));
- hw->mr.r_ctrl0 = ((nt_mode) ? CTRL0_NT : CTRL0_TE);
- Write_hfc8(hw, A_ST_CTRL0, hw->mr.r_ctrl0);
- Write_hfc8(hw, A_ST_CTRL2, 3);
- Write_hfc8(hw, A_ST_WR_STA, 0); /* enable state machine */
-
- hw->l1[i].enabled = 1;
- hw->l1[i].nt_mode = nt_mode;
-
- if (!nt_mode) {
- /* setup E-fifo */
- Write_hfc8(hw, R_FIFO, i * 8 + 7); /* E fifo */
- wait_busy(hw);
- Write_hfc8(hw, A_CON_HDLC, 0x11); /* HDLC mode, 1 fill, connect ST */
- Write_hfc8(hw, A_SUBCH_CFG, 2); /* only 2 bits */
- Write_hfc8(hw, A_IRQ_MSK, 1); /* enable interrupt */
- Write_hfc8(hw, A_INC_RES_FIFO, 2); /* reset fifo */
- wait_busy(hw);
-
- /* setup D RX-fifo */
- Write_hfc8(hw, R_FIFO, i * 8 + 5); /* RX fifo */
- wait_busy(hw);
- Write_hfc8(hw, A_CON_HDLC, 0x11); /* HDLC mode, 1 fill, connect ST */
- Write_hfc8(hw, A_SUBCH_CFG, 2); /* only 2 bits */
- Write_hfc8(hw, A_IRQ_MSK, 1); /* enable interrupt */
- Write_hfc8(hw, A_INC_RES_FIFO, 2); /* reset fifo */
- wait_busy(hw);
-
- /* setup D TX-fifo */
- Write_hfc8(hw, R_FIFO, i * 8 + 4); /* TX fifo */
- wait_busy(hw);
- Write_hfc8(hw, A_CON_HDLC, 0x11); /* HDLC mode, 1 fill, connect ST */
- Write_hfc8(hw, A_SUBCH_CFG, 2); /* only 2 bits */
- Write_hfc8(hw, A_IRQ_MSK, 1); /* enable interrupt */
- Write_hfc8(hw, A_INC_RES_FIFO, 2); /* reset fifo */
- wait_busy(hw);
- }
-
- sprintf(if_name, "hfc4s8s_%d%d_", hw->cardnum, i);
-
- if (hisax_register
- (&hw->l1[i].d_if, hw->l1[i].b_table, if_name,
- ((nt_mode) ? 3 : 2))) {
-
- hw->l1[i].enabled = 0;
- hw->mr.r_irqmsk_statchg &= ~(1 << i);
- Write_hfc8(hw, R_SCI_MSK,
- hw->mr.r_irqmsk_statchg);
- printk(KERN_INFO
- "HFC-4S/8S: Unable to register S/T device %s, break\n",
- if_name);
- break;
- }
- }
- spin_lock_irqsave(&hw->lock, flags);
- hw->mr.r_irq_ctrl |= M_GLOB_IRQ_EN;
- Write_hfc8(hw, R_IRQ_CTRL, hw->mr.r_irq_ctrl);
- spin_unlock_irqrestore(&hw->lock, flags);
- } else {
- /* disable hardware */
- spin_lock_irqsave(&hw->lock, flags);
- hw->mr.r_irq_ctrl &= ~M_GLOB_IRQ_EN;
- Write_hfc8(hw, R_IRQ_CTRL, hw->mr.r_irq_ctrl);
- spin_unlock_irqrestore(&hw->lock, flags);
-
- for (i = hw->driver_data.max_st_ports - 1; i >= 0; i--) {
- hw->l1[i].enabled = 0;
- hisax_unregister(&hw->l1[i].d_if);
- del_timer(&hw->l1[i].l1_timer);
- skb_queue_purge(&hw->l1[i].d_tx_queue);
- skb_queue_purge(&hw->l1[i].b_ch[0].tx_queue);
- skb_queue_purge(&hw->l1[i].b_ch[1].tx_queue);
- }
- chipreset(hw);
- }
-} /* hfc_hardware_enable */
-
-/******************************************/
-/* disable memory mapped ports / io ports */
-/******************************************/
-static void
-release_pci_ports(hfc4s8s_hw *hw)
-{
- pci_write_config_word(hw->pdev, PCI_COMMAND, 0);
- if (hw->iobase)
- release_region(hw->iobase, 8);
-}
-
-/*****************************************/
-/* enable memory mapped ports / io ports */
-/*****************************************/
-static void
-enable_pci_ports(hfc4s8s_hw *hw)
-{
- pci_write_config_word(hw->pdev, PCI_COMMAND, PCI_ENA_REGIO);
-}
-
-/*************************************/
-/* initialise the HFC-4s/8s hardware */
-/* return 0 on success. */
-/*************************************/
-static int
-setup_instance(hfc4s8s_hw *hw)
-{
- int err = -EIO;
- int i;
-
- for (i = 0; i < HFC_MAX_ST; i++) {
- struct hfc4s8s_l1 *l1p;
-
- l1p = hw->l1 + i;
- spin_lock_init(&l1p->lock);
- l1p->hw = hw;
- timer_setup(&l1p->l1_timer, hfc_l1_timer, 0);
- l1p->st_num = i;
- skb_queue_head_init(&l1p->d_tx_queue);
- l1p->d_if.ifc.priv = hw->l1 + i;
- l1p->d_if.ifc.l2l1 = (void *) dch_l2l1;
-
- spin_lock_init(&l1p->b_ch[0].lock);
- l1p->b_ch[0].b_if.ifc.l2l1 = (void *) bch_l2l1;
- l1p->b_ch[0].b_if.ifc.priv = (void *) &l1p->b_ch[0];
- l1p->b_ch[0].l1p = hw->l1 + i;
- l1p->b_ch[0].bchan = 1;
- l1p->b_table[0] = &l1p->b_ch[0].b_if;
- skb_queue_head_init(&l1p->b_ch[0].tx_queue);
-
- spin_lock_init(&l1p->b_ch[1].lock);
- l1p->b_ch[1].b_if.ifc.l2l1 = (void *) bch_l2l1;
- l1p->b_ch[1].b_if.ifc.priv = (void *) &l1p->b_ch[1];
- l1p->b_ch[1].l1p = hw->l1 + i;
- l1p->b_ch[1].bchan = 2;
- l1p->b_table[1] = &l1p->b_ch[1].b_if;
- skb_queue_head_init(&l1p->b_ch[1].tx_queue);
- }
-
- enable_pci_ports(hw);
- chipreset(hw);
-
- i = Read_hfc8(hw, R_CHIP_ID) >> CHIP_ID_SHIFT;
- if (i != hw->driver_data.chip_id) {
- printk(KERN_INFO
- "HFC-4S/8S: invalid chip id 0x%x instead of 0x%x, card ignored\n",
- i, hw->driver_data.chip_id);
- goto out;
- }
-
- i = Read_hfc8(hw, R_CHIP_RV) & 0xf;
- if (!i) {
- printk(KERN_INFO
- "HFC-4S/8S: chip revision 0 not supported, card ignored\n");
- goto out;
- }
-
- INIT_WORK(&hw->tqueue, hfc4s8s_bh);
-
- if (request_irq
- (hw->irq, hfc4s8s_interrupt, IRQF_SHARED, hw->card_name, hw)) {
- printk(KERN_INFO
- "HFC-4S/8S: unable to alloc irq %d, card ignored\n",
- hw->irq);
- goto out;
- }
- printk(KERN_INFO
- "HFC-4S/8S: found PCI card at iobase 0x%x, irq %d\n",
- hw->iobase, hw->irq);
-
- hfc_hardware_enable(hw, 1, 0);
-
- return (0);
-
-out:
- hw->irq = 0;
- release_pci_ports(hw);
- kfree(hw);
- return (err);
-}
-
-/*****************************************/
-/* PCI hotplug interface: probe new card */
-/*****************************************/
-static int
-hfc4s8s_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
-{
- int err = -ENOMEM;
- hfc4s8s_param *driver_data = (hfc4s8s_param *) ent->driver_data;
- hfc4s8s_hw *hw;
-
- if (!(hw = kzalloc(sizeof(hfc4s8s_hw), GFP_ATOMIC))) {
- printk(KERN_ERR "No kmem for HFC-4S/8S card\n");
- return (err);
- }
-
- hw->pdev = pdev;
- err = pci_enable_device(pdev);
-
- if (err)
- goto out;
-
- hw->cardnum = card_cnt;
- sprintf(hw->card_name, "hfc4s8s_%d", hw->cardnum);
- printk(KERN_INFO "HFC-4S/8S: found adapter %s (%s) at %s\n",
- driver_data->device_name, hw->card_name, pci_name(pdev));
-
- spin_lock_init(&hw->lock);
-
- hw->driver_data = *driver_data;
- hw->irq = pdev->irq;
- hw->iobase = pci_resource_start(pdev, 0);
-
- if (!request_region(hw->iobase, 8, hw->card_name)) {
- printk(KERN_INFO
- "HFC-4S/8S: failed to request address space at 0x%04x\n",
- hw->iobase);
- err = -EBUSY;
- goto out;
- }
-
- pci_set_drvdata(pdev, hw);
- err = setup_instance(hw);
- if (!err)
- card_cnt++;
- return (err);
-
-out:
- kfree(hw);
- return (err);
-}
-
-/**************************************/
-/* PCI hotplug interface: remove card */
-/**************************************/
-static void
-hfc4s8s_remove(struct pci_dev *pdev)
-{
- hfc4s8s_hw *hw = pci_get_drvdata(pdev);
-
- printk(KERN_INFO "HFC-4S/8S: removing card %d\n", hw->cardnum);
- hfc_hardware_enable(hw, 0, 0);
-
- if (hw->irq)
- free_irq(hw->irq, hw);
- hw->irq = 0;
- release_pci_ports(hw);
-
- card_cnt--;
- pci_disable_device(pdev);
- kfree(hw);
- return;
-}
-
-static struct pci_driver hfc4s8s_driver = {
- .name = "hfc4s8s_l1",
- .probe = hfc4s8s_probe,
- .remove = hfc4s8s_remove,
- .id_table = hfc4s8s_ids,
-};
-
-/**********************/
-/* driver Module init */
-/**********************/
-static int __init
-hfc4s8s_module_init(void)
-{
- int err;
-
- printk(KERN_INFO
- "HFC-4S/8S: Layer 1 driver module for HFC-4S/8S isdn chips, %s\n",
- hfc4s8s_rev);
- printk(KERN_INFO
- "HFC-4S/8S: (C) 2003 Cornelius Consult, www.cornelius-consult.de\n");
-
- card_cnt = 0;
-
- err = pci_register_driver(&hfc4s8s_driver);
- if (err < 0) {
- goto out;
- }
- printk(KERN_INFO "HFC-4S/8S: found %d cards\n", card_cnt);
-
- return 0;
-out:
- return (err);
-} /* hfc4s8s_module_init */
-
-/*************************************/
-/* driver module exit : */
-/* release the HFC-4s/8s hardware */
-/*************************************/
-static void __exit
-hfc4s8s_module_exit(void)
-{
- pci_unregister_driver(&hfc4s8s_driver);
- printk(KERN_INFO "HFC-4S/8S: module removed\n");
-} /* hfc4s8s_module_exit */
-
-module_init(hfc4s8s_module_init);
-module_exit(hfc4s8s_module_exit);
diff --git a/drivers/isdn/hisax/hfc4s8s_l1.h b/drivers/isdn/hisax/hfc4s8s_l1.h
deleted file mode 100644
index 4665b9d5df16..000000000000
--- a/drivers/isdn/hisax/hfc4s8s_l1.h
+++ /dev/null
@@ -1,89 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/***************************************************************/
-/* $Id: hfc4s8s_l1.h,v 1.1 2005/02/02 17:28:55 martinb1 Exp $ */
-/* */
-/* This file is a minimal required extraction of hfc48scu.h */
-/* (Genero 3.2, HFC XML 1.7a for HFC-E1, HFC-4S and HFC-8S) */
-/* */
-/* To get this complete register description contact */
-/* Cologne Chip AG : */
-/* Internet: http://www.colognechip.com/ */
-/* E-Mail: info@colognechip.com */
-/***************************************************************/
-
-#ifndef _HFC4S8S_L1_H_
-#define _HFC4S8S_L1_H_
-
-
-/*
- * include Genero generated HFC-4S/8S header file hfc48scu.h
- * for complete register description. This will define _HFC48SCU_H_
- * to prevent redefinitions
- */
-
-// #include "hfc48scu.h"
-
-#ifndef _HFC48SCU_H_
-#define _HFC48SCU_H_
-
-#ifndef PCI_VENDOR_ID_CCD
-#define PCI_VENDOR_ID_CCD 0x1397
-#endif
-
-#define CHIP_ID_4S 0x0C
-#define CHIP_ID_8S 0x08
-#define PCI_DEVICE_ID_4S 0x08B4
-#define PCI_DEVICE_ID_8S 0x16B8
-
-#define R_IRQ_MISC 0x11
-#define M_TI_IRQ 0x02
-#define A_ST_RD_STA 0x30
-#define A_ST_WR_STA 0x30
-#define M_SET_G2_G3 0x80
-#define A_ST_CTRL0 0x31
-#define A_ST_CTRL2 0x33
-#define A_ST_CLK_DLY 0x37
-#define A_Z1 0x04
-#define A_Z2 0x06
-#define R_CIRM 0x00
-#define M_SRES 0x08
-#define R_CTRL 0x01
-#define R_BRG_PCM_CFG 0x02
-#define M_PCM_CLK 0x20
-#define R_RAM_MISC 0x0C
-#define M_FZ_MD 0x80
-#define R_FIFO_MD 0x0D
-#define A_INC_RES_FIFO 0x0E
-#define R_FIFO 0x0F
-#define A_F1 0x0C
-#define A_F2 0x0D
-#define R_IRQ_OVIEW 0x10
-#define R_CHIP_ID 0x16
-#define R_STATUS 0x1C
-#define M_BUSY 0x01
-#define M_MISC_IRQSTA 0x40
-#define M_FR_IRQSTA 0x80
-#define R_CHIP_RV 0x1F
-#define R_IRQ_CTRL 0x13
-#define M_FIFO_IRQ 0x01
-#define M_GLOB_IRQ_EN 0x08
-#define R_PCM_MD0 0x14
-#define M_PCM_MD 0x01
-#define A_FIFO_DATA0 0x80
-#define R_TI_WD 0x1A
-#define R_PWM1 0x39
-#define R_PWM_MD 0x46
-#define R_IRQ_FIFO_BL0 0xC8
-#define A_CON_HDLC 0xFA
-#define A_SUBCH_CFG 0xFB
-#define A_IRQ_MSK 0xFF
-#define R_SCI_MSK 0x12
-#define R_ST_SEL 0x16
-#define R_ST_SYNC 0x17
-#define M_AUTO_SYNC 0x08
-#define R_SCI 0x12
-#define R_IRQMSK_MISC 0x11
-#define M_TI_IRQMSK 0x02
-
-#endif /* _HFC4S8S_L1_H_ */
-#endif /* _HFC48SCU_H_ */
diff --git a/drivers/isdn/hisax/hfc_2bds0.c b/drivers/isdn/hisax/hfc_2bds0.c
deleted file mode 100644
index 3715fa0343db..000000000000
--- a/drivers/isdn/hisax/hfc_2bds0.c
+++ /dev/null
@@ -1,1078 +0,0 @@
-/* $Id: hfc_2bds0.c,v 1.18.2.6 2004/02/11 13:21:33 keil Exp $
- *
- * specific routines for CCD's HFC 2BDS0
- *
- * Author Karsten Keil
- * Copyright by Karsten Keil <keil@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#include <linux/init.h>
-#include <linux/sched.h>
-#include <linux/slab.h>
-#include "hisax.h"
-#include "hfc_2bds0.h"
-#include "isdnl1.h"
-#include <linux/interrupt.h>
-/*
- #define KDEBUG_DEF
- #include "kdebug.h"
-*/
-
-#define byteout(addr, val) outb(val, addr)
-#define bytein(addr) inb(addr)
-
-static void
-dummyf(struct IsdnCardState *cs, u_char *data, int size)
-{
- printk(KERN_WARNING "HiSax: hfcd dummy fifo called\n");
-}
-
-static inline u_char
-ReadReg(struct IsdnCardState *cs, int data, u_char reg)
-{
- register u_char ret;
-
- if (data) {
- if (cs->hw.hfcD.cip != reg) {
- cs->hw.hfcD.cip = reg;
- byteout(cs->hw.hfcD.addr | 1, reg);
- }
- ret = bytein(cs->hw.hfcD.addr);
-#ifdef HFC_REG_DEBUG
- if (cs->debug & L1_DEB_HSCX_FIFO && (data != 2))
- debugl1(cs, "t3c RD %02x %02x", reg, ret);
-#endif
- } else
- ret = bytein(cs->hw.hfcD.addr | 1);
- return (ret);
-}
-
-static inline void
-WriteReg(struct IsdnCardState *cs, int data, u_char reg, u_char value)
-{
- if (cs->hw.hfcD.cip != reg) {
- cs->hw.hfcD.cip = reg;
- byteout(cs->hw.hfcD.addr | 1, reg);
- }
- if (data)
- byteout(cs->hw.hfcD.addr, value);
-#ifdef HFC_REG_DEBUG
- if (cs->debug & L1_DEB_HSCX_FIFO && (data != HFCD_DATA_NODEB))
- debugl1(cs, "t3c W%c %02x %02x", data ? 'D' : 'C', reg, value);
-#endif
-}
-
-/* Interface functions */
-
-static u_char
-readreghfcd(struct IsdnCardState *cs, u_char offset)
-{
- return (ReadReg(cs, HFCD_DATA, offset));
-}
-
-static void
-writereghfcd(struct IsdnCardState *cs, u_char offset, u_char value)
-{
- WriteReg(cs, HFCD_DATA, offset, value);
-}
-
-static inline int
-WaitForBusy(struct IsdnCardState *cs)
-{
- int to = 130;
-
- while (!(ReadReg(cs, HFCD_DATA, HFCD_STAT) & HFCD_BUSY) && to) {
- udelay(1);
- to--;
- }
- if (!to)
- printk(KERN_WARNING "HiSax: WaitForBusy timeout\n");
- return (to);
-}
-
-static inline int
-WaitNoBusy(struct IsdnCardState *cs)
-{
- int to = 130;
-
- while ((ReadReg(cs, HFCD_STATUS, HFCD_STATUS) & HFCD_BUSY) && to) {
- udelay(1);
- to--;
- }
- if (!to)
- printk(KERN_WARNING "HiSax: WaitNoBusy timeout\n");
- return (to);
-}
-
-static int
-SelFiFo(struct IsdnCardState *cs, u_char FiFo)
-{
- u_char cip;
-
- if (cs->hw.hfcD.fifo == FiFo)
- return (1);
- switch (FiFo) {
- case 0: cip = HFCB_FIFO | HFCB_Z1 | HFCB_SEND | HFCB_B1;
- break;
- case 1: cip = HFCB_FIFO | HFCB_Z1 | HFCB_REC | HFCB_B1;
- break;
- case 2: cip = HFCB_FIFO | HFCB_Z1 | HFCB_SEND | HFCB_B2;
- break;
- case 3: cip = HFCB_FIFO | HFCB_Z1 | HFCB_REC | HFCB_B2;
- break;
- case 4: cip = HFCD_FIFO | HFCD_Z1 | HFCD_SEND;
- break;
- case 5: cip = HFCD_FIFO | HFCD_Z1 | HFCD_REC;
- break;
- default:
- debugl1(cs, "SelFiFo Error");
- return (0);
- }
- cs->hw.hfcD.fifo = FiFo;
- WaitNoBusy(cs);
- cs->BC_Write_Reg(cs, HFCD_DATA, cip, 0);
- WaitForBusy(cs);
- return (2);
-}
-
-static int
-GetFreeFifoBytes_B(struct BCState *bcs)
-{
- int s;
-
- if (bcs->hw.hfc.f1 == bcs->hw.hfc.f2)
- return (bcs->cs->hw.hfcD.bfifosize);
- s = bcs->hw.hfc.send[bcs->hw.hfc.f1] - bcs->hw.hfc.send[bcs->hw.hfc.f2];
- if (s <= 0)
- s += bcs->cs->hw.hfcD.bfifosize;
- s = bcs->cs->hw.hfcD.bfifosize - s;
- return (s);
-}
-
-static int
-GetFreeFifoBytes_D(struct IsdnCardState *cs)
-{
- int s;
-
- if (cs->hw.hfcD.f1 == cs->hw.hfcD.f2)
- return (cs->hw.hfcD.dfifosize);
- s = cs->hw.hfcD.send[cs->hw.hfcD.f1] - cs->hw.hfcD.send[cs->hw.hfcD.f2];
- if (s <= 0)
- s += cs->hw.hfcD.dfifosize;
- s = cs->hw.hfcD.dfifosize - s;
- return (s);
-}
-
-static int
-ReadZReg(struct IsdnCardState *cs, u_char reg)
-{
- int val;
-
- WaitNoBusy(cs);
- val = 256 * ReadReg(cs, HFCD_DATA, reg | HFCB_Z_HIGH);
- WaitNoBusy(cs);
- val += ReadReg(cs, HFCD_DATA, reg | HFCB_Z_LOW);
- return (val);
-}
-
-static struct sk_buff
-*hfc_empty_fifo(struct BCState *bcs, int count)
-{
- u_char *ptr;
- struct sk_buff *skb;
- struct IsdnCardState *cs = bcs->cs;
- int idx;
- int chksum;
- u_char stat, cip;
-
- if ((cs->debug & L1_DEB_HSCX) && !(cs->debug & L1_DEB_HSCX_FIFO))
- debugl1(cs, "hfc_empty_fifo");
- idx = 0;
- if (count > HSCX_BUFMAX + 3) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "hfc_empty_fifo: incoming packet too large");
- cip = HFCB_FIFO | HFCB_FIFO_OUT | HFCB_REC | HFCB_CHANNEL(bcs->channel);
- while (idx++ < count) {
- WaitNoBusy(cs);
- ReadReg(cs, HFCD_DATA_NODEB, cip);
- }
- skb = NULL;
- } else if (count < 4) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "hfc_empty_fifo: incoming packet too small");
- cip = HFCB_FIFO | HFCB_FIFO_OUT | HFCB_REC | HFCB_CHANNEL(bcs->channel);
-#ifdef ERROR_STATISTIC
- bcs->err_inv++;
-#endif
- while ((idx++ < count) && WaitNoBusy(cs))
- ReadReg(cs, HFCD_DATA_NODEB, cip);
- skb = NULL;
- } else if (!(skb = dev_alloc_skb(count - 3)))
- printk(KERN_WARNING "HFC: receive out of memory\n");
- else {
- ptr = skb_put(skb, count - 3);
- idx = 0;
- cip = HFCB_FIFO | HFCB_FIFO_OUT | HFCB_REC | HFCB_CHANNEL(bcs->channel);
- while (idx < (count - 3)) {
- if (!WaitNoBusy(cs))
- break;
- *ptr = ReadReg(cs, HFCD_DATA_NODEB, cip);
- ptr++;
- idx++;
- }
- if (idx != count - 3) {
- debugl1(cs, "RFIFO BUSY error");
- printk(KERN_WARNING "HFC FIFO channel %d BUSY Error\n", bcs->channel);
- dev_kfree_skb_irq(skb);
- skb = NULL;
- } else {
- WaitNoBusy(cs);
- chksum = (ReadReg(cs, HFCD_DATA, cip) << 8);
- WaitNoBusy(cs);
- chksum += ReadReg(cs, HFCD_DATA, cip);
- WaitNoBusy(cs);
- stat = ReadReg(cs, HFCD_DATA, cip);
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "hfc_empty_fifo %d chksum %x stat %x",
- bcs->channel, chksum, stat);
- if (stat) {
- debugl1(cs, "FIFO CRC error");
- dev_kfree_skb_irq(skb);
- skb = NULL;
-#ifdef ERROR_STATISTIC
- bcs->err_crc++;
-#endif
- }
- }
- }
- WaitForBusy(cs);
- WaitNoBusy(cs);
- stat = ReadReg(cs, HFCD_DATA, HFCB_FIFO | HFCB_F2_INC |
- HFCB_REC | HFCB_CHANNEL(bcs->channel));
- WaitForBusy(cs);
- return (skb);
-}
-
-static void
-hfc_fill_fifo(struct BCState *bcs)
-{
- struct IsdnCardState *cs = bcs->cs;
- int idx, fcnt;
- int count;
- u_char cip;
-
- if (!bcs->tx_skb)
- return;
- if (bcs->tx_skb->len <= 0)
- return;
- SelFiFo(cs, HFCB_SEND | HFCB_CHANNEL(bcs->channel));
- cip = HFCB_FIFO | HFCB_F1 | HFCB_SEND | HFCB_CHANNEL(bcs->channel);
- WaitNoBusy(cs);
- bcs->hw.hfc.f1 = ReadReg(cs, HFCD_DATA, cip);
- WaitNoBusy(cs);
- cip = HFCB_FIFO | HFCB_F2 | HFCB_SEND | HFCB_CHANNEL(bcs->channel);
- WaitNoBusy(cs);
- bcs->hw.hfc.f2 = ReadReg(cs, HFCD_DATA, cip);
- bcs->hw.hfc.send[bcs->hw.hfc.f1] = ReadZReg(cs, HFCB_FIFO | HFCB_Z1 | HFCB_SEND | HFCB_CHANNEL(bcs->channel));
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "hfc_fill_fifo %d f1(%d) f2(%d) z1(%x)",
- bcs->channel, bcs->hw.hfc.f1, bcs->hw.hfc.f2,
- bcs->hw.hfc.send[bcs->hw.hfc.f1]);
- fcnt = bcs->hw.hfc.f1 - bcs->hw.hfc.f2;
- if (fcnt < 0)
- fcnt += 32;
- if (fcnt > 30) {
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "hfc_fill_fifo more as 30 frames");
- return;
- }
- count = GetFreeFifoBytes_B(bcs);
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "hfc_fill_fifo %d count(%u/%d),%lx",
- bcs->channel, bcs->tx_skb->len,
- count, current->state);
- if (count < bcs->tx_skb->len) {
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "hfc_fill_fifo no fifo mem");
- return;
- }
- cip = HFCB_FIFO | HFCB_FIFO_IN | HFCB_SEND | HFCB_CHANNEL(bcs->channel);
- idx = 0;
- WaitForBusy(cs);
- WaitNoBusy(cs);
- WriteReg(cs, HFCD_DATA_NODEB, cip, bcs->tx_skb->data[idx++]);
- while (idx < bcs->tx_skb->len) {
- if (!WaitNoBusy(cs))
- break;
- WriteReg(cs, HFCD_DATA_NODEB, cip, bcs->tx_skb->data[idx]);
- idx++;
- }
- if (idx != bcs->tx_skb->len) {
- debugl1(cs, "FIFO Send BUSY error");
- printk(KERN_WARNING "HFC S FIFO channel %d BUSY Error\n", bcs->channel);
- } else {
- bcs->tx_cnt -= bcs->tx_skb->len;
- if (test_bit(FLG_LLI_L1WAKEUP, &bcs->st->lli.flag) &&
- (PACKET_NOACK != bcs->tx_skb->pkt_type)) {
- u_long flags;
- spin_lock_irqsave(&bcs->aclock, flags);
- bcs->ackcnt += bcs->tx_skb->len;
- spin_unlock_irqrestore(&bcs->aclock, flags);
- schedule_event(bcs, B_ACKPENDING);
- }
- dev_kfree_skb_any(bcs->tx_skb);
- bcs->tx_skb = NULL;
- }
- WaitForBusy(cs);
- WaitNoBusy(cs);
- ReadReg(cs, HFCD_DATA, HFCB_FIFO | HFCB_F1_INC | HFCB_SEND | HFCB_CHANNEL(bcs->channel));
- WaitForBusy(cs);
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- return;
-}
-
-static void
-hfc_send_data(struct BCState *bcs)
-{
- struct IsdnCardState *cs = bcs->cs;
-
- if (!test_and_set_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- hfc_fill_fifo(bcs);
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
- } else
- debugl1(cs, "send_data %d blocked", bcs->channel);
-}
-
-static void
-main_rec_2bds0(struct BCState *bcs)
-{
- struct IsdnCardState *cs = bcs->cs;
- int z1, z2, rcnt;
- u_char f1, f2, cip;
- int receive, count = 5;
- struct sk_buff *skb;
-
-Begin:
- count--;
- if (test_and_set_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- debugl1(cs, "rec_data %d blocked", bcs->channel);
- return;
- }
- SelFiFo(cs, HFCB_REC | HFCB_CHANNEL(bcs->channel));
- cip = HFCB_FIFO | HFCB_F1 | HFCB_REC | HFCB_CHANNEL(bcs->channel);
- WaitNoBusy(cs);
- f1 = ReadReg(cs, HFCD_DATA, cip);
- cip = HFCB_FIFO | HFCB_F2 | HFCB_REC | HFCB_CHANNEL(bcs->channel);
- WaitNoBusy(cs);
- f2 = ReadReg(cs, HFCD_DATA, cip);
- if (f1 != f2) {
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "hfc rec %d f1(%d) f2(%d)",
- bcs->channel, f1, f2);
- z1 = ReadZReg(cs, HFCB_FIFO | HFCB_Z1 | HFCB_REC | HFCB_CHANNEL(bcs->channel));
- z2 = ReadZReg(cs, HFCB_FIFO | HFCB_Z2 | HFCB_REC | HFCB_CHANNEL(bcs->channel));
- rcnt = z1 - z2;
- if (rcnt < 0)
- rcnt += cs->hw.hfcD.bfifosize;
- rcnt++;
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "hfc rec %d z1(%x) z2(%x) cnt(%d)",
- bcs->channel, z1, z2, rcnt);
- if ((skb = hfc_empty_fifo(bcs, rcnt))) {
- skb_queue_tail(&bcs->rqueue, skb);
- schedule_event(bcs, B_RCVBUFREADY);
- }
- rcnt = f1 - f2;
- if (rcnt < 0)
- rcnt += 32;
- if (rcnt > 1)
- receive = 1;
- else
- receive = 0;
- } else
- receive = 0;
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
- if (count && receive)
- goto Begin;
- return;
-}
-
-static void
-mode_2bs0(struct BCState *bcs, int mode, int bc)
-{
- struct IsdnCardState *cs = bcs->cs;
-
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "HFCD bchannel mode %d bchan %d/%d",
- mode, bc, bcs->channel);
- bcs->mode = mode;
- bcs->channel = bc;
- switch (mode) {
- case (L1_MODE_NULL):
- if (bc) {
- cs->hw.hfcD.conn |= 0x18;
- cs->hw.hfcD.sctrl &= ~SCTRL_B2_ENA;
- } else {
- cs->hw.hfcD.conn |= 0x3;
- cs->hw.hfcD.sctrl &= ~SCTRL_B1_ENA;
- }
- break;
- case (L1_MODE_TRANS):
- if (bc) {
- cs->hw.hfcD.ctmt |= 2;
- cs->hw.hfcD.conn &= ~0x18;
- cs->hw.hfcD.sctrl |= SCTRL_B2_ENA;
- } else {
- cs->hw.hfcD.ctmt |= 1;
- cs->hw.hfcD.conn &= ~0x3;
- cs->hw.hfcD.sctrl |= SCTRL_B1_ENA;
- }
- break;
- case (L1_MODE_HDLC):
- if (bc) {
- cs->hw.hfcD.ctmt &= ~2;
- cs->hw.hfcD.conn &= ~0x18;
- cs->hw.hfcD.sctrl |= SCTRL_B2_ENA;
- } else {
- cs->hw.hfcD.ctmt &= ~1;
- cs->hw.hfcD.conn &= ~0x3;
- cs->hw.hfcD.sctrl |= SCTRL_B1_ENA;
- }
- break;
- }
- WriteReg(cs, HFCD_DATA, HFCD_SCTRL, cs->hw.hfcD.sctrl);
- WriteReg(cs, HFCD_DATA, HFCD_CTMT, cs->hw.hfcD.ctmt);
- WriteReg(cs, HFCD_DATA, HFCD_CONN, cs->hw.hfcD.conn);
-}
-
-static void
-hfc_l2l1(struct PStack *st, int pr, void *arg)
-{
- struct BCState *bcs = st->l1.bcs;
- struct sk_buff *skb = arg;
- u_long flags;
-
- switch (pr) {
- case (PH_DATA | REQUEST):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- if (bcs->tx_skb) {
- skb_queue_tail(&bcs->squeue, skb);
- } else {
- bcs->tx_skb = skb;
-// test_and_set_bit(BC_FLG_BUSY, &bcs->Flag);
- bcs->cs->BC_Send_Data(bcs);
- }
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- break;
- case (PH_PULL | INDICATION):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- if (bcs->tx_skb) {
- printk(KERN_WARNING "hfc_l2l1: this shouldn't happen\n");
- } else {
-// test_and_set_bit(BC_FLG_BUSY, &bcs->Flag);
- bcs->tx_skb = skb;
- bcs->cs->BC_Send_Data(bcs);
- }
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- break;
- case (PH_PULL | REQUEST):
- if (!bcs->tx_skb) {
- test_and_clear_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- st->l1.l1l2(st, PH_PULL | CONFIRM, NULL);
- } else
- test_and_set_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- break;
- case (PH_ACTIVATE | REQUEST):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- test_and_set_bit(BC_FLG_ACTIV, &bcs->Flag);
- mode_2bs0(bcs, st->l1.mode, st->l1.bc);
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- l1_msg_b(st, pr, arg);
- break;
- case (PH_DEACTIVATE | REQUEST):
- l1_msg_b(st, pr, arg);
- break;
- case (PH_DEACTIVATE | CONFIRM):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- test_and_clear_bit(BC_FLG_ACTIV, &bcs->Flag);
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- mode_2bs0(bcs, 0, st->l1.bc);
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- st->l1.l1l2(st, PH_DEACTIVATE | CONFIRM, NULL);
- break;
- }
-}
-
-static void
-close_2bs0(struct BCState *bcs)
-{
- mode_2bs0(bcs, 0, bcs->channel);
- if (test_and_clear_bit(BC_FLG_INIT, &bcs->Flag)) {
- skb_queue_purge(&bcs->rqueue);
- skb_queue_purge(&bcs->squeue);
- if (bcs->tx_skb) {
- dev_kfree_skb_any(bcs->tx_skb);
- bcs->tx_skb = NULL;
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- }
- }
-}
-
-static int
-open_hfcstate(struct IsdnCardState *cs, struct BCState *bcs)
-{
- if (!test_and_set_bit(BC_FLG_INIT, &bcs->Flag)) {
- skb_queue_head_init(&bcs->rqueue);
- skb_queue_head_init(&bcs->squeue);
- }
- bcs->tx_skb = NULL;
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- bcs->event = 0;
- bcs->tx_cnt = 0;
- return (0);
-}
-
-static int
-setstack_2b(struct PStack *st, struct BCState *bcs)
-{
- bcs->channel = st->l1.bc;
- if (open_hfcstate(st->l1.hardware, bcs))
- return (-1);
- st->l1.bcs = bcs;
- st->l2.l2l1 = hfc_l2l1;
- setstack_manager(st);
- bcs->st = st;
- setstack_l1_B(st);
- return (0);
-}
-
-static void
-hfcd_bh(struct work_struct *work)
-{
- struct IsdnCardState *cs =
- container_of(work, struct IsdnCardState, tqueue);
-
- if (test_and_clear_bit(D_L1STATECHANGE, &cs->event)) {
- switch (cs->dc.hfcd.ph_state) {
- case (0):
- l1_msg(cs, HW_RESET | INDICATION, NULL);
- break;
- case (3):
- l1_msg(cs, HW_DEACTIVATE | INDICATION, NULL);
- break;
- case (8):
- l1_msg(cs, HW_RSYNC | INDICATION, NULL);
- break;
- case (6):
- l1_msg(cs, HW_INFO2 | INDICATION, NULL);
- break;
- case (7):
- l1_msg(cs, HW_INFO4_P8 | INDICATION, NULL);
- break;
- default:
- break;
- }
- }
- if (test_and_clear_bit(D_RCVBUFREADY, &cs->event))
- DChannel_proc_rcv(cs);
- if (test_and_clear_bit(D_XMTBUFREADY, &cs->event))
- DChannel_proc_xmt(cs);
-}
-
-static
-int receive_dmsg(struct IsdnCardState *cs)
-{
- struct sk_buff *skb;
- int idx;
- int rcnt, z1, z2;
- u_char stat, cip, f1, f2;
- int chksum;
- int count = 5;
- u_char *ptr;
-
- if (test_and_set_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- debugl1(cs, "rec_dmsg blocked");
- return (1);
- }
- SelFiFo(cs, 4 | HFCD_REC);
- cip = HFCD_FIFO | HFCD_F1 | HFCD_REC;
- WaitNoBusy(cs);
- f1 = cs->readisac(cs, cip) & 0xf;
- cip = HFCD_FIFO | HFCD_F2 | HFCD_REC;
- WaitNoBusy(cs);
- f2 = cs->readisac(cs, cip) & 0xf;
- while ((f1 != f2) && count--) {
- z1 = ReadZReg(cs, HFCD_FIFO | HFCD_Z1 | HFCD_REC);
- z2 = ReadZReg(cs, HFCD_FIFO | HFCD_Z2 | HFCD_REC);
- rcnt = z1 - z2;
- if (rcnt < 0)
- rcnt += cs->hw.hfcD.dfifosize;
- rcnt++;
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "hfcd recd f1(%d) f2(%d) z1(%x) z2(%x) cnt(%d)",
- f1, f2, z1, z2, rcnt);
- idx = 0;
- cip = HFCD_FIFO | HFCD_FIFO_OUT | HFCD_REC;
- if (rcnt > MAX_DFRAME_LEN + 3) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "empty_fifo d: incoming packet too large");
- while (idx < rcnt) {
- if (!(WaitNoBusy(cs)))
- break;
- ReadReg(cs, HFCD_DATA_NODEB, cip);
- idx++;
- }
- } else if (rcnt < 4) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "empty_fifo d: incoming packet too small");
- while ((idx++ < rcnt) && WaitNoBusy(cs))
- ReadReg(cs, HFCD_DATA_NODEB, cip);
- } else if ((skb = dev_alloc_skb(rcnt - 3))) {
- ptr = skb_put(skb, rcnt - 3);
- while (idx < (rcnt - 3)) {
- if (!(WaitNoBusy(cs)))
- break;
- *ptr = ReadReg(cs, HFCD_DATA_NODEB, cip);
- idx++;
- ptr++;
- }
- if (idx != (rcnt - 3)) {
- debugl1(cs, "RFIFO D BUSY error");
- printk(KERN_WARNING "HFC DFIFO channel BUSY Error\n");
- dev_kfree_skb_irq(skb);
- skb = NULL;
-#ifdef ERROR_STATISTIC
- cs->err_rx++;
-#endif
- } else {
- WaitNoBusy(cs);
- chksum = (ReadReg(cs, HFCD_DATA, cip) << 8);
- WaitNoBusy(cs);
- chksum += ReadReg(cs, HFCD_DATA, cip);
- WaitNoBusy(cs);
- stat = ReadReg(cs, HFCD_DATA, cip);
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "empty_dfifo chksum %x stat %x",
- chksum, stat);
- if (stat) {
- debugl1(cs, "FIFO CRC error");
- dev_kfree_skb_irq(skb);
- skb = NULL;
-#ifdef ERROR_STATISTIC
- cs->err_crc++;
-#endif
- } else {
- skb_queue_tail(&cs->rq, skb);
- schedule_event(cs, D_RCVBUFREADY);
- }
- }
- } else
- printk(KERN_WARNING "HFC: D receive out of memory\n");
- WaitForBusy(cs);
- cip = HFCD_FIFO | HFCD_F2_INC | HFCD_REC;
- WaitNoBusy(cs);
- stat = ReadReg(cs, HFCD_DATA, cip);
- WaitForBusy(cs);
- cip = HFCD_FIFO | HFCD_F2 | HFCD_REC;
- WaitNoBusy(cs);
- f2 = cs->readisac(cs, cip) & 0xf;
- }
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
- return (1);
-}
-
-static void
-hfc_fill_dfifo(struct IsdnCardState *cs)
-{
- int idx, fcnt;
- int count;
- u_char cip;
-
- if (!cs->tx_skb)
- return;
- if (cs->tx_skb->len <= 0)
- return;
-
- SelFiFo(cs, 4 | HFCD_SEND);
- cip = HFCD_FIFO | HFCD_F1 | HFCD_SEND;
- WaitNoBusy(cs);
- cs->hw.hfcD.f1 = ReadReg(cs, HFCD_DATA, cip) & 0xf;
- WaitNoBusy(cs);
- cip = HFCD_FIFO | HFCD_F2 | HFCD_SEND;
- cs->hw.hfcD.f2 = ReadReg(cs, HFCD_DATA, cip) & 0xf;
- cs->hw.hfcD.send[cs->hw.hfcD.f1] = ReadZReg(cs, HFCD_FIFO | HFCD_Z1 | HFCD_SEND);
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "hfc_fill_Dfifo f1(%d) f2(%d) z1(%x)",
- cs->hw.hfcD.f1, cs->hw.hfcD.f2,
- cs->hw.hfcD.send[cs->hw.hfcD.f1]);
- fcnt = cs->hw.hfcD.f1 - cs->hw.hfcD.f2;
- if (fcnt < 0)
- fcnt += 16;
- if (fcnt > 14) {
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "hfc_fill_Dfifo more as 14 frames");
- return;
- }
- count = GetFreeFifoBytes_D(cs);
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "hfc_fill_Dfifo count(%u/%d)",
- cs->tx_skb->len, count);
- if (count < cs->tx_skb->len) {
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "hfc_fill_Dfifo no fifo mem");
- return;
- }
- cip = HFCD_FIFO | HFCD_FIFO_IN | HFCD_SEND;
- idx = 0;
- WaitForBusy(cs);
- WaitNoBusy(cs);
- WriteReg(cs, HFCD_DATA_NODEB, cip, cs->tx_skb->data[idx++]);
- while (idx < cs->tx_skb->len) {
- if (!(WaitNoBusy(cs)))
- break;
- WriteReg(cs, HFCD_DATA_NODEB, cip, cs->tx_skb->data[idx]);
- idx++;
- }
- if (idx != cs->tx_skb->len) {
- debugl1(cs, "DFIFO Send BUSY error");
- printk(KERN_WARNING "HFC S DFIFO channel BUSY Error\n");
- }
- WaitForBusy(cs);
- WaitNoBusy(cs);
- ReadReg(cs, HFCD_DATA, HFCD_FIFO | HFCD_F1_INC | HFCD_SEND);
- dev_kfree_skb_any(cs->tx_skb);
- cs->tx_skb = NULL;
- WaitForBusy(cs);
- return;
-}
-
-static
-struct BCState *Sel_BCS(struct IsdnCardState *cs, int channel)
-{
- if (cs->bcs[0].mode && (cs->bcs[0].channel == channel))
- return (&cs->bcs[0]);
- else if (cs->bcs[1].mode && (cs->bcs[1].channel == channel))
- return (&cs->bcs[1]);
- else
- return (NULL);
-}
-
-void
-hfc2bds0_interrupt(struct IsdnCardState *cs, u_char val)
-{
- u_char exval;
- struct BCState *bcs;
- int count = 15;
-
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "HFCD irq %x %s", val,
- test_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags) ?
- "locked" : "unlocked");
- val &= cs->hw.hfcD.int_m1;
- if (val & 0x40) { /* TE state machine irq */
- exval = cs->readisac(cs, HFCD_STATES) & 0xf;
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "ph_state chg %d->%d", cs->dc.hfcd.ph_state,
- exval);
- cs->dc.hfcd.ph_state = exval;
- schedule_event(cs, D_L1STATECHANGE);
- val &= ~0x40;
- }
- while (val) {
- if (test_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- cs->hw.hfcD.int_s1 |= val;
- return;
- }
- if (cs->hw.hfcD.int_s1 & 0x18) {
- exval = val;
- val = cs->hw.hfcD.int_s1;
- cs->hw.hfcD.int_s1 = exval;
- }
- if (val & 0x08) {
- if (!(bcs = Sel_BCS(cs, 0))) {
- if (cs->debug)
- debugl1(cs, "hfcd spurious 0x08 IRQ");
- } else
- main_rec_2bds0(bcs);
- }
- if (val & 0x10) {
- if (!(bcs = Sel_BCS(cs, 1))) {
- if (cs->debug)
- debugl1(cs, "hfcd spurious 0x10 IRQ");
- } else
- main_rec_2bds0(bcs);
- }
- if (val & 0x01) {
- if (!(bcs = Sel_BCS(cs, 0))) {
- if (cs->debug)
- debugl1(cs, "hfcd spurious 0x01 IRQ");
- } else {
- if (bcs->tx_skb) {
- if (!test_and_set_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- hfc_fill_fifo(bcs);
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
- } else
- debugl1(cs, "fill_data %d blocked", bcs->channel);
- } else {
- if ((bcs->tx_skb = skb_dequeue(&bcs->squeue))) {
- if (!test_and_set_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- hfc_fill_fifo(bcs);
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
- } else
- debugl1(cs, "fill_data %d blocked", bcs->channel);
- } else {
- schedule_event(bcs, B_XMTBUFREADY);
- }
- }
- }
- }
- if (val & 0x02) {
- if (!(bcs = Sel_BCS(cs, 1))) {
- if (cs->debug)
- debugl1(cs, "hfcd spurious 0x02 IRQ");
- } else {
- if (bcs->tx_skb) {
- if (!test_and_set_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- hfc_fill_fifo(bcs);
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
- } else
- debugl1(cs, "fill_data %d blocked", bcs->channel);
- } else {
- if ((bcs->tx_skb = skb_dequeue(&bcs->squeue))) {
- if (!test_and_set_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- hfc_fill_fifo(bcs);
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
- } else
- debugl1(cs, "fill_data %d blocked", bcs->channel);
- } else {
- schedule_event(bcs, B_XMTBUFREADY);
- }
- }
- }
- }
- if (val & 0x20) { /* receive dframe */
- receive_dmsg(cs);
- }
- if (val & 0x04) { /* dframe transmitted */
- if (test_and_clear_bit(FLG_DBUSY_TIMER, &cs->HW_Flags))
- del_timer(&cs->dbusytimer);
- if (test_and_clear_bit(FLG_L1_DBUSY, &cs->HW_Flags))
- schedule_event(cs, D_CLEARBUSY);
- if (cs->tx_skb) {
- if (cs->tx_skb->len) {
- if (!test_and_set_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- hfc_fill_dfifo(cs);
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
- } else {
- debugl1(cs, "hfc_fill_dfifo irq blocked");
- }
- goto afterXPR;
- } else {
- dev_kfree_skb_irq(cs->tx_skb);
- cs->tx_cnt = 0;
- cs->tx_skb = NULL;
- }
- }
- if ((cs->tx_skb = skb_dequeue(&cs->sq))) {
- cs->tx_cnt = 0;
- if (!test_and_set_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- hfc_fill_dfifo(cs);
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
- } else {
- debugl1(cs, "hfc_fill_dfifo irq blocked");
- }
- } else
- schedule_event(cs, D_XMTBUFREADY);
- }
- afterXPR:
- if (cs->hw.hfcD.int_s1 && count--) {
- val = cs->hw.hfcD.int_s1;
- cs->hw.hfcD.int_s1 = 0;
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "HFCD irq %x loop %d", val, 15-count);
- } else
- val = 0;
- }
-}
-
-static void
-HFCD_l1hw(struct PStack *st, int pr, void *arg)
-{
- struct IsdnCardState *cs = (struct IsdnCardState *) st->l1.hardware;
- struct sk_buff *skb = arg;
- u_long flags;
-
- switch (pr) {
- case (PH_DATA | REQUEST):
- if (cs->debug & DEB_DLOG_HEX)
- LogFrame(cs, skb->data, skb->len);
- if (cs->debug & DEB_DLOG_VERBOSE)
- dlogframe(cs, skb, 0);
- spin_lock_irqsave(&cs->lock, flags);
- if (cs->tx_skb) {
- skb_queue_tail(&cs->sq, skb);
-#ifdef L2FRAME_DEBUG /* psa */
- if (cs->debug & L1_DEB_LAPD)
- Logl2Frame(cs, skb, "PH_DATA Queued", 0);
-#endif
- } else {
- cs->tx_skb = skb;
- cs->tx_cnt = 0;
-#ifdef L2FRAME_DEBUG /* psa */
- if (cs->debug & L1_DEB_LAPD)
- Logl2Frame(cs, skb, "PH_DATA", 0);
-#endif
- if (!test_and_set_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- hfc_fill_dfifo(cs);
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
- } else
- debugl1(cs, "hfc_fill_dfifo blocked");
-
- }
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- case (PH_PULL | INDICATION):
- spin_lock_irqsave(&cs->lock, flags);
- if (cs->tx_skb) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, " l2l1 tx_skb exist this shouldn't happen");
- skb_queue_tail(&cs->sq, skb);
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- }
- if (cs->debug & DEB_DLOG_HEX)
- LogFrame(cs, skb->data, skb->len);
- if (cs->debug & DEB_DLOG_VERBOSE)
- dlogframe(cs, skb, 0);
- cs->tx_skb = skb;
- cs->tx_cnt = 0;
-#ifdef L2FRAME_DEBUG /* psa */
- if (cs->debug & L1_DEB_LAPD)
- Logl2Frame(cs, skb, "PH_DATA_PULLED", 0);
-#endif
- if (!test_and_set_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- hfc_fill_dfifo(cs);
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
- } else
- debugl1(cs, "hfc_fill_dfifo blocked");
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- case (PH_PULL | REQUEST):
-#ifdef L2FRAME_DEBUG /* psa */
- if (cs->debug & L1_DEB_LAPD)
- debugl1(cs, "-> PH_REQUEST_PULL");
-#endif
- if (!cs->tx_skb) {
- test_and_clear_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- st->l1.l1l2(st, PH_PULL | CONFIRM, NULL);
- } else
- test_and_set_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- break;
- case (HW_RESET | REQUEST):
- spin_lock_irqsave(&cs->lock, flags);
- cs->writeisac(cs, HFCD_STATES, HFCD_LOAD_STATE | 3); /* HFC ST 3 */
- udelay(6);
- cs->writeisac(cs, HFCD_STATES, 3); /* HFC ST 2 */
- cs->hw.hfcD.mst_m |= HFCD_MASTER;
- cs->writeisac(cs, HFCD_MST_MODE, cs->hw.hfcD.mst_m);
- cs->writeisac(cs, HFCD_STATES, HFCD_ACTIVATE | HFCD_DO_ACTION);
- spin_unlock_irqrestore(&cs->lock, flags);
- l1_msg(cs, HW_POWERUP | CONFIRM, NULL);
- break;
- case (HW_ENABLE | REQUEST):
- spin_lock_irqsave(&cs->lock, flags);
- cs->writeisac(cs, HFCD_STATES, HFCD_ACTIVATE | HFCD_DO_ACTION);
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- case (HW_DEACTIVATE | REQUEST):
- spin_lock_irqsave(&cs->lock, flags);
- cs->hw.hfcD.mst_m &= ~HFCD_MASTER;
- cs->writeisac(cs, HFCD_MST_MODE, cs->hw.hfcD.mst_m);
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- case (HW_INFO3 | REQUEST):
- spin_lock_irqsave(&cs->lock, flags);
- cs->hw.hfcD.mst_m |= HFCD_MASTER;
- cs->writeisac(cs, HFCD_MST_MODE, cs->hw.hfcD.mst_m);
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- default:
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "hfcd_l1hw unknown pr %4x", pr);
- break;
- }
-}
-
-static void
-setstack_hfcd(struct PStack *st, struct IsdnCardState *cs)
-{
- st->l1.l1hw = HFCD_l1hw;
-}
-
-static void
-hfc_dbusy_timer(struct timer_list *t)
-{
-}
-
-static unsigned int
-*init_send_hfcd(int cnt)
-{
- int i;
- unsigned *send;
-
- if (!(send = kmalloc_array(cnt, sizeof(unsigned int), GFP_ATOMIC))) {
- printk(KERN_WARNING
- "HiSax: No memory for hfcd.send\n");
- return (NULL);
- }
- for (i = 0; i < cnt; i++)
- send[i] = 0x1fff;
- return (send);
-}
-
-void
-init2bds0(struct IsdnCardState *cs)
-{
- cs->setstack_d = setstack_hfcd;
- if (!cs->hw.hfcD.send)
- cs->hw.hfcD.send = init_send_hfcd(16);
- if (!cs->bcs[0].hw.hfc.send)
- cs->bcs[0].hw.hfc.send = init_send_hfcd(32);
- if (!cs->bcs[1].hw.hfc.send)
- cs->bcs[1].hw.hfc.send = init_send_hfcd(32);
- cs->BC_Send_Data = &hfc_send_data;
- cs->bcs[0].BC_SetStack = setstack_2b;
- cs->bcs[1].BC_SetStack = setstack_2b;
- cs->bcs[0].BC_Close = close_2bs0;
- cs->bcs[1].BC_Close = close_2bs0;
- mode_2bs0(cs->bcs, 0, 0);
- mode_2bs0(cs->bcs + 1, 0, 1);
-}
-
-void
-release2bds0(struct IsdnCardState *cs)
-{
- kfree(cs->bcs[0].hw.hfc.send);
- cs->bcs[0].hw.hfc.send = NULL;
- kfree(cs->bcs[1].hw.hfc.send);
- cs->bcs[1].hw.hfc.send = NULL;
- kfree(cs->hw.hfcD.send);
- cs->hw.hfcD.send = NULL;
-}
-
-void
-set_cs_func(struct IsdnCardState *cs)
-{
- cs->readisac = &readreghfcd;
- cs->writeisac = &writereghfcd;
- cs->readisacfifo = &dummyf;
- cs->writeisacfifo = &dummyf;
- cs->BC_Read_Reg = &ReadReg;
- cs->BC_Write_Reg = &WriteReg;
- timer_setup(&cs->dbusytimer, hfc_dbusy_timer, 0);
- INIT_WORK(&cs->tqueue, hfcd_bh);
-}
diff --git a/drivers/isdn/hisax/hfc_2bds0.h b/drivers/isdn/hisax/hfc_2bds0.h
deleted file mode 100644
index 8c7582a3c51e..000000000000
--- a/drivers/isdn/hisax/hfc_2bds0.h
+++ /dev/null
@@ -1,128 +0,0 @@
-/* $Id: hfc_2bds0.h,v 1.6.2.2 2004/01/12 22:52:26 keil Exp $
- *
- * specific defines for CCD's HFC 2BDS0
- *
- * Author Karsten Keil
- * Copyright by Karsten Keil <keil@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#define HFCD_CIRM 0x18
-#define HFCD_CTMT 0x19
-#define HFCD_INT_M1 0x1A
-#define HFCD_INT_M2 0x1B
-#define HFCD_INT_S1 0x1E
-#define HFCD_STAT 0x1C
-#define HFCD_STAT_DISB 0x1D
-#define HFCD_STATES 0x30
-#define HFCD_SCTRL 0x31
-#define HFCD_TEST 0x32
-#define HFCD_SQ 0x34
-#define HFCD_CLKDEL 0x37
-#define HFCD_MST_MODE 0x2E
-#define HFCD_CONN 0x2F
-
-#define HFCD_FIFO 0x80
-#define HFCD_Z1 0x10
-#define HFCD_Z2 0x18
-#define HFCD_Z_LOW 0x00
-#define HFCD_Z_HIGH 0x04
-#define HFCD_F1_INC 0x12
-#define HFCD_FIFO_IN 0x16
-#define HFCD_F1 0x1a
-#define HFCD_F2 0x1e
-#define HFCD_F2_INC 0x22
-#define HFCD_FIFO_OUT 0x26
-#define HFCD_REC 0x01
-#define HFCD_SEND 0x00
-
-#define HFCB_FIFO 0x80
-#define HFCB_Z1 0x00
-#define HFCB_Z2 0x08
-#define HFCB_Z_LOW 0x00
-#define HFCB_Z_HIGH 0x04
-#define HFCB_F1_INC 0x28
-#define HFCB_FIFO_IN 0x2c
-#define HFCB_F1 0x30
-#define HFCB_F2 0x34
-#define HFCB_F2_INC 0x38
-#define HFCB_FIFO_OUT 0x3c
-#define HFCB_REC 0x01
-#define HFCB_SEND 0x00
-#define HFCB_B1 0x00
-#define HFCB_B2 0x02
-#define HFCB_CHANNEL(ch) (ch ? HFCB_B2 : HFCB_B1)
-
-#define HFCD_STATUS 0
-#define HFCD_DATA 1
-#define HFCD_DATA_NODEB 2
-
-/* Status (READ) */
-#define HFCD_BUSY 0x01
-#define HFCD_BUSY_NBUSY 0x04
-#define HFCD_TIMER_ELAP 0x10
-#define HFCD_STATINT 0x20
-#define HFCD_FRAMEINT 0x40
-#define HFCD_ANYINT 0x80
-
-/* CTMT (Write) */
-#define HFCD_CLTIMER 0x80
-#define HFCD_TIM25 0x00
-#define HFCD_TIM50 0x08
-#define HFCD_TIM400 0x10
-#define HFCD_TIM800 0x18
-#define HFCD_AUTO_TIMER 0x20
-#define HFCD_TRANSB2 0x02
-#define HFCD_TRANSB1 0x01
-
-/* CIRM (Write) */
-#define HFCD_RESET 0x08
-#define HFCD_MEM8K 0x10
-#define HFCD_INTA 0x01
-#define HFCD_INTB 0x02
-#define HFCD_INTC 0x03
-#define HFCD_INTD 0x04
-#define HFCD_INTE 0x05
-#define HFCD_INTF 0x06
-
-/* INT_M1;INT_S1 */
-#define HFCD_INTS_B1TRANS 0x01
-#define HFCD_INTS_B2TRANS 0x02
-#define HFCD_INTS_DTRANS 0x04
-#define HFCD_INTS_B1REC 0x08
-#define HFCD_INTS_B2REC 0x10
-#define HFCD_INTS_DREC 0x20
-#define HFCD_INTS_L1STATE 0x40
-#define HFCD_INTS_TIMER 0x80
-
-/* INT_M2 */
-#define HFCD_IRQ_ENABLE 0x08
-
-/* STATES */
-#define HFCD_LOAD_STATE 0x10
-#define HFCD_ACTIVATE 0x20
-#define HFCD_DO_ACTION 0x40
-
-/* HFCD_MST_MODE */
-#define HFCD_MASTER 0x01
-
-/* HFCD_SCTRL */
-#define SCTRL_B1_ENA 0x01
-#define SCTRL_B2_ENA 0x02
-#define SCTRL_LOW_PRIO 0x08
-#define SCTRL_SQ_ENA 0x10
-#define SCTRL_TEST 0x20
-#define SCTRL_NONE_CAP 0x40
-#define SCTRL_PWR_DOWN 0x80
-
-/* HFCD_TEST */
-#define HFCD_AUTO_AWAKE 0x01
-
-extern void main_irq_2bds0(struct BCState *bcs);
-extern void init2bds0(struct IsdnCardState *cs);
-extern void release2bds0(struct IsdnCardState *cs);
-extern void hfc2bds0_interrupt(struct IsdnCardState *cs, u_char val);
-extern void set_cs_func(struct IsdnCardState *cs);
diff --git a/drivers/isdn/hisax/hfc_2bs0.c b/drivers/isdn/hisax/hfc_2bs0.c
deleted file mode 100644
index 34d59992839a..000000000000
--- a/drivers/isdn/hisax/hfc_2bs0.c
+++ /dev/null
@@ -1,591 +0,0 @@
-/* $Id: hfc_2bs0.c,v 1.20.2.6 2004/02/11 13:21:33 keil Exp $
- *
- * specific routines for CCD's HFC 2BS0
- *
- * Author Karsten Keil
- * Copyright by Karsten Keil <keil@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#include <linux/init.h>
-#include "hisax.h"
-#include "hfc_2bs0.h"
-#include "isac.h"
-#include "isdnl1.h"
-#include <linux/interrupt.h>
-#include <linux/slab.h>
-
-static inline int
-WaitForBusy(struct IsdnCardState *cs)
-{
- int to = 130;
- u_char val;
-
- while (!(cs->BC_Read_Reg(cs, HFC_STATUS, 0) & HFC_BUSY) && to) {
- val = cs->BC_Read_Reg(cs, HFC_DATA, HFC_CIP | HFC_F2 |
- (cs->hw.hfc.cip & 3));
- udelay(1);
- to--;
- }
- if (!to) {
- printk(KERN_WARNING "HiSax: %s timeout\n", __func__);
- return (0);
- } else
- return (to);
-}
-
-static inline int
-WaitNoBusy(struct IsdnCardState *cs)
-{
- int to = 125;
-
- while ((cs->BC_Read_Reg(cs, HFC_STATUS, 0) & HFC_BUSY) && to) {
- udelay(1);
- to--;
- }
- if (!to) {
- printk(KERN_WARNING "HiSax: waitforBusy timeout\n");
- return (0);
- } else
- return (to);
-}
-
-static int
-GetFreeFifoBytes(struct BCState *bcs)
-{
- int s;
-
- if (bcs->hw.hfc.f1 == bcs->hw.hfc.f2)
- return (bcs->cs->hw.hfc.fifosize);
- s = bcs->hw.hfc.send[bcs->hw.hfc.f1] - bcs->hw.hfc.send[bcs->hw.hfc.f2];
- if (s <= 0)
- s += bcs->cs->hw.hfc.fifosize;
- s = bcs->cs->hw.hfc.fifosize - s;
- return (s);
-}
-
-static int
-ReadZReg(struct BCState *bcs, u_char reg)
-{
- int val;
-
- WaitNoBusy(bcs->cs);
- val = 256 * bcs->cs->BC_Read_Reg(bcs->cs, HFC_DATA, reg | HFC_CIP | HFC_Z_HIGH);
- WaitNoBusy(bcs->cs);
- val += bcs->cs->BC_Read_Reg(bcs->cs, HFC_DATA, reg | HFC_CIP | HFC_Z_LOW);
- return (val);
-}
-
-static void
-hfc_clear_fifo(struct BCState *bcs)
-{
- struct IsdnCardState *cs = bcs->cs;
- int idx, cnt;
- int rcnt, z1, z2;
- u_char cip, f1, f2;
-
- if ((cs->debug & L1_DEB_HSCX) && !(cs->debug & L1_DEB_HSCX_FIFO))
- debugl1(cs, "hfc_clear_fifo");
- cip = HFC_CIP | HFC_F1 | HFC_REC | HFC_CHANNEL(bcs->channel);
- if ((cip & 0xc3) != (cs->hw.hfc.cip & 0xc3)) {
- cs->BC_Write_Reg(cs, HFC_STATUS, cip, cip);
- WaitForBusy(cs);
- }
- WaitNoBusy(cs);
- f1 = cs->BC_Read_Reg(cs, HFC_DATA, cip);
- cip = HFC_CIP | HFC_F2 | HFC_REC | HFC_CHANNEL(bcs->channel);
- WaitNoBusy(cs);
- f2 = cs->BC_Read_Reg(cs, HFC_DATA, cip);
- z1 = ReadZReg(bcs, HFC_Z1 | HFC_REC | HFC_CHANNEL(bcs->channel));
- z2 = ReadZReg(bcs, HFC_Z2 | HFC_REC | HFC_CHANNEL(bcs->channel));
- cnt = 32;
- while (((f1 != f2) || (z1 != z2)) && cnt--) {
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "hfc clear %d f1(%d) f2(%d)",
- bcs->channel, f1, f2);
- rcnt = z1 - z2;
- if (rcnt < 0)
- rcnt += cs->hw.hfc.fifosize;
- if (rcnt)
- rcnt++;
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "hfc clear %d z1(%x) z2(%x) cnt(%d)",
- bcs->channel, z1, z2, rcnt);
- cip = HFC_CIP | HFC_FIFO_OUT | HFC_REC | HFC_CHANNEL(bcs->channel);
- idx = 0;
- while ((idx < rcnt) && WaitNoBusy(cs)) {
- cs->BC_Read_Reg(cs, HFC_DATA_NODEB, cip);
- idx++;
- }
- if (f1 != f2) {
- WaitNoBusy(cs);
- cs->BC_Read_Reg(cs, HFC_DATA, HFC_CIP | HFC_F2_INC | HFC_REC |
- HFC_CHANNEL(bcs->channel));
- WaitForBusy(cs);
- }
- cip = HFC_CIP | HFC_F1 | HFC_REC | HFC_CHANNEL(bcs->channel);
- WaitNoBusy(cs);
- f1 = cs->BC_Read_Reg(cs, HFC_DATA, cip);
- cip = HFC_CIP | HFC_F2 | HFC_REC | HFC_CHANNEL(bcs->channel);
- WaitNoBusy(cs);
- f2 = cs->BC_Read_Reg(cs, HFC_DATA, cip);
- z1 = ReadZReg(bcs, HFC_Z1 | HFC_REC | HFC_CHANNEL(bcs->channel));
- z2 = ReadZReg(bcs, HFC_Z2 | HFC_REC | HFC_CHANNEL(bcs->channel));
- }
- return;
-}
-
-
-static struct sk_buff
-*
-hfc_empty_fifo(struct BCState *bcs, int count)
-{
- u_char *ptr;
- struct sk_buff *skb;
- struct IsdnCardState *cs = bcs->cs;
- int idx;
- int chksum;
- u_char stat, cip;
-
- if ((cs->debug & L1_DEB_HSCX) && !(cs->debug & L1_DEB_HSCX_FIFO))
- debugl1(cs, "hfc_empty_fifo");
- idx = 0;
- if (count > HSCX_BUFMAX + 3) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "hfc_empty_fifo: incoming packet too large");
- cip = HFC_CIP | HFC_FIFO_OUT | HFC_REC | HFC_CHANNEL(bcs->channel);
- while ((idx++ < count) && WaitNoBusy(cs))
- cs->BC_Read_Reg(cs, HFC_DATA_NODEB, cip);
- WaitNoBusy(cs);
- stat = cs->BC_Read_Reg(cs, HFC_DATA, HFC_CIP | HFC_F2_INC | HFC_REC |
- HFC_CHANNEL(bcs->channel));
- WaitForBusy(cs);
- return (NULL);
- }
- if ((count < 4) && (bcs->mode != L1_MODE_TRANS)) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "hfc_empty_fifo: incoming packet too small");
- cip = HFC_CIP | HFC_FIFO_OUT | HFC_REC | HFC_CHANNEL(bcs->channel);
- while ((idx++ < count) && WaitNoBusy(cs))
- cs->BC_Read_Reg(cs, HFC_DATA_NODEB, cip);
- WaitNoBusy(cs);
- stat = cs->BC_Read_Reg(cs, HFC_DATA, HFC_CIP | HFC_F2_INC | HFC_REC |
- HFC_CHANNEL(bcs->channel));
- WaitForBusy(cs);
-#ifdef ERROR_STATISTIC
- bcs->err_inv++;
-#endif
- return (NULL);
- }
- if (bcs->mode == L1_MODE_TRANS)
- count -= 1;
- else
- count -= 3;
- if (!(skb = dev_alloc_skb(count)))
- printk(KERN_WARNING "HFC: receive out of memory\n");
- else {
- ptr = skb_put(skb, count);
- idx = 0;
- cip = HFC_CIP | HFC_FIFO_OUT | HFC_REC | HFC_CHANNEL(bcs->channel);
- while ((idx < count) && WaitNoBusy(cs)) {
- *ptr++ = cs->BC_Read_Reg(cs, HFC_DATA_NODEB, cip);
- idx++;
- }
- if (idx != count) {
- debugl1(cs, "RFIFO BUSY error");
- printk(KERN_WARNING "HFC FIFO channel %d BUSY Error\n", bcs->channel);
- dev_kfree_skb_any(skb);
- if (bcs->mode != L1_MODE_TRANS) {
- WaitNoBusy(cs);
- stat = cs->BC_Read_Reg(cs, HFC_DATA, HFC_CIP | HFC_F2_INC | HFC_REC |
- HFC_CHANNEL(bcs->channel));
- WaitForBusy(cs);
- }
- return (NULL);
- }
- if (bcs->mode != L1_MODE_TRANS) {
- WaitNoBusy(cs);
- chksum = (cs->BC_Read_Reg(cs, HFC_DATA, cip) << 8);
- WaitNoBusy(cs);
- chksum += cs->BC_Read_Reg(cs, HFC_DATA, cip);
- WaitNoBusy(cs);
- stat = cs->BC_Read_Reg(cs, HFC_DATA, cip);
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "hfc_empty_fifo %d chksum %x stat %x",
- bcs->channel, chksum, stat);
- if (stat) {
- debugl1(cs, "FIFO CRC error");
- dev_kfree_skb_any(skb);
- skb = NULL;
-#ifdef ERROR_STATISTIC
- bcs->err_crc++;
-#endif
- }
- WaitNoBusy(cs);
- stat = cs->BC_Read_Reg(cs, HFC_DATA, HFC_CIP | HFC_F2_INC | HFC_REC |
- HFC_CHANNEL(bcs->channel));
- WaitForBusy(cs);
- }
- }
- return (skb);
-}
-
-static void
-hfc_fill_fifo(struct BCState *bcs)
-{
- struct IsdnCardState *cs = bcs->cs;
- int idx, fcnt;
- int count;
- int z1, z2;
- u_char cip;
-
- if (!bcs->tx_skb)
- return;
- if (bcs->tx_skb->len <= 0)
- return;
-
- cip = HFC_CIP | HFC_F1 | HFC_SEND | HFC_CHANNEL(bcs->channel);
- if ((cip & 0xc3) != (cs->hw.hfc.cip & 0xc3)) {
- cs->BC_Write_Reg(cs, HFC_STATUS, cip, cip);
- WaitForBusy(cs);
- }
- WaitNoBusy(cs);
- if (bcs->mode != L1_MODE_TRANS) {
- bcs->hw.hfc.f1 = cs->BC_Read_Reg(cs, HFC_DATA, cip);
- cip = HFC_CIP | HFC_F2 | HFC_SEND | HFC_CHANNEL(bcs->channel);
- WaitNoBusy(cs);
- bcs->hw.hfc.f2 = cs->BC_Read_Reg(cs, HFC_DATA, cip);
- bcs->hw.hfc.send[bcs->hw.hfc.f1] = ReadZReg(bcs, HFC_Z1 | HFC_SEND | HFC_CHANNEL(bcs->channel));
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "hfc_fill_fifo %d f1(%d) f2(%d) z1(%x)",
- bcs->channel, bcs->hw.hfc.f1, bcs->hw.hfc.f2,
- bcs->hw.hfc.send[bcs->hw.hfc.f1]);
- fcnt = bcs->hw.hfc.f1 - bcs->hw.hfc.f2;
- if (fcnt < 0)
- fcnt += 32;
- if (fcnt > 30) {
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "hfc_fill_fifo more as 30 frames");
- return;
- }
- count = GetFreeFifoBytes(bcs);
- }
- else {
- WaitForBusy(cs);
- z1 = ReadZReg(bcs, HFC_Z1 | HFC_REC | HFC_CHANNEL(bcs->channel));
- z2 = ReadZReg(bcs, HFC_Z2 | HFC_REC | HFC_CHANNEL(bcs->channel));
- count = z1 - z2;
- if (count < 0)
- count += cs->hw.hfc.fifosize;
- } /* L1_MODE_TRANS */
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "hfc_fill_fifo %d count(%u/%d)",
- bcs->channel, bcs->tx_skb->len,
- count);
- if (count < bcs->tx_skb->len) {
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "hfc_fill_fifo no fifo mem");
- return;
- }
- cip = HFC_CIP | HFC_FIFO_IN | HFC_SEND | HFC_CHANNEL(bcs->channel);
- idx = 0;
- while ((idx < bcs->tx_skb->len) && WaitNoBusy(cs))
- cs->BC_Write_Reg(cs, HFC_DATA_NODEB, cip, bcs->tx_skb->data[idx++]);
- if (idx != bcs->tx_skb->len) {
- debugl1(cs, "FIFO Send BUSY error");
- printk(KERN_WARNING "HFC S FIFO channel %d BUSY Error\n", bcs->channel);
- } else {
- count = bcs->tx_skb->len;
- bcs->tx_cnt -= count;
- if (PACKET_NOACK == bcs->tx_skb->pkt_type)
- count = -1;
- dev_kfree_skb_any(bcs->tx_skb);
- bcs->tx_skb = NULL;
- if (bcs->mode != L1_MODE_TRANS) {
- WaitForBusy(cs);
- WaitNoBusy(cs);
- cs->BC_Read_Reg(cs, HFC_DATA, HFC_CIP | HFC_F1_INC | HFC_SEND | HFC_CHANNEL(bcs->channel));
- }
- if (test_bit(FLG_LLI_L1WAKEUP, &bcs->st->lli.flag) &&
- (count >= 0)) {
- u_long flags;
- spin_lock_irqsave(&bcs->aclock, flags);
- bcs->ackcnt += count;
- spin_unlock_irqrestore(&bcs->aclock, flags);
- schedule_event(bcs, B_ACKPENDING);
- }
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- }
- return;
-}
-
-void
-main_irq_hfc(struct BCState *bcs)
-{
- struct IsdnCardState *cs = bcs->cs;
- int z1, z2, rcnt;
- u_char f1, f2, cip;
- int receive, transmit, count = 5;
- struct sk_buff *skb;
-
-Begin:
- count--;
- cip = HFC_CIP | HFC_F1 | HFC_REC | HFC_CHANNEL(bcs->channel);
- if ((cip & 0xc3) != (cs->hw.hfc.cip & 0xc3)) {
- cs->BC_Write_Reg(cs, HFC_STATUS, cip, cip);
- WaitForBusy(cs);
- }
- WaitNoBusy(cs);
- receive = 0;
- if (bcs->mode == L1_MODE_HDLC) {
- f1 = cs->BC_Read_Reg(cs, HFC_DATA, cip);
- cip = HFC_CIP | HFC_F2 | HFC_REC | HFC_CHANNEL(bcs->channel);
- WaitNoBusy(cs);
- f2 = cs->BC_Read_Reg(cs, HFC_DATA, cip);
- if (f1 != f2) {
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "hfc rec %d f1(%d) f2(%d)",
- bcs->channel, f1, f2);
- receive = 1;
- }
- }
- if (receive || (bcs->mode == L1_MODE_TRANS)) {
- WaitForBusy(cs);
- z1 = ReadZReg(bcs, HFC_Z1 | HFC_REC | HFC_CHANNEL(bcs->channel));
- z2 = ReadZReg(bcs, HFC_Z2 | HFC_REC | HFC_CHANNEL(bcs->channel));
- rcnt = z1 - z2;
- if (rcnt < 0)
- rcnt += cs->hw.hfc.fifosize;
- if ((bcs->mode == L1_MODE_HDLC) || (rcnt)) {
- rcnt++;
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "hfc rec %d z1(%x) z2(%x) cnt(%d)",
- bcs->channel, z1, z2, rcnt);
- /* sti(); */
- if ((skb = hfc_empty_fifo(bcs, rcnt))) {
- skb_queue_tail(&bcs->rqueue, skb);
- schedule_event(bcs, B_RCVBUFREADY);
- }
- }
- receive = 1;
- }
- if (bcs->tx_skb) {
- transmit = 1;
- test_and_set_bit(BC_FLG_BUSY, &bcs->Flag);
- hfc_fill_fifo(bcs);
- if (test_bit(BC_FLG_BUSY, &bcs->Flag))
- transmit = 0;
- } else {
- if ((bcs->tx_skb = skb_dequeue(&bcs->squeue))) {
- transmit = 1;
- test_and_set_bit(BC_FLG_BUSY, &bcs->Flag);
- hfc_fill_fifo(bcs);
- if (test_bit(BC_FLG_BUSY, &bcs->Flag))
- transmit = 0;
- } else {
- transmit = 0;
- schedule_event(bcs, B_XMTBUFREADY);
- }
- }
- if ((receive || transmit) && count)
- goto Begin;
- return;
-}
-
-static void
-mode_hfc(struct BCState *bcs, int mode, int bc)
-{
- struct IsdnCardState *cs = bcs->cs;
-
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "HFC 2BS0 mode %d bchan %d/%d",
- mode, bc, bcs->channel);
- bcs->mode = mode;
- bcs->channel = bc;
-
- switch (mode) {
- case (L1_MODE_NULL):
- if (bc) {
- cs->hw.hfc.ctmt &= ~1;
- cs->hw.hfc.isac_spcr &= ~0x03;
- }
- else {
- cs->hw.hfc.ctmt &= ~2;
- cs->hw.hfc.isac_spcr &= ~0x0c;
- }
- break;
- case (L1_MODE_TRANS):
- cs->hw.hfc.ctmt &= ~(1 << bc); /* set HDLC mode */
- cs->BC_Write_Reg(cs, HFC_STATUS, cs->hw.hfc.ctmt, cs->hw.hfc.ctmt);
- hfc_clear_fifo(bcs); /* complete fifo clear */
- if (bc) {
- cs->hw.hfc.ctmt |= 1;
- cs->hw.hfc.isac_spcr &= ~0x03;
- cs->hw.hfc.isac_spcr |= 0x02;
- } else {
- cs->hw.hfc.ctmt |= 2;
- cs->hw.hfc.isac_spcr &= ~0x0c;
- cs->hw.hfc.isac_spcr |= 0x08;
- }
- break;
- case (L1_MODE_HDLC):
- if (bc) {
- cs->hw.hfc.ctmt &= ~1;
- cs->hw.hfc.isac_spcr &= ~0x03;
- cs->hw.hfc.isac_spcr |= 0x02;
- } else {
- cs->hw.hfc.ctmt &= ~2;
- cs->hw.hfc.isac_spcr &= ~0x0c;
- cs->hw.hfc.isac_spcr |= 0x08;
- }
- break;
- }
- cs->BC_Write_Reg(cs, HFC_STATUS, cs->hw.hfc.ctmt, cs->hw.hfc.ctmt);
- cs->writeisac(cs, ISAC_SPCR, cs->hw.hfc.isac_spcr);
- if (mode == L1_MODE_HDLC)
- hfc_clear_fifo(bcs);
-}
-
-static void
-hfc_l2l1(struct PStack *st, int pr, void *arg)
-{
- struct BCState *bcs = st->l1.bcs;
- struct sk_buff *skb = arg;
- u_long flags;
-
- switch (pr) {
- case (PH_DATA | REQUEST):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- if (bcs->tx_skb) {
- skb_queue_tail(&bcs->squeue, skb);
- } else {
- bcs->tx_skb = skb;
- test_and_set_bit(BC_FLG_BUSY, &bcs->Flag);
- bcs->cs->BC_Send_Data(bcs);
- }
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- break;
- case (PH_PULL | INDICATION):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- if (bcs->tx_skb) {
- printk(KERN_WARNING "hfc_l2l1: this shouldn't happen\n");
- } else {
- test_and_set_bit(BC_FLG_BUSY, &bcs->Flag);
- bcs->tx_skb = skb;
- bcs->cs->BC_Send_Data(bcs);
- }
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- break;
- case (PH_PULL | REQUEST):
- if (!bcs->tx_skb) {
- test_and_clear_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- st->l1.l1l2(st, PH_PULL | CONFIRM, NULL);
- } else
- test_and_set_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- break;
- case (PH_ACTIVATE | REQUEST):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- test_and_set_bit(BC_FLG_ACTIV, &bcs->Flag);
- mode_hfc(bcs, st->l1.mode, st->l1.bc);
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- l1_msg_b(st, pr, arg);
- break;
- case (PH_DEACTIVATE | REQUEST):
- l1_msg_b(st, pr, arg);
- break;
- case (PH_DEACTIVATE | CONFIRM):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- test_and_clear_bit(BC_FLG_ACTIV, &bcs->Flag);
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- mode_hfc(bcs, 0, st->l1.bc);
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- st->l1.l1l2(st, PH_DEACTIVATE | CONFIRM, NULL);
- break;
- }
-}
-
-
-static void
-close_hfcstate(struct BCState *bcs)
-{
- mode_hfc(bcs, 0, bcs->channel);
- if (test_bit(BC_FLG_INIT, &bcs->Flag)) {
- skb_queue_purge(&bcs->rqueue);
- skb_queue_purge(&bcs->squeue);
- if (bcs->tx_skb) {
- dev_kfree_skb_any(bcs->tx_skb);
- bcs->tx_skb = NULL;
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- }
- }
- test_and_clear_bit(BC_FLG_INIT, &bcs->Flag);
-}
-
-static int
-open_hfcstate(struct IsdnCardState *cs, struct BCState *bcs)
-{
- if (!test_and_set_bit(BC_FLG_INIT, &bcs->Flag)) {
- skb_queue_head_init(&bcs->rqueue);
- skb_queue_head_init(&bcs->squeue);
- }
- bcs->tx_skb = NULL;
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- bcs->event = 0;
- bcs->tx_cnt = 0;
- return (0);
-}
-
-static int
-setstack_hfc(struct PStack *st, struct BCState *bcs)
-{
- bcs->channel = st->l1.bc;
- if (open_hfcstate(st->l1.hardware, bcs))
- return (-1);
- st->l1.bcs = bcs;
- st->l2.l2l1 = hfc_l2l1;
- setstack_manager(st);
- bcs->st = st;
- setstack_l1_B(st);
- return (0);
-}
-
-static void
-init_send(struct BCState *bcs)
-{
- int i;
-
- bcs->hw.hfc.send = kmalloc_array(32, sizeof(unsigned int), GFP_ATOMIC);
- if (!bcs->hw.hfc.send) {
- printk(KERN_WARNING
- "HiSax: No memory for hfc.send\n");
- return;
- }
- for (i = 0; i < 32; i++)
- bcs->hw.hfc.send[i] = 0x1fff;
-}
-
-void
-inithfc(struct IsdnCardState *cs)
-{
- init_send(&cs->bcs[0]);
- init_send(&cs->bcs[1]);
- cs->BC_Send_Data = &hfc_fill_fifo;
- cs->bcs[0].BC_SetStack = setstack_hfc;
- cs->bcs[1].BC_SetStack = setstack_hfc;
- cs->bcs[0].BC_Close = close_hfcstate;
- cs->bcs[1].BC_Close = close_hfcstate;
- mode_hfc(cs->bcs, 0, 0);
- mode_hfc(cs->bcs + 1, 0, 0);
-}
-
-void
-releasehfc(struct IsdnCardState *cs)
-{
- kfree(cs->bcs[0].hw.hfc.send);
- cs->bcs[0].hw.hfc.send = NULL;
- kfree(cs->bcs[1].hw.hfc.send);
- cs->bcs[1].hw.hfc.send = NULL;
-}
diff --git a/drivers/isdn/hisax/hfc_2bs0.h b/drivers/isdn/hisax/hfc_2bs0.h
deleted file mode 100644
index 1510096363dc..000000000000
--- a/drivers/isdn/hisax/hfc_2bs0.h
+++ /dev/null
@@ -1,60 +0,0 @@
-/* $Id: hfc_2bs0.h,v 1.5.2.2 2004/01/12 22:52:26 keil Exp $
- *
- * specific defines for CCD's HFC 2BS0
- *
- * Author Karsten Keil
- * Copyright by Karsten Keil <keil@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#define HFC_CTMT 0xe0
-#define HFC_CIRM 0xc0
-#define HFC_CIP 0x80
-#define HFC_Z1 0x00
-#define HFC_Z2 0x08
-#define HFC_Z_LOW 0x00
-#define HFC_Z_HIGH 0x04
-#define HFC_F1_INC 0x28
-#define HFC_FIFO_IN 0x2c
-#define HFC_F1 0x30
-#define HFC_F2 0x34
-#define HFC_F2_INC 0x38
-#define HFC_FIFO_OUT 0x3c
-#define HFC_B1 0x00
-#define HFC_B2 0x02
-#define HFC_REC 0x01
-#define HFC_SEND 0x00
-#define HFC_CHANNEL(ch) (ch ? HFC_B2 : HFC_B1)
-
-#define HFC_STATUS 0
-#define HFC_DATA 1
-#define HFC_DATA_NODEB 2
-
-/* Status (READ) */
-#define HFC_BUSY 0x01
-#define HFC_TIMINT 0x02
-#define HFC_EXTINT 0x04
-
-/* CTMT (Write) */
-#define HFC_CLTIMER 0x10
-#define HFC_TIM50MS 0x08
-#define HFC_TIMIRQE 0x04
-#define HFC_TRANSB2 0x02
-#define HFC_TRANSB1 0x01
-
-/* CIRM (Write) */
-#define HFC_RESET 0x08
-#define HFC_MEM8K 0x10
-#define HFC_INTA 0x01
-#define HFC_INTB 0x02
-#define HFC_INTC 0x03
-#define HFC_INTD 0x04
-#define HFC_INTE 0x05
-#define HFC_INTF 0x06
-
-extern void main_irq_hfc(struct BCState *bcs);
-extern void inithfc(struct IsdnCardState *cs);
-extern void releasehfc(struct IsdnCardState *cs);
diff --git a/drivers/isdn/hisax/hfc_pci.c b/drivers/isdn/hisax/hfc_pci.c
deleted file mode 100644
index 71a8312592d6..000000000000
--- a/drivers/isdn/hisax/hfc_pci.c
+++ /dev/null
@@ -1,1755 +0,0 @@
-/* $Id: hfc_pci.c,v 1.48.2.4 2004/02/11 13:21:33 keil Exp $
- *
- * low level driver for CCD's hfc-pci based cards
- *
- * Author Werner Cornelius
- * based on existing driver for CCD hfc ISA cards
- * Copyright by Werner Cornelius <werner@isdn4linux.de>
- * by Karsten Keil <keil@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- * For changes and modifications please read
- * Documentation/isdn/HiSax.cert
- *
- */
-
-#include <linux/init.h>
-#include "hisax.h"
-#include "hfc_pci.h"
-#include "isdnl1.h"
-#include <linux/pci.h>
-#include <linux/sched.h>
-#include <linux/interrupt.h>
-
-static const char *hfcpci_revision = "$Revision: 1.48.2.4 $";
-
-/* table entry in the PCI devices list */
-typedef struct {
- int vendor_id;
- int device_id;
- char *vendor_name;
- char *card_name;
-} PCI_ENTRY;
-
-#define NT_T1_COUNT 20 /* number of 3.125ms interrupts for G2 timeout */
-#define CLKDEL_TE 0x0e /* CLKDEL in TE mode */
-#define CLKDEL_NT 0x6c /* CLKDEL in NT mode */
-
-static const PCI_ENTRY id_list[] =
-{
- {PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_2BD0, "CCD/Billion/Asuscom", "2BD0"},
- {PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_B000, "Billion", "B000"},
- {PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_B006, "Billion", "B006"},
- {PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_B007, "Billion", "B007"},
- {PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_B008, "Billion", "B008"},
- {PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_B009, "Billion", "B009"},
- {PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_B00A, "Billion", "B00A"},
- {PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_B00B, "Billion", "B00B"},
- {PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_B00C, "Billion", "B00C"},
- {PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_B100, "Seyeon", "B100"},
- {PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_B700, "Primux II S0", "B700"},
- {PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_B701, "Primux II S0 NT", "B701"},
- {PCI_VENDOR_ID_ABOCOM, PCI_DEVICE_ID_ABOCOM_2BD1, "Abocom/Magitek", "2BD1"},
- {PCI_VENDOR_ID_ASUSTEK, PCI_DEVICE_ID_ASUSTEK_0675, "Asuscom/Askey", "675"},
- {PCI_VENDOR_ID_BERKOM, PCI_DEVICE_ID_BERKOM_T_CONCEPT, "German telekom", "T-Concept"},
- {PCI_VENDOR_ID_BERKOM, PCI_DEVICE_ID_BERKOM_A1T, "German telekom", "A1T"},
- {PCI_VENDOR_ID_ANIGMA, PCI_DEVICE_ID_ANIGMA_MC145575, "Motorola MC145575", "MC145575"},
- {PCI_VENDOR_ID_ZOLTRIX, PCI_DEVICE_ID_ZOLTRIX_2BD0, "Zoltrix", "2BD0"},
- {PCI_VENDOR_ID_DIGI, PCI_DEVICE_ID_DIGI_DF_M_IOM2_E, "Digi International", "Digi DataFire Micro V IOM2 (Europe)"},
- {PCI_VENDOR_ID_DIGI, PCI_DEVICE_ID_DIGI_DF_M_E, "Digi International", "Digi DataFire Micro V (Europe)"},
- {PCI_VENDOR_ID_DIGI, PCI_DEVICE_ID_DIGI_DF_M_IOM2_A, "Digi International", "Digi DataFire Micro V IOM2 (North America)"},
- {PCI_VENDOR_ID_DIGI, PCI_DEVICE_ID_DIGI_DF_M_A, "Digi International", "Digi DataFire Micro V (North America)"},
- {PCI_VENDOR_ID_SITECOM, PCI_DEVICE_ID_SITECOM_DC105V2, "Sitecom Europe", "DC-105 ISDN PCI"},
- {0, 0, NULL, NULL},
-};
-
-
-/******************************************/
-/* free hardware resources used by driver */
-/******************************************/
-static void
-release_io_hfcpci(struct IsdnCardState *cs)
-{
- printk(KERN_INFO "HiSax: release hfcpci at %p\n",
- cs->hw.hfcpci.pci_io);
- cs->hw.hfcpci.int_m2 = 0; /* interrupt output off ! */
- Write_hfc(cs, HFCPCI_INT_M2, cs->hw.hfcpci.int_m2);
- Write_hfc(cs, HFCPCI_CIRM, HFCPCI_RESET); /* Reset On */
- mdelay(10);
- Write_hfc(cs, HFCPCI_CIRM, 0); /* Reset Off */
- mdelay(10);
- Write_hfc(cs, HFCPCI_INT_M2, cs->hw.hfcpci.int_m2);
- pci_write_config_word(cs->hw.hfcpci.dev, PCI_COMMAND, 0); /* disable memory mapped ports + busmaster */
- del_timer(&cs->hw.hfcpci.timer);
- pci_free_consistent(cs->hw.hfcpci.dev, 0x8000,
- cs->hw.hfcpci.fifos, cs->hw.hfcpci.dma);
- cs->hw.hfcpci.fifos = NULL;
- iounmap(cs->hw.hfcpci.pci_io);
-}
-
-/********************************************************************************/
-/* function called to reset the HFC PCI chip. A complete software reset of chip */
-/* and fifos is done. */
-/********************************************************************************/
-static void
-reset_hfcpci(struct IsdnCardState *cs)
-{
- pci_write_config_word(cs->hw.hfcpci.dev, PCI_COMMAND, PCI_ENA_MEMIO); /* enable memory mapped ports, disable busmaster */
- cs->hw.hfcpci.int_m2 = 0; /* interrupt output off ! */
- Write_hfc(cs, HFCPCI_INT_M2, cs->hw.hfcpci.int_m2);
-
- printk(KERN_INFO "HFC_PCI: resetting card\n");
- pci_write_config_word(cs->hw.hfcpci.dev, PCI_COMMAND, PCI_ENA_MEMIO + PCI_ENA_MASTER); /* enable memory ports + busmaster */
- Write_hfc(cs, HFCPCI_CIRM, HFCPCI_RESET); /* Reset On */
- mdelay(10);
- Write_hfc(cs, HFCPCI_CIRM, 0); /* Reset Off */
- mdelay(10);
- if (Read_hfc(cs, HFCPCI_STATUS) & 2)
- printk(KERN_WARNING "HFC-PCI init bit busy\n");
-
- cs->hw.hfcpci.fifo_en = 0x30; /* only D fifos enabled */
- Write_hfc(cs, HFCPCI_FIFO_EN, cs->hw.hfcpci.fifo_en);
-
- cs->hw.hfcpci.trm = 0 + HFCPCI_BTRANS_THRESMASK; /* no echo connect , threshold */
- Write_hfc(cs, HFCPCI_TRM, cs->hw.hfcpci.trm);
-
- Write_hfc(cs, HFCPCI_CLKDEL, CLKDEL_TE); /* ST-Bit delay for TE-Mode */
- cs->hw.hfcpci.sctrl_e = HFCPCI_AUTO_AWAKE;
- Write_hfc(cs, HFCPCI_SCTRL_E, cs->hw.hfcpci.sctrl_e); /* S/T Auto awake */
- cs->hw.hfcpci.bswapped = 0; /* no exchange */
- cs->hw.hfcpci.nt_mode = 0; /* we are in TE mode */
- cs->hw.hfcpci.ctmt = HFCPCI_TIM3_125 | HFCPCI_AUTO_TIMER;
- Write_hfc(cs, HFCPCI_CTMT, cs->hw.hfcpci.ctmt);
-
- cs->hw.hfcpci.int_m1 = HFCPCI_INTS_DTRANS | HFCPCI_INTS_DREC |
- HFCPCI_INTS_L1STATE | HFCPCI_INTS_TIMER;
- Write_hfc(cs, HFCPCI_INT_M1, cs->hw.hfcpci.int_m1);
-
- /* Clear already pending ints */
- Read_hfc(cs, HFCPCI_INT_S1);
-
- Write_hfc(cs, HFCPCI_STATES, HFCPCI_LOAD_STATE | 2); /* HFC ST 2 */
- udelay(10);
- Write_hfc(cs, HFCPCI_STATES, 2); /* HFC ST 2 */
- cs->hw.hfcpci.mst_m = HFCPCI_MASTER; /* HFC Master Mode */
-
- Write_hfc(cs, HFCPCI_MST_MODE, cs->hw.hfcpci.mst_m);
- cs->hw.hfcpci.sctrl = 0x40; /* set tx_lo mode, error in datasheet ! */
- Write_hfc(cs, HFCPCI_SCTRL, cs->hw.hfcpci.sctrl);
- cs->hw.hfcpci.sctrl_r = 0;
- Write_hfc(cs, HFCPCI_SCTRL_R, cs->hw.hfcpci.sctrl_r);
-
- /* Init GCI/IOM2 in master mode */
- /* Slots 0 and 1 are set for B-chan 1 and 2 */
- /* D- and monitor/CI channel are not enabled */
- /* STIO1 is used as output for data, B1+B2 from ST->IOM+HFC */
- /* STIO2 is used as data input, B1+B2 from IOM->ST */
- /* ST B-channel send disabled -> continuous 1s */
- /* The IOM slots are always enabled */
- cs->hw.hfcpci.conn = 0x36; /* set data flow directions */
- Write_hfc(cs, HFCPCI_CONNECT, cs->hw.hfcpci.conn);
- Write_hfc(cs, HFCPCI_B1_SSL, 0x80); /* B1-Slot 0 STIO1 out enabled */
- Write_hfc(cs, HFCPCI_B2_SSL, 0x81); /* B2-Slot 1 STIO1 out enabled */
- Write_hfc(cs, HFCPCI_B1_RSL, 0x80); /* B1-Slot 0 STIO2 in enabled */
- Write_hfc(cs, HFCPCI_B2_RSL, 0x81); /* B2-Slot 1 STIO2 in enabled */
-
- /* Finally enable IRQ output */
- cs->hw.hfcpci.int_m2 = HFCPCI_IRQ_ENABLE;
- Write_hfc(cs, HFCPCI_INT_M2, cs->hw.hfcpci.int_m2);
- Read_hfc(cs, HFCPCI_INT_S1);
-}
-
-/***************************************************/
-/* Timer function called when kernel timer expires */
-/***************************************************/
-static void
-hfcpci_Timer(struct timer_list *t)
-{
- struct IsdnCardState *cs = from_timer(cs, t, hw.hfcpci.timer);
- cs->hw.hfcpci.timer.expires = jiffies + 75;
- /* WD RESET */
-/* WriteReg(cs, HFCD_DATA, HFCD_CTMT, cs->hw.hfcpci.ctmt | 0x80);
- add_timer(&cs->hw.hfcpci.timer);
-*/
-}
-
-
-/*********************************/
-/* schedule a new D-channel task */
-/*********************************/
-static void
-sched_event_D_pci(struct IsdnCardState *cs, int event)
-{
- test_and_set_bit(event, &cs->event);
- schedule_work(&cs->tqueue);
-}
-
-/*********************************/
-/* schedule a new b_channel task */
-/*********************************/
-static void
-hfcpci_sched_event(struct BCState *bcs, int event)
-{
- test_and_set_bit(event, &bcs->event);
- schedule_work(&bcs->tqueue);
-}
-
-/************************************************/
-/* select a b-channel entry matching and active */
-/************************************************/
-static
-struct BCState *
-Sel_BCS(struct IsdnCardState *cs, int channel)
-{
- if (cs->bcs[0].mode && (cs->bcs[0].channel == channel))
- return (&cs->bcs[0]);
- else if (cs->bcs[1].mode && (cs->bcs[1].channel == channel))
- return (&cs->bcs[1]);
- else
- return (NULL);
-}
-
-/***************************************/
-/* clear the desired B-channel rx fifo */
-/***************************************/
-static void hfcpci_clear_fifo_rx(struct IsdnCardState *cs, int fifo)
-{ u_char fifo_state;
- bzfifo_type *bzr;
-
- if (fifo) {
- bzr = &((fifo_area *) (cs->hw.hfcpci.fifos))->b_chans.rxbz_b2;
- fifo_state = cs->hw.hfcpci.fifo_en & HFCPCI_FIFOEN_B2RX;
- } else {
- bzr = &((fifo_area *) (cs->hw.hfcpci.fifos))->b_chans.rxbz_b1;
- fifo_state = cs->hw.hfcpci.fifo_en & HFCPCI_FIFOEN_B1RX;
- }
- if (fifo_state)
- cs->hw.hfcpci.fifo_en ^= fifo_state;
- Write_hfc(cs, HFCPCI_FIFO_EN, cs->hw.hfcpci.fifo_en);
- cs->hw.hfcpci.last_bfifo_cnt[fifo] = 0;
- bzr->za[MAX_B_FRAMES].z1 = B_FIFO_SIZE + B_SUB_VAL - 1;
- bzr->za[MAX_B_FRAMES].z2 = bzr->za[MAX_B_FRAMES].z1;
- bzr->f1 = MAX_B_FRAMES;
- bzr->f2 = bzr->f1; /* init F pointers to remain constant */
- if (fifo_state)
- cs->hw.hfcpci.fifo_en |= fifo_state;
- Write_hfc(cs, HFCPCI_FIFO_EN, cs->hw.hfcpci.fifo_en);
-}
-
-/***************************************/
-/* clear the desired B-channel tx fifo */
-/***************************************/
-static void hfcpci_clear_fifo_tx(struct IsdnCardState *cs, int fifo)
-{ u_char fifo_state;
- bzfifo_type *bzt;
-
- if (fifo) {
- bzt = &((fifo_area *) (cs->hw.hfcpci.fifos))->b_chans.txbz_b2;
- fifo_state = cs->hw.hfcpci.fifo_en & HFCPCI_FIFOEN_B2TX;
- } else {
- bzt = &((fifo_area *) (cs->hw.hfcpci.fifos))->b_chans.txbz_b1;
- fifo_state = cs->hw.hfcpci.fifo_en & HFCPCI_FIFOEN_B1TX;
- }
- if (fifo_state)
- cs->hw.hfcpci.fifo_en ^= fifo_state;
- Write_hfc(cs, HFCPCI_FIFO_EN, cs->hw.hfcpci.fifo_en);
- bzt->za[MAX_B_FRAMES].z1 = B_FIFO_SIZE + B_SUB_VAL - 1;
- bzt->za[MAX_B_FRAMES].z2 = bzt->za[MAX_B_FRAMES].z1;
- bzt->f1 = MAX_B_FRAMES;
- bzt->f2 = bzt->f1; /* init F pointers to remain constant */
- if (fifo_state)
- cs->hw.hfcpci.fifo_en |= fifo_state;
- Write_hfc(cs, HFCPCI_FIFO_EN, cs->hw.hfcpci.fifo_en);
-}
-
-/*********************************************/
-/* read a complete B-frame out of the buffer */
-/*********************************************/
-static struct sk_buff
-*
-hfcpci_empty_fifo(struct BCState *bcs, bzfifo_type *bz, u_char *bdata, int count)
-{
- u_char *ptr, *ptr1, new_f2;
- struct sk_buff *skb;
- struct IsdnCardState *cs = bcs->cs;
- int maxlen, new_z2;
- z_type *zp;
-
- if ((cs->debug & L1_DEB_HSCX) && !(cs->debug & L1_DEB_HSCX_FIFO))
- debugl1(cs, "hfcpci_empty_fifo");
- zp = &bz->za[bz->f2]; /* point to Z-Regs */
- new_z2 = zp->z2 + count; /* new position in fifo */
- if (new_z2 >= (B_FIFO_SIZE + B_SUB_VAL))
- new_z2 -= B_FIFO_SIZE; /* buffer wrap */
- new_f2 = (bz->f2 + 1) & MAX_B_FRAMES;
- if ((count > HSCX_BUFMAX + 3) || (count < 4) ||
- (*(bdata + (zp->z1 - B_SUB_VAL)))) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "hfcpci_empty_fifo: incoming packet invalid length %d or crc", count);
-#ifdef ERROR_STATISTIC
- bcs->err_inv++;
-#endif
- bz->za[new_f2].z2 = new_z2;
- bz->f2 = new_f2; /* next buffer */
- skb = NULL;
- } else if (!(skb = dev_alloc_skb(count - 3)))
- printk(KERN_WARNING "HFCPCI: receive out of memory\n");
- else {
- count -= 3;
- ptr = skb_put(skb, count);
-
- if (zp->z2 + count <= B_FIFO_SIZE + B_SUB_VAL)
- maxlen = count; /* complete transfer */
- else
- maxlen = B_FIFO_SIZE + B_SUB_VAL - zp->z2; /* maximum */
-
- ptr1 = bdata + (zp->z2 - B_SUB_VAL); /* start of data */
- memcpy(ptr, ptr1, maxlen); /* copy data */
- count -= maxlen;
-
- if (count) { /* rest remaining */
- ptr += maxlen;
- ptr1 = bdata; /* start of buffer */
- memcpy(ptr, ptr1, count); /* rest */
- }
- bz->za[new_f2].z2 = new_z2;
- bz->f2 = new_f2; /* next buffer */
-
- }
- return (skb);
-}
-
-/*******************************/
-/* D-channel receive procedure */
-/*******************************/
-static
-int
-receive_dmsg(struct IsdnCardState *cs)
-{
- struct sk_buff *skb;
- int maxlen;
- int rcnt, total;
- int count = 5;
- u_char *ptr, *ptr1;
- dfifo_type *df;
- z_type *zp;
-
- df = &((fifo_area *) (cs->hw.hfcpci.fifos))->d_chan.d_rx;
- if (test_and_set_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- debugl1(cs, "rec_dmsg blocked");
- return (1);
- }
- while (((df->f1 & D_FREG_MASK) != (df->f2 & D_FREG_MASK)) && count--) {
- zp = &df->za[df->f2 & D_FREG_MASK];
- rcnt = zp->z1 - zp->z2;
- if (rcnt < 0)
- rcnt += D_FIFO_SIZE;
- rcnt++;
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "hfcpci recd f1(%d) f2(%d) z1(%x) z2(%x) cnt(%d)",
- df->f1, df->f2, zp->z1, zp->z2, rcnt);
-
- if ((rcnt > MAX_DFRAME_LEN + 3) || (rcnt < 4) ||
- (df->data[zp->z1])) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "empty_fifo hfcpci packet inv. len %d or crc %d", rcnt, df->data[zp->z1]);
-#ifdef ERROR_STATISTIC
- cs->err_rx++;
-#endif
- df->f2 = ((df->f2 + 1) & MAX_D_FRAMES) | (MAX_D_FRAMES + 1); /* next buffer */
- df->za[df->f2 & D_FREG_MASK].z2 = (zp->z2 + rcnt) & (D_FIFO_SIZE - 1);
- } else if ((skb = dev_alloc_skb(rcnt - 3))) {
- total = rcnt;
- rcnt -= 3;
- ptr = skb_put(skb, rcnt);
-
- if (zp->z2 + rcnt <= D_FIFO_SIZE)
- maxlen = rcnt; /* complete transfer */
- else
- maxlen = D_FIFO_SIZE - zp->z2; /* maximum */
-
- ptr1 = df->data + zp->z2; /* start of data */
- memcpy(ptr, ptr1, maxlen); /* copy data */
- rcnt -= maxlen;
-
- if (rcnt) { /* rest remaining */
- ptr += maxlen;
- ptr1 = df->data; /* start of buffer */
- memcpy(ptr, ptr1, rcnt); /* rest */
- }
- df->f2 = ((df->f2 + 1) & MAX_D_FRAMES) | (MAX_D_FRAMES + 1); /* next buffer */
- df->za[df->f2 & D_FREG_MASK].z2 = (zp->z2 + total) & (D_FIFO_SIZE - 1);
-
- skb_queue_tail(&cs->rq, skb);
- sched_event_D_pci(cs, D_RCVBUFREADY);
- } else
- printk(KERN_WARNING "HFC-PCI: D receive out of memory\n");
- }
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
- return (1);
-}
-
-/*******************************************************************************/
-/* check for transparent receive data and read max one threshold size if avail */
-/*******************************************************************************/
-static int
-hfcpci_empty_fifo_trans(struct BCState *bcs, bzfifo_type *bz, u_char *bdata)
-{
- unsigned short *z1r, *z2r;
- int new_z2, fcnt, maxlen;
- struct sk_buff *skb;
- u_char *ptr, *ptr1;
-
- z1r = &bz->za[MAX_B_FRAMES].z1; /* pointer to z reg */
- z2r = z1r + 1;
-
- if (!(fcnt = *z1r - *z2r))
- return (0); /* no data avail */
-
- if (fcnt <= 0)
- fcnt += B_FIFO_SIZE; /* bytes actually buffered */
- if (fcnt > HFCPCI_BTRANS_THRESHOLD)
- fcnt = HFCPCI_BTRANS_THRESHOLD; /* limit size */
-
- new_z2 = *z2r + fcnt; /* new position in fifo */
- if (new_z2 >= (B_FIFO_SIZE + B_SUB_VAL))
- new_z2 -= B_FIFO_SIZE; /* buffer wrap */
-
- if (!(skb = dev_alloc_skb(fcnt)))
- printk(KERN_WARNING "HFCPCI: receive out of memory\n");
- else {
- ptr = skb_put(skb, fcnt);
- if (*z2r + fcnt <= B_FIFO_SIZE + B_SUB_VAL)
- maxlen = fcnt; /* complete transfer */
- else
- maxlen = B_FIFO_SIZE + B_SUB_VAL - *z2r; /* maximum */
-
- ptr1 = bdata + (*z2r - B_SUB_VAL); /* start of data */
- memcpy(ptr, ptr1, maxlen); /* copy data */
- fcnt -= maxlen;
-
- if (fcnt) { /* rest remaining */
- ptr += maxlen;
- ptr1 = bdata; /* start of buffer */
- memcpy(ptr, ptr1, fcnt); /* rest */
- }
- skb_queue_tail(&bcs->rqueue, skb);
- hfcpci_sched_event(bcs, B_RCVBUFREADY);
- }
-
- *z2r = new_z2; /* new position */
- return (1);
-} /* hfcpci_empty_fifo_trans */
-
-/**********************************/
-/* B-channel main receive routine */
-/**********************************/
-static void
-main_rec_hfcpci(struct BCState *bcs)
-{
- struct IsdnCardState *cs = bcs->cs;
- int rcnt, real_fifo;
- int receive, count = 5;
- struct sk_buff *skb;
- bzfifo_type *bz;
- u_char *bdata;
- z_type *zp;
-
-
- if ((bcs->channel) && (!cs->hw.hfcpci.bswapped)) {
- bz = &((fifo_area *) (cs->hw.hfcpci.fifos))->b_chans.rxbz_b2;
- bdata = ((fifo_area *) (cs->hw.hfcpci.fifos))->b_chans.rxdat_b2;
- real_fifo = 1;
- } else {
- bz = &((fifo_area *) (cs->hw.hfcpci.fifos))->b_chans.rxbz_b1;
- bdata = ((fifo_area *) (cs->hw.hfcpci.fifos))->b_chans.rxdat_b1;
- real_fifo = 0;
- }
-Begin:
- count--;
- if (test_and_set_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- debugl1(cs, "rec_data %d blocked", bcs->channel);
- return;
- }
- if (bz->f1 != bz->f2) {
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "hfcpci rec %d f1(%d) f2(%d)",
- bcs->channel, bz->f1, bz->f2);
- zp = &bz->za[bz->f2];
-
- rcnt = zp->z1 - zp->z2;
- if (rcnt < 0)
- rcnt += B_FIFO_SIZE;
- rcnt++;
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "hfcpci rec %d z1(%x) z2(%x) cnt(%d)",
- bcs->channel, zp->z1, zp->z2, rcnt);
- if ((skb = hfcpci_empty_fifo(bcs, bz, bdata, rcnt))) {
- skb_queue_tail(&bcs->rqueue, skb);
- hfcpci_sched_event(bcs, B_RCVBUFREADY);
- }
- rcnt = bz->f1 - bz->f2;
- if (rcnt < 0)
- rcnt += MAX_B_FRAMES + 1;
- if (cs->hw.hfcpci.last_bfifo_cnt[real_fifo] > rcnt + 1) {
- rcnt = 0;
- hfcpci_clear_fifo_rx(cs, real_fifo);
- }
- cs->hw.hfcpci.last_bfifo_cnt[real_fifo] = rcnt;
- if (rcnt > 1)
- receive = 1;
- else
- receive = 0;
- } else if (bcs->mode == L1_MODE_TRANS)
- receive = hfcpci_empty_fifo_trans(bcs, bz, bdata);
- else
- receive = 0;
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
- if (count && receive)
- goto Begin;
-}
-
-/**************************/
-/* D-channel send routine */
-/**************************/
-static void
-hfcpci_fill_dfifo(struct IsdnCardState *cs)
-{
- int fcnt;
- int count, new_z1, maxlen;
- dfifo_type *df;
- u_char *src, *dst, new_f1;
-
- if (!cs->tx_skb)
- return;
- if (cs->tx_skb->len <= 0)
- return;
-
- df = &((fifo_area *) (cs->hw.hfcpci.fifos))->d_chan.d_tx;
-
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "hfcpci_fill_Dfifo f1(%d) f2(%d) z1(f1)(%x)",
- df->f1, df->f2,
- df->za[df->f1 & D_FREG_MASK].z1);
- fcnt = df->f1 - df->f2; /* frame count actually buffered */
- if (fcnt < 0)
- fcnt += (MAX_D_FRAMES + 1); /* if wrap around */
- if (fcnt > (MAX_D_FRAMES - 1)) {
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "hfcpci_fill_Dfifo more as 14 frames");
-#ifdef ERROR_STATISTIC
- cs->err_tx++;
-#endif
- return;
- }
- /* now determine free bytes in FIFO buffer */
- count = df->za[df->f2 & D_FREG_MASK].z2 - df->za[df->f1 & D_FREG_MASK].z1 - 1;
- if (count <= 0)
- count += D_FIFO_SIZE; /* count now contains available bytes */
-
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "hfcpci_fill_Dfifo count(%u/%d)",
- cs->tx_skb->len, count);
- if (count < cs->tx_skb->len) {
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "hfcpci_fill_Dfifo no fifo mem");
- return;
- }
- count = cs->tx_skb->len; /* get frame len */
- new_z1 = (df->za[df->f1 & D_FREG_MASK].z1 + count) & (D_FIFO_SIZE - 1);
- new_f1 = ((df->f1 + 1) & D_FREG_MASK) | (D_FREG_MASK + 1);
- src = cs->tx_skb->data; /* source pointer */
- dst = df->data + df->za[df->f1 & D_FREG_MASK].z1;
- maxlen = D_FIFO_SIZE - df->za[df->f1 & D_FREG_MASK].z1; /* end fifo */
- if (maxlen > count)
- maxlen = count; /* limit size */
- memcpy(dst, src, maxlen); /* first copy */
-
- count -= maxlen; /* remaining bytes */
- if (count) {
- dst = df->data; /* start of buffer */
- src += maxlen; /* new position */
- memcpy(dst, src, count);
- }
- df->za[new_f1 & D_FREG_MASK].z1 = new_z1; /* for next buffer */
- df->za[df->f1 & D_FREG_MASK].z1 = new_z1; /* new pos actual buffer */
- df->f1 = new_f1; /* next frame */
-
- dev_kfree_skb_any(cs->tx_skb);
- cs->tx_skb = NULL;
-}
-
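hfcpci_fill_dfifo derives the remaining writable space in the D FIFO purely from the Z pointers, keeping one byte of slack so a full ring stays distinguishable from an empty one. A standalone sketch of that arithmetic, with D_SIZE standing in for D_FIFO_SIZE (512):

#define D_SIZE	512	/* stand-in for D_FIFO_SIZE */

/* Free bytes in the D transmit ring, given the write pointer z1 and the
 * read pointer z2 (both already reduced modulo D_SIZE). Mirrors the
 * "count = z2 - z1 - 1; if (count <= 0) count += D_FIFO_SIZE;" step above. */
static int d_fifo_free_bytes(int z1, int z2)
{
	int count = z2 - z1 - 1;

	if (count <= 0)
		count += D_SIZE;	/* pointer difference wrapped */
	return count;
}

The frame counters follow the same idea one level up: fcnt = f1 - f2, corrected by MAX_D_FRAMES + 1 when the subtraction wraps.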
-/**************************/
-/* B-channel send routine */
-/**************************/
-static void
-hfcpci_fill_fifo(struct BCState *bcs)
-{
- struct IsdnCardState *cs = bcs->cs;
- int maxlen, fcnt;
- int count, new_z1;
- bzfifo_type *bz;
- u_char *bdata;
- u_char new_f1, *src, *dst;
- unsigned short *z1t, *z2t;
-
- if (!bcs->tx_skb)
- return;
- if (bcs->tx_skb->len <= 0)
- return;
-
- if ((bcs->channel) && (!cs->hw.hfcpci.bswapped)) {
- bz = &((fifo_area *) (cs->hw.hfcpci.fifos))->b_chans.txbz_b2;
- bdata = ((fifo_area *) (cs->hw.hfcpci.fifos))->b_chans.txdat_b2;
- } else {
- bz = &((fifo_area *) (cs->hw.hfcpci.fifos))->b_chans.txbz_b1;
- bdata = ((fifo_area *) (cs->hw.hfcpci.fifos))->b_chans.txdat_b1;
- }
-
- if (bcs->mode == L1_MODE_TRANS) {
- z1t = &bz->za[MAX_B_FRAMES].z1;
- z2t = z1t + 1;
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "hfcpci_fill_fifo_trans %d z1(%x) z2(%x)",
- bcs->channel, *z1t, *z2t);
- fcnt = *z2t - *z1t;
- if (fcnt <= 0)
- fcnt += B_FIFO_SIZE; /* fcnt contains available bytes in fifo */
- fcnt = B_FIFO_SIZE - fcnt; /* remaining bytes to send */
-
- while ((fcnt < 2 * HFCPCI_BTRANS_THRESHOLD) && (bcs->tx_skb)) {
- if (bcs->tx_skb->len < B_FIFO_SIZE - fcnt) {
- /* data is suitable for fifo */
- count = bcs->tx_skb->len;
-
- new_z1 = *z1t + count; /* new buffer Position */
- if (new_z1 >= (B_FIFO_SIZE + B_SUB_VAL))
- new_z1 -= B_FIFO_SIZE; /* buffer wrap */
- src = bcs->tx_skb->data; /* source pointer */
- dst = bdata + (*z1t - B_SUB_VAL);
- maxlen = (B_FIFO_SIZE + B_SUB_VAL) - *z1t; /* end of fifo */
- if (maxlen > count)
- maxlen = count; /* limit size */
- memcpy(dst, src, maxlen); /* first copy */
-
- count -= maxlen; /* remaining bytes */
- if (count) {
- dst = bdata; /* start of buffer */
- src += maxlen; /* new position */
- memcpy(dst, src, count);
- }
- bcs->tx_cnt -= bcs->tx_skb->len;
- fcnt += bcs->tx_skb->len;
- *z1t = new_z1; /* now send data */
- } else if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "hfcpci_fill_fifo_trans %d frame length %d discarded",
- bcs->channel, bcs->tx_skb->len);
-
- if (test_bit(FLG_LLI_L1WAKEUP, &bcs->st->lli.flag) &&
- (PACKET_NOACK != bcs->tx_skb->pkt_type)) {
- u_long flags;
- spin_lock_irqsave(&bcs->aclock, flags);
- bcs->ackcnt += bcs->tx_skb->len;
- spin_unlock_irqrestore(&bcs->aclock, flags);
- schedule_event(bcs, B_ACKPENDING);
- }
-
- dev_consume_skb_any(bcs->tx_skb);
- bcs->tx_skb = skb_dequeue(&bcs->squeue); /* fetch next data */
- }
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- return;
- }
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "hfcpci_fill_fifo_hdlc %d f1(%d) f2(%d) z1(f1)(%x)",
- bcs->channel, bz->f1, bz->f2,
- bz->za[bz->f1].z1);
-
- fcnt = bz->f1 - bz->f2; /* frame count actually buffered */
- if (fcnt < 0)
- fcnt += (MAX_B_FRAMES + 1); /* if wrap around */
- if (fcnt > (MAX_B_FRAMES - 1)) {
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "hfcpci_fill_Bfifo more as 14 frames");
- return;
- }
- /* now determine free bytes in FIFO buffer */
- count = bz->za[bz->f2].z2 - bz->za[bz->f1].z1 - 1;
- if (count <= 0)
- count += B_FIFO_SIZE; /* count now contains available bytes */
-
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "hfcpci_fill_fifo %d count(%u/%d),%lx",
- bcs->channel, bcs->tx_skb->len,
- count, current->state);
-
- if (count < bcs->tx_skb->len) {
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "hfcpci_fill_fifo no fifo mem");
- return;
- }
- count = bcs->tx_skb->len; /* get frame len */
- new_z1 = bz->za[bz->f1].z1 + count; /* new buffer Position */
- if (new_z1 >= (B_FIFO_SIZE + B_SUB_VAL))
- new_z1 -= B_FIFO_SIZE; /* buffer wrap */
-
- new_f1 = ((bz->f1 + 1) & MAX_B_FRAMES);
- src = bcs->tx_skb->data; /* source pointer */
- dst = bdata + (bz->za[bz->f1].z1 - B_SUB_VAL);
- maxlen = (B_FIFO_SIZE + B_SUB_VAL) - bz->za[bz->f1].z1; /* end fifo */
- if (maxlen > count)
- maxlen = count; /* limit size */
- memcpy(dst, src, maxlen); /* first copy */
-
- count -= maxlen; /* remaining bytes */
- if (count) {
- dst = bdata; /* start of buffer */
- src += maxlen; /* new position */
- memcpy(dst, src, count);
- }
- bcs->tx_cnt -= bcs->tx_skb->len;
- if (test_bit(FLG_LLI_L1WAKEUP, &bcs->st->lli.flag) &&
- (PACKET_NOACK != bcs->tx_skb->pkt_type)) {
- u_long flags;
- spin_lock_irqsave(&bcs->aclock, flags);
- bcs->ackcnt += bcs->tx_skb->len;
- spin_unlock_irqrestore(&bcs->aclock, flags);
- schedule_event(bcs, B_ACKPENDING);
- }
-
- bz->za[new_f1].z1 = new_z1; /* for next buffer */
- bz->f1 = new_f1; /* next frame */
-
- dev_kfree_skb_any(bcs->tx_skb);
- bcs->tx_skb = NULL;
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
-}
-
-/**********************************************/
-/* D-channel l1 state call for leased NT-mode */
-/**********************************************/
-static void
-dch_nt_l2l1(struct PStack *st, int pr, void *arg)
-{
- struct IsdnCardState *cs = (struct IsdnCardState *) st->l1.hardware;
-
- switch (pr) {
- case (PH_DATA | REQUEST):
- case (PH_PULL | REQUEST):
- case (PH_PULL | INDICATION):
- st->l1.l1hw(st, pr, arg);
- break;
- case (PH_ACTIVATE | REQUEST):
- st->l1.l1l2(st, PH_ACTIVATE | CONFIRM, NULL);
- break;
- case (PH_TESTLOOP | REQUEST):
- if (1 & (long) arg)
- debugl1(cs, "PH_TEST_LOOP B1");
- if (2 & (long) arg)
- debugl1(cs, "PH_TEST_LOOP B2");
- if (!(3 & (long) arg))
- debugl1(cs, "PH_TEST_LOOP DISABLED");
- st->l1.l1hw(st, HW_TESTLOOP | REQUEST, arg);
- break;
- default:
- if (cs->debug)
- debugl1(cs, "dch_nt_l2l1 msg %04X unhandled", pr);
- break;
- }
-}
-
-
-
-/***********************/
-/* set/reset echo mode */
-/***********************/
-static int
-hfcpci_auxcmd(struct IsdnCardState *cs, isdn_ctrl *ic)
-{
- u_long flags;
- int i = *(unsigned int *) ic->parm.num;
-
- if ((ic->arg == 98) &&
- (!(cs->hw.hfcpci.int_m1 & (HFCPCI_INTS_B2TRANS + HFCPCI_INTS_B2REC + HFCPCI_INTS_B1TRANS + HFCPCI_INTS_B1REC)))) {
- spin_lock_irqsave(&cs->lock, flags);
- Write_hfc(cs, HFCPCI_CLKDEL, CLKDEL_NT); /* ST-Bit delay for NT-Mode */
- Write_hfc(cs, HFCPCI_STATES, HFCPCI_LOAD_STATE | 0); /* HFC ST G0 */
- udelay(10);
- cs->hw.hfcpci.sctrl |= SCTRL_MODE_NT;
- Write_hfc(cs, HFCPCI_SCTRL, cs->hw.hfcpci.sctrl); /* set NT-mode */
- udelay(10);
- Write_hfc(cs, HFCPCI_STATES, HFCPCI_LOAD_STATE | 1); /* HFC ST G1 */
- udelay(10);
- Write_hfc(cs, HFCPCI_STATES, 1 | HFCPCI_ACTIVATE | HFCPCI_DO_ACTION);
- cs->dc.hfcpci.ph_state = 1;
- cs->hw.hfcpci.nt_mode = 1;
- cs->hw.hfcpci.nt_timer = 0;
- cs->stlist->l2.l2l1 = dch_nt_l2l1;
- spin_unlock_irqrestore(&cs->lock, flags);
- debugl1(cs, "NT mode activated");
- return (0);
- }
- if ((cs->chanlimit > 1) || (cs->hw.hfcpci.bswapped) ||
- (cs->hw.hfcpci.nt_mode) || (ic->arg != 12))
- return (-EINVAL);
-
- spin_lock_irqsave(&cs->lock, flags);
- if (i) {
- cs->logecho = 1;
- cs->hw.hfcpci.trm |= 0x20; /* enable echo chan */
- cs->hw.hfcpci.int_m1 |= HFCPCI_INTS_B2REC;
- cs->hw.hfcpci.fifo_en |= HFCPCI_FIFOEN_B2RX;
- } else {
- cs->logecho = 0;
- cs->hw.hfcpci.trm &= ~0x20; /* disable echo chan */
- cs->hw.hfcpci.int_m1 &= ~HFCPCI_INTS_B2REC;
- cs->hw.hfcpci.fifo_en &= ~HFCPCI_FIFOEN_B2RX;
- }
- cs->hw.hfcpci.sctrl_r &= ~SCTRL_B2_ENA;
- cs->hw.hfcpci.sctrl &= ~SCTRL_B2_ENA;
- cs->hw.hfcpci.conn |= 0x10; /* B2-IOM -> B2-ST */
- cs->hw.hfcpci.ctmt &= ~2;
- Write_hfc(cs, HFCPCI_CTMT, cs->hw.hfcpci.ctmt);
- Write_hfc(cs, HFCPCI_SCTRL_R, cs->hw.hfcpci.sctrl_r);
- Write_hfc(cs, HFCPCI_SCTRL, cs->hw.hfcpci.sctrl);
- Write_hfc(cs, HFCPCI_CONNECT, cs->hw.hfcpci.conn);
- Write_hfc(cs, HFCPCI_TRM, cs->hw.hfcpci.trm);
- Write_hfc(cs, HFCPCI_FIFO_EN, cs->hw.hfcpci.fifo_en);
- Write_hfc(cs, HFCPCI_INT_M1, cs->hw.hfcpci.int_m1);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
-} /* hfcpci_auxcmd */
-
-/*****************************/
-/* E-channel receive routine */
-/*****************************/
-static void
-receive_emsg(struct IsdnCardState *cs)
-{
- int rcnt;
- int receive, count = 5;
- bzfifo_type *bz;
- u_char *bdata;
- z_type *zp;
- u_char *ptr, *ptr1, new_f2;
- int total, maxlen, new_z2;
- u_char e_buffer[256];
-
- bz = &((fifo_area *) (cs->hw.hfcpci.fifos))->b_chans.rxbz_b2;
- bdata = ((fifo_area *) (cs->hw.hfcpci.fifos))->b_chans.rxdat_b2;
-Begin:
- count--;
- if (test_and_set_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- debugl1(cs, "echo_rec_data blocked");
- return;
- }
- if (bz->f1 != bz->f2) {
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "hfcpci e_rec f1(%d) f2(%d)",
- bz->f1, bz->f2);
- zp = &bz->za[bz->f2];
-
- rcnt = zp->z1 - zp->z2;
- if (rcnt < 0)
- rcnt += B_FIFO_SIZE;
- rcnt++;
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "hfcpci e_rec z1(%x) z2(%x) cnt(%d)",
- zp->z1, zp->z2, rcnt);
- new_z2 = zp->z2 + rcnt; /* new position in fifo */
- if (new_z2 >= (B_FIFO_SIZE + B_SUB_VAL))
- new_z2 -= B_FIFO_SIZE; /* buffer wrap */
- new_f2 = (bz->f2 + 1) & MAX_B_FRAMES;
- if ((rcnt > 256 + 3) || (count < 4) ||
- (*(bdata + (zp->z1 - B_SUB_VAL)))) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "hfcpci_empty_echan: incoming packet invalid length %d or crc", rcnt);
- bz->za[new_f2].z2 = new_z2;
- bz->f2 = new_f2; /* next buffer */
- } else {
- total = rcnt;
- rcnt -= 3;
- ptr = e_buffer;
-
- if (zp->z2 <= B_FIFO_SIZE + B_SUB_VAL)
- maxlen = rcnt; /* complete transfer */
- else
- maxlen = B_FIFO_SIZE + B_SUB_VAL - zp->z2; /* maximum */
-
- ptr1 = bdata + (zp->z2 - B_SUB_VAL); /* start of data */
- memcpy(ptr, ptr1, maxlen); /* copy data */
- rcnt -= maxlen;
-
- if (rcnt) { /* rest remaining */
- ptr += maxlen;
- ptr1 = bdata; /* start of buffer */
- memcpy(ptr, ptr1, rcnt); /* rest */
- }
- bz->za[new_f2].z2 = new_z2;
- bz->f2 = new_f2; /* next buffer */
- if (cs->debug & DEB_DLOG_HEX) {
- ptr = cs->dlog;
- if ((total - 3) < MAX_DLOG_SPACE / 3 - 10) {
- *ptr++ = 'E';
- *ptr++ = 'C';
- *ptr++ = 'H';
- *ptr++ = 'O';
- *ptr++ = ':';
- ptr += QuickHex(ptr, e_buffer, total - 3);
- ptr--;
- *ptr++ = '\n';
- *ptr = 0;
- HiSax_putstatus(cs, NULL, cs->dlog);
- } else
- HiSax_putstatus(cs, "LogEcho: ", "warning Frame too big (%d)", total - 3);
- }
- }
-
- rcnt = bz->f1 - bz->f2;
- if (rcnt < 0)
- rcnt += MAX_B_FRAMES + 1;
- if (rcnt > 1)
- receive = 1;
- else
- receive = 0;
- } else
- receive = 0;
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
- if (count && receive)
- goto Begin;
-} /* receive_emsg */
-
-/*********************/
-/* Interrupt handler */
-/*********************/
-static irqreturn_t
-hfcpci_interrupt(int intno, void *dev_id)
-{
- u_long flags;
- struct IsdnCardState *cs = dev_id;
- u_char exval;
- struct BCState *bcs;
- int count = 15;
- u_char val, stat;
-
- if (!(cs->hw.hfcpci.int_m2 & 0x08)) {
- debugl1(cs, "HFC-PCI: int_m2 %x not initialised", cs->hw.hfcpci.int_m2);
- return IRQ_NONE; /* not initialised */
- }
- spin_lock_irqsave(&cs->lock, flags);
- if (HFCPCI_ANYINT & (stat = Read_hfc(cs, HFCPCI_STATUS))) {
- val = Read_hfc(cs, HFCPCI_INT_S1);
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "HFC-PCI: stat(%02x) s1(%02x)", stat, val);
- } else {
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_NONE;
- }
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "HFC-PCI irq %x %s", val,
- test_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags) ?
- "locked" : "unlocked");
- val &= cs->hw.hfcpci.int_m1;
- if (val & 0x40) { /* state machine irq */
- exval = Read_hfc(cs, HFCPCI_STATES) & 0xf;
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "ph_state chg %d->%d", cs->dc.hfcpci.ph_state,
- exval);
- cs->dc.hfcpci.ph_state = exval;
- sched_event_D_pci(cs, D_L1STATECHANGE);
- val &= ~0x40;
- }
- if (val & 0x80) { /* timer irq */
- if (cs->hw.hfcpci.nt_mode) {
- if ((--cs->hw.hfcpci.nt_timer) < 0)
- sched_event_D_pci(cs, D_L1STATECHANGE);
- }
- val &= ~0x80;
- Write_hfc(cs, HFCPCI_CTMT, cs->hw.hfcpci.ctmt | HFCPCI_CLTIMER);
- }
- while (val) {
- if (test_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- cs->hw.hfcpci.int_s1 |= val;
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
- }
- if (cs->hw.hfcpci.int_s1 & 0x18) {
- exval = val;
- val = cs->hw.hfcpci.int_s1;
- cs->hw.hfcpci.int_s1 = exval;
- }
- if (val & 0x08) {
- if (!(bcs = Sel_BCS(cs, cs->hw.hfcpci.bswapped ? 1 : 0))) {
- if (cs->debug)
- debugl1(cs, "hfcpci spurious 0x08 IRQ");
- } else
- main_rec_hfcpci(bcs);
- }
- if (val & 0x10) {
- if (cs->logecho)
- receive_emsg(cs);
- else if (!(bcs = Sel_BCS(cs, 1))) {
- if (cs->debug)
- debugl1(cs, "hfcpci spurious 0x10 IRQ");
- } else
- main_rec_hfcpci(bcs);
- }
- if (val & 0x01) {
- if (!(bcs = Sel_BCS(cs, cs->hw.hfcpci.bswapped ? 1 : 0))) {
- if (cs->debug)
- debugl1(cs, "hfcpci spurious 0x01 IRQ");
- } else {
- if (bcs->tx_skb) {
- if (!test_and_set_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- hfcpci_fill_fifo(bcs);
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
- } else
- debugl1(cs, "fill_data %d blocked", bcs->channel);
- } else {
- if ((bcs->tx_skb = skb_dequeue(&bcs->squeue))) {
- if (!test_and_set_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- hfcpci_fill_fifo(bcs);
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
- } else
- debugl1(cs, "fill_data %d blocked", bcs->channel);
- } else {
- hfcpci_sched_event(bcs, B_XMTBUFREADY);
- }
- }
- }
- }
- if (val & 0x02) {
- if (!(bcs = Sel_BCS(cs, 1))) {
- if (cs->debug)
- debugl1(cs, "hfcpci spurious 0x02 IRQ");
- } else {
- if (bcs->tx_skb) {
- if (!test_and_set_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- hfcpci_fill_fifo(bcs);
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
- } else
- debugl1(cs, "fill_data %d blocked", bcs->channel);
- } else {
- if ((bcs->tx_skb = skb_dequeue(&bcs->squeue))) {
- if (!test_and_set_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- hfcpci_fill_fifo(bcs);
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
- } else
- debugl1(cs, "fill_data %d blocked", bcs->channel);
- } else {
- hfcpci_sched_event(bcs, B_XMTBUFREADY);
- }
- }
- }
- }
- if (val & 0x20) { /* receive dframe */
- receive_dmsg(cs);
- }
- if (val & 0x04) { /* dframe transmitted */
- if (test_and_clear_bit(FLG_DBUSY_TIMER, &cs->HW_Flags))
- del_timer(&cs->dbusytimer);
- if (test_and_clear_bit(FLG_L1_DBUSY, &cs->HW_Flags))
- sched_event_D_pci(cs, D_CLEARBUSY);
- if (cs->tx_skb) {
- if (cs->tx_skb->len) {
- if (!test_and_set_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- hfcpci_fill_dfifo(cs);
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
- } else {
- debugl1(cs, "hfcpci_fill_dfifo irq blocked");
- }
- goto afterXPR;
- } else {
- dev_kfree_skb_irq(cs->tx_skb);
- cs->tx_cnt = 0;
- cs->tx_skb = NULL;
- }
- }
- if ((cs->tx_skb = skb_dequeue(&cs->sq))) {
- cs->tx_cnt = 0;
- if (!test_and_set_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- hfcpci_fill_dfifo(cs);
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
- } else {
- debugl1(cs, "hfcpci_fill_dfifo irq blocked");
- }
- } else
- sched_event_D_pci(cs, D_XMTBUFREADY);
- }
- afterXPR:
- if (cs->hw.hfcpci.int_s1 && count--) {
- val = cs->hw.hfcpci.int_s1;
- cs->hw.hfcpci.int_s1 = 0;
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "HFC-PCI irq %x loop %d", val, 15 - count);
- } else
- val = 0;
- }
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
-}
-
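Whenever FLG_LOCK_ATOMIC is already held, the handler above does not touch the FIFOs; it parks the pending interrupt bits in int_s1 and replays them on a later pass through the while loop. A simplified, single-threaded sketch of that defer-and-replay control flow (the driver itself serializes these paths with cs->lock and atomic bit operations; the names below are illustrative, not driver API):

#include <stdatomic.h>
#include <stdbool.h>

struct irq_ctx {
	atomic_flag busy;	/* plays the role of FLG_LOCK_ATOMIC;
				 * initialise with ATOMIC_FLAG_INIT */
	unsigned int pending;	/* plays the role of hw.hfcpci.int_s1 */
};

/* Handle 'events' now if the FIFO owner lock is free; otherwise park them
 * so a later pass (or the current lock holder) can replay them. */
static bool handle_or_defer(struct irq_ctx *ctx, unsigned int events,
			    void (*handler)(unsigned int))
{
	if (atomic_flag_test_and_set(&ctx->busy)) {
		ctx->pending |= events;		/* FIFOs busy: defer */
		return false;
	}
	events |= ctx->pending;			/* replay parked events */
	ctx->pending = 0;
	handler(events);
	atomic_flag_clear(&ctx->busy);
	return true;
}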
-/********************************************************************/
-/* timer callback for D-chan busy resolution. Currently no function */
-/********************************************************************/
-static void
-hfcpci_dbusy_timer(struct timer_list *t)
-{
-}
-
-/*************************************/
-/* Layer 1 D-channel hardware access */
-/*************************************/
-static void
-HFCPCI_l1hw(struct PStack *st, int pr, void *arg)
-{
- u_long flags;
- struct IsdnCardState *cs = (struct IsdnCardState *) st->l1.hardware;
- struct sk_buff *skb = arg;
-
- switch (pr) {
- case (PH_DATA | REQUEST):
- if (cs->debug & DEB_DLOG_HEX)
- LogFrame(cs, skb->data, skb->len);
- if (cs->debug & DEB_DLOG_VERBOSE)
- dlogframe(cs, skb, 0);
- spin_lock_irqsave(&cs->lock, flags);
- if (cs->tx_skb) {
- skb_queue_tail(&cs->sq, skb);
-#ifdef L2FRAME_DEBUG /* psa */
- if (cs->debug & L1_DEB_LAPD)
- Logl2Frame(cs, skb, "PH_DATA Queued", 0);
-#endif
- } else {
- cs->tx_skb = skb;
- cs->tx_cnt = 0;
-#ifdef L2FRAME_DEBUG /* psa */
- if (cs->debug & L1_DEB_LAPD)
- Logl2Frame(cs, skb, "PH_DATA", 0);
-#endif
- if (!test_and_set_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- hfcpci_fill_dfifo(cs);
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
- } else
- debugl1(cs, "hfcpci_fill_dfifo blocked");
-
- }
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- case (PH_PULL | INDICATION):
- spin_lock_irqsave(&cs->lock, flags);
- if (cs->tx_skb) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, " l2l1 tx_skb exist this shouldn't happen");
- skb_queue_tail(&cs->sq, skb);
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- }
- if (cs->debug & DEB_DLOG_HEX)
- LogFrame(cs, skb->data, skb->len);
- if (cs->debug & DEB_DLOG_VERBOSE)
- dlogframe(cs, skb, 0);
- cs->tx_skb = skb;
- cs->tx_cnt = 0;
-#ifdef L2FRAME_DEBUG /* psa */
- if (cs->debug & L1_DEB_LAPD)
- Logl2Frame(cs, skb, "PH_DATA_PULLED", 0);
-#endif
- if (!test_and_set_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- hfcpci_fill_dfifo(cs);
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
- } else
- debugl1(cs, "hfcpci_fill_dfifo blocked");
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- case (PH_PULL | REQUEST):
-#ifdef L2FRAME_DEBUG /* psa */
- if (cs->debug & L1_DEB_LAPD)
- debugl1(cs, "-> PH_REQUEST_PULL");
-#endif
- spin_lock_irqsave(&cs->lock, flags);
- if (!cs->tx_skb) {
- test_and_clear_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- st->l1.l1l2(st, PH_PULL | CONFIRM, NULL);
- } else
- test_and_set_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- case (HW_RESET | REQUEST):
- spin_lock_irqsave(&cs->lock, flags);
- Write_hfc(cs, HFCPCI_STATES, HFCPCI_LOAD_STATE | 3); /* HFC ST 3 */
- udelay(6);
- Write_hfc(cs, HFCPCI_STATES, 3); /* HFC ST 2 */
- cs->hw.hfcpci.mst_m |= HFCPCI_MASTER;
- Write_hfc(cs, HFCPCI_MST_MODE, cs->hw.hfcpci.mst_m);
- Write_hfc(cs, HFCPCI_STATES, HFCPCI_ACTIVATE | HFCPCI_DO_ACTION);
- spin_unlock_irqrestore(&cs->lock, flags);
- l1_msg(cs, HW_POWERUP | CONFIRM, NULL);
- break;
- case (HW_ENABLE | REQUEST):
- spin_lock_irqsave(&cs->lock, flags);
- Write_hfc(cs, HFCPCI_STATES, HFCPCI_DO_ACTION);
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- case (HW_DEACTIVATE | REQUEST):
- spin_lock_irqsave(&cs->lock, flags);
- cs->hw.hfcpci.mst_m &= ~HFCPCI_MASTER;
- Write_hfc(cs, HFCPCI_MST_MODE, cs->hw.hfcpci.mst_m);
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- case (HW_INFO3 | REQUEST):
- spin_lock_irqsave(&cs->lock, flags);
- cs->hw.hfcpci.mst_m |= HFCPCI_MASTER;
- Write_hfc(cs, HFCPCI_MST_MODE, cs->hw.hfcpci.mst_m);
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- case (HW_TESTLOOP | REQUEST):
- spin_lock_irqsave(&cs->lock, flags);
- switch ((long) arg) {
- case (1):
- Write_hfc(cs, HFCPCI_B1_SSL, 0x80); /* tx slot */
- Write_hfc(cs, HFCPCI_B1_RSL, 0x80); /* rx slot */
- cs->hw.hfcpci.conn = (cs->hw.hfcpci.conn & ~7) | 1;
- Write_hfc(cs, HFCPCI_CONNECT, cs->hw.hfcpci.conn);
- break;
-
- case (2):
- Write_hfc(cs, HFCPCI_B2_SSL, 0x81); /* tx slot */
- Write_hfc(cs, HFCPCI_B2_RSL, 0x81); /* rx slot */
- cs->hw.hfcpci.conn = (cs->hw.hfcpci.conn & ~0x38) | 0x08;
- Write_hfc(cs, HFCPCI_CONNECT, cs->hw.hfcpci.conn);
- break;
-
- default:
- spin_unlock_irqrestore(&cs->lock, flags);
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "hfcpci_l1hw loop invalid %4lx", (long) arg);
- return;
- }
- cs->hw.hfcpci.trm |= 0x80; /* enable IOM-loop */
- Write_hfc(cs, HFCPCI_TRM, cs->hw.hfcpci.trm);
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- default:
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "hfcpci_l1hw unknown pr %4x", pr);
- break;
- }
-}
-
-/***********************************************/
-/* called during init setting l1 stack pointer */
-/***********************************************/
-static void
-setstack_hfcpci(struct PStack *st, struct IsdnCardState *cs)
-{
- st->l1.l1hw = HFCPCI_l1hw;
-}
-
-/**************************************/
-/* send B-channel data if not blocked */
-/**************************************/
-static void
-hfcpci_send_data(struct BCState *bcs)
-{
- struct IsdnCardState *cs = bcs->cs;
-
- if (!test_and_set_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- hfcpci_fill_fifo(bcs);
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
- } else
- debugl1(cs, "send_data %d blocked", bcs->channel);
-}
-
-/***************************************************************/
-/* activate/deactivate hardware for selected channels and mode */
-/***************************************************************/
-static void
-mode_hfcpci(struct BCState *bcs, int mode, int bc)
-{
- struct IsdnCardState *cs = bcs->cs;
- int fifo2;
-
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "HFCPCI bchannel mode %d bchan %d/%d",
- mode, bc, bcs->channel);
- bcs->mode = mode;
- bcs->channel = bc;
- fifo2 = bc;
- if (cs->chanlimit > 1) {
- cs->hw.hfcpci.bswapped = 0; /* B1 and B2 normal mode */
- cs->hw.hfcpci.sctrl_e &= ~0x80;
- } else {
- if (bc) {
- if (mode != L1_MODE_NULL) {
- cs->hw.hfcpci.bswapped = 1; /* B1 and B2 exchanged */
- cs->hw.hfcpci.sctrl_e |= 0x80;
- } else {
- cs->hw.hfcpci.bswapped = 0; /* B1 and B2 normal mode */
- cs->hw.hfcpci.sctrl_e &= ~0x80;
- }
- fifo2 = 0;
- } else {
- cs->hw.hfcpci.bswapped = 0; /* B1 and B2 normal mode */
- cs->hw.hfcpci.sctrl_e &= ~0x80;
- }
- }
- switch (mode) {
- case (L1_MODE_NULL):
- if (bc) {
- cs->hw.hfcpci.sctrl &= ~SCTRL_B2_ENA;
- cs->hw.hfcpci.sctrl_r &= ~SCTRL_B2_ENA;
- } else {
- cs->hw.hfcpci.sctrl &= ~SCTRL_B1_ENA;
- cs->hw.hfcpci.sctrl_r &= ~SCTRL_B1_ENA;
- }
- if (fifo2) {
- cs->hw.hfcpci.fifo_en &= ~HFCPCI_FIFOEN_B2;
- cs->hw.hfcpci.int_m1 &= ~(HFCPCI_INTS_B2TRANS + HFCPCI_INTS_B2REC);
- } else {
- cs->hw.hfcpci.fifo_en &= ~HFCPCI_FIFOEN_B1;
- cs->hw.hfcpci.int_m1 &= ~(HFCPCI_INTS_B1TRANS + HFCPCI_INTS_B1REC);
- }
- break;
- case (L1_MODE_TRANS):
- hfcpci_clear_fifo_rx(cs, fifo2);
- hfcpci_clear_fifo_tx(cs, fifo2);
- if (bc) {
- cs->hw.hfcpci.sctrl |= SCTRL_B2_ENA;
- cs->hw.hfcpci.sctrl_r |= SCTRL_B2_ENA;
- } else {
- cs->hw.hfcpci.sctrl |= SCTRL_B1_ENA;
- cs->hw.hfcpci.sctrl_r |= SCTRL_B1_ENA;
- }
- if (fifo2) {
- cs->hw.hfcpci.fifo_en |= HFCPCI_FIFOEN_B2;
- cs->hw.hfcpci.int_m1 |= (HFCPCI_INTS_B2TRANS + HFCPCI_INTS_B2REC);
- cs->hw.hfcpci.ctmt |= 2;
- cs->hw.hfcpci.conn &= ~0x18;
- } else {
- cs->hw.hfcpci.fifo_en |= HFCPCI_FIFOEN_B1;
- cs->hw.hfcpci.int_m1 |= (HFCPCI_INTS_B1TRANS + HFCPCI_INTS_B1REC);
- cs->hw.hfcpci.ctmt |= 1;
- cs->hw.hfcpci.conn &= ~0x03;
- }
- break;
- case (L1_MODE_HDLC):
- hfcpci_clear_fifo_rx(cs, fifo2);
- hfcpci_clear_fifo_tx(cs, fifo2);
- if (bc) {
- cs->hw.hfcpci.sctrl |= SCTRL_B2_ENA;
- cs->hw.hfcpci.sctrl_r |= SCTRL_B2_ENA;
- } else {
- cs->hw.hfcpci.sctrl |= SCTRL_B1_ENA;
- cs->hw.hfcpci.sctrl_r |= SCTRL_B1_ENA;
- }
- if (fifo2) {
- cs->hw.hfcpci.last_bfifo_cnt[1] = 0;
- cs->hw.hfcpci.fifo_en |= HFCPCI_FIFOEN_B2;
- cs->hw.hfcpci.int_m1 |= (HFCPCI_INTS_B2TRANS + HFCPCI_INTS_B2REC);
- cs->hw.hfcpci.ctmt &= ~2;
- cs->hw.hfcpci.conn &= ~0x18;
- } else {
- cs->hw.hfcpci.last_bfifo_cnt[0] = 0;
- cs->hw.hfcpci.fifo_en |= HFCPCI_FIFOEN_B1;
- cs->hw.hfcpci.int_m1 |= (HFCPCI_INTS_B1TRANS + HFCPCI_INTS_B1REC);
- cs->hw.hfcpci.ctmt &= ~1;
- cs->hw.hfcpci.conn &= ~0x03;
- }
- break;
- case (L1_MODE_EXTRN):
- if (bc) {
- cs->hw.hfcpci.conn |= 0x10;
- cs->hw.hfcpci.sctrl |= SCTRL_B2_ENA;
- cs->hw.hfcpci.sctrl_r |= SCTRL_B2_ENA;
- cs->hw.hfcpci.fifo_en &= ~HFCPCI_FIFOEN_B2;
- cs->hw.hfcpci.int_m1 &= ~(HFCPCI_INTS_B2TRANS + HFCPCI_INTS_B2REC);
- } else {
- cs->hw.hfcpci.conn |= 0x02;
- cs->hw.hfcpci.sctrl |= SCTRL_B1_ENA;
- cs->hw.hfcpci.sctrl_r |= SCTRL_B1_ENA;
- cs->hw.hfcpci.fifo_en &= ~HFCPCI_FIFOEN_B1;
- cs->hw.hfcpci.int_m1 &= ~(HFCPCI_INTS_B1TRANS + HFCPCI_INTS_B1REC);
- }
- break;
- }
- Write_hfc(cs, HFCPCI_SCTRL_E, cs->hw.hfcpci.sctrl_e);
- Write_hfc(cs, HFCPCI_INT_M1, cs->hw.hfcpci.int_m1);
- Write_hfc(cs, HFCPCI_FIFO_EN, cs->hw.hfcpci.fifo_en);
- Write_hfc(cs, HFCPCI_SCTRL, cs->hw.hfcpci.sctrl);
- Write_hfc(cs, HFCPCI_SCTRL_R, cs->hw.hfcpci.sctrl_r);
- Write_hfc(cs, HFCPCI_CTMT, cs->hw.hfcpci.ctmt);
- Write_hfc(cs, HFCPCI_CONNECT, cs->hw.hfcpci.conn);
-}
-
-/******************************/
-/* Layer2 -> Layer 1 Transfer */
-/******************************/
-static void
-hfcpci_l2l1(struct PStack *st, int pr, void *arg)
-{
- struct BCState *bcs = st->l1.bcs;
- u_long flags;
- struct sk_buff *skb = arg;
-
- switch (pr) {
- case (PH_DATA | REQUEST):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- if (bcs->tx_skb) {
- skb_queue_tail(&bcs->squeue, skb);
- } else {
- bcs->tx_skb = skb;
-// test_and_set_bit(BC_FLG_BUSY, &bcs->Flag);
- bcs->cs->BC_Send_Data(bcs);
- }
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- break;
- case (PH_PULL | INDICATION):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- if (bcs->tx_skb) {
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- printk(KERN_WARNING "hfc_l2l1: this shouldn't happen\n");
- break;
- }
-// test_and_set_bit(BC_FLG_BUSY, &bcs->Flag);
- bcs->tx_skb = skb;
- bcs->cs->BC_Send_Data(bcs);
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- break;
- case (PH_PULL | REQUEST):
- if (!bcs->tx_skb) {
- test_and_clear_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- st->l1.l1l2(st, PH_PULL | CONFIRM, NULL);
- } else
- test_and_set_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- break;
- case (PH_ACTIVATE | REQUEST):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- test_and_set_bit(BC_FLG_ACTIV, &bcs->Flag);
- mode_hfcpci(bcs, st->l1.mode, st->l1.bc);
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- l1_msg_b(st, pr, arg);
- break;
- case (PH_DEACTIVATE | REQUEST):
- l1_msg_b(st, pr, arg);
- break;
- case (PH_DEACTIVATE | CONFIRM):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- test_and_clear_bit(BC_FLG_ACTIV, &bcs->Flag);
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- mode_hfcpci(bcs, 0, st->l1.bc);
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- st->l1.l1l2(st, PH_DEACTIVATE | CONFIRM, NULL);
- break;
- }
-}
-
-/******************************************/
-/* deactivate B-channel access and queues */
-/******************************************/
-static void
-close_hfcpci(struct BCState *bcs)
-{
- mode_hfcpci(bcs, 0, bcs->channel);
- if (test_and_clear_bit(BC_FLG_INIT, &bcs->Flag)) {
- skb_queue_purge(&bcs->rqueue);
- skb_queue_purge(&bcs->squeue);
- if (bcs->tx_skb) {
- dev_kfree_skb_any(bcs->tx_skb);
- bcs->tx_skb = NULL;
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- }
- }
-}
-
-/*************************************/
-/* init B-channel queues and control */
-/*************************************/
-static int
-open_hfcpcistate(struct IsdnCardState *cs, struct BCState *bcs)
-{
- if (!test_and_set_bit(BC_FLG_INIT, &bcs->Flag)) {
- skb_queue_head_init(&bcs->rqueue);
- skb_queue_head_init(&bcs->squeue);
- }
- bcs->tx_skb = NULL;
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- bcs->event = 0;
- bcs->tx_cnt = 0;
- return (0);
-}
-
-/*********************************/
-/* inits the stack for B-channel */
-/*********************************/
-static int
-setstack_2b(struct PStack *st, struct BCState *bcs)
-{
- bcs->channel = st->l1.bc;
- if (open_hfcpcistate(st->l1.hardware, bcs))
- return (-1);
- st->l1.bcs = bcs;
- st->l2.l2l1 = hfcpci_l2l1;
- setstack_manager(st);
- bcs->st = st;
- setstack_l1_B(st);
- return (0);
-}
-
-/***************************/
-/* handle L1 state changes */
-/***************************/
-static void
-hfcpci_bh(struct work_struct *work)
-{
- struct IsdnCardState *cs =
- container_of(work, struct IsdnCardState, tqueue);
- u_long flags;
-// struct PStack *stptr;
-
- if (test_and_clear_bit(D_L1STATECHANGE, &cs->event)) {
- if (!cs->hw.hfcpci.nt_mode)
- switch (cs->dc.hfcpci.ph_state) {
- case (0):
- l1_msg(cs, HW_RESET | INDICATION, NULL);
- break;
- case (3):
- l1_msg(cs, HW_DEACTIVATE | INDICATION, NULL);
- break;
- case (8):
- l1_msg(cs, HW_RSYNC | INDICATION, NULL);
- break;
- case (6):
- l1_msg(cs, HW_INFO2 | INDICATION, NULL);
- break;
- case (7):
- l1_msg(cs, HW_INFO4_P8 | INDICATION, NULL);
- break;
- default:
- break;
- } else {
- spin_lock_irqsave(&cs->lock, flags);
- switch (cs->dc.hfcpci.ph_state) {
- case (2):
- if (cs->hw.hfcpci.nt_timer < 0) {
- cs->hw.hfcpci.nt_timer = 0;
- cs->hw.hfcpci.int_m1 &= ~HFCPCI_INTS_TIMER;
- Write_hfc(cs, HFCPCI_INT_M1, cs->hw.hfcpci.int_m1);
- /* Clear already pending ints */
- Read_hfc(cs, HFCPCI_INT_S1);
- Write_hfc(cs, HFCPCI_STATES, 4 | HFCPCI_LOAD_STATE);
- udelay(10);
- Write_hfc(cs, HFCPCI_STATES, 4);
- cs->dc.hfcpci.ph_state = 4;
- } else {
- cs->hw.hfcpci.int_m1 |= HFCPCI_INTS_TIMER;
- Write_hfc(cs, HFCPCI_INT_M1, cs->hw.hfcpci.int_m1);
- cs->hw.hfcpci.ctmt &= ~HFCPCI_AUTO_TIMER;
- cs->hw.hfcpci.ctmt |= HFCPCI_TIM3_125;
- Write_hfc(cs, HFCPCI_CTMT, cs->hw.hfcpci.ctmt | HFCPCI_CLTIMER);
- Write_hfc(cs, HFCPCI_CTMT, cs->hw.hfcpci.ctmt | HFCPCI_CLTIMER);
- cs->hw.hfcpci.nt_timer = NT_T1_COUNT;
- Write_hfc(cs, HFCPCI_STATES, 2 | HFCPCI_NT_G2_G3); /* allow G2 -> G3 transition */
- }
- break;
- case (1):
- case (3):
- case (4):
- cs->hw.hfcpci.nt_timer = 0;
- cs->hw.hfcpci.int_m1 &= ~HFCPCI_INTS_TIMER;
- Write_hfc(cs, HFCPCI_INT_M1, cs->hw.hfcpci.int_m1);
- break;
- default:
- break;
- }
- spin_unlock_irqrestore(&cs->lock, flags);
- }
- }
- if (test_and_clear_bit(D_RCVBUFREADY, &cs->event))
- DChannel_proc_rcv(cs);
- if (test_and_clear_bit(D_XMTBUFREADY, &cs->event))
- DChannel_proc_xmt(cs);
-}
-
-
-/********************************/
-/* called for card init message */
-/********************************/
-static void
-inithfcpci(struct IsdnCardState *cs)
-{
- cs->bcs[0].BC_SetStack = setstack_2b;
- cs->bcs[1].BC_SetStack = setstack_2b;
- cs->bcs[0].BC_Close = close_hfcpci;
- cs->bcs[1].BC_Close = close_hfcpci;
- timer_setup(&cs->dbusytimer, hfcpci_dbusy_timer, 0);
- mode_hfcpci(cs->bcs, 0, 0);
- mode_hfcpci(cs->bcs + 1, 0, 1);
-}
-
-
-
-/*******************************************/
-/* handle card messages from control layer */
-/*******************************************/
-static int
-hfcpci_card_msg(struct IsdnCardState *cs, int mt, void *arg)
-{
- u_long flags;
-
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "HFCPCI: card_msg %x", mt);
- switch (mt) {
- case CARD_RESET:
- spin_lock_irqsave(&cs->lock, flags);
- reset_hfcpci(cs);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_RELEASE:
- release_io_hfcpci(cs);
- return (0);
- case CARD_INIT:
- spin_lock_irqsave(&cs->lock, flags);
- inithfcpci(cs);
- reset_hfcpci(cs);
- spin_unlock_irqrestore(&cs->lock, flags);
- msleep(80); /* Timeout 80ms */
- /* now switch timer interrupt off */
- spin_lock_irqsave(&cs->lock, flags);
- cs->hw.hfcpci.int_m1 &= ~HFCPCI_INTS_TIMER;
- Write_hfc(cs, HFCPCI_INT_M1, cs->hw.hfcpci.int_m1);
- /* reinit mode reg */
- Write_hfc(cs, HFCPCI_MST_MODE, cs->hw.hfcpci.mst_m);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_TEST:
- return (0);
- }
- return (0);
-}
-
-
-/* this variable is used as card index when more than one card is present */
-static struct pci_dev *dev_hfcpci = NULL;
-
-int
-setup_hfcpci(struct IsdnCard *card)
-{
- u_long flags;
- struct IsdnCardState *cs = card->cs;
- char tmp[64];
- int i;
- struct pci_dev *tmp_hfcpci = NULL;
-
- strcpy(tmp, hfcpci_revision);
- printk(KERN_INFO "HiSax: HFC-PCI driver Rev. %s\n", HiSax_getrev(tmp));
-
- cs->hw.hfcpci.int_s1 = 0;
- cs->dc.hfcpci.ph_state = 0;
- cs->hw.hfcpci.fifo = 255;
- if (cs->typ != ISDN_CTYPE_HFC_PCI)
- return (0);
-
- i = 0;
- while (id_list[i].vendor_id) {
- tmp_hfcpci = hisax_find_pci_device(id_list[i].vendor_id,
- id_list[i].device_id,
- dev_hfcpci);
- i++;
- if (tmp_hfcpci) {
- dma_addr_t dma_mask = DMA_BIT_MASK(32) & ~0x7fffUL;
- if (pci_enable_device(tmp_hfcpci))
- continue;
- if (pci_set_dma_mask(tmp_hfcpci, dma_mask)) {
- printk(KERN_WARNING
- "HiSax hfc_pci: No suitable DMA available.\n");
- continue;
- }
- if (pci_set_consistent_dma_mask(tmp_hfcpci, dma_mask)) {
- printk(KERN_WARNING
- "HiSax hfc_pci: No suitable consistent DMA available.\n");
- continue;
- }
- pci_set_master(tmp_hfcpci);
- if ((card->para[0]) && (card->para[0] != (tmp_hfcpci->resource[0].start & PCI_BASE_ADDRESS_IO_MASK)))
- continue;
- else
- break;
- }
- }
-
- if (!tmp_hfcpci) {
- printk(KERN_WARNING "HFC-PCI: No PCI card found\n");
- return (0);
- }
-
- i--;
- dev_hfcpci = tmp_hfcpci; /* old device */
- cs->hw.hfcpci.dev = dev_hfcpci;
- cs->irq = dev_hfcpci->irq;
- if (!cs->irq) {
- printk(KERN_WARNING "HFC-PCI: No IRQ for PCI card found\n");
- return (0);
- }
- cs->hw.hfcpci.pci_io = ioremap(dev_hfcpci->resource[1].start, 256);
- printk(KERN_INFO "HiSax: HFC-PCI card manufacturer: %s card name: %s\n", id_list[i].vendor_name, id_list[i].card_name);
-
- if (!cs->hw.hfcpci.pci_io) {
- printk(KERN_WARNING "HFC-PCI: No IO-Mem for PCI card found\n");
- return (0);
- }
-
- /* Allocate memory for FIFOS */
- cs->hw.hfcpci.fifos = pci_alloc_consistent(cs->hw.hfcpci.dev,
- 0x8000, &cs->hw.hfcpci.dma);
- if (!cs->hw.hfcpci.fifos) {
- printk(KERN_WARNING "HFC-PCI: Error allocating FIFO memory!\n");
- return 0;
- }
- if (cs->hw.hfcpci.dma & 0x7fff) {
- printk(KERN_WARNING
- "HFC-PCI: Error DMA memory not on 32K boundary (%lx)\n",
- (u_long)cs->hw.hfcpci.dma);
- pci_free_consistent(cs->hw.hfcpci.dev, 0x8000,
- cs->hw.hfcpci.fifos, cs->hw.hfcpci.dma);
- return 0;
- }
- pci_write_config_dword(cs->hw.hfcpci.dev, 0x80, (u32)cs->hw.hfcpci.dma);
- printk(KERN_INFO
- "HFC-PCI: defined at mem %p fifo %p(%lx) IRQ %d HZ %d\n",
- cs->hw.hfcpci.pci_io,
- cs->hw.hfcpci.fifos,
- (u_long)cs->hw.hfcpci.dma,
- cs->irq, HZ);
-
- spin_lock_irqsave(&cs->lock, flags);
-
- pci_write_config_word(cs->hw.hfcpci.dev, PCI_COMMAND, PCI_ENA_MEMIO); /* enable memory mapped ports, disable busmaster */
-	cs->hw.hfcpci.int_m2 = 0; /* disable all interrupts */
- cs->hw.hfcpci.int_m1 = 0;
- Write_hfc(cs, HFCPCI_INT_M1, cs->hw.hfcpci.int_m1);
- Write_hfc(cs, HFCPCI_INT_M2, cs->hw.hfcpci.int_m2);
- /* At this point the needed PCI config is done */
- /* fifos are still not enabled */
-
- INIT_WORK(&cs->tqueue, hfcpci_bh);
- cs->setstack_d = setstack_hfcpci;
- cs->BC_Send_Data = &hfcpci_send_data;
- cs->readisac = NULL;
- cs->writeisac = NULL;
- cs->readisacfifo = NULL;
- cs->writeisacfifo = NULL;
- cs->BC_Read_Reg = NULL;
- cs->BC_Write_Reg = NULL;
- cs->irq_func = &hfcpci_interrupt;
- cs->irq_flags |= IRQF_SHARED;
- timer_setup(&cs->hw.hfcpci.timer, hfcpci_Timer, 0);
- cs->cardmsg = &hfcpci_card_msg;
- cs->auxcmd = &hfcpci_auxcmd;
-
- spin_unlock_irqrestore(&cs->lock, flags);
-
- return (1);
-}
diff --git a/drivers/isdn/hisax/hfc_pci.h b/drivers/isdn/hisax/hfc_pci.h
deleted file mode 100644
index 4c3b3ba35726..000000000000
--- a/drivers/isdn/hisax/hfc_pci.h
+++ /dev/null
@@ -1,235 +0,0 @@
-/* $Id: hfc_pci.h,v 1.10.2.2 2004/01/12 22:52:26 keil Exp $
- *
- * specific defines for CCD's HFC 2BDS0 PCI chips
- *
- * Author Werner Cornelius
- * Copyright by Werner Cornelius <werner@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-/*********************************************/
-/* thresholds for transparent B-channel mode */
-/* change mask and threshold simultaneously */
-/*********************************************/
-#define HFCPCI_BTRANS_THRESHOLD 128
-#define HFCPCI_BTRANS_THRESMASK 0x00
-
-
-
-/* defines for PCI config */
-
-#define PCI_ENA_MEMIO 0x02
-#define PCI_ENA_MASTER 0x04
-
-
-/* GCI/IOM bus monitor registers */
-
-#define HCFPCI_C_I 0x08
-#define HFCPCI_TRxR 0x0C
-#define HFCPCI_MON1_D 0x28
-#define HFCPCI_MON2_D 0x2C
-
-
-/* GCI/IOM bus timeslot registers */
-
-#define HFCPCI_B1_SSL 0x80
-#define HFCPCI_B2_SSL 0x84
-#define HFCPCI_AUX1_SSL 0x88
-#define HFCPCI_AUX2_SSL 0x8C
-#define HFCPCI_B1_RSL 0x90
-#define HFCPCI_B2_RSL 0x94
-#define HFCPCI_AUX1_RSL 0x98
-#define HFCPCI_AUX2_RSL 0x9C
-
-/* GCI/IOM bus data registers */
-
-#define HFCPCI_B1_D 0xA0
-#define HFCPCI_B2_D 0xA4
-#define HFCPCI_AUX1_D 0xA8
-#define HFCPCI_AUX2_D 0xAC
-
-/* GCI/IOM bus configuration registers */
-
-#define HFCPCI_MST_EMOD 0xB4
-#define HFCPCI_MST_MODE 0xB8
-#define HFCPCI_CONNECT 0xBC
-
-
-/* Interrupt and status registers */
-
-#define HFCPCI_FIFO_EN 0x44
-#define HFCPCI_TRM 0x48
-#define HFCPCI_B_MODE 0x4C
-#define HFCPCI_CHIP_ID 0x58
-#define HFCPCI_CIRM 0x60
-#define HFCPCI_CTMT 0x64
-#define HFCPCI_INT_M1 0x68
-#define HFCPCI_INT_M2 0x6C
-#define HFCPCI_INT_S1 0x78
-#define HFCPCI_INT_S2 0x7C
-#define HFCPCI_STATUS 0x70
-
-/* S/T section registers */
-
-#define HFCPCI_STATES 0xC0
-#define HFCPCI_SCTRL 0xC4
-#define HFCPCI_SCTRL_E 0xC8
-#define HFCPCI_SCTRL_R 0xCC
-#define HFCPCI_SQ 0xD0
-#define HFCPCI_CLKDEL 0xDC
-#define HFCPCI_B1_REC 0xF0
-#define HFCPCI_B1_SEND 0xF0
-#define HFCPCI_B2_REC 0xF4
-#define HFCPCI_B2_SEND 0xF4
-#define HFCPCI_D_REC 0xF8
-#define HFCPCI_D_SEND 0xF8
-#define HFCPCI_E_REC 0xFC
-
-
-/* bits in status register (READ) */
-#define HFCPCI_PCI_PROC 0x02
-#define HFCPCI_NBUSY 0x04
-#define HFCPCI_TIMER_ELAP 0x10
-#define HFCPCI_STATINT 0x20
-#define HFCPCI_FRAMEINT 0x40
-#define HFCPCI_ANYINT 0x80
-
-/* bits in CTMT (Write) */
-#define HFCPCI_CLTIMER 0x80
-#define HFCPCI_TIM3_125 0x04
-#define HFCPCI_TIM25 0x10
-#define HFCPCI_TIM50 0x14
-#define HFCPCI_TIM400 0x18
-#define HFCPCI_TIM800 0x1C
-#define HFCPCI_AUTO_TIMER 0x20
-#define HFCPCI_TRANSB2 0x02
-#define HFCPCI_TRANSB1 0x01
-
-/* bits in CIRM (Write) */
-#define HFCPCI_AUX_MSK 0x07
-#define HFCPCI_RESET 0x08
-#define HFCPCI_B1_REV 0x40
-#define HFCPCI_B2_REV 0x80
-
-/* bits in INT_M1 and INT_S1 */
-#define HFCPCI_INTS_B1TRANS 0x01
-#define HFCPCI_INTS_B2TRANS 0x02
-#define HFCPCI_INTS_DTRANS 0x04
-#define HFCPCI_INTS_B1REC 0x08
-#define HFCPCI_INTS_B2REC 0x10
-#define HFCPCI_INTS_DREC 0x20
-#define HFCPCI_INTS_L1STATE 0x40
-#define HFCPCI_INTS_TIMER 0x80
-
-/* bits in INT_M2 */
-#define HFCPCI_PROC_TRANS 0x01
-#define HFCPCI_GCI_I_CHG 0x02
-#define HFCPCI_GCI_MON_REC 0x04
-#define HFCPCI_IRQ_ENABLE 0x08
-#define HFCPCI_PMESEL 0x80
-
-/* bits in STATES */
-#define HFCPCI_STATE_MSK 0x0F
-#define HFCPCI_LOAD_STATE 0x10
-#define HFCPCI_ACTIVATE 0x20
-#define HFCPCI_DO_ACTION 0x40
-#define HFCPCI_NT_G2_G3 0x80
-
-/* bits in HFCD_MST_MODE */
-#define HFCPCI_MASTER 0x01
-#define HFCPCI_SLAVE 0x00
-/* remaining bits are for codecs control */
-
-/* bits in HFCD_SCTRL */
-#define SCTRL_B1_ENA 0x01
-#define SCTRL_B2_ENA 0x02
-#define SCTRL_MODE_TE 0x00
-#define SCTRL_MODE_NT 0x04
-#define SCTRL_LOW_PRIO 0x08
-#define SCTRL_SQ_ENA 0x10
-#define SCTRL_TEST 0x20
-#define SCTRL_NONE_CAP 0x40
-#define SCTRL_PWR_DOWN 0x80
-
-/* bits in SCTRL_E */
-#define HFCPCI_AUTO_AWAKE 0x01
-#define HFCPCI_DBIT_1 0x04
-#define HFCPCI_IGNORE_COL 0x08
-#define HFCPCI_CHG_B1_B2 0x80
-
-/****************************/
-/* bits in FIFO_EN register */
-/****************************/
-#define HFCPCI_FIFOEN_B1 0x03
-#define HFCPCI_FIFOEN_B2 0x0C
-#define HFCPCI_FIFOEN_DTX 0x10
-#define HFCPCI_FIFOEN_B1TX 0x01
-#define HFCPCI_FIFOEN_B1RX 0x02
-#define HFCPCI_FIFOEN_B2TX 0x04
-#define HFCPCI_FIFOEN_B2RX 0x08
-
-
-/***********************************/
-/* definitions of fifo memory area */
-/***********************************/
-#define MAX_D_FRAMES 15
-#define MAX_B_FRAMES 31
-#define B_SUB_VAL 0x200
-#define B_FIFO_SIZE (0x2000 - B_SUB_VAL)
-#define D_FIFO_SIZE 512
-#define D_FREG_MASK 0xF
-
-typedef struct {
- unsigned short z1; /* Z1 pointer 16 Bit */
- unsigned short z2; /* Z2 pointer 16 Bit */
-} z_type;
-
-typedef struct {
- u_char data[D_FIFO_SIZE]; /* FIFO data space */
- u_char fill1[0x20A0 - D_FIFO_SIZE]; /* reserved, do not use */
- u_char f1, f2; /* f pointers */
- u_char fill2[0x20C0 - 0x20A2]; /* reserved, do not use */
- z_type za[MAX_D_FRAMES + 1]; /* mask index with D_FREG_MASK for access */
- u_char fill3[0x4000 - 0x2100]; /* align 16K */
-} dfifo_type;
-
-typedef struct {
- z_type za[MAX_B_FRAMES + 1]; /* only range 0x0..0x1F allowed */
- u_char f1, f2; /* f pointers */
- u_char fill[0x2100 - 0x2082]; /* alignment */
-} bzfifo_type;
-
-
-typedef union {
- struct {
- dfifo_type d_tx; /* D-send channel */
- dfifo_type d_rx; /* D-receive channel */
- } d_chan;
- struct {
- u_char fill1[0x200];
- u_char txdat_b1[B_FIFO_SIZE];
- bzfifo_type txbz_b1;
-
- bzfifo_type txbz_b2;
- u_char txdat_b2[B_FIFO_SIZE];
-
- u_char fill2[D_FIFO_SIZE];
-
- u_char rxdat_b1[B_FIFO_SIZE];
- bzfifo_type rxbz_b1;
-
- bzfifo_type rxbz_b2;
- u_char rxdat_b2[B_FIFO_SIZE];
- } b_chans;
- u_char fill[32768];
-} fifo_area;
-
-
-#define Write_hfc(a, b, c) (writeb(c, (a->hw.hfcpci.pci_io) + b))
-#define Read_hfc(a, b) (readb((a->hw.hfcpci.pci_io) + b))
-
-extern void main_irq_hcpci(struct BCState *bcs);
-extern void releasehfcpci(struct IsdnCardState *cs);
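The Z and F counters laid out above are what every FIFO routine in hfc_pci.c turns into byte and frame counts. A standalone sketch of those two calculations, with constants mirroring B_FIFO_SIZE and MAX_B_FRAMES (the SK_ names are local stand-ins, not part of this header):

#define SK_B_FIFO_SIZE	(0x2000 - 0x200)	/* mirrors B_FIFO_SIZE */
#define SK_MAX_B_FRAMES	31			/* mirrors MAX_B_FRAMES */

/* Bytes occupied by the frame delimited by one za[] entry; one is added
 * because the Z range is treated as inclusive (see the rcnt++ steps in the
 * receive routines). */
static int frame_bytes(unsigned short z1, unsigned short z2)
{
	int rcnt = z1 - z2;

	if (rcnt < 0)
		rcnt += SK_B_FIFO_SIZE;		/* frame wraps the ring */
	return rcnt + 1;
}

/* Number of complete frames currently buffered, from the F counters. */
static int frames_buffered(unsigned char f1, unsigned char f2)
{
	int fcnt = f1 - f2;

	if (fcnt < 0)
		fcnt += SK_MAX_B_FRAMES + 1;	/* F counter wrapped */
	return fcnt;
}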
diff --git a/drivers/isdn/hisax/hfc_sx.c b/drivers/isdn/hisax/hfc_sx.c
deleted file mode 100644
index 12af628d9b2c..000000000000
--- a/drivers/isdn/hisax/hfc_sx.c
+++ /dev/null
@@ -1,1517 +0,0 @@
-/* $Id: hfc_sx.c,v 1.12.2.5 2004/02/11 13:21:33 keil Exp $
- *
- * level driver for Cologne Chip Designs hfc-s+/sp based cards
- *
- * Author Werner Cornelius
- * based on existing driver for CCD HFC PCI cards
- * Copyright by Werner Cornelius <werner@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#include <linux/init.h>
-#include "hisax.h"
-#include "hfc_sx.h"
-#include "isdnl1.h"
-#include <linux/interrupt.h>
-#include <linux/isapnp.h>
-#include <linux/slab.h>
-
-static const char *hfcsx_revision = "$Revision: 1.12.2.5 $";
-
-/***************************************/
-/* IRQ-table for CCDs demo board */
-/* IRQs 6,5,10,11,12,15 are supported */
-/***************************************/
-
-/* Teles 16.3c Vendor Id TAG2620, Version 1.0, Vendor version 2.1
- *
- * Thanks to Uwe Wisniewski
- *
- * ISA-SLOT Signal PIN
- * B25 IRQ3 92 IRQ_G
- * B23 IRQ5 94 IRQ_A
- * B4 IRQ2/9 95 IRQ_B
- * D3 IRQ10 96 IRQ_C
- * D4 IRQ11 97 IRQ_D
- * D5 IRQ12 98 IRQ_E
- * D6 IRQ15 99 IRQ_F
- */
-
-#undef CCD_DEMO_BOARD
-#ifdef CCD_DEMO_BOARD
-static u_char ccd_sp_irqtab[16] = {
- 0, 0, 0, 0, 0, 2, 1, 0, 0, 0, 3, 4, 5, 0, 0, 6
-};
-#else /* Teles 16.3c */
-static u_char ccd_sp_irqtab[16] = {
- 0, 0, 0, 7, 0, 1, 0, 0, 0, 2, 3, 4, 5, 0, 0, 6
-};
-#endif
-#define NT_T1_COUNT 20 /* number of 3.125ms interrupts for G2 timeout */
-
-#define byteout(addr, val) outb(val, addr)
-#define bytein(addr) inb(addr)
-
-/******************************/
-/* In/Out access to registers */
-/******************************/
-static inline void
-Write_hfc(struct IsdnCardState *cs, u_char regnum, u_char val)
-{
- byteout(cs->hw.hfcsx.base + 1, regnum);
- byteout(cs->hw.hfcsx.base, val);
-}
-
-static inline u_char
-Read_hfc(struct IsdnCardState *cs, u_char regnum)
-{
- u_char ret;
-
- byteout(cs->hw.hfcsx.base + 1, regnum);
- ret = bytein(cs->hw.hfcsx.base);
- return (ret);
-}
-
-
-/**************************************************/
-/* select a fifo and remember which one for reuse */
-/**************************************************/
-static void
-fifo_select(struct IsdnCardState *cs, u_char fifo)
-{
- if (fifo == cs->hw.hfcsx.last_fifo)
- return; /* still valid */
-
- byteout(cs->hw.hfcsx.base + 1, HFCSX_FIF_SEL);
- byteout(cs->hw.hfcsx.base, fifo);
- while (bytein(cs->hw.hfcsx.base + 1) & 1); /* wait for busy */
- udelay(4);
- byteout(cs->hw.hfcsx.base, fifo);
- while (bytein(cs->hw.hfcsx.base + 1) & 1); /* wait for busy */
-}
-
-/******************************************/
-/* reset the specified fifo to defaults. */
-/* If it's a send fifo, init needed markers */
-/******************************************/
-static void
-reset_fifo(struct IsdnCardState *cs, u_char fifo)
-{
- fifo_select(cs, fifo); /* first select the fifo */
- byteout(cs->hw.hfcsx.base + 1, HFCSX_CIRM);
- byteout(cs->hw.hfcsx.base, cs->hw.hfcsx.cirm | 0x80); /* reset cmd */
- udelay(1);
- while (bytein(cs->hw.hfcsx.base + 1) & 1); /* wait for busy */
-}
-
-
-/*************************************************************/
-/* write_fifo writes the skb contents to the desired fifo */
-/* if no space is available or an error occurs 0 is returned */
-/* the skb is not released in any way. */
-/*************************************************************/
-static int
-write_fifo(struct IsdnCardState *cs, struct sk_buff *skb, u_char fifo, int trans_max)
-{
- unsigned short *msp;
- int fifo_size, count, z1, z2;
- u_char f_msk, f1, f2, *src;
-
- if (skb->len <= 0) return (0);
- if (fifo & 1) return (0); /* no write fifo */
-
- fifo_select(cs, fifo);
- if (fifo & 4) {
- fifo_size = D_FIFO_SIZE; /* D-channel */
- f_msk = MAX_D_FRAMES;
- if (trans_max) return (0); /* only HDLC */
- }
- else {
- fifo_size = cs->hw.hfcsx.b_fifo_size; /* B-channel */
- f_msk = MAX_B_FRAMES;
- }
-
- z1 = Read_hfc(cs, HFCSX_FIF_Z1H);
- z1 = ((z1 << 8) | Read_hfc(cs, HFCSX_FIF_Z1L));
-
- /* Check for transparent mode */
- if (trans_max) {
- z2 = Read_hfc(cs, HFCSX_FIF_Z2H);
- z2 = ((z2 << 8) | Read_hfc(cs, HFCSX_FIF_Z2L));
- count = z2 - z1;
- if (count <= 0)
- count += fifo_size; /* free bytes */
- if (count < skb->len + 1) return (0); /* no room */
-		count = fifo_size - count; /* bytes still not sent */
-		if (count > 2 * trans_max) return (0); /* delay too long */
- count = skb->len;
- src = skb->data;
- while (count--)
- Write_hfc(cs, HFCSX_FIF_DWR, *src++);
- return (1); /* success */
- }
-
- msp = ((struct hfcsx_extra *)(cs->hw.hfcsx.extra))->marker;
- msp += (((fifo >> 1) & 3) * (MAX_B_FRAMES + 1));
- f1 = Read_hfc(cs, HFCSX_FIF_F1) & f_msk;
- f2 = Read_hfc(cs, HFCSX_FIF_F2) & f_msk;
-
- count = f1 - f2; /* frame count actually buffered */
- if (count < 0)
- count += (f_msk + 1); /* if wrap around */
- if (count > f_msk - 1) {
- if (cs->debug & L1_DEB_ISAC_FIFO)
- debugl1(cs, "hfcsx_write_fifo %d more as %d frames", fifo, f_msk - 1);
- return (0);
- }
-
- *(msp + f1) = z1; /* remember marker */
-
- if (cs->debug & L1_DEB_ISAC_FIFO)
- debugl1(cs, "hfcsx_write_fifo %d f1(%x) f2(%x) z1(f1)(%x)",
- fifo, f1, f2, z1);
- /* now determine free bytes in FIFO buffer */
- count = *(msp + f2) - z1;
- if (count <= 0)
- count += fifo_size; /* count now contains available bytes */
-
- if (cs->debug & L1_DEB_ISAC_FIFO)
- debugl1(cs, "hfcsx_write_fifo %d count(%u/%d)",
- fifo, skb->len, count);
- if (count < skb->len) {
- if (cs->debug & L1_DEB_ISAC_FIFO)
- debugl1(cs, "hfcsx_write_fifo %d no fifo mem", fifo);
- return (0);
- }
-
- count = skb->len; /* get frame len */
- src = skb->data; /* source pointer */
- while (count--)
- Write_hfc(cs, HFCSX_FIF_DWR, *src++);
-
- Read_hfc(cs, HFCSX_FIF_INCF1); /* increment F1 */
- udelay(1);
- while (bytein(cs->hw.hfcsx.base + 1) & 1); /* wait for busy */
- return (1);
-}
-
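In transparent mode the admission check above is purely Z-pointer arithmetic: the frame must fit in the free space, and the amount already queued is capped at twice the threshold so the transmit delay stays bounded. A standalone sketch of that check (argument names are illustrative):

#include <stdbool.h>

/* Decide whether a transparent-mode frame of 'frame_len' bytes may be
 * written, given the FIFO Z pointers, the ring size and the per-write
 * threshold 'trans_max'. Mirrors the checks in write_fifo above. */
static bool trans_tx_fits(int z1, int z2, int fifo_size,
			  int frame_len, int trans_max)
{
	int free_bytes = z2 - z1;

	if (free_bytes <= 0)
		free_bytes += fifo_size;	/* pointer difference wrapped */
	if (free_bytes < frame_len + 1)
		return false;			/* frame does not fit */
	if (fifo_size - free_bytes > 2 * trans_max)
		return false;			/* too much already queued */
	return true;
}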
-/***************************************************************/
-/* read_fifo reads data to an skb from the desired fifo */
-/* if no data is available or an error occurs NULL is returned */
-/* the skb is not released in any way. */
-/***************************************************************/
-static struct sk_buff *
-read_fifo(struct IsdnCardState *cs, u_char fifo, int trans_max)
-{ int fifo_size, count, z1, z2;
- u_char f_msk, f1, f2, *dst;
- struct sk_buff *skb;
-
- if (!(fifo & 1)) return (NULL); /* no read fifo */
- fifo_select(cs, fifo);
- if (fifo & 4) {
- fifo_size = D_FIFO_SIZE; /* D-channel */
- f_msk = MAX_D_FRAMES;
- if (trans_max) return (NULL); /* only hdlc */
- }
- else {
- fifo_size = cs->hw.hfcsx.b_fifo_size; /* B-channel */
- f_msk = MAX_B_FRAMES;
- }
-
- /* transparent mode */
- if (trans_max) {
- z1 = Read_hfc(cs, HFCSX_FIF_Z1H);
- z1 = ((z1 << 8) | Read_hfc(cs, HFCSX_FIF_Z1L));
- z2 = Read_hfc(cs, HFCSX_FIF_Z2H);
- z2 = ((z2 << 8) | Read_hfc(cs, HFCSX_FIF_Z2L));
- /* now determine bytes in actual FIFO buffer */
- count = z1 - z2;
- if (count <= 0)
- count += fifo_size; /* count now contains buffered bytes */
- count++;
- if (count > trans_max)
- count = trans_max; /* limit length */
- skb = dev_alloc_skb(count);
- if (skb) {
- dst = skb_put(skb, count);
- while (count--)
- *dst++ = Read_hfc(cs, HFCSX_FIF_DRD);
- return skb;
- } else
- return NULL; /* no memory */
- }
-
- do {
- f1 = Read_hfc(cs, HFCSX_FIF_F1) & f_msk;
- f2 = Read_hfc(cs, HFCSX_FIF_F2) & f_msk;
-
- if (f1 == f2) return (NULL); /* no frame available */
-
- z1 = Read_hfc(cs, HFCSX_FIF_Z1H);
- z1 = ((z1 << 8) | Read_hfc(cs, HFCSX_FIF_Z1L));
- z2 = Read_hfc(cs, HFCSX_FIF_Z2H);
- z2 = ((z2 << 8) | Read_hfc(cs, HFCSX_FIF_Z2L));
-
- if (cs->debug & L1_DEB_ISAC_FIFO)
- debugl1(cs, "hfcsx_read_fifo %d f1(%x) f2(%x) z1(f2)(%x) z2(f2)(%x)",
- fifo, f1, f2, z1, z2);
- /* now determine bytes in actual FIFO buffer */
- count = z1 - z2;
- if (count <= 0)
- count += fifo_size; /* count now contains buffered bytes */
- count++;
-
- if (cs->debug & L1_DEB_ISAC_FIFO)
- debugl1(cs, "hfcsx_read_fifo %d count %u)",
- fifo, count);
-
- if ((count > fifo_size) || (count < 4)) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "hfcsx_read_fifo %d packet inv. len %d ", fifo , count);
- while (count) {
- count--; /* empty fifo */
- Read_hfc(cs, HFCSX_FIF_DRD);
- }
- skb = NULL;
- } else
- if ((skb = dev_alloc_skb(count - 3))) {
- count -= 3;
- dst = skb_put(skb, count);
-
- while (count--)
- *dst++ = Read_hfc(cs, HFCSX_FIF_DRD);
-
- Read_hfc(cs, HFCSX_FIF_DRD); /* CRC 1 */
- Read_hfc(cs, HFCSX_FIF_DRD); /* CRC 2 */
- if (Read_hfc(cs, HFCSX_FIF_DRD)) {
- dev_kfree_skb_irq(skb);
- if (cs->debug & L1_DEB_ISAC_FIFO)
- debugl1(cs, "hfcsx_read_fifo %d crc error", fifo);
- skb = NULL;
- }
- } else {
- printk(KERN_WARNING "HFC-SX: receive out of memory\n");
- return (NULL);
- }
-
- Read_hfc(cs, HFCSX_FIF_INCF2); /* increment F2 */
- udelay(1);
- while (bytein(cs->hw.hfcsx.base + 1) & 1); /* wait for busy */
- udelay(1);
- } while (!skb); /* retry in case of crc error */
- return (skb);
-}
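
For reference, a minimal standalone sketch of the Z1/Z2 fill-level arithmetic that read_fifo() above relies on; hfcsx_fifo_bytes() and its parameters are illustrative names, not part of the driver:

static int hfcsx_fifo_bytes(int z1, int z2, int fifo_size)
{
	/* Z1 is the chip's write counter, Z2 the read counter; both wrap
	 * inside a circular buffer of fifo_size bytes. */
	int count = z1 - z2;

	if (count <= 0)
		count += fifo_size;	/* write counter wrapped past the read counter */

	/* the extra byte matches the Z-counter convention used by
	 * read_fifo() above before it compares against trans_max */
	return count + 1;
}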
-
-/******************************************/
-/* free hardware resources used by driver */
-/******************************************/
-static void
-release_io_hfcsx(struct IsdnCardState *cs)
-{
- cs->hw.hfcsx.int_m2 = 0; /* interrupt output off ! */
- Write_hfc(cs, HFCSX_INT_M2, cs->hw.hfcsx.int_m2);
- Write_hfc(cs, HFCSX_CIRM, HFCSX_RESET); /* Reset On */
- msleep(30); /* Timeout 30ms */
- Write_hfc(cs, HFCSX_CIRM, 0); /* Reset Off */
- del_timer(&cs->hw.hfcsx.timer);
- release_region(cs->hw.hfcsx.base, 2); /* release IO-Block */
- kfree(cs->hw.hfcsx.extra);
- cs->hw.hfcsx.extra = NULL;
-}
-
-/**********************************************************/
-/* set_fifo_size determines the size of the RAM and FIFOs */
-/* returning 0 -> need to reset the chip again. */
-/**********************************************************/
-static int set_fifo_size(struct IsdnCardState *cs)
-{
-
- if (cs->hw.hfcsx.b_fifo_size) return (1); /* already determined */
-
- if ((cs->hw.hfcsx.chip >> 4) == 9) {
- cs->hw.hfcsx.b_fifo_size = B_FIFO_SIZE_32K;
- return (1);
- }
-
- cs->hw.hfcsx.b_fifo_size = B_FIFO_SIZE_8K;
- cs->hw.hfcsx.cirm |= 0x10; /* only 8K of ram */
- return (0);
-
-}
-
-/********************************************************************************/
-/* function called to reset the HFC SX chip. A complete software reset of chip */
-/* and fifos is done. */
-/********************************************************************************/
-static void
-reset_hfcsx(struct IsdnCardState *cs)
-{
- cs->hw.hfcsx.int_m2 = 0; /* interrupt output off ! */
- Write_hfc(cs, HFCSX_INT_M2, cs->hw.hfcsx.int_m2);
-
- printk(KERN_INFO "HFC_SX: resetting card\n");
- while (1) {
- Write_hfc(cs, HFCSX_CIRM, HFCSX_RESET | cs->hw.hfcsx.cirm); /* Reset */
- mdelay(30);
- Write_hfc(cs, HFCSX_CIRM, cs->hw.hfcsx.cirm); /* Reset Off */
- mdelay(20);
- if (Read_hfc(cs, HFCSX_STATUS) & 2)
- printk(KERN_WARNING "HFC-SX init bit busy\n");
- cs->hw.hfcsx.last_fifo = 0xff; /* invalidate */
- if (!set_fifo_size(cs)) continue;
- break;
- }
-
-	cs->hw.hfcsx.trm = 0 + HFCSX_BTRANS_THRESMASK;	/* no echo connect, threshold */
- Write_hfc(cs, HFCSX_TRM, cs->hw.hfcsx.trm);
-
- Write_hfc(cs, HFCSX_CLKDEL, 0x0e); /* ST-Bit delay for TE-Mode */
- cs->hw.hfcsx.sctrl_e = HFCSX_AUTO_AWAKE;
- Write_hfc(cs, HFCSX_SCTRL_E, cs->hw.hfcsx.sctrl_e); /* S/T Auto awake */
- cs->hw.hfcsx.bswapped = 0; /* no exchange */
- cs->hw.hfcsx.nt_mode = 0; /* we are in TE mode */
- cs->hw.hfcsx.ctmt = HFCSX_TIM3_125 | HFCSX_AUTO_TIMER;
- Write_hfc(cs, HFCSX_CTMT, cs->hw.hfcsx.ctmt);
-
- cs->hw.hfcsx.int_m1 = HFCSX_INTS_DTRANS | HFCSX_INTS_DREC |
- HFCSX_INTS_L1STATE | HFCSX_INTS_TIMER;
- Write_hfc(cs, HFCSX_INT_M1, cs->hw.hfcsx.int_m1);
-
- /* Clear already pending ints */
- Read_hfc(cs, HFCSX_INT_S1);
-
- Write_hfc(cs, HFCSX_STATES, HFCSX_LOAD_STATE | 2); /* HFC ST 2 */
- udelay(10);
- Write_hfc(cs, HFCSX_STATES, 2); /* HFC ST 2 */
- cs->hw.hfcsx.mst_m = HFCSX_MASTER; /* HFC Master Mode */
-
- Write_hfc(cs, HFCSX_MST_MODE, cs->hw.hfcsx.mst_m);
- cs->hw.hfcsx.sctrl = 0x40; /* set tx_lo mode, error in datasheet ! */
- Write_hfc(cs, HFCSX_SCTRL, cs->hw.hfcsx.sctrl);
- cs->hw.hfcsx.sctrl_r = 0;
- Write_hfc(cs, HFCSX_SCTRL_R, cs->hw.hfcsx.sctrl_r);
-
- /* Init GCI/IOM2 in master mode */
- /* Slots 0 and 1 are set for B-chan 1 and 2 */
- /* D- and monitor/CI channel are not enabled */
- /* STIO1 is used as output for data, B1+B2 from ST->IOM+HFC */
- /* STIO2 is used as data input, B1+B2 from IOM->ST */
- /* ST B-channel send disabled -> continuous 1s */
- /* The IOM slots are always enabled */
- cs->hw.hfcsx.conn = 0x36; /* set data flow directions */
- Write_hfc(cs, HFCSX_CONNECT, cs->hw.hfcsx.conn);
- Write_hfc(cs, HFCSX_B1_SSL, 0x80); /* B1-Slot 0 STIO1 out enabled */
- Write_hfc(cs, HFCSX_B2_SSL, 0x81); /* B2-Slot 1 STIO1 out enabled */
- Write_hfc(cs, HFCSX_B1_RSL, 0x80); /* B1-Slot 0 STIO2 in enabled */
- Write_hfc(cs, HFCSX_B2_RSL, 0x81); /* B2-Slot 1 STIO2 in enabled */
-
- /* Finally enable IRQ output */
- cs->hw.hfcsx.int_m2 = HFCSX_IRQ_ENABLE;
- Write_hfc(cs, HFCSX_INT_M2, cs->hw.hfcsx.int_m2);
- Read_hfc(cs, HFCSX_INT_S2);
-}
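
Note that reset_hfcsx() also shows the driver's general pattern for HFC registers it never reads back: a soft copy is kept in cs->hw.hfcsx, modified in memory, and then pushed with Write_hfc(). A condensed sketch of that pattern (hypothetical names, register constants from hfc_sx.h below):

struct hfcsx_shadow {
	unsigned char int_m1;			/* cached value of HFCSX_INT_M1 */
};

static void hfcsx_enable_timer_irq(struct hfcsx_shadow *sh,
				   void (*write_reg)(int reg, unsigned char val))
{
	sh->int_m1 |= HFCSX_INTS_TIMER;		/* modify the cached copy first ... */
	write_reg(HFCSX_INT_M1, sh->int_m1);	/* ... then push it to the chip */
}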
-
-/***************************************************/
-/* Timer function called when kernel timer expires */
-/***************************************************/
-static void
-hfcsx_Timer(struct timer_list *t)
-{
- struct IsdnCardState *cs = from_timer(cs, t, hw.hfcsx.timer);
- cs->hw.hfcsx.timer.expires = jiffies + 75;
- /* WD RESET */
-/* WriteReg(cs, HFCD_DATA, HFCD_CTMT, cs->hw.hfcsx.ctmt | 0x80);
- add_timer(&cs->hw.hfcsx.timer);
-*/
-}
-
-/************************************************/
-/* select a b-channel entry matching and active */
-/************************************************/
-static
-struct BCState *
-Sel_BCS(struct IsdnCardState *cs, int channel)
-{
- if (cs->bcs[0].mode && (cs->bcs[0].channel == channel))
- return (&cs->bcs[0]);
- else if (cs->bcs[1].mode && (cs->bcs[1].channel == channel))
- return (&cs->bcs[1]);
- else
- return (NULL);
-}
-
-/*******************************/
-/* D-channel receive procedure */
-/*******************************/
-static
-int
-receive_dmsg(struct IsdnCardState *cs)
-{
- struct sk_buff *skb;
- int count = 5;
-
- if (test_and_set_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- debugl1(cs, "rec_dmsg blocked");
- return (1);
- }
-
- do {
- skb = read_fifo(cs, HFCSX_SEL_D_RX, 0);
- if (skb) {
- skb_queue_tail(&cs->rq, skb);
- schedule_event(cs, D_RCVBUFREADY);
- }
- } while (--count && skb);
-
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
- return (1);
-}
-
-/**********************************/
-/* B-channel main receive routine */
-/**********************************/
-static void
-main_rec_hfcsx(struct BCState *bcs)
-{
- struct IsdnCardState *cs = bcs->cs;
- int count = 5;
- struct sk_buff *skb;
-
-Begin:
- count--;
- if (test_and_set_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- debugl1(cs, "rec_data %d blocked", bcs->channel);
- return;
- }
- skb = read_fifo(cs, ((bcs->channel) && (!cs->hw.hfcsx.bswapped)) ?
- HFCSX_SEL_B2_RX : HFCSX_SEL_B1_RX,
- (bcs->mode == L1_MODE_TRANS) ?
- HFCSX_BTRANS_THRESHOLD : 0);
-
- if (skb) {
- skb_queue_tail(&bcs->rqueue, skb);
- schedule_event(bcs, B_RCVBUFREADY);
- }
-
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
- if (count && skb)
- goto Begin;
- return;
-}
-
-/**************************/
-/* D-channel send routine */
-/**************************/
-static void
-hfcsx_fill_dfifo(struct IsdnCardState *cs)
-{
- if (!cs->tx_skb)
- return;
- if (cs->tx_skb->len <= 0)
- return;
-
- if (write_fifo(cs, cs->tx_skb, HFCSX_SEL_D_TX, 0)) {
- dev_kfree_skb_any(cs->tx_skb);
- cs->tx_skb = NULL;
- }
- return;
-}
-
-/**************************/
-/* B-channel send routine */
-/**************************/
-static void
-hfcsx_fill_fifo(struct BCState *bcs)
-{
- struct IsdnCardState *cs = bcs->cs;
-
- if (!bcs->tx_skb)
- return;
- if (bcs->tx_skb->len <= 0)
- return;
-
- if (write_fifo(cs, bcs->tx_skb,
- ((bcs->channel) && (!cs->hw.hfcsx.bswapped)) ?
- HFCSX_SEL_B2_TX : HFCSX_SEL_B1_TX,
- (bcs->mode == L1_MODE_TRANS) ?
- HFCSX_BTRANS_THRESHOLD : 0)) {
-
- bcs->tx_cnt -= bcs->tx_skb->len;
- if (test_bit(FLG_LLI_L1WAKEUP, &bcs->st->lli.flag) &&
- (PACKET_NOACK != bcs->tx_skb->pkt_type)) {
- u_long flags;
- spin_lock_irqsave(&bcs->aclock, flags);
- bcs->ackcnt += bcs->tx_skb->len;
- spin_unlock_irqrestore(&bcs->aclock, flags);
- schedule_event(bcs, B_ACKPENDING);
- }
- dev_kfree_skb_any(bcs->tx_skb);
- bcs->tx_skb = NULL;
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- }
-}
-
-/**********************************************/
-/* D-channel l1 state call for leased NT-mode */
-/**********************************************/
-static void
-dch_nt_l2l1(struct PStack *st, int pr, void *arg)
-{
- struct IsdnCardState *cs = (struct IsdnCardState *) st->l1.hardware;
-
- switch (pr) {
- case (PH_DATA | REQUEST):
- case (PH_PULL | REQUEST):
- case (PH_PULL | INDICATION):
- st->l1.l1hw(st, pr, arg);
- break;
- case (PH_ACTIVATE | REQUEST):
- st->l1.l1l2(st, PH_ACTIVATE | CONFIRM, NULL);
- break;
- case (PH_TESTLOOP | REQUEST):
- if (1 & (long) arg)
- debugl1(cs, "PH_TEST_LOOP B1");
- if (2 & (long) arg)
- debugl1(cs, "PH_TEST_LOOP B2");
- if (!(3 & (long) arg))
- debugl1(cs, "PH_TEST_LOOP DISABLED");
- st->l1.l1hw(st, HW_TESTLOOP | REQUEST, arg);
- break;
- default:
- if (cs->debug)
- debugl1(cs, "dch_nt_l2l1 msg %04X unhandled", pr);
- break;
- }
-}
-
-
-
-/***********************/
-/* set/reset echo mode */
-/***********************/
-static int
-hfcsx_auxcmd(struct IsdnCardState *cs, isdn_ctrl *ic)
-{
- unsigned long flags;
- int i = *(unsigned int *) ic->parm.num;
-
- if ((ic->arg == 98) &&
- (!(cs->hw.hfcsx.int_m1 & (HFCSX_INTS_B2TRANS + HFCSX_INTS_B2REC + HFCSX_INTS_B1TRANS + HFCSX_INTS_B1REC)))) {
- spin_lock_irqsave(&cs->lock, flags);
- Write_hfc(cs, HFCSX_STATES, HFCSX_LOAD_STATE | 0); /* HFC ST G0 */
- udelay(10);
- cs->hw.hfcsx.sctrl |= SCTRL_MODE_NT;
- Write_hfc(cs, HFCSX_SCTRL, cs->hw.hfcsx.sctrl); /* set NT-mode */
- udelay(10);
- Write_hfc(cs, HFCSX_STATES, HFCSX_LOAD_STATE | 1); /* HFC ST G1 */
- udelay(10);
- Write_hfc(cs, HFCSX_STATES, 1 | HFCSX_ACTIVATE | HFCSX_DO_ACTION);
- cs->dc.hfcsx.ph_state = 1;
- cs->hw.hfcsx.nt_mode = 1;
- cs->hw.hfcsx.nt_timer = 0;
- spin_unlock_irqrestore(&cs->lock, flags);
- cs->stlist->l2.l2l1 = dch_nt_l2l1;
- debugl1(cs, "NT mode activated");
- return (0);
- }
- if ((cs->chanlimit > 1) || (cs->hw.hfcsx.bswapped) ||
- (cs->hw.hfcsx.nt_mode) || (ic->arg != 12))
- return (-EINVAL);
-
- if (i) {
- cs->logecho = 1;
- cs->hw.hfcsx.trm |= 0x20; /* enable echo chan */
- cs->hw.hfcsx.int_m1 |= HFCSX_INTS_B2REC;
- /* reset Channel !!!!! */
- } else {
- cs->logecho = 0;
- cs->hw.hfcsx.trm &= ~0x20; /* disable echo chan */
- cs->hw.hfcsx.int_m1 &= ~HFCSX_INTS_B2REC;
- }
- cs->hw.hfcsx.sctrl_r &= ~SCTRL_B2_ENA;
- cs->hw.hfcsx.sctrl &= ~SCTRL_B2_ENA;
- cs->hw.hfcsx.conn |= 0x10; /* B2-IOM -> B2-ST */
- cs->hw.hfcsx.ctmt &= ~2;
- spin_lock_irqsave(&cs->lock, flags);
- Write_hfc(cs, HFCSX_CTMT, cs->hw.hfcsx.ctmt);
- Write_hfc(cs, HFCSX_SCTRL_R, cs->hw.hfcsx.sctrl_r);
- Write_hfc(cs, HFCSX_SCTRL, cs->hw.hfcsx.sctrl);
- Write_hfc(cs, HFCSX_CONNECT, cs->hw.hfcsx.conn);
- Write_hfc(cs, HFCSX_TRM, cs->hw.hfcsx.trm);
- Write_hfc(cs, HFCSX_INT_M1, cs->hw.hfcsx.int_m1);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
-} /* hfcsx_auxcmd */
-
-/*****************************/
-/* E-channel receive routine */
-/*****************************/
-static void
-receive_emsg(struct IsdnCardState *cs)
-{
- int count = 5;
- u_char *ptr;
- struct sk_buff *skb;
-
- if (test_and_set_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- debugl1(cs, "echo_rec_data blocked");
- return;
- }
- do {
- skb = read_fifo(cs, HFCSX_SEL_B2_RX, 0);
- if (skb) {
- if (cs->debug & DEB_DLOG_HEX) {
- ptr = cs->dlog;
- if ((skb->len) < MAX_DLOG_SPACE / 3 - 10) {
- *ptr++ = 'E';
- *ptr++ = 'C';
- *ptr++ = 'H';
- *ptr++ = 'O';
- *ptr++ = ':';
- ptr += QuickHex(ptr, skb->data, skb->len);
- ptr--;
- *ptr++ = '\n';
- *ptr = 0;
- HiSax_putstatus(cs, NULL, cs->dlog);
- } else
- HiSax_putstatus(cs, "LogEcho: ", "warning Frame too big (%d)", skb->len);
- }
- dev_kfree_skb_any(skb);
- }
- } while (--count && skb);
-
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
- return;
-} /* receive_emsg */
-
-
-/*********************/
-/* Interrupt handler */
-/*********************/
-static irqreturn_t
-hfcsx_interrupt(int intno, void *dev_id)
-{
- struct IsdnCardState *cs = dev_id;
- u_char exval;
- struct BCState *bcs;
- int count = 15;
- u_long flags;
- u_char val, stat;
-
- if (!(cs->hw.hfcsx.int_m2 & 0x08))
- return IRQ_NONE; /* not initialised */
-
- spin_lock_irqsave(&cs->lock, flags);
- if (HFCSX_ANYINT & (stat = Read_hfc(cs, HFCSX_STATUS))) {
- val = Read_hfc(cs, HFCSX_INT_S1);
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "HFC-SX: stat(%02x) s1(%02x)", stat, val);
- } else {
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_NONE;
- }
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "HFC-SX irq %x %s", val,
- test_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags) ?
- "locked" : "unlocked");
- val &= cs->hw.hfcsx.int_m1;
- if (val & 0x40) { /* state machine irq */
- exval = Read_hfc(cs, HFCSX_STATES) & 0xf;
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "ph_state chg %d->%d", cs->dc.hfcsx.ph_state,
- exval);
- cs->dc.hfcsx.ph_state = exval;
- schedule_event(cs, D_L1STATECHANGE);
- val &= ~0x40;
- }
- if (val & 0x80) { /* timer irq */
- if (cs->hw.hfcsx.nt_mode) {
- if ((--cs->hw.hfcsx.nt_timer) < 0)
- schedule_event(cs, D_L1STATECHANGE);
- }
- val &= ~0x80;
- Write_hfc(cs, HFCSX_CTMT, cs->hw.hfcsx.ctmt | HFCSX_CLTIMER);
- }
- while (val) {
- if (test_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- cs->hw.hfcsx.int_s1 |= val;
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
- }
- if (cs->hw.hfcsx.int_s1 & 0x18) {
- exval = val;
- val = cs->hw.hfcsx.int_s1;
- cs->hw.hfcsx.int_s1 = exval;
- }
- if (val & 0x08) {
- if (!(bcs = Sel_BCS(cs, cs->hw.hfcsx.bswapped ? 1 : 0))) {
- if (cs->debug)
- debugl1(cs, "hfcsx spurious 0x08 IRQ");
- } else
- main_rec_hfcsx(bcs);
- }
- if (val & 0x10) {
- if (cs->logecho)
- receive_emsg(cs);
- else if (!(bcs = Sel_BCS(cs, 1))) {
- if (cs->debug)
- debugl1(cs, "hfcsx spurious 0x10 IRQ");
- } else
- main_rec_hfcsx(bcs);
- }
- if (val & 0x01) {
- if (!(bcs = Sel_BCS(cs, cs->hw.hfcsx.bswapped ? 1 : 0))) {
- if (cs->debug)
- debugl1(cs, "hfcsx spurious 0x01 IRQ");
- } else {
- if (bcs->tx_skb) {
- if (!test_and_set_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- hfcsx_fill_fifo(bcs);
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
- } else
- debugl1(cs, "fill_data %d blocked", bcs->channel);
- } else {
- if ((bcs->tx_skb = skb_dequeue(&bcs->squeue))) {
- if (!test_and_set_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- hfcsx_fill_fifo(bcs);
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
- } else
- debugl1(cs, "fill_data %d blocked", bcs->channel);
- } else {
- schedule_event(bcs, B_XMTBUFREADY);
- }
- }
- }
- }
- if (val & 0x02) {
- if (!(bcs = Sel_BCS(cs, 1))) {
- if (cs->debug)
- debugl1(cs, "hfcsx spurious 0x02 IRQ");
- } else {
- if (bcs->tx_skb) {
- if (!test_and_set_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- hfcsx_fill_fifo(bcs);
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
- } else
- debugl1(cs, "fill_data %d blocked", bcs->channel);
- } else {
- if ((bcs->tx_skb = skb_dequeue(&bcs->squeue))) {
- if (!test_and_set_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- hfcsx_fill_fifo(bcs);
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
- } else
- debugl1(cs, "fill_data %d blocked", bcs->channel);
- } else {
- schedule_event(bcs, B_XMTBUFREADY);
- }
- }
- }
- }
- if (val & 0x20) { /* receive dframe */
- receive_dmsg(cs);
- }
- if (val & 0x04) { /* dframe transmitted */
- if (test_and_clear_bit(FLG_DBUSY_TIMER, &cs->HW_Flags))
- del_timer(&cs->dbusytimer);
- if (test_and_clear_bit(FLG_L1_DBUSY, &cs->HW_Flags))
- schedule_event(cs, D_CLEARBUSY);
- if (cs->tx_skb) {
- if (cs->tx_skb->len) {
- if (!test_and_set_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- hfcsx_fill_dfifo(cs);
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
- } else {
- debugl1(cs, "hfcsx_fill_dfifo irq blocked");
- }
- goto afterXPR;
- } else {
- dev_kfree_skb_irq(cs->tx_skb);
- cs->tx_cnt = 0;
- cs->tx_skb = NULL;
- }
- }
- if ((cs->tx_skb = skb_dequeue(&cs->sq))) {
- cs->tx_cnt = 0;
- if (!test_and_set_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- hfcsx_fill_dfifo(cs);
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
- } else {
- debugl1(cs, "hfcsx_fill_dfifo irq blocked");
- }
- } else
- schedule_event(cs, D_XMTBUFREADY);
- }
- afterXPR:
- if (cs->hw.hfcsx.int_s1 && count--) {
- val = cs->hw.hfcsx.int_s1;
- cs->hw.hfcsx.int_s1 = 0;
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "HFC-SX irq %x loop %d", val, 15 - count);
- } else
- val = 0;
- }
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
-}
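
The FLG_LOCK_ATOMIC handling in hfcsx_interrupt() amounts to a deferred-event scheme: when another path currently owns the FIFOs, the pending interrupt bits are parked in int_s1 and replayed on a later pass of the while loop. A simplified sketch of that idea (illustrative names only):

static unsigned char pending_ints;	/* parked interrupt bits */

static void handle_fifo_ints(unsigned char val, int fifos_busy)
{
	if (fifos_busy) {
		/* someone else owns the FIFOs: park the bits so a later
		 * pass (or the next interrupt) can service them */
		pending_ints |= val;
		return;
	}
	val |= pending_ints;		/* pick up anything parked earlier */
	pending_ints = 0;
	/* ... dispatch the individual bits in val here ... */
}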
-
-/********************************************************************/
-/* timer callback for D-chan busy resolution. Currently no function */
-/********************************************************************/
-static void
-hfcsx_dbusy_timer(struct timer_list *t)
-{
-}
-
-/*************************************/
-/* Layer 1 D-channel hardware access */
-/*************************************/
-static void
-HFCSX_l1hw(struct PStack *st, int pr, void *arg)
-{
- struct IsdnCardState *cs = (struct IsdnCardState *) st->l1.hardware;
- struct sk_buff *skb = arg;
- u_long flags;
-
- switch (pr) {
- case (PH_DATA | REQUEST):
- if (cs->debug & DEB_DLOG_HEX)
- LogFrame(cs, skb->data, skb->len);
- if (cs->debug & DEB_DLOG_VERBOSE)
- dlogframe(cs, skb, 0);
- spin_lock_irqsave(&cs->lock, flags);
- if (cs->tx_skb) {
- skb_queue_tail(&cs->sq, skb);
-#ifdef L2FRAME_DEBUG /* psa */
- if (cs->debug & L1_DEB_LAPD)
- Logl2Frame(cs, skb, "PH_DATA Queued", 0);
-#endif
- } else {
- cs->tx_skb = skb;
- cs->tx_cnt = 0;
-#ifdef L2FRAME_DEBUG /* psa */
- if (cs->debug & L1_DEB_LAPD)
- Logl2Frame(cs, skb, "PH_DATA", 0);
-#endif
- if (!test_and_set_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- hfcsx_fill_dfifo(cs);
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
- } else
- debugl1(cs, "hfcsx_fill_dfifo blocked");
-
- }
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- case (PH_PULL | INDICATION):
- spin_lock_irqsave(&cs->lock, flags);
- if (cs->tx_skb) {
- if (cs->debug & L1_DEB_WARN)
-				debugl1(cs, " l2l1 tx_skb exists, this shouldn't happen");
- skb_queue_tail(&cs->sq, skb);
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- }
- if (cs->debug & DEB_DLOG_HEX)
- LogFrame(cs, skb->data, skb->len);
- if (cs->debug & DEB_DLOG_VERBOSE)
- dlogframe(cs, skb, 0);
- cs->tx_skb = skb;
- cs->tx_cnt = 0;
-#ifdef L2FRAME_DEBUG /* psa */
- if (cs->debug & L1_DEB_LAPD)
- Logl2Frame(cs, skb, "PH_DATA_PULLED", 0);
-#endif
- if (!test_and_set_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- hfcsx_fill_dfifo(cs);
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
- } else
- debugl1(cs, "hfcsx_fill_dfifo blocked");
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- case (PH_PULL | REQUEST):
-#ifdef L2FRAME_DEBUG /* psa */
- if (cs->debug & L1_DEB_LAPD)
- debugl1(cs, "-> PH_REQUEST_PULL");
-#endif
- if (!cs->tx_skb) {
- test_and_clear_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- st->l1.l1l2(st, PH_PULL | CONFIRM, NULL);
- } else
- test_and_set_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- break;
- case (HW_RESET | REQUEST):
- spin_lock_irqsave(&cs->lock, flags);
- Write_hfc(cs, HFCSX_STATES, HFCSX_LOAD_STATE | 3); /* HFC ST 3 */
- udelay(6);
-		Write_hfc(cs, HFCSX_STATES, 3);	/* HFC ST 3 */
- cs->hw.hfcsx.mst_m |= HFCSX_MASTER;
- Write_hfc(cs, HFCSX_MST_MODE, cs->hw.hfcsx.mst_m);
- Write_hfc(cs, HFCSX_STATES, HFCSX_ACTIVATE | HFCSX_DO_ACTION);
- spin_unlock_irqrestore(&cs->lock, flags);
- l1_msg(cs, HW_POWERUP | CONFIRM, NULL);
- break;
- case (HW_ENABLE | REQUEST):
- spin_lock_irqsave(&cs->lock, flags);
- Write_hfc(cs, HFCSX_STATES, HFCSX_ACTIVATE | HFCSX_DO_ACTION);
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- case (HW_DEACTIVATE | REQUEST):
- spin_lock_irqsave(&cs->lock, flags);
- cs->hw.hfcsx.mst_m &= ~HFCSX_MASTER;
- Write_hfc(cs, HFCSX_MST_MODE, cs->hw.hfcsx.mst_m);
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- case (HW_INFO3 | REQUEST):
- spin_lock_irqsave(&cs->lock, flags);
- cs->hw.hfcsx.mst_m |= HFCSX_MASTER;
- Write_hfc(cs, HFCSX_MST_MODE, cs->hw.hfcsx.mst_m);
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- case (HW_TESTLOOP | REQUEST):
- spin_lock_irqsave(&cs->lock, flags);
- switch ((long) arg) {
- case (1):
- Write_hfc(cs, HFCSX_B1_SSL, 0x80); /* tx slot */
- Write_hfc(cs, HFCSX_B1_RSL, 0x80); /* rx slot */
- cs->hw.hfcsx.conn = (cs->hw.hfcsx.conn & ~7) | 1;
- Write_hfc(cs, HFCSX_CONNECT, cs->hw.hfcsx.conn);
- break;
- case (2):
- Write_hfc(cs, HFCSX_B2_SSL, 0x81); /* tx slot */
- Write_hfc(cs, HFCSX_B2_RSL, 0x81); /* rx slot */
- cs->hw.hfcsx.conn = (cs->hw.hfcsx.conn & ~0x38) | 0x08;
- Write_hfc(cs, HFCSX_CONNECT, cs->hw.hfcsx.conn);
- break;
- default:
- spin_unlock_irqrestore(&cs->lock, flags);
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "hfcsx_l1hw loop invalid %4lx", (unsigned long)arg);
- return;
- }
- cs->hw.hfcsx.trm |= 0x80; /* enable IOM-loop */
- Write_hfc(cs, HFCSX_TRM, cs->hw.hfcsx.trm);
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- default:
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "hfcsx_l1hw unknown pr %4x", pr);
- break;
- }
-}
-
-/***********************************************/
-/* called during init setting l1 stack pointer */
-/***********************************************/
-static void
-setstack_hfcsx(struct PStack *st, struct IsdnCardState *cs)
-{
- st->l1.l1hw = HFCSX_l1hw;
-}
-
-/**************************************/
-/* send B-channel data if not blocked */
-/**************************************/
-static void
-hfcsx_send_data(struct BCState *bcs)
-{
- struct IsdnCardState *cs = bcs->cs;
-
- if (!test_and_set_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- hfcsx_fill_fifo(bcs);
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
- } else
- debugl1(cs, "send_data %d blocked", bcs->channel);
-}
-
-/***************************************************************/
-/* activate/deactivate hardware for selected channels and mode */
-/***************************************************************/
-static void
-mode_hfcsx(struct BCState *bcs, int mode, int bc)
-{
- struct IsdnCardState *cs = bcs->cs;
- int fifo2;
-
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "HFCSX bchannel mode %d bchan %d/%d",
- mode, bc, bcs->channel);
- bcs->mode = mode;
- bcs->channel = bc;
- fifo2 = bc;
- if (cs->chanlimit > 1) {
- cs->hw.hfcsx.bswapped = 0; /* B1 and B2 normal mode */
- cs->hw.hfcsx.sctrl_e &= ~0x80;
- } else {
- if (bc) {
- if (mode != L1_MODE_NULL) {
- cs->hw.hfcsx.bswapped = 1; /* B1 and B2 exchanged */
- cs->hw.hfcsx.sctrl_e |= 0x80;
- } else {
- cs->hw.hfcsx.bswapped = 0; /* B1 and B2 normal mode */
- cs->hw.hfcsx.sctrl_e &= ~0x80;
- }
- fifo2 = 0;
- } else {
- cs->hw.hfcsx.bswapped = 0; /* B1 and B2 normal mode */
- cs->hw.hfcsx.sctrl_e &= ~0x80;
- }
- }
- switch (mode) {
- case (L1_MODE_NULL):
- if (bc) {
- cs->hw.hfcsx.sctrl &= ~SCTRL_B2_ENA;
- cs->hw.hfcsx.sctrl_r &= ~SCTRL_B2_ENA;
- } else {
- cs->hw.hfcsx.sctrl &= ~SCTRL_B1_ENA;
- cs->hw.hfcsx.sctrl_r &= ~SCTRL_B1_ENA;
- }
- if (fifo2) {
- cs->hw.hfcsx.int_m1 &= ~(HFCSX_INTS_B2TRANS + HFCSX_INTS_B2REC);
- } else {
- cs->hw.hfcsx.int_m1 &= ~(HFCSX_INTS_B1TRANS + HFCSX_INTS_B1REC);
- }
- break;
- case (L1_MODE_TRANS):
- if (bc) {
- cs->hw.hfcsx.sctrl |= SCTRL_B2_ENA;
- cs->hw.hfcsx.sctrl_r |= SCTRL_B2_ENA;
- } else {
- cs->hw.hfcsx.sctrl |= SCTRL_B1_ENA;
- cs->hw.hfcsx.sctrl_r |= SCTRL_B1_ENA;
- }
- if (fifo2) {
- cs->hw.hfcsx.int_m1 |= (HFCSX_INTS_B2TRANS + HFCSX_INTS_B2REC);
- cs->hw.hfcsx.ctmt |= 2;
- cs->hw.hfcsx.conn &= ~0x18;
- } else {
- cs->hw.hfcsx.int_m1 |= (HFCSX_INTS_B1TRANS + HFCSX_INTS_B1REC);
- cs->hw.hfcsx.ctmt |= 1;
- cs->hw.hfcsx.conn &= ~0x03;
- }
- break;
- case (L1_MODE_HDLC):
- if (bc) {
- cs->hw.hfcsx.sctrl |= SCTRL_B2_ENA;
- cs->hw.hfcsx.sctrl_r |= SCTRL_B2_ENA;
- } else {
- cs->hw.hfcsx.sctrl |= SCTRL_B1_ENA;
- cs->hw.hfcsx.sctrl_r |= SCTRL_B1_ENA;
- }
- if (fifo2) {
- cs->hw.hfcsx.int_m1 |= (HFCSX_INTS_B2TRANS + HFCSX_INTS_B2REC);
- cs->hw.hfcsx.ctmt &= ~2;
- cs->hw.hfcsx.conn &= ~0x18;
- } else {
- cs->hw.hfcsx.int_m1 |= (HFCSX_INTS_B1TRANS + HFCSX_INTS_B1REC);
- cs->hw.hfcsx.ctmt &= ~1;
- cs->hw.hfcsx.conn &= ~0x03;
- }
- break;
- case (L1_MODE_EXTRN):
- if (bc) {
- cs->hw.hfcsx.conn |= 0x10;
- cs->hw.hfcsx.sctrl |= SCTRL_B2_ENA;
- cs->hw.hfcsx.sctrl_r |= SCTRL_B2_ENA;
- cs->hw.hfcsx.int_m1 &= ~(HFCSX_INTS_B2TRANS + HFCSX_INTS_B2REC);
- } else {
- cs->hw.hfcsx.conn |= 0x02;
- cs->hw.hfcsx.sctrl |= SCTRL_B1_ENA;
- cs->hw.hfcsx.sctrl_r |= SCTRL_B1_ENA;
- cs->hw.hfcsx.int_m1 &= ~(HFCSX_INTS_B1TRANS + HFCSX_INTS_B1REC);
- }
- break;
- }
- Write_hfc(cs, HFCSX_SCTRL_E, cs->hw.hfcsx.sctrl_e);
- Write_hfc(cs, HFCSX_INT_M1, cs->hw.hfcsx.int_m1);
- Write_hfc(cs, HFCSX_SCTRL, cs->hw.hfcsx.sctrl);
- Write_hfc(cs, HFCSX_SCTRL_R, cs->hw.hfcsx.sctrl_r);
- Write_hfc(cs, HFCSX_CTMT, cs->hw.hfcsx.ctmt);
- Write_hfc(cs, HFCSX_CONNECT, cs->hw.hfcsx.conn);
- if (mode != L1_MODE_EXTRN) {
- reset_fifo(cs, fifo2 ? HFCSX_SEL_B2_RX : HFCSX_SEL_B1_RX);
- reset_fifo(cs, fifo2 ? HFCSX_SEL_B2_TX : HFCSX_SEL_B1_TX);
- }
-}
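
A worked example of the channel-swap handling above, under the stated assumption that SCTRL_E bit 0x80 (HFCSX_CHG_B1_B2 in hfc_sx.h) exchanges B1 and B2 on the S/T side:

/* With chanlimit == 1 and a request for bc = 1 (B2) in HDLC mode,
 * mode_hfcsx() sets bswapped = 1, fifo2 = 0 and sctrl_e |= 0x80, so the
 * data is carried by the B1 FIFO and its interrupts while the S/T
 * interface presents it on B2; read_fifo()/write_fifo() callers select
 * the FIFO with ((bcs->channel) && (!bswapped)) accordingly. */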
-
-/******************************/
-/* Layer2 -> Layer 1 Transfer */
-/******************************/
-static void
-hfcsx_l2l1(struct PStack *st, int pr, void *arg)
-{
- struct BCState *bcs = st->l1.bcs;
- struct sk_buff *skb = arg;
- u_long flags;
-
- switch (pr) {
- case (PH_DATA | REQUEST):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- if (bcs->tx_skb) {
- skb_queue_tail(&bcs->squeue, skb);
- } else {
- bcs->tx_skb = skb;
-// test_and_set_bit(BC_FLG_BUSY, &bcs->Flag);
- bcs->cs->BC_Send_Data(bcs);
- }
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- break;
- case (PH_PULL | INDICATION):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- if (bcs->tx_skb) {
- printk(KERN_WARNING "%s: this shouldn't happen\n",
- __func__);
- } else {
-// test_and_set_bit(BC_FLG_BUSY, &bcs->Flag);
- bcs->tx_skb = skb;
- bcs->cs->BC_Send_Data(bcs);
- }
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- break;
- case (PH_PULL | REQUEST):
- if (!bcs->tx_skb) {
- test_and_clear_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- st->l1.l1l2(st, PH_PULL | CONFIRM, NULL);
- } else
- test_and_set_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- break;
- case (PH_ACTIVATE | REQUEST):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- test_and_set_bit(BC_FLG_ACTIV, &bcs->Flag);
- mode_hfcsx(bcs, st->l1.mode, st->l1.bc);
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- l1_msg_b(st, pr, arg);
- break;
- case (PH_DEACTIVATE | REQUEST):
- l1_msg_b(st, pr, arg);
- break;
- case (PH_DEACTIVATE | CONFIRM):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- test_and_clear_bit(BC_FLG_ACTIV, &bcs->Flag);
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- mode_hfcsx(bcs, 0, st->l1.bc);
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- st->l1.l1l2(st, PH_DEACTIVATE | CONFIRM, NULL);
- break;
- }
-}
-
-/******************************************/
-/* deactivate B-channel access and queues */
-/******************************************/
-static void
-close_hfcsx(struct BCState *bcs)
-{
- mode_hfcsx(bcs, 0, bcs->channel);
- if (test_and_clear_bit(BC_FLG_INIT, &bcs->Flag)) {
- skb_queue_purge(&bcs->rqueue);
- skb_queue_purge(&bcs->squeue);
- if (bcs->tx_skb) {
- dev_kfree_skb_any(bcs->tx_skb);
- bcs->tx_skb = NULL;
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- }
- }
-}
-
-/*************************************/
-/* init B-channel queues and control */
-/*************************************/
-static int
-open_hfcsxstate(struct IsdnCardState *cs, struct BCState *bcs)
-{
- if (!test_and_set_bit(BC_FLG_INIT, &bcs->Flag)) {
- skb_queue_head_init(&bcs->rqueue);
- skb_queue_head_init(&bcs->squeue);
- }
- bcs->tx_skb = NULL;
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- bcs->event = 0;
- bcs->tx_cnt = 0;
- return (0);
-}
-
-/*********************************/
-/* inits the stack for B-channel */
-/*********************************/
-static int
-setstack_2b(struct PStack *st, struct BCState *bcs)
-{
- bcs->channel = st->l1.bc;
- if (open_hfcsxstate(st->l1.hardware, bcs))
- return (-1);
- st->l1.bcs = bcs;
- st->l2.l2l1 = hfcsx_l2l1;
- setstack_manager(st);
- bcs->st = st;
- setstack_l1_B(st);
- return (0);
-}
-
-/***************************/
-/* handle L1 state changes */
-/***************************/
-static void
-hfcsx_bh(struct work_struct *work)
-{
- struct IsdnCardState *cs =
- container_of(work, struct IsdnCardState, tqueue);
- u_long flags;
-
- if (test_and_clear_bit(D_L1STATECHANGE, &cs->event)) {
- if (!cs->hw.hfcsx.nt_mode)
- switch (cs->dc.hfcsx.ph_state) {
- case (0):
- l1_msg(cs, HW_RESET | INDICATION, NULL);
- break;
- case (3):
- l1_msg(cs, HW_DEACTIVATE | INDICATION, NULL);
- break;
- case (8):
- l1_msg(cs, HW_RSYNC | INDICATION, NULL);
- break;
- case (6):
- l1_msg(cs, HW_INFO2 | INDICATION, NULL);
- break;
- case (7):
- l1_msg(cs, HW_INFO4_P8 | INDICATION, NULL);
- break;
- default:
- break;
- } else {
- switch (cs->dc.hfcsx.ph_state) {
- case (2):
- spin_lock_irqsave(&cs->lock, flags);
- if (cs->hw.hfcsx.nt_timer < 0) {
- cs->hw.hfcsx.nt_timer = 0;
- cs->hw.hfcsx.int_m1 &= ~HFCSX_INTS_TIMER;
- Write_hfc(cs, HFCSX_INT_M1, cs->hw.hfcsx.int_m1);
- /* Clear already pending ints */
- Read_hfc(cs, HFCSX_INT_S1);
-
- Write_hfc(cs, HFCSX_STATES, 4 | HFCSX_LOAD_STATE);
- udelay(10);
- Write_hfc(cs, HFCSX_STATES, 4);
- cs->dc.hfcsx.ph_state = 4;
- } else {
- cs->hw.hfcsx.int_m1 |= HFCSX_INTS_TIMER;
- Write_hfc(cs, HFCSX_INT_M1, cs->hw.hfcsx.int_m1);
- cs->hw.hfcsx.ctmt &= ~HFCSX_AUTO_TIMER;
- cs->hw.hfcsx.ctmt |= HFCSX_TIM3_125;
- Write_hfc(cs, HFCSX_CTMT, cs->hw.hfcsx.ctmt | HFCSX_CLTIMER);
- Write_hfc(cs, HFCSX_CTMT, cs->hw.hfcsx.ctmt | HFCSX_CLTIMER);
- cs->hw.hfcsx.nt_timer = NT_T1_COUNT;
- Write_hfc(cs, HFCSX_STATES, 2 | HFCSX_NT_G2_G3); /* allow G2 -> G3 transition */
- }
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- case (1):
- case (3):
- case (4):
- spin_lock_irqsave(&cs->lock, flags);
- cs->hw.hfcsx.nt_timer = 0;
- cs->hw.hfcsx.int_m1 &= ~HFCSX_INTS_TIMER;
- Write_hfc(cs, HFCSX_INT_M1, cs->hw.hfcsx.int_m1);
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- default:
- break;
- }
- }
- }
- if (test_and_clear_bit(D_RCVBUFREADY, &cs->event))
- DChannel_proc_rcv(cs);
- if (test_and_clear_bit(D_XMTBUFREADY, &cs->event))
- DChannel_proc_xmt(cs);
-}
-
-
-/********************************/
-/* called for card init message */
-/********************************/
-static void inithfcsx(struct IsdnCardState *cs)
-{
- cs->setstack_d = setstack_hfcsx;
- cs->BC_Send_Data = &hfcsx_send_data;
- cs->bcs[0].BC_SetStack = setstack_2b;
- cs->bcs[1].BC_SetStack = setstack_2b;
- cs->bcs[0].BC_Close = close_hfcsx;
- cs->bcs[1].BC_Close = close_hfcsx;
- mode_hfcsx(cs->bcs, 0, 0);
- mode_hfcsx(cs->bcs + 1, 0, 1);
-}
-
-
-
-/*******************************************/
-/* handle card messages from control layer */
-/*******************************************/
-static int
-hfcsx_card_msg(struct IsdnCardState *cs, int mt, void *arg)
-{
- u_long flags;
-
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "HFCSX: card_msg %x", mt);
- switch (mt) {
- case CARD_RESET:
- spin_lock_irqsave(&cs->lock, flags);
- reset_hfcsx(cs);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_RELEASE:
- release_io_hfcsx(cs);
- return (0);
- case CARD_INIT:
- spin_lock_irqsave(&cs->lock, flags);
- inithfcsx(cs);
- spin_unlock_irqrestore(&cs->lock, flags);
- msleep(80); /* Timeout 80ms */
- /* now switch timer interrupt off */
- spin_lock_irqsave(&cs->lock, flags);
- cs->hw.hfcsx.int_m1 &= ~HFCSX_INTS_TIMER;
- Write_hfc(cs, HFCSX_INT_M1, cs->hw.hfcsx.int_m1);
- /* reinit mode reg */
- Write_hfc(cs, HFCSX_MST_MODE, cs->hw.hfcsx.mst_m);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_TEST:
- return (0);
- }
- return (0);
-}
-
-#ifdef __ISAPNP__
-static struct isapnp_device_id hfc_ids[] = {
- { ISAPNP_VENDOR('T', 'A', 'G'), ISAPNP_FUNCTION(0x2620),
- ISAPNP_VENDOR('T', 'A', 'G'), ISAPNP_FUNCTION(0x2620),
- (unsigned long) "Teles 16.3c2" },
- { 0, }
-};
-
-static struct isapnp_device_id *ipid = &hfc_ids[0];
-static struct pnp_card *pnp_c = NULL;
-#endif
-
-int setup_hfcsx(struct IsdnCard *card)
-{
- struct IsdnCardState *cs = card->cs;
- char tmp[64];
-
- strcpy(tmp, hfcsx_revision);
- printk(KERN_INFO "HiSax: HFC-SX driver Rev. %s\n", HiSax_getrev(tmp));
-#ifdef __ISAPNP__
- if (!card->para[1] && isapnp_present()) {
- struct pnp_dev *pnp_d;
- while (ipid->card_vendor) {
- if ((pnp_c = pnp_find_card(ipid->card_vendor,
- ipid->card_device, pnp_c))) {
- pnp_d = NULL;
- if ((pnp_d = pnp_find_dev(pnp_c,
- ipid->vendor, ipid->function, pnp_d))) {
- int err;
-
- printk(KERN_INFO "HiSax: %s detected\n",
- (char *)ipid->driver_data);
- pnp_disable_dev(pnp_d);
- err = pnp_activate_dev(pnp_d);
- if (err < 0) {
- printk(KERN_WARNING "%s: pnp_activate_dev ret(%d)\n",
- __func__, err);
- return (0);
- }
- card->para[1] = pnp_port_start(pnp_d, 0);
- card->para[0] = pnp_irq(pnp_d, 0);
- if (card->para[0] == -1 || !card->para[1]) {
-					printk(KERN_ERR "HFC PnP: some resources are missing %ld/%lx\n",
- card->para[0], card->para[1]);
- pnp_disable_dev(pnp_d);
- return (0);
- }
- break;
- } else {
-				printk(KERN_ERR "HFC PnP: PnP error: card found, no device\n");
- }
- }
- ipid++;
- pnp_c = NULL;
- }
- if (!ipid->card_vendor) {
- printk(KERN_INFO "HFC PnP: no ISAPnP card found\n");
- return (0);
- }
- }
-#endif
- cs->hw.hfcsx.base = card->para[1] & 0xfffe;
- cs->irq = card->para[0];
- cs->hw.hfcsx.int_s1 = 0;
- cs->dc.hfcsx.ph_state = 0;
- cs->hw.hfcsx.fifo = 255;
- if ((cs->typ == ISDN_CTYPE_HFC_SX) ||
- (cs->typ == ISDN_CTYPE_HFC_SP_PCMCIA)) {
- if ((!cs->hw.hfcsx.base) || !request_region(cs->hw.hfcsx.base, 2, "HFCSX isdn")) {
- printk(KERN_WARNING
- "HiSax: HFC-SX io-base %#lx already in use\n",
- cs->hw.hfcsx.base);
- return (0);
- }
- byteout(cs->hw.hfcsx.base, cs->hw.hfcsx.base & 0xFF);
- byteout(cs->hw.hfcsx.base + 1,
- ((cs->hw.hfcsx.base >> 8) & 3) | 0x54);
- udelay(10);
- cs->hw.hfcsx.chip = Read_hfc(cs, HFCSX_CHIP_ID);
- switch (cs->hw.hfcsx.chip >> 4) {
- case 1:
- tmp[0] = '+';
- break;
- case 9:
- tmp[0] = 'P';
- break;
- default:
- printk(KERN_WARNING
- "HFC-SX: invalid chip id 0x%x\n",
- cs->hw.hfcsx.chip >> 4);
- release_region(cs->hw.hfcsx.base, 2);
- return (0);
- }
- if (!ccd_sp_irqtab[cs->irq & 0xF]) {
- printk(KERN_WARNING
- "HFC_SX: invalid irq %d specified\n", cs->irq & 0xF);
- release_region(cs->hw.hfcsx.base, 2);
- return (0);
- }
- if (!(cs->hw.hfcsx.extra =
- kmalloc(sizeof(struct hfcsx_extra), GFP_ATOMIC))) {
- release_region(cs->hw.hfcsx.base, 2);
- printk(KERN_WARNING "HFC-SX: unable to allocate memory\n");
- return (0);
- }
- printk(KERN_INFO "HFC-S%c chip detected at base 0x%x IRQ %d HZ %d\n",
- tmp[0], (u_int) cs->hw.hfcsx.base, cs->irq, HZ);
-		cs->hw.hfcsx.int_m2 = 0;	/* disable all interrupts */
- cs->hw.hfcsx.int_m1 = 0;
- Write_hfc(cs, HFCSX_INT_M1, cs->hw.hfcsx.int_m1);
- Write_hfc(cs, HFCSX_INT_M2, cs->hw.hfcsx.int_m2);
- } else
- return (0); /* no valid card type */
-
- timer_setup(&cs->dbusytimer, hfcsx_dbusy_timer, 0);
- INIT_WORK(&cs->tqueue, hfcsx_bh);
- cs->readisac = NULL;
- cs->writeisac = NULL;
- cs->readisacfifo = NULL;
- cs->writeisacfifo = NULL;
- cs->BC_Read_Reg = NULL;
- cs->BC_Write_Reg = NULL;
- cs->irq_func = &hfcsx_interrupt;
-
- cs->hw.hfcsx.b_fifo_size = 0; /* fifo size still unknown */
- cs->hw.hfcsx.cirm = ccd_sp_irqtab[cs->irq & 0xF]; /* RAM not evaluated */
- timer_setup(&cs->hw.hfcsx.timer, hfcsx_Timer, 0);
-
- reset_hfcsx(cs);
- cs->cardmsg = &hfcsx_card_msg;
- cs->auxcmd = &hfcsx_auxcmd;
- return (1);
-}
diff --git a/drivers/isdn/hisax/hfc_sx.h b/drivers/isdn/hisax/hfc_sx.h
deleted file mode 100644
index eee85dbb0883..000000000000
--- a/drivers/isdn/hisax/hfc_sx.h
+++ /dev/null
@@ -1,196 +0,0 @@
-/* $Id: hfc_sx.h,v 1.2.6.1 2001/09/23 22:24:48 kai Exp $
- *
- * specific defines for CCD's HFC 2BDS0 S+,SP chips
- *
- * Author Werner Cornelius
- * based on existing driver for CCD HFC PCI cards
- * Copyright by Werner Cornelius <werner@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-/*********************************************/
-/* thresholds for transparent B-channel mode */
-/* change mask and threshold simultaneously */
-/*********************************************/
-#define HFCSX_BTRANS_THRESHOLD 128
-#define HFCSX_BTRANS_THRESMASK 0x00
-
-/* GCI/IOM bus monitor registers */
-
-#define HFCSX_C_I 0x02
-#define HFCSX_TRxR 0x03
-#define HFCSX_MON1_D 0x0A
-#define HFCSX_MON2_D 0x0B
-
-
-/* GCI/IOM bus timeslot registers */
-
-#define HFCSX_B1_SSL 0x20
-#define HFCSX_B2_SSL 0x21
-#define HFCSX_AUX1_SSL 0x22
-#define HFCSX_AUX2_SSL 0x23
-#define HFCSX_B1_RSL 0x24
-#define HFCSX_B2_RSL 0x25
-#define HFCSX_AUX1_RSL 0x26
-#define HFCSX_AUX2_RSL 0x27
-
-/* GCI/IOM bus data registers */
-
-#define HFCSX_B1_D 0x28
-#define HFCSX_B2_D 0x29
-#define HFCSX_AUX1_D 0x2A
-#define HFCSX_AUX2_D 0x2B
-
-/* GCI/IOM bus configuration registers */
-
-#define HFCSX_MST_EMOD 0x2D
-#define HFCSX_MST_MODE 0x2E
-#define HFCSX_CONNECT 0x2F
-
-
-/* Interrupt and status registers */
-
-#define HFCSX_TRM 0x12
-#define HFCSX_B_MODE 0x13
-#define HFCSX_CHIP_ID 0x16
-#define HFCSX_CIRM 0x18
-#define HFCSX_CTMT 0x19
-#define HFCSX_INT_M1 0x1A
-#define HFCSX_INT_M2 0x1B
-#define HFCSX_INT_S1 0x1E
-#define HFCSX_INT_S2 0x1F
-#define HFCSX_STATUS 0x1C
-
-/* S/T section registers */
-
-#define HFCSX_STATES 0x30
-#define HFCSX_SCTRL 0x31
-#define HFCSX_SCTRL_E 0x32
-#define HFCSX_SCTRL_R 0x33
-#define HFCSX_SQ 0x34
-#define HFCSX_CLKDEL 0x37
-#define HFCSX_B1_REC 0x3C
-#define HFCSX_B1_SEND 0x3C
-#define HFCSX_B2_REC 0x3D
-#define HFCSX_B2_SEND 0x3D
-#define HFCSX_D_REC 0x3E
-#define HFCSX_D_SEND 0x3E
-#define HFCSX_E_REC 0x3F
-
-/****************/
-/* FIFO section */
-/****************/
-#define HFCSX_FIF_SEL 0x10
-#define HFCSX_FIF_Z1L 0x80
-#define HFCSX_FIF_Z1H 0x84
-#define HFCSX_FIF_Z2L 0x88
-#define HFCSX_FIF_Z2H 0x8C
-#define HFCSX_FIF_INCF1 0xA8
-#define HFCSX_FIF_DWR 0xAC
-#define HFCSX_FIF_F1 0xB0
-#define HFCSX_FIF_F2 0xB4
-#define HFCSX_FIF_INCF2 0xB8
-#define HFCSX_FIF_DRD 0xBC
-
-/* bits in status register (READ) */
-#define HFCSX_SX_PROC 0x02
-#define HFCSX_NBUSY 0x04
-#define HFCSX_TIMER_ELAP 0x10
-#define HFCSX_STATINT 0x20
-#define HFCSX_FRAMEINT 0x40
-#define HFCSX_ANYINT 0x80
-
-/* bits in CTMT (Write) */
-#define HFCSX_CLTIMER 0x80
-#define HFCSX_TIM3_125 0x04
-#define HFCSX_TIM25 0x10
-#define HFCSX_TIM50 0x14
-#define HFCSX_TIM400 0x18
-#define HFCSX_TIM800 0x1C
-#define HFCSX_AUTO_TIMER 0x20
-#define HFCSX_TRANSB2 0x02
-#define HFCSX_TRANSB1 0x01
-
-/* bits in CIRM (Write) */
-#define HFCSX_IRQ_SELMSK 0x07
-#define HFCSX_IRQ_SELDIS 0x00
-#define HFCSX_RESET 0x08
-#define HFCSX_FIFO_RESET 0x80
-
-
-/* bits in INT_M1 and INT_S1 */
-#define HFCSX_INTS_B1TRANS 0x01
-#define HFCSX_INTS_B2TRANS 0x02
-#define HFCSX_INTS_DTRANS 0x04
-#define HFCSX_INTS_B1REC 0x08
-#define HFCSX_INTS_B2REC 0x10
-#define HFCSX_INTS_DREC 0x20
-#define HFCSX_INTS_L1STATE 0x40
-#define HFCSX_INTS_TIMER 0x80
-
-/* bits in INT_M2 */
-#define HFCSX_PROC_TRANS 0x01
-#define HFCSX_GCI_I_CHG 0x02
-#define HFCSX_GCI_MON_REC 0x04
-#define HFCSX_IRQ_ENABLE 0x08
-
-/* bits in STATES */
-#define HFCSX_STATE_MSK 0x0F
-#define HFCSX_LOAD_STATE 0x10
-#define HFCSX_ACTIVATE 0x20
-#define HFCSX_DO_ACTION 0x40
-#define HFCSX_NT_G2_G3 0x80
-
-/* bits in HFCD_MST_MODE */
-#define HFCSX_MASTER 0x01
-#define HFCSX_SLAVE 0x00
-/* remaining bits are for codecs control */
-
-/* bits in HFCD_SCTRL */
-#define SCTRL_B1_ENA 0x01
-#define SCTRL_B2_ENA 0x02
-#define SCTRL_MODE_TE 0x00
-#define SCTRL_MODE_NT 0x04
-#define SCTRL_LOW_PRIO 0x08
-#define SCTRL_SQ_ENA 0x10
-#define SCTRL_TEST 0x20
-#define SCTRL_NONE_CAP 0x40
-#define SCTRL_PWR_DOWN 0x80
-
-/* bits in SCTRL_E */
-#define HFCSX_AUTO_AWAKE 0x01
-#define HFCSX_DBIT_1 0x04
-#define HFCSX_IGNORE_COL 0x08
-#define HFCSX_CHG_B1_B2 0x80
-
-/**********************************/
-/* definitions for FIFO selection */
-/**********************************/
-#define HFCSX_SEL_D_RX 5
-#define HFCSX_SEL_D_TX 4
-#define HFCSX_SEL_B1_RX 1
-#define HFCSX_SEL_B1_TX 0
-#define HFCSX_SEL_B2_RX 3
-#define HFCSX_SEL_B2_TX 2
-
-#define MAX_D_FRAMES 15
-#define MAX_B_FRAMES 31
-#define B_SUB_VAL_32K 0x0200
-#define B_FIFO_SIZE_32K (0x2000 - B_SUB_VAL_32K)
-#define B_SUB_VAL_8K 0x1A00
-#define B_FIFO_SIZE_8K (0x2000 - B_SUB_VAL_8K)
-#define D_FIFO_SIZE 512
-#define D_FREG_MASK 0xF
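
For orientation, the FIFO sizes defined above work out to:

/* B_FIFO_SIZE_32K = 0x2000 - 0x0200 = 7680 bytes,
 * B_FIFO_SIZE_8K  = 0x2000 - 0x1A00 = 1536 bytes,
 * D_FIFO_SIZE     = 512 bytes */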
-
-/************************************************************/
-/* structure holding additional dynamic data -> send marker */
-/************************************************************/
-struct hfcsx_extra {
- unsigned short marker[2 * (MAX_B_FRAMES + 1) + (MAX_D_FRAMES + 1)];
-};
-
-extern void main_irq_hfcsx(struct BCState *bcs);
-extern void releasehfcsx(struct IsdnCardState *cs);
diff --git a/drivers/isdn/hisax/hfc_usb.c b/drivers/isdn/hisax/hfc_usb.c
deleted file mode 100644
index b6e58c11c288..000000000000
--- a/drivers/isdn/hisax/hfc_usb.c
+++ /dev/null
@@ -1,1594 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-or-later
-/*
- * hfc_usb.c
- *
- * $Id: hfc_usb.c,v 2.3.2.24 2007/10/14 08:40:29 mbachem Exp $
- *
- * modular HiSax ISDN driver for Colognechip HFC-S USB chip
- *
- * Authors : Peter Sprenger (sprenger@moving-bytes.de)
- * Martin Bachem (m.bachem@gmx.de, info@colognechip.com)
- *
- * based on the first hfc_usb driver of
- * Werner Cornelius (werner@isdn-development.de)
- *
- * See Version History at the bottom of this file
- */
-
-#include <linux/types.h>
-#include <linux/stddef.h>
-#include <linux/timer.h>
-#include <linux/init.h>
-#include <linux/module.h>
-#include <linux/kernel_stat.h>
-#include <linux/usb.h>
-#include <linux/kernel.h>
-#include <linux/sched.h>
-#include <linux/moduleparam.h>
-#include <linux/slab.h>
-#include "hisax.h"
-#include "hisax_if.h"
-#include "hfc_usb.h"
-
-static const char *hfcusb_revision =
- "$Revision: 2.3.2.24 $ $Date: 2007/10/14 08:40:29 $ ";
-
-/* Hisax debug support
- * debug flags defined in hfc_usb.h as HFCUSB_DBG_[*]
- */
-#define __debug_variable hfc_debug
-#include "hisax_debug.h"
-static u_int debug;
-module_param(debug, uint, 0);
-static int hfc_debug;
-
-
-/* private vendor specific data */
-typedef struct {
- __u8 led_scheme; // led display scheme
- signed short led_bits[8]; // array of 8 possible LED bitmask settings
- char *vend_name; // device name
-} hfcsusb_vdata;
-
-/* VID/PID device list */
-static const struct usb_device_id hfcusb_idtab[] = {
- {
- USB_DEVICE(0x0959, 0x2bd0),
- .driver_info = (unsigned long) &((hfcsusb_vdata)
- {LED_OFF, {4, 0, 2, 1},
- "ISDN USB TA (Cologne Chip HFC-S USB based)"}),
- },
- {
- USB_DEVICE(0x0675, 0x1688),
- .driver_info = (unsigned long) &((hfcsusb_vdata)
- {LED_SCHEME1, {1, 2, 0, 0},
- "DrayTek miniVigor 128 USB ISDN TA"}),
- },
- {
- USB_DEVICE(0x07b0, 0x0007),
- .driver_info = (unsigned long) &((hfcsusb_vdata)
- {LED_SCHEME1, {0x80, -64, -32, -16},
- "Billion tiny USB ISDN TA 128"}),
- },
- {
- USB_DEVICE(0x0742, 0x2008),
- .driver_info = (unsigned long) &((hfcsusb_vdata)
- {LED_SCHEME1, {4, 0, 2, 1},
- "Stollmann USB TA"}),
- },
- {
- USB_DEVICE(0x0742, 0x2009),
- .driver_info = (unsigned long) &((hfcsusb_vdata)
- {LED_SCHEME1, {4, 0, 2, 1},
- "Aceex USB ISDN TA"}),
- },
- {
- USB_DEVICE(0x0742, 0x200A),
- .driver_info = (unsigned long) &((hfcsusb_vdata)
- {LED_SCHEME1, {4, 0, 2, 1},
- "OEM USB ISDN TA"}),
- },
- {
- USB_DEVICE(0x08e3, 0x0301),
- .driver_info = (unsigned long) &((hfcsusb_vdata)
- {LED_SCHEME1, {2, 0, 1, 4},
- "Olitec USB RNIS"}),
- },
- {
- USB_DEVICE(0x07fa, 0x0846),
- .driver_info = (unsigned long) &((hfcsusb_vdata)
- {LED_SCHEME1, {0x80, -64, -32, -16},
- "Bewan Modem RNIS USB"}),
- },
- {
- USB_DEVICE(0x07fa, 0x0847),
- .driver_info = (unsigned long) &((hfcsusb_vdata)
- {LED_SCHEME1, {0x80, -64, -32, -16},
- "Djinn Numeris USB"}),
- },
- {
- USB_DEVICE(0x07b0, 0x0006),
- .driver_info = (unsigned long) &((hfcsusb_vdata)
- {LED_SCHEME1, {0x80, -64, -32, -16},
- "Twister ISDN TA"}),
- },
- {
- USB_DEVICE(0x071d, 0x1005),
- .driver_info = (unsigned long) &((hfcsusb_vdata)
- {LED_SCHEME1, {0x02, 0, 0x01, 0x04},
- "Eicon DIVA USB 4.0"}),
- },
- { }
-};
-
-/* structure defining input+output fifos (interrupt/bulk mode) */
-struct usb_fifo; /* forward definition */
-typedef struct iso_urb_struct {
- struct urb *purb;
- __u8 buffer[ISO_BUFFER_SIZE]; /* buffer incoming/outgoing data */
- struct usb_fifo *owner_fifo; /* pointer to owner fifo */
-} iso_urb_struct;
-
-struct hfcusb_data; /* forward definition */
-
-typedef struct usb_fifo {
- int fifonum; /* fifo index attached to this structure */
- int active; /* fifo is currently active */
- struct hfcusb_data *hfc; /* pointer to main structure */
- int pipe; /* address of endpoint */
- __u8 usb_packet_maxlen; /* maximum length for usb transfer */
- unsigned int max_size; /* maximum size of receive/send packet */
- __u8 intervall; /* interrupt interval */
- struct sk_buff *skbuff; /* actual used buffer */
- struct urb *urb; /* transfer structure for usb routines */
- __u8 buffer[128]; /* buffer incoming/outgoing data */
-	int bit_line;			/* how many bits are in the fifo? */
-
- volatile __u8 usb_transfer_mode; /* switched between ISO and INT */
-	iso_urb_struct iso[2];		/* need two urbs so that one is always pending */
- struct hisax_if *hif; /* hisax interface */
- int delete_flg; /* only delete skbuff once */
- int last_urblen; /* remember length of last packet */
-} usb_fifo;
-
-/* structure holding all data for one device */
-typedef struct hfcusb_data {
- /* HiSax Interface for loadable Layer1 drivers */
- struct hisax_d_if d_if; /* see hisax_if.h */
- struct hisax_b_if b_if[2]; /* see hisax_if.h */
- int protocol;
-
- struct usb_device *dev; /* our device */
- int if_used; /* used interface number */
- int alt_used; /* used alternate config */
- int ctrl_paksize; /* control pipe packet size */
- int ctrl_in_pipe, /* handles for control pipe */
- ctrl_out_pipe;
- int cfg_used; /* configuration index used */
- int vend_idx; /* vendor found */
- int b_mode[2]; /* B-channel mode */
- int l1_activated; /* layer 1 activated */
-	int disc_flag;			/* TRUE if device was disconnected, to avoid some USB actions */
- int packet_size, iso_packet_size;
-
- /* control pipe background handling */
- ctrl_buft ctrl_buff[HFC_CTRL_BUFSIZE]; /* buffer holding queued data */
- volatile int ctrl_in_idx, ctrl_out_idx, ctrl_cnt; /* input/output pointer + count */
- struct urb *ctrl_urb; /* transfer structure for control channel */
-
- struct usb_ctrlrequest ctrl_write; /* buffer for control write request */
- struct usb_ctrlrequest ctrl_read; /* same for read request */
-
- __u8 old_led_state, led_state;
-
- volatile __u8 threshold_mask; /* threshold actually reported */
- volatile __u8 bch_enables; /* or mask for sctrl_r and sctrl register values */
-
- usb_fifo fifos[HFCUSB_NUM_FIFOS]; /* structure holding all fifo data */
-
- volatile __u8 l1_state; /* actual l1 state */
- struct timer_list t3_timer; /* timer 3 for activation/deactivation */
- struct timer_list t4_timer; /* timer 4 for activation/deactivation */
-} hfcusb_data;
-
-
-static void collect_rx_frame(usb_fifo *fifo, __u8 *data, int len,
- int finish);
-
-static inline const char *
-symbolic(struct hfcusb_symbolic_list list[], const int num)
-{
- int i;
- for (i = 0; list[i].name != NULL; i++)
- if (list[i].num == num)
- return (list[i].name);
- return "<unknown ERROR>";
-}
-
-static void
-ctrl_start_transfer(hfcusb_data *hfc)
-{
- if (hfc->ctrl_cnt) {
- hfc->ctrl_urb->pipe = hfc->ctrl_out_pipe;
- hfc->ctrl_urb->setup_packet = (u_char *)&hfc->ctrl_write;
- hfc->ctrl_urb->transfer_buffer = NULL;
- hfc->ctrl_urb->transfer_buffer_length = 0;
- hfc->ctrl_write.wIndex =
- cpu_to_le16(hfc->ctrl_buff[hfc->ctrl_out_idx].hfc_reg);
- hfc->ctrl_write.wValue =
- cpu_to_le16(hfc->ctrl_buff[hfc->ctrl_out_idx].reg_val);
-
- usb_submit_urb(hfc->ctrl_urb, GFP_ATOMIC); /* start transfer */
- }
-} /* ctrl_start_transfer */
-
-static int
-queue_control_request(hfcusb_data *hfc, __u8 reg, __u8 val, int action)
-{
- ctrl_buft *buf;
-
- if (hfc->ctrl_cnt >= HFC_CTRL_BUFSIZE)
- return (1); /* no space left */
- buf = &hfc->ctrl_buff[hfc->ctrl_in_idx]; /* pointer to new index */
- buf->hfc_reg = reg;
- buf->reg_val = val;
- buf->action = action;
- if (++hfc->ctrl_in_idx >= HFC_CTRL_BUFSIZE)
- hfc->ctrl_in_idx = 0; /* pointer wrap */
- if (++hfc->ctrl_cnt == 1)
- ctrl_start_transfer(hfc);
- return (0);
-}
-
-static void
-ctrl_complete(struct urb *urb)
-{
- hfcusb_data *hfc = (hfcusb_data *) urb->context;
-
- urb->dev = hfc->dev;
- if (hfc->ctrl_cnt) {
- hfc->ctrl_cnt--; /* decrement actual count */
- if (++hfc->ctrl_out_idx >= HFC_CTRL_BUFSIZE)
- hfc->ctrl_out_idx = 0; /* pointer wrap */
-
- ctrl_start_transfer(hfc); /* start next transfer */
- }
-}
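
queue_control_request(), ctrl_start_transfer() and ctrl_complete() above form a small ring buffer of register writes drained one control URB at a time: a transfer is only kicked off when the count goes from 0 to 1, and each completion chains the next queued entry. A minimal sketch of the same discipline, stripped of the USB specifics (all names hypothetical):

#define CTRL_RING_SIZE 32

static struct { unsigned char reg, val; } ctrl_ring[CTRL_RING_SIZE];
static int ctrl_in, ctrl_out, ctrl_count;

static void ctrl_kick(void)
{
	if (!ctrl_count)
		return;
	/* submit ctrl_ring[ctrl_out] to the hardware here; in the real
	 * driver this is the control-URB submission whose completion
	 * callback ends up in ctrl_done() below */
}

static int ctrl_queue(unsigned char reg, unsigned char val)
{
	if (ctrl_count >= CTRL_RING_SIZE)
		return 1;			/* no space left */
	ctrl_ring[ctrl_in].reg = reg;
	ctrl_ring[ctrl_in].val = val;
	if (++ctrl_in >= CTRL_RING_SIZE)
		ctrl_in = 0;			/* index wrap */
	if (++ctrl_count == 1)
		ctrl_kick();			/* ring was idle: start it */
	return 0;
}

static void ctrl_done(void)			/* called from the completion path */
{
	if (ctrl_count) {
		ctrl_count--;
		if (++ctrl_out >= CTRL_RING_SIZE)
			ctrl_out = 0;
		ctrl_kick();			/* chain the next entry, if any */
	}
}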
-
-/* write led data to auxport & invert if necessary */
-static void
-write_led(hfcusb_data *hfc, __u8 led_state)
-{
- if (led_state != hfc->old_led_state) {
- hfc->old_led_state = led_state;
- queue_control_request(hfc, HFCUSB_P_DATA, led_state, 1);
- }
-}
-
-static void
-set_led_bit(hfcusb_data *hfc, signed short led_bits, int on)
-{
- if (on) {
- if (led_bits < 0)
- hfc->led_state &= ~abs(led_bits);
- else
- hfc->led_state |= led_bits;
- } else {
- if (led_bits < 0)
- hfc->led_state |= abs(led_bits);
- else
- hfc->led_state &= ~led_bits;
- }
-}
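
For clarity, the signed led_bits convention handled by set_led_bit() above: a positive table entry is treated as an active-high LED bit, a negative entry as a (presumably) active-low one, so switching the LED "on" clears the bit instead of setting it. Worked example:

/* led_bits = -32 (bit 5, inverted):
 *	set_led_bit(hfc, -32, 1)  ->  led_state &= ~0x20
 *	set_led_bit(hfc, -32, 0)  ->  led_state |=  0x20
 * led_bits = 4 (bit 2, normal):
 *	set_led_bit(hfc, 4, 1)    ->  led_state |=  0x04
 *	set_led_bit(hfc, 4, 0)    ->  led_state &= ~0x04 */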
-
-/* handle LED requests */
-static void
-handle_led(hfcusb_data *hfc, int event)
-{
- hfcsusb_vdata *driver_info =
- (hfcsusb_vdata *) hfcusb_idtab[hfc->vend_idx].driver_info;
-
- /* if no scheme -> no LED action */
- if (driver_info->led_scheme == LED_OFF)
- return;
-
- switch (event) {
- case LED_POWER_ON:
- set_led_bit(hfc, driver_info->led_bits[0], 1);
- set_led_bit(hfc, driver_info->led_bits[1], 0);
- set_led_bit(hfc, driver_info->led_bits[2], 0);
- set_led_bit(hfc, driver_info->led_bits[3], 0);
- break;
- case LED_POWER_OFF:
- set_led_bit(hfc, driver_info->led_bits[0], 0);
- set_led_bit(hfc, driver_info->led_bits[1], 0);
- set_led_bit(hfc, driver_info->led_bits[2], 0);
- set_led_bit(hfc, driver_info->led_bits[3], 0);
- break;
- case LED_S0_ON:
- set_led_bit(hfc, driver_info->led_bits[1], 1);
- break;
- case LED_S0_OFF:
- set_led_bit(hfc, driver_info->led_bits[1], 0);
- break;
- case LED_B1_ON:
- set_led_bit(hfc, driver_info->led_bits[2], 1);
- break;
- case LED_B1_OFF:
- set_led_bit(hfc, driver_info->led_bits[2], 0);
- break;
- case LED_B2_ON:
- set_led_bit(hfc, driver_info->led_bits[3], 1);
- break;
- case LED_B2_OFF:
- set_led_bit(hfc, driver_info->led_bits[3], 0);
- break;
- }
- write_led(hfc, hfc->led_state);
-}
-
-/* ISDN l1 timer T3 expires */
-static void
-l1_timer_expire_t3(struct timer_list *t)
-{
- hfcusb_data *hfc = from_timer(hfc, t, t3_timer);
- hfc->d_if.ifc.l1l2(&hfc->d_if.ifc, PH_DEACTIVATE | INDICATION,
- NULL);
-
- DBG(HFCUSB_DBG_STATES,
- "HFC-S USB: PH_DEACTIVATE | INDICATION sent (T3 expire)");
-
- hfc->l1_activated = 0;
- handle_led(hfc, LED_S0_OFF);
- /* deactivate : */
- queue_control_request(hfc, HFCUSB_STATES, 0x10, 1);
- queue_control_request(hfc, HFCUSB_STATES, 3, 1);
-}
-
-/* ISDN l1 timer T4 expires */
-static void
-l1_timer_expire_t4(struct timer_list *t)
-{
- hfcusb_data *hfc = from_timer(hfc, t, t4_timer);
- hfc->d_if.ifc.l1l2(&hfc->d_if.ifc, PH_DEACTIVATE | INDICATION,
- NULL);
-
- DBG(HFCUSB_DBG_STATES,
- "HFC-S USB: PH_DEACTIVATE | INDICATION sent (T4 expire)");
-
- hfc->l1_activated = 0;
- handle_led(hfc, LED_S0_OFF);
-}
-
-/* S0 state changed */
-static void
-s0_state_handler(hfcusb_data *hfc, __u8 state)
-{
- __u8 old_state;
-
- old_state = hfc->l1_state;
- if (state == old_state || state < 1 || state > 8)
- return;
-
- DBG(HFCUSB_DBG_STATES, "HFC-S USB: S0 statechange(%d -> %d)",
- old_state, state);
-
- if (state < 4 || state == 7 || state == 8) {
- if (timer_pending(&hfc->t3_timer))
- del_timer(&hfc->t3_timer);
- DBG(HFCUSB_DBG_STATES, "HFC-S USB: T3 deactivated");
- }
- if (state >= 7) {
- if (timer_pending(&hfc->t4_timer))
- del_timer(&hfc->t4_timer);
- DBG(HFCUSB_DBG_STATES, "HFC-S USB: T4 deactivated");
- }
-
- if (state == 7 && !hfc->l1_activated) {
- hfc->d_if.ifc.l1l2(&hfc->d_if.ifc,
- PH_ACTIVATE | INDICATION, NULL);
- DBG(HFCUSB_DBG_STATES, "HFC-S USB: PH_ACTIVATE | INDICATION sent");
- hfc->l1_activated = 1;
- handle_led(hfc, LED_S0_ON);
- } else if (state <= 3 /* && activated */) {
- if (old_state == 7 || old_state == 8) {
- DBG(HFCUSB_DBG_STATES, "HFC-S USB: T4 activated");
- if (!timer_pending(&hfc->t4_timer)) {
- hfc->t4_timer.expires =
- jiffies + (HFC_TIMER_T4 * HZ) / 1000;
- add_timer(&hfc->t4_timer);
- }
- } else {
- hfc->d_if.ifc.l1l2(&hfc->d_if.ifc,
- PH_DEACTIVATE | INDICATION,
- NULL);
- DBG(HFCUSB_DBG_STATES,
- "HFC-S USB: PH_DEACTIVATE | INDICATION sent");
- hfc->l1_activated = 0;
- handle_led(hfc, LED_S0_OFF);
- }
- }
- hfc->l1_state = state;
-}
-
-static void
-fill_isoc_urb(struct urb *urb, struct usb_device *dev, unsigned int pipe,
- void *buf, int num_packets, int packet_size, int interval,
- usb_complete_t complete, void *context)
-{
- int k;
-
- usb_fill_int_urb(urb, dev, pipe, buf, packet_size * num_packets,
- complete, context, interval);
-
- urb->number_of_packets = num_packets;
- urb->transfer_flags = URB_ISO_ASAP;
- urb->actual_length = 0;
- for (k = 0; k < num_packets; k++) {
- urb->iso_frame_desc[k].offset = packet_size * k;
- urb->iso_frame_desc[k].length = packet_size;
- urb->iso_frame_desc[k].actual_length = 0;
- }
-}
-
-/* allocates urbs and starts isoc transfer with two pending urbs to avoid
- * gaps in the transfer chain
- */
-static int
-start_isoc_chain(usb_fifo *fifo, int num_packets_per_urb,
- usb_complete_t complete, int packet_size)
-{
- int i, k, errcode;
-
- DBG(HFCUSB_DBG_INIT, "HFC-S USB: starting ISO-URBs for fifo:%d\n",
- fifo->fifonum);
-
- /* allocate Memory for Iso out Urbs */
- for (i = 0; i < 2; i++) {
- if (!(fifo->iso[i].purb)) {
- fifo->iso[i].purb =
- usb_alloc_urb(num_packets_per_urb, GFP_KERNEL);
- if (!(fifo->iso[i].purb)) {
- printk(KERN_INFO
- "alloc urb for fifo %i failed!!!",
- fifo->fifonum);
- }
- fifo->iso[i].owner_fifo = (struct usb_fifo *) fifo;
-
- /* Init the first iso */
- if (ISO_BUFFER_SIZE >=
- (fifo->usb_packet_maxlen *
- num_packets_per_urb)) {
- fill_isoc_urb(fifo->iso[i].purb,
- fifo->hfc->dev, fifo->pipe,
- fifo->iso[i].buffer,
- num_packets_per_urb,
- fifo->usb_packet_maxlen,
- fifo->intervall, complete,
- &fifo->iso[i]);
- memset(fifo->iso[i].buffer, 0,
- sizeof(fifo->iso[i].buffer));
-				/* defining packet delimiters in fifo->buffer */
- for (k = 0; k < num_packets_per_urb; k++) {
- fifo->iso[i].purb->
- iso_frame_desc[k].offset =
- k * packet_size;
- fifo->iso[i].purb->
- iso_frame_desc[k].length =
- packet_size;
- }
- } else {
- printk(KERN_INFO
-			       "HFC-S USB: ISO Buffer size too small!\n");
- }
- }
- fifo->bit_line = BITLINE_INF;
-
- errcode = usb_submit_urb(fifo->iso[i].purb, GFP_KERNEL);
- fifo->active = (errcode >= 0) ? 1 : 0;
- if (errcode < 0)
- printk(KERN_INFO "HFC-S USB: usb_submit_urb URB nr:%d, error(%i): '%s'\n",
- i, errcode, symbolic(urb_errlist, errcode));
- }
- return (fifo->active);
-}
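
The two ISO URBs per FIFO set up above are a plain double-buffering scheme: while one URB is being completed and refilled, the other is already queued, so the isochronous stream never starves. The complete-and-resubmit half of that loop looks roughly like this (sketch only; the real handlers are rx_iso_complete()/tx_iso_complete() below):

static void iso_complete_sketch(struct urb *urb)
{
	int err;

	/* refill urb->transfer_buffer and the iso_frame_desc[] entries,
	 * then hand the same URB straight back to the USB core so the
	 * other URB of the pair is never the only one in flight */
	err = usb_submit_urb(urb, GFP_ATOMIC);
	if (err < 0)
		printk(KERN_INFO "HFC-S USB: ISO resubmit failed (%d)\n", err);
}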
-
-/* stops running iso chain and frees their pending urbs */
-static void
-stop_isoc_chain(usb_fifo *fifo)
-{
- int i;
-
- for (i = 0; i < 2; i++) {
- if (fifo->iso[i].purb) {
- DBG(HFCUSB_DBG_INIT,
- "HFC-S USB: Stopping iso chain for fifo %i.%i",
- fifo->fifonum, i);
- usb_kill_urb(fifo->iso[i].purb);
- usb_free_urb(fifo->iso[i].purb);
- fifo->iso[i].purb = NULL;
- }
- }
-
- usb_kill_urb(fifo->urb);
- usb_free_urb(fifo->urb);
- fifo->urb = NULL;
- fifo->active = 0;
-}
-
-/* defines how many ISO packets are handled in one URB */
-static int iso_packets[8] =
-{ ISOC_PACKETS_B, ISOC_PACKETS_B, ISOC_PACKETS_B, ISOC_PACKETS_B,
- ISOC_PACKETS_D, ISOC_PACKETS_D, ISOC_PACKETS_D, ISOC_PACKETS_D
-};
-
-static void
-tx_iso_complete(struct urb *urb)
-{
- iso_urb_struct *context_iso_urb = (iso_urb_struct *) urb->context;
- usb_fifo *fifo = context_iso_urb->owner_fifo;
- hfcusb_data *hfc = fifo->hfc;
- int k, tx_offset, num_isoc_packets, sink, len, current_len,
- errcode;
- int frame_complete, transp_mode, fifon, status;
- __u8 threshbit;
-
- fifon = fifo->fifonum;
- status = urb->status;
-
- tx_offset = 0;
-
- /* ISO transfer only partially completed,
- look at individual frame status for details */
- if (status == -EXDEV) {
- DBG(HFCUSB_DBG_VERBOSE_USB, "HFC-S USB: tx_iso_complete with -EXDEV"
- ", urb->status %d, fifonum %d\n",
- status, fifon);
-
- for (k = 0; k < iso_packets[fifon]; ++k) {
- errcode = urb->iso_frame_desc[k].status;
- if (errcode)
- DBG(HFCUSB_DBG_VERBOSE_USB, "HFC-S USB: tx_iso_complete "
- "packet %i, status: %i\n",
- k, errcode);
- }
-
- // clear status, so go on with ISO transfers
- status = 0;
- }
-
- if (fifo->active && !status) {
- transp_mode = 0;
- if (fifon < 4 && hfc->b_mode[fifon / 2] == L1_MODE_TRANS)
- transp_mode = 1;
-
- /* is FifoFull-threshold set for our channel? */
- threshbit = (hfc->threshold_mask & (1 << fifon));
- num_isoc_packets = iso_packets[fifon];
-
- /* predict dataflow to avoid fifo overflow */
- if (fifon >= HFCUSB_D_TX) {
- sink = (threshbit) ? SINK_DMIN : SINK_DMAX;
- } else {
- sink = (threshbit) ? SINK_MIN : SINK_MAX;
- }
- fill_isoc_urb(urb, fifo->hfc->dev, fifo->pipe,
- context_iso_urb->buffer, num_isoc_packets,
- fifo->usb_packet_maxlen, fifo->intervall,
- tx_iso_complete, urb->context);
- memset(context_iso_urb->buffer, 0,
- sizeof(context_iso_urb->buffer));
- frame_complete = 0;
-
- /* Generate next ISO Packets */
- for (k = 0; k < num_isoc_packets; ++k) {
- if (fifo->skbuff) {
- len = fifo->skbuff->len;
- /* we lower data margin every msec */
- fifo->bit_line -= sink;
- current_len = (0 - fifo->bit_line) / 8;
- /* a maximum of 15 bytes per ISO packet makes our life easier */
- if (current_len > 14)
- current_len = 14;
- current_len =
- (len <=
- current_len) ? len : current_len;
- /* how many bits do we put on the line? */
- fifo->bit_line += current_len * 8;
-
- context_iso_urb->buffer[tx_offset] = 0;
- if (current_len == len) {
- if (!transp_mode) {
- /* here frame completion */
- context_iso_urb->
- buffer[tx_offset] = 1;
- /* add 2 flag bytes and a 16-bit CRC at the end of the ISDN frame */
- fifo->bit_line += 32;
- }
- frame_complete = 1;
- }
-
- memcpy(context_iso_urb->buffer +
- tx_offset + 1, fifo->skbuff->data,
- current_len);
- skb_pull(fifo->skbuff, current_len);
-
- /* define packet delimiters within the URB buffer */
- urb->iso_frame_desc[k].offset = tx_offset;
- urb->iso_frame_desc[k].length =
- current_len + 1;
-
- tx_offset += (current_len + 1);
- } else {
- urb->iso_frame_desc[k].offset =
- tx_offset++;
-
- urb->iso_frame_desc[k].length = 1;
- fifo->bit_line -= sink; /* we lower data margin every msec */
-
- if (fifo->bit_line < BITLINE_INF) {
- fifo->bit_line = BITLINE_INF;
- }
- }
-
- if (frame_complete) {
- fifo->delete_flg = 1;
- fifo->hif->l1l2(fifo->hif,
- PH_DATA | CONFIRM,
- (void *) (unsigned long) fifo->skbuff->
- truesize);
- if (fifo->skbuff && fifo->delete_flg) {
- dev_kfree_skb_any(fifo->skbuff);
- fifo->skbuff = NULL;
- fifo->delete_flg = 0;
- }
- frame_complete = 0;
- }
- }
- errcode = usb_submit_urb(urb, GFP_ATOMIC);
- if (errcode < 0) {
- printk(KERN_INFO
- "HFC-S USB: error submitting ISO URB: %d\n",
- errcode);
- }
- } else {
- if (status && !hfc->disc_flag) {
- printk(KERN_INFO
- "HFC-S USB: tx_iso_complete: error(%i): '%s', fifonum=%d\n",
- status, symbolic(urb_errlist, status), fifon);
- }
- }
-}
-
-static void
-rx_iso_complete(struct urb *urb)
-{
- iso_urb_struct *context_iso_urb = (iso_urb_struct *) urb->context;
- usb_fifo *fifo = context_iso_urb->owner_fifo;
- hfcusb_data *hfc = fifo->hfc;
- int k, len, errcode, offset, num_isoc_packets, fifon, maxlen,
- status;
- unsigned int iso_status;
- __u8 *buf;
- static __u8 eof[8];
-
- fifon = fifo->fifonum;
- status = urb->status;
-
- if (urb->status == -EOVERFLOW) {
- DBG(HFCUSB_DBG_VERBOSE_USB,
- "HFC-USB: ignoring USB DATAOVERRUN fifo(%i)", fifon);
- status = 0;
- }
-
- /* ISO transfer only partially completed,
- look at individual frame status for details */
- if (status == -EXDEV) {
- DBG(HFCUSB_DBG_VERBOSE_USB, "HFC-S USB: rx_iso_complete with -EXDEV "
- "urb->status %d, fifonum %d\n",
- status, fifon);
- status = 0;
- }
-
- if (fifo->active && !status) {
- num_isoc_packets = iso_packets[fifon];
- maxlen = fifo->usb_packet_maxlen;
- for (k = 0; k < num_isoc_packets; ++k) {
- len = urb->iso_frame_desc[k].actual_length;
- offset = urb->iso_frame_desc[k].offset;
- buf = context_iso_urb->buffer + offset;
- iso_status = urb->iso_frame_desc[k].status;
-
- if (iso_status && !hfc->disc_flag)
- DBG(HFCUSB_DBG_VERBOSE_USB,
- "HFC-S USB: rx_iso_complete "
- "ISO packet %i, status: %i\n",
- k, iso_status);
-
- if (fifon == HFCUSB_D_RX) {
- DBG(HFCUSB_DBG_VERBOSE_USB,
- "HFC-S USB: ISO-D-RX lst_urblen:%2d "
- "act_urblen:%2d max-urblen:%2d EOF:0x%0x",
- fifo->last_urblen, len, maxlen,
- eof[5]);
-
- DBG_PACKET(HFCUSB_DBG_VERBOSE_USB, buf, len);
- }
-
- if (fifo->last_urblen != maxlen) {
- /* the threshold mask is in the 2nd status byte */
- hfc->threshold_mask = buf[1];
- /* care for L1 state only for D-Channel
- to avoid overlapped iso completions */
- if (fifon == HFCUSB_D_RX) {
- /* the S0 state is in the upper half
- of the 1st status byte */
- s0_state_handler(hfc, buf[0] >> 4);
- }
- eof[fifon] = buf[0] & 1;
- if (len > 2)
- collect_rx_frame(fifo, buf + 2,
- len - 2,
- (len < maxlen) ?
- eof[fifon] : 0);
- } else {
- collect_rx_frame(fifo, buf, len,
- (len <
- maxlen) ? eof[fifon] :
- 0);
- }
- fifo->last_urblen = len;
- }
-
- fill_isoc_urb(urb, fifo->hfc->dev, fifo->pipe,
- context_iso_urb->buffer, num_isoc_packets,
- fifo->usb_packet_maxlen, fifo->intervall,
- rx_iso_complete, urb->context);
- errcode = usb_submit_urb(urb, GFP_ATOMIC);
- if (errcode < 0) {
- printk(KERN_ERR
- "HFC-S USB: error submitting ISO URB: %d\n",
- errcode);
- }
- } else {
- if (status && !hfc->disc_flag) {
- printk(KERN_ERR
- "HFC-S USB: rx_iso_complete : "
- "urb->status %d, fifonum %d\n",
- status, fifon);
- }
- }
-}
-
-/* collect rx data from INT- and ISO-URBs */
-static void
-collect_rx_frame(usb_fifo *fifo, __u8 *data, int len, int finish)
-{
- hfcusb_data *hfc = fifo->hfc;
- int transp_mode, fifon;
-
- fifon = fifo->fifonum;
- transp_mode = 0;
- if (fifon < 4 && hfc->b_mode[fifon / 2] == L1_MODE_TRANS)
- transp_mode = 1;
-
- if (!fifo->skbuff) {
- fifo->skbuff = dev_alloc_skb(fifo->max_size + 3);
- if (!fifo->skbuff) {
- printk(KERN_ERR
- "HFC-S USB: cannot allocate buffer for fifo(%d)\n",
- fifon);
- return;
- }
- }
- if (len) {
- if (fifo->skbuff->len + len < fifo->max_size) {
- skb_put_data(fifo->skbuff, data, len);
- } else {
- DBG(HFCUSB_DBG_FIFO_ERR,
- "HCF-USB: got frame exceeded fifo->max_size(%d) fifo(%d)",
- fifo->max_size, fifon);
- DBG_SKB(HFCUSB_DBG_VERBOSE_USB, fifo->skbuff);
- skb_trim(fifo->skbuff, 0);
- }
- }
- if (transp_mode && fifo->skbuff->len >= 128) {
- fifo->hif->l1l2(fifo->hif, PH_DATA | INDICATION,
- fifo->skbuff);
- fifo->skbuff = NULL;
- return;
- }
- /* we have a complete hdlc packet */
- if (finish) {
- if (fifo->skbuff->len > 3 &&
- !fifo->skbuff->data[fifo->skbuff->len - 1]) {
-
- if (fifon == HFCUSB_D_RX) {
- DBG(HFCUSB_DBG_DCHANNEL,
- "HFC-S USB: D-RX len(%d)", fifo->skbuff->len);
- DBG_SKB(HFCUSB_DBG_DCHANNEL, fifo->skbuff);
- }
-
- /* remove CRC & status */
- skb_trim(fifo->skbuff, fifo->skbuff->len - 3);
- if (fifon == HFCUSB_PCM_RX) {
- fifo->hif->l1l2(fifo->hif,
- PH_DATA_E | INDICATION,
- fifo->skbuff);
- } else
- fifo->hif->l1l2(fifo->hif,
- PH_DATA | INDICATION,
- fifo->skbuff);
- fifo->skbuff = NULL; /* buffer was freed from upper layer */
- } else {
- DBG(HFCUSB_DBG_FIFO_ERR,
- "HFC-S USB: ERROR frame len(%d) fifo(%d)",
- fifo->skbuff->len, fifon);
- DBG_SKB(HFCUSB_DBG_VERBOSE_USB, fifo->skbuff);
- skb_trim(fifo->skbuff, 0);
- }
- }
-}
-
-static void
-rx_int_complete(struct urb *urb)
-{
- int len;
- int status;
- __u8 *buf, maxlen, fifon;
- usb_fifo *fifo = (usb_fifo *) urb->context;
- hfcusb_data *hfc = fifo->hfc;
- static __u8 eof[8];
-
- urb->dev = hfc->dev; /* security init */
-
- fifon = fifo->fifonum;
- if ((!fifo->active) || (urb->status)) {
- DBG(HFCUSB_DBG_INIT, "HFC-S USB: RX-Fifo %i is going down (%i)",
- fifon, urb->status);
-
- fifo->urb->interval = 0; /* cancel automatic rescheduling */
- if (fifo->skbuff) {
- dev_kfree_skb_any(fifo->skbuff);
- fifo->skbuff = NULL;
- }
- return;
- }
- len = urb->actual_length;
- buf = fifo->buffer;
- maxlen = fifo->usb_packet_maxlen;
-
- if (fifon == HFCUSB_D_RX) {
- DBG(HFCUSB_DBG_VERBOSE_USB,
- "HFC-S USB: INT-D-RX lst_urblen:%2d "
- "act_urblen:%2d max-urblen:%2d EOF:0x%0x",
- fifo->last_urblen, len, maxlen,
- eof[5]);
- DBG_PACKET(HFCUSB_DBG_VERBOSE_USB, buf, len);
- }
-
- if (fifo->last_urblen != fifo->usb_packet_maxlen) {
- /* the threshold mask is in the 2nd status byte */
- hfc->threshold_mask = buf[1];
- /* the S0 state is in the upper half of the 1st status byte */
- s0_state_handler(hfc, buf[0] >> 4);
- eof[fifon] = buf[0] & 1;
- /* if we have more than the 2 status bytes -> collect data */
- if (len > 2)
- collect_rx_frame(fifo, buf + 2,
- urb->actual_length - 2,
- (len < maxlen) ? eof[fifon] : 0);
- } else {
- collect_rx_frame(fifo, buf, urb->actual_length,
- (len < maxlen) ? eof[fifon] : 0);
- }
- fifo->last_urblen = urb->actual_length;
- status = usb_submit_urb(urb, GFP_ATOMIC);
- if (status) {
- printk(KERN_INFO
- "HFC-S USB: %s error resubmitting URB fifo(%d)\n",
- __func__, fifon);
- }
-}
-
-/* start initial INT-URB for certain fifo */
-static void
-start_int_fifo(usb_fifo *fifo)
-{
- int errcode;
-
- DBG(HFCUSB_DBG_INIT, "HFC-S USB: starting RX INT-URB for fifo:%d\n",
- fifo->fifonum);
-
- if (!fifo->urb) {
- fifo->urb = usb_alloc_urb(0, GFP_KERNEL);
- if (!fifo->urb)
- return;
- }
- usb_fill_int_urb(fifo->urb, fifo->hfc->dev, fifo->pipe,
- fifo->buffer, fifo->usb_packet_maxlen,
- rx_int_complete, fifo, fifo->intervall);
- fifo->active = 1; /* must be marked active */
- errcode = usb_submit_urb(fifo->urb, GFP_KERNEL);
- if (errcode) {
- printk(KERN_ERR "HFC-S USB: submit URB error(%s): status:%i\n",
- __func__, errcode);
- fifo->active = 0;
- fifo->skbuff = NULL;
- }
-}
-
-static void
-setup_bchannel(hfcusb_data *hfc, int channel, int mode)
-{
- __u8 val, idx_table[2] = { 0, 2 };
-
- if (hfc->disc_flag) {
- return;
- }
- DBG(HFCUSB_DBG_STATES, "HFC-S USB: setting channel %d to mode %d",
- channel, mode);
- hfc->b_mode[channel] = mode;
-
- /* setup CON_HDLC */
- val = 0;
- if (mode != L1_MODE_NULL)
- val = 8; /* enable fifo? */
- if (mode == L1_MODE_TRANS)
- val |= 2; /* set transparent bit */
-
- /* set FIFO to transmit register */
- queue_control_request(hfc, HFCUSB_FIFO, idx_table[channel], 1);
- queue_control_request(hfc, HFCUSB_CON_HDLC, val, 1);
- /* reset fifo */
- queue_control_request(hfc, HFCUSB_INC_RES_F, 2, 1);
- /* set FIFO to receive register */
- queue_control_request(hfc, HFCUSB_FIFO, idx_table[channel] + 1, 1);
- queue_control_request(hfc, HFCUSB_CON_HDLC, val, 1);
- /* reset fifo */
- queue_control_request(hfc, HFCUSB_INC_RES_F, 2, 1);
-
- val = 0x40;
- if (hfc->b_mode[0])
- val |= 1;
- if (hfc->b_mode[1])
- val |= 2;
- queue_control_request(hfc, HFCUSB_SCTRL, val, 1);
-
- val = 0;
- if (hfc->b_mode[0])
- val |= 1;
- if (hfc->b_mode[1])
- val |= 2;
- queue_control_request(hfc, HFCUSB_SCTRL_R, val, 1);
-
- if (mode == L1_MODE_NULL) {
- if (channel)
- handle_led(hfc, LED_B2_OFF);
- else
- handle_led(hfc, LED_B1_OFF);
- } else {
- if (channel)
- handle_led(hfc, LED_B2_ON);
- else
- handle_led(hfc, LED_B1_ON);
- }
-}
-
-static void
-hfc_usb_l2l1(struct hisax_if *my_hisax_if, int pr, void *arg)
-{
- usb_fifo *fifo = my_hisax_if->priv;
- hfcusb_data *hfc = fifo->hfc;
-
- switch (pr) {
- case PH_ACTIVATE | REQUEST:
- if (fifo->fifonum == HFCUSB_D_TX) {
- DBG(HFCUSB_DBG_STATES,
- "HFC_USB: hfc_usb_d_l2l1 D-chan: PH_ACTIVATE | REQUEST");
-
- if (hfc->l1_state != 3
- && hfc->l1_state != 7) {
- hfc->d_if.ifc.l1l2(&hfc->d_if.ifc,
- PH_DEACTIVATE |
- INDICATION,
- NULL);
- DBG(HFCUSB_DBG_STATES,
- "HFC-S USB: PH_DEACTIVATE | INDICATION sent (not state 3 or 7)");
- } else {
- if (hfc->l1_state == 7) { /* l1 already active */
- hfc->d_if.ifc.l1l2(&hfc->
- d_if.
- ifc,
- PH_ACTIVATE
- |
- INDICATION,
- NULL);
- DBG(HFCUSB_DBG_STATES,
- "HFC-S USB: PH_ACTIVATE | INDICATION sent again ;)");
- } else {
- /* force sending INFO1 */
- queue_control_request(hfc,
- HFCUSB_STATES,
- 0x14,
- 1);
- mdelay(1);
- /* start l1 activation */
- queue_control_request(hfc,
- HFCUSB_STATES,
- 0x04,
- 1);
- if (!timer_pending
- (&hfc->t3_timer)) {
- hfc->t3_timer.
- expires =
- jiffies +
- (HFC_TIMER_T3 *
- HZ) / 1000;
- add_timer(&hfc->
- t3_timer);
- }
- }
- }
- } else {
- DBG(HFCUSB_DBG_STATES,
- "HFC_USB: hfc_usb_d_l2l1 B-chan: PH_ACTIVATE | REQUEST");
- setup_bchannel(hfc,
- (fifo->fifonum ==
- HFCUSB_B1_TX) ? 0 : 1,
- (long) arg);
- fifo->hif->l1l2(fifo->hif,
- PH_ACTIVATE | INDICATION,
- NULL);
- }
- break;
- case PH_DEACTIVATE | REQUEST:
- if (fifo->fifonum == HFCUSB_D_TX) {
- DBG(HFCUSB_DBG_STATES,
- "HFC_USB: hfc_usb_d_l2l1 D-chan: PH_DEACTIVATE | REQUEST");
- } else {
- DBG(HFCUSB_DBG_STATES,
- "HFC_USB: hfc_usb_d_l2l1 Bx-chan: PH_DEACTIVATE | REQUEST");
- setup_bchannel(hfc,
- (fifo->fifonum ==
- HFCUSB_B1_TX) ? 0 : 1,
- (int) L1_MODE_NULL);
- fifo->hif->l1l2(fifo->hif,
- PH_DEACTIVATE | INDICATION,
- NULL);
- }
- break;
- case PH_DATA | REQUEST:
- if (fifo->skbuff && fifo->delete_flg) {
- dev_kfree_skb_any(fifo->skbuff);
- fifo->skbuff = NULL;
- fifo->delete_flg = 0;
- }
- fifo->skbuff = arg; /* we have a new buffer */
- break;
- default:
- DBG(HFCUSB_DBG_STATES,
- "HFC_USB: hfc_usb_d_l2l1: unknown state : %#x", pr);
- break;
- }
-}
-
-/* initialize HFC-S USB chip registers, HiSax interface and USB URBs */
-static int
-hfc_usb_init(hfcusb_data *hfc)
-{
- usb_fifo *fifo;
- int i;
- u_char b;
- struct hisax_b_if *p_b_if[2];
-
- /* check the chip id */
- if (read_usb(hfc, HFCUSB_CHIP_ID, &b) != 1) {
- printk(KERN_INFO "HFC-USB: cannot read chip id\n");
- return (1);
- }
- if (b != HFCUSB_CHIPID) {
- printk(KERN_INFO "HFC-S USB: Invalid chip id 0x%02x\n", b);
- return (1);
- }
-
- /* first set the needed config, interface and alternate */
- usb_set_interface(hfc->dev, hfc->if_used, hfc->alt_used);
-
- /* do Chip reset */
- write_usb(hfc, HFCUSB_CIRM, 8);
- /* aux = output, reset off */
- write_usb(hfc, HFCUSB_CIRM, 0x10);
-
- /* set USB_SIZE to match wMaxPacketSize for INT or BULK transfers */
- write_usb(hfc, HFCUSB_USB_SIZE,
- (hfc->packet_size / 8) | ((hfc->packet_size / 8) << 4));
-
- /* set USB_SIZE_I to match wMaxPacketSize for ISO transfers */
- write_usb(hfc, HFCUSB_USB_SIZE_I, hfc->iso_packet_size);
-
- /* enable PCM/GCI master mode */
- write_usb(hfc, HFCUSB_MST_MODE1, 0); /* set default values */
- write_usb(hfc, HFCUSB_MST_MODE0, 1); /* enable master mode */
-
- /* init the fifos */
- write_usb(hfc, HFCUSB_F_THRES,
- (HFCUSB_TX_THRESHOLD /
- 8) | ((HFCUSB_RX_THRESHOLD / 8) << 4));
-
- fifo = hfc->fifos;
- for (i = 0; i < HFCUSB_NUM_FIFOS; i++) {
- write_usb(hfc, HFCUSB_FIFO, i); /* select the desired fifo */
- fifo[i].skbuff = NULL; /* init buffer pointer */
- fifo[i].max_size =
- (i <= HFCUSB_B2_RX) ? MAX_BCH_SIZE : MAX_DFRAME_LEN;
- fifo[i].last_urblen = 0;
- /* set 2 bit for D- & E-channel */
- write_usb(hfc, HFCUSB_HDLC_PAR,
- ((i <= HFCUSB_B2_RX) ? 0 : 2));
- /* rx hdlc, enable IFF for D-channel */
- write_usb(hfc, HFCUSB_CON_HDLC,
- ((i == HFCUSB_D_TX) ? 0x09 : 0x08));
- write_usb(hfc, HFCUSB_INC_RES_F, 2); /* reset the fifo */
- }
-
- write_usb(hfc, HFCUSB_CLKDEL, 0x0f); /* clock delay value */
- write_usb(hfc, HFCUSB_STATES, 3 | 0x10); /* set deactivated mode */
- write_usb(hfc, HFCUSB_STATES, 3); /* enable state machine */
-
- write_usb(hfc, HFCUSB_SCTRL_R, 0); /* disable both B receivers */
- write_usb(hfc, HFCUSB_SCTRL, 0x40); /* disable B transmitters + capacitive mode */
-
- /* set both B-channel to not connected */
- hfc->b_mode[0] = L1_MODE_NULL;
- hfc->b_mode[1] = L1_MODE_NULL;
-
- hfc->l1_activated = 0;
- hfc->disc_flag = 0;
- hfc->led_state = 0;
- hfc->old_led_state = 0;
-
- /* init the t3 timer */
- timer_setup(&hfc->t3_timer, l1_timer_expire_t3, 0);
-
- /* init the t4 timer */
- timer_setup(&hfc->t4_timer, l1_timer_expire_t4, 0);
-
- /* init the background machinery for control requests */
- hfc->ctrl_read.bRequestType = 0xc0;
- hfc->ctrl_read.bRequest = 1;
- hfc->ctrl_read.wLength = cpu_to_le16(1);
- hfc->ctrl_write.bRequestType = 0x40;
- hfc->ctrl_write.bRequest = 0;
- hfc->ctrl_write.wLength = 0;
- usb_fill_control_urb(hfc->ctrl_urb,
- hfc->dev,
- hfc->ctrl_out_pipe,
- (u_char *)&hfc->ctrl_write,
- NULL, 0, ctrl_complete, hfc);
- /* Init All Fifos */
- for (i = 0; i < HFCUSB_NUM_FIFOS; i++) {
- hfc->fifos[i].iso[0].purb = NULL;
- hfc->fifos[i].iso[1].purb = NULL;
- hfc->fifos[i].active = 0;
- }
- /* register module to upper HiSax layers */
- hfc->d_if.owner = THIS_MODULE;
- hfc->d_if.ifc.priv = &hfc->fifos[HFCUSB_D_TX];
- hfc->d_if.ifc.l2l1 = hfc_usb_l2l1;
- for (i = 0; i < 2; i++) {
- hfc->b_if[i].ifc.priv = &hfc->fifos[HFCUSB_B1_TX + i * 2];
- hfc->b_if[i].ifc.l2l1 = hfc_usb_l2l1;
- p_b_if[i] = &hfc->b_if[i];
- }
- /* default protocol: EURO ISDN, should be a module_param */
- hfc->protocol = 2;
- i = hisax_register(&hfc->d_if, p_b_if, "hfc_usb", hfc->protocol);
- if (i) {
- printk(KERN_INFO "HFC-S USB: hisax_register -> %d\n", i);
- return i;
- }
-
-#ifdef CONFIG_HISAX_DEBUG
- hfc_debug = debug;
-#endif
-
- for (i = 0; i < 4; i++)
- hfc->fifos[i].hif = &p_b_if[i / 2]->ifc;
- for (i = 4; i < 8; i++)
- hfc->fifos[i].hif = &hfc->d_if.ifc;
-
- /* 3 (+1) INT IN + 3 ISO OUT */
- if (hfc->cfg_used == CNF_3INT3ISO || hfc->cfg_used == CNF_4INT3ISO) {
- start_int_fifo(hfc->fifos + HFCUSB_D_RX);
- if (hfc->fifos[HFCUSB_PCM_RX].pipe)
- start_int_fifo(hfc->fifos + HFCUSB_PCM_RX);
- start_int_fifo(hfc->fifos + HFCUSB_B1_RX);
- start_int_fifo(hfc->fifos + HFCUSB_B2_RX);
- }
- /* 3 (+1) ISO IN + 3 ISO OUT */
- if (hfc->cfg_used == CNF_3ISO3ISO || hfc->cfg_used == CNF_4ISO3ISO) {
- start_isoc_chain(hfc->fifos + HFCUSB_D_RX, ISOC_PACKETS_D,
- rx_iso_complete, 16);
- if (hfc->fifos[HFCUSB_PCM_RX].pipe)
- start_isoc_chain(hfc->fifos + HFCUSB_PCM_RX,
- ISOC_PACKETS_D, rx_iso_complete,
- 16);
- start_isoc_chain(hfc->fifos + HFCUSB_B1_RX, ISOC_PACKETS_B,
- rx_iso_complete, 16);
- start_isoc_chain(hfc->fifos + HFCUSB_B2_RX, ISOC_PACKETS_B,
- rx_iso_complete, 16);
- }
-
- start_isoc_chain(hfc->fifos + HFCUSB_D_TX, ISOC_PACKETS_D,
- tx_iso_complete, 1);
- start_isoc_chain(hfc->fifos + HFCUSB_B1_TX, ISOC_PACKETS_B,
- tx_iso_complete, 1);
- start_isoc_chain(hfc->fifos + HFCUSB_B2_TX, ISOC_PACKETS_B,
- tx_iso_complete, 1);
-
- handle_led(hfc, LED_POWER_ON);
-
- return (0);
-}
-
-/* initial callback for each plugged USB device */
-static int
-hfc_usb_probe(struct usb_interface *intf, const struct usb_device_id *id)
-{
- struct usb_device *dev = interface_to_usbdev(intf);
- hfcusb_data *context;
- struct usb_host_interface *iface = intf->cur_altsetting;
- struct usb_host_interface *iface_used = NULL;
- struct usb_host_endpoint *ep;
- int ifnum = iface->desc.bInterfaceNumber;
- int i, idx, alt_idx, probe_alt_setting, vend_idx, cfg_used, *vcf,
- attr, cfg_found, cidx, ep_addr;
- int cmptbl[16], small_match, iso_packet_size, packet_size,
- alt_used = 0;
- hfcsusb_vdata *driver_info;
-
- vend_idx = 0xffff;
- for (i = 0; hfcusb_idtab[i].idVendor; i++) {
- if ((le16_to_cpu(dev->descriptor.idVendor) == hfcusb_idtab[i].idVendor)
- && (le16_to_cpu(dev->descriptor.idProduct) == hfcusb_idtab[i].idProduct)) {
- vend_idx = i;
- continue;
- }
- }
-
- printk(KERN_INFO
- "HFC-S USB: probing interface(%d) actalt(%d) minor(%d)\n",
- ifnum, iface->desc.bAlternateSetting, intf->minor);
-
- if (vend_idx != 0xffff) {
- /* if vendor and product ID match, start probing alternate settings */
- alt_idx = 0;
- small_match = 0xffff;
-
- /* default settings */
- iso_packet_size = 16;
- packet_size = 64;
-
- while (alt_idx < intf->num_altsetting) {
- iface = intf->altsetting + alt_idx;
- probe_alt_setting = iface->desc.bAlternateSetting;
- cfg_used = 0;
-
- /* check for config EOL element */
- while (validconf[cfg_used][0]) {
- cfg_found = 1;
- vcf = validconf[cfg_used];
- /* first endpoint descriptor */
- ep = iface->endpoint;
-
- memcpy(cmptbl, vcf, 16 * sizeof(int));
-
- /* check for all endpoints in this alternate setting */
- for (i = 0; i < iface->desc.bNumEndpoints;
- i++) {
- ep_addr =
- ep->desc.bEndpointAddress;
- /* get endpoint base */
- idx = ((ep_addr & 0x7f) - 1) * 2;
- if (ep_addr & 0x80)
- idx++;
- attr = ep->desc.bmAttributes;
- if (cmptbl[idx] == EP_NUL) {
- cfg_found = 0;
- }
- if (attr == USB_ENDPOINT_XFER_INT
- && cmptbl[idx] == EP_INT)
- cmptbl[idx] = EP_NUL;
- if (attr == USB_ENDPOINT_XFER_BULK
- && cmptbl[idx] == EP_BLK)
- cmptbl[idx] = EP_NUL;
- if (attr == USB_ENDPOINT_XFER_ISOC
- && cmptbl[idx] == EP_ISO)
- cmptbl[idx] = EP_NUL;
-
- /* check if all INT endpoints match minimum interval */
- if ((attr == USB_ENDPOINT_XFER_INT)
- && (ep->desc.bInterval < vcf[17])) {
- cfg_found = 0;
- }
- ep++;
- }
- for (i = 0; i < 16; i++) {
- /* all entries must be EP_NOP or EP_NUL for a valid config */
- if (cmptbl[i] != EP_NOP
- && cmptbl[i] != EP_NUL)
- cfg_found = 0;
- }
- if (cfg_found) {
- if (cfg_used < small_match) {
- small_match = cfg_used;
- alt_used =
- probe_alt_setting;
- iface_used = iface;
- }
- }
- cfg_used++;
- }
- alt_idx++;
- } /* (alt_idx < intf->num_altsetting) */
-
- /* found a valid USB TA endpoint config */
- if (small_match != 0xffff) {
- iface = iface_used;
- if (!(context = kzalloc(sizeof(hfcusb_data), GFP_KERNEL)))
- return (-ENOMEM); /* got no mem */
-
- ep = iface->endpoint;
- vcf = validconf[small_match];
-
- for (i = 0; i < iface->desc.bNumEndpoints; i++) {
- ep_addr = ep->desc.bEndpointAddress;
- /* get endpoint base */
- idx = ((ep_addr & 0x7f) - 1) * 2;
- if (ep_addr & 0x80)
- idx++;
- cidx = idx & 7;
- attr = ep->desc.bmAttributes;
-
- /* init Endpoints */
- if (vcf[idx] != EP_NOP
- && vcf[idx] != EP_NUL) {
- switch (attr) {
- case USB_ENDPOINT_XFER_INT:
- context->
- fifos[cidx].
- pipe =
- usb_rcvintpipe
- (dev,
- ep->desc.
- bEndpointAddress);
- context->
- fifos[cidx].
- usb_transfer_mode
- = USB_INT;
- packet_size =
- le16_to_cpu(ep->desc.wMaxPacketSize);
- break;
- case USB_ENDPOINT_XFER_BULK:
- if (ep_addr & 0x80)
- context->
- fifos
- [cidx].
- pipe =
- usb_rcvbulkpipe
- (dev,
- ep->
- desc.
- bEndpointAddress);
- else
- context->
- fifos
- [cidx].
- pipe =
- usb_sndbulkpipe
- (dev,
- ep->
- desc.
- bEndpointAddress);
- context->
- fifos[cidx].
- usb_transfer_mode
- = USB_BULK;
- packet_size =
- le16_to_cpu(ep->desc.wMaxPacketSize);
- break;
- case USB_ENDPOINT_XFER_ISOC:
- if (ep_addr & 0x80)
- context->
- fifos
- [cidx].
- pipe =
- usb_rcvisocpipe
- (dev,
- ep->
- desc.
- bEndpointAddress);
- else
- context->
- fifos
- [cidx].
- pipe =
- usb_sndisocpipe
- (dev,
- ep->
- desc.
- bEndpointAddress);
- context->
- fifos[cidx].
- usb_transfer_mode
- = USB_ISOC;
- iso_packet_size =
- le16_to_cpu(ep->desc.wMaxPacketSize);
- break;
- default:
- context->
- fifos[cidx].
- pipe = 0;
- } /* switch attribute */
-
- if (context->fifos[cidx].pipe) {
- context->fifos[cidx].
- fifonum = cidx;
- context->fifos[cidx].hfc =
- context;
- context->fifos[cidx].usb_packet_maxlen =
- le16_to_cpu(ep->desc.wMaxPacketSize);
- context->fifos[cidx].
- intervall =
- ep->desc.bInterval;
- context->fifos[cidx].
- skbuff = NULL;
- }
- }
- ep++;
- }
- context->dev = dev; /* save device */
- context->if_used = ifnum; /* save used interface */
- context->alt_used = alt_used; /* and alternate config */
- context->ctrl_paksize = dev->descriptor.bMaxPacketSize0; /* control size */
- context->cfg_used = vcf[16]; /* store used config */
- context->vend_idx = vend_idx; /* store found vendor */
- context->packet_size = packet_size;
- context->iso_packet_size = iso_packet_size;
-
- /* create the control pipes needed for register access */
- context->ctrl_in_pipe =
- usb_rcvctrlpipe(context->dev, 0);
- context->ctrl_out_pipe =
- usb_sndctrlpipe(context->dev, 0);
-
- driver_info = (hfcsusb_vdata *)
- hfcusb_idtab[vend_idx].driver_info;
-
- context->ctrl_urb = usb_alloc_urb(0, GFP_KERNEL);
-
- if (!context->ctrl_urb) {
- pr_warn("%s: No memory for control urb\n",
- driver_info->vend_name);
- kfree(context);
- return -ENOMEM;
- }
-
- pr_info("HFC-S USB: detected \"%s\"\n",
- driver_info->vend_name);
-
- DBG(HFCUSB_DBG_INIT,
- "HFC-S USB: Endpoint-Config: %s (if=%d alt=%d), E-Channel(%d)",
- conf_str[small_match], context->if_used,
- context->alt_used,
- validconf[small_match][18]);
-
- /* init the chip and register the driver */
- if (hfc_usb_init(context)) {
- usb_kill_urb(context->ctrl_urb);
- usb_free_urb(context->ctrl_urb);
- context->ctrl_urb = NULL;
- kfree(context);
- return (-EIO);
- }
- usb_set_intfdata(intf, context);
- return (0);
- }
- } else {
- printk(KERN_INFO
- "HFC-S USB: no valid vendor found in USB descriptor\n");
- }
- return (-EIO);
-}
-
-/* callback for unplugged USB device */
-static void
-hfc_usb_disconnect(struct usb_interface *intf)
-{
- hfcusb_data *context = usb_get_intfdata(intf);
- int i;
-
- handle_led(context, LED_POWER_OFF);
- schedule_timeout(HZ / 100);
-
- printk(KERN_INFO "HFC-S USB: device disconnect\n");
- context->disc_flag = 1;
- usb_set_intfdata(intf, NULL);
-
- if (timer_pending(&context->t3_timer))
- del_timer(&context->t3_timer);
- if (timer_pending(&context->t4_timer))
- del_timer(&context->t4_timer);
-
- /* tell all fifos to terminate */
- for (i = 0; i < HFCUSB_NUM_FIFOS; i++) {
- if (context->fifos[i].usb_transfer_mode == USB_ISOC) {
- if (context->fifos[i].active > 0) {
- stop_isoc_chain(&context->fifos[i]);
- DBG(HFCUSB_DBG_INIT,
- "HFC-S USB: %s stopping ISOC chain Fifo(%i)",
- __func__, i);
- }
- } else {
- if (context->fifos[i].active > 0) {
- context->fifos[i].active = 0;
- DBG(HFCUSB_DBG_INIT,
- "HFC-S USB: %s unlinking URB for Fifo(%i)",
- __func__, i);
- }
- usb_kill_urb(context->fifos[i].urb);
- usb_free_urb(context->fifos[i].urb);
- context->fifos[i].urb = NULL;
- }
- context->fifos[i].active = 0;
- }
- usb_kill_urb(context->ctrl_urb);
- usb_free_urb(context->ctrl_urb);
- context->ctrl_urb = NULL;
- hisax_unregister(&context->d_if);
- kfree(context); /* free our structure again */
-}
-
-static struct usb_driver hfc_drv = {
- .name = "hfc_usb",
- .id_table = hfcusb_idtab,
- .probe = hfc_usb_probe,
- .disconnect = hfc_usb_disconnect,
- .disable_hub_initiated_lpm = 1,
-};
-
-static void __exit
-hfc_usb_mod_exit(void)
-{
- usb_deregister(&hfc_drv); /* release our driver */
- printk(KERN_INFO "HFC-S USB: module removed\n");
-}
-
-static int __init
-hfc_usb_mod_init(void)
-{
- char revstr[30], datestr[30], dummy[30];
-#ifndef CONFIG_HISAX_DEBUG
- hfc_debug = debug;
-#endif
- sscanf(hfcusb_revision,
- "%s %s $ %s %s %s $ ", dummy, revstr,
- dummy, datestr, dummy);
- printk(KERN_INFO
- "HFC-S USB: driver module revision %s date %s loaded, (debug=%i)\n",
- revstr, datestr, debug);
- if (usb_register(&hfc_drv)) {
- printk(KERN_INFO
- "HFC-S USB: Unable to register HFC-S USB module at usb stack\n");
- return (-1); /* unable to register */
- }
- return (0);
-}
-
-module_init(hfc_usb_mod_init);
-module_exit(hfc_usb_mod_exit);
-MODULE_AUTHOR(DRIVER_AUTHOR);
-MODULE_DESCRIPTION(DRIVER_DESC);
-MODULE_LICENSE("GPL");
-MODULE_DEVICE_TABLE(usb, hfcusb_idtab);
diff --git a/drivers/isdn/hisax/hfc_usb.h b/drivers/isdn/hisax/hfc_usb.h
deleted file mode 100644
index 9a212330e8a8..000000000000
--- a/drivers/isdn/hisax/hfc_usb.h
+++ /dev/null
@@ -1,208 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * hfc_usb.h
- *
- * $Id: hfc_usb.h,v 1.1.2.5 2007/08/20 14:36:03 mbachem Exp $
- */
-
-#ifndef __HFC_USB_H__
-#define __HFC_USB_H__
-
-#define DRIVER_AUTHOR "Peter Sprenger (sprenger@moving-byters.de)"
-#define DRIVER_DESC "HFC-S USB based HiSAX ISDN driver"
-
-
-#define HFC_CTRL_TIMEOUT 20 /* 5ms timeout writing/reading regs */
-#define HFC_TIMER_T3 8000 /* timeout for l1 activation timer */
-#define HFC_TIMER_T4 500 /* time for state change interval */
-
-#define HFCUSB_L1_STATECHANGE 0 /* L1 state changed */
-#define HFCUSB_L1_DRX 1 /* D-frame received */
-#define HFCUSB_L1_ERX 2 /* E-frame received */
-#define HFCUSB_L1_DTX 4 /* D-frames completed */
-
-#define MAX_BCH_SIZE 2048 /* allowed B-channel packet size */
-
-#define HFCUSB_RX_THRESHOLD 64 /* threshold for fifo report bit rx */
-#define HFCUSB_TX_THRESHOLD 64 /* threshold for fifo report bit tx */
-
-#define HFCUSB_CHIP_ID 0x16 /* Chip ID register index */
-#define HFCUSB_CIRM 0x00 /* cirm register index */
-#define HFCUSB_USB_SIZE 0x07 /* int length register */
-#define HFCUSB_USB_SIZE_I 0x06 /* iso length register */
-#define HFCUSB_F_CROSS 0x0b /* bit order register */
-#define HFCUSB_CLKDEL 0x37 /* bit delay register */
-#define HFCUSB_CON_HDLC 0xfa /* channel connect register */
-#define HFCUSB_HDLC_PAR 0xfb
-#define HFCUSB_SCTRL 0x31 /* S-bus control register (tx) */
-#define HFCUSB_SCTRL_E 0x32 /* same for E and special funcs */
-#define HFCUSB_SCTRL_R 0x33 /* S-bus control register (rx) */
-#define HFCUSB_F_THRES 0x0c /* threshold register */
-#define HFCUSB_FIFO 0x0f /* fifo select register */
-#define HFCUSB_F_USAGE 0x1a /* fifo usage register */
-#define HFCUSB_MST_MODE0 0x14
-#define HFCUSB_MST_MODE1 0x15
-#define HFCUSB_P_DATA 0x1f
-#define HFCUSB_INC_RES_F 0x0e
-#define HFCUSB_STATES 0x30
-
-#define HFCUSB_CHIPID 0x40 /* ID value of HFC-S USB */
-
-
-/* fifo registers */
-#define HFCUSB_NUM_FIFOS 8 /* maximum number of fifos */
-#define HFCUSB_B1_TX 0 /* index for B1 transmit bulk/int */
-#define HFCUSB_B1_RX 1 /* index for B1 receive bulk/int */
-#define HFCUSB_B2_TX 2
-#define HFCUSB_B2_RX 3
-#define HFCUSB_D_TX 4
-#define HFCUSB_D_RX 5
-#define HFCUSB_PCM_TX 6
-#define HFCUSB_PCM_RX 7
-
-/*
- * used to switch snd_transfer_mode for different TA modes, e.g. the Billion USB TA only
- * supports ISO out, while the Cologne Chip EVAL TA only supports BULK out
- */
-#define USB_INT 0
-#define USB_BULK 1
-#define USB_ISOC 2
-
-#define ISOC_PACKETS_D 8
-#define ISOC_PACKETS_B 8
-#define ISO_BUFFER_SIZE 128
-
-/* Fifo flow Control for TX ISO */
-#define SINK_MAX 68
-#define SINK_MIN 48
-#define SINK_DMIN 12
-#define SINK_DMAX 18
-#define BITLINE_INF (-64 * 8)
-
-/* HFC-S USB register access by Control-URBs */
-#define write_usb(a, b, c) usb_control_msg((a)->dev, (a)->ctrl_out_pipe, 0, 0x40, (c), (b), NULL, 0, HFC_CTRL_TIMEOUT)
-#define read_usb(a, b, c) usb_control_msg((a)->dev, (a)->ctrl_in_pipe, 1, 0xC0, 0, (b), (c), 1, HFC_CTRL_TIMEOUT)
-#define HFC_CTRL_BUFSIZE 32
-
-/* entry and size of output/input control buffer */
-typedef struct {
- __u8 hfc_reg; /* register number */
- __u8 reg_val; /* value to be written (or read) */
- int action; /* data for action handler */
-} ctrl_buft;
-
-/* Debugging Flags */
-#define HFCUSB_DBG_INIT 0x0001
-#define HFCUSB_DBG_STATES 0x0002
-#define HFCUSB_DBG_DCHANNEL 0x0080
-#define HFCUSB_DBG_FIFO_ERR 0x4000
-#define HFCUSB_DBG_VERBOSE_USB 0x8000
-
-/*
- * URB error codes:
- * Used to represent a list of values and their respective symbolic names
- */
-struct hfcusb_symbolic_list {
- const int num;
- const char *name;
-};
-
-static struct hfcusb_symbolic_list urb_errlist[] = {
- {-ENOMEM, "No memory for allocation of internal structures"},
- {-ENOSPC, "The host controller's bandwidth is already consumed"},
- {-ENOENT, "URB was canceled by unlink_urb"},
- {-EXDEV, "ISO transfer only partially completed"},
- {-EAGAIN, "Too match scheduled for the future"},
- {-ENXIO, "URB already queued"},
- {-EFBIG, "Too much ISO frames requested"},
- {-ENOSR, "Buffer error (overrun)"},
- {-EPIPE, "Specified endpoint is stalled (device not responding)"},
- {-EOVERFLOW, "Babble (bad cable?)"},
- {-EPROTO, "Bit-stuff error (bad cable?)"},
- {-EILSEQ, "CRC/Timeout"},
- {-ETIMEDOUT, "NAK (device does not respond)"},
- {-ESHUTDOWN, "Device unplugged"},
- {-1, NULL}
-};
-
-
-/*
- * device-dependent information to support different
- * ISDN TAs using the HFC-S USB chip
- */
-
-/* USB descriptor needs to contain one of the following endpoint combinations: */
-#define CNF_4INT3ISO 1 // 4 INT IN, 3 ISO OUT
-#define CNF_3INT3ISO 2 // 3 INT IN, 3 ISO OUT
-#define CNF_4ISO3ISO 3 // 4 ISO IN, 3 ISO OUT
-#define CNF_3ISO3ISO 4 // 3 ISO IN, 3 ISO OUT
-
-#define EP_NUL 1 // Endpoint at this position not allowed
-#define EP_NOP 2 // all type of endpoints allowed at this position
-#define EP_ISO 3 // Isochron endpoint mandatory at this position
-#define EP_BLK 4 // Bulk endpoint mandatory at this position
-#define EP_INT 5 // Interrupt endpoint mandatory at this position
-
-/*
- * List of all supported endpoint configuration sets, used to find the
- * best matching endpoint configuration within a device's USB descriptor.
- * We need at least 3 RX endpoints and 3 TX endpoints, either
- * INT-in and ISO-out, or ISO-in and ISO-out;
- * with 4 RX endpoints even E-channel logging is possible.
- */
-static int validconf[][19] = {
- // INT in, ISO out config
- {EP_NUL, EP_INT, EP_NUL, EP_INT, EP_NUL, EP_INT, EP_NOP, EP_INT,
- EP_ISO, EP_NUL, EP_ISO, EP_NUL, EP_ISO, EP_NUL, EP_NUL, EP_NUL,
- CNF_4INT3ISO, 2, 1},
- {EP_NUL, EP_INT, EP_NUL, EP_INT, EP_NUL, EP_INT, EP_NUL, EP_NUL,
- EP_ISO, EP_NUL, EP_ISO, EP_NUL, EP_ISO, EP_NUL, EP_NUL, EP_NUL,
- CNF_3INT3ISO, 2, 0},
- // ISO in, ISO out config
- {EP_NUL, EP_NUL, EP_NUL, EP_NUL, EP_NUL, EP_NUL, EP_NUL, EP_NUL,
- EP_ISO, EP_ISO, EP_ISO, EP_ISO, EP_ISO, EP_ISO, EP_NOP, EP_ISO,
- CNF_4ISO3ISO, 2, 1},
- {EP_NUL, EP_NUL, EP_NUL, EP_NUL, EP_NUL, EP_NUL, EP_NUL, EP_NUL,
- EP_ISO, EP_ISO, EP_ISO, EP_ISO, EP_ISO, EP_ISO, EP_NUL, EP_NUL,
- CNF_3ISO3ISO, 2, 0},
- {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} // EOL element
-};
-
-#ifdef CONFIG_HISAX_DEBUG
-// string description of chosen config
-static char *conf_str[] = {
- "4 Interrupt IN + 3 Isochron OUT",
- "3 Interrupt IN + 3 Isochron OUT",
- "4 Isochron IN + 3 Isochron OUT",
- "3 Isochron IN + 3 Isochron OUT"
-};
-#endif
-
-typedef struct {
- int vendor; // vendor id
- int prod_id; // product id
- char *vend_name; // vendor string
- __u8 led_scheme; // led display scheme
- signed short led_bits[8]; // array of 8 possible LED bitmask settings
-} vendor_data;
-
-#define LED_OFF 0 // no LED support
-#define LED_SCHEME1 1 // LED standard scheme
-#define LED_SCHEME2 2 // not used yet...
-
-#define LED_POWER_ON 1
-#define LED_POWER_OFF 2
-#define LED_S0_ON 3
-#define LED_S0_OFF 4
-#define LED_B1_ON 5
-#define LED_B1_OFF 6
-#define LED_B1_DATA 7
-#define LED_B2_ON 8
-#define LED_B2_OFF 9
-#define LED_B2_DATA 10
-
-#define LED_NORMAL 0 // LEDs are normal
-#define LED_INVERTED 1 // LEDs are inverted
-
-
-#endif // __HFC_USB_H__
diff --git a/drivers/isdn/hisax/hfcscard.c b/drivers/isdn/hisax/hfcscard.c
deleted file mode 100644
index 91b5219499ca..000000000000
--- a/drivers/isdn/hisax/hfcscard.c
+++ /dev/null
@@ -1,261 +0,0 @@
-/* $Id: hfcscard.c,v 1.10.2.4 2004/01/14 16:04:48 keil Exp $
- *
- * low-level support for HFC-S based cards (Teles3c, ACER P10)
- *
- * Author Karsten Keil
- * Copyright by Karsten Keil <keil@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#include <linux/init.h>
-#include <linux/isapnp.h>
-#include "hisax.h"
-#include "hfc_2bds0.h"
-#include "isdnl1.h"
-
-static const char *hfcs_revision = "$Revision: 1.10.2.4 $";
-
-static irqreturn_t
-hfcs_interrupt(int intno, void *dev_id)
-{
- struct IsdnCardState *cs = dev_id;
- u_char val, stat;
- u_long flags;
-
- spin_lock_irqsave(&cs->lock, flags);
- if ((HFCD_ANYINT | HFCD_BUSY_NBUSY) &
- (stat = cs->BC_Read_Reg(cs, HFCD_DATA, HFCD_STAT))) {
- val = cs->BC_Read_Reg(cs, HFCD_DATA, HFCD_INT_S1);
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "HFCS: stat(%02x) s1(%02x)", stat, val);
- hfc2bds0_interrupt(cs, val);
- } else {
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "HFCS: irq_no_irq stat(%02x)", stat);
- }
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
-}
-
-static void
-hfcs_Timer(struct timer_list *t)
-{
- struct IsdnCardState *cs = from_timer(cs, t, hw.hfcD.timer);
- cs->hw.hfcD.timer.expires = jiffies + 75;
- /* WD RESET */
-/* WriteReg(cs, HFCD_DATA, HFCD_CTMT, cs->hw.hfcD.ctmt | 0x80);
- add_timer(&cs->hw.hfcD.timer);
-*/
-}
-
-static void
-release_io_hfcs(struct IsdnCardState *cs)
-{
- release2bds0(cs);
- del_timer(&cs->hw.hfcD.timer);
- if (cs->hw.hfcD.addr)
- release_region(cs->hw.hfcD.addr, 2);
-}
-
-static void
-reset_hfcs(struct IsdnCardState *cs)
-{
- printk(KERN_INFO "HFCS: resetting card\n");
- cs->hw.hfcD.cirm = HFCD_RESET;
- if (cs->typ == ISDN_CTYPE_TELES3C)
- cs->hw.hfcD.cirm |= HFCD_MEM8K;
- cs->BC_Write_Reg(cs, HFCD_DATA, HFCD_CIRM, cs->hw.hfcD.cirm); /* Reset On */
- mdelay(10);
- cs->hw.hfcD.cirm = 0;
- if (cs->typ == ISDN_CTYPE_TELES3C)
- cs->hw.hfcD.cirm |= HFCD_MEM8K;
- cs->BC_Write_Reg(cs, HFCD_DATA, HFCD_CIRM, cs->hw.hfcD.cirm); /* Reset Off */
- mdelay(10);
- if (cs->typ == ISDN_CTYPE_TELES3C)
- cs->hw.hfcD.cirm |= HFCD_INTB;
- else if (cs->typ == ISDN_CTYPE_ACERP10)
- cs->hw.hfcD.cirm |= HFCD_INTA;
- cs->BC_Write_Reg(cs, HFCD_DATA, HFCD_CIRM, cs->hw.hfcD.cirm);
- cs->BC_Write_Reg(cs, HFCD_DATA, HFCD_CLKDEL, 0x0e);
- cs->BC_Write_Reg(cs, HFCD_DATA, HFCD_TEST, HFCD_AUTO_AWAKE); /* S/T Auto awake */
- cs->hw.hfcD.ctmt = HFCD_TIM25 | HFCD_AUTO_TIMER;
- cs->BC_Write_Reg(cs, HFCD_DATA, HFCD_CTMT, cs->hw.hfcD.ctmt);
- cs->hw.hfcD.int_m2 = HFCD_IRQ_ENABLE;
- cs->hw.hfcD.int_m1 = HFCD_INTS_B1TRANS | HFCD_INTS_B2TRANS |
- HFCD_INTS_DTRANS | HFCD_INTS_B1REC | HFCD_INTS_B2REC |
- HFCD_INTS_DREC | HFCD_INTS_L1STATE;
- cs->BC_Write_Reg(cs, HFCD_DATA, HFCD_INT_M1, cs->hw.hfcD.int_m1);
- cs->BC_Write_Reg(cs, HFCD_DATA, HFCD_INT_M2, cs->hw.hfcD.int_m2);
- cs->BC_Write_Reg(cs, HFCD_DATA, HFCD_STATES, HFCD_LOAD_STATE | 2); /* HFC ST 2 */
- udelay(10);
- cs->BC_Write_Reg(cs, HFCD_DATA, HFCD_STATES, 2); /* HFC ST 2 */
- cs->hw.hfcD.mst_m = HFCD_MASTER;
- cs->BC_Write_Reg(cs, HFCD_DATA, HFCD_MST_MODE, cs->hw.hfcD.mst_m); /* HFC Master */
- cs->hw.hfcD.sctrl = 0;
- cs->BC_Write_Reg(cs, HFCD_DATA, HFCD_SCTRL, cs->hw.hfcD.sctrl);
-}
-
-static int
-hfcs_card_msg(struct IsdnCardState *cs, int mt, void *arg)
-{
- u_long flags;
- int delay;
-
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "HFCS: card_msg %x", mt);
- switch (mt) {
- case CARD_RESET:
- spin_lock_irqsave(&cs->lock, flags);
- reset_hfcs(cs);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_RELEASE:
- release_io_hfcs(cs);
- return (0);
- case CARD_INIT:
- delay = (75 * HZ) / 100 + 1;
- mod_timer(&cs->hw.hfcD.timer, jiffies + delay);
- spin_lock_irqsave(&cs->lock, flags);
- reset_hfcs(cs);
- init2bds0(cs);
- spin_unlock_irqrestore(&cs->lock, flags);
- delay = (80 * HZ) / 1000 + 1;
- msleep(80);
- spin_lock_irqsave(&cs->lock, flags);
- cs->hw.hfcD.ctmt |= HFCD_TIM800;
- cs->BC_Write_Reg(cs, HFCD_DATA, HFCD_CTMT, cs->hw.hfcD.ctmt);
- cs->BC_Write_Reg(cs, HFCD_DATA, HFCD_MST_MODE, cs->hw.hfcD.mst_m);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_TEST:
- return (0);
- }
- return (0);
-}
-
-#ifdef __ISAPNP__
-static struct isapnp_device_id hfc_ids[] = {
- { ISAPNP_VENDOR('A', 'N', 'X'), ISAPNP_FUNCTION(0x1114),
- ISAPNP_VENDOR('A', 'N', 'X'), ISAPNP_FUNCTION(0x1114),
- (unsigned long) "Acer P10" },
- { ISAPNP_VENDOR('B', 'I', 'L'), ISAPNP_FUNCTION(0x0002),
- ISAPNP_VENDOR('B', 'I', 'L'), ISAPNP_FUNCTION(0x0002),
- (unsigned long) "Billion 2" },
- { ISAPNP_VENDOR('B', 'I', 'L'), ISAPNP_FUNCTION(0x0001),
- ISAPNP_VENDOR('B', 'I', 'L'), ISAPNP_FUNCTION(0x0001),
- (unsigned long) "Billion 1" },
- { ISAPNP_VENDOR('T', 'A', 'G'), ISAPNP_FUNCTION(0x7410),
- ISAPNP_VENDOR('T', 'A', 'G'), ISAPNP_FUNCTION(0x7410),
- (unsigned long) "IStar PnP" },
- { ISAPNP_VENDOR('T', 'A', 'G'), ISAPNP_FUNCTION(0x2610),
- ISAPNP_VENDOR('T', 'A', 'G'), ISAPNP_FUNCTION(0x2610),
- (unsigned long) "Teles 16.3c" },
- { ISAPNP_VENDOR('S', 'F', 'M'), ISAPNP_FUNCTION(0x0001),
- ISAPNP_VENDOR('S', 'F', 'M'), ISAPNP_FUNCTION(0x0001),
- (unsigned long) "Tornado Tipa C" },
- { ISAPNP_VENDOR('K', 'Y', 'E'), ISAPNP_FUNCTION(0x0001),
- ISAPNP_VENDOR('K', 'Y', 'E'), ISAPNP_FUNCTION(0x0001),
- (unsigned long) "Genius Speed Surfer" },
- { 0, }
-};
-
-static struct isapnp_device_id *ipid = &hfc_ids[0];
-static struct pnp_card *pnp_c = NULL;
-#endif
-
-int setup_hfcs(struct IsdnCard *card)
-{
- struct IsdnCardState *cs = card->cs;
- char tmp[64];
-
- strcpy(tmp, hfcs_revision);
- printk(KERN_INFO "HiSax: HFC-S driver Rev. %s\n", HiSax_getrev(tmp));
-
-#ifdef __ISAPNP__
- if (!card->para[1] && isapnp_present()) {
- struct pnp_dev *pnp_d;
- while (ipid->card_vendor) {
- if ((pnp_c = pnp_find_card(ipid->card_vendor,
- ipid->card_device, pnp_c))) {
- pnp_d = NULL;
- if ((pnp_d = pnp_find_dev(pnp_c,
- ipid->vendor, ipid->function, pnp_d))) {
- int err;
-
- printk(KERN_INFO "HiSax: %s detected\n",
- (char *)ipid->driver_data);
- pnp_disable_dev(pnp_d);
- err = pnp_activate_dev(pnp_d);
- if (err < 0) {
- printk(KERN_WARNING "%s: pnp_activate_dev ret(%d)\n",
- __func__, err);
- return (0);
- }
- card->para[1] = pnp_port_start(pnp_d, 0);
- card->para[0] = pnp_irq(pnp_d, 0);
- if (card->para[0] == -1 || !card->para[1]) {
- printk(KERN_ERR "HFC PnP:some resources are missing %ld/%lx\n",
- card->para[0], card->para[1]);
- pnp_disable_dev(pnp_d);
- return (0);
- }
- break;
- } else {
- printk(KERN_ERR "HFC PnP: PnP error card found, no device\n");
- }
- }
- ipid++;
- pnp_c = NULL;
- }
- if (!ipid->card_vendor) {
- printk(KERN_INFO "HFC PnP: no ISAPnP card found\n");
- return (0);
- }
- }
-#endif
- cs->hw.hfcD.addr = card->para[1] & 0xfffe;
- cs->irq = card->para[0];
- cs->hw.hfcD.cip = 0;
- cs->hw.hfcD.int_s1 = 0;
- cs->hw.hfcD.send = NULL;
- cs->bcs[0].hw.hfc.send = NULL;
- cs->bcs[1].hw.hfc.send = NULL;
- cs->hw.hfcD.dfifosize = 512;
- cs->dc.hfcd.ph_state = 0;
- cs->hw.hfcD.fifo = 255;
- if (cs->typ == ISDN_CTYPE_TELES3C) {
- cs->hw.hfcD.bfifosize = 1024 + 512;
- } else if (cs->typ == ISDN_CTYPE_ACERP10) {
- cs->hw.hfcD.bfifosize = 7 * 1024 + 512;
- } else
- return (0);
- if (!request_region(cs->hw.hfcD.addr, 2, "HFCS isdn")) {
- printk(KERN_WARNING
- "HiSax: %s config port %x-%x already in use\n",
- CardType[card->typ],
- cs->hw.hfcD.addr,
- cs->hw.hfcD.addr + 2);
- return (0);
- }
- printk(KERN_INFO
- "HFCS: defined at 0x%x IRQ %d HZ %d\n",
- cs->hw.hfcD.addr,
- cs->irq, HZ);
- if (cs->typ == ISDN_CTYPE_TELES3C) {
- /* Teles 16.3c IO ADR is 0x200 | YY0U (YY Bit 15/14 address) */
- outb(0x00, cs->hw.hfcD.addr);
- outb(0x56, cs->hw.hfcD.addr | 1);
- } else if (cs->typ == ISDN_CTYPE_ACERP10) {
- /* Acer P10 IO ADR is 0x300 */
- outb(0x00, cs->hw.hfcD.addr);
- outb(0x57, cs->hw.hfcD.addr | 1);
- }
- set_cs_func(cs);
- timer_setup(&cs->hw.hfcD.timer, hfcs_Timer, 0);
- cs->cardmsg = &hfcs_card_msg;
- cs->irq_func = &hfcs_interrupt;
- return (1);
-}
diff --git a/drivers/isdn/hisax/hisax.h b/drivers/isdn/hisax/hisax.h
deleted file mode 100644
index 40080e06421c..000000000000
--- a/drivers/isdn/hisax/hisax.h
+++ /dev/null
@@ -1,1352 +0,0 @@
-/* $Id: hisax.h,v 2.64.2.4 2004/02/11 13:21:33 keil Exp $
- *
- * Basic declarations, defines and prototypes
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-#include <linux/errno.h>
-#include <linux/fs.h>
-#include <linux/major.h>
-#include <asm/io.h>
-#include <linux/delay.h>
-#include <linux/kernel.h>
-#include <linux/signal.h>
-#include <linux/slab.h>
-#include <linux/mm.h>
-#include <linux/mman.h>
-#include <linux/interrupt.h>
-#include <linux/ioport.h>
-#include <linux/timer.h>
-#include <linux/wait.h>
-#include <linux/isdnif.h>
-#include <linux/tty.h>
-#include <linux/serial_reg.h>
-#include <linux/netdevice.h>
-
-#define ERROR_STATISTIC
-
-#define REQUEST 0
-#define CONFIRM 1
-#define INDICATION 2
-#define RESPONSE 3
-
-#define HW_ENABLE 0x0000
-#define HW_RESET 0x0004
-#define HW_POWERUP 0x0008
-#define HW_ACTIVATE 0x0010
-#define HW_DEACTIVATE 0x0018
-
-#define HW_INFO1 0x0010
-#define HW_INFO2 0x0020
-#define HW_INFO3 0x0030
-#define HW_INFO4 0x0040
-#define HW_INFO4_P8 0x0040
-#define HW_INFO4_P10 0x0048
-#define HW_RSYNC 0x0060
-#define HW_TESTLOOP 0x0070
-#define CARD_RESET 0x00F0
-#define CARD_INIT 0x00F2
-#define CARD_RELEASE 0x00F3
-#define CARD_TEST 0x00F4
-#define CARD_AUX_IND 0x00F5
-
-#define PH_ACTIVATE 0x0100
-#define PH_DEACTIVATE 0x0110
-#define PH_DATA 0x0120
-#define PH_PULL 0x0130
-#define PH_TESTLOOP 0x0140
-#define PH_PAUSE 0x0150
-#define MPH_ACTIVATE 0x0180
-#define MPH_DEACTIVATE 0x0190
-#define MPH_INFORMATION 0x01A0
-
-#define DL_ESTABLISH 0x0200
-#define DL_RELEASE 0x0210
-#define DL_DATA 0x0220
-#define DL_FLUSH 0x0224
-#define DL_UNIT_DATA 0x0230
-
-#define MDL_BC_RELEASE 0x0278 // Formula-n enter:now
-#define MDL_BC_ASSIGN 0x027C // Formula-n enter:now
-#define MDL_ASSIGN 0x0280
-#define MDL_REMOVE 0x0284
-#define MDL_ERROR 0x0288
-#define MDL_INFO_SETUP 0x02E0
-#define MDL_INFO_CONN 0x02E4
-#define MDL_INFO_REL 0x02E8
-
-#define CC_SETUP 0x0300
-#define CC_RESUME 0x0304
-#define CC_MORE_INFO 0x0310
-#define CC_IGNORE 0x0320
-#define CC_REJECT 0x0324
-#define CC_SETUP_COMPL 0x0330
-#define CC_PROCEEDING 0x0340
-#define CC_ALERTING 0x0344
-#define CC_PROGRESS 0x0348
-#define CC_CONNECT 0x0350
-#define CC_CHARGE 0x0354
-#define CC_NOTIFY 0x0358
-#define CC_DISCONNECT 0x0360
-#define CC_RELEASE 0x0368
-#define CC_SUSPEND 0x0370
-#define CC_PROCEED_SEND 0x0374
-#define CC_REDIR 0x0378
-#define CC_T302 0x0382
-#define CC_T303 0x0383
-#define CC_T304 0x0384
-#define CC_T305 0x0385
-#define CC_T308_1 0x0388
-#define CC_T308_2 0x038A
-#define CC_T309 0x0309
-#define CC_T310 0x0390
-#define CC_T313 0x0393
-#define CC_T318 0x0398
-#define CC_T319 0x0399
-#define CC_TSPID 0x03A0
-#define CC_NOSETUP_RSP 0x03E0
-#define CC_SETUP_ERR 0x03E1
-#define CC_SUSPEND_ERR 0x03E2
-#define CC_RESUME_ERR 0x03E3
-#define CC_CONNECT_ERR 0x03E4
-#define CC_RELEASE_ERR 0x03E5
-#define CC_RESTART 0x03F4
-#define CC_TDSS1_IO 0x13F4 /* DSS1 IO user timer */
-#define CC_TNI1_IO 0x13F5 /* NI1 IO user timer */
-
-/* define maximum number of possible waiting incoming calls */
-#define MAX_WAITING_CALLS 2
-
-
-#ifdef __KERNEL__
-
-extern const char *CardType[];
-extern int nrcards;
-
-extern const char *l1_revision;
-extern const char *l2_revision;
-extern const char *l3_revision;
-extern const char *lli_revision;
-extern const char *tei_revision;
-
-/* include l3dss1 & ni1 specific process structures, but no other defines */
-#ifdef CONFIG_HISAX_EURO
-#define l3dss1_process
-#include "l3dss1.h"
-#undef l3dss1_process
-#endif /* CONFIG_HISAX_EURO */
-
-#ifdef CONFIG_HISAX_NI1
-#define l3ni1_process
-#include "l3ni1.h"
-#undef l3ni1_process
-#endif /* CONFIG_HISAX_NI1 */
-
-#define MAX_DFRAME_LEN 260
-#define MAX_DFRAME_LEN_L1 300
-#define HSCX_BUFMAX 4096
-#define MAX_DATA_SIZE (HSCX_BUFMAX - 4)
-#define MAX_DATA_MEM (HSCX_BUFMAX + 64)
-#define RAW_BUFMAX (((HSCX_BUFMAX * 6) / 5) + 5)
-#define MAX_HEADER_LEN 4
-#define MAX_WINDOW 8
-#define MAX_MON_FRAME 32
-#define MAX_DLOG_SPACE 2048
-#define MAX_BLOG_SPACE 256
-
-/* #define I4L_IRQ_FLAG SA_INTERRUPT */
-#define I4L_IRQ_FLAG 0
-
-/*
- * Statemachine
- */
-
-struct FsmInst;
-
-typedef void (*FSMFNPTR)(struct FsmInst *, int, void *);
-
-struct Fsm {
- FSMFNPTR *jumpmatrix;
- int state_count, event_count;
- char **strEvent, **strState;
-};
-
-struct FsmInst {
- struct Fsm *fsm;
- int state;
- int debug;
- void *userdata;
- int userint;
- void (*printdebug) (struct FsmInst *, char *, ...);
-};
-
-struct FsmNode {
- int state, event;
- void (*routine) (struct FsmInst *, int, void *);
-};
-
-struct FsmTimer {
- struct FsmInst *fi;
- struct timer_list tl;
- int event;
- void *arg;
-};
-
-struct L3Timer {
- struct l3_process *pc;
- struct timer_list tl;
- int event;
-};
-
-#define FLG_L1_ACTIVATING 1
-#define FLG_L1_ACTIVATED 2
-#define FLG_L1_DEACTTIMER 3
-#define FLG_L1_ACTTIMER 4
-#define FLG_L1_T3RUN 5
-#define FLG_L1_PULL_REQ 6
-#define FLG_L1_UINT 7
-
-struct Layer1 {
- void *hardware;
- struct BCState *bcs;
- struct PStack **stlistp;
- unsigned long Flags;
- struct FsmInst l1m;
- struct FsmTimer timer;
- void (*l1l2) (struct PStack *, int, void *);
- void (*l1hw) (struct PStack *, int, void *);
- void (*l1tei) (struct PStack *, int, void *);
- int mode, bc;
- int delay;
-};
-
-#define GROUP_TEI 127
-#define TEI_SAPI 63
-#define CTRL_SAPI 0
-#define PACKET_NOACK 7
-
-/* Layer2 Flags */
-
-#define FLG_LAPB 0
-#define FLG_LAPD 1
-#define FLG_ORIG 2
-#define FLG_MOD128 3
-#define FLG_PEND_REL 4
-#define FLG_L3_INIT 5
-#define FLG_T200_RUN 6
-#define FLG_ACK_PEND 7
-#define FLG_REJEXC 8
-#define FLG_OWN_BUSY 9
-#define FLG_PEER_BUSY 10
-#define FLG_DCHAN_BUSY 11
-#define FLG_L1_ACTIV 12
-#define FLG_ESTAB_PEND 13
-#define FLG_PTP 14
-#define FLG_FIXED_TEI 15
-#define FLG_L2BLOCK 16
-
-struct Layer2 {
- int tei;
- int sap;
- int maxlen;
- u_long flag;
- spinlock_t lock;
- u_int vs, va, vr;
- int rc;
- unsigned int window;
- unsigned int sow;
- struct sk_buff *windowar[MAX_WINDOW];
- struct sk_buff_head i_queue;
- struct sk_buff_head ui_queue;
- void (*l2l1) (struct PStack *, int, void *);
- void (*l2l3) (struct PStack *, int, void *);
- void (*l2tei) (struct PStack *, int, void *);
- struct FsmInst l2m;
- struct FsmTimer t200, t203;
- int T200, N200, T203;
- int debug;
- char debug_id[16];
-};
-
-struct Layer3 {
- void (*l3l4) (struct PStack *, int, void *);
- void (*l3ml3) (struct PStack *, int, void *);
- void (*l3l2) (struct PStack *, int, void *);
- struct FsmInst l3m;
- struct FsmTimer l3m_timer;
- struct sk_buff_head squeue;
- struct l3_process *proc;
- struct l3_process *global;
- int N303;
- int debug;
- char debug_id[8];
-};
-
-struct LLInterface {
- void (*l4l3) (struct PStack *, int, void *);
- int (*l4l3_proto) (struct PStack *, isdn_ctrl *);
- void *userdata;
- u_long flag;
-};
-
-#define FLG_LLI_L1WAKEUP 1
-#define FLG_LLI_L2WAKEUP 2
-
-struct Management {
- int ri;
- struct FsmInst tei_m;
- struct FsmTimer t202;
- int T202, N202, debug;
- void (*layer) (struct PStack *, int, void *);
-};
-
-#define NO_CAUSE 254
-
-struct Param {
- u_char cause;
- u_char loc;
- u_char diag[6];
- int bchannel;
- int chargeinfo;
- int spv; /* SPV Flag */
- setup_parm setup; /* from isdnif.h numbers and Serviceindicator */
- u_char moderate; /* transfer mode and rate (bearer octet 4) */
-};
-
-
-struct PStack {
- struct PStack *next;
- struct Layer1 l1;
- struct Layer2 l2;
- struct Layer3 l3;
- struct LLInterface lli;
- struct Management ma;
- int protocol; /* EDSS1, 1TR6 or NI1 */
-
- /* protocol specific data fields */
- union
- { u_char uuuu; /* only as dummy */
-#ifdef CONFIG_HISAX_EURO
- dss1_stk_priv dss1; /* private dss1 data */
-#endif /* CONFIG_HISAX_EURO */
-#ifdef CONFIG_HISAX_NI1
- ni1_stk_priv ni1; /* private ni1 data */
-#endif /* CONFIG_HISAX_NI1 */
- } prot;
-};
-
-struct l3_process {
- int callref;
- int state;
- struct L3Timer timer;
- int N303;
- int debug;
- struct Param para;
- struct Channel *chan;
- struct PStack *st;
- struct l3_process *next;
- ulong redir_result;
-
- /* protocol specific data fields */
- union
- { u_char uuuu; /* only when euro not defined, avoiding empty union */
-#ifdef CONFIG_HISAX_EURO
- dss1_proc_priv dss1; /* private dss1 data */
-#endif /* CONFIG_HISAX_EURO */
-#ifdef CONFIG_HISAX_NI1
- ni1_proc_priv ni1; /* private ni1 data */
-#endif /* CONFIG_HISAX_NI1 */
- } prot;
-};
-
-struct hscx_hw {
- int hscx;
- int rcvidx;
- int count; /* Current skb sent count */
- u_char *rcvbuf; /* B-Channel receive Buffer */
- u_char tsaxr0;
- u_char tsaxr1;
-};
-
-struct w6692B_hw {
- int bchan;
- int rcvidx;
- int count; /* Current skb sent count */
- u_char *rcvbuf; /* B-Channel receive Buffer */
-};
-
-struct isar_reg {
- unsigned long Flags;
- volatile u_char bstat;
- volatile u_char iis;
- volatile u_char cmsb;
- volatile u_char clsb;
- volatile u_char par[8];
-};
-
-struct isar_hw {
- int dpath;
- int rcvidx;
- int txcnt;
- int mml;
- u_char state;
- u_char cmd;
- u_char mod;
- u_char newcmd;
- u_char newmod;
- char try_mod;
- struct timer_list ftimer;
- u_char *rcvbuf; /* B-Channel receive Buffer */
- u_char conmsg[16];
- struct isar_reg *reg;
-};
-
-struct hdlc_stat_reg {
-#ifdef __BIG_ENDIAN
- u_char fill;
- u_char mode;
- u_char xml;
- u_char cmd;
-#else
- u_char cmd;
- u_char xml;
- u_char mode;
- u_char fill;
-#endif
-} __attribute__((packed));
-
-struct hdlc_hw {
- union {
- u_int ctrl;
- struct hdlc_stat_reg sr;
- } ctrl;
- u_int stat;
- int rcvidx;
- int count; /* Current skb sent count */
- u_char *rcvbuf; /* B-Channel receive Buffer */
-};
-
-struct hfcB_hw {
- unsigned int *send;
- int f1;
- int f2;
-};
-
-struct tiger_hw {
- u_int *send;
- u_int *s_irq;
- u_int *s_end;
- u_int *sendp;
- u_int *rec;
- int free;
- u_char *rcvbuf;
- u_char *sendbuf;
- u_char *sp;
- int sendcnt;
- u_int s_tot;
- u_int r_bitcnt;
- u_int r_tot;
- u_int r_err;
- u_int r_fcs;
- u_char r_state;
- u_char r_one;
- u_char r_val;
- u_char s_state;
-};
-
-struct amd7930_hw {
- u_char *tx_buff;
- u_char *rv_buff;
- int rv_buff_in;
- int rv_buff_out;
- struct sk_buff *rv_skb;
- struct hdlc_state *hdlc_state;
- struct work_struct tq_rcv;
- struct work_struct tq_xmt;
-};
-
-#define BC_FLG_INIT 1
-#define BC_FLG_ACTIV 2
-#define BC_FLG_BUSY 3
-#define BC_FLG_NOFRAME 4
-#define BC_FLG_HALF 5
-#define BC_FLG_EMPTY 6
-#define BC_FLG_ORIG 7
-#define BC_FLG_DLEETX 8
-#define BC_FLG_LASTDLE 9
-#define BC_FLG_FIRST 10
-#define BC_FLG_LASTDATA 11
-#define BC_FLG_NMD_DATA 12
-#define BC_FLG_FTI_RUN 13
-#define BC_FLG_LL_OK 14
-#define BC_FLG_LL_CONN 15
-#define BC_FLG_FTI_FTS 16
-#define BC_FLG_FRH_WAIT 17
-
-#define L1_MODE_NULL 0
-#define L1_MODE_TRANS 1
-#define L1_MODE_HDLC 2
-#define L1_MODE_EXTRN 3
-#define L1_MODE_HDLC_56K 4
-#define L1_MODE_MODEM 7
-#define L1_MODE_V32 8
-#define L1_MODE_FAX 9
-
-struct BCState {
- int channel;
- int mode;
- u_long Flag;
- struct IsdnCardState *cs;
- int tx_cnt; /* B-Channel transmit counter */
- struct sk_buff *tx_skb; /* B-Channel transmit Buffer */
- struct sk_buff_head rqueue; /* B-Channel receive Queue */
- struct sk_buff_head squeue; /* B-Channel send Queue */
- int ackcnt;
- spinlock_t aclock;
- struct PStack *st;
- u_char *blog;
- u_char *conmsg;
- struct timer_list transbusy;
- struct work_struct tqueue;
- u_long event;
- int (*BC_SetStack) (struct PStack *, struct BCState *);
- void (*BC_Close) (struct BCState *);
-#ifdef ERROR_STATISTIC
- int err_crc;
- int err_tx;
- int err_rdo;
- int err_inv;
-#endif
- union {
- struct hscx_hw hscx;
- struct hdlc_hw hdlc;
- struct isar_hw isar;
- struct hfcB_hw hfc;
- struct tiger_hw tiger;
- struct amd7930_hw amd7930;
- struct w6692B_hw w6692;
- struct hisax_b_if *b_if;
- } hw;
-};
-
-struct Channel {
- struct PStack *b_st, *d_st;
- struct IsdnCardState *cs;
- struct BCState *bcs;
- int chan;
- int incoming;
- struct FsmInst fi;
- struct FsmTimer drel_timer, dial_timer;
- int debug;
- int l2_protocol, l2_active_protocol;
- int l3_protocol;
- int data_open;
- struct l3_process *proc;
- setup_parm setup; /* from isdnif.h numbers and Serviceindicator */
- u_long Flags; /* for remembering action done in l4 */
- int leased;
-};
-
-struct elsa_hw {
- struct pci_dev *dev;
- unsigned long base;
- unsigned int cfg;
- unsigned int ctrl;
- unsigned int ale;
- unsigned int isac;
- unsigned int itac;
- unsigned int hscx;
- unsigned int trig;
- unsigned int timer;
- unsigned int counter;
- unsigned int status;
- struct timer_list tl;
- unsigned int MFlag;
- struct BCState *bcs;
- u_char *transbuf;
- u_char *rcvbuf;
- unsigned int transp;
- unsigned int rcvp;
- unsigned int transcnt;
- unsigned int rcvcnt;
- u_char IER;
- u_char FCR;
- u_char LCR;
- u_char MCR;
- u_char ctrl_reg;
-};
-
-struct teles3_hw {
- unsigned int cfg_reg;
- signed int isac;
- signed int hscx[2];
- signed int isacfifo;
- signed int hscxfifo[2];
-};
-
-struct teles0_hw {
- unsigned int cfg_reg;
- void __iomem *membase;
- unsigned long phymem;
-};
-
-struct avm_hw {
- unsigned int cfg_reg;
- unsigned int isac;
- unsigned int hscx[2];
- unsigned int isacfifo;
- unsigned int hscxfifo[2];
- unsigned int counter;
- struct pci_dev *dev;
-};
-
-struct ix1_hw {
- unsigned int cfg_reg;
- unsigned int isac_ale;
- unsigned int isac;
- unsigned int hscx_ale;
- unsigned int hscx;
-};
-
-struct diva_hw {
- unsigned long cfg_reg;
- unsigned long pci_cfg;
- unsigned int ctrl;
- unsigned long isac_adr;
- unsigned int isac;
- unsigned long hscx_adr;
- unsigned int hscx;
- unsigned int status;
- struct timer_list tl;
- u_char ctrl_reg;
- struct pci_dev *dev;
-};
-
-struct asus_hw {
- unsigned int cfg_reg;
- unsigned int adr;
- unsigned int isac;
- unsigned int hscx;
- unsigned int u7;
- unsigned int pots;
-};
-
-
-struct hfc_hw {
- unsigned int addr;
- unsigned int fifosize;
- unsigned char cirm;
- unsigned char ctmt;
- unsigned char cip;
- u_char isac_spcr;
- struct timer_list timer;
-};
-
-struct sedl_hw {
- unsigned int cfg_reg;
- unsigned int adr;
- unsigned int isac;
- unsigned int hscx;
- unsigned int reset_on;
- unsigned int reset_off;
- struct isar_reg isar;
- unsigned int chip;
- unsigned int bus;
- struct pci_dev *dev;
-};
-
-struct spt_hw {
- unsigned int cfg_reg;
- unsigned int isac;
- unsigned int hscx[2];
- unsigned char res_irq;
-};
-
-struct mic_hw {
- unsigned int cfg_reg;
- unsigned int adr;
- unsigned int isac;
- unsigned int hscx;
-};
-
-struct njet_hw {
- unsigned long base;
- unsigned int isac;
- unsigned int auxa;
- unsigned char auxd;
- unsigned char dmactrl;
- unsigned char ctrl_reg;
- unsigned char irqmask0;
- unsigned char irqstat0;
- unsigned char last_is0;
- struct pci_dev *dev;
-};
-
-struct hfcPCI_hw {
- unsigned char cirm;
- unsigned char ctmt;
- unsigned char conn;
- unsigned char mst_m;
- unsigned char int_m1;
- unsigned char int_m2;
- unsigned char int_s1;
- unsigned char sctrl;
- unsigned char sctrl_r;
- unsigned char sctrl_e;
- unsigned char trm;
- unsigned char stat;
- unsigned char fifo;
- unsigned char fifo_en;
- unsigned char bswapped;
- unsigned char nt_mode;
- int nt_timer;
- struct pci_dev *dev;
- void __iomem *pci_io; /* start of PCI IO memory */
- dma_addr_t dma; /* dma handle for Fifos */
- void *fifos; /* FIFO memory */
- int last_bfifo_cnt[2]; /* marker saving last b-fifo frame count */
- struct timer_list timer;
-};
-
-struct hfcSX_hw {
- unsigned long base;
- unsigned char cirm;
- unsigned char ctmt;
- unsigned char conn;
- unsigned char mst_m;
- unsigned char int_m1;
- unsigned char int_m2;
- unsigned char int_s1;
- unsigned char sctrl;
- unsigned char sctrl_r;
- unsigned char sctrl_e;
- unsigned char trm;
- unsigned char stat;
- unsigned char fifo;
- unsigned char bswapped;
- unsigned char nt_mode;
- unsigned char chip;
- int b_fifo_size;
- unsigned char last_fifo;
- void *extra;
- int nt_timer;
- struct timer_list timer;
-};
-
-struct hfcD_hw {
- unsigned int addr;
- unsigned int bfifosize;
- unsigned int dfifosize;
- unsigned char cirm;
- unsigned char ctmt;
- unsigned char cip;
- unsigned char conn;
- unsigned char mst_m;
- unsigned char int_m1;
- unsigned char int_m2;
- unsigned char int_s1;
- unsigned char sctrl;
- unsigned char stat;
- unsigned char fifo;
- unsigned char f1;
- unsigned char f2;
- unsigned int *send;
- struct timer_list timer;
-};
-
-struct isurf_hw {
- unsigned int reset;
- unsigned long phymem;
- void __iomem *isac;
- void __iomem *isar;
- struct isar_reg isar_r;
-};
-
-struct saphir_hw {
- struct pci_dev *dev;
- unsigned int cfg_reg;
- unsigned int ale;
- unsigned int isac;
- unsigned int hscx;
- struct timer_list timer;
-};
-
-struct bkm_hw {
- struct pci_dev *dev;
- unsigned long base;
- /* A4T stuff */
- unsigned long isac_adr;
- unsigned int isac_ale;
- unsigned long jade_adr;
- unsigned int jade_ale;
- /* Scitel Quadro stuff */
- unsigned long plx_adr;
- unsigned long data_adr;
-};
-
-struct gazel_hw {
- struct pci_dev *dev;
- unsigned int cfg_reg;
- unsigned int pciaddr[2];
- signed int ipac;
- signed int isac;
- signed int hscx[2];
- signed int isacfifo;
- signed int hscxfifo[2];
- unsigned char timeslot;
- unsigned char iom2;
-};
-
-struct w6692_hw {
- struct pci_dev *dev;
- unsigned int iobase;
- struct timer_list timer;
-};
-
-struct arcofi_msg {
- struct arcofi_msg *next;
- u_char receive;
- u_char len;
- u_char msg[10];
-};
-
-struct isac_chip {
- int ph_state;
- u_char *mon_tx;
- u_char *mon_rx;
- int mon_txp;
- int mon_txc;
- int mon_rxp;
- struct arcofi_msg *arcofi_list;
- struct timer_list arcofitimer;
- wait_queue_head_t arcofi_wait;
- u_char arcofi_bc;
- u_char arcofi_state;
- u_char mocr;
- u_char adf2;
-};
-
-struct hfcd_chip {
- int ph_state;
-};
-
-struct hfcpci_chip {
- int ph_state;
-};
-
-struct hfcsx_chip {
- int ph_state;
-};
-
-struct w6692_chip {
- int ph_state;
-};
-
-struct amd7930_chip {
- u_char lmr1;
- u_char ph_state;
- u_char old_state;
- u_char flg_t3;
- unsigned int tx_xmtlen;
- struct timer_list timer3;
- void (*ph_command) (struct IsdnCardState *, u_char, char *);
- void (*setIrqMask) (struct IsdnCardState *, u_char);
-};
-
-struct icc_chip {
- int ph_state;
- u_char *mon_tx;
- u_char *mon_rx;
- int mon_txp;
- int mon_txc;
- int mon_rxp;
- struct arcofi_msg *arcofi_list;
- struct timer_list arcofitimer;
- wait_queue_head_t arcofi_wait;
- u_char arcofi_bc;
- u_char arcofi_state;
- u_char mocr;
- u_char adf2;
-};
-
-#define HW_IOM1 0
-#define HW_IPAC 1
-#define HW_ISAR 2
-#define HW_ARCOFI 3
-#define FLG_TWO_DCHAN 4
-#define FLG_L1_DBUSY 5
-#define FLG_DBUSY_TIMER 6
-#define FLG_LOCK_ATOMIC 7
-#define FLG_ARCOFI_TIMER 8
-#define FLG_ARCOFI_ERROR 9
-#define FLG_HW_L1_UINT 10
-
-struct IsdnCardState {
- spinlock_t lock;
- u_char typ;
- u_char subtyp;
- int protocol;
- u_int irq;
- u_long irq_flags;
- u_long HW_Flags;
- int *busy_flag;
- int chanlimit; /* limited number of B-chans to use */
- int logecho; /* log echo if supported by card */
- union {
- struct elsa_hw elsa;
- struct teles0_hw teles0;
- struct teles3_hw teles3;
- struct avm_hw avm;
- struct ix1_hw ix1;
- struct diva_hw diva;
- struct asus_hw asus;
- struct hfc_hw hfc;
- struct sedl_hw sedl;
- struct spt_hw spt;
- struct mic_hw mic;
- struct njet_hw njet;
- struct hfcD_hw hfcD;
- struct hfcPCI_hw hfcpci;
- struct hfcSX_hw hfcsx;
- struct ix1_hw niccy;
- struct isurf_hw isurf;
- struct saphir_hw saphir;
- struct bkm_hw ax;
- struct gazel_hw gazel;
- struct w6692_hw w6692;
- struct hisax_d_if *hisax_d_if;
- } hw;
- int myid;
- isdn_if iif;
- spinlock_t statlock;
- u_char *status_buf;
- u_char *status_read;
- u_char *status_write;
- u_char *status_end;
- u_char (*readisac) (struct IsdnCardState *, u_char);
- void (*writeisac) (struct IsdnCardState *, u_char, u_char);
- void (*readisacfifo) (struct IsdnCardState *, u_char *, int);
- void (*writeisacfifo) (struct IsdnCardState *, u_char *, int);
- u_char (*BC_Read_Reg) (struct IsdnCardState *, int, u_char);
- void (*BC_Write_Reg) (struct IsdnCardState *, int, u_char, u_char);
- void (*BC_Send_Data) (struct BCState *);
- int (*cardmsg) (struct IsdnCardState *, int, void *);
- void (*setstack_d) (struct PStack *, struct IsdnCardState *);
- void (*DC_Close) (struct IsdnCardState *);
- irq_handler_t irq_func;
- int (*auxcmd) (struct IsdnCardState *, isdn_ctrl *);
- struct Channel channel[2 + MAX_WAITING_CALLS];
- struct BCState bcs[2 + MAX_WAITING_CALLS];
- struct PStack *stlist;
- struct sk_buff_head rq, sq; /* D-channel queues */
- int cardnr;
- char *dlog;
- int debug;
- union {
- struct isac_chip isac;
- struct hfcd_chip hfcd;
- struct hfcpci_chip hfcpci;
- struct hfcsx_chip hfcsx;
- struct w6692_chip w6692;
- struct amd7930_chip amd7930;
- struct icc_chip icc;
- } dc;
- u_char *rcvbuf;
- int rcvidx;
- struct sk_buff *tx_skb;
- int tx_cnt;
- u_long event;
- struct work_struct tqueue;
- struct timer_list dbusytimer;
- unsigned int irq_cnt;
-#ifdef ERROR_STATISTIC
- int err_crc;
- int err_tx;
- int err_rx;
-#endif
-};
-
-
-#define schedule_event(s, ev) do { test_and_set_bit(ev, &s->event); schedule_work(&s->tqueue); } while (0)
-
-#define MON0_RX 1
-#define MON1_RX 2
-#define MON0_TX 4
-#define MON1_TX 8
-
-
-#ifdef ISDN_CHIP_ISAC
-#undef ISDN_CHIP_ISAC
-#endif
-
-#ifdef CONFIG_HISAX_16_0
-#define CARD_TELES0 1
-#ifndef ISDN_CHIP_ISAC
-#define ISDN_CHIP_ISAC 1
-#endif
-#else
-#define CARD_TELES0 0
-#endif
-
-#ifdef CONFIG_HISAX_16_3
-#define CARD_TELES3 1
-#ifndef ISDN_CHIP_ISAC
-#define ISDN_CHIP_ISAC 1
-#endif
-#else
-#define CARD_TELES3 0
-#endif
-
-#ifdef CONFIG_HISAX_TELESPCI
-#define CARD_TELESPCI 1
-#ifndef ISDN_CHIP_ISAC
-#define ISDN_CHIP_ISAC 1
-#endif
-#else
-#define CARD_TELESPCI 0
-#endif
-
-#ifdef CONFIG_HISAX_AVM_A1
-#define CARD_AVM_A1 1
-#ifndef ISDN_CHIP_ISAC
-#define ISDN_CHIP_ISAC 1
-#endif
-#else
-#define CARD_AVM_A1 0
-#endif
-
-#ifdef CONFIG_HISAX_AVM_A1_PCMCIA
-#define CARD_AVM_A1_PCMCIA 1
-#ifndef ISDN_CHIP_ISAC
-#define ISDN_CHIP_ISAC 1
-#endif
-#else
-#define CARD_AVM_A1_PCMCIA 0
-#endif
-
-#ifdef CONFIG_HISAX_FRITZPCI
-#define CARD_FRITZPCI 1
-#ifndef ISDN_CHIP_ISAC
-#define ISDN_CHIP_ISAC 1
-#endif
-#else
-#define CARD_FRITZPCI 0
-#endif
-
-#ifdef CONFIG_HISAX_ELSA
-#define CARD_ELSA 1
-#ifndef ISDN_CHIP_ISAC
-#define ISDN_CHIP_ISAC 1
-#endif
-#else
-#define CARD_ELSA 0
-#endif
-
-#ifdef CONFIG_HISAX_IX1MICROR2
-#define CARD_IX1MICROR2 1
-#ifndef ISDN_CHIP_ISAC
-#define ISDN_CHIP_ISAC 1
-#endif
-#else
-#define CARD_IX1MICROR2 0
-#endif
-
-#ifdef CONFIG_HISAX_DIEHLDIVA
-#define CARD_DIEHLDIVA 1
-#ifndef ISDN_CHIP_ISAC
-#define ISDN_CHIP_ISAC 1
-#endif
-#else
-#define CARD_DIEHLDIVA 0
-#endif
-
-#ifdef CONFIG_HISAX_ASUSCOM
-#define CARD_ASUSCOM 1
-#ifndef ISDN_CHIP_ISAC
-#define ISDN_CHIP_ISAC 1
-#endif
-#else
-#define CARD_ASUSCOM 0
-#endif
-
-#ifdef CONFIG_HISAX_TELEINT
-#define CARD_TELEINT 1
-#ifndef ISDN_CHIP_ISAC
-#define ISDN_CHIP_ISAC 1
-#endif
-#else
-#define CARD_TELEINT 0
-#endif
-
-#ifdef CONFIG_HISAX_SEDLBAUER
-#define CARD_SEDLBAUER 1
-#ifndef ISDN_CHIP_ISAC
-#define ISDN_CHIP_ISAC 1
-#endif
-#else
-#define CARD_SEDLBAUER 0
-#endif
-
-#ifdef CONFIG_HISAX_SPORTSTER
-#define CARD_SPORTSTER 1
-#ifndef ISDN_CHIP_ISAC
-#define ISDN_CHIP_ISAC 1
-#endif
-#else
-#define CARD_SPORTSTER 0
-#endif
-
-#ifdef CONFIG_HISAX_MIC
-#define CARD_MIC 1
-#ifndef ISDN_CHIP_ISAC
-#define ISDN_CHIP_ISAC 1
-#endif
-#else
-#define CARD_MIC 0
-#endif
-
-#ifdef CONFIG_HISAX_NETJET
-#define CARD_NETJET_S 1
-#ifndef ISDN_CHIP_ISAC
-#define ISDN_CHIP_ISAC 1
-#endif
-#else
-#define CARD_NETJET_S 0
-#endif
-
-#ifdef CONFIG_HISAX_HFCS
-#define CARD_HFCS 1
-#else
-#define CARD_HFCS 0
-#endif
-
-#ifdef CONFIG_HISAX_HFC_PCI
-#define CARD_HFC_PCI 1
-#else
-#define CARD_HFC_PCI 0
-#endif
-
-#ifdef CONFIG_HISAX_HFC_SX
-#define CARD_HFC_SX 1
-#else
-#define CARD_HFC_SX 0
-#endif
-
-#ifdef CONFIG_HISAX_NICCY
-#define CARD_NICCY 1
-#ifndef ISDN_CHIP_ISAC
-#define ISDN_CHIP_ISAC 1
-#endif
-#else
-#define CARD_NICCY 0
-#endif
-
-#ifdef CONFIG_HISAX_ISURF
-#define CARD_ISURF 1
-#ifndef ISDN_CHIP_ISAC
-#define ISDN_CHIP_ISAC 1
-#endif
-#else
-#define CARD_ISURF 0
-#endif
-
-#ifdef CONFIG_HISAX_S0BOX
-#define CARD_S0BOX 1
-#ifndef ISDN_CHIP_ISAC
-#define ISDN_CHIP_ISAC 1
-#endif
-#else
-#define CARD_S0BOX 0
-#endif
-
-#ifdef CONFIG_HISAX_HSTSAPHIR
-#define CARD_HSTSAPHIR 1
-#ifndef ISDN_CHIP_ISAC
-#define ISDN_CHIP_ISAC 1
-#endif
-#else
-#define CARD_HSTSAPHIR 0
-#endif
-
-#ifdef CONFIG_HISAX_BKM_A4T
-#define CARD_BKM_A4T 1
-#ifndef ISDN_CHIP_ISAC
-#define ISDN_CHIP_ISAC 1
-#endif
-#else
-#define CARD_BKM_A4T 0
-#endif
-
-#ifdef CONFIG_HISAX_SCT_QUADRO
-#define CARD_SCT_QUADRO 1
-#ifndef ISDN_CHIP_ISAC
-#define ISDN_CHIP_ISAC 1
-#endif
-#else
-#define CARD_SCT_QUADRO 0
-#endif
-
-#ifdef CONFIG_HISAX_GAZEL
-#define CARD_GAZEL 1
-#ifndef ISDN_CHIP_ISAC
-#define ISDN_CHIP_ISAC 1
-#endif
-#else
-#define CARD_GAZEL 0
-#endif
-
-#ifdef CONFIG_HISAX_W6692
-#define CARD_W6692 1
-#ifndef ISDN_CHIP_W6692
-#define ISDN_CHIP_W6692 1
-#endif
-#else
-#define CARD_W6692 0
-#endif
-
-#ifdef CONFIG_HISAX_NETJET_U
-#define CARD_NETJET_U 1
-#ifndef ISDN_CHIP_ICC
-#define ISDN_CHIP_ICC 1
-#endif
-#ifndef HISAX_UINTERFACE
-#define HISAX_UINTERFACE 1
-#endif
-#else
-#define CARD_NETJET_U 0
-#endif
-
-#ifdef CONFIG_HISAX_ENTERNOW_PCI
-#define CARD_FN_ENTERNOW_PCI 1
-#else
-#define CARD_FN_ENTERNOW_PCI 0
-#endif
-
-#define TEI_PER_CARD 1
-
-/* L1 Debug */
-#define L1_DEB_WARN 0x01
-#define L1_DEB_INTSTAT 0x02
-#define L1_DEB_ISAC 0x04
-#define L1_DEB_ISAC_FIFO 0x08
-#define L1_DEB_HSCX 0x10
-#define L1_DEB_HSCX_FIFO 0x20
-#define L1_DEB_LAPD 0x40
-#define L1_DEB_IPAC 0x80
-#define L1_DEB_RECEIVE_FRAME 0x100
-#define L1_DEB_MONITOR 0x200
-#define DEB_DLOG_HEX 0x400
-#define DEB_DLOG_VERBOSE 0x800
-
-#define L2FRAME_DEBUG
-
-#ifdef L2FRAME_DEBUG
-extern void Logl2Frame(struct IsdnCardState *cs, struct sk_buff *skb, char *buf, int dir);
-#endif
-
-#include "hisax_cfg.h"
-
-void init_bcstate(struct IsdnCardState *cs, int bc);
-
-void setstack_HiSax(struct PStack *st, struct IsdnCardState *cs);
-void HiSax_addlist(struct IsdnCardState *sp, struct PStack *st);
-void HiSax_rmlist(struct IsdnCardState *sp, struct PStack *st);
-
-void setstack_l1_B(struct PStack *st);
-
-void setstack_tei(struct PStack *st);
-void setstack_manager(struct PStack *st);
-
-void setstack_isdnl2(struct PStack *st, char *debug_id);
-void releasestack_isdnl2(struct PStack *st);
-void setstack_transl2(struct PStack *st);
-void releasestack_transl2(struct PStack *st);
-void lli_writewakeup(struct PStack *st, int len);
-
-void setstack_l3dc(struct PStack *st, struct Channel *chanp);
-void setstack_l3bc(struct PStack *st, struct Channel *chanp);
-void releasestack_isdnl3(struct PStack *st);
-
-u_char *findie(u_char *p, int size, u_char ie, int wanted_set);
-int getcallref(u_char *p);
-int newcallref(void);
-
-int FsmNew(struct Fsm *fsm, struct FsmNode *fnlist, int fncount);
-void FsmFree(struct Fsm *fsm);
-int FsmEvent(struct FsmInst *fi, int event, void *arg);
-void FsmChangeState(struct FsmInst *fi, int newstate);
-void FsmInitTimer(struct FsmInst *fi, struct FsmTimer *ft);
-int FsmAddTimer(struct FsmTimer *ft, int millisec, int event,
- void *arg, int where);
-void FsmRestartTimer(struct FsmTimer *ft, int millisec, int event,
- void *arg, int where);
-void FsmDelTimer(struct FsmTimer *ft, int where);
-int jiftime(char *s, long mark);
-
-int HiSax_command(isdn_ctrl *ic);
-int HiSax_writebuf_skb(int id, int chan, int ack, struct sk_buff *skb);
-__printf(3, 4)
-void HiSax_putstatus(struct IsdnCardState *cs, char *head, const char *fmt, ...);
-__printf(3, 0)
-void VHiSax_putstatus(struct IsdnCardState *cs, char *head, const char *fmt, va_list args);
-void HiSax_reportcard(int cardnr, int sel);
-int QuickHex(char *txt, u_char *p, int cnt);
-void LogFrame(struct IsdnCardState *cs, u_char *p, int size);
-void dlogframe(struct IsdnCardState *cs, struct sk_buff *skb, int dir);
-void iecpy(u_char *dest, u_char *iestart, int ieoffset);
-#endif /* __KERNEL__ */
-
-/*
- * Busywait delay for `jiffs' jiffies
- */
-#define HZDELAY(jiffs) do { \
- int tout = jiffs; \
- \
- while (tout--) { \
- int loops = USEC_PER_SEC / HZ; \
- while (loops--) \
- udelay(1); \
- } \
- } while (0)
-
-int ll_run(struct IsdnCardState *cs, int addfeatures);
-int CallcNew(void);
-void CallcFree(void);
-int CallcNewChan(struct IsdnCardState *cs);
-void CallcFreeChan(struct IsdnCardState *cs);
-int Isdnl1New(void);
-void Isdnl1Free(void);
-int Isdnl2New(void);
-void Isdnl2Free(void);
-int Isdnl3New(void);
-void Isdnl3Free(void);
-void init_tei(struct IsdnCardState *cs, int protocol);
-void release_tei(struct IsdnCardState *cs);
-char *HiSax_getrev(const char *revision);
-int TeiNew(void);
-void TeiFree(void);
-
-#ifdef CONFIG_PCI
-
-#include <linux/pci.h>
-
-/* adaptation wrapper for old usage
- * WARNING! This is unfit for use in a PCI hotplug environment,
- * as the returned PCI device can disappear at any moment in time.
- * Callers should be converted to use pci_get_device() instead.
- */
-static inline struct pci_dev *hisax_find_pci_device(unsigned int vendor,
- unsigned int device,
- struct pci_dev *from)
-{
- struct pci_dev *pdev;
-
- pci_dev_get(from);
- pdev = pci_get_subsys(vendor, device, PCI_ANY_ID, PCI_ANY_ID, from);
- pci_dev_put(pdev);
- return pdev;
-}
-
-#endif
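
Note on the wrapper above: the warning recommends converting callers to the refcounted PCI lookup API. As an illustration only (this sketch is not part of the patch, and the helper name is invented), a hotplug-safe lookup keeps the reference that pci_get_device() takes on the returned device and drops it later with pci_dev_put():

/* Sketch, not from the kernel tree: refcount-aware lookup pattern. */
static struct pci_dev *example_get_avm_pci(struct pci_dev *from)
{
	/*
	 * pci_get_device() releases the reference held on 'from' and
	 * returns the next match with its refcount raised, so the result
	 * stays valid until the caller does pci_dev_put() on it.
	 */
	return pci_get_device(PCI_VENDOR_ID_AVM, PCI_DEVICE_ID_AVM_A1, from);
}
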
diff --git a/drivers/isdn/hisax/hisax_cfg.h b/drivers/isdn/hisax/hisax_cfg.h
deleted file mode 100644
index 487dcfe9e718..000000000000
--- a/drivers/isdn/hisax/hisax_cfg.h
+++ /dev/null
@@ -1,66 +0,0 @@
-/* $Id: hisax_cfg.h,v 1.1.2.1 2004/01/24 20:47:23 keil Exp $
- * define of the basic HiSax configuration structures
- * and pcmcia interface
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#define ISDN_CTYPE_16_0 1
-#define ISDN_CTYPE_8_0 2
-#define ISDN_CTYPE_16_3 3
-#define ISDN_CTYPE_PNP 4
-#define ISDN_CTYPE_A1 5
-#define ISDN_CTYPE_ELSA 6
-#define ISDN_CTYPE_ELSA_PNP 7
-#define ISDN_CTYPE_TELESPCMCIA 8
-#define ISDN_CTYPE_IX1MICROR2 9
-#define ISDN_CTYPE_ELSA_PCMCIA 10
-#define ISDN_CTYPE_DIEHLDIVA 11
-#define ISDN_CTYPE_ASUSCOM 12
-#define ISDN_CTYPE_TELEINT 13
-#define ISDN_CTYPE_TELES3C 14
-#define ISDN_CTYPE_SEDLBAUER 15
-#define ISDN_CTYPE_SPORTSTER 16
-#define ISDN_CTYPE_MIC 17
-#define ISDN_CTYPE_ELSA_PCI 18
-#define ISDN_CTYPE_COMPAQ_ISA 19
-#define ISDN_CTYPE_NETJET_S 20
-#define ISDN_CTYPE_TELESPCI 21
-#define ISDN_CTYPE_SEDLBAUER_PCMCIA 22
-#define ISDN_CTYPE_AMD7930 23
-#define ISDN_CTYPE_NICCY 24
-#define ISDN_CTYPE_S0BOX 25
-#define ISDN_CTYPE_A1_PCMCIA 26
-#define ISDN_CTYPE_FRITZPCI 27
-#define ISDN_CTYPE_SEDLBAUER_FAX 28
-#define ISDN_CTYPE_ISURF 29
-#define ISDN_CTYPE_ACERP10 30
-#define ISDN_CTYPE_HSTSAPHIR 31
-#define ISDN_CTYPE_BKM_A4T 32
-#define ISDN_CTYPE_SCT_QUADRO 33
-#define ISDN_CTYPE_GAZEL 34
-#define ISDN_CTYPE_HFC_PCI 35
-#define ISDN_CTYPE_W6692 36
-#define ISDN_CTYPE_HFC_SX 37
-#define ISDN_CTYPE_NETJET_U 38
-#define ISDN_CTYPE_HFC_SP_PCMCIA 39
-#define ISDN_CTYPE_DYNAMIC 40
-#define ISDN_CTYPE_ENTERNOW 41
-#define ISDN_CTYPE_COUNT 41
-
-typedef struct IsdnCardState IsdnCardState_t;
-typedef struct IsdnCard IsdnCard_t;
-
-struct IsdnCard {
- int typ;
- int protocol; /* EDSS1, 1TR6 or NI1 */
- unsigned long para[4];
- IsdnCardState_t *cs;
-};
-
-typedef int (*hisax_setup_func_t)(struct IsdnCard *card);
-
-extern void HiSax_closecard(int);
-extern int hisax_init_pcmcia(void *, int *, IsdnCard_t *);
diff --git a/drivers/isdn/hisax/hisax_debug.h b/drivers/isdn/hisax/hisax_debug.h
deleted file mode 100644
index 7b3093d0856a..000000000000
--- a/drivers/isdn/hisax/hisax_debug.h
+++ /dev/null
@@ -1,80 +0,0 @@
-/*
- * Common debugging macros for use with the hisax driver
- *
- * Author Frode Isaksen
- * Copyright 2001 by Frode Isaksen <fisaksen@bewan.com>
- * 2001 by Kai Germaschewski <kai.germaschewski@gmx.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- * How to use:
- *
- * Before including this file, you need to
- * #define __debug_variable my_debug
- * where my_debug is a variable in your code which
- * determines the debug bitmask.
- *
- * If CONFIG_HISAX_DEBUG is not set, all macros evaluate to nothing
- *
- */
-
-#ifndef __HISAX_DEBUG_H__
-#define __HISAX_DEBUG_H__
-
-
-#ifdef CONFIG_HISAX_DEBUG
-
-#define DBG(level, format, arg...) do { \
- if (level & __debug_variable) \
- printk(KERN_DEBUG "%s: " format "\n" , __func__ , ## arg); \
- } while (0)
-
-#define DBG_PACKET(level, data, count) \
- if (level & __debug_variable) dump_packet(__func__, data, count)
-
-#define DBG_SKB(level, skb) \
- if ((level & __debug_variable) && skb) dump_packet(__func__, skb->data, skb->len)
-
-
-static void __attribute__((unused))
-dump_packet(const char *name, const u_char *data, int pkt_len)
-{
-#define DUMP_HDR_SIZE 20
-#define DUMP_TLR_SIZE 8
- if (pkt_len) {
- int i, len1, len2;
-
- printk(KERN_DEBUG "%s: length=%d,data=", name, pkt_len);
-
- if (pkt_len > DUMP_HDR_SIZE + DUMP_TLR_SIZE) {
- len1 = DUMP_HDR_SIZE;
- len2 = DUMP_TLR_SIZE;
- } else {
- len1 = pkt_len > DUMP_HDR_SIZE ? DUMP_HDR_SIZE : pkt_len;
- len2 = 0;
- }
- for (i = 0; i < len1; ++i) {
- printk("%.2x", data[i]);
- }
- if (len2) {
- printk("..");
- for (i = pkt_len-DUMP_TLR_SIZE; i < pkt_len; ++i) {
- printk("%.2x", data[i]);
- }
- }
- printk("\n");
- }
-#undef DUMP_HDR_SIZE
-#undef DUMP_TLR_SIZE
-}
-
-#else
-
-#define DBG(level, format, arg...) do {} while (0)
-#define DBG_PACKET(level, data, count) do {} while (0)
-#define DBG_SKB(level, skb) do {} while (0)
-
-#endif
-
-#endif
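
For reference, the "How to use" note at the top of this header describes the intended pattern: a driver defines __debug_variable before including hisax_debug.h and then gates its output on a private debug bitmask. A minimal sketch follows (illustrative only; the mydrv_* names are invented, and both macros compile away unless CONFIG_HISAX_DEBUG is set):

#define __debug_variable mydrv_debug	/* must precede the include */
#include "hisax_debug.h"

static int mydrv_debug = 0x3;		/* bit 0x1: flow, bit 0x2: packet dumps */

static void mydrv_rx(const unsigned char *data, int len)
{
	DBG(0x1, "received %d bytes", len);	/* printed if bit 0x1 is set */
	DBG_PACKET(0x2, data, len);		/* hex dump if bit 0x2 is set */
}
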
diff --git a/drivers/isdn/hisax/hisax_fcpcipnp.c b/drivers/isdn/hisax/hisax_fcpcipnp.c
deleted file mode 100644
index 7a7137d8664b..000000000000
--- a/drivers/isdn/hisax/hisax_fcpcipnp.c
+++ /dev/null
@@ -1,1024 +0,0 @@
-/*
- * Driver for AVM Fritz!PCI, Fritz!PCI v2, Fritz!PnP ISDN cards
- *
- * Author Kai Germaschewski
- * Copyright 2001 by Kai Germaschewski <kai.germaschewski@gmx.de>
- * 2001 by Karsten Keil <keil@isdn4linux.de>
- *
- * based upon Karsten Keil's original avm_pci.c driver
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- * Thanks to Wizard Computersysteme GmbH, Bremervoerde and
- * SoHaNet Technology GmbH, Berlin
- * for supporting the development of this driver
- */
-
-
-/* TODO:
- *
- * o POWER PC
- * o clean up debugging
- * o tx_skb at PH_DEACTIVATE time
- */
-
-#include <linux/module.h>
-#include <linux/init.h>
-#include <linux/interrupt.h>
-#include <linux/pci.h>
-#include <linux/isapnp.h>
-#include <linux/kmod.h>
-#include <linux/slab.h>
-#include <linux/skbuff.h>
-#include <linux/netdevice.h>
-#include <linux/delay.h>
-
-#include <asm/io.h>
-
-#include "hisax_fcpcipnp.h"
-
-// debugging cruft
-#define __debug_variable debug
-#include "hisax_debug.h"
-
-#ifdef CONFIG_HISAX_DEBUG
-static int debug = 0;
-/* static int hdlcfifosize = 32; */
-module_param(debug, int, 0);
-/* module_param(hdlcfifosize, int, 0); */
-#endif
-
-MODULE_AUTHOR("Kai Germaschewski <kai.germaschewski@gmx.de>/Karsten Keil <kkeil@suse.de>");
-MODULE_DESCRIPTION("AVM Fritz!PCI/PnP ISDN driver");
-
-static const struct pci_device_id fcpci_ids[] = {
- { .vendor = PCI_VENDOR_ID_AVM,
- .device = PCI_DEVICE_ID_AVM_A1,
- .subvendor = PCI_ANY_ID,
- .subdevice = PCI_ANY_ID,
- .driver_data = (unsigned long) "Fritz!Card PCI",
- },
- { .vendor = PCI_VENDOR_ID_AVM,
- .device = PCI_DEVICE_ID_AVM_A1_V2,
- .subvendor = PCI_ANY_ID,
- .subdevice = PCI_ANY_ID,
- .driver_data = (unsigned long) "Fritz!Card PCI v2" },
- {}
-};
-
-MODULE_DEVICE_TABLE(pci, fcpci_ids);
-
-#ifdef CONFIG_PNP
-static struct pnp_device_id fcpnp_ids[] = {
- {
- .id = "AVM0900",
- .driver_data = (unsigned long) "Fritz!Card PnP",
- },
- { .id = "" }
-};
-
-MODULE_DEVICE_TABLE(pnp, fcpnp_ids);
-#endif
-
-static int protocol = 2; /* EURO-ISDN Default */
-module_param(protocol, int, 0);
-MODULE_LICENSE("GPL");
-
-// ----------------------------------------------------------------------
-
-#define AVM_INDEX 0x04
-#define AVM_DATA 0x10
-
-#define AVM_IDX_HDLC_1 0x00
-#define AVM_IDX_HDLC_2 0x01
-#define AVM_IDX_ISAC_FIFO 0x02
-#define AVM_IDX_ISAC_REG_LOW 0x04
-#define AVM_IDX_ISAC_REG_HIGH 0x06
-
-#define AVM_STATUS0 0x02
-
-#define AVM_STATUS0_IRQ_ISAC 0x01
-#define AVM_STATUS0_IRQ_HDLC 0x02
-#define AVM_STATUS0_IRQ_TIMER 0x04
-#define AVM_STATUS0_IRQ_MASK 0x07
-
-#define AVM_STATUS0_RESET 0x01
-#define AVM_STATUS0_DIS_TIMER 0x02
-#define AVM_STATUS0_RES_TIMER 0x04
-#define AVM_STATUS0_ENA_IRQ 0x08
-#define AVM_STATUS0_TESTBIT 0x10
-
-#define AVM_STATUS1 0x03
-#define AVM_STATUS1_ENA_IOM 0x80
-
-#define HDLC_FIFO 0x0
-#define HDLC_STATUS 0x4
-#define HDLC_CTRL 0x4
-
-#define HDLC_MODE_ITF_FLG 0x01
-#define HDLC_MODE_TRANS 0x02
-#define HDLC_MODE_CCR_7 0x04
-#define HDLC_MODE_CCR_16 0x08
-#define HDLC_MODE_TESTLOOP 0x80
-
-#define HDLC_INT_XPR 0x80
-#define HDLC_INT_XDU 0x40
-#define HDLC_INT_RPR 0x20
-#define HDLC_INT_MASK 0xE0
-
-#define HDLC_STAT_RME 0x01
-#define HDLC_STAT_RDO 0x10
-#define HDLC_STAT_CRCVFRRAB 0x0E
-#define HDLC_STAT_CRCVFR 0x06
-#define HDLC_STAT_RML_MASK 0xff00
-
-#define HDLC_CMD_XRS 0x80
-#define HDLC_CMD_XME 0x01
-#define HDLC_CMD_RRS 0x20
-#define HDLC_CMD_XML_MASK 0xff00
-
-#define AVM_HDLC_FIFO_1 0x10
-#define AVM_HDLC_FIFO_2 0x18
-
-#define AVM_HDLC_STATUS_1 0x14
-#define AVM_HDLC_STATUS_2 0x1c
-
-#define AVM_ISACSX_INDEX 0x04
-#define AVM_ISACSX_DATA 0x08
-
-// ----------------------------------------------------------------------
-// Fritz!PCI
-
-static unsigned char fcpci_read_isac(struct isac *isac, unsigned char offset)
-{
- struct fritz_adapter *adapter = isac->priv;
- unsigned char idx = (offset > 0x2f) ?
- AVM_IDX_ISAC_REG_HIGH : AVM_IDX_ISAC_REG_LOW;
- unsigned char val;
- unsigned long flags;
-
- spin_lock_irqsave(&adapter->hw_lock, flags);
- outb(idx, adapter->io + AVM_INDEX);
- val = inb(adapter->io + AVM_DATA + (offset & 0xf));
- spin_unlock_irqrestore(&adapter->hw_lock, flags);
- DBG(0x1000, " port %#x, value %#x",
- offset, val);
- return val;
-}
-
-static void fcpci_write_isac(struct isac *isac, unsigned char offset,
- unsigned char value)
-{
- struct fritz_adapter *adapter = isac->priv;
- unsigned char idx = (offset > 0x2f) ?
- AVM_IDX_ISAC_REG_HIGH : AVM_IDX_ISAC_REG_LOW;
- unsigned long flags;
-
- DBG(0x1000, " port %#x, value %#x",
- offset, value);
- spin_lock_irqsave(&adapter->hw_lock, flags);
- outb(idx, adapter->io + AVM_INDEX);
- outb(value, adapter->io + AVM_DATA + (offset & 0xf));
- spin_unlock_irqrestore(&adapter->hw_lock, flags);
-}
-
-static void fcpci_read_isac_fifo(struct isac *isac, unsigned char *data,
- int size)
-{
- struct fritz_adapter *adapter = isac->priv;
- unsigned long flags;
-
- spin_lock_irqsave(&adapter->hw_lock, flags);
- outb(AVM_IDX_ISAC_FIFO, adapter->io + AVM_INDEX);
- insb(adapter->io + AVM_DATA, data, size);
- spin_unlock_irqrestore(&adapter->hw_lock, flags);
-}
-
-static void fcpci_write_isac_fifo(struct isac *isac, unsigned char *data,
- int size)
-{
- struct fritz_adapter *adapter = isac->priv;
- unsigned long flags;
-
- spin_lock_irqsave(&adapter->hw_lock, flags);
- outb(AVM_IDX_ISAC_FIFO, adapter->io + AVM_INDEX);
- outsb(adapter->io + AVM_DATA, data, size);
- spin_unlock_irqrestore(&adapter->hw_lock, flags);
-}
-
-static u32 fcpci_read_hdlc_status(struct fritz_adapter *adapter, int nr)
-{
- u32 val;
- int idx = nr ? AVM_IDX_HDLC_2 : AVM_IDX_HDLC_1;
- unsigned long flags;
-
- spin_lock_irqsave(&adapter->hw_lock, flags);
- outl(idx, adapter->io + AVM_INDEX);
- val = inl(adapter->io + AVM_DATA + HDLC_STATUS);
- spin_unlock_irqrestore(&adapter->hw_lock, flags);
- return val;
-}
-
-static void __fcpci_write_ctrl(struct fritz_bcs *bcs, int which)
-{
- struct fritz_adapter *adapter = bcs->adapter;
- int idx = bcs->channel ? AVM_IDX_HDLC_2 : AVM_IDX_HDLC_1;
-
- DBG(0x40, "hdlc %c wr%x ctrl %x",
- 'A' + bcs->channel, which, bcs->ctrl.ctrl);
-
- outl(idx, adapter->io + AVM_INDEX);
- outl(bcs->ctrl.ctrl, adapter->io + AVM_DATA + HDLC_CTRL);
-}
-
-static void fcpci_write_ctrl(struct fritz_bcs *bcs, int which)
-{
- struct fritz_adapter *adapter = bcs->adapter;
- unsigned long flags;
-
- spin_lock_irqsave(&adapter->hw_lock, flags);
- __fcpci_write_ctrl(bcs, which);
- spin_unlock_irqrestore(&adapter->hw_lock, flags);
-}
-
-// ----------------------------------------------------------------------
-// Fritz!PCI v2
-
-static unsigned char fcpci2_read_isac(struct isac *isac, unsigned char offset)
-{
- struct fritz_adapter *adapter = isac->priv;
- unsigned char val;
- unsigned long flags;
-
- spin_lock_irqsave(&adapter->hw_lock, flags);
- outl(offset, adapter->io + AVM_ISACSX_INDEX);
- val = inl(adapter->io + AVM_ISACSX_DATA);
- spin_unlock_irqrestore(&adapter->hw_lock, flags);
- DBG(0x1000, " port %#x, value %#x",
- offset, val);
-
- return val;
-}
-
-static void fcpci2_write_isac(struct isac *isac, unsigned char offset,
- unsigned char value)
-{
- struct fritz_adapter *adapter = isac->priv;
- unsigned long flags;
-
- DBG(0x1000, " port %#x, value %#x",
- offset, value);
- spin_lock_irqsave(&adapter->hw_lock, flags);
- outl(offset, adapter->io + AVM_ISACSX_INDEX);
- outl(value, adapter->io + AVM_ISACSX_DATA);
- spin_unlock_irqrestore(&adapter->hw_lock, flags);
-}
-
-static void fcpci2_read_isac_fifo(struct isac *isac, unsigned char *data,
- int size)
-{
- struct fritz_adapter *adapter = isac->priv;
- int i;
- unsigned long flags;
-
- spin_lock_irqsave(&adapter->hw_lock, flags);
- outl(0, adapter->io + AVM_ISACSX_INDEX);
- for (i = 0; i < size; i++)
- data[i] = inl(adapter->io + AVM_ISACSX_DATA);
- spin_unlock_irqrestore(&adapter->hw_lock, flags);
-}
-
-static void fcpci2_write_isac_fifo(struct isac *isac, unsigned char *data,
- int size)
-{
- struct fritz_adapter *adapter = isac->priv;
- int i;
- unsigned long flags;
-
- spin_lock_irqsave(&adapter->hw_lock, flags);
- outl(0, adapter->io + AVM_ISACSX_INDEX);
- for (i = 0; i < size; i++)
- outl(data[i], adapter->io + AVM_ISACSX_DATA);
- spin_unlock_irqrestore(&adapter->hw_lock, flags);
-}
-
-static u32 fcpci2_read_hdlc_status(struct fritz_adapter *adapter, int nr)
-{
- int offset = nr ? AVM_HDLC_STATUS_2 : AVM_HDLC_STATUS_1;
-
- return inl(adapter->io + offset);
-}
-
-static void fcpci2_write_ctrl(struct fritz_bcs *bcs, int which)
-{
- struct fritz_adapter *adapter = bcs->adapter;
- int offset = bcs->channel ? AVM_HDLC_STATUS_2 : AVM_HDLC_STATUS_1;
-
- DBG(0x40, "hdlc %c wr%x ctrl %x",
- 'A' + bcs->channel, which, bcs->ctrl.ctrl);
-
- outl(bcs->ctrl.ctrl, adapter->io + offset);
-}
-
-// ----------------------------------------------------------------------
-// Fritz!PnP (ISAC access as for Fritz!PCI)
-
-static u32 fcpnp_read_hdlc_status(struct fritz_adapter *adapter, int nr)
-{
- unsigned char idx = nr ? AVM_IDX_HDLC_2 : AVM_IDX_HDLC_1;
- u32 val;
- unsigned long flags;
-
- spin_lock_irqsave(&adapter->hw_lock, flags);
- outb(idx, adapter->io + AVM_INDEX);
- val = inb(adapter->io + AVM_DATA + HDLC_STATUS);
- if (val & HDLC_INT_RPR)
- val |= inb(adapter->io + AVM_DATA + HDLC_STATUS + 1) << 8;
- spin_unlock_irqrestore(&adapter->hw_lock, flags);
- return val;
-}
-
-static void __fcpnp_write_ctrl(struct fritz_bcs *bcs, int which)
-{
- struct fritz_adapter *adapter = bcs->adapter;
- unsigned char idx = bcs->channel ? AVM_IDX_HDLC_2 : AVM_IDX_HDLC_1;
-
- DBG(0x40, "hdlc %c wr%x ctrl %x",
- 'A' + bcs->channel, which, bcs->ctrl.ctrl);
-
- outb(idx, adapter->io + AVM_INDEX);
- if (which & 4)
- outb(bcs->ctrl.sr.mode,
- adapter->io + AVM_DATA + HDLC_STATUS + 2);
- if (which & 2)
- outb(bcs->ctrl.sr.xml,
- adapter->io + AVM_DATA + HDLC_STATUS + 1);
- if (which & 1)
- outb(bcs->ctrl.sr.cmd,
- adapter->io + AVM_DATA + HDLC_STATUS + 0);
-}
-
-static void fcpnp_write_ctrl(struct fritz_bcs *bcs, int which)
-{
- struct fritz_adapter *adapter = bcs->adapter;
- unsigned long flags;
-
- spin_lock_irqsave(&adapter->hw_lock, flags);
- __fcpnp_write_ctrl(bcs, which);
- spin_unlock_irqrestore(&adapter->hw_lock, flags);
-}
-
-// ----------------------------------------------------------------------
-
-static inline void B_L1L2(struct fritz_bcs *bcs, int pr, void *arg)
-{
- struct hisax_if *ifc = (struct hisax_if *) &bcs->b_if;
-
- DBG(2, "pr %#x", pr);
- ifc->l1l2(ifc, pr, arg);
-}
-
-static void hdlc_fill_fifo(struct fritz_bcs *bcs)
-{
- struct fritz_adapter *adapter = bcs->adapter;
- struct sk_buff *skb = bcs->tx_skb;
- int count;
- unsigned long flags;
- unsigned char *p;
-
- DBG(0x40, "hdlc_fill_fifo");
-
- BUG_ON(skb->len == 0);
-
- bcs->ctrl.sr.cmd &= ~HDLC_CMD_XME;
- if (bcs->tx_skb->len > bcs->fifo_size) {
- count = bcs->fifo_size;
- } else {
- count = bcs->tx_skb->len;
- if (bcs->mode != L1_MODE_TRANS)
- bcs->ctrl.sr.cmd |= HDLC_CMD_XME;
- }
- DBG(0x40, "hdlc_fill_fifo %d/%d", count, bcs->tx_skb->len);
- p = bcs->tx_skb->data;
- skb_pull(bcs->tx_skb, count);
- bcs->tx_cnt += count;
- bcs->ctrl.sr.xml = ((count == bcs->fifo_size) ? 0 : count);
-
- switch (adapter->type) {
- case AVM_FRITZ_PCI:
- spin_lock_irqsave(&adapter->hw_lock, flags);
- // sets the correct AVM_INDEX, too
- __fcpci_write_ctrl(bcs, 3);
- outsl(adapter->io + AVM_DATA + HDLC_FIFO,
- p, (count + 3) / 4);
- spin_unlock_irqrestore(&adapter->hw_lock, flags);
- break;
- case AVM_FRITZ_PCIV2:
- fcpci2_write_ctrl(bcs, 3);
- outsl(adapter->io +
- (bcs->channel ? AVM_HDLC_FIFO_2 : AVM_HDLC_FIFO_1),
- p, (count + 3) / 4);
- break;
- case AVM_FRITZ_PNP:
- spin_lock_irqsave(&adapter->hw_lock, flags);
- // sets the correct AVM_INDEX, too
- __fcpnp_write_ctrl(bcs, 3);
- outsb(adapter->io + AVM_DATA, p, count);
- spin_unlock_irqrestore(&adapter->hw_lock, flags);
- break;
- }
-}
-
-static inline void hdlc_empty_fifo(struct fritz_bcs *bcs, int count)
-{
- struct fritz_adapter *adapter = bcs->adapter;
- unsigned char *p;
- unsigned char idx = bcs->channel ? AVM_IDX_HDLC_2 : AVM_IDX_HDLC_1;
-
- DBG(0x10, "hdlc_empty_fifo %d", count);
- if (bcs->rcvidx + count > HSCX_BUFMAX) {
- DBG(0x10, "hdlc_empty_fifo: incoming packet too large");
- return;
- }
- p = bcs->rcvbuf + bcs->rcvidx;
- bcs->rcvidx += count;
- switch (adapter->type) {
- case AVM_FRITZ_PCI:
- spin_lock(&adapter->hw_lock);
- outl(idx, adapter->io + AVM_INDEX);
- insl(adapter->io + AVM_DATA + HDLC_FIFO,
- p, (count + 3) / 4);
- spin_unlock(&adapter->hw_lock);
- break;
- case AVM_FRITZ_PCIV2:
- insl(adapter->io +
- (bcs->channel ? AVM_HDLC_FIFO_2 : AVM_HDLC_FIFO_1),
- p, (count + 3) / 4);
- break;
- case AVM_FRITZ_PNP:
- spin_lock(&adapter->hw_lock);
- outb(idx, adapter->io + AVM_INDEX);
- insb(adapter->io + AVM_DATA, p, count);
- spin_unlock(&adapter->hw_lock);
- break;
- }
-}
-
-static inline void hdlc_rpr_irq(struct fritz_bcs *bcs, u32 stat)
-{
- struct fritz_adapter *adapter = bcs->adapter;
- struct sk_buff *skb;
- int len;
-
- if (stat & HDLC_STAT_RDO) {
- DBG(0x10, "RDO");
- bcs->ctrl.sr.xml = 0;
- bcs->ctrl.sr.cmd |= HDLC_CMD_RRS;
- adapter->write_ctrl(bcs, 1);
- bcs->ctrl.sr.cmd &= ~HDLC_CMD_RRS;
- adapter->write_ctrl(bcs, 1);
- bcs->rcvidx = 0;
- return;
- }
-
- len = (stat & HDLC_STAT_RML_MASK) >> 8;
- if (len == 0)
- len = bcs->fifo_size;
-
- hdlc_empty_fifo(bcs, len);
-
- if ((stat & HDLC_STAT_RME) || (bcs->mode == L1_MODE_TRANS)) {
- if (((stat & HDLC_STAT_CRCVFRRAB) == HDLC_STAT_CRCVFR) ||
- (bcs->mode == L1_MODE_TRANS)) {
- skb = dev_alloc_skb(bcs->rcvidx);
- if (!skb) {
- printk(KERN_WARNING "HDLC: receive out of memory\n");
- } else {
- skb_put_data(skb, bcs->rcvbuf, bcs->rcvidx);
- DBG_SKB(1, skb);
- B_L1L2(bcs, PH_DATA | INDICATION, skb);
- }
- bcs->rcvidx = 0;
- } else {
- DBG(0x10, "ch%d invalid frame %#x",
- bcs->channel, stat);
- bcs->rcvidx = 0;
- }
- }
-}
-
-static inline void hdlc_xdu_irq(struct fritz_bcs *bcs)
-{
- struct fritz_adapter *adapter = bcs->adapter;
-
-
-	/* Here we lost a TX interrupt, so
- * restart transmitting the whole frame.
- */
- bcs->ctrl.sr.xml = 0;
- bcs->ctrl.sr.cmd |= HDLC_CMD_XRS;
- adapter->write_ctrl(bcs, 1);
- bcs->ctrl.sr.cmd &= ~HDLC_CMD_XRS;
-
- if (!bcs->tx_skb) {
- DBG(0x10, "XDU without skb");
- adapter->write_ctrl(bcs, 1);
- return;
- }
- /* only hdlc restarts the frame, transparent mode must continue */
- if (bcs->mode == L1_MODE_HDLC) {
- skb_push(bcs->tx_skb, bcs->tx_cnt);
- bcs->tx_cnt = 0;
- }
-}
-
-static inline void hdlc_xpr_irq(struct fritz_bcs *bcs)
-{
- struct sk_buff *skb;
-
- skb = bcs->tx_skb;
- if (!skb)
- return;
-
- if (skb->len) {
- hdlc_fill_fifo(bcs);
- return;
- }
- bcs->tx_cnt = 0;
- bcs->tx_skb = NULL;
- B_L1L2(bcs, PH_DATA | CONFIRM, (void *)(unsigned long)skb->truesize);
- dev_kfree_skb_irq(skb);
-}
-
-static void hdlc_irq_one(struct fritz_bcs *bcs, u32 stat)
-{
- DBG(0x10, "ch%d stat %#x", bcs->channel, stat);
- if (stat & HDLC_INT_RPR) {
- DBG(0x10, "RPR");
- hdlc_rpr_irq(bcs, stat);
- }
- if (stat & HDLC_INT_XDU) {
- DBG(0x10, "XDU");
- hdlc_xdu_irq(bcs);
- hdlc_xpr_irq(bcs);
- return;
- }
- if (stat & HDLC_INT_XPR) {
- DBG(0x10, "XPR");
- hdlc_xpr_irq(bcs);
- }
-}
-
-static inline void hdlc_irq(struct fritz_adapter *adapter)
-{
- int nr;
- u32 stat;
-
- for (nr = 0; nr < 2; nr++) {
- stat = adapter->read_hdlc_status(adapter, nr);
- DBG(0x10, "HDLC %c stat %#x", 'A' + nr, stat);
- if (stat & HDLC_INT_MASK)
- hdlc_irq_one(&adapter->bcs[nr], stat);
- }
-}
-
-static void modehdlc(struct fritz_bcs *bcs, int mode)
-{
- struct fritz_adapter *adapter = bcs->adapter;
-
- DBG(0x40, "hdlc %c mode %d --> %d",
- 'A' + bcs->channel, bcs->mode, mode);
-
- if (bcs->mode == mode)
- return;
-
- bcs->fifo_size = 32;
- bcs->ctrl.ctrl = 0;
- bcs->ctrl.sr.cmd = HDLC_CMD_XRS | HDLC_CMD_RRS;
- switch (mode) {
- case L1_MODE_NULL:
- bcs->ctrl.sr.mode = HDLC_MODE_TRANS;
- adapter->write_ctrl(bcs, 5);
- break;
- case L1_MODE_TRANS:
- case L1_MODE_HDLC:
- bcs->rcvidx = 0;
- bcs->tx_cnt = 0;
- bcs->tx_skb = NULL;
- if (mode == L1_MODE_TRANS) {
- bcs->ctrl.sr.mode = HDLC_MODE_TRANS;
- } else {
- bcs->ctrl.sr.mode = HDLC_MODE_ITF_FLG;
- }
- adapter->write_ctrl(bcs, 5);
- bcs->ctrl.sr.cmd = HDLC_CMD_XRS;
- adapter->write_ctrl(bcs, 1);
- bcs->ctrl.sr.cmd = 0;
- break;
- }
- bcs->mode = mode;
-}
-
-static void fritz_b_l2l1(struct hisax_if *ifc, int pr, void *arg)
-{
- struct fritz_bcs *bcs = ifc->priv;
- struct sk_buff *skb = arg;
- int mode;
-
- DBG(0x10, "pr %#x", pr);
-
- switch (pr) {
- case PH_DATA | REQUEST:
- BUG_ON(bcs->tx_skb);
- bcs->tx_skb = skb;
- DBG_SKB(1, skb);
- hdlc_fill_fifo(bcs);
- break;
- case PH_ACTIVATE | REQUEST:
- mode = (long) arg;
- DBG(4, "B%d,PH_ACTIVATE_REQUEST %d", bcs->channel + 1, mode);
- modehdlc(bcs, mode);
- B_L1L2(bcs, PH_ACTIVATE | INDICATION, NULL);
- break;
- case PH_DEACTIVATE | REQUEST:
- DBG(4, "B%d,PH_DEACTIVATE_REQUEST", bcs->channel + 1);
- modehdlc(bcs, L1_MODE_NULL);
- B_L1L2(bcs, PH_DEACTIVATE | INDICATION, NULL);
- break;
- }
-}
-
-// ----------------------------------------------------------------------
-
-static irqreturn_t
-fcpci2_irq(int intno, void *dev)
-{
- struct fritz_adapter *adapter = dev;
- unsigned char val;
-
- val = inb(adapter->io + AVM_STATUS0);
- if (!(val & AVM_STATUS0_IRQ_MASK))
-		/* hopefully a shared IRQ request */
- return IRQ_NONE;
- DBG(2, "STATUS0 %#x", val);
- if (val & AVM_STATUS0_IRQ_ISAC)
- isacsx_irq(&adapter->isac);
- if (val & AVM_STATUS0_IRQ_HDLC)
- hdlc_irq(adapter);
- if (val & AVM_STATUS0_IRQ_ISAC)
- isacsx_irq(&adapter->isac);
- return IRQ_HANDLED;
-}
-
-static irqreturn_t
-fcpci_irq(int intno, void *dev)
-{
- struct fritz_adapter *adapter = dev;
- unsigned char sval;
-
- sval = inb(adapter->io + 2);
- if ((sval & AVM_STATUS0_IRQ_MASK) == AVM_STATUS0_IRQ_MASK)
-		/* possibly a shared IRQ request */
- return IRQ_NONE;
- DBG(2, "sval %#x", sval);
- if (!(sval & AVM_STATUS0_IRQ_ISAC))
- isac_irq(&adapter->isac);
-
- if (!(sval & AVM_STATUS0_IRQ_HDLC))
- hdlc_irq(adapter);
- return IRQ_HANDLED;
-}
-
-// ----------------------------------------------------------------------
-
-static inline void fcpci2_init(struct fritz_adapter *adapter)
-{
- outb(AVM_STATUS0_RES_TIMER, adapter->io + AVM_STATUS0);
- outb(AVM_STATUS0_ENA_IRQ, adapter->io + AVM_STATUS0);
-
-}
-
-static inline void fcpci_init(struct fritz_adapter *adapter)
-{
- outb(AVM_STATUS0_DIS_TIMER | AVM_STATUS0_RES_TIMER |
- AVM_STATUS0_ENA_IRQ, adapter->io + AVM_STATUS0);
-
- outb(AVM_STATUS1_ENA_IOM | adapter->irq,
- adapter->io + AVM_STATUS1);
- mdelay(10);
-}
-
-// ----------------------------------------------------------------------
-
-static int fcpcipnp_setup(struct fritz_adapter *adapter)
-{
- u32 val = 0;
- int retval;
-
- DBG(1, "");
-
- isac_init(&adapter->isac); // FIXME is this okay now
-
- retval = -EBUSY;
- if (!request_region(adapter->io, 32, "fcpcipnp"))
- goto err;
-
- switch (adapter->type) {
- case AVM_FRITZ_PCIV2:
- case AVM_FRITZ_PCI:
- val = inl(adapter->io);
- break;
- case AVM_FRITZ_PNP:
- val = inb(adapter->io);
- val |= inb(adapter->io + 1) << 8;
- break;
- }
-
- DBG(1, "stat %#x Class %X Rev %d",
- val, val & 0xff, (val >> 8) & 0xff);
-
- spin_lock_init(&adapter->hw_lock);
- adapter->isac.priv = adapter;
- switch (adapter->type) {
- case AVM_FRITZ_PCIV2:
- adapter->isac.read_isac = &fcpci2_read_isac;
- adapter->isac.write_isac = &fcpci2_write_isac;
- adapter->isac.read_isac_fifo = &fcpci2_read_isac_fifo;
- adapter->isac.write_isac_fifo = &fcpci2_write_isac_fifo;
-
- adapter->read_hdlc_status = &fcpci2_read_hdlc_status;
- adapter->write_ctrl = &fcpci2_write_ctrl;
- break;
- case AVM_FRITZ_PCI:
- adapter->isac.read_isac = &fcpci_read_isac;
- adapter->isac.write_isac = &fcpci_write_isac;
- adapter->isac.read_isac_fifo = &fcpci_read_isac_fifo;
- adapter->isac.write_isac_fifo = &fcpci_write_isac_fifo;
-
- adapter->read_hdlc_status = &fcpci_read_hdlc_status;
- adapter->write_ctrl = &fcpci_write_ctrl;
- break;
- case AVM_FRITZ_PNP:
- adapter->isac.read_isac = &fcpci_read_isac;
- adapter->isac.write_isac = &fcpci_write_isac;
- adapter->isac.read_isac_fifo = &fcpci_read_isac_fifo;
- adapter->isac.write_isac_fifo = &fcpci_write_isac_fifo;
-
- adapter->read_hdlc_status = &fcpnp_read_hdlc_status;
- adapter->write_ctrl = &fcpnp_write_ctrl;
- break;
- }
-
- // Reset
- outb(0, adapter->io + AVM_STATUS0);
- mdelay(10);
- outb(AVM_STATUS0_RESET, adapter->io + AVM_STATUS0);
- mdelay(10);
- outb(0, adapter->io + AVM_STATUS0);
- mdelay(10);
-
- switch (adapter->type) {
- case AVM_FRITZ_PCIV2:
- retval = request_irq(adapter->irq, fcpci2_irq, IRQF_SHARED,
- "fcpcipnp", adapter);
- break;
- case AVM_FRITZ_PCI:
- retval = request_irq(adapter->irq, fcpci_irq, IRQF_SHARED,
- "fcpcipnp", adapter);
- break;
- case AVM_FRITZ_PNP:
- retval = request_irq(adapter->irq, fcpci_irq, 0,
- "fcpcipnp", adapter);
- break;
- }
- if (retval)
- goto err_region;
-
- switch (adapter->type) {
- case AVM_FRITZ_PCIV2:
- fcpci2_init(adapter);
- isacsx_setup(&adapter->isac);
- break;
- case AVM_FRITZ_PCI:
- case AVM_FRITZ_PNP:
- fcpci_init(adapter);
- isac_setup(&adapter->isac);
- break;
- }
- val = adapter->read_hdlc_status(adapter, 0);
- DBG(0x20, "HDLC A STA %x", val);
- val = adapter->read_hdlc_status(adapter, 1);
- DBG(0x20, "HDLC B STA %x", val);
-
- adapter->bcs[0].mode = -1;
- adapter->bcs[1].mode = -1;
- modehdlc(&adapter->bcs[0], L1_MODE_NULL);
- modehdlc(&adapter->bcs[1], L1_MODE_NULL);
-
- return 0;
-
-err_region:
- release_region(adapter->io, 32);
-err:
- return retval;
-}
-
-static void fcpcipnp_release(struct fritz_adapter *adapter)
-{
- DBG(1, "");
-
- outb(0, adapter->io + AVM_STATUS0);
- free_irq(adapter->irq, adapter);
- release_region(adapter->io, 32);
-}
-
-// ----------------------------------------------------------------------
-
-static struct fritz_adapter *new_adapter(void)
-{
- struct fritz_adapter *adapter;
- struct hisax_b_if *b_if[2];
- int i;
-
- adapter = kzalloc(sizeof(struct fritz_adapter), GFP_KERNEL);
- if (!adapter)
- return NULL;
-
- adapter->isac.hisax_d_if.owner = THIS_MODULE;
- adapter->isac.hisax_d_if.ifc.priv = &adapter->isac;
- adapter->isac.hisax_d_if.ifc.l2l1 = isac_d_l2l1;
-
- for (i = 0; i < 2; i++) {
- adapter->bcs[i].adapter = adapter;
- adapter->bcs[i].channel = i;
- adapter->bcs[i].b_if.ifc.priv = &adapter->bcs[i];
- adapter->bcs[i].b_if.ifc.l2l1 = fritz_b_l2l1;
- }
-
- for (i = 0; i < 2; i++)
- b_if[i] = &adapter->bcs[i].b_if;
-
- if (hisax_register(&adapter->isac.hisax_d_if, b_if, "fcpcipnp",
- protocol) != 0) {
- kfree(adapter);
- adapter = NULL;
- }
-
- return adapter;
-}
-
-static void delete_adapter(struct fritz_adapter *adapter)
-{
- hisax_unregister(&adapter->isac.hisax_d_if);
- kfree(adapter);
-}
-
-static int fcpci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
-{
- struct fritz_adapter *adapter;
- int retval;
-
- retval = -ENOMEM;
- adapter = new_adapter();
- if (!adapter)
- goto err;
-
- pci_set_drvdata(pdev, adapter);
-
- if (pdev->device == PCI_DEVICE_ID_AVM_A1_V2)
- adapter->type = AVM_FRITZ_PCIV2;
- else
- adapter->type = AVM_FRITZ_PCI;
-
- retval = pci_enable_device(pdev);
- if (retval)
- goto err_free;
-
- adapter->io = pci_resource_start(pdev, 1);
- adapter->irq = pdev->irq;
-
- printk(KERN_INFO "hisax_fcpcipnp: found adapter %s at %s\n",
- (char *) ent->driver_data, pci_name(pdev));
-
- retval = fcpcipnp_setup(adapter);
- if (retval)
- goto err_free;
-
- return 0;
-
-err_free:
- delete_adapter(adapter);
-err:
- return retval;
-}
-
-#ifdef CONFIG_PNP
-static int fcpnp_probe(struct pnp_dev *pdev, const struct pnp_device_id *dev_id)
-{
- struct fritz_adapter *adapter;
- int retval;
-
- if (!pdev)
- return (-ENODEV);
-
- retval = -ENOMEM;
- adapter = new_adapter();
- if (!adapter)
- goto err;
-
- pnp_set_drvdata(pdev, adapter);
-
- adapter->type = AVM_FRITZ_PNP;
-
- pnp_disable_dev(pdev);
- retval = pnp_activate_dev(pdev);
- if (retval < 0) {
- printk(KERN_WARNING "%s: pnp_activate_dev(%s) ret(%d)\n", __func__,
- (char *)dev_id->driver_data, retval);
- goto err_free;
- }
- adapter->io = pnp_port_start(pdev, 0);
- adapter->irq = pnp_irq(pdev, 0);
- if (!adapter->io || adapter->irq == -1)
- goto err_free;
-
- printk(KERN_INFO "hisax_fcpcipnp: found adapter %s at IO %#x irq %d\n",
- (char *) dev_id->driver_data, adapter->io, adapter->irq);
-
- retval = fcpcipnp_setup(adapter);
- if (retval)
- goto err_free;
-
- return 0;
-
-err_free:
- delete_adapter(adapter);
-err:
- return retval;
-}
-
-static void fcpnp_remove(struct pnp_dev *pdev)
-{
- struct fritz_adapter *adapter = pnp_get_drvdata(pdev);
-
- if (adapter) {
- fcpcipnp_release(adapter);
- delete_adapter(adapter);
- }
- pnp_disable_dev(pdev);
-}
-
-static struct pnp_driver fcpnp_driver = {
- .name = "fcpnp",
- .probe = fcpnp_probe,
- .remove = fcpnp_remove,
- .id_table = fcpnp_ids,
-};
-#endif
-
-static void fcpci_remove(struct pci_dev *pdev)
-{
- struct fritz_adapter *adapter = pci_get_drvdata(pdev);
-
- fcpcipnp_release(adapter);
- pci_disable_device(pdev);
- delete_adapter(adapter);
-}
-
-static struct pci_driver fcpci_driver = {
- .name = "fcpci",
- .probe = fcpci_probe,
- .remove = fcpci_remove,
- .id_table = fcpci_ids,
-};
-
-static int __init hisax_fcpcipnp_init(void)
-{
- int retval;
-
- printk(KERN_INFO "hisax_fcpcipnp: Fritz!Card PCI/PCIv2/PnP ISDN driver v0.0.1\n");
-
- retval = pci_register_driver(&fcpci_driver);
- if (retval)
- return retval;
-#ifdef CONFIG_PNP
- retval = pnp_register_driver(&fcpnp_driver);
- if (retval < 0) {
- pci_unregister_driver(&fcpci_driver);
- return retval;
- }
-#endif
- return 0;
-}
-
-static void __exit hisax_fcpcipnp_exit(void)
-{
-#ifdef CONFIG_PNP
- pnp_unregister_driver(&fcpnp_driver);
-#endif
- pci_unregister_driver(&fcpci_driver);
-}
-
-module_init(hisax_fcpcipnp_init);
-module_exit(hisax_fcpcipnp_exit);
diff --git a/drivers/isdn/hisax/hisax_fcpcipnp.h b/drivers/isdn/hisax/hisax_fcpcipnp.h
deleted file mode 100644
index 1f64e9937aa1..000000000000
--- a/drivers/isdn/hisax/hisax_fcpcipnp.h
+++ /dev/null
@@ -1,58 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#include "hisax_if.h"
-#include "hisax_isac.h"
-#include <linux/pci.h>
-
-#define HSCX_BUFMAX 4096
-
-enum {
- AVM_FRITZ_PCI,
- AVM_FRITZ_PNP,
- AVM_FRITZ_PCIV2,
-};
-
-struct hdlc_stat_reg {
-#ifdef __BIG_ENDIAN
- u_char fill;
- u_char mode;
- u_char xml;
- u_char cmd;
-#else
- u_char cmd;
- u_char xml;
- u_char mode;
- u_char fill;
-#endif
-} __attribute__((packed));
-
-struct fritz_bcs {
- struct hisax_b_if b_if;
- struct fritz_adapter *adapter;
- int mode;
- int channel;
-
- union {
- u_int ctrl;
- struct hdlc_stat_reg sr;
- } ctrl;
- u_int stat;
- int rcvidx;
- int fifo_size;
- u_char rcvbuf[HSCX_BUFMAX]; /* B-Channel receive Buffer */
-
- int tx_cnt; /* B-Channel transmit counter */
- struct sk_buff *tx_skb; /* B-Channel transmit Buffer */
-};
-
-struct fritz_adapter {
- int type;
- spinlock_t hw_lock;
- unsigned int io;
- unsigned int irq;
- struct isac isac;
-
- struct fritz_bcs bcs[2];
-
- u32 (*read_hdlc_status) (struct fritz_adapter *adapter, int nr);
- void (*write_ctrl) (struct fritz_bcs *bcs, int which);
-};
diff --git a/drivers/isdn/hisax/hisax_if.h b/drivers/isdn/hisax/hisax_if.h
deleted file mode 100644
index 7098d6bd5ff2..000000000000
--- a/drivers/isdn/hisax/hisax_if.h
+++ /dev/null
@@ -1,66 +0,0 @@
-/*
- * Interface between low level (hardware) drivers and
- * HiSax protocol stack
- *
- * Author Kai Germaschewski
- * Copyright 2001 by Kai Germaschewski <kai.germaschewski@gmx.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#ifndef __HISAX_IF_H__
-#define __HISAX_IF_H__
-
-#include <linux/skbuff.h>
-
-#define REQUEST 0
-#define CONFIRM 1
-#define INDICATION 2
-#define RESPONSE 3
-
-#define PH_ACTIVATE 0x0100
-#define PH_DEACTIVATE 0x0110
-#define PH_DATA 0x0120
-#define PH_PULL 0x0130
-#define PH_DATA_E 0x0140
-
-#define L1_MODE_NULL 0
-#define L1_MODE_TRANS 1
-#define L1_MODE_HDLC 2
-#define L1_MODE_EXTRN 3
-#define L1_MODE_HDLC_56K 4
-#define L1_MODE_MODEM 7
-#define L1_MODE_V32 8
-#define L1_MODE_FAX 9
-
-struct hisax_if {
- void *priv; // private to driver
- void (*l1l2)(struct hisax_if *, int pr, void *arg);
- void (*l2l1)(struct hisax_if *, int pr, void *arg);
-};
-
-struct hisax_b_if {
- struct hisax_if ifc;
-
- // private to hisax
- struct BCState *bcs;
-};
-
-struct hisax_d_if {
- struct hisax_if ifc;
-
- // private to hisax
- struct module *owner;
- struct IsdnCardState *cs;
- struct hisax_b_if *b_if[2];
- struct sk_buff_head erq;
- unsigned long ph_state;
-};
-
-int hisax_register(struct hisax_d_if *hisax_if, struct hisax_b_if *b_if[],
- char *name, int protocol);
-void hisax_unregister(struct hisax_d_if *hisax_if);
-
-#endif
diff --git a/drivers/isdn/hisax/hisax_isac.c b/drivers/isdn/hisax/hisax_isac.c
deleted file mode 100644
index 0f36375478c5..000000000000
--- a/drivers/isdn/hisax/hisax_isac.c
+++ /dev/null
@@ -1,895 +0,0 @@
-/*
- * Driver for ISAC-S and ISAC-SX
- * ISDN Subscriber Access Controller for Terminals
- *
- * Author Kai Germaschewski
- * Copyright 2001 by Kai Germaschewski <kai.germaschewski@gmx.de>
- * 2001 by Karsten Keil <keil@isdn4linux.de>
- *
- * based upon Karsten Keil's original isac.c driver
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- * Thanks to Wizard Computersysteme GmbH, Bremervoerde and
- * SoHaNet Technology GmbH, Berlin
- * for supporting the development of this driver
- */
-
-/* TODO:
- * specifically handle level vs edge triggered?
- */
-
-#include <linux/module.h>
-#include <linux/gfp.h>
-#include <linux/init.h>
-#include <linux/netdevice.h>
-#include "hisax_isac.h"
-
-// debugging cruft
-
-#define __debug_variable debug
-#include "hisax_debug.h"
-
-#ifdef CONFIG_HISAX_DEBUG
-static int debug = 1;
-module_param(debug, int, 0);
-
-static char *ISACVer[] = {
- "2086/2186 V1.1",
- "2085 B1",
- "2085 B2",
- "2085 V2.3"
-};
-#endif
-
-MODULE_AUTHOR("Kai Germaschewski <kai.germaschewski@gmx.de>/Karsten Keil <kkeil@suse.de>");
-MODULE_DESCRIPTION("ISAC/ISAC-SX driver");
-MODULE_LICENSE("GPL");
-
-#define DBG_WARN 0x0001
-#define DBG_IRQ 0x0002
-#define DBG_L1M 0x0004
-#define DBG_PR 0x0008
-#define DBG_RFIFO 0x0100
-#define DBG_RPACKET 0x0200
-#define DBG_XFIFO 0x1000
-#define DBG_XPACKET 0x2000
-
-// we need to distinguish ISAC-S and ISAC-SX
-#define TYPE_ISAC 0x00
-#define TYPE_ISACSX 0x01
-
-// registers etc.
-#define ISAC_MASK 0x20
-#define ISAC_ISTA 0x20
-#define ISAC_ISTA_EXI 0x01
-#define ISAC_ISTA_SIN 0x02
-#define ISAC_ISTA_CISQ 0x04
-#define ISAC_ISTA_XPR 0x10
-#define ISAC_ISTA_RSC 0x20
-#define ISAC_ISTA_RPF 0x40
-#define ISAC_ISTA_RME 0x80
-
-#define ISAC_STAR 0x21
-#define ISAC_CMDR 0x21
-#define ISAC_CMDR_XRES 0x01
-#define ISAC_CMDR_XME 0x02
-#define ISAC_CMDR_XTF 0x08
-#define ISAC_CMDR_RRES 0x40
-#define ISAC_CMDR_RMC 0x80
-
-#define ISAC_EXIR 0x24
-#define ISAC_EXIR_MOS 0x04
-#define ISAC_EXIR_XDU 0x40
-#define ISAC_EXIR_XMR 0x80
-
-#define ISAC_ADF2 0x39
-#define ISAC_SPCR 0x30
-#define ISAC_ADF1 0x38
-
-#define ISAC_CIR0 0x31
-#define ISAC_CIX0 0x31
-#define ISAC_CIR0_CIC0 0x02
-#define ISAC_CIR0_CIC1 0x01
-
-#define ISAC_CIR1 0x33
-#define ISAC_CIX1 0x33
-#define ISAC_STCR 0x37
-#define ISAC_MODE 0x22
-
-#define ISAC_RSTA 0x27
-#define ISAC_RSTA_RDO 0x40
-#define ISAC_RSTA_CRC 0x20
-#define ISAC_RSTA_RAB 0x10
-
-#define ISAC_RBCL 0x25
-#define ISAC_RBCH 0x2A
-#define ISAC_TIMR 0x23
-#define ISAC_SQXR 0x3b
-#define ISAC_MOSR 0x3a
-#define ISAC_MOCR 0x3a
-#define ISAC_MOR0 0x32
-#define ISAC_MOX0 0x32
-#define ISAC_MOR1 0x34
-#define ISAC_MOX1 0x34
-
-#define ISAC_RBCH_XAC 0x80
-
-#define ISAC_CMD_TIM 0x0
-#define ISAC_CMD_RES 0x1
-#define ISAC_CMD_SSP 0x2
-#define ISAC_CMD_SCP 0x3
-#define ISAC_CMD_AR8 0x8
-#define ISAC_CMD_AR10 0x9
-#define ISAC_CMD_ARL 0xa
-#define ISAC_CMD_DI 0xf
-
-#define ISACSX_MASK 0x60
-#define ISACSX_ISTA 0x60
-#define ISACSX_ISTA_ICD 0x01
-#define ISACSX_ISTA_CIC 0x10
-
-#define ISACSX_MASKD 0x20
-#define ISACSX_ISTAD 0x20
-#define ISACSX_ISTAD_XDU 0x04
-#define ISACSX_ISTAD_XMR 0x08
-#define ISACSX_ISTAD_XPR 0x10
-#define ISACSX_ISTAD_RFO 0x20
-#define ISACSX_ISTAD_RPF 0x40
-#define ISACSX_ISTAD_RME 0x80
-
-#define ISACSX_CMDRD 0x21
-#define ISACSX_CMDRD_XRES 0x01
-#define ISACSX_CMDRD_XME 0x02
-#define ISACSX_CMDRD_XTF 0x08
-#define ISACSX_CMDRD_RRES 0x40
-#define ISACSX_CMDRD_RMC 0x80
-
-#define ISACSX_MODED 0x22
-
-#define ISACSX_RBCLD 0x26
-
-#define ISACSX_RSTAD 0x28
-#define ISACSX_RSTAD_RAB 0x10
-#define ISACSX_RSTAD_CRC 0x20
-#define ISACSX_RSTAD_RDO 0x40
-#define ISACSX_RSTAD_VFR 0x80
-
-#define ISACSX_CIR0 0x2e
-#define ISACSX_CIR0_CIC0 0x08
-#define ISACSX_CIX0 0x2e
-
-#define ISACSX_TR_CONF0 0x30
-
-#define ISACSX_TR_CONF2 0x32
-
-static struct Fsm l1fsm;
-
-enum {
- ST_L1_RESET,
- ST_L1_F3_PDOWN,
- ST_L1_F3_PUP,
- ST_L1_F3_PEND_DEACT,
- ST_L1_F4,
- ST_L1_F5,
- ST_L1_F6,
- ST_L1_F7,
- ST_L1_F8,
-};
-
-#define L1_STATE_COUNT (ST_L1_F8 + 1)
-
-static char *strL1State[] =
-{
- "ST_L1_RESET",
- "ST_L1_F3_PDOWN",
- "ST_L1_F3_PUP",
- "ST_L1_F3_PEND_DEACT",
- "ST_L1_F4",
- "ST_L1_F5",
- "ST_L1_F6",
- "ST_L1_F7",
- "ST_L1_F8",
-};
-
-enum {
- EV_PH_DR, // 0000
- EV_PH_RES, // 0001
- EV_PH_TMA, // 0010
- EV_PH_SLD, // 0011
- EV_PH_RSY, // 0100
- EV_PH_DR6, // 0101
- EV_PH_EI, // 0110
- EV_PH_PU, // 0111
- EV_PH_AR, // 1000
- EV_PH_9, // 1001
- EV_PH_ARL, // 1010
- EV_PH_CVR, // 1011
- EV_PH_AI8, // 1100
- EV_PH_AI10, // 1101
- EV_PH_AIL, // 1110
- EV_PH_DC, // 1111
- EV_PH_ACTIVATE_REQ,
- EV_PH_DEACTIVATE_REQ,
- EV_TIMER3,
-};
-
-#define L1_EVENT_COUNT (EV_TIMER3 + 1)
-
-static char *strL1Event[] =
-{
- "EV_PH_DR", // 0000
- "EV_PH_RES", // 0001
- "EV_PH_TMA", // 0010
- "EV_PH_SLD", // 0011
- "EV_PH_RSY", // 0100
- "EV_PH_DR6", // 0101
- "EV_PH_EI", // 0110
- "EV_PH_PU", // 0111
- "EV_PH_AR", // 1000
- "EV_PH_9", // 1001
- "EV_PH_ARL", // 1010
- "EV_PH_CVR", // 1011
- "EV_PH_AI8", // 1100
- "EV_PH_AI10", // 1101
- "EV_PH_AIL", // 1110
- "EV_PH_DC", // 1111
- "EV_PH_ACTIVATE_REQ",
- "EV_PH_DEACTIVATE_REQ",
- "EV_TIMER3",
-};
-
-static inline void D_L1L2(struct isac *isac, int pr, void *arg)
-{
- struct hisax_if *ifc = (struct hisax_if *) &isac->hisax_d_if;
-
- DBG(DBG_PR, "pr %#x", pr);
- ifc->l1l2(ifc, pr, arg);
-}
-
-static void ph_command(struct isac *isac, unsigned int command)
-{
- DBG(DBG_L1M, "ph_command %#x", command);
- switch (isac->type) {
- case TYPE_ISAC:
- isac->write_isac(isac, ISAC_CIX0, (command << 2) | 3);
- break;
- case TYPE_ISACSX:
- isac->write_isac(isac, ISACSX_CIX0, (command << 4) | (7 << 1));
- break;
- }
-}
-
-// ----------------------------------------------------------------------
-
-static void l1_di(struct FsmInst *fi, int event, void *arg)
-{
- struct isac *isac = fi->userdata;
-
- FsmChangeState(fi, ST_L1_RESET);
- ph_command(isac, ISAC_CMD_DI);
-}
-
-static void l1_di_deact_ind(struct FsmInst *fi, int event, void *arg)
-{
- struct isac *isac = fi->userdata;
-
- FsmChangeState(fi, ST_L1_RESET);
- D_L1L2(isac, PH_DEACTIVATE | INDICATION, NULL);
- ph_command(isac, ISAC_CMD_DI);
-}
-
-static void l1_go_f3pdown(struct FsmInst *fi, int event, void *arg)
-{
- FsmChangeState(fi, ST_L1_F3_PDOWN);
-}
-
-static void l1_go_f3pend_deact_ind(struct FsmInst *fi, int event, void *arg)
-{
- struct isac *isac = fi->userdata;
-
- FsmChangeState(fi, ST_L1_F3_PEND_DEACT);
- D_L1L2(isac, PH_DEACTIVATE | INDICATION, NULL);
- ph_command(isac, ISAC_CMD_DI);
-}
-
-static void l1_go_f3pend(struct FsmInst *fi, int event, void *arg)
-{
- struct isac *isac = fi->userdata;
-
- FsmChangeState(fi, ST_L1_F3_PEND_DEACT);
- ph_command(isac, ISAC_CMD_DI);
-}
-
-static void l1_go_f4(struct FsmInst *fi, int event, void *arg)
-{
- FsmChangeState(fi, ST_L1_F4);
-}
-
-static void l1_go_f5(struct FsmInst *fi, int event, void *arg)
-{
- FsmChangeState(fi, ST_L1_F5);
-}
-
-static void l1_go_f6(struct FsmInst *fi, int event, void *arg)
-{
- FsmChangeState(fi, ST_L1_F6);
-}
-
-static void l1_go_f6_deact_ind(struct FsmInst *fi, int event, void *arg)
-{
- struct isac *isac = fi->userdata;
-
- FsmChangeState(fi, ST_L1_F6);
- D_L1L2(isac, PH_DEACTIVATE | INDICATION, NULL);
-}
-
-static void l1_go_f7_act_ind(struct FsmInst *fi, int event, void *arg)
-{
- struct isac *isac = fi->userdata;
-
- FsmDelTimer(&isac->timer, 0);
- FsmChangeState(fi, ST_L1_F7);
- ph_command(isac, ISAC_CMD_AR8);
- D_L1L2(isac, PH_ACTIVATE | INDICATION, NULL);
-}
-
-static void l1_go_f8(struct FsmInst *fi, int event, void *arg)
-{
- FsmChangeState(fi, ST_L1_F8);
-}
-
-static void l1_go_f8_deact_ind(struct FsmInst *fi, int event, void *arg)
-{
- struct isac *isac = fi->userdata;
-
- FsmChangeState(fi, ST_L1_F8);
- D_L1L2(isac, PH_DEACTIVATE | INDICATION, NULL);
-}
-
-static void l1_ar8(struct FsmInst *fi, int event, void *arg)
-{
- struct isac *isac = fi->userdata;
-
- FsmRestartTimer(&isac->timer, TIMER3_VALUE, EV_TIMER3, NULL, 2);
- ph_command(isac, ISAC_CMD_AR8);
-}
-
-static void l1_timer3(struct FsmInst *fi, int event, void *arg)
-{
- struct isac *isac = fi->userdata;
-
- ph_command(isac, ISAC_CMD_DI);
- D_L1L2(isac, PH_DEACTIVATE | INDICATION, NULL);
-}
-
-// state machines according to data sheet PSB 2186 / 3186
-
-static struct FsmNode L1FnList[] __initdata =
-{
- {ST_L1_RESET, EV_PH_RES, l1_di},
- {ST_L1_RESET, EV_PH_EI, l1_di},
- {ST_L1_RESET, EV_PH_DC, l1_go_f3pdown},
- {ST_L1_RESET, EV_PH_AR, l1_go_f6},
- {ST_L1_RESET, EV_PH_AI8, l1_go_f7_act_ind},
-
- {ST_L1_F3_PDOWN, EV_PH_RES, l1_di},
- {ST_L1_F3_PDOWN, EV_PH_EI, l1_di},
- {ST_L1_F3_PDOWN, EV_PH_AR, l1_go_f6},
- {ST_L1_F3_PDOWN, EV_PH_RSY, l1_go_f5},
- {ST_L1_F3_PDOWN, EV_PH_PU, l1_go_f4},
- {ST_L1_F3_PDOWN, EV_PH_AI8, l1_go_f7_act_ind},
- {ST_L1_F3_PDOWN, EV_PH_ACTIVATE_REQ, l1_ar8},
- {ST_L1_F3_PDOWN, EV_TIMER3, l1_timer3},
-
- {ST_L1_F3_PEND_DEACT, EV_PH_RES, l1_di},
- {ST_L1_F3_PEND_DEACT, EV_PH_EI, l1_di},
- {ST_L1_F3_PEND_DEACT, EV_PH_DC, l1_go_f3pdown},
- {ST_L1_F3_PEND_DEACT, EV_PH_RSY, l1_go_f5},
- {ST_L1_F3_PEND_DEACT, EV_PH_AR, l1_go_f6},
- {ST_L1_F3_PEND_DEACT, EV_PH_AI8, l1_go_f7_act_ind},
-
- {ST_L1_F4, EV_PH_RES, l1_di},
- {ST_L1_F4, EV_PH_EI, l1_di},
- {ST_L1_F4, EV_PH_RSY, l1_go_f5},
- {ST_L1_F4, EV_PH_AI8, l1_go_f7_act_ind},
- {ST_L1_F4, EV_TIMER3, l1_timer3},
- {ST_L1_F4, EV_PH_DC, l1_go_f3pdown},
-
- {ST_L1_F5, EV_PH_RES, l1_di},
- {ST_L1_F5, EV_PH_EI, l1_di},
- {ST_L1_F5, EV_PH_AR, l1_go_f6},
- {ST_L1_F5, EV_PH_AI8, l1_go_f7_act_ind},
- {ST_L1_F5, EV_TIMER3, l1_timer3},
- {ST_L1_F5, EV_PH_DR, l1_go_f3pend},
- {ST_L1_F5, EV_PH_DC, l1_go_f3pdown},
-
- {ST_L1_F6, EV_PH_RES, l1_di},
- {ST_L1_F6, EV_PH_EI, l1_di},
- {ST_L1_F6, EV_PH_RSY, l1_go_f8},
- {ST_L1_F6, EV_PH_AI8, l1_go_f7_act_ind},
- {ST_L1_F6, EV_PH_DR6, l1_go_f3pend},
- {ST_L1_F6, EV_TIMER3, l1_timer3},
- {ST_L1_F6, EV_PH_DC, l1_go_f3pdown},
-
- {ST_L1_F7, EV_PH_RES, l1_di_deact_ind},
- {ST_L1_F7, EV_PH_EI, l1_di_deact_ind},
- {ST_L1_F7, EV_PH_AR, l1_go_f6_deact_ind},
- {ST_L1_F7, EV_PH_RSY, l1_go_f8_deact_ind},
- {ST_L1_F7, EV_PH_DR, l1_go_f3pend_deact_ind},
-
- {ST_L1_F8, EV_PH_RES, l1_di},
- {ST_L1_F8, EV_PH_EI, l1_di},
- {ST_L1_F8, EV_PH_AR, l1_go_f6},
- {ST_L1_F8, EV_PH_DR, l1_go_f3pend},
- {ST_L1_F8, EV_PH_AI8, l1_go_f7_act_ind},
- {ST_L1_F8, EV_TIMER3, l1_timer3},
- {ST_L1_F8, EV_PH_DC, l1_go_f3pdown},
-};
-
-static void l1m_debug(struct FsmInst *fi, char *fmt, ...)
-{
- va_list args;
- char buf[256];
-
- va_start(args, fmt);
- vsnprintf(buf, sizeof(buf), fmt, args);
- DBG(DBG_L1M, "%s", buf);
- va_end(args);
-}
-
-static void isac_version(struct isac *cs)
-{
- int val;
-
- val = cs->read_isac(cs, ISAC_RBCH);
- DBG(1, "ISAC version (%x): %s", val, ISACVer[(val >> 5) & 3]);
-}
-
-static void isac_empty_fifo(struct isac *isac, int count)
-{
- // this also works for isacsx, since
- // CMDR(D) register works the same
- u_char *ptr;
-
- DBG(DBG_IRQ, "count %d", count);
-
- if ((isac->rcvidx + count) >= MAX_DFRAME_LEN_L1) {
- DBG(DBG_WARN, "overrun %d", isac->rcvidx + count);
- isac->write_isac(isac, ISAC_CMDR, ISAC_CMDR_RMC);
- isac->rcvidx = 0;
- return;
- }
- ptr = isac->rcvbuf + isac->rcvidx;
- isac->rcvidx += count;
- isac->read_isac_fifo(isac, ptr, count);
- isac->write_isac(isac, ISAC_CMDR, ISAC_CMDR_RMC);
- DBG_PACKET(DBG_RFIFO, ptr, count);
-}
-
-static void isac_fill_fifo(struct isac *isac)
-{
- // this also works for isacsx, since
- // CMDR(D) register works the same
-
- int count;
- unsigned char cmd;
- u_char *ptr;
-
- BUG_ON(!isac->tx_skb);
-
- count = isac->tx_skb->len;
- BUG_ON(count <= 0);
-
- DBG(DBG_IRQ, "count %d", count);
-
- if (count > 0x20) {
- count = 0x20;
- cmd = ISAC_CMDR_XTF;
- } else {
- cmd = ISAC_CMDR_XTF | ISAC_CMDR_XME;
- }
-
- ptr = isac->tx_skb->data;
- skb_pull(isac->tx_skb, count);
- isac->tx_cnt += count;
- DBG_PACKET(DBG_XFIFO, ptr, count);
- isac->write_isac_fifo(isac, ptr, count);
- isac->write_isac(isac, ISAC_CMDR, cmd);
-}
-
-static void isac_retransmit(struct isac *isac)
-{
- if (!isac->tx_skb) {
- DBG(DBG_WARN, "no skb");
- return;
- }
- skb_push(isac->tx_skb, isac->tx_cnt);
- isac->tx_cnt = 0;
-}
-
-
-static inline void isac_cisq_interrupt(struct isac *isac)
-{
- unsigned char val;
-
- val = isac->read_isac(isac, ISAC_CIR0);
- DBG(DBG_IRQ, "CIR0 %#x", val);
- if (val & ISAC_CIR0_CIC0) {
- DBG(DBG_IRQ, "CODR0 %#x", (val >> 2) & 0xf);
- FsmEvent(&isac->l1m, (val >> 2) & 0xf, NULL);
- }
- if (val & ISAC_CIR0_CIC1) {
- val = isac->read_isac(isac, ISAC_CIR1);
- DBG(DBG_WARN, "ISAC CIR1 %#x", val);
- }
-}
-
-static inline void isac_rme_interrupt(struct isac *isac)
-{
- unsigned char val;
- int count;
- struct sk_buff *skb;
-
- val = isac->read_isac(isac, ISAC_RSTA);
- if ((val & (ISAC_RSTA_RDO | ISAC_RSTA_CRC | ISAC_RSTA_RAB))
- != ISAC_RSTA_CRC) {
- DBG(DBG_WARN, "RSTA %#x, dropped", val);
- isac->write_isac(isac, ISAC_CMDR, ISAC_CMDR_RMC);
- goto out;
- }
-
- count = isac->read_isac(isac, ISAC_RBCL) & 0x1f;
- DBG(DBG_IRQ, "RBCL %#x", count);
- if (count == 0)
- count = 0x20;
-
- isac_empty_fifo(isac, count);
- count = isac->rcvidx;
- if (count < 1) {
- DBG(DBG_WARN, "count %d < 1", count);
- goto out;
- }
-
- skb = alloc_skb(count, GFP_ATOMIC);
- if (!skb) {
- DBG(DBG_WARN, "no memory, dropping\n");
- goto out;
- }
- skb_put_data(skb, isac->rcvbuf, count);
- DBG_SKB(DBG_RPACKET, skb);
- D_L1L2(isac, PH_DATA | INDICATION, skb);
-out:
- isac->rcvidx = 0;
-}
-
-static inline void isac_xpr_interrupt(struct isac *isac)
-{
- if (!isac->tx_skb)
- return;
-
- if (isac->tx_skb->len > 0) {
- isac_fill_fifo(isac);
- return;
- }
- dev_kfree_skb_irq(isac->tx_skb);
- isac->tx_cnt = 0;
- isac->tx_skb = NULL;
- D_L1L2(isac, PH_DATA | CONFIRM, NULL);
-}
-
-static inline void isac_exi_interrupt(struct isac *isac)
-{
- unsigned char val;
-
- val = isac->read_isac(isac, ISAC_EXIR);
- DBG(2, "EXIR %#x", val);
-
- if (val & ISAC_EXIR_XMR) {
- DBG(DBG_WARN, "ISAC XMR");
- isac_retransmit(isac);
- }
- if (val & ISAC_EXIR_XDU) {
- DBG(DBG_WARN, "ISAC XDU");
- isac_retransmit(isac);
- }
- if (val & ISAC_EXIR_MOS) { /* MOS */
- DBG(DBG_WARN, "MOS");
- val = isac->read_isac(isac, ISAC_MOSR);
- DBG(2, "ISAC MOSR %#x", val);
- }
-}
-
-void isac_irq(struct isac *isac)
-{
- unsigned char val;
-
- val = isac->read_isac(isac, ISAC_ISTA);
- DBG(DBG_IRQ, "ISTA %#x", val);
-
- if (val & ISAC_ISTA_EXI) {
- DBG(DBG_IRQ, "EXI");
- isac_exi_interrupt(isac);
- }
- if (val & ISAC_ISTA_XPR) {
- DBG(DBG_IRQ, "XPR");
- isac_xpr_interrupt(isac);
- }
- if (val & ISAC_ISTA_RME) {
- DBG(DBG_IRQ, "RME");
- isac_rme_interrupt(isac);
- }
- if (val & ISAC_ISTA_RPF) {
- DBG(DBG_IRQ, "RPF");
- isac_empty_fifo(isac, 0x20);
- }
- if (val & ISAC_ISTA_CISQ) {
- DBG(DBG_IRQ, "CISQ");
- isac_cisq_interrupt(isac);
- }
- if (val & ISAC_ISTA_RSC) {
- DBG(DBG_WARN, "RSC");
- }
- if (val & ISAC_ISTA_SIN) {
- DBG(DBG_WARN, "SIN");
- }
- isac->write_isac(isac, ISAC_MASK, 0xff);
- isac->write_isac(isac, ISAC_MASK, 0x00);
-}
-
-// ======================================================================
-
-static inline void isacsx_cic_interrupt(struct isac *isac)
-{
- unsigned char val;
-
- val = isac->read_isac(isac, ISACSX_CIR0);
- DBG(DBG_IRQ, "CIR0 %#x", val);
- if (val & ISACSX_CIR0_CIC0) {
- DBG(DBG_IRQ, "CODR0 %#x", val >> 4);
- FsmEvent(&isac->l1m, val >> 4, NULL);
- }
-}
-
-static inline void isacsx_rme_interrupt(struct isac *isac)
-{
- int count;
- struct sk_buff *skb;
- unsigned char val;
-
- val = isac->read_isac(isac, ISACSX_RSTAD);
- if ((val & (ISACSX_RSTAD_VFR |
- ISACSX_RSTAD_RDO |
- ISACSX_RSTAD_CRC |
- ISACSX_RSTAD_RAB))
- != (ISACSX_RSTAD_VFR | ISACSX_RSTAD_CRC)) {
- DBG(DBG_WARN, "RSTAD %#x, dropped", val);
- isac->write_isac(isac, ISACSX_CMDRD, ISACSX_CMDRD_RMC);
- goto out;
- }
-
- count = isac->read_isac(isac, ISACSX_RBCLD) & 0x1f;
- DBG(DBG_IRQ, "RBCLD %#x", count);
- if (count == 0)
- count = 0x20;
-
- isac_empty_fifo(isac, count);
- // strip trailing status byte
- count = isac->rcvidx - 1;
- if (count < 1) {
- DBG(DBG_WARN, "count %d < 1", count);
- goto out;
- }
-
- skb = dev_alloc_skb(count);
- if (!skb) {
- DBG(DBG_WARN, "no memory, dropping");
- goto out;
- }
- skb_put_data(skb, isac->rcvbuf, count);
- DBG_SKB(DBG_RPACKET, skb);
- D_L1L2(isac, PH_DATA | INDICATION, skb);
-out:
- isac->rcvidx = 0;
-}
-
-static inline void isacsx_xpr_interrupt(struct isac *isac)
-{
- if (!isac->tx_skb)
- return;
-
- if (isac->tx_skb->len > 0) {
- isac_fill_fifo(isac);
- return;
- }
- dev_kfree_skb_irq(isac->tx_skb);
- isac->tx_skb = NULL;
- isac->tx_cnt = 0;
- D_L1L2(isac, PH_DATA | CONFIRM, NULL);
-}
-
-static inline void isacsx_icd_interrupt(struct isac *isac)
-{
- unsigned char val;
-
- val = isac->read_isac(isac, ISACSX_ISTAD);
- DBG(DBG_IRQ, "ISTAD %#x", val);
- if (val & ISACSX_ISTAD_XDU) {
- DBG(DBG_WARN, "ISTAD XDU");
- isac_retransmit(isac);
- }
- if (val & ISACSX_ISTAD_XMR) {
- DBG(DBG_WARN, "ISTAD XMR");
- isac_retransmit(isac);
- }
- if (val & ISACSX_ISTAD_XPR) {
- DBG(DBG_IRQ, "ISTAD XPR");
- isacsx_xpr_interrupt(isac);
- }
- if (val & ISACSX_ISTAD_RFO) {
- DBG(DBG_WARN, "ISTAD RFO");
- isac->write_isac(isac, ISACSX_CMDRD, ISACSX_CMDRD_RMC);
- }
- if (val & ISACSX_ISTAD_RME) {
- DBG(DBG_IRQ, "ISTAD RME");
- isacsx_rme_interrupt(isac);
- }
- if (val & ISACSX_ISTAD_RPF) {
- DBG(DBG_IRQ, "ISTAD RPF");
- isac_empty_fifo(isac, 0x20);
- }
-}
-
-void isacsx_irq(struct isac *isac)
-{
- unsigned char val;
-
- val = isac->read_isac(isac, ISACSX_ISTA);
- DBG(DBG_IRQ, "ISTA %#x", val);
-
- if (val & ISACSX_ISTA_ICD)
- isacsx_icd_interrupt(isac);
- if (val & ISACSX_ISTA_CIC)
- isacsx_cic_interrupt(isac);
-}
-
-void isac_init(struct isac *isac)
-{
- isac->tx_skb = NULL;
- isac->l1m.fsm = &l1fsm;
- isac->l1m.state = ST_L1_RESET;
-#ifdef CONFIG_HISAX_DEBUG
- isac->l1m.debug = 1;
-#else
- isac->l1m.debug = 0;
-#endif
- isac->l1m.userdata = isac;
- isac->l1m.printdebug = l1m_debug;
- FsmInitTimer(&isac->l1m, &isac->timer);
-}
-
-void isac_setup(struct isac *isac)
-{
- int val, eval;
-
- isac->type = TYPE_ISAC;
- isac_version(isac);
-
- ph_command(isac, ISAC_CMD_RES);
-
- isac->write_isac(isac, ISAC_MASK, 0xff);
- isac->mocr = 0xaa;
- if (test_bit(ISAC_IOM1, &isac->flags)) {
- /* IOM 1 Mode */
- isac->write_isac(isac, ISAC_ADF2, 0x0);
- isac->write_isac(isac, ISAC_SPCR, 0xa);
- isac->write_isac(isac, ISAC_ADF1, 0x2);
- isac->write_isac(isac, ISAC_STCR, 0x70);
- isac->write_isac(isac, ISAC_MODE, 0xc9);
- } else {
- /* IOM 2 Mode */
- if (!isac->adf2)
- isac->adf2 = 0x80;
- isac->write_isac(isac, ISAC_ADF2, isac->adf2);
- isac->write_isac(isac, ISAC_SQXR, 0x2f);
- isac->write_isac(isac, ISAC_SPCR, 0x00);
- isac->write_isac(isac, ISAC_STCR, 0x70);
- isac->write_isac(isac, ISAC_MODE, 0xc9);
- isac->write_isac(isac, ISAC_TIMR, 0x00);
- isac->write_isac(isac, ISAC_ADF1, 0x00);
- }
- val = isac->read_isac(isac, ISAC_STAR);
- DBG(2, "ISAC STAR %x", val);
- val = isac->read_isac(isac, ISAC_MODE);
- DBG(2, "ISAC MODE %x", val);
- val = isac->read_isac(isac, ISAC_ADF2);
- DBG(2, "ISAC ADF2 %x", val);
- val = isac->read_isac(isac, ISAC_ISTA);
- DBG(2, "ISAC ISTA %x", val);
- if (val & 0x01) {
- eval = isac->read_isac(isac, ISAC_EXIR);
- DBG(2, "ISAC EXIR %x", eval);
- }
- val = isac->read_isac(isac, ISAC_CIR0);
- DBG(2, "ISAC CIR0 %x", val);
- FsmEvent(&isac->l1m, (val >> 2) & 0xf, NULL);
-
- isac->write_isac(isac, ISAC_MASK, 0x0);
- // RESET Receiver and Transmitter
- isac->write_isac(isac, ISAC_CMDR, ISAC_CMDR_XRES | ISAC_CMDR_RRES);
-}
-
-void isacsx_setup(struct isac *isac)
-{
- isac->type = TYPE_ISACSX;
- // clear LDD
- isac->write_isac(isac, ISACSX_TR_CONF0, 0x00);
- // enable transmitter
- isac->write_isac(isac, ISACSX_TR_CONF2, 0x00);
- // transparent mode 0, RAC, stop/go
- isac->write_isac(isac, ISACSX_MODED, 0xc9);
- // all HDLC IRQ unmasked
- isac->write_isac(isac, ISACSX_MASKD, 0x03);
- // unmask ICD, CID IRQs
- isac->write_isac(isac, ISACSX_MASK,
- ~(ISACSX_ISTA_ICD | ISACSX_ISTA_CIC));
-}
-
-void isac_d_l2l1(struct hisax_if *hisax_d_if, int pr, void *arg)
-{
- struct isac *isac = hisax_d_if->priv;
- struct sk_buff *skb = arg;
-
- DBG(DBG_PR, "pr %#x", pr);
-
- switch (pr) {
- case PH_ACTIVATE | REQUEST:
- FsmEvent(&isac->l1m, EV_PH_ACTIVATE_REQ, NULL);
- break;
- case PH_DEACTIVATE | REQUEST:
- FsmEvent(&isac->l1m, EV_PH_DEACTIVATE_REQ, NULL);
- break;
- case PH_DATA | REQUEST:
- DBG(DBG_PR, "PH_DATA REQUEST len %d", skb->len);
- DBG_SKB(DBG_XPACKET, skb);
- if (isac->l1m.state != ST_L1_F7) {
- DBG(1, "L1 wrong state %d\n", isac->l1m.state);
- dev_kfree_skb(skb);
- break;
- }
- BUG_ON(isac->tx_skb);
-
- isac->tx_skb = skb;
- isac_fill_fifo(isac);
- break;
- }
-}
-
-static int __init hisax_isac_init(void)
-{
- printk(KERN_INFO "hisax_isac: ISAC-S/ISAC-SX ISDN driver v0.1.0\n");
-
- l1fsm.state_count = L1_STATE_COUNT;
- l1fsm.event_count = L1_EVENT_COUNT;
- l1fsm.strState = strL1State;
- l1fsm.strEvent = strL1Event;
- return FsmNew(&l1fsm, L1FnList, ARRAY_SIZE(L1FnList));
-}
-
-static void __exit hisax_isac_exit(void)
-{
- FsmFree(&l1fsm);
-}
-
-EXPORT_SYMBOL(isac_init);
-EXPORT_SYMBOL(isac_d_l2l1);
-
-EXPORT_SYMBOL(isacsx_setup);
-EXPORT_SYMBOL(isacsx_irq);
-
-EXPORT_SYMBOL(isac_setup);
-EXPORT_SYMBOL(isac_irq);
-
-module_init(hisax_isac_init);
-module_exit(hisax_isac_exit);
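
[Editor's note, not part of the patch: the removed isac_fill_fifo()/isac_xpr_interrupt() pair above implements the standard Siemens ISAC transmit pattern: the D-channel FIFO holds 0x20 bytes, so a frame is pushed in 32-byte chunks with the XTF command, the final chunk adds XME, and each XPR interrupt refills the FIFO until the skb is empty. Below is a minimal user-space sketch of that chunking logic; chip_write_fifo()/chip_write_cmdr() are hypothetical stand-ins for write_isac_fifo()/write_isac(), and the numeric command bits are illustrative placeholders for the driver's ISAC_CMDR_* constants.]

#include <stdio.h>
#include <string.h>

#define ISAC_FIFO_SIZE 0x20   /* the D-channel FIFO holds 32 bytes */
#define CMD_XTF 0x08          /* illustrative bit: transmit frame */
#define CMD_XME 0x02          /* illustrative bit: transmit message end */

struct tx_state {
	const unsigned char *data;   /* unsent part of the frame */
	size_t len;
};

/* Hypothetical chip accessors; the driver goes through write_isac_fifo()
 * and write_isac() here. */
static void chip_write_fifo(const unsigned char *buf, size_t n)
{
	(void)buf;
	printf("fifo <- %zu bytes\n", n);
}

static void chip_write_cmdr(unsigned char cmd)
{
	printf("CMDR <- %#x\n", (unsigned)cmd);
}

/* Push one chunk, mirroring the removed isac_fill_fifo(). */
static void fill_fifo(struct tx_state *tx)
{
	size_t count = tx->len;
	unsigned char cmd = CMD_XTF | CMD_XME;   /* assume last chunk */

	if (count > ISAC_FIFO_SIZE) {
		count = ISAC_FIFO_SIZE;
		cmd = CMD_XTF;                   /* more data follows */
	}
	chip_write_fifo(tx->data, count);
	chip_write_cmdr(cmd);
	tx->data += count;
	tx->len -= count;
}

int main(void)
{
	unsigned char frame[70];
	struct tx_state tx = { frame, sizeof(frame) };

	memset(frame, 0xaa, sizeof(frame));
	fill_fifo(&tx);          /* initial fill, as on PH_DATA | REQUEST */
	while (tx.len)           /* each XPR interrupt refills */
		fill_fifo(&tx);
	return 0;
}
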
diff --git a/drivers/isdn/hisax/hisax_isac.h b/drivers/isdn/hisax/hisax_isac.h
deleted file mode 100644
index d7301da97991..000000000000
--- a/drivers/isdn/hisax/hisax_isac.h
+++ /dev/null
@@ -1,46 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef __HISAX_ISAC_H__
-#define __HISAX_ISAC_H__
-
-#include <linux/kernel.h>
-#include "fsm.h"
-#include "hisax_if.h"
-
-#define TIMER3_VALUE 7000
-#define MAX_DFRAME_LEN_L1 300
-
-#define ISAC_IOM1 0
-
-struct isac {
- void *priv;
-
- u_long flags;
- struct hisax_d_if hisax_d_if;
- struct FsmInst l1m;
- struct FsmTimer timer;
- u_char mocr;
- u_char adf2;
- int type;
-
- u_char rcvbuf[MAX_DFRAME_LEN_L1];
- int rcvidx;
-
- struct sk_buff *tx_skb;
- int tx_cnt;
-
- u_char (*read_isac) (struct isac *, u_char);
- void (*write_isac) (struct isac *, u_char, u_char);
- void (*read_isac_fifo) (struct isac *, u_char *, int);
- void (*write_isac_fifo)(struct isac *, u_char *, int);
-};
-
-void isac_init(struct isac *isac);
-void isac_d_l2l1(struct hisax_if *hisax_d_if, int pr, void *arg);
-
-void isac_setup(struct isac *isac);
-void isac_irq(struct isac *isac);
-
-void isacsx_setup(struct isac *isac);
-void isacsx_irq(struct isac *isac);
-
-#endif
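
[Editor's note, not part of the patch: struct isac above keeps the shared ISAC/ISAC-SX code bus-agnostic; every register and FIFO access goes through the four callbacks at the end of the struct, which the individual card drivers fill in with their own I/O routines. A minimal sketch of that binding for a hypothetical memory-mapped card follows; only the two byte-wide accessors are shown, and the register offset used in main() is arbitrary demo data.]

#include <stdint.h>
#include <stdio.h>

/* Trimmed-down mirror of the callback part of struct isac above. */
struct isac_ops {
	void *priv;
	uint8_t (*read_isac)(struct isac_ops *, uint8_t off);
	void (*write_isac)(struct isac_ops *, uint8_t off, uint8_t val);
};

/* Hypothetical memory-mapped card: registers live behind a base pointer. */
struct mmio_card {
	volatile uint8_t *base;
};

static uint8_t mmio_read(struct isac_ops *ops, uint8_t off)
{
	struct mmio_card *card = ops->priv;
	return card->base[off];
}

static void mmio_write(struct isac_ops *ops, uint8_t off, uint8_t val)
{
	struct mmio_card *card = ops->priv;
	card->base[off] = val;
}

int main(void)
{
	static uint8_t fake_regs[0x40];       /* stand-in for device memory */
	struct mmio_card card = { fake_regs };
	struct isac_ops ops = { &card, mmio_read, mmio_write };

	ops.write_isac(&ops, 0x20, 0xff);     /* demo offset, e.g. mask all IRQs */
	printf("reg 0x20 = %#x\n", (unsigned)ops.read_isac(&ops, 0x20));
	return 0;
}
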
diff --git a/drivers/isdn/hisax/hscx.c b/drivers/isdn/hisax/hscx.c
deleted file mode 100644
index 3e305fec0ed9..000000000000
--- a/drivers/isdn/hisax/hscx.c
+++ /dev/null
@@ -1,277 +0,0 @@
-/* $Id: hscx.c,v 1.24.2.4 2004/01/24 20:47:23 keil Exp $
- *
- * HSCX specific routines
- *
- * Author Karsten Keil
- * Copyright by Karsten Keil <keil@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#include <linux/init.h>
-#include "hisax.h"
-#include "hscx.h"
-#include "isac.h"
-#include "isdnl1.h"
-#include <linux/interrupt.h>
-#include <linux/slab.h>
-
-static char *HSCXVer[] =
-{"A1", "?1", "A2", "?3", "A3", "V2.1", "?6", "?7",
- "?8", "?9", "?10", "?11", "?12", "?13", "?14", "???"};
-
-int
-HscxVersion(struct IsdnCardState *cs, char *s)
-{
- int verA, verB;
-
- verA = cs->BC_Read_Reg(cs, 0, HSCX_VSTR) & 0xf;
- verB = cs->BC_Read_Reg(cs, 1, HSCX_VSTR) & 0xf;
- printk(KERN_INFO "%s HSCX version A: %s B: %s\n", s,
- HSCXVer[verA], HSCXVer[verB]);
- if ((verA == 0) || (verA == 0xf) || (verB == 0) || (verB == 0xf))
- return (1);
- else
- return (0);
-}
-
-void
-modehscx(struct BCState *bcs, int mode, int bc)
-{
- struct IsdnCardState *cs = bcs->cs;
- int hscx = bcs->hw.hscx.hscx;
-
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "hscx %c mode %d ichan %d",
- 'A' + hscx, mode, bc);
- bcs->mode = mode;
- bcs->channel = bc;
- cs->BC_Write_Reg(cs, hscx, HSCX_XAD1, 0xFF);
- cs->BC_Write_Reg(cs, hscx, HSCX_XAD2, 0xFF);
- cs->BC_Write_Reg(cs, hscx, HSCX_RAH2, 0xFF);
- cs->BC_Write_Reg(cs, hscx, HSCX_XBCH, 0x0);
- cs->BC_Write_Reg(cs, hscx, HSCX_RLCR, 0x0);
- cs->BC_Write_Reg(cs, hscx, HSCX_CCR1,
- test_bit(HW_IPAC, &cs->HW_Flags) ? 0x82 : 0x85);
- cs->BC_Write_Reg(cs, hscx, HSCX_CCR2, 0x30);
- cs->BC_Write_Reg(cs, hscx, HSCX_XCCR, 7);
- cs->BC_Write_Reg(cs, hscx, HSCX_RCCR, 7);
-
- /* Switch IOM 1 SSI */
- if (test_bit(HW_IOM1, &cs->HW_Flags) && (hscx == 0))
- bc = 1 - bc;
-
- if (bc == 0) {
- cs->BC_Write_Reg(cs, hscx, HSCX_TSAX,
- test_bit(HW_IOM1, &cs->HW_Flags) ? 0x7 : bcs->hw.hscx.tsaxr0);
- cs->BC_Write_Reg(cs, hscx, HSCX_TSAR,
- test_bit(HW_IOM1, &cs->HW_Flags) ? 0x7 : bcs->hw.hscx.tsaxr0);
- } else {
- cs->BC_Write_Reg(cs, hscx, HSCX_TSAX, bcs->hw.hscx.tsaxr1);
- cs->BC_Write_Reg(cs, hscx, HSCX_TSAR, bcs->hw.hscx.tsaxr1);
- }
- switch (mode) {
- case (L1_MODE_NULL):
- cs->BC_Write_Reg(cs, hscx, HSCX_TSAX, 0x1f);
- cs->BC_Write_Reg(cs, hscx, HSCX_TSAR, 0x1f);
- cs->BC_Write_Reg(cs, hscx, HSCX_MODE, 0x84);
- break;
- case (L1_MODE_TRANS):
- cs->BC_Write_Reg(cs, hscx, HSCX_MODE, 0xe4);
- break;
- case (L1_MODE_HDLC):
- cs->BC_Write_Reg(cs, hscx, HSCX_CCR1,
- test_bit(HW_IPAC, &cs->HW_Flags) ? 0x8a : 0x8d);
- cs->BC_Write_Reg(cs, hscx, HSCX_MODE, 0x8c);
- break;
- }
- if (mode)
- cs->BC_Write_Reg(cs, hscx, HSCX_CMDR, 0x41);
- cs->BC_Write_Reg(cs, hscx, HSCX_ISTA, 0x00);
-}
-
-void
-hscx_l2l1(struct PStack *st, int pr, void *arg)
-{
- struct BCState *bcs = st->l1.bcs;
- u_long flags;
- struct sk_buff *skb = arg;
-
- switch (pr) {
- case (PH_DATA | REQUEST):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- if (bcs->tx_skb) {
- skb_queue_tail(&bcs->squeue, skb);
- } else {
- bcs->tx_skb = skb;
- test_and_set_bit(BC_FLG_BUSY, &bcs->Flag);
- bcs->hw.hscx.count = 0;
- bcs->cs->BC_Send_Data(bcs);
- }
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- break;
- case (PH_PULL | INDICATION):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- if (bcs->tx_skb) {
- printk(KERN_WARNING "hscx_l2l1: this shouldn't happen\n");
- } else {
- test_and_set_bit(BC_FLG_BUSY, &bcs->Flag);
- bcs->tx_skb = skb;
- bcs->hw.hscx.count = 0;
- bcs->cs->BC_Send_Data(bcs);
- }
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- break;
- case (PH_PULL | REQUEST):
- if (!bcs->tx_skb) {
- test_and_clear_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- st->l1.l1l2(st, PH_PULL | CONFIRM, NULL);
- } else
- test_and_set_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- break;
- case (PH_ACTIVATE | REQUEST):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- test_and_set_bit(BC_FLG_ACTIV, &bcs->Flag);
- modehscx(bcs, st->l1.mode, st->l1.bc);
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- l1_msg_b(st, pr, arg);
- break;
- case (PH_DEACTIVATE | REQUEST):
- l1_msg_b(st, pr, arg);
- break;
- case (PH_DEACTIVATE | CONFIRM):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- test_and_clear_bit(BC_FLG_ACTIV, &bcs->Flag);
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- modehscx(bcs, 0, st->l1.bc);
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- st->l1.l1l2(st, PH_DEACTIVATE | CONFIRM, NULL);
- break;
- }
-}
-
-static void
-close_hscxstate(struct BCState *bcs)
-{
- modehscx(bcs, 0, bcs->channel);
- if (test_and_clear_bit(BC_FLG_INIT, &bcs->Flag)) {
- kfree(bcs->hw.hscx.rcvbuf);
- bcs->hw.hscx.rcvbuf = NULL;
- kfree(bcs->blog);
- bcs->blog = NULL;
- skb_queue_purge(&bcs->rqueue);
- skb_queue_purge(&bcs->squeue);
- if (bcs->tx_skb) {
- dev_kfree_skb_any(bcs->tx_skb);
- bcs->tx_skb = NULL;
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- }
- }
-}
-
-int
-open_hscxstate(struct IsdnCardState *cs, struct BCState *bcs)
-{
- if (!test_and_set_bit(BC_FLG_INIT, &bcs->Flag)) {
- if (!(bcs->hw.hscx.rcvbuf = kmalloc(HSCX_BUFMAX, GFP_ATOMIC))) {
- printk(KERN_WARNING
- "HiSax: No memory for hscx.rcvbuf\n");
- test_and_clear_bit(BC_FLG_INIT, &bcs->Flag);
- return (1);
- }
- if (!(bcs->blog = kmalloc(MAX_BLOG_SPACE, GFP_ATOMIC))) {
- printk(KERN_WARNING
- "HiSax: No memory for bcs->blog\n");
- test_and_clear_bit(BC_FLG_INIT, &bcs->Flag);
- kfree(bcs->hw.hscx.rcvbuf);
- bcs->hw.hscx.rcvbuf = NULL;
- return (2);
- }
- skb_queue_head_init(&bcs->rqueue);
- skb_queue_head_init(&bcs->squeue);
- }
- bcs->tx_skb = NULL;
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- bcs->event = 0;
- bcs->hw.hscx.rcvidx = 0;
- bcs->tx_cnt = 0;
- return (0);
-}
-
-static int
-setstack_hscx(struct PStack *st, struct BCState *bcs)
-{
- bcs->channel = st->l1.bc;
- if (open_hscxstate(st->l1.hardware, bcs))
- return (-1);
- st->l1.bcs = bcs;
- st->l2.l2l1 = hscx_l2l1;
- setstack_manager(st);
- bcs->st = st;
- setstack_l1_B(st);
- return (0);
-}
-
-void
-clear_pending_hscx_ints(struct IsdnCardState *cs)
-{
- int val, eval;
-
- val = cs->BC_Read_Reg(cs, 1, HSCX_ISTA);
- debugl1(cs, "HSCX B ISTA %x", val);
- if (val & 0x01) {
- eval = cs->BC_Read_Reg(cs, 1, HSCX_EXIR);
- debugl1(cs, "HSCX B EXIR %x", eval);
- }
- if (val & 0x02) {
- eval = cs->BC_Read_Reg(cs, 0, HSCX_EXIR);
- debugl1(cs, "HSCX A EXIR %x", eval);
- }
- val = cs->BC_Read_Reg(cs, 0, HSCX_ISTA);
- debugl1(cs, "HSCX A ISTA %x", val);
- val = cs->BC_Read_Reg(cs, 1, HSCX_STAR);
- debugl1(cs, "HSCX B STAR %x", val);
- val = cs->BC_Read_Reg(cs, 0, HSCX_STAR);
- debugl1(cs, "HSCX A STAR %x", val);
- /* disable all IRQ */
- cs->BC_Write_Reg(cs, 0, HSCX_MASK, 0xFF);
- cs->BC_Write_Reg(cs, 1, HSCX_MASK, 0xFF);
-}
-
-void
-inithscx(struct IsdnCardState *cs)
-{
- cs->bcs[0].BC_SetStack = setstack_hscx;
- cs->bcs[1].BC_SetStack = setstack_hscx;
- cs->bcs[0].BC_Close = close_hscxstate;
- cs->bcs[1].BC_Close = close_hscxstate;
- cs->bcs[0].hw.hscx.hscx = 0;
- cs->bcs[1].hw.hscx.hscx = 1;
- cs->bcs[0].hw.hscx.tsaxr0 = 0x2f;
- cs->bcs[0].hw.hscx.tsaxr1 = 3;
- cs->bcs[1].hw.hscx.tsaxr0 = 0x2f;
- cs->bcs[1].hw.hscx.tsaxr1 = 3;
- modehscx(cs->bcs, 0, 0);
- modehscx(cs->bcs + 1, 0, 0);
-}
-
-void
-inithscxisac(struct IsdnCardState *cs, int part)
-{
- if (part & 1) {
- clear_pending_isac_ints(cs);
- clear_pending_hscx_ints(cs);
- initisac(cs);
- inithscx(cs);
- }
- if (part & 2) {
- /* Reenable all IRQ */
- cs->writeisac(cs, ISAC_MASK, 0);
- cs->BC_Write_Reg(cs, 0, HSCX_MASK, 0);
- cs->BC_Write_Reg(cs, 1, HSCX_MASK, 0);
- /* RESET Receiver and Transmitter */
- cs->writeisac(cs, ISAC_CMDR, 0x41);
- }
-}
diff --git a/drivers/isdn/hisax/hscx.h b/drivers/isdn/hisax/hscx.h
deleted file mode 100644
index 1148b4bbe711..000000000000
--- a/drivers/isdn/hisax/hscx.h
+++ /dev/null
@@ -1,41 +0,0 @@
-/* $Id: hscx.h,v 1.8.2.2 2004/01/12 22:52:26 keil Exp $
- *
- * HSCX specific defines
- *
- * Author Karsten Keil
- * Copyright by Karsten Keil <keil@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-/* All Registers original Siemens Spec */
-
-#define HSCX_ISTA 0x20
-#define HSCX_CCR1 0x2f
-#define HSCX_CCR2 0x2c
-#define HSCX_TSAR 0x31
-#define HSCX_TSAX 0x30
-#define HSCX_XCCR 0x32
-#define HSCX_RCCR 0x33
-#define HSCX_MODE 0x22
-#define HSCX_CMDR 0x21
-#define HSCX_EXIR 0x24
-#define HSCX_XAD1 0x24
-#define HSCX_XAD2 0x25
-#define HSCX_RAH2 0x27
-#define HSCX_RSTA 0x27
-#define HSCX_TIMR 0x23
-#define HSCX_STAR 0x21
-#define HSCX_RBCL 0x25
-#define HSCX_XBCH 0x2d
-#define HSCX_VSTR 0x2e
-#define HSCX_RLCR 0x2e
-#define HSCX_MASK 0x20
-
-extern int HscxVersion(struct IsdnCardState *cs, char *s);
-extern void modehscx(struct BCState *bcs, int mode, int bc);
-extern void clear_pending_hscx_ints(struct IsdnCardState *cs);
-extern void inithscx(struct IsdnCardState *cs);
-extern void inithscxisac(struct IsdnCardState *cs, int part);
diff --git a/drivers/isdn/hisax/hscx_irq.c b/drivers/isdn/hisax/hscx_irq.c
deleted file mode 100644
index 0d7e783c8bef..000000000000
--- a/drivers/isdn/hisax/hscx_irq.c
+++ /dev/null
@@ -1,294 +0,0 @@
-/* $Id: hscx_irq.c,v 1.18.2.3 2004/02/11 13:21:34 keil Exp $
- *
- * low level b-channel stuff for Siemens HSCX
- *
- * Author Karsten Keil
- * Copyright by Karsten Keil <keil@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- * This is an include file for fast inline IRQ stuff
- *
- */
-
-
-static inline void
-waitforCEC(struct IsdnCardState *cs, int hscx)
-{
- int to = 50;
-
- while ((READHSCX(cs, hscx, HSCX_STAR) & 0x04) && to) {
- udelay(1);
- to--;
- }
- if (!to)
- printk(KERN_WARNING "HiSax: waitforCEC timeout\n");
-}
-
-
-static inline void
-waitforXFW(struct IsdnCardState *cs, int hscx)
-{
- int to = 50;
-
- while (((READHSCX(cs, hscx, HSCX_STAR) & 0x44) != 0x40) && to) {
- udelay(1);
- to--;
- }
- if (!to)
- printk(KERN_WARNING "HiSax: waitforXFW timeout\n");
-}
-
-static inline void
-WriteHSCXCMDR(struct IsdnCardState *cs, int hscx, u_char data)
-{
- waitforCEC(cs, hscx);
- WRITEHSCX(cs, hscx, HSCX_CMDR, data);
-}
-
-
-
-static void
-hscx_empty_fifo(struct BCState *bcs, int count)
-{
- u_char *ptr;
- struct IsdnCardState *cs = bcs->cs;
-
- if ((cs->debug & L1_DEB_HSCX) && !(cs->debug & L1_DEB_HSCX_FIFO))
- debugl1(cs, "hscx_empty_fifo");
-
- if (bcs->hw.hscx.rcvidx + count > HSCX_BUFMAX) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "hscx_empty_fifo: incoming packet too large");
- WriteHSCXCMDR(cs, bcs->hw.hscx.hscx, 0x80);
- bcs->hw.hscx.rcvidx = 0;
- return;
- }
- ptr = bcs->hw.hscx.rcvbuf + bcs->hw.hscx.rcvidx;
- bcs->hw.hscx.rcvidx += count;
- READHSCXFIFO(cs, bcs->hw.hscx.hscx, ptr, count);
- WriteHSCXCMDR(cs, bcs->hw.hscx.hscx, 0x80);
- if (cs->debug & L1_DEB_HSCX_FIFO) {
- char *t = bcs->blog;
-
- t += sprintf(t, "hscx_empty_fifo %c cnt %d",
- bcs->hw.hscx.hscx ? 'B' : 'A', count);
- QuickHex(t, ptr, count);
- debugl1(cs, "%s", bcs->blog);
- }
-}
-
-static void
-hscx_fill_fifo(struct BCState *bcs)
-{
- struct IsdnCardState *cs = bcs->cs;
- int more, count;
- int fifo_size = test_bit(HW_IPAC, &cs->HW_Flags) ? 64 : 32;
- u_char *ptr;
-
- if ((cs->debug & L1_DEB_HSCX) && !(cs->debug & L1_DEB_HSCX_FIFO))
- debugl1(cs, "hscx_fill_fifo");
-
- if (!bcs->tx_skb)
- return;
- if (bcs->tx_skb->len <= 0)
- return;
-
- more = (bcs->mode == L1_MODE_TRANS) ? 1 : 0;
- if (bcs->tx_skb->len > fifo_size) {
- more = !0;
- count = fifo_size;
- } else
- count = bcs->tx_skb->len;
-
- waitforXFW(cs, bcs->hw.hscx.hscx);
- ptr = bcs->tx_skb->data;
- skb_pull(bcs->tx_skb, count);
- bcs->tx_cnt -= count;
- bcs->hw.hscx.count += count;
- WRITEHSCXFIFO(cs, bcs->hw.hscx.hscx, ptr, count);
- WriteHSCXCMDR(cs, bcs->hw.hscx.hscx, more ? 0x8 : 0xa);
- if (cs->debug & L1_DEB_HSCX_FIFO) {
- char *t = bcs->blog;
-
- t += sprintf(t, "hscx_fill_fifo %c cnt %d",
- bcs->hw.hscx.hscx ? 'B' : 'A', count);
- QuickHex(t, ptr, count);
- debugl1(cs, "%s", bcs->blog);
- }
-}
-
-static void
-hscx_interrupt(struct IsdnCardState *cs, u_char val, u_char hscx)
-{
- u_char r;
- struct BCState *bcs = cs->bcs + hscx;
- struct sk_buff *skb;
- int fifo_size = test_bit(HW_IPAC, &cs->HW_Flags) ? 64 : 32;
- int count;
-
- if (!test_bit(BC_FLG_INIT, &bcs->Flag))
- return;
-
- if (val & 0x80) { /* RME */
- r = READHSCX(cs, hscx, HSCX_RSTA);
- if ((r & 0xf0) != 0xa0) {
- if (!(r & 0x80)) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "HSCX invalid frame");
-#ifdef ERROR_STATISTIC
- bcs->err_inv++;
-#endif
- }
- if ((r & 0x40) && bcs->mode) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "HSCX RDO mode=%d",
- bcs->mode);
-#ifdef ERROR_STATISTIC
- bcs->err_rdo++;
-#endif
- }
- if (!(r & 0x20)) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "HSCX CRC error");
-#ifdef ERROR_STATISTIC
- bcs->err_crc++;
-#endif
- }
- WriteHSCXCMDR(cs, hscx, 0x80);
- } else {
- count = READHSCX(cs, hscx, HSCX_RBCL) & (
- test_bit(HW_IPAC, &cs->HW_Flags) ? 0x3f : 0x1f);
- if (count == 0)
- count = fifo_size;
- hscx_empty_fifo(bcs, count);
- if ((count = bcs->hw.hscx.rcvidx - 1) > 0) {
- if (cs->debug & L1_DEB_HSCX_FIFO)
- debugl1(cs, "HX Frame %d", count);
- if (!(skb = dev_alloc_skb(count)))
- printk(KERN_WARNING "HSCX: receive out of memory\n");
- else {
- skb_put_data(skb, bcs->hw.hscx.rcvbuf,
- count);
- skb_queue_tail(&bcs->rqueue, skb);
- }
- }
- }
- bcs->hw.hscx.rcvidx = 0;
- schedule_event(bcs, B_RCVBUFREADY);
- }
- if (val & 0x40) { /* RPF */
- hscx_empty_fifo(bcs, fifo_size);
- if (bcs->mode == L1_MODE_TRANS) {
- /* receive audio data */
- if (!(skb = dev_alloc_skb(fifo_size)))
- printk(KERN_WARNING "HiSax: receive out of memory\n");
- else {
- skb_put_data(skb, bcs->hw.hscx.rcvbuf,
- fifo_size);
- skb_queue_tail(&bcs->rqueue, skb);
- }
- bcs->hw.hscx.rcvidx = 0;
- schedule_event(bcs, B_RCVBUFREADY);
- }
- }
- if (val & 0x10) { /* XPR */
- if (bcs->tx_skb) {
- if (bcs->tx_skb->len) {
- hscx_fill_fifo(bcs);
- return;
- } else {
- if (test_bit(FLG_LLI_L1WAKEUP, &bcs->st->lli.flag) &&
- (PACKET_NOACK != bcs->tx_skb->pkt_type)) {
- u_long flags;
- spin_lock_irqsave(&bcs->aclock, flags);
- bcs->ackcnt += bcs->hw.hscx.count;
- spin_unlock_irqrestore(&bcs->aclock, flags);
- schedule_event(bcs, B_ACKPENDING);
- }
- dev_kfree_skb_irq(bcs->tx_skb);
- bcs->hw.hscx.count = 0;
- bcs->tx_skb = NULL;
- }
- }
- if ((bcs->tx_skb = skb_dequeue(&bcs->squeue))) {
- bcs->hw.hscx.count = 0;
- test_and_set_bit(BC_FLG_BUSY, &bcs->Flag);
- hscx_fill_fifo(bcs);
- } else {
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- schedule_event(bcs, B_XMTBUFREADY);
- }
- }
-}
-
-static void
-hscx_int_main(struct IsdnCardState *cs, u_char val)
-{
-
- u_char exval;
- struct BCState *bcs;
-
- if (val & 0x01) {
- bcs = cs->bcs + 1;
- exval = READHSCX(cs, 1, HSCX_EXIR);
- if (exval & 0x40) {
- if (bcs->mode == 1)
- hscx_fill_fifo(bcs);
- else {
-#ifdef ERROR_STATISTIC
- bcs->err_tx++;
-#endif
- /* Here we lost a TX interrupt, so
- * restart transmitting the whole frame.
- */
- if (bcs->tx_skb) {
- skb_push(bcs->tx_skb, bcs->hw.hscx.count);
- bcs->tx_cnt += bcs->hw.hscx.count;
- bcs->hw.hscx.count = 0;
- }
- WriteHSCXCMDR(cs, bcs->hw.hscx.hscx, 0x01);
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "HSCX B EXIR %x Lost TX", exval);
- }
- } else if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "HSCX B EXIR %x", exval);
- }
- if (val & 0xf8) {
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "HSCX B interrupt %x", val);
- hscx_interrupt(cs, val, 1);
- }
- if (val & 0x02) {
- bcs = cs->bcs;
- exval = READHSCX(cs, 0, HSCX_EXIR);
- if (exval & 0x40) {
- if (bcs->mode == L1_MODE_TRANS)
- hscx_fill_fifo(bcs);
- else {
- /* Here we lost a TX interrupt, so
- * restart transmitting the whole frame.
- */
-#ifdef ERROR_STATISTIC
- bcs->err_tx++;
-#endif
- if (bcs->tx_skb) {
- skb_push(bcs->tx_skb, bcs->hw.hscx.count);
- bcs->tx_cnt += bcs->hw.hscx.count;
- bcs->hw.hscx.count = 0;
- }
- WriteHSCXCMDR(cs, bcs->hw.hscx.hscx, 0x01);
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "HSCX A EXIR %x Lost TX", exval);
- }
- } else if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "HSCX A EXIR %x", exval);
- }
- if (val & 0x04) {
- exval = READHSCX(cs, 0, HSCX_ISTA);
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "HSCX A interrupt %x", exval);
- hscx_interrupt(cs, exval, 0);
- }
-}
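
[Editor's note, not part of the patch: waitforCEC()/waitforXFW() above show the bounded busy-wait used before writing HSCX command registers: poll STAR, give up after 50 one-microsecond delays, and only warn on timeout instead of spinning forever. A small stand-alone sketch of the same pattern; read_star() and delay_us() are hypothetical stand-ins for READHSCX() and udelay().]

#include <stdbool.h>
#include <stdio.h>

#define STAR_CEC 0x04          /* same busy bit the driver tests */

/* Hypothetical register read; busy for the first couple of polls. */
static unsigned char read_star(void)
{
	static int calls;
	return (++calls < 3) ? STAR_CEC : 0;
}

/* Hypothetical microsecond delay. */
static void delay_us(unsigned int us) { (void)us; }

/* Returns true if the chip became ready, false on timeout. */
static bool wait_for_cec(void)
{
	int to = 50;                           /* same bound as the driver */

	while ((read_star() & STAR_CEC) && to) {
		delay_us(1);
		to--;
	}
	if (!to) {
		fprintf(stderr, "waitforCEC timeout\n");
		return false;
	}
	return true;
}

int main(void)
{
	if (wait_for_cec())
		printf("ready, safe to write CMDR\n");
	return 0;
}
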
diff --git a/drivers/isdn/hisax/icc.c b/drivers/isdn/hisax/icc.c
deleted file mode 100644
index 831dd1bb81ef..000000000000
--- a/drivers/isdn/hisax/icc.c
+++ /dev/null
@@ -1,680 +0,0 @@
-/* $Id: icc.c,v 1.8.2.3 2004/01/13 14:31:25 keil Exp $
- *
- * ICC specific routines
- *
- * Author Matt Henderson & Guy Ellis
- * Copyright by Traverse Technologies Pty Ltd, www.travers.com.au
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- * 1999.6.25 Initial implementation of routines for Siemens ISDN
- * Communication Controller PEB 2070 based on the ISAC routines
- * written by Karsten Keil.
- *
- */
-
-#include <linux/init.h>
-#include "hisax.h"
-#include "icc.h"
-// #include "arcofi.h"
-#include "isdnl1.h"
-#include <linux/interrupt.h>
-#include <linux/slab.h>
-
-#define DBUSY_TIMER_VALUE 80
-#define ARCOFI_USE 0
-
-static char *ICCVer[] =
-{"2070 A1/A3", "2070 B1", "2070 B2/B3", "2070 V2.4"};
-
-void
-ICCVersion(struct IsdnCardState *cs, char *s)
-{
- int val;
-
- val = cs->readisac(cs, ICC_RBCH);
- printk(KERN_INFO "%s ICC version (%x): %s\n", s, val, ICCVer[(val >> 5) & 3]);
-}
-
-static void
-ph_command(struct IsdnCardState *cs, unsigned int command)
-{
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "ph_command %x", command);
- cs->writeisac(cs, ICC_CIX0, (command << 2) | 3);
-}
-
-
-static void
-icc_new_ph(struct IsdnCardState *cs)
-{
- switch (cs->dc.icc.ph_state) {
- case (ICC_IND_EI1):
- ph_command(cs, ICC_CMD_DI);
- l1_msg(cs, HW_RESET | INDICATION, NULL);
- break;
- case (ICC_IND_DC):
- l1_msg(cs, HW_DEACTIVATE | CONFIRM, NULL);
- break;
- case (ICC_IND_DR):
- l1_msg(cs, HW_DEACTIVATE | INDICATION, NULL);
- break;
- case (ICC_IND_PU):
- l1_msg(cs, HW_POWERUP | CONFIRM, NULL);
- break;
- case (ICC_IND_FJ):
- l1_msg(cs, HW_RSYNC | INDICATION, NULL);
- break;
- case (ICC_IND_AR):
- l1_msg(cs, HW_INFO2 | INDICATION, NULL);
- break;
- case (ICC_IND_AI):
- l1_msg(cs, HW_INFO4 | INDICATION, NULL);
- break;
- default:
- break;
- }
-}
-
-static void
-icc_bh(struct work_struct *work)
-{
- struct IsdnCardState *cs =
- container_of(work, struct IsdnCardState, tqueue);
- struct PStack *stptr;
-
- if (test_and_clear_bit(D_CLEARBUSY, &cs->event)) {
- if (cs->debug)
- debugl1(cs, "D-Channel Busy cleared");
- stptr = cs->stlist;
- while (stptr != NULL) {
- stptr->l1.l1l2(stptr, PH_PAUSE | CONFIRM, NULL);
- stptr = stptr->next;
- }
- }
- if (test_and_clear_bit(D_L1STATECHANGE, &cs->event))
- icc_new_ph(cs);
- if (test_and_clear_bit(D_RCVBUFREADY, &cs->event))
- DChannel_proc_rcv(cs);
- if (test_and_clear_bit(D_XMTBUFREADY, &cs->event))
- DChannel_proc_xmt(cs);
-#if ARCOFI_USE
- if (!test_bit(HW_ARCOFI, &cs->HW_Flags))
- return;
- if (test_and_clear_bit(D_RX_MON1, &cs->event))
- arcofi_fsm(cs, ARCOFI_RX_END, NULL);
- if (test_and_clear_bit(D_TX_MON1, &cs->event))
- arcofi_fsm(cs, ARCOFI_TX_END, NULL);
-#endif
-}
-
-static void
-icc_empty_fifo(struct IsdnCardState *cs, int count)
-{
- u_char *ptr;
-
- if ((cs->debug & L1_DEB_ISAC) && !(cs->debug & L1_DEB_ISAC_FIFO))
- debugl1(cs, "icc_empty_fifo");
-
- if ((cs->rcvidx + count) >= MAX_DFRAME_LEN_L1) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "icc_empty_fifo overrun %d",
- cs->rcvidx + count);
- cs->writeisac(cs, ICC_CMDR, 0x80);
- cs->rcvidx = 0;
- return;
- }
- ptr = cs->rcvbuf + cs->rcvidx;
- cs->rcvidx += count;
- cs->readisacfifo(cs, ptr, count);
- cs->writeisac(cs, ICC_CMDR, 0x80);
- if (cs->debug & L1_DEB_ISAC_FIFO) {
- char *t = cs->dlog;
-
- t += sprintf(t, "icc_empty_fifo cnt %d", count);
- QuickHex(t, ptr, count);
- debugl1(cs, "%s", cs->dlog);
- }
-}
-
-static void
-icc_fill_fifo(struct IsdnCardState *cs)
-{
- int count, more;
- u_char *ptr;
-
- if ((cs->debug & L1_DEB_ISAC) && !(cs->debug & L1_DEB_ISAC_FIFO))
- debugl1(cs, "icc_fill_fifo");
-
- if (!cs->tx_skb)
- return;
-
- count = cs->tx_skb->len;
- if (count <= 0)
- return;
-
- more = 0;
- if (count > 32) {
- more = !0;
- count = 32;
- }
- ptr = cs->tx_skb->data;
- skb_pull(cs->tx_skb, count);
- cs->tx_cnt += count;
- cs->writeisacfifo(cs, ptr, count);
- cs->writeisac(cs, ICC_CMDR, more ? 0x8 : 0xa);
- if (test_and_set_bit(FLG_DBUSY_TIMER, &cs->HW_Flags)) {
- debugl1(cs, "icc_fill_fifo dbusytimer running");
- del_timer(&cs->dbusytimer);
- }
- cs->dbusytimer.expires = jiffies + ((DBUSY_TIMER_VALUE * HZ)/1000);
- add_timer(&cs->dbusytimer);
- if (cs->debug & L1_DEB_ISAC_FIFO) {
- char *t = cs->dlog;
-
- t += sprintf(t, "icc_fill_fifo cnt %d", count);
- QuickHex(t, ptr, count);
- debugl1(cs, "%s", cs->dlog);
- }
-}
-
-void
-icc_interrupt(struct IsdnCardState *cs, u_char val)
-{
- u_char exval, v1;
- struct sk_buff *skb;
- unsigned int count;
-
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "ICC interrupt %x", val);
- if (val & 0x80) { /* RME */
- exval = cs->readisac(cs, ICC_RSTA);
- if ((exval & 0x70) != 0x20) {
- if (exval & 0x40) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "ICC RDO");
-#ifdef ERROR_STATISTIC
- cs->err_rx++;
-#endif
- }
- if (!(exval & 0x20)) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "ICC CRC error");
-#ifdef ERROR_STATISTIC
- cs->err_crc++;
-#endif
- }
- cs->writeisac(cs, ICC_CMDR, 0x80);
- } else {
- count = cs->readisac(cs, ICC_RBCL) & 0x1f;
- if (count == 0)
- count = 32;
- icc_empty_fifo(cs, count);
- if ((count = cs->rcvidx) > 0) {
- cs->rcvidx = 0;
- if (!(skb = alloc_skb(count, GFP_ATOMIC)))
- printk(KERN_WARNING "HiSax: D receive out of memory\n");
- else {
- skb_put_data(skb, cs->rcvbuf, count);
- skb_queue_tail(&cs->rq, skb);
- }
- }
- }
- cs->rcvidx = 0;
- schedule_event(cs, D_RCVBUFREADY);
- }
- if (val & 0x40) { /* RPF */
- icc_empty_fifo(cs, 32);
- }
- if (val & 0x20) { /* RSC */
- /* never */
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "ICC RSC interrupt");
- }
- if (val & 0x10) { /* XPR */
- if (test_and_clear_bit(FLG_DBUSY_TIMER, &cs->HW_Flags))
- del_timer(&cs->dbusytimer);
- if (test_and_clear_bit(FLG_L1_DBUSY, &cs->HW_Flags))
- schedule_event(cs, D_CLEARBUSY);
- if (cs->tx_skb) {
- if (cs->tx_skb->len) {
- icc_fill_fifo(cs);
- goto afterXPR;
- } else {
- dev_kfree_skb_irq(cs->tx_skb);
- cs->tx_cnt = 0;
- cs->tx_skb = NULL;
- }
- }
- if ((cs->tx_skb = skb_dequeue(&cs->sq))) {
- cs->tx_cnt = 0;
- icc_fill_fifo(cs);
- } else
- schedule_event(cs, D_XMTBUFREADY);
- }
-afterXPR:
- if (val & 0x04) { /* CISQ */
- exval = cs->readisac(cs, ICC_CIR0);
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "ICC CIR0 %02X", exval);
- if (exval & 2) {
- cs->dc.icc.ph_state = (exval >> 2) & 0xf;
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "ph_state change %x", cs->dc.icc.ph_state);
- schedule_event(cs, D_L1STATECHANGE);
- }
- if (exval & 1) {
- exval = cs->readisac(cs, ICC_CIR1);
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "ICC CIR1 %02X", exval);
- }
- }
- if (val & 0x02) { /* SIN */
- /* never */
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "ICC SIN interrupt");
- }
- if (val & 0x01) { /* EXI */
- exval = cs->readisac(cs, ICC_EXIR);
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "ICC EXIR %02x", exval);
- if (exval & 0x80) { /* XMR */
- debugl1(cs, "ICC XMR");
- printk(KERN_WARNING "HiSax: ICC XMR\n");
- }
- if (exval & 0x40) { /* XDU */
- debugl1(cs, "ICC XDU");
- printk(KERN_WARNING "HiSax: ICC XDU\n");
-#ifdef ERROR_STATISTIC
- cs->err_tx++;
-#endif
- if (test_and_clear_bit(FLG_DBUSY_TIMER, &cs->HW_Flags))
- del_timer(&cs->dbusytimer);
- if (test_and_clear_bit(FLG_L1_DBUSY, &cs->HW_Flags))
- schedule_event(cs, D_CLEARBUSY);
- if (cs->tx_skb) { /* Restart frame */
- skb_push(cs->tx_skb, cs->tx_cnt);
- cs->tx_cnt = 0;
- icc_fill_fifo(cs);
- } else {
- printk(KERN_WARNING "HiSax: ICC XDU no skb\n");
- debugl1(cs, "ICC XDU no skb");
- }
- }
- if (exval & 0x04) { /* MOS */
- v1 = cs->readisac(cs, ICC_MOSR);
- if (cs->debug & L1_DEB_MONITOR)
- debugl1(cs, "ICC MOSR %02x", v1);
-#if ARCOFI_USE
- if (v1 & 0x08) {
- if (!cs->dc.icc.mon_rx) {
- if (!(cs->dc.icc.mon_rx = kmalloc(MAX_MON_FRAME, GFP_ATOMIC))) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "ICC MON RX out of memory!");
- cs->dc.icc.mocr &= 0xf0;
- cs->dc.icc.mocr |= 0x0a;
- cs->writeisac(cs, ICC_MOCR, cs->dc.icc.mocr);
- goto afterMONR0;
- } else
- cs->dc.icc.mon_rxp = 0;
- }
- if (cs->dc.icc.mon_rxp >= MAX_MON_FRAME) {
- cs->dc.icc.mocr &= 0xf0;
- cs->dc.icc.mocr |= 0x0a;
- cs->writeisac(cs, ICC_MOCR, cs->dc.icc.mocr);
- cs->dc.icc.mon_rxp = 0;
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "ICC MON RX overflow!");
- goto afterMONR0;
- }
- cs->dc.icc.mon_rx[cs->dc.icc.mon_rxp++] = cs->readisac(cs, ICC_MOR0);
- if (cs->debug & L1_DEB_MONITOR)
- debugl1(cs, "ICC MOR0 %02x", cs->dc.icc.mon_rx[cs->dc.icc.mon_rxp - 1]);
- if (cs->dc.icc.mon_rxp == 1) {
- cs->dc.icc.mocr |= 0x04;
- cs->writeisac(cs, ICC_MOCR, cs->dc.icc.mocr);
- }
- }
- afterMONR0:
- if (v1 & 0x80) {
- if (!cs->dc.icc.mon_rx) {
- if (!(cs->dc.icc.mon_rx = kmalloc(MAX_MON_FRAME, GFP_ATOMIC))) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "ICC MON RX out of memory!");
- cs->dc.icc.mocr &= 0x0f;
- cs->dc.icc.mocr |= 0xa0;
- cs->writeisac(cs, ICC_MOCR, cs->dc.icc.mocr);
- goto afterMONR1;
- } else
- cs->dc.icc.mon_rxp = 0;
- }
- if (cs->dc.icc.mon_rxp >= MAX_MON_FRAME) {
- cs->dc.icc.mocr &= 0x0f;
- cs->dc.icc.mocr |= 0xa0;
- cs->writeisac(cs, ICC_MOCR, cs->dc.icc.mocr);
- cs->dc.icc.mon_rxp = 0;
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "ICC MON RX overflow!");
- goto afterMONR1;
- }
- cs->dc.icc.mon_rx[cs->dc.icc.mon_rxp++] = cs->readisac(cs, ICC_MOR1);
- if (cs->debug & L1_DEB_MONITOR)
- debugl1(cs, "ICC MOR1 %02x", cs->dc.icc.mon_rx[cs->dc.icc.mon_rxp - 1]);
- cs->dc.icc.mocr |= 0x40;
- cs->writeisac(cs, ICC_MOCR, cs->dc.icc.mocr);
- }
- afterMONR1:
- if (v1 & 0x04) {
- cs->dc.icc.mocr &= 0xf0;
- cs->writeisac(cs, ICC_MOCR, cs->dc.icc.mocr);
- cs->dc.icc.mocr |= 0x0a;
- cs->writeisac(cs, ICC_MOCR, cs->dc.icc.mocr);
- schedule_event(cs, D_RX_MON0);
- }
- if (v1 & 0x40) {
- cs->dc.icc.mocr &= 0x0f;
- cs->writeisac(cs, ICC_MOCR, cs->dc.icc.mocr);
- cs->dc.icc.mocr |= 0xa0;
- cs->writeisac(cs, ICC_MOCR, cs->dc.icc.mocr);
- schedule_event(cs, D_RX_MON1);
- }
- if (v1 & 0x02) {
- if ((!cs->dc.icc.mon_tx) || (cs->dc.icc.mon_txc &&
- (cs->dc.icc.mon_txp >= cs->dc.icc.mon_txc) &&
- !(v1 & 0x08))) {
- cs->dc.icc.mocr &= 0xf0;
- cs->writeisac(cs, ICC_MOCR, cs->dc.icc.mocr);
- cs->dc.icc.mocr |= 0x0a;
- cs->writeisac(cs, ICC_MOCR, cs->dc.icc.mocr);
- if (cs->dc.icc.mon_txc &&
- (cs->dc.icc.mon_txp >= cs->dc.icc.mon_txc))
- schedule_event(cs, D_TX_MON0);
- goto AfterMOX0;
- }
- if (cs->dc.icc.mon_txc && (cs->dc.icc.mon_txp >= cs->dc.icc.mon_txc)) {
- schedule_event(cs, D_TX_MON0);
- goto AfterMOX0;
- }
- cs->writeisac(cs, ICC_MOX0,
- cs->dc.icc.mon_tx[cs->dc.icc.mon_txp++]);
- if (cs->debug & L1_DEB_MONITOR)
- debugl1(cs, "ICC %02x -> MOX0", cs->dc.icc.mon_tx[cs->dc.icc.mon_txp - 1]);
- }
- AfterMOX0:
- if (v1 & 0x20) {
- if ((!cs->dc.icc.mon_tx) || (cs->dc.icc.mon_txc &&
- (cs->dc.icc.mon_txp >= cs->dc.icc.mon_txc) &&
- !(v1 & 0x80))) {
- cs->dc.icc.mocr &= 0x0f;
- cs->writeisac(cs, ICC_MOCR, cs->dc.icc.mocr);
- cs->dc.icc.mocr |= 0xa0;
- cs->writeisac(cs, ICC_MOCR, cs->dc.icc.mocr);
- if (cs->dc.icc.mon_txc &&
- (cs->dc.icc.mon_txp >= cs->dc.icc.mon_txc))
- schedule_event(cs, D_TX_MON1);
- goto AfterMOX1;
- }
- if (cs->dc.icc.mon_txc && (cs->dc.icc.mon_txp >= cs->dc.icc.mon_txc)) {
- schedule_event(cs, D_TX_MON1);
- goto AfterMOX1;
- }
- cs->writeisac(cs, ICC_MOX1,
- cs->dc.icc.mon_tx[cs->dc.icc.mon_txp++]);
- if (cs->debug & L1_DEB_MONITOR)
- debugl1(cs, "ICC %02x -> MOX1", cs->dc.icc.mon_tx[cs->dc.icc.mon_txp - 1]);
- }
- AfterMOX1: ;
-#endif
- }
- }
-}
-
-static void
-ICC_l1hw(struct PStack *st, int pr, void *arg)
-{
- struct IsdnCardState *cs = (struct IsdnCardState *) st->l1.hardware;
- struct sk_buff *skb = arg;
- u_long flags;
- int val;
-
- switch (pr) {
- case (PH_DATA | REQUEST):
- if (cs->debug & DEB_DLOG_HEX)
- LogFrame(cs, skb->data, skb->len);
- if (cs->debug & DEB_DLOG_VERBOSE)
- dlogframe(cs, skb, 0);
- spin_lock_irqsave(&cs->lock, flags);
- if (cs->tx_skb) {
- skb_queue_tail(&cs->sq, skb);
-#ifdef L2FRAME_DEBUG /* psa */
- if (cs->debug & L1_DEB_LAPD)
- Logl2Frame(cs, skb, "PH_DATA Queued", 0);
-#endif
- } else {
- cs->tx_skb = skb;
- cs->tx_cnt = 0;
-#ifdef L2FRAME_DEBUG /* psa */
- if (cs->debug & L1_DEB_LAPD)
- Logl2Frame(cs, skb, "PH_DATA", 0);
-#endif
- icc_fill_fifo(cs);
- }
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- case (PH_PULL | INDICATION):
- spin_lock_irqsave(&cs->lock, flags);
- if (cs->tx_skb) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, " l2l1 tx_skb exist this shouldn't happen");
- skb_queue_tail(&cs->sq, skb);
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- }
- if (cs->debug & DEB_DLOG_HEX)
- LogFrame(cs, skb->data, skb->len);
- if (cs->debug & DEB_DLOG_VERBOSE)
- dlogframe(cs, skb, 0);
- cs->tx_skb = skb;
- cs->tx_cnt = 0;
-#ifdef L2FRAME_DEBUG /* psa */
- if (cs->debug & L1_DEB_LAPD)
- Logl2Frame(cs, skb, "PH_DATA_PULLED", 0);
-#endif
- icc_fill_fifo(cs);
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- case (PH_PULL | REQUEST):
-#ifdef L2FRAME_DEBUG /* psa */
- if (cs->debug & L1_DEB_LAPD)
- debugl1(cs, "-> PH_REQUEST_PULL");
-#endif
- if (!cs->tx_skb) {
- test_and_clear_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- st->l1.l1l2(st, PH_PULL | CONFIRM, NULL);
- } else
- test_and_set_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- break;
- case (HW_RESET | REQUEST):
- spin_lock_irqsave(&cs->lock, flags);
- if ((cs->dc.icc.ph_state == ICC_IND_EI1) ||
- (cs->dc.icc.ph_state == ICC_IND_DR))
- ph_command(cs, ICC_CMD_DI);
- else
- ph_command(cs, ICC_CMD_RES);
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- case (HW_ENABLE | REQUEST):
- spin_lock_irqsave(&cs->lock, flags);
- ph_command(cs, ICC_CMD_DI);
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- case (HW_INFO1 | REQUEST):
- spin_lock_irqsave(&cs->lock, flags);
- ph_command(cs, ICC_CMD_AR);
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- case (HW_INFO3 | REQUEST):
- spin_lock_irqsave(&cs->lock, flags);
- ph_command(cs, ICC_CMD_AI);
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- case (HW_TESTLOOP | REQUEST):
- spin_lock_irqsave(&cs->lock, flags);
- val = 0;
- if (1 & (long) arg)
- val |= 0x0c;
- if (2 & (long) arg)
- val |= 0x3;
- if (test_bit(HW_IOM1, &cs->HW_Flags)) {
- /* IOM 1 Mode */
- if (!val) {
- cs->writeisac(cs, ICC_SPCR, 0xa);
- cs->writeisac(cs, ICC_ADF1, 0x2);
- } else {
- cs->writeisac(cs, ICC_SPCR, val);
- cs->writeisac(cs, ICC_ADF1, 0xa);
- }
- } else {
- /* IOM 2 Mode */
- cs->writeisac(cs, ICC_SPCR, val);
- if (val)
- cs->writeisac(cs, ICC_ADF1, 0x8);
- else
- cs->writeisac(cs, ICC_ADF1, 0x0);
- }
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- case (HW_DEACTIVATE | RESPONSE):
- skb_queue_purge(&cs->rq);
- skb_queue_purge(&cs->sq);
- if (cs->tx_skb) {
- dev_kfree_skb_any(cs->tx_skb);
- cs->tx_skb = NULL;
- }
- if (test_and_clear_bit(FLG_DBUSY_TIMER, &cs->HW_Flags))
- del_timer(&cs->dbusytimer);
- if (test_and_clear_bit(FLG_L1_DBUSY, &cs->HW_Flags))
- schedule_event(cs, D_CLEARBUSY);
- break;
- default:
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "icc_l1hw unknown %04x", pr);
- break;
- }
-}
-
-static void
-setstack_icc(struct PStack *st, struct IsdnCardState *cs)
-{
- st->l1.l1hw = ICC_l1hw;
-}
-
-static void
-DC_Close_icc(struct IsdnCardState *cs) {
- kfree(cs->dc.icc.mon_rx);
- cs->dc.icc.mon_rx = NULL;
- kfree(cs->dc.icc.mon_tx);
- cs->dc.icc.mon_tx = NULL;
-}
-
-static void
-dbusy_timer_handler(struct timer_list *t)
-{
- struct IsdnCardState *cs = from_timer(cs, t, dbusytimer);
- struct PStack *stptr;
- int rbch, star;
-
- if (test_bit(FLG_DBUSY_TIMER, &cs->HW_Flags)) {
- rbch = cs->readisac(cs, ICC_RBCH);
- star = cs->readisac(cs, ICC_STAR);
- if (cs->debug)
- debugl1(cs, "D-Channel Busy RBCH %02x STAR %02x",
- rbch, star);
- if (rbch & ICC_RBCH_XAC) { /* D-Channel Busy */
- test_and_set_bit(FLG_L1_DBUSY, &cs->HW_Flags);
- stptr = cs->stlist;
- while (stptr != NULL) {
- stptr->l1.l1l2(stptr, PH_PAUSE | INDICATION, NULL);
- stptr = stptr->next;
- }
- } else {
- /* discard frame; reset transceiver */
- test_and_clear_bit(FLG_DBUSY_TIMER, &cs->HW_Flags);
- if (cs->tx_skb) {
- dev_kfree_skb_any(cs->tx_skb);
- cs->tx_cnt = 0;
- cs->tx_skb = NULL;
- } else {
- printk(KERN_WARNING "HiSax: ICC D-Channel Busy no skb\n");
- debugl1(cs, "D-Channel Busy no skb");
- }
- cs->writeisac(cs, ICC_CMDR, 0x01); /* Transmitter reset */
- cs->irq_func(cs->irq, cs);
- }
- }
-}
-
-void
-initicc(struct IsdnCardState *cs)
-{
- cs->setstack_d = setstack_icc;
- cs->DC_Close = DC_Close_icc;
- cs->dc.icc.mon_tx = NULL;
- cs->dc.icc.mon_rx = NULL;
- cs->writeisac(cs, ICC_MASK, 0xff);
- cs->dc.icc.mocr = 0xaa;
- if (test_bit(HW_IOM1, &cs->HW_Flags)) {
- /* IOM 1 Mode */
- cs->writeisac(cs, ICC_ADF2, 0x0);
- cs->writeisac(cs, ICC_SPCR, 0xa);
- cs->writeisac(cs, ICC_ADF1, 0x2);
- cs->writeisac(cs, ICC_STCR, 0x70);
- cs->writeisac(cs, ICC_MODE, 0xc9);
- } else {
- /* IOM 2 Mode */
- if (!cs->dc.icc.adf2)
- cs->dc.icc.adf2 = 0x80;
- cs->writeisac(cs, ICC_ADF2, cs->dc.icc.adf2);
- cs->writeisac(cs, ICC_SQXR, 0xa0);
- cs->writeisac(cs, ICC_SPCR, 0x20);
- cs->writeisac(cs, ICC_STCR, 0x70);
- cs->writeisac(cs, ICC_MODE, 0xca);
- cs->writeisac(cs, ICC_TIMR, 0x00);
- cs->writeisac(cs, ICC_ADF1, 0x20);
- }
- ph_command(cs, ICC_CMD_RES);
- cs->writeisac(cs, ICC_MASK, 0x0);
- ph_command(cs, ICC_CMD_DI);
-}
-
-void
-clear_pending_icc_ints(struct IsdnCardState *cs)
-{
- int val, eval;
-
- val = cs->readisac(cs, ICC_STAR);
- debugl1(cs, "ICC STAR %x", val);
- val = cs->readisac(cs, ICC_MODE);
- debugl1(cs, "ICC MODE %x", val);
- val = cs->readisac(cs, ICC_ADF2);
- debugl1(cs, "ICC ADF2 %x", val);
- val = cs->readisac(cs, ICC_ISTA);
- debugl1(cs, "ICC ISTA %x", val);
- if (val & 0x01) {
- eval = cs->readisac(cs, ICC_EXIR);
- debugl1(cs, "ICC EXIR %x", eval);
- }
- val = cs->readisac(cs, ICC_CIR0);
- debugl1(cs, "ICC CIR0 %x", val);
- cs->dc.icc.ph_state = (val >> 2) & 0xf;
- schedule_event(cs, D_L1STATECHANGE);
- /* Disable all IRQ */
- cs->writeisac(cs, ICC_MASK, 0xFF);
-}
-
-void setup_icc(struct IsdnCardState *cs)
-{
- INIT_WORK(&cs->tqueue, icc_bh);
- timer_setup(&cs->dbusytimer, dbusy_timer_handler, 0);
-}
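
[Editor's note, not part of the patch: icc_fill_fifo() above arms an 80 ms "D-channel busy" watchdog each time it pushes data, and dbusy_timer_handler() decides what to do when it fires before an XPR interrupt: if RBCH still reports the transmitter active (ICC_RBCH_XAC), layer 1 is put into flow control with PH_PAUSE, otherwise the frame is dropped and the transmitter reset via CMDR 0x01. A minimal sketch of that decision; flow_control_on()/reset_transmitter() are hypothetical stand-ins for the l1l2 callback and the CMDR write, and the sample register values in main() are made up.]

#include <stdbool.h>
#include <stdio.h>

#define ICC_RBCH_XAC 0x80            /* value from the removed icc.h */

/* Hypothetical notifications toward layer 2 / the hardware. */
static void flow_control_on(void)    { puts("PH_PAUSE | INDICATION"); }
static void reset_transmitter(void)  { puts("CMDR <- 0x01, drop frame"); }

/*
 * Decision taken by the removed dbusy_timer_handler() when the watchdog
 * fires before an XPR interrupt arrived.  Returns true if the D channel
 * is genuinely busy, false if the frame is considered lost.
 */
static bool dbusy_expired(unsigned char rbch, unsigned char star)
{
	printf("D-Channel Busy RBCH %02x STAR %02x\n", rbch, star);
	if (rbch & ICC_RBCH_XAC) {   /* transmitter still active: back off */
		flow_control_on();
		return true;
	}
	reset_transmitter();         /* lost interrupt: discard and reset */
	return false;
}

int main(void)
{
	dbusy_expired(0x80, 0x00);   /* busy case */
	dbusy_expired(0x00, 0x48);   /* lost-interrupt case */
	return 0;
}
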
diff --git a/drivers/isdn/hisax/icc.h b/drivers/isdn/hisax/icc.h
deleted file mode 100644
index f367df5d3669..000000000000
--- a/drivers/isdn/hisax/icc.h
+++ /dev/null
@@ -1,72 +0,0 @@
-/* $Id: icc.h,v 1.4.2.2 2004/01/12 22:52:26 keil Exp $
- *
- * ICC specific routines
- *
- * Author Matt Henderson & Guy Ellis
- * Copyright by Traverse Technologies Pty Ltd, www.travers.com.au
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- * 1999.7.14 Initial implementation of routines for Siemens ISDN
- * Communication Controller PEB 2070 based on the ISAC routines
- * written by Karsten Keil.
- */
-
-/* All Registers original Siemens Spec */
-
-#define ICC_MASK 0x20
-#define ICC_ISTA 0x20
-#define ICC_STAR 0x21
-#define ICC_CMDR 0x21
-#define ICC_EXIR 0x24
-#define ICC_ADF2 0x39
-#define ICC_SPCR 0x30
-#define ICC_ADF1 0x38
-#define ICC_CIR0 0x31
-#define ICC_CIX0 0x31
-#define ICC_CIR1 0x33
-#define ICC_CIX1 0x33
-#define ICC_STCR 0x37
-#define ICC_MODE 0x22
-#define ICC_RSTA 0x27
-#define ICC_RBCL 0x25
-#define ICC_RBCH 0x2A
-#define ICC_TIMR 0x23
-#define ICC_SQXR 0x3b
-#define ICC_MOSR 0x3a
-#define ICC_MOCR 0x3a
-#define ICC_MOR0 0x32
-#define ICC_MOX0 0x32
-#define ICC_MOR1 0x34
-#define ICC_MOX1 0x34
-
-#define ICC_RBCH_XAC 0x80
-
-#define ICC_CMD_TIM 0x0
-#define ICC_CMD_RES 0x1
-#define ICC_CMD_DU 0x3
-#define ICC_CMD_EI1 0x4
-#define ICC_CMD_SSP 0x5
-#define ICC_CMD_DT 0x6
-#define ICC_CMD_AR 0x8
-#define ICC_CMD_ARL 0xA
-#define ICC_CMD_AI 0xC
-#define ICC_CMD_DI 0xF
-
-#define ICC_IND_DR 0x0
-#define ICC_IND_FJ 0x2
-#define ICC_IND_EI1 0x4
-#define ICC_IND_INT 0x6
-#define ICC_IND_PU 0x7
-#define ICC_IND_AR 0x8
-#define ICC_IND_ARL 0xA
-#define ICC_IND_AI 0xC
-#define ICC_IND_AIL 0xE
-#define ICC_IND_DC 0xF
-
-extern void ICCVersion(struct IsdnCardState *cs, char *s);
-extern void initicc(struct IsdnCardState *cs);
-extern void icc_interrupt(struct IsdnCardState *cs, u_char val);
-extern void clear_pending_icc_ints(struct IsdnCardState *cs);
-extern void setup_icc(struct IsdnCardState *);
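
[Editor's note, not part of the patch: the ICC_CMD_*/ICC_IND_* values above are 4-bit command/indication codes. In the removed icc.c, ph_command() shifts the command into bits 2-5 of CIX0 with the two low bits set ((command << 2) | 3), and the CISQ interrupt path recovers the indication from CIR0 with (val >> 2) & 0xf. A tiny sketch of that encoding follows; on real hardware CIR0 is of course a separate register, so the round trip below only checks the shift/mask arithmetic.]

#include <assert.h>
#include <stdio.h>

/* Values carried over from the removed icc.h */
#define ICC_CMD_DI 0xF
#define ICC_IND_DC 0xF

/* Encode a 4-bit command the way ph_command() writes it to CIX0. */
static unsigned char cix0_encode(unsigned int command)
{
	return (unsigned char)((command << 2) | 3);
}

/* Decode the 4-bit indication the way the CISQ handler reads CIR0. */
static unsigned int cir0_decode(unsigned char val)
{
	return (val >> 2) & 0xf;
}

int main(void)
{
	unsigned char reg = cix0_encode(ICC_CMD_DI);

	printf("CIX0 write: %#x\n", (unsigned)reg);
	assert(cir0_decode(reg) == ICC_IND_DC);   /* nibble survives the round trip */
	return 0;
}
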
diff --git a/drivers/isdn/hisax/ipac.h b/drivers/isdn/hisax/ipac.h
deleted file mode 100644
index 4f937f02ee34..000000000000
--- a/drivers/isdn/hisax/ipac.h
+++ /dev/null
@@ -1,29 +0,0 @@
-/* $Id: ipac.h,v 1.7.2.2 2004/01/12 22:52:26 keil Exp $
- *
- * IPAC specific defines
- *
- * Author Karsten Keil
- * Copyright by Karsten Keil <keil@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-/* All Registers original Siemens Spec */
-
-#define IPAC_CONF 0xC0
-#define IPAC_MASK 0xC1
-#define IPAC_ISTA 0xC1
-#define IPAC_ID 0xC2
-#define IPAC_ACFG 0xC3
-#define IPAC_AOE 0xC4
-#define IPAC_ARX 0xC5
-#define IPAC_ATX 0xC5
-#define IPAC_PITA1 0xC6
-#define IPAC_PITA2 0xC7
-#define IPAC_POTA1 0xC8
-#define IPAC_POTA2 0xC9
-#define IPAC_PCFG 0xCA
-#define IPAC_SCFG 0xCB
-#define IPAC_TIMR2 0xCC
diff --git a/drivers/isdn/hisax/ipacx.c b/drivers/isdn/hisax/ipacx.c
deleted file mode 100644
index c7086c1534bd..000000000000
--- a/drivers/isdn/hisax/ipacx.c
+++ /dev/null
@@ -1,913 +0,0 @@
-/*
- *
- * IPACX specific routines
- *
- * Author Joerg Petersohn
- * Derived from hisax_isac.c, isac.c, hscx.c and others
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-#include <linux/kernel.h>
-#include <linux/slab.h>
-#include <linux/init.h>
-#include "hisax_if.h"
-#include "hisax.h"
-#include "isdnl1.h"
-#include "ipacx.h"
-
-#define DBUSY_TIMER_VALUE 80
-#define TIMER3_VALUE 7000
-#define MAX_DFRAME_LEN_L1 300
-#define B_FIFO_SIZE 64
-#define D_FIFO_SIZE 32
-
-
-// ipacx interrupt mask values
-#define _MASK_IMASK 0x2E // global mask
-#define _MASKB_IMASK 0x0B
-#define _MASKD_IMASK 0x03 // all on
-
-//----------------------------------------------------------
-// local function declarations
-//----------------------------------------------------------
-static void ph_command(struct IsdnCardState *cs, unsigned int command);
-static inline void cic_int(struct IsdnCardState *cs);
-static void dch_l2l1(struct PStack *st, int pr, void *arg);
-static void dbusy_timer_handler(struct timer_list *t);
-static void dch_empty_fifo(struct IsdnCardState *cs, int count);
-static void dch_fill_fifo(struct IsdnCardState *cs);
-static inline void dch_int(struct IsdnCardState *cs);
-static void dch_setstack(struct PStack *st, struct IsdnCardState *cs);
-static void dch_init(struct IsdnCardState *cs);
-static void bch_l2l1(struct PStack *st, int pr, void *arg);
-static void bch_empty_fifo(struct BCState *bcs, int count);
-static void bch_fill_fifo(struct BCState *bcs);
-static void bch_int(struct IsdnCardState *cs, u_char hscx);
-static void bch_mode(struct BCState *bcs, int mode, int bc);
-static void bch_close_state(struct BCState *bcs);
-static int bch_open_state(struct IsdnCardState *cs, struct BCState *bcs);
-static int bch_setstack(struct PStack *st, struct BCState *bcs);
-static void bch_init(struct IsdnCardState *cs, int hscx);
-static void clear_pending_ints(struct IsdnCardState *cs);
-
-//----------------------------------------------------------
-// Issue Layer 1 command to chip
-//----------------------------------------------------------
-static void
-ph_command(struct IsdnCardState *cs, unsigned int command)
-{
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "ph_command (%#x) in (%#x)", command,
- cs->dc.isac.ph_state);
-//###################################
-// printk(KERN_INFO "ph_command (%#x)\n", command);
-//###################################
- cs->writeisac(cs, IPACX_CIX0, (command << 4) | 0x0E);
-}
-
-//----------------------------------------------------------
-// Transceiver interrupt handler
-//----------------------------------------------------------
-static inline void
-cic_int(struct IsdnCardState *cs)
-{
- u_char event;
-
- event = cs->readisac(cs, IPACX_CIR0) >> 4;
- if (cs->debug & L1_DEB_ISAC) debugl1(cs, "cic_int(event=%#x)", event);
-//#########################################
-// printk(KERN_INFO "cic_int(%x)\n", event);
-//#########################################
- cs->dc.isac.ph_state = event;
- schedule_event(cs, D_L1STATECHANGE);
-}
-
-//==========================================================
-// D channel functions
-//==========================================================
-
-//----------------------------------------------------------
-// Command entry point
-//----------------------------------------------------------
-static void
-dch_l2l1(struct PStack *st, int pr, void *arg)
-{
- struct IsdnCardState *cs = (struct IsdnCardState *) st->l1.hardware;
- struct sk_buff *skb = arg;
- u_char cda1_cr;
-
- switch (pr) {
- case (PH_DATA | REQUEST):
- if (cs->debug & DEB_DLOG_HEX) LogFrame(cs, skb->data, skb->len);
- if (cs->debug & DEB_DLOG_VERBOSE) dlogframe(cs, skb, 0);
- if (cs->tx_skb) {
- skb_queue_tail(&cs->sq, skb);
-#ifdef L2FRAME_DEBUG
- if (cs->debug & L1_DEB_LAPD) Logl2Frame(cs, skb, "PH_DATA Queued", 0);
-#endif
- } else {
- cs->tx_skb = skb;
- cs->tx_cnt = 0;
-#ifdef L2FRAME_DEBUG
- if (cs->debug & L1_DEB_LAPD) Logl2Frame(cs, skb, "PH_DATA", 0);
-#endif
- dch_fill_fifo(cs);
- }
- break;
-
- case (PH_PULL | INDICATION):
- if (cs->tx_skb) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, " l2l1 tx_skb exist this shouldn't happen");
- skb_queue_tail(&cs->sq, skb);
- break;
- }
- if (cs->debug & DEB_DLOG_HEX) LogFrame(cs, skb->data, skb->len);
- if (cs->debug & DEB_DLOG_VERBOSE) dlogframe(cs, skb, 0);
- cs->tx_skb = skb;
- cs->tx_cnt = 0;
-#ifdef L2FRAME_DEBUG
- if (cs->debug & L1_DEB_LAPD) Logl2Frame(cs, skb, "PH_DATA_PULLED", 0);
-#endif
- dch_fill_fifo(cs);
- break;
-
- case (PH_PULL | REQUEST):
-#ifdef L2FRAME_DEBUG
- if (cs->debug & L1_DEB_LAPD) debugl1(cs, "-> PH_REQUEST_PULL");
-#endif
- if (!cs->tx_skb) {
- clear_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- st->l1.l1l2(st, PH_PULL | CONFIRM, NULL);
- } else
- set_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- break;
-
- case (HW_RESET | REQUEST):
- case (HW_ENABLE | REQUEST):
- if ((cs->dc.isac.ph_state == IPACX_IND_RES) ||
- (cs->dc.isac.ph_state == IPACX_IND_DR) ||
- (cs->dc.isac.ph_state == IPACX_IND_DC))
- ph_command(cs, IPACX_CMD_TIM);
- else
- ph_command(cs, IPACX_CMD_RES);
- break;
-
- case (HW_INFO3 | REQUEST):
- ph_command(cs, IPACX_CMD_AR8);
- break;
-
- case (HW_TESTLOOP | REQUEST):
- cs->writeisac(cs, IPACX_CDA_TSDP10, 0x80); // Timeslot 0 is B1
- cs->writeisac(cs, IPACX_CDA_TSDP11, 0x81); // Timeslot 0 is B1
- cda1_cr = cs->readisac(cs, IPACX_CDA1_CR);
- (void) cs->readisac(cs, IPACX_CDA2_CR);
- if ((long)arg & 1) { // loop B1
- cs->writeisac(cs, IPACX_CDA1_CR, cda1_cr | 0x0a);
- }
- else { // B1 off
- cs->writeisac(cs, IPACX_CDA1_CR, cda1_cr & ~0x0a);
- }
- if ((long)arg & 2) { // loop B2
- cs->writeisac(cs, IPACX_CDA1_CR, cda1_cr | 0x14);
- }
- else { // B2 off
- cs->writeisac(cs, IPACX_CDA1_CR, cda1_cr & ~0x14);
- }
- break;
-
- case (HW_DEACTIVATE | RESPONSE):
- skb_queue_purge(&cs->rq);
- skb_queue_purge(&cs->sq);
- if (cs->tx_skb) {
- dev_kfree_skb_any(cs->tx_skb);
- cs->tx_skb = NULL;
- }
- if (test_and_clear_bit(FLG_DBUSY_TIMER, &cs->HW_Flags))
- del_timer(&cs->dbusytimer);
- break;
-
- default:
- if (cs->debug & L1_DEB_WARN) debugl1(cs, "dch_l2l1 unknown %04x", pr);
- break;
- }
-}
-
-//----------------------------------------------------------
-//----------------------------------------------------------
-static void
-dbusy_timer_handler(struct timer_list *t)
-{
- struct IsdnCardState *cs = from_timer(cs, t, dbusytimer);
- struct PStack *st;
- int rbchd, stard;
-
- if (test_bit(FLG_DBUSY_TIMER, &cs->HW_Flags)) {
- rbchd = cs->readisac(cs, IPACX_RBCHD);
- stard = cs->readisac(cs, IPACX_STARD);
- if (cs->debug)
- debugl1(cs, "D-Channel Busy RBCHD %02x STARD %02x", rbchd, stard);
- if (!(stard & 0x40)) { // D-Channel Busy
- set_bit(FLG_L1_DBUSY, &cs->HW_Flags);
- for (st = cs->stlist; st; st = st->next) {
- st->l1.l1l2(st, PH_PAUSE | INDICATION, NULL); // flow control on
- }
- } else {
- // seems we lost an interrupt; reset transceiver
- clear_bit(FLG_DBUSY_TIMER, &cs->HW_Flags);
- if (cs->tx_skb) {
- dev_kfree_skb_any(cs->tx_skb);
- cs->tx_cnt = 0;
- cs->tx_skb = NULL;
- } else {
- printk(KERN_WARNING "HiSax: ISAC D-Channel Busy no skb\n");
- debugl1(cs, "D-Channel Busy no skb");
- }
- cs->writeisac(cs, IPACX_CMDRD, 0x01); // Tx reset, generates XPR
- }
- }
-}
-
-//----------------------------------------------------------
-// Fill buffer from receive FIFO
-//----------------------------------------------------------
-static void
-dch_empty_fifo(struct IsdnCardState *cs, int count)
-{
- u_char *ptr;
-
- if ((cs->debug & L1_DEB_ISAC) && !(cs->debug & L1_DEB_ISAC_FIFO))
- debugl1(cs, "dch_empty_fifo()");
-
- // message too large, remove
- if ((cs->rcvidx + count) >= MAX_DFRAME_LEN_L1) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "dch_empty_fifo() incoming message too large");
- cs->writeisac(cs, IPACX_CMDRD, 0x80); // RMC
- cs->rcvidx = 0;
- return;
- }
-
- ptr = cs->rcvbuf + cs->rcvidx;
- cs->rcvidx += count;
-
- cs->readisacfifo(cs, ptr, count);
- cs->writeisac(cs, IPACX_CMDRD, 0x80); // RMC
-
- if (cs->debug & L1_DEB_ISAC_FIFO) {
- char *t = cs->dlog;
-
- t += sprintf(t, "dch_empty_fifo() cnt %d", count);
- QuickHex(t, ptr, count);
- debugl1(cs, "%s", cs->dlog);
- }
-}
-
-//----------------------------------------------------------
-// Fill transmit FIFO
-//----------------------------------------------------------
-static void
-dch_fill_fifo(struct IsdnCardState *cs)
-{
- int count;
- u_char cmd, *ptr;
-
- if ((cs->debug & L1_DEB_ISAC) && !(cs->debug & L1_DEB_ISAC_FIFO))
- debugl1(cs, "dch_fill_fifo()");
-
- if (!cs->tx_skb) return;
- count = cs->tx_skb->len;
- if (count <= 0) return;
-
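- // The D channel transmit FIFO holds D_FIFO_SIZE bytes; XME (message end)
- // is only set when the remaining frame data fits into a single chunk.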
- if (count > D_FIFO_SIZE) {
- count = D_FIFO_SIZE;
- cmd = 0x08; // XTF
- } else {
- cmd = 0x0A; // XTF | XME
- }
-
- ptr = cs->tx_skb->data;
- skb_pull(cs->tx_skb, count);
- cs->tx_cnt += count;
- cs->writeisacfifo(cs, ptr, count);
- cs->writeisac(cs, IPACX_CMDRD, cmd);
-
- // set timeout for transmission control
- if (test_and_set_bit(FLG_DBUSY_TIMER, &cs->HW_Flags)) {
- debugl1(cs, "dch_fill_fifo dbusytimer running");
- del_timer(&cs->dbusytimer);
- }
- cs->dbusytimer.expires = jiffies + ((DBUSY_TIMER_VALUE * HZ)/1000);
- add_timer(&cs->dbusytimer);
-
- if (cs->debug & L1_DEB_ISAC_FIFO) {
- char *t = cs->dlog;
-
- t += sprintf(t, "dch_fill_fifo() cnt %d", count);
- QuickHex(t, ptr, count);
- debugl1(cs, "%s", cs->dlog);
- }
-}
-
-//----------------------------------------------------------
-// D channel interrupt handler
-//----------------------------------------------------------
-static inline void
-dch_int(struct IsdnCardState *cs)
-{
- struct sk_buff *skb;
- u_char istad, rstad;
- int count;
-
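- // Decode the D channel interrupt status; the bits handled below are
- // RME (0x80), RPF (0x40), RFO (0x20), XPR (0x10) and XDU/XMR (0x0C).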
- istad = cs->readisac(cs, IPACX_ISTAD);
-//##############################################
-// printk(KERN_WARNING "dch_int(istad=%02x)\n", istad);
-//##############################################
-
- if (istad & 0x80) { // RME
- rstad = cs->readisac(cs, IPACX_RSTAD);
- if ((rstad & 0xf0) != 0xa0) { // !(VFR && !RDO && CRC && !RAB)
- if (!(rstad & 0x80))
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "dch_int(): invalid frame");
- if ((rstad & 0x40))
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "dch_int(): RDO");
- if (!(rstad & 0x20))
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "dch_int(): CRC error");
- cs->writeisac(cs, IPACX_CMDRD, 0x80); // RMC
- } else { // received frame ok
- count = cs->readisac(cs, IPACX_RBCLD);
- if (count) count--; // RSTAD is last byte
- count &= D_FIFO_SIZE - 1;
- if (count == 0) count = D_FIFO_SIZE;
- dch_empty_fifo(cs, count);
- if ((count = cs->rcvidx) > 0) {
- cs->rcvidx = 0;
- if (!(skb = dev_alloc_skb(count)))
- printk(KERN_WARNING "HiSax dch_int(): receive out of memory\n");
- else {
- skb_put_data(skb, cs->rcvbuf, count);
- skb_queue_tail(&cs->rq, skb);
- }
- }
- }
- cs->rcvidx = 0;
- schedule_event(cs, D_RCVBUFREADY);
- }
-
- if (istad & 0x40) { // RPF
- dch_empty_fifo(cs, D_FIFO_SIZE);
- }
-
- if (istad & 0x20) { // RFO
- if (cs->debug & L1_DEB_WARN) debugl1(cs, "dch_int(): RFO");
- cs->writeisac(cs, IPACX_CMDRD, 0x40); // RRES
- }
-
- if (istad & 0x10) { // XPR
- if (test_and_clear_bit(FLG_DBUSY_TIMER, &cs->HW_Flags))
- del_timer(&cs->dbusytimer);
- if (test_and_clear_bit(FLG_L1_DBUSY, &cs->HW_Flags))
- schedule_event(cs, D_CLEARBUSY);
- if (cs->tx_skb) {
- if (cs->tx_skb->len) {
- dch_fill_fifo(cs);
- goto afterXPR;
- }
- else {
- dev_kfree_skb_irq(cs->tx_skb);
- cs->tx_skb = NULL;
- cs->tx_cnt = 0;
- }
- }
- if ((cs->tx_skb = skb_dequeue(&cs->sq))) {
- cs->tx_cnt = 0;
- dch_fill_fifo(cs);
- }
- else {
- schedule_event(cs, D_XMTBUFREADY);
- }
- }
-afterXPR:
-
- if (istad & 0x0C) { // XDU or XMR
- if (cs->debug & L1_DEB_WARN) debugl1(cs, "dch_int(): XDU");
- if (cs->tx_skb) {
- skb_push(cs->tx_skb, cs->tx_cnt); // retransmit
- cs->tx_cnt = 0;
- dch_fill_fifo(cs);
- } else {
- printk(KERN_WARNING "HiSax: ISAC XDU no skb\n");
- debugl1(cs, "ISAC XDU no skb");
- }
- }
-}
-
-//----------------------------------------------------------
-//----------------------------------------------------------
-static void
-dch_setstack(struct PStack *st, struct IsdnCardState *cs)
-{
- st->l1.l1hw = dch_l2l1;
-}
-
-//----------------------------------------------------------
-//----------------------------------------------------------
-static void
-dch_init(struct IsdnCardState *cs)
-{
- printk(KERN_INFO "HiSax: IPACX ISDN driver v0.1.0\n");
-
- cs->setstack_d = dch_setstack;
-
- timer_setup(&cs->dbusytimer, dbusy_timer_handler, 0);
-
- cs->writeisac(cs, IPACX_TR_CONF0, 0x00); // clear LDD
- cs->writeisac(cs, IPACX_TR_CONF2, 0x00); // enable transmitter
- cs->writeisac(cs, IPACX_MODED, 0xC9); // transparent mode 0, RAC, stop/go
- cs->writeisac(cs, IPACX_MON_CR, 0x00); // disable monitor channel
-}
-
-
-//==========================================================
-// B channel functions
-//==========================================================
-
-//----------------------------------------------------------
-// Entry point for commands
-//----------------------------------------------------------
-static void
-bch_l2l1(struct PStack *st, int pr, void *arg)
-{
- struct BCState *bcs = st->l1.bcs;
- struct sk_buff *skb = arg;
- u_long flags;
-
- switch (pr) {
- case (PH_DATA | REQUEST):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- if (bcs->tx_skb) {
- skb_queue_tail(&bcs->squeue, skb);
- } else {
- bcs->tx_skb = skb;
- set_bit(BC_FLG_BUSY, &bcs->Flag);
- bcs->hw.hscx.count = 0;
- bch_fill_fifo(bcs);
- }
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- break;
- case (PH_PULL | INDICATION):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- if (bcs->tx_skb) {
- printk(KERN_WARNING "HiSax bch_l2l1(): this shouldn't happen\n");
- } else {
- set_bit(BC_FLG_BUSY, &bcs->Flag);
- bcs->tx_skb = skb;
- bcs->hw.hscx.count = 0;
- bch_fill_fifo(bcs);
- }
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- break;
- case (PH_PULL | REQUEST):
- if (!bcs->tx_skb) {
- clear_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- st->l1.l1l2(st, PH_PULL | CONFIRM, NULL);
- } else
- set_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- break;
- case (PH_ACTIVATE | REQUEST):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- set_bit(BC_FLG_ACTIV, &bcs->Flag);
- bch_mode(bcs, st->l1.mode, st->l1.bc);
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- l1_msg_b(st, pr, arg);
- break;
- case (PH_DEACTIVATE | REQUEST):
- l1_msg_b(st, pr, arg);
- break;
- case (PH_DEACTIVATE | CONFIRM):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- clear_bit(BC_FLG_ACTIV, &bcs->Flag);
- clear_bit(BC_FLG_BUSY, &bcs->Flag);
- bch_mode(bcs, 0, st->l1.bc);
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- st->l1.l1l2(st, PH_DEACTIVATE | CONFIRM, NULL);
- break;
- }
-}
-
-//----------------------------------------------------------
-// Read B channel fifo to receive buffer
-//----------------------------------------------------------
-static void
-bch_empty_fifo(struct BCState *bcs, int count)
-{
- u_char *ptr, hscx;
- struct IsdnCardState *cs;
- int cnt;
-
- cs = bcs->cs;
- hscx = bcs->hw.hscx.hscx;
- if ((cs->debug & L1_DEB_HSCX) && !(cs->debug & L1_DEB_HSCX_FIFO))
- debugl1(cs, "bch_empty_fifo()");
-
- // message too large, remove
- if (bcs->hw.hscx.rcvidx + count > HSCX_BUFMAX) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "bch_empty_fifo() incoming packet too large");
- cs->BC_Write_Reg(cs, hscx, IPACX_CMDRB, 0x80); // RMC
- bcs->hw.hscx.rcvidx = 0;
- return;
- }
-
- ptr = bcs->hw.hscx.rcvbuf + bcs->hw.hscx.rcvidx;
- cnt = count;
- while (cnt--) *ptr++ = cs->BC_Read_Reg(cs, hscx, IPACX_RFIFOB);
- cs->BC_Write_Reg(cs, hscx, IPACX_CMDRB, 0x80); // RMC
-
- ptr = bcs->hw.hscx.rcvbuf + bcs->hw.hscx.rcvidx;
- bcs->hw.hscx.rcvidx += count;
-
- if (cs->debug & L1_DEB_HSCX_FIFO) {
- char *t = bcs->blog;
-
- t += sprintf(t, "bch_empty_fifo() B-%d cnt %d", hscx, count);
- QuickHex(t, ptr, count);
- debugl1(cs, "%s", bcs->blog);
- }
-}
-
-//----------------------------------------------------------
-// Fill buffer to transmit FIFO
-//----------------------------------------------------------
-static void
-bch_fill_fifo(struct BCState *bcs)
-{
- struct IsdnCardState *cs;
- int more, count, cnt;
- u_char *ptr, *p, hscx;
-
- cs = bcs->cs;
- if ((cs->debug & L1_DEB_HSCX) && !(cs->debug & L1_DEB_HSCX_FIFO))
- debugl1(cs, "bch_fill_fifo()");
-
- if (!bcs->tx_skb) return;
- if (bcs->tx_skb->len <= 0) return;
-
- hscx = bcs->hw.hscx.hscx;
- more = (bcs->mode == L1_MODE_TRANS) ? 1 : 0;
- if (bcs->tx_skb->len > B_FIFO_SIZE) {
- more = 1;
- count = B_FIFO_SIZE;
- } else {
- count = bcs->tx_skb->len;
- }
- cnt = count;
-
- p = ptr = bcs->tx_skb->data;
- skb_pull(bcs->tx_skb, count);
- bcs->tx_cnt -= count;
- bcs->hw.hscx.count += count;
- while (cnt--) cs->BC_Write_Reg(cs, hscx, IPACX_XFIFOB, *p++);
- cs->BC_Write_Reg(cs, hscx, IPACX_CMDRB, (more ? 0x08 : 0x0a));
-
- if (cs->debug & L1_DEB_HSCX_FIFO) {
- char *t = bcs->blog;
-
- t += sprintf(t, "%s() B-%d cnt %d", __func__, hscx, count);
- QuickHex(t, ptr, count);
- debugl1(cs, "%s", bcs->blog);
- }
-}
-
-//----------------------------------------------------------
-// B channel interrupt handler
-//----------------------------------------------------------
-static void
-bch_int(struct IsdnCardState *cs, u_char hscx)
-{
- u_char istab;
- struct BCState *bcs;
- struct sk_buff *skb;
- int count;
- u_char rstab;
-
- bcs = cs->bcs + hscx;
- istab = cs->BC_Read_Reg(cs, hscx, IPACX_ISTAB);
-//##############################################
-// printk(KERN_WARNING "bch_int(istab=%02x)\n", istab);
-//##############################################
- if (!test_bit(BC_FLG_INIT, &bcs->Flag)) return;
-
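- // B channel interrupt status bits handled below: RME (0x80), RPF (0x40),
- // RFO (0x20), XPR (0x10) and XDU (0x04).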
- if (istab & 0x80) { // RME
- rstab = cs->BC_Read_Reg(cs, hscx, IPACX_RSTAB);
- if ((rstab & 0xf0) != 0xa0) { // !(VFR && !RDO && CRC && !RAB)
- if (!(rstab & 0x80))
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "bch_int() B-%d: invalid frame", hscx);
- if ((rstab & 0x40) && (bcs->mode != L1_MODE_NULL))
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "bch_int() B-%d: RDO mode=%d", hscx, bcs->mode);
- if (!(rstab & 0x20))
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "bch_int() B-%d: CRC error", hscx);
- cs->BC_Write_Reg(cs, hscx, IPACX_CMDRB, 0x80); // RMC
- }
- else { // received frame ok
- count = cs->BC_Read_Reg(cs, hscx, IPACX_RBCLB) & (B_FIFO_SIZE - 1);
- if (count == 0) count = B_FIFO_SIZE;
- bch_empty_fifo(bcs, count);
- if ((count = bcs->hw.hscx.rcvidx - 1) > 0) {
- if (cs->debug & L1_DEB_HSCX_FIFO)
- debugl1(cs, "bch_int Frame %d", count);
- if (!(skb = dev_alloc_skb(count)))
- printk(KERN_WARNING "HiSax bch_int(): receive frame out of memory\n");
- else {
- skb_put_data(skb, bcs->hw.hscx.rcvbuf,
- count);
- skb_queue_tail(&bcs->rqueue, skb);
- }
- }
- }
- bcs->hw.hscx.rcvidx = 0;
- schedule_event(bcs, B_RCVBUFREADY);
- }
-
- if (istab & 0x40) { // RPF
- bch_empty_fifo(bcs, B_FIFO_SIZE);
-
- if (bcs->mode == L1_MODE_TRANS) { // queue every chunk
- // receive transparent audio data
- if (!(skb = dev_alloc_skb(B_FIFO_SIZE)))
- printk(KERN_WARNING "HiSax bch_int(): receive transparent out of memory\n");
- else {
- skb_put_data(skb, bcs->hw.hscx.rcvbuf,
- B_FIFO_SIZE);
- skb_queue_tail(&bcs->rqueue, skb);
- }
- bcs->hw.hscx.rcvidx = 0;
- schedule_event(bcs, B_RCVBUFREADY);
- }
- }
-
- if (istab & 0x20) { // RFO
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "bch_int() B-%d: RFO error", hscx);
- cs->BC_Write_Reg(cs, hscx, IPACX_CMDRB, 0x40); // RRES
- }
-
- if (istab & 0x10) { // XPR
- if (bcs->tx_skb) {
- if (bcs->tx_skb->len) {
- bch_fill_fifo(bcs);
- goto afterXPR;
- } else {
- if (test_bit(FLG_LLI_L1WAKEUP, &bcs->st->lli.flag) &&
- (PACKET_NOACK != bcs->tx_skb->pkt_type)) {
- u_long flags;
- spin_lock_irqsave(&bcs->aclock, flags);
- bcs->ackcnt += bcs->hw.hscx.count;
- spin_unlock_irqrestore(&bcs->aclock, flags);
- schedule_event(bcs, B_ACKPENDING);
- }
- }
- dev_kfree_skb_irq(bcs->tx_skb);
- bcs->hw.hscx.count = 0;
- bcs->tx_skb = NULL;
- }
- if ((bcs->tx_skb = skb_dequeue(&bcs->squeue))) {
- bcs->hw.hscx.count = 0;
- set_bit(BC_FLG_BUSY, &bcs->Flag);
- bch_fill_fifo(bcs);
- } else {
- clear_bit(BC_FLG_BUSY, &bcs->Flag);
- schedule_event(bcs, B_XMTBUFREADY);
- }
- }
-afterXPR:
-
- if (istab & 0x04) { // XDU
- if (bcs->mode == L1_MODE_TRANS) {
- bch_fill_fifo(bcs);
- }
- else {
- if (bcs->tx_skb) { // restart transmitting the whole frame
- skb_push(bcs->tx_skb, bcs->hw.hscx.count);
- bcs->tx_cnt += bcs->hw.hscx.count;
- bcs->hw.hscx.count = 0;
- }
- cs->BC_Write_Reg(cs, hscx, IPACX_CMDRB, 0x01); // XRES
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "bch_int() B-%d XDU error", hscx);
- }
- }
-}
-
-//----------------------------------------------------------
-//----------------------------------------------------------
-static void
-bch_mode(struct BCState *bcs, int mode, int bc)
-{
- struct IsdnCardState *cs = bcs->cs;
- int hscx = bcs->hw.hscx.hscx;
-
- bc = bc ? 1 : 0; // in case bc is greater than 1
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "mode_bch() switch B-%d mode %d chan %d", hscx, mode, bc);
- bcs->mode = mode;
- bcs->channel = bc;
-
- // map controller to the corresponding timeslot
- if (!hscx)
- {
- cs->writeisac(cs, IPACX_BCHA_TSDP_BC1, 0x80 | bc);
- cs->writeisac(cs, IPACX_BCHA_CR, 0x88);
- }
- else
- {
- cs->writeisac(cs, IPACX_BCHB_TSDP_BC1, 0x80 | bc);
- cs->writeisac(cs, IPACX_BCHB_CR, 0x88);
- }
-
- switch (mode) {
- case (L1_MODE_NULL):
- cs->BC_Write_Reg(cs, hscx, IPACX_MODEB, 0xC0); // rec off
- cs->BC_Write_Reg(cs, hscx, IPACX_EXMB, 0x30); // std adj.
- cs->BC_Write_Reg(cs, hscx, IPACX_MASKB, 0xFF); // ints off
- cs->BC_Write_Reg(cs, hscx, IPACX_CMDRB, 0x41); // validate adjustments
- break;
- case (L1_MODE_TRANS):
- cs->BC_Write_Reg(cs, hscx, IPACX_MODEB, 0x88); // ext transp mode
- cs->BC_Write_Reg(cs, hscx, IPACX_EXMB, 0x00); // xxx00000
- cs->BC_Write_Reg(cs, hscx, IPACX_CMDRB, 0x41); // validate adjustments
- cs->BC_Write_Reg(cs, hscx, IPACX_MASKB, _MASKB_IMASK);
- break;
- case (L1_MODE_HDLC):
- cs->BC_Write_Reg(cs, hscx, IPACX_MODEB, 0xC8); // transp mode 0
- cs->BC_Write_Reg(cs, hscx, IPACX_EXMB, 0x01); // idle=hdlc flags crc enabled
- cs->BC_Write_Reg(cs, hscx, IPACX_CMDRB, 0x41); // validate adjustments
- cs->BC_Write_Reg(cs, hscx, IPACX_MASKB, _MASKB_IMASK);
- break;
- }
-}
-
-//----------------------------------------------------------
-//----------------------------------------------------------
-static void
-bch_close_state(struct BCState *bcs)
-{
- bch_mode(bcs, 0, bcs->channel);
- if (test_and_clear_bit(BC_FLG_INIT, &bcs->Flag)) {
- kfree(bcs->hw.hscx.rcvbuf);
- bcs->hw.hscx.rcvbuf = NULL;
- kfree(bcs->blog);
- bcs->blog = NULL;
- skb_queue_purge(&bcs->rqueue);
- skb_queue_purge(&bcs->squeue);
- if (bcs->tx_skb) {
- dev_kfree_skb_any(bcs->tx_skb);
- bcs->tx_skb = NULL;
- clear_bit(BC_FLG_BUSY, &bcs->Flag);
- }
- }
-}
-
-//----------------------------------------------------------
-//----------------------------------------------------------
-static int
-bch_open_state(struct IsdnCardState *cs, struct BCState *bcs)
-{
- if (!test_and_set_bit(BC_FLG_INIT, &bcs->Flag)) {
- if (!(bcs->hw.hscx.rcvbuf = kmalloc(HSCX_BUFMAX, GFP_ATOMIC))) {
- printk(KERN_WARNING
- "HiSax open_bchstate(): No memory for hscx.rcvbuf\n");
- clear_bit(BC_FLG_INIT, &bcs->Flag);
- return (1);
- }
- if (!(bcs->blog = kmalloc(MAX_BLOG_SPACE, GFP_ATOMIC))) {
- printk(KERN_WARNING
- "HiSax open_bchstate: No memory for bcs->blog\n");
- clear_bit(BC_FLG_INIT, &bcs->Flag);
- kfree(bcs->hw.hscx.rcvbuf);
- bcs->hw.hscx.rcvbuf = NULL;
- return (2);
- }
- skb_queue_head_init(&bcs->rqueue);
- skb_queue_head_init(&bcs->squeue);
- }
- bcs->tx_skb = NULL;
- clear_bit(BC_FLG_BUSY, &bcs->Flag);
- bcs->event = 0;
- bcs->hw.hscx.rcvidx = 0;
- bcs->tx_cnt = 0;
- return (0);
-}
-
-//----------------------------------------------------------
-//----------------------------------------------------------
-static int
-bch_setstack(struct PStack *st, struct BCState *bcs)
-{
- bcs->channel = st->l1.bc;
- if (bch_open_state(st->l1.hardware, bcs)) return (-1);
- st->l1.bcs = bcs;
- st->l2.l2l1 = bch_l2l1;
- setstack_manager(st);
- bcs->st = st;
- setstack_l1_B(st);
- return (0);
-}
-
-//----------------------------------------------------------
-//----------------------------------------------------------
-static void
-bch_init(struct IsdnCardState *cs, int hscx)
-{
- cs->bcs[hscx].BC_SetStack = bch_setstack;
- cs->bcs[hscx].BC_Close = bch_close_state;
- cs->bcs[hscx].hw.hscx.hscx = hscx;
- cs->bcs[hscx].cs = cs;
- bch_mode(cs->bcs + hscx, 0, hscx);
-}
-
-
-//==========================================================
-// Shared functions
-//==========================================================
-
-//----------------------------------------------------------
-// Main interrupt handler
-//----------------------------------------------------------
-void
-interrupt_ipacx(struct IsdnCardState *cs)
-{
- u_char ista;
-
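- // Service interrupt sources until the global status register reads zero:
- // B channels (0x80/0x40), layer 1 C/I codes (0x10) and the D channel (0x01).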
- while ((ista = cs->readisac(cs, IPACX_ISTA))) {
-//#################################################
-// printk(KERN_WARNING "interrupt_ipacx(ista=%02x)\n", ista);
-//#################################################
- if (ista & 0x80) bch_int(cs, 0); // B channel interrupts
- if (ista & 0x40) bch_int(cs, 1);
-
- if (ista & 0x01) dch_int(cs); // D channel
- if (ista & 0x10) cic_int(cs); // Layer 1 state
- }
-}
-
-//----------------------------------------------------------
-// Clears chip interrupt status
-//----------------------------------------------------------
-static void
-clear_pending_ints(struct IsdnCardState *cs)
-{
- int ista;
-
- // all interrupts off
- cs->writeisac(cs, IPACX_MASK, 0xff);
- cs->writeisac(cs, IPACX_MASKD, 0xff);
- cs->BC_Write_Reg(cs, 0, IPACX_MASKB, 0xff);
- cs->BC_Write_Reg(cs, 1, IPACX_MASKB, 0xff);
-
- ista = cs->readisac(cs, IPACX_ISTA);
- if (ista & 0x80) cs->BC_Read_Reg(cs, 0, IPACX_ISTAB);
- if (ista & 0x40) cs->BC_Read_Reg(cs, 1, IPACX_ISTAB);
- if (ista & 0x10) cs->readisac(cs, IPACX_CIR0);
- if (ista & 0x01) cs->readisac(cs, IPACX_ISTAD);
-}
-
-//----------------------------------------------------------
-// Performs the chip configuration work
-// The work to do depends on the bit mask in 'part'
-//----------------------------------------------------------
-void
-init_ipacx(struct IsdnCardState *cs, int part)
-{
- if (part & 1) { // initialise chip
-//##################################################
-// printk(KERN_INFO "init_ipacx(%x)\n", part);
-//##################################################
- clear_pending_ints(cs);
- bch_init(cs, 0);
- bch_init(cs, 1);
- dch_init(cs);
- }
- if (part & 2) { // reenable all interrupts and start chip
- cs->BC_Write_Reg(cs, 0, IPACX_MASKB, _MASKB_IMASK);
- cs->BC_Write_Reg(cs, 1, IPACX_MASKB, _MASKB_IMASK);
- cs->writeisac(cs, IPACX_MASKD, _MASKD_IMASK);
- cs->writeisac(cs, IPACX_MASK, _MASK_IMASK); // global mask register
-
- // reset HDLC Transmitters/receivers
- cs->writeisac(cs, IPACX_CMDRD, 0x41);
- cs->BC_Write_Reg(cs, 0, IPACX_CMDRB, 0x41);
- cs->BC_Write_Reg(cs, 1, IPACX_CMDRB, 0x41);
- ph_command(cs, IPACX_CMD_RES);
- }
-}
-
-//----------------- end of file -----------------------
diff --git a/drivers/isdn/hisax/ipacx.h b/drivers/isdn/hisax/ipacx.h
deleted file mode 100644
index e8a22e8f34b6..000000000000
--- a/drivers/isdn/hisax/ipacx.h
+++ /dev/null
@@ -1,162 +0,0 @@
-/*
- *
- * IPACX specific defines
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-/* All Registers original Siemens Spec */
-
-#ifndef INCLUDE_IPACX_H
-#define INCLUDE_IPACX_H
-
-/* D-channel registers */
-#define IPACX_RFIFOD 0x00 /* RD */
-#define IPACX_XFIFOD 0x00 /* WR */
-#define IPACX_ISTAD 0x20 /* RD */
-#define IPACX_MASKD 0x20 /* WR */
-#define IPACX_STARD 0x21 /* RD */
-#define IPACX_CMDRD 0x21 /* WR */
-#define IPACX_MODED 0x22 /* RD/WR */
-#define IPACX_EXMD1 0x23 /* RD/WR */
-#define IPACX_TIMR1 0x24 /* RD/WR */
-#define IPACX_SAP1 0x25 /* WR */
-#define IPACX_SAP2 0x26 /* WR */
-#define IPACX_RBCLD 0x26 /* RD */
-#define IPACX_RBCHD 0x27 /* RD */
-#define IPACX_TEI1 0x27 /* WR */
-#define IPACX_TEI2 0x28 /* WR */
-#define IPACX_RSTAD 0x28 /* RD */
-#define IPACX_TMD 0x29 /* RD/WR */
-#define IPACX_CIR0 0x2E /* RD */
-#define IPACX_CIX0 0x2E /* WR */
-#define IPACX_CIR1 0x2F /* RD */
-#define IPACX_CIX1 0x2F /* WR */
-
-/* Transceiver registers */
-#define IPACX_TR_CONF0 0x30 /* RD/WR */
-#define IPACX_TR_CONF1 0x31 /* RD/WR */
-#define IPACX_TR_CONF2 0x32 /* RD/WR */
-#define IPACX_TR_STA 0x33 /* RD */
-#define IPACX_TR_CMD 0x34 /* RD/WR */
-#define IPACX_SQRR1 0x35 /* RD */
-#define IPACX_SQXR1 0x35 /* WR */
-#define IPACX_SQRR2 0x36 /* RD */
-#define IPACX_SQXR2 0x36 /* WR */
-#define IPACX_SQRR3 0x37 /* RD */
-#define IPACX_SQXR3 0x37 /* WR */
-#define IPACX_ISTATR 0x38 /* RD */
-#define IPACX_MASKTR 0x39 /* RD/WR */
-#define IPACX_TR_MODE 0x3A /* RD/WR */
-#define IPACX_ACFG1 0x3C /* RD/WR */
-#define IPACX_ACFG2 0x3D /* RD/WR */
-#define IPACX_AOE 0x3E /* RD/WR */
-#define IPACX_ARX 0x3F /* RD */
-#define IPACX_ATX 0x3F /* WR */
-
-/* IOM: Timeslot, DPS, CDA */
-#define IPACX_CDA10 0x40 /* RD/WR */
-#define IPACX_CDA11 0x41 /* RD/WR */
-#define IPACX_CDA20 0x42 /* RD/WR */
-#define IPACX_CDA21 0x43 /* RD/WR */
-#define IPACX_CDA_TSDP10 0x44 /* RD/WR */
-#define IPACX_CDA_TSDP11 0x45 /* RD/WR */
-#define IPACX_CDA_TSDP20 0x46 /* RD/WR */
-#define IPACX_CDA_TSDP21 0x47 /* RD/WR */
-#define IPACX_BCHA_TSDP_BC1 0x48 /* RD/WR */
-#define IPACX_BCHA_TSDP_BC2 0x49 /* RD/WR */
-#define IPACX_BCHB_TSDP_BC1 0x4A /* RD/WR */
-#define IPACX_BCHB_TSDP_BC2 0x4B /* RD/WR */
-#define IPACX_TR_TSDP_BC1 0x4C /* RD/WR */
-#define IPACX_TR_TSDP_BC2 0x4D /* RD/WR */
-#define IPACX_CDA1_CR 0x4E /* RD/WR */
-#define IPACX_CDA2_CR 0x4F /* RD/WR */
-
-/* IOM: Control, Sync transfer, Monitor */
-#define IPACX_TR_CR 0x50 /* RD/WR */
-#define IPACX_TRC_CR 0x50 /* RD/WR */
-#define IPACX_BCHA_CR 0x51 /* RD/WR */
-#define IPACX_BCHB_CR 0x52 /* RD/WR */
-#define IPACX_DCI_CR 0x53 /* RD/WR */
-#define IPACX_DCIC_CR 0x53 /* RD/WR */
-#define IPACX_MON_CR 0x54 /* RD/WR */
-#define IPACX_SDS1_CR 0x55 /* RD/WR */
-#define IPACX_SDS2_CR 0x56 /* RD/WR */
-#define IPACX_IOM_CR 0x57 /* RD/WR */
-#define IPACX_STI 0x58 /* RD */
-#define IPACX_ASTI 0x58 /* WR */
-#define IPACX_MSTI 0x59 /* RD/WR */
-#define IPACX_SDS_CONF 0x5A /* RD/WR */
-#define IPACX_MCDA 0x5B /* RD */
-#define IPACX_MOR 0x5C /* RD */
-#define IPACX_MOX 0x5C /* WR */
-#define IPACX_MOSR 0x5D /* RD */
-#define IPACX_MOCR 0x5E /* RD/WR */
-#define IPACX_MSTA 0x5F /* RD */
-#define IPACX_MCONF 0x5F /* WR */
-
-/* Interrupt and general registers */
-#define IPACX_ISTA 0x60 /* RD */
-#define IPACX_MASK 0x60 /* WR */
-#define IPACX_AUXI 0x61 /* RD */
-#define IPACX_AUXM 0x61 /* WR */
-#define IPACX_MODE1 0x62 /* RD/WR */
-#define IPACX_MODE2 0x63 /* RD/WR */
-#define IPACX_ID 0x64 /* RD */
-#define IPACX_SRES 0x64 /* WR */
-#define IPACX_TIMR2 0x65 /* RD/WR */
-
-/* B-channel registers */
-#define IPACX_OFF_B1 0x70
-#define IPACX_OFF_B2 0x80
-
-#define IPACX_ISTAB 0x00 /* RD */
-#define IPACX_MASKB 0x00 /* WR */
-#define IPACX_STARB 0x01 /* RD */
-#define IPACX_CMDRB 0x01 /* WR */
-#define IPACX_MODEB 0x02 /* RD/WR */
-#define IPACX_EXMB 0x03 /* RD/WR */
-#define IPACX_RAH1 0x05 /* WR */
-#define IPACX_RAH2 0x06 /* WR */
-#define IPACX_RBCLB 0x06 /* RD */
-#define IPACX_RBCHB 0x07 /* RD */
-#define IPACX_RAL1 0x07 /* WR */
-#define IPACX_RAL2 0x08 /* WR */
-#define IPACX_RSTAB 0x08 /* RD */
-#define IPACX_TMB 0x09 /* RD/WR */
-#define IPACX_RFIFOB 0x0A /* RD */
-#define IPACX_XFIFOB 0x0A /* WR */
-
-/* Layer 1 Commands */
-#define IPACX_CMD_TIM 0x0
-#define IPACX_CMD_RES 0x1
-#define IPACX_CMD_SSP 0x2
-#define IPACX_CMD_SCP 0x3
-#define IPACX_CMD_AR8 0x8
-#define IPACX_CMD_AR10 0x9
-#define IPACX_CMD_ARL 0xa
-#define IPACX_CMD_DI 0xf
-
-/* Layer 1 Indications */
-#define IPACX_IND_DR 0x0
-#define IPACX_IND_RES 0x1
-#define IPACX_IND_TMA 0x2
-#define IPACX_IND_SLD 0x3
-#define IPACX_IND_RSY 0x4
-#define IPACX_IND_DR6 0x5
-#define IPACX_IND_PU 0x7
-#define IPACX_IND_AR 0x8
-#define IPACX_IND_ARL 0xa
-#define IPACX_IND_CVR 0xb
-#define IPACX_IND_AI8 0xc
-#define IPACX_IND_AI10 0xd
-#define IPACX_IND_AIL 0xe
-#define IPACX_IND_DC 0xf
-
-extern void init_ipacx(struct IsdnCardState *, int);
-extern void interrupt_ipacx(struct IsdnCardState *);
-extern void setup_isac(struct IsdnCardState *);
-
-#endif
diff --git a/drivers/isdn/hisax/isac.c b/drivers/isdn/hisax/isac.c
deleted file mode 100644
index bd40e0671ded..000000000000
--- a/drivers/isdn/hisax/isac.c
+++ /dev/null
@@ -1,681 +0,0 @@
-/* $Id: isac.c,v 1.31.2.3 2004/01/13 14:31:25 keil Exp $
- *
- * ISAC specific routines
- *
- * Author Karsten Keil
- * Copyright by Karsten Keil <keil@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- * For changes and modifications please read
- * Documentation/isdn/HiSax.cert
- *
- */
-
-#include "hisax.h"
-#include "isac.h"
-#include "arcofi.h"
-#include "isdnl1.h"
-#include <linux/interrupt.h>
-#include <linux/slab.h>
-#include <linux/init.h>
-
-#define DBUSY_TIMER_VALUE 80
-#define ARCOFI_USE 1
-
-static char *ISACVer[] =
-{"2086/2186 V1.1", "2085 B1", "2085 B2",
- "2085 V2.3"};
-
-void ISACVersion(struct IsdnCardState *cs, char *s)
-{
- int val;
-
- val = cs->readisac(cs, ISAC_RBCH);
- printk(KERN_INFO "%s ISAC version (%x): %s\n", s, val, ISACVer[(val >> 5) & 3]);
-}
-
-static void
-ph_command(struct IsdnCardState *cs, unsigned int command)
-{
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "ph_command %x", command);
- cs->writeisac(cs, ISAC_CIX0, (command << 2) | 3);
-}
-
-
-static void
-isac_new_ph(struct IsdnCardState *cs)
-{
- switch (cs->dc.isac.ph_state) {
- case (ISAC_IND_RS):
- case (ISAC_IND_EI):
- ph_command(cs, ISAC_CMD_DUI);
- l1_msg(cs, HW_RESET | INDICATION, NULL);
- break;
- case (ISAC_IND_DID):
- l1_msg(cs, HW_DEACTIVATE | CONFIRM, NULL);
- break;
- case (ISAC_IND_DR):
- l1_msg(cs, HW_DEACTIVATE | INDICATION, NULL);
- break;
- case (ISAC_IND_PU):
- l1_msg(cs, HW_POWERUP | CONFIRM, NULL);
- break;
- case (ISAC_IND_RSY):
- l1_msg(cs, HW_RSYNC | INDICATION, NULL);
- break;
- case (ISAC_IND_ARD):
- l1_msg(cs, HW_INFO2 | INDICATION, NULL);
- break;
- case (ISAC_IND_AI8):
- l1_msg(cs, HW_INFO4_P8 | INDICATION, NULL);
- break;
- case (ISAC_IND_AI10):
- l1_msg(cs, HW_INFO4_P10 | INDICATION, NULL);
- break;
- default:
- break;
- }
-}
-
-static void
-isac_bh(struct work_struct *work)
-{
- struct IsdnCardState *cs =
- container_of(work, struct IsdnCardState, tqueue);
- struct PStack *stptr;
-
- if (test_and_clear_bit(D_CLEARBUSY, &cs->event)) {
- if (cs->debug)
- debugl1(cs, "D-Channel Busy cleared");
- stptr = cs->stlist;
- while (stptr != NULL) {
- stptr->l1.l1l2(stptr, PH_PAUSE | CONFIRM, NULL);
- stptr = stptr->next;
- }
- }
- if (test_and_clear_bit(D_L1STATECHANGE, &cs->event))
- isac_new_ph(cs);
- if (test_and_clear_bit(D_RCVBUFREADY, &cs->event))
- DChannel_proc_rcv(cs);
- if (test_and_clear_bit(D_XMTBUFREADY, &cs->event))
- DChannel_proc_xmt(cs);
-#if ARCOFI_USE
- if (!test_bit(HW_ARCOFI, &cs->HW_Flags))
- return;
- if (test_and_clear_bit(D_RX_MON1, &cs->event))
- arcofi_fsm(cs, ARCOFI_RX_END, NULL);
- if (test_and_clear_bit(D_TX_MON1, &cs->event))
- arcofi_fsm(cs, ARCOFI_TX_END, NULL);
-#endif
-}
-
-static void
-isac_empty_fifo(struct IsdnCardState *cs, int count)
-{
- u_char *ptr;
-
- if ((cs->debug & L1_DEB_ISAC) && !(cs->debug & L1_DEB_ISAC_FIFO))
- debugl1(cs, "isac_empty_fifo");
-
- if ((cs->rcvidx + count) >= MAX_DFRAME_LEN_L1) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "isac_empty_fifo overrun %d",
- cs->rcvidx + count);
- cs->writeisac(cs, ISAC_CMDR, 0x80);
- cs->rcvidx = 0;
- return;
- }
- ptr = cs->rcvbuf + cs->rcvidx;
- cs->rcvidx += count;
- cs->readisacfifo(cs, ptr, count);
- cs->writeisac(cs, ISAC_CMDR, 0x80);
- if (cs->debug & L1_DEB_ISAC_FIFO) {
- char *t = cs->dlog;
-
- t += sprintf(t, "isac_empty_fifo cnt %d", count);
- QuickHex(t, ptr, count);
- debugl1(cs, "%s", cs->dlog);
- }
-}
-
-static void
-isac_fill_fifo(struct IsdnCardState *cs)
-{
- int count, more;
- u_char *ptr;
-
- if ((cs->debug & L1_DEB_ISAC) && !(cs->debug & L1_DEB_ISAC_FIFO))
- debugl1(cs, "isac_fill_fifo");
-
- if (!cs->tx_skb)
- return;
-
- count = cs->tx_skb->len;
- if (count <= 0)
- return;
-
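- /* The ISAC transmit FIFO holds 32 bytes; 'more' is set when the frame
-  * needs further chunks (CMDR 0x8 = XTF only, 0xa = XTF | XME). */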
- more = 0;
- if (count > 32) {
- more = !0;
- count = 32;
- }
- ptr = cs->tx_skb->data;
- skb_pull(cs->tx_skb, count);
- cs->tx_cnt += count;
- cs->writeisacfifo(cs, ptr, count);
- cs->writeisac(cs, ISAC_CMDR, more ? 0x8 : 0xa);
- if (test_and_set_bit(FLG_DBUSY_TIMER, &cs->HW_Flags)) {
- debugl1(cs, "isac_fill_fifo dbusytimer running");
- del_timer(&cs->dbusytimer);
- }
- cs->dbusytimer.expires = jiffies + ((DBUSY_TIMER_VALUE * HZ)/1000);
- add_timer(&cs->dbusytimer);
- if (cs->debug & L1_DEB_ISAC_FIFO) {
- char *t = cs->dlog;
-
- t += sprintf(t, "isac_fill_fifo cnt %d", count);
- QuickHex(t, ptr, count);
- debugl1(cs, "%s", cs->dlog);
- }
-}
-
-void
-isac_interrupt(struct IsdnCardState *cs, u_char val)
-{
- u_char exval, v1;
- struct sk_buff *skb;
- unsigned int count;
-
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "ISAC interrupt %x", val);
- if (val & 0x80) { /* RME */
- exval = cs->readisac(cs, ISAC_RSTA);
- if ((exval & 0x70) != 0x20) {
- if (exval & 0x40) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "ISAC RDO");
-#ifdef ERROR_STATISTIC
- cs->err_rx++;
-#endif
- }
- if (!(exval & 0x20)) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "ISAC CRC error");
-#ifdef ERROR_STATISTIC
- cs->err_crc++;
-#endif
- }
- cs->writeisac(cs, ISAC_CMDR, 0x80);
- } else {
- count = cs->readisac(cs, ISAC_RBCL) & 0x1f;
- if (count == 0)
- count = 32;
- isac_empty_fifo(cs, count);
- count = cs->rcvidx;
- if (count > 0) {
- cs->rcvidx = 0;
- skb = alloc_skb(count, GFP_ATOMIC);
- if (!skb)
- printk(KERN_WARNING "HiSax: D receive out of memory\n");
- else {
- skb_put_data(skb, cs->rcvbuf, count);
- skb_queue_tail(&cs->rq, skb);
- }
- }
- }
- cs->rcvidx = 0;
- schedule_event(cs, D_RCVBUFREADY);
- }
- if (val & 0x40) { /* RPF */
- isac_empty_fifo(cs, 32);
- }
- if (val & 0x20) { /* RSC */
- /* never */
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "ISAC RSC interrupt");
- }
- if (val & 0x10) { /* XPR */
- if (test_and_clear_bit(FLG_DBUSY_TIMER, &cs->HW_Flags))
- del_timer(&cs->dbusytimer);
- if (test_and_clear_bit(FLG_L1_DBUSY, &cs->HW_Flags))
- schedule_event(cs, D_CLEARBUSY);
- if (cs->tx_skb) {
- if (cs->tx_skb->len) {
- isac_fill_fifo(cs);
- goto afterXPR;
- } else {
- dev_kfree_skb_irq(cs->tx_skb);
- cs->tx_cnt = 0;
- cs->tx_skb = NULL;
- }
- }
- cs->tx_skb = skb_dequeue(&cs->sq);
- if (cs->tx_skb) {
- cs->tx_cnt = 0;
- isac_fill_fifo(cs);
- } else
- schedule_event(cs, D_XMTBUFREADY);
- }
-afterXPR:
- if (val & 0x04) { /* CISQ */
- exval = cs->readisac(cs, ISAC_CIR0);
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "ISAC CIR0 %02X", exval);
- if (exval & 2) {
- cs->dc.isac.ph_state = (exval >> 2) & 0xf;
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "ph_state change %x", cs->dc.isac.ph_state);
- schedule_event(cs, D_L1STATECHANGE);
- }
- if (exval & 1) {
- exval = cs->readisac(cs, ISAC_CIR1);
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "ISAC CIR1 %02X", exval);
- }
- }
- if (val & 0x02) { /* SIN */
- /* never */
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "ISAC SIN interrupt");
- }
- if (val & 0x01) { /* EXI */
- exval = cs->readisac(cs, ISAC_EXIR);
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "ISAC EXIR %02x", exval);
- if (exval & 0x80) { /* XMR */
- debugl1(cs, "ISAC XMR");
- printk(KERN_WARNING "HiSax: ISAC XMR\n");
- }
- if (exval & 0x40) { /* XDU */
- debugl1(cs, "ISAC XDU");
- printk(KERN_WARNING "HiSax: ISAC XDU\n");
-#ifdef ERROR_STATISTIC
- cs->err_tx++;
-#endif
- if (test_and_clear_bit(FLG_DBUSY_TIMER, &cs->HW_Flags))
- del_timer(&cs->dbusytimer);
- if (test_and_clear_bit(FLG_L1_DBUSY, &cs->HW_Flags))
- schedule_event(cs, D_CLEARBUSY);
- if (cs->tx_skb) { /* Restart frame */
- skb_push(cs->tx_skb, cs->tx_cnt);
- cs->tx_cnt = 0;
- isac_fill_fifo(cs);
- } else {
- printk(KERN_WARNING "HiSax: ISAC XDU no skb\n");
- debugl1(cs, "ISAC XDU no skb");
- }
- }
- if (exval & 0x04) { /* MOS */
- v1 = cs->readisac(cs, ISAC_MOSR);
- if (cs->debug & L1_DEB_MONITOR)
- debugl1(cs, "ISAC MOSR %02x", v1);
-#if ARCOFI_USE
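- /* Monitor channel handling: received bytes are collected in
-  * dc.isac.mon_rx and the bottom half is notified through the
-  * D_RX_MON* / D_TX_MON* events. */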
- if (v1 & 0x08) {
- if (!cs->dc.isac.mon_rx) {
- cs->dc.isac.mon_rx = kmalloc(MAX_MON_FRAME, GFP_ATOMIC);
- if (!cs->dc.isac.mon_rx) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "ISAC MON RX out of memory!");
- cs->dc.isac.mocr &= 0xf0;
- cs->dc.isac.mocr |= 0x0a;
- cs->writeisac(cs, ISAC_MOCR, cs->dc.isac.mocr);
- goto afterMONR0;
- } else
- cs->dc.isac.mon_rxp = 0;
- }
- if (cs->dc.isac.mon_rxp >= MAX_MON_FRAME) {
- cs->dc.isac.mocr &= 0xf0;
- cs->dc.isac.mocr |= 0x0a;
- cs->writeisac(cs, ISAC_MOCR, cs->dc.isac.mocr);
- cs->dc.isac.mon_rxp = 0;
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "ISAC MON RX overflow!");
- goto afterMONR0;
- }
- cs->dc.isac.mon_rx[cs->dc.isac.mon_rxp++] = cs->readisac(cs, ISAC_MOR0);
- if (cs->debug & L1_DEB_MONITOR)
- debugl1(cs, "ISAC MOR0 %02x", cs->dc.isac.mon_rx[cs->dc.isac.mon_rxp - 1]);
- if (cs->dc.isac.mon_rxp == 1) {
- cs->dc.isac.mocr |= 0x04;
- cs->writeisac(cs, ISAC_MOCR, cs->dc.isac.mocr);
- }
- }
- afterMONR0:
- if (v1 & 0x80) {
- if (!cs->dc.isac.mon_rx) {
- cs->dc.isac.mon_rx = kmalloc(MAX_MON_FRAME, GFP_ATOMIC);
- if (!cs->dc.isac.mon_rx) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "ISAC MON RX out of memory!");
- cs->dc.isac.mocr &= 0x0f;
- cs->dc.isac.mocr |= 0xa0;
- cs->writeisac(cs, ISAC_MOCR, cs->dc.isac.mocr);
- goto afterMONR1;
- } else
- cs->dc.isac.mon_rxp = 0;
- }
- if (cs->dc.isac.mon_rxp >= MAX_MON_FRAME) {
- cs->dc.isac.mocr &= 0x0f;
- cs->dc.isac.mocr |= 0xa0;
- cs->writeisac(cs, ISAC_MOCR, cs->dc.isac.mocr);
- cs->dc.isac.mon_rxp = 0;
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "ISAC MON RX overflow!");
- goto afterMONR1;
- }
- cs->dc.isac.mon_rx[cs->dc.isac.mon_rxp++] = cs->readisac(cs, ISAC_MOR1);
- if (cs->debug & L1_DEB_MONITOR)
- debugl1(cs, "ISAC MOR1 %02x", cs->dc.isac.mon_rx[cs->dc.isac.mon_rxp - 1]);
- cs->dc.isac.mocr |= 0x40;
- cs->writeisac(cs, ISAC_MOCR, cs->dc.isac.mocr);
- }
- afterMONR1:
- if (v1 & 0x04) {
- cs->dc.isac.mocr &= 0xf0;
- cs->writeisac(cs, ISAC_MOCR, cs->dc.isac.mocr);
- cs->dc.isac.mocr |= 0x0a;
- cs->writeisac(cs, ISAC_MOCR, cs->dc.isac.mocr);
- schedule_event(cs, D_RX_MON0);
- }
- if (v1 & 0x40) {
- cs->dc.isac.mocr &= 0x0f;
- cs->writeisac(cs, ISAC_MOCR, cs->dc.isac.mocr);
- cs->dc.isac.mocr |= 0xa0;
- cs->writeisac(cs, ISAC_MOCR, cs->dc.isac.mocr);
- schedule_event(cs, D_RX_MON1);
- }
- if (v1 & 0x02) {
- if ((!cs->dc.isac.mon_tx) || (cs->dc.isac.mon_txc &&
- (cs->dc.isac.mon_txp >= cs->dc.isac.mon_txc) &&
- !(v1 & 0x08))) {
- cs->dc.isac.mocr &= 0xf0;
- cs->writeisac(cs, ISAC_MOCR, cs->dc.isac.mocr);
- cs->dc.isac.mocr |= 0x0a;
- cs->writeisac(cs, ISAC_MOCR, cs->dc.isac.mocr);
- if (cs->dc.isac.mon_txc &&
- (cs->dc.isac.mon_txp >= cs->dc.isac.mon_txc))
- schedule_event(cs, D_TX_MON0);
- goto AfterMOX0;
- }
- if (cs->dc.isac.mon_txc && (cs->dc.isac.mon_txp >= cs->dc.isac.mon_txc)) {
- schedule_event(cs, D_TX_MON0);
- goto AfterMOX0;
- }
- cs->writeisac(cs, ISAC_MOX0,
- cs->dc.isac.mon_tx[cs->dc.isac.mon_txp++]);
- if (cs->debug & L1_DEB_MONITOR)
- debugl1(cs, "ISAC %02x -> MOX0", cs->dc.isac.mon_tx[cs->dc.isac.mon_txp - 1]);
- }
- AfterMOX0:
- if (v1 & 0x20) {
- if ((!cs->dc.isac.mon_tx) || (cs->dc.isac.mon_txc &&
- (cs->dc.isac.mon_txp >= cs->dc.isac.mon_txc) &&
- !(v1 & 0x80))) {
- cs->dc.isac.mocr &= 0x0f;
- cs->writeisac(cs, ISAC_MOCR, cs->dc.isac.mocr);
- cs->dc.isac.mocr |= 0xa0;
- cs->writeisac(cs, ISAC_MOCR, cs->dc.isac.mocr);
- if (cs->dc.isac.mon_txc &&
- (cs->dc.isac.mon_txp >= cs->dc.isac.mon_txc))
- schedule_event(cs, D_TX_MON1);
- goto AfterMOX1;
- }
- if (cs->dc.isac.mon_txc && (cs->dc.isac.mon_txp >= cs->dc.isac.mon_txc)) {
- schedule_event(cs, D_TX_MON1);
- goto AfterMOX1;
- }
- cs->writeisac(cs, ISAC_MOX1,
- cs->dc.isac.mon_tx[cs->dc.isac.mon_txp++]);
- if (cs->debug & L1_DEB_MONITOR)
- debugl1(cs, "ISAC %02x -> MOX1", cs->dc.isac.mon_tx[cs->dc.isac.mon_txp - 1]);
- }
- AfterMOX1:;
-#endif
- }
- }
-}
-
-static void
-ISAC_l1hw(struct PStack *st, int pr, void *arg)
-{
- struct IsdnCardState *cs = (struct IsdnCardState *) st->l1.hardware;
- struct sk_buff *skb = arg;
- u_long flags;
- int val;
-
- switch (pr) {
- case (PH_DATA | REQUEST):
- if (cs->debug & DEB_DLOG_HEX)
- LogFrame(cs, skb->data, skb->len);
- if (cs->debug & DEB_DLOG_VERBOSE)
- dlogframe(cs, skb, 0);
- spin_lock_irqsave(&cs->lock, flags);
- if (cs->tx_skb) {
- skb_queue_tail(&cs->sq, skb);
-#ifdef L2FRAME_DEBUG /* psa */
- if (cs->debug & L1_DEB_LAPD)
- Logl2Frame(cs, skb, "PH_DATA Queued", 0);
-#endif
- } else {
- cs->tx_skb = skb;
- cs->tx_cnt = 0;
-#ifdef L2FRAME_DEBUG /* psa */
- if (cs->debug & L1_DEB_LAPD)
- Logl2Frame(cs, skb, "PH_DATA", 0);
-#endif
- isac_fill_fifo(cs);
- }
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- case (PH_PULL | INDICATION):
- spin_lock_irqsave(&cs->lock, flags);
- if (cs->tx_skb) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, " l2l1 tx_skb exist this shouldn't happen");
- skb_queue_tail(&cs->sq, skb);
- } else {
- if (cs->debug & DEB_DLOG_HEX)
- LogFrame(cs, skb->data, skb->len);
- if (cs->debug & DEB_DLOG_VERBOSE)
- dlogframe(cs, skb, 0);
- cs->tx_skb = skb;
- cs->tx_cnt = 0;
-#ifdef L2FRAME_DEBUG /* psa */
- if (cs->debug & L1_DEB_LAPD)
- Logl2Frame(cs, skb, "PH_DATA_PULLED", 0);
-#endif
- isac_fill_fifo(cs);
- }
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- case (PH_PULL | REQUEST):
-#ifdef L2FRAME_DEBUG /* psa */
- if (cs->debug & L1_DEB_LAPD)
- debugl1(cs, "-> PH_REQUEST_PULL");
-#endif
- if (!cs->tx_skb) {
- test_and_clear_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- st->l1.l1l2(st, PH_PULL | CONFIRM, NULL);
- } else
- test_and_set_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- break;
- case (HW_RESET | REQUEST):
- spin_lock_irqsave(&cs->lock, flags);
- if ((cs->dc.isac.ph_state == ISAC_IND_EI) ||
- (cs->dc.isac.ph_state == ISAC_IND_DR) ||
- (cs->dc.isac.ph_state == ISAC_IND_RS))
- ph_command(cs, ISAC_CMD_TIM);
- else
- ph_command(cs, ISAC_CMD_RS);
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- case (HW_ENABLE | REQUEST):
- spin_lock_irqsave(&cs->lock, flags);
- ph_command(cs, ISAC_CMD_TIM);
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- case (HW_INFO3 | REQUEST):
- spin_lock_irqsave(&cs->lock, flags);
- ph_command(cs, ISAC_CMD_AR8);
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- case (HW_TESTLOOP | REQUEST):
- spin_lock_irqsave(&cs->lock, flags);
- val = 0;
- if (1 & (long) arg)
- val |= 0x0c;
- if (2 & (long) arg)
- val |= 0x3;
- if (test_bit(HW_IOM1, &cs->HW_Flags)) {
- /* IOM 1 Mode */
- if (!val) {
- cs->writeisac(cs, ISAC_SPCR, 0xa);
- cs->writeisac(cs, ISAC_ADF1, 0x2);
- } else {
- cs->writeisac(cs, ISAC_SPCR, val);
- cs->writeisac(cs, ISAC_ADF1, 0xa);
- }
- } else {
- /* IOM 2 Mode */
- cs->writeisac(cs, ISAC_SPCR, val);
- if (val)
- cs->writeisac(cs, ISAC_ADF1, 0x8);
- else
- cs->writeisac(cs, ISAC_ADF1, 0x0);
- }
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- case (HW_DEACTIVATE | RESPONSE):
- skb_queue_purge(&cs->rq);
- skb_queue_purge(&cs->sq);
- if (cs->tx_skb) {
- dev_kfree_skb_any(cs->tx_skb);
- cs->tx_skb = NULL;
- }
- if (test_and_clear_bit(FLG_DBUSY_TIMER, &cs->HW_Flags))
- del_timer(&cs->dbusytimer);
- if (test_and_clear_bit(FLG_L1_DBUSY, &cs->HW_Flags))
- schedule_event(cs, D_CLEARBUSY);
- break;
- default:
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "isac_l1hw unknown %04x", pr);
- break;
- }
-}
-
-static void
-setstack_isac(struct PStack *st, struct IsdnCardState *cs)
-{
- st->l1.l1hw = ISAC_l1hw;
-}
-
-static void
-DC_Close_isac(struct IsdnCardState *cs)
-{
- kfree(cs->dc.isac.mon_rx);
- cs->dc.isac.mon_rx = NULL;
- kfree(cs->dc.isac.mon_tx);
- cs->dc.isac.mon_tx = NULL;
-}
-
-static void
-dbusy_timer_handler(struct timer_list *t)
-{
- struct IsdnCardState *cs = from_timer(cs, t, dbusytimer);
- struct PStack *stptr;
- int rbch, star;
-
- if (test_bit(FLG_DBUSY_TIMER, &cs->HW_Flags)) {
- rbch = cs->readisac(cs, ISAC_RBCH);
- star = cs->readisac(cs, ISAC_STAR);
- if (cs->debug)
- debugl1(cs, "D-Channel Busy RBCH %02x STAR %02x",
- rbch, star);
- if (rbch & ISAC_RBCH_XAC) { /* D-Channel Busy */
- test_and_set_bit(FLG_L1_DBUSY, &cs->HW_Flags);
- stptr = cs->stlist;
- while (stptr != NULL) {
- stptr->l1.l1l2(stptr, PH_PAUSE | INDICATION, NULL);
- stptr = stptr->next;
- }
- } else {
- /* discard frame; reset transceiver */
- test_and_clear_bit(FLG_DBUSY_TIMER, &cs->HW_Flags);
- if (cs->tx_skb) {
- dev_kfree_skb_any(cs->tx_skb);
- cs->tx_cnt = 0;
- cs->tx_skb = NULL;
- } else {
- printk(KERN_WARNING "HiSax: ISAC D-Channel Busy no skb\n");
- debugl1(cs, "D-Channel Busy no skb");
- }
- cs->writeisac(cs, ISAC_CMDR, 0x01); /* Transmitter reset */
- cs->irq_func(cs->irq, cs);
- }
- }
-}
-
-void initisac(struct IsdnCardState *cs)
-{
- cs->setstack_d = setstack_isac;
- cs->DC_Close = DC_Close_isac;
- cs->dc.isac.mon_tx = NULL;
- cs->dc.isac.mon_rx = NULL;
- cs->writeisac(cs, ISAC_MASK, 0xff);
- cs->dc.isac.mocr = 0xaa;
- if (test_bit(HW_IOM1, &cs->HW_Flags)) {
- /* IOM 1 Mode */
- cs->writeisac(cs, ISAC_ADF2, 0x0);
- cs->writeisac(cs, ISAC_SPCR, 0xa);
- cs->writeisac(cs, ISAC_ADF1, 0x2);
- cs->writeisac(cs, ISAC_STCR, 0x70);
- cs->writeisac(cs, ISAC_MODE, 0xc9);
- } else {
- /* IOM 2 Mode */
- if (!cs->dc.isac.adf2)
- cs->dc.isac.adf2 = 0x80;
- cs->writeisac(cs, ISAC_ADF2, cs->dc.isac.adf2);
- cs->writeisac(cs, ISAC_SQXR, 0x2f);
- cs->writeisac(cs, ISAC_SPCR, 0x00);
- cs->writeisac(cs, ISAC_STCR, 0x70);
- cs->writeisac(cs, ISAC_MODE, 0xc9);
- cs->writeisac(cs, ISAC_TIMR, 0x00);
- cs->writeisac(cs, ISAC_ADF1, 0x00);
- }
- ph_command(cs, ISAC_CMD_RS);
- cs->writeisac(cs, ISAC_MASK, 0x0);
-}
-
-void clear_pending_isac_ints(struct IsdnCardState *cs)
-{
- int val, eval;
-
- val = cs->readisac(cs, ISAC_STAR);
- debugl1(cs, "ISAC STAR %x", val);
- val = cs->readisac(cs, ISAC_MODE);
- debugl1(cs, "ISAC MODE %x", val);
- val = cs->readisac(cs, ISAC_ADF2);
- debugl1(cs, "ISAC ADF2 %x", val);
- val = cs->readisac(cs, ISAC_ISTA);
- debugl1(cs, "ISAC ISTA %x", val);
- if (val & 0x01) {
- eval = cs->readisac(cs, ISAC_EXIR);
- debugl1(cs, "ISAC EXIR %x", eval);
- }
- val = cs->readisac(cs, ISAC_CIR0);
- debugl1(cs, "ISAC CIR0 %x", val);
- cs->dc.isac.ph_state = (val >> 2) & 0xf;
- schedule_event(cs, D_L1STATECHANGE);
- /* Disable all IRQ */
- cs->writeisac(cs, ISAC_MASK, 0xFF);
-}
-
-void setup_isac(struct IsdnCardState *cs)
-{
- INIT_WORK(&cs->tqueue, isac_bh);
- timer_setup(&cs->dbusytimer, dbusy_timer_handler, 0);
-}
diff --git a/drivers/isdn/hisax/isac.h b/drivers/isdn/hisax/isac.h
deleted file mode 100644
index 04f16b91b822..000000000000
--- a/drivers/isdn/hisax/isac.h
+++ /dev/null
@@ -1,70 +0,0 @@
-/* $Id: isac.h,v 1.9.2.2 2004/01/12 22:52:27 keil Exp $
- *
- * ISAC specific defines
- *
- * Author Karsten Keil
- * Copyright by Karsten Keil <keil@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-/* All Registers original Siemens Spec */
-
-#define ISAC_MASK 0x20
-#define ISAC_ISTA 0x20
-#define ISAC_STAR 0x21
-#define ISAC_CMDR 0x21
-#define ISAC_EXIR 0x24
-#define ISAC_ADF2 0x39
-#define ISAC_SPCR 0x30
-#define ISAC_ADF1 0x38
-#define ISAC_CIR0 0x31
-#define ISAC_CIX0 0x31
-#define ISAC_CIR1 0x33
-#define ISAC_CIX1 0x33
-#define ISAC_STCR 0x37
-#define ISAC_MODE 0x22
-#define ISAC_RSTA 0x27
-#define ISAC_RBCL 0x25
-#define ISAC_RBCH 0x2A
-#define ISAC_TIMR 0x23
-#define ISAC_SQXR 0x3b
-#define ISAC_MOSR 0x3a
-#define ISAC_MOCR 0x3a
-#define ISAC_MOR0 0x32
-#define ISAC_MOX0 0x32
-#define ISAC_MOR1 0x34
-#define ISAC_MOX1 0x34
-
-#define ISAC_RBCH_XAC 0x80
-
-#define ISAC_CMD_TIM 0x0
-#define ISAC_CMD_RS 0x1
-#define ISAC_CMD_SCZ 0x4
-#define ISAC_CMD_SSZ 0x2
-#define ISAC_CMD_AR8 0x8
-#define ISAC_CMD_AR10 0x9
-#define ISAC_CMD_ARL 0xA
-#define ISAC_CMD_DUI 0xF
-
-#define ISAC_IND_RS 0x1
-#define ISAC_IND_PU 0x7
-#define ISAC_IND_DR 0x0
-#define ISAC_IND_SD 0x2
-#define ISAC_IND_DIS 0x3
-#define ISAC_IND_EI 0x6
-#define ISAC_IND_RSY 0x4
-#define ISAC_IND_ARD 0x8
-#define ISAC_IND_TI 0xA
-#define ISAC_IND_ATI 0xB
-#define ISAC_IND_AI8 0xC
-#define ISAC_IND_AI10 0xD
-#define ISAC_IND_DID 0xF
-
-extern void ISACVersion(struct IsdnCardState *, char *);
-extern void setup_isac(struct IsdnCardState *);
-extern void initisac(struct IsdnCardState *);
-extern void isac_interrupt(struct IsdnCardState *, u_char);
-extern void clear_pending_isac_ints(struct IsdnCardState *);
diff --git a/drivers/isdn/hisax/isar.c b/drivers/isdn/hisax/isar.c
deleted file mode 100644
index 82c1879f5664..000000000000
--- a/drivers/isdn/hisax/isar.c
+++ /dev/null
@@ -1,1910 +0,0 @@
-/* $Id: isar.c,v 1.22.2.6 2004/02/11 13:21:34 keil Exp $
- *
- * isar.c ISAR (Siemens PSB 7110) specific routines
- *
- * Author Karsten Keil (keil@isdn4linux.de)
- *
- * This file is (c) under GNU General Public License
- *
- */
-
-#include <linux/init.h>
-#include "hisax.h"
-#include "isar.h"
-#include "isdnl1.h"
-#include <linux/interrupt.h>
-#include <linux/slab.h>
-
-#define DBG_LOADFIRM 0
-#define DUMP_MBOXFRAME 2
-
-#define DLE 0x10
-#define ETX 0x03
-
-#define FAXMODCNT 13
-static const u_char faxmodulation[] = {3, 24, 48, 72, 73, 74, 96, 97, 98, 121, 122, 145, 146};
-static u_int modmask = 0x1fff;
-static int frm_extra_delay = 2;
-static int para_TOA = 6;
-static const u_char *FC1_CMD[] = {"FAE", "FTS", "FRS", "FTM", "FRM", "FTH", "FRH", "CTRL"};
-
-static void isar_setup(struct IsdnCardState *cs);
-static void isar_pump_cmd(struct BCState *bcs, u_char cmd, u_char para);
-static void ll_deliver_faxstat(struct BCState *bcs, u_char status);
-
-static inline int
-waitforHIA(struct IsdnCardState *cs, int timeout)
-{
-
- while ((cs->BC_Read_Reg(cs, 0, ISAR_HIA) & 1) && timeout) {
- udelay(1);
- timeout--;
- }
- if (!timeout)
- printk(KERN_WARNING "HiSax: ISAR waitforHIA timeout\n");
- return (timeout);
-}
-
-
-static int
-sendmsg(struct IsdnCardState *cs, u_char his, u_char creg, u_char len,
- u_char *msg)
-{
- int i;
-
- if (!waitforHIA(cs, 4000))
- return (0);
-#if DUMP_MBOXFRAME
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "sendmsg(%02x,%02x,%d)", his, creg, len);
-#endif
- cs->BC_Write_Reg(cs, 0, ISAR_CTRL_H, creg);
- cs->BC_Write_Reg(cs, 0, ISAR_CTRL_L, len);
- cs->BC_Write_Reg(cs, 0, ISAR_WADR, 0);
- if (msg && len) {
- cs->BC_Write_Reg(cs, 1, ISAR_MBOX, msg[0]);
- for (i = 1; i < len; i++)
- cs->BC_Write_Reg(cs, 2, ISAR_MBOX, msg[i]);
-#if DUMP_MBOXFRAME > 1
- if (cs->debug & L1_DEB_HSCX_FIFO) {
- char tmp[256], *t;
-
- i = len;
- while (i > 0) {
- t = tmp;
- t += sprintf(t, "sendmbox cnt %d", len);
- QuickHex(t, &msg[len-i], (i > 64) ? 64 : i);
- debugl1(cs, "%s", tmp);
- i -= 64;
- }
- }
-#endif
- }
- cs->BC_Write_Reg(cs, 1, ISAR_HIS, his);
- waitforHIA(cs, 10000);
- return (1);
-}
-
-/* Call only with IRQ disabled !!! */
-static inline void
-rcv_mbox(struct IsdnCardState *cs, struct isar_reg *ireg, u_char *msg)
-{
- int i;
-
- cs->BC_Write_Reg(cs, 1, ISAR_RADR, 0);
- if (msg && ireg->clsb) {
- msg[0] = cs->BC_Read_Reg(cs, 1, ISAR_MBOX);
- for (i = 1; i < ireg->clsb; i++)
- msg[i] = cs->BC_Read_Reg(cs, 2, ISAR_MBOX);
-#if DUMP_MBOXFRAME > 1
- if (cs->debug & L1_DEB_HSCX_FIFO) {
- char tmp[256], *t;
-
- i = ireg->clsb;
- while (i > 0) {
- t = tmp;
- t += sprintf(t, "rcv_mbox cnt %d", ireg->clsb);
- QuickHex(t, &msg[ireg->clsb - i], (i > 64) ? 64 : i);
- debugl1(cs, "%s", tmp);
- i -= 64;
- }
- }
-#endif
- }
- cs->BC_Write_Reg(cs, 1, ISAR_IIA, 0);
-}
-
-/* Call only with IRQ disabled !!! */
-static inline void
-get_irq_infos(struct IsdnCardState *cs, struct isar_reg *ireg)
-{
- ireg->iis = cs->BC_Read_Reg(cs, 1, ISAR_IIS);
- ireg->cmsb = cs->BC_Read_Reg(cs, 1, ISAR_CTRL_H);
- ireg->clsb = cs->BC_Read_Reg(cs, 1, ISAR_CTRL_L);
-#if DUMP_MBOXFRAME
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "irq_stat(%02x,%02x,%d)", ireg->iis, ireg->cmsb,
- ireg->clsb);
-#endif
-}
-
-static int
-waitrecmsg(struct IsdnCardState *cs, u_char *len,
- u_char *msg, int maxdelay)
-{
- int timeout = 0;
- struct isar_reg *ir = cs->bcs[0].hw.isar.reg;
-
-
- while ((!(cs->BC_Read_Reg(cs, 0, ISAR_IRQBIT) & ISAR_IRQSTA)) &&
- (timeout++ < maxdelay))
- udelay(1);
- if (timeout > maxdelay) {
- printk(KERN_WARNING"isar recmsg IRQSTA timeout\n");
- return (0);
- }
- get_irq_infos(cs, ir);
- rcv_mbox(cs, ir, msg);
- *len = ir->clsb;
- return (1);
-}
-
-int
-ISARVersion(struct IsdnCardState *cs, char *s)
-{
- int ver;
- u_char msg[] = ISAR_MSG_HWVER;
- u_char tmp[64];
- u_char len;
- u_long flags;
- int debug;
-
- cs->cardmsg(cs, CARD_RESET, NULL);
- spin_lock_irqsave(&cs->lock, flags);
- /* disable ISAR IRQ */
- cs->BC_Write_Reg(cs, 0, ISAR_IRQBIT, 0);
- debug = cs->debug;
- cs->debug &= ~(L1_DEB_HSCX | L1_DEB_HSCX_FIFO);
- if (!sendmsg(cs, ISAR_HIS_VNR, 0, 3, msg)) {
- spin_unlock_irqrestore(&cs->lock, flags);
- return (-1);
- }
- if (!waitrecmsg(cs, &len, tmp, 100000)) {
- spin_unlock_irqrestore(&cs->lock, flags);
- return (-2);
- }
- cs->debug = debug;
- if (cs->bcs[0].hw.isar.reg->iis == ISAR_IIS_VNR) {
- if (len == 1) {
- ver = tmp[0] & 0xf;
- printk(KERN_INFO "%s ISAR version %d\n", s, ver);
- } else
- ver = -3;
- } else
- ver = -4;
- spin_unlock_irqrestore(&cs->lock, flags);
- return (ver);
-}
-
-static int
-isar_load_firmware(struct IsdnCardState *cs, u_char __user *buf)
-{
- int cfu_ret, ret, size, cnt, debug;
- u_char len, nom, noc;
- u_short sadr, left, *sp;
- u_char __user *p = buf;
- u_char *msg, *tmpmsg, *mp, tmp[64];
- u_long flags;
- struct isar_reg *ireg = cs->bcs[0].hw.isar.reg;
-
- struct {u_short sadr;
- u_short len;
- u_short d_key;
- } blk_head;
-
-#define BLK_HEAD_SIZE 6
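- /* Firmware image layout: an int holding the total payload size, followed
-  * by blocks of {load address, length in 16-bit words, download key} and
-  * the corresponding data words. */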
- if (1 != (ret = ISARVersion(cs, "Testing"))) {
- printk(KERN_ERR"isar_load_firmware wrong isar version %d\n", ret);
- return (1);
- }
- debug = cs->debug;
-#if DBG_LOADFIRM < 2
- cs->debug &= ~(L1_DEB_HSCX | L1_DEB_HSCX_FIFO);
-#endif
-
- cfu_ret = copy_from_user(&size, p, sizeof(int));
- if (cfu_ret) {
- printk(KERN_ERR "isar_load_firmware copy_from_user ret %d\n", cfu_ret);
- return -EFAULT;
- }
- p += sizeof(int);
- printk(KERN_DEBUG"isar_load_firmware size: %d\n", size);
- cnt = 0;
- /* disable ISAR IRQ */
- cs->BC_Write_Reg(cs, 0, ISAR_IRQBIT, 0);
- if (!(msg = kmalloc(256, GFP_KERNEL))) {
- printk(KERN_ERR"isar_load_firmware no buffer\n");
- return (1);
- }
- if (!(tmpmsg = kmalloc(256, GFP_KERNEL))) {
- printk(KERN_ERR"isar_load_firmware no tmp buffer\n");
- kfree(msg);
- return (1);
- }
- spin_lock_irqsave(&cs->lock, flags);
- /* disable ISAR IRQ */
- cs->BC_Write_Reg(cs, 0, ISAR_IRQBIT, 0);
- spin_unlock_irqrestore(&cs->lock, flags);
- while (cnt < size) {
- if ((ret = copy_from_user(&blk_head, p, BLK_HEAD_SIZE))) {
- printk(KERN_ERR"isar_load_firmware copy_from_user ret %d\n", ret);
- goto reterror;
- }
-#ifdef __BIG_ENDIAN
- sadr = (blk_head.sadr & 0xff) * 256 + blk_head.sadr / 256;
- blk_head.sadr = sadr;
- sadr = (blk_head.len & 0xff) * 256 + blk_head.len / 256;
- blk_head.len = sadr;
- sadr = (blk_head.d_key & 0xff) * 256 + blk_head.d_key / 256;
- blk_head.d_key = sadr;
-#endif /* __BIG_ENDIAN */
- cnt += BLK_HEAD_SIZE;
- p += BLK_HEAD_SIZE;
- printk(KERN_DEBUG"isar firmware block (%#x,%5d,%#x)\n",
- blk_head.sadr, blk_head.len, blk_head.d_key & 0xff);
- sadr = blk_head.sadr;
- left = blk_head.len;
- spin_lock_irqsave(&cs->lock, flags);
- if (!sendmsg(cs, ISAR_HIS_DKEY, blk_head.d_key & 0xff, 0, NULL)) {
- printk(KERN_ERR"isar sendmsg dkey failed\n");
- ret = 1; goto reterr_unlock;
- }
- if (!waitrecmsg(cs, &len, tmp, 100000)) {
- printk(KERN_ERR"isar waitrecmsg dkey failed\n");
- ret = 1; goto reterr_unlock;
- }
- if ((ireg->iis != ISAR_IIS_DKEY) || ireg->cmsb || len) {
- printk(KERN_ERR"isar wrong dkey response (%x,%x,%x)\n",
- ireg->iis, ireg->cmsb, len);
- ret = 1; goto reterr_unlock;
- }
- spin_unlock_irqrestore(&cs->lock, flags);
- while (left > 0) {
- if (left > 126)
- noc = 126;
- else
- noc = left;
- nom = 2 * noc;
- mp = msg;
- *mp++ = sadr / 256;
- *mp++ = sadr % 256;
- left -= noc;
- *mp++ = noc;
- if ((ret = copy_from_user(tmpmsg, p, nom))) {
- printk(KERN_ERR"isar_load_firmware copy_from_user ret %d\n", ret);
- goto reterror;
- }
- p += nom;
- cnt += nom;
- nom += 3;
- sp = (u_short *)tmpmsg;
-#if DBG_LOADFIRM
- printk(KERN_DEBUG"isar: load %3d words at %04x left %d\n",
- noc, sadr, left);
-#endif
- sadr += noc;
- while (noc) {
-#ifdef __BIG_ENDIAN
- *mp++ = *sp % 256;
- *mp++ = *sp / 256;
-#else
- *mp++ = *sp / 256;
- *mp++ = *sp % 256;
-#endif /* __BIG_ENDIAN */
- sp++;
- noc--;
- }
- spin_lock_irqsave(&cs->lock, flags);
- if (!sendmsg(cs, ISAR_HIS_FIRM, 0, nom, msg)) {
- printk(KERN_ERR"isar sendmsg prog failed\n");
- ret = 1; goto reterr_unlock;
- }
- if (!waitrecmsg(cs, &len, tmp, 100000)) {
- printk(KERN_ERR"isar waitrecmsg prog failed\n");
- ret = 1; goto reterr_unlock;
- }
- if ((ireg->iis != ISAR_IIS_FIRM) || ireg->cmsb || len) {
- printk(KERN_ERR"isar wrong prog response (%x,%x,%x)\n",
- ireg->iis, ireg->cmsb, len);
- ret = 1; goto reterr_unlock;
- }
- spin_unlock_irqrestore(&cs->lock, flags);
- }
- printk(KERN_DEBUG"isar firmware block %5d words loaded\n",
- blk_head.len);
- }
- /* 10ms delay */
- cnt = 10;
- while (cnt--)
- udelay(1000);
- msg[0] = 0xff;
- msg[1] = 0xfe;
- ireg->bstat = 0;
- spin_lock_irqsave(&cs->lock, flags);
- if (!sendmsg(cs, ISAR_HIS_STDSP, 0, 2, msg)) {
- printk(KERN_ERR"isar sendmsg start dsp failed\n");
- ret = 1; goto reterr_unlock;
- }
- if (!waitrecmsg(cs, &len, tmp, 100000)) {
- printk(KERN_ERR"isar waitrecmsg start dsp failed\n");
- ret = 1; goto reterr_unlock;
- }
- if ((ireg->iis != ISAR_IIS_STDSP) || ireg->cmsb || len) {
- printk(KERN_ERR"isar wrong start dsp response (%x,%x,%x)\n",
- ireg->iis, ireg->cmsb, len);
- ret = 1; goto reterr_unlock;
- } else
- printk(KERN_DEBUG"isar start dsp success\n");
- /* NORMAL mode entered */
- /* Enable IRQs of ISAR */
- cs->BC_Write_Reg(cs, 0, ISAR_IRQBIT, ISAR_IRQSTA);
- spin_unlock_irqrestore(&cs->lock, flags);
- cnt = 1000; /* max 1s */
- while ((!ireg->bstat) && cnt) {
- udelay(1000);
- cnt--;
- }
- if (!cnt) {
- printk(KERN_ERR"isar no general status event received\n");
- ret = 1; goto reterror;
- } else {
- printk(KERN_DEBUG"isar general status event %x\n",
- ireg->bstat);
- }
- /* 10ms delay */
- cnt = 10;
- while (cnt--)
- udelay(1000);
- spin_lock_irqsave(&cs->lock, flags);
- ireg->iis = 0;
- if (!sendmsg(cs, ISAR_HIS_DIAG, ISAR_CTRL_STST, 0, NULL)) {
- printk(KERN_ERR"isar sendmsg self tst failed\n");
- ret = 1; goto reterr_unlock;
- }
- cnt = 10000; /* max 100 ms */
- spin_unlock_irqrestore(&cs->lock, flags);
- while ((ireg->iis != ISAR_IIS_DIAG) && cnt) {
- udelay(10);
- cnt--;
- }
- udelay(1000);
- if (!cnt) {
- printk(KERN_ERR"isar no self tst response\n");
- ret = 1; goto reterror;
- }
- if ((ireg->cmsb == ISAR_CTRL_STST) && (ireg->clsb == 1)
- && (ireg->par[0] == 0)) {
- printk(KERN_DEBUG"isar selftest OK\n");
- } else {
- printk(KERN_DEBUG"isar selftest not OK %x/%x/%x\n",
- ireg->cmsb, ireg->clsb, ireg->par[0]);
- ret = 1; goto reterror;
- }
- spin_lock_irqsave(&cs->lock, flags);
- ireg->iis = 0;
- if (!sendmsg(cs, ISAR_HIS_DIAG, ISAR_CTRL_SWVER, 0, NULL)) {
- printk(KERN_ERR"isar RQST SVN failed\n");
- ret = 1; goto reterr_unlock;
- }
- spin_unlock_irqrestore(&cs->lock, flags);
- cnt = 30000; /* max 300 ms */
- while ((ireg->iis != ISAR_IIS_DIAG) && cnt) {
- udelay(10);
- cnt--;
- }
- udelay(1000);
- if (!cnt) {
- printk(KERN_ERR"isar no SVN response\n");
- ret = 1; goto reterror;
- } else {
- if ((ireg->cmsb == ISAR_CTRL_SWVER) && (ireg->clsb == 1))
- printk(KERN_DEBUG"isar software version %#x\n",
- ireg->par[0]);
- else {
- printk(KERN_ERR"isar wrong swver response (%x,%x) cnt(%d)\n",
- ireg->cmsb, ireg->clsb, cnt);
- ret = 1; goto reterror;
- }
- }
- spin_lock_irqsave(&cs->lock, flags);
- cs->debug = debug;
- isar_setup(cs);
-
- ret = 0;
-reterr_unlock:
- spin_unlock_irqrestore(&cs->lock, flags);
-reterror:
- cs->debug = debug;
- if (ret)
- /* disable ISAR IRQ */
- cs->BC_Write_Reg(cs, 0, ISAR_IRQBIT, 0);
- kfree(msg);
- kfree(tmpmsg);
- return (ret);
-}
-
-#define B_LL_NOCARRIER 8
-#define B_LL_CONNECT 9
-#define B_LL_OK 10
-
-static void
-isar_bh(struct work_struct *work)
-{
- struct BCState *bcs = container_of(work, struct BCState, tqueue);
-
- BChannel_bh(work);
- if (test_and_clear_bit(B_LL_NOCARRIER, &bcs->event))
- ll_deliver_faxstat(bcs, ISDN_FAX_CLASS1_NOCARR);
- if (test_and_clear_bit(B_LL_CONNECT, &bcs->event))
- ll_deliver_faxstat(bcs, ISDN_FAX_CLASS1_CONNECT);
- if (test_and_clear_bit(B_LL_OK, &bcs->event))
- ll_deliver_faxstat(bcs, ISDN_FAX_CLASS1_OK);
-}
-
-static void
-send_DLE_ETX(struct BCState *bcs)
-{
- u_char dleetx[2] = {DLE, ETX};
- struct sk_buff *skb;
-
- if ((skb = dev_alloc_skb(2))) {
- skb_put_data(skb, dleetx, 2);
- skb_queue_tail(&bcs->rqueue, skb);
- schedule_event(bcs, B_RCVBUFREADY);
- } else {
- printk(KERN_WARNING "HiSax: skb out of memory\n");
- }
-}
-
-static inline int
-dle_count(unsigned char *buf, int len)
-{
- int count = 0;
-
- while (len--)
- if (*buf++ == DLE)
- count++;
- return count;
-}
-
-static inline void
-insert_dle(unsigned char *dest, unsigned char *src, int count) {
- /* <DLE> in the input stream has to be flagged as <DLE><DLE> */
- while (count--) {
- *dest++ = *src;
- if (*src++ == DLE)
- *dest++ = DLE;
- }
-}
-
-static void
-isar_rcv_frame(struct IsdnCardState *cs, struct BCState *bcs)
-{
- u_char *ptr;
- struct sk_buff *skb;
- struct isar_reg *ireg = bcs->hw.isar.reg;
-
- if (!ireg->clsb) {
- debugl1(cs, "isar zero len frame");
- cs->BC_Write_Reg(cs, 1, ISAR_IIA, 0);
- return;
- }
- switch (bcs->mode) {
- case L1_MODE_NULL:
- debugl1(cs, "isar mode 0 spurious IIS_RDATA %x/%x/%x",
- ireg->iis, ireg->cmsb, ireg->clsb);
- printk(KERN_WARNING"isar mode 0 spurious IIS_RDATA %x/%x/%x\n",
- ireg->iis, ireg->cmsb, ireg->clsb);
- cs->BC_Write_Reg(cs, 1, ISAR_IIA, 0);
- break;
- case L1_MODE_TRANS:
- case L1_MODE_V32:
- if ((skb = dev_alloc_skb(ireg->clsb))) {
- rcv_mbox(cs, ireg, (u_char *)skb_put(skb, ireg->clsb));
- skb_queue_tail(&bcs->rqueue, skb);
- schedule_event(bcs, B_RCVBUFREADY);
- } else {
- printk(KERN_WARNING "HiSax: skb out of memory\n");
- cs->BC_Write_Reg(cs, 1, ISAR_IIA, 0);
- }
- break;
- case L1_MODE_HDLC:
- if ((bcs->hw.isar.rcvidx + ireg->clsb) > HSCX_BUFMAX) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "isar_rcv_frame: incoming packet too large");
- cs->BC_Write_Reg(cs, 1, ISAR_IIA, 0);
- bcs->hw.isar.rcvidx = 0;
- } else if (ireg->cmsb & HDLC_ERROR) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "isar frame error %x len %d",
- ireg->cmsb, ireg->clsb);
-#ifdef ERROR_STATISTIC
- if (ireg->cmsb & HDLC_ERR_RER)
- bcs->err_inv++;
- if (ireg->cmsb & HDLC_ERR_CER)
- bcs->err_crc++;
-#endif
- bcs->hw.isar.rcvidx = 0;
- cs->BC_Write_Reg(cs, 1, ISAR_IIA, 0);
- } else {
- if (ireg->cmsb & HDLC_FSD)
- bcs->hw.isar.rcvidx = 0;
- ptr = bcs->hw.isar.rcvbuf + bcs->hw.isar.rcvidx;
- bcs->hw.isar.rcvidx += ireg->clsb;
- rcv_mbox(cs, ireg, ptr);
- if (ireg->cmsb & HDLC_FED) {
- if (bcs->hw.isar.rcvidx < 3) { /* last 2 bytes are the FCS */
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "isar frame to short %d",
- bcs->hw.isar.rcvidx);
- } else if (!(skb = dev_alloc_skb(bcs->hw.isar.rcvidx - 2))) {
- printk(KERN_WARNING "ISAR: receive out of memory\n");
- } else {
- skb_put_data(skb, bcs->hw.isar.rcvbuf,
- bcs->hw.isar.rcvidx - 2);
- skb_queue_tail(&bcs->rqueue, skb);
- schedule_event(bcs, B_RCVBUFREADY);
- }
- bcs->hw.isar.rcvidx = 0;
- }
- }
- break;
- case L1_MODE_FAX:
- if (bcs->hw.isar.state != STFAX_ACTIV) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "isar_rcv_frame: not ACTIV");
- cs->BC_Write_Reg(cs, 1, ISAR_IIA, 0);
- bcs->hw.isar.rcvidx = 0;
- break;
- }
- if (bcs->hw.isar.cmd == PCTRL_CMD_FRM) {
- rcv_mbox(cs, ireg, bcs->hw.isar.rcvbuf);
- bcs->hw.isar.rcvidx = ireg->clsb +
- dle_count(bcs->hw.isar.rcvbuf, ireg->clsb);
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "isar_rcv_frame: raw(%d) dle(%d)",
- ireg->clsb, bcs->hw.isar.rcvidx);
- if ((skb = dev_alloc_skb(bcs->hw.isar.rcvidx))) {
- insert_dle((u_char *)skb_put(skb, bcs->hw.isar.rcvidx),
- bcs->hw.isar.rcvbuf, ireg->clsb);
- skb_queue_tail(&bcs->rqueue, skb);
- schedule_event(bcs, B_RCVBUFREADY);
- if (ireg->cmsb & SART_NMD) { /* ABORT */
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "isar_rcv_frame: no more data");
- bcs->hw.isar.rcvidx = 0;
- send_DLE_ETX(bcs);
- sendmsg(cs, SET_DPS(bcs->hw.isar.dpath) |
- ISAR_HIS_PUMPCTRL, PCTRL_CMD_ESC,
- 0, NULL);
- bcs->hw.isar.state = STFAX_ESCAPE;
- schedule_event(bcs, B_LL_NOCARRIER);
- }
- } else {
- printk(KERN_WARNING "HiSax: skb out of memory\n");
- }
- break;
- }
- if (bcs->hw.isar.cmd != PCTRL_CMD_FRH) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "isar_rcv_frame: unknown fax mode %x",
- bcs->hw.isar.cmd);
- cs->BC_Write_Reg(cs, 1, ISAR_IIA, 0);
- bcs->hw.isar.rcvidx = 0;
- break;
- }
- /* PCTRL_CMD_FRH */
- if ((bcs->hw.isar.rcvidx + ireg->clsb) > HSCX_BUFMAX) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "isar_rcv_frame: incoming packet too large");
- cs->BC_Write_Reg(cs, 1, ISAR_IIA, 0);
- bcs->hw.isar.rcvidx = 0;
- } else if (ireg->cmsb & HDLC_ERROR) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "isar frame error %x len %d",
- ireg->cmsb, ireg->clsb);
- bcs->hw.isar.rcvidx = 0;
- cs->BC_Write_Reg(cs, 1, ISAR_IIA, 0);
- } else {
- if (ireg->cmsb & HDLC_FSD) {
- bcs->hw.isar.rcvidx = 0;
- }
- ptr = bcs->hw.isar.rcvbuf + bcs->hw.isar.rcvidx;
- bcs->hw.isar.rcvidx += ireg->clsb;
- rcv_mbox(cs, ireg, ptr);
- if (ireg->cmsb & HDLC_FED) {
- int len = bcs->hw.isar.rcvidx +
- dle_count(bcs->hw.isar.rcvbuf, bcs->hw.isar.rcvidx);
- if (bcs->hw.isar.rcvidx < 3) { /* last 2 bytes are the FCS */
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "isar frame to short %d",
- bcs->hw.isar.rcvidx);
- printk(KERN_WARNING "ISAR: frame to short %d\n",
- bcs->hw.isar.rcvidx);
- } else if (!(skb = dev_alloc_skb(len))) {
- printk(KERN_WARNING "ISAR: receive out of memory\n");
- } else {
- insert_dle((u_char *)skb_put(skb, len),
- bcs->hw.isar.rcvbuf,
- bcs->hw.isar.rcvidx);
- skb_queue_tail(&bcs->rqueue, skb);
- schedule_event(bcs, B_RCVBUFREADY);
- send_DLE_ETX(bcs);
- schedule_event(bcs, B_LL_OK);
- test_and_clear_bit(BC_FLG_FRH_WAIT, &bcs->Flag);
- }
- bcs->hw.isar.rcvidx = 0;
- }
- }
- if (ireg->cmsb & SART_NMD) { /* ABORT */
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "isar_rcv_frame: no more data");
- bcs->hw.isar.rcvidx = 0;
- sendmsg(cs, SET_DPS(bcs->hw.isar.dpath) |
- ISAR_HIS_PUMPCTRL, PCTRL_CMD_ESC, 0, NULL);
- bcs->hw.isar.state = STFAX_ESCAPE;
- if (test_and_clear_bit(BC_FLG_FRH_WAIT, &bcs->Flag)) {
- send_DLE_ETX(bcs);
- schedule_event(bcs, B_LL_NOCARRIER);
- }
- }
- break;
- default:
- printk(KERN_ERR"isar_rcv_frame mode (%x)error\n", bcs->mode);
- cs->BC_Write_Reg(cs, 1, ISAR_IIA, 0);
- break;
- }
-}
-
-void
-isar_fill_fifo(struct BCState *bcs)
-{
- struct IsdnCardState *cs = bcs->cs;
- int count;
- u_char msb;
- u_char *ptr;
-
- if ((cs->debug & L1_DEB_HSCX) && !(cs->debug & L1_DEB_HSCX_FIFO))
- debugl1(cs, "isar_fill_fifo");
- if (!bcs->tx_skb)
- return;
- if (bcs->tx_skb->len <= 0)
- return;
- if (!(bcs->hw.isar.reg->bstat &
- (bcs->hw.isar.dpath == 1 ? BSTAT_RDM1 : BSTAT_RDM2)))
- return;
- if (bcs->tx_skb->len > bcs->hw.isar.mml) {
- msb = 0;
- count = bcs->hw.isar.mml;
- } else {
- count = bcs->tx_skb->len;
- msb = HDLC_FED;
- }
- ptr = bcs->tx_skb->data;
- if (!bcs->hw.isar.txcnt) {
- msb |= HDLC_FST;
- if ((bcs->mode == L1_MODE_FAX) &&
- (bcs->hw.isar.cmd == PCTRL_CMD_FTH)) {
- if (bcs->tx_skb->len > 1) {
- if ((ptr[0] == 0xff) && (ptr[1] == 0x13))
- /* last frame */
- test_and_set_bit(BC_FLG_LASTDATA,
- &bcs->Flag);
- }
- }
- }
- skb_pull(bcs->tx_skb, count);
- bcs->tx_cnt -= count;
- bcs->hw.isar.txcnt += count;
- switch (bcs->mode) {
- case L1_MODE_NULL:
- printk(KERN_ERR"isar_fill_fifo wrong mode 0\n");
- break;
- case L1_MODE_TRANS:
- case L1_MODE_V32:
- sendmsg(cs, SET_DPS(bcs->hw.isar.dpath) | ISAR_HIS_SDATA,
- 0, count, ptr);
- break;
- case L1_MODE_HDLC:
- sendmsg(cs, SET_DPS(bcs->hw.isar.dpath) | ISAR_HIS_SDATA,
- msb, count, ptr);
- break;
- case L1_MODE_FAX:
- if (bcs->hw.isar.state != STFAX_ACTIV) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "isar_fill_fifo: not ACTIV");
- } else if (bcs->hw.isar.cmd == PCTRL_CMD_FTH) {
- sendmsg(cs, SET_DPS(bcs->hw.isar.dpath) | ISAR_HIS_SDATA,
- msb, count, ptr);
- } else if (bcs->hw.isar.cmd == PCTRL_CMD_FTM) {
- sendmsg(cs, SET_DPS(bcs->hw.isar.dpath) | ISAR_HIS_SDATA,
- 0, count, ptr);
- } else {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "isar_fill_fifo: not FTH/FTM");
- }
- break;
- default:
- if (cs->debug)
- debugl1(cs, "isar_fill_fifo mode(%x) error", bcs->mode);
- printk(KERN_ERR"isar_fill_fifo mode(%x) error\n", bcs->mode);
- break;
- }
-}
-
-static inline
-struct BCState *sel_bcs_isar(struct IsdnCardState *cs, u_char dpath)
-{
- if ((!dpath) || (dpath == 3))
- return (NULL);
- if (cs->bcs[0].hw.isar.dpath == dpath)
- return (&cs->bcs[0]);
- if (cs->bcs[1].hw.isar.dpath == dpath)
- return (&cs->bcs[1]);
- return (NULL);
-}
-
-static void
-send_frames(struct BCState *bcs)
-{
- if (bcs->tx_skb) {
- if (bcs->tx_skb->len) {
- isar_fill_fifo(bcs);
- return;
- } else {
- if (test_bit(FLG_LLI_L1WAKEUP, &bcs->st->lli.flag) &&
- (PACKET_NOACK != bcs->tx_skb->pkt_type)) {
- u_long flags;
- spin_lock_irqsave(&bcs->aclock, flags);
- bcs->ackcnt += bcs->hw.isar.txcnt;
- spin_unlock_irqrestore(&bcs->aclock, flags);
- schedule_event(bcs, B_ACKPENDING);
- }
- if (bcs->mode == L1_MODE_FAX) {
- if (bcs->hw.isar.cmd == PCTRL_CMD_FTH) {
- if (test_bit(BC_FLG_LASTDATA, &bcs->Flag)) {
- test_and_set_bit(BC_FLG_NMD_DATA, &bcs->Flag);
- }
- } else if (bcs->hw.isar.cmd == PCTRL_CMD_FTM) {
- if (test_bit(BC_FLG_DLEETX, &bcs->Flag)) {
- test_and_set_bit(BC_FLG_LASTDATA, &bcs->Flag);
- test_and_set_bit(BC_FLG_NMD_DATA, &bcs->Flag);
- }
- }
- }
- dev_kfree_skb_any(bcs->tx_skb);
- bcs->hw.isar.txcnt = 0;
- bcs->tx_skb = NULL;
- }
- }
- if ((bcs->tx_skb = skb_dequeue(&bcs->squeue))) {
- bcs->hw.isar.txcnt = 0;
- test_and_set_bit(BC_FLG_BUSY, &bcs->Flag);
- isar_fill_fifo(bcs);
- } else {
- if (test_and_clear_bit(BC_FLG_DLEETX, &bcs->Flag)) {
- if (test_and_clear_bit(BC_FLG_LASTDATA, &bcs->Flag)) {
- if (test_and_clear_bit(BC_FLG_NMD_DATA, &bcs->Flag)) {
- u_char dummy = 0;
- sendmsg(bcs->cs, SET_DPS(bcs->hw.isar.dpath) |
- ISAR_HIS_SDATA, 0x01, 1, &dummy);
- }
- test_and_set_bit(BC_FLG_LL_OK, &bcs->Flag);
- } else {
- schedule_event(bcs, B_LL_CONNECT);
- }
- }
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- schedule_event(bcs, B_XMTBUFREADY);
- }
-}
-
-static inline void
-check_send(struct IsdnCardState *cs, u_char rdm)
-{
- struct BCState *bcs;
-
- if (rdm & BSTAT_RDM1) {
- if ((bcs = sel_bcs_isar(cs, 1))) {
- if (bcs->mode) {
- send_frames(bcs);
- }
- }
- }
- if (rdm & BSTAT_RDM2) {
- if ((bcs = sel_bcs_isar(cs, 2))) {
- if (bcs->mode) {
- send_frames(bcs);
- }
- }
- }
-
-}
-
-static const char *dmril[] = {"NO SPEED", "1200/75", "NODEF2", "75/1200",
- "NODEF4", "300", "600", "1200", "2400",
- "4800", "7200", "9600nt", "9600t", "12000",
- "14400", "WRONG"};
-static const char *dmrim[] = {"NO MOD", "NO DEF", "V32/V32b", "V22", "V21",
- "Bell103", "V23", "Bell202", "V17", "V29",
- "V27ter"};
-
-static void
-isar_pump_status_rsp(struct BCState *bcs, struct isar_reg *ireg) {
- struct IsdnCardState *cs = bcs->cs;
- u_char ril = ireg->par[0];
- u_char rim;
-
- if (!test_and_clear_bit(ISAR_RATE_REQ, &bcs->hw.isar.reg->Flags))
- return;
- if (ril > 14) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "wrong pstrsp ril=%d", ril);
- ril = 15;
- }
- switch (ireg->par[1]) {
- case 0:
- rim = 0;
- break;
- case 0x20:
- rim = 2;
- break;
- case 0x40:
- rim = 3;
- break;
- case 0x41:
- rim = 4;
- break;
- case 0x51:
- rim = 5;
- break;
- case 0x61:
- rim = 6;
- break;
- case 0x71:
- rim = 7;
- break;
- case 0x82:
- rim = 8;
- break;
- case 0x92:
- rim = 9;
- break;
- case 0xa2:
- rim = 10;
- break;
- default:
- rim = 1;
- break;
- }
- sprintf(bcs->hw.isar.conmsg, "%s %s", dmril[ril], dmrim[rim]);
- bcs->conmsg = bcs->hw.isar.conmsg;
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "pump strsp %s", bcs->conmsg);
-}
-
-static void
-isar_pump_statev_modem(struct BCState *bcs, u_char devt) {
- struct IsdnCardState *cs = bcs->cs;
- u_char dps = SET_DPS(bcs->hw.isar.dpath);
-
- switch (devt) {
- case PSEV_10MS_TIMER:
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "pump stev TIMER");
- break;
- case PSEV_CON_ON:
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "pump stev CONNECT");
- l1_msg_b(bcs->st, PH_ACTIVATE | REQUEST, NULL);
- break;
- case PSEV_CON_OFF:
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "pump stev NO CONNECT");
- sendmsg(cs, dps | ISAR_HIS_PSTREQ, 0, 0, NULL);
- l1_msg_b(bcs->st, PH_DEACTIVATE | REQUEST, NULL);
- break;
- case PSEV_V24_OFF:
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "pump stev V24 OFF");
- break;
- case PSEV_CTS_ON:
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "pump stev CTS ON");
- break;
- case PSEV_CTS_OFF:
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "pump stev CTS OFF");
- break;
- case PSEV_DCD_ON:
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "pump stev CARRIER ON");
- test_and_set_bit(ISAR_RATE_REQ, &bcs->hw.isar.reg->Flags);
- sendmsg(cs, dps | ISAR_HIS_PSTREQ, 0, 0, NULL);
- break;
- case PSEV_DCD_OFF:
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "pump stev CARRIER OFF");
- break;
- case PSEV_DSR_ON:
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "pump stev DSR ON");
- break;
- case PSEV_DSR_OFF:
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "pump stev DSR_OFF");
- break;
- case PSEV_REM_RET:
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "pump stev REMOTE RETRAIN");
- break;
- case PSEV_REM_REN:
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "pump stev REMOTE RENEGOTIATE");
- break;
- case PSEV_GSTN_CLR:
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "pump stev GSTN CLEAR");
- break;
- default:
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "unknown pump stev %x", devt);
- break;
- }
-}
-
-static void
-ll_deliver_faxstat(struct BCState *bcs, u_char status)
-{
- isdn_ctrl ic;
- struct Channel *chanp = (struct Channel *) bcs->st->lli.userdata;
-
- if (bcs->cs->debug & L1_DEB_HSCX)
- debugl1(bcs->cs, "HL->LL FAXIND %x", status);
- ic.driver = bcs->cs->myid;
- ic.command = ISDN_STAT_FAXIND;
- ic.arg = chanp->chan;
- ic.parm.aux.cmd = status;
- bcs->cs->iif.statcallb(&ic);
-}
-
-static void
-isar_pump_statev_fax(struct BCState *bcs, u_char devt) {
- struct IsdnCardState *cs = bcs->cs;
- u_char dps = SET_DPS(bcs->hw.isar.dpath);
- u_char p1;
-
- switch (devt) {
- case PSEV_10MS_TIMER:
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "pump stev TIMER");
- break;
- case PSEV_RSP_READY:
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "pump stev RSP_READY");
- bcs->hw.isar.state = STFAX_READY;
- l1_msg_b(bcs->st, PH_ACTIVATE | REQUEST, NULL);
- if (test_bit(BC_FLG_ORIG, &bcs->Flag)) {
- isar_pump_cmd(bcs, ISDN_FAX_CLASS1_FRH, 3);
- } else {
- isar_pump_cmd(bcs, ISDN_FAX_CLASS1_FTH, 3);
- }
- break;
- case PSEV_LINE_TX_H:
- if (bcs->hw.isar.state == STFAX_LINE) {
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "pump stev LINE_TX_H");
- bcs->hw.isar.state = STFAX_CONT;
- sendmsg(cs, dps | ISAR_HIS_PUMPCTRL, PCTRL_CMD_CONT, 0, NULL);
- } else {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "pump stev LINE_TX_H wrong st %x",
- bcs->hw.isar.state);
- }
- break;
- case PSEV_LINE_RX_H:
- if (bcs->hw.isar.state == STFAX_LINE) {
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "pump stev LINE_RX_H");
- bcs->hw.isar.state = STFAX_CONT;
- sendmsg(cs, dps | ISAR_HIS_PUMPCTRL, PCTRL_CMD_CONT, 0, NULL);
- } else {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "pump stev LINE_RX_H wrong st %x",
- bcs->hw.isar.state);
- }
- break;
- case PSEV_LINE_TX_B:
- if (bcs->hw.isar.state == STFAX_LINE) {
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "pump stev LINE_TX_B");
- bcs->hw.isar.state = STFAX_CONT;
- sendmsg(cs, dps | ISAR_HIS_PUMPCTRL, PCTRL_CMD_CONT, 0, NULL);
- } else {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "pump stev LINE_TX_B wrong st %x",
- bcs->hw.isar.state);
- }
- break;
- case PSEV_LINE_RX_B:
- if (bcs->hw.isar.state == STFAX_LINE) {
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "pump stev LINE_RX_B");
- bcs->hw.isar.state = STFAX_CONT;
- sendmsg(cs, dps | ISAR_HIS_PUMPCTRL, PCTRL_CMD_CONT, 0, NULL);
- } else {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "pump stev LINE_RX_B wrong st %x",
- bcs->hw.isar.state);
- }
- break;
- case PSEV_RSP_CONN:
- if (bcs->hw.isar.state == STFAX_CONT) {
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "pump stev RSP_CONN");
- bcs->hw.isar.state = STFAX_ACTIV;
- test_and_set_bit(ISAR_RATE_REQ, &bcs->hw.isar.reg->Flags);
- sendmsg(cs, dps | ISAR_HIS_PSTREQ, 0, 0, NULL);
- if (bcs->hw.isar.cmd == PCTRL_CMD_FTH) {
- /* 1s Flags before data */
- if (test_and_set_bit(BC_FLG_FTI_RUN, &bcs->Flag))
- del_timer(&bcs->hw.isar.ftimer);
- /* 1000 ms */
- bcs->hw.isar.ftimer.expires =
- jiffies + ((1000 * HZ) / 1000);
- test_and_set_bit(BC_FLG_LL_CONN,
- &bcs->Flag);
- add_timer(&bcs->hw.isar.ftimer);
- } else {
- schedule_event(bcs, B_LL_CONNECT);
- }
- } else {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "pump stev RSP_CONN wrong st %x",
- bcs->hw.isar.state);
- }
- break;
- case PSEV_FLAGS_DET:
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "pump stev FLAGS_DET");
- break;
- case PSEV_RSP_DISC:
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "pump stev RSP_DISC");
- if (bcs->hw.isar.state == STFAX_ESCAPE) {
- p1 = 5;
- switch (bcs->hw.isar.newcmd) {
- case 0:
- bcs->hw.isar.state = STFAX_READY;
- break;
- case PCTRL_CMD_FTM:
- p1 = 2;
- /* fall through */
- case PCTRL_CMD_FTH:
- sendmsg(cs, dps | ISAR_HIS_PUMPCTRL,
- PCTRL_CMD_SILON, 1, &p1);
- bcs->hw.isar.state = STFAX_SILDET;
- break;
- case PCTRL_CMD_FRM:
- if (frm_extra_delay)
- mdelay(frm_extra_delay);
- /* fall through */
- case PCTRL_CMD_FRH:
- p1 = bcs->hw.isar.mod = bcs->hw.isar.newmod;
- bcs->hw.isar.newmod = 0;
- bcs->hw.isar.cmd = bcs->hw.isar.newcmd;
- bcs->hw.isar.newcmd = 0;
- sendmsg(cs, dps | ISAR_HIS_PUMPCTRL,
- bcs->hw.isar.cmd, 1, &p1);
- bcs->hw.isar.state = STFAX_LINE;
- bcs->hw.isar.try_mod = 3;
- break;
- default:
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "RSP_DISC unknown newcmd %x", bcs->hw.isar.newcmd);
- break;
- }
- } else if (bcs->hw.isar.state == STFAX_ACTIV) {
- if (test_and_clear_bit(BC_FLG_LL_OK, &bcs->Flag)) {
- schedule_event(bcs, B_LL_OK);
- } else if (bcs->hw.isar.cmd == PCTRL_CMD_FRM) {
- send_DLE_ETX(bcs);
- schedule_event(bcs, B_LL_NOCARRIER);
- } else {
- ll_deliver_faxstat(bcs, ISDN_FAX_CLASS1_FCERROR);
- }
- bcs->hw.isar.state = STFAX_READY;
- } else {
- bcs->hw.isar.state = STFAX_READY;
- ll_deliver_faxstat(bcs, ISDN_FAX_CLASS1_FCERROR);
- }
- break;
- case PSEV_RSP_SILDET:
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "pump stev RSP_SILDET");
- if (bcs->hw.isar.state == STFAX_SILDET) {
- p1 = bcs->hw.isar.mod = bcs->hw.isar.newmod;
- bcs->hw.isar.newmod = 0;
- bcs->hw.isar.cmd = bcs->hw.isar.newcmd;
- bcs->hw.isar.newcmd = 0;
- sendmsg(cs, dps | ISAR_HIS_PUMPCTRL,
- bcs->hw.isar.cmd, 1, &p1);
- bcs->hw.isar.state = STFAX_LINE;
- bcs->hw.isar.try_mod = 3;
- }
- break;
- case PSEV_RSP_SILOFF:
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "pump stev RSP_SILOFF");
- break;
- case PSEV_RSP_FCERR:
- if (bcs->hw.isar.state == STFAX_LINE) {
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "pump stev RSP_FCERR try %d",
- bcs->hw.isar.try_mod);
- if (bcs->hw.isar.try_mod--) {
- sendmsg(cs, dps | ISAR_HIS_PUMPCTRL,
- bcs->hw.isar.cmd, 1,
- &bcs->hw.isar.mod);
- break;
- }
- }
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "pump stev RSP_FCERR");
- bcs->hw.isar.state = STFAX_ESCAPE;
- sendmsg(cs, dps | ISAR_HIS_PUMPCTRL, PCTRL_CMD_ESC, 0, NULL);
- ll_deliver_faxstat(bcs, ISDN_FAX_CLASS1_FCERROR);
- break;
- default:
- break;
- }
-}
-
-static char debbuf[128];
-
-void
-isar_int_main(struct IsdnCardState *cs)
-{
- struct isar_reg *ireg = cs->bcs[0].hw.isar.reg;
- struct BCState *bcs;
-
- get_irq_infos(cs, ireg);
- switch (ireg->iis & ISAR_IIS_MSCMSD) {
- case ISAR_IIS_RDATA:
- if ((bcs = sel_bcs_isar(cs, ireg->iis >> 6))) {
- isar_rcv_frame(cs, bcs);
- } else {
- debugl1(cs, "isar spurious IIS_RDATA %x/%x/%x",
- ireg->iis, ireg->cmsb, ireg->clsb);
- cs->BC_Write_Reg(cs, 1, ISAR_IIA, 0);
- }
- break;
- case ISAR_IIS_GSTEV:
- cs->BC_Write_Reg(cs, 1, ISAR_IIA, 0);
- ireg->bstat |= ireg->cmsb;
- check_send(cs, ireg->cmsb);
- break;
- case ISAR_IIS_BSTEV:
-#ifdef ERROR_STATISTIC
- if ((bcs = sel_bcs_isar(cs, ireg->iis >> 6))) {
- if (ireg->cmsb == BSTEV_TBO)
- bcs->err_tx++;
- if (ireg->cmsb == BSTEV_RBO)
- bcs->err_rdo++;
- }
-#endif
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "Buffer STEV dpath%d msb(%x)",
- ireg->iis >> 6, ireg->cmsb);
- cs->BC_Write_Reg(cs, 1, ISAR_IIA, 0);
- break;
- case ISAR_IIS_PSTEV:
- if ((bcs = sel_bcs_isar(cs, ireg->iis >> 6))) {
- rcv_mbox(cs, ireg, (u_char *)ireg->par);
- if (bcs->mode == L1_MODE_V32) {
- isar_pump_statev_modem(bcs, ireg->cmsb);
- } else if (bcs->mode == L1_MODE_FAX) {
- isar_pump_statev_fax(bcs, ireg->cmsb);
- } else if (ireg->cmsb == PSEV_10MS_TIMER) {
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "pump stev TIMER");
- } else {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "isar IIS_PSTEV pmode %d stat %x",
- bcs->mode, ireg->cmsb);
- }
- } else {
- debugl1(cs, "isar spurious IIS_PSTEV %x/%x/%x",
- ireg->iis, ireg->cmsb, ireg->clsb);
- cs->BC_Write_Reg(cs, 1, ISAR_IIA, 0);
- }
- break;
- case ISAR_IIS_PSTRSP:
- if ((bcs = sel_bcs_isar(cs, ireg->iis >> 6))) {
- rcv_mbox(cs, ireg, (u_char *)ireg->par);
- isar_pump_status_rsp(bcs, ireg);
- } else {
- debugl1(cs, "isar spurious IIS_PSTRSP %x/%x/%x",
- ireg->iis, ireg->cmsb, ireg->clsb);
- cs->BC_Write_Reg(cs, 1, ISAR_IIA, 0);
- }
- break;
- case ISAR_IIS_DIAG:
- case ISAR_IIS_BSTRSP:
- case ISAR_IIS_IOM2RSP:
- rcv_mbox(cs, ireg, (u_char *)ireg->par);
- if ((cs->debug & (L1_DEB_HSCX | L1_DEB_HSCX_FIFO))
- == L1_DEB_HSCX) {
- u_char *tp = debbuf;
-
- tp += sprintf(debbuf, "msg iis(%x) msb(%x)",
- ireg->iis, ireg->cmsb);
- QuickHex(tp, (u_char *)ireg->par, ireg->clsb);
- debugl1(cs, "%s", debbuf);
- }
- break;
- case ISAR_IIS_INVMSG:
- rcv_mbox(cs, ireg, debbuf);
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "invalid msg his:%x",
- ireg->cmsb);
- break;
- default:
- rcv_mbox(cs, ireg, debbuf);
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "unhandled msg iis(%x) ctrl(%x/%x)",
- ireg->iis, ireg->cmsb, ireg->clsb);
- break;
- }
-}
-
-static void
-ftimer_handler(struct timer_list *t) {
- struct BCState *bcs = from_timer(bcs, t, hw.isar.ftimer);
- if (bcs->cs->debug)
- debugl1(bcs->cs, "ftimer flags %04lx",
- bcs->Flag);
- test_and_clear_bit(BC_FLG_FTI_RUN, &bcs->Flag);
- if (test_and_clear_bit(BC_FLG_LL_CONN, &bcs->Flag)) {
- schedule_event(bcs, B_LL_CONNECT);
- }
- if (test_and_clear_bit(BC_FLG_FTI_FTS, &bcs->Flag)) {
- schedule_event(bcs, B_LL_OK);
- }
-}
-
-static void
-setup_pump(struct BCState *bcs) {
- struct IsdnCardState *cs = bcs->cs;
- u_char dps = SET_DPS(bcs->hw.isar.dpath);
- u_char ctrl, param[6];
-
- switch (bcs->mode) {
- case L1_MODE_NULL:
- case L1_MODE_TRANS:
- case L1_MODE_HDLC:
- sendmsg(cs, dps | ISAR_HIS_PUMPCFG, PMOD_BYPASS, 0, NULL);
- break;
- case L1_MODE_V32:
- ctrl = PMOD_DATAMODEM;
- if (test_bit(BC_FLG_ORIG, &bcs->Flag)) {
- ctrl |= PCTRL_ORIG;
- param[5] = PV32P6_CTN;
- } else {
- param[5] = PV32P6_ATN;
- }
- param[0] = para_TOA; /* 6 db */
- param[1] = PV32P2_V23R | PV32P2_V22A | PV32P2_V22B |
- PV32P2_V22C | PV32P2_V21 | PV32P2_BEL;
- param[2] = PV32P3_AMOD | PV32P3_V32B | PV32P3_V23B;
- param[3] = PV32P4_UT144;
- param[4] = PV32P5_UT144;
- sendmsg(cs, dps | ISAR_HIS_PUMPCFG, ctrl, 6, param);
- break;
- case L1_MODE_FAX:
- ctrl = PMOD_FAX;
- if (test_bit(BC_FLG_ORIG, &bcs->Flag)) {
- ctrl |= PCTRL_ORIG;
- param[1] = PFAXP2_CTN;
- } else {
- param[1] = PFAXP2_ATN;
- }
- param[0] = para_TOA; /* 6 db */
- sendmsg(cs, dps | ISAR_HIS_PUMPCFG, ctrl, 2, param);
- bcs->hw.isar.state = STFAX_NULL;
- bcs->hw.isar.newcmd = 0;
- bcs->hw.isar.newmod = 0;
- test_and_set_bit(BC_FLG_FTI_RUN, &bcs->Flag);
- break;
- }
- udelay(1000);
- sendmsg(cs, dps | ISAR_HIS_PSTREQ, 0, 0, NULL);
- udelay(1000);
-}
-
-static void
-setup_sart(struct BCState *bcs) {
- struct IsdnCardState *cs = bcs->cs;
- u_char dps = SET_DPS(bcs->hw.isar.dpath);
- u_char ctrl, param[2];
-
- switch (bcs->mode) {
- case L1_MODE_NULL:
- sendmsg(cs, dps | ISAR_HIS_SARTCFG, SMODE_DISABLE, 0,
- NULL);
- break;
- case L1_MODE_TRANS:
- sendmsg(cs, dps | ISAR_HIS_SARTCFG, SMODE_BINARY, 2,
- "\0\0");
- break;
- case L1_MODE_HDLC:
- param[0] = 0;
- sendmsg(cs, dps | ISAR_HIS_SARTCFG, SMODE_HDLC, 1,
- param);
- break;
- case L1_MODE_V32:
- ctrl = SMODE_V14 | SCTRL_HDMC_BOTH;
- param[0] = S_P1_CHS_8;
- param[1] = S_P2_BFT_DEF;
- sendmsg(cs, dps | ISAR_HIS_SARTCFG, ctrl, 2,
- param);
- break;
- case L1_MODE_FAX:
- /* SART must not be configured with FAX */
- break;
- }
- udelay(1000);
- sendmsg(cs, dps | ISAR_HIS_BSTREQ, 0, 0, NULL);
- udelay(1000);
-}
-
-static void
-setup_iom2(struct BCState *bcs) {
- struct IsdnCardState *cs = bcs->cs;
- u_char dps = SET_DPS(bcs->hw.isar.dpath);
- u_char cmsb = IOM_CTRL_ENA, msg[5] = {IOM_P1_TXD, 0, 0, 0, 0};
-
- if (bcs->channel)
- msg[1] = msg[3] = 1;
- switch (bcs->mode) {
- case L1_MODE_NULL:
- cmsb = 0;
- /* dummy slot */
- msg[1] = msg[3] = bcs->hw.isar.dpath + 2;
- break;
- case L1_MODE_TRANS:
- case L1_MODE_HDLC:
- break;
- case L1_MODE_V32:
- case L1_MODE_FAX:
- cmsb |= IOM_CTRL_ALAW | IOM_CTRL_RCV;
- break;
- }
- sendmsg(cs, dps | ISAR_HIS_IOM2CFG, cmsb, 5, msg);
- udelay(1000);
- sendmsg(cs, dps | ISAR_HIS_IOM2REQ, 0, 0, NULL);
- udelay(1000);
-}
-
-static int
-modeisar(struct BCState *bcs, int mode, int bc)
-{
- struct IsdnCardState *cs = bcs->cs;
-
- /* Here we are selecting the best datapath for the requested mode */
- if (bcs->mode == L1_MODE_NULL) { /* New Setup */
- bcs->channel = bc;
- switch (mode) {
- case L1_MODE_NULL: /* init */
- if (!bcs->hw.isar.dpath)
- /* no init for dpath 0 */
- return (0);
- break;
- case L1_MODE_TRANS:
- case L1_MODE_HDLC:
- /* best is datapath 2 */
- if (!test_and_set_bit(ISAR_DP2_USE,
- &bcs->hw.isar.reg->Flags))
- bcs->hw.isar.dpath = 2;
- else if (!test_and_set_bit(ISAR_DP1_USE,
- &bcs->hw.isar.reg->Flags))
- bcs->hw.isar.dpath = 1;
- else {
- printk(KERN_WARNING"isar modeisar both paths in use\n");
- return (1);
- }
- break;
- case L1_MODE_V32:
- case L1_MODE_FAX:
- /* only datapath 1 */
- if (!test_and_set_bit(ISAR_DP1_USE,
- &bcs->hw.isar.reg->Flags))
- bcs->hw.isar.dpath = 1;
- else {
- printk(KERN_WARNING"isar modeisar analog functions only with DP1\n");
- debugl1(cs, "isar modeisar analog functions only with DP1");
- return (1);
- }
- break;
- }
- }
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "isar dp%d mode %d->%d ichan %d",
- bcs->hw.isar.dpath, bcs->mode, mode, bc);
- bcs->mode = mode;
- setup_pump(bcs);
- setup_iom2(bcs);
- setup_sart(bcs);
- if (bcs->mode == L1_MODE_NULL) {
- /* Clear resources */
- if (bcs->hw.isar.dpath == 1)
- test_and_clear_bit(ISAR_DP1_USE, &bcs->hw.isar.reg->Flags);
- else if (bcs->hw.isar.dpath == 2)
- test_and_clear_bit(ISAR_DP2_USE, &bcs->hw.isar.reg->Flags);
- bcs->hw.isar.dpath = 0;
- }
- return (0);
-}
-
-static void
-isar_pump_cmd(struct BCState *bcs, u_char cmd, u_char para)
-{
- struct IsdnCardState *cs = bcs->cs;
- u_char dps = SET_DPS(bcs->hw.isar.dpath);
- u_char ctrl = 0, nom = 0, p1 = 0;
-
- switch (cmd) {
- case ISDN_FAX_CLASS1_FTM:
- test_and_clear_bit(BC_FLG_FRH_WAIT, &bcs->Flag);
- if (bcs->hw.isar.state == STFAX_READY) {
- p1 = para;
- ctrl = PCTRL_CMD_FTM;
- nom = 1;
- bcs->hw.isar.state = STFAX_LINE;
- bcs->hw.isar.cmd = ctrl;
- bcs->hw.isar.mod = para;
- bcs->hw.isar.newmod = 0;
- bcs->hw.isar.newcmd = 0;
- bcs->hw.isar.try_mod = 3;
- } else if ((bcs->hw.isar.state == STFAX_ACTIV) &&
- (bcs->hw.isar.cmd == PCTRL_CMD_FTM) &&
- (bcs->hw.isar.mod == para)) {
- ll_deliver_faxstat(bcs, ISDN_FAX_CLASS1_CONNECT);
- } else {
- bcs->hw.isar.newmod = para;
- bcs->hw.isar.newcmd = PCTRL_CMD_FTM;
- nom = 0;
- ctrl = PCTRL_CMD_ESC;
- bcs->hw.isar.state = STFAX_ESCAPE;
- }
- break;
- case ISDN_FAX_CLASS1_FTH:
- test_and_clear_bit(BC_FLG_FRH_WAIT, &bcs->Flag);
- if (bcs->hw.isar.state == STFAX_READY) {
- p1 = para;
- ctrl = PCTRL_CMD_FTH;
- nom = 1;
- bcs->hw.isar.state = STFAX_LINE;
- bcs->hw.isar.cmd = ctrl;
- bcs->hw.isar.mod = para;
- bcs->hw.isar.newmod = 0;
- bcs->hw.isar.newcmd = 0;
- bcs->hw.isar.try_mod = 3;
- } else if ((bcs->hw.isar.state == STFAX_ACTIV) &&
- (bcs->hw.isar.cmd == PCTRL_CMD_FTH) &&
- (bcs->hw.isar.mod == para)) {
- ll_deliver_faxstat(bcs, ISDN_FAX_CLASS1_CONNECT);
- } else {
- bcs->hw.isar.newmod = para;
- bcs->hw.isar.newcmd = PCTRL_CMD_FTH;
- nom = 0;
- ctrl = PCTRL_CMD_ESC;
- bcs->hw.isar.state = STFAX_ESCAPE;
- }
- break;
- case ISDN_FAX_CLASS1_FRM:
- test_and_clear_bit(BC_FLG_FRH_WAIT, &bcs->Flag);
- if (bcs->hw.isar.state == STFAX_READY) {
- p1 = para;
- ctrl = PCTRL_CMD_FRM;
- nom = 1;
- bcs->hw.isar.state = STFAX_LINE;
- bcs->hw.isar.cmd = ctrl;
- bcs->hw.isar.mod = para;
- bcs->hw.isar.newmod = 0;
- bcs->hw.isar.newcmd = 0;
- bcs->hw.isar.try_mod = 3;
- } else if ((bcs->hw.isar.state == STFAX_ACTIV) &&
- (bcs->hw.isar.cmd == PCTRL_CMD_FRM) &&
- (bcs->hw.isar.mod == para)) {
- ll_deliver_faxstat(bcs, ISDN_FAX_CLASS1_CONNECT);
- } else {
- bcs->hw.isar.newmod = para;
- bcs->hw.isar.newcmd = PCTRL_CMD_FRM;
- nom = 0;
- ctrl = PCTRL_CMD_ESC;
- bcs->hw.isar.state = STFAX_ESCAPE;
- }
- break;
- case ISDN_FAX_CLASS1_FRH:
- test_and_set_bit(BC_FLG_FRH_WAIT, &bcs->Flag);
- if (bcs->hw.isar.state == STFAX_READY) {
- p1 = para;
- ctrl = PCTRL_CMD_FRH;
- nom = 1;
- bcs->hw.isar.state = STFAX_LINE;
- bcs->hw.isar.cmd = ctrl;
- bcs->hw.isar.mod = para;
- bcs->hw.isar.newmod = 0;
- bcs->hw.isar.newcmd = 0;
- bcs->hw.isar.try_mod = 3;
- } else if ((bcs->hw.isar.state == STFAX_ACTIV) &&
- (bcs->hw.isar.cmd == PCTRL_CMD_FRH) &&
- (bcs->hw.isar.mod == para)) {
- ll_deliver_faxstat(bcs, ISDN_FAX_CLASS1_CONNECT);
- } else {
- bcs->hw.isar.newmod = para;
- bcs->hw.isar.newcmd = PCTRL_CMD_FRH;
- nom = 0;
- ctrl = PCTRL_CMD_ESC;
- bcs->hw.isar.state = STFAX_ESCAPE;
- }
- break;
- case ISDN_FAXPUMP_HALT:
- bcs->hw.isar.state = STFAX_NULL;
- nom = 0;
- ctrl = PCTRL_CMD_HALT;
- break;
- }
- if (ctrl)
- sendmsg(cs, dps | ISAR_HIS_PUMPCTRL, ctrl, nom, &p1);
-}
-
-static void
-isar_setup(struct IsdnCardState *cs)
-{
- u_char msg;
- int i;
-
- /* Dpath 1, 2 */
- msg = 61;
- for (i = 0; i < 2; i++) {
- /* Buffer Config */
- sendmsg(cs, (i ? ISAR_HIS_DPS2 : ISAR_HIS_DPS1) |
- ISAR_HIS_P12CFG, 4, 1, &msg);
- cs->bcs[i].hw.isar.mml = msg;
- cs->bcs[i].mode = 0;
- cs->bcs[i].hw.isar.dpath = i + 1;
- modeisar(&cs->bcs[i], 0, 0);
- INIT_WORK(&cs->bcs[i].tqueue, isar_bh);
- }
-}
-
-static void
-isar_l2l1(struct PStack *st, int pr, void *arg)
-{
- struct BCState *bcs = st->l1.bcs;
- struct sk_buff *skb = arg;
- int ret;
- u_long flags;
-
- switch (pr) {
- case (PH_DATA | REQUEST):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- if (bcs->tx_skb) {
- skb_queue_tail(&bcs->squeue, skb);
- } else {
- bcs->tx_skb = skb;
- test_and_set_bit(BC_FLG_BUSY, &bcs->Flag);
- if (bcs->cs->debug & L1_DEB_HSCX)
- debugl1(bcs->cs, "DRQ set BC_FLG_BUSY");
- bcs->hw.isar.txcnt = 0;
- bcs->cs->BC_Send_Data(bcs);
- }
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- break;
- case (PH_PULL | INDICATION):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- if (bcs->tx_skb) {
- printk(KERN_WARNING "isar_l2l1: this shouldn't happen\n");
- } else {
- test_and_set_bit(BC_FLG_BUSY, &bcs->Flag);
- if (bcs->cs->debug & L1_DEB_HSCX)
- debugl1(bcs->cs, "PUI set BC_FLG_BUSY");
- bcs->tx_skb = skb;
- bcs->hw.isar.txcnt = 0;
- bcs->cs->BC_Send_Data(bcs);
- }
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- break;
- case (PH_PULL | REQUEST):
- if (!bcs->tx_skb) {
- test_and_clear_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- st->l1.l1l2(st, PH_PULL | CONFIRM, NULL);
- } else
- test_and_set_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- break;
- case (PH_ACTIVATE | REQUEST):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- test_and_set_bit(BC_FLG_ACTIV, &bcs->Flag);
- bcs->hw.isar.conmsg[0] = 0;
- if (test_bit(FLG_ORIG, &st->l2.flag))
- test_and_set_bit(BC_FLG_ORIG, &bcs->Flag);
- else
- test_and_clear_bit(BC_FLG_ORIG, &bcs->Flag);
- switch (st->l1.mode) {
- case L1_MODE_TRANS:
- case L1_MODE_HDLC:
- ret = modeisar(bcs, st->l1.mode, st->l1.bc);
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- if (ret)
- l1_msg_b(st, PH_DEACTIVATE | REQUEST, arg);
- else
- l1_msg_b(st, PH_ACTIVATE | REQUEST, arg);
- break;
- case L1_MODE_V32:
- case L1_MODE_FAX:
- ret = modeisar(bcs, st->l1.mode, st->l1.bc);
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- if (ret)
- l1_msg_b(st, PH_DEACTIVATE | REQUEST, arg);
- break;
- default:
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- break;
- }
- break;
- case (PH_DEACTIVATE | REQUEST):
- l1_msg_b(st, pr, arg);
- break;
- case (PH_DEACTIVATE | CONFIRM):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- switch (st->l1.mode) {
- case L1_MODE_TRANS:
- case L1_MODE_HDLC:
- case L1_MODE_V32:
- break;
- case L1_MODE_FAX:
- isar_pump_cmd(bcs, ISDN_FAXPUMP_HALT, 0);
- break;
- }
- test_and_clear_bit(BC_FLG_ACTIV, &bcs->Flag);
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- if (bcs->cs->debug & L1_DEB_HSCX)
- debugl1(bcs->cs, "PDAC clear BC_FLG_BUSY");
- modeisar(bcs, 0, st->l1.bc);
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- st->l1.l1l2(st, PH_DEACTIVATE | CONFIRM, NULL);
- break;
- }
-}
-
-static void
-close_isarstate(struct BCState *bcs)
-{
- modeisar(bcs, 0, bcs->channel);
- if (test_and_clear_bit(BC_FLG_INIT, &bcs->Flag)) {
- kfree(bcs->hw.isar.rcvbuf);
- bcs->hw.isar.rcvbuf = NULL;
- skb_queue_purge(&bcs->rqueue);
- skb_queue_purge(&bcs->squeue);
- if (bcs->tx_skb) {
- dev_kfree_skb_any(bcs->tx_skb);
- bcs->tx_skb = NULL;
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- if (bcs->cs->debug & L1_DEB_HSCX)
- debugl1(bcs->cs, "closeisar clear BC_FLG_BUSY");
- }
- }
- del_timer(&bcs->hw.isar.ftimer);
-}
-
-static int
-open_isarstate(struct IsdnCardState *cs, struct BCState *bcs)
-{
- if (!test_and_set_bit(BC_FLG_INIT, &bcs->Flag)) {
- if (!(bcs->hw.isar.rcvbuf = kmalloc(HSCX_BUFMAX, GFP_ATOMIC))) {
- printk(KERN_WARNING
- "HiSax: No memory for isar.rcvbuf\n");
- return (1);
- }
- skb_queue_head_init(&bcs->rqueue);
- skb_queue_head_init(&bcs->squeue);
- }
- bcs->tx_skb = NULL;
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "openisar clear BC_FLG_BUSY");
- bcs->event = 0;
- bcs->hw.isar.rcvidx = 0;
- bcs->tx_cnt = 0;
- return (0);
-}
-
-static int
-setstack_isar(struct PStack *st, struct BCState *bcs)
-{
- bcs->channel = st->l1.bc;
- if (open_isarstate(st->l1.hardware, bcs))
- return (-1);
- st->l1.bcs = bcs;
- st->l2.l2l1 = isar_l2l1;
- setstack_manager(st);
- bcs->st = st;
- setstack_l1_B(st);
- return (0);
-}
-
-int
-isar_auxcmd(struct IsdnCardState *cs, isdn_ctrl *ic) {
- u_long adr;
- int features, i;
- struct BCState *bcs;
-
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "isar_auxcmd cmd/ch %x/%ld", ic->command, ic->arg);
- switch (ic->command) {
- case (ISDN_CMD_FAXCMD):
- bcs = cs->channel[ic->arg].bcs;
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "isar_auxcmd cmd/subcmd %d/%d",
- ic->parm.aux.cmd, ic->parm.aux.subcmd);
- switch (ic->parm.aux.cmd) {
- case ISDN_FAX_CLASS1_CTRL:
- if (ic->parm.aux.subcmd == ETX)
- test_and_set_bit(BC_FLG_DLEETX,
- &bcs->Flag);
- break;
- case ISDN_FAX_CLASS1_FTS:
- if (ic->parm.aux.subcmd == AT_QUERY) {
- ic->command = ISDN_STAT_FAXIND;
- ic->parm.aux.cmd = ISDN_FAX_CLASS1_OK;
- cs->iif.statcallb(ic);
- return (0);
- } else if (ic->parm.aux.subcmd == AT_EQ_QUERY) {
- strcpy(ic->parm.aux.para, "0-255");
- ic->command = ISDN_STAT_FAXIND;
- ic->parm.aux.cmd = ISDN_FAX_CLASS1_QUERY;
- cs->iif.statcallb(ic);
- return (0);
- } else if (ic->parm.aux.subcmd == AT_EQ_VALUE) {
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "isar_auxcmd %s=%d",
- FC1_CMD[ic->parm.aux.cmd], ic->parm.aux.para[0]);
- if (bcs->hw.isar.state == STFAX_READY) {
- if (!ic->parm.aux.para[0]) {
- ic->command = ISDN_STAT_FAXIND;
- ic->parm.aux.cmd = ISDN_FAX_CLASS1_OK;
- cs->iif.statcallb(ic);
- return (0);
- }
- if (!test_and_set_bit(BC_FLG_FTI_RUN, &bcs->Flag)) {
- /* n*10 ms */
- bcs->hw.isar.ftimer.expires =
- jiffies + ((ic->parm.aux.para[0] * 10 * HZ) / 1000);
- test_and_set_bit(BC_FLG_FTI_FTS, &bcs->Flag);
- add_timer(&bcs->hw.isar.ftimer);
- return (0);
- } else {
- if (cs->debug)
- debugl1(cs, "isar FTS=%d and FTI busy",
- ic->parm.aux.para[0]);
- }
- } else {
- if (cs->debug)
- debugl1(cs, "isar FTS=%d and isar.state not ready(%x)",
- ic->parm.aux.para[0], bcs->hw.isar.state);
- }
- ic->command = ISDN_STAT_FAXIND;
- ic->parm.aux.cmd = ISDN_FAX_CLASS1_ERROR;
- cs->iif.statcallb(ic);
- }
- break;
- case ISDN_FAX_CLASS1_FRM:
- case ISDN_FAX_CLASS1_FRH:
- case ISDN_FAX_CLASS1_FTM:
- case ISDN_FAX_CLASS1_FTH:
- if (ic->parm.aux.subcmd == AT_QUERY) {
- sprintf(ic->parm.aux.para,
- "%d", bcs->hw.isar.mod);
- ic->command = ISDN_STAT_FAXIND;
- ic->parm.aux.cmd = ISDN_FAX_CLASS1_QUERY;
- cs->iif.statcallb(ic);
- return (0);
- } else if (ic->parm.aux.subcmd == AT_EQ_QUERY) {
- char *p = ic->parm.aux.para;
- for (i = 0; i < FAXMODCNT; i++)
- if ((1 << i) & modmask)
- p += sprintf(p, "%d,", faxmodulation[i]);
- p--;
- *p = 0;
- ic->command = ISDN_STAT_FAXIND;
- ic->parm.aux.cmd = ISDN_FAX_CLASS1_QUERY;
- cs->iif.statcallb(ic);
- return (0);
- } else if (ic->parm.aux.subcmd == AT_EQ_VALUE) {
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "isar_auxcmd %s=%d",
- FC1_CMD[ic->parm.aux.cmd], ic->parm.aux.para[0]);
- for (i = 0; i < FAXMODCNT; i++)
- if (faxmodulation[i] == ic->parm.aux.para[0])
- break;
- if ((i < FAXMODCNT) && ((1 << i) & modmask) &&
- test_bit(BC_FLG_INIT, &bcs->Flag)) {
- isar_pump_cmd(bcs,
- ic->parm.aux.cmd,
- ic->parm.aux.para[0]);
- return (0);
- }
- }
- /* wrong modulation or not active */
- /* fall through */
- default:
- ic->command = ISDN_STAT_FAXIND;
- ic->parm.aux.cmd = ISDN_FAX_CLASS1_ERROR;
- cs->iif.statcallb(ic);
- }
- break;
- case (ISDN_CMD_IOCTL):
- switch (ic->arg) {
- case 9: /* load firmware */
- features = ISDN_FEATURE_L2_MODEM |
- ISDN_FEATURE_L2_FAX |
- ISDN_FEATURE_L3_FCLASS1;
- memcpy(&adr, ic->parm.num, sizeof(ulong));
- if (isar_load_firmware(cs, (u_char __user *)adr))
- return (1);
- else
- ll_run(cs, features);
- break;
- case 20:
- features = *(unsigned int *) ic->parm.num;
- printk(KERN_DEBUG "HiSax: max modulation old(%04x) new(%04x)\n",
- modmask, features);
- modmask = features;
- break;
- case 21:
- features = *(unsigned int *) ic->parm.num;
- printk(KERN_DEBUG "HiSax: FRM extra delay old(%d) new(%d) ms\n",
- frm_extra_delay, features);
- if (features >= 0)
- frm_extra_delay = features;
- break;
- case 22:
- features = *(unsigned int *) ic->parm.num;
- printk(KERN_DEBUG "HiSax: TOA old(%d) new(%d) db\n",
- para_TOA, features);
- if (features >= 0 && features < 32)
- para_TOA = features;
- break;
- default:
- printk(KERN_DEBUG "HiSax: invalid ioctl %d\n",
- (int) ic->arg);
- return (-EINVAL);
- }
- break;
- default:
- return (-EINVAL);
- }
- return (0);
-}
-
-void initisar(struct IsdnCardState *cs)
-{
- cs->bcs[0].BC_SetStack = setstack_isar;
- cs->bcs[1].BC_SetStack = setstack_isar;
- cs->bcs[0].BC_Close = close_isarstate;
- cs->bcs[1].BC_Close = close_isarstate;
- timer_setup(&cs->bcs[0].hw.isar.ftimer, ftimer_handler, 0);
- timer_setup(&cs->bcs[1].hw.isar.ftimer, ftimer_handler, 0);
-}
diff --git a/drivers/isdn/hisax/isar.h b/drivers/isdn/hisax/isar.h
deleted file mode 100644
index 0f4d101faf37..000000000000
--- a/drivers/isdn/hisax/isar.h
+++ /dev/null
@@ -1,222 +0,0 @@
-/* $Id: isar.h,v 1.11.2.2 2004/01/12 22:52:27 keil Exp $
- *
- * ISAR (Siemens PSB 7110) specific defines
- *
- * Author Karsten Keil
- * Copyright by Karsten Keil <keil@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#define ISAR_IRQMSK 0x04
-#define ISAR_IRQSTA 0x04
-#define ISAR_IRQBIT 0x75
-#define ISAR_CTRL_H 0x61
-#define ISAR_CTRL_L 0x60
-#define ISAR_IIS 0x58
-#define ISAR_IIA 0x58
-#define ISAR_HIS 0x50
-#define ISAR_HIA 0x50
-#define ISAR_MBOX 0x4c
-#define ISAR_WADR 0x4a
-#define ISAR_RADR 0x48
-
-#define ISAR_HIS_VNR 0x14
-#define ISAR_HIS_DKEY 0x02
-#define ISAR_HIS_FIRM 0x1e
-#define ISAR_HIS_STDSP 0x08
-#define ISAR_HIS_DIAG 0x05
-#define ISAR_HIS_WAITSTATE 0x27
-#define ISAR_HIS_TIMERIRQ 0x25
-#define ISAR_HIS_P0CFG 0x3c
-#define ISAR_HIS_P12CFG 0x24
-#define ISAR_HIS_SARTCFG 0x25
-#define ISAR_HIS_PUMPCFG 0x26
-#define ISAR_HIS_PUMPCTRL 0x2a
-#define ISAR_HIS_IOM2CFG 0x27
-#define ISAR_HIS_IOM2REQ 0x07
-#define ISAR_HIS_IOM2CTRL 0x2b
-#define ISAR_HIS_BSTREQ 0x0c
-#define ISAR_HIS_PSTREQ 0x0e
-#define ISAR_HIS_SDATA 0x20
-#define ISAR_HIS_DPS1 0x40
-#define ISAR_HIS_DPS2 0x80
-#define SET_DPS(x) ((x << 6) & 0xc0)
-
-#define ISAR_CMD_TIMERIRQ_OFF 0x20
-#define ISAR_CMD_TIMERIRQ_ON 0x21
-
-
-#define ISAR_IIS_MSCMSD 0x3f
-#define ISAR_IIS_VNR 0x15
-#define ISAR_IIS_DKEY 0x03
-#define ISAR_IIS_FIRM 0x1f
-#define ISAR_IIS_STDSP 0x09
-#define ISAR_IIS_DIAG 0x25
-#define ISAR_IIS_GSTEV 0x00
-#define ISAR_IIS_BSTEV 0x28
-#define ISAR_IIS_BSTRSP 0x2c
-#define ISAR_IIS_PSTRSP 0x2e
-#define ISAR_IIS_PSTEV 0x2a
-#define ISAR_IIS_IOM2RSP 0x27
-#define ISAR_IIS_RDATA 0x20
-#define ISAR_IIS_INVMSG 0x3f
-
-#define ISAR_CTRL_SWVER 0x10
-#define ISAR_CTRL_STST 0x40
-
-#define ISAR_MSG_HWVER {0x20, 0, 1}
-
-#define ISAR_DP1_USE 1
-#define ISAR_DP2_USE 2
-#define ISAR_RATE_REQ 3
-
-#define PMOD_DISABLE 0
-#define PMOD_FAX 1
-#define PMOD_DATAMODEM 2
-#define PMOD_HALFDUPLEX 3
-#define PMOD_V110 4
-#define PMOD_DTMF 5
-#define PMOD_DTMF_TRANS 6
-#define PMOD_BYPASS 7
-
-#define PCTRL_ORIG 0x80
-#define PV32P2_V23R 0x40
-#define PV32P2_V22A 0x20
-#define PV32P2_V22B 0x10
-#define PV32P2_V22C 0x08
-#define PV32P2_V21 0x02
-#define PV32P2_BEL 0x01
-
-// LSB MSB in ISAR doc wrong !!! Arghhh
-#define PV32P3_AMOD 0x80
-#define PV32P3_V32B 0x02
-#define PV32P3_V23B 0x01
-#define PV32P4_48 0x11
-#define PV32P5_48 0x05
-#define PV32P4_UT48 0x11
-#define PV32P5_UT48 0x0d
-#define PV32P4_96 0x11
-#define PV32P5_96 0x03
-#define PV32P4_UT96 0x11
-#define PV32P5_UT96 0x0f
-#define PV32P4_B96 0x91
-#define PV32P5_B96 0x0b
-#define PV32P4_UTB96 0xd1
-#define PV32P5_UTB96 0x0f
-#define PV32P4_120 0xb1
-#define PV32P5_120 0x09
-#define PV32P4_UT120 0xf1
-#define PV32P5_UT120 0x0f
-#define PV32P4_144 0x99
-#define PV32P5_144 0x09
-#define PV32P4_UT144 0xf9
-#define PV32P5_UT144 0x0f
-#define PV32P6_CTN 0x01
-#define PV32P6_ATN 0x02
-
-#define PFAXP2_CTN 0x01
-#define PFAXP2_ATN 0x04
-
-#define PSEV_10MS_TIMER 0x02
-#define PSEV_CON_ON 0x18
-#define PSEV_CON_OFF 0x19
-#define PSEV_V24_OFF 0x20
-#define PSEV_CTS_ON 0x21
-#define PSEV_CTS_OFF 0x22
-#define PSEV_DCD_ON 0x23
-#define PSEV_DCD_OFF 0x24
-#define PSEV_DSR_ON 0x25
-#define PSEV_DSR_OFF 0x26
-#define PSEV_REM_RET 0xcc
-#define PSEV_REM_REN 0xcd
-#define PSEV_GSTN_CLR 0xd4
-
-#define PSEV_RSP_READY 0xbc
-#define PSEV_LINE_TX_H 0xb3
-#define PSEV_LINE_TX_B 0xb2
-#define PSEV_LINE_RX_H 0xb1
-#define PSEV_LINE_RX_B 0xb0
-#define PSEV_RSP_CONN 0xb5
-#define PSEV_RSP_DISC 0xb7
-#define PSEV_RSP_FCERR 0xb9
-#define PSEV_RSP_SILDET 0xbe
-#define PSEV_RSP_SILOFF 0xab
-#define PSEV_FLAGS_DET 0xba
-
-#define PCTRL_CMD_FTH 0xa7
-#define PCTRL_CMD_FRH 0xa5
-#define PCTRL_CMD_FTM 0xa8
-#define PCTRL_CMD_FRM 0xa6
-#define PCTRL_CMD_SILON 0xac
-#define PCTRL_CMD_CONT 0xa2
-#define PCTRL_CMD_ESC 0xa4
-#define PCTRL_CMD_SILOFF 0xab
-#define PCTRL_CMD_HALT 0xa9
-
-#define PCTRL_LOC_RET 0xcf
-#define PCTRL_LOC_REN 0xce
-
-#define SMODE_DISABLE 0
-#define SMODE_V14 2
-#define SMODE_HDLC 3
-#define SMODE_BINARY 4
-#define SMODE_FSK_V14 5
-
-#define SCTRL_HDMC_BOTH 0x00
-#define SCTRL_HDMC_DTX 0x80
-#define SCTRL_HDMC_DRX 0x40
-#define S_P1_OVSP 0x40
-#define S_P1_SNP 0x20
-#define S_P1_EOP 0x10
-#define S_P1_EDP 0x08
-#define S_P1_NSB 0x04
-#define S_P1_CHS_8 0x03
-#define S_P1_CHS_7 0x02
-#define S_P1_CHS_6 0x01
-#define S_P1_CHS_5 0x00
-
-#define S_P2_BFT_DEF 0x10
-
-#define IOM_CTRL_ENA 0x80
-#define IOM_CTRL_NOPCM 0x00
-#define IOM_CTRL_ALAW 0x02
-#define IOM_CTRL_ULAW 0x04
-#define IOM_CTRL_RCV 0x01
-
-#define IOM_P1_TXD 0x10
-
-#define HDLC_FED 0x40
-#define HDLC_FSD 0x20
-#define HDLC_FST 0x20
-#define HDLC_ERROR 0x1c
-#define HDLC_ERR_FAD 0x10
-#define HDLC_ERR_RER 0x08
-#define HDLC_ERR_CER 0x04
-#define SART_NMD 0x01
-
-#define BSTAT_RDM0 0x1
-#define BSTAT_RDM1 0x2
-#define BSTAT_RDM2 0x4
-#define BSTAT_RDM3 0x8
-#define BSTEV_TBO 0x1f
-#define BSTEV_RBO 0x2f
-
-/* FAX State Machine */
-#define STFAX_NULL 0
-#define STFAX_READY 1
-#define STFAX_LINE 2
-#define STFAX_CONT 3
-#define STFAX_ACTIV 4
-#define STFAX_ESCAPE 5
-#define STFAX_SILDET 6
-
-#define ISDN_FAXPUMP_HALT 100
-
-extern int ISARVersion(struct IsdnCardState *cs, char *s);
-extern void isar_int_main(struct IsdnCardState *cs);
-extern void initisar(struct IsdnCardState *cs);
-extern void isar_fill_fifo(struct BCState *bcs);
-extern int isar_auxcmd(struct IsdnCardState *cs, isdn_ctrl *ic);
diff --git a/drivers/isdn/hisax/isdnl1.c b/drivers/isdn/hisax/isdnl1.c
deleted file mode 100644
index a560842c0e48..000000000000
--- a/drivers/isdn/hisax/isdnl1.c
+++ /dev/null
@@ -1,930 +0,0 @@
-/* $Id: isdnl1.c,v 2.46.2.5 2004/02/11 13:21:34 keil Exp $
- *
- * common low-level stuff for Siemens chipset-based ISDN cards
- *
- * Author Karsten Keil
- * based on the teles driver from Jan den Ouden
- * Copyright by Karsten Keil <keil@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- * For changes and modifications please read
- * Documentation/isdn/HiSax.cert
- *
- * Thanks to Jan den Ouden
- * Fritz Elfert
- * Beat Doebeli
- *
- */
-
-#include <linux/init.h>
-#include <linux/gfp.h>
-#include "hisax.h"
-#include "isdnl1.h"
-
-const char *l1_revision = "$Revision: 2.46.2.5 $";
-
-#define TIMER3_VALUE 7000
-
-static struct Fsm l1fsm_b;
-static struct Fsm l1fsm_s;
-
-enum {
- ST_L1_F2,
- ST_L1_F3,
- ST_L1_F4,
- ST_L1_F5,
- ST_L1_F6,
- ST_L1_F7,
- ST_L1_F8,
-};
-
-#define L1S_STATE_COUNT (ST_L1_F8 + 1)
-
-static char *strL1SState[] =
-{
- "ST_L1_F2",
- "ST_L1_F3",
- "ST_L1_F4",
- "ST_L1_F5",
- "ST_L1_F6",
- "ST_L1_F7",
- "ST_L1_F8",
-};
-
-#ifdef HISAX_UINTERFACE
-static
-struct Fsm l1fsm_u =
-{NULL, 0, 0, NULL, NULL};
-
-enum {
- ST_L1_RESET,
- ST_L1_DEACT,
- ST_L1_SYNC2,
- ST_L1_TRANS,
-};
-
-#define L1U_STATE_COUNT (ST_L1_TRANS + 1)
-
-static char *strL1UState[] =
-{
- "ST_L1_RESET",
- "ST_L1_DEACT",
- "ST_L1_SYNC2",
- "ST_L1_TRANS",
-};
-#endif
-
-enum {
- ST_L1_NULL,
- ST_L1_WAIT_ACT,
- ST_L1_WAIT_DEACT,
- ST_L1_ACTIV,
-};
-
-#define L1B_STATE_COUNT (ST_L1_ACTIV + 1)
-
-static char *strL1BState[] =
-{
- "ST_L1_NULL",
- "ST_L1_WAIT_ACT",
- "ST_L1_WAIT_DEACT",
- "ST_L1_ACTIV",
-};
-
-enum {
- EV_PH_ACTIVATE,
- EV_PH_DEACTIVATE,
- EV_RESET_IND,
- EV_DEACT_CNF,
- EV_DEACT_IND,
- EV_POWER_UP,
- EV_RSYNC_IND,
- EV_INFO2_IND,
- EV_INFO4_IND,
- EV_TIMER_DEACT,
- EV_TIMER_ACT,
- EV_TIMER3,
-};
-
-#define L1_EVENT_COUNT (EV_TIMER3 + 1)
-
-static char *strL1Event[] =
-{
- "EV_PH_ACTIVATE",
- "EV_PH_DEACTIVATE",
- "EV_RESET_IND",
- "EV_DEACT_CNF",
- "EV_DEACT_IND",
- "EV_POWER_UP",
- "EV_RSYNC_IND",
- "EV_INFO2_IND",
- "EV_INFO4_IND",
- "EV_TIMER_DEACT",
- "EV_TIMER_ACT",
- "EV_TIMER3",
-};
-
-void
-debugl1(struct IsdnCardState *cs, char *fmt, ...)
-{
- va_list args;
- char tmp[8];
-
- va_start(args, fmt);
- sprintf(tmp, "Card%d ", cs->cardnr + 1);
- VHiSax_putstatus(cs, tmp, fmt, args);
- va_end(args);
-}
-
-static void
-l1m_debug(struct FsmInst *fi, char *fmt, ...)
-{
- va_list args;
- struct PStack *st = fi->userdata;
- struct IsdnCardState *cs = st->l1.hardware;
- char tmp[8];
-
- va_start(args, fmt);
- sprintf(tmp, "Card%d ", cs->cardnr + 1);
- VHiSax_putstatus(cs, tmp, fmt, args);
- va_end(args);
-}
-
-static void
-L1activated(struct IsdnCardState *cs)
-{
- struct PStack *st;
-
- st = cs->stlist;
- while (st) {
- if (test_and_clear_bit(FLG_L1_ACTIVATING, &st->l1.Flags))
- st->l1.l1l2(st, PH_ACTIVATE | CONFIRM, NULL);
- else
- st->l1.l1l2(st, PH_ACTIVATE | INDICATION, NULL);
- st = st->next;
- }
-}
-
-static void
-L1deactivated(struct IsdnCardState *cs)
-{
- struct PStack *st;
-
- st = cs->stlist;
- while (st) {
- if (test_bit(FLG_L1_DBUSY, &cs->HW_Flags))
- st->l1.l1l2(st, PH_PAUSE | CONFIRM, NULL);
- st->l1.l1l2(st, PH_DEACTIVATE | INDICATION, NULL);
- st = st->next;
- }
- test_and_clear_bit(FLG_L1_DBUSY, &cs->HW_Flags);
-}
-
-void
-DChannel_proc_xmt(struct IsdnCardState *cs)
-{
- struct PStack *stptr;
-
- if (cs->tx_skb)
- return;
-
- stptr = cs->stlist;
- while (stptr != NULL) {
- if (test_and_clear_bit(FLG_L1_PULL_REQ, &stptr->l1.Flags)) {
- stptr->l1.l1l2(stptr, PH_PULL | CONFIRM, NULL);
- break;
- } else
- stptr = stptr->next;
- }
-}
-
-void
-DChannel_proc_rcv(struct IsdnCardState *cs)
-{
- struct sk_buff *skb, *nskb;
- struct PStack *stptr = cs->stlist;
- int found, tei, sapi;
-
- if (stptr)
- if (test_bit(FLG_L1_ACTTIMER, &stptr->l1.Flags))
- FsmEvent(&stptr->l1.l1m, EV_TIMER_ACT, NULL);
- while ((skb = skb_dequeue(&cs->rq))) {
-#ifdef L2FRAME_DEBUG /* psa */
- if (cs->debug & L1_DEB_LAPD)
- Logl2Frame(cs, skb, "PH_DATA", 1);
-#endif
- stptr = cs->stlist;
- if (skb->len < 3) {
- debugl1(cs, "D-channel frame too short(%d)", skb->len);
- dev_kfree_skb(skb);
- return;
- }
- if ((skb->data[0] & 1) || !(skb->data[1] & 1)) {
- debugl1(cs, "D-channel frame wrong EA0/EA1");
- dev_kfree_skb(skb);
- return;
- }
- sapi = skb->data[0] >> 2;
- tei = skb->data[1] >> 1;
- if (cs->debug & DEB_DLOG_HEX)
- LogFrame(cs, skb->data, skb->len);
- if (cs->debug & DEB_DLOG_VERBOSE)
- dlogframe(cs, skb, 1);
- if (tei == GROUP_TEI) {
- if (sapi == CTRL_SAPI) { /* sapi 0 */
- while (stptr != NULL) {
- if ((nskb = skb_clone(skb, GFP_ATOMIC)))
- stptr->l1.l1l2(stptr, PH_DATA | INDICATION, nskb);
- else
- printk(KERN_WARNING "HiSax: isdn broadcast buffer shortage\n");
- stptr = stptr->next;
- }
- } else if (sapi == TEI_SAPI) {
- while (stptr != NULL) {
- if ((nskb = skb_clone(skb, GFP_ATOMIC)))
- stptr->l1.l1tei(stptr, PH_DATA | INDICATION, nskb);
- else
- printk(KERN_WARNING "HiSax: tei broadcast buffer shortage\n");
- stptr = stptr->next;
- }
- }
- dev_kfree_skb(skb);
- } else if (sapi == CTRL_SAPI) { /* sapi 0 */
- found = 0;
- while (stptr != NULL)
- if (tei == stptr->l2.tei) {
- stptr->l1.l1l2(stptr, PH_DATA | INDICATION, skb);
- found = !0;
- break;
- } else
- stptr = stptr->next;
- if (!found)
- dev_kfree_skb(skb);
- } else
- dev_kfree_skb(skb);
- }
-}
-
-static void
-BChannel_proc_xmt(struct BCState *bcs)
-{
- struct PStack *st = bcs->st;
-
- if (test_bit(BC_FLG_BUSY, &bcs->Flag)) {
- debugl1(bcs->cs, "BC_BUSY Error");
- return;
- }
-
- if (test_and_clear_bit(FLG_L1_PULL_REQ, &st->l1.Flags))
- st->l1.l1l2(st, PH_PULL | CONFIRM, NULL);
- if (!test_bit(BC_FLG_ACTIV, &bcs->Flag)) {
- if (!test_bit(BC_FLG_BUSY, &bcs->Flag) &&
- skb_queue_empty(&bcs->squeue)) {
- st->l2.l2l1(st, PH_DEACTIVATE | CONFIRM, NULL);
- }
- }
-}
-
-static void
-BChannel_proc_rcv(struct BCState *bcs)
-{
- struct sk_buff *skb;
-
- if (bcs->st->l1.l1m.state == ST_L1_WAIT_ACT) {
- FsmDelTimer(&bcs->st->l1.timer, 4);
- FsmEvent(&bcs->st->l1.l1m, EV_TIMER_ACT, NULL);
- }
- while ((skb = skb_dequeue(&bcs->rqueue))) {
- bcs->st->l1.l1l2(bcs->st, PH_DATA | INDICATION, skb);
- }
-}
-
-static void
-BChannel_proc_ack(struct BCState *bcs)
-{
- u_long flags;
- int ack;
-
- spin_lock_irqsave(&bcs->aclock, flags);
- ack = bcs->ackcnt;
- bcs->ackcnt = 0;
- spin_unlock_irqrestore(&bcs->aclock, flags);
- if (ack)
- lli_writewakeup(bcs->st, ack);
-}
-
-void
-BChannel_bh(struct work_struct *work)
-{
- struct BCState *bcs = container_of(work, struct BCState, tqueue);
-
- if (!bcs)
- return;
- if (test_and_clear_bit(B_RCVBUFREADY, &bcs->event))
- BChannel_proc_rcv(bcs);
- if (test_and_clear_bit(B_XMTBUFREADY, &bcs->event))
- BChannel_proc_xmt(bcs);
- if (test_and_clear_bit(B_ACKPENDING, &bcs->event))
- BChannel_proc_ack(bcs);
-}
-
-void
-HiSax_addlist(struct IsdnCardState *cs,
- struct PStack *st)
-{
- st->next = cs->stlist;
- cs->stlist = st;
-}
-
-void
-HiSax_rmlist(struct IsdnCardState *cs,
- struct PStack *st)
-{
- struct PStack *p;
-
- FsmDelTimer(&st->l1.timer, 0);
- if (cs->stlist == st)
- cs->stlist = st->next;
- else {
- p = cs->stlist;
- while (p)
- if (p->next == st) {
- p->next = st->next;
- return;
- } else
- p = p->next;
- }
-}
-
-void
-init_bcstate(struct IsdnCardState *cs, int bc)
-{
- struct BCState *bcs = cs->bcs + bc;
-
- bcs->cs = cs;
- bcs->channel = bc;
- INIT_WORK(&bcs->tqueue, BChannel_bh);
- spin_lock_init(&bcs->aclock);
- bcs->BC_SetStack = NULL;
- bcs->BC_Close = NULL;
- bcs->Flag = 0;
-}
-
-#ifdef L2FRAME_DEBUG /* psa */
-
-static char *
-l2cmd(u_char cmd)
-{
- switch (cmd & ~0x10) {
- case 1:
- return "RR";
- case 5:
- return "RNR";
- case 9:
- return "REJ";
- case 0x6f:
- return "SABME";
- case 0x0f:
- return "DM";
- case 3:
- return "UI";
- case 0x43:
- return "DISC";
- case 0x63:
- return "UA";
- case 0x87:
- return "FRMR";
- case 0xaf:
- return "XID";
- default:
- if (!(cmd & 1))
- return "I";
- else
- return "invalid command";
- }
-}
-
-static char tmpdeb[32];
-
-static char *
-l2frames(u_char *ptr)
-{
- switch (ptr[2] & ~0x10) {
- case 1:
- case 5:
- case 9:
- sprintf(tmpdeb, "%s[%d](nr %d)", l2cmd(ptr[2]), ptr[3] & 1, ptr[3] >> 1);
- break;
- case 0x6f:
- case 0x0f:
- case 3:
- case 0x43:
- case 0x63:
- case 0x87:
- case 0xaf:
- sprintf(tmpdeb, "%s[%d]", l2cmd(ptr[2]), (ptr[2] & 0x10) >> 4);
- break;
- default:
- if (!(ptr[2] & 1)) {
- sprintf(tmpdeb, "I[%d](ns %d, nr %d)", ptr[3] & 1, ptr[2] >> 1, ptr[3] >> 1);
- break;
- } else
- return "invalid command";
- }
-
-
- return tmpdeb;
-}
-
-void
-Logl2Frame(struct IsdnCardState *cs, struct sk_buff *skb, char *buf, int dir)
-{
- u_char *ptr;
-
- ptr = skb->data;
-
- if (ptr[0] & 1 || !(ptr[1] & 1))
- debugl1(cs, "Address not LAPD");
- else
- debugl1(cs, "%s %s: %s%c (sapi %d, tei %d)",
- (dir ? "<-" : "->"), buf, l2frames(ptr),
- ((ptr[0] & 2) >> 1) == dir ? 'C' : 'R', ptr[0] >> 2, ptr[1] >> 1);
-}
-#endif
-
-static void
-l1_reset(struct FsmInst *fi, int event, void *arg)
-{
- FsmChangeState(fi, ST_L1_F3);
-}
-
-static void
-l1_deact_cnf(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- FsmChangeState(fi, ST_L1_F3);
- if (test_bit(FLG_L1_ACTIVATING, &st->l1.Flags))
- st->l1.l1hw(st, HW_ENABLE | REQUEST, NULL);
-}
-
-static void
-l1_deact_req_s(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- FsmChangeState(fi, ST_L1_F3);
- FsmRestartTimer(&st->l1.timer, 550, EV_TIMER_DEACT, NULL, 2);
- test_and_set_bit(FLG_L1_DEACTTIMER, &st->l1.Flags);
-}
-
-static void
-l1_power_up_s(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- if (test_bit(FLG_L1_ACTIVATING, &st->l1.Flags)) {
- FsmChangeState(fi, ST_L1_F4);
- st->l1.l1hw(st, HW_INFO3 | REQUEST, NULL);
- FsmRestartTimer(&st->l1.timer, TIMER3_VALUE, EV_TIMER3, NULL, 2);
- test_and_set_bit(FLG_L1_T3RUN, &st->l1.Flags);
- } else
- FsmChangeState(fi, ST_L1_F3);
-}
-
-static void
-l1_go_F5(struct FsmInst *fi, int event, void *arg)
-{
- FsmChangeState(fi, ST_L1_F5);
-}
-
-static void
-l1_go_F8(struct FsmInst *fi, int event, void *arg)
-{
- FsmChangeState(fi, ST_L1_F8);
-}
-
-static void
-l1_info2_ind(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
-#ifdef HISAX_UINTERFACE
- if (test_bit(FLG_L1_UINT, &st->l1.Flags))
- FsmChangeState(fi, ST_L1_SYNC2);
- else
-#endif
- FsmChangeState(fi, ST_L1_F6);
- st->l1.l1hw(st, HW_INFO3 | REQUEST, NULL);
-}
-
-static void
-l1_info4_ind(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
-#ifdef HISAX_UINTERFACE
- if (test_bit(FLG_L1_UINT, &st->l1.Flags))
- FsmChangeState(fi, ST_L1_TRANS);
- else
-#endif
- FsmChangeState(fi, ST_L1_F7);
- st->l1.l1hw(st, HW_INFO3 | REQUEST, NULL);
- if (test_and_clear_bit(FLG_L1_DEACTTIMER, &st->l1.Flags))
- FsmDelTimer(&st->l1.timer, 4);
- if (!test_bit(FLG_L1_ACTIVATED, &st->l1.Flags)) {
- if (test_and_clear_bit(FLG_L1_T3RUN, &st->l1.Flags))
- FsmDelTimer(&st->l1.timer, 3);
- FsmRestartTimer(&st->l1.timer, 110, EV_TIMER_ACT, NULL, 2);
- test_and_set_bit(FLG_L1_ACTTIMER, &st->l1.Flags);
- }
-}
-
-static void
-l1_timer3(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- test_and_clear_bit(FLG_L1_T3RUN, &st->l1.Flags);
- if (test_and_clear_bit(FLG_L1_ACTIVATING, &st->l1.Flags))
- L1deactivated(st->l1.hardware);
-
-#ifdef HISAX_UINTERFACE
- if (!test_bit(FLG_L1_UINT, &st->l1.Flags))
-#endif
- if (st->l1.l1m.state != ST_L1_F6) {
- FsmChangeState(fi, ST_L1_F3);
- st->l1.l1hw(st, HW_ENABLE | REQUEST, NULL);
- }
-}
-
-static void
-l1_timer_act(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- test_and_clear_bit(FLG_L1_ACTTIMER, &st->l1.Flags);
- test_and_set_bit(FLG_L1_ACTIVATED, &st->l1.Flags);
- L1activated(st->l1.hardware);
-}
-
-static void
-l1_timer_deact(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- test_and_clear_bit(FLG_L1_DEACTTIMER, &st->l1.Flags);
- test_and_clear_bit(FLG_L1_ACTIVATED, &st->l1.Flags);
- L1deactivated(st->l1.hardware);
- st->l1.l1hw(st, HW_DEACTIVATE | RESPONSE, NULL);
-}
-
-static void
-l1_activate_s(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- st->l1.l1hw(st, HW_RESET | REQUEST, NULL);
-}
-
-static void
-l1_activate_no(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- if ((!test_bit(FLG_L1_DEACTTIMER, &st->l1.Flags)) && (!test_bit(FLG_L1_T3RUN, &st->l1.Flags))) {
- test_and_clear_bit(FLG_L1_ACTIVATING, &st->l1.Flags);
- L1deactivated(st->l1.hardware);
- }
-}
-
-static struct FsmNode L1SFnList[] __initdata =
-{
- {ST_L1_F3, EV_PH_ACTIVATE, l1_activate_s},
- {ST_L1_F6, EV_PH_ACTIVATE, l1_activate_no},
- {ST_L1_F8, EV_PH_ACTIVATE, l1_activate_no},
- {ST_L1_F3, EV_RESET_IND, l1_reset},
- {ST_L1_F4, EV_RESET_IND, l1_reset},
- {ST_L1_F5, EV_RESET_IND, l1_reset},
- {ST_L1_F6, EV_RESET_IND, l1_reset},
- {ST_L1_F7, EV_RESET_IND, l1_reset},
- {ST_L1_F8, EV_RESET_IND, l1_reset},
- {ST_L1_F3, EV_DEACT_CNF, l1_deact_cnf},
- {ST_L1_F4, EV_DEACT_CNF, l1_deact_cnf},
- {ST_L1_F5, EV_DEACT_CNF, l1_deact_cnf},
- {ST_L1_F6, EV_DEACT_CNF, l1_deact_cnf},
- {ST_L1_F7, EV_DEACT_CNF, l1_deact_cnf},
- {ST_L1_F8, EV_DEACT_CNF, l1_deact_cnf},
- {ST_L1_F6, EV_DEACT_IND, l1_deact_req_s},
- {ST_L1_F7, EV_DEACT_IND, l1_deact_req_s},
- {ST_L1_F8, EV_DEACT_IND, l1_deact_req_s},
- {ST_L1_F3, EV_POWER_UP, l1_power_up_s},
- {ST_L1_F4, EV_RSYNC_IND, l1_go_F5},
- {ST_L1_F6, EV_RSYNC_IND, l1_go_F8},
- {ST_L1_F7, EV_RSYNC_IND, l1_go_F8},
- {ST_L1_F3, EV_INFO2_IND, l1_info2_ind},
- {ST_L1_F4, EV_INFO2_IND, l1_info2_ind},
- {ST_L1_F5, EV_INFO2_IND, l1_info2_ind},
- {ST_L1_F7, EV_INFO2_IND, l1_info2_ind},
- {ST_L1_F8, EV_INFO2_IND, l1_info2_ind},
- {ST_L1_F3, EV_INFO4_IND, l1_info4_ind},
- {ST_L1_F4, EV_INFO4_IND, l1_info4_ind},
- {ST_L1_F5, EV_INFO4_IND, l1_info4_ind},
- {ST_L1_F6, EV_INFO4_IND, l1_info4_ind},
- {ST_L1_F8, EV_INFO4_IND, l1_info4_ind},
- {ST_L1_F3, EV_TIMER3, l1_timer3},
- {ST_L1_F4, EV_TIMER3, l1_timer3},
- {ST_L1_F5, EV_TIMER3, l1_timer3},
- {ST_L1_F6, EV_TIMER3, l1_timer3},
- {ST_L1_F8, EV_TIMER3, l1_timer3},
- {ST_L1_F7, EV_TIMER_ACT, l1_timer_act},
- {ST_L1_F3, EV_TIMER_DEACT, l1_timer_deact},
- {ST_L1_F4, EV_TIMER_DEACT, l1_timer_deact},
- {ST_L1_F5, EV_TIMER_DEACT, l1_timer_deact},
- {ST_L1_F6, EV_TIMER_DEACT, l1_timer_deact},
- {ST_L1_F7, EV_TIMER_DEACT, l1_timer_deact},
- {ST_L1_F8, EV_TIMER_DEACT, l1_timer_deact},
-};
-
-#ifdef HISAX_UINTERFACE
-static void
-l1_deact_req_u(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- FsmChangeState(fi, ST_L1_RESET);
- FsmRestartTimer(&st->l1.timer, 550, EV_TIMER_DEACT, NULL, 2);
- test_and_set_bit(FLG_L1_DEACTTIMER, &st->l1.Flags);
- st->l1.l1hw(st, HW_ENABLE | REQUEST, NULL);
-}
-
-static void
-l1_power_up_u(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- FsmRestartTimer(&st->l1.timer, TIMER3_VALUE, EV_TIMER3, NULL, 2);
- test_and_set_bit(FLG_L1_T3RUN, &st->l1.Flags);
-}
-
-static void
-l1_info0_ind(struct FsmInst *fi, int event, void *arg)
-{
- FsmChangeState(fi, ST_L1_DEACT);
-}
-
-static void
-l1_activate_u(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- st->l1.l1hw(st, HW_INFO1 | REQUEST, NULL);
-}
-
-static struct FsmNode L1UFnList[] __initdata =
-{
- {ST_L1_RESET, EV_DEACT_IND, l1_deact_req_u},
- {ST_L1_DEACT, EV_DEACT_IND, l1_deact_req_u},
- {ST_L1_SYNC2, EV_DEACT_IND, l1_deact_req_u},
- {ST_L1_TRANS, EV_DEACT_IND, l1_deact_req_u},
- {ST_L1_DEACT, EV_PH_ACTIVATE, l1_activate_u},
- {ST_L1_DEACT, EV_POWER_UP, l1_power_up_u},
- {ST_L1_DEACT, EV_INFO2_IND, l1_info2_ind},
- {ST_L1_TRANS, EV_INFO2_IND, l1_info2_ind},
- {ST_L1_RESET, EV_DEACT_CNF, l1_info0_ind},
- {ST_L1_DEACT, EV_INFO4_IND, l1_info4_ind},
- {ST_L1_SYNC2, EV_INFO4_IND, l1_info4_ind},
- {ST_L1_RESET, EV_INFO4_IND, l1_info4_ind},
- {ST_L1_DEACT, EV_TIMER3, l1_timer3},
- {ST_L1_SYNC2, EV_TIMER3, l1_timer3},
- {ST_L1_TRANS, EV_TIMER_ACT, l1_timer_act},
- {ST_L1_DEACT, EV_TIMER_DEACT, l1_timer_deact},
- {ST_L1_SYNC2, EV_TIMER_DEACT, l1_timer_deact},
- {ST_L1_RESET, EV_TIMER_DEACT, l1_timer_deact},
-};
-
-#endif
-
-static void
-l1b_activate(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- FsmChangeState(fi, ST_L1_WAIT_ACT);
- FsmRestartTimer(&st->l1.timer, st->l1.delay, EV_TIMER_ACT, NULL, 2);
-}
-
-static void
-l1b_deactivate(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- FsmChangeState(fi, ST_L1_WAIT_DEACT);
- FsmRestartTimer(&st->l1.timer, 10, EV_TIMER_DEACT, NULL, 2);
-}
-
-static void
-l1b_timer_act(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- FsmChangeState(fi, ST_L1_ACTIV);
- st->l1.l1l2(st, PH_ACTIVATE | CONFIRM, NULL);
-}
-
-static void
-l1b_timer_deact(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- FsmChangeState(fi, ST_L1_NULL);
- st->l2.l2l1(st, PH_DEACTIVATE | CONFIRM, NULL);
-}
-
-static struct FsmNode L1BFnList[] __initdata =
-{
- {ST_L1_NULL, EV_PH_ACTIVATE, l1b_activate},
- {ST_L1_WAIT_ACT, EV_TIMER_ACT, l1b_timer_act},
- {ST_L1_ACTIV, EV_PH_DEACTIVATE, l1b_deactivate},
- {ST_L1_WAIT_DEACT, EV_TIMER_DEACT, l1b_timer_deact},
-};
-
-int __init
-Isdnl1New(void)
-{
- int retval;
-
- l1fsm_s.state_count = L1S_STATE_COUNT;
- l1fsm_s.event_count = L1_EVENT_COUNT;
- l1fsm_s.strEvent = strL1Event;
- l1fsm_s.strState = strL1SState;
- retval = FsmNew(&l1fsm_s, L1SFnList, ARRAY_SIZE(L1SFnList));
- if (retval)
- return retval;
-
- l1fsm_b.state_count = L1B_STATE_COUNT;
- l1fsm_b.event_count = L1_EVENT_COUNT;
- l1fsm_b.strEvent = strL1Event;
- l1fsm_b.strState = strL1BState;
- retval = FsmNew(&l1fsm_b, L1BFnList, ARRAY_SIZE(L1BFnList));
- if (retval) {
- FsmFree(&l1fsm_s);
- return retval;
- }
-#ifdef HISAX_UINTERFACE
- l1fsm_u.state_count = L1U_STATE_COUNT;
- l1fsm_u.event_count = L1_EVENT_COUNT;
- l1fsm_u.strEvent = strL1Event;
- l1fsm_u.strState = strL1UState;
- retval = FsmNew(&l1fsm_u, L1UFnList, ARRAY_SIZE(L1UFnList));
- if (retval) {
- FsmFree(&l1fsm_s);
- FsmFree(&l1fsm_b);
- return retval;
- }
-#endif
- return 0;
-}
-
-void Isdnl1Free(void)
-{
-#ifdef HISAX_UINTERFACE
- FsmFree(&l1fsm_u);
-#endif
- FsmFree(&l1fsm_s);
- FsmFree(&l1fsm_b);
-}
-
-static void
-dch_l2l1(struct PStack *st, int pr, void *arg)
-{
- struct IsdnCardState *cs = (struct IsdnCardState *) st->l1.hardware;
-
- switch (pr) {
- case (PH_DATA | REQUEST):
- case (PH_PULL | REQUEST):
- case (PH_PULL | INDICATION):
- st->l1.l1hw(st, pr, arg);
- break;
- case (PH_ACTIVATE | REQUEST):
- if (cs->debug)
- debugl1(cs, "PH_ACTIVATE_REQ %s",
- st->l1.l1m.fsm->strState[st->l1.l1m.state]);
- if (test_bit(FLG_L1_ACTIVATED, &st->l1.Flags))
- st->l1.l1l2(st, PH_ACTIVATE | CONFIRM, NULL);
- else {
- test_and_set_bit(FLG_L1_ACTIVATING, &st->l1.Flags);
- FsmEvent(&st->l1.l1m, EV_PH_ACTIVATE, arg);
- }
- break;
- case (PH_TESTLOOP | REQUEST):
- if (1 & (long) arg)
- debugl1(cs, "PH_TEST_LOOP B1");
- if (2 & (long) arg)
- debugl1(cs, "PH_TEST_LOOP B2");
- if (!(3 & (long) arg))
- debugl1(cs, "PH_TEST_LOOP DISABLED");
- st->l1.l1hw(st, HW_TESTLOOP | REQUEST, arg);
- break;
- default:
- if (cs->debug)
- debugl1(cs, "dch_l2l1 msg %04X unhandled", pr);
- break;
- }
-}
-
-void
-l1_msg(struct IsdnCardState *cs, int pr, void *arg) {
- struct PStack *st;
-
- st = cs->stlist;
-
- while (st) {
- switch (pr) {
- case (HW_RESET | INDICATION):
- FsmEvent(&st->l1.l1m, EV_RESET_IND, arg);
- break;
- case (HW_DEACTIVATE | CONFIRM):
- FsmEvent(&st->l1.l1m, EV_DEACT_CNF, arg);
- break;
- case (HW_DEACTIVATE | INDICATION):
- FsmEvent(&st->l1.l1m, EV_DEACT_IND, arg);
- break;
- case (HW_POWERUP | CONFIRM):
- FsmEvent(&st->l1.l1m, EV_POWER_UP, arg);
- break;
- case (HW_RSYNC | INDICATION):
- FsmEvent(&st->l1.l1m, EV_RSYNC_IND, arg);
- break;
- case (HW_INFO2 | INDICATION):
- FsmEvent(&st->l1.l1m, EV_INFO2_IND, arg);
- break;
- case (HW_INFO4_P8 | INDICATION):
- case (HW_INFO4_P10 | INDICATION):
- FsmEvent(&st->l1.l1m, EV_INFO4_IND, arg);
- break;
- default:
- if (cs->debug)
- debugl1(cs, "%s %04X unhandled", __func__, pr);
- break;
- }
- st = st->next;
- }
-}
-
-void
-l1_msg_b(struct PStack *st, int pr, void *arg) {
- switch (pr) {
- case (PH_ACTIVATE | REQUEST):
- FsmEvent(&st->l1.l1m, EV_PH_ACTIVATE, NULL);
- break;
- case (PH_DEACTIVATE | REQUEST):
- FsmEvent(&st->l1.l1m, EV_PH_DEACTIVATE, NULL);
- break;
- }
-}
-
-void
-setstack_HiSax(struct PStack *st, struct IsdnCardState *cs)
-{
- st->l1.hardware = cs;
- st->protocol = cs->protocol;
- st->l1.l1m.fsm = &l1fsm_s;
- st->l1.l1m.state = ST_L1_F3;
- st->l1.Flags = 0;
-#ifdef HISAX_UINTERFACE
- if (test_bit(FLG_HW_L1_UINT, &cs->HW_Flags)) {
- st->l1.l1m.fsm = &l1fsm_u;
- st->l1.l1m.state = ST_L1_RESET;
- st->l1.Flags = FLG_L1_UINT;
- }
-#endif
- st->l1.l1m.debug = cs->debug;
- st->l1.l1m.userdata = st;
- st->l1.l1m.userint = 0;
- st->l1.l1m.printdebug = l1m_debug;
- FsmInitTimer(&st->l1.l1m, &st->l1.timer);
- setstack_tei(st);
- setstack_manager(st);
- st->l1.stlistp = &(cs->stlist);
- st->l2.l2l1 = dch_l2l1;
- if (cs->setstack_d)
- cs->setstack_d(st, cs);
-}
-
-void
-setstack_l1_B(struct PStack *st)
-{
- struct IsdnCardState *cs = st->l1.hardware;
-
- st->l1.l1m.fsm = &l1fsm_b;
- st->l1.l1m.state = ST_L1_NULL;
- st->l1.l1m.debug = cs->debug;
- st->l1.l1m.userdata = st;
- st->l1.l1m.userint = 0;
- st->l1.l1m.printdebug = l1m_debug;
- st->l1.Flags = 0;
- FsmInitTimer(&st->l1.l1m, &st->l1.timer);
-}
diff --git a/drivers/isdn/hisax/isdnl1.h b/drivers/isdn/hisax/isdnl1.h
deleted file mode 100644
index 66ddcab19bba..000000000000
--- a/drivers/isdn/hisax/isdnl1.h
+++ /dev/null
@@ -1,32 +0,0 @@
-/* $Id: isdnl1.h,v 2.12.2.3 2004/02/11 13:21:34 keil Exp $
- *
- * Layer 1 defines
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#define D_RCVBUFREADY 0
-#define D_XMTBUFREADY 1
-#define D_L1STATECHANGE 2
-#define D_CLEARBUSY 3
-#define D_RX_MON0 4
-#define D_RX_MON1 5
-#define D_TX_MON0 6
-#define D_TX_MON1 7
-#define E_RCVBUFREADY 8
-
-#define B_RCVBUFREADY 0
-#define B_XMTBUFREADY 1
-#define B_ACKPENDING 2
-
-__printf(2, 3)
-void debugl1(struct IsdnCardState *cs, char *fmt, ...);
-void DChannel_proc_xmt(struct IsdnCardState *cs);
-void DChannel_proc_rcv(struct IsdnCardState *cs);
-void l1_msg(struct IsdnCardState *cs, int pr, void *arg);
-void l1_msg_b(struct PStack *st, int pr, void *arg);
-void Logl2Frame(struct IsdnCardState *cs, struct sk_buff *skb, char *buf,
- int dir);
-void BChannel_bh(struct work_struct *work);
diff --git a/drivers/isdn/hisax/isdnl2.c b/drivers/isdn/hisax/isdnl2.c
deleted file mode 100644
index 1a40ed04cb52..000000000000
--- a/drivers/isdn/hisax/isdnl2.c
+++ /dev/null
@@ -1,1839 +0,0 @@
-/* $Id: isdnl2.c,v 2.30.2.4 2004/02/11 13:21:34 keil Exp $
- *
- * Author Karsten Keil
- * based on the teles driver from Jan den Ouden
- * Copyright by Karsten Keil <keil@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- * For changes and modifications please read
- * Documentation/isdn/HiSax.cert
- *
- * Thanks to Jan den Ouden
- * Fritz Elfert
- *
- */
-
-#include <linux/init.h>
-#include <linux/gfp.h>
-#include "hisax.h"
-#include "isdnl2.h"
-
-const char *l2_revision = "$Revision: 2.30.2.4 $";
-
-static void l2m_debug(struct FsmInst *fi, char *fmt, ...);
-
-static struct Fsm l2fsm;
-
-enum {
- ST_L2_1,
- ST_L2_2,
- ST_L2_3,
- ST_L2_4,
- ST_L2_5,
- ST_L2_6,
- ST_L2_7,
- ST_L2_8,
-};
-
-#define L2_STATE_COUNT (ST_L2_8 + 1)
-
-static char *strL2State[] =
-{
- "ST_L2_1",
- "ST_L2_2",
- "ST_L2_3",
- "ST_L2_4",
- "ST_L2_5",
- "ST_L2_6",
- "ST_L2_7",
- "ST_L2_8",
-};
-
-enum {
- EV_L2_UI,
- EV_L2_SABME,
- EV_L2_DISC,
- EV_L2_DM,
- EV_L2_UA,
- EV_L2_FRMR,
- EV_L2_SUPER,
- EV_L2_I,
- EV_L2_DL_DATA,
- EV_L2_ACK_PULL,
- EV_L2_DL_UNIT_DATA,
- EV_L2_DL_ESTABLISH_REQ,
- EV_L2_DL_RELEASE_REQ,
- EV_L2_MDL_ASSIGN,
- EV_L2_MDL_REMOVE,
- EV_L2_MDL_ERROR,
- EV_L1_DEACTIVATE,
- EV_L2_T200,
- EV_L2_T203,
- EV_L2_SET_OWN_BUSY,
- EV_L2_CLEAR_OWN_BUSY,
- EV_L2_FRAME_ERROR,
-};
-
-#define L2_EVENT_COUNT (EV_L2_FRAME_ERROR + 1)
-
-static char *strL2Event[] =
-{
- "EV_L2_UI",
- "EV_L2_SABME",
- "EV_L2_DISC",
- "EV_L2_DM",
- "EV_L2_UA",
- "EV_L2_FRMR",
- "EV_L2_SUPER",
- "EV_L2_I",
- "EV_L2_DL_DATA",
- "EV_L2_ACK_PULL",
- "EV_L2_DL_UNIT_DATA",
- "EV_L2_DL_ESTABLISH_REQ",
- "EV_L2_DL_RELEASE_REQ",
- "EV_L2_MDL_ASSIGN",
- "EV_L2_MDL_REMOVE",
- "EV_L2_MDL_ERROR",
- "EV_L1_DEACTIVATE",
- "EV_L2_T200",
- "EV_L2_T203",
- "EV_L2_SET_OWN_BUSY",
- "EV_L2_CLEAR_OWN_BUSY",
- "EV_L2_FRAME_ERROR",
-};
-
-static int l2addrsize(struct Layer2 *l2);
-
-static void
-set_peer_busy(struct Layer2 *l2) {
- test_and_set_bit(FLG_PEER_BUSY, &l2->flag);
- if (!skb_queue_empty(&l2->i_queue) ||
- !skb_queue_empty(&l2->ui_queue))
- test_and_set_bit(FLG_L2BLOCK, &l2->flag);
-}
-
-static void
-clear_peer_busy(struct Layer2 *l2) {
- if (test_and_clear_bit(FLG_PEER_BUSY, &l2->flag))
- test_and_clear_bit(FLG_L2BLOCK, &l2->flag);
-}
-
-static void
-InitWin(struct Layer2 *l2)
-{
- int i;
-
- for (i = 0; i < MAX_WINDOW; i++)
- l2->windowar[i] = NULL;
-}
-
-static int
-freewin1(struct Layer2 *l2)
-{
- int i, cnt = 0;
-
- for (i = 0; i < MAX_WINDOW; i++) {
- if (l2->windowar[i]) {
- cnt++;
- dev_kfree_skb(l2->windowar[i]);
- l2->windowar[i] = NULL;
- }
- }
- return cnt;
-}
-
-static inline void
-freewin(struct PStack *st)
-{
- freewin1(&st->l2);
-}
-
-static void
-ReleaseWin(struct Layer2 *l2)
-{
- int cnt;
-
- if ((cnt = freewin1(l2)))
- printk(KERN_WARNING "isdl2 freed %d skbuffs in release\n", cnt);
-}
-
-static inline unsigned int
-cansend(struct PStack *st)
-{
- unsigned int p1;
-
- if (test_bit(FLG_MOD128, &st->l2.flag))
- p1 = (st->l2.vs - st->l2.va) % 128;
- else
- p1 = (st->l2.vs - st->l2.va) % 8;
- return ((p1 < st->l2.window) && !test_bit(FLG_PEER_BUSY, &st->l2.flag));
-}
-
-static inline void
-clear_exception(struct Layer2 *l2)
-{
- test_and_clear_bit(FLG_ACK_PEND, &l2->flag);
- test_and_clear_bit(FLG_REJEXC, &l2->flag);
- test_and_clear_bit(FLG_OWN_BUSY, &l2->flag);
- clear_peer_busy(l2);
-}
-
-static inline int
-l2headersize(struct Layer2 *l2, int ui)
-{
- return (((test_bit(FLG_MOD128, &l2->flag) && (!ui)) ? 2 : 1) +
- (test_bit(FLG_LAPD, &l2->flag) ? 2 : 1));
-}
-
-inline int
-l2addrsize(struct Layer2 *l2)
-{
- return (test_bit(FLG_LAPD, &l2->flag) ? 2 : 1);
-}
-
-static int
-sethdraddr(struct Layer2 *l2, u_char *header, int rsp)
-{
- u_char *ptr = header;
- int crbit = rsp;
-
- if (test_bit(FLG_LAPD, &l2->flag)) {
- *ptr++ = (l2->sap << 2) | (rsp ? 2 : 0);
- *ptr++ = (l2->tei << 1) | 1;
- return (2);
- } else {
- if (test_bit(FLG_ORIG, &l2->flag))
- crbit = !crbit;
- if (crbit)
- *ptr++ = 1;
- else
- *ptr++ = 3;
- return (1);
- }
-}
-
-static inline void
-enqueue_super(struct PStack *st,
- struct sk_buff *skb)
-{
- if (test_bit(FLG_LAPB, &st->l2.flag))
- st->l1.bcs->tx_cnt += skb->len;
- st->l2.l2l1(st, PH_DATA | REQUEST, skb);
-}
-
-#define enqueue_ui(a, b) enqueue_super(a, b)
-
-static inline int
-IsUI(u_char *data)
-{
- return ((data[0] & 0xef) == UI);
-}
-
-static inline int
-IsUA(u_char *data)
-{
- return ((data[0] & 0xef) == UA);
-}
-
-static inline int
-IsDM(u_char *data)
-{
- return ((data[0] & 0xef) == DM);
-}
-
-static inline int
-IsDISC(u_char *data)
-{
- return ((data[0] & 0xef) == DISC);
-}
-
-static inline int
-IsSFrame(u_char *data, struct PStack *st)
-{
- register u_char d = *data;
-
- if (!test_bit(FLG_MOD128, &st->l2.flag))
- d &= 0xf;
- return (((d & 0xf3) == 1) && ((d & 0x0c) != 0x0c));
-}
-
-static inline int
-IsSABME(u_char *data, struct PStack *st)
-{
- u_char d = data[0] & ~0x10;
-
- return (test_bit(FLG_MOD128, &st->l2.flag) ? d == SABME : d == SABM);
-}
-
-static inline int
-IsREJ(u_char *data, struct PStack *st)
-{
- return (test_bit(FLG_MOD128, &st->l2.flag) ? data[0] == REJ : (data[0] & 0xf) == REJ);
-}
-
-static inline int
-IsFRMR(u_char *data)
-{
- return ((data[0] & 0xef) == FRMR);
-}
-
-static inline int
-IsRNR(u_char *data, struct PStack *st)
-{
- return (test_bit(FLG_MOD128, &st->l2.flag) ? data[0] == RNR : (data[0] & 0xf) == RNR);
-}
-
-static int
-iframe_error(struct PStack *st, struct sk_buff *skb)
-{
- int i = l2addrsize(&st->l2) + (test_bit(FLG_MOD128, &st->l2.flag) ? 2 : 1);
- int rsp = *skb->data & 0x2;
-
- if (test_bit(FLG_ORIG, &st->l2.flag))
- rsp = !rsp;
-
- if (rsp)
- return 'L';
-
-
- if (skb->len < i)
- return 'N';
-
- if ((skb->len - i) > st->l2.maxlen)
- return 'O';
-
-
- return 0;
-}
-
-static int
-super_error(struct PStack *st, struct sk_buff *skb)
-{
- if (skb->len != l2addrsize(&st->l2) +
- (test_bit(FLG_MOD128, &st->l2.flag) ? 2 : 1))
- return 'N';
-
- return 0;
-}
-
-static int
-unnum_error(struct PStack *st, struct sk_buff *skb, int wantrsp)
-{
- int rsp = (*skb->data & 0x2) >> 1;
- if (test_bit(FLG_ORIG, &st->l2.flag))
- rsp = !rsp;
-
- if (rsp != wantrsp)
- return 'L';
-
- if (skb->len != l2addrsize(&st->l2) + 1)
- return 'N';
-
- return 0;
-}
-
-static int
-UI_error(struct PStack *st, struct sk_buff *skb)
-{
- int rsp = *skb->data & 0x2;
- if (test_bit(FLG_ORIG, &st->l2.flag))
- rsp = !rsp;
-
- if (rsp)
- return 'L';
-
- if (skb->len > st->l2.maxlen + l2addrsize(&st->l2) + 1)
- return 'O';
-
- return 0;
-}
-
-static int
-FRMR_error(struct PStack *st, struct sk_buff *skb)
-{
- int headers = l2addrsize(&st->l2) + 1;
- u_char *datap = skb->data + headers;
- int rsp = *skb->data & 0x2;
-
- if (test_bit(FLG_ORIG, &st->l2.flag))
- rsp = !rsp;
-
- if (!rsp)
- return 'L';
-
- if (test_bit(FLG_MOD128, &st->l2.flag)) {
- if (skb->len < headers + 5)
- return 'N';
- else
- l2m_debug(&st->l2.l2m, "FRMR information %2x %2x %2x %2x %2x",
- datap[0], datap[1], datap[2],
- datap[3], datap[4]);
- } else {
- if (skb->len < headers + 3)
- return 'N';
- else
- l2m_debug(&st->l2.l2m, "FRMR information %2x %2x %2x",
- datap[0], datap[1], datap[2]);
- }
-
- return 0;
-}
-
-static unsigned int
-legalnr(struct PStack *st, unsigned int nr)
-{
- struct Layer2 *l2 = &st->l2;
-
- if (test_bit(FLG_MOD128, &l2->flag))
- return ((nr - l2->va) % 128) <= ((l2->vs - l2->va) % 128);
- else
- return ((nr - l2->va) % 8) <= ((l2->vs - l2->va) % 8);
-}
-
-static void
-setva(struct PStack *st, unsigned int nr)
-{
- struct Layer2 *l2 = &st->l2;
- int len;
- u_long flags;
-
- spin_lock_irqsave(&l2->lock, flags);
- while (l2->va != nr) {
- (l2->va)++;
- if (test_bit(FLG_MOD128, &l2->flag))
- l2->va %= 128;
- else
- l2->va %= 8;
- len = l2->windowar[l2->sow]->len;
- if (PACKET_NOACK == l2->windowar[l2->sow]->pkt_type)
- len = -1;
- dev_kfree_skb(l2->windowar[l2->sow]);
- l2->windowar[l2->sow] = NULL;
- l2->sow = (l2->sow + 1) % l2->window;
- spin_unlock_irqrestore(&l2->lock, flags);
- if (test_bit(FLG_LLI_L2WAKEUP, &st->lli.flag) && (len >= 0))
- lli_writewakeup(st, len);
- spin_lock_irqsave(&l2->lock, flags);
- }
- spin_unlock_irqrestore(&l2->lock, flags);
-}
-
-static void
-send_uframe(struct PStack *st, u_char cmd, u_char cr)
-{
- struct sk_buff *skb;
- u_char tmp[MAX_HEADER_LEN];
- int i;
-
- i = sethdraddr(&st->l2, tmp, cr);
- tmp[i++] = cmd;
- if (!(skb = alloc_skb(i, GFP_ATOMIC))) {
- printk(KERN_WARNING "isdl2 can't alloc sbbuff for send_uframe\n");
- return;
- }
- skb_put_data(skb, tmp, i);
- enqueue_super(st, skb);
-}
-
-static inline u_char
-get_PollFlag(struct PStack *st, struct sk_buff *skb)
-{
- return (skb->data[l2addrsize(&(st->l2))] & 0x10);
-}
-
-static inline u_char
-get_PollFlagFree(struct PStack *st, struct sk_buff *skb)
-{
- u_char PF;
-
- PF = get_PollFlag(st, skb);
- dev_kfree_skb(skb);
- return (PF);
-}
-
-static inline void
-start_t200(struct PStack *st, int i)
-{
- FsmAddTimer(&st->l2.t200, st->l2.T200, EV_L2_T200, NULL, i);
- test_and_set_bit(FLG_T200_RUN, &st->l2.flag);
-}
-
-static inline void
-restart_t200(struct PStack *st, int i)
-{
- FsmRestartTimer(&st->l2.t200, st->l2.T200, EV_L2_T200, NULL, i);
- test_and_set_bit(FLG_T200_RUN, &st->l2.flag);
-}
-
-static inline void
-stop_t200(struct PStack *st, int i)
-{
- if (test_and_clear_bit(FLG_T200_RUN, &st->l2.flag))
- FsmDelTimer(&st->l2.t200, i);
-}
-
-static inline void
-st5_dl_release_l2l3(struct PStack *st)
-{
- int pr;
-
- if (test_and_clear_bit(FLG_PEND_REL, &st->l2.flag))
- pr = DL_RELEASE | CONFIRM;
- else
- pr = DL_RELEASE | INDICATION;
-
- st->l2.l2l3(st, pr, NULL);
-}
-
-static inline void
-lapb_dl_release_l2l3(struct PStack *st, int f)
-{
- if (test_bit(FLG_LAPB, &st->l2.flag))
- st->l2.l2l1(st, PH_DEACTIVATE | REQUEST, NULL);
- st->l2.l2l3(st, DL_RELEASE | f, NULL);
-}
-
-static void
-establishlink(struct FsmInst *fi)
-{
- struct PStack *st = fi->userdata;
- u_char cmd;
-
- clear_exception(&st->l2);
- st->l2.rc = 0;
- cmd = (test_bit(FLG_MOD128, &st->l2.flag) ? SABME : SABM) | 0x10;
- send_uframe(st, cmd, CMD);
- FsmDelTimer(&st->l2.t203, 1);
- restart_t200(st, 1);
- test_and_clear_bit(FLG_PEND_REL, &st->l2.flag);
- freewin(st);
- FsmChangeState(fi, ST_L2_5);
-}
-
-static void
-l2_mdl_error_ua(struct FsmInst *fi, int event, void *arg)
-{
- struct sk_buff *skb = arg;
- struct PStack *st = fi->userdata;
-
- if (get_PollFlagFree(st, skb))
- st->ma.layer(st, MDL_ERROR | INDICATION, (void *) 'C');
- else
- st->ma.layer(st, MDL_ERROR | INDICATION, (void *) 'D');
-}
-
-static void
-l2_mdl_error_dm(struct FsmInst *fi, int event, void *arg)
-{
- struct sk_buff *skb = arg;
- struct PStack *st = fi->userdata;
-
- if (get_PollFlagFree(st, skb))
- st->ma.layer(st, MDL_ERROR | INDICATION, (void *) 'B');
- else {
- st->ma.layer(st, MDL_ERROR | INDICATION, (void *) 'E');
- establishlink(fi);
- test_and_clear_bit(FLG_L3_INIT, &st->l2.flag);
- }
-}
-
-static void
-l2_st8_mdl_error_dm(struct FsmInst *fi, int event, void *arg)
-{
- struct sk_buff *skb = arg;
- struct PStack *st = fi->userdata;
-
- if (get_PollFlagFree(st, skb))
- st->ma.layer(st, MDL_ERROR | INDICATION, (void *) 'B');
- else {
- st->ma.layer(st, MDL_ERROR | INDICATION, (void *) 'E');
- }
- establishlink(fi);
- test_and_clear_bit(FLG_L3_INIT, &st->l2.flag);
-}
-
-static void
-l2_go_st3(struct FsmInst *fi, int event, void *arg)
-{
- FsmChangeState(fi, ST_L2_3);
-}
-
-static void
-l2_mdl_assign(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- FsmChangeState(fi, ST_L2_3);
- st->l2.l2tei(st, MDL_ASSIGN | INDICATION, NULL);
-}
-
-static void
-l2_queue_ui_assign(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
- struct sk_buff *skb = arg;
-
- skb_queue_tail(&st->l2.ui_queue, skb);
- FsmChangeState(fi, ST_L2_2);
- st->l2.l2tei(st, MDL_ASSIGN | INDICATION, NULL);
-}
-
-static void
-l2_queue_ui(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
- struct sk_buff *skb = arg;
-
- skb_queue_tail(&st->l2.ui_queue, skb);
-}
-
-static void
-tx_ui(struct PStack *st)
-{
- struct sk_buff *skb;
- u_char header[MAX_HEADER_LEN];
- int i;
-
- i = sethdraddr(&(st->l2), header, CMD);
- header[i++] = UI;
- while ((skb = skb_dequeue(&st->l2.ui_queue))) {
- memcpy(skb_push(skb, i), header, i);
- enqueue_ui(st, skb);
- }
-}
-
-static void
-l2_send_ui(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
- struct sk_buff *skb = arg;
-
- skb_queue_tail(&st->l2.ui_queue, skb);
- tx_ui(st);
-}
-
-static void
-l2_got_ui(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
- struct sk_buff *skb = arg;
-
- skb_pull(skb, l2headersize(&st->l2, 1));
- st->l2.l2l3(st, DL_UNIT_DATA | INDICATION, skb);
-/* ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- * in states 1-3 for broadcast
- */
-
-
-}
-
-static void
-l2_establish(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- establishlink(fi);
- test_and_set_bit(FLG_L3_INIT, &st->l2.flag);
-}
-
-static void
-l2_discard_i_setl3(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- skb_queue_purge(&st->l2.i_queue);
- test_and_set_bit(FLG_L3_INIT, &st->l2.flag);
- test_and_clear_bit(FLG_PEND_REL, &st->l2.flag);
-}
-
-static void
-l2_l3_reestablish(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- skb_queue_purge(&st->l2.i_queue);
- establishlink(fi);
- test_and_set_bit(FLG_L3_INIT, &st->l2.flag);
-}
-
-static void
-l2_release(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- st->l2.l2l3(st, DL_RELEASE | CONFIRM, NULL);
-}
-
-static void
-l2_pend_rel(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- test_and_set_bit(FLG_PEND_REL, &st->l2.flag);
-}
-
-static void
-l2_disconnect(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- skb_queue_purge(&st->l2.i_queue);
- freewin(st);
- FsmChangeState(fi, ST_L2_6);
- st->l2.rc = 0;
- send_uframe(st, DISC | 0x10, CMD);
- FsmDelTimer(&st->l2.t203, 1);
- restart_t200(st, 2);
-}
-
-static void
-l2_start_multi(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
- struct sk_buff *skb = arg;
-
- send_uframe(st, UA | get_PollFlagFree(st, skb), RSP);
-
- clear_exception(&st->l2);
- st->l2.vs = 0;
- st->l2.va = 0;
- st->l2.vr = 0;
- st->l2.sow = 0;
- FsmChangeState(fi, ST_L2_7);
- FsmAddTimer(&st->l2.t203, st->l2.T203, EV_L2_T203, NULL, 3);
-
- st->l2.l2l3(st, DL_ESTABLISH | INDICATION, NULL);
-}
-
-static void
-l2_send_UA(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
- struct sk_buff *skb = arg;
-
- send_uframe(st, UA | get_PollFlagFree(st, skb), RSP);
-}
-
-static void
-l2_send_DM(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
- struct sk_buff *skb = arg;
-
- send_uframe(st, DM | get_PollFlagFree(st, skb), RSP);
-}
-
-static void
-l2_restart_multi(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
- struct sk_buff *skb = arg;
- int est = 0, state;
-
- state = fi->state;
-
- send_uframe(st, UA | get_PollFlagFree(st, skb), RSP);
-
- st->ma.layer(st, MDL_ERROR | INDICATION, (void *) 'F');
-
- if (st->l2.vs != st->l2.va) {
- skb_queue_purge(&st->l2.i_queue);
- est = 1;
- }
-
- clear_exception(&st->l2);
- st->l2.vs = 0;
- st->l2.va = 0;
- st->l2.vr = 0;
- st->l2.sow = 0;
- FsmChangeState(fi, ST_L2_7);
- stop_t200(st, 3);
- FsmRestartTimer(&st->l2.t203, st->l2.T203, EV_L2_T203, NULL, 3);
-
- if (est)
- st->l2.l2l3(st, DL_ESTABLISH | INDICATION, NULL);
-
- if ((ST_L2_7 == state) || (ST_L2_8 == state))
- if (!skb_queue_empty(&st->l2.i_queue) && cansend(st))
- st->l2.l2l1(st, PH_PULL | REQUEST, NULL);
-}
-
-static void
-l2_stop_multi(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
- struct sk_buff *skb = arg;
-
- FsmChangeState(fi, ST_L2_4);
- FsmDelTimer(&st->l2.t203, 3);
- stop_t200(st, 4);
-
- send_uframe(st, UA | get_PollFlagFree(st, skb), RSP);
-
- skb_queue_purge(&st->l2.i_queue);
- freewin(st);
- lapb_dl_release_l2l3(st, INDICATION);
-}
-
-static void
-l2_connected(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
- struct sk_buff *skb = arg;
- int pr = -1;
-
- if (!get_PollFlag(st, skb)) {
- l2_mdl_error_ua(fi, event, arg);
- return;
- }
- dev_kfree_skb(skb);
-
- if (test_and_clear_bit(FLG_PEND_REL, &st->l2.flag))
- l2_disconnect(fi, event, arg);
-
- if (test_and_clear_bit(FLG_L3_INIT, &st->l2.flag)) {
- pr = DL_ESTABLISH | CONFIRM;
- } else if (st->l2.vs != st->l2.va) {
- skb_queue_purge(&st->l2.i_queue);
- pr = DL_ESTABLISH | INDICATION;
- }
-
- stop_t200(st, 5);
-
- st->l2.vr = 0;
- st->l2.vs = 0;
- st->l2.va = 0;
- st->l2.sow = 0;
- FsmChangeState(fi, ST_L2_7);
- FsmAddTimer(&st->l2.t203, st->l2.T203, EV_L2_T203, NULL, 4);
-
- if (pr != -1)
- st->l2.l2l3(st, pr, NULL);
-
- if (!skb_queue_empty(&st->l2.i_queue) && cansend(st))
- st->l2.l2l1(st, PH_PULL | REQUEST, NULL);
-}
-
-static void
-l2_released(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
- struct sk_buff *skb = arg;
-
- if (!get_PollFlag(st, skb)) {
- l2_mdl_error_ua(fi, event, arg);
- return;
- }
- dev_kfree_skb(skb);
-
- stop_t200(st, 6);
- lapb_dl_release_l2l3(st, CONFIRM);
- FsmChangeState(fi, ST_L2_4);
-}
-
-static void
-l2_reestablish(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
- struct sk_buff *skb = arg;
-
- if (!get_PollFlagFree(st, skb)) {
- establishlink(fi);
- test_and_set_bit(FLG_L3_INIT, &st->l2.flag);
- }
-}
-
-static void
-l2_st5_dm_release(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
- struct sk_buff *skb = arg;
-
- if (get_PollFlagFree(st, skb)) {
- stop_t200(st, 7);
- if (!test_bit(FLG_L3_INIT, &st->l2.flag))
- skb_queue_purge(&st->l2.i_queue);
- if (test_bit(FLG_LAPB, &st->l2.flag))
- st->l2.l2l1(st, PH_DEACTIVATE | REQUEST, NULL);
- st5_dl_release_l2l3(st);
- FsmChangeState(fi, ST_L2_4);
- }
-}
-
-static void
-l2_st6_dm_release(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
- struct sk_buff *skb = arg;
-
- if (get_PollFlagFree(st, skb)) {
- stop_t200(st, 8);
- lapb_dl_release_l2l3(st, CONFIRM);
- FsmChangeState(fi, ST_L2_4);
- }
-}
-
-static inline void
-enquiry_cr(struct PStack *st, u_char typ, u_char cr, u_char pf)
-{
- struct sk_buff *skb;
- struct Layer2 *l2;
- u_char tmp[MAX_HEADER_LEN];
- int i;
-
- l2 = &st->l2;
- i = sethdraddr(l2, tmp, cr);
- if (test_bit(FLG_MOD128, &l2->flag)) {
- tmp[i++] = typ;
- tmp[i++] = (l2->vr << 1) | (pf ? 1 : 0);
- } else
- tmp[i++] = (l2->vr << 5) | typ | (pf ? 0x10 : 0);
- if (!(skb = alloc_skb(i, GFP_ATOMIC))) {
- printk(KERN_WARNING "isdl2 can't alloc sbbuff for enquiry_cr\n");
- return;
- }
- skb_put_data(skb, tmp, i);
- enqueue_super(st, skb);
-}
-
-static inline void
-enquiry_response(struct PStack *st)
-{
- if (test_bit(FLG_OWN_BUSY, &st->l2.flag))
- enquiry_cr(st, RNR, RSP, 1);
- else
- enquiry_cr(st, RR, RSP, 1);
- test_and_clear_bit(FLG_ACK_PEND, &st->l2.flag);
-}
-
-static inline void
-transmit_enquiry(struct PStack *st)
-{
- if (test_bit(FLG_OWN_BUSY, &st->l2.flag))
- enquiry_cr(st, RNR, CMD, 1);
- else
- enquiry_cr(st, RR, CMD, 1);
- test_and_clear_bit(FLG_ACK_PEND, &st->l2.flag);
- start_t200(st, 9);
-}
-
-
-static void
-nrerrorrecovery(struct FsmInst *fi)
-{
- struct PStack *st = fi->userdata;
-
- st->ma.layer(st, MDL_ERROR | INDICATION, (void *) 'J');
- establishlink(fi);
- test_and_clear_bit(FLG_L3_INIT, &st->l2.flag);
-}
-
-static void
-invoke_retransmission(struct PStack *st, unsigned int nr)
-{
- struct Layer2 *l2 = &st->l2;
- u_int p1;
- u_long flags;
-
- spin_lock_irqsave(&l2->lock, flags);
- if (l2->vs != nr) {
- while (l2->vs != nr) {
- (l2->vs)--;
- if (test_bit(FLG_MOD128, &l2->flag)) {
- l2->vs %= 128;
- p1 = (l2->vs - l2->va) % 128;
- } else {
- l2->vs %= 8;
- p1 = (l2->vs - l2->va) % 8;
- }
- p1 = (p1 + l2->sow) % l2->window;
- if (test_bit(FLG_LAPB, &l2->flag))
- st->l1.bcs->tx_cnt += l2->windowar[p1]->len + l2headersize(l2, 0);
- skb_queue_head(&l2->i_queue, l2->windowar[p1]);
- l2->windowar[p1] = NULL;
- }
- spin_unlock_irqrestore(&l2->lock, flags);
- st->l2.l2l1(st, PH_PULL | REQUEST, NULL);
- return;
- }
- spin_unlock_irqrestore(&l2->lock, flags);
-}
-
-static void
-l2_st7_got_super(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
- struct sk_buff *skb = arg;
- int PollFlag, rsp, typ = RR;
- unsigned int nr;
- struct Layer2 *l2 = &st->l2;
-
- rsp = *skb->data & 0x2;
- if (test_bit(FLG_ORIG, &l2->flag))
- rsp = !rsp;
-
- skb_pull(skb, l2addrsize(l2));
- if (IsRNR(skb->data, st)) {
- set_peer_busy(l2);
- typ = RNR;
- } else
- clear_peer_busy(l2);
- if (IsREJ(skb->data, st))
- typ = REJ;
-
- if (test_bit(FLG_MOD128, &l2->flag)) {
- PollFlag = (skb->data[1] & 0x1) == 0x1;
- nr = skb->data[1] >> 1;
- } else {
- PollFlag = (skb->data[0] & 0x10);
- nr = (skb->data[0] >> 5) & 0x7;
- }
- dev_kfree_skb(skb);
-
- if (PollFlag) {
- if (rsp)
- st->ma.layer(st, MDL_ERROR | INDICATION, (void *) 'A');
- else
- enquiry_response(st);
- }
- if (legalnr(st, nr)) {
- if (typ == REJ) {
- setva(st, nr);
- invoke_retransmission(st, nr);
- stop_t200(st, 10);
- if (FsmAddTimer(&st->l2.t203, st->l2.T203,
- EV_L2_T203, NULL, 6))
- l2m_debug(&st->l2.l2m, "Restart T203 ST7 REJ");
- } else if ((nr == l2->vs) && (typ == RR)) {
- setva(st, nr);
- stop_t200(st, 11);
- FsmRestartTimer(&st->l2.t203, st->l2.T203,
- EV_L2_T203, NULL, 7);
- } else if ((l2->va != nr) || (typ == RNR)) {
- setva(st, nr);
- if (typ != RR) FsmDelTimer(&st->l2.t203, 9);
- restart_t200(st, 12);
- }
- if (!skb_queue_empty(&st->l2.i_queue) && (typ == RR))
- st->l2.l2l1(st, PH_PULL | REQUEST, NULL);
- } else
- nrerrorrecovery(fi);
-}
-
-static void
-l2_feed_i_if_reest(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
- struct sk_buff *skb = arg;
-
- if (test_bit(FLG_LAPB, &st->l2.flag))
- st->l1.bcs->tx_cnt += skb->len + l2headersize(&st->l2, 0);
- if (!test_bit(FLG_L3_INIT, &st->l2.flag))
- skb_queue_tail(&st->l2.i_queue, skb);
- else
- dev_kfree_skb(skb);
-}
-
-static void
-l2_feed_i_pull(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
- struct sk_buff *skb = arg;
-
- if (test_bit(FLG_LAPB, &st->l2.flag))
- st->l1.bcs->tx_cnt += skb->len + l2headersize(&st->l2, 0);
- skb_queue_tail(&st->l2.i_queue, skb);
- st->l2.l2l1(st, PH_PULL | REQUEST, NULL);
-}
-
-static void
-l2_feed_iqueue(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
- struct sk_buff *skb = arg;
-
- if (test_bit(FLG_LAPB, &st->l2.flag))
- st->l1.bcs->tx_cnt += skb->len + l2headersize(&st->l2, 0);
- skb_queue_tail(&st->l2.i_queue, skb);
-}
-
-static void
-l2_got_iframe(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
- struct sk_buff *skb = arg;
- struct Layer2 *l2 = &(st->l2);
- int PollFlag, ns, i;
- unsigned int nr;
-
- i = l2addrsize(l2);
- if (test_bit(FLG_MOD128, &l2->flag)) {
- PollFlag = ((skb->data[i + 1] & 0x1) == 0x1);
- ns = skb->data[i] >> 1;
- nr = (skb->data[i + 1] >> 1) & 0x7f;
- } else {
- PollFlag = (skb->data[i] & 0x10);
- ns = (skb->data[i] >> 1) & 0x7;
- nr = (skb->data[i] >> 5) & 0x7;
- }
- if (test_bit(FLG_OWN_BUSY, &l2->flag)) {
- dev_kfree_skb(skb);
- if (PollFlag) enquiry_response(st);
- } else if (l2->vr == ns) {
- (l2->vr)++;
- if (test_bit(FLG_MOD128, &l2->flag))
- l2->vr %= 128;
- else
- l2->vr %= 8;
- test_and_clear_bit(FLG_REJEXC, &l2->flag);
-
- if (PollFlag)
- enquiry_response(st);
- else
- test_and_set_bit(FLG_ACK_PEND, &l2->flag);
- skb_pull(skb, l2headersize(l2, 0));
- st->l2.l2l3(st, DL_DATA | INDICATION, skb);
- } else {
- /* n(s)!=v(r) */
- dev_kfree_skb(skb);
- if (test_and_set_bit(FLG_REJEXC, &l2->flag)) {
- if (PollFlag)
- enquiry_response(st);
- } else {
- enquiry_cr(st, REJ, RSP, PollFlag);
- test_and_clear_bit(FLG_ACK_PEND, &l2->flag);
- }
- }
-
- if (legalnr(st, nr)) {
- if (!test_bit(FLG_PEER_BUSY, &st->l2.flag) && (fi->state == ST_L2_7)) {
- if (nr == st->l2.vs) {
- stop_t200(st, 13);
- FsmRestartTimer(&st->l2.t203, st->l2.T203,
- EV_L2_T203, NULL, 7);
- } else if (nr != st->l2.va)
- restart_t200(st, 14);
- }
- setva(st, nr);
- } else {
- nrerrorrecovery(fi);
- return;
- }
-
- if (!skb_queue_empty(&st->l2.i_queue) && (fi->state == ST_L2_7))
- st->l2.l2l1(st, PH_PULL | REQUEST, NULL);
- if (test_and_clear_bit(FLG_ACK_PEND, &st->l2.flag))
- enquiry_cr(st, RR, RSP, 0);
-}
-
-static void
-l2_got_tei(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- st->l2.tei = (long) arg;
-
- if (fi->state == ST_L2_3) {
- establishlink(fi);
- test_and_set_bit(FLG_L3_INIT, &st->l2.flag);
- } else
- FsmChangeState(fi, ST_L2_4);
- if (!skb_queue_empty(&st->l2.ui_queue))
- tx_ui(st);
-}
-
-static void
-l2_st5_tout_200(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- if (test_bit(FLG_LAPD, &st->l2.flag) &&
- test_bit(FLG_DCHAN_BUSY, &st->l2.flag)) {
- FsmAddTimer(&st->l2.t200, st->l2.T200, EV_L2_T200, NULL, 9);
- } else if (st->l2.rc == st->l2.N200) {
- FsmChangeState(fi, ST_L2_4);
- test_and_clear_bit(FLG_T200_RUN, &st->l2.flag);
- skb_queue_purge(&st->l2.i_queue);
- st->ma.layer(st, MDL_ERROR | INDICATION, (void *) 'G');
- if (test_bit(FLG_LAPB, &st->l2.flag))
- st->l2.l2l1(st, PH_DEACTIVATE | REQUEST, NULL);
- st5_dl_release_l2l3(st);
- } else {
- st->l2.rc++;
- FsmAddTimer(&st->l2.t200, st->l2.T200, EV_L2_T200, NULL, 9);
- send_uframe(st, (test_bit(FLG_MOD128, &st->l2.flag) ? SABME : SABM)
- | 0x10, CMD);
- }
-}
-
-static void
-l2_st6_tout_200(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- if (test_bit(FLG_LAPD, &st->l2.flag) &&
- test_bit(FLG_DCHAN_BUSY, &st->l2.flag)) {
- FsmAddTimer(&st->l2.t200, st->l2.T200, EV_L2_T200, NULL, 9);
- } else if (st->l2.rc == st->l2.N200) {
- FsmChangeState(fi, ST_L2_4);
- test_and_clear_bit(FLG_T200_RUN, &st->l2.flag);
- st->ma.layer(st, MDL_ERROR | INDICATION, (void *) 'H');
- lapb_dl_release_l2l3(st, CONFIRM);
- } else {
- st->l2.rc++;
- FsmAddTimer(&st->l2.t200, st->l2.T200, EV_L2_T200,
- NULL, 9);
- send_uframe(st, DISC | 0x10, CMD);
- }
-}
-
-static void
-l2_st7_tout_200(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- if (test_bit(FLG_LAPD, &st->l2.flag) &&
- test_bit(FLG_DCHAN_BUSY, &st->l2.flag)) {
- FsmAddTimer(&st->l2.t200, st->l2.T200, EV_L2_T200, NULL, 9);
- return;
- }
- test_and_clear_bit(FLG_T200_RUN, &st->l2.flag);
- st->l2.rc = 0;
- FsmChangeState(fi, ST_L2_8);
-
- transmit_enquiry(st);
- st->l2.rc++;
-}
-
-static void
-l2_st8_tout_200(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- if (test_bit(FLG_LAPD, &st->l2.flag) &&
- test_bit(FLG_DCHAN_BUSY, &st->l2.flag)) {
- FsmAddTimer(&st->l2.t200, st->l2.T200, EV_L2_T200, NULL, 9);
- return;
- }
- test_and_clear_bit(FLG_T200_RUN, &st->l2.flag);
- if (st->l2.rc == st->l2.N200) {
- st->ma.layer(st, MDL_ERROR | INDICATION, (void *) 'I');
- establishlink(fi);
- test_and_clear_bit(FLG_L3_INIT, &st->l2.flag);
- } else {
- transmit_enquiry(st);
- st->l2.rc++;
- }
-}
-
-static void
-l2_st7_tout_203(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- if (test_bit(FLG_LAPD, &st->l2.flag) &&
- test_bit(FLG_DCHAN_BUSY, &st->l2.flag)) {
- FsmAddTimer(&st->l2.t203, st->l2.T203, EV_L2_T203, NULL, 9);
- return;
- }
- FsmChangeState(fi, ST_L2_8);
- transmit_enquiry(st);
- st->l2.rc = 0;
-}
-
-static void
-l2_pull_iqueue(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
- struct sk_buff *skb, *nskb;
- struct Layer2 *l2 = &st->l2;
- u_char header[MAX_HEADER_LEN];
- int i, hdr_space_needed;
- int unsigned p1;
- u_long flags;
-
- if (!cansend(st))
- return;
-
- skb = skb_dequeue(&l2->i_queue);
- if (!skb)
- return;
-
- hdr_space_needed = l2headersize(l2, 0);
- nskb = skb_realloc_headroom(skb, hdr_space_needed);
- if (!nskb) {
- skb_queue_head(&l2->i_queue, skb);
- return;
- }
- spin_lock_irqsave(&l2->lock, flags);
- if (test_bit(FLG_MOD128, &l2->flag))
- p1 = (l2->vs - l2->va) % 128;
- else
- p1 = (l2->vs - l2->va) % 8;
- p1 = (p1 + l2->sow) % l2->window;
- if (l2->windowar[p1]) {
- printk(KERN_WARNING "isdnl2 try overwrite ack queue entry %d\n",
- p1);
- dev_kfree_skb(l2->windowar[p1]);
- }
- l2->windowar[p1] = skb;
-
- i = sethdraddr(&st->l2, header, CMD);
-
- if (test_bit(FLG_MOD128, &l2->flag)) {
- header[i++] = l2->vs << 1;
- header[i++] = l2->vr << 1;
- l2->vs = (l2->vs + 1) % 128;
- } else {
- header[i++] = (l2->vr << 5) | (l2->vs << 1);
- l2->vs = (l2->vs + 1) % 8;
- }
- spin_unlock_irqrestore(&l2->lock, flags);
- memcpy(skb_push(nskb, i), header, i);
- st->l2.l2l1(st, PH_PULL | INDICATION, nskb);
- test_and_clear_bit(FLG_ACK_PEND, &st->l2.flag);
- if (!test_and_set_bit(FLG_T200_RUN, &st->l2.flag)) {
- FsmDelTimer(&st->l2.t203, 13);
- FsmAddTimer(&st->l2.t200, st->l2.T200, EV_L2_T200, NULL, 11);
- }
- if (!skb_queue_empty(&l2->i_queue) && cansend(st))
- st->l2.l2l1(st, PH_PULL | REQUEST, NULL);
-}
-
-static void
-l2_st8_got_super(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
- struct sk_buff *skb = arg;
- int PollFlag, rsp, rnr = 0;
- unsigned int nr;
- struct Layer2 *l2 = &st->l2;
-
- rsp = *skb->data & 0x2;
- if (test_bit(FLG_ORIG, &l2->flag))
- rsp = !rsp;
-
- skb_pull(skb, l2addrsize(l2));
-
- if (IsRNR(skb->data, st)) {
- set_peer_busy(l2);
- rnr = 1;
- } else
- clear_peer_busy(l2);
-
- if (test_bit(FLG_MOD128, &l2->flag)) {
- PollFlag = (skb->data[1] & 0x1) == 0x1;
- nr = skb->data[1] >> 1;
- } else {
- PollFlag = (skb->data[0] & 0x10);
- nr = (skb->data[0] >> 5) & 0x7;
- }
- dev_kfree_skb(skb);
-
- if (rsp && PollFlag) {
- if (legalnr(st, nr)) {
- if (rnr) {
- restart_t200(st, 15);
- } else {
- stop_t200(st, 16);
- FsmAddTimer(&l2->t203, l2->T203,
- EV_L2_T203, NULL, 5);
- setva(st, nr);
- }
- invoke_retransmission(st, nr);
- FsmChangeState(fi, ST_L2_7);
- if (!skb_queue_empty(&l2->i_queue) && cansend(st))
- st->l2.l2l1(st, PH_PULL | REQUEST, NULL);
- } else
- nrerrorrecovery(fi);
- } else {
- if (!rsp && PollFlag)
- enquiry_response(st);
- if (legalnr(st, nr)) {
- setva(st, nr);
- } else
- nrerrorrecovery(fi);
- }
-}
-
-static void
-l2_got_FRMR(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
- struct sk_buff *skb = arg;
-
- skb_pull(skb, l2addrsize(&st->l2) + 1);
-
- if (!(skb->data[0] & 1) || ((skb->data[0] & 3) == 1) || /* I or S */
- (IsUA(skb->data) && (fi->state == ST_L2_7))) {
- st->ma.layer(st, MDL_ERROR | INDICATION, (void *) 'K');
- establishlink(fi);
- test_and_clear_bit(FLG_L3_INIT, &st->l2.flag);
- }
- dev_kfree_skb(skb);
-}
-
-static void
-l2_st24_tei_remove(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- skb_queue_purge(&st->l2.ui_queue);
- st->l2.tei = -1;
- FsmChangeState(fi, ST_L2_1);
-}
-
-static void
-l2_st3_tei_remove(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- skb_queue_purge(&st->l2.ui_queue);
- st->l2.tei = -1;
- st->l2.l2l3(st, DL_RELEASE | INDICATION, NULL);
- FsmChangeState(fi, ST_L2_1);
-}
-
-static void
-l2_st5_tei_remove(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- skb_queue_purge(&st->l2.i_queue);
- skb_queue_purge(&st->l2.ui_queue);
- freewin(st);
- st->l2.tei = -1;
- stop_t200(st, 17);
- st5_dl_release_l2l3(st);
- FsmChangeState(fi, ST_L2_1);
-}
-
-static void
-l2_st6_tei_remove(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- skb_queue_purge(&st->l2.ui_queue);
- st->l2.tei = -1;
- stop_t200(st, 18);
- st->l2.l2l3(st, DL_RELEASE | CONFIRM, NULL);
- FsmChangeState(fi, ST_L2_1);
-}
-
-static void
-l2_tei_remove(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- skb_queue_purge(&st->l2.i_queue);
- skb_queue_purge(&st->l2.ui_queue);
- freewin(st);
- st->l2.tei = -1;
- stop_t200(st, 17);
- FsmDelTimer(&st->l2.t203, 19);
- st->l2.l2l3(st, DL_RELEASE | INDICATION, NULL);
- FsmChangeState(fi, ST_L2_1);
-}
-
-static void
-l2_st14_persistent_da(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- skb_queue_purge(&st->l2.i_queue);
- skb_queue_purge(&st->l2.ui_queue);
- if (test_and_clear_bit(FLG_ESTAB_PEND, &st->l2.flag))
- st->l2.l2l3(st, DL_RELEASE | INDICATION, NULL);
-}
-
-static void
-l2_st5_persistent_da(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- skb_queue_purge(&st->l2.i_queue);
- skb_queue_purge(&st->l2.ui_queue);
- freewin(st);
- stop_t200(st, 19);
- st5_dl_release_l2l3(st);
- FsmChangeState(fi, ST_L2_4);
-}
-
-static void
-l2_st6_persistent_da(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- skb_queue_purge(&st->l2.ui_queue);
- stop_t200(st, 20);
- st->l2.l2l3(st, DL_RELEASE | CONFIRM, NULL);
- FsmChangeState(fi, ST_L2_4);
-}
-
-static void
-l2_persistent_da(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- skb_queue_purge(&st->l2.i_queue);
- skb_queue_purge(&st->l2.ui_queue);
- freewin(st);
- stop_t200(st, 19);
- FsmDelTimer(&st->l2.t203, 19);
- st->l2.l2l3(st, DL_RELEASE | INDICATION, NULL);
- FsmChangeState(fi, ST_L2_4);
-}
-
-static void
-l2_set_own_busy(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- if (!test_and_set_bit(FLG_OWN_BUSY, &st->l2.flag)) {
- enquiry_cr(st, RNR, RSP, 0);
- test_and_clear_bit(FLG_ACK_PEND, &st->l2.flag);
- }
-}
-
-static void
-l2_clear_own_busy(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- if (!test_and_clear_bit(FLG_OWN_BUSY, &st->l2.flag)) {
- enquiry_cr(st, RR, RSP, 0);
- test_and_clear_bit(FLG_ACK_PEND, &st->l2.flag);
- }
-}
-
-static void
-l2_frame_error(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- st->ma.layer(st, MDL_ERROR | INDICATION, arg);
-}
-
-static void
-l2_frame_error_reest(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- st->ma.layer(st, MDL_ERROR | INDICATION, arg);
- establishlink(fi);
- test_and_clear_bit(FLG_L3_INIT, &st->l2.flag);
-}
-
-static struct FsmNode L2FnList[] __initdata =
-{
- {ST_L2_1, EV_L2_DL_ESTABLISH_REQ, l2_mdl_assign},
- {ST_L2_2, EV_L2_DL_ESTABLISH_REQ, l2_go_st3},
- {ST_L2_4, EV_L2_DL_ESTABLISH_REQ, l2_establish},
- {ST_L2_5, EV_L2_DL_ESTABLISH_REQ, l2_discard_i_setl3},
- {ST_L2_7, EV_L2_DL_ESTABLISH_REQ, l2_l3_reestablish},
- {ST_L2_8, EV_L2_DL_ESTABLISH_REQ, l2_l3_reestablish},
- {ST_L2_4, EV_L2_DL_RELEASE_REQ, l2_release},
- {ST_L2_5, EV_L2_DL_RELEASE_REQ, l2_pend_rel},
- {ST_L2_7, EV_L2_DL_RELEASE_REQ, l2_disconnect},
- {ST_L2_8, EV_L2_DL_RELEASE_REQ, l2_disconnect},
- {ST_L2_5, EV_L2_DL_DATA, l2_feed_i_if_reest},
- {ST_L2_7, EV_L2_DL_DATA, l2_feed_i_pull},
- {ST_L2_8, EV_L2_DL_DATA, l2_feed_iqueue},
- {ST_L2_1, EV_L2_DL_UNIT_DATA, l2_queue_ui_assign},
- {ST_L2_2, EV_L2_DL_UNIT_DATA, l2_queue_ui},
- {ST_L2_3, EV_L2_DL_UNIT_DATA, l2_queue_ui},
- {ST_L2_4, EV_L2_DL_UNIT_DATA, l2_send_ui},
- {ST_L2_5, EV_L2_DL_UNIT_DATA, l2_send_ui},
- {ST_L2_6, EV_L2_DL_UNIT_DATA, l2_send_ui},
- {ST_L2_7, EV_L2_DL_UNIT_DATA, l2_send_ui},
- {ST_L2_8, EV_L2_DL_UNIT_DATA, l2_send_ui},
- {ST_L2_1, EV_L2_MDL_ASSIGN, l2_got_tei},
- {ST_L2_2, EV_L2_MDL_ASSIGN, l2_got_tei},
- {ST_L2_3, EV_L2_MDL_ASSIGN, l2_got_tei},
- {ST_L2_2, EV_L2_MDL_ERROR, l2_st24_tei_remove},
- {ST_L2_3, EV_L2_MDL_ERROR, l2_st3_tei_remove},
- {ST_L2_4, EV_L2_MDL_REMOVE, l2_st24_tei_remove},
- {ST_L2_5, EV_L2_MDL_REMOVE, l2_st5_tei_remove},
- {ST_L2_6, EV_L2_MDL_REMOVE, l2_st6_tei_remove},
- {ST_L2_7, EV_L2_MDL_REMOVE, l2_tei_remove},
- {ST_L2_8, EV_L2_MDL_REMOVE, l2_tei_remove},
- {ST_L2_4, EV_L2_SABME, l2_start_multi},
- {ST_L2_5, EV_L2_SABME, l2_send_UA},
- {ST_L2_6, EV_L2_SABME, l2_send_DM},
- {ST_L2_7, EV_L2_SABME, l2_restart_multi},
- {ST_L2_8, EV_L2_SABME, l2_restart_multi},
- {ST_L2_4, EV_L2_DISC, l2_send_DM},
- {ST_L2_5, EV_L2_DISC, l2_send_DM},
- {ST_L2_6, EV_L2_DISC, l2_send_UA},
- {ST_L2_7, EV_L2_DISC, l2_stop_multi},
- {ST_L2_8, EV_L2_DISC, l2_stop_multi},
- {ST_L2_4, EV_L2_UA, l2_mdl_error_ua},
- {ST_L2_5, EV_L2_UA, l2_connected},
- {ST_L2_6, EV_L2_UA, l2_released},
- {ST_L2_7, EV_L2_UA, l2_mdl_error_ua},
- {ST_L2_8, EV_L2_UA, l2_mdl_error_ua},
- {ST_L2_4, EV_L2_DM, l2_reestablish},
- {ST_L2_5, EV_L2_DM, l2_st5_dm_release},
- {ST_L2_6, EV_L2_DM, l2_st6_dm_release},
- {ST_L2_7, EV_L2_DM, l2_mdl_error_dm},
- {ST_L2_8, EV_L2_DM, l2_st8_mdl_error_dm},
- {ST_L2_1, EV_L2_UI, l2_got_ui},
- {ST_L2_2, EV_L2_UI, l2_got_ui},
- {ST_L2_3, EV_L2_UI, l2_got_ui},
- {ST_L2_4, EV_L2_UI, l2_got_ui},
- {ST_L2_5, EV_L2_UI, l2_got_ui},
- {ST_L2_6, EV_L2_UI, l2_got_ui},
- {ST_L2_7, EV_L2_UI, l2_got_ui},
- {ST_L2_8, EV_L2_UI, l2_got_ui},
- {ST_L2_7, EV_L2_FRMR, l2_got_FRMR},
- {ST_L2_8, EV_L2_FRMR, l2_got_FRMR},
- {ST_L2_7, EV_L2_SUPER, l2_st7_got_super},
- {ST_L2_8, EV_L2_SUPER, l2_st8_got_super},
- {ST_L2_7, EV_L2_I, l2_got_iframe},
- {ST_L2_8, EV_L2_I, l2_got_iframe},
- {ST_L2_5, EV_L2_T200, l2_st5_tout_200},
- {ST_L2_6, EV_L2_T200, l2_st6_tout_200},
- {ST_L2_7, EV_L2_T200, l2_st7_tout_200},
- {ST_L2_8, EV_L2_T200, l2_st8_tout_200},
- {ST_L2_7, EV_L2_T203, l2_st7_tout_203},
- {ST_L2_7, EV_L2_ACK_PULL, l2_pull_iqueue},
- {ST_L2_7, EV_L2_SET_OWN_BUSY, l2_set_own_busy},
- {ST_L2_8, EV_L2_SET_OWN_BUSY, l2_set_own_busy},
- {ST_L2_7, EV_L2_CLEAR_OWN_BUSY, l2_clear_own_busy},
- {ST_L2_8, EV_L2_CLEAR_OWN_BUSY, l2_clear_own_busy},
- {ST_L2_4, EV_L2_FRAME_ERROR, l2_frame_error},
- {ST_L2_5, EV_L2_FRAME_ERROR, l2_frame_error},
- {ST_L2_6, EV_L2_FRAME_ERROR, l2_frame_error},
- {ST_L2_7, EV_L2_FRAME_ERROR, l2_frame_error_reest},
- {ST_L2_8, EV_L2_FRAME_ERROR, l2_frame_error_reest},
- {ST_L2_1, EV_L1_DEACTIVATE, l2_st14_persistent_da},
- {ST_L2_2, EV_L1_DEACTIVATE, l2_st24_tei_remove},
- {ST_L2_3, EV_L1_DEACTIVATE, l2_st3_tei_remove},
- {ST_L2_4, EV_L1_DEACTIVATE, l2_st14_persistent_da},
- {ST_L2_5, EV_L1_DEACTIVATE, l2_st5_persistent_da},
- {ST_L2_6, EV_L1_DEACTIVATE, l2_st6_persistent_da},
- {ST_L2_7, EV_L1_DEACTIVATE, l2_persistent_da},
- {ST_L2_8, EV_L1_DEACTIVATE, l2_persistent_da},
-};
-
-static void
-isdnl2_l1l2(struct PStack *st, int pr, void *arg)
-{
- struct sk_buff *skb = arg;
- u_char *datap;
- int ret = 1, len;
- int c = 0;
-
- switch (pr) {
- case (PH_DATA | INDICATION):
- datap = skb->data;
- len = l2addrsize(&st->l2);
- if (skb->len > len)
- datap += len;
- else {
- FsmEvent(&st->l2.l2m, EV_L2_FRAME_ERROR, (void *) 'N');
- dev_kfree_skb(skb);
- return;
- }
- if (!(*datap & 1)) { /* I-Frame */
- if (!(c = iframe_error(st, skb)))
- ret = FsmEvent(&st->l2.l2m, EV_L2_I, skb);
- } else if (IsSFrame(datap, st)) { /* S-Frame */
- if (!(c = super_error(st, skb)))
- ret = FsmEvent(&st->l2.l2m, EV_L2_SUPER, skb);
- } else if (IsUI(datap)) {
- if (!(c = UI_error(st, skb)))
- ret = FsmEvent(&st->l2.l2m, EV_L2_UI, skb);
- } else if (IsSABME(datap, st)) {
- if (!(c = unnum_error(st, skb, CMD)))
- ret = FsmEvent(&st->l2.l2m, EV_L2_SABME, skb);
- } else if (IsUA(datap)) {
- if (!(c = unnum_error(st, skb, RSP)))
- ret = FsmEvent(&st->l2.l2m, EV_L2_UA, skb);
- } else if (IsDISC(datap)) {
- if (!(c = unnum_error(st, skb, CMD)))
- ret = FsmEvent(&st->l2.l2m, EV_L2_DISC, skb);
- } else if (IsDM(datap)) {
- if (!(c = unnum_error(st, skb, RSP)))
- ret = FsmEvent(&st->l2.l2m, EV_L2_DM, skb);
- } else if (IsFRMR(datap)) {
- if (!(c = FRMR_error(st, skb)))
- ret = FsmEvent(&st->l2.l2m, EV_L2_FRMR, skb);
- } else {
- FsmEvent(&st->l2.l2m, EV_L2_FRAME_ERROR, (void *) 'L');
- dev_kfree_skb(skb);
- ret = 0;
- }
- if (c) {
- dev_kfree_skb(skb);
- FsmEvent(&st->l2.l2m, EV_L2_FRAME_ERROR, (void *)(long)c);
- ret = 0;
- }
- if (ret)
- dev_kfree_skb(skb);
- break;
- case (PH_PULL | CONFIRM):
- FsmEvent(&st->l2.l2m, EV_L2_ACK_PULL, arg);
- break;
- case (PH_PAUSE | INDICATION):
- test_and_set_bit(FLG_DCHAN_BUSY, &st->l2.flag);
- break;
- case (PH_PAUSE | CONFIRM):
- test_and_clear_bit(FLG_DCHAN_BUSY, &st->l2.flag);
- break;
- case (PH_ACTIVATE | CONFIRM):
- case (PH_ACTIVATE | INDICATION):
- test_and_set_bit(FLG_L1_ACTIV, &st->l2.flag);
- if (test_and_clear_bit(FLG_ESTAB_PEND, &st->l2.flag))
- FsmEvent(&st->l2.l2m, EV_L2_DL_ESTABLISH_REQ, arg);
- break;
- case (PH_DEACTIVATE | INDICATION):
- case (PH_DEACTIVATE | CONFIRM):
- test_and_clear_bit(FLG_L1_ACTIV, &st->l2.flag);
- FsmEvent(&st->l2.l2m, EV_L1_DEACTIVATE, arg);
- break;
- default:
- l2m_debug(&st->l2.l2m, "l2 unknown pr %04x", pr);
- break;
- }
-}
-
-static void
-isdnl2_l3l2(struct PStack *st, int pr, void *arg)
-{
- switch (pr) {
- case (DL_DATA | REQUEST):
- if (FsmEvent(&st->l2.l2m, EV_L2_DL_DATA, arg)) {
- dev_kfree_skb((struct sk_buff *) arg);
- }
- break;
- case (DL_UNIT_DATA | REQUEST):
- if (FsmEvent(&st->l2.l2m, EV_L2_DL_UNIT_DATA, arg)) {
- dev_kfree_skb((struct sk_buff *) arg);
- }
- break;
- case (DL_ESTABLISH | REQUEST):
- if (test_bit(FLG_L1_ACTIV, &st->l2.flag)) {
- if (test_bit(FLG_LAPD, &st->l2.flag) ||
- test_bit(FLG_ORIG, &st->l2.flag)) {
- FsmEvent(&st->l2.l2m, EV_L2_DL_ESTABLISH_REQ, arg);
- }
- } else {
- if (test_bit(FLG_LAPD, &st->l2.flag) ||
- test_bit(FLG_ORIG, &st->l2.flag)) {
- test_and_set_bit(FLG_ESTAB_PEND, &st->l2.flag);
- }
- st->l2.l2l1(st, PH_ACTIVATE, NULL);
- }
- break;
- case (DL_RELEASE | REQUEST):
- if (test_bit(FLG_LAPB, &st->l2.flag)) {
- st->l2.l2l1(st, PH_DEACTIVATE, NULL);
- }
- FsmEvent(&st->l2.l2m, EV_L2_DL_RELEASE_REQ, arg);
- break;
- case (MDL_ASSIGN | REQUEST):
- FsmEvent(&st->l2.l2m, EV_L2_MDL_ASSIGN, arg);
- break;
- case (MDL_REMOVE | REQUEST):
- FsmEvent(&st->l2.l2m, EV_L2_MDL_REMOVE, arg);
- break;
- case (MDL_ERROR | RESPONSE):
- FsmEvent(&st->l2.l2m, EV_L2_MDL_ERROR, arg);
- break;
- }
-}
-
-void
-releasestack_isdnl2(struct PStack *st)
-{
- FsmDelTimer(&st->l2.t200, 21);
- FsmDelTimer(&st->l2.t203, 16);
- skb_queue_purge(&st->l2.i_queue);
- skb_queue_purge(&st->l2.ui_queue);
- ReleaseWin(&st->l2);
-}
-
-static void
-l2m_debug(struct FsmInst *fi, char *fmt, ...)
-{
- va_list args;
- struct PStack *st = fi->userdata;
-
- va_start(args, fmt);
- VHiSax_putstatus(st->l1.hardware, st->l2.debug_id, fmt, args);
- va_end(args);
-}
-
-void
-setstack_isdnl2(struct PStack *st, char *debug_id)
-{
- spin_lock_init(&st->l2.lock);
- st->l1.l1l2 = isdnl2_l1l2;
- st->l3.l3l2 = isdnl2_l3l2;
-
- skb_queue_head_init(&st->l2.i_queue);
- skb_queue_head_init(&st->l2.ui_queue);
- InitWin(&st->l2);
- st->l2.debug = 0;
-
- st->l2.l2m.fsm = &l2fsm;
- if (test_bit(FLG_LAPB, &st->l2.flag))
- st->l2.l2m.state = ST_L2_4;
- else
- st->l2.l2m.state = ST_L2_1;
- st->l2.l2m.debug = 0;
- st->l2.l2m.userdata = st;
- st->l2.l2m.userint = 0;
- st->l2.l2m.printdebug = l2m_debug;
- strcpy(st->l2.debug_id, debug_id);
-
- FsmInitTimer(&st->l2.l2m, &st->l2.t200);
- FsmInitTimer(&st->l2.l2m, &st->l2.t203);
-}
-
-static void
-transl2_l3l2(struct PStack *st, int pr, void *arg)
-{
- switch (pr) {
- case (DL_DATA | REQUEST):
- case (DL_UNIT_DATA | REQUEST):
- st->l2.l2l1(st, PH_DATA | REQUEST, arg);
- break;
- case (DL_ESTABLISH | REQUEST):
- st->l2.l2l1(st, PH_ACTIVATE | REQUEST, NULL);
- break;
- case (DL_RELEASE | REQUEST):
- st->l2.l2l1(st, PH_DEACTIVATE | REQUEST, NULL);
- break;
- }
-}
-
-void
-setstack_transl2(struct PStack *st)
-{
- st->l3.l3l2 = transl2_l3l2;
-}
-
-void
-releasestack_transl2(struct PStack *st)
-{
-}
-
-int __init
-Isdnl2New(void)
-{
- l2fsm.state_count = L2_STATE_COUNT;
- l2fsm.event_count = L2_EVENT_COUNT;
- l2fsm.strEvent = strL2Event;
- l2fsm.strState = strL2State;
- return FsmNew(&l2fsm, L2FnList, ARRAY_SIZE(L2FnList));
-}
-
-void
-Isdnl2Free(void)
-{
- FsmFree(&l2fsm);
-}
diff --git a/drivers/isdn/hisax/isdnl2.h b/drivers/isdn/hisax/isdnl2.h
deleted file mode 100644
index 7e447fb8ed1d..000000000000
--- a/drivers/isdn/hisax/isdnl2.h
+++ /dev/null
@@ -1,25 +0,0 @@
-/* $Id: isdnl2.h,v 1.3.6.2 2001/09/23 22:24:49 kai Exp $
- *
- * Layer 2 defines
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#define RR 0x01
-#define RNR 0x05
-#define REJ 0x09
-#define SABME 0x6f
-#define SABM 0x2f
-#define DM 0x0f
-#define UI 0x03
-#define DISC 0x43
-#define UA 0x63
-#define FRMR 0x87
-#define XID 0xaf
-
-#define CMD 0
-#define RSP 1
-
-#define LC_FLUSH_WAIT 1
diff --git a/drivers/isdn/hisax/isdnl3.c b/drivers/isdn/hisax/isdnl3.c
deleted file mode 100644
index bb3f9ec62749..000000000000
--- a/drivers/isdn/hisax/isdnl3.c
+++ /dev/null
@@ -1,594 +0,0 @@
-/* $Id: isdnl3.c,v 2.22.2.3 2004/01/13 14:31:25 keil Exp $
- *
- * Author Karsten Keil
- * based on the teles driver from Jan den Ouden
- * Copyright by Karsten Keil <keil@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- * For changes and modifications please read
- * Documentation/isdn/HiSax.cert
- *
- * Thanks to Jan den Ouden
- * Fritz Elfert
- *
- */
-
-#include <linux/init.h>
-#include <linux/slab.h>
-#include "hisax.h"
-#include "isdnl3.h"
-
-const char *l3_revision = "$Revision: 2.22.2.3 $";
-
-static struct Fsm l3fsm;
-
-enum {
- ST_L3_LC_REL,
- ST_L3_LC_ESTAB_WAIT,
- ST_L3_LC_REL_DELAY,
- ST_L3_LC_REL_WAIT,
- ST_L3_LC_ESTAB,
-};
-
-#define L3_STATE_COUNT (ST_L3_LC_ESTAB + 1)
-
-static char *strL3State[] =
-{
- "ST_L3_LC_REL",
- "ST_L3_LC_ESTAB_WAIT",
- "ST_L3_LC_REL_DELAY",
- "ST_L3_LC_REL_WAIT",
- "ST_L3_LC_ESTAB",
-};
-
-enum {
- EV_ESTABLISH_REQ,
- EV_ESTABLISH_IND,
- EV_ESTABLISH_CNF,
- EV_RELEASE_REQ,
- EV_RELEASE_CNF,
- EV_RELEASE_IND,
- EV_TIMEOUT,
-};
-
-#define L3_EVENT_COUNT (EV_TIMEOUT + 1)
-
-static char *strL3Event[] =
-{
- "EV_ESTABLISH_REQ",
- "EV_ESTABLISH_IND",
- "EV_ESTABLISH_CNF",
- "EV_RELEASE_REQ",
- "EV_RELEASE_CNF",
- "EV_RELEASE_IND",
- "EV_TIMEOUT",
-};
-
-static __printf(2, 3) void
- l3m_debug(struct FsmInst *fi, char *fmt, ...)
-{
- va_list args;
- struct PStack *st = fi->userdata;
-
- va_start(args, fmt);
- VHiSax_putstatus(st->l1.hardware, st->l3.debug_id, fmt, args);
- va_end(args);
-}
-
-u_char *
-findie(u_char *p, int size, u_char ie, int wanted_set)
-{
- int l, codeset, maincodeset;
- u_char *pend = p + size;
-
- /* skip protocol discriminator, callref and message type */
- p++;
- l = (*p++) & 0xf;
- p += l;
- p++;
- codeset = 0;
- maincodeset = 0;
- /* while there are bytes left... */
- while (p < pend) {
- if ((*p & 0xf0) == 0x90) {
- codeset = *p & 0x07;
- if (!(*p & 0x08))
- maincodeset = codeset;
- }
- if (*p & 0x80)
- p++;
- else {
- if (codeset == wanted_set) {
- if (*p == ie)
- { /* improved length check (Werner Cornelius) */
- if ((pend - p) < 2)
- return (NULL);
- if (*(p + 1) > (pend - (p + 2)))
- return (NULL);
- return (p);
- }
-
- if (*p > ie)
- return (NULL);
- }
- p++;
- l = *p++;
- p += l;
- codeset = maincodeset;
- }
- }
- return (NULL);
-}
-
-int
-getcallref(u_char *p)
-{
- int l, cr = 0;
-
- p++; /* prot discr */
-	if (*p & 0xfe)		/* wrong callref: BRI uses only 1 octet */
- return (-2);
- l = 0xf & *p++; /* callref length */
- if (!l) /* dummy CallRef */
- return (-1);
- cr = *p++;
- return (cr);
-}
-
-static int OrigCallRef = 0;
-
-int
-newcallref(void)
-{
- if (OrigCallRef == 127)
- OrigCallRef = 1;
- else
- OrigCallRef++;
- return (OrigCallRef);
-}
-
-void
-newl3state(struct l3_process *pc, int state)
-{
- if (pc->debug & L3_DEB_STATE)
- l3_debug(pc->st, "%s cr %d %d --> %d", __func__,
- pc->callref & 0x7F,
- pc->state, state);
- pc->state = state;
-}
-
-static void
-L3ExpireTimer(struct timer_list *timer)
-{
- struct L3Timer *t = from_timer(t, timer, tl);
- t->pc->st->lli.l4l3(t->pc->st, t->event, t->pc);
-}
-
-void
-L3InitTimer(struct l3_process *pc, struct L3Timer *t)
-{
- t->pc = pc;
- timer_setup(&t->tl, L3ExpireTimer, 0);
-}
-
-void
-L3DelTimer(struct L3Timer *t)
-{
- del_timer(&t->tl);
-}
-
-int
-L3AddTimer(struct L3Timer *t,
- int millisec, int event)
-{
- if (timer_pending(&t->tl)) {
- printk(KERN_WARNING "L3AddTimer: timer already active!\n");
- return -1;
- }
- t->event = event;
- t->tl.expires = jiffies + (millisec * HZ) / 1000;
- add_timer(&t->tl);
- return 0;
-}
-
-void
-StopAllL3Timer(struct l3_process *pc)
-{
- L3DelTimer(&pc->timer);
-}
-
-struct sk_buff *
-l3_alloc_skb(int len)
-{
- struct sk_buff *skb;
-
- if (!(skb = alloc_skb(len + MAX_HEADER_LEN, GFP_ATOMIC))) {
- printk(KERN_WARNING "HiSax: No skb for D-channel\n");
- return (NULL);
- }
- skb_reserve(skb, MAX_HEADER_LEN);
- return (skb);
-}
-
-static void
-no_l3_proto(struct PStack *st, int pr, void *arg)
-{
- struct sk_buff *skb = arg;
-
- HiSax_putstatus(st->l1.hardware, "L3", "no D protocol");
- if (skb) {
- dev_kfree_skb(skb);
- }
-}
-
-static int
-no_l3_proto_spec(struct PStack *st, isdn_ctrl *ic)
-{
- printk(KERN_WARNING "HiSax: no specific protocol handler for proto %lu\n", ic->arg & 0xFF);
- return (-1);
-}
-
-struct l3_process
-*getl3proc(struct PStack *st, int cr)
-{
- struct l3_process *p = st->l3.proc;
-
- while (p)
- if (p->callref == cr)
- return (p);
- else
- p = p->next;
- return (NULL);
-}
-
-struct l3_process
-*new_l3_process(struct PStack *st, int cr)
-{
- struct l3_process *p, *np;
-
- if (!(p = kmalloc(sizeof(struct l3_process), GFP_ATOMIC))) {
- printk(KERN_ERR "HiSax can't get memory for cr %d\n", cr);
- return (NULL);
- }
- if (!st->l3.proc)
- st->l3.proc = p;
- else {
- np = st->l3.proc;
- while (np->next)
- np = np->next;
- np->next = p;
- }
- p->next = NULL;
- p->debug = st->l3.debug;
- p->callref = cr;
- p->state = 0;
- p->chan = NULL;
- p->st = st;
- p->N303 = st->l3.N303;
- L3InitTimer(p, &p->timer);
- return (p);
-};
-
-void
-release_l3_process(struct l3_process *p)
-{
- struct l3_process *np, *pp = NULL;
-
- if (!p)
- return;
- np = p->st->l3.proc;
- while (np) {
- if (np == p) {
- StopAllL3Timer(p);
- if (pp)
- pp->next = np->next;
- else if (!(p->st->l3.proc = np->next) &&
- !test_bit(FLG_PTP, &p->st->l2.flag)) {
- if (p->debug)
- l3_debug(p->st, "release_l3_process: last process");
- if (skb_queue_empty(&p->st->l3.squeue)) {
- if (p->debug)
- l3_debug(p->st, "release_l3_process: release link");
- if (p->st->protocol != ISDN_PTYPE_NI1)
- FsmEvent(&p->st->l3.l3m, EV_RELEASE_REQ, NULL);
- else
- FsmEvent(&p->st->l3.l3m, EV_RELEASE_IND, NULL);
- } else {
- if (p->debug)
- l3_debug(p->st, "release_l3_process: not release link");
- }
- }
- kfree(p);
- return;
- }
- pp = np;
- np = np->next;
- }
- printk(KERN_ERR "HiSax internal L3 error CR(%d) not in list\n", p->callref);
- l3_debug(p->st, "HiSax internal L3 error CR(%d) not in list", p->callref);
-};
-
-static void
-l3ml3p(struct PStack *st, int pr)
-{
- struct l3_process *p = st->l3.proc;
- struct l3_process *np;
-
- while (p) {
-		/* p may be kfree()d under us, so save the next pointer before the call */
- np = p->next;
- st->l3.l3ml3(st, pr, p);
- p = np;
- }
-}
-
-void
-setstack_l3dc(struct PStack *st, struct Channel *chanp)
-{
- char tmp[64];
-
- st->l3.proc = NULL;
- st->l3.global = NULL;
- skb_queue_head_init(&st->l3.squeue);
- st->l3.l3m.fsm = &l3fsm;
- st->l3.l3m.state = ST_L3_LC_REL;
- st->l3.l3m.debug = 1;
- st->l3.l3m.userdata = st;
- st->l3.l3m.userint = 0;
- st->l3.l3m.printdebug = l3m_debug;
- FsmInitTimer(&st->l3.l3m, &st->l3.l3m_timer);
- strcpy(st->l3.debug_id, "L3DC ");
- st->lli.l4l3_proto = no_l3_proto_spec;
-
-#ifdef CONFIG_HISAX_EURO
- if (st->protocol == ISDN_PTYPE_EURO) {
- setstack_dss1(st);
- } else
-#endif
-#ifdef CONFIG_HISAX_NI1
- if (st->protocol == ISDN_PTYPE_NI1) {
- setstack_ni1(st);
- } else
-#endif
-#ifdef CONFIG_HISAX_1TR6
- if (st->protocol == ISDN_PTYPE_1TR6) {
- setstack_1tr6(st);
- } else
-#endif
- if (st->protocol == ISDN_PTYPE_LEASED) {
- st->lli.l4l3 = no_l3_proto;
- st->l2.l2l3 = no_l3_proto;
- st->l3.l3ml3 = no_l3_proto;
- printk(KERN_INFO "HiSax: Leased line mode\n");
- } else {
- st->lli.l4l3 = no_l3_proto;
- st->l2.l2l3 = no_l3_proto;
- st->l3.l3ml3 = no_l3_proto;
- sprintf(tmp, "protocol %s not supported",
- (st->protocol == ISDN_PTYPE_1TR6) ? "1tr6" :
- (st->protocol == ISDN_PTYPE_EURO) ? "euro" :
- (st->protocol == ISDN_PTYPE_NI1) ? "ni1" :
- "unknown");
- printk(KERN_WARNING "HiSax: %s\n", tmp);
- st->protocol = -1;
- }
-}
-
-static void
-isdnl3_trans(struct PStack *st, int pr, void *arg) {
- st->l3.l3l2(st, pr, arg);
-}
-
-void
-releasestack_isdnl3(struct PStack *st)
-{
- while (st->l3.proc)
- release_l3_process(st->l3.proc);
- if (st->l3.global) {
- StopAllL3Timer(st->l3.global);
- kfree(st->l3.global);
- st->l3.global = NULL;
- }
- FsmDelTimer(&st->l3.l3m_timer, 54);
- skb_queue_purge(&st->l3.squeue);
-}
-
-void
-setstack_l3bc(struct PStack *st, struct Channel *chanp)
-{
-
- st->l3.proc = NULL;
- st->l3.global = NULL;
- skb_queue_head_init(&st->l3.squeue);
- st->l3.l3m.fsm = &l3fsm;
- st->l3.l3m.state = ST_L3_LC_REL;
- st->l3.l3m.debug = 1;
- st->l3.l3m.userdata = st;
- st->l3.l3m.userint = 0;
- st->l3.l3m.printdebug = l3m_debug;
- strcpy(st->l3.debug_id, "L3BC ");
- st->lli.l4l3 = isdnl3_trans;
-}
-
-#define DREL_TIMER_VALUE 40000
-
-static void
-lc_activate(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- FsmChangeState(fi, ST_L3_LC_ESTAB_WAIT);
- st->l3.l3l2(st, DL_ESTABLISH | REQUEST, NULL);
-}
-
-static void
-lc_connect(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
- struct sk_buff *skb = arg;
- int dequeued = 0;
-
- FsmChangeState(fi, ST_L3_LC_ESTAB);
- while ((skb = skb_dequeue(&st->l3.squeue))) {
- st->l3.l3l2(st, DL_DATA | REQUEST, skb);
- dequeued++;
- }
- if ((!st->l3.proc) && dequeued) {
- if (st->l3.debug)
- l3_debug(st, "lc_connect: release link");
- FsmEvent(&st->l3.l3m, EV_RELEASE_REQ, NULL);
- } else
- l3ml3p(st, DL_ESTABLISH | INDICATION);
-}
-
-static void
-lc_connected(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
- struct sk_buff *skb = arg;
- int dequeued = 0;
-
- FsmDelTimer(&st->l3.l3m_timer, 51);
- FsmChangeState(fi, ST_L3_LC_ESTAB);
- while ((skb = skb_dequeue(&st->l3.squeue))) {
- st->l3.l3l2(st, DL_DATA | REQUEST, skb);
- dequeued++;
- }
- if ((!st->l3.proc) && dequeued) {
- if (st->l3.debug)
- l3_debug(st, "lc_connected: release link");
- FsmEvent(&st->l3.l3m, EV_RELEASE_REQ, NULL);
- } else
- l3ml3p(st, DL_ESTABLISH | CONFIRM);
-}
-
-static void
-lc_start_delay(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- FsmChangeState(fi, ST_L3_LC_REL_DELAY);
- FsmAddTimer(&st->l3.l3m_timer, DREL_TIMER_VALUE, EV_TIMEOUT, NULL, 50);
-}
-
-static void
-lc_start_delay_check(struct FsmInst *fi, int event, void *arg)
-/* 20/09/00 - GE timer not used for NI-1 as layer 2 should stay up */
-{
- struct PStack *st = fi->userdata;
-
- FsmChangeState(fi, ST_L3_LC_REL_DELAY);
-	/* 19/09/00 - GE timer not used for NI-1 */
- if (st->protocol != ISDN_PTYPE_NI1)
- FsmAddTimer(&st->l3.l3m_timer, DREL_TIMER_VALUE, EV_TIMEOUT, NULL, 50);
-}
-
-static void
-lc_release_req(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- if (test_bit(FLG_L2BLOCK, &st->l2.flag)) {
- if (st->l3.debug)
- l3_debug(st, "lc_release_req: l2 blocked");
- /* restart release timer */
- FsmAddTimer(&st->l3.l3m_timer, DREL_TIMER_VALUE, EV_TIMEOUT, NULL, 51);
- } else {
- FsmChangeState(fi, ST_L3_LC_REL_WAIT);
- st->l3.l3l2(st, DL_RELEASE | REQUEST, NULL);
- }
-}
-
-static void
-lc_release_ind(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- FsmDelTimer(&st->l3.l3m_timer, 52);
- FsmChangeState(fi, ST_L3_LC_REL);
- skb_queue_purge(&st->l3.squeue);
- l3ml3p(st, DL_RELEASE | INDICATION);
-}
-
-static void
-lc_release_cnf(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- FsmChangeState(fi, ST_L3_LC_REL);
- skb_queue_purge(&st->l3.squeue);
- l3ml3p(st, DL_RELEASE | CONFIRM);
-}
-
-
-/* *INDENT-OFF* */
-static struct FsmNode L3FnList[] __initdata =
-{
- {ST_L3_LC_REL, EV_ESTABLISH_REQ, lc_activate},
- {ST_L3_LC_REL, EV_ESTABLISH_IND, lc_connect},
- {ST_L3_LC_REL, EV_ESTABLISH_CNF, lc_connect},
- {ST_L3_LC_ESTAB_WAIT, EV_ESTABLISH_CNF, lc_connected},
- {ST_L3_LC_ESTAB_WAIT, EV_RELEASE_REQ, lc_start_delay},
- {ST_L3_LC_ESTAB_WAIT, EV_RELEASE_IND, lc_release_ind},
- {ST_L3_LC_ESTAB, EV_RELEASE_IND, lc_release_ind},
- {ST_L3_LC_ESTAB, EV_RELEASE_REQ, lc_start_delay_check},
- {ST_L3_LC_REL_DELAY, EV_RELEASE_IND, lc_release_ind},
- {ST_L3_LC_REL_DELAY, EV_ESTABLISH_REQ, lc_connected},
- {ST_L3_LC_REL_DELAY, EV_TIMEOUT, lc_release_req},
- {ST_L3_LC_REL_WAIT, EV_RELEASE_CNF, lc_release_cnf},
- {ST_L3_LC_REL_WAIT, EV_ESTABLISH_REQ, lc_activate},
-};
-/* *INDENT-ON* */
-
-void
-l3_msg(struct PStack *st, int pr, void *arg)
-{
- switch (pr) {
- case (DL_DATA | REQUEST):
- if (st->l3.l3m.state == ST_L3_LC_ESTAB) {
- st->l3.l3l2(st, pr, arg);
- } else {
- struct sk_buff *skb = arg;
-
- skb_queue_tail(&st->l3.squeue, skb);
- FsmEvent(&st->l3.l3m, EV_ESTABLISH_REQ, NULL);
- }
- break;
- case (DL_ESTABLISH | REQUEST):
- FsmEvent(&st->l3.l3m, EV_ESTABLISH_REQ, NULL);
- break;
- case (DL_ESTABLISH | CONFIRM):
- FsmEvent(&st->l3.l3m, EV_ESTABLISH_CNF, NULL);
- break;
- case (DL_ESTABLISH | INDICATION):
- FsmEvent(&st->l3.l3m, EV_ESTABLISH_IND, NULL);
- break;
- case (DL_RELEASE | INDICATION):
- FsmEvent(&st->l3.l3m, EV_RELEASE_IND, NULL);
- break;
- case (DL_RELEASE | CONFIRM):
- FsmEvent(&st->l3.l3m, EV_RELEASE_CNF, NULL);
- break;
- case (DL_RELEASE | REQUEST):
- FsmEvent(&st->l3.l3m, EV_RELEASE_REQ, NULL);
- break;
- }
-}
-
-int __init
-Isdnl3New(void)
-{
- l3fsm.state_count = L3_STATE_COUNT;
- l3fsm.event_count = L3_EVENT_COUNT;
- l3fsm.strEvent = strL3Event;
- l3fsm.strState = strL3State;
- return FsmNew(&l3fsm, L3FnList, ARRAY_SIZE(L3FnList));
-}
-
-void
-Isdnl3Free(void)
-{
- FsmFree(&l3fsm);
-}
diff --git a/drivers/isdn/hisax/isdnl3.h b/drivers/isdn/hisax/isdnl3.h
deleted file mode 100644
index 0edc99d40dc2..000000000000
--- a/drivers/isdn/hisax/isdnl3.h
+++ /dev/null
@@ -1,42 +0,0 @@
-/* $Id: isdnl3.h,v 2.6.6.2 2001/09/23 22:24:49 kai Exp $
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#define SBIT(state) (1 << state)
-#define ALL_STATES 0x03ffffff
-
-#define PROTO_DIS_EURO 0x08
-
-#define L3_DEB_WARN 0x01
-#define L3_DEB_PROTERR 0x02
-#define L3_DEB_STATE 0x04
-#define L3_DEB_CHARGE 0x08
-#define L3_DEB_CHECK 0x10
-#define L3_DEB_SI 0x20
-
-struct stateentry {
- int state;
- int primitive;
- void (*rout) (struct l3_process *, u8, void *);
-};
-
-#define l3_debug(st, fmt, args...) HiSax_putstatus(st->l1.hardware, "l3 ", fmt, ## args)
-
-struct PStack;
-
-void newl3state(struct l3_process *pc, int state);
-void L3InitTimer(struct l3_process *pc, struct L3Timer *t);
-void L3DelTimer(struct L3Timer *t);
-int L3AddTimer(struct L3Timer *t, int millisec, int event);
-void StopAllL3Timer(struct l3_process *pc);
-struct sk_buff *l3_alloc_skb(int len);
-struct l3_process *new_l3_process(struct PStack *st, int cr);
-void release_l3_process(struct l3_process *p);
-struct l3_process *getl3proc(struct PStack *st, int cr);
-void l3_msg(struct PStack *st, int pr, void *arg);
-void setstack_dss1(struct PStack *st);
-void setstack_ni1(struct PStack *st);
-void setstack_1tr6(struct PStack *st);
diff --git a/drivers/isdn/hisax/isurf.c b/drivers/isdn/hisax/isurf.c
deleted file mode 100644
index 53e299be4304..000000000000
--- a/drivers/isdn/hisax/isurf.c
+++ /dev/null
@@ -1,305 +0,0 @@
-/* $Id: isurf.c,v 1.12.2.4 2004/01/13 21:46:03 keil Exp $
- *
- * low level stuff for Siemens I-Surf/I-Talk cards
- *
- * Author Karsten Keil
- * Copyright by Karsten Keil <keil@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#include <linux/init.h>
-#include "hisax.h"
-#include "isac.h"
-#include "isar.h"
-#include "isdnl1.h"
-#include <linux/isapnp.h>
-
-static const char *ISurf_revision = "$Revision: 1.12.2.4 $";
-
-#define byteout(addr, val) outb(val, addr)
-#define bytein(addr) inb(addr)
-
-#define ISURF_ISAR_RESET 1
-#define ISURF_ISAC_RESET 2
-#define ISURF_ISAR_EA 4
-#define ISURF_ARCOFI_RESET 8
-#define ISURF_RESET (ISURF_ISAR_RESET | ISURF_ISAC_RESET | ISURF_ARCOFI_RESET)
-
-#define ISURF_ISAR_OFFSET 0
-#define ISURF_ISAC_OFFSET 0x100
-#define ISURF_IOMEM_SIZE 0x400
-/* Interface functions */
-
-static u_char
-ReadISAC(struct IsdnCardState *cs, u_char offset)
-{
- return (readb(cs->hw.isurf.isac + offset));
-}
-
-static void
-WriteISAC(struct IsdnCardState *cs, u_char offset, u_char value)
-{
- writeb(value, cs->hw.isurf.isac + offset); mb();
-}
-
-static void
-ReadISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- register int i;
- for (i = 0; i < size; i++)
- data[i] = readb(cs->hw.isurf.isac);
-}
-
-static void
-WriteISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- register int i;
- for (i = 0; i < size; i++) {
- writeb(data[i], cs->hw.isurf.isac); mb();
- }
-}
-
-/* ISAR access routines
- * mode = 0 access with IRQ on
- * mode = 1 access with IRQ off
- * mode = 2 access with IRQ off and using last offset
- */
-
-static u_char
-ReadISAR(struct IsdnCardState *cs, int mode, u_char offset)
-{
- return (readb(cs->hw.isurf.isar + offset));
-}
-
-static void
-WriteISAR(struct IsdnCardState *cs, int mode, u_char offset, u_char value)
-{
- writeb(value, cs->hw.isurf.isar + offset); mb();
-}
-
-static irqreturn_t
-isurf_interrupt(int intno, void *dev_id)
-{
- struct IsdnCardState *cs = dev_id;
- u_char val;
- int cnt = 5;
- u_long flags;
-
- spin_lock_irqsave(&cs->lock, flags);
- val = readb(cs->hw.isurf.isar + ISAR_IRQBIT);
-Start_ISAR:
- if (val & ISAR_IRQSTA)
- isar_int_main(cs);
- val = readb(cs->hw.isurf.isac + ISAC_ISTA);
-Start_ISAC:
- if (val)
- isac_interrupt(cs, val);
- val = readb(cs->hw.isurf.isar + ISAR_IRQBIT);
- if ((val & ISAR_IRQSTA) && --cnt) {
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "ISAR IntStat after IntRoutine");
- goto Start_ISAR;
- }
- val = readb(cs->hw.isurf.isac + ISAC_ISTA);
- if (val && --cnt) {
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "ISAC IntStat after IntRoutine");
- goto Start_ISAC;
- }
- if (!cnt)
- printk(KERN_WARNING "ISurf IRQ LOOP\n");
-
- writeb(0, cs->hw.isurf.isar + ISAR_IRQBIT); mb();
- writeb(0xFF, cs->hw.isurf.isac + ISAC_MASK); mb();
- writeb(0, cs->hw.isurf.isac + ISAC_MASK); mb();
- writeb(ISAR_IRQMSK, cs->hw.isurf.isar + ISAR_IRQBIT); mb();
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
-}
-
-static void
-release_io_isurf(struct IsdnCardState *cs)
-{
- release_region(cs->hw.isurf.reset, 1);
- iounmap(cs->hw.isurf.isar);
- release_mem_region(cs->hw.isurf.phymem, ISURF_IOMEM_SIZE);
-}
-
-static void
-reset_isurf(struct IsdnCardState *cs, u_char chips)
-{
- printk(KERN_INFO "ISurf: resetting card\n");
-
- byteout(cs->hw.isurf.reset, chips); /* Reset On */
- mdelay(10);
- byteout(cs->hw.isurf.reset, ISURF_ISAR_EA); /* Reset Off */
- mdelay(10);
-}
-
-static int
-ISurf_card_msg(struct IsdnCardState *cs, int mt, void *arg)
-{
- u_long flags;
-
- switch (mt) {
- case CARD_RESET:
- spin_lock_irqsave(&cs->lock, flags);
- reset_isurf(cs, ISURF_RESET);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_RELEASE:
- release_io_isurf(cs);
- return (0);
- case CARD_INIT:
- spin_lock_irqsave(&cs->lock, flags);
- reset_isurf(cs, ISURF_RESET);
- clear_pending_isac_ints(cs);
- writeb(0, cs->hw.isurf.isar + ISAR_IRQBIT); mb();
- initisac(cs);
- initisar(cs);
- /* Reenable ISAC IRQ */
- cs->writeisac(cs, ISAC_MASK, 0);
- /* RESET Receiver and Transmitter */
- cs->writeisac(cs, ISAC_CMDR, 0x41);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_TEST:
- return (0);
- }
- return (0);
-}
-
-static int
-isurf_auxcmd(struct IsdnCardState *cs, isdn_ctrl *ic) {
- int ret;
- u_long flags;
-
- if ((ic->command == ISDN_CMD_IOCTL) && (ic->arg == 9)) {
- ret = isar_auxcmd(cs, ic);
- spin_lock_irqsave(&cs->lock, flags);
- if (!ret) {
- reset_isurf(cs, ISURF_ISAR_EA | ISURF_ISAC_RESET |
- ISURF_ARCOFI_RESET);
- initisac(cs);
- cs->writeisac(cs, ISAC_MASK, 0);
- cs->writeisac(cs, ISAC_CMDR, 0x41);
- }
- spin_unlock_irqrestore(&cs->lock, flags);
- return (ret);
- }
- return (isar_auxcmd(cs, ic));
-}
-
-#ifdef __ISAPNP__
-static struct pnp_card *pnp_c = NULL;
-#endif
-
-int setup_isurf(struct IsdnCard *card)
-{
- int ver;
- struct IsdnCardState *cs = card->cs;
- char tmp[64];
-
- strcpy(tmp, ISurf_revision);
- printk(KERN_INFO "HiSax: ISurf driver Rev. %s\n", HiSax_getrev(tmp));
-
- if (cs->typ != ISDN_CTYPE_ISURF)
- return (0);
- if (card->para[1] && card->para[2]) {
- cs->hw.isurf.reset = card->para[1];
- cs->hw.isurf.phymem = card->para[2];
- cs->irq = card->para[0];
- } else {
-#ifdef __ISAPNP__
- if (isapnp_present()) {
- struct pnp_dev *pnp_d = NULL;
- int err;
-
- cs->subtyp = 0;
- if ((pnp_c = pnp_find_card(
- ISAPNP_VENDOR('S', 'I', 'E'),
- ISAPNP_FUNCTION(0x0010), pnp_c))) {
- if (!(pnp_d = pnp_find_dev(pnp_c,
- ISAPNP_VENDOR('S', 'I', 'E'),
- ISAPNP_FUNCTION(0x0010), pnp_d))) {
- printk(KERN_ERR "ISurfPnP: PnP error card found, no device\n");
- return (0);
- }
- pnp_disable_dev(pnp_d);
- err = pnp_activate_dev(pnp_d);
- if (err < 0) {
- pr_warn("%s: pnp_activate_dev ret=%d\n",
- __func__, err);
- return 0;
- }
- cs->hw.isurf.reset = pnp_port_start(pnp_d, 0);
- cs->hw.isurf.phymem = pnp_mem_start(pnp_d, 1);
- cs->irq = pnp_irq(pnp_d, 0);
- if (cs->irq == -1 || !cs->hw.isurf.reset || !cs->hw.isurf.phymem) {
- printk(KERN_ERR "ISurfPnP:some resources are missing %d/%x/%lx\n",
- cs->irq, cs->hw.isurf.reset, cs->hw.isurf.phymem);
- pnp_disable_dev(pnp_d);
- return (0);
- }
- } else {
- printk(KERN_INFO "ISurfPnP: no ISAPnP card found\n");
- return (0);
- }
- } else {
- printk(KERN_INFO "ISurfPnP: no ISAPnP bus found\n");
- return (0);
- }
-#else
- printk(KERN_WARNING "HiSax: Siemens I-Surf port/mem not set\n");
- return (0);
-#endif
- }
- if (!request_region(cs->hw.isurf.reset, 1, "isurf isdn")) {
- printk(KERN_WARNING
- "HiSax: Siemens I-Surf config port %x already in use\n",
- cs->hw.isurf.reset);
- return (0);
- }
- if (!request_region(cs->hw.isurf.phymem, ISURF_IOMEM_SIZE, "isurf iomem")) {
- printk(KERN_WARNING "HiSax: Siemens I-Surf memory region "
- "%lx-%lx already in use\n",
- cs->hw.isurf.phymem,
- cs->hw.isurf.phymem + ISURF_IOMEM_SIZE);
- release_region(cs->hw.isurf.reset, 1);
- return (0);
- }
- cs->hw.isurf.isar = ioremap(cs->hw.isurf.phymem, ISURF_IOMEM_SIZE);
- cs->hw.isurf.isac = cs->hw.isurf.isar + ISURF_ISAC_OFFSET;
- printk(KERN_INFO
- "ISurf: defined at 0x%x 0x%lx IRQ %d\n",
- cs->hw.isurf.reset,
- cs->hw.isurf.phymem,
- cs->irq);
-
- setup_isac(cs);
- cs->cardmsg = &ISurf_card_msg;
- cs->irq_func = &isurf_interrupt;
- cs->auxcmd = &isurf_auxcmd;
- cs->readisac = &ReadISAC;
- cs->writeisac = &WriteISAC;
- cs->readisacfifo = &ReadISACfifo;
- cs->writeisacfifo = &WriteISACfifo;
- cs->bcs[0].hw.isar.reg = &cs->hw.isurf.isar_r;
- cs->bcs[1].hw.isar.reg = &cs->hw.isurf.isar_r;
- test_and_set_bit(HW_ISAR, &cs->HW_Flags);
- ISACVersion(cs, "ISurf:");
- cs->BC_Read_Reg = &ReadISAR;
- cs->BC_Write_Reg = &WriteISAR;
- cs->BC_Send_Data = &isar_fill_fifo;
- ver = ISARVersion(cs, "ISurf:");
- if (ver < 0) {
- printk(KERN_WARNING
- "ISurf: wrong ISAR version (ret = %d)\n", ver);
- release_io_isurf(cs);
- return (0);
- }
- return (1);
-}
diff --git a/drivers/isdn/hisax/ix1_micro.c b/drivers/isdn/hisax/ix1_micro.c
deleted file mode 100644
index bfb79f3f0a49..000000000000
--- a/drivers/isdn/hisax/ix1_micro.c
+++ /dev/null
@@ -1,316 +0,0 @@
-/* $Id: ix1_micro.c,v 2.12.2.4 2004/01/13 23:48:39 keil Exp $
- *
- * low level stuff for ITK ix1-micro Rev.2 isdn cards
- * derived from the original file teles3.c from Karsten Keil
- *
- * Author Klaus-Peter Nischke
- * Copyright by Klaus-Peter Nischke, ITK AG
- * <klaus@nischke.do.eunet.de>
- * by Karsten Keil <keil@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- * Klaus-Peter Nischke
- * Deusener Str. 287
- * 44369 Dortmund
- * Germany
- */
-
-#include <linux/init.h>
-#include <linux/isapnp.h>
-#include "hisax.h"
-#include "isac.h"
-#include "hscx.h"
-#include "isdnl1.h"
-
-static const char *ix1_revision = "$Revision: 2.12.2.4 $";
-
-#define byteout(addr, val) outb(val, addr)
-#define bytein(addr) inb(addr)
-
-#define SPECIAL_PORT_OFFSET 3
-
-#define ISAC_COMMAND_OFFSET 2
-#define ISAC_DATA_OFFSET 0
-#define HSCX_COMMAND_OFFSET 2
-#define HSCX_DATA_OFFSET 1
-
-#define TIMEOUT 50
-
-static inline u_char
-readreg(unsigned int ale, unsigned int adr, u_char off)
-{
- register u_char ret;
-
- byteout(ale, off);
- ret = bytein(adr);
- return (ret);
-}
-
-static inline void
-readfifo(unsigned int ale, unsigned int adr, u_char off, u_char *data, int size)
-{
- byteout(ale, off);
- insb(adr, data, size);
-}
-
-
-static inline void
-writereg(unsigned int ale, unsigned int adr, u_char off, u_char data)
-{
- byteout(ale, off);
- byteout(adr, data);
-}
-
-static inline void
-writefifo(unsigned int ale, unsigned int adr, u_char off, u_char *data, int size)
-{
- byteout(ale, off);
- outsb(adr, data, size);
-}
-
-/* Interface functions */
-
-static u_char
-ReadISAC(struct IsdnCardState *cs, u_char offset)
-{
- return (readreg(cs->hw.ix1.isac_ale, cs->hw.ix1.isac, offset));
-}
-
-static void
-WriteISAC(struct IsdnCardState *cs, u_char offset, u_char value)
-{
- writereg(cs->hw.ix1.isac_ale, cs->hw.ix1.isac, offset, value);
-}
-
-static void
-ReadISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- readfifo(cs->hw.ix1.isac_ale, cs->hw.ix1.isac, 0, data, size);
-}
-
-static void
-WriteISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- writefifo(cs->hw.ix1.isac_ale, cs->hw.ix1.isac, 0, data, size);
-}
-
-static u_char
-ReadHSCX(struct IsdnCardState *cs, int hscx, u_char offset)
-{
- return (readreg(cs->hw.ix1.hscx_ale,
- cs->hw.ix1.hscx, offset + (hscx ? 0x40 : 0)));
-}
-
-static void
-WriteHSCX(struct IsdnCardState *cs, int hscx, u_char offset, u_char value)
-{
- writereg(cs->hw.ix1.hscx_ale,
- cs->hw.ix1.hscx, offset + (hscx ? 0x40 : 0), value);
-}
-
-#define READHSCX(cs, nr, reg) readreg(cs->hw.ix1.hscx_ale, \
- cs->hw.ix1.hscx, reg + (nr ? 0x40 : 0))
-#define WRITEHSCX(cs, nr, reg, data) writereg(cs->hw.ix1.hscx_ale, \
- cs->hw.ix1.hscx, reg + (nr ? 0x40 : 0), data)
-
-#define READHSCXFIFO(cs, nr, ptr, cnt) readfifo(cs->hw.ix1.hscx_ale, \
- cs->hw.ix1.hscx, (nr ? 0x40 : 0), ptr, cnt)
-
-#define WRITEHSCXFIFO(cs, nr, ptr, cnt) writefifo(cs->hw.ix1.hscx_ale, \
- cs->hw.ix1.hscx, (nr ? 0x40 : 0), ptr, cnt)
-
-#include "hscx_irq.c"
-
-static irqreturn_t
-ix1micro_interrupt(int intno, void *dev_id)
-{
- struct IsdnCardState *cs = dev_id;
- u_char val;
- u_long flags;
-
- spin_lock_irqsave(&cs->lock, flags);
- val = readreg(cs->hw.ix1.hscx_ale, cs->hw.ix1.hscx, HSCX_ISTA + 0x40);
-Start_HSCX:
- if (val)
- hscx_int_main(cs, val);
- val = readreg(cs->hw.ix1.isac_ale, cs->hw.ix1.isac, ISAC_ISTA);
-Start_ISAC:
- if (val)
- isac_interrupt(cs, val);
- val = readreg(cs->hw.ix1.hscx_ale, cs->hw.ix1.hscx, HSCX_ISTA + 0x40);
- if (val) {
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "HSCX IntStat after IntRoutine");
- goto Start_HSCX;
- }
- val = readreg(cs->hw.ix1.isac_ale, cs->hw.ix1.isac, ISAC_ISTA);
- if (val) {
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "ISAC IntStat after IntRoutine");
- goto Start_ISAC;
- }
- writereg(cs->hw.ix1.hscx_ale, cs->hw.ix1.hscx, HSCX_MASK, 0xFF);
- writereg(cs->hw.ix1.hscx_ale, cs->hw.ix1.hscx, HSCX_MASK + 0x40, 0xFF);
- writereg(cs->hw.ix1.isac_ale, cs->hw.ix1.isac, ISAC_MASK, 0xFF);
- writereg(cs->hw.ix1.isac_ale, cs->hw.ix1.isac, ISAC_MASK, 0);
- writereg(cs->hw.ix1.hscx_ale, cs->hw.ix1.hscx, HSCX_MASK, 0);
- writereg(cs->hw.ix1.hscx_ale, cs->hw.ix1.hscx, HSCX_MASK + 0x40, 0);
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
-}
-
-static void
-release_io_ix1micro(struct IsdnCardState *cs)
-{
- if (cs->hw.ix1.cfg_reg)
- release_region(cs->hw.ix1.cfg_reg, 4);
-}
-
-static void
-ix1_reset(struct IsdnCardState *cs)
-{
- int cnt;
-
- /* reset isac */
- cnt = 3 * (HZ / 10) + 1;
- while (cnt--) {
- byteout(cs->hw.ix1.cfg_reg + SPECIAL_PORT_OFFSET, 1);
- HZDELAY(1); /* wait >=10 ms */
- }
- byteout(cs->hw.ix1.cfg_reg + SPECIAL_PORT_OFFSET, 0);
-}
-
-static int
-ix1_card_msg(struct IsdnCardState *cs, int mt, void *arg)
-{
- u_long flags;
-
- switch (mt) {
- case CARD_RESET:
- spin_lock_irqsave(&cs->lock, flags);
- ix1_reset(cs);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_RELEASE:
- release_io_ix1micro(cs);
- return (0);
- case CARD_INIT:
- spin_lock_irqsave(&cs->lock, flags);
- ix1_reset(cs);
- inithscxisac(cs, 3);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_TEST:
- return (0);
- }
- return (0);
-}
-
-#ifdef __ISAPNP__
-static struct isapnp_device_id itk_ids[] = {
- { ISAPNP_VENDOR('I', 'T', 'K'), ISAPNP_FUNCTION(0x25),
- ISAPNP_VENDOR('I', 'T', 'K'), ISAPNP_FUNCTION(0x25),
- (unsigned long) "ITK micro 2" },
- { ISAPNP_VENDOR('I', 'T', 'K'), ISAPNP_FUNCTION(0x29),
- ISAPNP_VENDOR('I', 'T', 'K'), ISAPNP_FUNCTION(0x29),
- (unsigned long) "ITK micro 2." },
- { 0, }
-};
-
-static struct isapnp_device_id *ipid = &itk_ids[0];
-static struct pnp_card *pnp_c = NULL;
-#endif
-
-
-int setup_ix1micro(struct IsdnCard *card)
-{
- struct IsdnCardState *cs = card->cs;
- char tmp[64];
-
- strcpy(tmp, ix1_revision);
- printk(KERN_INFO "HiSax: ITK IX1 driver Rev. %s\n", HiSax_getrev(tmp));
- if (cs->typ != ISDN_CTYPE_IX1MICROR2)
- return (0);
-
-#ifdef __ISAPNP__
- if (!card->para[1] && isapnp_present()) {
- struct pnp_dev *pnp_d;
- while (ipid->card_vendor) {
- if ((pnp_c = pnp_find_card(ipid->card_vendor,
- ipid->card_device, pnp_c))) {
- pnp_d = NULL;
- if ((pnp_d = pnp_find_dev(pnp_c,
- ipid->vendor, ipid->function, pnp_d))) {
- int err;
-
- printk(KERN_INFO "HiSax: %s detected\n",
- (char *)ipid->driver_data);
- pnp_disable_dev(pnp_d);
- err = pnp_activate_dev(pnp_d);
- if (err < 0) {
- printk(KERN_WARNING "%s: pnp_activate_dev ret(%d)\n",
- __func__, err);
- return (0);
- }
- card->para[1] = pnp_port_start(pnp_d, 0);
- card->para[0] = pnp_irq(pnp_d, 0);
- if (card->para[0] == -1 || !card->para[1]) {
- printk(KERN_ERR "ITK PnP:some resources are missing %ld/%lx\n",
- card->para[0], card->para[1]);
- pnp_disable_dev(pnp_d);
- return (0);
- }
- break;
- } else {
- printk(KERN_ERR "ITK PnP: PnP error card found, no device\n");
- }
- }
- ipid++;
- pnp_c = NULL;
- }
- if (!ipid->card_vendor) {
- printk(KERN_INFO "ITK PnP: no ISAPnP card found\n");
- return (0);
- }
- }
-#endif
- /* IO-Ports */
- cs->hw.ix1.isac_ale = card->para[1] + ISAC_COMMAND_OFFSET;
- cs->hw.ix1.hscx_ale = card->para[1] + HSCX_COMMAND_OFFSET;
- cs->hw.ix1.isac = card->para[1] + ISAC_DATA_OFFSET;
- cs->hw.ix1.hscx = card->para[1] + HSCX_DATA_OFFSET;
- cs->hw.ix1.cfg_reg = card->para[1];
- cs->irq = card->para[0];
- if (cs->hw.ix1.cfg_reg) {
- if (!request_region(cs->hw.ix1.cfg_reg, 4, "ix1micro cfg")) {
- printk(KERN_WARNING
- "HiSax: ITK ix1-micro Rev.2 config port "
- "%x-%x already in use\n",
- cs->hw.ix1.cfg_reg,
- cs->hw.ix1.cfg_reg + 4);
- return (0);
- }
- }
- printk(KERN_INFO "HiSax: ITK ix1-micro Rev.2 config irq:%d io:0x%X\n",
- cs->irq, cs->hw.ix1.cfg_reg);
- setup_isac(cs);
- cs->readisac = &ReadISAC;
- cs->writeisac = &WriteISAC;
- cs->readisacfifo = &ReadISACfifo;
- cs->writeisacfifo = &WriteISACfifo;
- cs->BC_Read_Reg = &ReadHSCX;
- cs->BC_Write_Reg = &WriteHSCX;
- cs->BC_Send_Data = &hscx_fill_fifo;
- cs->cardmsg = &ix1_card_msg;
- cs->irq_func = &ix1micro_interrupt;
- ISACVersion(cs, "ix1-Micro:");
- if (HscxVersion(cs, "ix1-Micro:")) {
- printk(KERN_WARNING
- "ix1-Micro: wrong HSCX versions check IO address\n");
- release_io_ix1micro(cs);
- return (0);
- }
- return (1);
-}
diff --git a/drivers/isdn/hisax/jade.c b/drivers/isdn/hisax/jade.c
deleted file mode 100644
index e2ae7871a209..000000000000
--- a/drivers/isdn/hisax/jade.c
+++ /dev/null
@@ -1,305 +0,0 @@
-/* $Id: jade.c,v 1.9.2.4 2004/01/14 16:04:48 keil Exp $
- *
- * JADE stuff (derived from original hscx.c)
- *
- * Author Roland Klabunde
- * Copyright by Roland Klabunde <R.Klabunde@Berkom.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-
-#include <linux/init.h>
-#include "hisax.h"
-#include "hscx.h"
-#include "jade.h"
-#include "isdnl1.h"
-#include <linux/interrupt.h>
-#include <linux/slab.h>
-
-
-int
-JadeVersion(struct IsdnCardState *cs, char *s)
-{
- int ver;
- int to = 50;
- cs->BC_Write_Reg(cs, -1, 0x50, 0x19);
- while (to) {
- udelay(1);
- ver = cs->BC_Read_Reg(cs, -1, 0x60);
- to--;
- if (ver)
- break;
- if (!to) {
- printk(KERN_INFO "%s JADE version not obtainable\n", s);
- return (0);
- }
- }
- /* Wait for the JADE */
- udelay(10);
- /* Read version */
- ver = cs->BC_Read_Reg(cs, -1, 0x60);
- printk(KERN_INFO "%s JADE version: %d\n", s, ver);
- return (1);
-}
-
-/* Write to indirect accessible jade register set */
-static void
-jade_write_indirect(struct IsdnCardState *cs, u_char reg, u_char value)
-{
- int to = 50;
- u_char ret;
-
- /* Write the data */
- cs->BC_Write_Reg(cs, -1, COMM_JADE + 1, value);
-	/* Tell the JADE we want to write indirect register 'reg' */
- cs->BC_Write_Reg(cs, -1, COMM_JADE, reg);
- to = 50;
-	/* Wait for RDY to go high */
- while (to) {
- udelay(1);
- ret = cs->BC_Read_Reg(cs, -1, COMM_JADE);
- to--;
- if (ret & 1)
- /* Got acknowledge */
- break;
- if (!to) {
-			printk(KERN_INFO "Cannot see ready bit from JADE DSP (reg=0x%X, value=0x%X)\n", reg, value);
- return;
- }
- }
-}
-
-
-
-static void
-modejade(struct BCState *bcs, int mode, int bc)
-{
- struct IsdnCardState *cs = bcs->cs;
- int jade = bcs->hw.hscx.hscx;
-
- if (cs->debug & L1_DEB_HSCX) {
- debugl1(cs, "jade %c mode %d ichan %d", 'A' + jade, mode, bc);
- }
- bcs->mode = mode;
- bcs->channel = bc;
-
- cs->BC_Write_Reg(cs, jade, jade_HDLC_MODE, (mode == L1_MODE_TRANS ? jadeMODE_TMO : 0x00));
- cs->BC_Write_Reg(cs, jade, jade_HDLC_CCR0, (jadeCCR0_PU | jadeCCR0_ITF));
- cs->BC_Write_Reg(cs, jade, jade_HDLC_CCR1, 0x00);
-
- jade_write_indirect(cs, jade_HDLC1SERRXPATH, 0x08);
- jade_write_indirect(cs, jade_HDLC2SERRXPATH, 0x08);
- jade_write_indirect(cs, jade_HDLC1SERTXPATH, 0x00);
- jade_write_indirect(cs, jade_HDLC2SERTXPATH, 0x00);
-
- cs->BC_Write_Reg(cs, jade, jade_HDLC_XCCR, 0x07);
- cs->BC_Write_Reg(cs, jade, jade_HDLC_RCCR, 0x07);
-
- if (bc == 0) {
- cs->BC_Write_Reg(cs, jade, jade_HDLC_TSAX, 0x00);
- cs->BC_Write_Reg(cs, jade, jade_HDLC_TSAR, 0x00);
- } else {
- cs->BC_Write_Reg(cs, jade, jade_HDLC_TSAX, 0x04);
- cs->BC_Write_Reg(cs, jade, jade_HDLC_TSAR, 0x04);
- }
- switch (mode) {
- case (L1_MODE_NULL):
- cs->BC_Write_Reg(cs, jade, jade_HDLC_MODE, jadeMODE_TMO);
- break;
- case (L1_MODE_TRANS):
- cs->BC_Write_Reg(cs, jade, jade_HDLC_MODE, (jadeMODE_TMO | jadeMODE_RAC | jadeMODE_XAC));
- break;
- case (L1_MODE_HDLC):
- cs->BC_Write_Reg(cs, jade, jade_HDLC_MODE, (jadeMODE_RAC | jadeMODE_XAC));
- break;
- }
- if (mode) {
- cs->BC_Write_Reg(cs, jade, jade_HDLC_RCMD, (jadeRCMD_RRES | jadeRCMD_RMC));
- cs->BC_Write_Reg(cs, jade, jade_HDLC_XCMD, jadeXCMD_XRES);
- /* Unmask ints */
- cs->BC_Write_Reg(cs, jade, jade_HDLC_IMR, 0xF8);
- }
- else
- /* Mask ints */
- cs->BC_Write_Reg(cs, jade, jade_HDLC_IMR, 0x00);
-}
-
-static void
-jade_l2l1(struct PStack *st, int pr, void *arg)
-{
- struct BCState *bcs = st->l1.bcs;
- struct sk_buff *skb = arg;
- u_long flags;
-
- switch (pr) {
- case (PH_DATA | REQUEST):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- if (bcs->tx_skb) {
- skb_queue_tail(&bcs->squeue, skb);
- } else {
- bcs->tx_skb = skb;
- test_and_set_bit(BC_FLG_BUSY, &bcs->Flag);
- bcs->hw.hscx.count = 0;
- bcs->cs->BC_Send_Data(bcs);
- }
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- break;
- case (PH_PULL | INDICATION):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- if (bcs->tx_skb) {
- printk(KERN_WARNING "jade_l2l1: this shouldn't happen\n");
- } else {
- test_and_set_bit(BC_FLG_BUSY, &bcs->Flag);
- bcs->tx_skb = skb;
- bcs->hw.hscx.count = 0;
- bcs->cs->BC_Send_Data(bcs);
- }
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- break;
- case (PH_PULL | REQUEST):
- if (!bcs->tx_skb) {
- test_and_clear_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- st->l1.l1l2(st, PH_PULL | CONFIRM, NULL);
- } else
- test_and_set_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- break;
- case (PH_ACTIVATE | REQUEST):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- test_and_set_bit(BC_FLG_ACTIV, &bcs->Flag);
- modejade(bcs, st->l1.mode, st->l1.bc);
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- l1_msg_b(st, pr, arg);
- break;
- case (PH_DEACTIVATE | REQUEST):
- l1_msg_b(st, pr, arg);
- break;
- case (PH_DEACTIVATE | CONFIRM):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- test_and_clear_bit(BC_FLG_ACTIV, &bcs->Flag);
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- modejade(bcs, 0, st->l1.bc);
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- st->l1.l1l2(st, PH_DEACTIVATE | CONFIRM, NULL);
- break;
- }
-}
-
-static void
-close_jadestate(struct BCState *bcs)
-{
- modejade(bcs, 0, bcs->channel);
- if (test_and_clear_bit(BC_FLG_INIT, &bcs->Flag)) {
- kfree(bcs->hw.hscx.rcvbuf);
- bcs->hw.hscx.rcvbuf = NULL;
- kfree(bcs->blog);
- bcs->blog = NULL;
- skb_queue_purge(&bcs->rqueue);
- skb_queue_purge(&bcs->squeue);
- if (bcs->tx_skb) {
- dev_kfree_skb_any(bcs->tx_skb);
- bcs->tx_skb = NULL;
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- }
- }
-}
-
-static int
-open_jadestate(struct IsdnCardState *cs, struct BCState *bcs)
-{
- if (!test_and_set_bit(BC_FLG_INIT, &bcs->Flag)) {
- if (!(bcs->hw.hscx.rcvbuf = kmalloc(HSCX_BUFMAX, GFP_ATOMIC))) {
- printk(KERN_WARNING
- "HiSax: No memory for hscx.rcvbuf\n");
- test_and_clear_bit(BC_FLG_INIT, &bcs->Flag);
- return (1);
- }
- if (!(bcs->blog = kmalloc(MAX_BLOG_SPACE, GFP_ATOMIC))) {
- printk(KERN_WARNING
- "HiSax: No memory for bcs->blog\n");
- test_and_clear_bit(BC_FLG_INIT, &bcs->Flag);
- kfree(bcs->hw.hscx.rcvbuf);
- bcs->hw.hscx.rcvbuf = NULL;
- return (2);
- }
- skb_queue_head_init(&bcs->rqueue);
- skb_queue_head_init(&bcs->squeue);
- }
- bcs->tx_skb = NULL;
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- bcs->event = 0;
- bcs->hw.hscx.rcvidx = 0;
- bcs->tx_cnt = 0;
- return (0);
-}
-
-
-static int
-setstack_jade(struct PStack *st, struct BCState *bcs)
-{
- bcs->channel = st->l1.bc;
- if (open_jadestate(st->l1.hardware, bcs))
- return (-1);
- st->l1.bcs = bcs;
- st->l2.l2l1 = jade_l2l1;
- setstack_manager(st);
- bcs->st = st;
- setstack_l1_B(st);
- return (0);
-}
-
-void
-clear_pending_jade_ints(struct IsdnCardState *cs)
-{
- int val;
-
- cs->BC_Write_Reg(cs, 0, jade_HDLC_IMR, 0x00);
- cs->BC_Write_Reg(cs, 1, jade_HDLC_IMR, 0x00);
-
- val = cs->BC_Read_Reg(cs, 1, jade_HDLC_ISR);
- debugl1(cs, "jade B ISTA %x", val);
- val = cs->BC_Read_Reg(cs, 0, jade_HDLC_ISR);
- debugl1(cs, "jade A ISTA %x", val);
- val = cs->BC_Read_Reg(cs, 1, jade_HDLC_STAR);
- debugl1(cs, "jade B STAR %x", val);
- val = cs->BC_Read_Reg(cs, 0, jade_HDLC_STAR);
- debugl1(cs, "jade A STAR %x", val);
- /* Unmask ints */
- cs->BC_Write_Reg(cs, 0, jade_HDLC_IMR, 0xF8);
- cs->BC_Write_Reg(cs, 1, jade_HDLC_IMR, 0xF8);
-}
-
-void
-initjade(struct IsdnCardState *cs)
-{
- cs->bcs[0].BC_SetStack = setstack_jade;
- cs->bcs[1].BC_SetStack = setstack_jade;
- cs->bcs[0].BC_Close = close_jadestate;
- cs->bcs[1].BC_Close = close_jadestate;
- cs->bcs[0].hw.hscx.hscx = 0;
- cs->bcs[1].hw.hscx.hscx = 1;
-
- /* Stop DSP audio tx/rx */
- jade_write_indirect(cs, 0x11, 0x0f);
- jade_write_indirect(cs, 0x17, 0x2f);
-
- /* Transparent Mode, RxTx inactive, No Test, No RFS/TFS */
- cs->BC_Write_Reg(cs, 0, jade_HDLC_MODE, jadeMODE_TMO);
- cs->BC_Write_Reg(cs, 1, jade_HDLC_MODE, jadeMODE_TMO);
- /* Power down, 1-Idle, RxTx least significant bit first */
- cs->BC_Write_Reg(cs, 0, jade_HDLC_CCR0, 0x00);
- cs->BC_Write_Reg(cs, 1, jade_HDLC_CCR0, 0x00);
- /* Mask all interrupts */
- cs->BC_Write_Reg(cs, 0, jade_HDLC_IMR, 0x00);
- cs->BC_Write_Reg(cs, 1, jade_HDLC_IMR, 0x00);
- /* Setup host access to hdlc controller */
- jade_write_indirect(cs, jade_HDLCCNTRACCESS, (jadeINDIRECT_HAH1 | jadeINDIRECT_HAH2));
-	/* Unmask HDLC int (don't forget DSP int later on) */
- cs->BC_Write_Reg(cs, -1, jade_INT, (jadeINT_HDLC1 | jadeINT_HDLC2));
-
- /* once again TRANSPARENT */
- modejade(cs->bcs, 0, 0);
- modejade(cs->bcs + 1, 0, 0);
-}
diff --git a/drivers/isdn/hisax/jade.h b/drivers/isdn/hisax/jade.h
deleted file mode 100644
index 4b98096a5858..000000000000
--- a/drivers/isdn/hisax/jade.h
+++ /dev/null
@@ -1,134 +0,0 @@
-/* $Id: jade.h,v 1.5.2.3 2004/01/14 16:04:48 keil Exp $
- *
- * JADE specific defines
- *
- * Author Roland Klabunde
- * Copyright by Roland Klabunde <R.Klabunde@Berkom.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-/* All Registers original Siemens Spec */
-#ifndef __JADE_H__
-#define __JADE_H__
-
-/* Special registers for access to indirect accessible JADE regs */
-#define DIRECT_IO_JADE 0x0000 /* Jade direct io access area */
-#define COMM_JADE 0x0040 /* Jade communication area */
-
-/********************************************************************/
-/* JADE-HDLC registers */
-/********************************************************************/
-#define jade_HDLC_RFIFO 0x00 /* R */
-#define jade_HDLC_XFIFO 0x00 /* W */
-
-#define jade_HDLC_STAR 0x20 /* R */
-#define jadeSTAR_XDOV 0x80
-#define jadeSTAR_XFW 0x40 /* Does not work*/
-#define jadeSTAR_XCEC 0x20
-#define jadeSTAR_RCEC 0x10
-#define jadeSTAR_BSY 0x08
-#define jadeSTAR_RNA 0x04
-#define jadeSTAR_STR 0x02
-#define jadeSTAR_STX 0x01
-
-#define jade_HDLC_XCMD 0x20 /* W */
-#define jadeXCMD_XF 0x80
-#define jadeXCMD_XME 0x40
-#define jadeXCMD_XRES 0x20
-#define jadeXCMD_STX 0x01
-
-#define jade_HDLC_RSTA 0x21 /* R */
-#define jadeRSTA_VFR 0x80
-#define jadeRSTA_RDO 0x40
-#define jadeRSTA_CRC 0x20
-#define jadeRSTA_RAB 0x10
-#define jadeRSTA_MASK 0xF0
-
-#define jade_HDLC_MODE 0x22 /* RW*/
-#define jadeMODE_TMO 0x80
-#define jadeMODE_RAC 0x40
-#define jadeMODE_XAC 0x20
-#define jadeMODE_TLP 0x10
-#define jadeMODE_ERFS 0x02
-#define jadeMODE_ETFS 0x01
-
-#define jade_HDLC_RBCH 0x24 /* R */
-
-#define jade_HDLC_RBCL 0x25 /* R */
-#define jade_HDLC_RCMD 0x25 /* W */
-#define jadeRCMD_RMC 0x80
-#define jadeRCMD_RRES 0x40
-#define jadeRCMD_RMD 0x20
-#define jadeRCMD_STR 0x02
-
-#define jade_HDLC_CCR0 0x26 /* RW*/
-#define jadeCCR0_PU 0x80
-#define jadeCCR0_ITF 0x40
-#define jadeCCR0_C32 0x20
-#define jadeCCR0_CRL 0x10
-#define jadeCCR0_RCRC 0x08
-#define jadeCCR0_XCRC 0x04
-#define jadeCCR0_RMSB 0x02
-#define jadeCCR0_XMSB 0x01
-
-#define jade_HDLC_CCR1 0x27 /* RW*/
-#define jadeCCR1_RCS0 0x80
-#define jadeCCR1_RCONT 0x40
-#define jadeCCR1_RFDIS 0x20
-#define jadeCCR1_XCS0 0x10
-#define jadeCCR1_XCONT 0x08
-#define jadeCCR1_XFDIS 0x04
-
-#define jade_HDLC_TSAR 0x28 /* RW*/
-#define jade_HDLC_TSAX 0x29 /* RW*/
-#define jade_HDLC_RCCR 0x2A /* RW*/
-#define jade_HDLC_XCCR 0x2B /* RW*/
-
-#define jade_HDLC_ISR 0x2C /* R */
-#define jade_HDLC_IMR 0x2C /* W */
-#define jadeISR_RME 0x80
-#define jadeISR_RPF 0x40
-#define jadeISR_RFO 0x20
-#define jadeISR_XPR 0x10
-#define jadeISR_XDU 0x08
-#define jadeISR_ALLS 0x04
-
-#define jade_INT 0x75
-#define jadeINT_HDLC1 0x02
-#define jadeINT_HDLC2 0x01
-#define jadeINT_DSP 0x04
-#define jade_INTR 0x70
-
-/********************************************************************/
-/* Indirect accessible JADE registers of common interest */
-/********************************************************************/
-#define jade_CHIPVERSIONNR 0x00 /* Does not work*/
-
-#define jade_HDLCCNTRACCESS 0x10
-#define jadeINDIRECT_HAH1 0x02
-#define jadeINDIRECT_HAH2 0x01
-
-#define jade_HDLC1SERRXPATH 0x1D
-#define jade_HDLC1SERTXPATH 0x1E
-#define jade_HDLC2SERRXPATH 0x1F
-#define jade_HDLC2SERTXPATH 0x20
-#define jadeINDIRECT_SLIN1 0x10
-#define jadeINDIRECT_SLIN0 0x08
-#define jadeINDIRECT_LMOD1 0x04
-#define jadeINDIRECT_LMOD0 0x02
-#define jadeINDIRECT_HHR 0x01
-#define jadeINDIRECT_HHX 0x01
-
-#define jade_RXAUDIOCH1CFG 0x11
-#define jade_RXAUDIOCH2CFG 0x14
-#define jade_TXAUDIOCH1CFG 0x17
-#define jade_TXAUDIOCH2CFG 0x1A
-
-extern int JadeVersion(struct IsdnCardState *cs, char *s);
-extern void clear_pending_jade_ints(struct IsdnCardState *cs);
-extern void initjade(struct IsdnCardState *cs);
-
-#endif /* __JADE_H__ */
diff --git a/drivers/isdn/hisax/jade_irq.c b/drivers/isdn/hisax/jade_irq.c
deleted file mode 100644
index a89e2df911c5..000000000000
--- a/drivers/isdn/hisax/jade_irq.c
+++ /dev/null
@@ -1,238 +0,0 @@
-/* $Id: jade_irq.c,v 1.7.2.4 2004/02/11 13:21:34 keil Exp $
- *
- * Low level JADE IRQ stuff (derived from original hscx_irq.c)
- *
- * Author Roland Klabunde
- * Copyright by Roland Klabunde <R.Klabunde@Berkom.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-static inline void
-waitforCEC(struct IsdnCardState *cs, int jade, int reg)
-{
- int to = 50;
- int mask = (reg == jade_HDLC_XCMD ? jadeSTAR_XCEC : jadeSTAR_RCEC);
- while ((READJADE(cs, jade, jade_HDLC_STAR) & mask) && to) {
- udelay(1);
- to--;
- }
- if (!to)
- printk(KERN_WARNING "HiSax: waitforCEC (jade) timeout\n");
-}
-
-
-static inline void
-waitforXFW(struct IsdnCardState *cs, int jade)
-{
- /* Does not work on older jade versions, don't care */
-}
-
-static inline void
-WriteJADECMDR(struct IsdnCardState *cs, int jade, int reg, u_char data)
-{
- waitforCEC(cs, jade, reg);
- WRITEJADE(cs, jade, reg, data);
-}
-
-
-
-static void
-jade_empty_fifo(struct BCState *bcs, int count)
-{
- u_char *ptr;
- struct IsdnCardState *cs = bcs->cs;
-
- if ((cs->debug & L1_DEB_HSCX) && !(cs->debug & L1_DEB_HSCX_FIFO))
- debugl1(cs, "jade_empty_fifo");
-
- if (bcs->hw.hscx.rcvidx + count > HSCX_BUFMAX) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "jade_empty_fifo: incoming packet too large");
- WriteJADECMDR(cs, bcs->hw.hscx.hscx, jade_HDLC_RCMD, jadeRCMD_RMC);
- bcs->hw.hscx.rcvidx = 0;
- return;
- }
- ptr = bcs->hw.hscx.rcvbuf + bcs->hw.hscx.rcvidx;
- bcs->hw.hscx.rcvidx += count;
- READJADEFIFO(cs, bcs->hw.hscx.hscx, ptr, count);
- WriteJADECMDR(cs, bcs->hw.hscx.hscx, jade_HDLC_RCMD, jadeRCMD_RMC);
- if (cs->debug & L1_DEB_HSCX_FIFO) {
- char *t = bcs->blog;
-
- t += sprintf(t, "jade_empty_fifo %c cnt %d",
- bcs->hw.hscx.hscx ? 'B' : 'A', count);
- QuickHex(t, ptr, count);
- debugl1(cs, "%s", bcs->blog);
- }
-}
-
-static void
-jade_fill_fifo(struct BCState *bcs)
-{
- struct IsdnCardState *cs = bcs->cs;
- int more, count;
- int fifo_size = 32;
- u_char *ptr;
-
- if ((cs->debug & L1_DEB_HSCX) && !(cs->debug & L1_DEB_HSCX_FIFO))
- debugl1(cs, "jade_fill_fifo");
-
- if (!bcs->tx_skb)
- return;
- if (bcs->tx_skb->len <= 0)
- return;
-
- more = (bcs->mode == L1_MODE_TRANS) ? 1 : 0;
- if (bcs->tx_skb->len > fifo_size) {
- more = !0;
- count = fifo_size;
- } else
- count = bcs->tx_skb->len;
-
- waitforXFW(cs, bcs->hw.hscx.hscx);
- ptr = bcs->tx_skb->data;
- skb_pull(bcs->tx_skb, count);
- bcs->tx_cnt -= count;
- bcs->hw.hscx.count += count;
- WRITEJADEFIFO(cs, bcs->hw.hscx.hscx, ptr, count);
- WriteJADECMDR(cs, bcs->hw.hscx.hscx, jade_HDLC_XCMD, more ? jadeXCMD_XF : (jadeXCMD_XF | jadeXCMD_XME));
- if (cs->debug & L1_DEB_HSCX_FIFO) {
- char *t = bcs->blog;
-
- t += sprintf(t, "jade_fill_fifo %c cnt %d",
- bcs->hw.hscx.hscx ? 'B' : 'A', count);
- QuickHex(t, ptr, count);
- debugl1(cs, "%s", bcs->blog);
- }
-}
-
-
-static void
-jade_interrupt(struct IsdnCardState *cs, u_char val, u_char jade)
-{
- u_char r;
- struct BCState *bcs = cs->bcs + jade;
- struct sk_buff *skb;
- int fifo_size = 32;
- int count;
- int i_jade = (int) jade; /* To satisfy the compiler */
-
- if (!test_bit(BC_FLG_INIT, &bcs->Flag))
- return;
-
- if (val & 0x80) { /* RME */
- r = READJADE(cs, i_jade, jade_HDLC_RSTA);
- if ((r & 0xf0) != 0xa0) {
- if (!(r & 0x80))
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "JADE %s invalid frame", (jade ? "B" : "A"));
- if ((r & 0x40) && bcs->mode)
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "JADE %c RDO mode=%d", 'A' + jade, bcs->mode);
- if (!(r & 0x20))
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "JADE %c CRC error", 'A' + jade);
- WriteJADECMDR(cs, jade, jade_HDLC_RCMD, jadeRCMD_RMC);
- } else {
- count = READJADE(cs, i_jade, jade_HDLC_RBCL) & 0x1F;
- if (count == 0)
- count = fifo_size;
- jade_empty_fifo(bcs, count);
- if ((count = bcs->hw.hscx.rcvidx - 1) > 0) {
- if (cs->debug & L1_DEB_HSCX_FIFO)
- debugl1(cs, "HX Frame %d", count);
- if (!(skb = dev_alloc_skb(count)))
- printk(KERN_WARNING "JADE %s receive out of memory\n", (jade ? "B" : "A"));
- else {
- skb_put_data(skb, bcs->hw.hscx.rcvbuf,
- count);
- skb_queue_tail(&bcs->rqueue, skb);
- }
- }
- }
- bcs->hw.hscx.rcvidx = 0;
- schedule_event(bcs, B_RCVBUFREADY);
- }
- if (val & 0x40) { /* RPF */
- jade_empty_fifo(bcs, fifo_size);
- if (bcs->mode == L1_MODE_TRANS) {
- /* receive audio data */
- if (!(skb = dev_alloc_skb(fifo_size)))
- printk(KERN_WARNING "HiSax: receive out of memory\n");
- else {
- skb_put_data(skb, bcs->hw.hscx.rcvbuf,
- fifo_size);
- skb_queue_tail(&bcs->rqueue, skb);
- }
- bcs->hw.hscx.rcvidx = 0;
- schedule_event(bcs, B_RCVBUFREADY);
- }
- }
- if (val & 0x10) { /* XPR */
- if (bcs->tx_skb) {
- if (bcs->tx_skb->len) {
- jade_fill_fifo(bcs);
- return;
- } else {
- if (test_bit(FLG_LLI_L1WAKEUP, &bcs->st->lli.flag) &&
- (PACKET_NOACK != bcs->tx_skb->pkt_type)) {
- u_long flags;
- spin_lock_irqsave(&bcs->aclock, flags);
- bcs->ackcnt += bcs->hw.hscx.count;
- spin_unlock_irqrestore(&bcs->aclock, flags);
- schedule_event(bcs, B_ACKPENDING);
- }
- dev_kfree_skb_irq(bcs->tx_skb);
- bcs->hw.hscx.count = 0;
- bcs->tx_skb = NULL;
- }
- }
- if ((bcs->tx_skb = skb_dequeue(&bcs->squeue))) {
- bcs->hw.hscx.count = 0;
- test_and_set_bit(BC_FLG_BUSY, &bcs->Flag);
- jade_fill_fifo(bcs);
- } else {
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- schedule_event(bcs, B_XMTBUFREADY);
- }
- }
-}
-
-static inline void
-jade_int_main(struct IsdnCardState *cs, u_char val, int jade)
-{
- struct BCState *bcs;
- bcs = cs->bcs + jade;
-
- if (val & jadeISR_RFO) {
- /* handled with RDO */
- val &= ~jadeISR_RFO;
- }
- if (val & jadeISR_XDU) {
- /* relevant in HDLC mode only */
- /* don't reset XPR here */
- if (bcs->mode == 1)
- jade_fill_fifo(bcs);
- else {
-			/* Here we lost a TX interrupt, so
- * restart transmitting the whole frame.
- */
- if (bcs->tx_skb) {
- skb_push(bcs->tx_skb, bcs->hw.hscx.count);
- bcs->tx_cnt += bcs->hw.hscx.count;
- bcs->hw.hscx.count = 0;
- }
- WriteJADECMDR(cs, bcs->hw.hscx.hscx, jade_HDLC_XCMD, jadeXCMD_XRES);
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "JADE %c EXIR %x Lost TX", 'A' + jade, val);
- }
- }
- if (val & (jadeISR_RME | jadeISR_RPF | jadeISR_XPR)) {
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "JADE %c interrupt %x", 'A' + jade, val);
- jade_interrupt(cs, val, jade);
- }
-}
diff --git a/drivers/isdn/hisax/l3_1tr6.c b/drivers/isdn/hisax/l3_1tr6.c
deleted file mode 100644
index 98f60d1523f4..000000000000
--- a/drivers/isdn/hisax/l3_1tr6.c
+++ /dev/null
@@ -1,932 +0,0 @@
-/* $Id: l3_1tr6.c,v 2.15.2.3 2004/01/13 14:31:25 keil Exp $
- *
- * German 1TR6 D-channel protocol
- *
- * Author Karsten Keil
- * Copyright by Karsten Keil <keil@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- * For changes and modifications please read
- * Documentation/isdn/HiSax.cert
- *
- */
-
-#include "hisax.h"
-#include "l3_1tr6.h"
-#include "isdnl3.h"
-#include <linux/ctype.h>
-
-extern char *HiSax_getrev(const char *revision);
-static const char *l3_1tr6_revision = "$Revision: 2.15.2.3 $";
-
-#define MsgHead(ptr, cref, mty, dis) \
- *ptr++ = dis; \
- *ptr++ = 0x1; \
- *ptr++ = cref ^ 0x80; \
- *ptr++ = mty
-
-static void
-l3_1TR6_message(struct l3_process *pc, u_char mt, u_char pd)
-{
- struct sk_buff *skb;
- u_char *p;
-
- if (!(skb = l3_alloc_skb(4)))
- return;
- p = skb_put(skb, 4);
- MsgHead(p, pc->callref, mt, pd);
- l3_msg(pc->st, DL_DATA | REQUEST, skb);
-}
-
-static void
-l3_1tr6_release_req(struct l3_process *pc, u_char pr, void *arg)
-{
- StopAllL3Timer(pc);
- newl3state(pc, 19);
- l3_1TR6_message(pc, MT_N1_REL, PROTO_DIS_N1);
- L3AddTimer(&pc->timer, T308, CC_T308_1);
-}
-
-static void
-l3_1tr6_invalid(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb = arg;
-
- dev_kfree_skb(skb);
- l3_1tr6_release_req(pc, 0, NULL);
-}
-
-static void
-l3_1tr6_error(struct l3_process *pc, u_char *msg, struct sk_buff *skb)
-{
- dev_kfree_skb(skb);
- if (pc->st->l3.debug & L3_DEB_WARN)
- l3_debug(pc->st, "%s", msg);
- l3_1tr6_release_req(pc, 0, NULL);
-}
-
-static void
-l3_1tr6_setup_req(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb;
- u_char tmp[128];
- u_char *p = tmp;
- u_char *teln;
- u_char *eaz;
- u_char channel = 0;
- int l;
-
- MsgHead(p, pc->callref, MT_N1_SETUP, PROTO_DIS_N1);
- teln = pc->para.setup.phone;
- pc->para.spv = 0;
- if (!isdigit(*teln)) {
- switch (0x5f & *teln) {
- case 'S':
- pc->para.spv = 1;
- break;
- case 'C':
- channel = 0x08;
- /* fall through */
- case 'P':
- channel |= 0x80;
- teln++;
- if (*teln == '1')
- channel |= 0x01;
- else
- channel |= 0x02;
- break;
- default:
- if (pc->st->l3.debug & L3_DEB_WARN)
- l3_debug(pc->st, "Wrong MSN Code");
- break;
- }
- teln++;
- }
- if (channel) {
- *p++ = 0x18; /* channel indicator */
- *p++ = 1;
- *p++ = channel;
- }
- if (pc->para.spv) { /* SPV ? */
- /* NSF SPV */
- *p++ = WE0_netSpecFac;
-		*p++ = 4;	/* length */
- *p++ = 0;
- *p++ = FAC_SPV; /* SPV */
- *p++ = pc->para.setup.si1; /* 0 for all Services */
- *p++ = pc->para.setup.si2; /* 0 for all Services */
- *p++ = WE0_netSpecFac;
-		*p++ = 4;	/* length */
- *p++ = 0;
-		*p++ = FAC_Activate;	/* activate SPV (default) */
- *p++ = pc->para.setup.si1; /* 0 for all Services */
- *p++ = pc->para.setup.si2; /* 0 for all Services */
- }
- eaz = pc->para.setup.eazmsn;
- if (*eaz) {
- *p++ = WE0_origAddr;
- *p++ = strlen(eaz) + 1;
- /* Classify as AnyPref. */
- *p++ = 0x81; /* Ext = '1'B, Type = '000'B, Plan = '0001'B. */
- while (*eaz)
- *p++ = *eaz++ & 0x7f;
- }
- *p++ = WE0_destAddr;
- *p++ = strlen(teln) + 1;
- /* Classify as AnyPref. */
- *p++ = 0x81; /* Ext = '1'B, Type = '000'B, Plan = '0001'B. */
- while (*teln)
- *p++ = *teln++ & 0x7f;
-
- *p++ = WE_Shift_F6;
-	/* codeset 6 for the service indicator */
- *p++ = WE6_serviceInd;
- *p++ = 2; /* len=2 info,info2 */
- *p++ = pc->para.setup.si1;
- *p++ = pc->para.setup.si2;
-
- l = p - tmp;
- if (!(skb = l3_alloc_skb(l)))
- return;
- skb_put_data(skb, tmp, l);
- L3DelTimer(&pc->timer);
- L3AddTimer(&pc->timer, T303, CC_T303);
- newl3state(pc, 1);
- l3_msg(pc->st, DL_DATA | REQUEST, skb);
-}
-
-static void
-l3_1tr6_setup(struct l3_process *pc, u_char pr, void *arg)
-{
- u_char *p;
- int bcfound = 0;
- struct sk_buff *skb = arg;
-
- /* Channel Identification */
- p = findie(skb->data, skb->len, WE0_chanID, 0);
- if (p) {
- if (p[1] != 1) {
- l3_1tr6_error(pc, "setup wrong chanID len", skb);
- return;
- }
- if ((p[2] & 0xf4) != 0x80) {
- l3_1tr6_error(pc, "setup wrong WE0_chanID", skb);
- return;
- }
- if ((pc->para.bchannel = p[2] & 0x3))
- bcfound++;
- } else {
- l3_1tr6_error(pc, "missing setup chanID", skb);
- return;
- }
-
- p = skb->data;
- if ((p = findie(p, skb->len, WE6_serviceInd, 6))) {
- pc->para.setup.si1 = p[2];
- pc->para.setup.si2 = p[3];
- } else {
- l3_1tr6_error(pc, "missing setup SI", skb);
- return;
- }
-
- p = skb->data;
- if ((p = findie(p, skb->len, WE0_destAddr, 0)))
- iecpy(pc->para.setup.eazmsn, p, 1);
- else
- pc->para.setup.eazmsn[0] = 0;
-
- p = skb->data;
- if ((p = findie(p, skb->len, WE0_origAddr, 0))) {
- iecpy(pc->para.setup.phone, p, 1);
- } else
- pc->para.setup.phone[0] = 0;
-
- p = skb->data;
- pc->para.spv = 0;
- if ((p = findie(p, skb->len, WE0_netSpecFac, 0))) {
- if ((FAC_SPV == p[3]) || (FAC_Activate == p[3]))
- pc->para.spv = 1;
- }
- dev_kfree_skb(skb);
-
- /* Signal all services, linklevel takes care of Service-Indicator */
- if (bcfound) {
- if ((pc->para.setup.si1 != 7) && (pc->st->l3.debug & L3_DEB_WARN)) {
- l3_debug(pc->st, "non-digital call: %s -> %s",
- pc->para.setup.phone,
- pc->para.setup.eazmsn);
- }
- newl3state(pc, 6);
- pc->st->l3.l3l4(pc->st, CC_SETUP | INDICATION, pc);
- } else
- release_l3_process(pc);
-}
-
-static void
-l3_1tr6_setup_ack(struct l3_process *pc, u_char pr, void *arg)
-{
- u_char *p;
- struct sk_buff *skb = arg;
-
- L3DelTimer(&pc->timer);
- p = skb->data;
- newl3state(pc, 2);
- if ((p = findie(p, skb->len, WE0_chanID, 0))) {
- if (p[1] != 1) {
- l3_1tr6_error(pc, "setup_ack wrong chanID len", skb);
- return;
- }
- if ((p[2] & 0xf4) != 0x80) {
- l3_1tr6_error(pc, "setup_ack wrong WE0_chanID", skb);
- return;
- }
- pc->para.bchannel = p[2] & 0x3;
- } else {
- l3_1tr6_error(pc, "missing setup_ack WE0_chanID", skb);
- return;
- }
- dev_kfree_skb(skb);
- L3AddTimer(&pc->timer, T304, CC_T304);
- pc->st->l3.l3l4(pc->st, CC_MORE_INFO | INDICATION, pc);
-}
-
-static void
-l3_1tr6_call_sent(struct l3_process *pc, u_char pr, void *arg)
-{
- u_char *p;
- struct sk_buff *skb = arg;
-
- L3DelTimer(&pc->timer);
- p = skb->data;
- if ((p = findie(p, skb->len, WE0_chanID, 0))) {
- if (p[1] != 1) {
- l3_1tr6_error(pc, "call sent wrong chanID len", skb);
- return;
- }
- if ((p[2] & 0xf4) != 0x80) {
- l3_1tr6_error(pc, "call sent wrong WE0_chanID", skb);
- return;
- }
- if ((pc->state == 2) && (pc->para.bchannel != (p[2] & 0x3))) {
- l3_1tr6_error(pc, "call sent wrong chanID value", skb);
- return;
- }
- pc->para.bchannel = p[2] & 0x3;
- } else {
- l3_1tr6_error(pc, "missing call sent WE0_chanID", skb);
- return;
- }
- dev_kfree_skb(skb);
- L3AddTimer(&pc->timer, T310, CC_T310);
- newl3state(pc, 3);
- pc->st->l3.l3l4(pc->st, CC_PROCEEDING | INDICATION, pc);
-}
-
-static void
-l3_1tr6_alert(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb = arg;
-
- dev_kfree_skb(skb);
- L3DelTimer(&pc->timer); /* T304 */
- newl3state(pc, 4);
- pc->st->l3.l3l4(pc->st, CC_ALERTING | INDICATION, pc);
-}
-
-static void
-l3_1tr6_info(struct l3_process *pc, u_char pr, void *arg)
-{
- u_char *p;
- int i, tmpcharge = 0;
- char a_charge[8];
- struct sk_buff *skb = arg;
-
- p = skb->data;
- if ((p = findie(p, skb->len, WE6_chargingInfo, 6))) {
- iecpy(a_charge, p, 1);
- for (i = 0; i < strlen(a_charge); i++) {
- tmpcharge *= 10;
- tmpcharge += a_charge[i] & 0xf;
- }
- if (tmpcharge > pc->para.chargeinfo) {
- pc->para.chargeinfo = tmpcharge;
- pc->st->l3.l3l4(pc->st, CC_CHARGE | INDICATION, pc);
- }
- if (pc->st->l3.debug & L3_DEB_CHARGE) {
- l3_debug(pc->st, "charging info %d",
- pc->para.chargeinfo);
- }
- } else if (pc->st->l3.debug & L3_DEB_CHARGE)
- l3_debug(pc->st, "charging info not found");
- dev_kfree_skb(skb);
-
-}
-
-static void
-l3_1tr6_info_s2(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb = arg;
-
- dev_kfree_skb(skb);
-}
-
-static void
-l3_1tr6_connect(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb = arg;
-
- L3DelTimer(&pc->timer); /* T310 */
- if (!findie(skb->data, skb->len, WE6_date, 6)) {
- l3_1tr6_error(pc, "missing connect date", skb);
- return;
- }
- newl3state(pc, 10);
- dev_kfree_skb(skb);
- pc->para.chargeinfo = 0;
- pc->st->l3.l3l4(pc->st, CC_SETUP | CONFIRM, pc);
-}
-
-static void
-l3_1tr6_rel(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb = arg;
- u_char *p;
-
- p = skb->data;
- if ((p = findie(p, skb->len, WE0_cause, 0))) {
- if (p[1] > 0) {
- pc->para.cause = p[2];
- if (p[1] > 1)
- pc->para.loc = p[3];
- else
- pc->para.loc = 0;
- } else {
- pc->para.cause = 0;
- pc->para.loc = 0;
- }
- } else {
- pc->para.cause = NO_CAUSE;
- l3_1tr6_error(pc, "missing REL cause", skb);
- return;
- }
- dev_kfree_skb(skb);
- StopAllL3Timer(pc);
- newl3state(pc, 0);
- l3_1TR6_message(pc, MT_N1_REL_ACK, PROTO_DIS_N1);
- pc->st->l3.l3l4(pc->st, CC_RELEASE | INDICATION, pc);
- release_l3_process(pc);
-}
-
-static void
-l3_1tr6_rel_ack(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb = arg;
-
- dev_kfree_skb(skb);
- StopAllL3Timer(pc);
- newl3state(pc, 0);
- pc->para.cause = NO_CAUSE;
- pc->st->l3.l3l4(pc->st, CC_RELEASE | CONFIRM, pc);
- release_l3_process(pc);
-}
-
-static void
-l3_1tr6_disc(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb = arg;
- u_char *p;
- int i, tmpcharge = 0;
- char a_charge[8];
-
- StopAllL3Timer(pc);
- p = skb->data;
- if ((p = findie(p, skb->len, WE6_chargingInfo, 6))) {
- iecpy(a_charge, p, 1);
- for (i = 0; i < strlen(a_charge); i++) {
- tmpcharge *= 10;
- tmpcharge += a_charge[i] & 0xf;
- }
- if (tmpcharge > pc->para.chargeinfo) {
- pc->para.chargeinfo = tmpcharge;
- pc->st->l3.l3l4(pc->st, CC_CHARGE | INDICATION, pc);
- }
- if (pc->st->l3.debug & L3_DEB_CHARGE) {
- l3_debug(pc->st, "charging info %d",
- pc->para.chargeinfo);
- }
- } else if (pc->st->l3.debug & L3_DEB_CHARGE)
- l3_debug(pc->st, "charging info not found");
-
-
- p = skb->data;
- if ((p = findie(p, skb->len, WE0_cause, 0))) {
- if (p[1] > 0) {
- pc->para.cause = p[2];
- if (p[1] > 1)
- pc->para.loc = p[3];
- else
- pc->para.loc = 0;
- } else {
- pc->para.cause = 0;
- pc->para.loc = 0;
- }
- } else {
- if (pc->st->l3.debug & L3_DEB_WARN)
- l3_debug(pc->st, "cause not found");
- pc->para.cause = NO_CAUSE;
- }
- if (!findie(skb->data, skb->len, WE6_date, 6)) {
- l3_1tr6_error(pc, "missing connack date", skb);
- return;
- }
- dev_kfree_skb(skb);
- newl3state(pc, 12);
- pc->st->l3.l3l4(pc->st, CC_DISCONNECT | INDICATION, pc);
-}
-
-
-static void
-l3_1tr6_connect_ack(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb = arg;
-
- if (!findie(skb->data, skb->len, WE6_date, 6)) {
- l3_1tr6_error(pc, "missing connack date", skb);
- return;
- }
- dev_kfree_skb(skb);
- newl3state(pc, 10);
- pc->para.chargeinfo = 0;
- L3DelTimer(&pc->timer);
- pc->st->l3.l3l4(pc->st, CC_SETUP_COMPL | INDICATION, pc);
-}
-
-static void
-l3_1tr6_alert_req(struct l3_process *pc, u_char pr, void *arg)
-{
- newl3state(pc, 7);
- l3_1TR6_message(pc, MT_N1_ALERT, PROTO_DIS_N1);
-}
-
-static void
-l3_1tr6_setup_rsp(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb;
- u_char tmp[24];
- u_char *p = tmp;
- int l;
-
- MsgHead(p, pc->callref, MT_N1_CONN, PROTO_DIS_N1);
- if (pc->para.spv) { /* SPV ? */
- /* NSF SPV */
- *p++ = WE0_netSpecFac;
- *p++ = 4; /* length */
- *p++ = 0;
- *p++ = FAC_SPV; /* SPV */
- *p++ = pc->para.setup.si1;
- *p++ = pc->para.setup.si2;
- *p++ = WE0_netSpecFac;
- *p++ = 4; /* length */
- *p++ = 0;
- *p++ = FAC_Activate; /* activate SPV */
- *p++ = pc->para.setup.si1;
- *p++ = pc->para.setup.si2;
- }
- newl3state(pc, 8);
- l = p - tmp;
- if (!(skb = l3_alloc_skb(l)))
- return;
- skb_put_data(skb, tmp, l);
- l3_msg(pc->st, DL_DATA | REQUEST, skb);
- L3DelTimer(&pc->timer);
- L3AddTimer(&pc->timer, T313, CC_T313);
-}
-
-static void
-l3_1tr6_reset(struct l3_process *pc, u_char pr, void *arg)
-{
- release_l3_process(pc);
-}
-
-static void
-l3_1tr6_disconnect_req(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb;
- u_char tmp[16];
- u_char *p = tmp;
- int l;
- u_char cause = 0x10;
- u_char clen = 1;
-
- if (pc->para.cause > 0)
- cause = pc->para.cause;
- /* Map DSS1 causes */
- switch (cause & 0x7f) {
- case 0x10:
- clen = 0;
- break;
- case 0x11:
- cause = CAUSE_UserBusy;
- break;
- case 0x15:
- cause = CAUSE_CallRejected;
- break;
- }
- StopAllL3Timer(pc);
- MsgHead(p, pc->callref, MT_N1_DISC, PROTO_DIS_N1);
- *p++ = WE0_cause;
- *p++ = clen; /* length */
- if (clen)
- *p++ = cause | 0x80;
- newl3state(pc, 11);
- l = p - tmp;
- if (!(skb = l3_alloc_skb(l)))
- return;
- skb_put_data(skb, tmp, l);
- l3_msg(pc->st, DL_DATA | REQUEST, skb);
- L3AddTimer(&pc->timer, T305, CC_T305);
-}
-
-static void
-l3_1tr6_t303(struct l3_process *pc, u_char pr, void *arg)
-{
- if (pc->N303 > 0) {
- pc->N303--;
- L3DelTimer(&pc->timer);
- l3_1tr6_setup_req(pc, pr, arg);
- } else {
- L3DelTimer(&pc->timer);
- pc->para.cause = 0;
- l3_1tr6_disconnect_req(pc, 0, NULL);
- }
-}
-
-static void
-l3_1tr6_t304(struct l3_process *pc, u_char pr, void *arg)
-{
- L3DelTimer(&pc->timer);
- pc->para.cause = 0xE6;
- l3_1tr6_disconnect_req(pc, pr, NULL);
- pc->st->l3.l3l4(pc->st, CC_SETUP_ERR, pc);
-}
-
-static void
-l3_1tr6_t305(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb;
- u_char tmp[16];
- u_char *p = tmp;
- int l;
- u_char cause = 0x90;
- u_char clen = 1;
-
- L3DelTimer(&pc->timer);
- if (pc->para.cause != NO_CAUSE)
- cause = pc->para.cause;
- /* Map DSS1 causes */
- switch (cause & 0x7f) {
- case 0x10:
- clen = 0;
- break;
- case 0x15:
- cause = CAUSE_CallRejected;
- break;
- }
- MsgHead(p, pc->callref, MT_N1_REL, PROTO_DIS_N1);
- *p++ = WE0_cause;
- *p++ = clen; /* length */
- if (clen)
- *p++ = cause;
- newl3state(pc, 19);
- l = p - tmp;
- if (!(skb = l3_alloc_skb(l)))
- return;
- skb_put_data(skb, tmp, l);
- l3_msg(pc->st, DL_DATA | REQUEST, skb);
- L3AddTimer(&pc->timer, T308, CC_T308_1);
-}
-
-static void
-l3_1tr6_t310(struct l3_process *pc, u_char pr, void *arg)
-{
- L3DelTimer(&pc->timer);
- pc->para.cause = 0xE6;
- l3_1tr6_disconnect_req(pc, pr, NULL);
- pc->st->l3.l3l4(pc->st, CC_SETUP_ERR, pc);
-}
-
-static void
-l3_1tr6_t313(struct l3_process *pc, u_char pr, void *arg)
-{
- L3DelTimer(&pc->timer);
- pc->para.cause = 0xE6;
- l3_1tr6_disconnect_req(pc, pr, NULL);
- pc->st->l3.l3l4(pc->st, CC_CONNECT_ERR, pc);
-}
-
-static void
-l3_1tr6_t308_1(struct l3_process *pc, u_char pr, void *arg)
-{
- L3DelTimer(&pc->timer);
- l3_1TR6_message(pc, MT_N1_REL, PROTO_DIS_N1);
- L3AddTimer(&pc->timer, T308, CC_T308_2);
- newl3state(pc, 19);
-}
-
-static void
-l3_1tr6_t308_2(struct l3_process *pc, u_char pr, void *arg)
-{
- L3DelTimer(&pc->timer);
- pc->st->l3.l3l4(pc->st, CC_RELEASE_ERR, pc);
- release_l3_process(pc);
-}
-
-static void
-l3_1tr6_dl_reset(struct l3_process *pc, u_char pr, void *arg)
-{
- pc->para.cause = CAUSE_LocalProcErr;
- l3_1tr6_disconnect_req(pc, pr, NULL);
- pc->st->l3.l3l4(pc->st, CC_SETUP_ERR, pc);
-}
-
-static void
-l3_1tr6_dl_release(struct l3_process *pc, u_char pr, void *arg)
-{
- newl3state(pc, 0);
- pc->para.cause = 0x1b; /* Destination out of order */
- pc->para.loc = 0;
- pc->st->l3.l3l4(pc->st, CC_RELEASE | INDICATION, pc);
- release_l3_process(pc);
-}
-
-/* *INDENT-OFF* */
-static struct stateentry downstl[] =
-{
- {SBIT(0),
- CC_SETUP | REQUEST, l3_1tr6_setup_req},
- {SBIT(1) | SBIT(2) | SBIT(3) | SBIT(4) | SBIT(6) | SBIT(7) | SBIT(8) |
- SBIT(10),
- CC_DISCONNECT | REQUEST, l3_1tr6_disconnect_req},
- {SBIT(12),
- CC_RELEASE | REQUEST, l3_1tr6_release_req},
- {SBIT(6),
- CC_IGNORE | REQUEST, l3_1tr6_reset},
- {SBIT(6),
- CC_REJECT | REQUEST, l3_1tr6_disconnect_req},
- {SBIT(6),
- CC_ALERTING | REQUEST, l3_1tr6_alert_req},
- {SBIT(6) | SBIT(7),
- CC_SETUP | RESPONSE, l3_1tr6_setup_rsp},
- {SBIT(1),
- CC_T303, l3_1tr6_t303},
- {SBIT(2),
- CC_T304, l3_1tr6_t304},
- {SBIT(3),
- CC_T310, l3_1tr6_t310},
- {SBIT(8),
- CC_T313, l3_1tr6_t313},
- {SBIT(11),
- CC_T305, l3_1tr6_t305},
- {SBIT(19),
- CC_T308_1, l3_1tr6_t308_1},
- {SBIT(19),
- CC_T308_2, l3_1tr6_t308_2},
-};
-
-static struct stateentry datastln1[] =
-{
- {SBIT(0),
- MT_N1_INVALID, l3_1tr6_invalid},
- {SBIT(0),
- MT_N1_SETUP, l3_1tr6_setup},
- {SBIT(1),
- MT_N1_SETUP_ACK, l3_1tr6_setup_ack},
- {SBIT(1) | SBIT(2),
- MT_N1_CALL_SENT, l3_1tr6_call_sent},
- {SBIT(1) | SBIT(2) | SBIT(3) | SBIT(4) | SBIT(7) | SBIT(8) | SBIT(10),
- MT_N1_DISC, l3_1tr6_disc},
- {SBIT(2) | SBIT(3) | SBIT(4),
- MT_N1_ALERT, l3_1tr6_alert},
- {SBIT(2) | SBIT(3) | SBIT(4),
- MT_N1_CONN, l3_1tr6_connect},
- {SBIT(2),
- MT_N1_INFO, l3_1tr6_info_s2},
- {SBIT(8),
- MT_N1_CONN_ACK, l3_1tr6_connect_ack},
- {SBIT(10),
- MT_N1_INFO, l3_1tr6_info},
- {SBIT(0) | SBIT(1) | SBIT(2) | SBIT(3) | SBIT(4) | SBIT(7) | SBIT(8) |
- SBIT(10) | SBIT(11) | SBIT(12) | SBIT(15) | SBIT(17),
- MT_N1_REL, l3_1tr6_rel},
- {SBIT(19),
- MT_N1_REL, l3_1tr6_rel_ack},
- {SBIT(0) | SBIT(1) | SBIT(2) | SBIT(3) | SBIT(4) | SBIT(7) | SBIT(8) |
- SBIT(10) | SBIT(11) | SBIT(12) | SBIT(15) | SBIT(17),
- MT_N1_REL_ACK, l3_1tr6_invalid},
- {SBIT(19),
- MT_N1_REL_ACK, l3_1tr6_rel_ack}
-};
-
-static struct stateentry manstatelist[] =
-{
- {SBIT(2),
- DL_ESTABLISH | INDICATION, l3_1tr6_dl_reset},
- {ALL_STATES,
- DL_RELEASE | INDICATION, l3_1tr6_dl_release},
-};
-
-/* *INDENT-ON* */
-
-static void
-up1tr6(struct PStack *st, int pr, void *arg)
-{
- int i, mt, cr;
- struct l3_process *proc;
- struct sk_buff *skb = arg;
-
- switch (pr) {
- case (DL_DATA | INDICATION):
- case (DL_UNIT_DATA | INDICATION):
- break;
- case (DL_ESTABLISH | CONFIRM):
- case (DL_ESTABLISH | INDICATION):
- case (DL_RELEASE | INDICATION):
- case (DL_RELEASE | CONFIRM):
- l3_msg(st, pr, arg);
- return;
- break;
- }
- if (skb->len < 4) {
- if (st->l3.debug & L3_DEB_PROTERR) {
- l3_debug(st, "up1tr6 len only %d", skb->len);
- }
- dev_kfree_skb(skb);
- return;
- }
- if ((skb->data[0] & 0xfe) != PROTO_DIS_N0) {
- if (st->l3.debug & L3_DEB_PROTERR) {
- l3_debug(st, "up1tr6%sunexpected discriminator %x message len %d",
- (pr == (DL_DATA | INDICATION)) ? " " : "(broadcast) ",
- skb->data[0], skb->len);
- }
- dev_kfree_skb(skb);
- return;
- }
- if (skb->data[1] != 1) {
- if (st->l3.debug & L3_DEB_PROTERR) {
- l3_debug(st, "up1tr6 CR len not 1");
- }
- dev_kfree_skb(skb);
- return;
- }
- cr = skb->data[2];
- mt = skb->data[3];
- if (skb->data[0] == PROTO_DIS_N0) {
- dev_kfree_skb(skb);
- if (st->l3.debug & L3_DEB_STATE) {
- l3_debug(st, "up1tr6%s N0 mt %x unhandled",
- (pr == (DL_DATA | INDICATION)) ? " " : "(broadcast) ", mt);
- }
- } else if (skb->data[0] == PROTO_DIS_N1) {
- if (!(proc = getl3proc(st, cr))) {
- if (mt == MT_N1_SETUP) {
- if (cr < 128) {
- if (!(proc = new_l3_process(st, cr))) {
- if (st->l3.debug & L3_DEB_PROTERR) {
- l3_debug(st, "up1tr6 no roc mem");
- }
- dev_kfree_skb(skb);
- return;
- }
- } else {
- dev_kfree_skb(skb);
- return;
- }
- } else if ((mt == MT_N1_REL) || (mt == MT_N1_REL_ACK) ||
- (mt == MT_N1_CANC_ACK) || (mt == MT_N1_CANC_REJ) ||
- (mt == MT_N1_REG_ACK) || (mt == MT_N1_REG_REJ) ||
- (mt == MT_N1_SUSP_ACK) || (mt == MT_N1_RES_REJ) ||
- (mt == MT_N1_INFO)) {
- dev_kfree_skb(skb);
- return;
- } else {
- if (!(proc = new_l3_process(st, cr))) {
- if (st->l3.debug & L3_DEB_PROTERR) {
- l3_debug(st, "up1tr6 no roc mem");
- }
- dev_kfree_skb(skb);
- return;
- }
- mt = MT_N1_INVALID;
- }
- }
- for (i = 0; i < ARRAY_SIZE(datastln1); i++)
- if ((mt == datastln1[i].primitive) &&
- ((1 << proc->state) & datastln1[i].state))
- break;
- if (i == ARRAY_SIZE(datastln1)) {
- dev_kfree_skb(skb);
- if (st->l3.debug & L3_DEB_STATE) {
- l3_debug(st, "up1tr6%sstate %d mt %x unhandled",
- (pr == (DL_DATA | INDICATION)) ? " " : "(broadcast) ",
- proc->state, mt);
- }
- return;
- } else {
- if (st->l3.debug & L3_DEB_STATE) {
- l3_debug(st, "up1tr6%sstate %d mt %x",
- (pr == (DL_DATA | INDICATION)) ? " " : "(broadcast) ",
- proc->state, mt);
- }
- datastln1[i].rout(proc, pr, skb);
- }
- }
-}
-
-static void
-down1tr6(struct PStack *st, int pr, void *arg)
-{
- int i, cr;
- struct l3_process *proc;
- struct Channel *chan;
-
- if ((DL_ESTABLISH | REQUEST) == pr) {
- l3_msg(st, pr, NULL);
- return;
- } else if ((CC_SETUP | REQUEST) == pr) {
- chan = arg;
- cr = newcallref();
- cr |= 0x80;
- if (!(proc = new_l3_process(st, cr))) {
- return;
- } else {
- proc->chan = chan;
- chan->proc = proc;
- memcpy(&proc->para.setup, &chan->setup, sizeof(setup_parm));
- proc->callref = cr;
- }
- } else {
- proc = arg;
- }
-
- for (i = 0; i < ARRAY_SIZE(downstl); i++)
- if ((pr == downstl[i].primitive) &&
- ((1 << proc->state) & downstl[i].state))
- break;
- if (i == ARRAY_SIZE(downstl)) {
- if (st->l3.debug & L3_DEB_STATE) {
- l3_debug(st, "down1tr6 state %d prim %d unhandled",
- proc->state, pr);
- }
- } else {
- if (st->l3.debug & L3_DEB_STATE) {
- l3_debug(st, "down1tr6 state %d prim %d",
- proc->state, pr);
- }
- downstl[i].rout(proc, pr, arg);
- }
-}
-
-static void
-man1tr6(struct PStack *st, int pr, void *arg)
-{
- int i;
- struct l3_process *proc = arg;
-
- if (!proc) {
- printk(KERN_ERR "HiSax man1tr6 without proc pr=%04x\n", pr);
- return;
- }
- for (i = 0; i < ARRAY_SIZE(manstatelist); i++)
- if ((pr == manstatelist[i].primitive) &&
- ((1 << proc->state) & manstatelist[i].state))
- break;
- if (i == ARRAY_SIZE(manstatelist)) {
- if (st->l3.debug & L3_DEB_STATE) {
- l3_debug(st, "cr %d man1tr6 state %d prim %d unhandled",
- proc->callref & 0x7f, proc->state, pr);
- }
- } else {
- if (st->l3.debug & L3_DEB_STATE) {
- l3_debug(st, "cr %d man1tr6 state %d prim %d",
- proc->callref & 0x7f, proc->state, pr);
- }
- manstatelist[i].rout(proc, pr, arg);
- }
-}
-
-void
-setstack_1tr6(struct PStack *st)
-{
- char tmp[64];
-
- st->lli.l4l3 = down1tr6;
- st->l2.l2l3 = up1tr6;
- st->l3.l3ml3 = man1tr6;
- st->l3.N303 = 0;
-
- strcpy(tmp, l3_1tr6_revision);
- printk(KERN_INFO "HiSax: 1TR6 Rev. %s\n", HiSax_getrev(tmp));
-}
diff --git a/drivers/isdn/hisax/l3_1tr6.h b/drivers/isdn/hisax/l3_1tr6.h
deleted file mode 100644
index 43215c00cada..000000000000
--- a/drivers/isdn/hisax/l3_1tr6.h
+++ /dev/null
@@ -1,164 +0,0 @@
-/* $Id: l3_1tr6.h,v 2.2.6.2 2001/09/23 22:24:49 kai Exp $
- *
- * German 1TR6 D-channel protocol defines
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#ifndef l3_1tr6
-#define l3_1tr6
-
-#define PROTO_DIS_N0 0x40
-#define PROTO_DIS_N1 0x41
-
-/*
- * MsgType N0
- */
-#define MT_N0_REG_IND 0x61
-#define MT_N0_CANC_IND 0x62
-#define MT_N0_FAC_STA 0x63
-#define MT_N0_STA_ACK 0x64
-#define MT_N0_STA_REJ 0x65
-#define MT_N0_FAC_INF 0x66
-#define MT_N0_INF_ACK 0x67
-#define MT_N0_INF_REJ 0x68
-#define MT_N0_CLOSE 0x75
-#define MT_N0_CLO_ACK 0x77
-
-/*
- * MsgType N1
- */
-
-#define MT_N1_ESC 0x00
-#define MT_N1_ALERT 0x01
-#define MT_N1_CALL_SENT 0x02
-#define MT_N1_CONN 0x07
-#define MT_N1_CONN_ACK 0x0F
-#define MT_N1_SETUP 0x05
-#define MT_N1_SETUP_ACK 0x0D
-#define MT_N1_RES 0x26
-#define MT_N1_RES_ACK 0x2E
-#define MT_N1_RES_REJ 0x22
-#define MT_N1_SUSP 0x25
-#define MT_N1_SUSP_ACK 0x2D
-#define MT_N1_SUSP_REJ 0x21
-#define MT_N1_USER_INFO 0x20
-#define MT_N1_DET 0x40
-#define MT_N1_DISC 0x45
-#define MT_N1_REL 0x4D
-#define MT_N1_REL_ACK 0x5A
-#define MT_N1_CANC_ACK 0x6E
-#define MT_N1_CANC_REJ 0x67
-#define MT_N1_CON_CON 0x69
-#define MT_N1_FAC 0x60
-#define MT_N1_FAC_ACK 0x68
-#define MT_N1_FAC_CAN 0x66
-#define MT_N1_FAC_REG 0x64
-#define MT_N1_FAC_REJ 0x65
-#define MT_N1_INFO 0x6D
-#define MT_N1_REG_ACK 0x6C
-#define MT_N1_REG_REJ 0x6F
-#define MT_N1_STAT 0x63
-#define MT_N1_INVALID 0
-
-/*
- * W elements (information elements)
- */
-
-#define WE_Shift_F0 0x90
-#define WE_Shift_F6 0x96
-#define WE_Shift_OF0 0x98
-#define WE_Shift_OF6 0x9E
-
-#define WE0_cause 0x08
-#define WE0_connAddr 0x0C
-#define WE0_callID 0x10
-#define WE0_chanID 0x18
-#define WE0_netSpecFac 0x20
-#define WE0_display 0x28
-#define WE0_keypad 0x2C
-#define WE0_origAddr 0x6C
-#define WE0_destAddr 0x70
-#define WE0_userInfo 0x7E
-
-#define WE0_moreData 0xA0
-#define WE0_congestLevel 0xB0
-
-#define WE6_serviceInd 0x01
-#define WE6_chargingInfo 0x02
-#define WE6_date 0x03
-#define WE6_facSelect 0x05
-#define WE6_facStatus 0x06
-#define WE6_statusCalled 0x07
-#define WE6_addTransAttr 0x08
-
-/*
- * FacCodes
- */
-#define FAC_Sperre 0x01
-#define FAC_Sperre_All 0x02
-#define FAC_Sperre_Fern 0x03
-#define FAC_Sperre_Intl 0x04
-#define FAC_Sperre_Interk 0x05
-
-#define FAC_Forward1 0x02
-#define FAC_Forward2 0x03
-#define FAC_Konferenz 0x06
-#define FAC_GrabBchan 0x0F
-#define FAC_Reactivate 0x10
-#define FAC_Konferenz3 0x11
-#define FAC_Dienstwechsel1 0x12
-#define FAC_Dienstwechsel2 0x13
-#define FAC_NummernIdent 0x14
-#define FAC_GBG 0x15
-#define FAC_DisplayUebergeben 0x17
-#define FAC_DisplayUmgeleitet 0x1A
-#define FAC_Unterdruecke 0x1B
-#define FAC_Deactivate 0x1E
-#define FAC_Activate 0x1D
-#define FAC_SPV 0x1F
-#define FAC_Rueckwechsel 0x23
-#define FAC_Umleitung 0x24
-
-/*
- * Cause codes
- */
-#define CAUSE_InvCRef 0x01
-#define CAUSE_BearerNotImpl 0x03
-#define CAUSE_CIDunknown 0x07
-#define CAUSE_CIDinUse 0x08
-#define CAUSE_NoChans 0x0A
-#define CAUSE_FacNotImpl 0x10
-#define CAUSE_FacNotSubscr 0x11
-#define CAUSE_OutgoingBarred 0x20
-#define CAUSE_UserAccessBusy 0x21
-#define CAUSE_NegativeGBG 0x22
-#define CAUSE_UnknownGBG 0x23
-#define CAUSE_NoSPVknown 0x25
-#define CAUSE_DestNotObtain 0x35
-#define CAUSE_NumberChanged 0x38
-#define CAUSE_OutOfOrder 0x39
-#define CAUSE_NoUserResponse 0x3A
-#define CAUSE_UserBusy 0x3B
-#define CAUSE_IncomingBarred 0x3D
-#define CAUSE_CallRejected 0x3E
-#define CAUSE_NetworkCongestion 0x59
-#define CAUSE_RemoteUser 0x5A
-#define CAUSE_LocalProcErr 0x70
-#define CAUSE_RemoteProcErr 0x71
-#define CAUSE_RemoteUserSuspend 0x72
-#define CAUSE_RemoteUserResumed 0x73
-#define CAUSE_UserInfoDiscarded 0x7F
-
-#define T303 4000
-#define T304 20000
-#define T305 4000
-#define T308 4000
-#define T310 120000
-#define T313 4000
-#define T318 4000
-#define T319 4000
-
-#endif
diff --git a/drivers/isdn/hisax/l3dss1.c b/drivers/isdn/hisax/l3dss1.c
deleted file mode 100644
index 368d152a8f1d..000000000000
--- a/drivers/isdn/hisax/l3dss1.c
+++ /dev/null
@@ -1,3227 +0,0 @@
-/* $Id: l3dss1.c,v 2.32.2.3 2004/01/13 14:31:25 keil Exp $
- *
- * EURO/DSS1 D-channel protocol
- *
- * German 1TR6 D-channel protocol
- *
- * Author Karsten Keil
- * based on the teles driver from Jan den Ouden
- * Copyright by Karsten Keil <keil@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- * For changes and modifications please read
- * Documentation/isdn/HiSax.cert
- *
- * Thanks to Jan den Ouden
- * Fritz Elfert
- *
- */
-
-#include "hisax.h"
-#include "isdnl3.h"
-#include "l3dss1.h"
-#include <linux/ctype.h>
-#include <linux/slab.h>
-
-extern char *HiSax_getrev(const char *revision);
-static const char *dss1_revision = "$Revision: 2.32.2.3 $";
-
-#define EXT_BEARER_CAPS 1
-
-#define MsgHead(ptr, cref, mty) \
- *ptr++ = 0x8; \
- if (cref == -1) { \
- *ptr++ = 0x0; \
- } else { \
- *ptr++ = 0x1; \
- *ptr++ = cref^0x80; \
- } \
- *ptr++ = mty
-
-
-/**********************************************/
-/* get a new invoke id for remote operations. */
-/* Only a return value != 0 is valid */
-/**********************************************/
-static unsigned char new_invoke_id(struct PStack *p)
-{
- unsigned char retval;
- int i;
-
- i = 32; /* maximum search depth */
-
- retval = p->prot.dss1.last_invoke_id + 1; /* try new id */
- while ((i) && (p->prot.dss1.invoke_used[retval >> 3] == 0xFF)) {
- p->prot.dss1.last_invoke_id = (retval & 0xF8) + 8;
- i--;
- }
- if (i) {
- while (p->prot.dss1.invoke_used[retval >> 3] & (1 << (retval & 7)))
- retval++;
- } else
- retval = 0;
- p->prot.dss1.last_invoke_id = retval;
- p->prot.dss1.invoke_used[retval >> 3] |= (1 << (retval & 7));
- return (retval);
-} /* new_invoke_id */
-
-/*************************/
-/* free a used invoke id */
-/*************************/
-static void free_invoke_id(struct PStack *p, unsigned char id)
-{
-
- if (!id) return; /* 0 = invalid value */
-
- p->prot.dss1.invoke_used[id >> 3] &= ~(1 << (id & 7));
-} /* free_invoke_id */
-
-
-/**********************************************************/
-/* create a new l3 process and fill in dss1 specific data */
-/**********************************************************/
-static struct l3_process
-*dss1_new_l3_process(struct PStack *st, int cr)
-{ struct l3_process *proc;
-
- if (!(proc = new_l3_process(st, cr)))
- return (NULL);
-
- proc->prot.dss1.invoke_id = 0;
- proc->prot.dss1.remote_operation = 0;
- proc->prot.dss1.uus1_data[0] = '\0';
-
- return (proc);
-} /* dss1_new_l3_process */
-
-/************************************************/
-/* free a l3 process and all dss1 specific data */
-/************************************************/
-static void
-dss1_release_l3_process(struct l3_process *p)
-{
- free_invoke_id(p->st, p->prot.dss1.invoke_id);
- release_l3_process(p);
-} /* dss1_release_l3_process */
-
-/********************************************************/
-/* search a process with invoke id id and dummy callref */
-/********************************************************/
-static struct l3_process *
-l3dss1_search_dummy_proc(struct PStack *st, int id)
-{ struct l3_process *pc = st->l3.proc; /* start of processes */
-
- if (!id) return (NULL);
-
- while (pc)
- { if ((pc->callref == -1) && (pc->prot.dss1.invoke_id == id))
- return (pc);
- pc = pc->next;
- }
- return (NULL);
-} /* l3dss1_search_dummy_proc */
-
-/*******************************************************************/
-/* called when a facility message with a dummy callref is received */
-/* and a return result is delivered. id specifies the invoke id. */
-/*******************************************************************/
-static void
-l3dss1_dummy_return_result(struct PStack *st, int id, u_char *p, u_char nlen)
-{ isdn_ctrl ic;
- struct IsdnCardState *cs;
- struct l3_process *pc = NULL;
-
- if ((pc = l3dss1_search_dummy_proc(st, id)))
- { L3DelTimer(&pc->timer); /* remove timer */
-
- cs = pc->st->l1.hardware;
- ic.driver = cs->myid;
- ic.command = ISDN_STAT_PROT;
- ic.arg = DSS1_STAT_INVOKE_RES;
- ic.parm.dss1_io.hl_id = pc->prot.dss1.invoke_id;
- ic.parm.dss1_io.ll_id = pc->prot.dss1.ll_id;
- ic.parm.dss1_io.proc = pc->prot.dss1.proc;
- ic.parm.dss1_io.timeout = 0;
- ic.parm.dss1_io.datalen = nlen;
- ic.parm.dss1_io.data = p;
- free_invoke_id(pc->st, pc->prot.dss1.invoke_id);
- pc->prot.dss1.invoke_id = 0; /* reset id */
-
- cs->iif.statcallb(&ic);
- dss1_release_l3_process(pc);
- }
- else
- l3_debug(st, "dummy return result id=0x%x result len=%d", id, nlen);
-} /* l3dss1_dummy_return_result */
-
-/*******************************************************************/
-/* called when a facility message with a dummy callref is received */
-/* and a return error is delivered. id specifies the invoke id. */
-/*******************************************************************/
-static void
-l3dss1_dummy_error_return(struct PStack *st, int id, ulong error)
-{ isdn_ctrl ic;
- struct IsdnCardState *cs;
- struct l3_process *pc = NULL;
-
- if ((pc = l3dss1_search_dummy_proc(st, id)))
- { L3DelTimer(&pc->timer); /* remove timer */
-
- cs = pc->st->l1.hardware;
- ic.driver = cs->myid;
- ic.command = ISDN_STAT_PROT;
- ic.arg = DSS1_STAT_INVOKE_ERR;
- ic.parm.dss1_io.hl_id = pc->prot.dss1.invoke_id;
- ic.parm.dss1_io.ll_id = pc->prot.dss1.ll_id;
- ic.parm.dss1_io.proc = pc->prot.dss1.proc;
- ic.parm.dss1_io.timeout = error;
- ic.parm.dss1_io.datalen = 0;
- ic.parm.dss1_io.data = NULL;
- free_invoke_id(pc->st, pc->prot.dss1.invoke_id);
- pc->prot.dss1.invoke_id = 0; /* reset id */
-
- cs->iif.statcallb(&ic);
- dss1_release_l3_process(pc);
- }
- else
- l3_debug(st, "dummy return error id=0x%x error=0x%lx", id, error);
-} /* l3dss1_dummy_error_return */
-
-/*******************************************************************/
-/* called when a facility message with a dummy callref is received */
-/* and a invoke is delivered. id specifies the invoke id. */
-/*******************************************************************/
-static void
-l3dss1_dummy_invoke(struct PStack *st, int cr, int id,
- int ident, u_char *p, u_char nlen)
-{ isdn_ctrl ic;
- struct IsdnCardState *cs;
-
- l3_debug(st, "dummy invoke %s id=0x%x ident=0x%x datalen=%d",
- (cr == -1) ? "local" : "broadcast", id, ident, nlen);
- if (cr >= -1) return; /* ignore local data */
-
- cs = st->l1.hardware;
- ic.driver = cs->myid;
- ic.command = ISDN_STAT_PROT;
- ic.arg = DSS1_STAT_INVOKE_BRD;
- ic.parm.dss1_io.hl_id = id;
- ic.parm.dss1_io.ll_id = 0;
- ic.parm.dss1_io.proc = ident;
- ic.parm.dss1_io.timeout = 0;
- ic.parm.dss1_io.datalen = nlen;
- ic.parm.dss1_io.data = p;
-
- cs->iif.statcallb(&ic);
-} /* l3dss1_dummy_invoke */
-
-static void
-l3dss1_parse_facility(struct PStack *st, struct l3_process *pc,
- int cr, u_char *p)
-{
- int qd_len = 0;
- unsigned char nlen = 0, ilen, cp_tag;
- int ident, id;
- ulong err_ret;
-
- if (pc)
- st = pc->st; /* valid Stack */
- else
- if ((!st) || (cr >= 0)) return; /* neither pc nor st specified */
-
- p++;
- qd_len = *p++;
- if (qd_len == 0) {
- l3_debug(st, "qd_len == 0");
- return;
- }
- if ((*p & 0x1F) != 0x11) { /* Service discriminator, supplementary service */
- l3_debug(st, "supplementary service != 0x11");
- return;
- }
- while (qd_len > 0 && !(*p & 0x80)) { /* extension ? */
- p++;
- qd_len--;
- }
- if (qd_len < 2) {
- l3_debug(st, "qd_len < 2");
- return;
- }
- p++;
- qd_len--;
- if ((*p & 0xE0) != 0xA0) { /* class and form */
- l3_debug(st, "class and form != 0xA0");
- return;
- }
-
- cp_tag = *p & 0x1F; /* remember tag value */
-
- p++;
- qd_len--;
- if (qd_len < 1)
- { l3_debug(st, "qd_len < 1");
- return;
- }
- if (*p & 0x80)
- { /* length format indefinite or limited */
- nlen = *p++ & 0x7F; /* number of len bytes or indefinite */
- if ((qd_len-- < ((!nlen) ? 3 : (1 + nlen))) ||
- (nlen > 1))
- { l3_debug(st, "length format error or not implemented");
- return;
- }
- if (nlen == 1)
- { nlen = *p++; /* complete length */
- qd_len--;
- }
- else
- { qd_len -= 2; /* trailing null bytes */
- if ((*(p + qd_len)) || (*(p + qd_len + 1)))
- { l3_debug(st, "length format indefinite error");
- return;
- }
- nlen = qd_len;
- }
- }
- else
- { nlen = *p++;
- qd_len--;
- }
- if (qd_len < nlen)
- { l3_debug(st, "qd_len < nlen");
- return;
- }
- qd_len -= nlen;
-
- if (nlen < 2)
- { l3_debug(st, "nlen < 2");
- return;
- }
- if (*p != 0x02)
- { /* invoke identifier tag */
- l3_debug(st, "invoke identifier tag !=0x02");
- return;
- }
- p++;
- nlen--;
- if (*p & 0x80)
- { /* length format */
- l3_debug(st, "invoke id length format 2");
- return;
- }
- ilen = *p++;
- nlen--;
- if (ilen > nlen || ilen == 0)
- { l3_debug(st, "ilen > nlen || ilen == 0");
- return;
- }
- nlen -= ilen;
- id = 0;
- while (ilen > 0)
- { id = (id << 8) | (*p++ & 0xFF); /* invoke identifier */
- ilen--;
- }
-
- switch (cp_tag) { /* component tag */
- case 1: /* invoke */
- if (nlen < 2) {
- l3_debug(st, "nlen < 2 22");
- return;
- }
- if (*p != 0x02) { /* operation value */
- l3_debug(st, "operation value !=0x02");
- return;
- }
- p++;
- nlen--;
- ilen = *p++;
- nlen--;
- if (ilen > nlen || ilen == 0) {
- l3_debug(st, "ilen > nlen || ilen == 0 22");
- return;
- }
- nlen -= ilen;
- ident = 0;
- while (ilen > 0) {
- ident = (ident << 8) | (*p++ & 0xFF);
- ilen--;
- }
-
- if (!pc)
- { l3dss1_dummy_invoke(st, cr, id, ident, p, nlen);
- return;
- }
-#ifdef CONFIG_DE_AOC
- {
-
-#define FOO1(s, a, b) \
- while (nlen > 1) { \
- int ilen = p[1]; \
- if (nlen < ilen + 2) { \
- l3_debug(st, "FOO1 nlen < ilen+2"); \
- return; \
- } \
- nlen -= ilen + 2; \
- if ((*p & 0xFF) == (a)) { \
- int nlen = ilen; \
- p += 2; \
- b; \
- } else { \
- p += ilen + 2; \
- } \
- }
-
- switch (ident) {
- case 0x22: /* during */
- FOO1("1A", 0x30, FOO1("1C", 0xA1, FOO1("1D", 0x30, FOO1("1E", 0x02, ( {
- ident = 0;
- nlen = (nlen) ? nlen : 0; /* Make gcc happy */
- while (ilen > 0) {
- ident = (ident << 8) | *p++;
- ilen--;
- }
- if (ident > pc->para.chargeinfo) {
- pc->para.chargeinfo = ident;
- st->l3.l3l4(st, CC_CHARGE | INDICATION, pc);
- }
- if (st->l3.debug & L3_DEB_CHARGE) {
- if (*(p + 2) == 0) {
- l3_debug(st, "charging info during %d", pc->para.chargeinfo);
- }
- else {
- l3_debug(st, "charging info final %d", pc->para.chargeinfo);
- }
- }
- }
- )))))
- break;
- case 0x24: /* final */
- FOO1("2A", 0x30, FOO1("2B", 0x30, FOO1("2C", 0xA1, FOO1("2D", 0x30, FOO1("2E", 0x02, ( {
- ident = 0;
- nlen = (nlen) ? nlen : 0; /* Make gcc happy */
- while (ilen > 0) {
- ident = (ident << 8) | *p++;
- ilen--;
- }
- if (ident > pc->para.chargeinfo) {
- pc->para.chargeinfo = ident;
- st->l3.l3l4(st, CC_CHARGE | INDICATION, pc);
- }
- if (st->l3.debug & L3_DEB_CHARGE) {
- l3_debug(st, "charging info final %d", pc->para.chargeinfo);
- }
- }
- ))))))
- break;
- default:
- l3_debug(st, "invoke break invalid ident %02x", ident);
- break;
- }
-#undef FOO1
-
- }
-#else /* not CONFIG_DE_AOC */
- l3_debug(st, "invoke break");
-#endif /* not CONFIG_DE_AOC */
- break;
- case 2: /* return result */
- /* if no process available handle separately */
- if (!pc)
- { if (cr == -1)
- l3dss1_dummy_return_result(st, id, p, nlen);
- return;
- }
- if ((pc->prot.dss1.invoke_id) && (pc->prot.dss1.invoke_id == id))
- { /* Diversion successful */
- free_invoke_id(st, pc->prot.dss1.invoke_id);
- pc->prot.dss1.remote_result = 0; /* success */
- pc->prot.dss1.invoke_id = 0;
- pc->redir_result = pc->prot.dss1.remote_result;
- st->l3.l3l4(st, CC_REDIR | INDICATION, pc); } /* Diversion successful */
- else
- l3_debug(st, "return error unknown identifier");
- break;
- case 3: /* return error */
- err_ret = 0;
- if (nlen < 2)
- { l3_debug(st, "return error nlen < 2");
- return;
- }
- if (*p != 0x02)
- { /* result tag */
- l3_debug(st, "invoke error tag !=0x02");
- return;
- }
- p++;
- nlen--;
- if (*p > 4)
- { /* length format */
- l3_debug(st, "invoke return errlen > 4 ");
- return;
- }
- ilen = *p++;
- nlen--;
- if (ilen > nlen || ilen == 0)
- { l3_debug(st, "error return ilen > nlen || ilen == 0");
- return;
- }
- nlen -= ilen;
- while (ilen > 0)
- { err_ret = (err_ret << 8) | (*p++ & 0xFF); /* error value */
- ilen--;
- }
- /* if no process available handle separately */
- if (!pc)
- { if (cr == -1)
- l3dss1_dummy_error_return(st, id, err_ret);
- return;
- }
- if ((pc->prot.dss1.invoke_id) && (pc->prot.dss1.invoke_id == id))
- { /* Deflection error */
- free_invoke_id(st, pc->prot.dss1.invoke_id);
- pc->prot.dss1.remote_result = err_ret; /* result */
- pc->prot.dss1.invoke_id = 0;
- pc->redir_result = pc->prot.dss1.remote_result;
- st->l3.l3l4(st, CC_REDIR | INDICATION, pc);
- } /* Deflection error */
- else
- l3_debug(st, "return result unknown identifier");
- break;
- default:
- l3_debug(st, "facility default break tag=0x%02x", cp_tag);
- break;
- }
-}
-
-static void
-l3dss1_message(struct l3_process *pc, u_char mt)
-{
- struct sk_buff *skb;
- u_char *p;
-
- if (!(skb = l3_alloc_skb(4)))
- return;
- p = skb_put(skb, 4);
- MsgHead(p, pc->callref, mt);
- l3_msg(pc->st, DL_DATA | REQUEST, skb);
-}
-
-static void
-l3dss1_message_cause(struct l3_process *pc, u_char mt, u_char cause)
-{
- struct sk_buff *skb;
- u_char tmp[16];
- u_char *p = tmp;
- int l;
-
- MsgHead(p, pc->callref, mt);
- *p++ = IE_CAUSE;
- *p++ = 0x2;
- *p++ = 0x80;
- *p++ = cause | 0x80;
-
- l = p - tmp;
- if (!(skb = l3_alloc_skb(l)))
- return;
- skb_put_data(skb, tmp, l);
- l3_msg(pc->st, DL_DATA | REQUEST, skb);
-}
-
-static void
-l3dss1_status_send(struct l3_process *pc, u_char pr, void *arg)
-{
- u_char tmp[16];
- u_char *p = tmp;
- int l;
- struct sk_buff *skb;
-
- MsgHead(p, pc->callref, MT_STATUS);
-
- *p++ = IE_CAUSE;
- *p++ = 0x2;
- *p++ = 0x80;
- *p++ = pc->para.cause | 0x80;
-
- *p++ = IE_CALL_STATE;
- *p++ = 0x1;
- *p++ = pc->state & 0x3f;
-
- l = p - tmp;
- if (!(skb = l3_alloc_skb(l)))
- return;
- skb_put_data(skb, tmp, l);
- l3_msg(pc->st, DL_DATA | REQUEST, skb);
-}
-
-static void
-l3dss1_msg_without_setup(struct l3_process *pc, u_char pr, void *arg)
-{
- /* This routine is called if no SETUP was made (checks in dss1up and in
-  * l3dss1_setup) and a RELEASE_COMPLETE has to be sent with an error code.
-  * MT_STATUS_ENQUIRE in the NULL state is handled here too.
-  */
- u_char tmp[16];
- u_char *p = tmp;
- int l;
- struct sk_buff *skb;
-
- switch (pc->para.cause) {
- case 81: /* invalid callreference */
- case 88: /* incomp destination */
- case 96: /* mandatory IE missing */
- case 100: /* invalid IE contents */
- case 101: /* incompatible Callstate */
- MsgHead(p, pc->callref, MT_RELEASE_COMPLETE);
- *p++ = IE_CAUSE;
- *p++ = 0x2;
- *p++ = 0x80;
- *p++ = pc->para.cause | 0x80;
- break;
- default:
- printk(KERN_ERR "HiSax l3dss1_msg_without_setup wrong cause %d\n",
- pc->para.cause);
- return;
- }
- l = p - tmp;
- if (!(skb = l3_alloc_skb(l)))
- return;
- skb_put_data(skb, tmp, l);
- l3_msg(pc->st, DL_DATA | REQUEST, skb);
- dss1_release_l3_process(pc);
-}
-
-static int ie_ALERTING[] = {IE_BEARER, IE_CHANNEL_ID | IE_MANDATORY_1,
- IE_FACILITY, IE_PROGRESS, IE_DISPLAY, IE_SIGNAL, IE_HLC,
- IE_USER_USER, -1};
-static int ie_CALL_PROCEEDING[] = {IE_BEARER, IE_CHANNEL_ID | IE_MANDATORY_1,
- IE_FACILITY, IE_PROGRESS, IE_DISPLAY, IE_HLC, -1};
-static int ie_CONNECT[] = {IE_BEARER, IE_CHANNEL_ID | IE_MANDATORY_1,
- IE_FACILITY, IE_PROGRESS, IE_DISPLAY, IE_DATE, IE_SIGNAL,
- IE_CONNECT_PN, IE_CONNECT_SUB, IE_LLC, IE_HLC, IE_USER_USER, -1};
-static int ie_CONNECT_ACKNOWLEDGE[] = {IE_CHANNEL_ID, IE_DISPLAY, IE_SIGNAL, -1};
-static int ie_DISCONNECT[] = {IE_CAUSE | IE_MANDATORY, IE_FACILITY,
- IE_PROGRESS, IE_DISPLAY, IE_SIGNAL, IE_USER_USER, -1};
-static int ie_INFORMATION[] = {IE_COMPLETE, IE_DISPLAY, IE_KEYPAD, IE_SIGNAL,
- IE_CALLED_PN, -1};
-static int ie_NOTIFY[] = {IE_BEARER, IE_NOTIFY | IE_MANDATORY, IE_DISPLAY, -1};
-static int ie_PROGRESS[] = {IE_BEARER, IE_CAUSE, IE_FACILITY, IE_PROGRESS |
- IE_MANDATORY, IE_DISPLAY, IE_HLC, IE_USER_USER, -1};
-static int ie_RELEASE[] = {IE_CAUSE | IE_MANDATORY_1, IE_FACILITY, IE_DISPLAY,
- IE_SIGNAL, IE_USER_USER, -1};
-/* a RELEASE_COMPLETE with errors doesn't require special actions
- static int ie_RELEASE_COMPLETE[] = {IE_CAUSE | IE_MANDATORY_1, IE_DISPLAY, IE_SIGNAL, IE_USER_USER, -1};
-*/
-static int ie_RESUME_ACKNOWLEDGE[] = {IE_CHANNEL_ID | IE_MANDATORY, IE_FACILITY,
- IE_DISPLAY, -1};
-static int ie_RESUME_REJECT[] = {IE_CAUSE | IE_MANDATORY, IE_DISPLAY, -1};
-static int ie_SETUP[] = {IE_COMPLETE, IE_BEARER | IE_MANDATORY,
- IE_CHANNEL_ID | IE_MANDATORY, IE_FACILITY, IE_PROGRESS,
- IE_NET_FAC, IE_DISPLAY, IE_KEYPAD, IE_SIGNAL, IE_CALLING_PN,
- IE_CALLING_SUB, IE_CALLED_PN, IE_CALLED_SUB, IE_REDIR_NR,
- IE_LLC, IE_HLC, IE_USER_USER, -1};
-static int ie_SETUP_ACKNOWLEDGE[] = {IE_CHANNEL_ID | IE_MANDATORY, IE_FACILITY,
- IE_PROGRESS, IE_DISPLAY, IE_SIGNAL, -1};
-static int ie_STATUS[] = {IE_CAUSE | IE_MANDATORY, IE_CALL_STATE |
- IE_MANDATORY, IE_DISPLAY, -1};
-static int ie_STATUS_ENQUIRY[] = {IE_DISPLAY, -1};
-static int ie_SUSPEND_ACKNOWLEDGE[] = {IE_DISPLAY, IE_FACILITY, -1};
-static int ie_SUSPEND_REJECT[] = {IE_CAUSE | IE_MANDATORY, IE_DISPLAY, -1};
-/* not used
- * static int ie_CONGESTION_CONTROL[] = {IE_CONGESTION | IE_MANDATORY,
- * IE_CAUSE | IE_MANDATORY, IE_DISPLAY, -1};
- * static int ie_USER_INFORMATION[] = {IE_MORE_DATA, IE_USER_USER | IE_MANDATORY, -1};
- * static int ie_RESTART[] = {IE_CHANNEL_ID, IE_DISPLAY, IE_RESTART_IND |
- * IE_MANDATORY, -1};
- */
-static int ie_FACILITY[] = {IE_FACILITY | IE_MANDATORY, IE_DISPLAY, -1};
-static int comp_required[] = {1, 2, 3, 5, 6, 7, 9, 10, 11, 14, 15, -1};
-static int l3_valid_states[] = {0, 1, 2, 3, 4, 6, 7, 8, 9, 10, 11, 12, 15, 17, 19, 25, -1};
-
-struct ie_len {
- int ie;
- int len;
-};
-
-static
-struct ie_len max_ie_len[] = {
- {IE_SEGMENT, 4},
- {IE_BEARER, 12},
- {IE_CAUSE, 32},
- {IE_CALL_ID, 10},
- {IE_CALL_STATE, 3},
- {IE_CHANNEL_ID, 34},
- {IE_FACILITY, 255},
- {IE_PROGRESS, 4},
- {IE_NET_FAC, 255},
- {IE_NOTIFY, 3},
- {IE_DISPLAY, 82},
- {IE_DATE, 8},
- {IE_KEYPAD, 34},
- {IE_SIGNAL, 3},
- {IE_INFORATE, 6},
- {IE_E2E_TDELAY, 11},
- {IE_TDELAY_SEL, 5},
- {IE_PACK_BINPARA, 3},
- {IE_PACK_WINSIZE, 4},
- {IE_PACK_SIZE, 4},
- {IE_CUG, 7},
- {IE_REV_CHARGE, 3},
- {IE_CALLING_PN, 24},
- {IE_CALLING_SUB, 23},
- {IE_CALLED_PN, 24},
- {IE_CALLED_SUB, 23},
- {IE_REDIR_NR, 255},
- {IE_TRANS_SEL, 255},
- {IE_RESTART_IND, 3},
- {IE_LLC, 18},
- {IE_HLC, 5},
- {IE_USER_USER, 131},
- {-1, 0},
-};
-
-static int
-getmax_ie_len(u_char ie) {
- int i = 0;
- while (max_ie_len[i].ie != -1) {
- if (max_ie_len[i].ie == ie)
- return (max_ie_len[i].len);
- i++;
- }
- return (255);
-}
-
-static int
-ie_in_set(struct l3_process *pc, u_char ie, int *checklist) {
- int ret = 1;
-
- while (*checklist != -1) {
- if ((*checklist & 0xff) == ie) {
- if (ie & 0x80)
- return (-ret);
- else
- return (ret);
- }
- ret++;
- checklist++;
- }
- return (0);
-}
-
-static int
-check_infoelements(struct l3_process *pc, struct sk_buff *skb, int *checklist)
-{
- int *cl = checklist;
- u_char mt;
- u_char *p, ie;
- int l, newpos, oldpos;
- int err_seq = 0, err_len = 0, err_compr = 0, err_ureg = 0;
- u_char codeset = 0;
- u_char old_codeset = 0;
- u_char codelock = 1;
-
- p = skb->data;
- /* skip cr */
- p++;
- l = (*p++) & 0xf;
- p += l;
- mt = *p++;
- oldpos = 0;
- while ((p - skb->data) < skb->len) {
- if ((*p & 0xf0) == 0x90) { /* shift codeset */
- old_codeset = codeset;
- codeset = *p & 7;
- if (*p & 0x08)
- codelock = 0;
- else
- codelock = 1;
- if (pc->debug & L3_DEB_CHECK)
- l3_debug(pc->st, "check IE shift%scodeset %d->%d",
- codelock ? " locking " : " ", old_codeset, codeset);
- p++;
- continue;
- }
- if (!codeset) { /* only codeset 0 */
- if ((newpos = ie_in_set(pc, *p, cl))) {
- if (newpos > 0) {
- if (newpos < oldpos)
- err_seq++;
- else
- oldpos = newpos;
- }
- } else {
- if (ie_in_set(pc, *p, comp_required))
- err_compr++;
- else
- err_ureg++;
- }
- }
- ie = *p++;
- if (ie & 0x80) {
- l = 1;
- } else {
- l = *p++;
- p += l;
- l += 2;
- }
- if (!codeset && (l > getmax_ie_len(ie)))
- err_len++;
- if (!codelock) {
- if (pc->debug & L3_DEB_CHECK)
- l3_debug(pc->st, "check IE shift back codeset %d->%d",
- codeset, old_codeset);
- codeset = old_codeset;
- codelock = 1;
- }
- }
- if (err_compr | err_ureg | err_len | err_seq) {
- if (pc->debug & L3_DEB_CHECK)
- l3_debug(pc->st, "check IE MT(%x) %d/%d/%d/%d",
- mt, err_compr, err_ureg, err_len, err_seq);
- if (err_compr)
- return (ERR_IE_COMPREHENSION);
- if (err_ureg)
- return (ERR_IE_UNRECOGNIZED);
- if (err_len)
- return (ERR_IE_LENGTH);
- if (err_seq)
- return (ERR_IE_SEQUENCE);
- }
- return (0);
-}
-
-/* verify that a message type exists and contains no IE errors */
-static int
-l3dss1_check_messagetype_validity(struct l3_process *pc, int mt, void *arg)
-{
- switch (mt) {
- case MT_ALERTING:
- case MT_CALL_PROCEEDING:
- case MT_CONNECT:
- case MT_CONNECT_ACKNOWLEDGE:
- case MT_DISCONNECT:
- case MT_INFORMATION:
- case MT_FACILITY:
- case MT_NOTIFY:
- case MT_PROGRESS:
- case MT_RELEASE:
- case MT_RELEASE_COMPLETE:
- case MT_SETUP:
- case MT_SETUP_ACKNOWLEDGE:
- case MT_RESUME_ACKNOWLEDGE:
- case MT_RESUME_REJECT:
- case MT_SUSPEND_ACKNOWLEDGE:
- case MT_SUSPEND_REJECT:
- case MT_USER_INFORMATION:
- case MT_RESTART:
- case MT_RESTART_ACKNOWLEDGE:
- case MT_CONGESTION_CONTROL:
- case MT_STATUS:
- case MT_STATUS_ENQUIRY:
- if (pc->debug & L3_DEB_CHECK)
- l3_debug(pc->st, "l3dss1_check_messagetype_validity mt(%x) OK", mt);
- break;
- case MT_RESUME: /* RESUME only in user->net */
- case MT_SUSPEND: /* SUSPEND only in user->net */
- default:
- if (pc->debug & (L3_DEB_CHECK | L3_DEB_WARN))
- l3_debug(pc->st, "l3dss1_check_messagetype_validity mt(%x) fail", mt);
- pc->para.cause = 97;
- l3dss1_status_send(pc, 0, NULL);
- return (1);
- }
- return (0);
-}
-
-static void
-l3dss1_std_ie_err(struct l3_process *pc, int ret) {
-
- if (pc->debug & L3_DEB_CHECK)
- l3_debug(pc->st, "check_infoelements ret %d", ret);
- switch (ret) {
- case 0:
- break;
- case ERR_IE_COMPREHENSION:
- pc->para.cause = 96;
- l3dss1_status_send(pc, 0, NULL);
- break;
- case ERR_IE_UNRECOGNIZED:
- pc->para.cause = 99;
- l3dss1_status_send(pc, 0, NULL);
- break;
- case ERR_IE_LENGTH:
- pc->para.cause = 100;
- l3dss1_status_send(pc, 0, NULL);
- break;
- case ERR_IE_SEQUENCE:
- default:
- break;
- }
-}
-
-static int
-l3dss1_get_channel_id(struct l3_process *pc, struct sk_buff *skb) {
- u_char *p;
-
- p = skb->data;
- if ((p = findie(p, skb->len, IE_CHANNEL_ID, 0))) {
- p++;
- if (*p != 1) { /* len for BRI = 1 */
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "wrong chid len %d", *p);
- return (-2);
- }
- p++;
- if (*p & 0x60) { /* only base rate interface */
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "wrong chid %x", *p);
- return (-3);
- }
- return (*p & 0x3);
- } else
- return (-1);
-}
-
-static int
-l3dss1_get_cause(struct l3_process *pc, struct sk_buff *skb) {
- u_char l, i = 0;
- u_char *p;
-
- p = skb->data;
- pc->para.cause = 31;
- pc->para.loc = 0;
- if ((p = findie(p, skb->len, IE_CAUSE, 0))) {
- p++;
- l = *p++;
- if (l > 30)
- return (1);
- if (l) {
- pc->para.loc = *p++;
- l--;
- } else {
- return (2);
- }
- if (l && !(pc->para.loc & 0x80)) {
- l--;
- p++; /* skip recommendation */
- }
- if (l) {
- pc->para.cause = *p++;
- l--;
- if (!(pc->para.cause & 0x80))
- return (3);
- } else
- return (4);
- while (l && (i < 6)) {
- pc->para.diag[i++] = *p++;
- l--;
- }
- } else
- return (-1);
- return (0);
-}
-
-static void
-l3dss1_msg_with_uus(struct l3_process *pc, u_char cmd)
-{
- struct sk_buff *skb;
- u_char tmp[16 + 40];
- u_char *p = tmp;
- int l;
-
- MsgHead(p, pc->callref, cmd);
-
- if (pc->prot.dss1.uus1_data[0])
- { *p++ = IE_USER_USER; /* UUS info element */
- *p++ = strlen(pc->prot.dss1.uus1_data) + 1;
- *p++ = 0x04; /* IA5 chars */
- strcpy(p, pc->prot.dss1.uus1_data);
- p += strlen(pc->prot.dss1.uus1_data);
- pc->prot.dss1.uus1_data[0] = '\0';
- }
-
- l = p - tmp;
- if (!(skb = l3_alloc_skb(l)))
- return;
- skb_put_data(skb, tmp, l);
- l3_msg(pc->st, DL_DATA | REQUEST, skb);
-} /* l3dss1_msg_with_uus */
-
-static void
-l3dss1_release_req(struct l3_process *pc, u_char pr, void *arg)
-{
- StopAllL3Timer(pc);
- newl3state(pc, 19);
- if (!pc->prot.dss1.uus1_data[0])
- l3dss1_message(pc, MT_RELEASE);
- else
- l3dss1_msg_with_uus(pc, MT_RELEASE);
- L3AddTimer(&pc->timer, T308, CC_T308_1);
-}
-
-static void
-l3dss1_release_cmpl(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb = arg;
- int ret;
-
- if ((ret = l3dss1_get_cause(pc, skb)) > 0) {
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "RELCMPL get_cause ret(%d)", ret);
- } else if (ret < 0)
- pc->para.cause = NO_CAUSE;
- StopAllL3Timer(pc);
- newl3state(pc, 0);
- pc->st->l3.l3l4(pc->st, CC_RELEASE | CONFIRM, pc);
- dss1_release_l3_process(pc);
-}
-
-#ifdef EXT_BEARER_CAPS
-
-static u_char *
-EncodeASyncParams(u_char *p, u_char si2)
-{ // 7c 06 88 90 21 42 00 bb
-
- p[0] = 0;
- p[1] = 0x40; // Intermediate rate: 16 kbit/s jj 2000.02.19
- p[2] = 0x80;
- if (si2 & 32) // 7 data bits
-
- p[2] += 16;
- else // 8 data bits
-
- p[2] += 24;
-
- if (si2 & 16) // 2 stop bits
-
- p[2] += 96;
- else // 1 stop bit
-
- p[2] += 32;
-
- if (si2 & 8) // even parity
-
- p[2] += 2;
- else // no parity
-
- p[2] += 3;
-
- switch (si2 & 0x07) {
- case 0:
- p[0] = 66; // 1200 bit/s
-
- break;
- case 1:
- p[0] = 88; // 1200/75 bit/s
-
- break;
- case 2:
- p[0] = 87; // 75/1200 bit/s
-
- break;
- case 3:
- p[0] = 67; // 2400 bit/s
-
- break;
- case 4:
- p[0] = 69; // 4800 bit/s
-
- break;
- case 5:
- p[0] = 72; // 9600 bit/s
-
- break;
- case 6:
- p[0] = 73; // 14400 bit/s
-
- break;
- case 7:
- p[0] = 75; // 19200 bit/s
-
- break;
- }
- return p + 3;
-}
-
-static u_char
-EncodeSyncParams(u_char si2, u_char ai)
-{
-
- switch (si2) {
- case 0:
- return ai + 2; // 1200 bit/s
-
- case 1:
- return ai + 24; // 1200/75 bit/s
-
- case 2:
- return ai + 23; // 75/1200 bit/s
-
- case 3:
- return ai + 3; // 2400 bit/s
-
- case 4:
- return ai + 5; // 4800 bit/s
-
- case 5:
- return ai + 8; // 9600 bit/s
-
- case 6:
- return ai + 9; // 14400 bit/s
-
- case 7:
- return ai + 11; // 19200 bit/s
-
- case 8:
- return ai + 14; // 48000 bit/s
-
- case 9:
- return ai + 15; // 56000 bit/s
-
- case 15:
- return ai + 40; // negotiate bit/s
-
- default:
- break;
- }
- return ai;
-}
-
-
-static u_char
-DecodeASyncParams(u_char si2, u_char *p)
-{
- u_char info;
-
- switch (p[5]) {
- case 66: // 1200 bit/s
-
- break; // si2 doesn't change
-
- case 88: // 1200/75 bit/s
-
- si2 += 1;
- break;
- case 87: // 75/1200 bit/s
-
- si2 += 2;
- break;
- case 67: // 2400 bit/s
-
- si2 += 3;
- break;
- case 69: // 4800 bit/s
-
- si2 += 4;
- break;
- case 72: // 9600 bit/s
-
- si2 += 5;
- break;
- case 73: // 14400 bit/s
-
- si2 += 6;
- break;
- case 75: // 19200 bit/s
-
- si2 += 7;
- break;
- }
-
- info = p[7] & 0x7f;
- if ((info & 16) && (!(info & 8))) // 7 data bits
-
- si2 += 32; // else 8 data bits
-
- if ((info & 96) == 96) // 2 stop bits
-
- si2 += 16; // else 1 stop bit
-
- if ((info & 2) && (!(info & 1))) // even parity
-
- si2 += 8; // else no parity
-
- return si2;
-}
-
-
-static u_char
-DecodeSyncParams(u_char si2, u_char info)
-{
- info &= 0x7f;
- switch (info) {
- case 40: // bit/s negotiation failed ai := 165 not 175!
-
- return si2 + 15;
- case 15: // 56000 bit/s failed, ai := 0 not 169 !
-
- return si2 + 9;
- case 14: // 48000 bit/s
-
- return si2 + 8;
- case 11: // 19200 bit/s
-
- return si2 + 7;
- case 9: // 14400 bit/s
-
- return si2 + 6;
- case 8: // 9600 bit/s
-
- return si2 + 5;
- case 5: // 4800 bit/s
-
- return si2 + 4;
- case 3: // 2400 bit/s
-
- return si2 + 3;
- case 23: // 75/1200 bit/s
-
- return si2 + 2;
- case 24: // 1200/75 bit/s
-
- return si2 + 1;
- default: // 1200 bit/s
-
- return si2;
- }
-}
-
-static u_char
-DecodeSI2(struct sk_buff *skb)
-{
- u_char *p; //, *pend=skb->data + skb->len;
-
- if ((p = findie(skb->data, skb->len, 0x7c, 0))) {
- switch (p[4] & 0x0f) {
- case 0x01:
- if (p[1] == 0x04) // sync. bit rate adaptation
-
- return DecodeSyncParams(160, p[5]); // V.110/X.30
-
- else if (p[1] == 0x06) // async. bit rate adaptation
-
- return DecodeASyncParams(192, p); // V.110/X.30
-
- break;
- case 0x08: // if (p[5] == 0x02) // sync. bit rate adaptation
- if (p[1] > 3)
- return DecodeSyncParams(176, p[5]); // V.120
- break;
- }
- }
- return 0;
-}
-
-#endif
-
-
-static void
-l3dss1_setup_req(struct l3_process *pc, u_char pr,
- void *arg)
-{
- struct sk_buff *skb;
- u_char tmp[128];
- u_char *p = tmp;
- u_char channel = 0;
-
- u_char send_keypad;
- u_char screen = 0x80;
- u_char *teln;
- u_char *msn;
- u_char *sub;
- u_char *sp;
- int l;
-
- MsgHead(p, pc->callref, MT_SETUP);
-
- teln = pc->para.setup.phone;
-#ifndef CONFIG_HISAX_NO_KEYPAD
- send_keypad = (strchr(teln, '*') || strchr(teln, '#')) ? 1 : 0;
-#else
- send_keypad = 0;
-#endif
-#ifndef CONFIG_HISAX_NO_SENDCOMPLETE
- if (!send_keypad)
- *p++ = 0xa1; /* complete indicator */
-#endif
- /*
- * Set Bearer Capability, Map info from 1TR6-convention to EDSS1
- */
- switch (pc->para.setup.si1) {
- case 1: /* Telephony */
- *p++ = IE_BEARER;
- *p++ = 0x3; /* Length */
- *p++ = 0x90; /* Coding Std. CCITT, 3.1 kHz audio */
- *p++ = 0x90; /* Circuit-Mode 64kbps */
- *p++ = 0xa3; /* A-Law Audio */
- break;
- case 5: /* Datatransmission 64k, BTX */
- case 7: /* Datatransmission 64k */
- default:
- *p++ = IE_BEARER;
- *p++ = 0x2; /* Length */
- *p++ = 0x88; /* Coding Std. CCITT, unrestr. dig. Inform. */
- *p++ = 0x90; /* Circuit-Mode 64kbps */
- break;
- }
-
- if (send_keypad) {
- *p++ = IE_KEYPAD;
- *p++ = strlen(teln);
- while (*teln)
- *p++ = (*teln++) & 0x7F;
- }
-
- /*
- * What about info2? Mapping to High-Layer-Compatibility?
- */
- if ((*teln) && (!send_keypad)) {
- /* parse number for special things */
- if (!isdigit(*teln)) {
- switch (0x5f & *teln) {
- case 'C':
- channel = 0x08;
- /* fall through */
- case 'P':
- channel |= 0x80;
- teln++;
- if (*teln == '1')
- channel |= 0x01;
- else
- channel |= 0x02;
- break;
- case 'R':
- screen = 0xA0;
- break;
- case 'D':
- screen = 0x80;
- break;
-
- default:
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "Wrong MSN Code");
- break;
- }
- teln++;
- }
- }
- if (channel) {
- *p++ = IE_CHANNEL_ID;
- *p++ = 1;
- *p++ = channel;
- }
- msn = pc->para.setup.eazmsn;
- sub = NULL;
- sp = msn;
- while (*sp) {
- if ('.' == *sp) {
- sub = sp;
- *sp = 0;
- } else
- sp++;
- }
- if (*msn) {
- *p++ = IE_CALLING_PN;
- *p++ = strlen(msn) + (screen ? 2 : 1);
- /* Classify as AnyPref. */
- if (screen) {
- *p++ = 0x01; /* Ext = '0'B, Type = '000'B, Plan = '0001'B. */
- *p++ = screen;
- } else
- *p++ = 0x81; /* Ext = '1'B, Type = '000'B, Plan = '0001'B. */
- while (*msn)
- *p++ = *msn++ & 0x7f;
- }
- if (sub) {
- *sub++ = '.';
- *p++ = IE_CALLING_SUB;
- *p++ = strlen(sub) + 2;
- *p++ = 0x80; /* NSAP coded */
- *p++ = 0x50; /* local IDI format */
- while (*sub)
- *p++ = *sub++ & 0x7f;
- }
- sub = NULL;
- sp = teln;
- while (*sp) {
- if ('.' == *sp) {
- sub = sp;
- *sp = 0;
- } else
- sp++;
- }
-
- if (!send_keypad) {
- *p++ = IE_CALLED_PN;
- *p++ = strlen(teln) + 1;
- /* Classify as AnyPref. */
- *p++ = 0x81; /* Ext = '1'B, Type = '000'B, Plan = '0001'B. */
- while (*teln)
- *p++ = *teln++ & 0x7f;
-
- if (sub) {
- *sub++ = '.';
- *p++ = IE_CALLED_SUB;
- *p++ = strlen(sub) + 2;
- *p++ = 0x80; /* NSAP coded */
- *p++ = 0x50; /* local IDI format */
- while (*sub)
- *p++ = *sub++ & 0x7f;
- }
- }
-#ifdef EXT_BEARER_CAPS
- if ((pc->para.setup.si2 >= 160) && (pc->para.setup.si2 <= 175)) { // sync. bit rate adaptation, V.110/X.30
-
- *p++ = IE_LLC;
- *p++ = 0x04;
- *p++ = 0x88;
- *p++ = 0x90;
- *p++ = 0x21;
- *p++ = EncodeSyncParams(pc->para.setup.si2 - 160, 0x80);
- } else if ((pc->para.setup.si2 >= 176) && (pc->para.setup.si2 <= 191)) { // sync. bit rate adaptation, V.120
-
- *p++ = IE_LLC;
- *p++ = 0x05;
- *p++ = 0x88;
- *p++ = 0x90;
- *p++ = 0x28;
- *p++ = EncodeSyncParams(pc->para.setup.si2 - 176, 0);
- *p++ = 0x82;
- } else if (pc->para.setup.si2 >= 192) { // async. bit rate adaptation, V.110/X.30
-
- *p++ = IE_LLC;
- *p++ = 0x06;
- *p++ = 0x88;
- *p++ = 0x90;
- *p++ = 0x21;
- p = EncodeASyncParams(p, pc->para.setup.si2 - 192);
-#ifndef CONFIG_HISAX_NO_LLC
- } else {
- switch (pc->para.setup.si1) {
- case 1: /* Telephony */
- *p++ = IE_LLC;
- *p++ = 0x3; /* Length */
- *p++ = 0x90; /* Coding Std. CCITT, 3.1 kHz audio */
- *p++ = 0x90; /* Circuit-Mode 64kbps */
- *p++ = 0xa3; /* A-Law Audio */
- break;
- case 5: /* Datatransmission 64k, BTX */
- case 7: /* Datatransmission 64k */
- default:
- *p++ = IE_LLC;
- *p++ = 0x2; /* Length */
- *p++ = 0x88; /* Coding Std. CCITT, unrestr. dig. Inform. */
- *p++ = 0x90; /* Circuit-Mode 64kbps */
- break;
- }
-#endif
- }
-#endif
- l = p - tmp;
- if (!(skb = l3_alloc_skb(l)))
- return;
- skb_put_data(skb, tmp, l);
- L3DelTimer(&pc->timer);
- L3AddTimer(&pc->timer, T303, CC_T303);
- newl3state(pc, 1);
- l3_msg(pc->st, DL_DATA | REQUEST, skb);
-}
-
-static void
-l3dss1_call_proc(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb = arg;
- int id, ret;
-
- if ((id = l3dss1_get_channel_id(pc, skb)) >= 0) {
- if ((0 == id) || ((3 == id) && (0x10 == pc->para.moderate))) {
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "setup answer with wrong chid %x", id);
- pc->para.cause = 100;
- l3dss1_status_send(pc, pr, NULL);
- return;
- }
- pc->para.bchannel = id;
- } else if (1 == pc->state) {
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "setup answer wrong chid (ret %d)", id);
- if (id == -1)
- pc->para.cause = 96;
- else
- pc->para.cause = 100;
- l3dss1_status_send(pc, pr, NULL);
- return;
- }
- /* Now we are on non-mandatory IEs */
- ret = check_infoelements(pc, skb, ie_CALL_PROCEEDING);
- if (ERR_IE_COMPREHENSION == ret) {
- l3dss1_std_ie_err(pc, ret);
- return;
- }
- L3DelTimer(&pc->timer);
- newl3state(pc, 3);
- L3AddTimer(&pc->timer, T310, CC_T310);
- if (ret) /* STATUS for non-mandatory IE errors after actions are taken */
- l3dss1_std_ie_err(pc, ret);
- pc->st->l3.l3l4(pc->st, CC_PROCEEDING | INDICATION, pc);
-}
-
-static void
-l3dss1_setup_ack(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb = arg;
- int id, ret;
-
- if ((id = l3dss1_get_channel_id(pc, skb)) >= 0) {
- if ((0 == id) || ((3 == id) && (0x10 == pc->para.moderate))) {
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "setup answer with wrong chid %x", id);
- pc->para.cause = 100;
- l3dss1_status_send(pc, pr, NULL);
- return;
- }
- pc->para.bchannel = id;
- } else {
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "setup answer wrong chid (ret %d)", id);
- if (id == -1)
- pc->para.cause = 96;
- else
- pc->para.cause = 100;
- l3dss1_status_send(pc, pr, NULL);
- return;
- }
- /* Now we are on non-mandatory IEs */
- ret = check_infoelements(pc, skb, ie_SETUP_ACKNOWLEDGE);
- if (ERR_IE_COMPREHENSION == ret) {
- l3dss1_std_ie_err(pc, ret);
- return;
- }
- L3DelTimer(&pc->timer);
- newl3state(pc, 2);
- L3AddTimer(&pc->timer, T304, CC_T304);
- if (ret) /* STATUS for non-mandatory IE errors after actions are taken */
- l3dss1_std_ie_err(pc, ret);
- pc->st->l3.l3l4(pc->st, CC_MORE_INFO | INDICATION, pc);
-}
-
-static void
-l3dss1_disconnect(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb = arg;
- u_char *p;
- int ret;
- u_char cause = 0;
-
- StopAllL3Timer(pc);
- if ((ret = l3dss1_get_cause(pc, skb))) {
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "DISC get_cause ret(%d)", ret);
- if (ret < 0)
- cause = 96;
- else if (ret > 0)
- cause = 100;
- }
- if ((p = findie(skb->data, skb->len, IE_FACILITY, 0)))
- l3dss1_parse_facility(pc->st, pc, pc->callref, p);
- ret = check_infoelements(pc, skb, ie_DISCONNECT);
- if (ERR_IE_COMPREHENSION == ret)
- cause = 96;
- else if ((!cause) && (ERR_IE_UNRECOGNIZED == ret))
- cause = 99;
- ret = pc->state;
- newl3state(pc, 12);
- if (cause)
- newl3state(pc, 19);
- if (11 != ret)
- pc->st->l3.l3l4(pc->st, CC_DISCONNECT | INDICATION, pc);
- else if (!cause)
- l3dss1_release_req(pc, pr, NULL);
- if (cause) {
- l3dss1_message_cause(pc, MT_RELEASE, cause);
- L3AddTimer(&pc->timer, T308, CC_T308_1);
- }
-}
-
-static void
-l3dss1_connect(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb = arg;
- int ret;
-
- ret = check_infoelements(pc, skb, ie_CONNECT);
- if (ERR_IE_COMPREHENSION == ret) {
- l3dss1_std_ie_err(pc, ret);
- return;
- }
- L3DelTimer(&pc->timer); /* T310 */
- newl3state(pc, 10);
- pc->para.chargeinfo = 0;
-	/* COLP handling should be inserted here  KKe */
- if (ret)
- l3dss1_std_ie_err(pc, ret);
- pc->st->l3.l3l4(pc->st, CC_SETUP | CONFIRM, pc);
-}
-
-static void
-l3dss1_alerting(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb = arg;
- int ret;
-
- ret = check_infoelements(pc, skb, ie_ALERTING);
- if (ERR_IE_COMPREHENSION == ret) {
- l3dss1_std_ie_err(pc, ret);
- return;
- }
- L3DelTimer(&pc->timer); /* T304 */
- newl3state(pc, 4);
- if (ret)
- l3dss1_std_ie_err(pc, ret);
- pc->st->l3.l3l4(pc->st, CC_ALERTING | INDICATION, pc);
-}
-
-static void
-l3dss1_setup(struct l3_process *pc, u_char pr, void *arg)
-{
- u_char *p;
- int bcfound = 0;
- char tmp[80];
- struct sk_buff *skb = arg;
- int id;
- int err = 0;
-
- /*
- * Bearer Capabilities
- */
- p = skb->data;
-	/* only the first occurrence will be detected! */
- if ((p = findie(p, skb->len, 0x04, 0))) {
- if ((p[1] < 2) || (p[1] > 11))
- err = 1;
- else {
- pc->para.setup.si2 = 0;
- switch (p[2] & 0x7f) {
- case 0x00: /* Speech */
- case 0x10: /* 3.1 Khz audio */
- pc->para.setup.si1 = 1;
- break;
- case 0x08: /* Unrestricted digital information */
- pc->para.setup.si1 = 7;
-/* JIM, 05.11.97 I wanna set service indicator 2 */
-#ifdef EXT_BEARER_CAPS
- pc->para.setup.si2 = DecodeSI2(skb);
-#endif
- break;
- case 0x09: /* Restricted digital information */
- pc->para.setup.si1 = 2;
- break;
- case 0x11:
-				/* Unrestricted digital information with
-				 * tones/announcements (or 7 kHz audio)
-				 */
- pc->para.setup.si1 = 3;
- break;
- case 0x18: /* Video */
- pc->para.setup.si1 = 4;
- break;
- default:
- err = 2;
- break;
- }
- switch (p[3] & 0x7f) {
-			case 0x40:	/* packet mode */
- pc->para.setup.si1 = 8;
- break;
- case 0x10: /* 64 kbit */
- case 0x11: /* 2*64 kbit */
- case 0x13: /* 384 kbit */
- case 0x15: /* 1536 kbit */
- case 0x17: /* 1920 kbit */
- pc->para.moderate = p[3] & 0x7f;
- break;
- default:
- err = 3;
- break;
- }
- }
- if (pc->debug & L3_DEB_SI)
- l3_debug(pc->st, "SI=%d, AI=%d",
- pc->para.setup.si1, pc->para.setup.si2);
- if (err) {
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "setup with wrong bearer(l=%d:%x,%x)",
- p[1], p[2], p[3]);
- pc->para.cause = 100;
- l3dss1_msg_without_setup(pc, pr, NULL);
- return;
- }
- } else {
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "setup without bearer capabilities");
- /* ETS 300-104 1.3.3 */
- pc->para.cause = 96;
- l3dss1_msg_without_setup(pc, pr, NULL);
- return;
- }
- /*
- * Channel Identification
- */
- if ((id = l3dss1_get_channel_id(pc, skb)) >= 0) {
- if ((pc->para.bchannel = id)) {
- if ((3 == id) && (0x10 == pc->para.moderate)) {
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "setup with wrong chid %x",
- id);
- pc->para.cause = 100;
- l3dss1_msg_without_setup(pc, pr, NULL);
- return;
- }
- bcfound++;
- } else
- { if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "setup without bchannel, call waiting");
- bcfound++;
- }
- } else {
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "setup with wrong chid ret %d", id);
- if (id == -1)
- pc->para.cause = 96;
- else
- pc->para.cause = 100;
- l3dss1_msg_without_setup(pc, pr, NULL);
- return;
- }
-	/* Now we are at the non-mandatory IEs */
- err = check_infoelements(pc, skb, ie_SETUP);
- if (ERR_IE_COMPREHENSION == err) {
- pc->para.cause = 96;
- l3dss1_msg_without_setup(pc, pr, NULL);
- return;
- }
- p = skb->data;
- if ((p = findie(p, skb->len, 0x70, 0)))
- iecpy(pc->para.setup.eazmsn, p, 1);
- else
- pc->para.setup.eazmsn[0] = 0;
-
- p = skb->data;
- if ((p = findie(p, skb->len, 0x71, 0))) {
- /* Called party subaddress */
- if ((p[1] >= 2) && (p[2] == 0x80) && (p[3] == 0x50)) {
- tmp[0] = '.';
- iecpy(&tmp[1], p, 2);
- strcat(pc->para.setup.eazmsn, tmp);
- } else if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "wrong called subaddress");
- }
- p = skb->data;
- if ((p = findie(p, skb->len, 0x6c, 0))) {
- pc->para.setup.plan = p[2];
- if (p[2] & 0x80) {
- iecpy(pc->para.setup.phone, p, 1);
- pc->para.setup.screen = 0;
- } else {
- iecpy(pc->para.setup.phone, p, 2);
- pc->para.setup.screen = p[3];
- }
- } else {
- pc->para.setup.phone[0] = 0;
- pc->para.setup.plan = 0;
- pc->para.setup.screen = 0;
- }
- p = skb->data;
- if ((p = findie(p, skb->len, 0x6d, 0))) {
- /* Calling party subaddress */
- if ((p[1] >= 2) && (p[2] == 0x80) && (p[3] == 0x50)) {
- tmp[0] = '.';
- iecpy(&tmp[1], p, 2);
- strcat(pc->para.setup.phone, tmp);
- } else if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "wrong calling subaddress");
- }
- newl3state(pc, 6);
-	if (err)	/* STATUS for non-mandatory IE errors after actions are taken */
- l3dss1_std_ie_err(pc, err);
- pc->st->l3.l3l4(pc->st, CC_SETUP | INDICATION, pc);
-}
-
-static void
-l3dss1_reset(struct l3_process *pc, u_char pr, void *arg)
-{
- dss1_release_l3_process(pc);
-}
-
-static void
-l3dss1_disconnect_req(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb;
- u_char tmp[16 + 40];
- u_char *p = tmp;
- int l;
- u_char cause = 16;
-
- if (pc->para.cause != NO_CAUSE)
- cause = pc->para.cause;
-
- StopAllL3Timer(pc);
-
- MsgHead(p, pc->callref, MT_DISCONNECT);
-
- *p++ = IE_CAUSE;
- *p++ = 0x2;
- *p++ = 0x80;
- *p++ = cause | 0x80;
-
- if (pc->prot.dss1.uus1_data[0])
- { *p++ = IE_USER_USER; /* UUS info element */
- *p++ = strlen(pc->prot.dss1.uus1_data) + 1;
- *p++ = 0x04; /* IA5 chars */
- strcpy(p, pc->prot.dss1.uus1_data);
- p += strlen(pc->prot.dss1.uus1_data);
- pc->prot.dss1.uus1_data[0] = '\0';
- }
-
- l = p - tmp;
- if (!(skb = l3_alloc_skb(l)))
- return;
- skb_put_data(skb, tmp, l);
- newl3state(pc, 11);
- l3_msg(pc->st, DL_DATA | REQUEST, skb);
- L3AddTimer(&pc->timer, T305, CC_T305);
-}
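
Several of the senders above and below (disconnect, release, reject, status, T305 expiry) append the same minimal Cause information element: the IE_CAUSE identifier, a length of 2, a location octet 0x80 and the cause value with the extension bit set. The following is only an illustrative standalone sketch of those four octets for the default cause value 16 used above; it is not code from this patch.

#include <stdio.h>

#define IE_CAUSE 0x08

int main(void)
{
	unsigned char tmp[4], *p = tmp, cause = 16;

	*p++ = IE_CAUSE;	/* information element identifier */
	*p++ = 0x2;		/* contents length: 2 octets */
	*p++ = 0x80;		/* coding standard / location octet */
	*p++ = cause | 0x80;	/* cause value with extension bit set */

	printf("%02x %02x %02x %02x\n", tmp[0], tmp[1], tmp[2], tmp[3]);
	/* prints: 08 02 80 90 */
	return 0;
}
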
-
-static void
-l3dss1_setup_rsp(struct l3_process *pc, u_char pr,
- void *arg)
-{
- if (!pc->para.bchannel)
- { if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "D-chan connect for waiting call");
- l3dss1_disconnect_req(pc, pr, arg);
- return;
- }
- newl3state(pc, 8);
- l3dss1_message(pc, MT_CONNECT);
- L3DelTimer(&pc->timer);
- L3AddTimer(&pc->timer, T313, CC_T313);
-}
-
-static void
-l3dss1_connect_ack(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb = arg;
- int ret;
-
- ret = check_infoelements(pc, skb, ie_CONNECT_ACKNOWLEDGE);
- if (ERR_IE_COMPREHENSION == ret) {
- l3dss1_std_ie_err(pc, ret);
- return;
- }
- newl3state(pc, 10);
- L3DelTimer(&pc->timer);
- if (ret)
- l3dss1_std_ie_err(pc, ret);
- pc->st->l3.l3l4(pc->st, CC_SETUP_COMPL | INDICATION, pc);
-}
-
-static void
-l3dss1_reject_req(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb;
- u_char tmp[16];
- u_char *p = tmp;
- int l;
- u_char cause = 21;
-
- if (pc->para.cause != NO_CAUSE)
- cause = pc->para.cause;
-
- MsgHead(p, pc->callref, MT_RELEASE_COMPLETE);
-
- *p++ = IE_CAUSE;
- *p++ = 0x2;
- *p++ = 0x80;
- *p++ = cause | 0x80;
-
- l = p - tmp;
- if (!(skb = l3_alloc_skb(l)))
- return;
- skb_put_data(skb, tmp, l);
- l3_msg(pc->st, DL_DATA | REQUEST, skb);
- pc->st->l3.l3l4(pc->st, CC_RELEASE | INDICATION, pc);
- newl3state(pc, 0);
- dss1_release_l3_process(pc);
-}
-
-static void
-l3dss1_release(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb = arg;
- u_char *p;
- int ret, cause = 0;
-
- StopAllL3Timer(pc);
- if ((ret = l3dss1_get_cause(pc, skb)) > 0) {
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "REL get_cause ret(%d)", ret);
- } else if (ret < 0)
- pc->para.cause = NO_CAUSE;
- if ((p = findie(skb->data, skb->len, IE_FACILITY, 0))) {
- l3dss1_parse_facility(pc->st, pc, pc->callref, p);
- }
- if ((ret < 0) && (pc->state != 11))
- cause = 96;
- else if (ret > 0)
- cause = 100;
- ret = check_infoelements(pc, skb, ie_RELEASE);
- if (ERR_IE_COMPREHENSION == ret)
- cause = 96;
- else if ((ERR_IE_UNRECOGNIZED == ret) && (!cause))
- cause = 99;
- if (cause)
- l3dss1_message_cause(pc, MT_RELEASE_COMPLETE, cause);
- else
- l3dss1_message(pc, MT_RELEASE_COMPLETE);
- pc->st->l3.l3l4(pc->st, CC_RELEASE | INDICATION, pc);
- newl3state(pc, 0);
- dss1_release_l3_process(pc);
-}
-
-static void
-l3dss1_alert_req(struct l3_process *pc, u_char pr,
- void *arg)
-{
- newl3state(pc, 7);
- if (!pc->prot.dss1.uus1_data[0])
- l3dss1_message(pc, MT_ALERTING);
- else
- l3dss1_msg_with_uus(pc, MT_ALERTING);
-}
-
-static void
-l3dss1_proceed_req(struct l3_process *pc, u_char pr,
- void *arg)
-{
- newl3state(pc, 9);
- l3dss1_message(pc, MT_CALL_PROCEEDING);
- pc->st->l3.l3l4(pc->st, CC_PROCEED_SEND | INDICATION, pc);
-}
-
-static void
-l3dss1_setup_ack_req(struct l3_process *pc, u_char pr,
- void *arg)
-{
- newl3state(pc, 25);
- L3DelTimer(&pc->timer);
- L3AddTimer(&pc->timer, T302, CC_T302);
- l3dss1_message(pc, MT_SETUP_ACKNOWLEDGE);
-}
-
-/********************************************/
-/* deliver an incoming display message to HL */
-/********************************************/
-static void
-l3dss1_deliver_display(struct l3_process *pc, int pr, u_char *infp)
-{ u_char len;
- isdn_ctrl ic;
- struct IsdnCardState *cs;
- char *p;
-
- if (*infp++ != IE_DISPLAY) return;
- if ((len = *infp++) > 80) return; /* total length <= 82 */
- if (!pc->chan) return;
-
- p = ic.parm.display;
- while (len--)
- *p++ = *infp++;
- *p = '\0';
- ic.command = ISDN_STAT_DISPLAY;
- cs = pc->st->l1.hardware;
- ic.driver = cs->myid;
- ic.arg = pc->chan->chan;
- cs->iif.statcallb(&ic);
-} /* l3dss1_deliver_display */
-
-
-static void
-l3dss1_progress(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb = arg;
- int err = 0;
- u_char *p;
-
- if ((p = findie(skb->data, skb->len, IE_PROGRESS, 0))) {
- if (p[1] != 2) {
- err = 1;
- pc->para.cause = 100;
- } else if (!(p[2] & 0x70)) {
- switch (p[2]) {
- case 0x80:
- case 0x81:
- case 0x82:
- case 0x84:
- case 0x85:
- case 0x87:
- case 0x8a:
- switch (p[3]) {
- case 0x81:
- case 0x82:
- case 0x83:
- case 0x84:
- case 0x88:
- break;
- default:
- err = 2;
- pc->para.cause = 100;
- break;
- }
- break;
- default:
- err = 3;
- pc->para.cause = 100;
- break;
- }
- }
- } else {
- pc->para.cause = 96;
- err = 4;
- }
- if (err) {
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "progress error %d", err);
- l3dss1_status_send(pc, pr, NULL);
- return;
- }
-	/* Now we are at the non-mandatory IEs */
- err = check_infoelements(pc, skb, ie_PROGRESS);
- if (err)
- l3dss1_std_ie_err(pc, err);
- if (ERR_IE_COMPREHENSION != err)
- pc->st->l3.l3l4(pc->st, CC_PROGRESS | INDICATION, pc);
-}
-
-static void
-l3dss1_notify(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb = arg;
- int err = 0;
- u_char *p;
-
- if ((p = findie(skb->data, skb->len, IE_NOTIFY, 0))) {
- if (p[1] != 1) {
- err = 1;
- pc->para.cause = 100;
- } else {
- switch (p[2]) {
- case 0x80:
- case 0x81:
- case 0x82:
- break;
- default:
- pc->para.cause = 100;
- err = 2;
- break;
- }
- }
- } else {
- pc->para.cause = 96;
- err = 3;
- }
- if (err) {
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "notify error %d", err);
- l3dss1_status_send(pc, pr, NULL);
- return;
- }
-	/* Now we are at the non-mandatory IEs */
- err = check_infoelements(pc, skb, ie_NOTIFY);
- if (err)
- l3dss1_std_ie_err(pc, err);
- if (ERR_IE_COMPREHENSION != err)
- pc->st->l3.l3l4(pc->st, CC_NOTIFY | INDICATION, pc);
-}
-
-static void
-l3dss1_status_enq(struct l3_process *pc, u_char pr, void *arg)
-{
- int ret;
- struct sk_buff *skb = arg;
-
- ret = check_infoelements(pc, skb, ie_STATUS_ENQUIRY);
- l3dss1_std_ie_err(pc, ret);
- pc->para.cause = 30; /* response to STATUS_ENQUIRY */
- l3dss1_status_send(pc, pr, NULL);
-}
-
-static void
-l3dss1_information(struct l3_process *pc, u_char pr, void *arg)
-{
- int ret;
- struct sk_buff *skb = arg;
- u_char *p;
- char tmp[32];
-
- ret = check_infoelements(pc, skb, ie_INFORMATION);
- if (ret)
- l3dss1_std_ie_err(pc, ret);
- if (pc->state == 25) { /* overlap receiving */
- L3DelTimer(&pc->timer);
- p = skb->data;
- if ((p = findie(p, skb->len, 0x70, 0))) {
- iecpy(tmp, p, 1);
- strcat(pc->para.setup.eazmsn, tmp);
- pc->st->l3.l3l4(pc->st, CC_MORE_INFO | INDICATION, pc);
- }
- L3AddTimer(&pc->timer, T302, CC_T302);
- }
-}
-
-/******************************/
-/* handle deflection requests */
-/******************************/
-static void l3dss1_redir_req(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb;
- u_char tmp[128];
- u_char *p = tmp;
- u_char *subp;
- u_char len_phone = 0;
- u_char len_sub = 0;
- int l;
-
-
- strcpy(pc->prot.dss1.uus1_data, pc->chan->setup.eazmsn); /* copy uus element if available */
- if (!pc->chan->setup.phone[0])
- { pc->para.cause = -1;
- l3dss1_disconnect_req(pc, pr, arg); /* disconnect immediately */
- return;
- } /* only uus */
-
- if (pc->prot.dss1.invoke_id)
- free_invoke_id(pc->st, pc->prot.dss1.invoke_id);
-
- if (!(pc->prot.dss1.invoke_id = new_invoke_id(pc->st)))
- return;
-
- MsgHead(p, pc->callref, MT_FACILITY);
-
- for (subp = pc->chan->setup.phone; (*subp) && (*subp != '.'); subp++) len_phone++; /* len of phone number */
- if (*subp++ == '.') len_sub = strlen(subp) + 2; /* length including info subaddress element */
-
- *p++ = 0x1c; /* Facility info element */
- *p++ = len_phone + len_sub + 2 + 2 + 8 + 3 + 3; /* length of element */
- *p++ = 0x91; /* remote operations protocol */
- *p++ = 0xa1; /* invoke component */
-
- *p++ = len_phone + len_sub + 2 + 2 + 8 + 3; /* length of data */
- *p++ = 0x02; /* invoke id tag, integer */
- *p++ = 0x01; /* length */
- *p++ = pc->prot.dss1.invoke_id; /* invoke id */
- *p++ = 0x02; /* operation value tag, integer */
- *p++ = 0x01; /* length */
- *p++ = 0x0D; /* Call Deflect */
-
- *p++ = 0x30; /* sequence phone number */
- *p++ = len_phone + 2 + 2 + 3 + len_sub; /* length */
-
- *p++ = 0x30; /* Deflected to UserNumber */
- *p++ = len_phone + 2 + len_sub; /* length */
- *p++ = 0x80; /* NumberDigits */
- *p++ = len_phone; /* length */
- for (l = 0; l < len_phone; l++)
- *p++ = pc->chan->setup.phone[l];
-
- if (len_sub)
- { *p++ = 0x04; /* called party subaddress */
- *p++ = len_sub - 2;
- while (*subp) *p++ = *subp++;
- }
-
- *p++ = 0x01; /* screening identifier */
- *p++ = 0x01;
- *p++ = pc->chan->setup.screen;
-
- l = p - tmp;
- if (!(skb = l3_alloc_skb(l))) return;
- skb_put_data(skb, tmp, l);
-
- l3_msg(pc->st, DL_DATA | REQUEST, skb);
-} /* l3dss1_redir_req */
-
-/********************************************/
-/* handle deflection request in early state */
-/********************************************/
-static void l3dss1_redir_req_early(struct l3_process *pc, u_char pr, void *arg)
-{
- l3dss1_proceed_req(pc, pr, arg);
- l3dss1_redir_req(pc, pr, arg);
-} /* l3dss1_redir_req_early */
-
-/***********************************************/
-/* handle special commands for this protocol. */
-/* Examples are call independent services like */
-/* remote operations with dummy callref. */
-/***********************************************/
-static int l3dss1_cmd_global(struct PStack *st, isdn_ctrl *ic)
-{ u_char id;
- u_char temp[265];
- u_char *p = temp;
- int i, l, proc_len;
- struct sk_buff *skb;
- struct l3_process *pc = NULL;
-
- switch (ic->arg)
- { case DSS1_CMD_INVOKE:
- if (ic->parm.dss1_io.datalen < 0) return (-2); /* invalid parameter */
-
- for (proc_len = 1, i = ic->parm.dss1_io.proc >> 8; i; i++)
- i = i >> 8; /* add one byte */
- l = ic->parm.dss1_io.datalen + proc_len + 8; /* length excluding ie header */
- if (l > 255)
- return (-2); /* too long */
-
- if (!(id = new_invoke_id(st)))
-			return (0); /* first get an invoke id -> return if none available */
-
- i = -1;
- MsgHead(p, i, MT_FACILITY); /* build message head */
- *p++ = 0x1C; /* Facility IE */
- *p++ = l; /* length of ie */
- *p++ = 0x91; /* remote operations */
- *p++ = 0xA1; /* invoke */
- *p++ = l - 3; /* length of invoke */
- *p++ = 0x02; /* invoke id tag */
- *p++ = 0x01; /* length is 1 */
- *p++ = id; /* invoke id */
- *p++ = 0x02; /* operation */
- *p++ = proc_len; /* length of operation */
-
- for (i = proc_len; i; i--)
- *p++ = (ic->parm.dss1_io.proc >> (i - 1)) & 0xFF;
- memcpy(p, ic->parm.dss1_io.data, ic->parm.dss1_io.datalen); /* copy data */
- l = (p - temp) + ic->parm.dss1_io.datalen; /* total length */
-
- if (ic->parm.dss1_io.timeout > 0)
- if (!(pc = dss1_new_l3_process(st, -1)))
- { free_invoke_id(st, id);
- return (-2);
- }
- pc->prot.dss1.ll_id = ic->parm.dss1_io.ll_id; /* remember id */
- pc->prot.dss1.proc = ic->parm.dss1_io.proc; /* and procedure */
-
- if (!(skb = l3_alloc_skb(l)))
- { free_invoke_id(st, id);
- if (pc) dss1_release_l3_process(pc);
- return (-2);
- }
- skb_put_data(skb, temp, l);
-
- if (pc)
- { pc->prot.dss1.invoke_id = id; /* remember id */
- L3AddTimer(&pc->timer, ic->parm.dss1_io.timeout, CC_TDSS1_IO | REQUEST);
- }
-
- l3_msg(st, DL_DATA | REQUEST, skb);
- ic->parm.dss1_io.hl_id = id; /* return id */
- return (0);
-
- case DSS1_CMD_INVOKE_ABORT:
- if ((pc = l3dss1_search_dummy_proc(st, ic->parm.dss1_io.hl_id)))
- { L3DelTimer(&pc->timer); /* remove timer */
- dss1_release_l3_process(pc);
- return (0);
- }
- else
- { l3_debug(st, "l3dss1_cmd_global abort unknown id");
- return (-2);
- }
- break;
-
- default:
- l3_debug(st, "l3dss1_cmd_global unknown cmd 0x%lx", ic->arg);
- return (-1);
- } /* switch ic-> arg */
- return (-1);
-} /* l3dss1_cmd_global */
-
-static void
-l3dss1_io_timer(struct l3_process *pc)
-{ isdn_ctrl ic;
- struct IsdnCardState *cs = pc->st->l1.hardware;
-
- L3DelTimer(&pc->timer); /* remove timer */
-
- ic.driver = cs->myid;
- ic.command = ISDN_STAT_PROT;
- ic.arg = DSS1_STAT_INVOKE_ERR;
- ic.parm.dss1_io.hl_id = pc->prot.dss1.invoke_id;
- ic.parm.dss1_io.ll_id = pc->prot.dss1.ll_id;
- ic.parm.dss1_io.proc = pc->prot.dss1.proc;
- ic.parm.dss1_io.timeout = -1;
- ic.parm.dss1_io.datalen = 0;
- ic.parm.dss1_io.data = NULL;
- free_invoke_id(pc->st, pc->prot.dss1.invoke_id);
- pc->prot.dss1.invoke_id = 0; /* reset id */
-
- cs->iif.statcallb(&ic);
-
- dss1_release_l3_process(pc);
-} /* l3dss1_io_timer */
-
-static void
-l3dss1_release_ind(struct l3_process *pc, u_char pr, void *arg)
-{
- u_char *p;
- struct sk_buff *skb = arg;
- int callState = 0;
- p = skb->data;
-
- if ((p = findie(p, skb->len, IE_CALL_STATE, 0))) {
- p++;
- if (1 == *p++)
- callState = *p;
- }
- if (callState == 0) {
- /* ETS 300-104 7.6.1, 8.6.1, 10.6.1... and 16.1
- * set down layer 3 without sending any message
- */
- pc->st->l3.l3l4(pc->st, CC_RELEASE | INDICATION, pc);
- newl3state(pc, 0);
- dss1_release_l3_process(pc);
- } else {
- pc->st->l3.l3l4(pc->st, CC_IGNORE | INDICATION, pc);
- }
-}
-
-static void
-l3dss1_dummy(struct l3_process *pc, u_char pr, void *arg)
-{
-}
-
-static void
-l3dss1_t302(struct l3_process *pc, u_char pr, void *arg)
-{
- L3DelTimer(&pc->timer);
- pc->para.loc = 0;
- pc->para.cause = 28; /* invalid number */
- l3dss1_disconnect_req(pc, pr, NULL);
- pc->st->l3.l3l4(pc->st, CC_SETUP_ERR, pc);
-}
-
-static void
-l3dss1_t303(struct l3_process *pc, u_char pr, void *arg)
-{
- if (pc->N303 > 0) {
- pc->N303--;
- L3DelTimer(&pc->timer);
- l3dss1_setup_req(pc, pr, arg);
- } else {
- L3DelTimer(&pc->timer);
- l3dss1_message_cause(pc, MT_RELEASE_COMPLETE, 102);
- pc->st->l3.l3l4(pc->st, CC_NOSETUP_RSP, pc);
- dss1_release_l3_process(pc);
- }
-}
-
-static void
-l3dss1_t304(struct l3_process *pc, u_char pr, void *arg)
-{
- L3DelTimer(&pc->timer);
- pc->para.loc = 0;
- pc->para.cause = 102;
- l3dss1_disconnect_req(pc, pr, NULL);
- pc->st->l3.l3l4(pc->st, CC_SETUP_ERR, pc);
-
-}
-
-static void
-l3dss1_t305(struct l3_process *pc, u_char pr, void *arg)
-{
- u_char tmp[16];
- u_char *p = tmp;
- int l;
- struct sk_buff *skb;
- u_char cause = 16;
-
- L3DelTimer(&pc->timer);
- if (pc->para.cause != NO_CAUSE)
- cause = pc->para.cause;
-
- MsgHead(p, pc->callref, MT_RELEASE);
-
- *p++ = IE_CAUSE;
- *p++ = 0x2;
- *p++ = 0x80;
- *p++ = cause | 0x80;
-
- l = p - tmp;
- if (!(skb = l3_alloc_skb(l)))
- return;
- skb_put_data(skb, tmp, l);
- newl3state(pc, 19);
- l3_msg(pc->st, DL_DATA | REQUEST, skb);
- L3AddTimer(&pc->timer, T308, CC_T308_1);
-}
-
-static void
-l3dss1_t310(struct l3_process *pc, u_char pr, void *arg)
-{
- L3DelTimer(&pc->timer);
- pc->para.loc = 0;
- pc->para.cause = 102;
- l3dss1_disconnect_req(pc, pr, NULL);
- pc->st->l3.l3l4(pc->st, CC_SETUP_ERR, pc);
-}
-
-static void
-l3dss1_t313(struct l3_process *pc, u_char pr, void *arg)
-{
- L3DelTimer(&pc->timer);
- pc->para.loc = 0;
- pc->para.cause = 102;
- l3dss1_disconnect_req(pc, pr, NULL);
- pc->st->l3.l3l4(pc->st, CC_CONNECT_ERR, pc);
-}
-
-static void
-l3dss1_t308_1(struct l3_process *pc, u_char pr, void *arg)
-{
- newl3state(pc, 19);
- L3DelTimer(&pc->timer);
- l3dss1_message(pc, MT_RELEASE);
- L3AddTimer(&pc->timer, T308, CC_T308_2);
-}
-
-static void
-l3dss1_t308_2(struct l3_process *pc, u_char pr, void *arg)
-{
- L3DelTimer(&pc->timer);
- pc->st->l3.l3l4(pc->st, CC_RELEASE_ERR, pc);
- dss1_release_l3_process(pc);
-}
-
-static void
-l3dss1_t318(struct l3_process *pc, u_char pr, void *arg)
-{
- L3DelTimer(&pc->timer);
- pc->para.cause = 102; /* Timer expiry */
- pc->para.loc = 0; /* local */
- pc->st->l3.l3l4(pc->st, CC_RESUME_ERR, pc);
- newl3state(pc, 19);
- l3dss1_message(pc, MT_RELEASE);
- L3AddTimer(&pc->timer, T308, CC_T308_1);
-}
-
-static void
-l3dss1_t319(struct l3_process *pc, u_char pr, void *arg)
-{
- L3DelTimer(&pc->timer);
- pc->para.cause = 102; /* Timer expiry */
- pc->para.loc = 0; /* local */
- pc->st->l3.l3l4(pc->st, CC_SUSPEND_ERR, pc);
- newl3state(pc, 10);
-}
-
-static void
-l3dss1_restart(struct l3_process *pc, u_char pr, void *arg)
-{
- L3DelTimer(&pc->timer);
- pc->st->l3.l3l4(pc->st, CC_RELEASE | INDICATION, pc);
- dss1_release_l3_process(pc);
-}
-
-static void
-l3dss1_status(struct l3_process *pc, u_char pr, void *arg)
-{
- u_char *p;
- struct sk_buff *skb = arg;
- int ret;
- u_char cause = 0, callState = 0;
-
- if ((ret = l3dss1_get_cause(pc, skb))) {
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "STATUS get_cause ret(%d)", ret);
- if (ret < 0)
- cause = 96;
- else if (ret > 0)
- cause = 100;
- }
- if ((p = findie(skb->data, skb->len, IE_CALL_STATE, 0))) {
- p++;
- if (1 == *p++) {
- callState = *p;
- if (!ie_in_set(pc, *p, l3_valid_states))
- cause = 100;
- } else
- cause = 100;
- } else
- cause = 96;
- if (!cause) { /* no error before */
- ret = check_infoelements(pc, skb, ie_STATUS);
- if (ERR_IE_COMPREHENSION == ret)
- cause = 96;
- else if (ERR_IE_UNRECOGNIZED == ret)
- cause = 99;
- }
- if (cause) {
- u_char tmp;
-
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "STATUS error(%d/%d)", ret, cause);
- tmp = pc->para.cause;
- pc->para.cause = cause;
- l3dss1_status_send(pc, 0, NULL);
- if (cause == 99)
- pc->para.cause = tmp;
- else
- return;
- }
- cause = pc->para.cause;
- if (((cause & 0x7f) == 111) && (callState == 0)) {
- /* ETS 300-104 7.6.1, 8.6.1, 10.6.1...
- * if received MT_STATUS with cause == 111 and call
- * state == 0, then we must set down layer 3
- */
- pc->st->l3.l3l4(pc->st, CC_RELEASE | INDICATION, pc);
- newl3state(pc, 0);
- dss1_release_l3_process(pc);
- }
-}
-
-static void
-l3dss1_facility(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb = arg;
- int ret;
-
- ret = check_infoelements(pc, skb, ie_FACILITY);
- l3dss1_std_ie_err(pc, ret);
- {
- u_char *p;
- if ((p = findie(skb->data, skb->len, IE_FACILITY, 0)))
- l3dss1_parse_facility(pc->st, pc, pc->callref, p);
- }
-}
-
-static void
-l3dss1_suspend_req(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb;
- u_char tmp[32];
- u_char *p = tmp;
- u_char i, l;
- u_char *msg = pc->chan->setup.phone;
-
- MsgHead(p, pc->callref, MT_SUSPEND);
- l = *msg++;
- if (l && (l <= 10)) { /* Max length 10 octets */
- *p++ = IE_CALL_ID;
- *p++ = l;
- for (i = 0; i < l; i++)
- *p++ = *msg++;
- } else if (l) {
- l3_debug(pc->st, "SUS wrong CALL_ID len %d", l);
- return;
- }
- l = p - tmp;
- if (!(skb = l3_alloc_skb(l)))
- return;
- skb_put_data(skb, tmp, l);
- l3_msg(pc->st, DL_DATA | REQUEST, skb);
- newl3state(pc, 15);
- L3AddTimer(&pc->timer, T319, CC_T319);
-}
-
-static void
-l3dss1_suspend_ack(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb = arg;
- int ret;
-
- L3DelTimer(&pc->timer);
- newl3state(pc, 0);
- pc->para.cause = NO_CAUSE;
- pc->st->l3.l3l4(pc->st, CC_SUSPEND | CONFIRM, pc);
- /* We don't handle suspend_ack for IE errors now */
- if ((ret = check_infoelements(pc, skb, ie_SUSPEND_ACKNOWLEDGE)))
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "SUSPACK check ie(%d)", ret);
- dss1_release_l3_process(pc);
-}
-
-static void
-l3dss1_suspend_rej(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb = arg;
- int ret;
-
- if ((ret = l3dss1_get_cause(pc, skb))) {
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "SUSP_REJ get_cause ret(%d)", ret);
- if (ret < 0)
- pc->para.cause = 96;
- else
- pc->para.cause = 100;
- l3dss1_status_send(pc, pr, NULL);
- return;
- }
- ret = check_infoelements(pc, skb, ie_SUSPEND_REJECT);
- if (ERR_IE_COMPREHENSION == ret) {
- l3dss1_std_ie_err(pc, ret);
- return;
- }
- L3DelTimer(&pc->timer);
- pc->st->l3.l3l4(pc->st, CC_SUSPEND_ERR, pc);
- newl3state(pc, 10);
-	if (ret) /* STATUS for non-mandatory IE errors after actions are taken */
- l3dss1_std_ie_err(pc, ret);
-}
-
-static void
-l3dss1_resume_req(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb;
- u_char tmp[32];
- u_char *p = tmp;
- u_char i, l;
- u_char *msg = pc->para.setup.phone;
-
- MsgHead(p, pc->callref, MT_RESUME);
-
- l = *msg++;
- if (l && (l <= 10)) { /* Max length 10 octets */
- *p++ = IE_CALL_ID;
- *p++ = l;
- for (i = 0; i < l; i++)
- *p++ = *msg++;
- } else if (l) {
- l3_debug(pc->st, "RES wrong CALL_ID len %d", l);
- return;
- }
- l = p - tmp;
- if (!(skb = l3_alloc_skb(l)))
- return;
- skb_put_data(skb, tmp, l);
- l3_msg(pc->st, DL_DATA | REQUEST, skb);
- newl3state(pc, 17);
- L3AddTimer(&pc->timer, T318, CC_T318);
-}
-
-static void
-l3dss1_resume_ack(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb = arg;
- int id, ret;
-
- if ((id = l3dss1_get_channel_id(pc, skb)) > 0) {
- if ((0 == id) || ((3 == id) && (0x10 == pc->para.moderate))) {
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "resume ack with wrong chid %x", id);
- pc->para.cause = 100;
- l3dss1_status_send(pc, pr, NULL);
- return;
- }
- pc->para.bchannel = id;
- } else if (1 == pc->state) {
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "resume ack without chid (ret %d)", id);
- pc->para.cause = 96;
- l3dss1_status_send(pc, pr, NULL);
- return;
- }
- ret = check_infoelements(pc, skb, ie_RESUME_ACKNOWLEDGE);
- if (ERR_IE_COMPREHENSION == ret) {
- l3dss1_std_ie_err(pc, ret);
- return;
- }
- L3DelTimer(&pc->timer);
- pc->st->l3.l3l4(pc->st, CC_RESUME | CONFIRM, pc);
- newl3state(pc, 10);
-	if (ret) /* STATUS for non-mandatory IE errors after actions are taken */
- l3dss1_std_ie_err(pc, ret);
-}
-
-static void
-l3dss1_resume_rej(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb = arg;
- int ret;
-
- if ((ret = l3dss1_get_cause(pc, skb))) {
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "RES_REJ get_cause ret(%d)", ret);
- if (ret < 0)
- pc->para.cause = 96;
- else
- pc->para.cause = 100;
- l3dss1_status_send(pc, pr, NULL);
- return;
- }
- ret = check_infoelements(pc, skb, ie_RESUME_REJECT);
- if (ERR_IE_COMPREHENSION == ret) {
- l3dss1_std_ie_err(pc, ret);
- return;
- }
- L3DelTimer(&pc->timer);
- pc->st->l3.l3l4(pc->st, CC_RESUME_ERR, pc);
- newl3state(pc, 0);
-	if (ret) /* STATUS for non-mandatory IE errors after actions are taken */
- l3dss1_std_ie_err(pc, ret);
- dss1_release_l3_process(pc);
-}
-
-static void
-l3dss1_global_restart(struct l3_process *pc, u_char pr, void *arg)
-{
- u_char tmp[32];
- u_char *p;
- u_char ri, ch = 0, chan = 0;
- int l;
- struct sk_buff *skb = arg;
- struct l3_process *up;
-
- newl3state(pc, 2);
- L3DelTimer(&pc->timer);
- p = skb->data;
- if ((p = findie(p, skb->len, IE_RESTART_IND, 0))) {
- ri = p[2];
- l3_debug(pc->st, "Restart %x", ri);
- } else {
- l3_debug(pc->st, "Restart without restart IE");
- ri = 0x86;
- }
- p = skb->data;
- if ((p = findie(p, skb->len, IE_CHANNEL_ID, 0))) {
- chan = p[2] & 3;
- ch = p[2];
- if (pc->st->l3.debug)
- l3_debug(pc->st, "Restart for channel %d", chan);
- }
- newl3state(pc, 2);
- up = pc->st->l3.proc;
- while (up) {
- if ((ri & 7) == 7)
- up->st->lli.l4l3(up->st, CC_RESTART | REQUEST, up);
- else if (up->para.bchannel == chan)
- up->st->lli.l4l3(up->st, CC_RESTART | REQUEST, up);
- up = up->next;
- }
- p = tmp;
- MsgHead(p, pc->callref, MT_RESTART_ACKNOWLEDGE);
- if (chan) {
- *p++ = IE_CHANNEL_ID;
- *p++ = 1;
- *p++ = ch | 0x80;
- }
- *p++ = 0x79; /* RESTART Ind */
- *p++ = 1;
- *p++ = ri;
- l = p - tmp;
- if (!(skb = l3_alloc_skb(l)))
- return;
- skb_put_data(skb, tmp, l);
- newl3state(pc, 0);
- l3_msg(pc->st, DL_DATA | REQUEST, skb);
-}
-
-static void
-l3dss1_dl_reset(struct l3_process *pc, u_char pr, void *arg)
-{
- pc->para.cause = 0x29; /* Temporary failure */
- pc->para.loc = 0;
- l3dss1_disconnect_req(pc, pr, NULL);
- pc->st->l3.l3l4(pc->st, CC_SETUP_ERR, pc);
-}
-
-static void
-l3dss1_dl_release(struct l3_process *pc, u_char pr, void *arg)
-{
- newl3state(pc, 0);
- pc->para.cause = 0x1b; /* Destination out of order */
- pc->para.loc = 0;
- pc->st->l3.l3l4(pc->st, CC_RELEASE | INDICATION, pc);
- release_l3_process(pc);
-}
-
-static void
-l3dss1_dl_reestablish(struct l3_process *pc, u_char pr, void *arg)
-{
- L3DelTimer(&pc->timer);
- L3AddTimer(&pc->timer, T309, CC_T309);
- l3_msg(pc->st, DL_ESTABLISH | REQUEST, NULL);
-}
-
-static void
-l3dss1_dl_reest_status(struct l3_process *pc, u_char pr, void *arg)
-{
- L3DelTimer(&pc->timer);
-
- pc->para.cause = 0x1F; /* normal, unspecified */
- l3dss1_status_send(pc, 0, NULL);
-}
-
-/* *INDENT-OFF* */
-static struct stateentry downstatelist[] =
-{
- {SBIT(0),
- CC_SETUP | REQUEST, l3dss1_setup_req},
- {SBIT(0),
- CC_RESUME | REQUEST, l3dss1_resume_req},
- {SBIT(1) | SBIT(2) | SBIT(3) | SBIT(4) | SBIT(6) | SBIT(7) | SBIT(8) | SBIT(9) | SBIT(10) | SBIT(25),
- CC_DISCONNECT | REQUEST, l3dss1_disconnect_req},
- {SBIT(12),
- CC_RELEASE | REQUEST, l3dss1_release_req},
- {ALL_STATES,
- CC_RESTART | REQUEST, l3dss1_restart},
- {SBIT(6) | SBIT(25),
- CC_IGNORE | REQUEST, l3dss1_reset},
- {SBIT(6) | SBIT(25),
- CC_REJECT | REQUEST, l3dss1_reject_req},
- {SBIT(6) | SBIT(25),
- CC_PROCEED_SEND | REQUEST, l3dss1_proceed_req},
- {SBIT(6),
- CC_MORE_INFO | REQUEST, l3dss1_setup_ack_req},
- {SBIT(25),
- CC_MORE_INFO | REQUEST, l3dss1_dummy},
- {SBIT(6) | SBIT(9) | SBIT(25),
- CC_ALERTING | REQUEST, l3dss1_alert_req},
- {SBIT(6) | SBIT(7) | SBIT(9) | SBIT(25),
- CC_SETUP | RESPONSE, l3dss1_setup_rsp},
- {SBIT(10),
- CC_SUSPEND | REQUEST, l3dss1_suspend_req},
- {SBIT(7) | SBIT(9) | SBIT(25),
- CC_REDIR | REQUEST, l3dss1_redir_req},
- {SBIT(6),
- CC_REDIR | REQUEST, l3dss1_redir_req_early},
- {SBIT(9) | SBIT(25),
- CC_DISCONNECT | REQUEST, l3dss1_disconnect_req},
- {SBIT(25),
- CC_T302, l3dss1_t302},
- {SBIT(1),
- CC_T303, l3dss1_t303},
- {SBIT(2),
- CC_T304, l3dss1_t304},
- {SBIT(3),
- CC_T310, l3dss1_t310},
- {SBIT(8),
- CC_T313, l3dss1_t313},
- {SBIT(11),
- CC_T305, l3dss1_t305},
- {SBIT(15),
- CC_T319, l3dss1_t319},
- {SBIT(17),
- CC_T318, l3dss1_t318},
- {SBIT(19),
- CC_T308_1, l3dss1_t308_1},
- {SBIT(19),
- CC_T308_2, l3dss1_t308_2},
- {SBIT(10),
- CC_T309, l3dss1_dl_release},
-};
-
-static struct stateentry datastatelist[] =
-{
- {ALL_STATES,
- MT_STATUS_ENQUIRY, l3dss1_status_enq},
- {ALL_STATES,
- MT_FACILITY, l3dss1_facility},
- {SBIT(19),
- MT_STATUS, l3dss1_release_ind},
- {ALL_STATES,
- MT_STATUS, l3dss1_status},
- {SBIT(0),
- MT_SETUP, l3dss1_setup},
- {SBIT(6) | SBIT(7) | SBIT(8) | SBIT(9) | SBIT(10) | SBIT(11) | SBIT(12) |
- SBIT(15) | SBIT(17) | SBIT(19) | SBIT(25),
- MT_SETUP, l3dss1_dummy},
- {SBIT(1) | SBIT(2),
- MT_CALL_PROCEEDING, l3dss1_call_proc},
- {SBIT(1),
- MT_SETUP_ACKNOWLEDGE, l3dss1_setup_ack},
- {SBIT(2) | SBIT(3),
- MT_ALERTING, l3dss1_alerting},
- {SBIT(2) | SBIT(3),
- MT_PROGRESS, l3dss1_progress},
- {SBIT(2) | SBIT(3) | SBIT(4) | SBIT(7) | SBIT(8) | SBIT(9) | SBIT(10) |
- SBIT(11) | SBIT(12) | SBIT(15) | SBIT(17) | SBIT(19) | SBIT(25),
- MT_INFORMATION, l3dss1_information},
- {SBIT(10) | SBIT(11) | SBIT(15),
- MT_NOTIFY, l3dss1_notify},
- {SBIT(0) | SBIT(1) | SBIT(2) | SBIT(3) | SBIT(4) | SBIT(7) | SBIT(8) | SBIT(10) |
- SBIT(11) | SBIT(12) | SBIT(15) | SBIT(17) | SBIT(19) | SBIT(25),
- MT_RELEASE_COMPLETE, l3dss1_release_cmpl},
- {SBIT(1) | SBIT(2) | SBIT(3) | SBIT(4) | SBIT(7) | SBIT(8) | SBIT(9) | SBIT(10) | SBIT(11) | SBIT(12) | SBIT(15) | SBIT(17) | SBIT(25),
- MT_RELEASE, l3dss1_release},
- {SBIT(19), MT_RELEASE, l3dss1_release_ind},
- {SBIT(1) | SBIT(2) | SBIT(3) | SBIT(4) | SBIT(7) | SBIT(8) | SBIT(9) | SBIT(10) | SBIT(11) | SBIT(15) | SBIT(17) | SBIT(25),
- MT_DISCONNECT, l3dss1_disconnect},
- {SBIT(19),
- MT_DISCONNECT, l3dss1_dummy},
- {SBIT(1) | SBIT(2) | SBIT(3) | SBIT(4),
- MT_CONNECT, l3dss1_connect},
- {SBIT(8),
- MT_CONNECT_ACKNOWLEDGE, l3dss1_connect_ack},
- {SBIT(15),
- MT_SUSPEND_ACKNOWLEDGE, l3dss1_suspend_ack},
- {SBIT(15),
- MT_SUSPEND_REJECT, l3dss1_suspend_rej},
- {SBIT(17),
- MT_RESUME_ACKNOWLEDGE, l3dss1_resume_ack},
- {SBIT(17),
- MT_RESUME_REJECT, l3dss1_resume_rej},
-};
-
-static struct stateentry globalmes_list[] =
-{
- {ALL_STATES,
- MT_STATUS, l3dss1_status},
- {SBIT(0),
- MT_RESTART, l3dss1_global_restart},
-/* {SBIT(1),
- MT_RESTART_ACKNOWLEDGE, l3dss1_restart_ack},
-*/
-};
-
-static struct stateentry manstatelist[] =
-{
- {SBIT(2),
- DL_ESTABLISH | INDICATION, l3dss1_dl_reset},
- {SBIT(10),
- DL_ESTABLISH | CONFIRM, l3dss1_dl_reest_status},
- {SBIT(10),
- DL_RELEASE | INDICATION, l3dss1_dl_reestablish},
- {ALL_STATES,
- DL_RELEASE | INDICATION, l3dss1_dl_release},
-};
-
-/* *INDENT-ON* */
-
-
-static void
-global_handler(struct PStack *st, int mt, struct sk_buff *skb)
-{
- u_char tmp[16];
- u_char *p = tmp;
- int l;
- int i;
- struct l3_process *proc = st->l3.global;
-
- proc->callref = skb->data[2]; /* cr flag */
- for (i = 0; i < ARRAY_SIZE(globalmes_list); i++)
- if ((mt == globalmes_list[i].primitive) &&
- ((1 << proc->state) & globalmes_list[i].state))
- break;
- if (i == ARRAY_SIZE(globalmes_list)) {
- if (st->l3.debug & L3_DEB_STATE) {
- l3_debug(st, "dss1 global state %d mt %x unhandled",
- proc->state, mt);
- }
- MsgHead(p, proc->callref, MT_STATUS);
- *p++ = IE_CAUSE;
- *p++ = 0x2;
- *p++ = 0x80;
- *p++ = 81 | 0x80; /* invalid cr */
- *p++ = 0x14; /* CallState */
- *p++ = 0x1;
- *p++ = proc->state & 0x3f;
- l = p - tmp;
- if (!(skb = l3_alloc_skb(l)))
- return;
- skb_put_data(skb, tmp, l);
- l3_msg(proc->st, DL_DATA | REQUEST, skb);
- } else {
- if (st->l3.debug & L3_DEB_STATE) {
- l3_debug(st, "dss1 global %d mt %x",
- proc->state, mt);
- }
- globalmes_list[i].rout(proc, mt, skb);
- }
-}
-
-static void
-dss1up(struct PStack *st, int pr, void *arg)
-{
- int i, mt, cr, callState;
- char *ptr;
- u_char *p;
- struct sk_buff *skb = arg;
- struct l3_process *proc;
-
- switch (pr) {
- case (DL_DATA | INDICATION):
- case (DL_UNIT_DATA | INDICATION):
- break;
- case (DL_ESTABLISH | CONFIRM):
- case (DL_ESTABLISH | INDICATION):
- case (DL_RELEASE | INDICATION):
- case (DL_RELEASE | CONFIRM):
- l3_msg(st, pr, arg);
- return;
- break;
- default:
- printk(KERN_ERR "HiSax dss1up unknown pr=%04x\n", pr);
- return;
- }
- if (skb->len < 3) {
- l3_debug(st, "dss1up frame too short(%d)", skb->len);
- dev_kfree_skb(skb);
- return;
- }
-
- if (skb->data[0] != PROTO_DIS_EURO) {
- if (st->l3.debug & L3_DEB_PROTERR) {
- l3_debug(st, "dss1up%sunexpected discriminator %x message len %d",
- (pr == (DL_DATA | INDICATION)) ? " " : "(broadcast) ",
- skb->data[0], skb->len);
- }
- dev_kfree_skb(skb);
- return;
- }
- cr = getcallref(skb->data);
- if (skb->len < ((skb->data[1] & 0x0f) + 3)) {
- l3_debug(st, "dss1up frame too short(%d)", skb->len);
- dev_kfree_skb(skb);
- return;
- }
- mt = skb->data[skb->data[1] + 2];
- if (st->l3.debug & L3_DEB_STATE)
- l3_debug(st, "dss1up cr %d", cr);
- if (cr == -2) { /* wrong Callref */
- if (st->l3.debug & L3_DEB_WARN)
- l3_debug(st, "dss1up wrong Callref");
- dev_kfree_skb(skb);
- return;
- } else if (cr == -1) { /* Dummy Callref */
- if (mt == MT_FACILITY)
- if ((p = findie(skb->data, skb->len, IE_FACILITY, 0))) {
- l3dss1_parse_facility(st, NULL,
- (pr == (DL_DATA | INDICATION)) ? -1 : -2, p);
- dev_kfree_skb(skb);
- return;
- }
- if (st->l3.debug & L3_DEB_WARN)
- l3_debug(st, "dss1up dummy Callref (no facility msg or ie)");
- dev_kfree_skb(skb);
- return;
- } else if ((((skb->data[1] & 0x0f) == 1) && (0 == (cr & 0x7f))) ||
- (((skb->data[1] & 0x0f) == 2) && (0 == (cr & 0x7fff)))) { /* Global CallRef */
- if (st->l3.debug & L3_DEB_STATE)
- l3_debug(st, "dss1up Global CallRef");
- global_handler(st, mt, skb);
- dev_kfree_skb(skb);
- return;
- } else if (!(proc = getl3proc(st, cr))) {
-		/* No transaction process exists, which means no call with
-		 * this call reference is active
- */
- if (mt == MT_SETUP) {
- /* Setup creates a new transaction process */
- if (skb->data[2] & 0x80) {
- /* Setup with wrong CREF flag */
- if (st->l3.debug & L3_DEB_STATE)
- l3_debug(st, "dss1up wrong CRef flag");
- dev_kfree_skb(skb);
- return;
- }
- if (!(proc = dss1_new_l3_process(st, cr))) {
-				/* Maybe we should answer with RELEASE_COMPLETE and
-				 * CAUSE 0x2f "Resource unavailable", but this
-				 * needs a new_l3_process too ... arghh
- */
- dev_kfree_skb(skb);
- return;
- }
- } else if (mt == MT_STATUS) {
- if ((ptr = findie(skb->data, skb->len, IE_CAUSE, 0)) != NULL) {
- ptr++;
- if (*ptr++ == 2)
- ptr++;
- }
- callState = 0;
- if ((ptr = findie(skb->data, skb->len, IE_CALL_STATE, 0)) != NULL) {
- ptr++;
- if (*ptr++ == 2)
- ptr++;
- callState = *ptr;
- }
- /* ETS 300-104 part 2.4.1
- * if setup has not been made and a message type
- * MT_STATUS is received with call state == 0,
- * we must send nothing
- */
- if (callState != 0) {
- /* ETS 300-104 part 2.4.2
- * if setup has not been made and a message type
- * MT_STATUS is received with call state != 0,
- * we must send MT_RELEASE_COMPLETE cause 101
- */
- if ((proc = dss1_new_l3_process(st, cr))) {
- proc->para.cause = 101;
- l3dss1_msg_without_setup(proc, 0, NULL);
- }
- }
- dev_kfree_skb(skb);
- return;
- } else if (mt == MT_RELEASE_COMPLETE) {
- dev_kfree_skb(skb);
- return;
- } else {
- /* ETS 300-104 part 2
- * if setup has not been made and a message type
- * (except MT_SETUP and RELEASE_COMPLETE) is received,
- * we must send MT_RELEASE_COMPLETE cause 81 */
- dev_kfree_skb(skb);
- if ((proc = dss1_new_l3_process(st, cr))) {
- proc->para.cause = 81;
- l3dss1_msg_without_setup(proc, 0, NULL);
- }
- return;
- }
- }
- if (l3dss1_check_messagetype_validity(proc, mt, skb)) {
- dev_kfree_skb(skb);
- return;
- }
- if ((p = findie(skb->data, skb->len, IE_DISPLAY, 0)) != NULL)
- l3dss1_deliver_display(proc, pr, p); /* Display IE included */
- for (i = 0; i < ARRAY_SIZE(datastatelist); i++)
- if ((mt == datastatelist[i].primitive) &&
- ((1 << proc->state) & datastatelist[i].state))
- break;
- if (i == ARRAY_SIZE(datastatelist)) {
- if (st->l3.debug & L3_DEB_STATE) {
- l3_debug(st, "dss1up%sstate %d mt %#x unhandled",
- (pr == (DL_DATA | INDICATION)) ? " " : "(broadcast) ",
- proc->state, mt);
- }
- if ((MT_RELEASE_COMPLETE != mt) && (MT_RELEASE != mt)) {
- proc->para.cause = 101;
- l3dss1_status_send(proc, pr, skb);
- }
- } else {
- if (st->l3.debug & L3_DEB_STATE) {
- l3_debug(st, "dss1up%sstate %d mt %x",
- (pr == (DL_DATA | INDICATION)) ? " " : "(broadcast) ",
- proc->state, mt);
- }
- datastatelist[i].rout(proc, pr, skb);
- }
- dev_kfree_skb(skb);
- return;
-}
-
-static void
-dss1down(struct PStack *st, int pr, void *arg)
-{
- int i, cr;
- struct l3_process *proc;
- struct Channel *chan;
-
- if ((DL_ESTABLISH | REQUEST) == pr) {
- l3_msg(st, pr, NULL);
- return;
- } else if (((CC_SETUP | REQUEST) == pr) || ((CC_RESUME | REQUEST) == pr)) {
- chan = arg;
- cr = newcallref();
- cr |= 0x80;
- if ((proc = dss1_new_l3_process(st, cr))) {
- proc->chan = chan;
- chan->proc = proc;
- memcpy(&proc->para.setup, &chan->setup, sizeof(setup_parm));
- proc->callref = cr;
- }
- } else {
- proc = arg;
- }
- if (!proc) {
- printk(KERN_ERR "HiSax dss1down without proc pr=%04x\n", pr);
- return;
- }
-
- if (pr == (CC_TDSS1_IO | REQUEST)) {
- l3dss1_io_timer(proc); /* timer expires */
- return;
- }
-
- for (i = 0; i < ARRAY_SIZE(downstatelist); i++)
- if ((pr == downstatelist[i].primitive) &&
- ((1 << proc->state) & downstatelist[i].state))
- break;
- if (i == ARRAY_SIZE(downstatelist)) {
- if (st->l3.debug & L3_DEB_STATE) {
- l3_debug(st, "dss1down state %d prim %#x unhandled",
- proc->state, pr);
- }
- } else {
- if (st->l3.debug & L3_DEB_STATE) {
- l3_debug(st, "dss1down state %d prim %#x",
- proc->state, pr);
- }
- downstatelist[i].rout(proc, pr, arg);
- }
-}
-
-static void
-dss1man(struct PStack *st, int pr, void *arg)
-{
- int i;
- struct l3_process *proc = arg;
-
- if (!proc) {
- printk(KERN_ERR "HiSax dss1man without proc pr=%04x\n", pr);
- return;
- }
- for (i = 0; i < ARRAY_SIZE(manstatelist); i++)
- if ((pr == manstatelist[i].primitive) &&
- ((1 << proc->state) & manstatelist[i].state))
- break;
- if (i == ARRAY_SIZE(manstatelist)) {
- if (st->l3.debug & L3_DEB_STATE) {
- l3_debug(st, "cr %d dss1man state %d prim %#x unhandled",
- proc->callref & 0x7f, proc->state, pr);
- }
- } else {
- if (st->l3.debug & L3_DEB_STATE) {
- l3_debug(st, "cr %d dss1man state %d prim %#x",
- proc->callref & 0x7f, proc->state, pr);
- }
- manstatelist[i].rout(proc, pr, arg);
- }
-}
-
-void
-setstack_dss1(struct PStack *st)
-{
- char tmp[64];
- int i;
-
- st->lli.l4l3 = dss1down;
- st->lli.l4l3_proto = l3dss1_cmd_global;
- st->l2.l2l3 = dss1up;
- st->l3.l3ml3 = dss1man;
- st->l3.N303 = 1;
- st->prot.dss1.last_invoke_id = 0;
- st->prot.dss1.invoke_used[0] = 1; /* Bit 0 must always be set to 1 */
- i = 1;
- while (i < 32)
- st->prot.dss1.invoke_used[i++] = 0;
-
- if (!(st->l3.global = kmalloc(sizeof(struct l3_process), GFP_ATOMIC))) {
- printk(KERN_ERR "HiSax can't get memory for dss1 global CR\n");
- } else {
- st->l3.global->state = 0;
- st->l3.global->callref = 0;
- st->l3.global->next = NULL;
- st->l3.global->debug = L3_DEB_WARN;
- st->l3.global->st = st;
- st->l3.global->N303 = 1;
- st->l3.global->prot.dss1.invoke_id = 0;
-
- L3InitTimer(st->l3.global, &st->l3.global->timer);
- }
- strcpy(tmp, dss1_revision);
- printk(KERN_INFO "HiSax: DSS1 Rev. %s\n", HiSax_getrev(tmp));
-}
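
The downstatelist/datastatelist/globalmes_list/manstatelist tables removed above all use the same dispatch idiom: each entry carries a bitmask of call states (built with SBIT) plus a primitive, and dss1up/dss1down/dss1man scan for the first entry whose primitive matches and whose state mask contains the current state. A minimal, self-contained sketch of that idiom follows; the states, primitives and handlers are made up for illustration and are not part of this driver.

#include <stdio.h>

#define SBIT(state) (1 << (state))

struct proc { int state; };

struct stateentry {
	int state;			/* mask of states the entry applies to */
	int primitive;			/* event/message that triggers it */
	void (*rout)(struct proc *, int);
};

/* toy primitives, standing in for CC_SETUP | REQUEST etc. */
enum { EV_SETUP = 1, EV_DISCONNECT = 2 };

static void handle_setup(struct proc *p, int pr)
{
	printf("setup handled in state %d (prim %d)\n", p->state, pr);
}

static void handle_disconnect(struct proc *p, int pr)
{
	printf("disconnect handled in state %d (prim %d)\n", p->state, pr);
}

static struct stateentry table[] = {
	{ SBIT(0),                     EV_SETUP,      handle_setup },
	{ SBIT(1) | SBIT(3) | SBIT(4), EV_DISCONNECT, handle_disconnect },
};

static void dispatch(struct proc *p, int pr)
{
	size_t i;

	for (i = 0; i < sizeof(table) / sizeof(table[0]); i++)
		if (pr == table[i].primitive &&
		    ((1 << p->state) & table[i].state))
			break;
	if (i == sizeof(table) / sizeof(table[0]))
		printf("state %d prim %d unhandled\n", p->state, pr);
	else
		table[i].rout(p, pr);
}

int main(void)
{
	struct proc p = { 0 };

	dispatch(&p, EV_SETUP);		/* matches the first entry */
	p.state = 2;
	dispatch(&p, EV_DISCONNECT);	/* no entry covers state 2 -> unhandled */
	return 0;
}
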
diff --git a/drivers/isdn/hisax/l3dss1.h b/drivers/isdn/hisax/l3dss1.h
deleted file mode 100644
index a7807e8a94f1..000000000000
--- a/drivers/isdn/hisax/l3dss1.h
+++ /dev/null
@@ -1,124 +0,0 @@
-/* $Id: l3dss1.h,v 1.10.6.2 2001/09/23 22:24:50 kai Exp $
- *
- * DSS1 (Euro) D-channel protocol defines
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#ifndef l3dss1_process
-
-#define T302 15000
-#define T303 4000
-#define T304 30000
-#define T305 30000
-#define T308 4000
-/* for layer 1 certification T309 < layer1 T3 (e.g. 4000) */
-/* This makes some tests easier and quicker */
-#define T309 40000
-#define T310 30000
-#define T313 4000
-#define T318 4000
-#define T319 4000
-
-/*
- * Message-Types
- */
-
-#define MT_ALERTING 0x01
-#define MT_CALL_PROCEEDING 0x02
-#define MT_CONNECT 0x07
-#define MT_CONNECT_ACKNOWLEDGE 0x0f
-#define MT_PROGRESS 0x03
-#define MT_SETUP 0x05
-#define MT_SETUP_ACKNOWLEDGE 0x0d
-#define MT_RESUME 0x26
-#define MT_RESUME_ACKNOWLEDGE 0x2e
-#define MT_RESUME_REJECT 0x22
-#define MT_SUSPEND 0x25
-#define MT_SUSPEND_ACKNOWLEDGE 0x2d
-#define MT_SUSPEND_REJECT 0x21
-#define MT_USER_INFORMATION 0x20
-#define MT_DISCONNECT 0x45
-#define MT_RELEASE 0x4d
-#define MT_RELEASE_COMPLETE 0x5a
-#define MT_RESTART 0x46
-#define MT_RESTART_ACKNOWLEDGE 0x4e
-#define MT_SEGMENT 0x60
-#define MT_CONGESTION_CONTROL 0x79
-#define MT_INFORMATION 0x7b
-#define MT_FACILITY 0x62
-#define MT_NOTIFY 0x6e
-#define MT_STATUS 0x7d
-#define MT_STATUS_ENQUIRY 0x75
-
-#define IE_SEGMENT 0x00
-#define IE_BEARER 0x04
-#define IE_CAUSE 0x08
-#define IE_CALL_ID 0x10
-#define IE_CALL_STATE 0x14
-#define IE_CHANNEL_ID 0x18
-#define IE_FACILITY 0x1c
-#define IE_PROGRESS 0x1e
-#define IE_NET_FAC 0x20
-#define IE_NOTIFY 0x27
-#define IE_DISPLAY 0x28
-#define IE_DATE 0x29
-#define IE_KEYPAD 0x2c
-#define IE_SIGNAL 0x34
-#define IE_INFORATE 0x40
-#define IE_E2E_TDELAY 0x42
-#define IE_TDELAY_SEL 0x43
-#define IE_PACK_BINPARA 0x44
-#define IE_PACK_WINSIZE 0x45
-#define IE_PACK_SIZE 0x46
-#define IE_CUG 0x47
-#define IE_REV_CHARGE 0x4a
-#define IE_CONNECT_PN 0x4c
-#define IE_CONNECT_SUB 0x4d
-#define IE_CALLING_PN 0x6c
-#define IE_CALLING_SUB 0x6d
-#define IE_CALLED_PN 0x70
-#define IE_CALLED_SUB 0x71
-#define IE_REDIR_NR 0x74
-#define IE_TRANS_SEL 0x78
-#define IE_RESTART_IND 0x79
-#define IE_LLC 0x7c
-#define IE_HLC 0x7d
-#define IE_USER_USER 0x7e
-#define IE_ESCAPE 0x7f
-#define IE_SHIFT 0x90
-#define IE_MORE_DATA 0xa0
-#define IE_COMPLETE 0xa1
-#define IE_CONGESTION 0xb0
-#define IE_REPEAT 0xd0
-
-#define IE_MANDATORY 0x0100
-/* mandatory, but not in every case */
-#define IE_MANDATORY_1 0x0200
-
-#define ERR_IE_COMPREHENSION 1
-#define ERR_IE_UNRECOGNIZED -1
-#define ERR_IE_LENGTH -2
-#define ERR_IE_SEQUENCE -3
-
-#else /* only l3dss1_process */
-
-/* l3dss1 specific data in l3 process */
-typedef struct
-{ unsigned char invoke_id; /* used invoke id in remote ops, 0 = not active */
-  ulong ll_id; /* remembered ll id */
-  u8 remote_operation; /* handled remote operation, 0 = not active */
-  int proc; /* remembered procedure */
-  ulong remote_result; /* result of remote operation for statcallb */
-  char uus1_data[35]; /* data sent during alerting or disconnect */
-} dss1_proc_priv;
-
-/* l3dss1 specific data in protocol stack */
-typedef struct
-{ unsigned char last_invoke_id; /* last used value for invoking */
- unsigned char invoke_used[32]; /* 256 bits for 256 values */
-} dss1_stk_priv;
-
-#endif /* only l3dss1_process */
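
The IE codes defined in the header above are the identifiers that findie() searches for throughout l3dss1.c and l3ni1.c: a variable-length Q.931 information element is coded as a one-octet identifier, a one-octet contents length, then the contents. The sketch below is a simplified illustration of such a scan under stated assumptions (codeset shifts and the single-octet IEs with identifiers >= 0x80 are ignored); it is not the driver's findie() itself.

#include <stdio.h>

#define IE_CAUSE	0x08
#define IE_CALL_STATE	0x14

/* Return a pointer to the first IE with identifier 'ie' in a Q.931
 * message body, or NULL.  'data' points at the first IE, i.e. past the
 * protocol discriminator, call reference and message type octets.
 */
static const unsigned char *find_ie(const unsigned char *data, int len,
				    unsigned char ie)
{
	while (len >= 2 && !(data[0] & 0x80)) {	/* variable-length IEs only */
		int ielen = 2 + data[1];

		if (data[0] == ie)
			return data;
		if (ielen > len)
			return NULL;		/* truncated message */
		data += ielen;
		len -= ielen;
	}
	return NULL;
}

int main(void)
{
	/* CAUSE (location 0x80, cause 16 -> 0x90) followed by CALL STATE 10 */
	const unsigned char body[] = {
		IE_CAUSE, 0x02, 0x80, 0x90,
		IE_CALL_STATE, 0x01, 0x0a,
	};
	const unsigned char *p = find_ie(body, sizeof(body), IE_CALL_STATE);

	if (p)
		printf("call state %d\n", p[2] & 0x3f);	/* prints 10 */
	return 0;
}
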
diff --git a/drivers/isdn/hisax/l3ni1.c b/drivers/isdn/hisax/l3ni1.c
deleted file mode 100644
index ea311e7df48e..000000000000
--- a/drivers/isdn/hisax/l3ni1.c
+++ /dev/null
@@ -1,3182 +0,0 @@
-/* $Id: l3ni1.c,v 2.8.2.3 2004/01/13 14:31:25 keil Exp $
- *
- * NI1 D-channel protocol
- *
- * Author Matt Henderson & Guy Ellis
- * Copyright by Traverse Technologies Pty Ltd, www.travers.com.au
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- * 2000.6.6 Initial implementation of routines for US NI1
- * Layer 3 protocol based on the EURO/DSS1 D-channel protocol
- * driver written by Karsten Keil et al.
- * NI-1 Hall of Fame - Thanks to....
- * Ragnar Paulson - for some handy code fragments
- * Will Scales - beta tester extraordinaire
- * Brett Whittacre - beta tester and remote devel system in Vegas
- *
- */
-
-#include "hisax.h"
-#include "isdnl3.h"
-#include "l3ni1.h"
-#include <linux/ctype.h>
-#include <linux/slab.h>
-
-extern char *HiSax_getrev(const char *revision);
-static const char *ni1_revision = "$Revision: 2.8.2.3 $";
-
-#define EXT_BEARER_CAPS 1
-
-#define MsgHead(ptr, cref, mty) \
- *ptr++ = 0x8; \
- if (cref == -1) { \
- *ptr++ = 0x0; \
- } else { \
- *ptr++ = 0x1; \
- *ptr++ = cref^0x80; \
- } \
- *ptr++ = mty
-
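
For reference, the MsgHead macro above lays down the common Q.931 message header: protocol discriminator 0x08, a one-octet call reference (or a zero-length dummy call reference when cref == -1), then the message type. The following is a small standalone illustration of the bytes it produces, assuming MT_SETUP == 0x05 as defined in the accompanying headers; it is a sketch, not code from the driver.

#include <stdio.h>

int main(void)
{
	unsigned char buf[4], *ptr = buf;
	int cref = 0x05;		/* example call reference */
	unsigned char mty = 0x05;	/* MT_SETUP */

	/* body of MsgHead(ptr, cref, mty) for cref != -1 */
	*ptr++ = 0x8;			/* protocol discriminator (PROTO_DIS_EURO) */
	*ptr++ = 0x1;			/* call reference length: 1 octet */
	*ptr++ = cref ^ 0x80;		/* call reference with flag bit flipped */
	*ptr++ = mty;			/* message type */

	printf("%02x %02x %02x %02x\n", buf[0], buf[1], buf[2], buf[3]);
	/* prints: 08 01 85 05 */
	return 0;
}
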
-
-/**********************************************/
-/* get a new invoke id for remote operations. */
-/* Only a return value != 0 is valid */
-/**********************************************/
-static unsigned char new_invoke_id(struct PStack *p)
-{
- unsigned char retval;
- int i;
-
- i = 32; /* maximum search depth */
-
- retval = p->prot.ni1.last_invoke_id + 1; /* try new id */
- while ((i) && (p->prot.ni1.invoke_used[retval >> 3] == 0xFF)) {
- p->prot.ni1.last_invoke_id = (retval & 0xF8) + 8;
- i--;
- }
- if (i) {
- while (p->prot.ni1.invoke_used[retval >> 3] & (1 << (retval & 7)))
- retval++;
- } else
- retval = 0;
- p->prot.ni1.last_invoke_id = retval;
- p->prot.ni1.invoke_used[retval >> 3] |= (1 << (retval & 7));
- return (retval);
-} /* new_invoke_id */
-
-/*************************/
-/* free a used invoke id */
-/*************************/
-static void free_invoke_id(struct PStack *p, unsigned char id)
-{
-
- if (!id) return; /* 0 = invalid value */
-
- p->prot.ni1.invoke_used[id >> 3] &= ~(1 << (id & 7));
-} /* free_invoke_id */
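
new_invoke_id() and free_invoke_id() above treat prot.ni1.invoke_used[] as a 256-bit bitmap: id >> 3 selects the byte and 1 << (id & 7) the bit within it, with bit 0 permanently reserved so that an invoke id of 0 can mean "not active". A minimal illustration of that bookkeeping (sketch only, not the driver's allocator):

#include <stdio.h>

static unsigned char invoke_used[32];	/* 256 bits for 256 invoke ids */

static void mark_used(unsigned char id)
{
	invoke_used[id >> 3] |= (1 << (id & 7));
}

static void mark_free(unsigned char id)
{
	invoke_used[id >> 3] &= ~(1 << (id & 7));
}

static int is_used(unsigned char id)
{
	return invoke_used[id >> 3] & (1 << (id & 7));
}

int main(void)
{
	mark_used(0);				/* id 0 is reserved: "not active" */
	mark_used(42);
	printf("42 used: %d\n", !!is_used(42));	/* 1 */
	mark_free(42);
	printf("42 used: %d\n", !!is_used(42));	/* 0 */
	return 0;
}
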
-
-
-/**********************************************************/
-/* create a new l3 process and fill in ni1 specific data */
-/**********************************************************/
-static struct l3_process
-*ni1_new_l3_process(struct PStack *st, int cr)
-{ struct l3_process *proc;
-
- if (!(proc = new_l3_process(st, cr)))
- return (NULL);
-
- proc->prot.ni1.invoke_id = 0;
- proc->prot.ni1.remote_operation = 0;
- proc->prot.ni1.uus1_data[0] = '\0';
-
- return (proc);
-} /* ni1_new_l3_process */
-
-/************************************************/
-/* free a l3 process and all ni1 specific data */
-/************************************************/
-static void
-ni1_release_l3_process(struct l3_process *p)
-{
- free_invoke_id(p->st, p->prot.ni1.invoke_id);
- release_l3_process(p);
-} /* ni1_release_l3_process */
-
-/********************************************************/
-/* search a process with invoke id id and dummy callref */
-/********************************************************/
-static struct l3_process *
-l3ni1_search_dummy_proc(struct PStack *st, int id)
-{ struct l3_process *pc = st->l3.proc; /* start of processes */
-
- if (!id) return (NULL);
-
- while (pc)
- { if ((pc->callref == -1) && (pc->prot.ni1.invoke_id == id))
- return (pc);
- pc = pc->next;
- }
- return (NULL);
-} /* l3ni1_search_dummy_proc */
-
-/*******************************************************************/
-/* called when a facility message with a dummy callref is received */
-/* and a return result is delivered. id specifies the invoke id. */
-/*******************************************************************/
-static void
-l3ni1_dummy_return_result(struct PStack *st, int id, u_char *p, u_char nlen)
-{ isdn_ctrl ic;
- struct IsdnCardState *cs;
- struct l3_process *pc = NULL;
-
- if ((pc = l3ni1_search_dummy_proc(st, id)))
- { L3DelTimer(&pc->timer); /* remove timer */
-
- cs = pc->st->l1.hardware;
- ic.driver = cs->myid;
- ic.command = ISDN_STAT_PROT;
- ic.arg = NI1_STAT_INVOKE_RES;
- ic.parm.ni1_io.hl_id = pc->prot.ni1.invoke_id;
- ic.parm.ni1_io.ll_id = pc->prot.ni1.ll_id;
- ic.parm.ni1_io.proc = pc->prot.ni1.proc;
- ic.parm.ni1_io.timeout = 0;
- ic.parm.ni1_io.datalen = nlen;
- ic.parm.ni1_io.data = p;
- free_invoke_id(pc->st, pc->prot.ni1.invoke_id);
- pc->prot.ni1.invoke_id = 0; /* reset id */
-
- cs->iif.statcallb(&ic);
- ni1_release_l3_process(pc);
- }
- else
- l3_debug(st, "dummy return result id=0x%x result len=%d", id, nlen);
-} /* l3ni1_dummy_return_result */
-
-/*******************************************************************/
-/* called when a facility message with a dummy callref is received */
-/* and a return error is delivered. id specifies the invoke id. */
-/*******************************************************************/
-static void
-l3ni1_dummy_error_return(struct PStack *st, int id, ulong error)
-{ isdn_ctrl ic;
- struct IsdnCardState *cs;
- struct l3_process *pc = NULL;
-
- if ((pc = l3ni1_search_dummy_proc(st, id)))
- { L3DelTimer(&pc->timer); /* remove timer */
-
- cs = pc->st->l1.hardware;
- ic.driver = cs->myid;
- ic.command = ISDN_STAT_PROT;
- ic.arg = NI1_STAT_INVOKE_ERR;
- ic.parm.ni1_io.hl_id = pc->prot.ni1.invoke_id;
- ic.parm.ni1_io.ll_id = pc->prot.ni1.ll_id;
- ic.parm.ni1_io.proc = pc->prot.ni1.proc;
- ic.parm.ni1_io.timeout = error;
- ic.parm.ni1_io.datalen = 0;
- ic.parm.ni1_io.data = NULL;
- free_invoke_id(pc->st, pc->prot.ni1.invoke_id);
- pc->prot.ni1.invoke_id = 0; /* reset id */
-
- cs->iif.statcallb(&ic);
- ni1_release_l3_process(pc);
- }
- else
- l3_debug(st, "dummy return error id=0x%x error=0x%lx", id, error);
-} /* l3ni1_dummy_error_return */
-
-/*******************************************************************/
-/* called when a facility message with a dummy callref is received */
-/* and a invoke is delivered. id specifies the invoke id. */
-/*******************************************************************/
-static void
-l3ni1_dummy_invoke(struct PStack *st, int cr, int id,
- int ident, u_char *p, u_char nlen)
-{ isdn_ctrl ic;
- struct IsdnCardState *cs;
-
- l3_debug(st, "dummy invoke %s id=0x%x ident=0x%x datalen=%d",
- (cr == -1) ? "local" : "broadcast", id, ident, nlen);
- if (cr >= -1) return; /* ignore local data */
-
- cs = st->l1.hardware;
- ic.driver = cs->myid;
- ic.command = ISDN_STAT_PROT;
- ic.arg = NI1_STAT_INVOKE_BRD;
- ic.parm.ni1_io.hl_id = id;
- ic.parm.ni1_io.ll_id = 0;
- ic.parm.ni1_io.proc = ident;
- ic.parm.ni1_io.timeout = 0;
- ic.parm.ni1_io.datalen = nlen;
- ic.parm.ni1_io.data = p;
-
- cs->iif.statcallb(&ic);
-} /* l3ni1_dummy_invoke */
-
-static void
-l3ni1_parse_facility(struct PStack *st, struct l3_process *pc,
- int cr, u_char *p)
-{
- int qd_len = 0;
- unsigned char nlen = 0, ilen, cp_tag;
- int ident, id;
- ulong err_ret;
-
- if (pc)
- st = pc->st; /* valid Stack */
- else
- if ((!st) || (cr >= 0)) return; /* neither pc nor st specified */
-
- p++;
- qd_len = *p++;
- if (qd_len == 0) {
- l3_debug(st, "qd_len == 0");
- return;
- }
- if ((*p & 0x1F) != 0x11) { /* Service discriminator, supplementary service */
- l3_debug(st, "supplementary service != 0x11");
- return;
- }
- while (qd_len > 0 && !(*p & 0x80)) { /* extension ? */
- p++;
- qd_len--;
- }
- if (qd_len < 2) {
- l3_debug(st, "qd_len < 2");
- return;
- }
- p++;
- qd_len--;
- if ((*p & 0xE0) != 0xA0) { /* class and form */
- l3_debug(st, "class and form != 0xA0");
- return;
- }
-
- cp_tag = *p & 0x1F; /* remember tag value */
-
- p++;
- qd_len--;
- if (qd_len < 1)
- { l3_debug(st, "qd_len < 1");
- return;
- }
- if (*p & 0x80)
- { /* length format indefinite or limited */
- nlen = *p++ & 0x7F; /* number of len bytes or indefinite */
- if ((qd_len-- < ((!nlen) ? 3 : (1 + nlen))) ||
- (nlen > 1))
- { l3_debug(st, "length format error or not implemented");
- return;
- }
- if (nlen == 1)
- { nlen = *p++; /* complete length */
- qd_len--;
- }
- else
- { qd_len -= 2; /* trailing null bytes */
- if ((*(p + qd_len)) || (*(p + qd_len + 1)))
- { l3_debug(st, "length format indefinite error");
- return;
- }
- nlen = qd_len;
- }
- }
- else
- { nlen = *p++;
- qd_len--;
- }
- if (qd_len < nlen)
- { l3_debug(st, "qd_len < nlen");
- return;
- }
- qd_len -= nlen;
-
- if (nlen < 2)
- { l3_debug(st, "nlen < 2");
- return;
- }
- if (*p != 0x02)
- { /* invoke identifier tag */
- l3_debug(st, "invoke identifier tag !=0x02");
- return;
- }
- p++;
- nlen--;
- if (*p & 0x80)
- { /* length format */
- l3_debug(st, "invoke id length format 2");
- return;
- }
- ilen = *p++;
- nlen--;
- if (ilen > nlen || ilen == 0)
- { l3_debug(st, "ilen > nlen || ilen == 0");
- return;
- }
- nlen -= ilen;
- id = 0;
- while (ilen > 0)
- { id = (id << 8) | (*p++ & 0xFF); /* invoke identifier */
- ilen--;
- }
-
- switch (cp_tag) { /* component tag */
- case 1: /* invoke */
- if (nlen < 2) {
- l3_debug(st, "nlen < 2 22");
- return;
- }
- if (*p != 0x02) { /* operation value */
- l3_debug(st, "operation value !=0x02");
- return;
- }
- p++;
- nlen--;
- ilen = *p++;
- nlen--;
- if (ilen > nlen || ilen == 0) {
- l3_debug(st, "ilen > nlen || ilen == 0 22");
- return;
- }
- nlen -= ilen;
- ident = 0;
- while (ilen > 0) {
- ident = (ident << 8) | (*p++ & 0xFF);
- ilen--;
- }
-
- if (!pc)
- {
- l3ni1_dummy_invoke(st, cr, id, ident, p, nlen);
- return;
- }
- l3_debug(st, "invoke break");
- break;
- case 2: /* return result */
- /* if no process available handle separately */
- if (!pc)
- { if (cr == -1)
- l3ni1_dummy_return_result(st, id, p, nlen);
- return;
- }
- if ((pc->prot.ni1.invoke_id) && (pc->prot.ni1.invoke_id == id))
- { /* Diversion successful */
- free_invoke_id(st, pc->prot.ni1.invoke_id);
- pc->prot.ni1.remote_result = 0; /* success */
- pc->prot.ni1.invoke_id = 0;
- pc->redir_result = pc->prot.ni1.remote_result;
- st->l3.l3l4(st, CC_REDIR | INDICATION, pc); } /* Diversion successful */
- else
- l3_debug(st, "return error unknown identifier");
- break;
- case 3: /* return error */
- err_ret = 0;
- if (nlen < 2)
- { l3_debug(st, "return error nlen < 2");
- return;
- }
- if (*p != 0x02)
- { /* result tag */
- l3_debug(st, "invoke error tag !=0x02");
- return;
- }
- p++;
- nlen--;
- if (*p > 4)
- { /* length format */
- l3_debug(st, "invoke return errlen > 4 ");
- return;
- }
- ilen = *p++;
- nlen--;
- if (ilen > nlen || ilen == 0)
- { l3_debug(st, "error return ilen > nlen || ilen == 0");
- return;
- }
- nlen -= ilen;
- while (ilen > 0)
- { err_ret = (err_ret << 8) | (*p++ & 0xFF); /* error value */
- ilen--;
- }
- /* if no process available handle separately */
- if (!pc)
- { if (cr == -1)
- l3ni1_dummy_error_return(st, id, err_ret);
- return;
- }
- if ((pc->prot.ni1.invoke_id) && (pc->prot.ni1.invoke_id == id))
- { /* Deflection error */
- free_invoke_id(st, pc->prot.ni1.invoke_id);
- pc->prot.ni1.remote_result = err_ret; /* result */
- pc->prot.ni1.invoke_id = 0;
- pc->redir_result = pc->prot.ni1.remote_result;
- st->l3.l3l4(st, CC_REDIR | INDICATION, pc);
- } /* Deflection error */
- else
- l3_debug(st, "return result unknown identifier");
- break;
- default:
- l3_debug(st, "facility default break tag=0x%02x", cp_tag);
- break;
- }
-}
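The component parsing above walks a nested TLV layout used for supplementary services: a service discriminator whose low five bits are 0x11, a component tag whose class/form bits are 0xA0 (tag 1 = invoke, 2 = return result, 3 = return error), and integer-tagged fields for the invoke identifier and, for invokes, the operation value. A minimal stand-alone sketch of that layout follows; the sample bytes and the simplified walk are illustrative only and skip the driver's length bookkeeping.

#include <stdio.h>

/*
 * Invented sample component: remote-operations discriminator, invoke
 * component of length 6, invoke id 0x55, operation value 0x0d.
 */
static const unsigned char sample[] = {
	0x91,			/* service discriminator (low 5 bits = 0x11) */
	0xa1, 0x06,		/* invoke component, definite length 6 */
	0x02, 0x01, 0x55,	/* invoke id tag, length, value */
	0x02, 0x01, 0x0d,	/* operation tag, length, value */
};

int main(void)
{
	const unsigned char *p = sample;

	if (*p++ != 0x91)		/* supplementary service discriminator */
		return 1;
	if ((*p & 0xe0) != 0xa0)	/* class and form, as checked above */
		return 1;
	printf("component tag %d\n", *p++ & 0x1f);	/* 1 = invoke */
	p++;				/* skip the component length octet */
	if (*p++ != 0x02)		/* invoke identifier tag */
		return 1;
	printf("invoke id 0x%02x\n", p[1]);
	p += 1 + p[0];			/* step over the id contents */
	if (*p++ != 0x02)		/* operation value tag */
		return 1;
	printf("operation 0x%02x\n", p[1]);
	return 0;
}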
-
-static void
-l3ni1_message(struct l3_process *pc, u_char mt)
-{
- struct sk_buff *skb;
- u_char *p;
-
- if (!(skb = l3_alloc_skb(4)))
- return;
- p = skb_put(skb, 4);
- MsgHead(p, pc->callref, mt);
- l3_msg(pc->st, DL_DATA | REQUEST, skb);
-}
-
-static void
-l3ni1_message_plus_chid(struct l3_process *pc, u_char mt)
-/* sends an L3 message plus a channel id - added GE 05/09/00 */
-{
- struct sk_buff *skb;
- u_char tmp[16];
- u_char *p = tmp;
- u_char chid;
-
- chid = (u_char)(pc->para.bchannel & 0x03) | 0x88;
- MsgHead(p, pc->callref, mt);
- *p++ = IE_CHANNEL_ID;
- *p++ = 0x01;
- *p++ = chid;
-
- if (!(skb = l3_alloc_skb(7)))
- return;
- skb_put_data(skb, tmp, 7);
- l3_msg(pc->st, DL_DATA | REQUEST, skb);
-}
-
-static void
-l3ni1_message_cause(struct l3_process *pc, u_char mt, u_char cause)
-{
- struct sk_buff *skb;
- u_char tmp[16];
- u_char *p = tmp;
- int l;
-
- MsgHead(p, pc->callref, mt);
- *p++ = IE_CAUSE;
- *p++ = 0x2;
- *p++ = 0x80;
- *p++ = cause | 0x80;
-
- l = p - tmp;
- if (!(skb = l3_alloc_skb(l)))
- return;
- skb_put_data(skb, tmp, l);
- l3_msg(pc->st, DL_DATA | REQUEST, skb);
-}
-
-static void
-l3ni1_status_send(struct l3_process *pc, u_char pr, void *arg)
-{
- u_char tmp[16];
- u_char *p = tmp;
- int l;
- struct sk_buff *skb;
-
- MsgHead(p, pc->callref, MT_STATUS);
-
- *p++ = IE_CAUSE;
- *p++ = 0x2;
- *p++ = 0x80;
- *p++ = pc->para.cause | 0x80;
-
- *p++ = IE_CALL_STATE;
- *p++ = 0x1;
- *p++ = pc->state & 0x3f;
-
- l = p - tmp;
- if (!(skb = l3_alloc_skb(l)))
- return;
- skb_put_data(skb, tmp, l);
- l3_msg(pc->st, DL_DATA | REQUEST, skb);
-}
-
-static void
-l3ni1_msg_without_setup(struct l3_process *pc, u_char pr, void *arg)
-{
-	/* This routine is called if there was no SETUP made (checks in ni1up and in
-	 * l3ni1_setup) and a RELEASE_COMPLETE has to be sent with an error code.
-	 * MT_STATUS_ENQUIRY in the NULL state is handled here too.
-	 */
- u_char tmp[16];
- u_char *p = tmp;
- int l;
- struct sk_buff *skb;
-
- switch (pc->para.cause) {
-	case 81:	/* invalid call reference */
-	case 88:	/* incompatible destination */
-	case 96:	/* mandatory IE missing */
-	case 100:	/* invalid IE contents */
-	case 101:	/* incompatible call state */
- MsgHead(p, pc->callref, MT_RELEASE_COMPLETE);
- *p++ = IE_CAUSE;
- *p++ = 0x2;
- *p++ = 0x80;
- *p++ = pc->para.cause | 0x80;
- break;
- default:
- printk(KERN_ERR "HiSax l3ni1_msg_without_setup wrong cause %d\n",
- pc->para.cause);
- return;
- }
- l = p - tmp;
- if (!(skb = l3_alloc_skb(l)))
- return;
- skb_put_data(skb, tmp, l);
- l3_msg(pc->st, DL_DATA | REQUEST, skb);
- ni1_release_l3_process(pc);
-}
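The message built above is a plain Q.931 RELEASE COMPLETE carrying only a cause information element. As a rough, self-contained illustration of the octets that end up on the wire, assuming the generic Q.931 values used throughout this file and a simplified one-octet call reference (flag handling as done by MsgHead is left out):

#include <stdio.h>

int main(void)
{
	unsigned char msg[8];
	unsigned char *p = msg;
	unsigned char callref = 0x01;	/* sample call reference value */
	unsigned char cause = 96;	/* mandatory IE missing */
	unsigned int i;

	*p++ = 0x08;		/* Q.931 protocol discriminator */
	*p++ = 0x01;		/* call reference length: one octet */
	*p++ = callref;		/* call reference (flag handling left out) */
	*p++ = 0x5a;		/* message type RELEASE COMPLETE */
	*p++ = 0x08;		/* cause information element */
	*p++ = 0x02;		/* IE length */
	*p++ = 0x80;		/* coding standard / location, as above */
	*p++ = cause | 0x80;	/* cause value with extension bit */

	for (i = 0; i < sizeof(msg); i++)
		printf("%02x ", msg[i]);
	printf("\n");
	return 0;
}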
-
-static int ie_ALERTING[] = {IE_BEARER, IE_CHANNEL_ID | IE_MANDATORY_1,
- IE_FACILITY, IE_PROGRESS, IE_DISPLAY, IE_SIGNAL, IE_HLC,
- IE_USER_USER, -1};
-static int ie_CALL_PROCEEDING[] = {IE_BEARER, IE_CHANNEL_ID | IE_MANDATORY_1,
- IE_FACILITY, IE_PROGRESS, IE_DISPLAY, IE_HLC, -1};
-static int ie_CONNECT[] = {IE_BEARER, IE_CHANNEL_ID | IE_MANDATORY_1,
- IE_FACILITY, IE_PROGRESS, IE_DISPLAY, IE_DATE, IE_SIGNAL,
- IE_CONNECT_PN, IE_CONNECT_SUB, IE_LLC, IE_HLC, IE_USER_USER, -1};
-static int ie_CONNECT_ACKNOWLEDGE[] = {IE_CHANNEL_ID, IE_DISPLAY, IE_SIGNAL, -1};
-static int ie_DISCONNECT[] = {IE_CAUSE | IE_MANDATORY, IE_FACILITY,
- IE_PROGRESS, IE_DISPLAY, IE_SIGNAL, IE_USER_USER, -1};
-static int ie_INFORMATION[] = {IE_COMPLETE, IE_DISPLAY, IE_KEYPAD, IE_SIGNAL,
- IE_CALLED_PN, -1};
-static int ie_NOTIFY[] = {IE_BEARER, IE_NOTIFY | IE_MANDATORY, IE_DISPLAY, -1};
-static int ie_PROGRESS[] = {IE_BEARER, IE_CAUSE, IE_FACILITY, IE_PROGRESS |
- IE_MANDATORY, IE_DISPLAY, IE_HLC, IE_USER_USER, -1};
-static int ie_RELEASE[] = {IE_CAUSE | IE_MANDATORY_1, IE_FACILITY, IE_DISPLAY,
- IE_SIGNAL, IE_USER_USER, -1};
-/* a RELEASE_COMPLETE with errors doesn't require special actions
-   static int ie_RELEASE_COMPLETE[] = {IE_CAUSE | IE_MANDATORY_1, IE_DISPLAY, IE_SIGNAL, IE_USER_USER, -1};
-*/
-static int ie_RESUME_ACKNOWLEDGE[] = {IE_CHANNEL_ID | IE_MANDATORY, IE_FACILITY,
- IE_DISPLAY, -1};
-static int ie_RESUME_REJECT[] = {IE_CAUSE | IE_MANDATORY, IE_DISPLAY, -1};
-static int ie_SETUP[] = {IE_COMPLETE, IE_BEARER | IE_MANDATORY,
- IE_CHANNEL_ID | IE_MANDATORY, IE_FACILITY, IE_PROGRESS,
- IE_NET_FAC, IE_DISPLAY, IE_KEYPAD, IE_SIGNAL, IE_CALLING_PN,
- IE_CALLING_SUB, IE_CALLED_PN, IE_CALLED_SUB, IE_REDIR_NR,
- IE_LLC, IE_HLC, IE_USER_USER, -1};
-static int ie_SETUP_ACKNOWLEDGE[] = {IE_CHANNEL_ID | IE_MANDATORY, IE_FACILITY,
- IE_PROGRESS, IE_DISPLAY, IE_SIGNAL, -1};
-static int ie_STATUS[] = {IE_CAUSE | IE_MANDATORY, IE_CALL_STATE |
- IE_MANDATORY, IE_DISPLAY, -1};
-static int ie_STATUS_ENQUIRY[] = {IE_DISPLAY, -1};
-static int ie_SUSPEND_ACKNOWLEDGE[] = {IE_DISPLAY, IE_FACILITY, -1};
-static int ie_SUSPEND_REJECT[] = {IE_CAUSE | IE_MANDATORY, IE_DISPLAY, -1};
-/* not used
- * static int ie_CONGESTION_CONTROL[] = {IE_CONGESTION | IE_MANDATORY,
- * IE_CAUSE | IE_MANDATORY, IE_DISPLAY, -1};
- * static int ie_USER_INFORMATION[] = {IE_MORE_DATA, IE_USER_USER | IE_MANDATORY, -1};
- * static int ie_RESTART[] = {IE_CHANNEL_ID, IE_DISPLAY, IE_RESTART_IND |
- * IE_MANDATORY, -1};
- */
-static int ie_FACILITY[] = {IE_FACILITY | IE_MANDATORY, IE_DISPLAY, -1};
-static int comp_required[] = {1, 2, 3, 5, 6, 7, 9, 10, 11, 14, 15, -1};
-static int l3_valid_states[] = {0, 1, 2, 3, 4, 6, 7, 8, 9, 10, 11, 12, 15, 17, 19, 25, -1};
-
-struct ie_len {
- int ie;
- int len;
-};
-
-static
-struct ie_len max_ie_len[] = {
- {IE_SEGMENT, 4},
- {IE_BEARER, 12},
- {IE_CAUSE, 32},
- {IE_CALL_ID, 10},
- {IE_CALL_STATE, 3},
- {IE_CHANNEL_ID, 34},
- {IE_FACILITY, 255},
- {IE_PROGRESS, 4},
- {IE_NET_FAC, 255},
- {IE_NOTIFY, 3},
- {IE_DISPLAY, 82},
- {IE_DATE, 8},
- {IE_KEYPAD, 34},
- {IE_SIGNAL, 3},
- {IE_INFORATE, 6},
- {IE_E2E_TDELAY, 11},
- {IE_TDELAY_SEL, 5},
- {IE_PACK_BINPARA, 3},
- {IE_PACK_WINSIZE, 4},
- {IE_PACK_SIZE, 4},
- {IE_CUG, 7},
- {IE_REV_CHARGE, 3},
- {IE_CALLING_PN, 24},
- {IE_CALLING_SUB, 23},
- {IE_CALLED_PN, 24},
- {IE_CALLED_SUB, 23},
- {IE_REDIR_NR, 255},
- {IE_TRANS_SEL, 255},
- {IE_RESTART_IND, 3},
- {IE_LLC, 18},
- {IE_HLC, 5},
- {IE_USER_USER, 131},
- {-1, 0},
-};
-
-static int
-getmax_ie_len(u_char ie) {
- int i = 0;
- while (max_ie_len[i].ie != -1) {
- if (max_ie_len[i].ie == ie)
- return (max_ie_len[i].len);
- i++;
- }
- return (255);
-}
-
-static int
-ie_in_set(struct l3_process *pc, u_char ie, int *checklist) {
- int ret = 1;
-
- while (*checklist != -1) {
- if ((*checklist & 0xff) == ie) {
- if (ie & 0x80)
- return (-ret);
- else
- return (ret);
- }
- ret++;
- checklist++;
- }
- return (0);
-}
-
-static int
-check_infoelements(struct l3_process *pc, struct sk_buff *skb, int *checklist)
-{
- int *cl = checklist;
- u_char mt;
- u_char *p, ie;
- int l, newpos, oldpos;
- int err_seq = 0, err_len = 0, err_compr = 0, err_ureg = 0;
- u_char codeset = 0;
- u_char old_codeset = 0;
- u_char codelock = 1;
-
- p = skb->data;
- /* skip cr */
- p++;
- l = (*p++) & 0xf;
- p += l;
- mt = *p++;
- oldpos = 0;
- while ((p - skb->data) < skb->len) {
- if ((*p & 0xf0) == 0x90) { /* shift codeset */
- old_codeset = codeset;
- codeset = *p & 7;
- if (*p & 0x08)
- codelock = 0;
- else
- codelock = 1;
- if (pc->debug & L3_DEB_CHECK)
- l3_debug(pc->st, "check IE shift%scodeset %d->%d",
- codelock ? " locking " : " ", old_codeset, codeset);
- p++;
- continue;
- }
- if (!codeset) { /* only codeset 0 */
- if ((newpos = ie_in_set(pc, *p, cl))) {
- if (newpos > 0) {
- if (newpos < oldpos)
- err_seq++;
- else
- oldpos = newpos;
- }
- } else {
- if (ie_in_set(pc, *p, comp_required))
- err_compr++;
- else
- err_ureg++;
- }
- }
- ie = *p++;
- if (ie & 0x80) {
- l = 1;
- } else {
- l = *p++;
- p += l;
- l += 2;
- }
- if (!codeset && (l > getmax_ie_len(ie)))
- err_len++;
- if (!codelock) {
- if (pc->debug & L3_DEB_CHECK)
- l3_debug(pc->st, "check IE shift back codeset %d->%d",
- codeset, old_codeset);
- codeset = old_codeset;
- codelock = 1;
- }
- }
- if (err_compr | err_ureg | err_len | err_seq) {
- if (pc->debug & L3_DEB_CHECK)
- l3_debug(pc->st, "check IE MT(%x) %d/%d/%d/%d",
- mt, err_compr, err_ureg, err_len, err_seq);
- if (err_compr)
- return (ERR_IE_COMPREHENSION);
- if (err_ureg)
- return (ERR_IE_UNRECOGNIZED);
- if (err_len)
- return (ERR_IE_LENGTH);
- if (err_seq)
- return (ERR_IE_SEQUENCE);
- }
- return (0);
-}
-
-/* verify that a message type exists and contains no IE errors */
-static int
-l3ni1_check_messagetype_validity(struct l3_process *pc, int mt, void *arg)
-{
- switch (mt) {
- case MT_ALERTING:
- case MT_CALL_PROCEEDING:
- case MT_CONNECT:
- case MT_CONNECT_ACKNOWLEDGE:
- case MT_DISCONNECT:
- case MT_INFORMATION:
- case MT_FACILITY:
- case MT_NOTIFY:
- case MT_PROGRESS:
- case MT_RELEASE:
- case MT_RELEASE_COMPLETE:
- case MT_SETUP:
- case MT_SETUP_ACKNOWLEDGE:
- case MT_RESUME_ACKNOWLEDGE:
- case MT_RESUME_REJECT:
- case MT_SUSPEND_ACKNOWLEDGE:
- case MT_SUSPEND_REJECT:
- case MT_USER_INFORMATION:
- case MT_RESTART:
- case MT_RESTART_ACKNOWLEDGE:
- case MT_CONGESTION_CONTROL:
- case MT_STATUS:
- case MT_STATUS_ENQUIRY:
- if (pc->debug & L3_DEB_CHECK)
- l3_debug(pc->st, "l3ni1_check_messagetype_validity mt(%x) OK", mt);
- break;
- case MT_RESUME: /* RESUME only in user->net */
- case MT_SUSPEND: /* SUSPEND only in user->net */
- default:
- if (pc->debug & (L3_DEB_CHECK | L3_DEB_WARN))
- l3_debug(pc->st, "l3ni1_check_messagetype_validity mt(%x) fail", mt);
- pc->para.cause = 97;
- l3ni1_status_send(pc, 0, NULL);
- return (1);
- }
- return (0);
-}
-
-static void
-l3ni1_std_ie_err(struct l3_process *pc, int ret) {
-
- if (pc->debug & L3_DEB_CHECK)
- l3_debug(pc->st, "check_infoelements ret %d", ret);
- switch (ret) {
- case 0:
- break;
- case ERR_IE_COMPREHENSION:
- pc->para.cause = 96;
- l3ni1_status_send(pc, 0, NULL);
- break;
- case ERR_IE_UNRECOGNIZED:
- pc->para.cause = 99;
- l3ni1_status_send(pc, 0, NULL);
- break;
- case ERR_IE_LENGTH:
- pc->para.cause = 100;
- l3ni1_status_send(pc, 0, NULL);
- break;
- case ERR_IE_SEQUENCE:
- default:
- break;
- }
-}
-
-static int
-l3ni1_get_channel_id(struct l3_process *pc, struct sk_buff *skb) {
- u_char *p;
-
- p = skb->data;
- if ((p = findie(p, skb->len, IE_CHANNEL_ID, 0))) {
- p++;
- if (*p != 1) { /* len for BRI = 1 */
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "wrong chid len %d", *p);
- return (-2);
- }
- p++;
- if (*p & 0x60) { /* only base rate interface */
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "wrong chid %x", *p);
- return (-3);
- }
- return (*p & 0x3);
- } else
- return (-1);
-}
-
-static int
-l3ni1_get_cause(struct l3_process *pc, struct sk_buff *skb) {
- u_char l, i = 0;
- u_char *p;
-
- p = skb->data;
- pc->para.cause = 31;
- pc->para.loc = 0;
- if ((p = findie(p, skb->len, IE_CAUSE, 0))) {
- p++;
- l = *p++;
- if (l > 30)
- return (1);
- if (l) {
- pc->para.loc = *p++;
- l--;
- } else {
- return (2);
- }
- if (l && !(pc->para.loc & 0x80)) {
- l--;
- p++; /* skip recommendation */
- }
- if (l) {
- pc->para.cause = *p++;
- l--;
- if (!(pc->para.cause & 0x80))
- return (3);
- } else
- return (4);
- while (l && (i < 6)) {
- pc->para.diag[i++] = *p++;
- l--;
- }
- } else
- return (-1);
- return (0);
-}
-
-static void
-l3ni1_msg_with_uus(struct l3_process *pc, u_char cmd)
-{
- struct sk_buff *skb;
- u_char tmp[16 + 40];
- u_char *p = tmp;
- int l;
-
- MsgHead(p, pc->callref, cmd);
-
- if (pc->prot.ni1.uus1_data[0])
- { *p++ = IE_USER_USER; /* UUS info element */
- *p++ = strlen(pc->prot.ni1.uus1_data) + 1;
- *p++ = 0x04; /* IA5 chars */
- strcpy(p, pc->prot.ni1.uus1_data);
- p += strlen(pc->prot.ni1.uus1_data);
- pc->prot.ni1.uus1_data[0] = '\0';
- }
-
- l = p - tmp;
- if (!(skb = l3_alloc_skb(l)))
- return;
- skb_put_data(skb, tmp, l);
- l3_msg(pc->st, DL_DATA | REQUEST, skb);
-} /* l3ni1_msg_with_uus */
-
-static void
-l3ni1_release_req(struct l3_process *pc, u_char pr, void *arg)
-{
- StopAllL3Timer(pc);
- newl3state(pc, 19);
- if (!pc->prot.ni1.uus1_data[0])
- l3ni1_message(pc, MT_RELEASE);
- else
- l3ni1_msg_with_uus(pc, MT_RELEASE);
- L3AddTimer(&pc->timer, T308, CC_T308_1);
-}
-
-static void
-l3ni1_release_cmpl(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb = arg;
- int ret;
-
- if ((ret = l3ni1_get_cause(pc, skb)) > 0) {
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "RELCMPL get_cause ret(%d)", ret);
- } else if (ret < 0)
- pc->para.cause = NO_CAUSE;
- StopAllL3Timer(pc);
- newl3state(pc, 0);
- pc->st->l3.l3l4(pc->st, CC_RELEASE | CONFIRM, pc);
- ni1_release_l3_process(pc);
-}
-
-#if EXT_BEARER_CAPS
-
-static u_char *
-EncodeASyncParams(u_char *p, u_char si2)
-{	// 7c 06 88 90 21 42 00 bb
-	p[0] = 0;
-	p[1] = 0x40;	// Intermediate rate: 16 kbit/s jj 2000.02.19
-	p[2] = 0x80;
-	if (si2 & 32)	// 7 data bits
-		p[2] += 16;
-	else		// 8 data bits
-		p[2] += 24;
-
-	if (si2 & 16)	// 2 stop bits
-		p[2] += 96;
-	else		// 1 stop bit
-		p[2] += 32;
-
-	if (si2 & 8)	// even parity
-		p[2] += 2;
-	else		// no parity
-		p[2] += 3;
-
-	switch (si2 & 0x07) {
-	case 0:
-		p[0] = 66;	// 1200 bit/s
-		break;
-	case 1:
-		p[0] = 88;	// 1200/75 bit/s
-		break;
-	case 2:
-		p[0] = 87;	// 75/1200 bit/s
-		break;
-	case 3:
-		p[0] = 67;	// 2400 bit/s
-		break;
-	case 4:
-		p[0] = 69;	// 4800 bit/s
-		break;
-	case 5:
-		p[0] = 72;	// 9600 bit/s
-		break;
-	case 6:
-		p[0] = 73;	// 14400 bit/s
-		break;
-	case 7:
-		p[0] = 75;	// 19200 bit/s
-		break;
-	}
-	return p + 3;
-}
-
-static u_char
-EncodeSyncParams(u_char si2, u_char ai)
-{
-	switch (si2) {
-	case 0:
-		return ai + 2;	// 1200 bit/s
-	case 1:
-		return ai + 24;	// 1200/75 bit/s
-	case 2:
-		return ai + 23;	// 75/1200 bit/s
-	case 3:
-		return ai + 3;	// 2400 bit/s
-	case 4:
-		return ai + 5;	// 4800 bit/s
-	case 5:
-		return ai + 8;	// 9600 bit/s
-	case 6:
-		return ai + 9;	// 14400 bit/s
-	case 7:
-		return ai + 11;	// 19200 bit/s
-	case 8:
-		return ai + 14;	// 48000 bit/s
-	case 9:
-		return ai + 15;	// 56000 bit/s
-	case 15:
-		return ai + 40;	// negotiate bit/s
-	default:
-		break;
-	}
-	return ai;
-}
-
-
-static u_char
-DecodeASyncParams(u_char si2, u_char *p)
-{
- u_char info;
-
-	switch (p[5]) {
-	case 66:	// 1200 bit/s
-		break;	// si2 doesn't change
-	case 88:	// 1200/75 bit/s
-		si2 += 1;
-		break;
-	case 87:	// 75/1200 bit/s
-		si2 += 2;
-		break;
-	case 67:	// 2400 bit/s
-		si2 += 3;
-		break;
-	case 69:	// 4800 bit/s
-		si2 += 4;
-		break;
-	case 72:	// 9600 bit/s
-		si2 += 5;
-		break;
-	case 73:	// 14400 bit/s
-		si2 += 6;
-		break;
-	case 75:	// 19200 bit/s
-		si2 += 7;
-		break;
-	}
-
-	info = p[7] & 0x7f;
-	if ((info & 16) && (!(info & 8)))	// 7 data bits
-		si2 += 32;			// else 8 data bits
-	if ((info & 96) == 96)			// 2 stop bits
-		si2 += 16;			// else 1 stop bit
-	if ((info & 2) && (!(info & 1)))	// even parity
-		si2 += 8;			// else no parity
-
-	return si2;
-}
-
-
-static u_char
-DecodeSyncParams(u_char si2, u_char info)
-{
-	info &= 0x7f;
-	switch (info) {
-	case 40:	// bit/s negotiation failed  ai := 165 not 175!
-		return si2 + 15;
-	case 15:	// 56000 bit/s failed, ai := 0 not 169 !
-		return si2 + 9;
-	case 14:	// 48000 bit/s
-		return si2 + 8;
-	case 11:	// 19200 bit/s
-		return si2 + 7;
-	case 9:		// 14400 bit/s
-		return si2 + 6;
-	case 8:		// 9600 bit/s
-		return si2 + 5;
-	case 5:		// 4800 bit/s
-		return si2 + 4;
-	case 3:		// 2400 bit/s
-		return si2 + 3;
-	case 23:	// 75/1200 bit/s
-		return si2 + 2;
-	case 24:	// 1200/75 bit/s
-		return si2 + 1;
-	default:	// 1200 bit/s
-		return si2;
-	}
-}
-
-static u_char
-DecodeSI2(struct sk_buff *skb)
-{
- u_char *p; //, *pend=skb->data + skb->len;
-
-	if ((p = findie(skb->data, skb->len, 0x7c, 0))) {
-		switch (p[4] & 0x0f) {
-		case 0x01:
-			if (p[1] == 0x04)	// sync. bit rate adaptation
-				return DecodeSyncParams(160, p[5]);	// V.110/X.30
-			else if (p[1] == 0x06)	// async. bit rate adaptation
-				return DecodeASyncParams(192, p);	// V.110/X.30
-			break;
-		case 0x08:	// if (p[5] == 0x02) // sync. bit rate adaptation
-			if (p[1] > 3)
-				return DecodeSyncParams(176, p[5]);	// V.120
-			break;
-		}
-	}
- return 0;
-}
-
-#endif
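The EXT_BEARER_CAPS helpers above and l3ni1_setup_req() below share one convention for the extended service indicator si2: values 160-175 select synchronous V.110/X.30 bit rate adaptation, 176-191 select V.120, and 192 and up select asynchronous V.110/X.30. A small sketch of that classification, taken from the comparisons in the code; the function itself is invented for illustration.

#include <stdio.h>

static const char *si2_class(int si2)
{
	if (si2 >= 192)
		return "async bit rate adaptation, V.110/X.30";
	if (si2 >= 176)
		return "sync bit rate adaptation, V.120";
	if (si2 >= 160)
		return "sync bit rate adaptation, V.110/X.30";
	return "no extended bearer capability";
}

int main(void)
{
	int samples[] = { 0, 165, 180, 200 };
	unsigned int i;

	for (i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
		printf("si2=%3d -> %s\n", samples[i], si2_class(samples[i]));
	return 0;
}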
-
-
-static void
-l3ni1_setup_req(struct l3_process *pc, u_char pr,
- void *arg)
-{
- struct sk_buff *skb;
- u_char tmp[128];
- u_char *p = tmp;
-
- u_char *teln;
- u_char *sub;
- u_char *sp;
- int l;
-
- MsgHead(p, pc->callref, MT_SETUP);
-
- teln = pc->para.setup.phone;
-
- *p++ = 0xa1; /* complete indicator */
- /*
- * Set Bearer Capability, Map info from 1TR6-convention to NI1
- */
- switch (pc->para.setup.si1) {
- case 1: /* Telephony */
- *p++ = IE_BEARER;
- *p++ = 0x3; /* Length */
- *p++ = 0x90; /* 3.1khz Audio */
- *p++ = 0x90; /* Circuit-Mode 64kbps */
- *p++ = 0xa2; /* u-Law Audio */
- break;
- case 5: /* Datatransmission 64k, BTX */
- case 7: /* Datatransmission 64k */
- default:
- *p++ = IE_BEARER;
- *p++ = 0x2; /* Length */
- *p++ = 0x88; /* Coding Std. CCITT, unrestr. dig. Inform. */
- *p++ = 0x90; /* Circuit-Mode 64kbps */
- break;
- }
-
- sub = NULL;
- sp = teln;
- while (*sp) {
- if ('.' == *sp) {
- sub = sp;
- *sp = 0;
- } else
- sp++;
- }
-
- *p++ = IE_KEYPAD;
- *p++ = strlen(teln);
- while (*teln)
- *p++ = (*teln++) & 0x7F;
-
- if (sub)
- *sub++ = '.';
-
-#if EXT_BEARER_CAPS
-	if ((pc->para.setup.si2 >= 160) && (pc->para.setup.si2 <= 175)) {	// sync. bit rate adaptation, V.110/X.30
- *p++ = IE_LLC;
- *p++ = 0x04;
- *p++ = 0x88;
- *p++ = 0x90;
- *p++ = 0x21;
- *p++ = EncodeSyncParams(pc->para.setup.si2 - 160, 0x80);
-	} else if ((pc->para.setup.si2 >= 176) && (pc->para.setup.si2 <= 191)) {	// sync. bit rate adaptation, V.120
- *p++ = IE_LLC;
- *p++ = 0x05;
- *p++ = 0x88;
- *p++ = 0x90;
- *p++ = 0x28;
- *p++ = EncodeSyncParams(pc->para.setup.si2 - 176, 0);
- *p++ = 0x82;
-	} else if (pc->para.setup.si2 >= 192) {	// async. bit rate adaptation, V.110/X.30
- *p++ = IE_LLC;
- *p++ = 0x06;
- *p++ = 0x88;
- *p++ = 0x90;
- *p++ = 0x21;
- p = EncodeASyncParams(p, pc->para.setup.si2 - 192);
- } else {
- switch (pc->para.setup.si1) {
- case 1: /* Telephony */
- *p++ = IE_LLC;
- *p++ = 0x3; /* Length */
- *p++ = 0x90; /* Coding Std. CCITT, 3.1 kHz audio */
- *p++ = 0x90; /* Circuit-Mode 64kbps */
- *p++ = 0xa2; /* u-Law Audio */
- break;
- case 5: /* Datatransmission 64k, BTX */
- case 7: /* Datatransmission 64k */
- default:
- *p++ = IE_LLC;
- *p++ = 0x2; /* Length */
- *p++ = 0x88; /* Coding Std. CCITT, unrestr. dig. Inform. */
- *p++ = 0x90; /* Circuit-Mode 64kbps */
- break;
- }
- }
-#endif
- l = p - tmp;
- if (!(skb = l3_alloc_skb(l)))
- {
- return;
- }
- skb_put_data(skb, tmp, l);
- L3DelTimer(&pc->timer);
- L3AddTimer(&pc->timer, T303, CC_T303);
- newl3state(pc, 1);
- l3_msg(pc->st, DL_DATA | REQUEST, skb);
-}
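For the two bearer-capability variants emitted above, the resulting information elements are short fixed byte strings; 0x04 is the bearer capability IE identifier that l3ni1_setup() also searches for further below. A stand-alone sketch that just prints them, with the octet values copied from the code above and the helper name invented:

#include <stdio.h>

static void print_bearer(int si1)
{
	static const unsigned char telephony[] = { 0x04, 0x03, 0x90, 0x90, 0xa2 };
	static const unsigned char data64k[]   = { 0x04, 0x02, 0x88, 0x90 };
	const unsigned char *ie = (si1 == 1) ? telephony : data64k;
	unsigned int len = (si1 == 1) ? sizeof(telephony) : sizeof(data64k);
	unsigned int i;

	for (i = 0; i < len; i++)
		printf("%02x ", ie[i]);
	printf("(si1=%d)\n", si1);
}

int main(void)
{
	print_bearer(1);	/* telephony: 3.1 kHz audio, u-law */
	print_bearer(7);	/* 64 kbit/s unrestricted digital information */
	return 0;
}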
-
-static void
-l3ni1_call_proc(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb = arg;
- int id, ret;
-
- if ((id = l3ni1_get_channel_id(pc, skb)) >= 0) {
- if ((0 == id) || ((3 == id) && (0x10 == pc->para.moderate))) {
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "setup answer with wrong chid %x", id);
- pc->para.cause = 100;
- l3ni1_status_send(pc, pr, NULL);
- return;
- }
- pc->para.bchannel = id;
- } else if (1 == pc->state) {
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "setup answer wrong chid (ret %d)", id);
- if (id == -1)
- pc->para.cause = 96;
- else
- pc->para.cause = 100;
- l3ni1_status_send(pc, pr, NULL);
- return;
- }
-	/* Now we are on non-mandatory IEs */
- ret = check_infoelements(pc, skb, ie_CALL_PROCEEDING);
- if (ERR_IE_COMPREHENSION == ret) {
- l3ni1_std_ie_err(pc, ret);
- return;
- }
- L3DelTimer(&pc->timer);
- newl3state(pc, 3);
- L3AddTimer(&pc->timer, T310, CC_T310);
-	if (ret)	/* STATUS for non-mandatory IE errors after actions are taken */
- l3ni1_std_ie_err(pc, ret);
- pc->st->l3.l3l4(pc->st, CC_PROCEEDING | INDICATION, pc);
-}
-
-static void
-l3ni1_setup_ack(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb = arg;
- int id, ret;
-
- if ((id = l3ni1_get_channel_id(pc, skb)) >= 0) {
- if ((0 == id) || ((3 == id) && (0x10 == pc->para.moderate))) {
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "setup answer with wrong chid %x", id);
- pc->para.cause = 100;
- l3ni1_status_send(pc, pr, NULL);
- return;
- }
- pc->para.bchannel = id;
- } else {
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "setup answer wrong chid (ret %d)", id);
- if (id == -1)
- pc->para.cause = 96;
- else
- pc->para.cause = 100;
- l3ni1_status_send(pc, pr, NULL);
- return;
- }
-	/* Now we are on non-mandatory IEs */
- ret = check_infoelements(pc, skb, ie_SETUP_ACKNOWLEDGE);
- if (ERR_IE_COMPREHENSION == ret) {
- l3ni1_std_ie_err(pc, ret);
- return;
- }
- L3DelTimer(&pc->timer);
- newl3state(pc, 2);
- L3AddTimer(&pc->timer, T304, CC_T304);
-	if (ret)	/* STATUS for non-mandatory IE errors after actions are taken */
- l3ni1_std_ie_err(pc, ret);
- pc->st->l3.l3l4(pc->st, CC_MORE_INFO | INDICATION, pc);
-}
-
-static void
-l3ni1_disconnect(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb = arg;
- u_char *p;
- int ret;
- u_char cause = 0;
-
- StopAllL3Timer(pc);
- if ((ret = l3ni1_get_cause(pc, skb))) {
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "DISC get_cause ret(%d)", ret);
- if (ret < 0)
- cause = 96;
- else if (ret > 0)
- cause = 100;
- }
- if ((p = findie(skb->data, skb->len, IE_FACILITY, 0)))
- l3ni1_parse_facility(pc->st, pc, pc->callref, p);
- ret = check_infoelements(pc, skb, ie_DISCONNECT);
- if (ERR_IE_COMPREHENSION == ret)
- cause = 96;
- else if ((!cause) && (ERR_IE_UNRECOGNIZED == ret))
- cause = 99;
- ret = pc->state;
- newl3state(pc, 12);
- if (cause)
- newl3state(pc, 19);
- if (11 != ret)
- pc->st->l3.l3l4(pc->st, CC_DISCONNECT | INDICATION, pc);
- else if (!cause)
- l3ni1_release_req(pc, pr, NULL);
- if (cause) {
- l3ni1_message_cause(pc, MT_RELEASE, cause);
- L3AddTimer(&pc->timer, T308, CC_T308_1);
- }
-}
-
-static void
-l3ni1_connect(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb = arg;
- int ret;
-
- ret = check_infoelements(pc, skb, ie_CONNECT);
- if (ERR_IE_COMPREHENSION == ret) {
- l3ni1_std_ie_err(pc, ret);
- return;
- }
- L3DelTimer(&pc->timer); /* T310 */
- newl3state(pc, 10);
- pc->para.chargeinfo = 0;
-	/* COLP handling should be inserted here -- KKe */
- if (ret)
- l3ni1_std_ie_err(pc, ret);
- pc->st->l3.l3l4(pc->st, CC_SETUP | CONFIRM, pc);
-}
-
-static void
-l3ni1_alerting(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb = arg;
- int ret;
-
- ret = check_infoelements(pc, skb, ie_ALERTING);
- if (ERR_IE_COMPREHENSION == ret) {
- l3ni1_std_ie_err(pc, ret);
- return;
- }
- L3DelTimer(&pc->timer); /* T304 */
- newl3state(pc, 4);
- if (ret)
- l3ni1_std_ie_err(pc, ret);
- pc->st->l3.l3l4(pc->st, CC_ALERTING | INDICATION, pc);
-}
-
-static void
-l3ni1_setup(struct l3_process *pc, u_char pr, void *arg)
-{
- u_char *p;
- int bcfound = 0;
- char tmp[80];
- struct sk_buff *skb = arg;
- int id;
- int err = 0;
-
- /*
- * Bearer Capabilities
- */
- p = skb->data;
-	/* only the first occurrence will be detected */
- if ((p = findie(p, skb->len, 0x04, 0))) {
- if ((p[1] < 2) || (p[1] > 11))
- err = 1;
- else {
- pc->para.setup.si2 = 0;
- switch (p[2] & 0x7f) {
- case 0x00: /* Speech */
- case 0x10: /* 3.1 Khz audio */
- pc->para.setup.si1 = 1;
- break;
- case 0x08: /* Unrestricted digital information */
- pc->para.setup.si1 = 7;
-/* JIM, 05.11.97 I wanna set service indicator 2 */
-#if EXT_BEARER_CAPS
- pc->para.setup.si2 = DecodeSI2(skb);
-#endif
- break;
- case 0x09: /* Restricted digital information */
- pc->para.setup.si1 = 2;
- break;
- case 0x11:
- /* Unrestr. digital information with
- * tones/announcements ( or 7 kHz audio
- */
- pc->para.setup.si1 = 3;
- break;
- case 0x18: /* Video */
- pc->para.setup.si1 = 4;
- break;
- default:
- err = 2;
- break;
- }
- switch (p[3] & 0x7f) {
-	case 0x40: /* packet mode */
- pc->para.setup.si1 = 8;
- break;
- case 0x10: /* 64 kbit */
- case 0x11: /* 2*64 kbit */
- case 0x13: /* 384 kbit */
- case 0x15: /* 1536 kbit */
- case 0x17: /* 1920 kbit */
- pc->para.moderate = p[3] & 0x7f;
- break;
- default:
- err = 3;
- break;
- }
- }
- if (pc->debug & L3_DEB_SI)
- l3_debug(pc->st, "SI=%d, AI=%d",
- pc->para.setup.si1, pc->para.setup.si2);
- if (err) {
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "setup with wrong bearer(l=%d:%x,%x)",
- p[1], p[2], p[3]);
- pc->para.cause = 100;
- l3ni1_msg_without_setup(pc, pr, NULL);
- return;
- }
- } else {
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "setup without bearer capabilities");
- /* ETS 300-104 1.3.3 */
- pc->para.cause = 96;
- l3ni1_msg_without_setup(pc, pr, NULL);
- return;
- }
- /*
- * Channel Identification
- */
- if ((id = l3ni1_get_channel_id(pc, skb)) >= 0) {
- if ((pc->para.bchannel = id)) {
- if ((3 == id) && (0x10 == pc->para.moderate)) {
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "setup with wrong chid %x",
- id);
- pc->para.cause = 100;
- l3ni1_msg_without_setup(pc, pr, NULL);
- return;
- }
- bcfound++;
- } else
- { if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "setup without bchannel, call waiting");
- bcfound++;
- }
- } else {
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "setup with wrong chid ret %d", id);
- if (id == -1)
- pc->para.cause = 96;
- else
- pc->para.cause = 100;
- l3ni1_msg_without_setup(pc, pr, NULL);
- return;
- }
-	/* Now we are on non-mandatory IEs */
- err = check_infoelements(pc, skb, ie_SETUP);
- if (ERR_IE_COMPREHENSION == err) {
- pc->para.cause = 96;
- l3ni1_msg_without_setup(pc, pr, NULL);
- return;
- }
- p = skb->data;
- if ((p = findie(p, skb->len, 0x70, 0)))
- iecpy(pc->para.setup.eazmsn, p, 1);
- else
- pc->para.setup.eazmsn[0] = 0;
-
- p = skb->data;
- if ((p = findie(p, skb->len, 0x71, 0))) {
- /* Called party subaddress */
- if ((p[1] >= 2) && (p[2] == 0x80) && (p[3] == 0x50)) {
- tmp[0] = '.';
- iecpy(&tmp[1], p, 2);
- strcat(pc->para.setup.eazmsn, tmp);
- } else if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "wrong called subaddress");
- }
- p = skb->data;
- if ((p = findie(p, skb->len, 0x6c, 0))) {
- pc->para.setup.plan = p[2];
- if (p[2] & 0x80) {
- iecpy(pc->para.setup.phone, p, 1);
- pc->para.setup.screen = 0;
- } else {
- iecpy(pc->para.setup.phone, p, 2);
- pc->para.setup.screen = p[3];
- }
- } else {
- pc->para.setup.phone[0] = 0;
- pc->para.setup.plan = 0;
- pc->para.setup.screen = 0;
- }
- p = skb->data;
- if ((p = findie(p, skb->len, 0x6d, 0))) {
- /* Calling party subaddress */
- if ((p[1] >= 2) && (p[2] == 0x80) && (p[3] == 0x50)) {
- tmp[0] = '.';
- iecpy(&tmp[1], p, 2);
- strcat(pc->para.setup.phone, tmp);
- } else if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "wrong calling subaddress");
- }
- newl3state(pc, 6);
-	if (err)	/* STATUS for non-mandatory IE errors after actions are taken */
- l3ni1_std_ie_err(pc, err);
- pc->st->l3.l3l4(pc->st, CC_SETUP | INDICATION, pc);
-}
-
-static void
-l3ni1_reset(struct l3_process *pc, u_char pr, void *arg)
-{
- ni1_release_l3_process(pc);
-}
-
-static void
-l3ni1_disconnect_req(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb;
- u_char tmp[16 + 40];
- u_char *p = tmp;
- int l;
- u_char cause = 16;
-
- if (pc->para.cause != NO_CAUSE)
- cause = pc->para.cause;
-
- StopAllL3Timer(pc);
-
- MsgHead(p, pc->callref, MT_DISCONNECT);
-
- *p++ = IE_CAUSE;
- *p++ = 0x2;
- *p++ = 0x80;
- *p++ = cause | 0x80;
-
- if (pc->prot.ni1.uus1_data[0])
- { *p++ = IE_USER_USER; /* UUS info element */
- *p++ = strlen(pc->prot.ni1.uus1_data) + 1;
- *p++ = 0x04; /* IA5 chars */
- strcpy(p, pc->prot.ni1.uus1_data);
- p += strlen(pc->prot.ni1.uus1_data);
- pc->prot.ni1.uus1_data[0] = '\0';
- }
-
- l = p - tmp;
- if (!(skb = l3_alloc_skb(l)))
- return;
- skb_put_data(skb, tmp, l);
- newl3state(pc, 11);
- l3_msg(pc->st, DL_DATA | REQUEST, skb);
- L3AddTimer(&pc->timer, T305, CC_T305);
-}
-
-static void
-l3ni1_setup_rsp(struct l3_process *pc, u_char pr,
- void *arg)
-{
- if (!pc->para.bchannel)
- { if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "D-chan connect for waiting call");
- l3ni1_disconnect_req(pc, pr, arg);
- return;
- }
- newl3state(pc, 8);
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "D-chan connect for waiting call");
- l3ni1_message_plus_chid(pc, MT_CONNECT); /* GE 05/09/00 */
- L3DelTimer(&pc->timer);
- L3AddTimer(&pc->timer, T313, CC_T313);
-}
-
-static void
-l3ni1_connect_ack(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb = arg;
- int ret;
-
- ret = check_infoelements(pc, skb, ie_CONNECT_ACKNOWLEDGE);
- if (ERR_IE_COMPREHENSION == ret) {
- l3ni1_std_ie_err(pc, ret);
- return;
- }
- newl3state(pc, 10);
- L3DelTimer(&pc->timer);
- if (ret)
- l3ni1_std_ie_err(pc, ret);
- pc->st->l3.l3l4(pc->st, CC_SETUP_COMPL | INDICATION, pc);
-}
-
-static void
-l3ni1_reject_req(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb;
- u_char tmp[16];
- u_char *p = tmp;
- int l;
- u_char cause = 21;
-
- if (pc->para.cause != NO_CAUSE)
- cause = pc->para.cause;
-
- MsgHead(p, pc->callref, MT_RELEASE_COMPLETE);
-
- *p++ = IE_CAUSE;
- *p++ = 0x2;
- *p++ = 0x80;
- *p++ = cause | 0x80;
-
- l = p - tmp;
- if (!(skb = l3_alloc_skb(l)))
- return;
- skb_put_data(skb, tmp, l);
- l3_msg(pc->st, DL_DATA | REQUEST, skb);
- pc->st->l3.l3l4(pc->st, CC_RELEASE | INDICATION, pc);
- newl3state(pc, 0);
- ni1_release_l3_process(pc);
-}
-
-static void
-l3ni1_release(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb = arg;
- u_char *p;
- int ret, cause = 0;
-
- StopAllL3Timer(pc);
- if ((ret = l3ni1_get_cause(pc, skb)) > 0) {
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "REL get_cause ret(%d)", ret);
- } else if (ret < 0)
- pc->para.cause = NO_CAUSE;
- if ((p = findie(skb->data, skb->len, IE_FACILITY, 0))) {
- l3ni1_parse_facility(pc->st, pc, pc->callref, p);
- }
- if ((ret < 0) && (pc->state != 11))
- cause = 96;
- else if (ret > 0)
- cause = 100;
- ret = check_infoelements(pc, skb, ie_RELEASE);
- if (ERR_IE_COMPREHENSION == ret)
- cause = 96;
- else if ((ERR_IE_UNRECOGNIZED == ret) && (!cause))
- cause = 99;
- if (cause)
- l3ni1_message_cause(pc, MT_RELEASE_COMPLETE, cause);
- else
- l3ni1_message(pc, MT_RELEASE_COMPLETE);
- pc->st->l3.l3l4(pc->st, CC_RELEASE | INDICATION, pc);
- newl3state(pc, 0);
- ni1_release_l3_process(pc);
-}
-
-static void
-l3ni1_alert_req(struct l3_process *pc, u_char pr,
- void *arg)
-{
- newl3state(pc, 7);
- if (!pc->prot.ni1.uus1_data[0])
- l3ni1_message(pc, MT_ALERTING);
- else
- l3ni1_msg_with_uus(pc, MT_ALERTING);
-}
-
-static void
-l3ni1_proceed_req(struct l3_process *pc, u_char pr,
- void *arg)
-{
- newl3state(pc, 9);
- l3ni1_message(pc, MT_CALL_PROCEEDING);
- pc->st->l3.l3l4(pc->st, CC_PROCEED_SEND | INDICATION, pc);
-}
-
-static void
-l3ni1_setup_ack_req(struct l3_process *pc, u_char pr,
- void *arg)
-{
- newl3state(pc, 25);
- L3DelTimer(&pc->timer);
- L3AddTimer(&pc->timer, T302, CC_T302);
- l3ni1_message(pc, MT_SETUP_ACKNOWLEDGE);
-}
-
-/********************************************/
-/* deliver an incoming display message to HL */
-/********************************************/
-static void
-l3ni1_deliver_display(struct l3_process *pc, int pr, u_char *infp)
-{ u_char len;
- isdn_ctrl ic;
- struct IsdnCardState *cs;
- char *p;
-
- if (*infp++ != IE_DISPLAY) return;
- if ((len = *infp++) > 80) return; /* total length <= 82 */
- if (!pc->chan) return;
-
- p = ic.parm.display;
- while (len--)
- *p++ = *infp++;
- *p = '\0';
- ic.command = ISDN_STAT_DISPLAY;
- cs = pc->st->l1.hardware;
- ic.driver = cs->myid;
- ic.arg = pc->chan->chan;
- cs->iif.statcallb(&ic);
-} /* l3ni1_deliver_display */
-
-
-static void
-l3ni1_progress(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb = arg;
- int err = 0;
- u_char *p;
-
- if ((p = findie(skb->data, skb->len, IE_PROGRESS, 0))) {
- if (p[1] != 2) {
- err = 1;
- pc->para.cause = 100;
- } else if (!(p[2] & 0x70)) {
- switch (p[2]) {
- case 0x80:
- case 0x81:
- case 0x82:
- case 0x84:
- case 0x85:
- case 0x87:
- case 0x8a:
- switch (p[3]) {
- case 0x81:
- case 0x82:
- case 0x83:
- case 0x84:
- case 0x88:
- break;
- default:
- err = 2;
- pc->para.cause = 100;
- break;
- }
- break;
- default:
- err = 3;
- pc->para.cause = 100;
- break;
- }
- }
- } else {
- pc->para.cause = 96;
- err = 4;
- }
- if (err) {
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "progress error %d", err);
- l3ni1_status_send(pc, pr, NULL);
- return;
- }
-	/* Now we are on non-mandatory IEs */
- err = check_infoelements(pc, skb, ie_PROGRESS);
- if (err)
- l3ni1_std_ie_err(pc, err);
- if (ERR_IE_COMPREHENSION != err)
- pc->st->l3.l3l4(pc->st, CC_PROGRESS | INDICATION, pc);
-}
-
-static void
-l3ni1_notify(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb = arg;
- int err = 0;
- u_char *p;
-
- if ((p = findie(skb->data, skb->len, IE_NOTIFY, 0))) {
- if (p[1] != 1) {
- err = 1;
- pc->para.cause = 100;
- } else {
- switch (p[2]) {
- case 0x80:
- case 0x81:
- case 0x82:
- break;
- default:
- pc->para.cause = 100;
- err = 2;
- break;
- }
- }
- } else {
- pc->para.cause = 96;
- err = 3;
- }
- if (err) {
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "notify error %d", err);
- l3ni1_status_send(pc, pr, NULL);
- return;
- }
-	/* Now we are on non-mandatory IEs */
- err = check_infoelements(pc, skb, ie_NOTIFY);
- if (err)
- l3ni1_std_ie_err(pc, err);
- if (ERR_IE_COMPREHENSION != err)
- pc->st->l3.l3l4(pc->st, CC_NOTIFY | INDICATION, pc);
-}
-
-static void
-l3ni1_status_enq(struct l3_process *pc, u_char pr, void *arg)
-{
- int ret;
- struct sk_buff *skb = arg;
-
- ret = check_infoelements(pc, skb, ie_STATUS_ENQUIRY);
- l3ni1_std_ie_err(pc, ret);
- pc->para.cause = 30; /* response to STATUS_ENQUIRY */
- l3ni1_status_send(pc, pr, NULL);
-}
-
-static void
-l3ni1_information(struct l3_process *pc, u_char pr, void *arg)
-{
- int ret;
- struct sk_buff *skb = arg;
- u_char *p;
- char tmp[32];
-
- ret = check_infoelements(pc, skb, ie_INFORMATION);
- if (ret)
- l3ni1_std_ie_err(pc, ret);
- if (pc->state == 25) { /* overlap receiving */
- L3DelTimer(&pc->timer);
- p = skb->data;
- if ((p = findie(p, skb->len, 0x70, 0))) {
- iecpy(tmp, p, 1);
- strcat(pc->para.setup.eazmsn, tmp);
- pc->st->l3.l3l4(pc->st, CC_MORE_INFO | INDICATION, pc);
- }
- L3AddTimer(&pc->timer, T302, CC_T302);
- }
-}
-
-/******************************/
-/* handle deflection requests */
-/******************************/
-static void l3ni1_redir_req(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb;
- u_char tmp[128];
- u_char *p = tmp;
- u_char *subp;
- u_char len_phone = 0;
- u_char len_sub = 0;
- int l;
-
-
- strcpy(pc->prot.ni1.uus1_data, pc->chan->setup.eazmsn); /* copy uus element if available */
- if (!pc->chan->setup.phone[0])
- { pc->para.cause = -1;
- l3ni1_disconnect_req(pc, pr, arg); /* disconnect immediately */
- return;
- } /* only uus */
-
- if (pc->prot.ni1.invoke_id)
- free_invoke_id(pc->st, pc->prot.ni1.invoke_id);
-
- if (!(pc->prot.ni1.invoke_id = new_invoke_id(pc->st)))
- return;
-
- MsgHead(p, pc->callref, MT_FACILITY);
-
- for (subp = pc->chan->setup.phone; (*subp) && (*subp != '.'); subp++) len_phone++; /* len of phone number */
- if (*subp++ == '.') len_sub = strlen(subp) + 2; /* length including info subaddress element */
-
- *p++ = 0x1c; /* Facility info element */
- *p++ = len_phone + len_sub + 2 + 2 + 8 + 3 + 3; /* length of element */
- *p++ = 0x91; /* remote operations protocol */
- *p++ = 0xa1; /* invoke component */
-
- *p++ = len_phone + len_sub + 2 + 2 + 8 + 3; /* length of data */
- *p++ = 0x02; /* invoke id tag, integer */
- *p++ = 0x01; /* length */
- *p++ = pc->prot.ni1.invoke_id; /* invoke id */
- *p++ = 0x02; /* operation value tag, integer */
- *p++ = 0x01; /* length */
- *p++ = 0x0D; /* Call Deflect */
-
- *p++ = 0x30; /* sequence phone number */
- *p++ = len_phone + 2 + 2 + 3 + len_sub; /* length */
-
- *p++ = 0x30; /* Deflected to UserNumber */
- *p++ = len_phone + 2 + len_sub; /* length */
- *p++ = 0x80; /* NumberDigits */
- *p++ = len_phone; /* length */
- for (l = 0; l < len_phone; l++)
- *p++ = pc->chan->setup.phone[l];
-
- if (len_sub)
- { *p++ = 0x04; /* called party subaddress */
- *p++ = len_sub - 2;
- while (*subp) *p++ = *subp++;
- }
-
- *p++ = 0x01; /* screening identifier */
- *p++ = 0x01;
- *p++ = pc->chan->setup.screen;
-
- l = p - tmp;
- if (!(skb = l3_alloc_skb(l))) return;
- skb_put_data(skb, tmp, l);
-
- l3_msg(pc->st, DL_DATA | REQUEST, skb);
-} /* l3ni1_redir_req */
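Worked out for one concrete call, the facility information element assembled above nests as shown below. The deflected-to number "1234", invoke id 1 and screening value 0 are invented sample values; the tag octets and length arithmetic mirror the routine above.

#include <stdio.h>

static const unsigned char call_deflect[] = {
	0x1c, 22,		/* facility information element, length */
	0x91,			/* remote operations protocol */
	0xa1, 19,		/* invoke component, length */
	0x02, 0x01, 0x01,	/* invoke id tag, length, id */
	0x02, 0x01, 0x0d,	/* operation tag, length, Call Deflect */
	0x30, 11,		/* sequence: number plus screening */
	0x30, 6,		/* deflected-to user number */
	0x80, 4, '1', '2', '3', '4',	/* NumberDigits */
	0x01, 0x01, 0x00,	/* screening identifier, length, value */
};

int main(void)
{
	unsigned int i;

	for (i = 0; i < sizeof(call_deflect); i++)
		printf("%02x ", call_deflect[i]);
	printf("\n");
	return 0;
}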
-
-/********************************************/
-/* handle deflection request in early state */
-/********************************************/
-static void l3ni1_redir_req_early(struct l3_process *pc, u_char pr, void *arg)
-{
- l3ni1_proceed_req(pc, pr, arg);
- l3ni1_redir_req(pc, pr, arg);
-} /* l3ni1_redir_req_early */
-
-/***********************************************/
-/* handle special commands for this protocol. */
-/* Examples are call independent services like */
-/* remote operations with dummy callref. */
-/***********************************************/
-static int l3ni1_cmd_global(struct PStack *st, isdn_ctrl *ic)
-{ u_char id;
- u_char temp[265];
- u_char *p = temp;
- int i, l, proc_len;
- struct sk_buff *skb;
- struct l3_process *pc = NULL;
-
- switch (ic->arg)
- { case NI1_CMD_INVOKE:
- if (ic->parm.ni1_io.datalen < 0) return (-2); /* invalid parameter */
-
-		for (proc_len = 1, i = ic->parm.ni1_io.proc >> 8; i; i >>= 8)
-			proc_len++;	/* one more octet per remaining high byte */
- l = ic->parm.ni1_io.datalen + proc_len + 8; /* length excluding ie header */
- if (l > 255)
- return (-2); /* too long */
-
- if (!(id = new_invoke_id(st)))
-			return (0);	/* first get an invoke id -> return if none available */
-
- i = -1;
- MsgHead(p, i, MT_FACILITY); /* build message head */
- *p++ = 0x1C; /* Facility IE */
- *p++ = l; /* length of ie */
- *p++ = 0x91; /* remote operations */
- *p++ = 0xA1; /* invoke */
- *p++ = l - 3; /* length of invoke */
- *p++ = 0x02; /* invoke id tag */
- *p++ = 0x01; /* length is 1 */
- *p++ = id; /* invoke id */
- *p++ = 0x02; /* operation */
- *p++ = proc_len; /* length of operation */
-
-		for (i = proc_len; i; i--)
-			*p++ = (ic->parm.ni1_io.proc >> (8 * (i - 1))) & 0xFF;	/* most significant octet first */
- memcpy(p, ic->parm.ni1_io.data, ic->parm.ni1_io.datalen); /* copy data */
- l = (p - temp) + ic->parm.ni1_io.datalen; /* total length */
-
- if (ic->parm.ni1_io.timeout > 0) {
- pc = ni1_new_l3_process(st, -1);
- if (!pc) {
- free_invoke_id(st, id);
- return (-2);
- }
- /* remember id */
- pc->prot.ni1.ll_id = ic->parm.ni1_io.ll_id;
- /* and procedure */
- pc->prot.ni1.proc = ic->parm.ni1_io.proc;
- }
-
- if (!(skb = l3_alloc_skb(l)))
- { free_invoke_id(st, id);
- if (pc) ni1_release_l3_process(pc);
- return (-2);
- }
- skb_put_data(skb, temp, l);
-
- if (pc)
- { pc->prot.ni1.invoke_id = id; /* remember id */
- L3AddTimer(&pc->timer, ic->parm.ni1_io.timeout, CC_TNI1_IO | REQUEST);
- }
-
- l3_msg(st, DL_DATA | REQUEST, skb);
- ic->parm.ni1_io.hl_id = id; /* return id */
- return (0);
-
- case NI1_CMD_INVOKE_ABORT:
- if ((pc = l3ni1_search_dummy_proc(st, ic->parm.ni1_io.hl_id)))
- { L3DelTimer(&pc->timer); /* remove timer */
- ni1_release_l3_process(pc);
- return (0);
- }
- else
- { l3_debug(st, "l3ni1_cmd_global abort unknown id");
- return (-2);
- }
- break;
-
- default:
- l3_debug(st, "l3ni1_cmd_global unknown cmd 0x%lx", ic->arg);
- return (-1);
- } /* switch ic-> arg */
- return (-1);
-} /* l3ni1_cmd_global */
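NI1_CMD_INVOKE has to know how many octets the operation value in ni1_io.proc occupies, both to size the facility IE and to emit the value most-significant octet first. A minimal stand-alone version of that octet count, with the helper name invented:

#include <stdio.h>

static int proc_octets(unsigned int proc)
{
	int len = 1;

	while (proc >>= 8)	/* one more octet per remaining high byte */
		len++;
	return len;
}

int main(void)
{
	printf("proc 0x0d   -> %d octet(s)\n", proc_octets(0x0d));
	printf("proc 0x1234 -> %d octet(s)\n", proc_octets(0x1234));
	return 0;
}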
-
-static void
-l3ni1_io_timer(struct l3_process *pc)
-{ isdn_ctrl ic;
- struct IsdnCardState *cs = pc->st->l1.hardware;
-
- L3DelTimer(&pc->timer); /* remove timer */
-
- ic.driver = cs->myid;
- ic.command = ISDN_STAT_PROT;
- ic.arg = NI1_STAT_INVOKE_ERR;
- ic.parm.ni1_io.hl_id = pc->prot.ni1.invoke_id;
- ic.parm.ni1_io.ll_id = pc->prot.ni1.ll_id;
- ic.parm.ni1_io.proc = pc->prot.ni1.proc;
- ic.parm.ni1_io.timeout = -1;
- ic.parm.ni1_io.datalen = 0;
- ic.parm.ni1_io.data = NULL;
- free_invoke_id(pc->st, pc->prot.ni1.invoke_id);
- pc->prot.ni1.invoke_id = 0; /* reset id */
-
- cs->iif.statcallb(&ic);
-
- ni1_release_l3_process(pc);
-} /* l3ni1_io_timer */
-
-static void
-l3ni1_release_ind(struct l3_process *pc, u_char pr, void *arg)
-{
- u_char *p;
- struct sk_buff *skb = arg;
- int callState = 0;
- p = skb->data;
-
- if ((p = findie(p, skb->len, IE_CALL_STATE, 0))) {
- p++;
- if (1 == *p++)
- callState = *p;
- }
- if (callState == 0) {
- /* ETS 300-104 7.6.1, 8.6.1, 10.6.1... and 16.1
- * set down layer 3 without sending any message
- */
- pc->st->l3.l3l4(pc->st, CC_RELEASE | INDICATION, pc);
- newl3state(pc, 0);
- ni1_release_l3_process(pc);
- } else {
- pc->st->l3.l3l4(pc->st, CC_IGNORE | INDICATION, pc);
- }
-}
-
-static void
-l3ni1_dummy(struct l3_process *pc, u_char pr, void *arg)
-{
-}
-
-static void
-l3ni1_t302(struct l3_process *pc, u_char pr, void *arg)
-{
- L3DelTimer(&pc->timer);
- pc->para.loc = 0;
- pc->para.cause = 28; /* invalid number */
- l3ni1_disconnect_req(pc, pr, NULL);
- pc->st->l3.l3l4(pc->st, CC_SETUP_ERR, pc);
-}
-
-static void
-l3ni1_t303(struct l3_process *pc, u_char pr, void *arg)
-{
- if (pc->N303 > 0) {
- pc->N303--;
- L3DelTimer(&pc->timer);
- l3ni1_setup_req(pc, pr, arg);
- } else {
- L3DelTimer(&pc->timer);
- l3ni1_message_cause(pc, MT_RELEASE_COMPLETE, 102);
- pc->st->l3.l3l4(pc->st, CC_NOSETUP_RSP, pc);
- ni1_release_l3_process(pc);
- }
-}
-
-static void
-l3ni1_t304(struct l3_process *pc, u_char pr, void *arg)
-{
- L3DelTimer(&pc->timer);
- pc->para.loc = 0;
- pc->para.cause = 102;
- l3ni1_disconnect_req(pc, pr, NULL);
- pc->st->l3.l3l4(pc->st, CC_SETUP_ERR, pc);
-
-}
-
-static void
-l3ni1_t305(struct l3_process *pc, u_char pr, void *arg)
-{
- u_char tmp[16];
- u_char *p = tmp;
- int l;
- struct sk_buff *skb;
- u_char cause = 16;
-
- L3DelTimer(&pc->timer);
- if (pc->para.cause != NO_CAUSE)
- cause = pc->para.cause;
-
- MsgHead(p, pc->callref, MT_RELEASE);
-
- *p++ = IE_CAUSE;
- *p++ = 0x2;
- *p++ = 0x80;
- *p++ = cause | 0x80;
-
- l = p - tmp;
- if (!(skb = l3_alloc_skb(l)))
- return;
- skb_put_data(skb, tmp, l);
- newl3state(pc, 19);
- l3_msg(pc->st, DL_DATA | REQUEST, skb);
- L3AddTimer(&pc->timer, T308, CC_T308_1);
-}
-
-static void
-l3ni1_t310(struct l3_process *pc, u_char pr, void *arg)
-{
- L3DelTimer(&pc->timer);
- pc->para.loc = 0;
- pc->para.cause = 102;
- l3ni1_disconnect_req(pc, pr, NULL);
- pc->st->l3.l3l4(pc->st, CC_SETUP_ERR, pc);
-}
-
-static void
-l3ni1_t313(struct l3_process *pc, u_char pr, void *arg)
-{
- L3DelTimer(&pc->timer);
- pc->para.loc = 0;
- pc->para.cause = 102;
- l3ni1_disconnect_req(pc, pr, NULL);
- pc->st->l3.l3l4(pc->st, CC_CONNECT_ERR, pc);
-}
-
-static void
-l3ni1_t308_1(struct l3_process *pc, u_char pr, void *arg)
-{
- newl3state(pc, 19);
- L3DelTimer(&pc->timer);
- l3ni1_message(pc, MT_RELEASE);
- L3AddTimer(&pc->timer, T308, CC_T308_2);
-}
-
-static void
-l3ni1_t308_2(struct l3_process *pc, u_char pr, void *arg)
-{
- L3DelTimer(&pc->timer);
- pc->st->l3.l3l4(pc->st, CC_RELEASE_ERR, pc);
- ni1_release_l3_process(pc);
-}
-
-static void
-l3ni1_t318(struct l3_process *pc, u_char pr, void *arg)
-{
- L3DelTimer(&pc->timer);
- pc->para.cause = 102; /* Timer expiry */
- pc->para.loc = 0; /* local */
- pc->st->l3.l3l4(pc->st, CC_RESUME_ERR, pc);
- newl3state(pc, 19);
- l3ni1_message(pc, MT_RELEASE);
- L3AddTimer(&pc->timer, T308, CC_T308_1);
-}
-
-static void
-l3ni1_t319(struct l3_process *pc, u_char pr, void *arg)
-{
- L3DelTimer(&pc->timer);
- pc->para.cause = 102; /* Timer expiry */
- pc->para.loc = 0; /* local */
- pc->st->l3.l3l4(pc->st, CC_SUSPEND_ERR, pc);
- newl3state(pc, 10);
-}
-
-static void
-l3ni1_restart(struct l3_process *pc, u_char pr, void *arg)
-{
- L3DelTimer(&pc->timer);
- pc->st->l3.l3l4(pc->st, CC_RELEASE | INDICATION, pc);
- ni1_release_l3_process(pc);
-}
-
-static void
-l3ni1_status(struct l3_process *pc, u_char pr, void *arg)
-{
- u_char *p;
- struct sk_buff *skb = arg;
- int ret;
- u_char cause = 0, callState = 0;
-
- if ((ret = l3ni1_get_cause(pc, skb))) {
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "STATUS get_cause ret(%d)", ret);
- if (ret < 0)
- cause = 96;
- else if (ret > 0)
- cause = 100;
- }
- if ((p = findie(skb->data, skb->len, IE_CALL_STATE, 0))) {
- p++;
- if (1 == *p++) {
- callState = *p;
- if (!ie_in_set(pc, *p, l3_valid_states))
- cause = 100;
- } else
- cause = 100;
- } else
- cause = 96;
- if (!cause) { /* no error before */
- ret = check_infoelements(pc, skb, ie_STATUS);
- if (ERR_IE_COMPREHENSION == ret)
- cause = 96;
- else if (ERR_IE_UNRECOGNIZED == ret)
- cause = 99;
- }
- if (cause) {
- u_char tmp;
-
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "STATUS error(%d/%d)", ret, cause);
- tmp = pc->para.cause;
- pc->para.cause = cause;
- l3ni1_status_send(pc, 0, NULL);
- if (cause == 99)
- pc->para.cause = tmp;
- else
- return;
- }
- cause = pc->para.cause;
- if (((cause & 0x7f) == 111) && (callState == 0)) {
- /* ETS 300-104 7.6.1, 8.6.1, 10.6.1...
- * if received MT_STATUS with cause == 111 and call
- * state == 0, then we must set down layer 3
- */
- pc->st->l3.l3l4(pc->st, CC_RELEASE | INDICATION, pc);
- newl3state(pc, 0);
- ni1_release_l3_process(pc);
- }
-}
-
-static void
-l3ni1_facility(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb = arg;
- int ret;
-
- ret = check_infoelements(pc, skb, ie_FACILITY);
- l3ni1_std_ie_err(pc, ret);
- {
- u_char *p;
- if ((p = findie(skb->data, skb->len, IE_FACILITY, 0)))
- l3ni1_parse_facility(pc->st, pc, pc->callref, p);
- }
-}
-
-static void
-l3ni1_suspend_req(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb;
- u_char tmp[32];
- u_char *p = tmp;
- u_char i, l;
- u_char *msg = pc->chan->setup.phone;
-
- MsgHead(p, pc->callref, MT_SUSPEND);
- l = *msg++;
- if (l && (l <= 10)) { /* Max length 10 octets */
- *p++ = IE_CALL_ID;
- *p++ = l;
- for (i = 0; i < l; i++)
- *p++ = *msg++;
- } else if (l) {
- l3_debug(pc->st, "SUS wrong CALL_ID len %d", l);
- return;
- }
- l = p - tmp;
- if (!(skb = l3_alloc_skb(l)))
- return;
- skb_put_data(skb, tmp, l);
- l3_msg(pc->st, DL_DATA | REQUEST, skb);
- newl3state(pc, 15);
- L3AddTimer(&pc->timer, T319, CC_T319);
-}
-
-static void
-l3ni1_suspend_ack(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb = arg;
- int ret;
-
- L3DelTimer(&pc->timer);
- newl3state(pc, 0);
- pc->para.cause = NO_CAUSE;
- pc->st->l3.l3l4(pc->st, CC_SUSPEND | CONFIRM, pc);
- /* We don't handle suspend_ack for IE errors now */
- if ((ret = check_infoelements(pc, skb, ie_SUSPEND_ACKNOWLEDGE)))
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "SUSPACK check ie(%d)", ret);
- ni1_release_l3_process(pc);
-}
-
-static void
-l3ni1_suspend_rej(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb = arg;
- int ret;
-
- if ((ret = l3ni1_get_cause(pc, skb))) {
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "SUSP_REJ get_cause ret(%d)", ret);
- if (ret < 0)
- pc->para.cause = 96;
- else
- pc->para.cause = 100;
- l3ni1_status_send(pc, pr, NULL);
- return;
- }
- ret = check_infoelements(pc, skb, ie_SUSPEND_REJECT);
- if (ERR_IE_COMPREHENSION == ret) {
- l3ni1_std_ie_err(pc, ret);
- return;
- }
- L3DelTimer(&pc->timer);
- pc->st->l3.l3l4(pc->st, CC_SUSPEND_ERR, pc);
- newl3state(pc, 10);
-	if (ret)	/* STATUS for non-mandatory IE errors after actions are taken */
- l3ni1_std_ie_err(pc, ret);
-}
-
-static void
-l3ni1_resume_req(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb;
- u_char tmp[32];
- u_char *p = tmp;
- u_char i, l;
- u_char *msg = pc->para.setup.phone;
-
- MsgHead(p, pc->callref, MT_RESUME);
-
- l = *msg++;
- if (l && (l <= 10)) { /* Max length 10 octets */
- *p++ = IE_CALL_ID;
- *p++ = l;
- for (i = 0; i < l; i++)
- *p++ = *msg++;
- } else if (l) {
- l3_debug(pc->st, "RES wrong CALL_ID len %d", l);
- return;
- }
- l = p - tmp;
- if (!(skb = l3_alloc_skb(l)))
- return;
- skb_put_data(skb, tmp, l);
- l3_msg(pc->st, DL_DATA | REQUEST, skb);
- newl3state(pc, 17);
- L3AddTimer(&pc->timer, T318, CC_T318);
-}
-
-static void
-l3ni1_resume_ack(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb = arg;
- int id, ret;
-
- if ((id = l3ni1_get_channel_id(pc, skb)) > 0) {
- if ((0 == id) || ((3 == id) && (0x10 == pc->para.moderate))) {
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "resume ack with wrong chid %x", id);
- pc->para.cause = 100;
- l3ni1_status_send(pc, pr, NULL);
- return;
- }
- pc->para.bchannel = id;
- } else if (1 == pc->state) {
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "resume ack without chid (ret %d)", id);
- pc->para.cause = 96;
- l3ni1_status_send(pc, pr, NULL);
- return;
- }
- ret = check_infoelements(pc, skb, ie_RESUME_ACKNOWLEDGE);
- if (ERR_IE_COMPREHENSION == ret) {
- l3ni1_std_ie_err(pc, ret);
- return;
- }
- L3DelTimer(&pc->timer);
- pc->st->l3.l3l4(pc->st, CC_RESUME | CONFIRM, pc);
- newl3state(pc, 10);
-	if (ret)	/* STATUS for non-mandatory IE errors after actions are taken */
- l3ni1_std_ie_err(pc, ret);
-}
-
-static void
-l3ni1_resume_rej(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb = arg;
- int ret;
-
- if ((ret = l3ni1_get_cause(pc, skb))) {
- if (pc->debug & L3_DEB_WARN)
- l3_debug(pc->st, "RES_REJ get_cause ret(%d)", ret);
- if (ret < 0)
- pc->para.cause = 96;
- else
- pc->para.cause = 100;
- l3ni1_status_send(pc, pr, NULL);
- return;
- }
- ret = check_infoelements(pc, skb, ie_RESUME_REJECT);
- if (ERR_IE_COMPREHENSION == ret) {
- l3ni1_std_ie_err(pc, ret);
- return;
- }
- L3DelTimer(&pc->timer);
- pc->st->l3.l3l4(pc->st, CC_RESUME_ERR, pc);
- newl3state(pc, 0);
-	if (ret)	/* STATUS for non-mandatory IE errors after actions are taken */
- l3ni1_std_ie_err(pc, ret);
- ni1_release_l3_process(pc);
-}
-
-static void
-l3ni1_global_restart(struct l3_process *pc, u_char pr, void *arg)
-{
- u_char tmp[32];
- u_char *p;
- u_char ri, ch = 0, chan = 0;
- int l;
- struct sk_buff *skb = arg;
- struct l3_process *up;
-
- newl3state(pc, 2);
- L3DelTimer(&pc->timer);
- p = skb->data;
- if ((p = findie(p, skb->len, IE_RESTART_IND, 0))) {
- ri = p[2];
- l3_debug(pc->st, "Restart %x", ri);
- } else {
- l3_debug(pc->st, "Restart without restart IE");
- ri = 0x86;
- }
- p = skb->data;
- if ((p = findie(p, skb->len, IE_CHANNEL_ID, 0))) {
- chan = p[2] & 3;
- ch = p[2];
- if (pc->st->l3.debug)
- l3_debug(pc->st, "Restart for channel %d", chan);
- }
- newl3state(pc, 2);
- up = pc->st->l3.proc;
- while (up) {
- if ((ri & 7) == 7)
- up->st->lli.l4l3(up->st, CC_RESTART | REQUEST, up);
- else if (up->para.bchannel == chan)
- up->st->lli.l4l3(up->st, CC_RESTART | REQUEST, up);
-
- up = up->next;
- }
- p = tmp;
- MsgHead(p, pc->callref, MT_RESTART_ACKNOWLEDGE);
- if (chan) {
- *p++ = IE_CHANNEL_ID;
- *p++ = 1;
- *p++ = ch | 0x80;
- }
- *p++ = 0x79; /* RESTART Ind */
- *p++ = 1;
- *p++ = ri;
- l = p - tmp;
- if (!(skb = l3_alloc_skb(l)))
- return;
- skb_put_data(skb, tmp, l);
- newl3state(pc, 0);
- l3_msg(pc->st, DL_DATA | REQUEST, skb);
-}
-
-static void
-l3ni1_dl_reset(struct l3_process *pc, u_char pr, void *arg)
-{
- pc->para.cause = 0x29; /* Temporary failure */
- pc->para.loc = 0;
- l3ni1_disconnect_req(pc, pr, NULL);
- pc->st->l3.l3l4(pc->st, CC_SETUP_ERR, pc);
-}
-
-static void
-l3ni1_dl_release(struct l3_process *pc, u_char pr, void *arg)
-{
- newl3state(pc, 0);
- pc->para.cause = 0x1b; /* Destination out of order */
- pc->para.loc = 0;
- pc->st->l3.l3l4(pc->st, CC_RELEASE | INDICATION, pc);
- release_l3_process(pc);
-}
-
-static void
-l3ni1_dl_reestablish(struct l3_process *pc, u_char pr, void *arg)
-{
- L3DelTimer(&pc->timer);
- L3AddTimer(&pc->timer, T309, CC_T309);
- l3_msg(pc->st, DL_ESTABLISH | REQUEST, NULL);
-}
-
-static void
-l3ni1_dl_reest_status(struct l3_process *pc, u_char pr, void *arg)
-{
- L3DelTimer(&pc->timer);
-
- pc->para.cause = 0x1F; /* normal, unspecified */
- l3ni1_status_send(pc, 0, NULL);
-}
-
-static void l3ni1_SendSpid(struct l3_process *pc, u_char pr, struct sk_buff *skb, int iNewState)
-{
- u_char *p;
- char *pSPID;
- struct Channel *pChan = pc->st->lli.userdata;
- int l;
-
- if (skb)
- dev_kfree_skb(skb);
-
- if (!(pSPID = strchr(pChan->setup.eazmsn, ':')))
- {
- printk(KERN_ERR "SPID not supplied in EAZMSN %s\n", pChan->setup.eazmsn);
- newl3state(pc, 0);
- pc->st->l3.l3l2(pc->st, DL_RELEASE | REQUEST, NULL);
- return;
- }
-
- l = strlen(++pSPID);
- if (!(skb = l3_alloc_skb(5 + l)))
- {
- printk(KERN_ERR "HiSax can't get memory to send SPID\n");
- return;
- }
-
- p = skb_put(skb, 5);
- *p++ = PROTO_DIS_EURO;
- *p++ = 0;
- *p++ = MT_INFORMATION;
- *p++ = IE_SPID;
- *p++ = l;
-
- skb_put_data(skb, pSPID, l);
-
- newl3state(pc, iNewState);
-
- L3DelTimer(&pc->timer);
- L3AddTimer(&pc->timer, TSPID, CC_TSPID);
-
- pc->st->l3.l3l2(pc->st, DL_DATA | REQUEST, skb);
-}
-
-static void l3ni1_spid_send(struct l3_process *pc, u_char pr, void *arg)
-{
- l3ni1_SendSpid(pc, pr, arg, 20);
-}
-
-static void l3ni1_spid_epid(struct l3_process *pc, u_char pr, void *arg)
-{
- struct sk_buff *skb = arg;
-
- if (skb->data[1] == 0)
- if (skb->data[3] == IE_ENDPOINT_ID)
- {
- L3DelTimer(&pc->timer);
- newl3state(pc, 0);
- l3_msg(pc->st, DL_ESTABLISH | CONFIRM, NULL);
- }
- dev_kfree_skb(skb);
-}
-
-static void l3ni1_spid_tout(struct l3_process *pc, u_char pr, void *arg)
-{
- if (pc->state < 22)
- l3ni1_SendSpid(pc, pr, arg, pc->state + 1);
- else
- {
- L3DelTimer(&pc->timer);
- dev_kfree_skb(arg);
-
- printk(KERN_ERR "SPID not accepted\n");
- newl3state(pc, 0);
- pc->st->l3.l3l2(pc->st, DL_RELEASE | REQUEST, NULL);
- }
-}
-
-/* *INDENT-OFF* */
-static struct stateentry downstatelist[] =
-{
- {SBIT(0),
- CC_SETUP | REQUEST, l3ni1_setup_req},
- {SBIT(0),
- CC_RESUME | REQUEST, l3ni1_resume_req},
- {SBIT(1) | SBIT(2) | SBIT(3) | SBIT(4) | SBIT(6) | SBIT(7) | SBIT(8) | SBIT(9) | SBIT(10) | SBIT(25),
- CC_DISCONNECT | REQUEST, l3ni1_disconnect_req},
- {SBIT(12),
- CC_RELEASE | REQUEST, l3ni1_release_req},
- {ALL_STATES,
- CC_RESTART | REQUEST, l3ni1_restart},
- {SBIT(6) | SBIT(25),
- CC_IGNORE | REQUEST, l3ni1_reset},
- {SBIT(6) | SBIT(25),
- CC_REJECT | REQUEST, l3ni1_reject_req},
- {SBIT(6) | SBIT(25),
- CC_PROCEED_SEND | REQUEST, l3ni1_proceed_req},
- {SBIT(6),
- CC_MORE_INFO | REQUEST, l3ni1_setup_ack_req},
- {SBIT(25),
- CC_MORE_INFO | REQUEST, l3ni1_dummy},
- {SBIT(6) | SBIT(9) | SBIT(25),
- CC_ALERTING | REQUEST, l3ni1_alert_req},
- {SBIT(6) | SBIT(7) | SBIT(9) | SBIT(25),
- CC_SETUP | RESPONSE, l3ni1_setup_rsp},
- {SBIT(10),
- CC_SUSPEND | REQUEST, l3ni1_suspend_req},
- {SBIT(7) | SBIT(9) | SBIT(25),
- CC_REDIR | REQUEST, l3ni1_redir_req},
- {SBIT(6),
- CC_REDIR | REQUEST, l3ni1_redir_req_early},
- {SBIT(9) | SBIT(25),
- CC_DISCONNECT | REQUEST, l3ni1_disconnect_req},
- {SBIT(25),
- CC_T302, l3ni1_t302},
- {SBIT(1),
- CC_T303, l3ni1_t303},
- {SBIT(2),
- CC_T304, l3ni1_t304},
- {SBIT(3),
- CC_T310, l3ni1_t310},
- {SBIT(8),
- CC_T313, l3ni1_t313},
- {SBIT(11),
- CC_T305, l3ni1_t305},
- {SBIT(15),
- CC_T319, l3ni1_t319},
- {SBIT(17),
- CC_T318, l3ni1_t318},
- {SBIT(19),
- CC_T308_1, l3ni1_t308_1},
- {SBIT(19),
- CC_T308_2, l3ni1_t308_2},
- {SBIT(10),
- CC_T309, l3ni1_dl_release},
- { SBIT(20) | SBIT(21) | SBIT(22),
- CC_TSPID, l3ni1_spid_tout },
-};
-
-static struct stateentry datastatelist[] =
-{
- {ALL_STATES,
- MT_STATUS_ENQUIRY, l3ni1_status_enq},
- {ALL_STATES,
- MT_FACILITY, l3ni1_facility},
- {SBIT(19),
- MT_STATUS, l3ni1_release_ind},
- {ALL_STATES,
- MT_STATUS, l3ni1_status},
- {SBIT(0),
- MT_SETUP, l3ni1_setup},
- {SBIT(6) | SBIT(7) | SBIT(8) | SBIT(9) | SBIT(10) | SBIT(11) | SBIT(12) |
- SBIT(15) | SBIT(17) | SBIT(19) | SBIT(25),
- MT_SETUP, l3ni1_dummy},
- {SBIT(1) | SBIT(2),
- MT_CALL_PROCEEDING, l3ni1_call_proc},
- {SBIT(1),
- MT_SETUP_ACKNOWLEDGE, l3ni1_setup_ack},
- {SBIT(2) | SBIT(3),
- MT_ALERTING, l3ni1_alerting},
- {SBIT(2) | SBIT(3),
- MT_PROGRESS, l3ni1_progress},
- {SBIT(2) | SBIT(3) | SBIT(4) | SBIT(7) | SBIT(8) | SBIT(9) | SBIT(10) |
- SBIT(11) | SBIT(12) | SBIT(15) | SBIT(17) | SBIT(19) | SBIT(25),
- MT_INFORMATION, l3ni1_information},
- {SBIT(10) | SBIT(11) | SBIT(15),
- MT_NOTIFY, l3ni1_notify},
- {SBIT(0) | SBIT(1) | SBIT(2) | SBIT(3) | SBIT(4) | SBIT(7) | SBIT(8) | SBIT(10) |
- SBIT(11) | SBIT(12) | SBIT(15) | SBIT(17) | SBIT(19) | SBIT(25),
- MT_RELEASE_COMPLETE, l3ni1_release_cmpl},
- {SBIT(1) | SBIT(2) | SBIT(3) | SBIT(4) | SBIT(7) | SBIT(8) | SBIT(9) | SBIT(10) | SBIT(11) | SBIT(12) | SBIT(15) | SBIT(17) | SBIT(25),
- MT_RELEASE, l3ni1_release},
- {SBIT(19), MT_RELEASE, l3ni1_release_ind},
- {SBIT(1) | SBIT(2) | SBIT(3) | SBIT(4) | SBIT(7) | SBIT(8) | SBIT(9) | SBIT(10) | SBIT(11) | SBIT(15) | SBIT(17) | SBIT(25),
- MT_DISCONNECT, l3ni1_disconnect},
- {SBIT(19),
- MT_DISCONNECT, l3ni1_dummy},
- {SBIT(1) | SBIT(2) | SBIT(3) | SBIT(4),
- MT_CONNECT, l3ni1_connect},
- {SBIT(8),
- MT_CONNECT_ACKNOWLEDGE, l3ni1_connect_ack},
- {SBIT(15),
- MT_SUSPEND_ACKNOWLEDGE, l3ni1_suspend_ack},
- {SBIT(15),
- MT_SUSPEND_REJECT, l3ni1_suspend_rej},
- {SBIT(17),
- MT_RESUME_ACKNOWLEDGE, l3ni1_resume_ack},
- {SBIT(17),
- MT_RESUME_REJECT, l3ni1_resume_rej},
-};
-
-static struct stateentry globalmes_list[] =
-{
- {ALL_STATES,
- MT_STATUS, l3ni1_status},
- {SBIT(0),
- MT_RESTART, l3ni1_global_restart},
-/* {SBIT(1),
- MT_RESTART_ACKNOWLEDGE, l3ni1_restart_ack},
-*/
- { SBIT(0), MT_DL_ESTABLISHED, l3ni1_spid_send },
- { SBIT(20) | SBIT(21) | SBIT(22), MT_INFORMATION, l3ni1_spid_epid },
-};
-
-static struct stateentry manstatelist[] =
-{
- {SBIT(2),
- DL_ESTABLISH | INDICATION, l3ni1_dl_reset},
- {SBIT(10),
- DL_ESTABLISH | CONFIRM, l3ni1_dl_reest_status},
- {SBIT(10),
- DL_RELEASE | INDICATION, l3ni1_dl_reestablish},
- {ALL_STATES,
- DL_RELEASE | INDICATION, l3ni1_dl_release},
-};
-
-/* *INDENT-ON* */
-
-
-static void
-global_handler(struct PStack *st, int mt, struct sk_buff *skb)
-{
- u_char tmp[16];
- u_char *p = tmp;
- int l;
- int i;
- struct l3_process *proc = st->l3.global;
-
- if (skb)
- proc->callref = skb->data[2]; /* cr flag */
- else
- proc->callref = 0;
- for (i = 0; i < ARRAY_SIZE(globalmes_list); i++)
- if ((mt == globalmes_list[i].primitive) &&
- ((1 << proc->state) & globalmes_list[i].state))
- break;
- if (i == ARRAY_SIZE(globalmes_list)) {
- if (st->l3.debug & L3_DEB_STATE) {
- l3_debug(st, "ni1 global state %d mt %x unhandled",
- proc->state, mt);
- }
- MsgHead(p, proc->callref, MT_STATUS);
- *p++ = IE_CAUSE;
- *p++ = 0x2;
- *p++ = 0x80;
- *p++ = 81 | 0x80; /* invalid cr */
- *p++ = 0x14; /* CallState */
- *p++ = 0x1;
- *p++ = proc->state & 0x3f;
- l = p - tmp;
- if (!(skb = l3_alloc_skb(l)))
- return;
- skb_put_data(skb, tmp, l);
- l3_msg(proc->st, DL_DATA | REQUEST, skb);
- } else {
- if (st->l3.debug & L3_DEB_STATE) {
- l3_debug(st, "ni1 global %d mt %x",
- proc->state, mt);
- }
- globalmes_list[i].rout(proc, mt, skb);
- }
-}
-
-static void
-ni1up(struct PStack *st, int pr, void *arg)
-{
- int i, mt, cr, callState;
- char *ptr;
- u_char *p;
- struct sk_buff *skb = arg;
- struct l3_process *proc;
-
- switch (pr) {
- case (DL_DATA | INDICATION):
- case (DL_UNIT_DATA | INDICATION):
- break;
- case (DL_ESTABLISH | INDICATION):
- case (DL_RELEASE | INDICATION):
- case (DL_RELEASE | CONFIRM):
- l3_msg(st, pr, arg);
- return;
- break;
-
- case (DL_ESTABLISH | CONFIRM):
- global_handler(st, MT_DL_ESTABLISHED, NULL);
- return;
-
- default:
- printk(KERN_ERR "HiSax ni1up unknown pr=%04x\n", pr);
- return;
- }
- if (skb->len < 3) {
- l3_debug(st, "ni1up frame too short(%d)", skb->len);
- dev_kfree_skb(skb);
- return;
- }
-
- if (skb->data[0] != PROTO_DIS_EURO) {
- if (st->l3.debug & L3_DEB_PROTERR) {
- l3_debug(st, "ni1up%sunexpected discriminator %x message len %d",
- (pr == (DL_DATA | INDICATION)) ? " " : "(broadcast) ",
- skb->data[0], skb->len);
- }
- dev_kfree_skb(skb);
- return;
- }
- cr = getcallref(skb->data);
- if (skb->len < ((skb->data[1] & 0x0f) + 3)) {
- l3_debug(st, "ni1up frame too short(%d)", skb->len);
- dev_kfree_skb(skb);
- return;
- }
- mt = skb->data[skb->data[1] + 2];
- if (st->l3.debug & L3_DEB_STATE)
- l3_debug(st, "ni1up cr %d", cr);
- if (cr == -2) { /* wrong Callref */
- if (st->l3.debug & L3_DEB_WARN)
- l3_debug(st, "ni1up wrong Callref");
- dev_kfree_skb(skb);
- return;
- } else if (cr == -1) { /* Dummy Callref */
- if (mt == MT_FACILITY)
- {
- if ((p = findie(skb->data, skb->len, IE_FACILITY, 0))) {
- l3ni1_parse_facility(st, NULL,
- (pr == (DL_DATA | INDICATION)) ? -1 : -2, p);
- dev_kfree_skb(skb);
- return;
- }
- }
- else
- {
- global_handler(st, mt, skb);
- return;
- }
-
- if (st->l3.debug & L3_DEB_WARN)
- l3_debug(st, "ni1up dummy Callref (no facility msg or ie)");
- dev_kfree_skb(skb);
- return;
- } else if ((((skb->data[1] & 0x0f) == 1) && (0 == (cr & 0x7f))) ||
- (((skb->data[1] & 0x0f) == 2) && (0 == (cr & 0x7fff)))) { /* Global CallRef */
- if (st->l3.debug & L3_DEB_STATE)
- l3_debug(st, "ni1up Global CallRef");
- global_handler(st, mt, skb);
- dev_kfree_skb(skb);
- return;
- } else if (!(proc = getl3proc(st, cr))) {
- /* No transaction process exist, that means no call with
- * this callreference is active
- */
- if (mt == MT_SETUP) {
- /* Setup creates a new transaction process */
- if (skb->data[2] & 0x80) {
- /* Setup with wrong CREF flag */
- if (st->l3.debug & L3_DEB_STATE)
- l3_debug(st, "ni1up wrong CRef flag");
- dev_kfree_skb(skb);
- return;
- }
- if (!(proc = ni1_new_l3_process(st, cr))) {
- /* May be to answer with RELEASE_COMPLETE and
- * CAUSE 0x2f "Resource unavailable", but this
- * need a new_l3_process too ... arghh
- */
- dev_kfree_skb(skb);
- return;
- }
- } else if (mt == MT_STATUS) {
- if ((ptr = findie(skb->data, skb->len, IE_CAUSE, 0)) != NULL) {
- ptr++;
- if (*ptr++ == 2)
- ptr++;
- }
- callState = 0;
- if ((ptr = findie(skb->data, skb->len, IE_CALL_STATE, 0)) != NULL) {
- ptr++;
- if (*ptr++ == 2)
- ptr++;
- callState = *ptr;
- }
- /* ETS 300-104 part 2.4.1
- * if setup has not been made and a message type
- * MT_STATUS is received with call state == 0,
- * we must send nothing
- */
- if (callState != 0) {
- /* ETS 300-104 part 2.4.2
- * if setup has not been made and a message type
- * MT_STATUS is received with call state != 0,
- * we must send MT_RELEASE_COMPLETE cause 101
- */
- if ((proc = ni1_new_l3_process(st, cr))) {
- proc->para.cause = 101;
- l3ni1_msg_without_setup(proc, 0, NULL);
- }
- }
- dev_kfree_skb(skb);
- return;
- } else if (mt == MT_RELEASE_COMPLETE) {
- dev_kfree_skb(skb);
- return;
- } else {
- /* ETS 300-104 part 2
- * if setup has not been made and a message type
- * (except MT_SETUP and RELEASE_COMPLETE) is received,
- * we must send MT_RELEASE_COMPLETE cause 81 */
- dev_kfree_skb(skb);
- if ((proc = ni1_new_l3_process(st, cr))) {
- proc->para.cause = 81;
- l3ni1_msg_without_setup(proc, 0, NULL);
- }
- return;
- }
- }
- if (l3ni1_check_messagetype_validity(proc, mt, skb)) {
- dev_kfree_skb(skb);
- return;
- }
- if ((p = findie(skb->data, skb->len, IE_DISPLAY, 0)) != NULL)
- l3ni1_deliver_display(proc, pr, p); /* Display IE included */
- for (i = 0; i < ARRAY_SIZE(datastatelist); i++)
- if ((mt == datastatelist[i].primitive) &&
- ((1 << proc->state) & datastatelist[i].state))
- break;
- if (i == ARRAY_SIZE(datastatelist)) {
- if (st->l3.debug & L3_DEB_STATE) {
- l3_debug(st, "ni1up%sstate %d mt %#x unhandled",
- (pr == (DL_DATA | INDICATION)) ? " " : "(broadcast) ",
- proc->state, mt);
- }
- if ((MT_RELEASE_COMPLETE != mt) && (MT_RELEASE != mt)) {
- proc->para.cause = 101;
- l3ni1_status_send(proc, pr, skb);
- }
- } else {
- if (st->l3.debug & L3_DEB_STATE) {
- l3_debug(st, "ni1up%sstate %d mt %x",
- (pr == (DL_DATA | INDICATION)) ? " " : "(broadcast) ",
- proc->state, mt);
- }
- datastatelist[i].rout(proc, pr, skb);
- }
- dev_kfree_skb(skb);
- return;
-}
-
-static void
-ni1down(struct PStack *st, int pr, void *arg)
-{
- int i, cr;
- struct l3_process *proc;
- struct Channel *chan;
-
- if ((DL_ESTABLISH | REQUEST) == pr) {
- l3_msg(st, pr, NULL);
- return;
- } else if (((CC_SETUP | REQUEST) == pr) || ((CC_RESUME | REQUEST) == pr)) {
- chan = arg;
- cr = newcallref();
- cr |= 0x80;
- if ((proc = ni1_new_l3_process(st, cr))) {
- proc->chan = chan;
- chan->proc = proc;
- memcpy(&proc->para.setup, &chan->setup, sizeof(setup_parm));
- proc->callref = cr;
- }
- } else {
- proc = arg;
- }
- if (!proc) {
- printk(KERN_ERR "HiSax ni1down without proc pr=%04x\n", pr);
- return;
- }
-
- if (pr == (CC_TNI1_IO | REQUEST)) {
- l3ni1_io_timer(proc); /* timer expires */
- return;
- }
-
- for (i = 0; i < ARRAY_SIZE(downstatelist); i++)
- if ((pr == downstatelist[i].primitive) &&
- ((1 << proc->state) & downstatelist[i].state))
- break;
- if (i == ARRAY_SIZE(downstatelist)) {
- if (st->l3.debug & L3_DEB_STATE) {
- l3_debug(st, "ni1down state %d prim %#x unhandled",
- proc->state, pr);
- }
- } else {
- if (st->l3.debug & L3_DEB_STATE) {
- l3_debug(st, "ni1down state %d prim %#x",
- proc->state, pr);
- }
- downstatelist[i].rout(proc, pr, arg);
- }
-}
-
-static void
-ni1man(struct PStack *st, int pr, void *arg)
-{
- int i;
- struct l3_process *proc = arg;
-
- if (!proc) {
- printk(KERN_ERR "HiSax ni1man without proc pr=%04x\n", pr);
- return;
- }
- for (i = 0; i < ARRAY_SIZE(manstatelist); i++)
- if ((pr == manstatelist[i].primitive) &&
- ((1 << proc->state) & manstatelist[i].state))
- break;
- if (i == ARRAY_SIZE(manstatelist)) {
- if (st->l3.debug & L3_DEB_STATE) {
- l3_debug(st, "cr %d ni1man state %d prim %#x unhandled",
- proc->callref & 0x7f, proc->state, pr);
- }
- } else {
- if (st->l3.debug & L3_DEB_STATE) {
- l3_debug(st, "cr %d ni1man state %d prim %#x",
- proc->callref & 0x7f, proc->state, pr);
- }
- manstatelist[i].rout(proc, pr, arg);
- }
-}
-
-void
-setstack_ni1(struct PStack *st)
-{
- char tmp[64];
- int i;
-
- st->lli.l4l3 = ni1down;
- st->lli.l4l3_proto = l3ni1_cmd_global;
- st->l2.l2l3 = ni1up;
- st->l3.l3ml3 = ni1man;
- st->l3.N303 = 1;
- st->prot.ni1.last_invoke_id = 0;
- st->prot.ni1.invoke_used[0] = 1; /* Bit 0 must always be set to 1 */
- i = 1;
- while (i < 32)
- st->prot.ni1.invoke_used[i++] = 0;
-
- if (!(st->l3.global = kmalloc(sizeof(struct l3_process), GFP_ATOMIC))) {
- printk(KERN_ERR "HiSax can't get memory for ni1 global CR\n");
- } else {
- st->l3.global->state = 0;
- st->l3.global->callref = 0;
- st->l3.global->next = NULL;
- st->l3.global->debug = L3_DEB_WARN;
- st->l3.global->st = st;
- st->l3.global->N303 = 1;
- st->l3.global->prot.ni1.invoke_id = 0;
-
- L3InitTimer(st->l3.global, &st->l3.global->timer);
- }
- strcpy(tmp, ni1_revision);
- printk(KERN_INFO "HiSax: National ISDN-1 Rev. %s\n", HiSax_getrev(tmp));
-}
diff --git a/drivers/isdn/hisax/l3ni1.h b/drivers/isdn/hisax/l3ni1.h
deleted file mode 100644
index 99d37d2cea4f..000000000000
--- a/drivers/isdn/hisax/l3ni1.h
+++ /dev/null
@@ -1,136 +0,0 @@
-/* $Id: l3ni1.h,v 2.3.6.2 2001/09/23 22:24:50 kai Exp $
- *
- * NI1 D-channel protocol
- *
- * Author Matt Henderson & Guy Ellis
- * Copyright by Traverse Technologies Pty Ltd, www.travers.com.au
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- * 2000.6.6 Initial implementation of routines for US NI1
- * Layer 3 protocol based on the EURO/DSS1 D-channel protocol
- * driver written by Karsten Keil et al. Thanks also for the
- * code provided by Ragnar Paulson.
- *
- */
-
-#ifndef l3ni1_process
-
-#define T302 15000
-#define T303 4000
-#define T304 30000
-#define T305 30000
-#define T308 4000
-/* for layer 1 certification T309 < layer1 T3 (e.g. 4000) */
-/* This makes some tests easier and quicker */
-#define T309 40000
-#define T310 30000
-#define T313 4000
-#define T318 4000
-#define T319 4000
-#define TSPID 5000 /* was 2000 - Guy Ellis */
-
-/*
- * Message-Types
- */
-
-#define MT_ALERTING 0x01
-#define MT_CALL_PROCEEDING 0x02
-#define MT_CONNECT 0x07
-#define MT_CONNECT_ACKNOWLEDGE 0x0f
-#define MT_PROGRESS 0x03
-#define MT_SETUP 0x05
-#define MT_SETUP_ACKNOWLEDGE 0x0d
-#define MT_RESUME 0x26
-#define MT_RESUME_ACKNOWLEDGE 0x2e
-#define MT_RESUME_REJECT 0x22
-#define MT_SUSPEND 0x25
-#define MT_SUSPEND_ACKNOWLEDGE 0x2d
-#define MT_SUSPEND_REJECT 0x21
-#define MT_USER_INFORMATION 0x20
-#define MT_DISCONNECT 0x45
-#define MT_RELEASE 0x4d
-#define MT_RELEASE_COMPLETE 0x5a
-#define MT_RESTART 0x46
-#define MT_RESTART_ACKNOWLEDGE 0x4e
-#define MT_SEGMENT 0x60
-#define MT_CONGESTION_CONTROL 0x79
-#define MT_INFORMATION 0x7b
-#define MT_FACILITY 0x62
-#define MT_NOTIFY 0x6e
-#define MT_STATUS 0x7d
-#define MT_STATUS_ENQUIRY 0x75
-#define MT_DL_ESTABLISHED 0xfe
-
-#define IE_SEGMENT 0x00
-#define IE_BEARER 0x04
-#define IE_CAUSE 0x08
-#define IE_CALL_ID 0x10
-#define IE_CALL_STATE 0x14
-#define IE_CHANNEL_ID 0x18
-#define IE_FACILITY 0x1c
-#define IE_PROGRESS 0x1e
-#define IE_NET_FAC 0x20
-#define IE_NOTIFY 0x27
-#define IE_DISPLAY 0x28
-#define IE_DATE 0x29
-#define IE_KEYPAD 0x2c
-#define IE_SIGNAL 0x34
-#define IE_SPID 0x3a
-#define IE_ENDPOINT_ID 0x3b
-#define IE_INFORATE 0x40
-#define IE_E2E_TDELAY 0x42
-#define IE_TDELAY_SEL 0x43
-#define IE_PACK_BINPARA 0x44
-#define IE_PACK_WINSIZE 0x45
-#define IE_PACK_SIZE 0x46
-#define IE_CUG 0x47
-#define IE_REV_CHARGE 0x4a
-#define IE_CONNECT_PN 0x4c
-#define IE_CONNECT_SUB 0x4d
-#define IE_CALLING_PN 0x6c
-#define IE_CALLING_SUB 0x6d
-#define IE_CALLED_PN 0x70
-#define IE_CALLED_SUB 0x71
-#define IE_REDIR_NR 0x74
-#define IE_TRANS_SEL 0x78
-#define IE_RESTART_IND 0x79
-#define IE_LLC 0x7c
-#define IE_HLC 0x7d
-#define IE_USER_USER 0x7e
-#define IE_ESCAPE 0x7f
-#define IE_SHIFT 0x90
-#define IE_MORE_DATA 0xa0
-#define IE_COMPLETE 0xa1
-#define IE_CONGESTION 0xb0
-#define IE_REPEAT 0xd0
-
-#define IE_MANDATORY 0x0100
-/* mandatory not in every case */
-#define IE_MANDATORY_1 0x0200
-
-#define ERR_IE_COMPREHENSION 1
-#define ERR_IE_UNRECOGNIZED -1
-#define ERR_IE_LENGTH -2
-#define ERR_IE_SEQUENCE -3
-
-#else /* only l3ni1_process */
-
-/* l3ni1 specific data in l3 process */
-typedef struct
-{ unsigned char invoke_id; /* used invoke id in remote ops, 0 = not active */
- ulong ll_id; /* remembered ll id */
- u8 remote_operation; /* handled remote operation, 0 = not active */
- int proc; /* remembered procedure */
- ulong remote_result; /* result of remote operation for statcallb */
- char uus1_data[35]; /* data send during alerting or disconnect */
-} ni1_proc_priv;
-
-/* l3dni1 specific data in protocol stack */
-typedef struct
-{ unsigned char last_invoke_id; /* last used value for invoking */
- unsigned char invoke_used[32]; /* 256 bits for 256 values */
-} ni1_stk_priv;
-
-#endif /* only l3dni1_process */
diff --git a/drivers/isdn/hisax/lmgr.c b/drivers/isdn/hisax/lmgr.c
deleted file mode 100644
index 5b63eb6601aa..000000000000
--- a/drivers/isdn/hisax/lmgr.c
+++ /dev/null
@@ -1,50 +0,0 @@
-/* $Id: lmgr.c,v 1.7.6.2 2001/09/23 22:24:50 kai Exp $
- *
- * Layermanagement module
- *
- * Author Karsten Keil
- * Copyright by Karsten Keil <keil@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#include "hisax.h"
-
-static void
-error_handling_dchan(struct PStack *st, int Error)
-{
- switch (Error) {
- case 'C':
- case 'D':
- case 'G':
- case 'H':
- st->l2.l2tei(st, MDL_ERROR | REQUEST, NULL);
- break;
- }
-}
-
-static void
-hisax_manager(struct PStack *st, int pr, void *arg)
-{
- long Code;
-
- switch (pr) {
- case (MDL_ERROR | INDICATION):
- Code = (long) arg;
- HiSax_putstatus(st->l1.hardware, "manager: MDL_ERROR",
- " %c %s", (char)Code,
- test_bit(FLG_LAPD, &st->l2.flag) ?
- "D-channel" : "B-channel");
- if (test_bit(FLG_LAPD, &st->l2.flag))
- error_handling_dchan(st, Code);
- break;
- }
-}
-
-void
-setstack_manager(struct PStack *st)
-{
- st->ma.layer = hisax_manager;
-}
diff --git a/drivers/isdn/hisax/mic.c b/drivers/isdn/hisax/mic.c
deleted file mode 100644
index 93398676f78f..000000000000
--- a/drivers/isdn/hisax/mic.c
+++ /dev/null
@@ -1,235 +0,0 @@
-/* $Id: mic.c,v 1.12.2.4 2004/01/13 23:48:39 keil Exp $
- *
- * low level stuff for mic cards
- *
- * Author Stephan von Krawczynski
- * Copyright by Stephan von Krawczynski <skraw@ithnet.com>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#include <linux/init.h>
-#include "hisax.h"
-#include "isac.h"
-#include "hscx.h"
-#include "isdnl1.h"
-
-static const char *mic_revision = "$Revision: 1.12.2.4 $";
-
-#define byteout(addr, val) outb(val, addr)
-#define bytein(addr) inb(addr)
-
-#define MIC_ISAC 2
-#define MIC_HSCX 1
-#define MIC_ADR 7
-
-/* CARD_ADR (Write) */
-#define MIC_RESET 0x3 /* same as DOS driver */
-
-static inline u_char
-readreg(unsigned int ale, unsigned int adr, u_char off)
-{
- register u_char ret;
-
- byteout(ale, off);
- ret = bytein(adr);
- return (ret);
-}
-
-static inline void
-readfifo(unsigned int ale, unsigned int adr, u_char off, u_char *data, int size)
-{
- byteout(ale, off);
- insb(adr, data, size);
-}
-
-
-static inline void
-writereg(unsigned int ale, unsigned int adr, u_char off, u_char data)
-{
- byteout(ale, off);
- byteout(adr, data);
-}
-
-static inline void
-writefifo(unsigned int ale, unsigned int adr, u_char off, u_char *data, int size)
-{
- byteout(ale, off);
- outsb(adr, data, size);
-}
-
-/* Interface functions */
-
-static u_char
-ReadISAC(struct IsdnCardState *cs, u_char offset)
-{
- return (readreg(cs->hw.mic.adr, cs->hw.mic.isac, offset));
-}
-
-static void
-WriteISAC(struct IsdnCardState *cs, u_char offset, u_char value)
-{
- writereg(cs->hw.mic.adr, cs->hw.mic.isac, offset, value);
-}
-
-static void
-ReadISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- readfifo(cs->hw.mic.adr, cs->hw.mic.isac, 0, data, size);
-}
-
-static void
-WriteISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- writefifo(cs->hw.mic.adr, cs->hw.mic.isac, 0, data, size);
-}
-
-static u_char
-ReadHSCX(struct IsdnCardState *cs, int hscx, u_char offset)
-{
- return (readreg(cs->hw.mic.adr,
- cs->hw.mic.hscx, offset + (hscx ? 0x40 : 0)));
-}
-
-static void
-WriteHSCX(struct IsdnCardState *cs, int hscx, u_char offset, u_char value)
-{
- writereg(cs->hw.mic.adr,
- cs->hw.mic.hscx, offset + (hscx ? 0x40 : 0), value);
-}
-
-/*
- * fast interrupt HSCX stuff goes here
- */
-
-#define READHSCX(cs, nr, reg) readreg(cs->hw.mic.adr, \
- cs->hw.mic.hscx, reg + (nr ? 0x40 : 0))
-#define WRITEHSCX(cs, nr, reg, data) writereg(cs->hw.mic.adr, \
- cs->hw.mic.hscx, reg + (nr ? 0x40 : 0), data)
-
-#define READHSCXFIFO(cs, nr, ptr, cnt) readfifo(cs->hw.mic.adr, \
- cs->hw.mic.hscx, (nr ? 0x40 : 0), ptr, cnt)
-
-#define WRITEHSCXFIFO(cs, nr, ptr, cnt) writefifo(cs->hw.mic.adr, \
- cs->hw.mic.hscx, (nr ? 0x40 : 0), ptr, cnt)
-
-#include "hscx_irq.c"
-
-static irqreturn_t
-mic_interrupt(int intno, void *dev_id)
-{
- struct IsdnCardState *cs = dev_id;
- u_char val;
- u_long flags;
-
- spin_lock_irqsave(&cs->lock, flags);
- val = readreg(cs->hw.mic.adr, cs->hw.mic.hscx, HSCX_ISTA + 0x40);
-Start_HSCX:
- if (val)
- hscx_int_main(cs, val);
- val = readreg(cs->hw.mic.adr, cs->hw.mic.isac, ISAC_ISTA);
-Start_ISAC:
- if (val)
- isac_interrupt(cs, val);
- val = readreg(cs->hw.mic.adr, cs->hw.mic.hscx, HSCX_ISTA + 0x40);
- if (val) {
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "HSCX IntStat after IntRoutine");
- goto Start_HSCX;
- }
- val = readreg(cs->hw.mic.adr, cs->hw.mic.isac, ISAC_ISTA);
- if (val) {
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "ISAC IntStat after IntRoutine");
- goto Start_ISAC;
- }
- writereg(cs->hw.mic.adr, cs->hw.mic.hscx, HSCX_MASK, 0xFF);
- writereg(cs->hw.mic.adr, cs->hw.mic.hscx, HSCX_MASK + 0x40, 0xFF);
- writereg(cs->hw.mic.adr, cs->hw.mic.isac, ISAC_MASK, 0xFF);
- writereg(cs->hw.mic.adr, cs->hw.mic.isac, ISAC_MASK, 0x0);
- writereg(cs->hw.mic.adr, cs->hw.mic.hscx, HSCX_MASK, 0x0);
- writereg(cs->hw.mic.adr, cs->hw.mic.hscx, HSCX_MASK + 0x40, 0x0);
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
-}
-
-static void
-release_io_mic(struct IsdnCardState *cs)
-{
- int bytecnt = 8;
-
- if (cs->hw.mic.cfg_reg)
- release_region(cs->hw.mic.cfg_reg, bytecnt);
-}
-
-static int
-mic_card_msg(struct IsdnCardState *cs, int mt, void *arg)
-{
- u_long flags;
-
- switch (mt) {
- case CARD_RESET:
- return (0);
- case CARD_RELEASE:
- release_io_mic(cs);
- return (0);
- case CARD_INIT:
- spin_lock_irqsave(&cs->lock, flags);
- inithscx(cs); /* /RTSA := ISAC RST */
- inithscxisac(cs, 3);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_TEST:
- return (0);
- }
- return (0);
-}
-
-int setup_mic(struct IsdnCard *card)
-{
- int bytecnt;
- struct IsdnCardState *cs = card->cs;
- char tmp[64];
-
- strcpy(tmp, mic_revision);
- printk(KERN_INFO "HiSax: mic driver Rev. %s\n", HiSax_getrev(tmp));
- if (cs->typ != ISDN_CTYPE_MIC)
- return (0);
-
- bytecnt = 8;
- cs->hw.mic.cfg_reg = card->para[1];
- cs->irq = card->para[0];
- cs->hw.mic.adr = cs->hw.mic.cfg_reg + MIC_ADR;
- cs->hw.mic.isac = cs->hw.mic.cfg_reg + MIC_ISAC;
- cs->hw.mic.hscx = cs->hw.mic.cfg_reg + MIC_HSCX;
-
- if (!request_region(cs->hw.mic.cfg_reg, bytecnt, "mic isdn")) {
- printk(KERN_WARNING
- "HiSax: ith mic config port %x-%x already in use\n",
- cs->hw.mic.cfg_reg,
- cs->hw.mic.cfg_reg + bytecnt);
- return (0);
- }
- printk(KERN_INFO "mic: defined at 0x%x IRQ %d\n",
- cs->hw.mic.cfg_reg, cs->irq);
- setup_isac(cs);
- cs->readisac = &ReadISAC;
- cs->writeisac = &WriteISAC;
- cs->readisacfifo = &ReadISACfifo;
- cs->writeisacfifo = &WriteISACfifo;
- cs->BC_Read_Reg = &ReadHSCX;
- cs->BC_Write_Reg = &WriteHSCX;
- cs->BC_Send_Data = &hscx_fill_fifo;
- cs->cardmsg = &mic_card_msg;
- cs->irq_func = &mic_interrupt;
- ISACVersion(cs, "mic:");
- if (HscxVersion(cs, "mic:")) {
- printk(KERN_WARNING
- "mic: wrong HSCX versions check IO address\n");
- release_io_mic(cs);
- return (0);
- }
- return (1);
-}
diff --git a/drivers/isdn/hisax/netjet.c b/drivers/isdn/hisax/netjet.c
deleted file mode 100644
index d7b011c8d692..000000000000
--- a/drivers/isdn/hisax/netjet.c
+++ /dev/null
@@ -1,985 +0,0 @@
-/* $Id: netjet.c,v 1.29.2.4 2004/02/11 13:21:34 keil Exp $
- *
- * low level stuff for Traverse Technologie NETJet ISDN cards
- *
- * Author Karsten Keil
- * Copyright by Karsten Keil <keil@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- * Thanks to Traverse Technologies Australia for documents and information
- *
- * 16-Apr-2002 - led code added - Guy Ellis (guy@traverse.com.au)
- *
- */
-
-#include <linux/init.h>
-#include "hisax.h"
-#include "isac.h"
-#include "hscx.h"
-#include "isdnl1.h"
-#include <linux/interrupt.h>
-#include <linux/ppp_defs.h>
-#include <linux/slab.h>
-#include <asm/io.h>
-#include "netjet.h"
-
-/* Interface functions */
-
-u_char
-NETjet_ReadIC(struct IsdnCardState *cs, u_char offset)
-{
- u_char ret;
-
- cs->hw.njet.auxd &= 0xfc;
- cs->hw.njet.auxd |= (offset >> 4) & 3;
- byteout(cs->hw.njet.auxa, cs->hw.njet.auxd);
- ret = bytein(cs->hw.njet.isac + ((offset & 0xf) << 2));
- return (ret);
-}
-
-void
-NETjet_WriteIC(struct IsdnCardState *cs, u_char offset, u_char value)
-{
- cs->hw.njet.auxd &= 0xfc;
- cs->hw.njet.auxd |= (offset >> 4) & 3;
- byteout(cs->hw.njet.auxa, cs->hw.njet.auxd);
- byteout(cs->hw.njet.isac + ((offset & 0xf) << 2), value);
-}
-
-void
-NETjet_ReadICfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- cs->hw.njet.auxd &= 0xfc;
- byteout(cs->hw.njet.auxa, cs->hw.njet.auxd);
- insb(cs->hw.njet.isac, data, size);
-}
-
-void
-NETjet_WriteICfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- cs->hw.njet.auxd &= 0xfc;
- byteout(cs->hw.njet.auxa, cs->hw.njet.auxd);
- outsb(cs->hw.njet.isac, data, size);
-}
-
-static void fill_mem(struct BCState *bcs, u_int *pos, u_int cnt, int chan, u_char fill)
-{
- u_int mask = 0x000000ff, val = 0, *p = pos;
- u_int i;
-
- val |= fill;
- if (chan) {
- val <<= 8;
- mask <<= 8;
- }
- mask ^= 0xffffffff;
- for (i = 0; i < cnt; i++) {
- *p &= mask;
- *p++ |= val;
- if (p > bcs->hw.tiger.s_end)
- p = bcs->hw.tiger.send;
- }
-}
-
-static void
-mode_tiger(struct BCState *bcs, int mode, int bc)
-{
- struct IsdnCardState *cs = bcs->cs;
- u_char led;
-
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "Tiger mode %d bchan %d/%d",
- mode, bc, bcs->channel);
- bcs->mode = mode;
- bcs->channel = bc;
- switch (mode) {
- case (L1_MODE_NULL):
- fill_mem(bcs, bcs->hw.tiger.send,
- NETJET_DMA_TXSIZE, bc, 0xff);
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "Tiger stat rec %d/%d send %d",
- bcs->hw.tiger.r_tot, bcs->hw.tiger.r_err,
- bcs->hw.tiger.s_tot);
- if ((cs->bcs[0].mode == L1_MODE_NULL) &&
- (cs->bcs[1].mode == L1_MODE_NULL)) {
- cs->hw.njet.dmactrl = 0;
- byteout(cs->hw.njet.base + NETJET_DMACTRL,
- cs->hw.njet.dmactrl);
- byteout(cs->hw.njet.base + NETJET_IRQMASK0, 0);
- }
- if (cs->typ == ISDN_CTYPE_NETJET_S)
- {
- // led off
- led = bc & 0x01;
- led = 0x01 << (6 + led); // convert to mask
- led = ~led;
- cs->hw.njet.auxd &= led;
- byteout(cs->hw.njet.auxa, cs->hw.njet.auxd);
- }
- break;
- case (L1_MODE_TRANS):
- break;
- case (L1_MODE_HDLC_56K):
- case (L1_MODE_HDLC):
- fill_mem(bcs, bcs->hw.tiger.send,
- NETJET_DMA_TXSIZE, bc, 0xff);
- bcs->hw.tiger.r_state = HDLC_ZERO_SEARCH;
- bcs->hw.tiger.r_tot = 0;
- bcs->hw.tiger.r_bitcnt = 0;
- bcs->hw.tiger.r_one = 0;
- bcs->hw.tiger.r_err = 0;
- bcs->hw.tiger.s_tot = 0;
- if (!cs->hw.njet.dmactrl) {
- fill_mem(bcs, bcs->hw.tiger.send,
- NETJET_DMA_TXSIZE, !bc, 0xff);
- cs->hw.njet.dmactrl = 1;
- byteout(cs->hw.njet.base + NETJET_DMACTRL,
- cs->hw.njet.dmactrl);
- byteout(cs->hw.njet.base + NETJET_IRQMASK0, 0x0f);
- /* was 0x3f now 0x0f for TJ300 and TJ320 GE 13/07/00 */
- }
- bcs->hw.tiger.sendp = bcs->hw.tiger.send;
- bcs->hw.tiger.free = NETJET_DMA_TXSIZE;
- test_and_set_bit(BC_FLG_EMPTY, &bcs->Flag);
- if (cs->typ == ISDN_CTYPE_NETJET_S)
- {
- // led on
- led = bc & 0x01;
- led = 0x01 << (6 + led); // convert to mask
- cs->hw.njet.auxd |= led;
- byteout(cs->hw.njet.auxa, cs->hw.njet.auxd);
- }
- break;
- }
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "tiger: set %x %x %x %x/%x pulse=%d",
- bytein(cs->hw.njet.base + NETJET_DMACTRL),
- bytein(cs->hw.njet.base + NETJET_IRQMASK0),
- bytein(cs->hw.njet.base + NETJET_IRQSTAT0),
- inl(cs->hw.njet.base + NETJET_DMA_READ_ADR),
- inl(cs->hw.njet.base + NETJET_DMA_WRITE_ADR),
- bytein(cs->hw.njet.base + NETJET_PULSE_CNT));
-}
-
-static void printframe(struct IsdnCardState *cs, u_char *buf, int count, char *s) {
- char tmp[128];
- char *t = tmp;
- int i = count, j;
- u_char *p = buf;
-
- t += sprintf(t, "tiger %s(%4d)", s, count);
- while (i > 0) {
- if (i > 16)
- j = 16;
- else
- j = i;
- QuickHex(t, p, j);
- debugl1(cs, "%s", tmp);
- p += j;
- i -= j;
- t = tmp;
- t += sprintf(t, "tiger %s ", s);
- }
-}
-
-// macro for 64k
-
-#define MAKE_RAW_BYTE for (j = 0; j < 8; j++) { \
- bitcnt++; \
- s_val >>= 1; \
- if (val & 1) { \
- s_one++; \
- s_val |= 0x80; \
- } else { \
- s_one = 0; \
- s_val &= 0x7f; \
- } \
- if (bitcnt == 8) { \
- bcs->hw.tiger.sendbuf[s_cnt++] = s_val; \
- bitcnt = 0; \
- } \
- if (s_one == 5) { \
- s_val >>= 1; \
- s_val &= 0x7f; \
- bitcnt++; \
- s_one = 0; \
- } \
- if (bitcnt == 8) { \
- bcs->hw.tiger.sendbuf[s_cnt++] = s_val; \
- bitcnt = 0; \
- } \
- val >>= 1; \
- }
-
-static int make_raw_data(struct BCState *bcs) {
-// this make_raw is for 64k
- register u_int i, s_cnt = 0;
- register u_char j;
- register u_char val;
- register u_char s_one = 0;
- register u_char s_val = 0;
- register u_char bitcnt = 0;
- u_int fcs;
-
- if (!bcs->tx_skb) {
- debugl1(bcs->cs, "tiger make_raw: NULL skb");
- return (1);
- }
- bcs->hw.tiger.sendbuf[s_cnt++] = HDLC_FLAG_VALUE;
- fcs = PPP_INITFCS;
- for (i = 0; i < bcs->tx_skb->len; i++) {
- val = bcs->tx_skb->data[i];
- fcs = PPP_FCS(fcs, val);
- MAKE_RAW_BYTE;
- }
- fcs ^= 0xffff;
- val = fcs & 0xff;
- MAKE_RAW_BYTE;
- val = (fcs >> 8) & 0xff;
- MAKE_RAW_BYTE;
- val = HDLC_FLAG_VALUE;
- for (j = 0; j < 8; j++) {
- bitcnt++;
- s_val >>= 1;
- if (val & 1)
- s_val |= 0x80;
- else
- s_val &= 0x7f;
- if (bitcnt == 8) {
- bcs->hw.tiger.sendbuf[s_cnt++] = s_val;
- bitcnt = 0;
- }
- val >>= 1;
- }
- if (bcs->cs->debug & L1_DEB_HSCX)
- debugl1(bcs->cs, "tiger make_raw: in %u out %d.%d",
- bcs->tx_skb->len, s_cnt, bitcnt);
- if (bitcnt) {
- while (8 > bitcnt++) {
- s_val >>= 1;
- s_val |= 0x80;
- }
- bcs->hw.tiger.sendbuf[s_cnt++] = s_val;
- bcs->hw.tiger.sendbuf[s_cnt++] = 0xff; // NJ<->NJ throughput bug fix
- }
- bcs->hw.tiger.sendcnt = s_cnt;
- bcs->tx_cnt -= bcs->tx_skb->len;
- bcs->hw.tiger.sp = bcs->hw.tiger.sendbuf;
- return (0);
-}
-
-// macro for 56k
-
-#define MAKE_RAW_BYTE_56K for (j = 0; j < 8; j++) { \
- bitcnt++; \
- s_val >>= 1; \
- if (val & 1) { \
- s_one++; \
- s_val |= 0x80; \
- } else { \
- s_one = 0; \
- s_val &= 0x7f; \
- } \
- if (bitcnt == 7) { \
- s_val >>= 1; \
- s_val |= 0x80; \
- bcs->hw.tiger.sendbuf[s_cnt++] = s_val; \
- bitcnt = 0; \
- } \
- if (s_one == 5) { \
- s_val >>= 1; \
- s_val &= 0x7f; \
- bitcnt++; \
- s_one = 0; \
- } \
- if (bitcnt == 7) { \
- s_val >>= 1; \
- s_val |= 0x80; \
- bcs->hw.tiger.sendbuf[s_cnt++] = s_val; \
- bitcnt = 0; \
- } \
- val >>= 1; \
- }
-
-static int make_raw_data_56k(struct BCState *bcs) {
-// this make_raw is for 56k
- register u_int i, s_cnt = 0;
- register u_char j;
- register u_char val;
- register u_char s_one = 0;
- register u_char s_val = 0;
- register u_char bitcnt = 0;
- u_int fcs;
-
- if (!bcs->tx_skb) {
- debugl1(bcs->cs, "tiger make_raw_56k: NULL skb");
- return (1);
- }
- val = HDLC_FLAG_VALUE;
- for (j = 0; j < 8; j++) {
- bitcnt++;
- s_val >>= 1;
- if (val & 1)
- s_val |= 0x80;
- else
- s_val &= 0x7f;
- if (bitcnt == 7) {
- s_val >>= 1;
- s_val |= 0x80;
- bcs->hw.tiger.sendbuf[s_cnt++] = s_val;
- bitcnt = 0;
- }
- val >>= 1;
- }
- fcs = PPP_INITFCS;
- for (i = 0; i < bcs->tx_skb->len; i++) {
- val = bcs->tx_skb->data[i];
- fcs = PPP_FCS(fcs, val);
- MAKE_RAW_BYTE_56K;
- }
- fcs ^= 0xffff;
- val = fcs & 0xff;
- MAKE_RAW_BYTE_56K;
- val = (fcs >> 8) & 0xff;
- MAKE_RAW_BYTE_56K;
- val = HDLC_FLAG_VALUE;
- for (j = 0; j < 8; j++) {
- bitcnt++;
- s_val >>= 1;
- if (val & 1)
- s_val |= 0x80;
- else
- s_val &= 0x7f;
- if (bitcnt == 7) {
- s_val >>= 1;
- s_val |= 0x80;
- bcs->hw.tiger.sendbuf[s_cnt++] = s_val;
- bitcnt = 0;
- }
- val >>= 1;
- }
- if (bcs->cs->debug & L1_DEB_HSCX)
- debugl1(bcs->cs, "tiger make_raw_56k: in %u out %d.%d",
- bcs->tx_skb->len, s_cnt, bitcnt);
- if (bitcnt) {
- while (8 > bitcnt++) {
- s_val >>= 1;
- s_val |= 0x80;
- }
- bcs->hw.tiger.sendbuf[s_cnt++] = s_val;
- bcs->hw.tiger.sendbuf[s_cnt++] = 0xff; // NJ<->NJ throughput bug fix
- }
- bcs->hw.tiger.sendcnt = s_cnt;
- bcs->tx_cnt -= bcs->tx_skb->len;
- bcs->hw.tiger.sp = bcs->hw.tiger.sendbuf;
- return (0);
-}
-
-static void got_frame(struct BCState *bcs, int count) {
- struct sk_buff *skb;
-
- if (!(skb = dev_alloc_skb(count)))
- printk(KERN_WARNING "TIGER: receive out of memory\n");
- else {
- skb_put_data(skb, bcs->hw.tiger.rcvbuf, count);
- skb_queue_tail(&bcs->rqueue, skb);
- }
- test_and_set_bit(B_RCVBUFREADY, &bcs->event);
- schedule_work(&bcs->tqueue);
-
- if (bcs->cs->debug & L1_DEB_RECEIVE_FRAME)
- printframe(bcs->cs, bcs->hw.tiger.rcvbuf, count, "rec");
-}
-
-
-
-static void read_raw(struct BCState *bcs, u_int *buf, int cnt) {
- int i;
- register u_char j;
- register u_char val;
- u_int *pend = bcs->hw.tiger.rec + NETJET_DMA_RXSIZE - 1;
- register u_char state = bcs->hw.tiger.r_state;
- register u_char r_one = bcs->hw.tiger.r_one;
- register u_char r_val = bcs->hw.tiger.r_val;
- register u_int bitcnt = bcs->hw.tiger.r_bitcnt;
- u_int *p = buf;
- int bits;
- u_char mask;
-
- if (bcs->mode == L1_MODE_HDLC) { // it's 64k
- mask = 0xff;
- bits = 8;
- }
- else { // it's 56K
- mask = 0x7f;
- bits = 7;
- }
- for (i = 0; i < cnt; i++) {
- val = bcs->channel ? ((*p >> 8) & 0xff) : (*p & 0xff);
- p++;
- if (p > pend)
- p = bcs->hw.tiger.rec;
- if ((val & mask) == mask) {
- state = HDLC_ZERO_SEARCH;
- bcs->hw.tiger.r_tot++;
- bitcnt = 0;
- r_one = 0;
- continue;
- }
- for (j = 0; j < bits; j++) {
- if (state == HDLC_ZERO_SEARCH) {
- if (val & 1) {
- r_one++;
- } else {
- r_one = 0;
- state = HDLC_FLAG_SEARCH;
- if (bcs->cs->debug & L1_DEB_HSCX)
- debugl1(bcs->cs, "tiger read_raw: zBit(%d,%d,%d) %x",
- bcs->hw.tiger.r_tot, i, j, val);
- }
- } else if (state == HDLC_FLAG_SEARCH) {
- if (val & 1) {
- r_one++;
- if (r_one > 6) {
- state = HDLC_ZERO_SEARCH;
- }
- } else {
- if (r_one == 6) {
- bitcnt = 0;
- r_val = 0;
- state = HDLC_FLAG_FOUND;
- if (bcs->cs->debug & L1_DEB_HSCX)
- debugl1(bcs->cs, "tiger read_raw: flag(%d,%d,%d) %x",
- bcs->hw.tiger.r_tot, i, j, val);
- }
- r_one = 0;
- }
- } else if (state == HDLC_FLAG_FOUND) {
- if (val & 1) {
- r_one++;
- if (r_one > 6) {
- state = HDLC_ZERO_SEARCH;
- } else {
- r_val >>= 1;
- r_val |= 0x80;
- bitcnt++;
- }
- } else {
- if (r_one == 6) {
- bitcnt = 0;
- r_val = 0;
- r_one = 0;
- val >>= 1;
- continue;
- } else if (r_one != 5) {
- r_val >>= 1;
- r_val &= 0x7f;
- bitcnt++;
- }
- r_one = 0;
- }
- if ((state != HDLC_ZERO_SEARCH) &&
- !(bitcnt & 7)) {
- state = HDLC_FRAME_FOUND;
- bcs->hw.tiger.r_fcs = PPP_INITFCS;
- bcs->hw.tiger.rcvbuf[0] = r_val;
- bcs->hw.tiger.r_fcs = PPP_FCS(bcs->hw.tiger.r_fcs, r_val);
- if (bcs->cs->debug & L1_DEB_HSCX)
- debugl1(bcs->cs, "tiger read_raw: byte1(%d,%d,%d) rval %x val %x i %x",
- bcs->hw.tiger.r_tot, i, j, r_val, val,
- bcs->cs->hw.njet.irqstat0);
- }
- } else if (state == HDLC_FRAME_FOUND) {
- if (val & 1) {
- r_one++;
- if (r_one > 6) {
- state = HDLC_ZERO_SEARCH;
- bitcnt = 0;
- } else {
- r_val >>= 1;
- r_val |= 0x80;
- bitcnt++;
- }
- } else {
- if (r_one == 6) {
- r_val = 0;
- r_one = 0;
- bitcnt++;
- if (bitcnt & 7) {
- debugl1(bcs->cs, "tiger: frame not byte aligned");
- state = HDLC_FLAG_SEARCH;
- bcs->hw.tiger.r_err++;
-#ifdef ERROR_STATISTIC
- bcs->err_inv++;
-#endif
- } else {
- if (bcs->cs->debug & L1_DEB_HSCX)
- debugl1(bcs->cs, "tiger frame end(%d,%d): fcs(%x) i %x",
- i, j, bcs->hw.tiger.r_fcs, bcs->cs->hw.njet.irqstat0);
- if (bcs->hw.tiger.r_fcs == PPP_GOODFCS) {
- got_frame(bcs, (bitcnt >> 3) - 3);
- } else {
- if (bcs->cs->debug) {
- debugl1(bcs->cs, "tiger FCS error");
- printframe(bcs->cs, bcs->hw.tiger.rcvbuf,
- (bitcnt >> 3) - 1, "rec");
- bcs->hw.tiger.r_err++;
- }
-#ifdef ERROR_STATISTIC
- bcs->err_crc++;
-#endif
- }
- state = HDLC_FLAG_FOUND;
- }
- bitcnt = 0;
- } else if (r_one == 5) {
- val >>= 1;
- r_one = 0;
- continue;
- } else {
- r_val >>= 1;
- r_val &= 0x7f;
- bitcnt++;
- }
- r_one = 0;
- }
- if ((state == HDLC_FRAME_FOUND) &&
- !(bitcnt & 7)) {
- if ((bitcnt >> 3) >= HSCX_BUFMAX) {
- debugl1(bcs->cs, "tiger: frame too big");
- r_val = 0;
- state = HDLC_FLAG_SEARCH;
- bcs->hw.tiger.r_err++;
-#ifdef ERROR_STATISTIC
- bcs->err_inv++;
-#endif
- } else {
- bcs->hw.tiger.rcvbuf[(bitcnt >> 3) - 1] = r_val;
- bcs->hw.tiger.r_fcs =
- PPP_FCS(bcs->hw.tiger.r_fcs, r_val);
- }
- }
- }
- val >>= 1;
- }
- bcs->hw.tiger.r_tot++;
- }
- bcs->hw.tiger.r_state = state;
- bcs->hw.tiger.r_one = r_one;
- bcs->hw.tiger.r_val = r_val;
- bcs->hw.tiger.r_bitcnt = bitcnt;
-}
-
-void read_tiger(struct IsdnCardState *cs) {
- u_int *p;
- int cnt = NETJET_DMA_RXSIZE / 2;
-
- if ((cs->hw.njet.irqstat0 & cs->hw.njet.last_is0) & NETJET_IRQM0_READ) {
- debugl1(cs, "tiger warn read double dma %x/%x",
- cs->hw.njet.irqstat0, cs->hw.njet.last_is0);
-#ifdef ERROR_STATISTIC
- if (cs->bcs[0].mode)
- cs->bcs[0].err_rdo++;
- if (cs->bcs[1].mode)
- cs->bcs[1].err_rdo++;
-#endif
- return;
- } else {
- cs->hw.njet.last_is0 &= ~NETJET_IRQM0_READ;
- cs->hw.njet.last_is0 |= (cs->hw.njet.irqstat0 & NETJET_IRQM0_READ);
- }
- if (cs->hw.njet.irqstat0 & NETJET_IRQM0_READ_1)
- p = cs->bcs[0].hw.tiger.rec + NETJET_DMA_RXSIZE - 1;
- else
- p = cs->bcs[0].hw.tiger.rec + cnt - 1;
- if ((cs->bcs[0].mode == L1_MODE_HDLC) || (cs->bcs[0].mode == L1_MODE_HDLC_56K))
- read_raw(cs->bcs, p, cnt);
-
- if ((cs->bcs[1].mode == L1_MODE_HDLC) || (cs->bcs[1].mode == L1_MODE_HDLC_56K))
- read_raw(cs->bcs + 1, p, cnt);
- cs->hw.njet.irqstat0 &= ~NETJET_IRQM0_READ;
-}
-
-static void write_raw(struct BCState *bcs, u_int *buf, int cnt);
-
-void netjet_fill_dma(struct BCState *bcs)
-{
- register u_int *p, *sp;
- register int cnt;
-
- if (!bcs->tx_skb)
- return;
- if (bcs->cs->debug & L1_DEB_HSCX)
- debugl1(bcs->cs, "tiger fill_dma1: c%d %4lx", bcs->channel,
- bcs->Flag);
- if (test_and_set_bit(BC_FLG_BUSY, &bcs->Flag))
- return;
- if (bcs->mode == L1_MODE_HDLC) { // it's 64k
- if (make_raw_data(bcs))
- return;
- }
- else { // it's 56k
- if (make_raw_data_56k(bcs))
- return;
- }
- if (bcs->cs->debug & L1_DEB_HSCX)
- debugl1(bcs->cs, "tiger fill_dma2: c%d %4lx", bcs->channel,
- bcs->Flag);
- if (test_and_clear_bit(BC_FLG_NOFRAME, &bcs->Flag)) {
- write_raw(bcs, bcs->hw.tiger.sendp, bcs->hw.tiger.free);
- } else if (test_and_clear_bit(BC_FLG_HALF, &bcs->Flag)) {
- p = bus_to_virt(inl(bcs->cs->hw.njet.base + NETJET_DMA_READ_ADR));
- sp = bcs->hw.tiger.sendp;
- if (p == bcs->hw.tiger.s_end)
- p = bcs->hw.tiger.send - 1;
- if (sp == bcs->hw.tiger.s_end)
- sp = bcs->hw.tiger.send - 1;
- cnt = p - sp;
- if (cnt < 0) {
- write_raw(bcs, bcs->hw.tiger.sendp, bcs->hw.tiger.free);
- } else {
- p++;
- cnt++;
- if (p > bcs->hw.tiger.s_end)
- p = bcs->hw.tiger.send;
- p++;
- cnt++;
- if (p > bcs->hw.tiger.s_end)
- p = bcs->hw.tiger.send;
- write_raw(bcs, p, bcs->hw.tiger.free - cnt);
- }
- } else if (test_and_clear_bit(BC_FLG_EMPTY, &bcs->Flag)) {
- p = bus_to_virt(inl(bcs->cs->hw.njet.base + NETJET_DMA_READ_ADR));
- cnt = bcs->hw.tiger.s_end - p;
- if (cnt < 2) {
- p = bcs->hw.tiger.send + 1;
- cnt = NETJET_DMA_TXSIZE / 2 - 2;
- } else {
- p++;
- p++;
- if (cnt <= (NETJET_DMA_TXSIZE / 2))
- cnt += NETJET_DMA_TXSIZE / 2;
- cnt--;
- cnt--;
- }
- write_raw(bcs, p, cnt);
- }
- if (bcs->cs->debug & L1_DEB_HSCX)
- debugl1(bcs->cs, "tiger fill_dma3: c%d %4lx", bcs->channel,
- bcs->Flag);
-}
-
-static void write_raw(struct BCState *bcs, u_int *buf, int cnt) {
- u_int mask, val, *p = buf;
- u_int i, s_cnt;
-
- if (cnt <= 0)
- return;
- if (test_bit(BC_FLG_BUSY, &bcs->Flag)) {
- if (bcs->hw.tiger.sendcnt > cnt) {
- s_cnt = cnt;
- bcs->hw.tiger.sendcnt -= cnt;
- } else {
- s_cnt = bcs->hw.tiger.sendcnt;
- bcs->hw.tiger.sendcnt = 0;
- }
- if (bcs->channel)
- mask = 0xffff00ff;
- else
- mask = 0xffffff00;
- for (i = 0; i < s_cnt; i++) {
- val = bcs->channel ? ((bcs->hw.tiger.sp[i] << 8) & 0xff00) :
- (bcs->hw.tiger.sp[i]);
- *p &= mask;
- *p++ |= val;
- if (p > bcs->hw.tiger.s_end)
- p = bcs->hw.tiger.send;
- }
- bcs->hw.tiger.s_tot += s_cnt;
- if (bcs->cs->debug & L1_DEB_HSCX)
- debugl1(bcs->cs, "tiger write_raw: c%d %p-%p %d/%d %d %x", bcs->channel,
- buf, p, s_cnt, cnt,
- bcs->hw.tiger.sendcnt, bcs->cs->hw.njet.irqstat0);
- if (bcs->cs->debug & L1_DEB_HSCX_FIFO)
- printframe(bcs->cs, bcs->hw.tiger.sp, s_cnt, "snd");
- bcs->hw.tiger.sp += s_cnt;
- bcs->hw.tiger.sendp = p;
- if (!bcs->hw.tiger.sendcnt) {
- if (!bcs->tx_skb) {
- debugl1(bcs->cs, "tiger write_raw: NULL skb s_cnt %d", s_cnt);
- } else {
- if (test_bit(FLG_LLI_L1WAKEUP, &bcs->st->lli.flag) &&
- (PACKET_NOACK != bcs->tx_skb->pkt_type)) {
- u_long flags;
- spin_lock_irqsave(&bcs->aclock, flags);
- bcs->ackcnt += bcs->tx_skb->len;
- spin_unlock_irqrestore(&bcs->aclock, flags);
- schedule_event(bcs, B_ACKPENDING);
- }
- dev_kfree_skb_any(bcs->tx_skb);
- bcs->tx_skb = NULL;
- }
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- bcs->hw.tiger.free = cnt - s_cnt;
- if (bcs->hw.tiger.free > (NETJET_DMA_TXSIZE / 2))
- test_and_set_bit(BC_FLG_HALF, &bcs->Flag);
- else {
- test_and_clear_bit(BC_FLG_HALF, &bcs->Flag);
- test_and_set_bit(BC_FLG_NOFRAME, &bcs->Flag);
- }
- if ((bcs->tx_skb = skb_dequeue(&bcs->squeue))) {
- netjet_fill_dma(bcs);
- } else {
- mask ^= 0xffffffff;
- if (s_cnt < cnt) {
- for (i = s_cnt; i < cnt; i++) {
- *p++ |= mask;
- if (p > bcs->hw.tiger.s_end)
- p = bcs->hw.tiger.send;
- }
- if (bcs->cs->debug & L1_DEB_HSCX)
- debugl1(bcs->cs, "tiger write_raw: fill rest %d",
- cnt - s_cnt);
- }
- test_and_set_bit(B_XMTBUFREADY, &bcs->event);
- schedule_work(&bcs->tqueue);
- }
- }
- } else if (test_and_clear_bit(BC_FLG_NOFRAME, &bcs->Flag)) {
- test_and_set_bit(BC_FLG_HALF, &bcs->Flag);
- fill_mem(bcs, buf, cnt, bcs->channel, 0xff);
- bcs->hw.tiger.free += cnt;
- if (bcs->cs->debug & L1_DEB_HSCX)
- debugl1(bcs->cs, "tiger write_raw: fill half");
- } else if (test_and_clear_bit(BC_FLG_HALF, &bcs->Flag)) {
- test_and_set_bit(BC_FLG_EMPTY, &bcs->Flag);
- fill_mem(bcs, buf, cnt, bcs->channel, 0xff);
- if (bcs->cs->debug & L1_DEB_HSCX)
- debugl1(bcs->cs, "tiger write_raw: fill full");
- }
-}
-
-void write_tiger(struct IsdnCardState *cs) {
- u_int *p, cnt = NETJET_DMA_TXSIZE / 2;
-
- if ((cs->hw.njet.irqstat0 & cs->hw.njet.last_is0) & NETJET_IRQM0_WRITE) {
- debugl1(cs, "tiger warn write double dma %x/%x",
- cs->hw.njet.irqstat0, cs->hw.njet.last_is0);
-#ifdef ERROR_STATISTIC
- if (cs->bcs[0].mode)
- cs->bcs[0].err_tx++;
- if (cs->bcs[1].mode)
- cs->bcs[1].err_tx++;
-#endif
- return;
- } else {
- cs->hw.njet.last_is0 &= ~NETJET_IRQM0_WRITE;
- cs->hw.njet.last_is0 |= (cs->hw.njet.irqstat0 & NETJET_IRQM0_WRITE);
- }
- if (cs->hw.njet.irqstat0 & NETJET_IRQM0_WRITE_1)
- p = cs->bcs[0].hw.tiger.send + NETJET_DMA_TXSIZE - 1;
- else
- p = cs->bcs[0].hw.tiger.send + cnt - 1;
- if ((cs->bcs[0].mode == L1_MODE_HDLC) || (cs->bcs[0].mode == L1_MODE_HDLC_56K))
- write_raw(cs->bcs, p, cnt);
- if ((cs->bcs[1].mode == L1_MODE_HDLC) || (cs->bcs[1].mode == L1_MODE_HDLC_56K))
- write_raw(cs->bcs + 1, p, cnt);
- cs->hw.njet.irqstat0 &= ~NETJET_IRQM0_WRITE;
-}
-
-static void
-tiger_l2l1(struct PStack *st, int pr, void *arg)
-{
- struct BCState *bcs = st->l1.bcs;
- struct sk_buff *skb = arg;
- u_long flags;
-
- switch (pr) {
- case (PH_DATA | REQUEST):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- if (bcs->tx_skb) {
- skb_queue_tail(&bcs->squeue, skb);
- } else {
- bcs->tx_skb = skb;
- bcs->cs->BC_Send_Data(bcs);
- }
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- break;
- case (PH_PULL | INDICATION):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- if (bcs->tx_skb) {
- printk(KERN_WARNING "tiger_l2l1: this shouldn't happen\n");
- } else {
- bcs->tx_skb = skb;
- bcs->cs->BC_Send_Data(bcs);
- }
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- break;
- case (PH_PULL | REQUEST):
- if (!bcs->tx_skb) {
- test_and_clear_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- st->l1.l1l2(st, PH_PULL | CONFIRM, NULL);
- } else
- test_and_set_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- break;
- case (PH_ACTIVATE | REQUEST):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- test_and_set_bit(BC_FLG_ACTIV, &bcs->Flag);
- mode_tiger(bcs, st->l1.mode, st->l1.bc);
- /* 2001/10/04 Christoph Ersfeld, Formula-n Europe AG */
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- bcs->cs->cardmsg(bcs->cs, MDL_BC_ASSIGN, (void *)(&st->l1.bc));
- l1_msg_b(st, pr, arg);
- break;
- case (PH_DEACTIVATE | REQUEST):
- /* 2001/10/04 Christoph Ersfeld, Formula-n Europe AG */
- bcs->cs->cardmsg(bcs->cs, MDL_BC_RELEASE, (void *)(&st->l1.bc));
- l1_msg_b(st, pr, arg);
- break;
- case (PH_DEACTIVATE | CONFIRM):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- test_and_clear_bit(BC_FLG_ACTIV, &bcs->Flag);
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- mode_tiger(bcs, 0, st->l1.bc);
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- st->l1.l1l2(st, PH_DEACTIVATE | CONFIRM, NULL);
- break;
- }
-}
-
-
-static void
-close_tigerstate(struct BCState *bcs)
-{
- mode_tiger(bcs, 0, bcs->channel);
- if (test_and_clear_bit(BC_FLG_INIT, &bcs->Flag)) {
- kfree(bcs->hw.tiger.rcvbuf);
- bcs->hw.tiger.rcvbuf = NULL;
- kfree(bcs->hw.tiger.sendbuf);
- bcs->hw.tiger.sendbuf = NULL;
- skb_queue_purge(&bcs->rqueue);
- skb_queue_purge(&bcs->squeue);
- if (bcs->tx_skb) {
- dev_kfree_skb_any(bcs->tx_skb);
- bcs->tx_skb = NULL;
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- }
- }
-}
-
-static int
-open_tigerstate(struct IsdnCardState *cs, struct BCState *bcs)
-{
- if (!test_and_set_bit(BC_FLG_INIT, &bcs->Flag)) {
- if (!(bcs->hw.tiger.rcvbuf = kmalloc(HSCX_BUFMAX, GFP_ATOMIC))) {
- printk(KERN_WARNING
- "HiSax: No memory for tiger.rcvbuf\n");
- return (1);
- }
- if (!(bcs->hw.tiger.sendbuf = kmalloc(RAW_BUFMAX, GFP_ATOMIC))) {
- printk(KERN_WARNING
- "HiSax: No memory for tiger.sendbuf\n");
- return (1);
- }
- skb_queue_head_init(&bcs->rqueue);
- skb_queue_head_init(&bcs->squeue);
- }
- bcs->tx_skb = NULL;
- bcs->hw.tiger.sendcnt = 0;
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- bcs->event = 0;
- bcs->tx_cnt = 0;
- return (0);
-}
-
-static int
-setstack_tiger(struct PStack *st, struct BCState *bcs)
-{
- bcs->channel = st->l1.bc;
- if (open_tigerstate(st->l1.hardware, bcs))
- return (-1);
- st->l1.bcs = bcs;
- st->l2.l2l1 = tiger_l2l1;
- setstack_manager(st);
- bcs->st = st;
- setstack_l1_B(st);
- return (0);
-}
-
-
-void
-inittiger(struct IsdnCardState *cs)
-{
- cs->bcs[0].hw.tiger.send = kmalloc_array(NETJET_DMA_TXSIZE,
- sizeof(unsigned int),
- GFP_KERNEL | GFP_DMA);
- if (!cs->bcs[0].hw.tiger.send) {
- printk(KERN_WARNING
- "HiSax: No memory for tiger.send\n");
- return;
- }
- cs->bcs[0].hw.tiger.s_irq = cs->bcs[0].hw.tiger.send + NETJET_DMA_TXSIZE / 2 - 1;
- cs->bcs[0].hw.tiger.s_end = cs->bcs[0].hw.tiger.send + NETJET_DMA_TXSIZE - 1;
- cs->bcs[1].hw.tiger.send = cs->bcs[0].hw.tiger.send;
- cs->bcs[1].hw.tiger.s_irq = cs->bcs[0].hw.tiger.s_irq;
- cs->bcs[1].hw.tiger.s_end = cs->bcs[0].hw.tiger.s_end;
-
- memset(cs->bcs[0].hw.tiger.send, 0xff, NETJET_DMA_TXSIZE * sizeof(unsigned int));
- debugl1(cs, "tiger: send buf %p - %p", cs->bcs[0].hw.tiger.send,
- cs->bcs[0].hw.tiger.send + NETJET_DMA_TXSIZE - 1);
- outl(virt_to_bus(cs->bcs[0].hw.tiger.send),
- cs->hw.njet.base + NETJET_DMA_READ_START);
- outl(virt_to_bus(cs->bcs[0].hw.tiger.s_irq),
- cs->hw.njet.base + NETJET_DMA_READ_IRQ);
- outl(virt_to_bus(cs->bcs[0].hw.tiger.s_end),
- cs->hw.njet.base + NETJET_DMA_READ_END);
- cs->bcs[0].hw.tiger.rec = kmalloc_array(NETJET_DMA_RXSIZE,
- sizeof(unsigned int),
- GFP_KERNEL | GFP_DMA);
- if (!cs->bcs[0].hw.tiger.rec) {
- printk(KERN_WARNING
- "HiSax: No memory for tiger.rec\n");
- return;
- }
- debugl1(cs, "tiger: rec buf %p - %p", cs->bcs[0].hw.tiger.rec,
- cs->bcs[0].hw.tiger.rec + NETJET_DMA_RXSIZE - 1);
- cs->bcs[1].hw.tiger.rec = cs->bcs[0].hw.tiger.rec;
- memset(cs->bcs[0].hw.tiger.rec, 0xff, NETJET_DMA_RXSIZE * sizeof(unsigned int));
- outl(virt_to_bus(cs->bcs[0].hw.tiger.rec),
- cs->hw.njet.base + NETJET_DMA_WRITE_START);
- outl(virt_to_bus(cs->bcs[0].hw.tiger.rec + NETJET_DMA_RXSIZE / 2 - 1),
- cs->hw.njet.base + NETJET_DMA_WRITE_IRQ);
- outl(virt_to_bus(cs->bcs[0].hw.tiger.rec + NETJET_DMA_RXSIZE - 1),
- cs->hw.njet.base + NETJET_DMA_WRITE_END);
- debugl1(cs, "tiger: dmacfg %x/%x pulse=%d",
- inl(cs->hw.njet.base + NETJET_DMA_WRITE_ADR),
- inl(cs->hw.njet.base + NETJET_DMA_READ_ADR),
- bytein(cs->hw.njet.base + NETJET_PULSE_CNT));
- cs->hw.njet.last_is0 = 0;
- cs->bcs[0].BC_SetStack = setstack_tiger;
- cs->bcs[1].BC_SetStack = setstack_tiger;
- cs->bcs[0].BC_Close = close_tigerstate;
- cs->bcs[1].BC_Close = close_tigerstate;
-}
-
-static void
-releasetiger(struct IsdnCardState *cs)
-{
- kfree(cs->bcs[0].hw.tiger.send);
- cs->bcs[0].hw.tiger.send = NULL;
- cs->bcs[1].hw.tiger.send = NULL;
- kfree(cs->bcs[0].hw.tiger.rec);
- cs->bcs[0].hw.tiger.rec = NULL;
- cs->bcs[1].hw.tiger.rec = NULL;
-}
-
-void
-release_io_netjet(struct IsdnCardState *cs)
-{
- byteout(cs->hw.njet.base + NETJET_IRQMASK0, 0);
- byteout(cs->hw.njet.base + NETJET_IRQMASK1, 0);
- releasetiger(cs);
- release_region(cs->hw.njet.base, 256);
-}
diff --git a/drivers/isdn/hisax/netjet.h b/drivers/isdn/hisax/netjet.h
deleted file mode 100644
index 70590d5d5e64..000000000000
--- a/drivers/isdn/hisax/netjet.h
+++ /dev/null
@@ -1,69 +0,0 @@
-/* $Id: netjet.h,v 2.8.2.2 2004/01/12 22:52:28 keil Exp $
- *
- * NETjet common header file
- *
- * Author Karsten Keil
- * Copyright by Karsten Keil <keil@isdn4linux.de>
- * by Matt Henderson,
- * Traverse Technologies P/L www.traverse.com.au
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#define byteout(addr, val) outb(val, addr)
-#define bytein(addr) inb(addr)
-
-#define NETJET_CTRL 0x00
-#define NETJET_DMACTRL 0x01
-#define NETJET_AUXCTRL 0x02
-#define NETJET_AUXDATA 0x03
-#define NETJET_IRQMASK0 0x04
-#define NETJET_IRQMASK1 0x05
-#define NETJET_IRQSTAT0 0x06
-#define NETJET_IRQSTAT1 0x07
-#define NETJET_DMA_READ_START 0x08
-#define NETJET_DMA_READ_IRQ 0x0c
-#define NETJET_DMA_READ_END 0x10
-#define NETJET_DMA_READ_ADR 0x14
-#define NETJET_DMA_WRITE_START 0x18
-#define NETJET_DMA_WRITE_IRQ 0x1c
-#define NETJET_DMA_WRITE_END 0x20
-#define NETJET_DMA_WRITE_ADR 0x24
-#define NETJET_PULSE_CNT 0x28
-
-#define NETJET_ISAC_OFF 0xc0
-#define NETJET_ISACIRQ 0x10
-#define NETJET_IRQM0_READ 0x0c
-#define NETJET_IRQM0_READ_1 0x04
-#define NETJET_IRQM0_READ_2 0x08
-#define NETJET_IRQM0_WRITE 0x03
-#define NETJET_IRQM0_WRITE_1 0x01
-#define NETJET_IRQM0_WRITE_2 0x02
-
-#define NETJET_DMA_TXSIZE 512
-#define NETJET_DMA_RXSIZE 128
-
-#define HDLC_ZERO_SEARCH 0
-#define HDLC_FLAG_SEARCH 1
-#define HDLC_FLAG_FOUND 2
-#define HDLC_FRAME_FOUND 3
-#define HDLC_NULL 4
-#define HDLC_PART 5
-#define HDLC_FULL 6
-
-#define HDLC_FLAG_VALUE 0x7e
-
-u_char NETjet_ReadIC(struct IsdnCardState *cs, u_char offset);
-void NETjet_WriteIC(struct IsdnCardState *cs, u_char offset, u_char value);
-void NETjet_ReadICfifo(struct IsdnCardState *cs, u_char *data, int size);
-void NETjet_WriteICfifo(struct IsdnCardState *cs, u_char *data, int size);
-
-void read_tiger(struct IsdnCardState *cs);
-void write_tiger(struct IsdnCardState *cs);
-
-void netjet_fill_dma(struct BCState *bcs);
-void netjet_interrupt(int intno, void *dev_id);
-void inittiger(struct IsdnCardState *cs);
-void release_io_netjet(struct IsdnCardState *cs);
diff --git a/drivers/isdn/hisax/niccy.c b/drivers/isdn/hisax/niccy.c
deleted file mode 100644
index dfbcd2eaa81a..000000000000
--- a/drivers/isdn/hisax/niccy.c
+++ /dev/null
@@ -1,380 +0,0 @@
-/* $Id: niccy.c,v 1.21.2.4 2004/01/13 23:48:39 keil Exp $
- *
- * low level stuff for Dr. Neuhaus NICCY PnP and NICCY PCI and
- * compatible (SAGEM cybermodem)
- *
- * Author Karsten Keil
- * Copyright by Karsten Keil <keil@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- * Thanks to Dr. Neuhaus and SAGEM for information
- *
- */
-
-#include <linux/init.h>
-#include "hisax.h"
-#include "isac.h"
-#include "hscx.h"
-#include "isdnl1.h"
-#include <linux/pci.h>
-#include <linux/isapnp.h>
-
-static const char *niccy_revision = "$Revision: 1.21.2.4 $";
-
-#define byteout(addr, val) outb(val, addr)
-#define bytein(addr) inb(addr)
-
-#define ISAC_PCI_DATA 0
-#define HSCX_PCI_DATA 1
-#define ISAC_PCI_ADDR 2
-#define HSCX_PCI_ADDR 3
-#define ISAC_PNP 0
-#define HSCX_PNP 1
-
-/* SUB Types */
-#define NICCY_PNP 1
-#define NICCY_PCI 2
-
-/* PCI stuff */
-#define PCI_IRQ_CTRL_REG 0x38
-#define PCI_IRQ_ENABLE 0x1f00
-#define PCI_IRQ_DISABLE 0xff0000
-#define PCI_IRQ_ASSERT 0x800000
-
-static inline u_char readreg(unsigned int ale, unsigned int adr, u_char off)
-{
- register u_char ret;
-
- byteout(ale, off);
- ret = bytein(adr);
- return ret;
-}
-
-static inline void readfifo(unsigned int ale, unsigned int adr, u_char off,
- u_char *data, int size)
-{
- byteout(ale, off);
- insb(adr, data, size);
-}
-
-static inline void writereg(unsigned int ale, unsigned int adr, u_char off,
- u_char data)
-{
- byteout(ale, off);
- byteout(adr, data);
-}
-
-static inline void writefifo(unsigned int ale, unsigned int adr, u_char off,
- u_char *data, int size)
-{
- byteout(ale, off);
- outsb(adr, data, size);
-}
-
-/* Interface functions */
-
-static u_char ReadISAC(struct IsdnCardState *cs, u_char offset)
-{
- return readreg(cs->hw.niccy.isac_ale, cs->hw.niccy.isac, offset);
-}
-
-static void WriteISAC(struct IsdnCardState *cs, u_char offset, u_char value)
-{
- writereg(cs->hw.niccy.isac_ale, cs->hw.niccy.isac, offset, value);
-}
-
-static void ReadISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- readfifo(cs->hw.niccy.isac_ale, cs->hw.niccy.isac, 0, data, size);
-}
-
-static void WriteISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- writefifo(cs->hw.niccy.isac_ale, cs->hw.niccy.isac, 0, data, size);
-}
-
-static u_char ReadHSCX(struct IsdnCardState *cs, int hscx, u_char offset)
-{
- return readreg(cs->hw.niccy.hscx_ale,
- cs->hw.niccy.hscx, offset + (hscx ? 0x40 : 0));
-}
-
-static void WriteHSCX(struct IsdnCardState *cs, int hscx, u_char offset,
- u_char value)
-{
- writereg(cs->hw.niccy.hscx_ale,
- cs->hw.niccy.hscx, offset + (hscx ? 0x40 : 0), value);
-}
-
-#define READHSCX(cs, nr, reg) readreg(cs->hw.niccy.hscx_ale, \
- cs->hw.niccy.hscx, reg + (nr ? 0x40 : 0))
-#define WRITEHSCX(cs, nr, reg, data) writereg(cs->hw.niccy.hscx_ale, \
- cs->hw.niccy.hscx, reg + (nr ? 0x40 : 0), data)
-
-#define READHSCXFIFO(cs, nr, ptr, cnt) readfifo(cs->hw.niccy.hscx_ale, \
- cs->hw.niccy.hscx, (nr ? 0x40 : 0), ptr, cnt)
-
-#define WRITEHSCXFIFO(cs, nr, ptr, cnt) writefifo(cs->hw.niccy.hscx_ale, \
- cs->hw.niccy.hscx, (nr ? 0x40 : 0), ptr, cnt)
-
-#include "hscx_irq.c"
-
-static irqreturn_t niccy_interrupt(int intno, void *dev_id)
-{
- struct IsdnCardState *cs = dev_id;
- u_char val;
- u_long flags;
-
- spin_lock_irqsave(&cs->lock, flags);
- if (cs->subtyp == NICCY_PCI) {
- int ival;
- ival = inl(cs->hw.niccy.cfg_reg + PCI_IRQ_CTRL_REG);
- if (!(ival & PCI_IRQ_ASSERT)) { /* IRQ not for us (shared) */
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_NONE;
- }
- outl(ival, cs->hw.niccy.cfg_reg + PCI_IRQ_CTRL_REG);
- }
- val = readreg(cs->hw.niccy.hscx_ale, cs->hw.niccy.hscx,
- HSCX_ISTA + 0x40);
-Start_HSCX:
- if (val)
- hscx_int_main(cs, val);
- val = readreg(cs->hw.niccy.isac_ale, cs->hw.niccy.isac, ISAC_ISTA);
-Start_ISAC:
- if (val)
- isac_interrupt(cs, val);
- val = readreg(cs->hw.niccy.hscx_ale, cs->hw.niccy.hscx,
- HSCX_ISTA + 0x40);
- if (val) {
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "HSCX IntStat after IntRoutine");
- goto Start_HSCX;
- }
- val = readreg(cs->hw.niccy.isac_ale, cs->hw.niccy.isac, ISAC_ISTA);
- if (val) {
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "ISAC IntStat after IntRoutine");
- goto Start_ISAC;
- }
- writereg(cs->hw.niccy.hscx_ale, cs->hw.niccy.hscx, HSCX_MASK, 0xFF);
- writereg(cs->hw.niccy.hscx_ale, cs->hw.niccy.hscx, HSCX_MASK + 0x40,
- 0xFF);
- writereg(cs->hw.niccy.isac_ale, cs->hw.niccy.isac, ISAC_MASK, 0xFF);
- writereg(cs->hw.niccy.isac_ale, cs->hw.niccy.isac, ISAC_MASK, 0);
- writereg(cs->hw.niccy.hscx_ale, cs->hw.niccy.hscx, HSCX_MASK, 0);
- writereg(cs->hw.niccy.hscx_ale, cs->hw.niccy.hscx, HSCX_MASK + 0x40, 0);
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
-}
-
-static void release_io_niccy(struct IsdnCardState *cs)
-{
- if (cs->subtyp == NICCY_PCI) {
- int val;
-
- val = inl(cs->hw.niccy.cfg_reg + PCI_IRQ_CTRL_REG);
- val &= PCI_IRQ_DISABLE;
- outl(val, cs->hw.niccy.cfg_reg + PCI_IRQ_CTRL_REG);
- release_region(cs->hw.niccy.cfg_reg, 0x40);
- release_region(cs->hw.niccy.isac, 4);
- } else {
- release_region(cs->hw.niccy.isac, 2);
- release_region(cs->hw.niccy.isac_ale, 2);
- }
-}
-
-static void niccy_reset(struct IsdnCardState *cs)
-{
- if (cs->subtyp == NICCY_PCI) {
- int val;
-
- val = inl(cs->hw.niccy.cfg_reg + PCI_IRQ_CTRL_REG);
- val |= PCI_IRQ_ENABLE;
- outl(val, cs->hw.niccy.cfg_reg + PCI_IRQ_CTRL_REG);
- }
- inithscxisac(cs, 3);
-}
-
-static int niccy_card_msg(struct IsdnCardState *cs, int mt, void *arg)
-{
- u_long flags;
-
- switch (mt) {
- case CARD_RESET:
- spin_lock_irqsave(&cs->lock, flags);
- niccy_reset(cs);
- spin_unlock_irqrestore(&cs->lock, flags);
- return 0;
- case CARD_RELEASE:
- release_io_niccy(cs);
- return 0;
- case CARD_INIT:
- spin_lock_irqsave(&cs->lock, flags);
- niccy_reset(cs);
- spin_unlock_irqrestore(&cs->lock, flags);
- return 0;
- case CARD_TEST:
- return 0;
- }
- return 0;
-}
-
-#ifdef __ISAPNP__
-static struct pnp_card *pnp_c = NULL;
-#endif
-
-int setup_niccy(struct IsdnCard *card)
-{
- struct IsdnCardState *cs = card->cs;
- char tmp[64];
-
- strcpy(tmp, niccy_revision);
- printk(KERN_INFO "HiSax: Niccy driver Rev. %s\n", HiSax_getrev(tmp));
- if (cs->typ != ISDN_CTYPE_NICCY)
- return 0;
-#ifdef __ISAPNP__
- if (!card->para[1] && isapnp_present()) {
- struct pnp_dev *pnp_d = NULL;
- int err;
-
- pnp_c = pnp_find_card(ISAPNP_VENDOR('S', 'D', 'A'),
- ISAPNP_FUNCTION(0x0150), pnp_c);
- if (pnp_c) {
- pnp_d = pnp_find_dev(pnp_c,
- ISAPNP_VENDOR('S', 'D', 'A'),
- ISAPNP_FUNCTION(0x0150), pnp_d);
- if (!pnp_d) {
- printk(KERN_ERR "NiccyPnP: PnP error card "
- "found, no device\n");
- return 0;
- }
- pnp_disable_dev(pnp_d);
- err = pnp_activate_dev(pnp_d);
- if (err < 0) {
- printk(KERN_WARNING "%s: pnp_activate_dev "
- "ret(%d)\n", __func__, err);
- return 0;
- }
- card->para[1] = pnp_port_start(pnp_d, 0);
- card->para[2] = pnp_port_start(pnp_d, 1);
- card->para[0] = pnp_irq(pnp_d, 0);
- if (card->para[0] == -1 || !card->para[1] ||
- !card->para[2]) {
- printk(KERN_ERR "NiccyPnP:some resources are "
- "missing %ld/%lx/%lx\n",
- card->para[0], card->para[1],
- card->para[2]);
- pnp_disable_dev(pnp_d);
- return 0;
- }
- } else
- printk(KERN_INFO "NiccyPnP: no ISAPnP card found\n");
- }
-#endif
- if (card->para[1]) {
- cs->hw.niccy.isac = card->para[1] + ISAC_PNP;
- cs->hw.niccy.hscx = card->para[1] + HSCX_PNP;
- cs->hw.niccy.isac_ale = card->para[2] + ISAC_PNP;
- cs->hw.niccy.hscx_ale = card->para[2] + HSCX_PNP;
- cs->hw.niccy.cfg_reg = 0;
- cs->subtyp = NICCY_PNP;
- cs->irq = card->para[0];
- if (!request_region(cs->hw.niccy.isac, 2, "niccy data")) {
- printk(KERN_WARNING "HiSax: NICCY data port %x-%x "
- "already in use\n",
- cs->hw.niccy.isac, cs->hw.niccy.isac + 1);
- return 0;
- }
- if (!request_region(cs->hw.niccy.isac_ale, 2, "niccy addr")) {
- printk(KERN_WARNING "HiSax: NICCY address port %x-%x "
- "already in use\n",
- cs->hw.niccy.isac_ale,
- cs->hw.niccy.isac_ale + 1);
- release_region(cs->hw.niccy.isac, 2);
- return 0;
- }
- } else {
-#ifdef CONFIG_PCI
- static struct pci_dev *niccy_dev;
-
- u_int pci_ioaddr;
- cs->subtyp = 0;
- if ((niccy_dev = hisax_find_pci_device(PCI_VENDOR_ID_SATSAGEM,
- PCI_DEVICE_ID_SATSAGEM_NICCY,
- niccy_dev))) {
- if (pci_enable_device(niccy_dev))
- return 0;
- /* get IRQ */
- if (!niccy_dev->irq) {
- printk(KERN_WARNING
- "Niccy: No IRQ for PCI card found\n");
- return 0;
- }
- cs->irq = niccy_dev->irq;
- cs->hw.niccy.cfg_reg = pci_resource_start(niccy_dev, 0);
- if (!cs->hw.niccy.cfg_reg) {
- printk(KERN_WARNING
- "Niccy: No IO-Adr for PCI cfg found\n");
- return 0;
- }
- pci_ioaddr = pci_resource_start(niccy_dev, 1);
- if (!pci_ioaddr) {
- printk(KERN_WARNING
- "Niccy: No IO-Adr for PCI card found\n");
- return 0;
- }
- cs->subtyp = NICCY_PCI;
- } else {
- printk(KERN_WARNING "Niccy: No PCI card found\n");
- return 0;
- }
- cs->irq_flags |= IRQF_SHARED;
- cs->hw.niccy.isac = pci_ioaddr + ISAC_PCI_DATA;
- cs->hw.niccy.isac_ale = pci_ioaddr + ISAC_PCI_ADDR;
- cs->hw.niccy.hscx = pci_ioaddr + HSCX_PCI_DATA;
- cs->hw.niccy.hscx_ale = pci_ioaddr + HSCX_PCI_ADDR;
- if (!request_region(cs->hw.niccy.isac, 4, "niccy")) {
- printk(KERN_WARNING
- "HiSax: NICCY data port %x-%x already in use\n",
- cs->hw.niccy.isac, cs->hw.niccy.isac + 4);
- return 0;
- }
- if (!request_region(cs->hw.niccy.cfg_reg, 0x40, "niccy pci")) {
- printk(KERN_WARNING
- "HiSax: NICCY pci port %x-%x already in use\n",
- cs->hw.niccy.cfg_reg,
- cs->hw.niccy.cfg_reg + 0x40);
- release_region(cs->hw.niccy.isac, 4);
- return 0;
- }
-#else
- printk(KERN_WARNING "Niccy: io0 0 and NO_PCI_BIOS\n");
- printk(KERN_WARNING "Niccy: unable to config NICCY PCI\n");
- return 0;
-#endif /* CONFIG_PCI */
- }
- printk(KERN_INFO "HiSax: NICCY %s config irq:%d data:0x%X ale:0x%X\n",
- (cs->subtyp == 1) ? "PnP" : "PCI",
- cs->irq, cs->hw.niccy.isac, cs->hw.niccy.isac_ale);
- setup_isac(cs);
- cs->readisac = &ReadISAC;
- cs->writeisac = &WriteISAC;
- cs->readisacfifo = &ReadISACfifo;
- cs->writeisacfifo = &WriteISACfifo;
- cs->BC_Read_Reg = &ReadHSCX;
- cs->BC_Write_Reg = &WriteHSCX;
- cs->BC_Send_Data = &hscx_fill_fifo;
- cs->cardmsg = &niccy_card_msg;
- cs->irq_func = &niccy_interrupt;
- ISACVersion(cs, "Niccy:");
- if (HscxVersion(cs, "Niccy:")) {
- printk(KERN_WARNING "Niccy: wrong HSCX versions check IO "
- "address\n");
- release_io_niccy(cs);
- return 0;
- }
- return 1;
-}
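The interrupt handlers being removed in this series all follow the same service pattern as niccy_interrupt above: re-read the HSCX and ISAC ISTA registers until both come back clear, then write 0xFF followed by 0x00 to the mask registers to re-arm the chips. A minimal sketch of that control flow, assuming hypothetical read_ista()/service()/write_mask() helpers in place of the real register accessors:

#include <stdbool.h>

struct chip { int ista; };			/* toy stand-in for one HSCX/ISAC */

static int  read_ista(struct chip *c)		{ int v = c->ista; c->ista = 0; return v; }
static void service(struct chip *c, int v)	{ (void)c; (void)v; /* handle the event bits */ }
static void write_mask(struct chip *c, int m)	{ (void)c; (void)m; /* program the MASK register */ }

static void isr_loop(struct chip *isac, struct chip *hscx)
{
	int val;
	bool again = true;

	while (again) {
		again = false;
		if ((val = read_ista(hscx))) { service(hscx, val); again = true; }
		if ((val = read_ista(isac))) { service(isac, val); again = true; }
	}
	/* pulse the masks: all-ones disables, zero re-enables interrupts */
	write_mask(hscx, 0xFF); write_mask(isac, 0xFF);
	write_mask(isac, 0x00); write_mask(hscx, 0x00);
}

The 0xFF/0x00 mask pulse at the end appears to be the re-arm step for both chips; the sketch only models the loop structure, not the real I/O.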
diff --git a/drivers/isdn/hisax/nj_s.c b/drivers/isdn/hisax/nj_s.c
deleted file mode 100644
index 32b4bbd18eb9..000000000000
--- a/drivers/isdn/hisax/nj_s.c
+++ /dev/null
@@ -1,294 +0,0 @@
-/* $Id: nj_s.c,v 2.13.2.4 2004/01/16 01:53:48 keil Exp $
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#include <linux/init.h>
-#include "hisax.h"
-#include "isac.h"
-#include "isdnl1.h"
-#include <linux/pci.h>
-#include <linux/interrupt.h>
-#include <linux/ppp_defs.h>
-#include "netjet.h"
-
-static const char *NETjet_S_revision = "$Revision: 2.13.2.4 $";
-
-static u_char dummyrr(struct IsdnCardState *cs, int chan, u_char off)
-{
- return (5);
-}
-
-static void dummywr(struct IsdnCardState *cs, int chan, u_char off, u_char value)
-{
-}
-
-static irqreturn_t
-netjet_s_interrupt(int intno, void *dev_id)
-{
- struct IsdnCardState *cs = dev_id;
- u_char val, s1val, s0val;
- u_long flags;
-
- spin_lock_irqsave(&cs->lock, flags);
- s1val = bytein(cs->hw.njet.base + NETJET_IRQSTAT1);
- if (!(s1val & NETJET_ISACIRQ)) {
- val = NETjet_ReadIC(cs, ISAC_ISTA);
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "tiger: i1 %x %x", s1val, val);
- if (val) {
- isac_interrupt(cs, val);
- NETjet_WriteIC(cs, ISAC_MASK, 0xFF);
- NETjet_WriteIC(cs, ISAC_MASK, 0x0);
- }
- s1val = 1;
- } else
- s1val = 0;
-	/*
-	 * Reading/writing stat0 is better because it gives a lower IRQ rate.
-	 * Note the IRQ stays asserted for 125 us if a condition matches;
-	 * that is long on a modern CPU, so the IRQ gets re-entered
-	 * all the time.
-	 */
- s0val = bytein(cs->hw.njet.base + NETJET_IRQSTAT0);
- if ((s0val | s1val) == 0) { // shared IRQ
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_NONE;
- }
- if (s0val)
- byteout(cs->hw.njet.base + NETJET_IRQSTAT0, s0val);
- /* start new code 13/07/00 GE */
- /* set bits in sval to indicate which page is free */
- if (inl(cs->hw.njet.base + NETJET_DMA_WRITE_ADR) <
- inl(cs->hw.njet.base + NETJET_DMA_WRITE_IRQ))
- /* the 2nd write page is free */
- s0val = 0x08;
- else /* the 1st write page is free */
- s0val = 0x04;
- if (inl(cs->hw.njet.base + NETJET_DMA_READ_ADR) <
- inl(cs->hw.njet.base + NETJET_DMA_READ_IRQ))
- /* the 2nd read page is free */
- s0val |= 0x02;
- else /* the 1st read page is free */
- s0val |= 0x01;
- if (s0val != cs->hw.njet.last_is0) /* we have a DMA interrupt */
- {
- if (test_and_set_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- printk(KERN_WARNING "nj LOCK_ATOMIC s0val %x->%x\n",
- cs->hw.njet.last_is0, s0val);
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
- }
- cs->hw.njet.irqstat0 = s0val;
- if ((cs->hw.njet.irqstat0 & NETJET_IRQM0_READ) !=
- (cs->hw.njet.last_is0 & NETJET_IRQM0_READ))
- /* we have a read dma int */
- read_tiger(cs);
- if ((cs->hw.njet.irqstat0 & NETJET_IRQM0_WRITE) !=
- (cs->hw.njet.last_is0 & NETJET_IRQM0_WRITE))
- /* we have a write dma int */
- write_tiger(cs);
- /* end new code 13/07/00 GE */
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
- }
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
-}
-
-static void
-reset_netjet_s(struct IsdnCardState *cs)
-{
- cs->hw.njet.ctrl_reg = 0xff; /* Reset On */
- byteout(cs->hw.njet.base + NETJET_CTRL, cs->hw.njet.ctrl_reg);
- mdelay(10);
- /* now edge triggered for TJ320 GE 13/07/00 */
- /* see comment in IRQ function */
- if (cs->subtyp) /* TJ320 */
- cs->hw.njet.ctrl_reg = 0x40; /* Reset Off and status read clear */
- else
- cs->hw.njet.ctrl_reg = 0x00; /* Reset Off and status read clear */
- byteout(cs->hw.njet.base + NETJET_CTRL, cs->hw.njet.ctrl_reg);
- mdelay(10);
- cs->hw.njet.auxd = 0;
- cs->hw.njet.dmactrl = 0;
- byteout(cs->hw.njet.base + NETJET_AUXCTRL, ~NETJET_ISACIRQ);
- byteout(cs->hw.njet.base + NETJET_IRQMASK1, NETJET_ISACIRQ);
- byteout(cs->hw.njet.auxa, cs->hw.njet.auxd);
-}
-
-static int
-NETjet_S_card_msg(struct IsdnCardState *cs, int mt, void *arg)
-{
- u_long flags;
-
- switch (mt) {
- case CARD_RESET:
- spin_lock_irqsave(&cs->lock, flags);
- reset_netjet_s(cs);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_RELEASE:
- release_io_netjet(cs);
- return (0);
- case CARD_INIT:
- reset_netjet_s(cs);
- inittiger(cs);
- spin_lock_irqsave(&cs->lock, flags);
- clear_pending_isac_ints(cs);
- initisac(cs);
- /* Reenable all IRQ */
- cs->writeisac(cs, ISAC_MASK, 0);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_TEST:
- return (0);
- }
- return (0);
-}
-
-static int njs_pci_probe(struct pci_dev *dev_netjet, struct IsdnCardState *cs)
-{
- u32 cfg;
-
- if (pci_enable_device(dev_netjet))
- return (0);
- pci_set_master(dev_netjet);
- cs->irq = dev_netjet->irq;
- if (!cs->irq) {
- printk(KERN_WARNING "NETjet-S: No IRQ for PCI card found\n");
- return (0);
- }
- cs->hw.njet.base = pci_resource_start(dev_netjet, 0);
- if (!cs->hw.njet.base) {
- printk(KERN_WARNING "NETjet-S: No IO-Adr for PCI card found\n");
- return (0);
- }
-	/* The TJ300 and TJ320 must be told apart because their IRQ handling
-	 * differs; unfortunately the chips share the same device ID, but the
-	 * TJ320 has bit 20 set in the PCI status config register.
-	 */
- pci_read_config_dword(dev_netjet, 0x04, &cfg);
- if (cfg & 0x00100000)
- cs->subtyp = 1; /* TJ320 */
- else
- cs->subtyp = 0; /* TJ300 */
- /* 2001/10/04 Christoph Ersfeld, Formula-n Europe AG www.formula-n.com */
- if ((dev_netjet->subsystem_vendor == 0x55) &&
- (dev_netjet->subsystem_device == 0x02)) {
- printk(KERN_WARNING "Netjet: You tried to load this driver with an incompatible TigerJet-card\n");
- printk(KERN_WARNING "Use type=41 for Formula-n enter:now ISDN PCI and compatible\n");
- return (0);
- }
- /* end new code */
-
- return (1);
-}
-
-static int njs_cs_init(struct IsdnCard *card, struct IsdnCardState *cs)
-{
-
- cs->hw.njet.auxa = cs->hw.njet.base + NETJET_AUXDATA;
- cs->hw.njet.isac = cs->hw.njet.base | NETJET_ISAC_OFF;
-
- cs->hw.njet.ctrl_reg = 0xff; /* Reset On */
- byteout(cs->hw.njet.base + NETJET_CTRL, cs->hw.njet.ctrl_reg);
- mdelay(10);
-
- cs->hw.njet.ctrl_reg = 0x00; /* Reset Off and status read clear */
- byteout(cs->hw.njet.base + NETJET_CTRL, cs->hw.njet.ctrl_reg);
- mdelay(10);
-
- cs->hw.njet.auxd = 0xC0;
- cs->hw.njet.dmactrl = 0;
-
- byteout(cs->hw.njet.base + NETJET_AUXCTRL, ~NETJET_ISACIRQ);
- byteout(cs->hw.njet.base + NETJET_IRQMASK1, NETJET_ISACIRQ);
- byteout(cs->hw.njet.auxa, cs->hw.njet.auxd);
-
- switch (((NETjet_ReadIC(cs, ISAC_RBCH) >> 5) & 3))
- {
- case 0:
- return 1; /* end loop */
-
- case 3:
- printk(KERN_WARNING "NETjet-S: NETspider-U PCI card found\n");
- return -1; /* continue looping */
-
- default:
- printk(KERN_WARNING "NETjet-S: No PCI card found\n");
- return 0; /* end loop & function */
- }
- return 1; /* end loop */
-}
-
-static int njs_cs_init_rest(struct IsdnCard *card, struct IsdnCardState *cs)
-{
- const int bytecnt = 256;
-
- printk(KERN_INFO
- "NETjet-S: %s card configured at %#lx IRQ %d\n",
- cs->subtyp ? "TJ320" : "TJ300", cs->hw.njet.base, cs->irq);
- if (!request_region(cs->hw.njet.base, bytecnt, "netjet-s isdn")) {
- printk(KERN_WARNING
- "HiSax: NETjet-S config port %#lx-%#lx already in use\n",
- cs->hw.njet.base,
- cs->hw.njet.base + bytecnt);
- return (0);
- }
- cs->readisac = &NETjet_ReadIC;
- cs->writeisac = &NETjet_WriteIC;
- cs->readisacfifo = &NETjet_ReadICfifo;
- cs->writeisacfifo = &NETjet_WriteICfifo;
- cs->BC_Read_Reg = &dummyrr;
- cs->BC_Write_Reg = &dummywr;
- cs->BC_Send_Data = &netjet_fill_dma;
- setup_isac(cs);
- cs->cardmsg = &NETjet_S_card_msg;
- cs->irq_func = &netjet_s_interrupt;
- cs->irq_flags |= IRQF_SHARED;
- ISACVersion(cs, "NETjet-S:");
-
- return (1);
-}
-
-static struct pci_dev *dev_netjet = NULL;
-
-int setup_netjet_s(struct IsdnCard *card)
-{
- int ret;
- struct IsdnCardState *cs = card->cs;
- char tmp[64];
-
-#ifdef __BIG_ENDIAN
-#error "this driver does not currently run on big endian machines"
-#endif
- strcpy(tmp, NETjet_S_revision);
- printk(KERN_INFO "HiSax: Traverse Tech. NETjet-S driver Rev. %s\n", HiSax_getrev(tmp));
- if (cs->typ != ISDN_CTYPE_NETJET_S)
- return (0);
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
-
- for (;;)
- {
- if ((dev_netjet = hisax_find_pci_device(PCI_VENDOR_ID_TIGERJET,
- PCI_DEVICE_ID_TIGERJET_300, dev_netjet))) {
- ret = njs_pci_probe(dev_netjet, cs);
- if (!ret)
- return (0);
- } else {
- printk(KERN_WARNING "NETjet-S: No PCI card found\n");
- return (0);
- }
-
- ret = njs_cs_init(card, cs);
- if (!ret)
- return (0);
- if (ret > 0)
- break;
- /* otherwise, ret < 0, continue looping */
- }
-
- return njs_cs_init_rest(card, cs);
-}
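netjet_s_interrupt() above folds the DMA state into a small bitmask: each read/write channel contributes one of two bits depending on whether its current DMA address register is below the IRQ threshold, and read_tiger()/write_tiger() only run when that mask differs from last_is0. A self-contained sketch of just the page-selection step (the macro and function names here are illustrative, not the driver's):

#include <stdint.h>

#define WR_PAGE2 0x08	/* 2nd transmit page free */
#define WR_PAGE1 0x04	/* 1st transmit page free */
#define RD_PAGE2 0x02	/* 2nd receive page free */
#define RD_PAGE1 0x01	/* 1st receive page free */

static uint8_t dma_free_pages(uint32_t wr_adr, uint32_t wr_irq,
			      uint32_t rd_adr, uint32_t rd_irq)
{
	uint8_t val;

	/* DMA pointer below the IRQ threshold => the second page is free */
	val  = (wr_adr < wr_irq) ? WR_PAGE2 : WR_PAGE1;
	val |= (rd_adr < rd_irq) ? RD_PAGE2 : RD_PAGE1;
	return val;
}

Only when this value differs from the stored last_is0 does the handler treat the event as a DMA interrupt and run the tiger read/write paths.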
diff --git a/drivers/isdn/hisax/nj_u.c b/drivers/isdn/hisax/nj_u.c
deleted file mode 100644
index 4e8adbede361..000000000000
--- a/drivers/isdn/hisax/nj_u.c
+++ /dev/null
@@ -1,258 +0,0 @@
-/* $Id: nj_u.c,v 2.14.2.3 2004/01/13 14:31:26 keil Exp $
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#include <linux/init.h>
-#include "hisax.h"
-#include "icc.h"
-#include "isdnl1.h"
-#include <linux/pci.h>
-#include <linux/interrupt.h>
-#include <linux/ppp_defs.h>
-#include "netjet.h"
-
-static const char *NETjet_U_revision = "$Revision: 2.14.2.3 $";
-
-static u_char dummyrr(struct IsdnCardState *cs, int chan, u_char off)
-{
- return (5);
-}
-
-static void dummywr(struct IsdnCardState *cs, int chan, u_char off, u_char value)
-{
-}
-
-static irqreturn_t
-netjet_u_interrupt(int intno, void *dev_id)
-{
- struct IsdnCardState *cs = dev_id;
- u_char val, sval;
- u_long flags;
-
- spin_lock_irqsave(&cs->lock, flags);
- if (!((sval = bytein(cs->hw.njet.base + NETJET_IRQSTAT1)) &
- NETJET_ISACIRQ)) {
- val = NETjet_ReadIC(cs, ICC_ISTA);
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "tiger: i1 %x %x", sval, val);
- if (val) {
- icc_interrupt(cs, val);
- NETjet_WriteIC(cs, ICC_MASK, 0xFF);
- NETjet_WriteIC(cs, ICC_MASK, 0x0);
- }
- }
- /* start new code 13/07/00 GE */
- /* set bits in sval to indicate which page is free */
- if (inl(cs->hw.njet.base + NETJET_DMA_WRITE_ADR) <
- inl(cs->hw.njet.base + NETJET_DMA_WRITE_IRQ))
- /* the 2nd write page is free */
- sval = 0x08;
- else /* the 1st write page is free */
- sval = 0x04;
- if (inl(cs->hw.njet.base + NETJET_DMA_READ_ADR) <
- inl(cs->hw.njet.base + NETJET_DMA_READ_IRQ))
- /* the 2nd read page is free */
- sval = sval | 0x02;
- else /* the 1st read page is free */
- sval = sval | 0x01;
- if (sval != cs->hw.njet.last_is0) /* we have a DMA interrupt */
- {
- if (test_and_set_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags)) {
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
- }
- cs->hw.njet.irqstat0 = sval;
- if ((cs->hw.njet.irqstat0 & NETJET_IRQM0_READ) !=
- (cs->hw.njet.last_is0 & NETJET_IRQM0_READ))
- /* we have a read dma int */
- read_tiger(cs);
- if ((cs->hw.njet.irqstat0 & NETJET_IRQM0_WRITE) !=
- (cs->hw.njet.last_is0 & NETJET_IRQM0_WRITE))
- /* we have a write dma int */
- write_tiger(cs);
- /* end new code 13/07/00 GE */
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
- }
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
-}
-
-static void
-reset_netjet_u(struct IsdnCardState *cs)
-{
- cs->hw.njet.ctrl_reg = 0xff; /* Reset On */
- byteout(cs->hw.njet.base + NETJET_CTRL, cs->hw.njet.ctrl_reg);
- mdelay(10);
- cs->hw.njet.ctrl_reg = 0x40; /* Reset Off and status read clear */
- /* now edge triggered for TJ320 GE 13/07/00 */
- byteout(cs->hw.njet.base + NETJET_CTRL, cs->hw.njet.ctrl_reg);
- mdelay(10);
- cs->hw.njet.auxd = 0xC0;
- cs->hw.njet.dmactrl = 0;
- byteout(cs->hw.njet.auxa, 0);
- byteout(cs->hw.njet.base + NETJET_AUXCTRL, ~NETJET_ISACIRQ);
- byteout(cs->hw.njet.base + NETJET_IRQMASK1, NETJET_ISACIRQ);
- byteout(cs->hw.njet.auxa, cs->hw.njet.auxd);
-}
-
-static int
-NETjet_U_card_msg(struct IsdnCardState *cs, int mt, void *arg)
-{
- u_long flags;
-
- switch (mt) {
- case CARD_RESET:
- spin_lock_irqsave(&cs->lock, flags);
- reset_netjet_u(cs);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_RELEASE:
- release_io_netjet(cs);
- return (0);
- case CARD_INIT:
- spin_lock_irqsave(&cs->lock, flags);
- inittiger(cs);
- reset_netjet_u(cs);
- clear_pending_icc_ints(cs);
- initicc(cs);
- /* Reenable all IRQ */
- cs->writeisac(cs, ICC_MASK, 0);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_TEST:
- return (0);
- }
- return (0);
-}
-
-static int nju_pci_probe(struct pci_dev *dev_netjet, struct IsdnCardState *cs)
-{
- if (pci_enable_device(dev_netjet))
- return (0);
- pci_set_master(dev_netjet);
- cs->irq = dev_netjet->irq;
- if (!cs->irq) {
- printk(KERN_WARNING "NETspider-U: No IRQ for PCI card found\n");
- return (0);
- }
- cs->hw.njet.base = pci_resource_start(dev_netjet, 0);
- if (!cs->hw.njet.base) {
- printk(KERN_WARNING "NETspider-U: No IO-Adr for PCI card found\n");
- return (0);
- }
-
- return (1);
-}
-
-static int nju_cs_init(struct IsdnCard *card, struct IsdnCardState *cs)
-{
- cs->hw.njet.auxa = cs->hw.njet.base + NETJET_AUXDATA;
- cs->hw.njet.isac = cs->hw.njet.base | NETJET_ISAC_OFF;
- mdelay(10);
-
- cs->hw.njet.ctrl_reg = 0xff; /* Reset On */
- byteout(cs->hw.njet.base + NETJET_CTRL, cs->hw.njet.ctrl_reg);
- mdelay(10);
-
- cs->hw.njet.ctrl_reg = 0x00; /* Reset Off and status read clear */
- byteout(cs->hw.njet.base + NETJET_CTRL, cs->hw.njet.ctrl_reg);
- mdelay(10);
-
- cs->hw.njet.auxd = 0xC0;
- cs->hw.njet.dmactrl = 0;
-
- byteout(cs->hw.njet.auxa, 0);
- byteout(cs->hw.njet.base + NETJET_AUXCTRL, ~NETJET_ISACIRQ);
- byteout(cs->hw.njet.base + NETJET_IRQMASK1, NETJET_ISACIRQ);
- byteout(cs->hw.njet.auxa, cs->hw.njet.auxd);
-
- switch (((NETjet_ReadIC(cs, ICC_RBCH) >> 5) & 3))
- {
- case 3:
- return 1; /* end loop */
-
- case 0:
- printk(KERN_WARNING "NETspider-U: NETjet-S PCI card found\n");
- return -1; /* continue looping */
-
- default:
- printk(KERN_WARNING "NETspider-U: No PCI card found\n");
- return 0; /* end loop & function */
- }
- return 1; /* end loop */
-}
-
-static int nju_cs_init_rest(struct IsdnCard *card, struct IsdnCardState *cs)
-{
- const int bytecnt = 256;
-
- printk(KERN_INFO
- "NETspider-U: PCI card configured at %#lx IRQ %d\n",
- cs->hw.njet.base, cs->irq);
- if (!request_region(cs->hw.njet.base, bytecnt, "netspider-u isdn")) {
- printk(KERN_WARNING
- "HiSax: NETspider-U config port %#lx-%#lx "
- "already in use\n",
- cs->hw.njet.base,
- cs->hw.njet.base + bytecnt);
- return (0);
- }
- setup_icc(cs);
- cs->readisac = &NETjet_ReadIC;
- cs->writeisac = &NETjet_WriteIC;
- cs->readisacfifo = &NETjet_ReadICfifo;
- cs->writeisacfifo = &NETjet_WriteICfifo;
- cs->BC_Read_Reg = &dummyrr;
- cs->BC_Write_Reg = &dummywr;
- cs->BC_Send_Data = &netjet_fill_dma;
- cs->cardmsg = &NETjet_U_card_msg;
- cs->irq_func = &netjet_u_interrupt;
- cs->irq_flags |= IRQF_SHARED;
- ICCVersion(cs, "NETspider-U:");
-
- return (1);
-}
-
-static struct pci_dev *dev_netjet = NULL;
-
-int setup_netjet_u(struct IsdnCard *card)
-{
- int ret;
- struct IsdnCardState *cs = card->cs;
- char tmp[64];
-
-#ifdef __BIG_ENDIAN
-#error "this driver does not currently run on big endian machines"
-#endif
-
- strcpy(tmp, NETjet_U_revision);
- printk(KERN_INFO "HiSax: Traverse Tech. NETspider-U driver Rev. %s\n", HiSax_getrev(tmp));
- if (cs->typ != ISDN_CTYPE_NETJET_U)
- return (0);
- test_and_clear_bit(FLG_LOCK_ATOMIC, &cs->HW_Flags);
-
- for (;;)
- {
- if ((dev_netjet = hisax_find_pci_device(PCI_VENDOR_ID_TIGERJET,
- PCI_DEVICE_ID_TIGERJET_300, dev_netjet))) {
- ret = nju_pci_probe(dev_netjet, cs);
- if (!ret)
- return (0);
- } else {
- printk(KERN_WARNING "NETspider-U: No PCI card found\n");
- return (0);
- }
-
- ret = nju_cs_init(card, cs);
- if (!ret)
- return (0);
- if (ret > 0)
- break;
- /* ret < 0 == continue looping */
- }
-
- return nju_cs_init_rest(card, cs);
-}
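setup_netjet_s() and setup_netjet_u() above share one probe loop: the NETjet-S and NETspider-U report the same TigerJet PCI ID, so the per-card init helper inspects RBCH bits 6..5 and answers 1 (this variant), -1 (the sibling card, keep scanning) or 0 (give up). A hedged sketch of that loop, with probe_pci()/identify()/finish_setup() as stand-ins for the njs_/nju_pci_probe(), njs_/nju_cs_init() and njs_/nju_cs_init_rest() helpers:

/* Stand-ins for the real helpers; the return values mirror the drivers' protocol. */
static int probe_pci(void)	{ return 1; }	/* found another TigerJet 300 device? */
static int identify(void)	{ return 1; }	/* RBCH bits 6..5: which chip variant? */
static int finish_setup(void)	{ return 1; }	/* request region, hook the callbacks */

static int setup_loop(void)
{
	for (;;) {
		if (!probe_pci())
			return 0;		/* no (more) matching cards */
		switch (identify()) {
		case 1:
			return finish_setup();	/* our variant: finish configuration */
		case -1:
			continue;		/* sibling card: keep scanning */
		default:
			return 0;		/* unexpected chip: abort */
		}
	}
}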
diff --git a/drivers/isdn/hisax/q931.c b/drivers/isdn/hisax/q931.c
deleted file mode 100644
index 6b8c3fbe3965..000000000000
--- a/drivers/isdn/hisax/q931.c
+++ /dev/null
@@ -1,1513 +0,0 @@
-/* $Id: q931.c,v 1.12.2.3 2004/01/13 14:31:26 keil Exp $
- *
- * code to decode ITU Q.931 call control messages
- *
- * Author Jan den Ouden
- * Copyright by Jan den Ouden
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- * Changelog:
- *
- * Pauline Middelink general improvements
- * Beat Doebeli cause texts, display information element
- * Karsten Keil cause texts, display information element for 1TR6
- *
- */
-
-
-#include "hisax.h"
-#include "l3_1tr6.h"
-
-void
-iecpy(u_char *dest, u_char *iestart, int ieoffset)
-{
- u_char *p;
- int l;
-
- p = iestart + ieoffset + 2;
- l = iestart[1] - ieoffset;
- while (l--)
- *dest++ = *p++;
- *dest++ = '\0';
-}
-
-/*
- * According to Table 4-2/Q.931
- */
-static
-struct MessageType {
- u_char nr;
- char *descr;
-} mtlist[] = {
-
- {
- 0x1, "ALERTING"
- },
- {
- 0x2, "CALL PROCEEDING"
- },
- {
- 0x7, "CONNECT"
- },
- {
- 0xf, "CONNECT ACKNOWLEDGE"
- },
- {
- 0x3, "PROGRESS"
- },
- {
- 0x5, "SETUP"
- },
- {
- 0xd, "SETUP ACKNOWLEDGE"
- },
- {
- 0x24, "HOLD"
- },
- {
- 0x28, "HOLD ACKNOWLEDGE"
- },
- {
- 0x30, "HOLD REJECT"
- },
- {
- 0x31, "RETRIEVE"
- },
- {
- 0x33, "RETRIEVE ACKNOWLEDGE"
- },
- {
- 0x37, "RETRIEVE REJECT"
- },
- {
- 0x26, "RESUME"
- },
- {
- 0x2e, "RESUME ACKNOWLEDGE"
- },
- {
- 0x22, "RESUME REJECT"
- },
- {
- 0x25, "SUSPEND"
- },
- {
- 0x2d, "SUSPEND ACKNOWLEDGE"
- },
- {
- 0x21, "SUSPEND REJECT"
- },
- {
- 0x20, "USER INFORMATION"
- },
- {
- 0x45, "DISCONNECT"
- },
- {
- 0x4d, "RELEASE"
- },
- {
- 0x5a, "RELEASE COMPLETE"
- },
- {
- 0x46, "RESTART"
- },
- {
- 0x4e, "RESTART ACKNOWLEDGE"
- },
- {
- 0x60, "SEGMENT"
- },
- {
- 0x79, "CONGESTION CONTROL"
- },
- {
- 0x7b, "INFORMATION"
- },
- {
- 0x62, "FACILITY"
- },
- {
- 0x6e, "NOTIFY"
- },
- {
- 0x7d, "STATUS"
- },
- {
- 0x75, "STATUS ENQUIRY"
- }
-};
-
-#define MTSIZE ARRAY_SIZE(mtlist)
-
-static
-struct MessageType mt_n0[] =
-{
- {MT_N0_REG_IND, "REGister INDication"},
- {MT_N0_CANC_IND, "CANCel INDication"},
- {MT_N0_FAC_STA, "FACility STAtus"},
- {MT_N0_STA_ACK, "STAtus ACKnowledge"},
- {MT_N0_STA_REJ, "STAtus REJect"},
- {MT_N0_FAC_INF, "FACility INFormation"},
- {MT_N0_INF_ACK, "INFormation ACKnowledge"},
- {MT_N0_INF_REJ, "INFormation REJect"},
- {MT_N0_CLOSE, "CLOSE"},
- {MT_N0_CLO_ACK, "CLOse ACKnowledge"}
-};
-
-#define MT_N0_LEN ARRAY_SIZE(mt_n0)
-
-static
-struct MessageType mt_n1[] =
-{
- {MT_N1_ESC, "ESCape"},
- {MT_N1_ALERT, "ALERT"},
- {MT_N1_CALL_SENT, "CALL SENT"},
- {MT_N1_CONN, "CONNect"},
- {MT_N1_CONN_ACK, "CONNect ACKnowledge"},
- {MT_N1_SETUP, "SETUP"},
- {MT_N1_SETUP_ACK, "SETUP ACKnowledge"},
- {MT_N1_RES, "RESume"},
- {MT_N1_RES_ACK, "RESume ACKnowledge"},
- {MT_N1_RES_REJ, "RESume REJect"},
- {MT_N1_SUSP, "SUSPend"},
- {MT_N1_SUSP_ACK, "SUSPend ACKnowledge"},
- {MT_N1_SUSP_REJ, "SUSPend REJect"},
- {MT_N1_USER_INFO, "USER INFO"},
- {MT_N1_DET, "DETach"},
- {MT_N1_DISC, "DISConnect"},
- {MT_N1_REL, "RELease"},
- {MT_N1_REL_ACK, "RELease ACKnowledge"},
- {MT_N1_CANC_ACK, "CANCel ACKnowledge"},
- {MT_N1_CANC_REJ, "CANCel REJect"},
- {MT_N1_CON_CON, "CONgestion CONtrol"},
- {MT_N1_FAC, "FACility"},
- {MT_N1_FAC_ACK, "FACility ACKnowledge"},
- {MT_N1_FAC_CAN, "FACility CANcel"},
- {MT_N1_FAC_REG, "FACility REGister"},
- {MT_N1_FAC_REJ, "FACility REJect"},
- {MT_N1_INFO, "INFOrmation"},
- {MT_N1_REG_ACK, "REGister ACKnowledge"},
- {MT_N1_REG_REJ, "REGister REJect"},
- {MT_N1_STAT, "STATus"}
-};
-
-#define MT_N1_LEN ARRAY_SIZE(mt_n1)
-
-
-static int
-prbits(char *dest, u_char b, int start, int len)
-{
- char *dp = dest;
-
- b = b << (8 - start);
- while (len--) {
- if (b & 0x80)
- *dp++ = '1';
- else
- *dp++ = '0';
- b = b << 1;
- }
- return (dp - dest);
-}
-
-static
-u_char *
-skipext(u_char *p)
-{
- while (!(*p++ & 0x80));
- return (p);
-}
-
-/*
- * Cause Values According to Q.850
- * edescr: English description
- * ddescr: German description used by Swissnet II (Swiss Telecom);
- *         not yet written...
- */
-
-static
-struct CauseValue {
- u_char nr;
- char *edescr;
- char *ddescr;
-} cvlist[] = {
-
- {
- 0x01, "Unallocated (unassigned) number", "Nummer nicht zugeteilt"
- },
- {
- 0x02, "No route to specified transit network", ""
- },
- {
- 0x03, "No route to destination", ""
- },
- {
- 0x04, "Send special information tone", ""
- },
- {
- 0x05, "Misdialled trunk prefix", ""
- },
- {
- 0x06, "Channel unacceptable", "Kanal nicht akzeptierbar"
- },
- {
- 0x07, "Channel awarded and being delivered in an established channel", ""
- },
- {
- 0x08, "Preemption", ""
- },
- {
- 0x09, "Preemption - circuit reserved for reuse", ""
- },
- {
- 0x10, "Normal call clearing", "Normale Ausloesung"
- },
- {
- 0x11, "User busy", "TNB besetzt"
- },
- {
- 0x12, "No user responding", ""
- },
- {
- 0x13, "No answer from user (user alerted)", ""
- },
- {
- 0x14, "Subscriber absent", ""
- },
- {
- 0x15, "Call rejected", ""
- },
- {
- 0x16, "Number changed", ""
- },
- {
- 0x1a, "non-selected user clearing", ""
- },
- {
- 0x1b, "Destination out of order", ""
- },
- {
- 0x1c, "Invalid number format (address incomplete)", ""
- },
- {
- 0x1d, "Facility rejected", ""
- },
- {
- 0x1e, "Response to Status enquiry", ""
- },
- {
- 0x1f, "Normal, unspecified", ""
- },
- {
- 0x22, "No circuit/channel available", ""
- },
- {
- 0x26, "Network out of order", ""
- },
- {
- 0x27, "Permanent frame mode connection out-of-service", ""
- },
- {
- 0x28, "Permanent frame mode connection operational", ""
- },
- {
- 0x29, "Temporary failure", ""
- },
- {
- 0x2a, "Switching equipment congestion", ""
- },
- {
- 0x2b, "Access information discarded", ""
- },
- {
- 0x2c, "Requested circuit/channel not available", ""
- },
- {
- 0x2e, "Precedence call blocked", ""
- },
- {
- 0x2f, "Resource unavailable, unspecified", ""
- },
- {
- 0x31, "Quality of service unavailable", ""
- },
- {
- 0x32, "Requested facility not subscribed", ""
- },
- {
- 0x35, "Outgoing calls barred within CUG", ""
- },
- {
- 0x37, "Incoming calls barred within CUG", ""
- },
- {
- 0x39, "Bearer capability not authorized", ""
- },
- {
- 0x3a, "Bearer capability not presently available", ""
- },
- {
- 0x3e, "Inconsistency in designated outgoing access information and subscriber class ", " "
- },
- {
- 0x3f, "Service or option not available, unspecified", ""
- },
- {
- 0x41, "Bearer capability not implemented", ""
- },
- {
- 0x42, "Channel type not implemented", ""
- },
- {
- 0x43, "Requested facility not implemented", ""
- },
- {
- 0x44, "Only restricted digital information bearer capability is available", ""
- },
- {
- 0x4f, "Service or option not implemented", ""
- },
- {
- 0x51, "Invalid call reference value", ""
- },
- {
- 0x52, "Identified channel does not exist", ""
- },
- {
- 0x53, "A suspended call exists, but this call identity does not", ""
- },
- {
- 0x54, "Call identity in use", ""
- },
- {
- 0x55, "No call suspended", ""
- },
- {
- 0x56, "Call having the requested call identity has been cleared", ""
- },
- {
- 0x57, "User not member of CUG", ""
- },
- {
- 0x58, "Incompatible destination", ""
- },
- {
- 0x5a, "Non-existent CUG", ""
- },
- {
- 0x5b, "Invalid transit network selection", ""
- },
- {
- 0x5f, "Invalid message, unspecified", ""
- },
- {
- 0x60, "Mandatory information element is missing", ""
- },
- {
- 0x61, "Message type non-existent or not implemented", ""
- },
- {
- 0x62, "Message not compatible with call state or message type non-existent or not implemented ", " "
- },
- {
- 0x63, "Information element/parameter non-existent or not implemented", ""
- },
- {
- 0x64, "Invalid information element contents", ""
- },
- {
- 0x65, "Message not compatible with call state", ""
- },
- {
- 0x66, "Recovery on timer expiry", ""
- },
- {
- 0x67, "Parameter non-existent or not implemented - passed on", ""
- },
- {
- 0x6e, "Message with unrecognized parameter discarded", ""
- },
- {
- 0x6f, "Protocol error, unspecified", ""
- },
- {
- 0x7f, "Interworking, unspecified", ""
- },
-};
-
-#define CVSIZE ARRAY_SIZE(cvlist)
-
-static
-int
-prcause(char *dest, u_char *p)
-{
- u_char *end;
- char *dp = dest;
- int i, cause;
-
- end = p + p[1] + 1;
- p += 2;
- dp += sprintf(dp, " coding ");
- dp += prbits(dp, *p, 7, 2);
- dp += sprintf(dp, " location ");
- dp += prbits(dp, *p, 4, 4);
- *dp++ = '\n';
- p = skipext(p);
-
- cause = 0x7f & *p++;
-
- /* locate cause value */
- for (i = 0; i < CVSIZE; i++)
- if (cvlist[i].nr == cause)
- break;
-
- /* display cause value if it exists */
- if (i == CVSIZE)
- dp += sprintf(dp, "Unknown cause type %x!\n", cause);
- else
- dp += sprintf(dp, " cause value %x : %s \n", cause, cvlist[i].edescr);
-
- while (!0) {
- if (p > end)
- break;
- dp += sprintf(dp, " diag attribute %d ", *p++ & 0x7f);
- dp += sprintf(dp, " rej %d ", *p & 0x7f);
- if (*p & 0x80) {
- *dp++ = '\n';
- break;
- } else
- dp += sprintf(dp, " av %d\n", (*++p) & 0x7f);
- }
- return (dp - dest);
-
-}
-
-static
-struct MessageType cause_1tr6[] =
-{
- {CAUSE_InvCRef, "Invalid Call Reference"},
- {CAUSE_BearerNotImpl, "Bearer Service Not Implemented"},
- {CAUSE_CIDunknown, "Caller Identity unknown"},
- {CAUSE_CIDinUse, "Caller Identity in Use"},
- {CAUSE_NoChans, "No Channels available"},
- {CAUSE_FacNotImpl, "Facility Not Implemented"},
- {CAUSE_FacNotSubscr, "Facility Not Subscribed"},
- {CAUSE_OutgoingBarred, "Outgoing calls barred"},
- {CAUSE_UserAccessBusy, "User Access Busy"},
- {CAUSE_NegativeGBG, "Negative GBG"},
- {CAUSE_UnknownGBG, "Unknown GBG"},
- {CAUSE_NoSPVknown, "No SPV known"},
- {CAUSE_DestNotObtain, "Destination not obtainable"},
- {CAUSE_NumberChanged, "Number changed"},
- {CAUSE_OutOfOrder, "Out Of Order"},
- {CAUSE_NoUserResponse, "No User Response"},
- {CAUSE_UserBusy, "User Busy"},
- {CAUSE_IncomingBarred, "Incoming Barred"},
- {CAUSE_CallRejected, "Call Rejected"},
- {CAUSE_NetworkCongestion, "Network Congestion"},
- {CAUSE_RemoteUser, "Remote User initiated"},
- {CAUSE_LocalProcErr, "Local Procedure Error"},
- {CAUSE_RemoteProcErr, "Remote Procedure Error"},
- {CAUSE_RemoteUserSuspend, "Remote User Suspend"},
- {CAUSE_RemoteUserResumed, "Remote User Resumed"},
- {CAUSE_UserInfoDiscarded, "User Info Discarded"}
-};
-
-static int cause_1tr6_len = ARRAY_SIZE(cause_1tr6);
-
-static int
-prcause_1tr6(char *dest, u_char *p)
-{
- char *dp = dest;
- int i, cause;
-
- p++;
- if (0 == *p) {
- dp += sprintf(dp, " OK (cause length=0)\n");
- return (dp - dest);
- } else if (*p > 1) {
- dp += sprintf(dp, " coding ");
- dp += prbits(dp, p[2], 7, 2);
- dp += sprintf(dp, " location ");
- dp += prbits(dp, p[2], 4, 4);
- *dp++ = '\n';
- }
- p++;
- cause = 0x7f & *p;
-
- /* locate cause value */
- for (i = 0; i < cause_1tr6_len; i++)
- if (cause_1tr6[i].nr == cause)
- break;
-
- /* display cause value if it exists */
- if (i == cause_1tr6_len)
- dp += sprintf(dp, "Unknown cause type %x!\n", cause);
- else
- dp += sprintf(dp, " cause value %x : %s \n", cause, cause_1tr6[i].descr);
-
- return (dp - dest);
-
-}
-
-static int
-prchident(char *dest, u_char *p)
-{
- char *dp = dest;
-
- p += 2;
- dp += sprintf(dp, " octet 3 ");
- dp += prbits(dp, *p, 8, 8);
- *dp++ = '\n';
- return (dp - dest);
-}
-
-static int
-prcalled(char *dest, u_char *p)
-{
- int l;
- char *dp = dest;
-
- p++;
- l = *p++ - 1;
- dp += sprintf(dp, " octet 3 ");
- dp += prbits(dp, *p++, 8, 8);
- *dp++ = '\n';
- dp += sprintf(dp, " number digits ");
- while (l--)
- *dp++ = *p++;
- *dp++ = '\n';
- return (dp - dest);
-}
-static int
-prcalling(char *dest, u_char *p)
-{
- int l;
- char *dp = dest;
-
- p++;
- l = *p++ - 1;
- dp += sprintf(dp, " octet 3 ");
- dp += prbits(dp, *p, 8, 8);
- *dp++ = '\n';
- if (!(*p & 0x80)) {
- dp += sprintf(dp, " octet 3a ");
- dp += prbits(dp, *++p, 8, 8);
- *dp++ = '\n';
- l--;
- }
- p++;
-
- dp += sprintf(dp, " number digits ");
- while (l--)
- *dp++ = *p++;
- *dp++ = '\n';
- return (dp - dest);
-}
-
-static
-int
-prbearer(char *dest, u_char *p)
-{
- char *dp = dest, ch;
-
- p += 2;
- dp += sprintf(dp, " octet 3 ");
- dp += prbits(dp, *p++, 8, 8);
- *dp++ = '\n';
- dp += sprintf(dp, " octet 4 ");
- dp += prbits(dp, *p, 8, 8);
- *dp++ = '\n';
- if ((*p++ & 0x1f) == 0x18) {
- dp += sprintf(dp, " octet 4.1 ");
- dp += prbits(dp, *p++, 8, 8);
- *dp++ = '\n';
- }
- /* check for user information layer 1 */
- if ((*p & 0x60) == 0x20) {
- ch = ' ';
- do {
- dp += sprintf(dp, " octet 5%c ", ch);
- dp += prbits(dp, *p, 8, 8);
- *dp++ = '\n';
- if (ch == ' ')
- ch = 'a';
- else
- ch++;
- }
- while (!(*p++ & 0x80));
- }
- /* check for user information layer 2 */
- if ((*p & 0x60) == 0x40) {
- dp += sprintf(dp, " octet 6 ");
- dp += prbits(dp, *p++, 8, 8);
- *dp++ = '\n';
- }
- /* check for user information layer 3 */
- if ((*p & 0x60) == 0x60) {
- dp += sprintf(dp, " octet 7 ");
- dp += prbits(dp, *p++, 8, 8);
- *dp++ = '\n';
- }
- return (dp - dest);
-}
-
-
-static
-int
-prbearer_ni1(char *dest, u_char *p)
-{
- char *dp = dest;
- u_char len;
-
- p++;
- len = *p++;
- dp += sprintf(dp, " octet 3 ");
- dp += prbits(dp, *p, 8, 8);
- switch (*p++) {
- case 0x80:
- dp += sprintf(dp, " Speech");
- break;
- case 0x88:
- dp += sprintf(dp, " Unrestricted digital information");
- break;
- case 0x90:
- dp += sprintf(dp, " 3.1 kHz audio");
- break;
- default:
- dp += sprintf(dp, " Unknown information-transfer capability");
- }
- *dp++ = '\n';
- dp += sprintf(dp, " octet 4 ");
- dp += prbits(dp, *p, 8, 8);
- switch (*p++) {
- case 0x90:
- dp += sprintf(dp, " 64 kbps, circuit mode");
- break;
- case 0xc0:
- dp += sprintf(dp, " Packet mode");
- break;
- default:
- dp += sprintf(dp, " Unknown transfer mode");
- }
- *dp++ = '\n';
- if (len > 2) {
- dp += sprintf(dp, " octet 5 ");
- dp += prbits(dp, *p, 8, 8);
- switch (*p++) {
- case 0x21:
- dp += sprintf(dp, " Rate adaption\n");
- dp += sprintf(dp, " octet 5a ");
- dp += prbits(dp, *p, 8, 8);
- break;
- case 0xa2:
- dp += sprintf(dp, " u-law");
- break;
- default:
- dp += sprintf(dp, " Unknown UI layer 1 protocol");
- }
- *dp++ = '\n';
- }
- return (dp - dest);
-}
-
-static int
-general(char *dest, u_char *p)
-{
- char *dp = dest;
- char ch = ' ';
- int l, octet = 3;
-
- p++;
- l = *p++;
- /* Iterate over all octets in the information element */
- while (l--) {
- dp += sprintf(dp, " octet %d%c ", octet, ch);
- dp += prbits(dp, *p++, 8, 8);
- *dp++ = '\n';
-
- /* last octet in group? */
- if (*p & 0x80) {
- octet++;
- ch = ' ';
- } else if (ch == ' ')
- ch = 'a';
- else
- ch++;
- }
- return (dp - dest);
-}
-
-static int
-general_ni1(char *dest, u_char *p)
-{
- char *dp = dest;
- char ch = ' ';
- int l, octet = 3;
-
- p++;
- l = *p++;
- /* Iterate over all octets in the information element */
- while (l--) {
- dp += sprintf(dp, " octet %d%c ", octet, ch);
- dp += prbits(dp, *p, 8, 8);
- *dp++ = '\n';
-
- /* last octet in group? */
- if (*p++ & 0x80) {
- octet++;
- ch = ' ';
- } else if (ch == ' ')
- ch = 'a';
- else
- ch++;
- }
- return (dp - dest);
-}
-
-static int
-prcharge(char *dest, u_char *p)
-{
- char *dp = dest;
- int l;
-
- p++;
- l = *p++ - 1;
- dp += sprintf(dp, " GEA ");
- dp += prbits(dp, *p++, 8, 8);
- dp += sprintf(dp, " Anzahl: ");
-	/* Iterate over all octets in the information element */
- while (l--)
- *dp++ = *p++;
- *dp++ = '\n';
- return (dp - dest);
-}
-static int
-prtext(char *dest, u_char *p)
-{
- char *dp = dest;
- int l;
-
- p++;
- l = *p++;
- dp += sprintf(dp, " ");
-	/* Iterate over all octets in the information element */
- while (l--)
- *dp++ = *p++;
- *dp++ = '\n';
- return (dp - dest);
-}
-
-static int
-prfeatureind(char *dest, u_char *p)
-{
- char *dp = dest;
-
- p += 2; /* skip id, len */
- dp += sprintf(dp, " octet 3 ");
- dp += prbits(dp, *p, 8, 8);
- *dp++ = '\n';
- if (!(*p++ & 0x80)) {
- dp += sprintf(dp, " octet 4 ");
- dp += prbits(dp, *p++, 8, 8);
- *dp++ = '\n';
- }
- dp += sprintf(dp, " Status: ");
- switch (*p) {
- case 0:
- dp += sprintf(dp, "Idle");
- break;
- case 1:
- dp += sprintf(dp, "Active");
- break;
- case 2:
- dp += sprintf(dp, "Prompt");
- break;
- case 3:
- dp += sprintf(dp, "Pending");
- break;
- default:
- dp += sprintf(dp, "(Reserved)");
- break;
- }
- *dp++ = '\n';
- return (dp - dest);
-}
-
-static
-struct DTag { /* Display tags */
- u_char nr;
- char *descr;
-} dtaglist[] = {
- { 0x82, "Continuation" },
- { 0x83, "Called address" },
- { 0x84, "Cause" },
- { 0x85, "Progress indicator" },
- { 0x86, "Notification indicator" },
- { 0x87, "Prompt" },
-	{ 0x88, "Accumulated digits" },
- { 0x89, "Status" },
- { 0x8a, "Inband" },
- { 0x8b, "Calling address" },
- { 0x8c, "Reason" },
- { 0x8d, "Calling party name" },
- { 0x8e, "Called party name" },
- { 0x8f, "Original called name" },
- { 0x90, "Redirecting name" },
- { 0x91, "Connected name" },
- { 0x92, "Originating restrictions" },
- { 0x93, "Date & time of day" },
- { 0x94, "Call Appearance ID" },
- { 0x95, "Feature address" },
- { 0x96, "Redirection name" },
- { 0x9e, "Text" },
-};
-#define DTAGSIZE ARRAY_SIZE(dtaglist)
-
-static int
-disptext_ni1(char *dest, u_char *p)
-{
- char *dp = dest;
- int l, tag, len, i;
-
- p++;
- l = *p++ - 1;
- if (*p++ != 0x80) {
- dp += sprintf(dp, " Unknown display type\n");
- return (dp - dest);
- }
- /* Iterate over all tag,length,text fields */
- while (l > 0) {
- tag = *p++;
- len = *p++;
- l -= len + 2;
- /* Don't space or skip */
- if ((tag == 0x80) || (tag == 0x81)) p++;
- else {
- for (i = 0; i < DTAGSIZE; i++)
- if (tag == dtaglist[i].nr)
- break;
-
-			/* When found, give appropriate msg */
- if (i != DTAGSIZE) {
- dp += sprintf(dp, " %s: ", dtaglist[i].descr);
- while (len--)
- *dp++ = *p++;
- } else {
- dp += sprintf(dp, " (unknown display tag %2x): ", tag);
- while (len--)
- *dp++ = *p++;
- }
- dp += sprintf(dp, "\n");
- }
- }
- return (dp - dest);
-}
-static int
-display(char *dest, u_char *p)
-{
- char *dp = dest;
- char ch = ' ';
- int l, octet = 3;
-
- p++;
- l = *p++;
-	/* Iterate over all octets in the display-information element */
- dp += sprintf(dp, " \"");
- while (l--) {
- dp += sprintf(dp, "%c", *p++);
-
- /* last octet in group? */
- if (*p & 0x80) {
- octet++;
- ch = ' ';
- } else if (ch == ' ')
- ch = 'a';
-
- else
- ch++;
- }
- *dp++ = '\"';
- *dp++ = '\n';
- return (dp - dest);
-}
-
-static int
-prfacility(char *dest, u_char *p)
-{
- char *dp = dest;
- int l, l2;
-
- p++;
- l = *p++;
- dp += sprintf(dp, " octet 3 ");
- dp += prbits(dp, *p++, 8, 8);
- dp += sprintf(dp, "\n");
- l -= 1;
-
- while (l > 0) {
- dp += sprintf(dp, " octet 4 ");
- dp += prbits(dp, *p++, 8, 8);
- dp += sprintf(dp, "\n");
- dp += sprintf(dp, " octet 5 %d\n", l2 = *p++ & 0x7f);
- l -= 2;
- dp += sprintf(dp, " contents ");
- while (l2--) {
- dp += sprintf(dp, "%2x ", *p++);
- l--;
- }
- dp += sprintf(dp, "\n");
- }
-
- return (dp - dest);
-}
-
-static
-struct InformationElement {
- u_char nr;
- char *descr;
- int (*f) (char *, u_char *);
-} ielist[] = {
-
- {
- 0x00, "Segmented message", general
- },
- {
- 0x04, "Bearer capability", prbearer
- },
- {
- 0x08, "Cause", prcause
- },
- {
- 0x10, "Call identity", general
- },
- {
- 0x14, "Call state", general
- },
- {
- 0x18, "Channel identification", prchident
- },
- {
- 0x1c, "Facility", prfacility
- },
- {
- 0x1e, "Progress indicator", general
- },
- {
- 0x20, "Network-specific facilities", general
- },
- {
- 0x27, "Notification indicator", general
- },
- {
- 0x28, "Display", display
- },
- {
- 0x29, "Date/Time", general
- },
- {
- 0x2c, "Keypad facility", general
- },
- {
- 0x34, "Signal", general
- },
- {
- 0x40, "Information rate", general
- },
- {
- 0x42, "End-to-end delay", general
- },
- {
- 0x43, "Transit delay selection and indication", general
- },
- {
- 0x44, "Packet layer binary parameters", general
- },
- {
- 0x45, "Packet layer window size", general
- },
- {
- 0x46, "Packet size", general
- },
- {
- 0x47, "Closed user group", general
- },
- {
- 0x4a, "Reverse charge indication", general
- },
- {
- 0x6c, "Calling party number", prcalling
- },
- {
- 0x6d, "Calling party subaddress", general
- },
- {
- 0x70, "Called party number", prcalled
- },
- {
- 0x71, "Called party subaddress", general
- },
- {
- 0x74, "Redirecting number", general
- },
- {
- 0x78, "Transit network selection", general
- },
- {
- 0x79, "Restart indicator", general
- },
- {
- 0x7c, "Low layer compatibility", general
- },
- {
- 0x7d, "High layer compatibility", general
- },
- {
- 0x7e, "User-user", general
- },
- {
- 0x7f, "Escape for extension", general
- },
-};
-
-
-#define IESIZE ARRAY_SIZE(ielist)
-
-static
-struct InformationElement ielist_ni1[] = {
- { 0x04, "Bearer Capability", prbearer_ni1 },
- { 0x08, "Cause", prcause },
- { 0x14, "Call State", general_ni1 },
- { 0x18, "Channel Identification", prchident },
- { 0x1e, "Progress Indicator", general_ni1 },
- { 0x27, "Notification Indicator", general_ni1 },
- { 0x2c, "Keypad Facility", prtext },
- { 0x32, "Information Request", general_ni1 },
- { 0x34, "Signal", general_ni1 },
- { 0x38, "Feature Activation", general_ni1 },
- { 0x39, "Feature Indication", prfeatureind },
- { 0x3a, "Service Profile Identification (SPID)", prtext },
- { 0x3b, "Endpoint Identifier", general_ni1 },
- { 0x6c, "Calling Party Number", prcalling },
- { 0x6d, "Calling Party Subaddress", general_ni1 },
- { 0x70, "Called Party Number", prcalled },
- { 0x71, "Called Party Subaddress", general_ni1 },
- { 0x74, "Redirecting Number", general_ni1 },
- { 0x78, "Transit Network Selection", general_ni1 },
- { 0x7c, "Low Layer Compatibility", general_ni1 },
- { 0x7d, "High Layer Compatibility", general_ni1 },
-};
-
-
-#define IESIZE_NI1 ARRAY_SIZE(ielist_ni1)
-
-static
-struct InformationElement ielist_ni1_cs5[] = {
- { 0x1d, "Operator system access", general_ni1 },
- { 0x2a, "Display text", disptext_ni1 },
-};
-
-#define IESIZE_NI1_CS5 ARRAY_SIZE(ielist_ni1_cs5)
-
-static
-struct InformationElement ielist_ni1_cs6[] = {
- { 0x7b, "Call appearance", general_ni1 },
-};
-
-#define IESIZE_NI1_CS6 ARRAY_SIZE(ielist_ni1_cs6)
-
-static struct InformationElement we_0[] =
-{
- {WE0_cause, "Cause", prcause_1tr6},
- {WE0_connAddr, "Connecting Address", prcalled},
- {WE0_callID, "Call IDentity", general},
- {WE0_chanID, "Channel IDentity", general},
- {WE0_netSpecFac, "Network Specific Facility", general},
- {WE0_display, "Display", general},
- {WE0_keypad, "Keypad", general},
- {WE0_origAddr, "Origination Address", prcalled},
- {WE0_destAddr, "Destination Address", prcalled},
- {WE0_userInfo, "User Info", general}
-};
-
-#define WE_0_LEN ARRAY_SIZE(we_0)
-
-static struct InformationElement we_6[] =
-{
- {WE6_serviceInd, "Service Indicator", general},
- {WE6_chargingInfo, "Charging Information", prcharge},
- {WE6_date, "Date", prtext},
- {WE6_facSelect, "Facility Select", general},
- {WE6_facStatus, "Facility Status", general},
- {WE6_statusCalled, "Status Called", general},
- {WE6_addTransAttr, "Additional Transmission Attributes", general}
-};
-#define WE_6_LEN ARRAY_SIZE(we_6)
-
-int
-QuickHex(char *txt, u_char *p, int cnt)
-{
- register int i;
- register char *t = txt;
-
- for (i = 0; i < cnt; i++) {
- *t++ = ' ';
- *t++ = hex_asc_hi(p[i]);
- *t++ = hex_asc_lo(p[i]);
- }
- *t++ = 0;
- return (t - txt);
-}
-
-void
-LogFrame(struct IsdnCardState *cs, u_char *buf, int size)
-{
- char *dp;
-
- if (size < 1)
- return;
- dp = cs->dlog;
- if (size < MAX_DLOG_SPACE / 3 - 10) {
- *dp++ = 'H';
- *dp++ = 'E';
- *dp++ = 'X';
- *dp++ = ':';
- dp += QuickHex(dp, buf, size);
- dp--;
- *dp++ = '\n';
- *dp = 0;
- HiSax_putstatus(cs, NULL, cs->dlog);
- } else
- HiSax_putstatus(cs, "LogFrame: ", "warning Frame too big (%d)", size);
-}
-
-void
-dlogframe(struct IsdnCardState *cs, struct sk_buff *skb, int dir)
-{
- u_char *bend, *buf;
- char *dp;
- unsigned char pd, cr_l, cr, mt;
- unsigned char sapi, tei, ftyp;
- int i, cset = 0, cs_old = 0, cs_fest = 0;
- int size, finish = 0;
-
- if (skb->len < 3)
- return;
- /* display header */
- dp = cs->dlog;
- dp += jiftime(dp, jiffies);
- *dp++ = ' ';
- sapi = skb->data[0] >> 2;
- tei = skb->data[1] >> 1;
- ftyp = skb->data[2];
- buf = skb->data;
- dp += sprintf(dp, "frame %s ", dir ? "network->user" : "user->network");
- size = skb->len;
-
- if (tei == GROUP_TEI) {
- if (sapi == CTRL_SAPI) { /* sapi 0 */
- if (ftyp == 3) {
- dp += sprintf(dp, "broadcast\n");
- buf += 3;
- size -= 3;
- } else {
- dp += sprintf(dp, "no UI broadcast\n");
- finish = 1;
- }
- } else if (sapi == TEI_SAPI) {
- dp += sprintf(dp, "tei management\n");
- finish = 1;
- } else {
- dp += sprintf(dp, "unknown sapi %d broadcast\n", sapi);
- finish = 1;
- }
- } else {
- if (sapi == CTRL_SAPI) {
- if (!(ftyp & 1)) { /* IFrame */
- dp += sprintf(dp, "with tei %d\n", tei);
- buf += 4;
- size -= 4;
- } else {
- dp += sprintf(dp, "SFrame with tei %d\n", tei);
- finish = 1;
- }
- } else {
- dp += sprintf(dp, "unknown sapi %d tei %d\n", sapi, tei);
- finish = 1;
- }
- }
- bend = skb->data + skb->len;
- if (buf >= bend) {
- dp += sprintf(dp, "frame too short\n");
- finish = 1;
- }
- if (finish) {
- *dp = 0;
- HiSax_putstatus(cs, NULL, cs->dlog);
- return;
- }
- if ((0xfe & buf[0]) == PROTO_DIS_N0) { /* 1TR6 */
- /* locate message type */
- pd = *buf++;
- cr_l = *buf++;
- if (cr_l)
- cr = *buf++;
- else
- cr = 0;
- mt = *buf++;
- if (pd == PROTO_DIS_N0) { /* N0 */
- for (i = 0; i < MT_N0_LEN; i++)
- if (mt_n0[i].nr == mt)
- break;
- /* display message type if it exists */
- if (i == MT_N0_LEN)
- dp += sprintf(dp, "callref %d %s size %d unknown message type N0 %x!\n",
- cr & 0x7f, (cr & 0x80) ? "called" : "caller",
- size, mt);
- else
- dp += sprintf(dp, "callref %d %s size %d message type %s\n",
- cr & 0x7f, (cr & 0x80) ? "called" : "caller",
- size, mt_n0[i].descr);
- } else { /* N1 */
- for (i = 0; i < MT_N1_LEN; i++)
- if (mt_n1[i].nr == mt)
- break;
- /* display message type if it exists */
- if (i == MT_N1_LEN)
- dp += sprintf(dp, "callref %d %s size %d unknown message type N1 %x!\n",
- cr & 0x7f, (cr & 0x80) ? "called" : "caller",
- size, mt);
- else
- dp += sprintf(dp, "callref %d %s size %d message type %s\n",
- cr & 0x7f, (cr & 0x80) ? "called" : "caller",
- size, mt_n1[i].descr);
- }
-
- /* display each information element */
- while (buf < bend) {
- /* Is it a single octet information element? */
- if (*buf & 0x80) {
- switch ((*buf >> 4) & 7) {
- case 1:
- dp += sprintf(dp, " Shift %x\n", *buf & 0xf);
- cs_old = cset;
- cset = *buf & 7;
- cs_fest = *buf & 8;
- break;
- case 3:
- dp += sprintf(dp, " Congestion level %x\n", *buf & 0xf);
- break;
- case 2:
- if (*buf == 0xa0) {
- dp += sprintf(dp, " More data\n");
- break;
- }
- if (*buf == 0xa1) {
- dp += sprintf(dp, " Sending complete\n");
- }
- break;
- /* fall through */
- default:
- dp += sprintf(dp, " Reserved %x\n", *buf);
- break;
- }
- buf++;
- continue;
- }
- /* No, locate it in the table */
- if (cset == 0) {
- for (i = 0; i < WE_0_LEN; i++)
- if (*buf == we_0[i].nr)
- break;
-
- /* When found, give appropriate msg */
- if (i != WE_0_LEN) {
- dp += sprintf(dp, " %s\n", we_0[i].descr);
- dp += we_0[i].f(dp, buf);
- } else
- dp += sprintf(dp, " Codeset %d attribute %x attribute size %d\n", cset, *buf, buf[1]);
- } else if (cset == 6) {
- for (i = 0; i < WE_6_LEN; i++)
- if (*buf == we_6[i].nr)
- break;
-
- /* When found, give appropriate msg */
- if (i != WE_6_LEN) {
- dp += sprintf(dp, " %s\n", we_6[i].descr);
- dp += we_6[i].f(dp, buf);
- } else
- dp += sprintf(dp, " Codeset %d attribute %x attribute size %d\n", cset, *buf, buf[1]);
- } else
- dp += sprintf(dp, " Unknown Codeset %d attribute %x attribute size %d\n", cset, *buf, buf[1]);
- /* Skip to next element */
- if (cs_fest == 8) {
- cset = cs_old;
- cs_old = 0;
- cs_fest = 0;
- }
- buf += buf[1] + 2;
- }
- } else if ((buf[0] == 8) && (cs->protocol == ISDN_PTYPE_NI1)) { /* NI-1 */
- /* locate message type */
- buf++;
- cr_l = *buf++;
- if (cr_l)
- cr = *buf++;
- else
- cr = 0;
- mt = *buf++;
- for (i = 0; i < MTSIZE; i++)
- if (mtlist[i].nr == mt)
- break;
-
- /* display message type if it exists */
- if (i == MTSIZE)
- dp += sprintf(dp, "callref %d %s size %d unknown message type %x!\n",
- cr & 0x7f, (cr & 0x80) ? "called" : "caller",
- size, mt);
- else
- dp += sprintf(dp, "callref %d %s size %d message type %s\n",
- cr & 0x7f, (cr & 0x80) ? "called" : "caller",
- size, mtlist[i].descr);
-
- /* display each information element */
- while (buf < bend) {
- /* Is it a single octet information element? */
- if (*buf & 0x80) {
- switch ((*buf >> 4) & 7) {
- case 1:
- dp += sprintf(dp, " Shift %x\n", *buf & 0xf);
- cs_old = cset;
- cset = *buf & 7;
- cs_fest = *buf & 8;
- break;
- default:
- dp += sprintf(dp, " Unknown single-octet IE %x\n", *buf);
- break;
- }
- buf++;
- continue;
- }
- /* No, locate it in the table */
- if (cset == 0) {
- for (i = 0; i < IESIZE_NI1; i++)
- if (*buf == ielist_ni1[i].nr)
- break;
-
-				/* When found, give appropriate msg */
- if (i != IESIZE_NI1) {
- dp += sprintf(dp, " %s\n", ielist_ni1[i].descr);
- dp += ielist_ni1[i].f(dp, buf);
- } else
- dp += sprintf(dp, " attribute %x attribute size %d\n", *buf, buf[1]);
- } else if (cset == 5) {
- for (i = 0; i < IESIZE_NI1_CS5; i++)
- if (*buf == ielist_ni1_cs5[i].nr)
- break;
-
-				/* When found, give appropriate msg */
- if (i != IESIZE_NI1_CS5) {
- dp += sprintf(dp, " %s\n", ielist_ni1_cs5[i].descr);
- dp += ielist_ni1_cs5[i].f(dp, buf);
- } else
- dp += sprintf(dp, " attribute %x attribute size %d\n", *buf, buf[1]);
- } else if (cset == 6) {
- for (i = 0; i < IESIZE_NI1_CS6; i++)
- if (*buf == ielist_ni1_cs6[i].nr)
- break;
-
-				/* When found, give appropriate msg */
- if (i != IESIZE_NI1_CS6) {
- dp += sprintf(dp, " %s\n", ielist_ni1_cs6[i].descr);
- dp += ielist_ni1_cs6[i].f(dp, buf);
- } else
- dp += sprintf(dp, " attribute %x attribute size %d\n", *buf, buf[1]);
- } else
- dp += sprintf(dp, " Unknown Codeset %d attribute %x attribute size %d\n", cset, *buf, buf[1]);
-
- /* Skip to next element */
- if (cs_fest == 8) {
- cset = cs_old;
- cs_old = 0;
- cs_fest = 0;
- }
- buf += buf[1] + 2;
- }
- } else if ((buf[0] == 8) && (cs->protocol == ISDN_PTYPE_EURO)) { /* EURO */
- /* locate message type */
- buf++;
- cr_l = *buf++;
- if (cr_l)
- cr = *buf++;
- else
- cr = 0;
- mt = *buf++;
- for (i = 0; i < MTSIZE; i++)
- if (mtlist[i].nr == mt)
- break;
-
- /* display message type if it exists */
- if (i == MTSIZE)
- dp += sprintf(dp, "callref %d %s size %d unknown message type %x!\n",
- cr & 0x7f, (cr & 0x80) ? "called" : "caller",
- size, mt);
- else
- dp += sprintf(dp, "callref %d %s size %d message type %s\n",
- cr & 0x7f, (cr & 0x80) ? "called" : "caller",
- size, mtlist[i].descr);
-
- /* display each information element */
- while (buf < bend) {
- /* Is it a single octet information element? */
- if (*buf & 0x80) {
- switch ((*buf >> 4) & 7) {
- case 1:
- dp += sprintf(dp, " Shift %x\n", *buf & 0xf);
- break;
- case 3:
- dp += sprintf(dp, " Congestion level %x\n", *buf & 0xf);
- break;
- case 5:
- dp += sprintf(dp, " Repeat indicator %x\n", *buf & 0xf);
- break;
- case 2:
- if (*buf == 0xa0) {
- dp += sprintf(dp, " More data\n");
- break;
- }
- if (*buf == 0xa1) {
- dp += sprintf(dp, " Sending complete\n");
- }
- break;
- /* fall through */
- default:
- dp += sprintf(dp, " Reserved %x\n", *buf);
- break;
- }
- buf++;
- continue;
- }
- /* No, locate it in the table */
- for (i = 0; i < IESIZE; i++)
- if (*buf == ielist[i].nr)
- break;
-
-			/* When found, give appropriate msg */
- if (i != IESIZE) {
- dp += sprintf(dp, " %s\n", ielist[i].descr);
- dp += ielist[i].f(dp, buf);
- } else
- dp += sprintf(dp, " attribute %x attribute size %d\n", *buf, buf[1]);
-
- /* Skip to next element */
- buf += buf[1] + 2;
- }
- } else {
- dp += sprintf(dp, "Unknown protocol %x!", buf[0]);
- }
- *dp = 0;
- HiSax_putstatus(cs, NULL, cs->dlog);
-}
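Most of the q931.c decoder above reduces to one pattern: match a numeric code against a static description table and sprintf() the result into the debug buffer, falling back to an "unknown" line when no entry matches (prcause(), prcause_1tr6(), and the message-type and information-element scans in dlogframe() all work this way). A compact sketch of that lookup, using a hypothetical two-entry cause table rather than the full cvlist:

#include <stdio.h>
#include <stddef.h>

struct code_descr {
	unsigned char nr;
	const char *descr;
};

/* illustrative excerpt; the real cvlist above has many more entries */
static const struct code_descr causes[] = {
	{ 0x10, "Normal call clearing" },
	{ 0x11, "User busy" },
};

static int print_cause(char *dest, unsigned char cause)
{
	size_t i;

	for (i = 0; i < sizeof(causes) / sizeof(causes[0]); i++)
		if (causes[i].nr == cause)
			return sprintf(dest, "  cause value %x : %s\n",
				       cause, causes[i].descr);
	return sprintf(dest, "Unknown cause type %x!\n", cause);
}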
diff --git a/drivers/isdn/hisax/s0box.c b/drivers/isdn/hisax/s0box.c
deleted file mode 100644
index 4e7d0aa227ad..000000000000
--- a/drivers/isdn/hisax/s0box.c
+++ /dev/null
@@ -1,260 +0,0 @@
-/* $Id: s0box.c,v 2.6.2.4 2004/01/13 23:48:39 keil Exp $
- *
- * low level stuff for Creatix S0BOX
- *
- * Author Enrik Berkhan
- * Copyright by Enrik Berkhan <enrik@starfleet.inka.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#include <linux/init.h>
-#include "hisax.h"
-#include "isac.h"
-#include "hscx.h"
-#include "isdnl1.h"
-
-static const char *s0box_revision = "$Revision: 2.6.2.4 $";
-
-static inline void
-writereg(unsigned int padr, signed int addr, u_char off, u_char val) {
- outb_p(0x1c, padr + 2);
- outb_p(0x14, padr + 2);
- outb_p((addr + off) & 0x7f, padr);
- outb_p(0x16, padr + 2);
- outb_p(val, padr);
- outb_p(0x17, padr + 2);
- outb_p(0x14, padr + 2);
- outb_p(0x1c, padr + 2);
-}
-
-static u_char nibtab[] = { 1, 9, 5, 0xd, 3, 0xb, 7, 0xf,
- 0, 0, 0, 0, 0, 0, 0, 0,
- 0, 8, 4, 0xc, 2, 0xa, 6, 0xe };
-
-static inline u_char
-readreg(unsigned int padr, signed int addr, u_char off) {
- register u_char n1, n2;
-
- outb_p(0x1c, padr + 2);
- outb_p(0x14, padr + 2);
- outb_p((addr + off) | 0x80, padr);
- outb_p(0x16, padr + 2);
- outb_p(0x17, padr + 2);
- n1 = (inb_p(padr + 1) >> 3) & 0x17;
- outb_p(0x16, padr + 2);
- n2 = (inb_p(padr + 1) >> 3) & 0x17;
- outb_p(0x14, padr + 2);
- outb_p(0x1c, padr + 2);
- return nibtab[n1] | (nibtab[n2] << 4);
-}
-
-static inline void
-read_fifo(unsigned int padr, signed int adr, u_char *data, int size)
-{
- int i;
- register u_char n1, n2;
-
- outb_p(0x1c, padr + 2);
- outb_p(0x14, padr + 2);
- outb_p(adr | 0x80, padr);
- outb_p(0x16, padr + 2);
- for (i = 0; i < size; i++) {
- outb_p(0x17, padr + 2);
- n1 = (inb_p(padr + 1) >> 3) & 0x17;
- outb_p(0x16, padr + 2);
- n2 = (inb_p(padr + 1) >> 3) & 0x17;
- *(data++) = nibtab[n1] | (nibtab[n2] << 4);
- }
- outb_p(0x14, padr + 2);
- outb_p(0x1c, padr + 2);
- return;
-}
-
-static inline void
-write_fifo(unsigned int padr, signed int adr, u_char *data, int size)
-{
- int i;
- outb_p(0x1c, padr + 2);
- outb_p(0x14, padr + 2);
- outb_p(adr & 0x7f, padr);
- for (i = 0; i < size; i++) {
- outb_p(0x16, padr + 2);
- outb_p(*(data++), padr);
- outb_p(0x17, padr + 2);
- }
- outb_p(0x14, padr + 2);
- outb_p(0x1c, padr + 2);
- return;
-}
-
-/* Interface functions */
-
-static u_char
-ReadISAC(struct IsdnCardState *cs, u_char offset)
-{
- return (readreg(cs->hw.teles3.cfg_reg, cs->hw.teles3.isac, offset));
-}
-
-static void
-WriteISAC(struct IsdnCardState *cs, u_char offset, u_char value)
-{
- writereg(cs->hw.teles3.cfg_reg, cs->hw.teles3.isac, offset, value);
-}
-
-static void
-ReadISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- read_fifo(cs->hw.teles3.cfg_reg, cs->hw.teles3.isacfifo, data, size);
-}
-
-static void
-WriteISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- write_fifo(cs->hw.teles3.cfg_reg, cs->hw.teles3.isacfifo, data, size);
-}
-
-static u_char
-ReadHSCX(struct IsdnCardState *cs, int hscx, u_char offset)
-{
- return (readreg(cs->hw.teles3.cfg_reg, cs->hw.teles3.hscx[hscx], offset));
-}
-
-static void
-WriteHSCX(struct IsdnCardState *cs, int hscx, u_char offset, u_char value)
-{
- writereg(cs->hw.teles3.cfg_reg, cs->hw.teles3.hscx[hscx], offset, value);
-}
-
-/*
- * fast interrupt HSCX stuff goes here
- */
-
-#define READHSCX(cs, nr, reg) readreg(cs->hw.teles3.cfg_reg, cs->hw.teles3.hscx[nr], reg)
-#define WRITEHSCX(cs, nr, reg, data) writereg(cs->hw.teles3.cfg_reg, cs->hw.teles3.hscx[nr], reg, data)
-#define READHSCXFIFO(cs, nr, ptr, cnt) read_fifo(cs->hw.teles3.cfg_reg, cs->hw.teles3.hscxfifo[nr], ptr, cnt)
-#define WRITEHSCXFIFO(cs, nr, ptr, cnt) write_fifo(cs->hw.teles3.cfg_reg, cs->hw.teles3.hscxfifo[nr], ptr, cnt)
-
-#include "hscx_irq.c"
-
-static irqreturn_t
-s0box_interrupt(int intno, void *dev_id)
-{
-#define MAXCOUNT 5
- struct IsdnCardState *cs = dev_id;
- u_char val;
- u_long flags;
- int count = 0;
-
- spin_lock_irqsave(&cs->lock, flags);
- val = readreg(cs->hw.teles3.cfg_reg, cs->hw.teles3.hscx[1], HSCX_ISTA);
-Start_HSCX:
- if (val)
- hscx_int_main(cs, val);
- val = readreg(cs->hw.teles3.cfg_reg, cs->hw.teles3.isac, ISAC_ISTA);
-Start_ISAC:
- if (val)
- isac_interrupt(cs, val);
- count++;
- val = readreg(cs->hw.teles3.cfg_reg, cs->hw.teles3.hscx[1], HSCX_ISTA);
- if (val && count < MAXCOUNT) {
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "HSCX IntStat after IntRoutine");
- goto Start_HSCX;
- }
- val = readreg(cs->hw.teles3.cfg_reg, cs->hw.teles3.isac, ISAC_ISTA);
- if (val && count < MAXCOUNT) {
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "ISAC IntStat after IntRoutine");
- goto Start_ISAC;
- }
- if (count >= MAXCOUNT)
- printk(KERN_WARNING "S0Box: more than %d loops in s0box_interrupt\n", count);
- writereg(cs->hw.teles3.cfg_reg, cs->hw.teles3.hscx[0], HSCX_MASK, 0xFF);
- writereg(cs->hw.teles3.cfg_reg, cs->hw.teles3.hscx[1], HSCX_MASK, 0xFF);
- writereg(cs->hw.teles3.cfg_reg, cs->hw.teles3.isac, ISAC_MASK, 0xFF);
- writereg(cs->hw.teles3.cfg_reg, cs->hw.teles3.isac, ISAC_MASK, 0x0);
- writereg(cs->hw.teles3.cfg_reg, cs->hw.teles3.hscx[0], HSCX_MASK, 0x0);
- writereg(cs->hw.teles3.cfg_reg, cs->hw.teles3.hscx[1], HSCX_MASK, 0x0);
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
-}
-
-static void
-release_io_s0box(struct IsdnCardState *cs)
-{
- release_region(cs->hw.teles3.cfg_reg, 8);
-}
-
-static int
-S0Box_card_msg(struct IsdnCardState *cs, int mt, void *arg)
-{
- u_long flags;
-
- switch (mt) {
- case CARD_RESET:
- break;
- case CARD_RELEASE:
- release_io_s0box(cs);
- break;
- case CARD_INIT:
- spin_lock_irqsave(&cs->lock, flags);
- inithscxisac(cs, 3);
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- case CARD_TEST:
- break;
- }
- return (0);
-}
-
-int setup_s0box(struct IsdnCard *card)
-{
- struct IsdnCardState *cs = card->cs;
- char tmp[64];
-
- strcpy(tmp, s0box_revision);
- printk(KERN_INFO "HiSax: S0Box IO driver Rev. %s\n", HiSax_getrev(tmp));
- if (cs->typ != ISDN_CTYPE_S0BOX)
- return (0);
-
- cs->hw.teles3.cfg_reg = card->para[1];
- cs->hw.teles3.hscx[0] = -0x20;
- cs->hw.teles3.hscx[1] = 0x0;
- cs->hw.teles3.isac = 0x20;
- cs->hw.teles3.isacfifo = cs->hw.teles3.isac + 0x3e;
- cs->hw.teles3.hscxfifo[0] = cs->hw.teles3.hscx[0] + 0x3e;
- cs->hw.teles3.hscxfifo[1] = cs->hw.teles3.hscx[1] + 0x3e;
- cs->irq = card->para[0];
- if (!request_region(cs->hw.teles3.cfg_reg, 8, "S0Box parallel I/O")) {
- printk(KERN_WARNING "HiSax: S0Box ports %x-%x already in use\n",
- cs->hw.teles3.cfg_reg,
- cs->hw.teles3.cfg_reg + 7);
- return 0;
- }
- printk(KERN_INFO "HiSax: S0Box config irq:%d isac:0x%x cfg:0x%x\n",
- cs->irq,
- cs->hw.teles3.isac, cs->hw.teles3.cfg_reg);
- printk(KERN_INFO "HiSax: hscx A:0x%x hscx B:0x%x\n",
- cs->hw.teles3.hscx[0], cs->hw.teles3.hscx[1]);
- setup_isac(cs);
- cs->readisac = &ReadISAC;
- cs->writeisac = &WriteISAC;
- cs->readisacfifo = &ReadISACfifo;
- cs->writeisacfifo = &WriteISACfifo;
- cs->BC_Read_Reg = &ReadHSCX;
- cs->BC_Write_Reg = &WriteHSCX;
- cs->BC_Send_Data = &hscx_fill_fifo;
- cs->cardmsg = &S0Box_card_msg;
- cs->irq_func = &s0box_interrupt;
- ISACVersion(cs, "S0Box:");
- if (HscxVersion(cs, "S0Box:")) {
- printk(KERN_WARNING
- "S0Box: wrong HSCX versions check IO address\n");
- release_io_s0box(cs);
- return (0);
- }
- return (1);
-}
diff --git a/drivers/isdn/hisax/saphir.c b/drivers/isdn/hisax/saphir.c
deleted file mode 100644
index db906cb37a3f..000000000000
--- a/drivers/isdn/hisax/saphir.c
+++ /dev/null
@@ -1,296 +0,0 @@
-/* $Id: saphir.c,v 1.10.2.4 2004/01/13 23:48:39 keil Exp $
- *
- * low level stuff for HST Saphir 1
- *
- * Author Karsten Keil
- * Copyright by Karsten Keil <keil@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- * Thanks to HST High Soft Tech GmbH
- *
- */
-
-#include <linux/init.h>
-#include "hisax.h"
-#include "isac.h"
-#include "hscx.h"
-#include "isdnl1.h"
-
-static char *saphir_rev = "$Revision: 1.10.2.4 $";
-
-#define byteout(addr, val) outb(val, addr)
-#define bytein(addr) inb(addr)
-
-#define ISAC_DATA 0
-#define HSCX_DATA 1
-#define ADDRESS_REG 2
-#define IRQ_REG 3
-#define SPARE_REG 4
-#define RESET_REG 5
-
-static inline u_char
-readreg(unsigned int ale, unsigned int adr, u_char off)
-{
- register u_char ret;
-
- byteout(ale, off);
- ret = bytein(adr);
- return (ret);
-}
-
-static inline void
-readfifo(unsigned int ale, unsigned int adr, u_char off, u_char *data, int size)
-{
- byteout(ale, off);
- insb(adr, data, size);
-}
-
-
-static inline void
-writereg(unsigned int ale, unsigned int adr, u_char off, u_char data)
-{
- byteout(ale, off);
- byteout(adr, data);
-}
-
-static inline void
-writefifo(unsigned int ale, unsigned int adr, u_char off, u_char *data, int size)
-{
- byteout(ale, off);
- outsb(adr, data, size);
-}
-
-/* Interface functions */
-
-static u_char
-ReadISAC(struct IsdnCardState *cs, u_char offset)
-{
- return (readreg(cs->hw.saphir.ale, cs->hw.saphir.isac, offset));
-}
-
-static void
-WriteISAC(struct IsdnCardState *cs, u_char offset, u_char value)
-{
- writereg(cs->hw.saphir.ale, cs->hw.saphir.isac, offset, value);
-}
-
-static void
-ReadISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- readfifo(cs->hw.saphir.ale, cs->hw.saphir.isac, 0, data, size);
-}
-
-static void
-WriteISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- writefifo(cs->hw.saphir.ale, cs->hw.saphir.isac, 0, data, size);
-}
-
-static u_char
-ReadHSCX(struct IsdnCardState *cs, int hscx, u_char offset)
-{
- return (readreg(cs->hw.saphir.ale, cs->hw.saphir.hscx,
- offset + (hscx ? 0x40 : 0)));
-}
-
-static void
-WriteHSCX(struct IsdnCardState *cs, int hscx, u_char offset, u_char value)
-{
- writereg(cs->hw.saphir.ale, cs->hw.saphir.hscx,
- offset + (hscx ? 0x40 : 0), value);
-}
-
-#define READHSCX(cs, nr, reg) readreg(cs->hw.saphir.ale, \
- cs->hw.saphir.hscx, reg + (nr ? 0x40 : 0))
-#define WRITEHSCX(cs, nr, reg, data) writereg(cs->hw.saphir.ale, \
- cs->hw.saphir.hscx, reg + (nr ? 0x40 : 0), data)
-
-#define READHSCXFIFO(cs, nr, ptr, cnt) readfifo(cs->hw.saphir.ale, \
- cs->hw.saphir.hscx, (nr ? 0x40 : 0), ptr, cnt)
-
-#define WRITEHSCXFIFO(cs, nr, ptr, cnt) writefifo(cs->hw.saphir.ale, \
- cs->hw.saphir.hscx, (nr ? 0x40 : 0), ptr, cnt)
-
-#include "hscx_irq.c"
-
-static irqreturn_t
-saphir_interrupt(int intno, void *dev_id)
-{
- struct IsdnCardState *cs = dev_id;
- u_char val;
- u_long flags;
-
- spin_lock_irqsave(&cs->lock, flags);
- val = readreg(cs->hw.saphir.ale, cs->hw.saphir.hscx, HSCX_ISTA + 0x40);
-Start_HSCX:
- if (val)
- hscx_int_main(cs, val);
- val = readreg(cs->hw.saphir.ale, cs->hw.saphir.isac, ISAC_ISTA);
-Start_ISAC:
- if (val)
- isac_interrupt(cs, val);
- val = readreg(cs->hw.saphir.ale, cs->hw.saphir.hscx, HSCX_ISTA + 0x40);
- if (val) {
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "HSCX IntStat after IntRoutine");
- goto Start_HSCX;
- }
- val = readreg(cs->hw.saphir.ale, cs->hw.saphir.isac, ISAC_ISTA);
- if (val) {
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "ISAC IntStat after IntRoutine");
- goto Start_ISAC;
- }
- /* Watchdog */
- if (cs->hw.saphir.timer.function)
- mod_timer(&cs->hw.saphir.timer, jiffies + 1 * HZ);
- else
- printk(KERN_WARNING "saphir: Spurious timer!\n");
- writereg(cs->hw.saphir.ale, cs->hw.saphir.hscx, HSCX_MASK, 0xFF);
- writereg(cs->hw.saphir.ale, cs->hw.saphir.hscx, HSCX_MASK + 0x40, 0xFF);
- writereg(cs->hw.saphir.ale, cs->hw.saphir.isac, ISAC_MASK, 0xFF);
- writereg(cs->hw.saphir.ale, cs->hw.saphir.isac, ISAC_MASK, 0);
- writereg(cs->hw.saphir.ale, cs->hw.saphir.hscx, HSCX_MASK, 0);
- writereg(cs->hw.saphir.ale, cs->hw.saphir.hscx, HSCX_MASK + 0x40, 0);
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
-}
-
-static void
-SaphirWatchDog(struct timer_list *t)
-{
- struct IsdnCardState *cs = from_timer(cs, t, hw.saphir.timer);
- u_long flags;
-
- spin_lock_irqsave(&cs->lock, flags);
- /* 5 sec WatchDog, so read at least every 4 sec */
- cs->readisac(cs, ISAC_RBCH);
- spin_unlock_irqrestore(&cs->lock, flags);
- mod_timer(&cs->hw.saphir.timer, jiffies + 1 * HZ);
-}
-
-static void
-release_io_saphir(struct IsdnCardState *cs)
-{
- byteout(cs->hw.saphir.cfg_reg + IRQ_REG, 0xff);
- del_timer(&cs->hw.saphir.timer);
- cs->hw.saphir.timer.function = NULL;
- if (cs->hw.saphir.cfg_reg)
- release_region(cs->hw.saphir.cfg_reg, 6);
-}
-
-static int
-saphir_reset(struct IsdnCardState *cs)
-{
- u_char irq_val;
-
- switch (cs->irq) {
- case 5: irq_val = 0;
- break;
- case 3: irq_val = 1;
- break;
- case 11:
- irq_val = 2;
- break;
- case 12:
- irq_val = 3;
- break;
- case 15:
- irq_val = 4;
- break;
- default:
- printk(KERN_WARNING "HiSax: saphir wrong IRQ %d\n",
- cs->irq);
- return (1);
- }
- byteout(cs->hw.saphir.cfg_reg + IRQ_REG, irq_val);
- byteout(cs->hw.saphir.cfg_reg + RESET_REG, 1);
- mdelay(10);
- byteout(cs->hw.saphir.cfg_reg + RESET_REG, 0);
- mdelay(10);
- byteout(cs->hw.saphir.cfg_reg + IRQ_REG, irq_val);
- byteout(cs->hw.saphir.cfg_reg + SPARE_REG, 0x02);
- return (0);
-}
-
-static int
-saphir_card_msg(struct IsdnCardState *cs, int mt, void *arg)
-{
- u_long flags;
-
- switch (mt) {
- case CARD_RESET:
- spin_lock_irqsave(&cs->lock, flags);
- saphir_reset(cs);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_RELEASE:
- release_io_saphir(cs);
- return (0);
- case CARD_INIT:
- spin_lock_irqsave(&cs->lock, flags);
- inithscxisac(cs, 3);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_TEST:
- return (0);
- }
- return (0);
-}
-
-
-int setup_saphir(struct IsdnCard *card)
-{
- struct IsdnCardState *cs = card->cs;
- char tmp[64];
-
- strcpy(tmp, saphir_rev);
- printk(KERN_INFO "HiSax: HST Saphir driver Rev. %s\n", HiSax_getrev(tmp));
- if (cs->typ != ISDN_CTYPE_HSTSAPHIR)
- return (0);
-
- /* IO-Ports */
- cs->hw.saphir.cfg_reg = card->para[1];
- cs->hw.saphir.isac = card->para[1] + ISAC_DATA;
- cs->hw.saphir.hscx = card->para[1] + HSCX_DATA;
- cs->hw.saphir.ale = card->para[1] + ADDRESS_REG;
- cs->irq = card->para[0];
- if (!request_region(cs->hw.saphir.cfg_reg, 6, "saphir")) {
- printk(KERN_WARNING
- "HiSax: HST Saphir config port %x-%x already in use\n",
- cs->hw.saphir.cfg_reg,
- cs->hw.saphir.cfg_reg + 5);
- return (0);
- }
-
- printk(KERN_INFO "HiSax: HST Saphir config irq:%d io:0x%X\n",
- cs->irq, cs->hw.saphir.cfg_reg);
-
- setup_isac(cs);
- timer_setup(&cs->hw.saphir.timer, SaphirWatchDog, 0);
- cs->hw.saphir.timer.expires = jiffies + 4 * HZ;
- add_timer(&cs->hw.saphir.timer);
- if (saphir_reset(cs)) {
- release_io_saphir(cs);
- return (0);
- }
- cs->readisac = &ReadISAC;
- cs->writeisac = &WriteISAC;
- cs->readisacfifo = &ReadISACfifo;
- cs->writeisacfifo = &WriteISACfifo;
- cs->BC_Read_Reg = &ReadHSCX;
- cs->BC_Write_Reg = &WriteHSCX;
- cs->BC_Send_Data = &hscx_fill_fifo;
- cs->cardmsg = &saphir_card_msg;
- cs->irq_func = &saphir_interrupt;
- ISACVersion(cs, "saphir:");
- if (HscxVersion(cs, "saphir:")) {
- printk(KERN_WARNING
- "saphir: wrong HSCX versions check IO address\n");
- release_io_saphir(cs);
- return (0);
- }
- return (1);
-}
diff --git a/drivers/isdn/hisax/sedlbauer.c b/drivers/isdn/hisax/sedlbauer.c
deleted file mode 100644
index c0b97b893495..000000000000
--- a/drivers/isdn/hisax/sedlbauer.c
+++ /dev/null
@@ -1,873 +0,0 @@
-/* $Id: sedlbauer.c,v 1.34.2.6 2004/01/24 20:47:24 keil Exp $
- *
- * low level stuff for Sedlbauer cards
- * includes support for the Sedlbauer speed star (speed star II),
- * support for the Sedlbauer speed fax+,
- * support for the Sedlbauer ISDN-Controller PC/104 and
- * support for the Sedlbauer speed pci
- * derived from the original file asuscom.c from Karsten Keil
- *
- * Author Marcus Niemann
- * Copyright by Marcus Niemann <niemann@www-bib.fh-bielefeld.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- * Thanks to Karsten Keil
- * Sedlbauer AG for information
- * Edgar Toernig
- *
- */
-
-/* Supported cards:
- * Card: Chip: Configuration: Comment:
- * ---------------------------------------------------------------------
- * Speed Card ISAC_HSCX DIP-SWITCH
- * Speed Win ISAC_HSCX ISAPNP
- * Speed Fax+ ISAC_ISAR ISAPNP Full analog support
- * Speed Star ISAC_HSCX CARDMGR
- * Speed Win2 IPAC ISAPNP
- * ISDN PC/104 IPAC DIP-SWITCH
- * Speed Star2 IPAC CARDMGR
- * Speed PCI IPAC PCI PNP
- * Speed Fax+ ISAC_ISAR PCI PNP Full analog support
- *
- * Important:
- * For the sedlbauer speed fax+ to work properly you have to download
- * the firmware onto the card.
- * For example: hisaxctrl <DriverID> 9 ISAR.BIN
- */
-
-#include <linux/init.h>
-#include "hisax.h"
-#include "isac.h"
-#include "ipac.h"
-#include "hscx.h"
-#include "isar.h"
-#include "isdnl1.h"
-#include <linux/pci.h>
-#include <linux/isapnp.h>
-
-static const char *Sedlbauer_revision = "$Revision: 1.34.2.6 $";
-
-static const char *Sedlbauer_Types[] =
-{"None", "speed card/win", "speed star", "speed fax+",
- "speed win II / ISDN PC/104", "speed star II", "speed pci",
- "speed fax+ pyramid", "speed fax+ pci", "HST Saphir III"};
-
-#define PCI_SUBVENDOR_SPEEDFAX_PYRAMID 0x51
-#define PCI_SUBVENDOR_HST_SAPHIR3 0x52
-#define PCI_SUBVENDOR_SEDLBAUER_PCI 0x53
-#define PCI_SUBVENDOR_SPEEDFAX_PCI 0x54
-#define PCI_SUB_ID_SEDLBAUER 0x01
-
-#define SEDL_SPEED_CARD_WIN 1
-#define SEDL_SPEED_STAR 2
-#define SEDL_SPEED_FAX 3
-#define SEDL_SPEED_WIN2_PC104 4
-#define SEDL_SPEED_STAR2 5
-#define SEDL_SPEED_PCI 6
-#define SEDL_SPEEDFAX_PYRAMID 7
-#define SEDL_SPEEDFAX_PCI 8
-#define HST_SAPHIR3 9
-
-#define SEDL_CHIP_TEST 0
-#define SEDL_CHIP_ISAC_HSCX 1
-#define SEDL_CHIP_ISAC_ISAR 2
-#define SEDL_CHIP_IPAC 3
-
-#define SEDL_BUS_ISA 1
-#define SEDL_BUS_PCI 2
-#define SEDL_BUS_PCMCIA 3
-
-#define byteout(addr, val) outb(val, addr)
-#define bytein(addr) inb(addr)
-
-#define SEDL_HSCX_ISA_RESET_ON 0
-#define SEDL_HSCX_ISA_RESET_OFF 1
-#define SEDL_HSCX_ISA_ISAC 2
-#define SEDL_HSCX_ISA_HSCX 3
-#define SEDL_HSCX_ISA_ADR 4
-
-#define SEDL_HSCX_PCMCIA_RESET 0
-#define SEDL_HSCX_PCMCIA_ISAC 1
-#define SEDL_HSCX_PCMCIA_HSCX 2
-#define SEDL_HSCX_PCMCIA_ADR 4
-
-#define SEDL_ISAR_ISA_ISAC 4
-#define SEDL_ISAR_ISA_ISAR 6
-#define SEDL_ISAR_ISA_ADR 8
-#define SEDL_ISAR_ISA_ISAR_RESET_ON 10
-#define SEDL_ISAR_ISA_ISAR_RESET_OFF 12
-
-#define SEDL_IPAC_ANY_ADR 0
-#define SEDL_IPAC_ANY_IPAC 2
-
-#define SEDL_IPAC_PCI_BASE 0
-#define SEDL_IPAC_PCI_ADR 0xc0
-#define SEDL_IPAC_PCI_IPAC 0xc8
-#define SEDL_ISAR_PCI_ADR 0xc8
-#define SEDL_ISAR_PCI_ISAC 0xd0
-#define SEDL_ISAR_PCI_ISAR 0xe0
-#define SEDL_ISAR_PCI_ISAR_RESET_ON 0x01
-#define SEDL_ISAR_PCI_ISAR_RESET_OFF 0x18
-#define SEDL_ISAR_PCI_LED1 0x08
-#define SEDL_ISAR_PCI_LED2 0x10
-
-#define SEDL_RESET 0x3 /* same as DOS driver */
-
-static inline u_char
-readreg(unsigned int ale, unsigned int adr, u_char off)
-{
- register u_char ret;
-
- byteout(ale, off);
- ret = bytein(adr);
- return (ret);
-}
-
-static inline void
-readfifo(unsigned int ale, unsigned int adr, u_char off, u_char *data, int size)
-{
- byteout(ale, off);
- insb(adr, data, size);
-}
-
-
-static inline void
-writereg(unsigned int ale, unsigned int adr, u_char off, u_char data)
-{
- byteout(ale, off);
- byteout(adr, data);
-}
-
-static inline void
-writefifo(unsigned int ale, unsigned int adr, u_char off, u_char *data, int size)
-{
- byteout(ale, off);
- outsb(adr, data, size);
-}
-
-/* Interface functions */
-
-static u_char
-ReadISAC(struct IsdnCardState *cs, u_char offset)
-{
- return (readreg(cs->hw.sedl.adr, cs->hw.sedl.isac, offset));
-}
-
-static void
-WriteISAC(struct IsdnCardState *cs, u_char offset, u_char value)
-{
- writereg(cs->hw.sedl.adr, cs->hw.sedl.isac, offset, value);
-}
-
-static void
-ReadISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- readfifo(cs->hw.sedl.adr, cs->hw.sedl.isac, 0, data, size);
-}
-
-static void
-WriteISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- writefifo(cs->hw.sedl.adr, cs->hw.sedl.isac, 0, data, size);
-}
-
-static u_char
-ReadISAC_IPAC(struct IsdnCardState *cs, u_char offset)
-{
- return (readreg(cs->hw.sedl.adr, cs->hw.sedl.isac, offset | 0x80));
-}
-
-static void
-WriteISAC_IPAC(struct IsdnCardState *cs, u_char offset, u_char value)
-{
- writereg(cs->hw.sedl.adr, cs->hw.sedl.isac, offset | 0x80, value);
-}
-
-static void
-ReadISACfifo_IPAC(struct IsdnCardState *cs, u_char *data, int size)
-{
- readfifo(cs->hw.sedl.adr, cs->hw.sedl.isac, 0x80, data, size);
-}
-
-static void
-WriteISACfifo_IPAC(struct IsdnCardState *cs, u_char *data, int size)
-{
- writefifo(cs->hw.sedl.adr, cs->hw.sedl.isac, 0x80, data, size);
-}
-
-static u_char
-ReadHSCX(struct IsdnCardState *cs, int hscx, u_char offset)
-{
- return (readreg(cs->hw.sedl.adr,
- cs->hw.sedl.hscx, offset + (hscx ? 0x40 : 0)));
-}
-
-static void
-WriteHSCX(struct IsdnCardState *cs, int hscx, u_char offset, u_char value)
-{
- writereg(cs->hw.sedl.adr,
- cs->hw.sedl.hscx, offset + (hscx ? 0x40 : 0), value);
-}
-
-/* ISAR access routines
- * mode = 0 access with IRQ on
- * mode = 1 access with IRQ off
- * mode = 2 access with IRQ off and using last offset
- */
-
-static u_char
-ReadISAR(struct IsdnCardState *cs, int mode, u_char offset)
-{
- if (mode == 0)
- return (readreg(cs->hw.sedl.adr, cs->hw.sedl.hscx, offset));
- else if (mode == 1)
- byteout(cs->hw.sedl.adr, offset);
- return (bytein(cs->hw.sedl.hscx));
-}
-
-static void
-WriteISAR(struct IsdnCardState *cs, int mode, u_char offset, u_char value)
-{
- if (mode == 0)
- writereg(cs->hw.sedl.adr, cs->hw.sedl.hscx, offset, value);
- else {
- if (mode == 1)
- byteout(cs->hw.sedl.adr, offset);
- byteout(cs->hw.sedl.hscx, value);
- }
-}
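
/*
 * Access-mode usage sketch (illustrative only; not necessarily the exact
 * sequence isar.c uses): with interrupts already disabled, mode 1 latches
 * the register offset once and mode 2 then reuses it, so a burst read of
 * the ISAR mailbox does not have to rewrite the address latch per byte.
 */
static inline void isar_burst_read_sketch(struct IsdnCardState *cs,
					  u_char off, u_char *buf, int len)
{
	int i;

	if (len <= 0)
		return;
	buf[0] = ReadISAR(cs, 1, off);		/* mode 1: set offset, then read */
	for (i = 1; i < len; i++)
		buf[i] = ReadISAR(cs, 2, off);	/* mode 2: reuse the latched offset */
}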
-
-/*
- * fast interrupt HSCX stuff goes here
- */
-
-#define READHSCX(cs, nr, reg) readreg(cs->hw.sedl.adr, \
- cs->hw.sedl.hscx, reg + (nr ? 0x40 : 0))
-#define WRITEHSCX(cs, nr, reg, data) writereg(cs->hw.sedl.adr, \
- cs->hw.sedl.hscx, reg + (nr ? 0x40 : 0), data)
-
-#define READHSCXFIFO(cs, nr, ptr, cnt) readfifo(cs->hw.sedl.adr, \
- cs->hw.sedl.hscx, (nr ? 0x40 : 0), ptr, cnt)
-
-#define WRITEHSCXFIFO(cs, nr, ptr, cnt) writefifo(cs->hw.sedl.adr, \
- cs->hw.sedl.hscx, (nr ? 0x40 : 0), ptr, cnt)
-
-#include "hscx_irq.c"
-
-static irqreturn_t
-sedlbauer_interrupt(int intno, void *dev_id)
-{
- struct IsdnCardState *cs = dev_id;
- u_char val;
- u_long flags;
-
- spin_lock_irqsave(&cs->lock, flags);
- if ((cs->hw.sedl.bus == SEDL_BUS_PCMCIA) && (*cs->busy_flag == 1)) {
- /* The card tends to generate interrupts while being removed
- causing us to just crash the kernel. bad. */
- spin_unlock_irqrestore(&cs->lock, flags);
- printk(KERN_WARNING "Sedlbauer: card not available!\n");
- return IRQ_NONE;
- }
-
- val = readreg(cs->hw.sedl.adr, cs->hw.sedl.hscx, HSCX_ISTA + 0x40);
-Start_HSCX:
- if (val)
- hscx_int_main(cs, val);
- val = readreg(cs->hw.sedl.adr, cs->hw.sedl.isac, ISAC_ISTA);
-Start_ISAC:
- if (val)
- isac_interrupt(cs, val);
- val = readreg(cs->hw.sedl.adr, cs->hw.sedl.hscx, HSCX_ISTA + 0x40);
- if (val) {
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "HSCX IntStat after IntRoutine");
- goto Start_HSCX;
- }
- val = readreg(cs->hw.sedl.adr, cs->hw.sedl.isac, ISAC_ISTA);
- if (val) {
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "ISAC IntStat after IntRoutine");
- goto Start_ISAC;
- }
- writereg(cs->hw.sedl.adr, cs->hw.sedl.hscx, HSCX_MASK, 0xFF);
- writereg(cs->hw.sedl.adr, cs->hw.sedl.hscx, HSCX_MASK + 0x40, 0xFF);
- writereg(cs->hw.sedl.adr, cs->hw.sedl.isac, ISAC_MASK, 0xFF);
- writereg(cs->hw.sedl.adr, cs->hw.sedl.isac, ISAC_MASK, 0x0);
- writereg(cs->hw.sedl.adr, cs->hw.sedl.hscx, HSCX_MASK, 0x0);
- writereg(cs->hw.sedl.adr, cs->hw.sedl.hscx, HSCX_MASK + 0x40, 0x0);
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
-}
-
-static irqreturn_t
-sedlbauer_interrupt_ipac(int intno, void *dev_id)
-{
- struct IsdnCardState *cs = dev_id;
- u_char ista, val, icnt = 5;
- u_long flags;
-
- spin_lock_irqsave(&cs->lock, flags);
- ista = readreg(cs->hw.sedl.adr, cs->hw.sedl.isac, IPAC_ISTA);
-Start_IPAC:
- if (cs->debug & L1_DEB_IPAC)
- debugl1(cs, "IPAC ISTA %02X", ista);
- if (ista & 0x0f) {
- val = readreg(cs->hw.sedl.adr, cs->hw.sedl.hscx, HSCX_ISTA + 0x40);
- if (ista & 0x01)
- val |= 0x01;
- if (ista & 0x04)
- val |= 0x02;
- if (ista & 0x08)
- val |= 0x04;
- if (val)
- hscx_int_main(cs, val);
- }
- if (ista & 0x20) {
- val = 0xfe & readreg(cs->hw.sedl.adr, cs->hw.sedl.isac, ISAC_ISTA | 0x80);
- if (val) {
- isac_interrupt(cs, val);
- }
- }
- if (ista & 0x10) {
- val = 0x01;
- isac_interrupt(cs, val);
- }
- ista = readreg(cs->hw.sedl.adr, cs->hw.sedl.isac, IPAC_ISTA);
- if ((ista & 0x3f) && icnt) {
- icnt--;
- goto Start_IPAC;
- }
- if (!icnt)
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "Sedlbauer IRQ LOOP");
- writereg(cs->hw.sedl.adr, cs->hw.sedl.isac, IPAC_MASK, 0xFF);
- writereg(cs->hw.sedl.adr, cs->hw.sedl.isac, IPAC_MASK, 0xC0);
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
-}
-
-static irqreturn_t
-sedlbauer_interrupt_isar(int intno, void *dev_id)
-{
- struct IsdnCardState *cs = dev_id;
- u_char val;
- int cnt = 5;
- u_long flags;
-
- spin_lock_irqsave(&cs->lock, flags);
- val = readreg(cs->hw.sedl.adr, cs->hw.sedl.hscx, ISAR_IRQBIT);
-Start_ISAR:
- if (val & ISAR_IRQSTA)
- isar_int_main(cs);
- val = readreg(cs->hw.sedl.adr, cs->hw.sedl.isac, ISAC_ISTA);
-Start_ISAC:
- if (val)
- isac_interrupt(cs, val);
- val = readreg(cs->hw.sedl.adr, cs->hw.sedl.hscx, ISAR_IRQBIT);
- if ((val & ISAR_IRQSTA) && --cnt) {
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "ISAR IntStat after IntRoutine");
- goto Start_ISAR;
- }
- val = readreg(cs->hw.sedl.adr, cs->hw.sedl.isac, ISAC_ISTA);
- if (val && --cnt) {
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "ISAC IntStat after IntRoutine");
- goto Start_ISAC;
- }
- if (!cnt)
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "Sedlbauer IRQ LOOP");
-
- writereg(cs->hw.sedl.adr, cs->hw.sedl.hscx, ISAR_IRQBIT, 0);
- writereg(cs->hw.sedl.adr, cs->hw.sedl.isac, ISAC_MASK, 0xFF);
- writereg(cs->hw.sedl.adr, cs->hw.sedl.isac, ISAC_MASK, 0x0);
- writereg(cs->hw.sedl.adr, cs->hw.sedl.hscx, ISAR_IRQBIT, ISAR_IRQMSK);
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
-}
-
-static void
-release_io_sedlbauer(struct IsdnCardState *cs)
-{
- int bytecnt = 8;
-
- if (cs->subtyp == SEDL_SPEED_FAX) {
- bytecnt = 16;
- } else if (cs->hw.sedl.bus == SEDL_BUS_PCI) {
- bytecnt = 256;
- }
- if (cs->hw.sedl.cfg_reg)
- release_region(cs->hw.sedl.cfg_reg, bytecnt);
-}
-
-static void
-reset_sedlbauer(struct IsdnCardState *cs)
-{
- printk(KERN_INFO "Sedlbauer: resetting card\n");
-
- if (!((cs->hw.sedl.bus == SEDL_BUS_PCMCIA) &&
- (cs->hw.sedl.chip == SEDL_CHIP_ISAC_HSCX))) {
- if (cs->hw.sedl.chip == SEDL_CHIP_IPAC) {
- writereg(cs->hw.sedl.adr, cs->hw.sedl.isac, IPAC_POTA2, 0x20);
- mdelay(2);
- writereg(cs->hw.sedl.adr, cs->hw.sedl.isac, IPAC_POTA2, 0x0);
- mdelay(10);
- writereg(cs->hw.sedl.adr, cs->hw.sedl.isac, IPAC_CONF, 0x0);
- writereg(cs->hw.sedl.adr, cs->hw.sedl.isac, IPAC_ACFG, 0xff);
- writereg(cs->hw.sedl.adr, cs->hw.sedl.isac, IPAC_AOE, 0x0);
- writereg(cs->hw.sedl.adr, cs->hw.sedl.isac, IPAC_MASK, 0xc0);
- writereg(cs->hw.sedl.adr, cs->hw.sedl.isac, IPAC_PCFG, 0x12);
- } else if ((cs->hw.sedl.chip == SEDL_CHIP_ISAC_ISAR) &&
- (cs->hw.sedl.bus == SEDL_BUS_PCI)) {
- byteout(cs->hw.sedl.cfg_reg + 3, cs->hw.sedl.reset_on);
- mdelay(2);
- byteout(cs->hw.sedl.cfg_reg + 3, cs->hw.sedl.reset_off);
- mdelay(10);
- } else {
- byteout(cs->hw.sedl.reset_on, SEDL_RESET); /* Reset On */
- mdelay(2);
- byteout(cs->hw.sedl.reset_off, 0); /* Reset Off */
- mdelay(10);
- }
- }
-}
-
-static int
-Sedl_card_msg(struct IsdnCardState *cs, int mt, void *arg)
-{
- u_long flags;
-
- switch (mt) {
- case CARD_RESET:
- spin_lock_irqsave(&cs->lock, flags);
- reset_sedlbauer(cs);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_RELEASE:
- if (cs->hw.sedl.bus == SEDL_BUS_PCI)
- /* disable all IRQ */
- byteout(cs->hw.sedl.cfg_reg + 5, 0);
- if (cs->hw.sedl.chip == SEDL_CHIP_ISAC_ISAR) {
- spin_lock_irqsave(&cs->lock, flags);
- writereg(cs->hw.sedl.adr, cs->hw.sedl.hscx,
- ISAR_IRQBIT, 0);
- writereg(cs->hw.sedl.adr, cs->hw.sedl.isac,
- ISAC_MASK, 0xFF);
- reset_sedlbauer(cs);
- writereg(cs->hw.sedl.adr, cs->hw.sedl.hscx,
- ISAR_IRQBIT, 0);
- writereg(cs->hw.sedl.adr, cs->hw.sedl.isac,
- ISAC_MASK, 0xFF);
- spin_unlock_irqrestore(&cs->lock, flags);
- }
- release_io_sedlbauer(cs);
- return (0);
- case CARD_INIT:
- spin_lock_irqsave(&cs->lock, flags);
- if (cs->hw.sedl.bus == SEDL_BUS_PCI)
- /* enable all IRQ */
- byteout(cs->hw.sedl.cfg_reg + 5, 0x02);
- reset_sedlbauer(cs);
- if (cs->hw.sedl.chip == SEDL_CHIP_ISAC_ISAR) {
- clear_pending_isac_ints(cs);
- writereg(cs->hw.sedl.adr, cs->hw.sedl.hscx,
- ISAR_IRQBIT, 0);
- initisac(cs);
- initisar(cs);
- /* Reenable all IRQ */
- cs->writeisac(cs, ISAC_MASK, 0);
- /* RESET Receiver and Transmitter */
- cs->writeisac(cs, ISAC_CMDR, 0x41);
- } else {
- inithscxisac(cs, 3);
- }
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_TEST:
- return (0);
- case MDL_INFO_CONN:
- if (cs->subtyp != SEDL_SPEEDFAX_PYRAMID)
- return (0);
- spin_lock_irqsave(&cs->lock, flags);
- if ((long) arg)
- cs->hw.sedl.reset_off &= ~SEDL_ISAR_PCI_LED2;
- else
- cs->hw.sedl.reset_off &= ~SEDL_ISAR_PCI_LED1;
- byteout(cs->hw.sedl.cfg_reg + 3, cs->hw.sedl.reset_off);
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- case MDL_INFO_REL:
- if (cs->subtyp != SEDL_SPEEDFAX_PYRAMID)
- return (0);
- spin_lock_irqsave(&cs->lock, flags);
- if ((long) arg)
- cs->hw.sedl.reset_off |= SEDL_ISAR_PCI_LED2;
- else
- cs->hw.sedl.reset_off |= SEDL_ISAR_PCI_LED1;
- byteout(cs->hw.sedl.cfg_reg + 3, cs->hw.sedl.reset_off);
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- }
- return (0);
-}
-
-#ifdef __ISAPNP__
-static struct isapnp_device_id sedl_ids[] = {
- { ISAPNP_VENDOR('S', 'A', 'G'), ISAPNP_FUNCTION(0x01),
- ISAPNP_VENDOR('S', 'A', 'G'), ISAPNP_FUNCTION(0x01),
- (unsigned long) "Speed win" },
- { ISAPNP_VENDOR('S', 'A', 'G'), ISAPNP_FUNCTION(0x02),
- ISAPNP_VENDOR('S', 'A', 'G'), ISAPNP_FUNCTION(0x02),
- (unsigned long) "Speed Fax+" },
- { 0, }
-};
-
-static struct isapnp_device_id *ipid = &sedl_ids[0];
-static struct pnp_card *pnp_c = NULL;
-
-static int setup_sedlbauer_isapnp(struct IsdnCard *card, int *bytecnt)
-{
- struct IsdnCardState *cs = card->cs;
- struct pnp_dev *pnp_d;
-
- if (!isapnp_present())
- return -1;
-
- while (ipid->card_vendor) {
- if ((pnp_c = pnp_find_card(ipid->card_vendor,
- ipid->card_device, pnp_c))) {
- pnp_d = NULL;
- if ((pnp_d = pnp_find_dev(pnp_c,
- ipid->vendor, ipid->function, pnp_d))) {
- int err;
-
- printk(KERN_INFO "HiSax: %s detected\n",
- (char *)ipid->driver_data);
- pnp_disable_dev(pnp_d);
- err = pnp_activate_dev(pnp_d);
- if (err < 0) {
- printk(KERN_WARNING "%s: pnp_activate_dev ret(%d)\n",
- __func__, err);
- return (0);
- }
- card->para[1] = pnp_port_start(pnp_d, 0);
- card->para[0] = pnp_irq(pnp_d, 0);
-
- if (card->para[0] == -1 || !card->para[1]) {
- printk(KERN_ERR "Sedlbauer PnP:some resources are missing %ld/%lx\n",
- card->para[0], card->para[1]);
- pnp_disable_dev(pnp_d);
- return (0);
- }
- cs->hw.sedl.cfg_reg = card->para[1];
- cs->irq = card->para[0];
- if (ipid->function == ISAPNP_FUNCTION(0x2)) {
- cs->subtyp = SEDL_SPEED_FAX;
- cs->hw.sedl.chip = SEDL_CHIP_ISAC_ISAR;
- *bytecnt = 16;
- } else {
- cs->subtyp = SEDL_SPEED_CARD_WIN;
- cs->hw.sedl.chip = SEDL_CHIP_TEST;
- }
-
- return (1);
- } else {
- printk(KERN_ERR "Sedlbauer PnP: PnP error card found, no device\n");
- return (0);
- }
- }
- ipid++;
- pnp_c = NULL;
- }
-
- printk(KERN_INFO "Sedlbauer PnP: no ISAPnP card found\n");
- return -1;
-}
-#else
-
-static int setup_sedlbauer_isapnp(struct IsdnCard *card, int *bytecnt)
-{
- return -1;
-}
-#endif /* __ISAPNP__ */
-
-#ifdef CONFIG_PCI
-static struct pci_dev *dev_sedl = NULL;
-
-static int setup_sedlbauer_pci(struct IsdnCard *card)
-{
- struct IsdnCardState *cs = card->cs;
- u16 sub_vendor_id, sub_id;
-
- if ((dev_sedl = hisax_find_pci_device(PCI_VENDOR_ID_TIGERJET,
- PCI_DEVICE_ID_TIGERJET_100, dev_sedl))) {
- if (pci_enable_device(dev_sedl))
- return (0);
- cs->irq = dev_sedl->irq;
- if (!cs->irq) {
- printk(KERN_WARNING "Sedlbauer: No IRQ for PCI card found\n");
- return (0);
- }
- cs->hw.sedl.cfg_reg = pci_resource_start(dev_sedl, 0);
- } else {
- printk(KERN_WARNING "Sedlbauer: No PCI card found\n");
- return (0);
- }
- cs->irq_flags |= IRQF_SHARED;
- cs->hw.sedl.bus = SEDL_BUS_PCI;
- sub_vendor_id = dev_sedl->subsystem_vendor;
- sub_id = dev_sedl->subsystem_device;
- printk(KERN_INFO "Sedlbauer: PCI subvendor:%x subid %x\n",
- sub_vendor_id, sub_id);
- printk(KERN_INFO "Sedlbauer: PCI base adr %#x\n",
- cs->hw.sedl.cfg_reg);
- if (sub_id != PCI_SUB_ID_SEDLBAUER) {
- printk(KERN_ERR "Sedlbauer: unknown sub id %#x\n", sub_id);
- return (0);
- }
- if (sub_vendor_id == PCI_SUBVENDOR_SPEEDFAX_PYRAMID) {
- cs->hw.sedl.chip = SEDL_CHIP_ISAC_ISAR;
- cs->subtyp = SEDL_SPEEDFAX_PYRAMID;
- } else if (sub_vendor_id == PCI_SUBVENDOR_SPEEDFAX_PCI) {
- cs->hw.sedl.chip = SEDL_CHIP_ISAC_ISAR;
- cs->subtyp = SEDL_SPEEDFAX_PCI;
- } else if (sub_vendor_id == PCI_SUBVENDOR_HST_SAPHIR3) {
- cs->hw.sedl.chip = SEDL_CHIP_IPAC;
- cs->subtyp = HST_SAPHIR3;
- } else if (sub_vendor_id == PCI_SUBVENDOR_SEDLBAUER_PCI) {
- cs->hw.sedl.chip = SEDL_CHIP_IPAC;
- cs->subtyp = SEDL_SPEED_PCI;
- } else {
- printk(KERN_ERR "Sedlbauer: unknown sub vendor id %#x\n",
- sub_vendor_id);
- return (0);
- }
-
- cs->hw.sedl.reset_on = SEDL_ISAR_PCI_ISAR_RESET_ON;
- cs->hw.sedl.reset_off = SEDL_ISAR_PCI_ISAR_RESET_OFF;
- byteout(cs->hw.sedl.cfg_reg, 0xff);
- byteout(cs->hw.sedl.cfg_reg, 0x00);
- byteout(cs->hw.sedl.cfg_reg + 2, 0xdd);
- byteout(cs->hw.sedl.cfg_reg + 5, 0); /* disable all IRQ */
- byteout(cs->hw.sedl.cfg_reg + 3, cs->hw.sedl.reset_on);
- mdelay(2);
- byteout(cs->hw.sedl.cfg_reg + 3, cs->hw.sedl.reset_off);
- mdelay(10);
-
- return (1);
-}
-
-#else
-
-static int setup_sedlbauer_pci(struct IsdnCard *card)
-{
- return (1);
-}
-
-#endif /* CONFIG_PCI */
-
-int setup_sedlbauer(struct IsdnCard *card)
-{
- int bytecnt = 8, ver, val, rc;
- struct IsdnCardState *cs = card->cs;
- char tmp[64];
-
- strcpy(tmp, Sedlbauer_revision);
- printk(KERN_INFO "HiSax: Sedlbauer driver Rev. %s\n", HiSax_getrev(tmp));
-
- if (cs->typ == ISDN_CTYPE_SEDLBAUER) {
- cs->subtyp = SEDL_SPEED_CARD_WIN;
- cs->hw.sedl.bus = SEDL_BUS_ISA;
- cs->hw.sedl.chip = SEDL_CHIP_TEST;
- } else if (cs->typ == ISDN_CTYPE_SEDLBAUER_PCMCIA) {
- cs->subtyp = SEDL_SPEED_STAR;
- cs->hw.sedl.bus = SEDL_BUS_PCMCIA;
- cs->hw.sedl.chip = SEDL_CHIP_TEST;
- } else if (cs->typ == ISDN_CTYPE_SEDLBAUER_FAX) {
- cs->subtyp = SEDL_SPEED_FAX;
- cs->hw.sedl.bus = SEDL_BUS_ISA;
- cs->hw.sedl.chip = SEDL_CHIP_ISAC_ISAR;
- } else
- return (0);
-
- bytecnt = 8;
- if (card->para[1]) {
- cs->hw.sedl.cfg_reg = card->para[1];
- cs->irq = card->para[0];
- if (cs->hw.sedl.chip == SEDL_CHIP_ISAC_ISAR) {
- bytecnt = 16;
- }
- } else {
- rc = setup_sedlbauer_isapnp(card, &bytecnt);
- if (!rc)
- return (0);
- if (rc > 0)
- goto ready;
-
- /* Probe for Sedlbauer speed pci */
- rc = setup_sedlbauer_pci(card);
- if (!rc)
- return (0);
-
- bytecnt = 256;
- }
-
-ready:
-
- /* In case of the sedlbauer pcmcia card, this region is in use,
- * reserved for us by the card manager. So we do not check it
- * here, it would fail.
- */
- if (cs->hw.sedl.bus != SEDL_BUS_PCMCIA &&
- !request_region(cs->hw.sedl.cfg_reg, bytecnt, "sedlbauer isdn")) {
- printk(KERN_WARNING
- "HiSax: %s config port %x-%x already in use\n",
- CardType[card->typ],
- cs->hw.sedl.cfg_reg,
- cs->hw.sedl.cfg_reg + bytecnt);
- return (0);
- }
-
- printk(KERN_INFO
- "Sedlbauer: defined at 0x%x-0x%x IRQ %d\n",
- cs->hw.sedl.cfg_reg,
- cs->hw.sedl.cfg_reg + bytecnt,
- cs->irq);
-
- cs->BC_Read_Reg = &ReadHSCX;
- cs->BC_Write_Reg = &WriteHSCX;
- cs->BC_Send_Data = &hscx_fill_fifo;
- cs->cardmsg = &Sedl_card_msg;
-
-/*
- * testing ISA and PCMCIA Cards for IPAC, default is ISAC
- * do not test for PCI card, because ports are different
- * and PCI card uses only IPAC (for the moment)
- */
- if (cs->hw.sedl.bus != SEDL_BUS_PCI) {
- val = readreg(cs->hw.sedl.cfg_reg + SEDL_IPAC_ANY_ADR,
- cs->hw.sedl.cfg_reg + SEDL_IPAC_ANY_IPAC, IPAC_ID);
- printk(KERN_DEBUG "Sedlbauer: testing IPAC version %x\n", val);
- if ((val == 1) || (val == 2)) {
- /* IPAC */
- cs->subtyp = SEDL_SPEED_WIN2_PC104;
- if (cs->hw.sedl.bus == SEDL_BUS_PCMCIA) {
- cs->subtyp = SEDL_SPEED_STAR2;
- }
- cs->hw.sedl.chip = SEDL_CHIP_IPAC;
- } else {
-			/* ISAC_HSCX or ISAC_ISAR */
- if (cs->hw.sedl.chip == SEDL_CHIP_TEST) {
- cs->hw.sedl.chip = SEDL_CHIP_ISAC_HSCX;
- }
- }
- }
-
-/*
- * hw.sedl.chip is now properly set
- */
- printk(KERN_INFO "Sedlbauer: %s detected\n",
- Sedlbauer_Types[cs->subtyp]);
-
- setup_isac(cs);
- if (cs->hw.sedl.chip == SEDL_CHIP_IPAC) {
- if (cs->hw.sedl.bus == SEDL_BUS_PCI) {
- cs->hw.sedl.adr = cs->hw.sedl.cfg_reg + SEDL_IPAC_PCI_ADR;
- cs->hw.sedl.isac = cs->hw.sedl.cfg_reg + SEDL_IPAC_PCI_IPAC;
- cs->hw.sedl.hscx = cs->hw.sedl.cfg_reg + SEDL_IPAC_PCI_IPAC;
- } else {
- cs->hw.sedl.adr = cs->hw.sedl.cfg_reg + SEDL_IPAC_ANY_ADR;
- cs->hw.sedl.isac = cs->hw.sedl.cfg_reg + SEDL_IPAC_ANY_IPAC;
- cs->hw.sedl.hscx = cs->hw.sedl.cfg_reg + SEDL_IPAC_ANY_IPAC;
- }
- test_and_set_bit(HW_IPAC, &cs->HW_Flags);
- cs->readisac = &ReadISAC_IPAC;
- cs->writeisac = &WriteISAC_IPAC;
- cs->readisacfifo = &ReadISACfifo_IPAC;
- cs->writeisacfifo = &WriteISACfifo_IPAC;
- cs->irq_func = &sedlbauer_interrupt_ipac;
- val = readreg(cs->hw.sedl.adr, cs->hw.sedl.isac, IPAC_ID);
- printk(KERN_INFO "Sedlbauer: IPAC version %x\n", val);
- } else {
-		/* ISAC_HSCX or ISAC_ISAR */
- cs->readisac = &ReadISAC;
- cs->writeisac = &WriteISAC;
- cs->readisacfifo = &ReadISACfifo;
- cs->writeisacfifo = &WriteISACfifo;
- if (cs->hw.sedl.chip == SEDL_CHIP_ISAC_ISAR) {
- if (cs->hw.sedl.bus == SEDL_BUS_PCI) {
- cs->hw.sedl.adr = cs->hw.sedl.cfg_reg +
- SEDL_ISAR_PCI_ADR;
- cs->hw.sedl.isac = cs->hw.sedl.cfg_reg +
- SEDL_ISAR_PCI_ISAC;
- cs->hw.sedl.hscx = cs->hw.sedl.cfg_reg +
- SEDL_ISAR_PCI_ISAR;
- } else {
- cs->hw.sedl.adr = cs->hw.sedl.cfg_reg +
- SEDL_ISAR_ISA_ADR;
- cs->hw.sedl.isac = cs->hw.sedl.cfg_reg +
- SEDL_ISAR_ISA_ISAC;
- cs->hw.sedl.hscx = cs->hw.sedl.cfg_reg +
- SEDL_ISAR_ISA_ISAR;
- cs->hw.sedl.reset_on = cs->hw.sedl.cfg_reg +
- SEDL_ISAR_ISA_ISAR_RESET_ON;
- cs->hw.sedl.reset_off = cs->hw.sedl.cfg_reg +
- SEDL_ISAR_ISA_ISAR_RESET_OFF;
- }
- cs->bcs[0].hw.isar.reg = &cs->hw.sedl.isar;
- cs->bcs[1].hw.isar.reg = &cs->hw.sedl.isar;
- test_and_set_bit(HW_ISAR, &cs->HW_Flags);
- cs->irq_func = &sedlbauer_interrupt_isar;
- cs->auxcmd = &isar_auxcmd;
- ISACVersion(cs, "Sedlbauer:");
- cs->BC_Read_Reg = &ReadISAR;
- cs->BC_Write_Reg = &WriteISAR;
- cs->BC_Send_Data = &isar_fill_fifo;
- bytecnt = 3;
- while (bytecnt) {
- ver = ISARVersion(cs, "Sedlbauer:");
- if (ver < 0)
- printk(KERN_WARNING
- "Sedlbauer: wrong ISAR version (ret = %d)\n", ver);
- else
- break;
- reset_sedlbauer(cs);
- bytecnt--;
- }
- if (!bytecnt) {
- release_io_sedlbauer(cs);
- return (0);
- }
- } else {
- if (cs->hw.sedl.bus == SEDL_BUS_PCMCIA) {
- cs->hw.sedl.adr = cs->hw.sedl.cfg_reg + SEDL_HSCX_PCMCIA_ADR;
- cs->hw.sedl.isac = cs->hw.sedl.cfg_reg + SEDL_HSCX_PCMCIA_ISAC;
- cs->hw.sedl.hscx = cs->hw.sedl.cfg_reg + SEDL_HSCX_PCMCIA_HSCX;
- cs->hw.sedl.reset_on = cs->hw.sedl.cfg_reg + SEDL_HSCX_PCMCIA_RESET;
- cs->hw.sedl.reset_off = cs->hw.sedl.cfg_reg + SEDL_HSCX_PCMCIA_RESET;
- cs->irq_flags |= IRQF_SHARED;
- } else {
- cs->hw.sedl.adr = cs->hw.sedl.cfg_reg + SEDL_HSCX_ISA_ADR;
- cs->hw.sedl.isac = cs->hw.sedl.cfg_reg + SEDL_HSCX_ISA_ISAC;
- cs->hw.sedl.hscx = cs->hw.sedl.cfg_reg + SEDL_HSCX_ISA_HSCX;
- cs->hw.sedl.reset_on = cs->hw.sedl.cfg_reg + SEDL_HSCX_ISA_RESET_ON;
- cs->hw.sedl.reset_off = cs->hw.sedl.cfg_reg + SEDL_HSCX_ISA_RESET_OFF;
- }
- cs->irq_func = &sedlbauer_interrupt;
- ISACVersion(cs, "Sedlbauer:");
-
- if (HscxVersion(cs, "Sedlbauer:")) {
- printk(KERN_WARNING
- "Sedlbauer: wrong HSCX versions check IO address\n");
- release_io_sedlbauer(cs);
- return (0);
- }
- }
- }
- return (1);
-}
diff --git a/drivers/isdn/hisax/sedlbauer_cs.c b/drivers/isdn/hisax/sedlbauer_cs.c
deleted file mode 100644
index 92ef62d4caf4..000000000000
--- a/drivers/isdn/hisax/sedlbauer_cs.c
+++ /dev/null
@@ -1,209 +0,0 @@
-/*======================================================================
-
- A Sedlbauer PCMCIA client driver
-
- This driver is for the Sedlbauer Speed Star and Speed Star II,
- which are ISDN PCMCIA Cards.
-
- The contents of this file are subject to the Mozilla Public
- License Version 1.1 (the "License"); you may not use this file
- except in compliance with the License. You may obtain a copy of
- the License at http://www.mozilla.org/MPL/
-
- Software distributed under the License is distributed on an "AS
- IS" basis, WITHOUT WARRANTY OF ANY KIND, either express or
- implied. See the License for the specific language governing
- rights and limitations under the License.
-
- The initial developer of the original code is David A. Hinds
- <dahinds@users.sourceforge.net>. Portions created by David A. Hinds
- are Copyright (C) 1999 David A. Hinds. All Rights Reserved.
-
- Modifications from dummy_cs.c are Copyright (C) 1999-2001 Marcus Niemann
- <maniemann@users.sourceforge.net>. All Rights Reserved.
-
- Alternatively, the contents of this file may be used under the
- terms of the GNU General Public License version 2 (the "GPL"), in
- which case the provisions of the GPL are applicable instead of the
- above. If you wish to allow the use of your version of this file
- only under the terms of the GPL and not to allow others to use
- your version of this file under the MPL, indicate your decision
- by deleting the provisions above and replace them with the notice
- and other provisions required by the GPL. If you do not delete
- the provisions above, a recipient may use your version of this
- file under either the MPL or the GPL.
-
- ======================================================================*/
-
-#include <linux/kernel.h>
-#include <linux/module.h>
-#include <linux/init.h>
-#include <linux/ptrace.h>
-#include <linux/slab.h>
-#include <linux/string.h>
-#include <linux/timer.h>
-#include <linux/ioport.h>
-#include <asm/io.h>
-
-#include <pcmcia/cistpl.h>
-#include <pcmcia/cisreg.h>
-#include <pcmcia/ds.h>
-#include "hisax_cfg.h"
-
-MODULE_DESCRIPTION("ISDN4Linux: PCMCIA client driver for Sedlbauer cards");
-MODULE_AUTHOR("Marcus Niemann");
-MODULE_LICENSE("Dual MPL/GPL");
-
-
-/*====================================================================*/
-
-/* Parameters that can be set with 'insmod' */
-
-static int protocol = 2; /* EURO-ISDN Default */
-module_param(protocol, int, 0);
-
-static int sedlbauer_config(struct pcmcia_device *link);
-static void sedlbauer_release(struct pcmcia_device *link);
-
-static void sedlbauer_detach(struct pcmcia_device *p_dev);
-
-typedef struct local_info_t {
- struct pcmcia_device *p_dev;
- int stop;
- int cardnr;
-} local_info_t;
-
-static int sedlbauer_probe(struct pcmcia_device *link)
-{
- local_info_t *local;
-
- dev_dbg(&link->dev, "sedlbauer_attach()\n");
-
- /* Allocate space for private device-specific data */
- local = kzalloc(sizeof(local_info_t), GFP_KERNEL);
- if (!local) return -ENOMEM;
- local->cardnr = -1;
-
- local->p_dev = link;
- link->priv = local;
-
- return sedlbauer_config(link);
-} /* sedlbauer_attach */
-
-static void sedlbauer_detach(struct pcmcia_device *link)
-{
- dev_dbg(&link->dev, "sedlbauer_detach(0x%p)\n", link);
-
- ((local_info_t *)link->priv)->stop = 1;
- sedlbauer_release(link);
-
- /* This points to the parent local_info_t struct */
- kfree(link->priv);
-} /* sedlbauer_detach */
-
-static int sedlbauer_config_check(struct pcmcia_device *p_dev, void *priv_data)
-{
- if (p_dev->config_index == 0)
- return -EINVAL;
-
- p_dev->io_lines = 3;
- return pcmcia_request_io(p_dev);
-}
-
-static int sedlbauer_config(struct pcmcia_device *link)
-{
- int ret;
- IsdnCard_t icard;
-
- dev_dbg(&link->dev, "sedlbauer_config(0x%p)\n", link);
-
- link->config_flags |= CONF_ENABLE_IRQ | CONF_AUTO_CHECK_VCC |
- CONF_AUTO_SET_VPP | CONF_AUTO_AUDIO | CONF_AUTO_SET_IO;
-
- ret = pcmcia_loop_config(link, sedlbauer_config_check, NULL);
- if (ret)
- goto failed;
-
- ret = pcmcia_enable_device(link);
- if (ret)
- goto failed;
-
- icard.para[0] = link->irq;
- icard.para[1] = link->resource[0]->start;
- icard.protocol = protocol;
- icard.typ = ISDN_CTYPE_SEDLBAUER_PCMCIA;
-
- ret = hisax_init_pcmcia(link,
- &(((local_info_t *)link->priv)->stop), &icard);
- if (ret < 0) {
- printk(KERN_ERR "sedlbauer_cs: failed to initialize SEDLBAUER PCMCIA %d with %pR\n",
- ret, link->resource[0]);
- sedlbauer_release(link);
- return -ENODEV;
- } else
- ((local_info_t *)link->priv)->cardnr = ret;
-
- return 0;
-
-failed:
- sedlbauer_release(link);
- return -ENODEV;
-
-} /* sedlbauer_config */
-
-static void sedlbauer_release(struct pcmcia_device *link)
-{
- local_info_t *local = link->priv;
- dev_dbg(&link->dev, "sedlbauer_release(0x%p)\n", link);
-
- if (local) {
- if (local->cardnr >= 0) {
- /* no unregister function with hisax */
- HiSax_closecard(local->cardnr);
- }
- }
-
- pcmcia_disable_device(link);
-} /* sedlbauer_release */
-
-static int sedlbauer_suspend(struct pcmcia_device *link)
-{
- local_info_t *dev = link->priv;
-
- dev->stop = 1;
-
- return 0;
-}
-
-static int sedlbauer_resume(struct pcmcia_device *link)
-{
- local_info_t *dev = link->priv;
-
- dev->stop = 0;
-
- return 0;
-}
-
-
-static const struct pcmcia_device_id sedlbauer_ids[] = {
- PCMCIA_DEVICE_PROD_ID123("SEDLBAUER", "speed star II", "V 3.1", 0x81fb79f5, 0xf3612e1d, 0x6b95c78a),
- PCMCIA_DEVICE_PROD_ID123("SEDLBAUER", "ISDN-Adapter", "4D67", 0x81fb79f5, 0xe4e9bc12, 0x397b7e90),
- PCMCIA_DEVICE_PROD_ID123("SEDLBAUER", "ISDN-Adapter", "4D98", 0x81fb79f5, 0xe4e9bc12, 0x2e5c7fce),
- PCMCIA_DEVICE_PROD_ID123("SEDLBAUER", "ISDN-Adapter", " (C) 93-94 VK", 0x81fb79f5, 0xe4e9bc12, 0x8db143fe),
- PCMCIA_DEVICE_PROD_ID123("SEDLBAUER", "ISDN-Adapter", " (c) 93-95 VK", 0x81fb79f5, 0xe4e9bc12, 0xb391ab4c),
- PCMCIA_DEVICE_PROD_ID12("HST High Soft Tech GmbH", "Saphir II B", 0xd79e0b84, 0x21d083ae),
-/* PCMCIA_DEVICE_PROD_ID1234("SEDLBAUER", 0x81fb79f5), */ /* too generic*/
- PCMCIA_DEVICE_NULL
-};
-MODULE_DEVICE_TABLE(pcmcia, sedlbauer_ids);
-
-static struct pcmcia_driver sedlbauer_driver = {
- .owner = THIS_MODULE,
- .name = "sedlbauer_cs",
- .probe = sedlbauer_probe,
- .remove = sedlbauer_detach,
- .id_table = sedlbauer_ids,
- .suspend = sedlbauer_suspend,
- .resume = sedlbauer_resume,
-};
-module_pcmcia_driver(sedlbauer_driver);
diff --git a/drivers/isdn/hisax/sportster.c b/drivers/isdn/hisax/sportster.c
deleted file mode 100644
index 18cee6360d0a..000000000000
--- a/drivers/isdn/hisax/sportster.c
+++ /dev/null
@@ -1,267 +0,0 @@
-/* $Id: sportster.c,v 1.16.2.4 2004/01/13 23:48:39 keil Exp $
- *
- * low level stuff for USR Sportster internal TA
- *
- * Author Karsten Keil
- * Copyright by Karsten Keil <keil@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- * Thanks to Christian "naddy" Weisgerber (3Com, US Robotics) for documentation
- *
- *
- */
-#include <linux/init.h>
-#include "hisax.h"
-#include "isac.h"
-#include "hscx.h"
-#include "isdnl1.h"
-
-static const char *sportster_revision = "$Revision: 1.16.2.4 $";
-
-#define byteout(addr, val) outb(val, addr)
-#define bytein(addr) inb(addr)
-
-#define SPORTSTER_ISAC 0xC000
-#define SPORTSTER_HSCXA 0x0000
-#define SPORTSTER_HSCXB 0x4000
-#define SPORTSTER_RES_IRQ 0x8000
-#define SPORTSTER_RESET 0x80
-#define SPORTSTER_INTE 0x40
-
-static inline int
-calc_off(unsigned int base, unsigned int off)
-{
- return (base + ((off & 0xfc) << 8) + ((off & 3) << 1));
-}
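
/*
 * Worked example for the mapping above (register offset 0x2e is
 * hypothetical, not taken from isac.h/hscx.h):
 *
 *   calc_off(0x200, 0x2e) = 0x200 + (0x2c << 8) + (0x2 << 1)
 *                         = 0x200 + 0x2c00 + 0x04 = 0x2e04
 *
 * i.e. a register offset selects window number (off >> 2) out of 64
 * windows spaced 1 KB apart, plus an even byte (off & 3) * 2 inside
 * that window's 8-byte range, which is exactly the 64 x 8-byte layout
 * that get_io_range() below reserves.
 */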
-
-static inline void
-read_fifo(unsigned int adr, u_char *data, int size)
-{
- insb(adr, data, size);
-}
-
-static void
-write_fifo(unsigned int adr, u_char *data, int size)
-{
- outsb(adr, data, size);
-}
-
-/* Interface functions */
-
-static u_char
-ReadISAC(struct IsdnCardState *cs, u_char offset)
-{
- return (bytein(calc_off(cs->hw.spt.isac, offset)));
-}
-
-static void
-WriteISAC(struct IsdnCardState *cs, u_char offset, u_char value)
-{
- byteout(calc_off(cs->hw.spt.isac, offset), value);
-}
-
-static void
-ReadISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- read_fifo(cs->hw.spt.isac, data, size);
-}
-
-static void
-WriteISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- write_fifo(cs->hw.spt.isac, data, size);
-}
-
-static u_char
-ReadHSCX(struct IsdnCardState *cs, int hscx, u_char offset)
-{
- return (bytein(calc_off(cs->hw.spt.hscx[hscx], offset)));
-}
-
-static void
-WriteHSCX(struct IsdnCardState *cs, int hscx, u_char offset, u_char value)
-{
- byteout(calc_off(cs->hw.spt.hscx[hscx], offset), value);
-}
-
-/*
- * fast interrupt HSCX stuff goes here
- */
-
-#define READHSCX(cs, nr, reg) bytein(calc_off(cs->hw.spt.hscx[nr], reg))
-#define WRITEHSCX(cs, nr, reg, data) byteout(calc_off(cs->hw.spt.hscx[nr], reg), data)
-#define READHSCXFIFO(cs, nr, ptr, cnt) read_fifo(cs->hw.spt.hscx[nr], ptr, cnt)
-#define WRITEHSCXFIFO(cs, nr, ptr, cnt) write_fifo(cs->hw.spt.hscx[nr], ptr, cnt)
-
-#include "hscx_irq.c"
-
-static irqreturn_t
-sportster_interrupt(int intno, void *dev_id)
-{
- struct IsdnCardState *cs = dev_id;
- u_char val;
- u_long flags;
-
- spin_lock_irqsave(&cs->lock, flags);
- val = READHSCX(cs, 1, HSCX_ISTA);
-Start_HSCX:
- if (val)
- hscx_int_main(cs, val);
- val = ReadISAC(cs, ISAC_ISTA);
-Start_ISAC:
- if (val)
- isac_interrupt(cs, val);
- val = READHSCX(cs, 1, HSCX_ISTA);
- if (val) {
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "HSCX IntStat after IntRoutine");
- goto Start_HSCX;
- }
- val = ReadISAC(cs, ISAC_ISTA);
- if (val) {
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "ISAC IntStat after IntRoutine");
- goto Start_ISAC;
- }
-	/* get a new irq impulse if there are any pending */
- bytein(cs->hw.spt.cfg_reg + SPORTSTER_RES_IRQ + 1);
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
-}
-
-static void
-release_io_sportster(struct IsdnCardState *cs)
-{
- int i, adr;
-
- byteout(cs->hw.spt.cfg_reg + SPORTSTER_RES_IRQ, 0);
- for (i = 0; i < 64; i++) {
- adr = cs->hw.spt.cfg_reg + i * 1024;
- release_region(adr, 8);
- }
-}
-
-static void
-reset_sportster(struct IsdnCardState *cs)
-{
- cs->hw.spt.res_irq |= SPORTSTER_RESET; /* Reset On */
- byteout(cs->hw.spt.cfg_reg + SPORTSTER_RES_IRQ, cs->hw.spt.res_irq);
- mdelay(10);
- cs->hw.spt.res_irq &= ~SPORTSTER_RESET; /* Reset Off */
- byteout(cs->hw.spt.cfg_reg + SPORTSTER_RES_IRQ, cs->hw.spt.res_irq);
- mdelay(10);
-}
-
-static int
-Sportster_card_msg(struct IsdnCardState *cs, int mt, void *arg)
-{
- u_long flags;
-
- switch (mt) {
- case CARD_RESET:
- spin_lock_irqsave(&cs->lock, flags);
- reset_sportster(cs);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_RELEASE:
- release_io_sportster(cs);
- return (0);
- case CARD_INIT:
- spin_lock_irqsave(&cs->lock, flags);
- reset_sportster(cs);
- inithscxisac(cs, 1);
- cs->hw.spt.res_irq |= SPORTSTER_INTE; /* IRQ On */
- byteout(cs->hw.spt.cfg_reg + SPORTSTER_RES_IRQ, cs->hw.spt.res_irq);
- inithscxisac(cs, 2);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_TEST:
- return (0);
- }
- return (0);
-}
-
-static int get_io_range(struct IsdnCardState *cs)
-{
- int i, j, adr;
-
- for (i = 0; i < 64; i++) {
- adr = cs->hw.spt.cfg_reg + i * 1024;
- if (!request_region(adr, 8, "sportster")) {
- printk(KERN_WARNING "HiSax: USR Sportster config port "
- "%x-%x already in use\n",
- adr, adr + 8);
- break;
- }
- }
- if (i == 64)
- return (1);
- else {
- for (j = 0; j < i; j++) {
- adr = cs->hw.spt.cfg_reg + j * 1024;
- release_region(adr, 8);
- }
- return (0);
- }
-}
-
-int setup_sportster(struct IsdnCard *card)
-{
- struct IsdnCardState *cs = card->cs;
- char tmp[64];
-
- strcpy(tmp, sportster_revision);
- printk(KERN_INFO "HiSax: USR Sportster driver Rev. %s\n", HiSax_getrev(tmp));
- if (cs->typ != ISDN_CTYPE_SPORTSTER)
- return (0);
-
- cs->hw.spt.cfg_reg = card->para[1];
- cs->irq = card->para[0];
- if (!get_io_range(cs))
- return (0);
- cs->hw.spt.isac = cs->hw.spt.cfg_reg + SPORTSTER_ISAC;
- cs->hw.spt.hscx[0] = cs->hw.spt.cfg_reg + SPORTSTER_HSCXA;
- cs->hw.spt.hscx[1] = cs->hw.spt.cfg_reg + SPORTSTER_HSCXB;
-
- switch (cs->irq) {
- case 5: cs->hw.spt.res_irq = 1;
- break;
- case 7: cs->hw.spt.res_irq = 2;
- break;
- case 10:cs->hw.spt.res_irq = 3;
- break;
- case 11:cs->hw.spt.res_irq = 4;
- break;
- case 12:cs->hw.spt.res_irq = 5;
- break;
- case 14:cs->hw.spt.res_irq = 6;
- break;
- case 15:cs->hw.spt.res_irq = 7;
- break;
- default:release_io_sportster(cs);
- printk(KERN_WARNING "Sportster: wrong IRQ\n");
- return (0);
- }
- printk(KERN_INFO "HiSax: USR Sportster config irq:%d cfg:0x%X\n",
- cs->irq, cs->hw.spt.cfg_reg);
- setup_isac(cs);
- cs->readisac = &ReadISAC;
- cs->writeisac = &WriteISAC;
- cs->readisacfifo = &ReadISACfifo;
- cs->writeisacfifo = &WriteISACfifo;
- cs->BC_Read_Reg = &ReadHSCX;
- cs->BC_Write_Reg = &WriteHSCX;
- cs->BC_Send_Data = &hscx_fill_fifo;
- cs->cardmsg = &Sportster_card_msg;
- cs->irq_func = &sportster_interrupt;
- ISACVersion(cs, "Sportster:");
- if (HscxVersion(cs, "Sportster:")) {
- printk(KERN_WARNING
- "Sportster: wrong HSCX versions check IO address\n");
- release_io_sportster(cs);
- return (0);
- }
- return (1);
-}
diff --git a/drivers/isdn/hisax/st5481.h b/drivers/isdn/hisax/st5481.h
deleted file mode 100644
index b421b86ca7da..000000000000
--- a/drivers/isdn/hisax/st5481.h
+++ /dev/null
@@ -1,529 +0,0 @@
-/*
- * Driver for ST5481 USB ISDN modem
- *
- * Author Frode Isaksen
- * Copyright 2001 by Frode Isaksen <fisaksen@bewan.com>
- * 2001 by Kai Germaschewski <kai.germaschewski@gmx.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#ifndef _ST5481_H_
-#define _ST5481_H_
-
-
-// USB IDs, the Product Id is in the range 0x4810-0x481F
-
-#define ST_VENDOR_ID 0x0483
-#define ST5481_PRODUCT_ID 0x4810
-#define ST5481_PRODUCT_ID_MASK 0xFFF0
-
-// ST5481 endpoints when using alternative setting 3 (2B+D).
-// To get the endpoint address, OR with 0x80 for IN endpoints.
-
-#define EP_CTRL 0x00U /* Control endpoint */
-#define EP_INT 0x01U /* Interrupt endpoint */
-#define EP_B1_OUT 0x02U /* B1 channel out */
-#define EP_B1_IN 0x03U /* B1 channel in */
-#define EP_B2_OUT 0x04U /* B2 channel out */
-#define EP_B2_IN 0x05U /* B2 channel in */
-#define EP_D_OUT 0x06U /* D channel out */
-#define EP_D_IN 0x07U /* D channel in */
-
-// Number of isochronous packets. With 20 packets we get
-// 50 interrupts/sec for each endpoint.
-
-#define NUM_ISO_PACKETS_D 20
-#define NUM_ISO_PACKETS_B 20
-
-// Size of each isochronous packet.
-// In outgoing direction we need to match ISDN data rates:
-// D: 2 bytes / msec -> 16 kbit / s
-// B: 8 bytes / msec -> 64 kbit / s
-#define SIZE_ISO_PACKETS_D_IN 16
-#define SIZE_ISO_PACKETS_D_OUT 2
-#define SIZE_ISO_PACKETS_B_IN 32
-#define SIZE_ISO_PACKETS_B_OUT 8
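
// Worked out: full-speed USB sends one isochronous packet per 1 ms frame, so
//   D out: 2 bytes/ms * 8 bit/byte * 1000 ms/s = 16 kbit/s
//   B out: 8 bytes/ms * 8 bit/byte * 1000 ms/s = 64 kbit/s
// and with 20 packets per URB, an URB completes every 20 ms, i.e. the
// 50 interrupts/sec per endpoint mentioned above.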
-
-// If we overrun/underrun, we send one packet with +/- 2 bytes
-#define B_FLOW_ADJUST 2
-
-// Registers that are written using vendor specific device request
-// on endpoint 0.
-
-#define LBA 0x02 /* S loopback */
-#define SET_DEFAULT 0x06 /* Soft reset */
-#define LBB 0x1D /* S maintenance loopback */
-#define STT 0x1e /* S force transmission signals */
-#define SDA_MIN 0x20 /* SDA-sin minimal value */
-#define SDA_MAX 0x21 /* SDA-sin maximal value */
-#define SDELAY_VALUE 0x22 /* Delay between Tx and Rx clock */
-#define IN_D_COUNTER 0x36 /* D receive channel fifo counter */
-#define OUT_D_COUNTER 0x37 /* D transmit channel fifo counter */
-#define IN_B1_COUNTER 0x38 /* B1 receive channel fifo counter */
-#define OUT_B1_COUNTER 0x39 /* B1 transmit channel fifo counter */
-#define IN_B2_COUNTER 0x3a /* B2 receive channel fifo counter */
-#define OUT_B2_COUNTER 0x3b /* B2 transmit channel fifo counter */
-#define FFCTRL_IN_D 0x3C /* D receive channel fifo threshold low */
-#define FFCTRH_IN_D 0x3D /* D receive channel fifo threshold high */
-#define FFCTRL_OUT_D 0x3E /* D transmit channel fifo threshold low */
-#define FFCTRH_OUT_D 0x3F /* D transmit channel fifo threshold high */
-#define FFCTRL_IN_B1 0x40 /* B1 receive channel fifo threshold low */
-#define FFCTRH_IN_B1 0x41 /* B1 receive channel fifo threshold high */
-#define FFCTRL_OUT_B1 0x42 /* B1 transmit channel fifo threshold low */
-#define FFCTRH_OUT_B1 0x43 /* B1 transmit channel fifo threshold high */
-#define FFCTRL_IN_B2 0x44 /* B2 receive channel fifo threshold low */
-#define FFCTRH_IN_B2 0x45 /* B2 receive channel fifo threshold high */
-#define FFCTRL_OUT_B2 0x46 /* B2 transmit channel fifo threshold low */
-#define FFCTRH_OUT_B2 0x47 /* B2 transmit channel fifo threshold high */
-#define MPMSK 0x4A /* Multi purpose interrupt MASK register */
-#define FFMSK_D 0x4c /* D fifo interrupt MASK register */
-#define FFMSK_B1 0x4e /* B1 fifo interrupt MASK register */
-#define FFMSK_B2 0x50 /* B2 fifo interrupt MASK register */
-#define GPIO_DIR 0x52 /* GPIO pins direction registers */
-#define GPIO_OUT 0x53 /* GPIO pins output register */
-#define GPIO_IN 0x54 /* GPIO pins input register */
-#define TXCI 0x56 /* CI command to be transmitted */
-
-
-// Format of the interrupt packet received on endpoint 1:
-//
-// +--------+--------+--------+--------+--------+--------+
-// !MPINT !FFINT_D !FFINT_B1!FFINT_B2!CCIST !GPIO_INT!
-// +--------+--------+--------+--------+--------+--------+
-
-// Offsets in the interrupt packet
-
-#define MPINT 0
-#define FFINT_D 1
-#define FFINT_B1 2
-#define FFINT_B2 3
-#define CCIST 4
-#define GPIO_INT 5
-#define INT_PKT_SIZE 6
-
-// MPINT
-#define LSD_INT 0x80 /* S line activity detected */
-#define RXCI_INT 0x40 /* Indicate primitive arrived */
-#define DEN_INT 0x20 /* Signal enabling data out of D Tx fifo */
-#define DCOLL_INT 0x10 /* D channel collision */
-#define AMIVN_INT 0x04 /* AMI violation number reached 2 */
-#define INFOI_INT 0x04 /* INFOi changed */
-#define DRXON_INT 0x02 /* Reception channel active */
-#define GPCHG_INT 0x01 /* GPIO pin value changed */
-
-// FFINT_x
-#define IN_OVERRUN 0x80 /* In fifo overrun */
-#define OUT_UNDERRUN 0x40 /* Out fifo underrun */
-#define IN_UP			0x20 /* In fifo threshold high up-crossed */
-#define IN_DOWN		0x10 /* In fifo threshold low down-crossed */
-#define OUT_UP			0x08 /* Out fifo threshold high up-crossed */
-#define OUT_DOWN		0x04 /* Out fifo threshold low down-crossed */
-#define IN_COUNTER_ZEROED 0x02 /* In down-counter reached 0 */
-#define OUT_COUNTER_ZEROED 0x01 /* Out down-counter reached 0 */
-
-#define ANY_REC_INT (IN_OVERRUN + IN_UP + IN_DOWN + IN_COUNTER_ZEROED)
-#define ANY_XMIT_INT (OUT_UNDERRUN + OUT_UP + OUT_DOWN + OUT_COUNTER_ZEROED)
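
/*
 * Decoding sketch for the 6-byte interrupt packet described above
 * (a hypothetical helper, not the driver's actual completion handler):
 */
static inline int int_pkt_has_work(const unsigned char *buf /* INT_PKT_SIZE bytes */)
{
	return (buf[MPINT] & RXCI_INT) ||	/* a C/I indication arrived, CCIST holds it */
	       (buf[FFINT_D] & ANY_REC_INT) ||	/* D receive fifo needs servicing */
	       (buf[FFINT_B1] & ANY_XMIT_INT) ||	/* B1 transmit fifo event */
	       (buf[FFINT_B2] & ANY_XMIT_INT);	/* B2 transmit fifo event */
}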
-
-
-// Level 1 commands that are sent using the TXCI device request
-#define ST5481_CMD_DR 0x0 /* Deactivation Request */
-#define ST5481_CMD_RES 0x1 /* state machine RESet */
-#define ST5481_CMD_TM1 0x2 /* Test Mode 1 */
-#define ST5481_CMD_TM2 0x3 /* Test Mode 2 */
-#define ST5481_CMD_PUP 0x7 /* Power UP */
-#define ST5481_CMD_AR8 0x8 /* Activation Request class 1 */
-#define ST5481_CMD_AR10 0x9 /* Activation Request class 2 */
-#define ST5481_CMD_ARL 0xA /* Activation Request Loopback */
-#define ST5481_CMD_PDN 0xF /* Power DoWn */
-
-// Turn on/off the LEDs using the GPIO device request.
-// To use the B LEDs, number_of_leds must be set to 4
-#define B1_LED 0x10U
-#define B2_LED 0x20U
-#define GREEN_LED 0x40U
-#define RED_LED 0x80U
-
-// D channel out states
-enum {
- ST_DOUT_NONE,
-
- ST_DOUT_SHORT_INIT,
- ST_DOUT_SHORT_WAIT_DEN,
-
- ST_DOUT_LONG_INIT,
- ST_DOUT_LONG_WAIT_DEN,
- ST_DOUT_NORMAL,
-
- ST_DOUT_WAIT_FOR_UNDERRUN,
- ST_DOUT_WAIT_FOR_NOT_BUSY,
- ST_DOUT_WAIT_FOR_STOP,
- ST_DOUT_WAIT_FOR_RESET,
-};
-
-#define DOUT_STATE_COUNT (ST_DOUT_WAIT_FOR_RESET + 1)
-
-// D channel out events
-enum {
- EV_DOUT_START_XMIT,
- EV_DOUT_COMPLETE,
- EV_DOUT_DEN,
- EV_DOUT_RESETED,
- EV_DOUT_STOPPED,
- EV_DOUT_COLL,
- EV_DOUT_UNDERRUN,
-};
-
-#define DOUT_EVENT_COUNT (EV_DOUT_UNDERRUN + 1)
-
-// ----------------------------------------------------------------------
-
-enum {
- ST_L1_F3,
- ST_L1_F4,
- ST_L1_F6,
- ST_L1_F7,
- ST_L1_F8,
-};
-
-#define L1_STATE_COUNT (ST_L1_F8 + 1)
-
-// The first 16 entries match the Level 1 indications that
-// are found at offset 4 (CCIST) in the interrupt packet
-
-enum {
- EV_IND_DP, // 0000 Deactivation Pending
- EV_IND_1, // 0001
- EV_IND_2, // 0010
- EV_IND_3, // 0011
- EV_IND_RSY, // 0100 ReSYnchronizing
- EV_IND_5, // 0101
- EV_IND_6, // 0110
- EV_IND_7, // 0111
- EV_IND_AP, // 1000 Activation Pending
- EV_IND_9, // 1001
- EV_IND_10, // 1010
- EV_IND_11, // 1011
- EV_IND_AI8, // 1100 Activation Indication class 8
- EV_IND_AI10,// 1101 Activation Indication class 10
- EV_IND_AIL, // 1110 Activation Indication Loopback
- EV_IND_DI, // 1111 Deactivation Indication
- EV_PH_ACTIVATE_REQ,
- EV_PH_DEACTIVATE_REQ,
- EV_TIMER3,
-};
-
-#define L1_EVENT_COUNT (EV_TIMER3 + 1)
-
-#define ERR(format, arg...) \
- printk(KERN_ERR "%s:%s: " format "\n" , __FILE__, __func__ , ## arg)
-
-#define WARNING(format, arg...) \
- printk(KERN_WARNING "%s:%s: " format "\n" , __FILE__, __func__ , ## arg)
-
-#define INFO(format, arg...) \
- printk(KERN_INFO "%s:%s: " format "\n" , __FILE__, __func__ , ## arg)
-
-#include <linux/isdn/hdlc.h>
-#include "fsm.h"
-#include "hisax_if.h"
-#include <linux/skbuff.h>
-
-/* ======================================================================
- * FIFO handling
- */
-
-/* Generic FIFO structure */
-struct fifo {
- u_char r, w, count, size;
- spinlock_t lock;
-};
-
-/*
- * Init a FIFO
- */
-static inline void fifo_init(struct fifo *fifo, int size)
-{
- fifo->r = fifo->w = fifo->count = 0;
- fifo->size = size;
- spin_lock_init(&fifo->lock);
-}
-
-/*
- * Add an entry to the FIFO
- */
-static inline int fifo_add(struct fifo *fifo)
-{
- unsigned long flags;
- int index;
-
- if (!fifo) {
- return -1;
- }
-
- spin_lock_irqsave(&fifo->lock, flags);
- if (fifo->count == fifo->size) {
- // FIFO full
- index = -1;
- } else {
-		// Return the index at which to store the next entry added to the FIFO
- index = fifo->w++ & (fifo->size - 1);
- fifo->count++;
- }
- spin_unlock_irqrestore(&fifo->lock, flags);
- return index;
-}
-
-/*
- * Remove an entry from the FIFO with the index returned.
- */
-static inline int fifo_remove(struct fifo *fifo)
-{
- unsigned long flags;
- int index;
-
- if (!fifo) {
- return -1;
- }
-
- spin_lock_irqsave(&fifo->lock, flags);
- if (!fifo->count) {
- // FIFO empty
- index = -1;
- } else {
- // Return index where to get the next data from the FIFO
- index = fifo->r++ & (fifo->size - 1);
- fifo->count--;
- }
- spin_unlock_irqrestore(&fifo->lock, flags);
-
- return index;
-}
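
For illustration, here is a minimal userspace sketch (not driver code, and with the spinlock omitted since it is single-threaded) of the ring-index scheme used by struct fifo above: add/remove return a slot index that the caller uses to address a parallel data array, as struct ctrl_msg_fifo does later in this header. The mask arithmetic assumes the FIFO size is a power of two.

#include <stdio.h>

#define DEMO_SIZE 4			/* must be a power of two for the mask to wrap */

struct demo_fifo {
	unsigned char r, w, count;	/* read index, write index, fill level */
};

static int demo_add(struct demo_fifo *f)
{
	if (f->count == DEMO_SIZE)
		return -1;			/* FIFO full */
	f->count++;
	return f->w++ & (DEMO_SIZE - 1);	/* slot to write into */
}

static int demo_remove(struct demo_fifo *f)
{
	if (!f->count)
		return -1;			/* FIFO empty */
	f->count--;
	return f->r++ & (DEMO_SIZE - 1);	/* slot to read from */
}

int main(void)
{
	struct demo_fifo f = { 0, 0, 0 };
	int data[DEMO_SIZE], i, idx;

	for (i = 0; i < 3; i++) {
		idx = demo_add(&f);
		data[idx] = i * 10;		/* parallel array, like ctrl_msg_fifo */
	}
	while ((idx = demo_remove(&f)) >= 0)
		printf("%d\n", data[idx]);	/* prints 0, 10, 20 in order */
	return 0;
}
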
-
-/* ======================================================================
- * control pipe
- */
-typedef void (*ctrl_complete_t)(void *);
-
-typedef struct ctrl_msg {
- struct usb_ctrlrequest dr;
- ctrl_complete_t complete;
- void *context;
-} ctrl_msg;
-
-/* FIFO of ctrl messages waiting to be sent */
-#define MAX_EP0_MSG 16
-struct ctrl_msg_fifo {
- struct fifo f;
- struct ctrl_msg data[MAX_EP0_MSG];
-};
-
-#define MAX_DFRAME_LEN_L1 300
-#define HSCX_BUFMAX 4096
-
-struct st5481_ctrl {
- struct ctrl_msg_fifo msg_fifo;
- unsigned long busy;
- struct urb *urb;
-};
-
-struct st5481_intr {
- // struct evt_fifo evt_fifo;
- struct urb *urb;
-};
-
-struct st5481_d_out {
- struct isdnhdlc_vars hdlc_state;
- struct urb *urb[2]; /* double buffering */
- unsigned long busy;
- struct sk_buff *tx_skb;
- struct FsmInst fsm;
-};
-
-struct st5481_b_out {
- struct isdnhdlc_vars hdlc_state;
- struct urb *urb[2]; /* double buffering */
- u_char flow_event;
- u_long busy;
- struct sk_buff *tx_skb;
-};
-
-struct st5481_in {
- struct isdnhdlc_vars hdlc_state;
- struct urb *urb[2]; /* double buffering */
- int mode;
- int bufsize;
- unsigned int num_packets;
- unsigned int packet_size;
- unsigned char ep, counter;
- unsigned char *rcvbuf;
- struct st5481_adapter *adapter;
- struct hisax_if *hisax_if;
-};
-
-int st5481_setup_in(struct st5481_in *in);
-void st5481_release_in(struct st5481_in *in);
-void st5481_in_mode(struct st5481_in *in, int mode);
-
-struct st5481_bcs {
- struct hisax_b_if b_if;
- struct st5481_adapter *adapter;
- struct st5481_in b_in;
- struct st5481_b_out b_out;
- int channel;
- int mode;
-};
-
-struct st5481_adapter {
- int number_of_leds;
- struct usb_device *usb_dev;
- struct hisax_d_if hisax_d_if;
-
- struct st5481_ctrl ctrl;
- struct st5481_intr intr;
- struct st5481_in d_in;
- struct st5481_d_out d_out;
-
- unsigned char leds;
- unsigned int led_counter;
-
- unsigned long event;
-
- struct FsmInst l1m;
- struct FsmTimer timer;
-
- struct st5481_bcs bcs[2];
-};
-
-#define TIMER3_VALUE 7000
-
-/* ======================================================================
- *
- */
-
-/*
- * Submit a URB with error reporting. This is a macro so that
- * __func__ expands to the name of the calling function.
- */
-#define SUBMIT_URB(urb, mem_flags) \
- ({ \
- int status; \
- if ((status = usb_submit_urb(urb, mem_flags)) < 0) { \
- WARNING("usb_submit_urb failed,status=%d", status); \
- } \
- status; \
- })
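
A small userspace analogue of the point made above, showing why this is a macro rather than a helper function: because the body is expanded at the call site, __func__ names the caller in the warning. It relies on GCC statement expressions, which the macro above already uses.

#include <stdio.h>

#define REPORT(expr) \
	({ \
		int status = (expr); \
		if (status < 0) \
			printf("%s: failed, status=%d\n", __func__, status); \
		status; \
	})

static int always_fails(void)
{
	return -1;
}

int main(void)
{
	REPORT(always_fails());		/* prints "main: failed, status=-1" */
	return 0;
}
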
-
-/*
- * USB double buffering, return the URB index (0 or 1).
- */
-static inline int get_buf_nr(struct urb *urbs[], struct urb *urb)
-{
- return (urbs[0] == urb ? 0 : 1);
-}
-
-/* ---------------------------------------------------------------------- */
-
-/* B Channel */
-
-int st5481_setup_b(struct st5481_bcs *bcs);
-void st5481_release_b(struct st5481_bcs *bcs);
-void st5481_d_l2l1(struct hisax_if *hisax_d_if, int pr, void *arg);
-
-/* D Channel */
-
-int st5481_setup_d(struct st5481_adapter *adapter);
-void st5481_release_d(struct st5481_adapter *adapter);
-void st5481_b_l2l1(struct hisax_if *b_if, int pr, void *arg);
-int st5481_d_init(void);
-void st5481_d_exit(void);
-
-/* USB */
-void st5481_ph_command(struct st5481_adapter *adapter, unsigned int command);
-int st5481_setup_isocpipes(struct urb *urb[2], struct usb_device *dev,
- unsigned int pipe, int num_packets,
- int packet_size, int buf_size,
- usb_complete_t complete, void *context);
-void st5481_release_isocpipes(struct urb *urb[2]);
-
-void st5481_usb_pipe_reset(struct st5481_adapter *adapter,
- u_char pipe, ctrl_complete_t complete, void *context);
-void st5481_usb_device_ctrl_msg(struct st5481_adapter *adapter,
- u8 request, u16 value,
- ctrl_complete_t complete, void *context);
-int st5481_setup_usb(struct st5481_adapter *adapter);
-void st5481_release_usb(struct st5481_adapter *adapter);
-void st5481_start(struct st5481_adapter *adapter);
-void st5481_stop(struct st5481_adapter *adapter);
-
-// ----------------------------------------------------------------------
-// debugging macros
-
-#define __debug_variable st5481_debug
-#include "hisax_debug.h"
-
-extern int st5481_debug;
-
-#ifdef CONFIG_HISAX_DEBUG
-
-#define DBG_ISO_PACKET(level, urb) \
- if (level & __debug_variable) dump_iso_packet(__func__, urb)
-
-static void __attribute__((unused))
-dump_iso_packet(const char *name, struct urb *urb)
-{
- int i, j;
- int len, ofs;
- u_char *data;
-
- printk(KERN_DEBUG "%s: packets=%d,errors=%d\n",
- name, urb->number_of_packets, urb->error_count);
- for (i = 0; i < urb->number_of_packets; ++i) {
- if (urb->pipe & USB_DIR_IN) {
- len = urb->iso_frame_desc[i].actual_length;
- } else {
- len = urb->iso_frame_desc[i].length;
- }
- ofs = urb->iso_frame_desc[i].offset;
- printk(KERN_DEBUG "len=%.2d,ofs=%.3d ", len, ofs);
- if (len) {
- data = urb->transfer_buffer + ofs;
- for (j = 0; j < len; j++) {
- printk("%.2x", data[j]);
- }
- }
- printk("\n");
- }
-}
-
-static inline const char *ST5481_CMD_string(int evt)
-{
- static char s[16];
-
- switch (evt) {
- case ST5481_CMD_DR: return "DR";
- case ST5481_CMD_RES: return "RES";
- case ST5481_CMD_TM1: return "TM1";
- case ST5481_CMD_TM2: return "TM2";
- case ST5481_CMD_PUP: return "PUP";
- case ST5481_CMD_AR8: return "AR8";
- case ST5481_CMD_AR10: return "AR10";
- case ST5481_CMD_ARL: return "ARL";
- case ST5481_CMD_PDN: return "PDN";
- }
-
- sprintf(s, "0x%x", evt);
- return s;
-}
-
-#else
-
-#define DBG_ISO_PACKET(level, urb) do {} while (0)
-
-#endif
-
-
-
-#endif
diff --git a/drivers/isdn/hisax/st5481_b.c b/drivers/isdn/hisax/st5481_b.c
deleted file mode 100644
index f64a36007800..000000000000
--- a/drivers/isdn/hisax/st5481_b.c
+++ /dev/null
@@ -1,380 +0,0 @@
-/*
- * Driver for ST5481 USB ISDN modem
- *
- * Author Frode Isaksen
- * Copyright 2001 by Frode Isaksen <fisaksen@bewan.com>
- * 2001 by Kai Germaschewski <kai.germaschewski@gmx.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#include <linux/init.h>
-#include <linux/gfp.h>
-#include <linux/usb.h>
-#include <linux/netdevice.h>
-#include <linux/bitrev.h>
-#include "st5481.h"
-
-static inline void B_L1L2(struct st5481_bcs *bcs, int pr, void *arg)
-{
- struct hisax_if *ifc = (struct hisax_if *) &bcs->b_if;
-
- ifc->l1l2(ifc, pr, arg);
-}
-
-/*
- * Encode and transmit next frame.
- */
-static void usb_b_out(struct st5481_bcs *bcs, int buf_nr)
-{
- struct st5481_b_out *b_out = &bcs->b_out;
- struct st5481_adapter *adapter = bcs->adapter;
- struct urb *urb;
- unsigned int packet_size, offset;
- int len, buf_size, bytes_sent;
- int i;
- struct sk_buff *skb;
-
- if (test_and_set_bit(buf_nr, &b_out->busy)) {
- DBG(4, "ep %d urb %d busy", (bcs->channel + 1) * 2, buf_nr);
- return;
- }
- urb = b_out->urb[buf_nr];
-
- // Adjust isoc buffer size according to flow state
- if (b_out->flow_event & (OUT_DOWN | OUT_UNDERRUN)) {
- buf_size = NUM_ISO_PACKETS_B * SIZE_ISO_PACKETS_B_OUT + B_FLOW_ADJUST;
- packet_size = SIZE_ISO_PACKETS_B_OUT + B_FLOW_ADJUST;
- DBG(4, "B%d,adjust flow,add %d bytes", bcs->channel + 1, B_FLOW_ADJUST);
- } else if (b_out->flow_event & OUT_UP) {
- buf_size = NUM_ISO_PACKETS_B * SIZE_ISO_PACKETS_B_OUT - B_FLOW_ADJUST;
- packet_size = SIZE_ISO_PACKETS_B_OUT - B_FLOW_ADJUST;
- DBG(4, "B%d,adjust flow,remove %d bytes", bcs->channel + 1, B_FLOW_ADJUST);
- } else {
- buf_size = NUM_ISO_PACKETS_B * SIZE_ISO_PACKETS_B_OUT;
- packet_size = SIZE_ISO_PACKETS_B_OUT;
- }
- b_out->flow_event = 0;
-
- len = 0;
- while (len < buf_size) {
- if ((skb = b_out->tx_skb)) {
- DBG_SKB(0x100, skb);
- DBG(4, "B%d,len=%d", bcs->channel + 1, skb->len);
-
- if (bcs->mode == L1_MODE_TRANS) {
- bytes_sent = buf_size - len;
- if (skb->len < bytes_sent)
- bytes_sent = skb->len;
- { /* bit-reverse tx bytes to get audible audio data */
- register unsigned char *src = skb->data;
- register unsigned char *dest = urb->transfer_buffer + len;
- register unsigned int count;
- for (count = 0; count < bytes_sent; count++)
- *dest++ = bitrev8(*src++);
- }
- len += bytes_sent;
- } else {
- len += isdnhdlc_encode(&b_out->hdlc_state,
- skb->data, skb->len, &bytes_sent,
- urb->transfer_buffer + len, buf_size-len);
- }
-
- skb_pull(skb, bytes_sent);
-
- if (!skb->len) {
- // Frame sent
- b_out->tx_skb = NULL;
- B_L1L2(bcs, PH_DATA | CONFIRM, (void *)(unsigned long) skb->truesize);
- dev_kfree_skb_any(skb);
-
-/* if (!(bcs->tx_skb = skb_dequeue(&bcs->sq))) { */
-/* st5481B_sched_event(bcs, B_XMTBUFREADY); */
-/* } */
- }
- } else {
- if (bcs->mode == L1_MODE_TRANS) {
- memset(urb->transfer_buffer + len, 0xff, buf_size-len);
- len = buf_size;
- } else {
- // Send flags
- len += isdnhdlc_encode(&b_out->hdlc_state,
- NULL, 0, &bytes_sent,
- urb->transfer_buffer + len, buf_size-len);
- }
- }
- }
-
- // Prepare the URB
- for (i = 0, offset = 0; offset < len; i++) {
- urb->iso_frame_desc[i].offset = offset;
- urb->iso_frame_desc[i].length = packet_size;
- offset += packet_size;
- packet_size = SIZE_ISO_PACKETS_B_OUT;
- }
- urb->transfer_buffer_length = len;
- urb->number_of_packets = i;
- urb->dev = adapter->usb_dev;
-
- DBG_ISO_PACKET(0x200, urb);
-
- SUBMIT_URB(urb, GFP_NOIO);
-}
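
For reference, a userspace illustration (not driver code) of the per-byte bit reversal applied above in transparent mode; the driver itself uses bitrev8() from <linux/bitrev.h>.

#include <stdio.h>

static unsigned char rev8(unsigned char b)
{
	unsigned char r = 0;
	int i;

	for (i = 0; i < 8; i++)
		r |= ((b >> i) & 1) << (7 - i);	/* move bit i to bit 7-i */
	return r;
}

int main(void)
{
	/* 0x01 (LSB set) becomes 0x80 (MSB set), matching bitrev8(0x01). */
	printf("%02x -> %02x\n", 0x01, rev8(0x01));
	return 0;
}
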
-
-/*
- * Start transferring (flags or data) on the B channel, since the
- * FIFO counters have been set to a non-zero value.
- */
-static void st5481B_start_xfer(void *context)
-{
- struct st5481_bcs *bcs = context;
-
- DBG(4, "B%d", bcs->channel + 1);
-
- // Start transmitting (flags or data) on B channel
-
- usb_b_out(bcs, 0);
- usb_b_out(bcs, 1);
-}
-
-/*
- * If the adapter has only 2 LEDs, the green
- * LED blinks at a rate that depends on the
- * number of open channels.
- */
-static void led_blink(struct st5481_adapter *adapter)
-{
- u_char leds = adapter->leds;
-
- // 50 frames/sec for each channel
- if (++adapter->led_counter % 50) {
- return;
- }
-
- if (adapter->led_counter % 100) {
- leds |= GREEN_LED;
- } else {
- leds &= ~GREEN_LED;
- }
-
- st5481_usb_device_ctrl_msg(adapter, GPIO_OUT, leds, NULL, NULL);
-}
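
To put numbers on the comment above (taking the roughly 50 URB completions per second per channel at face value): with one B channel open the counter advances 50 times a second, so the modulo-50 gate toggles the green LED about once per second, a 2 s blink period; with both channels open it advances 100 times a second and toggles every half second, a 1 s period.
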
-
-static void usb_b_out_complete(struct urb *urb)
-{
- struct st5481_bcs *bcs = urb->context;
- struct st5481_b_out *b_out = &bcs->b_out;
- struct st5481_adapter *adapter = bcs->adapter;
- int buf_nr;
-
- buf_nr = get_buf_nr(b_out->urb, urb);
- test_and_clear_bit(buf_nr, &b_out->busy);
-
- if (unlikely(urb->status < 0)) {
- switch (urb->status) {
- case -ENOENT:
- case -ESHUTDOWN:
- case -ECONNRESET:
- DBG(4, "urb killed status %d", urb->status);
- return; // Give up
- default:
- WARNING("urb status %d", urb->status);
- if (b_out->busy == 0) {
- st5481_usb_pipe_reset(adapter, (bcs->channel + 1) * 2 | USB_DIR_OUT, NULL, NULL);
- }
- break;
- }
- }
-
- usb_b_out(bcs, buf_nr);
-
- if (adapter->number_of_leds == 2)
- led_blink(adapter);
-}
-
-/*
- * Start or stop the transfer on the B channel.
- */
-static void st5481B_mode(struct st5481_bcs *bcs, int mode)
-{
- struct st5481_b_out *b_out = &bcs->b_out;
- struct st5481_adapter *adapter = bcs->adapter;
-
- DBG(4, "B%d,mode=%d", bcs->channel + 1, mode);
-
- if (bcs->mode == mode)
- return;
-
- bcs->mode = mode;
-
- // Cancel all USB transfers on this B channel
- usb_unlink_urb(b_out->urb[0]);
- usb_unlink_urb(b_out->urb[1]);
- b_out->busy = 0;
-
- st5481_in_mode(&bcs->b_in, mode);
- if (bcs->mode != L1_MODE_NULL) {
- // Open the B channel
- if (bcs->mode != L1_MODE_TRANS) {
- u32 features = HDLC_BITREVERSE;
- if (bcs->mode == L1_MODE_HDLC_56K)
- features |= HDLC_56KBIT;
- isdnhdlc_out_init(&b_out->hdlc_state, features);
- }
- st5481_usb_pipe_reset(adapter, (bcs->channel + 1) * 2, NULL, NULL);
-
- // Enable B channel interrupts
- st5481_usb_device_ctrl_msg(adapter, FFMSK_B1 + (bcs->channel * 2),
- OUT_UP + OUT_DOWN + OUT_UNDERRUN, NULL, NULL);
-
- // Enable B channel FIFOs
- st5481_usb_device_ctrl_msg(adapter, OUT_B1_COUNTER+(bcs->channel * 2), 32, st5481B_start_xfer, bcs);
- if (adapter->number_of_leds == 4) {
- if (bcs->channel == 0) {
- adapter->leds |= B1_LED;
- } else {
- adapter->leds |= B2_LED;
- }
- }
- } else {
- // Disable B channel interrupts
- st5481_usb_device_ctrl_msg(adapter, FFMSK_B1+(bcs->channel * 2), 0, NULL, NULL);
-
- // Disable B channel FIFOs
- st5481_usb_device_ctrl_msg(adapter, OUT_B1_COUNTER+(bcs->channel * 2), 0, NULL, NULL);
-
- if (adapter->number_of_leds == 4) {
- if (bcs->channel == 0) {
- adapter->leds &= ~B1_LED;
- } else {
- adapter->leds &= ~B2_LED;
- }
- } else {
- st5481_usb_device_ctrl_msg(adapter, GPIO_OUT, adapter->leds, NULL, NULL);
- }
- if (b_out->tx_skb) {
- dev_kfree_skb_any(b_out->tx_skb);
- b_out->tx_skb = NULL;
- }
-
- }
-}
-
-static int st5481_setup_b_out(struct st5481_bcs *bcs)
-{
- struct usb_device *dev = bcs->adapter->usb_dev;
- struct usb_interface *intf;
- struct usb_host_interface *altsetting = NULL;
- struct usb_host_endpoint *endpoint;
- struct st5481_b_out *b_out = &bcs->b_out;
-
- DBG(4, "");
-
- intf = usb_ifnum_to_if(dev, 0);
- if (intf)
- altsetting = usb_altnum_to_altsetting(intf, 3);
- if (!altsetting)
- return -ENXIO;
-
- // Allocate URBs and buffers for the B channel out
- endpoint = &altsetting->endpoint[EP_B1_OUT - 1 + bcs->channel * 2];
-
- DBG(4, "endpoint address=%02x,packet size=%d",
- endpoint->desc.bEndpointAddress, le16_to_cpu(endpoint->desc.wMaxPacketSize));
-
- // Allocate memory for 8000 bytes/sec + extra bytes if underrun
- return st5481_setup_isocpipes(b_out->urb, dev,
- usb_sndisocpipe(dev, endpoint->desc.bEndpointAddress),
- NUM_ISO_PACKETS_B, SIZE_ISO_PACKETS_B_OUT,
- NUM_ISO_PACKETS_B * SIZE_ISO_PACKETS_B_OUT + B_FLOW_ADJUST,
- usb_b_out_complete, bcs);
-}
-
-static void st5481_release_b_out(struct st5481_bcs *bcs)
-{
- struct st5481_b_out *b_out = &bcs->b_out;
-
- DBG(4, "");
-
- st5481_release_isocpipes(b_out->urb);
-}
-
-int st5481_setup_b(struct st5481_bcs *bcs)
-{
- int retval;
-
- DBG(4, "");
-
- retval = st5481_setup_b_out(bcs);
- if (retval)
- goto err;
- bcs->b_in.bufsize = HSCX_BUFMAX;
- bcs->b_in.num_packets = NUM_ISO_PACKETS_B;
- bcs->b_in.packet_size = SIZE_ISO_PACKETS_B_IN;
- bcs->b_in.ep = (bcs->channel ? EP_B2_IN : EP_B1_IN) | USB_DIR_IN;
- bcs->b_in.counter = bcs->channel ? IN_B2_COUNTER : IN_B1_COUNTER;
- bcs->b_in.adapter = bcs->adapter;
- bcs->b_in.hisax_if = &bcs->b_if.ifc;
- retval = st5481_setup_in(&bcs->b_in);
- if (retval)
- goto err_b_out;
-
-
- return 0;
-
-err_b_out:
- st5481_release_b_out(bcs);
-err:
- return retval;
-}
-
-/*
- * Release buffers and URBs for the B channels
- */
-void st5481_release_b(struct st5481_bcs *bcs)
-{
- DBG(4, "");
-
- st5481_release_in(&bcs->b_in);
- st5481_release_b_out(bcs);
-}
-
-/*
- * st5481_b_l2l1 is the entry point for upper layer routines that want to
- * transmit on the B channel. PH_DATA | REQUEST is a normal packet that
- * we either start transmitting (if idle) or queue (if busy).
- * PH_PULL | REQUEST can be called to request a callback message
- * (PH_PULL | CONFIRM)
- * once the link is idle. After a "pull" callback, the upper layer
- * routines can use PH_PULL | INDICATION to send data.
- */
-void st5481_b_l2l1(struct hisax_if *ifc, int pr, void *arg)
-{
- struct st5481_bcs *bcs = ifc->priv;
- struct sk_buff *skb = arg;
- long mode;
-
- DBG(4, "");
-
- switch (pr) {
- case PH_DATA | REQUEST:
- BUG_ON(bcs->b_out.tx_skb);
- bcs->b_out.tx_skb = skb;
- break;
- case PH_ACTIVATE | REQUEST:
- mode = (long) arg;
- DBG(4, "B%d,PH_ACTIVATE_REQUEST %ld", bcs->channel + 1, mode);
- st5481B_mode(bcs, mode);
- B_L1L2(bcs, PH_ACTIVATE | INDICATION, NULL);
- break;
- case PH_DEACTIVATE | REQUEST:
- DBG(4, "B%d,PH_DEACTIVATE_REQUEST", bcs->channel + 1);
- st5481B_mode(bcs, L1_MODE_NULL);
- B_L1L2(bcs, PH_DEACTIVATE | INDICATION, NULL);
- break;
- default:
- WARNING("pr %#x\n", pr);
- }
-}
diff --git a/drivers/isdn/hisax/st5481_d.c b/drivers/isdn/hisax/st5481_d.c
deleted file mode 100644
index e88c5c71fca7..000000000000
--- a/drivers/isdn/hisax/st5481_d.c
+++ /dev/null
@@ -1,780 +0,0 @@
-/*
- * Driver for ST5481 USB ISDN modem
- *
- * Author Frode Isaksen
- * Copyright 2001 by Frode Isaksen <fisaksen@bewan.com>
- * 2001 by Kai Germaschewski <kai.germaschewski@gmx.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#include <linux/init.h>
-#include <linux/gfp.h>
-#include <linux/usb.h>
-#include <linux/netdevice.h>
-#include "st5481.h"
-
-static void ph_connect(struct st5481_adapter *adapter);
-static void ph_disconnect(struct st5481_adapter *adapter);
-
-static struct Fsm l1fsm;
-
-static char *strL1State[] =
-{
- "ST_L1_F3",
- "ST_L1_F4",
- "ST_L1_F6",
- "ST_L1_F7",
- "ST_L1_F8",
-};
-
-static char *strL1Event[] =
-{
- "EV_IND_DP",
- "EV_IND_1",
- "EV_IND_2",
- "EV_IND_3",
- "EV_IND_RSY",
- "EV_IND_5",
- "EV_IND_6",
- "EV_IND_7",
- "EV_IND_AP",
- "EV_IND_9",
- "EV_IND_10",
- "EV_IND_11",
- "EV_IND_AI8",
- "EV_IND_AI10",
- "EV_IND_AIL",
- "EV_IND_DI",
- "EV_PH_ACTIVATE_REQ",
- "EV_PH_DEACTIVATE_REQ",
- "EV_TIMER3",
-};
-
-static inline void D_L1L2(struct st5481_adapter *adapter, int pr, void *arg)
-{
- struct hisax_if *ifc = (struct hisax_if *) &adapter->hisax_d_if;
-
- ifc->l1l2(ifc, pr, arg);
-}
-
-static void
-l1_go_f3(struct FsmInst *fi, int event, void *arg)
-{
- struct st5481_adapter *adapter = fi->userdata;
-
- if (fi->state == ST_L1_F7)
- ph_disconnect(adapter);
-
- FsmChangeState(fi, ST_L1_F3);
- D_L1L2(adapter, PH_DEACTIVATE | INDICATION, NULL);
-}
-
-static void
-l1_go_f6(struct FsmInst *fi, int event, void *arg)
-{
- struct st5481_adapter *adapter = fi->userdata;
-
- if (fi->state == ST_L1_F7)
- ph_disconnect(adapter);
-
- FsmChangeState(fi, ST_L1_F6);
-}
-
-static void
-l1_go_f7(struct FsmInst *fi, int event, void *arg)
-{
- struct st5481_adapter *adapter = fi->userdata;
-
- FsmDelTimer(&adapter->timer, 0);
- ph_connect(adapter);
- FsmChangeState(fi, ST_L1_F7);
- D_L1L2(adapter, PH_ACTIVATE | INDICATION, NULL);
-}
-
-static void
-l1_go_f8(struct FsmInst *fi, int event, void *arg)
-{
- struct st5481_adapter *adapter = fi->userdata;
-
- if (fi->state == ST_L1_F7)
- ph_disconnect(adapter);
-
- FsmChangeState(fi, ST_L1_F8);
-}
-
-static void
-l1_timer3(struct FsmInst *fi, int event, void *arg)
-{
- struct st5481_adapter *adapter = fi->userdata;
-
- st5481_ph_command(adapter, ST5481_CMD_DR);
- FsmChangeState(fi, ST_L1_F3);
- D_L1L2(adapter, PH_DEACTIVATE | INDICATION, NULL);
-}
-
-static void
-l1_ignore(struct FsmInst *fi, int event, void *arg)
-{
-}
-
-static void
-l1_activate(struct FsmInst *fi, int event, void *arg)
-{
- struct st5481_adapter *adapter = fi->userdata;
-
- st5481_ph_command(adapter, ST5481_CMD_DR);
- st5481_ph_command(adapter, ST5481_CMD_PUP);
- FsmRestartTimer(&adapter->timer, TIMER3_VALUE, EV_TIMER3, NULL, 2);
- st5481_ph_command(adapter, ST5481_CMD_AR8);
- FsmChangeState(fi, ST_L1_F4);
-}
-
-static struct FsmNode L1FnList[] __initdata =
-{
- {ST_L1_F3, EV_IND_DP, l1_ignore},
- {ST_L1_F3, EV_IND_AP, l1_go_f6},
- {ST_L1_F3, EV_IND_AI8, l1_go_f7},
- {ST_L1_F3, EV_IND_AI10, l1_go_f7},
- {ST_L1_F3, EV_PH_ACTIVATE_REQ, l1_activate},
-
- {ST_L1_F4, EV_TIMER3, l1_timer3},
- {ST_L1_F4, EV_IND_DP, l1_go_f3},
- {ST_L1_F4, EV_IND_AP, l1_go_f6},
- {ST_L1_F4, EV_IND_AI8, l1_go_f7},
- {ST_L1_F4, EV_IND_AI10, l1_go_f7},
-
- {ST_L1_F6, EV_TIMER3, l1_timer3},
- {ST_L1_F6, EV_IND_DP, l1_go_f3},
- {ST_L1_F6, EV_IND_AP, l1_ignore},
- {ST_L1_F6, EV_IND_AI8, l1_go_f7},
- {ST_L1_F6, EV_IND_AI10, l1_go_f7},
- {ST_L1_F7, EV_IND_RSY, l1_go_f8},
-
- {ST_L1_F7, EV_IND_DP, l1_go_f3},
- {ST_L1_F7, EV_IND_AP, l1_go_f6},
- {ST_L1_F7, EV_IND_AI8, l1_ignore},
- {ST_L1_F7, EV_IND_AI10, l1_ignore},
- {ST_L1_F7, EV_IND_RSY, l1_go_f8},
-
- {ST_L1_F8, EV_TIMER3, l1_timer3},
- {ST_L1_F8, EV_IND_DP, l1_go_f3},
- {ST_L1_F8, EV_IND_AP, l1_go_f6},
- {ST_L1_F8, EV_IND_AI8, l1_go_f8},
- {ST_L1_F8, EV_IND_AI10, l1_go_f8},
- {ST_L1_F8, EV_IND_RSY, l1_ignore},
-};
-
-static __printf(2, 3)
- void l1m_debug(struct FsmInst *fi, char *fmt, ...)
-{
- va_list args;
- char buf[256];
-
- va_start(args, fmt);
- vsnprintf(buf, sizeof(buf), fmt, args);
- DBG(8, "%s", buf);
- va_end(args);
-}
-
-/* ======================================================================
- * D-Channel out
- */
-
-/*
- D OUT state machine:
- ====================
-
- Transmit short frame (< 16 bytes of encoded data):
-
- L1 FRAME D_OUT_STATE USB D CHANNEL
- -------- ----------- --- ---------
-
- FIXME
-
- -> [xx..xx] SHORT_INIT -> [7Exx..xxC1C27EFF]
- SHORT_WAIT_DEN <> OUT_D_COUNTER=16
-
- END_OF_SHORT <- DEN_EVENT -> 7Exx
- xxxx
- xxxx
- xxxx
- xxxx
- xxxx
- C1C1
- 7EFF
- WAIT_FOR_RESET_IDLE <- D_UNDERRUN <- (8ms)
- IDLE <> Reset pipe
-
-
-
- Transmit long frame (>= 16 bytes of encoded data):
-
- L1 FRAME D_OUT_STATE USB D CHANNEL
- -------- ----------- --- ---------
-
- -> [xx...xx] IDLE
- WAIT_FOR_STOP <> OUT_D_COUNTER=0
- WAIT_FOR_RESET <> Reset pipe
- STOP
- INIT_LONG_FRAME -> [7Exx..xx]
- WAIT_DEN <> OUT_D_COUNTER=16
- OUT_NORMAL <- DEN_EVENT -> 7Exx
- END_OF_FRAME_BUSY -> [xxxx] xxxx
- END_OF_FRAME_NOT_BUSY -> [xxxx] xxxx
- -> [xxxx] xxxx
- -> [C1C2] xxxx
- -> [7EFF] xxxx
- xxxx
- xxxx
- ....
- xxxx
- C1C2
- 7EFF
- <- D_UNDERRUN <- (> 8ms)
- WAIT_FOR_STOP <> OUT_D_COUNTER=0
- WAIT_FOR_RESET <> Reset pipe
- STOP
-
-*/
-
-static struct Fsm dout_fsm;
-
-static char *strDoutState[] =
-{
- "ST_DOUT_NONE",
-
- "ST_DOUT_SHORT_INIT",
- "ST_DOUT_SHORT_WAIT_DEN",
-
- "ST_DOUT_LONG_INIT",
- "ST_DOUT_LONG_WAIT_DEN",
- "ST_DOUT_NORMAL",
-
- "ST_DOUT_WAIT_FOR_UNDERRUN",
- "ST_DOUT_WAIT_FOR_NOT_BUSY",
- "ST_DOUT_WAIT_FOR_STOP",
- "ST_DOUT_WAIT_FOR_RESET",
-};
-
-static char *strDoutEvent[] =
-{
- "EV_DOUT_START_XMIT",
- "EV_DOUT_COMPLETE",
- "EV_DOUT_DEN",
- "EV_DOUT_RESETED",
- "EV_DOUT_STOPPED",
- "EV_DOUT_COLL",
- "EV_DOUT_UNDERRUN",
-};
-
-static __printf(2, 3)
- void dout_debug(struct FsmInst *fi, char *fmt, ...)
-{
- va_list args;
- char buf[256];
-
- va_start(args, fmt);
- vsnprintf(buf, sizeof(buf), fmt, args);
- DBG(0x2, "%s", buf);
- va_end(args);
-}
-
-static void dout_stop_event(void *context)
-{
- struct st5481_adapter *adapter = context;
-
- FsmEvent(&adapter->d_out.fsm, EV_DOUT_STOPPED, NULL);
-}
-
-/*
- * Start the transfer of a D channel frame.
- */
-static void usb_d_out(struct st5481_adapter *adapter, int buf_nr)
-{
- struct st5481_d_out *d_out = &adapter->d_out;
- struct urb *urb;
- unsigned int num_packets, packet_offset;
- int len, buf_size, bytes_sent;
- struct sk_buff *skb;
- struct usb_iso_packet_descriptor *desc;
-
- if (d_out->fsm.state != ST_DOUT_NORMAL)
- return;
-
- if (test_and_set_bit(buf_nr, &d_out->busy)) {
- DBG(2, "ep %d urb %d busy %#lx", EP_D_OUT, buf_nr, d_out->busy);
- return;
- }
- urb = d_out->urb[buf_nr];
-
- skb = d_out->tx_skb;
-
- buf_size = NUM_ISO_PACKETS_D * SIZE_ISO_PACKETS_D_OUT;
-
- if (skb) {
- len = isdnhdlc_encode(&d_out->hdlc_state,
- skb->data, skb->len, &bytes_sent,
- urb->transfer_buffer, buf_size);
- skb_pull(skb, bytes_sent);
- } else {
- // Send flags or idle
- len = isdnhdlc_encode(&d_out->hdlc_state,
- NULL, 0, &bytes_sent,
- urb->transfer_buffer, buf_size);
- }
-
- if (len < buf_size) {
- FsmChangeState(&d_out->fsm, ST_DOUT_WAIT_FOR_UNDERRUN);
- }
- if (skb && !skb->len) {
- d_out->tx_skb = NULL;
- D_L1L2(adapter, PH_DATA | CONFIRM, NULL);
- dev_kfree_skb_any(skb);
- }
-
- // Prepare the URB
- urb->transfer_buffer_length = len;
- num_packets = 0;
- packet_offset = 0;
- while (packet_offset < len) {
- desc = &urb->iso_frame_desc[num_packets];
- desc->offset = packet_offset;
- desc->length = SIZE_ISO_PACKETS_D_OUT;
- if (len - packet_offset < desc->length)
- desc->length = len - packet_offset;
- num_packets++;
- packet_offset += desc->length;
- }
- urb->number_of_packets = num_packets;
-
- // Prepare the URB
- urb->dev = adapter->usb_dev;
- // Need to transmit the next buffer 2ms after the DEN_EVENT
- urb->transfer_flags = 0;
- urb->start_frame = usb_get_current_frame_number(adapter->usb_dev) + 2;
-
- DBG_ISO_PACKET(0x20, urb);
-
- if (usb_submit_urb(urb, GFP_KERNEL) < 0) {
- // There is another URB queued up
- urb->transfer_flags = URB_ISO_ASAP;
- SUBMIT_URB(urb, GFP_KERNEL);
- }
-}
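
Note the submission fallback above: the first attempt asks for an explicit start_frame two frames ahead of the current one; if usb_submit_urb() rejects it, presumably because the other URB of the pair is still queued, the URB is resubmitted with URB_ISO_ASAP so it is simply scheduled after the one already in flight.
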
-
-static void fifo_reseted(void *context)
-{
- struct st5481_adapter *adapter = context;
-
- FsmEvent(&adapter->d_out.fsm, EV_DOUT_RESETED, NULL);
-}
-
-static void usb_d_out_complete(struct urb *urb)
-{
- struct st5481_adapter *adapter = urb->context;
- struct st5481_d_out *d_out = &adapter->d_out;
- long buf_nr;
-
- DBG(2, "");
-
- buf_nr = get_buf_nr(d_out->urb, urb);
- test_and_clear_bit(buf_nr, &d_out->busy);
-
- if (unlikely(urb->status < 0)) {
- switch (urb->status) {
- case -ENOENT:
- case -ESHUTDOWN:
- case -ECONNRESET:
- DBG(1, "urb killed status %d", urb->status);
- break;
- default:
- WARNING("urb status %d", urb->status);
- if (d_out->busy == 0) {
- st5481_usb_pipe_reset(adapter, EP_D_OUT | USB_DIR_OUT, fifo_reseted, adapter);
- }
- break;
- }
- return; // Give up
- }
-
- FsmEvent(&adapter->d_out.fsm, EV_DOUT_COMPLETE, (void *) buf_nr);
-}
-
-/* ====================================================================== */
-
-static void dout_start_xmit(struct FsmInst *fsm, int event, void *arg)
-{
- // FIXME unify?
- struct st5481_adapter *adapter = fsm->userdata;
- struct st5481_d_out *d_out = &adapter->d_out;
- struct urb *urb;
- int len, bytes_sent;
- struct sk_buff *skb;
- int buf_nr = 0;
-
- skb = d_out->tx_skb;
-
- DBG(2, "len=%d", skb->len);
-
- isdnhdlc_out_init(&d_out->hdlc_state, HDLC_DCHANNEL | HDLC_BITREVERSE);
-
- if (test_and_set_bit(buf_nr, &d_out->busy)) {
- WARNING("ep %d urb %d busy %#lx", EP_D_OUT, buf_nr, d_out->busy);
- return;
- }
- urb = d_out->urb[buf_nr];
-
- DBG_SKB(0x10, skb);
- len = isdnhdlc_encode(&d_out->hdlc_state,
- skb->data, skb->len, &bytes_sent,
- urb->transfer_buffer, 16);
- skb_pull(skb, bytes_sent);
-
- if (len < 16)
- FsmChangeState(&d_out->fsm, ST_DOUT_SHORT_INIT);
- else
- FsmChangeState(&d_out->fsm, ST_DOUT_LONG_INIT);
-
- if (skb->len == 0) {
- d_out->tx_skb = NULL;
- D_L1L2(adapter, PH_DATA | CONFIRM, NULL);
- dev_kfree_skb_any(skb);
- }
-
-// Prepare the URB
- urb->transfer_buffer_length = len;
-
- urb->iso_frame_desc[0].offset = 0;
- urb->iso_frame_desc[0].length = len;
- urb->number_of_packets = 1;
-
- // Prepare the URB
- urb->dev = adapter->usb_dev;
- urb->transfer_flags = URB_ISO_ASAP;
-
- DBG_ISO_PACKET(0x20, urb);
- SUBMIT_URB(urb, GFP_KERNEL);
-}
-
-static void dout_short_fifo(struct FsmInst *fsm, int event, void *arg)
-{
- struct st5481_adapter *adapter = fsm->userdata;
- struct st5481_d_out *d_out = &adapter->d_out;
-
- FsmChangeState(&d_out->fsm, ST_DOUT_SHORT_WAIT_DEN);
- st5481_usb_device_ctrl_msg(adapter, OUT_D_COUNTER, 16, NULL, NULL);
-}
-
-static void dout_end_short_frame(struct FsmInst *fsm, int event, void *arg)
-{
- struct st5481_adapter *adapter = fsm->userdata;
- struct st5481_d_out *d_out = &adapter->d_out;
-
- FsmChangeState(&d_out->fsm, ST_DOUT_WAIT_FOR_UNDERRUN);
-}
-
-static void dout_long_enable_fifo(struct FsmInst *fsm, int event, void *arg)
-{
- struct st5481_adapter *adapter = fsm->userdata;
- struct st5481_d_out *d_out = &adapter->d_out;
-
- st5481_usb_device_ctrl_msg(adapter, OUT_D_COUNTER, 16, NULL, NULL);
- FsmChangeState(&d_out->fsm, ST_DOUT_LONG_WAIT_DEN);
-}
-
-static void dout_long_den(struct FsmInst *fsm, int event, void *arg)
-{
- struct st5481_adapter *adapter = fsm->userdata;
- struct st5481_d_out *d_out = &adapter->d_out;
-
- FsmChangeState(&d_out->fsm, ST_DOUT_NORMAL);
- usb_d_out(adapter, 0);
- usb_d_out(adapter, 1);
-}
-
-static void dout_reset(struct FsmInst *fsm, int event, void *arg)
-{
- struct st5481_adapter *adapter = fsm->userdata;
- struct st5481_d_out *d_out = &adapter->d_out;
-
- FsmChangeState(&d_out->fsm, ST_DOUT_WAIT_FOR_RESET);
- st5481_usb_pipe_reset(adapter, EP_D_OUT | USB_DIR_OUT, fifo_reseted, adapter);
-}
-
-static void dout_stop(struct FsmInst *fsm, int event, void *arg)
-{
- struct st5481_adapter *adapter = fsm->userdata;
- struct st5481_d_out *d_out = &adapter->d_out;
-
- FsmChangeState(&d_out->fsm, ST_DOUT_WAIT_FOR_STOP);
- st5481_usb_device_ctrl_msg(adapter, OUT_D_COUNTER, 0, dout_stop_event, adapter);
-}
-
-static void dout_underrun(struct FsmInst *fsm, int event, void *arg)
-{
- struct st5481_adapter *adapter = fsm->userdata;
- struct st5481_d_out *d_out = &adapter->d_out;
-
- if (test_bit(0, &d_out->busy) || test_bit(1, &d_out->busy)) {
- FsmChangeState(&d_out->fsm, ST_DOUT_WAIT_FOR_NOT_BUSY);
- } else {
- dout_stop(fsm, event, arg);
- }
-}
-
-static void dout_check_busy(struct FsmInst *fsm, int event, void *arg)
-{
- struct st5481_adapter *adapter = fsm->userdata;
- struct st5481_d_out *d_out = &adapter->d_out;
-
- if (!test_bit(0, &d_out->busy) && !test_bit(1, &d_out->busy))
- dout_stop(fsm, event, arg);
-}
-
-static void dout_reseted(struct FsmInst *fsm, int event, void *arg)
-{
- struct st5481_adapter *adapter = fsm->userdata;
- struct st5481_d_out *d_out = &adapter->d_out;
-
- FsmChangeState(&d_out->fsm, ST_DOUT_NONE);
- // FIXME locking
- if (d_out->tx_skb)
- FsmEvent(&d_out->fsm, EV_DOUT_START_XMIT, NULL);
-}
-
-static void dout_complete(struct FsmInst *fsm, int event, void *arg)
-{
- struct st5481_adapter *adapter = fsm->userdata;
- long buf_nr = (long) arg;
-
- usb_d_out(adapter, buf_nr);
-}
-
-static void dout_ignore(struct FsmInst *fsm, int event, void *arg)
-{
-}
-
-static struct FsmNode DoutFnList[] __initdata =
-{
- {ST_DOUT_NONE, EV_DOUT_START_XMIT, dout_start_xmit},
-
- {ST_DOUT_SHORT_INIT, EV_DOUT_COMPLETE, dout_short_fifo},
-
- {ST_DOUT_SHORT_WAIT_DEN, EV_DOUT_DEN, dout_end_short_frame},
- {ST_DOUT_SHORT_WAIT_DEN, EV_DOUT_UNDERRUN, dout_underrun},
-
- {ST_DOUT_LONG_INIT, EV_DOUT_COMPLETE, dout_long_enable_fifo},
-
- {ST_DOUT_LONG_WAIT_DEN, EV_DOUT_DEN, dout_long_den},
- {ST_DOUT_LONG_WAIT_DEN, EV_DOUT_UNDERRUN, dout_underrun},
-
- {ST_DOUT_NORMAL, EV_DOUT_UNDERRUN, dout_underrun},
- {ST_DOUT_NORMAL, EV_DOUT_COMPLETE, dout_complete},
-
- {ST_DOUT_WAIT_FOR_UNDERRUN, EV_DOUT_UNDERRUN, dout_underrun},
- {ST_DOUT_WAIT_FOR_UNDERRUN, EV_DOUT_COMPLETE, dout_ignore},
-
- {ST_DOUT_WAIT_FOR_NOT_BUSY, EV_DOUT_COMPLETE, dout_check_busy},
-
- {ST_DOUT_WAIT_FOR_STOP, EV_DOUT_STOPPED, dout_reset},
-
- {ST_DOUT_WAIT_FOR_RESET, EV_DOUT_RESETED, dout_reseted},
-};
-
-void st5481_d_l2l1(struct hisax_if *hisax_d_if, int pr, void *arg)
-{
- struct st5481_adapter *adapter = hisax_d_if->priv;
- struct sk_buff *skb = arg;
-
- switch (pr) {
- case PH_ACTIVATE | REQUEST:
- FsmEvent(&adapter->l1m, EV_PH_ACTIVATE_REQ, NULL);
- break;
- case PH_DEACTIVATE | REQUEST:
- FsmEvent(&adapter->l1m, EV_PH_DEACTIVATE_REQ, NULL);
- break;
- case PH_DATA | REQUEST:
- DBG(2, "PH_DATA REQUEST len %d", skb->len);
- BUG_ON(adapter->d_out.tx_skb);
- adapter->d_out.tx_skb = skb;
- FsmEvent(&adapter->d_out.fsm, EV_DOUT_START_XMIT, NULL);
- break;
- default:
- WARNING("pr %#x\n", pr);
- break;
- }
-}
-
-/* ======================================================================
- */
-
-/*
- * Start receiving on the D channel, since we have entered state F7.
- */
-static void ph_connect(struct st5481_adapter *adapter)
-{
- struct st5481_d_out *d_out = &adapter->d_out;
- struct st5481_in *d_in = &adapter->d_in;
-
- DBG(8, "");
-
- FsmChangeState(&d_out->fsm, ST_DOUT_NONE);
-
- // st5481_usb_device_ctrl_msg(adapter, FFMSK_D, OUT_UNDERRUN, NULL, NULL);
- st5481_usb_device_ctrl_msg(adapter, FFMSK_D, 0xfc, NULL, NULL);
- st5481_in_mode(d_in, L1_MODE_HDLC);
-
-#ifdef LOOPBACK
- // Turn loopback on (data sent on B and D looped back)
- st5481_usb_device_ctrl_msg(cs, LBB, 0x04, NULL, NULL);
-#endif
-
- st5481_usb_pipe_reset(adapter, EP_D_OUT | USB_DIR_OUT, NULL, NULL);
-
- // Turn on the green LED to tell that we are in state F7
- adapter->leds |= GREEN_LED;
- st5481_usb_device_ctrl_msg(adapter, GPIO_OUT, adapter->leds, NULL, NULL);
-}
-
-/*
- * Stop receiving on the D channel, since we are no longer in state F7.
- */
-static void ph_disconnect(struct st5481_adapter *adapter)
-{
- DBG(8, "");
-
- st5481_in_mode(&adapter->d_in, L1_MODE_NULL);
-
- // Turn off the green LED to tell that we left state F7
- adapter->leds &= ~GREEN_LED;
- st5481_usb_device_ctrl_msg(adapter, GPIO_OUT, adapter->leds, NULL, NULL);
-}
-
-static int st5481_setup_d_out(struct st5481_adapter *adapter)
-{
- struct usb_device *dev = adapter->usb_dev;
- struct usb_interface *intf;
- struct usb_host_interface *altsetting = NULL;
- struct usb_host_endpoint *endpoint;
- struct st5481_d_out *d_out = &adapter->d_out;
-
- DBG(2, "");
-
- intf = usb_ifnum_to_if(dev, 0);
- if (intf)
- altsetting = usb_altnum_to_altsetting(intf, 3);
- if (!altsetting)
- return -ENXIO;
-
- // Allocate URBs and buffers for the D channel out
- endpoint = &altsetting->endpoint[EP_D_OUT-1];
-
- DBG(2, "endpoint address=%02x,packet size=%d",
- endpoint->desc.bEndpointAddress, le16_to_cpu(endpoint->desc.wMaxPacketSize));
-
- return st5481_setup_isocpipes(d_out->urb, dev,
- usb_sndisocpipe(dev, endpoint->desc.bEndpointAddress),
- NUM_ISO_PACKETS_D, SIZE_ISO_PACKETS_D_OUT,
- NUM_ISO_PACKETS_D * SIZE_ISO_PACKETS_D_OUT,
- usb_d_out_complete, adapter);
-}
-
-static void st5481_release_d_out(struct st5481_adapter *adapter)
-{
- struct st5481_d_out *d_out = &adapter->d_out;
-
- DBG(2, "");
-
- st5481_release_isocpipes(d_out->urb);
-}
-
-int st5481_setup_d(struct st5481_adapter *adapter)
-{
- int retval;
-
- DBG(2, "");
-
- retval = st5481_setup_d_out(adapter);
- if (retval)
- goto err;
- adapter->d_in.bufsize = MAX_DFRAME_LEN_L1;
- adapter->d_in.num_packets = NUM_ISO_PACKETS_D;
- adapter->d_in.packet_size = SIZE_ISO_PACKETS_D_IN;
- adapter->d_in.ep = EP_D_IN | USB_DIR_IN;
- adapter->d_in.counter = IN_D_COUNTER;
- adapter->d_in.adapter = adapter;
- adapter->d_in.hisax_if = &adapter->hisax_d_if.ifc;
- retval = st5481_setup_in(&adapter->d_in);
- if (retval)
- goto err_d_out;
-
- adapter->l1m.fsm = &l1fsm;
- adapter->l1m.state = ST_L1_F3;
- adapter->l1m.debug = st5481_debug & 0x100;
- adapter->l1m.userdata = adapter;
- adapter->l1m.printdebug = l1m_debug;
- FsmInitTimer(&adapter->l1m, &adapter->timer);
-
- adapter->d_out.fsm.fsm = &dout_fsm;
- adapter->d_out.fsm.state = ST_DOUT_NONE;
- adapter->d_out.fsm.debug = st5481_debug & 0x100;
- adapter->d_out.fsm.userdata = adapter;
- adapter->d_out.fsm.printdebug = dout_debug;
-
- return 0;
-
-err_d_out:
- st5481_release_d_out(adapter);
-err:
- return retval;
-}
-
-void st5481_release_d(struct st5481_adapter *adapter)
-{
- DBG(2, "");
-
- st5481_release_in(&adapter->d_in);
- st5481_release_d_out(adapter);
-}
-
-/* ======================================================================
- * init / exit
- */
-
-int __init st5481_d_init(void)
-{
- int retval;
-
- l1fsm.state_count = L1_STATE_COUNT;
- l1fsm.event_count = L1_EVENT_COUNT;
- l1fsm.strEvent = strL1Event;
- l1fsm.strState = strL1State;
- retval = FsmNew(&l1fsm, L1FnList, ARRAY_SIZE(L1FnList));
- if (retval)
- goto err;
-
- dout_fsm.state_count = DOUT_STATE_COUNT;
- dout_fsm.event_count = DOUT_EVENT_COUNT;
- dout_fsm.strEvent = strDoutEvent;
- dout_fsm.strState = strDoutState;
- retval = FsmNew(&dout_fsm, DoutFnList, ARRAY_SIZE(DoutFnList));
- if (retval)
- goto err_l1;
-
- return 0;
-
-err_l1:
- FsmFree(&l1fsm);
-err:
- return retval;
-}
-
-// can't be __exit: also called from the module init error path
-void st5481_d_exit(void)
-{
- FsmFree(&l1fsm);
- FsmFree(&dout_fsm);
-}
diff --git a/drivers/isdn/hisax/st5481_init.c b/drivers/isdn/hisax/st5481_init.c
deleted file mode 100644
index 54ef9e4f8cbc..000000000000
--- a/drivers/isdn/hisax/st5481_init.c
+++ /dev/null
@@ -1,221 +0,0 @@
-/*
- * Driver for ST5481 USB ISDN modem
- *
- * Author Frode Isaksen
- * Copyright 2001 by Frode Isaksen <fisaksen@bewan.com>
- * 2001 by Kai Germaschewski <kai.germaschewski@gmx.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-/*
- * TODO:
- *
- * b layer1 delay?
- * hotplug / unregister issues
- * mod_inc/dec_use_count
- * unify parts of d/b channel usb handling
- * file header
- * avoid copy to isoc buffer?
- * improve usb delay?
- * merge l1 state machines?
- * clean up debug
- */
-
-#include <linux/module.h>
-#include <linux/init.h>
-#include <linux/usb.h>
-#include <linux/slab.h>
-#include "st5481.h"
-
-MODULE_DESCRIPTION("ISDN4Linux: driver for ST5481 USB ISDN adapter");
-MODULE_AUTHOR("Frode Isaksen");
-MODULE_LICENSE("GPL");
-
-static int protocol = 2; /* EURO-ISDN Default */
-module_param(protocol, int, 0);
-
-static int number_of_leds = 2; /* 2 LEDs on the adapter by default */
-module_param(number_of_leds, int, 0);
-
-#ifdef CONFIG_HISAX_DEBUG
-static int debug = 0;
-module_param(debug, int, 0);
-#endif
-int st5481_debug;
-
-/* ======================================================================
- * registration/deregistration with the USB layer
- */
-
-/*
- * This function will be called when the adapter is plugged
- * into the USB bus.
- */
-static int probe_st5481(struct usb_interface *intf,
- const struct usb_device_id *id)
-{
- struct usb_device *dev = interface_to_usbdev(intf);
- struct st5481_adapter *adapter;
- struct hisax_b_if *b_if[2];
- int retval, i;
-
- printk(KERN_INFO "st5481: found adapter VendorId %04x, ProductId %04x, LEDs %d\n",
- le16_to_cpu(dev->descriptor.idVendor),
- le16_to_cpu(dev->descriptor.idProduct),
- number_of_leds);
-
- adapter = kzalloc(sizeof(struct st5481_adapter), GFP_KERNEL);
- if (!adapter)
- return -ENOMEM;
-
- adapter->number_of_leds = number_of_leds;
- adapter->usb_dev = dev;
-
- adapter->hisax_d_if.owner = THIS_MODULE;
- adapter->hisax_d_if.ifc.priv = adapter;
- adapter->hisax_d_if.ifc.l2l1 = st5481_d_l2l1;
-
- for (i = 0; i < 2; i++) {
- adapter->bcs[i].adapter = adapter;
- adapter->bcs[i].channel = i;
- adapter->bcs[i].b_if.ifc.priv = &adapter->bcs[i];
- adapter->bcs[i].b_if.ifc.l2l1 = st5481_b_l2l1;
- }
-
- retval = st5481_setup_usb(adapter);
- if (retval < 0)
- goto err;
-
- retval = st5481_setup_d(adapter);
- if (retval < 0)
- goto err_usb;
-
- retval = st5481_setup_b(&adapter->bcs[0]);
- if (retval < 0)
- goto err_d;
-
- retval = st5481_setup_b(&adapter->bcs[1]);
- if (retval < 0)
- goto err_b;
-
- for (i = 0; i < 2; i++)
- b_if[i] = &adapter->bcs[i].b_if;
-
- if (hisax_register(&adapter->hisax_d_if, b_if, "st5481_usb",
- protocol) != 0)
- goto err_b1;
-
- st5481_start(adapter);
-
- usb_set_intfdata(intf, adapter);
- return 0;
-
-err_b1:
- st5481_release_b(&adapter->bcs[1]);
-err_b:
- st5481_release_b(&adapter->bcs[0]);
-err_d:
- st5481_release_d(adapter);
-err_usb:
- st5481_release_usb(adapter);
-err:
- kfree(adapter);
- return -EIO;
-}
-
-/*
- * This function will be called when the adapter is removed
- * from the USB bus.
- */
-static void disconnect_st5481(struct usb_interface *intf)
-{
- struct st5481_adapter *adapter = usb_get_intfdata(intf);
-
- DBG(1, "");
-
- usb_set_intfdata(intf, NULL);
- if (!adapter)
- return;
-
- st5481_stop(adapter);
- st5481_release_b(&adapter->bcs[1]);
- st5481_release_b(&adapter->bcs[0]);
- st5481_release_d(adapter);
- // ideally we would wait here for completion of any outstanding URBs
- mdelay(2);
- st5481_release_usb(adapter);
-
- hisax_unregister(&adapter->hisax_d_if);
-
- kfree(adapter);
-}
-
-/*
- * The last 4 bits of the Product Id are set by 4 pins on the chip.
- */
-static struct usb_device_id st5481_ids[] = {
- { USB_DEVICE(ST_VENDOR_ID, ST5481_PRODUCT_ID + 0x0) },
- { USB_DEVICE(ST_VENDOR_ID, ST5481_PRODUCT_ID + 0x1) },
- { USB_DEVICE(ST_VENDOR_ID, ST5481_PRODUCT_ID + 0x2) },
- { USB_DEVICE(ST_VENDOR_ID, ST5481_PRODUCT_ID + 0x3) },
- { USB_DEVICE(ST_VENDOR_ID, ST5481_PRODUCT_ID + 0x4) },
- { USB_DEVICE(ST_VENDOR_ID, ST5481_PRODUCT_ID + 0x5) },
- { USB_DEVICE(ST_VENDOR_ID, ST5481_PRODUCT_ID + 0x6) },
- { USB_DEVICE(ST_VENDOR_ID, ST5481_PRODUCT_ID + 0x7) },
- { USB_DEVICE(ST_VENDOR_ID, ST5481_PRODUCT_ID + 0x8) },
- { USB_DEVICE(ST_VENDOR_ID, ST5481_PRODUCT_ID + 0x9) },
- { USB_DEVICE(ST_VENDOR_ID, ST5481_PRODUCT_ID + 0xA) },
- { USB_DEVICE(ST_VENDOR_ID, ST5481_PRODUCT_ID + 0xB) },
- { USB_DEVICE(ST_VENDOR_ID, ST5481_PRODUCT_ID + 0xC) },
- { USB_DEVICE(ST_VENDOR_ID, ST5481_PRODUCT_ID + 0xD) },
- { USB_DEVICE(ST_VENDOR_ID, ST5481_PRODUCT_ID + 0xE) },
- { USB_DEVICE(ST_VENDOR_ID, ST5481_PRODUCT_ID + 0xF) },
- { }
-};
-MODULE_DEVICE_TABLE(usb, st5481_ids);
-
-static struct usb_driver st5481_usb_driver = {
- .name = "st5481_usb",
- .probe = probe_st5481,
- .disconnect = disconnect_st5481,
- .id_table = st5481_ids,
- .disable_hub_initiated_lpm = 1,
-};
-
-static int __init st5481_usb_init(void)
-{
- int retval;
-
-#ifdef CONFIG_HISAX_DEBUG
- st5481_debug = debug;
-#endif
-
- printk(KERN_INFO "hisax_st5481: ST5481 USB ISDN driver $Revision: 2.4.2.3 $\n");
-
- retval = st5481_d_init();
- if (retval < 0)
- goto out;
-
- retval = usb_register(&st5481_usb_driver);
- if (retval < 0)
- goto out_d_exit;
-
- return 0;
-
-out_d_exit:
- st5481_d_exit();
-out:
- return retval;
-}
-
-static void __exit st5481_usb_exit(void)
-{
- usb_deregister(&st5481_usb_driver);
- st5481_d_exit();
-}
-
-module_init(st5481_usb_init);
-module_exit(st5481_usb_exit);
diff --git a/drivers/isdn/hisax/st5481_usb.c b/drivers/isdn/hisax/st5481_usb.c
deleted file mode 100644
index f207fda691c7..000000000000
--- a/drivers/isdn/hisax/st5481_usb.c
+++ /dev/null
@@ -1,659 +0,0 @@
-/*
- * Driver for ST5481 USB ISDN modem
- *
- * Author Frode Isaksen
- * Copyright 2001 by Frode Isaksen <fisaksen@bewan.com>
- * 2001 by Kai Germaschewski <kai.germaschewski@gmx.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#include <linux/init.h>
-#include <linux/usb.h>
-#include <linux/slab.h>
-#include "st5481.h"
-
-static int st5481_isoc_flatten(struct urb *urb);
-
-/* ======================================================================
- * control pipe
- */
-
-/*
- * Send the next endpoint 0 request stored in the FIFO.
- * Called either from the completion handler or from usb_ctrl_msg.
- */
-static void usb_next_ctrl_msg(struct urb *urb,
- struct st5481_adapter *adapter)
-{
- struct st5481_ctrl *ctrl = &adapter->ctrl;
- int r_index;
-
- if (test_and_set_bit(0, &ctrl->busy)) {
- return;
- }
-
- if ((r_index = fifo_remove(&ctrl->msg_fifo.f)) < 0) {
- test_and_clear_bit(0, &ctrl->busy);
- return;
- }
- urb->setup_packet =
- (unsigned char *)&ctrl->msg_fifo.data[r_index];
-
- DBG(1, "request=0x%02x,value=0x%04x,index=%x",
- ((struct ctrl_msg *)urb->setup_packet)->dr.bRequest,
- ((struct ctrl_msg *)urb->setup_packet)->dr.wValue,
- ((struct ctrl_msg *)urb->setup_packet)->dr.wIndex);
-
- // Prepare the URB
- urb->dev = adapter->usb_dev;
-
- SUBMIT_URB(urb, GFP_ATOMIC);
-}
-
-/*
- * Asynchronous endpoint 0 request (async version of usb_control_msg).
- * The request will be queued up in a FIFO if the endpoint is busy.
- */
-static void usb_ctrl_msg(struct st5481_adapter *adapter,
- u8 request, u8 requesttype, u16 value, u16 index,
- ctrl_complete_t complete, void *context)
-{
- struct st5481_ctrl *ctrl = &adapter->ctrl;
- int w_index;
- struct ctrl_msg *ctrl_msg;
-
- if ((w_index = fifo_add(&ctrl->msg_fifo.f)) < 0) {
- WARNING("control msg FIFO full");
- return;
- }
- ctrl_msg = &ctrl->msg_fifo.data[w_index];
-
- ctrl_msg->dr.bRequestType = requesttype;
- ctrl_msg->dr.bRequest = request;
- ctrl_msg->dr.wValue = cpu_to_le16p(&value);
- ctrl_msg->dr.wIndex = cpu_to_le16p(&index);
- ctrl_msg->dr.wLength = 0;
- ctrl_msg->complete = complete;
- ctrl_msg->context = context;
-
- usb_next_ctrl_msg(ctrl->urb, adapter);
-}
-
-/*
- * Asynchronous endpoint 0 device request.
- */
-void st5481_usb_device_ctrl_msg(struct st5481_adapter *adapter,
- u8 request, u16 value,
- ctrl_complete_t complete, void *context)
-{
- usb_ctrl_msg(adapter, request,
- USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
- value, 0, complete, context);
-}
-
-/*
- * Asynchronous pipe reset (async version of usb_clear_halt).
- */
-void st5481_usb_pipe_reset(struct st5481_adapter *adapter,
- u_char pipe,
- ctrl_complete_t complete, void *context)
-{
- DBG(1, "pipe=%02x", pipe);
-
- usb_ctrl_msg(adapter,
- USB_REQ_CLEAR_FEATURE, USB_DIR_OUT | USB_RECIP_ENDPOINT,
- 0, pipe, complete, context);
-}
-
-
-/*
- Physical level functions
-*/
-
-void st5481_ph_command(struct st5481_adapter *adapter, unsigned int command)
-{
- DBG(8, "command=%s", ST5481_CMD_string(command));
-
- st5481_usb_device_ctrl_msg(adapter, TXCI, command, NULL, NULL);
-}
-
-/*
- * The request on endpoint 0 has completed.
- * Call the user provided completion routine and try
- * to send the next request.
- */
-static void usb_ctrl_complete(struct urb *urb)
-{
- struct st5481_adapter *adapter = urb->context;
- struct st5481_ctrl *ctrl = &adapter->ctrl;
- struct ctrl_msg *ctrl_msg;
-
- if (unlikely(urb->status < 0)) {
- switch (urb->status) {
- case -ENOENT:
- case -ESHUTDOWN:
- case -ECONNRESET:
- DBG(1, "urb killed status %d", urb->status);
- return; // Give up
- default:
- WARNING("urb status %d", urb->status);
- break;
- }
- }
-
- ctrl_msg = (struct ctrl_msg *)urb->setup_packet;
-
- if (ctrl_msg->dr.bRequest == USB_REQ_CLEAR_FEATURE) {
- /* Special case handling for pipe reset */
- le16_to_cpus(&ctrl_msg->dr.wIndex);
- usb_reset_endpoint(adapter->usb_dev, ctrl_msg->dr.wIndex);
- }
-
- if (ctrl_msg->complete)
- ctrl_msg->complete(ctrl_msg->context);
-
- clear_bit(0, &ctrl->busy);
-
- // Try to send next control message
- usb_next_ctrl_msg(urb, adapter);
- return;
-}
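
A compact userspace model (illustrative only, with no locking) of the endpoint-0 serialization implemented by usb_ctrl_msg, usb_next_ctrl_msg and usb_ctrl_complete above: requests are queued in a FIFO, at most one is in flight at a time, and each completion clears the busy flag and kicks off the next queued request.

#include <stdio.h>

#define QSIZE 4				/* power of two, like MAX_EP0_MSG */

static int queue[QSIZE];
static int qr, qw, qcount, busy;

static void submit(int req);		/* forward declaration */

static void try_next(void)
{
	if (busy || !qcount)
		return;			/* endpoint busy or nothing queued */
	busy = 1;
	qcount--;
	submit(queue[qr++ & (QSIZE - 1)]);
}

static void complete(void)
{
	busy = 0;			/* request finished */
	try_next();			/* send the next queued one, if any */
}

static void submit(int req)
{
	printf("submit request %d\n", req);
}

static void queue_request(int req)
{
	if (qcount == QSIZE)
		return;			/* FIFO full; the driver warns here */
	queue[qw++ & (QSIZE - 1)] = req;
	qcount++;
	try_next();
}

int main(void)
{
	queue_request(1);		/* submitted immediately */
	queue_request(2);		/* queued: endpoint busy */
	complete();			/* request 1 done -> request 2 goes out */
	complete();
	return 0;
}
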
-
-/* ======================================================================
- * interrupt pipe
- */
-
-/*
- * The interrupt endpoint completion handler is called whenever one
- * of the 6 registers changes state (depending on the masks).
- * Decode the register values and post the corresponding events.
- * Called in interrupt context.
- */
-static void usb_int_complete(struct urb *urb)
-{
- u8 *data = urb->transfer_buffer;
- u8 irqbyte;
- struct st5481_adapter *adapter = urb->context;
- int j;
- int status;
-
- switch (urb->status) {
- case 0:
- /* success */
- break;
- case -ECONNRESET:
- case -ENOENT:
- case -ESHUTDOWN:
- /* this urb is terminated, clean up */
- DBG(2, "urb shutting down with status: %d", urb->status);
- return;
- default:
- WARNING("nonzero urb status received: %d", urb->status);
- goto exit;
- }
-
-
- DBG_PACKET(2, data, INT_PKT_SIZE);
-
- if (urb->actual_length == 0) {
- goto exit;
- }
-
- irqbyte = data[MPINT];
- if (irqbyte & DEN_INT)
- FsmEvent(&adapter->d_out.fsm, EV_DOUT_DEN, NULL);
-
- if (irqbyte & DCOLL_INT)
- FsmEvent(&adapter->d_out.fsm, EV_DOUT_COLL, NULL);
-
- irqbyte = data[FFINT_D];
- if (irqbyte & OUT_UNDERRUN)
- FsmEvent(&adapter->d_out.fsm, EV_DOUT_UNDERRUN, NULL);
-
- if (irqbyte & OUT_DOWN)
- ;// printk("OUT_DOWN\n");
-
- irqbyte = data[MPINT];
- if (irqbyte & RXCI_INT)
- FsmEvent(&adapter->l1m, data[CCIST] & 0x0f, NULL);
-
- for (j = 0; j < 2; j++)
- adapter->bcs[j].b_out.flow_event |= data[FFINT_B1 + j];
-
- urb->actual_length = 0;
-
-exit:
- status = usb_submit_urb(urb, GFP_ATOMIC);
- if (status)
- WARNING("usb_submit_urb failed with result %d", status);
-}
-
-/* ======================================================================
- * initialization
- */
-
-int st5481_setup_usb(struct st5481_adapter *adapter)
-{
- struct usb_device *dev = adapter->usb_dev;
- struct st5481_ctrl *ctrl = &adapter->ctrl;
- struct st5481_intr *intr = &adapter->intr;
- struct usb_interface *intf;
- struct usb_host_interface *altsetting = NULL;
- struct usb_host_endpoint *endpoint;
- int status;
- struct urb *urb;
- u8 *buf;
-
- DBG(2, "");
-
- if ((status = usb_reset_configuration(dev)) < 0) {
- WARNING("reset_configuration failed,status=%d", status);
- return status;
- }
-
- intf = usb_ifnum_to_if(dev, 0);
- if (intf)
- altsetting = usb_altnum_to_altsetting(intf, 3);
- if (!altsetting)
- return -ENXIO;
-
- // Check if the config is sane
- if (altsetting->desc.bNumEndpoints != 7) {
- WARNING("expecting 7 got %d endpoints!", altsetting->desc.bNumEndpoints);
- return -EINVAL;
- }
-
- // The descriptor is wrong for some early samples of the ST5481 chip
- altsetting->endpoint[3].desc.wMaxPacketSize = cpu_to_le16(32);
- altsetting->endpoint[4].desc.wMaxPacketSize = cpu_to_le16(32);
-
- // Use alternative setting 3 on interface 0 to have 2B+D
- if ((status = usb_set_interface(dev, 0, 3)) < 0) {
- WARNING("usb_set_interface failed,status=%d", status);
- return status;
- }
-
- // Allocate URB for control endpoint
- urb = usb_alloc_urb(0, GFP_KERNEL);
- if (!urb) {
- return -ENOMEM;
- }
- ctrl->urb = urb;
-
- // Fill the control URB
- usb_fill_control_urb(urb, dev,
- usb_sndctrlpipe(dev, 0),
- NULL, NULL, 0, usb_ctrl_complete, adapter);
-
-
- fifo_init(&ctrl->msg_fifo.f, ARRAY_SIZE(ctrl->msg_fifo.data));
-
- // Allocate URBs and buffers for interrupt endpoint
- urb = usb_alloc_urb(0, GFP_KERNEL);
- if (!urb) {
- goto err1;
- }
- intr->urb = urb;
-
- buf = kmalloc(INT_PKT_SIZE, GFP_KERNEL);
- if (!buf) {
- goto err2;
- }
-
- endpoint = &altsetting->endpoint[EP_INT-1];
-
- // Fill the interrupt URB
- usb_fill_int_urb(urb, dev,
- usb_rcvintpipe(dev, endpoint->desc.bEndpointAddress),
- buf, INT_PKT_SIZE,
- usb_int_complete, adapter,
- endpoint->desc.bInterval);
-
- return 0;
-err2:
- usb_free_urb(intr->urb);
- intr->urb = NULL;
-err1:
- usb_free_urb(ctrl->urb);
- ctrl->urb = NULL;
-
- return -ENOMEM;
-}
-
-/*
- * Release buffers and URBs for the interrupt and control
- * endpoint.
- */
-void st5481_release_usb(struct st5481_adapter *adapter)
-{
- struct st5481_intr *intr = &adapter->intr;
- struct st5481_ctrl *ctrl = &adapter->ctrl;
-
- DBG(1, "");
-
- // Stop and free Control and Interrupt URBs
- usb_kill_urb(ctrl->urb);
- kfree(ctrl->urb->transfer_buffer);
- usb_free_urb(ctrl->urb);
- ctrl->urb = NULL;
-
- usb_kill_urb(intr->urb);
- kfree(intr->urb->transfer_buffer);
- usb_free_urb(intr->urb);
- intr->urb = NULL;
-}
-
-/*
- * Initialize the adapter.
- */
-void st5481_start(struct st5481_adapter *adapter)
-{
- static const u8 init_cmd_table[] = {
- SET_DEFAULT, 0,
- STT, 0,
- SDA_MIN, 0x0d,
- SDA_MAX, 0x29,
- SDELAY_VALUE, 0x14,
- GPIO_DIR, 0x01,
- GPIO_OUT, RED_LED,
-// FFCTRL_OUT_D,4,
-// FFCTRH_OUT_D,12,
- FFCTRL_OUT_B1, 6,
- FFCTRH_OUT_B1, 20,
- FFCTRL_OUT_B2, 6,
- FFCTRH_OUT_B2, 20,
- MPMSK, RXCI_INT + DEN_INT + DCOLL_INT,
- 0
- };
- struct st5481_intr *intr = &adapter->intr;
- int i = 0;
- u8 request, value;
-
- DBG(8, "");
-
- adapter->leds = RED_LED;
-
- // Start receiving on the interrupt endpoint
- SUBMIT_URB(intr->urb, GFP_KERNEL);
-
- while ((request = init_cmd_table[i++])) {
- value = init_cmd_table[i++];
- st5481_usb_device_ctrl_msg(adapter, request, value, NULL, NULL);
- }
- st5481_ph_command(adapter, ST5481_CMD_PUP);
-}
-
-/*
- * Reset the adapter to default values.
- */
-void st5481_stop(struct st5481_adapter *adapter)
-{
- DBG(8, "");
-
- st5481_usb_device_ctrl_msg(adapter, SET_DEFAULT, 0, NULL, NULL);
-}
-
-/* ======================================================================
- * isochronous USB helpers
- */
-
-static void
-fill_isoc_urb(struct urb *urb, struct usb_device *dev,
- unsigned int pipe, void *buf, int num_packets,
- int packet_size, usb_complete_t complete,
- void *context)
-{
- int k;
-
- usb_fill_int_urb(urb, dev, pipe, buf, num_packets * packet_size,
- complete, context, 1);
-
- urb->number_of_packets = num_packets;
- urb->transfer_flags = URB_ISO_ASAP;
- for (k = 0; k < num_packets; k++) {
- urb->iso_frame_desc[k].offset = packet_size * k;
- urb->iso_frame_desc[k].length = packet_size;
- urb->iso_frame_desc[k].actual_length = 0;
- }
-}
-
-int
-st5481_setup_isocpipes(struct urb *urb[2], struct usb_device *dev,
- unsigned int pipe, int num_packets,
- int packet_size, int buf_size,
- usb_complete_t complete, void *context)
-{
- int j, retval;
- unsigned char *buf;
-
- for (j = 0; j < 2; j++) {
- retval = -ENOMEM;
- urb[j] = usb_alloc_urb(num_packets, GFP_KERNEL);
- if (!urb[j])
- goto err;
-
- // Allocate memory for 2000 bytes/sec (16 kbit/s)
- buf = kmalloc(buf_size, GFP_KERNEL);
- if (!buf)
- goto err;
-
- // Fill the isochronous URB
- fill_isoc_urb(urb[j], dev, pipe, buf,
- num_packets, packet_size, complete,
- context);
- }
- return 0;
-
-err:
- for (j = 0; j < 2; j++) {
- if (urb[j]) {
- kfree(urb[j]->transfer_buffer);
- urb[j]->transfer_buffer = NULL;
- usb_free_urb(urb[j]);
- urb[j] = NULL;
- }
- }
- return retval;
-}
-
-void st5481_release_isocpipes(struct urb *urb[2])
-{
- int j;
-
- for (j = 0; j < 2; j++) {
- usb_kill_urb(urb[j]);
- kfree(urb[j]->transfer_buffer);
- usb_free_urb(urb[j]);
- urb[j] = NULL;
- }
-}
-
-/*
- * Decode frames received on the B/D channel.
- * Note that this function is called continuously, handling
- * 64 kbit/s (B) or 16 kbit/s (D) of data, and is therefore
- * invoked 50 times per second with 20 ISOC descriptors per URB.
- * Called in interrupt context.
- */
-static void usb_in_complete(struct urb *urb)
-{
- struct st5481_in *in = urb->context;
- unsigned char *ptr;
- struct sk_buff *skb;
- int len, count, status;
-
- if (unlikely(urb->status < 0)) {
- switch (urb->status) {
- case -ENOENT:
- case -ESHUTDOWN:
- case -ECONNRESET:
- DBG(1, "urb killed status %d", urb->status);
- return; // Give up
- default:
- WARNING("urb status %d", urb->status);
- break;
- }
- }
-
- DBG_ISO_PACKET(0x80, urb);
-
- len = st5481_isoc_flatten(urb);
- ptr = urb->transfer_buffer;
- while (len > 0) {
- if (in->mode == L1_MODE_TRANS) {
- memcpy(in->rcvbuf, ptr, len);
- status = len;
- len = 0;
- } else {
- status = isdnhdlc_decode(&in->hdlc_state, ptr, len, &count,
- in->rcvbuf, in->bufsize);
- ptr += count;
- len -= count;
- }
-
- if (status > 0) {
- // Good frame received
- DBG(4, "count=%d", status);
- DBG_PACKET(0x400, in->rcvbuf, status);
- if (!(skb = dev_alloc_skb(status))) {
- WARNING("receive out of memory\n");
- break;
- }
- skb_put_data(skb, in->rcvbuf, status);
- in->hisax_if->l1l2(in->hisax_if, PH_DATA | INDICATION, skb);
- } else if (status == -HDLC_CRC_ERROR) {
- INFO("CRC error");
- } else if (status == -HDLC_FRAMING_ERROR) {
- INFO("framing error");
- } else if (status == -HDLC_LENGTH_ERROR) {
- INFO("length error");
- }
- }
-
- // Prepare URB for next transfer
- urb->dev = in->adapter->usb_dev;
- urb->actual_length = 0;
-
- SUBMIT_URB(urb, GFP_ATOMIC);
-}
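
The arithmetic behind the comment above, taking its 20 descriptors per URB at face value: full-speed isochronous frames are 1 ms, so one URB spans 20 ms and completes 1000/20 = 50 times per second on either channel; at 64 kbit/s (B channel) that is 8000 bytes/s, or up to 160 bytes per URB, and at 16 kbit/s (D channel) it is 2000 bytes/s, or up to 40 bytes per URB.
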
-
-int st5481_setup_in(struct st5481_in *in)
-{
- struct usb_device *dev = in->adapter->usb_dev;
- int retval;
-
- DBG(4, "");
-
- in->rcvbuf = kmalloc(in->bufsize, GFP_KERNEL);
- retval = -ENOMEM;
- if (!in->rcvbuf)
- goto err;
-
- retval = st5481_setup_isocpipes(in->urb, dev,
- usb_rcvisocpipe(dev, in->ep),
- in->num_packets, in->packet_size,
- in->num_packets * in->packet_size,
- usb_in_complete, in);
- if (retval)
- goto err_free;
- return 0;
-
-err_free:
- kfree(in->rcvbuf);
-err:
- return retval;
-}
-
-void st5481_release_in(struct st5481_in *in)
-{
- DBG(2, "");
-
- st5481_release_isocpipes(in->urb);
-}
-
-/*
- * Make the transfer_buffer contiguous by
- * copying from the iso descriptors if necessary.
- */
-static int st5481_isoc_flatten(struct urb *urb)
-{
- struct usb_iso_packet_descriptor *pipd, *pend;
- unsigned char *src, *dst;
- unsigned int len;
-
- if (urb->status < 0) {
- return urb->status;
- }
- for (pipd = &urb->iso_frame_desc[0],
- pend = &urb->iso_frame_desc[urb->number_of_packets],
- dst = urb->transfer_buffer;
- pipd < pend;
- pipd++) {
-
- if (pipd->status < 0) {
- return (pipd->status);
- }
-
- len = pipd->actual_length;
- pipd->actual_length = 0;
- src = urb->transfer_buffer + pipd->offset;
-
- if (src != dst) {
- // Need to copy, since the isoc packets did not fill their slots
- while (len--) {
- *dst++ = *src++;
- }
- } else {
- // No need to copy, just update destination buffer
- dst += len;
- }
- }
- // Return size of flattened buffer
- return (dst - (unsigned char *)urb->transfer_buffer);
-}
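
A minimal userspace model (illustrative only, not driver code) of the flattening step above: per-packet payloads that arrived shorter than their fixed slots are compacted into a contiguous prefix of the buffer, and the compacted length is returned.

#include <stdio.h>
#include <string.h>

struct pkt {
	unsigned int offset, actual_length;
};

static int flatten(unsigned char *buf, struct pkt *p, int npkt)
{
	unsigned char *dst = buf;
	int i;

	for (i = 0; i < npkt; i++) {
		unsigned char *src = buf + p[i].offset;

		if (src != dst)
			memmove(dst, src, p[i].actual_length);
		dst += p[i].actual_length;
	}
	return dst - buf;			/* size of the flattened data */
}

int main(void)
{
	/* Two 4-byte slots, but only 2 and 3 bytes actually received. */
	unsigned char buf[8] = { 'a', 'b', 0, 0, 'c', 'd', 'e', 0 };
	struct pkt p[2] = { { 0, 2 }, { 4, 3 } };
	int len = flatten(buf, p, 2);

	printf("%.*s (%d bytes)\n", len, (char *)buf, len);	/* "abcde (5 bytes)" */
	return 0;
}
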
-
-static void st5481_start_rcv(void *context)
-{
- struct st5481_in *in = context;
- struct st5481_adapter *adapter = in->adapter;
-
- DBG(4, "");
-
- in->urb[0]->dev = adapter->usb_dev;
- SUBMIT_URB(in->urb[0], GFP_KERNEL);
-
- in->urb[1]->dev = adapter->usb_dev;
- SUBMIT_URB(in->urb[1], GFP_KERNEL);
-}
-
-void st5481_in_mode(struct st5481_in *in, int mode)
-{
- if (in->mode == mode)
- return;
-
- in->mode = mode;
-
- usb_unlink_urb(in->urb[0]);
- usb_unlink_urb(in->urb[1]);
-
- if (in->mode != L1_MODE_NULL) {
- if (in->mode != L1_MODE_TRANS) {
- u32 features = HDLC_BITREVERSE;
-
- if (in->mode == L1_MODE_HDLC_56K)
- features |= HDLC_56KBIT;
- isdnhdlc_rcv_init(&in->hdlc_state, features);
- }
- st5481_usb_pipe_reset(in->adapter, in->ep, NULL, NULL);
- st5481_usb_device_ctrl_msg(in->adapter, in->counter,
- in->packet_size,
- NULL, NULL);
- st5481_start_rcv(in);
- } else {
- st5481_usb_device_ctrl_msg(in->adapter, in->counter,
- 0, NULL, NULL);
- }
-}
diff --git a/drivers/isdn/hisax/tei.c b/drivers/isdn/hisax/tei.c
deleted file mode 100644
index 9195f9fd628f..000000000000
--- a/drivers/isdn/hisax/tei.c
+++ /dev/null
@@ -1,465 +0,0 @@
-/* $Id: tei.c,v 2.20.2.3 2004/01/13 14:31:26 keil Exp $
- *
- * Author Karsten Keil
- * based on the teles driver from Jan den Ouden
- * Copyright by Karsten Keil <keil@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- * For changes and modifications please read
- * Documentation/isdn/HiSax.cert
- *
- * Thanks to Jan den Ouden
- * Fritz Elfert
- *
- */
-
-#include "hisax.h"
-#include "isdnl2.h"
-#include <linux/gfp.h>
-#include <linux/init.h>
-#include <linux/random.h>
-
-const char *tei_revision = "$Revision: 2.20.2.3 $";
-
-#define ID_REQUEST 1
-#define ID_ASSIGNED 2
-#define ID_DENIED 3
-#define ID_CHK_REQ 4
-#define ID_CHK_RES 5
-#define ID_REMOVE 6
-#define ID_VERIFY 7
-
-#define TEI_ENTITY_ID 0xf
-
-static struct Fsm teifsm;
-
-void tei_handler(struct PStack *st, u_char pr, struct sk_buff *skb);
-
-enum {
- ST_TEI_NOP,
- ST_TEI_IDREQ,
- ST_TEI_IDVERIFY,
-};
-
-#define TEI_STATE_COUNT (ST_TEI_IDVERIFY + 1)
-
-static char *strTeiState[] =
-{
- "ST_TEI_NOP",
- "ST_TEI_IDREQ",
- "ST_TEI_IDVERIFY",
-};
-
-enum {
- EV_IDREQ,
- EV_ASSIGN,
- EV_DENIED,
- EV_CHKREQ,
- EV_REMOVE,
- EV_VERIFY,
- EV_T202,
-};
-
-#define TEI_EVENT_COUNT (EV_T202 + 1)
-
-static char *strTeiEvent[] =
-{
- "EV_IDREQ",
- "EV_ASSIGN",
- "EV_DENIED",
- "EV_CHKREQ",
- "EV_REMOVE",
- "EV_VERIFY",
- "EV_T202",
-};
-
-static unsigned int
-random_ri(void)
-{
- unsigned int x;
-
- get_random_bytes(&x, sizeof(x));
- return (x & 0xffff);
-}
-
-static struct PStack *
-findtei(struct PStack *st, int tei)
-{
- struct PStack *ptr = *(st->l1.stlistp);
-
- if (tei == 127)
- return (NULL);
-
- while (ptr)
- if (ptr->l2.tei == tei)
- return (ptr);
- else
- ptr = ptr->next;
- return (NULL);
-}
-
-static void
-put_tei_msg(struct PStack *st, u_char m_id, unsigned int ri, u_char tei)
-{
- struct sk_buff *skb;
- u_char *bp;
-
- if (!(skb = alloc_skb(8, GFP_ATOMIC))) {
- printk(KERN_WARNING "HiSax: No skb for TEI manager\n");
- return;
- }
- bp = skb_put(skb, 3);
- bp[0] = (TEI_SAPI << 2);
- bp[1] = (GROUP_TEI << 1) | 0x1;
- bp[2] = UI;
- bp = skb_put(skb, 5);
- bp[0] = TEI_ENTITY_ID;
- bp[1] = ri >> 8;
- bp[2] = ri & 0xff;
- bp[3] = m_id;
- bp[4] = (tei << 1) | 1;
- st->l2.l2l1(st, PH_DATA | REQUEST, skb);
-}
-
-static void
-tei_id_request(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- if (st->l2.tei != -1) {
- st->ma.tei_m.printdebug(&st->ma.tei_m,
- "assign request for already assigned tei %d",
- st->l2.tei);
- return;
- }
- st->ma.ri = random_ri();
- if (st->ma.debug)
- st->ma.tei_m.printdebug(&st->ma.tei_m,
- "assign request ri %d", st->ma.ri);
- put_tei_msg(st, ID_REQUEST, st->ma.ri, 127);
- FsmChangeState(&st->ma.tei_m, ST_TEI_IDREQ);
- FsmAddTimer(&st->ma.t202, st->ma.T202, EV_T202, NULL, 1);
- st->ma.N202 = 3;
-}
-
-static void
-tei_id_assign(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *ost, *st = fi->userdata;
- struct sk_buff *skb = arg;
- struct IsdnCardState *cs;
- int ri, tei;
-
- ri = ((unsigned int) skb->data[1] << 8) + skb->data[2];
- tei = skb->data[4] >> 1;
- if (st->ma.debug)
- st->ma.tei_m.printdebug(&st->ma.tei_m,
- "identity assign ri %d tei %d", ri, tei);
- if ((ost = findtei(st, tei))) { /* same tei is in use */
- if (ri != ost->ma.ri) {
- st->ma.tei_m.printdebug(&st->ma.tei_m,
- "possible duplicate assignment tei %d", tei);
- ost->l2.l2tei(ost, MDL_ERROR | RESPONSE, NULL);
- }
- } else if (ri == st->ma.ri) {
- FsmDelTimer(&st->ma.t202, 1);
- FsmChangeState(&st->ma.tei_m, ST_TEI_NOP);
- st->l3.l3l2(st, MDL_ASSIGN | REQUEST, (void *) (long) tei);
- cs = (struct IsdnCardState *) st->l1.hardware;
- cs->cardmsg(cs, MDL_ASSIGN | REQUEST, NULL);
- }
-}
-
-static void
-tei_id_test_dup(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *ost, *st = fi->userdata;
- struct sk_buff *skb = arg;
- int tei, ri;
-
- ri = ((unsigned int) skb->data[1] << 8) + skb->data[2];
- tei = skb->data[4] >> 1;
- if (st->ma.debug)
- st->ma.tei_m.printdebug(&st->ma.tei_m,
- "foreign identity assign ri %d tei %d", ri, tei);
- if ((ost = findtei(st, tei))) { /* same tei is in use */
- if (ri != ost->ma.ri) { /* and it wasn't our request */
- st->ma.tei_m.printdebug(&st->ma.tei_m,
- "possible duplicate assignment tei %d", tei);
- FsmEvent(&ost->ma.tei_m, EV_VERIFY, NULL);
- }
- }
-}
-
-static void
-tei_id_denied(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
- struct sk_buff *skb = arg;
- int ri, tei;
-
- ri = ((unsigned int) skb->data[1] << 8) + skb->data[2];
- tei = skb->data[4] >> 1;
- if (st->ma.debug)
- st->ma.tei_m.printdebug(&st->ma.tei_m,
- "identity denied ri %d tei %d", ri, tei);
-}
-
-static void
-tei_id_chk_req(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
- struct sk_buff *skb = arg;
- int tei;
-
- tei = skb->data[4] >> 1;
- if (st->ma.debug)
- st->ma.tei_m.printdebug(&st->ma.tei_m,
- "identity check req tei %d", tei);
- if ((st->l2.tei != -1) && ((tei == GROUP_TEI) || (tei == st->l2.tei))) {
- FsmDelTimer(&st->ma.t202, 4);
- FsmChangeState(&st->ma.tei_m, ST_TEI_NOP);
- put_tei_msg(st, ID_CHK_RES, random_ri(), st->l2.tei);
- }
-}
-
-static void
-tei_id_remove(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
- struct sk_buff *skb = arg;
- struct IsdnCardState *cs;
- int tei;
-
- tei = skb->data[4] >> 1;
- if (st->ma.debug)
- st->ma.tei_m.printdebug(&st->ma.tei_m,
- "identity remove tei %d", tei);
- if ((st->l2.tei != -1) && ((tei == GROUP_TEI) || (tei == st->l2.tei))) {
- FsmDelTimer(&st->ma.t202, 5);
- FsmChangeState(&st->ma.tei_m, ST_TEI_NOP);
- st->l3.l3l2(st, MDL_REMOVE | REQUEST, NULL);
- cs = (struct IsdnCardState *) st->l1.hardware;
- cs->cardmsg(cs, MDL_REMOVE | REQUEST, NULL);
- }
-}
-
-static void
-tei_id_verify(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
-
- if (st->ma.debug)
- st->ma.tei_m.printdebug(&st->ma.tei_m,
- "id verify request for tei %d", st->l2.tei);
- put_tei_msg(st, ID_VERIFY, 0, st->l2.tei);
- FsmChangeState(&st->ma.tei_m, ST_TEI_IDVERIFY);
- FsmAddTimer(&st->ma.t202, st->ma.T202, EV_T202, NULL, 2);
- st->ma.N202 = 2;
-}
-
-static void
-tei_id_req_tout(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
- struct IsdnCardState *cs;
-
- if (--st->ma.N202) {
- st->ma.ri = random_ri();
- if (st->ma.debug)
- st->ma.tei_m.printdebug(&st->ma.tei_m,
- "assign req(%d) ri %d", 4 - st->ma.N202,
- st->ma.ri);
- put_tei_msg(st, ID_REQUEST, st->ma.ri, 127);
- FsmAddTimer(&st->ma.t202, st->ma.T202, EV_T202, NULL, 3);
- } else {
- st->ma.tei_m.printdebug(&st->ma.tei_m, "assign req failed");
- st->l3.l3l2(st, MDL_ERROR | RESPONSE, NULL);
- cs = (struct IsdnCardState *) st->l1.hardware;
- cs->cardmsg(cs, MDL_REMOVE | REQUEST, NULL);
- FsmChangeState(fi, ST_TEI_NOP);
- }
-}
-
-static void
-tei_id_ver_tout(struct FsmInst *fi, int event, void *arg)
-{
- struct PStack *st = fi->userdata;
- struct IsdnCardState *cs;
-
- if (--st->ma.N202) {
- if (st->ma.debug)
- st->ma.tei_m.printdebug(&st->ma.tei_m,
- "id verify req(%d) for tei %d",
- 3 - st->ma.N202, st->l2.tei);
- put_tei_msg(st, ID_VERIFY, 0, st->l2.tei);
- FsmAddTimer(&st->ma.t202, st->ma.T202, EV_T202, NULL, 4);
- } else {
- st->ma.tei_m.printdebug(&st->ma.tei_m,
- "verify req for tei %d failed", st->l2.tei);
- st->l3.l3l2(st, MDL_REMOVE | REQUEST, NULL);
- cs = (struct IsdnCardState *) st->l1.hardware;
- cs->cardmsg(cs, MDL_REMOVE | REQUEST, NULL);
- FsmChangeState(fi, ST_TEI_NOP);
- }
-}
-
-static void
-tei_l1l2(struct PStack *st, int pr, void *arg)
-{
- struct sk_buff *skb = arg;
- int mt;
-
- if (test_bit(FLG_FIXED_TEI, &st->l2.flag)) {
- dev_kfree_skb(skb);
- return;
- }
-
- if (pr == (PH_DATA | INDICATION)) {
- if (skb->len < 3) {
- st->ma.tei_m.printdebug(&st->ma.tei_m,
- "short mgr frame %ld/3", skb->len);
- } else if ((skb->data[0] != ((TEI_SAPI << 2) | 2)) ||
- (skb->data[1] != ((GROUP_TEI << 1) | 1))) {
- st->ma.tei_m.printdebug(&st->ma.tei_m,
- "wrong mgr sapi/tei %x/%x",
- skb->data[0], skb->data[1]);
- } else if ((skb->data[2] & 0xef) != UI) {
- st->ma.tei_m.printdebug(&st->ma.tei_m,
- "mgr frame is not ui %x", skb->data[2]);
- } else {
- skb_pull(skb, 3);
- if (skb->len < 5) {
- st->ma.tei_m.printdebug(&st->ma.tei_m,
- "short mgr frame %ld/5", skb->len);
- } else if (skb->data[0] != TEI_ENTITY_ID) {
- /* wrong management entity identifier, ignore */
- st->ma.tei_m.printdebug(&st->ma.tei_m,
- "tei handler wrong entity id %x",
- skb->data[0]);
- } else {
- mt = skb->data[3];
- if (mt == ID_ASSIGNED)
- FsmEvent(&st->ma.tei_m, EV_ASSIGN, skb);
- else if (mt == ID_DENIED)
- FsmEvent(&st->ma.tei_m, EV_DENIED, skb);
- else if (mt == ID_CHK_REQ)
- FsmEvent(&st->ma.tei_m, EV_CHKREQ, skb);
- else if (mt == ID_REMOVE)
- FsmEvent(&st->ma.tei_m, EV_REMOVE, skb);
- else {
- st->ma.tei_m.printdebug(&st->ma.tei_m,
- "tei handler wrong mt %x\n", mt);
- }
- }
- }
- } else {
- st->ma.tei_m.printdebug(&st->ma.tei_m,
- "tei handler wrong pr %x\n", pr);
- }
- dev_kfree_skb(skb);
-}
-
-static void
-tei_l2tei(struct PStack *st, int pr, void *arg)
-{
- struct IsdnCardState *cs;
-
- if (test_bit(FLG_FIXED_TEI, &st->l2.flag)) {
- if (pr == (MDL_ASSIGN | INDICATION)) {
- if (st->ma.debug)
- st->ma.tei_m.printdebug(&st->ma.tei_m,
- "fixed assign tei %d", st->l2.tei);
- st->l3.l3l2(st, MDL_ASSIGN | REQUEST, (void *) (long) st->l2.tei);
- cs = (struct IsdnCardState *) st->l1.hardware;
- cs->cardmsg(cs, MDL_ASSIGN | REQUEST, NULL);
- }
- return;
- }
- switch (pr) {
- case (MDL_ASSIGN | INDICATION):
- FsmEvent(&st->ma.tei_m, EV_IDREQ, arg);
- break;
- case (MDL_ERROR | REQUEST):
- FsmEvent(&st->ma.tei_m, EV_VERIFY, arg);
- break;
- default:
- break;
- }
-}
-
-static void
-tei_debug(struct FsmInst *fi, char *fmt, ...)
-{
- va_list args;
- struct PStack *st = fi->userdata;
-
- va_start(args, fmt);
- VHiSax_putstatus(st->l1.hardware, "tei ", fmt, args);
- va_end(args);
-}
-
-void
-setstack_tei(struct PStack *st)
-{
- st->l2.l2tei = tei_l2tei;
- st->ma.T202 = 2000; /* T202 2000 milliseconds */
- st->l1.l1tei = tei_l1l2;
- st->ma.debug = 1;
- st->ma.tei_m.fsm = &teifsm;
- st->ma.tei_m.state = ST_TEI_NOP;
- st->ma.tei_m.debug = 1;
- st->ma.tei_m.userdata = st;
- st->ma.tei_m.userint = 0;
- st->ma.tei_m.printdebug = tei_debug;
- FsmInitTimer(&st->ma.tei_m, &st->ma.t202);
-}
-
-void
-init_tei(struct IsdnCardState *cs, int protocol)
-{
-}
-
-void
-release_tei(struct IsdnCardState *cs)
-{
- struct PStack *st = cs->stlist;
-
- while (st) {
- FsmDelTimer(&st->ma.t202, 1);
- st = st->next;
- }
-}
-
-static struct FsmNode TeiFnList[] __initdata =
-{
- {ST_TEI_NOP, EV_IDREQ, tei_id_request},
- {ST_TEI_NOP, EV_ASSIGN, tei_id_test_dup},
- {ST_TEI_NOP, EV_VERIFY, tei_id_verify},
- {ST_TEI_NOP, EV_REMOVE, tei_id_remove},
- {ST_TEI_NOP, EV_CHKREQ, tei_id_chk_req},
- {ST_TEI_IDREQ, EV_T202, tei_id_req_tout},
- {ST_TEI_IDREQ, EV_ASSIGN, tei_id_assign},
- {ST_TEI_IDREQ, EV_DENIED, tei_id_denied},
- {ST_TEI_IDVERIFY, EV_T202, tei_id_ver_tout},
- {ST_TEI_IDVERIFY, EV_REMOVE, tei_id_remove},
- {ST_TEI_IDVERIFY, EV_CHKREQ, tei_id_chk_req},
-};
-
-int __init
-TeiNew(void)
-{
- teifsm.state_count = TEI_STATE_COUNT;
- teifsm.event_count = TEI_EVENT_COUNT;
- teifsm.strEvent = strTeiEvent;
- teifsm.strState = strTeiState;
- return FsmNew(&teifsm, TeiFnList, ARRAY_SIZE(TeiFnList));
-}
-
-void
-TeiFree(void)
-{
- FsmFree(&teifsm);
-}
diff --git a/drivers/isdn/hisax/teleint.c b/drivers/isdn/hisax/teleint.c
deleted file mode 100644
index 247aa33076b1..000000000000
--- a/drivers/isdn/hisax/teleint.c
+++ /dev/null
@@ -1,334 +0,0 @@
-/* $Id: teleint.c,v 1.16.2.5 2004/01/19 15:31:50 keil Exp $
- *
- * low level stuff for TeleInt isdn cards
- *
- * Author Karsten Keil
- * Copyright by Karsten Keil <keil@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#include <linux/init.h>
-#include "hisax.h"
-#include "isac.h"
-#include "hfc_2bs0.h"
-#include "isdnl1.h"
-
-static const char *TeleInt_revision = "$Revision: 1.16.2.5 $";
-
-#define byteout(addr, val) outb(val, addr)
-#define bytein(addr) inb(addr)
-
-static inline u_char
-readreg(unsigned int ale, unsigned int adr, u_char off)
-{
- register u_char ret;
- int max_delay = 2000;
-
- byteout(ale, off);
- ret = HFC_BUSY & bytein(ale);
- while (ret && --max_delay)
- ret = HFC_BUSY & bytein(ale);
- if (!max_delay) {
- printk(KERN_WARNING "TeleInt Busy not inactive\n");
- return (0);
- }
- ret = bytein(adr);
- return (ret);
-}
-
-static inline void
-readfifo(unsigned int ale, unsigned int adr, u_char off, u_char *data, int size)
-{
- register u_char ret;
- register int max_delay = 20000;
- register int i;
-
- byteout(ale, off);
- for (i = 0; i < size; i++) {
- ret = HFC_BUSY & bytein(ale);
- while (ret && --max_delay)
- ret = HFC_BUSY & bytein(ale);
- if (!max_delay) {
- printk(KERN_WARNING "TeleInt Busy not inactive\n");
- return;
- }
- data[i] = bytein(adr);
- }
-}
-
-
-static inline void
-writereg(unsigned int ale, unsigned int adr, u_char off, u_char data)
-{
- register u_char ret;
- int max_delay = 2000;
-
- byteout(ale, off);
- ret = HFC_BUSY & bytein(ale);
- while (ret && --max_delay)
- ret = HFC_BUSY & bytein(ale);
- if (!max_delay) {
- printk(KERN_WARNING "TeleInt Busy not inactive\n");
- return;
- }
- byteout(adr, data);
-}
-
-static inline void
-writefifo(unsigned int ale, unsigned int adr, u_char off, u_char *data, int size)
-{
- register u_char ret;
- register int max_delay = 20000;
- register int i;
-
- byteout(ale, off);
- for (i = 0; i < size; i++) {
- ret = HFC_BUSY & bytein(ale);
- while (ret && --max_delay)
- ret = HFC_BUSY & bytein(ale);
- if (!max_delay) {
- printk(KERN_WARNING "TeleInt Busy not inactive\n");
- return;
- }
- byteout(adr, data[i]);
- }
-}
-
-/* Interface functions */
-
-static u_char
-ReadISAC(struct IsdnCardState *cs, u_char offset)
-{
- cs->hw.hfc.cip = offset;
- return (readreg(cs->hw.hfc.addr | 1, cs->hw.hfc.addr, offset));
-}
-
-static void
-WriteISAC(struct IsdnCardState *cs, u_char offset, u_char value)
-{
- cs->hw.hfc.cip = offset;
- writereg(cs->hw.hfc.addr | 1, cs->hw.hfc.addr, offset, value);
-}
-
-static void
-ReadISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- cs->hw.hfc.cip = 0;
- readfifo(cs->hw.hfc.addr | 1, cs->hw.hfc.addr, 0, data, size);
-}
-
-static void
-WriteISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- cs->hw.hfc.cip = 0;
- writefifo(cs->hw.hfc.addr | 1, cs->hw.hfc.addr, 0, data, size);
-}
-
-static u_char
-ReadHFC(struct IsdnCardState *cs, int data, u_char reg)
-{
- register u_char ret;
-
- if (data) {
- cs->hw.hfc.cip = reg;
- byteout(cs->hw.hfc.addr | 1, reg);
- ret = bytein(cs->hw.hfc.addr);
- if (cs->debug & L1_DEB_HSCX_FIFO && (data != 2))
- debugl1(cs, "hfc RD %02x %02x", reg, ret);
- } else
- ret = bytein(cs->hw.hfc.addr | 1);
- return (ret);
-}
-
-static void
-WriteHFC(struct IsdnCardState *cs, int data, u_char reg, u_char value)
-{
- byteout(cs->hw.hfc.addr | 1, reg);
- cs->hw.hfc.cip = reg;
- if (data)
- byteout(cs->hw.hfc.addr, value);
- if (cs->debug & L1_DEB_HSCX_FIFO && (data != 2))
- debugl1(cs, "hfc W%c %02x %02x", data ? 'D' : 'C', reg, value);
-}
-
-static irqreturn_t
-TeleInt_interrupt(int intno, void *dev_id)
-{
- struct IsdnCardState *cs = dev_id;
- u_char val;
- u_long flags;
-
- spin_lock_irqsave(&cs->lock, flags);
- val = readreg(cs->hw.hfc.addr | 1, cs->hw.hfc.addr, ISAC_ISTA);
-Start_ISAC:
- if (val)
- isac_interrupt(cs, val);
- val = readreg(cs->hw.hfc.addr | 1, cs->hw.hfc.addr, ISAC_ISTA);
- if (val) {
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "ISAC IntStat after IntRoutine");
- goto Start_ISAC;
- }
- writereg(cs->hw.hfc.addr | 1, cs->hw.hfc.addr, ISAC_MASK, 0xFF);
- writereg(cs->hw.hfc.addr | 1, cs->hw.hfc.addr, ISAC_MASK, 0x0);
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
-}
-
-static void
-TeleInt_Timer(struct timer_list *t)
-{
- struct IsdnCardState *cs = from_timer(cs, t, hw.hfc.timer);
- int stat = 0;
- u_long flags;
-
- spin_lock_irqsave(&cs->lock, flags);
- if (cs->bcs[0].mode) {
- stat |= 1;
- main_irq_hfc(&cs->bcs[0]);
- }
- if (cs->bcs[1].mode) {
- stat |= 2;
- main_irq_hfc(&cs->bcs[1]);
- }
- spin_unlock_irqrestore(&cs->lock, flags);
- stat = HZ / 100;
- if (!stat)
- stat = 1;
- cs->hw.hfc.timer.expires = jiffies + stat;
- add_timer(&cs->hw.hfc.timer);
-}
-
-static void
-release_io_TeleInt(struct IsdnCardState *cs)
-{
- del_timer(&cs->hw.hfc.timer);
- releasehfc(cs);
- if (cs->hw.hfc.addr)
- release_region(cs->hw.hfc.addr, 2);
-}
-
-static void
-reset_TeleInt(struct IsdnCardState *cs)
-{
- printk(KERN_INFO "TeleInt: resetting card\n");
- cs->hw.hfc.cirm |= HFC_RESET;
- byteout(cs->hw.hfc.addr | 1, cs->hw.hfc.cirm); /* Reset On */
- mdelay(10);
- cs->hw.hfc.cirm &= ~HFC_RESET;
- byteout(cs->hw.hfc.addr | 1, cs->hw.hfc.cirm); /* Reset Off */
- mdelay(10);
-}
-
-static int
-TeleInt_card_msg(struct IsdnCardState *cs, int mt, void *arg)
-{
- u_long flags;
- int delay;
-
- switch (mt) {
- case CARD_RESET:
- spin_lock_irqsave(&cs->lock, flags);
- reset_TeleInt(cs);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_RELEASE:
- release_io_TeleInt(cs);
- return (0);
- case CARD_INIT:
- spin_lock_irqsave(&cs->lock, flags);
- reset_TeleInt(cs);
- inithfc(cs);
- clear_pending_isac_ints(cs);
- initisac(cs);
- /* Reenable all IRQ */
- cs->writeisac(cs, ISAC_MASK, 0);
- cs->writeisac(cs, ISAC_CMDR, 0x41);
- spin_unlock_irqrestore(&cs->lock, flags);
- delay = HZ / 100;
- if (!delay)
- delay = 1;
- cs->hw.hfc.timer.expires = jiffies + delay;
- add_timer(&cs->hw.hfc.timer);
- return (0);
- case CARD_TEST:
- return (0);
- }
- return (0);
-}
-
-int setup_TeleInt(struct IsdnCard *card)
-{
- struct IsdnCardState *cs = card->cs;
- char tmp[64];
-
- strcpy(tmp, TeleInt_revision);
- printk(KERN_INFO "HiSax: TeleInt driver Rev. %s\n", HiSax_getrev(tmp));
- if (cs->typ != ISDN_CTYPE_TELEINT)
- return (0);
-
- cs->hw.hfc.addr = card->para[1] & 0x3fe;
- cs->irq = card->para[0];
- cs->hw.hfc.cirm = HFC_CIRM;
- cs->hw.hfc.isac_spcr = 0x00;
- cs->hw.hfc.cip = 0;
- cs->hw.hfc.ctmt = HFC_CTMT | HFC_CLTIMER;
- cs->bcs[0].hw.hfc.send = NULL;
- cs->bcs[1].hw.hfc.send = NULL;
- cs->hw.hfc.fifosize = 7 * 1024 + 512;
- timer_setup(&cs->hw.hfc.timer, TeleInt_Timer, 0);
- if (!request_region(cs->hw.hfc.addr, 2, "TeleInt isdn")) {
- printk(KERN_WARNING
- "HiSax: TeleInt config port %x-%x already in use\n",
- cs->hw.hfc.addr,
- cs->hw.hfc.addr + 2);
- return (0);
- }
- /* HW IO = IO */
- byteout(cs->hw.hfc.addr, cs->hw.hfc.addr & 0xff);
- byteout(cs->hw.hfc.addr | 1, ((cs->hw.hfc.addr & 0x300) >> 8) | 0x54);
- switch (cs->irq) {
- case 3:
- cs->hw.hfc.cirm |= HFC_INTA;
- break;
- case 4:
- cs->hw.hfc.cirm |= HFC_INTB;
- break;
- case 5:
- cs->hw.hfc.cirm |= HFC_INTC;
- break;
- case 7:
- cs->hw.hfc.cirm |= HFC_INTD;
- break;
- case 10:
- cs->hw.hfc.cirm |= HFC_INTE;
- break;
- case 11:
- cs->hw.hfc.cirm |= HFC_INTF;
- break;
- default:
- printk(KERN_WARNING "TeleInt: wrong IRQ\n");
- release_io_TeleInt(cs);
- return (0);
- }
- byteout(cs->hw.hfc.addr | 1, cs->hw.hfc.cirm);
- byteout(cs->hw.hfc.addr | 1, cs->hw.hfc.ctmt);
-
- printk(KERN_INFO "TeleInt: defined at 0x%x IRQ %d\n",
- cs->hw.hfc.addr, cs->irq);
-
- setup_isac(cs);
- cs->readisac = &ReadISAC;
- cs->writeisac = &WriteISAC;
- cs->readisacfifo = &ReadISACfifo;
- cs->writeisacfifo = &WriteISACfifo;
- cs->BC_Read_Reg = &ReadHFC;
- cs->BC_Write_Reg = &WriteHFC;
- cs->cardmsg = &TeleInt_card_msg;
- cs->irq_func = &TeleInt_interrupt;
- ISACVersion(cs, "TeleInt:");
- return (1);
-}
diff --git a/drivers/isdn/hisax/teles0.c b/drivers/isdn/hisax/teles0.c
deleted file mode 100644
index ce9eabdd2f6e..000000000000
--- a/drivers/isdn/hisax/teles0.c
+++ /dev/null
@@ -1,364 +0,0 @@
-/* $Id: teles0.c,v 2.15.2.4 2004/01/13 23:48:39 keil Exp $
- *
- * low level stuff for Teles Memory IO isdn cards
- *
- * Author Karsten Keil
- * based on the teles driver from Jan den Ouden
- * Copyright by Karsten Keil <keil@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- * Thanks to Jan den Ouden
- * Fritz Elfert
- * Beat Doebeli
- *
- */
-
-#include <linux/init.h>
-#include "hisax.h"
-#include "isdnl1.h"
-#include "isac.h"
-#include "hscx.h"
-
-static const char *teles0_revision = "$Revision: 2.15.2.4 $";
-
-#define TELES_IOMEM_SIZE 0x400
-#define byteout(addr, val) outb(val, addr)
-#define bytein(addr) inb(addr)
-
-static inline u_char
-readisac(void __iomem *adr, u_char off)
-{
- return readb(adr + ((off & 1) ? 0x2ff : 0x100) + off);
-}
-
-static inline void
-writeisac(void __iomem *adr, u_char off, u_char data)
-{
- writeb(data, adr + ((off & 1) ? 0x2ff : 0x100) + off); mb();
-}
-
-
-static inline u_char
-readhscx(void __iomem *adr, int hscx, u_char off)
-{
- return readb(adr + (hscx ? 0x1c0 : 0x180) +
- ((off & 1) ? 0x1ff : 0) + off);
-}
-
-static inline void
-writehscx(void __iomem *adr, int hscx, u_char off, u_char data)
-{
- writeb(data, adr + (hscx ? 0x1c0 : 0x180) +
- ((off & 1) ? 0x1ff : 0) + off); mb();
-}
-
-static inline void
-read_fifo_isac(void __iomem *adr, u_char *data, int size)
-{
- register int i;
- register u_char __iomem *ad = adr + 0x100;
- for (i = 0; i < size; i++)
- data[i] = readb(ad);
-}
-
-static inline void
-write_fifo_isac(void __iomem *adr, u_char *data, int size)
-{
- register int i;
- register u_char __iomem *ad = adr + 0x100;
- for (i = 0; i < size; i++) {
- writeb(data[i], ad); mb();
- }
-}
-
-static inline void
-read_fifo_hscx(void __iomem *adr, int hscx, u_char *data, int size)
-{
- register int i;
- register u_char __iomem *ad = adr + (hscx ? 0x1c0 : 0x180);
- for (i = 0; i < size; i++)
- data[i] = readb(ad);
-}
-
-static inline void
-write_fifo_hscx(void __iomem *adr, int hscx, u_char *data, int size)
-{
- int i;
- register u_char __iomem *ad = adr + (hscx ? 0x1c0 : 0x180);
- for (i = 0; i < size; i++) {
- writeb(data[i], ad); mb();
- }
-}
-
-/* Interface functions */
-
-static u_char
-ReadISAC(struct IsdnCardState *cs, u_char offset)
-{
- return (readisac(cs->hw.teles0.membase, offset));
-}
-
-static void
-WriteISAC(struct IsdnCardState *cs, u_char offset, u_char value)
-{
- writeisac(cs->hw.teles0.membase, offset, value);
-}
-
-static void
-ReadISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- read_fifo_isac(cs->hw.teles0.membase, data, size);
-}
-
-static void
-WriteISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- write_fifo_isac(cs->hw.teles0.membase, data, size);
-}
-
-static u_char
-ReadHSCX(struct IsdnCardState *cs, int hscx, u_char offset)
-{
- return (readhscx(cs->hw.teles0.membase, hscx, offset));
-}
-
-static void
-WriteHSCX(struct IsdnCardState *cs, int hscx, u_char offset, u_char value)
-{
- writehscx(cs->hw.teles0.membase, hscx, offset, value);
-}
-
-/*
- * fast interrupt HSCX stuff goes here
- */
-
-#define READHSCX(cs, nr, reg) readhscx(cs->hw.teles0.membase, nr, reg)
-#define WRITEHSCX(cs, nr, reg, data) writehscx(cs->hw.teles0.membase, nr, reg, data)
-#define READHSCXFIFO(cs, nr, ptr, cnt) read_fifo_hscx(cs->hw.teles0.membase, nr, ptr, cnt)
-#define WRITEHSCXFIFO(cs, nr, ptr, cnt) write_fifo_hscx(cs->hw.teles0.membase, nr, ptr, cnt)
-
-#include "hscx_irq.c"
-
-static irqreturn_t
-teles0_interrupt(int intno, void *dev_id)
-{
- struct IsdnCardState *cs = dev_id;
- u_char val;
- u_long flags;
- int count = 0;
-
- spin_lock_irqsave(&cs->lock, flags);
- val = readhscx(cs->hw.teles0.membase, 1, HSCX_ISTA);
-Start_HSCX:
- if (val)
- hscx_int_main(cs, val);
- val = readisac(cs->hw.teles0.membase, ISAC_ISTA);
-Start_ISAC:
- if (val)
- isac_interrupt(cs, val);
- count++;
- val = readhscx(cs->hw.teles0.membase, 1, HSCX_ISTA);
- if (val && count < 5) {
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "HSCX IntStat after IntRoutine");
- goto Start_HSCX;
- }
- val = readisac(cs->hw.teles0.membase, ISAC_ISTA);
- if (val && count < 5) {
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "ISAC IntStat after IntRoutine");
- goto Start_ISAC;
- }
- writehscx(cs->hw.teles0.membase, 0, HSCX_MASK, 0xFF);
- writehscx(cs->hw.teles0.membase, 1, HSCX_MASK, 0xFF);
- writeisac(cs->hw.teles0.membase, ISAC_MASK, 0xFF);
- writeisac(cs->hw.teles0.membase, ISAC_MASK, 0x0);
- writehscx(cs->hw.teles0.membase, 0, HSCX_MASK, 0x0);
- writehscx(cs->hw.teles0.membase, 1, HSCX_MASK, 0x0);
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
-}
-
-static void
-release_io_teles0(struct IsdnCardState *cs)
-{
- if (cs->hw.teles0.cfg_reg)
- release_region(cs->hw.teles0.cfg_reg, 8);
- iounmap(cs->hw.teles0.membase);
- release_mem_region(cs->hw.teles0.phymem, TELES_IOMEM_SIZE);
-}
-
-static int
-reset_teles0(struct IsdnCardState *cs)
-{
- u_char cfval;
-
- if (cs->hw.teles0.cfg_reg) {
- switch (cs->irq) {
- case 2:
- case 9:
- cfval = 0x00;
- break;
- case 3:
- cfval = 0x02;
- break;
- case 4:
- cfval = 0x04;
- break;
- case 5:
- cfval = 0x06;
- break;
- case 10:
- cfval = 0x08;
- break;
- case 11:
- cfval = 0x0A;
- break;
- case 12:
- cfval = 0x0C;
- break;
- case 15:
- cfval = 0x0E;
- break;
- default:
- return (1);
- }
- cfval |= ((cs->hw.teles0.phymem >> 9) & 0xF0);
- byteout(cs->hw.teles0.cfg_reg + 4, cfval);
- HZDELAY(HZ / 10 + 1);
- byteout(cs->hw.teles0.cfg_reg + 4, cfval | 1);
- HZDELAY(HZ / 10 + 1);
- }
- writeb(0, cs->hw.teles0.membase + 0x80); mb();
- HZDELAY(HZ / 5 + 1);
- writeb(1, cs->hw.teles0.membase + 0x80); mb();
- HZDELAY(HZ / 5 + 1);
- return (0);
-}
-
-static int
-Teles_card_msg(struct IsdnCardState *cs, int mt, void *arg)
-{
- u_long flags;
-
- switch (mt) {
- case CARD_RESET:
- spin_lock_irqsave(&cs->lock, flags);
- reset_teles0(cs);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_RELEASE:
- release_io_teles0(cs);
- return (0);
- case CARD_INIT:
- spin_lock_irqsave(&cs->lock, flags);
- inithscxisac(cs, 3);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_TEST:
- return (0);
- }
- return (0);
-}
-
-int setup_teles0(struct IsdnCard *card)
-{
- u_char val;
- struct IsdnCardState *cs = card->cs;
- char tmp[64];
-
- strcpy(tmp, teles0_revision);
- printk(KERN_INFO "HiSax: Teles 8.0/16.0 driver Rev. %s\n", HiSax_getrev(tmp));
- if ((cs->typ != ISDN_CTYPE_16_0) && (cs->typ != ISDN_CTYPE_8_0))
- return (0);
-
- if (cs->typ == ISDN_CTYPE_16_0)
- cs->hw.teles0.cfg_reg = card->para[2];
- else /* 8.0 */
- cs->hw.teles0.cfg_reg = 0;
-
- if (card->para[1] < 0x10000) {
- card->para[1] <<= 4;
- printk(KERN_INFO
- "Teles0: membase configured DOSish, assuming 0x%lx\n",
- (unsigned long) card->para[1]);
- }
- cs->irq = card->para[0];
- if (cs->hw.teles0.cfg_reg) {
- if (!request_region(cs->hw.teles0.cfg_reg, 8, "teles cfg")) {
- printk(KERN_WARNING
- "HiSax: %s config port %x-%x already in use\n",
- CardType[card->typ],
- cs->hw.teles0.cfg_reg,
- cs->hw.teles0.cfg_reg + 8);
- return (0);
- }
- }
- if (cs->hw.teles0.cfg_reg) {
- if ((val = bytein(cs->hw.teles0.cfg_reg + 0)) != 0x51) {
- printk(KERN_WARNING "Teles0: 16.0 Byte at %x is %x\n",
- cs->hw.teles0.cfg_reg + 0, val);
- release_region(cs->hw.teles0.cfg_reg, 8);
- return (0);
- }
- if ((val = bytein(cs->hw.teles0.cfg_reg + 1)) != 0x93) {
- printk(KERN_WARNING "Teles0: 16.0 Byte at %x is %x\n",
- cs->hw.teles0.cfg_reg + 1, val);
- release_region(cs->hw.teles0.cfg_reg, 8);
- return (0);
- }
- val = bytein(cs->hw.teles0.cfg_reg + 2); /* 0x1e=without AB
- * 0x1f=with AB
- * 0x1c 16.3 ???
- */
- if (val != 0x1e && val != 0x1f) {
- printk(KERN_WARNING "Teles0: 16.0 Byte at %x is %x\n",
- cs->hw.teles0.cfg_reg + 2, val);
- release_region(cs->hw.teles0.cfg_reg, 8);
- return (0);
- }
- }
- /* 16.0 and 8.0 designed for IOM1 */
- test_and_set_bit(HW_IOM1, &cs->HW_Flags);
- cs->hw.teles0.phymem = card->para[1];
- if (!request_mem_region(cs->hw.teles0.phymem, TELES_IOMEM_SIZE, "teles iomem")) {
- printk(KERN_WARNING
- "HiSax: %s memory region %lx-%lx already in use\n",
- CardType[card->typ],
- cs->hw.teles0.phymem,
- cs->hw.teles0.phymem + TELES_IOMEM_SIZE);
- if (cs->hw.teles0.cfg_reg)
- release_region(cs->hw.teles0.cfg_reg, 8);
- return (0);
- }
- cs->hw.teles0.membase = ioremap(cs->hw.teles0.phymem, TELES_IOMEM_SIZE);
- printk(KERN_INFO
- "HiSax: %s config irq:%d mem:%p cfg:0x%X\n",
- CardType[cs->typ], cs->irq,
- cs->hw.teles0.membase, cs->hw.teles0.cfg_reg);
- if (reset_teles0(cs)) {
- printk(KERN_WARNING "Teles0: wrong IRQ\n");
- release_io_teles0(cs);
- return (0);
- }
- setup_isac(cs);
- cs->readisac = &ReadISAC;
- cs->writeisac = &WriteISAC;
- cs->readisacfifo = &ReadISACfifo;
- cs->writeisacfifo = &WriteISACfifo;
- cs->BC_Read_Reg = &ReadHSCX;
- cs->BC_Write_Reg = &WriteHSCX;
- cs->BC_Send_Data = &hscx_fill_fifo;
- cs->cardmsg = &Teles_card_msg;
- cs->irq_func = &teles0_interrupt;
- ISACVersion(cs, "Teles0:");
- if (HscxVersion(cs, "Teles0:")) {
- printk(KERN_WARNING
- "Teles0: wrong HSCX versions check IO/MEM addresses\n");
- release_io_teles0(cs);
- return (0);
- }
- return (1);
-}
diff --git a/drivers/isdn/hisax/teles3.c b/drivers/isdn/hisax/teles3.c
deleted file mode 100644
index 1eef693f04f0..000000000000
--- a/drivers/isdn/hisax/teles3.c
+++ /dev/null
@@ -1,498 +0,0 @@
-/* $Id: teles3.c,v 2.19.2.4 2004/01/13 23:48:39 keil Exp $
- *
- * low level stuff for Teles 16.3 & PNP isdn cards
- *
- * Author Karsten Keil
- * Copyright by Karsten Keil <keil@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- * Thanks to Jan den Ouden
- * Fritz Elfert
- * Beat Doebeli
- *
- */
-#include <linux/init.h>
-#include <linux/isapnp.h>
-#include "hisax.h"
-#include "isac.h"
-#include "hscx.h"
-#include "isdnl1.h"
-
-static const char *teles3_revision = "$Revision: 2.19.2.4 $";
-
-#define byteout(addr, val) outb(val, addr)
-#define bytein(addr) inb(addr)
-
-static inline u_char
-readreg(unsigned int adr, u_char off)
-{
- return (bytein(adr + off));
-}
-
-static inline void
-writereg(unsigned int adr, u_char off, u_char data)
-{
- byteout(adr + off, data);
-}
-
-
-static inline void
-read_fifo(unsigned int adr, u_char *data, int size)
-{
- insb(adr, data, size);
-}
-
-static void
-write_fifo(unsigned int adr, u_char *data, int size)
-{
- outsb(adr, data, size);
-}
-
-/* Interface functions */
-
-static u_char
-ReadISAC(struct IsdnCardState *cs, u_char offset)
-{
- return (readreg(cs->hw.teles3.isac, offset));
-}
-
-static void
-WriteISAC(struct IsdnCardState *cs, u_char offset, u_char value)
-{
- writereg(cs->hw.teles3.isac, offset, value);
-}
-
-static void
-ReadISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- read_fifo(cs->hw.teles3.isacfifo, data, size);
-}
-
-static void
-WriteISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- write_fifo(cs->hw.teles3.isacfifo, data, size);
-}
-
-static u_char
-ReadHSCX(struct IsdnCardState *cs, int hscx, u_char offset)
-{
- return (readreg(cs->hw.teles3.hscx[hscx], offset));
-}
-
-static void
-WriteHSCX(struct IsdnCardState *cs, int hscx, u_char offset, u_char value)
-{
- writereg(cs->hw.teles3.hscx[hscx], offset, value);
-}
-
-/*
- * fast interrupt HSCX stuff goes here
- */
-
-#define READHSCX(cs, nr, reg) readreg(cs->hw.teles3.hscx[nr], reg)
-#define WRITEHSCX(cs, nr, reg, data) writereg(cs->hw.teles3.hscx[nr], reg, data)
-#define READHSCXFIFO(cs, nr, ptr, cnt) read_fifo(cs->hw.teles3.hscxfifo[nr], ptr, cnt)
-#define WRITEHSCXFIFO(cs, nr, ptr, cnt) write_fifo(cs->hw.teles3.hscxfifo[nr], ptr, cnt)
-
-#include "hscx_irq.c"
-
-static irqreturn_t
-teles3_interrupt(int intno, void *dev_id)
-{
-#define MAXCOUNT 5
- struct IsdnCardState *cs = dev_id;
- u_char val;
- u_long flags;
- int count = 0;
-
- spin_lock_irqsave(&cs->lock, flags);
- val = readreg(cs->hw.teles3.hscx[1], HSCX_ISTA);
-Start_HSCX:
- if (val)
- hscx_int_main(cs, val);
- val = readreg(cs->hw.teles3.isac, ISAC_ISTA);
-Start_ISAC:
- if (val)
- isac_interrupt(cs, val);
- count++;
- val = readreg(cs->hw.teles3.hscx[1], HSCX_ISTA);
- if (val && count < MAXCOUNT) {
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "HSCX IntStat after IntRoutine");
- goto Start_HSCX;
- }
- val = readreg(cs->hw.teles3.isac, ISAC_ISTA);
- if (val && count < MAXCOUNT) {
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "ISAC IntStat after IntRoutine");
- goto Start_ISAC;
- }
- if (count >= MAXCOUNT)
- printk(KERN_WARNING "Teles3: more than %d loops in teles3_interrupt\n", count);
- writereg(cs->hw.teles3.hscx[0], HSCX_MASK, 0xFF);
- writereg(cs->hw.teles3.hscx[1], HSCX_MASK, 0xFF);
- writereg(cs->hw.teles3.isac, ISAC_MASK, 0xFF);
- writereg(cs->hw.teles3.isac, ISAC_MASK, 0x0);
- writereg(cs->hw.teles3.hscx[0], HSCX_MASK, 0x0);
- writereg(cs->hw.teles3.hscx[1], HSCX_MASK, 0x0);
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
-}
-
-static inline void
-release_ioregs(struct IsdnCardState *cs, int mask)
-{
- if (mask & 1)
- release_region(cs->hw.teles3.isac + 32, 32);
- if (mask & 2)
- release_region(cs->hw.teles3.hscx[0] + 32, 32);
- if (mask & 4)
- release_region(cs->hw.teles3.hscx[1] + 32, 32);
-}
-
-static void
-release_io_teles3(struct IsdnCardState *cs)
-{
- if (cs->typ == ISDN_CTYPE_TELESPCMCIA) {
- release_region(cs->hw.teles3.hscx[1], 96);
- } else {
- if (cs->hw.teles3.cfg_reg) {
- if (cs->typ == ISDN_CTYPE_COMPAQ_ISA) {
- release_region(cs->hw.teles3.cfg_reg, 1);
- } else {
- release_region(cs->hw.teles3.cfg_reg, 8);
- }
- }
- release_ioregs(cs, 0x7);
- }
-}
-
-static int
-reset_teles3(struct IsdnCardState *cs)
-{
- u_char irqcfg;
-
- if (cs->typ != ISDN_CTYPE_TELESPCMCIA) {
- if ((cs->hw.teles3.cfg_reg) && (cs->typ != ISDN_CTYPE_COMPAQ_ISA)) {
- switch (cs->irq) {
- case 2:
- case 9:
- irqcfg = 0x00;
- break;
- case 3:
- irqcfg = 0x02;
- break;
- case 4:
- irqcfg = 0x04;
- break;
- case 5:
- irqcfg = 0x06;
- break;
- case 10:
- irqcfg = 0x08;
- break;
- case 11:
- irqcfg = 0x0A;
- break;
- case 12:
- irqcfg = 0x0C;
- break;
- case 15:
- irqcfg = 0x0E;
- break;
- default:
- return (1);
- }
- byteout(cs->hw.teles3.cfg_reg + 4, irqcfg);
- HZDELAY(HZ / 10 + 1);
- byteout(cs->hw.teles3.cfg_reg + 4, irqcfg | 1);
- HZDELAY(HZ / 10 + 1);
- } else if (cs->typ == ISDN_CTYPE_COMPAQ_ISA) {
- byteout(cs->hw.teles3.cfg_reg, 0xff);
- HZDELAY(2);
- byteout(cs->hw.teles3.cfg_reg, 0x00);
- HZDELAY(2);
- } else {
- /* Reset off for 16.3 PnP , thanks to Georg Acher */
- byteout(cs->hw.teles3.isac + 0x3c, 0);
- HZDELAY(2);
- byteout(cs->hw.teles3.isac + 0x3c, 1);
- HZDELAY(2);
- }
- }
- return (0);
-}
-
-static int
-Teles_card_msg(struct IsdnCardState *cs, int mt, void *arg)
-{
- u_long flags;
-
- switch (mt) {
- case CARD_RESET:
- spin_lock_irqsave(&cs->lock, flags);
- reset_teles3(cs);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_RELEASE:
- release_io_teles3(cs);
- return (0);
- case CARD_INIT:
- spin_lock_irqsave(&cs->lock, flags);
- inithscxisac(cs, 3);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_TEST:
- return (0);
- }
- return (0);
-}
-
-#ifdef __ISAPNP__
-
-static struct isapnp_device_id teles_ids[] = {
- { ISAPNP_VENDOR('T', 'A', 'G'), ISAPNP_FUNCTION(0x2110),
- ISAPNP_VENDOR('T', 'A', 'G'), ISAPNP_FUNCTION(0x2110),
- (unsigned long) "Teles 16.3 PnP" },
- { ISAPNP_VENDOR('C', 'T', 'X'), ISAPNP_FUNCTION(0x0),
- ISAPNP_VENDOR('C', 'T', 'X'), ISAPNP_FUNCTION(0x0),
- (unsigned long) "Creatix 16.3 PnP" },
- { ISAPNP_VENDOR('C', 'P', 'Q'), ISAPNP_FUNCTION(0x1002),
- ISAPNP_VENDOR('C', 'P', 'Q'), ISAPNP_FUNCTION(0x1002),
- (unsigned long) "Compaq ISDN S0" },
- { 0, }
-};
-
-static struct isapnp_device_id *ipid = &teles_ids[0];
-static struct pnp_card *pnp_c = NULL;
-#endif
-
-int setup_teles3(struct IsdnCard *card)
-{
- u_char val;
- struct IsdnCardState *cs = card->cs;
- char tmp[64];
-
- strcpy(tmp, teles3_revision);
- printk(KERN_INFO "HiSax: Teles IO driver Rev. %s\n", HiSax_getrev(tmp));
- if ((cs->typ != ISDN_CTYPE_16_3) && (cs->typ != ISDN_CTYPE_PNP)
- && (cs->typ != ISDN_CTYPE_TELESPCMCIA) && (cs->typ != ISDN_CTYPE_COMPAQ_ISA))
- return (0);
-
-#ifdef __ISAPNP__
- if (!card->para[1] && isapnp_present()) {
- struct pnp_dev *pnp_d;
- while (ipid->card_vendor) {
- if ((pnp_c = pnp_find_card(ipid->card_vendor,
- ipid->card_device, pnp_c))) {
- pnp_d = NULL;
- if ((pnp_d = pnp_find_dev(pnp_c,
- ipid->vendor, ipid->function, pnp_d))) {
- int err;
-
- printk(KERN_INFO "HiSax: %s detected\n",
- (char *)ipid->driver_data);
- pnp_disable_dev(pnp_d);
- err = pnp_activate_dev(pnp_d);
- if (err < 0) {
- printk(KERN_WARNING "%s: pnp_activate_dev ret(%d)\n",
- __func__, err);
- return (0);
- }
- card->para[3] = pnp_port_start(pnp_d, 2);
- card->para[2] = pnp_port_start(pnp_d, 1);
- card->para[1] = pnp_port_start(pnp_d, 0);
- card->para[0] = pnp_irq(pnp_d, 0);
- if (card->para[0] == -1 || !card->para[1] || !card->para[2]) {
- printk(KERN_ERR "Teles PnP:some resources are missing %ld/%lx/%lx\n",
- card->para[0], card->para[1], card->para[2]);
- pnp_disable_dev(pnp_d);
- return (0);
- }
- break;
- } else {
- printk(KERN_ERR "Teles PnP: PnP error card found, no device\n");
- }
- }
- ipid++;
- pnp_c = NULL;
- }
- if (!ipid->card_vendor) {
- printk(KERN_INFO "Teles PnP: no ISAPnP card found\n");
- return (0);
- }
- }
-#endif
- if (cs->typ == ISDN_CTYPE_16_3) {
- cs->hw.teles3.cfg_reg = card->para[1];
- switch (cs->hw.teles3.cfg_reg) {
- case 0x180:
- case 0x280:
- case 0x380:
- cs->hw.teles3.cfg_reg |= 0xc00;
- break;
- }
- cs->hw.teles3.isac = cs->hw.teles3.cfg_reg - 0x420;
- cs->hw.teles3.hscx[0] = cs->hw.teles3.cfg_reg - 0xc20;
- cs->hw.teles3.hscx[1] = cs->hw.teles3.cfg_reg - 0x820;
- } else if (cs->typ == ISDN_CTYPE_TELESPCMCIA) {
- cs->hw.teles3.cfg_reg = 0;
- cs->hw.teles3.hscx[0] = card->para[1] - 0x20;
- cs->hw.teles3.hscx[1] = card->para[1];
- cs->hw.teles3.isac = card->para[1] + 0x20;
- } else if (cs->typ == ISDN_CTYPE_COMPAQ_ISA) {
- cs->hw.teles3.cfg_reg = card->para[3];
- cs->hw.teles3.isac = card->para[2] - 32;
- cs->hw.teles3.hscx[0] = card->para[1] - 32;
- cs->hw.teles3.hscx[1] = card->para[1];
- } else { /* PNP */
- cs->hw.teles3.cfg_reg = 0;
- cs->hw.teles3.isac = card->para[1] - 32;
- cs->hw.teles3.hscx[0] = card->para[2] - 32;
- cs->hw.teles3.hscx[1] = card->para[2];
- }
- cs->irq = card->para[0];
- cs->hw.teles3.isacfifo = cs->hw.teles3.isac + 0x3e;
- cs->hw.teles3.hscxfifo[0] = cs->hw.teles3.hscx[0] + 0x3e;
- cs->hw.teles3.hscxfifo[1] = cs->hw.teles3.hscx[1] + 0x3e;
- if (cs->typ == ISDN_CTYPE_TELESPCMCIA) {
- if (!request_region(cs->hw.teles3.hscx[1], 96, "HiSax Teles PCMCIA")) {
- printk(KERN_WARNING
- "HiSax: %s ports %x-%x already in use\n",
- CardType[cs->typ],
- cs->hw.teles3.hscx[1],
- cs->hw.teles3.hscx[1] + 96);
- return (0);
- }
- cs->irq_flags |= IRQF_SHARED; /* cardbus can share */
- } else {
- if (cs->hw.teles3.cfg_reg) {
- if (cs->typ == ISDN_CTYPE_COMPAQ_ISA) {
- if (!request_region(cs->hw.teles3.cfg_reg, 1, "teles3 cfg")) {
- printk(KERN_WARNING
- "HiSax: %s config port %x already in use\n",
- CardType[card->typ],
- cs->hw.teles3.cfg_reg);
- return (0);
- }
- } else {
- if (!request_region(cs->hw.teles3.cfg_reg, 8, "teles3 cfg")) {
- printk(KERN_WARNING
- "HiSax: %s config port %x-%x already in use\n",
- CardType[card->typ],
- cs->hw.teles3.cfg_reg,
- cs->hw.teles3.cfg_reg + 8);
- return (0);
- }
- }
- }
- if (!request_region(cs->hw.teles3.isac + 32, 32, "HiSax isac")) {
- printk(KERN_WARNING
- "HiSax: %s isac ports %x-%x already in use\n",
- CardType[cs->typ],
- cs->hw.teles3.isac + 32,
- cs->hw.teles3.isac + 64);
- if (cs->hw.teles3.cfg_reg) {
- if (cs->typ == ISDN_CTYPE_COMPAQ_ISA) {
- release_region(cs->hw.teles3.cfg_reg, 1);
- } else {
- release_region(cs->hw.teles3.cfg_reg, 8);
- }
- }
- return (0);
- }
- if (!request_region(cs->hw.teles3.hscx[0] + 32, 32, "HiSax hscx A")) {
- printk(KERN_WARNING
- "HiSax: %s hscx A ports %x-%x already in use\n",
- CardType[cs->typ],
- cs->hw.teles3.hscx[0] + 32,
- cs->hw.teles3.hscx[0] + 64);
- if (cs->hw.teles3.cfg_reg) {
- if (cs->typ == ISDN_CTYPE_COMPAQ_ISA) {
- release_region(cs->hw.teles3.cfg_reg, 1);
- } else {
- release_region(cs->hw.teles3.cfg_reg, 8);
- }
- }
- release_ioregs(cs, 1);
- return (0);
- }
- if (!request_region(cs->hw.teles3.hscx[1] + 32, 32, "HiSax hscx B")) {
- printk(KERN_WARNING
- "HiSax: %s hscx B ports %x-%x already in use\n",
- CardType[cs->typ],
- cs->hw.teles3.hscx[1] + 32,
- cs->hw.teles3.hscx[1] + 64);
- if (cs->hw.teles3.cfg_reg) {
- if (cs->typ == ISDN_CTYPE_COMPAQ_ISA) {
- release_region(cs->hw.teles3.cfg_reg, 1);
- } else {
- release_region(cs->hw.teles3.cfg_reg, 8);
- }
- }
- release_ioregs(cs, 3);
- return (0);
- }
- }
- if ((cs->hw.teles3.cfg_reg) && (cs->typ != ISDN_CTYPE_COMPAQ_ISA)) {
- if ((val = bytein(cs->hw.teles3.cfg_reg + 0)) != 0x51) {
- printk(KERN_WARNING "Teles: 16.3 Byte at %x is %x\n",
- cs->hw.teles3.cfg_reg + 0, val);
- release_io_teles3(cs);
- return (0);
- }
- if ((val = bytein(cs->hw.teles3.cfg_reg + 1)) != 0x93) {
- printk(KERN_WARNING "Teles: 16.3 Byte at %x is %x\n",
- cs->hw.teles3.cfg_reg + 1, val);
- release_io_teles3(cs);
- return (0);
- }
- val = bytein(cs->hw.teles3.cfg_reg + 2);/* 0x1e=without AB
- * 0x1f=with AB
- * 0x1c 16.3 ???
- * 0x39 16.3 1.1
- * 0x38 16.3 1.3
- * 0x46 16.3 with AB + Video (Teles-Vision)
- */
- if (val != 0x46 && val != 0x39 && val != 0x38 && val != 0x1c && val != 0x1e && val != 0x1f) {
- printk(KERN_WARNING "Teles: 16.3 Byte at %x is %x\n",
- cs->hw.teles3.cfg_reg + 2, val);
- release_io_teles3(cs);
- return (0);
- }
- }
- printk(KERN_INFO
- "HiSax: %s config irq:%d isac:0x%X cfg:0x%X\n",
- CardType[cs->typ], cs->irq,
- cs->hw.teles3.isac + 32, cs->hw.teles3.cfg_reg);
- printk(KERN_INFO
- "HiSax: hscx A:0x%X hscx B:0x%X\n",
- cs->hw.teles3.hscx[0] + 32, cs->hw.teles3.hscx[1] + 32);
-
- setup_isac(cs);
- if (reset_teles3(cs)) {
- printk(KERN_WARNING "Teles3: wrong IRQ\n");
- release_io_teles3(cs);
- return (0);
- }
- cs->readisac = &ReadISAC;
- cs->writeisac = &WriteISAC;
- cs->readisacfifo = &ReadISACfifo;
- cs->writeisacfifo = &WriteISACfifo;
- cs->BC_Read_Reg = &ReadHSCX;
- cs->BC_Write_Reg = &WriteHSCX;
- cs->BC_Send_Data = &hscx_fill_fifo;
- cs->cardmsg = &Teles_card_msg;
- cs->irq_func = &teles3_interrupt;
- ISACVersion(cs, "Teles3:");
- if (HscxVersion(cs, "Teles3:")) {
- printk(KERN_WARNING
- "Teles3: wrong HSCX versions check IO address\n");
- release_io_teles3(cs);
- return (0);
- }
- return (1);
-}
diff --git a/drivers/isdn/hisax/teles_cs.c b/drivers/isdn/hisax/teles_cs.c
deleted file mode 100644
index bcc37e955622..000000000000
--- a/drivers/isdn/hisax/teles_cs.c
+++ /dev/null
@@ -1,201 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-/* $Id: teles_cs.c,v 1.1.2.2 2004/01/25 15:07:06 keil Exp $ */
-/*======================================================================
-
- A teles S0 PCMCIA client driver
-
- Based on skeleton by David Hinds, dhinds@allegro.stanford.edu
- Written by Christof Petig, christof.petig@wtal.de
-
- Also inspired by ELSA PCMCIA driver
- by Klaus Lichtenwalder <Lichtenwalder@ACM.org>
-
- Extensions to new hisax_pcmcia by Karsten Keil
-
- minor changes to be compatible with kernel 2.4.x
- by Jan.Schubert@GMX.li
-
- ======================================================================*/
-
-#include <linux/kernel.h>
-#include <linux/module.h>
-#include <linux/init.h>
-#include <linux/ptrace.h>
-#include <linux/slab.h>
-#include <linux/string.h>
-#include <linux/timer.h>
-#include <linux/ioport.h>
-#include <asm/io.h>
-
-#include <pcmcia/cistpl.h>
-#include <pcmcia/cisreg.h>
-#include <pcmcia/ds.h>
-#include "hisax_cfg.h"
-
-MODULE_DESCRIPTION("ISDN4Linux: PCMCIA client driver for Teles PCMCIA cards");
-MODULE_AUTHOR("Christof Petig, christof.petig@wtal.de, Karsten Keil, kkeil@suse.de");
-MODULE_LICENSE("GPL");
-
-
-/*====================================================================*/
-
-/* Parameters that can be set with 'insmod' */
-
-static int protocol = 2; /* EURO-ISDN Default */
-module_param(protocol, int, 0);
-
-static int teles_cs_config(struct pcmcia_device *link);
-static void teles_cs_release(struct pcmcia_device *link);
-static void teles_detach(struct pcmcia_device *p_dev);
-
-typedef struct local_info_t {
- struct pcmcia_device *p_dev;
- int busy;
- int cardnr;
-} local_info_t;
-
-static int teles_probe(struct pcmcia_device *link)
-{
- local_info_t *local;
-
- dev_dbg(&link->dev, "teles_attach()\n");
-
- /* Allocate space for private device-specific data */
- local = kzalloc(sizeof(local_info_t), GFP_KERNEL);
- if (!local) return -ENOMEM;
- local->cardnr = -1;
-
- local->p_dev = link;
- link->priv = local;
-
- link->config_flags |= CONF_ENABLE_IRQ | CONF_AUTO_SET_IO;
-
- return teles_cs_config(link);
-} /* teles_attach */
-
-static void teles_detach(struct pcmcia_device *link)
-{
- local_info_t *info = link->priv;
-
- dev_dbg(&link->dev, "teles_detach(0x%p)\n", link);
-
- info->busy = 1;
- teles_cs_release(link);
-
- kfree(info);
-} /* teles_detach */
-
-static int teles_cs_configcheck(struct pcmcia_device *p_dev, void *priv_data)
-{
- int j;
-
- p_dev->io_lines = 5;
- p_dev->resource[0]->end = 96;
- p_dev->resource[0]->flags &= IO_DATA_PATH_WIDTH;
- p_dev->resource[0]->flags |= IO_DATA_PATH_WIDTH_AUTO;
-
- if ((p_dev->resource[0]->end) && p_dev->resource[0]->start) {
- printk(KERN_INFO "(teles_cs: looks like the 96 model)\n");
- if (!pcmcia_request_io(p_dev))
- return 0;
- } else {
- printk(KERN_INFO "(teles_cs: looks like the 97 model)\n");
- for (j = 0x2f0; j > 0x100; j -= 0x10) {
- p_dev->resource[0]->start = j;
- if (!pcmcia_request_io(p_dev))
- return 0;
- }
- }
- return -ENODEV;
-}
-
-static int teles_cs_config(struct pcmcia_device *link)
-{
- int i;
- IsdnCard_t icard;
-
- dev_dbg(&link->dev, "teles_config(0x%p)\n", link);
-
- i = pcmcia_loop_config(link, teles_cs_configcheck, NULL);
- if (i != 0)
- goto cs_failed;
-
- if (!link->irq)
- goto cs_failed;
-
- i = pcmcia_enable_device(link);
- if (i != 0)
- goto cs_failed;
-
- icard.para[0] = link->irq;
- icard.para[1] = link->resource[0]->start;
- icard.protocol = protocol;
- icard.typ = ISDN_CTYPE_TELESPCMCIA;
-
- i = hisax_init_pcmcia(link, &(((local_info_t *)link->priv)->busy), &icard);
- if (i < 0) {
- printk(KERN_ERR "teles_cs: failed to initialize Teles PCMCIA %d at i/o %#x\n",
- i, (unsigned int) link->resource[0]->start);
- teles_cs_release(link);
- return -ENODEV;
- }
-
- ((local_info_t *)link->priv)->cardnr = i;
- return 0;
-
-cs_failed:
- teles_cs_release(link);
- return -ENODEV;
-} /* teles_cs_config */
-
-static void teles_cs_release(struct pcmcia_device *link)
-{
- local_info_t *local = link->priv;
-
- dev_dbg(&link->dev, "teles_cs_release(0x%p)\n", link);
-
- if (local) {
- if (local->cardnr >= 0) {
- /* no unregister function with hisax */
- HiSax_closecard(local->cardnr);
- }
- }
-
- pcmcia_disable_device(link);
-} /* teles_cs_release */
-
-static int teles_suspend(struct pcmcia_device *link)
-{
- local_info_t *dev = link->priv;
-
- dev->busy = 1;
-
- return 0;
-}
-
-static int teles_resume(struct pcmcia_device *link)
-{
- local_info_t *dev = link->priv;
-
- dev->busy = 0;
-
- return 0;
-}
-
-
-static const struct pcmcia_device_id teles_ids[] = {
- PCMCIA_DEVICE_PROD_ID12("TELES", "S0/PC", 0x67b50eae, 0xe9e70119),
- PCMCIA_DEVICE_NULL,
-};
-MODULE_DEVICE_TABLE(pcmcia, teles_ids);
-
-static struct pcmcia_driver teles_cs_driver = {
- .owner = THIS_MODULE,
- .name = "teles_cs",
- .probe = teles_probe,
- .remove = teles_detach,
- .id_table = teles_ids,
- .suspend = teles_suspend,
- .resume = teles_resume,
-};
-module_pcmcia_driver(teles_cs_driver);
diff --git a/drivers/isdn/hisax/telespci.c b/drivers/isdn/hisax/telespci.c
deleted file mode 100644
index 33eeb4602c7e..000000000000
--- a/drivers/isdn/hisax/telespci.c
+++ /dev/null
@@ -1,349 +0,0 @@
-/* $Id: telespci.c,v 2.23.2.3 2004/01/13 14:31:26 keil Exp $
- *
- * low level stuff for Teles PCI isdn cards
- *
- * Author Ton van Rosmalen
- * Karsten Keil
- * Copyright by Ton van Rosmalen
- * by Karsten Keil <keil@isdn4linux.de>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#include <linux/init.h>
-#include "hisax.h"
-#include "isac.h"
-#include "hscx.h"
-#include "isdnl1.h"
-#include <linux/pci.h>
-
-static const char *telespci_revision = "$Revision: 2.23.2.3 $";
-
-#define ZORAN_PO_RQ_PEN 0x02000000
-#define ZORAN_PO_WR 0x00800000
-#define ZORAN_PO_GID0 0x00000000
-#define ZORAN_PO_GID1 0x00100000
-#define ZORAN_PO_GREG0 0x00000000
-#define ZORAN_PO_GREG1 0x00010000
-#define ZORAN_PO_DMASK 0xFF
-
-#define WRITE_ADDR_ISAC (ZORAN_PO_WR | ZORAN_PO_GID0 | ZORAN_PO_GREG0)
-#define READ_DATA_ISAC (ZORAN_PO_GID0 | ZORAN_PO_GREG1)
-#define WRITE_DATA_ISAC (ZORAN_PO_WR | ZORAN_PO_GID0 | ZORAN_PO_GREG1)
-#define WRITE_ADDR_HSCX (ZORAN_PO_WR | ZORAN_PO_GID1 | ZORAN_PO_GREG0)
-#define READ_DATA_HSCX (ZORAN_PO_GID1 | ZORAN_PO_GREG1)
-#define WRITE_DATA_HSCX (ZORAN_PO_WR | ZORAN_PO_GID1 | ZORAN_PO_GREG1)
-
-#define ZORAN_WAIT_NOBUSY do { \
- portdata = readl(adr + 0x200); \
- } while (portdata & ZORAN_PO_RQ_PEN)
-
-static inline u_char
-readisac(void __iomem *adr, u_char off)
-{
- register unsigned int portdata;
-
- ZORAN_WAIT_NOBUSY;
-
- /* set address for ISAC */
- writel(WRITE_ADDR_ISAC | off, adr + 0x200);
- ZORAN_WAIT_NOBUSY;
-
- /* read data from ISAC */
- writel(READ_DATA_ISAC, adr + 0x200);
- ZORAN_WAIT_NOBUSY;
- return ((u_char)(portdata & ZORAN_PO_DMASK));
-}
-
-static inline void
-writeisac(void __iomem *adr, u_char off, u_char data)
-{
- register unsigned int portdata;
-
- ZORAN_WAIT_NOBUSY;
-
- /* set address for ISAC */
- writel(WRITE_ADDR_ISAC | off, adr + 0x200);
- ZORAN_WAIT_NOBUSY;
-
- /* write data to ISAC */
- writel(WRITE_DATA_ISAC | data, adr + 0x200);
- ZORAN_WAIT_NOBUSY;
-}
-
-static inline u_char
-readhscx(void __iomem *adr, int hscx, u_char off)
-{
- register unsigned int portdata;
-
- ZORAN_WAIT_NOBUSY;
- /* set address for HSCX */
- writel(WRITE_ADDR_HSCX | ((hscx ? 0x40 : 0) + off), adr + 0x200);
- ZORAN_WAIT_NOBUSY;
-
- /* read data from HSCX */
- writel(READ_DATA_HSCX, adr + 0x200);
- ZORAN_WAIT_NOBUSY;
- return ((u_char)(portdata & ZORAN_PO_DMASK));
-}
-
-static inline void
-writehscx(void __iomem *adr, int hscx, u_char off, u_char data)
-{
- register unsigned int portdata;
-
- ZORAN_WAIT_NOBUSY;
- /* set address for HSCX */
- writel(WRITE_ADDR_HSCX | ((hscx ? 0x40 : 0) + off), adr + 0x200);
- ZORAN_WAIT_NOBUSY;
-
- /* write data to HSCX */
- writel(WRITE_DATA_HSCX | data, adr + 0x200);
- ZORAN_WAIT_NOBUSY;
-}
-
-static inline void
-read_fifo_isac(void __iomem *adr, u_char *data, int size)
-{
- register unsigned int portdata;
- register int i;
-
- ZORAN_WAIT_NOBUSY;
- /* read data from ISAC */
- for (i = 0; i < size; i++) {
- /* set address for ISAC fifo */
- writel(WRITE_ADDR_ISAC | 0x1E, adr + 0x200);
- ZORAN_WAIT_NOBUSY;
- writel(READ_DATA_ISAC, adr + 0x200);
- ZORAN_WAIT_NOBUSY;
- data[i] = (u_char)(portdata & ZORAN_PO_DMASK);
- }
-}
-
-static void
-write_fifo_isac(void __iomem *adr, u_char *data, int size)
-{
- register unsigned int portdata;
- register int i;
-
- ZORAN_WAIT_NOBUSY;
- /* write data to ISAC */
- for (i = 0; i < size; i++) {
- /* set address for ISAC fifo */
- writel(WRITE_ADDR_ISAC | 0x1E, adr + 0x200);
- ZORAN_WAIT_NOBUSY;
- writel(WRITE_DATA_ISAC | data[i], adr + 0x200);
- ZORAN_WAIT_NOBUSY;
- }
-}
-
-static inline void
-read_fifo_hscx(void __iomem *adr, int hscx, u_char *data, int size)
-{
- register unsigned int portdata;
- register int i;
-
- ZORAN_WAIT_NOBUSY;
- /* read data from HSCX */
- for (i = 0; i < size; i++) {
- /* set address for HSCX fifo */
- writel(WRITE_ADDR_HSCX | (hscx ? 0x5F : 0x1F), adr + 0x200);
- ZORAN_WAIT_NOBUSY;
- writel(READ_DATA_HSCX, adr + 0x200);
- ZORAN_WAIT_NOBUSY;
- data[i] = (u_char) (portdata & ZORAN_PO_DMASK);
- }
-}
-
-static inline void
-write_fifo_hscx(void __iomem *adr, int hscx, u_char *data, int size)
-{
- unsigned int portdata;
- register int i;
-
- ZORAN_WAIT_NOBUSY;
- /* write data to HSCX */
- for (i = 0; i < size; i++) {
- /* set address for HSCX fifo */
- writel(WRITE_ADDR_HSCX | (hscx ? 0x5F : 0x1F), adr + 0x200);
- ZORAN_WAIT_NOBUSY;
- writel(WRITE_DATA_HSCX | data[i], adr + 0x200);
- ZORAN_WAIT_NOBUSY;
- udelay(10);
- }
-}
-
-/* Interface functions */
-
-static u_char
-ReadISAC(struct IsdnCardState *cs, u_char offset)
-{
- return (readisac(cs->hw.teles0.membase, offset));
-}
-
-static void
-WriteISAC(struct IsdnCardState *cs, u_char offset, u_char value)
-{
- writeisac(cs->hw.teles0.membase, offset, value);
-}
-
-static void
-ReadISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- read_fifo_isac(cs->hw.teles0.membase, data, size);
-}
-
-static void
-WriteISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- write_fifo_isac(cs->hw.teles0.membase, data, size);
-}
-
-static u_char
-ReadHSCX(struct IsdnCardState *cs, int hscx, u_char offset)
-{
- return (readhscx(cs->hw.teles0.membase, hscx, offset));
-}
-
-static void
-WriteHSCX(struct IsdnCardState *cs, int hscx, u_char offset, u_char value)
-{
- writehscx(cs->hw.teles0.membase, hscx, offset, value);
-}
-
-/*
- * fast interrupt HSCX stuff goes here
- */
-
-#define READHSCX(cs, nr, reg) readhscx(cs->hw.teles0.membase, nr, reg)
-#define WRITEHSCX(cs, nr, reg, data) writehscx(cs->hw.teles0.membase, nr, reg, data)
-#define READHSCXFIFO(cs, nr, ptr, cnt) read_fifo_hscx(cs->hw.teles0.membase, nr, ptr, cnt)
-#define WRITEHSCXFIFO(cs, nr, ptr, cnt) write_fifo_hscx(cs->hw.teles0.membase, nr, ptr, cnt)
-
-#include "hscx_irq.c"
-
-static irqreturn_t
-telespci_interrupt(int intno, void *dev_id)
-{
- struct IsdnCardState *cs = dev_id;
- u_char hval, ival;
- u_long flags;
-
- spin_lock_irqsave(&cs->lock, flags);
- hval = readhscx(cs->hw.teles0.membase, 1, HSCX_ISTA);
- if (hval)
- hscx_int_main(cs, hval);
- ival = readisac(cs->hw.teles0.membase, ISAC_ISTA);
- if ((hval | ival) == 0) {
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_NONE;
- }
- if (ival)
- isac_interrupt(cs, ival);
- /* Clear interrupt register for Zoran PCI controller */
- writel(0x70000000, cs->hw.teles0.membase + 0x3C);
-
- writehscx(cs->hw.teles0.membase, 0, HSCX_MASK, 0xFF);
- writehscx(cs->hw.teles0.membase, 1, HSCX_MASK, 0xFF);
- writeisac(cs->hw.teles0.membase, ISAC_MASK, 0xFF);
- writeisac(cs->hw.teles0.membase, ISAC_MASK, 0x0);
- writehscx(cs->hw.teles0.membase, 0, HSCX_MASK, 0x0);
- writehscx(cs->hw.teles0.membase, 1, HSCX_MASK, 0x0);
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
-}
-
-static void
-release_io_telespci(struct IsdnCardState *cs)
-{
- iounmap(cs->hw.teles0.membase);
-}
-
-static int
-TelesPCI_card_msg(struct IsdnCardState *cs, int mt, void *arg)
-{
- u_long flags;
-
- switch (mt) {
- case CARD_RESET:
- return (0);
- case CARD_RELEASE:
- release_io_telespci(cs);
- return (0);
- case CARD_INIT:
- spin_lock_irqsave(&cs->lock, flags);
- inithscxisac(cs, 3);
- spin_unlock_irqrestore(&cs->lock, flags);
- return (0);
- case CARD_TEST:
- return (0);
- }
- return (0);
-}
-
-static struct pci_dev *dev_tel = NULL;
-
-int setup_telespci(struct IsdnCard *card)
-{
- struct IsdnCardState *cs = card->cs;
- char tmp[64];
-
- strcpy(tmp, telespci_revision);
- printk(KERN_INFO "HiSax: Teles/PCI driver Rev. %s\n", HiSax_getrev(tmp));
- if (cs->typ != ISDN_CTYPE_TELESPCI)
- return (0);
-
- if ((dev_tel = hisax_find_pci_device(PCI_VENDOR_ID_ZORAN, PCI_DEVICE_ID_ZORAN_36120, dev_tel))) {
- if (pci_enable_device(dev_tel))
- return (0);
- cs->irq = dev_tel->irq;
- if (!cs->irq) {
- printk(KERN_WARNING "Teles: No IRQ for PCI card found\n");
- return (0);
- }
- cs->hw.teles0.membase = ioremap(pci_resource_start(dev_tel, 0),
- PAGE_SIZE);
- printk(KERN_INFO "Found: Zoran, base-address: 0x%llx, irq: 0x%x\n",
- (unsigned long long)pci_resource_start(dev_tel, 0),
- dev_tel->irq);
- } else {
- printk(KERN_WARNING "TelesPCI: No PCI card found\n");
- return (0);
- }
-
- /* Initialize Zoran PCI controller */
- writel(0x00000000, cs->hw.teles0.membase + 0x28);
- writel(0x01000000, cs->hw.teles0.membase + 0x28);
- writel(0x01000000, cs->hw.teles0.membase + 0x28);
- writel(0x7BFFFFFF, cs->hw.teles0.membase + 0x2C);
- writel(0x70000000, cs->hw.teles0.membase + 0x3C);
- writel(0x61000000, cs->hw.teles0.membase + 0x40);
- /* writel(0x00800000, cs->hw.teles0.membase + 0x200); */
-
- printk(KERN_INFO
- "HiSax: Teles PCI config irq:%d mem:%p\n",
- cs->irq,
- cs->hw.teles0.membase);
-
- setup_isac(cs);
- cs->readisac = &ReadISAC;
- cs->writeisac = &WriteISAC;
- cs->readisacfifo = &ReadISACfifo;
- cs->writeisacfifo = &WriteISACfifo;
- cs->BC_Read_Reg = &ReadHSCX;
- cs->BC_Write_Reg = &WriteHSCX;
- cs->BC_Send_Data = &hscx_fill_fifo;
- cs->cardmsg = &TelesPCI_card_msg;
- cs->irq_func = &telespci_interrupt;
- cs->irq_flags |= IRQF_SHARED;
- ISACVersion(cs, "TelesPCI:");
- if (HscxVersion(cs, "TelesPCI:")) {
- printk(KERN_WARNING
- "TelesPCI: wrong HSCX versions check IO/MEM addresses\n");
- release_io_telespci(cs);
- return (0);
- }
- return (1);
-}
diff --git a/drivers/isdn/hisax/w6692.c b/drivers/isdn/hisax/w6692.c
deleted file mode 100644
index 36eefaa3a7d9..000000000000
--- a/drivers/isdn/hisax/w6692.c
+++ /dev/null
@@ -1,1085 +0,0 @@
-/* $Id: w6692.c,v 1.18.2.4 2004/02/11 13:21:34 keil Exp $
- *
- * Winbond W6692 specific routines
- *
- * Author Petr Novak
- * Copyright by Petr Novak <petr.novak@i.cz>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#include <linux/init.h>
-#include "hisax.h"
-#include "w6692.h"
-#include "isdnl1.h"
-#include <linux/interrupt.h>
-#include <linux/pci.h>
-#include <linux/slab.h>
-
-/* table entry in the PCI devices list */
-typedef struct {
- int vendor_id;
- int device_id;
- char *vendor_name;
- char *card_name;
-} PCI_ENTRY;
-
-static const PCI_ENTRY id_list[] =
-{
- {PCI_VENDOR_ID_WINBOND2, PCI_DEVICE_ID_WINBOND2_6692, "Winbond", "W6692"},
- {PCI_VENDOR_ID_DYNALINK, PCI_DEVICE_ID_DYNALINK_IS64PH, "Dynalink/AsusCom", "IS64PH"},
- {0, 0, "U.S.Robotics", "ISDN PCI Card TA"}
-};
-
-#define W6692_SV_USR 0x16ec
-#define W6692_SD_USR 0x3409
-#define W6692_WINBOND 0
-#define W6692_DYNALINK 1
-#define W6692_USR 2
-
-static const char *w6692_revision = "$Revision: 1.18.2.4 $";
-
-#define DBUSY_TIMER_VALUE 80
-
-static char *W6692Ver[] =
-{"W6692 V00", "W6692 V01", "W6692 V10",
- "W6692 V11"};
-
-static void
-W6692Version(struct IsdnCardState *cs, char *s)
-{
- int val;
-
- val = cs->readW6692(cs, W_D_RBCH);
- printk(KERN_INFO "%s Winbond W6692 version (%x): %s\n", s, val, W6692Ver[(val >> 6) & 3]);
-}
-
-static void
-ph_command(struct IsdnCardState *cs, unsigned int command)
-{
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "ph_command %x", command);
- cs->writeisac(cs, W_CIX, command);
-}
-
-
-static void
-W6692_new_ph(struct IsdnCardState *cs)
-{
- switch (cs->dc.w6692.ph_state) {
- case (W_L1CMD_RST):
- ph_command(cs, W_L1CMD_DRC);
- l1_msg(cs, HW_RESET | INDICATION, NULL);
- /* fall through */
- case (W_L1IND_CD):
- l1_msg(cs, HW_DEACTIVATE | CONFIRM, NULL);
- break;
- case (W_L1IND_DRD):
- l1_msg(cs, HW_DEACTIVATE | INDICATION, NULL);
- break;
- case (W_L1IND_CE):
- l1_msg(cs, HW_POWERUP | CONFIRM, NULL);
- break;
- case (W_L1IND_LD):
- l1_msg(cs, HW_RSYNC | INDICATION, NULL);
- break;
- case (W_L1IND_ARD):
- l1_msg(cs, HW_INFO2 | INDICATION, NULL);
- break;
- case (W_L1IND_AI8):
- l1_msg(cs, HW_INFO4_P8 | INDICATION, NULL);
- break;
- case (W_L1IND_AI10):
- l1_msg(cs, HW_INFO4_P10 | INDICATION, NULL);
- break;
- default:
- break;
- }
-}
-
-static void
-W6692_bh(struct work_struct *work)
-{
- struct IsdnCardState *cs =
- container_of(work, struct IsdnCardState, tqueue);
- struct PStack *stptr;
-
- if (test_and_clear_bit(D_CLEARBUSY, &cs->event)) {
- if (cs->debug)
- debugl1(cs, "D-Channel Busy cleared");
- stptr = cs->stlist;
- while (stptr != NULL) {
- stptr->l1.l1l2(stptr, PH_PAUSE | CONFIRM, NULL);
- stptr = stptr->next;
- }
- }
- if (test_and_clear_bit(D_L1STATECHANGE, &cs->event))
- W6692_new_ph(cs);
- if (test_and_clear_bit(D_RCVBUFREADY, &cs->event))
- DChannel_proc_rcv(cs);
- if (test_and_clear_bit(D_XMTBUFREADY, &cs->event))
- DChannel_proc_xmt(cs);
-/*
- if (test_and_clear_bit(D_RX_MON1, &cs->event))
- arcofi_fsm(cs, ARCOFI_RX_END, NULL);
- if (test_and_clear_bit(D_TX_MON1, &cs->event))
- arcofi_fsm(cs, ARCOFI_TX_END, NULL);
-*/
-}
-
-static void
-W6692_empty_fifo(struct IsdnCardState *cs, int count)
-{
- u_char *ptr;
-
- if ((cs->debug & L1_DEB_ISAC) && !(cs->debug & L1_DEB_ISAC_FIFO))
- debugl1(cs, "W6692_empty_fifo");
-
- if ((cs->rcvidx + count) >= MAX_DFRAME_LEN_L1) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "W6692_empty_fifo overrun %d",
- cs->rcvidx + count);
- cs->writeW6692(cs, W_D_CMDR, W_D_CMDR_RACK);
- cs->rcvidx = 0;
- return;
- }
- ptr = cs->rcvbuf + cs->rcvidx;
- cs->rcvidx += count;
- cs->readW6692fifo(cs, ptr, count);
- cs->writeW6692(cs, W_D_CMDR, W_D_CMDR_RACK);
- if (cs->debug & L1_DEB_ISAC_FIFO) {
- char *t = cs->dlog;
-
- t += sprintf(t, "W6692_empty_fifo cnt %d", count);
- QuickHex(t, ptr, count);
- debugl1(cs, "%s", cs->dlog);
- }
-}
-
-static void
-W6692_fill_fifo(struct IsdnCardState *cs)
-{
- int count, more;
- u_char *ptr;
-
- if ((cs->debug & L1_DEB_ISAC) && !(cs->debug & L1_DEB_ISAC_FIFO))
- debugl1(cs, "W6692_fill_fifo");
-
- if (!cs->tx_skb)
- return;
-
- count = cs->tx_skb->len;
- if (count <= 0)
- return;
-
- more = 0;
- if (count > W_D_FIFO_THRESH) {
- more = !0;
- count = W_D_FIFO_THRESH;
- }
- ptr = cs->tx_skb->data;
- skb_pull(cs->tx_skb, count);
- cs->tx_cnt += count;
- cs->writeW6692fifo(cs, ptr, count);
- cs->writeW6692(cs, W_D_CMDR, more ? W_D_CMDR_XMS : (W_D_CMDR_XMS | W_D_CMDR_XME));
- if (test_and_set_bit(FLG_DBUSY_TIMER, &cs->HW_Flags)) {
- debugl1(cs, "W6692_fill_fifo dbusytimer running");
- del_timer(&cs->dbusytimer);
- }
- cs->dbusytimer.expires = jiffies + ((DBUSY_TIMER_VALUE * HZ) / 1000);
- add_timer(&cs->dbusytimer);
- if (cs->debug & L1_DEB_ISAC_FIFO) {
- char *t = cs->dlog;
-
- t += sprintf(t, "W6692_fill_fifo cnt %d", count);
- QuickHex(t, ptr, count);
- debugl1(cs, "%s", cs->dlog);
- }
-}
-
-static void
-W6692B_empty_fifo(struct BCState *bcs, int count)
-{
- u_char *ptr;
- struct IsdnCardState *cs = bcs->cs;
-
- if ((cs->debug & L1_DEB_HSCX) && !(cs->debug & L1_DEB_HSCX_FIFO))
- debugl1(cs, "W6692B_empty_fifo");
-
- if (bcs->hw.w6692.rcvidx + count > HSCX_BUFMAX) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "W6692B_empty_fifo: incoming packet too large");
- cs->BC_Write_Reg(cs, bcs->channel, W_B_CMDR, W_B_CMDR_RACK | W_B_CMDR_RACT);
- bcs->hw.w6692.rcvidx = 0;
- return;
- }
- ptr = bcs->hw.w6692.rcvbuf + bcs->hw.w6692.rcvidx;
- bcs->hw.w6692.rcvidx += count;
- READW6692BFIFO(cs, bcs->channel, ptr, count);
- cs->BC_Write_Reg(cs, bcs->channel, W_B_CMDR, W_B_CMDR_RACK | W_B_CMDR_RACT);
- if (cs->debug & L1_DEB_HSCX_FIFO) {
- char *t = bcs->blog;
-
- t += sprintf(t, "W6692B_empty_fifo %c cnt %d",
- bcs->channel + '1', count);
- QuickHex(t, ptr, count);
- debugl1(cs, "%s", bcs->blog);
- }
-}
-
-static void
-W6692B_fill_fifo(struct BCState *bcs)
-{
- struct IsdnCardState *cs = bcs->cs;
- int more, count;
- u_char *ptr;
-
- if (!bcs->tx_skb)
- return;
- if (bcs->tx_skb->len <= 0)
- return;
-
- more = (bcs->mode == L1_MODE_TRANS) ? 1 : 0;
- if (bcs->tx_skb->len > W_B_FIFO_THRESH) {
- more = 1;
- count = W_B_FIFO_THRESH;
- } else
- count = bcs->tx_skb->len;
-
- if ((cs->debug & L1_DEB_HSCX) && !(cs->debug & L1_DEB_HSCX_FIFO))
- debugl1(cs, "W6692B_fill_fifo%s%d", (more ? " " : " last "), count);
-
- ptr = bcs->tx_skb->data;
- skb_pull(bcs->tx_skb, count);
- bcs->tx_cnt -= count;
- bcs->hw.w6692.count += count;
- WRITEW6692BFIFO(cs, bcs->channel, ptr, count);
- cs->BC_Write_Reg(cs, bcs->channel, W_B_CMDR, W_B_CMDR_RACT | W_B_CMDR_XMS | (more ? 0 : W_B_CMDR_XME));
- if (cs->debug & L1_DEB_HSCX_FIFO) {
- char *t = bcs->blog;
-
- t += sprintf(t, "W6692B_fill_fifo %c cnt %d",
- bcs->channel + '1', count);
- QuickHex(t, ptr, count);
- debugl1(cs, "%s", bcs->blog);
- }
-}
-
-static void
-W6692B_interrupt(struct IsdnCardState *cs, u_char bchan)
-{
- u_char val;
- u_char r;
- struct BCState *bcs;
- struct sk_buff *skb;
- int count;
-
- bcs = (cs->bcs->channel == bchan) ? cs->bcs : (cs->bcs + 1);
- val = cs->BC_Read_Reg(cs, bchan, W_B_EXIR);
- debugl1(cs, "W6692B chan %d B_EXIR 0x%02X", bchan, val);
-
- if (!test_bit(BC_FLG_INIT, &bcs->Flag)) {
- debugl1(cs, "W6692B not INIT yet");
- return;
- }
- if (val & W_B_EXI_RME) { /* RME */
- r = cs->BC_Read_Reg(cs, bchan, W_B_STAR);
- if (r & (W_B_STAR_RDOV | W_B_STAR_CRCE | W_B_STAR_RMB)) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "W6692 B STAR %x", r);
- if ((r & W_B_STAR_RDOV) && bcs->mode)
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "W6692 B RDOV mode=%d",
- bcs->mode);
- if (r & W_B_STAR_CRCE)
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "W6692 B CRC error");
- cs->BC_Write_Reg(cs, bchan, W_B_CMDR, W_B_CMDR_RACK | W_B_CMDR_RRST | W_B_CMDR_RACT);
- } else {
- count = cs->BC_Read_Reg(cs, bchan, W_B_RBCL) & (W_B_FIFO_THRESH - 1);
- if (count == 0)
- count = W_B_FIFO_THRESH;
- W6692B_empty_fifo(bcs, count);
- if ((count = bcs->hw.w6692.rcvidx) > 0) {
- if (cs->debug & L1_DEB_HSCX_FIFO)
- debugl1(cs, "W6692 Bchan Frame %d", count);
- if (!(skb = dev_alloc_skb(count)))
- printk(KERN_WARNING "W6692: Bchan receive out of memory\n");
- else {
- skb_put_data(skb,
- bcs->hw.w6692.rcvbuf,
- count);
- skb_queue_tail(&bcs->rqueue, skb);
- }
- }
- }
- bcs->hw.w6692.rcvidx = 0;
- schedule_event(bcs, B_RCVBUFREADY);
- }
- if (val & W_B_EXI_RMR) { /* RMR */
- W6692B_empty_fifo(bcs, W_B_FIFO_THRESH);
- r = cs->BC_Read_Reg(cs, bchan, W_B_STAR);
- if (r & W_B_STAR_RDOV) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "W6692 B RDOV(RMR) mode=%d", bcs->mode);
- cs->BC_Write_Reg(cs, bchan, W_B_CMDR, W_B_CMDR_RACK | W_B_CMDR_RRST | W_B_CMDR_RACT);
- if (bcs->mode != L1_MODE_TRANS)
- bcs->hw.w6692.rcvidx = 0;
- }
- if (bcs->mode == L1_MODE_TRANS) {
- /* receive audio data */
- if (!(skb = dev_alloc_skb(W_B_FIFO_THRESH)))
- printk(KERN_WARNING "HiSax: receive out of memory\n");
- else {
- skb_put_data(skb, bcs->hw.w6692.rcvbuf,
- W_B_FIFO_THRESH);
- skb_queue_tail(&bcs->rqueue, skb);
- }
- bcs->hw.w6692.rcvidx = 0;
- schedule_event(bcs, B_RCVBUFREADY);
- }
- }
- if (val & W_B_EXI_XDUN) { /* XDUN */
- cs->BC_Write_Reg(cs, bchan, W_B_CMDR, W_B_CMDR_XRST | W_B_CMDR_RACT);
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "W6692 B EXIR %x Lost TX", val);
- if (bcs->mode == 1)
- W6692B_fill_fifo(bcs);
- else {
- /* Here we lost a TX interrupt, so
- * restart transmitting the whole frame.
- */
- if (bcs->tx_skb) {
- skb_push(bcs->tx_skb, bcs->hw.w6692.count);
- bcs->tx_cnt += bcs->hw.w6692.count;
- bcs->hw.w6692.count = 0;
- }
- }
- return;
- }
- if (val & W_B_EXI_XFR) { /* XFR */
- r = cs->BC_Read_Reg(cs, bchan, W_B_STAR);
- if (r & W_B_STAR_XDOW) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "W6692 B STAR %x XDOW", r);
- cs->BC_Write_Reg(cs, bchan, W_B_CMDR, W_B_CMDR_XRST | W_B_CMDR_RACT);
- if (bcs->tx_skb && (bcs->mode != 1)) {
- skb_push(bcs->tx_skb, bcs->hw.w6692.count);
- bcs->tx_cnt += bcs->hw.w6692.count;
- bcs->hw.w6692.count = 0;
- }
- }
- if (bcs->tx_skb) {
- if (bcs->tx_skb->len) {
- W6692B_fill_fifo(bcs);
- return;
- } else {
- if (test_bit(FLG_LLI_L1WAKEUP, &bcs->st->lli.flag) &&
- (PACKET_NOACK != bcs->tx_skb->pkt_type)) {
- u_long flags;
- spin_lock_irqsave(&bcs->aclock, flags);
- bcs->ackcnt += bcs->hw.w6692.count;
- spin_unlock_irqrestore(&bcs->aclock, flags);
- schedule_event(bcs, B_ACKPENDING);
- }
- dev_kfree_skb_irq(bcs->tx_skb);
- bcs->hw.w6692.count = 0;
- bcs->tx_skb = NULL;
- }
- }
- if ((bcs->tx_skb = skb_dequeue(&bcs->squeue))) {
- bcs->hw.w6692.count = 0;
- test_and_set_bit(BC_FLG_BUSY, &bcs->Flag);
- W6692B_fill_fifo(bcs);
- } else {
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- schedule_event(bcs, B_XMTBUFREADY);
- }
- }
-}
-
-static irqreturn_t
-W6692_interrupt(int intno, void *dev_id)
-{
- struct IsdnCardState *cs = dev_id;
- u_char val, exval, v1;
- struct sk_buff *skb;
- u_int count;
- u_long flags;
- int icnt = 5;
-
- spin_lock_irqsave(&cs->lock, flags);
- val = cs->readW6692(cs, W_ISTA);
- if (!val) {
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_NONE;
- }
-StartW6692:
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "W6692 ISTA %x", val);
-
- if (val & W_INT_D_RME) { /* RME */
- exval = cs->readW6692(cs, W_D_RSTA);
- if (exval & (W_D_RSTA_RDOV | W_D_RSTA_CRCE | W_D_RSTA_RMB)) {
- if (exval & W_D_RSTA_RDOV)
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "W6692 RDOV");
- if (exval & W_D_RSTA_CRCE)
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "W6692 D-channel CRC error");
- if (exval & W_D_RSTA_RMB)
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "W6692 D-channel ABORT");
- cs->writeW6692(cs, W_D_CMDR, W_D_CMDR_RACK | W_D_CMDR_RRST);
- } else {
- count = cs->readW6692(cs, W_D_RBCL) & (W_D_FIFO_THRESH - 1);
- if (count == 0)
- count = W_D_FIFO_THRESH;
- W6692_empty_fifo(cs, count);
- if ((count = cs->rcvidx) > 0) {
- cs->rcvidx = 0;
- if (!(skb = alloc_skb(count, GFP_ATOMIC)))
- printk(KERN_WARNING "HiSax: D receive out of memory\n");
- else {
- skb_put_data(skb, cs->rcvbuf, count);
- skb_queue_tail(&cs->rq, skb);
- }
- }
- }
- cs->rcvidx = 0;
- schedule_event(cs, D_RCVBUFREADY);
- }
- if (val & W_INT_D_RMR) { /* RMR */
- W6692_empty_fifo(cs, W_D_FIFO_THRESH);
- }
- if (val & W_INT_D_XFR) { /* XFR */
- if (test_and_clear_bit(FLG_DBUSY_TIMER, &cs->HW_Flags))
- del_timer(&cs->dbusytimer);
- if (test_and_clear_bit(FLG_L1_DBUSY, &cs->HW_Flags))
- schedule_event(cs, D_CLEARBUSY);
- if (cs->tx_skb) {
- if (cs->tx_skb->len) {
- W6692_fill_fifo(cs);
- goto afterXFR;
- } else {
- dev_kfree_skb_irq(cs->tx_skb);
- cs->tx_cnt = 0;
- cs->tx_skb = NULL;
- }
- }
- if ((cs->tx_skb = skb_dequeue(&cs->sq))) {
- cs->tx_cnt = 0;
- W6692_fill_fifo(cs);
- } else
- schedule_event(cs, D_XMTBUFREADY);
- }
-afterXFR:
- if (val & (W_INT_XINT0 | W_INT_XINT1)) { /* XINT0/1 - never */
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "W6692 spurious XINT!");
- }
- if (val & W_INT_D_EXI) { /* EXI */
- exval = cs->readW6692(cs, W_D_EXIR);
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "W6692 D_EXIR %02x", exval);
- if (exval & (W_D_EXI_XDUN | W_D_EXI_XCOL)) { /* Transmit underrun/collision */
- debugl1(cs, "W6692 D-chan underrun/collision");
- printk(KERN_WARNING "HiSax: W6692 XDUN/XCOL\n");
- if (test_and_clear_bit(FLG_DBUSY_TIMER, &cs->HW_Flags))
- del_timer(&cs->dbusytimer);
- if (test_and_clear_bit(FLG_L1_DBUSY, &cs->HW_Flags))
- schedule_event(cs, D_CLEARBUSY);
- if (cs->tx_skb) { /* Restart frame */
- skb_push(cs->tx_skb, cs->tx_cnt);
- cs->tx_cnt = 0;
- W6692_fill_fifo(cs);
- } else {
- printk(KERN_WARNING "HiSax: W6692 XDUN/XCOL no skb\n");
- debugl1(cs, "W6692 XDUN/XCOL no skb");
- cs->writeW6692(cs, W_D_CMDR, W_D_CMDR_XRST);
- }
- }
- if (exval & W_D_EXI_RDOV) { /* RDOV */
- debugl1(cs, "W6692 D-channel RDOV");
- printk(KERN_WARNING "HiSax: W6692 D-RDOV\n");
- cs->writeW6692(cs, W_D_CMDR, W_D_CMDR_RRST);
- }
- if (exval & W_D_EXI_TIN2) { /* TIN2 - never */
- debugl1(cs, "W6692 spurious TIN2 interrupt");
- }
- if (exval & W_D_EXI_MOC) { /* MOC - not supported */
- debugl1(cs, "W6692 spurious MOC interrupt");
- v1 = cs->readW6692(cs, W_MOSR);
- debugl1(cs, "W6692 MOSR %02x", v1);
- }
- if (exval & W_D_EXI_ISC) { /* ISC - Level1 change */
- v1 = cs->readW6692(cs, W_CIR);
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "W6692 ISC CIR=0x%02X", v1);
- if (v1 & W_CIR_ICC) {
- cs->dc.w6692.ph_state = v1 & W_CIR_COD_MASK;
- if (cs->debug & L1_DEB_ISAC)
- debugl1(cs, "ph_state_change %x", cs->dc.w6692.ph_state);
- schedule_event(cs, D_L1STATECHANGE);
- }
- if (v1 & W_CIR_SCC) {
- v1 = cs->readW6692(cs, W_SQR);
- debugl1(cs, "W6692 SCC SQR=0x%02X", v1);
- }
- }
- if (exval & W_D_EXI_WEXP) {
- debugl1(cs, "W6692 spurious WEXP interrupt!");
- }
- if (exval & W_D_EXI_TEXP) {
- debugl1(cs, "W6692 spurious TEXP interrupt!");
- }
- }
- if (val & W_INT_B1_EXI) {
- debugl1(cs, "W6692 B channel 1 interrupt");
- W6692B_interrupt(cs, 0);
- }
- if (val & W_INT_B2_EXI) {
- debugl1(cs, "W6692 B channel 2 interrupt");
- W6692B_interrupt(cs, 1);
- }
- val = cs->readW6692(cs, W_ISTA);
- if (val && icnt) {
- icnt--;
- goto StartW6692;
- }
- if (!icnt) {
- printk(KERN_WARNING "W6692 IRQ LOOP\n");
- cs->writeW6692(cs, W_IMASK, 0xff);
- }
- spin_unlock_irqrestore(&cs->lock, flags);
- return IRQ_HANDLED;
-}
-
-static void
-W6692_l1hw(struct PStack *st, int pr, void *arg)
-{
- struct IsdnCardState *cs = (struct IsdnCardState *) st->l1.hardware;
- struct sk_buff *skb = arg;
- u_long flags;
- int val;
-
- switch (pr) {
- case (PH_DATA | REQUEST):
- if (cs->debug & DEB_DLOG_HEX)
- LogFrame(cs, skb->data, skb->len);
- if (cs->debug & DEB_DLOG_VERBOSE)
- dlogframe(cs, skb, 0);
- spin_lock_irqsave(&cs->lock, flags);
- if (cs->tx_skb) {
- skb_queue_tail(&cs->sq, skb);
-#ifdef L2FRAME_DEBUG /* psa */
- if (cs->debug & L1_DEB_LAPD)
- Logl2Frame(cs, skb, "PH_DATA Queued", 0);
-#endif
- } else {
- cs->tx_skb = skb;
- cs->tx_cnt = 0;
-#ifdef L2FRAME_DEBUG /* psa */
- if (cs->debug & L1_DEB_LAPD)
- Logl2Frame(cs, skb, "PH_DATA", 0);
-#endif
- W6692_fill_fifo(cs);
- }
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- case (PH_PULL | INDICATION):
- spin_lock_irqsave(&cs->lock, flags);
- if (cs->tx_skb) {
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, " l2l1 tx_skb exist this shouldn't happen");
- skb_queue_tail(&cs->sq, skb);
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- }
- if (cs->debug & DEB_DLOG_HEX)
- LogFrame(cs, skb->data, skb->len);
- if (cs->debug & DEB_DLOG_VERBOSE)
- dlogframe(cs, skb, 0);
- cs->tx_skb = skb;
- cs->tx_cnt = 0;
-#ifdef L2FRAME_DEBUG /* psa */
- if (cs->debug & L1_DEB_LAPD)
- Logl2Frame(cs, skb, "PH_DATA_PULLED", 0);
-#endif
- W6692_fill_fifo(cs);
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- case (PH_PULL | REQUEST):
-#ifdef L2FRAME_DEBUG /* psa */
- if (cs->debug & L1_DEB_LAPD)
- debugl1(cs, "-> PH_REQUEST_PULL");
-#endif
- if (!cs->tx_skb) {
- test_and_clear_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- st->l1.l1l2(st, PH_PULL | CONFIRM, NULL);
- } else
- test_and_set_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- break;
- case (HW_RESET | REQUEST):
- spin_lock_irqsave(&cs->lock, flags);
- if (cs->dc.w6692.ph_state == W_L1IND_DRD) {
- ph_command(cs, W_L1CMD_ECK);
- spin_unlock_irqrestore(&cs->lock, flags);
- } else {
- ph_command(cs, W_L1CMD_RST);
- cs->dc.w6692.ph_state = W_L1CMD_RST;
- spin_unlock_irqrestore(&cs->lock, flags);
- W6692_new_ph(cs);
- }
- break;
- case (HW_ENABLE | REQUEST):
- spin_lock_irqsave(&cs->lock, flags);
- ph_command(cs, W_L1CMD_ECK);
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- case (HW_INFO3 | REQUEST):
- spin_lock_irqsave(&cs->lock, flags);
- ph_command(cs, W_L1CMD_AR8);
- spin_unlock_irqrestore(&cs->lock, flags);
- break;
- case (HW_TESTLOOP | REQUEST):
- val = 0;
- if (1 & (long) arg)
- val |= 0x0c;
- if (2 & (long) arg)
- val |= 0x3;
- /* !!! not implemented yet */
- break;
- case (HW_DEACTIVATE | RESPONSE):
- skb_queue_purge(&cs->rq);
- skb_queue_purge(&cs->sq);
- if (cs->tx_skb) {
- dev_kfree_skb_any(cs->tx_skb);
- cs->tx_skb = NULL;
- }
- if (test_and_clear_bit(FLG_DBUSY_TIMER, &cs->HW_Flags))
- del_timer(&cs->dbusytimer);
- if (test_and_clear_bit(FLG_L1_DBUSY, &cs->HW_Flags))
- schedule_event(cs, D_CLEARBUSY);
- break;
- default:
- if (cs->debug & L1_DEB_WARN)
- debugl1(cs, "W6692_l1hw unknown %04x", pr);
- break;
- }
-}
-
-static void
-setstack_W6692(struct PStack *st, struct IsdnCardState *cs)
-{
- st->l1.l1hw = W6692_l1hw;
-}
-
-static void
-DC_Close_W6692(struct IsdnCardState *cs)
-{
-}
-
-static void
-dbusy_timer_handler(struct timer_list *t)
-{
- struct IsdnCardState *cs = from_timer(cs, t, dbusytimer);
- struct PStack *stptr;
- int rbch, star;
- u_long flags;
-
- spin_lock_irqsave(&cs->lock, flags);
- if (test_bit(FLG_DBUSY_TIMER, &cs->HW_Flags)) {
- rbch = cs->readW6692(cs, W_D_RBCH);
- star = cs->readW6692(cs, W_D_STAR);
- if (cs->debug)
- debugl1(cs, "D-Channel Busy D_RBCH %02x D_STAR %02x",
- rbch, star);
- if (star & W_D_STAR_XBZ) { /* D-Channel Busy */
- test_and_set_bit(FLG_L1_DBUSY, &cs->HW_Flags);
- stptr = cs->stlist;
- while (stptr != NULL) {
- stptr->l1.l1l2(stptr, PH_PAUSE | INDICATION, NULL);
- stptr = stptr->next;
- }
- } else {
- /* discard frame; reset transceiver */
- test_and_clear_bit(FLG_DBUSY_TIMER, &cs->HW_Flags);
- if (cs->tx_skb) {
- dev_kfree_skb_any(cs->tx_skb);
- cs->tx_cnt = 0;
- cs->tx_skb = NULL;
- } else {
- printk(KERN_WARNING "HiSax: W6692 D-Channel Busy no skb\n");
- debugl1(cs, "D-Channel Busy no skb");
- }
- cs->writeW6692(cs, W_D_CMDR, W_D_CMDR_XRST); /* Transmitter reset */
- spin_unlock_irqrestore(&cs->lock, flags);
- cs->irq_func(cs->irq, cs);
- return;
- }
- }
- spin_unlock_irqrestore(&cs->lock, flags);
-}
-
-static void
-W6692Bmode(struct BCState *bcs, int mode, int bchan)
-{
- struct IsdnCardState *cs = bcs->cs;
-
- if (cs->debug & L1_DEB_HSCX)
- debugl1(cs, "w6692 %c mode %d ichan %d",
- '1' + bchan, mode, bchan);
- bcs->mode = mode;
- bcs->channel = bchan;
- bcs->hw.w6692.bchan = bchan;
-
- switch (mode) {
- case (L1_MODE_NULL):
- cs->BC_Write_Reg(cs, bchan, W_B_MODE, 0);
- break;
- case (L1_MODE_TRANS):
- cs->BC_Write_Reg(cs, bchan, W_B_MODE, W_B_MODE_MMS);
- break;
- case (L1_MODE_HDLC):
- cs->BC_Write_Reg(cs, bchan, W_B_MODE, W_B_MODE_ITF);
- cs->BC_Write_Reg(cs, bchan, W_B_ADM1, 0xff);
- cs->BC_Write_Reg(cs, bchan, W_B_ADM2, 0xff);
- break;
- }
- if (mode)
- cs->BC_Write_Reg(cs, bchan, W_B_CMDR, W_B_CMDR_RRST |
- W_B_CMDR_RACT | W_B_CMDR_XRST);
- cs->BC_Write_Reg(cs, bchan, W_B_EXIM, 0x00);
-}
-
-static void
-W6692_l2l1(struct PStack *st, int pr, void *arg)
-{
- struct sk_buff *skb = arg;
- struct BCState *bcs = st->l1.bcs;
- u_long flags;
-
- switch (pr) {
- case (PH_DATA | REQUEST):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- if (bcs->tx_skb) {
- skb_queue_tail(&bcs->squeue, skb);
- } else {
- bcs->tx_skb = skb;
- test_and_set_bit(BC_FLG_BUSY, &bcs->Flag);
- bcs->hw.w6692.count = 0;
- bcs->cs->BC_Send_Data(bcs);
- }
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- break;
- case (PH_PULL | INDICATION):
- if (bcs->tx_skb) {
- printk(KERN_WARNING "W6692_l2l1: this shouldn't happen\n");
- break;
- }
- spin_lock_irqsave(&bcs->cs->lock, flags);
- test_and_set_bit(BC_FLG_BUSY, &bcs->Flag);
- bcs->tx_skb = skb;
- bcs->hw.w6692.count = 0;
- bcs->cs->BC_Send_Data(bcs);
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- break;
- case (PH_PULL | REQUEST):
- if (!bcs->tx_skb) {
- test_and_clear_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- st->l1.l1l2(st, PH_PULL | CONFIRM, NULL);
- } else
- test_and_set_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
- break;
- case (PH_ACTIVATE | REQUEST):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- test_and_set_bit(BC_FLG_ACTIV, &bcs->Flag);
- W6692Bmode(bcs, st->l1.mode, st->l1.bc);
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- l1_msg_b(st, pr, arg);
- break;
- case (PH_DEACTIVATE | REQUEST):
- l1_msg_b(st, pr, arg);
- break;
- case (PH_DEACTIVATE | CONFIRM):
- spin_lock_irqsave(&bcs->cs->lock, flags);
- test_and_clear_bit(BC_FLG_ACTIV, &bcs->Flag);
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- W6692Bmode(bcs, 0, st->l1.bc);
- spin_unlock_irqrestore(&bcs->cs->lock, flags);
- st->l1.l1l2(st, PH_DEACTIVATE | CONFIRM, NULL);
- break;
- }
-}
-
-static void
-close_w6692state(struct BCState *bcs)
-{
- W6692Bmode(bcs, 0, bcs->channel);
- if (test_and_clear_bit(BC_FLG_INIT, &bcs->Flag)) {
- kfree(bcs->hw.w6692.rcvbuf);
- bcs->hw.w6692.rcvbuf = NULL;
- kfree(bcs->blog);
- bcs->blog = NULL;
- skb_queue_purge(&bcs->rqueue);
- skb_queue_purge(&bcs->squeue);
- if (bcs->tx_skb) {
- dev_kfree_skb_any(bcs->tx_skb);
- bcs->tx_skb = NULL;
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- }
- }
-}
-
-static int
-open_w6692state(struct IsdnCardState *cs, struct BCState *bcs)
-{
- if (!test_and_set_bit(BC_FLG_INIT, &bcs->Flag)) {
- if (!(bcs->hw.w6692.rcvbuf = kmalloc(HSCX_BUFMAX, GFP_ATOMIC))) {
- printk(KERN_WARNING
- "HiSax: No memory for w6692.rcvbuf\n");
- test_and_clear_bit(BC_FLG_INIT, &bcs->Flag);
- return (1);
- }
- if (!(bcs->blog = kmalloc(MAX_BLOG_SPACE, GFP_ATOMIC))) {
- printk(KERN_WARNING
- "HiSax: No memory for bcs->blog\n");
- test_and_clear_bit(BC_FLG_INIT, &bcs->Flag);
- kfree(bcs->hw.w6692.rcvbuf);
- bcs->hw.w6692.rcvbuf = NULL;
- return (2);
- }
- skb_queue_head_init(&bcs->rqueue);
- skb_queue_head_init(&bcs->squeue);
- }
- bcs->tx_skb = NULL;
- test_and_clear_bit(BC_FLG_BUSY, &bcs->Flag);
- bcs->event = 0;
- bcs->hw.w6692.rcvidx = 0;
- bcs->tx_cnt = 0;
- return (0);
-}
-
-static int
-setstack_w6692(struct PStack *st, struct BCState *bcs)
-{
- bcs->channel = st->l1.bc;
- if (open_w6692state(st->l1.hardware, bcs))
- return (-1);
- st->l1.bcs = bcs;
- st->l2.l2l1 = W6692_l2l1;
- setstack_manager(st);
- bcs->st = st;
- setstack_l1_B(st);
- return (0);
-}
-
-static void resetW6692(struct IsdnCardState *cs)
-{
- cs->writeW6692(cs, W_D_CTL, W_D_CTL_SRST);
- mdelay(10);
- cs->writeW6692(cs, W_D_CTL, 0x00);
- mdelay(10);
- cs->writeW6692(cs, W_IMASK, 0xff);
- cs->writeW6692(cs, W_D_SAM, 0xff);
- cs->writeW6692(cs, W_D_TAM, 0xff);
- cs->writeW6692(cs, W_D_EXIM, 0x00);
- cs->writeW6692(cs, W_D_MODE, W_D_MODE_RACT);
- cs->writeW6692(cs, W_IMASK, 0x18);
- if (cs->subtyp == W6692_USR) {
- /* It seems that USR implemented some power control features.
- * Pin 79 is connected to the oscillator circuit, so we
- * have to handle it here.
- */
- cs->writeW6692(cs, W_PCTL, 0x80);
- cs->writeW6692(cs, W_XDATA, 0x00);
- }
-}
-
-static void initW6692(struct IsdnCardState *cs, int part)
-{
- if (part & 1) {
- cs->setstack_d = setstack_W6692;
- cs->DC_Close = DC_Close_W6692;
- timer_setup(&cs->dbusytimer, dbusy_timer_handler, 0);
- resetW6692(cs);
- ph_command(cs, W_L1CMD_RST);
- cs->dc.w6692.ph_state = W_L1CMD_RST;
- W6692_new_ph(cs);
- ph_command(cs, W_L1CMD_ECK);
-
- cs->bcs[0].BC_SetStack = setstack_w6692;
- cs->bcs[1].BC_SetStack = setstack_w6692;
- cs->bcs[0].BC_Close = close_w6692state;
- cs->bcs[1].BC_Close = close_w6692state;
- W6692Bmode(cs->bcs, 0, 0);
- W6692Bmode(cs->bcs + 1, 0, 0);
- }
- if (part & 2) {
- /* Reenable all IRQ */
- cs->writeW6692(cs, W_IMASK, 0x18);
- cs->writeW6692(cs, W_D_EXIM, 0x00);
- cs->BC_Write_Reg(cs, 0, W_B_EXIM, 0x00);
- cs->BC_Write_Reg(cs, 1, W_B_EXIM, 0x00);
- /* Reset D-chan receiver and transmitter */
- cs->writeW6692(cs, W_D_CMDR, W_D_CMDR_RRST | W_D_CMDR_XRST);
- }
-}
-
-/* Interface functions */
-
-static u_char
-ReadW6692(struct IsdnCardState *cs, u_char offset)
-{
- return (inb(cs->hw.w6692.iobase + offset));
-}
-
-static void
-WriteW6692(struct IsdnCardState *cs, u_char offset, u_char value)
-{
- outb(value, cs->hw.w6692.iobase + offset);
-}
-
-static void
-ReadISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- insb(cs->hw.w6692.iobase + W_D_RFIFO, data, size);
-}
-
-static void
-WriteISACfifo(struct IsdnCardState *cs, u_char *data, int size)
-{
- outsb(cs->hw.w6692.iobase + W_D_XFIFO, data, size);
-}
-
-static u_char
-ReadW6692B(struct IsdnCardState *cs, int bchan, u_char offset)
-{
- return (inb(cs->hw.w6692.iobase + (bchan ? 0x40 : 0) + offset));
-}
-
-static void
-WriteW6692B(struct IsdnCardState *cs, int bchan, u_char offset, u_char value)
-{
- outb(value, cs->hw.w6692.iobase + (bchan ? 0x40 : 0) + offset);
-}
-
-static int
-w6692_card_msg(struct IsdnCardState *cs, int mt, void *arg)
-{
- switch (mt) {
- case CARD_RESET:
- resetW6692(cs);
- return (0);
- case CARD_RELEASE:
- cs->writeW6692(cs, W_IMASK, 0xff);
- release_region(cs->hw.w6692.iobase, 256);
- if (cs->subtyp == W6692_USR) {
- cs->writeW6692(cs, W_XDATA, 0x04);
- }
- return (0);
- case CARD_INIT:
- initW6692(cs, 3);
- return (0);
- case CARD_TEST:
- return (0);
- }
- return (0);
-}
-
-static int id_idx;
-
-static struct pci_dev *dev_w6692 = NULL;
-
-int setup_w6692(struct IsdnCard *card)
-{
- struct IsdnCardState *cs = card->cs;
- char tmp[64];
- u_char found = 0;
- u_char pci_irq = 0;
- u_int pci_ioaddr = 0;
-
- strcpy(tmp, w6692_revision);
- printk(KERN_INFO "HiSax: W6692 driver Rev. %s\n", HiSax_getrev(tmp));
- if (cs->typ != ISDN_CTYPE_W6692)
- return (0);
-
- while (id_list[id_idx].vendor_id) {
- dev_w6692 = hisax_find_pci_device(id_list[id_idx].vendor_id,
- id_list[id_idx].device_id,
- dev_w6692);
- if (dev_w6692) {
- if (pci_enable_device(dev_w6692))
- continue;
- cs->subtyp = id_idx;
- break;
- }
- id_idx++;
- }
- if (dev_w6692) {
- found = 1;
- pci_irq = dev_w6692->irq;
- /* I think address 0 is always the configuration area */
- /* and address 1 is the real IO space KKe 03.09.99 */
- pci_ioaddr = pci_resource_start(dev_w6692, 1);
- /* USR ISDN PCI card TA needs some special handling */
- if (cs->subtyp == W6692_WINBOND) {
- if ((W6692_SV_USR == dev_w6692->subsystem_vendor) &&
- (W6692_SD_USR == dev_w6692->subsystem_device)) {
- cs->subtyp = W6692_USR;
- }
- }
- }
- if (!found) {
- printk(KERN_WARNING "W6692: No PCI card found\n");
- return (0);
- }
- cs->irq = pci_irq;
- if (!cs->irq) {
- printk(KERN_WARNING "W6692: No IRQ for PCI card found\n");
- return (0);
- }
- if (!pci_ioaddr) {
- printk(KERN_WARNING "W6692: NO I/O Base Address found\n");
- return (0);
- }
- cs->hw.w6692.iobase = pci_ioaddr;
- printk(KERN_INFO "Found: %s %s, I/O base: 0x%x, irq: %d\n",
- id_list[cs->subtyp].vendor_name, id_list[cs->subtyp].card_name,
- pci_ioaddr, pci_irq);
- if (!request_region(cs->hw.w6692.iobase, 256, id_list[cs->subtyp].card_name)) {
- printk(KERN_WARNING
- "HiSax: %s I/O ports %x-%x already in use\n",
- id_list[cs->subtyp].card_name,
- cs->hw.w6692.iobase,
- cs->hw.w6692.iobase + 255);
- return (0);
- }
-
- printk(KERN_INFO
- "HiSax: %s config irq:%d I/O:%x\n",
- id_list[cs->subtyp].card_name, cs->irq,
- cs->hw.w6692.iobase);
-
- INIT_WORK(&cs->tqueue, W6692_bh);
- cs->readW6692 = &ReadW6692;
- cs->writeW6692 = &WriteW6692;
- cs->readisacfifo = &ReadISACfifo;
- cs->writeisacfifo = &WriteISACfifo;
- cs->BC_Read_Reg = &ReadW6692B;
- cs->BC_Write_Reg = &WriteW6692B;
- cs->BC_Send_Data = &W6692B_fill_fifo;
- cs->cardmsg = &w6692_card_msg;
- cs->irq_func = &W6692_interrupt;
- cs->irq_flags |= IRQF_SHARED;
- W6692Version(cs, "W6692:");
- printk(KERN_INFO "W6692 ISTA=0x%X\n", ReadW6692(cs, W_ISTA));
- printk(KERN_INFO "W6692 IMASK=0x%X\n", ReadW6692(cs, W_IMASK));
- printk(KERN_INFO "W6692 D_EXIR=0x%X\n", ReadW6692(cs, W_D_EXIR));
- printk(KERN_INFO "W6692 D_EXIM=0x%X\n", ReadW6692(cs, W_D_EXIM));
- printk(KERN_INFO "W6692 D_RSTA=0x%X\n", ReadW6692(cs, W_D_RSTA));
- return (1);
-}
diff --git a/drivers/isdn/hisax/w6692.h b/drivers/isdn/hisax/w6692.h
deleted file mode 100644
index 024b04d33e43..000000000000
--- a/drivers/isdn/hisax/w6692.h
+++ /dev/null
@@ -1,184 +0,0 @@
-/* $Id: w6692.h,v 1.4.2.2 2004/01/12 22:52:29 keil Exp $
- *
- * Winbond W6692 specific defines
- *
- * Author Petr Novak
- * Copyright by Petr Novak <petr.novak@i.cz>
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-/* map W6692 functions to ISAC functions */
-#define readW6692 readisac
-#define writeW6692 writeisac
-#define readW6692fifo readisacfifo
-#define writeW6692fifo writeisacfifo
-
-/* B-channel FIFO read/write routines */
-
-#define READW6692BFIFO(cs, bchan, ptr, count) \
- insb(cs->hw.w6692.iobase + W_B_RFIFO + (bchan ? 0x40 : 0), ptr, count)
-
-#define WRITEW6692BFIFO(cs, bchan, ptr, count) \
- outsb(cs->hw.w6692.iobase + W_B_XFIFO + (bchan ? 0x40 : 0), ptr, count)
-
-/* Specifications of W6692 registers */
-
-#define W_D_RFIFO 0x00 /* R */
-#define W_D_XFIFO 0x04 /* W */
-#define W_D_CMDR 0x08 /* W */
-#define W_D_MODE 0x0c /* R/W */
-#define W_D_TIMR 0x10 /* R/W */
-#define W_ISTA 0x14 /* R_clr */
-#define W_IMASK 0x18 /* R/W */
-#define W_D_EXIR 0x1c /* R_clr */
-#define W_D_EXIM 0x20 /* R/W */
-#define W_D_STAR 0x24 /* R */
-#define W_D_RSTA 0x28 /* R */
-#define W_D_SAM 0x2c /* R/W */
-#define W_D_SAP1 0x30 /* R/W */
-#define W_D_SAP2 0x34 /* R/W */
-#define W_D_TAM 0x38 /* R/W */
-#define W_D_TEI1 0x3c /* R/W */
-#define W_D_TEI2 0x40 /* R/W */
-#define W_D_RBCH 0x44 /* R */
-#define W_D_RBCL 0x48 /* R */
-#define W_TIMR2 0x4c /* W */
-#define W_L1_RC 0x50 /* R/W */
-#define W_D_CTL 0x54 /* R/W */
-#define W_CIR 0x58 /* R */
-#define W_CIX 0x5c /* W */
-#define W_SQR 0x60 /* R */
-#define W_SQX 0x64 /* W */
-#define W_PCTL 0x68 /* R/W */
-#define W_MOR 0x6c /* R */
-#define W_MOX 0x70 /* R/W */
-#define W_MOSR 0x74 /* R_clr */
-#define W_MOCR 0x78 /* R/W */
-#define W_GCR 0x7c /* R/W */
-
-#define W_B_RFIFO 0x80 /* R */
-#define W_B_XFIFO 0x84 /* W */
-#define W_B_CMDR 0x88 /* W */
-#define W_B_MODE 0x8c /* R/W */
-#define W_B_EXIR 0x90 /* R_clr */
-#define W_B_EXIM 0x94 /* R/W */
-#define W_B_STAR 0x98 /* R */
-#define W_B_ADM1 0x9c /* R/W */
-#define W_B_ADM2 0xa0 /* R/W */
-#define W_B_ADR1 0xa4 /* R/W */
-#define W_B_ADR2 0xa8 /* R/W */
-#define W_B_RBCL 0xac /* R */
-#define W_B_RBCH 0xb0 /* R */
-
-#define W_XADDR 0xf4 /* R/W */
-#define W_XDATA 0xf8 /* R/W */
-#define W_EPCTL 0xfc /* W */
-
-/* W6692 register bits */
-
-#define W_D_CMDR_XRST 0x01
-#define W_D_CMDR_XME 0x02
-#define W_D_CMDR_XMS 0x08
-#define W_D_CMDR_STT 0x10
-#define W_D_CMDR_RRST 0x40
-#define W_D_CMDR_RACK 0x80
-
-#define W_D_MODE_RLP 0x01
-#define W_D_MODE_DLP 0x02
-#define W_D_MODE_MFD 0x04
-#define W_D_MODE_TEE 0x08
-#define W_D_MODE_TMS 0x10
-#define W_D_MODE_RACT 0x40
-#define W_D_MODE_MMS 0x80
-
-#define W_INT_B2_EXI 0x01
-#define W_INT_B1_EXI 0x02
-#define W_INT_D_EXI 0x04
-#define W_INT_XINT0 0x08
-#define W_INT_XINT1 0x10
-#define W_INT_D_XFR 0x20
-#define W_INT_D_RME 0x40
-#define W_INT_D_RMR 0x80
-
-#define W_D_EXI_WEXP 0x01
-#define W_D_EXI_TEXP 0x02
-#define W_D_EXI_ISC 0x04
-#define W_D_EXI_MOC 0x08
-#define W_D_EXI_TIN2 0x10
-#define W_D_EXI_XCOL 0x20
-#define W_D_EXI_XDUN 0x40
-#define W_D_EXI_RDOV 0x80
-
-#define W_D_STAR_DRDY 0x10
-#define W_D_STAR_XBZ 0x20
-#define W_D_STAR_XDOW 0x80
-
-#define W_D_RSTA_RMB 0x10
-#define W_D_RSTA_CRCE 0x20
-#define W_D_RSTA_RDOV 0x40
-
-#define W_D_CTL_SRST 0x20
-
-#define W_CIR_SCC 0x80
-#define W_CIR_ICC 0x40
-#define W_CIR_COD_MASK 0x0f
-
-#define W_B_CMDR_XRST 0x01
-#define W_B_CMDR_XME 0x02
-#define W_B_CMDR_XMS 0x04
-#define W_B_CMDR_RACT 0x20
-#define W_B_CMDR_RRST 0x40
-#define W_B_CMDR_RACK 0x80
-
-#define W_B_MODE_FTS0 0x01
-#define W_B_MODE_FTS1 0x02
-#define W_B_MODE_SW56 0x04
-#define W_B_MODE_BSW0 0x08
-#define W_B_MODE_BSW1 0x10
-#define W_B_MODE_EPCM 0x20
-#define W_B_MODE_ITF 0x40
-#define W_B_MODE_MMS 0x80
-
-#define W_B_EXI_XDUN 0x01
-#define W_B_EXI_XFR 0x02
-#define W_B_EXI_RDOV 0x10
-#define W_B_EXI_RME 0x20
-#define W_B_EXI_RMR 0x40
-
-#define W_B_STAR_XBZ 0x01
-#define W_B_STAR_XDOW 0x04
-#define W_B_STAR_RMB 0x10
-#define W_B_STAR_CRCE 0x20
-#define W_B_STAR_RDOV 0x40
-
-#define W_B_RBCH_LOV 0x20
-
-/* W6692 Layer1 commands */
-
-#define W_L1CMD_ECK 0x00
-#define W_L1CMD_RST 0x01
-#define W_L1CMD_SCP 0x04
-#define W_L1CMD_SSP 0x02
-#define W_L1CMD_AR8 0x08
-#define W_L1CMD_AR10 0x09
-#define W_L1CMD_EAL 0x0a
-#define W_L1CMD_DRC 0x0f
-
-/* W6692 Layer1 indications */
-
-#define W_L1IND_CE 0x07
-#define W_L1IND_DRD 0x00
-#define W_L1IND_LD 0x04
-#define W_L1IND_ARD 0x08
-#define W_L1IND_TI 0x0a
-#define W_L1IND_ATI 0x0b
-#define W_L1IND_AI8 0x0c
-#define W_L1IND_AI10 0x0d
-#define W_L1IND_CD 0x0f
-
-/* FIFO thresholds */
-#define W_D_FIFO_THRESH 64
-#define W_B_FIFO_THRESH 64
diff --git a/drivers/isdn/i4l/Kconfig b/drivers/isdn/i4l/Kconfig
deleted file mode 100644
index caa1b52f06f7..000000000000
--- a/drivers/isdn/i4l/Kconfig
+++ /dev/null
@@ -1,129 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0-only
-#
-# Old ISDN4Linux config
-#
-
-if ISDN_I4L
-
-config ISDN_PPP
- bool "Support synchronous PPP"
- depends on INET
- select SLHC
- help
- Over digital connections such as ISDN, there is no need to
- synchronize the sender's and recipient's clocks with start and stop bits
- as is done over analog telephone lines. Instead, one can use
- "synchronous PPP". Saying Y here will include this protocol. This
- protocol is used by Cisco and Sun for example. So you want to say Y
- here if the other end of your ISDN connection supports it. You will
- need a special version of pppd (called ipppd) for using this
- feature. See <file:Documentation/isdn/README.syncppp> and
- <file:Documentation/isdn/syncPPP.FAQ> for more information.
-
-config ISDN_PPP_VJ
- bool "Use VJ-compression with synchronous PPP"
- depends on ISDN_PPP
- help
- This enables Van Jacobson header compression for synchronous PPP.
- Say Y if the other end of the connection supports it.
-
-config ISDN_MPP
- bool "Support generic MP (RFC 1717)"
- depends on ISDN_PPP
- help
- With synchronous PPP enabled, it is possible to increase throughput
- by bundling several ISDN-connections, using this protocol. See
- <file:Documentation/isdn/README.syncppp> for more information.
-
-config IPPP_FILTER
- bool "Filtering for synchronous PPP"
- depends on ISDN_PPP
- help
- Say Y here if you want to be able to filter the packets passing over
- IPPP interfaces. This allows you to control which packets count as
- activity (i.e. which packets will reset the idle timer or bring up
- a demand-dialled link) and which packets are to be dropped entirely.
- You need to say Y here if you wish to use the pass-filter and
- active-filter options to ipppd.
-
-config ISDN_PPP_BSDCOMP
- tristate "Support BSD compression"
- depends on ISDN_PPP
- help
- Support for the BSD-Compress compression method for PPP, which uses
- the LZW compression method to compress each PPP packet before it is
- sent over the wire. The machine at the other end of the PPP link
- (usually your ISP) has to support the BSD-Compress compression
- method as well for this to be useful. Even if they don't support it,
- it is safe to say Y here.
-
-config ISDN_AUDIO
- bool "Support audio via ISDN"
- help
- If you say Y here, the modem-emulator will support a subset of the
- EIA Class 8 Voice commands. Using a getty with voice-support
- (mgetty+sendfax by <gert@greenie.muc.de> with an extension, available
- with the ISDN utility package for example), you will be able to use
- your Linux box as an ISDN-answering machine. Of course, this must be
- supported by the lowlevel driver also. Currently, the HiSax driver
- is the only voice-supporting driver. See
- <file:Documentation/isdn/README.audio> for more information.
-
-config ISDN_TTY_FAX
- bool "Support AT-Fax Class 1 and 2 commands"
- depends on ISDN_AUDIO
- help
- If you say Y here, the modem-emulator will support a subset of the
- Fax Class 1 and 2 commands. Using a getty with fax-support
- (mgetty+sendfax, hylafax), you will be able to use your Linux box as
- an ISDN-fax-machine. This must be supported by the lowlevel driver
- also. See <file:Documentation/isdn/README.fax> for more information.
-
-config ISDN_X25
- bool "X.25 PLP on top of ISDN"
- depends on X25
- help
- This feature provides the X.25 protocol over ISDN connections.
- See <file:Documentation/isdn/README.x25> for more information
- if you are thinking about using this.
-
-
-menu "ISDN feature submodules"
-
-config ISDN_DRV_LOOP
- tristate "isdnloop support"
- depends on BROKEN_ON_SMP
- help
- This driver provides a virtual ISDN card. Its primary purpose is
- testing of linklevel features or configuration without getting
- charged by your service-provider for lots of phone calls.
- You will need the loopctrl utility from the latest isdn4k-utils
- package to set up this driver.
-
-config ISDN_DIVERSION
- tristate "Support isdn diversion services"
- help
- This option allows you to use some supplementary diversion
- services in conjunction with the HiSax driver on a EURO/DSS1
- line.
-
- Supported options are CD (call deflection), CFU (Call forward
- unconditional), CFB (Call forward when busy) and CFNR (call forward
- not reachable). Additionally the actual CFU, CFB and CFNR state may
- be interrogated.
-
- The use of CFU, CFB, CFNR and interrogation may be limited to some
- countries. The keypad protocol is still not implemented. CD should
- work in all countries if the service has been subscribed to.
-
- Please read the file <file:Documentation/isdn/README.diversion>.
-
-endmenu
-
-comment "ISDN4Linux hardware drivers"
-
-source "drivers/isdn/hisax/Kconfig"
-
-# end ISDN_I4L
-endif
-
diff --git a/drivers/isdn/i4l/Makefile b/drivers/isdn/i4l/Makefile
deleted file mode 100644
index be77500c9e86..000000000000
--- a/drivers/isdn/i4l/Makefile
+++ /dev/null
@@ -1,20 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0
-# Makefile for the kernel ISDN subsystem and device drivers.
-
-# Each configuration option enables a list of files.
-
-obj-$(CONFIG_ISDN_I4L) += isdn.o
-obj-$(CONFIG_ISDN_PPP_BSDCOMP) += isdn_bsdcomp.o
-obj-$(CONFIG_ISDN_HDLC) += isdnhdlc.o
-
-# Multipart objects.
-
-isdn-y := isdn_net.o isdn_tty.o isdn_v110.o isdn_common.o
-
-# Optional parts of multipart objects.
-
-isdn-$(CONFIG_ISDN_PPP) += isdn_ppp.o
-isdn-$(CONFIG_ISDN_X25) += isdn_concap.o isdn_x25iface.o
-isdn-$(CONFIG_ISDN_AUDIO) += isdn_audio.o
-isdn-$(CONFIG_ISDN_TTY_FAX) += isdn_ttyfax.o
-
diff --git a/drivers/isdn/i4l/isdn_audio.c b/drivers/isdn/i4l/isdn_audio.c
deleted file mode 100644
index b6bcd1eca128..000000000000
--- a/drivers/isdn/i4l/isdn_audio.c
+++ /dev/null
@@ -1,711 +0,0 @@
-/* $Id: isdn_audio.c,v 1.1.2.2 2004/01/12 22:37:18 keil Exp $
- *
- * Linux ISDN subsystem, audio conversion and compression (linklevel).
- *
- * Copyright 1994-1999 by Fritz Elfert (fritz@isdn4linux.de)
- * DTMF code (c) 1996 by Christian Mock (cm@kukuruz.ping.at)
- * Silence detection (c) 1998 by Armin Schindler (mac@gismo.telekom.de)
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#include <linux/isdn.h>
-#include <linux/slab.h>
-#include "isdn_audio.h"
-#include "isdn_common.h"
-
-char *isdn_audio_revision = "$Revision: 1.1.2.2 $";
-
-/*
- * Misc. lookup-tables.
- */
-
-/* ulaw -> signed 16-bit */
-static short isdn_audio_ulaw_to_s16[] =
-{
- 0x8284, 0x8684, 0x8a84, 0x8e84, 0x9284, 0x9684, 0x9a84, 0x9e84,
- 0xa284, 0xa684, 0xaa84, 0xae84, 0xb284, 0xb684, 0xba84, 0xbe84,
- 0xc184, 0xc384, 0xc584, 0xc784, 0xc984, 0xcb84, 0xcd84, 0xcf84,
- 0xd184, 0xd384, 0xd584, 0xd784, 0xd984, 0xdb84, 0xdd84, 0xdf84,
- 0xe104, 0xe204, 0xe304, 0xe404, 0xe504, 0xe604, 0xe704, 0xe804,
- 0xe904, 0xea04, 0xeb04, 0xec04, 0xed04, 0xee04, 0xef04, 0xf004,
- 0xf0c4, 0xf144, 0xf1c4, 0xf244, 0xf2c4, 0xf344, 0xf3c4, 0xf444,
- 0xf4c4, 0xf544, 0xf5c4, 0xf644, 0xf6c4, 0xf744, 0xf7c4, 0xf844,
- 0xf8a4, 0xf8e4, 0xf924, 0xf964, 0xf9a4, 0xf9e4, 0xfa24, 0xfa64,
- 0xfaa4, 0xfae4, 0xfb24, 0xfb64, 0xfba4, 0xfbe4, 0xfc24, 0xfc64,
- 0xfc94, 0xfcb4, 0xfcd4, 0xfcf4, 0xfd14, 0xfd34, 0xfd54, 0xfd74,
- 0xfd94, 0xfdb4, 0xfdd4, 0xfdf4, 0xfe14, 0xfe34, 0xfe54, 0xfe74,
- 0xfe8c, 0xfe9c, 0xfeac, 0xfebc, 0xfecc, 0xfedc, 0xfeec, 0xfefc,
- 0xff0c, 0xff1c, 0xff2c, 0xff3c, 0xff4c, 0xff5c, 0xff6c, 0xff7c,
- 0xff88, 0xff90, 0xff98, 0xffa0, 0xffa8, 0xffb0, 0xffb8, 0xffc0,
- 0xffc8, 0xffd0, 0xffd8, 0xffe0, 0xffe8, 0xfff0, 0xfff8, 0x0000,
- 0x7d7c, 0x797c, 0x757c, 0x717c, 0x6d7c, 0x697c, 0x657c, 0x617c,
- 0x5d7c, 0x597c, 0x557c, 0x517c, 0x4d7c, 0x497c, 0x457c, 0x417c,
- 0x3e7c, 0x3c7c, 0x3a7c, 0x387c, 0x367c, 0x347c, 0x327c, 0x307c,
- 0x2e7c, 0x2c7c, 0x2a7c, 0x287c, 0x267c, 0x247c, 0x227c, 0x207c,
- 0x1efc, 0x1dfc, 0x1cfc, 0x1bfc, 0x1afc, 0x19fc, 0x18fc, 0x17fc,
- 0x16fc, 0x15fc, 0x14fc, 0x13fc, 0x12fc, 0x11fc, 0x10fc, 0x0ffc,
- 0x0f3c, 0x0ebc, 0x0e3c, 0x0dbc, 0x0d3c, 0x0cbc, 0x0c3c, 0x0bbc,
- 0x0b3c, 0x0abc, 0x0a3c, 0x09bc, 0x093c, 0x08bc, 0x083c, 0x07bc,
- 0x075c, 0x071c, 0x06dc, 0x069c, 0x065c, 0x061c, 0x05dc, 0x059c,
- 0x055c, 0x051c, 0x04dc, 0x049c, 0x045c, 0x041c, 0x03dc, 0x039c,
- 0x036c, 0x034c, 0x032c, 0x030c, 0x02ec, 0x02cc, 0x02ac, 0x028c,
- 0x026c, 0x024c, 0x022c, 0x020c, 0x01ec, 0x01cc, 0x01ac, 0x018c,
- 0x0174, 0x0164, 0x0154, 0x0144, 0x0134, 0x0124, 0x0114, 0x0104,
- 0x00f4, 0x00e4, 0x00d4, 0x00c4, 0x00b4, 0x00a4, 0x0094, 0x0084,
- 0x0078, 0x0070, 0x0068, 0x0060, 0x0058, 0x0050, 0x0048, 0x0040,
- 0x0038, 0x0030, 0x0028, 0x0020, 0x0018, 0x0010, 0x0008, 0x0000
-};
-
-/* alaw -> signed 16-bit */
-static short isdn_audio_alaw_to_s16[] =
-{
- 0x13fc, 0xec04, 0x0144, 0xfebc, 0x517c, 0xae84, 0x051c, 0xfae4,
- 0x0a3c, 0xf5c4, 0x0048, 0xffb8, 0x287c, 0xd784, 0x028c, 0xfd74,
- 0x1bfc, 0xe404, 0x01cc, 0xfe34, 0x717c, 0x8e84, 0x071c, 0xf8e4,
- 0x0e3c, 0xf1c4, 0x00c4, 0xff3c, 0x387c, 0xc784, 0x039c, 0xfc64,
- 0x0ffc, 0xf004, 0x0104, 0xfefc, 0x417c, 0xbe84, 0x041c, 0xfbe4,
- 0x083c, 0xf7c4, 0x0008, 0xfff8, 0x207c, 0xdf84, 0x020c, 0xfdf4,
- 0x17fc, 0xe804, 0x018c, 0xfe74, 0x617c, 0x9e84, 0x061c, 0xf9e4,
- 0x0c3c, 0xf3c4, 0x0084, 0xff7c, 0x307c, 0xcf84, 0x030c, 0xfcf4,
- 0x15fc, 0xea04, 0x0164, 0xfe9c, 0x597c, 0xa684, 0x059c, 0xfa64,
- 0x0b3c, 0xf4c4, 0x0068, 0xff98, 0x2c7c, 0xd384, 0x02cc, 0xfd34,
- 0x1dfc, 0xe204, 0x01ec, 0xfe14, 0x797c, 0x8684, 0x07bc, 0xf844,
- 0x0f3c, 0xf0c4, 0x00e4, 0xff1c, 0x3c7c, 0xc384, 0x03dc, 0xfc24,
- 0x11fc, 0xee04, 0x0124, 0xfedc, 0x497c, 0xb684, 0x049c, 0xfb64,
- 0x093c, 0xf6c4, 0x0028, 0xffd8, 0x247c, 0xdb84, 0x024c, 0xfdb4,
- 0x19fc, 0xe604, 0x01ac, 0xfe54, 0x697c, 0x9684, 0x069c, 0xf964,
- 0x0d3c, 0xf2c4, 0x00a4, 0xff5c, 0x347c, 0xcb84, 0x034c, 0xfcb4,
- 0x12fc, 0xed04, 0x0134, 0xfecc, 0x4d7c, 0xb284, 0x04dc, 0xfb24,
- 0x09bc, 0xf644, 0x0038, 0xffc8, 0x267c, 0xd984, 0x026c, 0xfd94,
- 0x1afc, 0xe504, 0x01ac, 0xfe54, 0x6d7c, 0x9284, 0x06dc, 0xf924,
- 0x0dbc, 0xf244, 0x00b4, 0xff4c, 0x367c, 0xc984, 0x036c, 0xfc94,
- 0x0f3c, 0xf0c4, 0x00f4, 0xff0c, 0x3e7c, 0xc184, 0x03dc, 0xfc24,
- 0x07bc, 0xf844, 0x0008, 0xfff8, 0x1efc, 0xe104, 0x01ec, 0xfe14,
- 0x16fc, 0xe904, 0x0174, 0xfe8c, 0x5d7c, 0xa284, 0x05dc, 0xfa24,
- 0x0bbc, 0xf444, 0x0078, 0xff88, 0x2e7c, 0xd184, 0x02ec, 0xfd14,
- 0x14fc, 0xeb04, 0x0154, 0xfeac, 0x557c, 0xaa84, 0x055c, 0xfaa4,
- 0x0abc, 0xf544, 0x0058, 0xffa8, 0x2a7c, 0xd584, 0x02ac, 0xfd54,
- 0x1cfc, 0xe304, 0x01cc, 0xfe34, 0x757c, 0x8a84, 0x075c, 0xf8a4,
- 0x0ebc, 0xf144, 0x00d4, 0xff2c, 0x3a7c, 0xc584, 0x039c, 0xfc64,
- 0x10fc, 0xef04, 0x0114, 0xfeec, 0x457c, 0xba84, 0x045c, 0xfba4,
- 0x08bc, 0xf744, 0x0018, 0xffe8, 0x227c, 0xdd84, 0x022c, 0xfdd4,
- 0x18fc, 0xe704, 0x018c, 0xfe74, 0x657c, 0x9a84, 0x065c, 0xf9a4,
- 0x0cbc, 0xf344, 0x0094, 0xff6c, 0x327c, 0xcd84, 0x032c, 0xfcd4
-};
-
-/* alaw -> ulaw */
-static char isdn_audio_alaw_to_ulaw[] =
-{
- 0xab, 0x2b, 0xe3, 0x63, 0x8b, 0x0b, 0xc9, 0x49,
- 0xba, 0x3a, 0xf6, 0x76, 0x9b, 0x1b, 0xd7, 0x57,
- 0xa3, 0x23, 0xdd, 0x5d, 0x83, 0x03, 0xc1, 0x41,
- 0xb2, 0x32, 0xeb, 0x6b, 0x93, 0x13, 0xcf, 0x4f,
- 0xaf, 0x2f, 0xe7, 0x67, 0x8f, 0x0f, 0xcd, 0x4d,
- 0xbe, 0x3e, 0xfe, 0x7e, 0x9f, 0x1f, 0xdb, 0x5b,
- 0xa7, 0x27, 0xdf, 0x5f, 0x87, 0x07, 0xc5, 0x45,
- 0xb6, 0x36, 0xef, 0x6f, 0x97, 0x17, 0xd3, 0x53,
- 0xa9, 0x29, 0xe1, 0x61, 0x89, 0x09, 0xc7, 0x47,
- 0xb8, 0x38, 0xf2, 0x72, 0x99, 0x19, 0xd5, 0x55,
- 0xa1, 0x21, 0xdc, 0x5c, 0x81, 0x01, 0xbf, 0x3f,
- 0xb0, 0x30, 0xe9, 0x69, 0x91, 0x11, 0xce, 0x4e,
- 0xad, 0x2d, 0xe5, 0x65, 0x8d, 0x0d, 0xcb, 0x4b,
- 0xbc, 0x3c, 0xfa, 0x7a, 0x9d, 0x1d, 0xd9, 0x59,
- 0xa5, 0x25, 0xde, 0x5e, 0x85, 0x05, 0xc3, 0x43,
- 0xb4, 0x34, 0xed, 0x6d, 0x95, 0x15, 0xd1, 0x51,
- 0xac, 0x2c, 0xe4, 0x64, 0x8c, 0x0c, 0xca, 0x4a,
- 0xbb, 0x3b, 0xf8, 0x78, 0x9c, 0x1c, 0xd8, 0x58,
- 0xa4, 0x24, 0xde, 0x5e, 0x84, 0x04, 0xc2, 0x42,
- 0xb3, 0x33, 0xec, 0x6c, 0x94, 0x14, 0xd0, 0x50,
- 0xb0, 0x30, 0xe8, 0x68, 0x90, 0x10, 0xce, 0x4e,
- 0xbf, 0x3f, 0xfe, 0x7e, 0xa0, 0x20, 0xdc, 0x5c,
- 0xa8, 0x28, 0xe0, 0x60, 0x88, 0x08, 0xc6, 0x46,
- 0xb7, 0x37, 0xf0, 0x70, 0x98, 0x18, 0xd4, 0x54,
- 0xaa, 0x2a, 0xe2, 0x62, 0x8a, 0x0a, 0xc8, 0x48,
- 0xb9, 0x39, 0xf4, 0x74, 0x9a, 0x1a, 0xd6, 0x56,
- 0xa2, 0x22, 0xdd, 0x5d, 0x82, 0x02, 0xc0, 0x40,
- 0xb1, 0x31, 0xea, 0x6a, 0x92, 0x12, 0xcf, 0x4f,
- 0xae, 0x2e, 0xe6, 0x66, 0x8e, 0x0e, 0xcc, 0x4c,
- 0xbd, 0x3d, 0xfc, 0x7c, 0x9e, 0x1e, 0xda, 0x5a,
- 0xa6, 0x26, 0xdf, 0x5f, 0x86, 0x06, 0xc4, 0x44,
- 0xb5, 0x35, 0xee, 0x6e, 0x96, 0x16, 0xd2, 0x52
-};
-
-/* ulaw -> alaw */
-static char isdn_audio_ulaw_to_alaw[] =
-{
- 0xab, 0x55, 0xd5, 0x15, 0x95, 0x75, 0xf5, 0x35,
- 0xb5, 0x45, 0xc5, 0x05, 0x85, 0x65, 0xe5, 0x25,
- 0xa5, 0x5d, 0xdd, 0x1d, 0x9d, 0x7d, 0xfd, 0x3d,
- 0xbd, 0x4d, 0xcd, 0x0d, 0x8d, 0x6d, 0xed, 0x2d,
- 0xad, 0x51, 0xd1, 0x11, 0x91, 0x71, 0xf1, 0x31,
- 0xb1, 0x41, 0xc1, 0x01, 0x81, 0x61, 0xe1, 0x21,
- 0x59, 0xd9, 0x19, 0x99, 0x79, 0xf9, 0x39, 0xb9,
- 0x49, 0xc9, 0x09, 0x89, 0x69, 0xe9, 0x29, 0xa9,
- 0xd7, 0x17, 0x97, 0x77, 0xf7, 0x37, 0xb7, 0x47,
- 0xc7, 0x07, 0x87, 0x67, 0xe7, 0x27, 0xa7, 0xdf,
- 0x9f, 0x7f, 0xff, 0x3f, 0xbf, 0x4f, 0xcf, 0x0f,
- 0x8f, 0x6f, 0xef, 0x2f, 0x53, 0x13, 0x73, 0x33,
- 0xb3, 0x43, 0xc3, 0x03, 0x83, 0x63, 0xe3, 0x23,
- 0xa3, 0x5b, 0xdb, 0x1b, 0x9b, 0x7b, 0xfb, 0x3b,
- 0xbb, 0xbb, 0x4b, 0x4b, 0xcb, 0xcb, 0x0b, 0x0b,
- 0x8b, 0x8b, 0x6b, 0x6b, 0xeb, 0xeb, 0x2b, 0x2b,
- 0xab, 0x54, 0xd4, 0x14, 0x94, 0x74, 0xf4, 0x34,
- 0xb4, 0x44, 0xc4, 0x04, 0x84, 0x64, 0xe4, 0x24,
- 0xa4, 0x5c, 0xdc, 0x1c, 0x9c, 0x7c, 0xfc, 0x3c,
- 0xbc, 0x4c, 0xcc, 0x0c, 0x8c, 0x6c, 0xec, 0x2c,
- 0xac, 0x50, 0xd0, 0x10, 0x90, 0x70, 0xf0, 0x30,
- 0xb0, 0x40, 0xc0, 0x00, 0x80, 0x60, 0xe0, 0x20,
- 0x58, 0xd8, 0x18, 0x98, 0x78, 0xf8, 0x38, 0xb8,
- 0x48, 0xc8, 0x08, 0x88, 0x68, 0xe8, 0x28, 0xa8,
- 0xd6, 0x16, 0x96, 0x76, 0xf6, 0x36, 0xb6, 0x46,
- 0xc6, 0x06, 0x86, 0x66, 0xe6, 0x26, 0xa6, 0xde,
- 0x9e, 0x7e, 0xfe, 0x3e, 0xbe, 0x4e, 0xce, 0x0e,
- 0x8e, 0x6e, 0xee, 0x2e, 0x52, 0x12, 0x72, 0x32,
- 0xb2, 0x42, 0xc2, 0x02, 0x82, 0x62, 0xe2, 0x22,
- 0xa2, 0x5a, 0xda, 0x1a, 0x9a, 0x7a, 0xfa, 0x3a,
- 0xba, 0xba, 0x4a, 0x4a, 0xca, 0xca, 0x0a, 0x0a,
- 0x8a, 0x8a, 0x6a, 0x6a, 0xea, 0xea, 0x2a, 0x2a
-};
-
-#define NCOEFF 8 /* number of frequencies to be analyzed */
-#define DTMF_TRESH 4000 /* above this is dtmf */
-#define SILENCE_TRESH 200 /* below this is silence */
-#define AMP_BITS 9 /* bits per sample, reduced to avoid overflow */
-#define LOGRP 0
-#define HIGRP 1
-
-/* For DTMF recognition:
- * 2 * cos(2 * PI * k / N) precalculated for all k
- */
-static int cos2pik[NCOEFF] =
-{
- 55813, 53604, 51193, 48591, 38114, 33057, 25889, 18332
-};
-
-static char dtmf_matrix[4][4] =
-{
- {'1', '2', '3', 'A'},
- {'4', '5', '6', 'B'},
- {'7', '8', '9', 'C'},
- {'*', '0', '#', 'D'}
-};
-
-static inline void
-isdn_audio_tlookup(const u_char *table, u_char *buff, unsigned long n)
-{
-#ifdef __i386__
- unsigned long d0, d1, d2, d3;
- __asm__ __volatile__(
- "cld\n"
- "1:\tlodsb\n\t"
- "xlatb\n\t"
- "stosb\n\t"
- "loop 1b\n\t"
- : "=&b"(d0), "=&c"(d1), "=&D"(d2), "=&S"(d3)
- : "0"((long) table), "1"(n), "2"((long) buff), "3"((long) buff)
- : "memory", "ax");
-#else
- while (n--)
- *buff = table[*(unsigned char *)buff], buff++;
-#endif
-}
-
-void
-isdn_audio_ulaw2alaw(unsigned char *buff, unsigned long len)
-{
- isdn_audio_tlookup(isdn_audio_ulaw_to_alaw, buff, len);
-}
-
-void
-isdn_audio_alaw2ulaw(unsigned char *buff, unsigned long len)
-{
- isdn_audio_tlookup(isdn_audio_alaw_to_ulaw, buff, len);
-}
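
[Editorial aside, not part of the original patch: the two wrappers above translate a PCM buffer in place through the lookup tables. A minimal usage sketch, assuming the caller already holds one 20 ms A-law frame (160 samples at 8 kHz) in a local buffer named frame:]

	unsigned char frame[160];	/* one 20 ms G.711 frame at 8 kHz */

	/* ... fill frame[] with A-law samples ... */
	isdn_audio_alaw2ulaw(frame, sizeof(frame));	/* buffer is now mu-law */
	isdn_audio_ulaw2alaw(frame, sizeof(frame));	/* convert back to A-law */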
-
-/*
- * linear <-> adpcm conversion stuff
- * Most parts from the mgetty-package.
- * (C) by Gert Doering and Klaus Weidner
- * Used by permission of Gert Doering
- */
-
-
-#define ZEROTRAP /* turn on the trap as per the MIL-STD */
-#undef ZEROTRAP
-#define BIAS 0x84 /* define the add-in bias for 16 bit samples */
-#define CLIP 32635
-
-static unsigned char
-isdn_audio_linear2ulaw(int sample)
-{
- static int exp_lut[256] =
- {
- 0, 0, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3,
- 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
- 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
- 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
- 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
- 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
- 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
- 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
- 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7,
- 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7,
- 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7,
- 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7,
- 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7,
- 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7,
- 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7,
- 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7
- };
- int sign,
- exponent,
- mantissa;
- unsigned char ulawbyte;
-
- /* Get the sample into sign-magnitude. */
- sign = (sample >> 8) & 0x80; /* set aside the sign */
- if (sign != 0)
- sample = -sample; /* get magnitude */
- if (sample > CLIP)
- sample = CLIP; /* clip the magnitude */
-
- /* Convert from 16 bit linear to ulaw. */
- sample = sample + BIAS;
- exponent = exp_lut[(sample >> 7) & 0xFF];
- mantissa = (sample >> (exponent + 3)) & 0x0F;
- ulawbyte = ~(sign | (exponent << 4) | mantissa);
-#ifdef ZEROTRAP
- /* optional CCITT trap */
- if (ulawbyte == 0)
- ulawbyte = 0x02;
-#endif
- return (ulawbyte);
-}
-
-
-static int Mx[3][8] =
-{
- {0x3800, 0x5600, 0, 0, 0, 0, 0, 0},
- {0x399a, 0x3a9f, 0x4d14, 0x6607, 0, 0, 0, 0},
- {0x3556, 0x3556, 0x399A, 0x3A9F, 0x4200, 0x4D14, 0x6607, 0x6607},
-};
-
-static int bitmask[9] =
-{
- 0, 0x01, 0x03, 0x07, 0x0f, 0x1f, 0x3f, 0x7f, 0xff
-};
-
-static int
-isdn_audio_get_bits(adpcm_state *s, unsigned char **in, int *len)
-{
- while (s->nleft < s->nbits) {
- int d = *((*in)++);
- (*len)--;
- s->word = (s->word << 8) | d;
- s->nleft += 8;
- }
- s->nleft -= s->nbits;
- return (s->word >> s->nleft) & bitmask[s->nbits];
-}
-
-static void
-isdn_audio_put_bits(int data, int nbits, adpcm_state *s,
- unsigned char **out, int *len)
-{
- s->word = (s->word << nbits) | (data & bitmask[nbits]);
- s->nleft += nbits;
- while (s->nleft >= 8) {
- int d = (s->word >> (s->nleft - 8));
- *(out[0]++) = d & 255;
- (*len)++;
- s->nleft -= 8;
- }
-}
-
-adpcm_state *
-isdn_audio_adpcm_init(adpcm_state *s, int nbits)
-{
- if (!s)
- s = kmalloc(sizeof(adpcm_state), GFP_ATOMIC);
- if (s) {
- s->a = 0;
- s->d = 5;
- s->word = 0;
- s->nleft = 0;
- s->nbits = nbits;
- }
- return s;
-}
-
-dtmf_state *
-isdn_audio_dtmf_init(dtmf_state *s)
-{
- if (!s)
- s = kmalloc(sizeof(dtmf_state), GFP_ATOMIC);
- if (s) {
- s->idx = 0;
- s->last = ' ';
- }
- return s;
-}
-
-/*
- * Decompression of adpcm data to a/u-law
- *
- */
-
-int
-isdn_audio_adpcm2xlaw(adpcm_state *s, int fmt, unsigned char *in,
- unsigned char *out, int len)
-{
- int a = s->a;
- int d = s->d;
- int nbits = s->nbits;
- int olen = 0;
-
- while (len) {
- int e = isdn_audio_get_bits(s, &in, &len);
- int sign;
-
- if (nbits == 4 && e == 0)
- d = 4;
- sign = (e >> (nbits - 1)) ? -1 : 1;
- e &= bitmask[nbits - 1];
- a += sign * ((e << 1) + 1) * d >> 1;
- if (d & 1)
- a++;
- if (fmt)
- *out++ = isdn_audio_ulaw_to_alaw[
- isdn_audio_linear2ulaw(a << 2)];
- else
- *out++ = isdn_audio_linear2ulaw(a << 2);
- olen++;
- d = (d * Mx[nbits - 2][e] + 0x2000) >> 14;
- if (d < 5)
- d = 5;
- }
- s->a = a;
- s->d = d;
- return olen;
-}
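
[Editorial aside, not part of the original patch: a minimal decode sketch for the routine above, assuming 4-bit ADPCM input and A-law output; the buffer names and sizes are illustrative only. At nbits == 4 each input byte expands to at most two xlaw samples, so the output buffer is sized accordingly:]

	unsigned char in[128];		/* ADPCM input, example size */
	unsigned char out[256];		/* worst case at 4 bits: 2 samples per byte */
	adpcm_state *st;
	int olen;

	st = isdn_audio_adpcm_init(NULL, 4);	/* allocates a 4-bit codec state */
	if (st) {
		/* fmt = 1 selects A-law output, fmt = 0 mu-law */
		olen = isdn_audio_adpcm2xlaw(st, 1, in, out, sizeof(in));
		/* out[0..olen-1] now holds the decoded xlaw audio */
		kfree(st);
	}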
-
-int
-isdn_audio_xlaw2adpcm(adpcm_state *s, int fmt, unsigned char *in,
- unsigned char *out, int len)
-{
- int a = s->a;
- int d = s->d;
- int nbits = s->nbits;
- int olen = 0;
-
- while (len--) {
- int e = 0,
- nmax = 1 << (nbits - 1);
- int sign,
- delta;
-
- if (fmt)
- delta = (isdn_audio_alaw_to_s16[*in++] >> 2) - a;
- else
- delta = (isdn_audio_ulaw_to_s16[*in++] >> 2) - a;
- if (delta < 0) {
- e = nmax;
- delta = -delta;
- }
- while (--nmax && delta > d) {
- delta -= d;
- e++;
- }
- if (nbits == 4 && ((e & 0x0f) == 0))
- e = 8;
- isdn_audio_put_bits(e, nbits, s, &out, &olen);
- sign = (e >> (nbits - 1)) ? -1 : 1;
- e &= bitmask[nbits - 1];
-
- a += sign * ((e << 1) + 1) * d >> 1;
- if (d & 1)
- a++;
- d = (d * Mx[nbits - 2][e] + 0x2000) >> 14;
- if (d < 5)
- d = 5;
- }
- s->a = a;
- s->d = d;
- return olen;
-}
-
-/*
- * Goertzel algorithm.
- * See http://ptolemy.eecs.berkeley.edu/papers/96/dtmf_ict/
- * for more info.
- * Result is stored into an sk_buff and queued up for later
- * evaluation.
- */
-static void
-isdn_audio_goertzel(int *sample, modem_info *info)
-{
- int sk,
- sk1,
- sk2;
- int k,
- n;
- struct sk_buff *skb;
- int *result;
-
- skb = dev_alloc_skb(sizeof(int) * NCOEFF);
- if (!skb) {
- printk(KERN_WARNING
- "isdn_audio: Could not alloc DTMF result for ttyI%d\n",
- info->line);
- return;
- }
- result = skb_put(skb, sizeof(int) * NCOEFF);
- for (k = 0; k < NCOEFF; k++) {
- sk = sk1 = sk2 = 0;
- for (n = 0; n < DTMF_NPOINTS; n++) {
- sk = sample[n] + ((cos2pik[k] * sk1) >> 15) - sk2;
- sk2 = sk1;
- sk1 = sk;
- }
- /* Avoid overflows */
- sk >>= 1;
- sk2 >>= 1;
- /* compute |X(k)|**2 */
- /* report overflows. This should not happen. */
- /* Comment this out if desired */
- if (sk < -32768 || sk > 32767)
- printk(KERN_DEBUG
- "isdn_audio: dtmf goertzel overflow, sk=%d\n", sk);
- if (sk2 < -32768 || sk2 > 32767)
- printk(KERN_DEBUG
- "isdn_audio: dtmf goertzel overflow, sk2=%d\n", sk2);
- result[k] =
- ((sk * sk) >> AMP_BITS) -
- ((((cos2pik[k] * sk) >> 15) * sk2) >> AMP_BITS) +
- ((sk2 * sk2) >> AMP_BITS);
- }
- skb_queue_tail(&info->dtmf_queue, skb);
- isdn_timer_ctrl(ISDN_TIMER_MODEMREAD, 1);
-}
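
[Editorial aside, not part of the original patch: the loop above is the fixed-point Goertzel recurrence described in the comment next to cos2pik[], yielding |X(k)|^2 for each of the NCOEFF DTMF frequencies. A standalone per-bin sketch, where goertzel_power, npoints and coef are illustrative names and coef is the Q15 value 2 * cos(2 * PI * k / N) that the driver precomputes in cos2pik[]:]

	static int goertzel_power(const int *sample, int npoints, int coef)
	{
		int sk = 0, sk1 = 0, sk2 = 0, n;

		for (n = 0; n < npoints; n++) {
			sk = sample[n] + ((coef * sk1) >> 15) - sk2;
			sk2 = sk1;
			sk1 = sk;
		}
		sk >>= 1;	/* halve both terms to avoid overflow */
		sk2 >>= 1;
		/* |X(k)|^2 = sk^2 - coef*sk*sk2 + sk2^2, in AMP_BITS scale */
		return ((sk * sk) >> AMP_BITS) -
		       ((((coef * sk) >> 15) * sk2) >> AMP_BITS) +
		       ((sk2 * sk2) >> AMP_BITS);
	}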
-
-void
-isdn_audio_eval_dtmf(modem_info *info)
-{
- struct sk_buff *skb;
- int *result;
- dtmf_state *s;
- int silence;
- int i;
- int di;
- int ch;
- int grp[2];
- char what;
- char *p;
- int thresh;
-
- while ((skb = skb_dequeue(&info->dtmf_queue))) {
- result = (int *) skb->data;
- s = info->dtmf_state;
- grp[LOGRP] = grp[HIGRP] = -1;
- silence = 0;
- thresh = 0;
- for (i = 0; i < NCOEFF; i++) {
- if (result[i] > DTMF_TRESH) {
- if (result[i] > thresh)
- thresh = result[i];
- }
- else if (result[i] < SILENCE_TRESH)
- silence++;
- }
- if (silence == NCOEFF)
- what = ' ';
- else {
- if (thresh > 0) {
- thresh = thresh >> 4; /* touchtones must match within 12 dB */
- for (i = 0; i < NCOEFF; i++) {
- if (result[i] < thresh)
- continue; /* ignore */
- /* good level found. This is allowed only one time per group */
- if (i < NCOEFF / 2) {
- /* lowgroup*/
- if (grp[LOGRP] >= 0) {
- /* Bad. Another tone found. */
- grp[LOGRP] = -1;
- break;
- }
- else
- grp[LOGRP] = i;
- }
- else { /* higroup */
- if (grp[HIGRP] >= 0) { // Bad. Another tone found. */
- grp[HIGRP] = -1;
- break;
- }
- else
- grp[HIGRP] = i - NCOEFF/2;
- }
- }
- if ((grp[LOGRP] >= 0) && (grp[HIGRP] >= 0)) {
- what = dtmf_matrix[grp[LOGRP]][grp[HIGRP]];
- if (s->last != ' ' && s->last != '.')
- s->last = what; /* min. 1 non-DTMF between DTMF */
- } else
- what = '.';
- }
- else
- what = '.';
- }
- if ((what != s->last) && (what != ' ') && (what != '.')) {
- printk(KERN_DEBUG "dtmf: tt='%c'\n", what);
- p = skb->data;
- *p++ = 0x10;
- *p = what;
- skb_trim(skb, 2);
- ISDN_AUDIO_SKB_DLECOUNT(skb) = 0;
- ISDN_AUDIO_SKB_LOCK(skb) = 0;
- di = info->isdn_driver;
- ch = info->isdn_channel;
- __skb_queue_tail(&dev->drv[di]->rpqueue[ch], skb);
- dev->drv[di]->rcvcount[ch] += 2;
- /* Schedule dequeuing */
- if ((dev->modempoll) && (info->rcvsched))
- isdn_timer_ctrl(ISDN_TIMER_MODEMREAD, 1);
- wake_up_interruptible(&dev->drv[di]->rcv_waitq[ch]);
- } else
- kfree_skb(skb);
- s->last = what;
- }
-}
-
-/*
- * Decode DTMF tones, queue result in separate sk_buf for
- * later examination.
- * Parameters:
- * s = pointer to state-struct.
- * buf = input audio data
- * len = size of audio data.
- * fmt = audio data format (0 = ulaw, 1 = alaw)
- */
-void
-isdn_audio_calc_dtmf(modem_info *info, unsigned char *buf, int len, int fmt)
-{
- dtmf_state *s = info->dtmf_state;
- int i;
- int c;
-
- while (len) {
- c = DTMF_NPOINTS - s->idx;
- if (c > len)
- c = len;
- if (c <= 0)
- break;
- for (i = 0; i < c; i++) {
- if (fmt)
- s->buf[s->idx++] =
- isdn_audio_alaw_to_s16[*buf++] >> (15 - AMP_BITS);
- else
- s->buf[s->idx++] =
- isdn_audio_ulaw_to_s16[*buf++] >> (15 - AMP_BITS);
- }
- if (s->idx == DTMF_NPOINTS) {
- isdn_audio_goertzel(s->buf, info);
- s->idx = 0;
- }
- len -= c;
- }
-}
-
-silence_state *
-isdn_audio_silence_init(silence_state *s)
-{
- if (!s)
- s = kmalloc(sizeof(silence_state), GFP_ATOMIC);
- if (s) {
- s->idx = 0;
- s->state = 0;
- }
- return s;
-}
-
-void
-isdn_audio_calc_silence(modem_info *info, unsigned char *buf, int len, int fmt)
-{
- silence_state *s = info->silence_state;
- int i;
- signed char c;
-
- if (!info->emu.vpar[1]) return;
-
- for (i = 0; i < len; i++) {
- if (fmt)
- c = isdn_audio_alaw_to_ulaw[*buf++];
- else
- c = *buf++;
-
- if (c > 0) c -= 128;
- c = abs(c);
-
- if (c > (info->emu.vpar[1] * 4)) {
- s->idx = 0;
- s->state = 1;
- } else {
- if (s->idx < 210000) s->idx++;
- }
- }
-}
-
-void
-isdn_audio_put_dle_code(modem_info *info, u_char code)
-{
- struct sk_buff *skb;
- int di;
- int ch;
- char *p;
-
- skb = dev_alloc_skb(2);
- if (!skb) {
- printk(KERN_WARNING
- "isdn_audio: Could not alloc skb for ttyI%d\n",
- info->line);
- return;
- }
- p = skb_put(skb, 2);
- p[0] = 0x10;
- p[1] = code;
- ISDN_AUDIO_SKB_DLECOUNT(skb) = 0;
- ISDN_AUDIO_SKB_LOCK(skb) = 0;
- di = info->isdn_driver;
- ch = info->isdn_channel;
- __skb_queue_tail(&dev->drv[di]->rpqueue[ch], skb);
- dev->drv[di]->rcvcount[ch] += 2;
- /* Schedule dequeuing */
- if ((dev->modempoll) && (info->rcvsched))
- isdn_timer_ctrl(ISDN_TIMER_MODEMREAD, 1);
- wake_up_interruptible(&dev->drv[di]->rcv_waitq[ch]);
-}
-
-void
-isdn_audio_eval_silence(modem_info *info)
-{
- silence_state *s = info->silence_state;
- char what;
-
- what = ' ';
-
- if (s->idx > (info->emu.vpar[2] * 800)) {
- s->idx = 0;
- if (!s->state) { /* silence from beginning of rec */
- what = 's';
- } else {
- what = 'q';
- }
- }
- if ((what == 's') || (what == 'q')) {
- printk(KERN_DEBUG "ttyI%d: %s\n", info->line,
- (what == 's') ? "silence" : "quiet");
- isdn_audio_put_dle_code(info, what);
- }
-}
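Editor's note, a worked example of the threshold above, assuming the usual 8000 samples/s of A-law/u-law telephony audio (an assumption, not stated in this hunk):

    s->idx > vpar[2] * 800 samples  ==  vpar[2] * 800 / 8000 s  ==  vpar[2] * 0.1 s

so vpar[2] apparently expresses the required silence duration in tenths of a second.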
diff --git a/drivers/isdn/i4l/isdn_audio.h b/drivers/isdn/i4l/isdn_audio.h
deleted file mode 100644
index 013c3582e0d1..000000000000
--- a/drivers/isdn/i4l/isdn_audio.h
+++ /dev/null
@@ -1,44 +0,0 @@
-/* $Id: isdn_audio.h,v 1.1.2.2 2004/01/12 22:37:18 keil Exp $
- *
- * Linux ISDN subsystem, audio conversion and compression (linklevel).
- *
- * Copyright 1994-1999 by Fritz Elfert (fritz@isdn4linux.de)
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#define DTMF_NPOINTS 205 /* Number of samples for DTMF recognition */
-typedef struct adpcm_state {
- int a;
- int d;
- int word;
- int nleft;
- int nbits;
-} adpcm_state;
-
-typedef struct dtmf_state {
- char last;
- char llast;
- int idx;
- int buf[DTMF_NPOINTS];
-} dtmf_state;
-
-typedef struct silence_state {
- int state;
- unsigned int idx;
-} silence_state;
-
-extern void isdn_audio_ulaw2alaw(unsigned char *, unsigned long);
-extern void isdn_audio_alaw2ulaw(unsigned char *, unsigned long);
-extern adpcm_state *isdn_audio_adpcm_init(adpcm_state *, int);
-extern int isdn_audio_adpcm2xlaw(adpcm_state *, int, unsigned char *, unsigned char *, int);
-extern int isdn_audio_xlaw2adpcm(adpcm_state *, int, unsigned char *, unsigned char *, int);
-extern void isdn_audio_calc_dtmf(modem_info *, unsigned char *, int, int);
-extern void isdn_audio_eval_dtmf(modem_info *);
-dtmf_state *isdn_audio_dtmf_init(dtmf_state *);
-extern void isdn_audio_calc_silence(modem_info *, unsigned char *, int, int);
-extern void isdn_audio_eval_silence(modem_info *);
-silence_state *isdn_audio_silence_init(silence_state *);
-extern void isdn_audio_put_dle_code(modem_info *, u_char);
diff --git a/drivers/isdn/i4l/isdn_bsdcomp.c b/drivers/isdn/i4l/isdn_bsdcomp.c
deleted file mode 100644
index 7f28b967ed19..000000000000
--- a/drivers/isdn/i4l/isdn_bsdcomp.c
+++ /dev/null
@@ -1,930 +0,0 @@
-/*
- * BSD compression module
- *
- * Patched version for ISDN syncPPP written 1997/1998 by Michael Hipp
- * The whole module is now SKB based.
- *
- */
-
-/*
- * Update: The Berkeley copyright was changed, and the change
- * is retroactive to all "true" BSD software (ie everything
- * from UCB as opposed to other peoples code that just carried
- * the same license). The new copyright doesn't clash with the
- * GPL, so the module-only restriction has been removed..
- */
-
-/*
- * Original copyright notice:
- *
- * Copyright (c) 1985, 1986 The Regents of the University of California.
- * All rights reserved.
- *
- * This code is derived from software contributed to Berkeley by
- * James A. Woods, derived from original work by Spencer Thomas
- * and Joseph Orost.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in the
- * documentation and/or other materials provided with the distribution.
- * 3. All advertising materials mentioning features or use of this software
- * must display the following acknowledgement:
- * This product includes software developed by the University of
- * California, Berkeley and its contributors.
- * 4. Neither the name of the University nor the names of its contributors
- * may be used to endorse or promote products derived from this software
- * without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
- * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
- * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
- * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
- * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
- * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
- * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
- * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
- * SUCH DAMAGE.
- */
-
-#include <linux/module.h>
-#include <linux/init.h>
-#include <linux/kernel.h>
-#include <linux/types.h>
-#include <linux/fcntl.h>
-#include <linux/interrupt.h>
-#include <linux/ptrace.h>
-#include <linux/ioport.h>
-#include <linux/in.h>
-#include <linux/slab.h>
-#include <linux/tty.h>
-#include <linux/errno.h>
-#include <linux/string.h> /* used in new tty drivers */
-#include <linux/signal.h> /* used in new tty drivers */
-#include <linux/bitops.h>
-
-#include <asm/byteorder.h>
-#include <asm/types.h>
-
-#include <linux/if.h>
-
-#include <linux/if_ether.h>
-#include <linux/netdevice.h>
-#include <linux/skbuff.h>
-#include <linux/inet.h>
-#include <linux/ioctl.h>
-#include <linux/vmalloc.h>
-
-#include <linux/ppp_defs.h>
-
-#include <linux/isdn.h>
-#include <linux/isdn_ppp.h>
-#include <linux/ip.h>
-#include <linux/tcp.h>
-#include <linux/if_arp.h>
-#include <linux/ppp-comp.h>
-
-#include "isdn_ppp.h"
-
-MODULE_DESCRIPTION("ISDN4Linux: BSD Compression for PPP over ISDN");
-MODULE_LICENSE("Dual BSD/GPL");
-
-#define BSD_VERSION(x) ((x) >> 5)
-#define BSD_NBITS(x) ((x) & 0x1F)
-
-#define BSD_CURRENT_VERSION 1
-
-#define DEBUG 1
-
-/*
- * A dictionary for doing BSD compress.
- */
-
-struct bsd_dict {
- u32 fcode;
- u16 codem1; /* output of hash table -1 */
- u16 cptr; /* map code to hash table entry */
-};
-
-struct bsd_db {
- int totlen; /* length of this structure */
- unsigned int hsize; /* size of the hash table */
- unsigned char hshift; /* used in hash function */
- unsigned char n_bits; /* current bits/code */
- unsigned char maxbits; /* maximum bits/code */
- unsigned char debug; /* non-zero if debug desired */
- unsigned char unit; /* ppp unit number */
- u16 seqno; /* sequence # of next packet */
- unsigned int mru; /* size of receive (decompress) bufr */
- unsigned int maxmaxcode; /* largest valid code */
- unsigned int max_ent; /* largest code in use */
- unsigned int in_count; /* uncompressed bytes, aged */
- unsigned int bytes_out; /* compressed bytes, aged */
- unsigned int ratio; /* recent compression ratio */
- unsigned int checkpoint; /* when to next check the ratio */
- unsigned int clear_count; /* times dictionary cleared */
- unsigned int incomp_count; /* incompressible packets */
- unsigned int incomp_bytes; /* incompressible bytes */
- unsigned int uncomp_count; /* uncompressed packets */
- unsigned int uncomp_bytes; /* uncompressed bytes */
- unsigned int comp_count; /* compressed packets */
- unsigned int comp_bytes; /* compressed bytes */
- unsigned short *lens; /* array of lengths of codes */
- struct bsd_dict *dict; /* dictionary */
- int xmit;
-};
-
-#define BSD_OVHD 2 /* BSD compress overhead/packet */
-#define MIN_BSD_BITS 9
-#define BSD_INIT_BITS MIN_BSD_BITS
-#define MAX_BSD_BITS 15
-
-/*
- * the next two codes should not be changed lightly, as they must not
- * lie within the contiguous general code space.
- */
-#define CLEAR 256 /* table clear output code */
-#define FIRST 257 /* first free entry */
-#define LAST 255
-
-#define MAXCODE(b) ((1 << (b)) - 1)
-#define BADCODEM1 MAXCODE(MAX_BSD_BITS)
-
-#define BSD_HASH(prefix, suffix, hshift) ((((unsigned long)(suffix)) << (hshift)) \
- ^ (unsigned long)(prefix))
-#define BSD_KEY(prefix, suffix) ((((unsigned long)(suffix)) << 16) \
- + (unsigned long)(prefix))
-
-#define CHECK_GAP 10000 /* Ratio check interval */
-
-#define RATIO_SCALE_LOG 8
-#define RATIO_SCALE (1 << RATIO_SCALE_LOG)
-#define RATIO_MAX (0x7fffffff >> RATIO_SCALE_LOG)
-
-/*
- * clear the dictionary
- */
-
-static void bsd_clear(struct bsd_db *db)
-{
- db->clear_count++;
- db->max_ent = FIRST - 1;
- db->n_bits = BSD_INIT_BITS;
- db->bytes_out = 0;
- db->in_count = 0;
- db->incomp_count = 0;
- db->ratio = 0;
- db->checkpoint = CHECK_GAP;
-}
-
-/*
- * If the dictionary is full, then see if it is time to reset it.
- *
- * Compute the compression ratio using fixed-point arithmetic
- * with 8 fractional bits.
- *
- * Since we have an infinite stream instead of a single file,
- * watch only the local compression ratio.
- *
- * Since both peers must reset the dictionary at the same time even in
- * the absence of CLEAR codes (while packets are incompressible), they
- * must compute the same ratio.
- */
-static int bsd_check(struct bsd_db *db) /* 1=output CLEAR */
-{
- unsigned int new_ratio;
-
- if (db->in_count >= db->checkpoint)
- {
- /* age the ratio by limiting the size of the counts */
- if (db->in_count >= RATIO_MAX || db->bytes_out >= RATIO_MAX)
- {
- db->in_count -= (db->in_count >> 2);
- db->bytes_out -= (db->bytes_out >> 2);
- }
-
- db->checkpoint = db->in_count + CHECK_GAP;
-
- if (db->max_ent >= db->maxmaxcode)
- {
- /* Reset the dictionary only if the ratio is worse,
- * or if it looks as if it has been poisoned
- * by incompressible data.
- *
- * This does not overflow, because
- * db->in_count <= RATIO_MAX.
- */
-
- new_ratio = db->in_count << RATIO_SCALE_LOG;
- if (db->bytes_out != 0)
- {
- new_ratio /= db->bytes_out;
- }
-
- if (new_ratio < db->ratio || new_ratio < 1 * RATIO_SCALE)
- {
- bsd_clear(db);
- return 1;
- }
- db->ratio = new_ratio;
- }
- }
- return 0;
-}
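Editor's note: to make the fixed-point arithmetic above concrete, a small worked example with made-up counters (RATIO_SCALE_LOG is 8, so RATIO_SCALE is 256):

	unsigned int in_count  = 30000;	/* hypothetical uncompressed byte count */
	unsigned int bytes_out = 20000;	/* hypothetical compressed byte count   */
	unsigned int new_ratio = (in_count << 8) / bytes_out;	/* 7680000 / 20000 == 384, i.e. 1.5:1 */
	/* With a full dictionary, bsd_check() clears it only if a later ratio
	 * drops below this 384, or below 1 * RATIO_SCALE (256, i.e. 1:1).
	 */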
-
-/*
- * Return statistics.
- */
-
-static void bsd_stats(void *state, struct compstat *stats)
-{
- struct bsd_db *db = (struct bsd_db *) state;
-
- stats->unc_bytes = db->uncomp_bytes;
- stats->unc_packets = db->uncomp_count;
- stats->comp_bytes = db->comp_bytes;
- stats->comp_packets = db->comp_count;
- stats->inc_bytes = db->incomp_bytes;
- stats->inc_packets = db->incomp_count;
- stats->in_count = db->in_count;
- stats->bytes_out = db->bytes_out;
-}
-
-/*
- * Reset state, as on a CCP ResetReq.
- */
-static void bsd_reset(void *state, unsigned char code, unsigned char id,
- unsigned char *data, unsigned len,
- struct isdn_ppp_resetparams *rsparm)
-{
- struct bsd_db *db = (struct bsd_db *) state;
-
- bsd_clear(db);
- db->seqno = 0;
- db->clear_count = 0;
-}
-
-/*
- * Release the compression structure
- */
-static void bsd_free(void *state)
-{
- struct bsd_db *db = (struct bsd_db *) state;
-
- if (db) {
- /*
- * Release the dictionary
- */
- vfree(db->dict);
- db->dict = NULL;
-
- /*
- * Release the string buffer
- */
- vfree(db->lens);
- db->lens = NULL;
-
- /*
- * Finally release the structure itself.
- */
- kfree(db);
- }
-}
-
-
-/*
- * Allocate space for a (de) compressor.
- */
-static void *bsd_alloc(struct isdn_ppp_comp_data *data)
-{
- int bits;
- unsigned int hsize, hshift, maxmaxcode;
- struct bsd_db *db;
- int decomp;
-
- static unsigned int htab[][2] = {
- { 5003 , 4 } , { 5003 , 4 } , { 5003 , 4 } , { 5003 , 4 } ,
- { 9001 , 5 } , { 18013 , 6 } , { 35023 , 7 } , { 69001 , 8 }
- };
-
- if (data->optlen != 1 || data->num != CI_BSD_COMPRESS
- || BSD_VERSION(data->options[0]) != BSD_CURRENT_VERSION)
- return NULL;
-
- bits = BSD_NBITS(data->options[0]);
-
- if (bits < 9 || bits > 15)
- return NULL;
-
- hsize = htab[bits - 9][0];
- hshift = htab[bits - 9][1];
-
- /*
- * Allocate the main control structure for this instance.
- */
- maxmaxcode = MAXCODE(bits);
- db = kzalloc(sizeof(struct bsd_db), GFP_KERNEL);
- if (!db)
- return NULL;
-
- db->xmit = data->flags & IPPP_COMP_FLAG_XMIT;
- decomp = db->xmit ? 0 : 1;
-
- /*
- * Allocate space for the dictionary. This may be more than one page in
- * length.
- */
- db->dict = vmalloc(array_size(hsize, sizeof(struct bsd_dict)));
- if (!db->dict) {
- bsd_free(db);
- return NULL;
- }
-
- /*
- * If this is the compression buffer then there is no length data.
- * For decompression, the length information is needed as well.
- */
- if (!decomp)
- db->lens = NULL;
- else {
- db->lens = vmalloc(array_size(sizeof(db->lens[0]),
- maxmaxcode + 1));
- if (!db->lens) {
- bsd_free(db);
- return (NULL);
- }
- }
-
- /*
- * Initialize the data information for the compression code
- */
- db->totlen = sizeof(struct bsd_db) + (sizeof(struct bsd_dict) * hsize);
- db->hsize = hsize;
- db->hshift = hshift;
- db->maxmaxcode = maxmaxcode;
- db->maxbits = bits;
-
- return (void *)db;
-}
-
-/*
- * Initialize the database.
- */
-static int bsd_init(void *state, struct isdn_ppp_comp_data *data, int unit, int debug)
-{
- struct bsd_db *db = state;
- int indx;
- int decomp;
-
- if (!state || !data) {
- printk(KERN_ERR "isdn_bsd_init: [%d] ERR, state %lx data %lx\n", unit, (long)state, (long)data);
- return 0;
- }
-
- decomp = db->xmit ? 0 : 1;
-
- if (data->optlen != 1 || data->num != CI_BSD_COMPRESS
- || (BSD_VERSION(data->options[0]) != BSD_CURRENT_VERSION)
- || (BSD_NBITS(data->options[0]) != db->maxbits)
- || (decomp && db->lens == NULL)) {
- printk(KERN_ERR "isdn_bsd: %d %d %d %d %lx\n", data->optlen, data->num, data->options[0], decomp, (unsigned long)db->lens);
- return 0;
- }
-
- if (decomp)
- for (indx = LAST; indx >= 0; indx--)
- db->lens[indx] = 1;
-
- indx = db->hsize;
- while (indx-- != 0) {
- db->dict[indx].codem1 = BADCODEM1;
- db->dict[indx].cptr = 0;
- }
-
- db->unit = unit;
- db->mru = 0;
-
- db->debug = 1;
-
- bsd_reset(db, 0, 0, NULL, 0, NULL);
-
- return 1;
-}
-
-/*
- * Obtain pointers to the various structures in the compression tables
- */
-
-#define dict_ptrx(p, idx) &(p->dict[idx])
-#define lens_ptrx(p, idx) &(p->lens[idx])
-
-#ifdef DEBUG
-static unsigned short *lens_ptr(struct bsd_db *db, int idx)
-{
- if ((unsigned int) idx > (unsigned int) db->maxmaxcode) {
- printk(KERN_DEBUG "ppp: lens_ptr(%d) > max\n", idx);
- idx = 0;
- }
- return lens_ptrx(db, idx);
-}
-
-static struct bsd_dict *dict_ptr(struct bsd_db *db, int idx)
-{
- if ((unsigned int) idx >= (unsigned int) db->hsize) {
- printk(KERN_DEBUG "ppp: dict_ptr(%d) > max\n", idx);
- idx = 0;
- }
- return dict_ptrx(db, idx);
-}
-
-#else
-#define lens_ptr(db, idx) lens_ptrx(db, idx)
-#define dict_ptr(db, idx) dict_ptrx(db, idx)
-#endif
-
-/*
- * compress a packet
- */
-static int bsd_compress(void *state, struct sk_buff *skb_in, struct sk_buff *skb_out, int proto)
-{
- struct bsd_db *db;
- int hshift;
- unsigned int max_ent;
- unsigned int n_bits;
- unsigned int bitno;
- unsigned long accm;
- int ent;
- unsigned long fcode;
- struct bsd_dict *dictp;
- unsigned char c;
- int hval, disp, ilen, mxcode;
- unsigned char *rptr = skb_in->data;
- int isize = skb_in->len;
-
-#define OUTPUT(ent) \
- { \
- bitno -= n_bits; \
- accm |= ((ent) << bitno); \
- do { \
- if (skb_out && skb_tailroom(skb_out) > 0) \
- skb_put_u8(skb_out, (u8)(accm >> 24)); \
- accm <<= 8; \
- bitno += 8; \
- } while (bitno <= 24); \
- }
-
- /*
- * If the protocol is not in the range we're interested in,
- * just return without compressing the packet. If it is,
- * the protocol becomes the first byte to compress.
- */
- printk(KERN_DEBUG "bsd_compress called with %x\n", proto);
-
- ent = proto;
- if (proto < 0x21 || proto > 0xf9 || !(proto & 0x1))
- return 0;
-
- db = (struct bsd_db *) state;
- hshift = db->hshift;
- max_ent = db->max_ent;
- n_bits = db->n_bits;
- bitno = 32;
- accm = 0;
- mxcode = MAXCODE(n_bits);
-
- /* This is the PPP header information */
- if (skb_out && skb_tailroom(skb_out) >= 2) {
- char *v = skb_put(skb_out, 2);
- /* we only push our own data onto the header;
- AC, PC and proto are pushed by the caller */
- v[0] = db->seqno >> 8;
- v[1] = db->seqno;
- }
-
- ilen = ++isize; /* This is off by one, but that is what is in the draft! */
-
- while (--ilen > 0) {
- c = *rptr++;
- fcode = BSD_KEY(ent, c);
- hval = BSD_HASH(ent, c, hshift);
- dictp = dict_ptr(db, hval);
-
- /* Validate and then check the entry. */
- if (dictp->codem1 >= max_ent)
- goto nomatch;
-
- if (dictp->fcode == fcode) {
- ent = dictp->codem1 + 1;
- continue; /* found (prefix,suffix) */
- }
-
- /* continue probing until a match or invalid entry */
- disp = (hval == 0) ? 1 : hval;
-
- do {
- hval += disp;
- if (hval >= db->hsize)
- hval -= db->hsize;
- dictp = dict_ptr(db, hval);
- if (dictp->codem1 >= max_ent)
- goto nomatch;
- } while (dictp->fcode != fcode);
-
- ent = dictp->codem1 + 1; /* finally found (prefix,suffix) */
- continue;
-
- nomatch:
- OUTPUT(ent); /* output the prefix */
-
- /* code -> hashtable */
- if (max_ent < db->maxmaxcode) {
- struct bsd_dict *dictp2;
- struct bsd_dict *dictp3;
- int indx;
-
- /* expand code size if needed */
- if (max_ent >= mxcode) {
- db->n_bits = ++n_bits;
- mxcode = MAXCODE(n_bits);
- }
-
- /*
- * Invalidate old hash table entry using
- * this code, and then take it over.
- */
- dictp2 = dict_ptr(db, max_ent + 1);
- indx = dictp2->cptr;
- dictp3 = dict_ptr(db, indx);
-
- if (dictp3->codem1 == max_ent)
- dictp3->codem1 = BADCODEM1;
-
- dictp2->cptr = hval;
- dictp->codem1 = max_ent;
- dictp->fcode = fcode;
- db->max_ent = ++max_ent;
-
- if (db->lens) {
- unsigned short *len1 = lens_ptr(db, max_ent);
- unsigned short *len2 = lens_ptr(db, ent);
- *len1 = *len2 + 1;
- }
- }
- ent = c;
- }
-
- OUTPUT(ent); /* output the last code */
-
- if (skb_out)
- db->bytes_out += skb_out->len; /* Do not count bytes from here */
- db->uncomp_bytes += isize;
- db->in_count += isize;
- ++db->uncomp_count;
- ++db->seqno;
-
- if (bitno < 32)
- ++db->bytes_out; /* must be set before calling bsd_check */
-
- /*
- * Generate the clear command if needed
- */
-
- if (bsd_check(db))
- OUTPUT(CLEAR);
-
- /*
- * Pad dribble bits of last code with ones.
- * Do not emit a completely useless byte of ones.
- */
- if (bitno < 32 && skb_out && skb_tailroom(skb_out) > 0)
- skb_put_u8(skb_out,
- (unsigned char)((accm | (0xff << (bitno - 8))) >> 24));
-
- /*
- * Increase code size if we would have without the packet
- * boundary because the decompressor will do so.
- */
- if (max_ent >= mxcode && max_ent < db->maxmaxcode)
- db->n_bits++;
-
- /* If output length is too large then this is an incompressible frame. */
- if (!skb_out || skb_out->len >= skb_in->len) {
- ++db->incomp_count;
- db->incomp_bytes += isize;
- return 0;
- }
-
- /* Count the number of compressed frames */
- ++db->comp_count;
- db->comp_bytes += skb_out->len;
- return skb_out->len;
-
-#undef OUTPUT
-}
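Editor's note: the OUTPUT() macro above interleaves the bit-packing with skb bookkeeping, which makes it hard to read in isolation. The standalone sketch below shows only the packing scheme; struct bitpacker, pack_code() and the emit() callback are illustrative stand-ins (emit() plays the role of skb_put_u8()), and it relies on n_bits >= MIN_BSD_BITS so that every call completes at least one byte.

struct bitpacker {
	unsigned int accm;	/* 32-bit accumulator, codes enter below 'bitno' */
	unsigned int bitno;	/* free bits remaining in accm, starts at 32     */
};

static void pack_code(struct bitpacker *bp, unsigned int code,
		      unsigned int n_bits, void (*emit)(unsigned char))
{
	bp->bitno -= n_bits;		/* reserve room for the new code */
	bp->accm |= code << bp->bitno;
	do {				/* flush completed bytes, MSB first */
		emit((unsigned char)(bp->accm >> 24));
		bp->accm <<= 8;
		bp->bitno += 8;
	} while (bp->bitno <= 24);
}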
-
-/*
- * Update the "BSD Compress" dictionary on the receiver for
- * incompressible data by pretending to compress the incoming data.
- */
-static void bsd_incomp(void *state, struct sk_buff *skb_in, int proto)
-{
- bsd_compress(state, skb_in, NULL, proto);
-}
-
-/*
- * Decompress "BSD Compress".
- */
-static int bsd_decompress(void *state, struct sk_buff *skb_in, struct sk_buff *skb_out,
- struct isdn_ppp_resetparams *rsparm)
-{
- struct bsd_db *db;
- unsigned int max_ent;
- unsigned long accm;
- unsigned int bitno; /* 1st valid bit in accm */
- unsigned int n_bits;
- unsigned int tgtbitno; /* bitno when we have a code */
- struct bsd_dict *dictp;
- int seq;
- unsigned int incode;
- unsigned int oldcode;
- unsigned int finchar;
- unsigned char *p, *ibuf;
- int ilen;
- int codelen;
- int extra;
-
- db = (struct bsd_db *) state;
- max_ent = db->max_ent;
- accm = 0;
- bitno = 32; /* 1st valid bit in accm */
- n_bits = db->n_bits;
- tgtbitno = 32 - n_bits; /* bitno when we have a code */
-
- printk(KERN_DEBUG "bsd_decompress called\n");
-
- if (!skb_in || !skb_out) {
- printk(KERN_ERR "bsd_decompress called with NULL parameter\n");
- return DECOMP_ERROR;
- }
-
- /*
- * Get the sequence number.
- */
- if ((p = skb_pull(skb_in, 2)) == NULL) {
- return DECOMP_ERROR;
- }
- p -= 2;
- seq = (p[0] << 8) + p[1];
- ilen = skb_in->len;
- ibuf = skb_in->data;
-
- /*
- * Check the sequence number and give up if it differs from
- * the value we're expecting.
- */
- if (seq != db->seqno) {
- if (db->debug) {
- printk(KERN_DEBUG "bsd_decomp%d: bad sequence # %d, expected %d\n",
- db->unit, seq, db->seqno - 1);
- }
- return DECOMP_ERROR;
- }
-
- ++db->seqno;
- db->bytes_out += ilen;
-
- if (skb_tailroom(skb_out) > 0)
- skb_put_u8(skb_out, 0);
- else
- return DECOMP_ERR_NOMEM;
-
- oldcode = CLEAR;
-
- /*
- * Keep the checkpoint correctly so that incompressible packets
- * clear the dictionary at the proper times.
- */
-
- for (;;) {
- if (ilen-- <= 0) {
- db->in_count += (skb_out->len - 1); /* don't count the header */
- break;
- }
-
- /*
- * Accumulate bytes until we have a complete code.
- * Then get the next code, relying on the 32-bit,
- * unsigned accm to mask the result.
- */
-
- bitno -= 8;
- accm |= *ibuf++ << bitno;
- if (tgtbitno < bitno)
- continue;
-
- incode = accm >> tgtbitno;
- accm <<= n_bits;
- bitno += n_bits;
-
- /*
- * The dictionary must only be cleared at the end of a packet.
- */
-
- if (incode == CLEAR) {
- if (ilen > 0) {
- if (db->debug)
- printk(KERN_DEBUG "bsd_decomp%d: bad CLEAR\n", db->unit);
- return DECOMP_FATALERROR; /* probably a bug */
- }
- bsd_clear(db);
- break;
- }
-
- if ((incode > max_ent + 2) || (incode > db->maxmaxcode)
- || (incode > max_ent && oldcode == CLEAR)) {
- if (db->debug) {
- printk(KERN_DEBUG "bsd_decomp%d: bad code 0x%x oldcode=0x%x ",
- db->unit, incode, oldcode);
- printk(KERN_DEBUG "max_ent=0x%x skb->Len=%d seqno=%d\n",
- max_ent, skb_out->len, db->seqno);
- }
- return DECOMP_FATALERROR; /* probably a bug */
- }
-
- /* Special case for KwKwK string. */
- if (incode > max_ent) {
- finchar = oldcode;
- extra = 1;
- } else {
- finchar = incode;
- extra = 0;
- }
-
- codelen = *(lens_ptr(db, finchar));
- if (skb_tailroom(skb_out) < codelen + extra) {
- if (db->debug) {
- printk(KERN_DEBUG "bsd_decomp%d: ran out of mru\n", db->unit);
-#ifdef DEBUG
- printk(KERN_DEBUG " len=%d, finchar=0x%x, codelen=%d,skblen=%d\n",
- ilen, finchar, codelen, skb_out->len);
-#endif
- }
- return DECOMP_FATALERROR;
- }
-
- /*
- * Decode this code and install it in the decompressed buffer.
- */
-
- p = skb_put(skb_out, codelen);
- p += codelen;
- while (finchar > LAST) {
- struct bsd_dict *dictp2 = dict_ptr(db, finchar);
-
- dictp = dict_ptr(db, dictp2->cptr);
-
-#ifdef DEBUG
- if (--codelen <= 0 || dictp->codem1 != finchar - 1) {
- if (codelen <= 0) {
- printk(KERN_ERR "bsd_decomp%d: fell off end of chain ", db->unit);
- printk(KERN_ERR "0x%x at 0x%x by 0x%x, max_ent=0x%x\n", incode, finchar, dictp2->cptr, max_ent);
- } else {
- if (dictp->codem1 != finchar - 1) {
- printk(KERN_ERR "bsd_decomp%d: bad code chain 0x%x finchar=0x%x ", db->unit, incode, finchar);
- printk(KERN_ERR "oldcode=0x%x cptr=0x%x codem1=0x%x\n", oldcode, dictp2->cptr, dictp->codem1);
- }
- }
- return DECOMP_FATALERROR;
- }
-#endif
-
- {
- u32 fcode = dictp->fcode;
- *--p = (fcode >> 16) & 0xff;
- finchar = fcode & 0xffff;
- }
- }
- *--p = finchar;
-
-#ifdef DEBUG
- if (--codelen != 0)
- printk(KERN_ERR "bsd_decomp%d: short by %d after code 0x%x, max_ent=0x%x\n", db->unit, codelen, incode, max_ent);
-#endif
-
- if (extra) /* the KwKwK case again */
- skb_put_u8(skb_out, finchar);
-
- /*
- * If not first code in a packet, and
- * if not out of code space, then allocate a new code.
- *
- * Keep the hash table correct so it can be used
- * with uncompressed packets.
- */
- if (oldcode != CLEAR && max_ent < db->maxmaxcode) {
- struct bsd_dict *dictp2, *dictp3;
- u16 *lens1, *lens2;
- unsigned long fcode;
- int hval, disp, indx;
-
- fcode = BSD_KEY(oldcode, finchar);
- hval = BSD_HASH(oldcode, finchar, db->hshift);
- dictp = dict_ptr(db, hval);
-
- /* look for a free hash table entry */
- if (dictp->codem1 < max_ent) {
- disp = (hval == 0) ? 1 : hval;
- do {
- hval += disp;
- if (hval >= db->hsize)
- hval -= db->hsize;
- dictp = dict_ptr(db, hval);
- } while (dictp->codem1 < max_ent);
- }
-
- /*
- * Invalidate previous hash table entry
- * assigned this code, and then take it over
- */
-
- dictp2 = dict_ptr(db, max_ent + 1);
- indx = dictp2->cptr;
- dictp3 = dict_ptr(db, indx);
-
- if (dictp3->codem1 == max_ent)
- dictp3->codem1 = BADCODEM1;
-
- dictp2->cptr = hval;
- dictp->codem1 = max_ent;
- dictp->fcode = fcode;
- db->max_ent = ++max_ent;
-
- /* Update the length of this string. */
- lens1 = lens_ptr(db, max_ent);
- lens2 = lens_ptr(db, oldcode);
- *lens1 = *lens2 + 1;
-
- /* Expand code size if needed. */
- if (max_ent >= MAXCODE(n_bits) && max_ent < db->maxmaxcode) {
- db->n_bits = ++n_bits;
- tgtbitno = 32-n_bits;
- }
- }
- oldcode = incode;
- }
-
- ++db->comp_count;
- ++db->uncomp_count;
- db->comp_bytes += skb_in->len - BSD_OVHD;
- db->uncomp_bytes += skb_out->len;
-
- if (bsd_check(db)) {
- if (db->debug)
- printk(KERN_DEBUG "bsd_decomp%d: peer should have cleared dictionary on %d\n",
- db->unit, db->seqno - 1);
- }
- return skb_out->len;
-}
-
-/*************************************************************
- * Table of addresses for the BSD compression module
- *************************************************************/
-
-static struct isdn_ppp_compressor ippp_bsd_compress = {
- .owner = THIS_MODULE,
- .num = CI_BSD_COMPRESS,
- .alloc = bsd_alloc,
- .free = bsd_free,
- .init = bsd_init,
- .reset = bsd_reset,
- .compress = bsd_compress,
- .decompress = bsd_decompress,
- .incomp = bsd_incomp,
- .stat = bsd_stats,
-};
-
-/*************************************************************
- * Module support routines
- *************************************************************/
-
-static int __init isdn_bsdcomp_init(void)
-{
- int answer = isdn_ppp_register_compressor(&ippp_bsd_compress);
- if (answer == 0)
- printk(KERN_INFO "PPP BSD Compression module registered\n");
- return answer;
-}
-
-static void __exit isdn_bsdcomp_exit(void)
-{
- isdn_ppp_unregister_compressor(&ippp_bsd_compress);
-}
-
-module_init(isdn_bsdcomp_init);
-module_exit(isdn_bsdcomp_exit);
diff --git a/drivers/isdn/i4l/isdn_common.c b/drivers/isdn/i4l/isdn_common.c
deleted file mode 100644
index 74ee00f5b310..000000000000
--- a/drivers/isdn/i4l/isdn_common.c
+++ /dev/null
@@ -1,2368 +0,0 @@
-/* $Id: isdn_common.c,v 1.1.2.3 2004/02/10 01:07:13 keil Exp $
- *
- * Linux ISDN subsystem, common used functions (linklevel).
- *
- * Copyright 1994-1999 by Fritz Elfert (fritz@isdn4linux.de)
- * Copyright 1995,96 Thinking Objects Software GmbH Wuerzburg
- * Copyright 1995,96 by Michael Hipp (Michael.Hipp@student.uni-tuebingen.de)
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#include <linux/module.h>
-#include <linux/init.h>
-#include <linux/poll.h>
-#include <linux/slab.h>
-#include <linux/vmalloc.h>
-#include <linux/isdn.h>
-#include <linux/mutex.h>
-#include "isdn_common.h"
-#include "isdn_tty.h"
-#include "isdn_net.h"
-#include "isdn_ppp.h"
-#ifdef CONFIG_ISDN_AUDIO
-#include "isdn_audio.h"
-#endif
-#ifdef CONFIG_ISDN_DIVERSION_MODULE
-#define CONFIG_ISDN_DIVERSION
-#endif
-#ifdef CONFIG_ISDN_DIVERSION
-#include <linux/isdn_divertif.h>
-#endif /* CONFIG_ISDN_DIVERSION */
-#include "isdn_v110.h"
-
-/* Debugflags */
-#undef ISDN_DEBUG_STATCALLB
-
-MODULE_DESCRIPTION("ISDN4Linux: link layer");
-MODULE_AUTHOR("Fritz Elfert");
-MODULE_LICENSE("GPL");
-
-isdn_dev *dev;
-
-static DEFINE_MUTEX(isdn_mutex);
-static char *isdn_revision = "$Revision: 1.1.2.3 $";
-
-extern char *isdn_net_revision;
-#ifdef CONFIG_ISDN_PPP
-extern char *isdn_ppp_revision;
-#else
-static char *isdn_ppp_revision = ": none $";
-#endif
-#ifdef CONFIG_ISDN_AUDIO
-extern char *isdn_audio_revision;
-#else
-static char *isdn_audio_revision = ": none $";
-#endif
-extern char *isdn_v110_revision;
-
-#ifdef CONFIG_ISDN_DIVERSION
-static isdn_divert_if *divert_if; /* = NULL */
-#endif /* CONFIG_ISDN_DIVERSION */
-
-
-static int isdn_writebuf_stub(int, int, const u_char __user *, int);
-static void set_global_features(void);
-static int isdn_wildmat(char *s, char *p);
-static int isdn_add_channels(isdn_driver_t *d, int drvidx, int n, int adding);
-
-static inline void
-isdn_lock_driver(isdn_driver_t *drv)
-{
- try_module_get(drv->interface->owner);
- drv->locks++;
-}
-
-void
-isdn_lock_drivers(void)
-{
- int i;
-
- for (i = 0; i < ISDN_MAX_DRIVERS; i++) {
- if (!dev->drv[i])
- continue;
- isdn_lock_driver(dev->drv[i]);
- }
-}
-
-static inline void
-isdn_unlock_driver(isdn_driver_t *drv)
-{
- if (drv->locks > 0) {
- drv->locks--;
- module_put(drv->interface->owner);
- }
-}
-
-void
-isdn_unlock_drivers(void)
-{
- int i;
-
- for (i = 0; i < ISDN_MAX_DRIVERS; i++) {
- if (!dev->drv[i])
- continue;
- isdn_unlock_driver(dev->drv[i]);
- }
-}
-
-#if defined(ISDN_DEBUG_NET_DUMP) || defined(ISDN_DEBUG_MODEM_DUMP)
-void
-isdn_dumppkt(char *s, u_char *p, int len, int dumplen)
-{
- int dumpc;
-
- printk(KERN_DEBUG "%s(%d) ", s, len);
- for (dumpc = 0; (dumpc < dumplen) && (len); len--, dumpc++)
- printk(" %02x", *p++);
- printk("\n");
-}
-#endif
-
-/*
- * I picked the pattern-matching functions from an old GNU tar version (1.10).
- * They were originally written and put into the public domain by rs@mirror.TMC.COM (Rich Salz)
- */
-static int
-isdn_star(char *s, char *p)
-{
- while (isdn_wildmat(s, p)) {
- if (*++s == '\0')
- return (2);
- }
- return (0);
-}
-
-/*
- * Shell-type Pattern-matching for incoming caller-Ids
- * This function gets a string in s and checks whether it matches the pattern
- * given in p.
- *
- * Return:
- * 0 = match.
- * 1 = no match.
- * 2 = no match, but would match if s were longer.
- *
- * Possible Patterns:
- *
- * '?' matches one character
- * '*' matches zero or more characters
- * [xyz] matches the set of characters in brackets.
- * [^xyz] matches any single character not in the set of characters
- */
-
-static int
-isdn_wildmat(char *s, char *p)
-{
- register int last;
- register int matched;
- register int reverse;
- register int nostar = 1;
-
- if (!(*s) && !(*p))
- return (1);
- for (; *p; s++, p++)
- switch (*p) {
- case '\\':
- /* Literal match with following character. */
- p++;
- /* fall through */
- default:
- if (*s != *p)
- return (*s == '\0') ? 2 : 1;
- continue;
- case '?':
- /* Match anything. */
- if (*s == '\0')
- return (2);
- continue;
- case '*':
- nostar = 0;
- /* Trailing star matches everything. */
- return (*++p ? isdn_star(s, p) : 0);
- case '[':
- /* [^....] means inverse character class. */
- if ((reverse = (p[1] == '^')))
- p++;
- for (last = 0, matched = 0; *++p && (*p != ']'); last = *p)
- /* This next line requires a good C compiler. */
- if (*p == '-' ? *s <= *++p && *s >= last : *s == *p)
- matched = 1;
- if (matched == reverse)
- return (1);
- continue;
- }
- return (*s == '\0') ? 0 : nostar;
-}
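Editor's note: a few illustrative calls (the MSN strings are made up, not taken from the driver) showing the return convention documented above:

	isdn_wildmat("7654321", "765*");	/* -> 0: match                              */
	isdn_wildmat("7654321", "999?");	/* -> 1: no match                           */
	isdn_wildmat("76",      "7654321");	/* -> 2: no match, but would match if the   */
						/*       caller ID (s) were longer          */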
-
-int isdn_msncmp(const char *msn1, const char *msn2)
-{
- char TmpMsn1[ISDN_MSNLEN];
- char TmpMsn2[ISDN_MSNLEN];
- char *p;
-
- for (p = TmpMsn1; *msn1 && *msn1 != ':';) // Strip off a SPID
- *p++ = *msn1++;
- *p = '\0';
-
- for (p = TmpMsn2; *msn2 && *msn2 != ':';) // Strip off a SPID
- *p++ = *msn2++;
- *p = '\0';
-
- return isdn_wildmat(TmpMsn1, TmpMsn2);
-}
-
-int
-isdn_dc2minor(int di, int ch)
-{
- int i;
- for (i = 0; i < ISDN_MAX_CHANNELS; i++)
- if (dev->chanmap[i] == ch && dev->drvmap[i] == di)
- return i;
- return -1;
-}
-
-static int isdn_timer_cnt1 = 0;
-static int isdn_timer_cnt2 = 0;
-static int isdn_timer_cnt3 = 0;
-
-static void
-isdn_timer_funct(struct timer_list *unused)
-{
- int tf = dev->tflags;
- if (tf & ISDN_TIMER_FAST) {
- if (tf & ISDN_TIMER_MODEMREAD)
- isdn_tty_readmodem();
- if (tf & ISDN_TIMER_MODEMPLUS)
- isdn_tty_modem_escape();
- if (tf & ISDN_TIMER_MODEMXMIT)
- isdn_tty_modem_xmit();
- }
- if (tf & ISDN_TIMER_SLOW) {
- if (++isdn_timer_cnt1 >= ISDN_TIMER_02SEC) {
- isdn_timer_cnt1 = 0;
- if (tf & ISDN_TIMER_NETDIAL)
- isdn_net_dial();
- }
- if (++isdn_timer_cnt2 >= ISDN_TIMER_1SEC) {
- isdn_timer_cnt2 = 0;
- if (tf & ISDN_TIMER_NETHANGUP)
- isdn_net_autohup();
- if (++isdn_timer_cnt3 >= ISDN_TIMER_RINGING) {
- isdn_timer_cnt3 = 0;
- if (tf & ISDN_TIMER_MODEMRING)
- isdn_tty_modem_ring();
- }
- if (tf & ISDN_TIMER_CARRIER)
- isdn_tty_carrier_timeout();
- }
- }
- if (tf)
- mod_timer(&dev->timer, jiffies + ISDN_TIMER_RES);
-}
-
-void
-isdn_timer_ctrl(int tf, int onoff)
-{
- unsigned long flags;
- int old_tflags;
-
- spin_lock_irqsave(&dev->timerlock, flags);
- if ((tf & ISDN_TIMER_SLOW) && (!(dev->tflags & ISDN_TIMER_SLOW))) {
- /* If the slow-timer wasn't activated until now */
- isdn_timer_cnt1 = 0;
- isdn_timer_cnt2 = 0;
- }
- old_tflags = dev->tflags;
- if (onoff)
- dev->tflags |= tf;
- else
- dev->tflags &= ~tf;
- if (dev->tflags && !old_tflags)
- mod_timer(&dev->timer, jiffies + ISDN_TIMER_RES);
- spin_unlock_irqrestore(&dev->timerlock, flags);
-}
-
-/*
- * Receive a packet from B-Channel. (Called from low-level-module)
- */
-static void
-isdn_receive_skb_callback(int di, int channel, struct sk_buff *skb)
-{
- int i;
-
- if ((i = isdn_dc2minor(di, channel)) == -1) {
- dev_kfree_skb(skb);
- return;
- }
- /* Update statistics */
- dev->ibytes[i] += skb->len;
-
- /* First, try to deliver data to network-device */
- if (isdn_net_rcv_skb(i, skb))
- return;
-
- /* V.110 handling
- * makes sense for async streams only, so it is
- * called after possible net-device delivery.
- */
- if (dev->v110[i]) {
- atomic_inc(&dev->v110use[i]);
- skb = isdn_v110_decode(dev->v110[i], skb);
- atomic_dec(&dev->v110use[i]);
- if (!skb)
- return;
- }
-
- /* No network-device found, deliver to tty or raw-channel */
- if (skb->len) {
- if (isdn_tty_rcv_skb(i, di, channel, skb))
- return;
- wake_up_interruptible(&dev->drv[di]->rcv_waitq[channel]);
- } else
- dev_kfree_skb(skb);
-}
-
-/*
- * Intercept command from Linklevel to Lowlevel.
- * If layer 2 protocol is V.110 and this is not supported by current
- * lowlevel-driver, use driver's transparent mode and handle V.110 in
- * linklevel instead.
- */
-int
-isdn_command(isdn_ctrl *cmd)
-{
- if (cmd->driver == -1) {
- printk(KERN_WARNING "isdn_command command(%x) driver -1\n", cmd->command);
- return (1);
- }
- if (!dev->drv[cmd->driver]) {
- printk(KERN_WARNING "isdn_command command(%x) dev->drv[%d] NULL\n",
- cmd->command, cmd->driver);
- return (1);
- }
- if (!dev->drv[cmd->driver]->interface) {
- printk(KERN_WARNING "isdn_command command(%x) dev->drv[%d]->interface NULL\n",
- cmd->command, cmd->driver);
- return (1);
- }
- if (cmd->command == ISDN_CMD_SETL2) {
- int idx = isdn_dc2minor(cmd->driver, cmd->arg & 255);
- unsigned long l2prot = (cmd->arg >> 8) & 255;
- unsigned long features = (dev->drv[cmd->driver]->interface->features
- >> ISDN_FEATURE_L2_SHIFT) &
- ISDN_FEATURE_L2_MASK;
- unsigned long l2_feature = (1 << l2prot);
-
- switch (l2prot) {
- case ISDN_PROTO_L2_V11096:
- case ISDN_PROTO_L2_V11019:
- case ISDN_PROTO_L2_V11038:
- /* If V.110 requested, but not supported by
- * HL-driver, set emulator-flag and change
- * Layer-2 to transparent
- */
- if (!(features & l2_feature)) {
- dev->v110emu[idx] = l2prot;
- cmd->arg = (cmd->arg & 255) |
- (ISDN_PROTO_L2_TRANS << 8);
- } else
- dev->v110emu[idx] = 0;
- }
- }
- return dev->drv[cmd->driver]->interface->command(cmd);
-}
-
-void
-isdn_all_eaz(int di, int ch)
-{
- isdn_ctrl cmd;
-
- if (di < 0)
- return;
- cmd.driver = di;
- cmd.arg = ch;
- cmd.command = ISDN_CMD_SETEAZ;
- cmd.parm.num[0] = '\0';
- isdn_command(&cmd);
-}
-
-/*
- * Begin of a CAPI like LL<->HL interface, currently used only for
- * supplementary service (CAPI 2.0 part III)
- */
-#include <linux/isdn/capicmd.h>
-
-static int
-isdn_capi_rec_hl_msg(capi_msg *cm)
-{
- switch (cm->Command) {
- case CAPI_FACILITY:
- /* at the moment only handled in tty */
- return (isdn_tty_capi_facility(cm));
- default:
- return (-1);
- }
-}
-
-static int
-isdn_status_callback(isdn_ctrl *c)
-{
- int di;
- u_long flags;
- int i;
- int r;
- int retval = 0;
- isdn_ctrl cmd;
- isdn_net_dev *p;
-
- di = c->driver;
- i = isdn_dc2minor(di, c->arg);
- switch (c->command) {
- case ISDN_STAT_BSENT:
- if (i < 0)
- return -1;
- if (dev->global_flags & ISDN_GLOBAL_STOPPED)
- return 0;
- if (isdn_net_stat_callback(i, c))
- return 0;
- if (isdn_v110_stat_callback(i, c))
- return 0;
- if (isdn_tty_stat_callback(i, c))
- return 0;
- wake_up_interruptible(&dev->drv[di]->snd_waitq[c->arg]);
- break;
- case ISDN_STAT_STAVAIL:
- dev->drv[di]->stavail += c->arg;
- wake_up_interruptible(&dev->drv[di]->st_waitq);
- break;
- case ISDN_STAT_RUN:
- dev->drv[di]->flags |= DRV_FLAG_RUNNING;
- for (i = 0; i < ISDN_MAX_CHANNELS; i++)
- if (dev->drvmap[i] == di)
- isdn_all_eaz(di, dev->chanmap[i]);
- set_global_features();
- break;
- case ISDN_STAT_STOP:
- dev->drv[di]->flags &= ~DRV_FLAG_RUNNING;
- break;
- case ISDN_STAT_ICALL:
- if (i < 0)
- return -1;
-#ifdef ISDN_DEBUG_STATCALLB
- printk(KERN_DEBUG "ICALL (net): %d %ld %s\n", di, c->arg, c->parm.num);
-#endif
- if (dev->global_flags & ISDN_GLOBAL_STOPPED) {
- cmd.driver = di;
- cmd.arg = c->arg;
- cmd.command = ISDN_CMD_HANGUP;
- isdn_command(&cmd);
- return 0;
- }
- /* Try to find a network-interface which will accept incoming call */
- r = ((c->command == ISDN_STAT_ICALLW) ? 0 : isdn_net_find_icall(di, c->arg, i, &c->parm.setup));
- switch (r) {
- case 0:
- /* No network-device replies.
- * Try ttyI's.
- * These return 0 on no match, 1 on match and
- * 3 if the number would match were the CID longer.
- */
- if (c->command == ISDN_STAT_ICALL)
- if ((retval = isdn_tty_find_icall(di, c->arg, &c->parm.setup))) return (retval);
-#ifdef CONFIG_ISDN_DIVERSION
- if (divert_if)
- if ((retval = divert_if->stat_callback(c)))
- return (retval); /* processed */
-#endif /* CONFIG_ISDN_DIVERSION */
- if ((!retval) && (dev->drv[di]->flags & DRV_FLAG_REJBUS)) {
- /* No tty responding */
- cmd.driver = di;
- cmd.arg = c->arg;
- cmd.command = ISDN_CMD_HANGUP;
- isdn_command(&cmd);
- retval = 2;
- }
- break;
- case 1:
- /* Schedule connection-setup */
- isdn_net_dial();
- cmd.driver = di;
- cmd.arg = c->arg;
- cmd.command = ISDN_CMD_ACCEPTD;
- for (p = dev->netdev; p; p = p->next)
- if (p->local->isdn_channel == cmd.arg)
- {
- strcpy(cmd.parm.setup.eazmsn, p->local->msn);
- isdn_command(&cmd);
- retval = 1;
- break;
- }
- break;
-
- case 2: /* For calling back, first reject incoming call ... */
- case 3: /* Interface found, but down, reject call actively */
- retval = 2;
- printk(KERN_INFO "isdn: Rejecting Call\n");
- cmd.driver = di;
- cmd.arg = c->arg;
- cmd.command = ISDN_CMD_HANGUP;
- isdn_command(&cmd);
- if (r == 3)
- break;
- /* Fall through */
- case 4:
- /* ... then start callback. */
- isdn_net_dial();
- break;
- case 5:
- /* Number would eventually match, if longer */
- retval = 3;
- break;
- }
-#ifdef ISDN_DEBUG_STATCALLB
- printk(KERN_DEBUG "ICALL: ret=%d\n", retval);
-#endif
- return retval;
- case ISDN_STAT_CINF:
- if (i < 0)
- return -1;
-#ifdef ISDN_DEBUG_STATCALLB
- printk(KERN_DEBUG "CINF: %ld %s\n", c->arg, c->parm.num);
-#endif
- if (dev->global_flags & ISDN_GLOBAL_STOPPED)
- return 0;
- if (strcmp(c->parm.num, "0"))
- isdn_net_stat_callback(i, c);
- isdn_tty_stat_callback(i, c);
- break;
- case ISDN_STAT_CAUSE:
-#ifdef ISDN_DEBUG_STATCALLB
- printk(KERN_DEBUG "CAUSE: %ld %s\n", c->arg, c->parm.num);
-#endif
- printk(KERN_INFO "isdn: %s,ch%ld cause: %s\n",
- dev->drvid[di], c->arg, c->parm.num);
- isdn_tty_stat_callback(i, c);
-#ifdef CONFIG_ISDN_DIVERSION
- if (divert_if)
- divert_if->stat_callback(c);
-#endif /* CONFIG_ISDN_DIVERSION */
- break;
- case ISDN_STAT_DISPLAY:
-#ifdef ISDN_DEBUG_STATCALLB
- printk(KERN_DEBUG "DISPLAY: %ld %s\n", c->arg, c->parm.display);
-#endif
- isdn_tty_stat_callback(i, c);
-#ifdef CONFIG_ISDN_DIVERSION
- if (divert_if)
- divert_if->stat_callback(c);
-#endif /* CONFIG_ISDN_DIVERSION */
- break;
- case ISDN_STAT_DCONN:
- if (i < 0)
- return -1;
-#ifdef ISDN_DEBUG_STATCALLB
- printk(KERN_DEBUG "DCONN: %ld\n", c->arg);
-#endif
- if (dev->global_flags & ISDN_GLOBAL_STOPPED)
- return 0;
- /* Find any net-device, waiting for D-channel setup */
- if (isdn_net_stat_callback(i, c))
- break;
- isdn_v110_stat_callback(i, c);
- /* Find any ttyI, waiting for D-channel setup */
- if (isdn_tty_stat_callback(i, c)) {
- cmd.driver = di;
- cmd.arg = c->arg;
- cmd.command = ISDN_CMD_ACCEPTB;
- isdn_command(&cmd);
- break;
- }
- break;
- case ISDN_STAT_DHUP:
- if (i < 0)
- return -1;
-#ifdef ISDN_DEBUG_STATCALLB
- printk(KERN_DEBUG "DHUP: %ld\n", c->arg);
-#endif
- if (dev->global_flags & ISDN_GLOBAL_STOPPED)
- return 0;
- dev->drv[di]->online &= ~(1 << (c->arg));
- isdn_info_update();
- /* Signal hangup to network-devices */
- if (isdn_net_stat_callback(i, c))
- break;
- isdn_v110_stat_callback(i, c);
- if (isdn_tty_stat_callback(i, c))
- break;
-#ifdef CONFIG_ISDN_DIVERSION
- if (divert_if)
- divert_if->stat_callback(c);
-#endif /* CONFIG_ISDN_DIVERSION */
- break;
- case ISDN_STAT_BCONN:
- if (i < 0)
- return -1;
-#ifdef ISDN_DEBUG_STATCALLB
- printk(KERN_DEBUG "BCONN: %ld\n", c->arg);
-#endif
- /* Signal B-channel-connect to network-devices */
- if (dev->global_flags & ISDN_GLOBAL_STOPPED)
- return 0;
- dev->drv[di]->online |= (1 << (c->arg));
- isdn_info_update();
- if (isdn_net_stat_callback(i, c))
- break;
- isdn_v110_stat_callback(i, c);
- if (isdn_tty_stat_callback(i, c))
- break;
- break;
- case ISDN_STAT_BHUP:
- if (i < 0)
- return -1;
-#ifdef ISDN_DEBUG_STATCALLB
- printk(KERN_DEBUG "BHUP: %ld\n", c->arg);
-#endif
- if (dev->global_flags & ISDN_GLOBAL_STOPPED)
- return 0;
- dev->drv[di]->online &= ~(1 << (c->arg));
- isdn_info_update();
-#ifdef CONFIG_ISDN_X25
- /* Signal hangup to network-devices */
- if (isdn_net_stat_callback(i, c))
- break;
-#endif
- isdn_v110_stat_callback(i, c);
- if (isdn_tty_stat_callback(i, c))
- break;
- break;
- case ISDN_STAT_NODCH:
- if (i < 0)
- return -1;
-#ifdef ISDN_DEBUG_STATCALLB
- printk(KERN_DEBUG "NODCH: %ld\n", c->arg);
-#endif
- if (dev->global_flags & ISDN_GLOBAL_STOPPED)
- return 0;
- if (isdn_net_stat_callback(i, c))
- break;
- if (isdn_tty_stat_callback(i, c))
- break;
- break;
- case ISDN_STAT_ADDCH:
- spin_lock_irqsave(&dev->lock, flags);
- if (isdn_add_channels(dev->drv[di], di, c->arg, 1)) {
- spin_unlock_irqrestore(&dev->lock, flags);
- return -1;
- }
- spin_unlock_irqrestore(&dev->lock, flags);
- isdn_info_update();
- break;
- case ISDN_STAT_DISCH:
- spin_lock_irqsave(&dev->lock, flags);
- for (i = 0; i < ISDN_MAX_CHANNELS; i++)
- if ((dev->drvmap[i] == di) &&
- (dev->chanmap[i] == c->arg)) {
- if (c->parm.num[0])
- dev->usage[i] &= ~ISDN_USAGE_DISABLED;
- else
- if (USG_NONE(dev->usage[i])) {
- dev->usage[i] |= ISDN_USAGE_DISABLED;
- }
- else
- retval = -1;
- break;
- }
- spin_unlock_irqrestore(&dev->lock, flags);
- isdn_info_update();
- break;
- case ISDN_STAT_UNLOAD:
- while (dev->drv[di]->locks > 0) {
- isdn_unlock_driver(dev->drv[di]);
- }
- spin_lock_irqsave(&dev->lock, flags);
- isdn_tty_stat_callback(i, c);
- for (i = 0; i < ISDN_MAX_CHANNELS; i++)
- if (dev->drvmap[i] == di) {
- dev->drvmap[i] = -1;
- dev->chanmap[i] = -1;
- dev->usage[i] &= ~ISDN_USAGE_DISABLED;
- }
- dev->drivers--;
- dev->channels -= dev->drv[di]->channels;
- kfree(dev->drv[di]->rcverr);
- kfree(dev->drv[di]->rcvcount);
- for (i = 0; i < dev->drv[di]->channels; i++)
- skb_queue_purge(&dev->drv[di]->rpqueue[i]);
- kfree(dev->drv[di]->rpqueue);
- kfree(dev->drv[di]->rcv_waitq);
- kfree(dev->drv[di]);
- dev->drv[di] = NULL;
- dev->drvid[di][0] = '\0';
- isdn_info_update();
- set_global_features();
- spin_unlock_irqrestore(&dev->lock, flags);
- return 0;
- case ISDN_STAT_L1ERR:
- break;
- case CAPI_PUT_MESSAGE:
- return (isdn_capi_rec_hl_msg(&c->parm.cmsg));
-#ifdef CONFIG_ISDN_TTY_FAX
- case ISDN_STAT_FAXIND:
- isdn_tty_stat_callback(i, c);
- break;
-#endif
-#ifdef CONFIG_ISDN_AUDIO
- case ISDN_STAT_AUDIO:
- isdn_tty_stat_callback(i, c);
- break;
-#endif
-#ifdef CONFIG_ISDN_DIVERSION
- case ISDN_STAT_PROT:
- case ISDN_STAT_REDIR:
- if (divert_if)
- return (divert_if->stat_callback(c));
-#endif /* CONFIG_ISDN_DIVERSION */
- /* fall through */
- default:
- return -1;
- }
- return 0;
-}
-
-/*
- * Get integer from char-pointer, set pointer to end of number
- */
-int
-isdn_getnum(char **p)
-{
- int v = -1;
-
- while (*p[0] >= '0' && *p[0] <= '9')
- v = ((v < 0) ? 0 : (v * 10)) + (int) ((*p[0]++) - '0');
- return v;
-}
-
-#define DLE 0x10
-
-/*
- * isdn_readbchan() tries to get data from the read-queue.
- * It MUST be called with interrupts off.
- *
- * Be aware that this is not an atomic operation when sleep != 0, even though
- * interrupts are turned off! Well, like that we are currently only called
- * on behalf of a read system call on raw device files (which are documented
- * to be dangerous and for debugging purpose only). The inode semaphore
- * takes care that this is not called for the same minor device number while
- * we are sleeping, but access is not serialized against simultaneous read()
- * from the corresponding ttyI device. Can other ugly events, like changes
- * of the mapping (di,ch)<->minor, happen during the sleep? --he
- */
-int
-isdn_readbchan(int di, int channel, u_char *buf, u_char *fp, int len, wait_queue_head_t *sleep)
-{
- int count;
- int count_pull;
- int count_put;
- int dflag;
- struct sk_buff *skb;
- u_char *cp;
-
- if (!dev->drv[di])
- return 0;
- if (skb_queue_empty(&dev->drv[di]->rpqueue[channel])) {
- if (sleep)
- wait_event_interruptible(*sleep,
- !skb_queue_empty(&dev->drv[di]->rpqueue[channel]));
- else
- return 0;
- }
- if (len > dev->drv[di]->rcvcount[channel])
- len = dev->drv[di]->rcvcount[channel];
- cp = buf;
- count = 0;
- while (len) {
- if (!(skb = skb_peek(&dev->drv[di]->rpqueue[channel])))
- break;
-#ifdef CONFIG_ISDN_AUDIO
- if (ISDN_AUDIO_SKB_LOCK(skb))
- break;
- ISDN_AUDIO_SKB_LOCK(skb) = 1;
- if ((ISDN_AUDIO_SKB_DLECOUNT(skb)) || (dev->drv[di]->DLEflag & (1 << channel))) {
- char *p = skb->data;
- unsigned long DLEmask = (1 << channel);
-
- dflag = 0;
- count_pull = count_put = 0;
- while ((count_pull < skb->len) && (len > 0)) {
- len--;
- if (dev->drv[di]->DLEflag & DLEmask) {
- *cp++ = DLE;
- dev->drv[di]->DLEflag &= ~DLEmask;
- } else {
- *cp++ = *p;
- if (*p == DLE) {
- dev->drv[di]->DLEflag |= DLEmask;
- (ISDN_AUDIO_SKB_DLECOUNT(skb))--;
- }
- p++;
- count_pull++;
- }
- count_put++;
- }
- if (count_pull >= skb->len)
- dflag = 1;
- } else {
-#endif
- /* No DLE's in buff, so simply copy it */
- dflag = 1;
- if ((count_pull = skb->len) > len) {
- count_pull = len;
- dflag = 0;
- }
- count_put = count_pull;
- skb_copy_from_linear_data(skb, cp, count_put);
- cp += count_put;
- len -= count_put;
-#ifdef CONFIG_ISDN_AUDIO
- }
-#endif
- count += count_put;
- if (fp) {
- memset(fp, 0, count_put);
- fp += count_put;
- }
- if (dflag) {
- /* We got all the data in this buff.
- * Now we can dequeue it.
- */
- if (fp)
- *(fp - 1) = 0xff;
-#ifdef CONFIG_ISDN_AUDIO
- ISDN_AUDIO_SKB_LOCK(skb) = 0;
-#endif
- skb = skb_dequeue(&dev->drv[di]->rpqueue[channel]);
- dev_kfree_skb(skb);
- } else {
- /* Not yet emptied this buff, so it
- * must stay in the queue, for further calls
- * but we pull off the data we got until now.
- */
- skb_pull(skb, count_pull);
-#ifdef CONFIG_ISDN_AUDIO
- ISDN_AUDIO_SKB_LOCK(skb) = 0;
-#endif
- }
- dev->drv[di]->rcvcount[channel] -= count_put;
- }
- return count;
-}
-
-/*
- * isdn_readbchan_tty() tries to get data from the read-queue.
- * It MUST be called with interrupts off.
- *
- * Be aware that this is not an atomic operation when sleep != 0, even though
- * interrupts are turned off! As things stand, we are currently only called
- * on behalf of a read system call on raw device files (which are documented
- * to be dangerous and for debugging purpose only). The inode semaphore
- * takes care that this is not called for the same minor device number while
- * we are sleeping, but access is not serialized against simultaneous read()
- * from the corresponding ttyI device. Can other ugly events, like changes
- * of the mapping (di,ch)<->minor, happen during the sleep? --he
- */
-int
-isdn_readbchan_tty(int di, int channel, struct tty_port *port, int cisco_hack)
-{
- int count;
- int count_pull;
- int count_put;
- int dflag;
- struct sk_buff *skb;
- char last = 0;
- int len;
-
- if (!dev->drv[di])
- return 0;
- if (skb_queue_empty(&dev->drv[di]->rpqueue[channel]))
- return 0;
-
- len = tty_buffer_request_room(port, dev->drv[di]->rcvcount[channel]);
- if (len == 0)
- return len;
-
- count = 0;
- while (len) {
- if (!(skb = skb_peek(&dev->drv[di]->rpqueue[channel])))
- break;
-#ifdef CONFIG_ISDN_AUDIO
- if (ISDN_AUDIO_SKB_LOCK(skb))
- break;
- ISDN_AUDIO_SKB_LOCK(skb) = 1;
- if ((ISDN_AUDIO_SKB_DLECOUNT(skb)) || (dev->drv[di]->DLEflag & (1 << channel))) {
- char *p = skb->data;
- unsigned long DLEmask = (1 << channel);
-
- dflag = 0;
- count_pull = count_put = 0;
- while ((count_pull < skb->len) && (len > 0)) {
- /* push every character but the last to the tty buffer directly */
- if (count_put)
- tty_insert_flip_char(port, last, TTY_NORMAL);
- len--;
- if (dev->drv[di]->DLEflag & DLEmask) {
- last = DLE;
- dev->drv[di]->DLEflag &= ~DLEmask;
- } else {
- last = *p;
- if (last == DLE) {
- dev->drv[di]->DLEflag |= DLEmask;
- (ISDN_AUDIO_SKB_DLECOUNT(skb))--;
- }
- p++;
- count_pull++;
- }
- count_put++;
- }
- if (count_pull >= skb->len)
- dflag = 1;
- } else {
-#endif
- /* No DLE's in buff, so simply copy it */
- dflag = 1;
- if ((count_pull = skb->len) > len) {
- count_pull = len;
- dflag = 0;
- }
- count_put = count_pull;
- if (count_put > 1)
- tty_insert_flip_string(port, skb->data, count_put - 1);
- last = skb->data[count_put - 1];
- len -= count_put;
-#ifdef CONFIG_ISDN_AUDIO
- }
-#endif
- count += count_put;
- if (dflag) {
- /* We got all the data in this buff.
- * Now we can dequeue it.
- */
- if (cisco_hack)
- tty_insert_flip_char(port, last, 0xFF);
- else
- tty_insert_flip_char(port, last, TTY_NORMAL);
-#ifdef CONFIG_ISDN_AUDIO
- ISDN_AUDIO_SKB_LOCK(skb) = 0;
-#endif
- skb = skb_dequeue(&dev->drv[di]->rpqueue[channel]);
- dev_kfree_skb(skb);
- } else {
- tty_insert_flip_char(port, last, TTY_NORMAL);
- /* Not yet emptied this buff, so it
- * must stay in the queue, for further calls
- * but we pull off the data we got until now.
- */
- skb_pull(skb, count_pull);
-#ifdef CONFIG_ISDN_AUDIO
- ISDN_AUDIO_SKB_LOCK(skb) = 0;
-#endif
- }
- dev->drv[di]->rcvcount[channel] -= count_put;
- }
- return count;
-}
-
-
-static inline int
-isdn_minor2drv(int minor)
-{
- return (dev->drvmap[minor]);
-}
-
-static inline int
-isdn_minor2chan(int minor)
-{
- return (dev->chanmap[minor]);
-}
-
-static char *
-isdn_statstr(void)
-{
- static char istatbuf[2048];
- char *p;
- int i;
-
- sprintf(istatbuf, "idmap:\t");
- p = istatbuf + strlen(istatbuf);
- for (i = 0; i < ISDN_MAX_CHANNELS; i++) {
- sprintf(p, "%s ", (dev->drvmap[i] < 0) ? "-" : dev->drvid[dev->drvmap[i]]);
- p = istatbuf + strlen(istatbuf);
- }
- sprintf(p, "\nchmap:\t");
- p = istatbuf + strlen(istatbuf);
- for (i = 0; i < ISDN_MAX_CHANNELS; i++) {
- sprintf(p, "%d ", dev->chanmap[i]);
- p = istatbuf + strlen(istatbuf);
- }
- sprintf(p, "\ndrmap:\t");
- p = istatbuf + strlen(istatbuf);
- for (i = 0; i < ISDN_MAX_CHANNELS; i++) {
- sprintf(p, "%d ", dev->drvmap[i]);
- p = istatbuf + strlen(istatbuf);
- }
- sprintf(p, "\nusage:\t");
- p = istatbuf + strlen(istatbuf);
- for (i = 0; i < ISDN_MAX_CHANNELS; i++) {
- sprintf(p, "%d ", dev->usage[i]);
- p = istatbuf + strlen(istatbuf);
- }
- sprintf(p, "\nflags:\t");
- p = istatbuf + strlen(istatbuf);
- for (i = 0; i < ISDN_MAX_DRIVERS; i++) {
- if (dev->drv[i]) {
- sprintf(p, "%ld ", dev->drv[i]->online);
- p = istatbuf + strlen(istatbuf);
- } else {
- sprintf(p, "? ");
- p = istatbuf + strlen(istatbuf);
- }
- }
- sprintf(p, "\nphone:\t");
- p = istatbuf + strlen(istatbuf);
- for (i = 0; i < ISDN_MAX_CHANNELS; i++) {
- sprintf(p, "%s ", dev->num[i]);
- p = istatbuf + strlen(istatbuf);
- }
- sprintf(p, "\n");
- return istatbuf;
-}
-
-/* Module interface-code */
-
-void
-isdn_info_update(void)
-{
- infostruct *p = dev->infochain;
-
- while (p) {
- *(p->private) = 1;
- p = (infostruct *) p->next;
- }
- wake_up_interruptible(&(dev->info_waitq));
-}
-
-static ssize_t
-isdn_read(struct file *file, char __user *buf, size_t count, loff_t *off)
-{
- uint minor = iminor(file_inode(file));
- int len = 0;
- int drvidx;
- int chidx;
- int retval;
- char *p;
-
- mutex_lock(&isdn_mutex);
- if (minor == ISDN_MINOR_STATUS) {
- if (!file->private_data) {
- if (file->f_flags & O_NONBLOCK) {
- retval = -EAGAIN;
- goto out;
- }
- wait_event_interruptible(dev->info_waitq,
- file->private_data);
- }
- p = isdn_statstr();
- file->private_data = NULL;
- if ((len = strlen(p)) <= count) {
- if (copy_to_user(buf, p, len)) {
- retval = -EFAULT;
- goto out;
- }
- *off += len;
- retval = len;
- goto out;
- }
- retval = 0;
- goto out;
- }
- if (!dev->drivers) {
- retval = -ENODEV;
- goto out;
- }
- if (minor <= ISDN_MINOR_BMAX) {
- printk(KERN_WARNING "isdn_read minor %d obsolete!\n", minor);
- drvidx = isdn_minor2drv(minor);
- if (drvidx < 0) {
- retval = -ENODEV;
- goto out;
- }
- if (!(dev->drv[drvidx]->flags & DRV_FLAG_RUNNING)) {
- retval = -ENODEV;
- goto out;
- }
- chidx = isdn_minor2chan(minor);
- if (!(p = kmalloc(count, GFP_KERNEL))) {
- retval = -ENOMEM;
- goto out;
- }
- len = isdn_readbchan(drvidx, chidx, p, NULL, count,
- &dev->drv[drvidx]->rcv_waitq[chidx]);
- *off += len;
- if (copy_to_user(buf, p, len))
- len = -EFAULT;
- kfree(p);
- retval = len;
- goto out;
- }
- if (minor <= ISDN_MINOR_CTRLMAX) {
- drvidx = isdn_minor2drv(minor - ISDN_MINOR_CTRL);
- if (drvidx < 0) {
- retval = -ENODEV;
- goto out;
- }
- if (!dev->drv[drvidx]->stavail) {
- if (file->f_flags & O_NONBLOCK) {
- retval = -EAGAIN;
- goto out;
- }
- wait_event_interruptible(dev->drv[drvidx]->st_waitq,
- dev->drv[drvidx]->stavail);
- }
- if (dev->drv[drvidx]->interface->readstat) {
- if (count > dev->drv[drvidx]->stavail)
- count = dev->drv[drvidx]->stavail;
- len = dev->drv[drvidx]->interface->readstat(buf, count,
- drvidx, isdn_minor2chan(minor - ISDN_MINOR_CTRL));
- if (len < 0) {
- retval = len;
- goto out;
- }
- } else {
- len = 0;
- }
- if (len)
- dev->drv[drvidx]->stavail -= len;
- else
- dev->drv[drvidx]->stavail = 0;
- *off += len;
- retval = len;
- goto out;
- }
-#ifdef CONFIG_ISDN_PPP
- if (minor <= ISDN_MINOR_PPPMAX) {
- retval = isdn_ppp_read(minor - ISDN_MINOR_PPP, file, buf, count);
- goto out;
- }
-#endif
- retval = -ENODEV;
-out:
- mutex_unlock(&isdn_mutex);
- return retval;
-}
-
-static ssize_t
-isdn_write(struct file *file, const char __user *buf, size_t count, loff_t *off)
-{
- uint minor = iminor(file_inode(file));
- int drvidx;
- int chidx;
- int retval;
-
- if (minor == ISDN_MINOR_STATUS)
- return -EPERM;
- if (!dev->drivers)
- return -ENODEV;
-
- mutex_lock(&isdn_mutex);
- if (minor <= ISDN_MINOR_BMAX) {
- printk(KERN_WARNING "isdn_write minor %d obsolete!\n", minor);
- drvidx = isdn_minor2drv(minor);
- if (drvidx < 0) {
- retval = -ENODEV;
- goto out;
- }
- if (!(dev->drv[drvidx]->flags & DRV_FLAG_RUNNING)) {
- retval = -ENODEV;
- goto out;
- }
- chidx = isdn_minor2chan(minor);
- wait_event_interruptible(dev->drv[drvidx]->snd_waitq[chidx],
- (retval = isdn_writebuf_stub(drvidx, chidx, buf, count)));
- goto out;
- }
- if (minor <= ISDN_MINOR_CTRLMAX) {
- drvidx = isdn_minor2drv(minor - ISDN_MINOR_CTRL);
- if (drvidx < 0) {
- retval = -ENODEV;
- goto out;
- }
- /*
- * We want to use the isdnctrl device to load the firmware
- *
- if (!(dev->drv[drvidx]->flags & DRV_FLAG_RUNNING))
- return -ENODEV;
- */
- if (dev->drv[drvidx]->interface->writecmd)
- retval = dev->drv[drvidx]->interface->
- writecmd(buf, count, drvidx,
- isdn_minor2chan(minor - ISDN_MINOR_CTRL));
- else
- retval = count;
- goto out;
- }
-#ifdef CONFIG_ISDN_PPP
- if (minor <= ISDN_MINOR_PPPMAX) {
- retval = isdn_ppp_write(minor - ISDN_MINOR_PPP, file, buf, count);
- goto out;
- }
-#endif
- retval = -ENODEV;
-out:
- mutex_unlock(&isdn_mutex);
- return retval;
-}
-
-static __poll_t
-isdn_poll(struct file *file, poll_table *wait)
-{
- __poll_t mask = 0;
- unsigned int minor = iminor(file_inode(file));
- int drvidx = isdn_minor2drv(minor - ISDN_MINOR_CTRL);
-
- mutex_lock(&isdn_mutex);
- if (minor == ISDN_MINOR_STATUS) {
- poll_wait(file, &(dev->info_waitq), wait);
- /* mask = EPOLLOUT | EPOLLWRNORM; */
- if (file->private_data) {
- mask |= EPOLLIN | EPOLLRDNORM;
- }
- goto out;
- }
- if (minor >= ISDN_MINOR_CTRL && minor <= ISDN_MINOR_CTRLMAX) {
- if (drvidx < 0) {
- /* driver deregistered while file open */
- mask = EPOLLHUP;
- goto out;
- }
- poll_wait(file, &(dev->drv[drvidx]->st_waitq), wait);
- mask = EPOLLOUT | EPOLLWRNORM;
- if (dev->drv[drvidx]->stavail) {
- mask |= EPOLLIN | EPOLLRDNORM;
- }
- goto out;
- }
-#ifdef CONFIG_ISDN_PPP
- if (minor <= ISDN_MINOR_PPPMAX) {
- mask = isdn_ppp_poll(file, wait);
- goto out;
- }
-#endif
- mask = EPOLLERR;
-out:
- mutex_unlock(&isdn_mutex);
- return mask;
-}
-
-
-static int
-isdn_ioctl(struct file *file, uint cmd, ulong arg)
-{
- uint minor = iminor(file_inode(file));
- isdn_ctrl c;
- int drvidx;
- int ret;
- int i;
- char __user *p;
- char *s;
- union iocpar {
- char name[10];
- char bname[22];
- isdn_ioctl_struct iocts;
- isdn_net_ioctl_phone phone;
- isdn_net_ioctl_cfg cfg;
- } iocpar;
- void __user *argp = (void __user *)arg;
-
-#define name iocpar.name
-#define bname iocpar.bname
-#define iocts iocpar.iocts
-#define phone iocpar.phone
-#define cfg iocpar.cfg
-
- if (minor == ISDN_MINOR_STATUS) {
- switch (cmd) {
- case IIOCGETDVR:
- return (TTY_DV +
- (NET_DV << 8) +
- (INF_DV << 16));
- case IIOCGETCPS:
- if (arg) {
- ulong __user *p = argp;
- int i;
- for (i = 0; i < ISDN_MAX_CHANNELS; i++) {
- put_user(dev->ibytes[i], p++);
- put_user(dev->obytes[i], p++);
- }
- return 0;
- } else
- return -EINVAL;
- break;
- case IIOCNETGPN:
- /* Get peer phone number of a connected
- * isdn network interface */
- if (arg) {
- if (copy_from_user(&phone, argp, sizeof(phone)))
- return -EFAULT;
- return isdn_net_getpeer(&phone, argp);
- } else
- return -EINVAL;
- default:
- return -EINVAL;
- }
- }
- if (!dev->drivers)
- return -ENODEV;
- if (minor <= ISDN_MINOR_BMAX) {
- drvidx = isdn_minor2drv(minor);
- if (drvidx < 0)
- return -ENODEV;
- if (!(dev->drv[drvidx]->flags & DRV_FLAG_RUNNING))
- return -ENODEV;
- return 0;
- }
- if (minor <= ISDN_MINOR_CTRLMAX) {
-/*
- * isdn net devices manage lots of configuration variables as linked lists.
- * Those lists must only be manipulated from user space. Some of the ioctl
- * service routines access user space and are not atomic. Therefore, ioctls
- * manipulating the lists and ioctls sleeping while accessing the lists
- * are serialized by means of a mutex (dev->mtx).
- */
- switch (cmd) {
- case IIOCNETDWRSET:
- printk(KERN_INFO "INFO: ISDN_DW_ABC_EXTENSION not enabled\n");
- return (-EINVAL);
- case IIOCNETLCR:
- printk(KERN_INFO "INFO: ISDN_ABC_LCR_SUPPORT not enabled\n");
- return -ENODEV;
- case IIOCNETAIF:
- /* Add a network-interface */
- if (arg) {
- if (copy_from_user(name, argp, sizeof(name)))
- return -EFAULT;
- s = name;
- } else {
- s = NULL;
- }
- ret = mutex_lock_interruptible(&dev->mtx);
- if (ret) return ret;
- if ((s = isdn_net_new(s, NULL))) {
- if (copy_to_user(argp, s, strlen(s) + 1)) {
- ret = -EFAULT;
- } else {
- ret = 0;
- }
- } else
- ret = -ENODEV;
- mutex_unlock(&dev->mtx);
- return ret;
- case IIOCNETASL:
- /* Add a slave to a network-interface */
- if (arg) {
- if (copy_from_user(bname, argp, sizeof(bname) - 1))
- return -EFAULT;
- bname[sizeof(bname)-1] = 0;
- } else
- return -EINVAL;
- ret = mutex_lock_interruptible(&dev->mtx);
- if (ret) return ret;
- if ((s = isdn_net_newslave(bname))) {
- if (copy_to_user(argp, s, strlen(s) + 1)) {
- ret = -EFAULT;
- } else {
- ret = 0;
- }
- } else
- ret = -ENODEV;
- mutex_unlock(&dev->mtx);
- return ret;
- case IIOCNETDIF:
- /* Delete a network-interface */
- if (arg) {
- if (copy_from_user(name, argp, sizeof(name)))
- return -EFAULT;
- ret = mutex_lock_interruptible(&dev->mtx);
- if (ret) return ret;
- ret = isdn_net_rm(name);
- mutex_unlock(&dev->mtx);
- return ret;
- } else
- return -EINVAL;
- case IIOCNETSCF:
- /* Set configurable parameters of a network-interface */
- if (arg) {
- if (copy_from_user(&cfg, argp, sizeof(cfg)))
- return -EFAULT;
- return isdn_net_setcfg(&cfg);
- } else
- return -EINVAL;
- case IIOCNETGCF:
- /* Get configurable parameters of a network-interface */
- if (arg) {
- if (copy_from_user(&cfg, argp, sizeof(cfg)))
- return -EFAULT;
- if (!(ret = isdn_net_getcfg(&cfg))) {
- if (copy_to_user(argp, &cfg, sizeof(cfg)))
- return -EFAULT;
- }
- return ret;
- } else
- return -EINVAL;
- case IIOCNETANM:
- /* Add a phone-number to a network-interface */
- if (arg) {
- if (copy_from_user(&phone, argp, sizeof(phone)))
- return -EFAULT;
- ret = mutex_lock_interruptible(&dev->mtx);
- if (ret) return ret;
- ret = isdn_net_addphone(&phone);
- mutex_unlock(&dev->mtx);
- return ret;
- } else
- return -EINVAL;
- case IIOCNETGNM:
- /* Get list of phone-numbers of a network-interface */
- if (arg) {
- if (copy_from_user(&phone, argp, sizeof(phone)))
- return -EFAULT;
- ret = mutex_lock_interruptible(&dev->mtx);
- if (ret) return ret;
- ret = isdn_net_getphones(&phone, argp);
- mutex_unlock(&dev->mtx);
- return ret;
- } else
- return -EINVAL;
- case IIOCNETDNM:
- /* Delete a phone-number of a network-interface */
- if (arg) {
- if (copy_from_user(&phone, argp, sizeof(phone)))
- return -EFAULT;
- ret = mutex_lock_interruptible(&dev->mtx);
- if (ret) return ret;
- ret = isdn_net_delphone(&phone);
- mutex_unlock(&dev->mtx);
- return ret;
- } else
- return -EINVAL;
- case IIOCNETDIL:
- /* Force dialing of a network-interface */
- if (arg) {
- if (copy_from_user(name, argp, sizeof(name)))
- return -EFAULT;
- return isdn_net_force_dial(name);
- } else
- return -EINVAL;
-#ifdef CONFIG_ISDN_PPP
- case IIOCNETALN:
- if (!arg)
- return -EINVAL;
- if (copy_from_user(name, argp, sizeof(name)))
- return -EFAULT;
- return isdn_ppp_dial_slave(name);
- case IIOCNETDLN:
- if (!arg)
- return -EINVAL;
- if (copy_from_user(name, argp, sizeof(name)))
- return -EFAULT;
- return isdn_ppp_hangup_slave(name);
-#endif
- case IIOCNETHUP:
- /* Force hangup of a network-interface */
- if (!arg)
- return -EINVAL;
- if (copy_from_user(name, argp, sizeof(name)))
- return -EFAULT;
- return isdn_net_force_hangup(name);
- break;
- case IIOCSETVER:
- dev->net_verbose = arg;
- printk(KERN_INFO "isdn: Verbose-Level is %d\n", dev->net_verbose);
- return 0;
- case IIOCSETGST:
- if (arg)
- dev->global_flags |= ISDN_GLOBAL_STOPPED;
- else
- dev->global_flags &= ~ISDN_GLOBAL_STOPPED;
- printk(KERN_INFO "isdn: Global Mode %s\n",
- (dev->global_flags & ISDN_GLOBAL_STOPPED) ? "stopped" : "running");
- return 0;
- case IIOCSETBRJ:
- drvidx = -1;
- if (arg) {
- int i;
- char *p;
- if (copy_from_user(&iocts, argp,
- sizeof(isdn_ioctl_struct)))
- return -EFAULT;
- iocts.drvid[sizeof(iocts.drvid) - 1] = 0;
- if (strlen(iocts.drvid)) {
- if ((p = strchr(iocts.drvid, ',')))
- *p = 0;
- drvidx = -1;
- for (i = 0; i < ISDN_MAX_DRIVERS; i++)
- if (!(strcmp(dev->drvid[i], iocts.drvid))) {
- drvidx = i;
- break;
- }
- }
- }
- if (drvidx == -1)
- return -ENODEV;
- if (iocts.arg)
- dev->drv[drvidx]->flags |= DRV_FLAG_REJBUS;
- else
- dev->drv[drvidx]->flags &= ~DRV_FLAG_REJBUS;
- return 0;
- case IIOCSIGPRF:
- dev->profd = current;
- return 0;
- break;
- case IIOCGETPRF:
- /* Get all Modem-Profiles */
- if (arg) {
- char __user *p = argp;
- int i;
-
- for (i = 0; i < ISDN_MAX_CHANNELS; i++) {
- if (copy_to_user(p, dev->mdm.info[i].emu.profile,
- ISDN_MODEM_NUMREG))
- return -EFAULT;
- p += ISDN_MODEM_NUMREG;
- if (copy_to_user(p, dev->mdm.info[i].emu.pmsn, ISDN_MSNLEN))
- return -EFAULT;
- p += ISDN_MSNLEN;
- if (copy_to_user(p, dev->mdm.info[i].emu.plmsn, ISDN_LMSNLEN))
- return -EFAULT;
- p += ISDN_LMSNLEN;
- }
- return (ISDN_MODEM_NUMREG + ISDN_MSNLEN + ISDN_LMSNLEN) * ISDN_MAX_CHANNELS;
- } else
- return -EINVAL;
- break;
- case IIOCSETPRF:
- /* Set all Modem-Profiles */
- if (arg) {
- char __user *p = argp;
- int i;
-
- for (i = 0; i < ISDN_MAX_CHANNELS; i++) {
- if (copy_from_user(dev->mdm.info[i].emu.profile, p,
- ISDN_MODEM_NUMREG))
- return -EFAULT;
- p += ISDN_MODEM_NUMREG;
- if (copy_from_user(dev->mdm.info[i].emu.plmsn, p, ISDN_LMSNLEN))
- return -EFAULT;
- p += ISDN_LMSNLEN;
- if (copy_from_user(dev->mdm.info[i].emu.pmsn, p, ISDN_MSNLEN))
- return -EFAULT;
- p += ISDN_MSNLEN;
- }
- return 0;
- } else
- return -EINVAL;
- break;
- case IIOCSETMAP:
- case IIOCGETMAP:
- /* Set/Get MSN->EAZ-Mapping for a driver */
- if (arg) {
-
- if (copy_from_user(&iocts, argp,
- sizeof(isdn_ioctl_struct)))
- return -EFAULT;
- iocts.drvid[sizeof(iocts.drvid) - 1] = 0;
- if (strlen(iocts.drvid)) {
- drvidx = -1;
- for (i = 0; i < ISDN_MAX_DRIVERS; i++)
- if (!(strcmp(dev->drvid[i], iocts.drvid))) {
- drvidx = i;
- break;
- }
- } else
- drvidx = 0;
- if (drvidx == -1)
- return -ENODEV;
- if (cmd == IIOCSETMAP) {
- int loop = 1;
-
- p = (char __user *) iocts.arg;
- i = 0;
- while (loop) {
- int j = 0;
-
- while (1) {
- get_user(bname[j], p++);
- switch (bname[j]) {
- case '\0':
- loop = 0;
- /* Fall through */
- case ',':
- bname[j] = '\0';
- strcpy(dev->drv[drvidx]->msn2eaz[i], bname);
- j = ISDN_MSNLEN;
- break;
- default:
- j++;
- }
- if (j >= ISDN_MSNLEN)
- break;
- }
- if (++i > 9)
- break;
- }
- } else {
- p = (char __user *) iocts.arg;
- for (i = 0; i < 10; i++) {
- snprintf(bname, sizeof(bname), "%s%s",
- strlen(dev->drv[drvidx]->msn2eaz[i]) ?
- dev->drv[drvidx]->msn2eaz[i] : "_",
- (i < 9) ? "," : "\0");
- if (copy_to_user(p, bname, strlen(bname) + 1))
- return -EFAULT;
- p += strlen(bname);
- }
- }
- return 0;
- } else
- return -EINVAL;
- case IIOCDBGVAR:
- return -EINVAL;
- default:
- if ((cmd & IIOCDRVCTL) == IIOCDRVCTL)
- cmd = ((cmd >> _IOC_NRSHIFT) & _IOC_NRMASK) & ISDN_DRVIOCTL_MASK;
- else
- return -EINVAL;
- if (arg) {
- int i;
- char *p;
- if (copy_from_user(&iocts, argp, sizeof(isdn_ioctl_struct)))
- return -EFAULT;
- iocts.drvid[sizeof(iocts.drvid) - 1] = 0;
- if (strlen(iocts.drvid)) {
- if ((p = strchr(iocts.drvid, ',')))
- *p = 0;
- drvidx = -1;
- for (i = 0; i < ISDN_MAX_DRIVERS; i++)
- if (!(strcmp(dev->drvid[i], iocts.drvid))) {
- drvidx = i;
- break;
- }
- } else
- drvidx = 0;
- if (drvidx == -1)
- return -ENODEV;
- c.driver = drvidx;
- c.command = ISDN_CMD_IOCTL;
- c.arg = cmd;
- memcpy(c.parm.num, &iocts.arg, sizeof(ulong));
- ret = isdn_command(&c);
- memcpy(&iocts.arg, c.parm.num, sizeof(ulong));
- if (copy_to_user(argp, &iocts, sizeof(isdn_ioctl_struct)))
- return -EFAULT;
- return ret;
- } else
- return -EINVAL;
- }
- }
-#ifdef CONFIG_ISDN_PPP
- if (minor <= ISDN_MINOR_PPPMAX)
- return (isdn_ppp_ioctl(minor - ISDN_MINOR_PPP, file, cmd, arg));
-#endif
- return -ENODEV;
-
-#undef name
-#undef bname
-#undef iocts
-#undef phone
-#undef cfg
-}
-
-static long
-isdn_unlocked_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
-{
- int ret;
-
- mutex_lock(&isdn_mutex);
- ret = isdn_ioctl(file, cmd, arg);
- mutex_unlock(&isdn_mutex);
-
- return ret;
-}
-
-/*
- * Open the device code.
- */
-static int
-isdn_open(struct inode *ino, struct file *filep)
-{
- uint minor = iminor(ino);
- int drvidx;
- int chidx;
- int retval = -ENODEV;
-
- mutex_lock(&isdn_mutex);
- if (minor == ISDN_MINOR_STATUS) {
- infostruct *p;
-
- if ((p = kmalloc(sizeof(infostruct), GFP_KERNEL))) {
- p->next = (char *) dev->infochain;
- p->private = (char *) &(filep->private_data);
- dev->infochain = p;
- /* At opening we allow a single update */
- filep->private_data = (char *) 1;
- retval = 0;
- goto out;
- } else {
- retval = -ENOMEM;
- goto out;
- }
- }
- if (!dev->channels)
- goto out;
- if (minor <= ISDN_MINOR_BMAX) {
- printk(KERN_WARNING "isdn_open minor %d obsolete!\n", minor);
- drvidx = isdn_minor2drv(minor);
- if (drvidx < 0)
- goto out;
- chidx = isdn_minor2chan(minor);
- if (!(dev->drv[drvidx]->flags & DRV_FLAG_RUNNING))
- goto out;
- if (!(dev->drv[drvidx]->online & (1 << chidx)))
- goto out;
- isdn_lock_drivers();
- retval = 0;
- goto out;
- }
- if (minor <= ISDN_MINOR_CTRLMAX) {
- drvidx = isdn_minor2drv(minor - ISDN_MINOR_CTRL);
- if (drvidx < 0)
- goto out;
- isdn_lock_drivers();
- retval = 0;
- goto out;
- }
-#ifdef CONFIG_ISDN_PPP
- if (minor <= ISDN_MINOR_PPPMAX) {
- retval = isdn_ppp_open(minor - ISDN_MINOR_PPP, filep);
- if (retval == 0)
- isdn_lock_drivers();
- goto out;
- }
-#endif
-out:
- nonseekable_open(ino, filep);
- mutex_unlock(&isdn_mutex);
- return retval;
-}
-
-static int
-isdn_close(struct inode *ino, struct file *filep)
-{
- uint minor = iminor(ino);
-
- mutex_lock(&isdn_mutex);
- if (minor == ISDN_MINOR_STATUS) {
- infostruct *p = dev->infochain;
- infostruct *q = NULL;
-
- while (p) {
- if (p->private == (char *) &(filep->private_data)) {
- if (q)
- q->next = p->next;
- else
- dev->infochain = (infostruct *) (p->next);
- kfree(p);
- goto out;
- }
- q = p;
- p = (infostruct *) (p->next);
- }
- printk(KERN_WARNING "isdn: No private data while closing isdnctrl\n");
- goto out;
- }
- isdn_unlock_drivers();
- if (minor <= ISDN_MINOR_BMAX)
- goto out;
- if (minor <= ISDN_MINOR_CTRLMAX) {
- if (dev->profd == current)
- dev->profd = NULL;
- goto out;
- }
-#ifdef CONFIG_ISDN_PPP
- if (minor <= ISDN_MINOR_PPPMAX)
- isdn_ppp_release(minor - ISDN_MINOR_PPP, filep);
-#endif
-
-out:
- mutex_unlock(&isdn_mutex);
- return 0;
-}
-
-static const struct file_operations isdn_fops =
-{
- .owner = THIS_MODULE,
- .llseek = no_llseek,
- .read = isdn_read,
- .write = isdn_write,
- .poll = isdn_poll,
- .unlocked_ioctl = isdn_unlocked_ioctl,
- .open = isdn_open,
- .release = isdn_close,
-};
-
-char *
-isdn_map_eaz2msn(char *msn, int di)
-{
- isdn_driver_t *this = dev->drv[di];
- int i;
-
- if (strlen(msn) == 1) {
- i = msn[0] - '0';
- if ((i >= 0) && (i <= 9))
- if (strlen(this->msn2eaz[i]))
- return (this->msn2eaz[i]);
- }
- return (msn);
-}
-
-/*
- * Find an unused ISDN channel whose feature flags match the
- * given L2 and L3 protocols.
- */
-#define L2V (~(ISDN_FEATURE_L2_V11096 | ISDN_FEATURE_L2_V11019 | ISDN_FEATURE_L2_V11038))
-
-/*
- * This function must be called while holding dev->lock.
- */
-int
-isdn_get_free_channel(int usage, int l2_proto, int l3_proto, int pre_dev
- , int pre_chan, char *msn)
-{
- int i;
- ulong features;
- ulong vfeatures;
-
- features = ((1 << l2_proto) | (0x10000 << l3_proto));
- vfeatures = (((1 << l2_proto) | (0x10000 << l3_proto)) &
- ~(ISDN_FEATURE_L2_V11096 | ISDN_FEATURE_L2_V11019 | ISDN_FEATURE_L2_V11038));
- /* If the Layer-2 protocol is V.110, also accept drivers with
- * the transparent feature even if they don't support V.110,
- * because we can emulate it in the linklevel.
- */
- for (i = 0; i < ISDN_MAX_CHANNELS; i++)
- if (USG_NONE(dev->usage[i]) &&
- (dev->drvmap[i] != -1)) {
- int d = dev->drvmap[i];
- if ((dev->usage[i] & ISDN_USAGE_EXCLUSIVE) &&
- ((pre_dev != d) || (pre_chan != dev->chanmap[i])))
- continue;
- if (!strcmp(isdn_map_eaz2msn(msn, d), "-"))
- continue;
- if (dev->usage[i] & ISDN_USAGE_DISABLED)
- continue; /* usage not allowed */
- if (dev->drv[d]->flags & DRV_FLAG_RUNNING) {
- if (((dev->drv[d]->interface->features & features) == features) ||
- (((dev->drv[d]->interface->features & vfeatures) == vfeatures) &&
- (dev->drv[d]->interface->features & ISDN_FEATURE_L2_TRANS))) {
- if ((pre_dev < 0) || (pre_chan < 0)) {
- dev->usage[i] &= ISDN_USAGE_EXCLUSIVE;
- dev->usage[i] |= usage;
- isdn_info_update();
- return i;
- } else {
- if ((pre_dev == d) && (pre_chan == dev->chanmap[i])) {
- dev->usage[i] &= ISDN_USAGE_EXCLUSIVE;
- dev->usage[i] |= usage;
- isdn_info_update();
- return i;
- }
- }
- }
- }
- }
- return -1;
-}
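/*
 * Illustrative sketch, not part of the deleted file: the feature test in
 * isdn_get_free_channel() above packs the requested layer-2 protocol into
 * the low 16 feature bits and the layer-3 protocol into the bits from
 * 0x10000 upward; a channel matches when the driver advertises both bits.
 * (For the V.110 layer-2 variants it is enough that the driver offers
 * ISDN_FEATURE_L2_TRANS, since V.110 is emulated in the linklevel.)
 * The helper below only shows the basic bit test; its name is hypothetical.
 */
static int driver_matches(unsigned long drv_features, int l2_proto, int l3_proto)
{
	unsigned long want = (1UL << l2_proto) | (0x10000UL << l3_proto);

	return (drv_features & want) == want;
}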
-
-/*
- * Set state of ISDN-channel to 'unused'
- */
-void
-isdn_free_channel(int di, int ch, int usage)
-{
- int i;
-
- if ((di < 0) || (ch < 0)) {
- printk(KERN_WARNING "%s: called with invalid drv(%d) or channel(%d)\n",
- __func__, di, ch);
- return;
- }
- for (i = 0; i < ISDN_MAX_CHANNELS; i++)
- if (((!usage) || ((dev->usage[i] & ISDN_USAGE_MASK) == usage)) &&
- (dev->drvmap[i] == di) &&
- (dev->chanmap[i] == ch)) {
- dev->usage[i] &= (ISDN_USAGE_NONE | ISDN_USAGE_EXCLUSIVE);
- strcpy(dev->num[i], "???");
- dev->ibytes[i] = 0;
- dev->obytes[i] = 0;
-// 20.10.99 JIM, try to reinitialize v110 !
- dev->v110emu[i] = 0;
- atomic_set(&(dev->v110use[i]), 0);
- isdn_v110_close(dev->v110[i]);
- dev->v110[i] = NULL;
-// 20.10.99 JIM, try to reinitialize v110 !
- isdn_info_update();
- if (dev->drv[di])
- skb_queue_purge(&dev->drv[di]->rpqueue[ch]);
- }
-}
-
-/*
- * Cancel Exclusive-Flag for ISDN-channel
- */
-void
-isdn_unexclusive_channel(int di, int ch)
-{
- int i;
-
- for (i = 0; i < ISDN_MAX_CHANNELS; i++)
- if ((dev->drvmap[i] == di) &&
- (dev->chanmap[i] == ch)) {
- dev->usage[i] &= ~ISDN_USAGE_EXCLUSIVE;
- isdn_info_update();
- return;
- }
-}
-
-/*
- * writebuf replacement for SKB_ABLE drivers
- */
-static int
-isdn_writebuf_stub(int drvidx, int chan, const u_char __user *buf, int len)
-{
- int ret;
- int hl = dev->drv[drvidx]->interface->hl_hdrlen;
- struct sk_buff *skb = alloc_skb(hl + len, GFP_ATOMIC);
-
- if (!skb)
- return -ENOMEM;
- skb_reserve(skb, hl);
- if (copy_from_user(skb_put(skb, len), buf, len)) {
- dev_kfree_skb(skb);
- return -EFAULT;
- }
- ret = dev->drv[drvidx]->interface->writebuf_skb(drvidx, chan, 1, skb);
- if (ret <= 0)
- dev_kfree_skb(skb);
- if (ret > 0)
- dev->obytes[isdn_dc2minor(drvidx, chan)] += ret;
- return ret;
-}
-
-/*
- * Return: length of data on success, -ERRcode on failure.
- */
-int
-isdn_writebuf_skb_stub(int drvidx, int chan, int ack, struct sk_buff *skb)
-{
- int ret;
- struct sk_buff *nskb = NULL;
- int v110_ret = skb->len;
- int idx = isdn_dc2minor(drvidx, chan);
-
- if (dev->v110[idx]) {
- atomic_inc(&dev->v110use[idx]);
- nskb = isdn_v110_encode(dev->v110[idx], skb);
- atomic_dec(&dev->v110use[idx]);
- if (!nskb)
- return 0;
- v110_ret = *((int *)nskb->data);
- skb_pull(nskb, sizeof(int));
- if (!nskb->len) {
- dev_kfree_skb(nskb);
- return v110_ret;
- }
- /* V.110 must always be acknowledged */
- ack = 1;
- ret = dev->drv[drvidx]->interface->writebuf_skb(drvidx, chan, ack, nskb);
- } else {
- int hl = dev->drv[drvidx]->interface->hl_hdrlen;
-
- if (skb_headroom(skb) < hl) {
- /*
- * This should only occur when a new HL driver with an
- * increased hl_hdrlen was loaded after the netdevice
- * was created and then connected to the new driver.
- *
- * The V.110 branch (which re-allocates on its own) does
- * not need this.
- */
- struct sk_buff *skb_tmp;
-
- skb_tmp = skb_realloc_headroom(skb, hl);
- printk(KERN_DEBUG "isdn_writebuf_skb_stub: reallocating headroom%s\n", skb_tmp ? "" : " failed");
- if (!skb_tmp) return -ENOMEM; /* 0 better? */
- ret = dev->drv[drvidx]->interface->writebuf_skb(drvidx, chan, ack, skb_tmp);
- if (ret > 0) {
- dev_kfree_skb(skb);
- } else {
- dev_kfree_skb(skb_tmp);
- }
- } else {
- ret = dev->drv[drvidx]->interface->writebuf_skb(drvidx, chan, ack, skb);
- }
- }
- if (ret > 0) {
- dev->obytes[idx] += ret;
- if (dev->v110[idx]) {
- atomic_inc(&dev->v110use[idx]);
- dev->v110[idx]->skbuser++;
- atomic_dec(&dev->v110use[idx]);
- /* For V.110 return unencoded data length */
- ret = v110_ret;
- /* if the complete frame was sent, we free the skb;
- if not, the upper function will requeue the skb */
- if (ret == skb->len)
- dev_kfree_skb(skb);
- }
- } else
- if (dev->v110[idx])
- dev_kfree_skb(nskb);
- return ret;
-}
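/*
 * Illustrative sketch, not part of the deleted file: in the V.110 branch of
 * isdn_writebuf_skb_stub() above, the encoded skb starts with the original
 * (unencoded) payload length stored as a native int; the caller pulls it off
 * with skb_pull() and later returns that length, so flow control accounts
 * for pre-encoding bytes. Rough picture of that framing (names hypothetical):
 */
struct v110_encoded {
	int orig_len;		/* length of the payload before V.110 encoding */
	unsigned char data[];	/* encoded payload follows the length prefix   */
};

static int peel_orig_len(struct v110_encoded *f, unsigned char **payload)
{
	*payload = f->data;	/* corresponds to skb_pull(nskb, sizeof(int)) */
	return f->orig_len;
}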
-
-static int
-isdn_add_channels(isdn_driver_t *d, int drvidx, int n, int adding)
-{
- int j, k, m;
-
- init_waitqueue_head(&d->st_waitq);
- if (d->flags & DRV_FLAG_RUNNING)
- return -1;
- if (n < 1) return 0;
-
- m = (adding) ? d->channels + n : n;
-
- if (dev->channels + n > ISDN_MAX_CHANNELS) {
- printk(KERN_WARNING "register_isdn: Max. %d channels supported\n",
- ISDN_MAX_CHANNELS);
- return -1;
- }
-
- if ((adding) && (d->rcverr))
- kfree(d->rcverr);
- if (!(d->rcverr = kcalloc(m, sizeof(int), GFP_ATOMIC))) {
- printk(KERN_WARNING "register_isdn: Could not alloc rcverr\n");
- return -1;
- }
-
- if ((adding) && (d->rcvcount))
- kfree(d->rcvcount);
- if (!(d->rcvcount = kcalloc(m, sizeof(int), GFP_ATOMIC))) {
- printk(KERN_WARNING "register_isdn: Could not alloc rcvcount\n");
- if (!adding)
- kfree(d->rcverr);
- return -1;
- }
-
- if ((adding) && (d->rpqueue)) {
- for (j = 0; j < d->channels; j++)
- skb_queue_purge(&d->rpqueue[j]);
- kfree(d->rpqueue);
- }
- d->rpqueue = kmalloc_array(m, sizeof(struct sk_buff_head), GFP_ATOMIC);
- if (!d->rpqueue) {
- printk(KERN_WARNING "register_isdn: Could not alloc rpqueue\n");
- if (!adding) {
- kfree(d->rcvcount);
- kfree(d->rcverr);
- }
- return -1;
- }
- for (j = 0; j < m; j++) {
- skb_queue_head_init(&d->rpqueue[j]);
- }
-
- if ((adding) && (d->rcv_waitq))
- kfree(d->rcv_waitq);
- d->rcv_waitq = kmalloc(array3_size(sizeof(wait_queue_head_t), 2, m),
- GFP_ATOMIC);
- if (!d->rcv_waitq) {
- printk(KERN_WARNING "register_isdn: Could not alloc rcv_waitq\n");
- if (!adding) {
- kfree(d->rpqueue);
- kfree(d->rcvcount);
- kfree(d->rcverr);
- }
- return -1;
- }
- d->snd_waitq = d->rcv_waitq + m;
- for (j = 0; j < m; j++) {
- init_waitqueue_head(&d->rcv_waitq[j]);
- init_waitqueue_head(&d->snd_waitq[j]);
- }
-
- dev->channels += n;
- for (j = d->channels; j < m; j++)
- for (k = 0; k < ISDN_MAX_CHANNELS; k++)
- if (dev->chanmap[k] < 0) {
- dev->chanmap[k] = j;
- dev->drvmap[k] = drvidx;
- break;
- }
- d->channels = m;
- return 0;
-}
-
-/*
- * Low-level-driver registration
- */
-
-static void
-set_global_features(void)
-{
- int drvidx;
-
- dev->global_features = 0;
- for (drvidx = 0; drvidx < ISDN_MAX_DRIVERS; drvidx++) {
- if (!dev->drv[drvidx])
- continue;
- if (dev->drv[drvidx]->interface)
- dev->global_features |= dev->drv[drvidx]->interface->features;
- }
-}
-
-#ifdef CONFIG_ISDN_DIVERSION
-
-static char *map_drvname(int di)
-{
- if ((di < 0) || (di >= ISDN_MAX_DRIVERS))
- return (NULL);
- return (dev->drvid[di]); /* driver name */
-} /* map_drvname */
-
-static int map_namedrv(char *id)
-{ int i;
-
- for (i = 0; i < ISDN_MAX_DRIVERS; i++)
- { if (!strcmp(dev->drvid[i], id))
- return (i);
- }
- return (-1);
-} /* map_namedrv */
-
-int DIVERT_REG_NAME(isdn_divert_if *i_div)
-{
- if (i_div->if_magic != DIVERT_IF_MAGIC)
- return (DIVERT_VER_ERR);
- switch (i_div->cmd)
- {
- case DIVERT_CMD_REL:
- if (divert_if != i_div)
- return (DIVERT_REL_ERR);
- divert_if = NULL; /* free interface */
- return (DIVERT_NO_ERR);
-
- case DIVERT_CMD_REG:
- if (divert_if)
- return (DIVERT_REG_ERR);
- i_div->ll_cmd = isdn_command; /* set command function */
- i_div->drv_to_name = map_drvname;
- i_div->name_to_drv = map_namedrv;
- divert_if = i_div; /* remember interface */
- return (DIVERT_NO_ERR);
-
- default:
- return (DIVERT_CMD_ERR);
- }
-} /* DIVERT_REG_NAME */
-
-EXPORT_SYMBOL(DIVERT_REG_NAME);
-
-#endif /* CONFIG_ISDN_DIVERSION */
-
-
-EXPORT_SYMBOL(register_isdn);
-#ifdef CONFIG_ISDN_PPP
-EXPORT_SYMBOL(isdn_ppp_register_compressor);
-EXPORT_SYMBOL(isdn_ppp_unregister_compressor);
-#endif
-
-int
-register_isdn(isdn_if *i)
-{
- isdn_driver_t *d;
- int j;
- ulong flags;
- int drvidx;
-
- if (dev->drivers >= ISDN_MAX_DRIVERS) {
- printk(KERN_WARNING "register_isdn: Max. %d drivers supported\n",
- ISDN_MAX_DRIVERS);
- return 0;
- }
- if (!i->writebuf_skb) {
- printk(KERN_WARNING "register_isdn: No write routine given.\n");
- return 0;
- }
- if (!(d = kzalloc(sizeof(isdn_driver_t), GFP_KERNEL))) {
- printk(KERN_WARNING "register_isdn: Could not alloc driver-struct\n");
- return 0;
- }
-
- d->maxbufsize = i->maxbufsize;
- d->pktcount = 0;
- d->stavail = 0;
- d->flags = DRV_FLAG_LOADED;
- d->online = 0;
- d->interface = i;
- d->channels = 0;
- spin_lock_irqsave(&dev->lock, flags);
- for (drvidx = 0; drvidx < ISDN_MAX_DRIVERS; drvidx++)
- if (!dev->drv[drvidx])
- break;
- if (isdn_add_channels(d, drvidx, i->channels, 0)) {
- spin_unlock_irqrestore(&dev->lock, flags);
- kfree(d);
- return 0;
- }
- i->channels = drvidx;
- i->rcvcallb_skb = isdn_receive_skb_callback;
- i->statcallb = isdn_status_callback;
- if (!strlen(i->id))
- sprintf(i->id, "line%d", drvidx);
- for (j = 0; j < drvidx; j++)
- if (!strcmp(i->id, dev->drvid[j]))
- sprintf(i->id, "line%d", drvidx);
- dev->drv[drvidx] = d;
- strcpy(dev->drvid[drvidx], i->id);
- isdn_info_update();
- dev->drivers++;
- set_global_features();
- spin_unlock_irqrestore(&dev->lock, flags);
- return 1;
-}
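/*
 * Illustrative sketch, not part of the deleted file: rough shape of how a
 * low-level hardware driver would have hooked into the linklevel through
 * register_isdn() above. Only fields actually referenced in this file are
 * shown; my_card_if, my_card_attach and my_writebuf_skb are hypothetical
 * names, and the isdn_if type comes from <linux/isdnif.h>.
 */
static int my_writebuf_skb(int driver, int channel, int ack, struct sk_buff *skb);

static isdn_if my_card_if;

static int my_card_attach(void)
{
	my_card_if.channels     = 2;		/* B-channels offered            */
	my_card_if.maxbufsize   = 4000;
	my_card_if.hl_hdrlen    = 0;		/* extra headroom the HW needs   */
	my_card_if.writebuf_skb = my_writebuf_skb; /* mandatory, checked above  */
	strcpy(my_card_if.id, "mycard0");	/* an empty id becomes "lineN"   */

	if (!register_isdn(&my_card_if))	/* returns 1 on success, 0 on error */
		return -EIO;
	/* On success the linklevel has filled in rcvcallb_skb and statcallb,
	 * which the driver then calls for received frames and status events. */
	return 0;
}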
-
-/*
-*****************************************************************************
-* And now the module code.
-*****************************************************************************
-*/
-
-static char *
-isdn_getrev(const char *revision)
-{
- char *rev;
- char *p;
-
- if ((p = strchr(revision, ':'))) {
- rev = p + 2;
- p = strchr(rev, '$');
- *--p = 0;
- } else
- rev = "???";
- return rev;
-}
-
-/*
- * Allocate and initialize all data, register modem-devices
- */
-static int __init isdn_init(void)
-{
- int i;
- char tmprev[50];
-
- dev = vzalloc(sizeof(isdn_dev));
- if (!dev) {
- printk(KERN_WARNING "isdn: Could not allocate device-struct.\n");
- return -EIO;
- }
- timer_setup(&dev->timer, isdn_timer_funct, 0);
- spin_lock_init(&dev->lock);
- spin_lock_init(&dev->timerlock);
-#ifdef MODULE
- dev->owner = THIS_MODULE;
-#endif
- mutex_init(&dev->mtx);
- init_waitqueue_head(&dev->info_waitq);
- for (i = 0; i < ISDN_MAX_CHANNELS; i++) {
- dev->drvmap[i] = -1;
- dev->chanmap[i] = -1;
- dev->m_idx[i] = -1;
- strcpy(dev->num[i], "???");
- }
- if (register_chrdev(ISDN_MAJOR, "isdn", &isdn_fops)) {
- printk(KERN_WARNING "isdn: Could not register control devices\n");
- vfree(dev);
- return -EIO;
- }
- if ((isdn_tty_modem_init()) < 0) {
- printk(KERN_WARNING "isdn: Could not register tty devices\n");
- vfree(dev);
- unregister_chrdev(ISDN_MAJOR, "isdn");
- return -EIO;
- }
-#ifdef CONFIG_ISDN_PPP
- if (isdn_ppp_init() < 0) {
- printk(KERN_WARNING "isdn: Could not create PPP-device-structs\n");
- isdn_tty_exit();
- unregister_chrdev(ISDN_MAJOR, "isdn");
- vfree(dev);
- return -EIO;
- }
-#endif /* CONFIG_ISDN_PPP */
-
- strcpy(tmprev, isdn_revision);
- printk(KERN_NOTICE "ISDN subsystem Rev: %s/", isdn_getrev(tmprev));
- strcpy(tmprev, isdn_net_revision);
- printk("%s/", isdn_getrev(tmprev));
- strcpy(tmprev, isdn_ppp_revision);
- printk("%s/", isdn_getrev(tmprev));
- strcpy(tmprev, isdn_audio_revision);
- printk("%s/", isdn_getrev(tmprev));
- strcpy(tmprev, isdn_v110_revision);
- printk("%s", isdn_getrev(tmprev));
-
-#ifdef MODULE
- printk(" loaded\n");
-#else
- printk("\n");
-#endif
- isdn_info_update();
- return 0;
-}
-
-/*
- * Unload module
- */
-static void __exit isdn_exit(void)
-{
-#ifdef CONFIG_ISDN_PPP
- isdn_ppp_cleanup();
-#endif
- if (isdn_net_rmall() < 0) {
- printk(KERN_WARNING "isdn: net-device busy, remove cancelled\n");
- return;
- }
- isdn_tty_exit();
- unregister_chrdev(ISDN_MAJOR, "isdn");
- del_timer_sync(&dev->timer);
- /* call vfree with interrupts enabled, else it will hang */
- vfree(dev);
- printk(KERN_NOTICE "ISDN-subsystem unloaded\n");
-}
-
-module_init(isdn_init);
-module_exit(isdn_exit);
diff --git a/drivers/isdn/i4l/isdn_common.h b/drivers/isdn/i4l/isdn_common.h
deleted file mode 100644
index 2260ef07ab9c..000000000000
--- a/drivers/isdn/i4l/isdn_common.h
+++ /dev/null
@@ -1,47 +0,0 @@
-/* $Id: isdn_common.h,v 1.1.2.2 2004/01/12 22:37:19 keil Exp $
- *
- * header for Linux ISDN subsystem
- * commonly used functions and debugging switches (linklevel).
- *
- * Copyright 1994-1999 by Fritz Elfert (fritz@isdn4linux.de)
- * Copyright 1995,96 by Thinking Objects Software GmbH Wuerzburg
- * Copyright 1995,96 by Michael Hipp (Michael.Hipp@student.uni-tuebingen.de)
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#undef ISDN_DEBUG_MODEM_OPEN
-#undef ISDN_DEBUG_MODEM_IOCTL
-#undef ISDN_DEBUG_MODEM_WAITSENT
-#undef ISDN_DEBUG_MODEM_HUP
-#undef ISDN_DEBUG_MODEM_ICALL
-#undef ISDN_DEBUG_MODEM_DUMP
-#undef ISDN_DEBUG_MODEM_VOICE
-#undef ISDN_DEBUG_AT
-#undef ISDN_DEBUG_NET_DUMP
-#undef ISDN_DEBUG_NET_DIAL
-#undef ISDN_DEBUG_NET_ICALL
-
-/* Prototypes */
-extern void isdn_lock_drivers(void);
-extern void isdn_unlock_drivers(void);
-extern void isdn_free_channel(int di, int ch, int usage);
-extern void isdn_all_eaz(int di, int ch);
-extern int isdn_command(isdn_ctrl *);
-extern int isdn_dc2minor(int di, int ch);
-extern void isdn_info_update(void);
-extern char *isdn_map_eaz2msn(char *msn, int di);
-extern void isdn_timer_ctrl(int tf, int onoff);
-extern void isdn_unexclusive_channel(int di, int ch);
-extern int isdn_getnum(char **);
-extern int isdn_readbchan(int, int, u_char *, u_char *, int, wait_queue_head_t *);
-extern int isdn_readbchan_tty(int, int, struct tty_port *, int);
-extern int isdn_get_free_channel(int, int, int, int, int, char *);
-extern int isdn_writebuf_skb_stub(int, int, int, struct sk_buff *);
-extern int register_isdn(isdn_if *i);
-extern int isdn_msncmp(const char *, const char *);
-#if defined(ISDN_DEBUG_NET_DUMP) || defined(ISDN_DEBUG_MODEM_DUMP)
-extern void isdn_dumppkt(char *, u_char *, int, int);
-#endif
diff --git a/drivers/isdn/i4l/isdn_concap.c b/drivers/isdn/i4l/isdn_concap.c
deleted file mode 100644
index 336523ec077c..000000000000
--- a/drivers/isdn/i4l/isdn_concap.c
+++ /dev/null
@@ -1,99 +0,0 @@
-/* $Id: isdn_concap.c,v 1.1.2.2 2004/01/12 22:37:19 keil Exp $
- *
- * Linux ISDN subsystem, protocol encapsulation
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-/* Stuff to support the concap_proto by isdn4linux. isdn4linux-specific
- * stuff goes here. Stuff that depends only on the concap protocol goes into
- * another -- protocol-specific -- source file.
- *
- */
-
-
-#include <linux/isdn.h>
-#include "isdn_x25iface.h"
-#include "isdn_net.h"
-#include <linux/concap.h>
-#include "isdn_concap.h"
-
-
-/* The following set of device service operations is for encapsulation
- protocols that require reliable datalink semantics. That means:
-
- - before any data is submitted, the connection must explicitly
- be set up.
- - after the successful setup of the connection is signalled, the
- connection is considered to be reliably up.
-
- Auto-dialing is not compatible with these requirements. Thus, auto-dialing
- is completely bypassed.
-
- It might be possible to implement a (non-standardized) datalink protocol
- that provides a reliable data link service while using some auto-dialing
- mechanism. Such a protocol would need an auxiliary channel (i.e. user-user
- signaling on the D-channel) while the B-channel is down.
-*/
-
-
-static int isdn_concap_dl_data_req(struct concap_proto *concap, struct sk_buff *skb)
-{
- struct net_device *ndev = concap->net_dev;
- isdn_net_dev *nd = ((isdn_net_local *) netdev_priv(ndev))->netdev;
- isdn_net_local *lp = isdn_net_get_locked_lp(nd);
-
- IX25DEBUG("isdn_concap_dl_data_req: %s \n", concap->net_dev->name);
- if (!lp) {
- IX25DEBUG("isdn_concap_dl_data_req: %s : isdn_net_send_skb returned %d\n", concap->net_dev->name, 1);
- return 1;
- }
- lp->huptimer = 0;
- isdn_net_writebuf_skb(lp, skb);
- spin_unlock_bh(&lp->xmit_lock);
- IX25DEBUG("isdn_concap_dl_data_req: %s : isdn_net_send_skb returned %d\n", concap->net_dev->name, 0);
- return 0;
-}
-
-
-static int isdn_concap_dl_connect_req(struct concap_proto *concap)
-{
- struct net_device *ndev = concap->net_dev;
- isdn_net_local *lp = netdev_priv(ndev);
- int ret;
- IX25DEBUG("isdn_concap_dl_connect_req: %s \n", ndev->name);
-
- /* dial ... */
- ret = isdn_net_dial_req(lp);
- if (ret) IX25DEBUG("dialing failed\n");
- return ret;
-}
-
-static int isdn_concap_dl_disconn_req(struct concap_proto *concap)
-{
- IX25DEBUG("isdn_concap_dl_disconn_req: %s \n", concap->net_dev->name);
-
- isdn_net_hangup(concap->net_dev);
- return 0;
-}
-
-struct concap_device_ops isdn_concap_reliable_dl_dops = {
- .data_req = &isdn_concap_dl_data_req,
- .connect_req = &isdn_concap_dl_connect_req,
- .disconn_req = &isdn_concap_dl_disconn_req
-};
-
-/* The following would better go into a dedicated source file, so that
- this source file does not need to include any protocol-specific header
- files. For now:
-*/
-struct concap_proto *isdn_concap_new(int encap)
-{
- switch (encap) {
- case ISDN_NET_ENCAP_X25IFACE:
- return isdn_x25iface_proto_new();
- }
- return NULL;
-}
diff --git a/drivers/isdn/i4l/isdn_concap.h b/drivers/isdn/i4l/isdn_concap.h
deleted file mode 100644
index cd7e3ba74e25..000000000000
--- a/drivers/isdn/i4l/isdn_concap.h
+++ /dev/null
@@ -1,11 +0,0 @@
-/* $Id: isdn_concap.h,v 1.1.2.2 2004/01/12 22:37:19 keil Exp $
- *
- * Linux ISDN subsystem, protocol encapsulation
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-extern struct concap_device_ops isdn_concap_reliable_dl_dops;
-extern struct concap_proto *isdn_concap_new(int);
diff --git a/drivers/isdn/i4l/isdn_net.c b/drivers/isdn/i4l/isdn_net.c
deleted file mode 100644
index c138f66f2659..000000000000
--- a/drivers/isdn/i4l/isdn_net.c
+++ /dev/null
@@ -1,3198 +0,0 @@
-/* $Id: isdn_net.c,v 1.1.2.2 2004/01/12 22:37:19 keil Exp $
- *
- * Linux ISDN subsystem, network interfaces and related functions (linklevel).
- *
- * Copyright 1994-1998 by Fritz Elfert (fritz@isdn4linux.de)
- * Copyright 1995,96 by Thinking Objects Software GmbH Wuerzburg
- * Copyright 1995,96 by Michael Hipp (Michael.Hipp@student.uni-tuebingen.de)
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- * Data Over Voice (DOV) support added - Guy Ellis 23-Mar-02
- * guy@traverse.com.au
- * Outgoing calls - looks for a 'V' in first char of dialed number
- * Incoming calls - checks first character of eaz as follows:
- * Numeric - accept DATA only - original functionality
- * 'V' - accept VOICE (DOV) only
- * 'B' - accept BOTH DATA and DOV types
- *
- * Jan 2001: fix CISCO HDLC Bjoern A. Zeeb <i4l@zabbadoz.net>
- * for info on the protocol, see
- * http://i4l.zabbadoz.net/i4l/cisco-hdlc.txt
- */
-
-#include <linux/isdn.h>
-#include <linux/slab.h>
-#include <net/arp.h>
-#include <net/dst.h>
-#include <net/pkt_sched.h>
-#include <linux/inetdevice.h>
-#include "isdn_common.h"
-#include "isdn_net.h"
-#ifdef CONFIG_ISDN_PPP
-#include "isdn_ppp.h"
-#endif
-#ifdef CONFIG_ISDN_X25
-#include <linux/concap.h>
-#include "isdn_concap.h"
-#endif
-
-
-/*
- * Outline of new tbusy handling:
- *
- * The old method, roughly speaking, consisted of setting tbusy when entering
- * isdn_net_start_xmit() and at several other locations, and clearing
- * it from the isdn_net_start_xmit() thread when sending was successful.
- *
- * With 2.3.x multithreaded network core, to prevent problems, tbusy should
- * only be set by the isdn_net_start_xmit() thread and only when a tx-busy
- * condition is detected. Other threads (in particular isdn_net_stat_callb())
- * are only allowed to clear tbusy.
- *
- * -HE
- */
-
-/*
- * About SOFTNET:
- * Most of the changes were pretty obvious and basically done by HE already.
- *
- * One problem of the isdn net device code is that it uses struct net_device
- * for masters and slaves. However, only master interfaces are registered with
- * the network layer, and therefore it only makes sense to call netif_*
- * functions on them.
- *
- * --KG
- */
-
-/*
- * Find out if the netdevice has been ifup-ed yet.
- * For slaves, look at the corresponding master.
- */
-static __inline__ int isdn_net_device_started(isdn_net_dev *n)
-{
- isdn_net_local *lp = n->local;
- struct net_device *dev;
-
- if (lp->master)
- dev = lp->master;
- else
- dev = n->dev;
- return netif_running(dev);
-}
-
-/*
- * wake up the network -> net_device queue.
- * For slaves, wake the corresponding master interface.
- */
-static __inline__ void isdn_net_device_wake_queue(isdn_net_local *lp)
-{
- if (lp->master)
- netif_wake_queue(lp->master);
- else
- netif_wake_queue(lp->netdev->dev);
-}
-
-/*
- * stop the network -> net_device queue.
- * For slaves, stop the corresponding master interface.
- */
-static __inline__ void isdn_net_device_stop_queue(isdn_net_local *lp)
-{
- if (lp->master)
- netif_stop_queue(lp->master);
- else
- netif_stop_queue(lp->netdev->dev);
-}
-
-/*
- * find out if the net_device which this lp belongs to (lp can be
- * master or slave) is busy. It's busy iff all (master and slave)
- * queues are busy
- */
-static __inline__ int isdn_net_device_busy(isdn_net_local *lp)
-{
- isdn_net_local *nlp;
- isdn_net_dev *nd;
- unsigned long flags;
-
- if (!isdn_net_lp_busy(lp))
- return 0;
-
- if (lp->master)
- nd = ISDN_MASTER_PRIV(lp)->netdev;
- else
- nd = lp->netdev;
-
- spin_lock_irqsave(&nd->queue_lock, flags);
- nlp = lp->next;
- while (nlp != lp) {
- if (!isdn_net_lp_busy(nlp)) {
- spin_unlock_irqrestore(&nd->queue_lock, flags);
- return 0;
- }
- nlp = nlp->next;
- }
- spin_unlock_irqrestore(&nd->queue_lock, flags);
- return 1;
-}
-
-static __inline__ void isdn_net_inc_frame_cnt(isdn_net_local *lp)
-{
- atomic_inc(&lp->frame_cnt);
- if (isdn_net_device_busy(lp))
- isdn_net_device_stop_queue(lp);
-}
-
-static __inline__ void isdn_net_dec_frame_cnt(isdn_net_local *lp)
-{
- atomic_dec(&lp->frame_cnt);
-
- if (!(isdn_net_device_busy(lp))) {
- if (!skb_queue_empty(&lp->super_tx_queue)) {
- schedule_work(&lp->tqueue);
- } else {
- isdn_net_device_wake_queue(lp);
- }
- }
-}
-
-static __inline__ void isdn_net_zero_frame_cnt(isdn_net_local *lp)
-{
- atomic_set(&lp->frame_cnt, 0);
-}
-
-/* For 2.2.x we leave the transmitter busy timeout at 2 secs, just
- * to be safe.
- * For 2.3.x we push it up to 20 secs, because call establishment
- * (in particular callback) may take such a long time, and we
- * don't want confusing messages in the log. However, there is a slight
- * possibility that this large timeout will break other things like MPPP,
- * which might rely on the tx timeout. If so, we'll find out this way...
- */
-
-#define ISDN_NET_TX_TIMEOUT (20 * HZ)
-
-/* Prototypes */
-
-static int isdn_net_force_dial_lp(isdn_net_local *);
-static netdev_tx_t isdn_net_start_xmit(struct sk_buff *,
- struct net_device *);
-
-static void isdn_net_ciscohdlck_connected(isdn_net_local *lp);
-static void isdn_net_ciscohdlck_disconnected(isdn_net_local *lp);
-
-char *isdn_net_revision = "$Revision: 1.1.2.2 $";
-
-/*
- * Code for raw-networking over ISDN
- */
-
-static void
-isdn_net_unreachable(struct net_device *dev, struct sk_buff *skb, char *reason)
-{
- if (skb) {
-
- u_short proto = ntohs(skb->protocol);
-
- printk(KERN_DEBUG "isdn_net: %s: %s, signalling dst_link_failure %s\n",
- dev->name,
- (reason != NULL) ? reason : "unknown",
- (proto != ETH_P_IP) ? "Protocol != ETH_P_IP" : "");
-
- dst_link_failure(skb);
- }
- else { /* dial not triggered by rawIP packet */
- printk(KERN_DEBUG "isdn_net: %s: %s\n",
- dev->name,
- (reason != NULL) ? reason : "reason unknown");
- }
-}
-
-static void
-isdn_net_reset(struct net_device *dev)
-{
-#ifdef CONFIG_ISDN_X25
- struct concap_device_ops *dops =
- ((isdn_net_local *)netdev_priv(dev))->dops;
- struct concap_proto *cprot =
- ((isdn_net_local *)netdev_priv(dev))->netdev->cprot;
-#endif
-#ifdef CONFIG_ISDN_X25
- if (cprot && cprot->pops && dops)
- cprot->pops->restart(cprot, dev, dops);
-#endif
-}
-
-/* Open/initialize the board. */
-static int
-isdn_net_open(struct net_device *dev)
-{
- int i;
- struct net_device *p;
- struct in_device *in_dev;
-
- /* moved here from isdn_net_reset, because only the master has an
- associated interface which is supposed to be started. BTW:
- we need to call netif_start_queue, not netif_wake_queue here */
- netif_start_queue(dev);
-
- isdn_net_reset(dev);
- /* Fill in the MAC-level header (not needed, but kept for compatibility) */
- for (i = 0; i < ETH_ALEN - sizeof(u32); i++)
- dev->dev_addr[i] = 0xfc;
- if ((in_dev = dev->ip_ptr) != NULL) {
- /*
- * Any address will do - we take the first
- */
- struct in_ifaddr *ifa = in_dev->ifa_list;
- if (ifa != NULL)
- memcpy(dev->dev_addr + 2, &ifa->ifa_local, 4);
- }
-
- /* If this interface has slaves, start them also */
- p = MASTER_TO_SLAVE(dev);
- if (p) {
- while (p) {
- isdn_net_reset(p);
- p = MASTER_TO_SLAVE(p);
- }
- }
- isdn_lock_drivers();
- return 0;
-}
-
-/*
- * Assign an ISDN-channel to a net-interface
- */
-static void
-isdn_net_bind_channel(isdn_net_local *lp, int idx)
-{
- lp->flags |= ISDN_NET_CONNECTED;
- lp->isdn_device = dev->drvmap[idx];
- lp->isdn_channel = dev->chanmap[idx];
- dev->rx_netdev[idx] = lp->netdev;
- dev->st_netdev[idx] = lp->netdev;
-}
-
-/*
- * unbind a net-interface (resets interface after an error)
- */
-static void
-isdn_net_unbind_channel(isdn_net_local *lp)
-{
- skb_queue_purge(&lp->super_tx_queue);
-
- if (!lp->master) { /* reset only master device */
- /* Moral equivalent of dev_purge_queues():
- BEWARE! This chunk of code cannot be called from hardware
- interrupt handler. I hope it is true. --ANK
- */
- qdisc_reset_all_tx(lp->netdev->dev);
- }
- lp->dialstate = 0;
- dev->rx_netdev[isdn_dc2minor(lp->isdn_device, lp->isdn_channel)] = NULL;
- dev->st_netdev[isdn_dc2minor(lp->isdn_device, lp->isdn_channel)] = NULL;
- if (lp->isdn_device != -1 && lp->isdn_channel != -1)
- isdn_free_channel(lp->isdn_device, lp->isdn_channel,
- ISDN_USAGE_NET);
- lp->flags &= ~ISDN_NET_CONNECTED;
- lp->isdn_device = -1;
- lp->isdn_channel = -1;
-}
-
-/*
- * Perform auto-hangup and cps-calculation for net-interfaces.
- *
- * auto-hangup:
- * Increment idle-counter (this counter is reset on any incoming or
- * outgoing packet), if counter exceeds configured limit either do a
- * hangup immediately or - if configured - wait until just before the next
- * charge-info.
- *
- * cps-calculation (needed for dynamic channel-bundling):
- * Since this function is called every second, simply reset the
- * byte-counter of the interface after copying it to the cps-variable.
- */
-static unsigned long last_jiffies = -HZ;
-
-void
-isdn_net_autohup(void)
-{
- isdn_net_dev *p = dev->netdev;
- int anymore;
-
- anymore = 0;
- while (p) {
- isdn_net_local *l = p->local;
- if (jiffies == last_jiffies)
- l->cps = l->transcount;
- else
- l->cps = (l->transcount * HZ) / (jiffies - last_jiffies);
- l->transcount = 0;
- if (dev->net_verbose > 3)
- printk(KERN_DEBUG "%s: %d bogocps\n", p->dev->name, l->cps);
- if ((l->flags & ISDN_NET_CONNECTED) && (!l->dialstate)) {
- anymore = 1;
- l->huptimer++;
- /*
- * if there is some dialmode where timeout-hangup
- * should _not_ be done, check for that here
- */
- if ((l->onhtime) &&
- (l->huptimer > l->onhtime))
- {
- if (l->hupflags & ISDN_MANCHARGE &&
- l->hupflags & ISDN_CHARGEHUP) {
- while (time_after(jiffies, l->chargetime + l->chargeint))
- l->chargetime += l->chargeint;
- if (time_after(jiffies, l->chargetime + l->chargeint - 2 * HZ))
- if (l->outgoing || l->hupflags & ISDN_INHUP)
- isdn_net_hangup(p->dev);
- } else if (l->outgoing) {
- if (l->hupflags & ISDN_CHARGEHUP) {
- if (l->hupflags & ISDN_WAITCHARGE) {
- printk(KERN_DEBUG "isdn_net: Hupflags of %s are %X\n",
- p->dev->name, l->hupflags);
- isdn_net_hangup(p->dev);
- } else if (time_after(jiffies, l->chargetime + l->chargeint)) {
- printk(KERN_DEBUG
- "isdn_net: %s: chtime = %lu, chint = %d\n",
- p->dev->name, l->chargetime, l->chargeint);
- isdn_net_hangup(p->dev);
- }
- } else
- isdn_net_hangup(p->dev);
- } else if (l->hupflags & ISDN_INHUP)
- isdn_net_hangup(p->dev);
- }
-
- if (dev->global_flags & ISDN_GLOBAL_STOPPED || (ISDN_NET_DIALMODE(*l) == ISDN_NET_DM_OFF)) {
- isdn_net_hangup(p->dev);
- break;
- }
- }
- p = (isdn_net_dev *) p->next;
- }
- last_jiffies = jiffies;
- isdn_timer_ctrl(ISDN_TIMER_NETHANGUP, anymore);
-}
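/*
 * Illustrative sketch, not part of the deleted file: the cps value computed
 * in isdn_net_autohup() above is simply bytes per second derived from the
 * per-interface byte counter and the elapsed jiffies, e.g. 5000 bytes
 * accumulated over one second (HZ jiffies) gives 5000 * HZ / HZ = 5000 cps.
 * Helper name hypothetical:
 */
static unsigned long isdn_cps(unsigned long transcount,
			      unsigned long now, unsigned long last)
{
	/* same guard as above: within one jiffy, avoid dividing by zero */
	return (now == last) ? transcount : (transcount * HZ) / (now - last);
}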
-
-static void isdn_net_lp_disconnected(isdn_net_local *lp)
-{
- isdn_net_rm_from_bundle(lp);
-}
-
-/*
- * Handle status messages from the ISDN interface card.
- * This function is called from within the main-status-dispatcher
- * isdn_status_callback, which itself is called from the low-level driver.
- * Return: 1 = Event handled, 0 = not for us or unknown Event.
- */
-int
-isdn_net_stat_callback(int idx, isdn_ctrl *c)
-{
- isdn_net_dev *p = dev->st_netdev[idx];
- int cmd = c->command;
-
- if (p) {
- isdn_net_local *lp = p->local;
-#ifdef CONFIG_ISDN_X25
- struct concap_proto *cprot = lp->netdev->cprot;
- struct concap_proto_ops *pops = cprot ? cprot->pops : NULL;
-#endif
- switch (cmd) {
- case ISDN_STAT_BSENT:
- /* A packet has successfully been sent out */
- if ((lp->flags & ISDN_NET_CONNECTED) &&
- (!lp->dialstate)) {
- isdn_net_dec_frame_cnt(lp);
- lp->stats.tx_packets++;
- lp->stats.tx_bytes += c->parm.length;
- }
- return 1;
- case ISDN_STAT_DCONN:
- /* D-Channel is up */
- switch (lp->dialstate) {
- case 4:
- case 7:
- case 8:
- lp->dialstate++;
- return 1;
- case 12:
- lp->dialstate = 5;
- return 1;
- }
- break;
- case ISDN_STAT_DHUP:
- /* Either D-Channel-hangup or error during dialout */
-#ifdef CONFIG_ISDN_X25
- /* If we are not connected, then dialing had
- failed. If there are generic encap protocol
- receiver routines, signal the closure of
- the link. */
-
- if (!(lp->flags & ISDN_NET_CONNECTED)
- && pops && pops->disconn_ind)
- pops->disconn_ind(cprot);
-#endif /* CONFIG_ISDN_X25 */
- if ((!lp->dialstate) && (lp->flags & ISDN_NET_CONNECTED)) {
- if (lp->p_encap == ISDN_NET_ENCAP_CISCOHDLCK)
- isdn_net_ciscohdlck_disconnected(lp);
-#ifdef CONFIG_ISDN_PPP
- if (lp->p_encap == ISDN_NET_ENCAP_SYNCPPP)
- isdn_ppp_free(lp);
-#endif
- isdn_net_lp_disconnected(lp);
- isdn_all_eaz(lp->isdn_device, lp->isdn_channel);
- printk(KERN_INFO "%s: remote hangup\n", p->dev->name);
- printk(KERN_INFO "%s: Chargesum is %d\n", p->dev->name,
- lp->charge);
- isdn_net_unbind_channel(lp);
- return 1;
- }
- break;
-#ifdef CONFIG_ISDN_X25
- case ISDN_STAT_BHUP:
- /* B-Channel-hangup */
- /* check whether there are generic encap protocol
- receiver routines and signal the closure of
- the link */
- if (pops && pops->disconn_ind) {
- pops->disconn_ind(cprot);
- return 1;
- }
- break;
-#endif /* CONFIG_ISDN_X25 */
- case ISDN_STAT_BCONN:
- /* B-Channel is up */
- isdn_net_zero_frame_cnt(lp);
- switch (lp->dialstate) {
- case 5:
- case 6:
- case 7:
- case 8:
- case 9:
- case 10:
- case 12:
- if (lp->dialstate <= 6) {
- dev->usage[idx] |= ISDN_USAGE_OUTGOING;
- isdn_info_update();
- } else
- dev->rx_netdev[idx] = p;
- lp->dialstate = 0;
- isdn_timer_ctrl(ISDN_TIMER_NETHANGUP, 1);
- if (lp->p_encap == ISDN_NET_ENCAP_CISCOHDLCK)
- isdn_net_ciscohdlck_connected(lp);
- if (lp->p_encap != ISDN_NET_ENCAP_SYNCPPP) {
- if (lp->master) { /* is lp a slave? */
- isdn_net_dev *nd = ISDN_MASTER_PRIV(lp)->netdev;
- isdn_net_add_to_bundle(nd, lp);
- }
- }
- printk(KERN_INFO "isdn_net: %s connected\n", p->dev->name);
- /* If first Chargeinfo comes before B-Channel connect,
- * we correct the timestamp here.
- */
- lp->chargetime = jiffies;
-
- /* reset dial-timeout */
- lp->dialstarted = 0;
- lp->dialwait_timer = 0;
-
-#ifdef CONFIG_ISDN_PPP
- if (lp->p_encap == ISDN_NET_ENCAP_SYNCPPP)
- isdn_ppp_wakeup_daemon(lp);
-#endif
-#ifdef CONFIG_ISDN_X25
- /* check whether there are generic concap receiver routines */
- if (pops)
- if (pops->connect_ind)
- pops->connect_ind(cprot);
-#endif /* CONFIG_ISDN_X25 */
- /* ppp needs to do negotiations first */
- if (lp->p_encap != ISDN_NET_ENCAP_SYNCPPP)
- isdn_net_device_wake_queue(lp);
- return 1;
- }
- break;
- case ISDN_STAT_NODCH:
- /* No D-Channel avail. */
- if (lp->dialstate == 4) {
- lp->dialstate--;
- return 1;
- }
- break;
- case ISDN_STAT_CINF:
- /* Charge-info from TelCo. Calculate interval between
- * charge-infos and set timestamp for last info for
- * usage by isdn_net_autohup()
- */
- lp->charge++;
- if (lp->hupflags & ISDN_HAVECHARGE) {
- lp->hupflags &= ~ISDN_WAITCHARGE;
- lp->chargeint = jiffies - lp->chargetime - (2 * HZ);
- }
- if (lp->hupflags & ISDN_WAITCHARGE)
- lp->hupflags |= ISDN_HAVECHARGE;
- lp->chargetime = jiffies;
- printk(KERN_DEBUG "isdn_net: Got CINF chargetime of %s now %lu\n",
- p->dev->name, lp->chargetime);
- return 1;
- }
- }
- return 0;
-}
-
-/*
- * Perform dialout for net-interfaces and timeout-handling for
- * D-Channel-up and B-Channel-up Messages.
- * This function is initially called from within isdn_net_start_xmit()
- * or isdn_net_find_icall() after initializing the dialstate for an
- * interface. If further calls are needed, the function schedules itself
- * for a timer-callback via isdn_timer_function().
- * The dialstate is also affected by incoming status-messages from
- * the ISDN-Channel which are handled in isdn_net_stat_callback() above.
- */
-void
-isdn_net_dial(void)
-{
- isdn_net_dev *p = dev->netdev;
- int anymore = 0;
- int i;
- isdn_ctrl cmd;
- u_char *phone_number;
-
- while (p) {
- isdn_net_local *lp = p->local;
-
-#ifdef ISDN_DEBUG_NET_DIAL
- if (lp->dialstate)
- printk(KERN_DEBUG "%s: dialstate=%d\n", p->dev->name, lp->dialstate);
-#endif
- switch (lp->dialstate) {
- case 0:
- /* Nothing to do for this interface */
- break;
- case 1:
- /* Initiate dialout. Set phone-number-pointer to first number
- * of interface.
- */
- lp->dial = lp->phone[1];
- if (!lp->dial) {
- printk(KERN_WARNING "%s: phone number deleted?\n",
- p->dev->name);
- isdn_net_hangup(p->dev);
- break;
- }
- anymore = 1;
-
- if (lp->dialtimeout > 0)
- if (lp->dialstarted == 0 || time_after(jiffies, lp->dialstarted + lp->dialtimeout + lp->dialwait)) {
- lp->dialstarted = jiffies;
- lp->dialwait_timer = 0;
- }
-
- lp->dialstate++;
- /* Fall through */
- case 2:
- /* Prepare dialing. Clear EAZ, then set EAZ. */
- cmd.driver = lp->isdn_device;
- cmd.arg = lp->isdn_channel;
- cmd.command = ISDN_CMD_CLREAZ;
- isdn_command(&cmd);
- sprintf(cmd.parm.num, "%s", isdn_map_eaz2msn(lp->msn, cmd.driver));
- cmd.command = ISDN_CMD_SETEAZ;
- isdn_command(&cmd);
- lp->dialretry = 0;
- anymore = 1;
- lp->dialstate++;
- /* Fall through */
- case 3:
- /* Setup interface, dial current phone-number, switch to next number.
- * If list of phone-numbers is exhausted, increment
- * retry-counter.
- */
- if (dev->global_flags & ISDN_GLOBAL_STOPPED || (ISDN_NET_DIALMODE(*lp) == ISDN_NET_DM_OFF)) {
- char *s;
- if (dev->global_flags & ISDN_GLOBAL_STOPPED)
- s = "dial suppressed: isdn system stopped";
- else
- s = "dial suppressed: dialmode `off'";
- isdn_net_unreachable(p->dev, NULL, s);
- isdn_net_hangup(p->dev);
- break;
- }
- cmd.driver = lp->isdn_device;
- cmd.command = ISDN_CMD_SETL2;
- cmd.arg = lp->isdn_channel + (lp->l2_proto << 8);
- isdn_command(&cmd);
- cmd.driver = lp->isdn_device;
- cmd.command = ISDN_CMD_SETL3;
- cmd.arg = lp->isdn_channel + (lp->l3_proto << 8);
- isdn_command(&cmd);
- cmd.driver = lp->isdn_device;
- cmd.arg = lp->isdn_channel;
- if (!lp->dial) {
- printk(KERN_WARNING "%s: phone number deleted?\n",
- p->dev->name);
- isdn_net_hangup(p->dev);
- break;
- }
- if (!strncmp(lp->dial->num, "LEASED", strlen("LEASED"))) {
- lp->dialstate = 4;
- printk(KERN_INFO "%s: Open leased line ...\n", p->dev->name);
- } else {
- if (lp->dialtimeout > 0)
- if (time_after(jiffies, lp->dialstarted + lp->dialtimeout)) {
- lp->dialwait_timer = jiffies + lp->dialwait;
- lp->dialstarted = 0;
- isdn_net_unreachable(p->dev, NULL, "dial: timed out");
- isdn_net_hangup(p->dev);
- break;
- }
-
- cmd.driver = lp->isdn_device;
- cmd.command = ISDN_CMD_DIAL;
- cmd.parm.setup.si2 = 0;
-
- /* check for DOV */
- phone_number = lp->dial->num;
- if ((*phone_number == 'v') ||
- (*phone_number == 'V')) { /* DOV call */
- cmd.parm.setup.si1 = 1;
- } else { /* DATA call */
- cmd.parm.setup.si1 = 7;
- }
-
- strcpy(cmd.parm.setup.phone, phone_number);
- /*
- * Switch to next number or back to start if at end of list.
- */
- if (!(lp->dial = (isdn_net_phone *) lp->dial->next)) {
- lp->dial = lp->phone[1];
- lp->dialretry++;
-
- if (lp->dialretry > lp->dialmax) {
- if (lp->dialtimeout == 0) {
- lp->dialwait_timer = jiffies + lp->dialwait;
- lp->dialstarted = 0;
- isdn_net_unreachable(p->dev, NULL, "dial: tried all numbers dialmax times");
- }
- isdn_net_hangup(p->dev);
- break;
- }
- }
- sprintf(cmd.parm.setup.eazmsn, "%s",
- isdn_map_eaz2msn(lp->msn, cmd.driver));
- i = isdn_dc2minor(lp->isdn_device, lp->isdn_channel);
- if (i >= 0) {
- strcpy(dev->num[i], cmd.parm.setup.phone);
- dev->usage[i] |= ISDN_USAGE_OUTGOING;
- isdn_info_update();
- }
- printk(KERN_INFO "%s: dialing %d %s... %s\n", p->dev->name,
- lp->dialretry, cmd.parm.setup.phone,
- (cmd.parm.setup.si1 == 1) ? "DOV" : "");
- lp->dtimer = 0;
-#ifdef ISDN_DEBUG_NET_DIAL
- printk(KERN_DEBUG "dial: d=%d c=%d\n", lp->isdn_device,
- lp->isdn_channel);
-#endif
- isdn_command(&cmd);
- }
- lp->huptimer = 0;
- lp->outgoing = 1;
- if (lp->chargeint) {
- lp->hupflags |= ISDN_HAVECHARGE;
- lp->hupflags &= ~ISDN_WAITCHARGE;
- } else {
- lp->hupflags |= ISDN_WAITCHARGE;
- lp->hupflags &= ~ISDN_HAVECHARGE;
- }
- anymore = 1;
- lp->dialstate =
- (lp->cbdelay &&
- (lp->flags & ISDN_NET_CBOUT)) ? 12 : 4;
- break;
- case 4:
- /* Wait for D-Channel-connect.
- * If timeout, switch back to state 3.
- * Dialmax-handling moved to state 3.
- */
- if (lp->dtimer++ > ISDN_TIMER_DTIMEOUT10)
- lp->dialstate = 3;
- anymore = 1;
- break;
- case 5:
- /* Got D-Channel-Connect, send B-Channel-request */
- cmd.driver = lp->isdn_device;
- cmd.arg = lp->isdn_channel;
- cmd.command = ISDN_CMD_ACCEPTB;
- anymore = 1;
- lp->dtimer = 0;
- lp->dialstate++;
- isdn_command(&cmd);
- break;
- case 6:
- /* Wait for B- or D-Channel-connect. If timeout,
- * switch back to state 3.
- */
-#ifdef ISDN_DEBUG_NET_DIAL
- printk(KERN_DEBUG "dialtimer2: %d\n", lp->dtimer);
-#endif
- if (lp->dtimer++ > ISDN_TIMER_DTIMEOUT10)
- lp->dialstate = 3;
- anymore = 1;
- break;
- case 7:
- /* Got incoming Call, setup L2 and L3 protocols,
- * then wait for D-Channel-connect
- */
-#ifdef ISDN_DEBUG_NET_DIAL
- printk(KERN_DEBUG "dialtimer4: %d\n", lp->dtimer);
-#endif
- cmd.driver = lp->isdn_device;
- cmd.command = ISDN_CMD_SETL2;
- cmd.arg = lp->isdn_channel + (lp->l2_proto << 8);
- isdn_command(&cmd);
- cmd.driver = lp->isdn_device;
- cmd.command = ISDN_CMD_SETL3;
- cmd.arg = lp->isdn_channel + (lp->l3_proto << 8);
- isdn_command(&cmd);
- if (lp->dtimer++ > ISDN_TIMER_DTIMEOUT15)
- isdn_net_hangup(p->dev);
- else {
- anymore = 1;
- lp->dialstate++;
- }
- break;
- case 9:
- /* Got incoming D-Channel-Connect, send B-Channel-request */
- cmd.driver = lp->isdn_device;
- cmd.arg = lp->isdn_channel;
- cmd.command = ISDN_CMD_ACCEPTB;
- isdn_command(&cmd);
- anymore = 1;
- lp->dtimer = 0;
- lp->dialstate++;
- break;
- case 8:
- case 10:
- /* Wait for B- or D-channel-connect */
-#ifdef ISDN_DEBUG_NET_DIAL
- printk(KERN_DEBUG "dialtimer4: %d\n", lp->dtimer);
-#endif
- if (lp->dtimer++ > ISDN_TIMER_DTIMEOUT10)
- isdn_net_hangup(p->dev);
- else
- anymore = 1;
- break;
- case 11:
- /* Callback Delay */
- if (lp->dtimer++ > lp->cbdelay)
- lp->dialstate = 1;
- anymore = 1;
- break;
- case 12:
- /* Remote does callback. Hangup after cbdelay, then wait for incoming
- * call (in state 4).
- */
- if (lp->dtimer++ > lp->cbdelay)
- {
- printk(KERN_INFO "%s: hangup waiting for callback ...\n", p->dev->name);
- lp->dtimer = 0;
- lp->dialstate = 4;
- cmd.driver = lp->isdn_device;
- cmd.command = ISDN_CMD_HANGUP;
- cmd.arg = lp->isdn_channel;
- isdn_command(&cmd);
- isdn_all_eaz(lp->isdn_device, lp->isdn_channel);
- }
- anymore = 1;
- break;
- default:
- printk(KERN_WARNING "isdn_net: Illegal dialstate %d for device %s\n",
- lp->dialstate, p->dev->name);
- }
- p = (isdn_net_dev *) p->next;
- }
- isdn_timer_ctrl(ISDN_TIMER_NETDIAL, anymore);
-}
-
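The dialstate values handled in the switch above are bare integers in the driver. Purely as a reading aid, the states visible in this hunk can be sketched as the following enum; the type and its names are invented here and do not exist in the source.

enum isdn_dialstate_sketch {
	ISDN_DS_DIAL_OUT      = 3,	/* set L2/L3, pick a number, send ISDN_CMD_DIAL */
	ISDN_DS_WAIT_DCONN    = 4,	/* wait for D-channel connect, back to 3 on timeout */
	ISDN_DS_GOT_DCONN     = 5,	/* D-channel up, send ISDN_CMD_ACCEPTB */
	ISDN_DS_WAIT_BCONN    = 6,	/* wait for B-/D-channel connect */
	ISDN_DS_IN_SETUP      = 7,	/* incoming call: set L2/L3, wait for D-connect */
	ISDN_DS_IN_WAIT_CONN  = 8,	/* incoming: wait for B-/D-channel connect */
	ISDN_DS_IN_GOT_DCONN  = 9,	/* incoming D-connect, send ISDN_CMD_ACCEPTB */
	ISDN_DS_IN_WAIT_BCONN = 10,	/* incoming: wait for B-/D-channel connect */
	ISDN_DS_CB_DELAY      = 11,	/* callback delay, then restart dialing in state 1 */
	ISDN_DS_CB_WAIT_ICALL = 12,	/* remote calls back: hang up, then wait in state 4 */
};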
-/*
- * Perform hangup for a net-interface.
- */
-void
-isdn_net_hangup(struct net_device *d)
-{
- isdn_net_local *lp = netdev_priv(d);
- isdn_ctrl cmd;
-#ifdef CONFIG_ISDN_X25
- struct concap_proto *cprot = lp->netdev->cprot;
- struct concap_proto_ops *pops = cprot ? cprot->pops : NULL;
-#endif
-
- if (lp->flags & ISDN_NET_CONNECTED) {
- if (lp->slave != NULL) {
- isdn_net_local *slp = ISDN_SLAVE_PRIV(lp);
- if (slp->flags & ISDN_NET_CONNECTED) {
- printk(KERN_INFO
- "isdn_net: hang up slave %s before %s\n",
- lp->slave->name, d->name);
- isdn_net_hangup(lp->slave);
- }
- }
- printk(KERN_INFO "isdn_net: local hangup %s\n", d->name);
-#ifdef CONFIG_ISDN_PPP
- if (lp->p_encap == ISDN_NET_ENCAP_SYNCPPP)
- isdn_ppp_free(lp);
-#endif
- isdn_net_lp_disconnected(lp);
-#ifdef CONFIG_ISDN_X25
- /* check whether generic encap protocol receiver
- routines exist and signal the closure of the link */
- if (pops && pops->disconn_ind)
- pops->disconn_ind(cprot);
-#endif /* CONFIG_ISDN_X25 */
-
- cmd.driver = lp->isdn_device;
- cmd.command = ISDN_CMD_HANGUP;
- cmd.arg = lp->isdn_channel;
- isdn_command(&cmd);
- printk(KERN_INFO "%s: Chargesum is %d\n", d->name, lp->charge);
- isdn_all_eaz(lp->isdn_device, lp->isdn_channel);
- }
- isdn_net_unbind_channel(lp);
-}
-
-typedef struct {
- __be16 source;
- __be16 dest;
-} ip_ports;
-
-static void
-isdn_net_log_skb(struct sk_buff *skb, isdn_net_local *lp)
-{
- /* hopefully, this was set correctly */
- const u_char *p = skb_network_header(skb);
- unsigned short proto = ntohs(skb->protocol);
- int data_ofs;
- ip_ports *ipp;
- char addinfo[100];
-
- addinfo[0] = '\0';
- /* This check stolen from 2.1.72 dev_queue_xmit_nit() */
- if (p < skb->data || skb_network_header(skb) >= skb_tail_pointer(skb)) {
- /* fall back to old isdn_net_log_packet method() */
- char *buf = skb->data;
-
- printk(KERN_DEBUG "isdn_net: protocol %04x is buggy, dev %s\n", skb->protocol, lp->netdev->dev->name);
- p = buf;
- proto = ETH_P_IP;
- switch (lp->p_encap) {
- case ISDN_NET_ENCAP_IPTYP:
- proto = ntohs(*(__be16 *)&buf[0]);
- p = &buf[2];
- break;
- case ISDN_NET_ENCAP_ETHER:
- proto = ntohs(*(__be16 *)&buf[12]);
- p = &buf[14];
- break;
- case ISDN_NET_ENCAP_CISCOHDLC:
- proto = ntohs(*(__be16 *)&buf[2]);
- p = &buf[4];
- break;
-#ifdef CONFIG_ISDN_PPP
- case ISDN_NET_ENCAP_SYNCPPP:
- proto = ntohs(skb->protocol);
- p = &buf[IPPP_MAX_HEADER];
- break;
-#endif
- }
- }
- data_ofs = ((p[0] & 15) * 4);
- switch (proto) {
- case ETH_P_IP:
- switch (p[9]) {
- case 1:
- strcpy(addinfo, " ICMP");
- break;
- case 2:
- strcpy(addinfo, " IGMP");
- break;
- case 4:
- strcpy(addinfo, " IPIP");
- break;
- case 6:
- ipp = (ip_ports *) (&p[data_ofs]);
- sprintf(addinfo, " TCP, port: %d -> %d", ntohs(ipp->source),
- ntohs(ipp->dest));
- break;
- case 8:
- strcpy(addinfo, " EGP");
- break;
- case 12:
- strcpy(addinfo, " PUP");
- break;
- case 17:
- ipp = (ip_ports *) (&p[data_ofs]);
- sprintf(addinfo, " UDP, port: %d -> %d", ntohs(ipp->source),
- ntohs(ipp->dest));
- break;
- case 22:
- strcpy(addinfo, " IDP");
- break;
- }
- printk(KERN_INFO "OPEN: %pI4 -> %pI4%s\n",
- p + 12, p + 16, addinfo);
- break;
- case ETH_P_ARP:
- printk(KERN_INFO "OPEN: ARP %pI4 -> *.*.*.* ?%pI4\n",
- p + 14, p + 24);
- break;
- }
-}
-
-/*
- * this function is used to send supervisory data, i.e. data which was
- * not received from the network layer, but e.g. frames from ipppd, CCP
- * reset frames etc.
- */
-void isdn_net_write_super(isdn_net_local *lp, struct sk_buff *skb)
-{
- if (in_irq()) {
- // we can't grab the lock from irq context,
- // so we just queue the packet
- skb_queue_tail(&lp->super_tx_queue, skb);
- schedule_work(&lp->tqueue);
- return;
- }
-
- spin_lock_bh(&lp->xmit_lock);
- if (!isdn_net_lp_busy(lp)) {
- isdn_net_writebuf_skb(lp, skb);
- } else {
- skb_queue_tail(&lp->super_tx_queue, skb);
- }
- spin_unlock_bh(&lp->xmit_lock);
-}
-
-/*
- * called from tq_immediate
- */
-static void isdn_net_softint(struct work_struct *work)
-{
- isdn_net_local *lp = container_of(work, isdn_net_local, tqueue);
- struct sk_buff *skb;
-
- spin_lock_bh(&lp->xmit_lock);
- while (!isdn_net_lp_busy(lp)) {
- skb = skb_dequeue(&lp->super_tx_queue);
- if (!skb)
- break;
- isdn_net_writebuf_skb(lp, skb);
- }
- spin_unlock_bh(&lp->xmit_lock);
-}
-
-/*
- * all frames sent from the (net) LL to a HL driver should go via this function
- * it's serialized by the caller holding the lp->xmit_lock spinlock
- */
-void isdn_net_writebuf_skb(isdn_net_local *lp, struct sk_buff *skb)
-{
- int ret;
- int len = skb->len; /* save len */
-
- /* before obtaining the lock the caller should have checked that
- the lp isn't busy */
- if (isdn_net_lp_busy(lp)) {
- printk("isdn BUG at %s:%d!\n", __FILE__, __LINE__);
- goto error;
- }
-
- if (!(lp->flags & ISDN_NET_CONNECTED)) {
- printk("isdn BUG at %s:%d!\n", __FILE__, __LINE__);
- goto error;
- }
- ret = isdn_writebuf_skb_stub(lp->isdn_device, lp->isdn_channel, 1, skb);
- if (ret != len) {
- /* we should never get here */
- printk(KERN_WARNING "%s: HL driver queue full\n", lp->netdev->dev->name);
- goto error;
- }
-
- lp->transcount += len;
- isdn_net_inc_frame_cnt(lp);
- return;
-
-error:
- dev_kfree_skb(skb);
- lp->stats.tx_errors++;
-
-}
-
-
-/*
- * Helper function for isdn_net_start_xmit.
- * When called, the connection is already established.
- * Based on cps-calculation, check if device is overloaded.
- * If so, and if a slave exists, trigger dialing for it.
- * If any slave is online, deliver packets using a simple round robin
- * scheme.
- *
- * Return: 0 on success, !0 on failure.
- */
-
-static int
-isdn_net_xmit(struct net_device *ndev, struct sk_buff *skb)
-{
- isdn_net_dev *nd;
- isdn_net_local *slp;
- isdn_net_local *lp = netdev_priv(ndev);
- int retv = NETDEV_TX_OK;
-
- if (((isdn_net_local *) netdev_priv(ndev))->master) {
- printk("isdn BUG at %s:%d!\n", __FILE__, __LINE__);
- dev_kfree_skb(skb);
- return NETDEV_TX_OK;
- }
-
- /* For the other encaps the header has already been built */
-#ifdef CONFIG_ISDN_PPP
- if (lp->p_encap == ISDN_NET_ENCAP_SYNCPPP) {
- return isdn_ppp_xmit(skb, ndev);
- }
-#endif
- nd = ((isdn_net_local *) netdev_priv(ndev))->netdev;
- lp = isdn_net_get_locked_lp(nd);
- if (!lp) {
- printk(KERN_WARNING "%s: all channels busy - requeuing!\n", ndev->name);
- return NETDEV_TX_BUSY;
- }
- /* we have our lp locked from now on */
-
- /* Reset hangup-timeout */
- lp->huptimer = 0; // FIXME?
- isdn_net_writebuf_skb(lp, skb);
- spin_unlock_bh(&lp->xmit_lock);
-
- /* the following stuff is here for backwards compatibility.
- * in the future, start-up and hangup of slaves (based on current load)
- * should move to userspace and be based on an overall cps
- * calculation
- */
- if (lp->cps > lp->triggercps) {
- if (lp->slave) {
- if (!lp->sqfull) {
- /* First time overload: set timestamp only */
- lp->sqfull = 1;
- lp->sqfull_stamp = jiffies;
- } else {
- /* subsequent overload: if slavedelay exceeded, start dialing */
- if (time_after(jiffies, lp->sqfull_stamp + lp->slavedelay)) {
- slp = ISDN_SLAVE_PRIV(lp);
- if (!(slp->flags & ISDN_NET_CONNECTED)) {
- isdn_net_force_dial_lp(ISDN_SLAVE_PRIV(lp));
- }
- }
- }
- }
- } else {
- if (lp->sqfull && time_after(jiffies, lp->sqfull_stamp + lp->slavedelay + (10 * HZ))) {
- lp->sqfull = 0;
- }
- /* this is a hack to allow auto-hangup for slaves on moderate loads */
- nd->queue = nd->local;
- }
-
- return retv;
-
-}
-
-static void
-isdn_net_adjust_hdr(struct sk_buff *skb, struct net_device *dev)
-{
- isdn_net_local *lp = netdev_priv(dev);
- if (!skb)
- return;
- if (lp->p_encap == ISDN_NET_ENCAP_ETHER) {
- const int pullsize = skb_network_offset(skb) - ETH_HLEN;
- if (pullsize > 0) {
- printk(KERN_DEBUG "isdn_net: Pull junk %d\n", pullsize);
- skb_pull(skb, pullsize);
- }
- }
-}
-
-
-static void isdn_net_tx_timeout(struct net_device *ndev)
-{
- isdn_net_local *lp = netdev_priv(ndev);
-
- printk(KERN_WARNING "isdn_tx_timeout dev %s dialstate %d\n", ndev->name, lp->dialstate);
- if (!lp->dialstate) {
- lp->stats.tx_errors++;
- /*
- * There is a certain probability that this currently
- * works at all, because we always wake up the interface and
- * the upper layer will then try to send the next packet
- * immediately. And then, the old clean_up logic in the
- * driver will hopefully continue to work as it used to do.
- *
- * This is rather primitive right now; we really should
- * clean internal queues here, in particular for multilink and
- * ppp, and reset the HL driver's channel, too. --HE
- *
- * actually, this may not matter at all, because ISDN hardware
- * should not see transmitter hangs at all IMO
- * changed KERN_DEBUG to KERN_WARNING to find out if this is
- * ever called --KG
- */
- }
- netif_trans_update(ndev);
- netif_wake_queue(ndev);
-}
-
-/*
- * Try sending a packet.
- * If this interface isn't connected to an ISDN-Channel, find a free channel,
- * and start dialing.
- */
-static netdev_tx_t
-isdn_net_start_xmit(struct sk_buff *skb, struct net_device *ndev)
-{
- isdn_net_local *lp = netdev_priv(ndev);
-#ifdef CONFIG_ISDN_X25
- struct concap_proto *cprot = lp->netdev->cprot;
-/* At this point hard_start_xmit() passes control to the encapsulation
- protocol (if present).
- For X.25, auto-dialing is completely bypassed because:
- - It does not conform with the semantics of a reliable datalink
- service as needed by X.25 PLP.
- - The interface should not start dialing when the network layer
- sends a message which requests to disconnect the lapb link (or if it
- sends any other message not resulting in data transmission).
- Instead, dialing will be initiated by the encapsulation protocol entity
- when a dl_establish request is received from the upper layer.
-*/
- if (cprot && cprot->pops) {
- int ret = cprot->pops->encap_and_xmit(cprot, skb);
-
- if (ret)
- netif_stop_queue(ndev);
- return ret;
- } else
-#endif
- /* auto-dialing xmit function */
- {
-#ifdef ISDN_DEBUG_NET_DUMP
- u_char *buf;
-#endif
- isdn_net_adjust_hdr(skb, ndev);
-#ifdef ISDN_DEBUG_NET_DUMP
- buf = skb->data;
- isdn_dumppkt("S:", buf, skb->len, 40);
-#endif
-
- if (!(lp->flags & ISDN_NET_CONNECTED)) {
- int chi;
- /* only do autodial if allowed by config */
- if (!(ISDN_NET_DIALMODE(*lp) == ISDN_NET_DM_AUTO)) {
- isdn_net_unreachable(ndev, skb, "dial rejected: interface not in dialmode `auto'");
- dev_kfree_skb(skb);
- return NETDEV_TX_OK;
- }
- if (lp->phone[1]) {
- ulong flags;
-
- if (lp->dialwait_timer <= 0)
- if (lp->dialstarted > 0 && lp->dialtimeout > 0 && time_before(jiffies, lp->dialstarted + lp->dialtimeout + lp->dialwait))
- lp->dialwait_timer = lp->dialstarted + lp->dialtimeout + lp->dialwait;
-
- if (lp->dialwait_timer > 0) {
- if (time_before(jiffies, lp->dialwait_timer)) {
- isdn_net_unreachable(ndev, skb, "dial rejected: retry-time not reached");
- dev_kfree_skb(skb);
- return NETDEV_TX_OK;
- } else
- lp->dialwait_timer = 0;
- }
- /* Grab a free ISDN-Channel */
- spin_lock_irqsave(&dev->lock, flags);
- if (((chi =
- isdn_get_free_channel(
- ISDN_USAGE_NET,
- lp->l2_proto,
- lp->l3_proto,
- lp->pre_device,
- lp->pre_channel,
- lp->msn)
- ) < 0) &&
- ((chi =
- isdn_get_free_channel(
- ISDN_USAGE_NET,
- lp->l2_proto,
- lp->l3_proto,
- lp->pre_device,
- lp->pre_channel^1,
- lp->msn)
- ) < 0)) {
- spin_unlock_irqrestore(&dev->lock, flags);
- isdn_net_unreachable(ndev, skb,
- "No channel");
- dev_kfree_skb(skb);
- return NETDEV_TX_OK;
- }
- /* Log packet, which triggered dialing */
- if (dev->net_verbose)
- isdn_net_log_skb(skb, lp);
- lp->dialstate = 1;
- /* Connect interface with channel */
- isdn_net_bind_channel(lp, chi);
-#ifdef CONFIG_ISDN_PPP
- if (lp->p_encap == ISDN_NET_ENCAP_SYNCPPP) {
- /* no 'first_skb' handling for syncPPP */
- if (isdn_ppp_bind(lp) < 0) {
- dev_kfree_skb(skb);
- isdn_net_unbind_channel(lp);
- spin_unlock_irqrestore(&dev->lock, flags);
- return NETDEV_TX_OK; /* STN (skb to nirvana) ;) */
- }
-#ifdef CONFIG_IPPP_FILTER
- if (isdn_ppp_autodial_filter(skb, lp)) {
- isdn_ppp_free(lp);
- isdn_net_unbind_channel(lp);
- spin_unlock_irqrestore(&dev->lock, flags);
- isdn_net_unreachable(ndev, skb, "dial rejected: packet filtered");
- dev_kfree_skb(skb);
- return NETDEV_TX_OK;
- }
-#endif
- spin_unlock_irqrestore(&dev->lock, flags);
- isdn_net_dial(); /* Initiate dialing */
- netif_stop_queue(ndev);
- return NETDEV_TX_BUSY; /* let upper layer requeue skb packet */
- }
-#endif
- /* Initiate dialing */
- spin_unlock_irqrestore(&dev->lock, flags);
- isdn_net_dial();
- isdn_net_device_stop_queue(lp);
- return NETDEV_TX_BUSY;
- } else {
- isdn_net_unreachable(ndev, skb,
- "No phone number");
- dev_kfree_skb(skb);
- return NETDEV_TX_OK;
- }
- } else {
- /* Device is connected to an ISDN channel */
- netif_trans_update(ndev);
- if (!lp->dialstate) {
- /* ISDN connection is established, try sending */
- int ret;
- ret = (isdn_net_xmit(ndev, skb));
- if (ret) netif_stop_queue(ndev);
- return ret;
- } else
- netif_stop_queue(ndev);
- }
- }
- return NETDEV_TX_BUSY;
-}
-
-/*
- * Shutdown a net-interface.
- */
-static int
-isdn_net_close(struct net_device *dev)
-{
- struct net_device *p;
-#ifdef CONFIG_ISDN_X25
- struct concap_proto *cprot =
- ((isdn_net_local *)netdev_priv(dev))->netdev->cprot;
- /* printk(KERN_DEBUG "isdn_net_close %s\n" , dev-> name); */
-#endif
-
-#ifdef CONFIG_ISDN_X25
- if (cprot && cprot->pops) cprot->pops->close(cprot);
-#endif
- netif_stop_queue(dev);
- p = MASTER_TO_SLAVE(dev);
- if (p) {
- /* If this interface has slaves, stop them also */
- while (p) {
-#ifdef CONFIG_ISDN_X25
- cprot = ((isdn_net_local *)netdev_priv(p))
- ->netdev->cprot;
- if (cprot && cprot->pops)
- cprot->pops->close(cprot);
-#endif
- isdn_net_hangup(p);
- p = MASTER_TO_SLAVE(p);
- }
- }
- isdn_net_hangup(dev);
- isdn_unlock_drivers();
- return 0;
-}
-
-/*
- * Get statistics
- */
-static struct net_device_stats *
-isdn_net_get_stats(struct net_device *dev)
-{
- isdn_net_local *lp = netdev_priv(dev);
- return &lp->stats;
-}
-
-/* This is simply a copy from std. eth.c EXCEPT we pull ETH_HLEN
- * instead of dev->hard_header_len off. This is done because the
- * lowlevel-driver has already pulled off its stuff when we get
- * here and this routine only gets called with p_encap == ETHER.
- * Determine the packet's protocol ID. The rule here is that we
- * assume 802.3 if the type field is short enough to be a length.
- * This is normal practice and works for any 'now in use' protocol.
- */
-
-static __be16
-isdn_net_type_trans(struct sk_buff *skb, struct net_device *dev)
-{
- struct ethhdr *eth;
- unsigned char *rawp;
-
- skb_reset_mac_header(skb);
- skb_pull(skb, ETH_HLEN);
- eth = eth_hdr(skb);
-
- if (*eth->h_dest & 1) {
- if (ether_addr_equal(eth->h_dest, dev->broadcast))
- skb->pkt_type = PACKET_BROADCAST;
- else
- skb->pkt_type = PACKET_MULTICAST;
- }
- /*
- * This ALLMULTI check should be redundant by 1.4
- * so don't forget to remove it.
- */
-
- else if (dev->flags & (IFF_PROMISC /*| IFF_ALLMULTI*/)) {
- if (!ether_addr_equal(eth->h_dest, dev->dev_addr))
- skb->pkt_type = PACKET_OTHERHOST;
- }
- if (ntohs(eth->h_proto) >= ETH_P_802_3_MIN)
- return eth->h_proto;
-
- rawp = skb->data;
-
- /*
- * This is a magic hack to spot IPX packets. Older Novell breaks
- * the protocol design and runs IPX over 802.3 without an 802.2 LLC
- * layer. We look for FFFF which isn't a used 802.2 SSAP/DSAP. This
- * won't work for fault tolerant netware but does for the rest.
- */
- if (*(unsigned short *) rawp == 0xFFFF)
- return htons(ETH_P_802_3);
- /*
- * Real 802.2 LLC
- */
- return htons(ETH_P_802_2);
-}
-
-
-/*
- * CISCO HDLC keepalive specific stuff
- */
-static struct sk_buff*
-isdn_net_ciscohdlck_alloc_skb(isdn_net_local *lp, int len)
-{
- unsigned short hl = dev->drv[lp->isdn_device]->interface->hl_hdrlen;
- struct sk_buff *skb;
-
- skb = alloc_skb(hl + len, GFP_ATOMIC);
- if (skb)
- skb_reserve(skb, hl);
- else
- printk("isdn out of mem at %s:%d!\n", __FILE__, __LINE__);
- return skb;
-}
-
-/* cisco hdlck device private ioctls */
-static int
-isdn_ciscohdlck_dev_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
-{
- isdn_net_local *lp = netdev_priv(dev);
- unsigned long len = 0;
- unsigned long expires = 0;
- int tmp = 0;
- int period = lp->cisco_keepalive_period;
- s8 debserint = lp->cisco_debserint;
- int rc = 0;
-
- if (lp->p_encap != ISDN_NET_ENCAP_CISCOHDLCK)
- return -EINVAL;
-
- switch (cmd) {
- /* get/set keepalive period */
- case SIOCGKEEPPERIOD:
- len = (unsigned long)sizeof(lp->cisco_keepalive_period);
- if (copy_to_user(ifr->ifr_data,
- &lp->cisco_keepalive_period, len))
- rc = -EFAULT;
- break;
- case SIOCSKEEPPERIOD:
- tmp = lp->cisco_keepalive_period;
- len = (unsigned long)sizeof(lp->cisco_keepalive_period);
- if (copy_from_user(&period, ifr->ifr_data, len))
- rc = -EFAULT;
- if ((period > 0) && (period <= 32767))
- lp->cisco_keepalive_period = period;
- else
- rc = -EINVAL;
- if (!rc && (tmp != lp->cisco_keepalive_period)) {
- expires = (unsigned long)(jiffies +
- lp->cisco_keepalive_period * HZ);
- mod_timer(&lp->cisco_timer, expires);
- printk(KERN_INFO "%s: Keepalive period set "
- "to %d seconds.\n",
- dev->name, lp->cisco_keepalive_period);
- }
- break;
-
- /* get/set debugging */
- case SIOCGDEBSERINT:
- len = (unsigned long)sizeof(lp->cisco_debserint);
- if (copy_to_user(ifr->ifr_data,
- &lp->cisco_debserint, len))
- rc = -EFAULT;
- break;
- case SIOCSDEBSERINT:
- len = (unsigned long)sizeof(lp->cisco_debserint);
- if (copy_from_user(&debserint,
- ifr->ifr_data, len))
- rc = -EFAULT;
- if ((debserint >= 0) && (debserint <= 64))
- lp->cisco_debserint = debserint;
- else
- rc = -EINVAL;
- break;
-
- default:
- rc = -EINVAL;
- break;
- }
- return (rc);
-}
-
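These private ioctls are driven from userspace through a struct ifreq whose ifr_data points at a plain int, matching the copy_to_user()/copy_from_user() calls above. The following is only a minimal userspace sketch; it assumes that SIOCSKEEPPERIOD is exported by <linux/isdn.h> and that the interface already uses the Cisco-HDLC-with-keepalive encapsulation.

#include <string.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/isdn.h>		/* assumed to define SIOCSKEEPPERIOD */

/* Set the SLARP keepalive period (in seconds) of an isdn4linux interface. */
static int set_keepalive_period(const char *ifname, int seconds)
{
	struct ifreq ifr;
	int fd = socket(AF_INET, SOCK_DGRAM, 0);
	int ret;

	if (fd < 0)
		return -1;
	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
	ifr.ifr_data = (char *)&seconds;	/* kernel copies a single int from here */
	ret = ioctl(fd, SIOCSKEEPPERIOD, &ifr);
	if (ret < 0)
		perror("SIOCSKEEPPERIOD");
	close(fd);
	return ret;
}

Reading the current period works the same way with SIOCGKEEPPERIOD, with the kernel filling in the int instead.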
-
-static int isdn_net_ioctl(struct net_device *dev,
- struct ifreq *ifr, int cmd)
-{
- isdn_net_local *lp = netdev_priv(dev);
-
- switch (lp->p_encap) {
-#ifdef CONFIG_ISDN_PPP
- case ISDN_NET_ENCAP_SYNCPPP:
- return isdn_ppp_dev_ioctl(dev, ifr, cmd);
-#endif
- case ISDN_NET_ENCAP_CISCOHDLCK:
- return isdn_ciscohdlck_dev_ioctl(dev, ifr, cmd);
- default:
- return -EINVAL;
- }
-}
-
-/* called via cisco_timer.function */
-static void
-isdn_net_ciscohdlck_slarp_send_keepalive(struct timer_list *t)
-{
- isdn_net_local *lp = from_timer(lp, t, cisco_timer);
- struct sk_buff *skb;
- unsigned char *p;
- unsigned long last_cisco_myseq = lp->cisco_myseq;
- int myseq_diff = 0;
-
- if (!(lp->flags & ISDN_NET_CONNECTED) || lp->dialstate) {
- printk("isdn BUG at %s:%d!\n", __FILE__, __LINE__);
- return;
- }
- lp->cisco_myseq++;
-
- myseq_diff = (lp->cisco_myseq - lp->cisco_mineseen);
- if ((lp->cisco_line_state) && ((myseq_diff >= 3) || (myseq_diff <= -3))) {
- /* line up -> down */
- lp->cisco_line_state = 0;
- printk(KERN_WARNING
- "UPDOWN: Line protocol on Interface %s,"
- " changed state to down\n", lp->netdev->dev->name);
- /* should stop routing higher-level data across */
- } else if ((!lp->cisco_line_state) &&
- (myseq_diff >= 0) && (myseq_diff <= 2)) {
- /* line down -> up */
- lp->cisco_line_state = 1;
- printk(KERN_WARNING
- "UPDOWN: Line protocol on Interface %s,"
- " changed state to up\n", lp->netdev->dev->name);
- /* restart routing higher-level data across */
- }
-
- if (lp->cisco_debserint)
- printk(KERN_DEBUG "%s: HDLC "
- "myseq %lu, mineseen %lu%c, yourseen %lu, %s\n",
- lp->netdev->dev->name, last_cisco_myseq, lp->cisco_mineseen,
- ((last_cisco_myseq == lp->cisco_mineseen) ? '*' : 040),
- lp->cisco_yourseq,
- ((lp->cisco_line_state) ? "line up" : "line down"));
-
- skb = isdn_net_ciscohdlck_alloc_skb(lp, 4 + 14);
- if (!skb)
- return;
-
- p = skb_put(skb, 4 + 14);
-
- /* cisco header */
- *(u8 *)(p + 0) = CISCO_ADDR_UNICAST;
- *(u8 *)(p + 1) = CISCO_CTRL;
- *(__be16 *)(p + 2) = cpu_to_be16(CISCO_TYPE_SLARP);
-
- /* slarp keepalive */
- *(__be32 *)(p + 4) = cpu_to_be32(CISCO_SLARP_KEEPALIVE);
- *(__be32 *)(p + 8) = cpu_to_be32(lp->cisco_myseq);
- *(__be32 *)(p + 12) = cpu_to_be32(lp->cisco_yourseq);
- *(__be16 *)(p + 16) = cpu_to_be16(0xffff); // reliability, always 0xffff
- p += 18;
-
- isdn_net_write_super(lp, skb);
-
- lp->cisco_timer.expires = jiffies + lp->cisco_keepalive_period * HZ;
-
- add_timer(&lp->cisco_timer);
-}
-
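The hand-coded offsets above (0, 1, 2, 4, 8, 12 and 16) describe an 18-byte frame. Purely as an illustration of that layout, and not a structure the driver defines, the keepalive frame could be written as:

/* Sketch of the keepalive frame built above; the driver fills it by hand. */
struct cisco_slarp_keepalive_sketch {
	u8	addr;		/* CISCO_ADDR_UNICAST */
	u8	ctrl;		/* CISCO_CTRL */
	__be16	type;		/* CISCO_TYPE_SLARP */
	__be32	code;		/* CISCO_SLARP_KEEPALIVE */
	__be32	my_seq;		/* lp->cisco_myseq */
	__be32	your_seq;	/* lp->cisco_yourseq */
	__be16	reliability;	/* always 0xffff */
} __packed;			/* 4 + 14 == 18 bytes on the wire */

The SLARP request and reply frames built by the functions that follow reuse the same first eight bytes and carry address, netmask and an unused field in place of the sequence numbers.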
-static void
-isdn_net_ciscohdlck_slarp_send_request(isdn_net_local *lp)
-{
- struct sk_buff *skb;
- unsigned char *p;
-
- skb = isdn_net_ciscohdlck_alloc_skb(lp, 4 + 14);
- if (!skb)
- return;
-
- p = skb_put(skb, 4 + 14);
-
- /* cisco header */
- *(u8 *)(p + 0) = CISCO_ADDR_UNICAST;
- *(u8 *)(p + 1) = CISCO_CTRL;
- *(__be16 *)(p + 2) = cpu_to_be16(CISCO_TYPE_SLARP);
-
- /* slarp request */
- *(__be32 *)(p + 4) = cpu_to_be32(CISCO_SLARP_REQUEST);
- *(__be32 *)(p + 8) = cpu_to_be32(0); // address
- *(__be32 *)(p + 12) = cpu_to_be32(0); // netmask
- *(__be16 *)(p + 16) = cpu_to_be16(0); // unused
- p += 18;
-
- isdn_net_write_super(lp, skb);
-}
-
-static void
-isdn_net_ciscohdlck_connected(isdn_net_local *lp)
-{
- lp->cisco_myseq = 0;
- lp->cisco_mineseen = 0;
- lp->cisco_yourseq = 0;
- lp->cisco_keepalive_period = ISDN_TIMER_KEEPINT;
- lp->cisco_last_slarp_in = 0;
- lp->cisco_line_state = 0;
- lp->cisco_debserint = 0;
-
- /* send SLARP request because the interface and sequence numbers were reset */
- isdn_net_ciscohdlck_slarp_send_request(lp);
-
- timer_setup(&lp->cisco_timer,
- isdn_net_ciscohdlck_slarp_send_keepalive, 0);
- lp->cisco_timer.expires = jiffies + lp->cisco_keepalive_period * HZ;
- add_timer(&lp->cisco_timer);
-}
-
-static void
-isdn_net_ciscohdlck_disconnected(isdn_net_local *lp)
-{
- del_timer(&lp->cisco_timer);
-}
-
-static void
-isdn_net_ciscohdlck_slarp_send_reply(isdn_net_local *lp)
-{
- struct sk_buff *skb;
- unsigned char *p;
- struct in_device *in_dev = NULL;
- __be32 addr = 0; /* local ipv4 address */
- __be32 mask = 0; /* local netmask */
-
- if ((in_dev = lp->netdev->dev->ip_ptr) != NULL) {
- /* take primary(first) address of interface */
- struct in_ifaddr *ifa = in_dev->ifa_list;
- if (ifa != NULL) {
- addr = ifa->ifa_local;
- mask = ifa->ifa_mask;
- }
- }
-
- skb = isdn_net_ciscohdlck_alloc_skb(lp, 4 + 14);
- if (!skb)
- return;
-
- p = skb_put(skb, 4 + 14);
-
- /* cisco header */
- *(u8 *)(p + 0) = CISCO_ADDR_UNICAST;
- *(u8 *)(p + 1) = CISCO_CTRL;
- *(__be16 *)(p + 2) = cpu_to_be16(CISCO_TYPE_SLARP);
-
- /* slarp reply, send own ip/netmask; if the values are nonsense, the
- * remote should conclude we cannot provide it with an address via SLARP */
- *(__be32 *)(p + 4) = cpu_to_be32(CISCO_SLARP_REPLY);
- *(__be32 *)(p + 8) = addr; // address
- *(__be32 *)(p + 12) = mask; // netmask
- *(__be16 *)(p + 16) = cpu_to_be16(0); // unused
- p += 18;
-
- isdn_net_write_super(lp, skb);
-}
-
-static void
-isdn_net_ciscohdlck_slarp_in(isdn_net_local *lp, struct sk_buff *skb)
-{
- unsigned char *p;
- int period;
- u32 code;
- u32 my_seq;
- u32 your_seq;
- __be32 local;
- __be32 *addr, *mask;
-
- if (skb->len < 14)
- return;
-
- p = skb->data;
- code = be32_to_cpup((__be32 *)p);
- p += 4;
-
- switch (code) {
- case CISCO_SLARP_REQUEST:
- lp->cisco_yourseq = 0;
- isdn_net_ciscohdlck_slarp_send_reply(lp);
- break;
- case CISCO_SLARP_REPLY:
- addr = (__be32 *)p;
- mask = (__be32 *)(p + 4);
- if (*mask != cpu_to_be32(0xfffffffc))
- goto slarp_reply_out;
- if ((*addr & cpu_to_be32(3)) == cpu_to_be32(0) ||
- (*addr & cpu_to_be32(3)) == cpu_to_be32(3))
- goto slarp_reply_out;
- local = *addr ^ cpu_to_be32(3);
- printk(KERN_INFO "%s: got slarp reply: remote ip: %pI4, local ip: %pI4 mask: %pI4\n",
- lp->netdev->dev->name, addr, &local, mask);
- break;
- slarp_reply_out:
- printk(KERN_INFO "%s: got invalid slarp reply (%pI4/%pI4) - ignored\n",
- lp->netdev->dev->name, addr, mask);
- break;
- case CISCO_SLARP_KEEPALIVE:
- period = (int)((jiffies - lp->cisco_last_slarp_in
- + HZ / 2 - 1) / HZ);
- if (lp->cisco_debserint &&
- (period != lp->cisco_keepalive_period) &&
- lp->cisco_last_slarp_in) {
- printk(KERN_DEBUG "%s: Keepalive period mismatch - "
- "is %d but should be %d.\n",
- lp->netdev->dev->name, period,
- lp->cisco_keepalive_period);
- }
- lp->cisco_last_slarp_in = jiffies;
- my_seq = be32_to_cpup((__be32 *)(p + 0));
- your_seq = be32_to_cpup((__be32 *)(p + 4));
- p += 10;
- lp->cisco_yourseq = my_seq;
- lp->cisco_mineseen = your_seq;
- break;
- }
-}
-
-static void
-isdn_net_ciscohdlck_receive(isdn_net_local *lp, struct sk_buff *skb)
-{
- unsigned char *p;
- u8 addr;
- u8 ctrl;
- u16 type;
-
- if (skb->len < 4)
- goto out_free;
-
- p = skb->data;
- addr = *(u8 *)(p + 0);
- ctrl = *(u8 *)(p + 1);
- type = be16_to_cpup((__be16 *)(p + 2));
- p += 4;
- skb_pull(skb, 4);
-
- if (addr != CISCO_ADDR_UNICAST && addr != CISCO_ADDR_BROADCAST) {
- printk(KERN_WARNING "%s: Unknown Cisco addr 0x%02x\n",
- lp->netdev->dev->name, addr);
- goto out_free;
- }
- if (ctrl != CISCO_CTRL) {
- printk(KERN_WARNING "%s: Unknown Cisco ctrl 0x%02x\n",
- lp->netdev->dev->name, ctrl);
- goto out_free;
- }
-
- switch (type) {
- case CISCO_TYPE_SLARP:
- isdn_net_ciscohdlck_slarp_in(lp, skb);
- goto out_free;
- case CISCO_TYPE_CDP:
- if (lp->cisco_debserint)
- printk(KERN_DEBUG "%s: Received CDP packet. use "
- "\"no cdp enable\" on cisco.\n",
- lp->netdev->dev->name);
- goto out_free;
- default:
- /* no special cisco protocol */
- skb->protocol = htons(type);
- netif_rx(skb);
- return;
- }
-
-out_free:
- kfree_skb(skb);
-}
-
-/*
- * Got a packet from ISDN-Channel.
- */
-static void
-isdn_net_receive(struct net_device *ndev, struct sk_buff *skb)
-{
- isdn_net_local *lp = netdev_priv(ndev);
- isdn_net_local *olp = lp; /* original 'lp' */
-#ifdef CONFIG_ISDN_X25
- struct concap_proto *cprot = lp->netdev->cprot;
-#endif
- lp->transcount += skb->len;
-
- lp->stats.rx_packets++;
- lp->stats.rx_bytes += skb->len;
- if (lp->master) {
- /* Bundling: If device is a slave-device, deliver to master, also
- * handle master's statistics and hangup-timeout
- */
- ndev = lp->master;
- lp = netdev_priv(ndev);
- lp->stats.rx_packets++;
- lp->stats.rx_bytes += skb->len;
- }
- skb->dev = ndev;
- skb->pkt_type = PACKET_HOST;
- skb_reset_mac_header(skb);
-#ifdef ISDN_DEBUG_NET_DUMP
- isdn_dumppkt("R:", skb->data, skb->len, 40);
-#endif
- switch (lp->p_encap) {
- case ISDN_NET_ENCAP_ETHER:
- /* Ethernet over ISDN */
- olp->huptimer = 0;
- lp->huptimer = 0;
- skb->protocol = isdn_net_type_trans(skb, ndev);
- break;
- case ISDN_NET_ENCAP_UIHDLC:
- /* HDLC with UI-frame (for ispa with -h1 option) */
- olp->huptimer = 0;
- lp->huptimer = 0;
- skb_pull(skb, 2);
- /* Fall through */
- case ISDN_NET_ENCAP_RAWIP:
- /* RAW-IP without MAC-Header */
- olp->huptimer = 0;
- lp->huptimer = 0;
- skb->protocol = htons(ETH_P_IP);
- break;
- case ISDN_NET_ENCAP_CISCOHDLCK:
- isdn_net_ciscohdlck_receive(lp, skb);
- return;
- case ISDN_NET_ENCAP_CISCOHDLC:
- /* CISCO-HDLC IP with type field and fake I-frame-header */
- skb_pull(skb, 2);
- /* Fall through */
- case ISDN_NET_ENCAP_IPTYP:
- /* IP with type field */
- olp->huptimer = 0;
- lp->huptimer = 0;
- skb->protocol = *(__be16 *)&(skb->data[0]);
- skb_pull(skb, 2);
- if (*(unsigned short *) skb->data == 0xFFFF)
- skb->protocol = htons(ETH_P_802_3);
- break;
-#ifdef CONFIG_ISDN_PPP
- case ISDN_NET_ENCAP_SYNCPPP:
- /* huptimer is done in isdn_ppp_push_higher */
- isdn_ppp_receive(lp->netdev, olp, skb);
- return;
-#endif
-
- default:
-#ifdef CONFIG_ISDN_X25
- /* check whether a generic sync_device receiver routine exists */
- if (cprot && cprot->pops && cprot->pops->data_ind) {
- cprot->pops->data_ind(cprot, skb);
- return;
- }
-#endif /* CONFIG_ISDN_X25 */
- printk(KERN_WARNING "%s: unknown encapsulation, dropping\n",
- lp->netdev->dev->name);
- kfree_skb(skb);
- return;
- }
-
- netif_rx(skb);
- return;
-}
-
-/*
- * A packet arrived via ISDN. Search interface-chain for a corresponding
- * interface. If found, deliver packet to receiver-function and return 1,
- * else return 0.
- */
-int
-isdn_net_rcv_skb(int idx, struct sk_buff *skb)
-{
- isdn_net_dev *p = dev->rx_netdev[idx];
-
- if (p) {
- isdn_net_local *lp = p->local;
- if ((lp->flags & ISDN_NET_CONNECTED) &&
- (!lp->dialstate)) {
- isdn_net_receive(p->dev, skb);
- return 1;
- }
- }
- return 0;
-}
-
-/*
- * Build a header, depending on the encapsulation in use.
- */
-
-static int isdn_net_header(struct sk_buff *skb, struct net_device *dev,
- unsigned short type,
- const void *daddr, const void *saddr, unsigned plen)
-{
- isdn_net_local *lp = netdev_priv(dev);
- unsigned char *p;
- int len = 0;
-
- switch (lp->p_encap) {
- case ISDN_NET_ENCAP_ETHER:
- len = eth_header(skb, dev, type, daddr, saddr, plen);
- break;
-#ifdef CONFIG_ISDN_PPP
- case ISDN_NET_ENCAP_SYNCPPP:
- /* stick on a fake header to keep fragmentation code happy. */
- len = IPPP_MAX_HEADER;
- skb_push(skb, len);
- break;
-#endif
- case ISDN_NET_ENCAP_RAWIP:
- printk(KERN_WARNING "isdn_net_header called with RAW_IP!\n");
- len = 0;
- break;
- case ISDN_NET_ENCAP_IPTYP:
- /* ethernet type field */
- *((__be16 *)skb_push(skb, 2)) = htons(type);
- len = 2;
- break;
- case ISDN_NET_ENCAP_UIHDLC:
- /* HDLC with UI-Frames (for ispa with -h1 option) */
- *((__be16 *)skb_push(skb, 2)) = htons(0x0103);
- len = 2;
- break;
- case ISDN_NET_ENCAP_CISCOHDLC:
- case ISDN_NET_ENCAP_CISCOHDLCK:
- p = skb_push(skb, 4);
- *(u8 *)(p + 0) = CISCO_ADDR_UNICAST;
- *(u8 *)(p + 1) = CISCO_CTRL;
- *(__be16 *)(p + 2) = cpu_to_be16(type);
- p += 4;
- len = 4;
- break;
-#ifdef CONFIG_ISDN_X25
- default:
- /* check whether generic concap protocol routines exist */
- if (lp->netdev->cprot) {
- printk(KERN_WARNING "isdn_net_header called with concap_proto!\n");
- len = 0;
- break;
- }
- break;
-#endif /* CONFIG_ISDN_X25 */
- }
- return len;
-}
-
-static int isdn_header_cache(const struct neighbour *neigh, struct hh_cache *hh,
- __be16 type)
-{
- const struct net_device *dev = neigh->dev;
- isdn_net_local *lp = netdev_priv(dev);
-
- if (lp->p_encap == ISDN_NET_ENCAP_ETHER)
- return eth_header_cache(neigh, hh, type);
- return -1;
-}
-
-static void isdn_header_cache_update(struct hh_cache *hh,
- const struct net_device *dev,
- const unsigned char *haddr)
-{
- isdn_net_local *lp = netdev_priv(dev);
- if (lp->p_encap == ISDN_NET_ENCAP_ETHER)
- eth_header_cache_update(hh, dev, haddr);
-}
-
-static const struct header_ops isdn_header_ops = {
- .create = isdn_net_header,
- .cache = isdn_header_cache,
- .cache_update = isdn_header_cache_update,
-};
-
-/*
- * Interface-setup. (just after registering a new interface)
- */
-static int
-isdn_net_init(struct net_device *ndev)
-{
- ushort max_hlhdr_len = 0;
- int drvidx;
-
- /*
- * until binding, ask the protocol layer to reserve as much
- * headroom as any HL driver might need
- */
-
- for (drvidx = 0; drvidx < ISDN_MAX_DRIVERS; drvidx++)
- if (dev->drv[drvidx])
- if (max_hlhdr_len < dev->drv[drvidx]->interface->hl_hdrlen)
- max_hlhdr_len = dev->drv[drvidx]->interface->hl_hdrlen;
-
- ndev->hard_header_len = ETH_HLEN + max_hlhdr_len;
- return 0;
-}
-
-static void
-isdn_net_swapbind(int drvidx)
-{
- isdn_net_dev *p;
-
-#ifdef ISDN_DEBUG_NET_ICALL
- printk(KERN_DEBUG "n_fi: swapping ch of %d\n", drvidx);
-#endif
- p = dev->netdev;
- while (p) {
- if (p->local->pre_device == drvidx)
- switch (p->local->pre_channel) {
- case 0:
- p->local->pre_channel = 1;
- break;
- case 1:
- p->local->pre_channel = 0;
- break;
- }
- p = (isdn_net_dev *) p->next;
- }
-}
-
-static void
-isdn_net_swap_usage(int i1, int i2)
-{
- int u1 = dev->usage[i1] & ISDN_USAGE_EXCLUSIVE;
- int u2 = dev->usage[i2] & ISDN_USAGE_EXCLUSIVE;
-
-#ifdef ISDN_DEBUG_NET_ICALL
- printk(KERN_DEBUG "n_fi: usage of %d and %d\n", i1, i2);
-#endif
- dev->usage[i1] &= ~ISDN_USAGE_EXCLUSIVE;
- dev->usage[i1] |= u2;
- dev->usage[i2] &= ~ISDN_USAGE_EXCLUSIVE;
- dev->usage[i2] |= u1;
- isdn_info_update();
-}
-
-/*
- * An incoming call-request has arrived.
- * Search the interface-chain for an appropriate interface.
- * If found, connect the interface to the ISDN-channel and initiate
- * D- and B-Channel-setup. If secure-flag is set, accept only
- * configured phone-numbers. If callback-flag is set, initiate
- * callback-dialing.
- *
- * Return-Value: 0 = No appropriate interface for this call.
- * 1 = Call accepted
- * 2 = Reject call, wait cbdelay, then call back
- * 3 = Reject call
- * 4 = Wait cbdelay, then call back
- * 5 = No appropriate interface for this call,
- * would eventually match if CID was longer.
- */
-
-int
-isdn_net_find_icall(int di, int ch, int idx, setup_parm *setup)
-{
- char *eaz;
- int si1;
- int si2;
- int ematch;
- int wret;
- int swapped;
- int sidx = 0;
- u_long flags;
- isdn_net_dev *p;
- isdn_net_phone *n;
- char nr[ISDN_MSNLEN];
- char *my_eaz;
-
- /* Search name in netdev-chain */
- if (!setup->phone[0]) {
- nr[0] = '0';
- nr[1] = '\0';
- printk(KERN_INFO "isdn_net: Incoming call without OAD, assuming '0'\n");
- } else
- strlcpy(nr, setup->phone, ISDN_MSNLEN);
- si1 = (int) setup->si1;
- si2 = (int) setup->si2;
- if (!setup->eazmsn[0]) {
- printk(KERN_WARNING "isdn_net: Incoming call without CPN, assuming '0'\n");
- eaz = "0";
- } else
- eaz = setup->eazmsn;
- if (dev->net_verbose > 1)
- printk(KERN_INFO "isdn_net: call from %s,%d,%d -> %s\n", nr, si1, si2, eaz);
- /* Accept DATA and VOICE calls at this stage;
- * the local EAZ is checked later for allowed call types
- */
- if ((si1 != 7) && (si1 != 1)) {
- if (dev->net_verbose > 1)
- printk(KERN_INFO "isdn_net: Service-Indicator not 1 or 7, ignored\n");
- return 0;
- }
- n = (isdn_net_phone *) 0;
- p = dev->netdev;
- ematch = wret = swapped = 0;
-#ifdef ISDN_DEBUG_NET_ICALL
- printk(KERN_DEBUG "n_fi: di=%d ch=%d idx=%d usg=%d\n", di, ch, idx,
- dev->usage[idx]);
-#endif
- while (p) {
- int matchret;
- isdn_net_local *lp = p->local;
-
- /* If the last check triggered a binding-swap, revert it */
- switch (swapped) {
- case 2:
- isdn_net_swap_usage(idx, sidx);
- /* fall through */
- case 1:
- isdn_net_swapbind(di);
- break;
- }
- swapped = 0;
- /* check acceptable call types for DOV */
- my_eaz = isdn_map_eaz2msn(lp->msn, di);
- if (si1 == 1) { /* it's a DOV call, check if we allow it */
- if (*my_eaz == 'v' || *my_eaz == 'V' ||
- *my_eaz == 'b' || *my_eaz == 'B')
- my_eaz++; /* skip to allow a match */
- else
- my_eaz = NULL; /* force non match */
- } else { /* it's a DATA call, check if we allow it */
- if (*my_eaz == 'b' || *my_eaz == 'B')
- my_eaz++; /* skip to allow a match */
- }
- if (my_eaz)
- matchret = isdn_msncmp(eaz, my_eaz);
- else
- matchret = 1;
- if (!matchret)
- ematch = 1;
-
- /* Remember if more numbers eventually can match */
- if (matchret > wret)
- wret = matchret;
-#ifdef ISDN_DEBUG_NET_ICALL
- printk(KERN_DEBUG "n_fi: if='%s', l.msn=%s, l.flags=%d, l.dstate=%d\n",
- p->dev->name, lp->msn, lp->flags, lp->dialstate);
-#endif
- if ((!matchret) && /* EAZ is matching */
- (((!(lp->flags & ISDN_NET_CONNECTED)) && /* but not connected */
- (USG_NONE(dev->usage[idx]))) || /* and ch. unused or */
- ((((lp->dialstate == 4) || (lp->dialstate == 12)) && /* if dialing */
- (!(lp->flags & ISDN_NET_CALLBACK))) /* but no callback */
- )))
- {
-#ifdef ISDN_DEBUG_NET_ICALL
- printk(KERN_DEBUG "n_fi: match1, pdev=%d pch=%d\n",
- lp->pre_device, lp->pre_channel);
-#endif
- if (dev->usage[idx] & ISDN_USAGE_EXCLUSIVE) {
- if ((lp->pre_channel != ch) ||
- (lp->pre_device != di)) {
- /* Here we have a problem:
- * With an ICN card, an incoming call is always signaled
- * on the first channel of the card if both channels are
- * down. However, this channel may be bound exclusively. If the
- * second channel is free, this call should be accepted.
- * The solution is horrible, but it runs, so what:
- * We exchange the exclusive bindings of the two channels, i.e.
- * the corresponding variables in the interface structs.
- */
- if (ch == 0) {
- sidx = isdn_dc2minor(di, 1);
-#ifdef ISDN_DEBUG_NET_ICALL
- printk(KERN_DEBUG "n_fi: ch is 0\n");
-#endif
- if (USG_NONE(dev->usage[sidx])) {
- /* Second Channel is free, now see if it is bound
- * exclusive too. */
- if (dev->usage[sidx] & ISDN_USAGE_EXCLUSIVE) {
-#ifdef ISDN_DEBUG_NET_ICALL
- printk(KERN_DEBUG "n_fi: 2nd channel is down and bound\n");
-#endif
- /* Yes, swap bindings only if the original
- * binding is bound to channel 1 of this driver */
- if ((lp->pre_device == di) &&
- (lp->pre_channel == 1)) {
- isdn_net_swapbind(di);
- swapped = 1;
- } else {
- /* ... else iterate next device */
- p = (isdn_net_dev *) p->next;
- continue;
- }
- } else {
-#ifdef ISDN_DEBUG_NET_ICALL
- printk(KERN_DEBUG "n_fi: 2nd channel is down and unbound\n");
-#endif
- /* No, swap always and swap excl-usage also */
- isdn_net_swap_usage(idx, sidx);
- isdn_net_swapbind(di);
- swapped = 2;
- }
- /* Now check for exclusive binding again */
-#ifdef ISDN_DEBUG_NET_ICALL
- printk(KERN_DEBUG "n_fi: final check\n");
-#endif
- if ((dev->usage[idx] & ISDN_USAGE_EXCLUSIVE) &&
- ((lp->pre_channel != ch) ||
- (lp->pre_device != di))) {
-#ifdef ISDN_DEBUG_NET_ICALL
- printk(KERN_DEBUG "n_fi: final check failed\n");
-#endif
- p = (isdn_net_dev *) p->next;
- continue;
- }
- }
- } else {
- /* We are already on the second channel, so nothing to do */
-#ifdef ISDN_DEBUG_NET_ICALL
- printk(KERN_DEBUG "n_fi: already on 2nd channel\n");
-#endif
- }
- }
- }
-#ifdef ISDN_DEBUG_NET_ICALL
- printk(KERN_DEBUG "n_fi: match2\n");
-#endif
- n = lp->phone[0];
- if (lp->flags & ISDN_NET_SECURE) {
- while (n) {
- if (!isdn_msncmp(nr, n->num))
- break;
- n = (isdn_net_phone *) n->next;
- }
- }
- if (n || (!(lp->flags & ISDN_NET_SECURE))) {
-#ifdef ISDN_DEBUG_NET_ICALL
- printk(KERN_DEBUG "n_fi: match3\n");
-#endif
- /* matching interface found */
-
- /*
- * Is the state STOPPED?
- * If so, no dialin is allowed,
- * so reject actively.
- * */
- if (ISDN_NET_DIALMODE(*lp) == ISDN_NET_DM_OFF) {
- printk(KERN_INFO "incoming call, interface %s `stopped' -> rejected\n",
- p->dev->name);
- return 3;
- }
- /*
- * Is the interface up?
- * If not, reject the call actively.
- */
- if (!isdn_net_device_started(p)) {
- printk(KERN_INFO "%s: incoming call, interface down -> rejected\n",
- p->dev->name);
- return 3;
- }
- /* Interface is up, now see if it's a slave. If so, check whether
- * its master and the parent slave are online. If not, reject the call.
- */
- if (lp->master) {
- isdn_net_local *mlp = ISDN_MASTER_PRIV(lp);
- printk(KERN_DEBUG "ICALLslv: %s\n", p->dev->name);
- printk(KERN_DEBUG "master=%s\n", lp->master->name);
- if (mlp->flags & ISDN_NET_CONNECTED) {
- printk(KERN_DEBUG "master online\n");
- /* Master is online, find parent-slave (master if first slave) */
- while (mlp->slave) {
- if (ISDN_SLAVE_PRIV(mlp) == lp)
- break;
- mlp = ISDN_SLAVE_PRIV(mlp);
- }
- } else
- printk(KERN_DEBUG "master offline\n");
- /* Found parent, if it's offline iterate next device */
- printk(KERN_DEBUG "mlpf: %d\n", mlp->flags & ISDN_NET_CONNECTED);
- if (!(mlp->flags & ISDN_NET_CONNECTED)) {
- p = (isdn_net_dev *) p->next;
- continue;
- }
- }
- if (lp->flags & ISDN_NET_CALLBACK) {
- int chi;
- /*
- * Is the state MANUAL?
- * If so, no callback can be made,
- * so reject actively.
- * */
- if (ISDN_NET_DIALMODE(*lp) == ISDN_NET_DM_OFF) {
- printk(KERN_INFO "incoming call for callback, interface %s `off' -> rejected\n",
- p->dev->name);
- return 3;
- }
- printk(KERN_DEBUG "%s: call from %s -> %s, start callback\n",
- p->dev->name, nr, eaz);
- if (lp->phone[1]) {
- /* Grab a free ISDN-Channel */
- spin_lock_irqsave(&dev->lock, flags);
- if ((chi =
- isdn_get_free_channel(
- ISDN_USAGE_NET,
- lp->l2_proto,
- lp->l3_proto,
- lp->pre_device,
- lp->pre_channel,
- lp->msn)
- ) < 0) {
-
- printk(KERN_WARNING "isdn_net_find_icall: No channel for %s\n",
- p->dev->name);
- spin_unlock_irqrestore(&dev->lock, flags);
- return 0;
- }
- /* Setup dialstate. */
- lp->dtimer = 0;
- lp->dialstate = 11;
- /* Connect interface with channel */
- isdn_net_bind_channel(lp, chi);
-#ifdef CONFIG_ISDN_PPP
- if (lp->p_encap == ISDN_NET_ENCAP_SYNCPPP)
- if (isdn_ppp_bind(lp) < 0) {
- spin_unlock_irqrestore(&dev->lock, flags);
- isdn_net_unbind_channel(lp);
- return 0;
- }
-#endif
- spin_unlock_irqrestore(&dev->lock, flags);
- /* Initiate dialing by returning 2 or 4 */
- return (lp->flags & ISDN_NET_CBHUP) ? 2 : 4;
- } else
- printk(KERN_WARNING "isdn_net: %s: No phone number\n",
- p->dev->name);
- return 0;
- } else {
- printk(KERN_DEBUG "%s: call from %s -> %s accepted\n",
- p->dev->name, nr, eaz);
- /* if this interface is dialing, it is probably doing so on a
- different device, so free this one */
- if ((lp->dialstate == 4) || (lp->dialstate == 12)) {
-#ifdef CONFIG_ISDN_PPP
- if (lp->p_encap == ISDN_NET_ENCAP_SYNCPPP)
- isdn_ppp_free(lp);
-#endif
- isdn_net_lp_disconnected(lp);
- isdn_free_channel(lp->isdn_device, lp->isdn_channel,
- ISDN_USAGE_NET);
- }
- spin_lock_irqsave(&dev->lock, flags);
- dev->usage[idx] &= ISDN_USAGE_EXCLUSIVE;
- dev->usage[idx] |= ISDN_USAGE_NET;
- strcpy(dev->num[idx], nr);
- isdn_info_update();
- dev->st_netdev[idx] = lp->netdev;
- lp->isdn_device = di;
- lp->isdn_channel = ch;
- lp->ppp_slot = -1;
- lp->flags |= ISDN_NET_CONNECTED;
- lp->dialstate = 7;
- lp->dtimer = 0;
- lp->outgoing = 0;
- lp->huptimer = 0;
- lp->hupflags |= ISDN_WAITCHARGE;
- lp->hupflags &= ~ISDN_HAVECHARGE;
-#ifdef CONFIG_ISDN_PPP
- if (lp->p_encap == ISDN_NET_ENCAP_SYNCPPP) {
- if (isdn_ppp_bind(lp) < 0) {
- isdn_net_unbind_channel(lp);
- spin_unlock_irqrestore(&dev->lock, flags);
- return 0;
- }
- }
-#endif
- spin_unlock_irqrestore(&dev->lock, flags);
- return 1;
- }
- }
- }
- p = (isdn_net_dev *) p->next;
- }
- /* Log the ignored call if no configured EAZ/MSN matched or if verbose */
- if (!ematch || dev->net_verbose)
- printk(KERN_INFO "isdn_net: call from %s -> %d %s ignored\n", nr, di, eaz);
- return (wret == 2) ? 5 : 0;
-}
-
-/*
- * Search list of net-interfaces for an interface with given name.
- */
-isdn_net_dev *
-isdn_net_findif(char *name)
-{
- isdn_net_dev *p = dev->netdev;
-
- while (p) {
- if (!strcmp(p->dev->name, name))
- return p;
- p = (isdn_net_dev *) p->next;
- }
- return (isdn_net_dev *) NULL;
-}
-
-/*
- * Force a net-interface to dial out.
- * This is called from the userlevel-routine below or
- * from isdn_net_start_xmit().
- */
-static int
-isdn_net_force_dial_lp(isdn_net_local *lp)
-{
- if ((!(lp->flags & ISDN_NET_CONNECTED)) && !lp->dialstate) {
- int chi;
- if (lp->phone[1]) {
- ulong flags;
-
- /* Grab a free ISDN-Channel */
- spin_lock_irqsave(&dev->lock, flags);
- if ((chi = isdn_get_free_channel(
- ISDN_USAGE_NET,
- lp->l2_proto,
- lp->l3_proto,
- lp->pre_device,
- lp->pre_channel,
- lp->msn)) < 0) {
- printk(KERN_WARNING "isdn_net_force_dial: No channel for %s\n",
- lp->netdev->dev->name);
- spin_unlock_irqrestore(&dev->lock, flags);
- return -EAGAIN;
- }
- lp->dialstate = 1;
- /* Connect interface with channel */
- isdn_net_bind_channel(lp, chi);
-#ifdef CONFIG_ISDN_PPP
- if (lp->p_encap == ISDN_NET_ENCAP_SYNCPPP)
- if (isdn_ppp_bind(lp) < 0) {
- isdn_net_unbind_channel(lp);
- spin_unlock_irqrestore(&dev->lock, flags);
- return -EAGAIN;
- }
-#endif
- /* Initiate dialing */
- spin_unlock_irqrestore(&dev->lock, flags);
- isdn_net_dial();
- return 0;
- } else
- return -EINVAL;
- } else
- return -EBUSY;
-}
-
-/*
- * This is called from certain upper protocol layers (multilink ppp
- * and x25iface encapsulation module) that want to initiate dialing
- * themselves.
- */
-int
-isdn_net_dial_req(isdn_net_local *lp)
-{
- /* is there a better error code? */
- if (!(ISDN_NET_DIALMODE(*lp) == ISDN_NET_DM_AUTO)) return -EBUSY;
-
- return isdn_net_force_dial_lp(lp);
-}
-
-/*
- * Force a net-interface to dial out.
- * This is always called from within userspace (ISDN_IOCTL_NET_DIAL).
- */
-int
-isdn_net_force_dial(char *name)
-{
- isdn_net_dev *p = isdn_net_findif(name);
-
- if (!p)
- return -ENODEV;
- return (isdn_net_force_dial_lp(p->local));
-}
-
-/* The ISDN-specific entries in the device structure. */
-static const struct net_device_ops isdn_netdev_ops = {
- .ndo_init = isdn_net_init,
- .ndo_open = isdn_net_open,
- .ndo_stop = isdn_net_close,
- .ndo_do_ioctl = isdn_net_ioctl,
-
- .ndo_start_xmit = isdn_net_start_xmit,
- .ndo_get_stats = isdn_net_get_stats,
- .ndo_tx_timeout = isdn_net_tx_timeout,
-};
-
-/*
- * Helper for alloc_netdev()
- */
-static void _isdn_setup(struct net_device *dev)
-{
- isdn_net_local *lp = netdev_priv(dev);
-
- ether_setup(dev);
-
- /* Setup the generic properties */
- dev->flags = IFF_NOARP | IFF_POINTOPOINT;
-
- /* isdn prepends a header in the tx path, can't share skbs */
- dev->priv_flags &= ~IFF_TX_SKB_SHARING;
- dev->header_ops = NULL;
- dev->netdev_ops = &isdn_netdev_ops;
-
- /* for clients with MPPP, higher values may be better */
- dev->tx_queue_len = 30;
-
- lp->p_encap = ISDN_NET_ENCAP_RAWIP;
- lp->magic = ISDN_NET_MAGIC;
- lp->last = lp;
- lp->next = lp;
- lp->isdn_device = -1;
- lp->isdn_channel = -1;
- lp->pre_device = -1;
- lp->pre_channel = -1;
- lp->exclusive = -1;
- lp->ppp_slot = -1;
- lp->pppbind = -1;
- skb_queue_head_init(&lp->super_tx_queue);
- lp->l2_proto = ISDN_PROTO_L2_X75I;
- lp->l3_proto = ISDN_PROTO_L3_TRANS;
- lp->triggercps = 6000;
- lp->slavedelay = 10 * HZ;
- lp->hupflags = ISDN_INHUP; /* Do hangup even on incoming calls */
- lp->onhtime = 10; /* Default hangup-time for saving costs */
- lp->dialmax = 1;
- /* Hangup before Callback, manual dial */
- lp->flags = ISDN_NET_CBHUP | ISDN_NET_DM_MANUAL;
- lp->cbdelay = 25; /* Wait 5 secs before Callback */
- lp->dialtimeout = -1; /* Infinite Dial-Timeout */
- lp->dialwait = 5 * HZ; /* Wait 5 sec. after failed dial */
- lp->dialstarted = 0; /* Jiffies of last dial-start */
- lp->dialwait_timer = 0; /* Jiffies of earliest next dial-start */
-}
-
-/*
- * Allocate a new network-interface and initialize its data structures.
- */
-char *
-isdn_net_new(char *name, struct net_device *master)
-{
- isdn_net_dev *netdev;
-
- /* Reject a NULL name before looking it up */
- if (name == NULL)
- return NULL;
- /* Avoid creating an existing interface */
- if (isdn_net_findif(name)) {
- printk(KERN_WARNING "isdn_net: interface %s already exists\n", name);
- return NULL;
- }
- if (!(netdev = kzalloc(sizeof(isdn_net_dev), GFP_KERNEL))) {
- printk(KERN_WARNING "isdn_net: Could not allocate net-device\n");
- return NULL;
- }
- netdev->dev = alloc_netdev(sizeof(isdn_net_local), name,
- NET_NAME_UNKNOWN, _isdn_setup);
- if (!netdev->dev) {
- printk(KERN_WARNING "isdn_net: Could not allocate network device\n");
- kfree(netdev);
- return NULL;
- }
- netdev->local = netdev_priv(netdev->dev);
-
- if (master) {
- /* Device shall be a slave */
- struct net_device *p = MASTER_TO_SLAVE(master);
- struct net_device *q = master;
-
- netdev->local->master = master;
- /* Put device at end of slave-chain */
- while (p) {
- q = p;
- p = MASTER_TO_SLAVE(p);
- }
- MASTER_TO_SLAVE(q) = netdev->dev;
- } else {
- /* Device shall be a master */
- /*
- * Watchdog timer (currently) for master only.
- */
- netdev->dev->watchdog_timeo = ISDN_NET_TX_TIMEOUT;
- if (register_netdev(netdev->dev) != 0) {
- printk(KERN_WARNING "isdn_net: Could not register net-device\n");
- free_netdev(netdev->dev);
- kfree(netdev);
- return NULL;
- }
- }
- netdev->queue = netdev->local;
- spin_lock_init(&netdev->queue_lock);
-
- netdev->local->netdev = netdev;
-
- INIT_WORK(&netdev->local->tqueue, isdn_net_softint);
- spin_lock_init(&netdev->local->xmit_lock);
-
- /* Put into the netdev-chain */
- netdev->next = (void *) dev->netdev;
- dev->netdev = netdev;
- return netdev->dev->name;
-}
-
-char *
-isdn_net_newslave(char *parm)
-{
- char *p = strchr(parm, ',');
- isdn_net_dev *n;
- char newname[10];
-
- if (p) {
- /* Slave-Name MUST not be empty or overflow 'newname' */
- if (strscpy(newname, p + 1, sizeof(newname)) <= 0)
- return NULL;
- *p = 0;
- /* Master must already exist */
- if (!(n = isdn_net_findif(parm)))
- return NULL;
- /* Master must be a real interface, not a slave */
- if (n->local->master)
- return NULL;
- /* Master must not be started yet */
- if (isdn_net_device_started(n))
- return NULL;
- return (isdn_net_new(newname, n->dev));
- }
- return NULL;
-}
-
-/*
- * Set interface-parameters.
- * Always set all parameters, so the user-level application is responsible
- * for not overwriting existing setups. It has to get the current
- * setup first, if only selected parameters are to be changed.
- */
-int
-isdn_net_setcfg(isdn_net_ioctl_cfg *cfg)
-{
- isdn_net_dev *p = isdn_net_findif(cfg->name);
- ulong features;
- int i;
- int drvidx;
- int chidx;
- char drvid[25];
-
- if (p) {
- isdn_net_local *lp = p->local;
-
- /* See if any registered driver supports the features we want */
- features = ((1 << cfg->l2_proto) << ISDN_FEATURE_L2_SHIFT) |
- ((1 << cfg->l3_proto) << ISDN_FEATURE_L3_SHIFT);
- for (i = 0; i < ISDN_MAX_DRIVERS; i++)
- if (dev->drv[i])
- if ((dev->drv[i]->interface->features & features) == features)
- break;
- if (i == ISDN_MAX_DRIVERS) {
- printk(KERN_WARNING "isdn_net: No driver with selected features\n");
- return -ENODEV;
- }
- if (lp->p_encap != cfg->p_encap) {
-#ifdef CONFIG_ISDN_X25
- struct concap_proto *cprot = p->cprot;
-#endif
- if (isdn_net_device_started(p)) {
- printk(KERN_WARNING "%s: cannot change encap when if is up\n",
- p->dev->name);
- return -EBUSY;
- }
-#ifdef CONFIG_ISDN_X25
- if (cprot && cprot->pops)
- cprot->pops->proto_del(cprot);
- p->cprot = NULL;
- lp->dops = NULL;
- /* ... , prepare for configuration of new one ... */
- switch (cfg->p_encap) {
- case ISDN_NET_ENCAP_X25IFACE:
- lp->dops = &isdn_concap_reliable_dl_dops;
- }
- /* ... and allocate new one ... */
- p->cprot = isdn_concap_new(cfg->p_encap);
- /* p -> cprot == NULL now if p_encap is not supported
- by means of the concap_proto mechanism */
- /* the protocol is not configured yet; this will
- happen later when isdn_net_reset() is called */
-#endif
- }
- switch (cfg->p_encap) {
- case ISDN_NET_ENCAP_SYNCPPP:
-#ifndef CONFIG_ISDN_PPP
- printk(KERN_WARNING "%s: SyncPPP support not configured\n",
- p->dev->name);
- return -EINVAL;
-#else
- p->dev->type = ARPHRD_PPP; /* change ARP type */
- p->dev->addr_len = 0;
-#endif
- break;
- case ISDN_NET_ENCAP_X25IFACE:
-#ifndef CONFIG_ISDN_X25
- printk(KERN_WARNING "%s: isdn-x25 support not configured\n",
- p->dev->name);
- return -EINVAL;
-#else
- p->dev->type = ARPHRD_X25; /* change ARP type */
- p->dev->addr_len = 0;
-#endif
- break;
- case ISDN_NET_ENCAP_CISCOHDLCK:
- break;
- default:
- if (cfg->p_encap >= 0 &&
- cfg->p_encap <= ISDN_NET_ENCAP_MAX_ENCAP)
- break;
- printk(KERN_WARNING
- "%s: encapsulation protocol %d not supported\n",
- p->dev->name, cfg->p_encap);
- return -EINVAL;
- }
- if (strlen(cfg->drvid)) {
- /* A bind has been requested ... */
- char *c,
- *e;
-
- if (strnlen(cfg->drvid, sizeof(cfg->drvid)) ==
- sizeof(cfg->drvid))
- return -EINVAL;
- drvidx = -1;
- chidx = -1;
- strcpy(drvid, cfg->drvid);
- if ((c = strchr(drvid, ','))) {
- /* The channel-number is appended to the driver-Id with a comma */
- chidx = (int) simple_strtoul(c + 1, &e, 10);
- if (e == c)
- chidx = -1;
- *c = '\0';
- }
- for (i = 0; i < ISDN_MAX_DRIVERS; i++)
- /* Lookup driver-Id in array */
- if (!(strcmp(dev->drvid[i], drvid))) {
- drvidx = i;
- break;
- }
- if ((drvidx == -1) || (chidx == -1))
- /* Either driver-Id or channel-number invalid */
- return -ENODEV;
- } else {
- /* Parameters are valid, so get them */
- drvidx = lp->pre_device;
- chidx = lp->pre_channel;
- }
- if (cfg->exclusive > 0) {
- unsigned long flags;
-
- /* If binding is exclusive, try to grab the channel */
- spin_lock_irqsave(&dev->lock, flags);
- if ((i = isdn_get_free_channel(ISDN_USAGE_NET,
- lp->l2_proto, lp->l3_proto, drvidx,
- chidx, lp->msn)) < 0) {
- /* Grab failed, because desired channel is in use */
- lp->exclusive = -1;
- spin_unlock_irqrestore(&dev->lock, flags);
- return -EBUSY;
- }
- /* All went ok, so update isdninfo */
- dev->usage[i] = ISDN_USAGE_EXCLUSIVE;
- isdn_info_update();
- spin_unlock_irqrestore(&dev->lock, flags);
- lp->exclusive = i;
- } else {
- /* Non-exclusive binding or unbind. */
- lp->exclusive = -1;
- if ((lp->pre_device != -1) && (cfg->exclusive == -1)) {
- isdn_unexclusive_channel(lp->pre_device, lp->pre_channel);
- isdn_free_channel(lp->pre_device, lp->pre_channel, ISDN_USAGE_NET);
- drvidx = -1;
- chidx = -1;
- }
- }
- strlcpy(lp->msn, cfg->eaz, sizeof(lp->msn));
- lp->pre_device = drvidx;
- lp->pre_channel = chidx;
- lp->onhtime = cfg->onhtime;
- lp->charge = cfg->charge;
- lp->l2_proto = cfg->l2_proto;
- lp->l3_proto = cfg->l3_proto;
- lp->cbdelay = cfg->cbdelay;
- lp->dialmax = cfg->dialmax;
- lp->triggercps = cfg->triggercps;
- lp->slavedelay = cfg->slavedelay * HZ;
- lp->pppbind = cfg->pppbind;
- lp->dialtimeout = cfg->dialtimeout >= 0 ? cfg->dialtimeout * HZ : -1;
- lp->dialwait = cfg->dialwait * HZ;
- if (cfg->secure)
- lp->flags |= ISDN_NET_SECURE;
- else
- lp->flags &= ~ISDN_NET_SECURE;
- if (cfg->cbhup)
- lp->flags |= ISDN_NET_CBHUP;
- else
- lp->flags &= ~ISDN_NET_CBHUP;
- switch (cfg->callback) {
- case 0:
- lp->flags &= ~(ISDN_NET_CALLBACK | ISDN_NET_CBOUT);
- break;
- case 1:
- lp->flags |= ISDN_NET_CALLBACK;
- lp->flags &= ~ISDN_NET_CBOUT;
- break;
- case 2:
- lp->flags |= ISDN_NET_CBOUT;
- lp->flags &= ~ISDN_NET_CALLBACK;
- break;
- }
- lp->flags &= ~ISDN_NET_DIALMODE_MASK; /* first all bits off */
- if (cfg->dialmode && !(cfg->dialmode & ISDN_NET_DIALMODE_MASK)) {
- /* old isdnctrl version, where only 0 or 1 is given */
- printk(KERN_WARNING
- "Old isdnctrl version detected! Please update.\n");
- lp->flags |= ISDN_NET_DM_OFF; /* turn on `off' bit */
- }
- else {
- lp->flags |= cfg->dialmode; /* turn on selected bits */
- }
- if (cfg->chargehup)
- lp->hupflags |= ISDN_CHARGEHUP;
- else
- lp->hupflags &= ~ISDN_CHARGEHUP;
- if (cfg->ihup)
- lp->hupflags |= ISDN_INHUP;
- else
- lp->hupflags &= ~ISDN_INHUP;
- if (cfg->chargeint > 10) {
- lp->hupflags |= ISDN_CHARGEHUP | ISDN_HAVECHARGE | ISDN_MANCHARGE;
- lp->chargeint = cfg->chargeint * HZ;
- }
- if (cfg->p_encap != lp->p_encap) {
- if (cfg->p_encap == ISDN_NET_ENCAP_RAWIP) {
- p->dev->header_ops = NULL;
- p->dev->flags = IFF_NOARP | IFF_POINTOPOINT;
- } else {
- p->dev->header_ops = &isdn_header_ops;
- if (cfg->p_encap == ISDN_NET_ENCAP_ETHER)
- p->dev->flags = IFF_BROADCAST | IFF_MULTICAST;
- else
- p->dev->flags = IFF_NOARP | IFF_POINTOPOINT;
- }
- }
- lp->p_encap = cfg->p_encap;
- return 0;
- }
- return -ENODEV;
-}
-
-/*
- * Perform get-interface-parameters.ioctl
- */
-int
-isdn_net_getcfg(isdn_net_ioctl_cfg *cfg)
-{
- isdn_net_dev *p = isdn_net_findif(cfg->name);
-
- if (p) {
- isdn_net_local *lp = p->local;
-
- strcpy(cfg->eaz, lp->msn);
- cfg->exclusive = lp->exclusive;
- if (lp->pre_device >= 0) {
- sprintf(cfg->drvid, "%s,%d", dev->drvid[lp->pre_device],
- lp->pre_channel);
- } else
- cfg->drvid[0] = '\0';
- cfg->onhtime = lp->onhtime;
- cfg->charge = lp->charge;
- cfg->l2_proto = lp->l2_proto;
- cfg->l3_proto = lp->l3_proto;
- cfg->p_encap = lp->p_encap;
- cfg->secure = (lp->flags & ISDN_NET_SECURE) ? 1 : 0;
- cfg->callback = 0;
- if (lp->flags & ISDN_NET_CALLBACK)
- cfg->callback = 1;
- if (lp->flags & ISDN_NET_CBOUT)
- cfg->callback = 2;
- cfg->cbhup = (lp->flags & ISDN_NET_CBHUP) ? 1 : 0;
- cfg->dialmode = lp->flags & ISDN_NET_DIALMODE_MASK;
- cfg->chargehup = (lp->hupflags & ISDN_CHARGEHUP) ? 1 : 0;
- cfg->ihup = (lp->hupflags & ISDN_INHUP) ? 1 : 0;
- cfg->cbdelay = lp->cbdelay;
- cfg->dialmax = lp->dialmax;
- cfg->triggercps = lp->triggercps;
- cfg->slavedelay = lp->slavedelay / HZ;
- cfg->chargeint = (lp->hupflags & ISDN_CHARGEHUP) ?
- (lp->chargeint / HZ) : 0;
- cfg->pppbind = lp->pppbind;
- cfg->dialtimeout = lp->dialtimeout >= 0 ? lp->dialtimeout / HZ : -1;
- cfg->dialwait = lp->dialwait / HZ;
- if (lp->slave) {
- if (strlen(lp->slave->name) >= 10)
- strcpy(cfg->slave, "too-long");
- else
- strcpy(cfg->slave, lp->slave->name);
- } else
- cfg->slave[0] = '\0';
- if (lp->master) {
- if (strlen(lp->master->name) >= 10)
- strcpy(cfg->master, "too-long");
- else
- strcpy(cfg->master, lp->master->name);
- } else
- cfg->master[0] = '\0';
- return 0;
- }
- return -ENODEV;
-}
-
-/*
- * Add a phone-number to an interface.
- */
-int
-isdn_net_addphone(isdn_net_ioctl_phone *phone)
-{
- isdn_net_dev *p = isdn_net_findif(phone->name);
- isdn_net_phone *n;
-
- if (p) {
- if (!(n = kmalloc(sizeof(isdn_net_phone), GFP_KERNEL)))
- return -ENOMEM;
- strlcpy(n->num, phone->phone, sizeof(n->num));
- n->next = p->local->phone[phone->outgoing & 1];
- p->local->phone[phone->outgoing & 1] = n;
- return 0;
- }
- return -ENODEV;
-}
-
-/*
- * Copy a string of all phone-numbers of an interface to user space.
- * This might sleep and must be called with the isdn semaphore down.
- */
-int
-isdn_net_getphones(isdn_net_ioctl_phone *phone, char __user *phones)
-{
- isdn_net_dev *p = isdn_net_findif(phone->name);
- int inout = phone->outgoing & 1;
- int more = 0;
- int count = 0;
- isdn_net_phone *n;
-
- if (!p)
- return -ENODEV;
- inout &= 1;
- for (n = p->local->phone[inout]; n; n = n->next) {
- if (more) {
- put_user(' ', phones++);
- count++;
- }
- if (copy_to_user(phones, n->num, strlen(n->num) + 1)) {
- return -EFAULT;
- }
- phones += strlen(n->num);
- count += strlen(n->num);
- more = 1;
- }
- put_user(0, phones);
- count++;
- return count;
-}
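isdn_net_getphones() formats the numbers of the requested direction as one space-separated, NUL-terminated string in the user buffer and returns the number of bytes written, including the terminating NUL. A hedged stand-alone sketch of that formatting on an ordinary buffer (the list walk reduced to an array; names here are illustrative only):

#include <assert.h>
#include <string.h>

/* mirrors the loop in isdn_net_getphones(): numbers separated by single
 * spaces, one trailing NUL, return value counts every byte incl. the NUL */
static int format_phones(char *out, const char *const *nums, int n)
{
	int count = 0, more = 0, i;

	for (i = 0; i < n; i++) {
		if (more)
			out[count++] = ' ';
		strcpy(out + count, nums[i]);
		count += strlen(nums[i]);
		more = 1;
	}
	out[count++] = '\0';
	return count;
}

int main(void)
{
	const char *nums[] = { "0711123456", "0711654321" };
	char buf[64];

	assert(format_phones(buf, nums, 2) == 22);	/* 21 chars + NUL */
	assert(strcmp(buf, "0711123456 0711654321") == 0);
	return 0;
}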
-
-/*
- * Copy a string containing the peer's phone number of a connected interface
- * to user space.
- */
-int
-isdn_net_getpeer(isdn_net_ioctl_phone *phone, isdn_net_ioctl_phone __user *peer)
-{
- isdn_net_dev *p = isdn_net_findif(phone->name);
- int ch, dv, idx;
-
- if (!p)
- return -ENODEV;
- /*
- * Theoretical race: while this executes, the remote number might
- * become invalid (hang up) or change (new connection), resulting
-	 * in a (partially) wrong number being copied to user space.
-	 * This race is currently ignored.
- */
- ch = p->local->isdn_channel;
- dv = p->local->isdn_device;
- if (ch < 0 && dv < 0)
- return -ENOTCONN;
- idx = isdn_dc2minor(dv, ch);
- if (idx < 0)
- return -ENODEV;
- /* for pre-bound channels, we need this extra check */
- if (strncmp(dev->num[idx], "???", 3) == 0)
- return -ENOTCONN;
- strncpy(phone->phone, dev->num[idx], ISDN_MSNLEN);
- phone->outgoing = USG_OUTGOING(dev->usage[idx]);
- if (copy_to_user(peer, phone, sizeof(*peer)))
- return -EFAULT;
- return 0;
-}
-/*
- * Delete a phone-number from an interface.
- */
-int
-isdn_net_delphone(isdn_net_ioctl_phone *phone)
-{
- isdn_net_dev *p = isdn_net_findif(phone->name);
- int inout = phone->outgoing & 1;
- isdn_net_phone *n;
- isdn_net_phone *m;
-
- if (p) {
- n = p->local->phone[inout];
- m = NULL;
- while (n) {
- if (!strcmp(n->num, phone->phone)) {
- if (p->local->dial == n)
- p->local->dial = n->next;
- if (m)
- m->next = n->next;
- else
- p->local->phone[inout] = n->next;
- kfree(n);
- return 0;
- }
- m = n;
- n = (isdn_net_phone *) n->next;
- }
- return -EINVAL;
- }
- return -ENODEV;
-}
-
-/*
- * Delete all phone-numbers of an interface.
- */
-static int
-isdn_net_rmallphone(isdn_net_dev *p)
-{
- isdn_net_phone *n;
- isdn_net_phone *m;
- int i;
-
- for (i = 0; i < 2; i++) {
- n = p->local->phone[i];
- while (n) {
- m = n->next;
- kfree(n);
- n = m;
- }
- p->local->phone[i] = NULL;
- }
- p->local->dial = NULL;
- return 0;
-}
-
-/*
- * Force a hangup of a network-interface.
- */
-int
-isdn_net_force_hangup(char *name)
-{
- isdn_net_dev *p = isdn_net_findif(name);
- struct net_device *q;
-
- if (p) {
- if (p->local->isdn_device < 0)
- return 1;
- q = p->local->slave;
- /* If this interface has slaves, do a hangup for them also. */
- while (q) {
- isdn_net_hangup(q);
- q = MASTER_TO_SLAVE(q);
- }
- isdn_net_hangup(p->dev);
- return 0;
- }
- return -ENODEV;
-}
-
-/*
- * Helper-function for isdn_net_rm: Do the real work.
- */
-static int
-isdn_net_realrm(isdn_net_dev *p, isdn_net_dev *q)
-{
- u_long flags;
-
- if (isdn_net_device_started(p)) {
- return -EBUSY;
- }
-#ifdef CONFIG_ISDN_X25
- if (p->cprot && p->cprot->pops)
- p->cprot->pops->proto_del(p->cprot);
-#endif
- /* Free all phone-entries */
- isdn_net_rmallphone(p);
- /* If interface is bound exclusive, free channel-usage */
- if (p->local->exclusive != -1)
- isdn_unexclusive_channel(p->local->pre_device, p->local->pre_channel);
- if (p->local->master) {
- /* It's a slave-device, so update master's slave-pointer if necessary */
- if (((isdn_net_local *) ISDN_MASTER_PRIV(p->local))->slave ==
- p->dev)
- ((isdn_net_local *)ISDN_MASTER_PRIV(p->local))->slave =
- p->local->slave;
- } else {
- /* Unregister only if it's a master-device */
- unregister_netdev(p->dev);
- }
- /* Unlink device from chain */
- spin_lock_irqsave(&dev->lock, flags);
- if (q)
- q->next = p->next;
- else
- dev->netdev = p->next;
- if (p->local->slave) {
- /* If this interface has a slave, remove it also */
- char *slavename = p->local->slave->name;
- isdn_net_dev *n = dev->netdev;
- q = NULL;
- while (n) {
- if (!strcmp(n->dev->name, slavename)) {
- spin_unlock_irqrestore(&dev->lock, flags);
- isdn_net_realrm(n, q);
- spin_lock_irqsave(&dev->lock, flags);
- break;
- }
- q = n;
- n = (isdn_net_dev *)n->next;
- }
- }
- spin_unlock_irqrestore(&dev->lock, flags);
- /* If no more net-devices remain, disable auto-hangup timer */
- if (dev->netdev == NULL)
- isdn_timer_ctrl(ISDN_TIMER_NETHANGUP, 0);
- free_netdev(p->dev);
- kfree(p);
-
- return 0;
-}
-
-/*
- * Remove a single network-interface.
- */
-int
-isdn_net_rm(char *name)
-{
- u_long flags;
- isdn_net_dev *p;
- isdn_net_dev *q;
-
- /* Search name in netdev-chain */
- spin_lock_irqsave(&dev->lock, flags);
- p = dev->netdev;
- q = NULL;
- while (p) {
- if (!strcmp(p->dev->name, name)) {
- spin_unlock_irqrestore(&dev->lock, flags);
- return (isdn_net_realrm(p, q));
- }
- q = p;
- p = (isdn_net_dev *) p->next;
- }
- spin_unlock_irqrestore(&dev->lock, flags);
- /* If no more net-devices remain, disable auto-hangup timer */
- if (dev->netdev == NULL)
- isdn_timer_ctrl(ISDN_TIMER_NETHANGUP, 0);
- return -ENODEV;
-}
-
-/*
- * Remove all network-interfaces
- */
-int
-isdn_net_rmall(void)
-{
- u_long flags;
- int ret;
-
- /* Walk through netdev-chain */
- spin_lock_irqsave(&dev->lock, flags);
- while (dev->netdev) {
- if (!dev->netdev->local->master) {
- /* Remove master-devices only, slaves get removed with their master */
- spin_unlock_irqrestore(&dev->lock, flags);
- if ((ret = isdn_net_realrm(dev->netdev, NULL))) {
- return ret;
- }
- spin_lock_irqsave(&dev->lock, flags);
- }
- }
- dev->netdev = NULL;
- spin_unlock_irqrestore(&dev->lock, flags);
- return 0;
-}
diff --git a/drivers/isdn/i4l/isdn_net.h b/drivers/isdn/i4l/isdn_net.h
deleted file mode 100644
index cca6d68da171..000000000000
--- a/drivers/isdn/i4l/isdn_net.h
+++ /dev/null
@@ -1,151 +0,0 @@
-/* $Id: isdn_net.h,v 1.1.2.2 2004/01/12 22:37:19 keil Exp $
- *
- * header for Linux ISDN subsystem, network related functions (linklevel).
- *
- * Copyright 1994-1999 by Fritz Elfert (fritz@isdn4linux.de)
- * Copyright 1995,96 by Thinking Objects Software GmbH Wuerzburg
- * Copyright 1995,96 by Michael Hipp (Michael.Hipp@student.uni-tuebingen.de)
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-/* Definitions for hupflags: */
-#define ISDN_WAITCHARGE 1 /* did not get a charge info yet */
-#define ISDN_HAVECHARGE 2 /* We know a charge info */
-#define ISDN_CHARGEHUP 4 /* We want to use the charge mechanism */
-#define ISDN_INHUP 8 /* Even if incoming, close after huptimeout */
-#define ISDN_MANCHARGE 16 /* Charge Interval manually set */
-
-/*
- * Definitions for Cisco-HDLC header.
- */
-
-#define CISCO_ADDR_UNICAST 0x0f
-#define CISCO_ADDR_BROADCAST 0x8f
-#define CISCO_CTRL 0x00
-#define CISCO_TYPE_CDP 0x2000
-#define CISCO_TYPE_SLARP 0x8035
-#define CISCO_SLARP_REQUEST 0
-#define CISCO_SLARP_REPLY 1
-#define CISCO_SLARP_KEEPALIVE 2
-
-extern char *isdn_net_new(char *, struct net_device *);
-extern char *isdn_net_newslave(char *);
-extern int isdn_net_rm(char *);
-extern int isdn_net_rmall(void);
-extern int isdn_net_stat_callback(int, isdn_ctrl *);
-extern int isdn_net_setcfg(isdn_net_ioctl_cfg *);
-extern int isdn_net_getcfg(isdn_net_ioctl_cfg *);
-extern int isdn_net_addphone(isdn_net_ioctl_phone *);
-extern int isdn_net_getphones(isdn_net_ioctl_phone *, char __user *);
-extern int isdn_net_getpeer(isdn_net_ioctl_phone *, isdn_net_ioctl_phone __user *);
-extern int isdn_net_delphone(isdn_net_ioctl_phone *);
-extern int isdn_net_find_icall(int, int, int, setup_parm *);
-extern void isdn_net_hangup(struct net_device *);
-extern void isdn_net_dial(void);
-extern void isdn_net_autohup(void);
-extern int isdn_net_force_hangup(char *);
-extern int isdn_net_force_dial(char *);
-extern isdn_net_dev *isdn_net_findif(char *);
-extern int isdn_net_rcv_skb(int, struct sk_buff *);
-extern int isdn_net_dial_req(isdn_net_local *);
-extern void isdn_net_writebuf_skb(isdn_net_local *lp, struct sk_buff *skb);
-extern void isdn_net_write_super(isdn_net_local *lp, struct sk_buff *skb);
-
-#define ISDN_NET_MAX_QUEUE_LENGTH 2
-
-#define ISDN_MASTER_PRIV(lp) ((isdn_net_local *) netdev_priv(lp->master))
-#define ISDN_SLAVE_PRIV(lp) ((isdn_net_local *) netdev_priv(lp->slave))
-#define MASTER_TO_SLAVE(master) \
- (((isdn_net_local *) netdev_priv(master))->slave)
-
-/*
- * is this particular channel busy?
- */
-static __inline__ int isdn_net_lp_busy(isdn_net_local *lp)
-{
- if (atomic_read(&lp->frame_cnt) < ISDN_NET_MAX_QUEUE_LENGTH)
- return 0;
- else
- return 1;
-}
-
-/*
- * For the given net device, this will get a non-busy channel out of the
- * corresponding bundle. The returned channel is locked.
- */
-static __inline__ isdn_net_local *isdn_net_get_locked_lp(isdn_net_dev *nd)
-{
- unsigned long flags;
- isdn_net_local *lp;
-
- spin_lock_irqsave(&nd->queue_lock, flags);
- lp = nd->queue; /* get lp on top of queue */
- while (isdn_net_lp_busy(nd->queue)) {
- nd->queue = nd->queue->next;
- if (nd->queue == lp) { /* not found -- should never happen */
- lp = NULL;
- goto errout;
- }
- }
- lp = nd->queue;
- nd->queue = nd->queue->next;
- spin_unlock_irqrestore(&nd->queue_lock, flags);
- spin_lock(&lp->xmit_lock);
- local_bh_disable();
- return lp;
-errout:
- spin_unlock_irqrestore(&nd->queue_lock, flags);
- return lp;
-}
-
-/*
- * add a channel to a bundle
- */
-static __inline__ void isdn_net_add_to_bundle(isdn_net_dev *nd, isdn_net_local *nlp)
-{
- isdn_net_local *lp;
- unsigned long flags;
-
- spin_lock_irqsave(&nd->queue_lock, flags);
-
- lp = nd->queue;
-// printk(KERN_DEBUG "%s: lp:%s(%p) nlp:%s(%p) last(%p)\n",
-// __func__, lp->name, lp, nlp->name, nlp, lp->last);
- nlp->last = lp->last;
- lp->last->next = nlp;
- lp->last = nlp;
- nlp->next = lp;
- nd->queue = nlp;
-
- spin_unlock_irqrestore(&nd->queue_lock, flags);
-}
-/*
- * remove a channel from the bundle it belongs to
- */
-static __inline__ void isdn_net_rm_from_bundle(isdn_net_local *lp)
-{
- isdn_net_local *master_lp = lp;
- unsigned long flags;
-
- if (lp->master)
- master_lp = ISDN_MASTER_PRIV(lp);
-
-// printk(KERN_DEBUG "%s: lp:%s(%p) mlp:%s(%p) last(%p) next(%p) mndq(%p)\n",
-// __func__, lp->name, lp, master_lp->name, master_lp, lp->last, lp->next, master_lp->netdev->queue);
- spin_lock_irqsave(&master_lp->netdev->queue_lock, flags);
- lp->last->next = lp->next;
- lp->next->last = lp->last;
- if (master_lp->netdev->queue == lp) {
- master_lp->netdev->queue = lp->next;
- if (lp->next == lp) { /* last in queue */
- master_lp->netdev->queue = master_lp->netdev->local;
- }
- }
- lp->next = lp->last = lp; /* (re)set own pointers */
-// printk(KERN_DEBUG "%s: mndq(%p)\n",
-// __func__, master_lp->netdev->queue);
- spin_unlock_irqrestore(&master_lp->netdev->queue_lock, flags);
-}
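isdn_net_add_to_bundle() and isdn_net_rm_from_bundle() keep all channels of a bundle on a circular doubly-linked list threaded through the next/last pointers, with the queue pointer naming the channel that isdn_net_get_locked_lp() tries first on the next transmit. A hedged stand-alone sketch of that list bookkeeping, using simplified stand-ins rather than the real isdn_net_local/isdn_net_dev structures (locking omitted):

#include <assert.h>

struct chan {			/* stand-in for isdn_net_local */
	struct chan *next;	/* next channel in the bundle ring */
	struct chan *last;	/* previous channel in the bundle ring */
};

struct bundle {			/* stand-in for isdn_net_dev */
	struct chan *queue;	/* channel to try first on the next xmit */
	struct chan *master;	/* the master channel itself */
};

/* mirrors isdn_net_add_to_bundle(): link nlp in front of the current head */
static void bundle_add(struct bundle *nd, struct chan *nlp)
{
	struct chan *lp = nd->queue;

	nlp->last = lp->last;
	lp->last->next = nlp;
	lp->last = nlp;
	nlp->next = lp;
	nd->queue = nlp;
}

/* mirrors isdn_net_rm_from_bundle(): unlink lp and repair the ring */
static void bundle_rm(struct bundle *nd, struct chan *lp)
{
	lp->last->next = lp->next;
	lp->next->last = lp->last;
	if (nd->queue == lp) {
		nd->queue = lp->next;
		if (lp->next == lp)		/* lp was the only entry */
			nd->queue = nd->master;
	}
	lp->next = lp->last = lp;		/* lp is now a ring of one */
}

int main(void)
{
	struct chan a = { &a, &a }, b = { &b, &b };
	struct bundle nd = { &a, &a };

	bundle_add(&nd, &b);			/* ring is now b <-> a */
	assert(nd.queue == &b && b.next == &a && a.next == &b);
	bundle_rm(&nd, &b);			/* back to a alone */
	assert(nd.queue == &a && a.next == &a && a.last == &a);
	return 0;
}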
diff --git a/drivers/isdn/i4l/isdn_ppp.c b/drivers/isdn/i4l/isdn_ppp.c
deleted file mode 100644
index 7e0f419c14f8..000000000000
--- a/drivers/isdn/i4l/isdn_ppp.c
+++ /dev/null
@@ -1,3046 +0,0 @@
-/* $Id: isdn_ppp.c,v 1.1.2.3 2004/02/10 01:07:13 keil Exp $
- *
- * Linux ISDN subsystem, functions for synchronous PPP (linklevel).
- *
- * Copyright 1995,96 by Michael Hipp (Michael.Hipp@student.uni-tuebingen.de)
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#include <linux/isdn.h>
-#include <linux/poll.h>
-#include <linux/ppp-comp.h>
-#include <linux/slab.h>
-#ifdef CONFIG_IPPP_FILTER
-#include <linux/filter.h>
-#endif
-
-#include "isdn_common.h"
-#include "isdn_ppp.h"
-#include "isdn_net.h"
-
-#ifndef PPP_IPX
-#define PPP_IPX 0x002b
-#endif
-
-/* Prototypes */
-static int isdn_ppp_fill_rq(unsigned char *buf, int len, int proto, int slot);
-static int isdn_ppp_closewait(int slot);
-static void isdn_ppp_push_higher(isdn_net_dev *net_dev, isdn_net_local *lp,
- struct sk_buff *skb, int proto);
-static int isdn_ppp_if_get_unit(char *namebuf);
-static int isdn_ppp_set_compressor(struct ippp_struct *is, struct isdn_ppp_comp_data *);
-static struct sk_buff *isdn_ppp_decompress(struct sk_buff *,
- struct ippp_struct *, struct ippp_struct *, int *proto);
-static void isdn_ppp_receive_ccp(isdn_net_dev *net_dev, isdn_net_local *lp,
- struct sk_buff *skb, int proto);
-static struct sk_buff *isdn_ppp_compress(struct sk_buff *skb_in, int *proto,
- struct ippp_struct *is, struct ippp_struct *master, int type);
-static void isdn_ppp_send_ccp(isdn_net_dev *net_dev, isdn_net_local *lp,
- struct sk_buff *skb);
-
-/* New CCP stuff */
-static void isdn_ppp_ccp_kickup(struct ippp_struct *is);
-static void isdn_ppp_ccp_xmit_reset(struct ippp_struct *is, int proto,
- unsigned char code, unsigned char id,
- unsigned char *data, int len);
-static struct ippp_ccp_reset *isdn_ppp_ccp_reset_alloc(struct ippp_struct *is);
-static void isdn_ppp_ccp_reset_free(struct ippp_struct *is);
-static void isdn_ppp_ccp_reset_free_state(struct ippp_struct *is,
- unsigned char id);
-static void isdn_ppp_ccp_timer_callback(struct timer_list *t);
-static struct ippp_ccp_reset_state *isdn_ppp_ccp_reset_alloc_state(struct ippp_struct *is,
- unsigned char id);
-static void isdn_ppp_ccp_reset_trans(struct ippp_struct *is,
- struct isdn_ppp_resetparams *rp);
-static void isdn_ppp_ccp_reset_ack_rcvd(struct ippp_struct *is,
- unsigned char id);
-
-
-
-#ifdef CONFIG_ISDN_MPP
-static ippp_bundle *isdn_ppp_bundle_arr = NULL;
-
-static int isdn_ppp_mp_bundle_array_init(void);
-static int isdn_ppp_mp_init(isdn_net_local *lp, ippp_bundle *add_to);
-static void isdn_ppp_mp_receive(isdn_net_dev *net_dev, isdn_net_local *lp,
- struct sk_buff *skb);
-static void isdn_ppp_mp_cleanup(isdn_net_local *lp);
-
-static int isdn_ppp_bundle(struct ippp_struct *, int unit);
-#endif /* CONFIG_ISDN_MPP */
-
-char *isdn_ppp_revision = "$Revision: 1.1.2.3 $";
-
-static struct ippp_struct *ippp_table[ISDN_MAX_CHANNELS];
-
-static struct isdn_ppp_compressor *ipc_head = NULL;
-
-/*
- * frame log (debug)
- */
-static void
-isdn_ppp_frame_log(char *info, char *data, int len, int maxlen, int unit, int slot)
-{
- int cnt,
- j,
- i;
- char buf[80];
-
- if (len < maxlen)
- maxlen = len;
-
- for (i = 0, cnt = 0; cnt < maxlen; i++) {
- for (j = 0; j < 16 && cnt < maxlen; j++, cnt++)
- sprintf(buf + j * 3, "%02x ", (unsigned char)data[cnt]);
- printk(KERN_DEBUG "[%d/%d].%s[%d]: %s\n", unit, slot, info, i, buf);
- }
-}
-
-/*
- * unbind isdn_net_local <=> ippp-device
- * note: it can happen that we hang up/free the master before the slaves;
- * in this case we bind another lp to the master device
- */
-int
-isdn_ppp_free(isdn_net_local *lp)
-{
- struct ippp_struct *is;
-
- if (lp->ppp_slot < 0 || lp->ppp_slot >= ISDN_MAX_CHANNELS) {
- printk(KERN_ERR "%s: ppp_slot(%d) out of range\n",
- __func__, lp->ppp_slot);
- return 0;
- }
-
-#ifdef CONFIG_ISDN_MPP
- spin_lock(&lp->netdev->pb->lock);
-#endif
- isdn_net_rm_from_bundle(lp);
-#ifdef CONFIG_ISDN_MPP
- if (lp->netdev->pb->ref_ct == 1) /* last link in queue? */
- isdn_ppp_mp_cleanup(lp);
-
- lp->netdev->pb->ref_ct--;
- spin_unlock(&lp->netdev->pb->lock);
-#endif /* CONFIG_ISDN_MPP */
- if (lp->ppp_slot < 0 || lp->ppp_slot >= ISDN_MAX_CHANNELS) {
- printk(KERN_ERR "%s: ppp_slot(%d) now invalid\n",
- __func__, lp->ppp_slot);
- return 0;
- }
- is = ippp_table[lp->ppp_slot];
- if ((is->state & IPPP_CONNECT))
- isdn_ppp_closewait(lp->ppp_slot); /* force wakeup on ippp device */
- else if (is->state & IPPP_ASSIGNED)
- is->state = IPPP_OPEN; /* fallback to 'OPEN but not ASSIGNED' state */
-
- if (is->debug & 0x1)
- printk(KERN_DEBUG "isdn_ppp_free %d %lx %lx\n", lp->ppp_slot, (long) lp, (long) is->lp);
-
- is->lp = NULL; /* link is down .. set lp to NULL */
- lp->ppp_slot = -1; /* is this OK ?? */
-
- return 0;
-}
-
-/*
- * bind isdn_net_local <=> ippp-device
- *
- * This function is always called with dev->lock held, so
- * no additional lock is needed
- */
-int
-isdn_ppp_bind(isdn_net_local *lp)
-{
- int i;
- int unit = 0;
- struct ippp_struct *is;
- int retval;
-
-	if (lp->pppbind < 0) {	/* device bound to an ippp device? */
- isdn_net_dev *net_dev = dev->netdev;
- char exclusive[ISDN_MAX_CHANNELS]; /* exclusive flags */
- memset(exclusive, 0, ISDN_MAX_CHANNELS);
- while (net_dev) { /* step through net devices to find exclusive minors */
- isdn_net_local *lp = net_dev->local;
- if (lp->pppbind >= 0)
- exclusive[lp->pppbind] = 1;
- net_dev = net_dev->next;
- }
- /*
- * search a free device / slot
- */
- for (i = 0; i < ISDN_MAX_CHANNELS; i++) {
- if (ippp_table[i]->state == IPPP_OPEN && !exclusive[ippp_table[i]->minor]) { /* OPEN, but not connected! */
- break;
- }
- }
- } else {
- for (i = 0; i < ISDN_MAX_CHANNELS; i++) {
- if (ippp_table[i]->minor == lp->pppbind &&
- (ippp_table[i]->state & IPPP_OPEN) == IPPP_OPEN)
- break;
- }
- }
-
- if (i >= ISDN_MAX_CHANNELS) {
- printk(KERN_WARNING "isdn_ppp_bind: Can't find a (free) connection to the ipppd daemon.\n");
- retval = -1;
- goto out;
- }
- /* get unit number from interface name .. ugly! */
- unit = isdn_ppp_if_get_unit(lp->netdev->dev->name);
- if (unit < 0) {
- printk(KERN_ERR "isdn_ppp_bind: illegal interface name %s.\n",
- lp->netdev->dev->name);
- retval = -1;
- goto out;
- }
-
- lp->ppp_slot = i;
- is = ippp_table[i];
- is->lp = lp;
- is->unit = unit;
- is->state = IPPP_OPEN | IPPP_ASSIGNED; /* assigned to a netdevice but not connected */
-#ifdef CONFIG_ISDN_MPP
- retval = isdn_ppp_mp_init(lp, NULL);
- if (retval < 0)
- goto out;
-#endif /* CONFIG_ISDN_MPP */
-
- retval = lp->ppp_slot;
-
-out:
- return retval;
-}
-
-/*
- * kick the ipppd on the device
- * (wakes up daemon after B-channel connect)
- */
-
-void
-isdn_ppp_wakeup_daemon(isdn_net_local *lp)
-{
- if (lp->ppp_slot < 0 || lp->ppp_slot >= ISDN_MAX_CHANNELS) {
- printk(KERN_ERR "%s: ppp_slot(%d) out of range\n",
- __func__, lp->ppp_slot);
- return;
- }
- ippp_table[lp->ppp_slot]->state = IPPP_OPEN | IPPP_CONNECT | IPPP_NOBLOCK;
- wake_up_interruptible(&ippp_table[lp->ppp_slot]->wq);
-}
-
-/*
- * there was a hangup on the netdevice
- * force wakeup of the ippp device
- * go into 'device waits for release' state
- */
-static int
-isdn_ppp_closewait(int slot)
-{
- struct ippp_struct *is;
-
- if (slot < 0 || slot >= ISDN_MAX_CHANNELS) {
- printk(KERN_ERR "%s: slot(%d) out of range\n",
- __func__, slot);
- return 0;
- }
- is = ippp_table[slot];
- if (is->state)
- wake_up_interruptible(&is->wq);
- is->state = IPPP_CLOSEWAIT;
- return 1;
-}
-
-/*
- * isdn_ppp_find_slot / isdn_ppp_free_slot
- */
-
-static int
-isdn_ppp_get_slot(void)
-{
- int i;
- for (i = 0; i < ISDN_MAX_CHANNELS; i++) {
- if (!ippp_table[i]->state)
- return i;
- }
- return -1;
-}
-
-/*
- * isdn_ppp_open
- */
-
-int
-isdn_ppp_open(int min, struct file *file)
-{
- int slot;
- struct ippp_struct *is;
-
- if (min < 0 || min >= ISDN_MAX_CHANNELS)
- return -ENODEV;
-
- slot = isdn_ppp_get_slot();
- if (slot < 0) {
- return -EBUSY;
- }
- is = file->private_data = ippp_table[slot];
-
- printk(KERN_DEBUG "ippp, open, slot: %d, minor: %d, state: %04x\n",
- slot, min, is->state);
-
- /* compression stuff */
- is->link_compressor = is->compressor = NULL;
- is->link_decompressor = is->decompressor = NULL;
- is->link_comp_stat = is->comp_stat = NULL;
- is->link_decomp_stat = is->decomp_stat = NULL;
- is->compflags = 0;
-
- is->reset = isdn_ppp_ccp_reset_alloc(is);
- if (!is->reset)
- return -ENOMEM;
-
- is->lp = NULL;
- is->mp_seqno = 0; /* MP sequence number */
- is->pppcfg = 0; /* ppp configuration */
- is->mpppcfg = 0; /* mppp configuration */
- is->last_link_seqno = -1; /* MP: maybe set to Bundle-MIN, when joining a bundle ?? */
- is->unit = -1; /* set, when we have our interface */
- is->mru = 1524; /* MRU, default 1524 */
- is->maxcid = 16; /* VJ: maxcid */
- is->tk = current;
- init_waitqueue_head(&is->wq);
- is->first = is->rq + NUM_RCV_BUFFS - 1; /* receive queue */
- is->last = is->rq;
- is->minor = min;
-#ifdef CONFIG_ISDN_PPP_VJ
- /*
- * VJ header compression init
- */
- is->slcomp = slhc_init(16, 16); /* not necessary for 2. link in bundle */
- if (IS_ERR(is->slcomp)) {
- isdn_ppp_ccp_reset_free(is);
- return PTR_ERR(is->slcomp);
- }
-#endif
-#ifdef CONFIG_IPPP_FILTER
- is->pass_filter = NULL;
- is->active_filter = NULL;
-#endif
- is->state = IPPP_OPEN;
-
- return 0;
-}
-
-/*
- * release ippp device
- */
-void
-isdn_ppp_release(int min, struct file *file)
-{
- int i;
- struct ippp_struct *is;
-
- if (min < 0 || min >= ISDN_MAX_CHANNELS)
- return;
- is = file->private_data;
-
- if (!is) {
- printk(KERN_ERR "%s: no file->private_data\n", __func__);
- return;
- }
- if (is->debug & 0x1)
- printk(KERN_DEBUG "ippp: release, minor: %d %lx\n", min, (long) is->lp);
-
- if (is->lp) { /* a lp address says: this link is still up */
- isdn_net_dev *p = is->lp->netdev;
-
- if (!p) {
- printk(KERN_ERR "%s: no lp->netdev\n", __func__);
- return;
- }
- is->state &= ~IPPP_CONNECT; /* -> effect: no call of wakeup */
- /*
- * isdn_net_hangup() calls isdn_ppp_free()
- * isdn_ppp_free() sets is->lp to NULL and lp->ppp_slot to -1
- * removing the IPPP_CONNECT flag omits calling of isdn_ppp_wakeup_daemon()
- */
- isdn_net_hangup(p->dev);
- }
- for (i = 0; i < NUM_RCV_BUFFS; i++) {
- kfree(is->rq[i].buf);
- is->rq[i].buf = NULL;
- }
- is->first = is->rq + NUM_RCV_BUFFS - 1; /* receive queue */
- is->last = is->rq;
-
-#ifdef CONFIG_ISDN_PPP_VJ
-/* TODO: if this was the previous master: link the slcomp to the new master */
- slhc_free(is->slcomp);
- is->slcomp = NULL;
-#endif
-#ifdef CONFIG_IPPP_FILTER
- if (is->pass_filter) {
- bpf_prog_destroy(is->pass_filter);
- is->pass_filter = NULL;
- }
-
- if (is->active_filter) {
- bpf_prog_destroy(is->active_filter);
- is->active_filter = NULL;
- }
-#endif
-
-/* TODO: if this was the previous master: link the stuff to the new master */
- if (is->comp_stat)
- is->compressor->free(is->comp_stat);
- if (is->link_comp_stat)
- is->link_compressor->free(is->link_comp_stat);
- if (is->link_decomp_stat)
- is->link_decompressor->free(is->link_decomp_stat);
- if (is->decomp_stat)
- is->decompressor->free(is->decomp_stat);
- is->compressor = is->link_compressor = NULL;
- is->decompressor = is->link_decompressor = NULL;
- is->comp_stat = is->link_comp_stat = NULL;
- is->decomp_stat = is->link_decomp_stat = NULL;
-
- /* Clean up if necessary */
- if (is->reset)
- isdn_ppp_ccp_reset_free(is);
-
- /* this slot is ready for new connections */
- is->state = 0;
-}
-
-/*
- * get_arg .. ioctl helper
- */
-static int
-get_arg(void __user *b, void *val, int len)
-{
- if (len <= 0)
- len = sizeof(void *);
- if (copy_from_user(val, b, len))
- return -EFAULT;
- return 0;
-}
-
-/*
- * set arg .. ioctl helper
- */
-static int
-set_arg(void __user *b, void *val, int len)
-{
- if (len <= 0)
- len = sizeof(void *);
- if (copy_to_user(b, val, len))
- return -EFAULT;
- return 0;
-}
-
-#ifdef CONFIG_IPPP_FILTER
-static int get_filter(void __user *arg, struct sock_filter **p)
-{
- struct sock_fprog uprog;
- struct sock_filter *code = NULL;
- int len;
-
- if (copy_from_user(&uprog, arg, sizeof(uprog)))
- return -EFAULT;
-
- if (!uprog.len) {
- *p = NULL;
- return 0;
- }
-
- /* uprog.len is unsigned short, so no overflow here */
- len = uprog.len * sizeof(struct sock_filter);
- code = memdup_user(uprog.filter, len);
- if (IS_ERR(code))
- return PTR_ERR(code);
-
- *p = code;
- return uprog.len;
-}
-#endif /* CONFIG_IPPP_FILTER */
-
-/*
- * ippp device ioctl
- */
-int
-isdn_ppp_ioctl(int min, struct file *file, unsigned int cmd, unsigned long arg)
-{
- unsigned long val;
- int r, i, j;
- struct ippp_struct *is;
- isdn_net_local *lp;
- struct isdn_ppp_comp_data data;
- void __user *argp = (void __user *)arg;
-
- is = file->private_data;
- lp = is->lp;
-
- if (is->debug & 0x1)
- printk(KERN_DEBUG "isdn_ppp_ioctl: minor: %d cmd: %x state: %x\n", min, cmd, is->state);
-
- if (!(is->state & IPPP_OPEN))
- return -EINVAL;
-
- switch (cmd) {
- case PPPIOCBUNDLE:
-#ifdef CONFIG_ISDN_MPP
- if (!(is->state & IPPP_CONNECT))
- return -EINVAL;
- if ((r = get_arg(argp, &val, sizeof(val))))
- return r;
- printk(KERN_DEBUG "iPPP-bundle: minor: %d, slave unit: %d, master unit: %d\n",
- (int) min, (int) is->unit, (int) val);
- return isdn_ppp_bundle(is, val);
-#else
- return -1;
-#endif
- break;
- case PPPIOCGUNIT: /* get ppp/isdn unit number */
- if ((r = set_arg(argp, &is->unit, sizeof(is->unit))))
- return r;
- break;
- case PPPIOCGIFNAME:
- if (!lp)
- return -EINVAL;
- if ((r = set_arg(argp, lp->netdev->dev->name,
- strlen(lp->netdev->dev->name))))
- return r;
- break;
- case PPPIOCGMPFLAGS: /* get configuration flags */
- if ((r = set_arg(argp, &is->mpppcfg, sizeof(is->mpppcfg))))
- return r;
- break;
- case PPPIOCSMPFLAGS: /* set configuration flags */
- if ((r = get_arg(argp, &val, sizeof(val))))
- return r;
- is->mpppcfg = val;
- break;
- case PPPIOCGFLAGS: /* get configuration flags */
- if ((r = set_arg(argp, &is->pppcfg, sizeof(is->pppcfg))))
- return r;
- break;
- case PPPIOCSFLAGS: /* set configuration flags */
- if ((r = get_arg(argp, &val, sizeof(val)))) {
- return r;
- }
- if (val & SC_ENABLE_IP && !(is->pppcfg & SC_ENABLE_IP) && (is->state & IPPP_CONNECT)) {
- if (lp) {
- /* OK .. we are ready to send buffers */
- is->pppcfg = val; /* isdn_ppp_xmit test for SC_ENABLE_IP !!! */
- netif_wake_queue(lp->netdev->dev);
- break;
- }
- }
- is->pppcfg = val;
- break;
- case PPPIOCGIDLE: /* get idle time information */
- if (lp) {
- struct ppp_idle pidle;
- pidle.xmit_idle = pidle.recv_idle = lp->huptimer;
- if ((r = set_arg(argp, &pidle, sizeof(struct ppp_idle))))
- return r;
- }
- break;
- case PPPIOCSMRU: /* set receive unit size for PPP */
- if ((r = get_arg(argp, &val, sizeof(val))))
- return r;
- is->mru = val;
- break;
- case PPPIOCSMPMRU:
- break;
- case PPPIOCSMPMTU:
- break;
- case PPPIOCSMAXCID: /* set the maximum compression slot id */
- if ((r = get_arg(argp, &val, sizeof(val))))
- return r;
- val++;
- if (is->maxcid != val) {
-#ifdef CONFIG_ISDN_PPP_VJ
- struct slcompress *sltmp;
-#endif
- if (is->debug & 0x1)
- printk(KERN_DEBUG "ippp, ioctl: changed MAXCID to %ld\n", val);
- is->maxcid = val;
-#ifdef CONFIG_ISDN_PPP_VJ
- sltmp = slhc_init(16, val);
- if (IS_ERR(sltmp))
- return PTR_ERR(sltmp);
- if (is->slcomp)
- slhc_free(is->slcomp);
- is->slcomp = sltmp;
-#endif
- }
- break;
- case PPPIOCGDEBUG:
- if ((r = set_arg(argp, &is->debug, sizeof(is->debug))))
- return r;
- break;
- case PPPIOCSDEBUG:
- if ((r = get_arg(argp, &val, sizeof(val))))
- return r;
- is->debug = val;
- break;
- case PPPIOCGCOMPRESSORS:
- {
- unsigned long protos[8] = {0,};
- struct isdn_ppp_compressor *ipc = ipc_head;
- while (ipc) {
- j = ipc->num / (sizeof(long) * 8);
- i = ipc->num % (sizeof(long) * 8);
- if (j < 8)
- protos[j] |= (1UL << i);
- ipc = ipc->next;
- }
- if ((r = set_arg(argp, protos, 8 * sizeof(long))))
- return r;
- }
- break;
- case PPPIOCSCOMPRESSOR:
- if ((r = get_arg(argp, &data, sizeof(struct isdn_ppp_comp_data))))
- return r;
- return isdn_ppp_set_compressor(is, &data);
- case PPPIOCGCALLINFO:
- {
- struct pppcallinfo pci;
- memset((char *)&pci, 0, sizeof(struct pppcallinfo));
- if (lp)
- {
- strncpy(pci.local_num, lp->msn, 63);
- if (lp->dial) {
- strncpy(pci.remote_num, lp->dial->num, 63);
- }
- pci.charge_units = lp->charge;
- if (lp->outgoing)
- pci.calltype = CALLTYPE_OUTGOING;
- else
- pci.calltype = CALLTYPE_INCOMING;
- if (lp->flags & ISDN_NET_CALLBACK)
- pci.calltype |= CALLTYPE_CALLBACK;
- }
- return set_arg(argp, &pci, sizeof(struct pppcallinfo));
- }
-#ifdef CONFIG_IPPP_FILTER
- case PPPIOCSPASS:
- {
- struct sock_fprog_kern fprog;
- struct sock_filter *code;
- int err, len = get_filter(argp, &code);
-
- if (len < 0)
- return len;
-
- fprog.len = len;
- fprog.filter = code;
-
- if (is->pass_filter) {
- bpf_prog_destroy(is->pass_filter);
- is->pass_filter = NULL;
- }
- if (fprog.filter != NULL)
- err = bpf_prog_create(&is->pass_filter, &fprog);
- else
- err = 0;
- kfree(code);
-
- return err;
- }
- case PPPIOCSACTIVE:
- {
- struct sock_fprog_kern fprog;
- struct sock_filter *code;
- int err, len = get_filter(argp, &code);
-
- if (len < 0)
- return len;
-
- fprog.len = len;
- fprog.filter = code;
-
- if (is->active_filter) {
- bpf_prog_destroy(is->active_filter);
- is->active_filter = NULL;
- }
- if (fprog.filter != NULL)
- err = bpf_prog_create(&is->active_filter, &fprog);
- else
- err = 0;
- kfree(code);
-
- return err;
- }
-#endif /* CONFIG_IPPP_FILTER */
- default:
- break;
- }
- return 0;
-}
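PPPIOCSPASS and PPPIOCSACTIVE take a classic BPF program (struct sock_fprog); the receive and transmit paths below run it over each frame with a four-byte pseudo PPP header prepended, and a return value of 0 means the frame is filtered out. A hedged user-space sketch that installs a trivial accept-everything pass filter; the /dev/ippp0 device path is an assumption, and a real filter would actually inspect the packet:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/filter.h>	/* struct sock_filter, struct sock_fprog, BPF_STMT */
#include <linux/ppp-ioctl.h>	/* PPPIOCSPASS */

int main(void)
{
	/* one classic BPF instruction: "return 0xffffffff", i.e. accept the
	 * packet; a program returning 0 would mark it as filtered */
	struct sock_filter insns[] = {
		BPF_STMT(BPF_RET | BPF_K, 0xffffffffU),
	};
	struct sock_fprog prog = { .len = 1, .filter = insns };
	int fd = open("/dev/ippp0", O_RDWR);	/* assumed device node */

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (ioctl(fd, PPPIOCSPASS, &prog) < 0)
		perror("PPPIOCSPASS");
	close(fd);
	return 0;
}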
-
-__poll_t
-isdn_ppp_poll(struct file *file, poll_table *wait)
-{
- __poll_t mask;
- struct ippp_buf_queue *bf, *bl;
- u_long flags;
- struct ippp_struct *is;
-
- is = file->private_data;
-
- if (is->debug & 0x2)
- printk(KERN_DEBUG "isdn_ppp_poll: minor: %d\n",
- iminor(file_inode(file)));
-
- /* just registers wait_queue hook. This doesn't really wait. */
- poll_wait(file, &is->wq, wait);
-
- if (!(is->state & IPPP_OPEN)) {
- if (is->state == IPPP_CLOSEWAIT)
- return EPOLLHUP;
- printk(KERN_DEBUG "isdn_ppp: device not open\n");
- return EPOLLERR;
- }
- /* we're always ready to send .. */
- mask = EPOLLOUT | EPOLLWRNORM;
-
- spin_lock_irqsave(&is->buflock, flags);
- bl = is->last;
- bf = is->first;
- /*
- * if IPPP_NOBLOCK is set we return even if we have nothing to read
- */
- if (bf->next != bl || (is->state & IPPP_NOBLOCK)) {
- is->state &= ~IPPP_NOBLOCK;
- mask |= EPOLLIN | EPOLLRDNORM;
- }
- spin_unlock_irqrestore(&is->buflock, flags);
- return mask;
-}
-
-/*
- * fill up isdn_ppp_read() queue ..
- */
-
-static int
-isdn_ppp_fill_rq(unsigned char *buf, int len, int proto, int slot)
-{
- struct ippp_buf_queue *bf, *bl;
- u_long flags;
- u_char *nbuf;
- struct ippp_struct *is;
-
- if (slot < 0 || slot >= ISDN_MAX_CHANNELS) {
- printk(KERN_WARNING "ippp: illegal slot(%d).\n", slot);
- return 0;
- }
- is = ippp_table[slot];
-
- if (!(is->state & IPPP_CONNECT)) {
- printk(KERN_DEBUG "ippp: device not activated.\n");
- return 0;
- }
- nbuf = kmalloc(len + 4, GFP_ATOMIC);
- if (!nbuf) {
- printk(KERN_WARNING "ippp: Can't alloc buf\n");
- return 0;
- }
- nbuf[0] = PPP_ALLSTATIONS;
- nbuf[1] = PPP_UI;
- nbuf[2] = proto >> 8;
- nbuf[3] = proto & 0xff;
- memcpy(nbuf + 4, buf, len);
-
- spin_lock_irqsave(&is->buflock, flags);
- bf = is->first;
- bl = is->last;
-
- if (bf == bl) {
- printk(KERN_WARNING "ippp: Queue is full; discarding first buffer\n");
- bf = bf->next;
- kfree(bf->buf);
- is->first = bf;
- }
- bl->buf = (char *) nbuf;
- bl->len = len + 4;
-
- is->last = bl->next;
- spin_unlock_irqrestore(&is->buflock, flags);
- wake_up_interruptible(&is->wq);
- return len;
-}
-
-/*
- * read() .. non-blocking: ipppd calls it only after select()
- * reports that there is data
- */
-
-int
-isdn_ppp_read(int min, struct file *file, char __user *buf, int count)
-{
- struct ippp_struct *is;
- struct ippp_buf_queue *b;
- u_long flags;
- u_char *save_buf;
-
- is = file->private_data;
-
- if (!(is->state & IPPP_OPEN))
- return 0;
-
- spin_lock_irqsave(&is->buflock, flags);
- b = is->first->next;
- save_buf = b->buf;
- if (!save_buf) {
- spin_unlock_irqrestore(&is->buflock, flags);
- return -EAGAIN;
- }
- if (b->len < count)
- count = b->len;
- b->buf = NULL;
- is->first = b;
-
- spin_unlock_irqrestore(&is->buflock, flags);
- if (copy_to_user(buf, save_buf, count))
- count = -EFAULT;
- kfree(save_buf);
-
- return count;
-}
-
-/*
- * ipppd wants to write a packet to the card .. non-blocking
- */
-
-int
-isdn_ppp_write(int min, struct file *file, const char __user *buf, int count)
-{
- isdn_net_local *lp;
- struct ippp_struct *is;
- int proto;
-
- is = file->private_data;
-
- if (!(is->state & IPPP_CONNECT))
- return 0;
-
- lp = is->lp;
-
- /* -> push it directly to the lowlevel interface */
-
- if (!lp)
- printk(KERN_DEBUG "isdn_ppp_write: lp == NULL\n");
- else {
- if (lp->isdn_device < 0 || lp->isdn_channel < 0) {
- unsigned char protobuf[4];
- /*
- * Don't reset huptimer for
- * LCP packets. (Echo requests).
- */
- if (copy_from_user(protobuf, buf, 4))
- return -EFAULT;
-
- proto = PPP_PROTOCOL(protobuf);
- if (proto != PPP_LCP)
- lp->huptimer = 0;
-
- return 0;
- }
-
- if ((dev->drv[lp->isdn_device]->flags & DRV_FLAG_RUNNING) &&
- lp->dialstate == 0 &&
- (lp->flags & ISDN_NET_CONNECTED)) {
- unsigned short hl;
- struct sk_buff *skb;
- unsigned char *cpy_buf;
- /*
- * we need to reserve enough space in front of
- * sk_buff. old call to dev_alloc_skb only reserved
-			 * 16 bytes, now we check how much headroom the driver wants
- */
- hl = dev->drv[lp->isdn_device]->interface->hl_hdrlen;
- skb = alloc_skb(hl + count, GFP_ATOMIC);
- if (!skb) {
- printk(KERN_WARNING "isdn_ppp_write: out of memory!\n");
- return count;
- }
- skb_reserve(skb, hl);
- cpy_buf = skb_put(skb, count);
- if (copy_from_user(cpy_buf, buf, count))
- {
- kfree_skb(skb);
- return -EFAULT;
- }
-
- /*
- * Don't reset huptimer for
- * LCP packets. (Echo requests).
- */
- proto = PPP_PROTOCOL(cpy_buf);
- if (proto != PPP_LCP)
- lp->huptimer = 0;
-
- if (is->debug & 0x40) {
- printk(KERN_DEBUG "ppp xmit: len %d\n", (int) skb->len);
- isdn_ppp_frame_log("xmit", skb->data, skb->len, 32, is->unit, lp->ppp_slot);
- }
-
- isdn_ppp_send_ccp(lp->netdev, lp, skb); /* keeps CCP/compression states in sync */
-
- isdn_net_write_super(lp, skb);
- }
- }
- return count;
-}
-
-/*
- * init memory, structures etc.
- */
-
-int
-isdn_ppp_init(void)
-{
- int i,
- j;
-
-#ifdef CONFIG_ISDN_MPP
- if (isdn_ppp_mp_bundle_array_init() < 0)
- return -ENOMEM;
-#endif /* CONFIG_ISDN_MPP */
-
- for (i = 0; i < ISDN_MAX_CHANNELS; i++) {
- if (!(ippp_table[i] = kzalloc(sizeof(struct ippp_struct), GFP_KERNEL))) {
- printk(KERN_WARNING "isdn_ppp_init: Could not alloc ippp_table\n");
- for (j = 0; j < i; j++)
- kfree(ippp_table[j]);
- return -1;
- }
- spin_lock_init(&ippp_table[i]->buflock);
- ippp_table[i]->state = 0;
- ippp_table[i]->first = ippp_table[i]->rq + NUM_RCV_BUFFS - 1;
- ippp_table[i]->last = ippp_table[i]->rq;
-
- for (j = 0; j < NUM_RCV_BUFFS; j++) {
- ippp_table[i]->rq[j].buf = NULL;
- ippp_table[i]->rq[j].last = ippp_table[i]->rq +
- (NUM_RCV_BUFFS + j - 1) % NUM_RCV_BUFFS;
- ippp_table[i]->rq[j].next = ippp_table[i]->rq + (j + 1) % NUM_RCV_BUFFS;
- }
- }
- return 0;
-}
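isdn_ppp_init() wires each slot's NUM_RCV_BUFFS receive buffers into a fixed circular list; isdn_ppp_fill_rq() and isdn_ppp_read() above then treat first/last as a ring where writes land at last, reads consume at first->next, and first == last means the ring is full and the oldest buffer gets dropped. A hedged index-based sketch of the same discipline (the NUM_RCV_BUFFS value is assumed, and the buffers are reduced to a filled flag):

#include <assert.h>

#define NUM_RCV_BUFFS 64	/* assumed value of the real constant */

struct ring {
	int first, last;
	int filled[NUM_RCV_BUFFS];
};

static void ring_init(struct ring *r)
{
	int j;

	r->first = NUM_RCV_BUFFS - 1;	/* is->first = is->rq + NUM_RCV_BUFFS - 1 */
	r->last = 0;			/* is->last  = is->rq */
	for (j = 0; j < NUM_RCV_BUFFS; j++)
		r->filled[j] = 0;
}

/* like isdn_ppp_fill_rq(): write at 'last'; when full, drop the oldest slot */
static void ring_write(struct ring *r)
{
	if (r->first == r->last) {
		r->first = (r->first + 1) % NUM_RCV_BUFFS;
		r->filled[r->first] = 0;	/* oldest buffer dropped */
	}
	r->filled[r->last] = 1;
	r->last = (r->last + 1) % NUM_RCV_BUFFS;
}

/* like isdn_ppp_read(): consume the slot after 'first', or fail like -EAGAIN */
static int ring_read(struct ring *r)
{
	int slot = (r->first + 1) % NUM_RCV_BUFFS;

	if (!r->filled[slot])
		return -1;
	r->filled[slot] = 0;
	r->first = slot;
	return slot;
}

int main(void)
{
	struct ring r;

	ring_init(&r);
	assert(ring_read(&r) == -1);	/* empty right after init */
	ring_write(&r);
	assert(ring_read(&r) == 0);	/* first packet lands in rq[0] */
	return 0;
}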
-
-void
-isdn_ppp_cleanup(void)
-{
- int i;
-
- for (i = 0; i < ISDN_MAX_CHANNELS; i++)
- kfree(ippp_table[i]);
-
-#ifdef CONFIG_ISDN_MPP
- kfree(isdn_ppp_bundle_arr);
-#endif /* CONFIG_ISDN_MPP */
-
-}
-
-/*
- * check for address/control field and skip if allowed
- * retval != 0 -> discard packet silently
- */
-static int isdn_ppp_skip_ac(struct ippp_struct *is, struct sk_buff *skb)
-{
- if (skb->len < 1)
- return -1;
-
- if (skb->data[0] == 0xff) {
- if (skb->len < 2)
- return -1;
-
- if (skb->data[1] != 0x03)
- return -1;
-
- // skip address/control (AC) field
- skb_pull(skb, 2);
- } else {
- if (is->pppcfg & SC_REJ_COMP_AC)
- // if AC compression was not negotiated, but used, discard packet
- return -1;
- }
- return 0;
-}
-
-/*
- * get the PPP protocol header and pull skb
- * retval < 0 -> discard packet silently
- */
-static int isdn_ppp_strip_proto(struct sk_buff *skb)
-{
- int proto;
-
- if (skb->len < 1)
- return -1;
-
- if (skb->data[0] & 0x1) {
- // protocol field is compressed
- proto = skb->data[0];
- skb_pull(skb, 1);
- } else {
- if (skb->len < 2)
- return -1;
- proto = ((int) skb->data[0] << 8) + skb->data[1];
- skb_pull(skb, 2);
- }
- return proto;
-}
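isdn_ppp_strip_proto() relies on the PPP convention (RFC 1661) that the first octet of every two-byte protocol number is even and the second is odd, so a frame whose first octet is odd can only carry a protocol-field-compressed header: PPP_IP, for example, arrives either as the full field 0x00 0x21 or, when PFC was negotiated, as the single byte 0x21. A small stand-alone sketch of the same decision on a plain buffer:

#include <assert.h>
#include <stddef.h>

/* same decision as isdn_ppp_strip_proto(), on a flat byte buffer */
static int strip_proto(const unsigned char *data, size_t len, size_t *pulled)
{
	if (len < 1)
		return -1;
	if (data[0] & 0x1) {		/* odd first octet: compressed, one byte */
		*pulled = 1;
		return data[0];
	}
	if (len < 2)
		return -1;
	*pulled = 2;
	return (data[0] << 8) | data[1];
}

int main(void)
{
	unsigned char full[] = { 0x00, 0x21 };	/* uncompressed PPP_IP */
	unsigned char pfc[]  = { 0x21 };	/* protocol-field-compressed PPP_IP */
	size_t pulled;

	assert(strip_proto(full, sizeof(full), &pulled) == 0x0021 && pulled == 2);
	assert(strip_proto(pfc, sizeof(pfc), &pulled) == 0x0021 && pulled == 1);
	return 0;
}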
-
-
-/*
- * handler for incoming packets on a syncPPP interface
- */
-void isdn_ppp_receive(isdn_net_dev *net_dev, isdn_net_local *lp, struct sk_buff *skb)
-{
- struct ippp_struct *is;
- int slot;
- int proto;
-
- BUG_ON(net_dev->local->master); // we're called with the master device always
-
- slot = lp->ppp_slot;
- if (slot < 0 || slot >= ISDN_MAX_CHANNELS) {
- printk(KERN_ERR "isdn_ppp_receive: lp->ppp_slot(%d)\n",
- lp->ppp_slot);
- kfree_skb(skb);
- return;
- }
- is = ippp_table[slot];
-
- if (is->debug & 0x4) {
- printk(KERN_DEBUG "ippp_receive: is:%08lx lp:%08lx slot:%d unit:%d len:%d\n",
- (long)is, (long)lp, lp->ppp_slot, is->unit, (int)skb->len);
- isdn_ppp_frame_log("receive", skb->data, skb->len, 32, is->unit, lp->ppp_slot);
- }
-
- if (isdn_ppp_skip_ac(is, skb) < 0) {
- kfree_skb(skb);
- return;
- }
- proto = isdn_ppp_strip_proto(skb);
- if (proto < 0) {
- kfree_skb(skb);
- return;
- }
-
-#ifdef CONFIG_ISDN_MPP
- if (is->compflags & SC_LINK_DECOMP_ON) {
- skb = isdn_ppp_decompress(skb, is, NULL, &proto);
- if (!skb) // decompression error
- return;
- }
-
- if (!(is->mpppcfg & SC_REJ_MP_PROT)) { // we agreed to receive MPPP
- if (proto == PPP_MP) {
- isdn_ppp_mp_receive(net_dev, lp, skb);
- return;
- }
- }
-#endif
- isdn_ppp_push_higher(net_dev, lp, skb, proto);
-}
-
-/*
- * we receive a reassembled frame, MPPP has been taken care of before.
- * address/control and protocol have been stripped from the skb
- * note: net_dev has to be master net_dev
- */
-static void
-isdn_ppp_push_higher(isdn_net_dev *net_dev, isdn_net_local *lp, struct sk_buff *skb, int proto)
-{
- struct net_device *dev = net_dev->dev;
- struct ippp_struct *is, *mis;
- isdn_net_local *mlp = NULL;
- int slot;
-
- slot = lp->ppp_slot;
- if (slot < 0 || slot >= ISDN_MAX_CHANNELS) {
- printk(KERN_ERR "isdn_ppp_push_higher: lp->ppp_slot(%d)\n",
- lp->ppp_slot);
- goto drop_packet;
- }
- is = ippp_table[slot];
-
- if (lp->master) { // FIXME?
- mlp = ISDN_MASTER_PRIV(lp);
- slot = mlp->ppp_slot;
- if (slot < 0 || slot >= ISDN_MAX_CHANNELS) {
- printk(KERN_ERR "isdn_ppp_push_higher: master->ppp_slot(%d)\n",
- lp->ppp_slot);
- goto drop_packet;
- }
- }
- mis = ippp_table[slot];
-
- if (is->debug & 0x10) {
- printk(KERN_DEBUG "push, skb %d %04x\n", (int) skb->len, proto);
- isdn_ppp_frame_log("rpush", skb->data, skb->len, 32, is->unit, lp->ppp_slot);
- }
- if (mis->compflags & SC_DECOMP_ON) {
- skb = isdn_ppp_decompress(skb, is, mis, &proto);
- if (!skb) // decompression error
- return;
- }
- switch (proto) {
- case PPP_IPX: /* untested */
- if (is->debug & 0x20)
- printk(KERN_DEBUG "isdn_ppp: IPX\n");
- skb->protocol = htons(ETH_P_IPX);
- break;
- case PPP_IP:
- if (is->debug & 0x20)
- printk(KERN_DEBUG "isdn_ppp: IP\n");
- skb->protocol = htons(ETH_P_IP);
- break;
- case PPP_COMP:
- case PPP_COMPFRAG:
- printk(KERN_INFO "isdn_ppp: unexpected compressed frame dropped\n");
- goto drop_packet;
-#ifdef CONFIG_ISDN_PPP_VJ
- case PPP_VJC_UNCOMP:
- if (is->debug & 0x20)
- printk(KERN_DEBUG "isdn_ppp: VJC_UNCOMP\n");
- if (net_dev->local->ppp_slot < 0) {
- printk(KERN_ERR "%s: net_dev->local->ppp_slot(%d) out of range\n",
- __func__, net_dev->local->ppp_slot);
- goto drop_packet;
- }
- if (slhc_remember(ippp_table[net_dev->local->ppp_slot]->slcomp, skb->data, skb->len) <= 0) {
- printk(KERN_WARNING "isdn_ppp: received illegal VJC_UNCOMP frame!\n");
- goto drop_packet;
- }
- skb->protocol = htons(ETH_P_IP);
- break;
- case PPP_VJC_COMP:
- if (is->debug & 0x20)
- printk(KERN_DEBUG "isdn_ppp: VJC_COMP\n");
- {
- struct sk_buff *skb_old = skb;
- int pkt_len;
- skb = dev_alloc_skb(skb_old->len + 128);
-
- if (!skb) {
- printk(KERN_WARNING "%s: Memory squeeze, dropping packet.\n", dev->name);
- skb = skb_old;
- goto drop_packet;
- }
- skb_put(skb, skb_old->len + 128);
- skb_copy_from_linear_data(skb_old, skb->data,
- skb_old->len);
- if (net_dev->local->ppp_slot < 0) {
- printk(KERN_ERR "%s: net_dev->local->ppp_slot(%d) out of range\n",
- __func__, net_dev->local->ppp_slot);
- goto drop_packet;
- }
- pkt_len = slhc_uncompress(ippp_table[net_dev->local->ppp_slot]->slcomp,
- skb->data, skb_old->len);
- kfree_skb(skb_old);
- if (pkt_len < 0)
- goto drop_packet;
-
- skb_trim(skb, pkt_len);
- skb->protocol = htons(ETH_P_IP);
- }
- break;
-#endif
- case PPP_CCP:
- case PPP_CCPFRAG:
- isdn_ppp_receive_ccp(net_dev, lp, skb, proto);
-		/* Don't pop up ResetReq/Ack stuff to the daemon any
-		   longer - the job is done already */
- if (skb->data[0] == CCP_RESETREQ ||
- skb->data[0] == CCP_RESETACK)
- break;
- /* fall through */
- default:
- isdn_ppp_fill_rq(skb->data, skb->len, proto, lp->ppp_slot); /* push data to pppd device */
- kfree_skb(skb);
- return;
- }
-
-#ifdef CONFIG_IPPP_FILTER
- /* check if the packet passes the pass and active filters
- * the filter instructions are constructed assuming
- * a four-byte PPP header on each packet (which is still present) */
- skb_push(skb, 4);
-
- {
- u_int16_t *p = (u_int16_t *) skb->data;
-
- *p = 0; /* indicate inbound */
- }
-
- if (is->pass_filter
- && BPF_PROG_RUN(is->pass_filter, skb) == 0) {
- if (is->debug & 0x2)
- printk(KERN_DEBUG "IPPP: inbound frame filtered.\n");
- kfree_skb(skb);
- return;
- }
- if (!(is->active_filter
- && BPF_PROG_RUN(is->active_filter, skb) == 0)) {
- if (is->debug & 0x2)
- printk(KERN_DEBUG "IPPP: link-active filter: resetting huptimer.\n");
- lp->huptimer = 0;
- if (mlp)
- mlp->huptimer = 0;
- }
- skb_pull(skb, 4);
-#else /* CONFIG_IPPP_FILTER */
- lp->huptimer = 0;
- if (mlp)
- mlp->huptimer = 0;
-#endif /* CONFIG_IPPP_FILTER */
- skb->dev = dev;
- skb_reset_mac_header(skb);
- netif_rx(skb);
- /* net_dev->local->stats.rx_packets++; done in isdn_net.c */
- return;
-
-drop_packet:
- net_dev->local->stats.rx_dropped++;
- kfree_skb(skb);
-}
-
-/*
- * isdn_ppp_skb_push ..
- * checks whether we have enough space at the beginning of the skb
- * and allocs a new SKB if necessary
- */
-static unsigned char *isdn_ppp_skb_push(struct sk_buff **skb_p, int len)
-{
- struct sk_buff *skb = *skb_p;
-
- if (skb_headroom(skb) < len) {
- struct sk_buff *nskb = skb_realloc_headroom(skb, len);
-
- if (!nskb) {
- printk(KERN_ERR "isdn_ppp_skb_push: can't realloc headroom!\n");
- dev_kfree_skb(skb);
- return NULL;
- }
- printk(KERN_DEBUG "isdn_ppp_skb_push:under %d %d\n", skb_headroom(skb), len);
- dev_kfree_skb(skb);
- *skb_p = nskb;
- return skb_push(nskb, len);
- }
- return skb_push(skb, len);
-}
-
-/*
- * send ppp frame .. we expect a PIDCOMPressable proto --
- * (here: currently always PPP_IP,PPP_VJC_COMP,PPP_VJC_UNCOMP)
- *
- * VJ compression may change skb pointer!!! .. requeue with old
- * skb isn't allowed!!
- */
-
-int
-isdn_ppp_xmit(struct sk_buff *skb, struct net_device *netdev)
-{
- isdn_net_local *lp, *mlp;
- isdn_net_dev *nd;
- unsigned int proto = PPP_IP; /* 0x21 */
- struct ippp_struct *ipt, *ipts;
- int slot, retval = NETDEV_TX_OK;
-
- mlp = netdev_priv(netdev);
- nd = mlp->netdev; /* get master lp */
-
- slot = mlp->ppp_slot;
- if (slot < 0 || slot >= ISDN_MAX_CHANNELS) {
- printk(KERN_ERR "isdn_ppp_xmit: lp->ppp_slot(%d)\n",
- mlp->ppp_slot);
- kfree_skb(skb);
- goto out;
- }
- ipts = ippp_table[slot];
-
- if (!(ipts->pppcfg & SC_ENABLE_IP)) { /* PPP connected ? */
- if (ipts->debug & 0x1)
- printk(KERN_INFO "%s: IP frame delayed.\n", netdev->name);
- retval = NETDEV_TX_BUSY;
- goto out;
- }
-
- switch (ntohs(skb->protocol)) {
- case ETH_P_IP:
- proto = PPP_IP;
- break;
- case ETH_P_IPX:
- proto = PPP_IPX; /* untested */
- break;
- default:
- printk(KERN_ERR "isdn_ppp: skipped unsupported protocol: %#x.\n",
- skb->protocol);
- dev_kfree_skb(skb);
- goto out;
- }
-
- lp = isdn_net_get_locked_lp(nd);
- if (!lp) {
- printk(KERN_WARNING "%s: all channels busy - requeuing!\n", netdev->name);
- retval = NETDEV_TX_BUSY;
- goto out;
- }
- /* we have our lp locked from now on */
-
- slot = lp->ppp_slot;
- if (slot < 0 || slot >= ISDN_MAX_CHANNELS) {
- printk(KERN_ERR "isdn_ppp_xmit: lp->ppp_slot(%d)\n",
- lp->ppp_slot);
- kfree_skb(skb);
- goto unlock;
- }
- ipt = ippp_table[slot];
-
- /*
- * after this line .. requeueing in the device queue is no longer allowed!!!
- */
-
- /* Pull off the fake header we stuck on earlier to keep
- * the fragmentation code happy.
- */
- skb_pull(skb, IPPP_MAX_HEADER);
-
-#ifdef CONFIG_IPPP_FILTER
- /* check if we should pass this packet
- * the filter instructions are constructed assuming
- * a four-byte PPP header on each packet */
- *(u8 *)skb_push(skb, 4) = 1; /* indicate outbound */
-
- {
- __be16 *p = (__be16 *)skb->data;
-
- p++;
- *p = htons(proto);
- }
-
- if (ipt->pass_filter
- && BPF_PROG_RUN(ipt->pass_filter, skb) == 0) {
- if (ipt->debug & 0x4)
- printk(KERN_DEBUG "IPPP: outbound frame filtered.\n");
- kfree_skb(skb);
- goto unlock;
- }
- if (!(ipt->active_filter
- && BPF_PROG_RUN(ipt->active_filter, skb) == 0)) {
- if (ipt->debug & 0x4)
- printk(KERN_DEBUG "IPPP: link-active filter: resetting huptimer.\n");
- lp->huptimer = 0;
- }
- skb_pull(skb, 4);
-#else /* CONFIG_IPPP_FILTER */
- lp->huptimer = 0;
-#endif /* CONFIG_IPPP_FILTER */
-
- if (ipt->debug & 0x4)
- printk(KERN_DEBUG "xmit skb, len %d\n", (int) skb->len);
- if (ipts->debug & 0x40)
- isdn_ppp_frame_log("xmit0", skb->data, skb->len, 32, ipts->unit, lp->ppp_slot);
-
-#ifdef CONFIG_ISDN_PPP_VJ
- if (proto == PPP_IP && ipts->pppcfg & SC_COMP_TCP) { /* ipts here? probably yes, but check this again */
- struct sk_buff *new_skb;
- unsigned short hl;
- /*
- * we need to reserve enough space in front of
- * sk_buff. old call to dev_alloc_skb only reserved
-		 * 16 bytes, now we check how much headroom the driver wants.
- */
- hl = dev->drv[lp->isdn_device]->interface->hl_hdrlen + IPPP_MAX_HEADER;
- /*
- * Note: hl might still be insufficient because the method
-		 * above does not account for a possible MPPP slave channel
- * which had larger HL header space requirements than the
- * master.
- */
- new_skb = alloc_skb(hl + skb->len, GFP_ATOMIC);
- if (new_skb) {
- u_char *buf;
- int pktlen;
-
- skb_reserve(new_skb, hl);
- new_skb->dev = skb->dev;
- skb_put(new_skb, skb->len);
- buf = skb->data;
-
- pktlen = slhc_compress(ipts->slcomp, skb->data, skb->len, new_skb->data,
- &buf, !(ipts->pppcfg & SC_NO_TCP_CCID));
-
- if (buf != skb->data) {
- if (new_skb->data != buf)
- printk(KERN_ERR "isdn_ppp: FATAL error after slhc_compress!!\n");
- dev_kfree_skb(skb);
- skb = new_skb;
- } else {
- dev_kfree_skb(new_skb);
- }
-
- skb_trim(skb, pktlen);
- if (skb->data[0] & SL_TYPE_COMPRESSED_TCP) { /* cslip? style -> PPP */
- proto = PPP_VJC_COMP;
- skb->data[0] ^= SL_TYPE_COMPRESSED_TCP;
- } else {
- if (skb->data[0] >= SL_TYPE_UNCOMPRESSED_TCP)
- proto = PPP_VJC_UNCOMP;
- skb->data[0] = (skb->data[0] & 0x0f) | 0x40;
- }
- }
- }
-#endif
-
- /*
- * normal (single link) or bundle compression
- */
- if (ipts->compflags & SC_COMP_ON) {
-		/* We send compressed only if both down- and upstream
-		   compression are negotiated, i.e. CCP is up */
- if (ipts->compflags & SC_DECOMP_ON) {
- skb = isdn_ppp_compress(skb, &proto, ipt, ipts, 0);
- } else {
- printk(KERN_DEBUG "isdn_ppp: CCP not yet up - sending as-is\n");
- }
- }
-
- if (ipt->debug & 0x24)
- printk(KERN_DEBUG "xmit2 skb, len %d, proto %04x\n", (int) skb->len, proto);
-
-#ifdef CONFIG_ISDN_MPP
- if (ipt->mpppcfg & SC_MP_PROT) {
- /* we get mp_seqno from static isdn_net_local */
- long mp_seqno = ipts->mp_seqno;
- ipts->mp_seqno++;
- if (ipt->mpppcfg & SC_OUT_SHORT_SEQ) {
- unsigned char *data = isdn_ppp_skb_push(&skb, 3);
- if (!data)
- goto unlock;
- mp_seqno &= 0xfff;
- data[0] = MP_BEGIN_FRAG | MP_END_FRAG | ((mp_seqno >> 8) & 0xf); /* (B)egin & (E)ndbit .. */
- data[1] = mp_seqno & 0xff;
- data[2] = proto; /* PID compression */
- } else {
- unsigned char *data = isdn_ppp_skb_push(&skb, 5);
- if (!data)
- goto unlock;
- data[0] = MP_BEGIN_FRAG | MP_END_FRAG; /* (B)egin & (E)ndbit .. */
- data[1] = (mp_seqno >> 16) & 0xff; /* sequence number: 24bit */
- data[2] = (mp_seqno >> 8) & 0xff;
- data[3] = (mp_seqno >> 0) & 0xff;
- data[4] = proto; /* PID compression */
- }
- proto = PPP_MP; /* MP Protocol, 0x003d */
- }
-#endif
-
- /*
- * 'link in bundle' compression ...
- */
- if (ipt->compflags & SC_LINK_COMP_ON)
- skb = isdn_ppp_compress(skb, &proto, ipt, ipts, 1);
-
- if ((ipt->pppcfg & SC_COMP_PROT) && (proto <= 0xff)) {
- unsigned char *data = isdn_ppp_skb_push(&skb, 1);
- if (!data)
- goto unlock;
- data[0] = proto & 0xff;
- }
- else {
- unsigned char *data = isdn_ppp_skb_push(&skb, 2);
- if (!data)
- goto unlock;
- data[0] = (proto >> 8) & 0xff;
- data[1] = proto & 0xff;
- }
- if (!(ipt->pppcfg & SC_COMP_AC)) {
- unsigned char *data = isdn_ppp_skb_push(&skb, 2);
- if (!data)
- goto unlock;
- data[0] = 0xff; /* All Stations */
- data[1] = 0x03; /* Unnumbered information */
- }
-
- /* tx-stats are now updated via BSENT-callback */
-
- if (ipts->debug & 0x40) {
- printk(KERN_DEBUG "skb xmit: len: %d\n", (int) skb->len);
- isdn_ppp_frame_log("xmit", skb->data, skb->len, 32, ipt->unit, lp->ppp_slot);
- }
-
- isdn_net_writebuf_skb(lp, skb);
-
-unlock:
- spin_unlock_bh(&lp->xmit_lock);
-out:
- return retval;
-}
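The tail of isdn_ppp_xmit() builds the PPP header back to front with isdn_ppp_skb_push(): the protocol field goes on first (one byte when SC_COMP_PROT is set and the protocol fits in 0xff, two bytes otherwise), then the 0xff 0x03 address/control pair unless SC_COMP_AC was negotiated, so a plain IP frame with neither compression negotiated ends up behind the header ff 03 00 21. A hedged sketch of the same back-to-front construction on a flat buffer:

#include <assert.h>
#include <string.h>

/* build the header the way the skb_push() calls above do:
 * later pushes end up earlier on the wire */
static size_t build_header(unsigned char *hdr, size_t room,
			   unsigned int proto, int comp_prot, int comp_ac)
{
	size_t off = room;			/* grows downwards, like skb headroom */

	if (comp_prot && proto <= 0xff) {
		hdr[--off] = proto & 0xff;	/* compressed protocol field */
	} else {
		hdr[--off] = proto & 0xff;
		hdr[--off] = (proto >> 8) & 0xff;
	}
	if (!comp_ac) {
		hdr[--off] = 0x03;		/* unnumbered information */
		hdr[--off] = 0xff;		/* all stations */
	}
	return off;				/* index of the first header byte */
}

int main(void)
{
	unsigned char hdr[4];
	size_t off = build_header(hdr, sizeof(hdr), 0x0021, 0, 0);

	assert(off == 0);
	assert(memcmp(hdr, "\xff\x03\x00\x21", 4) == 0);
	return 0;
}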
-
-#ifdef CONFIG_IPPP_FILTER
-/*
- * check if this packet may trigger auto-dial.
- */
-
-int isdn_ppp_autodial_filter(struct sk_buff *skb, isdn_net_local *lp)
-{
- struct ippp_struct *is = ippp_table[lp->ppp_slot];
- u_int16_t proto;
- int drop = 0;
-
- switch (ntohs(skb->protocol)) {
- case ETH_P_IP:
- proto = PPP_IP;
- break;
- case ETH_P_IPX:
- proto = PPP_IPX;
- break;
- default:
- printk(KERN_ERR "isdn_ppp_autodial_filter: unsupported protocol 0x%x.\n",
- skb->protocol);
- return 1;
- }
-
- /* the filter instructions are constructed assuming
- * a four-byte PPP header on each packet. we have to
- * temporarily remove part of the fake header stuck on
- * earlier.
- */
- *(u8 *)skb_pull(skb, IPPP_MAX_HEADER - 4) = 1; /* indicate outbound */
-
- {
- __be16 *p = (__be16 *)skb->data;
-
- p++;
- *p = htons(proto);
- }
-
- drop |= is->pass_filter
- && BPF_PROG_RUN(is->pass_filter, skb) == 0;
- drop |= is->active_filter
- && BPF_PROG_RUN(is->active_filter, skb) == 0;
-
- skb_push(skb, IPPP_MAX_HEADER - 4);
- return drop;
-}
-#endif
-#ifdef CONFIG_ISDN_MPP
-
-/* this is _not_ the rfc1990 header, but something we convert both short and long
- * headers to for convenience's sake:
- * byte 0 is flags as in rfc1990
- * bytes 1...4 are the 24-bit sequence number converted to host byte order
- */
-#define MP_HEADER_LEN 5
-
-#define MP_LONGSEQ_MASK 0x00ffffff
-#define MP_SHORTSEQ_MASK 0x00000fff
-#define MP_LONGSEQ_MAX MP_LONGSEQ_MASK
-#define MP_SHORTSEQ_MAX MP_SHORTSEQ_MASK
-#define MP_LONGSEQ_MAXBIT ((MP_LONGSEQ_MASK + 1) >> 1)
-#define MP_SHORTSEQ_MAXBIT ((MP_SHORTSEQ_MASK + 1) >> 1)
-
-/* sequence-wrap safe comparisons (for long sequence)*/
-#define MP_LT(a, b) ((a - b) & MP_LONGSEQ_MAXBIT)
-#define MP_LE(a, b) !((b - a) & MP_LONGSEQ_MAXBIT)
-#define MP_GT(a, b) ((b - a) & MP_LONGSEQ_MAXBIT)
-#define MP_GE(a, b) !((a - b) & MP_LONGSEQ_MAXBIT)
-
-#define MP_SEQ(f) ((*(u32 *)(f->data + 1)))
-#define MP_FLAGS(f) (f->data[0])
-
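The comparison macros above give a wrap-safe ordering on the 24-bit sequence space: a counts as 'before' b exactly when bit 23 of the 24-bit difference a - b is set, i.e. when b lies no more than half the sequence space ahead of a. A small hedged sketch of the same test outside the kernel, reusing the mask values defined above:

#include <assert.h>
#include <stdint.h>

#define LONGSEQ_MASK	0x00ffffff
#define LONGSEQ_MAXBIT	((LONGSEQ_MASK + 1) >> 1)	/* 0x00800000 */

/* same expression as the MP_LT() macro, on plain uint32_t values */
static int mp_lt(uint32_t a, uint32_t b)
{
	return (a - b) & LONGSEQ_MAXBIT;
}

int main(void)
{
	/* ordinary case: 5 comes before 6 */
	assert(mp_lt(5, 6));
	assert(!mp_lt(6, 5));

	/* wrap-around case: 0xfffffe still counts as before 0x000001,
	 * because 0x000001 is only three steps ahead modulo 2^24 */
	assert(mp_lt(0xfffffe, 0x000001));
	assert(!mp_lt(0x000001, 0xfffffe));
	return 0;
}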
-static int isdn_ppp_mp_bundle_array_init(void)
-{
- int i;
- int sz = ISDN_MAX_CHANNELS * sizeof(ippp_bundle);
- if ((isdn_ppp_bundle_arr = kzalloc(sz, GFP_KERNEL)) == NULL)
- return -ENOMEM;
- for (i = 0; i < ISDN_MAX_CHANNELS; i++)
- spin_lock_init(&isdn_ppp_bundle_arr[i].lock);
- return 0;
-}
-
-static ippp_bundle *isdn_ppp_mp_bundle_alloc(void)
-{
- int i;
- for (i = 0; i < ISDN_MAX_CHANNELS; i++)
- if (isdn_ppp_bundle_arr[i].ref_ct <= 0)
- return (isdn_ppp_bundle_arr + i);
- return NULL;
-}
-
-static int isdn_ppp_mp_init(isdn_net_local *lp, ippp_bundle *add_to)
-{
- struct ippp_struct *is;
-
- if (lp->ppp_slot < 0) {
- printk(KERN_ERR "%s: lp->ppp_slot(%d) out of range\n",
- __func__, lp->ppp_slot);
- return (-EINVAL);
- }
-
- is = ippp_table[lp->ppp_slot];
- if (add_to) {
- if (lp->netdev->pb)
- lp->netdev->pb->ref_ct--;
- lp->netdev->pb = add_to;
- } else { /* first link in a bundle */
- is->mp_seqno = 0;
- if ((lp->netdev->pb = isdn_ppp_mp_bundle_alloc()) == NULL)
- return -ENOMEM;
- lp->next = lp->last = lp; /* nobody else in a queue */
- lp->netdev->pb->frags = NULL;
- lp->netdev->pb->frames = 0;
- lp->netdev->pb->seq = UINT_MAX;
- }
- lp->netdev->pb->ref_ct++;
-
- is->last_link_seqno = 0;
- return 0;
-}
-
-static u32 isdn_ppp_mp_get_seq(int short_seq,
- struct sk_buff *skb, u32 last_seq);
-static struct sk_buff *isdn_ppp_mp_discard(ippp_bundle *mp,
- struct sk_buff *from, struct sk_buff *to);
-static void isdn_ppp_mp_reassembly(isdn_net_dev *net_dev, isdn_net_local *lp,
- struct sk_buff *from, struct sk_buff *to);
-static void isdn_ppp_mp_free_skb(ippp_bundle *mp, struct sk_buff *skb);
-static void isdn_ppp_mp_print_recv_pkt(int slot, struct sk_buff *skb);
-
-static void isdn_ppp_mp_receive(isdn_net_dev *net_dev, isdn_net_local *lp,
- struct sk_buff *skb)
-{
- struct ippp_struct *is;
- isdn_net_local *lpq;
- ippp_bundle *mp;
- isdn_mppp_stats *stats;
- struct sk_buff *newfrag, *frag, *start, *nextf;
- u32 newseq, minseq, thisseq;
- unsigned long flags;
- int slot;
-
- spin_lock_irqsave(&net_dev->pb->lock, flags);
- mp = net_dev->pb;
- stats = &mp->stats;
- slot = lp->ppp_slot;
- if (slot < 0 || slot >= ISDN_MAX_CHANNELS) {
- printk(KERN_ERR "%s: lp->ppp_slot(%d)\n",
- __func__, lp->ppp_slot);
- stats->frame_drops++;
- dev_kfree_skb(skb);
- spin_unlock_irqrestore(&mp->lock, flags);
- return;
- }
- is = ippp_table[slot];
- if (++mp->frames > stats->max_queue_len)
- stats->max_queue_len = mp->frames;
-
- if (is->debug & 0x8)
- isdn_ppp_mp_print_recv_pkt(lp->ppp_slot, skb);
-
- newseq = isdn_ppp_mp_get_seq(is->mpppcfg & SC_IN_SHORT_SEQ,
- skb, is->last_link_seqno);
-
-
- /* if this packet seq # is less than last already processed one,
- * toss it right away, but check for sequence start case first
- */
- if (mp->seq > MP_LONGSEQ_MAX && (newseq & MP_LONGSEQ_MAXBIT)) {
- mp->seq = newseq; /* the first packet: required for
- * rfc1990 non-compliant clients --
- * prevents constant packet toss */
- } else if (MP_LT(newseq, mp->seq)) {
- stats->frame_drops++;
- isdn_ppp_mp_free_skb(mp, skb);
- spin_unlock_irqrestore(&mp->lock, flags);
- return;
- }
-
- /* find the minimum received sequence number over all links */
- is->last_link_seqno = minseq = newseq;
- for (lpq = net_dev->queue;;) {
- slot = lpq->ppp_slot;
- if (slot < 0 || slot >= ISDN_MAX_CHANNELS) {
- printk(KERN_ERR "%s: lpq->ppp_slot(%d)\n",
- __func__, lpq->ppp_slot);
- } else {
- u32 lls = ippp_table[slot]->last_link_seqno;
- if (MP_LT(lls, minseq))
- minseq = lls;
- }
- if ((lpq = lpq->next) == net_dev->queue)
- break;
- }
- if (MP_LT(minseq, mp->seq))
- minseq = mp->seq; /* can't go beyond already processed
- * packets */
- newfrag = skb;
-
- /* if this new fragment is before the first one, then enqueue it now. */
- if ((frag = mp->frags) == NULL || MP_LT(newseq, MP_SEQ(frag))) {
- newfrag->next = frag;
- mp->frags = frag = newfrag;
- newfrag = NULL;
- }
-
- start = MP_FLAGS(frag) & MP_BEGIN_FRAG &&
- MP_SEQ(frag) == mp->seq ? frag : NULL;
-
- /*
- * main fragment traversing loop
- *
- * try to accomplish several tasks:
- * - insert new fragment into the proper sequence slot (once that's done
- * newfrag will be set to NULL)
- * - reassemble any complete fragment sequence (non-null 'start'
- * indicates there is a contiguous sequence present)
-	 * - discard any incomplete sequences that are below minseq -- since
-	 *   the sender always increments the sequence number, no new fragments
-	 *   will arrive to complete an incomplete sequence that is already
-	 *   below minseq, so it should be discarded
-	 *
-	 * loop completes when we have accomplished the following tasks:
- * - new fragment is inserted in the proper sequence ('newfrag' is
- * set to NULL)
- * - we hit a gap in the sequence, so no reassembly/processing is
- * possible ('start' would be set to NULL)
- *
- * algorithm for this code is derived from code in the book
- * 'PPP Design And Debugging' by James Carlson (Addison-Wesley)
- */
- while (start != NULL || newfrag != NULL) {
-
- thisseq = MP_SEQ(frag);
- nextf = frag->next;
-
- /* drop any duplicate fragments */
- if (newfrag != NULL && thisseq == newseq) {
- isdn_ppp_mp_free_skb(mp, newfrag);
- newfrag = NULL;
- }
-
- /* insert new fragment before next element if possible. */
- if (newfrag != NULL && (nextf == NULL ||
- MP_LT(newseq, MP_SEQ(nextf)))) {
- newfrag->next = nextf;
- frag->next = nextf = newfrag;
- newfrag = NULL;
- }
-
- if (start != NULL) {
- /* check for misplaced start */
- if (start != frag && (MP_FLAGS(frag) & MP_BEGIN_FRAG)) {
- printk(KERN_WARNING"isdn_mppp(seq %d): new "
- "BEGIN flag with no prior END", thisseq);
- stats->seqerrs++;
- stats->frame_drops++;
- start = isdn_ppp_mp_discard(mp, start, frag);
- nextf = frag->next;
- }
- } else if (MP_LE(thisseq, minseq)) {
- if (MP_FLAGS(frag) & MP_BEGIN_FRAG)
- start = frag;
- else {
- if (MP_FLAGS(frag) & MP_END_FRAG)
- stats->frame_drops++;
- if (mp->frags == frag)
- mp->frags = nextf;
- isdn_ppp_mp_free_skb(mp, frag);
- frag = nextf;
- continue;
- }
- }
-
- /* if start is non-null and we have end fragment, then
- * we have full reassembly sequence -- reassemble
- * and process packet now
- */
- if (start != NULL && (MP_FLAGS(frag) & MP_END_FRAG)) {
- minseq = mp->seq = (thisseq + 1) & MP_LONGSEQ_MASK;
- /* Reassemble the packet then dispatch it */
- isdn_ppp_mp_reassembly(net_dev, lp, start, nextf);
-
- start = NULL;
- frag = NULL;
-
- mp->frags = nextf;
- }
-
-		/* check whether we need to update the start pointer: if we just
-		 * reassembled the packet and the sequence is contiguous,
-		 * then the next fragment should be the start of a new reassembly.
-		 * If the sequence is contiguous but we haven't reassembled yet,
-		 * keep going.
-		 * If the sequence is not contiguous, either clear everything
-		 * below the low watermark and set start to the next frag, or
-		 * clear the start ptr.
-		 */
- if (nextf != NULL &&
- ((thisseq + 1) & MP_LONGSEQ_MASK) == MP_SEQ(nextf)) {
- /* if we just reassembled and the next one is here,
- * then start another reassembly. */
-
- if (frag == NULL) {
- if (MP_FLAGS(nextf) & MP_BEGIN_FRAG)
- start = nextf;
- else
- {
- printk(KERN_WARNING"isdn_mppp(seq %d):"
- " END flag with no following "
- "BEGIN", thisseq);
- stats->seqerrs++;
- }
- }
-
- } else {
- if (nextf != NULL && frag != NULL &&
- MP_LT(thisseq, minseq)) {
-				/* we've got a break in the sequence,
-				 * we are not at the end yet,
-				 * we did not just reassemble
-				 * (if we had, there wouldn't be anything before),
-				 * and we are below the low watermark:
-				 * discard all the frames below the low watermark
-				 * and start over */
- stats->frame_drops++;
- mp->frags = isdn_ppp_mp_discard(mp, start, nextf);
- }
- /* break in the sequence, no reassembly */
- start = NULL;
- }
-
- frag = nextf;
- } /* while -- main loop */
-
- if (mp->frags == NULL)
- mp->frags = frag;
-
-	/* rather straightforward way to deal with (not very) possible
- * queue overflow */
- if (mp->frames > MP_MAX_QUEUE_LEN) {
- stats->overflows++;
- while (mp->frames > MP_MAX_QUEUE_LEN) {
- frag = mp->frags->next;
- isdn_ppp_mp_free_skb(mp, mp->frags);
- mp->frags = frag;
- }
- }
- spin_unlock_irqrestore(&mp->lock, flags);
-}
-
-static void isdn_ppp_mp_cleanup(isdn_net_local *lp)
-{
- struct sk_buff *frag = lp->netdev->pb->frags;
- struct sk_buff *nextfrag;
- while (frag) {
- nextfrag = frag->next;
- isdn_ppp_mp_free_skb(lp->netdev->pb, frag);
- frag = nextfrag;
- }
- lp->netdev->pb->frags = NULL;
-}
-
-static u32 isdn_ppp_mp_get_seq(int short_seq,
- struct sk_buff *skb, u32 last_seq)
-{
- u32 seq;
- int flags = skb->data[0] & (MP_BEGIN_FRAG | MP_END_FRAG);
-
- if (!short_seq)
- {
- seq = ntohl(*(__be32 *)skb->data) & MP_LONGSEQ_MASK;
- skb_push(skb, 1);
- }
- else
- {
- /* convert 12-bit short seq number to 24-bit long one
- */
- seq = ntohs(*(__be16 *)skb->data) & MP_SHORTSEQ_MASK;
-
-		/* check for sequence wrap */
- if (!(seq & MP_SHORTSEQ_MAXBIT) &&
- (last_seq & MP_SHORTSEQ_MAXBIT) &&
- (unsigned long)last_seq <= MP_LONGSEQ_MAX)
- seq |= (last_seq + MP_SHORTSEQ_MAX + 1) &
- (~MP_SHORTSEQ_MASK & MP_LONGSEQ_MASK);
- else
- seq |= last_seq & (~MP_SHORTSEQ_MASK & MP_LONGSEQ_MASK);
-
-		skb_push(skb, 3);	/* put converted sequence back in skb */
- }
-	*(u32 *)(skb->data + 1) = seq;	/* put sequence back in _host_ byte
- * order */
- skb->data[0] = flags; /* restore flags */
- return seq;
-}
-
-static struct sk_buff *isdn_ppp_mp_discard(ippp_bundle *mp,
- struct sk_buff *from,
- struct sk_buff *to)
-{
- if (from)
- while (from != to) {
- struct sk_buff *next = from->next;
- isdn_ppp_mp_free_skb(mp, from);
- from = next;
- }
- return from;
-}
-
-static void isdn_ppp_mp_reassembly(isdn_net_dev *net_dev, isdn_net_local *lp,
- struct sk_buff *from, struct sk_buff *to)
-{
- ippp_bundle *mp = net_dev->pb;
- int proto;
- struct sk_buff *skb;
- unsigned int tot_len;
-
- if (lp->ppp_slot < 0 || lp->ppp_slot >= ISDN_MAX_CHANNELS) {
- printk(KERN_ERR "%s: lp->ppp_slot(%d) out of range\n",
- __func__, lp->ppp_slot);
- return;
- }
- if (MP_FLAGS(from) == (MP_BEGIN_FRAG | MP_END_FRAG)) {
- if (ippp_table[lp->ppp_slot]->debug & 0x40)
- printk(KERN_DEBUG "isdn_mppp: reassembly: frame %d, "
- "len %d\n", MP_SEQ(from), from->len);
- skb = from;
- skb_pull(skb, MP_HEADER_LEN);
- mp->frames--;
- } else {
- struct sk_buff *frag;
- int n;
-
- for (tot_len = n = 0, frag = from; frag != to; frag = frag->next, n++)
- tot_len += frag->len - MP_HEADER_LEN;
-
- if (ippp_table[lp->ppp_slot]->debug & 0x40)
- printk(KERN_DEBUG"isdn_mppp: reassembling frames %d "
- "to %d, len %d\n", MP_SEQ(from),
- (MP_SEQ(from) + n - 1) & MP_LONGSEQ_MASK, tot_len);
- if ((skb = dev_alloc_skb(tot_len)) == NULL) {
- printk(KERN_ERR "isdn_mppp: cannot allocate sk buff "
- "of size %d\n", tot_len);
- isdn_ppp_mp_discard(mp, from, to);
- return;
- }
-
- while (from != to) {
- unsigned int len = from->len - MP_HEADER_LEN;
-
- skb_copy_from_linear_data_offset(from, MP_HEADER_LEN,
- skb_put(skb, len),
- len);
- frag = from->next;
- isdn_ppp_mp_free_skb(mp, from);
- from = frag;
- }
- }
- proto = isdn_ppp_strip_proto(skb);
- isdn_ppp_push_higher(net_dev, lp, skb, proto);
-}
-
-static void isdn_ppp_mp_free_skb(ippp_bundle *mp, struct sk_buff *skb)
-{
- dev_kfree_skb(skb);
- mp->frames--;
-}
-
-static void isdn_ppp_mp_print_recv_pkt(int slot, struct sk_buff *skb)
-{
- printk(KERN_DEBUG "mp_recv: %d/%d -> %02x %02x %02x %02x %02x %02x\n",
- slot, (int) skb->len,
- (int) skb->data[0], (int) skb->data[1], (int) skb->data[2],
- (int) skb->data[3], (int) skb->data[4], (int) skb->data[5]);
-}
-
-static int
-isdn_ppp_bundle(struct ippp_struct *is, int unit)
-{
- char ifn[IFNAMSIZ + 1];
- isdn_net_dev *p;
- isdn_net_local *lp, *nlp;
- int rc;
- unsigned long flags;
-
- sprintf(ifn, "ippp%d", unit);
- p = isdn_net_findif(ifn);
- if (!p) {
- printk(KERN_ERR "ippp_bundle: cannot find %s\n", ifn);
- return -EINVAL;
- }
-
- spin_lock_irqsave(&p->pb->lock, flags);
-
- nlp = is->lp;
- lp = p->queue;
- if (nlp->ppp_slot < 0 || nlp->ppp_slot >= ISDN_MAX_CHANNELS ||
- lp->ppp_slot < 0 || lp->ppp_slot >= ISDN_MAX_CHANNELS) {
- printk(KERN_ERR "ippp_bundle: binding to invalid slot %d\n",
- nlp->ppp_slot < 0 || nlp->ppp_slot >= ISDN_MAX_CHANNELS ?
- nlp->ppp_slot : lp->ppp_slot);
- rc = -EINVAL;
- goto out;
- }
-
- isdn_net_add_to_bundle(p, nlp);
-
- ippp_table[nlp->ppp_slot]->unit = ippp_table[lp->ppp_slot]->unit;
-
- /* maybe also SC_CCP stuff */
- ippp_table[nlp->ppp_slot]->pppcfg |= ippp_table[lp->ppp_slot]->pppcfg &
- (SC_ENABLE_IP | SC_NO_TCP_CCID | SC_REJ_COMP_TCP);
- ippp_table[nlp->ppp_slot]->mpppcfg |= ippp_table[lp->ppp_slot]->mpppcfg &
- (SC_MP_PROT | SC_REJ_MP_PROT | SC_OUT_SHORT_SEQ | SC_IN_SHORT_SEQ);
- rc = isdn_ppp_mp_init(nlp, p->pb);
-out:
- spin_unlock_irqrestore(&p->pb->lock, flags);
- return rc;
-}
-
-#endif /* CONFIG_ISDN_MPP */
-
-/*
- * network device ioctl handlers
- */
-
-static int
-isdn_ppp_dev_ioctl_stats(int slot, struct ifreq *ifr, struct net_device *dev)
-{
- struct ppp_stats __user *res = ifr->ifr_data;
- struct ppp_stats t;
- isdn_net_local *lp = netdev_priv(dev);
-
- /* build a temporary stat struct and copy it to user space */
-
- memset(&t, 0, sizeof(struct ppp_stats));
- if (dev->flags & IFF_UP) {
- t.p.ppp_ipackets = lp->stats.rx_packets;
- t.p.ppp_ibytes = lp->stats.rx_bytes;
- t.p.ppp_ierrors = lp->stats.rx_errors;
- t.p.ppp_opackets = lp->stats.tx_packets;
- t.p.ppp_obytes = lp->stats.tx_bytes;
- t.p.ppp_oerrors = lp->stats.tx_errors;
-#ifdef CONFIG_ISDN_PPP_VJ
- if (slot >= 0 && ippp_table[slot]->slcomp) {
- struct slcompress *slcomp = ippp_table[slot]->slcomp;
- t.vj.vjs_packets = slcomp->sls_o_compressed + slcomp->sls_o_uncompressed;
- t.vj.vjs_compressed = slcomp->sls_o_compressed;
- t.vj.vjs_searches = slcomp->sls_o_searches;
- t.vj.vjs_misses = slcomp->sls_o_misses;
- t.vj.vjs_errorin = slcomp->sls_i_error;
- t.vj.vjs_tossed = slcomp->sls_i_tossed;
- t.vj.vjs_uncompressedin = slcomp->sls_i_uncompressed;
- t.vj.vjs_compressedin = slcomp->sls_i_compressed;
- }
-#endif
- }
- if (copy_to_user(res, &t, sizeof(struct ppp_stats)))
- return -EFAULT;
- return 0;
-}
-
-int
-isdn_ppp_dev_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
-{
- int error = 0;
- int len;
- isdn_net_local *lp = netdev_priv(dev);
-
-
- if (lp->p_encap != ISDN_NET_ENCAP_SYNCPPP)
- return -EINVAL;
-
- switch (cmd) {
-#define PPP_VERSION "2.3.7"
- case SIOCGPPPVER:
- len = strlen(PPP_VERSION) + 1;
- if (copy_to_user(ifr->ifr_data, PPP_VERSION, len))
- error = -EFAULT;
- break;
-
- case SIOCGPPPSTATS:
- error = isdn_ppp_dev_ioctl_stats(lp->ppp_slot, ifr, dev);
- break;
- default:
- error = -EINVAL;
- break;
- }
- return error;
-}
-
-static int
-isdn_ppp_if_get_unit(char *name)
-{
- int len,
- i,
- unit = 0,
- deci;
-
- len = strlen(name);
-
- if (strncmp("ippp", name, 4) || len > 8)
- return -1;
-
- for (i = 0, deci = 1; i < len; i++, deci *= 10) {
- char a = name[len - i - 1];
- if (a >= '0' && a <= '9')
- unit += (a - '0') * deci;
- else
- break;
- }
- if (!i || len - i != 4)
- unit = -1;
-
- return unit;
-}
-
-
-int
-isdn_ppp_dial_slave(char *name)
-{
-#ifdef CONFIG_ISDN_MPP
- isdn_net_dev *ndev;
- isdn_net_local *lp;
- struct net_device *sdev;
-
- if (!(ndev = isdn_net_findif(name)))
- return 1;
- lp = ndev->local;
- if (!(lp->flags & ISDN_NET_CONNECTED))
- return 5;
-
- sdev = lp->slave;
- while (sdev) {
- isdn_net_local *mlp = netdev_priv(sdev);
- if (!(mlp->flags & ISDN_NET_CONNECTED))
- break;
- sdev = mlp->slave;
- }
- if (!sdev)
- return 2;
-
- isdn_net_dial_req(netdev_priv(sdev));
- return 0;
-#else
- return -1;
-#endif
-}
-
-int
-isdn_ppp_hangup_slave(char *name)
-{
-#ifdef CONFIG_ISDN_MPP
- isdn_net_dev *ndev;
- isdn_net_local *lp;
- struct net_device *sdev;
-
- if (!(ndev = isdn_net_findif(name)))
- return 1;
- lp = ndev->local;
- if (!(lp->flags & ISDN_NET_CONNECTED))
- return 5;
-
- sdev = lp->slave;
- while (sdev) {
- isdn_net_local *mlp = netdev_priv(sdev);
-
- if (mlp->slave) { /* find last connected link in chain */
- isdn_net_local *nlp = ISDN_SLAVE_PRIV(mlp);
-
- if (!(nlp->flags & ISDN_NET_CONNECTED))
- break;
- } else if (mlp->flags & ISDN_NET_CONNECTED)
- break;
-
- sdev = mlp->slave;
- }
- if (!sdev)
- return 2;
-
- isdn_net_hangup(sdev);
- return 0;
-#else
- return -1;
-#endif
-}
-
-/*
- * PPP compression stuff
- */
-
-
-/* Push an empty CCP Data Frame up to the daemon to wake it up and let it
- generate a CCP Reset-Request or tear down CCP altogether */
-
-static void isdn_ppp_ccp_kickup(struct ippp_struct *is)
-{
- isdn_ppp_fill_rq(NULL, 0, PPP_COMP, is->lp->ppp_slot);
-}
-
-/* In-kernel handling of CCP Reset-Request and Reset-Ack is necessary,
- but absolutely nontrivial. The most abstruse problem we are facing is
- that the generation, reception and all the handling of timeouts and
- resends including proper request id management should be entirely left
- to the (de)compressor, but indeed is not covered by the current API to
- the (de)compressor. The API is a prototype version from PPP where only
- some (de)compressors have yet been implemented and all of them are
-   rather simple in their reset handling. In particular, there is only one
-   outstanding ResetAck at a time with all of them, and ResetReq/-Acks do
- not have parameters. For this very special case it was sufficient to
- just return an error code from the decompressor and have a single
- reset() entry to communicate all the necessary information between
-   the framework and the (de)compressor. Unfortunately, LZS is different
-   (and any other compressor may be different, too). It may have multiple
-   histories and needs to Reset each of them independently
- and thus uses multiple outstanding Acks and history numbers as an
- additional parameter to Reqs/Acks.
- All that makes it harder to port the reset state engine into the
- kernel because it is not just the same simple one as in (i)pppd but
- it must be able to pass additional parameters and have multiple out-
- standing Acks. We are trying to achieve the impossible by handling
-   reset transactions independently by their id. The id MUST change when
- the data portion changes, thus any (de)compressor who uses more than
- one resettable state must provide and recognize individual ids for
- each individual reset transaction. The framework itself does _only_
- differentiate them by id, because it has no other semantics like the
- (de)compressor might.
-   It looks as though a major redesign of the interface would be nice,
- but I don't have an idea how to do it better. */
-
-/* Send a CCP Reset-Request or Reset-Ack directly from the kernel. This is
- getting that lengthy because there is no simple "send-this-frame-out"
-   function above; every wrapper does things a bit differently. Hope I guessed
-   correctly in this hack... */
-
-static void isdn_ppp_ccp_xmit_reset(struct ippp_struct *is, int proto,
- unsigned char code, unsigned char id,
- unsigned char *data, int len)
-{
- struct sk_buff *skb;
- unsigned char *p;
- int hl;
- int cnt = 0;
- isdn_net_local *lp = is->lp;
-
- /* Alloc large enough skb */
- hl = dev->drv[lp->isdn_device]->interface->hl_hdrlen;
- skb = alloc_skb(len + hl + 16, GFP_ATOMIC);
- if (!skb) {
- printk(KERN_WARNING
- "ippp: CCP cannot send reset - out of memory\n");
- return;
- }
- skb_reserve(skb, hl);
-
- /* We may need to stuff an address and control field first */
- if (!(is->pppcfg & SC_COMP_AC)) {
- p = skb_put(skb, 2);
- *p++ = 0xff;
- *p++ = 0x03;
- }
-
- /* Stuff proto, code, id and length */
- p = skb_put(skb, 6);
- *p++ = (proto >> 8);
- *p++ = (proto & 0xff);
- *p++ = code;
- *p++ = id;
- cnt = 4 + len;
- *p++ = (cnt >> 8);
- *p++ = (cnt & 0xff);
-
- /* Now stuff remaining bytes */
- if (len) {
- skb_put_data(skb, data, len);
- }
-
- /* skb is now ready for xmit */
- printk(KERN_DEBUG "Sending CCP Frame:\n");
- isdn_ppp_frame_log("ccp-xmit", skb->data, skb->len, 32, is->unit, lp->ppp_slot);
-
- isdn_net_write_super(lp, skb);
-}
-
-/* Allocate the reset state vector */
-static struct ippp_ccp_reset *isdn_ppp_ccp_reset_alloc(struct ippp_struct *is)
-{
- struct ippp_ccp_reset *r;
- r = kzalloc(sizeof(struct ippp_ccp_reset), GFP_KERNEL);
- if (!r) {
- printk(KERN_ERR "ippp_ccp: failed to allocate reset data"
- " structure - no mem\n");
- return NULL;
- }
- printk(KERN_DEBUG "ippp_ccp: allocated reset data structure %p\n", r);
- is->reset = r;
- return r;
-}
-
-/* Destroy the reset state vector. Kill all pending timers first. */
-static void isdn_ppp_ccp_reset_free(struct ippp_struct *is)
-{
- unsigned int id;
-
- printk(KERN_DEBUG "ippp_ccp: freeing reset data structure %p\n",
- is->reset);
- for (id = 0; id < 256; id++) {
- if (is->reset->rs[id]) {
- isdn_ppp_ccp_reset_free_state(is, (unsigned char)id);
- }
- }
- kfree(is->reset);
- is->reset = NULL;
-}
-
-/* Free a given state and clear everything up for later reallocation */
-static void isdn_ppp_ccp_reset_free_state(struct ippp_struct *is,
- unsigned char id)
-{
- struct ippp_ccp_reset_state *rs;
-
- if (is->reset->rs[id]) {
- printk(KERN_DEBUG "ippp_ccp: freeing state for id %d\n", id);
- rs = is->reset->rs[id];
- /* Make sure the kernel will not call back later */
- if (rs->ta)
- del_timer(&rs->timer);
- is->reset->rs[id] = NULL;
- kfree(rs);
- } else {
- printk(KERN_WARNING "ippp_ccp: id %d is not allocated\n", id);
- }
-}
-
-/* The timer callback function which is called when a ResetReq has timed out,
- aka has never been answered by a ResetAck */
-static void isdn_ppp_ccp_timer_callback(struct timer_list *t)
-{
- struct ippp_ccp_reset_state *rs =
- from_timer(rs, t, timer);
-
- if (!rs) {
- printk(KERN_ERR "ippp_ccp: timer cb with zero closure.\n");
- return;
- }
- if (rs->ta && rs->state == CCPResetSentReq) {
- /* We are correct here */
- if (!rs->expra) {
- /* Hmm, there is no Ack really expected. We can clean
- up the state now, it will be reallocated if the
- decompressor insists on another reset */
- rs->ta = 0;
- isdn_ppp_ccp_reset_free_state(rs->is, rs->id);
- return;
- }
- printk(KERN_DEBUG "ippp_ccp: CCP Reset timed out for id %d\n",
- rs->id);
- /* Push it again */
- isdn_ppp_ccp_xmit_reset(rs->is, PPP_CCP, CCP_RESETREQ, rs->id,
- rs->data, rs->dlen);
- /* Restart timer */
- rs->timer.expires = jiffies + HZ * 5;
- add_timer(&rs->timer);
- } else {
- printk(KERN_WARNING "ippp_ccp: timer cb in wrong state %d\n",
- rs->state);
- }
-}
-
-/* Allocate a new reset transaction state */
-static struct ippp_ccp_reset_state *isdn_ppp_ccp_reset_alloc_state(struct ippp_struct *is,
- unsigned char id)
-{
- struct ippp_ccp_reset_state *rs;
- if (is->reset->rs[id]) {
- printk(KERN_WARNING "ippp_ccp: old state exists for id %d\n",
- id);
- return NULL;
- } else {
- rs = kzalloc(sizeof(struct ippp_ccp_reset_state), GFP_ATOMIC);
- if (!rs)
- return NULL;
- rs->state = CCPResetIdle;
- rs->is = is;
- rs->id = id;
- timer_setup(&rs->timer, isdn_ppp_ccp_timer_callback, 0);
- is->reset->rs[id] = rs;
- }
- return rs;
-}
-
-
-/* A decompressor wants a reset with a set of parameters - do what is
- necessary to fulfill it */
-static void isdn_ppp_ccp_reset_trans(struct ippp_struct *is,
- struct isdn_ppp_resetparams *rp)
-{
- struct ippp_ccp_reset_state *rs;
-
- if (rp->valid) {
- /* The decompressor defines parameters by itself */
- if (rp->rsend) {
- /* And he wants us to send a request */
- if (!(rp->idval)) {
- printk(KERN_ERR "ippp_ccp: decompressor must"
- " specify reset id\n");
- return;
- }
- if (is->reset->rs[rp->id]) {
- /* There is already a transaction in existence
-				   for this id. It may still be waiting for an
-				   Ack, or something may be wrong. */
- rs = is->reset->rs[rp->id];
- if (rs->state == CCPResetSentReq && rs->ta) {
- printk(KERN_DEBUG "ippp_ccp: reset"
- " trans still in progress"
- " for id %d\n", rp->id);
- } else {
- printk(KERN_WARNING "ippp_ccp: reset"
- " trans in wrong state %d for"
- " id %d\n", rs->state, rp->id);
- }
- } else {
- /* Ok, this is a new transaction */
- printk(KERN_DEBUG "ippp_ccp: new trans for id"
- " %d to be started\n", rp->id);
- rs = isdn_ppp_ccp_reset_alloc_state(is, rp->id);
- if (!rs) {
- printk(KERN_ERR "ippp_ccp: out of mem"
- " allocing ccp trans\n");
- return;
- }
- rs->state = CCPResetSentReq;
- rs->expra = rp->expra;
- if (rp->dtval) {
- rs->dlen = rp->dlen;
- memcpy(rs->data, rp->data, rp->dlen);
- }
- /* HACK TODO - add link comp here */
- isdn_ppp_ccp_xmit_reset(is, PPP_CCP,
- CCP_RESETREQ, rs->id,
- rs->data, rs->dlen);
- /* Start the timer */
- rs->timer.expires = jiffies + 5 * HZ;
- add_timer(&rs->timer);
- rs->ta = 1;
- }
- } else {
- printk(KERN_DEBUG "ippp_ccp: no reset sent\n");
- }
- } else {
- /* The reset params are invalid. The decompressor does not
- care about them, so we just send the minimal requests
- and increase ids only when an Ack is received for a
- given id */
- if (is->reset->rs[is->reset->lastid]) {
- /* There is already a transaction in existence
-			   for this id. It may still be waiting for an
-			   Ack, or something may be wrong. */
- rs = is->reset->rs[is->reset->lastid];
- if (rs->state == CCPResetSentReq && rs->ta) {
- printk(KERN_DEBUG "ippp_ccp: reset"
- " trans still in progress"
- " for id %d\n", rp->id);
- } else {
- printk(KERN_WARNING "ippp_ccp: reset"
- " trans in wrong state %d for"
- " id %d\n", rs->state, rp->id);
- }
- } else {
- printk(KERN_DEBUG "ippp_ccp: new trans for id"
- " %d to be started\n", is->reset->lastid);
- rs = isdn_ppp_ccp_reset_alloc_state(is,
- is->reset->lastid);
- if (!rs) {
- printk(KERN_ERR "ippp_ccp: out of mem"
- " allocing ccp trans\n");
- return;
- }
- rs->state = CCPResetSentReq;
- /* We always expect an Ack if the decompressor doesn't
- know better */
- rs->expra = 1;
- rs->dlen = 0;
- /* HACK TODO - add link comp here */
- isdn_ppp_ccp_xmit_reset(is, PPP_CCP, CCP_RESETREQ,
- rs->id, NULL, 0);
- /* Start the timer */
- rs->timer.expires = jiffies + 5 * HZ;
- add_timer(&rs->timer);
- rs->ta = 1;
- }
- }
-}
-
-/* An Ack was received for this id. This means we stop the timer and clean
- up the state prior to calling the decompressors reset routine. */
-static void isdn_ppp_ccp_reset_ack_rcvd(struct ippp_struct *is,
- unsigned char id)
-{
- struct ippp_ccp_reset_state *rs = is->reset->rs[id];
-
- if (rs) {
- if (rs->ta && rs->state == CCPResetSentReq) {
- /* Great, we are correct */
- if (!rs->expra)
- printk(KERN_DEBUG "ippp_ccp: ResetAck received"
- " for id %d but not expected\n", id);
- } else {
- printk(KERN_INFO "ippp_ccp: ResetAck received out of"
-			       " sync for id %d\n", id);
- }
- if (rs->ta) {
- rs->ta = 0;
- del_timer(&rs->timer);
- }
- isdn_ppp_ccp_reset_free_state(is, id);
- } else {
- printk(KERN_INFO "ippp_ccp: ResetAck received for unknown id"
- " %d\n", id);
- }
- /* Make sure the simple reset stuff uses a new id next time */
- is->reset->lastid++;
-}
-
-/*
- * decompress packet
- *
- * if master = 0, we're trying to uncompress a per-link compressed packet,
- * as opposed to a compressed reconstructed-from-MPPP packet.
- * proto is updated to protocol field of uncompressed packet.
- *
- * retval: decompressed packet,
- * same packet if uncompressed,
- * NULL if decompression error
- */
-
-static struct sk_buff *isdn_ppp_decompress(struct sk_buff *skb, struct ippp_struct *is, struct ippp_struct *master,
- int *proto)
-{
- void *stat = NULL;
- struct isdn_ppp_compressor *ipc = NULL;
- struct sk_buff *skb_out;
- int len;
- struct ippp_struct *ri;
- struct isdn_ppp_resetparams rsparm;
- unsigned char rsdata[IPPP_RESET_MAXDATABYTES];
-
- if (!master) {
- // per-link decompression
- stat = is->link_decomp_stat;
- ipc = is->link_decompressor;
- ri = is;
- } else {
- stat = master->decomp_stat;
- ipc = master->decompressor;
- ri = master;
- }
-
- if (!ipc) {
- // no decompressor -> we can't decompress.
- printk(KERN_DEBUG "ippp: no decompressor defined!\n");
- return skb;
- }
- BUG_ON(!stat); // if we have a compressor, stat has been set as well
-
- if ((master && *proto == PPP_COMP) || (!master && *proto == PPP_COMPFRAG)) {
-		// compressed packets are identified by their protocol type
-
- // Set up reset params for the decompressor
- memset(&rsparm, 0, sizeof(rsparm));
- rsparm.data = rsdata;
- rsparm.maxdlen = IPPP_RESET_MAXDATABYTES;
-
- skb_out = dev_alloc_skb(is->mru + PPP_HDRLEN);
- if (!skb_out) {
- kfree_skb(skb);
- printk(KERN_ERR "ippp: decomp memory allocation failure\n");
- return NULL;
- }
- len = ipc->decompress(stat, skb, skb_out, &rsparm);
- kfree_skb(skb);
- if (len <= 0) {
- switch (len) {
- case DECOMP_ERROR:
- printk(KERN_INFO "ippp: decomp wants reset %s params\n",
- rsparm.valid ? "with" : "without");
-
- isdn_ppp_ccp_reset_trans(ri, &rsparm);
- break;
- case DECOMP_FATALERROR:
- ri->pppcfg |= SC_DC_FERROR;
- /* Kick ipppd to recognize the error */
- isdn_ppp_ccp_kickup(ri);
- break;
- }
- kfree_skb(skb_out);
- return NULL;
- }
- *proto = isdn_ppp_strip_proto(skb_out);
- if (*proto < 0) {
- kfree_skb(skb_out);
- return NULL;
- }
- return skb_out;
- } else {
- // uncompressed packets are fed through the decompressor to
- // update the decompressor state
- ipc->incomp(stat, skb, *proto);
- return skb;
- }
-}
-
-/*
- * compress a frame
- * type=0: normal/bundle compression
- * =1: link compression
- * returns original skb if we haven't compressed the frame
- * and a new skb pointer if we've done it
- */
-static struct sk_buff *isdn_ppp_compress(struct sk_buff *skb_in, int *proto,
- struct ippp_struct *is, struct ippp_struct *master, int type)
-{
- int ret;
- int new_proto;
- struct isdn_ppp_compressor *compressor;
- void *stat;
- struct sk_buff *skb_out;
-
- /* we do not compress control protocols */
- if (*proto < 0 || *proto > 0x3fff) {
- return skb_in;
- }
-
- if (type) { /* type=1 => Link compression */
- return skb_in;
- }
- else {
- if (!master) {
- compressor = is->compressor;
- stat = is->comp_stat;
- }
- else {
- compressor = master->compressor;
- stat = master->comp_stat;
- }
- new_proto = PPP_COMP;
- }
-
- if (!compressor) {
- printk(KERN_ERR "isdn_ppp: No compressor set!\n");
- return skb_in;
- }
- if (!stat) {
- printk(KERN_ERR "isdn_ppp: Compressor not initialized?\n");
- return skb_in;
- }
-
- /* Allow for at least 150 % expansion (for now) */
- skb_out = alloc_skb(skb_in->len + skb_in->len / 2 + 32 +
- skb_headroom(skb_in), GFP_ATOMIC);
- if (!skb_out)
- return skb_in;
- skb_reserve(skb_out, skb_headroom(skb_in));
-
- ret = (compressor->compress)(stat, skb_in, skb_out, *proto);
- if (!ret) {
- dev_kfree_skb(skb_out);
- return skb_in;
- }
-
- dev_kfree_skb(skb_in);
- *proto = new_proto;
- return skb_out;
-}
-
-/*
- * we received a CCP frame ..
- * not a clean solution, but we MUST handle a few cases in the kernel
- */
-static void isdn_ppp_receive_ccp(isdn_net_dev *net_dev, isdn_net_local *lp,
- struct sk_buff *skb, int proto)
-{
- struct ippp_struct *is;
- struct ippp_struct *mis;
- int len;
- struct isdn_ppp_resetparams rsparm;
- unsigned char rsdata[IPPP_RESET_MAXDATABYTES];
-
- printk(KERN_DEBUG "Received CCP frame from peer slot(%d)\n",
- lp->ppp_slot);
- if (lp->ppp_slot < 0 || lp->ppp_slot >= ISDN_MAX_CHANNELS) {
- printk(KERN_ERR "%s: lp->ppp_slot(%d) out of range\n",
- __func__, lp->ppp_slot);
- return;
- }
- is = ippp_table[lp->ppp_slot];
- isdn_ppp_frame_log("ccp-rcv", skb->data, skb->len, 32, is->unit, lp->ppp_slot);
-
- if (lp->master) {
- int slot = ISDN_MASTER_PRIV(lp)->ppp_slot;
- if (slot < 0 || slot >= ISDN_MAX_CHANNELS) {
- printk(KERN_ERR "%s: slot(%d) out of range\n",
- __func__, slot);
- return;
- }
- mis = ippp_table[slot];
- } else
- mis = is;
-
- switch (skb->data[0]) {
- case CCP_CONFREQ:
- if (is->debug & 0x10)
- printk(KERN_DEBUG "Disable compression here!\n");
- if (proto == PPP_CCP)
- mis->compflags &= ~SC_COMP_ON;
- else
- is->compflags &= ~SC_LINK_COMP_ON;
- break;
- case CCP_TERMREQ:
- case CCP_TERMACK:
- if (is->debug & 0x10)
- printk(KERN_DEBUG "Disable (de)compression here!\n");
- if (proto == PPP_CCP)
- mis->compflags &= ~(SC_DECOMP_ON | SC_COMP_ON);
- else
- is->compflags &= ~(SC_LINK_DECOMP_ON | SC_LINK_COMP_ON);
- break;
- case CCP_CONFACK:
-		/* if we RECEIVE an acknowledge we enable the decompressor */
- if (is->debug & 0x10)
- printk(KERN_DEBUG "Enable decompression here!\n");
- if (proto == PPP_CCP) {
- if (!mis->decompressor)
- break;
- mis->compflags |= SC_DECOMP_ON;
- } else {
- if (!is->decompressor)
- break;
- is->compflags |= SC_LINK_DECOMP_ON;
- }
- break;
-
- case CCP_RESETACK:
- printk(KERN_DEBUG "Received ResetAck from peer\n");
- len = (skb->data[2] << 8) | skb->data[3];
- len -= 4;
-
- if (proto == PPP_CCP) {
- /* If a reset Ack was outstanding for this id, then
- clean up the state engine */
- isdn_ppp_ccp_reset_ack_rcvd(mis, skb->data[1]);
- if (mis->decompressor && mis->decomp_stat)
- mis->decompressor->
- reset(mis->decomp_stat,
- skb->data[0],
- skb->data[1],
- len ? &skb->data[4] : NULL,
- len, NULL);
- /* TODO: This is not easy to decide here */
- mis->compflags &= ~SC_DECOMP_DISCARD;
- }
- else {
- isdn_ppp_ccp_reset_ack_rcvd(is, skb->data[1]);
- if (is->link_decompressor && is->link_decomp_stat)
- is->link_decompressor->
- reset(is->link_decomp_stat,
- skb->data[0],
- skb->data[1],
- len ? &skb->data[4] : NULL,
- len, NULL);
- /* TODO: neither here */
- is->compflags &= ~SC_LINK_DECOMP_DISCARD;
- }
- break;
-
- case CCP_RESETREQ:
- printk(KERN_DEBUG "Received ResetReq from peer\n");
- /* Receiving a ResetReq means we must reset our compressor */
- /* Set up reset params for the reset entry */
- memset(&rsparm, 0, sizeof(rsparm));
- rsparm.data = rsdata;
- rsparm.maxdlen = IPPP_RESET_MAXDATABYTES;
- /* Isolate data length */
- len = (skb->data[2] << 8) | skb->data[3];
- len -= 4;
- if (proto == PPP_CCP) {
- if (mis->compressor && mis->comp_stat)
- mis->compressor->
- reset(mis->comp_stat,
- skb->data[0],
- skb->data[1],
- len ? &skb->data[4] : NULL,
- len, &rsparm);
- }
- else {
- if (is->link_compressor && is->link_comp_stat)
- is->link_compressor->
- reset(is->link_comp_stat,
- skb->data[0],
- skb->data[1],
- len ? &skb->data[4] : NULL,
- len, &rsparm);
- }
- /* Ack the Req as specified by rsparm */
- if (rsparm.valid) {
- /* Compressor reset handler decided how to answer */
- if (rsparm.rsend) {
- /* We should send a Frame */
- isdn_ppp_ccp_xmit_reset(is, proto, CCP_RESETACK,
- rsparm.idval ? rsparm.id
- : skb->data[1],
- rsparm.dtval ?
- rsparm.data : NULL,
- rsparm.dtval ?
- rsparm.dlen : 0);
- } else {
- printk(KERN_DEBUG "ResetAck suppressed\n");
- }
- } else {
- /* We answer with a straight reflected Ack */
- isdn_ppp_ccp_xmit_reset(is, proto, CCP_RESETACK,
- skb->data[1],
- len ? &skb->data[4] : NULL,
- len);
- }
- break;
- }
-}
-
-
-/*
- * Daemon sends a CCP frame ...
- */
-
-/* TODO: Clean this up with new Reset semantics */
-
-/* I believe the CCP handling as-is is done wrong. Compressed frames
- * should only be sent/received after CCP reaches UP state, which means
- * both sides have sent CONF_ACK. Currently, we handle both directions
- * independently, which means we may accept compressed frames too early
- * (supposedly not a problem), but also that we may send compressed frames
- * too early, which may turn out to be a problem.
- * This part of the state machine should actually be handled by (i)pppd, but
- * that's too big of a change now. --kai
- */
-
-/* Actually, we might turn this into an advantage: deal with the RFC in
- * the old tradition of being generous in what we accept, but being
- * strict in what we send. Thus we should just
- * - accept compressed frames as soon as decompression is negotiated
- * - send compressed frames only when decomp *and* comp are negotiated
- * - drop rx compressed frames if we cannot decomp (instead of pushing them
- * up to ipppd)
- * and I tried to modify this file according to that. --abp
- */
-
-static void isdn_ppp_send_ccp(isdn_net_dev *net_dev, isdn_net_local *lp, struct sk_buff *skb)
-{
- struct ippp_struct *mis, *is;
- int proto, slot = lp->ppp_slot;
- unsigned char *data;
-
- if (!skb || skb->len < 3)
- return;
- if (slot < 0 || slot >= ISDN_MAX_CHANNELS) {
- printk(KERN_ERR "%s: lp->ppp_slot(%d) out of range\n",
- __func__, slot);
- return;
- }
- is = ippp_table[slot];
- /* Daemon may send with or without address and control field comp */
- data = skb->data;
- if (!(is->pppcfg & SC_COMP_AC) && data[0] == 0xff && data[1] == 0x03) {
- data += 2;
- if (skb->len < 5)
- return;
- }
-
- proto = ((int)data[0]<<8) + data[1];
- if (proto != PPP_CCP && proto != PPP_CCPFRAG)
- return;
-
- printk(KERN_DEBUG "Received CCP frame from daemon:\n");
- isdn_ppp_frame_log("ccp-xmit", skb->data, skb->len, 32, is->unit, lp->ppp_slot);
-
- if (lp->master) {
- slot = ISDN_MASTER_PRIV(lp)->ppp_slot;
- if (slot < 0 || slot >= ISDN_MAX_CHANNELS) {
- printk(KERN_ERR "%s: slot(%d) out of range\n",
- __func__, slot);
- return;
- }
- mis = ippp_table[slot];
- } else
- mis = is;
- if (mis != is)
- printk(KERN_DEBUG "isdn_ppp: Ouch! Master CCP sends on slave slot!\n");
-
- switch (data[2]) {
- case CCP_CONFREQ:
- if (is->debug & 0x10)
- printk(KERN_DEBUG "Disable decompression here!\n");
- if (proto == PPP_CCP)
- is->compflags &= ~SC_DECOMP_ON;
- else
- is->compflags &= ~SC_LINK_DECOMP_ON;
- break;
- case CCP_TERMREQ:
- case CCP_TERMACK:
- if (is->debug & 0x10)
- printk(KERN_DEBUG "Disable (de)compression here!\n");
- if (proto == PPP_CCP)
- is->compflags &= ~(SC_DECOMP_ON | SC_COMP_ON);
- else
- is->compflags &= ~(SC_LINK_DECOMP_ON | SC_LINK_COMP_ON);
- break;
- case CCP_CONFACK:
-		/* if we SEND an acknowledge we can/must enable the compressor */
- if (is->debug & 0x10)
- printk(KERN_DEBUG "Enable compression here!\n");
- if (proto == PPP_CCP) {
- if (!is->compressor)
- break;
- is->compflags |= SC_COMP_ON;
- } else {
- if (!is->compressor)
- break;
- is->compflags |= SC_LINK_COMP_ON;
- }
- break;
- case CCP_RESETACK:
-		/* If we send an ACK we should reset our compressor */
- if (is->debug & 0x10)
- printk(KERN_DEBUG "Reset decompression state here!\n");
- printk(KERN_DEBUG "ResetAck from daemon passed by\n");
- if (proto == PPP_CCP) {
- /* link to master? */
- if (is->compressor && is->comp_stat)
- is->compressor->reset(is->comp_stat, 0, 0,
- NULL, 0, NULL);
- is->compflags &= ~SC_COMP_DISCARD;
- }
- else {
- if (is->link_compressor && is->link_comp_stat)
- is->link_compressor->reset(is->link_comp_stat,
- 0, 0, NULL, 0, NULL);
- is->compflags &= ~SC_LINK_COMP_DISCARD;
- }
- break;
- case CCP_RESETREQ:
- /* Just let it pass by */
- printk(KERN_DEBUG "ResetReq from daemon passed by\n");
- break;
- }
-}
-
-int isdn_ppp_register_compressor(struct isdn_ppp_compressor *ipc)
-{
- ipc->next = ipc_head;
- ipc->prev = NULL;
- if (ipc_head) {
- ipc_head->prev = ipc;
- }
- ipc_head = ipc;
- return 0;
-}
-
-int isdn_ppp_unregister_compressor(struct isdn_ppp_compressor *ipc)
-{
- if (ipc->prev)
- ipc->prev->next = ipc->next;
- else
- ipc_head = ipc->next;
- if (ipc->next)
- ipc->next->prev = ipc->prev;
- ipc->prev = ipc->next = NULL;
- return 0;
-}
-
-static int isdn_ppp_set_compressor(struct ippp_struct *is, struct isdn_ppp_comp_data *data)
-{
- struct isdn_ppp_compressor *ipc = ipc_head;
- int ret;
- void *stat;
- int num = data->num;
-
- if (is->debug & 0x10)
- printk(KERN_DEBUG "[%d] Set %s type %d\n", is->unit,
- (data->flags & IPPP_COMP_FLAG_XMIT) ? "compressor" : "decompressor", num);
-
-	/* If 'is' has no valid reset state vector, we cannot allocate a
- decompressor. The decompressor would cause reset transactions
- sooner or later, and they need that vector. */
-
- if (!(data->flags & IPPP_COMP_FLAG_XMIT) && !is->reset) {
- printk(KERN_ERR "ippp_ccp: no reset data structure - can't"
- " allow decompression.\n");
- return -ENOMEM;
- }
-
- while (ipc) {
- if (ipc->num == num) {
- stat = ipc->alloc(data);
- if (stat) {
- ret = ipc->init(stat, data, is->unit, 0);
- if (!ret) {
- printk(KERN_ERR "Can't init (de)compression!\n");
- ipc->free(stat);
- stat = NULL;
- break;
- }
- }
- else {
- printk(KERN_ERR "Can't alloc (de)compression!\n");
- break;
- }
-
- if (data->flags & IPPP_COMP_FLAG_XMIT) {
- if (data->flags & IPPP_COMP_FLAG_LINK) {
- if (is->link_comp_stat)
- is->link_compressor->free(is->link_comp_stat);
- is->link_comp_stat = stat;
- is->link_compressor = ipc;
- }
- else {
- if (is->comp_stat)
- is->compressor->free(is->comp_stat);
- is->comp_stat = stat;
- is->compressor = ipc;
- }
- }
- else {
- if (data->flags & IPPP_COMP_FLAG_LINK) {
- if (is->link_decomp_stat)
- is->link_decompressor->free(is->link_decomp_stat);
- is->link_decomp_stat = stat;
- is->link_decompressor = ipc;
- }
- else {
- if (is->decomp_stat)
- is->decompressor->free(is->decomp_stat);
- is->decomp_stat = stat;
- is->decompressor = ipc;
- }
- }
- return 0;
- }
- ipc = ipc->next;
- }
- return -EINVAL;
-}
diff --git a/drivers/isdn/i4l/isdn_ppp.h b/drivers/isdn/i4l/isdn_ppp.h
deleted file mode 100644
index 34b8a2ce84f3..000000000000
--- a/drivers/isdn/i4l/isdn_ppp.h
+++ /dev/null
@@ -1,41 +0,0 @@
-/* $Id: isdn_ppp.h,v 1.1.2.2 2004/01/12 22:37:19 keil Exp $
- *
- * header for Linux ISDN subsystem, functions for synchronous PPP (linklevel).
- *
- * Copyright 1995,96 by Michael Hipp (Michael.Hipp@student.uni-tuebingen.de)
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#include <linux/ppp_defs.h> /* for PPP_PROTOCOL */
-#include <linux/isdn_ppp.h> /* for isdn_ppp info */
-
-extern int isdn_ppp_read(int, struct file *, char __user *, int);
-extern int isdn_ppp_write(int, struct file *, const char __user *, int);
-extern int isdn_ppp_open(int, struct file *);
-extern int isdn_ppp_init(void);
-extern void isdn_ppp_cleanup(void);
-extern int isdn_ppp_free(isdn_net_local *);
-extern int isdn_ppp_bind(isdn_net_local *);
-extern int isdn_ppp_autodial_filter(struct sk_buff *, isdn_net_local *);
-extern int isdn_ppp_xmit(struct sk_buff *, struct net_device *);
-extern void isdn_ppp_receive(isdn_net_dev *, isdn_net_local *, struct sk_buff *);
-extern int isdn_ppp_dev_ioctl(struct net_device *, struct ifreq *, int);
-extern __poll_t isdn_ppp_poll(struct file *, struct poll_table_struct *);
-extern int isdn_ppp_ioctl(int, struct file *, unsigned int, unsigned long);
-extern void isdn_ppp_release(int, struct file *);
-extern int isdn_ppp_dial_slave(char *);
-extern void isdn_ppp_wakeup_daemon(isdn_net_local *);
-
-extern int isdn_ppp_register_compressor(struct isdn_ppp_compressor *ipc);
-extern int isdn_ppp_unregister_compressor(struct isdn_ppp_compressor *ipc);
-
-#define IPPP_OPEN 0x01
-#define IPPP_CONNECT 0x02
-#define IPPP_CLOSEWAIT 0x04
-#define IPPP_NOBLOCK 0x08
-#define IPPP_ASSIGNED 0x10
-
-#define IPPP_MAX_HEADER 10
diff --git a/drivers/isdn/i4l/isdn_tty.c b/drivers/isdn/i4l/isdn_tty.c
deleted file mode 100644
index 43700fc19a31..000000000000
--- a/drivers/isdn/i4l/isdn_tty.c
+++ /dev/null
@@ -1,3756 +0,0 @@
-/*
- * Linux ISDN subsystem, tty functions and AT-command emulator (linklevel).
- *
- * Copyright 1994-1999 by Fritz Elfert (fritz@isdn4linux.de)
- * Copyright 1995,96 by Thinking Objects Software GmbH Wuerzburg
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-#undef ISDN_TTY_STAT_DEBUG
-
-#include <linux/isdn.h>
-#include <linux/serial.h> /* ASYNC_* flags */
-#include <linux/slab.h>
-#include <linux/delay.h>
-#include <linux/mutex.h>
-#include <linux/sched/signal.h>
-#include "isdn_common.h"
-#include "isdn_tty.h"
-#ifdef CONFIG_ISDN_AUDIO
-#include "isdn_audio.h"
-#define VBUF 0x3e0
-#define VBUFX (VBUF/16)
-#endif
-
-#define FIX_FILE_TRANSFER
-#define DUMMY_HAYES_AT
-
-/* Prototypes */
-
-static DEFINE_MUTEX(modem_info_mutex);
-static int isdn_tty_edit_at(const char *, int, modem_info *);
-static void isdn_tty_check_esc(const u_char *, u_char, int, int *, u_long *);
-static void isdn_tty_modem_reset_regs(modem_info *, int);
-static void isdn_tty_cmd_ATA(modem_info *);
-static void isdn_tty_flush_buffer(struct tty_struct *);
-static void isdn_tty_modem_result(int, modem_info *);
-#ifdef CONFIG_ISDN_AUDIO
-static int isdn_tty_countDLE(unsigned char *, int);
-#endif
-
-/* Leave this unchanged unless you know what you do! */
-#define MODEM_PARANOIA_CHECK
-#define MODEM_DO_RESTART
-
-static int bit2si[8] =
-{1, 5, 7, 7, 7, 7, 7, 7};
-static int si2bit[8] =
-{4, 1, 4, 4, 4, 4, 4, 4};
-
-/* isdn_tty_try_read() is called from within isdn_tty_rcv_skb()
- * to stuff incoming data directly into a tty's flip-buffer. This
- * is done to speed up tty-receiving if the receive-queue is empty.
- * This routine MUST be called with interrupts off.
- * Return:
- * 1 = Success
- * 0 = Failure, data has to be buffered and later processed by
- * isdn_tty_readmodem().
- */
-static int
-isdn_tty_try_read(modem_info *info, struct sk_buff *skb)
-{
- struct tty_port *port = &info->port;
- int c;
- int len;
- char last;
-
- if (!info->online)
- return 0;
-
- if (!(info->mcr & UART_MCR_RTS))
- return 0;
-
- len = skb->len
-#ifdef CONFIG_ISDN_AUDIO
- + ISDN_AUDIO_SKB_DLECOUNT(skb)
-#endif
- ;
-
- c = tty_buffer_request_room(port, len);
- if (c < len)
- return 0;
-
-#ifdef CONFIG_ISDN_AUDIO
- if (ISDN_AUDIO_SKB_DLECOUNT(skb)) {
- int l = skb->len;
- unsigned char *dp = skb->data;
- while (--l) {
- if (*dp == DLE)
- tty_insert_flip_char(port, DLE, 0);
- tty_insert_flip_char(port, *dp++, 0);
- }
- if (*dp == DLE)
- tty_insert_flip_char(port, DLE, 0);
- last = *dp;
- } else {
-#endif
- if (len > 1)
- tty_insert_flip_string(port, skb->data, len - 1);
- last = skb->data[len - 1];
-#ifdef CONFIG_ISDN_AUDIO
- }
-#endif
- if (info->emu.mdmreg[REG_CPPP] & BIT_CPPP)
- tty_insert_flip_char(port, last, 0xFF);
- else
- tty_insert_flip_char(port, last, TTY_NORMAL);
- tty_flip_buffer_push(port);
- kfree_skb(skb);
-
- return 1;
-}
-
-/* isdn_tty_readmodem() is called periodically from within timer-interrupt.
- * It tries to get received data from the receive queue and stuff it into
- * the tty's flip-buffer.
- */
-void
-isdn_tty_readmodem(void)
-{
- int resched = 0;
- int midx;
- int i;
- int r;
- modem_info *info;
-
- for (i = 0; i < ISDN_MAX_CHANNELS; i++) {
- midx = dev->m_idx[i];
- if (midx < 0)
- continue;
-
- info = &dev->mdm.info[midx];
- if (!info->online)
- continue;
-
- r = 0;
-#ifdef CONFIG_ISDN_AUDIO
- isdn_audio_eval_dtmf(info);
- if ((info->vonline & 1) && (info->emu.vpar[1]))
- isdn_audio_eval_silence(info);
-#endif
- if (info->mcr & UART_MCR_RTS) {
- /* CISCO AsyncPPP Hack */
- if (!(info->emu.mdmreg[REG_CPPP] & BIT_CPPP))
- r = isdn_readbchan_tty(info->isdn_driver,
- info->isdn_channel,
- &info->port, 0);
- else
- r = isdn_readbchan_tty(info->isdn_driver,
- info->isdn_channel,
- &info->port, 1);
- if (r)
- tty_flip_buffer_push(&info->port);
- } else
- r = 1;
-
- if (r) {
- info->rcvsched = 0;
- resched = 1;
- } else
- info->rcvsched = 1;
- }
- if (!resched)
- isdn_timer_ctrl(ISDN_TIMER_MODEMREAD, 0);
-}
-
-int
-isdn_tty_rcv_skb(int i, int di, int channel, struct sk_buff *skb)
-{
- ulong flags;
- int midx;
-#ifdef CONFIG_ISDN_AUDIO
- int ifmt;
-#endif
- modem_info *info;
-
- if ((midx = dev->m_idx[i]) < 0) {
- /* if midx is invalid, packet is not for tty */
- return 0;
- }
- info = &dev->mdm.info[midx];
-#ifdef CONFIG_ISDN_AUDIO
- ifmt = 1;
-
- if ((info->vonline) && (!info->emu.vpar[4]))
- isdn_audio_calc_dtmf(info, skb->data, skb->len, ifmt);
- if ((info->vonline & 1) && (info->emu.vpar[1]))
- isdn_audio_calc_silence(info, skb->data, skb->len, ifmt);
-#endif
- if ((info->online < 2)
-#ifdef CONFIG_ISDN_AUDIO
- && (!(info->vonline & 1))
-#endif
- ) {
- /* If Modem not listening, drop data */
- kfree_skb(skb);
- return 1;
- }
- if (info->emu.mdmreg[REG_T70] & BIT_T70) {
- if (info->emu.mdmreg[REG_T70] & BIT_T70_EXT) {
- /* T.70 decoding: throw away the T.70 header (2 or 4 bytes) */
- if (skb->data[0] == 3) /* pure data packet -> 4 byte headers */
- skb_pull(skb, 4);
- else
- if (skb->data[0] == 1) /* keepalive packet -> 2 byte hdr */
- skb_pull(skb, 2);
- } else
- /* T.70 decoding: Simply throw away the T.70 header (4 bytes) */
- if ((skb->data[0] == 1) && ((skb->data[1] == 0) || (skb->data[1] == 1)))
- skb_pull(skb, 4);
- }
-#ifdef CONFIG_ISDN_AUDIO
- ISDN_AUDIO_SKB_DLECOUNT(skb) = 0;
- ISDN_AUDIO_SKB_LOCK(skb) = 0;
- if (info->vonline & 1) {
- /* voice conversion/compression */
- switch (info->emu.vpar[3]) {
- case 2:
- case 3:
- case 4:
- /* adpcm
- * Since compressed data takes less
- * space, we can overwrite the buffer.
- */
- skb_trim(skb, isdn_audio_xlaw2adpcm(info->adpcmr,
- ifmt,
- skb->data,
- skb->data,
- skb->len));
- break;
- case 5:
- /* a-law */
- if (!ifmt)
- isdn_audio_ulaw2alaw(skb->data, skb->len);
- break;
- case 6:
- /* u-law */
- if (ifmt)
- isdn_audio_alaw2ulaw(skb->data, skb->len);
- break;
- }
- ISDN_AUDIO_SKB_DLECOUNT(skb) =
- isdn_tty_countDLE(skb->data, skb->len);
- }
-#ifdef CONFIG_ISDN_TTY_FAX
- else {
- if (info->faxonline & 2) {
- isdn_tty_fax_bitorder(info, skb);
- ISDN_AUDIO_SKB_DLECOUNT(skb) =
- isdn_tty_countDLE(skb->data, skb->len);
- }
- }
-#endif
-#endif
- /* Try to deliver directly via tty-buf if queue is empty */
- spin_lock_irqsave(&info->readlock, flags);
- if (skb_queue_empty(&dev->drv[di]->rpqueue[channel]))
- if (isdn_tty_try_read(info, skb)) {
- spin_unlock_irqrestore(&info->readlock, flags);
- return 1;
- }
-	/* Direct delivery failed or queue wasn't empty.
- * Queue up for later dequeueing via timer-irq.
- */
- __skb_queue_tail(&dev->drv[di]->rpqueue[channel], skb);
- dev->drv[di]->rcvcount[channel] +=
- (skb->len
-#ifdef CONFIG_ISDN_AUDIO
- + ISDN_AUDIO_SKB_DLECOUNT(skb)
-#endif
- );
- spin_unlock_irqrestore(&info->readlock, flags);
- /* Schedule dequeuing */
- if ((dev->modempoll) && (info->rcvsched))
- isdn_timer_ctrl(ISDN_TIMER_MODEMREAD, 1);
- return 1;
-}
-
-static void
-isdn_tty_cleanup_xmit(modem_info *info)
-{
- skb_queue_purge(&info->xmit_queue);
-#ifdef CONFIG_ISDN_AUDIO
- skb_queue_purge(&info->dtmf_queue);
-#endif
-}
-
-static void
-isdn_tty_tint(modem_info *info)
-{
- struct sk_buff *skb = skb_dequeue(&info->xmit_queue);
- int len, slen;
-
- if (!skb)
- return;
- len = skb->len;
- if ((slen = isdn_writebuf_skb_stub(info->isdn_driver,
- info->isdn_channel, 1, skb)) == len) {
- struct tty_struct *tty = info->port.tty;
- info->send_outstanding++;
- info->msr &= ~UART_MSR_CTS;
- info->lsr &= ~UART_LSR_TEMT;
- tty_wakeup(tty);
- return;
- }
- if (slen < 0) {
- /* Error: no channel, already shutdown, or wrong parameter */
- dev_kfree_skb(skb);
- return;
- }
- skb_queue_head(&info->xmit_queue, skb);
-}
-
-#ifdef CONFIG_ISDN_AUDIO
-static int
-isdn_tty_countDLE(unsigned char *buf, int len)
-{
- int count = 0;
-
- while (len--)
- if (*buf++ == DLE)
- count++;
- return count;
-}
-
-/* This routine is called from within isdn_tty_write() to perform
- * DLE-decoding when sending audio-data.
- */
-static int
-isdn_tty_handleDLEdown(modem_info *info, atemu *m, int len)
-{
- unsigned char *p = &info->port.xmit_buf[info->xmit_count];
- int count = 0;
-
- while (len > 0) {
- if (m->lastDLE) {
- m->lastDLE = 0;
- switch (*p) {
- case DLE:
- /* Escape code */
- if (len > 1)
- memmove(p, p + 1, len - 1);
- p--;
- count++;
- break;
- case ETX:
- /* End of data */
- info->vonline |= 4;
- return count;
- case DC4:
- /* Abort RX */
- info->vonline &= ~1;
-#ifdef ISDN_DEBUG_MODEM_VOICE
- printk(KERN_DEBUG
- "DLEdown: got DLE-DC4, send DLE-ETX on ttyI%d\n",
- info->line);
-#endif
- isdn_tty_at_cout("\020\003", info);
- if (!info->vonline) {
-#ifdef ISDN_DEBUG_MODEM_VOICE
- printk(KERN_DEBUG
- "DLEdown: send VCON on ttyI%d\n",
- info->line);
-#endif
- isdn_tty_at_cout("\r\nVCON\r\n", info);
- }
- /* Fall through */
- case 'q':
- case 's':
- /* Silence */
- if (len > 1)
- memmove(p, p + 1, len - 1);
- p--;
- break;
- }
- } else {
- if (*p == DLE)
- m->lastDLE = 1;
- else
- count++;
- }
- p++;
- len--;
- }
- if (len < 0) {
- printk(KERN_WARNING "isdn_tty: len<0 in DLEdown\n");
- return 0;
- }
- return count;
-}
-
-/* This routine is called from within isdn_tty_write() when receiving
- * audio-data. It interrupts receiving if a character other than
- * ^S or ^Q is sent.
- */
-static int
-isdn_tty_end_vrx(const char *buf, int c)
-{
- char ch;
-
- while (c--) {
- ch = *buf;
- if ((ch != 0x11) && (ch != 0x13))
- return 1;
- buf++;
- }
- return 0;
-}
-
-static int voice_cf[7] =
-{0, 0, 4, 3, 2, 0, 0};
-
-#endif /* CONFIG_ISDN_AUDIO */
-
-/* isdn_tty_senddown() is called either directly from within isdn_tty_write()
- * or via timer-interrupt from within isdn_tty_modem_xmit(). It pulls
- * outgoing data from the tty's xmit-buffer, handles voice-decompression or
- * T.70 if necessary, and finally queues it up for sending via isdn_tty_tint.
- */
-static void
-isdn_tty_senddown(modem_info *info)
-{
- int buflen;
- int skb_res;
-#ifdef CONFIG_ISDN_AUDIO
- int audio_len;
-#endif
- struct sk_buff *skb;
-
-#ifdef CONFIG_ISDN_AUDIO
- if (info->vonline & 4) {
- info->vonline &= ~6;
- if (!info->vonline) {
-#ifdef ISDN_DEBUG_MODEM_VOICE
- printk(KERN_DEBUG
- "senddown: send VCON on ttyI%d\n",
- info->line);
-#endif
- isdn_tty_at_cout("\r\nVCON\r\n", info);
- }
- }
-#endif
- if (!(buflen = info->xmit_count))
- return;
- if ((info->emu.mdmreg[REG_CTS] & BIT_CTS) != 0)
- info->msr &= ~UART_MSR_CTS;
- info->lsr &= ~UART_LSR_TEMT;
- /* info->xmit_count is modified here and in isdn_tty_write().
- * So we return here if isdn_tty_write() is in the
- * critical section.
- */
- atomic_inc(&info->xmit_lock);
- if (!(atomic_dec_and_test(&info->xmit_lock)))
- return;
- if (info->isdn_driver < 0) {
- info->xmit_count = 0;
- return;
- }
- skb_res = dev->drv[info->isdn_driver]->interface->hl_hdrlen + 4;
-#ifdef CONFIG_ISDN_AUDIO
- if (info->vonline & 2)
- audio_len = buflen * voice_cf[info->emu.vpar[3]];
- else
- audio_len = 0;
- skb = dev_alloc_skb(skb_res + buflen + audio_len);
-#else
- skb = dev_alloc_skb(skb_res + buflen);
-#endif
- if (!skb) {
- printk(KERN_WARNING
- "isdn_tty: Out of memory in ttyI%d senddown\n",
- info->line);
- return;
- }
- skb_reserve(skb, skb_res);
- skb_put_data(skb, info->port.xmit_buf, buflen);
- info->xmit_count = 0;
-#ifdef CONFIG_ISDN_AUDIO
- if (info->vonline & 2) {
- /* For now, ifmt is fixed to 1 (alaw), since this
- * is used with ISDN everywhere in the world, except
- * US, Canada and Japan.
- * Later, when US-ISDN protocols are implemented,
- * this setting will depend on the D-channel protocol.
- */
- int ifmt = 1;
-
- /* voice conversion/decompression */
- switch (info->emu.vpar[3]) {
- case 2:
- case 3:
- case 4:
- /* adpcm, compatible to ZyXel 1496 modem
- * with ROM revision 6.01
- */
- audio_len = isdn_audio_adpcm2xlaw(info->adpcms,
- ifmt,
- skb->data,
- skb_put(skb, audio_len),
- buflen);
- skb_pull(skb, buflen);
- skb_trim(skb, audio_len);
- break;
- case 5:
- /* a-law */
- if (!ifmt)
- isdn_audio_alaw2ulaw(skb->data,
- buflen);
- break;
- case 6:
- /* u-law */
- if (ifmt)
- isdn_audio_ulaw2alaw(skb->data,
- buflen);
- break;
- }
- }
-#endif /* CONFIG_ISDN_AUDIO */
- if (info->emu.mdmreg[REG_T70] & BIT_T70) {
- /* Add T.70 simplified header */
- if (info->emu.mdmreg[REG_T70] & BIT_T70_EXT)
- memcpy(skb_push(skb, 2), "\1\0", 2);
- else
- memcpy(skb_push(skb, 4), "\1\0\1\0", 4);
- }
- skb_queue_tail(&info->xmit_queue, skb);
-}
-
-/************************************************************
- *
- * Modem-functions
- *
- * mostly "stolen" from original Linux-serial.c and friends.
- *
- ************************************************************/
-
-/* The next routine is called once from within a timer interrupt
- * triggered by isdn_tty_modem_ncarrier(). It calls
- * isdn_tty_modem_result() to stuff a "NO CARRIER" message
- * into the tty's buffer.
- */
-static void
-isdn_tty_modem_do_ncarrier(struct timer_list *t)
-{
- modem_info *info = from_timer(info, t, nc_timer);
- isdn_tty_modem_result(RESULT_NO_CARRIER, info);
-}
-
-/* The next routine is called whenever the DTR signal is raised.
- * It checks the ncarrier flag and triggers the above routine
- * when necessary. The ncarrier flag is set whenever DTR goes
- * low.
- */
-static void
-isdn_tty_modem_ncarrier(modem_info *info)
-{
- if (info->ncarrier) {
- info->nc_timer.expires = jiffies + HZ;
- add_timer(&info->nc_timer);
- }
-}
-
-/*
- * return the usage calculated by si and layer 2 protocol
- */
-static int
-isdn_calc_usage(int si, int l2)
-{
- int usg = ISDN_USAGE_MODEM;
-
-#ifdef CONFIG_ISDN_AUDIO
- if (si == 1) {
- switch (l2) {
- case ISDN_PROTO_L2_MODEM:
- usg = ISDN_USAGE_MODEM;
- break;
-#ifdef CONFIG_ISDN_TTY_FAX
- case ISDN_PROTO_L2_FAX:
- usg = ISDN_USAGE_FAX;
- break;
-#endif
- case ISDN_PROTO_L2_TRANS:
- default:
- usg = ISDN_USAGE_VOICE;
- break;
- }
- }
-#endif
- return (usg);
-}
-
-/* isdn_tty_dial() performs dialing of a tty and the necessary
- * setup of the lower levels before that.
- */
-static void
-isdn_tty_dial(char *n, modem_info *info, atemu *m)
-{
- int usg = ISDN_USAGE_MODEM;
- int si = 7;
- int l2 = m->mdmreg[REG_L2PROT];
- u_long flags;
- isdn_ctrl cmd;
- int i;
- int j;
-
- for (j = 7; j >= 0; j--)
- if (m->mdmreg[REG_SI1] & (1 << j)) {
- si = bit2si[j];
- break;
- }
- usg = isdn_calc_usage(si, l2);
-#ifdef CONFIG_ISDN_AUDIO
- if ((si == 1) &&
- (l2 != ISDN_PROTO_L2_MODEM)
-#ifdef CONFIG_ISDN_TTY_FAX
- && (l2 != ISDN_PROTO_L2_FAX)
-#endif
- ) {
- l2 = ISDN_PROTO_L2_TRANS;
- usg = ISDN_USAGE_VOICE;
- }
-#endif
- m->mdmreg[REG_SI1I] = si2bit[si];
- spin_lock_irqsave(&dev->lock, flags);
- i = isdn_get_free_channel(usg, l2, m->mdmreg[REG_L3PROT], -1, -1, m->msn);
- if (i < 0) {
- spin_unlock_irqrestore(&dev->lock, flags);
- isdn_tty_modem_result(RESULT_NO_DIALTONE, info);
- } else {
- info->isdn_driver = dev->drvmap[i];
- info->isdn_channel = dev->chanmap[i];
- info->drv_index = i;
- dev->m_idx[i] = info->line;
- dev->usage[i] |= ISDN_USAGE_OUTGOING;
- info->last_dir = 1;
- strcpy(info->last_num, n);
- isdn_info_update();
- spin_unlock_irqrestore(&dev->lock, flags);
- cmd.driver = info->isdn_driver;
- cmd.arg = info->isdn_channel;
- cmd.command = ISDN_CMD_CLREAZ;
- isdn_command(&cmd);
- strcpy(cmd.parm.num, isdn_map_eaz2msn(m->msn, info->isdn_driver));
- cmd.driver = info->isdn_driver;
- cmd.command = ISDN_CMD_SETEAZ;
- isdn_command(&cmd);
- cmd.driver = info->isdn_driver;
- cmd.command = ISDN_CMD_SETL2;
- info->last_l2 = l2;
- cmd.arg = info->isdn_channel + (l2 << 8);
- isdn_command(&cmd);
- cmd.driver = info->isdn_driver;
- cmd.command = ISDN_CMD_SETL3;
- cmd.arg = info->isdn_channel + (m->mdmreg[REG_L3PROT] << 8);
-#ifdef CONFIG_ISDN_TTY_FAX
- if (l2 == ISDN_PROTO_L2_FAX) {
- cmd.parm.fax = info->fax;
- info->fax->direction = ISDN_TTY_FAX_CONN_OUT;
- }
-#endif
- isdn_command(&cmd);
- cmd.driver = info->isdn_driver;
- cmd.arg = info->isdn_channel;
- sprintf(cmd.parm.setup.phone, "%s", n);
- sprintf(cmd.parm.setup.eazmsn, "%s",
- isdn_map_eaz2msn(m->msn, info->isdn_driver));
- cmd.parm.setup.si1 = si;
- cmd.parm.setup.si2 = m->mdmreg[REG_SI2];
- cmd.command = ISDN_CMD_DIAL;
- info->dialing = 1;
- info->emu.carrierwait = 0;
- strcpy(dev->num[i], n);
- isdn_info_update();
- isdn_command(&cmd);
- isdn_timer_ctrl(ISDN_TIMER_CARRIER, 1);
- }
-}
-
-/* isdn_tty_modem_hup() disassociates a tty from the real
- * ISDN line (hangup). The usage status is cleared
- * and some additional cleanup is done.
- */
-void
-isdn_tty_modem_hup(modem_info *info, int local)
-{
- isdn_ctrl cmd;
- int di, ch;
-
- if (!info)
- return;
-
- di = info->isdn_driver;
- ch = info->isdn_channel;
- if (di < 0 || ch < 0)
- return;
-
- info->isdn_driver = -1;
- info->isdn_channel = -1;
-
-#ifdef ISDN_DEBUG_MODEM_HUP
- printk(KERN_DEBUG "Mhup ttyI%d\n", info->line);
-#endif
- info->rcvsched = 0;
- isdn_tty_flush_buffer(info->port.tty);
- if (info->online) {
- info->last_lhup = local;
- info->online = 0;
- isdn_tty_modem_result(RESULT_NO_CARRIER, info);
- }
-#ifdef CONFIG_ISDN_AUDIO
- info->vonline = 0;
-#ifdef CONFIG_ISDN_TTY_FAX
- info->faxonline = 0;
- info->fax->phase = ISDN_FAX_PHASE_IDLE;
-#endif
- info->emu.vpar[4] = 0;
- info->emu.vpar[5] = 8;
- kfree(info->dtmf_state);
- info->dtmf_state = NULL;
- kfree(info->silence_state);
- info->silence_state = NULL;
- kfree(info->adpcms);
- info->adpcms = NULL;
- kfree(info->adpcmr);
- info->adpcmr = NULL;
-#endif
- if ((info->msr & UART_MSR_RI) &&
- (info->emu.mdmreg[REG_RUNG] & BIT_RUNG))
- isdn_tty_modem_result(RESULT_RUNG, info);
- info->msr &= ~(UART_MSR_DCD | UART_MSR_RI);
- info->lsr |= UART_LSR_TEMT;
-
- if (local) {
- cmd.driver = di;
- cmd.command = ISDN_CMD_HANGUP;
- cmd.arg = ch;
- isdn_command(&cmd);
- }
-
- isdn_all_eaz(di, ch);
- info->emu.mdmreg[REG_RINGCNT] = 0;
- isdn_free_channel(di, ch, 0);
-
- if (info->drv_index >= 0) {
- dev->m_idx[info->drv_index] = -1;
- info->drv_index = -1;
- }
-}
-
-/*
- * Begin of a CAPI like interface, currently used only for
- * supplementary service (CAPI 2.0 part III)
- */
-#include <linux/isdn/capicmd.h>
-#include <linux/module.h>
-
-int
-isdn_tty_capi_facility(capi_msg *cm) {
- return (-1); /* dummy */
-}
-
-/* isdn_tty_suspend() tries to suspend the current tty connection
- */
-static void
-isdn_tty_suspend(char *id, modem_info *info, atemu *m)
-{
- isdn_ctrl cmd;
-
- int l;
-
- if (!info)
- return;
-
-#ifdef ISDN_DEBUG_MODEM_SERVICES
- printk(KERN_DEBUG "Msusp ttyI%d\n", info->line);
-#endif
- l = strlen(id);
- if ((info->isdn_driver >= 0)) {
- cmd.parm.cmsg.Length = l + 18;
- cmd.parm.cmsg.Command = CAPI_FACILITY;
- cmd.parm.cmsg.Subcommand = CAPI_REQ;
- cmd.parm.cmsg.adr.Controller = info->isdn_driver + 1;
-		cmd.parm.cmsg.para[0] = 3;	/* 16 bit 0x0003 supplementary service */
- cmd.parm.cmsg.para[1] = 0;
- cmd.parm.cmsg.para[2] = l + 3;
- cmd.parm.cmsg.para[3] = 4; /* 16 bit 0x0004 Suspend */
- cmd.parm.cmsg.para[4] = 0;
- cmd.parm.cmsg.para[5] = l;
- memcpy(&cmd.parm.cmsg.para[6], id, l);
- cmd.command = CAPI_PUT_MESSAGE;
- cmd.driver = info->isdn_driver;
- cmd.arg = info->isdn_channel;
- isdn_command(&cmd);
- }
-}
-
-/* isdn_tty_resume() tries to resume a suspended call and does the
- * necessary setup of the lower levels before that. Unfortunately there is
- * no checking for compatibility of the protocols used, as implemented by Q.931.
- * It does the same things as isdn_tty_dial(); only the last command
- * is different, so maybe we can merge them.
- */
-
-static void
-isdn_tty_resume(char *id, modem_info *info, atemu *m)
-{
- int usg = ISDN_USAGE_MODEM;
- int si = 7;
- int l2 = m->mdmreg[REG_L2PROT];
- isdn_ctrl cmd;
- ulong flags;
- int i;
- int j;
- int l;
-
- l = strlen(id);
- for (j = 7; j >= 0; j--)
- if (m->mdmreg[REG_SI1] & (1 << j)) {
- si = bit2si[j];
- break;
- }
- usg = isdn_calc_usage(si, l2);
-#ifdef CONFIG_ISDN_AUDIO
- if ((si == 1) &&
- (l2 != ISDN_PROTO_L2_MODEM)
-#ifdef CONFIG_ISDN_TTY_FAX
- && (l2 != ISDN_PROTO_L2_FAX)
-#endif
- ) {
- l2 = ISDN_PROTO_L2_TRANS;
- usg = ISDN_USAGE_VOICE;
- }
-#endif
- m->mdmreg[REG_SI1I] = si2bit[si];
- spin_lock_irqsave(&dev->lock, flags);
- i = isdn_get_free_channel(usg, l2, m->mdmreg[REG_L3PROT], -1, -1, m->msn);
- if (i < 0) {
- spin_unlock_irqrestore(&dev->lock, flags);
- isdn_tty_modem_result(RESULT_NO_DIALTONE, info);
- } else {
- info->isdn_driver = dev->drvmap[i];
- info->isdn_channel = dev->chanmap[i];
- info->drv_index = i;
- dev->m_idx[i] = info->line;
- dev->usage[i] |= ISDN_USAGE_OUTGOING;
- info->last_dir = 1;
-// strcpy(info->last_num, n);
- isdn_info_update();
- spin_unlock_irqrestore(&dev->lock, flags);
- cmd.driver = info->isdn_driver;
- cmd.arg = info->isdn_channel;
- cmd.command = ISDN_CMD_CLREAZ;
- isdn_command(&cmd);
- strcpy(cmd.parm.num, isdn_map_eaz2msn(m->msn, info->isdn_driver));
- cmd.driver = info->isdn_driver;
- cmd.command = ISDN_CMD_SETEAZ;
- isdn_command(&cmd);
- cmd.driver = info->isdn_driver;
- cmd.command = ISDN_CMD_SETL2;
- info->last_l2 = l2;
- cmd.arg = info->isdn_channel + (l2 << 8);
- isdn_command(&cmd);
- cmd.driver = info->isdn_driver;
- cmd.command = ISDN_CMD_SETL3;
- cmd.arg = info->isdn_channel + (m->mdmreg[REG_L3PROT] << 8);
- isdn_command(&cmd);
- cmd.driver = info->isdn_driver;
- cmd.arg = info->isdn_channel;
- cmd.parm.cmsg.Length = l + 18;
- cmd.parm.cmsg.Command = CAPI_FACILITY;
- cmd.parm.cmsg.Subcommand = CAPI_REQ;
- cmd.parm.cmsg.adr.Controller = info->isdn_driver + 1;
- cmd.parm.cmsg.para[0] = 3; /* 16 bit 0x0003 supplementary service */
- cmd.parm.cmsg.para[1] = 0;
- cmd.parm.cmsg.para[2] = l + 3;
- cmd.parm.cmsg.para[3] = 5; /* 16 bit 0x0005 Resume */
- cmd.parm.cmsg.para[4] = 0;
- cmd.parm.cmsg.para[5] = l;
- memcpy(&cmd.parm.cmsg.para[6], id, l);
- cmd.command = CAPI_PUT_MESSAGE;
- info->dialing = 1;
-// strcpy(dev->num[i], n);
- isdn_info_update();
- isdn_command(&cmd);
- isdn_timer_ctrl(ISDN_TIMER_CARRIER, 1);
- }
-}
-
- /* isdn_tty_send_msg() sends a message to an HL driver.
- * This is used with hybrid modem cards to send AT commands to the card.
- */
-
-static void
-isdn_tty_send_msg(modem_info *info, atemu *m, char *msg)
-{
- int usg = ISDN_USAGE_MODEM;
- int si = 7;
- int l2 = m->mdmreg[REG_L2PROT];
- isdn_ctrl cmd;
- ulong flags;
- int i;
- int j;
- int l;
-
- l = min(strlen(msg), sizeof(cmd.parm) - sizeof(cmd.parm.cmsg)
- + sizeof(cmd.parm.cmsg.para) - 2);
-
- if (!l) {
- isdn_tty_modem_result(RESULT_ERROR, info);
- return;
- }
- for (j = 7; j >= 0; j--)
- if (m->mdmreg[REG_SI1] & (1 << j)) {
- si = bit2si[j];
- break;
- }
- usg = isdn_calc_usage(si, l2);
-#ifdef CONFIG_ISDN_AUDIO
- if ((si == 1) &&
- (l2 != ISDN_PROTO_L2_MODEM)
-#ifdef CONFIG_ISDN_TTY_FAX
- && (l2 != ISDN_PROTO_L2_FAX)
-#endif
- ) {
- l2 = ISDN_PROTO_L2_TRANS;
- usg = ISDN_USAGE_VOICE;
- }
-#endif
- m->mdmreg[REG_SI1I] = si2bit[si];
- spin_lock_irqsave(&dev->lock, flags);
- i = isdn_get_free_channel(usg, l2, m->mdmreg[REG_L3PROT], -1, -1, m->msn);
- if (i < 0) {
- spin_unlock_irqrestore(&dev->lock, flags);
- isdn_tty_modem_result(RESULT_NO_DIALTONE, info);
- } else {
- info->isdn_driver = dev->drvmap[i];
- info->isdn_channel = dev->chanmap[i];
- info->drv_index = i;
- dev->m_idx[i] = info->line;
- dev->usage[i] |= ISDN_USAGE_OUTGOING;
- info->last_dir = 1;
- isdn_info_update();
- spin_unlock_irqrestore(&dev->lock, flags);
- cmd.driver = info->isdn_driver;
- cmd.arg = info->isdn_channel;
- cmd.command = ISDN_CMD_CLREAZ;
- isdn_command(&cmd);
- strcpy(cmd.parm.num, isdn_map_eaz2msn(m->msn, info->isdn_driver));
- cmd.driver = info->isdn_driver;
- cmd.command = ISDN_CMD_SETEAZ;
- isdn_command(&cmd);
- cmd.driver = info->isdn_driver;
- cmd.command = ISDN_CMD_SETL2;
- info->last_l2 = l2;
- cmd.arg = info->isdn_channel + (l2 << 8);
- isdn_command(&cmd);
- cmd.driver = info->isdn_driver;
- cmd.command = ISDN_CMD_SETL3;
- cmd.arg = info->isdn_channel + (m->mdmreg[REG_L3PROT] << 8);
- isdn_command(&cmd);
- cmd.driver = info->isdn_driver;
- cmd.arg = info->isdn_channel;
- cmd.parm.cmsg.Length = l + 14;
- cmd.parm.cmsg.Command = CAPI_MANUFACTURER;
- cmd.parm.cmsg.Subcommand = CAPI_REQ;
- cmd.parm.cmsg.adr.Controller = info->isdn_driver + 1;
- cmd.parm.cmsg.para[0] = l + 1;
- strncpy(&cmd.parm.cmsg.para[1], msg, l);
- cmd.parm.cmsg.para[l + 1] = 0xd;
- cmd.command = CAPI_PUT_MESSAGE;
-/* info->dialing = 1;
- strcpy(dev->num[i], n);
- isdn_info_update();
-*/
- isdn_command(&cmd);
- }
-}
-
-static inline int
-isdn_tty_paranoia_check(modem_info *info, char *name, const char *routine)
-{
-#ifdef MODEM_PARANOIA_CHECK
- if (!info) {
- printk(KERN_WARNING "isdn_tty: null info_struct for %s in %s\n",
- name, routine);
- return 1;
- }
- if (info->magic != ISDN_ASYNC_MAGIC) {
- printk(KERN_WARNING "isdn_tty: bad magic for modem struct %s in %s\n",
- name, routine);
- return 1;
- }
-#endif
- return 0;
-}
-
-/*
- * This routine is called to set the UART divisor registers to match
- * the specified baud rate for a serial port.
- */
-static void
-isdn_tty_change_speed(modem_info *info)
-{
- struct tty_port *port = &info->port;
- uint cflag,
- cval,
- quot;
- int i;
-
- if (!port->tty)
- return;
- cflag = port->tty->termios.c_cflag;
-
- quot = i = cflag & CBAUD;
- if (i & CBAUDEX) {
- i &= ~CBAUDEX;
- if (i < 1 || i > 2)
- port->tty->termios.c_cflag &= ~CBAUDEX;
- else
- i += 15;
- }
- if (quot) {
- info->mcr |= UART_MCR_DTR;
- isdn_tty_modem_ncarrier(info);
- } else {
- info->mcr &= ~UART_MCR_DTR;
- if (info->emu.mdmreg[REG_DTRHUP] & BIT_DTRHUP) {
-#ifdef ISDN_DEBUG_MODEM_HUP
- printk(KERN_DEBUG "Mhup in changespeed\n");
-#endif
- if (info->online)
- info->ncarrier = 1;
- isdn_tty_modem_reset_regs(info, 0);
- isdn_tty_modem_hup(info, 1);
- }
- return;
- }
- /* byte size and parity */
- cval = cflag & (CSIZE | CSTOPB);
- cval >>= 4;
- if (cflag & PARENB)
- cval |= UART_LCR_PARITY;
- if (!(cflag & PARODD))
- cval |= UART_LCR_EPAR;
-
- tty_port_set_check_carrier(port, ~cflag & CLOCAL);
-}
-
-static int
-isdn_tty_startup(modem_info *info)
-{
- if (tty_port_initialized(&info->port))
- return 0;
- isdn_lock_drivers();
-#ifdef ISDN_DEBUG_MODEM_OPEN
- printk(KERN_DEBUG "starting up ttyi%d ...\n", info->line);
-#endif
- /*
- * Now, initialize the UART
- */
- info->mcr = UART_MCR_DTR | UART_MCR_RTS | UART_MCR_OUT2;
- if (info->port.tty)
- clear_bit(TTY_IO_ERROR, &info->port.tty->flags);
- /*
- * and set the speed of the serial port
- */
- isdn_tty_change_speed(info);
-
- tty_port_set_initialized(&info->port, 1);
- info->msr |= (UART_MSR_DSR | UART_MSR_CTS);
- info->send_outstanding = 0;
- return 0;
-}
-
-/*
- * This routine will shut down a serial port; interrupts are disabled, and
- * DTR is dropped if the hangup on close termio flag is on.
- */
-static void
-isdn_tty_shutdown(modem_info *info)
-{
- if (!tty_port_initialized(&info->port))
- return;
-#ifdef ISDN_DEBUG_MODEM_OPEN
- printk(KERN_DEBUG "Shutting down isdnmodem port %d ....\n", info->line);
-#endif
- isdn_unlock_drivers();
- info->msr &= ~UART_MSR_RI;
- if (!info->port.tty || (info->port.tty->termios.c_cflag & HUPCL)) {
- info->mcr &= ~(UART_MCR_DTR | UART_MCR_RTS);
- if (info->emu.mdmreg[REG_DTRHUP] & BIT_DTRHUP) {
- isdn_tty_modem_reset_regs(info, 0);
-#ifdef ISDN_DEBUG_MODEM_HUP
- printk(KERN_DEBUG "Mhup in isdn_tty_shutdown\n");
-#endif
- isdn_tty_modem_hup(info, 1);
- }
- }
- if (info->port.tty)
- set_bit(TTY_IO_ERROR, &info->port.tty->flags);
-
- tty_port_set_initialized(&info->port, 0);
-}
-
- /* isdn_tty_write() is the main send routine. It is called from the
- * upper levels within the kernel to send data. Depending on the
- * online flag it either directs output to the AT-command interpreter
- * or to the lower level. Additional tasks done here:
- * - If online, check for escape-sequence (+++)
- * - If sending audio-data, call isdn_tty_DLEdown() to parse DLE-codes.
- * - If receiving audio-data, call isdn_tty_end_vrx() to abort if needed.
- * - If dialing, abort dial.
- */
-static int
-isdn_tty_write(struct tty_struct *tty, const u_char *buf, int count)
-{
- int c;
- int total = 0;
- modem_info *info = (modem_info *) tty->driver_data;
- atemu *m = &info->emu;
-
- if (isdn_tty_paranoia_check(info, tty->name, "isdn_tty_write"))
- return 0;
- /* See isdn_tty_senddown() */
- atomic_inc(&info->xmit_lock);
- while (1) {
- c = count;
- if (c > info->xmit_size - info->xmit_count)
- c = info->xmit_size - info->xmit_count;
- if (info->isdn_driver >= 0 && c > dev->drv[info->isdn_driver]->maxbufsize)
- c = dev->drv[info->isdn_driver]->maxbufsize;
- if (c <= 0)
- break;
- if ((info->online > 1)
-#ifdef CONFIG_ISDN_AUDIO
- || (info->vonline & 3)
-#endif
- ) {
-#ifdef CONFIG_ISDN_AUDIO
- if (!info->vonline)
-#endif
- isdn_tty_check_esc(buf, m->mdmreg[REG_ESC], c,
- &(m->pluscount),
- &(m->lastplus));
- memcpy(&info->port.xmit_buf[info->xmit_count], buf, c);
-#ifdef CONFIG_ISDN_AUDIO
- if (info->vonline) {
- int cc = isdn_tty_handleDLEdown(info, m, c);
- if (info->vonline & 2) {
- if (!cc) {
- /* If DLE decoding results in zero-transmit, but
- * c originally was non-zero, do a wakeup.
- */
- tty_wakeup(tty);
- info->msr |= UART_MSR_CTS;
- info->lsr |= UART_LSR_TEMT;
- }
- info->xmit_count += cc;
- }
- if ((info->vonline & 3) == 1) {
- /* Do NOT handle Ctrl-Q or Ctrl-S
- * when in full-duplex audio mode.
- */
- if (isdn_tty_end_vrx(buf, c)) {
- info->vonline &= ~1;
-#ifdef ISDN_DEBUG_MODEM_VOICE
- printk(KERN_DEBUG
- "got !^Q/^S, send DLE-ETX,VCON on ttyI%d\n",
- info->line);
-#endif
- isdn_tty_at_cout("\020\003\r\nVCON\r\n", info);
- }
- }
- } else
- if (TTY_IS_FCLASS1(info)) {
- int cc = isdn_tty_handleDLEdown(info, m, c);
-
- if (info->vonline & 4) { /* ETX seen */
- isdn_ctrl c;
-
- c.command = ISDN_CMD_FAXCMD;
- c.driver = info->isdn_driver;
- c.arg = info->isdn_channel;
- c.parm.aux.cmd = ISDN_FAX_CLASS1_CTRL;
- c.parm.aux.subcmd = ETX;
- isdn_command(&c);
- }
- info->vonline = 0;
-#ifdef ISDN_DEBUG_MODEM_VOICE
- printk(KERN_DEBUG "fax dle cc/c %d/%d\n", cc, c);
-#endif
- info->xmit_count += cc;
- } else
-#endif
- info->xmit_count += c;
- } else {
- info->msr |= UART_MSR_CTS;
- info->lsr |= UART_LSR_TEMT;
- if (info->dialing) {
- info->dialing = 0;
-#ifdef ISDN_DEBUG_MODEM_HUP
- printk(KERN_DEBUG "Mhup in isdn_tty_write\n");
-#endif
- isdn_tty_modem_result(RESULT_NO_CARRIER, info);
- isdn_tty_modem_hup(info, 1);
- } else
- c = isdn_tty_edit_at(buf, c, info);
- }
- buf += c;
- count -= c;
- total += c;
- }
- atomic_dec(&info->xmit_lock);
- if ((info->xmit_count) || !skb_queue_empty(&info->xmit_queue)) {
- if (m->mdmreg[REG_DXMT] & BIT_DXMT) {
- isdn_tty_senddown(info);
- isdn_tty_tint(info);
- }
- isdn_timer_ctrl(ISDN_TIMER_MODEMXMIT, 1);
- }
- return total;
-}
-
-static int
-isdn_tty_write_room(struct tty_struct *tty)
-{
- modem_info *info = (modem_info *) tty->driver_data;
- int ret;
-
- if (isdn_tty_paranoia_check(info, tty->name, "isdn_tty_write_room"))
- return 0;
- if (!info->online)
- return info->xmit_size;
- ret = info->xmit_size - info->xmit_count;
- return (ret < 0) ? 0 : ret;
-}
-
-static int
-isdn_tty_chars_in_buffer(struct tty_struct *tty)
-{
- modem_info *info = (modem_info *) tty->driver_data;
-
- if (isdn_tty_paranoia_check(info, tty->name, "isdn_tty_chars_in_buffer"))
- return 0;
- if (!info->online)
- return 0;
- return (info->xmit_count);
-}
-
-static void
-isdn_tty_flush_buffer(struct tty_struct *tty)
-{
- modem_info *info;
-
- if (!tty) {
- return;
- }
- info = (modem_info *) tty->driver_data;
- if (isdn_tty_paranoia_check(info, tty->name, "isdn_tty_flush_buffer")) {
- return;
- }
- isdn_tty_cleanup_xmit(info);
- info->xmit_count = 0;
- tty_wakeup(tty);
-}
-
-static void
-isdn_tty_flush_chars(struct tty_struct *tty)
-{
- modem_info *info = (modem_info *) tty->driver_data;
-
- if (isdn_tty_paranoia_check(info, tty->name, "isdn_tty_flush_chars"))
- return;
- if ((info->xmit_count) || !skb_queue_empty(&info->xmit_queue))
- isdn_timer_ctrl(ISDN_TIMER_MODEMXMIT, 1);
-}
-
-/*
- * ------------------------------------------------------------
- * isdn_tty_throttle()
- *
- * This routine is called by the upper-layer tty layer to signal that
- * incoming characters should be throttled.
- * ------------------------------------------------------------
- */
-static void
-isdn_tty_throttle(struct tty_struct *tty)
-{
- modem_info *info = (modem_info *) tty->driver_data;
-
- if (isdn_tty_paranoia_check(info, tty->name, "isdn_tty_throttle"))
- return;
- if (I_IXOFF(tty))
- info->x_char = STOP_CHAR(tty);
- info->mcr &= ~UART_MCR_RTS;
-}
-
-static void
-isdn_tty_unthrottle(struct tty_struct *tty)
-{
- modem_info *info = (modem_info *) tty->driver_data;
-
- if (isdn_tty_paranoia_check(info, tty->name, "isdn_tty_unthrottle"))
- return;
- if (I_IXOFF(tty)) {
- if (info->x_char)
- info->x_char = 0;
- else
- info->x_char = START_CHAR(tty);
- }
- info->mcr |= UART_MCR_RTS;
-}
-
-/*
- * ------------------------------------------------------------
- * isdn_tty_ioctl() and friends
- * ------------------------------------------------------------
- */
-
-/*
- * isdn_tty_get_lsr_info - get line status register info
- *
- * Purpose: Let the user call ioctl() to get notified when the UART
- * is physically emptied. On bus types like RS485, the transmitter
- * must release the bus after transmitting. This must be done when
- * the transmit shift register is empty, not when the transmit
- * holding register is empty. This functionality allows an RS485
- * driver to be written in user space.
- */
-static int
-isdn_tty_get_lsr_info(modem_info *info, uint __user *value)
-{
- u_char status;
- uint result;
-
- status = info->lsr;
- result = ((status & UART_LSR_TEMT) ? TIOCSER_TEMT : 0);
- return put_user(result, value);
-}
-
-
-static int
-isdn_tty_tiocmget(struct tty_struct *tty)
-{
- modem_info *info = (modem_info *) tty->driver_data;
- u_char control, status;
-
- if (isdn_tty_paranoia_check(info, tty->name, __func__))
- return -ENODEV;
- if (tty_io_error(tty))
- return -EIO;
-
- mutex_lock(&modem_info_mutex);
-#ifdef ISDN_DEBUG_MODEM_IOCTL
- printk(KERN_DEBUG "ttyI%d ioctl TIOCMGET\n", info->line);
-#endif
-
- control = info->mcr;
- status = info->msr;
- mutex_unlock(&modem_info_mutex);
- return ((control & UART_MCR_RTS) ? TIOCM_RTS : 0)
- | ((control & UART_MCR_DTR) ? TIOCM_DTR : 0)
- | ((status & UART_MSR_DCD) ? TIOCM_CAR : 0)
- | ((status & UART_MSR_RI) ? TIOCM_RNG : 0)
- | ((status & UART_MSR_DSR) ? TIOCM_DSR : 0)
- | ((status & UART_MSR_CTS) ? TIOCM_CTS : 0);
-}
-
-static int
-isdn_tty_tiocmset(struct tty_struct *tty,
- unsigned int set, unsigned int clear)
-{
- modem_info *info = (modem_info *) tty->driver_data;
-
- if (isdn_tty_paranoia_check(info, tty->name, __func__))
- return -ENODEV;
- if (tty_io_error(tty))
- return -EIO;
-
-#ifdef ISDN_DEBUG_MODEM_IOCTL
- printk(KERN_DEBUG "ttyI%d ioctl TIOCMxxx: %x %x\n", info->line, set, clear);
-#endif
-
- mutex_lock(&modem_info_mutex);
- if (set & TIOCM_RTS)
- info->mcr |= UART_MCR_RTS;
- if (set & TIOCM_DTR) {
- info->mcr |= UART_MCR_DTR;
- isdn_tty_modem_ncarrier(info);
- }
-
- if (clear & TIOCM_RTS)
- info->mcr &= ~UART_MCR_RTS;
- if (clear & TIOCM_DTR) {
- info->mcr &= ~UART_MCR_DTR;
- if (info->emu.mdmreg[REG_DTRHUP] & BIT_DTRHUP) {
- isdn_tty_modem_reset_regs(info, 0);
-#ifdef ISDN_DEBUG_MODEM_HUP
- printk(KERN_DEBUG "Mhup in TIOCMSET\n");
-#endif
- if (info->online)
- info->ncarrier = 1;
- isdn_tty_modem_hup(info, 1);
- }
- }
- mutex_unlock(&modem_info_mutex);
- return 0;
-}
-
-static int
-isdn_tty_ioctl(struct tty_struct *tty, uint cmd, ulong arg)
-{
- modem_info *info = (modem_info *) tty->driver_data;
-
- if (isdn_tty_paranoia_check(info, tty->name, "isdn_tty_ioctl"))
- return -ENODEV;
- if (tty_io_error(tty))
- return -EIO;
- switch (cmd) {
- case TIOCSERGETLSR: /* Get line status register */
-#ifdef ISDN_DEBUG_MODEM_IOCTL
- printk(KERN_DEBUG "ttyI%d ioctl TIOCSERGETLSR\n", info->line);
-#endif
- return isdn_tty_get_lsr_info(info, (uint __user *) arg);
- default:
-#ifdef ISDN_DEBUG_MODEM_IOCTL
- printk(KERN_DEBUG "UNKNOWN ioctl 0x%08x on ttyi%d\n", cmd, info->line);
-#endif
- return -ENOIOCTLCMD;
- }
- return 0;
-}
-
-static void
-isdn_tty_set_termios(struct tty_struct *tty, struct ktermios *old_termios)
-{
- modem_info *info = (modem_info *) tty->driver_data;
-
- mutex_lock(&modem_info_mutex);
- if (!old_termios)
- isdn_tty_change_speed(info);
- else {
- if (tty->termios.c_cflag == old_termios->c_cflag &&
- tty->termios.c_ispeed == old_termios->c_ispeed &&
- tty->termios.c_ospeed == old_termios->c_ospeed) {
- mutex_unlock(&modem_info_mutex);
- return;
- }
- isdn_tty_change_speed(info);
- }
- mutex_unlock(&modem_info_mutex);
-}
-
-/*
- * ------------------------------------------------------------
- * isdn_tty_open() and friends
- * ------------------------------------------------------------
- */
-
-static int isdn_tty_install(struct tty_driver *driver, struct tty_struct *tty)
-{
- modem_info *info = &dev->mdm.info[tty->index];
-
- if (isdn_tty_paranoia_check(info, tty->name, __func__))
- return -ENODEV;
-
- tty->driver_data = info;
-
- return tty_port_install(&info->port, driver, tty);
-}
-
-/*
- * This routine is called whenever a serial port is opened. It
- * enables interrupts for the port, linking its async structure into
- * the IRQ chain. It also performs the serial-specific
- * initialization for the tty structure.
- */
-static int
-isdn_tty_open(struct tty_struct *tty, struct file *filp)
-{
- modem_info *info = tty->driver_data;
- struct tty_port *port = &info->port;
- int retval;
-
-#ifdef ISDN_DEBUG_MODEM_OPEN
- printk(KERN_DEBUG "isdn_tty_open %s, count = %d\n", tty->name,
- port->count);
-#endif
- port->count++;
- port->tty = tty;
- /*
- * Start up serial port
- */
- retval = isdn_tty_startup(info);
- if (retval) {
-#ifdef ISDN_DEBUG_MODEM_OPEN
- printk(KERN_DEBUG "isdn_tty_open return after startup\n");
-#endif
- return retval;
- }
- retval = tty_port_block_til_ready(port, tty, filp);
- if (retval) {
-#ifdef ISDN_DEBUG_MODEM_OPEN
- printk(KERN_DEBUG "isdn_tty_open return after isdn_tty_block_til_ready \n");
-#endif
- return retval;
- }
-#ifdef ISDN_DEBUG_MODEM_OPEN
- printk(KERN_DEBUG "isdn_tty_open ttyi%d successful...\n", info->line);
-#endif
- dev->modempoll++;
-#ifdef ISDN_DEBUG_MODEM_OPEN
- printk(KERN_DEBUG "isdn_tty_open normal exit\n");
-#endif
- return 0;
-}
-
-static void
-isdn_tty_close(struct tty_struct *tty, struct file *filp)
-{
- modem_info *info = (modem_info *) tty->driver_data;
- struct tty_port *port = &info->port;
- ulong timeout;
-
- if (!info || isdn_tty_paranoia_check(info, tty->name, "isdn_tty_close"))
- return;
- if (tty_hung_up_p(filp)) {
-#ifdef ISDN_DEBUG_MODEM_OPEN
- printk(KERN_DEBUG "isdn_tty_close return after tty_hung_up_p\n");
-#endif
- return;
- }
- if ((tty->count == 1) && (port->count != 1)) {
- /*
- * Uh, oh. tty->count is 1, which means that the tty
- * structure will be freed. Info->count should always
- * be one in these conditions. If it's greater than
- * one, we've got real problems, since it means the
- * serial port won't be shut down.
- */
- printk(KERN_ERR "isdn_tty_close: bad port count; tty->count is 1, "
- "info->count is %d\n", port->count);
- port->count = 1;
- }
- if (--port->count < 0) {
- printk(KERN_ERR "isdn_tty_close: bad port count for ttyi%d: %d\n",
- info->line, port->count);
- port->count = 0;
- }
- if (port->count) {
-#ifdef ISDN_DEBUG_MODEM_OPEN
- printk(KERN_DEBUG "isdn_tty_close after info->count != 0\n");
-#endif
- return;
- }
- info->closing = 1;
-
- tty->closing = 1;
- /*
- * At this point we stop accepting input. To do this, we
- * disable the receive line status interrupts, and tell the
- * interrupt driver to stop checking the data ready bit in the
- * line status register.
- */
- if (tty_port_initialized(port)) {
- tty_wait_until_sent(tty, 3000); /* 30 seconds timeout */
- /*
- * Before we drop DTR, make sure the UART transmitter
- * has completely drained; this is especially
- * important if there is a transmit FIFO!
- */
- timeout = jiffies + HZ;
- while (!(info->lsr & UART_LSR_TEMT)) {
- schedule_timeout_interruptible(20);
- if (time_after(jiffies, timeout))
- break;
- }
- }
- dev->modempoll--;
- isdn_tty_shutdown(info);
- isdn_tty_flush_buffer(tty);
- tty_ldisc_flush(tty);
- port->tty = NULL;
- info->ncarrier = 0;
-
- tty_port_close_end(port, tty);
- info->closing = 0;
-#ifdef ISDN_DEBUG_MODEM_OPEN
- printk(KERN_DEBUG "isdn_tty_close normal exit\n");
-#endif
-}
-
-/*
- * isdn_tty_hangup() --- called by tty_hangup() when a hangup is signaled.
- */
-static void
-isdn_tty_hangup(struct tty_struct *tty)
-{
- modem_info *info = (modem_info *) tty->driver_data;
- struct tty_port *port = &info->port;
-
- if (isdn_tty_paranoia_check(info, tty->name, "isdn_tty_hangup"))
- return;
- isdn_tty_shutdown(info);
- port->count = 0;
- tty_port_set_active(port, 0);
- port->tty = NULL;
- wake_up_interruptible(&port->open_wait);
-}
-
-/* This routine initializes all emulator-data.
- */
-static void
-isdn_tty_reset_profile(atemu *m)
-{
- m->profile[0] = 0;
- m->profile[1] = 0;
- m->profile[2] = 43;
- m->profile[3] = 13;
- m->profile[4] = 10;
- m->profile[5] = 8;
- m->profile[6] = 3;
- m->profile[7] = 60;
- m->profile[8] = 2;
- m->profile[9] = 6;
- m->profile[10] = 7;
- m->profile[11] = 70;
- m->profile[12] = 0x45;
- m->profile[13] = 4;
- m->profile[14] = ISDN_PROTO_L2_X75I;
- m->profile[15] = ISDN_PROTO_L3_TRANS;
- m->profile[16] = ISDN_SERIAL_XMIT_SIZE / 16;
- m->profile[17] = ISDN_MODEM_WINSIZE;
- m->profile[18] = 4;
- m->profile[19] = 0;
- m->profile[20] = 0;
- m->profile[23] = 0;
- m->pmsn[0] = '\0';
- m->plmsn[0] = '\0';
-}
-
-#ifdef CONFIG_ISDN_AUDIO
-static void
-isdn_tty_modem_reset_vpar(atemu *m)
-{
- m->vpar[0] = 2; /* Voice-device (2 = phone line) */
- m->vpar[1] = 0; /* Silence detection level (0 = none ) */
- m->vpar[2] = 70; /* Silence interval (7 sec. ) */
- m->vpar[3] = 2; /* Compression type (1 = ADPCM-2 ) */
- m->vpar[4] = 0; /* DTMF detection level (0 = softcode ) */
- m->vpar[5] = 8; /* DTMF interval (8 * 5 ms. ) */
-}
-#endif
-
-#ifdef CONFIG_ISDN_TTY_FAX
-static void
-isdn_tty_modem_reset_faxpar(modem_info *info)
-{
- T30_s *f = info->fax;
-
- f->code = 0;
- f->phase = ISDN_FAX_PHASE_IDLE;
- f->direction = 0;
- f->resolution = 1; /* fine */
- f->rate = 5; /* 14400 bit/s */
- f->width = 0;
- f->length = 0;
- f->compression = 0;
- f->ecm = 0;
- f->binary = 0;
- f->scantime = 0;
- memset(&f->id[0], 32, FAXIDLEN - 1);
- f->id[FAXIDLEN - 1] = 0;
- f->badlin = 0;
- f->badmul = 0;
- f->bor = 0;
- f->nbc = 0;
- f->cq = 0;
- f->cr = 0;
- f->ctcrty = 0;
- f->minsp = 0;
- f->phcto = 30;
- f->rel = 0;
- memset(&f->pollid[0], 32, FAXIDLEN - 1);
- f->pollid[FAXIDLEN - 1] = 0;
-}
-#endif
-
-static void
-isdn_tty_modem_reset_regs(modem_info *info, int force)
-{
- atemu *m = &info->emu;
- if ((m->mdmreg[REG_DTRR] & BIT_DTRR) || force) {
- memcpy(m->mdmreg, m->profile, ISDN_MODEM_NUMREG);
- memcpy(m->msn, m->pmsn, ISDN_MSNLEN);
- memcpy(m->lmsn, m->plmsn, ISDN_LMSNLEN);
- info->xmit_size = m->mdmreg[REG_PSIZE] * 16;
- }
-#ifdef CONFIG_ISDN_AUDIO
- isdn_tty_modem_reset_vpar(m);
-#endif
-#ifdef CONFIG_ISDN_TTY_FAX
- isdn_tty_modem_reset_faxpar(info);
-#endif
- m->mdmcmdl = 0;
-}
-
-static void
-modem_write_profile(atemu *m)
-{
- memcpy(m->profile, m->mdmreg, ISDN_MODEM_NUMREG);
- memcpy(m->pmsn, m->msn, ISDN_MSNLEN);
- memcpy(m->plmsn, m->lmsn, ISDN_LMSNLEN);
- if (dev->profd)
- send_sig(SIGIO, dev->profd, 1);
-}
-
-static const struct tty_operations modem_ops = {
- .install = isdn_tty_install,
- .open = isdn_tty_open,
- .close = isdn_tty_close,
- .write = isdn_tty_write,
- .flush_chars = isdn_tty_flush_chars,
- .write_room = isdn_tty_write_room,
- .chars_in_buffer = isdn_tty_chars_in_buffer,
- .flush_buffer = isdn_tty_flush_buffer,
- .ioctl = isdn_tty_ioctl,
- .throttle = isdn_tty_throttle,
- .unthrottle = isdn_tty_unthrottle,
- .set_termios = isdn_tty_set_termios,
- .hangup = isdn_tty_hangup,
- .tiocmget = isdn_tty_tiocmget,
- .tiocmset = isdn_tty_tiocmset,
-};
-
-static int isdn_tty_carrier_raised(struct tty_port *port)
-{
- modem_info *info = container_of(port, modem_info, port);
- return info->msr & UART_MSR_DCD;
-}
-
-static const struct tty_port_operations isdn_tty_port_ops = {
- .carrier_raised = isdn_tty_carrier_raised,
-};
-
-int
-isdn_tty_modem_init(void)
-{
- isdn_modem_t *m;
- int i, retval;
- modem_info *info;
-
- m = &dev->mdm;
- m->tty_modem = alloc_tty_driver(ISDN_MAX_CHANNELS);
- if (!m->tty_modem)
- return -ENOMEM;
- m->tty_modem->name = "ttyI";
- m->tty_modem->major = ISDN_TTY_MAJOR;
- m->tty_modem->minor_start = 0;
- m->tty_modem->type = TTY_DRIVER_TYPE_SERIAL;
- m->tty_modem->subtype = SERIAL_TYPE_NORMAL;
- m->tty_modem->init_termios = tty_std_termios;
- m->tty_modem->init_termios.c_cflag = B9600 | CS8 | CREAD | HUPCL | CLOCAL;
- m->tty_modem->flags = TTY_DRIVER_REAL_RAW;
- m->tty_modem->driver_name = "isdn_tty";
- tty_set_operations(m->tty_modem, &modem_ops);
- retval = tty_register_driver(m->tty_modem);
- if (retval) {
- printk(KERN_WARNING "isdn_tty: Couldn't register modem-device\n");
- goto err;
- }
- for (i = 0; i < ISDN_MAX_CHANNELS; i++) {
- info = &m->info[i];
-#ifdef CONFIG_ISDN_TTY_FAX
- if (!(info->fax = kmalloc(sizeof(T30_s), GFP_KERNEL))) {
- printk(KERN_ERR "Could not allocate fax t30-buffer\n");
- retval = -ENOMEM;
- goto err_unregister;
- }
-#endif
- tty_port_init(&info->port);
- info->port.ops = &isdn_tty_port_ops;
- spin_lock_init(&info->readlock);
- sprintf(info->last_cause, "0000");
- sprintf(info->last_num, "none");
- info->last_dir = 0;
- info->last_lhup = 1;
- info->last_l2 = -1;
- info->last_si = 0;
- isdn_tty_reset_profile(&info->emu);
- isdn_tty_modem_reset_regs(info, 1);
- info->magic = ISDN_ASYNC_MAGIC;
- info->line = i;
- info->x_char = 0;
- info->isdn_driver = -1;
- info->isdn_channel = -1;
- info->drv_index = -1;
- info->xmit_size = ISDN_SERIAL_XMIT_SIZE;
- timer_setup(&info->nc_timer, isdn_tty_modem_do_ncarrier, 0);
- skb_queue_head_init(&info->xmit_queue);
-#ifdef CONFIG_ISDN_AUDIO
- skb_queue_head_init(&info->dtmf_queue);
-#endif
- info->port.xmit_buf = kmalloc(ISDN_SERIAL_XMIT_MAX + 5,
- GFP_KERNEL);
- if (!info->port.xmit_buf) {
- printk(KERN_ERR "Could not allocate modem xmit-buffer\n");
- retval = -ENOMEM;
- goto err_unregister;
- }
- /* Make room for T.70 header */
- info->port.xmit_buf += 4;
- }
- return 0;
-err_unregister:
- for (i--; i >= 0; i--) {
- info = &m->info[i];
-#ifdef CONFIG_ISDN_TTY_FAX
- kfree(info->fax);
-#endif
- kfree(info->port.xmit_buf - 4);
- info->port.xmit_buf = NULL;
- tty_port_destroy(&info->port);
- }
- tty_unregister_driver(m->tty_modem);
-err:
- put_tty_driver(m->tty_modem);
- m->tty_modem = NULL;
- return retval;
-}
-
-void
-isdn_tty_exit(void)
-{
- modem_info *info;
- int i;
-
- for (i = 0; i < ISDN_MAX_CHANNELS; i++) {
- info = &dev->mdm.info[i];
- isdn_tty_cleanup_xmit(info);
-#ifdef CONFIG_ISDN_TTY_FAX
- kfree(info->fax);
-#endif
- kfree(info->port.xmit_buf - 4);
- info->port.xmit_buf = NULL;
- tty_port_destroy(&info->port);
- }
- tty_unregister_driver(dev->mdm.tty_modem);
- put_tty_driver(dev->mdm.tty_modem);
- dev->mdm.tty_modem = NULL;
-}
-
-
-/*
- * isdn_tty_match_icall(char *MSN, atemu *tty_emulator, int dev_idx)
- * match the MSN against the MSNs (glob patterns) defined for tty_emulator,
- * and return 0 for a match, 1 for no match, and 2 if the MSN could match if it were longer.
- */
-
-static int
-isdn_tty_match_icall(char *cid, atemu *emu, int di)
-{
-#ifdef ISDN_DEBUG_MODEM_ICALL
- printk(KERN_DEBUG "m_fi: msn=%s lmsn=%s mmsn=%s mreg[SI1]=%d mreg[SI2]=%d\n",
- emu->msn, emu->lmsn, isdn_map_eaz2msn(emu->msn, di),
- emu->mdmreg[REG_SI1], emu->mdmreg[REG_SI2]);
-#endif
- if (strlen(emu->lmsn)) {
- char *p = emu->lmsn;
- char *q;
- int tmp;
- int ret = 0;
-
- while (1) {
- if ((q = strchr(p, ';')))
- *q = '\0';
- if ((tmp = isdn_msncmp(cid, isdn_map_eaz2msn(p, di))) > ret)
- ret = tmp;
-#ifdef ISDN_DEBUG_MODEM_ICALL
- printk(KERN_DEBUG "m_fi: lmsnX=%s mmsn=%s -> tmp=%d\n",
- p, isdn_map_eaz2msn(emu->msn, di), tmp);
-#endif
- if (q) {
- *q = ';';
- p = q;
- p++;
- }
- if (!tmp)
- return 0;
- if (!q)
- break;
- }
- return ret;
- } else {
- int tmp;
- tmp = isdn_msncmp(cid, isdn_map_eaz2msn(emu->msn, di));
-#ifdef ISDN_DEBUG_MODEM_ICALL
- printk(KERN_DEBUG "m_fi: mmsn=%s -> tmp=%d\n",
- isdn_map_eaz2msn(emu->msn, di), tmp);
-#endif
- return tmp;
- }
-}
-
-/*
- * An incoming call-request has arrived.
- * Search the tty-devices for an appropriate device and bind
- * it to the ISDN-Channel.
- * Return:
- *
- * 0 = No matching device found.
- * 1 = A matching device found.
- * 3 = No match found, but might match if the
- * CID were longer.
- */
-int
-isdn_tty_find_icall(int di, int ch, setup_parm *setup)
-{
- char *eaz;
- int i;
- int wret;
- int idx;
- int si1;
- int si2;
- char *nr;
- ulong flags;
-
- if (!setup->phone[0]) {
- nr = "0";
- printk(KERN_INFO "isdn_tty: Incoming call without OAD, assuming '0'\n");
- } else
- nr = setup->phone;
- si1 = (int) setup->si1;
- si2 = (int) setup->si2;
- if (!setup->eazmsn[0]) {
- printk(KERN_WARNING "isdn_tty: Incoming call without CPN, assuming '0'\n");
- eaz = "0";
- } else
- eaz = setup->eazmsn;
-#ifdef ISDN_DEBUG_MODEM_ICALL
- printk(KERN_DEBUG "m_fi: eaz=%s si1=%d si2=%d\n", eaz, si1, si2);
-#endif
- wret = 0;
- spin_lock_irqsave(&dev->lock, flags);
- for (i = 0; i < ISDN_MAX_CHANNELS; i++) {
- modem_info *info = &dev->mdm.info[i];
-
- if (info->port.count == 0)
- continue;
- if ((info->emu.mdmreg[REG_SI1] & si2bit[si1]) && /* SI1 is matching */
- (info->emu.mdmreg[REG_SI2] == si2)) { /* SI2 is matching */
- idx = isdn_dc2minor(di, ch);
-#ifdef ISDN_DEBUG_MODEM_ICALL
- printk(KERN_DEBUG "m_fi: match1 wret=%d\n", wret);
- printk(KERN_DEBUG "m_fi: idx=%d flags=%08lx drv=%d ch=%d usg=%d\n", idx,
- info->port.flags, info->isdn_driver,
- info->isdn_channel, dev->usage[idx]);
-#endif
- if (
-#ifndef FIX_FILE_TRANSFER
- tty_port_active(&info->port) &&
-#endif
- (info->isdn_driver == -1) &&
- (info->isdn_channel == -1) &&
- (USG_NONE(dev->usage[idx]))) {
- int matchret;
-
- if ((matchret = isdn_tty_match_icall(eaz, &info->emu, di)) > wret)
- wret = matchret;
- if (!matchret) { /* EAZ is matching */
- info->isdn_driver = di;
- info->isdn_channel = ch;
- info->drv_index = idx;
- dev->m_idx[idx] = info->line;
- dev->usage[idx] &= ISDN_USAGE_EXCLUSIVE;
- dev->usage[idx] |= isdn_calc_usage(si1, info->emu.mdmreg[REG_L2PROT]);
- strcpy(dev->num[idx], nr);
- strcpy(info->emu.cpn, eaz);
- info->emu.mdmreg[REG_SI1I] = si2bit[si1];
- info->emu.mdmreg[REG_PLAN] = setup->plan;
- info->emu.mdmreg[REG_SCREEN] = setup->screen;
- isdn_info_update();
- spin_unlock_irqrestore(&dev->lock, flags);
- printk(KERN_INFO "isdn_tty: call from %s, -> RING on ttyI%d\n", nr,
- info->line);
- info->msr |= UART_MSR_RI;
- isdn_tty_modem_result(RESULT_RING, info);
- isdn_timer_ctrl(ISDN_TIMER_MODEMRING, 1);
- return 1;
- }
- }
- }
- }
- spin_unlock_irqrestore(&dev->lock, flags);
- printk(KERN_INFO "isdn_tty: call from %s -> %s %s\n", nr, eaz,
- ((dev->drv[di]->flags & DRV_FLAG_REJBUS) && (wret != 2)) ? "rejected" : "ignored");
- return (wret == 2) ? 3 : 0;
-}
-
-int
-isdn_tty_stat_callback(int i, isdn_ctrl *c)
-{
- int mi;
- modem_info *info;
- char *e;
-
- if (i < 0)
- return 0;
- if ((mi = dev->m_idx[i]) >= 0) {
- info = &dev->mdm.info[mi];
- switch (c->command) {
- case ISDN_STAT_CINF:
- printk(KERN_DEBUG "CHARGEINFO on ttyI%d: %ld %s\n", info->line, c->arg, c->parm.num);
- info->emu.charge = (unsigned) simple_strtoul(c->parm.num, &e, 10);
- if (e == (char *)c->parm.num)
- info->emu.charge = 0;
-
- break;
- case ISDN_STAT_BSENT:
-#ifdef ISDN_TTY_STAT_DEBUG
- printk(KERN_DEBUG "tty_STAT_BSENT ttyI%d\n", info->line);
-#endif
- if ((info->isdn_driver == c->driver) &&
- (info->isdn_channel == c->arg)) {
- info->msr |= UART_MSR_CTS;
- if (info->send_outstanding)
- if (!(--info->send_outstanding))
- info->lsr |= UART_LSR_TEMT;
- isdn_tty_tint(info);
- return 1;
- }
- break;
- case ISDN_STAT_CAUSE:
-#ifdef ISDN_TTY_STAT_DEBUG
- printk(KERN_DEBUG "tty_STAT_CAUSE ttyI%d\n", info->line);
-#endif
- /* Signal cause to tty-device */
- strncpy(info->last_cause, c->parm.num, 5);
- return 1;
- case ISDN_STAT_DISPLAY:
-#ifdef ISDN_TTY_STAT_DEBUG
- printk(KERN_DEBUG "tty_STAT_DISPLAY ttyI%d\n", info->line);
-#endif
- /* Signal display to tty-device */
- if ((info->emu.mdmreg[REG_DISPLAY] & BIT_DISPLAY) &&
- !(info->emu.mdmreg[REG_RESPNUM] & BIT_RESPNUM)) {
- isdn_tty_at_cout("\r\n", info);
- isdn_tty_at_cout("DISPLAY: ", info);
- isdn_tty_at_cout(c->parm.display, info);
- isdn_tty_at_cout("\r\n", info);
- }
- return 1;
- case ISDN_STAT_DCONN:
-#ifdef ISDN_TTY_STAT_DEBUG
- printk(KERN_DEBUG "tty_STAT_DCONN ttyI%d\n", info->line);
-#endif
- if (tty_port_active(&info->port)) {
- if (info->dialing == 1) {
- info->dialing = 2;
- return 1;
- }
- }
- break;
- case ISDN_STAT_DHUP:
-#ifdef ISDN_TTY_STAT_DEBUG
- printk(KERN_DEBUG "tty_STAT_DHUP ttyI%d\n", info->line);
-#endif
- if (tty_port_active(&info->port)) {
- if (info->dialing == 1)
- isdn_tty_modem_result(RESULT_BUSY, info);
- if (info->dialing > 1)
- isdn_tty_modem_result(RESULT_NO_CARRIER, info);
- info->dialing = 0;
-#ifdef ISDN_DEBUG_MODEM_HUP
- printk(KERN_DEBUG "Mhup in ISDN_STAT_DHUP\n");
-#endif
- isdn_tty_modem_hup(info, 0);
- return 1;
- }
- break;
- case ISDN_STAT_BCONN:
-#ifdef ISDN_TTY_STAT_DEBUG
- printk(KERN_DEBUG "tty_STAT_BCONN ttyI%d\n", info->line);
-#endif
- /* Wake up any processes waiting for an
- * incoming call on this device when DCD
- * follows the state of the incoming carrier.
- */
- if (info->port.blocked_open &&
- (info->emu.mdmreg[REG_DCD] & BIT_DCD)) {
- wake_up_interruptible(&info->port.open_wait);
- }
-
- /* Schedule a CONNECT message for any tty
- * waiting for it and set the DCD bit
- * of its modem status.
- */
- if (tty_port_active(&info->port) ||
- (info->port.blocked_open &&
- (info->emu.mdmreg[REG_DCD] & BIT_DCD))) {
- info->msr |= UART_MSR_DCD;
- info->emu.charge = 0;
- if (info->dialing & 0xf)
- info->last_dir = 1;
- else
- info->last_dir = 0;
- info->dialing = 0;
- info->rcvsched = 1;
- if (USG_MODEM(dev->usage[i])) {
- if (info->emu.mdmreg[REG_L2PROT] == ISDN_PROTO_L2_MODEM) {
- strcpy(info->emu.connmsg, c->parm.num);
- isdn_tty_modem_result(RESULT_CONNECT, info);
- } else
- isdn_tty_modem_result(RESULT_CONNECT64000, info);
- }
- if (USG_VOICE(dev->usage[i]))
- isdn_tty_modem_result(RESULT_VCON, info);
- return 1;
- }
- break;
- case ISDN_STAT_BHUP:
-#ifdef ISDN_TTY_STAT_DEBUG
- printk(KERN_DEBUG "tty_STAT_BHUP ttyI%d\n", info->line);
-#endif
- if (tty_port_active(&info->port)) {
-#ifdef ISDN_DEBUG_MODEM_HUP
- printk(KERN_DEBUG "Mhup in ISDN_STAT_BHUP\n");
-#endif
- isdn_tty_modem_hup(info, 0);
- return 1;
- }
- break;
- case ISDN_STAT_NODCH:
-#ifdef ISDN_TTY_STAT_DEBUG
- printk(KERN_DEBUG "tty_STAT_NODCH ttyI%d\n", info->line);
-#endif
- if (tty_port_active(&info->port)) {
- if (info->dialing) {
- info->dialing = 0;
- info->last_l2 = -1;
- info->last_si = 0;
- sprintf(info->last_cause, "0000");
- isdn_tty_modem_result(RESULT_NO_DIALTONE, info);
- }
- isdn_tty_modem_hup(info, 0);
- return 1;
- }
- break;
- case ISDN_STAT_UNLOAD:
-#ifdef ISDN_TTY_STAT_DEBUG
- printk(KERN_DEBUG "tty_STAT_UNLOAD ttyI%d\n", info->line);
-#endif
- for (i = 0; i < ISDN_MAX_CHANNELS; i++) {
- info = &dev->mdm.info[i];
- if (info->isdn_driver == c->driver) {
- if (info->online)
- isdn_tty_modem_hup(info, 1);
- }
- }
- return 1;
-#ifdef CONFIG_ISDN_TTY_FAX
- case ISDN_STAT_FAXIND:
- if (tty_port_active(&info->port)) {
- isdn_tty_fax_command(info, c);
- }
- break;
-#endif
-#ifdef CONFIG_ISDN_AUDIO
- case ISDN_STAT_AUDIO:
- if (tty_port_active(&info->port)) {
- switch (c->parm.num[0]) {
- case ISDN_AUDIO_DTMF:
- if (info->vonline) {
- isdn_audio_put_dle_code(info,
- c->parm.num[1]);
- }
- break;
- }
- }
- break;
-#endif
- }
- }
- return 0;
-}
-
-/*********************************************************************
- Modem-Emulator-Routines
-*********************************************************************/
-
-#define cmdchar(c) ((c >= ' ') && (c <= 0x7f))
-
-/*
- * Put a message from the AT-emulator into the tty's receive buffer,
- * converting CR, LF and BS to the values in modem registers 3, 4 and 5.
- */
-void
-isdn_tty_at_cout(char *msg, modem_info *info)
-{
- struct tty_port *port = &info->port;
- atemu *m = &info->emu;
- char *p;
- char c;
- u_long flags;
- struct sk_buff *skb = NULL;
- char *sp = NULL;
- int l;
-
- if (!msg) {
- printk(KERN_WARNING "isdn_tty: Null-Message in isdn_tty_at_cout\n");
- return;
- }
-
- l = strlen(msg);
-
- spin_lock_irqsave(&info->readlock, flags);
- if (info->closing) {
- spin_unlock_irqrestore(&info->readlock, flags);
- return;
- }
-
- /* use queue instead of direct, if online and */
- /* data is in queue or buffer is full */
- if (info->online && ((tty_buffer_request_room(port, l) < l) ||
- !skb_queue_empty(&dev->drv[info->isdn_driver]->rpqueue[info->isdn_channel]))) {
- skb = alloc_skb(l, GFP_ATOMIC);
- if (!skb) {
- spin_unlock_irqrestore(&info->readlock, flags);
- return;
- }
- sp = skb_put(skb, l);
-#ifdef CONFIG_ISDN_AUDIO
- ISDN_AUDIO_SKB_DLECOUNT(skb) = 0;
- ISDN_AUDIO_SKB_LOCK(skb) = 0;
-#endif
- }
-
- for (p = msg; *p; p++) {
- switch (*p) {
- case '\r':
- c = m->mdmreg[REG_CR];
- break;
- case '\n':
- c = m->mdmreg[REG_LF];
- break;
- case '\b':
- c = m->mdmreg[REG_BS];
- break;
- default:
- c = *p;
- }
- if (skb) {
- *sp++ = c;
- } else {
- if (tty_insert_flip_char(port, c, TTY_NORMAL) == 0)
- break;
- }
- }
- if (skb) {
- __skb_queue_tail(&dev->drv[info->isdn_driver]->rpqueue[info->isdn_channel], skb);
- dev->drv[info->isdn_driver]->rcvcount[info->isdn_channel] += skb->len;
- spin_unlock_irqrestore(&info->readlock, flags);
- /* Schedule dequeuing */
- if (dev->modempoll && info->rcvsched)
- isdn_timer_ctrl(ISDN_TIMER_MODEMREAD, 1);
-
- } else {
- spin_unlock_irqrestore(&info->readlock, flags);
- tty_flip_buffer_push(port);
- }
-}
-
-/*
- * Perform ATH Hangup
- */
-static void
-isdn_tty_on_hook(modem_info *info)
-{
- if (info->isdn_channel >= 0) {
-#ifdef ISDN_DEBUG_MODEM_HUP
- printk(KERN_DEBUG "Mhup in isdn_tty_on_hook\n");
-#endif
- isdn_tty_modem_hup(info, 1);
- }
-}
-
-static void
-isdn_tty_off_hook(void)
-{
- printk(KERN_DEBUG "isdn_tty_off_hook\n");
-}
-
-#define PLUSWAIT1 (HZ / 2) /* 0.5 sec. */
-#define PLUSWAIT2 (HZ * 3 / 2) /* 1.5 sec */
-
-/*
- * Check the buffer for the modem escape sequence and activate the
- * timer callback to isdn_tty_modem_escape() if the sequence is found.
- *
- * Parameters:
- * p pointer to databuffer
- * plus escape-character
- * count length of buffer
- * pluscount count of valid escape-characters so far
- * lastplus timestamp of last character
- */
-static void
-isdn_tty_check_esc(const u_char *p, u_char plus, int count, int *pluscount,
- u_long *lastplus)
-{
- if (plus > 127)
- return;
- if (count > 3) {
- p += count - 3;
- count = 3;
- *pluscount = 0;
- }
- while (count > 0) {
- if (*(p++) == plus) {
- if ((*pluscount)++) {
- /* Time since last '+' > 0.5 sec. ? */
- if (time_after(jiffies, *lastplus + PLUSWAIT1))
- *pluscount = 1;
- } else {
- /* Time since last non-'+' < 1.5 sec. ? */
- if (time_before(jiffies, *lastplus + PLUSWAIT2))
- *pluscount = 0;
- }
- if ((*pluscount == 3) && (count == 1))
- isdn_timer_ctrl(ISDN_TIMER_MODEMPLUS, 1);
- if (*pluscount > 3)
- *pluscount = 1;
- } else
- *pluscount = 0;
- *lastplus = jiffies;
- count--;
- }
-}
-
-/*
- * Return the result of the AT-emulator to the tty receive buffer,
- * depending on modem register 12, bits 0 and 1.
- * For CONNECT messages also switch to online mode.
- * For RING messages handle auto-ATA if register 0 != 0.
- */
-
-static void
-isdn_tty_modem_result(int code, modem_info *info)
-{
- atemu *m = &info->emu;
- static char *msg[] =
- {"OK", "CONNECT", "RING", "NO CARRIER", "ERROR",
- "CONNECT 64000", "NO DIALTONE", "BUSY", "NO ANSWER",
- "RINGING", "NO MSN/EAZ", "VCON", "RUNG"};
- char s[ISDN_MSNLEN + 10];
-
- switch (code) {
- case RESULT_RING:
- m->mdmreg[REG_RINGCNT]++;
- if (m->mdmreg[REG_RINGCNT] == m->mdmreg[REG_RINGATA])
- /* Automatically accept incoming call */
- isdn_tty_cmd_ATA(info);
- break;
- case RESULT_NO_CARRIER:
-#ifdef ISDN_DEBUG_MODEM_HUP
- printk(KERN_DEBUG "modem_result: NO CARRIER %d %d\n",
- info->closing, !info->port.tty);
-#endif
- m->mdmreg[REG_RINGCNT] = 0;
- del_timer(&info->nc_timer);
- info->ncarrier = 0;
- if (info->closing || !info->port.tty)
- return;
-
-#ifdef CONFIG_ISDN_AUDIO
- if (info->vonline & 1) {
-#ifdef ISDN_DEBUG_MODEM_VOICE
- printk(KERN_DEBUG "res3: send DLE-ETX on ttyI%d\n",
- info->line);
-#endif
- /* voice-recording, add DLE-ETX */
- isdn_tty_at_cout("\020\003", info);
- }
- if (info->vonline & 2) {
-#ifdef ISDN_DEBUG_MODEM_VOICE
- printk(KERN_DEBUG "res3: send DLE-DC4 on ttyI%d\n",
- info->line);
-#endif
- /* voice-playing, add DLE-DC4 */
- isdn_tty_at_cout("\020\024", info);
- }
-#endif
- break;
- case RESULT_CONNECT:
- case RESULT_CONNECT64000:
- sprintf(info->last_cause, "0000");
- if (!info->online)
- info->online = 2;
- break;
- case RESULT_VCON:
-#ifdef ISDN_DEBUG_MODEM_VOICE
- printk(KERN_DEBUG "res3: send VCON on ttyI%d\n",
- info->line);
-#endif
- sprintf(info->last_cause, "0000");
- if (!info->online)
- info->online = 1;
- break;
- } /* switch (code) */
-
- if (m->mdmreg[REG_RESP] & BIT_RESP) {
- /* Show results */
- if (m->mdmreg[REG_RESPNUM] & BIT_RESPNUM) {
- /* Show numeric results only */
- sprintf(s, "\r\n%d\r\n", code);
- isdn_tty_at_cout(s, info);
- } else {
- if (code == RESULT_RING) {
- /* return if "show RUNG" and ringcounter>1 */
- if ((m->mdmreg[REG_RUNG] & BIT_RUNG) &&
- (m->mdmreg[REG_RINGCNT] > 1))
- return;
- /* print CID, _before_ _every_ ring */
- if (!(m->mdmreg[REG_CIDONCE] & BIT_CIDONCE)) {
- isdn_tty_at_cout("\r\nCALLER NUMBER: ", info);
- isdn_tty_at_cout(dev->num[info->drv_index], info);
- if (m->mdmreg[REG_CDN] & BIT_CDN) {
- isdn_tty_at_cout("\r\nCALLED NUMBER: ", info);
- isdn_tty_at_cout(info->emu.cpn, info);
- }
- }
- }
- isdn_tty_at_cout("\r\n", info);
- isdn_tty_at_cout(msg[code], info);
- switch (code) {
- case RESULT_CONNECT:
- switch (m->mdmreg[REG_L2PROT]) {
- case ISDN_PROTO_L2_MODEM:
- isdn_tty_at_cout(" ", info);
- isdn_tty_at_cout(m->connmsg, info);
- break;
- }
- break;
- case RESULT_RING:
- /* Append CPN, if enabled */
- if ((m->mdmreg[REG_CPN] & BIT_CPN)) {
- sprintf(s, "/%s", m->cpn);
- isdn_tty_at_cout(s, info);
- }
- /* Print CID only once, _after_ 1st RING */
- if ((m->mdmreg[REG_CIDONCE] & BIT_CIDONCE) &&
- (m->mdmreg[REG_RINGCNT] == 1)) {
- isdn_tty_at_cout("\r\n", info);
- isdn_tty_at_cout("CALLER NUMBER: ", info);
- isdn_tty_at_cout(dev->num[info->drv_index], info);
- if (m->mdmreg[REG_CDN] & BIT_CDN) {
- isdn_tty_at_cout("\r\nCALLED NUMBER: ", info);
- isdn_tty_at_cout(info->emu.cpn, info);
- }
- }
- break;
- case RESULT_NO_CARRIER:
- case RESULT_NO_DIALTONE:
- case RESULT_BUSY:
- case RESULT_NO_ANSWER:
- m->mdmreg[REG_RINGCNT] = 0;
- /* Append Cause-Message if enabled */
- if (m->mdmreg[REG_RESPXT] & BIT_RESPXT) {
- sprintf(s, "/%s", info->last_cause);
- isdn_tty_at_cout(s, info);
- }
- break;
- case RESULT_CONNECT64000:
- /* Append Protocol to CONNECT message */
- switch (m->mdmreg[REG_L2PROT]) {
- case ISDN_PROTO_L2_X75I:
- case ISDN_PROTO_L2_X75UI:
- case ISDN_PROTO_L2_X75BUI:
- isdn_tty_at_cout("/X.75", info);
- break;
- case ISDN_PROTO_L2_HDLC:
- isdn_tty_at_cout("/HDLC", info);
- break;
- case ISDN_PROTO_L2_V11096:
- isdn_tty_at_cout("/V110/9600", info);
- break;
- case ISDN_PROTO_L2_V11019:
- isdn_tty_at_cout("/V110/19200", info);
- break;
- case ISDN_PROTO_L2_V11038:
- isdn_tty_at_cout("/V110/38400", info);
- break;
- }
- if (m->mdmreg[REG_T70] & BIT_T70) {
- isdn_tty_at_cout("/T.70", info);
- if (m->mdmreg[REG_T70] & BIT_T70_EXT)
- isdn_tty_at_cout("+", info);
- }
- break;
- }
- isdn_tty_at_cout("\r\n", info);
- }
- }
- if (code == RESULT_NO_CARRIER) {
- if (info->closing || (!info->port.tty))
- return;
-
- if (tty_port_check_carrier(&info->port))
- tty_hangup(info->port.tty);
- }
-}
-
-
-/*
- * Display a modem-register-value.
- */
-static void
-isdn_tty_show_profile(int ridx, modem_info *info)
-{
- char v[6];
-
- sprintf(v, "\r\n%d", info->emu.mdmreg[ridx]);
- isdn_tty_at_cout(v, info);
-}
-
-/*
- * Get MSN-string from char-pointer, set pointer to end of number
- */
-static void
-isdn_tty_get_msnstr(char *n, char **p)
-{
- int limit = ISDN_MSNLEN - 1;
-
- while (((*p[0] >= '0' && *p[0] <= '9') ||
- /* Why a comma ??? */
- (*p[0] == ',') || (*p[0] == ':')) &&
- (limit--))
- *n++ = *p[0]++;
- *n = '\0';
-}
-
-/*
- * Get phone-number from modem-commandbuffer
- */
-static void
-isdn_tty_getdial(char *p, char *q, int cnt)
-{
- int first = 1;
- int limit = ISDN_MSNLEN - 1; /* MUST match the size of interface var to avoid
- buffer overflow */
-
- while (strchr(" 0123456789,#.*WPTSR-", *p) && *p && --cnt > 0) {
- if ((*p >= '0' && *p <= '9') || ((*p == 'S') && first) ||
- ((*p == 'R') && first) ||
- (*p == '*') || (*p == '#')) {
- *q++ = *p;
- limit--;
- }
- if (!limit)
- break;
- p++;
- first = 0;
- }
- *q = 0;
-}
-
-#define PARSE_ERROR { isdn_tty_modem_result(RESULT_ERROR, info); return; }
-#define PARSE_ERROR1 { isdn_tty_modem_result(RESULT_ERROR, info); return 1; }
-
-static void
-isdn_tty_report(modem_info *info)
-{
- atemu *m = &info->emu;
- char s[80];
-
- isdn_tty_at_cout("\r\nStatistics of last connection:\r\n\r\n", info);
- sprintf(s, " Remote Number: %s\r\n", info->last_num);
- isdn_tty_at_cout(s, info);
- sprintf(s, " Direction: %s\r\n", info->last_dir ? "outgoing" : "incoming");
- isdn_tty_at_cout(s, info);
- isdn_tty_at_cout(" Layer-2 Protocol: ", info);
- switch (info->last_l2) {
- case ISDN_PROTO_L2_X75I:
- isdn_tty_at_cout("X.75i", info);
- break;
- case ISDN_PROTO_L2_X75UI:
- isdn_tty_at_cout("X.75ui", info);
- break;
- case ISDN_PROTO_L2_X75BUI:
- isdn_tty_at_cout("X.75bui", info);
- break;
- case ISDN_PROTO_L2_HDLC:
- isdn_tty_at_cout("HDLC", info);
- break;
- case ISDN_PROTO_L2_V11096:
- isdn_tty_at_cout("V.110 9600 Baud", info);
- break;
- case ISDN_PROTO_L2_V11019:
- isdn_tty_at_cout("V.110 19200 Baud", info);
- break;
- case ISDN_PROTO_L2_V11038:
- isdn_tty_at_cout("V.110 38400 Baud", info);
- break;
- case ISDN_PROTO_L2_TRANS:
- isdn_tty_at_cout("transparent", info);
- break;
- case ISDN_PROTO_L2_MODEM:
- isdn_tty_at_cout("modem", info);
- break;
- case ISDN_PROTO_L2_FAX:
- isdn_tty_at_cout("fax", info);
- break;
- default:
- isdn_tty_at_cout("unknown", info);
- break;
- }
- if (m->mdmreg[REG_T70] & BIT_T70) {
- isdn_tty_at_cout("/T.70", info);
- if (m->mdmreg[REG_T70] & BIT_T70_EXT)
- isdn_tty_at_cout("+", info);
- }
- isdn_tty_at_cout("\r\n", info);
- isdn_tty_at_cout(" Service: ", info);
- switch (info->last_si) {
- case 1:
- isdn_tty_at_cout("audio\r\n", info);
- break;
- case 5:
- isdn_tty_at_cout("btx\r\n", info);
- break;
- case 7:
- isdn_tty_at_cout("data\r\n", info);
- break;
- default:
- sprintf(s, "%d\r\n", info->last_si);
- isdn_tty_at_cout(s, info);
- break;
- }
- sprintf(s, " Hangup location: %s\r\n", info->last_lhup ? "local" : "remote");
- isdn_tty_at_cout(s, info);
- sprintf(s, " Last cause: %s\r\n", info->last_cause);
- isdn_tty_at_cout(s, info);
-}
-
-/*
- * Parse AT&.. commands.
- */
-static int
-isdn_tty_cmd_ATand(char **p, modem_info *info)
-{
- atemu *m = &info->emu;
- int i;
- char rb[100];
-
-#define MAXRB (sizeof(rb) - 1)
-
- switch (*p[0]) {
- case 'B':
- /* &B - Set Buffersize */
- p[0]++;
- i = isdn_getnum(p);
- if ((i < 0) || (i > ISDN_SERIAL_XMIT_MAX))
- PARSE_ERROR1;
-#ifdef CONFIG_ISDN_AUDIO
- if ((m->mdmreg[REG_SI1] & 1) && (i > VBUF))
- PARSE_ERROR1;
-#endif
- m->mdmreg[REG_PSIZE] = i / 16;
- info->xmit_size = m->mdmreg[REG_PSIZE] * 16;
- switch (m->mdmreg[REG_L2PROT]) {
- case ISDN_PROTO_L2_V11096:
- case ISDN_PROTO_L2_V11019:
- case ISDN_PROTO_L2_V11038:
- info->xmit_size /= 10;
- }
- break;
- case 'C':
- /* &C - DCD Status */
- p[0]++;
- switch (isdn_getnum(p)) {
- case 0:
- m->mdmreg[REG_DCD] &= ~BIT_DCD;
- break;
- case 1:
- m->mdmreg[REG_DCD] |= BIT_DCD;
- break;
- default:
- PARSE_ERROR1
- }
- break;
- case 'D':
- /* &D - Set DTR-Low-behavior */
- p[0]++;
- switch (isdn_getnum(p)) {
- case 0:
- m->mdmreg[REG_DTRHUP] &= ~BIT_DTRHUP;
- m->mdmreg[REG_DTRR] &= ~BIT_DTRR;
- break;
- case 2:
- m->mdmreg[REG_DTRHUP] |= BIT_DTRHUP;
- m->mdmreg[REG_DTRR] &= ~BIT_DTRR;
- break;
- case 3:
- m->mdmreg[REG_DTRHUP] |= BIT_DTRHUP;
- m->mdmreg[REG_DTRR] |= BIT_DTRR;
- break;
- default:
- PARSE_ERROR1
- }
- break;
- case 'E':
- /* &E - Set EAZ/MSN */
- p[0]++;
- isdn_tty_get_msnstr(m->msn, p);
- break;
- case 'F':
- /* &F - Set Factory-Defaults */
- p[0]++;
- if (info->msr & UART_MSR_DCD)
- PARSE_ERROR1;
- isdn_tty_reset_profile(m);
- isdn_tty_modem_reset_regs(info, 1);
- break;
-#ifdef DUMMY_HAYES_AT
- case 'K':
- /* only to be compliant with common scripts */
- /* &K Flowcontrol - no function */
- p[0]++;
- isdn_getnum(p);
- break;
-#endif
- case 'L':
- /* &L - Set Numbers to listen on */
- p[0]++;
- i = 0;
- while (*p[0] && (strchr("0123456789,-*[]?;", *p[0])) &&
- (i < ISDN_LMSNLEN - 1))
- m->lmsn[i++] = *p[0]++;
- m->lmsn[i] = '\0';
- break;
- case 'R':
- /* &R - Set V.110 bitrate adaptation */
- p[0]++;
- i = isdn_getnum(p);
- switch (i) {
- case 0:
- /* Switch off V.110, back to X.75 */
- m->mdmreg[REG_L2PROT] = ISDN_PROTO_L2_X75I;
- m->mdmreg[REG_SI2] = 0;
- info->xmit_size = m->mdmreg[REG_PSIZE] * 16;
- break;
- case 9600:
- m->mdmreg[REG_L2PROT] = ISDN_PROTO_L2_V11096;
- m->mdmreg[REG_SI2] = 197;
- info->xmit_size = m->mdmreg[REG_PSIZE] * 16 / 10;
- break;
- case 19200:
- m->mdmreg[REG_L2PROT] = ISDN_PROTO_L2_V11019;
- m->mdmreg[REG_SI2] = 199;
- info->xmit_size = m->mdmreg[REG_PSIZE] * 16 / 10;
- break;
- case 38400:
- m->mdmreg[REG_L2PROT] = ISDN_PROTO_L2_V11038;
- m->mdmreg[REG_SI2] = 198; /* no existing standard for this */
- info->xmit_size = m->mdmreg[REG_PSIZE] * 16 / 10;
- break;
- default:
- PARSE_ERROR1;
- }
- /* Switch off T.70 */
- m->mdmreg[REG_T70] &= ~(BIT_T70 | BIT_T70_EXT);
- /* Set Service 7 */
- m->mdmreg[REG_SI1] |= 4;
- break;
- case 'S':
- /* &S - Set Windowsize */
- p[0]++;
- i = isdn_getnum(p);
- if ((i > 0) && (i < 9))
- m->mdmreg[REG_WSIZE] = i;
- else
- PARSE_ERROR1;
- break;
- case 'V':
- /* &V - Show registers */
- p[0]++;
- isdn_tty_at_cout("\r\n", info);
- for (i = 0; i < ISDN_MODEM_NUMREG; i++) {
- sprintf(rb, "S%02d=%03d%s", i,
- m->mdmreg[i], ((i + 1) % 10) ? " " : "\r\n");
- isdn_tty_at_cout(rb, info);
- }
- sprintf(rb, "\r\nEAZ/MSN: %.50s\r\n",
- strlen(m->msn) ? m->msn : "None");
- isdn_tty_at_cout(rb, info);
- if (strlen(m->lmsn)) {
- isdn_tty_at_cout("\r\nListen: ", info);
- isdn_tty_at_cout(m->lmsn, info);
- isdn_tty_at_cout("\r\n", info);
- }
- break;
- case 'W':
- /* &W - Write Profile */
- p[0]++;
- switch (*p[0]) {
- case '0':
- p[0]++;
- modem_write_profile(m);
- break;
- default:
- PARSE_ERROR1;
- }
- break;
- case 'X':
- /* &X - Switch to BTX-Mode and T.70 */
- p[0]++;
- switch (isdn_getnum(p)) {
- case 0:
- m->mdmreg[REG_T70] &= ~(BIT_T70 | BIT_T70_EXT);
- info->xmit_size = m->mdmreg[REG_PSIZE] * 16;
- break;
- case 1:
- m->mdmreg[REG_T70] |= BIT_T70;
- m->mdmreg[REG_T70] &= ~BIT_T70_EXT;
- m->mdmreg[REG_L2PROT] = ISDN_PROTO_L2_X75I;
- info->xmit_size = 112;
- m->mdmreg[REG_SI1] = 4;
- m->mdmreg[REG_SI2] = 0;
- break;
- case 2:
- m->mdmreg[REG_T70] |= (BIT_T70 | BIT_T70_EXT);
- m->mdmreg[REG_L2PROT] = ISDN_PROTO_L2_X75I;
- info->xmit_size = 112;
- m->mdmreg[REG_SI1] = 4;
- m->mdmreg[REG_SI2] = 0;
- break;
- default:
- PARSE_ERROR1;
- }
- break;
- default:
- PARSE_ERROR1;
- }
- return 0;
-}
-
-static int
-isdn_tty_check_ats(int mreg, int mval, modem_info *info, atemu *m)
-{
- /* Some plausibility checks */
- switch (mreg) {
- case REG_L2PROT:
- if (mval > ISDN_PROTO_L2_MAX)
- return 1;
- break;
- case REG_PSIZE:
- if ((mval * 16) > ISDN_SERIAL_XMIT_MAX)
- return 1;
-#ifdef CONFIG_ISDN_AUDIO
- if ((m->mdmreg[REG_SI1] & 1) && (mval > VBUFX))
- return 1;
-#endif
- info->xmit_size = mval * 16;
- switch (m->mdmreg[REG_L2PROT]) {
- case ISDN_PROTO_L2_V11096:
- case ISDN_PROTO_L2_V11019:
- case ISDN_PROTO_L2_V11038:
- info->xmit_size /= 10;
- }
- break;
- case REG_SI1I:
- case REG_PLAN:
- case REG_SCREEN:
- /* readonly registers */
- return 1;
- }
- return 0;
-}
-
-/*
- * Perform ATS command
- */
-static int
-isdn_tty_cmd_ATS(char **p, modem_info *info)
-{
- atemu *m = &info->emu;
- int bitpos;
- int mreg;
- int mval;
- int bval;
-
- mreg = isdn_getnum(p);
- if (mreg < 0 || mreg >= ISDN_MODEM_NUMREG)
- PARSE_ERROR1;
- switch (*p[0]) {
- case '=':
- p[0]++;
- mval = isdn_getnum(p);
- if (mval < 0 || mval > 255)
- PARSE_ERROR1;
- if (isdn_tty_check_ats(mreg, mval, info, m))
- PARSE_ERROR1;
- m->mdmreg[mreg] = mval;
- break;
- case '.':
- /* Set/Clear a single bit */
- p[0]++;
- bitpos = isdn_getnum(p);
- if ((bitpos < 0) || (bitpos > 7))
- PARSE_ERROR1;
- switch (*p[0]) {
- case '=':
- p[0]++;
- bval = isdn_getnum(p);
- if (bval < 0 || bval > 1)
- PARSE_ERROR1;
- if (bval)
- mval = m->mdmreg[mreg] | (1 << bitpos);
- else
- mval = m->mdmreg[mreg] & ~(1 << bitpos);
- if (isdn_tty_check_ats(mreg, mval, info, m))
- PARSE_ERROR1;
- m->mdmreg[mreg] = mval;
- break;
- case '?':
- p[0]++;
- isdn_tty_at_cout("\r\n", info);
- isdn_tty_at_cout((m->mdmreg[mreg] & (1 << bitpos)) ? "1" : "0",
- info);
- break;
- default:
- PARSE_ERROR1;
- }
- break;
- case '?':
- p[0]++;
- isdn_tty_show_profile(mreg, info);
- break;
- default:
- PARSE_ERROR1;
- break;
- }
- return 0;
-}
-
-/*
- * Perform ATA command
- */
-static void
-isdn_tty_cmd_ATA(modem_info *info)
-{
- atemu *m = &info->emu;
- isdn_ctrl cmd;
- int l2;
-
- if (info->msr & UART_MSR_RI) {
- /* Accept incoming call */
- info->last_dir = 0;
- strcpy(info->last_num, dev->num[info->drv_index]);
- m->mdmreg[REG_RINGCNT] = 0;
- info->msr &= ~UART_MSR_RI;
- l2 = m->mdmreg[REG_L2PROT];
-#ifdef CONFIG_ISDN_AUDIO
- /* If more than one bit set in reg18, autoselect Layer2 */
- if ((m->mdmreg[REG_SI1] & m->mdmreg[REG_SI1I]) != m->mdmreg[REG_SI1]) {
- if (m->mdmreg[REG_SI1I] == 1) {
- if ((l2 != ISDN_PROTO_L2_MODEM) && (l2 != ISDN_PROTO_L2_FAX))
- l2 = ISDN_PROTO_L2_TRANS;
- } else
- l2 = ISDN_PROTO_L2_X75I;
- }
-#endif
- cmd.driver = info->isdn_driver;
- cmd.command = ISDN_CMD_SETL2;
- cmd.arg = info->isdn_channel + (l2 << 8);
- info->last_l2 = l2;
- isdn_command(&cmd);
- cmd.driver = info->isdn_driver;
- cmd.command = ISDN_CMD_SETL3;
- cmd.arg = info->isdn_channel + (m->mdmreg[REG_L3PROT] << 8);
-#ifdef CONFIG_ISDN_TTY_FAX
- if (l2 == ISDN_PROTO_L2_FAX) {
- cmd.parm.fax = info->fax;
- info->fax->direction = ISDN_TTY_FAX_CONN_IN;
- }
-#endif
- isdn_command(&cmd);
- cmd.driver = info->isdn_driver;
- cmd.arg = info->isdn_channel;
- cmd.command = ISDN_CMD_ACCEPTD;
- info->dialing = 16;
- info->emu.carrierwait = 0;
- isdn_command(&cmd);
- isdn_timer_ctrl(ISDN_TIMER_CARRIER, 1);
- } else
- isdn_tty_modem_result(RESULT_NO_ANSWER, info);
-}
-
-#ifdef CONFIG_ISDN_AUDIO
-/*
- * Parse AT+F.. commands
- */
-static int
-isdn_tty_cmd_PLUSF(char **p, modem_info *info)
-{
- atemu *m = &info->emu;
- char rs[20];
-
- if (!strncmp(p[0], "CLASS", 5)) {
- p[0] += 5;
- switch (*p[0]) {
- case '?':
- p[0]++;
- sprintf(rs, "\r\n%d",
- (m->mdmreg[REG_SI1] & 1) ? 8 : 0);
-#ifdef CONFIG_ISDN_TTY_FAX
- if (TTY_IS_FCLASS2(info))
- sprintf(rs, "\r\n2");
- else if (TTY_IS_FCLASS1(info))
- sprintf(rs, "\r\n1");
-#endif
- isdn_tty_at_cout(rs, info);
- break;
- case '=':
- p[0]++;
- switch (*p[0]) {
- case '0':
- p[0]++;
- m->mdmreg[REG_L2PROT] = ISDN_PROTO_L2_X75I;
- m->mdmreg[REG_L3PROT] = ISDN_PROTO_L3_TRANS;
- m->mdmreg[REG_SI1] = 4;
- info->xmit_size =
- m->mdmreg[REG_PSIZE] * 16;
- break;
-#ifdef CONFIG_ISDN_TTY_FAX
- case '1':
- p[0]++;
- if (!(dev->global_features &
- ISDN_FEATURE_L3_FCLASS1))
- PARSE_ERROR1;
- m->mdmreg[REG_SI1] = 1;
- m->mdmreg[REG_L2PROT] = ISDN_PROTO_L2_FAX;
- m->mdmreg[REG_L3PROT] = ISDN_PROTO_L3_FCLASS1;
- info->xmit_size =
- m->mdmreg[REG_PSIZE] * 16;
- break;
- case '2':
- p[0]++;
- if (!(dev->global_features &
- ISDN_FEATURE_L3_FCLASS2))
- PARSE_ERROR1;
- m->mdmreg[REG_SI1] = 1;
- m->mdmreg[REG_L2PROT] = ISDN_PROTO_L2_FAX;
- m->mdmreg[REG_L3PROT] = ISDN_PROTO_L3_FCLASS2;
- info->xmit_size =
- m->mdmreg[REG_PSIZE] * 16;
- break;
-#endif
- case '8':
- p[0]++;
- /* L2 will change on dialout with si=1 */
- m->mdmreg[REG_L2PROT] = ISDN_PROTO_L2_X75I;
- m->mdmreg[REG_L3PROT] = ISDN_PROTO_L3_TRANS;
- m->mdmreg[REG_SI1] = 5;
- info->xmit_size = VBUF;
- break;
- case '?':
- p[0]++;
- strcpy(rs, "\r\n0,");
-#ifdef CONFIG_ISDN_TTY_FAX
- if (dev->global_features &
- ISDN_FEATURE_L3_FCLASS1)
- strcat(rs, "1,");
- if (dev->global_features &
- ISDN_FEATURE_L3_FCLASS2)
- strcat(rs, "2,");
-#endif
- strcat(rs, "8");
- isdn_tty_at_cout(rs, info);
- break;
- default:
- PARSE_ERROR1;
- }
- break;
- default:
- PARSE_ERROR1;
- }
- return 0;
- }
-#ifdef CONFIG_ISDN_TTY_FAX
- return (isdn_tty_cmd_PLUSF_FAX(p, info));
-#else
- PARSE_ERROR1;
-#endif
-}
-
-/*
- * Parse AT+V.. commands
- */
-static int
-isdn_tty_cmd_PLUSV(char **p, modem_info *info)
-{
- atemu *m = &info->emu;
- isdn_ctrl cmd;
- static char *vcmd[] =
- {"NH", "IP", "LS", "RX", "SD", "SM", "TX", "DD", NULL};
- int i;
- int par1;
- int par2;
- char rs[20];
-
- i = 0;
- while (vcmd[i]) {
- if (!strncmp(vcmd[i], p[0], 2)) {
- p[0] += 2;
- break;
- }
- i++;
- }
- switch (i) {
- case 0:
- /* AT+VNH - Auto hangup feature */
- switch (*p[0]) {
- case '?':
- p[0]++;
- isdn_tty_at_cout("\r\n1", info);
- break;
- case '=':
- p[0]++;
- switch (*p[0]) {
- case '1':
- p[0]++;
- break;
- case '?':
- p[0]++;
- isdn_tty_at_cout("\r\n1", info);
- break;
- default:
- PARSE_ERROR1;
- }
- break;
- default:
- PARSE_ERROR1;
- }
- break;
- case 1:
- /* AT+VIP - Reset all voice parameters */
- isdn_tty_modem_reset_vpar(m);
- break;
- case 2:
- /* AT+VLS - Select device, accept incoming call */
- switch (*p[0]) {
- case '?':
- p[0]++;
- sprintf(rs, "\r\n%d", m->vpar[0]);
- isdn_tty_at_cout(rs, info);
- break;
- case '=':
- p[0]++;
- switch (*p[0]) {
- case '0':
- p[0]++;
- m->vpar[0] = 0;
- break;
- case '2':
- p[0]++;
- m->vpar[0] = 2;
- break;
- case '?':
- p[0]++;
- isdn_tty_at_cout("\r\n0,2", info);
- break;
- default:
- PARSE_ERROR1;
- }
- break;
- default:
- PARSE_ERROR1;
- }
- break;
- case 3:
- /* AT+VRX - Start recording */
- if (!m->vpar[0])
- PARSE_ERROR1;
- if (info->online != 1) {
- isdn_tty_modem_result(RESULT_NO_ANSWER, info);
- return 1;
- }
- info->dtmf_state = isdn_audio_dtmf_init(info->dtmf_state);
- if (!info->dtmf_state) {
- printk(KERN_WARNING "isdn_tty: Couldn't malloc dtmf state\n");
- PARSE_ERROR1;
- }
- info->silence_state = isdn_audio_silence_init(info->silence_state);
- if (!info->silence_state) {
- printk(KERN_WARNING "isdn_tty: Couldn't malloc silence state\n");
- PARSE_ERROR1;
- }
- if (m->vpar[3] < 5) {
- info->adpcmr = isdn_audio_adpcm_init(info->adpcmr, m->vpar[3]);
- if (!info->adpcmr) {
- printk(KERN_WARNING "isdn_tty: Couldn't malloc adpcm state\n");
- PARSE_ERROR1;
- }
- }
-#ifdef ISDN_DEBUG_AT
- printk(KERN_DEBUG "AT: +VRX\n");
-#endif
- info->vonline |= 1;
- isdn_tty_modem_result(RESULT_CONNECT, info);
- return 0;
- break;
- case 4:
- /* AT+VSD - Silence detection */
- switch (*p[0]) {
- case '?':
- p[0]++;
- sprintf(rs, "\r\n<%d>,<%d>",
- m->vpar[1],
- m->vpar[2]);
- isdn_tty_at_cout(rs, info);
- break;
- case '=':
- p[0]++;
- if ((*p[0] >= '0') && (*p[0] <= '9')) {
- par1 = isdn_getnum(p);
- if ((par1 < 0) || (par1 > 31))
- PARSE_ERROR1;
- if (*p[0] != ',')
- PARSE_ERROR1;
- p[0]++;
- par2 = isdn_getnum(p);
- if ((par2 < 0) || (par2 > 255))
- PARSE_ERROR1;
- m->vpar[1] = par1;
- m->vpar[2] = par2;
- break;
- } else
- if (*p[0] == '?') {
- p[0]++;
- isdn_tty_at_cout("\r\n<0-31>,<0-255>",
- info);
- break;
- } else
- PARSE_ERROR1;
- break;
- default:
- PARSE_ERROR1;
- }
- break;
- case 5:
- /* AT+VSM - Select compression */
- switch (*p[0]) {
- case '?':
- p[0]++;
- sprintf(rs, "\r\n<%d>,<%d><8000>",
- m->vpar[3],
- m->vpar[1]);
- isdn_tty_at_cout(rs, info);
- break;
- case '=':
- p[0]++;
- switch (*p[0]) {
- case '2':
- case '3':
- case '4':
- case '5':
- case '6':
- par1 = isdn_getnum(p);
- if ((par1 < 2) || (par1 > 6))
- PARSE_ERROR1;
- m->vpar[3] = par1;
- break;
- case '?':
- p[0]++;
- isdn_tty_at_cout("\r\n2;ADPCM;2;0;(8000)\r\n",
- info);
- isdn_tty_at_cout("3;ADPCM;3;0;(8000)\r\n",
- info);
- isdn_tty_at_cout("4;ADPCM;4;0;(8000)\r\n",
- info);
- isdn_tty_at_cout("5;ALAW;8;0;(8000)\r\n",
- info);
- isdn_tty_at_cout("6;ULAW;8;0;(8000)\r\n",
- info);
- break;
- default:
- PARSE_ERROR1;
- }
- break;
- default:
- PARSE_ERROR1;
- }
- break;
- case 6:
- /* AT+VTX - Start sending */
- if (!m->vpar[0])
- PARSE_ERROR1;
- if (info->online != 1) {
- isdn_tty_modem_result(RESULT_NO_ANSWER, info);
- return 1;
- }
- info->dtmf_state = isdn_audio_dtmf_init(info->dtmf_state);
- if (!info->dtmf_state) {
- printk(KERN_WARNING "isdn_tty: Couldn't malloc dtmf state\n");
- PARSE_ERROR1;
- }
- if (m->vpar[3] < 5) {
- info->adpcms = isdn_audio_adpcm_init(info->adpcms, m->vpar[3]);
- if (!info->adpcms) {
- printk(KERN_WARNING "isdn_tty: Couldn't malloc adpcm state\n");
- PARSE_ERROR1;
- }
- }
-#ifdef ISDN_DEBUG_AT
- printk(KERN_DEBUG "AT: +VTX\n");
-#endif
- m->lastDLE = 0;
- info->vonline |= 2;
- isdn_tty_modem_result(RESULT_CONNECT, info);
- return 0;
- break;
- case 7:
- /* AT+VDD - DTMF detection */
- switch (*p[0]) {
- case '?':
- p[0]++;
- sprintf(rs, "\r\n<%d>,<%d>",
- m->vpar[4],
- m->vpar[5]);
- isdn_tty_at_cout(rs, info);
- break;
- case '=':
- p[0]++;
- if ((*p[0] >= '0') && (*p[0] <= '9')) {
- if (info->online != 1)
- PARSE_ERROR1;
- par1 = isdn_getnum(p);
- if ((par1 < 0) || (par1 > 15))
- PARSE_ERROR1;
- if (*p[0] != ',')
- PARSE_ERROR1;
- p[0]++;
- par2 = isdn_getnum(p);
- if ((par2 < 0) || (par2 > 255))
- PARSE_ERROR1;
- m->vpar[4] = par1;
- m->vpar[5] = par2;
- cmd.driver = info->isdn_driver;
- cmd.command = ISDN_CMD_AUDIO;
- cmd.arg = info->isdn_channel + (ISDN_AUDIO_SETDD << 8);
- cmd.parm.num[0] = par1;
- cmd.parm.num[1] = par2;
- isdn_command(&cmd);
- break;
- } else
- if (*p[0] == '?') {
- p[0]++;
- isdn_tty_at_cout("\r\n<0-15>,<0-255>",
- info);
- break;
- } else
- PARSE_ERROR1;
- break;
- default:
- PARSE_ERROR1;
- }
- break;
- default:
- PARSE_ERROR1;
- }
- return 0;
-}
-#endif /* CONFIG_ISDN_AUDIO */
-
-/*
- * Parse and perform an AT-command-line.
- */
-static void
-isdn_tty_parse_at(modem_info *info)
-{
- atemu *m = &info->emu;
- char *p;
- char ds[ISDN_MSNLEN];
-
-#ifdef ISDN_DEBUG_AT
- printk(KERN_DEBUG "AT: '%s'\n", m->mdmcmd);
-#endif
- for (p = &m->mdmcmd[2]; *p;) {
- switch (*p) {
- case ' ':
- p++;
- break;
- case 'A':
- /* A - Accept incoming call */
- p++;
- isdn_tty_cmd_ATA(info);
- return;
- case 'D':
- /* D - Dial */
- if (info->msr & UART_MSR_DCD)
- PARSE_ERROR;
- if (info->msr & UART_MSR_RI) {
- isdn_tty_modem_result(RESULT_NO_CARRIER, info);
- return;
- }
- isdn_tty_getdial(++p, ds, sizeof ds);
- p += strlen(p);
- if (!strlen(m->msn))
- isdn_tty_modem_result(RESULT_NO_MSN_EAZ, info);
- else if (strlen(ds))
- isdn_tty_dial(ds, info, m);
- else
- PARSE_ERROR;
- return;
- case 'E':
- /* E - Turn Echo on/off */
- p++;
- switch (isdn_getnum(&p)) {
- case 0:
- m->mdmreg[REG_ECHO] &= ~BIT_ECHO;
- break;
- case 1:
- m->mdmreg[REG_ECHO] |= BIT_ECHO;
- break;
- default:
- PARSE_ERROR;
- }
- break;
- case 'H':
- /* H - On/Off-hook */
- p++;
- switch (*p) {
- case '0':
- p++;
- isdn_tty_on_hook(info);
- break;
- case '1':
- p++;
- isdn_tty_off_hook();
- break;
- default:
- isdn_tty_on_hook(info);
- break;
- }
- break;
- case 'I':
- /* I - Information */
- p++;
- isdn_tty_at_cout("\r\nLinux ISDN", info);
- switch (*p) {
- case '0':
- case '1':
- p++;
- break;
- case '2':
- p++;
- isdn_tty_report(info);
- break;
- case '3':
- p++;
- snprintf(ds, sizeof(ds), "\r\n%d", info->emu.charge);
- isdn_tty_at_cout(ds, info);
- break;
- default:;
- }
- break;
-#ifdef DUMMY_HAYES_AT
- case 'L':
- case 'M':
- /* only to be compliant with common scripts */
- /* no function */
- p++;
- isdn_getnum(&p);
- break;
-#endif
- case 'O':
- /* O - Go online */
- p++;
- if (info->msr & UART_MSR_DCD)
- /* if B-Channel is up */
- isdn_tty_modem_result((m->mdmreg[REG_L2PROT] == ISDN_PROTO_L2_MODEM) ? RESULT_CONNECT : RESULT_CONNECT64000, info);
- else
- isdn_tty_modem_result(RESULT_NO_CARRIER, info);
- return;
- case 'Q':
- /* Q - Turn Emulator messages on/off */
- p++;
- switch (isdn_getnum(&p)) {
- case 0:
- m->mdmreg[REG_RESP] |= BIT_RESP;
- break;
- case 1:
- m->mdmreg[REG_RESP] &= ~BIT_RESP;
- break;
- default:
- PARSE_ERROR;
- }
- break;
- case 'S':
- /* S - Set/Get Register */
- p++;
- if (isdn_tty_cmd_ATS(&p, info))
- return;
- break;
- case 'V':
- /* V - Numeric or ASCII Emulator-messages */
- p++;
- switch (isdn_getnum(&p)) {
- case 0:
- m->mdmreg[REG_RESP] |= BIT_RESPNUM;
- break;
- case 1:
- m->mdmreg[REG_RESP] &= ~BIT_RESPNUM;
- break;
- default:
- PARSE_ERROR;
- }
- break;
- case 'Z':
- /* Z - Load Registers from Profile */
- p++;
- if (info->msr & UART_MSR_DCD) {
- info->online = 0;
- isdn_tty_on_hook(info);
- }
- isdn_tty_modem_reset_regs(info, 1);
- break;
- case '+':
- p++;
- switch (*p) {
-#ifdef CONFIG_ISDN_AUDIO
- case 'F':
- p++;
- if (isdn_tty_cmd_PLUSF(&p, info))
- return;
- break;
- case 'V':
- if ((!(m->mdmreg[REG_SI1] & 1)) ||
- (m->mdmreg[REG_L2PROT] == ISDN_PROTO_L2_MODEM))
- PARSE_ERROR;
- p++;
- if (isdn_tty_cmd_PLUSV(&p, info))
- return;
- break;
-#endif /* CONFIG_ISDN_AUDIO */
- case 'S': /* SUSPEND */
- p++;
- isdn_tty_get_msnstr(ds, &p);
- isdn_tty_suspend(ds, info, m);
- break;
- case 'R': /* RESUME */
- p++;
- isdn_tty_get_msnstr(ds, &p);
- isdn_tty_resume(ds, info, m);
- break;
- case 'M': /* MESSAGE */
- p++;
- isdn_tty_send_msg(info, m, p);
- break;
- default:
- PARSE_ERROR;
- }
- break;
- case '&':
- p++;
- if (isdn_tty_cmd_ATand(&p, info))
- return;
- break;
- default:
- PARSE_ERROR;
- }
- }
-#ifdef CONFIG_ISDN_AUDIO
- if (!info->vonline)
-#endif
- isdn_tty_modem_result(RESULT_OK, info);
-}
-
-/* We need our own toupper() because the standard toupper() is not
- * available within modules.
- */
-#define my_toupper(c) (((c >= 'a') && (c <= 'z')) ? (c & 0xdf) : c)
-
-/*
- * Perform line-editing of AT-commands
- *
- * Parameters:
- * p inputbuffer
- * count length of buffer
- * info modem_info of the line (minor device)
- */
-static int
-isdn_tty_edit_at(const char *p, int count, modem_info *info)
-{
- atemu *m = &info->emu;
- int total = 0;
- u_char c;
- char eb[2];
- int cnt;
-
- for (cnt = count; cnt > 0; p++, cnt--) {
- c = *p;
- total++;
- if (c == m->mdmreg[REG_CR] || c == m->mdmreg[REG_LF]) {
- /* Separator (CR or LF) */
- m->mdmcmd[m->mdmcmdl] = 0;
- if (m->mdmreg[REG_ECHO] & BIT_ECHO) {
- eb[0] = c;
- eb[1] = 0;
- isdn_tty_at_cout(eb, info);
- }
- if ((m->mdmcmdl >= 2) && (!(strncmp(m->mdmcmd, "AT", 2))))
- isdn_tty_parse_at(info);
- m->mdmcmdl = 0;
- continue;
- }
- if (c == m->mdmreg[REG_BS] && m->mdmreg[REG_BS] < 128) {
- /* Backspace-Function */
- if ((m->mdmcmdl > 2) || (!m->mdmcmdl)) {
- if (m->mdmcmdl)
- m->mdmcmdl--;
- if (m->mdmreg[REG_ECHO] & BIT_ECHO)
- isdn_tty_at_cout("\b", info);
- }
- continue;
- }
- if (cmdchar(c)) {
- if (m->mdmreg[REG_ECHO] & BIT_ECHO) {
- eb[0] = c;
- eb[1] = 0;
- isdn_tty_at_cout(eb, info);
- }
- if (m->mdmcmdl < 255) {
- c = my_toupper(c);
- switch (m->mdmcmdl) {
- case 1:
- if (c == 'T') {
- m->mdmcmd[m->mdmcmdl] = c;
- m->mdmcmd[++m->mdmcmdl] = 0;
- break;
- } else
- m->mdmcmdl = 0;
- /* Fall through - check for 'A' */
- case 0:
- if (c == 'A') {
- m->mdmcmd[m->mdmcmdl] = c;
- m->mdmcmd[++m->mdmcmdl] = 0;
- }
- break;
- default:
- m->mdmcmd[m->mdmcmdl] = c;
- m->mdmcmd[++m->mdmcmdl] = 0;
- }
- }
- }
- }
- return total;
-}
-
-/*
- * Switch all modem channels that are online and received a valid
- * escape sequence 1.5 seconds ago into command mode.
- * This function is called every second via timer-interrupt from within
- * timer-dispatcher isdn_timer_function()
- */
-void
-isdn_tty_modem_escape(void)
-{
- int ton = 0;
- int i;
- int midx;
-
- for (i = 0; i < ISDN_MAX_CHANNELS; i++)
- if (USG_MODEM(dev->usage[i]) && (midx = dev->m_idx[i]) >= 0) {
- modem_info *info = &dev->mdm.info[midx];
- if (info->online) {
- ton = 1;
- if ((info->emu.pluscount == 3) &&
- time_after(jiffies,
- info->emu.lastplus + PLUSWAIT2)) {
- info->emu.pluscount = 0;
- info->online = 0;
- isdn_tty_modem_result(RESULT_OK, info);
- }
- }
- }
- isdn_timer_ctrl(ISDN_TIMER_MODEMPLUS, ton);
-}
-
-/*
- * Send a RING message to all modem channels that have the RI bit set.
- * This function is called every second via timer-interrupt from within
- * timer-dispatcher isdn_timer_function()
- */
-void
-isdn_tty_modem_ring(void)
-{
- int ton = 0;
- int i;
-
- for (i = 0; i < ISDN_MAX_CHANNELS; i++) {
- modem_info *info = &dev->mdm.info[i];
- if (info->msr & UART_MSR_RI) {
- ton = 1;
- isdn_tty_modem_result(RESULT_RING, info);
- }
- }
- isdn_timer_ctrl(ISDN_TIMER_MODEMRING, ton);
-}
-
-/*
- * For all online ttys, try sending data to
- * the lower levels.
- */
-void
-isdn_tty_modem_xmit(void)
-{
- int ton = 1;
- int i;
-
- for (i = 0; i < ISDN_MAX_CHANNELS; i++) {
- modem_info *info = &dev->mdm.info[i];
- if (info->online) {
- ton = 1;
- isdn_tty_senddown(info);
- isdn_tty_tint(info);
- }
- }
- isdn_timer_ctrl(ISDN_TIMER_MODEMXMIT, ton);
-}
-
-/*
- * Check all channels for a 'no carrier' timeout.
- * Timeout value is set by Register S7.
- */
-void
-isdn_tty_carrier_timeout(void)
-{
- int ton = 0;
- int i;
-
- for (i = 0; i < ISDN_MAX_CHANNELS; i++) {
- modem_info *info = &dev->mdm.info[i];
- if (!info->dialing)
- continue;
- if (info->emu.carrierwait++ > info->emu.mdmreg[REG_WAITC]) {
- info->dialing = 0;
- isdn_tty_modem_result(RESULT_NO_CARRIER, info);
- isdn_tty_modem_hup(info, 1);
- } else
- ton = 1;
- }
- isdn_timer_ctrl(ISDN_TIMER_CARRIER, ton);
-}
diff --git a/drivers/isdn/i4l/isdn_tty.h b/drivers/isdn/i4l/isdn_tty.h
deleted file mode 100644
index a6f801d2263b..000000000000
--- a/drivers/isdn/i4l/isdn_tty.h
+++ /dev/null
@@ -1,120 +0,0 @@
-/* $Id: isdn_tty.h,v 1.1.2.2 2004/01/12 22:37:19 keil Exp $
- *
- * header for Linux ISDN subsystem, tty related functions (linklevel).
- *
- * Copyright 1994-1999 by Fritz Elfert (fritz@isdn4linux.de)
- * Copyright 1995,96 by Thinking Objects Software GmbH Wuerzburg
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-
-#define DLE 0x10
-#define ETX 0x03
-#define DC4 0x14
-
-
-/*
- * Definition of some special Registers of AT-Emulator
- */
-#define REG_RINGATA 0
-#define REG_RINGCNT 1 /* ring counter register */
-#define REG_ESC 2
-#define REG_CR 3
-#define REG_LF 4
-#define REG_BS 5
-
-#define REG_WAITC 7
-
-#define REG_RESP 12 /* show response messages register */
-#define BIT_RESP 1 /* show response messages bit */
-#define REG_RESPNUM 12 /* show numeric responses register */
-#define BIT_RESPNUM 2 /* show numeric responses bit */
-#define REG_ECHO 12
-#define BIT_ECHO 4
-#define REG_DCD 12
-#define BIT_DCD 8
-#define REG_CTS 12
-#define BIT_CTS 16
-#define REG_DTRR 12
-#define BIT_DTRR 32
-#define REG_DSR 12
-#define BIT_DSR 64
-#define REG_CPPP 12
-#define BIT_CPPP 128
-
-#define REG_DXMT 13
-#define BIT_DXMT 1
-#define REG_T70 13
-#define BIT_T70 2
-#define BIT_T70_EXT 32
-#define REG_DTRHUP 13
-#define BIT_DTRHUP 4
-#define REG_RESPXT 13
-#define BIT_RESPXT 8
-#define REG_CIDONCE 13
-#define BIT_CIDONCE 16
-#define REG_RUNG 13 /* show RUNG message register */
-#define BIT_RUNG 64 /* show RUNG message bit */
-#define REG_DISPLAY 13
-#define BIT_DISPLAY 128
-
-#define REG_L2PROT 14
-#define REG_L3PROT 15
-#define REG_PSIZE 16
-#define REG_WSIZE 17
-#define REG_SI1 18
-#define REG_SI2 19
-#define REG_SI1I 20
-#define REG_PLAN 21
-#define REG_SCREEN 22
-
-#define REG_CPN 23
-#define BIT_CPN 1
-#define REG_CPNFCON 23
-#define BIT_CPNFCON 2
-#define REG_CDN 23
-#define BIT_CDN 4
-
-/* defines for result codes */
-#define RESULT_OK 0
-#define RESULT_CONNECT 1
-#define RESULT_RING 2
-#define RESULT_NO_CARRIER 3
-#define RESULT_ERROR 4
-#define RESULT_CONNECT64000 5
-#define RESULT_NO_DIALTONE 6
-#define RESULT_BUSY 7
-#define RESULT_NO_ANSWER 8
-#define RESULT_RINGING 9
-#define RESULT_NO_MSN_EAZ 10
-#define RESULT_VCON 11
-#define RESULT_RUNG 12
-
-#define TTY_IS_FCLASS1(info) \
- ((info->emu.mdmreg[REG_L2PROT] == ISDN_PROTO_L2_FAX) && \
- (info->emu.mdmreg[REG_L3PROT] == ISDN_PROTO_L3_FCLASS1))
-#define TTY_IS_FCLASS2(info) \
- ((info->emu.mdmreg[REG_L2PROT] == ISDN_PROTO_L2_FAX) && \
- (info->emu.mdmreg[REG_L3PROT] == ISDN_PROTO_L3_FCLASS2))
-
-extern void isdn_tty_modem_escape(void);
-extern void isdn_tty_modem_ring(void);
-extern void isdn_tty_carrier_timeout(void);
-extern void isdn_tty_modem_xmit(void);
-extern int isdn_tty_modem_init(void);
-extern void isdn_tty_exit(void);
-extern void isdn_tty_readmodem(void);
-extern int isdn_tty_find_icall(int, int, setup_parm *);
-extern int isdn_tty_stat_callback(int, isdn_ctrl *);
-extern int isdn_tty_rcv_skb(int, int, int, struct sk_buff *);
-extern int isdn_tty_capi_facility(capi_msg *cm);
-extern void isdn_tty_at_cout(char *, modem_info *);
-extern void isdn_tty_modem_hup(modem_info *, int);
-#ifdef CONFIG_ISDN_TTY_FAX
-extern int isdn_tty_cmd_PLUSF_FAX(char **, modem_info *);
-extern int isdn_tty_fax_command(modem_info *, isdn_ctrl *);
-extern void isdn_tty_fax_bitorder(modem_info *, struct sk_buff *);
-#endif
diff --git a/drivers/isdn/i4l/isdn_ttyfax.c b/drivers/isdn/i4l/isdn_ttyfax.c
deleted file mode 100644
index 47aae4916730..000000000000
--- a/drivers/isdn/i4l/isdn_ttyfax.c
+++ /dev/null
@@ -1,1123 +0,0 @@
-/* $Id: isdn_ttyfax.c,v 1.1.2.2 2004/01/12 22:37:19 keil Exp $
- *
- * Linux ISDN subsystem, tty_fax AT-command emulator (linklevel).
- *
- * Copyright 1999 by Armin Schindler (mac@melware.de)
- * Copyright 1999 by Ralf Spachmann (mel@melware.de)
- * Copyright 1999 by Cytronics & Melware
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#undef ISDN_TTY_FAX_STAT_DEBUG
-#undef ISDN_TTY_FAX_CMD_DEBUG
-
-#include <linux/isdn.h>
-#include "isdn_common.h"
-#include "isdn_tty.h"
-#include "isdn_ttyfax.h"
-
-
-static char *isdn_tty_fax_revision = "$Revision: 1.1.2.2 $";
-
-#define PARSE_ERROR1 { isdn_tty_fax_modem_result(1, info); return 1; }
-
-static char *
-isdn_getrev(const char *revision)
-{
- char *rev;
- char *p;
-
- if ((p = strchr(revision, ':'))) {
- rev = p + 2;
- p = strchr(rev, '$');
- *--p = 0;
- } else
- rev = "???";
- return rev;
-}
-
-/*
- * Fax Class 2 Modem results
- *
- */
-
-static void
-isdn_tty_fax_modem_result(int code, modem_info *info)
-{
- atemu *m = &info->emu;
- T30_s *f = info->fax;
- char rs[50];
- char rss[50];
- char *rp;
- int i;
- static char *msg[] =
- {"OK", "ERROR", "+FCON", "+FCSI:", "+FDIS:",
- "+FHNG:", "+FDCS:", "CONNECT", "+FTSI:",
- "+FCFR", "+FPTS:", "+FET:"};
-
-
- isdn_tty_at_cout("\r\n", info);
- isdn_tty_at_cout(msg[code], info);
-
-#ifdef ISDN_TTY_FAX_CMD_DEBUG
- printk(KERN_DEBUG "isdn_tty: Fax send %s on ttyI%d\n",
- msg[code], info->line);
-#endif
- switch (code) {
- case 0: /* OK */
- break;
- case 1: /* ERROR */
- break;
- case 2: /* +FCON */
- /* Append CPN, if enabled */
- if ((m->mdmreg[REG_CPNFCON] & BIT_CPNFCON) &&
- (!(dev->usage[info->isdn_channel] & ISDN_USAGE_OUTGOING))) {
- sprintf(rs, "/%s", m->cpn);
- isdn_tty_at_cout(rs, info);
- }
- info->online = 1;
- f->fet = 0;
- if (f->phase == ISDN_FAX_PHASE_A)
- f->phase = ISDN_FAX_PHASE_B;
- break;
- case 3: /* +FCSI */
- case 8: /* +FTSI */
- sprintf(rs, "\"%s\"", f->r_id);
- isdn_tty_at_cout(rs, info);
- break;
- case 4: /* +FDIS */
- rs[0] = 0;
- rp = &f->r_resolution;
- for (i = 0; i < 8; i++) {
- sprintf(rss, "%c%s", rp[i] + 48,
- (i < 7) ? "," : "");
- strcat(rs, rss);
- }
- isdn_tty_at_cout(rs, info);
-#ifdef ISDN_TTY_FAX_CMD_DEBUG
- printk(KERN_DEBUG "isdn_tty: Fax DIS=%s on ttyI%d\n",
- rs, info->line);
-#endif
- break;
- case 5: /* +FHNG */
- sprintf(rs, "%d", f->code);
- isdn_tty_at_cout(rs, info);
- info->faxonline = 0;
- break;
- case 6: /* +FDCS */
- rs[0] = 0;
- rp = &f->r_resolution;
- for (i = 0; i < 8; i++) {
- sprintf(rss, "%c%s", rp[i] + 48,
- (i < 7) ? "," : "");
- strcat(rs, rss);
- }
- isdn_tty_at_cout(rs, info);
-#ifdef ISDN_TTY_FAX_CMD_DEBUG
- printk(KERN_DEBUG "isdn_tty: Fax DCS=%s on ttyI%d\n",
- rs, info->line);
-#endif
- break;
- case 7: /* CONNECT */
- info->faxonline |= 2;
- break;
- case 9: /* FCFR */
- break;
- case 10: /* FPTS */
- isdn_tty_at_cout("1", info);
- break;
- case 11: /* FET */
- sprintf(rs, "%d", f->fet);
- isdn_tty_at_cout(rs, info);
- break;
- }
-
- isdn_tty_at_cout("\r\n", info);
-
- switch (code) {
- case 7: /* CONNECT */
- info->online = 2;
- if (info->faxonline & 1) {
- sprintf(rs, "%c", XON);
- isdn_tty_at_cout(rs, info);
- }
- break;
- }
-}
-
-static int
-isdn_tty_fax_command1(modem_info *info, isdn_ctrl *c)
-{
- static char *msg[] =
- {"OK", "CONNECT", "NO CARRIER", "ERROR", "FCERROR"};
-
-#ifdef ISDN_TTY_FAX_CMD_DEBUG
- printk(KERN_DEBUG "isdn_tty: FCLASS1 cmd(%d)\n", c->parm.aux.cmd);
-#endif
- if (c->parm.aux.cmd < ISDN_FAX_CLASS1_QUERY) {
- if (info->online)
- info->online = 1;
- isdn_tty_at_cout("\r\n", info);
- isdn_tty_at_cout(msg[c->parm.aux.cmd], info);
- isdn_tty_at_cout("\r\n", info);
- }
- switch (c->parm.aux.cmd) {
- case ISDN_FAX_CLASS1_CONNECT:
- info->online = 2;
- break;
- case ISDN_FAX_CLASS1_OK:
- case ISDN_FAX_CLASS1_FCERROR:
- case ISDN_FAX_CLASS1_ERROR:
- case ISDN_FAX_CLASS1_NOCARR:
- break;
- case ISDN_FAX_CLASS1_QUERY:
- isdn_tty_at_cout("\r\n", info);
- if (!c->parm.aux.para[0]) {
- isdn_tty_at_cout(msg[ISDN_FAX_CLASS1_ERROR], info);
- isdn_tty_at_cout("\r\n", info);
- } else {
- isdn_tty_at_cout(c->parm.aux.para, info);
- isdn_tty_at_cout("\r\nOK\r\n", info);
- }
- break;
- }
- return (0);
-}
-
-int
-isdn_tty_fax_command(modem_info *info, isdn_ctrl *c)
-{
- T30_s *f = info->fax;
- char rs[10];
-
- if (TTY_IS_FCLASS1(info))
- return (isdn_tty_fax_command1(info, c));
-
-#ifdef ISDN_TTY_FAX_CMD_DEBUG
- printk(KERN_DEBUG "isdn_tty: Fax cmd %d on ttyI%d\n",
- f->r_code, info->line);
-#endif
- switch (f->r_code) {
- case ISDN_TTY_FAX_FCON:
- info->faxonline = 1;
- isdn_tty_fax_modem_result(2, info); /* +FCON */
- return (0);
- case ISDN_TTY_FAX_FCON_I:
- info->faxonline = 16;
- isdn_tty_fax_modem_result(2, info); /* +FCON */
- return (0);
- case ISDN_TTY_FAX_RID:
- if (info->faxonline & 1)
- isdn_tty_fax_modem_result(3, info); /* +FCSI */
- if (info->faxonline & 16)
- isdn_tty_fax_modem_result(8, info); /* +FTSI */
- return (0);
- case ISDN_TTY_FAX_DIS:
- isdn_tty_fax_modem_result(4, info); /* +FDIS */
- return (0);
- case ISDN_TTY_FAX_HNG:
- if (f->phase == ISDN_FAX_PHASE_C) {
- if (f->direction == ISDN_TTY_FAX_CONN_IN) {
- sprintf(rs, "%c%c", DLE, ETX);
- isdn_tty_at_cout(rs, info);
- } else {
- sprintf(rs, "%c", 0x18);
- isdn_tty_at_cout(rs, info);
- }
- info->faxonline &= ~2; /* leave data mode */
- info->online = 1;
- }
- f->phase = ISDN_FAX_PHASE_E;
- isdn_tty_fax_modem_result(5, info); /* +FHNG */
- isdn_tty_fax_modem_result(0, info); /* OK */
- return (0);
- case ISDN_TTY_FAX_DCS:
- isdn_tty_fax_modem_result(6, info); /* +FDCS */
- isdn_tty_fax_modem_result(7, info); /* CONNECT */
- f->phase = ISDN_FAX_PHASE_C;
- return (0);
- case ISDN_TTY_FAX_TRAIN_OK:
- isdn_tty_fax_modem_result(6, info); /* +FDCS */
- isdn_tty_fax_modem_result(0, info); /* OK */
- return (0);
- case ISDN_TTY_FAX_SENT:
- isdn_tty_fax_modem_result(0, info); /* OK */
- return (0);
- case ISDN_TTY_FAX_CFR:
- isdn_tty_fax_modem_result(9, info); /* +FCFR */
- return (0);
- case ISDN_TTY_FAX_ET:
- sprintf(rs, "%c%c", DLE, ETX);
- isdn_tty_at_cout(rs, info);
- isdn_tty_fax_modem_result(10, info); /* +FPTS */
- isdn_tty_fax_modem_result(11, info); /* +FET */
- isdn_tty_fax_modem_result(0, info); /* OK */
- info->faxonline &= ~2; /* leave data mode */
- info->online = 1;
- f->phase = ISDN_FAX_PHASE_D;
- return (0);
- case ISDN_TTY_FAX_PTS:
- isdn_tty_fax_modem_result(10, info); /* +FPTS */
- if (f->direction == ISDN_TTY_FAX_CONN_OUT) {
- if (f->fet == 1)
- f->phase = ISDN_FAX_PHASE_B;
- if (f->fet == 0)
- isdn_tty_fax_modem_result(0, info); /* OK */
- }
- return (0);
- case ISDN_TTY_FAX_EOP:
- info->faxonline &= ~2; /* leave data mode */
- info->online = 1;
- f->phase = ISDN_FAX_PHASE_D;
- return (0);
-
- }
- return (-1);
-}
-
-
-void
-isdn_tty_fax_bitorder(modem_info *info, struct sk_buff *skb)
-{
- __u8 LeftMask;
- __u8 RightMask;
- __u8 fBit;
- __u8 Data;
- int i;
-
- if (!info->fax->bor) {
- for (i = 0; i < skb->len; i++) {
- Data = skb->data[i];
- for (
- LeftMask = 0x80, RightMask = 0x01;
- LeftMask > RightMask;
- LeftMask >>= 1, RightMask <<= 1
- ) {
- fBit = (Data & LeftMask);
- if (Data & RightMask)
- Data |= LeftMask;
- else
- Data &= ~LeftMask;
- if (fBit)
- Data |= RightMask;
- else
- Data &= ~RightMask;
-
- }
- skb->data[i] = Data;
- }
- }
-}
-
-/*
- * Parse AT+F.. FAX class 1 commands
- */
-
-static int
-isdn_tty_cmd_FCLASS1(char **p, modem_info *info)
-{
- static char *cmd[] =
- {"AE", "TS", "RS", "TM", "RM", "TH", "RH"};
- isdn_ctrl c;
- int par, i;
- u_long flags;
-
- for (c.parm.aux.cmd = 0; c.parm.aux.cmd < 7; c.parm.aux.cmd++)
- if (!strncmp(p[0], cmd[c.parm.aux.cmd], 2))
- break;
-
-#ifdef ISDN_TTY_FAX_CMD_DEBUG
- printk(KERN_DEBUG "isdn_tty_cmd_FCLASS1 (%s,%d)\n", p[0], c.parm.aux.cmd);
-#endif
- if (c.parm.aux.cmd == 7)
- PARSE_ERROR1;
-
- p[0] += 2;
- switch (*p[0]) {
- case '?':
- p[0]++;
- c.parm.aux.subcmd = AT_QUERY;
- break;
- case '=':
- p[0]++;
- if (*p[0] == '?') {
- p[0]++;
- c.parm.aux.subcmd = AT_EQ_QUERY;
- } else {
- par = isdn_getnum(p);
- if ((par < 0) || (par > 255))
- PARSE_ERROR1;
- c.parm.aux.subcmd = AT_EQ_VALUE;
- c.parm.aux.para[0] = par;
- }
- break;
- case 0:
- c.parm.aux.subcmd = AT_COMMAND;
- break;
- default:
- PARSE_ERROR1;
- }
- c.command = ISDN_CMD_FAXCMD;
-#ifdef ISDN_TTY_FAX_CMD_DEBUG
- printk(KERN_DEBUG "isdn_tty_cmd_FCLASS1 %d/%d/%d)\n",
- c.parm.aux.cmd, c.parm.aux.subcmd, c.parm.aux.para[0]);
-#endif
- if (info->isdn_driver < 0) {
- if ((c.parm.aux.subcmd == AT_EQ_VALUE) ||
- (c.parm.aux.subcmd == AT_COMMAND)) {
- PARSE_ERROR1;
- }
- spin_lock_irqsave(&dev->lock, flags);
- /* get a temporary connection to the first free fax driver */
- i = isdn_get_free_channel(ISDN_USAGE_FAX, ISDN_PROTO_L2_FAX,
- ISDN_PROTO_L3_FCLASS1, -1, -1, "00");
- if (i < 0) {
- spin_unlock_irqrestore(&dev->lock, flags);
- PARSE_ERROR1;
- }
- info->isdn_driver = dev->drvmap[i];
- info->isdn_channel = dev->chanmap[i];
- info->drv_index = i;
- dev->m_idx[i] = info->line;
- spin_unlock_irqrestore(&dev->lock, flags);
- c.driver = info->isdn_driver;
- c.arg = info->isdn_channel;
- isdn_command(&c);
- spin_lock_irqsave(&dev->lock, flags);
- isdn_free_channel(info->isdn_driver, info->isdn_channel,
- ISDN_USAGE_FAX);
- info->isdn_driver = -1;
- info->isdn_channel = -1;
- if (info->drv_index >= 0) {
- dev->m_idx[info->drv_index] = -1;
- info->drv_index = -1;
- }
- spin_unlock_irqrestore(&dev->lock, flags);
- } else {
- c.driver = info->isdn_driver;
- c.arg = info->isdn_channel;
- isdn_command(&c);
- }
- return 1;
-}
-
-/*
- * Parse AT+F.. FAX class 2 commands
- */
-
-static int
-isdn_tty_cmd_FCLASS2(char **p, modem_info *info)
-{
- atemu *m = &info->emu;
- T30_s *f = info->fax;
- isdn_ctrl cmd;
- int par;
- char rs[50];
- char rss[50];
- int maxdccval[] =
- {1, 5, 2, 2, 3, 2, 0, 7};
-
- /* FAA still unchanged */
- if (!strncmp(p[0], "AA", 2)) { /* TODO */
- p[0] += 2;
- switch (*p[0]) {
- case '?':
- p[0]++;
- sprintf(rs, "\r\n%d", 0);
- isdn_tty_at_cout(rs, info);
- break;
- case '=':
- p[0]++;
- par = isdn_getnum(p);
- if ((par < 0) || (par > 255))
- PARSE_ERROR1;
- break;
- default:
- PARSE_ERROR1;
- }
- return 0;
- }
- /* BADLIN=value - dummy; 0 = error check disabled, 1-255 = number of bad lines for marking a page bad */
- if (!strncmp(p[0], "BADLIN", 6)) {
- p[0] += 6;
- switch (*p[0]) {
- case '?':
- p[0]++;
- sprintf(rs, "\r\n%d", f->badlin);
- isdn_tty_at_cout(rs, info);
- break;
- case '=':
- p[0]++;
- if (*p[0] == '?') {
- p[0]++;
- sprintf(rs, "\r\n0-255");
- isdn_tty_at_cout(rs, info);
- } else {
- par = isdn_getnum(p);
- if ((par < 0) || (par > 255))
- PARSE_ERROR1;
- f->badlin = par;
-#ifdef ISDN_TTY_FAX_STAT_DEBUG
- printk(KERN_DEBUG "isdn_tty: Fax FBADLIN=%d\n", par);
-#endif
- }
- break;
- default:
- PARSE_ERROR1;
- }
- return 0;
- }
- /* BADMUL=value - dummy; 0 = error check disabled (threshold multiplier) */
- if (!strncmp(p[0], "BADMUL", 6)) {
- p[0] += 6;
- switch (*p[0]) {
- case '?':
- p[0]++;
- sprintf(rs, "\r\n%d", f->badmul);
- isdn_tty_at_cout(rs, info);
- break;
- case '=':
- p[0]++;
- if (*p[0] == '?') {
- p[0]++;
- sprintf(rs, "\r\n0-255");
- isdn_tty_at_cout(rs, info);
- } else {
- par = isdn_getnum(p);
- if ((par < 0) || (par > 255))
- PARSE_ERROR1;
- f->badmul = par;
-#ifdef ISDN_TTY_FAX_STAT_DEBUG
- printk(KERN_DEBUG "isdn_tty: Fax FBADMUL=%d\n", par);
-#endif
- }
- break;
- default:
- PARSE_ERROR1;
- }
- return 0;
- }
- /* BOR=n - Phase C bit order, 0=direct, 1=reverse */
- if (!strncmp(p[0], "BOR", 3)) {
- p[0] += 3;
- switch (*p[0]) {
- case '?':
- p[0]++;
- sprintf(rs, "\r\n%d", f->bor);
- isdn_tty_at_cout(rs, info);
- break;
- case '=':
- p[0]++;
- if (*p[0] == '?') {
- p[0]++;
- sprintf(rs, "\r\n0,1");
- isdn_tty_at_cout(rs, info);
- } else {
- par = isdn_getnum(p);
- if ((par < 0) || (par > 1))
- PARSE_ERROR1;
- f->bor = par;
-#ifdef ISDN_TTY_FAX_STAT_DEBUG
- printk(KERN_DEBUG "isdn_tty: Fax FBOR=%d\n", par);
-#endif
- }
- break;
- default:
- PARSE_ERROR1;
- }
- return 0;
- }
- /* NBC=n - No Best Capabilities */
- if (!strncmp(p[0], "NBC", 3)) {
- p[0] += 3;
- switch (*p[0]) {
- case '?':
- p[0]++;
- sprintf(rs, "\r\n%d", f->nbc);
- isdn_tty_at_cout(rs, info);
- break;
- case '=':
- p[0]++;
- if (*p[0] == '?') {
- p[0]++;
- sprintf(rs, "\r\n0,1");
- isdn_tty_at_cout(rs, info);
- } else {
- par = isdn_getnum(p);
- if ((par < 0) || (par > 1))
- PARSE_ERROR1;
- f->nbc = par;
-#ifdef ISDN_TTY_FAX_STAT_DEBUG
- printk(KERN_DEBUG "isdn_tty: Fax FNBC=%d\n", par);
-#endif
- }
- break;
- default:
- PARSE_ERROR1;
- }
- return 0;
- }
- /* BUF? - Readonly buffersize readout */
- if (!strncmp(p[0], "BUF?", 4)) {
- p[0] += 4;
-#ifdef ISDN_TTY_FAX_STAT_DEBUG
- printk(KERN_DEBUG "isdn_tty: Fax FBUF? (%d) \n", (16 * m->mdmreg[REG_PSIZE]));
-#endif
- p[0]++;
- sprintf(rs, "\r\n %d ", (16 * m->mdmreg[REG_PSIZE]));
- isdn_tty_at_cout(rs, info);
- return 0;
- }
- /* CIG=string - local fax station id string for polling rx */
- if (!strncmp(p[0], "CIG", 3)) {
- int i, r;
- p[0] += 3;
- switch (*p[0]) {
- case '?':
- p[0]++;
- sprintf(rs, "\r\n\"%s\"", f->pollid);
- isdn_tty_at_cout(rs, info);
- break;
- case '=':
- p[0]++;
- if (*p[0] == '?') {
- p[0]++;
- sprintf(rs, "\r\n\"STRING\"");
- isdn_tty_at_cout(rs, info);
- } else {
- if (*p[0] == '"')
- p[0]++;
- for (i = 0; (*p[0]) && i < (FAXIDLEN - 1) && (*p[0] != '"'); i++) {
- f->pollid[i] = *p[0]++;
- }
- if (*p[0] == '"')
- p[0]++;
- for (r = i; r < FAXIDLEN; r++) {
- f->pollid[r] = 32;
- }
- f->pollid[FAXIDLEN - 1] = 0;
-#ifdef ISDN_TTY_FAX_STAT_DEBUG
- printk(KERN_DEBUG "isdn_tty: Fax local poll ID rx \"%s\"\n", f->pollid);
-#endif
- }
- break;
- default:
- PARSE_ERROR1;
- }
- return 0;
- }
- /* CQ=n - copy qlty chk, 0= no chk, 1=only 1D chk, 2=1D+2D chk */
- if (!strncmp(p[0], "CQ", 2)) {
- p[0] += 2;
- switch (*p[0]) {
- case '?':
- p[0]++;
- sprintf(rs, "\r\n%d", f->cq);
- isdn_tty_at_cout(rs, info);
- break;
- case '=':
- p[0]++;
- if (*p[0] == '?') {
- p[0]++;
- sprintf(rs, "\r\n0,1,2");
- isdn_tty_at_cout(rs, info);
- } else {
- par = isdn_getnum(p);
- if ((par < 0) || (par > 2))
- PARSE_ERROR1;
- f->cq = par;
-#ifdef ISDN_TTY_FAX_STAT_DEBUG
- printk(KERN_DEBUG "isdn_tty: Fax FCQ=%d\n", par);
-#endif
- }
- break;
- default:
- PARSE_ERROR1;
- }
- return 0;
- }
- /* CR=n - can receive? 0= no data rx or poll remote dev, 1=do receive data or poll remote dev */
- if (!strncmp(p[0], "CR", 2)) {
- p[0] += 2;
- switch (*p[0]) {
- case '?':
- p[0]++;
- sprintf(rs, "\r\n%d", f->cr); /* read actual value from struct and print */
- isdn_tty_at_cout(rs, info);
- break;
- case '=':
- p[0]++;
- if (*p[0] == '?') {
- p[0]++;
- sprintf(rs, "\r\n0,1"); /* display online help */
- isdn_tty_at_cout(rs, info);
- } else {
- par = isdn_getnum(p);
- if ((par < 0) || (par > 1))
- PARSE_ERROR1;
- f->cr = par;
-#ifdef ISDN_TTY_FAX_STAT_DEBUG
- printk(KERN_DEBUG "isdn_tty: Fax FCR=%d\n", par);
-#endif
- }
- break;
- default:
- PARSE_ERROR1;
- }
- return 0;
- }
- /* CTCRTY=value - ECM retry count */
- if (!strncmp(p[0], "CTCRTY", 6)) {
- p[0] += 6;
- switch (*p[0]) {
- case '?':
- p[0]++;
- sprintf(rs, "\r\n%d", f->ctcrty);
- isdn_tty_at_cout(rs, info);
- break;
- case '=':
- p[0]++;
- if (*p[0] == '?') {
- p[0]++;
- sprintf(rs, "\r\n0-255");
- isdn_tty_at_cout(rs, info);
- } else {
- par = isdn_getnum(p);
- if ((par < 0) || (par > 255))
- PARSE_ERROR1;
- f->ctcrty = par;
-#ifdef ISDN_TTY_FAX_STAT_DEBUG
- printk(KERN_DEBUG "isdn_tty: Fax FCTCRTY=%d\n", par);
-#endif
- }
- break;
- default:
- PARSE_ERROR1;
- }
- return 0;
- }
- /* DCC=vr,br,wd,ln,df,ec,bf,st - DCE capabilities parms */
- if (!strncmp(p[0], "DCC", 3)) {
- char *rp = &f->resolution;
- int i;
-
- p[0] += 3;
- switch (*p[0]) {
- case '?':
- p[0]++;
- strcpy(rs, "\r\n");
- for (i = 0; i < 8; i++) {
- sprintf(rss, "%c%s", rp[i] + 48,
- (i < 7) ? "," : "");
- strcat(rs, rss);
- }
- isdn_tty_at_cout(rs, info);
- break;
- case '=':
- p[0]++;
- if (*p[0] == '?') {
- isdn_tty_at_cout("\r\n(0,1),(0-5),(0-2),(0-2),(0-3),(0-2),(0),(0-7)", info);
- p[0]++;
- } else {
- for (i = 0; (((*p[0] >= '0') && (*p[0] <= '9')) || (*p[0] == ',')) && (i < 8); i++) {
- if (*p[0] != ',') {
- if ((*p[0] - 48) > maxdccval[i]) {
- PARSE_ERROR1;
- }
- rp[i] = *p[0] - 48;
- p[0]++;
- if (*p[0] == ',')
- p[0]++;
- } else
- p[0]++;
- }
-#ifdef ISDN_TTY_FAX_STAT_DEBUG
- printk(KERN_DEBUG "isdn_tty: Fax FDCC capabilities DCE=%d,%d,%d,%d,%d,%d,%d,%d\n",
- rp[0], rp[1], rp[2], rp[3], rp[4], rp[5], rp[6], rp[7]);
-#endif
- }
- break;
- default:
- PARSE_ERROR1;
- }
- return 0;
- }
- /* DIS=vr,br,wd,ln,df,ec,bf,st - current session parms */
- if (!strncmp(p[0], "DIS", 3)) {
- char *rp = &f->resolution;
- int i;
-
- p[0] += 3;
- switch (*p[0]) {
- case '?':
- p[0]++;
- strcpy(rs, "\r\n");
- for (i = 0; i < 8; i++) {
- sprintf(rss, "%c%s", rp[i] + 48,
- (i < 7) ? "," : "");
- strcat(rs, rss);
- }
- isdn_tty_at_cout(rs, info);
- break;
- case '=':
- p[0]++;
- if (*p[0] == '?') {
- isdn_tty_at_cout("\r\n(0,1),(0-5),(0-2),(0-2),(0-3),(0-2),(0),(0-7)", info);
- p[0]++;
- } else {
- for (i = 0; (((*p[0] >= '0') && (*p[0] <= '9')) || (*p[0] == ',')) && (i < 8); i++) {
- if (*p[0] != ',') {
- if ((*p[0] - 48) > maxdccval[i]) {
- PARSE_ERROR1;
- }
- rp[i] = *p[0] - 48;
- p[0]++;
- if (*p[0] == ',')
- p[0]++;
- } else
- p[0]++;
- }
-#ifdef ISDN_TTY_FAX_STAT_DEBUG
- printk(KERN_DEBUG "isdn_tty: Fax FDIS session parms=%d,%d,%d,%d,%d,%d,%d,%d\n",
- rp[0], rp[1], rp[2], rp[3], rp[4], rp[5], rp[6], rp[7]);
-#endif
- }
- break;
- default:
- PARSE_ERROR1;
- }
- return 0;
- }
- /* DR - Receive Phase C data command, initiates document reception */
- if (!strncmp(p[0], "DR", 2)) {
- p[0] += 2;
- if ((info->faxonline & 16) && /* incoming connection */
- ((f->phase == ISDN_FAX_PHASE_B) || (f->phase == ISDN_FAX_PHASE_D))) {
-#ifdef ISDN_TTY_FAX_STAT_DEBUG
- printk(KERN_DEBUG "isdn_tty: Fax FDR\n");
-#endif
- f->code = ISDN_TTY_FAX_DR;
- cmd.driver = info->isdn_driver;
- cmd.arg = info->isdn_channel;
- cmd.command = ISDN_CMD_FAXCMD;
- isdn_command(&cmd);
- if (f->phase == ISDN_FAX_PHASE_B) {
- f->phase = ISDN_FAX_PHASE_C;
- } else if (f->phase == ISDN_FAX_PHASE_D) {
- switch (f->fet) {
- case 0: /* next page will be received */
- f->phase = ISDN_FAX_PHASE_C;
- isdn_tty_fax_modem_result(7, info); /* CONNECT */
- break;
- case 1: /* next doc will be received */
- f->phase = ISDN_FAX_PHASE_B;
- break;
- case 2: /* fax session is terminating */
- f->phase = ISDN_FAX_PHASE_E;
- break;
- default:
- PARSE_ERROR1;
- }
- }
- } else {
- PARSE_ERROR1;
- }
- return 1;
- }
- /* DT=df,vr,wd,ln - TX phase C data command (release DCE to proceed with negotiation) */
- if (!strncmp(p[0], "DT", 2)) {
- int i, val[] =
- {4, 0, 2, 3};
- char *rp = &f->resolution;
-
- p[0] += 2;
- if (!(info->faxonline & 1)) /* not outgoing connection */
- PARSE_ERROR1;
-
- for (i = 0; (((*p[0] >= '0') && (*p[0] <= '9')) || (*p[0] == ',')) && (i < 4); i++) {
- if (*p[0] != ',') {
- if ((*p[0] - 48) > maxdccval[val[i]]) {
- PARSE_ERROR1;
- }
- rp[val[i]] = *p[0] - 48;
- p[0]++;
- if (*p[0] == ',')
- p[0]++;
- } else
- p[0]++;
- }
-#ifdef ISDN_TTY_FAX_STAT_DEBUG
- printk(KERN_DEBUG "isdn_tty: Fax FDT tx data command parms=%d,%d,%d,%d\n",
- rp[4], rp[0], rp[2], rp[3]);
-#endif
- if ((f->phase == ISDN_FAX_PHASE_B) || (f->phase == ISDN_FAX_PHASE_D)) {
- f->code = ISDN_TTY_FAX_DT;
- cmd.driver = info->isdn_driver;
- cmd.arg = info->isdn_channel;
- cmd.command = ISDN_CMD_FAXCMD;
- isdn_command(&cmd);
- if (f->phase == ISDN_FAX_PHASE_D) {
- f->phase = ISDN_FAX_PHASE_C;
- isdn_tty_fax_modem_result(7, info); /* CONNECT */
- }
- } else {
- PARSE_ERROR1;
- }
- return 1;
- }
- /* ECM=n - Error mode control 0=disabled, 2=enabled, handled by DCE alone incl. buff of partial pages */
- if (!strncmp(p[0], "ECM", 3)) {
- p[0] += 3;
- switch (*p[0]) {
- case '?':
- p[0]++;
- sprintf(rs, "\r\n%d", f->ecm);
- isdn_tty_at_cout(rs, info);
- break;
- case '=':
- p[0]++;
- if (*p[0] == '?') {
- p[0]++;
- sprintf(rs, "\r\n0,2");
- isdn_tty_at_cout(rs, info);
- } else {
- par = isdn_getnum(p);
- if ((par != 0) && (par != 2))
- PARSE_ERROR1;
- f->ecm = par;
-#ifdef ISDN_TTY_FAX_STAT_DEBUG
- printk(KERN_DEBUG "isdn_tty: Fax FECM=%d\n", par);
-#endif
- }
- break;
- default:
- PARSE_ERROR1;
- }
- return 0;
- }
- /* ET=n - End of page or document */
- if (!strncmp(p[0], "ET=", 3)) {
- p[0] += 3;
- if (*p[0] == '?') {
- p[0]++;
- sprintf(rs, "\r\n0-2");
- isdn_tty_at_cout(rs, info);
- } else {
- if ((f->phase != ISDN_FAX_PHASE_D) ||
- (!(info->faxonline & 1)))
- PARSE_ERROR1;
- par = isdn_getnum(p);
- if ((par < 0) || (par > 2))
- PARSE_ERROR1;
- f->fet = par;
- f->code = ISDN_TTY_FAX_ET;
- cmd.driver = info->isdn_driver;
- cmd.arg = info->isdn_channel;
- cmd.command = ISDN_CMD_FAXCMD;
- isdn_command(&cmd);
-#ifdef ISDN_TTY_FAX_STAT_DEBUG
- printk(KERN_DEBUG "isdn_tty: Fax FET=%d\n", par);
-#endif
- return 1;
- }
- return 0;
- }
- /* K - terminate */
- if (!strncmp(p[0], "K", 1)) {
- p[0] += 1;
- if ((f->phase == ISDN_FAX_PHASE_IDLE) || (f->phase == ISDN_FAX_PHASE_E))
- PARSE_ERROR1;
- isdn_tty_modem_hup(info, 1);
- return 1;
- }
- /* LID=string - local fax ID */
- if (!strncmp(p[0], "LID", 3)) {
- int i, r;
- p[0] += 3;
- switch (*p[0]) {
- case '?':
- p[0]++;
- sprintf(rs, "\r\n\"%s\"", f->id);
- isdn_tty_at_cout(rs, info);
- break;
- case '=':
- p[0]++;
- if (*p[0] == '?') {
- p[0]++;
- sprintf(rs, "\r\n\"STRING\"");
- isdn_tty_at_cout(rs, info);
- } else {
- if (*p[0] == '"')
- p[0]++;
- for (i = 0; (*p[0]) && i < (FAXIDLEN - 1) && (*p[0] != '"'); i++) {
- f->id[i] = *p[0]++;
- }
- if (*p[0] == '"')
- p[0]++;
- for (r = i; r < FAXIDLEN; r++) {
- f->id[r] = 32;
- }
- f->id[FAXIDLEN - 1] = 0;
-#ifdef ISDN_TTY_FAX_STAT_DEBUG
- printk(KERN_DEBUG "isdn_tty: Fax local ID \"%s\"\n", f->id);
-#endif
- }
- break;
- default:
- PARSE_ERROR1;
- }
- return 0;
- }
-
- /* MDL? - DCE Model */
- if (!strncmp(p[0], "MDL?", 4)) {
- p[0] += 4;
-#ifdef ISDN_TTY_FAX_STAT_DEBUG
- printk(KERN_DEBUG "isdn_tty: FMDL?\n");
-#endif
- isdn_tty_at_cout("\r\nisdn4linux", info);
- return 0;
- }
- /* MFR? - DCE Manufacturer */
- if (!strncmp(p[0], "MFR?", 4)) {
- p[0] += 4;
-#ifdef ISDN_TTY_FAX_STAT_DEBUG
- printk(KERN_DEBUG "isdn_tty: FMFR?\n");
-#endif
- isdn_tty_at_cout("\r\nisdn4linux", info);
- return 0;
- }
- /* MINSP=n - Minimum Speed for Phase C */
- if (!strncmp(p[0], "MINSP", 5)) {
- p[0] += 5;
- switch (*p[0]) {
- case '?':
- p[0]++;
- sprintf(rs, "\r\n%d", f->minsp);
- isdn_tty_at_cout(rs, info);
- break;
- case '=':
- p[0]++;
- if (*p[0] == '?') {
- p[0]++;
- sprintf(rs, "\r\n0-5");
- isdn_tty_at_cout(rs, info);
- } else {
- par = isdn_getnum(p);
- if ((par < 0) || (par > 5))
- PARSE_ERROR1;
- f->minsp = par;
-#ifdef ISDN_TTY_FAX_STAT_DEBUG
- printk(KERN_DEBUG "isdn_tty: Fax FMINSP=%d\n", par);
-#endif
- }
- break;
- default:
- PARSE_ERROR1;
- }
- return 0;
- }
- /* PHCTO=value - DTE phase C timeout */
- if (!strncmp(p[0], "PHCTO", 5)) {
- p[0] += 5;
- switch (*p[0]) {
- case '?':
- p[0]++;
- sprintf(rs, "\r\n%d", f->phcto);
- isdn_tty_at_cout(rs, info);
- break;
- case '=':
- p[0]++;
- if (*p[0] == '?') {
- p[0]++;
- sprintf(rs, "\r\n0-255");
- isdn_tty_at_cout(rs, info);
- } else {
- par = isdn_getnum(p);
- if ((par < 0) || (par > 255))
- PARSE_ERROR1;
- f->phcto = par;
-#ifdef ISDN_TTY_FAX_STAT_DEBUG
- printk(KERN_DEBUG "isdn_tty: Fax FPHCTO=%d\n", par);
-#endif
- }
- break;
- default:
- PARSE_ERROR1;
- }
- return 0;
- }
-
- /* REL=n - Phase C received EOL alignment */
- if (!strncmp(p[0], "REL", 3)) {
- p[0] += 3;
- switch (*p[0]) {
- case '?':
- p[0]++;
- sprintf(rs, "\r\n%d", f->rel);
- isdn_tty_at_cout(rs, info);
- break;
- case '=':
- p[0]++;
- if (*p[0] == '?') {
- p[0]++;
- sprintf(rs, "\r\n0,1");
- isdn_tty_at_cout(rs, info);
- } else {
- par = isdn_getnum(p);
- if ((par < 0) || (par > 1))
- PARSE_ERROR1;
- f->rel = par;
-#ifdef ISDN_TTY_FAX_STAT_DEBUG
- printk(KERN_DEBUG "isdn_tty: Fax FREL=%d\n", par);
-#endif
- }
- break;
- default:
- PARSE_ERROR1;
- }
- return 0;
- }
- /* REV? - DCE Revision */
- if (!strncmp(p[0], "REV?", 4)) {
- p[0] += 4;
-#ifdef ISDN_TTY_FAX_STAT_DEBUG
- printk(KERN_DEBUG "isdn_tty: FREV?\n");
-#endif
- strcpy(rss, isdn_tty_fax_revision);
- sprintf(rs, "\r\nRev: %s", isdn_getrev(rss));
- isdn_tty_at_cout(rs, info);
- return 0;
- }
-
- /* Phase C Transmit Data Block Size */
- if (!strncmp(p[0], "TBC=", 4)) { /* dummy, not used */
- p[0] += 4;
-#ifdef ISDN_TTY_FAX_STAT_DEBUG
- printk(KERN_DEBUG "isdn_tty: Fax FTBC=%c\n", *p[0]);
-#endif
- switch (*p[0]) {
- case '0':
- p[0]++;
- break;
- default:
- PARSE_ERROR1;
- }
- return 0;
- }
- printk(KERN_DEBUG "isdn_tty: unknown token=>AT+F%s<\n", p[0]);
- PARSE_ERROR1;
-}
-
-int
-isdn_tty_cmd_PLUSF_FAX(char **p, modem_info *info)
-{
- if (TTY_IS_FCLASS2(info))
- return (isdn_tty_cmd_FCLASS2(p, info));
- else if (TTY_IS_FCLASS1(info))
- return (isdn_tty_cmd_FCLASS1(p, info));
- PARSE_ERROR1;
-}
diff --git a/drivers/isdn/i4l/isdn_ttyfax.h b/drivers/isdn/i4l/isdn_ttyfax.h
deleted file mode 100644
index ccda4fcf8f7b..000000000000
--- a/drivers/isdn/i4l/isdn_ttyfax.h
+++ /dev/null
@@ -1,17 +0,0 @@
-/* $Id: isdn_ttyfax.h,v 1.1.2.2 2004/01/12 22:37:19 keil Exp $
- *
- * header for Linux ISDN subsystem, tty_fax related functions (linklevel).
- *
- * Copyright 1999 by Armin Schindler (mac@melware.de)
- * Copyright 1999 by Ralf Spachmann (mel@melware.de)
- * Copyright 1999 by Cytronics & Melware
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-
-#define XON 0x11
-#define XOFF 0x13
-#define DC2 0x12
diff --git a/drivers/isdn/i4l/isdn_v110.c b/drivers/isdn/i4l/isdn_v110.c
deleted file mode 100644
index d11fe76f138f..000000000000
--- a/drivers/isdn/i4l/isdn_v110.c
+++ /dev/null
@@ -1,625 +0,0 @@
-/* $Id: isdn_v110.c,v 1.1.2.2 2004/01/12 22:37:19 keil Exp $
- *
- * Linux ISDN subsystem, V.110 related functions (linklevel).
- *
- * Copyright by Thomas Pfeiffer (pfeiffer@pds.de)
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#include <linux/string.h>
-#include <linux/kernel.h>
-#include <linux/slab.h>
-#include <linux/mm.h>
-#include <linux/delay.h>
-
-#include <linux/isdn.h>
-#include "isdn_v110.h"
-
-#undef ISDN_V110_DEBUG
-
-char *isdn_v110_revision = "$Revision: 1.1.2.2 $";
-
-#define V110_38400 255
-#define V110_19200 15
-#define V110_9600 3
-
-/*
- * The following data are precoded matrices: the online and offline matrices
- * for 9600, 19200 and 38400 bps, respectively.
- */
-static unsigned char V110_OnMatrix_9600[] =
-{0xfc, 0xfc, 0xfc, 0xfc, 0xff, 0xff, 0xff, 0xfd, 0xff, 0xff,
- 0xff, 0xfd, 0xff, 0xff, 0xff, 0xfd, 0xff, 0xff, 0xff, 0xfd,
- 0xfd, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xfd, 0xff, 0xff,
- 0xff, 0xfd, 0xff, 0xff, 0xff, 0xfd, 0xff, 0xff, 0xff, 0xfd};
-
-static unsigned char V110_OffMatrix_9600[] =
-{0xfc, 0xfc, 0xfc, 0xfc, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
- 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
- 0xfd, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
- 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff};
-
-static unsigned char V110_OnMatrix_19200[] =
-{0xf0, 0xf0, 0xff, 0xf7, 0xff, 0xf7, 0xff, 0xf7, 0xff, 0xf7,
- 0xfd, 0xff, 0xff, 0xf7, 0xff, 0xf7, 0xff, 0xf7, 0xff, 0xf7};
-
-static unsigned char V110_OffMatrix_19200[] =
-{0xf0, 0xf0, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
- 0xfd, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff};
-
-static unsigned char V110_OnMatrix_38400[] =
-{0x00, 0x7f, 0x7f, 0x7f, 0x7f, 0xfd, 0x7f, 0x7f, 0x7f, 0x7f};
-
-static unsigned char V110_OffMatrix_38400[] =
-{0x00, 0xff, 0xff, 0xff, 0xff, 0xfd, 0xff, 0xff, 0xff, 0xff};
-
-/*
- * FlipBits reorders sequences of keylen bits in one byte.
- * E.g. source bit order 76543210 will be converted to 45670123 when keylen = 4,
- * and to 67452301 when keylen = 2. This is necessary because the bit ordering
- * on the ISDN line is the other way around.
- */
-static inline unsigned char
-FlipBits(unsigned char c, int keylen)
-{
- unsigned char b = c;
- unsigned char bit = 128;
- int i;
- int j;
- int hunks = (8 / keylen);
-
- c = 0;
- for (i = 0; i < hunks; i++) {
- for (j = 0; j < keylen; j++) {
- if (b & (bit >> j))
- c |= bit >> (keylen - j - 1);
- }
- bit >>= keylen;
- }
- return c;
-}
-
-
-/* isdn_v110_open allocates and initializes private V.110 data
- * structures and returns a pointer to these.
- */
-static isdn_v110_stream *
-isdn_v110_open(unsigned char key, int hdrlen, int maxsize)
-{
- int i;
- isdn_v110_stream *v;
-
- if ((v = kzalloc(sizeof(isdn_v110_stream), GFP_ATOMIC)) == NULL)
- return NULL;
- v->key = key;
- v->nbits = 0;
- for (i = 0; key & (1 << i); i++)
- v->nbits++;
-
- v->nbytes = 8 / v->nbits;
- v->decodelen = 0;
-
- switch (key) {
- case V110_38400:
- v->OnlineFrame = V110_OnMatrix_38400;
- v->OfflineFrame = V110_OffMatrix_38400;
- break;
- case V110_19200:
- v->OnlineFrame = V110_OnMatrix_19200;
- v->OfflineFrame = V110_OffMatrix_19200;
- break;
- default:
- v->OnlineFrame = V110_OnMatrix_9600;
- v->OfflineFrame = V110_OffMatrix_9600;
- break;
- }
- v->framelen = v->nbytes * 10;
- v->SyncInit = 5;
- v->introducer = 0;
- v->dbit = 1;
- v->b = 0;
- v->skbres = hdrlen;
- v->maxsize = maxsize - hdrlen;
- if ((v->encodebuf = kmalloc(maxsize, GFP_ATOMIC)) == NULL) {
- kfree(v);
- return NULL;
- }
- return v;
-}
-
-/* isdn_v110_close frees private V.110 data structures */
-void
-isdn_v110_close(isdn_v110_stream *v)
-{
- if (v == NULL)
- return;
-#ifdef ISDN_V110_DEBUG
- printk(KERN_DEBUG "v110 close\n");
-#endif
- kfree(v->encodebuf);
- kfree(v);
-}
-
-
-/*
- * ValidHeaderBytes returns the number of valid bytes in v->decodebuf
- */
-static int
-ValidHeaderBytes(isdn_v110_stream *v)
-{
- int i;
- for (i = 0; (i < v->decodelen) && (i < v->nbytes); i++)
- if ((v->decodebuf[i] & v->key) != 0)
- break;
- return i;
-}
-
-/*
- * SyncHeader moves the decodebuf ptr to the next valid header
- */
-static void
-SyncHeader(isdn_v110_stream *v)
-{
- unsigned char *rbuf = v->decodebuf;
- int len = v->decodelen;
-
- if (len == 0)
- return;
- for (rbuf++, len--; len > 0; len--, rbuf++) /* search for the sync header in buf! */
- if ((*rbuf & v->key) == 0) /* first byte found? */
- break; /* yes! */
- if (len)
- memcpy(v->decodebuf, rbuf, len);
-
- v->decodelen = len;
-#ifdef ISDN_V110_DEBUG
- printk(KERN_DEBUG "isdn_v110: Header resync\n");
-#endif
-}
-
-/* DecodeMatrix takes n (n>=1) matrices (v110 frames, 10 bytes) where
- len is the number of matrix lines. len must be a multiple of 10, i.e.
- only complete matrices may be given.
- From these, the net payload data is extracted and returned in buf. The return
- value is the byte count of the decoded data.
-*/
-static int
-DecodeMatrix(isdn_v110_stream *v, unsigned char *m, int len, unsigned char *buf)
-{
- int line = 0;
- int buflen = 0;
- int mbit = 64;
- int introducer = v->introducer;
- int dbit = v->dbit;
- unsigned char b = v->b;
-
- while (line < len) { /* Are we done with all lines of the matrix? */
- if ((line % 10) == 0) { /* the 0th line of the matrix is always 0! */
- if (m[line] != 0x00) { /* not 0 ? -> error! */
-#ifdef ISDN_V110_DEBUG
- printk(KERN_DEBUG "isdn_v110: DecodeMatrix, V110 Bad Header\n");
- /* returning now is not the right thing, though :-( */
-#endif
- }
- line++; /* next line of matrix */
- continue;
- } else if ((line % 10) == 5) { /* line 5 contains only E-bits! */
- if ((m[line] & 0x70) != 0x30) { /* 011 has to be at the beginning! */
-#ifdef ISDN_V110_DEBUG
- printk(KERN_DEBUG "isdn_v110: DecodeMatrix, V110 Bad 5th line\n");
- /* returning now is not the right thing, though :-( */
-#endif
- }
- line++; /* next line */
- continue;
- } else if (!introducer) { /* every byte starts with 10 (stopbit, startbit) */
- introducer = (m[line] & mbit) ? 0 : 1; /* current bit of the matrix */
- next_byte:
- if (mbit > 2) { /* was it the last bit in this line ? */
- mbit >>= 1; /* no -> take next */
- continue;
- } /* otherwise start with leftmost bit in the next line */
- mbit = 64;
- line++;
- continue;
- } else { /* otherwise we need to set a data bit */
- if (m[line] & mbit) /* was that bit set in the matrix ? */
- b |= dbit; /* yes -> set it in the data byte */
- else
- b &= dbit - 1; /* no -> clear it in the data byte */
- if (dbit < 128) /* is that data byte done ? */
- dbit <<= 1; /* no, get the next bit */
- else { /* data byte is done */
- buf[buflen++] = b; /* copy byte into the output buffer */
- introducer = b = 0; /* init of the intro sequence and of the data byte */
- dbit = 1; /* next we look for the 0th bit */
- }
- goto next_byte; /* look for next bit in the matrix */
- }
- }
- v->introducer = introducer;
- v->dbit = dbit;
- v->b = b;
- return buflen; /* return number of bytes in the output buffer */
-}
-
-/*
- * DecodeStream receives V.110 coded data from the input stream. It recovers the
- * original frames.
- * The input stream doesn't need to be framed
- */
-struct sk_buff *
-isdn_v110_decode(isdn_v110_stream *v, struct sk_buff *skb)
-{
- int i;
- int j;
- int len;
- unsigned char *v110_buf;
- unsigned char *rbuf;
-
- if (!skb) {
- printk(KERN_WARNING "isdn_v110_decode called with NULL skb!\n");
- return NULL;
- }
- rbuf = skb->data;
- len = skb->len;
- if (v == NULL) {
- /* invalid handle, no chance to proceed */
- printk(KERN_WARNING "isdn_v110_decode called with NULL stream!\n");
- dev_kfree_skb(skb);
- return NULL;
- }
- if (v->decodelen == 0) /* cache empty? */
- for (; len > 0; len--, rbuf++) /* scan for SyncHeader in buf */
- if ((*rbuf & v->key) == 0)
- break; /* found first byte */
- if (len == 0) {
- dev_kfree_skb(skb);
- return NULL;
- }
- /* copy new data to decode-buffer */
- memcpy(&(v->decodebuf[v->decodelen]), rbuf, len);
- v->decodelen += len;
-ReSync:
- if (v->decodelen < v->nbytes) { /* got a new header ? */
- dev_kfree_skb(skb);
- return NULL; /* no, try later */
- }
- if (ValidHeaderBytes(v) != v->nbytes) { /* is that a valid header? */
- SyncHeader(v); /* no -> look for header */
- goto ReSync;
- }
- len = (v->decodelen - (v->decodelen % (10 * v->nbytes))) / v->nbytes;
- if ((v110_buf = kmalloc(len, GFP_ATOMIC)) == NULL) {
- printk(KERN_WARNING "isdn_v110_decode: Couldn't allocate v110_buf\n");
- dev_kfree_skb(skb);
- return NULL;
- }
- for (i = 0; i < len; i++) {
- v110_buf[i] = 0;
- for (j = 0; j < v->nbytes; j++)
- v110_buf[i] |= (v->decodebuf[(i * v->nbytes) + j] & v->key) << (8 - ((j + 1) * v->nbits));
- v110_buf[i] = FlipBits(v110_buf[i], v->nbits);
- }
- v->decodelen = (v->decodelen % (10 * v->nbytes));
- memcpy(v->decodebuf, &(v->decodebuf[len * v->nbytes]), v->decodelen);
-
- skb_trim(skb, DecodeMatrix(v, v110_buf, len, skb->data));
- kfree(v110_buf);
- if (skb->len)
- return skb;
- else {
- kfree_skb(skb);
- return NULL;
- }
-}
-
-/* EncodeMatrix takes input data in buf, len is the byte count.
- Data is encoded into V.110 frames in m. The return value is the number of
- matrix lines generated.
-*/
-static int
-EncodeMatrix(unsigned char *buf, int len, unsigned char *m, int mlen)
-{
- int line = 0;
- int i = 0;
- int mbit = 128;
- int dbit = 1;
- int introducer = 3;
- int ibit[] = {0, 1, 1};
-
- while ((i < len) && (line < mlen)) { /* while we still have input data */
- switch (line % 10) { /* in which line of the matrix are we? */
- case 0:
- m[line++] = 0x00; /* line 0 is always 0 */
- mbit = 128; /* go on with the 7th bit */
- break;
- case 5:
- m[line++] = 0xbf; /* line 5 is always 10111111 */
- mbit = 128; /* go on with the 7th bit */
- break;
- }
- if (line >= mlen) {
- printk(KERN_WARNING "isdn_v110 (EncodeMatrix): buffer full!\n");
- return line;
- }
- next_bit:
- switch (mbit) { /* leftmost or rightmost bit ? */
- case 1:
- line++; /* rightmost -> go to next line */
- if (line >= mlen) {
- printk(KERN_WARNING "isdn_v110 (EncodeMatrix): buffer full!\n");
- return line;
- }
- /* fall through */
- case 128:
- m[line] = 128; /* leftmost -> set byte to 10000000 */
- mbit = 64; /* current bit in the matrix line */
- continue;
- }
- if (introducer) { /* set 110 sequence ? */
- introducer--; /* one digit less to set */
- m[line] |= ibit[introducer] ? mbit : 0; /* set corresponding bit */
- mbit >>= 1; /* bit of matrix line >> 1 */
- goto next_bit; /* and go on there */
- } /* else push data bits into the matrix! */
- m[line] |= (buf[i] & dbit) ? mbit : 0; /* set data bit in matrix */
- if (dbit == 128) { /* was it the last one? */
- dbit = 1; /* then go on with first bit of */
- i++; /* next byte in input buffer */
- if (i < len) /* input buffer done ? */
- introducer = 3; /* no, write introducer 110 */
- else { /* input buffer done ! */
- m[line] |= (mbit - 1) & 0xfe; /* set remaining bits in line to 1 */
- break;
- }
- } else /* not the last data bit */
- dbit <<= 1; /* then go to next data bit */
- mbit >>= 1; /* go to next bit of matrix */
- goto next_bit;
-
- }
- /* if necessary, generate remaining lines of the matrix... */
- if ((line) && ((line + 10) < mlen))
- switch (++line % 10) {
- case 1:
- m[line++] = 0xfe;
- /* fall through */
- case 2:
- m[line++] = 0xfe;
- /* fall through */
- case 3:
- m[line++] = 0xfe;
- /* fall through */
- case 4:
- m[line++] = 0xfe;
- /* fall through */
- case 5:
- m[line++] = 0xbf;
- /* fall through */
- case 6:
- m[line++] = 0xfe;
- /* fall through */
- case 7:
- m[line++] = 0xfe;
- /* fall through */
- case 8:
- m[line++] = 0xfe;
- /* fall through */
- case 9:
- m[line++] = 0xfe;
- }
- return line; /* that's how many lines we have */
-}
-
-/*
- * Build a sync frame.
- */
-static struct sk_buff *
-isdn_v110_sync(isdn_v110_stream *v)
-{
- struct sk_buff *skb;
-
- if (v == NULL) {
- /* invalid handle, no chance to proceed */
- printk(KERN_WARNING "isdn_v110_sync called with NULL stream!\n");
- return NULL;
- }
- if ((skb = dev_alloc_skb(v->framelen + v->skbres))) {
- skb_reserve(skb, v->skbres);
- skb_put_data(skb, v->OfflineFrame, v->framelen);
- }
- return skb;
-}
-
-/*
- * Build an idle frame.
- */
-static struct sk_buff *
-isdn_v110_idle(isdn_v110_stream *v)
-{
- struct sk_buff *skb;
-
- if (v == NULL) {
- /* invalid handle, no chance to proceed */
- printk(KERN_WARNING "isdn_v110_idle called with NULL stream!\n");
- return NULL;
- }
- if ((skb = dev_alloc_skb(v->framelen + v->skbres))) {
- skb_reserve(skb, v->skbres);
- skb_put_data(skb, v->OnlineFrame, v->framelen);
- }
- return skb;
-}
-
-struct sk_buff *
-isdn_v110_encode(isdn_v110_stream *v, struct sk_buff *skb)
-{
- int i;
- int j;
- int rlen;
- int mlen;
- int olen;
- int size;
- int sval1;
- int sval2;
- int nframes;
- unsigned char *v110buf;
- unsigned char *rbuf;
- struct sk_buff *nskb;
-
- if (v == NULL) {
- /* invalid handle, no chance to proceed */
- printk(KERN_WARNING "isdn_v110_encode called with NULL stream!\n");
- return NULL;
- }
- if (!skb) {
- /* invalid skb, no chance to proceed */
- printk(KERN_WARNING "isdn_v110_encode called with NULL skb!\n");
- return NULL;
- }
- rlen = skb->len;
- nframes = (rlen + 3) / 4;
- v110buf = v->encodebuf;
- if ((nframes * 40) > v->maxsize) {
- size = v->maxsize;
- rlen = v->maxsize / 40;
- } else
- size = nframes * 40;
- if (!(nskb = dev_alloc_skb(size + v->skbres + sizeof(int)))) {
- printk(KERN_WARNING "isdn_v110_encode: Couldn't alloc skb\n");
- return NULL;
- }
- skb_reserve(nskb, v->skbres + sizeof(int));
- if (skb->len == 0) {
- skb_put_data(nskb, v->OnlineFrame, v->framelen);
- *((int *)skb_push(nskb, sizeof(int))) = 0;
- return nskb;
- }
- mlen = EncodeMatrix(skb->data, rlen, v110buf, size);
- /* now distribute 2 or 4 bits each to the output stream! */
- rbuf = skb_put(nskb, size);
- olen = 0;
- sval1 = 8 - v->nbits;
- sval2 = v->key << sval1;
- for (i = 0; i < mlen; i++) {
- v110buf[i] = FlipBits(v110buf[i], v->nbits);
- for (j = 0; j < v->nbytes; j++) {
- if (size--)
- *rbuf++ = ~v->key | (((v110buf[i] << (j * v->nbits)) & sval2) >> sval1);
- else {
- printk(KERN_WARNING "isdn_v110_encode: buffers full!\n");
- goto buffer_full;
- }
- olen++;
- }
- }
-buffer_full:
- skb_trim(nskb, olen);
- *((int *)skb_push(nskb, sizeof(int))) = rlen;
- return nskb;
-}
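/*
 * A minimal standalone sketch (not part of the driver) of the
 * bit-distribution loop above: each encoded byte is spread over v->nbytes
 * output bytes carrying v->nbits payload bits each, with the remaining bit
 * positions forced to 1.  The values nbits == 2, nbytes == 4 and
 * key == 0x03 are assumptions chosen for illustration; the real values
 * depend on the configured V.110 rate.
 */
#include <stdio.h>

int main(void)
{
	unsigned char in = 0xb4;	/* arbitrary example input byte     */
	unsigned char key = 0x03;	/* assumed: mask of the nbits LSBs  */
	int nbits = 2, nbytes = 4;
	int sval1 = 8 - nbits;		/* shift moving a chunk to the LSBs */
	int sval2 = key << sval1;	/* mask selecting the topmost chunk */
	int j;

	for (j = 0; j < nbytes; j++) {
		unsigned char out = ~key |
			(((in << (j * nbits)) & sval2) >> sval1);
		printf("chunk %d -> 0x%02x\n", j, out);
	}
	return 0;	/* prints 0xfe 0xff 0xfd 0xfc for input 0xb4 */
}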
-
-int
-isdn_v110_stat_callback(int idx, isdn_ctrl *c)
-{
- isdn_v110_stream *v = NULL;
- int i;
- int ret = 0;
-
- if (idx < 0)
- return 0;
- switch (c->command) {
- case ISDN_STAT_BSENT:
- /* Keep the send-queue of the driver filled
- * with frames:
- * If number of outstanding frames < 3,
-		 * send down an Idle-Frame (or a Sync-Frame, if
- * v->SyncInit != 0).
- */
- if (!(v = dev->v110[idx]))
- return 0;
- atomic_inc(&dev->v110use[idx]);
- for (i = 0; i * v->framelen < c->parm.length; i++) {
- if (v->skbidle > 0) {
- v->skbidle--;
- ret = 1;
- } else {
- if (v->skbuser > 0)
- v->skbuser--;
- ret = 0;
- }
- }
- for (i = v->skbuser + v->skbidle; i < 2; i++) {
- struct sk_buff *skb;
- if (v->SyncInit > 0)
- skb = isdn_v110_sync(v);
- else
- skb = isdn_v110_idle(v);
- if (skb) {
- if (dev->drv[c->driver]->interface->writebuf_skb(c->driver, c->arg, 1, skb) <= 0) {
- dev_kfree_skb(skb);
- break;
- } else {
- if (v->SyncInit)
- v->SyncInit--;
- v->skbidle++;
- }
- } else
- break;
- }
- atomic_dec(&dev->v110use[idx]);
- return ret;
- case ISDN_STAT_DHUP:
- case ISDN_STAT_BHUP:
- while (1) {
- atomic_inc(&dev->v110use[idx]);
- if (atomic_dec_and_test(&dev->v110use[idx])) {
- isdn_v110_close(dev->v110[idx]);
- dev->v110[idx] = NULL;
- break;
- }
- mdelay(1);
- }
- break;
- case ISDN_STAT_BCONN:
- if (dev->v110emu[idx] && (dev->v110[idx] == NULL)) {
- int hdrlen = dev->drv[c->driver]->interface->hl_hdrlen;
- int maxsize = dev->drv[c->driver]->interface->maxbufsize;
- atomic_inc(&dev->v110use[idx]);
- switch (dev->v110emu[idx]) {
- case ISDN_PROTO_L2_V11096:
- dev->v110[idx] = isdn_v110_open(V110_9600, hdrlen, maxsize);
- break;
- case ISDN_PROTO_L2_V11019:
- dev->v110[idx] = isdn_v110_open(V110_19200, hdrlen, maxsize);
- break;
- case ISDN_PROTO_L2_V11038:
- dev->v110[idx] = isdn_v110_open(V110_38400, hdrlen, maxsize);
- break;
- default:;
- }
- if ((v = dev->v110[idx])) {
- while (v->SyncInit) {
- struct sk_buff *skb = isdn_v110_sync(v);
- if (dev->drv[c->driver]->interface->writebuf_skb(c->driver, c->arg, 1, skb) <= 0) {
- dev_kfree_skb(skb);
- /* Unable to send, try later */
- break;
- }
- v->SyncInit--;
- v->skbidle++;
- }
- } else
- printk(KERN_WARNING "isdn_v110: Couldn't open stream for chan %d\n", idx);
- atomic_dec(&dev->v110use[idx]);
- }
- break;
- default:
- return 0;
- }
- return 0;
-}
diff --git a/drivers/isdn/i4l/isdn_v110.h b/drivers/isdn/i4l/isdn_v110.h
deleted file mode 100644
index de774ab598c9..000000000000
--- a/drivers/isdn/i4l/isdn_v110.h
+++ /dev/null
@@ -1,29 +0,0 @@
-/* $Id: isdn_v110.h,v 1.1.2.2 2004/01/12 22:37:19 keil Exp $
- *
- * Linux ISDN subsystem, V.110 related functions (linklevel).
- *
- * Copyright by Thomas Pfeiffer (pfeiffer@pds.de)
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#ifndef _isdn_v110_h_
-#define _isdn_v110_h_
-
-/*
- * isdn_v110_encode will take raw data and encode it using V.110
- */
-extern struct sk_buff *isdn_v110_encode(isdn_v110_stream *, struct sk_buff *);
-
-/*
- * isdn_v110_decode receives V.110 coded data from the stream and rebuilds
- * frames from them. The source stream doesn't need to be framed.
- */
-extern struct sk_buff *isdn_v110_decode(isdn_v110_stream *, struct sk_buff *);
-
-extern int isdn_v110_stat_callback(int, isdn_ctrl *);
-extern void isdn_v110_close(isdn_v110_stream *v);
-
-#endif
diff --git a/drivers/isdn/i4l/isdn_x25iface.c b/drivers/isdn/i4l/isdn_x25iface.c
deleted file mode 100644
index 48bfbcb4a09d..000000000000
--- a/drivers/isdn/i4l/isdn_x25iface.c
+++ /dev/null
@@ -1,332 +0,0 @@
-/* $Id: isdn_x25iface.c,v 1.1.2.2 2004/01/12 22:37:19 keil Exp $
- *
- * Linux ISDN subsystem, X.25 related functions
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- * stuff needed to support the Linux X.25 PLP code on top of devices that
- * can provide a lapb service using the concap_proto mechanism.
- * This module supports a network interface which provides lapb semantics
- * -- as defined in Documentation/networking/x25-iface.txt -- to
- * the upper layer and assumes that the lower layer provides a reliable
- * data link service by means of the concap_device_ops callbacks.
- *
- * Only protocol specific stuff goes here. Device specific stuff
- * goes to another -- device related -- concap_proto support source file.
- *
- */
-
-/* #include <linux/isdn.h> */
-#include <linux/netdevice.h>
-#include <linux/concap.h>
-#include <linux/slab.h>
-#include <linux/wanrouter.h>
-#include <net/x25device.h>
-#include "isdn_x25iface.h"
-
-/* for debugging messages not to cause an oops when device pointer is NULL*/
-#define MY_DEVNAME(dev) ((dev) ? (dev)->name : "DEVICE UNSPECIFIED")
-
-
-typedef struct isdn_x25iface_proto_data {
- int magic;
- enum wan_states state;
- /* Private stuff, not to be accessed via proto_data. We provide the
- other storage for the concap_proto instance here as well,
- enabling us to allocate both with just one kmalloc(): */
- struct concap_proto priv;
-} ix25_pdata_t;
-
-
-
-/* is now in header file (extern): struct concap_proto * isdn_x25iface_proto_new(void); */
-static void isdn_x25iface_proto_del(struct concap_proto *);
-static int isdn_x25iface_proto_close(struct concap_proto *);
-static int isdn_x25iface_proto_restart(struct concap_proto *,
- struct net_device *,
- struct concap_device_ops *);
-static int isdn_x25iface_xmit(struct concap_proto *, struct sk_buff *);
-static int isdn_x25iface_receive(struct concap_proto *, struct sk_buff *);
-static int isdn_x25iface_connect_ind(struct concap_proto *);
-static int isdn_x25iface_disconn_ind(struct concap_proto *);
-
-
-static struct concap_proto_ops ix25_pops = {
- .proto_new = &isdn_x25iface_proto_new,
- .proto_del = &isdn_x25iface_proto_del,
- .restart = &isdn_x25iface_proto_restart,
- .close = &isdn_x25iface_proto_close,
- .encap_and_xmit = &isdn_x25iface_xmit,
- .data_ind = &isdn_x25iface_receive,
- .connect_ind = &isdn_x25iface_connect_ind,
- .disconn_ind = &isdn_x25iface_disconn_ind
-};
-
-/* error message helper function */
-static void illegal_state_warn(unsigned state, unsigned char firstbyte)
-{
-	printk(KERN_WARNING "isdn_x25iface: firstbyte %x illegal in "
- "current state %d\n", firstbyte, state);
-}
-
-/* check protocol data field for consistency */
-static int pdata_is_bad(ix25_pdata_t *pda) {
-
- if (pda && pda->magic == ISDN_X25IFACE_MAGIC) return 0;
- printk(KERN_WARNING
- "isdn_x25iface_xxx: illegal pointer to proto data\n");
- return 1;
-}
-
-/* create a new x25 interface protocol instance
- */
-struct concap_proto *isdn_x25iface_proto_new(void)
-{
- ix25_pdata_t *tmp = kmalloc(sizeof(ix25_pdata_t), GFP_KERNEL);
- IX25DEBUG("isdn_x25iface_proto_new\n");
- if (tmp) {
- tmp->magic = ISDN_X25IFACE_MAGIC;
- tmp->state = WAN_UNCONFIGURED;
- /* private data space used to hold the concap_proto data.
- Only to be accessed via the returned pointer */
- spin_lock_init(&tmp->priv.lock);
- tmp->priv.dops = NULL;
- tmp->priv.net_dev = NULL;
- tmp->priv.pops = &ix25_pops;
- tmp->priv.flags = 0;
- tmp->priv.proto_data = tmp;
- return (&(tmp->priv));
- }
- return NULL;
-};
-
-/* close the x25iface encapsulation protocol
- */
-static int isdn_x25iface_proto_close(struct concap_proto *cprot) {
-
- ix25_pdata_t *tmp;
- int ret = 0;
- ulong flags;
-
- if (!cprot) {
- printk(KERN_ERR "isdn_x25iface_proto_close: "
- "invalid concap_proto pointer\n");
- return -1;
- }
- IX25DEBUG("isdn_x25iface_proto_close %s \n", MY_DEVNAME(cprot->net_dev));
- spin_lock_irqsave(&cprot->lock, flags);
- cprot->dops = NULL;
- cprot->net_dev = NULL;
- tmp = cprot->proto_data;
- if (pdata_is_bad(tmp)) {
- ret = -1;
- } else {
- tmp->state = WAN_UNCONFIGURED;
- }
- spin_unlock_irqrestore(&cprot->lock, flags);
- return ret;
-}
-
-/* Delete the x25iface encapsulation protocol instance
- */
-static void isdn_x25iface_proto_del(struct concap_proto *cprot) {
-
- ix25_pdata_t *tmp;
-
- IX25DEBUG("isdn_x25iface_proto_del \n");
- if (!cprot) {
- printk(KERN_ERR "isdn_x25iface_proto_del: "
- "concap_proto pointer is NULL\n");
- return;
- }
- tmp = cprot->proto_data;
- if (tmp == NULL) {
- printk(KERN_ERR "isdn_x25iface_proto_del: inconsistent "
- "proto_data pointer (maybe already deleted?)\n");
- return;
- }
- /* close if the protocol is still open */
- if (cprot->dops) isdn_x25iface_proto_close(cprot);
- /* freeing the storage should be sufficient now. But some additional
- settings might help to catch wild pointer bugs */
- tmp->magic = 0;
- cprot->proto_data = NULL;
-
- kfree(tmp);
- return;
-}
-
-/* (re-)initialize the data structures for x25iface encapsulation
- */
-static int isdn_x25iface_proto_restart(struct concap_proto *cprot,
- struct net_device *ndev,
- struct concap_device_ops *dops)
-{
- ix25_pdata_t *pda = cprot->proto_data;
- ulong flags;
-
- IX25DEBUG("isdn_x25iface_proto_restart %s \n", MY_DEVNAME(ndev));
-
- if (pdata_is_bad(pda)) return -1;
-
- if (!(dops && dops->data_req && dops->connect_req
- && dops->disconn_req)) {
- printk(KERN_WARNING "isdn_x25iface_restart: required dops"
- " missing\n");
- isdn_x25iface_proto_close(cprot);
- return -1;
- }
- spin_lock_irqsave(&cprot->lock, flags);
- cprot->net_dev = ndev;
- cprot->pops = &ix25_pops;
- cprot->dops = dops;
- pda->state = WAN_DISCONNECTED;
- spin_unlock_irqrestore(&cprot->lock, flags);
- return 0;
-}
-
-/* deliver a dl_data frame received from i4l HL driver to the network layer
- */
-static int isdn_x25iface_receive(struct concap_proto *cprot, struct sk_buff *skb)
-{
- IX25DEBUG("isdn_x25iface_receive %s \n", MY_DEVNAME(cprot->net_dev));
- if (((ix25_pdata_t *)(cprot->proto_data))
- ->state == WAN_CONNECTED) {
- if (skb_push(skb, 1)) {
- skb->data[0] = X25_IFACE_DATA;
- skb->protocol = x25_type_trans(skb, cprot->net_dev);
- netif_rx(skb);
- return 0;
- }
- }
- printk(KERN_WARNING "isdn_x25iface_receive %s: not connected, skb dropped\n", MY_DEVNAME(cprot->net_dev));
- dev_kfree_skb(skb);
- return -1;
-}
-
-/* a connection set up is indicated by lower layer
- */
-static int isdn_x25iface_connect_ind(struct concap_proto *cprot)
-{
- struct sk_buff *skb;
- enum wan_states *state_p
- = &(((ix25_pdata_t *)(cprot->proto_data))->state);
- IX25DEBUG("isdn_x25iface_connect_ind %s \n"
- , MY_DEVNAME(cprot->net_dev));
- if (*state_p == WAN_UNCONFIGURED) {
- printk(KERN_WARNING
- "isdn_x25iface_connect_ind while unconfigured %s\n"
- , MY_DEVNAME(cprot->net_dev));
- return -1;
- }
- *state_p = WAN_CONNECTED;
-
- skb = dev_alloc_skb(1);
- if (skb) {
- skb_put_u8(skb, X25_IFACE_CONNECT);
- skb->protocol = x25_type_trans(skb, cprot->net_dev);
- netif_rx(skb);
- return 0;
- } else {
- printk(KERN_WARNING "isdn_x25iface_connect_ind: "
- " out of memory -- disconnecting\n");
- cprot->dops->disconn_req(cprot);
- return -1;
- }
-}
-
-/* a disconnect is indicated by lower layer
- */
-static int isdn_x25iface_disconn_ind(struct concap_proto *cprot)
-{
- struct sk_buff *skb;
- enum wan_states *state_p
- = &(((ix25_pdata_t *)(cprot->proto_data))->state);
- IX25DEBUG("isdn_x25iface_disconn_ind %s \n", MY_DEVNAME(cprot->net_dev));
- if (*state_p == WAN_UNCONFIGURED) {
- printk(KERN_WARNING
- "isdn_x25iface_disconn_ind while unconfigured\n");
- return -1;
- }
- if (!cprot->net_dev) return -1;
- *state_p = WAN_DISCONNECTED;
- skb = dev_alloc_skb(1);
- if (skb) {
- skb_put_u8(skb, X25_IFACE_DISCONNECT);
- skb->protocol = x25_type_trans(skb, cprot->net_dev);
- netif_rx(skb);
- return 0;
- } else {
- printk(KERN_WARNING "isdn_x25iface_disconn_ind:"
- " out of memory\n");
- return -1;
- }
-}
-
-/* process a frame handed over to us from linux network layer. First byte
- semantics as defined in Documentation/networking/x25-iface.txt
-*/
-static int isdn_x25iface_xmit(struct concap_proto *cprot, struct sk_buff *skb)
-{
- unsigned char firstbyte = skb->data[0];
- enum wan_states *state = &((ix25_pdata_t *)cprot->proto_data)->state;
- int ret = 0;
- IX25DEBUG("isdn_x25iface_xmit: %s first=%x state=%d\n",
- MY_DEVNAME(cprot->net_dev), firstbyte, *state);
- switch (firstbyte) {
- case X25_IFACE_DATA:
- if (*state == WAN_CONNECTED) {
- skb_pull(skb, 1);
- netif_trans_update(cprot->net_dev);
- ret = (cprot->dops->data_req(cprot, skb));
- /* prepare for future retransmissions */
- if (ret) skb_push(skb, 1);
- return ret;
- }
- illegal_state_warn(*state, firstbyte);
- break;
- case X25_IFACE_CONNECT:
- if (*state == WAN_DISCONNECTED) {
- *state = WAN_CONNECTING;
- ret = cprot->dops->connect_req(cprot);
- if (ret) {
- /* reset state and notify upper layer about
-				 * immediately failed attempts */
- isdn_x25iface_disconn_ind(cprot);
- }
- } else {
- illegal_state_warn(*state, firstbyte);
- }
- break;
- case X25_IFACE_DISCONNECT:
- switch (*state) {
- case WAN_DISCONNECTED:
- /* Should not happen. However, give upper layer a
-			   chance to recover from inconsistency but don't
- trust the lower layer sending the disconn_confirm
- when already disconnected */
- printk(KERN_WARNING "isdn_x25iface_xmit: disconnect "
- " requested while disconnected\n");
- isdn_x25iface_disconn_ind(cprot);
- break; /* prevent infinite loops */
- case WAN_CONNECTING:
- case WAN_CONNECTED:
- *state = WAN_DISCONNECTED;
- cprot->dops->disconn_req(cprot);
- break;
- default:
- illegal_state_warn(*state, firstbyte);
- }
- break;
- case X25_IFACE_PARAMS:
- printk(KERN_WARNING "isdn_x25iface_xmit: setting of lapb"
- " options not yet supported\n");
- break;
- default:
- printk(KERN_WARNING "isdn_x25iface_xmit: frame with illegal"
- " first byte %x ignored:\n", firstbyte);
- }
- dev_kfree_skb(skb);
- return 0;
-}
diff --git a/drivers/isdn/i4l/isdn_x25iface.h b/drivers/isdn/i4l/isdn_x25iface.h
deleted file mode 100644
index ca08e082cf7c..000000000000
--- a/drivers/isdn/i4l/isdn_x25iface.h
+++ /dev/null
@@ -1,30 +0,0 @@
-/* $Id: isdn_x25iface.h,v 1.1.2.2 2004/01/12 22:37:19 keil Exp $
- *
- * header for Linux ISDN subsystem, x.25 related functions
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#ifndef _LINUX_ISDN_X25IFACE_H
-#define _LINUX_ISDN_X25IFACE_H
-
-#define ISDN_X25IFACE_MAGIC 0x1e75a2b9
-/* #define DEBUG_ISDN_X25 if you want isdn_x25 debugging messages */
-#ifdef DEBUG_ISDN_X25
-# define IX25DEBUG(fmt, args...) printk(KERN_DEBUG fmt, ##args)
-#else
-# define IX25DEBUG(fmt, args...)
-#endif
-
-#include <linux/skbuff.h>
-#include <linux/isdn.h>
-#include <linux/concap.h>
-
-extern struct concap_proto_ops *isdn_x25iface_concap_proto_ops_pt;
-extern struct concap_proto *isdn_x25iface_proto_new(void);
-
-
-
-#endif
diff --git a/drivers/isdn/isdnloop/Makefile b/drivers/isdn/isdnloop/Makefile
deleted file mode 100644
index 5ff4c0e09768..000000000000
--- a/drivers/isdn/isdnloop/Makefile
+++ /dev/null
@@ -1,6 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0-only
-# Makefile for the isdnloop ISDN device driver
-
-# Each configuration option enables a list of files.
-
-obj-$(CONFIG_ISDN_DRV_LOOP) += isdnloop.o
diff --git a/drivers/isdn/isdnloop/isdnloop.c b/drivers/isdn/isdnloop/isdnloop.c
deleted file mode 100644
index 755c6bbc9553..000000000000
--- a/drivers/isdn/isdnloop/isdnloop.c
+++ /dev/null
@@ -1,1528 +0,0 @@
-/* $Id: isdnloop.c,v 1.11.6.7 2001/11/11 19:54:31 kai Exp $
- *
- * ISDN low-level module implementing a dummy loop driver.
- *
- * Copyright 1997 by Fritz Elfert (fritz@isdn4linux.de)
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#include <linux/module.h>
-#include <linux/interrupt.h>
-#include <linux/slab.h>
-#include <linux/init.h>
-#include <linux/sched.h>
-#include "isdnloop.h"
-
-static char *isdnloop_id = "loop0";
-
-MODULE_DESCRIPTION("ISDN4Linux: Pseudo Driver that simulates an ISDN card");
-MODULE_AUTHOR("Fritz Elfert");
-MODULE_LICENSE("GPL");
-module_param(isdnloop_id, charp, 0);
-MODULE_PARM_DESC(isdnloop_id, "ID-String of first card");
-
-static int isdnloop_addcard(char *);
-
-/*
- * Free queue completely.
- *
- * Parameter:
- * card = pointer to card struct
- * channel = channel number
- */
-static void
-isdnloop_free_queue(isdnloop_card *card, int channel)
-{
- struct sk_buff_head *queue = &card->bqueue[channel];
-
- skb_queue_purge(queue);
- card->sndcount[channel] = 0;
-}
-
-/*
- * Send B-Channel data to another virtual card.
- * This routine is called via timer-callback from isdnloop_pollbchan().
- *
- * Parameter:
- * card = pointer to card struct.
- * ch = channel number (0-based)
- */
-static void
-isdnloop_bchan_send(isdnloop_card *card, int ch)
-{
- isdnloop_card *rcard = card->rcard[ch];
- int rch = card->rch[ch], len, ack;
- struct sk_buff *skb;
- isdn_ctrl cmd;
-
- while (card->sndcount[ch]) {
- skb = skb_dequeue(&card->bqueue[ch]);
- if (skb) {
- len = skb->len;
- card->sndcount[ch] -= len;
- ack = *(skb->head); /* used as scratch area */
- cmd.driver = card->myid;
- cmd.arg = ch;
- if (rcard) {
- rcard->interface.rcvcallb_skb(rcard->myid, rch, skb);
- } else {
- printk(KERN_WARNING "isdnloop: no rcard, skb dropped\n");
- dev_kfree_skb(skb);
-
- }
- cmd.command = ISDN_STAT_BSENT;
- cmd.parm.length = len;
- card->interface.statcallb(&cmd);
- } else
- card->sndcount[ch] = 0;
- }
-}
-
-/*
- * Send/Receive Data to/from the B-Channel.
- * This routine is called via timer-callback.
- * It schedules itself while any B-Channel is open.
- *
- * Parameter:
- *   t = timer that fired; the card struct is obtained via from_timer().
- */
-static void
-isdnloop_pollbchan(struct timer_list *t)
-{
- isdnloop_card *card = from_timer(card, t, rb_timer);
- unsigned long flags;
-
- if (card->flags & ISDNLOOP_FLAGS_B1ACTIVE)
- isdnloop_bchan_send(card, 0);
- if (card->flags & ISDNLOOP_FLAGS_B2ACTIVE)
- isdnloop_bchan_send(card, 1);
- if (card->flags & (ISDNLOOP_FLAGS_B1ACTIVE | ISDNLOOP_FLAGS_B2ACTIVE)) {
- /* schedule b-channel polling again */
- spin_lock_irqsave(&card->isdnloop_lock, flags);
- card->rb_timer.expires = jiffies + ISDNLOOP_TIMER_BCREAD;
- add_timer(&card->rb_timer);
- card->flags |= ISDNLOOP_FLAGS_RBTIMER;
- spin_unlock_irqrestore(&card->isdnloop_lock, flags);
- } else
- card->flags &= ~ISDNLOOP_FLAGS_RBTIMER;
-}
-
-/*
- * Parse ICN-type setup string and fill fields of setup-struct
- * with parsed data.
- *
- * Parameter:
- * setup = setup string, format: [caller-id],si1,si2,[called-id]
- * cmd = pointer to struct to be filled.
- */
-static void
-isdnloop_parse_setup(char *setup, isdn_ctrl *cmd)
-{
- char *t = setup;
- char *s = strchr(t, ',');
-
- *s++ = '\0';
- strlcpy(cmd->parm.setup.phone, t, sizeof(cmd->parm.setup.phone));
- s = strchr(t = s, ',');
- *s++ = '\0';
- if (!strlen(t))
- cmd->parm.setup.si1 = 0;
- else
- cmd->parm.setup.si1 = simple_strtoul(t, NULL, 10);
- s = strchr(t = s, ',');
- *s++ = '\0';
- if (!strlen(t))
- cmd->parm.setup.si2 = 0;
- else
- cmd->parm.setup.si2 =
- simple_strtoul(t, NULL, 10);
- strlcpy(cmd->parm.setup.eazmsn, s, sizeof(cmd->parm.setup.eazmsn));
- cmd->parm.setup.plan = 0;
- cmd->parm.setup.screen = 0;
-}
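/*
 * A standalone sketch (not part of the driver) of the splitting done above:
 * the ICN setup string has the form "[caller-id],si1,si2,[called-id]", and
 * empty si fields are treated as 0.  The sample string is made up, and,
 * like the original, there is no guard against malformed input.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	char buf[] = "0123456,7,0,654321";	/* hypothetical setup string */
	char *t = buf, *s;
	const char *caller, *called;
	int si1, si2;

	s = strchr(t, ',');
	*s++ = '\0';				/* split off caller id        */
	caller = t;
	s = strchr(t = s, ',');
	*s++ = '\0';				/* service indicator 1        */
	si1 = *t ? atoi(t) : 0;
	s = strchr(t = s, ',');
	*s++ = '\0';				/* service indicator 2        */
	si2 = *t ? atoi(t) : 0;
	called = s;				/* remainder is the called id */

	printf("caller=%s si1=%d si2=%d called=%s\n", caller, si1, si2, called);
	return 0;
}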
-
-typedef struct isdnloop_stat {
- char *statstr;
- int command;
- int action;
-} isdnloop_stat;
-/* *INDENT-OFF* */
-static isdnloop_stat isdnloop_stat_table[] = {
- {"BCON_", ISDN_STAT_BCONN, 1}, /* B-Channel connected */
- {"BDIS_", ISDN_STAT_BHUP, 2}, /* B-Channel disconnected */
- {"DCON_", ISDN_STAT_DCONN, 0}, /* D-Channel connected */
- {"DDIS_", ISDN_STAT_DHUP, 0}, /* D-Channel disconnected */
- {"DCAL_I", ISDN_STAT_ICALL, 3}, /* Incoming call dialup-line */
- {"DSCA_I", ISDN_STAT_ICALL, 3}, /* Incoming call 1TR6-SPV */
- {"FCALL", ISDN_STAT_ICALL, 4}, /* Leased line connection up */
- {"CIF", ISDN_STAT_CINF, 5}, /* Charge-info, 1TR6-type */
- {"AOC", ISDN_STAT_CINF, 6}, /* Charge-info, DSS1-type */
- {"CAU", ISDN_STAT_CAUSE, 7}, /* Cause code */
- {"TEI OK", ISDN_STAT_RUN, 0}, /* Card connected to wallplug */
- {"E_L1: ACT FAIL", ISDN_STAT_BHUP, 8}, /* Layer-1 activation failed */
- {"E_L2: DATA LIN", ISDN_STAT_BHUP, 8}, /* Layer-2 data link lost */
- {"E_L1: ACTIVATION FAILED",
- ISDN_STAT_BHUP, 8}, /* Layer-1 activation failed */
- {NULL, 0, -1}
-};
-/* *INDENT-ON* */
-
-
-/*
- * Parse Status message-strings from virtual card.
- * Depending on status, call statcallb for sending messages to upper
- * levels. Also set/reset B-Channel active-flags.
- *
- * Parameter:
- * status = status string to parse.
- * channel = channel where message comes from.
- * card = card where message comes from.
- */
-static void
-isdnloop_parse_status(u_char *status, int channel, isdnloop_card *card)
-{
- isdnloop_stat *s = isdnloop_stat_table;
- int action = -1;
- isdn_ctrl cmd;
-
- while (s->statstr) {
- if (!strncmp(status, s->statstr, strlen(s->statstr))) {
- cmd.command = s->command;
- action = s->action;
- break;
- }
- s++;
- }
- if (action == -1)
- return;
- cmd.driver = card->myid;
- cmd.arg = channel;
- switch (action) {
- case 1:
- /* BCON_x */
- card->flags |= (channel) ?
- ISDNLOOP_FLAGS_B2ACTIVE : ISDNLOOP_FLAGS_B1ACTIVE;
- break;
- case 2:
- /* BDIS_x */
- card->flags &= ~((channel) ?
- ISDNLOOP_FLAGS_B2ACTIVE : ISDNLOOP_FLAGS_B1ACTIVE);
- isdnloop_free_queue(card, channel);
- break;
- case 3:
- /* DCAL_I and DSCA_I */
- isdnloop_parse_setup(status + 6, &cmd);
- break;
- case 4:
- /* FCALL */
- sprintf(cmd.parm.setup.phone, "LEASED%d", card->myid);
- sprintf(cmd.parm.setup.eazmsn, "%d", channel + 1);
- cmd.parm.setup.si1 = 7;
- cmd.parm.setup.si2 = 0;
- cmd.parm.setup.plan = 0;
- cmd.parm.setup.screen = 0;
- break;
- case 5:
- /* CIF */
- strlcpy(cmd.parm.num, status + 3, sizeof(cmd.parm.num));
- break;
- case 6:
- /* AOC */
- snprintf(cmd.parm.num, sizeof(cmd.parm.num), "%d",
- (int) simple_strtoul(status + 7, NULL, 16));
- break;
- case 7:
- /* CAU */
- status += 3;
- if (strlen(status) == 4)
- snprintf(cmd.parm.num, sizeof(cmd.parm.num), "%s%c%c",
- status + 2, *status, *(status + 1));
- else
- strlcpy(cmd.parm.num, status + 1, sizeof(cmd.parm.num));
- break;
- case 8:
- /* Misc Errors on L1 and L2 */
- card->flags &= ~ISDNLOOP_FLAGS_B1ACTIVE;
- isdnloop_free_queue(card, 0);
- cmd.arg = 0;
- cmd.driver = card->myid;
- card->interface.statcallb(&cmd);
- cmd.command = ISDN_STAT_DHUP;
- cmd.arg = 0;
- cmd.driver = card->myid;
- card->interface.statcallb(&cmd);
- cmd.command = ISDN_STAT_BHUP;
- card->flags &= ~ISDNLOOP_FLAGS_B2ACTIVE;
- isdnloop_free_queue(card, 1);
- cmd.arg = 1;
- cmd.driver = card->myid;
- card->interface.statcallb(&cmd);
- cmd.command = ISDN_STAT_DHUP;
- cmd.arg = 1;
- cmd.driver = card->myid;
- break;
- }
- card->interface.statcallb(&cmd);
-}
-
-/*
- * Store a character in the ringbuffer for reading from /dev/isdnctrl
- *
- * Parameter:
- * card = pointer to card struct.
- * c = char to store.
- */
-static void
-isdnloop_putmsg(isdnloop_card *card, unsigned char c)
-{
- ulong flags;
-
- spin_lock_irqsave(&card->isdnloop_lock, flags);
- *card->msg_buf_write++ = (c == 0xff) ? '\n' : c;
- if (card->msg_buf_write == card->msg_buf_read) {
- if (++card->msg_buf_read > card->msg_buf_end)
- card->msg_buf_read = card->msg_buf;
- }
- if (card->msg_buf_write > card->msg_buf_end)
- card->msg_buf_write = card->msg_buf;
- spin_unlock_irqrestore(&card->isdnloop_lock, flags);
-}
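/*
 * A stripped-down userspace model (not from the driver) of the status
 * ringbuffer used above: the writer always succeeds and simply drops the
 * oldest character when it catches up with the reader.  Buffer size and
 * contents are arbitrary illustration values.
 */
#include <stdio.h>

#define RB_SIZE 8

static char rb[RB_SIZE];
static int wr, rd;

static void rb_put(char c)
{
	rb[wr] = c;
	wr = (wr + 1) % RB_SIZE;
	if (wr == rd)			/* full: drop the oldest character */
		rd = (rd + 1) % RB_SIZE;
}

static int rb_get(char *c)
{
	if (rd == wr)			/* empty */
		return 0;
	*c = rb[rd];
	rd = (rd + 1) % RB_SIZE;
	return 1;
}

int main(void)
{
	char c;

	for (c = 'a'; c <= 'l'; c++)	/* write more than the buffer holds */
		rb_put(c);
	while (rb_get(&c))		/* the oldest entries were dropped  */
		putchar(c);
	putchar('\n');
	return 0;
}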
-
-/*
- * Poll a virtual card's message queue.
- * If there are new status-replies from the card, copy them to the
- * ringbuffer for reading on /dev/isdnctrl and call
- * isdnloop_parse_status() for processing them. Watch for the special
- * firmware boot message and parse it to get the D-Channel protocol.
- * If there are B-Channels open, initiate a timer-callback to
- * isdnloop_pollbchan().
- * This routine is called periodically via timer interrupt.
- *
- * Parameter:
- *   t = timer that fired; the card struct is obtained via from_timer().
- */
-static void
-isdnloop_polldchan(struct timer_list *t)
-{
- isdnloop_card *card = from_timer(card, t, st_timer);
- struct sk_buff *skb;
- int avail;
- int left;
- u_char c;
- int ch;
- unsigned long flags;
- u_char *p;
- isdn_ctrl cmd;
-
- skb = skb_dequeue(&card->dqueue);
- if (skb)
- avail = skb->len;
- else
- avail = 0;
- for (left = avail; left > 0; left--) {
- c = *skb->data;
- skb_pull(skb, 1);
- isdnloop_putmsg(card, c);
- card->imsg[card->iptr] = c;
- if (card->iptr < 59)
- card->iptr++;
- if (!skb->len) {
- avail++;
- isdnloop_putmsg(card, '\n');
- card->imsg[card->iptr] = 0;
- card->iptr = 0;
- if (card->imsg[0] == '0' && card->imsg[1] >= '0' &&
- card->imsg[1] <= '2' && card->imsg[2] == ';') {
- ch = (card->imsg[1] - '0') - 1;
- p = &card->imsg[3];
- isdnloop_parse_status(p, ch, card);
- } else {
- p = card->imsg;
- if (!strncmp(p, "DRV1.", 5)) {
- printk(KERN_INFO "isdnloop: (%s) %s\n", CID, p);
- if (!strncmp(p + 7, "TC", 2)) {
- card->ptype = ISDN_PTYPE_1TR6;
- card->interface.features |= ISDN_FEATURE_P_1TR6;
- printk(KERN_INFO
- "isdnloop: (%s) 1TR6-Protocol loaded and running\n", CID);
- }
- if (!strncmp(p + 7, "EC", 2)) {
- card->ptype = ISDN_PTYPE_EURO;
- card->interface.features |= ISDN_FEATURE_P_EURO;
- printk(KERN_INFO
- "isdnloop: (%s) Euro-Protocol loaded and running\n", CID);
- }
- continue;
-
- }
- }
- }
- }
- if (avail) {
- cmd.command = ISDN_STAT_STAVAIL;
- cmd.driver = card->myid;
- cmd.arg = avail;
- card->interface.statcallb(&cmd);
- }
- if (card->flags & (ISDNLOOP_FLAGS_B1ACTIVE | ISDNLOOP_FLAGS_B2ACTIVE))
- if (!(card->flags & ISDNLOOP_FLAGS_RBTIMER)) {
- /* schedule b-channel polling */
- card->flags |= ISDNLOOP_FLAGS_RBTIMER;
- spin_lock_irqsave(&card->isdnloop_lock, flags);
- del_timer(&card->rb_timer);
- card->rb_timer.expires = jiffies + ISDNLOOP_TIMER_BCREAD;
- add_timer(&card->rb_timer);
- spin_unlock_irqrestore(&card->isdnloop_lock, flags);
- }
- /* schedule again */
- spin_lock_irqsave(&card->isdnloop_lock, flags);
- card->st_timer.expires = jiffies + ISDNLOOP_TIMER_DCREAD;
- add_timer(&card->st_timer);
- spin_unlock_irqrestore(&card->isdnloop_lock, flags);
-}
-
-/*
- * Append a packet to the transmit buffer-queue.
- *
- * Parameter:
- * channel = Number of B-channel
- * skb = packet to send.
- * card = pointer to card-struct
- * Return:
- * Number of bytes transferred, -E??? on error
- */
-static int
-isdnloop_sendbuf(int channel, struct sk_buff *skb, isdnloop_card *card)
-{
- int len = skb->len;
- unsigned long flags;
- struct sk_buff *nskb;
-
- if (len > 4000) {
- printk(KERN_WARNING
- "isdnloop: Send packet too large\n");
- return -EINVAL;
- }
- if (len) {
- if (!(card->flags & (channel ? ISDNLOOP_FLAGS_B2ACTIVE : ISDNLOOP_FLAGS_B1ACTIVE)))
- return 0;
- if (card->sndcount[channel] > ISDNLOOP_MAX_SQUEUE)
- return 0;
- spin_lock_irqsave(&card->isdnloop_lock, flags);
- nskb = dev_alloc_skb(skb->len);
- if (nskb) {
- skb_copy_from_linear_data(skb,
- skb_put(nskb, len), len);
- skb_queue_tail(&card->bqueue[channel], nskb);
- dev_kfree_skb(skb);
- } else
- len = 0;
- card->sndcount[channel] += len;
- spin_unlock_irqrestore(&card->isdnloop_lock, flags);
- }
- return len;
-}
-
-/*
- * Read the messages from the card's ringbuffer
- *
- * Parameter:
- * buf = pointer to buffer.
- * len = number of bytes to read.
- * card = pointer to card struct.
- * Return:
- * number of bytes actually transferred.
- */
-static int
-isdnloop_readstatus(u_char __user *buf, int len, isdnloop_card *card)
-{
- int count;
- u_char __user *p;
-
- for (p = buf, count = 0; count < len; p++, count++) {
- if (card->msg_buf_read == card->msg_buf_write)
- return count;
- if (put_user(*card->msg_buf_read++, p))
- return -EFAULT;
- if (card->msg_buf_read > card->msg_buf_end)
- card->msg_buf_read = card->msg_buf;
- }
- return count;
-}
-
-/*
- * Simulate a card's response by appending it to the card's
- * message queue.
- *
- * Parameter:
- * card = pointer to card struct.
- * s = pointer to message-string.
- * ch = channel: 0 = generic messages, 1 and 2 = D-channel messages.
- * Return:
- * 0 on success, 1 on memory squeeze.
- */
-static int
-isdnloop_fake(isdnloop_card *card, char *s, int ch)
-{
- struct sk_buff *skb;
- int len = strlen(s) + ((ch >= 0) ? 3 : 0);
- skb = dev_alloc_skb(len);
- if (!skb) {
- printk(KERN_WARNING "isdnloop: Out of memory in isdnloop_fake\n");
- return 1;
- }
- if (ch >= 0)
- sprintf(skb_put(skb, 3), "%02d;", ch);
- skb_put_data(skb, s, strlen(s));
- skb_queue_tail(&card->dqueue, skb);
- return 0;
-}
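/*
 * Illustrative example (not from the driver): a call such as
 * isdnloop_fake(card, "TEI OK", 0) queues the string "00;TEI OK" on the
 * card's D-channel queue; isdnloop_polldchan() above then dequeues it,
 * mirrors it into the status ringbuffer and hands "TEI OK" to
 * isdnloop_parse_status().
 */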
-/* *INDENT-OFF* */
-static isdnloop_stat isdnloop_cmd_table[] = {
- {"BCON_R", 0, 1}, /* B-Channel connect */
- {"BCON_I", 0, 17}, /* B-Channel connect ind */
- {"BDIS_R", 0, 2}, /* B-Channel disconnect */
- {"DDIS_R", 0, 3}, /* D-Channel disconnect */
- {"DCON_R", 0, 16}, /* D-Channel connect */
- {"DSCA_R", 0, 4}, /* Dial 1TR6-SPV */
- {"DCAL_R", 0, 5}, /* Dial */
- {"EAZC", 0, 6}, /* Clear EAZ listener */
- {"EAZ", 0, 7}, /* Set EAZ listener */
- {"SEEAZ", 0, 8}, /* Get EAZ listener */
- {"MSN", 0, 9}, /* Set/Clear MSN listener */
- {"MSALL", 0, 10}, /* Set multi MSN listeners */
- {"SETSIL", 0, 11}, /* Set SI list */
- {"SEESIL", 0, 12}, /* Get SI list */
- {"SILC", 0, 13}, /* Clear SI list */
- {"LOCK", 0, -1}, /* LOCK channel */
- {"UNLOCK", 0, -1}, /* UNLOCK channel */
- {"FV2ON", 1, 14}, /* Leased mode on */
- {"FV2OFF", 1, 15}, /* Leased mode off */
- {NULL, 0, -1}
-};
-/* *INDENT-ON* */
-
-
-/*
- * Simulate an error-response from a card.
- *
- * Parameter:
- * card = pointer to card struct.
- */
-static void
-isdnloop_fake_err(isdnloop_card *card)
-{
- char buf[64];
-
- snprintf(buf, sizeof(buf), "E%s", card->omsg);
- isdnloop_fake(card, buf, -1);
- isdnloop_fake(card, "NAK", -1);
-}
-
-static u_char ctable_eu[] = {0x00, 0x11, 0x01, 0x12};
-static u_char ctable_1t[] = {0x00, 0x3b, 0x01, 0x3a};
-
-/*
- * Assemble a simplified cause message depending on the
- * D-channel protocol used.
- *
- * Parameter:
- * card = pointer to card struct.
- * loc = location: 0 = local, 1 = remote.
- * cau = cause: 1 = busy, 2 = nonexistent callerid, 3 = no user responding.
- * Return:
- * Pointer to buffer containing the assembled message.
- */
-static char *
-isdnloop_unicause(isdnloop_card *card, int loc, int cau)
-{
- static char buf[6];
-
- switch (card->ptype) {
- case ISDN_PTYPE_EURO:
- sprintf(buf, "E%02X%02X", (loc) ? 4 : 2, ctable_eu[cau]);
- break;
- case ISDN_PTYPE_1TR6:
- sprintf(buf, "%02X44", ctable_1t[cau]);
- break;
- default:
- return "0000";
- }
- return buf;
-}
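/*
 * A standalone illustration (not part of the driver) of the two cause
 * formats assembled above, using loc = 1 ("remote") and cau = 3 ("no user
 * responding") as example inputs; callers prepend "CAU" to the result.
 */
#include <stdio.h>

int main(void)
{
	static const unsigned char ctable_eu[] = {0x00, 0x11, 0x01, 0x12};
	static const unsigned char ctable_1t[] = {0x00, 0x3b, 0x01, 0x3a};
	int loc = 1, cau = 3;		/* remote, "no user responding" */
	char buf[6];

	sprintf(buf, "E%02X%02X", loc ? 4 : 2, ctable_eu[cau]);
	printf("EURO: CAU%s\n", buf);	/* -> CAUE0412 */
	sprintf(buf, "%02X44", ctable_1t[cau]);
	printf("1TR6: CAU%s\n", buf);	/* -> CAU3A44 */
	return 0;
}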
-
-/*
- * Release a virtual connection. Called from timer interrupt, when
- * called party did not respond.
- *
- * Parameter:
- * card = pointer to card struct.
- * ch = channel (0-based)
- */
-static void
-isdnloop_atimeout(isdnloop_card *card, int ch)
-{
- unsigned long flags;
- char buf[60];
-
- spin_lock_irqsave(&card->isdnloop_lock, flags);
- if (card->rcard[ch]) {
- isdnloop_fake(card->rcard[ch], "DDIS_I", card->rch[ch] + 1);
- card->rcard[ch]->rcard[card->rch[ch]] = NULL;
- card->rcard[ch] = NULL;
- }
- isdnloop_fake(card, "DDIS_I", ch + 1);
- /* No user responding */
- sprintf(buf, "CAU%s", isdnloop_unicause(card, 1, 3));
- isdnloop_fake(card, buf, ch + 1);
- spin_unlock_irqrestore(&card->isdnloop_lock, flags);
-}
-
-/*
- * Wrapper for isdnloop_atimeout().
- */
-static void
-isdnloop_atimeout0(struct timer_list *t)
-{
- isdnloop_card *card = from_timer(card, t, c_timer[0]);
-
- isdnloop_atimeout(card, 0);
-}
-
-/*
- * Wrapper for isdnloop_atimeout().
- */
-static void
-isdnloop_atimeout1(struct timer_list *t)
-{
- isdnloop_card *card = from_timer(card, t, c_timer[1]);
-
- isdnloop_atimeout(card, 1);
-}
-
-/*
- * Install a watchdog for a called user that does not respond.
- *
- * Parameter:
- * card = pointer to card struct.
- * ch = channel to watch for.
- */
-static void
-isdnloop_start_ctimer(isdnloop_card *card, int ch)
-{
- unsigned long flags;
-
- spin_lock_irqsave(&card->isdnloop_lock, flags);
- timer_setup(&card->c_timer[ch], ch ? isdnloop_atimeout1
- : isdnloop_atimeout0, 0);
- card->c_timer[ch].expires = jiffies + ISDNLOOP_TIMER_ALERTWAIT;
- add_timer(&card->c_timer[ch]);
- spin_unlock_irqrestore(&card->isdnloop_lock, flags);
-}
-
-/*
- * Kill a pending channel watchdog.
- *
- * Parameter:
- * card = pointer to card struct.
- * ch = channel (0-based).
- */
-static void
-isdnloop_kill_ctimer(isdnloop_card *card, int ch)
-{
- unsigned long flags;
-
- spin_lock_irqsave(&card->isdnloop_lock, flags);
- del_timer(&card->c_timer[ch]);
- spin_unlock_irqrestore(&card->isdnloop_lock, flags);
-}
-
-static u_char si2bit[] = {0, 1, 0, 0, 0, 2, 0, 4, 0, 0};
-static u_char bit2si[] = {1, 5, 7};
-
-/*
- * Try finding a listener for an outgoing call.
- *
- * Parameter:
- * card = pointer to calling card.
- * p = pointer to ICN-type setup-string.
- * lch = channel of calling card.
- * cmd = pointer to struct to be filled when parsing setup.
- * Return:
- * 0 = found match, alerting should happen.
- * 1 = found matching number but it is busy.
- * 2 = no matching listener.
- * 3 = found matching number but SI does not match.
- */
-static int
-isdnloop_try_call(isdnloop_card *card, char *p, int lch, isdn_ctrl *cmd)
-{
- isdnloop_card *cc = cards;
- unsigned long flags;
- int ch;
- int num_match;
- int i;
- char *e;
- char nbuf[32];
-
- isdnloop_parse_setup(p, cmd);
- while (cc) {
- for (ch = 0; ch < 2; ch++) {
- /* Exclude ourself */
- if ((cc == card) && (ch == lch))
- continue;
- num_match = 0;
- switch (cc->ptype) {
- case ISDN_PTYPE_EURO:
- for (i = 0; i < 3; i++)
- if (!(strcmp(cc->s0num[i], cmd->parm.setup.phone)))
- num_match = 1;
- break;
- case ISDN_PTYPE_1TR6:
- e = cc->eazlist[ch];
- while (*e) {
- sprintf(nbuf, "%s%c", cc->s0num[0], *e);
- if (!(strcmp(nbuf, cmd->parm.setup.phone)))
- num_match = 1;
- e++;
- }
- }
- if (num_match) {
- spin_lock_irqsave(&card->isdnloop_lock, flags);
- /* channel idle? */
- if (!(cc->rcard[ch])) {
- /* Check SI */
- if (!(si2bit[cmd->parm.setup.si1] & cc->sil[ch])) {
- spin_unlock_irqrestore(&card->isdnloop_lock, flags);
- return 3;
- }
- /* ch is idle, si and number matches */
- cc->rcard[ch] = card;
- cc->rch[ch] = lch;
- card->rcard[lch] = cc;
- card->rch[lch] = ch;
- spin_unlock_irqrestore(&card->isdnloop_lock, flags);
- return 0;
- } else {
- spin_unlock_irqrestore(&card->isdnloop_lock, flags);
- /* num matches, but busy */
- if (ch == 1)
- return 1;
- }
- }
- }
- cc = cc->next;
- }
- return 2;
-}
-
-/*
- * Depending on D-channel protocol and caller/called, modify
- * phone number.
- *
- * Parameter:
- * card = pointer to card struct.
- * phone = pointer phone number.
- * caller = flag: 1 = caller, 0 = called.
- * Return:
- * pointer to new phone number.
- */
-static char *
-isdnloop_vstphone(isdnloop_card *card, char *phone, int caller)
-{
- int i;
- static char nphone[30];
-
- if (!card) {
- printk("BUG!!!\n");
- return "";
- }
- switch (card->ptype) {
- case ISDN_PTYPE_EURO:
- if (caller) {
- for (i = 0; i < 2; i++)
- if (!(strcmp(card->s0num[i], phone)))
- return phone;
- return card->s0num[0];
- }
- return phone;
- break;
- case ISDN_PTYPE_1TR6:
- if (caller) {
- sprintf(nphone, "%s%c", card->s0num[0], phone[0]);
- return nphone;
- } else
- return &phone[strlen(phone) - 1];
- break;
- }
- return "";
-}
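/*
 * Illustrative examples (not from the driver) of the translation above,
 * assuming a card whose base number s0num[0] is "0815" (made up):
 *
 *   EURO, caller:  a calling number not found in s0num[] is replaced by
 *                  the first MSN, i.e. "0815"; known MSNs pass unchanged.
 *   EURO, called:  the number is passed through unchanged.
 *   1TR6, caller:  base number plus the first character of 'phone',
 *                  e.g. "7" -> "08157".
 *   1TR6, called:  only the last character of 'phone' is kept,
 *                  e.g. "08157" -> "7".
 */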
-
-/*
- * Parse an ICN-type command string sent to the 'card'.
- * Perform misc. actions depending on the command.
- *
- * Parameter:
- * card = pointer to card struct.
- */
-static void
-isdnloop_parse_cmd(isdnloop_card *card)
-{
- char *p = card->omsg;
- isdn_ctrl cmd;
- char buf[60];
- isdnloop_stat *s = isdnloop_cmd_table;
- int action = -1;
- int i;
- int ch;
-
- if ((card->omsg[0] != '0') && (card->omsg[2] != ';')) {
- isdnloop_fake_err(card);
- return;
- }
- ch = card->omsg[1] - '0';
- if ((ch < 0) || (ch > 2)) {
- isdnloop_fake_err(card);
- return;
- }
- p += 3;
- while (s->statstr) {
- if (!strncmp(p, s->statstr, strlen(s->statstr))) {
- action = s->action;
- if (s->command && (ch != 0)) {
- isdnloop_fake_err(card);
- return;
- }
- break;
- }
- s++;
- }
- if (action == -1)
- return;
- switch (action) {
- case 1:
- /* 0x;BCON_R */
- if (card->rcard[ch - 1]) {
- isdnloop_fake(card->rcard[ch - 1], "BCON_I",
- card->rch[ch - 1] + 1);
- isdnloop_fake(card, "BCON_C", ch);
- }
- break;
- case 17:
- /* 0x;BCON_I */
- if (card->rcard[ch - 1]) {
- isdnloop_fake(card->rcard[ch - 1], "BCON_C",
- card->rch[ch - 1] + 1);
- }
- break;
- case 2:
- /* 0x;BDIS_R */
- isdnloop_fake(card, "BDIS_C", ch);
- if (card->rcard[ch - 1]) {
- isdnloop_fake(card->rcard[ch - 1], "BDIS_I",
- card->rch[ch - 1] + 1);
- }
- break;
- case 16:
- /* 0x;DCON_R */
- isdnloop_kill_ctimer(card, ch - 1);
- if (card->rcard[ch - 1]) {
- isdnloop_kill_ctimer(card->rcard[ch - 1], card->rch[ch - 1]);
- isdnloop_fake(card->rcard[ch - 1], "DCON_C",
- card->rch[ch - 1] + 1);
- isdnloop_fake(card, "DCON_C", ch);
- }
- break;
- case 3:
- /* 0x;DDIS_R */
- isdnloop_kill_ctimer(card, ch - 1);
- if (card->rcard[ch - 1]) {
- isdnloop_kill_ctimer(card->rcard[ch - 1], card->rch[ch - 1]);
- isdnloop_fake(card->rcard[ch - 1], "DDIS_I",
- card->rch[ch - 1] + 1);
- card->rcard[ch - 1] = NULL;
- }
- isdnloop_fake(card, "DDIS_C", ch);
- break;
- case 4:
- /* 0x;DSCA_Rdd,yy,zz,oo */
- if (card->ptype != ISDN_PTYPE_1TR6) {
- isdnloop_fake_err(card);
- return;
- }
- /* Fall through */
- case 5:
- /* 0x;DCAL_Rdd,yy,zz,oo */
- p += 6;
- switch (isdnloop_try_call(card, p, ch - 1, &cmd)) {
- case 0:
- /* Alerting */
- sprintf(buf, "D%s_I%s,%02d,%02d,%s",
- (action == 4) ? "SCA" : "CAL",
- isdnloop_vstphone(card, cmd.parm.setup.eazmsn, 1),
- cmd.parm.setup.si1,
- cmd.parm.setup.si2,
- isdnloop_vstphone(card->rcard[ch - 1],
- cmd.parm.setup.phone, 0));
- isdnloop_fake(card->rcard[ch - 1], buf, card->rch[ch - 1] + 1);
- /* Fall through */
- case 3:
- /* si1 does not match, don't alert but start timer */
- isdnloop_start_ctimer(card, ch - 1);
- break;
- case 1:
- /* Remote busy */
- isdnloop_fake(card, "DDIS_I", ch);
- sprintf(buf, "CAU%s", isdnloop_unicause(card, 1, 1));
- isdnloop_fake(card, buf, ch);
- break;
- case 2:
- /* No such user */
- isdnloop_fake(card, "DDIS_I", ch);
- sprintf(buf, "CAU%s", isdnloop_unicause(card, 1, 2));
- isdnloop_fake(card, buf, ch);
- break;
- }
- break;
- case 6:
- /* 0x;EAZC */
- card->eazlist[ch - 1][0] = '\0';
- break;
- case 7:
- /* 0x;EAZ */
- p += 3;
- if (strlen(p) >= sizeof(card->eazlist[0]))
- break;
- strcpy(card->eazlist[ch - 1], p);
- break;
- case 8:
- /* 0x;SEEAZ */
- sprintf(buf, "EAZ-LIST: %s", card->eazlist[ch - 1]);
- isdnloop_fake(card, buf, ch + 1);
- break;
- case 9:
- /* 0x;MSN */
- break;
- case 10:
- /* 0x;MSNALL */
- break;
- case 11:
- /* 0x;SETSIL */
- p += 6;
- i = 0;
- while (strchr("0157", *p)) {
- if (i)
- card->sil[ch - 1] |= si2bit[*p - '0'];
- i = (*p++ == '0');
- }
- if (*p)
- isdnloop_fake_err(card);
- break;
- case 12:
- /* 0x;SEESIL */
- sprintf(buf, "SIN-LIST: ");
- p = buf + 10;
- for (i = 0; i < 3; i++)
- if (card->sil[ch - 1] & (1 << i))
- p += sprintf(p, "%02d", bit2si[i]);
- isdnloop_fake(card, buf, ch + 1);
- break;
- case 13:
- /* 0x;SILC */
- card->sil[ch - 1] = 0;
- break;
- case 14:
- /* 00;FV2ON */
- break;
- case 15:
- /* 00;FV2OFF */
- break;
- }
-}
-
-/*
- * Put command-strings into the command buffer of the 'card'. In reality,
- * execute them right in place by calling isdnloop_parse_cmd(). Also copy
- * every command to the read message ringbuffer, preceding it with a '>'.
- * These messages can be read at /dev/isdnctrl.
- *
- * Parameter:
- * buf = pointer to command buffer.
- * len = length of buffer data.
- *   user = flag: 1 = called from userlevel, 0 = called from kernel.
- * card = pointer to card struct.
- * Return:
- * number of bytes transferred (currently always equals len).
- */
-static int
-isdnloop_writecmd(const u_char *buf, int len, int user, isdnloop_card *card)
-{
- int xcount = 0;
- int ocount = 1;
- isdn_ctrl cmd;
-
- while (len) {
- int count = len;
- u_char *p;
- u_char msg[0x100];
-
- if (count > 255)
- count = 255;
- if (user) {
- if (copy_from_user(msg, buf, count))
- return -EFAULT;
- } else
- memcpy(msg, buf, count);
- isdnloop_putmsg(card, '>');
- for (p = msg; count > 0; count--, p++) {
- len--;
- xcount++;
- isdnloop_putmsg(card, *p);
- card->omsg[card->optr] = *p;
- if (*p == '\n') {
- card->omsg[card->optr] = '\0';
- card->optr = 0;
- isdnloop_parse_cmd(card);
- if (len) {
- isdnloop_putmsg(card, '>');
- ocount++;
- }
- } else {
- if (card->optr < 59)
- card->optr++;
- }
- ocount++;
- }
- }
- cmd.command = ISDN_STAT_STAVAIL;
- cmd.driver = card->myid;
- cmd.arg = ocount;
- card->interface.statcallb(&cmd);
- return xcount;
-}
-
-/*
- * Delete card's pending timers, send STOP to linklevel
- */
-static void
-isdnloop_stopcard(isdnloop_card *card)
-{
- unsigned long flags;
- isdn_ctrl cmd;
-
- spin_lock_irqsave(&card->isdnloop_lock, flags);
- if (card->flags & ISDNLOOP_FLAGS_RUNNING) {
- card->flags &= ~ISDNLOOP_FLAGS_RUNNING;
- del_timer(&card->st_timer);
- del_timer(&card->rb_timer);
- del_timer(&card->c_timer[0]);
- del_timer(&card->c_timer[1]);
- cmd.command = ISDN_STAT_STOP;
- cmd.driver = card->myid;
- card->interface.statcallb(&cmd);
- }
- spin_unlock_irqrestore(&card->isdnloop_lock, flags);
-}
-
-/*
- * Stop all cards before unload.
- */
-static void
-isdnloop_stopallcards(void)
-{
- isdnloop_card *p = cards;
-
- while (p) {
- isdnloop_stopcard(p);
- p = p->next;
- }
-}
-
-/*
- * Start a 'card'. Simulate card's boot message and set the phone
- * number(s) of the virtual 'S0-Interface'. Install D-channel
- * poll timer.
- *
- * Parameter:
- * card = pointer to card struct.
- * sdefp = pointer to struct holding ioctl parameters.
- * Return:
- * 0 on success, -E??? otherwise.
- */
-static int
-isdnloop_start(isdnloop_card *card, isdnloop_sdef *sdefp)
-{
- unsigned long flags;
- isdnloop_sdef sdef;
- int i;
-
- if (card->flags & ISDNLOOP_FLAGS_RUNNING)
- return -EBUSY;
- if (copy_from_user((char *) &sdef, (char *) sdefp, sizeof(sdef)))
- return -EFAULT;
-
- for (i = 0; i < 3; i++) {
- if (!memchr(sdef.num[i], 0, sizeof(sdef.num[i])))
- return -EINVAL;
- }
-
- spin_lock_irqsave(&card->isdnloop_lock, flags);
- switch (sdef.ptype) {
- case ISDN_PTYPE_EURO:
- if (isdnloop_fake(card, "DRV1.23EC-Q.931-CAPI-CNS-BASIS-20.02.96",
- -1)) {
- spin_unlock_irqrestore(&card->isdnloop_lock, flags);
- return -ENOMEM;
- }
- card->sil[0] = card->sil[1] = 4;
- if (isdnloop_fake(card, "TEI OK", 0)) {
- spin_unlock_irqrestore(&card->isdnloop_lock, flags);
- return -ENOMEM;
- }
- for (i = 0; i < 3; i++) {
- strlcpy(card->s0num[i], sdef.num[i],
- sizeof(card->s0num[0]));
- }
- break;
- case ISDN_PTYPE_1TR6:
- if (isdnloop_fake(card, "DRV1.04TC-1TR6-CAPI-CNS-BASIS-29.11.95",
- -1)) {
- spin_unlock_irqrestore(&card->isdnloop_lock, flags);
- return -ENOMEM;
- }
- card->sil[0] = card->sil[1] = 4;
- if (isdnloop_fake(card, "TEI OK", 0)) {
- spin_unlock_irqrestore(&card->isdnloop_lock, flags);
- return -ENOMEM;
- }
- strlcpy(card->s0num[0], sdef.num[0], sizeof(card->s0num[0]));
- card->s0num[1][0] = '\0';
- card->s0num[2][0] = '\0';
- break;
- default:
- spin_unlock_irqrestore(&card->isdnloop_lock, flags);
- printk(KERN_WARNING "isdnloop: Illegal D-channel protocol %d\n",
- sdef.ptype);
- return -EINVAL;
- }
- timer_setup(&card->rb_timer, isdnloop_pollbchan, 0);
- timer_setup(&card->st_timer, isdnloop_polldchan, 0);
- card->st_timer.expires = jiffies + ISDNLOOP_TIMER_DCREAD;
- add_timer(&card->st_timer);
- card->flags |= ISDNLOOP_FLAGS_RUNNING;
- spin_unlock_irqrestore(&card->isdnloop_lock, flags);
- return 0;
-}
-
-/*
- * Main handler for commands sent by linklevel.
- */
-static int
-isdnloop_command(isdn_ctrl *c, isdnloop_card *card)
-{
- ulong a;
- int i;
- char cbuf[80];
- isdn_ctrl cmd;
- isdnloop_cdef cdef;
-
- switch (c->command) {
- case ISDN_CMD_IOCTL:
- memcpy(&a, c->parm.num, sizeof(ulong));
- switch (c->arg) {
- case ISDNLOOP_IOCTL_DEBUGVAR:
- return (ulong) card;
- case ISDNLOOP_IOCTL_STARTUP:
- return isdnloop_start(card, (isdnloop_sdef *) a);
- break;
- case ISDNLOOP_IOCTL_ADDCARD:
- if (copy_from_user((char *)&cdef,
- (char *)a,
- sizeof(cdef)))
- return -EFAULT;
- return isdnloop_addcard(cdef.id1);
- break;
- case ISDNLOOP_IOCTL_LEASEDCFG:
- if (a) {
- if (!card->leased) {
- card->leased = 1;
- while (card->ptype == ISDN_PTYPE_UNKNOWN)
- schedule_timeout_interruptible(10);
- schedule_timeout_interruptible(10);
- sprintf(cbuf, "00;FV2ON\n01;EAZ1\n02;EAZ2\n");
- i = isdnloop_writecmd(cbuf, strlen(cbuf), 0, card);
- printk(KERN_INFO
- "isdnloop: (%s) Leased-line mode enabled\n",
- CID);
- cmd.command = ISDN_STAT_RUN;
- cmd.driver = card->myid;
- cmd.arg = 0;
- card->interface.statcallb(&cmd);
- }
- } else {
- if (card->leased) {
- card->leased = 0;
- sprintf(cbuf, "00;FV2OFF\n");
- i = isdnloop_writecmd(cbuf, strlen(cbuf), 0, card);
- printk(KERN_INFO
- "isdnloop: (%s) Leased-line mode disabled\n",
- CID);
- cmd.command = ISDN_STAT_RUN;
- cmd.driver = card->myid;
- cmd.arg = 0;
- card->interface.statcallb(&cmd);
- }
- }
- return 0;
- default:
- return -EINVAL;
- }
- break;
- case ISDN_CMD_DIAL:
- if (!(card->flags & ISDNLOOP_FLAGS_RUNNING))
- return -ENODEV;
- if (card->leased)
- break;
- if ((c->arg & 255) < ISDNLOOP_BCH) {
- char *p;
- char dcode[4];
-
- a = c->arg;
- p = c->parm.setup.phone;
- if (*p == 's' || *p == 'S') {
- /* Dial for SPV */
- p++;
- strcpy(dcode, "SCA");
- } else
- /* Normal Dial */
- strcpy(dcode, "CAL");
- snprintf(cbuf, sizeof(cbuf),
- "%02d;D%s_R%s,%02d,%02d,%s\n", (int) (a + 1),
- dcode, p, c->parm.setup.si1,
- c->parm.setup.si2, c->parm.setup.eazmsn);
- i = isdnloop_writecmd(cbuf, strlen(cbuf), 0, card);
- }
- break;
- case ISDN_CMD_ACCEPTD:
- if (!(card->flags & ISDNLOOP_FLAGS_RUNNING))
- return -ENODEV;
- if (c->arg < ISDNLOOP_BCH) {
- a = c->arg + 1;
- cbuf[0] = 0;
- switch (card->l2_proto[a - 1]) {
- case ISDN_PROTO_L2_X75I:
- sprintf(cbuf, "%02d;BX75\n", (int) a);
- break;
-#ifdef CONFIG_ISDN_X25
- case ISDN_PROTO_L2_X25DTE:
- sprintf(cbuf, "%02d;BX2T\n", (int) a);
- break;
- case ISDN_PROTO_L2_X25DCE:
- sprintf(cbuf, "%02d;BX2C\n", (int) a);
- break;
-#endif
- case ISDN_PROTO_L2_HDLC:
- sprintf(cbuf, "%02d;BTRA\n", (int) a);
- break;
- }
- if (strlen(cbuf))
- i = isdnloop_writecmd(cbuf, strlen(cbuf), 0, card);
- sprintf(cbuf, "%02d;DCON_R\n", (int) a);
- i = isdnloop_writecmd(cbuf, strlen(cbuf), 0, card);
- }
- break;
- case ISDN_CMD_ACCEPTB:
- if (!(card->flags & ISDNLOOP_FLAGS_RUNNING))
- return -ENODEV;
- if (c->arg < ISDNLOOP_BCH) {
- a = c->arg + 1;
- switch (card->l2_proto[a - 1]) {
- case ISDN_PROTO_L2_X75I:
- sprintf(cbuf, "%02d;BCON_R,BX75\n", (int) a);
- break;
-#ifdef CONFIG_ISDN_X25
- case ISDN_PROTO_L2_X25DTE:
- sprintf(cbuf, "%02d;BCON_R,BX2T\n", (int) a);
- break;
- case ISDN_PROTO_L2_X25DCE:
- sprintf(cbuf, "%02d;BCON_R,BX2C\n", (int) a);
- break;
-#endif
- case ISDN_PROTO_L2_HDLC:
- sprintf(cbuf, "%02d;BCON_R,BTRA\n", (int) a);
- break;
- default:
- sprintf(cbuf, "%02d;BCON_R\n", (int) a);
- }
- printk(KERN_DEBUG "isdnloop writecmd '%s'\n", cbuf);
- i = isdnloop_writecmd(cbuf, strlen(cbuf), 0, card);
- break;
- case ISDN_CMD_HANGUP:
- if (!(card->flags & ISDNLOOP_FLAGS_RUNNING))
- return -ENODEV;
- if (c->arg < ISDNLOOP_BCH) {
- a = c->arg + 1;
- sprintf(cbuf, "%02d;BDIS_R\n%02d;DDIS_R\n", (int) a, (int) a);
- i = isdnloop_writecmd(cbuf, strlen(cbuf), 0, card);
- }
- break;
- case ISDN_CMD_SETEAZ:
- if (!(card->flags & ISDNLOOP_FLAGS_RUNNING))
- return -ENODEV;
- if (card->leased)
- break;
- if (c->arg < ISDNLOOP_BCH) {
- a = c->arg + 1;
- if (card->ptype == ISDN_PTYPE_EURO) {
- sprintf(cbuf, "%02d;MS%s%s\n", (int) a,
- c->parm.num[0] ? "N" : "ALL", c->parm.num);
- } else
- sprintf(cbuf, "%02d;EAZ%s\n", (int) a,
- c->parm.num[0] ? c->parm.num : (u_char *) "0123456789");
- i = isdnloop_writecmd(cbuf, strlen(cbuf), 0, card);
- }
- break;
- case ISDN_CMD_CLREAZ:
- if (!(card->flags & ISDNLOOP_FLAGS_RUNNING))
- return -ENODEV;
- if (card->leased)
- break;
- if (c->arg < ISDNLOOP_BCH) {
- a = c->arg + 1;
- if (card->ptype == ISDN_PTYPE_EURO)
- sprintf(cbuf, "%02d;MSNC\n", (int) a);
- else
- sprintf(cbuf, "%02d;EAZC\n", (int) a);
- i = isdnloop_writecmd(cbuf, strlen(cbuf), 0, card);
- }
- break;
- case ISDN_CMD_SETL2:
- if (!(card->flags & ISDNLOOP_FLAGS_RUNNING))
- return -ENODEV;
- if ((c->arg & 255) < ISDNLOOP_BCH) {
- a = c->arg;
- switch (a >> 8) {
- case ISDN_PROTO_L2_X75I:
- sprintf(cbuf, "%02d;BX75\n", (int) (a & 255) + 1);
- break;
-#ifdef CONFIG_ISDN_X25
- case ISDN_PROTO_L2_X25DTE:
- sprintf(cbuf, "%02d;BX2T\n", (int) (a & 255) + 1);
- break;
- case ISDN_PROTO_L2_X25DCE:
- sprintf(cbuf, "%02d;BX2C\n", (int) (a & 255) + 1);
- break;
-#endif
- case ISDN_PROTO_L2_HDLC:
- sprintf(cbuf, "%02d;BTRA\n", (int) (a & 255) + 1);
- break;
- case ISDN_PROTO_L2_TRANS:
- sprintf(cbuf, "%02d;BTRA\n", (int) (a & 255) + 1);
- break;
- default:
- return -EINVAL;
- }
- i = isdnloop_writecmd(cbuf, strlen(cbuf), 0, card);
- card->l2_proto[a & 255] = (a >> 8);
- }
- break;
- case ISDN_CMD_SETL3:
- if (!(card->flags & ISDNLOOP_FLAGS_RUNNING))
- return -ENODEV;
- return 0;
- default:
- return -EINVAL;
- }
- }
- return 0;
-}
-
-/*
- * Find card with given driverId
- */
-static inline isdnloop_card *
-isdnloop_findcard(int driverid)
-{
- isdnloop_card *p = cards;
-
- while (p) {
- if (p->myid == driverid)
- return p;
- p = p->next;
- }
- return (isdnloop_card *) 0;
-}
-
-/*
- * Wrapper functions for interface to linklevel
- */
-static int
-if_command(isdn_ctrl *c)
-{
- isdnloop_card *card = isdnloop_findcard(c->driver);
-
- if (card)
- return isdnloop_command(c, card);
- printk(KERN_ERR
- "isdnloop: if_command called with invalid driverId!\n");
- return -ENODEV;
-}
-
-static int
-if_writecmd(const u_char __user *buf, int len, int id, int channel)
-{
- isdnloop_card *card = isdnloop_findcard(id);
-
- if (card) {
- if (!(card->flags & ISDNLOOP_FLAGS_RUNNING))
- return -ENODEV;
- return isdnloop_writecmd(buf, len, 1, card);
- }
- printk(KERN_ERR
- "isdnloop: if_writecmd called with invalid driverId!\n");
- return -ENODEV;
-}
-
-static int
-if_readstatus(u_char __user *buf, int len, int id, int channel)
-{
- isdnloop_card *card = isdnloop_findcard(id);
-
- if (card) {
- if (!(card->flags & ISDNLOOP_FLAGS_RUNNING))
- return -ENODEV;
- return isdnloop_readstatus(buf, len, card);
- }
- printk(KERN_ERR
- "isdnloop: if_readstatus called with invalid driverId!\n");
- return -ENODEV;
-}
-
-static int
-if_sendbuf(int id, int channel, int ack, struct sk_buff *skb)
-{
- isdnloop_card *card = isdnloop_findcard(id);
-
- if (card) {
- if (!(card->flags & ISDNLOOP_FLAGS_RUNNING))
- return -ENODEV;
- /* ack request stored in skb scratch area */
- *(skb->head) = ack;
- return isdnloop_sendbuf(channel, skb, card);
- }
- printk(KERN_ERR
- "isdnloop: if_sendbuf called with invalid driverId!\n");
- return -ENODEV;
-}
-
-/*
- * Allocate a new card-struct, initialize it,
- * link it into cards-list and register it at linklevel.
- */
-static isdnloop_card *
-isdnloop_initcard(char *id)
-{
- isdnloop_card *card;
- int i;
- card = kzalloc(sizeof(isdnloop_card), GFP_KERNEL);
- if (!card) {
- printk(KERN_WARNING
- "isdnloop: (%s) Could not allocate card-struct.\n", id);
- return (isdnloop_card *) 0;
- }
- card->interface.owner = THIS_MODULE;
- card->interface.channels = ISDNLOOP_BCH;
- card->interface.hl_hdrlen = 1; /* scratch area for storing ack flag*/
- card->interface.maxbufsize = 4000;
- card->interface.command = if_command;
- card->interface.writebuf_skb = if_sendbuf;
- card->interface.writecmd = if_writecmd;
- card->interface.readstat = if_readstatus;
- card->interface.features = ISDN_FEATURE_L2_X75I |
-#ifdef CONFIG_ISDN_X25
- ISDN_FEATURE_L2_X25DTE |
- ISDN_FEATURE_L2_X25DCE |
-#endif
- ISDN_FEATURE_L2_HDLC |
- ISDN_FEATURE_L3_TRANS |
- ISDN_FEATURE_P_UNKNOWN;
- card->ptype = ISDN_PTYPE_UNKNOWN;
- strlcpy(card->interface.id, id, sizeof(card->interface.id));
- card->msg_buf_write = card->msg_buf;
- card->msg_buf_read = card->msg_buf;
- card->msg_buf_end = &card->msg_buf[sizeof(card->msg_buf) - 1];
- for (i = 0; i < ISDNLOOP_BCH; i++) {
- card->l2_proto[i] = ISDN_PROTO_L2_X75I;
- skb_queue_head_init(&card->bqueue[i]);
- }
- skb_queue_head_init(&card->dqueue);
- spin_lock_init(&card->isdnloop_lock);
- card->next = cards;
- cards = card;
- if (!register_isdn(&card->interface)) {
- cards = cards->next;
- printk(KERN_WARNING
- "isdnloop: Unable to register %s\n", id);
- kfree(card);
- return (isdnloop_card *) 0;
- }
- card->myid = card->interface.channels;
- return card;
-}
-
-static int
-isdnloop_addcard(char *id1)
-{
- isdnloop_card *card;
- card = isdnloop_initcard(id1);
- if (!card) {
- return -EIO;
- }
- printk(KERN_INFO
- "isdnloop: (%s) virtual card added\n",
- card->interface.id);
- return 0;
-}
-
-static int __init
-isdnloop_init(void)
-{
- if (isdnloop_id)
- return isdnloop_addcard(isdnloop_id);
-
- return 0;
-}
-
-static void __exit
-isdnloop_exit(void)
-{
- isdn_ctrl cmd;
- isdnloop_card *card = cards;
- isdnloop_card *last;
- int i;
-
- isdnloop_stopallcards();
- while (card) {
- cmd.command = ISDN_STAT_UNLOAD;
- cmd.driver = card->myid;
- card->interface.statcallb(&cmd);
- for (i = 0; i < ISDNLOOP_BCH; i++)
- isdnloop_free_queue(card, i);
- card = card->next;
- }
- card = cards;
- while (card) {
- last = card;
- skb_queue_purge(&card->dqueue);
- card = card->next;
- kfree(last);
- }
- printk(KERN_NOTICE "isdnloop-ISDN-driver unloaded\n");
-}
-
-module_init(isdnloop_init);
-module_exit(isdnloop_exit);
diff --git a/drivers/isdn/isdnloop/isdnloop.h b/drivers/isdn/isdnloop/isdnloop.h
deleted file mode 100644
index e9e035552bb4..000000000000
--- a/drivers/isdn/isdnloop/isdnloop.h
+++ /dev/null
@@ -1,112 +0,0 @@
-/* $Id: isdnloop.h,v 1.5.6.3 2001/09/23 22:24:56 kai Exp $
- *
- * Loopback lowlevel module for testing of linklevel.
- *
- * Copyright 1997 by Fritz Elfert (fritz@isdn4linux.de)
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- */
-
-#ifndef isdnloop_h
-#define isdnloop_h
-
-#define ISDNLOOP_IOCTL_DEBUGVAR 0
-#define ISDNLOOP_IOCTL_ADDCARD 1
-#define ISDNLOOP_IOCTL_LEASEDCFG 2
-#define ISDNLOOP_IOCTL_STARTUP 3
-
-/* Struct for adding new cards */
-typedef struct isdnloop_cdef {
- char id1[10];
-} isdnloop_cdef;
-
-/* Struct for configuring cards */
-typedef struct isdnloop_sdef {
- int ptype;
- char num[3][20];
-} isdnloop_sdef;
-
-#if defined(__KERNEL__) || defined(__DEBUGVAR__)
-
-#ifdef __KERNEL__
-/* Kernel includes */
-
-#include <linux/errno.h>
-#include <linux/fs.h>
-#include <linux/major.h>
-#include <asm/io.h>
-#include <linux/kernel.h>
-#include <linux/signal.h>
-#include <linux/slab.h>
-#include <linux/mm.h>
-#include <linux/mman.h>
-#include <linux/ioport.h>
-#include <linux/timer.h>
-#include <linux/wait.h>
-#include <linux/isdnif.h>
-
-#endif /* __KERNEL__ */
-
-#define ISDNLOOP_FLAGS_B1ACTIVE 1 /* B-Channel-1 is open */
-#define ISDNLOOP_FLAGS_B2ACTIVE 2 /* B-Channel-2 is open */
-#define ISDNLOOP_FLAGS_RUNNING 4 /* Cards driver activated */
-#define ISDNLOOP_FLAGS_RBTIMER 8 /* scheduling of B-Channel-poll */
-#define ISDNLOOP_TIMER_BCREAD 1 /* B-Channel poll-cycle */
-#define ISDNLOOP_TIMER_DCREAD (HZ/2) /* D-Channel poll-cycle */
-#define ISDNLOOP_TIMER_ALERTWAIT (10 * HZ) /* Alert timeout */
-#define ISDNLOOP_MAX_SQUEUE 65536 /* Max. outstanding send-data */
-#define ISDNLOOP_BCH 2 /* channels per card */
-
-/*
- * Per card driver data
- */
-typedef struct isdnloop_card {
- struct isdnloop_card *next; /* Pointer to next device struct */
- struct isdnloop_card
- *rcard[ISDNLOOP_BCH]; /* Pointer to 'remote' card */
- int rch[ISDNLOOP_BCH]; /* 'remote' channel */
- int myid; /* Driver-Nr. assigned by linklevel */
- int leased; /* Flag: This Adapter is connected */
- /* to a leased line */
- int sil[ISDNLOOP_BCH]; /* SI's to listen for */
- char eazlist[ISDNLOOP_BCH][11];
- /* EAZ's to listen for */
- char s0num[3][20]; /* 1TR6 base-number or MSN's */
- unsigned short flags; /* Statusflags */
- int ptype; /* Protocol type (1TR6 or Euro) */
- struct timer_list st_timer; /* Timer for Status-Polls */
- struct timer_list rb_timer; /* Timer for B-Channel-Polls */
- struct timer_list
- c_timer[ISDNLOOP_BCH]; /* Timer for Alerting */
- int l2_proto[ISDNLOOP_BCH]; /* Current layer-2-protocol */
- isdn_if interface; /* Interface to upper layer */
- int iptr; /* Index to imsg-buffer */
- char imsg[60]; /* Internal buf for status-parsing */
- int optr; /* Index to omsg-buffer */
- char omsg[60]; /* Internal buf for cmd-parsing */
- char msg_buf[2048]; /* Buffer for status-messages */
- char *msg_buf_write; /* Writepointer for statusbuffer */
- char *msg_buf_read; /* Readpointer for statusbuffer */
- char *msg_buf_end; /* Pointer to end of statusbuffer */
- int sndcount[ISDNLOOP_BCH]; /* Byte-counters for B-Ch.-send */
- struct sk_buff_head
- bqueue[ISDNLOOP_BCH]; /* B-Channel queues */
- struct sk_buff_head dqueue; /* D-Channel queue */
- spinlock_t isdnloop_lock;
-} isdnloop_card;
-
-/*
- * Main driver data
- */
-#ifdef __KERNEL__
-static isdnloop_card *cards = (isdnloop_card *) 0;
-#endif /* __KERNEL__ */
-
-/* Utility-Macros */
-
-#define CID (card->interface.id)
-
-#endif /* defined(__KERNEL__) || defined(__DEBUGVAR__) */
-#endif /* isdnloop_h */
diff --git a/drivers/media/dvb-frontends/tua6100.c b/drivers/media/dvb-frontends/tua6100.c
index f7c3e6be8e4d..2483f614d0e7 100644
--- a/drivers/media/dvb-frontends/tua6100.c
+++ b/drivers/media/dvb-frontends/tua6100.c
@@ -67,8 +67,8 @@ static int tua6100_set_params(struct dvb_frontend *fe)
struct i2c_msg msg1 = { .addr = priv->i2c_address, .flags = 0, .buf = reg1, .len = 4 };
struct i2c_msg msg2 = { .addr = priv->i2c_address, .flags = 0, .buf = reg2, .len = 3 };
-#define _R 4
-#define _P 32
+#define _R_VAL 4
+#define _P_VAL 32
#define _ri 4000000
// setup register 0
@@ -83,14 +83,14 @@ static int tua6100_set_params(struct dvb_frontend *fe)
else
reg1[1] = 0x0c;
- if (_P == 64)
+ if (_P_VAL == 64)
reg1[1] |= 0x40;
if (c->frequency >= 1525000)
reg1[1] |= 0x80;
// register 2
- reg2[1] = (_R >> 8) & 0x03;
- reg2[2] = _R;
+ reg2[1] = (_R_VAL >> 8) & 0x03;
+ reg2[2] = _R_VAL;
if (c->frequency < 1455000)
reg2[1] |= 0x1c;
else if (c->frequency < 1630000)
@@ -102,18 +102,18 @@ static int tua6100_set_params(struct dvb_frontend *fe)
* The N divisor ratio (note: c->frequency is in kHz, but we
* need it in Hz)
*/
- prediv = (c->frequency * _R) / (_ri / 1000);
- div = prediv / _P;
+ prediv = (c->frequency * _R_VAL) / (_ri / 1000);
+ div = prediv / _P_VAL;
reg1[1] |= (div >> 9) & 0x03;
reg1[2] = div >> 1;
reg1[3] = (div << 7);
- priv->frequency = ((div * _P) * (_ri / 1000)) / _R;
+ priv->frequency = ((div * _P_VAL) * (_ri / 1000)) / _R_VAL;
// Finally, calculate and store the value for A
- reg1[3] |= (prediv - (div*_P)) & 0x7f;
+ reg1[3] |= (prediv - (div*_P_VAL)) & 0x7f;
-#undef _R
-#undef _P
+#undef _R_VAL
+#undef _P_VAL
#undef _ri
if (fe->ops.i2c_gate_ctrl)
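The tua6100.c change above is a pure rename of the PLL constants (_R/_P become _R_VAL/_P_VAL), so the divider arithmetic itself is unchanged. As an illustration of that arithmetic, here is a standalone sketch that recomputes the same values outside the driver; the 1550000 kHz input is a made-up example and the snippet is not part of the patch.

/*
 * Illustrative only: redoes the tua6100 divider math with the renamed
 * constants from the hunk above. The sample frequency is hypothetical.
 */
#include <stdio.h>

#define _R_VAL 4
#define _P_VAL 32
#define _ri    4000000

int main(void)
{
	unsigned int frequency = 1550000;	/* kHz, hypothetical input */
	unsigned int prediv, div, a, tuned;

	prediv = (frequency * _R_VAL) / (_ri / 1000);	/* reference count */
	div    = prediv / _P_VAL;			/* main divider */
	a      = (prediv - (div * _P_VAL)) & 0x7f;	/* swallow counter A */
	tuned  = ((div * _P_VAL) * (_ri / 1000)) / _R_VAL;

	/* prints: prediv=1550 div=48 A=14 tuned=1536000 kHz */
	printf("prediv=%u div=%u A=%u tuned=%u kHz\n", prediv, div, a, tuned);
	return 0;
}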
diff --git a/drivers/media/rc/bpf-lirc.c b/drivers/media/rc/bpf-lirc.c
index ee657003c1a1..0a0ce620e4a2 100644
--- a/drivers/media/rc/bpf-lirc.c
+++ b/drivers/media/rc/bpf-lirc.c
@@ -8,6 +8,9 @@
#include <linux/bpf_lirc.h>
#include "rc-core-priv.h"
+#define lirc_rcu_dereference(p) \
+ rcu_dereference_protected(p, lockdep_is_held(&ir_raw_handler_lock))
+
/*
* BPF interface for raw IR
*/
@@ -136,7 +139,7 @@ const struct bpf_verifier_ops lirc_mode2_verifier_ops = {
static int lirc_bpf_attach(struct rc_dev *rcdev, struct bpf_prog *prog)
{
- struct bpf_prog_array __rcu *old_array;
+ struct bpf_prog_array *old_array;
struct bpf_prog_array *new_array;
struct ir_raw_event_ctrl *raw;
int ret;
@@ -154,12 +157,12 @@ static int lirc_bpf_attach(struct rc_dev *rcdev, struct bpf_prog *prog)
goto unlock;
}
- if (raw->progs && bpf_prog_array_length(raw->progs) >= BPF_MAX_PROGS) {
+ old_array = lirc_rcu_dereference(raw->progs);
+ if (old_array && bpf_prog_array_length(old_array) >= BPF_MAX_PROGS) {
ret = -E2BIG;
goto unlock;
}
- old_array = raw->progs;
ret = bpf_prog_array_copy(old_array, NULL, prog, &new_array);
if (ret < 0)
goto unlock;
@@ -174,7 +177,7 @@ unlock:
static int lirc_bpf_detach(struct rc_dev *rcdev, struct bpf_prog *prog)
{
- struct bpf_prog_array __rcu *old_array;
+ struct bpf_prog_array *old_array;
struct bpf_prog_array *new_array;
struct ir_raw_event_ctrl *raw;
int ret;
@@ -192,7 +195,7 @@ static int lirc_bpf_detach(struct rc_dev *rcdev, struct bpf_prog *prog)
goto unlock;
}
- old_array = raw->progs;
+ old_array = lirc_rcu_dereference(raw->progs);
ret = bpf_prog_array_copy(old_array, prog, NULL, &new_array);
/*
* Do not use bpf_prog_array_delete_safe() as we would end up
@@ -223,21 +226,22 @@ void lirc_bpf_run(struct rc_dev *rcdev, u32 sample)
/*
* This should be called once the rc thread has been stopped, so there can be
* no concurrent bpf execution.
+ *
+ * Should be called with the ir_raw_handler_lock held.
*/
void lirc_bpf_free(struct rc_dev *rcdev)
{
struct bpf_prog_array_item *item;
+ struct bpf_prog_array *array;
- if (!rcdev->raw->progs)
+ array = lirc_rcu_dereference(rcdev->raw->progs);
+ if (!array)
return;
- item = rcu_dereference(rcdev->raw->progs)->items;
- while (item->prog) {
+ for (item = array->items; item->prog; item++)
bpf_prog_put(item->prog);
- item++;
- }
- bpf_prog_array_free(rcdev->raw->progs);
+ bpf_prog_array_free(array);
}
int lirc_prog_attach(const union bpf_attr *attr, struct bpf_prog *prog)
@@ -290,7 +294,7 @@ int lirc_prog_detach(const union bpf_attr *attr)
int lirc_prog_query(const union bpf_attr *attr, union bpf_attr __user *uattr)
{
__u32 __user *prog_ids = u64_to_user_ptr(attr->query.prog_ids);
- struct bpf_prog_array __rcu *progs;
+ struct bpf_prog_array *progs;
struct rc_dev *rcdev;
u32 cnt, flags = 0;
int ret;
@@ -311,7 +315,7 @@ int lirc_prog_query(const union bpf_attr *attr, union bpf_attr __user *uattr)
if (ret)
goto put;
- progs = rcdev->raw->progs;
+ progs = lirc_rcu_dereference(rcdev->raw->progs);
cnt = progs ? bpf_prog_array_length(progs) : 0;
if (copy_to_user(&uattr->query.prog_cnt, &cnt, sizeof(cnt))) {
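The bpf-lirc.c hunks above drop the __rcu annotation from local variables and read raw->progs through lirc_rcu_dereference(), i.e. rcu_dereference_protected() keyed to ir_raw_handler_lock. The sketch below shows the same general idiom on invented names (demo_lock, demo_ptr, demo_cfg); it illustrates the pattern rather than the rc-core code itself.

/*
 * Sketch of the locking idiom: the pointer is published with RCU, but all
 * updates happen under one mutex, so update-side code reads it with
 * rcu_dereference_protected() plus a lockdep expression instead of
 * carrying __rcu on local variables.
 */
#include <linux/lockdep.h>
#include <linux/mutex.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct demo_cfg {
	int value;
};

static DEFINE_MUTEX(demo_lock);
static struct demo_cfg __rcu *demo_ptr;

#define demo_rcu_dereference(p) \
	rcu_dereference_protected(p, lockdep_is_held(&demo_lock))

static void demo_replace(struct demo_cfg *new_cfg)
{
	struct demo_cfg *old;

	mutex_lock(&demo_lock);
	old = demo_rcu_dereference(demo_ptr);	/* plain pointer: lock held */
	rcu_assign_pointer(demo_ptr, new_cfg);
	mutex_unlock(&demo_lock);

	if (old) {
		synchronize_rcu();	/* wait out existing readers */
		kfree(old);
	}
}

static int demo_read_value(void)
{
	struct demo_cfg *cfg;
	int val = 0;

	rcu_read_lock();
	cfg = rcu_dereference(demo_ptr);	/* reader side: plain RCU */
	if (cfg)
		val = cfg->value;
	rcu_read_unlock();
	return val;
}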
diff --git a/drivers/net/bonding/bond_3ad.c b/drivers/net/bonding/bond_3ad.c
index dfd6f315d2cc..e3b25f310936 100644
--- a/drivers/net/bonding/bond_3ad.c
+++ b/drivers/net/bonding/bond_3ad.c
@@ -325,17 +325,17 @@ static u16 __get_link_speed(struct port *port)
default:
/* unknown speed value from ethtool. shouldn't happen */
if (slave->speed != SPEED_UNKNOWN)
- pr_warn_once("%s: unknown ethtool speed (%d) for port %d (set it to 0)\n",
+ pr_warn_once("%s: (slave %s): unknown ethtool speed (%d) for port %d (set it to 0)\n",
slave->bond->dev->name,
- slave->speed,
+ slave->dev->name, slave->speed,
port->actor_port_number);
speed = 0;
break;
}
}
- netdev_dbg(slave->bond->dev, "Port %d Received link speed %d update from adapter\n",
- port->actor_port_number, speed);
+ slave_dbg(slave->bond->dev, slave->dev, "Port %d Received link speed %d update from adapter\n",
+ port->actor_port_number, speed);
return speed;
}
@@ -359,14 +359,14 @@ static u8 __get_duplex(struct port *port)
switch (slave->duplex) {
case DUPLEX_FULL:
retval = 0x1;
- netdev_dbg(slave->bond->dev, "Port %d Received status full duplex update from adapter\n",
- port->actor_port_number);
+ slave_dbg(slave->bond->dev, slave->dev, "Port %d Received status full duplex update from adapter\n",
+ port->actor_port_number);
break;
case DUPLEX_HALF:
default:
retval = 0x0;
- netdev_dbg(slave->bond->dev, "Port %d Received status NOT full duplex update from adapter\n",
- port->actor_port_number);
+ slave_dbg(slave->bond->dev, slave->dev, "Port %d Received status NOT full duplex update from adapter\n",
+ port->actor_port_number);
break;
}
}
@@ -500,10 +500,12 @@ static void __record_pdu(struct lacpdu *lacpdu, struct port *port)
if ((port->sm_vars & AD_PORT_MATCHED) &&
(lacpdu->actor_state & AD_STATE_SYNCHRONIZATION)) {
partner->port_state |= AD_STATE_SYNCHRONIZATION;
- pr_debug("%s partner sync=1\n", port->slave->dev->name);
+ slave_dbg(port->slave->bond->dev, port->slave->dev,
+ "partner sync=1\n");
} else {
partner->port_state &= ~AD_STATE_SYNCHRONIZATION;
- pr_debug("%s partner sync=0\n", port->slave->dev->name);
+ slave_dbg(port->slave->bond->dev, port->slave->dev,
+ "partner sync=0\n");
}
}
}
@@ -789,8 +791,9 @@ static inline void __update_lacpdu_from_port(struct port *port)
lacpdu->actor_port_priority = htons(port->actor_port_priority);
lacpdu->actor_port = htons(port->actor_port_number);
lacpdu->actor_state = port->actor_oper_port_state;
- pr_debug("update lacpdu: %s, actor port state %x\n",
- port->slave->dev->name, port->actor_oper_port_state);
+ slave_dbg(port->slave->bond->dev, port->slave->dev,
+ "update lacpdu: actor port state %x\n",
+ port->actor_oper_port_state);
/* lacpdu->reserved_3_1 initialized
* lacpdu->tlv_type_partner_info initialized
@@ -1022,11 +1025,11 @@ static void ad_mux_machine(struct port *port, bool *update_slave_arr)
/* check if the state machine was changed */
if (port->sm_mux_state != last_state) {
- pr_debug("Mux Machine: Port=%d (%s), Last State=%d, Curr State=%d\n",
- port->actor_port_number,
- port->slave->dev->name,
- last_state,
- port->sm_mux_state);
+ slave_dbg(port->slave->bond->dev, port->slave->dev,
+ "Mux Machine: Port=%d, Last State=%d, Curr State=%d\n",
+ port->actor_port_number,
+ last_state,
+ port->sm_mux_state);
switch (port->sm_mux_state) {
case AD_MUX_DETACHED:
port->actor_oper_port_state &= ~AD_STATE_SYNCHRONIZATION;
@@ -1140,11 +1143,11 @@ static void ad_rx_machine(struct lacpdu *lacpdu, struct port *port)
/* check if the State machine was changed or new lacpdu arrived */
if ((port->sm_rx_state != last_state) || (lacpdu)) {
- pr_debug("Rx Machine: Port=%d (%s), Last State=%d, Curr State=%d\n",
- port->actor_port_number,
- port->slave->dev->name,
- last_state,
- port->sm_rx_state);
+ slave_dbg(port->slave->bond->dev, port->slave->dev,
+ "Rx Machine: Port=%d, Last State=%d, Curr State=%d\n",
+ port->actor_port_number,
+ last_state,
+ port->sm_rx_state);
switch (port->sm_rx_state) {
case AD_RX_INITIALIZE:
if (!(port->actor_oper_port_key & AD_DUPLEX_KEY_MASKS))
@@ -1192,9 +1195,8 @@ static void ad_rx_machine(struct lacpdu *lacpdu, struct port *port)
/* detect loopback situation */
if (MAC_ADDRESS_EQUAL(&(lacpdu->actor_system),
&(port->actor_system))) {
- netdev_err(port->slave->bond->dev, "An illegal loopback occurred on adapter (%s)\n"
- "Check the configuration to verify that all adapters are connected to 802.3ad compliant switch ports\n",
- port->slave->dev->name);
+ slave_err(port->slave->bond->dev, port->slave->dev, "An illegal loopback occurred on slave\n"
+ "Check the configuration to verify that all adapters are connected to 802.3ad compliant switch ports\n");
return;
}
__update_selected(lacpdu, port);
@@ -1263,8 +1265,10 @@ static void ad_tx_machine(struct port *port)
__update_lacpdu_from_port(port);
if (ad_lacpdu_send(port) >= 0) {
- pr_debug("Sent LACPDU on port %d\n",
- port->actor_port_number);
+ slave_dbg(port->slave->bond->dev,
+ port->slave->dev,
+ "Sent LACPDU on port %d\n",
+ port->actor_port_number);
/* mark ntt as false, so it will not be sent
* again until demanded
@@ -1343,9 +1347,10 @@ static void ad_periodic_machine(struct port *port)
/* check if the state machine was changed */
if (port->sm_periodic_state != last_state) {
- pr_debug("Periodic Machine: Port=%d, Last State=%d, Curr State=%d\n",
- port->actor_port_number, last_state,
- port->sm_periodic_state);
+ slave_dbg(port->slave->bond->dev, port->slave->dev,
+ "Periodic Machine: Port=%d, Last State=%d, Curr State=%d\n",
+ port->actor_port_number, last_state,
+ port->sm_periodic_state);
switch (port->sm_periodic_state) {
case AD_NO_PERIODIC:
port->sm_periodic_timer_counter = 0;
@@ -1421,9 +1426,9 @@ static void ad_port_selection_logic(struct port *port, bool *update_slave_arr)
port->next_port_in_aggregator = NULL;
port->actor_port_aggregator_identifier = 0;
- netdev_dbg(bond->dev, "Port %d left LAG %d\n",
- port->actor_port_number,
- temp_aggregator->aggregator_identifier);
+ slave_dbg(bond->dev, port->slave->dev, "Port %d left LAG %d\n",
+ port->actor_port_number,
+ temp_aggregator->aggregator_identifier);
/* if the aggregator is empty, clear its
* parameters, and set it ready to be attached
*/
@@ -1436,10 +1441,10 @@ static void ad_port_selection_logic(struct port *port, bool *update_slave_arr)
/* meaning: the port was related to an aggregator
* but was not on the aggregator port list
*/
- net_warn_ratelimited("%s: Warning: Port %d (on %s) was related to aggregator %d but was not on its port list\n",
+ net_warn_ratelimited("%s: (slave %s): Warning: Port %d was related to aggregator %d but was not on its port list\n",
port->slave->bond->dev->name,
- port->actor_port_number,
port->slave->dev->name,
+ port->actor_port_number,
port->aggregator->aggregator_identifier);
}
}
@@ -1470,9 +1475,9 @@ static void ad_port_selection_logic(struct port *port, bool *update_slave_arr)
port->next_port_in_aggregator = aggregator->lag_ports;
port->aggregator->num_of_ports++;
aggregator->lag_ports = port;
- netdev_dbg(bond->dev, "Port %d joined LAG %d(existing LAG)\n",
- port->actor_port_number,
- port->aggregator->aggregator_identifier);
+ slave_dbg(bond->dev, slave->dev, "Port %d joined LAG %d (existing LAG)\n",
+ port->actor_port_number,
+ port->aggregator->aggregator_identifier);
/* mark this port as selected */
port->sm_vars |= AD_PORT_SELECTED;
@@ -1517,12 +1522,13 @@ static void ad_port_selection_logic(struct port *port, bool *update_slave_arr)
/* mark this port as selected */
port->sm_vars |= AD_PORT_SELECTED;
- netdev_dbg(bond->dev, "Port %d joined LAG %d(new LAG)\n",
- port->actor_port_number,
- port->aggregator->aggregator_identifier);
+ slave_dbg(bond->dev, port->slave->dev, "Port %d joined LAG %d (new LAG)\n",
+ port->actor_port_number,
+ port->aggregator->aggregator_identifier);
} else {
- netdev_err(bond->dev, "Port %d (on %s) did not find a suitable aggregator\n",
- port->actor_port_number, port->slave->dev->name);
+ slave_err(bond->dev, port->slave->dev,
+ "Port %d did not find a suitable aggregator\n",
+ port->actor_port_number);
}
}
/* if all aggregator's ports are READY_N == TRUE, set ready=TRUE
@@ -1601,8 +1607,9 @@ static struct aggregator *ad_agg_selection_test(struct aggregator *best,
break;
default:
- net_warn_ratelimited("%s: Impossible agg select mode %d\n",
+ net_warn_ratelimited("%s: (slave %s): Impossible agg select mode %d\n",
curr->slave->bond->dev->name,
+ curr->slave->dev->name,
__get_agg_selection_mode(curr->lag_ports));
break;
}
@@ -1703,36 +1710,37 @@ static void ad_agg_selection_logic(struct aggregator *agg,
/* if there is new best aggregator, activate it */
if (best) {
- netdev_dbg(bond->dev, "best Agg=%d; P=%d; a k=%d; p k=%d; Ind=%d; Act=%d\n",
+ netdev_dbg(bond->dev, "(slave %s): best Agg=%d; P=%d; a k=%d; p k=%d; Ind=%d; Act=%d\n",
+ best->slave ? best->slave->dev->name : "NULL",
best->aggregator_identifier, best->num_of_ports,
best->actor_oper_aggregator_key,
best->partner_oper_aggregator_key,
best->is_individual, best->is_active);
- netdev_dbg(bond->dev, "best ports %p slave %p %s\n",
- best->lag_ports, best->slave,
- best->slave ? best->slave->dev->name : "NULL");
+ netdev_dbg(bond->dev, "(slave %s): best ports %p slave %p\n",
+ best->slave ? best->slave->dev->name : "NULL",
+ best->lag_ports, best->slave);
bond_for_each_slave_rcu(bond, slave, iter) {
agg = &(SLAVE_AD_INFO(slave)->aggregator);
- netdev_dbg(bond->dev, "Agg=%d; P=%d; a k=%d; p k=%d; Ind=%d; Act=%d\n",
- agg->aggregator_identifier, agg->num_of_ports,
- agg->actor_oper_aggregator_key,
- agg->partner_oper_aggregator_key,
- agg->is_individual, agg->is_active);
+ slave_dbg(bond->dev, slave->dev, "Agg=%d; P=%d; a k=%d; p k=%d; Ind=%d; Act=%d\n",
+ agg->aggregator_identifier, agg->num_of_ports,
+ agg->actor_oper_aggregator_key,
+ agg->partner_oper_aggregator_key,
+ agg->is_individual, agg->is_active);
}
- /* check if any partner replys */
- if (best->is_individual) {
+ /* check if any partner replies */
+ if (best->is_individual)
net_warn_ratelimited("%s: Warning: No 802.3ad response from the link partner for any adapters in the bond\n",
- best->slave ?
- best->slave->bond->dev->name : "NULL");
- }
+ bond->dev->name);
best->is_active = 1;
- netdev_dbg(bond->dev, "LAG %d chosen as the active LAG\n",
+ netdev_dbg(bond->dev, "(slave %s): LAG %d chosen as the active LAG\n",
+ best->slave ? best->slave->dev->name : "NULL",
best->aggregator_identifier);
- netdev_dbg(bond->dev, "Agg=%d; P=%d; a k=%d; p k=%d; Ind=%d; Act=%d\n",
+ netdev_dbg(bond->dev, "(slave %s): Agg=%d; P=%d; a k=%d; p k=%d; Ind=%d; Act=%d\n",
+ best->slave ? best->slave->dev->name : "NULL",
best->aggregator_identifier, best->num_of_ports,
best->actor_oper_aggregator_key,
best->partner_oper_aggregator_key,
@@ -1788,7 +1796,9 @@ static void ad_clear_agg(struct aggregator *aggregator)
aggregator->lag_ports = NULL;
aggregator->is_active = 0;
aggregator->num_of_ports = 0;
- pr_debug("LAG %d was cleared\n",
+ pr_debug("%s: LAG %d was cleared\n",
+ aggregator->slave ?
+ aggregator->slave->dev->name : "NULL",
aggregator->aggregator_identifier);
}
}
@@ -1885,9 +1895,10 @@ static void ad_enable_collecting_distributing(struct port *port,
bool *update_slave_arr)
{
if (port->aggregator->is_active) {
- pr_debug("Enabling port %d(LAG %d)\n",
- port->actor_port_number,
- port->aggregator->aggregator_identifier);
+ slave_dbg(port->slave->bond->dev, port->slave->dev,
+ "Enabling port %d (LAG %d)\n",
+ port->actor_port_number,
+ port->aggregator->aggregator_identifier);
__enable_port(port);
/* Slave array needs update */
*update_slave_arr = true;
@@ -1905,9 +1916,10 @@ static void ad_disable_collecting_distributing(struct port *port,
if (port->aggregator &&
!MAC_ADDRESS_EQUAL(&(port->aggregator->partner_system),
&(null_mac_addr))) {
- pr_debug("Disabling port %d(LAG %d)\n",
- port->actor_port_number,
- port->aggregator->aggregator_identifier);
+ slave_dbg(port->slave->bond->dev, port->slave->dev,
+ "Disabling port %d (LAG %d)\n",
+ port->actor_port_number,
+ port->aggregator->aggregator_identifier);
__disable_port(port);
/* Slave array needs an update */
*update_slave_arr = true;
@@ -1920,7 +1932,7 @@ static void ad_disable_collecting_distributing(struct port *port,
* @port: the port we're looking at
*/
static void ad_marker_info_received(struct bond_marker *marker_info,
- struct port *port)
+ struct port *port)
{
struct bond_marker marker;
@@ -1933,10 +1945,10 @@ static void ad_marker_info_received(struct bond_marker *marker_info,
marker.tlv_type = AD_MARKER_RESPONSE_SUBTYPE;
/* send the marker response */
- if (ad_marker_send(port, &marker) >= 0) {
- pr_debug("Sent Marker Response on port %d\n",
- port->actor_port_number);
- }
+ if (ad_marker_send(port, &marker) >= 0)
+ slave_dbg(port->slave->bond->dev, port->slave->dev,
+ "Sent Marker Response on port %d\n",
+ port->actor_port_number);
}
/**
@@ -2085,13 +2097,12 @@ void bond_3ad_unbind_slave(struct slave *slave)
/* if slave is null, the whole port is not initialized */
if (!port->slave) {
- netdev_warn(bond->dev, "Trying to unbind an uninitialized port on %s\n",
- slave->dev->name);
+ slave_warn(bond->dev, slave->dev, "Trying to unbind an uninitialized port\n");
goto out;
}
- netdev_dbg(bond->dev, "Unbinding Link Aggregation Group %d\n",
- aggregator->aggregator_identifier);
+ slave_dbg(bond->dev, slave->dev, "Unbinding Link Aggregation Group %d\n",
+ aggregator->aggregator_identifier);
/* Tell the partner that this port is not suitable for aggregation */
port->actor_oper_port_state &= ~AD_STATE_SYNCHRONIZATION;
@@ -2129,13 +2140,13 @@ void bond_3ad_unbind_slave(struct slave *slave)
* new aggregator
*/
if ((new_aggregator) && ((!new_aggregator->lag_ports) || ((new_aggregator->lag_ports == port) && !new_aggregator->lag_ports->next_port_in_aggregator))) {
- netdev_dbg(bond->dev, "Some port(s) related to LAG %d - replacing with LAG %d\n",
- aggregator->aggregator_identifier,
- new_aggregator->aggregator_identifier);
+ slave_dbg(bond->dev, slave->dev, "Some port(s) related to LAG %d - replacing with LAG %d\n",
+ aggregator->aggregator_identifier,
+ new_aggregator->aggregator_identifier);
if ((new_aggregator->lag_ports == port) &&
new_aggregator->is_active) {
- netdev_info(bond->dev, "Removing an active aggregator\n");
+ slave_info(bond->dev, slave->dev, "Removing an active aggregator\n");
select_new_active_agg = 1;
}
@@ -2166,7 +2177,7 @@ void bond_3ad_unbind_slave(struct slave *slave)
ad_agg_selection_logic(__get_first_agg(port),
&dummy_slave_update);
} else {
- netdev_warn(bond->dev, "unbinding aggregator, and could not find a new aggregator for its ports\n");
+ slave_warn(bond->dev, slave->dev, "unbinding aggregator, and could not find a new aggregator for its ports\n");
}
} else {
/* in case that the only port related to this
@@ -2175,7 +2186,7 @@ void bond_3ad_unbind_slave(struct slave *slave)
select_new_active_agg = aggregator->is_active;
ad_clear_agg(aggregator);
if (select_new_active_agg) {
- netdev_info(bond->dev, "Removing an active aggregator\n");
+ slave_info(bond->dev, slave->dev, "Removing an active aggregator\n");
/* select new active aggregator */
temp_aggregator = __get_first_agg(port);
if (temp_aggregator)
@@ -2185,7 +2196,7 @@ void bond_3ad_unbind_slave(struct slave *slave)
}
}
- netdev_dbg(bond->dev, "Unbinding port %d\n", port->actor_port_number);
+ slave_dbg(bond->dev, slave->dev, "Unbinding port %d\n", port->actor_port_number);
/* find the aggregator that this port is connected to */
bond_for_each_slave(bond, slave_iter, iter) {
@@ -2208,7 +2219,7 @@ void bond_3ad_unbind_slave(struct slave *slave)
select_new_active_agg = temp_aggregator->is_active;
ad_clear_agg(temp_aggregator);
if (select_new_active_agg) {
- netdev_info(bond->dev, "Removing an active aggregator\n");
+ slave_info(bond->dev, slave->dev, "Removing an active aggregator\n");
/* select new active aggregator */
ad_agg_selection_logic(__get_first_agg(port),
&dummy_slave_update);
@@ -2379,9 +2390,9 @@ static int bond_3ad_rx_indication(struct lacpdu *lacpdu, struct slave *slave)
switch (lacpdu->subtype) {
case AD_TYPE_LACPDU:
ret = RX_HANDLER_CONSUMED;
- netdev_dbg(slave->bond->dev,
- "Received LACPDU on port %d slave %s\n",
- port->actor_port_number, slave->dev->name);
+ slave_dbg(slave->bond->dev, slave->dev,
+ "Received LACPDU on port %d\n",
+ port->actor_port_number);
/* Protect against concurrent state machines */
spin_lock(&slave->bond->mode_lock);
ad_rx_machine(lacpdu, port);
@@ -2395,18 +2406,18 @@ static int bond_3ad_rx_indication(struct lacpdu *lacpdu, struct slave *slave)
marker = (struct bond_marker *)lacpdu;
switch (marker->tlv_type) {
case AD_MARKER_INFORMATION_SUBTYPE:
- netdev_dbg(slave->bond->dev, "Received Marker Information on port %d\n",
- port->actor_port_number);
+ slave_dbg(slave->bond->dev, slave->dev, "Received Marker Information on port %d\n",
+ port->actor_port_number);
ad_marker_info_received(marker, port);
break;
case AD_MARKER_RESPONSE_SUBTYPE:
- netdev_dbg(slave->bond->dev, "Received Marker Response on port %d\n",
- port->actor_port_number);
+ slave_dbg(slave->bond->dev, slave->dev, "Received Marker Response on port %d\n",
+ port->actor_port_number);
ad_marker_response_received(marker, port);
break;
default:
- netdev_dbg(slave->bond->dev, "Received an unknown Marker subtype on slot %d\n",
- port->actor_port_number);
+ slave_dbg(slave->bond->dev, slave->dev, "Received an unknown Marker subtype on port %d\n",
+ port->actor_port_number);
stat = &SLAVE_AD_INFO(slave)->stats.marker_unknown_rx;
atomic64_inc(stat);
stat = &BOND_AD_INFO(bond).stats.marker_unknown_rx;
@@ -2456,9 +2467,10 @@ static void ad_update_actor_keys(struct port *port, bool reset)
if (!reset) {
if (!speed) {
- netdev_err(port->slave->dev,
- "speed changed to 0 for port %s",
- port->slave->dev->name);
+ slave_err(port->slave->bond->dev,
+ port->slave->dev,
+ "speed changed to 0 on port %d\n",
+ port->actor_port_number);
} else if (duplex && ospeed != speed) {
/* Speed change restarts LACP state-machine */
port->sm_vars |= AD_PORT_BEGIN;
@@ -2483,17 +2495,16 @@ void bond_3ad_adapter_speed_duplex_changed(struct slave *slave)
/* if slave is null, the whole port is not initialized */
if (!port->slave) {
- netdev_warn(slave->bond->dev,
- "speed/duplex changed for uninitialized port %s\n",
- slave->dev->name);
+ slave_warn(slave->bond->dev, slave->dev,
+ "speed/duplex changed for uninitialized port\n");
return;
}
spin_lock_bh(&slave->bond->mode_lock);
ad_update_actor_keys(port, false);
spin_unlock_bh(&slave->bond->mode_lock);
- netdev_dbg(slave->bond->dev, "Port %d slave %s changed speed/duplex\n",
- port->actor_port_number, slave->dev->name);
+ slave_dbg(slave->bond->dev, slave->dev, "Port %d changed speed/duplex\n",
+ port->actor_port_number);
}
/**
@@ -2513,8 +2524,7 @@ void bond_3ad_handle_link_change(struct slave *slave, char link)
/* if slave is null, the whole port is not initialized */
if (!port->slave) {
- netdev_warn(slave->bond->dev, "link status changed for uninitialized port on %s\n",
- slave->dev->name);
+ slave_warn(slave->bond->dev, slave->dev, "link status changed for uninitialized port\n");
return;
}
@@ -2539,9 +2549,9 @@ void bond_3ad_handle_link_change(struct slave *slave, char link)
spin_unlock_bh(&slave->bond->mode_lock);
- netdev_dbg(slave->bond->dev, "Port %d changed link status to %s\n",
- port->actor_port_number,
- link == BOND_LINK_UP ? "UP" : "DOWN");
+ slave_dbg(slave->bond->dev, slave->dev, "Port %d changed link status to %s\n",
+ port->actor_port_number,
+ link == BOND_LINK_UP ? "UP" : "DOWN");
/* RTNL is held and mode_lock is released so it's safe
* to update slave_array here.
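Most of the bond_3ad.c hunks above convert netdev_dbg()/pr_debug() calls to slave_dbg()/slave_info()/slave_warn()/slave_err(), which identify both the bond and the slave interface in one message. One plausible shape for such a helper, consistent with the "(slave %s): " prefixes visible in the converted strings, is sketched below; it is illustrative only and not necessarily the exact macro the series introduces.

/*
 * Hypothetical slave-aware logging wrappers: netdev_*() already prints the
 * bond device, so the wrapper only adds a "(slave %s): " prefix with the
 * slave's name.
 */
#include <linux/netdevice.h>

#define demo_slave_dbg(bond_dev, slave_dev, fmt, ...)			\
	netdev_dbg(bond_dev, "(slave %s): " fmt,			\
		   (slave_dev)->name, ##__VA_ARGS__)

#define demo_slave_err(bond_dev, slave_dev, fmt, ...)			\
	netdev_err(bond_dev, "(slave %s): " fmt,			\
		   (slave_dev)->name, ##__VA_ARGS__)

/*
 * Usage mirrors the converted call sites, e.g.:
 *   demo_slave_dbg(bond->dev, slave->dev, "Port %d joined LAG %d\n",
 *                  port->actor_port_number, agg->aggregator_identifier);
 */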
diff --git a/drivers/net/bonding/bond_alb.c b/drivers/net/bonding/bond_alb.c
index 790e41c6fdd0..8c79bad2a9a5 100644
--- a/drivers/net/bonding/bond_alb.c
+++ b/drivers/net/bonding/bond_alb.c
@@ -300,7 +300,7 @@ static int rlb_arp_recv(const struct sk_buff *skb, struct bonding *bond,
if (arp->op_code == htons(ARPOP_REPLY)) {
/* update rx hash table for this ARP */
rlb_update_entry_from_arp(bond, arp);
- netdev_dbg(bond->dev, "Server received an ARP Reply from client\n");
+ slave_dbg(bond->dev, slave->dev, "Server received an ARP Reply from client\n");
}
out:
return RX_HANDLER_ANOTHER;
@@ -442,8 +442,9 @@ static void rlb_update_client(struct rlb_client_info *client_info)
client_info->slave->dev->dev_addr,
client_info->mac_dst);
if (!skb) {
- netdev_err(client_info->slave->bond->dev,
- "failed to create an ARP packet\n");
+ slave_err(client_info->slave->bond->dev,
+ client_info->slave->dev,
+ "failed to create an ARP packet\n");
continue;
}
@@ -667,14 +668,15 @@ static struct slave *rlb_arp_xmit(struct sk_buff *skb, struct bonding *bond)
if (tx_slave)
bond_hw_addr_copy(arp->mac_src, tx_slave->dev->dev_addr,
tx_slave->dev->addr_len);
- netdev_dbg(bond->dev, "Server sent ARP Reply packet\n");
+ netdev_dbg(bond->dev, "(slave %s): Server sent ARP Reply packet\n",
+ tx_slave ? tx_slave->dev->name : "NULL");
} else if (arp->op_code == htons(ARPOP_REQUEST)) {
/* Create an entry in the rx_hashtbl for this client as a
* place holder.
* When the arp reply is received the entry will be updated
* with the correct unicast address of the client.
*/
- rlb_choose_channel(skb, bond);
+ tx_slave = rlb_choose_channel(skb, bond);
/* The ARP reply packets must be delayed so that
* they can cancel out the influence of the ARP request.
@@ -687,7 +689,8 @@ static struct slave *rlb_arp_xmit(struct sk_buff *skb, struct bonding *bond)
* updated with their assigned mac.
*/
rlb_req_update_subnet_clients(bond, arp->ip_src);
- netdev_dbg(bond->dev, "Server sent ARP Request packet\n");
+ netdev_dbg(bond->dev, "(slave %s): Server sent ARP Request packet\n",
+ tx_slave ? tx_slave->dev->name : "NULL");
}
return tx_slave;
@@ -923,9 +926,8 @@ static void alb_send_lp_vid(struct slave *slave, u8 mac_addr[],
skb->priority = TC_PRIO_CONTROL;
skb->dev = slave->dev;
- netdev_dbg(slave->bond->dev,
- "Send learning packet: dev %s mac %pM vlan %d\n",
- slave->dev->name, mac_addr, vid);
+ slave_dbg(slave->bond->dev, slave->dev,
+ "Send learning packet: mac %pM vlan %d\n", mac_addr, vid);
if (vid)
__vlan_hwaccel_put_tag(skb, vlan_proto, vid);
@@ -1016,8 +1018,7 @@ static int alb_set_slave_mac_addr(struct slave *slave, u8 addr[],
memcpy(ss.__data, addr, len);
ss.ss_family = dev->type;
if (dev_set_mac_address(dev, (struct sockaddr *)&ss, NULL)) {
- netdev_err(slave->bond->dev, "dev_set_mac_address of dev %s failed! ALB mode requires that the base driver support setting the hw address also when the network device's interface is open\n",
- dev->name);
+ slave_err(slave->bond->dev, dev, "dev_set_mac_address on slave failed! ALB mode requires that the base driver support setting the hw address also when the network device's interface is open\n");
return -EOPNOTSUPP;
}
return 0;
@@ -1192,12 +1193,11 @@ static int alb_handle_addr_collision_on_attach(struct bonding *bond, struct slav
alb_set_slave_mac_addr(slave, free_mac_slave->perm_hwaddr,
free_mac_slave->dev->addr_len);
- netdev_warn(bond->dev, "the hw address of slave %s is in use by the bond; giving it the hw address of %s\n",
- slave->dev->name, free_mac_slave->dev->name);
+ slave_warn(bond->dev, slave->dev, "the slave hw address is in use by the bond; giving it the hw address of %s\n",
+ free_mac_slave->dev->name);
} else if (has_bond_addr) {
- netdev_err(bond->dev, "the hw address of slave %s is in use by the bond; couldn't find a slave with a free hw address to give it (this should not have happened)\n",
- slave->dev->name);
+ slave_err(bond->dev, slave->dev, "the slave hw address is in use by the bond; couldn't find a slave with a free hw address to give it (this should not have happened)\n");
return -EFAULT;
}
diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index 799fc38c5c34..9b7016abca2f 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -613,8 +613,8 @@ static int bond_set_dev_addr(struct net_device *bond_dev,
{
int err;
- netdev_dbg(bond_dev, "bond_dev=%p slave_dev=%p slave_dev->name=%s slave_dev->addr_len=%d\n",
- bond_dev, slave_dev, slave_dev->name, slave_dev->addr_len);
+ slave_dbg(bond_dev, slave_dev, "bond_dev=%p slave_dev=%p slave_dev->addr_len=%d\n",
+ bond_dev, slave_dev, slave_dev->addr_len);
err = dev_pre_changeaddr_notify(bond_dev, slave_dev->dev_addr, NULL);
if (err)
return err;
@@ -661,8 +661,8 @@ static void bond_do_fail_over_mac(struct bonding *bond,
if (new_active) {
rv = bond_set_dev_addr(bond->dev, new_active->dev);
if (rv)
- netdev_err(bond->dev, "Error %d setting MAC of slave %s\n",
- -rv, bond->dev->name);
+ slave_err(bond->dev, new_active->dev, "Error %d setting bond MAC from slave\n",
+ -rv);
}
break;
case BOND_FOM_FOLLOW:
@@ -692,8 +692,8 @@ static void bond_do_fail_over_mac(struct bonding *bond,
rv = dev_set_mac_address(new_active->dev,
(struct sockaddr *)&ss, NULL);
if (rv) {
- netdev_err(bond->dev, "Error %d setting MAC of slave %s\n",
- -rv, new_active->dev->name);
+ slave_err(bond->dev, new_active->dev, "Error %d setting MAC of new active slave\n",
+ -rv);
goto out;
}
@@ -707,8 +707,8 @@ static void bond_do_fail_over_mac(struct bonding *bond,
rv = dev_set_mac_address(old_active->dev,
(struct sockaddr *)&ss, NULL);
if (rv)
- netdev_err(bond->dev, "Error %d setting MAC of slave %s\n",
- -rv, new_active->dev->name);
+ slave_err(bond->dev, old_active->dev, "Error %d setting MAC of old active slave\n",
+ -rv);
out:
break;
default:
@@ -796,6 +796,8 @@ static bool bond_should_notify_peers(struct bonding *bond)
slave ? slave->dev->name : "NULL");
if (!slave || !bond->send_peer_notif ||
+ bond->send_peer_notif %
+ max(1, bond->params.peer_notif_delay) != 0 ||
!netif_carrier_ok(bond->dev) ||
test_bit(__LINK_STATE_LINKWATCH_PENDING, &slave->dev->state))
return false;
@@ -834,9 +836,8 @@ void bond_change_active_slave(struct bonding *bond, struct slave *new_active)
if (new_active->link == BOND_LINK_BACK) {
if (bond_uses_primary(bond)) {
- netdev_info(bond->dev, "making interface %s the new active one %d ms earlier\n",
- new_active->dev->name,
- (bond->params.updelay - new_active->delay) * bond->params.miimon);
+ slave_info(bond->dev, new_active->dev, "making interface the new active one %d ms earlier\n",
+ (bond->params.updelay - new_active->delay) * bond->params.miimon);
}
new_active->delay = 0;
@@ -850,8 +851,7 @@ void bond_change_active_slave(struct bonding *bond, struct slave *new_active)
bond_alb_handle_link_change(bond, new_active, BOND_LINK_UP);
} else {
if (bond_uses_primary(bond)) {
- netdev_info(bond->dev, "making interface %s the new active one\n",
- new_active->dev->name);
+ slave_info(bond->dev, new_active->dev, "making interface the new active one\n");
}
}
}
@@ -888,15 +888,18 @@ void bond_change_active_slave(struct bonding *bond, struct slave *new_active)
if (netif_running(bond->dev)) {
bond->send_peer_notif =
- bond->params.num_peer_notif;
+ bond->params.num_peer_notif *
+ max(1, bond->params.peer_notif_delay);
should_notify_peers =
bond_should_notify_peers(bond);
}
call_netdevice_notifiers(NETDEV_BONDING_FAILOVER, bond->dev);
- if (should_notify_peers)
+ if (should_notify_peers) {
+ bond->send_peer_notif--;
call_netdevice_notifiers(NETDEV_NOTIFY_PEERS,
bond->dev);
+ }
}
}
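The two hunks above change how peer notifications are paced: send_peer_notif is now pre-scaled by max(1, peer_notif_delay), and bond_should_notify_peers() additionally requires the counter to be a multiple of that delay before a notification is emitted. The standalone simulation below walks through that arithmetic, assuming one counter decrement per monitoring run; the parameter values are made up.

/*
 * Illustrative only: with num_peer_notif notifications requested and a
 * delay of peer_notif_delay monitor intervals, notifications fire on the
 * runs where the counter is a multiple of the delay.
 */
#include <stdio.h>

static int max_int(int a, int b) { return a > b ? a : b; }

int main(void)
{
	int num_peer_notif = 2;		/* hypothetical setting */
	int peer_notif_delay = 3;	/* hypothetical, in miimon intervals */
	int send_peer_notif = num_peer_notif * max_int(1, peer_notif_delay);
	int tick = 0;

	while (send_peer_notif > 0) {
		int notify = (send_peer_notif %
			      max_int(1, peer_notif_delay)) == 0;

		printf("tick %d: counter=%d notify=%s\n",
		       tick, send_peer_notif, notify ? "yes" : "no");
		send_peer_notif--;	/* one decrement per monitor run */
		tick++;
	}
	/* prints notifications at ticks 0 and 3: two events, three intervals apart */
	return 0;
}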
@@ -939,7 +942,7 @@ void bond_select_active_slave(struct bonding *bond)
return;
if (netif_carrier_ok(bond->dev))
- netdev_info(bond->dev, "first active interface up!\n");
+ netdev_info(bond->dev, "active interface up!\n");
else
netdev_info(bond->dev, "now running without any active interface!\n");
}
@@ -1077,12 +1080,16 @@ static netdev_features_t bond_fix_features(struct net_device *dev,
#define BOND_ENC_FEATURES (NETIF_F_HW_CSUM | NETIF_F_SG | \
NETIF_F_RXCSUM | NETIF_F_ALL_TSO)
+#define BOND_MPLS_FEATURES (NETIF_F_HW_CSUM | NETIF_F_SG | \
+ NETIF_F_ALL_TSO)
+
static void bond_compute_features(struct bonding *bond)
{
unsigned int dst_release_flag = IFF_XMIT_DST_RELEASE |
IFF_XMIT_DST_RELEASE_PERM;
netdev_features_t vlan_features = BOND_VLAN_FEATURES;
netdev_features_t enc_features = BOND_ENC_FEATURES;
+ netdev_features_t mpls_features = BOND_MPLS_FEATURES;
struct net_device *bond_dev = bond->dev;
struct list_head *iter;
struct slave *slave;
@@ -1093,6 +1100,7 @@ static void bond_compute_features(struct bonding *bond)
if (!bond_has_slaves(bond))
goto done;
vlan_features &= NETIF_F_ALL_FOR_ALL;
+ mpls_features &= NETIF_F_ALL_FOR_ALL;
bond_for_each_slave(bond, slave, iter) {
vlan_features = netdev_increment_features(vlan_features,
@@ -1101,6 +1109,11 @@ static void bond_compute_features(struct bonding *bond)
enc_features = netdev_increment_features(enc_features,
slave->dev->hw_enc_features,
BOND_ENC_FEATURES);
+
+ mpls_features = netdev_increment_features(mpls_features,
+ slave->dev->mpls_features,
+ BOND_MPLS_FEATURES);
+
dst_release_flag &= slave->dev->priv_flags;
if (slave->dev->hard_header_len > max_hard_header_len)
max_hard_header_len = slave->dev->hard_header_len;
@@ -1114,6 +1127,7 @@ done:
bond_dev->vlan_features = vlan_features;
bond_dev->hw_enc_features = enc_features | NETIF_F_GSO_ENCAP_ALL |
NETIF_F_GSO_UDP_L4;
+ bond_dev->mpls_features = mpls_features;
bond_dev->gso_max_segs = gso_max_segs;
netif_set_gso_max_size(bond_dev, gso_max_size);
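The bond_compute_features() hunk above folds each slave's mpls_features into the bond the same way the vlan and encap feature sets are handled, after masking with NETIF_F_ALL_FOR_ALL. As a rough model of that "all for all" masking only, the toy program below intersects invented feature bits across two slaves; the real netdev_increment_features() also handles one-for-all and never-changed flag classes, which this sketch deliberately ignores.

/*
 * Toy model: an "all for all" offload can only be advertised by the bond
 * if every slave supports it, i.e. a plain bitwise intersection.
 * Feature bit names are invented.
 */
#include <stdio.h>

#define DEMO_F_HW_CSUM  (1u << 0)
#define DEMO_F_SG       (1u << 1)
#define DEMO_F_TSO      (1u << 2)

int main(void)
{
	unsigned int slave_feats[] = {
		DEMO_F_HW_CSUM | DEMO_F_SG | DEMO_F_TSO,	/* slave 0 */
		DEMO_F_HW_CSUM | DEMO_F_SG,			/* slave 1: no TSO */
	};
	unsigned int bond_feats = DEMO_F_HW_CSUM | DEMO_F_SG | DEMO_F_TSO;
	unsigned int i;

	for (i = 0; i < sizeof(slave_feats) / sizeof(slave_feats[0]); i++)
		bond_feats &= slave_feats[i];	/* keep what all slaves offer */

	printf("bond features: 0x%x\n", bond_feats);	/* 0x3: CSUM + SG */
	return 0;
}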
@@ -1369,15 +1383,14 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
if (!bond->params.use_carrier &&
slave_dev->ethtool_ops->get_link == NULL &&
slave_ops->ndo_do_ioctl == NULL) {
- netdev_warn(bond_dev, "no link monitoring support for %s\n",
- slave_dev->name);
+ slave_warn(bond_dev, slave_dev, "no link monitoring support\n");
}
/* already in-use? */
if (netdev_is_rx_handler_busy(slave_dev)) {
NL_SET_ERR_MSG(extack, "Device is in use and cannot be enslaved");
- netdev_err(bond_dev,
- "Error: Device is in use and cannot be enslaved\n");
+ slave_err(bond_dev, slave_dev,
+ "Error: Device is in use and cannot be enslaved\n");
return -EBUSY;
}
@@ -1390,21 +1403,16 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
/* vlan challenged mutual exclusion */
/* no need to lock since we're protected by rtnl_lock */
if (slave_dev->features & NETIF_F_VLAN_CHALLENGED) {
- netdev_dbg(bond_dev, "%s is NETIF_F_VLAN_CHALLENGED\n",
- slave_dev->name);
+ slave_dbg(bond_dev, slave_dev, "is NETIF_F_VLAN_CHALLENGED\n");
if (vlan_uses_dev(bond_dev)) {
NL_SET_ERR_MSG(extack, "Can not enslave VLAN challenged device to VLAN enabled bond");
- netdev_err(bond_dev, "Error: cannot enslave VLAN challenged slave %s on VLAN enabled bond %s\n",
- slave_dev->name, bond_dev->name);
+ slave_err(bond_dev, slave_dev, "Error: cannot enslave VLAN challenged slave on VLAN enabled bond\n");
return -EPERM;
} else {
- netdev_warn(bond_dev, "enslaved VLAN challenged slave %s. Adding VLANs will be blocked as long as %s is part of bond %s\n",
- slave_dev->name, slave_dev->name,
- bond_dev->name);
+ slave_warn(bond_dev, slave_dev, "enslaved VLAN challenged slave. Adding VLANs will be blocked as long as it is part of bond.\n");
}
} else {
- netdev_dbg(bond_dev, "%s is !NETIF_F_VLAN_CHALLENGED\n",
- slave_dev->name);
+ slave_dbg(bond_dev, slave_dev, "is !NETIF_F_VLAN_CHALLENGED\n");
}
/* Old ifenslave binaries are no longer supported. These can
@@ -1414,8 +1422,7 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
*/
if (slave_dev->flags & IFF_UP) {
NL_SET_ERR_MSG(extack, "Device can not be enslaved while up");
- netdev_err(bond_dev, "%s is up - this may be due to an out of date ifenslave\n",
- slave_dev->name);
+ slave_err(bond_dev, slave_dev, "slave is up - this may be due to an out of date ifenslave\n");
return -EPERM;
}
@@ -1428,14 +1435,14 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
*/
if (!bond_has_slaves(bond)) {
if (bond_dev->type != slave_dev->type) {
- netdev_dbg(bond_dev, "change device type from %d to %d\n",
- bond_dev->type, slave_dev->type);
+ slave_dbg(bond_dev, slave_dev, "change device type from %d to %d\n",
+ bond_dev->type, slave_dev->type);
res = call_netdevice_notifiers(NETDEV_PRE_TYPE_CHANGE,
bond_dev);
res = notifier_to_errno(res);
if (res) {
- netdev_err(bond_dev, "refused to change device type\n");
+ slave_err(bond_dev, slave_dev, "refused to change device type\n");
return -EBUSY;
}
@@ -1455,31 +1462,31 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
}
} else if (bond_dev->type != slave_dev->type) {
NL_SET_ERR_MSG(extack, "Device type is different from other slaves");
- netdev_err(bond_dev, "%s ether type (%d) is different from other slaves (%d), can not enslave it\n",
- slave_dev->name, slave_dev->type, bond_dev->type);
+ slave_err(bond_dev, slave_dev, "ether type (%d) is different from other slaves (%d), can not enslave it\n",
+ slave_dev->type, bond_dev->type);
return -EINVAL;
}
if (slave_dev->type == ARPHRD_INFINIBAND &&
BOND_MODE(bond) != BOND_MODE_ACTIVEBACKUP) {
NL_SET_ERR_MSG(extack, "Only active-backup mode is supported for infiniband slaves");
- netdev_warn(bond_dev, "Type (%d) supports only active-backup mode\n",
- slave_dev->type);
+ slave_warn(bond_dev, slave_dev, "Type (%d) supports only active-backup mode\n",
+ slave_dev->type);
res = -EOPNOTSUPP;
goto err_undo_flags;
}
if (!slave_ops->ndo_set_mac_address ||
slave_dev->type == ARPHRD_INFINIBAND) {
- netdev_warn(bond_dev, "The slave device specified does not support setting the MAC address\n");
+ slave_warn(bond_dev, slave_dev, "The slave device specified does not support setting the MAC address\n");
if (BOND_MODE(bond) == BOND_MODE_ACTIVEBACKUP &&
bond->params.fail_over_mac != BOND_FOM_ACTIVE) {
if (!bond_has_slaves(bond)) {
bond->params.fail_over_mac = BOND_FOM_ACTIVE;
- netdev_warn(bond_dev, "Setting fail_over_mac to active for active-backup mode\n");
+ slave_warn(bond_dev, slave_dev, "Setting fail_over_mac to active for active-backup mode\n");
} else {
NL_SET_ERR_MSG(extack, "Slave device does not support setting the MAC address, but fail_over_mac is not set to active");
- netdev_err(bond_dev, "The slave device specified does not support setting the MAC address, but fail_over_mac is not set to active\n");
+ slave_err(bond_dev, slave_dev, "The slave device specified does not support setting the MAC address, but fail_over_mac is not set to active\n");
res = -EOPNOTSUPP;
goto err_undo_flags;
}
@@ -1515,7 +1522,7 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
new_slave->original_mtu = slave_dev->mtu;
res = dev_set_mtu(slave_dev, bond->dev->mtu);
if (res) {
- netdev_dbg(bond_dev, "Error %d calling dev_set_mtu\n", res);
+ slave_err(bond_dev, slave_dev, "Error %d calling dev_set_mtu\n", res);
goto err_free;
}
@@ -1536,7 +1543,7 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
res = dev_set_mac_address(slave_dev, (struct sockaddr *)&ss,
extack);
if (res) {
- netdev_dbg(bond_dev, "Error %d calling set_mac_address\n", res);
+ slave_err(bond_dev, slave_dev, "Error %d calling set_mac_address\n", res);
goto err_restore_mtu;
}
}
@@ -1547,7 +1554,7 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
/* open the slave since the application closed it */
res = dev_open(slave_dev, extack);
if (res) {
- netdev_dbg(bond_dev, "Opening slave %s failed\n", slave_dev->name);
+ slave_err(bond_dev, slave_dev, "Opening slave failed\n");
goto err_restore_mac;
}
@@ -1566,8 +1573,7 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
res = vlan_vids_add_by_dev(slave_dev, bond_dev);
if (res) {
- netdev_err(bond_dev, "Couldn't add bond vlan ids to %s\n",
- slave_dev->name);
+ slave_err(bond_dev, slave_dev, "Couldn't add bond vlan ids\n");
goto err_close;
}
@@ -1597,12 +1603,10 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
* supported); thus, we don't need to change
* the messages for netif_carrier.
*/
- netdev_warn(bond_dev, "MII and ETHTOOL support not available for interface %s, and arp_interval/arp_ip_target module parameters not specified, thus bonding will not detect link failures! see bonding.txt for details\n",
- slave_dev->name);
+ slave_warn(bond_dev, slave_dev, "MII and ETHTOOL support not available for slave, and arp_interval/arp_ip_target module parameters not specified, thus bonding will not detect link failures! see bonding.txt for details\n");
} else if (link_reporting == -1) {
/* unable get link status using mii/ethtool */
- netdev_warn(bond_dev, "can't get link status from interface %s; the network driver associated with this interface does not support MII or ETHTOOL link status reporting, thus miimon has no effect on this interface\n",
- slave_dev->name);
+ slave_warn(bond_dev, slave_dev, "can't get link status from slave; the network driver associated with this interface does not support MII or ETHTOOL link status reporting, thus miimon has no effect on this interface\n");
}
}
@@ -1636,9 +1640,9 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
if (new_slave->link != BOND_LINK_DOWN)
new_slave->last_link_up = jiffies;
- netdev_dbg(bond_dev, "Initial state of slave_dev is BOND_LINK_%s\n",
- new_slave->link == BOND_LINK_DOWN ? "DOWN" :
- (new_slave->link == BOND_LINK_UP ? "UP" : "BACK"));
+ slave_dbg(bond_dev, slave_dev, "Initial state of slave is BOND_LINK_%s\n",
+ new_slave->link == BOND_LINK_DOWN ? "DOWN" :
+ (new_slave->link == BOND_LINK_UP ? "UP" : "BACK"));
if (bond_uses_primary(bond) && bond->params.primary[0]) {
/* if there is a primary slave, remember it */
@@ -1679,7 +1683,7 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
bond_set_slave_inactive_flags(new_slave, BOND_SLAVE_NOTIFY_NOW);
break;
default:
- netdev_dbg(bond_dev, "This slave is always active in trunk mode\n");
+ slave_dbg(bond_dev, slave_dev, "This slave is always active in trunk mode\n");
/* always active in trunk mode */
bond_set_active_slave(new_slave);
@@ -1698,7 +1702,7 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
#ifdef CONFIG_NET_POLL_CONTROLLER
if (bond->dev->npinfo) {
if (slave_enable_netpoll(new_slave)) {
- netdev_info(bond_dev, "master_dev is using netpoll, but new slave device does not support netpoll\n");
+ slave_info(bond_dev, slave_dev, "master_dev is using netpoll, but new slave device does not support netpoll\n");
res = -EBUSY;
goto err_detach;
}
@@ -1711,19 +1715,19 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
res = netdev_rx_handler_register(slave_dev, bond_handle_frame,
new_slave);
if (res) {
- netdev_dbg(bond_dev, "Error %d calling netdev_rx_handler_register\n", res);
+ slave_dbg(bond_dev, slave_dev, "Error %d calling netdev_rx_handler_register\n", res);
goto err_detach;
}
res = bond_master_upper_dev_link(bond, new_slave, extack);
if (res) {
- netdev_dbg(bond_dev, "Error %d calling bond_master_upper_dev_link\n", res);
+ slave_dbg(bond_dev, slave_dev, "Error %d calling bond_master_upper_dev_link\n", res);
goto err_unregister;
}
res = bond_sysfs_slave_add(new_slave);
if (res) {
- netdev_dbg(bond_dev, "Error %d calling bond_sysfs_slave_add\n", res);
+ slave_dbg(bond_dev, slave_dev, "Error %d calling bond_sysfs_slave_add\n", res);
goto err_upper_unlink;
}
@@ -1777,10 +1781,9 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
bond_update_slave_arr(bond, NULL);
- netdev_info(bond_dev, "Enslaving %s as %s interface with %s link\n",
- slave_dev->name,
- bond_is_active_slave(new_slave) ? "an active" : "a backup",
- new_slave->link != BOND_LINK_DOWN ? "an up" : "a down");
+ slave_info(bond_dev, slave_dev, "Enslaving as %s interface with %s link\n",
+ bond_is_active_slave(new_slave) ? "an active" : "a backup",
+ new_slave->link != BOND_LINK_DOWN ? "an up" : "a down");
/* enslave is successful */
bond_queue_slave_event(new_slave);
@@ -1875,8 +1878,7 @@ static int __bond_release_one(struct net_device *bond_dev,
/* slave is not a slave or master is not master of this slave */
if (!(slave_dev->flags & IFF_SLAVE) ||
!netdev_has_upper_dev(slave_dev, bond_dev)) {
- netdev_dbg(bond_dev, "cannot release %s\n",
- slave_dev->name);
+ slave_dbg(bond_dev, slave_dev, "cannot release slave\n");
return -EINVAL;
}
@@ -1885,8 +1887,7 @@ static int __bond_release_one(struct net_device *bond_dev,
slave = bond_get_slave_by_dev(bond, slave_dev);
if (!slave) {
/* not a slave of this bond */
- netdev_info(bond_dev, "%s not enslaved\n",
- slave_dev->name);
+ slave_info(bond_dev, slave_dev, "interface not enslaved\n");
unblock_netpoll_tx();
return -EINVAL;
}
@@ -1910,9 +1911,8 @@ static int __bond_release_one(struct net_device *bond_dev,
if (bond_mode_can_use_xmit_hash(bond))
bond_update_slave_arr(bond, slave);
- netdev_info(bond_dev, "Releasing %s interface %s\n",
- bond_is_active_slave(slave) ? "active" : "backup",
- slave_dev->name);
+ slave_info(bond_dev, slave_dev, "Releasing %s interface\n",
+ bond_is_active_slave(slave) ? "active" : "backup");
oldcurrent = rcu_access_pointer(bond->curr_active_slave);
@@ -1922,9 +1922,8 @@ static int __bond_release_one(struct net_device *bond_dev,
BOND_MODE(bond) != BOND_MODE_ACTIVEBACKUP)) {
if (ether_addr_equal_64bits(bond_dev->dev_addr, slave->perm_hwaddr) &&
bond_has_slaves(bond))
- netdev_warn(bond_dev, "the permanent HWaddr of %s - %pM - is still in use by %s - set the HWaddr of %s to a different address to avoid conflicts\n",
- slave_dev->name, slave->perm_hwaddr,
- bond_dev->name, slave_dev->name);
+ slave_warn(bond_dev, slave_dev, "the permanent HWaddr of slave - %pM - is still in use by bond - set the HWaddr of slave to a different address to avoid conflicts\n",
+ slave->perm_hwaddr);
}
if (rtnl_dereference(bond->primary_slave) == slave)
@@ -1972,8 +1971,7 @@ static int __bond_release_one(struct net_device *bond_dev,
bond_compute_features(bond);
if (!(bond_dev->features & NETIF_F_VLAN_CHALLENGED) &&
(old_features & NETIF_F_VLAN_CHALLENGED))
- netdev_info(bond_dev, "last VLAN challenged slave %s left bond %s - VLAN blocking is removed\n",
- slave_dev->name, bond_dev->name);
+ slave_info(bond_dev, slave_dev, "last VLAN challenged slave left bond - VLAN blocking is removed\n");
vlan_vids_del_by_dev(slave_dev, bond_dev);
@@ -2033,8 +2031,8 @@ int bond_release(struct net_device *bond_dev, struct net_device *slave_dev)
/* First release a slave and then destroy the bond if no more slaves are left.
* Must be under rtnl_lock when this function is called.
*/
-static int bond_release_and_destroy(struct net_device *bond_dev,
- struct net_device *slave_dev)
+static int bond_release_and_destroy(struct net_device *bond_dev,
+ struct net_device *slave_dev)
{
struct bonding *bond = netdev_priv(bond_dev);
int ret;
@@ -2042,8 +2040,7 @@ static int bond_release_and_destroy(struct net_device *bond_dev,
ret = __bond_release_one(bond_dev, slave_dev, false, true);
if (ret == 0 && !bond_has_slaves(bond)) {
bond_dev->priv_flags |= IFF_DISABLE_NETPOLL;
- netdev_info(bond_dev, "Destroying bond %s\n",
- bond_dev->name);
+ netdev_info(bond_dev, "Destroying bond\n");
bond_remove_proc_entry(bond);
unregister_netdevice(bond_dev);
}
@@ -2101,13 +2098,12 @@ static int bond_miimon_inspect(struct bonding *bond)
commit++;
slave->delay = bond->params.downdelay;
if (slave->delay) {
- netdev_info(bond->dev, "link status down for %sinterface %s, disabling it in %d ms\n",
- (BOND_MODE(bond) ==
- BOND_MODE_ACTIVEBACKUP) ?
- (bond_is_active_slave(slave) ?
- "active " : "backup ") : "",
- slave->dev->name,
- bond->params.downdelay * bond->params.miimon);
+ slave_info(bond->dev, slave->dev, "link status down for %sinterface, disabling it in %d ms\n",
+ (BOND_MODE(bond) ==
+ BOND_MODE_ACTIVEBACKUP) ?
+ (bond_is_active_slave(slave) ?
+ "active " : "backup ") : "",
+ bond->params.downdelay * bond->params.miimon);
}
/*FALLTHRU*/
case BOND_LINK_FAIL:
@@ -2115,10 +2111,9 @@ static int bond_miimon_inspect(struct bonding *bond)
/* recovered before downdelay expired */
bond_propose_link_state(slave, BOND_LINK_UP);
slave->last_link_up = jiffies;
- netdev_info(bond->dev, "link status up again after %d ms for interface %s\n",
- (bond->params.downdelay - slave->delay) *
- bond->params.miimon,
- slave->dev->name);
+ slave_info(bond->dev, slave->dev, "link status up again after %d ms\n",
+ (bond->params.downdelay - slave->delay) *
+ bond->params.miimon);
commit++;
continue;
}
@@ -2141,20 +2136,18 @@ static int bond_miimon_inspect(struct bonding *bond)
slave->delay = bond->params.updelay;
if (slave->delay) {
- netdev_info(bond->dev, "link status up for interface %s, enabling it in %d ms\n",
- slave->dev->name,
- ignore_updelay ? 0 :
- bond->params.updelay *
- bond->params.miimon);
+ slave_info(bond->dev, slave->dev, "link status up, enabling it in %d ms\n",
+ ignore_updelay ? 0 :
+ bond->params.updelay *
+ bond->params.miimon);
}
/*FALLTHRU*/
case BOND_LINK_BACK:
if (!link_state) {
bond_propose_link_state(slave, BOND_LINK_DOWN);
- netdev_info(bond->dev, "link status down again after %d ms for interface %s\n",
- (bond->params.updelay - slave->delay) *
- bond->params.miimon,
- slave->dev->name);
+ slave_info(bond->dev, slave->dev, "link status down again after %d ms\n",
+ (bond->params.updelay - slave->delay) *
+ bond->params.miimon);
commit++;
continue;
}
@@ -2210,9 +2203,8 @@ static void bond_miimon_commit(struct bonding *bond)
bond_needs_speed_duplex(bond)) {
slave->link = BOND_LINK_DOWN;
if (net_ratelimit())
- netdev_warn(bond->dev,
- "failed to get link speed/duplex for %s\n",
- slave->dev->name);
+ slave_warn(bond->dev, slave->dev,
+ "failed to get link speed/duplex\n");
continue;
}
bond_set_slave_link_state(slave, BOND_LINK_UP,
@@ -2231,10 +2223,9 @@ static void bond_miimon_commit(struct bonding *bond)
bond_set_backup_slave(slave);
}
- netdev_info(bond->dev, "link status definitely up for interface %s, %u Mbps %s duplex\n",
- slave->dev->name,
- slave->speed == SPEED_UNKNOWN ? 0 : slave->speed,
- slave->duplex ? "full" : "half");
+ slave_info(bond->dev, slave->dev, "link status definitely up, %u Mbps %s duplex\n",
+ slave->speed == SPEED_UNKNOWN ? 0 : slave->speed,
+ slave->duplex ? "full" : "half");
bond_miimon_link_change(bond, slave, BOND_LINK_UP);
@@ -2255,8 +2246,7 @@ static void bond_miimon_commit(struct bonding *bond)
bond_set_slave_inactive_flags(slave,
BOND_SLAVE_NOTIFY_NOW);
- netdev_info(bond->dev, "link status definitely down for interface %s, disabling it\n",
- slave->dev->name);
+ slave_info(bond->dev, slave->dev, "link status definitely down, disabling slave\n");
bond_miimon_link_change(bond, slave, BOND_LINK_DOWN);
@@ -2266,8 +2256,8 @@ static void bond_miimon_commit(struct bonding *bond)
continue;
default:
- netdev_err(bond->dev, "invalid new link %d on slave %s\n",
- slave->new_link, slave->dev->name);
+ slave_err(bond->dev, slave->dev, "invalid new link %d on slave\n",
+ slave->new_link);
slave->new_link = BOND_LINK_NOCHANGE;
continue;
@@ -2294,6 +2284,7 @@ static void bond_mii_monitor(struct work_struct *work)
struct bonding *bond = container_of(work, struct bonding,
mii_work.work);
bool should_notify_peers = false;
+ bool commit;
unsigned long delay;
struct slave *slave;
struct list_head *iter;
@@ -2304,12 +2295,19 @@ static void bond_mii_monitor(struct work_struct *work)
goto re_arm;
rcu_read_lock();
-
should_notify_peers = bond_should_notify_peers(bond);
-
- if (bond_miimon_inspect(bond)) {
+ commit = !!bond_miimon_inspect(bond);
+ if (bond->send_peer_notif) {
rcu_read_unlock();
+ if (rtnl_trylock()) {
+ bond->send_peer_notif--;
+ rtnl_unlock();
+ }
+ } else {
+ rcu_read_unlock();
+ }
+ if (commit) {
/* Race avoidance with bond_close cancel of workqueue */
if (!rtnl_trylock()) {
delay = 1;
@@ -2323,8 +2321,7 @@ static void bond_mii_monitor(struct work_struct *work)
bond_miimon_commit(bond);
rtnl_unlock(); /* might sleep, hold no other locks */
- } else
- rcu_read_unlock();
+ }
re_arm:
if (bond->params.miimon)
@@ -2364,15 +2361,16 @@ static bool bond_has_this_ip(struct bonding *bond, __be32 ip)
* switches in VLAN mode (especially if ports are configured as
* "native" to a VLAN) might not pass non-tagged frames.
*/
-static void bond_arp_send(struct net_device *slave_dev, int arp_op,
- __be32 dest_ip, __be32 src_ip,
- struct bond_vlan_tag *tags)
+static void bond_arp_send(struct slave *slave, int arp_op, __be32 dest_ip,
+ __be32 src_ip, struct bond_vlan_tag *tags)
{
struct sk_buff *skb;
struct bond_vlan_tag *outer_tag = tags;
+ struct net_device *slave_dev = slave->dev;
+ struct net_device *bond_dev = slave->bond->dev;
- netdev_dbg(slave_dev, "arp %d on slave %s: dst %pI4 src %pI4\n",
- arp_op, slave_dev->name, &dest_ip, &src_ip);
+ slave_dbg(bond_dev, slave_dev, "arp %d on slave: dst %pI4 src %pI4\n",
+ arp_op, &dest_ip, &src_ip);
skb = arp_create(arp_op, ETH_P_ARP, dest_ip, slave_dev, src_ip,
NULL, slave_dev->dev_addr, NULL);
@@ -2394,8 +2392,8 @@ static void bond_arp_send(struct net_device *slave_dev, int arp_op,
continue;
}
- netdev_dbg(slave_dev, "inner tag: proto %X vid %X\n",
- ntohs(outer_tag->vlan_proto), tags->vlan_id);
+ slave_dbg(bond_dev, slave_dev, "inner tag: proto %X vid %X\n",
+ ntohs(outer_tag->vlan_proto), tags->vlan_id);
skb = vlan_insert_tag_set_proto(skb, tags->vlan_proto,
tags->vlan_id);
if (!skb) {
@@ -2407,8 +2405,8 @@ static void bond_arp_send(struct net_device *slave_dev, int arp_op,
}
/* Set the outer tag */
if (outer_tag->vlan_id) {
- netdev_dbg(slave_dev, "outer tag: proto %X vid %X\n",
- ntohs(outer_tag->vlan_proto), outer_tag->vlan_id);
+ slave_dbg(bond_dev, slave_dev, "outer tag: proto %X vid %X\n",
+ ntohs(outer_tag->vlan_proto), outer_tag->vlan_id);
__vlan_hwaccel_put_tag(skb, outer_tag->vlan_proto,
outer_tag->vlan_id);
}
@@ -2465,7 +2463,8 @@ static void bond_arp_send_all(struct bonding *bond, struct slave *slave)
int i;
for (i = 0; i < BOND_MAX_ARP_TARGETS && targets[i]; i++) {
- netdev_dbg(bond->dev, "basa: target %pI4\n", &targets[i]);
+ slave_dbg(bond->dev, slave->dev, "%s: target %pI4\n",
+ __func__, &targets[i]);
tags = NULL;
/* Find out through which dev should the packet go */
@@ -2479,7 +2478,7 @@ static void bond_arp_send_all(struct bonding *bond, struct slave *slave)
net_warn_ratelimited("%s: no route to arp_ip_target %pI4 and arp_validate is set\n",
bond->dev->name,
&targets[i]);
- bond_arp_send(slave->dev, ARPOP_REQUEST, targets[i],
+ bond_arp_send(slave, ARPOP_REQUEST, targets[i],
0, tags);
continue;
}
@@ -2496,7 +2495,7 @@ static void bond_arp_send_all(struct bonding *bond, struct slave *slave)
goto found;
/* Not our device - skip */
- netdev_dbg(bond->dev, "no path to arp_ip_target %pI4 via rt.dev %s\n",
+ slave_dbg(bond->dev, slave->dev, "no path to arp_ip_target %pI4 via rt.dev %s\n",
&targets[i], rt->dst.dev ? rt->dst.dev->name : "NULL");
ip_rt_put(rt);
@@ -2505,8 +2504,7 @@ static void bond_arp_send_all(struct bonding *bond, struct slave *slave)
found:
addr = bond_confirm_addr(rt->dst.dev, targets[i], 0);
ip_rt_put(rt);
- bond_arp_send(slave->dev, ARPOP_REQUEST, targets[i],
- addr, tags);
+ bond_arp_send(slave, ARPOP_REQUEST, targets[i], addr, tags);
kfree(tags);
}
}
@@ -2516,15 +2514,15 @@ static void bond_validate_arp(struct bonding *bond, struct slave *slave, __be32
int i;
if (!sip || !bond_has_this_ip(bond, tip)) {
- netdev_dbg(bond->dev, "bva: sip %pI4 tip %pI4 not found\n",
- &sip, &tip);
+ slave_dbg(bond->dev, slave->dev, "%s: sip %pI4 tip %pI4 not found\n",
+ __func__, &sip, &tip);
return;
}
i = bond_get_targets_ip(bond->params.arp_targets, sip);
if (i == -1) {
- netdev_dbg(bond->dev, "bva: sip %pI4 not found in targets\n",
- &sip);
+ slave_dbg(bond->dev, slave->dev, "%s: sip %pI4 not found in targets\n",
+ __func__, &sip);
return;
}
slave->last_rx = jiffies;
@@ -2552,8 +2550,8 @@ int bond_arp_rcv(const struct sk_buff *skb, struct bonding *bond,
alen = arp_hdr_len(bond->dev);
- netdev_dbg(bond->dev, "bond_arp_rcv: skb->dev %s\n",
- skb->dev->name);
+ slave_dbg(bond->dev, slave->dev, "%s: skb->dev %s\n",
+ __func__, skb->dev->name);
if (alen > skb_headlen(skb)) {
arp = kmalloc(alen, GFP_ATOMIC);
@@ -2577,10 +2575,10 @@ int bond_arp_rcv(const struct sk_buff *skb, struct bonding *bond,
arp_ptr += 4 + bond->dev->addr_len;
memcpy(&tip, arp_ptr, 4);
- netdev_dbg(bond->dev, "bond_arp_rcv: %s/%d av %d sv %d sip %pI4 tip %pI4\n",
- slave->dev->name, bond_slave_state(slave),
- bond->params.arp_validate, slave_do_arp_validate(bond, slave),
- &sip, &tip);
+ slave_dbg(bond->dev, slave->dev, "%s: %s/%d av %d sv %d sip %pI4 tip %pI4\n",
+ __func__, slave->dev->name, bond_slave_state(slave),
+ bond->params.arp_validate, slave_do_arp_validate(bond, slave),
+ &sip, &tip);
curr_active_slave = rcu_dereference(bond->curr_active_slave);
curr_arp_slave = rcu_dereference(bond->current_arp_slave);
@@ -2683,12 +2681,10 @@ static void bond_loadbalance_arp_mon(struct bonding *bond)
* is closed.
*/
if (!oldcurrent) {
- netdev_info(bond->dev, "link status definitely up for interface %s\n",
- slave->dev->name);
+ slave_info(bond->dev, slave->dev, "link status definitely up\n");
do_failover = 1;
} else {
- netdev_info(bond->dev, "interface %s is now up\n",
- slave->dev->name);
+ slave_info(bond->dev, slave->dev, "interface is now up\n");
}
}
} else {
@@ -2707,8 +2703,7 @@ static void bond_loadbalance_arp_mon(struct bonding *bond)
if (slave->link_failure_count < UINT_MAX)
slave->link_failure_count++;
- netdev_info(bond->dev, "interface %s is now down\n",
- slave->dev->name);
+ slave_info(bond->dev, slave->dev, "interface is now down\n");
if (slave == oldcurrent)
do_failover = 1;
@@ -2858,8 +2853,7 @@ static void bond_ab_arp_commit(struct bonding *bond)
RCU_INIT_POINTER(bond->current_arp_slave, NULL);
}
- netdev_info(bond->dev, "link status definitely up for interface %s\n",
- slave->dev->name);
+ slave_info(bond->dev, slave->dev, "link status definitely up\n");
if (!rtnl_dereference(bond->curr_active_slave) ||
slave == rtnl_dereference(bond->primary_slave))
@@ -2878,8 +2872,7 @@ static void bond_ab_arp_commit(struct bonding *bond)
bond_set_slave_inactive_flags(slave,
BOND_SLAVE_NOTIFY_NOW);
- netdev_info(bond->dev, "link status definitely down for interface %s, disabling it\n",
- slave->dev->name);
+ slave_info(bond->dev, slave->dev, "link status definitely down, disabling slave\n");
if (slave == rtnl_dereference(bond->curr_active_slave)) {
RCU_INIT_POINTER(bond->current_arp_slave, NULL);
@@ -2889,8 +2882,8 @@ static void bond_ab_arp_commit(struct bonding *bond)
continue;
default:
- netdev_err(bond->dev, "impossible: new_link %d on slave %s\n",
- slave->new_link, slave->dev->name);
+ slave_err(bond->dev, slave->dev, "impossible: new_link %d on slave\n",
+ slave->new_link);
continue;
}
@@ -2961,8 +2954,7 @@ static bool bond_ab_arp_probe(struct bonding *bond)
bond_set_slave_inactive_flags(slave,
BOND_SLAVE_NOTIFY_LATER);
- netdev_info(bond->dev, "backup interface %s is now down\n",
- slave->dev->name);
+ slave_info(bond->dev, slave->dev, "backup interface is now down\n");
}
if (slave == curr_arp_slave)
found = true;
@@ -3074,6 +3066,8 @@ static int bond_master_netdev_event(unsigned long event,
{
struct bonding *event_bond = netdev_priv(bond_dev);
+ netdev_dbg(bond_dev, "%s called\n", __func__);
+
switch (event) {
case NETDEV_CHANGENAME:
return bond_event_changename(event_bond);
@@ -3083,10 +3077,6 @@ static int bond_master_netdev_event(unsigned long event,
case NETDEV_REGISTER:
bond_create_proc_entry(event_bond);
break;
- case NETDEV_NOTIFY_PEERS:
- if (event_bond->send_peer_notif)
- event_bond->send_peer_notif--;
- break;
default:
break;
}
@@ -3105,12 +3095,17 @@ static int bond_slave_netdev_event(unsigned long event,
* before netdev_rx_handler_register is called in which case
* slave will be NULL
*/
- if (!slave)
+ if (!slave) {
+ netdev_dbg(slave_dev, "%s called on NULL slave\n", __func__);
return NOTIFY_DONE;
+ }
+
bond_dev = slave->bond->dev;
bond = slave->bond;
primary = rtnl_dereference(bond->primary_slave);
+ slave_dbg(bond_dev, slave_dev, "%s called\n", __func__);
+
switch (event) {
case NETDEV_UNREGISTER:
if (bond_dev->type != ARPHRD_ETHER)
@@ -3212,7 +3207,8 @@ static int bond_netdev_event(struct notifier_block *this,
{
struct net_device *event_dev = netdev_notifier_info_to_dev(ptr);
- netdev_dbg(event_dev, "event: %lx\n", event);
+ netdev_dbg(event_dev, "%s received %s\n",
+ __func__, netdev_cmd_to_name(event));
if (!(event_dev->priv_flags & IFF_BONDING))
return NOTIFY_DONE;
@@ -3220,16 +3216,13 @@ static int bond_netdev_event(struct notifier_block *this,
if (event_dev->flags & IFF_MASTER) {
int ret;
- netdev_dbg(event_dev, "IFF_MASTER\n");
ret = bond_master_netdev_event(event, event_dev);
if (ret != NOTIFY_DONE)
return ret;
}
- if (event_dev->flags & IFF_SLAVE) {
- netdev_dbg(event_dev, "IFF_SLAVE\n");
+ if (event_dev->flags & IFF_SLAVE)
return bond_slave_netdev_event(event, event_dev);
- }
return NOTIFY_DONE;
}
@@ -3546,12 +3539,11 @@ static int bond_do_ioctl(struct net_device *bond_dev, struct ifreq *ifr, int cmd
slave_dev = __dev_get_by_name(net, ifr->ifr_slave);
- netdev_dbg(bond_dev, "slave_dev=%p:\n", slave_dev);
+ slave_dbg(bond_dev, slave_dev, "slave_dev=%p:\n", slave_dev);
if (!slave_dev)
return -ENODEV;
- netdev_dbg(bond_dev, "slave_dev->name=%s:\n", slave_dev->name);
switch (cmd) {
case BOND_ENSLAVE_OLD:
case SIOCBONDENSLAVE:
@@ -3676,7 +3668,7 @@ static int bond_change_mtu(struct net_device *bond_dev, int new_mtu)
netdev_dbg(bond_dev, "bond=%p, new_mtu=%d\n", bond, new_mtu);
bond_for_each_slave(bond, slave, iter) {
- netdev_dbg(bond_dev, "s %p c_m %p\n",
+ slave_dbg(bond_dev, slave->dev, "s %p c_m %p\n",
slave, slave->dev->netdev_ops->ndo_change_mtu);
res = dev_set_mtu(slave->dev, new_mtu);
@@ -3690,8 +3682,8 @@ static int bond_change_mtu(struct net_device *bond_dev, int new_mtu)
* means changing their mtu from timer context, which
* is probably not a good idea.
*/
- netdev_dbg(bond_dev, "err %d %s\n", res,
- slave->dev->name);
+ slave_dbg(bond_dev, slave->dev, "err %d setting mtu to %d\n",
+ res, new_mtu);
goto unwind;
}
}
@@ -3709,10 +3701,9 @@ unwind:
break;
tmp_res = dev_set_mtu(rollback_slave->dev, bond_dev->mtu);
- if (tmp_res) {
- netdev_dbg(bond_dev, "unwind err %d dev %s\n",
- tmp_res, rollback_slave->dev->name);
- }
+ if (tmp_res)
+ slave_dbg(bond_dev, rollback_slave->dev, "unwind err %d\n",
+ tmp_res);
}
return res;
@@ -3736,7 +3727,7 @@ static int bond_set_mac_address(struct net_device *bond_dev, void *addr)
return bond_alb_set_mac_address(bond_dev, addr);
- netdev_dbg(bond_dev, "bond=%p\n", bond);
+ netdev_dbg(bond_dev, "%s: bond=%p\n", __func__, bond);
/* If fail_over_mac is enabled, do nothing and return success.
* Returning an error causes ifenslave to fail.
@@ -3749,7 +3740,8 @@ static int bond_set_mac_address(struct net_device *bond_dev, void *addr)
return -EADDRNOTAVAIL;
bond_for_each_slave(bond, slave, iter) {
- netdev_dbg(bond_dev, "slave %p %s\n", slave, slave->dev->name);
+ slave_dbg(bond_dev, slave->dev, "%s: slave=%p\n",
+ __func__, slave);
res = dev_set_mac_address(slave->dev, addr, NULL);
if (res) {
/* TODO: consider downing the slave
@@ -3758,7 +3750,8 @@ static int bond_set_mac_address(struct net_device *bond_dev, void *addr)
* breakage anyway until ARP finish
* updating, so...
*/
- netdev_dbg(bond_dev, "err %d %s\n", res, slave->dev->name);
+ slave_dbg(bond_dev, slave->dev, "%s: err %d\n",
+ __func__, res);
goto unwind;
}
}
@@ -3781,8 +3774,8 @@ unwind:
tmp_res = dev_set_mac_address(rollback_slave->dev,
(struct sockaddr *)&tmp_ss, NULL);
if (tmp_res) {
- netdev_dbg(bond_dev, "unwind err %d dev %s\n",
- tmp_res, rollback_slave->dev->name);
+ slave_dbg(bond_dev, rollback_slave->dev, "%s: unwind err %d\n",
+ __func__, tmp_res);
}
}
@@ -3866,8 +3859,8 @@ static netdev_tx_t bond_xmit_roundrobin(struct sk_buff *skb,
struct net_device *bond_dev)
{
struct bonding *bond = netdev_priv(bond_dev);
- struct iphdr *iph = ip_hdr(skb);
struct slave *slave;
+ int slave_cnt;
u32 slave_id;
/* Start with the curr_active_slave that joined the bond as the
@@ -3876,23 +3869,32 @@ static netdev_tx_t bond_xmit_roundrobin(struct sk_buff *skb,
* send the join/membership reports. The curr_active_slave found
* will send all of this type of traffic.
*/
- if (iph->protocol == IPPROTO_IGMP && skb->protocol == htons(ETH_P_IP)) {
- slave = rcu_dereference(bond->curr_active_slave);
- if (slave)
- bond_dev_queue_xmit(bond, skb, slave->dev);
- else
- bond_xmit_slave_id(bond, skb, 0);
- } else {
- int slave_cnt = READ_ONCE(bond->slave_cnt);
+ if (skb->protocol == htons(ETH_P_IP)) {
+ int noff = skb_network_offset(skb);
+ struct iphdr *iph;
- if (likely(slave_cnt)) {
- slave_id = bond_rr_gen_slave_id(bond);
- bond_xmit_slave_id(bond, skb, slave_id % slave_cnt);
- } else {
- bond_tx_drop(bond_dev, skb);
+ if (unlikely(!pskb_may_pull(skb, noff + sizeof(*iph))))
+ goto non_igmp;
+
+ iph = ip_hdr(skb);
+ if (iph->protocol == IPPROTO_IGMP) {
+ slave = rcu_dereference(bond->curr_active_slave);
+ if (slave)
+ bond_dev_queue_xmit(bond, skb, slave->dev);
+ else
+ bond_xmit_slave_id(bond, skb, 0);
+ return NETDEV_TX_OK;
}
}
+non_igmp:
+ slave_cnt = READ_ONCE(bond->slave_cnt);
+ if (likely(slave_cnt)) {
+ slave_id = bond_rr_gen_slave_id(bond);
+ bond_xmit_slave_id(bond, skb, slave_id % slave_cnt);
+ } else {
+ bond_tx_drop(bond_dev, skb);
+ }
return NETDEV_TX_OK;
}
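The bond_xmit_roundrobin() rework in the hunk above only dereferences the IPv4 header after confirming it is actually present in the skb's linear data. A minimal sketch of that guard pattern follows, assuming ordinary kernel headers; the helper name skb_is_igmp() is invented for illustration and is not part of the bonding driver.

	#include <linux/if_ether.h>
	#include <linux/in.h>
	#include <linux/ip.h>
	#include <linux/skbuff.h>

	/* Hypothetical helper: true only if the packet is IPv4 IGMP and the
	 * full IPv4 header is available in the linear area of the skb.
	 */
	static bool skb_is_igmp(struct sk_buff *skb)
	{
		int noff = skb_network_offset(skb);
		const struct iphdr *iph;

		if (skb->protocol != htons(ETH_P_IP))
			return false;
		/* Make sure the header bytes are in skb->data before use. */
		if (unlikely(!pskb_may_pull(skb, noff + sizeof(*iph))))
			return false;
		iph = ip_hdr(skb);
		return iph->protocol == IPPROTO_IGMP;
	}

With a check like this, a non-linear skb that does not yet contain the IPv4 header simply falls through to the normal round-robin path instead of reading past the linear data.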
@@ -4003,9 +4005,8 @@ int bond_update_slave_arr(struct bonding *bond, struct slave *skipslave)
if (skipslave == slave)
continue;
- netdev_dbg(bond->dev,
- "Adding slave dev %s to tx hash array[%d]\n",
- slave->dev->name, new_arr->count);
+ slave_dbg(bond->dev, slave->dev, "Adding slave to tx hash array[%d]\n",
+ new_arr->count);
new_arr->arr[new_arr->count++] = slave;
}
@@ -4707,6 +4708,7 @@ static int bond_check_params(struct bond_params *params)
params->arp_all_targets = arp_all_targets_value;
params->updelay = updelay;
params->downdelay = downdelay;
+ params->peer_notif_delay = 0;
params->use_carrier = use_carrier;
params->lacp_fast = lacp_fast;
params->primary[0] = 0;
diff --git a/drivers/net/bonding/bond_netlink.c b/drivers/net/bonding/bond_netlink.c
index b24cce48ae35..b43b51646b11 100644
--- a/drivers/net/bonding/bond_netlink.c
+++ b/drivers/net/bonding/bond_netlink.c
@@ -108,6 +108,7 @@ static const struct nla_policy bond_policy[IFLA_BOND_MAX + 1] = {
[IFLA_BOND_AD_ACTOR_SYSTEM] = { .type = NLA_BINARY,
.len = ETH_ALEN },
[IFLA_BOND_TLB_DYNAMIC_LB] = { .type = NLA_U8 },
+ [IFLA_BOND_PEER_NOTIF_DELAY] = { .type = NLA_U32 },
};
static const struct nla_policy bond_slave_policy[IFLA_BOND_SLAVE_MAX + 1] = {
@@ -215,6 +216,14 @@ static int bond_changelink(struct net_device *bond_dev, struct nlattr *tb[],
if (err)
return err;
}
+ if (data[IFLA_BOND_PEER_NOTIF_DELAY]) {
+ int delay = nla_get_u32(data[IFLA_BOND_PEER_NOTIF_DELAY]);
+
+ bond_opt_initval(&newval, delay);
+ err = __bond_opt_set(bond, BOND_OPT_PEER_NOTIF_DELAY, &newval);
+ if (err)
+ return err;
+ }
if (data[IFLA_BOND_USE_CARRIER]) {
int use_carrier = nla_get_u8(data[IFLA_BOND_USE_CARRIER]);
@@ -494,6 +503,7 @@ static size_t bond_get_size(const struct net_device *bond_dev)
nla_total_size(sizeof(u16)) + /* IFLA_BOND_AD_USER_PORT_KEY */
nla_total_size(ETH_ALEN) + /* IFLA_BOND_AD_ACTOR_SYSTEM */
nla_total_size(sizeof(u8)) + /* IFLA_BOND_TLB_DYNAMIC_LB */
+ nla_total_size(sizeof(u32)) + /* IFLA_BOND_PEER_NOTIF_DELAY */
0;
}
@@ -536,6 +546,10 @@ static int bond_fill_info(struct sk_buff *skb,
bond->params.downdelay * bond->params.miimon))
goto nla_put_failure;
+ if (nla_put_u32(skb, IFLA_BOND_PEER_NOTIF_DELAY,
+ bond->params.peer_notif_delay * bond->params.miimon))
+ goto nla_put_failure;
+
if (nla_put_u8(skb, IFLA_BOND_USE_CARRIER, bond->params.use_carrier))
goto nla_put_failure;
diff --git a/drivers/net/bonding/bond_options.c b/drivers/net/bonding/bond_options.c
index 9677418e0362..ddb3916d3506 100644
--- a/drivers/net/bonding/bond_options.c
+++ b/drivers/net/bonding/bond_options.c
@@ -24,6 +24,8 @@ static int bond_option_updelay_set(struct bonding *bond,
const struct bond_opt_value *newval);
static int bond_option_downdelay_set(struct bonding *bond,
const struct bond_opt_value *newval);
+static int bond_option_peer_notif_delay_set(struct bonding *bond,
+ const struct bond_opt_value *newval);
static int bond_option_use_carrier_set(struct bonding *bond,
const struct bond_opt_value *newval);
static int bond_option_arp_interval_set(struct bonding *bond,
@@ -424,6 +426,13 @@ static const struct bond_option bond_opts[BOND_OPT_LAST] = {
.desc = "Number of peer notifications to send on failover event",
.values = bond_num_peer_notif_tbl,
.set = bond_option_num_peer_notif_set
+ },
+ [BOND_OPT_PEER_NOTIF_DELAY] = {
+ .id = BOND_OPT_PEER_NOTIF_DELAY,
+ .name = "peer_notif_delay",
+ .desc = "Delay between each peer notification on failover event, in milliseconds",
+ .values = bond_intmax_tbl,
+ .set = bond_option_peer_notif_delay_set
}
};
@@ -783,14 +792,12 @@ static int bond_option_active_slave_set(struct bonding *bond,
if (slave_dev) {
if (!netif_is_bond_slave(slave_dev)) {
- netdev_err(bond->dev, "Device %s is not bonding slave\n",
- slave_dev->name);
+ slave_err(bond->dev, slave_dev, "Device is not bonding slave\n");
return -EINVAL;
}
if (bond->dev != netdev_master_upper_dev_get(slave_dev)) {
- netdev_err(bond->dev, "Device %s is not our slave\n",
- slave_dev->name);
+ slave_err(bond->dev, slave_dev, "Device is not our slave\n");
return -EINVAL;
}
}
@@ -809,18 +816,15 @@ static int bond_option_active_slave_set(struct bonding *bond,
if (new_active == old_active) {
/* do nothing */
- netdev_dbg(bond->dev, "%s is already the current active slave\n",
- new_active->dev->name);
+ slave_dbg(bond->dev, new_active->dev, "is already the current active slave\n");
} else {
if (old_active && (new_active->link == BOND_LINK_UP) &&
bond_slave_is_up(new_active)) {
- netdev_dbg(bond->dev, "Setting %s as active slave\n",
- new_active->dev->name);
+ slave_dbg(bond->dev, new_active->dev, "Setting as active slave\n");
bond_change_active_slave(bond, new_active);
} else {
- netdev_err(bond->dev, "Could not set %s as active slave; either %s is down or the link is down\n",
- new_active->dev->name,
- new_active->dev->name);
+ slave_err(bond->dev, new_active->dev, "Could not set as active slave; either %s is down or the link is down\n",
+ new_active->dev->name);
ret = -EINVAL;
}
}
@@ -846,6 +850,9 @@ static int bond_option_miimon_set(struct bonding *bond,
if (bond->params.downdelay)
netdev_dbg(bond->dev, "Note: Updating downdelay (to %d) since it is a multiple of the miimon value\n",
bond->params.downdelay * bond->params.miimon);
+ if (bond->params.peer_notif_delay)
+ netdev_dbg(bond->dev, "Note: Updating peer_notif_delay (to %d) since it is a multiple of the miimon value\n",
+ bond->params.peer_notif_delay * bond->params.miimon);
if (newval->value && bond->params.arp_interval) {
netdev_dbg(bond->dev, "MII monitoring cannot be used with ARP monitoring - disabling ARP monitoring...\n");
bond->params.arp_interval = 0;
@@ -869,52 +876,59 @@ static int bond_option_miimon_set(struct bonding *bond,
return 0;
}
-/* Set up and down delays. These must be multiples of the
- * MII monitoring value, and are stored internally as the multiplier.
- * Thus, we must translate to MS for the real world.
+/* Set up, down and peer notification delays. These must be multiples
+ * of the MII monitoring value, and are stored internally as the
+ * multiplier. Thus, we must translate to MS for the real world.
*/
-static int bond_option_updelay_set(struct bonding *bond,
- const struct bond_opt_value *newval)
+static int _bond_option_delay_set(struct bonding *bond,
+ const struct bond_opt_value *newval,
+ const char *name,
+ int *target)
{
int value = newval->value;
if (!bond->params.miimon) {
- netdev_err(bond->dev, "Unable to set up delay as MII monitoring is disabled\n");
+ netdev_err(bond->dev, "Unable to set %s as MII monitoring is disabled\n",
+ name);
return -EPERM;
}
if ((value % bond->params.miimon) != 0) {
- netdev_warn(bond->dev, "up delay (%d) is not a multiple of miimon (%d), updelay rounded to %d ms\n",
+ netdev_warn(bond->dev,
+ "%s (%d) is not a multiple of miimon (%d), value rounded to %d ms\n",
+ name,
value, bond->params.miimon,
(value / bond->params.miimon) *
bond->params.miimon);
}
- bond->params.updelay = value / bond->params.miimon;
- netdev_dbg(bond->dev, "Setting up delay to %d\n",
- bond->params.updelay * bond->params.miimon);
+ *target = value / bond->params.miimon;
+ netdev_dbg(bond->dev, "Setting %s to %d\n",
+ name,
+ *target * bond->params.miimon);
return 0;
}
+static int bond_option_updelay_set(struct bonding *bond,
+ const struct bond_opt_value *newval)
+{
+ return _bond_option_delay_set(bond, newval, "up delay",
+ &bond->params.updelay);
+}
+
static int bond_option_downdelay_set(struct bonding *bond,
const struct bond_opt_value *newval)
{
- int value = newval->value;
-
- if (!bond->params.miimon) {
- netdev_err(bond->dev, "Unable to set down delay as MII monitoring is disabled\n");
- return -EPERM;
- }
- if ((value % bond->params.miimon) != 0) {
- netdev_warn(bond->dev, "down delay (%d) is not a multiple of miimon (%d), delay rounded to %d ms\n",
- value, bond->params.miimon,
- (value / bond->params.miimon) *
- bond->params.miimon);
- }
- bond->params.downdelay = value / bond->params.miimon;
- netdev_dbg(bond->dev, "Setting down delay to %d\n",
- bond->params.downdelay * bond->params.miimon);
+ return _bond_option_delay_set(bond, newval, "down delay",
+ &bond->params.downdelay);
+}
- return 0;
+static int bond_option_peer_notif_delay_set(struct bonding *bond,
+ const struct bond_opt_value *newval)
+{
+ int ret = _bond_option_delay_set(bond, newval,
+ "peer notification delay",
+ &bond->params.peer_notif_delay);
+ return ret;
}
static int bond_option_use_carrier_set(struct bonding *bond,
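The common _bond_option_delay_set() helper introduced above keeps the driver's long-standing convention of storing updelay, downdelay and the new peer_notif_delay as multiples of miimon, rounding down when the requested value is not an exact multiple. A small self-contained sketch of that arithmetic, in plain userspace C with names local to the example only:

	#include <stdio.h>

	int main(void)
	{
		int miimon = 100;        /* MII monitoring interval, in ms     */
		int requested_ms = 250;  /* e.g. a peer_notif_delay from sysfs */

		int stored = requested_ms / miimon;  /* kept as a multiplier: 2 */
		int effective_ms = stored * miimon;  /* what is actually used   */

		if (requested_ms % miimon)
			printf("%d ms is not a multiple of miimon (%d), rounded to %d ms\n",
			       requested_ms, miimon, effective_ms);
		printf("stored multiplier=%d, effective delay=%d ms\n",
		       stored, effective_ms);
		return 0;
	}

This is also why the netlink, procfs and sysfs show paths in the other bonding hunks multiply the stored value by miimon before reporting it, and why the setter refuses the option while miimon is 0.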
@@ -1132,8 +1146,7 @@ static int bond_option_primary_set(struct bonding *bond,
bond_for_each_slave(bond, slave, iter) {
if (strncmp(slave->dev->name, primary, IFNAMSIZ) == 0) {
- netdev_dbg(bond->dev, "Setting %s as primary slave\n",
- slave->dev->name);
+ slave_dbg(bond->dev, slave->dev, "Setting as primary slave\n");
rcu_assign_pointer(bond->primary_slave, slave);
strcpy(bond->params.primary, slave->dev->name);
bond->force_primary = true;
@@ -1150,8 +1163,8 @@ static int bond_option_primary_set(struct bonding *bond,
strncpy(bond->params.primary, primary, IFNAMSIZ);
bond->params.primary[IFNAMSIZ - 1] = 0;
- netdev_dbg(bond->dev, "Recording %s as primary, but it has not been enslaved to %s yet\n",
- primary, bond->dev->name);
+ netdev_dbg(bond->dev, "Recording %s as primary, but it has not been enslaved yet\n",
+ primary);
out:
unblock_netpoll_tx();
@@ -1378,12 +1391,12 @@ static int bond_option_slaves_set(struct bonding *bond,
switch (command[0]) {
case '+':
- netdev_dbg(bond->dev, "Adding slave %s\n", dev->name);
+ slave_dbg(bond->dev, dev, "Enslaving interface\n");
ret = bond_enslave(bond->dev, dev, NULL);
break;
case '-':
- netdev_dbg(bond->dev, "Removing slave %s\n", dev->name);
+ slave_dbg(bond->dev, dev, "Releasing interface\n");
ret = bond_release(bond->dev, dev);
break;
@@ -1447,7 +1460,7 @@ static int bond_option_ad_actor_system_set(struct bonding *bond,
return 0;
err:
- netdev_err(bond->dev, "Invalid MAC address.\n");
+ netdev_err(bond->dev, "Invalid ad_actor_system MAC address.\n");
return -EINVAL;
}
diff --git a/drivers/net/bonding/bond_procfs.c b/drivers/net/bonding/bond_procfs.c
index 9f7d83e827c3..fd5c9cbe45b1 100644
--- a/drivers/net/bonding/bond_procfs.c
+++ b/drivers/net/bonding/bond_procfs.c
@@ -104,6 +104,8 @@ static void bond_info_show_master(struct seq_file *seq)
bond->params.updelay * bond->params.miimon);
seq_printf(seq, "Down Delay (ms): %d\n",
bond->params.downdelay * bond->params.miimon);
+ seq_printf(seq, "Peer Notification Delay (ms): %d\n",
+ bond->params.peer_notif_delay * bond->params.miimon);
/* ARP information */
diff --git a/drivers/net/bonding/bond_sysfs.c b/drivers/net/bonding/bond_sysfs.c
index 94214eaf53c5..2d615a93685e 100644
--- a/drivers/net/bonding/bond_sysfs.c
+++ b/drivers/net/bonding/bond_sysfs.c
@@ -327,6 +327,18 @@ static ssize_t bonding_show_updelay(struct device *d,
static DEVICE_ATTR(updelay, 0644,
bonding_show_updelay, bonding_sysfs_store_option);
+static ssize_t bonding_show_peer_notif_delay(struct device *d,
+ struct device_attribute *attr,
+ char *buf)
+{
+ struct bonding *bond = to_bond(d);
+
+ return sprintf(buf, "%d\n",
+ bond->params.peer_notif_delay * bond->params.miimon);
+}
+static DEVICE_ATTR(peer_notif_delay, 0644,
+ bonding_show_peer_notif_delay, bonding_sysfs_store_option);
+
/* Show the LACP interval. */
static ssize_t bonding_show_lacp(struct device *d,
struct device_attribute *attr,
@@ -718,6 +730,7 @@ static struct attribute *per_bond_attrs[] = {
&dev_attr_arp_ip_target.attr,
&dev_attr_downdelay.attr,
&dev_attr_updelay.attr,
+ &dev_attr_peer_notif_delay.attr,
&dev_attr_lacp_rate.attr,
&dev_attr_ad_select.attr,
&dev_attr_xmit_hash_policy.attr,
diff --git a/drivers/net/can/softing/softing_main.c b/drivers/net/can/softing/softing_main.c
index 68bb58a57f3b..8242fb287cbb 100644
--- a/drivers/net/can/softing/softing_main.c
+++ b/drivers/net/can/softing/softing_main.c
@@ -683,7 +683,7 @@ static void softing_netdev_cleanup(struct net_device *netdev)
static ssize_t show_##name(struct device *dev, \
struct device_attribute *attr, char *buf) \
{ \
- struct softing *card = platform_get_drvdata(to_platform_device(dev)); \
+ struct softing *card = dev_get_drvdata(dev); \
return sprintf(buf, "%u\n", card->member); \
} \
static DEVICE_ATTR(name, 0444, show_##name, NULL)
@@ -692,7 +692,7 @@ static DEVICE_ATTR(name, 0444, show_##name, NULL)
static ssize_t show_##name(struct device *dev, \
struct device_attribute *attr, char *buf) \
{ \
- struct softing *card = platform_get_drvdata(to_platform_device(dev)); \
+ struct softing *card = dev_get_drvdata(dev); \
return sprintf(buf, "%s\n", card->member); \
} \
static DEVICE_ATTR(name, 0444, show_##name, NULL)
diff --git a/drivers/net/dsa/Kconfig b/drivers/net/dsa/Kconfig
index b91e78e3598f..f6232ce8481f 100644
--- a/drivers/net/dsa/Kconfig
+++ b/drivers/net/dsa/Kconfig
@@ -99,8 +99,8 @@ config NET_DSA_SMSC_LAN9303_MDIO
for MDIO managed mode.
config NET_DSA_VITESSE_VSC73XX
- tristate "Vitesse VSC7385/7388/7395/7398 support"
- depends on OF && SPI
+ tristate
+ depends on OF
depends on NET_DSA
select FIXED_PHY
select VITESSE_PHY
@@ -109,4 +109,24 @@ config NET_DSA_VITESSE_VSC73XX
This enables support for the Vitesse VSC7385, VSC7388,
VSC7395 and VSC7398 SparX integrated ethernet switches.
+config NET_DSA_VITESSE_VSC73XX_SPI
+ tristate "Vitesse VSC7385/7388/7395/7398 SPI mode support"
+ depends on OF
+ depends on NET_DSA
+ depends on SPI
+ select NET_DSA_VITESSE_VSC73XX
+ ---help---
+ This enables support for the Vitesse VSC7385, VSC7388, VSC7395
+ and VSC7398 SparX integrated ethernet switches in SPI managed mode.
+
+config NET_DSA_VITESSE_VSC73XX_PLATFORM
+ tristate "Vitesse VSC7385/7388/7395/7398 Platform mode support"
+ depends on OF
+ depends on NET_DSA
+ depends on HAS_IOMEM
+ select NET_DSA_VITESSE_VSC73XX
+ ---help---
+ This enables support for the Vitesse VSC7385, VSC7388, VSC7395
+ and VSC7398 SparX integrated ethernet switches, connected over
+ a CPU-attached address bus and working in memory-mapped I/O mode.
endmenu
diff --git a/drivers/net/dsa/Makefile b/drivers/net/dsa/Makefile
index d99dc6de0006..ae70b79628d6 100644
--- a/drivers/net/dsa/Makefile
+++ b/drivers/net/dsa/Makefile
@@ -14,7 +14,9 @@ realtek-smi-objs := realtek-smi-core.o rtl8366.o rtl8366rb.o
obj-$(CONFIG_NET_DSA_SMSC_LAN9303) += lan9303-core.o
obj-$(CONFIG_NET_DSA_SMSC_LAN9303_I2C) += lan9303_i2c.o
obj-$(CONFIG_NET_DSA_SMSC_LAN9303_MDIO) += lan9303_mdio.o
-obj-$(CONFIG_NET_DSA_VITESSE_VSC73XX) += vitesse-vsc73xx.o
+obj-$(CONFIG_NET_DSA_VITESSE_VSC73XX) += vitesse-vsc73xx-core.o
+obj-$(CONFIG_NET_DSA_VITESSE_VSC73XX_PLATFORM) += vitesse-vsc73xx-platform.o
+obj-$(CONFIG_NET_DSA_VITESSE_VSC73XX_SPI) += vitesse-vsc73xx-spi.o
obj-y += b53/
obj-y += microchip/
obj-y += mv88e6xxx/
diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c
index c8040ecf4425..907af62846ba 100644
--- a/drivers/net/dsa/b53/b53_common.c
+++ b/drivers/net/dsa/b53/b53_common.c
@@ -955,13 +955,13 @@ static int b53_setup(struct dsa_switch *ds)
if (ret)
dev_err(ds->dev, "failed to apply configuration\n");
- /* Configure IMP/CPU port, disable unused ports. Enabled
+ /* Configure IMP/CPU port, disable all other ports. Enabled
* ports will be configured with .port_enable
*/
for (port = 0; port < dev->num_ports; port++) {
if (dsa_is_cpu_port(ds, port))
b53_enable_cpu_port(dev, port);
- else if (dsa_is_unused_port(ds, port))
+ else
b53_disable_port(ds, port);
}
diff --git a/drivers/net/dsa/microchip/Kconfig b/drivers/net/dsa/microchip/Kconfig
index 2c3a6751bdaf..fe0a13b79c4b 100644
--- a/drivers/net/dsa/microchip/Kconfig
+++ b/drivers/net/dsa/microchip/Kconfig
@@ -13,5 +13,6 @@ menuconfig NET_DSA_MICROCHIP_KSZ9477
config NET_DSA_MICROCHIP_KSZ9477_SPI
tristate "KSZ9477 series SPI connected switch driver"
depends on NET_DSA_MICROCHIP_KSZ9477 && SPI
+ select REGMAP_SPI
help
Select to enable support for registering switches configured through SPI.
diff --git a/drivers/net/dsa/microchip/ksz9477.c b/drivers/net/dsa/microchip/ksz9477.c
index c026d15721f6..a8c97f7a79b7 100644
--- a/drivers/net/dsa/microchip/ksz9477.c
+++ b/drivers/net/dsa/microchip/ksz9477.c
@@ -65,51 +65,36 @@ static const struct {
{ 0x83, "tx_discards" },
};
-static void ksz9477_cfg32(struct ksz_device *dev, u32 addr, u32 bits, bool set)
+static void ksz_cfg(struct ksz_device *dev, u32 addr, u8 bits, bool set)
{
- u32 data;
+ regmap_update_bits(dev->regmap[0], addr, bits, set ? bits : 0);
+}
- ksz_read32(dev, addr, &data);
- if (set)
- data |= bits;
- else
- data &= ~bits;
- ksz_write32(dev, addr, data);
+static void ksz_port_cfg(struct ksz_device *dev, int port, int offset, u8 bits,
+ bool set)
+{
+ regmap_update_bits(dev->regmap[0], PORT_CTRL_ADDR(port, offset),
+ bits, set ? bits : 0);
+}
+
+static void ksz9477_cfg32(struct ksz_device *dev, u32 addr, u32 bits, bool set)
+{
+ regmap_update_bits(dev->regmap[2], addr, bits, set ? bits : 0);
}
static void ksz9477_port_cfg32(struct ksz_device *dev, int port, int offset,
u32 bits, bool set)
{
- u32 addr;
- u32 data;
-
- addr = PORT_CTRL_ADDR(port, offset);
- ksz_read32(dev, addr, &data);
-
- if (set)
- data |= bits;
- else
- data &= ~bits;
-
- ksz_write32(dev, addr, data);
+ regmap_update_bits(dev->regmap[2], PORT_CTRL_ADDR(port, offset),
+ bits, set ? bits : 0);
}
-static int ksz9477_wait_vlan_ctrl_ready(struct ksz_device *dev, u32 waiton,
- int timeout)
+static int ksz9477_wait_vlan_ctrl_ready(struct ksz_device *dev)
{
- u8 data;
+ unsigned int val;
- do {
- ksz_read8(dev, REG_SW_VLAN_CTRL, &data);
- if (!(data & waiton))
- break;
- usleep_range(1, 10);
- } while (timeout-- > 0);
-
- if (timeout <= 0)
- return -ETIMEDOUT;
-
- return 0;
+ return regmap_read_poll_timeout(dev->regmap[0], REG_SW_VLAN_CTRL,
+ val, !(val & VLAN_START), 10, 1000);
}
static int ksz9477_get_vlan_table(struct ksz_device *dev, u16 vid,
@@ -123,8 +108,8 @@ static int ksz9477_get_vlan_table(struct ksz_device *dev, u16 vid,
ksz_write8(dev, REG_SW_VLAN_CTRL, VLAN_READ | VLAN_START);
/* wait to be cleared */
- ret = ksz9477_wait_vlan_ctrl_ready(dev, VLAN_START, 1000);
- if (ret < 0) {
+ ret = ksz9477_wait_vlan_ctrl_ready(dev);
+ if (ret) {
dev_dbg(dev->dev, "Failed to read vlan table\n");
goto exit;
}
@@ -156,8 +141,8 @@ static int ksz9477_set_vlan_table(struct ksz_device *dev, u16 vid,
ksz_write8(dev, REG_SW_VLAN_CTRL, VLAN_START | VLAN_WRITE);
/* wait to be cleared */
- ret = ksz9477_wait_vlan_ctrl_ready(dev, VLAN_START, 1000);
- if (ret < 0) {
+ ret = ksz9477_wait_vlan_ctrl_ready(dev);
+ if (ret) {
dev_dbg(dev->dev, "Failed to write vlan table\n");
goto exit;
}
@@ -191,55 +176,35 @@ static void ksz9477_write_table(struct ksz_device *dev, u32 *table)
ksz_write32(dev, REG_SW_ALU_VAL_D, table[3]);
}
-static int ksz9477_wait_alu_ready(struct ksz_device *dev, u32 waiton,
- int timeout)
+static int ksz9477_wait_alu_ready(struct ksz_device *dev)
{
- u32 data;
-
- do {
- ksz_read32(dev, REG_SW_ALU_CTRL__4, &data);
- if (!(data & waiton))
- break;
- usleep_range(1, 10);
- } while (timeout-- > 0);
+ unsigned int val;
- if (timeout <= 0)
- return -ETIMEDOUT;
-
- return 0;
+ return regmap_read_poll_timeout(dev->regmap[2], REG_SW_ALU_CTRL__4,
+ val, !(val & ALU_START), 10, 1000);
}
-static int ksz9477_wait_alu_sta_ready(struct ksz_device *dev, u32 waiton,
- int timeout)
+static int ksz9477_wait_alu_sta_ready(struct ksz_device *dev)
{
- u32 data;
+ unsigned int val;
- do {
- ksz_read32(dev, REG_SW_ALU_STAT_CTRL__4, &data);
- if (!(data & waiton))
- break;
- usleep_range(1, 10);
- } while (timeout-- > 0);
-
- if (timeout <= 0)
- return -ETIMEDOUT;
-
- return 0;
+ return regmap_read_poll_timeout(dev->regmap[2],
+ REG_SW_ALU_STAT_CTRL__4,
+ val, !(val & ALU_STAT_START),
+ 10, 1000);
}
static int ksz9477_reset_switch(struct ksz_device *dev)
{
u8 data8;
- u16 data16;
u32 data32;
/* reset switch */
ksz_cfg(dev, REG_SW_OPERATION, SW_RESET, true);
/* turn off SPI DO Edge select */
- ksz_read8(dev, REG_SW_GLOBAL_SERIAL_CTRL_0, &data8);
- data8 &= ~SPI_AUTO_EDGE_DETECTION;
- ksz_write8(dev, REG_SW_GLOBAL_SERIAL_CTRL_0, data8);
+ regmap_update_bits(dev->regmap[0], REG_SW_GLOBAL_SERIAL_CTRL_0,
+ SPI_AUTO_EDGE_DETECTION, 0);
/* default configuration */
ksz_read8(dev, REG_SW_LUE_CTRL_1, &data8);
@@ -253,10 +218,14 @@ static int ksz9477_reset_switch(struct ksz_device *dev)
ksz_read32(dev, REG_SW_PORT_INT_STATUS__4, &data32);
/* set broadcast storm protection 10% rate */
- ksz_read16(dev, REG_SW_MAC_CTRL_2, &data16);
- data16 &= ~BROADCAST_STORM_RATE;
- data16 |= (BROADCAST_STORM_VALUE * BROADCAST_STORM_PROT_RATE) / 100;
- ksz_write16(dev, REG_SW_MAC_CTRL_2, data16);
+ regmap_update_bits(dev->regmap[1], REG_SW_MAC_CTRL_2,
+ BROADCAST_STORM_RATE,
+ (BROADCAST_STORM_VALUE *
+ BROADCAST_STORM_PROT_RATE) / 100);
+
+ if (dev->synclko_125)
+ ksz_write8(dev, REG_SW_GLOBAL_OUTPUT_CTRL__1,
+ SW_ENABLE_REFCLKO | SW_REFCLKO_IS_125MHZ);
return 0;
}
@@ -264,12 +233,8 @@ static int ksz9477_reset_switch(struct ksz_device *dev)
static void ksz9477_r_mib_cnt(struct ksz_device *dev, int port, u16 addr,
u64 *cnt)
{
- struct ksz_poll_ctx ctx = {
- .dev = dev,
- .port = port,
- .offset = REG_PORT_MIB_CTRL_STAT__4,
- };
struct ksz_port *p = &dev->ports[port];
+ unsigned int val;
u32 data;
int ret;
@@ -279,11 +244,11 @@ static void ksz9477_r_mib_cnt(struct ksz_device *dev, int port, u16 addr,
data |= (addr << MIB_COUNTER_INDEX_S);
ksz_pwrite32(dev, port, REG_PORT_MIB_CTRL_STAT__4, data);
- ret = readx_poll_timeout(ksz_pread32_poll, &ctx, data,
- !(data & MIB_COUNTER_READ), 10, 1000);
-
+ ret = regmap_read_poll_timeout(dev->regmap[2],
+ PORT_CTRL_ADDR(port, REG_PORT_MIB_CTRL_STAT__4),
+ val, !(val & MIB_COUNTER_READ), 10, 1000);
/* failed to read MIB. get out of loop */
- if (ret < 0) {
+ if (ret) {
dev_dbg(dev->dev, "Failed to get MIB\n");
return;
}
@@ -518,10 +483,10 @@ static void ksz9477_flush_dyn_mac_table(struct ksz_device *dev, int port)
{
u8 data;
- ksz_read8(dev, REG_SW_LUE_CTRL_2, &data);
- data &= ~(SW_FLUSH_OPTION_M << SW_FLUSH_OPTION_S);
- data |= (SW_FLUSH_OPTION_DYN_MAC << SW_FLUSH_OPTION_S);
- ksz_write8(dev, REG_SW_LUE_CTRL_2, data);
+ regmap_update_bits(dev->regmap[0], REG_SW_LUE_CTRL_2,
+ SW_FLUSH_OPTION_M << SW_FLUSH_OPTION_S,
+ SW_FLUSH_OPTION_DYN_MAC << SW_FLUSH_OPTION_S);
+
if (port < dev->mib_port_cnt) {
/* flush individual port */
ksz_pread8(dev, port, P_STP_CTRL, &data);
@@ -648,8 +613,8 @@ static int ksz9477_port_fdb_add(struct dsa_switch *ds, int port,
ksz_write32(dev, REG_SW_ALU_CTRL__4, ALU_READ | ALU_START);
/* wait to be finished */
- ret = ksz9477_wait_alu_ready(dev, ALU_START, 1000);
- if (ret < 0) {
+ ret = ksz9477_wait_alu_ready(dev);
+ if (ret) {
dev_dbg(dev->dev, "Failed to read ALU\n");
goto exit;
}
@@ -672,8 +637,8 @@ static int ksz9477_port_fdb_add(struct dsa_switch *ds, int port,
ksz_write32(dev, REG_SW_ALU_CTRL__4, ALU_WRITE | ALU_START);
/* wait to be finished */
- ret = ksz9477_wait_alu_ready(dev, ALU_START, 1000);
- if (ret < 0)
+ ret = ksz9477_wait_alu_ready(dev);
+ if (ret)
dev_dbg(dev->dev, "Failed to write ALU\n");
exit:
@@ -705,8 +670,8 @@ static int ksz9477_port_fdb_del(struct dsa_switch *ds, int port,
ksz_write32(dev, REG_SW_ALU_CTRL__4, ALU_READ | ALU_START);
/* wait to be finished */
- ret = ksz9477_wait_alu_ready(dev, ALU_START, 1000);
- if (ret < 0) {
+ ret = ksz9477_wait_alu_ready(dev);
+ if (ret) {
dev_dbg(dev->dev, "Failed to read ALU\n");
goto exit;
}
@@ -739,8 +704,8 @@ static int ksz9477_port_fdb_del(struct dsa_switch *ds, int port,
ksz_write32(dev, REG_SW_ALU_CTRL__4, ALU_WRITE | ALU_START);
/* wait to be finished */
- ret = ksz9477_wait_alu_ready(dev, ALU_START, 1000);
- if (ret < 0)
+ ret = ksz9477_wait_alu_ready(dev);
+ if (ret)
dev_dbg(dev->dev, "Failed to write ALU\n");
exit:
@@ -846,7 +811,7 @@ static void ksz9477_port_mdb_add(struct dsa_switch *ds, int port,
ksz_write32(dev, REG_SW_ALU_STAT_CTRL__4, data);
/* wait to be finished */
- if (ksz9477_wait_alu_sta_ready(dev, ALU_STAT_START, 1000) < 0) {
+ if (ksz9477_wait_alu_sta_ready(dev)) {
dev_dbg(dev->dev, "Failed to read ALU STATIC\n");
goto exit;
}
@@ -887,7 +852,7 @@ static void ksz9477_port_mdb_add(struct dsa_switch *ds, int port,
ksz_write32(dev, REG_SW_ALU_STAT_CTRL__4, data);
/* wait to be finished */
- if (ksz9477_wait_alu_sta_ready(dev, ALU_STAT_START, 1000) < 0)
+ if (ksz9477_wait_alu_sta_ready(dev))
dev_dbg(dev->dev, "Failed to read ALU STATIC\n");
exit:
@@ -917,8 +882,8 @@ static int ksz9477_port_mdb_del(struct dsa_switch *ds, int port,
ksz_write32(dev, REG_SW_ALU_STAT_CTRL__4, data);
/* wait to be finished */
- ret = ksz9477_wait_alu_sta_ready(dev, ALU_STAT_START, 1000);
- if (ret < 0) {
+ ret = ksz9477_wait_alu_sta_ready(dev);
+ if (ret) {
dev_dbg(dev->dev, "Failed to read ALU STATIC\n");
goto exit;
}
@@ -959,8 +924,8 @@ static int ksz9477_port_mdb_del(struct dsa_switch *ds, int port,
ksz_write32(dev, REG_SW_ALU_STAT_CTRL__4, data);
/* wait to be finished */
- ret = ksz9477_wait_alu_sta_ready(dev, ALU_STAT_START, 1000);
- if (ret < 0)
+ ret = ksz9477_wait_alu_sta_ready(dev);
+ if (ret)
dev_dbg(dev->dev, "Failed to read ALU STATIC\n");
exit:
@@ -1165,6 +1130,62 @@ static phy_interface_t ksz9477_get_interface(struct ksz_device *dev, int port)
return interface;
}
+static void ksz9477_port_mmd_write(struct ksz_device *dev, int port,
+ u8 dev_addr, u16 reg_addr, u16 val)
+{
+ ksz_pwrite16(dev, port, REG_PORT_PHY_MMD_SETUP,
+ MMD_SETUP(PORT_MMD_OP_INDEX, dev_addr));
+ ksz_pwrite16(dev, port, REG_PORT_PHY_MMD_INDEX_DATA, reg_addr);
+ ksz_pwrite16(dev, port, REG_PORT_PHY_MMD_SETUP,
+ MMD_SETUP(PORT_MMD_OP_DATA_NO_INCR, dev_addr));
+ ksz_pwrite16(dev, port, REG_PORT_PHY_MMD_INDEX_DATA, val);
+}
+
+static void ksz9477_phy_errata_setup(struct ksz_device *dev, int port)
+{
+ /* Apply PHY settings to address errata listed in
+ * KSZ9477, KSZ9897, KSZ9896, KSZ9567, KSZ8565
+ * Silicon Errata and Data Sheet Clarification documents:
+ *
+ * Register settings are needed to improve PHY receive performance
+ */
+ ksz9477_port_mmd_write(dev, port, 0x01, 0x6f, 0xdd0b);
+ ksz9477_port_mmd_write(dev, port, 0x01, 0x8f, 0x6032);
+ ksz9477_port_mmd_write(dev, port, 0x01, 0x9d, 0x248c);
+ ksz9477_port_mmd_write(dev, port, 0x01, 0x75, 0x0060);
+ ksz9477_port_mmd_write(dev, port, 0x01, 0xd3, 0x7777);
+ ksz9477_port_mmd_write(dev, port, 0x1c, 0x06, 0x3008);
+ ksz9477_port_mmd_write(dev, port, 0x1c, 0x08, 0x2001);
+
+ /* Transmit waveform amplitude can be improved
+ * (1000BASE-T, 100BASE-TX, 10BASE-Te)
+ */
+ ksz9477_port_mmd_write(dev, port, 0x1c, 0x04, 0x00d0);
+
+ /* Energy Efficient Ethernet (EEE) feature select must
+ * be manually disabled (except on KSZ8565 which is 100Mbit)
+ */
+ if (dev->features & GBIT_SUPPORT)
+ ksz9477_port_mmd_write(dev, port, 0x07, 0x3c, 0x0000);
+
+ /* Register settings are required to meet data sheet
+ * supply current specifications
+ */
+ ksz9477_port_mmd_write(dev, port, 0x1c, 0x13, 0x6eff);
+ ksz9477_port_mmd_write(dev, port, 0x1c, 0x14, 0xe6ff);
+ ksz9477_port_mmd_write(dev, port, 0x1c, 0x15, 0x6eff);
+ ksz9477_port_mmd_write(dev, port, 0x1c, 0x16, 0xe6ff);
+ ksz9477_port_mmd_write(dev, port, 0x1c, 0x17, 0x00ff);
+ ksz9477_port_mmd_write(dev, port, 0x1c, 0x18, 0x43ff);
+ ksz9477_port_mmd_write(dev, port, 0x1c, 0x19, 0xc3ff);
+ ksz9477_port_mmd_write(dev, port, 0x1c, 0x1a, 0x6fff);
+ ksz9477_port_mmd_write(dev, port, 0x1c, 0x1b, 0x07ff);
+ ksz9477_port_mmd_write(dev, port, 0x1c, 0x1c, 0x0fff);
+ ksz9477_port_mmd_write(dev, port, 0x1c, 0x1d, 0xe7ff);
+ ksz9477_port_mmd_write(dev, port, 0x1c, 0x1e, 0xefff);
+ ksz9477_port_mmd_write(dev, port, 0x1c, 0x20, 0xeeee);
+}
+
static void ksz9477_port_setup(struct ksz_device *dev, int port, bool cpu_port)
{
u8 data8;
@@ -1203,6 +1224,8 @@ static void ksz9477_port_setup(struct ksz_device *dev, int port, bool cpu_port)
PORT_FORCE_TX_FLOW_CTRL | PORT_FORCE_RX_FLOW_CTRL,
false);
+ if (dev->phy_errata_9477)
+ ksz9477_phy_errata_setup(dev, port);
} else {
/* force flow control */
ksz_port_cfg(dev, port, REG_PORT_CTRL_0,
@@ -1474,6 +1497,7 @@ struct ksz_chip_data {
int num_statics;
int cpu_ports;
int port_cnt;
+ bool phy_errata_9477;
};
static const struct ksz_chip_data ksz9477_switch_chips[] = {
@@ -1485,6 +1509,7 @@ static const struct ksz_chip_data ksz9477_switch_chips[] = {
.num_statics = 16,
.cpu_ports = 0x7F, /* can be configured as cpu port */
.port_cnt = 7, /* total physical port count */
+ .phy_errata_9477 = true,
},
{
.chip_id = 0x00989700,
@@ -1494,6 +1519,7 @@ static const struct ksz_chip_data ksz9477_switch_chips[] = {
.num_statics = 16,
.cpu_ports = 0x7F, /* can be configured as cpu port */
.port_cnt = 7, /* total physical port count */
+ .phy_errata_9477 = true,
},
{
.chip_id = 0x00989300,
@@ -1522,6 +1548,7 @@ static int ksz9477_switch_init(struct ksz_device *dev)
dev->num_statics = chip->num_statics;
dev->port_cnt = chip->port_cnt;
dev->cpu_ports = chip->cpu_ports;
+ dev->phy_errata_9477 = chip->phy_errata_9477;
break;
}
diff --git a/drivers/net/dsa/microchip/ksz9477_spi.c b/drivers/net/dsa/microchip/ksz9477_spi.c
index 75178624d3f5..5a9e27b337a8 100644
--- a/drivers/net/dsa/microchip/ksz9477_spi.c
+++ b/drivers/net/dsa/microchip/ksz9477_spi.c
@@ -10,119 +10,43 @@
#include <linux/delay.h>
#include <linux/kernel.h>
#include <linux/module.h>
+#include <linux/regmap.h>
#include <linux/spi/spi.h>
#include "ksz_priv.h"
-#include "ksz_spi.h"
-
-/* SPI frame opcodes */
-#define KS_SPIOP_RD 3
-#define KS_SPIOP_WR 2
+#include "ksz_common.h"
#define SPI_ADDR_SHIFT 24
-#define SPI_ADDR_MASK (BIT(SPI_ADDR_SHIFT) - 1)
+#define SPI_ADDR_ALIGN 3
#define SPI_TURNAROUND_SHIFT 5
-/* Enough to read all switch port registers. */
-#define SPI_TX_BUF_LEN 0x100
-
-static int ksz9477_spi_read_reg(struct spi_device *spi, u32 reg, u8 *val,
- unsigned int len)
-{
- u32 txbuf;
- int ret;
-
- txbuf = reg & SPI_ADDR_MASK;
- txbuf |= KS_SPIOP_RD << SPI_ADDR_SHIFT;
- txbuf <<= SPI_TURNAROUND_SHIFT;
- txbuf = cpu_to_be32(txbuf);
-
- ret = spi_write_then_read(spi, &txbuf, 4, val, len);
- return ret;
-}
-
-static int ksz9477_spi_write_reg(struct spi_device *spi, u32 reg, u8 *val,
- unsigned int len)
-{
- u32 *txbuf = (u32 *)val;
-
- *txbuf = reg & SPI_ADDR_MASK;
- *txbuf |= (KS_SPIOP_WR << SPI_ADDR_SHIFT);
- *txbuf <<= SPI_TURNAROUND_SHIFT;
- *txbuf = cpu_to_be32(*txbuf);
-
- return spi_write(spi, txbuf, 4 + len);
-}
-
-static int ksz_spi_read(struct ksz_device *dev, u32 reg, u8 *data,
- unsigned int len)
-{
- struct spi_device *spi = dev->priv;
-
- return ksz9477_spi_read_reg(spi, reg, data, len);
-}
-
-static int ksz_spi_write(struct ksz_device *dev, u32 reg, void *data,
- unsigned int len)
-{
- struct spi_device *spi = dev->priv;
-
- if (len > SPI_TX_BUF_LEN)
- len = SPI_TX_BUF_LEN;
- memcpy(&dev->txbuf[4], data, len);
- return ksz9477_spi_write_reg(spi, reg, dev->txbuf, len);
-}
-
-static int ksz_spi_read24(struct ksz_device *dev, u32 reg, u32 *val)
-{
- int ret;
-
- *val = 0;
- ret = ksz_spi_read(dev, reg, (u8 *)val, 3);
- if (!ret) {
- *val = be32_to_cpu(*val);
- /* convert to 24bit */
- *val >>= 8;
- }
-
- return ret;
-}
-
-static int ksz_spi_write24(struct ksz_device *dev, u32 reg, u32 value)
-{
- /* make it to big endian 24bit from MSB */
- value <<= 8;
- value = cpu_to_be32(value);
- return ksz_spi_write(dev, reg, &value, 3);
-}
-
-static const struct ksz_io_ops ksz9477_spi_ops = {
- .read8 = ksz_spi_read8,
- .read16 = ksz_spi_read16,
- .read24 = ksz_spi_read24,
- .read32 = ksz_spi_read32,
- .write8 = ksz_spi_write8,
- .write16 = ksz_spi_write16,
- .write24 = ksz_spi_write24,
- .write32 = ksz_spi_write32,
- .get = ksz_spi_get,
- .set = ksz_spi_set,
-};
+KSZ_REGMAP_TABLE(ksz9477, 32, SPI_ADDR_SHIFT,
+ SPI_TURNAROUND_SHIFT, SPI_ADDR_ALIGN);
static int ksz9477_spi_probe(struct spi_device *spi)
{
struct ksz_device *dev;
- int ret;
+ int i, ret;
- dev = ksz_switch_alloc(&spi->dev, &ksz9477_spi_ops, spi);
+ dev = ksz_switch_alloc(&spi->dev, spi);
if (!dev)
return -ENOMEM;
+ for (i = 0; i < ARRAY_SIZE(ksz9477_regmap_config); i++) {
+ dev->regmap[i] = devm_regmap_init_spi(spi,
+ &ksz9477_regmap_config[i]);
+ if (IS_ERR(dev->regmap[i])) {
+ ret = PTR_ERR(dev->regmap[i]);
+ dev_err(&spi->dev,
+ "Failed to initialize regmap%i: %d\n",
+ ksz9477_regmap_config[i].val_bits, ret);
+ return ret;
+ }
+ }
+
if (spi->dev.platform_data)
dev->pdata = spi->dev.platform_data;
- dev->txbuf = devm_kzalloc(dev->dev, 4 + SPI_TX_BUF_LEN, GFP_KERNEL);
-
ret = ksz9477_switch_register(dev);
/* Main DSA driver may not be started yet. */
diff --git a/drivers/net/dsa/microchip/ksz_common.c b/drivers/net/dsa/microchip/ksz_common.c
index db91b213eae1..a3d2d67894bd 100644
--- a/drivers/net/dsa/microchip/ksz_common.c
+++ b/drivers/net/dsa/microchip/ksz_common.c
@@ -396,9 +396,7 @@ void ksz_disable_port(struct dsa_switch *ds, int port)
}
EXPORT_SYMBOL_GPL(ksz_disable_port);
-struct ksz_device *ksz_switch_alloc(struct device *base,
- const struct ksz_io_ops *ops,
- void *priv)
+struct ksz_device *ksz_switch_alloc(struct device *base, void *priv)
{
struct dsa_switch *ds;
struct ksz_device *swdev;
@@ -416,7 +414,6 @@ struct ksz_device *ksz_switch_alloc(struct device *base,
swdev->ds = ds;
swdev->priv = priv;
- swdev->ops = ops;
return swdev;
}
@@ -442,7 +439,6 @@ int ksz_switch_register(struct ksz_device *dev,
}
mutex_init(&dev->dev_mutex);
- mutex_init(&dev->reg_mutex);
mutex_init(&dev->stats_mutex);
mutex_init(&dev->alu_mutex);
mutex_init(&dev->vlan_mutex);
@@ -463,6 +459,8 @@ int ksz_switch_register(struct ksz_device *dev,
ret = of_get_phy_mode(dev->dev->of_node);
if (ret >= 0)
dev->interface = ret;
+ dev->synclko_125 = of_property_read_bool(dev->dev->of_node,
+ "microchip,synclko-125");
}
ret = dsa_register_switch(dev->ds);
diff --git a/drivers/net/dsa/microchip/ksz_common.h b/drivers/net/dsa/microchip/ksz_common.h
index 21cd794e18f1..ee7096d8af07 100644
--- a/drivers/net/dsa/microchip/ksz_common.h
+++ b/drivers/net/dsa/microchip/ksz_common.h
@@ -7,6 +7,8 @@
#ifndef __KSZ_COMMON_H
#define __KSZ_COMMON_H
+#include <linux/regmap.h>
+
void ksz_port_cleanup(struct ksz_device *dev, int port);
void ksz_update_port_member(struct ksz_device *dev, int port);
void ksz_init_mib_timer(struct ksz_device *dev);
@@ -41,114 +43,44 @@ void ksz_disable_port(struct dsa_switch *ds, int port);
static inline int ksz_read8(struct ksz_device *dev, u32 reg, u8 *val)
{
- int ret;
-
- mutex_lock(&dev->reg_mutex);
- ret = dev->ops->read8(dev, reg, val);
- mutex_unlock(&dev->reg_mutex);
+ unsigned int value;
+ int ret = regmap_read(dev->regmap[0], reg, &value);
+ *val = value;
return ret;
}
static inline int ksz_read16(struct ksz_device *dev, u32 reg, u16 *val)
{
- int ret;
-
- mutex_lock(&dev->reg_mutex);
- ret = dev->ops->read16(dev, reg, val);
- mutex_unlock(&dev->reg_mutex);
-
- return ret;
-}
-
-static inline int ksz_read24(struct ksz_device *dev, u32 reg, u32 *val)
-{
- int ret;
-
- mutex_lock(&dev->reg_mutex);
- ret = dev->ops->read24(dev, reg, val);
- mutex_unlock(&dev->reg_mutex);
+ unsigned int value;
+ int ret = regmap_read(dev->regmap[1], reg, &value);
+ *val = value;
return ret;
}
static inline int ksz_read32(struct ksz_device *dev, u32 reg, u32 *val)
{
- int ret;
-
- mutex_lock(&dev->reg_mutex);
- ret = dev->ops->read32(dev, reg, val);
- mutex_unlock(&dev->reg_mutex);
+ unsigned int value;
+ int ret = regmap_read(dev->regmap[2], reg, &value);
+ *val = value;
return ret;
}
static inline int ksz_write8(struct ksz_device *dev, u32 reg, u8 value)
{
- int ret;
-
- mutex_lock(&dev->reg_mutex);
- ret = dev->ops->write8(dev, reg, value);
- mutex_unlock(&dev->reg_mutex);
-
- return ret;
+ return regmap_write(dev->regmap[0], reg, value);
}
static inline int ksz_write16(struct ksz_device *dev, u32 reg, u16 value)
{
- int ret;
-
- mutex_lock(&dev->reg_mutex);
- ret = dev->ops->write16(dev, reg, value);
- mutex_unlock(&dev->reg_mutex);
-
- return ret;
-}
-
-static inline int ksz_write24(struct ksz_device *dev, u32 reg, u32 value)
-{
- int ret;
-
- mutex_lock(&dev->reg_mutex);
- ret = dev->ops->write24(dev, reg, value);
- mutex_unlock(&dev->reg_mutex);
-
- return ret;
+ return regmap_write(dev->regmap[1], reg, value);
}
static inline int ksz_write32(struct ksz_device *dev, u32 reg, u32 value)
{
- int ret;
-
- mutex_lock(&dev->reg_mutex);
- ret = dev->ops->write32(dev, reg, value);
- mutex_unlock(&dev->reg_mutex);
-
- return ret;
-}
-
-static inline int ksz_get(struct ksz_device *dev, u32 reg, void *data,
- size_t len)
-{
- int ret;
-
- mutex_lock(&dev->reg_mutex);
- ret = dev->ops->get(dev, reg, data, len);
- mutex_unlock(&dev->reg_mutex);
-
- return ret;
-}
-
-static inline int ksz_set(struct ksz_device *dev, u32 reg, void *data,
- size_t len)
-{
- int ret;
-
- mutex_lock(&dev->reg_mutex);
- ret = dev->ops->set(dev, reg, data, len);
- mutex_unlock(&dev->reg_mutex);
-
- return ret;
+ return regmap_write(dev->regmap[2], reg, value);
}
static inline void ksz_pread8(struct ksz_device *dev, int port, int offset,
@@ -187,47 +119,36 @@ static inline void ksz_pwrite32(struct ksz_device *dev, int port, int offset,
ksz_write32(dev, dev->dev_ops->get_port_addr(port, offset), data);
}
-static void ksz_cfg(struct ksz_device *dev, u32 addr, u8 bits, bool set)
-{
- u8 data;
-
- ksz_read8(dev, addr, &data);
- if (set)
- data |= bits;
- else
- data &= ~bits;
- ksz_write8(dev, addr, data);
-}
-
-static void ksz_port_cfg(struct ksz_device *dev, int port, int offset, u8 bits,
- bool set)
-{
- u32 addr;
- u8 data;
-
- addr = dev->dev_ops->get_port_addr(port, offset);
- ksz_read8(dev, addr, &data);
-
- if (set)
- data |= bits;
- else
- data &= ~bits;
-
- ksz_write8(dev, addr, data);
-}
-
-struct ksz_poll_ctx {
- struct ksz_device *dev;
- int port;
- int offset;
-};
-
-static inline u32 ksz_pread32_poll(struct ksz_poll_ctx *ctx)
-{
- u32 data;
-
- ksz_pread32(ctx->dev, ctx->port, ctx->offset, &data);
- return data;
-}
+/* Regmap tables generation */
+#define KSZ_SPI_OP_RD 3
+#define KSZ_SPI_OP_WR 2
+
+#define KSZ_SPI_OP_FLAG_MASK(opcode, swp, regbits, regpad) \
+ swab##swp((opcode) << ((regbits) + (regpad)))
+
+#define KSZ_REGMAP_ENTRY(width, swp, regbits, regpad, regalign) \
+ { \
+ .val_bits = (width), \
+ .reg_stride = (width) / 8, \
+ .reg_bits = (regbits) + (regalign), \
+ .pad_bits = (regpad), \
+ .max_register = BIT(regbits) - 1, \
+ .cache_type = REGCACHE_NONE, \
+ .read_flag_mask = \
+ KSZ_SPI_OP_FLAG_MASK(KSZ_SPI_OP_RD, swp, \
+ regbits, regpad), \
+ .write_flag_mask = \
+ KSZ_SPI_OP_FLAG_MASK(KSZ_SPI_OP_WR, swp, \
+ regbits, regpad), \
+ .reg_format_endian = REGMAP_ENDIAN_BIG, \
+ .val_format_endian = REGMAP_ENDIAN_BIG \
+ }
+
+#define KSZ_REGMAP_TABLE(ksz, swp, regbits, regpad, regalign) \
+ static const struct regmap_config ksz##_regmap_config[] = { \
+ KSZ_REGMAP_ENTRY(8, swp, (regbits), (regpad), (regalign)), \
+ KSZ_REGMAP_ENTRY(16, swp, (regbits), (regpad), (regalign)), \
+ KSZ_REGMAP_ENTRY(32, swp, (regbits), (regpad), (regalign)), \
+ }
#endif
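The KSZ_REGMAP_TABLE() macro above generates one regmap_config per access width, so dev->regmap[0], [1] and [2] are the 8-, 16- and 32-bit maps used by the ksz_read/ksz_write wrappers. That is what lets the ksz9477 code drop its open-coded read-modify-write sequences and busy-wait loops in favour of two standard regmap calls. A hedged illustration of those two patterns; "map", the register offset and the bit are placeholders for the example, not real KSZ registers:

	#include <linux/bitops.h>
	#include <linux/regmap.h>

	/* Example only: set a "start" bit, then wait for hardware to clear it. */
	static int example_start_and_wait(struct regmap *map)
	{
		unsigned int val;
		int ret;

		/* One-call read-modify-write, replacing a read8()/write8() pair. */
		ret = regmap_update_bits(map, 0x0400, BIT(7), BIT(7));
		if (ret)
			return ret;

		/* Poll until BIT(7) clears: sleep 10 us between reads and give
		 * up after 1000 us, the same bounds the converted wait helpers use.
		 */
		return regmap_read_poll_timeout(map, 0x0400, val,
						!(val & BIT(7)), 10, 1000);
	}

The per-width configs also carry the SPI read/write opcodes in read_flag_mask/write_flag_mask, so the regmap core assembles the same 32-bit command word that the removed ksz9477_spi_read_reg()/ksz9477_spi_write_reg() helpers used to build by hand.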
diff --git a/drivers/net/dsa/microchip/ksz_priv.h b/drivers/net/dsa/microchip/ksz_priv.h
index b52e5ca17ab4..beacf0e40f42 100644
--- a/drivers/net/dsa/microchip/ksz_priv.h
+++ b/drivers/net/dsa/microchip/ksz_priv.h
@@ -14,8 +14,6 @@
#include <linux/etherdevice.h>
#include <net/dsa.h>
-struct ksz_io_ops;
-
struct vlan_table {
u32 table[3];
};
@@ -49,14 +47,13 @@ struct ksz_device {
const char *name;
struct mutex dev_mutex; /* device access */
- struct mutex reg_mutex; /* register access */
struct mutex stats_mutex; /* status access */
struct mutex alu_mutex; /* ALU access */
struct mutex vlan_mutex; /* vlan access */
- const struct ksz_io_ops *ops;
const struct ksz_dev_ops *dev_ops;
struct device *dev;
+ struct regmap *regmap[3];
void *priv;
@@ -77,11 +74,11 @@ struct ksz_device {
int last_port; /* ports after that not used */
phy_interface_t interface;
u32 regs_size;
+ bool phy_errata_9477;
+ bool synclko_125;
struct vlan_table *vlan_cache;
- u8 *txbuf;
-
struct ksz_port *ports;
struct timer_list mib_read_timer;
struct work_struct mib_read;
@@ -100,19 +97,6 @@ struct ksz_device {
u16 port_mask;
};
-struct ksz_io_ops {
- int (*read8)(struct ksz_device *dev, u32 reg, u8 *value);
- int (*read16)(struct ksz_device *dev, u32 reg, u16 *value);
- int (*read24)(struct ksz_device *dev, u32 reg, u32 *value);
- int (*read32)(struct ksz_device *dev, u32 reg, u32 *value);
- int (*write8)(struct ksz_device *dev, u32 reg, u8 value);
- int (*write16)(struct ksz_device *dev, u32 reg, u16 value);
- int (*write24)(struct ksz_device *dev, u32 reg, u32 value);
- int (*write32)(struct ksz_device *dev, u32 reg, u32 value);
- int (*get)(struct ksz_device *dev, u32 reg, void *data, size_t len);
- int (*set)(struct ksz_device *dev, u32 reg, void *data, size_t len);
-};
-
struct alu_struct {
/* entry 1 */
u8 is_static:1;
@@ -161,8 +145,7 @@ struct ksz_dev_ops {
void (*exit)(struct ksz_device *dev);
};
-struct ksz_device *ksz_switch_alloc(struct device *base,
- const struct ksz_io_ops *ops, void *priv);
+struct ksz_device *ksz_switch_alloc(struct device *base, void *priv);
int ksz_switch_register(struct ksz_device *dev,
const struct ksz_dev_ops *ops);
void ksz_switch_remove(struct ksz_device *dev);
diff --git a/drivers/net/dsa/microchip/ksz_spi.h b/drivers/net/dsa/microchip/ksz_spi.h
deleted file mode 100644
index 427811bd60b3..000000000000
--- a/drivers/net/dsa/microchip/ksz_spi.h
+++ /dev/null
@@ -1,69 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0
- * Microchip KSZ series SPI access common header
- *
- * Copyright (C) 2017-2018 Microchip Technology Inc.
- * Tristram Ha <Tristram.Ha@microchip.com>
- */
-
-#ifndef __KSZ_SPI_H
-#define __KSZ_SPI_H
-
-/* Chip dependent SPI access */
-static int ksz_spi_read(struct ksz_device *dev, u32 reg, u8 *data,
- unsigned int len);
-static int ksz_spi_write(struct ksz_device *dev, u32 reg, void *data,
- unsigned int len);
-
-static int ksz_spi_read8(struct ksz_device *dev, u32 reg, u8 *val)
-{
- return ksz_spi_read(dev, reg, val, 1);
-}
-
-static int ksz_spi_read16(struct ksz_device *dev, u32 reg, u16 *val)
-{
- int ret = ksz_spi_read(dev, reg, (u8 *)val, 2);
-
- if (!ret)
- *val = be16_to_cpu(*val);
-
- return ret;
-}
-
-static int ksz_spi_read32(struct ksz_device *dev, u32 reg, u32 *val)
-{
- int ret = ksz_spi_read(dev, reg, (u8 *)val, 4);
-
- if (!ret)
- *val = be32_to_cpu(*val);
-
- return ret;
-}
-
-static int ksz_spi_write8(struct ksz_device *dev, u32 reg, u8 value)
-{
- return ksz_spi_write(dev, reg, &value, 1);
-}
-
-static int ksz_spi_write16(struct ksz_device *dev, u32 reg, u16 value)
-{
- value = cpu_to_be16(value);
- return ksz_spi_write(dev, reg, &value, 2);
-}
-
-static int ksz_spi_write32(struct ksz_device *dev, u32 reg, u32 value)
-{
- value = cpu_to_be32(value);
- return ksz_spi_write(dev, reg, &value, 4);
-}
-
-static int ksz_spi_get(struct ksz_device *dev, u32 reg, void *data, size_t len)
-{
- return ksz_spi_read(dev, reg, data, len);
-}
-
-static int ksz_spi_set(struct ksz_device *dev, u32 reg, void *data, size_t len)
-{
- return ksz_spi_write(dev, reg, data, len);
-}
-
-#endif
diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
index c7d352da5448..3181e95586d6 100644
--- a/drivers/net/dsa/mt7530.c
+++ b/drivers/net/dsa/mt7530.c
@@ -428,24 +428,48 @@ static int
mt7530_pad_clk_setup(struct dsa_switch *ds, int mode)
{
struct mt7530_priv *priv = ds->priv;
- u32 ncpo1, ssc_delta, trgint, i;
+ u32 ncpo1, ssc_delta, trgint, i, xtal;
+
+ xtal = mt7530_read(priv, MT7530_MHWTRAP) & HWTRAP_XTAL_MASK;
+
+ if (xtal == HWTRAP_XTAL_20MHZ) {
+ dev_err(priv->dev,
+ "%s: MT7530 with a 20MHz XTAL is not supported!\n",
+ __func__);
+ return -EINVAL;
+ }
switch (mode) {
case PHY_INTERFACE_MODE_RGMII:
trgint = 0;
+ /* PLL frequency: 125MHz */
ncpo1 = 0x0c80;
- ssc_delta = 0x87;
break;
case PHY_INTERFACE_MODE_TRGMII:
trgint = 1;
- ncpo1 = 0x1400;
- ssc_delta = 0x57;
+ if (priv->id == ID_MT7621) {
+ /* PLL frequency: 150MHz: 1.2GBit */
+ if (xtal == HWTRAP_XTAL_40MHZ)
+ ncpo1 = 0x0780;
+ if (xtal == HWTRAP_XTAL_25MHZ)
+ ncpo1 = 0x0a00;
+ } else { /* PLL frequency: 250MHz: 2.0Gbit */
+ if (xtal == HWTRAP_XTAL_40MHZ)
+ ncpo1 = 0x0c80;
+ if (xtal == HWTRAP_XTAL_25MHZ)
+ ncpo1 = 0x1400;
+ }
break;
default:
dev_err(priv->dev, "xMII mode %d not supported\n", mode);
return -EINVAL;
}
+ if (xtal == HWTRAP_XTAL_25MHZ)
+ ssc_delta = 0x57;
+ else
+ ssc_delta = 0x87;
+
mt7530_rmw(priv, MT7530_P6ECR, P6_INTF_MODE_MASK,
P6_INTF_MODE(trgint));
@@ -507,7 +531,9 @@ mt7530_pad_clk_setup(struct dsa_switch *ds, int mode)
mt7530_rmw(priv, MT7530_TRGMII_RD(i),
RD_TAP_MASK, RD_TAP(16));
else
- mt7623_trgmii_set(priv, GSW_INTF_MODE, INTF_MODE_TRGMII);
+ if (priv->id != ID_MT7621)
+ mt7623_trgmii_set(priv, GSW_INTF_MODE,
+ INTF_MODE_TRGMII);
return 0;
}
@@ -613,13 +639,13 @@ static void mt7530_adjust_link(struct dsa_switch *ds, int port,
struct mt7530_priv *priv = ds->priv;
if (phy_is_pseudo_fixed_link(phydev)) {
- if (priv->id == ID_MT7530) {
- dev_dbg(priv->dev, "phy-mode for master device = %x\n",
- phydev->interface);
+ dev_dbg(priv->dev, "phy-mode for master device = %x\n",
+ phydev->interface);
- /* Setup TX circuit including relevant PAD and driving */
- mt7530_pad_clk_setup(ds, phydev->interface);
+ /* Setup TX circuit including relevant PAD and driving */
+ mt7530_pad_clk_setup(ds, phydev->interface);
+ if (priv->id == ID_MT7530) {
/* Setup RX circuit, relevant PAD and driving on the
* host which must be placed after the setup on the
* device side is all finished.
diff --git a/drivers/net/dsa/mt7530.h b/drivers/net/dsa/mt7530.h
index 4331429969fa..bfac90f48102 100644
--- a/drivers/net/dsa/mt7530.h
+++ b/drivers/net/dsa/mt7530.h
@@ -244,6 +244,10 @@ enum mt7530_vlan_port_attr {
/* Register for hw trap status */
#define MT7530_HWTRAP 0x7800
+#define HWTRAP_XTAL_MASK (BIT(10) | BIT(9))
+#define HWTRAP_XTAL_25MHZ (BIT(10) | BIT(9))
+#define HWTRAP_XTAL_40MHZ (BIT(10))
+#define HWTRAP_XTAL_20MHZ (BIT(9))
/* Register for hw trap modification */
#define MT7530_MHWTRAP 0x7804
diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
index 063c7a671b41..6b17cd961d06 100644
--- a/drivers/net/dsa/mv88e6xxx/chip.c
+++ b/drivers/net/dsa/mv88e6xxx/chip.c
@@ -118,9 +118,9 @@ static irqreturn_t mv88e6xxx_g1_irq_thread_work(struct mv88e6xxx_chip *chip)
u16 ctl1;
int err;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
err = mv88e6xxx_g1_read(chip, MV88E6XXX_G1_STS, &reg);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
if (err)
goto out;
@@ -135,13 +135,13 @@ static irqreturn_t mv88e6xxx_g1_irq_thread_work(struct mv88e6xxx_chip *chip)
}
}
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
err = mv88e6xxx_g1_read(chip, MV88E6XXX_G1_CTL1, &ctl1);
if (err)
goto unlock;
err = mv88e6xxx_g1_read(chip, MV88E6XXX_G1_STS, &reg);
unlock:
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
if (err)
goto out;
ctl1 &= GENMASK(chip->g1_irq.nirqs, 0);
@@ -162,7 +162,7 @@ static void mv88e6xxx_g1_irq_bus_lock(struct irq_data *d)
{
struct mv88e6xxx_chip *chip = irq_data_get_irq_chip_data(d);
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
}
static void mv88e6xxx_g1_irq_bus_sync_unlock(struct irq_data *d)
@@ -184,7 +184,7 @@ static void mv88e6xxx_g1_irq_bus_sync_unlock(struct irq_data *d)
goto out;
out:
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
}
static const struct irq_chip mv88e6xxx_g1_irq_chip = {
@@ -239,9 +239,9 @@ static void mv88e6xxx_g1_irq_free(struct mv88e6xxx_chip *chip)
*/
free_irq(chip->irq, chip);
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
mv88e6xxx_g1_irq_free_common(chip);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
}
static int mv88e6xxx_g1_irq_setup_common(struct mv88e6xxx_chip *chip)
@@ -310,12 +310,12 @@ static int mv88e6xxx_g1_irq_setup(struct mv88e6xxx_chip *chip)
*/
irq_set_lockdep_class(chip->irq, &lock_key, &request_key);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
err = request_threaded_irq(chip->irq, NULL,
mv88e6xxx_g1_irq_thread_fn,
IRQF_ONESHOT | IRQF_SHARED,
dev_name(chip->dev), chip);
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
if (err)
mv88e6xxx_g1_irq_free_common(chip);
@@ -359,9 +359,9 @@ static void mv88e6xxx_irq_poll_free(struct mv88e6xxx_chip *chip)
kthread_cancel_delayed_work_sync(&chip->irq_poll_work);
kthread_destroy_worker(chip->kworker);
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
mv88e6xxx_g1_irq_free_common(chip);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
}
int mv88e6xxx_wait(struct mv88e6xxx_chip *chip, int addr, int reg, u16 mask)
@@ -496,11 +496,11 @@ static void mv88e6xxx_adjust_link(struct dsa_switch *ds, int port,
mv88e6xxx_phy_is_internal(ds, port))
return;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
err = mv88e6xxx_port_setup_mac(chip, port, phydev->link, phydev->speed,
phydev->duplex, phydev->pause,
phydev->interface);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
if (err && err != -EOPNOTSUPP)
dev_err(ds->dev, "p%d: failed to configure MAC\n", port);
@@ -616,12 +616,12 @@ static int mv88e6xxx_link_state(struct dsa_switch *ds, int port,
struct mv88e6xxx_chip *chip = ds->priv;
int err;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
if (chip->info->ops->port_link_state)
err = chip->info->ops->port_link_state(chip, port, state);
else
err = -EOPNOTSUPP;
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
return err;
}
@@ -651,10 +651,10 @@ static void mv88e6xxx_mac_config(struct dsa_switch *ds, int port,
}
pause = !!phylink_test(state->advertising, Pause);
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
err = mv88e6xxx_port_setup_mac(chip, port, link, speed, duplex, pause,
state->interface);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
if (err && err != -EOPNOTSUPP)
dev_err(ds->dev, "p%d: failed to configure MAC\n", port);
@@ -665,9 +665,9 @@ static void mv88e6xxx_mac_link_force(struct dsa_switch *ds, int port, int link)
struct mv88e6xxx_chip *chip = ds->priv;
int err;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
err = chip->info->ops->port_set_link(chip, port, link);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
if (err)
dev_err(chip->dev, "p%d: failed to force MAC link\n", port);
@@ -825,6 +825,12 @@ static int mv88e6095_stats_get_strings(struct mv88e6xxx_chip *chip,
STATS_TYPE_BANK0 | STATS_TYPE_PORT);
}
+static int mv88e6250_stats_get_strings(struct mv88e6xxx_chip *chip,
+ uint8_t *data)
+{
+ return mv88e6xxx_stats_get_strings(chip, data, STATS_TYPE_BANK0);
+}
+
static int mv88e6320_stats_get_strings(struct mv88e6xxx_chip *chip,
uint8_t *data)
{
@@ -859,7 +865,7 @@ static void mv88e6xxx_get_strings(struct dsa_switch *ds, int port,
if (stringset != ETH_SS_STATS)
return;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
if (chip->info->ops->stats_get_strings)
count = chip->info->ops->stats_get_strings(chip, data);
@@ -872,7 +878,7 @@ static void mv88e6xxx_get_strings(struct dsa_switch *ds, int port,
data += count * ETH_GSTRING_LEN;
mv88e6xxx_atu_vtu_get_strings(data);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
}
static int mv88e6xxx_stats_get_sset_count(struct mv88e6xxx_chip *chip,
@@ -895,6 +901,11 @@ static int mv88e6095_stats_get_sset_count(struct mv88e6xxx_chip *chip)
STATS_TYPE_PORT);
}
+static int mv88e6250_stats_get_sset_count(struct mv88e6xxx_chip *chip)
+{
+ return mv88e6xxx_stats_get_sset_count(chip, STATS_TYPE_BANK0);
+}
+
static int mv88e6320_stats_get_sset_count(struct mv88e6xxx_chip *chip)
{
return mv88e6xxx_stats_get_sset_count(chip, STATS_TYPE_BANK0 |
@@ -910,7 +921,7 @@ static int mv88e6xxx_get_sset_count(struct dsa_switch *ds, int port, int sset)
if (sset != ETH_SS_STATS)
return 0;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
if (chip->info->ops->stats_get_sset_count)
count = chip->info->ops->stats_get_sset_count(chip);
if (count < 0)
@@ -927,7 +938,7 @@ static int mv88e6xxx_get_sset_count(struct dsa_switch *ds, int port, int sset)
count += ARRAY_SIZE(mv88e6xxx_atu_vtu_stats_strings);
out:
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
return count;
}
@@ -942,11 +953,11 @@ static int mv88e6xxx_stats_get_stats(struct mv88e6xxx_chip *chip, int port,
for (i = 0, j = 0; i < ARRAY_SIZE(mv88e6xxx_hw_stats); i++) {
stat = &mv88e6xxx_hw_stats[i];
if (stat->type & types) {
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
data[j] = _mv88e6xxx_get_ethtool_stat(chip, stat, port,
bank1_select,
histogram);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
j++;
}
@@ -962,6 +973,13 @@ static int mv88e6095_stats_get_stats(struct mv88e6xxx_chip *chip, int port,
0, MV88E6XXX_G1_STATS_OP_HIST_RX_TX);
}
+static int mv88e6250_stats_get_stats(struct mv88e6xxx_chip *chip, int port,
+ uint64_t *data)
+{
+ return mv88e6xxx_stats_get_stats(chip, port, data, STATS_TYPE_BANK0,
+ 0, MV88E6XXX_G1_STATS_OP_HIST_RX_TX);
+}
+
static int mv88e6320_stats_get_stats(struct mv88e6xxx_chip *chip, int port,
uint64_t *data)
{
@@ -998,14 +1016,14 @@ static void mv88e6xxx_get_stats(struct mv88e6xxx_chip *chip, int port,
if (chip->info->ops->stats_get_stats)
count = chip->info->ops->stats_get_stats(chip, port, data);
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
if (chip->info->ops->serdes_get_stats) {
data += count;
count = chip->info->ops->serdes_get_stats(chip, port, data);
}
data += count;
mv88e6xxx_atu_vtu_get_stats(chip, port, data);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
}
static void mv88e6xxx_get_ethtool_stats(struct dsa_switch *ds, int port,
@@ -1014,10 +1032,10 @@ static void mv88e6xxx_get_ethtool_stats(struct dsa_switch *ds, int port,
struct mv88e6xxx_chip *chip = ds->priv;
int ret;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
ret = mv88e6xxx_stats_snapshot(chip, port);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
if (ret < 0)
return;
@@ -1044,7 +1062,7 @@ static void mv88e6xxx_get_regs(struct dsa_switch *ds, int port,
memset(p, 0xff, 32 * sizeof(u16));
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
for (i = 0; i < 32; i++) {
@@ -1053,7 +1071,7 @@ static void mv88e6xxx_get_regs(struct dsa_switch *ds, int port,
p[i] = reg;
}
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
}
static int mv88e6xxx_get_mac_eee(struct dsa_switch *ds, int port,
@@ -1119,9 +1137,9 @@ static void mv88e6xxx_port_stp_state_set(struct dsa_switch *ds, int port,
struct mv88e6xxx_chip *chip = ds->priv;
int err;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
err = mv88e6xxx_port_set_state(chip, port, state);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
if (err)
dev_err(ds->dev, "p%d: failed to update state\n", port);
@@ -1306,9 +1324,9 @@ static void mv88e6xxx_port_fast_age(struct dsa_switch *ds, int port)
struct mv88e6xxx_chip *chip = ds->priv;
int err;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
err = mv88e6xxx_g1_atu_remove(chip, 0, port, false);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
if (err)
dev_err(ds->dev, "p%d: failed to flush ATU\n", port);
@@ -1436,7 +1454,7 @@ static int mv88e6xxx_port_check_hw_vlan(struct dsa_switch *ds, int port,
if (!vid_begin)
return -EOPNOTSUPP;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
do {
err = mv88e6xxx_vtu_getnext(chip, &vlan);
@@ -1476,7 +1494,7 @@ static int mv88e6xxx_port_check_hw_vlan(struct dsa_switch *ds, int port,
} while (vlan.vid < vid_end);
unlock:
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
return err;
}
@@ -1492,9 +1510,9 @@ static int mv88e6xxx_port_vlan_filtering(struct dsa_switch *ds, int port,
if (!chip->info->max_vid)
return -EOPNOTSUPP;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
err = mv88e6xxx_port_set_8021q_mode(chip, port, mode);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
return err;
}
@@ -1628,7 +1646,7 @@ static void mv88e6xxx_port_vlan_add(struct dsa_switch *ds, int port,
else
member = MV88E6XXX_G1_VTU_DATA_MEMBER_TAG_TAGGED;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
for (vid = vlan->vid_begin; vid <= vlan->vid_end; ++vid)
if (_mv88e6xxx_port_vlan_add(chip, port, vid, member))
@@ -1639,7 +1657,7 @@ static void mv88e6xxx_port_vlan_add(struct dsa_switch *ds, int port,
dev_err(ds->dev, "p%d: failed to set PVID %d\n", port,
vlan->vid_end);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
}
static int _mv88e6xxx_port_vlan_del(struct mv88e6xxx_chip *chip,
@@ -1685,7 +1703,7 @@ static int mv88e6xxx_port_vlan_del(struct dsa_switch *ds, int port,
if (!chip->info->max_vid)
return -EOPNOTSUPP;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
err = mv88e6xxx_port_get_pvid(chip, port, &pvid);
if (err)
@@ -1704,7 +1722,7 @@ static int mv88e6xxx_port_vlan_del(struct dsa_switch *ds, int port,
}
unlock:
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
return err;
}
@@ -1715,10 +1733,10 @@ static int mv88e6xxx_port_fdb_add(struct dsa_switch *ds, int port,
struct mv88e6xxx_chip *chip = ds->priv;
int err;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
err = mv88e6xxx_port_db_load_purge(chip, port, addr, vid,
MV88E6XXX_G1_ATU_DATA_STATE_UC_STATIC);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
return err;
}
@@ -1729,10 +1747,10 @@ static int mv88e6xxx_port_fdb_del(struct dsa_switch *ds, int port,
struct mv88e6xxx_chip *chip = ds->priv;
int err;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
err = mv88e6xxx_port_db_load_purge(chip, port, addr, vid,
MV88E6XXX_G1_ATU_DATA_STATE_UNUSED);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
return err;
}
@@ -1749,9 +1767,7 @@ static int mv88e6xxx_port_db_dump_fid(struct mv88e6xxx_chip *chip,
eth_broadcast_addr(addr.mac);
do {
- mutex_lock(&chip->reg_lock);
err = mv88e6xxx_g1_atu_getnext(chip, fid, &addr);
- mutex_unlock(&chip->reg_lock);
if (err)
return err;
@@ -1784,10 +1800,7 @@ static int mv88e6xxx_port_db_dump(struct mv88e6xxx_chip *chip, int port,
int err;
/* Dump port's default Filtering Information Database (VLAN ID 0) */
- mutex_lock(&chip->reg_lock);
err = mv88e6xxx_port_get_fid(chip, port, &fid);
- mutex_unlock(&chip->reg_lock);
-
if (err)
return err;
@@ -1797,9 +1810,7 @@ static int mv88e6xxx_port_db_dump(struct mv88e6xxx_chip *chip, int port,
/* Dump VLANs' Filtering Information Databases */
do {
- mutex_lock(&chip->reg_lock);
err = mv88e6xxx_vtu_getnext(chip, &vlan);
- mutex_unlock(&chip->reg_lock);
if (err)
return err;
@@ -1819,8 +1830,13 @@ static int mv88e6xxx_port_fdb_dump(struct dsa_switch *ds, int port,
dsa_fdb_dump_cb_t *cb, void *data)
{
struct mv88e6xxx_chip *chip = ds->priv;
+ int err;
- return mv88e6xxx_port_db_dump(chip, port, cb, data);
+ mv88e6xxx_reg_lock(chip);
+ err = mv88e6xxx_port_db_dump(chip, port, cb, data);
+ mv88e6xxx_reg_unlock(chip);
+
+ return err;
}
static int mv88e6xxx_bridge_map(struct mv88e6xxx_chip *chip,
@@ -1867,9 +1883,9 @@ static int mv88e6xxx_port_bridge_join(struct dsa_switch *ds, int port,
struct mv88e6xxx_chip *chip = ds->priv;
int err;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
err = mv88e6xxx_bridge_map(chip, br);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
return err;
}
@@ -1879,11 +1895,11 @@ static void mv88e6xxx_port_bridge_leave(struct dsa_switch *ds, int port,
{
struct mv88e6xxx_chip *chip = ds->priv;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
if (mv88e6xxx_bridge_map(chip, br) ||
mv88e6xxx_port_vlan_map(chip, port))
dev_err(ds->dev, "failed to remap in-chip Port VLAN\n");
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
}
static int mv88e6xxx_crosschip_bridge_join(struct dsa_switch *ds, int dev,
@@ -1895,9 +1911,9 @@ static int mv88e6xxx_crosschip_bridge_join(struct dsa_switch *ds, int dev,
if (!mv88e6xxx_has_pvt(chip))
return 0;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
err = mv88e6xxx_pvt_map(chip, dev, port);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
return err;
}
@@ -1910,10 +1926,10 @@ static void mv88e6xxx_crosschip_bridge_leave(struct dsa_switch *ds, int dev,
if (!mv88e6xxx_has_pvt(chip))
return;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
if (mv88e6xxx_pvt_map(chip, dev, port))
dev_err(ds->dev, "failed to remap cross-chip Port VLAN\n");
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
}
static int mv88e6xxx_software_reset(struct mv88e6xxx_chip *chip)
@@ -2264,14 +2280,14 @@ static int mv88e6xxx_port_enable(struct dsa_switch *ds, int port,
struct mv88e6xxx_chip *chip = ds->priv;
int err;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
err = mv88e6xxx_serdes_power(chip, port, true);
if (!err && chip->info->ops->serdes_irq_setup)
err = chip->info->ops->serdes_irq_setup(chip, port);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
return err;
}
@@ -2280,7 +2296,7 @@ static void mv88e6xxx_port_disable(struct dsa_switch *ds, int port)
{
struct mv88e6xxx_chip *chip = ds->priv;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
if (mv88e6xxx_port_set_state(chip, port, BR_STATE_DISABLED))
dev_err(chip->dev, "failed to disable port\n");
@@ -2291,7 +2307,7 @@ static void mv88e6xxx_port_disable(struct dsa_switch *ds, int port)
if (mv88e6xxx_serdes_power(chip, port, false))
dev_err(chip->dev, "failed to power off SERDES\n");
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
}
static int mv88e6xxx_set_ageing_time(struct dsa_switch *ds,
@@ -2300,9 +2316,9 @@ static int mv88e6xxx_set_ageing_time(struct dsa_switch *ds,
struct mv88e6xxx_chip *chip = ds->priv;
int err;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
err = mv88e6xxx_g1_atu_set_age_time(chip, ageing_time);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
return err;
}
@@ -2432,7 +2448,7 @@ static int mv88e6xxx_setup(struct dsa_switch *ds)
chip->ds = ds;
ds->slave_mii_bus = mv88e6xxx_default_mdio_bus(chip);
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
if (chip->info->ops->setup_errata) {
err = chip->info->ops->setup_errata(chip);
@@ -2539,7 +2555,7 @@ static int mv88e6xxx_setup(struct dsa_switch *ds)
goto unlock;
unlock:
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
return err;
}
@@ -2554,9 +2570,9 @@ static int mv88e6xxx_mdio_read(struct mii_bus *bus, int phy, int reg)
if (!chip->info->ops->phy_read)
return -EOPNOTSUPP;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
err = chip->info->ops->phy_read(chip, bus, phy, reg, &val);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
if (reg == MII_PHYSID2) {
/* Some internal PHYs don't have a model number. */
@@ -2589,9 +2605,9 @@ static int mv88e6xxx_mdio_write(struct mii_bus *bus, int phy, int reg, u16 val)
if (!chip->info->ops->phy_write)
return -EOPNOTSUPP;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
err = chip->info->ops->phy_write(chip, bus, phy, reg, val);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
return err;
}
@@ -2606,9 +2622,9 @@ static int mv88e6xxx_mdio_register(struct mv88e6xxx_chip *chip,
int err;
if (external) {
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
err = mv88e6xxx_g2_scratch_gpio_set_smi(chip, true);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
if (err)
return err;
@@ -2729,9 +2745,9 @@ static int mv88e6xxx_get_eeprom(struct dsa_switch *ds,
if (!chip->info->ops->get_eeprom)
return -EOPNOTSUPP;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
err = chip->info->ops->get_eeprom(chip, eeprom, data);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
if (err)
return err;
@@ -2753,9 +2769,9 @@ static int mv88e6xxx_set_eeprom(struct dsa_switch *ds,
if (eeprom->magic != 0xc3ec4951)
return -EINVAL;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
err = chip->info->ops->set_eeprom(chip, eeprom, data);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
return err;
}
@@ -3444,6 +3460,44 @@ static const struct mv88e6xxx_ops mv88e6240_ops = {
.phylink_validate = mv88e6352_phylink_validate,
};
+static const struct mv88e6xxx_ops mv88e6250_ops = {
+ /* MV88E6XXX_FAMILY_6250 */
+ .ieee_pri_map = mv88e6250_g1_ieee_pri_map,
+ .ip_pri_map = mv88e6085_g1_ip_pri_map,
+ .irl_init_all = mv88e6352_g2_irl_init_all,
+ .get_eeprom = mv88e6xxx_g2_get_eeprom16,
+ .set_eeprom = mv88e6xxx_g2_set_eeprom16,
+ .set_switch_mac = mv88e6xxx_g2_set_switch_mac,
+ .phy_read = mv88e6xxx_g2_smi_phy_read,
+ .phy_write = mv88e6xxx_g2_smi_phy_write,
+ .port_set_link = mv88e6xxx_port_set_link,
+ .port_set_duplex = mv88e6xxx_port_set_duplex,
+ .port_set_rgmii_delay = mv88e6352_port_set_rgmii_delay,
+ .port_set_speed = mv88e6250_port_set_speed,
+ .port_tag_remap = mv88e6095_port_tag_remap,
+ .port_set_frame_mode = mv88e6351_port_set_frame_mode,
+ .port_set_egress_floods = mv88e6352_port_set_egress_floods,
+ .port_set_ether_type = mv88e6351_port_set_ether_type,
+ .port_egress_rate_limiting = mv88e6097_port_egress_rate_limiting,
+ .port_pause_limit = mv88e6097_port_pause_limit,
+ .port_disable_pri_override = mv88e6xxx_port_disable_pri_override,
+ .port_link_state = mv88e6250_port_link_state,
+ .stats_snapshot = mv88e6320_g1_stats_snapshot,
+ .stats_set_histogram = mv88e6095_g1_stats_set_histogram,
+ .stats_get_sset_count = mv88e6250_stats_get_sset_count,
+ .stats_get_strings = mv88e6250_stats_get_strings,
+ .stats_get_stats = mv88e6250_stats_get_stats,
+ .set_cpu_port = mv88e6095_g1_set_cpu_port,
+ .set_egress_port = mv88e6095_g1_set_egress_port,
+ .watchdog_ops = &mv88e6250_watchdog_ops,
+ .mgmt_rsvd2cpu = mv88e6352_g2_mgmt_rsvd2cpu,
+ .pot_clear = mv88e6xxx_g2_pot_clear,
+ .reset = mv88e6250_g1_reset,
+ .vtu_getnext = mv88e6250_g1_vtu_getnext,
+ .vtu_loadpurge = mv88e6250_g1_vtu_loadpurge,
+ .phylink_validate = mv88e6065_phylink_validate,
+};
+
static const struct mv88e6xxx_ops mv88e6290_ops = {
/* MV88E6XXX_FAMILY_6390 */
.setup_errata = mv88e6390_setup_errata,
@@ -4229,6 +4283,27 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.ops = &mv88e6240_ops,
},
+ [MV88E6250] = {
+ .prod_num = MV88E6XXX_PORT_SWITCH_ID_PROD_6250,
+ .family = MV88E6XXX_FAMILY_6250,
+ .name = "Marvell 88E6250",
+ .num_databases = 64,
+ .num_ports = 7,
+ .num_internal_phys = 5,
+ .max_vid = 4095,
+ .port_base_addr = 0x08,
+ .phy_base_addr = 0x00,
+ .global1_addr = 0x0f,
+ .global2_addr = 0x07,
+ .age_time_coeff = 15000,
+ .g1_irqs = 9,
+ .g2_irqs = 10,
+ .atu_move_port_mask = 0xf,
+ .dual_chip = true,
+ .tag_protocol = DSA_TAG_PROTO_DSA,
+ .ops = &mv88e6250_ops,
+ },
+
[MV88E6290] = {
.prod_num = MV88E6XXX_PORT_SWITCH_ID_PROD_6290,
.family = MV88E6XXX_FAMILY_6390,
@@ -4457,9 +4532,9 @@ static int mv88e6xxx_detect(struct mv88e6xxx_chip *chip)
u16 id;
int err;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
err = mv88e6xxx_port_read(chip, 0, MV88E6XXX_PORT_SWITCH_ID, &id);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
if (err)
return err;
@@ -4522,12 +4597,12 @@ static void mv88e6xxx_port_mdb_add(struct dsa_switch *ds, int port,
{
struct mv88e6xxx_chip *chip = ds->priv;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
if (mv88e6xxx_port_db_load_purge(chip, port, mdb->addr, mdb->vid,
MV88E6XXX_G1_ATU_DATA_STATE_MC_STATIC))
dev_err(ds->dev, "p%d: failed to load multicast MAC address\n",
port);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
}
static int mv88e6xxx_port_mdb_del(struct dsa_switch *ds, int port,
@@ -4536,10 +4611,10 @@ static int mv88e6xxx_port_mdb_del(struct dsa_switch *ds, int port,
struct mv88e6xxx_chip *chip = ds->priv;
int err;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
err = mv88e6xxx_port_db_load_purge(chip, port, mdb->addr, mdb->vid,
MV88E6XXX_G1_ATU_DATA_STATE_UNUSED);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
return err;
}
@@ -4550,12 +4625,12 @@ static int mv88e6xxx_port_egress_floods(struct dsa_switch *ds, int port,
struct mv88e6xxx_chip *chip = ds->priv;
int err = -EOPNOTSUPP;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
if (chip->info->ops->port_set_egress_floods)
err = chip->info->ops->port_set_egress_floods(chip, port,
unicast,
multicast);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
return err;
}
@@ -4711,6 +4786,8 @@ static int mv88e6xxx_probe(struct mdio_device *mdiodev)
err = PTR_ERR(chip->reset);
goto out;
}
+ if (chip->reset)
+ usleep_range(1000, 2000);
err = mv88e6xxx_detect(chip);
if (err)
@@ -4726,9 +4803,9 @@ static int mv88e6xxx_probe(struct mdio_device *mdiodev)
chip->eeprom_len = pdata->eeprom_len;
}
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
err = mv88e6xxx_switch_reset(chip);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
if (err)
goto out;
@@ -4747,12 +4824,12 @@ static int mv88e6xxx_probe(struct mdio_device *mdiodev)
* the PHYs will link their interrupts to these interrupt
* controllers
*/
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
if (chip->irq > 0)
err = mv88e6xxx_g1_irq_setup(chip);
else
err = mv88e6xxx_irq_poll_setup(chip);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
if (err)
goto out;
@@ -4837,6 +4914,10 @@ static const struct of_device_id mv88e6xxx_of_match[] = {
.compatible = "marvell,mv88e6190",
.data = &mv88e6xxx_table[MV88E6190],
},
+ {
+ .compatible = "marvell,mv88e6250",
+ .data = &mv88e6xxx_table[MV88E6250],
+ },
{ /* sentinel */ },
};
diff --git a/drivers/net/dsa/mv88e6xxx/chip.h b/drivers/net/dsa/mv88e6xxx/chip.h
index d3e10111a6fe..4646e46d47f2 100644
--- a/drivers/net/dsa/mv88e6xxx/chip.h
+++ b/drivers/net/dsa/mv88e6xxx/chip.h
@@ -58,6 +58,7 @@ enum mv88e6xxx_model {
MV88E6190X,
MV88E6191,
MV88E6240,
+ MV88E6250,
MV88E6290,
MV88E6320,
MV88E6321,
@@ -76,6 +77,7 @@ enum mv88e6xxx_family {
MV88E6XXX_FAMILY_6097, /* 6046 6085 6096 6097 */
MV88E6XXX_FAMILY_6165, /* 6123 6161 6165 */
MV88E6XXX_FAMILY_6185, /* 6108 6121 6122 6131 6152 6155 6182 6185 */
+ MV88E6XXX_FAMILY_6250, /* 6250 */
MV88E6XXX_FAMILY_6320, /* 6320 6321 */
MV88E6XXX_FAMILY_6341, /* 6141 6341 */
MV88E6XXX_FAMILY_6351, /* 6171 6175 6350 6351 */
@@ -108,6 +110,12 @@ struct mv88e6xxx_info {
* when it is non-zero, and use indirect access to internal registers.
*/
bool multi_chip;
+ /* Dual-chip Addressing Mode
+ * Some chips respond to only half of the 32 SMI addresses,
+ * allowing two to coexist on the same SMI interface.
+ */
+ bool dual_chip;
+
enum dsa_tag_protocol tag_protocol;
/* Mask for FromPort and ToPort value of PortVec used in ATU Move
@@ -572,4 +580,14 @@ int mv88e6xxx_port_setup_mac(struct mv88e6xxx_chip *chip, int port, int link,
phy_interface_t mode);
struct mii_bus *mv88e6xxx_default_mdio_bus(struct mv88e6xxx_chip *chip);
+static inline void mv88e6xxx_reg_lock(struct mv88e6xxx_chip *chip)
+{
+ mutex_lock(&chip->reg_lock);
+}
+
+static inline void mv88e6xxx_reg_unlock(struct mv88e6xxx_chip *chip)
+{
+ mutex_unlock(&chip->reg_lock);
+}
+
#endif /* _MV88E6XXX_CHIP_H */
diff --git a/drivers/net/dsa/mv88e6xxx/global1.c b/drivers/net/dsa/mv88e6xxx/global1.c
index 09b8a3d0dd37..1323ef30a5e9 100644
--- a/drivers/net/dsa/mv88e6xxx/global1.c
+++ b/drivers/net/dsa/mv88e6xxx/global1.c
@@ -178,7 +178,7 @@ int mv88e6185_g1_reset(struct mv88e6xxx_chip *chip)
return mv88e6185_g1_wait_ppu_polling(chip);
}
-int mv88e6352_g1_reset(struct mv88e6xxx_chip *chip)
+int mv88e6250_g1_reset(struct mv88e6xxx_chip *chip)
{
u16 val;
int err;
@@ -194,7 +194,14 @@ int mv88e6352_g1_reset(struct mv88e6xxx_chip *chip)
if (err)
return err;
- err = mv88e6xxx_g1_wait_init_ready(chip);
+ return mv88e6xxx_g1_wait_init_ready(chip);
+}
+
+int mv88e6352_g1_reset(struct mv88e6xxx_chip *chip)
+{
+ int err;
+
+ err = mv88e6250_g1_reset(chip);
if (err)
return err;
@@ -295,6 +302,12 @@ int mv88e6085_g1_ieee_pri_map(struct mv88e6xxx_chip *chip)
return mv88e6xxx_g1_write(chip, MV88E6XXX_G1_IEEE_PRI, 0xfa41);
}
+int mv88e6250_g1_ieee_pri_map(struct mv88e6xxx_chip *chip)
+{
+ /* Reset the IEEE Tag priorities to defaults */
+ return mv88e6xxx_g1_write(chip, MV88E6XXX_G1_IEEE_PRI, 0xfa50);
+}
+
/* Offset 0x1a: Monitor Control */
/* Offset 0x1a: Monitor & MGMT Control on some devices */
@@ -375,26 +388,26 @@ int mv88e6390_g1_mgmt_rsvd2cpu(struct mv88e6xxx_chip *chip)
u16 ptr;
int err;
- /* 01:c2:80:00:00:00:00-01:c2:80:00:00:00:07 are Management */
- ptr = MV88E6390_G1_MONITOR_MGMT_CTL_PTR_0180C280000000XLO;
+ /* 01:80:c2:00:00:00-01:80:c2:00:00:07 are Management */
+ ptr = MV88E6390_G1_MONITOR_MGMT_CTL_PTR_0180C200000XLO;
err = mv88e6390_g1_monitor_write(chip, ptr, 0xff);
if (err)
return err;
- /* 01:c2:80:00:00:00:08-01:c2:80:00:00:00:0f are Management */
- ptr = MV88E6390_G1_MONITOR_MGMT_CTL_PTR_0180C280000000XHI;
+ /* 01:80:c2:00:00:08-01:80:c2:00:00:0f are Management */
+ ptr = MV88E6390_G1_MONITOR_MGMT_CTL_PTR_0180C200000XHI;
err = mv88e6390_g1_monitor_write(chip, ptr, 0xff);
if (err)
return err;
- /* 01:c2:80:00:00:00:20-01:c2:80:00:00:00:27 are Management */
- ptr = MV88E6390_G1_MONITOR_MGMT_CTL_PTR_0180C280000002XLO;
+ /* 01:80:c2:00:00:20-01:80:c2:00:00:27 are Management */
+ ptr = MV88E6390_G1_MONITOR_MGMT_CTL_PTR_0180C200002XLO;
err = mv88e6390_g1_monitor_write(chip, ptr, 0xff);
if (err)
return err;
- /* 01:c2:80:00:00:00:28-01:c2:80:00:00:00:2f are Management */
- ptr = MV88E6390_G1_MONITOR_MGMT_CTL_PTR_0180C280000002XHI;
+ /* 01:80:c2:00:00:28-01:80:c2:00:00:2f are Management */
+ ptr = MV88E6390_G1_MONITOR_MGMT_CTL_PTR_0180C200002XHI;
err = mv88e6390_g1_monitor_write(chip, ptr, 0xff);
if (err)
return err;
@@ -461,7 +474,7 @@ int mv88e6xxx_g1_set_device_number(struct mv88e6xxx_chip *chip, int index)
/* Offset 0x1d: Statistics Operation 2 */
-int mv88e6xxx_g1_stats_wait(struct mv88e6xxx_chip *chip)
+static int mv88e6xxx_g1_stats_wait(struct mv88e6xxx_chip *chip)
{
return mv88e6xxx_g1_wait(chip, MV88E6XXX_G1_STATS_OP,
MV88E6XXX_G1_STATS_OP_BUSY);
diff --git a/drivers/net/dsa/mv88e6xxx/global1.h b/drivers/net/dsa/mv88e6xxx/global1.h
index 7bd5ab733a3f..d444266f7d78 100644
--- a/drivers/net/dsa/mv88e6xxx/global1.h
+++ b/drivers/net/dsa/mv88e6xxx/global1.h
@@ -186,10 +186,10 @@
#define MV88E6390_G1_MONITOR_MGMT_CTL 0x1a
#define MV88E6390_G1_MONITOR_MGMT_CTL_UPDATE 0x8000
#define MV88E6390_G1_MONITOR_MGMT_CTL_PTR_MASK 0x3f00
-#define MV88E6390_G1_MONITOR_MGMT_CTL_PTR_0180C280000000XLO 0x0000
-#define MV88E6390_G1_MONITOR_MGMT_CTL_PTR_0180C280000000XHI 0x0100
-#define MV88E6390_G1_MONITOR_MGMT_CTL_PTR_0180C280000002XLO 0x0200
-#define MV88E6390_G1_MONITOR_MGMT_CTL_PTR_0180C280000002XHI 0x0300
+#define MV88E6390_G1_MONITOR_MGMT_CTL_PTR_0180C200000XLO 0x0000
+#define MV88E6390_G1_MONITOR_MGMT_CTL_PTR_0180C200000XHI 0x0100
+#define MV88E6390_G1_MONITOR_MGMT_CTL_PTR_0180C200002XLO 0x0200
+#define MV88E6390_G1_MONITOR_MGMT_CTL_PTR_0180C200002XHI 0x0300
#define MV88E6390_G1_MONITOR_MGMT_CTL_PTR_INGRESS_DEST 0x2000
#define MV88E6390_G1_MONITOR_MGMT_CTL_PTR_EGRESS_DEST 0x2100
#define MV88E6390_G1_MONITOR_MGMT_CTL_PTR_CPU_DEST 0x3000
@@ -255,11 +255,11 @@ int mv88e6xxx_g1_set_switch_mac(struct mv88e6xxx_chip *chip, u8 *addr);
int mv88e6185_g1_reset(struct mv88e6xxx_chip *chip);
int mv88e6352_g1_reset(struct mv88e6xxx_chip *chip);
+int mv88e6250_g1_reset(struct mv88e6xxx_chip *chip);
int mv88e6185_g1_ppu_enable(struct mv88e6xxx_chip *chip);
int mv88e6185_g1_ppu_disable(struct mv88e6xxx_chip *chip);
-int mv88e6xxx_g1_stats_wait(struct mv88e6xxx_chip *chip);
int mv88e6xxx_g1_stats_snapshot(struct mv88e6xxx_chip *chip, int port);
int mv88e6320_g1_stats_snapshot(struct mv88e6xxx_chip *chip, int port);
int mv88e6390_g1_stats_snapshot(struct mv88e6xxx_chip *chip, int port);
@@ -274,7 +274,9 @@ int mv88e6390_g1_set_cpu_port(struct mv88e6xxx_chip *chip, int port);
int mv88e6390_g1_mgmt_rsvd2cpu(struct mv88e6xxx_chip *chip);
int mv88e6085_g1_ip_pri_map(struct mv88e6xxx_chip *chip);
+
int mv88e6085_g1_ieee_pri_map(struct mv88e6xxx_chip *chip);
+int mv88e6250_g1_ieee_pri_map(struct mv88e6xxx_chip *chip);
int mv88e6185_g1_set_cascade_port(struct mv88e6xxx_chip *chip, int port);
@@ -301,6 +303,10 @@ int mv88e6185_g1_vtu_getnext(struct mv88e6xxx_chip *chip,
struct mv88e6xxx_vtu_entry *entry);
int mv88e6185_g1_vtu_loadpurge(struct mv88e6xxx_chip *chip,
struct mv88e6xxx_vtu_entry *entry);
+int mv88e6250_g1_vtu_getnext(struct mv88e6xxx_chip *chip,
+ struct mv88e6xxx_vtu_entry *entry);
+int mv88e6250_g1_vtu_loadpurge(struct mv88e6xxx_chip *chip,
+ struct mv88e6xxx_vtu_entry *entry);
int mv88e6352_g1_vtu_getnext(struct mv88e6xxx_chip *chip,
struct mv88e6xxx_vtu_entry *entry);
int mv88e6352_g1_vtu_loadpurge(struct mv88e6xxx_chip *chip,
diff --git a/drivers/net/dsa/mv88e6xxx/global1_atu.c b/drivers/net/dsa/mv88e6xxx/global1_atu.c
index 4542dfa5fc69..1cf388e9bd94 100644
--- a/drivers/net/dsa/mv88e6xxx/global1_atu.c
+++ b/drivers/net/dsa/mv88e6xxx/global1_atu.c
@@ -90,7 +90,7 @@ static int mv88e6xxx_g1_atu_op(struct mv88e6xxx_chip *chip, u16 fid, u16 op)
if (err)
return err;
} else {
- if (mv88e6xxx_num_databases(chip) > 16) {
+ if (mv88e6xxx_num_databases(chip) > 64) {
/* ATU DBNum[7:4] are located in ATU Control 15:12 */
err = mv88e6xxx_g1_read(chip, MV88E6XXX_G1_ATU_CTL,
&val);
@@ -102,6 +102,9 @@ static int mv88e6xxx_g1_atu_op(struct mv88e6xxx_chip *chip, u16 fid, u16 op)
val);
if (err)
return err;
+ } else if (mv88e6xxx_num_databases(chip) > 16) {
+ /* ATU DBNum[5:4] are located in ATU Operation 9:8 */
+ op |= (fid & 0x30) << 4;
}
/* ATU DBNum[3:0] are located in ATU Operation 3:0 */
@@ -314,7 +317,7 @@ static irqreturn_t mv88e6xxx_g1_atu_prob_irq_thread_fn(int irq, void *dev_id)
int err;
u16 val;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
err = mv88e6xxx_g1_atu_op(chip, 0,
MV88E6XXX_G1_ATU_OP_GET_CLR_VIOLATION);
@@ -361,12 +364,12 @@ static irqreturn_t mv88e6xxx_g1_atu_prob_irq_thread_fn(int irq, void *dev_id)
entry.mac, entry.portvec, spid);
chip->ports[spid].atu_full_violation++;
}
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
return IRQ_HANDLED;
out:
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
dev_err(chip->dev, "ATU problem: error %d while handling interrupt\n",
err);
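For readers following the ATU hunk above: on chips such as the 88E6250, which have 64 databases but no DBNum[7:4] field in ATU Control, the FID is split across the ATU Operation word. A minimal userspace sketch of that bit packing follows; the helper name and the test value are invented for illustration.

#include <stdio.h>
#include <stdint.h>

static uint16_t pack_atu_op(uint16_t op, uint16_t fid)
{
	op |= (fid & 0x30) << 4;	/* DBNum[5:4] -> ATU Operation 9:8 */
	op |= fid & 0x000f;		/* DBNum[3:0] -> ATU Operation 3:0 */
	return op;
}

int main(void)
{
	/* FID 0x2a: bits 5:4 (0x2) land in op[9:8], bits 3:0 (0xa) in op[3:0] */
	printf("0x%04x\n", pack_atu_op(0, 0x2a));	/* prints 0x020a */
	return 0;
}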
diff --git a/drivers/net/dsa/mv88e6xxx/global1_vtu.c b/drivers/net/dsa/mv88e6xxx/global1_vtu.c
index ecef69045a42..6cac997360e8 100644
--- a/drivers/net/dsa/mv88e6xxx/global1_vtu.c
+++ b/drivers/net/dsa/mv88e6xxx/global1_vtu.c
@@ -303,6 +303,35 @@ static int mv88e6xxx_g1_vtu_getnext(struct mv88e6xxx_chip *chip,
return mv88e6xxx_g1_vtu_vid_read(chip, entry);
}
+int mv88e6250_g1_vtu_getnext(struct mv88e6xxx_chip *chip,
+ struct mv88e6xxx_vtu_entry *entry)
+{
+ u16 val;
+ int err;
+
+ err = mv88e6xxx_g1_vtu_getnext(chip, entry);
+ if (err)
+ return err;
+
+ if (entry->valid) {
+ err = mv88e6185_g1_vtu_data_read(chip, entry);
+ if (err)
+ return err;
+
+ /* VTU DBNum[3:0] are located in VTU Operation 3:0
+ * VTU DBNum[5:4] are located in VTU Operation 9:8
+ */
+ err = mv88e6xxx_g1_read(chip, MV88E6XXX_G1_VTU_OP, &val);
+ if (err)
+ return err;
+
+ entry->fid = val & 0x000f;
+ entry->fid |= (val & 0x0300) >> 4;
+ }
+
+ return 0;
+}
+
int mv88e6185_g1_vtu_getnext(struct mv88e6xxx_chip *chip,
struct mv88e6xxx_vtu_entry *entry)
{
@@ -392,6 +421,35 @@ int mv88e6390_g1_vtu_getnext(struct mv88e6xxx_chip *chip,
return 0;
}
+int mv88e6250_g1_vtu_loadpurge(struct mv88e6xxx_chip *chip,
+ struct mv88e6xxx_vtu_entry *entry)
+{
+ u16 op = MV88E6XXX_G1_VTU_OP_VTU_LOAD_PURGE;
+ int err;
+
+ err = mv88e6xxx_g1_vtu_op_wait(chip);
+ if (err)
+ return err;
+
+ err = mv88e6xxx_g1_vtu_vid_write(chip, entry);
+ if (err)
+ return err;
+
+ if (entry->valid) {
+ err = mv88e6185_g1_vtu_data_write(chip, entry);
+ if (err)
+ return err;
+
+ /* VTU DBNum[3:0] are located in VTU Operation 3:0
+ * VTU DBNum[5:4] are located in VTU Operation 9:8
+ */
+ op |= entry->fid & 0x000f;
+ op |= (entry->fid & 0x0030) << 4;
+ }
+
+ return mv88e6xxx_g1_vtu_op(chip, op);
+}
+
int mv88e6185_g1_vtu_loadpurge(struct mv88e6xxx_chip *chip,
struct mv88e6xxx_vtu_entry *entry)
{
@@ -521,7 +579,7 @@ static irqreturn_t mv88e6xxx_g1_vtu_prob_irq_thread_fn(int irq, void *dev_id)
int err;
u16 val;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
err = mv88e6xxx_g1_vtu_op(chip, MV88E6XXX_G1_VTU_OP_GET_CLR_VIOLATION);
if (err)
@@ -549,12 +607,12 @@ static irqreturn_t mv88e6xxx_g1_vtu_prob_irq_thread_fn(int irq, void *dev_id)
chip->ports[spid].vtu_miss_violation++;
}
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
return IRQ_HANDLED;
out:
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
dev_err(chip->dev, "VTU problem: error %d while handling interrupt\n",
err);
diff --git a/drivers/net/dsa/mv88e6xxx/global2.c b/drivers/net/dsa/mv88e6xxx/global2.c
index 1546171210a1..2305b94b3051 100644
--- a/drivers/net/dsa/mv88e6xxx/global2.c
+++ b/drivers/net/dsa/mv88e6xxx/global2.c
@@ -812,6 +812,32 @@ const struct mv88e6xxx_irq_ops mv88e6097_watchdog_ops = {
.irq_free = mv88e6097_watchdog_free,
};
+static void mv88e6250_watchdog_free(struct mv88e6xxx_chip *chip)
+{
+ u16 reg;
+
+ mv88e6xxx_g2_read(chip, MV88E6250_G2_WDOG_CTL, &reg);
+
+ reg &= ~(MV88E6250_G2_WDOG_CTL_EGRESS_ENABLE |
+ MV88E6250_G2_WDOG_CTL_QC_ENABLE);
+
+ mv88e6xxx_g2_write(chip, MV88E6250_G2_WDOG_CTL, reg);
+}
+
+static int mv88e6250_watchdog_setup(struct mv88e6xxx_chip *chip)
+{
+ return mv88e6xxx_g2_write(chip, MV88E6250_G2_WDOG_CTL,
+ MV88E6250_G2_WDOG_CTL_EGRESS_ENABLE |
+ MV88E6250_G2_WDOG_CTL_QC_ENABLE |
+ MV88E6250_G2_WDOG_CTL_SWRESET);
+}
+
+const struct mv88e6xxx_irq_ops mv88e6250_watchdog_ops = {
+ .irq_action = mv88e6097_watchdog_action,
+ .irq_setup = mv88e6250_watchdog_setup,
+ .irq_free = mv88e6250_watchdog_free,
+};
+
static int mv88e6390_watchdog_setup(struct mv88e6xxx_chip *chip)
{
return mv88e6xxx_g2_update(chip, MV88E6390_G2_WDOG_CTL,
@@ -867,20 +893,20 @@ static irqreturn_t mv88e6xxx_g2_watchdog_thread_fn(int irq, void *dev_id)
struct mv88e6xxx_chip *chip = dev_id;
irqreturn_t ret = IRQ_NONE;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
if (chip->info->ops->watchdog_ops->irq_action)
ret = chip->info->ops->watchdog_ops->irq_action(chip, irq);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
return ret;
}
static void mv88e6xxx_g2_watchdog_free(struct mv88e6xxx_chip *chip)
{
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
if (chip->info->ops->watchdog_ops->irq_free)
chip->info->ops->watchdog_ops->irq_free(chip);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
free_irq(chip->watchdog_irq, chip);
irq_dispose_mapping(chip->watchdog_irq);
@@ -902,10 +928,10 @@ static int mv88e6xxx_g2_watchdog_setup(struct mv88e6xxx_chip *chip)
if (err)
return err;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
if (chip->info->ops->watchdog_ops->irq_setup)
err = chip->info->ops->watchdog_ops->irq_setup(chip);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
return err;
}
@@ -960,9 +986,9 @@ static irqreturn_t mv88e6xxx_g2_irq_thread_fn(int irq, void *dev_id)
int err;
u16 reg;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
err = mv88e6xxx_g2_int_source(chip, &reg);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
if (err)
goto out;
@@ -981,7 +1007,7 @@ static void mv88e6xxx_g2_irq_bus_lock(struct irq_data *d)
{
struct mv88e6xxx_chip *chip = irq_data_get_irq_chip_data(d);
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
}
static void mv88e6xxx_g2_irq_bus_sync_unlock(struct irq_data *d)
@@ -993,7 +1019,7 @@ static void mv88e6xxx_g2_irq_bus_sync_unlock(struct irq_data *d)
if (err)
dev_err(chip->dev, "failed to mask interrupts\n");
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
}
static const struct irq_chip mv88e6xxx_g2_irq_chip = {
diff --git a/drivers/net/dsa/mv88e6xxx/global2.h b/drivers/net/dsa/mv88e6xxx/global2.h
index bfb2c6123f55..a664fc25f132 100644
--- a/drivers/net/dsa/mv88e6xxx/global2.h
+++ b/drivers/net/dsa/mv88e6xxx/global2.h
@@ -202,6 +202,18 @@
#define MV88E6XXX_G2_SCRATCH_MISC_DATA_MASK 0x00ff
/* Offset 0x1B: Watch Dog Control Register */
+#define MV88E6250_G2_WDOG_CTL 0x1b
+#define MV88E6250_G2_WDOG_CTL_QC_HISTORY 0x0100
+#define MV88E6250_G2_WDOG_CTL_QC_EVENT 0x0080
+#define MV88E6250_G2_WDOG_CTL_QC_ENABLE 0x0040
+#define MV88E6250_G2_WDOG_CTL_EGRESS_HISTORY 0x0020
+#define MV88E6250_G2_WDOG_CTL_EGRESS_EVENT 0x0010
+#define MV88E6250_G2_WDOG_CTL_EGRESS_ENABLE 0x0008
+#define MV88E6250_G2_WDOG_CTL_FORCE_IRQ 0x0004
+#define MV88E6250_G2_WDOG_CTL_HISTORY 0x0002
+#define MV88E6250_G2_WDOG_CTL_SWRESET 0x0001
+
+/* Offset 0x1B: Watch Dog Control Register */
#define MV88E6352_G2_WDOG_CTL 0x1b
#define MV88E6352_G2_WDOG_CTL_EGRESS_EVENT 0x0080
#define MV88E6352_G2_WDOG_CTL_RMU_TIMEOUT 0x0040
@@ -330,6 +342,7 @@ int mv88e6xxx_g2_device_mapping_write(struct mv88e6xxx_chip *chip, int target,
int port);
extern const struct mv88e6xxx_irq_ops mv88e6097_watchdog_ops;
+extern const struct mv88e6xxx_irq_ops mv88e6250_watchdog_ops;
extern const struct mv88e6xxx_irq_ops mv88e6390_watchdog_ops;
extern const struct mv88e6xxx_avb_ops mv88e6165_avb_ops;
@@ -480,6 +493,7 @@ static inline int mv88e6xxx_g2_pot_clear(struct mv88e6xxx_chip *chip)
}
static const struct mv88e6xxx_irq_ops mv88e6097_watchdog_ops = {};
+static const struct mv88e6xxx_irq_ops mv88e6250_watchdog_ops = {};
static const struct mv88e6xxx_irq_ops mv88e6390_watchdog_ops = {};
static const struct mv88e6xxx_avb_ops mv88e6165_avb_ops = {};
diff --git a/drivers/net/dsa/mv88e6xxx/hwtstamp.c b/drivers/net/dsa/mv88e6xxx/hwtstamp.c
index 7f95a636561d..a4c488b12e8f 100644
--- a/drivers/net/dsa/mv88e6xxx/hwtstamp.c
+++ b/drivers/net/dsa/mv88e6xxx/hwtstamp.c
@@ -147,7 +147,7 @@ static int mv88e6xxx_set_hwtstamp_config(struct mv88e6xxx_chip *chip, int port,
return -ERANGE;
}
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
if (tstamp_enable) {
chip->enable_count += 1;
if (chip->enable_count == 1 && ptp_ops->global_enable)
@@ -161,7 +161,7 @@ static int mv88e6xxx_set_hwtstamp_config(struct mv88e6xxx_chip *chip, int port,
if (chip->enable_count == 0 && ptp_ops->global_disable)
ptp_ops->global_disable(chip);
}
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
/* Once hardware has been configured, enable timestamp checks
* in the RX/TX paths.
@@ -301,10 +301,10 @@ static void mv88e6xxx_get_rxts(struct mv88e6xxx_chip *chip,
skb_queue_splice_tail_init(rxq, &received);
spin_unlock_irqrestore(&rxq->lock, flags);
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
err = mv88e6xxx_port_ptp_read(chip, ps->port_id,
reg, buf, ARRAY_SIZE(buf));
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
if (err)
pr_err("failed to get the receive time stamp\n");
@@ -314,9 +314,9 @@ static void mv88e6xxx_get_rxts(struct mv88e6xxx_chip *chip,
seq_id = buf[3];
if (status & MV88E6XXX_PTP_TS_VALID) {
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
err = mv88e6xxx_port_ptp_write(chip, ps->port_id, reg, 0);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
if (err)
pr_err("failed to clear the receive status\n");
}
@@ -327,9 +327,9 @@ static void mv88e6xxx_get_rxts(struct mv88e6xxx_chip *chip,
if (mv88e6xxx_ts_valid(status) && seq_match(skb, seq_id)) {
ns = timehi << 16 | timelo;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
ns = timecounter_cyc2time(&chip->tstamp_tc, ns);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
shwt = skb_hwtstamps(skb);
memset(shwt, 0, sizeof(*shwt));
shwt->hwtstamp = ns_to_ktime(ns);
@@ -405,12 +405,12 @@ static int mv88e6xxx_txtstamp_work(struct mv88e6xxx_chip *chip,
if (!ps->tx_skb)
return 0;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
err = mv88e6xxx_port_ptp_read(chip, ps->port_id,
ptp_ops->dep_sts_reg,
departure_block,
ARRAY_SIZE(departure_block));
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
if (err)
goto free_and_clear_skb;
@@ -430,9 +430,9 @@ static int mv88e6xxx_txtstamp_work(struct mv88e6xxx_chip *chip,
}
/* We have the timestamp; go ahead and clear valid now */
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
mv88e6xxx_port_ptp_write(chip, ps->port_id, ptp_ops->dep_sts_reg, 0);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
status = departure_block[0] & MV88E6XXX_PTP_TS_STATUS_MASK;
if (status != MV88E6XXX_PTP_TS_STATUS_NORMAL) {
@@ -447,9 +447,9 @@ static int mv88e6xxx_txtstamp_work(struct mv88e6xxx_chip *chip,
memset(&shhwtstamps, 0, sizeof(shhwtstamps));
time_raw = ((u32)departure_block[2] << 16) | departure_block[1];
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
ns = timecounter_cyc2time(&chip->tstamp_tc, time_raw);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
shhwtstamps.hwtstamp = ns_to_ktime(ns);
dev_dbg(chip->dev,
diff --git a/drivers/net/dsa/mv88e6xxx/phy.c b/drivers/net/dsa/mv88e6xxx/phy.c
index 2952db73f55c..252b5b3a3efe 100644
--- a/drivers/net/dsa/mv88e6xxx/phy.c
+++ b/drivers/net/dsa/mv88e6xxx/phy.c
@@ -137,7 +137,7 @@ static void mv88e6xxx_phy_ppu_reenable_work(struct work_struct *ugly)
chip = container_of(ugly, struct mv88e6xxx_chip, ppu_work);
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
if (mutex_trylock(&chip->ppu_mutex)) {
if (mv88e6xxx_phy_ppu_enable(chip) == 0)
@@ -145,7 +145,7 @@ static void mv88e6xxx_phy_ppu_reenable_work(struct work_struct *ugly)
mutex_unlock(&chip->ppu_mutex);
}
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
}
static void mv88e6xxx_phy_ppu_reenable_timer(struct timer_list *t)
diff --git a/drivers/net/dsa/mv88e6xxx/port.c b/drivers/net/dsa/mv88e6xxx/port.c
index 9a2b4b385a2c..04309ef0a1cc 100644
--- a/drivers/net/dsa/mv88e6xxx/port.c
+++ b/drivers/net/dsa/mv88e6xxx/port.c
@@ -290,6 +290,18 @@ int mv88e6185_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed)
return mv88e6xxx_port_set_speed(chip, port, speed, false, false);
}
+/* Support 10, 100 Mbps (e.g. 88E6250 family) */
+int mv88e6250_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed)
+{
+ if (speed == SPEED_MAX)
+ speed = 100;
+
+ if (speed > 100)
+ return -EOPNOTSUPP;
+
+ return mv88e6xxx_port_set_speed(chip, port, speed, false, false);
+}
+
/* Support 10, 100, 200, 1000, 2500 Mbps (e.g. 88E6341) */
int mv88e6341_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed)
{
@@ -517,6 +529,71 @@ int mv88e6352_port_get_cmode(struct mv88e6xxx_chip *chip, int port, u8 *cmode)
return 0;
}
+int mv88e6250_port_link_state(struct mv88e6xxx_chip *chip, int port,
+ struct phylink_link_state *state)
+{
+ int err;
+ u16 reg;
+
+ err = mv88e6xxx_port_read(chip, port, MV88E6XXX_PORT_STS, &reg);
+ if (err)
+ return err;
+
+ if (port < 5) {
+ switch (reg & MV88E6250_PORT_STS_PORTMODE_MASK) {
+ case MV88E6250_PORT_STS_PORTMODE_PHY_10_HALF:
+ state->speed = SPEED_10;
+ state->duplex = DUPLEX_HALF;
+ break;
+ case MV88E6250_PORT_STS_PORTMODE_PHY_100_HALF:
+ state->speed = SPEED_100;
+ state->duplex = DUPLEX_HALF;
+ break;
+ case MV88E6250_PORT_STS_PORTMODE_PHY_10_FULL:
+ state->speed = SPEED_10;
+ state->duplex = DUPLEX_FULL;
+ break;
+ case MV88E6250_PORT_STS_PORTMODE_PHY_100_FULL:
+ state->speed = SPEED_100;
+ state->duplex = DUPLEX_FULL;
+ break;
+ default:
+ state->speed = SPEED_UNKNOWN;
+ state->duplex = DUPLEX_UNKNOWN;
+ break;
+ }
+ } else {
+ switch (reg & MV88E6250_PORT_STS_PORTMODE_MASK) {
+ case MV88E6250_PORT_STS_PORTMODE_MII_10_HALF:
+ state->speed = SPEED_10;
+ state->duplex = DUPLEX_HALF;
+ break;
+ case MV88E6250_PORT_STS_PORTMODE_MII_100_HALF:
+ state->speed = SPEED_100;
+ state->duplex = DUPLEX_HALF;
+ break;
+ case MV88E6250_PORT_STS_PORTMODE_MII_10_FULL:
+ state->speed = SPEED_10;
+ state->duplex = DUPLEX_FULL;
+ break;
+ case MV88E6250_PORT_STS_PORTMODE_MII_100_FULL:
+ state->speed = SPEED_100;
+ state->duplex = DUPLEX_FULL;
+ break;
+ default:
+ state->speed = SPEED_UNKNOWN;
+ state->duplex = DUPLEX_UNKNOWN;
+ break;
+ }
+ }
+
+ state->link = !!(reg & MV88E6250_PORT_STS_LINK);
+ state->an_enabled = 1;
+ state->an_complete = state->link;
+
+ return 0;
+}
+
int mv88e6352_port_link_state(struct mv88e6xxx_chip *chip, int port,
struct phylink_link_state *state)
{
diff --git a/drivers/net/dsa/mv88e6xxx/port.h b/drivers/net/dsa/mv88e6xxx/port.h
index f2fba3f73199..8d5a6cd6fb19 100644
--- a/drivers/net/dsa/mv88e6xxx/port.h
+++ b/drivers/net/dsa/mv88e6xxx/port.h
@@ -19,6 +19,16 @@
#define MV88E6XXX_PORT_STS_MY_PAUSE 0x4000
#define MV88E6XXX_PORT_STS_HD_FLOW 0x2000
#define MV88E6XXX_PORT_STS_PHY_DETECT 0x1000
+#define MV88E6250_PORT_STS_LINK 0x1000
+#define MV88E6250_PORT_STS_PORTMODE_MASK 0x0f00
+#define MV88E6250_PORT_STS_PORTMODE_PHY_10_HALF 0x0800
+#define MV88E6250_PORT_STS_PORTMODE_PHY_100_HALF 0x0900
+#define MV88E6250_PORT_STS_PORTMODE_PHY_10_FULL 0x0a00
+#define MV88E6250_PORT_STS_PORTMODE_PHY_100_FULL 0x0b00
+#define MV88E6250_PORT_STS_PORTMODE_MII_10_HALF 0x0c00
+#define MV88E6250_PORT_STS_PORTMODE_MII_100_HALF 0x0d00
+#define MV88E6250_PORT_STS_PORTMODE_MII_10_FULL 0x0e00
+#define MV88E6250_PORT_STS_PORTMODE_MII_100_FULL 0x0f00
#define MV88E6XXX_PORT_STS_LINK 0x0800
#define MV88E6XXX_PORT_STS_DUPLEX 0x0400
#define MV88E6XXX_PORT_STS_SPEED_MASK 0x0300
@@ -108,6 +118,7 @@
#define MV88E6XXX_PORT_SWITCH_ID_PROD_6191 0x1910
#define MV88E6XXX_PORT_SWITCH_ID_PROD_6185 0x1a70
#define MV88E6XXX_PORT_SWITCH_ID_PROD_6240 0x2400
+#define MV88E6XXX_PORT_SWITCH_ID_PROD_6250 0x2500
#define MV88E6XXX_PORT_SWITCH_ID_PROD_6290 0x2900
#define MV88E6XXX_PORT_SWITCH_ID_PROD_6321 0x3100
#define MV88E6XXX_PORT_SWITCH_ID_PROD_6141 0x3400
@@ -275,6 +286,7 @@ int mv88e6xxx_port_set_duplex(struct mv88e6xxx_chip *chip, int port, int dup);
int mv88e6065_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed);
int mv88e6185_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed);
+int mv88e6250_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed);
int mv88e6341_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed);
int mv88e6352_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed);
int mv88e6390_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed);
@@ -328,6 +340,8 @@ int mv88e6185_port_get_cmode(struct mv88e6xxx_chip *chip, int port, u8 *cmode);
int mv88e6352_port_get_cmode(struct mv88e6xxx_chip *chip, int port, u8 *cmode);
int mv88e6185_port_link_state(struct mv88e6xxx_chip *chip, int port,
struct phylink_link_state *state);
+int mv88e6250_port_link_state(struct mv88e6xxx_chip *chip, int port,
+ struct phylink_link_state *state);
int mv88e6352_port_link_state(struct mv88e6xxx_chip *chip, int port,
struct phylink_link_state *state);
int mv88e6xxx_port_set_map_da(struct mv88e6xxx_chip *chip, int port);
diff --git a/drivers/net/dsa/mv88e6xxx/ptp.c b/drivers/net/dsa/mv88e6xxx/ptp.c
index 7b40c5886b75..768d256f7c9f 100644
--- a/drivers/net/dsa/mv88e6xxx/ptp.c
+++ b/drivers/net/dsa/mv88e6xxx/ptp.c
@@ -138,10 +138,10 @@ static void mv88e6352_tai_event_work(struct work_struct *ugly)
u32 raw_ts;
int err;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
err = mv88e6xxx_tai_read(chip, MV88E6XXX_TAI_EVENT_STATUS,
status, ARRAY_SIZE(status));
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
if (err) {
dev_err(chip->dev, "failed to read TAI status register\n");
@@ -158,18 +158,18 @@ static void mv88e6352_tai_event_work(struct work_struct *ugly)
/* Clear the valid bit so the next timestamp can come in */
status[0] &= ~MV88E6XXX_TAI_EVENT_STATUS_VALID;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
err = mv88e6xxx_tai_write(chip, MV88E6XXX_TAI_EVENT_STATUS, status[0]);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
/* This is an external timestamp */
ev.type = PTP_CLOCK_EXTTS;
/* We only have one timestamping channel. */
ev.index = 0;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
ev.timestamp = timecounter_cyc2time(&chip->tstamp_tc, raw_ts);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
ptp_clock_event(chip->ptp_clock, &ev);
out:
@@ -192,12 +192,12 @@ static int mv88e6xxx_ptp_adjfine(struct ptp_clock_info *ptp, long scaled_ppm)
adj *= scaled_ppm;
diff = div_u64(adj, CC_MULT_DEM);
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
timecounter_read(&chip->tstamp_tc);
chip->tstamp_cc.mult = neg_adj ? mult - diff : mult + diff;
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
return 0;
}
@@ -206,9 +206,9 @@ static int mv88e6xxx_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta)
{
struct mv88e6xxx_chip *chip = ptp_to_chip(ptp);
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
timecounter_adjtime(&chip->tstamp_tc, delta);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
return 0;
}
@@ -219,9 +219,9 @@ static int mv88e6xxx_ptp_gettime(struct ptp_clock_info *ptp,
struct mv88e6xxx_chip *chip = ptp_to_chip(ptp);
u64 ns;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
ns = timecounter_read(&chip->tstamp_tc);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
*ts = ns_to_timespec64(ns);
@@ -236,9 +236,9 @@ static int mv88e6xxx_ptp_settime(struct ptp_clock_info *ptp,
ns = timespec64_to_ns(ts);
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
timecounter_init(&chip->tstamp_tc, &chip->tstamp_cc, ns);
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
return 0;
}
@@ -256,7 +256,7 @@ static int mv88e6352_ptp_enable_extts(struct mv88e6xxx_chip *chip,
if (pin < 0)
return -EBUSY;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
if (on) {
func = MV88E6352_G2_SCRATCH_GPIO_PCTL_EVREQ;
@@ -278,7 +278,7 @@ static int mv88e6352_ptp_enable_extts(struct mv88e6xxx_chip *chip,
}
out:
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
return err;
}
diff --git a/drivers/net/dsa/mv88e6xxx/serdes.c b/drivers/net/dsa/mv88e6xxx/serdes.c
index d986c5d55bf1..20c526c2a9ee 100644
--- a/drivers/net/dsa/mv88e6xxx/serdes.c
+++ b/drivers/net/dsa/mv88e6xxx/serdes.c
@@ -208,7 +208,7 @@ static irqreturn_t mv88e6352_serdes_thread_fn(int irq, void *dev_id)
u16 status;
int err;
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
err = mv88e6352_serdes_read(chip, MV88E6352_SERDES_INT_STATUS, &status);
if (err)
@@ -219,7 +219,7 @@ static irqreturn_t mv88e6352_serdes_thread_fn(int irq, void *dev_id)
mv88e6352_serdes_irq_link(chip, port->port);
}
out:
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
return ret;
}
@@ -253,12 +253,12 @@ int mv88e6352_serdes_irq_setup(struct mv88e6xxx_chip *chip, int port)
/* Requesting the IRQ will trigger irq callbacks. So we cannot
* hold the reg_lock.
*/
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
err = request_threaded_irq(chip->ports[port].serdes_irq, NULL,
mv88e6352_serdes_thread_fn,
IRQF_ONESHOT, "mv88e6xxx-serdes",
&chip->ports[port]);
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
if (err) {
dev_err(chip->dev, "Unable to request SERDES interrupt: %d\n",
@@ -279,9 +279,9 @@ void mv88e6352_serdes_irq_free(struct mv88e6xxx_chip *chip, int port)
/* Freeing the IRQ will trigger irq callbacks. So we cannot
* hold the reg_lock.
*/
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
free_irq(chip->ports[port].serdes_irq, &chip->ports[port]);
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
chip->ports[port].serdes_irq = 0;
}
@@ -621,7 +621,7 @@ static irqreturn_t mv88e6390_serdes_thread_fn(int irq, void *dev_id)
lane = mv88e6390x_serdes_get_lane(chip, port->port);
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
switch (cmode) {
case MV88E6XXX_PORT_STS_CMODE_SGMII:
@@ -637,7 +637,7 @@ static irqreturn_t mv88e6390_serdes_thread_fn(int irq, void *dev_id)
}
}
out:
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
return ret;
}
@@ -666,12 +666,12 @@ int mv88e6390x_serdes_irq_setup(struct mv88e6xxx_chip *chip, int port)
/* Requesting the IRQ will trigger irq callbacks. So we cannot
* hold the reg_lock.
*/
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
err = request_threaded_irq(chip->ports[port].serdes_irq, NULL,
mv88e6390_serdes_thread_fn,
IRQF_ONESHOT, "mv88e6xxx-serdes",
&chip->ports[port]);
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
if (err) {
dev_err(chip->dev, "Unable to request SERDES interrupt: %d\n",
@@ -705,9 +705,9 @@ void mv88e6390x_serdes_irq_free(struct mv88e6xxx_chip *chip, int port)
/* Freeing the IRQ will trigger irq callbacks. So we cannot
* hold the reg_lock.
*/
- mutex_unlock(&chip->reg_lock);
+ mv88e6xxx_reg_unlock(chip);
free_irq(chip->ports[port].serdes_irq, &chip->ports[port]);
- mutex_lock(&chip->reg_lock);
+ mv88e6xxx_reg_lock(chip);
chip->ports[port].serdes_irq = 0;
}
diff --git a/drivers/net/dsa/mv88e6xxx/smi.c b/drivers/net/dsa/mv88e6xxx/smi.c
index 92e9324f1fb9..5fc78a063843 100644
--- a/drivers/net/dsa/mv88e6xxx/smi.c
+++ b/drivers/net/dsa/mv88e6xxx/smi.c
@@ -20,6 +20,10 @@
* When ADDR is non-zero, the chip uses Multi-chip Addressing Mode, allowing
* multiple devices to share the SMI interface. In this mode it responds to only
* 2 registers, used to indirectly access the internal SMI devices.
+ *
+ * Some chips use a different scheme: Only the ADDR4 pin is used for
+ * configuration, and the device responds to 16 of the 32 SMI
+ * addresses, allowing two to coexist on the same SMI interface.
*/
static int mv88e6xxx_smi_direct_read(struct mv88e6xxx_chip *chip,
@@ -72,6 +76,23 @@ static const struct mv88e6xxx_bus_ops mv88e6xxx_smi_direct_ops = {
.write = mv88e6xxx_smi_direct_write,
};
+static int mv88e6xxx_smi_dual_direct_read(struct mv88e6xxx_chip *chip,
+ int dev, int reg, u16 *data)
+{
+ return mv88e6xxx_smi_direct_read(chip, chip->sw_addr + dev, reg, data);
+}
+
+static int mv88e6xxx_smi_dual_direct_write(struct mv88e6xxx_chip *chip,
+ int dev, int reg, u16 data)
+{
+ return mv88e6xxx_smi_direct_write(chip, chip->sw_addr + dev, reg, data);
+}
+
+static const struct mv88e6xxx_bus_ops mv88e6xxx_smi_dual_direct_ops = {
+ .read = mv88e6xxx_smi_dual_direct_read,
+ .write = mv88e6xxx_smi_dual_direct_write,
+};
+
/* Offset 0x00: SMI Command Register
* Offset 0x01: SMI Data Register
*/
@@ -140,7 +161,9 @@ static const struct mv88e6xxx_bus_ops mv88e6xxx_smi_indirect_ops = {
int mv88e6xxx_smi_init(struct mv88e6xxx_chip *chip,
struct mii_bus *bus, int sw_addr)
{
- if (sw_addr == 0)
+ if (chip->info->dual_chip)
+ chip->smi_ops = &mv88e6xxx_smi_dual_direct_ops;
+ else if (sw_addr == 0)
chip->smi_ops = &mv88e6xxx_smi_direct_ops;
else if (chip->info->multi_chip)
chip->smi_ops = &mv88e6xxx_smi_indirect_ops;
diff --git a/drivers/net/dsa/qca8k.c b/drivers/net/dsa/qca8k.c
index c4fa400efdcc..27709f866c23 100644
--- a/drivers/net/dsa/qca8k.c
+++ b/drivers/net/dsa/qca8k.c
@@ -14,6 +14,7 @@
#include <linux/of_platform.h>
#include <linux/if_bridge.h>
#include <linux/mdio.h>
+#include <linux/gpio.h>
#include <linux/etherdevice.h>
#include "qca8k.h"
@@ -1046,6 +1047,20 @@ qca8k_sw_probe(struct mdio_device *mdiodev)
priv->bus = mdiodev->bus;
priv->dev = &mdiodev->dev;
+ priv->reset_gpio = devm_gpiod_get_optional(priv->dev, "reset",
+ GPIOD_ASIS);
+ if (IS_ERR(priv->reset_gpio))
+ return PTR_ERR(priv->reset_gpio);
+
+ if (priv->reset_gpio) {
+ gpiod_set_value_cansleep(priv->reset_gpio, 1);
+ /* The active low duration must be greater than 10 ms
+ * and checkpatch.pl wants 20 ms.
+ */
+ msleep(20);
+ gpiod_set_value_cansleep(priv->reset_gpio, 0);
+ }
+
/* read the switch's ID register */
id = qca8k_read(priv, QCA8K_REG_MASK_CTRL);
id >>= QCA8K_MASK_CTRL_ID_S;
diff --git a/drivers/net/dsa/qca8k.h b/drivers/net/dsa/qca8k.h
index 91557433ce2f..42d6ea24eb14 100644
--- a/drivers/net/dsa/qca8k.h
+++ b/drivers/net/dsa/qca8k.h
@@ -10,6 +10,7 @@
#include <linux/delay.h>
#include <linux/regmap.h>
+#include <linux/gpio.h>
#define QCA8K_NUM_PORTS 7
@@ -174,6 +175,7 @@ struct qca8k_priv {
struct mutex reg_mutex;
struct device *dev;
struct dsa_switch_ops ops;
+ struct gpio_desc *reset_gpio;
};
struct qca8k_mib_desc {
diff --git a/drivers/net/dsa/sja1105/Kconfig b/drivers/net/dsa/sja1105/Kconfig
index 1144fc5f61a8..770134a66e48 100644
--- a/drivers/net/dsa/sja1105/Kconfig
+++ b/drivers/net/dsa/sja1105/Kconfig
@@ -9,10 +9,17 @@ tristate "NXP SJA1105 Ethernet switch family support"
This is the driver for the NXP SJA1105 automotive Ethernet switch
family. These are 5-port devices and are managed over an SPI
interface. Probing is handled based on OF bindings and so is the
- linkage to phylib. The driver supports the following revisions:
+ linkage to PHYLINK. The driver supports the following revisions:
- SJA1105E (Gen. 1, No TT-Ethernet)
- SJA1105T (Gen. 1, TT-Ethernet)
- SJA1105P (Gen. 2, No SGMII, No TT-Ethernet)
- SJA1105Q (Gen. 2, No SGMII, TT-Ethernet)
- SJA1105R (Gen. 2, SGMII, No TT-Ethernet)
- SJA1105S (Gen. 2, SGMII, TT-Ethernet)
+
+config NET_DSA_SJA1105_PTP
+ bool "Support for the PTP clock on the NXP SJA1105 Ethernet switch"
+ depends on NET_DSA_SJA1105
+ help
+ This enables support for timestamping and PTP clock manipulations in
+ the SJA1105 DSA driver.
diff --git a/drivers/net/dsa/sja1105/Makefile b/drivers/net/dsa/sja1105/Makefile
index 941848de8b46..4483113e6259 100644
--- a/drivers/net/dsa/sja1105/Makefile
+++ b/drivers/net/dsa/sja1105/Makefile
@@ -8,3 +8,7 @@ sja1105-objs := \
sja1105_clocking.o \
sja1105_static_config.o \
sja1105_dynamic_config.o \
+
+ifdef CONFIG_NET_DSA_SJA1105_PTP
+sja1105-objs += sja1105_ptp.o
+endif
diff --git a/drivers/net/dsa/sja1105/sja1105.h b/drivers/net/dsa/sja1105/sja1105.h
index b043bfc408f2..78094db32622 100644
--- a/drivers/net/dsa/sja1105/sja1105.h
+++ b/drivers/net/dsa/sja1105/sja1105.h
@@ -5,6 +5,8 @@
#ifndef _SJA1105_H
#define _SJA1105_H
+#include <linux/ptp_clock_kernel.h>
+#include <linux/timecounter.h>
#include <linux/dsa/sja1105.h>
#include <net/dsa.h>
#include <linux/mutex.h>
@@ -27,9 +29,14 @@ struct sja1105_regs {
u64 rgu;
u64 config;
u64 rmii_pll1;
+ u64 ptp_control;
+ u64 ptpclk;
+ u64 ptpclkrate;
+ u64 ptptsclk;
+ u64 ptpegr_ts[SJA1105_NUM_PORTS];
u64 pad_mii_tx[SJA1105_NUM_PORTS];
+ u64 pad_mii_id[SJA1105_NUM_PORTS];
u64 cgu_idiv[SJA1105_NUM_PORTS];
- u64 rgmii_pad_mii_tx[SJA1105_NUM_PORTS];
u64 mii_tx_clk[SJA1105_NUM_PORTS];
u64 mii_rx_clk[SJA1105_NUM_PORTS];
u64 mii_ext_tx_clk[SJA1105_NUM_PORTS];
@@ -50,11 +57,26 @@ struct sja1105_info {
* switch core and device_id)
*/
u64 part_no;
+ /* E/T and P/Q/R/S have partial timestamps of different sizes.
+ * They must be reconstructed on both families anyway to get the full
+ * 64-bit values back.
+ */
+ int ptp_ts_bits;
+	/* The SPI commands used to retrieve the egress timestamps also
+	 * differ in size.
+ */
+ int ptpegr_ts_bytes;
const struct sja1105_dynamic_table_ops *dyn_ops;
const struct sja1105_table_ops *static_ops;
const struct sja1105_regs *regs;
+ int (*ptp_cmd)(const void *ctx, const void *data);
int (*reset_cmd)(const void *ctx, const void *data);
int (*setup_rgmii_delay)(const void *ctx, int port);
+ /* Prototypes from include/net/dsa.h */
+ int (*fdb_add_cmd)(struct dsa_switch *ds, int port,
+ const unsigned char *addr, u16 vid);
+ int (*fdb_del_cmd)(struct dsa_switch *ds, int port,
+ const unsigned char *addr, u16 vid);
const char *name;
};
@@ -67,13 +89,25 @@ struct sja1105_private {
struct spi_device *spidev;
struct dsa_switch *ds;
struct sja1105_port ports[SJA1105_NUM_PORTS];
+ struct ptp_clock_info ptp_caps;
+ struct ptp_clock *clock;
+ /* The cycle counter translates the PTP timestamps (based on
+ * a free-running counter) into a software time domain.
+ */
+ struct cyclecounter tstamp_cc;
+ struct timecounter tstamp_tc;
+ struct delayed_work refresh_work;
+ /* Serializes all operations on the cycle counter */
+ struct mutex ptp_lock;
/* Serializes transmission of management frames so that
* the switch doesn't confuse them with one another.
*/
struct mutex mgmt_lock;
+ struct sja1105_tagger_data tagger_data;
};
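The cyclecounter/timecounter pair added above converts raw ticks of the free-running PTP counter into nanoseconds with a fixed-point multiply and shift. A minimal standalone sketch of just that conversion (the 125 MHz rate and the mult/shift values are illustrative assumptions; the wraparound and accumulation handling done by the kernel timecounter is omitted):

#include <stdint.h>
#include <stdio.h>

/* Assumed example rate: a 125 MHz free-running counter, i.e. 8 ns per tick.
 * mult/shift are chosen so that ns = (cycles * mult) >> shift == cycles * 8.
 */
#define EXAMPLE_SHIFT	28
#define EXAMPLE_MULT	(8ull << EXAMPLE_SHIFT)

static uint64_t example_cycles_to_ns(uint64_t cycles)
{
	return (cycles * EXAMPLE_MULT) >> EXAMPLE_SHIFT;
}

int main(void)
{
	/* 1000000 ticks at 8 ns each -> prints 8000000 */
	printf("%llu\n", (unsigned long long)example_cycles_to_ns(1000000));
	return 0;
}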
#include "sja1105_dynamic_config.h"
+#include "sja1105_ptp.h"
struct sja1105_spi_message {
u64 access;
@@ -97,6 +131,8 @@ int sja1105_spi_send_long_packed_buf(const struct sja1105_private *priv,
sja1105_spi_rw_mode_t rw, u64 base_addr,
void *packed_buf, u64 buf_len);
int sja1105_static_config_upload(struct sja1105_private *priv);
+int sja1105_inhibit_tx(const struct sja1105_private *priv,
+ unsigned long port_bitmap, bool tx_inhibited);
extern struct sja1105_info sja1105e_info;
extern struct sja1105_info sja1105t_info;
@@ -125,6 +161,7 @@ typedef enum {
SJA1105_SPEED_AUTO = 0,
} sja1105_speed_t;
+int sja1105pqrs_setup_rgmii_delay(const void *ctx, int port);
int sja1105_clocking_setup_port(struct sja1105_private *priv, int port);
int sja1105_clocking_setup(struct sja1105_private *priv);
@@ -142,7 +179,20 @@ int sja1105_dynamic_config_write(struct sja1105_private *priv,
enum sja1105_blk_idx blk_idx,
int index, void *entry, bool keep);
-u8 sja1105_fdb_hash(struct sja1105_private *priv, const u8 *addr, u16 vid);
+enum sja1105_iotag {
+ SJA1105_C_TAG = 0, /* Inner VLAN header */
+ SJA1105_S_TAG = 1, /* Outer VLAN header */
+};
+
+u8 sja1105et_fdb_hash(struct sja1105_private *priv, const u8 *addr, u16 vid);
+int sja1105et_fdb_add(struct dsa_switch *ds, int port,
+ const unsigned char *addr, u16 vid);
+int sja1105et_fdb_del(struct dsa_switch *ds, int port,
+ const unsigned char *addr, u16 vid);
+int sja1105pqrs_fdb_add(struct dsa_switch *ds, int port,
+ const unsigned char *addr, u16 vid);
+int sja1105pqrs_fdb_del(struct dsa_switch *ds, int port,
+ const unsigned char *addr, u16 vid);
/* Common implementations for the static and dynamic configs */
size_t sja1105_l2_forwarding_entry_packing(void *buf, void *entry_ptr,
diff --git a/drivers/net/dsa/sja1105/sja1105_clocking.c b/drivers/net/dsa/sja1105/sja1105_clocking.c
index 94bfe0ee50a8..608126a15d72 100644
--- a/drivers/net/dsa/sja1105/sja1105_clocking.c
+++ b/drivers/net/dsa/sja1105/sja1105_clocking.c
@@ -19,6 +19,17 @@ struct sja1105_cfg_pad_mii_tx {
u64 clk_ipud;
};
+struct sja1105_cfg_pad_mii_id {
+ u64 rxc_stable_ovr;
+ u64 rxc_delay;
+ u64 rxc_bypass;
+ u64 rxc_pd;
+ u64 txc_stable_ovr;
+ u64 txc_delay;
+ u64 txc_bypass;
+ u64 txc_pd;
+};
+
/* UM10944 Table 82.
* IDIV_0_C to IDIV_4_C control registers
* (addr. 10000Bh to 10000Fh)
@@ -373,11 +384,88 @@ static int sja1105_rgmii_cfg_pad_tx_config(struct sja1105_private *priv,
sja1105_cfg_pad_mii_tx_packing(packed_buf, &pad_mii_tx, PACK);
return sja1105_spi_send_packed_buf(priv, SPI_WRITE,
- regs->rgmii_pad_mii_tx[port],
+ regs->pad_mii_tx[port],
packed_buf, SJA1105_SIZE_CGU_CMD);
}
-static int sja1105_rgmii_clocking_setup(struct sja1105_private *priv, int port)
+static void
+sja1105_cfg_pad_mii_id_packing(void *buf, struct sja1105_cfg_pad_mii_id *cmd,
+ enum packing_op op)
+{
+ const int size = SJA1105_SIZE_CGU_CMD;
+
+ sja1105_packing(buf, &cmd->rxc_stable_ovr, 15, 15, size, op);
+ sja1105_packing(buf, &cmd->rxc_delay, 14, 10, size, op);
+ sja1105_packing(buf, &cmd->rxc_bypass, 9, 9, size, op);
+ sja1105_packing(buf, &cmd->rxc_pd, 8, 8, size, op);
+ sja1105_packing(buf, &cmd->txc_stable_ovr, 7, 7, size, op);
+ sja1105_packing(buf, &cmd->txc_delay, 6, 2, size, op);
+ sja1105_packing(buf, &cmd->txc_bypass, 1, 1, size, op);
+ sja1105_packing(buf, &cmd->txc_pd, 0, 0, size, op);
+}
+
+/* Valid range is 73.8 to 101.7 degrees, in steps of 0.9 */
+static inline u64 sja1105_rgmii_delay(u64 phase)
+{
+ /* UM11040.pdf: The delay in degree phase is 73.8 + delay_tune * 0.9.
+ * To avoid floating point operations we'll multiply by 10
+ * and get 1 decimal point precision.
+ */
+ phase *= 10;
+ return (phase - 738) / 9;
+}
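As a worked example of the formula above: the 90 degree delay requested below becomes (90 * 10 - 738) / 9 = 162 / 9 = 18, i.e. a delay_tune code of 18, which maps back to 73.8 + 18 * 0.9 = 90.0 degrees.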
+
+/* The RGMII delay setup procedure is 2-step and gets called upon each
+ * .phylink_mac_config. Both steps are deliberate.
+ * The reason is that the RX Tunable Delay Line of the SJA1105 MAC has issues
+ * with recovering from a frequency change of the link partner's RGMII clock.
+ * The easiest way to recover from this is to temporarily power down the TDL,
+ * as it will re-lock at the new frequency afterwards.
+ */
+int sja1105pqrs_setup_rgmii_delay(const void *ctx, int port)
+{
+ const struct sja1105_private *priv = ctx;
+ const struct sja1105_regs *regs = priv->info->regs;
+ struct sja1105_cfg_pad_mii_id pad_mii_id = {0};
+ u8 packed_buf[SJA1105_SIZE_CGU_CMD] = {0};
+ int rc;
+
+ if (priv->rgmii_rx_delay[port])
+ pad_mii_id.rxc_delay = sja1105_rgmii_delay(90);
+ if (priv->rgmii_tx_delay[port])
+ pad_mii_id.txc_delay = sja1105_rgmii_delay(90);
+
+ /* Stage 1: Turn the RGMII delay lines off. */
+ pad_mii_id.rxc_bypass = 1;
+ pad_mii_id.rxc_pd = 1;
+ pad_mii_id.txc_bypass = 1;
+ pad_mii_id.txc_pd = 1;
+ sja1105_cfg_pad_mii_id_packing(packed_buf, &pad_mii_id, PACK);
+
+ rc = sja1105_spi_send_packed_buf(priv, SPI_WRITE,
+ regs->pad_mii_id[port],
+ packed_buf, SJA1105_SIZE_CGU_CMD);
+ if (rc < 0)
+ return rc;
+
+ /* Stage 2: Turn the RGMII delay lines on. */
+ if (priv->rgmii_rx_delay[port]) {
+ pad_mii_id.rxc_bypass = 0;
+ pad_mii_id.rxc_pd = 0;
+ }
+ if (priv->rgmii_tx_delay[port]) {
+ pad_mii_id.txc_bypass = 0;
+ pad_mii_id.txc_pd = 0;
+ }
+ sja1105_cfg_pad_mii_id_packing(packed_buf, &pad_mii_id, PACK);
+
+ return sja1105_spi_send_packed_buf(priv, SPI_WRITE,
+ regs->pad_mii_id[port],
+ packed_buf, SJA1105_SIZE_CGU_CMD);
+}
+
+static int sja1105_rgmii_clocking_setup(struct sja1105_private *priv, int port,
+ sja1105_mii_role_t role)
{
struct device *dev = priv->ds->dev;
struct sja1105_mac_config_entry *mac;
@@ -429,6 +517,12 @@ static int sja1105_rgmii_clocking_setup(struct sja1105_private *priv, int port)
}
if (!priv->info->setup_rgmii_delay)
return 0;
+ /* The role has no hardware effect for RGMII. However we use it as
+ * a proxy for this interface being a MAC-to-MAC connection, with
+ * the RGMII internal delays needing to be applied by us.
+ */
+ if (role == XMII_MAC)
+ return 0;
return priv->info->setup_rgmii_delay(priv, port);
}
@@ -575,7 +669,7 @@ int sja1105_clocking_setup_port(struct sja1105_private *priv, int port)
rc = sja1105_rmii_clocking_setup(priv, port, role);
break;
case XMII_MODE_RGMII:
- rc = sja1105_rgmii_clocking_setup(priv, port);
+ rc = sja1105_rgmii_clocking_setup(priv, port, role);
break;
default:
dev_err(dev, "Invalid interface mode specified: %d\n",
diff --git a/drivers/net/dsa/sja1105/sja1105_dynamic_config.c b/drivers/net/dsa/sja1105/sja1105_dynamic_config.c
index e73ab28bf632..6bfb1696a6f2 100644
--- a/drivers/net/dsa/sja1105/sja1105_dynamic_config.c
+++ b/drivers/net/dsa/sja1105/sja1105_dynamic_config.c
@@ -3,6 +3,98 @@
*/
#include "sja1105.h"
+/* In the dynamic configuration interface, the switch exposes a register-like
+ * view of some of the static configuration tables.
+ * Many times the field organization of the dynamic tables is abbreviated (not
+ * all fields are dynamically reconfigurable) and different from the static
+ * ones, but the key reason for having it is that we can spare a switch reset
+ * for settings that can be changed dynamically.
+ *
+ * This file creates a per-switch-family abstraction called
+ * struct sja1105_dynamic_table_ops and two operations that work with it:
+ * - sja1105_dynamic_config_write
+ * - sja1105_dynamic_config_read
+ *
+ * Compared to the struct sja1105_table_ops from sja1105_static_config.c,
+ * the dynamic accessors work with a compound buffer:
+ *
+ * packed_buf
+ *
+ * |
+ * V
+ * +-----------------------------------------+------------------+
+ * | ENTRY BUFFER | COMMAND BUFFER |
+ * +-----------------------------------------+------------------+
+ *
+ * <----------------------- packed_size ------------------------>
+ *
+ * The ENTRY BUFFER may or may not have the same layout, or size, as its static
+ * configuration table entry counterpart. When it does, the same packing
+ * function is reused (bar exceptional cases - see
+ * sja1105pqrs_dyn_l2_lookup_entry_packing).
+ *
+ * The reason for the COMMAND BUFFER being at the end is to be able to send
+ * a dynamic write command through a single SPI burst. By the time the switch
+ * reacts to the command, the ENTRY BUFFER is already populated with the data
+ * sent by the core.
+ *
+ * The COMMAND BUFFER is always SJA1105_SIZE_DYN_CMD bytes (one 32-bit word) in
+ * size.
+ *
+ * Sometimes the ENTRY BUFFER does not really exist (when the number of fields
+ * that can be reconfigured is small); in that case the switch repurposes some
+ * of the unused bits of the 32-bit COMMAND BUFFER to hold ENTRY data.
+ *
+ * The key members of struct sja1105_dynamic_table_ops are:
+ * - .entry_packing: A function that deals with packing an ENTRY structure
+ * into an SPI buffer, or retrieving an ENTRY structure
+ * from one.
+ * The @packed_buf pointer it is given always points to
+ * the ENTRY portion of the buffer.
+ * - .cmd_packing: A function that deals with packing/unpacking the COMMAND
+ * structure to/from the SPI buffer.
+ * It is given the same @packed_buf pointer as .entry_packing,
+ * so most of the time, the @packed_buf points *behind* the
+ * COMMAND offset inside the buffer.
+ * To access the COMMAND portion of the buffer, the function
+ * knows its correct offset.
+ * Giving both functions the same pointer is handy because in
+ * extreme cases (see sja1105pqrs_dyn_l2_lookup_entry_packing)
+ * the .entry_packing is able to jump to the COMMAND portion,
+ * or vice-versa (sja1105pqrs_l2_lookup_cmd_packing).
+ * - .access: A bitmap of:
+ * OP_READ: Set if the hardware manual marks the ENTRY portion of the
+ * dynamic configuration table buffer as R (readable) after
+ * an SPI read command (the switch will populate the buffer).
+ * OP_WRITE: Set if the manual marks the ENTRY portion of the dynamic
+ * table buffer as W (writable) after an SPI write command
+ * (the switch will read the fields provided in the buffer).
+ * OP_DEL: Set if the manual says the VALIDENT bit is supported in the
+ * COMMAND portion of this dynamic config buffer (i.e. the
+ * specified entry can be invalidated through a SPI write
+ * command).
+ * OP_SEARCH: Set if the manual says that the index of an entry can
+ * be retrieved in the COMMAND portion of the buffer based
+ * on its ENTRY portion, as a result of a SPI write command.
+ * Only the TCAM-based FDB table on SJA1105 P/Q/R/S supports
+ * this.
+ * - .max_entry_count: The number of entries, counting from zero, that can be
+ * reconfigured through the dynamic interface. If a static
+ * table can be reconfigured at all dynamically, this
+ * number always matches the maximum number of supported
+ * static entries.
+ * - .packed_size: The length in bytes of the compound ENTRY + COMMAND BUFFER.
+ * Note that sometimes the compound buffer may contain holes in
+ * it (see sja1105_vlan_lookup_cmd_packing). The @packed_buf is
+ * contiguous however, so @packed_size includes any unused
+ * bytes.
+ * - .addr: The base SPI address at which the buffer must be written to the
+ * switch's memory. When looking at the hardware manual, this must
+ * always match the lowest documented address for the ENTRY, and not
+ * that of the COMMAND, since the other 32-bit words will follow along
+ * at the correct addresses.
+ */
+
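To make the compound-buffer layout described above concrete, here is a minimal standalone sketch (the sizes and names are illustrative assumptions, not the driver's): the entry is packed at the front of the buffer, the one-word command at the tail, and the whole thing goes out in a single burst so the switch only reacts once the command word arrives.

#include <stdint.h>
#include <string.h>

/* Illustrative sizes only: a 20-byte ENTRY followed by a 4-byte COMMAND. */
#define EXAMPLE_ENTRY_SIZE	20
#define EXAMPLE_CMD_SIZE	4
#define EXAMPLE_PACKED_SIZE	(EXAMPLE_ENTRY_SIZE + EXAMPLE_CMD_SIZE)

/* Build the compound buffer for a dynamic write: ENTRY at offset 0,
 * COMMAND word at the end, so one transfer carries both and the switch
 * only acts once the trailing command word has been received.
 */
static void example_pack_dyn_write(uint8_t *packed_buf,
				   const uint8_t *entry, uint32_t cmd_word)
{
	memcpy(packed_buf, entry, EXAMPLE_ENTRY_SIZE);
	memcpy(packed_buf + EXAMPLE_ENTRY_SIZE, &cmd_word, EXAMPLE_CMD_SIZE);
}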
#define SJA1105_SIZE_DYN_CMD 4
#define SJA1105ET_SIZE_MAC_CONFIG_DYN_ENTRY \
@@ -35,17 +127,70 @@
#define SJA1105_MAX_DYN_CMD_SIZE \
SJA1105PQRS_SIZE_MAC_CONFIG_DYN_CMD
+struct sja1105_dyn_cmd {
+ bool search;
+ u64 valid;
+ u64 rdwrset;
+ u64 errors;
+ u64 valident;
+ u64 index;
+};
+
+enum sja1105_hostcmd {
+ SJA1105_HOSTCMD_SEARCH = 1,
+ SJA1105_HOSTCMD_READ = 2,
+ SJA1105_HOSTCMD_WRITE = 3,
+ SJA1105_HOSTCMD_INVALIDATE = 4,
+};
+
static void
sja1105pqrs_l2_lookup_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
enum packing_op op)
{
u8 *p = buf + SJA1105PQRS_SIZE_L2_LOOKUP_ENTRY;
const int size = SJA1105_SIZE_DYN_CMD;
+ u64 hostcmd;
sja1105_packing(p, &cmd->valid, 31, 31, size, op);
sja1105_packing(p, &cmd->rdwrset, 30, 30, size, op);
sja1105_packing(p, &cmd->errors, 29, 29, size, op);
sja1105_packing(p, &cmd->valident, 27, 27, size, op);
+
+ /* VALIDENT is supposed to indicate "keep or not", but in SJA1105 E/T,
+ * using it to delete a management route was unsupported. UM10944
+ * said about it:
+ *
+ * In case of a write access with the MGMTROUTE flag set,
+ * the flag will be ignored. It will always be found cleared
+ * for read accesses with the MGMTROUTE flag set.
+ *
+ * SJA1105 P/Q/R/S keeps the same behavior w.r.t. VALIDENT, but there
+ * is now another flag called HOSTCMD which does more stuff (quoting
+ * from UM11040):
+ *
+ * A write request is accepted only when HOSTCMD is set to write host
+ * or invalid. A read request is accepted only when HOSTCMD is set to
+ * search host or read host.
+ *
+ * So it is possible to translate a RDWRSET/VALIDENT combination into
+ * HOSTCMD so that we keep the dynamic command API in place, and
+ * at the same time achieve compatibility with the management route
+ * command structure.
+ */
+ if (cmd->rdwrset == SPI_READ) {
+ if (cmd->search)
+ hostcmd = SJA1105_HOSTCMD_SEARCH;
+ else
+ hostcmd = SJA1105_HOSTCMD_READ;
+ } else {
+ /* SPI_WRITE */
+ if (cmd->valident)
+ hostcmd = SJA1105_HOSTCMD_WRITE;
+ else
+ hostcmd = SJA1105_HOSTCMD_INVALIDATE;
+ }
+ sja1105_packing(p, &hostcmd, 25, 23, size, op);
+
/* Hack - The hardware takes the 'index' field within
* struct sja1105_l2_lookup_entry as the index on which this command
* will operate. However it will ignore everything else, so 'index'
@@ -54,9 +199,66 @@ sja1105pqrs_l2_lookup_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
* such that our API doesn't need to ask for a full-blown entry
* structure when e.g. a delete is requested.
*/
- sja1105_packing(buf, &cmd->index, 29, 20,
+ sja1105_packing(buf, &cmd->index, 15, 6,
SJA1105PQRS_SIZE_L2_LOOKUP_ENTRY, op);
- /* TODO hostcmd */
+}
+
+/* The hardware design here is unfortunate enough that it makes our
+ * command/entry abstraction crumble apart.
+ *
+ * On P/Q/R/S, the switch tries to say whether a FDB entry
+ * is statically programmed or dynamically learned via a flag called LOCKEDS.
+ * The hardware manual says about this field:
+ *
+ * On write will specify the format of ENTRY.
+ * On read the flag will be found cleared at times the VALID flag is found
+ * set. The flag will also be found cleared in response to a read having the
+ * MGMTROUTE flag set. In response to a read with the MGMTROUTE flag
+ * cleared, the flag will be set if the most recent access operated on an entry
+ * that was either loaded by configuration or through dynamic reconfiguration
+ * (as opposed to automatically learned entries).
+ *
+ * The trouble with this flag is that it's part of the *command* to access the
+ * dynamic interface, and not part of the *entry* retrieved from it.
+ * In other words, for a sja1105_dynamic_config_read, LOCKEDS is supposed to be
+ * an output from the switch into the command buffer, and for a
+ * sja1105_dynamic_config_write, the switch treats LOCKEDS as an input
+ * (hence we can write either static, or automatically learned entries, from
+ * the core).
+ * But the manual contradicts itself in the last phrase where it says that on
+ * read, LOCKEDS will be set to 1 for all FDB entries written through the
+ * dynamic interface (therefore, the value of LOCKEDS from the
+ * sja1105_dynamic_config_write is not really used for anything, it'll store a
+ * 1 anyway).
+ * This means you can't really write a FDB entry with LOCKEDS=0 (automatically
+ * learned) into the switch, which kind of makes sense.
+ * As for reading through the dynamic interface, it doesn't make too much sense
+ * to put LOCKEDS into the command, since the switch will inevitably have to
+ * ignore it (otherwise a command would be like "read the FDB entry 123, but
+ * only if it's dynamically learned" <- well how am I supposed to know?) and
+ * just use it as an output buffer for its findings. But guess what... that's
+ * what the entry buffer is for!
+ * Unfortunately, what really breaks this abstraction is that it wasn't
+ * designed with the possibility in mind that the switch can output
+ * entry-related data as writeback through the command buffer.
+ * However, whether a FDB entry is statically or dynamically learned *is* part
+ * of the entry and not the command data, no matter what the switch thinks.
+ * In order to expose it that way, we'll need to wrap around the
+ * sja1105pqrs_l2_lookup_entry_packing from sja1105_static_config.c, and take
+ * a peek outside of the caller-supplied @buf (the entry buffer), to reach the
+ * command buffer.
+ */
+static size_t
+sja1105pqrs_dyn_l2_lookup_entry_packing(void *buf, void *entry_ptr,
+ enum packing_op op)
+{
+ struct sja1105_l2_lookup_entry *entry = entry_ptr;
+ u8 *cmd = buf + SJA1105PQRS_SIZE_L2_LOOKUP_ENTRY;
+ const int size = SJA1105_SIZE_DYN_CMD;
+
+ sja1105_packing(cmd, &entry->lockeds, 28, 28, size, op);
+
+ return sja1105pqrs_l2_lookup_entry_packing(buf, entry_ptr, op);
}
static void
@@ -107,6 +309,36 @@ static size_t sja1105et_mgmt_route_entry_packing(void *buf, void *entry_ptr,
return size;
}
+static void
+sja1105pqrs_mgmt_route_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
+ enum packing_op op)
+{
+ u8 *p = buf + SJA1105PQRS_SIZE_L2_LOOKUP_ENTRY;
+ u64 mgmtroute = 1;
+
+ sja1105pqrs_l2_lookup_cmd_packing(buf, cmd, op);
+ if (op == PACK)
+ sja1105_pack(p, &mgmtroute, 26, 26, SJA1105_SIZE_DYN_CMD);
+}
+
+static size_t sja1105pqrs_mgmt_route_entry_packing(void *buf, void *entry_ptr,
+ enum packing_op op)
+{
+ const size_t size = SJA1105PQRS_SIZE_L2_LOOKUP_ENTRY;
+ struct sja1105_mgmt_entry *entry = entry_ptr;
+
+ /* In P/Q/R/S, enfport got renamed to mgmtvalid, but its purpose
+ * is the same (driver uses it to confirm that frame was sent).
+ * So just keep the name from E/T.
+ */
+ sja1105_packing(buf, &entry->tsreg, 71, 71, size, op);
+ sja1105_packing(buf, &entry->takets, 70, 70, size, op);
+ sja1105_packing(buf, &entry->macaddr, 69, 22, size, op);
+ sja1105_packing(buf, &entry->destports, 21, 17, size, op);
+ sja1105_packing(buf, &entry->enfport, 16, 16, size, op);
+ return size;
+}
+
/* In E/T, entry is at addresses 0x27-0x28. There is a 4 byte gap at 0x29,
* and command is at 0x2a. Similarly in P/Q/R/S there is a 1 register gap
* between entry (0x2d, 0x2e) and command (0x30).
@@ -240,6 +472,7 @@ sja1105et_general_params_entry_packing(void *buf, void *entry_ptr,
#define OP_READ BIT(0)
#define OP_WRITE BIT(1)
#define OP_DEL BIT(2)
+#define OP_SEARCH BIT(3)
/* SJA1105E/T: First generation */
struct sja1105_dynamic_table_ops sja1105et_dyn_ops[BLK_IDX_MAX_DYN] = {
@@ -293,6 +526,7 @@ struct sja1105_dynamic_table_ops sja1105et_dyn_ops[BLK_IDX_MAX_DYN] = {
.addr = 0x38,
},
[BLK_IDX_L2_FORWARDING_PARAMS] = {0},
+ [BLK_IDX_AVB_PARAMS] = {0},
[BLK_IDX_GENERAL_PARAMS] = {
.entry_packing = sja1105et_general_params_entry_packing,
.cmd_packing = sja1105et_general_params_cmd_packing,
@@ -304,14 +538,22 @@ struct sja1105_dynamic_table_ops sja1105et_dyn_ops[BLK_IDX_MAX_DYN] = {
[BLK_IDX_XMII_PARAMS] = {0},
};
-/* SJA1105P/Q/R/S: Second generation: TODO */
+/* SJA1105P/Q/R/S: Second generation */
struct sja1105_dynamic_table_ops sja1105pqrs_dyn_ops[BLK_IDX_MAX_DYN] = {
[BLK_IDX_L2_LOOKUP] = {
- .entry_packing = sja1105pqrs_l2_lookup_entry_packing,
+ .entry_packing = sja1105pqrs_dyn_l2_lookup_entry_packing,
.cmd_packing = sja1105pqrs_l2_lookup_cmd_packing,
- .access = (OP_READ | OP_WRITE | OP_DEL),
+ .access = (OP_READ | OP_WRITE | OP_DEL | OP_SEARCH),
.max_entry_count = SJA1105_MAX_L2_LOOKUP_COUNT,
- .packed_size = SJA1105ET_SIZE_L2_LOOKUP_DYN_CMD,
+ .packed_size = SJA1105PQRS_SIZE_L2_LOOKUP_DYN_CMD,
+ .addr = 0x24,
+ },
+ [BLK_IDX_MGMT_ROUTE] = {
+ .entry_packing = sja1105pqrs_mgmt_route_entry_packing,
+ .cmd_packing = sja1105pqrs_mgmt_route_cmd_packing,
+ .access = (OP_READ | OP_WRITE | OP_DEL | OP_SEARCH),
+ .max_entry_count = SJA1105_NUM_PORTS,
+ .packed_size = SJA1105PQRS_SIZE_L2_LOOKUP_DYN_CMD,
.addr = 0x24,
},
[BLK_IDX_L2_POLICING] = {0},
@@ -348,6 +590,7 @@ struct sja1105_dynamic_table_ops sja1105pqrs_dyn_ops[BLK_IDX_MAX_DYN] = {
.addr = 0x38,
},
[BLK_IDX_L2_FORWARDING_PARAMS] = {0},
+ [BLK_IDX_AVB_PARAMS] = {0},
[BLK_IDX_GENERAL_PARAMS] = {
.entry_packing = sja1105et_general_params_entry_packing,
.cmd_packing = sja1105et_general_params_cmd_packing,
@@ -359,6 +602,24 @@ struct sja1105_dynamic_table_ops sja1105pqrs_dyn_ops[BLK_IDX_MAX_DYN] = {
[BLK_IDX_XMII_PARAMS] = {0},
};
+/* Provides read access to the settings through the dynamic interface
+ * of the switch.
+ * @blk_idx is used as key to select from the sja1105_dynamic_table_ops.
+ * The selection is limited by the hardware with respect to which
+ * configuration blocks can be read through the dynamic interface.
+ * @index is used to retrieve a particular table entry. If negative,
+ * (and if the @blk_idx supports the searching operation) a search
+ * is performed by the @entry parameter.
+ * @entry Cast to an unpacked structure that holds a table entry
+ * of the type specified in @blk_idx.
+ * Usually an output argument. If @index is negative, then this
+ * argument is used as input/output: it should be pre-populated
+ * with the element to search for. Entries which support the
+ * search operation will have an "index" field (not the @index
+ * argument to this function) and that is where the found index
+ * will be returned (or left unmodified - thus negative - if not
+ * found).
+ */
int sja1105_dynamic_config_read(struct sja1105_private *priv,
enum sja1105_blk_idx blk_idx,
int index, void *entry)
@@ -375,8 +636,10 @@ int sja1105_dynamic_config_read(struct sja1105_private *priv,
ops = &priv->info->dyn_ops[blk_idx];
- if (index >= ops->max_entry_count)
+ if (index >= 0 && index >= ops->max_entry_count)
return -ERANGE;
+ if (index < 0 && !(ops->access & OP_SEARCH))
+ return -EOPNOTSUPP;
if (!(ops->access & OP_READ))
return -EOPNOTSUPP;
if (ops->packed_size > SJA1105_MAX_DYN_CMD_SIZE)
@@ -388,9 +651,20 @@ int sja1105_dynamic_config_read(struct sja1105_private *priv,
cmd.valid = true; /* Trigger action on table entry */
cmd.rdwrset = SPI_READ; /* Action is read */
- cmd.index = index;
+ if (index < 0) {
+ /* Avoid copying a signed negative number to an u64 */
+ cmd.index = 0;
+ cmd.search = true;
+ } else {
+ cmd.index = index;
+ cmd.search = false;
+ }
+ cmd.valident = true;
ops->cmd_packing(packed_buf, &cmd, PACK);
+ if (cmd.search)
+ ops->entry_packing(packed_buf, entry, PACK);
+
/* Send SPI write operation: read config table entry */
rc = sja1105_spi_send_packed_buf(priv, SPI_WRITE, ops->addr,
packed_buf, ops->packed_size);
@@ -416,7 +690,7 @@ int sja1105_dynamic_config_read(struct sja1105_private *priv,
* So don't error out in that case.
*/
if (!cmd.valident && blk_idx != BLK_IDX_MGMT_ROUTE)
- return -EINVAL;
+ return -ENOENT;
cpu_relax();
} while (cmd.valid && --retries);
@@ -448,6 +722,8 @@ int sja1105_dynamic_config_write(struct sja1105_private *priv,
if (index >= ops->max_entry_count)
return -ERANGE;
+ if (index < 0)
+ return -ERANGE;
if (!(ops->access & OP_WRITE))
return -EOPNOTSUPP;
if (!keep && !(ops->access & OP_DEL))
@@ -510,7 +786,7 @@ static u8 sja1105_crc8_add(u8 crc, u8 byte, u8 poly)
* is also received as argument in the Koopman notation that the switch
* hardware stores it in.
*/
-u8 sja1105_fdb_hash(struct sja1105_private *priv, const u8 *addr, u16 vid)
+u8 sja1105et_fdb_hash(struct sja1105_private *priv, const u8 *addr, u16 vid)
{
struct sja1105_l2_lookup_params_entry *l2_lookup_params =
priv->static_config.tables[BLK_IDX_L2_LOOKUP_PARAMS].entries;
diff --git a/drivers/net/dsa/sja1105/sja1105_dynamic_config.h b/drivers/net/dsa/sja1105/sja1105_dynamic_config.h
index 77be59546a55..740dadf43f01 100644
--- a/drivers/net/dsa/sja1105/sja1105_dynamic_config.h
+++ b/drivers/net/dsa/sja1105/sja1105_dynamic_config.h
@@ -7,13 +7,10 @@
#include "sja1105.h"
#include <linux/packing.h>
-struct sja1105_dyn_cmd {
- u64 valid;
- u64 rdwrset;
- u64 errors;
- u64 valident;
- u64 index;
-};
+/* Special index that can be used for sja1105_dynamic_config_read */
+#define SJA1105_SEARCH -1
+
+struct sja1105_dyn_cmd;
struct sja1105_dynamic_table_ops {
/* This returns size_t just to keep same prototype as the
diff --git a/drivers/net/dsa/sja1105/sja1105_main.c b/drivers/net/dsa/sja1105/sja1105_main.c
index 1c3959efebc4..32bf3a7cc3b6 100644
--- a/drivers/net/dsa/sja1105/sja1105_main.c
+++ b/drivers/net/dsa/sja1105/sja1105_main.c
@@ -70,8 +70,7 @@ static int sja1105_init_mac_settings(struct sja1105_private *priv)
/* Keep standard IFG of 12 bytes on egress. */
.ifg = 0,
/* Always put the MAC speed in automatic mode, where it can be
- * retrieved from the PHY object through phylib and
- * sja1105_adjust_port_config.
+ * adjusted at runtime by PHYLINK.
*/
.speed = SJA1105_SPEED_AUTO,
/* No static correction for 1-step 1588 events */
@@ -81,7 +80,7 @@ static int sja1105_init_mac_settings(struct sja1105_private *priv)
.maxage = 0xFF,
/* Internal VLAN (pvid) to apply to untagged ingress */
.vlanprio = 0,
- .vlanid = 0,
+ .vlanid = 1,
.ing_mirr = false,
.egr_mirr = false,
/* Don't drop traffic with other EtherType than ETH_P_IP */
@@ -116,7 +115,6 @@ static int sja1105_init_mac_settings(struct sja1105_private *priv)
if (!table->entries)
return -ENOMEM;
- /* Override table based on phylib DT bindings */
table->entry_count = SJA1105_NUM_PORTS;
mac = table->entries;
@@ -157,7 +155,7 @@ static int sja1105_init_mii_settings(struct sja1105_private *priv,
if (!table->entries)
return -ENOMEM;
- /* Override table based on phylib DT bindings */
+ /* Override table based on PHYLINK DT bindings */
table->entry_count = SJA1105_MAX_XMII_PARAMS_COUNT;
mii = table->entries;
@@ -205,11 +203,16 @@ static int sja1105_init_static_fdb(struct sja1105_private *priv)
static int sja1105_init_l2_lookup_params(struct sja1105_private *priv)
{
struct sja1105_table *table;
+ u64 max_fdb_entries = SJA1105_MAX_L2_LOOKUP_COUNT / SJA1105_NUM_PORTS;
struct sja1105_l2_lookup_params_entry default_l2_lookup_params = {
/* Learned FDB entries are forgotten after 300 seconds */
.maxage = SJA1105_AGEING_TIME_MS(300000),
/* All entries within a FDB bin are available for learning */
.dyn_tbsz = SJA1105ET_FDB_BIN_SIZE,
+ /* And the P/Q/R/S equivalent setting: */
+ .start_dynspc = 0,
+ .maxaddrp = {max_fdb_entries, max_fdb_entries, max_fdb_entries,
+ max_fdb_entries, max_fdb_entries, },
/* 2^8 + 2^5 + 2^3 + 2^2 + 2^1 + 1 in Koopman notation */
.poly = 0x97,
/* This selects between Independent VLAN Learning (IVL) and
@@ -225,6 +228,13 @@ static int sja1105_init_l2_lookup_params(struct sja1105_private *priv)
* Maybe correlate with no_linklocal_learn from bridge driver?
*/
.no_mgmt_learn = true,
+ /* P/Q/R/S only */
+ .use_static = true,
+ /* Dynamically learned FDB entries can overwrite other (older)
+ * dynamic FDB entries
+ */
+ .owr_dyn = true,
+ .drpnolearn = true,
};
table = &priv->static_config.tables[BLK_IDX_L2_LOOKUP_PARAMS];
@@ -257,20 +267,15 @@ static int sja1105_init_static_vlan(struct sja1105_private *priv)
.vmemb_port = 0,
.vlan_bc = 0,
.tag_port = 0,
- .vlanid = 0,
+ .vlanid = 1,
};
int i;
table = &priv->static_config.tables[BLK_IDX_VLAN_LOOKUP];
- /* The static VLAN table will only contain the initial pvid of 0.
+ /* The static VLAN table will only contain the initial pvid of 1.
* All other VLANs are to be configured through dynamic entries,
* and kept in the static configuration table as backing memory.
- * The pvid of 0 is sufficient to pass traffic while the ports are
- * standalone and when vlan_filtering is disabled. When filtering
- * gets enabled, the switchdev core sets up the VLAN ID 1 and sets
- * it as the new pvid. Actually 'pvid 1' still comes up in 'bridge
- * vlan' even when vlan_filtering is off, but it has no effect.
*/
if (table->entry_count) {
kfree(table->entries);
@@ -284,7 +289,7 @@ static int sja1105_init_static_vlan(struct sja1105_private *priv)
table->entry_count = 1;
- /* VLAN ID 0: all DT-defined ports are members; no restrictions on
+ /* VLAN 1: all DT-defined ports are members; no restrictions on
* forwarding; always transmit priority-tagged frames as untagged.
*/
for (i = 0; i < SJA1105_NUM_PORTS; i++) {
@@ -380,14 +385,14 @@ static int sja1105_init_general_params(struct sja1105_private *priv)
.mirr_ptacu = 0,
.switchid = priv->ds->index,
/* Priority queue for link-local frames trapped to CPU */
- .hostprio = 0,
+ .hostprio = 7,
.mac_fltres1 = SJA1105_LINKLOCAL_FILTER_A,
.mac_flt1 = SJA1105_LINKLOCAL_FILTER_A_MASK,
- .incl_srcpt1 = true,
+ .incl_srcpt1 = false,
.send_meta1 = false,
.mac_fltres0 = SJA1105_LINKLOCAL_FILTER_B,
.mac_flt0 = SJA1105_LINKLOCAL_FILTER_B_MASK,
- .incl_srcpt0 = true,
+ .incl_srcpt0 = false,
.send_meta0 = false,
/* The destination for traffic matching mac_fltres1 and
* mac_fltres0 on all ports except host_port. Such traffic
@@ -499,6 +504,39 @@ static int sja1105_init_l2_policing(struct sja1105_private *priv)
return 0;
}
+static int sja1105_init_avb_params(struct sja1105_private *priv,
+ bool on)
+{
+ struct sja1105_avb_params_entry *avb;
+ struct sja1105_table *table;
+
+ table = &priv->static_config.tables[BLK_IDX_AVB_PARAMS];
+
+ /* Discard previous AVB Parameters Table */
+ if (table->entry_count) {
+ kfree(table->entries);
+ table->entry_count = 0;
+ }
+
+ /* Configure the reception of meta frames only if requested */
+ if (!on)
+ return 0;
+
+ table->entries = kcalloc(SJA1105_MAX_AVB_PARAMS_COUNT,
+ table->ops->unpacked_entry_size, GFP_KERNEL);
+ if (!table->entries)
+ return -ENOMEM;
+
+ table->entry_count = SJA1105_MAX_AVB_PARAMS_COUNT;
+
+ avb = table->entries;
+
+ avb->destmeta = SJA1105_META_DMAC;
+ avb->srcmeta = SJA1105_META_SMAC;
+
+ return 0;
+}
+
static int sja1105_static_config_load(struct sja1105_private *priv,
struct sja1105_dt_port *ports)
{
@@ -539,6 +577,9 @@ static int sja1105_static_config_load(struct sja1105_private *priv,
rc = sja1105_init_general_params(priv);
if (rc < 0)
return rc;
+ rc = sja1105_init_avb_params(priv, false);
+ if (rc < 0)
+ return rc;
/* Send initial configuration to hardware via SPI */
return sja1105_static_config_upload(priv);
@@ -644,26 +685,18 @@ static int sja1105_parse_dt(struct sja1105_private *priv,
return rc;
}
-/* Convert back and forth MAC speed from Mbps to SJA1105 encoding */
+/* Convert link speed from SJA1105 to ethtool encoding */
static int sja1105_speed[] = {
- [SJA1105_SPEED_AUTO] = 0,
- [SJA1105_SPEED_10MBPS] = 10,
- [SJA1105_SPEED_100MBPS] = 100,
- [SJA1105_SPEED_1000MBPS] = 1000,
+ [SJA1105_SPEED_AUTO] = SPEED_UNKNOWN,
+ [SJA1105_SPEED_10MBPS] = SPEED_10,
+ [SJA1105_SPEED_100MBPS] = SPEED_100,
+ [SJA1105_SPEED_1000MBPS] = SPEED_1000,
};
-/* Set link speed and enable/disable traffic I/O in the MAC configuration
- * for a specific port.
- *
- * @speed_mbps: If 0, leave the speed unchanged, else adapt MAC to PHY speed.
- * @enabled: Manage Rx and Tx settings for this port. If false, overrides the
- * settings from the STP state, but not persistently (does not
- * overwrite the static MAC info for this port).
- */
+/* Set link speed in the MAC configuration for a specific port. */
static int sja1105_adjust_port_config(struct sja1105_private *priv, int port,
- int speed_mbps, bool enabled)
+ int speed_mbps)
{
- struct sja1105_mac_config_entry dyn_mac;
struct sja1105_xmii_params_entry *mii;
struct sja1105_mac_config_entry *mac;
struct device *dev = priv->ds->dev;
@@ -671,21 +704,33 @@ static int sja1105_adjust_port_config(struct sja1105_private *priv, int port,
sja1105_speed_t speed;
int rc;
- mii = priv->static_config.tables[BLK_IDX_XMII_PARAMS].entries;
+ /* On P/Q/R/S, one can read from the device via the MAC reconfiguration
+ * tables. On E/T, MAC reconfig tables are not readable, only writable.
+ * We have to *know* what the MAC looks like. For the sake of keeping
+ * the code common, we'll use the static configuration tables as a
+ * reasonable approximation for both E/T and P/Q/R/S.
+ */
mac = priv->static_config.tables[BLK_IDX_MAC_CONFIG].entries;
+ mii = priv->static_config.tables[BLK_IDX_XMII_PARAMS].entries;
switch (speed_mbps) {
- case 0:
- /* No speed update requested */
+ case SPEED_UNKNOWN:
+ /* PHYLINK called sja1105_mac_config() to inform us about
+ * the state->interface, but AN has not completed and the
+ * speed is not yet valid. UM10944.pdf says that setting
+ * SJA1105_SPEED_AUTO at runtime disables the port, so that is
+ * ok for power consumption in case AN will never complete -
+ * otherwise PHYLINK should come back with a new update.
+ */
speed = SJA1105_SPEED_AUTO;
break;
- case 10:
+ case SPEED_10:
speed = SJA1105_SPEED_10MBPS;
break;
- case 100:
+ case SPEED_100:
speed = SJA1105_SPEED_100MBPS;
break;
- case 1000:
+ case SPEED_1000:
speed = SJA1105_SPEED_1000MBPS;
break;
default:
@@ -693,26 +738,16 @@ static int sja1105_adjust_port_config(struct sja1105_private *priv, int port,
return -EINVAL;
}
- /* If requested, overwrite SJA1105_SPEED_AUTO from the static MAC
- * configuration table, since this will be used for the clocking setup,
- * and we no longer need to store it in the static config (already told
- * hardware we want auto during upload phase).
+ /* Overwrite SJA1105_SPEED_AUTO from the static MAC configuration
+ * table, since this will be used for the clocking setup, and we no
+ * longer need to store it in the static config (already told hardware
+ * we want auto during upload phase).
*/
mac[port].speed = speed;
- /* On P/Q/R/S, one can read from the device via the MAC reconfiguration
- * tables. On E/T, MAC reconfig tables are not readable, only writable.
- * We have to *know* what the MAC looks like. For the sake of keeping
- * the code common, we'll use the static configuration tables as a
- * reasonable approximation for both E/T and P/Q/R/S.
- */
- dyn_mac = mac[port];
- dyn_mac.ingress = enabled && mac[port].ingress;
- dyn_mac.egress = enabled && mac[port].egress;
-
/* Write to the dynamic reconfiguration tables */
- rc = sja1105_dynamic_config_write(priv, BLK_IDX_MAC_CONFIG,
- port, &dyn_mac, true);
+ rc = sja1105_dynamic_config_write(priv, BLK_IDX_MAC_CONFIG, port,
+ &mac[port], true);
if (rc < 0) {
dev_err(dev, "Failed to write MAC config: %d\n", rc);
return rc;
@@ -724,9 +759,6 @@ static int sja1105_adjust_port_config(struct sja1105_private *priv, int port,
* the clock setup does interrupt the clock signal for a certain time
* which causes trouble for all PHYs relying on this signal.
*/
- if (!enabled)
- return 0;
-
phy_mode = mii->xmii_mode[port];
if (phy_mode != XMII_MODE_RGMII)
return 0;
@@ -734,15 +766,67 @@ static int sja1105_adjust_port_config(struct sja1105_private *priv, int port,
return sja1105_clocking_setup_port(priv, port);
}
-static void sja1105_adjust_link(struct dsa_switch *ds, int port,
- struct phy_device *phydev)
+/* The SJA1105 MAC programming model is through the static config (the xMII
+ * Mode table cannot be dynamically reconfigured), and we have to program
+ * that early (earlier than PHYLINK calls us, anyway).
+ * So just error out in case the connected PHY attempts to change the initial
+ * system interface MII protocol from what is defined in the DT, at least for
+ * now.
+ */
+static bool sja1105_phy_mode_mismatch(struct sja1105_private *priv, int port,
+ phy_interface_t interface)
+{
+ struct sja1105_xmii_params_entry *mii;
+ sja1105_phy_interface_t phy_mode;
+
+ mii = priv->static_config.tables[BLK_IDX_XMII_PARAMS].entries;
+ phy_mode = mii->xmii_mode[port];
+
+ switch (interface) {
+ case PHY_INTERFACE_MODE_MII:
+ return (phy_mode != XMII_MODE_MII);
+ case PHY_INTERFACE_MODE_RMII:
+ return (phy_mode != XMII_MODE_RMII);
+ case PHY_INTERFACE_MODE_RGMII:
+ case PHY_INTERFACE_MODE_RGMII_ID:
+ case PHY_INTERFACE_MODE_RGMII_RXID:
+ case PHY_INTERFACE_MODE_RGMII_TXID:
+ return (phy_mode != XMII_MODE_RGMII);
+ default:
+ return true;
+ }
+}
+
+static void sja1105_mac_config(struct dsa_switch *ds, int port,
+ unsigned int link_an_mode,
+ const struct phylink_link_state *state)
{
struct sja1105_private *priv = ds->priv;
- if (!phydev->link)
- sja1105_adjust_port_config(priv, port, 0, false);
- else
- sja1105_adjust_port_config(priv, port, phydev->speed, true);
+ if (sja1105_phy_mode_mismatch(priv, port, state->interface))
+ return;
+
+ if (link_an_mode == MLO_AN_INBAND) {
+ dev_err(ds->dev, "In-band AN not supported!\n");
+ return;
+ }
+
+ sja1105_adjust_port_config(priv, port, state->speed);
+}
+
+static void sja1105_mac_link_down(struct dsa_switch *ds, int port,
+ unsigned int mode,
+ phy_interface_t interface)
+{
+ sja1105_inhibit_tx(ds->priv, BIT(port), true);
+}
+
+static void sja1105_mac_link_up(struct dsa_switch *ds, int port,
+ unsigned int mode,
+ phy_interface_t interface,
+ struct phy_device *phydev)
+{
+ sja1105_inhibit_tx(ds->priv, BIT(port), false);
}
static void sja1105_phylink_validate(struct dsa_switch *ds, int port,
@@ -759,6 +843,16 @@ static void sja1105_phylink_validate(struct dsa_switch *ds, int port,
mii = priv->static_config.tables[BLK_IDX_XMII_PARAMS].entries;
+ /* include/linux/phylink.h says:
+ * When @state->interface is %PHY_INTERFACE_MODE_NA, phylink
+ * expects the MAC driver to return all supported link modes.
+ */
+ if (state->interface != PHY_INTERFACE_MODE_NA &&
+ sja1105_phy_mode_mismatch(priv, port, state->interface)) {
+ bitmap_zero(supported, __ETHTOOL_LINK_MODE_MASK_NBITS);
+ return;
+ }
+
/* The MAC does not support pause frames, and also doesn't
* support half-duplex traffic modes.
*/
@@ -774,6 +868,77 @@ static void sja1105_phylink_validate(struct dsa_switch *ds, int port,
__ETHTOOL_LINK_MODE_MASK_NBITS);
}
+static int
+sja1105_find_static_fdb_entry(struct sja1105_private *priv, int port,
+ const struct sja1105_l2_lookup_entry *requested)
+{
+ struct sja1105_l2_lookup_entry *l2_lookup;
+ struct sja1105_table *table;
+ int i;
+
+ table = &priv->static_config.tables[BLK_IDX_L2_LOOKUP];
+ l2_lookup = table->entries;
+
+ for (i = 0; i < table->entry_count; i++)
+ if (l2_lookup[i].macaddr == requested->macaddr &&
+ l2_lookup[i].vlanid == requested->vlanid &&
+ l2_lookup[i].destports & BIT(port))
+ return i;
+
+ return -1;
+}
+
+/* We want FDB entries added statically through the bridge command to persist
+ * across switch resets, which are a common thing during normal SJA1105
+ * operation. So we have to back them up in the static configuration tables
+ * and hence apply them on next static config upload... yay!
+ */
+static int
+sja1105_static_fdb_change(struct sja1105_private *priv, int port,
+ const struct sja1105_l2_lookup_entry *requested,
+ bool keep)
+{
+ struct sja1105_l2_lookup_entry *l2_lookup;
+ struct sja1105_table *table;
+ int rc, match;
+
+ table = &priv->static_config.tables[BLK_IDX_L2_LOOKUP];
+
+ match = sja1105_find_static_fdb_entry(priv, port, requested);
+ if (match < 0) {
+ /* Can't delete a missing entry. */
+ if (!keep)
+ return 0;
+
+ /* No match => new entry */
+ rc = sja1105_table_resize(table, table->entry_count + 1);
+ if (rc)
+ return rc;
+
+ match = table->entry_count - 1;
+ }
+
+ /* Assign pointer after the resize (it may be new memory) */
+ l2_lookup = table->entries;
+
+ /* We have a match.
+ * If the job was to add this FDB entry, it's already done (mostly
+ * anyway, since the port forwarding mask may have changed, in which
+ * case we update it).
+ * Otherwise we have to delete it.
+ */
+ if (keep) {
+ l2_lookup[match] = *requested;
+ return 0;
+ }
+
+ /* To remove, the strategy is to overwrite the element with
+ * the last one, and then reduce the array size by 1
+ */
+ l2_lookup[match] = l2_lookup[table->entry_count - 1];
+ return sja1105_table_resize(table, table->entry_count - 1);
+}
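The removal path above relies on the classic swap-with-last trick, which keeps the backing table dense at the cost of element order. A minimal standalone sketch of the same idea on a plain int array (names are illustrative):

#include <stddef.h>

/* Remove arr[idx] by overwriting it with the last element and shrinking
 * the logical size by one; element order is not preserved.
 */
static size_t example_swap_remove(int *arr, size_t count, size_t idx)
{
	arr[idx] = arr[count - 1];
	return count - 1;
}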
+
/* First-generation switches have a 4-way set associative TCAM that
* holds the FDB entries. An FDB index spans from 0 to 1023 and is comprised of
* a "bin" (grouping of 4 entries) and a "way" (an entry within a bin).
@@ -785,10 +950,10 @@ static inline int sja1105et_fdb_index(int bin, int way)
return bin * SJA1105ET_FDB_BIN_SIZE + way;
}
-static int sja1105_is_fdb_entry_in_bin(struct sja1105_private *priv, int bin,
- const u8 *addr, u16 vid,
- struct sja1105_l2_lookup_entry *match,
- int *last_unused)
+static int sja1105et_is_fdb_entry_in_bin(struct sja1105_private *priv, int bin,
+ const u8 *addr, u16 vid,
+ struct sja1105_l2_lookup_entry *match,
+ int *last_unused)
{
int way;
@@ -817,19 +982,19 @@ static int sja1105_is_fdb_entry_in_bin(struct sja1105_private *priv, int bin,
return -1;
}
-static int sja1105_fdb_add(struct dsa_switch *ds, int port,
- const unsigned char *addr, u16 vid)
+int sja1105et_fdb_add(struct dsa_switch *ds, int port,
+ const unsigned char *addr, u16 vid)
{
struct sja1105_l2_lookup_entry l2_lookup = {0};
struct sja1105_private *priv = ds->priv;
struct device *dev = ds->dev;
int last_unused = -1;
- int bin, way;
+ int bin, way, rc;
- bin = sja1105_fdb_hash(priv, addr, vid);
+ bin = sja1105et_fdb_hash(priv, addr, vid);
- way = sja1105_is_fdb_entry_in_bin(priv, bin, addr, vid,
- &l2_lookup, &last_unused);
+ way = sja1105et_is_fdb_entry_in_bin(priv, bin, addr, vid,
+ &l2_lookup, &last_unused);
if (way >= 0) {
/* We have an FDB entry. Is our port in the destination
* mask? If yes, we need to do nothing. If not, we need
@@ -868,22 +1033,26 @@ static int sja1105_fdb_add(struct dsa_switch *ds, int port,
}
l2_lookup.index = sja1105et_fdb_index(bin, way);
- return sja1105_dynamic_config_write(priv, BLK_IDX_L2_LOOKUP,
- l2_lookup.index, &l2_lookup,
- true);
+ rc = sja1105_dynamic_config_write(priv, BLK_IDX_L2_LOOKUP,
+ l2_lookup.index, &l2_lookup,
+ true);
+ if (rc < 0)
+ return rc;
+
+ return sja1105_static_fdb_change(priv, port, &l2_lookup, true);
}
-static int sja1105_fdb_del(struct dsa_switch *ds, int port,
- const unsigned char *addr, u16 vid)
+int sja1105et_fdb_del(struct dsa_switch *ds, int port,
+ const unsigned char *addr, u16 vid)
{
struct sja1105_l2_lookup_entry l2_lookup = {0};
struct sja1105_private *priv = ds->priv;
- int index, bin, way;
+ int index, bin, way, rc;
bool keep;
- bin = sja1105_fdb_hash(priv, addr, vid);
- way = sja1105_is_fdb_entry_in_bin(priv, bin, addr, vid,
- &l2_lookup, NULL);
+ bin = sja1105et_fdb_hash(priv, addr, vid);
+ way = sja1105et_is_fdb_entry_in_bin(priv, bin, addr, vid,
+ &l2_lookup, NULL);
if (way < 0)
return 0;
index = sja1105et_fdb_index(bin, way);
@@ -893,15 +1062,176 @@ static int sja1105_fdb_del(struct dsa_switch *ds, int port,
* need to completely evict the FDB entry.
* Otherwise we just write it back.
*/
- if (l2_lookup.destports & BIT(port))
- l2_lookup.destports &= ~BIT(port);
+ l2_lookup.destports &= ~BIT(port);
+
+ if (l2_lookup.destports)
+ keep = true;
+ else
+ keep = false;
+
+ rc = sja1105_dynamic_config_write(priv, BLK_IDX_L2_LOOKUP,
+ index, &l2_lookup, keep);
+ if (rc < 0)
+ return rc;
+
+ return sja1105_static_fdb_change(priv, port, &l2_lookup, keep);
+}
+
+int sja1105pqrs_fdb_add(struct dsa_switch *ds, int port,
+ const unsigned char *addr, u16 vid)
+{
+ struct sja1105_l2_lookup_entry l2_lookup = {0};
+ struct sja1105_private *priv = ds->priv;
+ int rc, i;
+
+ /* Search for an existing entry in the FDB table */
+ l2_lookup.macaddr = ether_addr_to_u64(addr);
+ l2_lookup.vlanid = vid;
+ l2_lookup.iotag = SJA1105_S_TAG;
+ l2_lookup.mask_macaddr = GENMASK_ULL(ETH_ALEN * 8 - 1, 0);
+ l2_lookup.mask_vlanid = VLAN_VID_MASK;
+ l2_lookup.mask_iotag = BIT(0);
+ l2_lookup.destports = BIT(port);
+
+ rc = sja1105_dynamic_config_read(priv, BLK_IDX_L2_LOOKUP,
+ SJA1105_SEARCH, &l2_lookup);
+ if (rc == 0) {
+ /* Found and this port is already in the entry's
+ * port mask => job done
+ */
+ if (l2_lookup.destports & BIT(port))
+ return 0;
+ /* l2_lookup.index is populated by the switch in case it
+ * found something.
+ */
+ l2_lookup.destports |= BIT(port);
+ goto skip_finding_an_index;
+ }
+
+ /* Not found, so try to find an unused spot in the FDB.
+ * This is slightly inefficient because the strategy is to probe every
+ * possible position from 0 to 1023.
+ */
+ for (i = 0; i < SJA1105_MAX_L2_LOOKUP_COUNT; i++) {
+ rc = sja1105_dynamic_config_read(priv, BLK_IDX_L2_LOOKUP,
+ i, NULL);
+ if (rc < 0)
+ break;
+ }
+ if (i == SJA1105_MAX_L2_LOOKUP_COUNT) {
+ dev_err(ds->dev, "FDB is full, cannot add entry.\n");
+ return -EINVAL;
+ }
+ l2_lookup.lockeds = true;
+ l2_lookup.index = i;
+
+skip_finding_an_index:
+ rc = sja1105_dynamic_config_write(priv, BLK_IDX_L2_LOOKUP,
+ l2_lookup.index, &l2_lookup,
+ true);
+ if (rc < 0)
+ return rc;
+
+ return sja1105_static_fdb_change(priv, port, &l2_lookup, true);
+}
+
+int sja1105pqrs_fdb_del(struct dsa_switch *ds, int port,
+ const unsigned char *addr, u16 vid)
+{
+ struct sja1105_l2_lookup_entry l2_lookup = {0};
+ struct sja1105_private *priv = ds->priv;
+ bool keep;
+ int rc;
+
+ l2_lookup.macaddr = ether_addr_to_u64(addr);
+ l2_lookup.vlanid = vid;
+ l2_lookup.iotag = SJA1105_S_TAG;
+ l2_lookup.mask_macaddr = GENMASK_ULL(ETH_ALEN * 8 - 1, 0);
+ l2_lookup.mask_vlanid = VLAN_VID_MASK;
+ l2_lookup.mask_iotag = BIT(0);
+ l2_lookup.destports = BIT(port);
+
+ rc = sja1105_dynamic_config_read(priv, BLK_IDX_L2_LOOKUP,
+ SJA1105_SEARCH, &l2_lookup);
+ if (rc < 0)
+ return 0;
+
+ l2_lookup.destports &= ~BIT(port);
+
+ /* Decide whether we remove just this port from the FDB entry,
+ * or if we remove it completely.
+ */
if (l2_lookup.destports)
keep = true;
else
keep = false;
- return sja1105_dynamic_config_write(priv, BLK_IDX_L2_LOOKUP,
- index, &l2_lookup, keep);
+ rc = sja1105_dynamic_config_write(priv, BLK_IDX_L2_LOOKUP,
+ l2_lookup.index, &l2_lookup, keep);
+ if (rc < 0)
+ return rc;
+
+ return sja1105_static_fdb_change(priv, port, &l2_lookup, keep);
+}
+
+static int sja1105_fdb_add(struct dsa_switch *ds, int port,
+ const unsigned char *addr, u16 vid)
+{
+ struct sja1105_private *priv = ds->priv;
+ u16 rx_vid, tx_vid;
+ int rc, i;
+
+ if (dsa_port_is_vlan_filtering(&ds->ports[port]))
+ return priv->info->fdb_add_cmd(ds, port, addr, vid);
+
+ /* Since we make use of VLANs even when the bridge core doesn't tell us
+ * to, translate these FDB entries into the correct dsa_8021q ones.
+ * The basic idea (also repeats for removal below) is:
+ * - Each of the other front-panel ports needs to be able to forward a
+ * pvid-tagged (aka tagged with their rx_vid) frame that matches this
+ * DMAC.
+ * - The CPU port (aka the tx_vid of this port) needs to be able to
+ * send a frame matching this DMAC to the specified port.
+ * For a better picture see net/dsa/tag_8021q.c.
+ */
+ for (i = 0; i < SJA1105_NUM_PORTS; i++) {
+ if (i == port)
+ continue;
+ if (i == dsa_upstream_port(priv->ds, port))
+ continue;
+
+ rx_vid = dsa_8021q_rx_vid(ds, i);
+ rc = priv->info->fdb_add_cmd(ds, port, addr, rx_vid);
+ if (rc < 0)
+ return rc;
+ }
+ tx_vid = dsa_8021q_tx_vid(ds, port);
+ return priv->info->fdb_add_cmd(ds, port, addr, tx_vid);
+}
+
+static int sja1105_fdb_del(struct dsa_switch *ds, int port,
+ const unsigned char *addr, u16 vid)
+{
+ struct sja1105_private *priv = ds->priv;
+ u16 rx_vid, tx_vid;
+ int rc, i;
+
+ if (dsa_port_is_vlan_filtering(&ds->ports[port]))
+ return priv->info->fdb_del_cmd(ds, port, addr, vid);
+
+ for (i = 0; i < SJA1105_NUM_PORTS; i++) {
+ if (i == port)
+ continue;
+ if (i == dsa_upstream_port(priv->ds, port))
+ continue;
+
+ rx_vid = dsa_8021q_rx_vid(ds, i);
+ rc = priv->info->fdb_del_cmd(ds, port, addr, rx_vid);
+ if (rc < 0)
+ return rc;
+ }
+ tx_vid = dsa_8021q_tx_vid(ds, port);
+ return priv->info->fdb_del_cmd(ds, port, addr, tx_vid);
}
static int sja1105_fdb_dump(struct dsa_switch *ds, int port,
@@ -909,8 +1239,12 @@ static int sja1105_fdb_dump(struct dsa_switch *ds, int port,
{
struct sja1105_private *priv = ds->priv;
struct device *dev = ds->dev;
+ u16 rx_vid, tx_vid;
int i;
+ rx_vid = dsa_8021q_rx_vid(ds, port);
+ tx_vid = dsa_8021q_tx_vid(ds, port);
+
for (i = 0; i < SJA1105_MAX_L2_LOOKUP_COUNT; i++) {
struct sja1105_l2_lookup_entry l2_lookup = {0};
u8 macaddr[ETH_ALEN];
@@ -919,7 +1253,7 @@ static int sja1105_fdb_dump(struct dsa_switch *ds, int port,
rc = sja1105_dynamic_config_read(priv, BLK_IDX_L2_LOOKUP,
i, &l2_lookup);
/* No fdb entry at i, not an issue */
- if (rc == -EINVAL)
+ if (rc == -ENOENT)
continue;
if (rc) {
dev_err(dev, "Failed to dump FDB: %d\n", rc);
@@ -935,7 +1269,41 @@ static int sja1105_fdb_dump(struct dsa_switch *ds, int port,
if (!(l2_lookup.destports & BIT(port)))
continue;
u64_to_ether_addr(l2_lookup.macaddr, macaddr);
- cb(macaddr, l2_lookup.vlanid, false, data);
+
+ /* On SJA1105 E/T, the switch doesn't implement the LOCKEDS
+ * bit, so it doesn't tell us whether a FDB entry is static
+ * or not.
+ * But, of course, we can find out - we're the ones who added
+ * it in the first place.
+ */
+ if (priv->info->device_id == SJA1105E_DEVICE_ID ||
+ priv->info->device_id == SJA1105T_DEVICE_ID) {
+ int match;
+
+ match = sja1105_find_static_fdb_entry(priv, port,
+ &l2_lookup);
+ l2_lookup.lockeds = (match >= 0);
+ }
+
+ /* We need to hide the dsa_8021q VLANs from the user. This
+ * basically means hiding the duplicates and only showing
+ * the pvid that is supposed to be active in standalone and
+ * non-vlan_filtering modes (aka 1).
+ * - For statically added FDB entries (bridge fdb add), we
+ * can convert the TX VID (coming from the CPU port) into the
+ * pvid and ignore the RX VIDs of the other ports.
+ * - For dynamically learned FDB entries, a single entry with
+ * no duplicates is learned - that which has the real port's
+ * pvid, aka RX VID.
+ */
+ if (!dsa_port_is_vlan_filtering(&ds->ports[port])) {
+ if (l2_lookup.vlanid == tx_vid ||
+ l2_lookup.vlanid == rx_vid)
+ l2_lookup.vlanid = 1;
+ else
+ continue;
+ }
+ cb(macaddr, l2_lookup.vlanid, l2_lookup.lockeds, data);
}
return 0;
}
@@ -1056,27 +1424,6 @@ static void sja1105_bridge_leave(struct dsa_switch *ds, int port,
sja1105_bridge_member(ds, port, br, false);
}
-static u8 sja1105_stp_state_get(struct sja1105_private *priv, int port)
-{
- struct sja1105_mac_config_entry *mac;
-
- mac = priv->static_config.tables[BLK_IDX_MAC_CONFIG].entries;
-
- if (!mac[port].ingress && !mac[port].egress && !mac[port].dyn_learn)
- return BR_STATE_BLOCKING;
- if (mac[port].ingress && !mac[port].egress && !mac[port].dyn_learn)
- return BR_STATE_LISTENING;
- if (mac[port].ingress && !mac[port].egress && mac[port].dyn_learn)
- return BR_STATE_LEARNING;
- if (mac[port].ingress && mac[port].egress && mac[port].dyn_learn)
- return BR_STATE_FORWARDING;
- /* This is really an error condition if the MAC was in none of the STP
- * states above. But treating the port as disabled does nothing, which
- * is adequate, and it also resets the MAC to a known state later on.
- */
- return BR_STATE_DISABLED;
-}
-
/* For situations where we need to change a setting at runtime that is only
* available through the static configuration, resetting the switch in order
* to upload the new static config is unavoidable. Back up the settings we
@@ -1087,27 +1434,18 @@ static int sja1105_static_config_reload(struct sja1105_private *priv)
{
struct sja1105_mac_config_entry *mac;
int speed_mbps[SJA1105_NUM_PORTS];
- u8 stp_state[SJA1105_NUM_PORTS];
int rc, i;
mac = priv->static_config.tables[BLK_IDX_MAC_CONFIG].entries;
- /* Back up settings changed by sja1105_adjust_port_config and
- * sja1105_bridge_stp_state_set and restore their defaults.
+ /* Back up the dynamic link speed changed by sja1105_adjust_port_config
+ * in order to temporarily restore it to SJA1105_SPEED_AUTO - which the
+ * switch wants to see in the static config in order to allow us to
+ * change it through the dynamic interface later.
*/
for (i = 0; i < SJA1105_NUM_PORTS; i++) {
speed_mbps[i] = sja1105_speed[mac[i].speed];
mac[i].speed = SJA1105_SPEED_AUTO;
- if (i == dsa_upstream_port(priv->ds, i)) {
- mac[i].ingress = true;
- mac[i].egress = true;
- mac[i].dyn_learn = true;
- } else {
- stp_state[i] = sja1105_stp_state_get(priv, i);
- mac[i].ingress = false;
- mac[i].egress = false;
- mac[i].dyn_learn = false;
- }
}
/* Reset switch and send updated static configuration */
@@ -1124,13 +1462,7 @@ static int sja1105_static_config_reload(struct sja1105_private *priv)
goto out;
for (i = 0; i < SJA1105_NUM_PORTS; i++) {
- bool enabled = (speed_mbps[i] != 0);
-
- if (i != dsa_upstream_port(priv->ds, i))
- sja1105_bridge_stp_state_set(priv->ds, i, stp_state[i]);
-
- rc = sja1105_adjust_port_config(priv, i, speed_mbps[i],
- enabled);
+ rc = sja1105_adjust_port_config(priv, i, speed_mbps[i]);
if (rc < 0)
goto out;
}
@@ -1138,23 +1470,6 @@ out:
return rc;
}
-/* The TPID setting belongs to the General Parameters table,
- * which can only be partially reconfigured at runtime (and not the TPID).
- * So a switch reset is required.
- */
-static int sja1105_change_tpid(struct sja1105_private *priv,
- u16 tpid, u16 tpid2)
-{
- struct sja1105_general_params_entry *general_params;
- struct sja1105_table *table;
-
- table = &priv->static_config.tables[BLK_IDX_GENERAL_PARAMS];
- general_params = table->entries;
- general_params->tpid = tpid;
- general_params->tpid2 = tpid2;
- return sja1105_static_config_reload(priv);
-}
-
static int sja1105_pvid_apply(struct sja1105_private *priv, int port, u16 pvid)
{
struct sja1105_mac_config_entry *mac;
@@ -1273,17 +1588,41 @@ static int sja1105_vlan_prepare(struct dsa_switch *ds, int port,
return 0;
}
+/* The TPID setting belongs to the General Parameters table,
+ * which can only be partially reconfigured at runtime (and not the TPID).
+ * So a switch reset is required.
+ */
static int sja1105_vlan_filtering(struct dsa_switch *ds, int port, bool enabled)
{
+ struct sja1105_general_params_entry *general_params;
struct sja1105_private *priv = ds->priv;
+ struct sja1105_table *table;
+ u16 tpid, tpid2;
int rc;
- if (enabled)
+ if (enabled) {
/* Enable VLAN filtering. */
- rc = sja1105_change_tpid(priv, ETH_P_8021Q, ETH_P_8021AD);
- else
+ tpid = ETH_P_8021AD;
+ tpid2 = ETH_P_8021Q;
+ } else {
/* Disable VLAN filtering. */
- rc = sja1105_change_tpid(priv, ETH_P_SJA1105, ETH_P_SJA1105);
+ tpid = ETH_P_SJA1105;
+ tpid2 = ETH_P_SJA1105;
+ }
+
+ table = &priv->static_config.tables[BLK_IDX_GENERAL_PARAMS];
+ general_params = table->entries;
+ /* EtherType used to identify outer tagged (S-tag) VLAN traffic */
+ general_params->tpid = tpid;
+ /* EtherType used to identify inner tagged (C-tag) VLAN traffic */
+ general_params->tpid2 = tpid2;
+ /* When VLAN filtering is on, we need to at least be able to
+ * decode management traffic through the "backup plan".
+ */
+ general_params->incl_srcpt1 = enabled;
+ general_params->incl_srcpt0 = enabled;
+
+ rc = sja1105_static_config_reload(priv);
if (rc)
dev_err(ds->dev, "Failed to change VLAN Ethertype\n");
@@ -1372,6 +1711,11 @@ static int sja1105_setup(struct dsa_switch *ds)
return rc;
}
+ rc = sja1105_ptp_clock_register(priv);
+ if (rc < 0) {
+ dev_err(ds->dev, "Failed to register PTP clock: %d\n", rc);
+ return rc;
+ }
/* Create and send configuration down to device */
rc = sja1105_static_config_load(priv, ports);
if (rc < 0) {
@@ -1401,8 +1745,16 @@ static int sja1105_setup(struct dsa_switch *ds)
return sja1105_setup_8021q_tagging(ds, true);
}
+static void sja1105_teardown(struct dsa_switch *ds)
+{
+ struct sja1105_private *priv = ds->priv;
+
+ cancel_work_sync(&priv->tagger_data.rxtstamp_work);
+ skb_queue_purge(&priv->tagger_data.skb_rxtstamp_queue);
+}
+
static int sja1105_mgmt_xmit(struct dsa_switch *ds, int port, int slot,
- struct sk_buff *skb)
+ struct sk_buff *skb, bool takets)
{
struct sja1105_mgmt_entry mgmt_route = {0};
struct sja1105_private *priv = ds->priv;
@@ -1415,6 +1767,8 @@ static int sja1105_mgmt_xmit(struct dsa_switch *ds, int port, int slot,
mgmt_route.macaddr = ether_addr_to_u64(hdr->h_dest);
mgmt_route.destports = BIT(port);
mgmt_route.enfport = 1;
+ mgmt_route.tsreg = 0;
+ mgmt_route.takets = takets;
rc = sja1105_dynamic_config_write(priv, BLK_IDX_MGMT_ROUTE,
slot, &mgmt_route, true);
@@ -1446,6 +1800,8 @@ static int sja1105_mgmt_xmit(struct dsa_switch *ds, int port, int slot,
if (!timeout) {
/* Clean up the management route so that a follow-up
* frame may not match on it by mistake.
+		 * This is only supported in hardware on P/Q/R/S - on E/T it is
+		 * a no-op and we silently discard the -EOPNOTSUPP.
*/
sja1105_dynamic_config_write(priv, BLK_IDX_MGMT_ROUTE,
slot, &mgmt_route, false);
@@ -1464,7 +1820,11 @@ static netdev_tx_t sja1105_port_deferred_xmit(struct dsa_switch *ds, int port,
{
struct sja1105_private *priv = ds->priv;
struct sja1105_port *sp = &priv->ports[port];
+ struct skb_shared_hwtstamps shwt = {0};
int slot = sp->mgmt_slot;
+ struct sk_buff *clone;
+ u64 now, ts;
+ int rc;
/* The tragic fact about the switch having 4x2 slots for installing
* management routes is that all of them except one are actually
@@ -1482,8 +1842,36 @@ static netdev_tx_t sja1105_port_deferred_xmit(struct dsa_switch *ds, int port,
*/
mutex_lock(&priv->mgmt_lock);
- sja1105_mgmt_xmit(ds, port, slot, skb);
+ /* The clone, if there, was made by dsa_skb_tx_timestamp */
+ clone = DSA_SKB_CB(skb)->clone;
+
+ sja1105_mgmt_xmit(ds, port, slot, skb, !!clone);
+
+ if (!clone)
+ goto out;
+
+ skb_shinfo(clone)->tx_flags |= SKBTX_IN_PROGRESS;
+
+ mutex_lock(&priv->ptp_lock);
+
+ now = priv->tstamp_cc.read(&priv->tstamp_cc);
+
+ rc = sja1105_ptpegr_ts_poll(priv, slot, &ts);
+ if (rc < 0) {
+ dev_err(ds->dev, "xmit: timed out polling for tstamp\n");
+ kfree_skb(clone);
+ goto out_unlock_ptp;
+ }
+
+ ts = sja1105_tstamp_reconstruct(priv, now, ts);
+ ts = timecounter_cyc2time(&priv->tstamp_tc, ts);
+
+ shwt.hwtstamp = ns_to_ktime(ts);
+ skb_complete_tx_timestamp(clone, &shwt);
+out_unlock_ptp:
+ mutex_unlock(&priv->ptp_lock);
+out:
mutex_unlock(&priv->mgmt_lock);
return NETDEV_TX_OK;
}
@@ -1512,15 +1900,180 @@ static int sja1105_set_ageing_time(struct dsa_switch *ds,
return sja1105_static_config_reload(priv);
}
+/* Caller must hold priv->tagger_data.meta_lock */
+static int sja1105_change_rxtstamping(struct sja1105_private *priv,
+ bool on)
+{
+ struct sja1105_general_params_entry *general_params;
+ struct sja1105_table *table;
+ int rc;
+
+ table = &priv->static_config.tables[BLK_IDX_GENERAL_PARAMS];
+ general_params = table->entries;
+ general_params->send_meta1 = on;
+ general_params->send_meta0 = on;
+
+ rc = sja1105_init_avb_params(priv, on);
+ if (rc < 0)
+ return rc;
+
+ /* Initialize the meta state machine to a known state */
+ if (priv->tagger_data.stampable_skb) {
+ kfree_skb(priv->tagger_data.stampable_skb);
+ priv->tagger_data.stampable_skb = NULL;
+ }
+
+ return sja1105_static_config_reload(priv);
+}
+
+static int sja1105_hwtstamp_set(struct dsa_switch *ds, int port,
+ struct ifreq *ifr)
+{
+ struct sja1105_private *priv = ds->priv;
+ struct hwtstamp_config config;
+ bool rx_on;
+ int rc;
+
+ if (copy_from_user(&config, ifr->ifr_data, sizeof(config)))
+ return -EFAULT;
+
+ switch (config.tx_type) {
+ case HWTSTAMP_TX_OFF:
+ priv->ports[port].hwts_tx_en = false;
+ break;
+ case HWTSTAMP_TX_ON:
+ priv->ports[port].hwts_tx_en = true;
+ break;
+ default:
+ return -ERANGE;
+ }
+
+ switch (config.rx_filter) {
+ case HWTSTAMP_FILTER_NONE:
+ rx_on = false;
+ break;
+ default:
+ rx_on = true;
+ break;
+ }
+
+ if (rx_on != priv->tagger_data.hwts_rx_en) {
+ spin_lock(&priv->tagger_data.meta_lock);
+ rc = sja1105_change_rxtstamping(priv, rx_on);
+ spin_unlock(&priv->tagger_data.meta_lock);
+ if (rc < 0) {
+ dev_err(ds->dev,
+ "Failed to change RX timestamping: %d\n", rc);
+ return -EFAULT;
+ }
+ priv->tagger_data.hwts_rx_en = rx_on;
+ }
+
+ if (copy_to_user(ifr->ifr_data, &config, sizeof(config)))
+ return -EFAULT;
+ return 0;
+}
+
+static int sja1105_hwtstamp_get(struct dsa_switch *ds, int port,
+ struct ifreq *ifr)
+{
+ struct sja1105_private *priv = ds->priv;
+ struct hwtstamp_config config;
+
+ config.flags = 0;
+ if (priv->ports[port].hwts_tx_en)
+ config.tx_type = HWTSTAMP_TX_ON;
+ else
+ config.tx_type = HWTSTAMP_TX_OFF;
+ if (priv->tagger_data.hwts_rx_en)
+ config.rx_filter = HWTSTAMP_FILTER_PTP_V2_L2_EVENT;
+ else
+ config.rx_filter = HWTSTAMP_FILTER_NONE;
+
+ return copy_to_user(ifr->ifr_data, &config, sizeof(config)) ?
+ -EFAULT : 0;
+}
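
The two handlers above back the standard SIOCSHWTSTAMP / SIOCGHWTSTAMP ioctls that userspace PTP stacks issue per netdev. A minimal userspace sketch of the request path follows; the interface name "swp2" is only an assumption, the rest is the stock net_tstamp UAPI.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/net_tstamp.h>
#include <linux/sockios.h>

int main(void)
{
	struct hwtstamp_config cfg = {
		.tx_type   = HWTSTAMP_TX_ON,
		.rx_filter = HWTSTAMP_FILTER_PTP_V2_L2_EVENT,
	};
	struct ifreq ifr;
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	if (fd < 0)
		return 1;

	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, "swp2", IFNAMSIZ - 1);
	ifr.ifr_data = (char *)&cfg;

	/* Ends up in sja1105_hwtstamp_set(), which accepts any filter other
	 * than NONE and enables RX timestamping of PTP_V2_L2_EVENT frames.
	 */
	if (ioctl(fd, SIOCSHWTSTAMP, &ifr) < 0)
		perror("SIOCSHWTSTAMP");
	else
		printf("tx_type=%d rx_filter=%d\n", cfg.tx_type, cfg.rx_filter);

	close(fd);
	return 0;
}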
+
+#define to_tagger(d) \
+ container_of((d), struct sja1105_tagger_data, rxtstamp_work)
+#define to_sja1105(d) \
+ container_of((d), struct sja1105_private, tagger_data)
+
+static void sja1105_rxtstamp_work(struct work_struct *work)
+{
+ struct sja1105_tagger_data *data = to_tagger(work);
+ struct sja1105_private *priv = to_sja1105(data);
+ struct sk_buff *skb;
+ u64 now;
+
+ mutex_lock(&priv->ptp_lock);
+
+ now = priv->tstamp_cc.read(&priv->tstamp_cc);
+
+ while ((skb = skb_dequeue(&data->skb_rxtstamp_queue)) != NULL) {
+ struct skb_shared_hwtstamps *shwt = skb_hwtstamps(skb);
+ u64 ts;
+
+ *shwt = (struct skb_shared_hwtstamps) {0};
+
+ ts = SJA1105_SKB_CB(skb)->meta_tstamp;
+ ts = sja1105_tstamp_reconstruct(priv, now, ts);
+ ts = timecounter_cyc2time(&priv->tstamp_tc, ts);
+
+ shwt->hwtstamp = ns_to_ktime(ts);
+ netif_rx_ni(skb);
+ }
+
+ mutex_unlock(&priv->ptp_lock);
+}
+
+/* Called from dsa_skb_defer_rx_timestamp */
+static bool sja1105_port_rxtstamp(struct dsa_switch *ds, int port,
+ struct sk_buff *skb, unsigned int type)
+{
+ struct sja1105_private *priv = ds->priv;
+ struct sja1105_tagger_data *data = &priv->tagger_data;
+
+ if (!data->hwts_rx_en)
+ return false;
+
+ /* We need to read the full PTP clock to reconstruct the Rx
+ * timestamp. For that we need a sleepable context.
+ */
+ skb_queue_tail(&data->skb_rxtstamp_queue, skb);
+ schedule_work(&data->rxtstamp_work);
+ return true;
+}
+
+/* Called from dsa_skb_tx_timestamp. This callback is just to make DSA clone
+ * the skb and have it available in DSA_SKB_CB in the .port_deferred_xmit
+ * callback, where we will timestamp it synchronously.
+ */
+static bool sja1105_port_txtstamp(struct dsa_switch *ds, int port,
+ struct sk_buff *skb, unsigned int type)
+{
+ struct sja1105_private *priv = ds->priv;
+ struct sja1105_port *sp = &priv->ports[port];
+
+ if (!sp->hwts_tx_en)
+ return false;
+
+ return true;
+}
+
static const struct dsa_switch_ops sja1105_switch_ops = {
.get_tag_protocol = sja1105_get_tag_protocol,
.setup = sja1105_setup,
- .adjust_link = sja1105_adjust_link,
+ .teardown = sja1105_teardown,
.set_ageing_time = sja1105_set_ageing_time,
.phylink_validate = sja1105_phylink_validate,
+ .phylink_mac_config = sja1105_mac_config,
+ .phylink_mac_link_up = sja1105_mac_link_up,
+ .phylink_mac_link_down = sja1105_mac_link_down,
.get_strings = sja1105_get_strings,
.get_ethtool_stats = sja1105_get_ethtool_stats,
.get_sset_count = sja1105_get_sset_count,
+ .get_ts_info = sja1105_get_ts_info,
.port_fdb_dump = sja1105_fdb_dump,
.port_fdb_add = sja1105_fdb_add,
.port_fdb_del = sja1105_fdb_del,
@@ -1535,6 +2088,10 @@ static const struct dsa_switch_ops sja1105_switch_ops = {
.port_mdb_add = sja1105_mdb_add,
.port_mdb_del = sja1105_mdb_del,
.port_deferred_xmit = sja1105_port_deferred_xmit,
+ .port_hwtstamp_get = sja1105_hwtstamp_get,
+ .port_hwtstamp_set = sja1105_hwtstamp_set,
+ .port_rxtstamp = sja1105_port_rxtstamp,
+ .port_txtstamp = sja1105_port_txtstamp,
};
static int sja1105_check_device_id(struct sja1105_private *priv)
@@ -1575,6 +2132,7 @@ static int sja1105_check_device_id(struct sja1105_private *priv)
static int sja1105_probe(struct spi_device *spi)
{
+ struct sja1105_tagger_data *tagger_data;
struct device *dev = &spi->dev;
struct sja1105_private *priv;
struct dsa_switch *ds;
@@ -1629,12 +2187,17 @@ static int sja1105_probe(struct spi_device *spi)
ds->priv = priv;
priv->ds = ds;
+ tagger_data = &priv->tagger_data;
+ skb_queue_head_init(&tagger_data->skb_rxtstamp_queue);
+ INIT_WORK(&tagger_data->rxtstamp_work, sja1105_rxtstamp_work);
+
/* Connections between dsa_port and sja1105_port */
for (i = 0; i < SJA1105_NUM_PORTS; i++) {
struct sja1105_port *sp = &priv->ports[i];
ds->ports[i].priv = sp;
sp->dp = &ds->ports[i];
+ sp->data = tagger_data;
}
mutex_init(&priv->mgmt_lock);
@@ -1645,6 +2208,7 @@ static int sja1105_remove(struct spi_device *spi)
{
struct sja1105_private *priv = spi_get_drvdata(spi);
+ sja1105_ptp_clock_unregister(priv);
dsa_unregister_switch(priv->ds);
sja1105_static_config_free(&priv->static_config);
return 0;
diff --git a/drivers/net/dsa/sja1105/sja1105_ptp.c b/drivers/net/dsa/sja1105/sja1105_ptp.c
new file mode 100644
index 000000000000..d19cfdf681af
--- /dev/null
+++ b/drivers/net/dsa/sja1105/sja1105_ptp.c
@@ -0,0 +1,393 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2019, Vladimir Oltean <olteanv@gmail.com>
+ */
+#include "sja1105.h"
+
+/* The adjfine API clamps ppb between [-32,768,000, 32,768,000], and
+ * therefore scaled_ppm between [-2,147,483,648, 2,147,483,647].
+ * Set the maximum supported ppb to a round value smaller than the maximum.
+ *
+ * In relative terms, this is a +/- 0.032x adjustment of the
+ * free-running counter (0.968x to 1.032x).
+ */
+#define SJA1105_MAX_ADJ_PPB 32000000
+#define SJA1105_SIZE_PTP_CMD 4
+
+/* Timestamps are in units of 8 ns clock ticks (equivalent to a fixed
+ * 125 MHz clock) so the scale factor (MULT / SHIFT) needs to be 8.
+ * Furthermore, wisely pick SHIFT as 28 bits, which translates
+ * MULT into 2^31 (0x80000000). This is the same value around which
+ * the hardware PTPCLKRATE is centered, so the same ppb conversion
+ * arithmetic can be reused.
+ */
+#define SJA1105_CC_SHIFT 28
+#define SJA1105_CC_MULT (8 << SJA1105_CC_SHIFT)
+
+/* The delta conversion overflows 64 bits once the cycle counter advances by
+ * roughly 2^33 ticks between readouts. At the 8 ns counter resolution this
+ * leaves a comfortable 68.71 seconds between refreshes, in the absence of any
+ * other readout.
+ * Approximate this to 1 minute.
+ */
+#define SJA1105_REFRESH_INTERVAL (HZ * 60)
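
A back-of-the-envelope check of those figures, using only the constants defined in this file:

/*
 * counter resolution: SJA1105_CC_MULT >> SJA1105_CC_SHIFT = (8 << 28) >> 28 = 8 ns/tick
 * max delta before (delta * mult) overflows u64: 2^64 / 2^31 = 2^33 ticks
 * 2^33 ticks * 8 ns/tick = 2^36 ns ~= 68.71 s, approximated down to a 60 s refresh
 */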
+
+/* This range is actually +/- SJA1105_MAX_ADJ_PPB
+ * divided by 1000 (ppb -> ppm) and with a 16-bit
+ * "fractional" part (actually fixed point).
+ * |
+ * v
+ * Convert scaled_ppm from the +/- ((10^6) << 16) range
+ * into the +/- (1 << 31) range.
+ *
+ * This forgoes a "ppb" numeric representation (up to NSEC_PER_SEC)
+ * and defines the scaling factor between scaled_ppm and the actual
+ * frequency adjustments (both cycle counter and hardware).
+ *
+ * ptpclkrate = scaled_ppm * 2^31 / (10^6 * 2^16)
+ * simplifies to
+ * ptpclkrate = scaled_ppm * 2^9 / 5^6
+ */
+#define SJA1105_CC_MULT_NUM (1 << 9)
+#define SJA1105_CC_MULT_DEM 15625
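
A worked check of that simplification, using only the two constants just defined:

/*
 * 2^31 / (10^6 * 2^16) = 2^15 / 10^6 = 2^9 / 5^6 = 512 / 15625
 *
 * e.g. a +1 ppm request arrives as scaled_ppm = 1 << 16 = 65536, and
 * 65536 * 512 / 15625 = 2147, which is indeed ~1 ppm of the 2^31-centered
 * mult (2^31 * 10^-6 ~= 2147.5).
 */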
+
+#define ptp_to_sja1105(d) container_of((d), struct sja1105_private, ptp_caps)
+#define cc_to_sja1105(d) container_of((d), struct sja1105_private, tstamp_cc)
+#define dw_to_sja1105(d) container_of((d), struct sja1105_private, refresh_work)
+
+struct sja1105_ptp_cmd {
+ u64 resptp; /* reset */
+};
+
+int sja1105_get_ts_info(struct dsa_switch *ds, int port,
+ struct ethtool_ts_info *info)
+{
+ struct sja1105_private *priv = ds->priv;
+
+ /* Called during cleanup */
+ if (!priv->clock)
+ return -ENODEV;
+
+ info->so_timestamping = SOF_TIMESTAMPING_TX_HARDWARE |
+ SOF_TIMESTAMPING_RX_HARDWARE |
+ SOF_TIMESTAMPING_RAW_HARDWARE;
+ info->tx_types = (1 << HWTSTAMP_TX_OFF) |
+ (1 << HWTSTAMP_TX_ON);
+ info->rx_filters = (1 << HWTSTAMP_FILTER_NONE) |
+ (1 << HWTSTAMP_FILTER_PTP_V2_L2_EVENT);
+ info->phc_index = ptp_clock_index(priv->clock);
+ return 0;
+}
+
+int sja1105et_ptp_cmd(const void *ctx, const void *data)
+{
+ const struct sja1105_ptp_cmd *cmd = data;
+ const struct sja1105_private *priv = ctx;
+ const struct sja1105_regs *regs = priv->info->regs;
+ const int size = SJA1105_SIZE_PTP_CMD;
+ u8 buf[SJA1105_SIZE_PTP_CMD] = {0};
+ /* No need to keep this as part of the structure */
+ u64 valid = 1;
+
+ sja1105_pack(buf, &valid, 31, 31, size);
+ sja1105_pack(buf, &cmd->resptp, 2, 2, size);
+
+ return sja1105_spi_send_packed_buf(priv, SPI_WRITE, regs->ptp_control,
+ buf, SJA1105_SIZE_PTP_CMD);
+}
+
+int sja1105pqrs_ptp_cmd(const void *ctx, const void *data)
+{
+ const struct sja1105_ptp_cmd *cmd = data;
+ const struct sja1105_private *priv = ctx;
+ const struct sja1105_regs *regs = priv->info->regs;
+ const int size = SJA1105_SIZE_PTP_CMD;
+ u8 buf[SJA1105_SIZE_PTP_CMD] = {0};
+ /* No need to keep this as part of the structure */
+ u64 valid = 1;
+
+ sja1105_pack(buf, &valid, 31, 31, size);
+ sja1105_pack(buf, &cmd->resptp, 3, 3, size);
+
+ return sja1105_spi_send_packed_buf(priv, SPI_WRITE, regs->ptp_control,
+ buf, SJA1105_SIZE_PTP_CMD);
+}
+
+/* The switch returns partial timestamps (24 bits for SJA1105 E/T, which wrap
+ * around in 0.135 seconds, and 32 bits for P/Q/R/S, wrapping around in 34.35
+ * seconds).
+ *
+ * This receives the RX or TX MAC timestamps, provided by hardware as
+ * the lower bits of the cycle counter, sampled at the time the timestamp was
+ * collected.
+ *
+ * To reconstruct into a full 64-bit-wide timestamp, the cycle counter is
+ * read and the high-order bits are filled in.
+ *
+ * Must be called within one wraparound period of the partial timestamp since
+ * it was generated by the MAC.
+ */
+u64 sja1105_tstamp_reconstruct(struct sja1105_private *priv, u64 now,
+ u64 ts_partial)
+{
+ u64 partial_tstamp_mask = CYCLECOUNTER_MASK(priv->info->ptp_ts_bits);
+ u64 ts_reconstructed;
+
+ ts_reconstructed = (now & ~partial_tstamp_mask) | ts_partial;
+
+ /* Check lower bits of current cycle counter against the timestamp.
+ * If the current cycle counter is lower than the partial timestamp,
+ * then wraparound surely occurred and must be accounted for.
+ */
+ if ((now & partial_tstamp_mask) <= ts_partial)
+ ts_reconstructed -= (partial_tstamp_mask + 1);
+
+ return ts_reconstructed;
+}
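
The wraparound handling is easiest to verify with a concrete value. Below is a self-contained userspace sketch of the same arithmetic, hard-coded here for the 24-bit E/T case; the function name and sample numbers are illustrative only.

#include <stdint.h>
#include <stdio.h>

/* Mirror of sja1105_tstamp_reconstruct() for ts_bits-wide partial stamps */
static uint64_t reconstruct(uint64_t now, uint64_t ts_partial, int ts_bits)
{
	uint64_t mask = (1ULL << ts_bits) - 1;
	uint64_t ts = (now & ~mask) | ts_partial;

	/* The low bits of 'now' already wrapped past the partial timestamp,
	 * so the timestamp belongs to the previous wrap period.
	 */
	if ((now & mask) <= ts_partial)
		ts -= mask + 1;

	return ts;
}

int main(void)
{
	/* now = 0x1000000005: its low 24 bits (0x000005) are smaller than
	 * the partial timestamp 0xfffffa, so one wraparound happened in
	 * between and the result lands just below 'now'.
	 */
	printf("0x%llx\n",
	       (unsigned long long)reconstruct(0x1000000005ULL, 0xfffffa, 24));
	/* prints 0xffffffffa */
	return 0;
}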
+
+/* Reads the SPI interface for an egress timestamp generated by the switch
+ * for frames sent using management routes.
+ *
+ * SJA1105 E/T layout of the 4-byte SPI payload:
+ *
+ * 31 23 15 7 0
+ * | | | | |
+ * +-----+-----+-----+ ^
+ * ^ |
+ * | |
+ * 24-bit timestamp Update bit
+ *
+ *
+ * SJA1105 P/Q/R/S layout of the 8-byte SPI payload:
+ *
+ * 31 23 15 7 0 63 55 47 39 32
+ * | | | | | | | | | |
+ * ^ +-----+-----+-----+-----+
+ * | ^
+ * | |
+ * Update bit 32-bit timestamp
+ *
+ * Notice that the update bit is in the same place.
+ * To have common code for E/T and P/Q/R/S for reading the timestamp,
+ * we need to juggle with the offset and the bit indices.
+ */
+int sja1105_ptpegr_ts_poll(struct sja1105_private *priv, int port, u64 *ts)
+{
+ const struct sja1105_regs *regs = priv->info->regs;
+ int tstamp_bit_start, tstamp_bit_end;
+ int timeout = 10;
+ u8 packed_buf[8];
+ u64 update;
+ int rc;
+
+ do {
+ rc = sja1105_spi_send_packed_buf(priv, SPI_READ,
+ regs->ptpegr_ts[port],
+ packed_buf,
+ priv->info->ptpegr_ts_bytes);
+ if (rc < 0)
+ return rc;
+
+ sja1105_unpack(packed_buf, &update, 0, 0,
+ priv->info->ptpegr_ts_bytes);
+ if (update)
+ break;
+
+ usleep_range(10, 50);
+ } while (--timeout);
+
+ if (!timeout)
+ return -ETIMEDOUT;
+
+ /* Point the end bit to the second 32-bit word on P/Q/R/S,
+ * no-op on E/T.
+ */
+ tstamp_bit_end = (priv->info->ptpegr_ts_bytes - 4) * 8;
+ /* Shift the 24-bit timestamp on E/T to be collected from 31:8.
+ * No-op on P/Q/R/S.
+ */
+ tstamp_bit_end += 32 - priv->info->ptp_ts_bits;
+ tstamp_bit_start = tstamp_bit_end + priv->info->ptp_ts_bits - 1;
+
+ *ts = 0;
+
+ sja1105_unpack(packed_buf, ts, tstamp_bit_start, tstamp_bit_end,
+ priv->info->ptpegr_ts_bytes);
+
+ return 0;
+}
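
Plugging the two device families' parameters into the offset arithmetic above reproduces the field positions shown in the diagrams:

/*
 * E/T     (ptpegr_ts_bytes = 4, ptp_ts_bits = 24):
 *   tstamp_bit_end   = (4 - 4) * 8 + (32 - 24) =  8
 *   tstamp_bit_start =  8 + 24 - 1             = 31   -> timestamp in bits 31:8
 *
 * P/Q/R/S (ptpegr_ts_bytes = 8, ptp_ts_bits = 32):
 *   tstamp_bit_end   = (8 - 4) * 8 + (32 - 32) = 32
 *   tstamp_bit_start = 32 + 32 - 1             = 63   -> timestamp in bits 63:32
 */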
+
+int sja1105_ptp_reset(struct sja1105_private *priv)
+{
+ struct dsa_switch *ds = priv->ds;
+ struct sja1105_ptp_cmd cmd = {0};
+ int rc;
+
+ mutex_lock(&priv->ptp_lock);
+
+ cmd.resptp = 1;
+ dev_dbg(ds->dev, "Resetting PTP clock\n");
+ rc = priv->info->ptp_cmd(priv, &cmd);
+
+ timecounter_init(&priv->tstamp_tc, &priv->tstamp_cc,
+ ktime_to_ns(ktime_get_real()));
+
+ mutex_unlock(&priv->ptp_lock);
+
+ return rc;
+}
+
+static int sja1105_ptp_gettime(struct ptp_clock_info *ptp,
+ struct timespec64 *ts)
+{
+ struct sja1105_private *priv = ptp_to_sja1105(ptp);
+ u64 ns;
+
+ mutex_lock(&priv->ptp_lock);
+ ns = timecounter_read(&priv->tstamp_tc);
+ mutex_unlock(&priv->ptp_lock);
+
+ *ts = ns_to_timespec64(ns);
+
+ return 0;
+}
+
+static int sja1105_ptp_settime(struct ptp_clock_info *ptp,
+ const struct timespec64 *ts)
+{
+ struct sja1105_private *priv = ptp_to_sja1105(ptp);
+ u64 ns = timespec64_to_ns(ts);
+
+ mutex_lock(&priv->ptp_lock);
+ timecounter_init(&priv->tstamp_tc, &priv->tstamp_cc, ns);
+ mutex_unlock(&priv->ptp_lock);
+
+ return 0;
+}
+
+static int sja1105_ptp_adjfine(struct ptp_clock_info *ptp, long scaled_ppm)
+{
+ struct sja1105_private *priv = ptp_to_sja1105(ptp);
+ s64 clkrate;
+
+ clkrate = (s64)scaled_ppm * SJA1105_CC_MULT_NUM;
+ clkrate = div_s64(clkrate, SJA1105_CC_MULT_DEM);
+
+ mutex_lock(&priv->ptp_lock);
+
+ /* Force a readout to update the timer *before* changing its frequency.
+ *
+ * This way, its corrected time curve can at all times be modeled
+ * as a linear "A * x + B" function, where:
+ *
+ * - B are past frequency adjustments and offset shifts, all
+ * accumulated into the cycle_last variable.
+ *
+	 * - A is the new frequency adjustment we're just about to set.
+ *
+ * Reading now makes B accumulate the correct amount of time,
+ * corrected at the old rate, before changing it.
+ *
+ * Hardware timestamps then become simple points on the curve and
+ * are approximated using the above function. This is still better
+ * than letting the switch take the timestamps using the hardware
+ * rate-corrected clock (PTPCLKVAL) - the comparison in this case would
+ * be that we're shifting the ruler at the same time as we're taking
+ * measurements with it.
+ *
+ * The disadvantage is that it's possible to receive timestamps when
+ * a frequency adjustment took place in the near past.
+ * In this case they will be approximated using the new ppb value
+ * instead of a compound function made of two segments (one at the old
+ * and the other at the new rate) - introducing some inaccuracy.
+ */
+ timecounter_read(&priv->tstamp_tc);
+
+ priv->tstamp_cc.mult = SJA1105_CC_MULT + clkrate;
+
+ mutex_unlock(&priv->ptp_lock);
+
+ return 0;
+}
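
Spelling the comment above out (with the timecounter's fractional remainder omitted): if the forced readout at the moment of the rate change leaves the timecounter at (nsec0, cycle_last = C0), a later cycle count C is converted roughly as shown below.

/*
 * ns(C) ~= nsec0 + ((C - C0) * new_mult) >> SJA1105_CC_SHIFT
 *
 * i.e. B = nsec0, the time already accumulated at the old rate up to C0,
 * and A = new_mult / 2^SJA1105_CC_SHIFT. Without the readout, new_mult
 * would be applied to the whole (C - cycle_last) interval, part of which
 * was actually counted at the old rate.
 */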
+
+static int sja1105_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta)
+{
+ struct sja1105_private *priv = ptp_to_sja1105(ptp);
+
+ mutex_lock(&priv->ptp_lock);
+ timecounter_adjtime(&priv->tstamp_tc, delta);
+ mutex_unlock(&priv->ptp_lock);
+
+ return 0;
+}
+
+static u64 sja1105_ptptsclk_read(const struct cyclecounter *cc)
+{
+ struct sja1105_private *priv = cc_to_sja1105(cc);
+ const struct sja1105_regs *regs = priv->info->regs;
+ u64 ptptsclk = 0;
+ int rc;
+
+ rc = sja1105_spi_send_int(priv, SPI_READ, regs->ptptsclk,
+ &ptptsclk, 8);
+ if (rc < 0)
+ dev_err_ratelimited(priv->ds->dev,
+ "failed to read ptp cycle counter: %d\n",
+ rc);
+ return ptptsclk;
+}
+
+static void sja1105_ptp_overflow_check(struct work_struct *work)
+{
+ struct delayed_work *dw = to_delayed_work(work);
+ struct sja1105_private *priv = dw_to_sja1105(dw);
+ struct timespec64 ts;
+
+ sja1105_ptp_gettime(&priv->ptp_caps, &ts);
+
+ schedule_delayed_work(&priv->refresh_work, SJA1105_REFRESH_INTERVAL);
+}
+
+static const struct ptp_clock_info sja1105_ptp_caps = {
+ .owner = THIS_MODULE,
+ .name = "SJA1105 PHC",
+ .adjfine = sja1105_ptp_adjfine,
+ .adjtime = sja1105_ptp_adjtime,
+ .gettime64 = sja1105_ptp_gettime,
+ .settime64 = sja1105_ptp_settime,
+ .max_adj = SJA1105_MAX_ADJ_PPB,
+};
+
+int sja1105_ptp_clock_register(struct sja1105_private *priv)
+{
+ struct dsa_switch *ds = priv->ds;
+
+ /* Set up the cycle counter */
+ priv->tstamp_cc = (struct cyclecounter) {
+ .read = sja1105_ptptsclk_read,
+ .mask = CYCLECOUNTER_MASK(64),
+ .shift = SJA1105_CC_SHIFT,
+ .mult = SJA1105_CC_MULT,
+ };
+ mutex_init(&priv->ptp_lock);
+ INIT_DELAYED_WORK(&priv->refresh_work, sja1105_ptp_overflow_check);
+
+ schedule_delayed_work(&priv->refresh_work, SJA1105_REFRESH_INTERVAL);
+
+ priv->ptp_caps = sja1105_ptp_caps;
+
+ priv->clock = ptp_clock_register(&priv->ptp_caps, ds->dev);
+ if (IS_ERR_OR_NULL(priv->clock))
+ return PTR_ERR(priv->clock);
+
+ return sja1105_ptp_reset(priv);
+}
+
+void sja1105_ptp_clock_unregister(struct sja1105_private *priv)
+{
+ if (IS_ERR_OR_NULL(priv->clock))
+ return;
+
+ cancel_delayed_work_sync(&priv->refresh_work);
+ ptp_clock_unregister(priv->clock);
+ priv->clock = NULL;
+}
diff --git a/drivers/net/dsa/sja1105/sja1105_ptp.h b/drivers/net/dsa/sja1105/sja1105_ptp.h
new file mode 100644
index 000000000000..af456b0a4d27
--- /dev/null
+++ b/drivers/net/dsa/sja1105/sja1105_ptp.h
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: GPL-2.0
+ * Copyright (c) 2019, Vladimir Oltean <olteanv@gmail.com>
+ */
+#ifndef _SJA1105_PTP_H
+#define _SJA1105_PTP_H
+
+#if IS_ENABLED(CONFIG_NET_DSA_SJA1105_PTP)
+
+int sja1105_ptp_clock_register(struct sja1105_private *priv);
+
+void sja1105_ptp_clock_unregister(struct sja1105_private *priv);
+
+int sja1105_ptpegr_ts_poll(struct sja1105_private *priv, int port, u64 *ts);
+
+int sja1105et_ptp_cmd(const void *ctx, const void *data);
+
+int sja1105pqrs_ptp_cmd(const void *ctx, const void *data);
+
+int sja1105_get_ts_info(struct dsa_switch *ds, int port,
+ struct ethtool_ts_info *ts);
+
+u64 sja1105_tstamp_reconstruct(struct sja1105_private *priv, u64 now,
+ u64 ts_partial);
+
+int sja1105_ptp_reset(struct sja1105_private *priv);
+
+#else
+
+static inline int sja1105_ptp_clock_register(struct sja1105_private *priv)
+{
+ return 0;
+}
+
+static inline void sja1105_ptp_clock_unregister(struct sja1105_private *priv)
+{
+ return;
+}
+
+static inline int
+sja1105_ptpegr_ts_poll(struct sja1105_private *priv, int port, u64 *ts)
+{
+ return 0;
+}
+
+static inline u64 sja1105_tstamp_reconstruct(struct sja1105_private *priv,
+ u64 now, u64 ts_partial)
+{
+ return 0;
+}
+
+static inline int sja1105_ptp_reset(struct sja1105_private *priv)
+{
+ return 0;
+}
+
+#define sja1105et_ptp_cmd NULL
+
+#define sja1105pqrs_ptp_cmd NULL
+
+#define sja1105_get_ts_info NULL
+
+#endif /* IS_ENABLED(CONFIG_NET_DSA_SJA1105_PTP) */
+
+#endif /* _SJA1105_PTP_H */
diff --git a/drivers/net/dsa/sja1105/sja1105_spi.c b/drivers/net/dsa/sja1105/sja1105_spi.c
index 2eb70b8acfc3..84dc603138cf 100644
--- a/drivers/net/dsa/sja1105/sja1105_spi.c
+++ b/drivers/net/dsa/sja1105/sja1105_spi.c
@@ -283,20 +283,22 @@ static int sja1105_cold_reset(const struct sja1105_private *priv)
return priv->info->reset_cmd(priv, &reset);
}
-static int sja1105_inhibit_tx(const struct sja1105_private *priv,
- const unsigned long *port_bitmap)
+int sja1105_inhibit_tx(const struct sja1105_private *priv,
+ unsigned long port_bitmap, bool tx_inhibited)
{
const struct sja1105_regs *regs = priv->info->regs;
u64 inhibit_cmd;
- int port, rc;
+ int rc;
rc = sja1105_spi_send_int(priv, SPI_READ, regs->port_control,
&inhibit_cmd, SJA1105_SIZE_PORT_CTRL);
if (rc < 0)
return rc;
- for_each_set_bit(port, port_bitmap, SJA1105_NUM_PORTS)
- inhibit_cmd |= BIT(port);
+ if (tx_inhibited)
+ inhibit_cmd |= port_bitmap;
+ else
+ inhibit_cmd &= ~port_bitmap;
return sja1105_spi_send_int(priv, SPI_WRITE, regs->port_control,
&inhibit_cmd, SJA1105_SIZE_PORT_CTRL);
@@ -413,7 +415,7 @@ int sja1105_static_config_upload(struct sja1105_private *priv)
* Tx on all ports and waiting for current packet to drain.
* Otherwise, the PHY will see an unterminated Ethernet packet.
*/
- rc = sja1105_inhibit_tx(priv, &port_bitmap);
+ rc = sja1105_inhibit_tx(priv, port_bitmap, true);
if (rc < 0) {
dev_err(dev, "Failed to inhibit Tx on ports\n");
return -ENXIO;
@@ -478,7 +480,12 @@ int sja1105_static_config_upload(struct sja1105_private *priv)
dev_info(dev, "Succeeded after %d tried\n", RETRIES - retries);
}
+ rc = sja1105_ptp_reset(priv);
+ if (rc < 0)
+ dev_err(dev, "Failed to reset PTP clock: %d\n", rc);
+
dev_info(dev, "Reset switch and programmed static config\n");
+
out:
kfree(config_buf);
return rc;
@@ -491,11 +498,10 @@ static struct sja1105_regs sja1105et_regs = {
.port_control = 0x11,
.config = 0x020000,
.rgu = 0x100440,
+ /* UM10944.pdf, Table 86, ACU Register overview */
.pad_mii_tx = {0x100800, 0x100802, 0x100804, 0x100806, 0x100808},
.rmii_pll1 = 0x10000A,
.cgu_idiv = {0x10000B, 0x10000C, 0x10000D, 0x10000E, 0x10000F},
- /* UM10944.pdf, Table 86, ACU Register overview */
- .rgmii_pad_mii_tx = {0x100800, 0x100802, 0x100804, 0x100806, 0x100808},
.mac = {0x200, 0x202, 0x204, 0x206, 0x208},
.mac_hl1 = {0x400, 0x410, 0x420, 0x430, 0x440},
.mac_hl2 = {0x600, 0x610, 0x620, 0x630, 0x640},
@@ -507,6 +513,11 @@ static struct sja1105_regs sja1105et_regs = {
.rgmii_tx_clk = {0x100016, 0x10001D, 0x100024, 0x10002B, 0x100032},
.rmii_ref_clk = {0x100015, 0x10001C, 0x100023, 0x10002A, 0x100031},
.rmii_ext_tx_clk = {0x100018, 0x10001F, 0x100026, 0x10002D, 0x100034},
+ .ptpegr_ts = {0xC0, 0xC2, 0xC4, 0xC6, 0xC8},
+ .ptp_control = 0x17,
+ .ptpclk = 0x18, /* Spans 0x18 to 0x19 */
+ .ptpclkrate = 0x1A,
+ .ptptsclk = 0x1B, /* Spans 0x1B to 0x1C */
};
static struct sja1105_regs sja1105pqrs_regs = {
@@ -516,11 +527,11 @@ static struct sja1105_regs sja1105pqrs_regs = {
.port_control = 0x12,
.config = 0x020000,
.rgu = 0x100440,
+ /* UM10944.pdf, Table 86, ACU Register overview */
.pad_mii_tx = {0x100800, 0x100802, 0x100804, 0x100806, 0x100808},
+ .pad_mii_id = {0x100810, 0x100811, 0x100812, 0x100813, 0x100814},
.rmii_pll1 = 0x10000A,
.cgu_idiv = {0x10000B, 0x10000C, 0x10000D, 0x10000E, 0x10000F},
- /* UM10944.pdf, Table 86, ACU Register overview */
- .rgmii_pad_mii_tx = {0x100800, 0x100802, 0x100804, 0x100806, 0x100808},
.mac = {0x200, 0x202, 0x204, 0x206, 0x208},
.mac_hl1 = {0x400, 0x410, 0x420, 0x430, 0x440},
.mac_hl2 = {0x600, 0x610, 0x620, 0x630, 0x640},
@@ -533,6 +544,11 @@ static struct sja1105_regs sja1105pqrs_regs = {
.rmii_ref_clk = {0x100015, 0x10001B, 0x100021, 0x100027, 0x10002D},
.rmii_ext_tx_clk = {0x100017, 0x10001D, 0x100023, 0x100029, 0x10002F},
.qlevel = {0x604, 0x614, 0x624, 0x634, 0x644},
+ .ptpegr_ts = {0xC0, 0xC4, 0xC8, 0xCC, 0xD0},
+ .ptp_control = 0x18,
+ .ptpclk = 0x19,
+ .ptpclkrate = 0x1B,
+ .ptptsclk = 0x1C,
};
struct sja1105_info sja1105e_info = {
@@ -540,7 +556,12 @@ struct sja1105_info sja1105e_info = {
.part_no = SJA1105ET_PART_NO,
.static_ops = sja1105e_table_ops,
.dyn_ops = sja1105et_dyn_ops,
+ .ptp_ts_bits = 24,
+ .ptpegr_ts_bytes = 4,
.reset_cmd = sja1105et_reset_cmd,
+ .fdb_add_cmd = sja1105et_fdb_add,
+ .fdb_del_cmd = sja1105et_fdb_del,
+ .ptp_cmd = sja1105et_ptp_cmd,
.regs = &sja1105et_regs,
.name = "SJA1105E",
};
@@ -549,7 +570,12 @@ struct sja1105_info sja1105t_info = {
.part_no = SJA1105ET_PART_NO,
.static_ops = sja1105t_table_ops,
.dyn_ops = sja1105et_dyn_ops,
+ .ptp_ts_bits = 24,
+ .ptpegr_ts_bytes = 4,
.reset_cmd = sja1105et_reset_cmd,
+ .fdb_add_cmd = sja1105et_fdb_add,
+ .fdb_del_cmd = sja1105et_fdb_del,
+ .ptp_cmd = sja1105et_ptp_cmd,
.regs = &sja1105et_regs,
.name = "SJA1105T",
};
@@ -558,7 +584,13 @@ struct sja1105_info sja1105p_info = {
.part_no = SJA1105P_PART_NO,
.static_ops = sja1105p_table_ops,
.dyn_ops = sja1105pqrs_dyn_ops,
+ .ptp_ts_bits = 32,
+ .ptpegr_ts_bytes = 8,
+ .setup_rgmii_delay = sja1105pqrs_setup_rgmii_delay,
.reset_cmd = sja1105pqrs_reset_cmd,
+ .fdb_add_cmd = sja1105pqrs_fdb_add,
+ .fdb_del_cmd = sja1105pqrs_fdb_del,
+ .ptp_cmd = sja1105pqrs_ptp_cmd,
.regs = &sja1105pqrs_regs,
.name = "SJA1105P",
};
@@ -567,7 +599,13 @@ struct sja1105_info sja1105q_info = {
.part_no = SJA1105Q_PART_NO,
.static_ops = sja1105q_table_ops,
.dyn_ops = sja1105pqrs_dyn_ops,
+ .ptp_ts_bits = 32,
+ .ptpegr_ts_bytes = 8,
+ .setup_rgmii_delay = sja1105pqrs_setup_rgmii_delay,
.reset_cmd = sja1105pqrs_reset_cmd,
+ .fdb_add_cmd = sja1105pqrs_fdb_add,
+ .fdb_del_cmd = sja1105pqrs_fdb_del,
+ .ptp_cmd = sja1105pqrs_ptp_cmd,
.regs = &sja1105pqrs_regs,
.name = "SJA1105Q",
};
@@ -576,7 +614,13 @@ struct sja1105_info sja1105r_info = {
.part_no = SJA1105R_PART_NO,
.static_ops = sja1105r_table_ops,
.dyn_ops = sja1105pqrs_dyn_ops,
+ .ptp_ts_bits = 32,
+ .ptpegr_ts_bytes = 8,
+ .setup_rgmii_delay = sja1105pqrs_setup_rgmii_delay,
.reset_cmd = sja1105pqrs_reset_cmd,
+ .fdb_add_cmd = sja1105pqrs_fdb_add,
+ .fdb_del_cmd = sja1105pqrs_fdb_del,
+ .ptp_cmd = sja1105pqrs_ptp_cmd,
.regs = &sja1105pqrs_regs,
.name = "SJA1105R",
};
@@ -586,6 +630,12 @@ struct sja1105_info sja1105s_info = {
.static_ops = sja1105s_table_ops,
.dyn_ops = sja1105pqrs_dyn_ops,
.regs = &sja1105pqrs_regs,
+ .ptp_ts_bits = 32,
+ .ptpegr_ts_bytes = 8,
+ .setup_rgmii_delay = sja1105pqrs_setup_rgmii_delay,
.reset_cmd = sja1105pqrs_reset_cmd,
+ .fdb_add_cmd = sja1105pqrs_fdb_add,
+ .fdb_del_cmd = sja1105pqrs_fdb_del,
+ .ptp_cmd = sja1105pqrs_ptp_cmd,
.name = "SJA1105S",
};
diff --git a/drivers/net/dsa/sja1105/sja1105_static_config.c b/drivers/net/dsa/sja1105/sja1105_static_config.c
index b3c992b0abb0..b31c737dc560 100644
--- a/drivers/net/dsa/sja1105/sja1105_static_config.c
+++ b/drivers/net/dsa/sja1105/sja1105_static_config.c
@@ -91,6 +91,28 @@ u32 sja1105_crc32(const void *buf, size_t len)
return ~crc;
}
+static size_t sja1105et_avb_params_entry_packing(void *buf, void *entry_ptr,
+ enum packing_op op)
+{
+ const size_t size = SJA1105ET_SIZE_AVB_PARAMS_ENTRY;
+ struct sja1105_avb_params_entry *entry = entry_ptr;
+
+ sja1105_packing(buf, &entry->destmeta, 95, 48, size, op);
+ sja1105_packing(buf, &entry->srcmeta, 47, 0, size, op);
+ return size;
+}
+
+static size_t sja1105pqrs_avb_params_entry_packing(void *buf, void *entry_ptr,
+ enum packing_op op)
+{
+ const size_t size = SJA1105PQRS_SIZE_AVB_PARAMS_ENTRY;
+ struct sja1105_avb_params_entry *entry = entry_ptr;
+
+ sja1105_packing(buf, &entry->destmeta, 125, 78, size, op);
+ sja1105_packing(buf, &entry->srcmeta, 77, 30, size, op);
+ return size;
+}
+
static size_t sja1105et_general_params_entry_packing(void *buf, void *entry_ptr,
enum packing_op op)
{
@@ -208,11 +230,20 @@ sja1105pqrs_l2_lookup_params_entry_packing(void *buf, void *entry_ptr,
{
const size_t size = SJA1105PQRS_SIZE_L2_LOOKUP_PARAMS_ENTRY;
struct sja1105_l2_lookup_params_entry *entry = entry_ptr;
+ int offset, i;
+ for (i = 0, offset = 58; i < 5; i++, offset += 11)
+ sja1105_packing(buf, &entry->maxaddrp[i],
+ offset + 10, offset + 0, size, op);
sja1105_packing(buf, &entry->maxage, 57, 43, size, op);
+ sja1105_packing(buf, &entry->start_dynspc, 42, 33, size, op);
+ sja1105_packing(buf, &entry->drpnolearn, 32, 28, size, op);
sja1105_packing(buf, &entry->shared_learn, 27, 27, size, op);
sja1105_packing(buf, &entry->no_enf_hostprt, 26, 26, size, op);
sja1105_packing(buf, &entry->no_mgmt_learn, 25, 25, size, op);
+ sja1105_packing(buf, &entry->use_static, 24, 24, size, op);
+ sja1105_packing(buf, &entry->owr_dyn, 23, 23, size, op);
+ sja1105_packing(buf, &entry->learn_once, 22, 22, size, op);
return size;
}
@@ -236,10 +267,20 @@ size_t sja1105pqrs_l2_lookup_entry_packing(void *buf, void *entry_ptr,
const size_t size = SJA1105PQRS_SIZE_L2_LOOKUP_ENTRY;
struct sja1105_l2_lookup_entry *entry = entry_ptr;
- /* These are static L2 lookup entries, so the structure
- * should match UM11040 Table 16/17 definitions when
- * LOCKEDS is 1.
- */
+ if (entry->lockeds) {
+ sja1105_packing(buf, &entry->tsreg, 159, 159, size, op);
+ sja1105_packing(buf, &entry->mirrvlan, 158, 147, size, op);
+ sja1105_packing(buf, &entry->takets, 146, 146, size, op);
+ sja1105_packing(buf, &entry->mirr, 145, 145, size, op);
+ sja1105_packing(buf, &entry->retag, 144, 144, size, op);
+ } else {
+ sja1105_packing(buf, &entry->touched, 159, 159, size, op);
+ sja1105_packing(buf, &entry->age, 158, 144, size, op);
+ }
+ sja1105_packing(buf, &entry->mask_iotag, 143, 143, size, op);
+ sja1105_packing(buf, &entry->mask_vlanid, 142, 131, size, op);
+ sja1105_packing(buf, &entry->mask_macaddr, 130, 83, size, op);
+ sja1105_packing(buf, &entry->iotag, 82, 82, size, op);
sja1105_packing(buf, &entry->vlanid, 81, 70, size, op);
sja1105_packing(buf, &entry->macaddr, 69, 22, size, op);
sja1105_packing(buf, &entry->destports, 21, 17, size, op);
@@ -413,6 +454,7 @@ static u64 blk_id_map[BLK_IDX_MAX] = {
[BLK_IDX_MAC_CONFIG] = BLKID_MAC_CONFIG,
[BLK_IDX_L2_LOOKUP_PARAMS] = BLKID_L2_LOOKUP_PARAMS,
[BLK_IDX_L2_FORWARDING_PARAMS] = BLKID_L2_FORWARDING_PARAMS,
+ [BLK_IDX_AVB_PARAMS] = BLKID_AVB_PARAMS,
[BLK_IDX_GENERAL_PARAMS] = BLKID_GENERAL_PARAMS,
[BLK_IDX_XMII_PARAMS] = BLKID_XMII_PARAMS,
};
@@ -442,7 +484,7 @@ const char *sja1105_static_config_error_msg[] = {
"vl-forwarding-parameters-table.partspc.",
};
-sja1105_config_valid_t
+static sja1105_config_valid_t
static_config_check_memory_size(const struct sja1105_table *tables)
{
const struct sja1105_l2_forwarding_params_entry *l2_fwd_params;
@@ -614,6 +656,12 @@ struct sja1105_table_ops sja1105e_table_ops[BLK_IDX_MAX] = {
.packed_entry_size = SJA1105_SIZE_L2_FORWARDING_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_L2_FORWARDING_PARAMS_COUNT,
},
+ [BLK_IDX_AVB_PARAMS] = {
+ .packing = sja1105et_avb_params_entry_packing,
+ .unpacked_entry_size = sizeof(struct sja1105_avb_params_entry),
+ .packed_entry_size = SJA1105ET_SIZE_AVB_PARAMS_ENTRY,
+ .max_entry_count = SJA1105_MAX_AVB_PARAMS_COUNT,
+ },
[BLK_IDX_GENERAL_PARAMS] = {
.packing = sja1105et_general_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_general_params_entry),
@@ -672,6 +720,12 @@ struct sja1105_table_ops sja1105t_table_ops[BLK_IDX_MAX] = {
.packed_entry_size = SJA1105_SIZE_L2_FORWARDING_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_L2_FORWARDING_PARAMS_COUNT,
},
+ [BLK_IDX_AVB_PARAMS] = {
+ .packing = sja1105et_avb_params_entry_packing,
+ .unpacked_entry_size = sizeof(struct sja1105_avb_params_entry),
+ .packed_entry_size = SJA1105ET_SIZE_AVB_PARAMS_ENTRY,
+ .max_entry_count = SJA1105_MAX_AVB_PARAMS_COUNT,
+ },
[BLK_IDX_GENERAL_PARAMS] = {
.packing = sja1105et_general_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_general_params_entry),
@@ -730,6 +784,12 @@ struct sja1105_table_ops sja1105p_table_ops[BLK_IDX_MAX] = {
.packed_entry_size = SJA1105_SIZE_L2_FORWARDING_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_L2_FORWARDING_PARAMS_COUNT,
},
+ [BLK_IDX_AVB_PARAMS] = {
+ .packing = sja1105pqrs_avb_params_entry_packing,
+ .unpacked_entry_size = sizeof(struct sja1105_avb_params_entry),
+ .packed_entry_size = SJA1105PQRS_SIZE_AVB_PARAMS_ENTRY,
+ .max_entry_count = SJA1105_MAX_AVB_PARAMS_COUNT,
+ },
[BLK_IDX_GENERAL_PARAMS] = {
.packing = sja1105pqrs_general_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_general_params_entry),
@@ -788,6 +848,12 @@ struct sja1105_table_ops sja1105q_table_ops[BLK_IDX_MAX] = {
.packed_entry_size = SJA1105_SIZE_L2_FORWARDING_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_L2_FORWARDING_PARAMS_COUNT,
},
+ [BLK_IDX_AVB_PARAMS] = {
+ .packing = sja1105pqrs_avb_params_entry_packing,
+ .unpacked_entry_size = sizeof(struct sja1105_avb_params_entry),
+ .packed_entry_size = SJA1105PQRS_SIZE_AVB_PARAMS_ENTRY,
+ .max_entry_count = SJA1105_MAX_AVB_PARAMS_COUNT,
+ },
[BLK_IDX_GENERAL_PARAMS] = {
.packing = sja1105pqrs_general_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_general_params_entry),
@@ -846,6 +912,12 @@ struct sja1105_table_ops sja1105r_table_ops[BLK_IDX_MAX] = {
.packed_entry_size = SJA1105_SIZE_L2_FORWARDING_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_L2_FORWARDING_PARAMS_COUNT,
},
+ [BLK_IDX_AVB_PARAMS] = {
+ .packing = sja1105pqrs_avb_params_entry_packing,
+ .unpacked_entry_size = sizeof(struct sja1105_avb_params_entry),
+ .packed_entry_size = SJA1105PQRS_SIZE_AVB_PARAMS_ENTRY,
+ .max_entry_count = SJA1105_MAX_AVB_PARAMS_COUNT,
+ },
[BLK_IDX_GENERAL_PARAMS] = {
.packing = sja1105pqrs_general_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_general_params_entry),
@@ -904,6 +976,12 @@ struct sja1105_table_ops sja1105s_table_ops[BLK_IDX_MAX] = {
.packed_entry_size = SJA1105_SIZE_L2_FORWARDING_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_L2_FORWARDING_PARAMS_COUNT,
},
+ [BLK_IDX_AVB_PARAMS] = {
+ .packing = sja1105pqrs_avb_params_entry_packing,
+ .unpacked_entry_size = sizeof(struct sja1105_avb_params_entry),
+ .packed_entry_size = SJA1105PQRS_SIZE_AVB_PARAMS_ENTRY,
+ .max_entry_count = SJA1105_MAX_AVB_PARAMS_COUNT,
+ },
[BLK_IDX_GENERAL_PARAMS] = {
.packing = sja1105pqrs_general_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_general_params_entry),
diff --git a/drivers/net/dsa/sja1105/sja1105_static_config.h b/drivers/net/dsa/sja1105/sja1105_static_config.h
index 069ca8fd059c..684465fc0882 100644
--- a/drivers/net/dsa/sja1105/sja1105_static_config.h
+++ b/drivers/net/dsa/sja1105/sja1105_static_config.h
@@ -20,10 +20,12 @@
#define SJA1105ET_SIZE_MAC_CONFIG_ENTRY 28
#define SJA1105ET_SIZE_L2_LOOKUP_PARAMS_ENTRY 4
#define SJA1105ET_SIZE_GENERAL_PARAMS_ENTRY 40
+#define SJA1105ET_SIZE_AVB_PARAMS_ENTRY 12
#define SJA1105PQRS_SIZE_L2_LOOKUP_ENTRY 20
#define SJA1105PQRS_SIZE_MAC_CONFIG_ENTRY 32
#define SJA1105PQRS_SIZE_L2_LOOKUP_PARAMS_ENTRY 16
#define SJA1105PQRS_SIZE_GENERAL_PARAMS_ENTRY 44
+#define SJA1105PQRS_SIZE_AVB_PARAMS_ENTRY 16
/* UM10944.pdf Page 11, Table 2. Configuration Blocks */
enum {
@@ -34,6 +36,7 @@ enum {
BLKID_MAC_CONFIG = 0x09,
BLKID_L2_LOOKUP_PARAMS = 0x0D,
BLKID_L2_FORWARDING_PARAMS = 0x0E,
+ BLKID_AVB_PARAMS = 0x10,
BLKID_GENERAL_PARAMS = 0x11,
BLKID_XMII_PARAMS = 0x4E,
};
@@ -46,6 +49,7 @@ enum sja1105_blk_idx {
BLK_IDX_MAC_CONFIG,
BLK_IDX_L2_LOOKUP_PARAMS,
BLK_IDX_L2_FORWARDING_PARAMS,
+ BLK_IDX_AVB_PARAMS,
BLK_IDX_GENERAL_PARAMS,
BLK_IDX_XMII_PARAMS,
BLK_IDX_MAX,
@@ -64,6 +68,7 @@ enum sja1105_blk_idx {
#define SJA1105_MAX_L2_FORWARDING_PARAMS_COUNT 1
#define SJA1105_MAX_GENERAL_PARAMS_COUNT 1
#define SJA1105_MAX_XMII_PARAMS_COUNT 1
+#define SJA1105_MAX_AVB_PARAMS_COUNT 1
#define SJA1105_MAX_FRAME_MEMORY 929
@@ -122,9 +127,36 @@ struct sja1105_l2_lookup_entry {
u64 destports;
u64 enfport;
u64 index;
+ /* P/Q/R/S only */
+ u64 mask_iotag;
+ u64 mask_vlanid;
+ u64 mask_macaddr;
+ u64 iotag;
+ u64 lockeds;
+ union {
+ /* LOCKEDS=1: Static FDB entries */
+ struct {
+ u64 tsreg;
+ u64 mirrvlan;
+ u64 takets;
+ u64 mirr;
+ u64 retag;
+ };
+ /* LOCKEDS=0: Dynamically learned FDB entries */
+ struct {
+ u64 touched;
+ u64 age;
+ };
+ };
};
struct sja1105_l2_lookup_params_entry {
+ u64 maxaddrp[5]; /* P/Q/R/S only */
+ u64 start_dynspc; /* P/Q/R/S only */
+ u64 drpnolearn; /* P/Q/R/S only */
+ u64 use_static; /* P/Q/R/S only */
+ u64 owr_dyn; /* P/Q/R/S only */
+ u64 learn_once; /* P/Q/R/S only */
u64 maxage; /* Shared */
u64 dyn_tbsz; /* E/T only */
u64 poly; /* E/T only */
@@ -153,6 +185,11 @@ struct sja1105_l2_policing_entry {
u64 partition;
};
+struct sja1105_avb_params_entry {
+ u64 destmeta;
+ u64 srcmeta;
+};
+
struct sja1105_mac_config_entry {
u64 top[8];
u64 base[8];
diff --git a/drivers/net/dsa/vitesse-vsc73xx.c b/drivers/net/dsa/vitesse-vsc73xx-core.c
index d4780610ea8a..614377ef7956 100644
--- a/drivers/net/dsa/vitesse-vsc73xx.c
+++ b/drivers/net/dsa/vitesse-vsc73xx-core.c
@@ -10,10 +10,6 @@
* handling the switch in a memory-mapped manner by connecting to that external
* CPU's memory bus.
*
- * This driver (currently) only takes control of the switch chip over SPI and
- * configures it to route packages around when connected to a CPU port. The
- * chip has embedded PHYs and VLAN support so we model it using DSA.
- *
* Copyright (C) 2018 Linus Wallej <linus.walleij@linaro.org>
* Includes portions of code from the firmware uploader by:
* Copyright (C) 2009 Gabor Juhos <juhosg@openwrt.org>
@@ -24,8 +20,6 @@
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/of_mdio.h>
-#include <linux/platform_device.h>
-#include <linux/spi/spi.h>
#include <linux/bitops.h>
#include <linux/if_bridge.h>
#include <linux/etherdevice.h>
@@ -34,6 +28,8 @@
#include <linux/random.h>
#include <net/dsa.h>
+#include "vitesse-vsc73xx.h"
+
#define VSC73XX_BLOCK_MAC 0x1 /* Subblocks 0-4, 6 (CPU port) */
#define VSC73XX_BLOCK_ANALYZER 0x2 /* Only subblock 0 */
#define VSC73XX_BLOCK_MII 0x3 /* Subblocks 0 and 1 */
@@ -255,13 +251,6 @@
#define VSC73XX_GLORESET_PHY_RESET BIT(1)
#define VSC73XX_GLORESET_MASTER_RESET BIT(0)
-#define VSC73XX_CMD_MODE_READ 0
-#define VSC73XX_CMD_MODE_WRITE 1
-#define VSC73XX_CMD_MODE_SHIFT 4
-#define VSC73XX_CMD_BLOCK_SHIFT 5
-#define VSC73XX_CMD_BLOCK_MASK 0x7
-#define VSC73XX_CMD_SUBBLOCK_MASK 0xf
-
#define VSC7385_CLOCK_DELAY ((3 << 4) | 3)
#define VSC7385_CLOCK_DELAY_MASK ((3 << 4) | 3)
@@ -274,20 +263,6 @@
VSC73XX_ICPU_CTRL_CLK_EN | \
VSC73XX_ICPU_CTRL_SRST)
-/**
- * struct vsc73xx - VSC73xx state container
- */
-struct vsc73xx {
- struct device *dev;
- struct gpio_desc *reset;
- struct spi_device *spi;
- struct dsa_switch *ds;
- struct gpio_chip gc;
- u16 chipid;
- u8 addr[ETH_ALEN];
- struct mutex lock; /* Protects SPI traffic */
-};
-
#define IS_7385(a) ((a)->chipid == VSC73XX_CHIPID_ID_7385)
#define IS_7388(a) ((a)->chipid == VSC73XX_CHIPID_ID_7388)
#define IS_7395(a) ((a)->chipid == VSC73XX_CHIPID_ID_7395)
@@ -365,7 +340,7 @@ static const struct vsc73xx_counter vsc73xx_tx_counters[] = {
{ 29, "TxQoSClass3" }, /* non-standard counter */
};
-static int vsc73xx_is_addr_valid(u8 block, u8 subblock)
+int vsc73xx_is_addr_valid(u8 block, u8 subblock)
{
switch (block) {
case VSC73XX_BLOCK_MAC:
@@ -396,96 +371,18 @@ static int vsc73xx_is_addr_valid(u8 block, u8 subblock)
return 0;
}
-
-static u8 vsc73xx_make_addr(u8 mode, u8 block, u8 subblock)
-{
- u8 ret;
-
- ret = (block & VSC73XX_CMD_BLOCK_MASK) << VSC73XX_CMD_BLOCK_SHIFT;
- ret |= (mode & 1) << VSC73XX_CMD_MODE_SHIFT;
- ret |= subblock & VSC73XX_CMD_SUBBLOCK_MASK;
-
- return ret;
-}
+EXPORT_SYMBOL(vsc73xx_is_addr_valid);
static int vsc73xx_read(struct vsc73xx *vsc, u8 block, u8 subblock, u8 reg,
u32 *val)
{
- struct spi_transfer t[2];
- struct spi_message m;
- u8 cmd[4];
- u8 buf[4];
- int ret;
-
- if (!vsc73xx_is_addr_valid(block, subblock))
- return -EINVAL;
-
- spi_message_init(&m);
-
- memset(&t, 0, sizeof(t));
-
- t[0].tx_buf = cmd;
- t[0].len = sizeof(cmd);
- spi_message_add_tail(&t[0], &m);
-
- t[1].rx_buf = buf;
- t[1].len = sizeof(buf);
- spi_message_add_tail(&t[1], &m);
-
- cmd[0] = vsc73xx_make_addr(VSC73XX_CMD_MODE_READ, block, subblock);
- cmd[1] = reg;
- cmd[2] = 0;
- cmd[3] = 0;
-
- mutex_lock(&vsc->lock);
- ret = spi_sync(vsc->spi, &m);
- mutex_unlock(&vsc->lock);
-
- if (ret)
- return ret;
-
- *val = (buf[0] << 24) | (buf[1] << 16) | (buf[2] << 8) | buf[3];
-
- return 0;
+ return vsc->ops->read(vsc, block, subblock, reg, val);
}
static int vsc73xx_write(struct vsc73xx *vsc, u8 block, u8 subblock, u8 reg,
u32 val)
{
- struct spi_transfer t[2];
- struct spi_message m;
- u8 cmd[2];
- u8 buf[4];
- int ret;
-
- if (!vsc73xx_is_addr_valid(block, subblock))
- return -EINVAL;
-
- spi_message_init(&m);
-
- memset(&t, 0, sizeof(t));
-
- t[0].tx_buf = cmd;
- t[0].len = sizeof(cmd);
- spi_message_add_tail(&t[0], &m);
-
- t[1].tx_buf = buf;
- t[1].len = sizeof(buf);
- spi_message_add_tail(&t[1], &m);
-
- cmd[0] = vsc73xx_make_addr(VSC73XX_CMD_MODE_WRITE, block, subblock);
- cmd[1] = reg;
-
- buf[0] = (val >> 24) & 0xff;
- buf[1] = (val >> 16) & 0xff;
- buf[2] = (val >> 8) & 0xff;
- buf[3] = val & 0xff;
-
- mutex_lock(&vsc->lock);
- ret = spi_sync(vsc->spi, &m);
- mutex_unlock(&vsc->lock);
-
- return ret;
+ return vsc->ops->write(vsc, block, subblock, reg, val);
}
static int vsc73xx_update_bits(struct vsc73xx *vsc, u8 block, u8 subblock,
@@ -520,22 +417,8 @@ static int vsc73xx_detect(struct vsc73xx *vsc)
}
if (val == 0xffffffff) {
- dev_info(vsc->dev, "chip seems dead, assert reset\n");
- gpiod_set_value_cansleep(vsc->reset, 1);
- /* Reset pulse should be 20ns minimum, according to datasheet
- * table 245, so 10us should be fine
- */
- usleep_range(10, 100);
- gpiod_set_value_cansleep(vsc->reset, 0);
- /* Wait 20ms according to datasheet table 245 */
- msleep(20);
-
- ret = vsc73xx_read(vsc, VSC73XX_BLOCK_SYSTEM, 0,
- VSC73XX_ICPU_MBOX_VAL, &val);
- if (val == 0xffffffff) {
- dev_err(vsc->dev, "seems not to help, giving up\n");
- return -ENODEV;
- }
+ dev_info(vsc->dev, "chip seems dead.\n");
+ return -EAGAIN;
}
ret = vsc73xx_read(vsc, VSC73XX_BLOCK_SYSTEM, 0,
@@ -586,9 +469,8 @@ static int vsc73xx_detect(struct vsc73xx *vsc)
}
if (icpu_si_boot_en && !icpu_pi_en) {
dev_err(vsc->dev,
- "iCPU enabled boots from SI, no external memory\n");
- dev_err(vsc->dev, "no idea how to deal with this\n");
- return -ENODEV;
+ "iCPU enabled boots from PI/SI, no external memory\n");
+ return -EAGAIN;
}
if (!icpu_si_boot_en && icpu_pi_en) {
dev_err(vsc->dev,
@@ -1245,21 +1127,11 @@ static int vsc73xx_gpio_probe(struct vsc73xx *vsc)
return 0;
}
-static int vsc73xx_probe(struct spi_device *spi)
+int vsc73xx_probe(struct vsc73xx *vsc)
{
- struct device *dev = &spi->dev;
- struct vsc73xx *vsc;
+ struct device *dev = vsc->dev;
int ret;
- vsc = devm_kzalloc(dev, sizeof(*vsc), GFP_KERNEL);
- if (!vsc)
- return -ENOMEM;
-
- spi_set_drvdata(spi, vsc);
- vsc->spi = spi_dev_get(spi);
- vsc->dev = dev;
- mutex_init(&vsc->lock);
-
/* Release reset, if any */
vsc->reset = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_LOW);
if (IS_ERR(vsc->reset)) {
@@ -1270,15 +1142,20 @@ static int vsc73xx_probe(struct spi_device *spi)
/* Wait 20ms according to datasheet table 245 */
msleep(20);
- spi->mode = SPI_MODE_0;
- spi->bits_per_word = 8;
- ret = spi_setup(spi);
- if (ret < 0) {
- dev_err(dev, "spi setup failed.\n");
- return ret;
- }
-
ret = vsc73xx_detect(vsc);
+ if (ret == -EAGAIN) {
+ dev_err(vsc->dev,
+ "Chip seems to be out of control. Assert reset and try again.\n");
+ gpiod_set_value_cansleep(vsc->reset, 1);
+ /* Reset pulse should be 20ns minimum, according to datasheet
+ * table 245, so 10us should be fine
+ */
+ usleep_range(10, 100);
+ gpiod_set_value_cansleep(vsc->reset, 0);
+ /* Wait 20ms according to datasheet table 245 */
+ msleep(20);
+ ret = vsc73xx_detect(vsc);
+ }
if (ret) {
dev_err(dev, "no chip found (%d)\n", ret);
return -ENODEV;
@@ -1321,43 +1198,16 @@ static int vsc73xx_probe(struct spi_device *spi)
return 0;
}
+EXPORT_SYMBOL(vsc73xx_probe);
-static int vsc73xx_remove(struct spi_device *spi)
+int vsc73xx_remove(struct vsc73xx *vsc)
{
- struct vsc73xx *vsc = spi_get_drvdata(spi);
-
dsa_unregister_switch(vsc->ds);
gpiod_set_value(vsc->reset, 1);
return 0;
}
-
-static const struct of_device_id vsc73xx_of_match[] = {
- {
- .compatible = "vitesse,vsc7385",
- },
- {
- .compatible = "vitesse,vsc7388",
- },
- {
- .compatible = "vitesse,vsc7395",
- },
- {
- .compatible = "vitesse,vsc7398",
- },
- { },
-};
-MODULE_DEVICE_TABLE(of, vsc73xx_of_match);
-
-static struct spi_driver vsc73xx_driver = {
- .probe = vsc73xx_probe,
- .remove = vsc73xx_remove,
- .driver = {
- .name = "vsc73xx",
- .of_match_table = vsc73xx_of_match,
- },
-};
-module_spi_driver(vsc73xx_driver);
+EXPORT_SYMBOL(vsc73xx_remove);
MODULE_AUTHOR("Linus Walleij <linus.walleij@linaro.org>");
MODULE_DESCRIPTION("Vitesse VSC7385/7388/7395/7398 driver");
diff --git a/drivers/net/dsa/vitesse-vsc73xx-platform.c b/drivers/net/dsa/vitesse-vsc73xx-platform.c
new file mode 100644
index 000000000000..0541785f9fee
--- /dev/null
+++ b/drivers/net/dsa/vitesse-vsc73xx-platform.c
@@ -0,0 +1,164 @@
+// SPDX-License-Identifier: GPL-2.0
+/* DSA driver for:
+ * Vitesse VSC7385 SparX-G5 5+1-port Integrated Gigabit Ethernet Switch
+ * Vitesse VSC7388 SparX-G8 8-port Integrated Gigabit Ethernet Switch
+ * Vitesse VSC7395 SparX-G5e 5+1-port Integrated Gigabit Ethernet Switch
+ * Vitesse VSC7398 SparX-G8e 8-port Integrated Gigabit Ethernet Switch
+ *
+ * This driver takes control of the switch chip connected over a CPU-attached
+ * address bus and configures it to route packets around when connected to
+ * a CPU port.
+ *
+ * Copyright (C) 2019 Pawel Dembicki <paweldembicki@gmail.com>
+ * Based on vitesse-vsc-spi.c by:
+ * Copyright (C) 2018 Linus Wallej <linus.walleij@linaro.org>
+ * Includes portions of code from the firmware uploader by:
+ * Copyright (C) 2009 Gabor Juhos <juhosg@openwrt.org>
+ */
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/platform_device.h>
+
+#include "vitesse-vsc73xx.h"
+
+#define VSC73XX_CMD_PLATFORM_BLOCK_SHIFT 14
+#define VSC73XX_CMD_PLATFORM_BLOCK_MASK 0x7
+#define VSC73XX_CMD_PLATFORM_SUBBLOCK_SHIFT 10
+#define VSC73XX_CMD_PLATFORM_SUBBLOCK_MASK 0xf
+#define VSC73XX_CMD_PLATFORM_REGISTER_SHIFT 2
+
+/**
+ * struct vsc73xx_platform - VSC73xx Platform state container
+ */
+struct vsc73xx_platform {
+ struct platform_device *pdev;
+ void __iomem *base_addr;
+ struct vsc73xx vsc;
+};
+
+static const struct vsc73xx_ops vsc73xx_platform_ops;
+
+static u32 vsc73xx_make_addr(u8 block, u8 subblock, u8 reg)
+{
+ u32 ret;
+
+ ret = (block & VSC73XX_CMD_PLATFORM_BLOCK_MASK)
+ << VSC73XX_CMD_PLATFORM_BLOCK_SHIFT;
+ ret |= (subblock & VSC73XX_CMD_PLATFORM_SUBBLOCK_MASK)
+ << VSC73XX_CMD_PLATFORM_SUBBLOCK_SHIFT;
+ ret |= reg << VSC73XX_CMD_PLATFORM_REGISTER_SHIFT;
+
+ return ret;
+}
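
As a worked instance of the layout implied by those shifts (block 1 being the MAC block per the VSC73XX_BLOCK_* defines in vitesse-vsc73xx-core.c):

/*
 * block 1, subblock 0, reg 0x05:
 *   offset = (1 << 14) | (0 << 10) | (0x05 << 2) = 0x4014
 *
 * i.e. a 4-byte stride per register, a 1 KiB window per subblock and a
 * 16 KiB window per block on the memory-mapped bus.
 */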
+
+static int vsc73xx_platform_read(struct vsc73xx *vsc, u8 block, u8 subblock,
+ u8 reg, u32 *val)
+{
+ struct vsc73xx_platform *vsc_platform = vsc->priv;
+ u32 offset;
+
+ if (!vsc73xx_is_addr_valid(block, subblock))
+ return -EINVAL;
+
+ offset = vsc73xx_make_addr(block, subblock, reg);
+	/* By default the vsc73xx runs in big-endian mode.
+ * (See "Register Addressing" section 5.5.3 in the VSC7385 manual.)
+ */
+ *val = ioread32be(vsc_platform->base_addr + offset);
+
+ return 0;
+}
+
+static int vsc73xx_platform_write(struct vsc73xx *vsc, u8 block, u8 subblock,
+ u8 reg, u32 val)
+{
+ struct vsc73xx_platform *vsc_platform = vsc->priv;
+ u32 offset;
+
+ if (!vsc73xx_is_addr_valid(block, subblock))
+ return -EINVAL;
+
+ offset = vsc73xx_make_addr(block, subblock, reg);
+ iowrite32be(val, vsc_platform->base_addr + offset);
+
+ return 0;
+}
+
+static int vsc73xx_platform_probe(struct platform_device *pdev)
+{
+ struct device *dev = &pdev->dev;
+ struct vsc73xx_platform *vsc_platform;
+ struct resource *res = NULL;
+ int ret;
+
+ vsc_platform = devm_kzalloc(dev, sizeof(*vsc_platform), GFP_KERNEL);
+ if (!vsc_platform)
+ return -ENOMEM;
+
+ platform_set_drvdata(pdev, vsc_platform);
+ vsc_platform->pdev = pdev;
+ vsc_platform->vsc.dev = dev;
+ vsc_platform->vsc.priv = vsc_platform;
+ vsc_platform->vsc.ops = &vsc73xx_platform_ops;
+
+ /* obtain I/O memory space */
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!res) {
+ dev_err(&pdev->dev, "cannot obtain I/O memory space\n");
+ ret = -ENXIO;
+ return ret;
+ }
+
+ vsc_platform->base_addr = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR(vsc_platform->base_addr)) {
+ dev_err(&pdev->dev, "cannot request I/O memory space\n");
+ ret = -ENXIO;
+ return ret;
+ }
+
+ return vsc73xx_probe(&vsc_platform->vsc);
+}
+
+static int vsc73xx_platform_remove(struct platform_device *pdev)
+{
+ struct vsc73xx_platform *vsc_platform = platform_get_drvdata(pdev);
+
+ return vsc73xx_remove(&vsc_platform->vsc);
+}
+
+static const struct vsc73xx_ops vsc73xx_platform_ops = {
+ .read = vsc73xx_platform_read,
+ .write = vsc73xx_platform_write,
+};
+
+static const struct of_device_id vsc73xx_of_match[] = {
+ {
+ .compatible = "vitesse,vsc7385",
+ },
+ {
+ .compatible = "vitesse,vsc7388",
+ },
+ {
+ .compatible = "vitesse,vsc7395",
+ },
+ {
+ .compatible = "vitesse,vsc7398",
+ },
+ { },
+};
+MODULE_DEVICE_TABLE(of, vsc73xx_of_match);
+
+static struct platform_driver vsc73xx_platform_driver = {
+ .probe = vsc73xx_platform_probe,
+ .remove = vsc73xx_platform_remove,
+ .driver = {
+ .name = "vsc73xx-platform",
+ .of_match_table = vsc73xx_of_match,
+ },
+};
+module_platform_driver(vsc73xx_platform_driver);
+
+MODULE_AUTHOR("Pawel Dembicki <paweldembicki@gmail.com>");
+MODULE_DESCRIPTION("Vitesse VSC7385/7388/7395/7398 Platform driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/net/dsa/vitesse-vsc73xx-spi.c b/drivers/net/dsa/vitesse-vsc73xx-spi.c
new file mode 100644
index 000000000000..e73c8fcddc9f
--- /dev/null
+++ b/drivers/net/dsa/vitesse-vsc73xx-spi.c
@@ -0,0 +1,203 @@
+// SPDX-License-Identifier: GPL-2.0
+/* DSA driver for:
+ * Vitesse VSC7385 SparX-G5 5+1-port Integrated Gigabit Ethernet Switch
+ * Vitesse VSC7388 SparX-G8 8-port Integrated Gigabit Ethernet Switch
+ * Vitesse VSC7395 SparX-G5e 5+1-port Integrated Gigabit Ethernet Switch
+ * Vitesse VSC7398 SparX-G8e 8-port Integrated Gigabit Ethernet Switch
+ *
+ * This driver takes control of the switch chip over SPI and
+ * configures it to route packets around when connected to a CPU port.
+ *
+ * Copyright (C) 2018 Linus Wallej <linus.walleij@linaro.org>
+ * Includes portions of code from the firmware uploader by:
+ * Copyright (C) 2009 Gabor Juhos <juhosg@openwrt.org>
+ */
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/spi/spi.h>
+
+#include "vitesse-vsc73xx.h"
+
+#define VSC73XX_CMD_SPI_MODE_READ 0
+#define VSC73XX_CMD_SPI_MODE_WRITE 1
+#define VSC73XX_CMD_SPI_MODE_SHIFT 4
+#define VSC73XX_CMD_SPI_BLOCK_SHIFT 5
+#define VSC73XX_CMD_SPI_BLOCK_MASK 0x7
+#define VSC73XX_CMD_SPI_SUBBLOCK_MASK 0xf
+
+/**
+ * struct vsc73xx_spi - VSC73xx SPI state container
+ */
+struct vsc73xx_spi {
+ struct spi_device *spi;
+ struct mutex lock; /* Protects SPI traffic */
+ struct vsc73xx vsc;
+};
+
+static const struct vsc73xx_ops vsc73xx_spi_ops;
+
+static u8 vsc73xx_make_addr(u8 mode, u8 block, u8 subblock)
+{
+ u8 ret;
+
+ ret =
+ (block & VSC73XX_CMD_SPI_BLOCK_MASK) << VSC73XX_CMD_SPI_BLOCK_SHIFT;
+ ret |= (mode & 1) << VSC73XX_CMD_SPI_MODE_SHIFT;
+ ret |= subblock & VSC73XX_CMD_SPI_SUBBLOCK_MASK;
+
+ return ret;
+}
+
+static int vsc73xx_spi_read(struct vsc73xx *vsc, u8 block, u8 subblock, u8 reg,
+ u32 *val)
+{
+ struct vsc73xx_spi *vsc_spi = vsc->priv;
+ struct spi_transfer t[2];
+ struct spi_message m;
+ u8 cmd[4];
+ u8 buf[4];
+ int ret;
+
+ if (!vsc73xx_is_addr_valid(block, subblock))
+ return -EINVAL;
+
+ spi_message_init(&m);
+
+ memset(&t, 0, sizeof(t));
+
+ t[0].tx_buf = cmd;
+ t[0].len = sizeof(cmd);
+ spi_message_add_tail(&t[0], &m);
+
+ t[1].rx_buf = buf;
+ t[1].len = sizeof(buf);
+ spi_message_add_tail(&t[1], &m);
+
+ cmd[0] = vsc73xx_make_addr(VSC73XX_CMD_SPI_MODE_READ, block, subblock);
+ cmd[1] = reg;
+ cmd[2] = 0;
+ cmd[3] = 0;
+
+ mutex_lock(&vsc_spi->lock);
+ ret = spi_sync(vsc_spi->spi, &m);
+ mutex_unlock(&vsc_spi->lock);
+
+ if (ret)
+ return ret;
+
+ *val = (buf[0] << 24) | (buf[1] << 16) | (buf[2] << 8) | buf[3];
+
+ return 0;
+}
+
+static int vsc73xx_spi_write(struct vsc73xx *vsc, u8 block, u8 subblock, u8 reg,
+ u32 val)
+{
+ struct vsc73xx_spi *vsc_spi = vsc->priv;
+ struct spi_transfer t[2];
+ struct spi_message m;
+ u8 cmd[2];
+ u8 buf[4];
+ int ret;
+
+ if (!vsc73xx_is_addr_valid(block, subblock))
+ return -EINVAL;
+
+ spi_message_init(&m);
+
+ memset(&t, 0, sizeof(t));
+
+ t[0].tx_buf = cmd;
+ t[0].len = sizeof(cmd);
+ spi_message_add_tail(&t[0], &m);
+
+ t[1].tx_buf = buf;
+ t[1].len = sizeof(buf);
+ spi_message_add_tail(&t[1], &m);
+
+ cmd[0] = vsc73xx_make_addr(VSC73XX_CMD_SPI_MODE_WRITE, block, subblock);
+ cmd[1] = reg;
+
+ buf[0] = (val >> 24) & 0xff;
+ buf[1] = (val >> 16) & 0xff;
+ buf[2] = (val >> 8) & 0xff;
+ buf[3] = val & 0xff;
+
+ mutex_lock(&vsc_spi->lock);
+ ret = spi_sync(vsc_spi->spi, &m);
+ mutex_unlock(&vsc_spi->lock);
+
+ return ret;
+}
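Both helpers move 32-bit register values over the wire most-significant byte first; below is a small standalone sketch of just the byte packing done by vsc73xx_spi_write() and the reassembly done by vsc73xx_spi_read() (the 0x12345678 value is made up for illustration, no SPI traffic is involved).

#include <stdint.h>
#include <stdio.h>

/* pack a register value the way vsc73xx_spi_write() fills buf[] */
static void pack_be32(uint32_t val, uint8_t buf[4])
{
	buf[0] = (val >> 24) & 0xff;
	buf[1] = (val >> 16) & 0xff;
	buf[2] = (val >> 8) & 0xff;
	buf[3] = val & 0xff;
}

/* unpack the way vsc73xx_spi_read() reassembles *val */
static uint32_t unpack_be32(const uint8_t buf[4])
{
	return ((uint32_t)buf[0] << 24) | (buf[1] << 16) | (buf[2] << 8) | buf[3];
}

int main(void)
{
	uint8_t buf[4];

	pack_be32(0x12345678, buf);
	printf("bytes on the wire: %02x %02x %02x %02x\n",
	       buf[0], buf[1], buf[2], buf[3]);
	printf("round trip: 0x%08x\n", (unsigned int)unpack_be32(buf));
	return 0;
}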
+
+static int vsc73xx_spi_probe(struct spi_device *spi)
+{
+ struct device *dev = &spi->dev;
+ struct vsc73xx_spi *vsc_spi;
+ int ret;
+
+ vsc_spi = devm_kzalloc(dev, sizeof(*vsc_spi), GFP_KERNEL);
+ if (!vsc_spi)
+ return -ENOMEM;
+
+ spi_set_drvdata(spi, vsc_spi);
+ vsc_spi->spi = spi_dev_get(spi);
+ vsc_spi->vsc.dev = dev;
+ vsc_spi->vsc.priv = vsc_spi;
+ vsc_spi->vsc.ops = &vsc73xx_spi_ops;
+ mutex_init(&vsc_spi->lock);
+
+ spi->mode = SPI_MODE_0;
+ spi->bits_per_word = 8;
+ ret = spi_setup(spi);
+ if (ret < 0) {
+ dev_err(dev, "spi setup failed.\n");
+ return ret;
+ }
+
+ return vsc73xx_probe(&vsc_spi->vsc);
+}
+
+static int vsc73xx_spi_remove(struct spi_device *spi)
+{
+ struct vsc73xx_spi *vsc_spi = spi_get_drvdata(spi);
+
+ return vsc73xx_remove(&vsc_spi->vsc);
+}
+
+static const struct vsc73xx_ops vsc73xx_spi_ops = {
+ .read = vsc73xx_spi_read,
+ .write = vsc73xx_spi_write,
+};
+
+static const struct of_device_id vsc73xx_of_match[] = {
+ {
+ .compatible = "vitesse,vsc7385",
+ },
+ {
+ .compatible = "vitesse,vsc7388",
+ },
+ {
+ .compatible = "vitesse,vsc7395",
+ },
+ {
+ .compatible = "vitesse,vsc7398",
+ },
+ { },
+};
+MODULE_DEVICE_TABLE(of, vsc73xx_of_match);
+
+static struct spi_driver vsc73xx_spi_driver = {
+ .probe = vsc73xx_spi_probe,
+ .remove = vsc73xx_spi_remove,
+ .driver = {
+ .name = "vsc73xx-spi",
+ .of_match_table = vsc73xx_of_match,
+ },
+};
+module_spi_driver(vsc73xx_spi_driver);
+
+MODULE_AUTHOR("Linus Walleij <linus.walleij@linaro.org>");
+MODULE_DESCRIPTION("Vitesse VSC7385/7388/7395/7398 SPI driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/net/dsa/vitesse-vsc73xx.h b/drivers/net/dsa/vitesse-vsc73xx.h
new file mode 100644
index 000000000000..7478f8d4e0a9
--- /dev/null
+++ b/drivers/net/dsa/vitesse-vsc73xx.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#include <linux/device.h>
+#include <linux/etherdevice.h>
+#include <linux/gpio/driver.h>
+
+/**
+ * struct vsc73xx - VSC73xx state container
+ */
+struct vsc73xx {
+ struct device *dev;
+ struct gpio_desc *reset;
+ struct dsa_switch *ds;
+ struct gpio_chip gc;
+ u16 chipid;
+ u8 addr[ETH_ALEN];
+ const struct vsc73xx_ops *ops;
+ void *priv;
+};
+
+struct vsc73xx_ops {
+ int (*read)(struct vsc73xx *vsc, u8 block, u8 subblock, u8 reg,
+ u32 *val);
+ int (*write)(struct vsc73xx *vsc, u8 block, u8 subblock, u8 reg,
+ u32 val);
+};
+
+int vsc73xx_is_addr_valid(u8 block, u8 subblock);
+int vsc73xx_probe(struct vsc73xx *vsc);
+int vsc73xx_remove(struct vsc73xx *vsc);
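The read/write ops plus the priv pointer are what keep the core bus-agnostic: the SPI driver above and the platform driver earlier in the patch each fill in their own accessors. A hedged sketch of how an additional backend could hook in follows; only the types and functions declared in this header are real, the stub names and bodies are invented for illustration.

/* Illustrative only; not part of the patch. */
#include "vitesse-vsc73xx.h"

static int vsc73xx_stub_read(struct vsc73xx *vsc, u8 block, u8 subblock,
			     u8 reg, u32 *val)
{
	*val = 0;		/* a real backend would touch the bus here */
	return 0;
}

static int vsc73xx_stub_write(struct vsc73xx *vsc, u8 block, u8 subblock,
			      u8 reg, u32 val)
{
	return 0;		/* likewise */
}

static const struct vsc73xx_ops vsc73xx_stub_ops = {
	.read = vsc73xx_stub_read,
	.write = vsc73xx_stub_write,
};

/* a backend's probe path would then do something like:
 *	vsc->ops = &vsc73xx_stub_ops;
 *	vsc->priv = backend_state;
 *	return vsc73xx_probe(vsc);
 */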
diff --git a/drivers/net/ethernet/Kconfig b/drivers/net/ethernet/Kconfig
index fe115b7caba0..93a2d4deb27c 100644
--- a/drivers/net/ethernet/Kconfig
+++ b/drivers/net/ethernet/Kconfig
@@ -76,6 +76,7 @@ source "drivers/net/ethernet/ezchip/Kconfig"
source "drivers/net/ethernet/faraday/Kconfig"
source "drivers/net/ethernet/freescale/Kconfig"
source "drivers/net/ethernet/fujitsu/Kconfig"
+source "drivers/net/ethernet/google/Kconfig"
source "drivers/net/ethernet/hisilicon/Kconfig"
source "drivers/net/ethernet/hp/Kconfig"
source "drivers/net/ethernet/huawei/Kconfig"
diff --git a/drivers/net/ethernet/Makefile b/drivers/net/ethernet/Makefile
index 7b5bf9682066..fb9155cffcff 100644
--- a/drivers/net/ethernet/Makefile
+++ b/drivers/net/ethernet/Makefile
@@ -39,6 +39,7 @@ obj-$(CONFIG_NET_VENDOR_EZCHIP) += ezchip/
obj-$(CONFIG_NET_VENDOR_FARADAY) += faraday/
obj-$(CONFIG_NET_VENDOR_FREESCALE) += freescale/
obj-$(CONFIG_NET_VENDOR_FUJITSU) += fujitsu/
+obj-$(CONFIG_NET_VENDOR_GOOGLE) += google/
obj-$(CONFIG_NET_VENDOR_HISILICON) += hisilicon/
obj-$(CONFIG_NET_VENDOR_HP) += hp/
obj-$(CONFIG_NET_VENDOR_HUAWEI) += huawei/
diff --git a/drivers/net/ethernet/allwinner/sun4i-emac.c b/drivers/net/ethernet/allwinner/sun4i-emac.c
index 9e06dff619c3..3434730a7699 100644
--- a/drivers/net/ethernet/allwinner/sun4i-emac.c
+++ b/drivers/net/ethernet/allwinner/sun4i-emac.c
@@ -224,8 +224,8 @@ static int emac_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
static void emac_get_drvinfo(struct net_device *dev,
struct ethtool_drvinfo *info)
{
- strlcpy(info->driver, DRV_NAME, sizeof(DRV_NAME));
- strlcpy(info->version, DRV_VERSION, sizeof(DRV_VERSION));
+ strlcpy(info->driver, DRV_NAME, sizeof(info->driver));
+ strlcpy(info->version, DRV_VERSION, sizeof(info->version));
strlcpy(info->bus_info, dev_name(&dev->dev), sizeof(info->bus_info));
}
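The point of this hunk is that sizeof(DRV_NAME) measures the string literal behind the macro rather than the destination field, so the bound passed to strlcpy() did not match the buffer being filled. A standalone model of the difference (the DRV_NAME value and the 32-byte field are stand-ins for the real driver name and ethtool_drvinfo::driver):

#include <stdio.h>
#include <string.h>

#define DRV_NAME "sun4i-emac"		/* hypothetical value for illustration */

struct drvinfo {
	char driver[32];		/* models ethtool_drvinfo::driver */
};

int main(void)
{
	struct drvinfo info;

	printf("sizeof(DRV_NAME)    = %zu\n", sizeof(DRV_NAME));	/* bound used before the fix */
	printf("sizeof(info.driver) = %zu\n", sizeof(info.driver));	/* bound used after the fix */

	strncpy(info.driver, DRV_NAME, sizeof(info.driver) - 1);
	info.driver[sizeof(info.driver) - 1] = '\0';
	printf("copied: %s\n", info.driver);
	return 0;
}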
@@ -818,7 +818,6 @@ static int emac_probe(struct platform_device *pdev)
SET_NETDEV_DEV(ndev, &pdev->dev);
db = netdev_priv(ndev);
- memset(db, 0, sizeof(*db));
db->dev = &pdev->dev;
db->ndev = ndev;
diff --git a/drivers/net/ethernet/amazon/ena/ena_admin_defs.h b/drivers/net/ethernet/amazon/ena/ena_admin_defs.h
index 9f80b73f90b1..d19f2ecf8e84 100644
--- a/drivers/net/ethernet/amazon/ena/ena_admin_defs.h
+++ b/drivers/net/ethernet/amazon/ena/ena_admin_defs.h
@@ -60,6 +60,7 @@ enum ena_admin_aq_feature_id {
ENA_ADMIN_MAX_QUEUES_NUM = 2,
ENA_ADMIN_HW_HINTS = 3,
ENA_ADMIN_LLQ = 4,
+ ENA_ADMIN_MAX_QUEUES_EXT = 7,
ENA_ADMIN_RSS_HASH_FUNCTION = 10,
ENA_ADMIN_STATELESS_OFFLOAD_CONFIG = 11,
ENA_ADMIN_RSS_REDIRECTION_TABLE_CONFIG = 12,
@@ -421,7 +422,13 @@ struct ena_admin_get_set_feature_common_desc {
/* as appears in ena_admin_aq_feature_id */
u8 feature_id;
- u16 reserved16;
+ /* The driver specifies the max feature version it supports and the
+ * device responds with the currently supported feature version. The
+ * field is zero-based.
+ */
+ u8 feature_version;
+
+ u8 reserved8;
};
struct ena_admin_device_attr_feature_desc {
@@ -524,6 +531,39 @@ struct ena_admin_feature_llq_desc {
/* the stride control the driver selected to use */
u16 descriptors_stride_ctrl_enabled;
+
+ /* Maximum size in bytes taken by llq entries in a single tx burst.
+ * Set to 0 when there is no such limit.
+ */
+ u32 max_tx_burst_size;
+};
+
+struct ena_admin_queue_ext_feature_fields {
+ u32 max_tx_sq_num;
+
+ u32 max_tx_cq_num;
+
+ u32 max_rx_sq_num;
+
+ u32 max_rx_cq_num;
+
+ u32 max_tx_sq_depth;
+
+ u32 max_tx_cq_depth;
+
+ u32 max_rx_sq_depth;
+
+ u32 max_rx_cq_depth;
+
+ u32 max_tx_header_size;
+
+ /* Maximum Descriptors number, including meta descriptor, allowed for
+ * a single Tx packet
+ */
+ u16 max_per_packet_tx_descs;
+
+ /* Maximum Descriptors number allowed for a single Rx packet */
+ u16 max_per_packet_rx_descs;
};
struct ena_admin_queue_feature_desc {
@@ -832,6 +872,19 @@ struct ena_admin_get_feat_cmd {
u32 raw[11];
};
+struct ena_admin_queue_ext_feature_desc {
+ /* version */
+ u8 version;
+
+ u8 reserved1[3];
+
+ union {
+ struct ena_admin_queue_ext_feature_fields max_queue_ext;
+
+ u32 raw[10];
+ };
+};
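A quick consistency check on the union above: the named fields add up to nine u32s plus two u16s, i.e. 40 bytes, which is exactly the 10 * 4 bytes covered by the raw[10] overlay. A standalone assertion of that arithmetic (the struct is re-declared locally just for the check):

#include <assert.h>
#include <stdint.h>

/* re-declaration of the fields above, for illustration only */
struct queue_ext_fields {
	uint32_t max_tx_sq_num, max_tx_cq_num;
	uint32_t max_rx_sq_num, max_rx_cq_num;
	uint32_t max_tx_sq_depth, max_tx_cq_depth;
	uint32_t max_rx_sq_depth, max_rx_cq_depth;
	uint32_t max_tx_header_size;
	uint16_t max_per_packet_tx_descs;
	uint16_t max_per_packet_rx_descs;
};

/* 9 * 4 + 2 * 2 = 40 bytes = 10 * sizeof(u32) */
static_assert(sizeof(struct queue_ext_fields) == 10 * sizeof(uint32_t),
	      "fields and raw[10] overlay must have the same size");

int main(void) { return 0; }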
+
struct ena_admin_get_feat_resp {
struct ena_admin_acq_common_desc acq_common_desc;
@@ -844,6 +897,8 @@ struct ena_admin_get_feat_resp {
struct ena_admin_queue_feature_desc max_queue;
+ struct ena_admin_queue_ext_feature_desc max_queue_ext;
+
struct ena_admin_feature_aenq_desc aenq;
struct ena_admin_get_feature_link_desc link;
@@ -908,7 +963,9 @@ struct ena_admin_aenq_common_desc {
u16 syndrom;
- /* 0 : phase */
+ /* 0 : phase
+ * 7:1 : reserved - MBZ
+ */
u8 flags;
u8 reserved1[3];
diff --git a/drivers/net/ethernet/amazon/ena/ena_com.c b/drivers/net/ethernet/amazon/ena/ena_com.c
index 7f8266b191ae..911a2e7a375a 100644
--- a/drivers/net/ethernet/amazon/ena/ena_com.c
+++ b/drivers/net/ethernet/amazon/ena/ena_com.c
@@ -91,7 +91,7 @@ struct ena_com_stats_ctx {
struct ena_admin_acq_get_stats_resp get_resp;
};
-static inline int ena_com_mem_addr_set(struct ena_com_dev *ena_dev,
+static int ena_com_mem_addr_set(struct ena_com_dev *ena_dev,
struct ena_common_mem_addr *ena_addr,
dma_addr_t addr)
{
@@ -115,7 +115,7 @@ static int ena_com_admin_init_sq(struct ena_com_admin_queue *queue)
GFP_KERNEL);
if (!sq->entries) {
- pr_err("memory allocation failed");
+ pr_err("memory allocation failed\n");
return -ENOMEM;
}
@@ -137,7 +137,7 @@ static int ena_com_admin_init_cq(struct ena_com_admin_queue *queue)
GFP_KERNEL);
if (!cq->entries) {
- pr_err("memory allocation failed");
+ pr_err("memory allocation failed\n");
return -ENOMEM;
}
@@ -160,7 +160,7 @@ static int ena_com_admin_init_aenq(struct ena_com_dev *dev,
GFP_KERNEL);
if (!aenq->entries) {
- pr_err("memory allocation failed");
+ pr_err("memory allocation failed\n");
return -ENOMEM;
}
@@ -190,7 +190,7 @@ static int ena_com_admin_init_aenq(struct ena_com_dev *dev,
return 0;
}
-static inline void comp_ctxt_release(struct ena_com_admin_queue *queue,
+static void comp_ctxt_release(struct ena_com_admin_queue *queue,
struct ena_comp_ctx *comp_ctx)
{
comp_ctx->occupied = false;
@@ -277,7 +277,7 @@ static struct ena_comp_ctx *__ena_com_submit_admin_cmd(struct ena_com_admin_queu
return comp_ctx;
}
-static inline int ena_com_init_comp_ctxt(struct ena_com_admin_queue *queue)
+static int ena_com_init_comp_ctxt(struct ena_com_admin_queue *queue)
{
size_t size = queue->q_depth * sizeof(struct ena_comp_ctx);
struct ena_comp_ctx *comp_ctx;
@@ -285,7 +285,7 @@ static inline int ena_com_init_comp_ctxt(struct ena_com_admin_queue *queue)
queue->comp_ctx = devm_kzalloc(queue->q_dmadev, size, GFP_KERNEL);
if (unlikely(!queue->comp_ctx)) {
- pr_err("memory allocation failed");
+ pr_err("memory allocation failed\n");
return -ENOMEM;
}
@@ -356,7 +356,7 @@ static int ena_com_init_io_sq(struct ena_com_dev *ena_dev,
}
if (!io_sq->desc_addr.virt_addr) {
- pr_err("memory allocation failed");
+ pr_err("memory allocation failed\n");
return -ENOMEM;
}
}
@@ -382,7 +382,7 @@ static int ena_com_init_io_sq(struct ena_com_dev *ena_dev,
devm_kzalloc(ena_dev->dmadev, size, GFP_KERNEL);
if (!io_sq->bounce_buf_ctrl.base_buffer) {
- pr_err("bounce buffer memory allocation failed");
+ pr_err("bounce buffer memory allocation failed\n");
return -ENOMEM;
}
@@ -396,6 +396,10 @@ static int ena_com_init_io_sq(struct ena_com_dev *ena_dev,
0x0, io_sq->llq_info.desc_list_entry_size);
io_sq->llq_buf_ctrl.descs_left_in_line =
io_sq->llq_info.descs_num_before_header;
+
+ if (io_sq->llq_info.max_entries_in_tx_burst > 0)
+ io_sq->entries_in_tx_burst_left =
+ io_sq->llq_info.max_entries_in_tx_burst;
}
io_sq->tail = 0;
@@ -436,7 +440,7 @@ static int ena_com_init_io_cq(struct ena_com_dev *ena_dev,
}
if (!io_cq->cdesc_addr.virt_addr) {
- pr_err("memory allocation failed");
+ pr_err("memory allocation failed\n");
return -ENOMEM;
}
@@ -727,6 +731,9 @@ static int ena_com_config_llq_info(struct ena_com_dev *ena_dev,
supported_feat, llq_info->descs_num_before_header);
}
+ llq_info->max_entries_in_tx_burst =
+ (u16)(llq_features->max_tx_burst_size / llq_default_cfg->llq_ring_entry_size_value);
+
rc = ena_com_set_llq(ena_dev);
if (rc)
pr_err("Cannot set LLQ configuration: %d\n", rc);
@@ -755,16 +762,26 @@ static int ena_com_wait_and_process_admin_cq_interrupts(struct ena_comp_ctx *com
admin_queue->stats.no_completion++;
spin_unlock_irqrestore(&admin_queue->q_lock, flags);
- if (comp_ctx->status == ENA_CMD_COMPLETED)
- pr_err("The ena device have completion but the driver didn't receive any MSI-X interrupt (cmd %d)\n",
- comp_ctx->cmd_opcode);
- else
- pr_err("The ena device doesn't send any completion for the admin cmd %d status %d\n",
+ if (comp_ctx->status == ENA_CMD_COMPLETED) {
+ pr_err("The ena device sent a completion but the driver didn't receive a MSI-X interrupt (cmd %d), autopolling mode is %s\n",
+ comp_ctx->cmd_opcode,
+ admin_queue->auto_polling ? "ON" : "OFF");
+ /* Check if fallback to polling is enabled */
+ if (admin_queue->auto_polling)
+ admin_queue->polling = true;
+ } else {
+ pr_err("The ena device doesn't send a completion for the admin cmd %d status %d\n",
comp_ctx->cmd_opcode, comp_ctx->status);
-
- admin_queue->running_state = false;
- ret = -ETIME;
- goto err;
+ }
+ /* Check if we shifted to polling mode.
+ * This happens when there is a completion without an interrupt
+ * and autopolling mode is enabled. In that case, continue normal execution.
+ */
+ if (!admin_queue->polling) {
+ admin_queue->running_state = false;
+ ret = -ETIME;
+ goto err;
+ }
}
ret = ena_com_comp_status_to_errno(comp_ctx->comp_status);
@@ -822,7 +839,7 @@ static u32 ena_com_reg_bar_read32(struct ena_com_dev *ena_dev, u16 offset)
}
if (read_resp->reg_off != offset) {
- pr_err("Read failure: wrong offset provided");
+ pr_err("Read failure: wrong offset provided\n");
ret = ENA_MMIO_READ_TIMEOUT;
} else {
ret = read_resp->reg_val;
@@ -961,7 +978,8 @@ static int ena_com_get_feature_ex(struct ena_com_dev *ena_dev,
struct ena_admin_get_feat_resp *get_resp,
enum ena_admin_aq_feature_id feature_id,
dma_addr_t control_buf_dma_addr,
- u32 control_buff_size)
+ u32 control_buff_size,
+ u8 feature_ver)
{
struct ena_com_admin_queue *admin_queue;
struct ena_admin_get_feat_cmd get_cmd;
@@ -992,7 +1010,7 @@ static int ena_com_get_feature_ex(struct ena_com_dev *ena_dev,
}
get_cmd.control_buffer.length = control_buff_size;
-
+ get_cmd.feat_common.feature_version = feature_ver;
get_cmd.feat_common.feature_id = feature_id;
ret = ena_com_execute_admin_command(admin_queue,
@@ -1012,13 +1030,15 @@ static int ena_com_get_feature_ex(struct ena_com_dev *ena_dev,
static int ena_com_get_feature(struct ena_com_dev *ena_dev,
struct ena_admin_get_feat_resp *get_resp,
- enum ena_admin_aq_feature_id feature_id)
+ enum ena_admin_aq_feature_id feature_id,
+ u8 feature_ver)
{
return ena_com_get_feature_ex(ena_dev,
get_resp,
feature_id,
0,
- 0);
+ 0,
+ feature_ver);
}
static int ena_com_hash_key_allocate(struct ena_com_dev *ena_dev)
@@ -1078,7 +1098,7 @@ static int ena_com_indirect_table_allocate(struct ena_com_dev *ena_dev,
int ret;
ret = ena_com_get_feature(ena_dev, &get_resp,
- ENA_ADMIN_RSS_REDIRECTION_TABLE_CONFIG);
+ ENA_ADMIN_RSS_REDIRECTION_TABLE_CONFIG, 0);
if (unlikely(ret))
return ret;
@@ -1498,7 +1518,7 @@ int ena_com_set_aenq_config(struct ena_com_dev *ena_dev, u32 groups_flag)
struct ena_admin_get_feat_resp get_resp;
int ret;
- ret = ena_com_get_feature(ena_dev, &get_resp, ENA_ADMIN_AENQ_CONFIG);
+ ret = ena_com_get_feature(ena_dev, &get_resp, ENA_ADMIN_AENQ_CONFIG, 0);
if (ret) {
pr_info("Can't get aenq configuration\n");
return ret;
@@ -1643,6 +1663,12 @@ void ena_com_set_admin_polling_mode(struct ena_com_dev *ena_dev, bool polling)
ena_dev->admin_queue.polling = polling;
}
+void ena_com_set_admin_auto_polling_mode(struct ena_com_dev *ena_dev,
+ bool polling)
+{
+ ena_dev->admin_queue.auto_polling = polling;
+}
+
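A hedged usage sketch for the new helper (the call site, the function name and the ordering here are illustrative, not taken from the patch):

/* Illustrative only; not part of the patch. */
#include "ena_com.h"

static void example_enable_admin_auto_polling(struct ena_com_dev *ena_dev)
{
	/* keep normal interrupt-driven completion handling ... */
	ena_com_set_admin_polling_mode(ena_dev, false);

	/* ... but allow a fallback to polling when a completion shows up
	 * without a matching MSI-X interrupt (the case handled in
	 * ena_com_wait_and_process_admin_cq_interrupts() above)
	 */
	ena_com_set_admin_auto_polling_mode(ena_dev, true);
}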
int ena_com_mmio_reg_read_request_init(struct ena_com_dev *ena_dev)
{
struct ena_com_mmio_read *mmio_read = &ena_dev->mmio_read;
@@ -1867,7 +1893,7 @@ void ena_com_destroy_io_queue(struct ena_com_dev *ena_dev, u16 qid)
int ena_com_get_link_params(struct ena_com_dev *ena_dev,
struct ena_admin_get_feat_resp *resp)
{
- return ena_com_get_feature(ena_dev, resp, ENA_ADMIN_LINK_CONFIG);
+ return ena_com_get_feature(ena_dev, resp, ENA_ADMIN_LINK_CONFIG, 0);
}
int ena_com_get_dev_attr_feat(struct ena_com_dev *ena_dev,
@@ -1877,7 +1903,7 @@ int ena_com_get_dev_attr_feat(struct ena_com_dev *ena_dev,
int rc;
rc = ena_com_get_feature(ena_dev, &get_resp,
- ENA_ADMIN_DEVICE_ATTRIBUTES);
+ ENA_ADMIN_DEVICE_ATTRIBUTES, 0);
if (rc)
return rc;
@@ -1885,17 +1911,34 @@ int ena_com_get_dev_attr_feat(struct ena_com_dev *ena_dev,
sizeof(get_resp.u.dev_attr));
ena_dev->supported_features = get_resp.u.dev_attr.supported_features;
- rc = ena_com_get_feature(ena_dev, &get_resp,
- ENA_ADMIN_MAX_QUEUES_NUM);
- if (rc)
- return rc;
+ if (ena_dev->supported_features & BIT(ENA_ADMIN_MAX_QUEUES_EXT)) {
+ rc = ena_com_get_feature(ena_dev, &get_resp,
+ ENA_ADMIN_MAX_QUEUES_EXT,
+ ENA_FEATURE_MAX_QUEUE_EXT_VER);
+ if (rc)
+ return rc;
- memcpy(&get_feat_ctx->max_queues, &get_resp.u.max_queue,
- sizeof(get_resp.u.max_queue));
- ena_dev->tx_max_header_size = get_resp.u.max_queue.max_header_size;
+ if (get_resp.u.max_queue_ext.version != ENA_FEATURE_MAX_QUEUE_EXT_VER)
+ return -EINVAL;
+
+ memcpy(&get_feat_ctx->max_queue_ext, &get_resp.u.max_queue_ext,
+ sizeof(get_resp.u.max_queue_ext));
+ ena_dev->tx_max_header_size =
+ get_resp.u.max_queue_ext.max_queue_ext.max_tx_header_size;
+ } else {
+ rc = ena_com_get_feature(ena_dev, &get_resp,
+ ENA_ADMIN_MAX_QUEUES_NUM, 0);
+ memcpy(&get_feat_ctx->max_queues, &get_resp.u.max_queue,
+ sizeof(get_resp.u.max_queue));
+ ena_dev->tx_max_header_size =
+ get_resp.u.max_queue.max_header_size;
+
+ if (rc)
+ return rc;
+ }
rc = ena_com_get_feature(ena_dev, &get_resp,
- ENA_ADMIN_AENQ_CONFIG);
+ ENA_ADMIN_AENQ_CONFIG, 0);
if (rc)
return rc;
@@ -1903,7 +1946,7 @@ int ena_com_get_dev_attr_feat(struct ena_com_dev *ena_dev,
sizeof(get_resp.u.aenq));
rc = ena_com_get_feature(ena_dev, &get_resp,
- ENA_ADMIN_STATELESS_OFFLOAD_CONFIG);
+ ENA_ADMIN_STATELESS_OFFLOAD_CONFIG, 0);
if (rc)
return rc;
@@ -1913,7 +1956,7 @@ int ena_com_get_dev_attr_feat(struct ena_com_dev *ena_dev,
 /* Driver hints isn't a mandatory admin command, so in case the
 * command isn't supported set driver hints to 0
*/
- rc = ena_com_get_feature(ena_dev, &get_resp, ENA_ADMIN_HW_HINTS);
+ rc = ena_com_get_feature(ena_dev, &get_resp, ENA_ADMIN_HW_HINTS, 0);
if (!rc)
memcpy(&get_feat_ctx->hw_hints, &get_resp.u.hw_hints,
@@ -1924,7 +1967,7 @@ int ena_com_get_dev_attr_feat(struct ena_com_dev *ena_dev,
else
return rc;
- rc = ena_com_get_feature(ena_dev, &get_resp, ENA_ADMIN_LLQ);
+ rc = ena_com_get_feature(ena_dev, &get_resp, ENA_ADMIN_LLQ, 0);
if (!rc)
memcpy(&get_feat_ctx->llq, &get_resp.u.llq,
sizeof(get_resp.u.llq));
@@ -2161,7 +2204,7 @@ int ena_com_get_offload_settings(struct ena_com_dev *ena_dev,
struct ena_admin_get_feat_resp resp;
ret = ena_com_get_feature(ena_dev, &resp,
- ENA_ADMIN_STATELESS_OFFLOAD_CONFIG);
+ ENA_ADMIN_STATELESS_OFFLOAD_CONFIG, 0);
if (unlikely(ret)) {
pr_err("Failed to get offload capabilities %d\n", ret);
return ret;
@@ -2190,7 +2233,7 @@ int ena_com_set_hash_function(struct ena_com_dev *ena_dev)
/* Validate hash function is supported */
ret = ena_com_get_feature(ena_dev, &get_resp,
- ENA_ADMIN_RSS_HASH_FUNCTION);
+ ENA_ADMIN_RSS_HASH_FUNCTION, 0);
if (unlikely(ret))
return ret;
@@ -2250,7 +2293,7 @@ int ena_com_fill_hash_function(struct ena_com_dev *ena_dev,
rc = ena_com_get_feature_ex(ena_dev, &get_resp,
ENA_ADMIN_RSS_HASH_FUNCTION,
rss->hash_key_dma_addr,
- sizeof(*rss->hash_key));
+ sizeof(*rss->hash_key), 0);
if (unlikely(rc))
return rc;
@@ -2302,7 +2345,7 @@ int ena_com_get_hash_function(struct ena_com_dev *ena_dev,
rc = ena_com_get_feature_ex(ena_dev, &get_resp,
ENA_ADMIN_RSS_HASH_FUNCTION,
rss->hash_key_dma_addr,
- sizeof(*rss->hash_key));
+ sizeof(*rss->hash_key), 0);
if (unlikely(rc))
return rc;
@@ -2327,7 +2370,7 @@ int ena_com_get_hash_ctrl(struct ena_com_dev *ena_dev,
rc = ena_com_get_feature_ex(ena_dev, &get_resp,
ENA_ADMIN_RSS_HASH_INPUT,
rss->hash_ctrl_dma_addr,
- sizeof(*rss->hash_ctrl));
+ sizeof(*rss->hash_ctrl), 0);
if (unlikely(rc))
return rc;
@@ -2563,7 +2606,7 @@ int ena_com_indirect_table_get(struct ena_com_dev *ena_dev, u32 *ind_tbl)
rc = ena_com_get_feature_ex(ena_dev, &get_resp,
ENA_ADMIN_RSS_REDIRECTION_TABLE_CONFIG,
rss->rss_ind_tbl_dma_addr,
- tbl_size);
+ tbl_size, 0);
if (unlikely(rc))
return rc;
@@ -2778,7 +2821,7 @@ int ena_com_init_interrupt_moderation(struct ena_com_dev *ena_dev)
int rc;
rc = ena_com_get_feature(ena_dev, &get_resp,
- ENA_ADMIN_INTERRUPT_MODERATION);
+ ENA_ADMIN_INTERRUPT_MODERATION, 0);
if (rc) {
if (rc == -EOPNOTSUPP) {
@@ -2913,8 +2956,8 @@ int ena_com_config_dev_mode(struct ena_com_dev *ena_dev,
struct ena_admin_feature_llq_desc *llq_features,
struct ena_llq_configurations *llq_default_cfg)
{
+ struct ena_com_llq_info *llq_info = &ena_dev->llq_info;
int rc;
- int size;
if (!llq_features->max_llq_num) {
ena_dev->tx_mem_queue_type = ENA_ADMIN_PLACEMENT_POLICY_HOST;
@@ -2925,12 +2968,10 @@ int ena_com_config_dev_mode(struct ena_com_dev *ena_dev,
if (rc)
return rc;
- /* Validate the descriptor is not too big */
- size = ena_dev->tx_max_header_size;
- size += ena_dev->llq_info.descs_num_before_header *
- sizeof(struct ena_eth_io_tx_desc);
+ ena_dev->tx_max_header_size = llq_info->desc_list_entry_size -
+ (llq_info->descs_num_before_header * sizeof(struct ena_eth_io_tx_desc));
- if (unlikely(ena_dev->llq_info.desc_list_entry_size < size)) {
+ if (unlikely(ena_dev->tx_max_header_size == 0)) {
pr_err("the size of the LLQ entry is smaller than needed\n");
return -EINVAL;
}
diff --git a/drivers/net/ethernet/amazon/ena/ena_com.h b/drivers/net/ethernet/amazon/ena/ena_com.h
index 078d6f2b4f39..0d3664fe260d 100644
--- a/drivers/net/ethernet/amazon/ena/ena_com.h
+++ b/drivers/net/ethernet/amazon/ena/ena_com.h
@@ -101,6 +101,8 @@
#define ENA_HW_HINTS_NO_TIMEOUT 0xFFFF
+#define ENA_FEATURE_MAX_QUEUE_EXT_VER 1
+
enum ena_intr_moder_level {
ENA_INTR_MODER_LOWEST = 0,
ENA_INTR_MODER_LOW,
@@ -159,6 +161,7 @@ struct ena_com_llq_info {
u16 desc_list_entry_size;
u16 descs_num_before_header;
u16 descs_per_entry;
+ u16 max_entries_in_tx_burst;
};
struct ena_com_io_cq {
@@ -238,6 +241,7 @@ struct ena_com_io_sq {
u8 phase;
u8 desc_entry_size;
u8 dma_addr_bits;
+ u16 entries_in_tx_burst_left;
} ____cacheline_aligned;
struct ena_com_admin_cq {
@@ -281,6 +285,9 @@ struct ena_com_admin_queue {
/* Indicate if the admin queue should poll for completion */
bool polling;
+ /* Define if fallback to polling mode should occur */
+ bool auto_polling;
+
u16 curr_cmd_id;
/* Indicate that the ena was initialized and can
@@ -377,6 +384,7 @@ struct ena_com_dev {
struct ena_com_dev_get_features_ctx {
struct ena_admin_queue_feature_desc max_queues;
+ struct ena_admin_queue_ext_feature_desc max_queue_ext;
struct ena_admin_device_attr_feature_desc dev_attr;
struct ena_admin_feature_aenq_desc aenq;
struct ena_admin_feature_offload_desc offload;
@@ -536,6 +544,17 @@ void ena_com_set_admin_polling_mode(struct ena_com_dev *ena_dev, bool polling);
*/
bool ena_com_get_ena_admin_polling_mode(struct ena_com_dev *ena_dev);
+/* ena_com_set_admin_auto_polling_mode - Enable autoswitch to polling mode
+ * @ena_dev: ENA communication layer struct
+ * @polling: Enable/Disable polling mode
+ *
+ * Set the autopolling mode.
+ * When autopolling is enabled, the admin queue falls back to polling
+ * if a completion arrives without a matching interrupt.
+ */
+void ena_com_set_admin_auto_polling_mode(struct ena_com_dev *ena_dev,
+ bool polling);
+
/* ena_com_admin_q_comp_intr_handler - admin queue interrupt handler
* @ena_dev: ENA communication layer struct
*
diff --git a/drivers/net/ethernet/amazon/ena/ena_eth_com.c b/drivers/net/ethernet/amazon/ena/ena_eth_com.c
index f6c2d3855be8..38046bf0ff44 100644
--- a/drivers/net/ethernet/amazon/ena/ena_eth_com.c
+++ b/drivers/net/ethernet/amazon/ena/ena_eth_com.c
@@ -32,7 +32,7 @@
#include "ena_eth_com.h"
-static inline struct ena_eth_io_rx_cdesc_base *ena_com_get_next_rx_cdesc(
+static struct ena_eth_io_rx_cdesc_base *ena_com_get_next_rx_cdesc(
struct ena_com_io_cq *io_cq)
{
struct ena_eth_io_rx_cdesc_base *cdesc;
@@ -59,7 +59,7 @@ static inline struct ena_eth_io_rx_cdesc_base *ena_com_get_next_rx_cdesc(
return cdesc;
}
-static inline void *get_sq_desc_regular_queue(struct ena_com_io_sq *io_sq)
+static void *get_sq_desc_regular_queue(struct ena_com_io_sq *io_sq)
{
u16 tail_masked;
u32 offset;
@@ -71,7 +71,7 @@ static inline void *get_sq_desc_regular_queue(struct ena_com_io_sq *io_sq)
return (void *)((uintptr_t)io_sq->desc_addr.virt_addr + offset);
}
-static inline int ena_com_write_bounce_buffer_to_dev(struct ena_com_io_sq *io_sq,
+static int ena_com_write_bounce_buffer_to_dev(struct ena_com_io_sq *io_sq,
u8 *bounce_buffer)
{
struct ena_com_llq_info *llq_info = &io_sq->llq_info;
@@ -82,6 +82,17 @@ static inline int ena_com_write_bounce_buffer_to_dev(struct ena_com_io_sq *io_sq
dst_tail_mask = io_sq->tail & (io_sq->q_depth - 1);
dst_offset = dst_tail_mask * llq_info->desc_list_entry_size;
+ if (is_llq_max_tx_burst_exists(io_sq)) {
+ if (unlikely(!io_sq->entries_in_tx_burst_left)) {
+ pr_err("Error: trying to send more packets than tx burst allows\n");
+ return -ENOSPC;
+ }
+
+ io_sq->entries_in_tx_burst_left--;
+ pr_debug("decreasing entries_in_tx_burst_left of queue %d to %d\n",
+ io_sq->qid, io_sq->entries_in_tx_burst_left);
+ }
+
/* Make sure everything was written into the bounce buffer before
* writing the bounce buffer to the device
*/
@@ -100,7 +111,7 @@ static inline int ena_com_write_bounce_buffer_to_dev(struct ena_com_io_sq *io_sq
return 0;
}
-static inline int ena_com_write_header_to_bounce(struct ena_com_io_sq *io_sq,
+static int ena_com_write_header_to_bounce(struct ena_com_io_sq *io_sq,
u8 *header_src,
u16 header_len)
{
@@ -131,7 +142,7 @@ static inline int ena_com_write_header_to_bounce(struct ena_com_io_sq *io_sq,
return 0;
}
-static inline void *get_sq_desc_llq(struct ena_com_io_sq *io_sq)
+static void *get_sq_desc_llq(struct ena_com_io_sq *io_sq)
{
struct ena_com_llq_pkt_ctrl *pkt_ctrl = &io_sq->llq_buf_ctrl;
u8 *bounce_buffer;
@@ -151,7 +162,7 @@ static inline void *get_sq_desc_llq(struct ena_com_io_sq *io_sq)
return sq_desc;
}
-static inline int ena_com_close_bounce_buffer(struct ena_com_io_sq *io_sq)
+static int ena_com_close_bounce_buffer(struct ena_com_io_sq *io_sq)
{
struct ena_com_llq_pkt_ctrl *pkt_ctrl = &io_sq->llq_buf_ctrl;
struct ena_com_llq_info *llq_info = &io_sq->llq_info;
@@ -178,7 +189,7 @@ static inline int ena_com_close_bounce_buffer(struct ena_com_io_sq *io_sq)
return 0;
}
-static inline void *get_sq_desc(struct ena_com_io_sq *io_sq)
+static void *get_sq_desc(struct ena_com_io_sq *io_sq)
{
if (io_sq->mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV)
return get_sq_desc_llq(io_sq);
@@ -186,7 +197,7 @@ static inline void *get_sq_desc(struct ena_com_io_sq *io_sq)
return get_sq_desc_regular_queue(io_sq);
}
-static inline int ena_com_sq_update_llq_tail(struct ena_com_io_sq *io_sq)
+static int ena_com_sq_update_llq_tail(struct ena_com_io_sq *io_sq)
{
struct ena_com_llq_pkt_ctrl *pkt_ctrl = &io_sq->llq_buf_ctrl;
struct ena_com_llq_info *llq_info = &io_sq->llq_info;
@@ -214,7 +225,7 @@ static inline int ena_com_sq_update_llq_tail(struct ena_com_io_sq *io_sq)
return 0;
}
-static inline int ena_com_sq_update_tail(struct ena_com_io_sq *io_sq)
+static int ena_com_sq_update_tail(struct ena_com_io_sq *io_sq)
{
if (io_sq->mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV)
return ena_com_sq_update_llq_tail(io_sq);
@@ -228,7 +239,7 @@ static inline int ena_com_sq_update_tail(struct ena_com_io_sq *io_sq)
return 0;
}
-static inline struct ena_eth_io_rx_cdesc_base *
+static struct ena_eth_io_rx_cdesc_base *
ena_com_rx_cdesc_idx_to_ptr(struct ena_com_io_cq *io_cq, u16 idx)
{
idx &= (io_cq->q_depth - 1);
@@ -237,7 +248,7 @@ static inline struct ena_eth_io_rx_cdesc_base *
idx * io_cq->cdesc_entry_size_in_bytes);
}
-static inline u16 ena_com_cdesc_rx_pkt_get(struct ena_com_io_cq *io_cq,
+static u16 ena_com_cdesc_rx_pkt_get(struct ena_com_io_cq *io_cq,
u16 *first_cdesc_idx)
{
struct ena_eth_io_rx_cdesc_base *cdesc;
@@ -274,24 +285,7 @@ static inline u16 ena_com_cdesc_rx_pkt_get(struct ena_com_io_cq *io_cq,
return count;
}
-static inline bool ena_com_meta_desc_changed(struct ena_com_io_sq *io_sq,
- struct ena_com_tx_ctx *ena_tx_ctx)
-{
- int rc;
-
- if (ena_tx_ctx->meta_valid) {
- rc = memcmp(&io_sq->cached_tx_meta,
- &ena_tx_ctx->ena_meta,
- sizeof(struct ena_com_tx_meta));
-
- if (unlikely(rc != 0))
- return true;
- }
-
- return false;
-}
-
-static inline int ena_com_create_and_store_tx_meta_desc(struct ena_com_io_sq *io_sq,
+static int ena_com_create_and_store_tx_meta_desc(struct ena_com_io_sq *io_sq,
struct ena_com_tx_ctx *ena_tx_ctx)
{
struct ena_eth_io_tx_meta_desc *meta_desc = NULL;
@@ -340,7 +334,7 @@ static inline int ena_com_create_and_store_tx_meta_desc(struct ena_com_io_sq *io
return ena_com_sq_update_tail(io_sq);
}
-static inline void ena_com_rx_set_flags(struct ena_com_rx_ctx *ena_rx_ctx,
+static void ena_com_rx_set_flags(struct ena_com_rx_ctx *ena_rx_ctx,
struct ena_eth_io_rx_cdesc_base *cdesc)
{
ena_rx_ctx->l3_proto = cdesc->status &
diff --git a/drivers/net/ethernet/amazon/ena/ena_eth_com.h b/drivers/net/ethernet/amazon/ena/ena_eth_com.h
index 340d02b64ca6..77986c0ea52c 100644
--- a/drivers/net/ethernet/amazon/ena/ena_eth_com.h
+++ b/drivers/net/ethernet/amazon/ena/ena_eth_com.h
@@ -125,8 +125,55 @@ static inline bool ena_com_sq_have_enough_space(struct ena_com_io_sq *io_sq,
return ena_com_free_desc(io_sq) > temp;
}
+static inline bool ena_com_meta_desc_changed(struct ena_com_io_sq *io_sq,
+ struct ena_com_tx_ctx *ena_tx_ctx)
+{
+ if (!ena_tx_ctx->meta_valid)
+ return false;
+
+ return !!memcmp(&io_sq->cached_tx_meta,
+ &ena_tx_ctx->ena_meta,
+ sizeof(struct ena_com_tx_meta));
+}
+
+static inline bool is_llq_max_tx_burst_exists(struct ena_com_io_sq *io_sq)
+{
+ return (io_sq->mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV) &&
+ io_sq->llq_info.max_entries_in_tx_burst > 0;
+}
+
+static inline bool ena_com_is_doorbell_needed(struct ena_com_io_sq *io_sq,
+ struct ena_com_tx_ctx *ena_tx_ctx)
+{
+ struct ena_com_llq_info *llq_info;
+ int descs_after_first_entry;
+ int num_entries_needed = 1;
+ u16 num_descs;
+
+ if (!is_llq_max_tx_burst_exists(io_sq))
+ return false;
+
+ llq_info = &io_sq->llq_info;
+ num_descs = ena_tx_ctx->num_bufs;
+
+ if (unlikely(ena_com_meta_desc_changed(io_sq, ena_tx_ctx)))
+ ++num_descs;
+
+ if (num_descs > llq_info->descs_num_before_header) {
+ descs_after_first_entry = num_descs - llq_info->descs_num_before_header;
+ num_entries_needed += DIV_ROUND_UP(descs_after_first_entry,
+ llq_info->descs_per_entry);
+ }
+
+ pr_debug("queue: %d num_descs: %d num_entries_needed: %d\n", io_sq->qid,
+ num_descs, num_entries_needed);
+
+ return num_entries_needed > io_sq->entries_in_tx_burst_left;
+}
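Worked through with made-up numbers: a packet with 5 buffer descriptors plus a changed meta descriptor (6 descriptors total), descs_num_before_header = 2 and descs_per_entry = 4 needs 1 + DIV_ROUND_UP(6 - 2, 4) = 2 LLQ entries, so the doorbell is forced whenever fewer than 2 entries remain in the burst budget. A standalone model of that calculation:

#include <stdbool.h>
#include <stdio.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

int main(void)
{
	/* all values hypothetical */
	int num_descs = 5;			/* buffer descriptors for the skb */
	bool meta_changed = true;		/* adds one meta descriptor */
	int descs_num_before_header = 2;	/* from llq_info */
	int descs_per_entry = 4;		/* from llq_info */
	int entries_in_tx_burst_left = 1;	/* budget left in this burst */

	int num_entries_needed = 1;

	if (meta_changed)
		num_descs++;
	if (num_descs > descs_num_before_header)
		num_entries_needed += DIV_ROUND_UP(num_descs - descs_num_before_header,
						   descs_per_entry);

	printf("entries needed: %d, doorbell needed: %s\n", num_entries_needed,
	       num_entries_needed > entries_in_tx_burst_left ? "yes" : "no");
	return 0;
}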
+
static inline int ena_com_write_sq_doorbell(struct ena_com_io_sq *io_sq)
{
+ u16 max_entries_in_tx_burst = io_sq->llq_info.max_entries_in_tx_burst;
u16 tail = io_sq->tail;
pr_debug("write submission queue doorbell for queue: %d tail: %d\n",
@@ -134,6 +181,12 @@ static inline int ena_com_write_sq_doorbell(struct ena_com_io_sq *io_sq)
writel(tail, io_sq->db_addr);
+ if (is_llq_max_tx_burst_exists(io_sq)) {
+ pr_debug("reset available entries in tx burst for queue %d to %d\n",
+ io_sq->qid, max_entries_in_tx_burst);
+ io_sq->entries_in_tx_burst_left = max_entries_in_tx_burst;
+ }
+
return 0;
}
@@ -142,15 +195,17 @@ static inline int ena_com_update_dev_comp_head(struct ena_com_io_cq *io_cq)
u16 unreported_comp, head;
bool need_update;
- head = io_cq->head;
- unreported_comp = head - io_cq->last_head_update;
- need_update = unreported_comp > (io_cq->q_depth / ENA_COMP_HEAD_THRESH);
-
- if (io_cq->cq_head_db_reg && need_update) {
- pr_debug("Write completion queue doorbell for queue %d: head: %d\n",
- io_cq->qid, head);
- writel(head, io_cq->cq_head_db_reg);
- io_cq->last_head_update = head;
+ if (unlikely(io_cq->cq_head_db_reg)) {
+ head = io_cq->head;
+ unreported_comp = head - io_cq->last_head_update;
+ need_update = unreported_comp > (io_cq->q_depth / ENA_COMP_HEAD_THRESH);
+
+ if (unlikely(need_update)) {
+ pr_debug("Write completion queue doorbell for queue %d: head: %d\n",
+ io_cq->qid, head);
+ writel(head, io_cq->cq_head_db_reg);
+ io_cq->last_head_update = head;
+ }
}
return 0;
diff --git a/drivers/net/ethernet/amazon/ena/ena_ethtool.c b/drivers/net/ethernet/amazon/ena/ena_ethtool.c
index fe596bc30a96..b997c3ce9e2b 100644
--- a/drivers/net/ethernet/amazon/ena/ena_ethtool.c
+++ b/drivers/net/ethernet/amazon/ena/ena_ethtool.c
@@ -88,13 +88,14 @@ static const struct ena_stats ena_stats_tx_strings[] = {
static const struct ena_stats ena_stats_rx_strings[] = {
ENA_STAT_RX_ENTRY(cnt),
ENA_STAT_RX_ENTRY(bytes),
+ ENA_STAT_RX_ENTRY(rx_copybreak_pkt),
+ ENA_STAT_RX_ENTRY(csum_good),
ENA_STAT_RX_ENTRY(refil_partial),
ENA_STAT_RX_ENTRY(bad_csum),
ENA_STAT_RX_ENTRY(page_alloc_fail),
ENA_STAT_RX_ENTRY(skb_alloc_fail),
ENA_STAT_RX_ENTRY(dma_mapping_err),
ENA_STAT_RX_ENTRY(bad_desc_num),
- ENA_STAT_RX_ENTRY(rx_copybreak_pkt),
ENA_STAT_RX_ENTRY(bad_req_id),
ENA_STAT_RX_ENTRY(empty_rx_ring),
ENA_STAT_RX_ENTRY(csum_unchecked),
@@ -447,13 +448,32 @@ static void ena_get_ringparam(struct net_device *netdev,
struct ethtool_ringparam *ring)
{
struct ena_adapter *adapter = netdev_priv(netdev);
- struct ena_ring *tx_ring = &adapter->tx_ring[0];
- struct ena_ring *rx_ring = &adapter->rx_ring[0];
- ring->rx_max_pending = rx_ring->ring_size;
- ring->tx_max_pending = tx_ring->ring_size;
- ring->rx_pending = rx_ring->ring_size;
- ring->tx_pending = tx_ring->ring_size;
+ ring->tx_max_pending = adapter->max_tx_ring_size;
+ ring->rx_max_pending = adapter->max_rx_ring_size;
+ ring->tx_pending = adapter->tx_ring[0].ring_size;
+ ring->rx_pending = adapter->rx_ring[0].ring_size;
+}
+
+static int ena_set_ringparam(struct net_device *netdev,
+ struct ethtool_ringparam *ring)
+{
+ struct ena_adapter *adapter = netdev_priv(netdev);
+ u32 new_tx_size, new_rx_size;
+
+ new_tx_size = ring->tx_pending < ENA_MIN_RING_SIZE ?
+ ENA_MIN_RING_SIZE : ring->tx_pending;
+ new_tx_size = rounddown_pow_of_two(new_tx_size);
+
+ new_rx_size = ring->rx_pending < ENA_MIN_RING_SIZE ?
+ ENA_MIN_RING_SIZE : ring->rx_pending;
+ new_rx_size = rounddown_pow_of_two(new_rx_size);
+
+ if (new_tx_size == adapter->requested_tx_ring_size &&
+ new_rx_size == adapter->requested_rx_ring_size)
+ return 0;
+
+ return ena_update_queue_sizes(adapter, new_tx_size, new_rx_size);
}
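Requested sizes are first raised to ENA_MIN_RING_SIZE (256, added in ena_netdev.h below) and then rounded down to a power of two, so a requested 1000-entry TX ring becomes 512 and a requested 100-entry RX ring becomes 256. A standalone model (rounddown_pow_of_two() re-implemented here only for the example):

#include <stdio.h>

#define ENA_MIN_RING_SIZE 256

/* userspace stand-in for the kernel's rounddown_pow_of_two() */
static unsigned int rounddown_pow_of_two(unsigned int n)
{
	unsigned int p = 1;

	while (p * 2 <= n)
		p *= 2;
	return p;
}

static unsigned int fix_ring_size(unsigned int requested)
{
	unsigned int size = requested < ENA_MIN_RING_SIZE ? ENA_MIN_RING_SIZE : requested;

	return rounddown_pow_of_two(size);
}

int main(void)
{
	printf("tx_pending 1000 -> %u\n", fix_ring_size(1000));	/* 512 */
	printf("rx_pending  100 -> %u\n", fix_ring_size(100));	/* 256 */
	return 0;
}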
static u32 ena_flow_hash_to_flow_type(u16 hash_fields)
@@ -807,6 +827,7 @@ static const struct ethtool_ops ena_ethtool_ops = {
.get_coalesce = ena_get_coalesce,
.set_coalesce = ena_set_coalesce,
.get_ringparam = ena_get_ringparam,
+ .set_ringparam = ena_set_ringparam,
.get_sset_count = ena_get_sset_count,
.get_strings = ena_get_strings,
.get_ethtool_stats = ena_get_ethtool_stats,
diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
index 9c83642922c7..664e3ed97ea9 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
@@ -182,7 +182,7 @@ static void ena_init_io_rings(struct ena_adapter *adapter)
ena_init_io_rings_common(adapter, rxr, i);
/* TX specific ring state */
- txr->ring_size = adapter->tx_ring_size;
+ txr->ring_size = adapter->requested_tx_ring_size;
txr->tx_max_header_size = ena_dev->tx_max_header_size;
txr->tx_mem_queue_type = ena_dev->tx_mem_queue_type;
txr->sgl_size = adapter->max_tx_sgl_size;
@@ -190,7 +190,7 @@ static void ena_init_io_rings(struct ena_adapter *adapter)
ena_com_get_nonadaptive_moderation_interval_tx(ena_dev);
/* RX specific ring state */
- rxr->ring_size = adapter->rx_ring_size;
+ rxr->ring_size = adapter->requested_rx_ring_size;
rxr->rx_copybreak = adapter->rx_copybreak;
rxr->sgl_size = adapter->max_rx_sgl_size;
rxr->smoothed_interval =
@@ -228,11 +228,11 @@ static int ena_setup_tx_resources(struct ena_adapter *adapter, int qid)
}
size = sizeof(u16) * tx_ring->ring_size;
- tx_ring->free_tx_ids = vzalloc_node(size, node);
- if (!tx_ring->free_tx_ids) {
- tx_ring->free_tx_ids = vzalloc(size);
- if (!tx_ring->free_tx_ids)
- goto err_free_tx_ids;
+ tx_ring->free_ids = vzalloc_node(size, node);
+ if (!tx_ring->free_ids) {
+ tx_ring->free_ids = vzalloc(size);
+ if (!tx_ring->free_ids)
+ goto err_tx_free_ids;
}
size = tx_ring->tx_max_header_size;
@@ -245,7 +245,7 @@ static int ena_setup_tx_resources(struct ena_adapter *adapter, int qid)
/* Req id ring for TX out of order completions */
for (i = 0; i < tx_ring->ring_size; i++)
- tx_ring->free_tx_ids[i] = i;
+ tx_ring->free_ids[i] = i;
/* Reset tx statistics */
memset(&tx_ring->tx_stats, 0x0, sizeof(tx_ring->tx_stats));
@@ -256,9 +256,9 @@ static int ena_setup_tx_resources(struct ena_adapter *adapter, int qid)
return 0;
err_push_buf_intermediate_buf:
- vfree(tx_ring->free_tx_ids);
- tx_ring->free_tx_ids = NULL;
-err_free_tx_ids:
+ vfree(tx_ring->free_ids);
+ tx_ring->free_ids = NULL;
+err_tx_free_ids:
vfree(tx_ring->tx_buffer_info);
tx_ring->tx_buffer_info = NULL;
err_tx_buffer_info:
@@ -278,8 +278,8 @@ static void ena_free_tx_resources(struct ena_adapter *adapter, int qid)
vfree(tx_ring->tx_buffer_info);
tx_ring->tx_buffer_info = NULL;
- vfree(tx_ring->free_tx_ids);
- tx_ring->free_tx_ids = NULL;
+ vfree(tx_ring->free_ids);
+ tx_ring->free_ids = NULL;
vfree(tx_ring->push_buf_intermediate_buf);
tx_ring->push_buf_intermediate_buf = NULL;
@@ -326,7 +326,7 @@ static void ena_free_all_io_tx_resources(struct ena_adapter *adapter)
ena_free_tx_resources(adapter, i);
}
-static inline int validate_rx_req_id(struct ena_ring *rx_ring, u16 req_id)
+static int validate_rx_req_id(struct ena_ring *rx_ring, u16 req_id)
{
if (likely(req_id < rx_ring->ring_size))
return 0;
@@ -377,10 +377,10 @@ static int ena_setup_rx_resources(struct ena_adapter *adapter,
}
size = sizeof(u16) * rx_ring->ring_size;
- rx_ring->free_rx_ids = vzalloc_node(size, node);
- if (!rx_ring->free_rx_ids) {
- rx_ring->free_rx_ids = vzalloc(size);
- if (!rx_ring->free_rx_ids) {
+ rx_ring->free_ids = vzalloc_node(size, node);
+ if (!rx_ring->free_ids) {
+ rx_ring->free_ids = vzalloc(size);
+ if (!rx_ring->free_ids) {
vfree(rx_ring->rx_buffer_info);
rx_ring->rx_buffer_info = NULL;
return -ENOMEM;
@@ -389,7 +389,7 @@ static int ena_setup_rx_resources(struct ena_adapter *adapter,
/* Req id ring for receiving RX pkts out of order */
for (i = 0; i < rx_ring->ring_size; i++)
- rx_ring->free_rx_ids[i] = i;
+ rx_ring->free_ids[i] = i;
/* Reset rx statistics */
memset(&rx_ring->rx_stats, 0x0, sizeof(rx_ring->rx_stats));
@@ -415,8 +415,8 @@ static void ena_free_rx_resources(struct ena_adapter *adapter,
vfree(rx_ring->rx_buffer_info);
rx_ring->rx_buffer_info = NULL;
- vfree(rx_ring->free_rx_ids);
- rx_ring->free_rx_ids = NULL;
+ vfree(rx_ring->free_ids);
+ rx_ring->free_ids = NULL;
}
/* ena_setup_all_rx_resources - allocate I/O Rx queues resources for all queues
@@ -460,7 +460,7 @@ static void ena_free_all_io_rx_resources(struct ena_adapter *adapter)
ena_free_rx_resources(adapter, i);
}
-static inline int ena_alloc_rx_page(struct ena_ring *rx_ring,
+static int ena_alloc_rx_page(struct ena_ring *rx_ring,
struct ena_rx_buffer *rx_info, gfp_t gfp)
{
struct ena_com_buf *ena_buf;
@@ -531,7 +531,7 @@ static int ena_refill_rx_bufs(struct ena_ring *rx_ring, u32 num)
for (i = 0; i < num; i++) {
struct ena_rx_buffer *rx_info;
- req_id = rx_ring->free_rx_ids[next_to_use];
+ req_id = rx_ring->free_ids[next_to_use];
rc = validate_rx_req_id(rx_ring, req_id);
if (unlikely(rc < 0))
break;
@@ -594,7 +594,6 @@ static void ena_free_rx_bufs(struct ena_adapter *adapter,
/* ena_refill_all_rx_bufs - allocate all queues Rx buffers
* @adapter: board private structure
- *
*/
static void ena_refill_all_rx_bufs(struct ena_adapter *adapter)
{
@@ -621,7 +620,7 @@ static void ena_free_all_rx_bufs(struct ena_adapter *adapter)
ena_free_rx_bufs(adapter, i);
}
-static inline void ena_unmap_tx_skb(struct ena_ring *tx_ring,
+static void ena_unmap_tx_skb(struct ena_ring *tx_ring,
struct ena_tx_buffer *tx_info)
{
struct ena_com_buf *ena_buf;
@@ -797,7 +796,7 @@ static int ena_clean_tx_irq(struct ena_ring *tx_ring, u32 budget)
tx_pkts++;
total_done += tx_info->tx_descs;
- tx_ring->free_tx_ids[next_to_clean] = req_id;
+ tx_ring->free_ids[next_to_clean] = req_id;
next_to_clean = ENA_TX_RING_IDX_NEXT(next_to_clean,
tx_ring->ring_size);
}
@@ -911,7 +910,7 @@ static struct sk_buff *ena_rx_skb(struct ena_ring *rx_ring,
skb_put(skb, len);
skb->protocol = eth_type_trans(skb, rx_ring->netdev);
- rx_ring->free_rx_ids[*next_to_clean] = req_id;
+ rx_ring->free_ids[*next_to_clean] = req_id;
*next_to_clean = ENA_RX_RING_IDX_ADD(*next_to_clean, descs,
rx_ring->ring_size);
return skb;
@@ -935,7 +934,7 @@ static struct sk_buff *ena_rx_skb(struct ena_ring *rx_ring,
rx_info->page = NULL;
- rx_ring->free_rx_ids[*next_to_clean] = req_id;
+ rx_ring->free_ids[*next_to_clean] = req_id;
*next_to_clean =
ENA_RX_RING_IDX_NEXT(*next_to_clean,
rx_ring->ring_size);
@@ -956,7 +955,7 @@ static struct sk_buff *ena_rx_skb(struct ena_ring *rx_ring,
* @ena_rx_ctx: received packet context/metadata
* @skb: skb currently being received and modified
*/
-static inline void ena_rx_checksum(struct ena_ring *rx_ring,
+static void ena_rx_checksum(struct ena_ring *rx_ring,
struct ena_com_rx_ctx *ena_rx_ctx,
struct sk_buff *skb)
{
@@ -1001,6 +1000,9 @@ static inline void ena_rx_checksum(struct ena_ring *rx_ring,
if (likely(ena_rx_ctx->l4_csum_checked)) {
skb->ip_summed = CHECKSUM_UNNECESSARY;
+ u64_stats_update_begin(&rx_ring->syncp);
+ rx_ring->rx_stats.csum_good++;
+ u64_stats_update_end(&rx_ring->syncp);
} else {
u64_stats_update_begin(&rx_ring->syncp);
rx_ring->rx_stats.csum_unchecked++;
@@ -1088,7 +1090,7 @@ static int ena_clean_rx_irq(struct ena_ring *rx_ring, struct napi_struct *napi,
/* exit if we failed to retrieve a buffer */
if (unlikely(!skb)) {
for (i = 0; i < ena_rx_ctx.descs; i++) {
- rx_ring->free_tx_ids[next_to_clean] =
+ rx_ring->free_ids[next_to_clean] =
rx_ring->ena_bufs[i].req_id;
next_to_clean =
ENA_RX_RING_IDX_NEXT(next_to_clean,
@@ -1153,7 +1155,7 @@ error:
return 0;
}
-inline void ena_adjust_intr_moderation(struct ena_ring *rx_ring,
+void ena_adjust_intr_moderation(struct ena_ring *rx_ring,
struct ena_ring *tx_ring)
{
/* We apply adaptive moderation on Rx path only.
@@ -1172,7 +1174,7 @@ inline void ena_adjust_intr_moderation(struct ena_ring *rx_ring,
rx_ring->per_napi_bytes = 0;
}
-static inline void ena_unmask_interrupt(struct ena_ring *tx_ring,
+static void ena_unmask_interrupt(struct ena_ring *tx_ring,
struct ena_ring *rx_ring)
{
struct ena_eth_io_intr_reg intr_reg;
@@ -1192,7 +1194,7 @@ static inline void ena_unmask_interrupt(struct ena_ring *tx_ring,
ena_com_unmask_intr(rx_ring->ena_com_io_cq, &intr_reg);
}
-static inline void ena_update_ring_numa_node(struct ena_ring *tx_ring,
+static void ena_update_ring_numa_node(struct ena_ring *tx_ring,
struct ena_ring *rx_ring)
{
int cpu = get_cpu();
@@ -1635,7 +1637,7 @@ static int ena_create_io_tx_queue(struct ena_adapter *adapter, int qid)
ctx.qid = ena_qid;
ctx.mem_queue_type = ena_dev->tx_mem_queue_type;
ctx.msix_vector = msix_vector;
- ctx.queue_size = adapter->tx_ring_size;
+ ctx.queue_size = tx_ring->ring_size;
ctx.numa_node = cpu_to_node(tx_ring->cpu);
rc = ena_com_create_io_queue(ena_dev, &ctx);
@@ -1702,7 +1704,7 @@ static int ena_create_io_rx_queue(struct ena_adapter *adapter, int qid)
ctx.direction = ENA_COM_IO_QUEUE_DIRECTION_RX;
ctx.mem_queue_type = ENA_ADMIN_PLACEMENT_POLICY_HOST;
ctx.msix_vector = msix_vector;
- ctx.queue_size = adapter->rx_ring_size;
+ ctx.queue_size = rx_ring->ring_size;
ctx.numa_node = cpu_to_node(rx_ring->cpu);
rc = ena_com_create_io_queue(ena_dev, &ctx);
@@ -1749,6 +1751,112 @@ create_err:
return rc;
}
+static void set_io_rings_size(struct ena_adapter *adapter,
+ int new_tx_size, int new_rx_size)
+{
+ int i;
+
+ for (i = 0; i < adapter->num_queues; i++) {
+ adapter->tx_ring[i].ring_size = new_tx_size;
+ adapter->rx_ring[i].ring_size = new_rx_size;
+ }
+}
+
+/* This function allows queue allocation to back off when the system is
+ * low on memory. If there is not enough memory to allocate io queues
+ * the driver will try to allocate smaller queues.
+ *
+ * The backoff algorithm is as follows:
+ * 1. Try to allocate TX and RX; if successful:
+ * 1.1. return success
+ *
+ * 2. Halve the size of the larger of the RX and TX queues (or both, if they are the same size).
+ *
+ * 3. If either TX or RX is now smaller than ENA_MIN_RING_SIZE (256):
+ * 3.1. return failure.
+ * 4. else:
+ * 4.1. go back to 1.
+ * (A worked example of this walk-down is sketched after the function below.)
+ */
+static int create_queues_with_size_backoff(struct ena_adapter *adapter)
+{
+ int rc, cur_rx_ring_size, cur_tx_ring_size;
+ int new_rx_ring_size, new_tx_ring_size;
+
+ /* current queue sizes might be set to smaller than the requested
+ * ones due to past queue allocation failures.
+ */
+ set_io_rings_size(adapter, adapter->requested_tx_ring_size,
+ adapter->requested_rx_ring_size);
+
+ while (1) {
+ rc = ena_setup_all_tx_resources(adapter);
+ if (rc)
+ goto err_setup_tx;
+
+ rc = ena_create_all_io_tx_queues(adapter);
+ if (rc)
+ goto err_create_tx_queues;
+
+ rc = ena_setup_all_rx_resources(adapter);
+ if (rc)
+ goto err_setup_rx;
+
+ rc = ena_create_all_io_rx_queues(adapter);
+ if (rc)
+ goto err_create_rx_queues;
+
+ return 0;
+
+err_create_rx_queues:
+ ena_free_all_io_rx_resources(adapter);
+err_setup_rx:
+ ena_destroy_all_tx_queues(adapter);
+err_create_tx_queues:
+ ena_free_all_io_tx_resources(adapter);
+err_setup_tx:
+ if (rc != -ENOMEM) {
+ netif_err(adapter, ifup, adapter->netdev,
+ "Queue creation failed with error code %d\n",
+ rc);
+ return rc;
+ }
+
+ cur_tx_ring_size = adapter->tx_ring[0].ring_size;
+ cur_rx_ring_size = adapter->rx_ring[0].ring_size;
+
+ netif_err(adapter, ifup, adapter->netdev,
+ "Not enough memory to create queues with sizes TX=%d, RX=%d\n",
+ cur_tx_ring_size, cur_rx_ring_size);
+
+ new_tx_ring_size = cur_tx_ring_size;
+ new_rx_ring_size = cur_rx_ring_size;
+
+ /* Decrease the size of the larger queue, or
+ * decrease both if they are the same size.
+ */
+ if (cur_rx_ring_size <= cur_tx_ring_size)
+ new_tx_ring_size = cur_tx_ring_size / 2;
+ if (cur_rx_ring_size >= cur_tx_ring_size)
+ new_rx_ring_size = cur_rx_ring_size / 2;
+
+ if (new_tx_ring_size < ENA_MIN_RING_SIZE ||
+ new_rx_ring_size < ENA_MIN_RING_SIZE) {
+ netif_err(adapter, ifup, adapter->netdev,
+ "Queue creation failed with the smallest possible queue size of %d for both queues. Not retrying with smaller queues\n",
+ ENA_MIN_RING_SIZE);
+ return rc;
+ }
+
+ netif_err(adapter, ifup, adapter->netdev,
+ "Retrying queue creation with sizes TX=%d, RX=%d\n",
+ new_tx_ring_size,
+ new_rx_ring_size);
+
+ set_io_rings_size(adapter, new_tx_ring_size,
+ new_rx_ring_size);
+ }
+}
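With the default 1024-entry rings, persistent -ENOMEM would walk the sizes down as 1024/1024 -> 512/512 -> 256/256 and then give up, since halving again would fall below ENA_MIN_RING_SIZE. A compact standalone model of that walk-down (allocation failure is simulated):

#include <stdbool.h>
#include <stdio.h>

#define ENA_DEFAULT_RING_SIZE	1024
#define ENA_MIN_RING_SIZE	256

/* pretend every allocation attempt fails with -ENOMEM */
static bool try_alloc(int tx, int rx)
{
	(void)tx;
	(void)rx;
	return false;
}

int main(void)
{
	int cur_tx = ENA_DEFAULT_RING_SIZE, cur_rx = ENA_DEFAULT_RING_SIZE;

	while (1) {
		int new_tx = cur_tx, new_rx = cur_rx;

		printf("trying TX=%d RX=%d\n", cur_tx, cur_rx);
		if (try_alloc(cur_tx, cur_rx))
			return 0;

		/* shrink the larger ring, or both when they are equal */
		if (cur_rx <= cur_tx)
			new_tx = cur_tx / 2;
		if (cur_rx >= cur_tx)
			new_rx = cur_rx / 2;

		if (new_tx < ENA_MIN_RING_SIZE || new_rx < ENA_MIN_RING_SIZE) {
			printf("giving up: cannot go below %d\n", ENA_MIN_RING_SIZE);
			return 1;
		}

		cur_tx = new_tx;
		cur_rx = new_rx;
	}
}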
+
static int ena_up(struct ena_adapter *adapter)
{
int rc, i;
@@ -1768,25 +1876,9 @@ static int ena_up(struct ena_adapter *adapter)
if (rc)
goto err_req_irq;
- /* allocate transmit descriptors */
- rc = ena_setup_all_tx_resources(adapter);
+ rc = create_queues_with_size_backoff(adapter);
if (rc)
- goto err_setup_tx;
-
- /* allocate receive descriptors */
- rc = ena_setup_all_rx_resources(adapter);
- if (rc)
- goto err_setup_rx;
-
- /* Create TX queues */
- rc = ena_create_all_io_tx_queues(adapter);
- if (rc)
- goto err_create_tx_queues;
-
- /* Create RX queues */
- rc = ena_create_all_io_rx_queues(adapter);
- if (rc)
- goto err_create_rx_queues;
+ goto err_create_queues_with_backoff;
rc = ena_up_complete(adapter);
if (rc)
@@ -1815,14 +1907,11 @@ static int ena_up(struct ena_adapter *adapter)
return rc;
err_up:
- ena_destroy_all_rx_queues(adapter);
-err_create_rx_queues:
ena_destroy_all_tx_queues(adapter);
-err_create_tx_queues:
- ena_free_all_io_rx_resources(adapter);
-err_setup_rx:
ena_free_all_io_tx_resources(adapter);
-err_setup_tx:
+ ena_destroy_all_rx_queues(adapter);
+ ena_free_all_io_rx_resources(adapter);
+err_create_queues_with_backoff:
ena_free_io_irq(adapter);
err_req_irq:
ena_del_napi(adapter);
@@ -1942,6 +2031,20 @@ static int ena_close(struct net_device *netdev)
return 0;
}
+int ena_update_queue_sizes(struct ena_adapter *adapter,
+ u32 new_tx_size,
+ u32 new_rx_size)
+{
+ bool dev_up;
+
+ dev_up = test_bit(ENA_FLAG_DEV_UP, &adapter->flags);
+ ena_close(adapter->netdev);
+ adapter->requested_tx_ring_size = new_tx_size;
+ adapter->requested_rx_ring_size = new_rx_size;
+ ena_init_io_rings(adapter);
+ return dev_up ? ena_up(adapter) : 0;
+}
+
static void ena_tx_csum(struct ena_com_tx_ctx *ena_tx_ctx, struct sk_buff *skb)
{
u32 mss = skb_shinfo(skb)->gso_size;
@@ -2152,7 +2255,7 @@ static netdev_tx_t ena_start_xmit(struct sk_buff *skb, struct net_device *dev)
skb_tx_timestamp(skb);
next_to_use = tx_ring->next_to_use;
- req_id = tx_ring->free_tx_ids[next_to_use];
+ req_id = tx_ring->free_ids[next_to_use];
tx_info = &tx_ring->tx_buffer_info[req_id];
tx_info->num_of_bufs = 0;
@@ -2172,6 +2275,13 @@ static netdev_tx_t ena_start_xmit(struct sk_buff *skb, struct net_device *dev)
/* set flags and meta data */
ena_tx_csum(&ena_tx_ctx, skb);
+ if (unlikely(ena_com_is_doorbell_needed(tx_ring->ena_com_io_sq, &ena_tx_ctx))) {
+ netif_dbg(adapter, tx_queued, dev,
+ "llq tx max burst size of queue %d achieved, writing doorbell to send burst\n",
+ qid);
+ ena_com_write_sq_doorbell(tx_ring->ena_com_io_sq);
+ }
+
/* prepare the packet's descriptors to dma engine */
rc = ena_com_prepare_tx(tx_ring->ena_com_io_sq, &ena_tx_ctx,
&nb_hw_desc);
@@ -2447,13 +2557,6 @@ static int ena_device_validate_params(struct ena_adapter *adapter,
return -EINVAL;
}
- if ((get_feat_ctx->max_queues.max_cq_num < adapter->num_queues) ||
- (get_feat_ctx->max_queues.max_sq_num < adapter->num_queues)) {
- netif_err(adapter, drv, netdev,
- "Error, device doesn't support enough queues\n");
- return -EINVAL;
- }
-
if (get_feat_ctx->dev_attr.max_mtu < netdev->mtu) {
netif_err(adapter, drv, netdev,
"Error, device max mtu is smaller than netdev MTU\n");
@@ -3027,18 +3130,32 @@ static int ena_calc_io_queue_num(struct pci_dev *pdev,
struct ena_com_dev *ena_dev,
struct ena_com_dev_get_features_ctx *get_feat_ctx)
{
- int io_sq_num, io_queue_num;
+ int io_tx_sq_num, io_tx_cq_num, io_rx_num, io_queue_num;
- /* In case of LLQ use the llq number in the get feature cmd */
+ if (ena_dev->supported_features & BIT(ENA_ADMIN_MAX_QUEUES_EXT)) {
+ struct ena_admin_queue_ext_feature_fields *max_queue_ext =
+ &get_feat_ctx->max_queue_ext.max_queue_ext;
+ io_rx_num = min_t(int, max_queue_ext->max_rx_sq_num,
+ max_queue_ext->max_rx_cq_num);
+
+ io_tx_sq_num = max_queue_ext->max_tx_sq_num;
+ io_tx_cq_num = max_queue_ext->max_tx_cq_num;
+ } else {
+ struct ena_admin_queue_feature_desc *max_queues =
+ &get_feat_ctx->max_queues;
+ io_tx_sq_num = max_queues->max_sq_num;
+ io_tx_cq_num = max_queues->max_cq_num;
+ io_rx_num = min_t(int, io_tx_sq_num, io_tx_cq_num);
+ }
+
+ /* In case of LLQ use the llq fields for the tx SQ/CQ */
if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV)
- io_sq_num = get_feat_ctx->llq.max_llq_num;
- else
- io_sq_num = get_feat_ctx->max_queues.max_sq_num;
+ io_tx_sq_num = get_feat_ctx->llq.max_llq_num;
io_queue_num = min_t(int, num_online_cpus(), ENA_MAX_NUM_IO_QUEUES);
- io_queue_num = min_t(int, io_queue_num, io_sq_num);
- io_queue_num = min_t(int, io_queue_num,
- get_feat_ctx->max_queues.max_cq_num);
+ io_queue_num = min_t(int, io_queue_num, io_rx_num);
+ io_queue_num = min_t(int, io_queue_num, io_tx_sq_num);
+ io_queue_num = min_t(int, io_queue_num, io_tx_cq_num);
 /* 1 IRQ for mgmnt and 1 IRQ for each IO direction */
io_queue_num = min_t(int, io_queue_num, pci_msix_vec_count(pdev) - 1);
if (unlikely(!io_queue_num)) {
@@ -3212,7 +3329,7 @@ static void ena_release_bars(struct ena_com_dev *ena_dev, struct pci_dev *pdev)
pci_release_selected_regions(pdev, release_bars);
}
-static inline void set_default_llq_configurations(struct ena_llq_configurations *llq_config)
+static void set_default_llq_configurations(struct ena_llq_configurations *llq_config)
{
llq_config->llq_header_location = ENA_ADMIN_INLINE_HEADER;
llq_config->llq_ring_entry_size = ENA_ADMIN_LIST_ENTRY_SIZE_128B;
@@ -3221,36 +3338,70 @@ static inline void set_default_llq_configurations(struct ena_llq_configurations
llq_config->llq_ring_entry_size_value = 128;
}
-static int ena_calc_queue_size(struct pci_dev *pdev,
- struct ena_com_dev *ena_dev,
- u16 *max_tx_sgl_size,
- u16 *max_rx_sgl_size,
- struct ena_com_dev_get_features_ctx *get_feat_ctx)
+static int ena_calc_queue_size(struct ena_calc_queue_size_ctx *ctx)
{
- u32 queue_size = ENA_DEFAULT_RING_SIZE;
+ struct ena_admin_feature_llq_desc *llq = &ctx->get_feat_ctx->llq;
+ struct ena_com_dev *ena_dev = ctx->ena_dev;
+ u32 tx_queue_size = ENA_DEFAULT_RING_SIZE;
+ u32 rx_queue_size = ENA_DEFAULT_RING_SIZE;
+ u32 max_tx_queue_size;
+ u32 max_rx_queue_size;
- queue_size = min_t(u32, queue_size,
- get_feat_ctx->max_queues.max_cq_depth);
- queue_size = min_t(u32, queue_size,
- get_feat_ctx->max_queues.max_sq_depth);
+ if (ctx->ena_dev->supported_features & BIT(ENA_ADMIN_MAX_QUEUES_EXT)) {
+ struct ena_admin_queue_ext_feature_fields *max_queue_ext =
+ &ctx->get_feat_ctx->max_queue_ext.max_queue_ext;
+ max_rx_queue_size = min_t(u32, max_queue_ext->max_rx_cq_depth,
+ max_queue_ext->max_rx_sq_depth);
+ max_tx_queue_size = max_queue_ext->max_tx_cq_depth;
- if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV)
- queue_size = min_t(u32, queue_size,
- get_feat_ctx->llq.max_llq_depth);
+ if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV)
+ max_tx_queue_size = min_t(u32, max_tx_queue_size,
+ llq->max_llq_depth);
+ else
+ max_tx_queue_size = min_t(u32, max_tx_queue_size,
+ max_queue_ext->max_tx_sq_depth);
- queue_size = rounddown_pow_of_two(queue_size);
+ ctx->max_tx_sgl_size = min_t(u16, ENA_PKT_MAX_BUFS,
+ max_queue_ext->max_per_packet_tx_descs);
+ ctx->max_rx_sgl_size = min_t(u16, ENA_PKT_MAX_BUFS,
+ max_queue_ext->max_per_packet_rx_descs);
+ } else {
+ struct ena_admin_queue_feature_desc *max_queues =
+ &ctx->get_feat_ctx->max_queues;
+ max_rx_queue_size = min_t(u32, max_queues->max_cq_depth,
+ max_queues->max_sq_depth);
+ max_tx_queue_size = max_queues->max_cq_depth;
+
+ if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV)
+ max_tx_queue_size = min_t(u32, max_tx_queue_size,
+ llq->max_llq_depth);
+ else
+ max_tx_queue_size = min_t(u32, max_tx_queue_size,
+ max_queues->max_sq_depth);
- if (unlikely(!queue_size)) {
- dev_err(&pdev->dev, "Invalid queue size\n");
- return -EFAULT;
+ ctx->max_tx_sgl_size = min_t(u16, ENA_PKT_MAX_BUFS,
+ max_queues->max_packet_tx_descs);
+ ctx->max_rx_sgl_size = min_t(u16, ENA_PKT_MAX_BUFS,
+ max_queues->max_packet_rx_descs);
}
- *max_tx_sgl_size = min_t(u16, ENA_PKT_MAX_BUFS,
- get_feat_ctx->max_queues.max_packet_tx_descs);
- *max_rx_sgl_size = min_t(u16, ENA_PKT_MAX_BUFS,
- get_feat_ctx->max_queues.max_packet_rx_descs);
+ max_tx_queue_size = rounddown_pow_of_two(max_tx_queue_size);
+ max_rx_queue_size = rounddown_pow_of_two(max_rx_queue_size);
+
+ tx_queue_size = clamp_val(tx_queue_size, ENA_MIN_RING_SIZE,
+ max_tx_queue_size);
+ rx_queue_size = clamp_val(rx_queue_size, ENA_MIN_RING_SIZE,
+ max_rx_queue_size);
- return queue_size;
+ tx_queue_size = rounddown_pow_of_two(tx_queue_size);
+ rx_queue_size = rounddown_pow_of_two(rx_queue_size);
+
+ ctx->max_tx_queue_size = max_tx_queue_size;
+ ctx->max_rx_queue_size = max_rx_queue_size;
+ ctx->tx_queue_size = tx_queue_size;
+ ctx->rx_queue_size = rx_queue_size;
+
+ return 0;
}
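Taking the legacy (non-extended) path with host placement as an example and hypothetical device limits of max_cq_depth = 8192 and max_sq_depth = 700: both the TX and RX maxima end up as rounddown_pow_of_two(700) = 512, and the 1024-entry default is then clamped to 512. A standalone model of that arithmetic (clamp_val() and rounddown_pow_of_two() re-implemented only for the example):

#include <stdio.h>

#define ENA_DEFAULT_RING_SIZE	1024
#define ENA_MIN_RING_SIZE	256

static unsigned int rounddown_pow_of_two(unsigned int n)
{
	unsigned int p = 1;

	while (p * 2 <= n)
		p *= 2;
	return p;
}

static unsigned int clamp_val(unsigned int v, unsigned int lo, unsigned int hi)
{
	return v < lo ? lo : (v > hi ? hi : v);
}

int main(void)
{
	/* hypothetical device limits, legacy feature path, host placement:
	 * both TX and RX are bounded by min(max_cq_depth, max_sq_depth)
	 */
	unsigned int max_cq_depth = 8192, max_sq_depth = 700;
	unsigned int max_depth = max_cq_depth < max_sq_depth ? max_cq_depth : max_sq_depth;

	unsigned int max_queue_size = rounddown_pow_of_two(max_depth);	/* 512 */
	unsigned int queue_size = rounddown_pow_of_two(
		clamp_val(ENA_DEFAULT_RING_SIZE, ENA_MIN_RING_SIZE, max_queue_size));

	printf("max queue size: %u, initial queue size: %u\n",
	       max_queue_size, queue_size);				/* 512, 512 */
	return 0;
}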
/* ena_probe - Device Initialization Routine
@@ -3266,23 +3417,19 @@ static int ena_calc_queue_size(struct pci_dev *pdev,
static int ena_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
struct ena_com_dev_get_features_ctx get_feat_ctx;
- static int version_printed;
- struct net_device *netdev;
- struct ena_adapter *adapter;
+ struct ena_calc_queue_size_ctx calc_queue_ctx = { 0 };
struct ena_llq_configurations llq_config;
struct ena_com_dev *ena_dev = NULL;
- char *queue_type_str;
- static int adapters_found;
+ struct ena_adapter *adapter;
int io_queue_num, bars, rc;
- int queue_size;
- u16 tx_sgl_size = 0;
- u16 rx_sgl_size = 0;
+ struct net_device *netdev;
+ static int adapters_found;
+ char *queue_type_str;
bool wd_state;
dev_dbg(&pdev->dev, "%s\n", __func__);
- if (version_printed++ == 0)
- dev_info(&pdev->dev, "%s", version);
+ dev_info_once(&pdev->dev, "%s", version);
rc = pci_enable_device_mem(pdev);
if (rc) {
@@ -3334,20 +3481,25 @@ static int ena_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
goto err_device_destroy;
}
+ calc_queue_ctx.ena_dev = ena_dev;
+ calc_queue_ctx.get_feat_ctx = &get_feat_ctx;
+ calc_queue_ctx.pdev = pdev;
+
/* initial Tx interrupt delay, Assumes 1 usec granularity.
* Updated during device initialization with the real granularity
*/
ena_dev->intr_moder_tx_interval = ENA_INTR_INITIAL_TX_INTERVAL_USECS;
io_queue_num = ena_calc_io_queue_num(pdev, ena_dev, &get_feat_ctx);
- queue_size = ena_calc_queue_size(pdev, ena_dev, &tx_sgl_size,
- &rx_sgl_size, &get_feat_ctx);
- if ((queue_size <= 0) || (io_queue_num <= 0)) {
+ rc = ena_calc_queue_size(&calc_queue_ctx);
+ if (rc || io_queue_num <= 0) {
rc = -EFAULT;
goto err_device_destroy;
}
- dev_info(&pdev->dev, "creating %d io queues. queue size: %d. LLQ is %s\n",
- io_queue_num, queue_size,
+ dev_info(&pdev->dev, "creating %d io queues. rx queue size: %d tx queue size. %d LLQ is %s\n",
+ io_queue_num,
+ calc_queue_ctx.rx_queue_size,
+ calc_queue_ctx.tx_queue_size,
(ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV) ?
"ENABLED" : "DISABLED");
@@ -3373,11 +3525,12 @@ static int ena_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
adapter->msg_enable = netif_msg_init(debug, DEFAULT_MSG_ENABLE);
adapter->reset_reason = ENA_REGS_RESET_NORMAL;
- adapter->tx_ring_size = queue_size;
- adapter->rx_ring_size = queue_size;
-
- adapter->max_tx_sgl_size = tx_sgl_size;
- adapter->max_rx_sgl_size = rx_sgl_size;
+ adapter->requested_tx_ring_size = calc_queue_ctx.tx_queue_size;
+ adapter->requested_rx_ring_size = calc_queue_ctx.rx_queue_size;
+ adapter->max_tx_ring_size = calc_queue_ctx.max_tx_queue_size;
+ adapter->max_rx_ring_size = calc_queue_ctx.max_rx_queue_size;
+ adapter->max_tx_sgl_size = calc_queue_ctx.max_tx_sgl_size;
+ adapter->max_rx_sgl_size = calc_queue_ctx.max_rx_sgl_size;
adapter->num_queues = io_queue_num;
adapter->last_monitored_tx_qid = 0;
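For reference, the queue sizing introduced in ena_calc_queue_size() above boils down to rounding the device maximum down to a power of two, clamping the requested size into [ENA_MIN_RING_SIZE, max], and rounding the result down again. A minimal user-space sketch of that arithmetic, with stand-ins for the kernel's rounddown_pow_of_two()/clamp_val() helpers and made-up sample numbers:

#include <stdio.h>

#define ENA_MIN_RING_SIZE 256u

/* stand-in for the kernel's rounddown_pow_of_two() */
static unsigned int rounddown_pow_of_two(unsigned int v)
{
	unsigned int p = 1;

	while (p * 2 <= v)
		p *= 2;
	return p;
}

/* stand-in for the kernel's clamp_val() */
static unsigned int clamp_val(unsigned int v, unsigned int lo, unsigned int hi)
{
	return v < lo ? lo : (v > hi ? hi : v);
}

int main(void)
{
	unsigned int max_tx = 1000;	/* hypothetical device limit */
	unsigned int req_tx = 4096;	/* hypothetical requested ring size */

	max_tx = rounddown_pow_of_two(max_tx);			/* 512 */
	req_tx = clamp_val(req_tx, ENA_MIN_RING_SIZE, max_tx);
	req_tx = rounddown_pow_of_two(req_tx);			/* 512 */

	printf("max ring size %u, requested ring size %u\n", max_tx, req_tx);
	return 0;
}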
diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.h b/drivers/net/ethernet/amazon/ena/ena_netdev.h
index 63870072cbbd..efbcffd22215 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.h
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.h
@@ -44,8 +44,8 @@
#include "ena_eth_com.h"
#define DRV_MODULE_VER_MAJOR 2
-#define DRV_MODULE_VER_MINOR 0
-#define DRV_MODULE_VER_SUBMINOR 3
+#define DRV_MODULE_VER_MINOR 1
+#define DRV_MODULE_VER_SUBMINOR 0
#define DRV_MODULE_NAME "ena"
#ifndef DRV_MODULE_VERSION
@@ -79,6 +79,7 @@
#define ENA_BAR_MASK (BIT(ENA_REG_BAR) | BIT(ENA_MEM_BAR))
#define ENA_DEFAULT_RING_SIZE (1024)
+#define ENA_MIN_RING_SIZE (256)
#define ENA_TX_WAKEUP_THRESH (MAX_SKB_FRAGS + 2)
#define ENA_DEFAULT_RX_COPYBREAK (256 - NET_IP_ALIGN)
@@ -154,6 +155,18 @@ struct ena_napi {
u32 qid;
};
+struct ena_calc_queue_size_ctx {
+ struct ena_com_dev_get_features_ctx *get_feat_ctx;
+ struct ena_com_dev *ena_dev;
+ struct pci_dev *pdev;
+ u16 tx_queue_size;
+ u16 rx_queue_size;
+ u16 max_tx_queue_size;
+ u16 max_rx_queue_size;
+ u16 max_tx_sgl_size;
+ u16 max_rx_sgl_size;
+};
+
struct ena_tx_buffer {
struct sk_buff *skb;
/* num of ena desc for this specific skb
@@ -208,26 +221,24 @@ struct ena_stats_tx {
struct ena_stats_rx {
u64 cnt;
u64 bytes;
+ u64 rx_copybreak_pkt;
+ u64 csum_good;
u64 refil_partial;
u64 bad_csum;
u64 page_alloc_fail;
u64 skb_alloc_fail;
u64 dma_mapping_err;
u64 bad_desc_num;
- u64 rx_copybreak_pkt;
u64 bad_req_id;
u64 empty_rx_ring;
u64 csum_unchecked;
};
struct ena_ring {
- union {
- /* Holds the empty requests for TX/RX
- * out of order completions
- */
- u16 *free_tx_ids;
- u16 *free_rx_ids;
- };
+ /* Holds the empty requests for TX/RX
+ * out of order completions
+ */
+ u16 *free_ids;
union {
struct ena_tx_buffer *tx_buffer_info;
@@ -321,8 +332,11 @@ struct ena_adapter {
u32 tx_usecs, rx_usecs; /* interrupt moderation */
u32 tx_frames, rx_frames; /* interrupt moderation */
- u32 tx_ring_size;
- u32 rx_ring_size;
+ u32 requested_tx_ring_size;
+ u32 requested_rx_ring_size;
+
+ u32 max_tx_ring_size;
+ u32 max_rx_ring_size;
u32 msg_enable;
@@ -372,6 +386,10 @@ void ena_dump_stats_to_dmesg(struct ena_adapter *adapter);
void ena_dump_stats_to_buf(struct ena_adapter *adapter, u8 *buf);
+int ena_update_queue_sizes(struct ena_adapter *adapter,
+ u32 new_tx_size,
+ u32 new_rx_size);
+
int ena_get_sset_count(struct net_device *netdev, int sset);
#endif /* !(ENA_H) */
diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_cfg.h b/drivers/net/ethernet/aquantia/atlantic/aq_cfg.h
index 173be45463ee..02f1b70c4e25 100644
--- a/drivers/net/ethernet/aquantia/atlantic/aq_cfg.h
+++ b/drivers/net/ethernet/aquantia/atlantic/aq_cfg.h
@@ -9,6 +9,8 @@
#ifndef AQ_CFG_H
#define AQ_CFG_H
+#include <generated/utsrelease.h>
+
#define AQ_CFG_VECS_DEF 8U
#define AQ_CFG_TCS_DEF 1U
@@ -86,10 +88,7 @@
#define AQ_CFG_DRV_AUTHOR "aQuantia"
#define AQ_CFG_DRV_DESC "aQuantia Corporation(R) Network Driver"
#define AQ_CFG_DRV_NAME "atlantic"
-#define AQ_CFG_DRV_VERSION __stringify(NIC_MAJOR_DRIVER_VERSION)"."\
- __stringify(NIC_MINOR_DRIVER_VERSION)"."\
- __stringify(NIC_BUILD_DRIVER_VERSION)"."\
- __stringify(NIC_REVISION_DRIVER_VERSION) \
+#define AQ_CFG_DRV_VERSION UTS_RELEASE \
AQ_CFG_DRV_VERSION_SUFFIX
#endif /* AQ_CFG_H */
diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_drvinfo.c b/drivers/net/ethernet/aquantia/atlantic/aq_drvinfo.c
index adad6a7acabe..6da65099047d 100644
--- a/drivers/net/ethernet/aquantia/atlantic/aq_drvinfo.c
+++ b/drivers/net/ethernet/aquantia/atlantic/aq_drvinfo.c
@@ -1,4 +1,4 @@
-// SPDX-License-Identifier: GPL-2.0-or-later
+// SPDX-License-Identifier: GPL-2.0-only
/* Copyright (C) 2014-2019 aQuantia Corporation. */
/* File aq_drvinfo.c: Definition of common code for firmware info in sys.*/
diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_drvinfo.h b/drivers/net/ethernet/aquantia/atlantic/aq_drvinfo.h
index 41fbb1358068..23a0487893a7 100644
--- a/drivers/net/ethernet/aquantia/atlantic/aq_drvinfo.h
+++ b/drivers/net/ethernet/aquantia/atlantic/aq_drvinfo.h
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: GPL-2.0-or-later */
+/* SPDX-License-Identifier: GPL-2.0-only */
/* Copyright (C) 2014-2017 aQuantia Corporation. */
/* File aq_drvinfo.h: Declaration of common code for firmware info in sys.*/
diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_filters.c b/drivers/net/ethernet/aquantia/atlantic/aq_filters.c
index 1fff462a4175..440690b18734 100644
--- a/drivers/net/ethernet/aquantia/atlantic/aq_filters.c
+++ b/drivers/net/ethernet/aquantia/atlantic/aq_filters.c
@@ -1,4 +1,4 @@
-// SPDX-License-Identifier: GPL-2.0-or-later
+// SPDX-License-Identifier: GPL-2.0-only
/* Copyright (C) 2014-2017 aQuantia Corporation. */
/* File aq_filters.c: RX filters related functions. */
diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_filters.h b/drivers/net/ethernet/aquantia/atlantic/aq_filters.h
index c6a08c6585d5..122e06c88a33 100644
--- a/drivers/net/ethernet/aquantia/atlantic/aq_filters.h
+++ b/drivers/net/ethernet/aquantia/atlantic/aq_filters.h
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: GPL-2.0-or-later */
+/* SPDX-License-Identifier: GPL-2.0-only */
/* Copyright (C) 2014-2017 aQuantia Corporation. */
/* File aq_filters.h: RX filters related functions. */
diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_main.c b/drivers/net/ethernet/aquantia/atlantic/aq_main.c
index 5315df5ff6f8..100722ad5c2d 100644
--- a/drivers/net/ethernet/aquantia/atlantic/aq_main.c
+++ b/drivers/net/ethernet/aquantia/atlantic/aq_main.c
@@ -108,11 +108,16 @@ err_exit:
static int aq_ndev_set_features(struct net_device *ndev,
netdev_features_t features)
{
+ bool is_vlan_rx_strip = !!(features & NETIF_F_HW_VLAN_CTAG_RX);
+ bool is_vlan_tx_insert = !!(features & NETIF_F_HW_VLAN_CTAG_TX);
struct aq_nic_s *aq_nic = netdev_priv(ndev);
- struct aq_nic_cfg_s *aq_cfg = aq_nic_get_cfg(aq_nic);
+ bool need_ndev_restart = false;
+ struct aq_nic_cfg_s *aq_cfg;
bool is_lro = false;
int err = 0;
+ aq_cfg = aq_nic_get_cfg(aq_nic);
+
if (!(features & NETIF_F_NTUPLE)) {
if (aq_nic->ndev->features & NETIF_F_NTUPLE) {
err = aq_clear_rxnfc_all_rules(aq_nic);
@@ -135,17 +140,32 @@ static int aq_ndev_set_features(struct net_device *ndev,
if (aq_cfg->is_lro != is_lro) {
aq_cfg->is_lro = is_lro;
-
- if (netif_running(ndev)) {
- aq_ndev_close(ndev);
- aq_ndev_open(ndev);
- }
+ need_ndev_restart = true;
}
}
- if ((aq_nic->ndev->features ^ features) & NETIF_F_RXCSUM)
+
+ if ((aq_nic->ndev->features ^ features) & NETIF_F_RXCSUM) {
err = aq_nic->aq_hw_ops->hw_set_offload(aq_nic->aq_hw,
aq_cfg);
+ if (unlikely(err))
+ goto err_exit;
+ }
+
+ if (aq_cfg->is_vlan_rx_strip != is_vlan_rx_strip) {
+ aq_cfg->is_vlan_rx_strip = is_vlan_rx_strip;
+ need_ndev_restart = true;
+ }
+ if (aq_cfg->is_vlan_tx_insert != is_vlan_tx_insert) {
+ aq_cfg->is_vlan_tx_insert = is_vlan_tx_insert;
+ need_ndev_restart = true;
+ }
+
+ if (need_ndev_restart && netif_running(ndev)) {
+ aq_ndev_close(ndev);
+ aq_ndev_open(ndev);
+ }
+
err_exit:
return err;
}
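The reworked aq_ndev_set_features() above compares the old and new offload masks, records whether any toggle needs the interface bounced, and restarts it once at the end rather than per feature. A small stand-alone sketch of that bookkeeping; the feature bit names and the split between "needs restart" and "on the fly" are invented for illustration:

#include <stdbool.h>
#include <stdio.h>

/* invented feature bits, standing in for NETIF_F_* flags */
#define F_LRO     (1u << 0)
#define F_VLAN_RX (1u << 1)
#define F_VLAN_TX (1u << 2)
#define F_RXCSUM  (1u << 3)

static void apply_features(unsigned int old_feat, unsigned int new_feat)
{
	unsigned int changed = old_feat ^ new_feat;
	bool need_restart = false;

	/* toggles that require reprogramming the rings */
	if (changed & (F_LRO | F_VLAN_RX | F_VLAN_TX))
		need_restart = true;

	/* toggles that can be applied on the fly */
	if (changed & F_RXCSUM)
		printf("update checksum offload to %d\n", !!(new_feat & F_RXCSUM));

	if (need_restart)
		printf("close and reopen the interface once\n");
}

int main(void)
{
	apply_features(F_LRO | F_RXCSUM, F_LRO | F_VLAN_RX);
	return 0;
}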
diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
index 41172fbebddd..e1392766e21e 100644
--- a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
+++ b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
@@ -126,6 +126,8 @@ void aq_nic_cfg_start(struct aq_nic_s *self)
cfg->link_speed_msk &= cfg->aq_hw_caps->link_speed_msk;
cfg->features = cfg->aq_hw_caps->hw_features;
+ cfg->is_vlan_rx_strip = !!(cfg->features & NETIF_F_HW_VLAN_CTAG_RX);
+ cfg->is_vlan_tx_insert = !!(cfg->features & NETIF_F_HW_VLAN_CTAG_TX);
cfg->is_vlan_force_promisc = true;
}
@@ -286,7 +288,8 @@ void aq_nic_ndev_init(struct aq_nic_s *self)
self->ndev->hw_features |= aq_hw_caps->hw_features;
self->ndev->features = aq_hw_caps->hw_features;
self->ndev->vlan_features |= NETIF_F_HW_CSUM | NETIF_F_RXCSUM |
- NETIF_F_RXHASH | NETIF_F_SG | NETIF_F_LRO;
+ NETIF_F_RXHASH | NETIF_F_SG |
+ NETIF_F_LRO | NETIF_F_TSO;
self->ndev->priv_flags = aq_hw_caps->hw_priv_flags;
self->ndev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
@@ -427,26 +430,37 @@ static unsigned int aq_nic_map_skb(struct aq_nic_s *self,
unsigned int dx = ring->sw_tail;
struct aq_ring_buff_s *first = NULL;
struct aq_ring_buff_s *dx_buff = &ring->buff_ring[dx];
+ bool need_context_tag = false;
+
+ dx_buff->flags = 0U;
if (unlikely(skb_is_gso(skb))) {
- dx_buff->flags = 0U;
+ dx_buff->mss = skb_shinfo(skb)->gso_size;
+ dx_buff->is_gso = 1U;
dx_buff->len_pkt = skb->len;
dx_buff->len_l2 = ETH_HLEN;
dx_buff->len_l3 = ip_hdrlen(skb);
dx_buff->len_l4 = tcp_hdrlen(skb);
- dx_buff->mss = skb_shinfo(skb)->gso_size;
- dx_buff->is_txc = 1U;
dx_buff->eop_index = 0xffffU;
-
dx_buff->is_ipv6 =
(ip_hdr(skb)->version == 6) ? 1U : 0U;
+ need_context_tag = true;
+ }
+
+ if (self->aq_nic_cfg.is_vlan_tx_insert && skb_vlan_tag_present(skb)) {
+ dx_buff->vlan_tx_tag = skb_vlan_tag_get(skb);
+ dx_buff->len_pkt = skb->len;
+ dx_buff->is_vlan = 1U;
+ need_context_tag = true;
+ }
+ if (need_context_tag) {
dx = aq_ring_next_dx(ring, dx);
dx_buff = &ring->buff_ring[dx];
+ dx_buff->flags = 0U;
++ret;
}
- dx_buff->flags = 0U;
dx_buff->len = skb_headlen(skb);
dx_buff->pa = dma_map_single(aq_nic_get_dev(self),
skb->data,
@@ -535,7 +549,7 @@ mapping_error:
--ret, dx = aq_ring_next_dx(ring, dx)) {
dx_buff = &ring->buff_ring[dx];
- if (!dx_buff->is_txc && dx_buff->pa) {
+ if (!dx_buff->is_gso && !dx_buff->is_vlan && dx_buff->pa) {
if (unlikely(dx_buff->is_sop)) {
dma_unmap_single(aq_nic_get_dev(self),
dx_buff->pa,
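In the aq_nic_map_skb() change above, a single extra context descriptor is emitted when either TSO metadata or a VLAN tag must be handed to the hardware, instead of one per feature. A rough sketch of that descriptor accounting; the struct and field names are illustrative, not the driver's:

#include <stdbool.h>
#include <stdio.h>

/* illustrative packet metadata, not the driver's struct */
struct pkt_meta {
	bool is_gso;
	bool has_vlan_tag;
	int  nr_frags;	/* data fragments to map */
};

/* number of TX descriptors the packet will consume */
static int descs_needed(const struct pkt_meta *m)
{
	int n = m->nr_frags;

	/* one shared context descriptor covers both GSO and VLAN insertion */
	if (m->is_gso || m->has_vlan_tag)
		n += 1;

	return n;
}

int main(void)
{
	struct pkt_meta m = { .is_gso = true, .has_vlan_tag = true, .nr_frags = 3 };

	printf("descriptors needed: %d\n", descs_needed(&m));	/* 4, not 5 */
	return 0;
}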
diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_nic.h b/drivers/net/ethernet/aquantia/atlantic/aq_nic.h
index 0f22f5d5691b..255b54a6ae07 100644
--- a/drivers/net/ethernet/aquantia/atlantic/aq_nic.h
+++ b/drivers/net/ethernet/aquantia/atlantic/aq_nic.h
@@ -35,6 +35,8 @@ struct aq_nic_cfg_s {
u32 flow_control;
u32 link_speed_msk;
u32 wol;
+ u8 is_vlan_rx_strip;
+ u8 is_vlan_tx_insert;
bool is_vlan_force_promisc;
u16 is_mc_list_enabled;
u16 mc_list_count;
diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
index 2a7b91ed17c5..3901d7994ca1 100644
--- a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
+++ b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
@@ -409,6 +409,10 @@ int aq_ring_rx_clean(struct aq_ring_s *self,
}
}
+ if (buff->is_vlan)
+ __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
+ buff->vlan_rx_tag);
+
skb->protocol = eth_type_trans(skb, ndev);
aq_rx_checksum(self, buff, skb);
diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_ring.h b/drivers/net/ethernet/aquantia/atlantic/aq_ring.h
index 6bd67210d0b7..47abd09d06c2 100644
--- a/drivers/net/ethernet/aquantia/atlantic/aq_ring.h
+++ b/drivers/net/ethernet/aquantia/atlantic/aq_ring.h
@@ -27,7 +27,7 @@ struct aq_rxpage {
* +----------+----------+----------+-----------
* 4/8bytes|len pkt |len pkt | | skb
* +----------+----------+----------+-----------
- * 4/8bytes|is_txc |len,flags |len |len,is_eop
+ * 4/8bytes|is_gso |len,flags |len |len,is_eop
* +----------+----------+----------+-----------
*
* This aq_ring_buff_s doesn't have endianness dependency.
@@ -44,6 +44,7 @@ struct __packed aq_ring_buff_s {
u8 is_hash_l4;
u8 rsvd1;
struct aq_rxpage rxdata;
+ u16 vlan_rx_tag;
};
/* EOP */
struct {
@@ -59,6 +60,7 @@ struct __packed aq_ring_buff_s {
u8 is_ipv6:1;
u8 rsvd2:7;
u32 len_pkt;
+ u16 vlan_tx_tag;
};
};
union {
@@ -70,11 +72,12 @@ struct __packed aq_ring_buff_s {
u32 is_cso_err:1;
u32 is_sop:1;
u32 is_eop:1;
- u32 is_txc:1;
+ u32 is_gso:1;
u32 is_mapped:1;
u32 is_cleaned:1;
u32 is_error:1;
- u32 rsvd3:6;
+ u32 is_vlan:1;
+ u32 rsvd3:5;
u16 eop_index;
u16 rsvd4;
};
diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_a0.c b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_a0.c
index 0f140a9fe404..359a4d387185 100644
--- a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_a0.c
+++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_a0.c
@@ -451,7 +451,7 @@ static int hw_atl_a0_hw_ring_tx_xmit(struct aq_hw_s *self,
buff = &ring->buff_ring[ring->sw_tail];
- if (buff->is_txc) {
+ if (buff->is_gso) {
txd->ctl |= (buff->len_l3 << 31) |
(buff->len_l2 << 24) |
HW_ATL_A0_TXD_CTL_CMD_TCP |
diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c
index 13ac2661a473..30f7fc4c97ff 100644
--- a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c
+++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c
@@ -40,7 +40,9 @@
NETIF_F_TSO | \
NETIF_F_LRO | \
NETIF_F_NTUPLE | \
- NETIF_F_HW_VLAN_CTAG_FILTER, \
+ NETIF_F_HW_VLAN_CTAG_FILTER | \
+ NETIF_F_HW_VLAN_CTAG_RX | \
+ NETIF_F_HW_VLAN_CTAG_TX, \
.hw_priv_flags = IFF_UNICAST_FLT, \
.flow_control = true, \
.mtu = HW_ATL_B0_MTU_JUMBO, \
@@ -245,6 +247,9 @@ static int hw_atl_b0_hw_offload_set(struct aq_hw_s *self,
/* LSO offloads*/
hw_atl_tdm_large_send_offload_en_set(self, 0xFFFFFFFFU);
+ /* Outer VLAN tag offload */
+ hw_atl_rpo_outer_vlan_tag_mode_set(self, 1U);
+
/* LRO offloads */
{
unsigned int val = (8U < HW_ATL_B0_LRO_RXD_MAX) ? 0x3U :
@@ -487,6 +492,7 @@ static int hw_atl_b0_hw_ring_tx_xmit(struct aq_hw_s *self,
unsigned int buff_pa_len = 0U;
unsigned int pkt_len = 0U;
unsigned int frag_count = 0U;
+ bool is_vlan = false;
bool is_gso = false;
buff = &ring->buff_ring[ring->sw_tail];
@@ -501,36 +507,44 @@ static int hw_atl_b0_hw_ring_tx_xmit(struct aq_hw_s *self,
buff = &ring->buff_ring[ring->sw_tail];
- if (buff->is_txc) {
+ if (buff->is_gso) {
+ txd->ctl |= HW_ATL_B0_TXD_CTL_CMD_TCP;
+ txd->ctl |= HW_ATL_B0_TXD_CTL_DESC_TYPE_TXC;
txd->ctl |= (buff->len_l3 << 31) |
- (buff->len_l2 << 24) |
- HW_ATL_B0_TXD_CTL_CMD_TCP |
- HW_ATL_B0_TXD_CTL_DESC_TYPE_TXC;
- txd->ctl2 |= (buff->mss << 16) |
- (buff->len_l4 << 8) |
- (buff->len_l3 >> 1);
+ (buff->len_l2 << 24);
+ txd->ctl2 |= (buff->mss << 16);
+ is_gso = true;
pkt_len -= (buff->len_l4 +
buff->len_l3 +
buff->len_l2);
- is_gso = true;
-
if (buff->is_ipv6)
txd->ctl |= HW_ATL_B0_TXD_CTL_CMD_IPV6;
- } else {
+ txd->ctl2 |= (buff->len_l4 << 8) |
+ (buff->len_l3 >> 1);
+ }
+ if (buff->is_vlan) {
+ txd->ctl |= HW_ATL_B0_TXD_CTL_DESC_TYPE_TXC;
+ txd->ctl |= buff->vlan_tx_tag << 4;
+ is_vlan = true;
+ }
+ if (!buff->is_gso && !buff->is_vlan) {
buff_pa_len = buff->len;
txd->buf_addr = buff->pa;
txd->ctl |= (HW_ATL_B0_TXD_CTL_BLEN &
((u32)buff_pa_len << 4));
txd->ctl |= HW_ATL_B0_TXD_CTL_DESC_TYPE_TXD;
+
/* PAY_LEN */
txd->ctl2 |= HW_ATL_B0_TXD_CTL2_LEN & (pkt_len << 14);
- if (is_gso) {
- txd->ctl |= HW_ATL_B0_TXD_CTL_CMD_LSO;
+ if (is_gso || is_vlan) {
+ /* enable tx context */
txd->ctl2 |= HW_ATL_B0_TXD_CTL2_CTX_EN;
}
+ if (is_gso)
+ txd->ctl |= HW_ATL_B0_TXD_CTL_CMD_LSO;
/* Tx checksum offloads */
if (buff->is_ip_cso)
@@ -539,13 +553,16 @@ static int hw_atl_b0_hw_ring_tx_xmit(struct aq_hw_s *self,
if (buff->is_udp_cso || buff->is_tcp_cso)
txd->ctl |= HW_ATL_B0_TXD_CTL_CMD_TUCSO;
+ if (is_vlan)
+ txd->ctl |= HW_ATL_B0_TXD_CTL_CMD_VLAN;
+
if (unlikely(buff->is_eop)) {
txd->ctl |= HW_ATL_B0_TXD_CTL_EOP;
txd->ctl |= HW_ATL_B0_TXD_CTL_CMD_WB;
is_gso = false;
+ is_vlan = false;
}
}
-
ring->sw_tail = aq_ring_next_dx(ring, ring->sw_tail);
}
@@ -559,6 +576,7 @@ static int hw_atl_b0_hw_ring_rx_init(struct aq_hw_s *self,
{
u32 dma_desc_addr_lsw = (u32)aq_ring->dx_ring_pa;
u32 dma_desc_addr_msw = (u32)(((u64)aq_ring->dx_ring_pa) >> 32);
+ u32 vlan_rx_stripping = self->aq_nic_cfg->is_vlan_rx_strip;
hw_atl_rdm_rx_desc_en_set(self, false, aq_ring->idx);
@@ -578,7 +596,8 @@ static int hw_atl_b0_hw_ring_rx_init(struct aq_hw_s *self,
hw_atl_rdm_rx_desc_head_buff_size_set(self, 0U, aq_ring->idx);
hw_atl_rdm_rx_desc_head_splitting_set(self, 0U, aq_ring->idx);
- hw_atl_rpo_rx_desc_vlan_stripping_set(self, 0U, aq_ring->idx);
+ hw_atl_rpo_rx_desc_vlan_stripping_set(self, !!vlan_rx_stripping,
+ aq_ring->idx);
/* Rx ring set mode */
@@ -681,11 +700,15 @@ static int hw_atl_b0_hw_ring_rx_receive(struct aq_hw_s *self,
buff = &ring->buff_ring[ring->hw_head];
+ buff->flags = 0U;
+ buff->is_hash_l4 = 0U;
+
rx_stat = (0x0000003CU & rxd_wb->status) >> 2;
is_rx_check_sum_enabled = (rxd_wb->type >> 19) & 0x3U;
- pkt_type = 0xFFU & (rxd_wb->type >> 4);
+ pkt_type = (rxd_wb->type & HW_ATL_B0_RXD_WB_STAT_PKTTYPE) >>
+ HW_ATL_B0_RXD_WB_STAT_PKTTYPE_SHIFT;
if (is_rx_check_sum_enabled & BIT(0) &&
(0x0U == (pkt_type & 0x3U)))
@@ -706,6 +729,13 @@ static int hw_atl_b0_hw_ring_rx_receive(struct aq_hw_s *self,
buff->is_cso_err = 0U;
}
+ if (self->aq_nic_cfg->is_vlan_rx_strip &&
+ ((pkt_type & HW_ATL_B0_RXD_WB_PKTTYPE_VLAN) ||
+ (pkt_type & HW_ATL_B0_RXD_WB_PKTTYPE_VLAN_DOUBLE))) {
+ buff->is_vlan = 1;
+ buff->vlan_rx_tag = le16_to_cpu(rxd_wb->vlan);
+ }
+
if ((rx_stat & BIT(0)) || rxd_wb->type & 0x1000U) {
/* MAC error or DMA error */
buff->is_error = 1U;
diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0_internal.h b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0_internal.h
index e4ba2ccf9830..808d8cd4252a 100644
--- a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0_internal.h
+++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0_internal.h
@@ -107,10 +107,17 @@
#define HW_ATL_B0_RXD_NCEA0 (0x1)
#define HW_ATL_B0_RXD_WB_STAT_RSSTYPE (0x0000000F)
+#define HW_ATL_B0_RXD_WB_STAT_RSSTYPE_SHIFT (0x0)
#define HW_ATL_B0_RXD_WB_STAT_PKTTYPE (0x00000FF0)
+#define HW_ATL_B0_RXD_WB_STAT_PKTTYPE_SHIFT (0x4)
#define HW_ATL_B0_RXD_WB_STAT_RXCTRL (0x00180000)
+#define HW_ATL_B0_RXD_WB_STAT_RXCTRL_SHIFT (0x13)
#define HW_ATL_B0_RXD_WB_STAT_SPLHDR (0x00200000)
#define HW_ATL_B0_RXD_WB_STAT_HDRLEN (0xFFC00000)
+#define HW_ATL_B0_RXD_WB_STAT_HDRLEN_SHIFT (0x16)
+
+#define HW_ATL_B0_RXD_WB_PKTTYPE_VLAN BIT(5)
+#define HW_ATL_B0_RXD_WB_PKTTYPE_VLAN_DOUBLE BIT(6)
#define HW_ATL_B0_RXD_WB_STAT2_DD (0x0001)
#define HW_ATL_B0_RXD_WB_STAT2_EOP (0x0002)
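The RX completion path now pulls the packet-type field out of the write-back word with the new PKTTYPE mask/shift pair and tests its VLAN bits. The same extraction in a self-contained form, using local copies of the masks and a fabricated sample word:

#include <stdio.h>

#define RXD_WB_STAT_PKTTYPE        0x00000FF0u
#define RXD_WB_STAT_PKTTYPE_SHIFT  4
#define RXD_WB_PKTTYPE_VLAN        (1u << 5)
#define RXD_WB_PKTTYPE_VLAN_DOUBLE (1u << 6)

int main(void)
{
	unsigned int type = 0x00000A30u;	/* fabricated descriptor write-back word */
	unsigned int pkt_type;

	pkt_type = (type & RXD_WB_STAT_PKTTYPE) >> RXD_WB_STAT_PKTTYPE_SHIFT;

	if (pkt_type & (RXD_WB_PKTTYPE_VLAN | RXD_WB_PKTTYPE_VLAN_DOUBLE))
		printf("packet carries a VLAN tag (pkt_type=0x%02x)\n", pkt_type);
	else
		printf("no VLAN tag (pkt_type=0x%02x)\n", pkt_type);

	return 0;
}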
diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh.c b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh.c
index 451529069f28..1149812ae463 100644
--- a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh.c
+++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh.c
@@ -1004,6 +1004,22 @@ void hw_atl_rpo_rx_desc_vlan_stripping_set(struct aq_hw_s *aq_hw,
rx_desc_vlan_stripping);
}
+void hw_atl_rpo_outer_vlan_tag_mode_set(void *context,
+ u32 outervlantagmode)
+{
+ aq_hw_write_reg_bit(context, HW_ATL_RPO_OUTER_VL_INS_MODE_ADR,
+ HW_ATL_RPO_OUTER_VL_INS_MODE_MSK,
+ HW_ATL_RPO_OUTER_VL_INS_MODE_SHIFT,
+ outervlantagmode);
+}
+
+u32 hw_atl_rpo_outer_vlan_tag_mode_get(void *context)
+{
+ return aq_hw_read_reg_bit(context, HW_ATL_RPO_OUTER_VL_INS_MODE_ADR,
+ HW_ATL_RPO_OUTER_VL_INS_MODE_MSK,
+ HW_ATL_RPO_OUTER_VL_INS_MODE_SHIFT);
+}
+
void hw_atl_rpo_tcp_udp_crc_offload_en_set(struct aq_hw_s *aq_hw,
u32 tcp_udp_crc_offload_en)
{
diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh.h b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh.h
index 34b42ce43512..0c37abbabca5 100644
--- a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh.h
+++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh.h
@@ -488,6 +488,11 @@ void hw_atl_rpo_rx_desc_vlan_stripping_set(struct aq_hw_s *aq_hw,
u32 rx_desc_vlan_stripping,
u32 descriptor);
+void hw_atl_rpo_outer_vlan_tag_mode_set(void *context,
+ u32 outervlantagmode);
+
+u32 hw_atl_rpo_outer_vlan_tag_mode_get(void *context);
+
/* set tcp/udp checksum offload enable */
void hw_atl_rpo_tcp_udp_crc_offload_en_set(struct aq_hw_s *aq_hw,
u32 tcp_udp_crc_offload_en);
diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh_internal.h b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh_internal.h
index fc1446f737bb..c3febcdfa92e 100644
--- a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh_internal.h
+++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh_internal.h
@@ -1383,6 +1383,24 @@
/* default value of bitfield l4_chk_en */
#define HW_ATL_RPOL4CHK_EN_DEFAULT 0x0
+/* RX outer_vl_ins_mode Bitfield Definitions
+ * Preprocessor definitions for the bitfield "outer_vl_ins_mode".
+ * PORT="pif_rpo_outer_vl_mode_i"
+ */
+
+/* Register address for bitfield outer_vl_ins_mode */
+#define HW_ATL_RPO_OUTER_VL_INS_MODE_ADR 0x00005580
+/* Bitmask for bitfield outer_vl_ins_mode */
+#define HW_ATL_RPO_OUTER_VL_INS_MODE_MSK 0x00000004
+/* Inverted bitmask for bitfield outer_vl_ins_mode */
+#define HW_ATL_RPO_OUTER_VL_INS_MODE_MSKN 0xFFFFFFFB
+/* Lower bit position of bitfield outer_vl_ins_mode */
+#define HW_ATL_RPO_OUTER_VL_INS_MODE_SHIFT 2
+/* Width of bitfield outer_vl_ins_mode */
+#define HW_ATL_RPO_OUTER_VL_INS_MODE_WIDTH 1
+/* Default value of bitfield outer_vl_ins_mode */
+#define HW_ATL_RPO_OUTER_VL_INS_MODE_DEFAULT 0x0
+
/* rx reg_res_dsbl bitfield definitions
* preprocessor definitions for the bitfield "reg_res_dsbl".
* port="pif_rx_reg_res_dsbl_i"
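The new outer_vl_ins_mode field follows the driver's existing ADR/MSK/SHIFT register convention, and hw_atl_rpo_outer_vlan_tag_mode_set() programs it with a masked read-modify-write. A stand-alone model of that access pattern over a plain array instead of MMIO; the helper names are invented, loosely modelled on aq_hw_write_reg_bit()/aq_hw_read_reg_bit():

#include <stdio.h>

/* field parameters copied from the patch: outer_vl_ins_mode, register 0x5580, bit 2 */
#define OUTER_VL_INS_MODE_ADR   0x5580u
#define OUTER_VL_INS_MODE_MSK   0x00000004u
#define OUTER_VL_INS_MODE_SHIFT 2

static unsigned int regs[0x10000 / 4];	/* fake register file standing in for MMIO */

/* invented helper, modelled after aq_hw_write_reg_bit() */
static void write_reg_bit(unsigned int adr, unsigned int msk,
			  unsigned int shift, unsigned int val)
{
	unsigned int v = regs[adr / 4];

	v &= ~msk;			/* clear the field */
	v |= (val << shift) & msk;	/* insert the new value */
	regs[adr / 4] = v;
}

/* invented helper, modelled after aq_hw_read_reg_bit() */
static unsigned int read_reg_bit(unsigned int adr, unsigned int msk,
				 unsigned int shift)
{
	return (regs[adr / 4] & msk) >> shift;
}

int main(void)
{
	write_reg_bit(OUTER_VL_INS_MODE_ADR, OUTER_VL_INS_MODE_MSK,
		      OUTER_VL_INS_MODE_SHIFT, 1);
	printf("outer_vl_ins_mode = %u\n",
	       read_reg_bit(OUTER_VL_INS_MODE_ADR, OUTER_VL_INS_MODE_MSK,
			    OUTER_VL_INS_MODE_SHIFT));
	return 0;
}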
diff --git a/drivers/net/ethernet/aquantia/atlantic/ver.h b/drivers/net/ethernet/aquantia/atlantic/ver.h
index 23374bffa92b..597654b51e01 100644
--- a/drivers/net/ethernet/aquantia/atlantic/ver.h
+++ b/drivers/net/ethernet/aquantia/atlantic/ver.h
@@ -7,11 +7,6 @@
#ifndef VER_H
#define VER_H
-#define NIC_MAJOR_DRIVER_VERSION 2
-#define NIC_MINOR_DRIVER_VERSION 0
-#define NIC_BUILD_DRIVER_VERSION 4
-#define NIC_REVISION_DRIVER_VERSION 0
-
#define AQ_CFG_DRV_VERSION_SUFFIX "-kern"
#endif /* VER_H */
diff --git a/drivers/net/ethernet/atheros/Kconfig b/drivers/net/ethernet/atheros/Kconfig
index 953ff1f9ac70..0058051ba925 100644
--- a/drivers/net/ethernet/atheros/Kconfig
+++ b/drivers/net/ethernet/atheros/Kconfig
@@ -6,7 +6,7 @@
config NET_VENDOR_ATHEROS
bool "Atheros devices"
default y
- depends on PCI
+ depends on (PCI || ATH79)
---help---
If you have a network (Ethernet) card belonging to this class, say Y.
@@ -17,6 +17,14 @@ config NET_VENDOR_ATHEROS
if NET_VENDOR_ATHEROS
+config AG71XX
+ tristate "Atheros AR7XXX/AR9XXX built-in ethernet mac support"
+ depends on ATH79
+ select PHYLIB
+ help
+ If you wish to compile a kernel for AR7XXX/AR9XXX and enable
+ Ethernet support, then you should always answer Y to this.
+
config ATL2
tristate "Atheros L2 Fast Ethernet support"
depends on PCI
diff --git a/drivers/net/ethernet/atheros/Makefile b/drivers/net/ethernet/atheros/Makefile
index aa3d394b87e6..aca696cb6425 100644
--- a/drivers/net/ethernet/atheros/Makefile
+++ b/drivers/net/ethernet/atheros/Makefile
@@ -3,6 +3,7 @@
# Makefile for the Atheros network device drivers.
#
+obj-$(CONFIG_AG71XX) += ag71xx.o
obj-$(CONFIG_ATL1) += atlx/
obj-$(CONFIG_ATL2) += atlx/
obj-$(CONFIG_ATL1E) += atl1e/
diff --git a/drivers/net/ethernet/atheros/ag71xx.c b/drivers/net/ethernet/atheros/ag71xx.c
new file mode 100644
index 000000000000..72a57c6cd254
--- /dev/null
+++ b/drivers/net/ethernet/atheros/ag71xx.c
@@ -0,0 +1,1898 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Atheros AR71xx built-in ethernet mac driver
+ *
+ * Copyright (C) 2019 Oleksij Rempel <o.rempel@pengutronix.de>
+ *
+ * List of authors who contributed to this driver before mainlining:
+ * Alexander Couzens <lynxis@fe80.eu>
+ * Christian Lamparter <chunkeey@gmail.com>
+ * Chuanhong Guo <gch981213@gmail.com>
+ * Daniel F. Dickinson <cshored@thecshore.com>
+ * David Bauer <mail@david-bauer.net>
+ * Felix Fietkau <nbd@nbd.name>
+ * Gabor Juhos <juhosg@freemail.hu>
+ * Hauke Mehrtens <hauke@hauke-m.de>
+ * Johann Neuhauser <johann@it-neuhauser.de>
+ * John Crispin <john@phrozen.org>
+ * Jo-Philipp Wich <jo@mein.io>
+ * Koen Vandeputte <koen.vandeputte@ncentric.com>
+ * Lucian Cristian <lucian.cristian@gmail.com>
+ * Matt Merhar <mattmerhar@protonmail.com>
+ * Milan Krstic <milan.krstic@gmail.com>
+ * Petr Štetiar <ynezz@true.cz>
+ * Rosen Penev <rosenp@gmail.com>
+ * Stephen Walker <stephendwalker+github@gmail.com>
+ * Vittorio Gambaletta <openwrt@vittgam.net>
+ * Weijie Gao <hackpascal@gmail.com>
+ * Imre Kaloz <kaloz@openwrt.org>
+ */
+
+#include <linux/if_vlan.h>
+#include <linux/mfd/syscon.h>
+#include <linux/of_mdio.h>
+#include <linux/of_net.h>
+#include <linux/of_platform.h>
+#include <linux/regmap.h>
+#include <linux/reset.h>
+#include <linux/clk.h>
+
+/* For our NAPI weight, bigger does *NOT* mean better - it means more
+ * D-cache misses and lots more wasted cycles than we'll ever
+ * possibly gain from saving instructions.
+ */
+#define AG71XX_NAPI_WEIGHT 32
+#define AG71XX_OOM_REFILL (1 + HZ / 10)
+
+#define AG71XX_INT_ERR (AG71XX_INT_RX_BE | AG71XX_INT_TX_BE)
+#define AG71XX_INT_TX (AG71XX_INT_TX_PS)
+#define AG71XX_INT_RX (AG71XX_INT_RX_PR | AG71XX_INT_RX_OF)
+
+#define AG71XX_INT_POLL (AG71XX_INT_RX | AG71XX_INT_TX)
+#define AG71XX_INT_INIT (AG71XX_INT_ERR | AG71XX_INT_POLL)
+
+#define AG71XX_TX_MTU_LEN 1540
+
+#define AG71XX_TX_RING_SPLIT 512
+#define AG71XX_TX_RING_DS_PER_PKT DIV_ROUND_UP(AG71XX_TX_MTU_LEN, \
+ AG71XX_TX_RING_SPLIT)
+#define AG71XX_TX_RING_SIZE_DEFAULT 128
+#define AG71XX_RX_RING_SIZE_DEFAULT 256
+
+#define AG71XX_MDIO_RETRY 1000
+#define AG71XX_MDIO_DELAY 5
+#define AG71XX_MDIO_MAX_CLK 5000000
+
+/* Register offsets */
+#define AG71XX_REG_MAC_CFG1 0x0000
+#define MAC_CFG1_TXE BIT(0) /* Tx Enable */
+#define MAC_CFG1_STX BIT(1) /* Synchronize Tx Enable */
+#define MAC_CFG1_RXE BIT(2) /* Rx Enable */
+#define MAC_CFG1_SRX BIT(3) /* Synchronize Rx Enable */
+#define MAC_CFG1_TFC BIT(4) /* Tx Flow Control Enable */
+#define MAC_CFG1_RFC BIT(5) /* Rx Flow Control Enable */
+#define MAC_CFG1_SR BIT(31) /* Soft Reset */
+#define MAC_CFG1_INIT (MAC_CFG1_RXE | MAC_CFG1_TXE | \
+ MAC_CFG1_SRX | MAC_CFG1_STX)
+
+#define AG71XX_REG_MAC_CFG2 0x0004
+#define MAC_CFG2_FDX BIT(0)
+#define MAC_CFG2_PAD_CRC_EN BIT(2)
+#define MAC_CFG2_LEN_CHECK BIT(4)
+#define MAC_CFG2_IF_1000 BIT(9)
+#define MAC_CFG2_IF_10_100 BIT(8)
+
+#define AG71XX_REG_MAC_MFL 0x0010
+
+#define AG71XX_REG_MII_CFG 0x0020
+#define MII_CFG_CLK_DIV_4 0
+#define MII_CFG_CLK_DIV_6 2
+#define MII_CFG_CLK_DIV_8 3
+#define MII_CFG_CLK_DIV_10 4
+#define MII_CFG_CLK_DIV_14 5
+#define MII_CFG_CLK_DIV_20 6
+#define MII_CFG_CLK_DIV_28 7
+#define MII_CFG_CLK_DIV_34 8
+#define MII_CFG_CLK_DIV_42 9
+#define MII_CFG_CLK_DIV_50 10
+#define MII_CFG_CLK_DIV_58 11
+#define MII_CFG_CLK_DIV_66 12
+#define MII_CFG_CLK_DIV_74 13
+#define MII_CFG_CLK_DIV_82 14
+#define MII_CFG_CLK_DIV_98 15
+#define MII_CFG_RESET BIT(31)
+
+#define AG71XX_REG_MII_CMD 0x0024
+#define MII_CMD_READ BIT(0)
+
+#define AG71XX_REG_MII_ADDR 0x0028
+#define MII_ADDR_SHIFT 8
+
+#define AG71XX_REG_MII_CTRL 0x002c
+#define AG71XX_REG_MII_STATUS 0x0030
+#define AG71XX_REG_MII_IND 0x0034
+#define MII_IND_BUSY BIT(0)
+#define MII_IND_INVALID BIT(2)
+
+#define AG71XX_REG_MAC_IFCTL 0x0038
+#define MAC_IFCTL_SPEED BIT(16)
+
+#define AG71XX_REG_MAC_ADDR1 0x0040
+#define AG71XX_REG_MAC_ADDR2 0x0044
+#define AG71XX_REG_FIFO_CFG0 0x0048
+#define FIFO_CFG0_WTM BIT(0) /* Watermark Module */
+#define FIFO_CFG0_RXS BIT(1) /* Rx System Module */
+#define FIFO_CFG0_RXF BIT(2) /* Rx Fabric Module */
+#define FIFO_CFG0_TXS BIT(3) /* Tx System Module */
+#define FIFO_CFG0_TXF BIT(4) /* Tx Fabric Module */
+#define FIFO_CFG0_ALL (FIFO_CFG0_WTM | FIFO_CFG0_RXS | FIFO_CFG0_RXF \
+ | FIFO_CFG0_TXS | FIFO_CFG0_TXF)
+#define FIFO_CFG0_INIT (FIFO_CFG0_ALL << FIFO_CFG0_ENABLE_SHIFT)
+
+#define FIFO_CFG0_ENABLE_SHIFT 8
+
+#define AG71XX_REG_FIFO_CFG1 0x004c
+#define AG71XX_REG_FIFO_CFG2 0x0050
+#define AG71XX_REG_FIFO_CFG3 0x0054
+#define AG71XX_REG_FIFO_CFG4 0x0058
+#define FIFO_CFG4_DE BIT(0) /* Drop Event */
+#define FIFO_CFG4_DV BIT(1) /* RX_DV Event */
+#define FIFO_CFG4_FC BIT(2) /* False Carrier */
+#define FIFO_CFG4_CE BIT(3) /* Code Error */
+#define FIFO_CFG4_CR BIT(4) /* CRC error */
+#define FIFO_CFG4_LM BIT(5) /* Length Mismatch */
+#define FIFO_CFG4_LO BIT(6) /* Length out of range */
+#define FIFO_CFG4_OK BIT(7) /* Packet is OK */
+#define FIFO_CFG4_MC BIT(8) /* Multicast Packet */
+#define FIFO_CFG4_BC BIT(9) /* Broadcast Packet */
+#define FIFO_CFG4_DR BIT(10) /* Dribble */
+#define FIFO_CFG4_LE BIT(11) /* Long Event */
+#define FIFO_CFG4_CF BIT(12) /* Control Frame */
+#define FIFO_CFG4_PF BIT(13) /* Pause Frame */
+#define FIFO_CFG4_UO BIT(14) /* Unsupported Opcode */
+#define FIFO_CFG4_VT BIT(15) /* VLAN tag detected */
+#define FIFO_CFG4_FT BIT(16) /* Frame Truncated */
+#define FIFO_CFG4_UC BIT(17) /* Unicast Packet */
+#define FIFO_CFG4_INIT (FIFO_CFG4_DE | FIFO_CFG4_DV | FIFO_CFG4_FC | \
+ FIFO_CFG4_CE | FIFO_CFG4_CR | FIFO_CFG4_LM | \
+ FIFO_CFG4_LO | FIFO_CFG4_OK | FIFO_CFG4_MC | \
+ FIFO_CFG4_BC | FIFO_CFG4_DR | FIFO_CFG4_LE | \
+ FIFO_CFG4_CF | FIFO_CFG4_PF | FIFO_CFG4_UO | \
+ FIFO_CFG4_VT)
+
+#define AG71XX_REG_FIFO_CFG5 0x005c
+#define FIFO_CFG5_DE BIT(0) /* Drop Event */
+#define FIFO_CFG5_DV BIT(1) /* RX_DV Event */
+#define FIFO_CFG5_FC BIT(2) /* False Carrier */
+#define FIFO_CFG5_CE BIT(3) /* Code Error */
+#define FIFO_CFG5_LM BIT(4) /* Length Mismatch */
+#define FIFO_CFG5_LO BIT(5) /* Length Out of Range */
+#define FIFO_CFG5_OK BIT(6) /* Packet is OK */
+#define FIFO_CFG5_MC BIT(7) /* Multicast Packet */
+#define FIFO_CFG5_BC BIT(8) /* Broadcast Packet */
+#define FIFO_CFG5_DR BIT(9) /* Dribble */
+#define FIFO_CFG5_CF BIT(10) /* Control Frame */
+#define FIFO_CFG5_PF BIT(11) /* Pause Frame */
+#define FIFO_CFG5_UO BIT(12) /* Unsupported Opcode */
+#define FIFO_CFG5_VT BIT(13) /* VLAN tag detected */
+#define FIFO_CFG5_LE BIT(14) /* Long Event */
+#define FIFO_CFG5_FT BIT(15) /* Frame Truncated */
+#define FIFO_CFG5_16 BIT(16) /* unknown */
+#define FIFO_CFG5_17 BIT(17) /* unknown */
+#define FIFO_CFG5_SF BIT(18) /* Short Frame */
+#define FIFO_CFG5_BM BIT(19) /* Byte Mode */
+#define FIFO_CFG5_INIT (FIFO_CFG5_DE | FIFO_CFG5_DV | FIFO_CFG5_FC | \
+ FIFO_CFG5_CE | FIFO_CFG5_LO | FIFO_CFG5_OK | \
+ FIFO_CFG5_MC | FIFO_CFG5_BC | FIFO_CFG5_DR | \
+ FIFO_CFG5_CF | FIFO_CFG5_PF | FIFO_CFG5_VT | \
+ FIFO_CFG5_LE | FIFO_CFG5_FT | FIFO_CFG5_16 | \
+ FIFO_CFG5_17 | FIFO_CFG5_SF)
+
+#define AG71XX_REG_TX_CTRL 0x0180
+#define TX_CTRL_TXE BIT(0) /* Tx Enable */
+
+#define AG71XX_REG_TX_DESC 0x0184
+#define AG71XX_REG_TX_STATUS 0x0188
+#define TX_STATUS_PS BIT(0) /* Packet Sent */
+#define TX_STATUS_UR BIT(1) /* Tx Underrun */
+#define TX_STATUS_BE BIT(3) /* Bus Error */
+
+#define AG71XX_REG_RX_CTRL 0x018c
+#define RX_CTRL_RXE BIT(0) /* Rx Enable */
+
+#define AG71XX_DMA_RETRY 10
+#define AG71XX_DMA_DELAY 1
+
+#define AG71XX_REG_RX_DESC 0x0190
+#define AG71XX_REG_RX_STATUS 0x0194
+#define RX_STATUS_PR BIT(0) /* Packet Received */
+#define RX_STATUS_OF BIT(2) /* Rx Overflow */
+#define RX_STATUS_BE BIT(3) /* Bus Error */
+
+#define AG71XX_REG_INT_ENABLE 0x0198
+#define AG71XX_REG_INT_STATUS 0x019c
+#define AG71XX_INT_TX_PS BIT(0)
+#define AG71XX_INT_TX_UR BIT(1)
+#define AG71XX_INT_TX_BE BIT(3)
+#define AG71XX_INT_RX_PR BIT(4)
+#define AG71XX_INT_RX_OF BIT(6)
+#define AG71XX_INT_RX_BE BIT(7)
+
+#define AG71XX_REG_FIFO_DEPTH 0x01a8
+#define AG71XX_REG_RX_SM 0x01b0
+#define AG71XX_REG_TX_SM 0x01b4
+
+#define ETH_SWITCH_HEADER_LEN 2
+
+#define AG71XX_DEFAULT_MSG_ENABLE \
+ (NETIF_MSG_DRV \
+ | NETIF_MSG_PROBE \
+ | NETIF_MSG_LINK \
+ | NETIF_MSG_TIMER \
+ | NETIF_MSG_IFDOWN \
+ | NETIF_MSG_IFUP \
+ | NETIF_MSG_RX_ERR \
+ | NETIF_MSG_TX_ERR)
+
+#define DESC_EMPTY BIT(31)
+#define DESC_MORE BIT(24)
+#define DESC_PKTLEN_M 0xfff
+struct ag71xx_desc {
+ u32 data;
+ u32 ctrl;
+ u32 next;
+ u32 pad;
+} __aligned(4);
+
+#define AG71XX_DESC_SIZE roundup(sizeof(struct ag71xx_desc), \
+ L1_CACHE_BYTES)
+
+struct ag71xx_buf {
+ union {
+ struct {
+ struct sk_buff *skb;
+ unsigned int len;
+ } tx;
+ struct {
+ dma_addr_t dma_addr;
+ void *rx_buf;
+ } rx;
+ };
+};
+
+struct ag71xx_ring {
+ /* "Hot" fields in the data path. */
+ unsigned int curr;
+ unsigned int dirty;
+
+ /* "Cold" fields - not used in the data path. */
+ struct ag71xx_buf *buf;
+ u16 order;
+ u16 desc_split;
+ dma_addr_t descs_dma;
+ u8 *descs_cpu;
+};
+
+enum ag71xx_type {
+ AR7100,
+ AR7240,
+ AR9130,
+ AR9330,
+ AR9340,
+ QCA9530,
+ QCA9550,
+};
+
+struct ag71xx_dcfg {
+ u32 max_frame_len;
+ const u32 *fifodata;
+ u16 desc_pktlen_mask;
+ bool tx_hang_workaround;
+ enum ag71xx_type type;
+};
+
+struct ag71xx {
+ /* Critical data related to the per-packet data path are clustered
+ * early in this structure to help improve the D-cache footprint.
+ */
+ struct ag71xx_ring rx_ring ____cacheline_aligned;
+ struct ag71xx_ring tx_ring ____cacheline_aligned;
+
+ u16 rx_buf_size;
+ u8 rx_buf_offset;
+
+ struct net_device *ndev;
+ struct platform_device *pdev;
+ struct napi_struct napi;
+ u32 msg_enable;
+ const struct ag71xx_dcfg *dcfg;
+
+ /* From this point onwards we're not looking at per-packet fields. */
+ void __iomem *mac_base;
+
+ struct ag71xx_desc *stop_desc;
+ dma_addr_t stop_desc_dma;
+
+ int phy_if_mode;
+
+ struct delayed_work restart_work;
+ struct timer_list oom_timer;
+
+ struct reset_control *mac_reset;
+
+ u32 fifodata[3];
+ int mac_idx;
+
+ struct reset_control *mdio_reset;
+ struct mii_bus *mii_bus;
+ struct clk *clk_mdio;
+ struct clk *clk_eth;
+};
+
+static int ag71xx_desc_empty(struct ag71xx_desc *desc)
+{
+ return (desc->ctrl & DESC_EMPTY) != 0;
+}
+
+static struct ag71xx_desc *ag71xx_ring_desc(struct ag71xx_ring *ring, int idx)
+{
+ return (struct ag71xx_desc *)&ring->descs_cpu[idx * AG71XX_DESC_SIZE];
+}
+
+static int ag71xx_ring_size_order(int size)
+{
+ return fls(size - 1);
+}
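ag71xx keeps each ring a power of two and stores only its order, so slot indices come from masking free-running counters and the fill level is simply curr - dirty, which stays correct across 32-bit wraparound. A compact demonstration of that arithmetic, with an arbitrarily chosen ring size:

#include <stdio.h>

/* smallest order such that (1 << order) >= size, i.e. the kernel's fls(size - 1) */
static unsigned int ring_size_order(unsigned int size)
{
	unsigned int order = 0;

	while ((1u << order) < size)
		order++;
	return order;
}

int main(void)
{
	unsigned int order = ring_size_order(128);		/* arbitrary ring size */
	unsigned int mask = (1u << order) - 1;
	unsigned int curr = 0x00000003, dirty = 0xfffffff5;	/* curr has wrapped */

	printf("order=%u mask=0x%x\n", order, mask);
	printf("slot for curr: %u\n", curr & mask);
	printf("entries in flight: %u\n", curr - dirty);	/* 14, despite the wrap */
	return 0;
}

Because the counters are never wrapped back into ring range, no extra flag is needed to tell a full ring from an empty one.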
+
+static bool ag71xx_is(struct ag71xx *ag, enum ag71xx_type type)
+{
+ return ag->dcfg->type == type;
+}
+
+static void ag71xx_wr(struct ag71xx *ag, unsigned int reg, u32 value)
+{
+ iowrite32(value, ag->mac_base + reg);
+ /* flush write */
+ (void)ioread32(ag->mac_base + reg);
+}
+
+static u32 ag71xx_rr(struct ag71xx *ag, unsigned int reg)
+{
+ return ioread32(ag->mac_base + reg);
+}
+
+static void ag71xx_sb(struct ag71xx *ag, unsigned int reg, u32 mask)
+{
+ void __iomem *r;
+
+ r = ag->mac_base + reg;
+ iowrite32(ioread32(r) | mask, r);
+ /* flush write */
+ (void)ioread32(r);
+}
+
+static void ag71xx_cb(struct ag71xx *ag, unsigned int reg, u32 mask)
+{
+ void __iomem *r;
+
+ r = ag->mac_base + reg;
+ iowrite32(ioread32(r) & ~mask, r);
+ /* flush write */
+ (void)ioread32(r);
+}
+
+static void ag71xx_int_enable(struct ag71xx *ag, u32 ints)
+{
+ ag71xx_sb(ag, AG71XX_REG_INT_ENABLE, ints);
+}
+
+static void ag71xx_int_disable(struct ag71xx *ag, u32 ints)
+{
+ ag71xx_cb(ag, AG71XX_REG_INT_ENABLE, ints);
+}
+
+static int ag71xx_mdio_wait_busy(struct ag71xx *ag)
+{
+ struct net_device *ndev = ag->ndev;
+ int i;
+
+ for (i = 0; i < AG71XX_MDIO_RETRY; i++) {
+ u32 busy;
+
+ udelay(AG71XX_MDIO_DELAY);
+
+ busy = ag71xx_rr(ag, AG71XX_REG_MII_IND);
+ if (!busy)
+ return 0;
+
+ udelay(AG71XX_MDIO_DELAY);
+ }
+
+ netif_err(ag, link, ndev, "MDIO operation timed out\n");
+
+ return -ETIMEDOUT;
+}
+
+static int ag71xx_mdio_mii_read(struct mii_bus *bus, int addr, int reg)
+{
+ struct ag71xx *ag = bus->priv;
+ int err, val;
+
+ err = ag71xx_mdio_wait_busy(ag);
+ if (err)
+ return err;
+
+ ag71xx_wr(ag, AG71XX_REG_MII_ADDR,
+ ((addr & 0x1f) << MII_ADDR_SHIFT) | (reg & 0xff));
+ /* enable read mode */
+ ag71xx_wr(ag, AG71XX_REG_MII_CMD, MII_CMD_READ);
+
+ err = ag71xx_mdio_wait_busy(ag);
+ if (err)
+ return err;
+
+ val = ag71xx_rr(ag, AG71XX_REG_MII_STATUS);
+ /* disable read mode */
+ ag71xx_wr(ag, AG71XX_REG_MII_CMD, 0);
+
+ netif_dbg(ag, link, ag->ndev, "mii_read: addr=%04x, reg=%04x, value=%04x\n",
+ addr, reg, val);
+
+ return val;
+}
+
+static int ag71xx_mdio_mii_write(struct mii_bus *bus, int addr, int reg,
+ u16 val)
+{
+ struct ag71xx *ag = bus->priv;
+
+ netif_dbg(ag, link, ag->ndev, "mii_write: addr=%04x, reg=%04x, value=%04x\n",
+ addr, reg, val);
+
+ ag71xx_wr(ag, AG71XX_REG_MII_ADDR,
+ ((addr & 0x1f) << MII_ADDR_SHIFT) | (reg & 0xff));
+ ag71xx_wr(ag, AG71XX_REG_MII_CTRL, val);
+
+ return ag71xx_mdio_wait_busy(ag);
+}
+
+static const u32 ar71xx_mdio_div_table[] = {
+ 4, 4, 6, 8, 10, 14, 20, 28,
+};
+
+static const u32 ar7240_mdio_div_table[] = {
+ 2, 2, 4, 6, 8, 12, 18, 26, 32, 40, 48, 56, 62, 70, 78, 96,
+};
+
+static const u32 ar933x_mdio_div_table[] = {
+ 4, 4, 6, 8, 10, 14, 20, 28, 34, 42, 50, 58, 66, 74, 82, 98,
+};
+
+static int ag71xx_mdio_get_divider(struct ag71xx *ag, u32 *div)
+{
+ unsigned long ref_clock;
+ const u32 *table;
+ int ndivs, i;
+
+ ref_clock = clk_get_rate(ag->clk_mdio);
+ if (!ref_clock)
+ return -EINVAL;
+
+ if (ag71xx_is(ag, AR9330) || ag71xx_is(ag, AR9340)) {
+ table = ar933x_mdio_div_table;
+ ndivs = ARRAY_SIZE(ar933x_mdio_div_table);
+ } else if (ag71xx_is(ag, AR7240)) {
+ table = ar7240_mdio_div_table;
+ ndivs = ARRAY_SIZE(ar7240_mdio_div_table);
+ } else {
+ table = ar71xx_mdio_div_table;
+ ndivs = ARRAY_SIZE(ar71xx_mdio_div_table);
+ }
+
+ for (i = 0; i < ndivs; i++) {
+ unsigned long t;
+
+ t = ref_clock / table[i];
+ if (t <= AG71XX_MDIO_MAX_CLK) {
+ *div = i;
+ return 0;
+ }
+ }
+
+ return -ENOENT;
+}
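ag71xx_mdio_get_divider() above walks a per-SoC divider table and picks the first entry that brings the MDIO clock to at most AG71XX_MDIO_MAX_CLK (5 MHz); the index of that entry is what gets written into MII_CFG. The selection reduces to the loop below, where the 40 MHz reference clock is just an example value:

#include <stdio.h>

#define MDIO_MAX_CLK 5000000UL

/* AR71xx-style divider table from the driver */
static const unsigned long div_table[] = { 4, 4, 6, 8, 10, 14, 20, 28 };

int main(void)
{
	unsigned long ref_clock = 40000000UL;	/* example 40 MHz reference */
	unsigned int i;

	for (i = 0; i < sizeof(div_table) / sizeof(div_table[0]); i++) {
		if (ref_clock / div_table[i] <= MDIO_MAX_CLK) {
			printf("divider index %u (divide by %lu -> %lu Hz)\n",
			       i, div_table[i], ref_clock / div_table[i]);
			return 0;
		}
	}

	printf("no suitable divider\n");
	return 1;
}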
+
+static int ag71xx_mdio_reset(struct mii_bus *bus)
+{
+ struct ag71xx *ag = bus->priv;
+ int err;
+ u32 t;
+
+ err = ag71xx_mdio_get_divider(ag, &t);
+ if (err)
+ return err;
+
+ ag71xx_wr(ag, AG71XX_REG_MII_CFG, t | MII_CFG_RESET);
+ usleep_range(100, 200);
+
+ ag71xx_wr(ag, AG71XX_REG_MII_CFG, t);
+ usleep_range(100, 200);
+
+ return 0;
+}
+
+static int ag71xx_mdio_probe(struct ag71xx *ag)
+{
+ struct device *dev = &ag->pdev->dev;
+ struct net_device *ndev = ag->ndev;
+ static struct mii_bus *mii_bus;
+ struct device_node *np;
+ int err;
+
+ np = dev->of_node;
+ ag->mii_bus = NULL;
+
+ ag->clk_mdio = devm_clk_get(dev, "mdio");
+ if (IS_ERR(ag->clk_mdio)) {
+ netif_err(ag, probe, ndev, "Failed to get mdio clk.\n");
+ return PTR_ERR(ag->clk_mdio);
+ }
+
+ err = clk_prepare_enable(ag->clk_mdio);
+ if (err) {
+ netif_err(ag, probe, ndev, "Failed to enable mdio clk.\n");
+ return err;
+ }
+
+ mii_bus = devm_mdiobus_alloc(dev);
+ if (!mii_bus) {
+ err = -ENOMEM;
+ goto mdio_err_put_clk;
+ }
+
+ ag->mdio_reset = of_reset_control_get_exclusive(np, "mdio");
+ if (IS_ERR(ag->mdio_reset)) {
+ netif_err(ag, probe, ndev, "Failed to get reset mdio.\n");
+ return PTR_ERR(ag->mdio_reset);
+ }
+
+ mii_bus->name = "ag71xx_mdio";
+ mii_bus->read = ag71xx_mdio_mii_read;
+ mii_bus->write = ag71xx_mdio_mii_write;
+ mii_bus->reset = ag71xx_mdio_reset;
+ mii_bus->priv = ag;
+ mii_bus->parent = dev;
+ snprintf(mii_bus->id, MII_BUS_ID_SIZE, "%s.%d", np->name, ag->mac_idx);
+
+ if (!IS_ERR(ag->mdio_reset)) {
+ reset_control_assert(ag->mdio_reset);
+ msleep(100);
+ reset_control_deassert(ag->mdio_reset);
+ msleep(200);
+ }
+
+ err = of_mdiobus_register(mii_bus, np);
+ if (err)
+ goto mdio_err_put_clk;
+
+ ag->mii_bus = mii_bus;
+
+ return 0;
+
+mdio_err_put_clk:
+ clk_disable_unprepare(ag->clk_mdio);
+ return err;
+}
+
+static void ag71xx_mdio_remove(struct ag71xx *ag)
+{
+ if (ag->mii_bus)
+ mdiobus_unregister(ag->mii_bus);
+ clk_disable_unprepare(ag->clk_mdio);
+}
+
+static void ag71xx_hw_stop(struct ag71xx *ag)
+{
+ /* disable all interrupts and stop the rx/tx engine */
+ ag71xx_wr(ag, AG71XX_REG_INT_ENABLE, 0);
+ ag71xx_wr(ag, AG71XX_REG_RX_CTRL, 0);
+ ag71xx_wr(ag, AG71XX_REG_TX_CTRL, 0);
+}
+
+static bool ag71xx_check_dma_stuck(struct ag71xx *ag)
+{
+ unsigned long timestamp;
+ u32 rx_sm, tx_sm, rx_fd;
+
+ timestamp = netdev_get_tx_queue(ag->ndev, 0)->trans_start;
+ if (likely(time_before(jiffies, timestamp + HZ / 10)))
+ return false;
+
+ if (!netif_carrier_ok(ag->ndev))
+ return false;
+
+ rx_sm = ag71xx_rr(ag, AG71XX_REG_RX_SM);
+ if ((rx_sm & 0x7) == 0x3 && ((rx_sm >> 4) & 0x7) == 0x6)
+ return true;
+
+ tx_sm = ag71xx_rr(ag, AG71XX_REG_TX_SM);
+ rx_fd = ag71xx_rr(ag, AG71XX_REG_FIFO_DEPTH);
+ if (((tx_sm >> 4) & 0x7) == 0 && ((rx_sm & 0x7) == 0) &&
+ ((rx_sm >> 4) & 0x7) == 0 && rx_fd == 0)
+ return true;
+
+ return false;
+}
+
+static int ag71xx_tx_packets(struct ag71xx *ag, bool flush)
+{
+ struct ag71xx_ring *ring = &ag->tx_ring;
+ int sent = 0, bytes_compl = 0, n = 0;
+ struct net_device *ndev = ag->ndev;
+ int ring_mask, ring_size;
+ bool dma_stuck = false;
+
+ ring_mask = BIT(ring->order) - 1;
+ ring_size = BIT(ring->order);
+
+ netif_dbg(ag, tx_queued, ndev, "processing TX ring\n");
+
+ while (ring->dirty + n != ring->curr) {
+ struct ag71xx_desc *desc;
+ struct sk_buff *skb;
+ unsigned int i;
+
+ i = (ring->dirty + n) & ring_mask;
+ desc = ag71xx_ring_desc(ring, i);
+ skb = ring->buf[i].tx.skb;
+
+ if (!flush && !ag71xx_desc_empty(desc)) {
+ if (ag->dcfg->tx_hang_workaround &&
+ ag71xx_check_dma_stuck(ag)) {
+ schedule_delayed_work(&ag->restart_work,
+ HZ / 2);
+ dma_stuck = true;
+ }
+ break;
+ }
+
+ if (flush)
+ desc->ctrl |= DESC_EMPTY;
+
+ n++;
+ if (!skb)
+ continue;
+
+ dev_kfree_skb_any(skb);
+ ring->buf[i].tx.skb = NULL;
+
+ bytes_compl += ring->buf[i].tx.len;
+
+ sent++;
+ ring->dirty += n;
+
+ while (n > 0) {
+ ag71xx_wr(ag, AG71XX_REG_TX_STATUS, TX_STATUS_PS);
+ n--;
+ }
+ }
+
+ netif_dbg(ag, tx_done, ndev, "%d packets sent out\n", sent);
+
+ if (!sent)
+ return 0;
+
+ ag->ndev->stats.tx_bytes += bytes_compl;
+ ag->ndev->stats.tx_packets += sent;
+
+ netdev_completed_queue(ag->ndev, sent, bytes_compl);
+ if ((ring->curr - ring->dirty) < (ring_size * 3) / 4)
+ netif_wake_queue(ag->ndev);
+
+ if (!dma_stuck)
+ cancel_delayed_work(&ag->restart_work);
+
+ return sent;
+}
+
+static void ag71xx_dma_wait_stop(struct ag71xx *ag)
+{
+ struct net_device *ndev = ag->ndev;
+ int i;
+
+ for (i = 0; i < AG71XX_DMA_RETRY; i++) {
+ u32 rx, tx;
+
+ mdelay(AG71XX_DMA_DELAY);
+
+ rx = ag71xx_rr(ag, AG71XX_REG_RX_CTRL) & RX_CTRL_RXE;
+ tx = ag71xx_rr(ag, AG71XX_REG_TX_CTRL) & TX_CTRL_TXE;
+ if (!rx && !tx)
+ return;
+ }
+
+ netif_err(ag, hw, ndev, "DMA stop operation timed out\n");
+}
+
+static void ag71xx_dma_reset(struct ag71xx *ag)
+{
+ struct net_device *ndev = ag->ndev;
+ u32 val;
+ int i;
+
+ /* stop RX and TX */
+ ag71xx_wr(ag, AG71XX_REG_RX_CTRL, 0);
+ ag71xx_wr(ag, AG71XX_REG_TX_CTRL, 0);
+
+ /* give the hardware some time to really stop all rx/tx activity
+ * clearing the descriptors too early causes random memory corruption
+ */
+ ag71xx_dma_wait_stop(ag);
+
+ /* clear descriptor addresses */
+ ag71xx_wr(ag, AG71XX_REG_TX_DESC, ag->stop_desc_dma);
+ ag71xx_wr(ag, AG71XX_REG_RX_DESC, ag->stop_desc_dma);
+
+ /* clear pending RX/TX interrupts */
+ for (i = 0; i < 256; i++) {
+ ag71xx_wr(ag, AG71XX_REG_RX_STATUS, RX_STATUS_PR);
+ ag71xx_wr(ag, AG71XX_REG_TX_STATUS, TX_STATUS_PS);
+ }
+
+ /* clear pending errors */
+ ag71xx_wr(ag, AG71XX_REG_RX_STATUS, RX_STATUS_BE | RX_STATUS_OF);
+ ag71xx_wr(ag, AG71XX_REG_TX_STATUS, TX_STATUS_BE | TX_STATUS_UR);
+
+ val = ag71xx_rr(ag, AG71XX_REG_RX_STATUS);
+ if (val)
+ netif_err(ag, hw, ndev, "unable to clear DMA Rx status: %08x\n",
+ val);
+
+ val = ag71xx_rr(ag, AG71XX_REG_TX_STATUS);
+
+ /* mask out reserved bits */
+ val &= ~0xff000000;
+
+ if (val)
+ netif_err(ag, hw, ndev, "unable to clear DMA Tx status: %08x\n",
+ val);
+}
+
+static void ag71xx_hw_setup(struct ag71xx *ag)
+{
+ u32 init = MAC_CFG1_INIT;
+
+ /* setup MAC configuration registers */
+ ag71xx_wr(ag, AG71XX_REG_MAC_CFG1, init);
+
+ ag71xx_sb(ag, AG71XX_REG_MAC_CFG2,
+ MAC_CFG2_PAD_CRC_EN | MAC_CFG2_LEN_CHECK);
+
+ /* setup max frame length to zero */
+ ag71xx_wr(ag, AG71XX_REG_MAC_MFL, 0);
+
+ /* setup FIFO configuration registers */
+ ag71xx_wr(ag, AG71XX_REG_FIFO_CFG0, FIFO_CFG0_INIT);
+ ag71xx_wr(ag, AG71XX_REG_FIFO_CFG1, ag->fifodata[0]);
+ ag71xx_wr(ag, AG71XX_REG_FIFO_CFG2, ag->fifodata[1]);
+ ag71xx_wr(ag, AG71XX_REG_FIFO_CFG4, FIFO_CFG4_INIT);
+ ag71xx_wr(ag, AG71XX_REG_FIFO_CFG5, FIFO_CFG5_INIT);
+}
+
+static unsigned int ag71xx_max_frame_len(unsigned int mtu)
+{
+ return ETH_SWITCH_HEADER_LEN + ETH_HLEN + VLAN_HLEN + mtu + ETH_FCS_LEN;
+}
+
+static void ag71xx_hw_set_macaddr(struct ag71xx *ag, unsigned char *mac)
+{
+ u32 t;
+
+ t = (((u32)mac[5]) << 24) | (((u32)mac[4]) << 16)
+ | (((u32)mac[3]) << 8) | ((u32)mac[2]);
+
+ ag71xx_wr(ag, AG71XX_REG_MAC_ADDR1, t);
+
+ t = (((u32)mac[1]) << 24) | (((u32)mac[0]) << 16);
+ ag71xx_wr(ag, AG71XX_REG_MAC_ADDR2, t);
+}
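ag71xx_hw_set_macaddr() spreads the six address bytes over two 32-bit registers, bytes 2..5 into MAC_ADDR1 and bytes 0..1 into the top half of MAC_ADDR2. The byte shuffling on its own, with an example address:

#include <stdio.h>

int main(void)
{
	/* example locally administered MAC address */
	const unsigned char mac[6] = { 0x02, 0x00, 0x11, 0x22, 0x33, 0x44 };
	unsigned int addr1, addr2;

	addr1 = ((unsigned int)mac[5] << 24) | ((unsigned int)mac[4] << 16) |
		((unsigned int)mac[3] << 8)  |  (unsigned int)mac[2];
	addr2 = ((unsigned int)mac[1] << 24) | ((unsigned int)mac[0] << 16);

	printf("MAC_ADDR1 = 0x%08x\n", addr1);	/* 0x44332211 */
	printf("MAC_ADDR2 = 0x%08x\n", addr2);	/* 0x00020000 */
	return 0;
}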
+
+static void ag71xx_fast_reset(struct ag71xx *ag)
+{
+ struct net_device *dev = ag->ndev;
+ u32 rx_ds;
+ u32 mii_reg;
+
+ ag71xx_hw_stop(ag);
+
+ mii_reg = ag71xx_rr(ag, AG71XX_REG_MII_CFG);
+ rx_ds = ag71xx_rr(ag, AG71XX_REG_RX_DESC);
+
+ ag71xx_tx_packets(ag, true);
+
+ reset_control_assert(ag->mac_reset);
+ usleep_range(10, 20);
+ reset_control_deassert(ag->mac_reset);
+ usleep_range(10, 20);
+
+ ag71xx_dma_reset(ag);
+ ag71xx_hw_setup(ag);
+ ag->tx_ring.curr = 0;
+ ag->tx_ring.dirty = 0;
+ netdev_reset_queue(ag->ndev);
+
+ /* setup max frame length */
+ ag71xx_wr(ag, AG71XX_REG_MAC_MFL,
+ ag71xx_max_frame_len(ag->ndev->mtu));
+
+ ag71xx_wr(ag, AG71XX_REG_RX_DESC, rx_ds);
+ ag71xx_wr(ag, AG71XX_REG_TX_DESC, ag->tx_ring.descs_dma);
+ ag71xx_wr(ag, AG71XX_REG_MII_CFG, mii_reg);
+
+ ag71xx_hw_set_macaddr(ag, dev->dev_addr);
+}
+
+static void ag71xx_hw_start(struct ag71xx *ag)
+{
+ /* start RX engine */
+ ag71xx_wr(ag, AG71XX_REG_RX_CTRL, RX_CTRL_RXE);
+
+ /* enable interrupts */
+ ag71xx_wr(ag, AG71XX_REG_INT_ENABLE, AG71XX_INT_INIT);
+
+ netif_wake_queue(ag->ndev);
+}
+
+static void ag71xx_link_adjust(struct ag71xx *ag, bool update)
+{
+ struct phy_device *phydev = ag->ndev->phydev;
+ u32 cfg2;
+ u32 ifctl;
+ u32 fifo5;
+
+ if (!phydev->link && update) {
+ ag71xx_hw_stop(ag);
+ return;
+ }
+
+ if (!ag71xx_is(ag, AR7100) && !ag71xx_is(ag, AR9130))
+ ag71xx_fast_reset(ag);
+
+ cfg2 = ag71xx_rr(ag, AG71XX_REG_MAC_CFG2);
+ cfg2 &= ~(MAC_CFG2_IF_1000 | MAC_CFG2_IF_10_100 | MAC_CFG2_FDX);
+ cfg2 |= (phydev->duplex) ? MAC_CFG2_FDX : 0;
+
+ ifctl = ag71xx_rr(ag, AG71XX_REG_MAC_IFCTL);
+ ifctl &= ~(MAC_IFCTL_SPEED);
+
+ fifo5 = ag71xx_rr(ag, AG71XX_REG_FIFO_CFG5);
+ fifo5 &= ~FIFO_CFG5_BM;
+
+ switch (phydev->speed) {
+ case SPEED_1000:
+ cfg2 |= MAC_CFG2_IF_1000;
+ fifo5 |= FIFO_CFG5_BM;
+ break;
+ case SPEED_100:
+ cfg2 |= MAC_CFG2_IF_10_100;
+ ifctl |= MAC_IFCTL_SPEED;
+ break;
+ case SPEED_10:
+ cfg2 |= MAC_CFG2_IF_10_100;
+ break;
+ default:
+ WARN(1, "not supported speed %i\n", phydev->speed);
+ return;
+ }
+
+ if (ag->tx_ring.desc_split) {
+ ag->fifodata[2] &= 0xffff;
+ ag->fifodata[2] |= ((2048 - ag->tx_ring.desc_split) / 4) << 16;
+ }
+
+ ag71xx_wr(ag, AG71XX_REG_FIFO_CFG3, ag->fifodata[2]);
+
+ ag71xx_wr(ag, AG71XX_REG_MAC_CFG2, cfg2);
+ ag71xx_wr(ag, AG71XX_REG_FIFO_CFG5, fifo5);
+ ag71xx_wr(ag, AG71XX_REG_MAC_IFCTL, ifctl);
+
+ ag71xx_hw_start(ag);
+
+ if (update)
+ phy_print_status(phydev);
+}
+
+static void ag71xx_phy_link_adjust(struct net_device *ndev)
+{
+ struct ag71xx *ag = netdev_priv(ndev);
+
+ ag71xx_link_adjust(ag, true);
+}
+
+static int ag71xx_phy_connect(struct ag71xx *ag)
+{
+ struct device_node *np = ag->pdev->dev.of_node;
+ struct net_device *ndev = ag->ndev;
+ struct device_node *phy_node;
+ struct phy_device *phydev;
+ int ret;
+
+ if (of_phy_is_fixed_link(np)) {
+ ret = of_phy_register_fixed_link(np);
+ if (ret < 0) {
+ netif_err(ag, probe, ndev, "Failed to register fixed PHY link: %d\n",
+ ret);
+ return ret;
+ }
+
+ phy_node = of_node_get(np);
+ } else {
+ phy_node = of_parse_phandle(np, "phy-handle", 0);
+ }
+
+ if (!phy_node) {
+ netif_err(ag, probe, ndev, "Could not find valid phy node\n");
+ return -ENODEV;
+ }
+
+ phydev = of_phy_connect(ag->ndev, phy_node, ag71xx_phy_link_adjust,
+ 0, ag->phy_if_mode);
+
+ of_node_put(phy_node);
+
+ if (!phydev) {
+ netif_err(ag, probe, ndev, "Could not connect to PHY device\n");
+ return -ENODEV;
+ }
+
+ phy_attached_info(phydev);
+
+ return 0;
+}
+
+static void ag71xx_ring_tx_clean(struct ag71xx *ag)
+{
+ struct ag71xx_ring *ring = &ag->tx_ring;
+ int ring_mask = BIT(ring->order) - 1;
+ u32 bytes_compl = 0, pkts_compl = 0;
+ struct net_device *ndev = ag->ndev;
+
+ while (ring->curr != ring->dirty) {
+ struct ag71xx_desc *desc;
+ u32 i = ring->dirty & ring_mask;
+
+ desc = ag71xx_ring_desc(ring, i);
+ if (!ag71xx_desc_empty(desc)) {
+ desc->ctrl = 0;
+ ndev->stats.tx_errors++;
+ }
+
+ if (ring->buf[i].tx.skb) {
+ bytes_compl += ring->buf[i].tx.len;
+ pkts_compl++;
+ dev_kfree_skb_any(ring->buf[i].tx.skb);
+ }
+ ring->buf[i].tx.skb = NULL;
+ ring->dirty++;
+ }
+
+ /* flush descriptors */
+ wmb();
+
+ netdev_completed_queue(ndev, pkts_compl, bytes_compl);
+}
+
+static void ag71xx_ring_tx_init(struct ag71xx *ag)
+{
+ struct ag71xx_ring *ring = &ag->tx_ring;
+ int ring_size = BIT(ring->order);
+ int ring_mask = ring_size - 1;
+ int i;
+
+ for (i = 0; i < ring_size; i++) {
+ struct ag71xx_desc *desc = ag71xx_ring_desc(ring, i);
+
+ desc->next = (u32)(ring->descs_dma +
+ AG71XX_DESC_SIZE * ((i + 1) & ring_mask));
+
+ desc->ctrl = DESC_EMPTY;
+ ring->buf[i].tx.skb = NULL;
+ }
+
+ /* flush descriptors */
+ wmb();
+
+ ring->curr = 0;
+ ring->dirty = 0;
+ netdev_reset_queue(ag->ndev);
+}
+
+static void ag71xx_ring_rx_clean(struct ag71xx *ag)
+{
+ struct ag71xx_ring *ring = &ag->rx_ring;
+ int ring_size = BIT(ring->order);
+ int i;
+
+ if (!ring->buf)
+ return;
+
+ for (i = 0; i < ring_size; i++)
+ if (ring->buf[i].rx.rx_buf) {
+ dma_unmap_single(&ag->pdev->dev,
+ ring->buf[i].rx.dma_addr,
+ ag->rx_buf_size, DMA_FROM_DEVICE);
+ skb_free_frag(ring->buf[i].rx.rx_buf);
+ }
+}
+
+static int ag71xx_buffer_size(struct ag71xx *ag)
+{
+ return ag->rx_buf_size +
+ SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+}
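ag71xx_buffer_size() reserves room behind the packet data for the skb_shared_info that build_skb() places at the tail of the fragment, with both parts rounded up by SKB_DATA_ALIGN(). A rough model of that sizing; the alignment, padding and shared-info size below are assumptions for illustration, not the running kernel's values:

#include <stdio.h>

/* assumed values, for illustration only */
#define SMP_CACHE_BYTES  64u
#define SHARED_INFO_SIZE 320u	/* placeholder for sizeof(struct skb_shared_info) */
#define NET_SKB_PAD      64u
#define NET_IP_ALIGN     2u

#define SKB_DATA_ALIGN(x) (((x) + SMP_CACHE_BYTES - 1) & ~(SMP_CACHE_BYTES - 1))

int main(void)
{
	unsigned int max_frame_len = 1536;	/* example frame length */
	unsigned int rx_buf_size, frag_size;

	rx_buf_size = SKB_DATA_ALIGN(max_frame_len + NET_SKB_PAD + NET_IP_ALIGN);
	frag_size = rx_buf_size + SKB_DATA_ALIGN(SHARED_INFO_SIZE);

	printf("data area %u bytes, whole fragment %u bytes\n",
	       rx_buf_size, frag_size);
	return 0;
}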
+
+static bool ag71xx_fill_rx_buf(struct ag71xx *ag, struct ag71xx_buf *buf,
+ int offset,
+ void *(*alloc)(unsigned int size))
+{
+ struct ag71xx_ring *ring = &ag->rx_ring;
+ struct ag71xx_desc *desc;
+ void *data;
+
+ desc = ag71xx_ring_desc(ring, buf - &ring->buf[0]);
+
+ data = alloc(ag71xx_buffer_size(ag));
+ if (!data)
+ return false;
+
+ buf->rx.rx_buf = data;
+ buf->rx.dma_addr = dma_map_single(&ag->pdev->dev, data, ag->rx_buf_size,
+ DMA_FROM_DEVICE);
+ desc->data = (u32)buf->rx.dma_addr + offset;
+ return true;
+}
+
+static int ag71xx_ring_rx_init(struct ag71xx *ag)
+{
+ struct ag71xx_ring *ring = &ag->rx_ring;
+ struct net_device *ndev = ag->ndev;
+ int ring_mask = BIT(ring->order) - 1;
+ int ring_size = BIT(ring->order);
+ unsigned int i;
+ int ret;
+
+ ret = 0;
+ for (i = 0; i < ring_size; i++) {
+ struct ag71xx_desc *desc = ag71xx_ring_desc(ring, i);
+
+ desc->next = (u32)(ring->descs_dma +
+ AG71XX_DESC_SIZE * ((i + 1) & ring_mask));
+
+ netif_dbg(ag, rx_status, ndev, "RX desc at %p, next is %08x\n",
+ desc, desc->next);
+ }
+
+ for (i = 0; i < ring_size; i++) {
+ struct ag71xx_desc *desc = ag71xx_ring_desc(ring, i);
+
+ if (!ag71xx_fill_rx_buf(ag, &ring->buf[i], ag->rx_buf_offset,
+ netdev_alloc_frag)) {
+ ret = -ENOMEM;
+ break;
+ }
+
+ desc->ctrl = DESC_EMPTY;
+ }
+
+ /* flush descriptors */
+ wmb();
+
+ ring->curr = 0;
+ ring->dirty = 0;
+
+ return ret;
+}
+
+static int ag71xx_ring_rx_refill(struct ag71xx *ag)
+{
+ struct ag71xx_ring *ring = &ag->rx_ring;
+ int ring_mask = BIT(ring->order) - 1;
+ int offset = ag->rx_buf_offset;
+ unsigned int count;
+
+ count = 0;
+ for (; ring->curr - ring->dirty > 0; ring->dirty++) {
+ struct ag71xx_desc *desc;
+ unsigned int i;
+
+ i = ring->dirty & ring_mask;
+ desc = ag71xx_ring_desc(ring, i);
+
+ if (!ring->buf[i].rx.rx_buf &&
+ !ag71xx_fill_rx_buf(ag, &ring->buf[i], offset,
+ napi_alloc_frag))
+ break;
+
+ desc->ctrl = DESC_EMPTY;
+ count++;
+ }
+
+ /* flush descriptors */
+ wmb();
+
+ netif_dbg(ag, rx_status, ag->ndev, "%u rx descriptors refilled\n",
+ count);
+
+ return count;
+}
+
+static int ag71xx_rings_init(struct ag71xx *ag)
+{
+ struct ag71xx_ring *tx = &ag->tx_ring;
+ struct ag71xx_ring *rx = &ag->rx_ring;
+ int ring_size, tx_size;
+
+ ring_size = BIT(tx->order) + BIT(rx->order);
+ tx_size = BIT(tx->order);
+
+ tx->buf = kcalloc(ring_size, sizeof(*tx->buf), GFP_KERNEL);
+ if (!tx->buf)
+ return -ENOMEM;
+
+ tx->descs_cpu = dma_alloc_coherent(&ag->pdev->dev,
+ ring_size * AG71XX_DESC_SIZE,
+ &tx->descs_dma, GFP_ATOMIC);
+ if (!tx->descs_cpu) {
+ kfree(tx->buf);
+ tx->buf = NULL;
+ return -ENOMEM;
+ }
+
+ rx->buf = &tx->buf[BIT(tx->order)];
+ rx->descs_cpu = ((void *)tx->descs_cpu) + tx_size * AG71XX_DESC_SIZE;
+ rx->descs_dma = tx->descs_dma + tx_size * AG71XX_DESC_SIZE;
+
+ ag71xx_ring_tx_init(ag);
+ return ag71xx_ring_rx_init(ag);
+}
+
+static void ag71xx_rings_free(struct ag71xx *ag)
+{
+ struct ag71xx_ring *tx = &ag->tx_ring;
+ struct ag71xx_ring *rx = &ag->rx_ring;
+ int ring_size;
+
+ ring_size = BIT(tx->order) + BIT(rx->order);
+
+ if (tx->descs_cpu)
+ dma_free_coherent(&ag->pdev->dev, ring_size * AG71XX_DESC_SIZE,
+ tx->descs_cpu, tx->descs_dma);
+
+ kfree(tx->buf);
+
+ tx->descs_cpu = NULL;
+ rx->descs_cpu = NULL;
+ tx->buf = NULL;
+ rx->buf = NULL;
+}
+
+static void ag71xx_rings_cleanup(struct ag71xx *ag)
+{
+ ag71xx_ring_rx_clean(ag);
+ ag71xx_ring_tx_clean(ag);
+ ag71xx_rings_free(ag);
+
+ netdev_reset_queue(ag->ndev);
+}
+
+static void ag71xx_hw_init(struct ag71xx *ag)
+{
+ ag71xx_hw_stop(ag);
+
+ ag71xx_sb(ag, AG71XX_REG_MAC_CFG1, MAC_CFG1_SR);
+ usleep_range(20, 30);
+
+ reset_control_assert(ag->mac_reset);
+ msleep(100);
+ reset_control_deassert(ag->mac_reset);
+ msleep(200);
+
+ ag71xx_hw_setup(ag);
+
+ ag71xx_dma_reset(ag);
+}
+
+static int ag71xx_hw_enable(struct ag71xx *ag)
+{
+ int ret;
+
+ ret = ag71xx_rings_init(ag);
+ if (ret)
+ return ret;
+
+ napi_enable(&ag->napi);
+ ag71xx_wr(ag, AG71XX_REG_TX_DESC, ag->tx_ring.descs_dma);
+ ag71xx_wr(ag, AG71XX_REG_RX_DESC, ag->rx_ring.descs_dma);
+ netif_start_queue(ag->ndev);
+
+ return 0;
+}
+
+static void ag71xx_hw_disable(struct ag71xx *ag)
+{
+ netif_stop_queue(ag->ndev);
+
+ ag71xx_hw_stop(ag);
+ ag71xx_dma_reset(ag);
+
+ napi_disable(&ag->napi);
+ del_timer_sync(&ag->oom_timer);
+
+ ag71xx_rings_cleanup(ag);
+}
+
+static int ag71xx_open(struct net_device *ndev)
+{
+ struct ag71xx *ag = netdev_priv(ndev);
+ unsigned int max_frame_len;
+ int ret;
+
+ max_frame_len = ag71xx_max_frame_len(ndev->mtu);
+ ag->rx_buf_size =
+ SKB_DATA_ALIGN(max_frame_len + NET_SKB_PAD + NET_IP_ALIGN);
+
+ /* setup max frame length */
+ ag71xx_wr(ag, AG71XX_REG_MAC_MFL, max_frame_len);
+ ag71xx_hw_set_macaddr(ag, ndev->dev_addr);
+
+ ret = ag71xx_hw_enable(ag);
+ if (ret)
+ goto err;
+
+ ret = ag71xx_phy_connect(ag);
+ if (ret)
+ goto err;
+
+ phy_start(ndev->phydev);
+
+ return 0;
+
+err:
+ ag71xx_rings_cleanup(ag);
+ return ret;
+}
+
+static int ag71xx_stop(struct net_device *ndev)
+{
+ struct ag71xx *ag = netdev_priv(ndev);
+
+ phy_stop(ndev->phydev);
+ phy_disconnect(ndev->phydev);
+ ag71xx_hw_disable(ag);
+
+ return 0;
+}
+
+static int ag71xx_fill_dma_desc(struct ag71xx_ring *ring, u32 addr, int len)
+{
+ int i, ring_mask, ndesc, split;
+ struct ag71xx_desc *desc;
+
+ ring_mask = BIT(ring->order) - 1;
+ ndesc = 0;
+ split = ring->desc_split;
+
+ if (!split)
+ split = len;
+
+ while (len > 0) {
+ unsigned int cur_len = len;
+
+ i = (ring->curr + ndesc) & ring_mask;
+ desc = ag71xx_ring_desc(ring, i);
+
+ if (!ag71xx_desc_empty(desc))
+ return -1;
+
+ if (cur_len > split) {
+ cur_len = split;
+
+ /* TX will hang if DMA transfers <= 4 bytes,
+ * make sure next segment is more than 4 bytes long.
+ */
+ if (len <= split + 4)
+ cur_len -= 4;
+ }
+
+ desc->data = addr;
+ addr += cur_len;
+ len -= cur_len;
+
+ if (len > 0)
+ cur_len |= DESC_MORE;
+
+ /* prevent early tx attempt of this descriptor */
+ if (!ndesc)
+ cur_len |= DESC_EMPTY;
+
+ desc->ctrl = cur_len;
+ ndesc++;
+ }
+
+ return ndesc;
+}
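ag71xx_fill_dma_desc() splits a frame into chunks of at most ring->desc_split bytes and, because the DMA engine hangs on transfers of 4 bytes or less, shortens the next-to-last chunk so the final one always ends up longer than 4 bytes. The segmentation logic in isolation, with example split and frame sizes:

#include <stdio.h>

int main(void)
{
	unsigned int split = 512;	/* example per-descriptor limit */
	unsigned int len = 514;		/* example frame length */
	unsigned int ndesc = 0;

	while (len > 0) {
		unsigned int cur_len = len;

		if (cur_len > split) {
			cur_len = split;
			/* keep the trailing segment longer than 4 bytes */
			if (len <= split + 4)
				cur_len -= 4;
		}

		printf("desc %u: %u bytes%s\n", ndesc, cur_len,
		       len > cur_len ? " (more follows)" : "");
		len -= cur_len;
		ndesc++;
	}

	return 0;
}

With these numbers the first descriptor carries 508 bytes and the last 6, instead of 512 followed by an unusable 2-byte tail.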
+
+static netdev_tx_t ag71xx_hard_start_xmit(struct sk_buff *skb,
+ struct net_device *ndev)
+{
+ int i, n, ring_min, ring_mask, ring_size;
+ struct ag71xx *ag = netdev_priv(ndev);
+ struct ag71xx_ring *ring;
+ struct ag71xx_desc *desc;
+ dma_addr_t dma_addr;
+
+ ring = &ag->tx_ring;
+ ring_mask = BIT(ring->order) - 1;
+ ring_size = BIT(ring->order);
+
+ if (skb->len <= 4) {
+ netif_dbg(ag, tx_err, ndev, "packet len is too small\n");
+ goto err_drop;
+ }
+
+ dma_addr = dma_map_single(&ag->pdev->dev, skb->data, skb->len,
+ DMA_TO_DEVICE);
+
+ i = ring->curr & ring_mask;
+ desc = ag71xx_ring_desc(ring, i);
+
+ /* setup descriptor fields */
+ n = ag71xx_fill_dma_desc(ring, (u32)dma_addr,
+ skb->len & ag->dcfg->desc_pktlen_mask);
+ if (n < 0)
+ goto err_drop_unmap;
+
+ i = (ring->curr + n - 1) & ring_mask;
+ ring->buf[i].tx.len = skb->len;
+ ring->buf[i].tx.skb = skb;
+
+ netdev_sent_queue(ndev, skb->len);
+
+ skb_tx_timestamp(skb);
+
+ desc->ctrl &= ~DESC_EMPTY;
+ ring->curr += n;
+
+ /* flush descriptor */
+ wmb();
+
+ ring_min = 2;
+ if (ring->desc_split)
+ ring_min *= AG71XX_TX_RING_DS_PER_PKT;
+
+ if (ring->curr - ring->dirty >= ring_size - ring_min) {
+ netif_dbg(ag, tx_err, ndev, "tx queue full\n");
+ netif_stop_queue(ndev);
+ }
+
+ netif_dbg(ag, tx_queued, ndev, "packet injected into TX queue\n");
+
+ /* enable TX engine */
+ ag71xx_wr(ag, AG71XX_REG_TX_CTRL, TX_CTRL_TXE);
+
+ return NETDEV_TX_OK;
+
+err_drop_unmap:
+ dma_unmap_single(&ag->pdev->dev, dma_addr, skb->len, DMA_TO_DEVICE);
+
+err_drop:
+ ndev->stats.tx_dropped++;
+
+ dev_kfree_skb(skb);
+ return NETDEV_TX_OK;
+}
+
+static int ag71xx_do_ioctl(struct net_device *ndev, struct ifreq *ifr, int cmd)
+{
+ if (!ndev->phydev)
+ return -EINVAL;
+
+ return phy_mii_ioctl(ndev->phydev, ifr, cmd);
+}
+
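+/* OOM timer: retry the RX ring refill from NAPI context once buffer
+ * allocations may succeed again
+ */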
+static void ag71xx_oom_timer_handler(struct timer_list *t)
+{
+ struct ag71xx *ag = from_timer(ag, t, oom_timer);
+
+ napi_schedule(&ag->napi);
+}
+
+static void ag71xx_tx_timeout(struct net_device *ndev)
+{
+ struct ag71xx *ag = netdev_priv(ndev);
+
+ netif_err(ag, tx_err, ndev, "tx timeout\n");
+
+ schedule_delayed_work(&ag->restart_work, 1);
+}
+
+static void ag71xx_restart_work_func(struct work_struct *work)
+{
+ struct ag71xx *ag = container_of(work, struct ag71xx,
+ restart_work.work);
+ struct net_device *ndev = ag->ndev;
+
+ rtnl_lock();
+ ag71xx_hw_disable(ag);
+ ag71xx_hw_enable(ag);
+ if (ndev->phydev->link)
+ ag71xx_link_adjust(ag, false);
+ rtnl_unlock();
+}
+
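+/* reap up to @limit received frames from the RX ring and pass them to the
+ * stack in one batch via netif_receive_skb_list()
+ */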
+static int ag71xx_rx_packets(struct ag71xx *ag, int limit)
+{
+ struct net_device *ndev = ag->ndev;
+ int ring_mask, ring_size, done = 0;
+ unsigned int pktlen_mask, offset;
+ struct sk_buff *next, *skb;
+ struct ag71xx_ring *ring;
+ struct list_head rx_list;
+
+ ring = &ag->rx_ring;
+ pktlen_mask = ag->dcfg->desc_pktlen_mask;
+ offset = ag->rx_buf_offset;
+ ring_mask = BIT(ring->order) - 1;
+ ring_size = BIT(ring->order);
+
+ netif_dbg(ag, rx_status, ndev, "rx packets, limit=%d, curr=%u, dirty=%u\n",
+ limit, ring->curr, ring->dirty);
+
+ INIT_LIST_HEAD(&rx_list);
+
+ while (done < limit) {
+ unsigned int i = ring->curr & ring_mask;
+ struct ag71xx_desc *desc = ag71xx_ring_desc(ring, i);
+ int pktlen;
+ int err = 0;
+
+ if (ag71xx_desc_empty(desc))
+ break;
+
+ if ((ring->dirty + ring_size) == ring->curr) {
+ WARN_ONCE(1, "RX out of ring");
+ break;
+ }
+
+ ag71xx_wr(ag, AG71XX_REG_RX_STATUS, RX_STATUS_PR);
+
+ pktlen = desc->ctrl & pktlen_mask;
+ pktlen -= ETH_FCS_LEN;
+
+ dma_unmap_single(&ag->pdev->dev, ring->buf[i].rx.dma_addr,
+ ag->rx_buf_size, DMA_FROM_DEVICE);
+
+ ndev->stats.rx_packets++;
+ ndev->stats.rx_bytes += pktlen;
+
+ skb = build_skb(ring->buf[i].rx.rx_buf, ag71xx_buffer_size(ag));
+ if (!skb) {
+ skb_free_frag(ring->buf[i].rx.rx_buf);
+ goto next;
+ }
+
+ skb_reserve(skb, offset);
+ skb_put(skb, pktlen);
+
+ if (err) {
+ ndev->stats.rx_dropped++;
+ kfree_skb(skb);
+ } else {
+ skb->dev = ndev;
+ skb->ip_summed = CHECKSUM_NONE;
+ list_add_tail(&skb->list, &rx_list);
+ }
+
+next:
+ ring->buf[i].rx.rx_buf = NULL;
+ done++;
+
+ ring->curr++;
+ }
+
+ ag71xx_ring_rx_refill(ag);
+
+ list_for_each_entry_safe(skb, next, &rx_list, list)
+ skb->protocol = eth_type_trans(skb, ndev);
+ netif_receive_skb_list(&rx_list);
+
+ netif_dbg(ag, rx_status, ndev, "rx finish, curr=%u, dirty=%u, done=%d\n",
+ ring->curr, ring->dirty, done);
+
+ return done;
+}
+
+static int ag71xx_poll(struct napi_struct *napi, int limit)
+{
+ struct ag71xx *ag = container_of(napi, struct ag71xx, napi);
+ struct ag71xx_ring *rx_ring = &ag->rx_ring;
+ int rx_ring_size = BIT(rx_ring->order);
+ struct net_device *ndev = ag->ndev;
+ int tx_done, rx_done;
+ u32 status;
+
+ tx_done = ag71xx_tx_packets(ag, false);
+
+ netif_dbg(ag, rx_status, ndev, "processing RX ring\n");
+ rx_done = ag71xx_rx_packets(ag, limit);
+
+ if (!rx_ring->buf[rx_ring->dirty % rx_ring_size].rx.rx_buf)
+ goto oom;
+
+ status = ag71xx_rr(ag, AG71XX_REG_RX_STATUS);
+ if (unlikely(status & RX_STATUS_OF)) {
+ ag71xx_wr(ag, AG71XX_REG_RX_STATUS, RX_STATUS_OF);
+ ndev->stats.rx_fifo_errors++;
+
+ /* restart RX */
+ ag71xx_wr(ag, AG71XX_REG_RX_CTRL, RX_CTRL_RXE);
+ }
+
+ if (rx_done < limit) {
+ if (status & RX_STATUS_PR)
+ goto more;
+
+ status = ag71xx_rr(ag, AG71XX_REG_TX_STATUS);
+ if (status & TX_STATUS_PS)
+ goto more;
+
+ netif_dbg(ag, rx_status, ndev, "disable polling mode, rx=%d, tx=%d,limit=%d\n",
+ rx_done, tx_done, limit);
+
+ napi_complete(napi);
+
+ /* enable interrupts */
+ ag71xx_int_enable(ag, AG71XX_INT_POLL);
+ return rx_done;
+ }
+
+more:
+ netif_dbg(ag, rx_status, ndev, "stay in polling mode, rx=%d, tx=%d, limit=%d\n",
+ rx_done, tx_done, limit);
+ return limit;
+
+oom:
+ netif_err(ag, rx_err, ndev, "out of memory\n");
+
+ mod_timer(&ag->oom_timer, jiffies + AG71XX_OOM_REFILL);
+ napi_complete(napi);
+ return 0;
+}
+
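+/* hard IRQ handler: acknowledge and log bus errors, and switch RX/TX
+ * processing to NAPI polling mode when packets are pending
+ */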
+static irqreturn_t ag71xx_interrupt(int irq, void *dev_id)
+{
+ struct net_device *ndev = dev_id;
+ struct ag71xx *ag;
+ u32 status;
+
+ ag = netdev_priv(ndev);
+ status = ag71xx_rr(ag, AG71XX_REG_INT_STATUS);
+
+ if (unlikely(!status))
+ return IRQ_NONE;
+
+ if (unlikely(status & AG71XX_INT_ERR)) {
+ if (status & AG71XX_INT_TX_BE) {
+ ag71xx_wr(ag, AG71XX_REG_TX_STATUS, TX_STATUS_BE);
+ netif_err(ag, intr, ndev, "TX BUS error\n");
+ }
+ if (status & AG71XX_INT_RX_BE) {
+ ag71xx_wr(ag, AG71XX_REG_RX_STATUS, RX_STATUS_BE);
+ netif_err(ag, intr, ndev, "RX BUS error\n");
+ }
+ }
+
+ if (likely(status & AG71XX_INT_POLL)) {
+ ag71xx_int_disable(ag, AG71XX_INT_POLL);
+ netif_dbg(ag, intr, ndev, "enable polling mode\n");
+ napi_schedule(&ag->napi);
+ }
+
+ return IRQ_HANDLED;
+}
+
+static int ag71xx_change_mtu(struct net_device *ndev, int new_mtu)
+{
+ struct ag71xx *ag = netdev_priv(ndev);
+
+ ndev->mtu = new_mtu;
+ ag71xx_wr(ag, AG71XX_REG_MAC_MFL,
+ ag71xx_max_frame_len(ndev->mtu));
+
+ return 0;
+}
+
+static const struct net_device_ops ag71xx_netdev_ops = {
+ .ndo_open = ag71xx_open,
+ .ndo_stop = ag71xx_stop,
+ .ndo_start_xmit = ag71xx_hard_start_xmit,
+ .ndo_do_ioctl = ag71xx_do_ioctl,
+ .ndo_tx_timeout = ag71xx_tx_timeout,
+ .ndo_change_mtu = ag71xx_change_mtu,
+ .ndo_set_mac_address = eth_mac_addr,
+ .ndo_validate_addr = eth_validate_addr,
+};
+
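+/* MMIO base addresses of the two built-in MACs; the position of the
+ * matching "reg" base in this table determines the MAC index (mac_idx)
+ */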
+static const u32 ar71xx_addr_ar7100[] = {
+ 0x19000000, 0x1a000000,
+};
+
+static int ag71xx_probe(struct platform_device *pdev)
+{
+ struct device_node *np = pdev->dev.of_node;
+ const struct ag71xx_dcfg *dcfg;
+ struct net_device *ndev;
+ struct resource *res;
+ const void *mac_addr;
+ int tx_size, err, i;
+ struct ag71xx *ag;
+
+ if (!np)
+ return -ENODEV;
+
+ ndev = devm_alloc_etherdev(&pdev->dev, sizeof(*ag));
+ if (!ndev)
+ return -ENOMEM;
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!res)
+ return -EINVAL;
+
+ dcfg = of_device_get_match_data(&pdev->dev);
+ if (!dcfg)
+ return -EINVAL;
+
+ ag = netdev_priv(ndev);
+ ag->mac_idx = -1;
+ for (i = 0; i < ARRAY_SIZE(ar71xx_addr_ar7100); i++) {
+ if (ar71xx_addr_ar7100[i] == res->start)
+ ag->mac_idx = i;
+ }
+
+ if (ag->mac_idx < 0) {
+ netif_err(ag, probe, ndev, "unknown mac idx\n");
+ return -EINVAL;
+ }
+
+ ag->clk_eth = devm_clk_get(&pdev->dev, "eth");
+ if (IS_ERR(ag->clk_eth)) {
+ netif_err(ag, probe, ndev, "Failed to get eth clk.\n");
+ return PTR_ERR(ag->clk_eth);
+ }
+
+ SET_NETDEV_DEV(ndev, &pdev->dev);
+
+ ag->pdev = pdev;
+ ag->ndev = ndev;
+ ag->dcfg = dcfg;
+ ag->msg_enable = netif_msg_init(-1, AG71XX_DEFAULT_MSG_ENABLE);
+ memcpy(ag->fifodata, dcfg->fifodata, sizeof(ag->fifodata));
+
+ ag->mac_reset = devm_reset_control_get(&pdev->dev, "mac");
+ if (IS_ERR(ag->mac_reset)) {
+ netif_err(ag, probe, ndev, "missing mac reset\n");
+ err = PTR_ERR(ag->mac_reset);
+ goto err_free;
+ }
+
+ ag->mac_base = devm_ioremap_nocache(&pdev->dev, res->start,
+ resource_size(res));
+ if (!ag->mac_base) {
+ err = -ENOMEM;
+ goto err_free;
+ }
+
+ ndev->irq = platform_get_irq(pdev, 0);
+ err = devm_request_irq(&pdev->dev, ndev->irq, ag71xx_interrupt,
+ 0x0, dev_name(&pdev->dev), ndev);
+ if (err) {
+ netif_err(ag, probe, ndev, "unable to request IRQ %d\n",
+ ndev->irq);
+ goto err_free;
+ }
+
+ ndev->netdev_ops = &ag71xx_netdev_ops;
+
+ INIT_DELAYED_WORK(&ag->restart_work, ag71xx_restart_work_func);
+ timer_setup(&ag->oom_timer, ag71xx_oom_timer_handler, 0);
+
+ tx_size = AG71XX_TX_RING_SIZE_DEFAULT;
+ ag->rx_ring.order = ag71xx_ring_size_order(AG71XX_RX_RING_SIZE_DEFAULT);
+
+ ndev->min_mtu = 68;
+ ndev->max_mtu = dcfg->max_frame_len - ag71xx_max_frame_len(0);
+
+ ag->rx_buf_offset = NET_SKB_PAD;
+ if (!ag71xx_is(ag, AR7100) && !ag71xx_is(ag, AR9130))
+ ag->rx_buf_offset += NET_IP_ALIGN;
+
+ if (ag71xx_is(ag, AR7100)) {
+ ag->tx_ring.desc_split = AG71XX_TX_RING_SPLIT;
+ tx_size *= AG71XX_TX_RING_DS_PER_PKT;
+ }
+ ag->tx_ring.order = ag71xx_ring_size_order(tx_size);
+
+ ag->stop_desc = dmam_alloc_coherent(&pdev->dev,
+ sizeof(struct ag71xx_desc),
+ &ag->stop_desc_dma, GFP_KERNEL);
+ if (!ag->stop_desc) {
+ err = -ENOMEM;
+ goto err_free;
+ }
+
+ ag->stop_desc->data = 0;
+ ag->stop_desc->ctrl = 0;
+ ag->stop_desc->next = (u32)ag->stop_desc_dma;
+
+ mac_addr = of_get_mac_address(np);
+ if (!IS_ERR_OR_NULL(mac_addr))
+ memcpy(ndev->dev_addr, mac_addr, ETH_ALEN);
+ if (IS_ERR_OR_NULL(mac_addr) || !is_valid_ether_addr(ndev->dev_addr)) {
+ netif_err(ag, probe, ndev, "invalid MAC address, using random address\n");
+ eth_random_addr(ndev->dev_addr);
+ }
+
+ ag->phy_if_mode = of_get_phy_mode(np);
+ if (ag->phy_if_mode < 0) {
+ netif_err(ag, probe, ndev, "missing phy-mode property in DT\n");
+ err = ag->phy_if_mode;
+ goto err_free;
+ }
+
+ netif_napi_add(ndev, &ag->napi, ag71xx_poll, AG71XX_NAPI_WEIGHT);
+
+ err = clk_prepare_enable(ag->clk_eth);
+ if (err) {
+ netif_err(ag, probe, ndev, "Failed to enable eth clk.\n");
+ goto err_free;
+ }
+
+ ag71xx_wr(ag, AG71XX_REG_MAC_CFG1, 0);
+
+ ag71xx_hw_init(ag);
+
+ err = ag71xx_mdio_probe(ag);
+ if (err)
+ goto err_put_clk;
+
+ platform_set_drvdata(pdev, ndev);
+
+ err = register_netdev(ndev);
+ if (err) {
+ netif_err(ag, probe, ndev, "unable to register net device\n");
+ platform_set_drvdata(pdev, NULL);
+ goto err_mdio_remove;
+ }
+
+ netif_info(ag, probe, ndev, "Atheros AG71xx at 0x%08lx, irq %d, mode:%s\n",
+ (unsigned long)ag->mac_base, ndev->irq,
+ phy_modes(ag->phy_if_mode));
+
+ return 0;
+
+err_mdio_remove:
+ ag71xx_mdio_remove(ag);
+err_put_clk:
+ clk_disable_unprepare(ag->clk_eth);
+err_free:
+ /* ndev is devm-managed (devm_alloc_etherdev()), so it must not be freed
+ * explicitly here; doing so would cause a double free on probe failure
+ */
+ return err;
+}
+
+static int ag71xx_remove(struct platform_device *pdev)
+{
+ struct net_device *ndev = platform_get_drvdata(pdev);
+ struct ag71xx *ag;
+
+ if (!ndev)
+ return 0;
+
+ ag = netdev_priv(ndev);
+ unregister_netdev(ndev);
+ ag71xx_mdio_remove(ag);
+ clk_disable_unprepare(ag->clk_eth);
+ platform_set_drvdata(pdev, NULL);
+
+ return 0;
+}
+
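+/* per-SoC FIFO configuration register values used by the device
+ * configuration tables below
+ */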
+static const u32 ar71xx_fifo_ar7100[] = {
+ 0x0fff0000, 0x00001fff, 0x00780fff,
+};
+
+static const u32 ar71xx_fifo_ar9130[] = {
+ 0x0fff0000, 0x00001fff, 0x008001ff,
+};
+
+static const u32 ar71xx_fifo_ar9330[] = {
+ 0x0010ffff, 0x015500aa, 0x01f00140,
+};
+
+static const struct ag71xx_dcfg ag71xx_dcfg_ar7100 = {
+ .type = AR7100,
+ .fifodata = ar71xx_fifo_ar7100,
+ .max_frame_len = 1540,
+ .desc_pktlen_mask = SZ_4K - 1,
+ .tx_hang_workaround = false,
+};
+
+static const struct ag71xx_dcfg ag71xx_dcfg_ar7240 = {
+ .type = AR7240,
+ .fifodata = ar71xx_fifo_ar7100,
+ .max_frame_len = 1540,
+ .desc_pktlen_mask = SZ_4K - 1,
+ .tx_hang_workaround = true,
+};
+
+static const struct ag71xx_dcfg ag71xx_dcfg_ar9130 = {
+ .type = AR9130,
+ .fifodata = ar71xx_fifo_ar9130,
+ .max_frame_len = 1540,
+ .desc_pktlen_mask = SZ_4K - 1,
+ .tx_hang_workaround = false,
+};
+
+static const struct ag71xx_dcfg ag71xx_dcfg_ar9330 = {
+ .type = AR9330,
+ .fifodata = ar71xx_fifo_ar9330,
+ .max_frame_len = 1540,
+ .desc_pktlen_mask = SZ_4K - 1,
+ .tx_hang_workaround = true,
+};
+
+static const struct ag71xx_dcfg ag71xx_dcfg_ar9340 = {
+ .type = AR9340,
+ .fifodata = ar71xx_fifo_ar9330,
+ .max_frame_len = SZ_16K - 1,
+ .desc_pktlen_mask = SZ_16K - 1,
+ .tx_hang_workaround = true,
+};
+
+static const struct ag71xx_dcfg ag71xx_dcfg_qca9530 = {
+ .type = QCA9530,
+ .fifodata = ar71xx_fifo_ar9330,
+ .max_frame_len = SZ_16K - 1,
+ .desc_pktlen_mask = SZ_16K - 1,
+ .tx_hang_workaround = true,
+};
+
+static const struct ag71xx_dcfg ag71xx_dcfg_qca9550 = {
+ .type = QCA9550,
+ .fifodata = ar71xx_fifo_ar9330,
+ .max_frame_len = 1540,
+ .desc_pktlen_mask = SZ_16K - 1,
+ .tx_hang_workaround = true,
+};
+
+static const struct of_device_id ag71xx_match[] = {
+ { .compatible = "qca,ar7100-eth", .data = &ag71xx_dcfg_ar7100 },
+ { .compatible = "qca,ar7240-eth", .data = &ag71xx_dcfg_ar7240 },
+ { .compatible = "qca,ar7241-eth", .data = &ag71xx_dcfg_ar7240 },
+ { .compatible = "qca,ar7242-eth", .data = &ag71xx_dcfg_ar7240 },
+ { .compatible = "qca,ar9130-eth", .data = &ag71xx_dcfg_ar9130 },
+ { .compatible = "qca,ar9330-eth", .data = &ag71xx_dcfg_ar9330 },
+ { .compatible = "qca,ar9340-eth", .data = &ag71xx_dcfg_ar9340 },
+ { .compatible = "qca,qca9530-eth", .data = &ag71xx_dcfg_qca9530 },
+ { .compatible = "qca,qca9550-eth", .data = &ag71xx_dcfg_qca9550 },
+ { .compatible = "qca,qca9560-eth", .data = &ag71xx_dcfg_qca9550 },
+ {}
+};
+
+static struct platform_driver ag71xx_driver = {
+ .probe = ag71xx_probe,
+ .remove = ag71xx_remove,
+ .driver = {
+ .name = "ag71xx",
+ .of_match_table = ag71xx_match,
+ }
+};
+
+module_platform_driver(ag71xx_driver);
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/net/ethernet/atheros/atl1c/atl1c_main.c b/drivers/net/ethernet/atheros/atl1c/atl1c_main.c
index 25bf085324b8..be7f9cebb675 100644
--- a/drivers/net/ethernet/atheros/atl1c/atl1c_main.c
+++ b/drivers/net/ethernet/atheros/atl1c/atl1c_main.c
@@ -2201,7 +2201,7 @@ static netdev_tx_t atl1c_xmit_frame(struct sk_buff *skb,
struct net_device *netdev)
{
struct atl1c_adapter *adapter = netdev_priv(netdev);
- u16 tpd_req = 1;
+ u16 tpd_req;
struct atl1c_tpd_desc *tpd;
enum atl1c_trans_queue type = atl1c_trans_normal;
diff --git a/drivers/net/ethernet/broadcom/Kconfig b/drivers/net/ethernet/broadcom/Kconfig
index b123509d385f..e9017caf024d 100644
--- a/drivers/net/ethernet/broadcom/Kconfig
+++ b/drivers/net/ethernet/broadcom/Kconfig
@@ -8,6 +8,7 @@ config NET_VENDOR_BROADCOM
default y
depends on (SSB_POSSIBLE && HAS_DMA) || PCI || BCM63XX || \
SIBYTE_SB1xxx_SOC
+ select DIMLIB
---help---
If you have a network (Ethernet) chipset belonging to this class,
say Y.
@@ -198,6 +199,7 @@ config BNXT
select FW_LOADER
select LIBCRC32C
select NET_DEVLINK
+ select PAGE_POOL
---help---
This driver supports Broadcom NetXtreme-C/E 10/25/40/50 gigabit
Ethernet cards. To compile this driver as a module, choose M here:
diff --git a/drivers/net/ethernet/broadcom/bcm63xx_enet.c b/drivers/net/ethernet/broadcom/bcm63xx_enet.c
index 85e610210477..291e4afd4a1a 100644
--- a/drivers/net/ethernet/broadcom/bcm63xx_enet.c
+++ b/drivers/net/ethernet/broadcom/bcm63xx_enet.c
@@ -2659,7 +2659,6 @@ static int bcm_enetsw_probe(struct platform_device *pdev)
if (!dev)
return -ENOMEM;
priv = netdev_priv(dev);
- memset(priv, 0, sizeof(*priv));
/* initialize default and fetch platform data */
priv->enet_is_sw = true;
diff --git a/drivers/net/ethernet/broadcom/bcmsysport.c b/drivers/net/ethernet/broadcom/bcmsysport.c
index cae9b77ff44b..b9c5cea8db16 100644
--- a/drivers/net/ethernet/broadcom/bcmsysport.c
+++ b/drivers/net/ethernet/broadcom/bcmsysport.c
@@ -609,7 +609,7 @@ static int bcm_sysport_set_coalesce(struct net_device *dev,
struct ethtool_coalesce *ec)
{
struct bcm_sysport_priv *priv = netdev_priv(dev);
- struct net_dim_cq_moder moder;
+ struct dim_cq_moder moder;
u32 usecs, pkts;
unsigned int i;
@@ -992,7 +992,7 @@ static int bcm_sysport_poll(struct napi_struct *napi, int budget)
{
struct bcm_sysport_priv *priv =
container_of(napi, struct bcm_sysport_priv, napi);
- struct net_dim_sample dim_sample;
+ struct dim_sample dim_sample;
unsigned int work_done = 0;
work_done = bcm_sysport_desc_rx(priv, budget);
@@ -1016,8 +1016,8 @@ static int bcm_sysport_poll(struct napi_struct *napi, int budget)
}
if (priv->dim.use_dim) {
- net_dim_sample(priv->dim.event_ctr, priv->dim.packets,
- priv->dim.bytes, &dim_sample);
+ dim_update_sample(priv->dim.event_ctr, priv->dim.packets,
+ priv->dim.bytes, &dim_sample);
net_dim(&priv->dim.dim, dim_sample);
}
@@ -1087,16 +1087,16 @@ static void bcm_sysport_resume_from_wol(struct bcm_sysport_priv *priv)
static void bcm_sysport_dim_work(struct work_struct *work)
{
- struct net_dim *dim = container_of(work, struct net_dim, work);
+ struct dim *dim = container_of(work, struct dim, work);
struct bcm_sysport_net_dim *ndim =
container_of(dim, struct bcm_sysport_net_dim, dim);
struct bcm_sysport_priv *priv =
container_of(ndim, struct bcm_sysport_priv, dim);
- struct net_dim_cq_moder cur_profile =
- net_dim_get_rx_moderation(dim->mode, dim->profile_ix);
+ struct dim_cq_moder cur_profile = net_dim_get_rx_moderation(dim->mode,
+ dim->profile_ix);
bcm_sysport_set_rx_coalesce(priv, cur_profile.usec, cur_profile.pkts);
- dim->state = NET_DIM_START_MEASURE;
+ dim->state = DIM_START_MEASURE;
}
/* RX and misc interrupt routine */
@@ -1437,7 +1437,7 @@ static void bcm_sysport_init_dim(struct bcm_sysport_priv *priv,
struct bcm_sysport_net_dim *dim = &priv->dim;
INIT_WORK(&dim->dim.work, cb);
- dim->dim.mode = NET_DIM_CQ_PERIOD_MODE_START_FROM_EQE;
+ dim->dim.mode = DIM_CQ_PERIOD_MODE_START_FROM_EQE;
dim->event_ctr = 0;
dim->packets = 0;
dim->bytes = 0;
@@ -1446,7 +1446,7 @@ static void bcm_sysport_init_dim(struct bcm_sysport_priv *priv,
static void bcm_sysport_init_rx_coalesce(struct bcm_sysport_priv *priv)
{
struct bcm_sysport_net_dim *dim = &priv->dim;
- struct net_dim_cq_moder moder;
+ struct dim_cq_moder moder;
u32 usecs, pkts;
usecs = priv->rx_coalesce_usecs;
diff --git a/drivers/net/ethernet/broadcom/bcmsysport.h b/drivers/net/ethernet/broadcom/bcmsysport.h
index 86193931203a..6d80735fbc7f 100644
--- a/drivers/net/ethernet/broadcom/bcmsysport.h
+++ b/drivers/net/ethernet/broadcom/bcmsysport.h
@@ -11,7 +11,7 @@
#include <linux/bitmap.h>
#include <linux/ethtool.h>
#include <linux/if_vlan.h>
-#include <linux/net_dim.h>
+#include <linux/dim.h>
/* Receive/transmit descriptor format */
#define DESC_ADDR_HI_STATUS_LEN 0x00
@@ -702,7 +702,7 @@ struct bcm_sysport_net_dim {
u16 event_ctr;
unsigned long packets;
unsigned long bytes;
- struct net_dim dim;
+ struct dim dim;
};
/* Software view of the TX ring */
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
index 008ad0ca89ba..656ed80647f0 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
@@ -684,7 +684,7 @@ static void *bnx2x_frag_alloc(const struct bnx2x_fastpath *fp, gfp_t gfp_mask)
if (unlikely(gfpflags_allow_blocking(gfp_mask)))
return (void *)__get_free_page(gfp_mask);
- return netdev_alloc_frag(fp->rx_frag_size);
+ return napi_alloc_frag(fp->rx_frag_size);
}
return kmalloc(fp->rx_buf_size + NET_SKB_PAD, gfp_mask);
@@ -3857,9 +3857,12 @@ netdev_tx_t bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev)
if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)) {
if (!(bp->flags & TX_TIMESTAMPING_EN)) {
+ bp->eth_stats.ptp_skip_tx_ts++;
BNX2X_ERR("Tx timestamping was not enabled, this packet will not be timestamped\n");
} else if (bp->ptp_tx_skb) {
- BNX2X_ERR("The device supports only a single outstanding packet to timestamp, this packet will not be timestamped\n");
+ bp->eth_stats.ptp_skip_tx_ts++;
+ netdev_err_once(bp->dev,
+ "Device supports only a single outstanding packet to timestamp, this packet won't be timestamped\n");
} else {
skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
/* schedule check for Tx timestamp */
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
index 51fc845de31a..4a0ba6801c9e 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
@@ -182,7 +182,9 @@ static const struct {
{ STATS_OFFSET32(driver_filtered_tx_pkt),
4, false, "driver_filtered_tx_pkt" },
{ STATS_OFFSET32(eee_tx_lpi),
- 4, true, "Tx LPI entry count"}
+ 4, true, "Tx LPI entry count"},
+ { STATS_OFFSET32(ptp_skip_tx_ts),
+ 4, false, "ptp_skipped_tx_tstamp" },
};
#define BNX2X_NUM_STATS ARRAY_SIZE(bnx2x_stats_arr)
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
index 03ac10b1cd1e..2cc14db8f0ec 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
@@ -15214,11 +15214,24 @@ static void bnx2x_ptp_task(struct work_struct *work)
u32 val_seq;
u64 timestamp, ns;
struct skb_shared_hwtstamps shhwtstamps;
+ bool bail = true;
+ int i;
+
+ /* FW may take a while to complete timestamping; poll for a while and if
+ * it is still not complete, it may indicate an error state - bail out then.
+ */
+ for (i = 0; i < 10; i++) {
+ /* Read Tx timestamp registers */
+ val_seq = REG_RD(bp, port ? NIG_REG_P1_TLLH_PTP_BUF_SEQID :
+ NIG_REG_P0_TLLH_PTP_BUF_SEQID);
+ if (val_seq & 0x10000) {
+ bail = false;
+ break;
+ }
+ msleep(1 << i);
+ }
- /* Read Tx timestamp registers */
- val_seq = REG_RD(bp, port ? NIG_REG_P1_TLLH_PTP_BUF_SEQID :
- NIG_REG_P0_TLLH_PTP_BUF_SEQID);
- if (val_seq & 0x10000) {
+ if (!bail) {
/* There is a valid timestamp value */
timestamp = REG_RD(bp, port ? NIG_REG_P1_TLLH_PTP_BUF_TS_MSB :
NIG_REG_P0_TLLH_PTP_BUF_TS_MSB);
@@ -15233,16 +15246,18 @@ static void bnx2x_ptp_task(struct work_struct *work)
memset(&shhwtstamps, 0, sizeof(shhwtstamps));
shhwtstamps.hwtstamp = ns_to_ktime(ns);
skb_tstamp_tx(bp->ptp_tx_skb, &shhwtstamps);
- dev_kfree_skb_any(bp->ptp_tx_skb);
- bp->ptp_tx_skb = NULL;
DP(BNX2X_MSG_PTP, "Tx timestamp, timestamp cycles = %llu, ns = %llu\n",
timestamp, ns);
} else {
- DP(BNX2X_MSG_PTP, "There is no valid Tx timestamp yet\n");
- /* Reschedule to keep checking for a valid timestamp value */
- schedule_work(&bp->ptp_task);
+ DP(BNX2X_MSG_PTP,
+ "Tx timestamp is not recorded (register read=%u)\n",
+ val_seq);
+ bp->eth_stats.ptp_skip_tx_ts++;
}
+
+ dev_kfree_skb_any(bp->ptp_tx_skb);
+ bp->ptp_tx_skb = NULL;
}
void bnx2x_set_rx_ts(struct bnx2x *bp, struct sk_buff *skb)
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.h
index b2644ed13d06..d55e63692cf3 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.h
@@ -207,6 +207,9 @@ struct bnx2x_eth_stats {
u32 driver_filtered_tx_pkt;
/* src: Clear-on-Read register; Will not survive PMF Migration */
u32 eee_tx_lpi;
+
+ /* PTP */
+ u32 ptp_skip_tx_ts;
};
struct bnx2x_eth_q_stats {
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index f758b2e0591f..3f632028eff0 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -54,6 +54,7 @@
#include <net/pkt_cls.h>
#include <linux/hwmon.h>
#include <linux/hwmon-sysfs.h>
+#include <net/page_pool.h>
#include "bnxt_hsi.h"
#include "bnxt.h"
@@ -668,19 +669,20 @@ next_tx_int:
}
static struct page *__bnxt_alloc_rx_page(struct bnxt *bp, dma_addr_t *mapping,
+ struct bnxt_rx_ring_info *rxr,
gfp_t gfp)
{
struct device *dev = &bp->pdev->dev;
struct page *page;
- page = alloc_page(gfp);
+ page = page_pool_dev_alloc_pages(rxr->page_pool);
if (!page)
return NULL;
*mapping = dma_map_page_attrs(dev, page, 0, PAGE_SIZE, bp->rx_dir,
DMA_ATTR_WEAK_ORDERING);
if (dma_mapping_error(dev, *mapping)) {
- __free_page(page);
+ page_pool_recycle_direct(rxr->page_pool, page);
return NULL;
}
*mapping += bp->rx_dma_offset;
@@ -716,7 +718,8 @@ int bnxt_alloc_rx_data(struct bnxt *bp, struct bnxt_rx_ring_info *rxr,
dma_addr_t mapping;
if (BNXT_RX_PAGE_MODE(bp)) {
- struct page *page = __bnxt_alloc_rx_page(bp, &mapping, gfp);
+ struct page *page =
+ __bnxt_alloc_rx_page(bp, &mapping, rxr, gfp);
if (!page)
return -ENOMEM;
@@ -1989,6 +1992,9 @@ static int __bnxt_poll_work(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
}
}
+ if (event & BNXT_REDIRECT_EVENT)
+ xdp_do_flush_map();
+
if (event & BNXT_TX_EVENT) {
struct bnxt_tx_ring_info *txr = bnapi->tx_ring;
u16 prod = txr->tx_prod;
@@ -2130,12 +2136,12 @@ static int bnxt_poll(struct napi_struct *napi, int budget)
}
}
if (bp->flags & BNXT_FLAG_DIM) {
- struct net_dim_sample dim_sample;
+ struct dim_sample dim_sample;
- net_dim_sample(cpr->event_ctr,
- cpr->rx_packets,
- cpr->rx_bytes,
- &dim_sample);
+ dim_update_sample(cpr->event_ctr,
+ cpr->rx_packets,
+ cpr->rx_bytes,
+ &dim_sample);
net_dim(&cpr->dim, dim_sample);
}
return work_done;
@@ -2254,9 +2260,23 @@ static void bnxt_free_tx_skbs(struct bnxt *bp)
for (j = 0; j < max_idx;) {
struct bnxt_sw_tx_bd *tx_buf = &txr->tx_buf_ring[j];
- struct sk_buff *skb = tx_buf->skb;
+ struct sk_buff *skb;
int k, last;
+ if (i < bp->tx_nr_rings_xdp &&
+ tx_buf->action == XDP_REDIRECT) {
+ dma_unmap_single(&pdev->dev,
+ dma_unmap_addr(tx_buf, mapping),
+ dma_unmap_len(tx_buf, len),
+ PCI_DMA_TODEVICE);
+ xdp_return_frame(tx_buf->xdpf);
+ tx_buf->action = 0;
+ tx_buf->xdpf = NULL;
+ j++;
+ continue;
+ }
+
+ skb = tx_buf->skb;
if (!skb) {
j++;
continue;
@@ -2343,7 +2363,7 @@ static void bnxt_free_rx_skbs(struct bnxt *bp)
dma_unmap_page_attrs(&pdev->dev, mapping,
PAGE_SIZE, bp->rx_dir,
DMA_ATTR_WEAK_ORDERING);
- __free_page(data);
+ page_pool_recycle_direct(rxr->page_pool, data);
} else {
dma_unmap_single_attrs(&pdev->dev, mapping,
bp->rx_buf_use_size,
@@ -2480,6 +2500,9 @@ static void bnxt_free_rx_rings(struct bnxt *bp)
if (xdp_rxq_info_is_reg(&rxr->xdp_rxq))
xdp_rxq_info_unreg(&rxr->xdp_rxq);
+ page_pool_destroy(rxr->page_pool);
+ rxr->page_pool = NULL;
+
kfree(rxr->rx_tpa);
rxr->rx_tpa = NULL;
@@ -2494,6 +2517,26 @@ static void bnxt_free_rx_rings(struct bnxt *bp)
}
}
+static int bnxt_alloc_rx_page_pool(struct bnxt *bp,
+ struct bnxt_rx_ring_info *rxr)
+{
+ struct page_pool_params pp = { 0 };
+
+ pp.pool_size = bp->rx_ring_size;
+ pp.nid = dev_to_node(&bp->pdev->dev);
+ pp.dev = &bp->pdev->dev;
+ pp.dma_dir = DMA_BIDIRECTIONAL;
+
+ rxr->page_pool = page_pool_create(&pp);
+ if (IS_ERR(rxr->page_pool)) {
+ int err = PTR_ERR(rxr->page_pool);
+
+ rxr->page_pool = NULL;
+ return err;
+ }
+ return 0;
+}
+
static int bnxt_alloc_rx_rings(struct bnxt *bp)
{
int i, rc, agg_rings = 0, tpa_rings = 0;
@@ -2513,10 +2556,22 @@ static int bnxt_alloc_rx_rings(struct bnxt *bp)
ring = &rxr->rx_ring_struct;
+ rc = bnxt_alloc_rx_page_pool(bp, rxr);
+ if (rc)
+ return rc;
+
rc = xdp_rxq_info_reg(&rxr->xdp_rxq, bp->dev, i);
if (rc < 0)
return rc;
+ rc = xdp_rxq_info_reg_mem_model(&rxr->xdp_rxq,
+ MEM_TYPE_PAGE_POOL,
+ rxr->page_pool);
+ if (rc) {
+ xdp_rxq_info_unreg(&rxr->xdp_rxq);
+ return rc;
+ }
+
rc = bnxt_alloc_ring(bp, &ring->ring_mem);
if (rc)
return rc;
@@ -5508,7 +5563,16 @@ static int bnxt_cp_rings_in_use(struct bnxt *bp)
static int bnxt_get_func_stat_ctxs(struct bnxt *bp)
{
- return bp->cp_nr_rings + bnxt_get_ulp_stat_ctxs(bp);
+ int ulp_stat = bnxt_get_ulp_stat_ctxs(bp);
+ int cp = bp->cp_nr_rings;
+
+ if (!ulp_stat)
+ return cp;
+
+ if (bnxt_nq_rings_in_use(bp) > cp + bnxt_get_ulp_msix_num(bp))
+ return bnxt_get_ulp_msix_base(bp) + ulp_stat;
+
+ return cp + ulp_stat;
}
static bool bnxt_need_reserve_rings(struct bnxt *bp)
@@ -7477,11 +7541,7 @@ unsigned int bnxt_get_avail_cp_rings_for_en(struct bnxt *bp)
unsigned int bnxt_get_avail_stat_ctxs_for_en(struct bnxt *bp)
{
- unsigned int stat;
-
- stat = bnxt_get_max_func_stat_ctxs(bp) - bnxt_get_ulp_stat_ctxs(bp);
- stat -= bp->cp_nr_rings;
- return stat;
+ return bnxt_get_max_func_stat_ctxs(bp) - bnxt_get_func_stat_ctxs(bp);
}
int bnxt_get_avail_msix(struct bnxt *bp, int num)
@@ -7813,7 +7873,7 @@ static void bnxt_enable_napi(struct bnxt *bp)
if (bp->bnapi[i]->rx_ring) {
INIT_WORK(&cpr->dim.work, bnxt_dim_work);
- cpr->dim.mode = NET_DIM_CQ_PERIOD_MODE_START_FROM_EQE;
+ cpr->dim.mode = DIM_CQ_PERIOD_MODE_START_FROM_EQE;
}
napi_enable(&bp->bnapi[i]->napi);
}
@@ -9847,32 +9907,19 @@ static int bnxt_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
}
}
-static int bnxt_setup_tc_block(struct net_device *dev,
- struct tc_block_offload *f)
-{
- struct bnxt *bp = netdev_priv(dev);
-
- if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
- return -EOPNOTSUPP;
-
- switch (f->command) {
- case TC_BLOCK_BIND:
- return tcf_block_cb_register(f->block, bnxt_setup_tc_block_cb,
- bp, bp, f->extack);
- case TC_BLOCK_UNBIND:
- tcf_block_cb_unregister(f->block, bnxt_setup_tc_block_cb, bp);
- return 0;
- default:
- return -EOPNOTSUPP;
- }
-}
+static LIST_HEAD(bnxt_block_cb_list);
static int bnxt_setup_tc(struct net_device *dev, enum tc_setup_type type,
void *type_data)
{
+ struct bnxt *bp = netdev_priv(dev);
+
switch (type) {
case TC_SETUP_BLOCK:
- return bnxt_setup_tc_block(dev, type_data);
+ return flow_block_cb_setup_simple(type_data,
+ &bnxt_block_cb_list,
+ bnxt_setup_tc_block_cb,
+ bp, bp, true);
case TC_SETUP_QDISC_MQPRIO: {
struct tc_mqprio_qopt *mqprio = type_data;
@@ -10233,6 +10280,7 @@ static const struct net_device_ops bnxt_netdev_ops = {
.ndo_udp_tunnel_add = bnxt_udp_tunnel_add,
.ndo_udp_tunnel_del = bnxt_udp_tunnel_del,
.ndo_bpf = bnxt_xdp,
+ .ndo_xdp_xmit = bnxt_xdp_xmit,
.ndo_bridge_getlink = bnxt_bridge_getlink,
.ndo_bridge_setlink = bnxt_bridge_setlink,
.ndo_get_devlink_port = bnxt_get_devlink_port,
@@ -10262,10 +10310,10 @@ static void bnxt_remove_one(struct pci_dev *pdev)
bnxt_dcb_free(bp);
kfree(bp->edev);
bp->edev = NULL;
+ bnxt_cleanup_pci(bp);
bnxt_free_ctx_mem(bp);
kfree(bp->ctx);
bp->ctx = NULL;
- bnxt_cleanup_pci(bp);
bnxt_free_port_stats(bp);
free_netdev(dev);
}
@@ -10859,6 +10907,7 @@ static void bnxt_shutdown(struct pci_dev *pdev)
if (system_state == SYSTEM_POWER_OFF) {
bnxt_clear_int_mode(bp);
+ pci_disable_device(pdev);
pci_wake_from_d3(pdev, bp->wol);
pci_set_power_state(pdev, PCI_D3hot);
}
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index be438d82f939..16694b704d15 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -24,7 +24,9 @@
#include <net/devlink.h>
#include <net/dst_metadata.h>
#include <net/xdp.h>
-#include <linux/net_dim.h>
+#include <linux/dim.h>
+
+struct page_pool;
struct tx_bd {
__le32 tx_bd_len_flags_type;
@@ -587,15 +589,21 @@ struct nqe_cn {
#define BNXT_HWRM_CHNL_CHIMP 0
#define BNXT_HWRM_CHNL_KONG 1
-#define BNXT_RX_EVENT 1
-#define BNXT_AGG_EVENT 2
-#define BNXT_TX_EVENT 4
+#define BNXT_RX_EVENT 1
+#define BNXT_AGG_EVENT 2
+#define BNXT_TX_EVENT 4
+#define BNXT_REDIRECT_EVENT 8
struct bnxt_sw_tx_bd {
- struct sk_buff *skb;
+ union {
+ struct sk_buff *skb;
+ struct xdp_frame *xdpf;
+ };
DEFINE_DMA_UNMAP_ADDR(mapping);
+ DEFINE_DMA_UNMAP_LEN(len);
u8 is_gso;
u8 is_push;
+ u8 action;
union {
unsigned short nr_frags;
u16 rx_prod;
@@ -793,6 +801,7 @@ struct bnxt_rx_ring_info {
struct bnxt_ring_struct rx_ring_struct;
struct bnxt_ring_struct rx_agg_ring_struct;
struct xdp_rxq_info xdp_rxq;
+ struct page_pool *page_pool;
};
struct bnxt_cp_ring_info {
@@ -810,7 +819,7 @@ struct bnxt_cp_ring_info {
u64 rx_bytes;
u64 event_ctr;
- struct net_dim dim;
+ struct dim dim;
union {
struct tx_cmp *cp_desc_ring[MAX_CP_PAGES];
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c
index 70775158c8c4..07301cb87c03 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c
@@ -396,7 +396,7 @@ static int bnxt_hwrm_queue_dscp_qcaps(struct bnxt *bp)
bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_QUEUE_DSCP_QCAPS, -1, -1);
mutex_lock(&bp->hwrm_cmd_lock);
- rc = _hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
+ rc = _hwrm_send_message_silent(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
if (!rc) {
bp->max_dscp_value = (1 << resp->num_dscp_bits) - 1;
if (bp->max_dscp_value < 0x3f)
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_debugfs.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_debugfs.c
index 94e208e9789f..61393f351a77 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_debugfs.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_debugfs.c
@@ -11,7 +11,7 @@
#include <linux/module.h>
#include <linux/pci.h>
#include "bnxt_hsi.h"
-#include <linux/net_dim.h>
+#include <linux/dim.h>
#include "bnxt.h"
#include "bnxt_debugfs.h"
@@ -21,7 +21,7 @@ static ssize_t debugfs_dim_read(struct file *filep,
char __user *buffer,
size_t count, loff_t *ppos)
{
- struct net_dim *dim = filep->private_data;
+ struct dim *dim = filep->private_data;
int len;
char *buf;
@@ -61,7 +61,7 @@ static const struct file_operations debugfs_dim_fops = {
.read = debugfs_dim_read,
};
-static struct dentry *debugfs_dim_ring_init(struct net_dim *dim, int ring_idx,
+static struct dentry *debugfs_dim_ring_init(struct dim *dim, int ring_idx,
struct dentry *dd)
{
static char qname[16];
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_dim.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_dim.c
index afa97c8bb081..6f6576dc417a 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_dim.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_dim.c
@@ -7,26 +7,25 @@
* the Free Software Foundation.
*/
-#include <linux/net_dim.h>
+#include <linux/dim.h>
#include "bnxt_hsi.h"
#include "bnxt.h"
void bnxt_dim_work(struct work_struct *work)
{
- struct net_dim *dim = container_of(work, struct net_dim,
- work);
+ struct dim *dim = container_of(work, struct dim, work);
struct bnxt_cp_ring_info *cpr = container_of(dim,
struct bnxt_cp_ring_info,
dim);
struct bnxt_napi *bnapi = container_of(cpr,
struct bnxt_napi,
cp_ring);
- struct net_dim_cq_moder cur_moder =
+ struct dim_cq_moder cur_moder =
net_dim_get_rx_moderation(dim->mode, dim->profile_ix);
cpr->rx_ring_coal.coal_ticks = cur_moder.usec;
cpr->rx_ring_coal.coal_bufs = cur_moder.pkts;
bnxt_hwrm_set_ring_coal(bnapi->bp, bnapi);
- dim->state = NET_DIM_START_MEASURE;
+ dim->state = DIM_START_MEASURE;
}
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
index a6c7baf38036..c7ee63d69679 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
@@ -2799,7 +2799,7 @@ static int bnxt_run_loopback(struct bnxt *bp)
dev_kfree_skb(skb);
return -EIO;
}
- bnxt_xmit_xdp(bp, txr, map, pkt_size, 0);
+ bnxt_xmit_bd(bp, txr, map, pkt_size);
/* Sync BD data before updating doorbell */
wmb();
@@ -2842,7 +2842,7 @@ static void bnxt_self_test(struct net_device *dev, struct ethtool_test *etest,
bool offline = false;
u8 test_results = 0;
u8 test_mask = 0;
- int rc, i;
+ int rc = 0, i;
if (!bp->num_tests || !BNXT_SINGLE_PF(bp))
return;
@@ -2913,9 +2913,9 @@ static void bnxt_self_test(struct net_device *dev, struct ethtool_test *etest,
}
bnxt_hwrm_phy_loopback(bp, false, false);
bnxt_half_close_nic(bp);
- bnxt_open_nic(bp, false, true);
+ rc = bnxt_open_nic(bp, false, true);
}
- if (bnxt_test_irq(bp)) {
+ if (rc || bnxt_test_irq(bp)) {
buf[BNXT_IRQ_TEST_IDX] = 1;
etest->flags |= ETH_TEST_FL_FAILED;
}
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
index 44d6c5743fb9..6fe4a7174271 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
@@ -170,10 +170,10 @@ static int bnxt_tc_parse_actions(struct bnxt *bp,
}
static int bnxt_tc_parse_flow(struct bnxt *bp,
- struct tc_cls_flower_offload *tc_flow_cmd,
+ struct flow_cls_offload *tc_flow_cmd,
struct bnxt_tc_flow *flow)
{
- struct flow_rule *rule = tc_cls_flower_offload_flow_rule(tc_flow_cmd);
+ struct flow_rule *rule = flow_cls_offload_flow_rule(tc_flow_cmd);
struct flow_dissector *dissector = rule->match.dissector;
/* KEY_CONTROL and KEY_BASIC are needed for forming a meaningful key */
@@ -1262,7 +1262,7 @@ static void bnxt_tc_set_src_fid(struct bnxt *bp, struct bnxt_tc_flow *flow,
* The hash-tables are already protected by the rhashtable API.
*/
static int bnxt_tc_add_flow(struct bnxt *bp, u16 src_fid,
- struct tc_cls_flower_offload *tc_flow_cmd)
+ struct flow_cls_offload *tc_flow_cmd)
{
struct bnxt_tc_flow_node *new_node, *old_node;
struct bnxt_tc_info *tc_info = bp->tc_info;
@@ -1348,7 +1348,7 @@ done:
}
static int bnxt_tc_del_flow(struct bnxt *bp,
- struct tc_cls_flower_offload *tc_flow_cmd)
+ struct flow_cls_offload *tc_flow_cmd)
{
struct bnxt_tc_info *tc_info = bp->tc_info;
struct bnxt_tc_flow_node *flow_node;
@@ -1363,7 +1363,7 @@ static int bnxt_tc_del_flow(struct bnxt *bp,
}
static int bnxt_tc_get_flow_stats(struct bnxt *bp,
- struct tc_cls_flower_offload *tc_flow_cmd)
+ struct flow_cls_offload *tc_flow_cmd)
{
struct bnxt_tc_flow_stats stats, *curr_stats, *prev_stats;
struct bnxt_tc_info *tc_info = bp->tc_info;
@@ -1585,14 +1585,14 @@ void bnxt_tc_flow_stats_work(struct bnxt *bp)
}
int bnxt_tc_setup_flower(struct bnxt *bp, u16 src_fid,
- struct tc_cls_flower_offload *cls_flower)
+ struct flow_cls_offload *cls_flower)
{
switch (cls_flower->command) {
- case TC_CLSFLOWER_REPLACE:
+ case FLOW_CLS_REPLACE:
return bnxt_tc_add_flow(bp, src_fid, cls_flower);
- case TC_CLSFLOWER_DESTROY:
+ case FLOW_CLS_DESTROY:
return bnxt_tc_del_flow(bp, cls_flower);
- case TC_CLSFLOWER_STATS:
+ case FLOW_CLS_STATS:
return bnxt_tc_get_flow_stats(bp, cls_flower);
default:
return -EOPNOTSUPP;
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.h
index 8a0968967bc5..ffec57d1a5ec 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.h
@@ -196,7 +196,7 @@ struct bnxt_tc_flow_node {
};
int bnxt_tc_setup_flower(struct bnxt *bp, u16 src_fid,
- struct tc_cls_flower_offload *cls_flower);
+ struct flow_cls_offload *cls_flower);
int bnxt_init_tc(struct bnxt *bp);
void bnxt_shutdown_tc(struct bnxt *bp);
void bnxt_tc_flow_stats_work(struct bnxt *bp);
@@ -209,7 +209,7 @@ static inline bool bnxt_tc_flower_enabled(struct bnxt *bp)
#else /* CONFIG_BNXT_FLOWER_OFFLOAD */
static inline int bnxt_tc_setup_flower(struct bnxt *bp, u16 src_fid,
- struct tc_cls_flower_offload *cls_flower)
+ struct flow_cls_offload *cls_flower)
{
return -EOPNOTSUPP;
}
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
index bfa342a98d08..fc77caf0a076 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
@@ -157,8 +157,10 @@ static int bnxt_req_msix_vecs(struct bnxt_en_dev *edev, int ulp_id,
if (BNXT_NEW_RM(bp)) {
struct bnxt_hw_resc *hw_resc = &bp->hw_resc;
+ int resv_msix;
- avail_msix = hw_resc->resv_irqs - bp->cp_nr_rings;
+ resv_msix = hw_resc->resv_irqs - bp->cp_nr_rings;
+ avail_msix = min_t(int, resv_msix, avail_msix);
edev->ulp_tbl[ulp_id].msix_requested = avail_msix;
}
bnxt_fill_msix_vecs(bp, ent);
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_vfr.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_vfr.c
index f760921389a3..f9bf7d7250ab 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_vfr.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_vfr.c
@@ -161,34 +161,19 @@ static int bnxt_vf_rep_setup_tc_block_cb(enum tc_setup_type type,
}
}
-static int bnxt_vf_rep_setup_tc_block(struct net_device *dev,
- struct tc_block_offload *f)
-{
- struct bnxt_vf_rep *vf_rep = netdev_priv(dev);
-
- if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
- return -EOPNOTSUPP;
-
- switch (f->command) {
- case TC_BLOCK_BIND:
- return tcf_block_cb_register(f->block,
- bnxt_vf_rep_setup_tc_block_cb,
- vf_rep, vf_rep, f->extack);
- case TC_BLOCK_UNBIND:
- tcf_block_cb_unregister(f->block,
- bnxt_vf_rep_setup_tc_block_cb, vf_rep);
- return 0;
- default:
- return -EOPNOTSUPP;
- }
-}
+static LIST_HEAD(bnxt_vf_block_cb_list);
static int bnxt_vf_rep_setup_tc(struct net_device *dev, enum tc_setup_type type,
void *type_data)
{
+ struct bnxt_vf_rep *vf_rep = netdev_priv(dev);
+
switch (type) {
case TC_SETUP_BLOCK:
- return bnxt_vf_rep_setup_tc_block(dev, type_data);
+ return flow_block_cb_setup_simple(type_data,
+ &bnxt_vf_block_cb_list,
+ bnxt_vf_rep_setup_tc_block_cb,
+ vf_rep, vf_rep, true);
default:
return -EOPNOTSUPP;
}
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
index 0184ef6f05a7..c6f6f2033880 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
@@ -15,12 +15,14 @@
#include <linux/bpf.h>
#include <linux/bpf_trace.h>
#include <linux/filter.h>
+#include <net/page_pool.h>
#include "bnxt_hsi.h"
#include "bnxt.h"
#include "bnxt_xdp.h"
-void bnxt_xmit_xdp(struct bnxt *bp, struct bnxt_tx_ring_info *txr,
- dma_addr_t mapping, u32 len, u16 rx_prod)
+struct bnxt_sw_tx_bd *bnxt_xmit_bd(struct bnxt *bp,
+ struct bnxt_tx_ring_info *txr,
+ dma_addr_t mapping, u32 len)
{
struct bnxt_sw_tx_bd *tx_buf;
struct tx_bd *txbd;
@@ -29,7 +31,6 @@ void bnxt_xmit_xdp(struct bnxt *bp, struct bnxt_tx_ring_info *txr,
prod = txr->tx_prod;
tx_buf = &txr->tx_buf_ring[prod];
- tx_buf->rx_prod = rx_prod;
txbd = &txr->tx_desc_ring[TX_RING(prod)][TX_IDX(prod)];
flags = (len << TX_BD_LEN_SHIFT) | (1 << TX_BD_FLAGS_BD_CNT_SHIFT) |
@@ -40,30 +41,67 @@ void bnxt_xmit_xdp(struct bnxt *bp, struct bnxt_tx_ring_info *txr,
prod = NEXT_TX(prod);
txr->tx_prod = prod;
+ return tx_buf;
+}
+
+static void __bnxt_xmit_xdp(struct bnxt *bp, struct bnxt_tx_ring_info *txr,
+ dma_addr_t mapping, u32 len, u16 rx_prod)
+{
+ struct bnxt_sw_tx_bd *tx_buf;
+
+ tx_buf = bnxt_xmit_bd(bp, txr, mapping, len);
+ tx_buf->rx_prod = rx_prod;
+ tx_buf->action = XDP_TX;
+}
+
+static void __bnxt_xmit_xdp_redirect(struct bnxt *bp,
+ struct bnxt_tx_ring_info *txr,
+ dma_addr_t mapping, u32 len,
+ struct xdp_frame *xdpf)
+{
+ struct bnxt_sw_tx_bd *tx_buf;
+
+ tx_buf = bnxt_xmit_bd(bp, txr, mapping, len);
+ tx_buf->action = XDP_REDIRECT;
+ tx_buf->xdpf = xdpf;
+ dma_unmap_addr_set(tx_buf, mapping, mapping);
+ dma_unmap_len_set(tx_buf, len, 0);
}
void bnxt_tx_int_xdp(struct bnxt *bp, struct bnxt_napi *bnapi, int nr_pkts)
{
struct bnxt_tx_ring_info *txr = bnapi->tx_ring;
struct bnxt_rx_ring_info *rxr = bnapi->rx_ring;
+ bool rx_doorbell_needed = false;
struct bnxt_sw_tx_bd *tx_buf;
u16 tx_cons = txr->tx_cons;
u16 last_tx_cons = tx_cons;
- u16 rx_prod;
int i;
for (i = 0; i < nr_pkts; i++) {
- last_tx_cons = tx_cons;
+ tx_buf = &txr->tx_buf_ring[tx_cons];
+
+ if (tx_buf->action == XDP_REDIRECT) {
+ struct pci_dev *pdev = bp->pdev;
+
+ dma_unmap_single(&pdev->dev,
+ dma_unmap_addr(tx_buf, mapping),
+ dma_unmap_len(tx_buf, len),
+ PCI_DMA_TODEVICE);
+ xdp_return_frame(tx_buf->xdpf);
+ tx_buf->action = 0;
+ tx_buf->xdpf = NULL;
+ } else if (tx_buf->action == XDP_TX) {
+ rx_doorbell_needed = true;
+ last_tx_cons = tx_cons;
+ }
tx_cons = NEXT_TX(tx_cons);
}
txr->tx_cons = tx_cons;
- if (bnxt_tx_avail(bp, txr) == bp->tx_ring_size) {
- rx_prod = rxr->rx_prod;
- } else {
+ if (rx_doorbell_needed) {
tx_buf = &txr->tx_buf_ring[last_tx_cons];
- rx_prod = tx_buf->rx_prod;
+ bnxt_db_write(bp, &rxr->rx_db, tx_buf->rx_prod);
}
- bnxt_db_write(bp, &rxr->rx_db, rx_prod);
}
/* returns the following:
@@ -88,19 +126,19 @@ bool bnxt_rx_xdp(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, u16 cons,
return false;
pdev = bp->pdev;
- txr = rxr->bnapi->tx_ring;
rx_buf = &rxr->rx_buf_ring[cons];
offset = bp->rx_offset;
+ mapping = rx_buf->mapping - bp->rx_dma_offset;
+ dma_sync_single_for_cpu(&pdev->dev, mapping + offset, *len, bp->rx_dir);
+
+ txr = rxr->bnapi->tx_ring;
xdp.data_hard_start = *data_ptr - offset;
xdp.data = *data_ptr;
xdp_set_data_meta_invalid(&xdp);
xdp.data_end = *data_ptr + *len;
xdp.rxq = &rxr->xdp_rxq;
orig_data = xdp.data;
- mapping = rx_buf->mapping - bp->rx_dma_offset;
-
- dma_sync_single_for_cpu(&pdev->dev, mapping + offset, *len, bp->rx_dir);
rcu_read_lock();
act = bpf_prog_run_xdp(xdp_prog, &xdp);
@@ -132,10 +170,34 @@ bool bnxt_rx_xdp(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, u16 cons,
*event = BNXT_TX_EVENT;
dma_sync_single_for_device(&pdev->dev, mapping + offset, *len,
bp->rx_dir);
- bnxt_xmit_xdp(bp, txr, mapping + offset, *len,
- NEXT_RX(rxr->rx_prod));
+ __bnxt_xmit_xdp(bp, txr, mapping + offset, *len,
+ NEXT_RX(rxr->rx_prod));
bnxt_reuse_rx_data(rxr, cons, page);
return true;
+ case XDP_REDIRECT:
+ /* if we are calling this here then we know that the
+ * redirect is coming from a frame received by the
+ * bnxt_en driver.
+ */
+ dma_unmap_page_attrs(&pdev->dev, mapping,
+ PAGE_SIZE, bp->rx_dir,
+ DMA_ATTR_WEAK_ORDERING);
+
+ /* if we are unable to allocate a new buffer, abort and reuse */
+ if (bnxt_alloc_rx_data(bp, rxr, rxr->rx_prod, GFP_ATOMIC)) {
+ trace_xdp_exception(bp->dev, xdp_prog, act);
+ bnxt_reuse_rx_data(rxr, cons, page);
+ return true;
+ }
+
+ if (xdp_do_redirect(bp->dev, &xdp, xdp_prog)) {
+ trace_xdp_exception(bp->dev, xdp_prog, act);
+ page_pool_recycle_direct(rxr->page_pool, page);
+ return true;
+ }
+
+ *event |= BNXT_REDIRECT_EVENT;
+ break;
default:
bpf_warn_invalid_xdp_action(act);
/* Fall thru */
@@ -149,6 +211,56 @@ bool bnxt_rx_xdp(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, u16 cons,
return true;
}
+int bnxt_xdp_xmit(struct net_device *dev, int num_frames,
+ struct xdp_frame **frames, u32 flags)
+{
+ struct bnxt *bp = netdev_priv(dev);
+ struct bpf_prog *xdp_prog = READ_ONCE(bp->xdp_prog);
+ struct pci_dev *pdev = bp->pdev;
+ struct bnxt_tx_ring_info *txr;
+ dma_addr_t mapping;
+ int drops = 0;
+ int ring;
+ int i;
+
+ if (!test_bit(BNXT_STATE_OPEN, &bp->state) ||
+ !bp->tx_nr_rings_xdp ||
+ !xdp_prog)
+ return -EINVAL;
+
+ ring = smp_processor_id() % bp->tx_nr_rings_xdp;
+ txr = &bp->tx_ring[ring];
+
+ for (i = 0; i < num_frames; i++) {
+ struct xdp_frame *xdp = frames[i];
+
+ if (!txr || !bnxt_tx_avail(bp, txr) ||
+ !(bp->bnapi[ring]->flags & BNXT_NAPI_FLAG_XDP)) {
+ xdp_return_frame_rx_napi(xdp);
+ drops++;
+ continue;
+ }
+
+ mapping = dma_map_single(&pdev->dev, xdp->data, xdp->len,
+ DMA_TO_DEVICE);
+
+ if (dma_mapping_error(&pdev->dev, mapping)) {
+ xdp_return_frame_rx_napi(xdp);
+ drops++;
+ continue;
+ }
+ __bnxt_xmit_xdp_redirect(bp, txr, mapping, xdp->len, xdp);
+ }
+
+ if (flags & XDP_XMIT_FLUSH) {
+ /* Sync BD data before updating doorbell */
+ wmb();
+ bnxt_db_write(bp, &txr->tx_db, txr->tx_prod);
+ }
+
+ return num_frames - drops;
+}
+
/* Under rtnl_lock */
static int bnxt_xdp_set(struct bnxt *bp, struct bpf_prog *prog)
{
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h
index 414b748038ca..0df40c3beb05 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h
@@ -10,12 +10,15 @@
#ifndef BNXT_XDP_H
#define BNXT_XDP_H
-void bnxt_xmit_xdp(struct bnxt *bp, struct bnxt_tx_ring_info *txr,
- dma_addr_t mapping, u32 len, u16 rx_prod);
+struct bnxt_sw_tx_bd *bnxt_xmit_bd(struct bnxt *bp,
+ struct bnxt_tx_ring_info *txr,
+ dma_addr_t mapping, u32 len);
void bnxt_tx_int_xdp(struct bnxt *bp, struct bnxt_napi *bnapi, int nr_pkts);
bool bnxt_rx_xdp(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, u16 cons,
struct page *page, u8 **data_ptr, unsigned int *len,
u8 *event);
int bnxt_xdp(struct net_device *dev, struct netdev_bpf *xdp);
+int bnxt_xdp_xmit(struct net_device *dev, int num_frames,
+ struct xdp_frame **frames, u32 flags);
#endif
diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
index 41b50e6570ea..34466b827dde 100644
--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
@@ -640,7 +640,7 @@ static void bcmgenet_set_rx_coalesce(struct bcmgenet_rx_ring *ring,
static void bcmgenet_set_ring_rx_coalesce(struct bcmgenet_rx_ring *ring,
struct ethtool_coalesce *ec)
{
- struct net_dim_cq_moder moder;
+ struct dim_cq_moder moder;
u32 usecs, pkts;
ring->rx_coalesce_usecs = ec->rx_coalesce_usecs;
@@ -1895,7 +1895,7 @@ static int bcmgenet_rx_poll(struct napi_struct *napi, int budget)
{
struct bcmgenet_rx_ring *ring = container_of(napi,
struct bcmgenet_rx_ring, napi);
- struct net_dim_sample dim_sample;
+ struct dim_sample dim_sample;
unsigned int work_done;
work_done = bcmgenet_desc_rx(ring, budget);
@@ -1906,8 +1906,8 @@ static int bcmgenet_rx_poll(struct napi_struct *napi, int budget)
}
if (ring->dim.use_dim) {
- net_dim_sample(ring->dim.event_ctr, ring->dim.packets,
- ring->dim.bytes, &dim_sample);
+ dim_update_sample(ring->dim.event_ctr, ring->dim.packets,
+ ring->dim.bytes, &dim_sample);
net_dim(&ring->dim.dim, dim_sample);
}
@@ -1916,16 +1916,16 @@ static int bcmgenet_rx_poll(struct napi_struct *napi, int budget)
static void bcmgenet_dim_work(struct work_struct *work)
{
- struct net_dim *dim = container_of(work, struct net_dim, work);
+ struct dim *dim = container_of(work, struct dim, work);
struct bcmgenet_net_dim *ndim =
container_of(dim, struct bcmgenet_net_dim, dim);
struct bcmgenet_rx_ring *ring =
container_of(ndim, struct bcmgenet_rx_ring, dim);
- struct net_dim_cq_moder cur_profile =
+ struct dim_cq_moder cur_profile =
net_dim_get_rx_moderation(dim->mode, dim->profile_ix);
bcmgenet_set_rx_coalesce(ring, cur_profile.usec, cur_profile.pkts);
- dim->state = NET_DIM_START_MEASURE;
+ dim->state = DIM_START_MEASURE;
}
/* Assign skb to RX DMA descriptor. */
@@ -2082,7 +2082,7 @@ static void bcmgenet_init_dim(struct bcmgenet_rx_ring *ring,
struct bcmgenet_net_dim *dim = &ring->dim;
INIT_WORK(&dim->dim.work, cb);
- dim->dim.mode = NET_DIM_CQ_PERIOD_MODE_START_FROM_EQE;
+ dim->dim.mode = DIM_CQ_PERIOD_MODE_START_FROM_EQE;
dim->event_ctr = 0;
dim->packets = 0;
dim->bytes = 0;
@@ -2091,7 +2091,7 @@ static void bcmgenet_init_dim(struct bcmgenet_rx_ring *ring,
static void bcmgenet_init_rx_coalesce(struct bcmgenet_rx_ring *ring)
{
struct bcmgenet_net_dim *dim = &ring->dim;
- struct net_dim_cq_moder moder;
+ struct dim_cq_moder moder;
u32 usecs, pkts;
usecs = ring->rx_coalesce_usecs;
diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.h b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
index 9ad835aee1bc..4a8fc03d82fd 100644
--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.h
+++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
@@ -13,7 +13,7 @@
#include <linux/mii.h>
#include <linux/if_vlan.h>
#include <linux/phy.h>
-#include <linux/net_dim.h>
+#include <linux/dim.h>
/* total number of Buffer Descriptors, same for Rx/Tx */
#define TOTAL_DESC 256
@@ -578,7 +578,7 @@ struct bcmgenet_net_dim {
u16 event_ctr;
unsigned long packets;
unsigned long bytes;
- struct net_dim dim;
+ struct dim dim;
};
struct bcmgenet_rx_ring {
diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c
index 6d1f9c822548..4c404d2213f9 100644
--- a/drivers/net/ethernet/broadcom/tg3.c
+++ b/drivers/net/ethernet/broadcom/tg3.c
@@ -6710,7 +6710,7 @@ static int tg3_alloc_rx_data(struct tg3 *tp, struct tg3_rx_prodring_set *tpr,
skb_size = SKB_DATA_ALIGN(data_size + TG3_RX_OFFSET(tp)) +
SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
if (skb_size <= PAGE_SIZE) {
- data = netdev_alloc_frag(skb_size);
+ data = napi_alloc_frag(skb_size);
*frag_size = skb_size;
} else {
data = kmalloc(skb_size, GFP_ATOMIC);
diff --git a/drivers/net/ethernet/cadence/Kconfig b/drivers/net/ethernet/cadence/Kconfig
index 1766697c9c5a..f4b3bd85dfe3 100644
--- a/drivers/net/ethernet/cadence/Kconfig
+++ b/drivers/net/ethernet/cadence/Kconfig
@@ -1,6 +1,6 @@
# SPDX-License-Identifier: GPL-2.0-only
#
-# Atmel device configuration
+# Cadence device configuration
#
config NET_VENDOR_CADENCE
@@ -13,15 +13,15 @@ config NET_VENDOR_CADENCE
If unsure, say Y.
Note that the answer to this question doesn't directly affect the
- kernel: saying N will just cause the configurator to skip all
- the remaining Atmel network card questions. If you say Y, you will be
+ kernel: saying N will just cause the configurator to skip all the
+ remaining Cadence network card questions. If you say Y, you will be
asked for your specific card in the following questions.
if NET_VENDOR_CADENCE
config MACB
tristate "Cadence MACB/GEM support"
- depends on HAS_DMA
+ depends on HAS_DMA && COMMON_CLK
select PHYLIB
---help---
The Cadence MACB ethernet interface is found on many Atmel AT32 and
@@ -42,7 +42,7 @@ config MACB_USE_HWSTAMP
config MACB_PCI
tristate "Cadence PCI MACB/GEM support"
- depends on MACB && PCI && COMMON_CLK
+ depends on MACB && PCI
---help---
This is PCI wrapper for MACB driver.
diff --git a/drivers/net/ethernet/cadence/macb.h b/drivers/net/ethernet/cadence/macb.h
index 6ff123da6a14..03983bd46eef 100644
--- a/drivers/net/ethernet/cadence/macb.h
+++ b/drivers/net/ethernet/cadence/macb.h
@@ -496,7 +496,11 @@
/* Bitfields in TISUBN */
#define GEM_SUBNSINCR_OFFSET 0
-#define GEM_SUBNSINCR_SIZE 16
+#define GEM_SUBNSINCRL_OFFSET 24
+#define GEM_SUBNSINCRL_SIZE 8
+#define GEM_SUBNSINCRH_OFFSET 0
+#define GEM_SUBNSINCRH_SIZE 16
+#define GEM_SUBNSINCR_SIZE 24
/* Bitfields in TI */
#define GEM_NSINCR_OFFSET 0
@@ -834,6 +838,9 @@ struct gem_tx_ts {
/* limit RX checksum offload to TCP and UDP packets */
#define GEM_RX_CSUM_CHECKED_MASK 2
+/* Scaled PPM fraction */
+#define PPM_FRACTION 16
+
/* struct macb_tx_skb - data about an skb which is being transmitted
* @skb: skb currently being transmitted, only set for the last buffer
* of the frame
@@ -1060,7 +1067,8 @@ struct macb_or_gem_ops {
int (*mog_alloc_rx_buffers)(struct macb *bp);
void (*mog_free_rx_buffers)(struct macb *bp);
void (*mog_init_rings)(struct macb *bp);
- int (*mog_rx)(struct macb_queue *queue, int budget);
+ int (*mog_rx)(struct macb_queue *queue, struct napi_struct *napi,
+ int budget);
};
/* MACB-PTP interface: adapt to platform needs. */
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 262a28ff81fc..5ca17e62dc3e 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -7,6 +7,7 @@
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/clk.h>
+#include <linux/clk-provider.h>
#include <linux/crc32.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
@@ -37,6 +38,13 @@
#include <linux/pm_runtime.h>
#include "macb.h"
+/* This structure is only used for MACB on SiFive FU540 devices */
+struct sifive_fu540_macb_mgmt {
+ void __iomem *reg;
+ unsigned long rate;
+ struct clk_hw hw;
+};
+
#define MACB_RX_BUFFER_SIZE 128
#define RX_BUFFER_MULTIPLE 64 /* bytes */
@@ -981,7 +989,8 @@ static void discard_partial_frame(struct macb_queue *queue, unsigned int begin,
*/
}
-static int gem_rx(struct macb_queue *queue, int budget)
+static int gem_rx(struct macb_queue *queue, struct napi_struct *napi,
+ int budget)
{
struct macb *bp = queue->bp;
unsigned int len;
@@ -1063,7 +1072,7 @@ static int gem_rx(struct macb_queue *queue, int budget)
skb->data, 32, true);
#endif
- netif_receive_skb(skb);
+ napi_gro_receive(napi, skb);
}
gem_rx_refill(queue);
@@ -1071,8 +1080,8 @@ static int gem_rx(struct macb_queue *queue, int budget)
return count;
}
-static int macb_rx_frame(struct macb_queue *queue, unsigned int first_frag,
- unsigned int last_frag)
+static int macb_rx_frame(struct macb_queue *queue, struct napi_struct *napi,
+ unsigned int first_frag, unsigned int last_frag)
{
unsigned int len;
unsigned int frag;
@@ -1148,7 +1157,7 @@ static int macb_rx_frame(struct macb_queue *queue, unsigned int first_frag,
bp->dev->stats.rx_bytes += skb->len;
netdev_vdbg(bp->dev, "received skb of length %u, csum: %08x\n",
skb->len, skb->csum);
- netif_receive_skb(skb);
+ napi_gro_receive(napi, skb);
return 0;
}
@@ -1171,7 +1180,8 @@ static inline void macb_init_rx_ring(struct macb_queue *queue)
queue->rx_tail = 0;
}
-static int macb_rx(struct macb_queue *queue, int budget)
+static int macb_rx(struct macb_queue *queue, struct napi_struct *napi,
+ int budget)
{
struct macb *bp = queue->bp;
bool reset_rx_queue = false;
@@ -1208,7 +1218,7 @@ static int macb_rx(struct macb_queue *queue, int budget)
continue;
}
- dropped = macb_rx_frame(queue, first_frag, tail);
+ dropped = macb_rx_frame(queue, napi, first_frag, tail);
first_frag = -1;
if (unlikely(dropped < 0)) {
reset_rx_queue = true;
@@ -1262,7 +1272,7 @@ static int macb_poll(struct napi_struct *napi, int budget)
netdev_vdbg(bp->dev, "poll: status = %08lx, budget = %d\n",
(unsigned long)status, budget);
- work_done = bp->macbgem_ops.mog_rx(queue, budget);
+ work_done = bp->macbgem_ops.mog_rx(queue, napi, budget);
if (work_done < budget) {
napi_complete_done(napi, work_done);
@@ -3477,7 +3487,7 @@ static int macb_init(struct platform_device *pdev)
queue = &bp->queues[q];
queue->bp = bp;
- netif_napi_add(dev, &queue->napi, macb_poll, 64);
+ netif_napi_add(dev, &queue->napi, macb_poll, NAPI_POLL_WEIGHT);
if (hw_q) {
queue->ISR = GEM_ISR(hw_q - 1);
queue->IER = GEM_IER(hw_q - 1);
@@ -3616,6 +3626,8 @@ static int macb_init(struct platform_device *pdev)
/* max number of receive buffers */
#define AT91ETHER_MAX_RX_DESCR 9
+static struct sifive_fu540_macb_mgmt *mgmt;
+
/* Initialize and start the Receiver and Transmit subsystems */
static int at91ether_start(struct net_device *dev)
{
@@ -3943,6 +3955,116 @@ static int at91ether_init(struct platform_device *pdev)
return 0;
}
+static unsigned long fu540_macb_tx_recalc_rate(struct clk_hw *hw,
+ unsigned long parent_rate)
+{
+ return mgmt->rate;
+}
+
+static long fu540_macb_tx_round_rate(struct clk_hw *hw, unsigned long rate,
+ unsigned long *parent_rate)
+{
+ if (WARN_ON(rate < 2500000))
+ return 2500000;
+ else if (rate == 2500000)
+ return 2500000;
+ else if (WARN_ON(rate < 13750000))
+ return 2500000;
+ else if (WARN_ON(rate < 25000000))
+ return 25000000;
+ else if (rate == 25000000)
+ return 25000000;
+ else if (WARN_ON(rate < 75000000))
+ return 25000000;
+ else if (WARN_ON(rate < 125000000))
+ return 125000000;
+ else if (rate == 125000000)
+ return 125000000;
+
+ WARN_ON(rate > 125000000);
+
+ return 125000000;
+}
+
+static int fu540_macb_tx_set_rate(struct clk_hw *hw, unsigned long rate,
+ unsigned long parent_rate)
+{
+ rate = fu540_macb_tx_round_rate(hw, rate, &parent_rate);
+ if (rate != 125000000)
+ iowrite32(1, mgmt->reg);
+ else
+ iowrite32(0, mgmt->reg);
+ mgmt->rate = rate;
+
+ return 0;
+}
+
+static const struct clk_ops fu540_c000_ops = {
+ .recalc_rate = fu540_macb_tx_recalc_rate,
+ .round_rate = fu540_macb_tx_round_rate,
+ .set_rate = fu540_macb_tx_set_rate,
+};
+
+static int fu540_c000_clk_init(struct platform_device *pdev, struct clk **pclk,
+ struct clk **hclk, struct clk **tx_clk,
+ struct clk **rx_clk, struct clk **tsu_clk)
+{
+ struct clk_init_data init;
+ int err = 0;
+
+ err = macb_clk_init(pdev, pclk, hclk, tx_clk, rx_clk, tsu_clk);
+ if (err)
+ return err;
+
+ mgmt = devm_kzalloc(&pdev->dev, sizeof(*mgmt), GFP_KERNEL);
+ if (!mgmt)
+ return -ENOMEM;
+
+ init.name = "sifive-gemgxl-mgmt";
+ init.ops = &fu540_c000_ops;
+ init.flags = 0;
+ init.num_parents = 0;
+
+ mgmt->rate = 0;
+ mgmt->hw.init = &init;
+
+ *tx_clk = clk_register(NULL, &mgmt->hw);
+ if (IS_ERR(*tx_clk))
+ return PTR_ERR(*tx_clk);
+
+ err = clk_prepare_enable(*tx_clk);
+ if (err)
+ dev_err(&pdev->dev, "failed to enable tx_clk (%d)\n", err);
+ else
+ dev_info(&pdev->dev, "Registered clk switch '%s'\n", init.name);
+
+ return 0;
+}
+
+static int fu540_c000_init(struct platform_device *pdev)
+{
+ struct resource *res;
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+ if (!res)
+ return -ENODEV;
+
+ mgmt->reg = ioremap(res->start, resource_size(res));
+ if (!mgmt->reg)
+ return -ENOMEM;
+
+ return macb_init(pdev);
+}
+
+static const struct macb_config fu540_c000_config = {
+ .caps = MACB_CAPS_GIGABIT_MODE_AVAILABLE | MACB_CAPS_JUMBO |
+ MACB_CAPS_GEM_HAS_PTP,
+ .dma_burst_length = 16,
+ .clk_init = fu540_c000_clk_init,
+ .init = fu540_c000_init,
+ .jumbo_max_len = 10240,
+};
+
static const struct macb_config at91sam9260_config = {
.caps = MACB_CAPS_USRIO_HAS_CLKEN | MACB_CAPS_USRIO_DEFAULT_IS_MII_GMII,
.clk_init = macb_clk_init,
@@ -4032,6 +4154,7 @@ static const struct of_device_id macb_dt_ids[] = {
{ .compatible = "cdns,emac", .data = &emac_config },
{ .compatible = "cdns,zynqmp-gem", .data = &zynqmp_config},
{ .compatible = "cdns,zynq-gem", .data = &zynq_config },
+ { .compatible = "sifive,fu540-macb", .data = &fu540_c000_config },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, macb_dt_ids);
@@ -4239,6 +4362,7 @@ err_out_free_netdev:
err_disable_clocks:
clk_disable_unprepare(tx_clk);
+ clk_unregister(tx_clk);
clk_disable_unprepare(hclk);
clk_disable_unprepare(pclk);
clk_disable_unprepare(rx_clk);
@@ -4273,6 +4397,7 @@ static int macb_remove(struct platform_device *pdev)
pm_runtime_dont_use_autosuspend(&pdev->dev);
if (!pm_runtime_suspended(&pdev->dev)) {
clk_disable_unprepare(bp->tx_clk);
+ clk_unregister(bp->tx_clk);
clk_disable_unprepare(bp->hclk);
clk_disable_unprepare(bp->pclk);
clk_disable_unprepare(bp->rx_clk);
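For reference, the FU540 tx-clock callbacks added above only ever report one of three rates (2.5, 25 or 125 MHz), and fu540_macb_tx_set_rate() writes 1 to the management register for anything below gigabit and 0 for 125 MHz. The standalone sketch below (plain userspace C, not driver code) just models that rounding so the cut-over points in fu540_macb_tx_round_rate() are easy to see; the threshold values are copied from the hunk above.

#include <stdio.h>

/* Model of fu540_macb_tx_round_rate(): requests below 13.75 MHz fall back
 * to 2.5 MHz, requests below 75 MHz to 25 MHz, everything else to 125 MHz.
 */
static unsigned long fu540_round_rate_model(unsigned long rate)
{
	if (rate < 13750000UL)
		return 2500000UL;
	if (rate < 75000000UL)
		return 25000000UL;
	return 125000000UL;
}

int main(void)
{
	unsigned long requests[] = { 2500000, 10000000, 25000000,
				     100000000, 125000000 };

	for (unsigned i = 0; i < sizeof(requests) / sizeof(requests[0]); i++) {
		unsigned long r = fu540_round_rate_model(requests[i]);

		/* mgmt reg value 1 selects the non-gigabit path, 0 selects 125 MHz */
		printf("request %9lu Hz -> %9lu Hz, mgmt reg = %d\n",
		       requests[i], r, r != 125000000UL);
	}
	return 0;
}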
diff --git a/drivers/net/ethernet/cadence/macb_ptp.c b/drivers/net/ethernet/cadence/macb_ptp.c
index 0a8aca8d3634..43a3f0dbf857 100644
--- a/drivers/net/ethernet/cadence/macb_ptp.c
+++ b/drivers/net/ethernet/cadence/macb_ptp.c
@@ -104,7 +104,10 @@ static int gem_tsu_incr_set(struct macb *bp, struct tsu_incr *incr_spec)
* to take effect.
*/
spin_lock_irqsave(&bp->tsu_clk_lock, flags);
- gem_writel(bp, TISUBN, GEM_BF(SUBNSINCR, incr_spec->sub_ns));
+ /* RegBit[15:0] = Subns[23:8]; RegBit[31:24] = Subns[7:0] */
+ gem_writel(bp, TISUBN, GEM_BF(SUBNSINCRL, incr_spec->sub_ns) |
+ GEM_BF(SUBNSINCRH, (incr_spec->sub_ns >>
+ GEM_SUBNSINCRL_SIZE)));
gem_writel(bp, TI, GEM_BF(NSINCR, incr_spec->ns));
spin_unlock_irqrestore(&bp->tsu_clk_lock, flags);
@@ -135,7 +138,7 @@ static int gem_ptp_adjfine(struct ptp_clock_info *ptp, long scaled_ppm)
* (temp / USEC_PER_SEC) + 0.5
*/
adj += (USEC_PER_SEC >> 1);
- adj >>= GEM_SUBNSINCR_SIZE; /* remove fractions */
+ adj >>= PPM_FRACTION; /* remove fractions */
adj = div_u64(adj, USEC_PER_SEC);
adj = neg_adj ? (word - adj) : (word + adj);
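The TISUBN change packs the 24-bit sub-nanosecond increment into two register fields. Per the comment in the hunk, register bits [15:0] carry Subns[23:8] and register bits [31:24] carry Subns[7:0]. The snippet below is a standalone model of that packing; the GEM_BF field macros and GEM_SUBNSINCRL_SIZE are not reproduced here, the shifts simply follow the layout stated in the comment.

#include <stdint.h>
#include <stdio.h>

/* Pack a 24-bit sub-ns increment as described by the TISUBN comment:
 * register bits [15:0]  <- sub_ns[23:8]
 * register bits [31:24] <- sub_ns[7:0]
 */
static uint32_t pack_tisubn_model(uint32_t sub_ns)
{
	uint32_t lo16 = (sub_ns >> 8) & 0xffff;	/* upper 16 bits of sub_ns */
	uint32_t hi8  = sub_ns & 0xff;		/* lower 8 bits of sub_ns */

	return (hi8 << 24) | lo16;
}

int main(void)
{
	uint32_t sub_ns = 0xABCDEF;	/* example 24-bit value */

	printf("sub_ns 0x%06x -> TISUBN 0x%08x\n",
	       (unsigned)sub_ns, (unsigned)pack_tisubn_model(sub_ns));
	return 0;
}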
diff --git a/drivers/net/ethernet/calxeda/xgmac.c b/drivers/net/ethernet/calxeda/xgmac.c
index 11d4e91ea754..99f49d059414 100644
--- a/drivers/net/ethernet/calxeda/xgmac.c
+++ b/drivers/net/ethernet/calxeda/xgmac.c
@@ -1855,7 +1855,7 @@ static void xgmac_pmt(void __iomem *ioaddr, unsigned long mode)
static int xgmac_suspend(struct device *dev)
{
- struct net_device *ndev = platform_get_drvdata(to_platform_device(dev));
+ struct net_device *ndev = dev_get_drvdata(dev);
struct xgmac_priv *priv = netdev_priv(ndev);
u32 value;
@@ -1881,7 +1881,7 @@ static int xgmac_suspend(struct device *dev)
static int xgmac_resume(struct device *dev)
{
- struct net_device *ndev = platform_get_drvdata(to_platform_device(dev));
+ struct net_device *ndev = dev_get_drvdata(dev);
struct xgmac_priv *priv = netdev_priv(ndev);
void __iomem *ioaddr = priv->base;
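The xgmac suspend/resume change is a pure simplification: platform_get_drvdata() is defined in terms of dev_get_drvdata() on the embedded struct device, so going through to_platform_device() and back adds nothing. A minimal illustration of the equivalence, using stand-in types rather than the real kernel headers:

#include <stdio.h>

/* Stand-in types; in the kernel, platform_get_drvdata(pdev) is just
 * dev_get_drvdata(&pdev->dev), so dereferencing dev directly is equivalent.
 */
struct device { void *driver_data; };
struct platform_device { struct device dev; };

static void *dev_get_drvdata(const struct device *dev)
{
	return dev->driver_data;
}

static void *platform_get_drvdata(const struct platform_device *pdev)
{
	return dev_get_drvdata(&pdev->dev);
}

int main(void)
{
	struct platform_device pdev = { .dev = { .driver_data = (void *)0x1234 } };
	struct device *dev = &pdev.dev;	/* what the PM callback receives */

	/* both paths yield the same pointer */
	printf("%p %p\n", platform_get_drvdata(&pdev), dev_get_drvdata(dev));
	return 0;
}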
diff --git a/drivers/net/ethernet/chelsio/cxgb4/Makefile b/drivers/net/ethernet/chelsio/cxgb4/Makefile
index 91d8a885deba..20390f6afbb4 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/Makefile
+++ b/drivers/net/ethernet/chelsio/cxgb4/Makefile
@@ -7,7 +7,7 @@ obj-$(CONFIG_CHELSIO_T4) += cxgb4.o
cxgb4-objs := cxgb4_main.o l2t.o smt.o t4_hw.o sge.o clip_tbl.o cxgb4_ethtool.o \
cxgb4_uld.o srq.o sched.o cxgb4_filter.o cxgb4_tc_u32.o \
- cxgb4_ptp.o cxgb4_tc_flower.o cxgb4_cudbg.o \
+ cxgb4_ptp.o cxgb4_tc_flower.o cxgb4_cudbg.o cxgb4_mps.o \
cudbg_common.o cudbg_lib.o cudbg_zlib.o
cxgb4-$(CONFIG_CHELSIO_T4_DCB) += cxgb4_dcb.o
cxgb4-$(CONFIG_CHELSIO_T4_FCOE) += cxgb4_fcoe.o
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h b/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
index a8fe0808823d..1fbb640e896a 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
@@ -280,6 +280,7 @@ struct tp_params {
unsigned short tx_modq[NCHAN]; /* channel to modulation queue map */
u32 vlan_pri_map; /* cached TP_VLAN_PRI_MAP */
+ u32 filter_mask;
u32 ingress_config; /* cached TP_INGRESS_CONFIG */
/* cached TP_OUT_CONFIG compressed error vector
@@ -600,6 +601,7 @@ struct port_info {
u8 vin;
u8 vivld;
u8 smt_idx;
+ u8 rx_cchan;
};
struct dentry;
@@ -878,6 +880,7 @@ struct uld_msix_info {
unsigned short vec;
char desc[IFNAMSIZ + 10];
unsigned int idx;
+ cpumask_var_t aff_mask;
};
struct vf_info {
@@ -902,10 +905,6 @@ struct mbox_list {
struct list_head list;
};
-struct mps_encap_entry {
- atomic_t refcnt;
-};
-
#if IS_ENABLED(CONFIG_THERMAL)
struct ch_thermal {
struct thermal_zone_device *tzdev;
@@ -914,6 +913,14 @@ struct ch_thermal {
};
#endif
+struct mps_entries_ref {
+ struct list_head list;
+ u8 addr[ETH_ALEN];
+ u8 mask[ETH_ALEN];
+ u16 idx;
+ refcount_t refcnt;
+};
+
struct adapter {
void __iomem *regs;
void __iomem *bar2;
@@ -938,9 +945,10 @@ struct adapter {
struct cxgb4_virt_res vres;
unsigned int swintr;
- struct {
+ struct msix_info {
unsigned short vec;
char desc[IFNAMSIZ + 10];
+ cpumask_var_t aff_mask;
} msix_info[MAX_INGQ + 1];
struct uld_msix_info *msix_info_ulds; /* msix info for uld's */
struct uld_msix_bmap msix_bmap_ulds; /* msix bitmap for all uld */
@@ -965,7 +973,6 @@ struct adapter {
unsigned int rawf_start;
unsigned int rawf_cnt;
struct smt_data *smt;
- struct mps_encap_entry *mps_encap;
struct cxgb4_uld_info *uld;
void *uld_handle[CXGB4_ULD_MAX];
unsigned int num_uld;
@@ -973,6 +980,8 @@ struct adapter {
struct list_head list_node;
struct list_head rcu_node;
struct list_head mac_hlist; /* list of MAC addresses in MPS Hash */
+ struct list_head mps_ref;
+ spinlock_t mps_ref_lock; /* lock for syncing mps ref/def activities */
void *iscsi_ppm;
@@ -1898,5 +1907,46 @@ int cxgb4_dcb_enabled(const struct net_device *dev);
int cxgb4_thermal_init(struct adapter *adap);
int cxgb4_thermal_remove(struct adapter *adap);
+int cxgb4_set_msix_aff(struct adapter *adap, unsigned short vec,
+ cpumask_var_t *aff_mask, int idx);
+void cxgb4_clear_msix_aff(unsigned short vec, cpumask_var_t aff_mask);
+
+int cxgb4_change_mac(struct port_info *pi, unsigned int viid,
+ int *tcam_idx, const u8 *addr,
+ bool persistent, u8 *smt_idx);
+
+int cxgb4_alloc_mac_filt(struct adapter *adap, unsigned int viid,
+ bool free, unsigned int naddr,
+ const u8 **addr, u16 *idx,
+ u64 *hash, bool sleep_ok);
+int cxgb4_free_mac_filt(struct adapter *adap, unsigned int viid,
+ unsigned int naddr, const u8 **addr, bool sleep_ok);
+int cxgb4_init_mps_ref_entries(struct adapter *adap);
+void cxgb4_free_mps_ref_entries(struct adapter *adap);
+int cxgb4_alloc_encap_mac_filt(struct adapter *adap, unsigned int viid,
+ const u8 *addr, const u8 *mask,
+ unsigned int vni, unsigned int vni_mask,
+ u8 dip_hit, u8 lookup_type, bool sleep_ok);
+int cxgb4_free_encap_mac_filt(struct adapter *adap, unsigned int viid,
+ int idx, bool sleep_ok);
+int cxgb4_free_raw_mac_filt(struct adapter *adap,
+ unsigned int viid,
+ const u8 *addr,
+ const u8 *mask,
+ unsigned int idx,
+ u8 lookup_type,
+ u8 port_id,
+ bool sleep_ok);
+int cxgb4_alloc_raw_mac_filt(struct adapter *adap,
+ unsigned int viid,
+ const u8 *addr,
+ const u8 *mask,
+ unsigned int idx,
+ u8 lookup_type,
+ u8 port_id,
+ bool sleep_ok);
+int cxgb4_update_mac_filt(struct port_info *pi, unsigned int viid,
+ int *tcam_idx, const u8 *addr,
+ bool persistent, u8 *smt_idx);
#endif /* __CXGB4_H__ */
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
index 4107007b6ec4..43b0f8c57da7 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
@@ -248,8 +248,9 @@ static int validate_filter(struct net_device *dev,
u32 fconf, iconf;
/* Check for unconfigured fields being used. */
- fconf = adapter->params.tp.vlan_pri_map;
iconf = adapter->params.tp.ingress_config;
+ fconf = fs->hash ? adapter->params.tp.filter_mask :
+ adapter->params.tp.vlan_pri_map;
if (unsupported(fconf, FCOE_F, fs->val.fcoe, fs->mask.fcoe) ||
unsupported(fconf, PORT_F, fs->val.iport, fs->mask.iport) ||
@@ -726,10 +727,8 @@ void clear_filter(struct adapter *adap, struct filter_entry *f)
cxgb4_smt_release(f->smt);
if (f->fs.val.encap_vld && f->fs.val.ovlan_vld)
- if (atomic_dec_and_test(&adap->mps_encap[f->fs.val.ovlan &
- 0x1ff].refcnt))
- t4_free_encap_mac_filt(adap, pi->viid,
- f->fs.val.ovlan & 0x1ff, 0);
+ t4_free_encap_mac_filt(adap, pi->viid,
+ f->fs.val.ovlan & 0x1ff, 0);
if ((f->fs.hash || is_t6(adap->params.chip)) && f->fs.type)
cxgb4_clip_release(f->dev, (const u32 *)&f->fs.val.lip, 1);
@@ -1041,7 +1040,7 @@ static void mk_act_open_req6(struct filter_entry *f, struct sk_buff *skb,
RSS_QUEUE_V(f->fs.iq) |
TX_QUEUE_V(f->fs.nat_mode) |
T5_OPT_2_VALID_F |
- RX_CHANNEL_F |
+ RX_CHANNEL_V(cxgb4_port_e2cchan(f->dev)) |
CONG_CNTRL_V((f->fs.action == FILTER_DROP) |
(f->fs.dirsteer << 1)) |
PACE_V((f->fs.maskhash) |
@@ -1081,7 +1080,7 @@ static void mk_act_open_req(struct filter_entry *f, struct sk_buff *skb,
RSS_QUEUE_V(f->fs.iq) |
TX_QUEUE_V(f->fs.nat_mode) |
T5_OPT_2_VALID_F |
- RX_CHANNEL_F |
+ RX_CHANNEL_V(cxgb4_port_e2cchan(f->dev)) |
CONG_CNTRL_V((f->fs.action == FILTER_DROP) |
(f->fs.dirsteer << 1)) |
PACE_V((f->fs.maskhash) |
@@ -1176,7 +1175,6 @@ static int cxgb4_set_hash_filter(struct net_device *dev,
if (ret < 0)
goto free_atid;
- atomic_inc(&adapter->mps_encap[ret].refcnt);
f->fs.val.ovlan = ret;
f->fs.mask.ovlan = 0xffff;
f->fs.val.ovlan_vld = 1;
@@ -1419,7 +1417,6 @@ int __cxgb4_set_filter(struct net_device *dev, int filter_id,
if (ret < 0)
goto free_clip;
- atomic_inc(&adapter->mps_encap[ret].refcnt);
f->fs.val.ovlan = ret;
f->fs.mask.ovlan = 0x1ff;
f->fs.val.ovlan_vld = 1;
@@ -1833,24 +1830,38 @@ void filter_rpl(struct adapter *adap, const struct cpl_set_tcb_rpl *rpl)
}
}
-int init_hash_filter(struct adapter *adap)
+void init_hash_filter(struct adapter *adap)
{
+ u32 reg;
+
/* On T6, verify the necessary register configs and warn the user in
* case of improper config
*/
if (is_t6(adap->params.chip)) {
- if (TCAM_ACTV_HIT_G(t4_read_reg(adap, LE_DB_RSP_CODE_0_A)) != 4)
- goto err;
+ if (is_offload(adap)) {
+ if (!(t4_read_reg(adap, TP_GLOBAL_CONFIG_A)
+ & ACTIVEFILTERCOUNTS_F)) {
+ dev_err(adap->pdev_dev, "Invalid hash filter + ofld config\n");
+ return;
+ }
+ } else {
+ reg = t4_read_reg(adap, LE_DB_RSP_CODE_0_A);
+ if (TCAM_ACTV_HIT_G(reg) != 4) {
+ dev_err(adap->pdev_dev, "Invalid hash filter config\n");
+ return;
+ }
+
+ reg = t4_read_reg(adap, LE_DB_RSP_CODE_1_A);
+ if (HASH_ACTV_HIT_G(reg) != 4) {
+ dev_err(adap->pdev_dev, "Invalid hash filter config\n");
+ return;
+ }
+ }
- if (HASH_ACTV_HIT_G(t4_read_reg(adap, LE_DB_RSP_CODE_1_A)) != 4)
- goto err;
} else {
dev_err(adap->pdev_dev, "Hash filter supported only on T6\n");
- return -EINVAL;
+ return;
}
+
adap->params.hash_filter = 1;
- return 0;
-err:
- dev_warn(adap->pdev_dev, "Invalid hash filter config!\n");
- return -EINVAL;
}
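validate_filter() now checks hash filters against the new tp.filter_mask instead of the TCAM's vlan_pri_map. The sketch below models the kind of check unsupported() performs: a field is rejected when the filter specification uses it (non-zero value or mask) but the selected configuration word doesn't include that field. The macro body isn't part of this hunk, so treat the helper and the sample bit values as assumptions for illustration, not a copy of the driver's definitions.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Assumed semantics of unsupported(): the field is used by the filter
 * (non-zero value or mask) but absent from the configured field set.
 */
static bool field_unsupported(uint32_t fconf, uint32_t field_bit,
			      uint32_t val, uint32_t mask)
{
	return !(fconf & field_bit) && (val || mask);
}

int main(void)
{
	uint32_t vlan_pri_map = 0x0001;	/* hypothetical TCAM filter mode */
	uint32_t filter_mask  = 0x0003;	/* hypothetical hash filter mask */
	uint32_t PORT_F = 0x0002;	/* hypothetical field bit */
	bool hash = true;

	/* mirrors: fconf = fs->hash ? tp.filter_mask : tp.vlan_pri_map */
	uint32_t fconf = hash ? filter_mask : vlan_pri_map;

	printf("iport match %s\n",
	       field_unsupported(fconf, PORT_F, 1, 0x7) ? "rejected" : "accepted");
	return 0;
}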
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.h b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.h
index 8db5fca6dcc9..b0751c0611ec 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.h
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.h
@@ -50,7 +50,7 @@ int delete_filter(struct adapter *adapter, unsigned int fidx);
int writable_filter(struct filter_entry *f);
void clear_all_filters(struct adapter *adapter);
-int init_hash_filter(struct adapter *adap);
+void init_hash_filter(struct adapter *adap);
bool is_filter_exact_match(struct adapter *adap,
struct ch_filter_specification *fs);
#endif /* __CXGB4_FILTER_H */
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
index 715e4edcf4a2..67202b6f352e 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
@@ -366,13 +366,19 @@ static int cxgb4_mac_sync(struct net_device *netdev, const u8 *mac_addr)
int ret;
u64 mhash = 0;
u64 uhash = 0;
+ /* idx stores the indices of the allocated filters; its size should
+ * match the number of MAC addresses we allocate filters for (one here).
+ */
+
+ u16 idx[1] = {};
bool free = false;
bool ucast = is_unicast_ether_addr(mac_addr);
const u8 *maclist[1] = {mac_addr};
struct hash_mac_addr *new_entry;
- ret = t4_alloc_mac_filt(adap, adap->mbox, pi->viid, free, 1, maclist,
- NULL, ucast ? &uhash : &mhash, false);
+ ret = cxgb4_alloc_mac_filt(adap, pi->viid, free, 1, maclist,
+ idx, ucast ? &uhash : &mhash, false);
if (ret < 0)
goto out;
/* if hash != 0, then add the addr to hash addr list
@@ -410,7 +416,7 @@ static int cxgb4_mac_unsync(struct net_device *netdev, const u8 *mac_addr)
}
}
- ret = t4_free_mac_filt(adap, adap->mbox, pi->viid, 1, maclist, false);
+ ret = cxgb4_free_mac_filt(adap, pi->viid, 1, maclist, false);
return ret < 0 ? -EINVAL : 0;
}
@@ -449,9 +455,9 @@ static int set_rxmode(struct net_device *dev, int mtu, bool sleep_ok)
* Addresses are programmed to hash region, if tcam runs out of entries.
*
*/
-static int cxgb4_change_mac(struct port_info *pi, unsigned int viid,
- int *tcam_idx, const u8 *addr, bool persist,
- u8 *smt_idx)
+int cxgb4_change_mac(struct port_info *pi, unsigned int viid,
+ int *tcam_idx, const u8 *addr, bool persist,
+ u8 *smt_idx)
{
struct adapter *adapter = pi->adapter;
struct hash_mac_addr *entry, *new_entry;
@@ -505,8 +511,8 @@ static int link_start(struct net_device *dev)
ret = t4_set_rxmode(pi->adapter, mb, pi->viid, dev->mtu, -1, -1, -1,
!!(dev->features & NETIF_F_HW_VLAN_CTAG_RX), true);
if (ret == 0)
- ret = cxgb4_change_mac(pi, pi->viid, &pi->xact_addr_filt,
- dev->dev_addr, true, &pi->smt_idx);
+ ret = cxgb4_update_mac_filt(pi, pi->viid, &pi->xact_addr_filt,
+ dev->dev_addr, true, &pi->smt_idx);
if (ret == 0)
ret = t4_link_l1cfg(pi->adapter, mb, pi->tx_chan,
&pi->link_cfg);
@@ -702,9 +708,38 @@ static void name_msix_vecs(struct adapter *adap)
}
}
+int cxgb4_set_msix_aff(struct adapter *adap, unsigned short vec,
+ cpumask_var_t *aff_mask, int idx)
+{
+ int rv;
+
+ if (!zalloc_cpumask_var(aff_mask, GFP_KERNEL)) {
+ dev_err(adap->pdev_dev, "alloc_cpumask_var failed\n");
+ return -ENOMEM;
+ }
+
+ cpumask_set_cpu(cpumask_local_spread(idx, dev_to_node(adap->pdev_dev)),
+ *aff_mask);
+
+ rv = irq_set_affinity_hint(vec, *aff_mask);
+ if (rv)
+ dev_warn(adap->pdev_dev,
+ "irq_set_affinity_hint %u failed %d\n",
+ vec, rv);
+
+ return 0;
+}
+
+void cxgb4_clear_msix_aff(unsigned short vec, cpumask_var_t aff_mask)
+{
+ irq_set_affinity_hint(vec, NULL);
+ free_cpumask_var(aff_mask);
+}
+
static int request_msix_queue_irqs(struct adapter *adap)
{
struct sge *s = &adap->sge;
+ struct msix_info *minfo;
int err, ethqidx;
int msi_index = 2;
@@ -714,32 +749,77 @@ static int request_msix_queue_irqs(struct adapter *adap)
return err;
for_each_ethrxq(s, ethqidx) {
- err = request_irq(adap->msix_info[msi_index].vec,
+ minfo = &adap->msix_info[msi_index];
+ err = request_irq(minfo->vec,
t4_sge_intr_msix, 0,
- adap->msix_info[msi_index].desc,
+ minfo->desc,
&s->ethrxq[ethqidx].rspq);
if (err)
goto unwind;
+
+ cxgb4_set_msix_aff(adap, minfo->vec,
+ &minfo->aff_mask, ethqidx);
msi_index++;
}
return 0;
unwind:
- while (--ethqidx >= 0)
- free_irq(adap->msix_info[--msi_index].vec,
- &s->ethrxq[ethqidx].rspq);
+ while (--ethqidx >= 0) {
+ msi_index--;
+ minfo = &adap->msix_info[msi_index];
+ cxgb4_clear_msix_aff(minfo->vec, minfo->aff_mask);
+ free_irq(minfo->vec, &s->ethrxq[ethqidx].rspq);
+ }
free_irq(adap->msix_info[1].vec, &s->fw_evtq);
return err;
}
static void free_msix_queue_irqs(struct adapter *adap)
{
- int i, msi_index = 2;
struct sge *s = &adap->sge;
+ struct msix_info *minfo;
+ int i, msi_index = 2;
free_irq(adap->msix_info[1].vec, &s->fw_evtq);
- for_each_ethrxq(s, i)
- free_irq(adap->msix_info[msi_index++].vec, &s->ethrxq[i].rspq);
+ for_each_ethrxq(s, i) {
+ minfo = &adap->msix_info[msi_index++];
+ cxgb4_clear_msix_aff(minfo->vec, minfo->aff_mask);
+ free_irq(minfo->vec, &s->ethrxq[i].rspq);
+ }
+}
+
+static int setup_ppod_edram(struct adapter *adap)
+{
+ unsigned int param, val;
+ int ret;
+
+ /* The driver issues a FW_PARAMS_PARAM_DEV_PPOD_EDRAM read command to
+ * check whether the firmware supports the ppod edram feature. If the
+ * firmware returns 1, the driver enables the feature by writing the
+ * same parameter back with value 1.
+ */
+ param = (FW_PARAMS_MNEM_V(FW_PARAMS_MNEM_DEV) |
+ FW_PARAMS_PARAM_X_V(FW_PARAMS_PARAM_DEV_PPOD_EDRAM));
+
+ ret = t4_query_params(adap, adap->mbox, adap->pf, 0, 1, &param, &val);
+ if (ret < 0) {
+ dev_warn(adap->pdev_dev,
+ "querying PPOD_EDRAM support failed: %d\n",
+ ret);
+ return -1;
+ }
+
+ if (val != 1)
+ return -1;
+
+ ret = t4_set_params(adap, adap->mbox, adap->pf, 0, 1, &param, &val);
+ if (ret < 0) {
+ dev_err(adap->pdev_dev,
+ "setting PPOD_EDRAM failed: %d\n", ret);
+ return -1;
+ }
+ return 0;
}
/**
@@ -1646,6 +1726,18 @@ unsigned int cxgb4_port_chan(const struct net_device *dev)
}
EXPORT_SYMBOL(cxgb4_port_chan);
+/**
+ * cxgb4_port_e2cchan - get the HW c-channel of a port
+ * @dev: the net device for the port
+ *
+ * Return the HW RX c-channel of the given port.
+ */
+unsigned int cxgb4_port_e2cchan(const struct net_device *dev)
+{
+ return netdev2pinfo(dev)->rx_cchan;
+}
+EXPORT_SYMBOL(cxgb4_port_e2cchan);
+
unsigned int cxgb4_dbfifo_count(const struct net_device *dev, int lpfifo)
{
struct adapter *adap = netdev2adap(dev);
@@ -2934,8 +3026,8 @@ static int cxgb_set_mac_addr(struct net_device *dev, void *p)
if (!is_valid_ether_addr(addr->sa_data))
return -EADDRNOTAVAIL;
- ret = cxgb4_change_mac(pi, pi->viid, &pi->xact_addr_filt,
- addr->sa_data, true, &pi->smt_idx);
+ ret = cxgb4_update_mac_filt(pi, pi->viid, &pi->xact_addr_filt,
+ addr->sa_data, true, &pi->smt_idx);
if (ret < 0)
return ret;
@@ -3043,14 +3135,14 @@ static int cxgb_set_tx_maxrate(struct net_device *dev, int index, u32 rate)
}
static int cxgb_setup_tc_flower(struct net_device *dev,
- struct tc_cls_flower_offload *cls_flower)
+ struct flow_cls_offload *cls_flower)
{
switch (cls_flower->command) {
- case TC_CLSFLOWER_REPLACE:
+ case FLOW_CLS_REPLACE:
return cxgb4_tc_flower_replace(dev, cls_flower);
- case TC_CLSFLOWER_DESTROY:
+ case FLOW_CLS_DESTROY:
return cxgb4_tc_flower_destroy(dev, cls_flower);
- case TC_CLSFLOWER_STATS:
+ case FLOW_CLS_STATS:
return cxgb4_tc_flower_stats(dev, cls_flower);
default:
return -EOPNOTSUPP;
@@ -3098,32 +3190,19 @@ static int cxgb_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
}
}
-static int cxgb_setup_tc_block(struct net_device *dev,
- struct tc_block_offload *f)
-{
- struct port_info *pi = netdev2pinfo(dev);
-
- if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
- return -EOPNOTSUPP;
-
- switch (f->command) {
- case TC_BLOCK_BIND:
- return tcf_block_cb_register(f->block, cxgb_setup_tc_block_cb,
- pi, dev, f->extack);
- case TC_BLOCK_UNBIND:
- tcf_block_cb_unregister(f->block, cxgb_setup_tc_block_cb, pi);
- return 0;
- default:
- return -EOPNOTSUPP;
- }
-}
+static LIST_HEAD(cxgb_block_cb_list);
static int cxgb_setup_tc(struct net_device *dev, enum tc_setup_type type,
void *type_data)
{
+ struct port_info *pi = netdev2pinfo(dev);
+
switch (type) {
case TC_SETUP_BLOCK:
- return cxgb_setup_tc_block(dev, type_data);
+ return flow_block_cb_setup_simple(type_data,
+ &cxgb_block_cb_list,
+ cxgb_setup_tc_block_cb,
+ pi, dev, true);
default:
return -EOPNOTSUPP;
}
@@ -3187,8 +3266,6 @@ static void cxgb_del_udp_tunnel(struct net_device *netdev,
i);
return;
}
- atomic_dec(&adapter->mps_encap[adapter->rawf_start +
- pi->port_id].refcnt);
}
}
@@ -3277,7 +3354,6 @@ static void cxgb_add_udp_tunnel(struct net_device *netdev,
cxgb_del_udp_tunnel(netdev, ti);
return;
}
- atomic_inc(&adapter->mps_encap[ret].refcnt);
}
}
@@ -3905,14 +3981,14 @@ static int adap_init0_phy(struct adapter *adap)
*/
static int adap_init0_config(struct adapter *adapter, int reset)
{
+ char *fw_config_file, fw_config_file_path[256];
+ u32 finiver, finicsum, cfcsum, param, val;
struct fw_caps_config_cmd caps_cmd;
- const struct firmware *cf;
unsigned long mtype = 0, maddr = 0;
- u32 finiver, finicsum, cfcsum;
- int ret;
- int config_issued = 0;
- char *fw_config_file, fw_config_file_path[256];
+ const struct firmware *cf;
char *config_name = NULL;
+ int config_issued = 0;
+ int ret;
/*
* Reset device if necessary.
@@ -4020,6 +4096,24 @@ static int adap_init0_config(struct adapter *adapter, int reset)
goto bye;
}
+ val = 0;
+
+ /* Ofld + Hash filter is supported. Older fw will fail this request and
+ * it is fine.
+ */
+ param = (FW_PARAMS_MNEM_V(FW_PARAMS_MNEM_DEV) |
+ FW_PARAMS_PARAM_X_V(FW_PARAMS_PARAM_DEV_HASHFILTER_WITH_OFLD));
+ ret = t4_set_params(adapter, adapter->mbox, adapter->pf, 0,
+ 1, &param, &val);
+
+ /* FW doesn't know about Hash filter + ofld support,
+ * it's not a problem, don't return an error.
+ */
+ if (ret < 0) {
+ dev_warn(adapter->pdev_dev,
+ "Hash filter with ofld is not supported by FW\n");
+ }
+
/*
* Issue a Capability Configuration command to the firmware to get it
* to parse the Configuration File. We don't use t4_fw_config_file()
@@ -4096,6 +4190,13 @@ static int adap_init0_config(struct adapter *adapter, int reset)
dev_err(adapter->pdev_dev,
"HMA configuration failed with error %d\n", ret);
+ if (is_t6(adapter->params.chip)) {
+ ret = setup_ppod_edram(adapter);
+ if (!ret)
+ dev_info(adapter->pdev_dev, "Successfully enabled "
+ "ppod edram feature\n");
+ }
+
/*
* And finally tell the firmware to initialize itself using the
* parameters from the Configuration File.
@@ -4580,6 +4681,13 @@ static int adap_init0(struct adapter *adap)
if (ret < 0)
goto bye;
+ /* The hash filter has some mandatory register settings that must be
+ * checked, and that check depends on whether offload is enabled, so
+ * record the offload capability here.
+ */
+ if (caps_cmd.ofldcaps)
+ adap->params.offload = 1;
+
if (caps_cmd.ofldcaps ||
(caps_cmd.niccaps & htons(FW_CAPS_CONFIG_NIC_HASHFILTER))) {
/* query offload-related parameters */
@@ -4619,11 +4727,8 @@ static int adap_init0(struct adapter *adap)
adap->params.ofldq_wr_cred = val[5];
if (caps_cmd.niccaps & htons(FW_CAPS_CONFIG_NIC_HASHFILTER)) {
- ret = init_hash_filter(adap);
- if (ret < 0)
- goto bye;
+ init_hash_filter(adap);
} else {
- adap->params.offload = 1;
adap->num_ofld_uld += 1;
}
}
@@ -4715,6 +4820,22 @@ static int adap_init0(struct adapter *adap)
goto bye;
adap->vres.iscsi.start = val[0];
adap->vres.iscsi.size = val[1] - val[0] + 1;
+ if (is_t6(adap->params.chip)) {
+ params[0] = FW_PARAM_PFVF(PPOD_EDRAM_START);
+ params[1] = FW_PARAM_PFVF(PPOD_EDRAM_END);
+ ret = t4_query_params(adap, adap->mbox, adap->pf, 0, 2,
+ params, val);
+ if (!ret) {
+ adap->vres.ppod_edram.start = val[0];
+ adap->vres.ppod_edram.size =
+ val[1] - val[0] + 1;
+
+ dev_info(adap->pdev_dev,
+ "ppod edram start 0x%x end 0x%x size 0x%x\n",
+ val[0], val[1],
+ adap->vres.ppod_edram.size);
+ }
+ }
/* LIO target and cxgb4i initiaitor */
adap->num_ofld_uld += 2;
}
@@ -5315,7 +5436,6 @@ static void free_some_resources(struct adapter *adapter)
{
unsigned int i;
- kvfree(adapter->mps_encap);
kvfree(adapter->smt);
kvfree(adapter->l2t);
kvfree(adapter->srq);
@@ -5841,12 +5961,6 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
adapter->params.offload = 0;
}
- adapter->mps_encap = kvcalloc(adapter->params.arch.mps_tcam_size,
- sizeof(struct mps_encap_entry),
- GFP_KERNEL);
- if (!adapter->mps_encap)
- dev_warn(&pdev->dev, "could not allocate MPS Encap entries, continuing\n");
-
#if IS_ENABLED(CONFIG_IPV6)
if (chip_ver <= CHELSIO_T5 &&
(!(t4_read_reg(adapter, LE_DB_CONFIG_A) & ASLIPCOMPEN_F))) {
@@ -5922,6 +6036,8 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
/* check for PCI Express bandwidth capabiltites */
pcie_print_link_status(pdev);
+ cxgb4_init_mps_ref_entries(adapter);
+
err = init_rss(adapter);
if (err)
goto out_free_dev;
@@ -6048,6 +6164,8 @@ static void remove_one(struct pci_dev *pdev)
disable_interrupts(adapter);
+ cxgb4_free_mps_ref_entries(adapter);
+
for_each_port(adapter, i)
if (adapter->port[i]->reg_state == NETREG_REGISTERED)
unregister_netdev(adapter->port[i]);
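setup_ppod_edram() above follows a query-then-enable handshake: the driver reads the FW_PARAMS_PARAM_DEV_PPOD_EDRAM parameter, and only if the firmware reports 1 does it write the same parameter back to turn the feature on. A compact standalone model of that flow follows; the two firmware calls are stubbed, only the control flow mirrors the hunk.

#include <stdio.h>

/* Stubs standing in for t4_query_params()/t4_set_params(); a real firmware
 * mailbox would be consulted here. Returning 0 means the call succeeded.
 */
static int query_ppod_edram(unsigned int *val)
{
	*val = 1;		/* pretend firmware advertises the feature */
	return 0;
}

static int set_ppod_edram(unsigned int val)
{
	(void)val;
	return 0;
}

/* Mirrors setup_ppod_edram(): returns -1 when unsupported or on any failure. */
static int setup_ppod_edram_model(void)
{
	unsigned int val;

	if (query_ppod_edram(&val) < 0)
		return -1;	/* query failed */
	if (val != 1)
		return -1;	/* firmware does not support ppod edram */
	if (set_ppod_edram(1) < 0)
		return -1;	/* enabling failed */
	return 0;
}

int main(void)
{
	printf("ppod edram %s\n", setup_ppod_edram_model() ? "disabled" : "enabled");
	return 0;
}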
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_mps.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_mps.c
new file mode 100644
index 000000000000..b1a073eea60b
--- /dev/null
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_mps.c
@@ -0,0 +1,241 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2019 Chelsio Communications, Inc. All rights reserved. */
+
+#include "cxgb4.h"
+
+static int cxgb4_mps_ref_dec_by_mac(struct adapter *adap,
+ const u8 *addr, const u8 *mask)
+{
+ u8 bitmask[] = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff };
+ struct mps_entries_ref *mps_entry, *tmp;
+ int ret = -EINVAL;
+
+ spin_lock_bh(&adap->mps_ref_lock);
+ list_for_each_entry_safe(mps_entry, tmp, &adap->mps_ref, list) {
+ if (ether_addr_equal(mps_entry->addr, addr) &&
+ ether_addr_equal(mps_entry->mask, mask ? mask : bitmask)) {
+ if (!refcount_dec_and_test(&mps_entry->refcnt)) {
+ spin_unlock_bh(&adap->mps_ref_lock);
+ return -EBUSY;
+ }
+ list_del(&mps_entry->list);
+ kfree(mps_entry);
+ ret = 0;
+ break;
+ }
+ }
+ spin_unlock_bh(&adap->mps_ref_lock);
+ return ret;
+}
+
+static int cxgb4_mps_ref_dec(struct adapter *adap, u16 idx)
+{
+ struct mps_entries_ref *mps_entry, *tmp;
+ int ret = -EINVAL;
+
+ spin_lock(&adap->mps_ref_lock);
+ list_for_each_entry_safe(mps_entry, tmp, &adap->mps_ref, list) {
+ if (mps_entry->idx == idx) {
+ if (!refcount_dec_and_test(&mps_entry->refcnt)) {
+ spin_unlock(&adap->mps_ref_lock);
+ return -EBUSY;
+ }
+ list_del(&mps_entry->list);
+ kfree(mps_entry);
+ ret = 0;
+ break;
+ }
+ }
+ spin_unlock(&adap->mps_ref_lock);
+ return ret;
+}
+
+static int cxgb4_mps_ref_inc(struct adapter *adap, const u8 *mac_addr,
+ u16 idx, const u8 *mask)
+{
+ u8 bitmask[] = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff };
+ struct mps_entries_ref *mps_entry;
+ int ret = 0;
+
+ spin_lock_bh(&adap->mps_ref_lock);
+ list_for_each_entry(mps_entry, &adap->mps_ref, list) {
+ if (mps_entry->idx == idx) {
+ refcount_inc(&mps_entry->refcnt);
+ goto unlock;
+ }
+ }
+ mps_entry = kzalloc(sizeof(*mps_entry), GFP_ATOMIC);
+ if (!mps_entry) {
+ ret = -ENOMEM;
+ goto unlock;
+ }
+ ether_addr_copy(mps_entry->mask, mask ? mask : bitmask);
+ ether_addr_copy(mps_entry->addr, mac_addr);
+ mps_entry->idx = idx;
+ refcount_set(&mps_entry->refcnt, 1);
+ list_add_tail(&mps_entry->list, &adap->mps_ref);
+unlock:
+ spin_unlock_bh(&adap->mps_ref_lock);
+ return ret;
+}
+
+int cxgb4_free_mac_filt(struct adapter *adap, unsigned int viid,
+ unsigned int naddr, const u8 **addr, bool sleep_ok)
+{
+ int ret, i;
+
+ for (i = 0; i < naddr; i++) {
+ if (!cxgb4_mps_ref_dec_by_mac(adap, addr[i], NULL)) {
+ ret = t4_free_mac_filt(adap, adap->mbox, viid,
+ 1, &addr[i], sleep_ok);
+ if (ret < 0)
+ return ret;
+ }
+ }
+
+ /* return number of filters freed */
+ return naddr;
+}
+
+int cxgb4_alloc_mac_filt(struct adapter *adap, unsigned int viid,
+ bool free, unsigned int naddr, const u8 **addr,
+ u16 *idx, u64 *hash, bool sleep_ok)
+{
+ int ret, i;
+
+ ret = t4_alloc_mac_filt(adap, adap->mbox, viid, free,
+ naddr, addr, idx, hash, sleep_ok);
+ if (ret < 0)
+ return ret;
+
+ for (i = 0; i < naddr; i++) {
+ if (idx[i] != 0xffff) {
+ if (cxgb4_mps_ref_inc(adap, addr[i], idx[i], NULL)) {
+ ret = -ENOMEM;
+ goto error;
+ }
+ }
+ }
+
+ goto out;
+error:
+ cxgb4_free_mac_filt(adap, viid, naddr, addr, sleep_ok);
+
+out:
+ /* Returns a negative error number or the number of filters allocated */
+ return ret;
+}
+
+int cxgb4_update_mac_filt(struct port_info *pi, unsigned int viid,
+ int *tcam_idx, const u8 *addr,
+ bool persistent, u8 *smt_idx)
+{
+ int ret;
+
+ ret = cxgb4_change_mac(pi, viid, tcam_idx,
+ addr, persistent, smt_idx);
+ if (ret < 0)
+ return ret;
+
+ cxgb4_mps_ref_inc(pi->adapter, addr, *tcam_idx, NULL);
+ return ret;
+}
+
+int cxgb4_free_raw_mac_filt(struct adapter *adap,
+ unsigned int viid,
+ const u8 *addr,
+ const u8 *mask,
+ unsigned int idx,
+ u8 lookup_type,
+ u8 port_id,
+ bool sleep_ok)
+{
+ int ret = 0;
+
+ if (!cxgb4_mps_ref_dec(adap, idx))
+ ret = t4_free_raw_mac_filt(adap, viid, addr,
+ mask, idx, lookup_type,
+ port_id, sleep_ok);
+
+ return ret;
+}
+
+int cxgb4_alloc_raw_mac_filt(struct adapter *adap,
+ unsigned int viid,
+ const u8 *addr,
+ const u8 *mask,
+ unsigned int idx,
+ u8 lookup_type,
+ u8 port_id,
+ bool sleep_ok)
+{
+ int ret;
+
+ ret = t4_alloc_raw_mac_filt(adap, viid, addr,
+ mask, idx, lookup_type,
+ port_id, sleep_ok);
+ if (ret < 0)
+ return ret;
+
+ if (cxgb4_mps_ref_inc(adap, addr, ret, mask)) {
+ ret = -ENOMEM;
+ t4_free_raw_mac_filt(adap, viid, addr,
+ mask, idx, lookup_type,
+ port_id, sleep_ok);
+ }
+
+ return ret;
+}
+
+int cxgb4_free_encap_mac_filt(struct adapter *adap, unsigned int viid,
+ int idx, bool sleep_ok)
+{
+ int ret = 0;
+
+ if (!cxgb4_mps_ref_dec(adap, idx))
+ ret = t4_free_encap_mac_filt(adap, viid, idx, sleep_ok);
+
+ return ret;
+}
+
+int cxgb4_alloc_encap_mac_filt(struct adapter *adap, unsigned int viid,
+ const u8 *addr, const u8 *mask,
+ unsigned int vni, unsigned int vni_mask,
+ u8 dip_hit, u8 lookup_type, bool sleep_ok)
+{
+ int ret;
+
+ ret = t4_alloc_encap_mac_filt(adap, viid, addr, mask, vni, vni_mask,
+ dip_hit, lookup_type, sleep_ok);
+ if (ret < 0)
+ return ret;
+
+ if (cxgb4_mps_ref_inc(adap, addr, ret, mask)) {
+ ret = -ENOMEM;
+ t4_free_encap_mac_filt(adap, viid, ret, sleep_ok);
+ }
+ return ret;
+}
+
+int cxgb4_init_mps_ref_entries(struct adapter *adap)
+{
+ spin_lock_init(&adap->mps_ref_lock);
+ INIT_LIST_HEAD(&adap->mps_ref);
+
+ return 0;
+}
+
+void cxgb4_free_mps_ref_entries(struct adapter *adap)
+{
+ struct mps_entries_ref *mps_entry, *tmp;
+
+ if (!list_empty(&adap->mps_ref))
+ return;
+
+ spin_lock(&adap->mps_ref_lock);
+ list_for_each_entry_safe(mps_entry, tmp, &adap->mps_ref, list) {
+ list_del(&mps_entry->list);
+ kfree(mps_entry);
+ }
+ spin_unlock(&adap->mps_ref_lock);
+}
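The new cxgb4_mps.c keeps one reference-counted list entry per MPS TCAM index so that the hardware filter is only freed when its last user drops it. Below is a userspace model of that inc/dec-and-free pattern, with a plain singly linked list and an int counter standing in for list_head, refcount_t and the spinlock; it is a sketch of the idea, not the driver code.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct mps_ref {
	struct mps_ref *next;
	unsigned char addr[6];
	unsigned short idx;
	int refcnt;
};

static struct mps_ref *mps_list;

/* Take a reference on idx, creating the entry on first use
 * (mirrors cxgb4_mps_ref_inc()).
 */
static int mps_ref_inc(const unsigned char *addr, unsigned short idx)
{
	struct mps_ref *e;

	for (e = mps_list; e; e = e->next)
		if (e->idx == idx) {
			e->refcnt++;
			return 0;
		}
	e = calloc(1, sizeof(*e));
	if (!e)
		return -1;
	memcpy(e->addr, addr, 6);
	e->idx = idx;
	e->refcnt = 1;
	e->next = mps_list;
	mps_list = e;
	return 0;
}

/* Drop a reference; returns true when this was the last user and the
 * hardware entry may now be freed (the role of cxgb4_mps_ref_dec()).
 */
static bool mps_ref_dec(unsigned short idx)
{
	struct mps_ref **p, *e;

	for (p = &mps_list; (e = *p); p = &e->next)
		if (e->idx == idx) {
			if (--e->refcnt > 0)
				return false;
			*p = e->next;
			free(e);
			return true;
		}
	return false;
}

int main(void)
{
	unsigned char mac[6] = { 0x02, 0, 0, 0, 0, 1 };

	mps_ref_inc(mac, 5);
	mps_ref_inc(mac, 5);
	printf("first put frees entry: %s\n", mps_ref_dec(5) ? "yes" : "no");
	printf("second put frees entry: %s\n", mps_ref_dec(5) ? "yes" : "no");
	return 0;
}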
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c
index cfaf8f618d1f..312599c6b35a 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c
@@ -80,10 +80,10 @@ static struct ch_tc_flower_entry *ch_flower_lookup(struct adapter *adap,
}
static void cxgb4_process_flow_match(struct net_device *dev,
- struct tc_cls_flower_offload *cls,
+ struct flow_cls_offload *cls,
struct ch_filter_specification *fs)
{
- struct flow_rule *rule = tc_cls_flower_offload_flow_rule(cls);
+ struct flow_rule *rule = flow_cls_offload_flow_rule(cls);
u16 addr_type = 0;
if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_CONTROL)) {
@@ -223,9 +223,9 @@ static void cxgb4_process_flow_match(struct net_device *dev,
}
static int cxgb4_validate_flow_match(struct net_device *dev,
- struct tc_cls_flower_offload *cls)
+ struct flow_cls_offload *cls)
{
- struct flow_rule *rule = tc_cls_flower_offload_flow_rule(cls);
+ struct flow_rule *rule = flow_cls_offload_flow_rule(cls);
struct flow_dissector *dissector = rule->match.dissector;
u16 ethtype_mask = 0;
u16 ethtype_key = 0;
@@ -378,10 +378,10 @@ static void process_pedit_field(struct ch_filter_specification *fs, u32 val,
}
static void cxgb4_process_flow_actions(struct net_device *in,
- struct tc_cls_flower_offload *cls,
+ struct flow_cls_offload *cls,
struct ch_filter_specification *fs)
{
- struct flow_rule *rule = tc_cls_flower_offload_flow_rule(cls);
+ struct flow_rule *rule = flow_cls_offload_flow_rule(cls);
struct flow_action_entry *act;
int i;
@@ -544,9 +544,9 @@ static bool valid_pedit_action(struct net_device *dev,
}
static int cxgb4_validate_flow_actions(struct net_device *dev,
- struct tc_cls_flower_offload *cls)
+ struct flow_cls_offload *cls)
{
- struct flow_rule *rule = tc_cls_flower_offload_flow_rule(cls);
+ struct flow_rule *rule = flow_cls_offload_flow_rule(cls);
struct flow_action_entry *act;
bool act_redir = false;
bool act_pedit = false;
@@ -633,7 +633,7 @@ static int cxgb4_validate_flow_actions(struct net_device *dev,
}
int cxgb4_tc_flower_replace(struct net_device *dev,
- struct tc_cls_flower_offload *cls)
+ struct flow_cls_offload *cls)
{
struct adapter *adap = netdev2adap(dev);
struct ch_tc_flower_entry *ch_flower;
@@ -709,7 +709,7 @@ free_entry:
}
int cxgb4_tc_flower_destroy(struct net_device *dev,
- struct tc_cls_flower_offload *cls)
+ struct flow_cls_offload *cls)
{
struct adapter *adap = netdev2adap(dev);
struct ch_tc_flower_entry *ch_flower;
@@ -783,7 +783,7 @@ static void ch_flower_stats_cb(struct timer_list *t)
}
int cxgb4_tc_flower_stats(struct net_device *dev,
- struct tc_cls_flower_offload *cls)
+ struct flow_cls_offload *cls)
{
struct adapter *adap = netdev2adap(dev);
struct ch_tc_flower_stats *ofld_stats;
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.h b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.h
index 050c8a50ae41..eb4c95248baf 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.h
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.h
@@ -109,11 +109,11 @@ struct ch_tc_pedit_fields {
#define PEDIT_UDP_SPORT_DPORT 0x0
int cxgb4_tc_flower_replace(struct net_device *dev,
- struct tc_cls_flower_offload *cls);
+ struct flow_cls_offload *cls);
int cxgb4_tc_flower_destroy(struct net_device *dev,
- struct tc_cls_flower_offload *cls);
+ struct flow_cls_offload *cls);
int cxgb4_tc_flower_stats(struct net_device *dev,
- struct tc_cls_flower_offload *cls);
+ struct flow_cls_offload *cls);
int cxgb4_init_tc_flower(struct adapter *adap);
void cxgb4_cleanup_tc_flower(struct adapter *adap);
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c
index 6c685b920713..5b602243d573 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c
@@ -352,25 +352,32 @@ static int
request_msix_queue_irqs_uld(struct adapter *adap, unsigned int uld_type)
{
struct sge_uld_rxq_info *rxq_info = adap->sge.uld_rxq_info[uld_type];
+ struct uld_msix_info *minfo;
int err = 0;
unsigned int idx, bmap_idx;
for_each_uldrxq(rxq_info, idx) {
bmap_idx = rxq_info->msix_tbl[idx];
- err = request_irq(adap->msix_info_ulds[bmap_idx].vec,
+ minfo = &adap->msix_info_ulds[bmap_idx];
+ err = request_irq(minfo->vec,
t4_sge_intr_msix, 0,
- adap->msix_info_ulds[bmap_idx].desc,
+ minfo->desc,
&rxq_info->uldrxq[idx].rspq);
if (err)
goto unwind;
+
+ cxgb4_set_msix_aff(adap, minfo->vec,
+ &minfo->aff_mask, idx);
}
return 0;
+
unwind:
while (idx-- > 0) {
bmap_idx = rxq_info->msix_tbl[idx];
+ minfo = &adap->msix_info_ulds[bmap_idx];
+ cxgb4_clear_msix_aff(minfo->vec, minfo->aff_mask);
free_msix_idx_in_bmap(adap, bmap_idx);
- free_irq(adap->msix_info_ulds[bmap_idx].vec,
- &rxq_info->uldrxq[idx].rspq);
+ free_irq(minfo->vec, &rxq_info->uldrxq[idx].rspq);
}
return err;
}
@@ -379,14 +386,16 @@ static void
free_msix_queue_irqs_uld(struct adapter *adap, unsigned int uld_type)
{
struct sge_uld_rxq_info *rxq_info = adap->sge.uld_rxq_info[uld_type];
+ struct uld_msix_info *minfo;
unsigned int idx, bmap_idx;
for_each_uldrxq(rxq_info, idx) {
bmap_idx = rxq_info->msix_tbl[idx];
+ minfo = &adap->msix_info_ulds[bmap_idx];
+ cxgb4_clear_msix_aff(minfo->vec, minfo->aff_mask);
free_msix_idx_in_bmap(adap, bmap_idx);
- free_irq(adap->msix_info_ulds[bmap_idx].vec,
- &rxq_info->uldrxq[idx].rspq);
+ free_irq(minfo->vec, &rxq_info->uldrxq[idx].rspq);
}
}
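Both request_msix_queue_irqs() variants reworked above keep the same shape: acquire one IRQ per queue, attach an affinity hint, and on failure walk back over the vectors already taken in reverse order. The sketch below is a generic model of that acquire/unwind loop in userspace C with stand-in acquire/release functions; it is not cxgb4 code, only the pattern.

#include <stdio.h>

#define NQUEUES 4

/* Stand-ins for request_irq()/free_irq() plus the affinity helpers. */
static int acquire(int i)
{
	return i == 2 ? -1 : 0;	/* pretend the third request fails */
}

static void release(int i)
{
	printf("released %d\n", i);
}

static int request_all(void)
{
	int i, err = 0;

	for (i = 0; i < NQUEUES; i++) {
		err = acquire(i);
		if (err)
			goto unwind;
		printf("acquired %d\n", i);
	}
	return 0;

unwind:
	/* free everything taken so far, newest first, like the driver does */
	while (--i >= 0)
		release(i);
	return err;
}

int main(void)
{
	return request_all() ? 1 : 0;
}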
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h
index 21da34a4ca24..cee582e36134 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h
@@ -292,6 +292,7 @@ struct cxgb4_virt_res { /* virtualized HW resources */
struct cxgb4_range ocq;
struct cxgb4_range key;
unsigned int ncrypto_fc;
+ struct cxgb4_range ppod_edram;
};
struct chcr_stats_debug {
@@ -393,6 +394,7 @@ int cxgb4_immdata_send(struct net_device *dev, unsigned int idx,
int cxgb4_crypto_send(struct net_device *dev, struct sk_buff *skb);
unsigned int cxgb4_dbfifo_count(const struct net_device *dev, int lpfifo);
unsigned int cxgb4_port_chan(const struct net_device *dev);
+unsigned int cxgb4_port_e2cchan(const struct net_device *dev);
unsigned int cxgb4_port_viid(const struct net_device *dev);
unsigned int cxgb4_tp_smt_idx(enum chip_type chip, unsigned int viid);
unsigned int cxgb4_port_idx(const struct net_device *dev);
diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
index 93feb258067b..9dd5ed9a2965 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
@@ -6209,6 +6209,37 @@ unsigned int t4_get_mps_bg_map(struct adapter *adapter, int pidx)
}
/**
+ * t4_get_tp_e2c_map - return the E2C channel map associated with a port
+ * @adapter: the adapter
+ * @pidx: the port index
+ */
+static unsigned int t4_get_tp_e2c_map(struct adapter *adapter, int pidx)
+{
+ unsigned int nports;
+ u32 param, val = 0;
+ int ret;
+
+ nports = 1 << NUMPORTS_G(t4_read_reg(adapter, MPS_CMN_CTL_A));
+ if (pidx >= nports) {
+ CH_WARN(adapter, "TP E2C Channel Port Index %d >= Nports %d\n",
+ pidx, nports);
+ return 0;
+ }
+
+ /* FW version >= 1.16.44.0 can determine E2C channel map using
+ * FW_PARAMS_PARAM_DEV_TPCHMAP API.
+ */
+ param = (FW_PARAMS_MNEM_V(FW_PARAMS_MNEM_DEV) |
+ FW_PARAMS_PARAM_X_V(FW_PARAMS_PARAM_DEV_TPCHMAP));
+ ret = t4_query_params_ns(adapter, adapter->mbox, adapter->pf,
+ 0, 1, &param, &val);
+ if (!ret)
+ return (val >> (8 * pidx)) & 0xff;
+
+ return 0;
+}
+
+/**
* t4_get_tp_ch_map - return TP ingress channels associated with a port
* @adapter: the adapter
* @pidx: the port index
@@ -9368,8 +9399,9 @@ int t4_init_sge_params(struct adapter *adapter)
*/
int t4_init_tp_params(struct adapter *adap, bool sleep_ok)
{
- int chan;
- u32 v;
+ u32 param, val, v;
+ int chan, ret;
+
v = t4_read_reg(adap, TP_TIMER_RESOLUTION_A);
adap->params.tp.tre = TIMERRESOLUTION_G(v);
@@ -9379,11 +9411,47 @@ int t4_init_tp_params(struct adapter *adap, bool sleep_ok)
for (chan = 0; chan < NCHAN; chan++)
adap->params.tp.tx_modq[chan] = chan;
- /* Cache the adapter's Compressed Filter Mode and global Incress
+ /* Cache the adapter's Compressed Filter Mode/Mask and global Ingress
* Configuration.
*/
- t4_tp_pio_read(adap, &adap->params.tp.vlan_pri_map, 1,
- TP_VLAN_PRI_MAP_A, sleep_ok);
+ param = (FW_PARAMS_MNEM_V(FW_PARAMS_MNEM_DEV) |
+ FW_PARAMS_PARAM_X_V(FW_PARAMS_PARAM_DEV_FILTER) |
+ FW_PARAMS_PARAM_Y_V(FW_PARAM_DEV_FILTER_MODE_MASK));
+
+ /* Read current value */
+ ret = t4_query_params(adap, adap->mbox, adap->pf, 0, 1,
+ &param, &val);
+ if (ret == 0) {
+ dev_info(adap->pdev_dev,
+ "Current filter mode/mask 0x%x:0x%x\n",
+ FW_PARAMS_PARAM_FILTER_MODE_G(val),
+ FW_PARAMS_PARAM_FILTER_MASK_G(val));
+ adap->params.tp.vlan_pri_map =
+ FW_PARAMS_PARAM_FILTER_MODE_G(val);
+ adap->params.tp.filter_mask =
+ FW_PARAMS_PARAM_FILTER_MASK_G(val);
+ } else {
+ dev_info(adap->pdev_dev,
+ "Failed to read filter mode/mask via fw api, using indirect-reg-read\n");
+
+ /* In case of an older fw (which doesn't expose the
+ * FW_PARAM_DEV_FILTER_MODE_MASK api) combined with a newer driver
+ * (which uses the fw api), fall back to the older method of reading
+ * the filter mode from the indirect register.
+ */
+ t4_tp_pio_read(adap, &adap->params.tp.vlan_pri_map, 1,
+ TP_VLAN_PRI_MAP_A, sleep_ok);
+
+ /* With the older-fw and newer-driver combination we might run
+ * into an issue when the user wants to use the hash filter region
+ * but the filter_mask is zero; in that case filter_mask validation
+ * is hard. To avoid that, set the filter_mask equal to the filter
+ * mode, which behaves exactly like the older way of ignoring the
+ * filter mask validation.
+ */
+ adap->params.tp.filter_mask = adap->params.tp.vlan_pri_map;
+ }
+
t4_tp_pio_read(adap, &adap->params.tp.ingress_config, 1,
TP_INGRESS_CONFIG_A, sleep_ok);
@@ -9594,6 +9662,7 @@ int t4_init_portinfo(struct port_info *pi, int mbox,
pi->tx_chan = port;
pi->lport = port;
pi->rss_size = rss_size;
+ pi->rx_cchan = t4_get_tp_e2c_map(pi->adapter, port);
/* If fw supports returning the VIN as part of FW_VI_CMD,
* save the returned values.
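t4_init_tp_params() now prefers the FW_PARAM_DEV_FILTER_MODE_MASK firmware parameter, which packs the filter mode into bits [31:16] of the returned word and the filter mask into bits [15:0], falling back to the indirect TP_VLAN_PRI_MAP read (with filter_mask copied from the mode) when the firmware is too old. t4_get_tp_e2c_map() similarly pulls one byte per port out of the TPCHMAP word. The snippet below is a standalone restatement of those shifts; the sample values are hypothetical.

#include <stdint.h>
#include <stdio.h>

/* FW_PARAMS_PARAM_FILTER_MODE/MASK layout: mode in bits [31:16], mask in [15:0]. */
static uint16_t filter_mode(uint32_t val)  { return (val >> 16) & 0xffff; }
static uint16_t filter_mask(uint32_t val)  { return val & 0xffff; }

/* TPCHMAP: one e2c channel byte per port index. */
static uint8_t e2c_chan(uint32_t tpchmap, int pidx)
{
	return (tpchmap >> (8 * pidx)) & 0xff;
}

int main(void)
{
	uint32_t fw_val = 0x01230045;	/* hypothetical firmware reply */
	uint32_t tpchmap = 0x03020100;	/* hypothetical per-port channel map */

	printf("mode 0x%04x mask 0x%04x\n",
	       (unsigned)filter_mode(fw_val), (unsigned)filter_mask(fw_val));
	printf("port 2 -> c-channel %u\n", (unsigned)e2c_chan(tpchmap, 2));
	return 0;
}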
diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_regs.h b/drivers/net/ethernet/chelsio/cxgb4/t4_regs.h
index eb222d40ddbf..a957a6e4d4c4 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/t4_regs.h
+++ b/drivers/net/ethernet/chelsio/cxgb4/t4_regs.h
@@ -1334,6 +1334,10 @@
#define TP_OUT_CONFIG_A 0x7d04
#define TP_GLOBAL_CONFIG_A 0x7d08
+#define ACTIVEFILTERCOUNTS_S 22
+#define ACTIVEFILTERCOUNTS_V(x) ((x) << ACTIVEFILTERCOUNTS_S)
+#define ACTIVEFILTERCOUNTS_F ACTIVEFILTERCOUNTS_V(1U)
+
#define TP_CMM_TCB_BASE_A 0x7d10
#define TP_CMM_MM_BASE_A 0x7d14
#define TP_CMM_TIMER_BASE_A 0x7d18
diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4fw_api.h b/drivers/net/ethernet/chelsio/cxgb4/t4fw_api.h
index b2a618e72fcf..65313f6b5704 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/t4fw_api.h
+++ b/drivers/net/ethernet/chelsio/cxgb4/t4fw_api.h
@@ -1221,6 +1221,23 @@ enum fw_params_mnem {
/*
* device parameters
*/
+
+#define FW_PARAMS_PARAM_FILTER_MODE_S 16
+#define FW_PARAMS_PARAM_FILTER_MODE_M 0xffff
+#define FW_PARAMS_PARAM_FILTER_MODE_V(x) \
+ ((x) << FW_PARAMS_PARAM_FILTER_MODE_S)
+#define FW_PARAMS_PARAM_FILTER_MODE_G(x) \
+ (((x) >> FW_PARAMS_PARAM_FILTER_MODE_S) & \
+ FW_PARAMS_PARAM_FILTER_MODE_M)
+
+#define FW_PARAMS_PARAM_FILTER_MASK_S 0
+#define FW_PARAMS_PARAM_FILTER_MASK_M 0xffff
+#define FW_PARAMS_PARAM_FILTER_MASK_V(x) \
+ ((x) << FW_PARAMS_PARAM_FILTER_MASK_S)
+#define FW_PARAMS_PARAM_FILTER_MASK_G(x) \
+ (((x) >> FW_PARAMS_PARAM_FILTER_MASK_S) & \
+ FW_PARAMS_PARAM_FILTER_MASK_M)
+
enum fw_params_param_dev {
FW_PARAMS_PARAM_DEV_CCLK = 0x00, /* chip core clock in khz */
FW_PARAMS_PARAM_DEV_PORTVEC = 0x01, /* the port vector */
@@ -1250,12 +1267,16 @@ enum fw_params_param_dev {
FW_PARAMS_PARAM_DEV_RI_FR_NSMR_TPTE_WR = 0x1C,
FW_PARAMS_PARAM_DEV_FILTER2_WR = 0x1D,
FW_PARAMS_PARAM_DEV_MPSBGMAP = 0x1E,
+ FW_PARAMS_PARAM_DEV_TPCHMAP = 0x1F,
FW_PARAMS_PARAM_DEV_HMA_SIZE = 0x20,
FW_PARAMS_PARAM_DEV_RDMA_WRITE_WITH_IMM = 0x21,
+ FW_PARAMS_PARAM_DEV_PPOD_EDRAM = 0x23,
FW_PARAMS_PARAM_DEV_RI_WRITE_CMPL_WR = 0x24,
FW_PARAMS_PARAM_DEV_OPAQUE_VIID_SMT_EXTN = 0x27,
+ FW_PARAMS_PARAM_DEV_HASHFILTER_WITH_OFLD = 0x28,
FW_PARAMS_PARAM_DEV_DBQ_TIMER = 0x29,
FW_PARAMS_PARAM_DEV_DBQ_TIMERTICK = 0x2A,
+ FW_PARAMS_PARAM_DEV_FILTER = 0x2E,
};
/*
@@ -1312,6 +1333,8 @@ enum fw_params_param_pfvf {
FW_PARAMS_PARAM_PFVF_RAWF_END = 0x37,
FW_PARAMS_PARAM_PFVF_NCRYPTO_LOOKASIDE = 0x39,
FW_PARAMS_PARAM_PFVF_PORT_CAPS32 = 0x3A,
+ FW_PARAMS_PARAM_PFVF_PPOD_EDRAM_START = 0x3B,
+ FW_PARAMS_PARAM_PFVF_PPOD_EDRAM_END = 0x3C,
FW_PARAMS_PARAM_PFVF_LINK_STATE = 0x40,
};
@@ -1347,6 +1370,11 @@ enum fw_params_param_dev_diag {
FW_PARAM_DEV_DIAG_MAXTMPTHRESH = 0x02,
};
+enum fw_params_param_dev_filter {
+ FW_PARAM_DEV_FILTER_VNIC_MODE = 0x00,
+ FW_PARAM_DEV_FILTER_MODE_MASK = 0x01,
+};
+
enum fw_params_param_dev_fwcache {
FW_PARAM_DEV_FWCACHE_FLUSH = 0x00,
FW_PARAM_DEV_FWCACHE_FLUSHINV = 0x01,
diff --git a/drivers/net/ethernet/chelsio/libcxgb/libcxgb_ppm.c b/drivers/net/ethernet/chelsio/libcxgb/libcxgb_ppm.c
index e2919005ead3..21034536c9c5 100644
--- a/drivers/net/ethernet/chelsio/libcxgb/libcxgb_ppm.c
+++ b/drivers/net/ethernet/chelsio/libcxgb/libcxgb_ppm.c
@@ -123,6 +123,9 @@ static int ppm_get_cpu_entries(struct cxgbi_ppm *ppm, unsigned int count,
unsigned int cpu;
int i;
+ if (!ppm->pool)
+ return -EINVAL;
+
cpu = get_cpu();
pool = per_cpu_ptr(ppm->pool, cpu);
spin_lock_bh(&pool->lock);
@@ -169,7 +172,9 @@ static int ppm_get_entries(struct cxgbi_ppm *ppm, unsigned int count,
}
ppm->next = i + count;
- if (ppm->next >= ppm->bmap_index_max)
+ if (ppm->max_index_in_edram && (ppm->next >= ppm->max_index_in_edram))
+ ppm->next = 0;
+ else if (ppm->next >= ppm->bmap_index_max)
ppm->next = 0;
spin_unlock_bh(&ppm->map_lock);
@@ -382,18 +387,36 @@ static struct cxgbi_ppm_pool *ppm_alloc_cpu_pool(unsigned int *total,
int cxgbi_ppm_init(void **ppm_pp, struct net_device *ndev,
struct pci_dev *pdev, void *lldev,
- struct cxgbi_tag_format *tformat,
- unsigned int ppmax,
- unsigned int llimit,
- unsigned int start,
- unsigned int reserve_factor)
+ struct cxgbi_tag_format *tformat, unsigned int iscsi_size,
+ unsigned int llimit, unsigned int start,
+ unsigned int reserve_factor, unsigned int iscsi_edram_start,
+ unsigned int iscsi_edram_size)
{
struct cxgbi_ppm *ppm = (struct cxgbi_ppm *)(*ppm_pp);
struct cxgbi_ppm_pool *pool = NULL;
- unsigned int ppmax_pool = 0;
unsigned int pool_index_max = 0;
- unsigned int alloc_sz;
+ unsigned int ppmax_pool = 0;
unsigned int ppod_bmap_size;
+ unsigned int alloc_sz;
+ unsigned int ppmax;
+
+ if (!iscsi_edram_start)
+ iscsi_edram_size = 0;
+
+ if (iscsi_edram_size &&
+ ((iscsi_edram_start + iscsi_edram_size) != start)) {
+ pr_err("iscsi ppod region not contiguous: EDRAM start 0x%x "
+ "size 0x%x DDR start 0x%x\n",
+ iscsi_edram_start, iscsi_edram_size, start);
+ return -EINVAL;
+ }
+
+ if (iscsi_edram_size) {
+ reserve_factor = 0;
+ start = iscsi_edram_start;
+ }
+
+ ppmax = (iscsi_edram_size + iscsi_size) >> PPOD_SIZE_SHIFT;
if (ppm) {
pr_info("ippm: %s, ppm 0x%p,0x%p already initialized, %u/%u.\n",
@@ -434,6 +457,14 @@ int cxgbi_ppm_init(void **ppm_pp, struct net_device *ndev,
__func__, ppmax, ppmax_pool, ppod_bmap_size, start,
end);
}
+ if (iscsi_edram_size) {
+ unsigned int first_ddr_idx =
+ iscsi_edram_size >> PPOD_SIZE_SHIFT;
+
+ ppm->max_index_in_edram = first_ddr_idx - 1;
+ bitmap_set(ppm->ppod_bmap, first_ddr_idx, 1);
+ pr_debug("reserved %u ppod in bitmap\n", first_ddr_idx);
+ }
spin_lock_init(&ppm->map_lock);
kref_init(&ppm->refcnt);
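cxgbi_ppm_init() now accepts a separate EDRAM ppod region: it must sit immediately below the DDR region (edram_start + edram_size == start), the total ppod count becomes (edram_size + iscsi_size) >> PPOD_SIZE_SHIFT, and ppm_get_entries() wraps the allocation cursor back to 0 once it passes the last EDRAM index. Below is a small model of the contiguity check and the wrap; PPOD_SIZE_SHIFT is assumed to be 6 (64-byte ppods) for illustration only, and the region sizes are made up.

#include <stdio.h>

#define PPOD_SIZE_SHIFT 6	/* assumed 64-byte ppods, for illustration */

static int check_regions(unsigned int edram_start, unsigned int edram_size,
			 unsigned int ddr_start, unsigned int ddr_size,
			 unsigned int *ppmax, unsigned int *max_edram_idx)
{
	if (edram_size && edram_start + edram_size != ddr_start)
		return -1;	/* regions must be back to back */

	*ppmax = (edram_size + ddr_size) >> PPOD_SIZE_SHIFT;
	/* last index that still lives in EDRAM (0 when there is no EDRAM) */
	*max_edram_idx = edram_size ? (edram_size >> PPOD_SIZE_SHIFT) - 1 : 0;
	return 0;
}

/* Allocation cursor wrap, as in ppm_get_entries(): once the cursor passes
 * the EDRAM region it restarts from 0 rather than running on to the end.
 */
static unsigned int advance(unsigned int next, unsigned int count,
			    unsigned int max_edram_idx, unsigned int bmap_max)
{
	next += count;
	if (max_edram_idx && next >= max_edram_idx)
		return 0;
	if (next >= bmap_max)
		return 0;
	return next;
}

int main(void)
{
	unsigned int ppmax, max_edram;

	if (check_regions(0x1000, 0x4000, 0x5000, 0x10000, &ppmax, &max_edram))
		return 1;
	printf("ppmax %u, last edram idx %u\n", ppmax, max_edram);
	printf("cursor after wrap: %u\n", advance(max_edram, 4, max_edram, ppmax));
	return 0;
}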
diff --git a/drivers/net/ethernet/chelsio/libcxgb/libcxgb_ppm.h b/drivers/net/ethernet/chelsio/libcxgb/libcxgb_ppm.h
index a91ad766cef0..7b02c200dd1e 100644
--- a/drivers/net/ethernet/chelsio/libcxgb/libcxgb_ppm.h
+++ b/drivers/net/ethernet/chelsio/libcxgb/libcxgb_ppm.h
@@ -143,6 +143,7 @@ struct cxgbi_ppm {
spinlock_t map_lock; /* ppm map lock */
unsigned int bmap_index_max;
unsigned int next;
+ unsigned int max_index_in_edram;
unsigned long *ppod_bmap;
struct cxgbi_ppod_data ppod_data[0];
};
@@ -324,9 +325,9 @@ int cxgbi_ppm_ppods_reserve(struct cxgbi_ppm *, unsigned short nr_pages,
unsigned long caller_data);
int cxgbi_ppm_init(void **ppm_pp, struct net_device *, struct pci_dev *,
void *lldev, struct cxgbi_tag_format *,
- unsigned int ppmax, unsigned int llimit,
- unsigned int start,
- unsigned int reserve_factor);
+ unsigned int iscsi_size, unsigned int llimit,
+ unsigned int start, unsigned int reserve_factor,
+ unsigned int edram_start, unsigned int edram_size);
int cxgbi_ppm_release(struct cxgbi_ppm *ppm);
void cxgbi_tagmask_check(unsigned int tagmask, struct cxgbi_tag_format *);
unsigned int cxgbi_tagmask_set(unsigned int ppmax);
diff --git a/drivers/net/ethernet/freescale/dpaa2/Kconfig b/drivers/net/ethernet/freescale/dpaa2/Kconfig
index 8bd384720f80..fbef2829f3de 100644
--- a/drivers/net/ethernet/freescale/dpaa2/Kconfig
+++ b/drivers/net/ethernet/freescale/dpaa2/Kconfig
@@ -10,8 +10,7 @@ config FSL_DPAA2_ETH
config FSL_DPAA2_PTP_CLOCK
tristate "Freescale DPAA2 PTP Clock"
- depends on FSL_DPAA2_ETH
- imply PTP_1588_CLOCK
+ depends on FSL_DPAA2_ETH && PTP_1588_CLOCK_QORIQ
default y
help
This driver adds support for using the DPAA2 1588 timer module
diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
index 7d2390e3df77..0acb11557ed1 100644
--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
@@ -555,7 +555,7 @@ static int build_sg_fd(struct dpaa2_eth_priv *priv,
/* Prepare the HW SGT structure */
sgt_buf_size = priv->tx_data_offset +
sizeof(struct dpaa2_sg_entry) * num_dma_bufs;
- sgt_buf = netdev_alloc_frag(sgt_buf_size + DPAA2_ETH_TX_BUF_ALIGN);
+ sgt_buf = napi_alloc_frag(sgt_buf_size + DPAA2_ETH_TX_BUF_ALIGN);
if (unlikely(!sgt_buf)) {
err = -ENOMEM;
goto sgt_buf_alloc_failed;
@@ -757,6 +757,7 @@ static netdev_tx_t dpaa2_eth_tx(struct sk_buff *skb, struct net_device *net_dev)
u16 queue_mapping;
unsigned int needed_headroom;
u32 fd_len;
+ u8 prio = 0;
int err, i;
percpu_stats = this_cpu_ptr(priv->percpu_stats);
@@ -814,6 +815,18 @@ static netdev_tx_t dpaa2_eth_tx(struct sk_buff *skb, struct net_device *net_dev)
* a queue affined to the same core that processed the Rx frame
*/
queue_mapping = skb_get_queue_mapping(skb);
+
+ if (net_dev->num_tc) {
+ prio = netdev_txq_to_tc(net_dev, queue_mapping);
+ /* Hardware interprets priority level 0 as being the highest,
+ * so we need to do a reverse mapping to the netdev tc index
+ */
+ prio = net_dev->num_tc - prio - 1;
+ /* We have only one FQ array entry for all Tx hardware queues
+ * with the same flow id (but different priority levels)
+ */
+ queue_mapping %= dpaa2_eth_queue_count(priv);
+ }
fq = &priv->fq[queue_mapping];
fd_len = dpaa2_fd_get_len(&fd);
@@ -824,7 +837,7 @@ static netdev_tx_t dpaa2_eth_tx(struct sk_buff *skb, struct net_device *net_dev)
* the Tx confirmation callback for this frame
*/
for (i = 0; i < DPAA2_ETH_ENQUEUE_RETRIES; i++) {
- err = priv->enqueue(priv, fq, &fd, 0);
+ err = priv->enqueue(priv, fq, &fd, prio);
if (err != -EBUSY)
break;
}
@@ -997,13 +1010,6 @@ static int seed_pool(struct dpaa2_eth_priv *priv, u16 bpid)
int i, j;
int new_count;
- /* This is the lazy seeding of Rx buffer pools.
- * dpaa2_add_bufs() is also used on the Rx hotpath and calls
- * napi_alloc_frag(). The trouble with that is that it in turn ends up
- * calling this_cpu_ptr(), which mandates execution in atomic context.
- * Rather than splitting up the code, do a one-off preempt disable.
- */
- preempt_disable();
for (j = 0; j < priv->num_channels; j++) {
for (i = 0; i < DPAA2_ETH_NUM_BUFS;
i += DPAA2_ETH_BUFS_PER_CMD) {
@@ -1011,12 +1017,10 @@ static int seed_pool(struct dpaa2_eth_priv *priv, u16 bpid)
priv->channel[j]->buf_count += new_count;
if (new_count < DPAA2_ETH_BUFS_PER_CMD) {
- preempt_enable();
return -ENOMEM;
}
}
}
- preempt_enable();
return 0;
}
@@ -1872,6 +1876,78 @@ static int dpaa2_eth_xdp_xmit(struct net_device *net_dev, int n,
return n - drops;
}
+static int update_xps(struct dpaa2_eth_priv *priv)
+{
+ struct net_device *net_dev = priv->net_dev;
+ struct cpumask xps_mask;
+ struct dpaa2_eth_fq *fq;
+ int i, num_queues, netdev_queues;
+ int err = 0;
+
+ num_queues = dpaa2_eth_queue_count(priv);
+ netdev_queues = (net_dev->num_tc ? : 1) * num_queues;
+
+ /* The first <num_queues> entries in priv->fq array are Tx/Tx conf
+ * queues, so only process those
+ */
+ for (i = 0; i < netdev_queues; i++) {
+ fq = &priv->fq[i % num_queues];
+
+ cpumask_clear(&xps_mask);
+ cpumask_set_cpu(fq->target_cpu, &xps_mask);
+
+ err = netif_set_xps_queue(net_dev, &xps_mask, i);
+ if (err) {
+ netdev_warn_once(net_dev, "Error setting XPS queue\n");
+ break;
+ }
+ }
+
+ return err;
+}
+
+static int dpaa2_eth_setup_tc(struct net_device *net_dev,
+ enum tc_setup_type type, void *type_data)
+{
+ struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
+ struct tc_mqprio_qopt *mqprio = type_data;
+ u8 num_tc, num_queues;
+ int i;
+
+ if (type != TC_SETUP_QDISC_MQPRIO)
+ return -EINVAL;
+
+ mqprio->hw = TC_MQPRIO_HW_OFFLOAD_TCS;
+ num_queues = dpaa2_eth_queue_count(priv);
+ num_tc = mqprio->num_tc;
+
+ if (num_tc == net_dev->num_tc)
+ return 0;
+
+ if (num_tc > dpaa2_eth_tc_count(priv)) {
+ netdev_err(net_dev, "Max %d traffic classes supported\n",
+ dpaa2_eth_tc_count(priv));
+ return -EINVAL;
+ }
+
+ if (!num_tc) {
+ netdev_reset_tc(net_dev);
+ netif_set_real_num_tx_queues(net_dev, num_queues);
+ goto out;
+ }
+
+ netdev_set_num_tc(net_dev, num_tc);
+ netif_set_real_num_tx_queues(net_dev, num_tc * num_queues);
+
+ for (i = 0; i < num_tc; i++)
+ netdev_set_tc_queue(net_dev, i, num_queues, i * num_queues);
+
+out:
+ update_xps(priv);
+
+ return 0;
+}
+
static const struct net_device_ops dpaa2_eth_ops = {
.ndo_open = dpaa2_eth_open,
.ndo_start_xmit = dpaa2_eth_tx,
@@ -1884,6 +1960,7 @@ static const struct net_device_ops dpaa2_eth_ops = {
.ndo_change_mtu = dpaa2_eth_change_mtu,
.ndo_bpf = dpaa2_eth_xdp,
.ndo_xdp_xmit = dpaa2_eth_xdp_xmit,
+ .ndo_setup_tc = dpaa2_eth_setup_tc,
};
static void cdan_cb(struct dpaa2_io_notification_ctx *ctx)
@@ -2138,10 +2215,9 @@ static struct dpaa2_eth_channel *get_affine_channel(struct dpaa2_eth_priv *priv,
static void set_fq_affinity(struct dpaa2_eth_priv *priv)
{
struct device *dev = priv->net_dev->dev.parent;
- struct cpumask xps_mask;
struct dpaa2_eth_fq *fq;
int rx_cpu, txc_cpu;
- int i, err;
+ int i;
/* For each FQ, pick one channel/CPU to deliver frames to.
* This may well change at runtime, either through irqbalance or
@@ -2160,17 +2236,6 @@ static void set_fq_affinity(struct dpaa2_eth_priv *priv)
break;
case DPAA2_TX_CONF_FQ:
fq->target_cpu = txc_cpu;
-
- /* Tell the stack to affine to txc_cpu the Tx queue
- * associated with the confirmation one
- */
- cpumask_clear(&xps_mask);
- cpumask_set_cpu(txc_cpu, &xps_mask);
- err = netif_set_xps_queue(priv->net_dev, &xps_mask,
- fq->flowid);
- if (err)
- dev_err(dev, "Error setting XPS queue\n");
-
txc_cpu = cpumask_next(txc_cpu, &priv->dpio_cpumask);
if (txc_cpu >= nr_cpu_ids)
txc_cpu = cpumask_first(&priv->dpio_cpumask);
@@ -2180,6 +2245,8 @@ static void set_fq_affinity(struct dpaa2_eth_priv *priv)
}
fq->channel = get_affine_channel(priv, fq->target_cpu);
}
+
+ update_xps(priv);
}
static void setup_fqs(struct dpaa2_eth_priv *priv)
@@ -2361,11 +2428,10 @@ static inline int dpaa2_eth_enqueue_qd(struct dpaa2_eth_priv *priv,
static inline int dpaa2_eth_enqueue_fq(struct dpaa2_eth_priv *priv,
struct dpaa2_eth_fq *fq,
- struct dpaa2_fd *fd,
- u8 prio __always_unused)
+ struct dpaa2_fd *fd, u8 prio)
{
return dpaa2_io_service_enqueue_fq(fq->channel->dpio,
- fq->tx_fqid, fd);
+ fq->tx_fqid[prio], fd);
}
static void set_enqueue_mode(struct dpaa2_eth_priv *priv)
@@ -2479,14 +2545,9 @@ static int setup_rx_flow(struct dpaa2_eth_priv *priv,
queue.destination.type = DPNI_DEST_DPCON;
queue.destination.priority = 1;
queue.user_context = (u64)(uintptr_t)fq;
- queue.flc.stash_control = 1;
- queue.flc.value &= 0xFFFFFFFFFFFFFFC0;
- /* 01 01 00 - data, annotation, flow context */
- queue.flc.value |= 0x14;
err = dpni_set_queue(priv->mc_io, 0, priv->mc_token,
DPNI_QUEUE_RX, 0, fq->flowid,
- DPNI_QUEUE_OPT_USER_CTX | DPNI_QUEUE_OPT_DEST |
- DPNI_QUEUE_OPT_FLC,
+ DPNI_QUEUE_OPT_USER_CTX | DPNI_QUEUE_OPT_DEST,
&queue);
if (err) {
dev_err(dev, "dpni_set_queue(RX) failed\n");
@@ -2526,17 +2587,21 @@ static int setup_tx_flow(struct dpaa2_eth_priv *priv,
struct device *dev = priv->net_dev->dev.parent;
struct dpni_queue queue;
struct dpni_queue_id qid;
- int err;
+ int i, err;
- err = dpni_get_queue(priv->mc_io, 0, priv->mc_token,
- DPNI_QUEUE_TX, 0, fq->flowid, &queue, &qid);
- if (err) {
- dev_err(dev, "dpni_get_queue(TX) failed\n");
- return err;
+ for (i = 0; i < dpaa2_eth_tc_count(priv); i++) {
+ err = dpni_get_queue(priv->mc_io, 0, priv->mc_token,
+ DPNI_QUEUE_TX, i, fq->flowid,
+ &queue, &qid);
+ if (err) {
+ dev_err(dev, "dpni_get_queue(TX) failed\n");
+ return err;
+ }
+ fq->tx_fqid[i] = qid.fqid;
}
+ /* All Tx queues belonging to the same flowid have the same qdbin */
fq->tx_qdbin = qid.qdbin;
- fq->tx_fqid = qid.fqid;
err = dpni_get_queue(priv->mc_io, 0, priv->mc_token,
DPNI_QUEUE_TX_CONFIRM, 0, fq->flowid,
@@ -3236,7 +3301,7 @@ static int dpaa2_eth_probe(struct fsl_mc_device *dpni_dev)
dev = &dpni_dev->dev;
/* Net device */
- net_dev = alloc_etherdev_mq(sizeof(*priv), DPAA2_ETH_MAX_TX_QUEUES);
+ net_dev = alloc_etherdev_mq(sizeof(*priv), DPAA2_ETH_MAX_NETDEV_QUEUES);
if (!net_dev) {
dev_err(dev, "alloc_etherdev_mq() failed\n");
return -ENOMEM;
diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h
index e180d5a68c98..9af18c24221f 100644
--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h
+++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h
@@ -282,10 +282,13 @@ struct dpaa2_eth_ch_stats {
};
/* Maximum number of queues associated with a DPNI */
+#define DPAA2_ETH_MAX_TCS 8
#define DPAA2_ETH_MAX_RX_QUEUES 16
#define DPAA2_ETH_MAX_TX_QUEUES 16
#define DPAA2_ETH_MAX_QUEUES (DPAA2_ETH_MAX_RX_QUEUES + \
DPAA2_ETH_MAX_TX_QUEUES)
+#define DPAA2_ETH_MAX_NETDEV_QUEUES \
+ (DPAA2_ETH_MAX_TX_QUEUES * DPAA2_ETH_MAX_TCS)
#define DPAA2_ETH_MAX_DPCONS 16
@@ -299,8 +302,9 @@ struct dpaa2_eth_priv;
struct dpaa2_eth_fq {
u32 fqid;
u32 tx_qdbin;
- u32 tx_fqid;
+ u32 tx_fqid[DPAA2_ETH_MAX_TCS];
u16 flowid;
+ u8 tc;
int target_cpu;
u32 dq_frames;
u32 dq_bytes;
@@ -448,6 +452,9 @@ static inline int dpaa2_eth_cmp_dpni_ver(struct dpaa2_eth_priv *priv,
#define dpaa2_eth_fs_count(priv) \
((priv)->dpni_attrs.fs_entries)
+#define dpaa2_eth_tc_count(priv) \
+ ((priv)->dpni_attrs.num_tcs)
+
/* We have exactly one {Rx, Tx conf} queue per channel */
#define dpaa2_eth_queue_count(priv) \
((priv)->num_channels)
diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-ptp.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-ptp.c
index 9b150db3b510..a9503aea527f 100644
--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-ptp.c
+++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-ptp.c
@@ -5,114 +5,58 @@
*/
#include <linux/module.h>
-#include <linux/slab.h>
-#include <linux/ptp_clock_kernel.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/msi.h>
#include <linux/fsl/mc.h>
+#include <linux/fsl/ptp_qoriq.h>
#include "dpaa2-ptp.h"
-struct ptp_dpaa2_priv {
- struct fsl_mc_device *ptp_mc_dev;
- struct ptp_clock *clock;
- struct ptp_clock_info caps;
- u32 freq_comp;
-};
-
-/* PTP clock operations */
-static int ptp_dpaa2_adjfreq(struct ptp_clock_info *ptp, s32 ppb)
+static int dpaa2_ptp_enable(struct ptp_clock_info *ptp,
+ struct ptp_clock_request *rq, int on)
{
- struct ptp_dpaa2_priv *ptp_dpaa2 =
- container_of(ptp, struct ptp_dpaa2_priv, caps);
- struct fsl_mc_device *mc_dev = ptp_dpaa2->ptp_mc_dev;
- struct device *dev = &mc_dev->dev;
- u64 adj;
- u32 diff, tmr_add;
- int neg_adj = 0;
- int err = 0;
-
- if (ppb < 0) {
- neg_adj = 1;
- ppb = -ppb;
- }
-
- tmr_add = ptp_dpaa2->freq_comp;
- adj = tmr_add;
- adj *= ppb;
- diff = div_u64(adj, 1000000000ULL);
-
- tmr_add = neg_adj ? tmr_add - diff : tmr_add + diff;
+ struct ptp_qoriq *ptp_qoriq = container_of(ptp, struct ptp_qoriq, caps);
+ struct fsl_mc_device *mc_dev;
+ struct device *dev;
+ u32 mask = 0;
+ u32 bit;
+ int err;
- err = dprtc_set_freq_compensation(mc_dev->mc_io, 0,
- mc_dev->mc_handle, tmr_add);
- if (err)
- dev_err(dev, "dprtc_set_freq_compensation err %d\n", err);
- return err;
-}
+ dev = ptp_qoriq->dev;
+ mc_dev = to_fsl_mc_device(dev);
-static int ptp_dpaa2_adjtime(struct ptp_clock_info *ptp, s64 delta)
-{
- struct ptp_dpaa2_priv *ptp_dpaa2 =
- container_of(ptp, struct ptp_dpaa2_priv, caps);
- struct fsl_mc_device *mc_dev = ptp_dpaa2->ptp_mc_dev;
- struct device *dev = &mc_dev->dev;
- s64 now;
- int err = 0;
+ switch (rq->type) {
+ case PTP_CLK_REQ_PPS:
+ bit = DPRTC_EVENT_PPS;
+ break;
+ default:
+ return -EOPNOTSUPP;
+ }
- err = dprtc_get_time(mc_dev->mc_io, 0, mc_dev->mc_handle, &now);
- if (err) {
- dev_err(dev, "dprtc_get_time err %d\n", err);
+ err = dprtc_get_irq_mask(mc_dev->mc_io, 0, mc_dev->mc_handle,
+ DPRTC_IRQ_INDEX, &mask);
+ if (err < 0) {
+ dev_err(dev, "dprtc_get_irq_mask(): %d\n", err);
return err;
}
- now += delta;
+ if (on)
+ mask |= bit;
+ else
+ mask &= ~bit;
- err = dprtc_set_time(mc_dev->mc_io, 0, mc_dev->mc_handle, now);
- if (err)
- dev_err(dev, "dprtc_set_time err %d\n", err);
- return err;
-}
-
-static int ptp_dpaa2_gettime(struct ptp_clock_info *ptp, struct timespec64 *ts)
-{
- struct ptp_dpaa2_priv *ptp_dpaa2 =
- container_of(ptp, struct ptp_dpaa2_priv, caps);
- struct fsl_mc_device *mc_dev = ptp_dpaa2->ptp_mc_dev;
- struct device *dev = &mc_dev->dev;
- u64 ns;
- u32 remainder;
- int err = 0;
-
- err = dprtc_get_time(mc_dev->mc_io, 0, mc_dev->mc_handle, &ns);
- if (err) {
- dev_err(dev, "dprtc_get_time err %d\n", err);
+ err = dprtc_set_irq_mask(mc_dev->mc_io, 0, mc_dev->mc_handle,
+ DPRTC_IRQ_INDEX, mask);
+ if (err < 0) {
+ dev_err(dev, "dprtc_set_irq_mask(): %d\n", err);
return err;
}
- ts->tv_sec = div_u64_rem(ns, 1000000000, &remainder);
- ts->tv_nsec = remainder;
- return err;
-}
-
-static int ptp_dpaa2_settime(struct ptp_clock_info *ptp,
- const struct timespec64 *ts)
-{
- struct ptp_dpaa2_priv *ptp_dpaa2 =
- container_of(ptp, struct ptp_dpaa2_priv, caps);
- struct fsl_mc_device *mc_dev = ptp_dpaa2->ptp_mc_dev;
- struct device *dev = &mc_dev->dev;
- u64 ns;
- int err = 0;
-
- ns = ts->tv_sec * 1000000000ULL;
- ns += ts->tv_nsec;
-
- err = dprtc_set_time(mc_dev->mc_io, 0, mc_dev->mc_handle, ns);
- if (err)
- dev_err(dev, "dprtc_set_time err %d\n", err);
- return err;
+ return 0;
}
-static const struct ptp_clock_info ptp_dpaa2_caps = {
+static const struct ptp_clock_info dpaa2_ptp_caps = {
.owner = THIS_MODULE,
.name = "DPAA2 PTP Clock",
.max_adj = 512000,
@@ -121,21 +65,58 @@ static const struct ptp_clock_info ptp_dpaa2_caps = {
.n_per_out = 3,
.n_pins = 0,
.pps = 1,
- .adjfreq = ptp_dpaa2_adjfreq,
- .adjtime = ptp_dpaa2_adjtime,
- .gettime64 = ptp_dpaa2_gettime,
- .settime64 = ptp_dpaa2_settime,
+ .adjfine = ptp_qoriq_adjfine,
+ .adjtime = ptp_qoriq_adjtime,
+ .gettime64 = ptp_qoriq_gettime,
+ .settime64 = ptp_qoriq_settime,
+ .enable = dpaa2_ptp_enable,
};
+static irqreturn_t dpaa2_ptp_irq_handler_thread(int irq, void *priv)
+{
+ struct ptp_qoriq *ptp_qoriq = priv;
+ struct ptp_clock_event event;
+ struct fsl_mc_device *mc_dev;
+ struct device *dev;
+ u32 status = 0;
+ int err;
+
+ dev = ptp_qoriq->dev;
+ mc_dev = to_fsl_mc_device(dev);
+
+ err = dprtc_get_irq_status(mc_dev->mc_io, 0, mc_dev->mc_handle,
+ DPRTC_IRQ_INDEX, &status);
+ if (unlikely(err)) {
+ dev_err(dev, "dprtc_get_irq_status err %d\n", err);
+ return IRQ_NONE;
+ }
+
+ if (status & DPRTC_EVENT_PPS) {
+ event.type = PTP_CLOCK_PPS;
+ ptp_clock_event(ptp_qoriq->clock, &event);
+ }
+
+ err = dprtc_clear_irq_status(mc_dev->mc_io, 0, mc_dev->mc_handle,
+ DPRTC_IRQ_INDEX, status);
+ if (unlikely(err)) {
+ dev_err(dev, "dprtc_clear_irq_status err %d\n", err);
+ return IRQ_NONE;
+ }
+
+ return IRQ_HANDLED;
+}
+
static int dpaa2_ptp_probe(struct fsl_mc_device *mc_dev)
{
struct device *dev = &mc_dev->dev;
- struct ptp_dpaa2_priv *ptp_dpaa2;
- u32 tmr_add = 0;
+ struct fsl_mc_device_irq *irq;
+ struct ptp_qoriq *ptp_qoriq;
+ struct device_node *node;
+ void __iomem *base;
int err;
- ptp_dpaa2 = devm_kzalloc(dev, sizeof(*ptp_dpaa2), GFP_KERNEL);
- if (!ptp_dpaa2)
+ ptp_qoriq = devm_kzalloc(dev, sizeof(*ptp_qoriq), GFP_KERNEL);
+ if (!ptp_qoriq)
return -ENOMEM;
err = fsl_mc_portal_allocate(mc_dev, 0, &mc_dev->mc_io);
@@ -154,30 +135,60 @@ static int dpaa2_ptp_probe(struct fsl_mc_device *mc_dev)
goto err_free_mcp;
}
- ptp_dpaa2->ptp_mc_dev = mc_dev;
+ ptp_qoriq->dev = dev;
- err = dprtc_get_freq_compensation(mc_dev->mc_io, 0,
- mc_dev->mc_handle, &tmr_add);
- if (err) {
- dev_err(dev, "dprtc_get_freq_compensation err %d\n", err);
+ node = of_find_compatible_node(NULL, NULL, "fsl,dpaa2-ptp");
+ if (!node) {
+ err = -ENODEV;
goto err_close;
}
- ptp_dpaa2->freq_comp = tmr_add;
- ptp_dpaa2->caps = ptp_dpaa2_caps;
+ dev->of_node = node;
- ptp_dpaa2->clock = ptp_clock_register(&ptp_dpaa2->caps, dev);
- if (IS_ERR(ptp_dpaa2->clock)) {
- err = PTR_ERR(ptp_dpaa2->clock);
+ base = of_iomap(node, 0);
+ if (!base) {
+ err = -ENOMEM;
goto err_close;
}
- dpaa2_phc_index = ptp_clock_index(ptp_dpaa2->clock);
+ err = fsl_mc_allocate_irqs(mc_dev);
+ if (err) {
+ dev_err(dev, "MC irqs allocation failed\n");
+ goto err_unmap;
+ }
+
+ irq = mc_dev->irqs[0];
+ ptp_qoriq->irq = irq->msi_desc->irq;
- dev_set_drvdata(dev, ptp_dpaa2);
+ err = devm_request_threaded_irq(dev, ptp_qoriq->irq, NULL,
+ dpaa2_ptp_irq_handler_thread,
+ IRQF_NO_SUSPEND | IRQF_ONESHOT,
+ dev_name(dev), ptp_qoriq);
+ if (err < 0) {
+ dev_err(dev, "devm_request_threaded_irq(): %d\n", err);
+ goto err_free_mc_irq;
+ }
+
+ err = dprtc_set_irq_enable(mc_dev->mc_io, 0, mc_dev->mc_handle,
+ DPRTC_IRQ_INDEX, 1);
+ if (err < 0) {
+ dev_err(dev, "dprtc_set_irq_enable(): %d\n", err);
+ goto err_free_mc_irq;
+ }
+
+ err = ptp_qoriq_init(ptp_qoriq, base, &dpaa2_ptp_caps);
+ if (err)
+ goto err_free_mc_irq;
+
+ dpaa2_phc_index = ptp_qoriq->phc_index;
+ dev_set_drvdata(dev, ptp_qoriq);
return 0;
+err_free_mc_irq:
+ fsl_mc_free_irqs(mc_dev);
+err_unmap:
+ iounmap(base);
err_close:
dprtc_close(mc_dev->mc_io, 0, mc_dev->mc_handle);
err_free_mcp:
@@ -188,12 +199,15 @@ err_exit:
static int dpaa2_ptp_remove(struct fsl_mc_device *mc_dev)
{
- struct ptp_dpaa2_priv *ptp_dpaa2;
struct device *dev = &mc_dev->dev;
+ struct ptp_qoriq *ptp_qoriq;
+
+ ptp_qoriq = dev_get_drvdata(dev);
- ptp_dpaa2 = dev_get_drvdata(dev);
- ptp_clock_unregister(ptp_dpaa2->clock);
+ dpaa2_phc_index = -1;
+ ptp_qoriq_free(ptp_qoriq);
+ fsl_mc_free_irqs(mc_dev);
dprtc_close(mc_dev->mc_io, 0, mc_dev->mc_handle);
fsl_mc_portal_free(mc_dev->mc_io);
diff --git a/drivers/net/ethernet/freescale/dpaa2/dprtc-cmd.h b/drivers/net/ethernet/freescale/dpaa2/dprtc-cmd.h
index 9af4ac71f347..720cd50f5895 100644
--- a/drivers/net/ethernet/freescale/dpaa2/dprtc-cmd.h
+++ b/drivers/net/ethernet/freescale/dpaa2/dprtc-cmd.h
@@ -17,22 +17,54 @@
#define DPRTC_CMDID_CLOSE DPRTC_CMD(0x800)
#define DPRTC_CMDID_OPEN DPRTC_CMD(0x810)
-#define DPRTC_CMDID_SET_FREQ_COMPENSATION DPRTC_CMD(0x1d1)
-#define DPRTC_CMDID_GET_FREQ_COMPENSATION DPRTC_CMD(0x1d2)
-#define DPRTC_CMDID_GET_TIME DPRTC_CMD(0x1d3)
-#define DPRTC_CMDID_SET_TIME DPRTC_CMD(0x1d4)
+#define DPRTC_CMDID_SET_IRQ_ENABLE DPRTC_CMD(0x012)
+#define DPRTC_CMDID_GET_IRQ_ENABLE DPRTC_CMD(0x013)
+#define DPRTC_CMDID_SET_IRQ_MASK DPRTC_CMD(0x014)
+#define DPRTC_CMDID_GET_IRQ_MASK DPRTC_CMD(0x015)
+#define DPRTC_CMDID_GET_IRQ_STATUS DPRTC_CMD(0x016)
+#define DPRTC_CMDID_CLEAR_IRQ_STATUS DPRTC_CMD(0x017)
#pragma pack(push, 1)
struct dprtc_cmd_open {
__le32 dprtc_id;
};
-struct dprtc_get_freq_compensation {
- __le32 freq_compensation;
+struct dprtc_cmd_get_irq {
+ __le32 pad;
+ u8 irq_index;
};
-struct dprtc_time {
- __le64 time;
+struct dprtc_cmd_set_irq_enable {
+ u8 en;
+ u8 pad[3];
+ u8 irq_index;
+};
+
+struct dprtc_rsp_get_irq_enable {
+ u8 en;
+};
+
+struct dprtc_cmd_set_irq_mask {
+ __le32 mask;
+ u8 irq_index;
+};
+
+struct dprtc_rsp_get_irq_mask {
+ __le32 mask;
+};
+
+struct dprtc_cmd_get_irq_status {
+ __le32 status;
+ u8 irq_index;
+};
+
+struct dprtc_rsp_get_irq_status {
+ __le32 status;
+};
+
+struct dprtc_cmd_clear_irq_status {
+ __le32 status;
+ u8 irq_index;
};
#pragma pack(pop)
diff --git a/drivers/net/ethernet/freescale/dpaa2/dprtc.c b/drivers/net/ethernet/freescale/dpaa2/dprtc.c
index c13e09bc7b9d..ed52a34fa6a1 100644
--- a/drivers/net/ethernet/freescale/dpaa2/dprtc.c
+++ b/drivers/net/ethernet/freescale/dpaa2/dprtc.c
@@ -74,121 +74,220 @@ int dprtc_close(struct fsl_mc_io *mc_io,
}
/**
- * dprtc_set_freq_compensation() - Sets a new frequency compensation value.
+ * dprtc_set_irq_enable() - Set overall interrupt state.
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPRTC object
+ * @irq_index: The interrupt index to configure
+ * @en: Interrupt state - enable = 1, disable = 0
*
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPRTC object
- * @freq_compensation: The new frequency compensation value to set.
+ * Allows GPP software to control when interrupts are generated.
+ * Each interrupt can have up to 32 causes. The enable/disable controls the
+ * overall interrupt state: if the interrupt is disabled, none of its causes
+ * will raise an interrupt.
*
* Return: '0' on Success; Error code otherwise.
*/
-int dprtc_set_freq_compensation(struct fsl_mc_io *mc_io,
- u32 cmd_flags,
- u16 token,
- u32 freq_compensation)
+int dprtc_set_irq_enable(struct fsl_mc_io *mc_io,
+ u32 cmd_flags,
+ u16 token,
+ u8 irq_index,
+ u8 en)
{
- struct dprtc_get_freq_compensation *cmd_params;
+ struct dprtc_cmd_set_irq_enable *cmd_params;
struct fsl_mc_command cmd = { 0 };
- cmd.header = mc_encode_cmd_header(DPRTC_CMDID_SET_FREQ_COMPENSATION,
+ cmd.header = mc_encode_cmd_header(DPRTC_CMDID_SET_IRQ_ENABLE,
cmd_flags,
token);
- cmd_params = (struct dprtc_get_freq_compensation *)cmd.params;
- cmd_params->freq_compensation = cpu_to_le32(freq_compensation);
+ cmd_params = (struct dprtc_cmd_set_irq_enable *)cmd.params;
+ cmd_params->irq_index = irq_index;
+ cmd_params->en = en;
return mc_send_command(mc_io, &cmd);
}
/**
- * dprtc_get_freq_compensation() - Retrieves the frequency compensation value
+ * dprtc_get_irq_enable() - Get overall interrupt state
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPRTC object
+ * @irq_index: The interrupt index to configure
+ * @en: Returned interrupt state - enable = 1, disable = 0
+ *
+ * Return: '0' on Success; Error code otherwise.
+ */
+int dprtc_get_irq_enable(struct fsl_mc_io *mc_io,
+ u32 cmd_flags,
+ u16 token,
+ u8 irq_index,
+ u8 *en)
+{
+ struct dprtc_rsp_get_irq_enable *rsp_params;
+ struct dprtc_cmd_get_irq *cmd_params;
+ struct fsl_mc_command cmd = { 0 };
+ int err;
+
+ cmd.header = mc_encode_cmd_header(DPRTC_CMDID_GET_IRQ_ENABLE,
+ cmd_flags,
+ token);
+ cmd_params = (struct dprtc_cmd_get_irq *)cmd.params;
+ cmd_params->irq_index = irq_index;
+
+ err = mc_send_command(mc_io, &cmd);
+ if (err)
+ return err;
+
+ rsp_params = (struct dprtc_rsp_get_irq_enable *)cmd.params;
+ *en = rsp_params->en;
+
+ return 0;
+}
+
+/**
+ * dprtc_set_irq_mask() - Set interrupt mask.
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPRTC object
+ * @irq_index: The interrupt index to configure
+ * @mask: Event mask to trigger interrupt;
+ * each bit:
+ * 0 = ignore event
+ * 1 = consider event for asserting IRQ
+ *
+ * Every interrupt can have up to 32 causes and the interrupt model supports
+ * masking/unmasking each cause independently
+ *
+ * Return: '0' on Success; Error code otherwise.
+ */
+int dprtc_set_irq_mask(struct fsl_mc_io *mc_io,
+ u32 cmd_flags,
+ u16 token,
+ u8 irq_index,
+ u32 mask)
+{
+ struct dprtc_cmd_set_irq_mask *cmd_params;
+ struct fsl_mc_command cmd = { 0 };
+
+ cmd.header = mc_encode_cmd_header(DPRTC_CMDID_SET_IRQ_MASK,
+ cmd_flags,
+ token);
+ cmd_params = (struct dprtc_cmd_set_irq_mask *)cmd.params;
+ cmd_params->mask = cpu_to_le32(mask);
+ cmd_params->irq_index = irq_index;
+
+ return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dprtc_get_irq_mask() - Get interrupt mask.
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPRTC object
+ * @irq_index: The interrupt index to configure
+ * @mask: Returned event mask to trigger interrupt
*
- * @mc_io: Pointer to MC portal's I/O object
- * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
- * @token: Token of DPRTC object
- * @freq_compensation: Frequency compensation value
+ * Every interrupt can have up to 32 causes and the interrupt model supports
+ * masking/unmasking each cause independently
*
* Return: '0' on Success; Error code otherwise.
*/
-int dprtc_get_freq_compensation(struct fsl_mc_io *mc_io,
- u32 cmd_flags,
- u16 token,
- u32 *freq_compensation)
+int dprtc_get_irq_mask(struct fsl_mc_io *mc_io,
+ u32 cmd_flags,
+ u16 token,
+ u8 irq_index,
+ u32 *mask)
{
- struct dprtc_get_freq_compensation *rsp_params;
+ struct dprtc_rsp_get_irq_mask *rsp_params;
+ struct dprtc_cmd_get_irq *cmd_params;
struct fsl_mc_command cmd = { 0 };
int err;
- cmd.header = mc_encode_cmd_header(DPRTC_CMDID_GET_FREQ_COMPENSATION,
+ cmd.header = mc_encode_cmd_header(DPRTC_CMDID_GET_IRQ_MASK,
cmd_flags,
token);
+ cmd_params = (struct dprtc_cmd_get_irq *)cmd.params;
+ cmd_params->irq_index = irq_index;
err = mc_send_command(mc_io, &cmd);
if (err)
return err;
- rsp_params = (struct dprtc_get_freq_compensation *)cmd.params;
- *freq_compensation = le32_to_cpu(rsp_params->freq_compensation);
+ rsp_params = (struct dprtc_rsp_get_irq_mask *)cmd.params;
+ *mask = le32_to_cpu(rsp_params->mask);
return 0;
}
/**
- * dprtc_get_time() - Returns the current RTC time.
+ * dprtc_get_irq_status() - Get the current status of any pending interrupts.
*
* @mc_io: Pointer to MC portal's I/O object
* @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
* @token: Token of DPRTC object
- * @time: Current RTC time.
+ * @irq_index: The interrupt index to configure
+ * @status: Returned interrupt status - one bit per cause:
+ * 0 = no interrupt pending
+ * 1 = interrupt pending
*
* Return: '0' on Success; Error code otherwise.
*/
-int dprtc_get_time(struct fsl_mc_io *mc_io,
- u32 cmd_flags,
- u16 token,
- uint64_t *time)
+int dprtc_get_irq_status(struct fsl_mc_io *mc_io,
+ u32 cmd_flags,
+ u16 token,
+ u8 irq_index,
+ u32 *status)
{
- struct dprtc_time *rsp_params;
+ struct dprtc_cmd_get_irq_status *cmd_params;
+ struct dprtc_rsp_get_irq_status *rsp_params;
struct fsl_mc_command cmd = { 0 };
int err;
- cmd.header = mc_encode_cmd_header(DPRTC_CMDID_GET_TIME,
+ cmd.header = mc_encode_cmd_header(DPRTC_CMDID_GET_IRQ_STATUS,
cmd_flags,
token);
+ cmd_params = (struct dprtc_cmd_get_irq_status *)cmd.params;
+ cmd_params->status = cpu_to_le32(*status);
+ cmd_params->irq_index = irq_index;
err = mc_send_command(mc_io, &cmd);
if (err)
return err;
- rsp_params = (struct dprtc_time *)cmd.params;
- *time = le64_to_cpu(rsp_params->time);
+ rsp_params = (struct dprtc_rsp_get_irq_status *)cmd.params;
+ *status = le32_to_cpu(rsp_params->status);
return 0;
}
/**
- * dprtc_set_time() - Updates current RTC time.
+ * dprtc_clear_irq_status() - Clear a pending interrupt's status
*
* @mc_io: Pointer to MC portal's I/O object
* @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
* @token: Token of DPRTC object
- * @time: New RTC time.
+ * @irq_index: The interrupt index to configure
+ * @status: Bits to clear (W1C) - one bit per cause:
+ * 0 = don't change
+ * 1 = clear status bit
*
* Return: '0' on Success; Error code otherwise.
*/
-int dprtc_set_time(struct fsl_mc_io *mc_io,
- u32 cmd_flags,
- u16 token,
- uint64_t time)
+int dprtc_clear_irq_status(struct fsl_mc_io *mc_io,
+ u32 cmd_flags,
+ u16 token,
+ u8 irq_index,
+ u32 status)
{
- struct dprtc_time *cmd_params;
+ struct dprtc_cmd_clear_irq_status *cmd_params;
struct fsl_mc_command cmd = { 0 };
- cmd.header = mc_encode_cmd_header(DPRTC_CMDID_SET_TIME,
+ cmd.header = mc_encode_cmd_header(DPRTC_CMDID_CLEAR_IRQ_STATUS,
cmd_flags,
token);
- cmd_params = (struct dprtc_time *)cmd.params;
- cmd_params->time = cpu_to_le64(time);
+ cmd_params = (struct dprtc_cmd_clear_irq_status *)cmd.params;
+ cmd_params->irq_index = irq_index;
+ cmd_params->status = cpu_to_le32(status);
return mc_send_command(mc_io, &cmd);
}
diff --git a/drivers/net/ethernet/freescale/dpaa2/dprtc.h b/drivers/net/ethernet/freescale/dpaa2/dprtc.h
index fe19618d6cdf..be7914c1634d 100644
--- a/drivers/net/ethernet/freescale/dpaa2/dprtc.h
+++ b/drivers/net/ethernet/freescale/dpaa2/dprtc.h
@@ -13,6 +13,14 @@
struct fsl_mc_io;
+/**
+ * Number of IRQs
+ */
+#define DPRTC_MAX_IRQ_NUM 1
+#define DPRTC_IRQ_INDEX 0
+
+#define DPRTC_EVENT_PPS 0x08000000
+
int dprtc_open(struct fsl_mc_io *mc_io,
u32 cmd_flags,
int dprtc_id,
@@ -22,24 +30,40 @@ int dprtc_close(struct fsl_mc_io *mc_io,
u32 cmd_flags,
u16 token);
-int dprtc_set_freq_compensation(struct fsl_mc_io *mc_io,
- u32 cmd_flags,
- u16 token,
- u32 freq_compensation);
-
-int dprtc_get_freq_compensation(struct fsl_mc_io *mc_io,
- u32 cmd_flags,
- u16 token,
- u32 *freq_compensation);
-
-int dprtc_get_time(struct fsl_mc_io *mc_io,
- u32 cmd_flags,
- u16 token,
- uint64_t *time);
-
-int dprtc_set_time(struct fsl_mc_io *mc_io,
- u32 cmd_flags,
- u16 token,
- uint64_t time);
+int dprtc_set_irq_enable(struct fsl_mc_io *mc_io,
+ u32 cmd_flags,
+ u16 token,
+ u8 irq_index,
+ u8 en);
+
+int dprtc_get_irq_enable(struct fsl_mc_io *mc_io,
+ u32 cmd_flags,
+ u16 token,
+ u8 irq_index,
+ u8 *en);
+
+int dprtc_set_irq_mask(struct fsl_mc_io *mc_io,
+ u32 cmd_flags,
+ u16 token,
+ u8 irq_index,
+ u32 mask);
+
+int dprtc_get_irq_mask(struct fsl_mc_io *mc_io,
+ u32 cmd_flags,
+ u16 token,
+ u8 irq_index,
+ u32 *mask);
+
+int dprtc_get_irq_status(struct fsl_mc_io *mc_io,
+ u32 cmd_flags,
+ u16 token,
+ u8 irq_index,
+ u32 *status);
+
+int dprtc_clear_irq_status(struct fsl_mc_io *mc_io,
+ u32 cmd_flags,
+ u16 token,
+ u8 irq_index,
+ u32 status);
#endif /* __FSL_DPRTC_H */
diff --git a/drivers/net/ethernet/freescale/enetc/Kconfig b/drivers/net/ethernet/freescale/enetc/Kconfig
index 8429f5c1d810..ed0d010c7cf2 100644
--- a/drivers/net/ethernet/freescale/enetc/Kconfig
+++ b/drivers/net/ethernet/freescale/enetc/Kconfig
@@ -29,3 +29,13 @@ config FSL_ENETC_PTP_CLOCK
packets using the SO_TIMESTAMPING API.
If compiled as module (M), the module name is fsl-enetc-ptp.
+
+config FSL_ENETC_HW_TIMESTAMPING
+ bool "ENETC hardware timestamping support"
+ depends on FSL_ENETC || FSL_ENETC_VF
+ help
+ Enable hardware timestamping support for Ethernet packets
+ using the SO_TIMESTAMPING API. Because dynamic allocation of the
+ RX BD ring is not yet supported and extended RX BDs are too
+ expensive to use when timestamping is not needed, this option
+ enables extended RX BDs in order to support hardware timestamping.
diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
index 491475d87736..223709443ea4 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc.c
@@ -13,7 +13,8 @@
#define ENETC_MAX_SKB_FRAGS 13
#define ENETC_TXBDS_MAX_NEEDED ENETC_TXBDS_NEEDED(ENETC_MAX_SKB_FRAGS + 1)
-static int enetc_map_tx_buffs(struct enetc_bdr *tx_ring, struct sk_buff *skb);
+static int enetc_map_tx_buffs(struct enetc_bdr *tx_ring, struct sk_buff *skb,
+ int active_offloads);
netdev_tx_t enetc_xmit(struct sk_buff *skb, struct net_device *ndev)
{
@@ -33,7 +34,7 @@ netdev_tx_t enetc_xmit(struct sk_buff *skb, struct net_device *ndev)
return NETDEV_TX_BUSY;
}
- count = enetc_map_tx_buffs(tx_ring, skb);
+ count = enetc_map_tx_buffs(tx_ring, skb, priv->active_offloads);
if (unlikely(!count))
goto drop_packet_err;
@@ -105,7 +106,8 @@ static void enetc_free_tx_skb(struct enetc_bdr *tx_ring,
}
}
-static int enetc_map_tx_buffs(struct enetc_bdr *tx_ring, struct sk_buff *skb)
+static int enetc_map_tx_buffs(struct enetc_bdr *tx_ring, struct sk_buff *skb,
+ int active_offloads)
{
struct enetc_tx_swbd *tx_swbd;
struct skb_frag_struct *frag;
@@ -137,7 +139,10 @@ static int enetc_map_tx_buffs(struct enetc_bdr *tx_ring, struct sk_buff *skb)
count++;
do_vlan = skb_vlan_tag_present(skb);
- do_tstamp = skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP;
+ do_tstamp = (active_offloads & ENETC_F_TX_TSTAMP) &&
+ (skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP);
+ tx_swbd->do_tstamp = do_tstamp;
+ tx_swbd->check_wb = tx_swbd->do_tstamp;
if (do_vlan || do_tstamp)
flags |= ENETC_TXBD_FLAGS_EX;
@@ -299,24 +304,70 @@ static int enetc_bd_ready_count(struct enetc_bdr *tx_ring, int ci)
return pi >= ci ? pi - ci : tx_ring->bd_count - ci + pi;
}
+static void enetc_get_tx_tstamp(struct enetc_hw *hw, union enetc_tx_bd *txbd,
+ u64 *tstamp)
+{
+ u32 lo, hi, tstamp_lo;
+
+ lo = enetc_rd(hw, ENETC_SICTR0);
+ hi = enetc_rd(hw, ENETC_SICTR1);
+ tstamp_lo = le32_to_cpu(txbd->wb.tstamp);
+ if (lo <= tstamp_lo)
+ hi -= 1;
+ *tstamp = (u64)hi << 32 | tstamp_lo;
+}
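The hi decrement above compensates for the fact that the writeback BD carries only the low 32 bits of the free-running timestamp counter; a small sketch of the same reconstruction (illustrative helper, not part of the patch):

/* Sketch only: if the counter's low word has wrapped since the BD was
 * written (e.g. current lo = 0x00000010 while the BD latched
 * bd_lo = 0xfffffff0), the latched value belongs to the previous
 * high-word period, so the high word read afterwards is decremented.
 */
static inline u64 example_combine_tstamp(u32 cur_lo, u32 cur_hi, u32 bd_lo)
{
	if (cur_lo <= bd_lo)	/* low word wrapped after BD writeback */
		cur_hi--;
	return (u64)cur_hi << 32 | bd_lo;
}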
+
+static void enetc_tstamp_tx(struct sk_buff *skb, u64 tstamp)
+{
+ struct skb_shared_hwtstamps shhwtstamps;
+
+ if (skb_shinfo(skb)->tx_flags & SKBTX_IN_PROGRESS) {
+ memset(&shhwtstamps, 0, sizeof(shhwtstamps));
+ shhwtstamps.hwtstamp = ns_to_ktime(tstamp);
+ skb_tstamp_tx(skb, &shhwtstamps);
+ }
+}
+
static bool enetc_clean_tx_ring(struct enetc_bdr *tx_ring, int napi_budget)
{
struct net_device *ndev = tx_ring->ndev;
int tx_frm_cnt = 0, tx_byte_cnt = 0;
struct enetc_tx_swbd *tx_swbd;
int i, bds_to_clean;
+ bool do_tstamp;
+ u64 tstamp = 0;
i = tx_ring->next_to_clean;
tx_swbd = &tx_ring->tx_swbd[i];
bds_to_clean = enetc_bd_ready_count(tx_ring, i);
+ do_tstamp = false;
+
while (bds_to_clean && tx_frm_cnt < ENETC_DEFAULT_TX_WORK) {
bool is_eof = !!tx_swbd->skb;
+ if (unlikely(tx_swbd->check_wb)) {
+ struct enetc_ndev_priv *priv = netdev_priv(ndev);
+ union enetc_tx_bd *txbd;
+
+ txbd = ENETC_TXBD(*tx_ring, i);
+
+ if (txbd->flags & ENETC_TXBD_FLAGS_W &&
+ tx_swbd->do_tstamp) {
+ enetc_get_tx_tstamp(&priv->si->hw, txbd,
+ &tstamp);
+ do_tstamp = true;
+ }
+ }
+
if (likely(tx_swbd->dma))
enetc_unmap_tx_buff(tx_ring, tx_swbd);
if (is_eof) {
+ if (unlikely(do_tstamp)) {
+ enetc_tstamp_tx(tx_swbd->skb, tstamp);
+ do_tstamp = false;
+ }
napi_consume_skb(tx_swbd->skb, napi_budget);
tx_swbd->skb = NULL;
}
@@ -425,10 +476,38 @@ static int enetc_refill_rx_ring(struct enetc_bdr *rx_ring, const int buff_cnt)
return j;
}
+#ifdef CONFIG_FSL_ENETC_HW_TIMESTAMPING
+static void enetc_get_rx_tstamp(struct net_device *ndev,
+ union enetc_rx_bd *rxbd,
+ struct sk_buff *skb)
+{
+ struct skb_shared_hwtstamps *shhwtstamps = skb_hwtstamps(skb);
+ struct enetc_ndev_priv *priv = netdev_priv(ndev);
+ struct enetc_hw *hw = &priv->si->hw;
+ u32 lo, hi, tstamp_lo;
+ u64 tstamp;
+
+ if (le16_to_cpu(rxbd->r.flags) & ENETC_RXBD_FLAG_TSTMP) {
+ lo = enetc_rd(hw, ENETC_SICTR0);
+ hi = enetc_rd(hw, ENETC_SICTR1);
+ tstamp_lo = le32_to_cpu(rxbd->r.tstamp);
+ if (lo <= tstamp_lo)
+ hi -= 1;
+
+ tstamp = (u64)hi << 32 | tstamp_lo;
+ memset(shhwtstamps, 0, sizeof(*shhwtstamps));
+ shhwtstamps->hwtstamp = ns_to_ktime(tstamp);
+ }
+}
+#endif
+
static void enetc_get_offloads(struct enetc_bdr *rx_ring,
union enetc_rx_bd *rxbd, struct sk_buff *skb)
{
- /* TODO: add tstamp, hashing */
+#ifdef CONFIG_FSL_ENETC_HW_TIMESTAMPING
+ struct enetc_ndev_priv *priv = netdev_priv(rx_ring->ndev);
+#endif
+ /* TODO: hashing */
if (rx_ring->ndev->features & NETIF_F_RXCSUM) {
u16 inet_csum = le16_to_cpu(rxbd->r.inet_csum);
@@ -442,6 +521,10 @@ static void enetc_get_offloads(struct enetc_bdr *rx_ring,
if (le16_to_cpu(rxbd->r.flags) & ENETC_RXBD_FLAG_VLAN)
__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
le16_to_cpu(rxbd->r.vlan_opt));
+#ifdef CONFIG_FSL_ENETC_HW_TIMESTAMPING
+ if (priv->active_offloads & ENETC_F_RX_TSTAMP)
+ enetc_get_rx_tstamp(rx_ring->ndev, rxbd, skb);
+#endif
}
static void enetc_process_skb(struct enetc_bdr *rx_ring,
@@ -1074,6 +1157,9 @@ static void enetc_setup_rxbdr(struct enetc_hw *hw, struct enetc_bdr *rx_ring)
enetc_rxbdr_wr(hw, idx, ENETC_RBICIR0, ENETC_RBICIR0_ICEN | 0x1);
rbmr = ENETC_RBMR_EN;
+#ifdef CONFIG_FSL_ENETC_HW_TIMESTAMPING
+ rbmr |= ENETC_RBMR_BDS;
+#endif
if (rx_ring->ndev->features & NETIF_F_HW_VLAN_CTAG_RX)
rbmr |= ENETC_RBMR_VTE;
@@ -1341,6 +1427,62 @@ int enetc_close(struct net_device *ndev)
return 0;
}
+int enetc_setup_tc(struct net_device *ndev, enum tc_setup_type type,
+ void *type_data)
+{
+ struct enetc_ndev_priv *priv = netdev_priv(ndev);
+ struct tc_mqprio_qopt *mqprio = type_data;
+ struct enetc_bdr *tx_ring;
+ u8 num_tc;
+ int i;
+
+ if (type != TC_SETUP_QDISC_MQPRIO)
+ return -EOPNOTSUPP;
+
+ mqprio->hw = TC_MQPRIO_HW_OFFLOAD_TCS;
+ num_tc = mqprio->num_tc;
+
+ if (!num_tc) {
+ netdev_reset_tc(ndev);
+ netif_set_real_num_tx_queues(ndev, priv->num_tx_rings);
+
+ /* Reset all ring priorities to 0 */
+ for (i = 0; i < priv->num_tx_rings; i++) {
+ tx_ring = priv->tx_ring[i];
+ enetc_set_bdr_prio(&priv->si->hw, tx_ring->index, 0);
+ }
+
+ return 0;
+ }
+
+ /* Check if we have enough BD rings available to accommodate all TCs */
+ if (num_tc > priv->num_tx_rings) {
+ netdev_err(ndev, "Max %d traffic classes supported\n",
+ priv->num_tx_rings);
+ return -EINVAL;
+ }
+
+ /* For the moment, we use only one BD ring per TC.
+ *
+ * Configure num_tc BD rings with increasing priorities.
+ */
+ for (i = 0; i < num_tc; i++) {
+ tx_ring = priv->tx_ring[i];
+ enetc_set_bdr_prio(&priv->si->hw, tx_ring->index, i);
+ }
+
+ /* Reset the number of netdev queues based on the TC count */
+ netif_set_real_num_tx_queues(ndev, num_tc);
+
+ netdev_set_num_tc(ndev, num_tc);
+
+ /* Each TC is associated with one netdev queue */
+ for (i = 0; i < num_tc; i++)
+ netdev_set_tc_queue(ndev, i, 1, i);
+
+ return 0;
+}
+
struct net_device_stats *enetc_get_stats(struct net_device *ndev)
{
struct enetc_ndev_priv *priv = netdev_priv(ndev);
@@ -1396,6 +1538,70 @@ int enetc_set_features(struct net_device *ndev,
return 0;
}
+#ifdef CONFIG_FSL_ENETC_HW_TIMESTAMPING
+static int enetc_hwtstamp_set(struct net_device *ndev, struct ifreq *ifr)
+{
+ struct enetc_ndev_priv *priv = netdev_priv(ndev);
+ struct hwtstamp_config config;
+
+ if (copy_from_user(&config, ifr->ifr_data, sizeof(config)))
+ return -EFAULT;
+
+ switch (config.tx_type) {
+ case HWTSTAMP_TX_OFF:
+ priv->active_offloads &= ~ENETC_F_TX_TSTAMP;
+ break;
+ case HWTSTAMP_TX_ON:
+ priv->active_offloads |= ENETC_F_TX_TSTAMP;
+ break;
+ default:
+ return -ERANGE;
+ }
+
+ switch (config.rx_filter) {
+ case HWTSTAMP_FILTER_NONE:
+ priv->active_offloads &= ~ENETC_F_RX_TSTAMP;
+ break;
+ default:
+ priv->active_offloads |= ENETC_F_RX_TSTAMP;
+ config.rx_filter = HWTSTAMP_FILTER_ALL;
+ }
+
+ return copy_to_user(ifr->ifr_data, &config, sizeof(config)) ?
+ -EFAULT : 0;
+}
+
+static int enetc_hwtstamp_get(struct net_device *ndev, struct ifreq *ifr)
+{
+ struct enetc_ndev_priv *priv = netdev_priv(ndev);
+ struct hwtstamp_config config;
+
+ config.flags = 0;
+
+ if (priv->active_offloads & ENETC_F_TX_TSTAMP)
+ config.tx_type = HWTSTAMP_TX_ON;
+ else
+ config.tx_type = HWTSTAMP_TX_OFF;
+
+ config.rx_filter = (priv->active_offloads & ENETC_F_RX_TSTAMP) ?
+ HWTSTAMP_FILTER_ALL : HWTSTAMP_FILTER_NONE;
+
+ return copy_to_user(ifr->ifr_data, &config, sizeof(config)) ?
+ -EFAULT : 0;
+}
+#endif
+
+int enetc_ioctl(struct net_device *ndev, struct ifreq *rq, int cmd)
+{
+#ifdef CONFIG_FSL_ENETC_HW_TIMESTAMPING
+ if (cmd == SIOCSHWTSTAMP)
+ return enetc_hwtstamp_set(ndev, rq);
+ if (cmd == SIOCGHWTSTAMP)
+ return enetc_hwtstamp_get(ndev, rq);
+#endif
+ return -EINVAL;
+}
+
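For context, the SIOCSHWTSTAMP path handled above is normally exercised from userspace through the standard hwtstamp_config ioctl; a minimal sketch follows (the interface name and the already-open socket are assumptions, error handling trimmed):

/* Userspace sketch, not part of the patch: enable TX and RX hardware
 * timestamps on an interface via the hwtstamp_config ioctl.
 */
#include <string.h>
#include <sys/ioctl.h>
#include <net/if.h>
#include <linux/net_tstamp.h>
#include <linux/sockios.h>

static int example_enable_hw_tstamp(int sock, const char *ifname)
{
	struct hwtstamp_config cfg = {
		.tx_type   = HWTSTAMP_TX_ON,
		.rx_filter = HWTSTAMP_FILTER_ALL,
	};
	struct ifreq ifr;

	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
	ifr.ifr_data = (void *)&cfg;

	return ioctl(sock, SIOCSHWTSTAMP, &ifr);	/* 0 on success */
}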
int enetc_alloc_msix(struct enetc_ndev_priv *priv)
{
struct pci_dev *pdev = priv->si->pdev;
diff --git a/drivers/net/ethernet/freescale/enetc/enetc.h b/drivers/net/ethernet/freescale/enetc/enetc.h
index b274135c5103..541b4e2073fe 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc.h
+++ b/drivers/net/ethernet/freescale/enetc/enetc.h
@@ -21,7 +21,9 @@ struct enetc_tx_swbd {
struct sk_buff *skb;
dma_addr_t dma;
u16 len;
- u16 is_dma_page;
+ u8 is_dma_page:1;
+ u8 check_wb:1;
+ u8 do_tstamp:1;
};
#define ENETC_RX_MAXFRM_SIZE ENETC_MAC_MAXFRM_SIZE
@@ -167,6 +169,12 @@ struct enetc_cls_rule {
#define ENETC_MAX_BDR_INT 2 /* fixed to max # of available cpus */
+/* TODO: more hardware offloads */
+enum enetc_active_offloads {
+ ENETC_F_RX_TSTAMP = BIT(0),
+ ENETC_F_TX_TSTAMP = BIT(1),
+};
+
struct enetc_ndev_priv {
struct net_device *ndev;
struct device *dev; /* dma-mapping device */
@@ -178,6 +186,7 @@ struct enetc_ndev_priv {
u16 rx_bd_count, tx_bd_count;
u16 msg_enable;
+ int active_offloads;
struct enetc_bdr *tx_ring[16];
struct enetc_bdr *rx_ring[16];
@@ -200,6 +209,9 @@ struct enetc_msg_cmd_set_primary_mac {
#define ENETC_CBDR_TIMEOUT 1000 /* usecs */
+/* PTP driver exports */
+extern int enetc_phc_index;
+
/* SI common */
int enetc_pci_probe(struct pci_dev *pdev, const char *name, int sizeof_priv);
void enetc_pci_remove(struct pci_dev *pdev);
@@ -216,6 +228,10 @@ netdev_tx_t enetc_xmit(struct sk_buff *skb, struct net_device *ndev);
struct net_device_stats *enetc_get_stats(struct net_device *ndev);
int enetc_set_features(struct net_device *ndev,
netdev_features_t features);
+int enetc_ioctl(struct net_device *ndev, struct ifreq *rq, int cmd);
+int enetc_setup_tc(struct net_device *ndev, enum tc_setup_type type,
+ void *type_data);
+
/* ethtool */
void enetc_set_ethtool_ops(struct net_device *ndev);
diff --git a/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c b/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c
index b9519b6ad727..fcb52efec075 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c
@@ -555,6 +555,35 @@ static void enetc_get_ringparam(struct net_device *ndev,
}
}
+static int enetc_get_ts_info(struct net_device *ndev,
+ struct ethtool_ts_info *info)
+{
+ int *phc_idx;
+
+ phc_idx = symbol_get(enetc_phc_index);
+ if (phc_idx) {
+ info->phc_index = *phc_idx;
+ symbol_put(enetc_phc_index);
+ } else {
+ info->phc_index = -1;
+ }
+
+#ifdef CONFIG_FSL_ENETC_HW_TIMESTAMPING
+ info->so_timestamping = SOF_TIMESTAMPING_TX_HARDWARE |
+ SOF_TIMESTAMPING_RX_HARDWARE |
+ SOF_TIMESTAMPING_RAW_HARDWARE;
+
+ info->tx_types = (1 << HWTSTAMP_TX_OFF) |
+ (1 << HWTSTAMP_TX_ON);
+ info->rx_filters = (1 << HWTSTAMP_FILTER_NONE) |
+ (1 << HWTSTAMP_FILTER_ALL);
+#else
+ info->so_timestamping = SOF_TIMESTAMPING_RX_SOFTWARE |
+ SOF_TIMESTAMPING_SOFTWARE;
+#endif
+ return 0;
+}
+
static const struct ethtool_ops enetc_pf_ethtool_ops = {
.get_regs_len = enetc_get_reglen,
.get_regs = enetc_get_regs,
@@ -571,6 +600,7 @@ static const struct ethtool_ops enetc_pf_ethtool_ops = {
.get_link_ksettings = phy_ethtool_get_link_ksettings,
.set_link_ksettings = phy_ethtool_set_link_ksettings,
.get_link = ethtool_op_get_link,
+ .get_ts_info = enetc_get_ts_info,
};
static const struct ethtool_ops enetc_vf_ethtool_ops = {
@@ -586,6 +616,7 @@ static const struct ethtool_ops enetc_vf_ethtool_ops = {
.set_rxfh = enetc_set_rxfh,
.get_ringparam = enetc_get_ringparam,
.get_link = ethtool_op_get_link,
+ .get_ts_info = enetc_get_ts_info,
};
void enetc_set_ethtool_ops(struct net_device *ndev)
diff --git a/drivers/net/ethernet/freescale/enetc/enetc_hw.h b/drivers/net/ethernet/freescale/enetc/enetc_hw.h
index df8eb8882d92..88276299f447 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc_hw.h
+++ b/drivers/net/ethernet/freescale/enetc/enetc_hw.h
@@ -127,7 +127,7 @@ enum enetc_bdr_type {TX, RX};
#define ENETC_TBSR_BUSY BIT(0)
#define ENETC_TBMR_VIH BIT(9)
#define ENETC_TBMR_PRIO_MASK GENMASK(2, 0)
-#define ENETC_TBMR_PRIO_SET(val) val
+#define ENETC_TBMR_SET_PRIO(val) ((val) & ENETC_TBMR_PRIO_MASK)
#define ENETC_TBMR_EN BIT(31)
#define ENETC_TBSR 0x4
#define ENETC_TBBAR0 0x10
@@ -361,6 +361,12 @@ union enetc_tx_bd {
u8 e_flags;
u8 flags;
} ext; /* Tx BD extension */
+ struct {
+ __le32 tstamp;
+ u8 reserved[10];
+ u8 status;
+ u8 flags;
+ } wb; /* writeback descriptor */
};
#define ENETC_TXBD_FLAGS_L4CS BIT(0)
@@ -399,6 +405,9 @@ union enetc_rx_bd {
struct {
__le64 addr;
u8 reserved[8];
+#ifdef CONFIG_FSL_ENETC_HW_TIMESTAMPING
+ u8 reserved1[16];
+#endif
} w;
struct {
__le16 inet_csum;
@@ -413,6 +422,10 @@ union enetc_rx_bd {
};
__le32 lstatus;
};
+#ifdef CONFIG_FSL_ENETC_HW_TIMESTAMPING
+ __le32 tstamp;
+ u8 reserved[12];
+#endif
} r;
};
@@ -531,3 +544,13 @@ static inline void enetc_enable_txvlan(struct enetc_hw *hw, int si_idx,
val = (val & ~ENETC_TBMR_VIH) | (en ? ENETC_TBMR_VIH : 0);
enetc_txbdr_wr(hw, si_idx, ENETC_TBMR, val);
}
+
+static inline void enetc_set_bdr_prio(struct enetc_hw *hw, int bdr_idx,
+ int prio)
+{
+ u32 val = enetc_txbdr_rd(hw, bdr_idx, ENETC_TBMR);
+
+ val &= ~ENETC_TBMR_PRIO_MASK;
+ val |= ENETC_TBMR_SET_PRIO(prio);
+ enetc_txbdr_wr(hw, bdr_idx, ENETC_TBMR, val);
+}
diff --git a/drivers/net/ethernet/freescale/enetc/enetc_pf.c b/drivers/net/ethernet/freescale/enetc/enetc_pf.c
index 78287c517095..258b3cb38a6f 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc_pf.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc_pf.c
@@ -702,6 +702,8 @@ static const struct net_device_ops enetc_ndev_ops = {
.ndo_set_vf_vlan = enetc_pf_set_vf_vlan,
.ndo_set_vf_spoofchk = enetc_pf_set_vf_spoofchk,
.ndo_set_features = enetc_pf_set_features,
+ .ndo_do_ioctl = enetc_ioctl,
+ .ndo_setup_tc = enetc_setup_tc,
};
static void enetc_pf_netdev_setup(struct enetc_si *si, struct net_device *ndev,
diff --git a/drivers/net/ethernet/freescale/enetc/enetc_ptp.c b/drivers/net/ethernet/freescale/enetc/enetc_ptp.c
index 8c1497e7d9c5..2fd2586e42bf 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc_ptp.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc_ptp.c
@@ -7,6 +7,9 @@
#include "enetc.h"
+int enetc_phc_index = -1;
+EXPORT_SYMBOL(enetc_phc_index);
+
static struct ptp_clock_info enetc_ptp_caps = {
.owner = THIS_MODULE,
.name = "ENETC PTP clock",
@@ -96,6 +99,7 @@ static int enetc_ptp_probe(struct pci_dev *pdev,
if (err)
goto err_no_clock;
+ enetc_phc_index = ptp_qoriq->phc_index;
pci_set_drvdata(pdev, ptp_qoriq);
return 0;
@@ -119,6 +123,7 @@ static void enetc_ptp_remove(struct pci_dev *pdev)
{
struct ptp_qoriq *ptp_qoriq = pci_get_drvdata(pdev);
+ enetc_phc_index = -1;
ptp_qoriq_free(ptp_qoriq);
kfree(ptp_qoriq);
diff --git a/drivers/net/ethernet/freescale/enetc/enetc_vf.c b/drivers/net/ethernet/freescale/enetc/enetc_vf.c
index 72c3ea887bcf..ebd21bf4cfa1 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc_vf.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc_vf.c
@@ -111,6 +111,8 @@ static const struct net_device_ops enetc_ndev_ops = {
.ndo_get_stats = enetc_get_stats,
.ndo_set_mac_address = enetc_vf_set_mac_addr,
.ndo_set_features = enetc_vf_set_features,
+ .ndo_do_ioctl = enetc_ioctl,
+ .ndo_setup_tc = enetc_setup_tc,
};
static void enetc_vf_netdev_setup(struct enetc_si *si, struct net_device *ndev,
diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
index 38f10f7dcbc3..9d459ccf251d 100644
--- a/drivers/net/ethernet/freescale/fec_main.c
+++ b/drivers/net/ethernet/freescale/fec_main.c
@@ -1689,10 +1689,10 @@ static void fec_get_mac(struct net_device *ndev)
*/
if (!is_valid_ether_addr(iap)) {
/* Report it and use a random ethernet address instead */
- netdev_err(ndev, "Invalid MAC address: %pM\n", iap);
+ dev_err(&fep->pdev->dev, "Invalid MAC address: %pM\n", iap);
eth_hw_addr_random(ndev);
- netdev_info(ndev, "Using random MAC address: %pM\n",
- ndev->dev_addr);
+ dev_info(&fep->pdev->dev, "Using random MAC address: %pM\n",
+ ndev->dev_addr);
return;
}
@@ -2446,30 +2446,31 @@ static int
fec_enet_set_coalesce(struct net_device *ndev, struct ethtool_coalesce *ec)
{
struct fec_enet_private *fep = netdev_priv(ndev);
+ struct device *dev = &fep->pdev->dev;
unsigned int cycle;
if (!(fep->quirks & FEC_QUIRK_HAS_COALESCE))
return -EOPNOTSUPP;
if (ec->rx_max_coalesced_frames > 255) {
- pr_err("Rx coalesced frames exceed hardware limitation\n");
+ dev_err(dev, "Rx coalesced frames exceed hardware limitation\n");
return -EINVAL;
}
if (ec->tx_max_coalesced_frames > 255) {
- pr_err("Tx coalesced frame exceed hardware limitation\n");
+ dev_err(dev, "Tx coalesced frame exceed hardware limitation\n");
return -EINVAL;
}
cycle = fec_enet_us_to_itr_clock(ndev, fep->rx_time_itr);
if (cycle > 0xFFFF) {
- pr_err("Rx coalesced usec exceed hardware limitation\n");
+ dev_err(dev, "Rx coalesced usec exceed hardware limitation\n");
return -EINVAL;
}
cycle = fec_enet_us_to_itr_clock(ndev, fep->tx_time_itr);
if (cycle > 0xFFFF) {
- pr_err("Rx coalesced usec exceed hardware limitation\n");
+ dev_err(dev, "Rx coalesced usec exceed hardware limitation\n");
return -EINVAL;
}
@@ -3473,7 +3474,6 @@ fec_probe(struct platform_device *pdev)
if (ret) {
dev_err(&pdev->dev,
"Failed to enable phy regulator: %d\n", ret);
- clk_disable_unprepare(fep->clk_ipg);
goto failed_regulator;
}
} else {
diff --git a/drivers/net/ethernet/freescale/fec_ptp.c b/drivers/net/ethernet/freescale/fec_ptp.c
index 7e892b1cbd3d..19e2365be7d8 100644
--- a/drivers/net/ethernet/freescale/fec_ptp.c
+++ b/drivers/net/ethernet/freescale/fec_ptp.c
@@ -617,7 +617,7 @@ void fec_ptp_init(struct platform_device *pdev, int irq_idx)
fep->ptp_clock = ptp_clock_register(&fep->ptp_caps, &pdev->dev);
if (IS_ERR(fep->ptp_clock)) {
fep->ptp_clock = NULL;
- pr_err("ptp_clock_register failed\n");
+ dev_err(&pdev->dev, "ptp_clock_register failed\n");
}
schedule_delayed_work(&fep->time_keep, HZ);
diff --git a/drivers/net/ethernet/freescale/fman/fman_keygen.c b/drivers/net/ethernet/freescale/fman/fman_keygen.c
index f54da3c684d0..e1bdfed16134 100644
--- a/drivers/net/ethernet/freescale/fman/fman_keygen.c
+++ b/drivers/net/ethernet/freescale/fman/fman_keygen.c
@@ -144,7 +144,8 @@
/* Hash Key extraction fields: */
#define DEFAULT_HASH_KEY_EXTRACT_FIELDS \
(KG_SCH_KN_IPSRC1 | KG_SCH_KN_IPDST1 | \
- KG_SCH_KN_L4PSRC | KG_SCH_KN_L4PDST)
+ KG_SCH_KN_L4PSRC | KG_SCH_KN_L4PDST | \
+ KG_SCH_KN_IPSEC_SPI)
/* Default values to be used as hash key in case IPv4 or L4 (TCP, UDP)
* don't exist in the frame
diff --git a/drivers/net/ethernet/google/Kconfig b/drivers/net/ethernet/google/Kconfig
new file mode 100644
index 000000000000..b8f04d052fda
--- /dev/null
+++ b/drivers/net/ethernet/google/Kconfig
@@ -0,0 +1,27 @@
+#
+# Google network device configuration
+#
+
+config NET_VENDOR_GOOGLE
+ bool "Google Devices"
+ default y
+ help
+ If you have a network (Ethernet) device belonging to this class, say Y.
+
+ Note that the answer to this question doesn't directly affect the
+ kernel: saying N will just cause the configurator to skip all
+ the questions about Google devices. If you say Y, you will be asked
+ for your specific device in the following questions.
+
+if NET_VENDOR_GOOGLE
+
+config GVE
+ tristate "Google Virtual NIC (gVNIC) support"
+ depends on PCI_MSI
+ help
+ This driver supports the Google Virtual NIC (gVNIC).
+
+ To compile this driver as a module, choose M here.
+ The module will be called gve.
+
+endif #NET_VENDOR_GOOGLE
diff --git a/drivers/net/ethernet/google/Makefile b/drivers/net/ethernet/google/Makefile
new file mode 100644
index 000000000000..402cc3ba1639
--- /dev/null
+++ b/drivers/net/ethernet/google/Makefile
@@ -0,0 +1,5 @@
+#
+# Makefile for the Google network device drivers.
+#
+
+obj-$(CONFIG_GVE) += gve/
diff --git a/drivers/net/ethernet/google/gve/Makefile b/drivers/net/ethernet/google/gve/Makefile
new file mode 100644
index 000000000000..3354ce40eb97
--- /dev/null
+++ b/drivers/net/ethernet/google/gve/Makefile
@@ -0,0 +1,4 @@
+# Makefile for the Google virtual Ethernet (gve) driver
+
+obj-$(CONFIG_GVE) += gve.o
+gve-objs := gve_main.o gve_tx.o gve_rx.o gve_ethtool.o gve_adminq.o
diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
new file mode 100644
index 000000000000..92372dc43be8
--- /dev/null
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -0,0 +1,459 @@
+/* SPDX-License-Identifier: (GPL-2.0 OR MIT)
+ * Google virtual Ethernet (gve) driver
+ *
+ * Copyright (C) 2015-2019 Google, Inc.
+ */
+
+#ifndef _GVE_H_
+#define _GVE_H_
+
+#include <linux/dma-mapping.h>
+#include <linux/netdevice.h>
+#include <linux/pci.h>
+#include <linux/u64_stats_sync.h>
+#include "gve_desc.h"
+
+#ifndef PCI_VENDOR_ID_GOOGLE
+#define PCI_VENDOR_ID_GOOGLE 0x1ae0
+#endif
+
+#define PCI_DEV_ID_GVNIC 0x0042
+
+#define GVE_REGISTER_BAR 0
+#define GVE_DOORBELL_BAR 2
+
+/* Driver can alloc up to 2 segments for the header and 2 for the payload. */
+#define GVE_TX_MAX_IOVEC 4
+/* 1 for management, 1 for rx, 1 for tx */
+#define GVE_MIN_MSIX 3
+
+/* Each slot in the desc ring has a 1:1 mapping to a slot in the data ring */
+struct gve_rx_desc_queue {
+ struct gve_rx_desc *desc_ring; /* the descriptor ring */
+ dma_addr_t bus; /* the bus for the desc_ring */
+ u32 cnt; /* free-running total number of completed packets */
+ u32 fill_cnt; /* free-running total number of descriptors posted */
+ u32 mask; /* masks the cnt to the size of the ring */
+ u8 seqno; /* the next expected seqno for this desc */
+};
+
+/* The page info for a single slot in the RX data queue */
+struct gve_rx_slot_page_info {
+ struct page *page;
+ void *page_address;
+ u32 page_offset; /* offset to write to in page */
+};
+
+/* A list of pages registered with the device during setup and used by a queue
+ * as buffers
+ */
+struct gve_queue_page_list {
+ u32 id; /* unique id */
+ u32 num_entries;
+ struct page **pages; /* list of num_entries pages */
+ dma_addr_t *page_buses; /* the dma addrs of the pages */
+};
+
+/* Each slot in the data ring has a 1:1 mapping to a slot in the desc ring */
+struct gve_rx_data_queue {
+ struct gve_rx_data_slot *data_ring; /* read by NIC */
+ dma_addr_t data_bus; /* dma mapping of the slots */
+ struct gve_rx_slot_page_info *page_info; /* page info of the buffers */
+ struct gve_queue_page_list *qpl; /* qpl assigned to this queue */
+ u32 mask; /* masks the cnt to the size of the ring */
+ u32 cnt; /* free-running total number of completed packets */
+};
+
+struct gve_priv;
+
+/* An RX ring that contains a power-of-two sized desc and data ring. */
+struct gve_rx_ring {
+ struct gve_priv *gve;
+ struct gve_rx_desc_queue desc;
+ struct gve_rx_data_queue data;
+ u64 rbytes; /* free-running bytes received */
+ u64 rpackets; /* free-running packets received */
+ u32 q_num; /* queue index */
+ u32 ntfy_id; /* notification block index */
+ struct gve_queue_resources *q_resources; /* head and tail pointer idx */
+ dma_addr_t q_resources_bus; /* dma address for the queue resources */
+ struct u64_stats_sync statss; /* sync stats for 32bit archs */
+};
+
+/* A TX desc ring entry */
+union gve_tx_desc {
+ struct gve_tx_pkt_desc pkt; /* first desc for a packet */
+ struct gve_tx_seg_desc seg; /* subsequent descs for a packet */
+};
+
+/* Tracks the memory in the fifo occupied by a segment of a packet */
+struct gve_tx_iovec {
+ u32 iov_offset; /* offset into this segment */
+ u32 iov_len; /* length */
+ u32 iov_padding; /* padding associated with this segment */
+};
+
+/* Tracks the memory in the fifo occupied by the skb. Mapped 1:1 to a desc
+ * ring entry but only used for a pkt_desc not a seg_desc
+ */
+struct gve_tx_buffer_state {
+ struct sk_buff *skb; /* skb for this pkt */
+ struct gve_tx_iovec iov[GVE_TX_MAX_IOVEC]; /* segments of this pkt */
+};
+
+/* A TX buffer - each queue has one */
+struct gve_tx_fifo {
+ void *base; /* address of base of FIFO */
+ u32 size; /* total size */
+ atomic_t available; /* how much space is still available */
+ u32 head; /* offset to write at */
+ struct gve_queue_page_list *qpl; /* QPL mapped into this FIFO */
+};
+
+/* A TX ring that contains a power-of-two sized desc ring and a FIFO buffer */
+struct gve_tx_ring {
+ /* Cacheline 0 -- Accessed & dirtied during transmit */
+ struct gve_tx_fifo tx_fifo;
+ u32 req; /* driver tracked head pointer */
+ u32 done; /* driver tracked tail pointer */
+
+ /* Cacheline 1 -- Accessed & dirtied during gve_clean_tx_done */
+ __be32 last_nic_done ____cacheline_aligned; /* NIC tail pointer */
+ u64 pkt_done; /* free-running - total packets completed */
+ u64 bytes_done; /* free-running - total bytes completed */
+
+ /* Cacheline 2 -- Read-mostly fields */
+ union gve_tx_desc *desc ____cacheline_aligned;
+ struct gve_tx_buffer_state *info; /* Maps 1:1 to a desc */
+ struct netdev_queue *netdev_txq;
+ struct gve_queue_resources *q_resources; /* head and tail pointer idx */
+ u32 mask; /* masks req and done down to queue size */
+
+ /* Slow-path fields */
+ u32 q_num ____cacheline_aligned; /* queue idx */
+ u32 stop_queue; /* count of queue stops */
+ u32 wake_queue; /* count of queue wakes */
+ u32 ntfy_id; /* notification block index */
+ dma_addr_t bus; /* dma address of the descr ring */
+ dma_addr_t q_resources_bus; /* dma address of the queue resources */
+ struct u64_stats_sync statss; /* sync stats for 32bit archs */
+} ____cacheline_aligned;
+
+/* Wraps the info for one irq including the napi struct and the queues
+ * associated with that irq.
+ */
+struct gve_notify_block {
+ __be32 irq_db_index; /* idx into Bar2 - set by device, must be 1st */
+ char name[IFNAMSIZ + 16]; /* name registered with the kernel */
+ struct napi_struct napi; /* kernel napi struct for this block */
+ struct gve_priv *priv;
+ struct gve_tx_ring *tx; /* tx rings on this block */
+ struct gve_rx_ring *rx; /* rx rings on this block */
+} ____cacheline_aligned;
+
+/* Tracks allowed and current queue settings */
+struct gve_queue_config {
+ u16 max_queues;
+ u16 num_queues; /* current */
+};
+
+/* Tracks the available and used qpl IDs */
+struct gve_qpl_config {
+ u32 qpl_map_size; /* map memory size */
+ unsigned long *qpl_id_map; /* bitmap of used qpl ids */
+};
+
+struct gve_priv {
+ struct net_device *dev;
+ struct gve_tx_ring *tx; /* array of tx_cfg.num_queues */
+ struct gve_rx_ring *rx; /* array of rx_cfg.num_queues */
+ struct gve_queue_page_list *qpls; /* array of num qpls */
+ struct gve_notify_block *ntfy_blocks; /* array of num_ntfy_blks */
+ dma_addr_t ntfy_block_bus;
+ struct msix_entry *msix_vectors; /* array of num_ntfy_blks + 1 */
+ char mgmt_msix_name[IFNAMSIZ + 16];
+ u32 mgmt_msix_idx;
+ __be32 *counter_array; /* array of num_event_counters */
+ dma_addr_t counter_array_bus;
+
+ u16 num_event_counters;
+ u16 tx_desc_cnt; /* num desc per ring */
+ u16 rx_desc_cnt; /* num desc per ring */
+ u16 tx_pages_per_qpl; /* tx buffer length */
+ u16 rx_pages_per_qpl; /* rx buffer length */
+ u64 max_registered_pages;
+ u64 num_registered_pages; /* num pages registered with NIC */
+ u32 rx_copybreak; /* copy packets smaller than this */
+ u16 default_num_queues; /* default num queues to set up */
+
+ struct gve_queue_config tx_cfg;
+ struct gve_queue_config rx_cfg;
+ struct gve_qpl_config qpl_cfg; /* map used QPL ids */
+ u32 num_ntfy_blks; /* split between TX and RX so must be even */
+
+ struct gve_registers __iomem *reg_bar0; /* see gve_register.h */
+ __be32 __iomem *db_bar2; /* "array" of doorbells */
+ u32 msg_enable; /* level for netif* netdev print macros */
+ struct pci_dev *pdev;
+
+ /* metrics */
+ u32 tx_timeo_cnt;
+
+ /* Admin queue - see gve_adminq.h */
+ union gve_adminq_command *adminq;
+ dma_addr_t adminq_bus_addr;
+ u32 adminq_mask; /* masks prod_cnt to adminq size */
+ u32 adminq_prod_cnt; /* free-running count of AQ cmds executed */
+
+ struct workqueue_struct *gve_wq;
+ struct work_struct service_task;
+ unsigned long service_task_flags;
+ unsigned long state_flags;
+};
+
+enum gve_service_task_flags {
+ GVE_PRIV_FLAGS_DO_RESET = BIT(1),
+ GVE_PRIV_FLAGS_RESET_IN_PROGRESS = BIT(2),
+ GVE_PRIV_FLAGS_PROBE_IN_PROGRESS = BIT(3),
+};
+
+enum gve_state_flags {
+ GVE_PRIV_FLAGS_ADMIN_QUEUE_OK = BIT(1),
+ GVE_PRIV_FLAGS_DEVICE_RESOURCES_OK = BIT(2),
+ GVE_PRIV_FLAGS_DEVICE_RINGS_OK = BIT(3),
+ GVE_PRIV_FLAGS_NAPI_ENABLED = BIT(4),
+};
+
+static inline bool gve_get_do_reset(struct gve_priv *priv)
+{
+ return test_bit(GVE_PRIV_FLAGS_DO_RESET, &priv->service_task_flags);
+}
+
+static inline void gve_set_do_reset(struct gve_priv *priv)
+{
+ set_bit(GVE_PRIV_FLAGS_DO_RESET, &priv->service_task_flags);
+}
+
+static inline void gve_clear_do_reset(struct gve_priv *priv)
+{
+ clear_bit(GVE_PRIV_FLAGS_DO_RESET, &priv->service_task_flags);
+}
+
+static inline bool gve_get_reset_in_progress(struct gve_priv *priv)
+{
+ return test_bit(GVE_PRIV_FLAGS_RESET_IN_PROGRESS,
+ &priv->service_task_flags);
+}
+
+static inline void gve_set_reset_in_progress(struct gve_priv *priv)
+{
+ set_bit(GVE_PRIV_FLAGS_RESET_IN_PROGRESS, &priv->service_task_flags);
+}
+
+static inline void gve_clear_reset_in_progress(struct gve_priv *priv)
+{
+ clear_bit(GVE_PRIV_FLAGS_RESET_IN_PROGRESS, &priv->service_task_flags);
+}
+
+static inline bool gve_get_probe_in_progress(struct gve_priv *priv)
+{
+ return test_bit(GVE_PRIV_FLAGS_PROBE_IN_PROGRESS,
+ &priv->service_task_flags);
+}
+
+static inline void gve_set_probe_in_progress(struct gve_priv *priv)
+{
+ set_bit(GVE_PRIV_FLAGS_PROBE_IN_PROGRESS, &priv->service_task_flags);
+}
+
+static inline void gve_clear_probe_in_progress(struct gve_priv *priv)
+{
+ clear_bit(GVE_PRIV_FLAGS_PROBE_IN_PROGRESS, &priv->service_task_flags);
+}
+
+static inline bool gve_get_admin_queue_ok(struct gve_priv *priv)
+{
+ return test_bit(GVE_PRIV_FLAGS_ADMIN_QUEUE_OK, &priv->state_flags);
+}
+
+static inline void gve_set_admin_queue_ok(struct gve_priv *priv)
+{
+ set_bit(GVE_PRIV_FLAGS_ADMIN_QUEUE_OK, &priv->state_flags);
+}
+
+static inline void gve_clear_admin_queue_ok(struct gve_priv *priv)
+{
+ clear_bit(GVE_PRIV_FLAGS_ADMIN_QUEUE_OK, &priv->state_flags);
+}
+
+static inline bool gve_get_device_resources_ok(struct gve_priv *priv)
+{
+ return test_bit(GVE_PRIV_FLAGS_DEVICE_RESOURCES_OK, &priv->state_flags);
+}
+
+static inline void gve_set_device_resources_ok(struct gve_priv *priv)
+{
+ set_bit(GVE_PRIV_FLAGS_DEVICE_RESOURCES_OK, &priv->state_flags);
+}
+
+static inline void gve_clear_device_resources_ok(struct gve_priv *priv)
+{
+ clear_bit(GVE_PRIV_FLAGS_DEVICE_RESOURCES_OK, &priv->state_flags);
+}
+
+static inline bool gve_get_device_rings_ok(struct gve_priv *priv)
+{
+ return test_bit(GVE_PRIV_FLAGS_DEVICE_RINGS_OK, &priv->state_flags);
+}
+
+static inline void gve_set_device_rings_ok(struct gve_priv *priv)
+{
+ set_bit(GVE_PRIV_FLAGS_DEVICE_RINGS_OK, &priv->state_flags);
+}
+
+static inline void gve_clear_device_rings_ok(struct gve_priv *priv)
+{
+ clear_bit(GVE_PRIV_FLAGS_DEVICE_RINGS_OK, &priv->state_flags);
+}
+
+static inline bool gve_get_napi_enabled(struct gve_priv *priv)
+{
+ return test_bit(GVE_PRIV_FLAGS_NAPI_ENABLED, &priv->state_flags);
+}
+
+static inline void gve_set_napi_enabled(struct gve_priv *priv)
+{
+ set_bit(GVE_PRIV_FLAGS_NAPI_ENABLED, &priv->state_flags);
+}
+
+static inline void gve_clear_napi_enabled(struct gve_priv *priv)
+{
+ clear_bit(GVE_PRIV_FLAGS_NAPI_ENABLED, &priv->state_flags);
+}
+
+/* Returns the address of the ntfy_blocks irq doorbell
+ */
+static inline __be32 __iomem *gve_irq_doorbell(struct gve_priv *priv,
+ struct gve_notify_block *block)
+{
+ return &priv->db_bar2[be32_to_cpu(block->irq_db_index)];
+}
+
+/* Returns the index into ntfy_blocks of the given tx ring's block
+ */
+static inline u32 gve_tx_idx_to_ntfy(struct gve_priv *priv, u32 queue_idx)
+{
+ return queue_idx;
+}
+
+/* Returns the index into ntfy_blocks of the given rx ring's block
+ */
+static inline u32 gve_rx_idx_to_ntfy(struct gve_priv *priv, u32 queue_idx)
+{
+ return (priv->num_ntfy_blks / 2) + queue_idx;
+}
+
+/* Returns the number of tx queue page lists
+ */
+static inline u32 gve_num_tx_qpls(struct gve_priv *priv)
+{
+ return priv->tx_cfg.num_queues;
+}
+
+/* Returns the number of rx queue page lists
+ */
+static inline u32 gve_num_rx_qpls(struct gve_priv *priv)
+{
+ return priv->rx_cfg.num_queues;
+}
+
+/* Returns a pointer to the next available tx qpl in the list of qpls
+ */
+static inline
+struct gve_queue_page_list *gve_assign_tx_qpl(struct gve_priv *priv)
+{
+ int id = find_first_zero_bit(priv->qpl_cfg.qpl_id_map,
+ priv->qpl_cfg.qpl_map_size);
+
+ /* we are out of tx qpls */
+ if (id >= gve_num_tx_qpls(priv))
+ return NULL;
+
+ set_bit(id, priv->qpl_cfg.qpl_id_map);
+ return &priv->qpls[id];
+}
+
+/* Returns a pointer to the next available rx qpl in the list of qpls
+ */
+static inline
+struct gve_queue_page_list *gve_assign_rx_qpl(struct gve_priv *priv)
+{
+ int id = find_next_zero_bit(priv->qpl_cfg.qpl_id_map,
+ priv->qpl_cfg.qpl_map_size,
+ gve_num_tx_qpls(priv));
+
+ /* we are out of rx qpls */
+ if (id == priv->qpl_cfg.qpl_map_size)
+ return NULL;
+
+ set_bit(id, priv->qpl_cfg.qpl_id_map);
+ return &priv->qpls[id];
+}
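As the two allocators above imply, the qpl_id_map bitmap is laid out with all TX QPL ids first and the RX QPL ids after them, which is also what gve_qpl_dma_dir() later relies on; a small sketch with a hypothetical helper name:

/* Sketch only: QPL ids [0, num_tx_qpls) belong to TX queues and
 * [num_tx_qpls, num_tx_qpls + num_rx_qpls) to RX queues.
 */
static inline bool example_id_is_rx_qpl(struct gve_priv *priv, int id)
{
	return id >= gve_num_tx_qpls(priv) &&
	       id < gve_num_tx_qpls(priv) + gve_num_rx_qpls(priv);
}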
+
+/* Unassigns the qpl with the given id
+ */
+static inline void gve_unassign_qpl(struct gve_priv *priv, int id)
+{
+ clear_bit(id, priv->qpl_cfg.qpl_id_map);
+}
+
+/* Returns the correct dma direction for tx and rx qpls
+ */
+static inline enum dma_data_direction gve_qpl_dma_dir(struct gve_priv *priv,
+ int id)
+{
+ if (id < gve_num_tx_qpls(priv))
+ return DMA_TO_DEVICE;
+ else
+ return DMA_FROM_DEVICE;
+}
+
+/* Returns true if the max mtu allows page recycling */
+static inline bool gve_can_recycle_pages(struct net_device *dev)
+{
+ /* We can't recycle the pages if we can't fit a packet into half a
+ * page.
+ */
+ return dev->max_mtu <= PAGE_SIZE / 2;
+}
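+
+/* For example, on a system with 4 KiB pages recycling remains possible as
+ * long as the device reports a max MTU of 2048 bytes or less (PAGE_SIZE / 2).
+ */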
+
+/* buffers */
+int gve_alloc_page(struct device *dev, struct page **page, dma_addr_t *dma,
+ enum dma_data_direction);
+void gve_free_page(struct device *dev, struct page *page, dma_addr_t dma,
+ enum dma_data_direction);
+/* tx handling */
+netdev_tx_t gve_tx(struct sk_buff *skb, struct net_device *dev);
+bool gve_tx_poll(struct gve_notify_block *block, int budget);
+int gve_tx_alloc_rings(struct gve_priv *priv);
+void gve_tx_free_rings(struct gve_priv *priv);
+__be32 gve_tx_load_event_counter(struct gve_priv *priv,
+ struct gve_tx_ring *tx);
+/* rx handling */
+void gve_rx_write_doorbell(struct gve_priv *priv, struct gve_rx_ring *rx);
+bool gve_rx_poll(struct gve_notify_block *block, int budget);
+int gve_rx_alloc_rings(struct gve_priv *priv);
+void gve_rx_free_rings(struct gve_priv *priv);
+bool gve_clean_rx_done(struct gve_rx_ring *rx, int budget,
+ netdev_features_t feat);
+/* Reset */
+void gve_schedule_reset(struct gve_priv *priv);
+int gve_reset(struct gve_priv *priv, bool attempt_teardown);
+int gve_adjust_queues(struct gve_priv *priv,
+ struct gve_queue_config new_rx_config,
+ struct gve_queue_config new_tx_config);
+/* exported by ethtool.c */
+extern const struct ethtool_ops gve_ethtool_ops;
+/* needed by ethtool */
+extern const char gve_version_str[];
+#endif /* _GVE_H_ */
diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c
new file mode 100644
index 000000000000..c3ba7baf0107
--- /dev/null
+++ b/drivers/net/ethernet/google/gve/gve_adminq.c
@@ -0,0 +1,387 @@
+// SPDX-License-Identifier: (GPL-2.0 OR MIT)
+/* Google virtual Ethernet (gve) driver
+ *
+ * Copyright (C) 2015-2019 Google, Inc.
+ */
+
+#include <linux/etherdevice.h>
+#include <linux/pci.h>
+#include "gve.h"
+#include "gve_adminq.h"
+#include "gve_register.h"
+
+#define GVE_MAX_ADMINQ_RELEASE_CHECK 500
+#define GVE_ADMINQ_SLEEP_LEN 20
+#define GVE_MAX_ADMINQ_EVENT_COUNTER_CHECK 100
+
+int gve_adminq_alloc(struct device *dev, struct gve_priv *priv)
+{
+ priv->adminq = dma_alloc_coherent(dev, PAGE_SIZE,
+ &priv->adminq_bus_addr, GFP_KERNEL);
+ if (unlikely(!priv->adminq))
+ return -ENOMEM;
+
+ priv->adminq_mask = (PAGE_SIZE / sizeof(union gve_adminq_command)) - 1;
+ priv->adminq_prod_cnt = 0;
+
+ /* Setup Admin queue with the device */
+ iowrite32be(priv->adminq_bus_addr / PAGE_SIZE,
+ &priv->reg_bar0->adminq_pfn);
+
+ gve_set_admin_queue_ok(priv);
+ return 0;
+}
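+
+/* Example sizing: the admin queue occupies a single page, so with 4 KiB pages
+ * and 64-byte commands (see the static_assert on union gve_adminq_command)
+ * the ring holds 64 entries and adminq_mask is 63; adminq_prod_cnt is masked
+ * with it to index into the ring.
+ */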
+
+void gve_adminq_release(struct gve_priv *priv)
+{
+ int i = 0;
+
+ /* Tell the device the adminq is leaving */
+ iowrite32be(0x0, &priv->reg_bar0->adminq_pfn);
+ while (ioread32be(&priv->reg_bar0->adminq_pfn)) {
+ /* If this is reached the device is unrecoverable and still
+ * holding memory. Continue looping to avoid memory corruption,
+ * but WARN so the problem is visible.
+ */
+ if (i == GVE_MAX_ADMINQ_RELEASE_CHECK)
+ WARN(1, "Unrecoverable platform error!");
+ i++;
+ msleep(GVE_ADMINQ_SLEEP_LEN);
+ }
+ gve_clear_device_rings_ok(priv);
+ gve_clear_device_resources_ok(priv);
+ gve_clear_admin_queue_ok(priv);
+}
+
+void gve_adminq_free(struct device *dev, struct gve_priv *priv)
+{
+ if (!gve_get_admin_queue_ok(priv))
+ return;
+ gve_adminq_release(priv);
+ dma_free_coherent(dev, PAGE_SIZE, priv->adminq, priv->adminq_bus_addr);
+ gve_clear_admin_queue_ok(priv);
+}
+
+static void gve_adminq_kick_cmd(struct gve_priv *priv, u32 prod_cnt)
+{
+ iowrite32be(prod_cnt, &priv->reg_bar0->adminq_doorbell);
+}
+
+static bool gve_adminq_wait_for_cmd(struct gve_priv *priv, u32 prod_cnt)
+{
+ int i;
+
+ for (i = 0; i < GVE_MAX_ADMINQ_EVENT_COUNTER_CHECK; i++) {
+ if (ioread32be(&priv->reg_bar0->adminq_event_counter)
+ == prod_cnt)
+ return true;
+ msleep(GVE_ADMINQ_SLEEP_LEN);
+ }
+
+ return false;
+}
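+
+/* A command is polled GVE_MAX_ADMINQ_EVENT_COUNTER_CHECK times with a
+ * GVE_ADMINQ_SLEEP_LEN ms sleep between checks, i.e. roughly
+ * 100 * 20 ms = 2 seconds before it is reported as timed out.
+ */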
+
+static int gve_adminq_parse_err(struct device *dev, u32 status)
+{
+ if (status != GVE_ADMINQ_COMMAND_PASSED &&
+ status != GVE_ADMINQ_COMMAND_UNSET)
+ dev_err(dev, "AQ command failed with status %d\n", status);
+
+ switch (status) {
+ case GVE_ADMINQ_COMMAND_PASSED:
+ return 0;
+ case GVE_ADMINQ_COMMAND_UNSET:
+ dev_err(dev, "parse_aq_err: err and status both unset, this should not be possible.\n");
+ return -EINVAL;
+ case GVE_ADMINQ_COMMAND_ERROR_ABORTED:
+ case GVE_ADMINQ_COMMAND_ERROR_CANCELLED:
+ case GVE_ADMINQ_COMMAND_ERROR_DATALOSS:
+ case GVE_ADMINQ_COMMAND_ERROR_FAILED_PRECONDITION:
+ case GVE_ADMINQ_COMMAND_ERROR_UNAVAILABLE:
+ return -EAGAIN;
+ case GVE_ADMINQ_COMMAND_ERROR_ALREADY_EXISTS:
+ case GVE_ADMINQ_COMMAND_ERROR_INTERNAL_ERROR:
+ case GVE_ADMINQ_COMMAND_ERROR_INVALID_ARGUMENT:
+ case GVE_ADMINQ_COMMAND_ERROR_NOT_FOUND:
+ case GVE_ADMINQ_COMMAND_ERROR_OUT_OF_RANGE:
+ case GVE_ADMINQ_COMMAND_ERROR_UNKNOWN_ERROR:
+ return -EINVAL;
+ case GVE_ADMINQ_COMMAND_ERROR_DEADLINE_EXCEEDED:
+ return -ETIME;
+ case GVE_ADMINQ_COMMAND_ERROR_PERMISSION_DENIED:
+ case GVE_ADMINQ_COMMAND_ERROR_UNAUTHENTICATED:
+ return -EACCES;
+ case GVE_ADMINQ_COMMAND_ERROR_RESOURCE_EXHAUSTED:
+ return -ENOMEM;
+ case GVE_ADMINQ_COMMAND_ERROR_UNIMPLEMENTED:
+ return -ENOTSUPP;
+ default:
+ dev_err(dev, "parse_aq_err: unknown status code %d\n", status);
+ return -EINVAL;
+ }
+}
+
+/* This function is not threadsafe - the caller is responsible for any
+ * necessary locks.
+ */
+int gve_adminq_execute_cmd(struct gve_priv *priv,
+ union gve_adminq_command *cmd_orig)
+{
+ union gve_adminq_command *cmd;
+ u32 status = 0;
+ u32 prod_cnt;
+
+ cmd = &priv->adminq[priv->adminq_prod_cnt & priv->adminq_mask];
+ priv->adminq_prod_cnt++;
+ prod_cnt = priv->adminq_prod_cnt;
+
+ memcpy(cmd, cmd_orig, sizeof(*cmd_orig));
+
+ gve_adminq_kick_cmd(priv, prod_cnt);
+ if (!gve_adminq_wait_for_cmd(priv, prod_cnt)) {
+ dev_err(&priv->pdev->dev, "AQ command timed out, need to reset AQ\n");
+ return -ENOTRECOVERABLE;
+ }
+
+ memcpy(cmd_orig, cmd, sizeof(*cmd));
+ status = be32_to_cpu(READ_ONCE(cmd->status));
+ return gve_adminq_parse_err(&priv->pdev->dev, status);
+}
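+
+/* Completion handshake: gve_adminq_kick_cmd() writes the new producer count
+ * to the doorbell and the device advances adminq_event_counter as it finishes
+ * commands, so the event counter reaching prod_cnt signals that the command
+ * just posted has completed and its status field is valid.
+ */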
+
+/* The device specifies that the management vector can either be the first irq
+ * or the last irq. ntfy_blk_msix_base_idx indicates the first irq assigned to
+ * the ntfy blks. If it is 0 then the management vector is last; if it is 1 then
+ * the management vector is first.
+ *
+ * gve arranges the msix vectors so that the management vector is last.
+ */
+#define GVE_NTFY_BLK_BASE_MSIX_IDX 0
+int gve_adminq_configure_device_resources(struct gve_priv *priv,
+ dma_addr_t counter_array_bus_addr,
+ u32 num_counters,
+ dma_addr_t db_array_bus_addr,
+ u32 num_ntfy_blks)
+{
+ union gve_adminq_command cmd;
+
+ memset(&cmd, 0, sizeof(cmd));
+ cmd.opcode = cpu_to_be32(GVE_ADMINQ_CONFIGURE_DEVICE_RESOURCES);
+ cmd.configure_device_resources =
+ (struct gve_adminq_configure_device_resources) {
+ .counter_array = cpu_to_be64(counter_array_bus_addr),
+ .num_counters = cpu_to_be32(num_counters),
+ .irq_db_addr = cpu_to_be64(db_array_bus_addr),
+ .num_irq_dbs = cpu_to_be32(num_ntfy_blks),
+ .irq_db_stride = cpu_to_be32(sizeof(priv->ntfy_blocks[0])),
+ .ntfy_blk_msix_base_idx =
+ cpu_to_be32(GVE_NTFY_BLK_BASE_MSIX_IDX),
+ };
+
+ return gve_adminq_execute_cmd(priv, &cmd);
+}
+
+int gve_adminq_deconfigure_device_resources(struct gve_priv *priv)
+{
+ union gve_adminq_command cmd;
+
+ memset(&cmd, 0, sizeof(cmd));
+ cmd.opcode = cpu_to_be32(GVE_ADMINQ_DECONFIGURE_DEVICE_RESOURCES);
+
+ return gve_adminq_execute_cmd(priv, &cmd);
+}
+
+int gve_adminq_create_tx_queue(struct gve_priv *priv, u32 queue_index)
+{
+ struct gve_tx_ring *tx = &priv->tx[queue_index];
+ union gve_adminq_command cmd;
+
+ memset(&cmd, 0, sizeof(cmd));
+ cmd.opcode = cpu_to_be32(GVE_ADMINQ_CREATE_TX_QUEUE);
+ cmd.create_tx_queue = (struct gve_adminq_create_tx_queue) {
+ .queue_id = cpu_to_be32(queue_index),
+ .reserved = 0,
+ .queue_resources_addr = cpu_to_be64(tx->q_resources_bus),
+ .tx_ring_addr = cpu_to_be64(tx->bus),
+ .queue_page_list_id = cpu_to_be32(tx->tx_fifo.qpl->id),
+ .ntfy_id = cpu_to_be32(tx->ntfy_id),
+ };
+
+ return gve_adminq_execute_cmd(priv, &cmd);
+}
+
+int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_index)
+{
+ struct gve_rx_ring *rx = &priv->rx[queue_index];
+ union gve_adminq_command cmd;
+
+ memset(&cmd, 0, sizeof(cmd));
+ cmd.opcode = cpu_to_be32(GVE_ADMINQ_CREATE_RX_QUEUE);
+ cmd.create_rx_queue = (struct gve_adminq_create_rx_queue) {
+ .queue_id = cpu_to_be32(queue_index),
+ .index = cpu_to_be32(queue_index),
+ .reserved = 0,
+ .ntfy_id = cpu_to_be32(rx->ntfy_id),
+ .queue_resources_addr = cpu_to_be64(rx->q_resources_bus),
+ .rx_desc_ring_addr = cpu_to_be64(rx->desc.bus),
+ .rx_data_ring_addr = cpu_to_be64(rx->data.data_bus),
+ .queue_page_list_id = cpu_to_be32(rx->data.qpl->id),
+ };
+
+ return gve_adminq_execute_cmd(priv, &cmd);
+}
+
+int gve_adminq_destroy_tx_queue(struct gve_priv *priv, u32 queue_index)
+{
+ union gve_adminq_command cmd;
+
+ memset(&cmd, 0, sizeof(cmd));
+ cmd.opcode = cpu_to_be32(GVE_ADMINQ_DESTROY_TX_QUEUE);
+ cmd.destroy_tx_queue = (struct gve_adminq_destroy_tx_queue) {
+ .queue_id = cpu_to_be32(queue_index),
+ };
+
+ return gve_adminq_execute_cmd(priv, &cmd);
+}
+
+int gve_adminq_destroy_rx_queue(struct gve_priv *priv, u32 queue_index)
+{
+ union gve_adminq_command cmd;
+
+ memset(&cmd, 0, sizeof(cmd));
+ cmd.opcode = cpu_to_be32(GVE_ADMINQ_DESTROY_RX_QUEUE);
+ cmd.destroy_rx_queue = (struct gve_adminq_destroy_rx_queue) {
+ .queue_id = cpu_to_be32(queue_index),
+ };
+
+ return gve_adminq_execute_cmd(priv, &cmd);
+}
+
+int gve_adminq_describe_device(struct gve_priv *priv)
+{
+ struct gve_device_descriptor *descriptor;
+ union gve_adminq_command cmd;
+ dma_addr_t descriptor_bus;
+ int err = 0;
+ u8 *mac;
+ u16 mtu;
+
+ memset(&cmd, 0, sizeof(cmd));
+ descriptor = dma_alloc_coherent(&priv->pdev->dev, PAGE_SIZE,
+ &descriptor_bus, GFP_KERNEL);
+ if (!descriptor)
+ return -ENOMEM;
+ cmd.opcode = cpu_to_be32(GVE_ADMINQ_DESCRIBE_DEVICE);
+ cmd.describe_device.device_descriptor_addr =
+ cpu_to_be64(descriptor_bus);
+ cmd.describe_device.device_descriptor_version =
+ cpu_to_be32(GVE_ADMINQ_DEVICE_DESCRIPTOR_VERSION);
+ cmd.describe_device.available_length = cpu_to_be32(PAGE_SIZE);
+
+ err = gve_adminq_execute_cmd(priv, &cmd);
+ if (err)
+ goto free_device_descriptor;
+
+ priv->tx_desc_cnt = be16_to_cpu(descriptor->tx_queue_entries);
+ if (priv->tx_desc_cnt * sizeof(priv->tx->desc[0]) < PAGE_SIZE) {
+ netif_err(priv, drv, priv->dev, "Tx desc count %d too low\n",
+ priv->tx_desc_cnt);
+ err = -EINVAL;
+ goto free_device_descriptor;
+ }
+ priv->rx_desc_cnt = be16_to_cpu(descriptor->rx_queue_entries);
+ if (priv->rx_desc_cnt * sizeof(priv->rx->desc.desc_ring[0])
+ < PAGE_SIZE ||
+ priv->rx_desc_cnt * sizeof(priv->rx->data.data_ring[0])
+ < PAGE_SIZE) {
+ netif_err(priv, drv, priv->dev, "Rx desc count %d too low\n",
+ priv->rx_desc_cnt);
+ err = -EINVAL;
+ goto free_device_descriptor;
+ }
+ priv->max_registered_pages =
+ be64_to_cpu(descriptor->max_registered_pages);
+ mtu = be16_to_cpu(descriptor->mtu);
+ if (mtu < ETH_MIN_MTU) {
+ netif_err(priv, drv, priv->dev, "MTU %d below minimum MTU\n",
+ mtu);
+ err = -EINVAL;
+ goto free_device_descriptor;
+ }
+ priv->dev->max_mtu = mtu;
+ priv->num_event_counters = be16_to_cpu(descriptor->counters);
+ ether_addr_copy(priv->dev->dev_addr, descriptor->mac);
+ mac = descriptor->mac;
+ netif_info(priv, drv, priv->dev, "MAC addr: %pM\n", mac);
+ priv->tx_pages_per_qpl = be16_to_cpu(descriptor->tx_pages_per_qpl);
+ priv->rx_pages_per_qpl = be16_to_cpu(descriptor->rx_pages_per_qpl);
+ if (priv->rx_pages_per_qpl < priv->rx_desc_cnt) {
+ netif_err(priv, drv, priv->dev, "rx_pages_per_qpl cannot be smaller than rx_desc_cnt, setting rx_desc_cnt down to %d.\n",
+ priv->rx_pages_per_qpl);
+ priv->rx_desc_cnt = priv->rx_pages_per_qpl;
+ }
+ priv->default_num_queues = be16_to_cpu(descriptor->default_num_queues);
+
+free_device_descriptor:
+ dma_free_coherent(&priv->pdev->dev, PAGE_SIZE, descriptor,
+ descriptor_bus);
+ return err;
+}
+
+int gve_adminq_register_page_list(struct gve_priv *priv,
+ struct gve_queue_page_list *qpl)
+{
+ struct device *hdev = &priv->pdev->dev;
+ u32 num_entries = qpl->num_entries;
+ u32 size = num_entries * sizeof(qpl->page_buses[0]);
+ union gve_adminq_command cmd;
+ dma_addr_t page_list_bus;
+ __be64 *page_list;
+ int err;
+ int i;
+
+ memset(&cmd, 0, sizeof(cmd));
+ page_list = dma_alloc_coherent(hdev, size, &page_list_bus, GFP_KERNEL);
+ if (!page_list)
+ return -ENOMEM;
+
+ for (i = 0; i < num_entries; i++)
+ page_list[i] = cpu_to_be64(qpl->page_buses[i]);
+
+ cmd.opcode = cpu_to_be32(GVE_ADMINQ_REGISTER_PAGE_LIST);
+ cmd.reg_page_list = (struct gve_adminq_register_page_list) {
+ .page_list_id = cpu_to_be32(qpl->id),
+ .num_pages = cpu_to_be32(num_entries),
+ .page_address_list_addr = cpu_to_be64(page_list_bus),
+ };
+
+ err = gve_adminq_execute_cmd(priv, &cmd);
+ dma_free_coherent(hdev, size, page_list, page_list_bus);
+ return err;
+}
+
+int gve_adminq_unregister_page_list(struct gve_priv *priv, u32 page_list_id)
+{
+ union gve_adminq_command cmd;
+
+ memset(&cmd, 0, sizeof(cmd));
+ cmd.opcode = cpu_to_be32(GVE_ADMINQ_UNREGISTER_PAGE_LIST);
+ cmd.unreg_page_list = (struct gve_adminq_unregister_page_list) {
+ .page_list_id = cpu_to_be32(page_list_id),
+ };
+
+ return gve_adminq_execute_cmd(priv, &cmd);
+}
+
+int gve_adminq_set_mtu(struct gve_priv *priv, u64 mtu)
+{
+ union gve_adminq_command cmd;
+
+ memset(&cmd, 0, sizeof(cmd));
+ cmd.opcode = cpu_to_be32(GVE_ADMINQ_SET_DRIVER_PARAMETER);
+ cmd.set_driver_param = (struct gve_adminq_set_driver_parameter) {
+ .parameter_type = cpu_to_be32(GVE_SET_PARAM_MTU),
+ .parameter_value = cpu_to_be64(mtu),
+ };
+
+ return gve_adminq_execute_cmd(priv, &cmd);
+}
diff --git a/drivers/net/ethernet/google/gve/gve_adminq.h b/drivers/net/ethernet/google/gve/gve_adminq.h
new file mode 100644
index 000000000000..4dfa06edc0f8
--- /dev/null
+++ b/drivers/net/ethernet/google/gve/gve_adminq.h
@@ -0,0 +1,217 @@
+/* SPDX-License-Identifier: (GPL-2.0 OR MIT)
+ * Google virtual Ethernet (gve) driver
+ *
+ * Copyright (C) 2015-2019 Google, Inc.
+ */
+
+#ifndef _GVE_ADMINQ_H
+#define _GVE_ADMINQ_H
+
+#include <linux/build_bug.h>
+
+/* Admin queue opcodes */
+enum gve_adminq_opcodes {
+ GVE_ADMINQ_DESCRIBE_DEVICE = 0x1,
+ GVE_ADMINQ_CONFIGURE_DEVICE_RESOURCES = 0x2,
+ GVE_ADMINQ_REGISTER_PAGE_LIST = 0x3,
+ GVE_ADMINQ_UNREGISTER_PAGE_LIST = 0x4,
+ GVE_ADMINQ_CREATE_TX_QUEUE = 0x5,
+ GVE_ADMINQ_CREATE_RX_QUEUE = 0x6,
+ GVE_ADMINQ_DESTROY_TX_QUEUE = 0x7,
+ GVE_ADMINQ_DESTROY_RX_QUEUE = 0x8,
+ GVE_ADMINQ_DECONFIGURE_DEVICE_RESOURCES = 0x9,
+ GVE_ADMINQ_SET_DRIVER_PARAMETER = 0xB,
+};
+
+/* Admin queue status codes */
+enum gve_adminq_statuses {
+ GVE_ADMINQ_COMMAND_UNSET = 0x0,
+ GVE_ADMINQ_COMMAND_PASSED = 0x1,
+ GVE_ADMINQ_COMMAND_ERROR_ABORTED = 0xFFFFFFF0,
+ GVE_ADMINQ_COMMAND_ERROR_ALREADY_EXISTS = 0xFFFFFFF1,
+ GVE_ADMINQ_COMMAND_ERROR_CANCELLED = 0xFFFFFFF2,
+ GVE_ADMINQ_COMMAND_ERROR_DATALOSS = 0xFFFFFFF3,
+ GVE_ADMINQ_COMMAND_ERROR_DEADLINE_EXCEEDED = 0xFFFFFFF4,
+ GVE_ADMINQ_COMMAND_ERROR_FAILED_PRECONDITION = 0xFFFFFFF5,
+ GVE_ADMINQ_COMMAND_ERROR_INTERNAL_ERROR = 0xFFFFFFF6,
+ GVE_ADMINQ_COMMAND_ERROR_INVALID_ARGUMENT = 0xFFFFFFF7,
+ GVE_ADMINQ_COMMAND_ERROR_NOT_FOUND = 0xFFFFFFF8,
+ GVE_ADMINQ_COMMAND_ERROR_OUT_OF_RANGE = 0xFFFFFFF9,
+ GVE_ADMINQ_COMMAND_ERROR_PERMISSION_DENIED = 0xFFFFFFFA,
+ GVE_ADMINQ_COMMAND_ERROR_UNAUTHENTICATED = 0xFFFFFFFB,
+ GVE_ADMINQ_COMMAND_ERROR_RESOURCE_EXHAUSTED = 0xFFFFFFFC,
+ GVE_ADMINQ_COMMAND_ERROR_UNAVAILABLE = 0xFFFFFFFD,
+ GVE_ADMINQ_COMMAND_ERROR_UNIMPLEMENTED = 0xFFFFFFFE,
+ GVE_ADMINQ_COMMAND_ERROR_UNKNOWN_ERROR = 0xFFFFFFFF,
+};
+
+#define GVE_ADMINQ_DEVICE_DESCRIPTOR_VERSION 1
+
+/* All AdminQ command structs should be naturally packed. The static_assert
+ * calls make sure this is the case at compile time.
+ */
+
+struct gve_adminq_describe_device {
+ __be64 device_descriptor_addr;
+ __be32 device_descriptor_version;
+ __be32 available_length;
+};
+
+static_assert(sizeof(struct gve_adminq_describe_device) == 16);
+
+struct gve_device_descriptor {
+ __be64 max_registered_pages;
+ __be16 reserved1;
+ __be16 tx_queue_entries;
+ __be16 rx_queue_entries;
+ __be16 default_num_queues;
+ __be16 mtu;
+ __be16 counters;
+ __be16 tx_pages_per_qpl;
+ __be16 rx_pages_per_qpl;
+ u8 mac[ETH_ALEN];
+ __be16 num_device_options;
+ __be16 total_length;
+ u8 reserved2[6];
+};
+
+static_assert(sizeof(struct gve_device_descriptor) == 40);
+
+struct device_option {
+ __be32 option_id;
+ __be32 option_length;
+};
+
+static_assert(sizeof(struct device_option) == 8);
+
+struct gve_adminq_configure_device_resources {
+ __be64 counter_array;
+ __be64 irq_db_addr;
+ __be32 num_counters;
+ __be32 num_irq_dbs;
+ __be32 irq_db_stride;
+ __be32 ntfy_blk_msix_base_idx;
+};
+
+static_assert(sizeof(struct gve_adminq_configure_device_resources) == 32);
+
+struct gve_adminq_register_page_list {
+ __be32 page_list_id;
+ __be32 num_pages;
+ __be64 page_address_list_addr;
+};
+
+static_assert(sizeof(struct gve_adminq_register_page_list) == 16);
+
+struct gve_adminq_unregister_page_list {
+ __be32 page_list_id;
+};
+
+static_assert(sizeof(struct gve_adminq_unregister_page_list) == 4);
+
+struct gve_adminq_create_tx_queue {
+ __be32 queue_id;
+ __be32 reserved;
+ __be64 queue_resources_addr;
+ __be64 tx_ring_addr;
+ __be32 queue_page_list_id;
+ __be32 ntfy_id;
+};
+
+static_assert(sizeof(struct gve_adminq_create_tx_queue) == 32);
+
+struct gve_adminq_create_rx_queue {
+ __be32 queue_id;
+ __be32 index;
+ __be32 reserved;
+ __be32 ntfy_id;
+ __be64 queue_resources_addr;
+ __be64 rx_desc_ring_addr;
+ __be64 rx_data_ring_addr;
+ __be32 queue_page_list_id;
+ u8 padding[4];
+};
+
+static_assert(sizeof(struct gve_adminq_create_rx_queue) == 48);
+
+/* Queue resources that are shared with the device */
+struct gve_queue_resources {
+ union {
+ struct {
+ __be32 db_index; /* Device -> Guest */
+ __be32 counter_index; /* Device -> Guest */
+ };
+ u8 reserved[64];
+ };
+};
+
+static_assert(sizeof(struct gve_queue_resources) == 64);
+
+struct gve_adminq_destroy_tx_queue {
+ __be32 queue_id;
+};
+
+static_assert(sizeof(struct gve_adminq_destroy_tx_queue) == 4);
+
+struct gve_adminq_destroy_rx_queue {
+ __be32 queue_id;
+};
+
+static_assert(sizeof(struct gve_adminq_destroy_rx_queue) == 4);
+
+/* GVE Set Driver Parameter Types */
+enum gve_set_driver_param_types {
+ GVE_SET_PARAM_MTU = 0x1,
+};
+
+struct gve_adminq_set_driver_parameter {
+ __be32 parameter_type;
+ u8 reserved[4];
+ __be64 parameter_value;
+};
+
+static_assert(sizeof(struct gve_adminq_set_driver_parameter) == 16);
+
+union gve_adminq_command {
+ struct {
+ __be32 opcode;
+ __be32 status;
+ union {
+ struct gve_adminq_configure_device_resources
+ configure_device_resources;
+ struct gve_adminq_create_tx_queue create_tx_queue;
+ struct gve_adminq_create_rx_queue create_rx_queue;
+ struct gve_adminq_destroy_tx_queue destroy_tx_queue;
+ struct gve_adminq_destroy_rx_queue destroy_rx_queue;
+ struct gve_adminq_describe_device describe_device;
+ struct gve_adminq_register_page_list reg_page_list;
+ struct gve_adminq_unregister_page_list unreg_page_list;
+ struct gve_adminq_set_driver_parameter set_driver_param;
+ };
+ };
+ u8 reserved[64];
+};
+
+static_assert(sizeof(union gve_adminq_command) == 64);
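+
+/* Every opcode shares the same 64-byte slot: 4 bytes of opcode plus 4 bytes
+ * of status plus the largest payload (struct gve_adminq_create_rx_queue,
+ * 48 bytes) totals 56 bytes, padded out to the 64 bytes asserted above.
+ */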
+
+int gve_adminq_alloc(struct device *dev, struct gve_priv *priv);
+void gve_adminq_free(struct device *dev, struct gve_priv *priv);
+void gve_adminq_release(struct gve_priv *priv);
+int gve_adminq_execute_cmd(struct gve_priv *priv,
+ union gve_adminq_command *cmd_orig);
+int gve_adminq_describe_device(struct gve_priv *priv);
+int gve_adminq_configure_device_resources(struct gve_priv *priv,
+ dma_addr_t counter_array_bus_addr,
+ u32 num_counters,
+ dma_addr_t db_array_bus_addr,
+ u32 num_ntfy_blks);
+int gve_adminq_deconfigure_device_resources(struct gve_priv *priv);
+int gve_adminq_create_tx_queue(struct gve_priv *priv, u32 queue_id);
+int gve_adminq_destroy_tx_queue(struct gve_priv *priv, u32 queue_id);
+int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_id);
+int gve_adminq_destroy_rx_queue(struct gve_priv *priv, u32 queue_id);
+int gve_adminq_register_page_list(struct gve_priv *priv,
+ struct gve_queue_page_list *qpl);
+int gve_adminq_unregister_page_list(struct gve_priv *priv, u32 page_list_id);
+int gve_adminq_set_mtu(struct gve_priv *priv, u64 mtu);
+#endif /* _GVE_ADMINQ_H */
diff --git a/drivers/net/ethernet/google/gve/gve_desc.h b/drivers/net/ethernet/google/gve/gve_desc.h
new file mode 100644
index 000000000000..54779871d52e
--- /dev/null
+++ b/drivers/net/ethernet/google/gve/gve_desc.h
@@ -0,0 +1,113 @@
+/* SPDX-License-Identifier: (GPL-2.0 OR MIT)
+ * Google virtual Ethernet (gve) driver
+ *
+ * Copyright (C) 2015-2019 Google, Inc.
+ */
+
+/* GVE Transmit Descriptor formats */
+
+#ifndef _GVE_DESC_H_
+#define _GVE_DESC_H_
+
+#include <linux/build_bug.h>
+
+/* A note on seg_addrs
+ *
+ * Base addresses encoded in seg_addr are not assumed to be physical
+ * addresses. The ring format assumes these come from some linear address
+ * space. This could be physical memory, kernel virtual memory, or user
+ * virtual memory. gVNIC uses lists of registered pages. Each queue is assumed
+ * to be associated with a single such linear address space to ensure a
+ * consistent meaning for seg_addrs posted to its rings.
+ */
+
+struct gve_tx_pkt_desc {
+ u8 type_flags; /* desc type is lower 4 bits, flags upper */
+ u8 l4_csum_offset; /* relative offset of L4 csum word */
+ u8 l4_hdr_offset; /* Offset of start of L4 headers in packet */
+ u8 desc_cnt; /* Total descriptors for this packet */
+ __be16 len; /* Total length of this packet (in bytes) */
+ __be16 seg_len; /* Length of this descriptor's segment */
+ __be64 seg_addr; /* Base address (see note) of this segment */
+} __packed;
+
+struct gve_tx_seg_desc {
+ u8 type_flags; /* type is lower 4 bits, flags upper */
+ u8 l3_offset; /* TSO: 2 byte units to start of IPH */
+ __be16 reserved;
+ __be16 mss; /* TSO MSS */
+ __be16 seg_len;
+ __be64 seg_addr;
+} __packed;
+
+/* GVE Transmit Descriptor Types */
+#define GVE_TXD_STD (0x0 << 4) /* Std with Host Address */
+#define GVE_TXD_TSO (0x1 << 4) /* TSO with Host Address */
+#define GVE_TXD_SEG (0x2 << 4) /* Seg with Host Address */
+
+/* GVE Transmit Descriptor Flags for Std Pkts */
+#define GVE_TXF_L4CSUM BIT(0) /* Need csum offload */
+#define GVE_TXF_TSTAMP BIT(2) /* Timestamp required */
+
+/* GVE Transmit Descriptor Flags for TSO Segs */
+#define GVE_TXSF_IPV6 BIT(1) /* IPv6 TSO */
+
+/* GVE Receive Packet Descriptor */
+/* The start of an ethernet packet comes 2 bytes into the rx buffer.
+ * gVNIC adds this padding so that both the DMA and the L3/4 protocol header
+ * accesses are aligned.
+ */
+#define GVE_RX_PAD 2
+
+struct gve_rx_desc {
+ u8 padding[48];
+ __be32 rss_hash; /* Receive-side scaling hash (Toeplitz for gVNIC) */
+ __be16 mss;
+ __be16 reserved; /* Reserved to zero */
+ u8 hdr_len; /* Header length (L2-L4) including padding */
+ u8 hdr_off; /* 64-byte-scaled offset into RX_DATA entry */
+ __sum16 csum; /* 1's-complement partial checksum of L3+ bytes */
+ __be16 len; /* Length of the received packet */
+ __be16 flags_seq; /* Flags [15:3] and sequence number [2:0] (1-7) */
+} __packed;
+static_assert(sizeof(struct gve_rx_desc) == 64);
+
+/* As with the Tx ring format, the qpl_offset entries below are offsets into an
+ * ordered list of registered pages.
+ */
+struct gve_rx_data_slot {
+ /* byte offset into the rx registered segment of this slot */
+ __be64 qpl_offset;
+};
+
+/* GVE Receive Packet Descriptor Seq No */
+#define GVE_SEQNO(x) (be16_to_cpu(x) & 0x7)
+
+/* GVE Receive Packet Descriptor Flags */
+#define GVE_RXFLG(x) cpu_to_be16(1 << (3 + (x)))
+#define GVE_RXF_FRAG GVE_RXFLG(3) /* IP Fragment */
+#define GVE_RXF_IPV4 GVE_RXFLG(4) /* IPv4 */
+#define GVE_RXF_IPV6 GVE_RXFLG(5) /* IPv6 */
+#define GVE_RXF_TCP GVE_RXFLG(6) /* TCP Packet */
+#define GVE_RXF_UDP GVE_RXFLG(7) /* UDP Packet */
+#define GVE_RXF_ERR GVE_RXFLG(8) /* Packet Error Detected */
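+
+/* GVE_RXFLG(x) sets bit (3 + x) of the big-endian flags_seq word, matching
+ * the gve_rx_desc layout above where flags occupy bits 15:3 and the sequence
+ * number bits 2:0. For example, GVE_RXF_IPV4 is bit 7.
+ */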
+
+/* GVE IRQ */
+#define GVE_IRQ_ACK BIT(31)
+#define GVE_IRQ_MASK BIT(30)
+#define GVE_IRQ_EVENT BIT(29)
+
+static inline bool gve_needs_rss(__be16 flag)
+{
+ if (flag & GVE_RXF_FRAG)
+ return false;
+ if (flag & (GVE_RXF_IPV4 | GVE_RXF_IPV6))
+ return true;
+ return false;
+}
+
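+/* Sequence numbers are 3 bits wide but only the values 1-7 are used (see
+ * flags_seq above), so the increment wraps from 7 back to 1, skipping 0.
+ */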
+static inline u8 gve_next_seqno(u8 seq)
+{
+ return (seq + 1) == 8 ? 1 : seq + 1;
+}
+#endif /* _GVE_DESC_H_ */
diff --git a/drivers/net/ethernet/google/gve/gve_ethtool.c b/drivers/net/ethernet/google/gve/gve_ethtool.c
new file mode 100644
index 000000000000..26540b856541
--- /dev/null
+++ b/drivers/net/ethernet/google/gve/gve_ethtool.c
@@ -0,0 +1,245 @@
+// SPDX-License-Identifier: (GPL-2.0 OR MIT)
+/* Google virtual Ethernet (gve) driver
+ *
+ * Copyright (C) 2015-2019 Google, Inc.
+ */
+
+#include <linux/rtnetlink.h>
+#include "gve.h"
+
+static void gve_get_drvinfo(struct net_device *netdev,
+ struct ethtool_drvinfo *info)
+{
+ struct gve_priv *priv = netdev_priv(netdev);
+
+ strlcpy(info->driver, "gve", sizeof(info->driver));
+ strlcpy(info->version, gve_version_str, sizeof(info->version));
+ strlcpy(info->bus_info, pci_name(priv->pdev), sizeof(info->bus_info));
+}
+
+static void gve_set_msglevel(struct net_device *netdev, u32 value)
+{
+ struct gve_priv *priv = netdev_priv(netdev);
+
+ priv->msg_enable = value;
+}
+
+static u32 gve_get_msglevel(struct net_device *netdev)
+{
+ struct gve_priv *priv = netdev_priv(netdev);
+
+ return priv->msg_enable;
+}
+
+static const char gve_gstrings_main_stats[][ETH_GSTRING_LEN] = {
+ "rx_packets", "tx_packets", "rx_bytes", "tx_bytes",
+ "rx_dropped", "tx_dropped", "tx_timeouts",
+};
+
+#define GVE_MAIN_STATS_LEN ARRAY_SIZE(gve_gstrings_main_stats)
+#define NUM_GVE_TX_CNTS 5
+#define NUM_GVE_RX_CNTS 2
+
+static void gve_get_strings(struct net_device *netdev, u32 stringset, u8 *data)
+{
+ struct gve_priv *priv = netdev_priv(netdev);
+ char *s = (char *)data;
+ int i;
+
+ if (stringset != ETH_SS_STATS)
+ return;
+
+ memcpy(s, *gve_gstrings_main_stats,
+ sizeof(gve_gstrings_main_stats));
+ s += sizeof(gve_gstrings_main_stats);
+ for (i = 0; i < priv->rx_cfg.num_queues; i++) {
+ snprintf(s, ETH_GSTRING_LEN, "rx_desc_cnt[%u]", i);
+ s += ETH_GSTRING_LEN;
+ snprintf(s, ETH_GSTRING_LEN, "rx_desc_fill_cnt[%u]", i);
+ s += ETH_GSTRING_LEN;
+ }
+ for (i = 0; i < priv->tx_cfg.num_queues; i++) {
+ snprintf(s, ETH_GSTRING_LEN, "tx_req[%u]", i);
+ s += ETH_GSTRING_LEN;
+ snprintf(s, ETH_GSTRING_LEN, "tx_done[%u]", i);
+ s += ETH_GSTRING_LEN;
+ snprintf(s, ETH_GSTRING_LEN, "tx_wake[%u]", i);
+ s += ETH_GSTRING_LEN;
+ snprintf(s, ETH_GSTRING_LEN, "tx_stop[%u]", i);
+ s += ETH_GSTRING_LEN;
+ snprintf(s, ETH_GSTRING_LEN, "tx_event_counter[%u]", i);
+ s += ETH_GSTRING_LEN;
+ }
+}
+
+static int gve_get_sset_count(struct net_device *netdev, int sset)
+{
+ struct gve_priv *priv = netdev_priv(netdev);
+
+ switch (sset) {
+ case ETH_SS_STATS:
+ return GVE_MAIN_STATS_LEN +
+ (priv->rx_cfg.num_queues * NUM_GVE_RX_CNTS) +
+ (priv->tx_cfg.num_queues * NUM_GVE_TX_CNTS);
+ default:
+ return -EOPNOTSUPP;
+ }
+}
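+
+/* For example, with 4 rx and 4 tx queues ethtool sees
+ * 7 + 4 * NUM_GVE_RX_CNTS + 4 * NUM_GVE_TX_CNTS = 7 + 8 + 20 = 35 stats,
+ * matching the strings emitted by gve_get_strings() above.
+ */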
+
+static void
+gve_get_ethtool_stats(struct net_device *netdev,
+ struct ethtool_stats *stats, u64 *data)
+{
+ struct gve_priv *priv = netdev_priv(netdev);
+ u64 rx_pkts, rx_bytes, tx_pkts, tx_bytes;
+ unsigned int start;
+ int ring;
+ int i;
+
+ ASSERT_RTNL();
+
+ for (rx_pkts = 0, rx_bytes = 0, ring = 0;
+ ring < priv->rx_cfg.num_queues; ring++) {
+ if (priv->rx) {
+ do {
+ start =
+ u64_stats_fetch_begin(&priv->rx[ring].statss);
+ rx_pkts += priv->rx[ring].rpackets;
+ rx_bytes += priv->rx[ring].rbytes;
+ } while (u64_stats_fetch_retry(&priv->rx[ring].statss,
+ start));
+ }
+ }
+ for (tx_pkts = 0, tx_bytes = 0, ring = 0;
+ ring < priv->tx_cfg.num_queues; ring++) {
+ if (priv->tx) {
+ do {
+ start =
+ u64_stats_fetch_begin(&priv->tx[ring].statss);
+ tx_pkts += priv->tx[ring].pkt_done;
+ tx_bytes += priv->tx[ring].bytes_done;
+ } while (u64_stats_fetch_retry(&priv->tx[ring].statss,
+ start));
+ }
+ }
+
+ i = 0;
+ data[i++] = rx_pkts;
+ data[i++] = tx_pkts;
+ data[i++] = rx_bytes;
+ data[i++] = tx_bytes;
+ /* Skip rx_dropped and tx_dropped */
+ i += 2;
+ data[i++] = priv->tx_timeo_cnt;
+ i = GVE_MAIN_STATS_LEN;
+
+ /* walk RX rings */
+ if (priv->rx) {
+ for (ring = 0; ring < priv->rx_cfg.num_queues; ring++) {
+ struct gve_rx_ring *rx = &priv->rx[ring];
+
+ data[i++] = rx->desc.cnt;
+ data[i++] = rx->desc.fill_cnt;
+ }
+ } else {
+ i += priv->rx_cfg.num_queues * NUM_GVE_RX_CNTS;
+ }
+ /* walk TX rings */
+ if (priv->tx) {
+ for (ring = 0; ring < priv->tx_cfg.num_queues; ring++) {
+ struct gve_tx_ring *tx = &priv->tx[ring];
+
+ data[i++] = tx->req;
+ data[i++] = tx->done;
+ data[i++] = tx->wake_queue;
+ data[i++] = tx->stop_queue;
+ data[i++] = be32_to_cpu(gve_tx_load_event_counter(priv,
+ tx));
+ }
+ } else {
+ i += priv->tx_cfg.num_queues * NUM_GVE_TX_CNTS;
+ }
+}
+
+static void gve_get_channels(struct net_device *netdev,
+ struct ethtool_channels *cmd)
+{
+ struct gve_priv *priv = netdev_priv(netdev);
+
+ cmd->max_rx = priv->rx_cfg.max_queues;
+ cmd->max_tx = priv->tx_cfg.max_queues;
+ cmd->max_other = 0;
+ cmd->max_combined = 0;
+ cmd->rx_count = priv->rx_cfg.num_queues;
+ cmd->tx_count = priv->tx_cfg.num_queues;
+ cmd->other_count = 0;
+ cmd->combined_count = 0;
+}
+
+static int gve_set_channels(struct net_device *netdev,
+ struct ethtool_channels *cmd)
+{
+ struct gve_priv *priv = netdev_priv(netdev);
+ struct gve_queue_config new_tx_cfg = priv->tx_cfg;
+ struct gve_queue_config new_rx_cfg = priv->rx_cfg;
+ struct ethtool_channels old_settings;
+ int new_tx = cmd->tx_count;
+ int new_rx = cmd->rx_count;
+
+ gve_get_channels(netdev, &old_settings);
+
+ /* Changing combined is not allowed */
+ if (cmd->combined_count != old_settings.combined_count)
+ return -EINVAL;
+
+ if (!new_rx || !new_tx)
+ return -EINVAL;
+
+ if (!netif_carrier_ok(netdev)) {
+ priv->tx_cfg.num_queues = new_tx;
+ priv->rx_cfg.num_queues = new_rx;
+ return 0;
+ }
+
+ new_tx_cfg.num_queues = new_tx;
+ new_rx_cfg.num_queues = new_rx;
+
+ return gve_adjust_queues(priv, new_rx_cfg, new_tx_cfg);
+}
+
+static void gve_get_ringparam(struct net_device *netdev,
+ struct ethtool_ringparam *cmd)
+{
+ struct gve_priv *priv = netdev_priv(netdev);
+
+ cmd->rx_max_pending = priv->rx_desc_cnt;
+ cmd->tx_max_pending = priv->tx_desc_cnt;
+ cmd->rx_pending = priv->rx_desc_cnt;
+ cmd->tx_pending = priv->tx_desc_cnt;
+}
+
+static int gve_user_reset(struct net_device *netdev, u32 *flags)
+{
+ struct gve_priv *priv = netdev_priv(netdev);
+
+ if (*flags == ETH_RESET_ALL) {
+ *flags = 0;
+ return gve_reset(priv, true);
+ }
+
+ return -EOPNOTSUPP;
+}
+
+const struct ethtool_ops gve_ethtool_ops = {
+ .get_drvinfo = gve_get_drvinfo,
+ .get_strings = gve_get_strings,
+ .get_sset_count = gve_get_sset_count,
+ .get_ethtool_stats = gve_get_ethtool_stats,
+ .set_msglevel = gve_set_msglevel,
+ .get_msglevel = gve_get_msglevel,
+ .set_channels = gve_set_channels,
+ .get_channels = gve_get_channels,
+ .get_link = ethtool_op_get_link,
+ .get_ringparam = gve_get_ringparam,
+ .reset = gve_user_reset,
+};
diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
new file mode 100644
index 000000000000..24f16e3368cd
--- /dev/null
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -0,0 +1,1232 @@
+// SPDX-License-Identifier: (GPL-2.0 OR MIT)
+/* Google virtual Ethernet (gve) driver
+ *
+ * Copyright (C) 2015-2019 Google, Inc.
+ */
+
+#include <linux/cpumask.h>
+#include <linux/etherdevice.h>
+#include <linux/interrupt.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/sched.h>
+#include <linux/timer.h>
+#include <linux/workqueue.h>
+#include <net/sch_generic.h>
+#include "gve.h"
+#include "gve_adminq.h"
+#include "gve_register.h"
+
+#define GVE_DEFAULT_RX_COPYBREAK (256)
+
+#define DEFAULT_MSG_LEVEL (NETIF_MSG_DRV | NETIF_MSG_LINK)
+#define GVE_VERSION "1.0.0"
+#define GVE_VERSION_PREFIX "GVE-"
+
+const char gve_version_str[] = GVE_VERSION;
+static const char gve_version_prefix[] = GVE_VERSION_PREFIX;
+
+static void gve_get_stats(struct net_device *dev, struct rtnl_link_stats64 *s)
+{
+ struct gve_priv *priv = netdev_priv(dev);
+ unsigned int start;
+ int ring;
+
+ if (priv->rx) {
+ for (ring = 0; ring < priv->rx_cfg.num_queues; ring++) {
+ do {
+ start =
+ u64_stats_fetch_begin(&priv->rx[ring].statss);
+ s->rx_packets += priv->rx[ring].rpackets;
+ s->rx_bytes += priv->rx[ring].rbytes;
+ } while (u64_stats_fetch_retry(&priv->rx[ring].statss,
+ start));
+ }
+ }
+ if (priv->tx) {
+ for (ring = 0; ring < priv->tx_cfg.num_queues; ring++) {
+ do {
+ start =
+ u64_stats_fetch_begin(&priv->tx[ring].statss);
+ s->tx_packets += priv->tx[ring].pkt_done;
+ s->tx_bytes += priv->tx[ring].bytes_done;
+ } while (u64_stats_fetch_retry(&priv->tx[ring].statss,
+ start));
+ }
+ }
+}
+
+static int gve_alloc_counter_array(struct gve_priv *priv)
+{
+ priv->counter_array =
+ dma_alloc_coherent(&priv->pdev->dev,
+ priv->num_event_counters *
+ sizeof(*priv->counter_array),
+ &priv->counter_array_bus, GFP_KERNEL);
+ if (!priv->counter_array)
+ return -ENOMEM;
+
+ return 0;
+}
+
+static void gve_free_counter_array(struct gve_priv *priv)
+{
+ dma_free_coherent(&priv->pdev->dev,
+ priv->num_event_counters *
+ sizeof(*priv->counter_array),
+ priv->counter_array, priv->counter_array_bus);
+ priv->counter_array = NULL;
+}
+
+static irqreturn_t gve_mgmnt_intr(int irq, void *arg)
+{
+ struct gve_priv *priv = arg;
+
+ queue_work(priv->gve_wq, &priv->service_task);
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t gve_intr(int irq, void *arg)
+{
+ struct gve_notify_block *block = arg;
+ struct gve_priv *priv = block->priv;
+
+ iowrite32be(GVE_IRQ_MASK, gve_irq_doorbell(priv, block));
+ napi_schedule_irqoff(&block->napi);
+ return IRQ_HANDLED;
+}
+
+static int gve_napi_poll(struct napi_struct *napi, int budget)
+{
+ struct gve_notify_block *block;
+ __be32 __iomem *irq_doorbell;
+ bool reschedule = false;
+ struct gve_priv *priv;
+
+ block = container_of(napi, struct gve_notify_block, napi);
+ priv = block->priv;
+
+ if (block->tx)
+ reschedule |= gve_tx_poll(block, budget);
+ if (block->rx)
+ reschedule |= gve_rx_poll(block, budget);
+
+ if (reschedule)
+ return budget;
+
+ napi_complete(napi);
+ irq_doorbell = gve_irq_doorbell(priv, block);
+ iowrite32be(GVE_IRQ_ACK | GVE_IRQ_EVENT, irq_doorbell);
+
+ /* Double check we have no extra work.
+ * Ensure unmask synchronizes with checking for work.
+ */
+ dma_rmb();
+ if (block->tx)
+ reschedule |= gve_tx_poll(block, -1);
+ if (block->rx)
+ reschedule |= gve_rx_poll(block, -1);
+ if (reschedule && napi_reschedule(napi))
+ iowrite32be(GVE_IRQ_MASK, irq_doorbell);
+
+ return 0;
+}
+
+static int gve_alloc_notify_blocks(struct gve_priv *priv)
+{
+ int num_vecs_requested = priv->num_ntfy_blks + 1;
+ char *name = priv->dev->name;
+ unsigned int active_cpus;
+ int vecs_enabled;
+ int i, j;
+ int err;
+
+ priv->msix_vectors = kvzalloc(num_vecs_requested *
+ sizeof(*priv->msix_vectors), GFP_KERNEL);
+ if (!priv->msix_vectors)
+ return -ENOMEM;
+ for (i = 0; i < num_vecs_requested; i++)
+ priv->msix_vectors[i].entry = i;
+ vecs_enabled = pci_enable_msix_range(priv->pdev, priv->msix_vectors,
+ GVE_MIN_MSIX, num_vecs_requested);
+ if (vecs_enabled < 0) {
+ dev_err(&priv->pdev->dev, "Could not enable min msix %d/%d\n",
+ GVE_MIN_MSIX, vecs_enabled);
+ err = vecs_enabled;
+ goto abort_with_msix_vectors;
+ }
+ if (vecs_enabled != num_vecs_requested) {
+ int new_num_ntfy_blks = (vecs_enabled - 1) & ~0x1;
+ int vecs_per_type = new_num_ntfy_blks / 2;
+ int vecs_left = new_num_ntfy_blks % 2;
+
+ priv->num_ntfy_blks = new_num_ntfy_blks;
+ priv->tx_cfg.max_queues = min_t(int, priv->tx_cfg.max_queues,
+ vecs_per_type);
+ priv->rx_cfg.max_queues = min_t(int, priv->rx_cfg.max_queues,
+ vecs_per_type + vecs_left);
+ dev_err(&priv->pdev->dev,
+ "Could not enable desired msix, only enabled %d, adjusting tx max queues to %d, and rx max queues to %d\n",
+ vecs_enabled, priv->tx_cfg.max_queues,
+ priv->rx_cfg.max_queues);
+ if (priv->tx_cfg.num_queues > priv->tx_cfg.max_queues)
+ priv->tx_cfg.num_queues = priv->tx_cfg.max_queues;
+ if (priv->rx_cfg.num_queues > priv->rx_cfg.max_queues)
+ priv->rx_cfg.num_queues = priv->rx_cfg.max_queues;
+ }
+ /* Half the notification blocks go to TX and half to RX */
+ active_cpus = min_t(int, priv->num_ntfy_blks / 2, num_online_cpus());
+
+ /* Setup Management Vector - the last vector */
+ snprintf(priv->mgmt_msix_name, sizeof(priv->mgmt_msix_name), "%s-mgmnt",
+ name);
+ err = request_irq(priv->msix_vectors[priv->mgmt_msix_idx].vector,
+ gve_mgmnt_intr, 0, priv->mgmt_msix_name, priv);
+ if (err) {
+ dev_err(&priv->pdev->dev, "Did not receive management vector.\n");
+ goto abort_with_msix_enabled;
+ }
+ priv->ntfy_blocks =
+ dma_alloc_coherent(&priv->pdev->dev,
+ priv->num_ntfy_blks *
+ sizeof(*priv->ntfy_blocks),
+ &priv->ntfy_block_bus, GFP_KERNEL);
+ if (!priv->ntfy_blocks) {
+ err = -ENOMEM;
+ goto abort_with_mgmt_vector;
+ }
+ /* Setup the other blocks - the first n-1 vectors */
+ for (i = 0; i < priv->num_ntfy_blks; i++) {
+ struct gve_notify_block *block = &priv->ntfy_blocks[i];
+ int msix_idx = i;
+
+ snprintf(block->name, sizeof(block->name), "%s-ntfy-block.%d",
+ name, i);
+ block->priv = priv;
+ err = request_irq(priv->msix_vectors[msix_idx].vector,
+ gve_intr, 0, block->name, block);
+ if (err) {
+ dev_err(&priv->pdev->dev,
+ "Failed to receive msix vector %d\n", i);
+ goto abort_with_some_ntfy_blocks;
+ }
+ irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector,
+ get_cpu_mask(i % active_cpus));
+ }
+ return 0;
+abort_with_some_ntfy_blocks:
+ for (j = 0; j < i; j++) {
+ struct gve_notify_block *block = &priv->ntfy_blocks[j];
+ int msix_idx = j;
+
+ irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector,
+ NULL);
+ free_irq(priv->msix_vectors[msix_idx].vector, block);
+ }
+ dma_free_coherent(&priv->pdev->dev, priv->num_ntfy_blks *
+ sizeof(*priv->ntfy_blocks),
+ priv->ntfy_blocks, priv->ntfy_block_bus);
+ priv->ntfy_blocks = NULL;
+abort_with_mgmt_vector:
+ free_irq(priv->msix_vectors[priv->mgmt_msix_idx].vector, priv);
+abort_with_msix_enabled:
+ pci_disable_msix(priv->pdev);
+abort_with_msix_vectors:
+ kfree(priv->msix_vectors);
+ priv->msix_vectors = NULL;
+ return err;
+}
+
+static void gve_free_notify_blocks(struct gve_priv *priv)
+{
+ int i;
+
+ /* Free the irqs */
+ for (i = 0; i < priv->num_ntfy_blks; i++) {
+ struct gve_notify_block *block = &priv->ntfy_blocks[i];
+ int msix_idx = i;
+
+ irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector,
+ NULL);
+ free_irq(priv->msix_vectors[msix_idx].vector, block);
+ }
+ dma_free_coherent(&priv->pdev->dev,
+ priv->num_ntfy_blks * sizeof(*priv->ntfy_blocks),
+ priv->ntfy_blocks, priv->ntfy_block_bus);
+ priv->ntfy_blocks = NULL;
+ free_irq(priv->msix_vectors[priv->mgmt_msix_idx].vector, priv);
+ pci_disable_msix(priv->pdev);
+ kfree(priv->msix_vectors);
+ priv->msix_vectors = NULL;
+}
+
+static int gve_setup_device_resources(struct gve_priv *priv)
+{
+ int err;
+
+ err = gve_alloc_counter_array(priv);
+ if (err)
+ return err;
+ err = gve_alloc_notify_blocks(priv);
+ if (err)
+ goto abort_with_counter;
+ err = gve_adminq_configure_device_resources(priv,
+ priv->counter_array_bus,
+ priv->num_event_counters,
+ priv->ntfy_block_bus,
+ priv->num_ntfy_blks);
+ if (unlikely(err)) {
+ dev_err(&priv->pdev->dev,
+ "could not setup device_resources: err=%d\n", err);
+ err = -ENXIO;
+ goto abort_with_ntfy_blocks;
+ }
+ gve_set_device_resources_ok(priv);
+ return 0;
+abort_with_ntfy_blocks:
+ gve_free_notify_blocks(priv);
+abort_with_counter:
+ gve_free_counter_array(priv);
+ return err;
+}
+
+static void gve_trigger_reset(struct gve_priv *priv);
+
+static void gve_teardown_device_resources(struct gve_priv *priv)
+{
+ int err;
+
+ /* Tell device its resources are being freed */
+ if (gve_get_device_resources_ok(priv)) {
+ err = gve_adminq_deconfigure_device_resources(priv);
+ if (err) {
+ dev_err(&priv->pdev->dev,
+ "Could not deconfigure device resources: err=%d\n",
+ err);
+ gve_trigger_reset(priv);
+ }
+ }
+ gve_free_counter_array(priv);
+ gve_free_notify_blocks(priv);
+ gve_clear_device_resources_ok(priv);
+}
+
+static void gve_add_napi(struct gve_priv *priv, int ntfy_idx)
+{
+ struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx];
+
+ netif_napi_add(priv->dev, &block->napi, gve_napi_poll,
+ NAPI_POLL_WEIGHT);
+}
+
+static void gve_remove_napi(struct gve_priv *priv, int ntfy_idx)
+{
+ struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx];
+
+ netif_napi_del(&block->napi);
+}
+
+static int gve_register_qpls(struct gve_priv *priv)
+{
+ int num_qpls = gve_num_tx_qpls(priv) + gve_num_rx_qpls(priv);
+ int err;
+ int i;
+
+ for (i = 0; i < num_qpls; i++) {
+ err = gve_adminq_register_page_list(priv, &priv->qpls[i]);
+ if (err) {
+ netif_err(priv, drv, priv->dev,
+ "failed to register queue page list %d\n",
+ priv->qpls[i].id);
+ /* This failure will trigger a reset - no need to clean
+ * up
+ */
+ return err;
+ }
+ }
+ return 0;
+}
+
+static int gve_unregister_qpls(struct gve_priv *priv)
+{
+ int num_qpls = gve_num_tx_qpls(priv) + gve_num_rx_qpls(priv);
+ int err;
+ int i;
+
+ for (i = 0; i < num_qpls; i++) {
+ err = gve_adminq_unregister_page_list(priv, priv->qpls[i].id);
+ /* This failure will trigger a reset - no need to clean up */
+ if (err) {
+ netif_err(priv, drv, priv->dev,
+ "Failed to unregister queue page list %d\n",
+ priv->qpls[i].id);
+ return err;
+ }
+ }
+ return 0;
+}
+
+static int gve_create_rings(struct gve_priv *priv)
+{
+ int err;
+ int i;
+
+ for (i = 0; i < priv->tx_cfg.num_queues; i++) {
+ err = gve_adminq_create_tx_queue(priv, i);
+ if (err) {
+ netif_err(priv, drv, priv->dev, "failed to create tx queue %d\n",
+ i);
+ /* This failure will trigger a reset - no need to clean
+ * up
+ */
+ return err;
+ }
+ netif_dbg(priv, drv, priv->dev, "created tx queue %d\n", i);
+ }
+ for (i = 0; i < priv->rx_cfg.num_queues; i++) {
+ err = gve_adminq_create_rx_queue(priv, i);
+ if (err) {
+ netif_err(priv, drv, priv->dev, "failed to create rx queue %d\n",
+ i);
+ /* This failure will trigger a reset - no need to clean
+ * up
+ */
+ return err;
+ }
+ /* Rx data ring has been prefilled with packet buffers at
+ * queue allocation time.
+ * Write the doorbell to provide descriptor slots and packet
+ * buffers to the NIC.
+ */
+ gve_rx_write_doorbell(priv, &priv->rx[i]);
+ netif_dbg(priv, drv, priv->dev, "created rx queue %d\n", i);
+ }
+
+ return 0;
+}
+
+static int gve_alloc_rings(struct gve_priv *priv)
+{
+ int ntfy_idx;
+ int err;
+ int i;
+
+ /* Setup tx rings */
+ priv->tx = kvzalloc(priv->tx_cfg.num_queues * sizeof(*priv->tx),
+ GFP_KERNEL);
+ if (!priv->tx)
+ return -ENOMEM;
+ err = gve_tx_alloc_rings(priv);
+ if (err)
+ goto free_tx;
+ /* Setup rx rings */
+ priv->rx = kvzalloc(priv->rx_cfg.num_queues * sizeof(*priv->rx),
+ GFP_KERNEL);
+ if (!priv->rx) {
+ err = -ENOMEM;
+ goto free_tx_queue;
+ }
+ err = gve_rx_alloc_rings(priv);
+ if (err)
+ goto free_rx;
+ /* Add tx napi & init sync stats */
+ for (i = 0; i < priv->tx_cfg.num_queues; i++) {
+ u64_stats_init(&priv->tx[i].statss);
+ ntfy_idx = gve_tx_idx_to_ntfy(priv, i);
+ gve_add_napi(priv, ntfy_idx);
+ }
+ /* Add rx napi & init sync stats */
+ for (i = 0; i < priv->rx_cfg.num_queues; i++) {
+ u64_stats_init(&priv->rx[i].statss);
+ ntfy_idx = gve_rx_idx_to_ntfy(priv, i);
+ gve_add_napi(priv, ntfy_idx);
+ }
+
+ return 0;
+
+free_rx:
+ kfree(priv->rx);
+ priv->rx = NULL;
+free_tx_queue:
+ gve_tx_free_rings(priv);
+free_tx:
+ kfree(priv->tx);
+ priv->tx = NULL;
+ return err;
+}
+
+static int gve_destroy_rings(struct gve_priv *priv)
+{
+ int err;
+ int i;
+
+ for (i = 0; i < priv->tx_cfg.num_queues; i++) {
+ err = gve_adminq_destroy_tx_queue(priv, i);
+ if (err) {
+ netif_err(priv, drv, priv->dev,
+ "failed to destroy tx queue %d\n",
+ i);
+ /* This failure will trigger a reset - no need to clean
+ * up
+ */
+ return err;
+ }
+ netif_dbg(priv, drv, priv->dev, "destroyed tx queue %d\n", i);
+ }
+ for (i = 0; i < priv->rx_cfg.num_queues; i++) {
+ err = gve_adminq_destroy_rx_queue(priv, i);
+ if (err) {
+ netif_err(priv, drv, priv->dev,
+ "failed to destroy rx queue %d\n",
+ i);
+ /* This failure will trigger a reset - no need to clean
+ * up
+ */
+ return err;
+ }
+ netif_dbg(priv, drv, priv->dev, "destroyed rx queue %d\n", i);
+ }
+ return 0;
+}
+
+static void gve_free_rings(struct gve_priv *priv)
+{
+ int ntfy_idx;
+ int i;
+
+ if (priv->tx) {
+ for (i = 0; i < priv->tx_cfg.num_queues; i++) {
+ ntfy_idx = gve_tx_idx_to_ntfy(priv, i);
+ gve_remove_napi(priv, ntfy_idx);
+ }
+ gve_tx_free_rings(priv);
+ kfree(priv->tx);
+ priv->tx = NULL;
+ }
+ if (priv->rx) {
+ for (i = 0; i < priv->rx_cfg.num_queues; i++) {
+ ntfy_idx = gve_rx_idx_to_ntfy(priv, i);
+ gve_remove_napi(priv, ntfy_idx);
+ }
+ gve_rx_free_rings(priv);
+ kfree(priv->rx);
+ priv->rx = NULL;
+ }
+}
+
+int gve_alloc_page(struct device *dev, struct page **page, dma_addr_t *dma,
+ enum dma_data_direction dir)
+{
+ *page = alloc_page(GFP_KERNEL);
+ if (!*page)
+ return -ENOMEM;
+ *dma = dma_map_page(dev, *page, 0, PAGE_SIZE, dir);
+ if (dma_mapping_error(dev, *dma)) {
+ put_page(*page);
+ return -ENOMEM;
+ }
+ return 0;
+}
+
+static int gve_alloc_queue_page_list(struct gve_priv *priv, u32 id,
+ int pages)
+{
+ struct gve_queue_page_list *qpl = &priv->qpls[id];
+ int err;
+ int i;
+
+ if (pages + priv->num_registered_pages > priv->max_registered_pages) {
+ netif_err(priv, drv, priv->dev,
+ "Reached max number of registered pages %llu > %llu\n",
+ pages + priv->num_registered_pages,
+ priv->max_registered_pages);
+ return -EINVAL;
+ }
+
+ qpl->id = id;
+ qpl->num_entries = pages;
+ qpl->pages = kvzalloc(pages * sizeof(*qpl->pages), GFP_KERNEL);
+ /* caller handles clean up */
+ if (!qpl->pages)
+ return -ENOMEM;
+ qpl->page_buses = kvzalloc(pages * sizeof(*qpl->page_buses),
+ GFP_KERNEL);
+ /* caller handles clean up */
+ if (!qpl->page_buses)
+ return -ENOMEM;
+
+ for (i = 0; i < pages; i++) {
+ err = gve_alloc_page(&priv->pdev->dev, &qpl->pages[i],
+ &qpl->page_buses[i],
+ gve_qpl_dma_dir(priv, id));
+ /* caller handles clean up */
+ if (err)
+ return -ENOMEM;
+ }
+ priv->num_registered_pages += pages;
+
+ return 0;
+}
+
+void gve_free_page(struct device *dev, struct page *page, dma_addr_t dma,
+ enum dma_data_direction dir)
+{
+ if (!dma_mapping_error(dev, dma))
+ dma_unmap_page(dev, dma, PAGE_SIZE, dir);
+ if (page)
+ put_page(page);
+}
+
+static void gve_free_queue_page_list(struct gve_priv *priv,
+ int id)
+{
+ struct gve_queue_page_list *qpl = &priv->qpls[id];
+ int i;
+
+ if (!qpl->pages)
+ return;
+ if (!qpl->page_buses)
+ goto free_pages;
+
+ for (i = 0; i < qpl->num_entries; i++)
+ gve_free_page(&priv->pdev->dev, qpl->pages[i],
+ qpl->page_buses[i], gve_qpl_dma_dir(priv, id));
+
+ kfree(qpl->page_buses);
+free_pages:
+ kfree(qpl->pages);
+ priv->num_registered_pages -= qpl->num_entries;
+}
+
+static int gve_alloc_qpls(struct gve_priv *priv)
+{
+ int num_qpls = gve_num_tx_qpls(priv) + gve_num_rx_qpls(priv);
+ int i, j;
+ int err;
+
+ priv->qpls = kvzalloc(num_qpls * sizeof(*priv->qpls), GFP_KERNEL);
+ if (!priv->qpls)
+ return -ENOMEM;
+
+ for (i = 0; i < gve_num_tx_qpls(priv); i++) {
+ err = gve_alloc_queue_page_list(priv, i,
+ priv->tx_pages_per_qpl);
+ if (err)
+ goto free_qpls;
+ }
+ for (; i < num_qpls; i++) {
+ err = gve_alloc_queue_page_list(priv, i,
+ priv->rx_pages_per_qpl);
+ if (err)
+ goto free_qpls;
+ }
+
+ priv->qpl_cfg.qpl_map_size = BITS_TO_LONGS(num_qpls) *
+ sizeof(unsigned long) * BITS_PER_BYTE;
+ priv->qpl_cfg.qpl_id_map = kvzalloc(BITS_TO_LONGS(num_qpls) *
+ sizeof(unsigned long), GFP_KERNEL);
+ if (!priv->qpl_cfg.qpl_id_map) {
+ err = -ENOMEM;
+ goto free_qpls;
+ }
+
+ return 0;
+
+free_qpls:
+ for (j = 0; j <= i; j++)
+ gve_free_queue_page_list(priv, j);
+ kfree(priv->qpls);
+ return err;
+}
+
+static void gve_free_qpls(struct gve_priv *priv)
+{
+ int num_qpls = gve_num_tx_qpls(priv) + gve_num_rx_qpls(priv);
+ int i;
+
+ kfree(priv->qpl_cfg.qpl_id_map);
+
+ for (i = 0; i < num_qpls; i++)
+ gve_free_queue_page_list(priv, i);
+
+ kfree(priv->qpls);
+}
+
+/* Use this to schedule a reset when the device is capable of continuing
+ * to handle other requests in its current state. If it is not, do a reset
+ * in thread instead.
+ */
+void gve_schedule_reset(struct gve_priv *priv)
+{
+ gve_set_do_reset(priv);
+ queue_work(priv->gve_wq, &priv->service_task);
+}
+
+static void gve_reset_and_teardown(struct gve_priv *priv, bool was_up);
+static int gve_reset_recovery(struct gve_priv *priv, bool was_up);
+static void gve_turndown(struct gve_priv *priv);
+static void gve_turnup(struct gve_priv *priv);
+
+static int gve_open(struct net_device *dev)
+{
+ struct gve_priv *priv = netdev_priv(dev);
+ int err;
+
+ err = gve_alloc_qpls(priv);
+ if (err)
+ return err;
+ err = gve_alloc_rings(priv);
+ if (err)
+ goto free_qpls;
+
+ err = netif_set_real_num_tx_queues(dev, priv->tx_cfg.num_queues);
+ if (err)
+ goto free_rings;
+ err = netif_set_real_num_rx_queues(dev, priv->rx_cfg.num_queues);
+ if (err)
+ goto free_rings;
+
+ err = gve_register_qpls(priv);
+ if (err)
+ goto reset;
+ err = gve_create_rings(priv);
+ if (err)
+ goto reset;
+ gve_set_device_rings_ok(priv);
+
+ gve_turnup(priv);
+ netif_carrier_on(dev);
+ return 0;
+
+free_rings:
+ gve_free_rings(priv);
+free_qpls:
+ gve_free_qpls(priv);
+ return err;
+
+reset:
+ /* This must have been called from a reset due to the rtnl lock
+ * so just return at this point.
+ */
+ if (gve_get_reset_in_progress(priv))
+ return err;
+ /* Otherwise reset before returning */
+ gve_reset_and_teardown(priv, true);
+ /* if this fails there is nothing we can do so just ignore the return */
+ gve_reset_recovery(priv, false);
+ /* return the original error */
+ return err;
+}
+
+static int gve_close(struct net_device *dev)
+{
+ struct gve_priv *priv = netdev_priv(dev);
+ int err;
+
+ netif_carrier_off(dev);
+ if (gve_get_device_rings_ok(priv)) {
+ gve_turndown(priv);
+ err = gve_destroy_rings(priv);
+ if (err)
+ goto err;
+ err = gve_unregister_qpls(priv);
+ if (err)
+ goto err;
+ gve_clear_device_rings_ok(priv);
+ }
+
+ gve_free_rings(priv);
+ gve_free_qpls(priv);
+ return 0;
+
+err:
+ /* This must have been called from a reset due to the rtnl lock
+ * so just return at this point.
+ */
+ if (gve_get_reset_in_progress(priv))
+ return err;
+ /* Otherwise reset before returning */
+ gve_reset_and_teardown(priv, true);
+ return gve_reset_recovery(priv, false);
+}
+
+int gve_adjust_queues(struct gve_priv *priv,
+ struct gve_queue_config new_rx_config,
+ struct gve_queue_config new_tx_config)
+{
+ int err;
+
+ if (netif_carrier_ok(priv->dev)) {
+ /* To make this process as simple as possible we tear down the
+ * device, set the new configuration, and then bring the device
+ * up again.
+ */
+ err = gve_close(priv->dev);
+ /* we have already tried to reset in close,
+ * just fail at this point
+ */
+ if (err)
+ return err;
+ priv->tx_cfg = new_tx_config;
+ priv->rx_cfg = new_rx_config;
+
+ err = gve_open(priv->dev);
+ if (err)
+ goto err;
+
+ return 0;
+ }
+ /* Set the config for the next up. */
+ priv->tx_cfg = new_tx_config;
+ priv->rx_cfg = new_rx_config;
+
+ return 0;
+err:
+ netif_err(priv, drv, priv->dev,
+ "Adjust queues failed! !!! DISABLING ALL QUEUES !!!\n");
+ gve_turndown(priv);
+ return err;
+}
+
+static void gve_turndown(struct gve_priv *priv)
+{
+ int idx;
+
+ if (netif_carrier_ok(priv->dev))
+ netif_carrier_off(priv->dev);
+
+ if (!gve_get_napi_enabled(priv))
+ return;
+
+ /* Disable napi to prevent more work from coming in */
+ for (idx = 0; idx < priv->tx_cfg.num_queues; idx++) {
+ int ntfy_idx = gve_tx_idx_to_ntfy(priv, idx);
+ struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx];
+
+ napi_disable(&block->napi);
+ }
+ for (idx = 0; idx < priv->rx_cfg.num_queues; idx++) {
+ int ntfy_idx = gve_rx_idx_to_ntfy(priv, idx);
+ struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx];
+
+ napi_disable(&block->napi);
+ }
+
+ /* Stop tx queues */
+ netif_tx_disable(priv->dev);
+
+ gve_clear_napi_enabled(priv);
+}
+
+static void gve_turnup(struct gve_priv *priv)
+{
+ int idx;
+
+ /* Start the tx queues */
+ netif_tx_start_all_queues(priv->dev);
+
+ /* Enable napi and unmask interrupts for all queues */
+ for (idx = 0; idx < priv->tx_cfg.num_queues; idx++) {
+ int ntfy_idx = gve_tx_idx_to_ntfy(priv, idx);
+ struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx];
+
+ napi_enable(&block->napi);
+ iowrite32be(0, gve_irq_doorbell(priv, block));
+ }
+ for (idx = 0; idx < priv->rx_cfg.num_queues; idx++) {
+ int ntfy_idx = gve_rx_idx_to_ntfy(priv, idx);
+ struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx];
+
+ napi_enable(&block->napi);
+ iowrite32be(0, gve_irq_doorbell(priv, block));
+ }
+
+ gve_set_napi_enabled(priv);
+}
+
+static void gve_tx_timeout(struct net_device *dev)
+{
+ struct gve_priv *priv = netdev_priv(dev);
+
+ gve_schedule_reset(priv);
+ priv->tx_timeo_cnt++;
+}
+
+static const struct net_device_ops gve_netdev_ops = {
+ .ndo_start_xmit = gve_tx,
+ .ndo_open = gve_open,
+ .ndo_stop = gve_close,
+ .ndo_get_stats64 = gve_get_stats,
+ .ndo_tx_timeout = gve_tx_timeout,
+};
+
+static void gve_handle_status(struct gve_priv *priv, u32 status)
+{
+ if (GVE_DEVICE_STATUS_RESET_MASK & status) {
+ dev_info(&priv->pdev->dev, "Device requested reset.\n");
+ gve_set_do_reset(priv);
+ }
+}
+
+static void gve_handle_reset(struct gve_priv *priv)
+{
+ /* A service task will be scheduled at the end of probe to catch any
+ * resets that need to happen, and we don't want to reset until
+ * probe is done.
+ */
+ if (gve_get_probe_in_progress(priv))
+ return;
+
+ if (gve_get_do_reset(priv)) {
+ rtnl_lock();
+ gve_reset(priv, false);
+ rtnl_unlock();
+ }
+}
+
+/* Handle NIC status register changes and reset requests */
+static void gve_service_task(struct work_struct *work)
+{
+ struct gve_priv *priv = container_of(work, struct gve_priv,
+ service_task);
+
+ gve_handle_status(priv,
+ ioread32be(&priv->reg_bar0->device_status));
+
+ gve_handle_reset(priv);
+}
+
+static int gve_init_priv(struct gve_priv *priv, bool skip_describe_device)
+{
+ int num_ntfy;
+ int err;
+
+ /* Set up the adminq */
+ err = gve_adminq_alloc(&priv->pdev->dev, priv);
+ if (err) {
+ dev_err(&priv->pdev->dev,
+ "Failed to alloc admin queue: err=%d\n", err);
+ return err;
+ }
+
+ if (skip_describe_device)
+ goto setup_device;
+
+ /* Get the initial information we need from the device */
+ err = gve_adminq_describe_device(priv);
+ if (err) {
+ dev_err(&priv->pdev->dev,
+ "Could not get device information: err=%d\n", err);
+ goto err;
+ }
+ if (priv->dev->max_mtu > PAGE_SIZE) {
+ priv->dev->max_mtu = PAGE_SIZE;
+ err = gve_adminq_set_mtu(priv, priv->dev->mtu);
+ if (err) {
+ netif_err(priv, drv, priv->dev, "Could not set mtu");
+ goto err;
+ }
+ }
+ priv->dev->mtu = priv->dev->max_mtu;
+ num_ntfy = pci_msix_vec_count(priv->pdev);
+ if (num_ntfy <= 0) {
+ dev_err(&priv->pdev->dev,
+ "could not count MSI-x vectors: err=%d\n", num_ntfy);
+ err = num_ntfy;
+ goto err;
+ } else if (num_ntfy < GVE_MIN_MSIX) {
+ dev_err(&priv->pdev->dev, "gve needs at least %d MSI-x vectors, but only has %d\n",
+ GVE_MIN_MSIX, num_ntfy);
+ err = -EINVAL;
+ goto err;
+ }
+
+ priv->num_registered_pages = 0;
+ priv->rx_copybreak = GVE_DEFAULT_RX_COPYBREAK;
+ /* gvnic has one Notification Block per MSI-x vector, except for the
+ * management vector
+ */
+ priv->num_ntfy_blks = (num_ntfy - 1) & ~0x1;
+ priv->mgmt_msix_idx = priv->num_ntfy_blks;
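+ /* e.g. with 17 MSI-X vectors this rounds down to 16 notification
+ * blocks (8 for tx, 8 for rx) and uses the last vector, index 16,
+ * for management.
+ */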
+
+ priv->tx_cfg.max_queues =
+ min_t(int, priv->tx_cfg.max_queues, priv->num_ntfy_blks / 2);
+ priv->rx_cfg.max_queues =
+ min_t(int, priv->rx_cfg.max_queues, priv->num_ntfy_blks / 2);
+
+ priv->tx_cfg.num_queues = priv->tx_cfg.max_queues;
+ priv->rx_cfg.num_queues = priv->rx_cfg.max_queues;
+ if (priv->default_num_queues > 0) {
+ priv->tx_cfg.num_queues = min_t(int, priv->default_num_queues,
+ priv->tx_cfg.num_queues);
+ priv->rx_cfg.num_queues = min_t(int, priv->default_num_queues,
+ priv->rx_cfg.num_queues);
+ }
+
+ netif_info(priv, drv, priv->dev, "TX queues %d, RX queues %d\n",
+ priv->tx_cfg.num_queues, priv->rx_cfg.num_queues);
+ netif_info(priv, drv, priv->dev, "Max TX queues %d, Max RX queues %d\n",
+ priv->tx_cfg.max_queues, priv->rx_cfg.max_queues);
+
+setup_device:
+ err = gve_setup_device_resources(priv);
+ if (!err)
+ return 0;
+err:
+ gve_adminq_free(&priv->pdev->dev, priv);
+ return err;
+}
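As a rough illustration of the vector bookkeeping in gve_init_priv() above, the standalone userspace sketch below walks the same arithmetic with a hypothetical MSI-X vector count; it mirrors only the math (one notification block per vector minus the management vector, rounded down to an even count, then split between TX and RX), not any PCI or admin-queue interaction.

#include <stdio.h>

int main(void)
{
	int num_ntfy = 17;			/* hypothetical pci_msix_vec_count() result */
	int num_ntfy_blks = (num_ntfy - 1) & ~0x1;	/* drop the mgmt vector, keep it even */
	int mgmt_msix_idx = num_ntfy_blks;	/* management vector takes the last index */
	int max_tx_queues = num_ntfy_blks / 2;	/* half of the blocks for TX */
	int max_rx_queues = num_ntfy_blks / 2;	/* the other half for RX */

	printf("blocks=%d mgmt_idx=%d max_tx=%d max_rx=%d\n",
	       num_ntfy_blks, mgmt_msix_idx, max_tx_queues, max_rx_queues);
	return 0;	/* prints blocks=16 mgmt_idx=16 max_tx=8 max_rx=8 */
}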
+
+static void gve_teardown_priv_resources(struct gve_priv *priv)
+{
+ gve_teardown_device_resources(priv);
+ gve_adminq_free(&priv->pdev->dev, priv);
+}
+
+static void gve_trigger_reset(struct gve_priv *priv)
+{
+ /* Reset the device by releasing the AQ */
+ gve_adminq_release(priv);
+}
+
+static void gve_reset_and_teardown(struct gve_priv *priv, bool was_up)
+{
+ gve_trigger_reset(priv);
+ /* With the reset having already happened, close cannot fail */
+ if (was_up)
+ gve_close(priv->dev);
+ gve_teardown_priv_resources(priv);
+}
+
+static int gve_reset_recovery(struct gve_priv *priv, bool was_up)
+{
+ int err;
+
+ err = gve_init_priv(priv, true);
+ if (err)
+ goto err;
+ if (was_up) {
+ err = gve_open(priv->dev);
+ if (err)
+ goto err;
+ }
+ return 0;
+err:
+ dev_err(&priv->pdev->dev, "Reset failed! !!! DISABLING ALL QUEUES !!!\n");
+ gve_turndown(priv);
+ return err;
+}
+
+int gve_reset(struct gve_priv *priv, bool attempt_teardown)
+{
+ bool was_up = netif_carrier_ok(priv->dev);
+ int err;
+
+ dev_info(&priv->pdev->dev, "Performing reset\n");
+ gve_clear_do_reset(priv);
+ gve_set_reset_in_progress(priv);
+ /* If we aren't attempting to teardown normally, just go turndown and
+ * reset right away.
+ */
+ if (!attempt_teardown) {
+ gve_turndown(priv);
+ gve_reset_and_teardown(priv, was_up);
+ } else {
+ /* Otherwise attempt to close normally */
+ if (was_up) {
+ err = gve_close(priv->dev);
+ /* If that fails reset as we did above */
+ if (err)
+ gve_reset_and_teardown(priv, was_up);
+ }
+ /* Clean up any remaining resources */
+ gve_teardown_priv_resources(priv);
+ }
+
+ /* Set it all back up */
+ err = gve_reset_recovery(priv, was_up);
+ gve_clear_reset_in_progress(priv);
+ return err;
+}
+
+static void gve_write_version(u8 __iomem *driver_version_register)
+{
+ const char *c = gve_version_prefix;
+
+ while (*c) {
+ writeb(*c, driver_version_register);
+ c++;
+ }
+
+ c = gve_version_str;
+ while (*c) {
+ writeb(*c, driver_version_register);
+ c++;
+ }
+ writeb('\n', driver_version_register);
+}
+
+static int gve_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+{
+ int max_tx_queues, max_rx_queues;
+ struct net_device *dev;
+ __be32 __iomem *db_bar;
+ struct gve_registers __iomem *reg_bar;
+ struct gve_priv *priv;
+ int err;
+
+ err = pci_enable_device(pdev);
+ if (err)
+ return -ENXIO;
+
+ err = pci_request_regions(pdev, "gvnic-cfg");
+ if (err)
+ goto abort_with_enabled;
+
+ pci_set_master(pdev);
+
+ err = pci_set_dma_mask(pdev, DMA_BIT_MASK(64));
+ if (err) {
+ dev_err(&pdev->dev, "Failed to set dma mask: err=%d\n", err);
+ goto abort_with_pci_region;
+ }
+
+ err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64));
+ if (err) {
+ dev_err(&pdev->dev,
+ "Failed to set consistent dma mask: err=%d\n", err);
+ goto abort_with_pci_region;
+ }
+
+ reg_bar = pci_iomap(pdev, GVE_REGISTER_BAR, 0);
+ if (!reg_bar) {
+ dev_err(&pdev->dev, "Failed to map pci bar!\n");
+ err = -ENOMEM;
+ goto abort_with_pci_region;
+ }
+
+ db_bar = pci_iomap(pdev, GVE_DOORBELL_BAR, 0);
+ if (!db_bar) {
+ dev_err(&pdev->dev, "Failed to map doorbell bar!\n");
+ err = -ENOMEM;
+ goto abort_with_reg_bar;
+ }
+
+ gve_write_version(&reg_bar->driver_version);
+ /* Get max queues to alloc etherdev */
+ max_tx_queues = ioread32be(&reg_bar->max_tx_queues);
+ max_rx_queues = ioread32be(&reg_bar->max_rx_queues);
+ /* Alloc and setup the netdev and priv */
+ dev = alloc_etherdev_mqs(sizeof(*priv), max_tx_queues, max_rx_queues);
+ if (!dev) {
+ dev_err(&pdev->dev, "could not allocate netdev\n");
+ err = -ENOMEM;
+ goto abort_with_db_bar;
+ }
+ SET_NETDEV_DEV(dev, &pdev->dev);
+ pci_set_drvdata(pdev, dev);
+ dev->ethtool_ops = &gve_ethtool_ops;
+ dev->netdev_ops = &gve_netdev_ops;
+ /* advertise features */
+ dev->hw_features = NETIF_F_HIGHDMA;
+ dev->hw_features |= NETIF_F_SG;
+ dev->hw_features |= NETIF_F_HW_CSUM;
+ dev->hw_features |= NETIF_F_TSO;
+ dev->hw_features |= NETIF_F_TSO6;
+ dev->hw_features |= NETIF_F_TSO_ECN;
+ dev->hw_features |= NETIF_F_RXCSUM;
+ dev->hw_features |= NETIF_F_RXHASH;
+ dev->features = dev->hw_features;
+ dev->watchdog_timeo = 5 * HZ;
+ dev->min_mtu = ETH_MIN_MTU;
+ netif_carrier_off(dev);
+
+ priv = netdev_priv(dev);
+ priv->dev = dev;
+ priv->pdev = pdev;
+ priv->msg_enable = DEFAULT_MSG_LEVEL;
+ priv->reg_bar0 = reg_bar;
+ priv->db_bar2 = db_bar;
+ priv->service_task_flags = 0x0;
+ priv->state_flags = 0x0;
+
+ gve_set_probe_in_progress(priv);
+ priv->gve_wq = alloc_ordered_workqueue("gve", 0);
+ if (!priv->gve_wq) {
+ dev_err(&pdev->dev, "Could not allocate workqueue");
+ err = -ENOMEM;
+ goto abort_with_netdev;
+ }
+ INIT_WORK(&priv->service_task, gve_service_task);
+ priv->tx_cfg.max_queues = max_tx_queues;
+ priv->rx_cfg.max_queues = max_rx_queues;
+
+ err = gve_init_priv(priv, false);
+ if (err)
+ goto abort_with_wq;
+
+ err = register_netdev(dev);
+ if (err)
+ goto abort_with_wq;
+
+ dev_info(&pdev->dev, "GVE version %s\n", gve_version_str);
+ gve_clear_probe_in_progress(priv);
+ queue_work(priv->gve_wq, &priv->service_task);
+ return 0;
+
+abort_with_wq:
+ destroy_workqueue(priv->gve_wq);
+
+abort_with_netdev:
+ free_netdev(dev);
+
+abort_with_db_bar:
+ pci_iounmap(pdev, db_bar);
+
+abort_with_reg_bar:
+ pci_iounmap(pdev, reg_bar);
+
+abort_with_pci_region:
+ pci_release_regions(pdev);
+
+abort_with_enabled:
+ pci_disable_device(pdev);
+ return err;
+}
+
+static void gve_remove(struct pci_dev *pdev)
+{
+ struct net_device *netdev = pci_get_drvdata(pdev);
+ struct gve_priv *priv = netdev_priv(netdev);
+ __be32 __iomem *db_bar = priv->db_bar2;
+ void __iomem *reg_bar = priv->reg_bar0;
+
+ unregister_netdev(netdev);
+ gve_teardown_priv_resources(priv);
+ destroy_workqueue(priv->gve_wq);
+ free_netdev(netdev);
+ pci_iounmap(pdev, db_bar);
+ pci_iounmap(pdev, reg_bar);
+ pci_release_regions(pdev);
+ pci_disable_device(pdev);
+}
+
+static const struct pci_device_id gve_id_table[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_GOOGLE, PCI_DEV_ID_GVNIC) },
+ { }
+};
+
+static struct pci_driver gvnic_driver = {
+ .name = "gvnic",
+ .id_table = gve_id_table,
+ .probe = gve_probe,
+ .remove = gve_remove,
+};
+
+module_pci_driver(gvnic_driver);
+
+MODULE_DEVICE_TABLE(pci, gve_id_table);
+MODULE_AUTHOR("Google, Inc.");
+MODULE_DESCRIPTION("gVNIC Driver");
+MODULE_LICENSE("Dual MIT/GPL");
+MODULE_VERSION(GVE_VERSION);
diff --git a/drivers/net/ethernet/google/gve/gve_register.h b/drivers/net/ethernet/google/gve/gve_register.h
new file mode 100644
index 000000000000..84ab8893aadd
--- /dev/null
+++ b/drivers/net/ethernet/google/gve/gve_register.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: (GPL-2.0 OR MIT)
+ * Google virtual Ethernet (gve) driver
+ *
+ * Copyright (C) 2015-2019 Google, Inc.
+ */
+
+#ifndef _GVE_REGISTER_H_
+#define _GVE_REGISTER_H_
+
+/* Fixed Configuration Registers */
+struct gve_registers {
+ __be32 device_status;
+ __be32 driver_status;
+ __be32 max_tx_queues;
+ __be32 max_rx_queues;
+ __be32 adminq_pfn;
+ __be32 adminq_doorbell;
+ __be32 adminq_event_counter;
+ u8 reserved[3];
+ u8 driver_version;
+};
+
+enum gve_device_status_flags {
+ GVE_DEVICE_STATUS_RESET_MASK = BIT(1),
+ GVE_DEVICE_STATUS_LINK_STATUS_MASK = BIT(2),
+};
+#endif /* _GVE_REGISTER_H_ */
diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
new file mode 100644
index 000000000000..c1aeabd1c594
--- /dev/null
+++ b/drivers/net/ethernet/google/gve/gve_rx.c
@@ -0,0 +1,446 @@
+// SPDX-License-Identifier: (GPL-2.0 OR MIT)
+/* Google virtual Ethernet (gve) driver
+ *
+ * Copyright (C) 2015-2019 Google, Inc.
+ */
+
+#include "gve.h"
+#include "gve_adminq.h"
+#include <linux/etherdevice.h>
+
+static void gve_rx_remove_from_block(struct gve_priv *priv, int queue_idx)
+{
+ struct gve_notify_block *block =
+ &priv->ntfy_blocks[gve_rx_idx_to_ntfy(priv, queue_idx)];
+
+ block->rx = NULL;
+}
+
+static void gve_rx_free_ring(struct gve_priv *priv, int idx)
+{
+ struct gve_rx_ring *rx = &priv->rx[idx];
+ struct device *dev = &priv->pdev->dev;
+ size_t bytes;
+ u32 slots;
+
+ gve_rx_remove_from_block(priv, idx);
+
+ bytes = sizeof(struct gve_rx_desc) * priv->rx_desc_cnt;
+ dma_free_coherent(dev, bytes, rx->desc.desc_ring, rx->desc.bus);
+ rx->desc.desc_ring = NULL;
+
+ dma_free_coherent(dev, sizeof(*rx->q_resources),
+ rx->q_resources, rx->q_resources_bus);
+ rx->q_resources = NULL;
+
+ gve_unassign_qpl(priv, rx->data.qpl->id);
+ rx->data.qpl = NULL;
+ kfree(rx->data.page_info);
+
+ slots = rx->data.mask + 1;
+ bytes = sizeof(*rx->data.data_ring) * slots;
+ dma_free_coherent(dev, bytes, rx->data.data_ring,
+ rx->data.data_bus);
+ rx->data.data_ring = NULL;
+ netif_dbg(priv, drv, priv->dev, "freed rx ring %d\n", idx);
+}
+
+static void gve_setup_rx_buffer(struct gve_rx_slot_page_info *page_info,
+ struct gve_rx_data_slot *slot,
+ dma_addr_t addr, struct page *page)
+{
+ page_info->page = page;
+ page_info->page_offset = 0;
+ page_info->page_address = page_address(page);
+ slot->qpl_offset = cpu_to_be64(addr);
+}
+
+static int gve_prefill_rx_pages(struct gve_rx_ring *rx)
+{
+ struct gve_priv *priv = rx->gve;
+ u32 slots;
+ int i;
+
+ /* Allocate one page per Rx queue slot. Each page is split into two
+ * packet buffers; when possible we "page flip" between the two.
+ */
+ slots = rx->data.mask + 1;
+
+ rx->data.page_info = kvzalloc(slots *
+ sizeof(*rx->data.page_info), GFP_KERNEL);
+ if (!rx->data.page_info)
+ return -ENOMEM;
+
+ rx->data.qpl = gve_assign_rx_qpl(priv);
+
+ for (i = 0; i < slots; i++) {
+ struct page *page = rx->data.qpl->pages[i];
+ dma_addr_t addr = i * PAGE_SIZE;
+
+ gve_setup_rx_buffer(&rx->data.page_info[i],
+ &rx->data.data_ring[i], addr, page);
+ }
+
+ return slots;
+}
+
+static void gve_rx_add_to_block(struct gve_priv *priv, int queue_idx)
+{
+ u32 ntfy_idx = gve_rx_idx_to_ntfy(priv, queue_idx);
+ struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx];
+ struct gve_rx_ring *rx = &priv->rx[queue_idx];
+
+ block->rx = rx;
+ rx->ntfy_id = ntfy_idx;
+}
+
+static int gve_rx_alloc_ring(struct gve_priv *priv, int idx)
+{
+ struct gve_rx_ring *rx = &priv->rx[idx];
+ struct device *hdev = &priv->pdev->dev;
+ u32 slots, npages;
+ int filled_pages;
+ size_t bytes;
+ int err;
+
+ netif_dbg(priv, drv, priv->dev, "allocating rx ring\n");
+ /* Make sure everything is zeroed to start with */
+ memset(rx, 0, sizeof(*rx));
+
+ rx->gve = priv;
+ rx->q_num = idx;
+
+ slots = priv->rx_pages_per_qpl;
+ rx->data.mask = slots - 1;
+
+ /* alloc rx data ring */
+ bytes = sizeof(*rx->data.data_ring) * slots;
+ rx->data.data_ring = dma_alloc_coherent(hdev, bytes,
+ &rx->data.data_bus,
+ GFP_KERNEL);
+ if (!rx->data.data_ring)
+ return -ENOMEM;
+ filled_pages = gve_prefill_rx_pages(rx);
+ if (filled_pages < 0) {
+ err = -ENOMEM;
+ goto abort_with_slots;
+ }
+ rx->desc.fill_cnt = filled_pages;
+ /* Ensure data ring slots (packet buffers) are visible. */
+ dma_wmb();
+
+ /* Alloc gve_queue_resources */
+ rx->q_resources =
+ dma_alloc_coherent(hdev,
+ sizeof(*rx->q_resources),
+ &rx->q_resources_bus,
+ GFP_KERNEL);
+ if (!rx->q_resources) {
+ err = -ENOMEM;
+ goto abort_filled;
+ }
+ netif_dbg(priv, drv, priv->dev, "rx[%d]->data.data_bus=%lx\n", idx,
+ (unsigned long)rx->data.data_bus);
+
+ /* alloc rx desc ring */
+ bytes = sizeof(struct gve_rx_desc) * priv->rx_desc_cnt;
+ npages = bytes / PAGE_SIZE;
+ if (npages * PAGE_SIZE != bytes) {
+ err = -EIO;
+ goto abort_with_q_resources;
+ }
+
+ rx->desc.desc_ring = dma_alloc_coherent(hdev, bytes, &rx->desc.bus,
+ GFP_KERNEL);
+ if (!rx->desc.desc_ring) {
+ err = -ENOMEM;
+ goto abort_with_q_resources;
+ }
+ rx->desc.mask = slots - 1;
+ rx->desc.cnt = 0;
+ rx->desc.seqno = 1;
+ gve_rx_add_to_block(priv, idx);
+
+ return 0;
+
+abort_with_q_resources:
+ dma_free_coherent(hdev, sizeof(*rx->q_resources),
+ rx->q_resources, rx->q_resources_bus);
+ rx->q_resources = NULL;
+abort_filled:
+ kfree(rx->data.page_info);
+abort_with_slots:
+ bytes = sizeof(*rx->data.data_ring) * slots;
+ dma_free_coherent(hdev, bytes, rx->data.data_ring, rx->data.data_bus);
+ rx->data.data_ring = NULL;
+
+ return err;
+}
+
+int gve_rx_alloc_rings(struct gve_priv *priv)
+{
+ int err = 0;
+ int i;
+
+ for (i = 0; i < priv->rx_cfg.num_queues; i++) {
+ err = gve_rx_alloc_ring(priv, i);
+ if (err) {
+ netif_err(priv, drv, priv->dev,
+ "Failed to alloc rx ring=%d: err=%d\n",
+ i, err);
+ break;
+ }
+ }
+ /* Unallocate if there was an error */
+ if (err) {
+ int j;
+
+ for (j = 0; j < i; j++)
+ gve_rx_free_ring(priv, j);
+ }
+ return err;
+}
+
+void gve_rx_free_rings(struct gve_priv *priv)
+{
+ int i;
+
+ for (i = 0; i < priv->rx_cfg.num_queues; i++)
+ gve_rx_free_ring(priv, i);
+}
+
+void gve_rx_write_doorbell(struct gve_priv *priv, struct gve_rx_ring *rx)
+{
+ u32 db_idx = be32_to_cpu(rx->q_resources->db_index);
+
+ iowrite32be(rx->desc.fill_cnt, &priv->db_bar2[db_idx]);
+}
+
+static enum pkt_hash_types gve_rss_type(__be16 pkt_flags)
+{
+ if (likely(pkt_flags & (GVE_RXF_TCP | GVE_RXF_UDP)))
+ return PKT_HASH_TYPE_L4;
+ if (pkt_flags & (GVE_RXF_IPV4 | GVE_RXF_IPV6))
+ return PKT_HASH_TYPE_L3;
+ return PKT_HASH_TYPE_L2;
+}
+
+static struct sk_buff *gve_rx_copy(struct net_device *dev,
+ struct napi_struct *napi,
+ struct gve_rx_slot_page_info *page_info,
+ u16 len)
+{
+ struct sk_buff *skb = napi_alloc_skb(napi, len);
+ void *va = page_info->page_address + GVE_RX_PAD +
+ page_info->page_offset;
+
+ if (unlikely(!skb))
+ return NULL;
+
+ __skb_put(skb, len);
+
+ skb_copy_to_linear_data(skb, va, len);
+
+ skb->protocol = eth_type_trans(skb, dev);
+ return skb;
+}
+
+static struct sk_buff *gve_rx_add_frags(struct net_device *dev,
+ struct napi_struct *napi,
+ struct gve_rx_slot_page_info *page_info,
+ u16 len)
+{
+ struct sk_buff *skb = napi_get_frags(napi);
+
+ if (unlikely(!skb))
+ return NULL;
+
+ skb_add_rx_frag(skb, 0, page_info->page,
+ page_info->page_offset +
+ GVE_RX_PAD, len, PAGE_SIZE / 2);
+
+ return skb;
+}
+
+static void gve_rx_flip_buff(struct gve_rx_slot_page_info *page_info,
+ struct gve_rx_data_slot *data_ring)
+{
+ u64 addr = be64_to_cpu(data_ring->qpl_offset);
+
+ page_info->page_offset ^= PAGE_SIZE / 2;
+ addr ^= PAGE_SIZE / 2;
+ data_ring->qpl_offset = cpu_to_be64(addr);
+}
+
+static bool gve_rx(struct gve_rx_ring *rx, struct gve_rx_desc *rx_desc,
+ netdev_features_t feat)
+{
+ struct gve_rx_slot_page_info *page_info;
+ struct gve_priv *priv = rx->gve;
+ struct napi_struct *napi = &priv->ntfy_blocks[rx->ntfy_id].napi;
+ struct net_device *dev = priv->dev;
+ struct sk_buff *skb;
+ int pagecount;
+ u16 len;
+ u32 idx;
+
+ /* drop this packet */
+ if (unlikely(rx_desc->flags_seq & GVE_RXF_ERR))
+ return true;
+
+ len = be16_to_cpu(rx_desc->len) - GVE_RX_PAD;
+ idx = rx->data.cnt & rx->data.mask;
+ page_info = &rx->data.page_info[idx];
+
+ /* gvnic can only receive into registered segments. If the buffer
+ * can't be recycled, our only choice is to copy the data out of
+ * it so that we can return it to the device.
+ */
+
+ if (PAGE_SIZE == 4096) {
+ if (len <= priv->rx_copybreak) {
+ /* Just copy small packets */
+ skb = gve_rx_copy(dev, napi, page_info, len);
+ goto have_skb;
+ }
+ if (unlikely(!gve_can_recycle_pages(dev))) {
+ skb = gve_rx_copy(dev, napi, page_info, len);
+ goto have_skb;
+ }
+ pagecount = page_count(page_info->page);
+ if (pagecount == 1) {
+ /* No part of this page is used by any SKBs; we attach
+ * the page fragment to a new SKB and pass it up the
+ * stack.
+ */
+ skb = gve_rx_add_frags(dev, napi, page_info, len);
+ if (!skb)
+ return true;
+ /* Make sure the kernel stack can't release the page */
+ get_page(page_info->page);
+ /* "flip" to other packet buffer on this page */
+ gve_rx_flip_buff(page_info, &rx->data.data_ring[idx]);
+ } else if (pagecount >= 2) {
+ /* We have previously passed the other half of this
+ * page up the stack, but it has not yet been freed.
+ */
+ skb = gve_rx_copy(dev, napi, page_info, len);
+ } else {
+ WARN(pagecount < 1, "Pagecount should never be < 1");
+ return false;
+ }
+ } else {
+ skb = gve_rx_copy(dev, napi, page_info, len);
+ }
+
+have_skb:
+ /* We didn't manage to allocate an skb but we haven't had any
+ * reset worthy failures.
+ */
+ if (!skb)
+ return true;
+
+ rx->data.cnt++;
+
+ if (likely(feat & NETIF_F_RXCSUM)) {
+ /* NIC passes up the partial sum */
+ if (rx_desc->csum)
+ skb->ip_summed = CHECKSUM_COMPLETE;
+ else
+ skb->ip_summed = CHECKSUM_NONE;
+ skb->csum = csum_unfold(rx_desc->csum);
+ }
+
+ /* parse flags & pass relevant info up */
+ if (likely(feat & NETIF_F_RXHASH) &&
+ gve_needs_rss(rx_desc->flags_seq))
+ skb_set_hash(skb, be32_to_cpu(rx_desc->rss_hash),
+ gve_rss_type(rx_desc->flags_seq));
+
+ if (skb_is_nonlinear(skb))
+ napi_gro_frags(napi);
+ else
+ napi_gro_receive(napi, skb);
+ return true;
+}
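A minimal userspace model of the half-page recycling decision made in gve_rx() above, assuming 4K pages split into two 2K buffers; page_refcount stands in for page_count(), and all names and values here are hypothetical stand-ins rather than driver code.

#include <stdio.h>

#define HALF_PAGE 2048

/* Toy model of one RX slot: a 4K page split into two HALF_PAGE buffers. */
struct slot {
	unsigned int page_refcount;	/* stands in for page_count(page) */
	unsigned long page_offset;	/* half the driver reads from: 0 or HALF_PAGE */
	unsigned long qpl_offset;	/* half the NIC will DMA into next */
};

/* Mirror of the decision in gve_rx(): copy small or still-referenced
 * buffers, otherwise hand this half to the stack and flip to the other.
 */
static const char *rx_decide(struct slot *s, unsigned int len,
			     unsigned int copybreak)
{
	if (len <= copybreak)
		return "copy (small packet)";
	if (s->page_refcount == 1) {
		s->page_offset ^= HALF_PAGE;
		s->qpl_offset ^= HALF_PAGE;
		return "attach frag + flip halves";
	}
	return "copy (other half still in flight)";
}

int main(void)
{
	struct slot s = { .page_refcount = 1 };

	printf("%s, next NIC offset=%lu\n", rx_decide(&s, 1500, 128), s.qpl_offset);
	s.page_refcount = 2;
	printf("%s\n", rx_decide(&s, 1500, 128));
	return 0;
}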
+
+static bool gve_rx_work_pending(struct gve_rx_ring *rx)
+{
+ struct gve_rx_desc *desc;
+ __be16 flags_seq;
+ u32 next_idx;
+
+ next_idx = rx->desc.cnt & rx->desc.mask;
+ desc = rx->desc.desc_ring + next_idx;
+
+ flags_seq = desc->flags_seq;
+ /* Make sure we have synchronized the seq no with the device */
+ smp_rmb();
+
+ return (GVE_SEQNO(flags_seq) == rx->desc.seqno);
+}
+
+bool gve_clean_rx_done(struct gve_rx_ring *rx, int budget,
+ netdev_features_t feat)
+{
+ struct gve_priv *priv = rx->gve;
+ struct gve_rx_desc *desc;
+ u32 cnt = rx->desc.cnt;
+ u32 idx = cnt & rx->desc.mask;
+ u32 work_done = 0;
+ u64 bytes = 0;
+
+ desc = rx->desc.desc_ring + idx;
+ while ((GVE_SEQNO(desc->flags_seq) == rx->desc.seqno) &&
+ work_done < budget) {
+ netif_info(priv, rx_status, priv->dev,
+ "[%d] idx=%d desc=%p desc->flags_seq=0x%x\n",
+ rx->q_num, idx, desc, desc->flags_seq);
+ netif_info(priv, rx_status, priv->dev,
+ "[%d] seqno=%d rx->desc.seqno=%d\n",
+ rx->q_num, GVE_SEQNO(desc->flags_seq),
+ rx->desc.seqno);
+ bytes += be16_to_cpu(desc->len) - GVE_RX_PAD;
+ if (!gve_rx(rx, desc, feat))
+ gve_schedule_reset(priv);
+ cnt++;
+ idx = cnt & rx->desc.mask;
+ desc = rx->desc.desc_ring + idx;
+ rx->desc.seqno = gve_next_seqno(rx->desc.seqno);
+ work_done++;
+ }
+
+ if (!work_done)
+ return false;
+
+ u64_stats_update_begin(&rx->statss);
+ rx->rpackets += work_done;
+ rx->rbytes += bytes;
+ u64_stats_update_end(&rx->statss);
+ rx->desc.cnt = cnt;
+ rx->desc.fill_cnt += work_done;
+
+ /* restock desc ring slots */
+ dma_wmb(); /* Ensure descs are visible before ringing doorbell */
+ gve_rx_write_doorbell(priv, rx);
+ return gve_rx_work_pending(rx);
+}
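A small standalone sketch of the sequence-number completion check driving the loop above, assuming a 3-bit sequence counter that wraps from 7 back to 1; the real GVE_SEQNO()/gve_next_seqno() helpers live in a header that is not part of this hunk, so the mask and wrap rule below are illustrative only.

#include <stdio.h>
#include <stdint.h>

#define SEQ_MASK 0x7	/* hypothetical 3-bit sequence field in flags_seq */

static uint8_t next_seqno(uint8_t seq)
{
	return (seq + 1) == 8 ? 1 : seq + 1;	/* wraps 1..7, 0 means "never written" */
}

int main(void)
{
	/* seqnos as the device would stamp them into consecutive descriptors */
	uint8_t ring[6] = { 1, 2, 3, 0, 0, 0 };	/* only the first three are done */
	uint8_t expected = 1;
	int idx = 0;

	while ((ring[idx] & SEQ_MASK) == expected) {
		printf("descriptor %d complete\n", idx);
		expected = next_seqno(expected);
		idx++;
	}
	printf("stop at %d: seqno %u != expected %u\n", idx, ring[idx], expected);
	return 0;
}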
+
+bool gve_rx_poll(struct gve_notify_block *block, int budget)
+{
+ struct gve_rx_ring *rx = block->rx;
+ netdev_features_t feat;
+ bool repoll = false;
+
+ feat = block->napi.dev->features;
+
+ /* If budget is 0, do all the work */
+ if (budget == 0)
+ budget = INT_MAX;
+
+ if (budget > 0)
+ repoll |= gve_clean_rx_done(rx, budget, feat);
+ else
+ repoll |= gve_rx_work_pending(rx);
+ return repoll;
+}
diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c
new file mode 100644
index 000000000000..778b87b5a06c
--- /dev/null
+++ b/drivers/net/ethernet/google/gve/gve_tx.c
@@ -0,0 +1,584 @@
+// SPDX-License-Identifier: (GPL-2.0 OR MIT)
+/* Google virtual Ethernet (gve) driver
+ *
+ * Copyright (C) 2015-2019 Google, Inc.
+ */
+
+#include "gve.h"
+#include "gve_adminq.h"
+#include <linux/ip.h>
+#include <linux/tcp.h>
+#include <linux/vmalloc.h>
+#include <linux/skbuff.h>
+
+static inline void gve_tx_put_doorbell(struct gve_priv *priv,
+ struct gve_queue_resources *q_resources,
+ u32 val)
+{
+ iowrite32be(val, &priv->db_bar2[be32_to_cpu(q_resources->db_index)]);
+}
+
+/* gvnic can only transmit from a Registered Segment.
+ * We copy skb payloads into the registered segment before writing Tx
+ * descriptors and ringing the Tx doorbell.
+ *
+ * gve_tx_fifo_* manages the Registered Segment as a FIFO - clients must
+ * free allocations in the order they were allocated.
+ */
+
+static int gve_tx_fifo_init(struct gve_priv *priv, struct gve_tx_fifo *fifo)
+{
+ fifo->base = vmap(fifo->qpl->pages, fifo->qpl->num_entries, VM_MAP,
+ PAGE_KERNEL);
+ if (unlikely(!fifo->base)) {
+ netif_err(priv, drv, priv->dev, "Failed to vmap fifo, qpl_id = %d\n",
+ fifo->qpl->id);
+ return -ENOMEM;
+ }
+
+ fifo->size = fifo->qpl->num_entries * PAGE_SIZE;
+ atomic_set(&fifo->available, fifo->size);
+ fifo->head = 0;
+ return 0;
+}
+
+static void gve_tx_fifo_release(struct gve_priv *priv, struct gve_tx_fifo *fifo)
+{
+ WARN(atomic_read(&fifo->available) != fifo->size,
+ "Releasing non-empty fifo");
+
+ vunmap(fifo->base);
+}
+
+static int gve_tx_fifo_pad_alloc_one_frag(struct gve_tx_fifo *fifo,
+ size_t bytes)
+{
+ return (fifo->head + bytes < fifo->size) ? 0 : fifo->size - fifo->head;
+}
+
+static bool gve_tx_fifo_can_alloc(struct gve_tx_fifo *fifo, size_t bytes)
+{
+ return atomic_read(&fifo->available) > bytes;
+}
+
+/* gve_tx_alloc_fifo - Allocate fragment(s) from Tx FIFO
+ * @fifo: FIFO to allocate from
+ * @bytes: Allocation size
+ * @iov: Scatter-gather elements to fill with allocation fragment base/len
+ *
+ * Returns number of valid elements in iov[] or negative on error.
+ *
+ * Allocations from a given FIFO must be externally synchronized but concurrent
+ * allocation and frees are allowed.
+ */
+static int gve_tx_alloc_fifo(struct gve_tx_fifo *fifo, size_t bytes,
+ struct gve_tx_iovec iov[2])
+{
+ size_t overflow, padding;
+ u32 aligned_head;
+ int nfrags = 0;
+
+ if (!bytes)
+ return 0;
+
+ /* This check happens before we know how much padding is needed to
+ * align to a cacheline boundary for the payload, but that is fine,
+ * because the FIFO head always starts aligned, and the FIFO's boundaries
+ * are aligned, so if there is space for the data, there is space for
+ * the padding to the next alignment.
+ */
+ WARN(!gve_tx_fifo_can_alloc(fifo, bytes),
+ "Reached %s when there's not enough space in the fifo", __func__);
+
+ nfrags++;
+
+ iov[0].iov_offset = fifo->head;
+ iov[0].iov_len = bytes;
+ fifo->head += bytes;
+
+ if (fifo->head > fifo->size) {
+ /* If the allocation did not fit in the tail fragment of the
+ * FIFO, also use the head fragment.
+ */
+ nfrags++;
+ overflow = fifo->head - fifo->size;
+ iov[0].iov_len -= overflow;
+ iov[1].iov_offset = 0; /* Start of FIFO */
+ iov[1].iov_len = overflow;
+
+ fifo->head = overflow;
+ }
+
+ /* Re-align to a cacheline boundary */
+ aligned_head = L1_CACHE_ALIGN(fifo->head);
+ padding = aligned_head - fifo->head;
+ iov[nfrags - 1].iov_padding = padding;
+ atomic_sub(bytes + padding, &fifo->available);
+ fifo->head = aligned_head;
+
+ if (fifo->head == fifo->size)
+ fifo->head = 0;
+
+ return nfrags;
+}
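The bookkeeping in gve_tx_alloc_fifo() is easier to follow in isolation. Below is a minimal userspace sketch with a hypothetical FIFO size and head position; it mirrors the two-fragment wrap and the cacheline re-alignment of the head, and deliberately leaves out the atomic "available" accounting and the caller-side padding check.

#include <stdio.h>
#include <stddef.h>

#define FIFO_SIZE   8192	/* hypothetical registered-segment size */
#define CACHELINE   64
#define ALIGN_UP(x) (((x) + CACHELINE - 1) & ~(CACHELINE - 1))

struct frag { size_t off, len, pad; };

/* Mirror of gve_tx_alloc_fifo(): fill at most two fragments, wrap at the
 * end of the FIFO, then re-align the head to the next cacheline.
 */
static int fifo_alloc(size_t *head, size_t bytes, struct frag iov[2])
{
	int nfrags = 1;
	size_t aligned;

	iov[0].off = *head;
	iov[0].len = bytes;
	*head += bytes;
	if (*head > FIFO_SIZE) {		/* wrapped: split into two frags */
		size_t overflow = *head - FIFO_SIZE;

		iov[0].len -= overflow;
		iov[1].off = 0;
		iov[1].len = overflow;
		*head = overflow;
		nfrags = 2;
	}
	aligned = ALIGN_UP(*head);
	iov[nfrags - 1].pad = aligned - *head;	/* padding charged to last frag */
	*head = (aligned == FIFO_SIZE) ? 0 : aligned;
	return nfrags;
}

int main(void)
{
	struct frag iov[2] = { 0 };
	size_t head = 8000;			/* near the end of the FIFO */
	int n = fifo_alloc(&head, 300, iov);	/* 192 bytes fit, 108 wrap */

	printf("nfrags=%d [0]=(%zu,%zu,+%zu) [1]=(%zu,%zu,+%zu) head=%zu\n",
	       n, iov[0].off, iov[0].len, iov[0].pad,
	       iov[1].off, iov[1].len, iov[1].pad, head);
	return 0;
}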
+
+/* gve_tx_free_fifo - Return space to Tx FIFO
+ * @fifo: FIFO to return fragments to
+ * @bytes: Bytes to free
+ */
+static void gve_tx_free_fifo(struct gve_tx_fifo *fifo, size_t bytes)
+{
+ atomic_add(bytes, &fifo->available);
+}
+
+static void gve_tx_remove_from_block(struct gve_priv *priv, int queue_idx)
+{
+ struct gve_notify_block *block =
+ &priv->ntfy_blocks[gve_tx_idx_to_ntfy(priv, queue_idx)];
+
+ block->tx = NULL;
+}
+
+static int gve_clean_tx_done(struct gve_priv *priv, struct gve_tx_ring *tx,
+ u32 to_do, bool try_to_wake);
+
+static void gve_tx_free_ring(struct gve_priv *priv, int idx)
+{
+ struct gve_tx_ring *tx = &priv->tx[idx];
+ struct device *hdev = &priv->pdev->dev;
+ size_t bytes;
+ u32 slots;
+
+ gve_tx_remove_from_block(priv, idx);
+ slots = tx->mask + 1;
+ gve_clean_tx_done(priv, tx, tx->req, false);
+ netdev_tx_reset_queue(tx->netdev_txq);
+
+ dma_free_coherent(hdev, sizeof(*tx->q_resources),
+ tx->q_resources, tx->q_resources_bus);
+ tx->q_resources = NULL;
+
+ gve_tx_fifo_release(priv, &tx->tx_fifo);
+ gve_unassign_qpl(priv, tx->tx_fifo.qpl->id);
+ tx->tx_fifo.qpl = NULL;
+
+ bytes = sizeof(*tx->desc) * slots;
+ dma_free_coherent(hdev, bytes, tx->desc, tx->bus);
+ tx->desc = NULL;
+
+ vfree(tx->info);
+ tx->info = NULL;
+
+ netif_dbg(priv, drv, priv->dev, "freed tx queue %d\n", idx);
+}
+
+static void gve_tx_add_to_block(struct gve_priv *priv, int queue_idx)
+{
+ int ntfy_idx = gve_tx_idx_to_ntfy(priv, queue_idx);
+ struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx];
+ struct gve_tx_ring *tx = &priv->tx[queue_idx];
+
+ block->tx = tx;
+ tx->ntfy_id = ntfy_idx;
+}
+
+static int gve_tx_alloc_ring(struct gve_priv *priv, int idx)
+{
+ struct gve_tx_ring *tx = &priv->tx[idx];
+ struct device *hdev = &priv->pdev->dev;
+ u32 slots = priv->tx_desc_cnt;
+ size_t bytes;
+
+ /* Make sure everything is zeroed to start */
+ memset(tx, 0, sizeof(*tx));
+ tx->q_num = idx;
+
+ tx->mask = slots - 1;
+
+ /* alloc metadata */
+ tx->info = vzalloc(sizeof(*tx->info) * slots);
+ if (!tx->info)
+ return -ENOMEM;
+
+ /* alloc tx queue */
+ bytes = sizeof(*tx->desc) * slots;
+ tx->desc = dma_alloc_coherent(hdev, bytes, &tx->bus, GFP_KERNEL);
+ if (!tx->desc)
+ goto abort_with_info;
+
+ tx->tx_fifo.qpl = gve_assign_tx_qpl(priv);
+
+ /* map Tx FIFO */
+ if (gve_tx_fifo_init(priv, &tx->tx_fifo))
+ goto abort_with_desc;
+
+ tx->q_resources =
+ dma_alloc_coherent(hdev,
+ sizeof(*tx->q_resources),
+ &tx->q_resources_bus,
+ GFP_KERNEL);
+ if (!tx->q_resources)
+ goto abort_with_fifo;
+
+ netif_dbg(priv, drv, priv->dev, "tx[%d]->bus=%lx\n", idx,
+ (unsigned long)tx->bus);
+ tx->netdev_txq = netdev_get_tx_queue(priv->dev, idx);
+ gve_tx_add_to_block(priv, idx);
+
+ return 0;
+
+abort_with_fifo:
+ gve_tx_fifo_release(priv, &tx->tx_fifo);
+abort_with_desc:
+ dma_free_coherent(hdev, bytes, tx->desc, tx->bus);
+ tx->desc = NULL;
+abort_with_info:
+ vfree(tx->info);
+ tx->info = NULL;
+ return -ENOMEM;
+}
+
+int gve_tx_alloc_rings(struct gve_priv *priv)
+{
+ int err = 0;
+ int i;
+
+ for (i = 0; i < priv->tx_cfg.num_queues; i++) {
+ err = gve_tx_alloc_ring(priv, i);
+ if (err) {
+ netif_err(priv, drv, priv->dev,
+ "Failed to alloc tx ring=%d: err=%d\n",
+ i, err);
+ break;
+ }
+ }
+ /* Unallocate if there was an error */
+ if (err) {
+ int j;
+
+ for (j = 0; j < i; j++)
+ gve_tx_free_ring(priv, j);
+ }
+ return err;
+}
+
+void gve_tx_free_rings(struct gve_priv *priv)
+{
+ int i;
+
+ for (i = 0; i < priv->tx_cfg.num_queues; i++)
+ gve_tx_free_ring(priv, i);
+}
+
+/* gve_tx_avail - Calculates the number of slots available in the ring
+ * @tx: tx ring to check
+ *
+ * Returns the number of slots available
+ *
+ * The capacity of the queue is mask + 1. We don't need to reserve an entry.
+ **/
+static inline u32 gve_tx_avail(struct gve_tx_ring *tx)
+{
+ return tx->mask + 1 - (tx->req - tx->done);
+}
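A tiny standalone example of the occupancy arithmetic above: req and done are free-running u32 counters, so unsigned subtraction gives the in-flight count even across wraparound. The values below are hypothetical.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t mask = 1024 - 1;	/* hypothetical ring of 1024 slots */
	uint32_t req  = 0x00000005;	/* free-running producer count, already wrapped */
	uint32_t done = 0xfffffff0;	/* free-running consumer count */

	/* unsigned subtraction keeps the in-flight count correct across wrap */
	uint32_t in_flight = req - done;		/* 21 */
	uint32_t avail = mask + 1 - in_flight;		/* 1003 */

	printf("in_flight=%u avail=%u slot=%u\n", in_flight, avail, req & mask);
	return 0;
}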
+
+static inline int gve_skb_fifo_bytes_required(struct gve_tx_ring *tx,
+ struct sk_buff *skb)
+{
+ int pad_bytes, align_hdr_pad;
+ int bytes;
+ int hlen;
+
+ hlen = skb_is_gso(skb) ? skb_checksum_start_offset(skb) +
+ tcp_hdrlen(skb) : skb_headlen(skb);
+
+ pad_bytes = gve_tx_fifo_pad_alloc_one_frag(&tx->tx_fifo,
+ hlen);
+ /* We need to take into account the header alignment padding. */
+ align_hdr_pad = L1_CACHE_ALIGN(hlen) - hlen;
+ bytes = align_hdr_pad + pad_bytes + skb->len;
+
+ return bytes;
+}
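A worked example of the FIFO byte budget computed above, using hypothetical header and frame lengths and assuming no tail padding is needed in the FIFO.

#include <stdio.h>

#define CACHELINE   64
#define ALIGN_UP(x) (((x) + CACHELINE - 1) & ~(CACHELINE - 1))

int main(void)
{
	int hlen = 54;		/* hypothetical Ethernet + IPv4 + TCP header bytes */
	int skb_len = 1514;	/* hypothetical full frame length */
	int pad_bytes = 0;	/* assume the header fits in the FIFO tail */
	int align_hdr_pad = ALIGN_UP(hlen) - hlen;	/* 10 bytes to reach a cacheline */

	printf("FIFO bytes required = %d\n",
	       align_hdr_pad + pad_bytes + skb_len);	/* 1524 */
	return 0;
}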
+
+/* The most descriptors we could need are 3 - 1 for the headers, 1 for
+ * the beginning of the payload at the end of the FIFO, and 1 if the
+ * payload wraps to the beginning of the FIFO.
+ */
+#define MAX_TX_DESC_NEEDED 3
+
+/* Check if sufficient resources (descriptor ring space, FIFO space) are
+ * available to transmit the given number of bytes.
+ */
+static inline bool gve_can_tx(struct gve_tx_ring *tx, int bytes_required)
+{
+ return (gve_tx_avail(tx) >= MAX_TX_DESC_NEEDED &&
+ gve_tx_fifo_can_alloc(&tx->tx_fifo, bytes_required));
+}
+
+/* Stops the queue if the skb cannot be transmitted. */
+static int gve_maybe_stop_tx(struct gve_tx_ring *tx, struct sk_buff *skb)
+{
+ int bytes_required;
+
+ bytes_required = gve_skb_fifo_bytes_required(tx, skb);
+ if (likely(gve_can_tx(tx, bytes_required)))
+ return 0;
+
+ /* No space, so stop the queue */
+ tx->stop_queue++;
+ netif_tx_stop_queue(tx->netdev_txq);
+ smp_mb(); /* sync with restarting queue in gve_clean_tx_done() */
+
+ /* Now check for resources again, in case gve_clean_tx_done() freed
+ * resources after we checked and we stopped the queue after
+ * gve_clean_tx_done() checked.
+ *
+ * gve_maybe_stop_tx() gve_clean_tx_done()
+ * nsegs/can_alloc test failed
+ * gve_tx_free_fifo()
+ * if (tx queue stopped)
+ * netif_tx_queue_wake()
+ * netif_tx_stop_queue()
+ * Need to check again for space here!
+ */
+ if (likely(!gve_can_tx(tx, bytes_required)))
+ return -EBUSY;
+
+ netif_tx_start_queue(tx->netdev_txq);
+ tx->wake_queue++;
+ return 0;
+}
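A rough userspace model of the stop/re-check pattern above, with C11 atomics standing in for netif_tx_stop_queue()/netif_tx_start_queue() and smp_mb(); the cleaner side (gve_clean_tx_done()) is omitted, so this only shows the producer's double check after stopping.

#include <stdatomic.h>
#include <stdio.h>

/* Toy model: "space" stands in for descriptor + FIFO availability,
 * "stopped" for the netif tx-queue stopped state.
 */
static atomic_int space;
static atomic_bool stopped;

static int maybe_stop(int needed)
{
	if (atomic_load(&space) >= needed)
		return 0;			/* fast path: enough room */

	atomic_store(&stopped, 1);		/* netif_tx_stop_queue() */
	atomic_thread_fence(memory_order_seq_cst);	/* pairs with the cleaner's barrier */

	/* the cleaner may have freed space between our check and the stop,
	 * and it only wakes queues it sees as stopped -- so check again
	 */
	if (atomic_load(&space) < needed)
		return -1;			/* genuinely full: -EBUSY */

	atomic_store(&stopped, 0);		/* netif_tx_start_queue() */
	return 0;
}

int main(void)
{
	atomic_store(&space, 2);
	printf("need 3 -> %d (stopped=%d)\n", maybe_stop(3), (int)atomic_load(&stopped));
	atomic_store(&space, 8);
	atomic_store(&stopped, 0);
	printf("need 3 -> %d\n", maybe_stop(3));
	return 0;
}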
+
+static void gve_tx_fill_pkt_desc(union gve_tx_desc *pkt_desc,
+ struct sk_buff *skb, bool is_gso,
+ int l4_hdr_offset, u32 desc_cnt,
+ u16 hlen, u64 addr)
+{
+ /* l4_hdr_offset and csum_offset are in units of 16-bit words */
+ if (is_gso) {
+ pkt_desc->pkt.type_flags = GVE_TXD_TSO | GVE_TXF_L4CSUM;
+ pkt_desc->pkt.l4_csum_offset = skb->csum_offset >> 1;
+ pkt_desc->pkt.l4_hdr_offset = l4_hdr_offset >> 1;
+ } else if (likely(skb->ip_summed == CHECKSUM_PARTIAL)) {
+ pkt_desc->pkt.type_flags = GVE_TXD_STD | GVE_TXF_L4CSUM;
+ pkt_desc->pkt.l4_csum_offset = skb->csum_offset >> 1;
+ pkt_desc->pkt.l4_hdr_offset = l4_hdr_offset >> 1;
+ } else {
+ pkt_desc->pkt.type_flags = GVE_TXD_STD;
+ pkt_desc->pkt.l4_csum_offset = 0;
+ pkt_desc->pkt.l4_hdr_offset = 0;
+ }
+ pkt_desc->pkt.desc_cnt = desc_cnt;
+ pkt_desc->pkt.len = cpu_to_be16(skb->len);
+ pkt_desc->pkt.seg_len = cpu_to_be16(hlen);
+ pkt_desc->pkt.seg_addr = cpu_to_be64(addr);
+}
+
+static void gve_tx_fill_seg_desc(union gve_tx_desc *seg_desc,
+ struct sk_buff *skb, bool is_gso,
+ u16 len, u64 addr)
+{
+ seg_desc->seg.type_flags = GVE_TXD_SEG;
+ if (is_gso) {
+ if (skb_is_gso_v6(skb))
+ seg_desc->seg.type_flags |= GVE_TXSF_IPV6;
+ seg_desc->seg.l3_offset = skb_network_offset(skb) >> 1;
+ seg_desc->seg.mss = cpu_to_be16(skb_shinfo(skb)->gso_size);
+ }
+ seg_desc->seg.seg_len = cpu_to_be16(len);
+ seg_desc->seg.seg_addr = cpu_to_be64(addr);
+}
+
+static int gve_tx_add_skb(struct gve_tx_ring *tx, struct sk_buff *skb)
+{
+ int pad_bytes, hlen, hdr_nfrags, payload_nfrags, l4_hdr_offset;
+ union gve_tx_desc *pkt_desc, *seg_desc;
+ struct gve_tx_buffer_state *info;
+ bool is_gso = skb_is_gso(skb);
+ u32 idx = tx->req & tx->mask;
+ int payload_iov = 2;
+ int copy_offset;
+ u32 next_idx;
+ int i;
+
+ info = &tx->info[idx];
+ pkt_desc = &tx->desc[idx];
+
+ l4_hdr_offset = skb_checksum_start_offset(skb);
+ /* If the skb is gso, then we want the tcp header in the first segment
+ * otherwise we want the linear portion of the skb (which will contain
+ * the checksum because skb->csum_start and skb->csum_offset are given
+ * relative to skb->head) in the first segment.
+ */
+ hlen = is_gso ? l4_hdr_offset + tcp_hdrlen(skb) :
+ skb_headlen(skb);
+
+ info->skb = skb;
+ /* We don't want to split the header, so if necessary, pad to the end
+ * of the fifo and then put the header at the beginning of the fifo.
+ */
+ pad_bytes = gve_tx_fifo_pad_alloc_one_frag(&tx->tx_fifo, hlen);
+ hdr_nfrags = gve_tx_alloc_fifo(&tx->tx_fifo, hlen + pad_bytes,
+ &info->iov[0]);
+ WARN(!hdr_nfrags, "hdr_nfrags should never be 0!");
+ payload_nfrags = gve_tx_alloc_fifo(&tx->tx_fifo, skb->len - hlen,
+ &info->iov[payload_iov]);
+
+ gve_tx_fill_pkt_desc(pkt_desc, skb, is_gso, l4_hdr_offset,
+ 1 + payload_nfrags, hlen,
+ info->iov[hdr_nfrags - 1].iov_offset);
+
+ skb_copy_bits(skb, 0,
+ tx->tx_fifo.base + info->iov[hdr_nfrags - 1].iov_offset,
+ hlen);
+ copy_offset = hlen;
+
+ for (i = payload_iov; i < payload_nfrags + payload_iov; i++) {
+ next_idx = (tx->req + 1 + i - payload_iov) & tx->mask;
+ seg_desc = &tx->desc[next_idx];
+
+ gve_tx_fill_seg_desc(seg_desc, skb, is_gso,
+ info->iov[i].iov_len,
+ info->iov[i].iov_offset);
+
+ skb_copy_bits(skb, copy_offset,
+ tx->tx_fifo.base + info->iov[i].iov_offset,
+ info->iov[i].iov_len);
+ copy_offset += info->iov[i].iov_len;
+ }
+
+ return 1 + payload_nfrags;
+}
+
+netdev_tx_t gve_tx(struct sk_buff *skb, struct net_device *dev)
+{
+ struct gve_priv *priv = netdev_priv(dev);
+ struct gve_tx_ring *tx;
+ int nsegs;
+
+ WARN(skb_get_queue_mapping(skb) >= priv->tx_cfg.num_queues,
+ "skb queue index out of range");
+ tx = &priv->tx[skb_get_queue_mapping(skb)];
+ if (unlikely(gve_maybe_stop_tx(tx, skb))) {
+ /* We need to ring the txq doorbell -- we have stopped the Tx
+ * queue for want of resources, but prior calls to gve_tx()
+ * may have added descriptors without ringing the doorbell.
+ */
+
+ /* Ensure tx descs from a prior gve_tx are visible before
+ * ringing doorbell.
+ */
+ dma_wmb();
+ gve_tx_put_doorbell(priv, tx->q_resources, tx->req);
+ return NETDEV_TX_BUSY;
+ }
+ nsegs = gve_tx_add_skb(tx, skb);
+
+ netdev_tx_sent_queue(tx->netdev_txq, skb->len);
+ skb_tx_timestamp(skb);
+
+ /* give packets to NIC */
+ tx->req += nsegs;
+
+ if (!netif_xmit_stopped(tx->netdev_txq) && netdev_xmit_more())
+ return NETDEV_TX_OK;
+
+ /* Ensure tx descs are visible before ringing doorbell */
+ dma_wmb();
+ gve_tx_put_doorbell(priv, tx->q_resources, tx->req);
+ return NETDEV_TX_OK;
+}
+
+#define GVE_TX_START_THRESH PAGE_SIZE
+
+static int gve_clean_tx_done(struct gve_priv *priv, struct gve_tx_ring *tx,
+ u32 to_do, bool try_to_wake)
+{
+ struct gve_tx_buffer_state *info;
+ u64 pkts = 0, bytes = 0;
+ size_t space_freed = 0;
+ struct sk_buff *skb;
+ int i, j;
+ u32 idx;
+
+ for (j = 0; j < to_do; j++) {
+ idx = tx->done & tx->mask;
+ netif_info(priv, tx_done, priv->dev,
+ "[%d] %s: idx=%d (req=%u done=%u)\n",
+ tx->q_num, __func__, idx, tx->req, tx->done);
+ info = &tx->info[idx];
+ skb = info->skb;
+
+ /* Mark as free */
+ if (skb) {
+ info->skb = NULL;
+ bytes += skb->len;
+ pkts++;
+ dev_consume_skb_any(skb);
+ /* FIFO free */
+ for (i = 0; i < ARRAY_SIZE(info->iov); i++) {
+ space_freed += info->iov[i].iov_len +
+ info->iov[i].iov_padding;
+ info->iov[i].iov_len = 0;
+ info->iov[i].iov_padding = 0;
+ }
+ }
+ tx->done++;
+ }
+
+ gve_tx_free_fifo(&tx->tx_fifo, space_freed);
+ u64_stats_update_begin(&tx->statss);
+ tx->bytes_done += bytes;
+ tx->pkt_done += pkts;
+ u64_stats_update_end(&tx->statss);
+ netdev_tx_completed_queue(tx->netdev_txq, pkts, bytes);
+
+ /* start the queue if we've stopped it */
+#ifndef CONFIG_BQL
+ /* Make sure that the doorbells are synced */
+ smp_mb();
+#endif
+ if (try_to_wake && netif_tx_queue_stopped(tx->netdev_txq) &&
+ likely(gve_can_tx(tx, GVE_TX_START_THRESH))) {
+ tx->wake_queue++;
+ netif_tx_wake_queue(tx->netdev_txq);
+ }
+
+ return pkts;
+}
+
+__be32 gve_tx_load_event_counter(struct gve_priv *priv,
+ struct gve_tx_ring *tx)
+{
+ u32 counter_index = be32_to_cpu((tx->q_resources->counter_index));
+
+ return READ_ONCE(priv->counter_array[counter_index]);
+}
+
+bool gve_tx_poll(struct gve_notify_block *block, int budget)
+{
+ struct gve_priv *priv = block->priv;
+ struct gve_tx_ring *tx = block->tx;
+ bool repoll = false;
+ u32 nic_done;
+ u32 to_do;
+
+ /* If budget is 0, do all the work */
+ if (budget == 0)
+ budget = INT_MAX;
+
+ /* Find out how much work there is to be done */
+ tx->last_nic_done = gve_tx_load_event_counter(priv, tx);
+ nic_done = be32_to_cpu(tx->last_nic_done);
+ if (budget > 0) {
+ /* Do as much work as we have that the budget will
+ * allow
+ */
+ to_do = min_t(u32, (nic_done - tx->done), budget);
+ gve_clean_tx_done(priv, tx, to_do, true);
+ }
+ /* If we still have work we want to repoll */
+ repoll |= (nic_done != tx->done);
+ return repoll;
+}
diff --git a/drivers/net/ethernet/hisilicon/Kconfig b/drivers/net/ethernet/hisilicon/Kconfig
index a0d780c14e60..3892a2062404 100644
--- a/drivers/net/ethernet/hisilicon/Kconfig
+++ b/drivers/net/ethernet/hisilicon/Kconfig
@@ -46,6 +46,16 @@ config HIP04_ETH
If you wish to compile a kernel for a hardware with hisilicon p04 SoC and
want to use the internal ethernet then you should answer Y to this.
+config HI13X1_GMAC
+ bool "Hisilicon HI13X1 Network Device Support"
+ depends on HIP04_ETH
+ help
+ If you wish to compile a kernel for hardware with the HiSilicon hi13x1_gmac,
+ then you should answer Y here. This makes this driver suitable for use
+ on certain boards such as the HI13X1.
+
+ If you are unsure, say N.
+
config HNS_MDIO
tristate
select PHYLIB
diff --git a/drivers/net/ethernet/hisilicon/hip04_eth.c b/drivers/net/ethernet/hisilicon/hip04_eth.c
index e1f2978506fd..625635771b83 100644
--- a/drivers/net/ethernet/hisilicon/hip04_eth.c
+++ b/drivers/net/ethernet/hisilicon/hip04_eth.c
@@ -16,6 +16,8 @@
#include <linux/mfd/syscon.h>
#include <linux/regmap.h>
+#define SC_PPE_RESET_DREQ 0x026C
+
#define PPE_CFG_RX_ADDR 0x100
#define PPE_CFG_POOL_GRP 0x300
#define PPE_CFG_RX_BUF_SIZE 0x400
@@ -33,10 +35,23 @@
#define GE_MODE_CHANGE_REG 0x1b4
#define GE_RECV_CONTROL_REG 0x1e0
#define GE_STATION_MAC_ADDRESS 0x210
-#define PPE_CFG_CPU_ADD_ADDR 0x580
-#define PPE_CFG_MAX_FRAME_LEN_REG 0x408
+
#define PPE_CFG_BUS_CTRL_REG 0x424
#define PPE_CFG_RX_CTRL_REG 0x428
+
+#if defined(CONFIG_HI13X1_GMAC)
+#define PPE_CFG_CPU_ADD_ADDR 0x6D0
+#define PPE_CFG_MAX_FRAME_LEN_REG 0x500
+#define PPE_CFG_RX_PKT_MODE_REG 0x504
+#define PPE_CFG_QOS_VMID_GEN 0x520
+#define PPE_CFG_RX_PKT_INT 0x740
+#define PPE_INTEN 0x700
+#define PPE_INTSTS 0x708
+#define PPE_RINT 0x704
+#define PPE_CFG_STS_MODE 0x880
+#else
+#define PPE_CFG_CPU_ADD_ADDR 0x580
+#define PPE_CFG_MAX_FRAME_LEN_REG 0x408
#define PPE_CFG_RX_PKT_MODE_REG 0x438
#define PPE_CFG_QOS_VMID_GEN 0x500
#define PPE_CFG_RX_PKT_INT 0x538
@@ -44,8 +59,12 @@
#define PPE_INTSTS 0x608
#define PPE_RINT 0x604
#define PPE_CFG_STS_MODE 0x700
+#endif /* CONFIG_HI13X1_GMAC */
+
#define PPE_HIS_RX_PKT_CNT 0x804
+#define RESET_DREQ_ALL 0xffffffff
+
/* REG_INTERRUPT */
#define RCV_INT BIT(10)
#define RCV_NOBUF BIT(8)
@@ -57,8 +76,15 @@
/* TX descriptor config */
#define TX_FREE_MEM BIT(0)
#define TX_READ_ALLOC_L3 BIT(1)
-#define TX_FINISH_CACHE_INV BIT(2)
+#if defined(CONFIG_HI13X1_GMAC)
+#define TX_CLEAR_WB BIT(7)
+#define TX_RELEASE_TO_PPE BIT(4)
+#define TX_FINISH_CACHE_INV BIT(6)
+#define TX_POOL_SHIFT 16
+#else
#define TX_CLEAR_WB BIT(4)
+#define TX_FINISH_CACHE_INV BIT(2)
+#endif
#define TX_L3_CHECKSUM BIT(5)
#define TX_LOOP_BACK BIT(11)
@@ -93,18 +119,35 @@
#define GE_RX_PORT_EN BIT(1)
#define GE_TX_PORT_EN BIT(2)
-#define PPE_CFG_STS_RX_PKT_CNT_RC BIT(12)
-
#define PPE_CFG_RX_PKT_ALIGN BIT(18)
-#define PPE_CFG_QOS_VMID_MODE BIT(14)
+
+#if defined(CONFIG_HI13X1_GMAC)
+#define PPE_CFG_QOS_VMID_GRP_SHIFT 4
+#define PPE_CFG_RX_CTRL_ALIGN_SHIFT 7
+#define PPE_CFG_STS_RX_PKT_CNT_RC BIT(0)
+#define PPE_CFG_QOS_VMID_MODE BIT(15)
+#define PPE_CFG_BUS_LOCAL_REL (BIT(9) | BIT(15) | BIT(19) | BIT(23))
+
+/* buf unit size is cache_line_size, which is 64, so the shift is 6 */
+#define PPE_BUF_SIZE_SHIFT 6
+#define PPE_TX_BUF_HOLD BIT(31)
+#define CACHE_LINE_MASK 0x3F
+#else
#define PPE_CFG_QOS_VMID_GRP_SHIFT 8
+#define PPE_CFG_RX_CTRL_ALIGN_SHIFT 11
+#define PPE_CFG_STS_RX_PKT_CNT_RC BIT(12)
+#define PPE_CFG_QOS_VMID_MODE BIT(14)
+#define PPE_CFG_BUS_LOCAL_REL BIT(14)
+
+/* buf unit size is 1, so the shift is 0 */
+#define PPE_BUF_SIZE_SHIFT 0
+#define PPE_TX_BUF_HOLD 0
+#endif /* CONFIG_HI13X1_GMAC */
#define PPE_CFG_RX_FIFO_FSFU BIT(11)
#define PPE_CFG_RX_DEPTH_SHIFT 16
#define PPE_CFG_RX_START_SHIFT 0
-#define PPE_CFG_RX_CTRL_ALIGN_SHIFT 11
-#define PPE_CFG_BUS_LOCAL_REL BIT(14)
#define PPE_CFG_BUS_BIG_ENDIEN BIT(0)
#define RX_DESC_NUM 128
@@ -128,26 +171,50 @@
#define HIP04_MIN_TX_COALESCE_FRAMES 100
struct tx_desc {
+#if defined(CONFIG_HI13X1_GMAC)
+ u32 reserved1[2];
+ u32 send_addr;
+ u16 send_size;
+ u16 data_offset;
+ u32 reserved2[7];
+ u32 cfg;
+ u32 wb_addr;
+ u32 reserved3[3];
+#else
u32 send_addr;
u32 send_size;
u32 next_addr;
u32 cfg;
u32 wb_addr;
+#endif
} __aligned(64);
struct rx_desc {
+#if defined(CONFIG_HI13X1_GMAC)
+ u32 reserved1[3];
+ u16 pkt_len;
+ u16 reserved_16;
+ u32 reserved2[6];
+ u32 pkt_err;
+ u32 reserved3[5];
+#else
u16 reserved_16;
u16 pkt_len;
u32 reserve1[3];
u32 pkt_err;
u32 reserve2[4];
+#endif
};
struct hip04_priv {
void __iomem *base;
+#if defined(CONFIG_HI13X1_GMAC)
+ void __iomem *sysctrl_base;
+#endif
int phy_mode;
int chan;
unsigned int port;
+ unsigned int group;
unsigned int speed;
unsigned int duplex;
unsigned int reg_inten;
@@ -221,6 +288,13 @@ static void hip04_config_port(struct net_device *ndev, u32 speed, u32 duplex)
writel_relaxed(val, priv->base + GE_MODE_CHANGE_REG);
}
+static void hip04_reset_dreq(struct hip04_priv *priv)
+{
+#if defined(CONFIG_HI13X1_GMAC)
+ writel_relaxed(RESET_DREQ_ALL, priv->sysctrl_base + SC_PPE_RESET_DREQ);
+#endif
+}
+
static void hip04_reset_ppe(struct hip04_priv *priv)
{
u32 val, tmp, timeout = 0;
@@ -241,14 +315,14 @@ static void hip04_config_fifo(struct hip04_priv *priv)
val |= PPE_CFG_STS_RX_PKT_CNT_RC;
writel_relaxed(val, priv->base + PPE_CFG_STS_MODE);
- val = BIT(priv->port);
+ val = BIT(priv->group);
regmap_write(priv->map, priv->port * 4 + PPE_CFG_POOL_GRP, val);
- val = priv->port << PPE_CFG_QOS_VMID_GRP_SHIFT;
+ val = priv->group << PPE_CFG_QOS_VMID_GRP_SHIFT;
val |= PPE_CFG_QOS_VMID_MODE;
writel_relaxed(val, priv->base + PPE_CFG_QOS_VMID_GEN);
- val = RX_BUF_SIZE;
+ val = RX_BUF_SIZE >> PPE_BUF_SIZE_SHIFT;
regmap_write(priv->map, priv->port * 4 + PPE_CFG_RX_BUF_SIZE, val);
val = RX_DESC_NUM << PPE_CFG_RX_DEPTH_SHIFT;
@@ -285,8 +359,10 @@ static void hip04_config_fifo(struct hip04_priv *priv)
val |= GE_RX_STRIP_PAD | GE_RX_PAD_EN;
writel_relaxed(val, priv->base + GE_RECV_CONTROL_REG);
+#ifndef CONFIG_HI13X1_GMAC
val = GE_AUTO_NEG_CTL;
writel_relaxed(val, priv->base + GE_TX_LOCAL_PAGE_REG);
+#endif
}
static void hip04_mac_enable(struct net_device *ndev)
@@ -329,12 +405,18 @@ static void hip04_mac_disable(struct net_device *ndev)
static void hip04_set_xmit_desc(struct hip04_priv *priv, dma_addr_t phys)
{
- writel(phys, priv->base + PPE_CFG_CPU_ADD_ADDR);
+ u32 val;
+
+ val = phys >> PPE_BUF_SIZE_SHIFT | PPE_TX_BUF_HOLD;
+ writel(val, priv->base + PPE_CFG_CPU_ADD_ADDR);
}
static void hip04_set_recv_desc(struct hip04_priv *priv, dma_addr_t phys)
{
- regmap_write(priv->map, priv->port * 4 + PPE_CFG_RX_ADDR, phys);
+ u32 val;
+
+ val = phys >> PPE_BUF_SIZE_SHIFT;
+ regmap_write(priv->map, priv->port * 4 + PPE_CFG_RX_ADDR, val);
}
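For the HI13X1 paths above, buffer addresses are programmed in cache-line units, so the driver shifts the DMA address right by PPE_BUF_SIZE_SHIFT and carries the byte remainder in the descriptor's data_offset. A small standalone sketch of that split, with a hypothetical address and the 64-byte cache line assumed by the defines above:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t phys = 0x12345678;		/* hypothetical DMA address of a buffer */
	unsigned int shift = 6;			/* HI13X1: registers take cache-line units */
	uint64_t mask = (1u << shift) - 1;	/* CACHE_LINE_MASK, 0x3F */

	printf("register value = 0x%llx\n",
	       (unsigned long long)(phys >> shift));	/* whole cache lines */
	printf("data_offset    = 0x%llx\n",
	       (unsigned long long)(phys & mask));	/* byte offset within the line */
	return 0;
}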
static u32 hip04_recv_cnt(struct hip04_priv *priv)
@@ -442,11 +524,20 @@ hip04_mac_start_xmit(struct sk_buff *skb, struct net_device *ndev)
priv->tx_skb[tx_head] = skb;
priv->tx_phys[tx_head] = phys;
- desc->send_addr = cpu_to_be32(phys);
- desc->send_size = cpu_to_be32(skb->len);
- desc->cfg = cpu_to_be32(TX_CLEAR_WB | TX_FINISH_CACHE_INV);
+
+ desc->send_size = (__force u32)cpu_to_be32(skb->len);
+#if defined(CONFIG_HI13X1_GMAC)
+ desc->cfg = (__force u32)cpu_to_be32(TX_CLEAR_WB | TX_FINISH_CACHE_INV
+ | TX_RELEASE_TO_PPE | priv->port << TX_POOL_SHIFT);
+ desc->data_offset = (__force u32)cpu_to_be32(phys & CACHE_LINE_MASK);
+ desc->send_addr = (__force u32)cpu_to_be32(phys & ~CACHE_LINE_MASK);
+#else
+ desc->cfg = (__force u32)cpu_to_be32(TX_CLEAR_WB | TX_FINISH_CACHE_INV);
+ desc->send_addr = (__force u32)cpu_to_be32(phys);
+#endif
phys = priv->tx_desc_dma + tx_head * sizeof(struct tx_desc);
- desc->wb_addr = cpu_to_be32(phys);
+ desc->wb_addr = (__force u32)cpu_to_be32(phys +
+ offsetof(struct tx_desc, send_addr));
skb_tx_timestamp(skb);
hip04_set_xmit_desc(priv, phys);
@@ -507,8 +598,8 @@ static int hip04_rx_poll(struct napi_struct *napi, int budget)
priv->rx_phys[priv->rx_head] = 0;
desc = (struct rx_desc *)skb->data;
- len = be16_to_cpu(desc->pkt_len);
- err = be32_to_cpu(desc->pkt_err);
+ len = be16_to_cpu((__force __be16)desc->pkt_len);
+ err = be32_to_cpu((__force __be32)desc->pkt_err);
if (0 == len) {
dev_kfree_skb_any(skb);
@@ -828,7 +919,16 @@ static int hip04_mac_probe(struct platform_device *pdev)
goto init_fail;
}
- ret = of_parse_phandle_with_fixed_args(node, "port-handle", 2, 0, &arg);
+#if defined(CONFIG_HI13X1_GMAC)
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+ priv->sysctrl_base = devm_ioremap_resource(d, res);
+ if (IS_ERR(priv->sysctrl_base)) {
+ ret = PTR_ERR(priv->sysctrl_base);
+ goto init_fail;
+ }
+#endif
+
+ ret = of_parse_phandle_with_fixed_args(node, "port-handle", 3, 0, &arg);
if (ret < 0) {
dev_warn(d, "no port-handle\n");
goto init_fail;
@@ -836,6 +936,7 @@ static int hip04_mac_probe(struct platform_device *pdev)
priv->port = arg.args[0];
priv->chan = arg.args[1] * RX_DESC_NUM;
+ priv->group = arg.args[2];
hrtimer_init(&priv->tx_coalesce_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
@@ -896,6 +997,7 @@ static int hip04_mac_probe(struct platform_device *pdev)
ndev->irq = irq;
netif_napi_add(ndev, &priv->napi, hip04_rx_poll, NAPI_POLL_WEIGHT);
+ hip04_reset_dreq(priv);
hip04_reset_ppe(priv);
if (priv->phy_mode == PHY_INTERFACE_MODE_MII)
hip04_config_port(ndev, SPEED_100, DUPLEX_FULL);
diff --git a/drivers/net/ethernet/hisilicon/hns/hns_enet.c b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
index fe879c07ae3c..2235dd55fab2 100644
--- a/drivers/net/ethernet/hisilicon/hns/hns_enet.c
+++ b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
@@ -2370,6 +2370,7 @@ static int hns_nic_dev_probe(struct platform_device *pdev)
ndev->hw_features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
NETIF_F_RXCSUM | NETIF_F_SG | NETIF_F_GSO |
NETIF_F_GRO | NETIF_F_TSO | NETIF_F_TSO6;
+ ndev->vlan_features |= NETIF_F_TSO | NETIF_F_TSO6;
ndev->max_mtu = MAC_MAX_MTU_V2 -
(ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN);
break;
diff --git a/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h b/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h
index 83e19c6b974e..8ad5292eebbe 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h
@@ -69,7 +69,7 @@ enum hclge_mbx_vlan_cfg_subcode {
};
#define HCLGE_MBX_MAX_MSG_SIZE 16
-#define HCLGE_MBX_MAX_RESP_DATA_SIZE 16
+#define HCLGE_MBX_MAX_RESP_DATA_SIZE 8
#define HCLGE_MBX_RING_MAP_BASIC_MSG_NUM 3
#define HCLGE_MBX_RING_NODE_VARIABLE_NUM 3
diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.c b/drivers/net/ethernet/hisilicon/hns3/hnae3.c
index fa8b8506b120..908d4f45c06a 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.c
@@ -16,21 +16,18 @@ static LIST_HEAD(hnae3_ae_dev_list);
*/
static DEFINE_MUTEX(hnae3_common_lock);
-static bool hnae3_client_match(enum hnae3_client_type client_type,
- enum hnae3_dev_type dev_type)
+static bool hnae3_client_match(enum hnae3_client_type client_type)
{
- if ((dev_type == HNAE3_DEV_KNIC) && (client_type == HNAE3_CLIENT_KNIC ||
- client_type == HNAE3_CLIENT_ROCE))
- return true;
-
- if (dev_type == HNAE3_DEV_UNIC && client_type == HNAE3_CLIENT_UNIC)
+ if (client_type == HNAE3_CLIENT_KNIC ||
+ client_type == HNAE3_CLIENT_ROCE)
return true;
return false;
}
void hnae3_set_client_init_flag(struct hnae3_client *client,
- struct hnae3_ae_dev *ae_dev, int inited)
+ struct hnae3_ae_dev *ae_dev,
+ unsigned int inited)
{
if (!client || !ae_dev)
return;
@@ -39,9 +36,6 @@ void hnae3_set_client_init_flag(struct hnae3_client *client,
case HNAE3_CLIENT_KNIC:
hnae3_set_bit(ae_dev->flag, HNAE3_KNIC_CLIENT_INITED_B, inited);
break;
- case HNAE3_CLIENT_UNIC:
- hnae3_set_bit(ae_dev->flag, HNAE3_UNIC_CLIENT_INITED_B, inited);
- break;
case HNAE3_CLIENT_ROCE:
hnae3_set_bit(ae_dev->flag, HNAE3_ROCE_CLIENT_INITED_B, inited);
break;
@@ -61,10 +55,6 @@ static int hnae3_get_client_init_flag(struct hnae3_client *client,
inited = hnae3_get_bit(ae_dev->flag,
HNAE3_KNIC_CLIENT_INITED_B);
break;
- case HNAE3_CLIENT_UNIC:
- inited = hnae3_get_bit(ae_dev->flag,
- HNAE3_UNIC_CLIENT_INITED_B);
- break;
case HNAE3_CLIENT_ROCE:
inited = hnae3_get_bit(ae_dev->flag,
HNAE3_ROCE_CLIENT_INITED_B);
@@ -82,7 +72,7 @@ static int hnae3_init_client_instance(struct hnae3_client *client,
int ret;
/* check if this client matches the type of ae_dev */
- if (!(hnae3_client_match(client->type, ae_dev->dev_type) &&
+ if (!(hnae3_client_match(client->type) &&
hnae3_get_bit(ae_dev->flag, HNAE3_DEV_INITED_B))) {
return 0;
}
@@ -99,7 +89,7 @@ static void hnae3_uninit_client_instance(struct hnae3_client *client,
struct hnae3_ae_dev *ae_dev)
{
/* check if this client matches the type of ae_dev */
- if (!(hnae3_client_match(client->type, ae_dev->dev_type) &&
+ if (!(hnae3_client_match(client->type) &&
hnae3_get_bit(ae_dev->flag, HNAE3_DEV_INITED_B)))
return;
@@ -251,6 +241,7 @@ void hnae3_unregister_ae_algo(struct hnae3_ae_algo *ae_algo)
ae_algo->ops->uninit_ae_dev(ae_dev);
hnae3_set_bit(ae_dev->flag, HNAE3_DEV_INITED_B, 0);
+ ae_dev->ops = NULL;
}
list_del(&ae_algo->node);
@@ -351,6 +342,7 @@ void hnae3_unregister_ae_dev(struct hnae3_ae_dev *ae_dev)
ae_algo->ops->uninit_ae_dev(ae_dev);
hnae3_set_bit(ae_dev->flag, HNAE3_DEV_INITED_B, 0);
+ ae_dev->ops = NULL;
}
list_del(&ae_dev->node);
diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.h b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
index ad21b0ef1946..48c7b70fc2c4 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
@@ -102,15 +102,9 @@ enum hnae3_loop {
enum hnae3_client_type {
HNAE3_CLIENT_KNIC,
- HNAE3_CLIENT_UNIC,
HNAE3_CLIENT_ROCE,
};
-enum hnae3_dev_type {
- HNAE3_DEV_KNIC,
- HNAE3_DEV_UNIC,
-};
-
/* mac media type */
enum hnae3_media_type {
HNAE3_MEDIA_TYPE_UNKNOWN,
@@ -154,7 +148,6 @@ enum hnae3_reset_type {
HNAE3_VF_FULL_RESET,
HNAE3_FLR_RESET,
HNAE3_FUNC_RESET,
- HNAE3_CORE_RESET,
HNAE3_GLOBAL_RESET,
HNAE3_IMP_RESET,
HNAE3_UNKNOWN_RESET,
@@ -220,8 +213,7 @@ struct hnae3_ae_dev {
const struct hnae3_ae_ops *ops;
struct list_head node;
u32 flag;
- u8 override_pci_need_reset; /* fix to stop multiple reset happening */
- enum hnae3_dev_type dev_type;
+ unsigned long hw_err_reset_req;
enum hnae3_reset_type reset_type;
void *priv;
};
@@ -271,6 +263,8 @@ struct hnae3_ae_dev {
* get auto autonegotiation of pause frame use
* restart_autoneg()
* restart autonegotiation
+ * halt_autoneg()
+ * halt/resume autonegotiation when autonegotiation on
* get_coalesce_usecs()
* get usecs to delay a TX interrupt after a packet is sent
* get_rx_max_coalesced_frames()
@@ -339,10 +333,14 @@ struct hnae3_ae_dev {
* Set vlan filter config of Ports
* set_vf_vlan_filter()
* Set vlan filter config of vf
+ * restore_vlan_table()
+ * Restore vlan filter entries after reset
* enable_hw_strip_rxvtag()
* Enable/disable hardware strip vlan tag of packets received
* set_gro_en
* Enable/disable HW GRO
+ * add_arfs_entry
+ * Check the 5-tuples of flow, and create flow director rule
*/
struct hnae3_ae_ops {
int (*init_ae_dev)(struct hnae3_ae_dev *ae_dev);
@@ -386,6 +384,7 @@ struct hnae3_ae_ops {
int (*set_autoneg)(struct hnae3_handle *handle, bool enable);
int (*get_autoneg)(struct hnae3_handle *handle);
int (*restart_autoneg)(struct hnae3_handle *handle);
+ int (*halt_autoneg)(struct hnae3_handle *handle, bool halt);
void (*get_coalesce_usecs)(struct hnae3_handle *handle,
u32 *tx_usecs, u32 *rx_usecs);
@@ -463,6 +462,8 @@ struct hnae3_ae_ops {
u16 vlan, u8 qos, __be16 proto);
int (*enable_hw_strip_rxvtag)(struct hnae3_handle *handle, bool enable);
void (*reset_event)(struct pci_dev *pdev, struct hnae3_handle *handle);
+ enum hnae3_reset_type (*get_reset_level)(struct hnae3_ae_dev *ae_dev,
+ unsigned long *addr);
void (*set_default_reset_request)(struct hnae3_ae_dev *ae_dev,
enum hnae3_reset_type rst_type);
void (*get_channels)(struct hnae3_handle *handle,
@@ -492,7 +493,9 @@ struct hnae3_ae_ops {
struct ethtool_rxnfc *cmd, u32 *rule_locs);
int (*restore_fd_rules)(struct hnae3_handle *handle);
void (*enable_fd)(struct hnae3_handle *handle, bool enable);
- int (*dbg_run_cmd)(struct hnae3_handle *handle, char *cmd_buf);
+ int (*add_arfs_entry)(struct hnae3_handle *handle, u16 queue_id,
+ u16 flow_id, struct flow_keys *fkeys);
+ int (*dbg_run_cmd)(struct hnae3_handle *handle, const char *cmd_buf);
pci_ers_result_t (*handle_hw_ras_error)(struct hnae3_ae_dev *ae_dev);
bool (*get_hw_reset_stat)(struct hnae3_handle *handle);
bool (*ae_dev_resetting)(struct hnae3_handle *handle);
@@ -502,6 +505,7 @@ struct hnae3_ae_ops {
void (*set_timer_task)(struct hnae3_handle *handle, bool enable);
int (*mac_connect_phy)(struct hnae3_handle *handle);
void (*mac_disconnect_phy)(struct hnae3_handle *handle);
+ void (*restore_vlan_table)(struct hnae3_handle *handle);
};
struct hnae3_dcb_ops {
@@ -643,5 +647,6 @@ void hnae3_unregister_client(struct hnae3_client *client);
int hnae3_register_client(struct hnae3_client *client);
void hnae3_set_client_init_flag(struct hnae3_client *client,
- struct hnae3_ae_dev *ae_dev, int inited);
+ struct hnae3_ae_dev *ae_dev,
+ unsigned int inited);
#endif
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_dcbnl.c b/drivers/net/ethernet/hisilicon/hns3/hns3_dcbnl.c
index b6fabbbdfd5b..d2ec4c573bf8 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_dcbnl.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_dcbnl.c
@@ -4,8 +4,7 @@
#include "hnae3.h"
#include "hns3_enet.h"
-static
-int hns3_dcbnl_ieee_getets(struct net_device *ndev, struct ieee_ets *ets)
+static int hns3_dcbnl_ieee_getets(struct net_device *ndev, struct ieee_ets *ets)
{
struct hnae3_handle *h = hns3_get_handle(ndev);
@@ -18,8 +17,7 @@ int hns3_dcbnl_ieee_getets(struct net_device *ndev, struct ieee_ets *ets)
return -EOPNOTSUPP;
}
-static
-int hns3_dcbnl_ieee_setets(struct net_device *ndev, struct ieee_ets *ets)
+static int hns3_dcbnl_ieee_setets(struct net_device *ndev, struct ieee_ets *ets)
{
struct hnae3_handle *h = hns3_get_handle(ndev);
@@ -32,8 +30,7 @@ int hns3_dcbnl_ieee_setets(struct net_device *ndev, struct ieee_ets *ets)
return -EOPNOTSUPP;
}
-static
-int hns3_dcbnl_ieee_getpfc(struct net_device *ndev, struct ieee_pfc *pfc)
+static int hns3_dcbnl_ieee_getpfc(struct net_device *ndev, struct ieee_pfc *pfc)
{
struct hnae3_handle *h = hns3_get_handle(ndev);
@@ -46,8 +43,7 @@ int hns3_dcbnl_ieee_getpfc(struct net_device *ndev, struct ieee_pfc *pfc)
return -EOPNOTSUPP;
}
-static
-int hns3_dcbnl_ieee_setpfc(struct net_device *ndev, struct ieee_pfc *pfc)
+static int hns3_dcbnl_ieee_setpfc(struct net_device *ndev, struct ieee_pfc *pfc)
{
struct hnae3_handle *h = hns3_get_handle(ndev);
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c b/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
index fc4917ac44be..a4b937286f55 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
@@ -11,7 +11,8 @@
static struct dentry *hns3_dbgfs_root;
-static int hns3_dbg_queue_info(struct hnae3_handle *h, char *cmd_buf)
+static int hns3_dbg_queue_info(struct hnae3_handle *h,
+ const char *cmd_buf)
{
struct hns3_nic_priv *priv = h->priv;
struct hns3_nic_ring_data *ring_data;
@@ -155,7 +156,7 @@ static int hns3_dbg_queue_map(struct hnae3_handle *h)
return 0;
}
-static int hns3_dbg_bd_info(struct hnae3_handle *h, char *cmd_buf)
+static int hns3_dbg_bd_info(struct hnae3_handle *h, const char *cmd_buf)
{
struct hns3_nic_priv *priv = h->priv;
struct hns3_nic_ring_data *ring_data;
@@ -252,6 +253,7 @@ static void hns3_dbg_help(struct hnae3_handle *h)
dev_info(&h->pdev->dev, "dump qos buf cfg\n");
dev_info(&h->pdev->dev, "dump mng tbl\n");
dev_info(&h->pdev->dev, "dump reset info\n");
+ dev_info(&h->pdev->dev, "dump m7 info\n");
dev_info(&h->pdev->dev, "dump ncl_config <offset> <length>(in hex)\n");
dev_info(&h->pdev->dev, "dump mac tnl status\n");
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
index f326805543a4..310afa708831 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
@@ -4,6 +4,9 @@
#include <linux/dma-mapping.h>
#include <linux/etherdevice.h>
#include <linux/interrupt.h>
+#ifdef CONFIG_RFS_ACCEL
+#include <linux/cpu_rmap.h>
+#endif
#include <linux/if_vlan.h>
#include <linux/ip.h>
#include <linux/ipv6.h>
@@ -14,6 +17,7 @@
#include <linux/sctp.h>
#include <linux/vermagic.h>
#include <net/gre.h>
+#include <net/ip6_checksum.h>
#include <net/pkt_cls.h>
#include <net/tcp.h>
#include <net/vxlan.h>
@@ -24,8 +28,7 @@
#define hns3_set_field(origin, shift, val) ((origin) |= ((val) << (shift)))
#define hns3_tx_bd_count(S) DIV_ROUND_UP(S, HNS3_MAX_BD_SIZE)
-static void hns3_clear_all_ring(struct hnae3_handle *h);
-static void hns3_force_clear_all_rx_ring(struct hnae3_handle *h);
+static void hns3_clear_all_ring(struct hnae3_handle *h, bool force);
static void hns3_remove_hw_addr(struct net_device *netdev);
static const char hns3_driver_name[] = "hns3";
@@ -79,23 +82,6 @@ static irqreturn_t hns3_irq_handle(int irq, void *vector)
return IRQ_HANDLED;
}
-/* This callback function is used to set affinity changes to the irq affinity
- * masks when the irq_set_affinity_notifier function is used.
- */
-static void hns3_nic_irq_affinity_notify(struct irq_affinity_notify *notify,
- const cpumask_t *mask)
-{
- struct hns3_enet_tqp_vector *tqp_vectors =
- container_of(notify, struct hns3_enet_tqp_vector,
- affinity_notify);
-
- tqp_vectors->affinity_mask = *mask;
-}
-
-static void hns3_nic_irq_affinity_release(struct kref *ref)
-{
-}
-
static void hns3_nic_uninit_irq(struct hns3_nic_priv *priv)
{
struct hns3_enet_tqp_vector *tqp_vectors;
@@ -107,8 +93,7 @@ static void hns3_nic_uninit_irq(struct hns3_nic_priv *priv)
if (tqp_vectors->irq_init_flag != HNS3_VECTOR_INITED)
continue;
- /* clear the affinity notifier and affinity mask */
- irq_set_affinity_notifier(tqp_vectors->vector_irq, NULL);
+ /* clear the affinity mask */
irq_set_affinity_hint(tqp_vectors->vector_irq, NULL);
/* release the irq resource */
@@ -153,20 +138,14 @@ static int hns3_nic_init_irq(struct hns3_nic_priv *priv)
tqp_vectors->name[HNAE3_INT_NAME_LEN - 1] = '\0';
ret = request_irq(tqp_vectors->vector_irq, hns3_irq_handle, 0,
- tqp_vectors->name,
- tqp_vectors);
+ tqp_vectors->name, tqp_vectors);
if (ret) {
netdev_err(priv->netdev, "request irq(%d) fail\n",
tqp_vectors->vector_irq);
+ hns3_nic_uninit_irq(priv);
return ret;
}
- tqp_vectors->affinity_notify.notify =
- hns3_nic_irq_affinity_notify;
- tqp_vectors->affinity_notify.release =
- hns3_nic_irq_affinity_release;
- irq_set_affinity_notifier(tqp_vectors->vector_irq,
- &tqp_vectors->affinity_notify);
irq_set_affinity_hint(tqp_vectors->vector_irq,
&tqp_vectors->affinity_mask);
@@ -297,8 +276,7 @@ static int hns3_nic_set_real_num_queue(struct net_device *netdev)
ret = netif_set_real_num_tx_queues(netdev, queue_size);
if (ret) {
netdev_err(netdev,
- "netif_set_real_num_tx_queues fail, ret=%d!\n",
- ret);
+ "netif_set_real_num_tx_queues fail, ret=%d!\n", ret);
return ret;
}
@@ -340,6 +318,40 @@ static void hns3_tqp_disable(struct hnae3_queue *tqp)
hns3_write_dev(tqp, HNS3_RING_EN_REG, rcb_reg);
}
+static void hns3_free_rx_cpu_rmap(struct net_device *netdev)
+{
+#ifdef CONFIG_RFS_ACCEL
+ free_irq_cpu_rmap(netdev->rx_cpu_rmap);
+ netdev->rx_cpu_rmap = NULL;
+#endif
+}
+
+static int hns3_set_rx_cpu_rmap(struct net_device *netdev)
+{
+#ifdef CONFIG_RFS_ACCEL
+ struct hns3_nic_priv *priv = netdev_priv(netdev);
+ struct hns3_enet_tqp_vector *tqp_vector;
+ int i, ret;
+
+ if (!netdev->rx_cpu_rmap) {
+ netdev->rx_cpu_rmap = alloc_irq_cpu_rmap(priv->vector_num);
+ if (!netdev->rx_cpu_rmap)
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < priv->vector_num; i++) {
+ tqp_vector = &priv->tqp_vector[i];
+ ret = irq_cpu_rmap_add(netdev->rx_cpu_rmap,
+ tqp_vector->vector_irq);
+ if (ret) {
+ hns3_free_rx_cpu_rmap(netdev);
+ return ret;
+ }
+ }
+#endif
+ return 0;
+}
+
static int hns3_nic_net_up(struct net_device *netdev)
{
struct hns3_nic_priv *priv = netdev_priv(netdev);
@@ -351,11 +363,16 @@ static int hns3_nic_net_up(struct net_device *netdev)
if (ret)
return ret;
+ /* the device can work without cpu rmap, only aRFS needs it */
+ ret = hns3_set_rx_cpu_rmap(netdev);
+ if (ret)
+ netdev_warn(netdev, "set rx cpu rmap fail, ret=%d!\n", ret);
+
/* get irq resource for all vectors */
ret = hns3_nic_init_irq(priv);
if (ret) {
- netdev_err(netdev, "hns init irq failed! ret=%d\n", ret);
- return ret;
+ netdev_err(netdev, "init irq failed! ret=%d\n", ret);
+ goto free_rmap;
}
clear_bit(HNS3_NIC_STATE_DOWN, &priv->state);
@@ -384,7 +401,8 @@ out_start_err:
hns3_vector_disable(&priv->tqp_vector[j]);
hns3_nic_uninit_irq(priv);
-
+free_rmap:
+ hns3_free_rx_cpu_rmap(netdev);
return ret;
}
@@ -429,16 +447,13 @@ static int hns3_nic_net_open(struct net_device *netdev)
ret = hns3_nic_net_up(netdev);
if (ret) {
- netdev_err(netdev,
- "hns net up fail, ret=%d!\n", ret);
+ netdev_err(netdev, "net up fail, ret=%d!\n", ret);
return ret;
}
kinfo = &h->kinfo;
- for (i = 0; i < HNAE3_MAX_USER_PRIO; i++) {
- netdev_set_prio_tc_map(netdev, i,
- kinfo->prio_tc[i]);
- }
+ for (i = 0; i < HNAE3_MAX_USER_PRIO; i++)
+ netdev_set_prio_tc_map(netdev, i, kinfo->prio_tc[i]);
if (h->ae_algo->ops->set_timer_task)
h->ae_algo->ops->set_timer_task(priv->ae_handle, true);
@@ -447,6 +462,20 @@ static int hns3_nic_net_open(struct net_device *netdev)
return 0;
}
+static void hns3_reset_tx_queue(struct hnae3_handle *h)
+{
+ struct net_device *ndev = h->kinfo.netdev;
+ struct hns3_nic_priv *priv = netdev_priv(ndev);
+ struct netdev_queue *dev_queue;
+ u32 i;
+
+ for (i = 0; i < h->kinfo.num_tqps; i++) {
+ dev_queue = netdev_get_tx_queue(ndev,
+ priv->ring_data[i].queue_index);
+ netdev_tx_reset_queue(dev_queue);
+ }
+}
+
static void hns3_nic_net_down(struct net_device *netdev)
{
struct hns3_nic_priv *priv = netdev_priv(netdev);
@@ -467,10 +496,19 @@ static void hns3_nic_net_down(struct net_device *netdev)
if (ops->stop)
ops->stop(priv->ae_handle);
+ hns3_free_rx_cpu_rmap(netdev);
+
/* free irq resources */
hns3_nic_uninit_irq(priv);
- hns3_clear_all_ring(priv->ae_handle);
+ /* delay ring buffer clearing to hns3_reset_notify_uninit_enet
+ * during the reset process, because the driver may not be able
+ * to disable the ring through firmware when downing the netdev.
+ */
+ if (!hns3_nic_resetting(netdev))
+ hns3_clear_all_ring(priv->ae_handle, false);
+
+ hns3_reset_tx_queue(priv->ae_handle);
}
static int hns3_nic_net_stop(struct net_device *netdev)
@@ -641,7 +679,7 @@ static int hns3_set_tso(struct sk_buff *skb, u32 *paylen,
if (l3.v4->version == 4)
l3.v4->check = 0;
- /* tunnel packet.*/
+ /* tunnel packet */
if (skb_shinfo(skb)->gso_type & (SKB_GSO_GRE |
SKB_GSO_GRE_CSUM |
SKB_GSO_UDP_TUNNEL |
@@ -666,11 +704,11 @@ static int hns3_set_tso(struct sk_buff *skb, u32 *paylen,
l3.v4->check = 0;
}
- /* normal or tunnel packet*/
+ /* normal or tunnel packet */
l4_offset = l4.hdr - skb->data;
hdr_len = (l4.tcp->doff << 2) + l4_offset;
- /* remove payload length from inner pseudo checksum when tso*/
+ /* remove payload length from inner pseudo checksum when tso */
l4_paylen = skb->len - l4_offset;
csum_replace_by_diff(&l4.tcp->check,
(__force __wsum)htonl(l4_paylen));
@@ -778,7 +816,7 @@ static void hns3_set_outer_l2l3l4(struct sk_buff *skb, u8 ol4_proto,
hns3_set_field(*ol_type_vlan_len_msec, HNS3_TXD_L3LEN_S, l3_len >> 2);
il2_hdr = skb_inner_mac_header(skb);
- /* compute OL4 header size, defined in 4 Bytes. */
+ /* compute OL4 header size, defined in 4 Bytes */
l4_len = il2_hdr - l4.hdr;
hns3_set_field(*ol_type_vlan_len_msec, HNS3_TXD_L4LEN_S, l4_len >> 2);
@@ -913,8 +951,9 @@ static int hns3_set_l2l3l4(struct sk_buff *skb, u8 ol4_proto,
static void hns3_set_txbd_baseinfo(u16 *bdtp_fe_sc_vld_ra_ri, int frag_end)
{
/* Config bd buffer end */
- hns3_set_field(*bdtp_fe_sc_vld_ra_ri, HNS3_TXD_FE_B, !!frag_end);
- hns3_set_field(*bdtp_fe_sc_vld_ra_ri, HNS3_TXD_VLD_B, 1);
+ if (!!frag_end)
+ hns3_set_field(*bdtp_fe_sc_vld_ra_ri, HNS3_TXD_FE_B, 1U);
+ hns3_set_field(*bdtp_fe_sc_vld_ra_ri, HNS3_TXD_VLD_B, 1U);
}
static int hns3_fill_desc_vtags(struct sk_buff *skb,
@@ -988,7 +1027,8 @@ static int hns3_fill_desc_vtags(struct sk_buff *skb,
}
static int hns3_fill_desc(struct hns3_enet_ring *ring, void *priv,
- int size, int frag_end, enum hns_desc_type type)
+ unsigned int size, int frag_end,
+ enum hns_desc_type type)
{
struct hns3_desc_cb *desc_cb = &ring->desc_cb[ring->next_to_use];
struct hns3_desc *desc = &ring->desc[ring->next_to_use];
@@ -1038,8 +1078,7 @@ static int hns3_fill_desc(struct hns3_enet_ring *ring, void *priv,
/* Set txbd */
desc->tx.ol_type_vlan_len_msec =
cpu_to_le32(ol_type_vlan_len_msec);
- desc->tx.type_cs_vlan_tso_len =
- cpu_to_le32(type_cs_vlan_tso);
+ desc->tx.type_cs_vlan_tso_len = cpu_to_le32(type_cs_vlan_tso);
desc->tx.paylen = cpu_to_le32(paylen);
desc->tx.mss = cpu_to_le16(mss);
desc->tx.vlan_tag = cpu_to_le16(inner_vtag);
@@ -1086,19 +1125,19 @@ static int hns3_fill_desc(struct hns3_enet_ring *ring, void *priv,
desc_cb->priv = priv;
desc_cb->dma = dma + HNS3_MAX_BD_SIZE * k;
desc_cb->type = (type == DESC_TYPE_SKB && !k) ?
- DESC_TYPE_SKB : DESC_TYPE_PAGE;
+ DESC_TYPE_SKB : DESC_TYPE_PAGE;
/* now, fill the descriptor */
desc->addr = cpu_to_le64(dma + HNS3_MAX_BD_SIZE * k);
desc->tx.send_size = cpu_to_le16((k == frag_buf_num - 1) ?
- (u16)sizeoflast : (u16)HNS3_MAX_BD_SIZE);
+ (u16)sizeoflast : (u16)HNS3_MAX_BD_SIZE);
hns3_set_txbd_baseinfo(&bdtp_fe_sc_vld_ra_ri,
frag_end && (k == frag_buf_num - 1) ?
1 : 0);
desc->tx.bdtp_fe_sc_vld_ra_ri =
cpu_to_le16(bdtp_fe_sc_vld_ra_ri);
- /* move ring pointer to next.*/
+ /* move ring pointer to next */
ring_ptr_move_fw(ring, next_to_use);
desc_cb = &ring->desc_cb[ring->next_to_use];
@@ -1452,12 +1491,10 @@ static void hns3_nic_get_stats64(struct net_device *netdev,
start = u64_stats_fetch_begin_irq(&ring->syncp);
rx_bytes += ring->stats.rx_bytes;
rx_pkts += ring->stats.rx_pkts;
- rx_drop += ring->stats.non_vld_descs;
rx_drop += ring->stats.l2_err;
- rx_errors += ring->stats.non_vld_descs;
rx_errors += ring->stats.l2_err;
+ rx_errors += ring->stats.l3l4_csum_err;
rx_crc_errors += ring->stats.l2_err;
- rx_crc_errors += ring->stats.l3l4_csum_err;
rx_multicast += ring->stats.rx_multicast;
rx_length_errors += ring->stats.err_pkt_len;
} while (u64_stats_fetch_retry_irq(&ring->syncp, start));
@@ -1493,12 +1530,12 @@ static void hns3_nic_get_stats64(struct net_device *netdev,
static int hns3_setup_tc(struct net_device *netdev, void *type_data)
{
struct tc_mqprio_qopt_offload *mqprio_qopt = type_data;
- struct hnae3_handle *h = hns3_get_handle(netdev);
- struct hnae3_knic_private_info *kinfo = &h->kinfo;
u8 *prio_tc = mqprio_qopt->qopt.prio_tc_map;
+ struct hnae3_knic_private_info *kinfo;
u8 tc = mqprio_qopt->qopt.num_tc;
u16 mode = mqprio_qopt->mode;
u8 hw = mqprio_qopt->qopt.hw;
+ struct hnae3_handle *h;
if (!((hw == TC_MQPRIO_HW_OFFLOAD_TCS &&
mode == TC_MQPRIO_MODE_CHANNEL) || (!hw && tc == 0)))
@@ -1510,6 +1547,9 @@ static int hns3_setup_tc(struct net_device *netdev, void *type_data)
if (!netdev)
return -EINVAL;
+ h = hns3_get_handle(netdev);
+ kinfo = &h->kinfo;
+
return (kinfo->dcb_ops && kinfo->dcb_ops->setup_tc) ?
kinfo->dcb_ops->setup_tc(h, tc, prio_tc) : -EOPNOTSUPP;
}
@@ -1527,15 +1567,11 @@ static int hns3_vlan_rx_add_vid(struct net_device *netdev,
__be16 proto, u16 vid)
{
struct hnae3_handle *h = hns3_get_handle(netdev);
- struct hns3_nic_priv *priv = netdev_priv(netdev);
int ret = -EIO;
if (h->ae_algo->ops->set_vlan_filter)
ret = h->ae_algo->ops->set_vlan_filter(h, proto, vid, false);
- if (!ret)
- set_bit(vid, priv->active_vlans);
-
return ret;
}
@@ -1543,33 +1579,11 @@ static int hns3_vlan_rx_kill_vid(struct net_device *netdev,
__be16 proto, u16 vid)
{
struct hnae3_handle *h = hns3_get_handle(netdev);
- struct hns3_nic_priv *priv = netdev_priv(netdev);
int ret = -EIO;
if (h->ae_algo->ops->set_vlan_filter)
ret = h->ae_algo->ops->set_vlan_filter(h, proto, vid, true);
- if (!ret)
- clear_bit(vid, priv->active_vlans);
-
- return ret;
-}
-
-static int hns3_restore_vlan(struct net_device *netdev)
-{
- struct hns3_nic_priv *priv = netdev_priv(netdev);
- int ret = 0;
- u16 vid;
-
- for_each_set_bit(vid, priv->active_vlans, VLAN_N_VID) {
- ret = hns3_vlan_rx_add_vid(netdev, htons(ETH_P_8021Q), vid);
- if (ret) {
- netdev_err(netdev, "Restore vlan: %d filter, ret:%d\n",
- vid, ret);
- return ret;
- }
- }
-
return ret;
}
@@ -1581,7 +1595,7 @@ static int hns3_ndo_set_vf_vlan(struct net_device *netdev, int vf, u16 vlan,
if (h->ae_algo->ops->set_vf_vlan_filter)
ret = h->ae_algo->ops->set_vf_vlan_filter(h, vf, vlan,
- qos, vlan_proto);
+ qos, vlan_proto);
return ret;
}
@@ -1722,6 +1736,32 @@ static void hns3_nic_net_timeout(struct net_device *ndev)
h->ae_algo->ops->reset_event(h->pdev, h);
}
+#ifdef CONFIG_RFS_ACCEL
+static int hns3_rx_flow_steer(struct net_device *dev, const struct sk_buff *skb,
+ u16 rxq_index, u32 flow_id)
+{
+ struct hnae3_handle *h = hns3_get_handle(dev);
+ struct flow_keys fkeys;
+
+ if (!h->ae_algo->ops->add_arfs_entry)
+ return -EOPNOTSUPP;
+
+ if (skb->encapsulation)
+ return -EPROTONOSUPPORT;
+
+ if (!skb_flow_dissect_flow_keys(skb, &fkeys, 0))
+ return -EPROTONOSUPPORT;
+
+ if ((fkeys.basic.n_proto != htons(ETH_P_IP) &&
+ fkeys.basic.n_proto != htons(ETH_P_IPV6)) ||
+ (fkeys.basic.ip_proto != IPPROTO_TCP &&
+ fkeys.basic.ip_proto != IPPROTO_UDP))
+ return -EPROTONOSUPPORT;
+
+ return h->ae_algo->ops->add_arfs_entry(h, rxq_index, flow_id, &fkeys);
+}
+#endif
+
static const struct net_device_ops hns3_nic_netdev_ops = {
.ndo_open = hns3_nic_net_open,
.ndo_stop = hns3_nic_net_stop,
@@ -1737,6 +1777,10 @@ static const struct net_device_ops hns3_nic_netdev_ops = {
.ndo_vlan_rx_add_vid = hns3_vlan_rx_add_vid,
.ndo_vlan_rx_kill_vid = hns3_vlan_rx_kill_vid,
.ndo_set_vf_vlan = hns3_ndo_set_vf_vlan,
+#ifdef CONFIG_RFS_ACCEL
+ .ndo_rx_flow_steer = hns3_rx_flow_steer,
+#endif
+
};
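
For context on the new .ndo_rx_flow_steer hook above: hns3_rx_flow_steer dissects the packet and only programs an aRFS entry for plain (non-encapsulated) IPv4 or IPv6 TCP/UDP flows, returning -EPROTONOSUPPORT otherwise. A minimal standalone sketch of that gating predicate, using a hypothetical flow_tuple struct in place of the kernel's struct flow_keys:

#include <stdbool.h>
#include <stdint.h>
#include <arpa/inet.h>      /* htons() */
#include <netinet/in.h>     /* IPPROTO_TCP, IPPROTO_UDP */

#define ETH_P_IP   0x0800   /* standard EtherType values */
#define ETH_P_IPV6 0x86DD

/* Hypothetical stand-in for the fields the driver checks in struct flow_keys. */
struct flow_tuple {
	uint16_t n_proto;      /* EtherType, network byte order */
	uint8_t  ip_proto;     /* L4 protocol number */
	bool     encapsulated;
};

/* Mirror of the driver's gate: steer only plain IPv4/IPv6 TCP or UDP flows. */
static bool flow_is_steerable(const struct flow_tuple *f)
{
	if (f->encapsulated)
		return false;

	if (f->n_proto != htons(ETH_P_IP) && f->n_proto != htons(ETH_P_IPV6))
		return false;

	return f->ip_proto == IPPROTO_TCP || f->ip_proto == IPPROTO_UDP;
}

Flows the predicate rejects are simply left to the regular RSS queue selection.
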
bool hns3_is_phys_func(struct pci_dev *pdev)
@@ -1802,8 +1846,7 @@ static int hns3_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
struct hnae3_ae_dev *ae_dev;
int ret;
- ae_dev = devm_kzalloc(&pdev->dev, sizeof(*ae_dev),
- GFP_KERNEL);
+ ae_dev = devm_kzalloc(&pdev->dev, sizeof(*ae_dev), GFP_KERNEL);
if (!ae_dev) {
ret = -ENOMEM;
return ret;
@@ -1811,7 +1854,6 @@ static int hns3_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
ae_dev->pdev = pdev;
ae_dev->flag = ent->driver_data;
- ae_dev->dev_type = HNAE3_DEV_KNIC;
ae_dev->reset_type = HNAE3_NONE_RESET;
hns3_get_dev_capability(pdev, ae_dev);
pci_set_drvdata(pdev, ae_dev);
@@ -1895,9 +1937,9 @@ static pci_ers_result_t hns3_error_detected(struct pci_dev *pdev,
if (state == pci_channel_io_perm_failure)
return PCI_ERS_RESULT_DISCONNECT;
- if (!ae_dev) {
+ if (!ae_dev || !ae_dev->ops) {
dev_err(&pdev->dev,
- "Can't recover - error happened during device init\n");
+ "Can't recover - error happened before device initialized\n");
return PCI_ERS_RESULT_NONE;
}
@@ -1912,14 +1954,23 @@ static pci_ers_result_t hns3_error_detected(struct pci_dev *pdev,
static pci_ers_result_t hns3_slot_reset(struct pci_dev *pdev)
{
struct hnae3_ae_dev *ae_dev = pci_get_drvdata(pdev);
+ const struct hnae3_ae_ops *ops;
+ enum hnae3_reset_type reset_type;
struct device *dev = &pdev->dev;
- dev_info(dev, "requesting reset due to PCI error\n");
+ if (!ae_dev || !ae_dev->ops)
+ return PCI_ERS_RESULT_NONE;
+ ops = ae_dev->ops;
/* request the reset */
- if (ae_dev->ops->reset_event) {
- if (!ae_dev->override_pci_need_reset)
- ae_dev->ops->reset_event(pdev, NULL);
+ if (ops->reset_event) {
+ if (ae_dev->hw_err_reset_req) {
+ reset_type = ops->get_reset_level(ae_dev,
+ &ae_dev->hw_err_reset_req);
+ ops->set_default_reset_request(ae_dev, reset_type);
+ dev_info(dev, "requesting reset due to PCI error\n");
+ ops->reset_event(pdev, NULL);
+ }
return PCI_ERS_RESULT_RECOVERED;
}
@@ -2168,7 +2219,7 @@ out_buffer_fail:
return ret;
}
-/* detach a in-used buffer and replace with a reserved one */
+/* detach a in-used buffer and replace with a reserved one */
static void hns3_replace_buffer(struct hns3_enet_ring *ring, int i,
struct hns3_desc_cb *res_cb)
{
@@ -2181,8 +2232,8 @@ static void hns3_replace_buffer(struct hns3_enet_ring *ring, int i,
static void hns3_reuse_buffer(struct hns3_enet_ring *ring, int i)
{
ring->desc_cb[i].reuse_flag = 0;
- ring->desc[i].addr = cpu_to_le64(ring->desc_cb[i].dma
- + ring->desc_cb[i].page_offset);
+ ring->desc[i].addr = cpu_to_le64(ring->desc_cb[i].dma +
+ ring->desc_cb[i].page_offset);
ring->desc[i].rx.bd_base_info = 0;
}
@@ -2284,8 +2335,8 @@ static int hns3_desc_unused(struct hns3_enet_ring *ring)
return ((ntc >= ntu) ? 0 : ring->desc_num) + ntc - ntu;
}
-static void
-hns3_nic_alloc_rx_buffers(struct hns3_enet_ring *ring, int cleand_count)
+static void hns3_nic_alloc_rx_buffers(struct hns3_enet_ring *ring,
+ int cleand_count)
{
struct hns3_desc_cb *desc_cb;
struct hns3_desc_cb res_cbs;
@@ -2338,7 +2389,7 @@ static void hns3_nic_reuse_page(struct sk_buff *skb, int i,
/* Avoid re-using remote pages, or the stack is still using the page
* when page_offset rollback to zero, flag default unreuse
*/
- if (unlikely(page_to_nid(desc_cb->priv) != numa_node_id()) ||
+ if (unlikely(page_to_nid(desc_cb->priv) != numa_mem_id()) ||
(!desc_cb->page_offset && page_count(desc_cb->priv) > 1))
return;
@@ -2347,7 +2398,7 @@ static void hns3_nic_reuse_page(struct sk_buff *skb, int i,
if (desc_cb->page_offset + truesize <= hnae3_page_size(ring)) {
desc_cb->reuse_flag = 1;
- /* Bump ref count on page before it is given*/
+ /* Bump ref count on page before it is given */
get_page(desc_cb->priv);
} else if (page_count(desc_cb->priv) == 1) {
desc_cb->reuse_flag = 1;
@@ -2356,13 +2407,13 @@ static void hns3_nic_reuse_page(struct sk_buff *skb, int i,
}
}
-static int hns3_gro_complete(struct sk_buff *skb)
+static int hns3_gro_complete(struct sk_buff *skb, u32 l234info)
{
__be16 type = skb->protocol;
struct tcphdr *th;
int depth = 0;
- while (type == htons(ETH_P_8021Q)) {
+ while (eth_type_vlan(type)) {
struct vlan_hdr *vh;
if ((depth + VLAN_HLEN) > skb_headlen(skb))
@@ -2373,10 +2424,24 @@ static int hns3_gro_complete(struct sk_buff *skb)
depth += VLAN_HLEN;
}
+ skb_set_network_header(skb, depth);
+
if (type == htons(ETH_P_IP)) {
+ const struct iphdr *iph = ip_hdr(skb);
+
depth += sizeof(struct iphdr);
+ skb_set_transport_header(skb, depth);
+ th = tcp_hdr(skb);
+ th->check = ~tcp_v4_check(skb->len - depth, iph->saddr,
+ iph->daddr, 0);
} else if (type == htons(ETH_P_IPV6)) {
+ const struct ipv6hdr *iph = ipv6_hdr(skb);
+
depth += sizeof(struct ipv6hdr);
+ skb_set_transport_header(skb, depth);
+ th = tcp_hdr(skb);
+ th->check = ~tcp_v6_check(skb->len - depth, &iph->saddr,
+ &iph->daddr, 0);
} else {
netdev_err(skb->dev,
"Error: FW GRO supports only IPv4/IPv6, not 0x%04x, depth: %d\n",
@@ -2384,13 +2449,16 @@ static int hns3_gro_complete(struct sk_buff *skb)
return -EFAULT;
}
- th = (struct tcphdr *)(skb->data + depth);
skb_shinfo(skb)->gso_segs = NAPI_GRO_CB(skb)->count;
if (th->cwr)
skb_shinfo(skb)->gso_type |= SKB_GSO_TCP_ECN;
- skb->ip_summed = CHECKSUM_UNNECESSARY;
+ if (l234info & BIT(HNS3_RXD_GRO_FIXID_B))
+ skb_shinfo(skb)->gso_type |= SKB_GSO_TCP_FIXEDID;
+ skb->csum_start = (unsigned char *)th - skb->head;
+ skb->csum_offset = offsetof(struct tcphdr, check);
+ skb->ip_summed = CHECKSUM_PARTIAL;
return 0;
}
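
The reworked hns3_gro_complete above hands hardware-GRO'd packets to the stack as CHECKSUM_PARTIAL: the ~tcp_v4_check()/~tcp_v6_check() expressions store the uninverted TCP pseudo-header sum in th->check, and csum_start/csum_offset point at the TCP checksum field so the remaining checksum can be completed later. A small standalone sketch of the IPv4 pseudo-header arithmetic, purely as an illustration (the kernel uses its own csum helpers for this):

#include <stdint.h>
#include <arpa/inet.h>   /* ntohl() */
#include <netinet/in.h>  /* IPPROTO_TCP */

/* Fold a 32-bit ones'-complement accumulator down to 16 bits. */
static uint16_t fold16(uint32_t sum)
{
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)sum;
}

/* Uninverted IPv4 TCP pseudo-header sum: the kind of seed left in th->check
 * for CHECKSUM_PARTIAL, which the checksumming engine later completes over
 * the TCP header and payload.
 * saddr/daddr are in network byte order; tcp_len = TCP header + payload.
 * The result here is in host byte order; a header on the wire would want
 * htons() applied before storing it.
 */
static uint16_t tcp_v4_pseudo_seed(uint32_t saddr, uint32_t daddr,
				   uint16_t tcp_len)
{
	uint32_t s = ntohl(saddr);
	uint32_t d = ntohl(daddr);
	uint32_t sum = 0;

	sum += (s >> 16) + (s & 0xffff);
	sum += (d >> 16) + (d & 0xffff);
	sum += IPPROTO_TCP;
	sum += tcp_len;

	return fold16(sum);
}
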
@@ -2508,7 +2576,7 @@ static bool hns3_parse_vlan_tag(struct hns3_enet_ring *ring,
}
}
-static int hns3_alloc_skb(struct hns3_enet_ring *ring, int length,
+static int hns3_alloc_skb(struct hns3_enet_ring *ring, unsigned int length,
unsigned char *va)
{
#define HNS3_NEED_ADD_FRAG 1
@@ -2537,7 +2605,7 @@ static int hns3_alloc_skb(struct hns3_enet_ring *ring, int length,
memcpy(__skb_put(skb, length), va, ALIGN(length, sizeof(long)));
/* We can reuse buffer as-is, just make sure it is local */
- if (likely(page_to_nid(desc_cb->priv) == numa_node_id()))
+ if (likely(page_to_nid(desc_cb->priv) == numa_mem_id()))
desc_cb->reuse_flag = 1;
else /* This page cannot be reused so discard it */
put_page(desc_cb->priv);
@@ -2574,7 +2642,7 @@ static int hns3_add_frag(struct hns3_enet_ring *ring, struct hns3_desc *desc,
*/
if (pending) {
pre_bd = (ring->next_to_clean - 1 + ring->desc_num) %
- ring->desc_num;
+ ring->desc_num;
pre_desc = &ring->desc[pre_bd];
bd_base_info = le32_to_cpu(pre_desc->rx.bd_base_info);
} else {
@@ -2628,21 +2696,22 @@ static int hns3_set_gro_and_checksum(struct hns3_enet_ring *ring,
struct sk_buff *skb, u32 l234info,
u32 bd_base_info, u32 ol_info)
{
- u16 gro_count;
u32 l3_type;
- gro_count = hnae3_get_field(l234info, HNS3_RXD_GRO_COUNT_M,
- HNS3_RXD_GRO_COUNT_S);
+ skb_shinfo(skb)->gso_size = hnae3_get_field(bd_base_info,
+ HNS3_RXD_GRO_SIZE_M,
+ HNS3_RXD_GRO_SIZE_S);
/* if there is no HW GRO, do not set gro params */
- if (!gro_count) {
+ if (!skb_shinfo(skb)->gso_size) {
hns3_rx_checksum(ring, skb, l234info, bd_base_info, ol_info);
return 0;
}
- NAPI_GRO_CB(skb)->count = gro_count;
+ NAPI_GRO_CB(skb)->count = hnae3_get_field(l234info,
+ HNS3_RXD_GRO_COUNT_M,
+ HNS3_RXD_GRO_COUNT_S);
- l3_type = hnae3_get_field(l234info, HNS3_RXD_L3ID_M,
- HNS3_RXD_L3ID_S);
+ l3_type = hnae3_get_field(l234info, HNS3_RXD_L3ID_M, HNS3_RXD_L3ID_S);
if (l3_type == HNS3_L3_TYPE_IPV4)
skb_shinfo(skb)->gso_type = SKB_GSO_TCPV4;
else if (l3_type == HNS3_L3_TYPE_IPV6)
@@ -2650,11 +2719,7 @@ static int hns3_set_gro_and_checksum(struct hns3_enet_ring *ring,
else
return -EFAULT;
- skb_shinfo(skb)->gso_size = hnae3_get_field(bd_base_info,
- HNS3_RXD_GRO_SIZE_M,
- HNS3_RXD_GRO_SIZE_S);
-
- return hns3_gro_complete(skb);
+ return hns3_gro_complete(skb, l234info);
}
static void hns3_set_rx_skb_rss_type(struct hns3_enet_ring *ring,
@@ -2703,14 +2768,6 @@ static int hns3_handle_bdinfo(struct hns3_enet_ring *ring, struct sk_buff *skb)
vlan_tag);
}
- if (unlikely(!(bd_base_info & BIT(HNS3_RXD_VLD_B)))) {
- u64_stats_update_begin(&ring->syncp);
- ring->stats.non_vld_descs++;
- u64_stats_update_end(&ring->syncp);
-
- return -EINVAL;
- }
-
if (unlikely(!desc->rx.pkt_len || (l234info & (BIT(HNS3_RXD_TRUNCAT_B) |
BIT(HNS3_RXD_L2E_B))))) {
u64_stats_update_begin(&ring->syncp);
@@ -2762,8 +2819,8 @@ static int hns3_handle_rx_bd(struct hns3_enet_ring *ring,
struct sk_buff *skb = ring->skb;
struct hns3_desc_cb *desc_cb;
struct hns3_desc *desc;
+ unsigned int length;
u32 bd_base_info;
- int length;
int ret;
desc = &ring->desc[ring->next_to_clean];
@@ -2828,14 +2885,14 @@ static int hns3_handle_rx_bd(struct hns3_enet_ring *ring,
return ret;
}
+ skb_record_rx_queue(skb, ring->tqp->tqp_index);
*out_skb = skb;
return 0;
}
-int hns3_clean_rx_ring(
- struct hns3_enet_ring *ring, int budget,
- void (*rx_fn)(struct hns3_enet_ring *, struct sk_buff *))
+int hns3_clean_rx_ring(struct hns3_enet_ring *ring, int budget,
+ void (*rx_fn)(struct hns3_enet_ring *, struct sk_buff *))
{
#define RCB_NOF_ALLOC_RX_BUFF_ONCE 16
int recv_pkts, recv_bds, clean_count, err;
@@ -2887,42 +2944,25 @@ int hns3_clean_rx_ring(
out:
/* Make sure all data has been written before submitting */
if (clean_count + unused_count > 0)
- hns3_nic_alloc_rx_buffers(ring,
- clean_count + unused_count);
+ hns3_nic_alloc_rx_buffers(ring, clean_count + unused_count);
return recv_pkts;
}
-static bool hns3_get_new_int_gl(struct hns3_enet_ring_group *ring_group)
+static bool hns3_get_new_flow_lvl(struct hns3_enet_ring_group *ring_group)
{
- struct hns3_enet_tqp_vector *tqp_vector =
- ring_group->ring->tqp_vector;
+#define HNS3_RX_LOW_BYTE_RATE 10000
+#define HNS3_RX_MID_BYTE_RATE 20000
+#define HNS3_RX_ULTRA_PACKET_RATE 40
+
enum hns3_flow_level_range new_flow_level;
- int packets_per_msecs;
- int bytes_per_msecs;
+ struct hns3_enet_tqp_vector *tqp_vector;
+ int packets_per_msecs, bytes_per_msecs;
u32 time_passed_ms;
- u16 new_int_gl;
-
- if (!tqp_vector->last_jiffies)
- return false;
-
- if (ring_group->total_packets == 0) {
- ring_group->coal.int_gl = HNS3_INT_GL_50K;
- ring_group->coal.flow_level = HNS3_FLOW_LOW;
- return true;
- }
- /* Simple throttlerate management
- * 0-10MB/s lower (50000 ints/s)
- * 10-20MB/s middle (20000 ints/s)
- * 20-1249MB/s high (18000 ints/s)
- * > 40000pps ultra (8000 ints/s)
- */
- new_flow_level = ring_group->coal.flow_level;
- new_int_gl = ring_group->coal.int_gl;
+ tqp_vector = ring_group->ring->tqp_vector;
time_passed_ms =
jiffies_to_msecs(jiffies - tqp_vector->last_jiffies);
-
if (!time_passed_ms)
return false;
@@ -2932,9 +2972,14 @@ static bool hns3_get_new_int_gl(struct hns3_enet_ring_group *ring_group)
do_div(ring_group->total_bytes, time_passed_ms);
bytes_per_msecs = ring_group->total_bytes;
-#define HNS3_RX_LOW_BYTE_RATE 10000
-#define HNS3_RX_MID_BYTE_RATE 20000
+ new_flow_level = ring_group->coal.flow_level;
+ /* Simple throttlerate management
+ * 0-10MB/s lower (50000 ints/s)
+ * 10-20MB/s middle (20000 ints/s)
+ * 20-1249MB/s high (18000 ints/s)
+ * > 40000pps ultra (8000 ints/s)
+ */
switch (new_flow_level) {
case HNS3_FLOW_LOW:
if (bytes_per_msecs > HNS3_RX_LOW_BYTE_RATE)
@@ -2954,13 +2999,40 @@ static bool hns3_get_new_int_gl(struct hns3_enet_ring_group *ring_group)
break;
}
-#define HNS3_RX_ULTRA_PACKET_RATE 40
-
if (packets_per_msecs > HNS3_RX_ULTRA_PACKET_RATE &&
&tqp_vector->rx_group == ring_group)
new_flow_level = HNS3_FLOW_ULTRA;
- switch (new_flow_level) {
+ ring_group->total_bytes = 0;
+ ring_group->total_packets = 0;
+ ring_group->coal.flow_level = new_flow_level;
+
+ return true;
+}
+
+static bool hns3_get_new_int_gl(struct hns3_enet_ring_group *ring_group)
+{
+ struct hns3_enet_tqp_vector *tqp_vector;
+ u16 new_int_gl;
+
+ if (!ring_group->ring)
+ return false;
+
+ tqp_vector = ring_group->ring->tqp_vector;
+ if (!tqp_vector->last_jiffies)
+ return false;
+
+ if (ring_group->total_packets == 0) {
+ ring_group->coal.int_gl = HNS3_INT_GL_50K;
+ ring_group->coal.flow_level = HNS3_FLOW_LOW;
+ return true;
+ }
+
+ if (!hns3_get_new_flow_lvl(ring_group))
+ return false;
+
+ new_int_gl = ring_group->coal.int_gl;
+ switch (ring_group->coal.flow_level) {
case HNS3_FLOW_LOW:
new_int_gl = HNS3_INT_GL_50K;
break;
@@ -2977,9 +3049,6 @@ static bool hns3_get_new_int_gl(struct hns3_enet_ring_group *ring_group)
break;
}
- ring_group->total_bytes = 0;
- ring_group->total_packets = 0;
- ring_group->coal.flow_level = new_flow_level;
if (new_int_gl != ring_group->coal.int_gl) {
ring_group->coal.int_gl = new_int_gl;
return true;
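
The refactor above splits the coalescing logic: hns3_get_new_flow_lvl measures bytes and packets per millisecond since the last poll and reclassifies the flow level (with the >40000 pps "ultra" check applied to the RX group), and hns3_get_new_int_gl then maps the level to an interrupt gap value. A rough standalone sketch of that kind of rate classification, using the thresholds from the driver comment; the MID/HIGH transitions here are illustrative, since only the LOW case is visible in the hunk:

#include <stdint.h>

/* Thresholds from the driver comment: bytes per millisecond and
 * packets per millisecond. */
#define RX_LOW_BYTE_RATE      10000
#define RX_MID_BYTE_RATE      20000
#define RX_ULTRA_PACKET_RATE  40

enum flow_level { FLOW_LOW, FLOW_MID, FLOW_HIGH, FLOW_ULTRA };

/* Classify the measured rates into a flow level, starting from the
 * previous level so the decision has a little hysteresis. */
static enum flow_level classify_flow(enum flow_level prev,
				     uint32_t bytes_per_ms,
				     uint32_t pkts_per_ms)
{
	enum flow_level level = prev;

	switch (level) {
	case FLOW_LOW:
		if (bytes_per_ms > RX_LOW_BYTE_RATE)
			level = FLOW_MID;
		break;
	case FLOW_MID:
		if (bytes_per_ms > RX_MID_BYTE_RATE)
			level = FLOW_HIGH;
		else if (bytes_per_ms <= RX_LOW_BYTE_RATE)
			level = FLOW_LOW;
		break;
	case FLOW_HIGH:
	case FLOW_ULTRA:
		if (bytes_per_ms <= RX_MID_BYTE_RATE)
			level = FLOW_MID;
		break;
	}

	if (pkts_per_ms > RX_ULTRA_PACKET_RATE)
		level = FLOW_ULTRA;

	return level;
}
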
@@ -3280,6 +3349,7 @@ static int hns3_nic_alloc_vector_data(struct hns3_nic_priv *priv)
if (!vector)
return -ENOMEM;
+ /* save the actual available vector number */
vector_num = h->ae_algo->ops->get_vector(h, vector_num, vector);
priv->vector_num = vector_num;
@@ -3331,8 +3401,6 @@ static void hns3_nic_uninit_vector_data(struct hns3_nic_priv *priv)
hns3_free_vector_ring_chain(tqp_vector, &vector_ring_chain);
if (tqp_vector->irq_init_flag == HNS3_VECTOR_INITED) {
- irq_set_affinity_notifier(tqp_vector->vector_irq,
- NULL);
irq_set_affinity_hint(tqp_vector->vector_irq, NULL);
free_irq(tqp_vector->vector_irq, tqp_vector);
tqp_vector->irq_init_flag = HNS3_VECTOR_NOT_INITED;
@@ -3364,7 +3432,7 @@ static int hns3_nic_dealloc_vector_data(struct hns3_nic_priv *priv)
}
static int hns3_ring_get_cfg(struct hnae3_queue *q, struct hns3_nic_priv *priv,
- int ring_type)
+ unsigned int ring_type)
{
struct hns3_nic_ring_data *ring_data = priv->ring_data;
int queue_num = priv->ae_handle->kinfo.num_tqps;
@@ -3550,8 +3618,7 @@ static void hns3_init_ring_hw(struct hns3_enet_ring *ring)
struct hnae3_queue *q = ring->tqp;
if (!HNAE3_IS_TX_RING(ring)) {
- hns3_write_dev(q, HNS3_RING_RX_RING_BASEADDR_L_REG,
- (u32)dma);
+ hns3_write_dev(q, HNS3_RING_RX_RING_BASEADDR_L_REG, (u32)dma);
hns3_write_dev(q, HNS3_RING_RX_RING_BASEADDR_H_REG,
(u32)((dma >> 31) >> 1));
@@ -3851,6 +3918,8 @@ static void hns3_client_uninit(struct hnae3_handle *handle, bool reset)
hns3_client_stop(handle);
+ hns3_uninit_phy(netdev);
+
if (!test_and_clear_bit(HNS3_NIC_STATE_INITED, &priv->state)) {
netdev_warn(netdev, "already uninitialized\n");
goto out_netdev_free;
@@ -3858,9 +3927,7 @@ static void hns3_client_uninit(struct hnae3_handle *handle, bool reset)
hns3_del_all_fd_rules(netdev, true);
- hns3_force_clear_all_rx_ring(handle);
-
- hns3_uninit_phy(netdev);
+ hns3_clear_all_ring(handle, true);
hns3_nic_uninit_vector_data(priv);
@@ -3997,8 +4064,7 @@ static int hns3_clear_rx_ring(struct hns3_enet_ring *ring)
ret);
return ret;
}
- hns3_replace_buffer(ring, ring->next_to_use,
- &res_cbs);
+ hns3_replace_buffer(ring, ring->next_to_use, &res_cbs);
}
ring_ptr_move_fw(ring, next_to_use);
}
@@ -4030,40 +4096,26 @@ static void hns3_force_clear_rx_ring(struct hns3_enet_ring *ring)
}
}
-static void hns3_force_clear_all_rx_ring(struct hnae3_handle *h)
+static void hns3_clear_all_ring(struct hnae3_handle *h, bool force)
{
struct net_device *ndev = h->kinfo.netdev;
struct hns3_nic_priv *priv = netdev_priv(ndev);
- struct hns3_enet_ring *ring;
u32 i;
for (i = 0; i < h->kinfo.num_tqps; i++) {
- ring = priv->ring_data[i + h->kinfo.num_tqps].ring;
- hns3_force_clear_rx_ring(ring);
- }
-}
-
-static void hns3_clear_all_ring(struct hnae3_handle *h)
-{
- struct net_device *ndev = h->kinfo.netdev;
- struct hns3_nic_priv *priv = netdev_priv(ndev);
- u32 i;
-
- for (i = 0; i < h->kinfo.num_tqps; i++) {
- struct netdev_queue *dev_queue;
struct hns3_enet_ring *ring;
ring = priv->ring_data[i].ring;
hns3_clear_tx_ring(ring);
- dev_queue = netdev_get_tx_queue(ndev,
- priv->ring_data[i].queue_index);
- netdev_tx_reset_queue(dev_queue);
ring = priv->ring_data[i + h->kinfo.num_tqps].ring;
/* Continue to clear other rings even if clearing some
* rings failed.
*/
- hns3_clear_rx_ring(ring);
+ if (force)
+ hns3_force_clear_rx_ring(ring);
+ else
+ hns3_clear_rx_ring(ring);
}
}
@@ -4173,7 +4225,7 @@ static int hns3_reset_notify_up_enet(struct hnae3_handle *handle)
if (ret) {
set_bit(HNS3_NIC_STATE_RESETTING, &priv->state);
netdev_err(kinfo->netdev,
- "hns net up fail, ret=%d!\n", ret);
+ "net up fail, ret=%d!\n", ret);
return ret;
}
}
@@ -4251,12 +4303,8 @@ static int hns3_reset_notify_restore_enet(struct hnae3_handle *handle)
vlan_filter_enable = netdev->flags & IFF_PROMISC ? false : true;
hns3_enable_vlan_filter(netdev, vlan_filter_enable);
- /* Hardware table is only clear when pf resets */
- if (!(handle->flags & HNAE3_SUPPORT_VF)) {
- ret = hns3_restore_vlan(netdev);
- if (ret)
- return ret;
- }
+ if (handle->ae_algo->ops->restore_vlan_table)
+ handle->ae_algo->ops->restore_vlan_table(handle);
return hns3_restore_fd_rules(netdev);
}
@@ -4272,7 +4320,8 @@ static int hns3_reset_notify_uninit_enet(struct hnae3_handle *handle)
return 0;
}
- hns3_force_clear_all_rx_ring(handle);
+ hns3_clear_all_ring(handle, true);
+ hns3_reset_tx_queue(priv->ae_handle);
hns3_nic_uninit_vector_data(priv);
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
index c14480f9b625..848b866761df 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
@@ -145,7 +145,7 @@ enum hns3_nic_state {
#define HNS3_RXD_TSIND_M (0x7 << HNS3_RXD_TSIND_S)
#define HNS3_RXD_LKBK_B 15
#define HNS3_RXD_GRO_SIZE_S 16
-#define HNS3_RXD_GRO_SIZE_M (0x3ff << HNS3_RXD_GRO_SIZE_S)
+#define HNS3_RXD_GRO_SIZE_M (0x3fff << HNS3_RXD_GRO_SIZE_S)
#define HNS3_TXD_L3T_S 0
#define HNS3_TXD_L3T_M (0x3 << HNS3_TXD_L3T_S)
@@ -384,7 +384,6 @@ struct ring_stats {
u64 rx_err_cnt;
u64 reuse_pg_cnt;
u64 err_pkt_len;
- u64 non_vld_descs;
u64 err_bd_num;
u64 l2_err;
u64 l3l4_csum_err;
@@ -417,7 +416,7 @@ struct hns3_enet_ring {
*/
int next_to_clean;
- int pull_len; /* head length for current packet */
+ u32 pull_len; /* head length for current packet */
u32 frag_num;
unsigned char *va; /* first buffer address for current packet */
@@ -446,25 +445,6 @@ enum hns3_flow_level_range {
HNS3_FLOW_ULTRA = 3,
};
-enum hns3_link_mode_bits {
- HNS3_LM_FIBRE_BIT = BIT(0),
- HNS3_LM_AUTONEG_BIT = BIT(1),
- HNS3_LM_TP_BIT = BIT(2),
- HNS3_LM_PAUSE_BIT = BIT(3),
- HNS3_LM_BACKPLANE_BIT = BIT(4),
- HNS3_LM_10BASET_HALF_BIT = BIT(5),
- HNS3_LM_10BASET_FULL_BIT = BIT(6),
- HNS3_LM_100BASET_HALF_BIT = BIT(7),
- HNS3_LM_100BASET_FULL_BIT = BIT(8),
- HNS3_LM_1000BASET_FULL_BIT = BIT(9),
- HNS3_LM_10000BASEKR_FULL_BIT = BIT(10),
- HNS3_LM_25000BASEKR_FULL_BIT = BIT(11),
- HNS3_LM_40000BASELR4_FULL_BIT = BIT(12),
- HNS3_LM_50000BASEKR2_FULL_BIT = BIT(13),
- HNS3_LM_100000BASEKR4_FULL_BIT = BIT(14),
- HNS3_LM_COUNT = 15
-};
-
#define HNS3_INT_GL_MAX 0x1FE0
#define HNS3_INT_GL_50K 0x0014
#define HNS3_INT_GL_20K 0x0032
@@ -550,7 +530,6 @@ struct hns3_nic_priv {
struct notifier_block notifier_block;
/* Vxlan/Geneve information */
struct hns3_udp_tunnel udp_tnl[HNS3_UDP_TNL_MAX];
- unsigned long active_vlans[BITS_TO_LONGS(VLAN_N_VID)];
struct hns3_enet_coalesce tx_coal;
struct hns3_enet_coalesce rx_coal;
};
@@ -631,7 +610,7 @@ static inline bool hns3_nic_resetting(struct net_device *netdev)
#define hnae3_buf_size(_ring) ((_ring)->buf_size)
#define hnae3_page_order(_ring) (get_order(hnae3_buf_size(_ring)))
-#define hnae3_page_size(_ring) (PAGE_SIZE << hnae3_page_order(_ring))
+#define hnae3_page_size(_ring) (PAGE_SIZE << (u32)hnae3_page_order(_ring))
/* iterator for handling rings in ring group */
#define hns3_for_each_ring(pos, head) \
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
index d1588ea6132c..5bff98a9b0dc 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
@@ -44,7 +44,6 @@ static const struct hns3_stats hns3_rxq_stats[] = {
HNS3_TQP_STAT("errors", rx_err_cnt),
HNS3_TQP_STAT("reuse_pg_cnt", reuse_pg_cnt),
HNS3_TQP_STAT("err_pkt_len", err_pkt_len),
- HNS3_TQP_STAT("non_vld_descs", non_vld_descs),
HNS3_TQP_STAT("err_bd_num", err_bd_num),
HNS3_TQP_STAT("l2_err", l2_err),
HNS3_TQP_STAT("l3l4_csum_err", l3l4_csum_err),
@@ -60,6 +59,7 @@ static const struct hns3_stats hns3_rxq_stats[] = {
#define HNS3_NIC_LB_TEST_PKT_NUM 1
#define HNS3_NIC_LB_TEST_RING_ID 0
#define HNS3_NIC_LB_TEST_PACKET_SIZE 128
+#define HNS3_NIC_LB_SETUP_USEC 10000
/* Nic loopback test err */
#define HNS3_NIC_LB_TEST_NO_MEM_ERR 1
@@ -117,7 +117,7 @@ static int hns3_lp_up(struct net_device *ndev, enum hnae3_loop loop_mode)
return ret;
ret = hns3_lp_setup(ndev, loop_mode, true);
- usleep_range(10000, 20000);
+ usleep_range(HNS3_NIC_LB_SETUP_USEC, HNS3_NIC_LB_SETUP_USEC * 2);
return ret;
}
@@ -132,7 +132,7 @@ static int hns3_lp_down(struct net_device *ndev, enum hnae3_loop loop_mode)
return ret;
}
- usleep_range(10000, 20000);
+ usleep_range(HNS3_NIC_LB_SETUP_USEC, HNS3_NIC_LB_SETUP_USEC * 2);
return 0;
}
@@ -149,6 +149,12 @@ static void hns3_lp_setup_skb(struct sk_buff *skb)
packet = skb_put(skb, HNS3_NIC_LB_TEST_PACKET_SIZE);
memcpy(ethh->h_dest, ndev->dev_addr, ETH_ALEN);
+
+ /* The dst mac addr of the loopback packet is the same as the host's
+ * mac addr, so the SSU component may loop the packet back to the host
+ * before it reaches the mac or serdes, which would defeat the purpose
+ * of the mac or serdes selftest.
+ */
ethh->h_dest[5] += 0x1f;
eth_zero_addr(ethh->h_source);
ethh->h_proto = htons(ETH_P_ARP);
@@ -243,11 +249,13 @@ static int hns3_lp_run_test(struct net_device *ndev, enum hnae3_loop mode)
skb_get(skb);
tx_ret = hns3_nic_net_xmit(skb, ndev);
- if (tx_ret == NETDEV_TX_OK)
+ if (tx_ret == NETDEV_TX_OK) {
good_cnt++;
- else
+ } else {
+ kfree_skb(skb);
netdev_err(ndev, "hns3_lb_run_test xmit failed: %d\n",
tx_ret);
+ }
}
if (good_cnt != HNS3_NIC_LB_TEST_PKT_NUM) {
ret_val = HNS3_NIC_LB_TEST_TX_CNT_ERR;
@@ -327,6 +335,13 @@ static void hns3_self_test(struct net_device *ndev,
h->ae_algo->ops->enable_vlan_filter(h, false);
#endif
+ /* Tell firmware to stop mac autoneg before the loopback test starts,
+ * otherwise the loopback test may fail while the port is still
+ * negotiating.
+ */
+ if (h->ae_algo->ops->halt_autoneg)
+ h->ae_algo->ops->halt_autoneg(h, true);
+
set_bit(HNS3_NIC_STATE_TESTING, &priv->state);
for (i = 0; i < HNS3_SELF_TEST_TYPE_NUM; i++) {
@@ -349,6 +364,9 @@ static void hns3_self_test(struct net_device *ndev,
clear_bit(HNS3_NIC_STATE_TESTING, &priv->state);
+ if (h->ae_algo->ops->halt_autoneg)
+ h->ae_algo->ops->halt_autoneg(h, false);
+
#if IS_ENABLED(CONFIG_VLAN_8021Q)
if (dis_vlan_filter)
h->ae_algo->ops->enable_vlan_filter(h, true);
@@ -435,7 +453,7 @@ static void hns3_get_strings(struct net_device *netdev, u32 stringset, u8 *data)
switch (stringset) {
case ETH_SS_STATS:
buff = hns3_get_strings_tqps(h, buff);
- h->ae_algo->ops->get_strings(h, stringset, (u8 *)buff);
+ ops->get_strings(h, stringset, (u8 *)buff);
break;
case ETH_SS_TEST:
ops->get_strings(h, stringset, data);
@@ -510,6 +528,11 @@ static void hns3_get_drvinfo(struct net_device *netdev,
struct hns3_nic_priv *priv = netdev_priv(netdev);
struct hnae3_handle *h = priv->ae_handle;
+ if (!h->ae_algo->ops->get_fw_version) {
+ netdev_err(netdev, "could not get fw version!\n");
+ return;
+ }
+
strncpy(drvinfo->version, hns3_driver_version,
sizeof(drvinfo->version));
drvinfo->version[sizeof(drvinfo->version) - 1] = '\0';
@@ -530,7 +553,7 @@ static u32 hns3_get_link(struct net_device *netdev)
{
struct hnae3_handle *h = hns3_get_handle(netdev);
- if (h->ae_algo && h->ae_algo->ops && h->ae_algo->ops->get_status)
+ if (h->ae_algo->ops->get_status)
return h->ae_algo->ops->get_status(h);
else
return 0;
@@ -560,7 +583,7 @@ static void hns3_get_pauseparam(struct net_device *netdev,
{
struct hnae3_handle *h = hns3_get_handle(netdev);
- if (h->ae_algo && h->ae_algo->ops && h->ae_algo->ops->get_pauseparam)
+ if (h->ae_algo->ops->get_pauseparam)
h->ae_algo->ops->get_pauseparam(h, &param->autoneg,
&param->rx_pause, &param->tx_pause);
}
@@ -610,9 +633,6 @@ static int hns3_get_link_ksettings(struct net_device *netdev,
u8 media_type;
u8 link_stat;
- if (!h->ae_algo || !h->ae_algo->ops)
- return -EOPNOTSUPP;
-
ops = h->ae_algo->ops;
if (ops->get_media_type)
ops->get_media_type(h, &media_type, &module_type);
@@ -740,8 +760,7 @@ static u32 hns3_get_rss_key_size(struct net_device *netdev)
{
struct hnae3_handle *h = hns3_get_handle(netdev);
- if (!h->ae_algo || !h->ae_algo->ops ||
- !h->ae_algo->ops->get_rss_key_size)
+ if (!h->ae_algo->ops->get_rss_key_size)
return 0;
return h->ae_algo->ops->get_rss_key_size(h);
@@ -751,8 +770,7 @@ static u32 hns3_get_rss_indir_size(struct net_device *netdev)
{
struct hnae3_handle *h = hns3_get_handle(netdev);
- if (!h->ae_algo || !h->ae_algo->ops ||
- !h->ae_algo->ops->get_rss_indir_size)
+ if (!h->ae_algo->ops->get_rss_indir_size)
return 0;
return h->ae_algo->ops->get_rss_indir_size(h);
@@ -763,7 +781,7 @@ static int hns3_get_rss(struct net_device *netdev, u32 *indir, u8 *key,
{
struct hnae3_handle *h = hns3_get_handle(netdev);
- if (!h->ae_algo || !h->ae_algo->ops || !h->ae_algo->ops->get_rss)
+ if (!h->ae_algo->ops->get_rss)
return -EOPNOTSUPP;
return h->ae_algo->ops->get_rss(h, indir, key, hfunc);
@@ -774,7 +792,7 @@ static int hns3_set_rss(struct net_device *netdev, const u32 *indir,
{
struct hnae3_handle *h = hns3_get_handle(netdev);
- if (!h->ae_algo || !h->ae_algo->ops || !h->ae_algo->ops->set_rss)
+ if (!h->ae_algo->ops->set_rss)
return -EOPNOTSUPP;
if ((h->pdev->revision == 0x20 &&
@@ -799,9 +817,6 @@ static int hns3_get_rxnfc(struct net_device *netdev,
{
struct hnae3_handle *h = hns3_get_handle(netdev);
- if (!h->ae_algo || !h->ae_algo->ops)
- return -EOPNOTSUPP;
-
switch (cmd->cmd) {
case ETHTOOL_GRXRINGS:
cmd->data = h->kinfo.num_tqps;
@@ -915,9 +930,6 @@ static int hns3_set_rxnfc(struct net_device *netdev, struct ethtool_rxnfc *cmd)
{
struct hnae3_handle *h = hns3_get_handle(netdev);
- if (!h->ae_algo || !h->ae_algo->ops)
- return -EOPNOTSUPP;
-
switch (cmd->cmd) {
case ETHTOOL_SRXFH:
if (h->ae_algo->ops->set_rss_tuple)
@@ -1193,7 +1205,7 @@ static int hns3_set_phys_id(struct net_device *netdev,
{
struct hnae3_handle *h = hns3_get_handle(netdev);
- if (!h->ae_algo || !h->ae_algo->ops || !h->ae_algo->ops->set_led_id)
+ if (!h->ae_algo->ops->set_led_id)
return -EOPNOTSUPP;
return h->ae_algo->ops->set_led_id(h, state);
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c
index fbd904e3077c..22f6acd45d9a 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c
@@ -110,8 +110,7 @@ static void hclge_cmd_config_regs(struct hclge_cmq_ring *ring)
hclge_write_dev(hw, HCLGE_NIC_CSQ_BASEADDR_H_REG,
upper_32_bits(dma));
hclge_write_dev(hw, HCLGE_NIC_CSQ_DEPTH_REG,
- (ring->desc_num >> HCLGE_NIC_CMQ_DESC_NUM_S) |
- HCLGE_NIC_CMQ_ENABLE);
+ ring->desc_num >> HCLGE_NIC_CMQ_DESC_NUM_S);
hclge_write_dev(hw, HCLGE_NIC_CSQ_HEAD_REG, 0);
hclge_write_dev(hw, HCLGE_NIC_CSQ_TAIL_REG, 0);
} else {
@@ -120,8 +119,7 @@ static void hclge_cmd_config_regs(struct hclge_cmq_ring *ring)
hclge_write_dev(hw, HCLGE_NIC_CRQ_BASEADDR_H_REG,
upper_32_bits(dma));
hclge_write_dev(hw, HCLGE_NIC_CRQ_DEPTH_REG,
- (ring->desc_num >> HCLGE_NIC_CMQ_DESC_NUM_S) |
- HCLGE_NIC_CMQ_ENABLE);
+ ring->desc_num >> HCLGE_NIC_CMQ_DESC_NUM_S);
hclge_write_dev(hw, HCLGE_NIC_CRQ_HEAD_REG, 0);
hclge_write_dev(hw, HCLGE_NIC_CRQ_TAIL_REG, 0);
}
@@ -175,7 +173,11 @@ static bool hclge_is_special_opcode(u16 opcode)
HCLGE_OPC_STATS_MAC,
HCLGE_OPC_STATS_MAC_ALL,
HCLGE_OPC_QUERY_32_BIT_REG,
- HCLGE_OPC_QUERY_64_BIT_REG};
+ HCLGE_OPC_QUERY_64_BIT_REG,
+ HCLGE_QUERY_CLEAR_MPF_RAS_INT,
+ HCLGE_QUERY_CLEAR_PF_RAS_INT,
+ HCLGE_QUERY_CLEAR_ALL_MPF_MSIX_INT,
+ HCLGE_QUERY_CLEAR_ALL_PF_MSIX_INT};
int i;
for (i = 0; i < ARRAY_SIZE(spec_opcode); i++) {
@@ -186,12 +188,43 @@ static bool hclge_is_special_opcode(u16 opcode)
return false;
}
+static int hclge_cmd_convert_err_code(u16 desc_ret)
+{
+ switch (desc_ret) {
+ case HCLGE_CMD_EXEC_SUCCESS:
+ return 0;
+ case HCLGE_CMD_NO_AUTH:
+ return -EPERM;
+ case HCLGE_CMD_NOT_SUPPORTED:
+ return -EOPNOTSUPP;
+ case HCLGE_CMD_QUEUE_FULL:
+ return -EXFULL;
+ case HCLGE_CMD_NEXT_ERR:
+ return -ENOSR;
+ case HCLGE_CMD_UNEXE_ERR:
+ return -ENOTBLK;
+ case HCLGE_CMD_PARA_ERR:
+ return -EINVAL;
+ case HCLGE_CMD_RESULT_ERR:
+ return -ERANGE;
+ case HCLGE_CMD_TIMEOUT:
+ return -ETIME;
+ case HCLGE_CMD_HILINK_ERR:
+ return -ENOLINK;
+ case HCLGE_CMD_QUEUE_ILLEGAL:
+ return -ENXIO;
+ case HCLGE_CMD_INVALID:
+ return -EBADR;
+ default:
+ return -EIO;
+ }
+}
+
static int hclge_cmd_check_retval(struct hclge_hw *hw, struct hclge_desc *desc,
int num, int ntc)
{
u16 opcode, desc_ret;
int handle;
- int retval;
opcode = le16_to_cpu(desc[0].opcode);
for (handle = 0; handle < num; handle++) {
@@ -205,17 +238,9 @@ static int hclge_cmd_check_retval(struct hclge_hw *hw, struct hclge_desc *desc,
else
desc_ret = le16_to_cpu(desc[0].retval);
- if (desc_ret == HCLGE_CMD_EXEC_SUCCESS)
- retval = 0;
- else if (desc_ret == HCLGE_CMD_NO_AUTH)
- retval = -EPERM;
- else if (desc_ret == HCLGE_CMD_NOT_SUPPORTED)
- retval = -EOPNOTSUPP;
- else
- retval = -EIO;
hw->cmq.last_status = desc_ret;
- return retval;
+ return hclge_cmd_convert_err_code(desc_ret);
}
/**
@@ -230,6 +255,7 @@ static int hclge_cmd_check_retval(struct hclge_hw *hw, struct hclge_desc *desc,
int hclge_cmd_send(struct hclge_hw *hw, struct hclge_desc *desc, int num)
{
struct hclge_dev *hdev = container_of(hw, struct hclge_dev, hw);
+ struct hclge_cmq_ring *csq = &hw->cmq.csq;
struct hclge_desc *desc_to_use;
bool complete = false;
u32 timeout = 0;
@@ -239,8 +265,16 @@ int hclge_cmd_send(struct hclge_hw *hw, struct hclge_desc *desc, int num)
spin_lock_bh(&hw->cmq.csq.lock);
- if (num > hclge_ring_space(&hw->cmq.csq) ||
- test_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state)) {
+ if (test_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state)) {
+ spin_unlock_bh(&hw->cmq.csq.lock);
+ return -EBUSY;
+ }
+
+ if (num > hclge_ring_space(&hw->cmq.csq)) {
+ /* If the CMDQ ring is full, the SW HEAD and HW HEAD may differ,
+ * so the SW HEAD pointer csq->next_to_clean needs to be updated
+ */
+ csq->next_to_clean = hclge_read_dev(hw, HCLGE_NIC_CSQ_HEAD_REG);
spin_unlock_bh(&hw->cmq.csq.lock);
return -EBUSY;
}
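
The hunk above distinguishes a disabled command queue from a genuinely full CSQ and, in the full case, refreshes the software head from the hardware HEAD register before returning -EBUSY, since the two can drift apart. Free-slot accounting in a ring like this is the usual head/tail modular arithmetic, sketched below with hypothetical field names and the common one-slot-kept-empty convention (assumed here to match what hclge_ring_space does):

#include <stdint.h>

/* One slot is always left empty so "full" and "empty" stay distinguishable. */
struct cmd_ring {
	uint32_t desc_num;      /* total descriptors in the ring */
	uint32_t next_to_use;   /* software tail: where the next command goes */
	uint32_t next_to_clean; /* software head: mirrors the HW HEAD register */
};

static uint32_t ring_used(const struct cmd_ring *r)
{
	return (r->next_to_use - r->next_to_clean + r->desc_num) % r->desc_num;
}

static uint32_t ring_space(const struct cmd_ring *r)
{
	return r->desc_num - 1 - ring_used(r);
}
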
@@ -278,7 +312,7 @@ int hclge_cmd_send(struct hclge_hw *hw, struct hclge_desc *desc, int num)
}
if (!complete) {
- retval = -EAGAIN;
+ retval = -EBADE;
} else {
retval = hclge_cmd_check_retval(hw, desc, num, ntc);
}
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
index d79a209b80f6..96840d8f3e24 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
@@ -41,6 +41,14 @@ enum hclge_cmd_return_status {
HCLGE_CMD_NO_AUTH = 1,
HCLGE_CMD_NOT_SUPPORTED = 2,
HCLGE_CMD_QUEUE_FULL = 3,
+ HCLGE_CMD_NEXT_ERR = 4,
+ HCLGE_CMD_UNEXE_ERR = 5,
+ HCLGE_CMD_PARA_ERR = 6,
+ HCLGE_CMD_RESULT_ERR = 7,
+ HCLGE_CMD_TIMEOUT = 8,
+ HCLGE_CMD_HILINK_ERR = 9,
+ HCLGE_CMD_QUEUE_ILLEGAL = 10,
+ HCLGE_CMD_INVALID = 11,
};
enum hclge_cmd_status {
@@ -180,6 +188,9 @@ enum hclge_opcode_type {
HCLGE_OPC_CFG_COM_TQP_QUEUE = 0x0B20,
HCLGE_OPC_RESET_TQP_QUEUE = 0x0B22,
+ /* PPU commands */
+ HCLGE_OPC_PPU_PF_OTHER_INT_DFX = 0x0B4A,
+
/* TSO command */
HCLGE_OPC_TSO_GENERIC_CONFIG = 0x0C01,
HCLGE_OPC_GRO_GENERIC_CONFIG = 0x0C10,
@@ -243,6 +254,9 @@ enum hclge_opcode_type {
/* NCL config command */
HCLGE_OPC_QUERY_NCL_CONFIG = 0x7011,
+ /* M7 stats command */
+ HCLGE_OPC_M7_STATS_BD = 0x7012,
+ HCLGE_OPC_M7_STATS_INFO = 0x7013,
/* SFP command */
HCLGE_OPC_GET_SFP_INFO = 0x7104,
@@ -265,6 +279,8 @@ enum hclge_opcode_type {
HCLGE_CONFIG_ROCEE_RAS_INT_EN = 0x1580,
HCLGE_QUERY_CLEAR_ROCEE_RAS_INT = 0x1581,
HCLGE_ROCEE_PF_RAS_INT_CMD = 0x1584,
+ HCLGE_QUERY_ROCEE_ECC_RAS_INFO_CMD = 0x1585,
+ HCLGE_QUERY_ROCEE_AXI_RAS_INFO_CMD = 0x1586,
HCLGE_IGU_EGU_TNL_INT_EN = 0x1803,
HCLGE_IGU_COMMON_INT_EN = 0x1806,
HCLGE_TM_QCN_MEM_INT_CFG = 0x1A14,
@@ -641,6 +657,11 @@ enum hclge_mac_vlan_tbl_opcode {
HCLGE_MAC_VLAN_LKUP, /* Lookup a entry through mac_vlan key */
};
+enum hclge_mac_vlan_add_resp_code {
+ HCLGE_ADD_UC_OVERFLOW = 2, /* ADD failed for UC overflow */
+ HCLGE_ADD_MC_OVERFLOW, /* ADD failed for MC overflow */
+};
+
#define HCLGE_MAC_VLAN_BIT0_EN_B 0
#define HCLGE_MAC_VLAN_BIT1_EN_B 1
#define HCLGE_MAC_EPORT_SW_EN_B 12
@@ -674,7 +695,6 @@ struct hclge_umv_spc_alc_cmd {
#define HCLGE_MAC_MGR_MASK_VLAN_B BIT(0)
#define HCLGE_MAC_MGR_MASK_MAC_B BIT(1)
#define HCLGE_MAC_MGR_MASK_ETHERTYPE_B BIT(2)
-#define HCLGE_MAC_ETHERTYPE_LLDP 0x88cc
struct hclge_mac_mgr_tbl_entry_cmd {
u8 flags;
@@ -872,7 +892,7 @@ struct hclge_serdes_lb_cmd {
#define HCLGE_TOTAL_PKT_BUF 0x108000 /* 1.03125M bytes */
#define HCLGE_DEFAULT_DV 0xA000 /* 40k byte */
#define HCLGE_DEFAULT_NON_DCB_DV 0x7800 /* 30K byte */
-#define HCLGE_NON_DCB_ADDITIONAL_BUF 0x200 /* 512 byte */
+#define HCLGE_NON_DCB_ADDITIONAL_BUF 0x1400 /* 5120 byte */
#define HCLGE_TYPE_CRQ 0
#define HCLGE_TYPE_CSQ 1
@@ -970,6 +990,25 @@ struct hclge_fd_ad_config_cmd {
u8 rsv2[8];
};
+struct hclge_get_m7_bd_cmd {
+ __le32 bd_num;
+ u8 rsv[20];
+};
+
+struct hclge_query_ppu_pf_other_int_dfx_cmd {
+ __le16 over_8bd_no_fe_qid;
+ __le16 over_8bd_no_fe_vf_id;
+ __le16 tso_mss_cmp_min_err_qid;
+ __le16 tso_mss_cmp_min_err_vf_id;
+ __le16 tso_mss_cmp_max_err_qid;
+ __le16 tso_mss_cmp_max_err_vf_id;
+ __le16 tx_rd_fbd_poison_qid;
+ __le16 tx_rd_fbd_poison_vf_id;
+ __le16 rx_rd_fbd_poison_qid;
+ __le16 rx_rd_fbd_poison_vf_id;
+ u8 rsv[4];
+};
+
int hclge_cmd_init(struct hclge_dev *hdev);
static inline void hclge_write_reg(void __iomem *base, u32 reg, u32 value)
{
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
index 1161361a973b..bac4ce13f6ae 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
@@ -325,6 +325,8 @@ static int hclge_ieee_setpfc(struct hnae3_handle *h, struct ieee_pfc *pfc)
hdev->tm_info.hw_pfc_map = pfc_map;
hdev->tm_info.pfc_en = pfc->pfc_en;
+ hclge_tm_pfc_info_update(hdev);
+
return hclge_pause_setup_hw(hdev, false);
}
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c
index a9ffb57c4607..ab625c757a95 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c
@@ -61,9 +61,11 @@ static int hclge_dbg_cmd_send(struct hclge_dev *hdev,
static void hclge_dbg_dump_reg_common(struct hclge_dev *hdev,
struct hclge_dbg_dfx_message *dfx_message,
- char *cmd_buf, int msg_num, int offset,
- enum hclge_opcode_type cmd)
+ const char *cmd_buf, int msg_num,
+ int offset, enum hclge_opcode_type cmd)
{
+#define BD_DATA_NUM 6
+
struct hclge_desc *desc_src;
struct hclge_desc *desc;
int bd_num, buf_len;
@@ -92,14 +94,16 @@ static void hclge_dbg_dump_reg_common(struct hclge_dev *hdev,
return;
}
- max = (bd_num * 6) <= msg_num ? (bd_num * 6) : msg_num;
+ max = (bd_num * BD_DATA_NUM) <= msg_num ?
+ (bd_num * BD_DATA_NUM) : msg_num;
desc = desc_src;
for (i = 0; i < max; i++) {
- (((i / 6) > 0) && ((i % 6) == 0)) ? desc++ : desc;
+ ((i > 0) && ((i % BD_DATA_NUM) == 0)) ? desc++ : desc;
if (dfx_message->flag)
dev_info(&hdev->pdev->dev, "%s: 0x%x\n",
- dfx_message->message, desc->data[i % 6]);
+ dfx_message->message,
+ desc->data[i % BD_DATA_NUM]);
dfx_message++;
}
@@ -107,7 +111,7 @@ static void hclge_dbg_dump_reg_common(struct hclge_dev *hdev,
kfree(desc_src);
}
-static void hclge_dbg_dump_dcb(struct hclge_dev *hdev, char *cmd_buf)
+static void hclge_dbg_dump_dcb(struct hclge_dev *hdev, const char *cmd_buf)
{
struct device *dev = &hdev->pdev->dev;
struct hclge_dbg_bitmap_cmd *bitmap;
@@ -207,7 +211,7 @@ static void hclge_dbg_dump_dcb(struct hclge_dev *hdev, char *cmd_buf)
dev_info(dev, "IGU_TX_PRI_MAP_TC_CFG: 0x%x\n", desc[0].data[5]);
}
-static void hclge_dbg_dump_reg_cmd(struct hclge_dev *hdev, char *cmd_buf)
+static void hclge_dbg_dump_reg_cmd(struct hclge_dev *hdev, const char *cmd_buf)
{
int msg_num;
@@ -395,7 +399,7 @@ static void hclge_dbg_dump_tm_pg(struct hclge_dev *hdev)
if (ret)
goto err_tm_pg_cmd_send;
- dev_info(&hdev->pdev->dev, "PRI_SCH pg_id: %u\n", desc.data[0]);
+ dev_info(&hdev->pdev->dev, "PRI_SCH pri_id: %u\n", desc.data[0]);
cmd = HCLGE_OPC_TM_QS_SCH_MODE_CFG;
hclge_cmd_setup_basic_desc(&desc, cmd, true);
@@ -403,7 +407,7 @@ static void hclge_dbg_dump_tm_pg(struct hclge_dev *hdev)
if (ret)
goto err_tm_pg_cmd_send;
- dev_info(&hdev->pdev->dev, "QS_SCH pg_id: %u\n", desc.data[0]);
+ dev_info(&hdev->pdev->dev, "QS_SCH qs_id: %u\n", desc.data[0]);
cmd = HCLGE_OPC_TM_BP_TO_QSET_MAPPING;
hclge_cmd_setup_basic_desc(&desc, cmd, true);
@@ -412,9 +416,9 @@ static void hclge_dbg_dump_tm_pg(struct hclge_dev *hdev)
goto err_tm_pg_cmd_send;
bp_to_qs_map_cmd = (struct hclge_bp_to_qs_map_cmd *)desc.data;
- dev_info(&hdev->pdev->dev, "BP_TO_QSET pg_id: %u\n",
+ dev_info(&hdev->pdev->dev, "BP_TO_QSET tc_id: %u\n",
bp_to_qs_map_cmd->tc_id);
- dev_info(&hdev->pdev->dev, "BP_TO_QSET pg_shapping: 0x%x\n",
+ dev_info(&hdev->pdev->dev, "BP_TO_QSET qs_group_id: 0x%x\n",
bp_to_qs_map_cmd->qs_group_id);
dev_info(&hdev->pdev->dev, "BP_TO_QSET qs_bit_map: 0x%x\n",
bp_to_qs_map_cmd->qs_bit_map);
@@ -473,7 +477,7 @@ static void hclge_dbg_dump_tm(struct hclge_dev *hdev)
nq_to_qs_map = (struct hclge_nq_to_qs_link_cmd *)desc.data;
dev_info(&hdev->pdev->dev, "NQ_TO_QS nq_id: %u\n", nq_to_qs_map->nq_id);
- dev_info(&hdev->pdev->dev, "NQ_TO_QS qset_id: %u\n",
+ dev_info(&hdev->pdev->dev, "NQ_TO_QS qset_id: 0x%x\n",
nq_to_qs_map->qset_id);
cmd = HCLGE_OPC_TM_PG_WEIGHT;
@@ -537,7 +541,8 @@ err_tm_cmd_send:
cmd, ret);
}
-static void hclge_dbg_dump_tm_map(struct hclge_dev *hdev, char *cmd_buf)
+static void hclge_dbg_dump_tm_map(struct hclge_dev *hdev,
+ const char *cmd_buf)
{
struct hclge_bp_to_qs_map_cmd *bp_to_qs_map_cmd;
struct hclge_nq_to_qs_link_cmd *nq_to_qs_map;
@@ -921,11 +926,67 @@ static void hclge_dbg_dump_rst_info(struct hclge_dev *hdev)
hdev->rst_stats.reset_cnt);
}
+void hclge_dbg_get_m7_stats_info(struct hclge_dev *hdev)
+{
+ struct hclge_desc *desc_src, *desc_tmp;
+ struct hclge_get_m7_bd_cmd *req;
+ struct hclge_desc desc;
+ u32 bd_num, buf_len;
+ int ret, i;
+
+ hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_M7_STATS_BD, true);
+
+ req = (struct hclge_get_m7_bd_cmd *)desc.data;
+ ret = hclge_cmd_send(&hdev->hw, &desc, 1);
+ if (ret) {
+ dev_err(&hdev->pdev->dev,
+ "get firmware statistics bd number failed, ret=%d\n",
+ ret);
+ return;
+ }
+
+ bd_num = le32_to_cpu(req->bd_num);
+
+ buf_len = sizeof(struct hclge_desc) * bd_num;
+ desc_src = kzalloc(buf_len, GFP_KERNEL);
+ if (!desc_src) {
+ dev_err(&hdev->pdev->dev,
+ "allocate desc for get_m7_stats failed\n");
+ return;
+ }
+
+ desc_tmp = desc_src;
+ ret = hclge_dbg_cmd_send(hdev, desc_tmp, 0, bd_num,
+ HCLGE_OPC_M7_STATS_INFO);
+ if (ret) {
+ kfree(desc_src);
+ dev_err(&hdev->pdev->dev,
+ "get firmware statistics failed, ret=%d\n", ret);
+ return;
+ }
+
+ for (i = 0; i < bd_num; i++) {
+ dev_info(&hdev->pdev->dev, "0x%08x 0x%08x 0x%08x\n",
+ le32_to_cpu(desc_tmp->data[0]),
+ le32_to_cpu(desc_tmp->data[1]),
+ le32_to_cpu(desc_tmp->data[2]));
+ dev_info(&hdev->pdev->dev, "0x%08x 0x%08x 0x%08x\n",
+ le32_to_cpu(desc_tmp->data[3]),
+ le32_to_cpu(desc_tmp->data[4]),
+ le32_to_cpu(desc_tmp->data[5]));
+
+ desc_tmp++;
+ }
+
+ kfree(desc_src);
+}
+
/* hclge_dbg_dump_ncl_config: print specified range of NCL_CONFIG file
* @hdev: pointer to struct hclge_dev
* @cmd_buf: string that contains offset and length
*/
-static void hclge_dbg_dump_ncl_config(struct hclge_dev *hdev, char *cmd_buf)
+static void hclge_dbg_dump_ncl_config(struct hclge_dev *hdev,
+ const char *cmd_buf)
{
#define HCLGE_MAX_NCL_CONFIG_OFFSET 4096
#define HCLGE_MAX_NCL_CONFIG_LENGTH (20 + 24 * 4)
@@ -998,13 +1059,13 @@ static void hclge_dbg_dump_mac_tnl_status(struct hclge_dev *hdev)
while (kfifo_get(&hdev->mac_tnl_log, &stats)) {
rem_nsec = do_div(stats.time, HCLGE_BILLION_NANO_SECONDS);
- dev_info(&hdev->pdev->dev, "[%07lu.%03lu]status = 0x%x\n",
+ dev_info(&hdev->pdev->dev, "[%07lu.%03lu] status = 0x%x\n",
(unsigned long)stats.time, rem_nsec / 1000,
stats.status);
}
}
-int hclge_dbg_run_cmd(struct hnae3_handle *handle, char *cmd_buf)
+int hclge_dbg_run_cmd(struct hnae3_handle *handle, const char *cmd_buf)
{
struct hclge_vport *vport = hclge_get_vport(handle);
struct hclge_dev *hdev = vport->back;
@@ -1029,6 +1090,8 @@ int hclge_dbg_run_cmd(struct hnae3_handle *handle, char *cmd_buf)
hclge_dbg_dump_reg_cmd(hdev, cmd_buf);
} else if (strncmp(cmd_buf, "dump reset info", 15) == 0) {
hclge_dbg_dump_rst_info(hdev);
+ } else if (strncmp(cmd_buf, "dump m7 info", 12) == 0) {
+ hclge_dbg_get_m7_stats_info(hdev);
} else if (strncmp(cmd_buf, "dump ncl_config", 15) == 0) {
hclge_dbg_dump_ncl_config(hdev,
&cmd_buf[sizeof("dump ncl_config")]);
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
index 4ac80634c984..0a7243825e7b 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
@@ -87,25 +87,25 @@ static const struct hclge_hw_error hclge_msix_sram_ecc_int[] = {
static const struct hclge_hw_error hclge_igu_int[] = {
{ .int_msk = BIT(0), .msg = "igu_rx_buf0_ecc_mbit_err",
- .reset_level = HNAE3_CORE_RESET },
+ .reset_level = HNAE3_GLOBAL_RESET },
{ .int_msk = BIT(2), .msg = "igu_rx_buf1_ecc_mbit_err",
- .reset_level = HNAE3_CORE_RESET },
+ .reset_level = HNAE3_GLOBAL_RESET },
{ /* sentinel */ }
};
static const struct hclge_hw_error hclge_igu_egu_tnl_int[] = {
{ .int_msk = BIT(0), .msg = "rx_buf_overflow",
- .reset_level = HNAE3_CORE_RESET },
+ .reset_level = HNAE3_GLOBAL_RESET },
{ .int_msk = BIT(1), .msg = "rx_stp_fifo_overflow",
- .reset_level = HNAE3_CORE_RESET },
+ .reset_level = HNAE3_GLOBAL_RESET },
{ .int_msk = BIT(2), .msg = "rx_stp_fifo_undeflow",
- .reset_level = HNAE3_CORE_RESET },
+ .reset_level = HNAE3_GLOBAL_RESET },
{ .int_msk = BIT(3), .msg = "tx_buf_overflow",
- .reset_level = HNAE3_CORE_RESET },
+ .reset_level = HNAE3_GLOBAL_RESET },
{ .int_msk = BIT(4), .msg = "tx_buf_underrun",
- .reset_level = HNAE3_CORE_RESET },
+ .reset_level = HNAE3_GLOBAL_RESET },
{ .int_msk = BIT(5), .msg = "rx_stp_buf_overflow",
- .reset_level = HNAE3_CORE_RESET },
+ .reset_level = HNAE3_GLOBAL_RESET },
{ /* sentinel */ }
};
@@ -413,13 +413,13 @@ static const struct hclge_hw_error hclge_ppu_mpf_abnormal_int_st2[] = {
static const struct hclge_hw_error hclge_ppu_mpf_abnormal_int_st3[] = {
{ .int_msk = BIT(4), .msg = "gro_bd_ecc_mbit_err",
- .reset_level = HNAE3_CORE_RESET },
+ .reset_level = HNAE3_GLOBAL_RESET },
{ .int_msk = BIT(5), .msg = "gro_context_ecc_mbit_err",
- .reset_level = HNAE3_CORE_RESET },
+ .reset_level = HNAE3_GLOBAL_RESET },
{ .int_msk = BIT(6), .msg = "rx_stash_cfg_ecc_mbit_err",
- .reset_level = HNAE3_CORE_RESET },
+ .reset_level = HNAE3_GLOBAL_RESET },
{ .int_msk = BIT(7), .msg = "axi_rd_fbd_ecc_mbit_err",
- .reset_level = HNAE3_CORE_RESET },
+ .reset_level = HNAE3_GLOBAL_RESET },
{ /* sentinel */ }
};
@@ -631,29 +631,20 @@ static const struct hclge_hw_error hclge_rocee_qmm_ovf_err_int[] = {
{ /* sentinel */ }
};
-static enum hnae3_reset_type hclge_log_error(struct device *dev, char *reg,
- const struct hclge_hw_error *err,
- u32 err_sts)
+static void hclge_log_error(struct device *dev, char *reg,
+ const struct hclge_hw_error *err,
+ u32 err_sts, unsigned long *reset_requests)
{
- enum hnae3_reset_type reset_level = HNAE3_FUNC_RESET;
- bool need_reset = false;
-
while (err->msg) {
if (err->int_msk & err_sts) {
dev_warn(dev, "%s %s found [error status=0x%x]\n",
reg, err->msg, err_sts);
- if (err->reset_level != HNAE3_NONE_RESET &&
- err->reset_level >= reset_level) {
- reset_level = err->reset_level;
- need_reset = true;
- }
+ if (err->reset_level &&
+ err->reset_level != HNAE3_NONE_RESET)
+ set_bit(err->reset_level, reset_requests);
}
err++;
}
- if (need_reset)
- return reset_level;
- else
- return HNAE3_NONE_RESET;
}
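
With this change hclge_log_error no longer collapses everything into a single returned reset level; it sets one bit per requested reset type in the caller-supplied reset_requests bitmap so requests from many status registers can be aggregated. A self-contained sketch of that accumulation pattern, with simplified types standing in for the driver's tables:

#include <stdint.h>
#include <stdio.h>

enum reset_type { RESET_NONE, RESET_FUNC, RESET_GLOBAL, RESET_TYPES };

struct hw_error {
	uint32_t int_msk;          /* bit in the status register */
	const char *msg;           /* NULL msg terminates the table */
	enum reset_type reset_level;
};

/* Walk an error table and set one bit per requested reset level. */
static void log_errors(const struct hw_error *tbl, uint32_t status,
		       unsigned long *reset_requests)
{
	for (; tbl->msg; tbl++) {
		if (!(tbl->int_msk & status))
			continue;
		printf("%s [status=0x%x]\n", tbl->msg, (unsigned)status);
		if (tbl->reset_level != RESET_NONE)
			*reset_requests |= 1UL << tbl->reset_level;
	}
}
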
/* hclge_cmd_query_error: read the error information
@@ -673,19 +664,19 @@ static int hclge_cmd_query_error(struct hclge_dev *hdev,
enum hclge_err_int_type int_type)
{
struct device *dev = &hdev->pdev->dev;
- int num = 1;
+ int desc_num = 1;
int ret;
hclge_cmd_setup_basic_desc(&desc[0], cmd, true);
if (flag) {
desc[0].flag |= cpu_to_le16(flag);
hclge_cmd_setup_basic_desc(&desc[1], cmd, true);
- num = 2;
+ desc_num = 2;
}
if (w_num)
desc[0].data[w_num] = cpu_to_le32(int_type);
- ret = hclge_cmd_send(&hdev->hw, &desc[0], num);
+ ret = hclge_cmd_send(&hdev->hw, &desc[0], desc_num);
if (ret)
dev_err(dev, "query error cmd failed (%d)\n", ret);
@@ -941,7 +932,7 @@ static int hclge_config_ppu_error_interrupts(struct hclge_dev *hdev, u32 cmd,
{
struct device *dev = &hdev->pdev->dev;
struct hclge_desc desc[2];
- int num = 1;
+ int desc_num = 1;
int ret;
/* configure PPU error interrupts */
@@ -960,7 +951,7 @@ static int hclge_config_ppu_error_interrupts(struct hclge_dev *hdev, u32 cmd,
desc[1].data[1] = HCLGE_PPU_MPF_ABNORMAL_INT1_EN_MASK;
desc[1].data[2] = HCLGE_PPU_MPF_ABNORMAL_INT2_EN_MASK;
desc[1].data[3] |= HCLGE_PPU_MPF_ABNORMAL_INT3_EN_MASK;
- num = 2;
+ desc_num = 2;
} else if (cmd == HCLGE_PPU_MPF_OTHER_INT_CMD) {
hclge_cmd_setup_basic_desc(&desc[0], cmd, false);
if (en)
@@ -978,7 +969,7 @@ static int hclge_config_ppu_error_interrupts(struct hclge_dev *hdev, u32 cmd,
return -EINVAL;
}
- ret = hclge_cmd_send(&hdev->hw, &desc[0], num);
+ ret = hclge_cmd_send(&hdev->hw, &desc[0], desc_num);
return ret;
}
@@ -1069,12 +1060,51 @@ static int hclge_config_ssu_hw_err_int(struct hclge_dev *hdev, bool en)
return ret;
}
-#define HCLGE_SET_DEFAULT_RESET_REQUEST(reset_type) \
- do { \
- if (ae_dev->ops->set_default_reset_request) \
- ae_dev->ops->set_default_reset_request(ae_dev, \
- reset_type); \
- } while (0)
+/* hclge_query_bd_num: query number of buffer descriptors
+ * @hdev: pointer to struct hclge_dev
+ * @is_ras: true for ras, false for msix
+ * @mpf_bd_num: number of main PF interrupt buffer descriptors
+ * @pf_bd_num: number of non-main PF interrupt buffer descriptors
+ *
+ * This function queries the number of mpf and pf buffer descriptors.
+ */
+static int hclge_query_bd_num(struct hclge_dev *hdev, bool is_ras,
+ int *mpf_bd_num, int *pf_bd_num)
+{
+ struct device *dev = &hdev->pdev->dev;
+ u32 mpf_min_bd_num, pf_min_bd_num;
+ enum hclge_opcode_type opcode;
+ struct hclge_desc desc_bd;
+ int ret;
+
+ if (is_ras) {
+ opcode = HCLGE_QUERY_RAS_INT_STS_BD_NUM;
+ mpf_min_bd_num = HCLGE_MPF_RAS_INT_MIN_BD_NUM;
+ pf_min_bd_num = HCLGE_PF_RAS_INT_MIN_BD_NUM;
+ } else {
+ opcode = HCLGE_QUERY_MSIX_INT_STS_BD_NUM;
+ mpf_min_bd_num = HCLGE_MPF_MSIX_INT_MIN_BD_NUM;
+ pf_min_bd_num = HCLGE_PF_MSIX_INT_MIN_BD_NUM;
+ }
+
+ hclge_cmd_setup_basic_desc(&desc_bd, opcode, true);
+ ret = hclge_cmd_send(&hdev->hw, &desc_bd, 1);
+ if (ret) {
+ dev_err(dev, "fail(%d) to query msix int status bd num\n",
+ ret);
+ return ret;
+ }
+
+ *mpf_bd_num = le32_to_cpu(desc_bd.data[0]);
+ *pf_bd_num = le32_to_cpu(desc_bd.data[1]);
+ if (*mpf_bd_num < mpf_min_bd_num || *pf_bd_num < pf_min_bd_num) {
+ dev_err(dev, "Invalid bd num: mpf(%d), pf(%d)\n",
+ *mpf_bd_num, *pf_bd_num);
+ return -EINVAL;
+ }
+
+ return 0;
+}
/* hclge_handle_mpf_ras_error: handle all main PF RAS errors
* @hdev: pointer to struct hclge_dev
@@ -1089,7 +1119,6 @@ static int hclge_handle_mpf_ras_error(struct hclge_dev *hdev,
int num)
{
struct hnae3_ae_dev *ae_dev = hdev->ae_dev;
- enum hnae3_reset_type reset_level;
struct device *dev = &hdev->pdev->dev;
__le32 *desc_data;
u32 status;
@@ -1098,8 +1127,6 @@ static int hclge_handle_mpf_ras_error(struct hclge_dev *hdev,
/* query all main PF RAS errors */
hclge_cmd_setup_basic_desc(&desc[0], HCLGE_QUERY_CLEAR_MPF_RAS_INT,
true);
- desc[0].flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT);
-
ret = hclge_cmd_send(&hdev->hw, &desc[0], num);
if (ret) {
dev_err(dev, "query all mpf ras int cmd failed (%d)\n", ret);
@@ -1108,95 +1135,74 @@ static int hclge_handle_mpf_ras_error(struct hclge_dev *hdev,
/* log HNS common errors */
status = le32_to_cpu(desc[0].data[0]);
- if (status) {
- reset_level = hclge_log_error(dev, "IMP_TCM_ECC_INT_STS",
- &hclge_imp_tcm_ecc_int[0],
- status);
- HCLGE_SET_DEFAULT_RESET_REQUEST(reset_level);
- }
+ if (status)
+ hclge_log_error(dev, "IMP_TCM_ECC_INT_STS",
+ &hclge_imp_tcm_ecc_int[0], status,
+ &ae_dev->hw_err_reset_req);
status = le32_to_cpu(desc[0].data[1]);
- if (status) {
- reset_level = hclge_log_error(dev, "CMDQ_MEM_ECC_INT_STS",
- &hclge_cmdq_nic_mem_ecc_int[0],
- status);
- HCLGE_SET_DEFAULT_RESET_REQUEST(reset_level);
- }
+ if (status)
+ hclge_log_error(dev, "CMDQ_MEM_ECC_INT_STS",
+ &hclge_cmdq_nic_mem_ecc_int[0], status,
+ &ae_dev->hw_err_reset_req);
- if ((le32_to_cpu(desc[0].data[2])) & BIT(0)) {
+ if ((le32_to_cpu(desc[0].data[2])) & BIT(0))
dev_warn(dev, "imp_rd_data_poison_err found\n");
- HCLGE_SET_DEFAULT_RESET_REQUEST(HNAE3_NONE_RESET);
- }
status = le32_to_cpu(desc[0].data[3]);
- if (status) {
- reset_level = hclge_log_error(dev, "TQP_INT_ECC_INT_STS",
- &hclge_tqp_int_ecc_int[0],
- status);
- HCLGE_SET_DEFAULT_RESET_REQUEST(reset_level);
- }
+ if (status)
+ hclge_log_error(dev, "TQP_INT_ECC_INT_STS",
+ &hclge_tqp_int_ecc_int[0], status,
+ &ae_dev->hw_err_reset_req);
status = le32_to_cpu(desc[0].data[4]);
- if (status) {
- reset_level = hclge_log_error(dev, "MSIX_ECC_INT_STS",
- &hclge_msix_sram_ecc_int[0],
- status);
- HCLGE_SET_DEFAULT_RESET_REQUEST(reset_level);
- }
+ if (status)
+ hclge_log_error(dev, "MSIX_ECC_INT_STS",
+ &hclge_msix_sram_ecc_int[0], status,
+ &ae_dev->hw_err_reset_req);
/* log SSU(Storage Switch Unit) errors */
desc_data = (__le32 *)&desc[2];
status = le32_to_cpu(*(desc_data + 2));
- if (status) {
- reset_level = hclge_log_error(dev, "SSU_ECC_MULTI_BIT_INT_0",
- &hclge_ssu_mem_ecc_err_int[0],
- status);
- HCLGE_SET_DEFAULT_RESET_REQUEST(reset_level);
- }
+ if (status)
+ hclge_log_error(dev, "SSU_ECC_MULTI_BIT_INT_0",
+ &hclge_ssu_mem_ecc_err_int[0], status,
+ &ae_dev->hw_err_reset_req);
status = le32_to_cpu(*(desc_data + 3)) & BIT(0);
if (status) {
dev_warn(dev, "SSU_ECC_MULTI_BIT_INT_1 ssu_mem32_ecc_mbit_err found [error status=0x%x]\n",
status);
- HCLGE_SET_DEFAULT_RESET_REQUEST(HNAE3_GLOBAL_RESET);
+ set_bit(HNAE3_GLOBAL_RESET, &ae_dev->hw_err_reset_req);
}
status = le32_to_cpu(*(desc_data + 4)) & HCLGE_SSU_COMMON_ERR_INT_MASK;
- if (status) {
- reset_level = hclge_log_error(dev, "SSU_COMMON_ERR_INT",
- &hclge_ssu_com_err_int[0],
- status);
- HCLGE_SET_DEFAULT_RESET_REQUEST(reset_level);
- }
+ if (status)
+ hclge_log_error(dev, "SSU_COMMON_ERR_INT",
+ &hclge_ssu_com_err_int[0], status,
+ &ae_dev->hw_err_reset_req);
/* log IGU(Ingress Unit) errors */
desc_data = (__le32 *)&desc[3];
status = le32_to_cpu(*desc_data) & HCLGE_IGU_INT_MASK;
- if (status) {
- reset_level = hclge_log_error(dev, "IGU_INT_STS",
- &hclge_igu_int[0], status);
- HCLGE_SET_DEFAULT_RESET_REQUEST(reset_level);
- }
+ if (status)
+ hclge_log_error(dev, "IGU_INT_STS",
+ &hclge_igu_int[0], status,
+ &ae_dev->hw_err_reset_req);
/* log PPP(Programmable Packet Process) errors */
desc_data = (__le32 *)&desc[4];
status = le32_to_cpu(*(desc_data + 1));
- if (status) {
- reset_level =
- hclge_log_error(dev, "PPP_MPF_ABNORMAL_INT_ST1",
- &hclge_ppp_mpf_abnormal_int_st1[0],
- status);
- HCLGE_SET_DEFAULT_RESET_REQUEST(reset_level);
- }
+ if (status)
+ hclge_log_error(dev, "PPP_MPF_ABNORMAL_INT_ST1",
+ &hclge_ppp_mpf_abnormal_int_st1[0], status,
+ &ae_dev->hw_err_reset_req);
status = le32_to_cpu(*(desc_data + 3)) & HCLGE_PPP_MPF_INT_ST3_MASK;
- if (status) {
- reset_level =
- hclge_log_error(dev, "PPP_MPF_ABNORMAL_INT_ST3",
- &hclge_ppp_mpf_abnormal_int_st3[0],
- status);
- HCLGE_SET_DEFAULT_RESET_REQUEST(reset_level);
- }
+ if (status)
+ hclge_log_error(dev, "PPP_MPF_ABNORMAL_INT_ST3",
+ &hclge_ppp_mpf_abnormal_int_st3[0], status,
+ &ae_dev->hw_err_reset_req);
/* log PPU(RCB) errors */
desc_data = (__le32 *)&desc[5];
@@ -1204,66 +1210,53 @@ static int hclge_handle_mpf_ras_error(struct hclge_dev *hdev,
if (status) {
dev_warn(dev, "PPU_MPF_ABNORMAL_INT_ST1 %s found\n",
"rpu_rx_pkt_ecc_mbit_err");
- HCLGE_SET_DEFAULT_RESET_REQUEST(HNAE3_GLOBAL_RESET);
+ set_bit(HNAE3_GLOBAL_RESET, &ae_dev->hw_err_reset_req);
}
status = le32_to_cpu(*(desc_data + 2));
- if (status) {
- reset_level =
- hclge_log_error(dev, "PPU_MPF_ABNORMAL_INT_ST2",
- &hclge_ppu_mpf_abnormal_int_st2[0],
- status);
- HCLGE_SET_DEFAULT_RESET_REQUEST(reset_level);
- }
+ if (status)
+ hclge_log_error(dev, "PPU_MPF_ABNORMAL_INT_ST2",
+ &hclge_ppu_mpf_abnormal_int_st2[0], status,
+ &ae_dev->hw_err_reset_req);
status = le32_to_cpu(*(desc_data + 3)) & HCLGE_PPU_MPF_INT_ST3_MASK;
- if (status) {
- reset_level =
- hclge_log_error(dev, "PPU_MPF_ABNORMAL_INT_ST3",
- &hclge_ppu_mpf_abnormal_int_st3[0],
- status);
- HCLGE_SET_DEFAULT_RESET_REQUEST(reset_level);
- }
+ if (status)
+ hclge_log_error(dev, "PPU_MPF_ABNORMAL_INT_ST3",
+ &hclge_ppu_mpf_abnormal_int_st3[0], status,
+ &ae_dev->hw_err_reset_req);
/* log TM(Traffic Manager) errors */
desc_data = (__le32 *)&desc[6];
status = le32_to_cpu(*desc_data);
- if (status) {
- reset_level = hclge_log_error(dev, "TM_SCH_RINT",
- &hclge_tm_sch_rint[0], status);
- HCLGE_SET_DEFAULT_RESET_REQUEST(reset_level);
- }
+ if (status)
+ hclge_log_error(dev, "TM_SCH_RINT",
+ &hclge_tm_sch_rint[0], status,
+ &ae_dev->hw_err_reset_req);
/* log QCN(Quantized Congestion Control) errors */
desc_data = (__le32 *)&desc[7];
status = le32_to_cpu(*desc_data) & HCLGE_QCN_FIFO_INT_MASK;
- if (status) {
- reset_level = hclge_log_error(dev, "QCN_FIFO_RINT",
- &hclge_qcn_fifo_rint[0], status);
- HCLGE_SET_DEFAULT_RESET_REQUEST(reset_level);
- }
+ if (status)
+ hclge_log_error(dev, "QCN_FIFO_RINT",
+ &hclge_qcn_fifo_rint[0], status,
+ &ae_dev->hw_err_reset_req);
status = le32_to_cpu(*(desc_data + 1)) & HCLGE_QCN_ECC_INT_MASK;
- if (status) {
- reset_level = hclge_log_error(dev, "QCN_ECC_RINT",
- &hclge_qcn_ecc_rint[0],
- status);
- HCLGE_SET_DEFAULT_RESET_REQUEST(reset_level);
- }
+ if (status)
+ hclge_log_error(dev, "QCN_ECC_RINT",
+ &hclge_qcn_ecc_rint[0], status,
+ &ae_dev->hw_err_reset_req);
/* log NCSI errors */
desc_data = (__le32 *)&desc[9];
status = le32_to_cpu(*desc_data) & HCLGE_NCSI_ECC_INT_MASK;
- if (status) {
- reset_level = hclge_log_error(dev, "NCSI_ECC_INT_RPT",
- &hclge_ncsi_err_int[0], status);
- HCLGE_SET_DEFAULT_RESET_REQUEST(reset_level);
- }
+ if (status)
+ hclge_log_error(dev, "NCSI_ECC_INT_RPT",
+ &hclge_ncsi_err_int[0], status,
+ &ae_dev->hw_err_reset_req);
/* clear all main PF RAS errors */
hclge_cmd_reuse_desc(&desc[0], false);
- desc[0].flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT);
-
ret = hclge_cmd_send(&hdev->hw, &desc[0], num);
if (ret)
dev_err(dev, "clear all mpf ras int cmd failed (%d)\n", ret);
@@ -1285,7 +1278,6 @@ static int hclge_handle_pf_ras_error(struct hclge_dev *hdev,
{
struct hnae3_ae_dev *ae_dev = hdev->ae_dev;
struct device *dev = &hdev->pdev->dev;
- enum hnae3_reset_type reset_level;
__le32 *desc_data;
u32 status;
int ret;
@@ -1293,8 +1285,6 @@ static int hclge_handle_pf_ras_error(struct hclge_dev *hdev,
/* query all PF RAS errors */
hclge_cmd_setup_basic_desc(&desc[0], HCLGE_QUERY_CLEAR_PF_RAS_INT,
true);
- desc[0].flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT);
-
ret = hclge_cmd_send(&hdev->hw, &desc[0], num);
if (ret) {
dev_err(dev, "query all pf ras int cmd failed (%d)\n", ret);
@@ -1303,53 +1293,41 @@ static int hclge_handle_pf_ras_error(struct hclge_dev *hdev,
/* log SSU(Storage Switch Unit) errors */
status = le32_to_cpu(desc[0].data[0]);
- if (status) {
- reset_level = hclge_log_error(dev, "SSU_PORT_BASED_ERR_INT",
- &hclge_ssu_port_based_err_int[0],
- status);
- HCLGE_SET_DEFAULT_RESET_REQUEST(reset_level);
- }
+ if (status)
+ hclge_log_error(dev, "SSU_PORT_BASED_ERR_INT",
+ &hclge_ssu_port_based_err_int[0], status,
+ &ae_dev->hw_err_reset_req);
status = le32_to_cpu(desc[0].data[1]);
- if (status) {
- reset_level = hclge_log_error(dev, "SSU_FIFO_OVERFLOW_INT",
- &hclge_ssu_fifo_overflow_int[0],
- status);
- HCLGE_SET_DEFAULT_RESET_REQUEST(reset_level);
- }
+ if (status)
+ hclge_log_error(dev, "SSU_FIFO_OVERFLOW_INT",
+ &hclge_ssu_fifo_overflow_int[0], status,
+ &ae_dev->hw_err_reset_req);
status = le32_to_cpu(desc[0].data[2]);
- if (status) {
- reset_level = hclge_log_error(dev, "SSU_ETS_TCG_INT",
- &hclge_ssu_ets_tcg_int[0],
- status);
- HCLGE_SET_DEFAULT_RESET_REQUEST(reset_level);
- }
+ if (status)
+ hclge_log_error(dev, "SSU_ETS_TCG_INT",
+ &hclge_ssu_ets_tcg_int[0], status,
+ &ae_dev->hw_err_reset_req);
/* log IGU(Ingress Unit) EGU(Egress Unit) TNL errors */
desc_data = (__le32 *)&desc[1];
status = le32_to_cpu(*desc_data) & HCLGE_IGU_EGU_TNL_INT_MASK;
- if (status) {
- reset_level = hclge_log_error(dev, "IGU_EGU_TNL_INT_STS",
- &hclge_igu_egu_tnl_int[0],
- status);
- HCLGE_SET_DEFAULT_RESET_REQUEST(reset_level);
- }
+ if (status)
+ hclge_log_error(dev, "IGU_EGU_TNL_INT_STS",
+ &hclge_igu_egu_tnl_int[0], status,
+ &ae_dev->hw_err_reset_req);
/* log PPU(RCB) errors */
desc_data = (__le32 *)&desc[3];
status = le32_to_cpu(*desc_data) & HCLGE_PPU_PF_INT_RAS_MASK;
- if (status) {
- reset_level = hclge_log_error(dev, "PPU_PF_ABNORMAL_INT_ST0",
- &hclge_ppu_pf_abnormal_int[0],
- status);
- HCLGE_SET_DEFAULT_RESET_REQUEST(reset_level);
- }
+ if (status)
+ hclge_log_error(dev, "PPU_PF_ABNORMAL_INT_ST0",
+ &hclge_ppu_pf_abnormal_int[0], status,
+ &ae_dev->hw_err_reset_req);
/* clear all PF RAS errors */
hclge_cmd_reuse_desc(&desc[0], false);
- desc[0].flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT);
-
ret = hclge_cmd_send(&hdev->hw, &desc[0], num);
if (ret)
dev_err(dev, "clear all pf ras int cmd failed (%d)\n", ret);
@@ -1359,24 +1337,16 @@ static int hclge_handle_pf_ras_error(struct hclge_dev *hdev,
static int hclge_handle_all_ras_errors(struct hclge_dev *hdev)
{
- struct device *dev = &hdev->pdev->dev;
u32 mpf_bd_num, pf_bd_num, bd_num;
- struct hclge_desc desc_bd;
struct hclge_desc *desc;
int ret;
/* query the number of registers in the RAS int status */
- hclge_cmd_setup_basic_desc(&desc_bd, HCLGE_QUERY_RAS_INT_STS_BD_NUM,
- true);
- ret = hclge_cmd_send(&hdev->hw, &desc_bd, 1);
- if (ret) {
- dev_err(dev, "fail(%d) to query ras int status bd num\n", ret);
+ ret = hclge_query_bd_num(hdev, true, &mpf_bd_num, &pf_bd_num);
+ if (ret)
return ret;
- }
- mpf_bd_num = le32_to_cpu(desc_bd.data[0]);
- pf_bd_num = le32_to_cpu(desc_bd.data[1]);
- bd_num = max_t(u32, mpf_bd_num, pf_bd_num);
+ bd_num = max_t(u32, mpf_bd_num, pf_bd_num);
desc = kcalloc(bd_num, sizeof(struct hclge_desc), GFP_KERNEL);
if (!desc)
return -ENOMEM;
@@ -1396,6 +1366,66 @@ static int hclge_handle_all_ras_errors(struct hclge_dev *hdev)
return ret;
}
+static int hclge_log_rocee_axi_error(struct hclge_dev *hdev)
+{
+ struct device *dev = &hdev->pdev->dev;
+ struct hclge_desc desc[3];
+ int ret;
+
+ hclge_cmd_setup_basic_desc(&desc[0], HCLGE_QUERY_ROCEE_AXI_RAS_INFO_CMD,
+ true);
+ hclge_cmd_setup_basic_desc(&desc[1], HCLGE_QUERY_ROCEE_AXI_RAS_INFO_CMD,
+ true);
+ hclge_cmd_setup_basic_desc(&desc[2], HCLGE_QUERY_ROCEE_AXI_RAS_INFO_CMD,
+ true);
+ desc[0].flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT);
+ desc[1].flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT);
+
+ ret = hclge_cmd_send(&hdev->hw, &desc[0], 3);
+ if (ret) {
+ dev_err(dev, "failed(%d) to query ROCEE AXI error sts\n", ret);
+ return ret;
+ }
+
+ dev_info(dev, "AXI1: %08X %08X %08X %08X %08X %08X\n",
+ le32_to_cpu(desc[0].data[0]), le32_to_cpu(desc[0].data[1]),
+ le32_to_cpu(desc[0].data[2]), le32_to_cpu(desc[0].data[3]),
+ le32_to_cpu(desc[0].data[4]), le32_to_cpu(desc[0].data[5]));
+ dev_info(dev, "AXI2: %08X %08X %08X %08X %08X %08X\n",
+ le32_to_cpu(desc[1].data[0]), le32_to_cpu(desc[1].data[1]),
+ le32_to_cpu(desc[1].data[2]), le32_to_cpu(desc[1].data[3]),
+ le32_to_cpu(desc[1].data[4]), le32_to_cpu(desc[1].data[5]));
+ dev_info(dev, "AXI3: %08X %08X %08X %08X\n",
+ le32_to_cpu(desc[2].data[0]), le32_to_cpu(desc[2].data[1]),
+ le32_to_cpu(desc[2].data[2]), le32_to_cpu(desc[2].data[3]));
+
+ return 0;
+}
+
+static int hclge_log_rocee_ecc_error(struct hclge_dev *hdev)
+{
+ struct device *dev = &hdev->pdev->dev;
+ struct hclge_desc desc[2];
+ int ret;
+
+ ret = hclge_cmd_query_error(hdev, &desc[0],
+ HCLGE_QUERY_ROCEE_ECC_RAS_INFO_CMD,
+ HCLGE_CMD_FLAG_NEXT, 0, 0);
+ if (ret) {
+ dev_err(dev, "failed(%d) to query ROCEE ECC error sts\n", ret);
+ return ret;
+ }
+
+ dev_info(dev, "ECC1: %08X %08X %08X %08X %08X %08X\n",
+ le32_to_cpu(desc[0].data[0]), le32_to_cpu(desc[0].data[1]),
+ le32_to_cpu(desc[0].data[2]), le32_to_cpu(desc[0].data[3]),
+ le32_to_cpu(desc[0].data[4]), le32_to_cpu(desc[0].data[5]));
+ dev_info(dev, "ECC2: %08X %08X %08X\n", le32_to_cpu(desc[1].data[0]),
+ le32_to_cpu(desc[1].data[1]), le32_to_cpu(desc[1].data[2]));
+
+ return 0;
+}
+
static int hclge_log_rocee_ovf_error(struct hclge_dev *hdev)
{
struct device *dev = &hdev->pdev->dev;
@@ -1403,8 +1433,7 @@ static int hclge_log_rocee_ovf_error(struct hclge_dev *hdev)
int ret;
/* read overflow error status */
- ret = hclge_cmd_query_error(hdev, &desc[0],
- HCLGE_ROCEE_PF_RAS_INT_CMD,
+ ret = hclge_cmd_query_error(hdev, &desc[0], HCLGE_ROCEE_PF_RAS_INT_CMD,
0, 0, 0);
if (ret) {
dev_err(dev, "failed(%d) to query ROCEE OVF error sts\n", ret);
@@ -1464,19 +1493,27 @@ hclge_log_and_clear_rocee_ras_error(struct hclge_dev *hdev)
status = le32_to_cpu(desc[0].data[0]);
- if (status & HCLGE_ROCEE_RERR_INT_MASK) {
- dev_warn(dev, "ROCEE RAS AXI rresp error\n");
- reset_type = HNAE3_FUNC_RESET;
- }
+ if (status & HCLGE_ROCEE_AXI_ERR_INT_MASK) {
+ if (status & HCLGE_ROCEE_RERR_INT_MASK)
+ dev_warn(dev, "ROCEE RAS AXI rresp error\n");
+
+ if (status & HCLGE_ROCEE_BERR_INT_MASK)
+ dev_warn(dev, "ROCEE RAS AXI bresp error\n");
- if (status & HCLGE_ROCEE_BERR_INT_MASK) {
- dev_warn(dev, "ROCEE RAS AXI bresp error\n");
reset_type = HNAE3_FUNC_RESET;
+
+ ret = hclge_log_rocee_axi_error(hdev);
+ if (ret)
+ return HNAE3_GLOBAL_RESET;
}
if (status & HCLGE_ROCEE_ECC_INT_MASK) {
dev_warn(dev, "ROCEE RAS 2bit ECC error\n");
reset_type = HNAE3_GLOBAL_RESET;
+
+ ret = hclge_log_rocee_ecc_error(hdev);
+ if (ret)
+ return HNAE3_GLOBAL_RESET;
}
if (status & HCLGE_ROCEE_OVF_INT_MASK) {
@@ -1486,7 +1523,6 @@ hclge_log_and_clear_rocee_ras_error(struct hclge_dev *hdev)
/* reset everything for now */
return HNAE3_GLOBAL_RESET;
}
- reset_type = HNAE3_FUNC_RESET;
}
/* clear error status */
@@ -1501,7 +1537,7 @@ hclge_log_and_clear_rocee_ras_error(struct hclge_dev *hdev)
return reset_type;
}
-static int hclge_config_rocee_ras_interrupt(struct hclge_dev *hdev, bool en)
+int hclge_config_rocee_ras_interrupt(struct hclge_dev *hdev, bool en)
{
struct device *dev = &hdev->pdev->dev;
struct hclge_desc desc;
@@ -1539,7 +1575,7 @@ static void hclge_handle_rocee_ras_error(struct hnae3_ae_dev *ae_dev)
reset_type = hclge_log_and_clear_rocee_ras_error(hdev);
if (reset_type != HNAE3_NONE_RESET)
- HCLGE_SET_DEFAULT_RESET_REQUEST(reset_type);
+ set_bit(reset_type, &ae_dev->hw_err_reset_req);
}
static const struct hclge_hw_blk hw_blk[] = {
@@ -1574,10 +1610,9 @@ static const struct hclge_hw_blk hw_blk[] = {
{ /* sentinel */ }
};
-int hclge_hw_error_set_state(struct hclge_dev *hdev, bool state)
+int hclge_config_nic_hw_error(struct hclge_dev *hdev, bool state)
{
const struct hclge_hw_blk *module = hw_blk;
- struct device *dev = &hdev->pdev->dev;
int ret = 0;
while (module->name) {
@@ -1589,10 +1624,6 @@ int hclge_hw_error_set_state(struct hclge_dev *hdev, bool state)
module++;
}
- ret = hclge_config_rocee_ras_interrupt(hdev, state);
- if (ret)
- dev_err(dev, "fail(%d) to configure ROCEE err int\n", ret);
-
return ret;
}
@@ -1602,165 +1633,281 @@ pci_ers_result_t hclge_handle_hw_ras_error(struct hnae3_ae_dev *ae_dev)
struct device *dev = &hdev->pdev->dev;
u32 status;
+ if (!test_bit(HCLGE_STATE_SERVICE_INITED, &hdev->state)) {
+ dev_err(dev,
+ "Can't recover - RAS error reported during dev init\n");
+ return PCI_ERS_RESULT_NONE;
+ }
+
status = hclge_read_dev(&hdev->hw, HCLGE_RAS_PF_OTHER_INT_STS_REG);
+ if (status & HCLGE_RAS_REG_NFE_MASK ||
+ status & HCLGE_RAS_REG_ROCEE_ERR_MASK)
+ ae_dev->hw_err_reset_req = 0;
+ else
+ goto out;
+
/* Handling Non-fatal HNS RAS errors */
if (status & HCLGE_RAS_REG_NFE_MASK) {
dev_warn(dev,
"HNS Non-Fatal RAS error(status=0x%x) identified\n",
status);
hclge_handle_all_ras_errors(hdev);
- } else {
- if (test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state) ||
- hdev->pdev->revision < 0x21) {
- ae_dev->override_pci_need_reset = 1;
- return PCI_ERS_RESULT_RECOVERED;
- }
}
- if (status & HCLGE_RAS_REG_ROCEE_ERR_MASK) {
- dev_warn(dev, "ROCEE uncorrected RAS error identified\n");
+ /* Handling Non-fatal Rocee RAS errors */
+ if (hdev->pdev->revision >= 0x21 &&
+ status & HCLGE_RAS_REG_ROCEE_ERR_MASK) {
+ dev_warn(dev, "ROCEE Non-Fatal RAS error identified\n");
hclge_handle_rocee_ras_error(ae_dev);
}
- if (status & HCLGE_RAS_REG_NFE_MASK ||
- status & HCLGE_RAS_REG_ROCEE_ERR_MASK) {
- ae_dev->override_pci_need_reset = 0;
+ if (test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state))
+ goto out;
+
+ if (ae_dev->hw_err_reset_req)
return PCI_ERS_RESULT_NEED_RESET;
- }
- ae_dev->override_pci_need_reset = 1;
+out:
return PCI_ERS_RESULT_RECOVERED;
}
-int hclge_handle_hw_msix_error(struct hclge_dev *hdev,
- unsigned long *reset_requests)
+static int hclge_clear_hw_msix_error(struct hclge_dev *hdev,
+ struct hclge_desc *desc, bool is_mpf,
+ u32 bd_num)
+{
+ if (is_mpf)
+ desc[0].opcode =
+ cpu_to_le16(HCLGE_QUERY_CLEAR_ALL_MPF_MSIX_INT);
+ else
+ desc[0].opcode = cpu_to_le16(HCLGE_QUERY_CLEAR_ALL_PF_MSIX_INT);
+
+ desc[0].flag = cpu_to_le16(HCLGE_CMD_FLAG_NO_INTR | HCLGE_CMD_FLAG_IN);
+
+ return hclge_cmd_send(&hdev->hw, &desc[0], bd_num);
+}
+
+/* hclge_query_over_8bd_err_info: query information about over_8bd_nfe_err
+ * @hdev: pointer to struct hclge_dev
+ * @vf_id: Index of the virtual function with error
+ * @q_id: Physical index of the queue with error
+ *
+ * This function gets the specific index of the queue and function which
+ * cause the over_8bd_nfe_err, using a command. If vf_id is 0, the error is
+ * caused by the PF instead of a VF.
+ */
+static int hclge_query_over_8bd_err_info(struct hclge_dev *hdev, u16 *vf_id,
+ u16 *q_id)
+{
+ struct hclge_query_ppu_pf_other_int_dfx_cmd *req;
+ struct hclge_desc desc;
+ int ret;
+
+ hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_PPU_PF_OTHER_INT_DFX, true);
+ ret = hclge_cmd_send(&hdev->hw, &desc, 1);
+ if (ret)
+ return ret;
+
+ req = (struct hclge_query_ppu_pf_other_int_dfx_cmd *)desc.data;
+ *vf_id = le16_to_cpu(req->over_8bd_no_fe_vf_id);
+ *q_id = le16_to_cpu(req->over_8bd_no_fe_qid);
+
+ return 0;
+}
+
+/* hclge_handle_over_8bd_err: handle MSI-X error named over_8bd_nfe_err
+ * @hdev: pointer to struct hclge_dev
+ * @reset_requests: reset level that we need to trigger later
+ *
+ * over_8bd_nfe_err is a special MSI-X error because it may be caused by a
+ * VF; in that case we trigger a VF reset. Otherwise, a PF reset is needed.
+ */
+static void hclge_handle_over_8bd_err(struct hclge_dev *hdev,
+ unsigned long *reset_requests)
{
- struct hclge_mac_tnl_stats mac_tnl_stats;
struct device *dev = &hdev->pdev->dev;
- u32 mpf_bd_num, pf_bd_num, bd_num;
- enum hnae3_reset_type reset_level;
- struct hclge_desc desc_bd;
- struct hclge_desc *desc;
- __le32 *desc_data;
- u32 status;
+ u16 vf_id;
+ u16 q_id;
int ret;
- /* query the number of bds for the MSIx int status */
- hclge_cmd_setup_basic_desc(&desc_bd, HCLGE_QUERY_MSIX_INT_STS_BD_NUM,
- true);
- ret = hclge_cmd_send(&hdev->hw, &desc_bd, 1);
+ ret = hclge_query_over_8bd_err_info(hdev, &vf_id, &q_id);
if (ret) {
- dev_err(dev, "fail(%d) to query msix int status bd num\n",
+ dev_err(dev, "fail(%d) to query over_8bd_no_fe info\n",
ret);
- return ret;
+ return;
}
- mpf_bd_num = le32_to_cpu(desc_bd.data[0]);
- pf_bd_num = le32_to_cpu(desc_bd.data[1]);
- bd_num = max_t(u32, mpf_bd_num, pf_bd_num);
+ dev_warn(dev, "PPU_PF_ABNORMAL_INT_ST over_8bd_no_fe found, vf_id(%d), queue_id(%d)\n",
+ vf_id, q_id);
- desc = kcalloc(bd_num, sizeof(struct hclge_desc), GFP_KERNEL);
- if (!desc)
- goto out;
+ if (vf_id) {
+ if (vf_id >= hdev->num_alloc_vport) {
+ dev_err(dev, "invalid vf id(%d)\n", vf_id);
+ return;
+ }
+
+ /* If we need to trigger other reset whose level is higher
+ * than HNAE3_VF_FUNC_RESET, no need to trigger a VF reset
+ * here.
+ */
+ if (*reset_requests != 0)
+ return;
+ ret = hclge_inform_reset_assert_to_vf(&hdev->vport[vf_id]);
+ if (ret)
+ dev_warn(dev, "inform reset to vf(%d) failed %d!\n",
+ hdev->vport->vport_id, ret);
+ } else {
+ set_bit(HNAE3_FUNC_RESET, reset_requests);
+ }
+}
+
+/* hclge_handle_mpf_msix_error: handle all main PF MSI-X errors
+ * @hdev: pointer to struct hclge_dev
+ * @desc: descriptor for describing the command
+ * @mpf_bd_num: number of extended command structures
+ * @reset_requests: record of the reset level that we need
+ *
+ * This function handles all the main PF MSI-X errors in the hw registers
+ * using commands.
+ */
+static int hclge_handle_mpf_msix_error(struct hclge_dev *hdev,
+ struct hclge_desc *desc,
+ int mpf_bd_num,
+ unsigned long *reset_requests)
+{
+ struct device *dev = &hdev->pdev->dev;
+ __le32 *desc_data;
+ u32 status;
+ int ret;
/* query all main PF MSIx errors */
hclge_cmd_setup_basic_desc(&desc[0], HCLGE_QUERY_CLEAR_ALL_MPF_MSIX_INT,
true);
- desc[0].flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT);
-
ret = hclge_cmd_send(&hdev->hw, &desc[0], mpf_bd_num);
if (ret) {
- dev_err(dev, "query all mpf msix int cmd failed (%d)\n",
- ret);
- goto msi_error;
+ dev_err(dev, "query all mpf msix int cmd failed (%d)\n", ret);
+ return ret;
}
/* log MAC errors */
desc_data = (__le32 *)&desc[1];
status = le32_to_cpu(*desc_data);
- if (status) {
- reset_level = hclge_log_error(dev, "MAC_AFIFO_TNL_INT_R",
- &hclge_mac_afifo_tnl_int[0],
- status);
- set_bit(reset_level, reset_requests);
- }
+ if (status)
+ hclge_log_error(dev, "MAC_AFIFO_TNL_INT_R",
+ &hclge_mac_afifo_tnl_int[0], status,
+ reset_requests);
/* log PPU(RCB) MPF errors */
desc_data = (__le32 *)&desc[5];
status = le32_to_cpu(*(desc_data + 2)) &
HCLGE_PPU_MPF_INT_ST2_MSIX_MASK;
- if (status) {
- reset_level =
- hclge_log_error(dev, "PPU_MPF_ABNORMAL_INT_ST2",
- &hclge_ppu_mpf_abnormal_int_st2[0],
- status);
- set_bit(reset_level, reset_requests);
- }
+ if (status)
+ dev_warn(dev, "PPU_MPF_ABNORMAL_INT_ST2 rx_q_search_miss found [dfx status=0x%x\n]",
+ status);
/* clear all main PF MSIx errors */
- hclge_cmd_reuse_desc(&desc[0], false);
- desc[0].flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT);
+ ret = hclge_clear_hw_msix_error(hdev, desc, true, mpf_bd_num);
+ if (ret)
+ dev_err(dev, "clear all mpf msix int cmd failed (%d)\n", ret);
- ret = hclge_cmd_send(&hdev->hw, &desc[0], mpf_bd_num);
- if (ret) {
- dev_err(dev, "clear all mpf msix int cmd failed (%d)\n",
- ret);
- goto msi_error;
- }
+ return ret;
+}
+
+/* hclge_handle_pf_msix_error: handle all PF MSI-X errors
+ * @hdev: pointer to struct hclge_dev
+ * @desc: descriptor for describing the command
+ * @pf_bd_num: number of extended command structures
+ * @reset_requests: record of the reset level that we need
+ *
+ * This function handles all the PF MSI-X errors in the hw registers using
+ * commands.
+ */
+static int hclge_handle_pf_msix_error(struct hclge_dev *hdev,
+ struct hclge_desc *desc,
+ int pf_bd_num,
+ unsigned long *reset_requests)
+{
+ struct device *dev = &hdev->pdev->dev;
+ __le32 *desc_data;
+ u32 status;
+ int ret;
/* query all PF MSIx errors */
- memset(desc, 0, bd_num * sizeof(struct hclge_desc));
hclge_cmd_setup_basic_desc(&desc[0], HCLGE_QUERY_CLEAR_ALL_PF_MSIX_INT,
true);
- desc[0].flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT);
-
ret = hclge_cmd_send(&hdev->hw, &desc[0], pf_bd_num);
if (ret) {
- dev_err(dev, "query all pf msix int cmd failed (%d)\n",
- ret);
- goto msi_error;
+ dev_err(dev, "query all pf msix int cmd failed (%d)\n", ret);
+ return ret;
}
/* log SSU PF errors */
status = le32_to_cpu(desc[0].data[0]) & HCLGE_SSU_PORT_INT_MSIX_MASK;
- if (status) {
- reset_level = hclge_log_error(dev, "SSU_PORT_BASED_ERR_INT",
- &hclge_ssu_port_based_pf_int[0],
- status);
- set_bit(reset_level, reset_requests);
- }
+ if (status)
+ hclge_log_error(dev, "SSU_PORT_BASED_ERR_INT",
+ &hclge_ssu_port_based_pf_int[0],
+ status, reset_requests);
/* read and log PPP PF errors */
desc_data = (__le32 *)&desc[2];
status = le32_to_cpu(*desc_data);
- if (status) {
- reset_level = hclge_log_error(dev, "PPP_PF_ABNORMAL_INT_ST0",
- &hclge_ppp_pf_abnormal_int[0],
- status);
- set_bit(reset_level, reset_requests);
- }
+ if (status)
+ hclge_log_error(dev, "PPP_PF_ABNORMAL_INT_ST0",
+ &hclge_ppp_pf_abnormal_int[0],
+ status, reset_requests);
/* log PPU(RCB) PF errors */
desc_data = (__le32 *)&desc[3];
status = le32_to_cpu(*desc_data) & HCLGE_PPU_PF_INT_MSIX_MASK;
- if (status) {
- reset_level = hclge_log_error(dev, "PPU_PF_ABNORMAL_INT_ST",
- &hclge_ppu_pf_abnormal_int[0],
- status);
- set_bit(reset_level, reset_requests);
- }
+ if (status)
+ hclge_log_error(dev, "PPU_PF_ABNORMAL_INT_ST",
+ &hclge_ppu_pf_abnormal_int[0],
+ status, reset_requests);
+
+ status = le32_to_cpu(*desc_data) & HCLGE_PPU_PF_OVER_8BD_ERR_MASK;
+ if (status)
+ hclge_handle_over_8bd_err(hdev, reset_requests);
/* clear all PF MSIx errors */
- hclge_cmd_reuse_desc(&desc[0], false);
- desc[0].flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT);
+ ret = hclge_clear_hw_msix_error(hdev, desc, false, pf_bd_num);
+ if (ret)
+ dev_err(dev, "clear all pf msix int cmd failed (%d)\n", ret);
- ret = hclge_cmd_send(&hdev->hw, &desc[0], pf_bd_num);
- if (ret) {
- dev_err(dev, "clear all pf msix int cmd failed (%d)\n",
- ret);
+ return ret;
+}
+
+static int hclge_handle_all_hw_msix_error(struct hclge_dev *hdev,
+ unsigned long *reset_requests)
+{
+ struct hclge_mac_tnl_stats mac_tnl_stats;
+ struct device *dev = &hdev->pdev->dev;
+ u32 mpf_bd_num, pf_bd_num, bd_num;
+ struct hclge_desc *desc;
+ u32 status;
+ int ret;
+
+ /* query the number of bds for the MSIx int status */
+ ret = hclge_query_bd_num(hdev, false, &mpf_bd_num, &pf_bd_num);
+ if (ret)
+ goto out;
+
+ bd_num = max_t(u32, mpf_bd_num, pf_bd_num);
+ desc = kcalloc(bd_num, sizeof(struct hclge_desc), GFP_KERNEL);
+ if (!desc) {
+ ret = -ENOMEM;
+ goto out;
}
+ ret = hclge_handle_mpf_msix_error(hdev, desc, mpf_bd_num,
+ reset_requests);
+ if (ret)
+ goto msi_error;
+
+ memset(desc, 0, bd_num * sizeof(struct hclge_desc));
+ ret = hclge_handle_pf_msix_error(hdev, desc, pf_bd_num, reset_requests);
+ if (ret)
+ goto msi_error;
+
/* query and clear mac tnl interruptions */
hclge_cmd_setup_basic_desc(&desc[0], HCLGE_OPC_QUERY_MAC_TNL_INT,
true);
@@ -1783,7 +1930,6 @@ int hclge_handle_hw_msix_error(struct hclge_dev *hdev,
ret = hclge_clear_mac_tnl_int(hdev);
if (ret)
dev_err(dev, "clear mac tnl int failed (%d)\n", ret);
- set_bit(HNAE3_NONE_RESET, reset_requests);
}
msi_error:
@@ -1791,3 +1937,70 @@ msi_error:
out:
return ret;
}
+
+int hclge_handle_hw_msix_error(struct hclge_dev *hdev,
+ unsigned long *reset_requests)
+{
+ struct device *dev = &hdev->pdev->dev;
+
+ if (!test_bit(HCLGE_STATE_SERVICE_INITED, &hdev->state)) {
+ dev_err(dev,
+ "Can't handle - MSIx error reported during dev init\n");
+ return 0;
+ }
+
+ return hclge_handle_all_hw_msix_error(hdev, reset_requests);
+}
+
+void hclge_handle_all_hns_hw_errors(struct hnae3_ae_dev *ae_dev)
+{
+#define HCLGE_DESC_NO_DATA_LEN 8
+
+ struct hclge_dev *hdev = ae_dev->priv;
+ struct device *dev = &hdev->pdev->dev;
+ u32 mpf_bd_num, pf_bd_num, bd_num;
+ struct hclge_desc *desc;
+ u32 status;
+ int ret;
+
+ ae_dev->hw_err_reset_req = 0;
+ status = hclge_read_dev(&hdev->hw, HCLGE_RAS_PF_OTHER_INT_STS_REG);
+
+ /* query the number of bds for the MSIx int status */
+ ret = hclge_query_bd_num(hdev, false, &mpf_bd_num, &pf_bd_num);
+ if (ret)
+ return;
+
+ bd_num = max_t(u32, mpf_bd_num, pf_bd_num);
+ desc = kcalloc(bd_num, sizeof(struct hclge_desc), GFP_KERNEL);
+ if (!desc)
+ return;
+
+ /* Clear HNS hw errors reported through msix */
+ memset(&desc[0].data[0], 0xFF, mpf_bd_num * sizeof(struct hclge_desc) -
+ HCLGE_DESC_NO_DATA_LEN);
+ ret = hclge_clear_hw_msix_error(hdev, desc, true, mpf_bd_num);
+ if (ret) {
+ dev_err(dev, "fail(%d) to clear mpf msix int during init\n",
+ ret);
+ goto msi_error;
+ }
+
+ memset(&desc[0].data[0], 0xFF, pf_bd_num * sizeof(struct hclge_desc) -
+ HCLGE_DESC_NO_DATA_LEN);
+ ret = hclge_clear_hw_msix_error(hdev, desc, false, pf_bd_num);
+ if (ret) {
+ dev_err(dev, "fail(%d) to clear pf msix int during init\n",
+ ret);
+ goto msi_error;
+ }
+
+ /* Handle Non-fatal HNS RAS errors */
+ if (status & HCLGE_RAS_REG_NFE_MASK) {
+ dev_warn(dev, "HNS hw error(RAS) identified during init\n");
+ hclge_handle_all_ras_errors(hdev);
+ }
+
+msi_error:
+ kfree(desc);
+}
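The hclge_err.c changes above replace the old pattern, where each logging helper returned a single reset level that was fed to HCLGE_SET_DEFAULT_RESET_REQUEST, with a shared request bitmap: hclge_log_error() now records the reset level of every matched error as a bit in reset_requests (or ae_dev->hw_err_reset_req), and the caller later resolves the highest pending level. A self-contained sketch of that accumulation pattern, using simplified, hypothetical types rather than the driver's own:

/* Illustration only: accumulate reset requests in a bitmap instead of
 * returning a single reset level per logging call. */
#include <stdio.h>

enum reset_level { RESET_NONE, RESET_FUNC, RESET_GLOBAL, RESET_IMP };

struct hw_error {
	unsigned int int_msk;	/* status bit for this error */
	const char *msg;	/* human-readable name */
	enum reset_level level;	/* reset needed when it fires */
};

static void log_errors(const struct hw_error *err, unsigned int status,
		       unsigned long *reset_requests)
{
	for (; err->msg; err++) {
		if (!(err->int_msk & status))
			continue;
		printf("%s found [error status=0x%x]\n", err->msg, status);
		if (err->level != RESET_NONE)
			*reset_requests |= 1UL << err->level; /* ~ set_bit() */
	}
}

int main(void)
{
	static const struct hw_error igu_int[] = {
		{ 1u << 0, "rx_buf_overflow", RESET_GLOBAL },
		{ 1u << 1, "rx_stp_fifo_overflow", RESET_GLOBAL },
		{ 0, NULL, RESET_NONE },	/* sentinel */
	};
	unsigned long requests = 0;

	log_errors(igu_int, 0x3, &requests);
	printf("pending reset request mask: 0x%lx\n", requests);
	return 0;
}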
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.h
index 9645590c9294..7ea8bb28a0cb 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.h
@@ -6,6 +6,11 @@
#include "hclge_main.h"
+#define HCLGE_MPF_RAS_INT_MIN_BD_NUM 10
+#define HCLGE_PF_RAS_INT_MIN_BD_NUM 4
+#define HCLGE_MPF_MSIX_INT_MIN_BD_NUM 10
+#define HCLGE_PF_MSIX_INT_MIN_BD_NUM 4
+
#define HCLGE_RAS_PF_OTHER_INT_STS_REG 0x20B00
#define HCLGE_RAS_REG_NFE_MASK 0xFF00
#define HCLGE_RAS_REG_ROCEE_ERR_MASK 0x3000000
@@ -47,9 +52,9 @@
#define HCLGE_NCSI_ERR_INT_TYPE 0x9
#define HCLGE_MAC_COMMON_ERR_INT_EN 0x107FF
#define HCLGE_MAC_COMMON_ERR_INT_EN_MASK 0x107FF
-#define HCLGE_MAC_TNL_INT_EN GENMASK(7, 0)
-#define HCLGE_MAC_TNL_INT_EN_MASK GENMASK(7, 0)
-#define HCLGE_MAC_TNL_INT_CLR GENMASK(7, 0)
+#define HCLGE_MAC_TNL_INT_EN GENMASK(9, 0)
+#define HCLGE_MAC_TNL_INT_EN_MASK GENMASK(9, 0)
+#define HCLGE_MAC_TNL_INT_CLR GENMASK(9, 0)
#define HCLGE_PPU_MPF_ABNORMAL_INT0_EN GENMASK(31, 0)
#define HCLGE_PPU_MPF_ABNORMAL_INT0_EN_MASK GENMASK(31, 0)
#define HCLGE_PPU_MPF_ABNORMAL_INT1_EN GENMASK(31, 0)
@@ -81,9 +86,10 @@
#define HCLGE_IGU_EGU_TNL_INT_MASK GENMASK(5, 0)
#define HCLGE_PPP_MPF_INT_ST3_MASK GENMASK(5, 0)
#define HCLGE_PPU_MPF_INT_ST3_MASK GENMASK(7, 0)
-#define HCLGE_PPU_MPF_INT_ST2_MSIX_MASK GENMASK(29, 28)
+#define HCLGE_PPU_MPF_INT_ST2_MSIX_MASK BIT(29)
#define HCLGE_PPU_PF_INT_RAS_MASK 0x18
-#define HCLGE_PPU_PF_INT_MSIX_MASK 0x27
+#define HCLGE_PPU_PF_INT_MSIX_MASK 0x26
+#define HCLGE_PPU_PF_OVER_8BD_ERR_MASK 0x01
#define HCLGE_QCN_FIFO_INT_MASK GENMASK(17, 0)
#define HCLGE_QCN_ECC_INT_MASK GENMASK(21, 0)
#define HCLGE_NCSI_ECC_INT_MASK GENMASK(1, 0)
@@ -94,6 +100,7 @@
#define HCLGE_ROCEE_RAS_CE_INT_EN_MASK 0x1
#define HCLGE_ROCEE_RERR_INT_MASK BIT(0)
#define HCLGE_ROCEE_BERR_INT_MASK BIT(1)
+#define HCLGE_ROCEE_AXI_ERR_INT_MASK GENMASK(1, 0)
#define HCLGE_ROCEE_ECC_INT_MASK BIT(2)
#define HCLGE_ROCEE_OVF_INT_MASK BIT(3)
#define HCLGE_ROCEE_OVF_ERR_INT_MASK 0x10000
@@ -119,7 +126,9 @@ struct hclge_hw_error {
};
int hclge_config_mac_tnl_int(struct hclge_dev *hdev, bool en);
-int hclge_hw_error_set_state(struct hclge_dev *hdev, bool state);
+int hclge_config_nic_hw_error(struct hclge_dev *hdev, bool state);
+int hclge_config_rocee_ras_interrupt(struct hclge_dev *hdev, bool en);
+void hclge_handle_all_hns_hw_errors(struct hnae3_ae_dev *ae_dev);
pci_ers_result_t hclge_handle_hw_ras_error(struct hnae3_ae_dev *ae_dev);
int hclge_handle_hw_msix_error(struct hclge_dev *hdev,
unsigned long *reset_requests);
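One detail worth noting in the header changes: HCLGE_PPU_PF_INT_MSIX_MASK shrinks from 0x27 to 0x26 and the dropped bit reappears as HCLGE_PPU_PF_OVER_8BD_ERR_MASK (0x01), so the over_8bd_nfe_err bit can take the separate, VF-aware path added in hclge_handle_over_8bd_err(). A standalone sketch of splitting one status word across the two masks (mask values taken from the diff above, everything else illustrative):

/* Illustration only: route one status word through two masks. */
#include <stdio.h>

#define PPU_PF_INT_MSIX_MASK	 0x26u	/* generic PPU PF MSI-X errors */
#define PPU_PF_OVER_8BD_ERR_MASK 0x01u	/* handled separately, may target a VF */

int main(void)
{
	unsigned int status = 0x27;	/* example raw register value */

	if (status & PPU_PF_INT_MSIX_MASK)
		printf("log generic PPU PF errors: 0x%x\n",
		       status & PPU_PF_INT_MSIX_MASK);
	if (status & PPU_PF_OVER_8BD_ERR_MASK)
		printf("handle over_8bd_nfe_err (VF-aware path)\n");
	return 0;
}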
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
index d3b1f8cb1155..3fde5471e1c0 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
@@ -27,14 +27,26 @@
#define HCLGE_STATS_READ(p, offset) (*((u64 *)((u8 *)(p) + (offset))))
#define HCLGE_MAC_STATS_FIELD_OFF(f) (offsetof(struct hclge_mac_stats, f))
-#define HCLGE_BUF_SIZE_UNIT 256
+#define HCLGE_BUF_SIZE_UNIT 256U
+#define HCLGE_BUF_MUL_BY 2
+#define HCLGE_BUF_DIV_BY 2
+#define NEED_RESERVE_TC_NUM 2
+#define BUF_MAX_PERCENT 100
+#define BUF_RESERVE_PERCENT 90
+
+#define HCLGE_RESET_MAX_FAIL_CNT 5
static int hclge_set_mac_mtu(struct hclge_dev *hdev, int new_mps);
static int hclge_init_vlan_config(struct hclge_dev *hdev);
+static void hclge_sync_vlan_filter(struct hclge_dev *hdev);
static int hclge_reset_ae_dev(struct hnae3_ae_dev *ae_dev);
static bool hclge_get_hw_reset_stat(struct hnae3_handle *handle);
static int hclge_set_umv_space(struct hclge_dev *hdev, u16 space_size,
u16 *allocated_size, bool is_alloc);
+static void hclge_rfs_filter_expire(struct hclge_dev *hdev);
+static void hclge_clear_arfs_rules(struct hnae3_handle *handle);
+static enum hnae3_reset_type hclge_get_reset_level(struct hnae3_ae_dev *ae_dev,
+ unsigned long *addr);
static struct hnae3_ae_algo ae_algo;
@@ -290,7 +302,7 @@ static const struct hclge_comm_stats_str g_mac_stats_string[] = {
static const struct hclge_mac_mgr_tbl_entry_cmd hclge_mgr_table[] = {
{
.flags = HCLGE_MAC_MGR_MASK_VLAN_B,
- .ethter_type = cpu_to_le16(HCLGE_MAC_ETHERTYPE_LLDP),
+ .ethter_type = cpu_to_le16(ETH_P_LLDP),
.mac_addr_hi32 = cpu_to_le32(htonl(0x0180C200)),
.mac_addr_lo16 = cpu_to_le16(htons(0x000E)),
.i_port_bitmap = 0x1,
@@ -437,8 +449,7 @@ static int hclge_tqps_update_stats(struct hnae3_handle *handle)
queue = handle->kinfo.tqp[i];
tqp = container_of(queue, struct hclge_tqp, q);
/* command : HCLGE_OPC_QUERY_IGU_STAT */
- hclge_cmd_setup_basic_desc(&desc[0],
- HCLGE_OPC_QUERY_RX_STATUS,
+ hclge_cmd_setup_basic_desc(&desc[0], HCLGE_OPC_QUERY_RX_STATUS,
true);
desc[0].data[0] = cpu_to_le32((tqp->index & 0x1ff));
@@ -446,7 +457,7 @@ static int hclge_tqps_update_stats(struct hnae3_handle *handle)
if (ret) {
dev_err(&hdev->pdev->dev,
"Query tqp stat fail, status = %d,queue = %d\n",
- ret, i);
+ ret, i);
return ret;
}
tqp->tqp_stats.rcb_rx_ring_pktnum_rcd +=
@@ -500,6 +511,7 @@ static int hclge_tqps_get_sset_count(struct hnae3_handle *handle, int stringset)
{
struct hnae3_knic_private_info *kinfo = &handle->kinfo;
+ /* each tqp has TX & RX two queues */
return kinfo->num_tqps * (2);
}
@@ -528,7 +540,7 @@ static u8 *hclge_tqps_get_strings(struct hnae3_handle *handle, u8 *data)
return buff;
}
-static u64 *hclge_comm_get_stats(void *comm_stats,
+static u64 *hclge_comm_get_stats(const void *comm_stats,
const struct hclge_comm_stats_str strs[],
int size, u64 *data)
{
@@ -552,8 +564,7 @@ static u8 *hclge_comm_get_strings(u32 stringset,
return buff;
for (i = 0; i < size; i++) {
- snprintf(buff, ETH_GSTRING_LEN,
- strs[i].desc);
+ snprintf(buff, ETH_GSTRING_LEN, "%s", strs[i].desc);
buff = buff + ETH_GSTRING_LEN;
}
@@ -644,8 +655,7 @@ static int hclge_get_sset_count(struct hnae3_handle *handle, int stringset)
return count;
}
-static void hclge_get_strings(struct hnae3_handle *handle,
- u32 stringset,
+static void hclge_get_strings(struct hnae3_handle *handle, u32 stringset,
u8 *data)
{
u8 *p = (char *)data;
@@ -653,21 +663,17 @@ static void hclge_get_strings(struct hnae3_handle *handle,
if (stringset == ETH_SS_STATS) {
size = ARRAY_SIZE(g_mac_stats_string);
- p = hclge_comm_get_strings(stringset,
- g_mac_stats_string,
- size,
- p);
+ p = hclge_comm_get_strings(stringset, g_mac_stats_string,
+ size, p);
p = hclge_tqps_get_strings(handle, p);
} else if (stringset == ETH_SS_TEST) {
if (handle->flags & HNAE3_SUPPORT_APP_LOOPBACK) {
- memcpy(p,
- hns3_nic_test_strs[HNAE3_LOOP_APP],
+ memcpy(p, hns3_nic_test_strs[HNAE3_LOOP_APP],
ETH_GSTRING_LEN);
p += ETH_GSTRING_LEN;
}
if (handle->flags & HNAE3_SUPPORT_SERDES_SERIAL_LOOPBACK) {
- memcpy(p,
- hns3_nic_test_strs[HNAE3_LOOP_SERIAL_SERDES],
+ memcpy(p, hns3_nic_test_strs[HNAE3_LOOP_SERIAL_SERDES],
ETH_GSTRING_LEN);
p += ETH_GSTRING_LEN;
}
@@ -678,8 +684,7 @@ static void hclge_get_strings(struct hnae3_handle *handle,
p += ETH_GSTRING_LEN;
}
if (handle->flags & HNAE3_SUPPORT_PHY_LOOPBACK) {
- memcpy(p,
- hns3_nic_test_strs[HNAE3_LOOP_PHY],
+ memcpy(p, hns3_nic_test_strs[HNAE3_LOOP_PHY],
ETH_GSTRING_LEN);
p += ETH_GSTRING_LEN;
}
@@ -692,10 +697,8 @@ static void hclge_get_stats(struct hnae3_handle *handle, u64 *data)
struct hclge_dev *hdev = vport->back;
u64 *p;
- p = hclge_comm_get_stats(&hdev->hw_stats.mac_stats,
- g_mac_stats_string,
- ARRAY_SIZE(g_mac_stats_string),
- data);
+ p = hclge_comm_get_stats(&hdev->hw_stats.mac_stats, g_mac_stats_string,
+ ARRAY_SIZE(g_mac_stats_string), data);
p = hclge_tqps_get_stats(handle, p);
}
@@ -726,6 +729,8 @@ static int hclge_parse_func_status(struct hclge_dev *hdev,
static int hclge_query_function_status(struct hclge_dev *hdev)
{
+#define HCLGE_QUERY_MAX_CNT 5
+
struct hclge_func_status_cmd *req;
struct hclge_desc desc;
int timeout = 0;
@@ -738,9 +743,7 @@ static int hclge_query_function_status(struct hclge_dev *hdev)
ret = hclge_cmd_send(&hdev->hw, &desc, 1);
if (ret) {
dev_err(&hdev->pdev->dev,
- "query function status failed %d.\n",
- ret);
-
+ "query function status failed %d.\n", ret);
return ret;
}
@@ -748,7 +751,7 @@ static int hclge_query_function_status(struct hclge_dev *hdev)
if (req->pf_state)
break;
usleep_range(1000, 2000);
- } while (timeout++ < 5);
+ } while (timeout++ < HCLGE_QUERY_MAX_CNT);
ret = hclge_parse_func_status(hdev, req);
@@ -800,7 +803,7 @@ static int hclge_query_pf_resource(struct hclge_dev *hdev)
/* PF should have NIC vectors and Roce vectors,
* NIC vectors are queued before Roce vectors.
*/
- hdev->num_msi = hdev->num_roce_msi +
+ hdev->num_msi = hdev->num_roce_msi +
hdev->roce_base_msix_offset;
} else {
hdev->num_msi =
@@ -1058,6 +1061,7 @@ static void hclge_parse_copper_link_mode(struct hclge_dev *hdev,
linkmode_set_bit(ETHTOOL_LINK_MODE_Autoneg_BIT, supported);
linkmode_set_bit(ETHTOOL_LINK_MODE_TP_BIT, supported);
linkmode_set_bit(ETHTOOL_LINK_MODE_Pause_BIT, supported);
+ linkmode_set_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT, supported);
}
static void hclge_parse_link_mode(struct hclge_dev *hdev, u8 speed_ability)
@@ -1076,7 +1080,7 @@ static void hclge_parse_cfg(struct hclge_cfg *cfg, struct hclge_desc *desc)
struct hclge_cfg_param_cmd *req;
u64 mac_addr_tmp_high;
u64 mac_addr_tmp;
- int i;
+ unsigned int i;
req = (struct hclge_cfg_param_cmd *)desc[0].data;
@@ -1138,7 +1142,8 @@ static int hclge_get_cfg(struct hclge_dev *hdev, struct hclge_cfg *hcfg)
{
struct hclge_desc desc[HCLGE_PF_CFG_DESC_NUM];
struct hclge_cfg_param_cmd *req;
- int i, ret;
+ unsigned int i;
+ int ret;
for (i = 0; i < HCLGE_PF_CFG_DESC_NUM; i++) {
u32 offset = 0;
@@ -1204,7 +1209,8 @@ static void hclge_init_kdump_kernel_config(struct hclge_dev *hdev)
static int hclge_configure(struct hclge_dev *hdev)
{
struct hclge_cfg cfg;
- int ret, i;
+ unsigned int i;
+ int ret;
ret = hclge_get_cfg(hdev, &cfg);
if (ret) {
@@ -1226,8 +1232,10 @@ static int hclge_configure(struct hclge_dev *hdev)
hdev->tm_info.hw_pfc_map = 0;
hdev->wanted_umv_size = cfg.umv_space;
- if (hnae3_dev_fd_supported(hdev))
+ if (hnae3_dev_fd_supported(hdev)) {
hdev->fd_en = true;
+ hdev->fd_active_type = HCLGE_FD_RULE_NONE;
+ }
ret = hclge_parse_speed(cfg.default_speed, &hdev->hw.mac.speed);
if (ret) {
@@ -1265,8 +1273,8 @@ static int hclge_configure(struct hclge_dev *hdev)
return ret;
}
-static int hclge_config_tso(struct hclge_dev *hdev, int tso_mss_min,
- int tso_mss_max)
+static int hclge_config_tso(struct hclge_dev *hdev, unsigned int tso_mss_min,
+ unsigned int tso_mss_max)
{
struct hclge_cfg_tso_status_cmd *req;
struct hclge_desc desc;
@@ -1352,8 +1360,9 @@ static int hclge_map_tqps_to_func(struct hclge_dev *hdev, u16 func_id,
req = (struct hclge_tqp_map_cmd *)desc.data;
req->tqp_id = cpu_to_le16(tqp_pid);
req->tqp_vf = func_id;
- req->tqp_flag = !is_pf << HCLGE_TQP_MAP_TYPE_B |
- 1 << HCLGE_TQP_MAP_EN_B;
+ req->tqp_flag = 1U << HCLGE_TQP_MAP_EN_B;
+ if (!is_pf)
+ req->tqp_flag |= 1U << HCLGE_TQP_MAP_TYPE_B;
req->tqp_vid = cpu_to_le16(tqp_vid);
ret = hclge_cmd_send(&hdev->hw, &desc, 1);
@@ -1457,11 +1466,6 @@ static int hclge_map_tqp(struct hclge_dev *hdev)
return 0;
}
-static void hclge_unic_setup(struct hclge_vport *vport, u16 num_tqps)
-{
- /* this would be initialized later */
-}
-
static int hclge_vport_setup(struct hclge_vport *vport, u16 num_tqps)
{
struct hnae3_handle *nic = &vport->nic;
@@ -1472,20 +1476,12 @@ static int hclge_vport_setup(struct hclge_vport *vport, u16 num_tqps)
nic->ae_algo = &ae_algo;
nic->numa_node_mask = hdev->numa_node_mask;
- if (hdev->ae_dev->dev_type == HNAE3_DEV_KNIC) {
- ret = hclge_knic_setup(vport, num_tqps,
- hdev->num_tx_desc, hdev->num_rx_desc);
-
- if (ret) {
- dev_err(&hdev->pdev->dev, "knic setup failed %d\n",
- ret);
- return ret;
- }
- } else {
- hclge_unic_setup(vport, num_tqps);
- }
+ ret = hclge_knic_setup(vport, num_tqps,
+ hdev->num_tx_desc, hdev->num_rx_desc);
+ if (ret)
+ dev_err(&hdev->pdev->dev, "knic setup failed %d\n", ret);
- return 0;
+ return ret;
}
static int hclge_alloc_vport(struct hclge_dev *hdev)
@@ -1591,7 +1587,8 @@ static int hclge_tx_buffer_alloc(struct hclge_dev *hdev,
static u32 hclge_get_tc_num(struct hclge_dev *hdev)
{
- int i, cnt = 0;
+ unsigned int i;
+ u32 cnt = 0;
for (i = 0; i < HCLGE_MAX_TC_NUM; i++)
if (hdev->hw_tc_map & BIT(i))
@@ -1604,7 +1601,8 @@ static int hclge_get_pfc_priv_num(struct hclge_dev *hdev,
struct hclge_pkt_buf_alloc *buf_alloc)
{
struct hclge_priv_buf *priv;
- int i, cnt = 0;
+ unsigned int i;
+ int cnt = 0;
for (i = 0; i < HCLGE_MAX_TC_NUM; i++) {
priv = &buf_alloc->priv_buf[i];
@@ -1621,7 +1619,8 @@ static int hclge_get_no_pfc_priv_num(struct hclge_dev *hdev,
struct hclge_pkt_buf_alloc *buf_alloc)
{
struct hclge_priv_buf *priv;
- int i, cnt = 0;
+ unsigned int i;
+ int cnt = 0;
for (i = 0; i < HCLGE_MAX_TC_NUM; i++) {
priv = &buf_alloc->priv_buf[i];
@@ -1671,7 +1670,8 @@ static bool hclge_is_rx_buf_ok(struct hclge_dev *hdev,
aligned_mps = roundup(hdev->mps, HCLGE_BUF_SIZE_UNIT);
if (hnae3_dev_dcb_supported(hdev))
- shared_buf_min = 2 * aligned_mps + hdev->dv_buf_size;
+ shared_buf_min = HCLGE_BUF_MUL_BY * aligned_mps +
+ hdev->dv_buf_size;
else
shared_buf_min = aligned_mps + HCLGE_NON_DCB_ADDITIONAL_BUF
+ hdev->dv_buf_size;
@@ -1689,7 +1689,8 @@ static bool hclge_is_rx_buf_ok(struct hclge_dev *hdev,
if (hnae3_dev_dcb_supported(hdev)) {
buf_alloc->s_buf.self.high = shared_buf - hdev->dv_buf_size;
buf_alloc->s_buf.self.low = buf_alloc->s_buf.self.high
- - roundup(aligned_mps / 2, HCLGE_BUF_SIZE_UNIT);
+ - roundup(aligned_mps / HCLGE_BUF_DIV_BY,
+ HCLGE_BUF_SIZE_UNIT);
} else {
buf_alloc->s_buf.self.high = aligned_mps +
HCLGE_NON_DCB_ADDITIONAL_BUF;
@@ -1697,14 +1698,18 @@ static bool hclge_is_rx_buf_ok(struct hclge_dev *hdev,
}
if (hnae3_dev_dcb_supported(hdev)) {
+ hi_thrd = shared_buf - hdev->dv_buf_size;
+
+ if (tc_num <= NEED_RESERVE_TC_NUM)
+ hi_thrd = hi_thrd * BUF_RESERVE_PERCENT
+ / BUF_MAX_PERCENT;
+
if (tc_num)
- hi_thrd = (shared_buf - hdev->dv_buf_size) / tc_num;
- else
- hi_thrd = shared_buf - hdev->dv_buf_size;
+ hi_thrd = hi_thrd / tc_num;
- hi_thrd = max_t(u32, hi_thrd, 2 * aligned_mps);
+ hi_thrd = max_t(u32, hi_thrd, HCLGE_BUF_MUL_BY * aligned_mps);
hi_thrd = rounddown(hi_thrd, HCLGE_BUF_SIZE_UNIT);
- lo_thrd = hi_thrd - aligned_mps / 2;
+ lo_thrd = hi_thrd - aligned_mps / HCLGE_BUF_DIV_BY;
} else {
hi_thrd = aligned_mps + HCLGE_NON_DCB_ADDITIONAL_BUF;
lo_thrd = aligned_mps;
@@ -1749,7 +1754,7 @@ static bool hclge_rx_buf_calc_all(struct hclge_dev *hdev, bool max,
{
u32 rx_all = hdev->pkt_buf_size - hclge_get_tx_buff_alloced(buf_alloc);
u32 aligned_mps = round_up(hdev->mps, HCLGE_BUF_SIZE_UNIT);
- int i;
+ unsigned int i;
for (i = 0; i < HCLGE_MAX_TC_NUM; i++) {
struct hclge_priv_buf *priv = &buf_alloc->priv_buf[i];
@@ -1765,12 +1770,13 @@ static bool hclge_rx_buf_calc_all(struct hclge_dev *hdev, bool max,
priv->enable = 1;
if (hdev->tm_info.hw_pfc_map & BIT(i)) {
- priv->wl.low = max ? aligned_mps : 256;
+ priv->wl.low = max ? aligned_mps : HCLGE_BUF_SIZE_UNIT;
priv->wl.high = roundup(priv->wl.low + aligned_mps,
HCLGE_BUF_SIZE_UNIT);
} else {
priv->wl.low = 0;
- priv->wl.high = max ? (aligned_mps * 2) : aligned_mps;
+ priv->wl.high = max ? (aligned_mps * HCLGE_BUF_MUL_BY) :
+ aligned_mps;
}
priv->buf_size = priv->wl.high + hdev->dv_buf_size;
@@ -1789,9 +1795,10 @@ static bool hclge_drop_nopfc_buf_till_fit(struct hclge_dev *hdev,
/* let the last to be cleared first */
for (i = HCLGE_MAX_TC_NUM - 1; i >= 0; i--) {
struct hclge_priv_buf *priv = &buf_alloc->priv_buf[i];
+ unsigned int mask = BIT((unsigned int)i);
- if (hdev->hw_tc_map & BIT(i) &&
- !(hdev->tm_info.hw_pfc_map & BIT(i))) {
+ if (hdev->hw_tc_map & mask &&
+ !(hdev->tm_info.hw_pfc_map & mask)) {
/* Clear the no pfc TC private buffer */
priv->wl.low = 0;
priv->wl.high = 0;
@@ -1818,9 +1825,10 @@ static bool hclge_drop_pfc_buf_till_fit(struct hclge_dev *hdev,
/* let the last to be cleared first */
for (i = HCLGE_MAX_TC_NUM - 1; i >= 0; i--) {
struct hclge_priv_buf *priv = &buf_alloc->priv_buf[i];
+ unsigned int mask = BIT((unsigned int)i);
- if (hdev->hw_tc_map & BIT(i) &&
- hdev->tm_info.hw_pfc_map & BIT(i)) {
+ if (hdev->hw_tc_map & mask &&
+ hdev->tm_info.hw_pfc_map & mask) {
/* Reduce the number of pfc TC with private buffer */
priv->wl.low = 0;
priv->enable = 0;
@@ -1837,6 +1845,55 @@ static bool hclge_drop_pfc_buf_till_fit(struct hclge_dev *hdev,
return hclge_is_rx_buf_ok(hdev, buf_alloc, rx_all);
}
+static int hclge_only_alloc_priv_buff(struct hclge_dev *hdev,
+ struct hclge_pkt_buf_alloc *buf_alloc)
+{
+#define COMPENSATE_BUFFER 0x3C00
+#define COMPENSATE_HALF_MPS_NUM 5
+#define PRIV_WL_GAP 0x1800
+
+ u32 rx_priv = hdev->pkt_buf_size - hclge_get_tx_buff_alloced(buf_alloc);
+ u32 tc_num = hclge_get_tc_num(hdev);
+ u32 half_mps = hdev->mps >> 1;
+ u32 min_rx_priv;
+ unsigned int i;
+
+ if (tc_num)
+ rx_priv = rx_priv / tc_num;
+
+ if (tc_num <= NEED_RESERVE_TC_NUM)
+ rx_priv = rx_priv * BUF_RESERVE_PERCENT / BUF_MAX_PERCENT;
+
+ min_rx_priv = hdev->dv_buf_size + COMPENSATE_BUFFER +
+ COMPENSATE_HALF_MPS_NUM * half_mps;
+ min_rx_priv = round_up(min_rx_priv, HCLGE_BUF_SIZE_UNIT);
+ rx_priv = round_down(rx_priv, HCLGE_BUF_SIZE_UNIT);
+
+ if (rx_priv < min_rx_priv)
+ return false;
+
+ for (i = 0; i < HCLGE_MAX_TC_NUM; i++) {
+ struct hclge_priv_buf *priv = &buf_alloc->priv_buf[i];
+
+ priv->enable = 0;
+ priv->wl.low = 0;
+ priv->wl.high = 0;
+ priv->buf_size = 0;
+
+ if (!(hdev->hw_tc_map & BIT(i)))
+ continue;
+
+ priv->enable = 1;
+ priv->buf_size = rx_priv;
+ priv->wl.high = rx_priv - hdev->dv_buf_size;
+ priv->wl.low = priv->wl.high - PRIV_WL_GAP;
+ }
+
+ buf_alloc->s_buf.buf_size = 0;
+
+ return true;
+}
+
/* hclge_rx_buffer_calc: calculate the rx private buffer size for all TCs
* @hdev: pointer to struct hclge_dev
* @buf_alloc: pointer to buffer calculation data
@@ -1856,6 +1913,9 @@ static int hclge_rx_buffer_calc(struct hclge_dev *hdev,
return 0;
}
+ if (hclge_only_alloc_priv_buff(hdev, buf_alloc))
+ return 0;
+
if (hclge_rx_buf_calc_all(hdev, true, buf_alloc))
return 0;
@@ -2153,7 +2213,6 @@ static int hclge_init_msi(struct hclge_dev *hdev)
static u8 hclge_check_speed_dup(u8 duplex, int speed)
{
-
if (!(speed == HCLGE_MAC_SPEED_10M || speed == HCLGE_MAC_SPEED_100M))
duplex = HCLGE_MAC_FULL;
@@ -2171,7 +2230,8 @@ static int hclge_cfg_mac_speed_dup_hw(struct hclge_dev *hdev, int speed,
hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_CONFIG_SPEED_DUP, false);
- hnae3_set_bit(req->speed_dup, HCLGE_CFG_DUPLEX_B, !!duplex);
+ if (duplex)
+ hnae3_set_bit(req->speed_dup, HCLGE_CFG_DUPLEX_B, 1);
switch (speed) {
case HCLGE_MAC_SPEED_10M:
@@ -2261,7 +2321,8 @@ static int hclge_set_autoneg_en(struct hclge_dev *hdev, bool enable)
hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_CONFIG_AN_MODE, false);
req = (struct hclge_config_auto_neg_cmd *)desc.data;
- hnae3_set_bit(flag, HCLGE_MAC_CFG_AN_EN_B, !!enable);
+ if (enable)
+ hnae3_set_bit(flag, HCLGE_MAC_CFG_AN_EN_B, 1U);
req->cfg_an_cmd_flag = cpu_to_le32(flag);
ret = hclge_cmd_send(&hdev->hw, &desc, 1);
@@ -2316,6 +2377,17 @@ static int hclge_restart_autoneg(struct hnae3_handle *handle)
return hclge_notify_client(hdev, HNAE3_UP_CLIENT);
}
+static int hclge_halt_autoneg(struct hnae3_handle *handle, bool halt)
+{
+ struct hclge_vport *vport = hclge_get_vport(handle);
+ struct hclge_dev *hdev = vport->back;
+
+ if (hdev->hw.mac.support_autoneg && hdev->hw.mac.autoneg)
+ return hclge_set_autoneg_en(hdev, !halt);
+
+ return 0;
+}
+
static int hclge_set_fec_hw(struct hclge_dev *hdev, u32 fec_mode)
{
struct hclge_config_fec_cmd *req;
@@ -2389,6 +2461,15 @@ static int hclge_mac_init(struct hclge_dev *hdev)
return ret;
}
+ if (hdev->hw.mac.support_autoneg) {
+ ret = hclge_set_autoneg_en(hdev, hdev->hw.mac.autoneg);
+ if (ret) {
+ dev_err(&hdev->pdev->dev,
+ "Config mac autoneg fail ret=%d\n", ret);
+ return ret;
+ }
+ }
+
mac->link = 0;
if (mac->user_fec_mode & BIT(HNAE3_FEC_USER_DEF)) {
@@ -2423,7 +2504,8 @@ static void hclge_mbx_task_schedule(struct hclge_dev *hdev)
static void hclge_reset_task_schedule(struct hclge_dev *hdev)
{
- if (!test_and_set_bit(HCLGE_STATE_RST_SERVICE_SCHED, &hdev->state))
+ if (!test_bit(HCLGE_STATE_REMOVING, &hdev->state) &&
+ !test_and_set_bit(HCLGE_STATE_RST_SERVICE_SCHED, &hdev->state))
schedule_work(&hdev->rst_service_task);
}
@@ -2458,7 +2540,7 @@ static int hclge_get_mac_link_status(struct hclge_dev *hdev)
static int hclge_get_mac_phy_link(struct hclge_dev *hdev)
{
- int mac_state;
+ unsigned int mac_state;
int link_stat;
if (test_bit(HCLGE_STATE_DOWN, &hdev->state))
@@ -2508,6 +2590,9 @@ static void hclge_update_link_status(struct hclge_dev *hdev)
static void hclge_update_port_capability(struct hclge_mac *mac)
{
+ /* update fec ability by speed */
+ hclge_convert_setting_fec(mac);
+
/* firmware can not identify back plane type, the media type
* read from configuration can help deal it
*/
@@ -2529,7 +2614,7 @@ static void hclge_update_port_capability(struct hclge_mac *mac)
static int hclge_get_sfp_speed(struct hclge_dev *hdev, u32 *speed)
{
- struct hclge_sfp_info_cmd *resp = NULL;
+ struct hclge_sfp_info_cmd *resp;
struct hclge_desc desc;
int ret;
@@ -2580,6 +2665,11 @@ static int hclge_get_sfp_info(struct hclge_dev *hdev, struct hclge_mac *mac)
mac->speed_ability = le32_to_cpu(resp->speed_ability);
mac->autoneg = resp->autoneg;
mac->support_autoneg = resp->autoneg_ability;
+ mac->speed_type = QUERY_ACTIVE_SPEED;
+ if (!resp->active_fec)
+ mac->fec_mode = 0;
+ else
+ mac->fec_mode = BIT(resp->active_fec);
} else {
mac->speed_type = QUERY_SFP_SPEED;
}
@@ -2645,6 +2735,7 @@ static void hclge_service_timer(struct timer_list *t)
mod_timer(&hdev->service_timer, jiffies + HZ);
hdev->hw_stats.stats_timer++;
+ hdev->fd_arfs_expire_timer++;
hclge_task_schedule(hdev);
}
@@ -2693,19 +2784,11 @@ static u32 hclge_check_event_cause(struct hclge_dev *hdev, u32 *clearval)
return HCLGE_VECTOR0_EVENT_RST;
}
- if (BIT(HCLGE_VECTOR0_CORERESET_INT_B) & rst_src_reg) {
- dev_info(&hdev->pdev->dev, "core reset interrupt\n");
- set_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state);
- set_bit(HNAE3_CORE_RESET, &hdev->reset_pending);
- *clearval = BIT(HCLGE_VECTOR0_CORERESET_INT_B);
- hdev->rst_stats.core_rst_cnt++;
- return HCLGE_VECTOR0_EVENT_RST;
- }
-
/* check for vector0 msix event source */
if (msix_src_reg & HCLGE_VECTOR0_REG_MSIX_MASK) {
- dev_dbg(&hdev->pdev->dev, "received event 0x%x\n",
- msix_src_reg);
+ dev_info(&hdev->pdev->dev, "received event 0x%x\n",
+ msix_src_reg);
+ *clearval = msix_src_reg;
return HCLGE_VECTOR0_EVENT_ERR;
}
@@ -2717,8 +2800,11 @@ static u32 hclge_check_event_cause(struct hclge_dev *hdev, u32 *clearval)
}
/* print other vector0 event source */
- dev_dbg(&hdev->pdev->dev, "cmdq_src_reg:0x%x, msix_src_reg:0x%x\n",
- cmdq_src_reg, msix_src_reg);
+ dev_info(&hdev->pdev->dev,
+ "CMDQ INT status:0x%x, other INT status:0x%x\n",
+ cmdq_src_reg, msix_src_reg);
+ *clearval = msix_src_reg;
+
return HCLGE_VECTOR0_EVENT_OTHER;
}
@@ -2754,8 +2840,8 @@ static void hclge_enable_vector(struct hclge_misc_vector *vector, bool enable)
static irqreturn_t hclge_misc_irq_handle(int irq, void *data)
{
struct hclge_dev *hdev = data;
+ u32 clearval = 0;
u32 event_cause;
- u32 clearval;
hclge_enable_vector(&hdev->misc_vector, false);
event_cause = hclge_check_event_cause(hdev, &clearval);
@@ -2797,7 +2883,8 @@ static irqreturn_t hclge_misc_irq_handle(int irq, void *data)
}
/* clear the source of interrupt if it is not cause by reset */
- if (event_cause == HCLGE_VECTOR0_EVENT_MBX) {
+ if (!clearval ||
+ event_cause == HCLGE_VECTOR0_EVENT_MBX) {
hclge_clear_event_cause(hdev, event_cause, clearval);
hclge_enable_vector(&hdev->misc_vector, true);
}
@@ -2861,6 +2948,9 @@ int hclge_notify_client(struct hclge_dev *hdev,
struct hnae3_client *client = hdev->nic_client;
u16 i;
+ if (!test_bit(HCLGE_STATE_NIC_REGISTERED, &hdev->state) || !client)
+ return 0;
+
if (!client->ops->reset_notify)
return -EOPNOTSUPP;
@@ -2886,7 +2976,7 @@ static int hclge_notify_roce_client(struct hclge_dev *hdev,
int ret = 0;
u16 i;
- if (!client)
+ if (!test_bit(HCLGE_STATE_ROCE_REGISTERED, &hdev->state) || !client)
return 0;
if (!client->ops->reset_notify)
@@ -2923,10 +3013,6 @@ static int hclge_reset_wait(struct hclge_dev *hdev)
reg = HCLGE_GLOBAL_RESET_REG;
reg_bit = HCLGE_GLOBAL_RESET_BIT;
break;
- case HNAE3_CORE_RESET:
- reg = HCLGE_GLOBAL_RESET_REG;
- reg_bit = HCLGE_CORE_RESET_BIT;
- break;
case HNAE3_FUNC_RESET:
reg = HCLGE_FUN_RST_ING;
reg_bit = HCLGE_FUN_RST_ING_B;
@@ -3058,12 +3144,6 @@ static void hclge_do_reset(struct hclge_dev *hdev)
hclge_write_dev(&hdev->hw, HCLGE_GLOBAL_RESET_REG, val);
dev_info(&pdev->dev, "Global Reset requested\n");
break;
- case HNAE3_CORE_RESET:
- val = hclge_read_dev(&hdev->hw, HCLGE_GLOBAL_RESET_REG);
- hnae3_set_bit(val, HCLGE_CORE_RESET_BIT, 1);
- hclge_write_dev(&hdev->hw, HCLGE_GLOBAL_RESET_REG, val);
- dev_info(&pdev->dev, "Core Reset requested\n");
- break;
case HNAE3_FUNC_RESET:
dev_info(&pdev->dev, "PF Reset requested\n");
/* schedule again to check later */
@@ -3083,10 +3163,11 @@ static void hclge_do_reset(struct hclge_dev *hdev)
}
}
-static enum hnae3_reset_type hclge_get_reset_level(struct hclge_dev *hdev,
+static enum hnae3_reset_type hclge_get_reset_level(struct hnae3_ae_dev *ae_dev,
unsigned long *addr)
{
enum hnae3_reset_type rst_level = HNAE3_NONE_RESET;
+ struct hclge_dev *hdev = ae_dev->priv;
/* first, resolve any unknown reset type to the known type(s) */
if (test_bit(HNAE3_UNKNOWN_RESET, addr)) {
@@ -3110,16 +3191,10 @@ static enum hnae3_reset_type hclge_get_reset_level(struct hclge_dev *hdev,
rst_level = HNAE3_IMP_RESET;
clear_bit(HNAE3_IMP_RESET, addr);
clear_bit(HNAE3_GLOBAL_RESET, addr);
- clear_bit(HNAE3_CORE_RESET, addr);
clear_bit(HNAE3_FUNC_RESET, addr);
} else if (test_bit(HNAE3_GLOBAL_RESET, addr)) {
rst_level = HNAE3_GLOBAL_RESET;
clear_bit(HNAE3_GLOBAL_RESET, addr);
- clear_bit(HNAE3_CORE_RESET, addr);
- clear_bit(HNAE3_FUNC_RESET, addr);
- } else if (test_bit(HNAE3_CORE_RESET, addr)) {
- rst_level = HNAE3_CORE_RESET;
- clear_bit(HNAE3_CORE_RESET, addr);
clear_bit(HNAE3_FUNC_RESET, addr);
} else if (test_bit(HNAE3_FUNC_RESET, addr)) {
rst_level = HNAE3_FUNC_RESET;
@@ -3147,9 +3222,6 @@ static void hclge_clear_reset_cause(struct hclge_dev *hdev)
case HNAE3_GLOBAL_RESET:
clearval = BIT(HCLGE_VECTOR0_GLOBALRESET_INT_B);
break;
- case HNAE3_CORE_RESET:
- clearval = BIT(HCLGE_VECTOR0_CORERESET_INT_B);
- break;
default:
break;
}
@@ -3180,6 +3252,8 @@ static int hclge_reset_prepare_down(struct hclge_dev *hdev)
static int hclge_reset_prepare_wait(struct hclge_dev *hdev)
{
+#define HCLGE_RESET_SYNC_TIME 100
+
u32 reg_val;
int ret = 0;
@@ -3188,7 +3262,7 @@ static int hclge_reset_prepare_wait(struct hclge_dev *hdev)
/* There is no mechanism for PF to know if VF has stopped IO
* for now, just wait 100 ms for VF to stop IO
*/
- msleep(100);
+ msleep(HCLGE_RESET_SYNC_TIME);
ret = hclge_func_reset_cmd(hdev, 0);
if (ret) {
dev_err(&hdev->pdev->dev,
@@ -3208,7 +3282,7 @@ static int hclge_reset_prepare_wait(struct hclge_dev *hdev)
/* There is no mechanism for PF to know if VF has stopped IO
* for now, just wait 100 ms for VF to stop IO
*/
- msleep(100);
+ msleep(HCLGE_RESET_SYNC_TIME);
set_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state);
set_bit(HNAE3_FLR_DOWN, &hdev->flr_state);
hdev->rst_stats.flr_rst_cnt++;
@@ -3222,6 +3296,10 @@ static int hclge_reset_prepare_wait(struct hclge_dev *hdev)
break;
}
+ /* inform hardware that preparatory work is done */
+ msleep(HCLGE_RESET_SYNC_TIME);
+ hclge_write_dev(&hdev->hw, HCLGE_NIC_CSQ_DEPTH_REG,
+ HCLGE_NIC_CMQ_ENABLE);
dev_info(&hdev->pdev->dev, "prepare wait ok\n");
return ret;
@@ -3230,7 +3308,6 @@ static int hclge_reset_prepare_wait(struct hclge_dev *hdev)
static bool hclge_reset_err_handle(struct hclge_dev *hdev, bool is_timeout)
{
#define MAX_RESET_FAIL_CNT 5
-#define RESET_UPGRADE_DELAY_SEC 10
if (hdev->reset_pending) {
dev_info(&hdev->pdev->dev, "Reset pending %lu\n",
@@ -3254,8 +3331,9 @@ static bool hclge_reset_err_handle(struct hclge_dev *hdev, bool is_timeout)
dev_info(&hdev->pdev->dev, "Upgrade reset level\n");
hclge_clear_reset_cause(hdev);
+ set_bit(HNAE3_GLOBAL_RESET, &hdev->default_reset_request);
mod_timer(&hdev->reset_timer,
- jiffies + RESET_UPGRADE_DELAY_SEC * HZ);
+ jiffies + HCLGE_RESET_INTERVAL);
return false;
}
@@ -3282,6 +3360,25 @@ static int hclge_reset_prepare_up(struct hclge_dev *hdev)
return ret;
}
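+/* helper factored out of hclge_reset(): uninit the NIC client, reset the
+ * ae_dev, then init and restore the NIC client again
+ */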
+static int hclge_reset_stack(struct hclge_dev *hdev)
+{
+ int ret;
+
+ ret = hclge_notify_client(hdev, HNAE3_UNINIT_CLIENT);
+ if (ret)
+ return ret;
+
+ ret = hclge_reset_ae_dev(hdev->ae_dev);
+ if (ret)
+ return ret;
+
+ ret = hclge_notify_client(hdev, HNAE3_INIT_CLIENT);
+ if (ret)
+ return ret;
+
+ return hclge_notify_client(hdev, HNAE3_RESTORE_CLIENT);
+}
+
static void hclge_reset(struct hclge_dev *hdev)
{
struct hnae3_ae_dev *ae_dev = pci_get_drvdata(hdev->pdev);
@@ -3325,19 +3422,8 @@ static void hclge_reset(struct hclge_dev *hdev)
goto err_reset;
rtnl_lock();
- ret = hclge_notify_client(hdev, HNAE3_UNINIT_CLIENT);
- if (ret)
- goto err_reset_lock;
- ret = hclge_reset_ae_dev(hdev->ae_dev);
- if (ret)
- goto err_reset_lock;
-
- ret = hclge_notify_client(hdev, HNAE3_INIT_CLIENT);
- if (ret)
- goto err_reset_lock;
-
- ret = hclge_notify_client(hdev, HNAE3_RESTORE_CLIENT);
+ ret = hclge_reset_stack(hdev);
if (ret)
goto err_reset_lock;
@@ -3347,16 +3433,23 @@ static void hclge_reset(struct hclge_dev *hdev)
if (ret)
goto err_reset_lock;
+ rtnl_unlock();
+
+ ret = hclge_notify_roce_client(hdev, HNAE3_INIT_CLIENT);
+ /* ignore the RoCE notify error once the reset has already failed
+ * HCLGE_RESET_MAX_FAIL_CNT - 1 times
+ */
+ if (ret && hdev->reset_fail_cnt < HCLGE_RESET_MAX_FAIL_CNT - 1)
+ goto err_reset;
+
+ rtnl_lock();
+
ret = hclge_notify_client(hdev, HNAE3_UP_CLIENT);
if (ret)
goto err_reset_lock;
rtnl_unlock();
- ret = hclge_notify_roce_client(hdev, HNAE3_INIT_CLIENT);
- if (ret)
- goto err_reset;
-
ret = hclge_notify_roce_client(hdev, HNAE3_UP_CLIENT);
if (ret)
goto err_reset;
@@ -3399,11 +3492,12 @@ static void hclge_reset_event(struct pci_dev *pdev, struct hnae3_handle *handle)
if (!handle)
handle = &hdev->vport[0].nic;
- if (time_before(jiffies, (hdev->last_reset_time + 3 * HZ)))
+ if (time_before(jiffies, (hdev->last_reset_time +
+ HCLGE_RESET_INTERVAL)))
return;
else if (hdev->default_reset_request)
hdev->reset_level =
- hclge_get_reset_level(hdev,
+ hclge_get_reset_level(ae_dev,
&hdev->default_reset_request);
else if (time_after(jiffies, (hdev->last_reset_time + 4 * 5 * HZ)))
hdev->reset_level = HNAE3_FUNC_RESET;
@@ -3432,13 +3526,14 @@ static void hclge_reset_timer(struct timer_list *t)
struct hclge_dev *hdev = from_timer(hdev, t, reset_timer);
dev_info(&hdev->pdev->dev,
- "triggering global reset in reset timer\n");
- set_bit(HNAE3_GLOBAL_RESET, &hdev->default_reset_request);
+ "triggering reset in reset timer\n");
hclge_reset_event(hdev->pdev, NULL);
}
static void hclge_reset_subtask(struct hclge_dev *hdev)
{
+ struct hnae3_ae_dev *ae_dev = pci_get_drvdata(hdev->pdev);
+
/* check if there is any ongoing reset in the hardware. This status can
* be checked from reset_pending. If there is then, we need to wait for
* hardware to complete reset.
@@ -3449,12 +3544,12 @@ static void hclge_reset_subtask(struct hclge_dev *hdev)
* now.
*/
hdev->last_reset_time = jiffies;
- hdev->reset_type = hclge_get_reset_level(hdev, &hdev->reset_pending);
+ hdev->reset_type = hclge_get_reset_level(ae_dev, &hdev->reset_pending);
if (hdev->reset_type != HNAE3_NONE_RESET)
hclge_reset(hdev);
/* check if we got any *new* reset requests to be honored */
- hdev->reset_type = hclge_get_reset_level(hdev, &hdev->reset_request);
+ hdev->reset_type = hclge_get_reset_level(ae_dev, &hdev->reset_request);
if (hdev->reset_type != HNAE3_NONE_RESET)
hclge_do_reset(hdev);
@@ -3521,6 +3616,11 @@ static void hclge_service_task(struct work_struct *work)
hclge_update_port_info(hdev);
hclge_update_link_status(hdev);
hclge_update_vport_alive(hdev);
+ hclge_sync_vlan_filter(hdev);
+ if (hdev->fd_arfs_expire_timer >= HCLGE_FD_ARFS_EXPIRE_TIMER_INTERVAL) {
+ hclge_rfs_filter_expire(hdev);
+ hdev->fd_arfs_expire_timer = 0;
+ }
hclge_service_complete(hdev);
}
@@ -3614,29 +3714,28 @@ static int hclge_set_rss_algo_key(struct hclge_dev *hdev,
const u8 hfunc, const u8 *key)
{
struct hclge_rss_config_cmd *req;
+ unsigned int key_offset = 0;
struct hclge_desc desc;
- int key_offset;
+ int key_counts;
int key_size;
int ret;
+ key_counts = HCLGE_RSS_KEY_SIZE;
req = (struct hclge_rss_config_cmd *)desc.data;
- for (key_offset = 0; key_offset < 3; key_offset++) {
+ while (key_counts) {
hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_RSS_GENERIC_CONFIG,
false);
req->hash_config |= (hfunc & HCLGE_RSS_HASH_ALGO_MASK);
req->hash_config |= (key_offset << HCLGE_RSS_HASH_KEY_OFFSET_B);
- if (key_offset == 2)
- key_size =
- HCLGE_RSS_KEY_SIZE - HCLGE_RSS_HASH_KEY_NUM * 2;
- else
- key_size = HCLGE_RSS_HASH_KEY_NUM;
-
+ key_size = min(HCLGE_RSS_HASH_KEY_NUM, key_counts);
memcpy(req->hash_key,
key + key_offset * HCLGE_RSS_HASH_KEY_NUM, key_size);
+ key_counts -= key_size;
+ key_offset++;
ret = hclge_cmd_send(&hdev->hw, &desc, 1);
if (ret) {
dev_err(&hdev->pdev->dev,
@@ -3995,13 +4094,14 @@ int hclge_rss_init_hw(struct hclge_dev *hdev)
struct hclge_vport *vport = hdev->vport;
u8 *rss_indir = vport[0].rss_indirection_tbl;
u16 rss_size = vport[0].alloc_rss_size;
+ u16 tc_offset[HCLGE_MAX_TC_NUM] = {0};
+ u16 tc_size[HCLGE_MAX_TC_NUM] = {0};
u8 *key = vport[0].rss_hash_key;
u8 hfunc = vport[0].rss_algo;
- u16 tc_offset[HCLGE_MAX_TC_NUM];
u16 tc_valid[HCLGE_MAX_TC_NUM];
- u16 tc_size[HCLGE_MAX_TC_NUM];
u16 roundup_size;
- int i, ret;
+ unsigned int i;
+ int ret;
ret = hclge_set_rss_indir_table(hdev, rss_indir);
if (ret)
@@ -4156,8 +4256,7 @@ int hclge_bind_ring_with_vector(struct hclge_vport *vport,
return 0;
}
-static int hclge_map_ring_to_vector(struct hnae3_handle *handle,
- int vector,
+static int hclge_map_ring_to_vector(struct hnae3_handle *handle, int vector,
struct hnae3_ring_chain_node *ring_chain)
{
struct hclge_vport *vport = hclge_get_vport(handle);
@@ -4174,8 +4273,7 @@ static int hclge_map_ring_to_vector(struct hnae3_handle *handle,
return hclge_bind_ring_with_vector(vport, vector_id, true, ring_chain);
}
-static int hclge_unmap_ring_frm_vector(struct hnae3_handle *handle,
- int vector,
+static int hclge_unmap_ring_frm_vector(struct hnae3_handle *handle, int vector,
struct hnae3_ring_chain_node *ring_chain)
{
struct hclge_vport *vport = hclge_get_vport(handle);
@@ -4196,8 +4294,7 @@ static int hclge_unmap_ring_frm_vector(struct hnae3_handle *handle,
if (ret)
dev_err(&handle->pdev->dev,
"Unmap ring from vector fail. vectorid=%d, ret =%d\n",
- vector_id,
- ret);
+ vector_id, ret);
return ret;
}
@@ -4503,19 +4600,19 @@ static bool hclge_fd_convert_tuple(u32 tuple_bit, u8 *key_x, u8 *key_y,
case 0:
return false;
case BIT(INNER_DST_MAC):
- for (i = 0; i < 6; i++) {
- calc_x(key_x[5 - i], rule->tuples.dst_mac[i],
+ for (i = 0; i < ETH_ALEN; i++) {
+ calc_x(key_x[ETH_ALEN - 1 - i], rule->tuples.dst_mac[i],
rule->tuples_mask.dst_mac[i]);
- calc_y(key_y[5 - i], rule->tuples.dst_mac[i],
+ calc_y(key_y[ETH_ALEN - 1 - i], rule->tuples.dst_mac[i],
rule->tuples_mask.dst_mac[i]);
}
return true;
case BIT(INNER_SRC_MAC):
- for (i = 0; i < 6; i++) {
- calc_x(key_x[5 - i], rule->tuples.src_mac[i],
+ for (i = 0; i < ETH_ALEN; i++) {
+ calc_x(key_x[ETH_ALEN - 1 - i], rule->tuples.src_mac[i],
rule->tuples.src_mac[i]);
- calc_y(key_y[5 - i], rule->tuples.src_mac[i],
+ calc_y(key_y[ETH_ALEN - 1 - i], rule->tuples.src_mac[i],
rule->tuples.src_mac[i]);
}
@@ -4551,19 +4648,19 @@ static bool hclge_fd_convert_tuple(u32 tuple_bit, u8 *key_x, u8 *key_y,
return true;
case BIT(INNER_SRC_IP):
- calc_x(tmp_x_l, rule->tuples.src_ip[3],
- rule->tuples_mask.src_ip[3]);
- calc_y(tmp_y_l, rule->tuples.src_ip[3],
- rule->tuples_mask.src_ip[3]);
+ calc_x(tmp_x_l, rule->tuples.src_ip[IPV4_INDEX],
+ rule->tuples_mask.src_ip[IPV4_INDEX]);
+ calc_y(tmp_y_l, rule->tuples.src_ip[IPV4_INDEX],
+ rule->tuples_mask.src_ip[IPV4_INDEX]);
*(__le32 *)key_x = cpu_to_le32(tmp_x_l);
*(__le32 *)key_y = cpu_to_le32(tmp_y_l);
return true;
case BIT(INNER_DST_IP):
- calc_x(tmp_x_l, rule->tuples.dst_ip[3],
- rule->tuples_mask.dst_ip[3]);
- calc_y(tmp_y_l, rule->tuples.dst_ip[3],
- rule->tuples_mask.dst_ip[3]);
+ calc_x(tmp_x_l, rule->tuples.dst_ip[IPV4_INDEX],
+ rule->tuples_mask.dst_ip[IPV4_INDEX]);
+ calc_y(tmp_y_l, rule->tuples.dst_ip[IPV4_INDEX],
+ rule->tuples_mask.dst_ip[IPV4_INDEX]);
*(__le32 *)key_x = cpu_to_le32(tmp_x_l);
*(__le32 *)key_y = cpu_to_le32(tmp_y_l);
@@ -4617,7 +4714,7 @@ static void hclge_fd_convert_meta_data(struct hclge_fd_key_cfg *key_cfg,
{
u32 tuple_bit, meta_data = 0, tmp_x, tmp_y, port_number;
u8 cur_pos = 0, tuple_size, shift_bits;
- int i;
+ unsigned int i;
for (i = 0; i < MAX_META_DATA; i++) {
tuple_size = meta_data_key_info[i].key_length;
@@ -4659,7 +4756,8 @@ static int hclge_config_key(struct hclge_dev *hdev, u8 stage,
struct hclge_fd_key_cfg *key_cfg = &hdev->fd_cfg.key_cfg[stage];
u8 key_x[MAX_KEY_BYTES], key_y[MAX_KEY_BYTES];
u8 *cur_key_x, *cur_key_y;
- int i, ret, tuple_size;
+ unsigned int i;
+ int ret, tuple_size;
u8 meta_data_region;
memset(key_x, 0, sizeof(key_x));
@@ -4812,6 +4910,7 @@ static int hclge_fd_check_spec(struct hclge_dev *hdev,
*unused |= BIT(INNER_SRC_MAC) | BIT(INNER_DST_MAC) |
BIT(INNER_IP_TOS);
+ /* check whether src/dst ip address is used */
if (!tcp_ip6_spec->ip6src[0] && !tcp_ip6_spec->ip6src[1] &&
!tcp_ip6_spec->ip6src[2] && !tcp_ip6_spec->ip6src[3])
*unused |= BIT(INNER_SRC_IP);
@@ -4836,6 +4935,7 @@ static int hclge_fd_check_spec(struct hclge_dev *hdev,
BIT(INNER_IP_TOS) | BIT(INNER_SRC_PORT) |
BIT(INNER_DST_PORT);
+ /* check whether src/dst ip address is used */
if (!usr_ip6_spec->ip6src[0] && !usr_ip6_spec->ip6src[1] &&
!usr_ip6_spec->ip6src[2] && !usr_ip6_spec->ip6src[3])
*unused |= BIT(INNER_SRC_IP);
@@ -4906,14 +5006,18 @@ static bool hclge_fd_rule_exist(struct hclge_dev *hdev, u16 location)
struct hclge_fd_rule *rule = NULL;
struct hlist_node *node2;
+ spin_lock_bh(&hdev->fd_rule_lock);
hlist_for_each_entry_safe(rule, node2, &hdev->fd_rule_list, rule_node) {
if (rule->location >= location)
break;
}
+ spin_unlock_bh(&hdev->fd_rule_lock);
+
return rule && rule->location == location;
}
+/* must be called with fd_rule_lock held */
static int hclge_fd_update_rule_list(struct hclge_dev *hdev,
struct hclge_fd_rule *new_rule,
u16 location,
@@ -4937,9 +5041,13 @@ static int hclge_fd_update_rule_list(struct hclge_dev *hdev,
kfree(rule);
hdev->hclge_fd_rule_num--;
- if (!is_add)
- return 0;
+ if (!is_add) {
+ if (!hdev->hclge_fd_rule_num)
+ hdev->fd_active_type = HCLGE_FD_RULE_NONE;
+ clear_bit(location, hdev->fd_bmap);
+ return 0;
+ }
} else if (!is_add) {
dev_err(&hdev->pdev->dev,
"delete fail, rule %d is inexistent\n",
@@ -4954,7 +5062,9 @@ static int hclge_fd_update_rule_list(struct hclge_dev *hdev,
else
hlist_add_head(&new_rule->rule_node, &hdev->fd_rule_list);
+ set_bit(location, hdev->fd_bmap);
hdev->hclge_fd_rule_num++;
+ hdev->fd_active_type = new_rule->rule_type;
return 0;
}
@@ -4969,14 +5079,14 @@ static int hclge_fd_get_tuple(struct hclge_dev *hdev,
case SCTP_V4_FLOW:
case TCP_V4_FLOW:
case UDP_V4_FLOW:
- rule->tuples.src_ip[3] =
+ rule->tuples.src_ip[IPV4_INDEX] =
be32_to_cpu(fs->h_u.tcp_ip4_spec.ip4src);
- rule->tuples_mask.src_ip[3] =
+ rule->tuples_mask.src_ip[IPV4_INDEX] =
be32_to_cpu(fs->m_u.tcp_ip4_spec.ip4src);
- rule->tuples.dst_ip[3] =
+ rule->tuples.dst_ip[IPV4_INDEX] =
be32_to_cpu(fs->h_u.tcp_ip4_spec.ip4dst);
- rule->tuples_mask.dst_ip[3] =
+ rule->tuples_mask.dst_ip[IPV4_INDEX] =
be32_to_cpu(fs->m_u.tcp_ip4_spec.ip4dst);
rule->tuples.src_port = be16_to_cpu(fs->h_u.tcp_ip4_spec.psrc);
@@ -4995,14 +5105,14 @@ static int hclge_fd_get_tuple(struct hclge_dev *hdev,
break;
case IP_USER_FLOW:
- rule->tuples.src_ip[3] =
+ rule->tuples.src_ip[IPV4_INDEX] =
be32_to_cpu(fs->h_u.usr_ip4_spec.ip4src);
- rule->tuples_mask.src_ip[3] =
+ rule->tuples_mask.src_ip[IPV4_INDEX] =
be32_to_cpu(fs->m_u.usr_ip4_spec.ip4src);
- rule->tuples.dst_ip[3] =
+ rule->tuples.dst_ip[IPV4_INDEX] =
be32_to_cpu(fs->h_u.usr_ip4_spec.ip4dst);
- rule->tuples_mask.dst_ip[3] =
+ rule->tuples_mask.dst_ip[IPV4_INDEX] =
be32_to_cpu(fs->m_u.usr_ip4_spec.ip4dst);
rule->tuples.ip_tos = fs->h_u.usr_ip4_spec.tos;
@@ -5019,14 +5129,14 @@ static int hclge_fd_get_tuple(struct hclge_dev *hdev,
case TCP_V6_FLOW:
case UDP_V6_FLOW:
be32_to_cpu_array(rule->tuples.src_ip,
- fs->h_u.tcp_ip6_spec.ip6src, 4);
+ fs->h_u.tcp_ip6_spec.ip6src, IPV6_SIZE);
be32_to_cpu_array(rule->tuples_mask.src_ip,
- fs->m_u.tcp_ip6_spec.ip6src, 4);
+ fs->m_u.tcp_ip6_spec.ip6src, IPV6_SIZE);
be32_to_cpu_array(rule->tuples.dst_ip,
- fs->h_u.tcp_ip6_spec.ip6dst, 4);
+ fs->h_u.tcp_ip6_spec.ip6dst, IPV6_SIZE);
be32_to_cpu_array(rule->tuples_mask.dst_ip,
- fs->m_u.tcp_ip6_spec.ip6dst, 4);
+ fs->m_u.tcp_ip6_spec.ip6dst, IPV6_SIZE);
rule->tuples.src_port = be16_to_cpu(fs->h_u.tcp_ip6_spec.psrc);
rule->tuples_mask.src_port =
@@ -5042,14 +5152,14 @@ static int hclge_fd_get_tuple(struct hclge_dev *hdev,
break;
case IPV6_USER_FLOW:
be32_to_cpu_array(rule->tuples.src_ip,
- fs->h_u.usr_ip6_spec.ip6src, 4);
+ fs->h_u.usr_ip6_spec.ip6src, IPV6_SIZE);
be32_to_cpu_array(rule->tuples_mask.src_ip,
- fs->m_u.usr_ip6_spec.ip6src, 4);
+ fs->m_u.usr_ip6_spec.ip6src, IPV6_SIZE);
be32_to_cpu_array(rule->tuples.dst_ip,
- fs->h_u.usr_ip6_spec.ip6dst, 4);
+ fs->h_u.usr_ip6_spec.ip6dst, IPV6_SIZE);
be32_to_cpu_array(rule->tuples_mask.dst_ip,
- fs->m_u.usr_ip6_spec.ip6dst, 4);
+ fs->m_u.usr_ip6_spec.ip6dst, IPV6_SIZE);
rule->tuples.ip_proto = fs->h_u.usr_ip6_spec.l4_proto;
rule->tuples_mask.ip_proto = fs->m_u.usr_ip6_spec.l4_proto;
@@ -5112,6 +5222,36 @@ static int hclge_fd_get_tuple(struct hclge_dev *hdev,
return 0;
}
+/* must be called with fd_rule_lock held */
+static int hclge_fd_config_rule(struct hclge_dev *hdev,
+ struct hclge_fd_rule *rule)
+{
+ int ret;
+
+ if (!rule) {
+ dev_err(&hdev->pdev->dev,
+ "The flow director rule is NULL\n");
+ return -EINVAL;
+ }
+
+ /* this never fails here, so there is no need to check the return value */
+ hclge_fd_update_rule_list(hdev, rule, rule->location, true);
+
+ ret = hclge_config_action(hdev, HCLGE_FD_STAGE_1, rule);
+ if (ret)
+ goto clear_rule;
+
+ ret = hclge_config_key(hdev, HCLGE_FD_STAGE_1, rule);
+ if (ret)
+ goto clear_rule;
+
+ return 0;
+
+clear_rule:
+ hclge_fd_update_rule_list(hdev, rule, rule->location, false);
+ return ret;
+}
+
static int hclge_add_fd_entry(struct hnae3_handle *handle,
struct ethtool_rxnfc *cmd)
{
@@ -5174,8 +5314,10 @@ static int hclge_add_fd_entry(struct hnae3_handle *handle,
return -ENOMEM;
ret = hclge_fd_get_tuple(hdev, fs, rule);
- if (ret)
- goto free_rule;
+ if (ret) {
+ kfree(rule);
+ return ret;
+ }
rule->flow_type = fs->flow_type;
@@ -5184,24 +5326,19 @@ static int hclge_add_fd_entry(struct hnae3_handle *handle,
rule->vf_id = dst_vport_id;
rule->queue_id = q_index;
rule->action = action;
+ rule->rule_type = HCLGE_FD_EP_ACTIVE;
- ret = hclge_config_action(hdev, HCLGE_FD_STAGE_1, rule);
- if (ret)
- goto free_rule;
+ /* to avoid rule conflicts, clear all arfs rules when the user
+ * configures a rule through ethtool
+ */
+ hclge_clear_arfs_rules(handle);
- ret = hclge_config_key(hdev, HCLGE_FD_STAGE_1, rule);
- if (ret)
- goto free_rule;
+ spin_lock_bh(&hdev->fd_rule_lock);
+ ret = hclge_fd_config_rule(hdev, rule);
- ret = hclge_fd_update_rule_list(hdev, rule, fs->location, true);
- if (ret)
- goto free_rule;
+ spin_unlock_bh(&hdev->fd_rule_lock);
return ret;
-
-free_rule:
- kfree(rule);
- return ret;
}
static int hclge_del_fd_entry(struct hnae3_handle *handle,
@@ -5222,18 +5359,21 @@ static int hclge_del_fd_entry(struct hnae3_handle *handle,
if (!hclge_fd_rule_exist(hdev, fs->location)) {
dev_err(&hdev->pdev->dev,
- "Delete fail, rule %d is inexistent\n",
- fs->location);
+ "Delete fail, rule %d is inexistent\n", fs->location);
return -ENOENT;
}
- ret = hclge_fd_tcam_config(hdev, HCLGE_FD_STAGE_1, true,
- fs->location, NULL, false);
+ ret = hclge_fd_tcam_config(hdev, HCLGE_FD_STAGE_1, true, fs->location,
+ NULL, false);
if (ret)
return ret;
- return hclge_fd_update_rule_list(hdev, NULL, fs->location,
- false);
+ spin_lock_bh(&hdev->fd_rule_lock);
+ ret = hclge_fd_update_rule_list(hdev, NULL, fs->location, false);
+
+ spin_unlock_bh(&hdev->fd_rule_lock);
+
+ return ret;
}
static void hclge_del_all_fd_entries(struct hnae3_handle *handle,
@@ -5243,25 +5383,30 @@ static void hclge_del_all_fd_entries(struct hnae3_handle *handle,
struct hclge_dev *hdev = vport->back;
struct hclge_fd_rule *rule;
struct hlist_node *node;
+ u16 location;
if (!hnae3_dev_fd_supported(hdev))
return;
+ spin_lock_bh(&hdev->fd_rule_lock);
+ for_each_set_bit(location, hdev->fd_bmap,
+ hdev->fd_cfg.rule_num[HCLGE_FD_STAGE_1])
+ hclge_fd_tcam_config(hdev, HCLGE_FD_STAGE_1, true, location,
+ NULL, false);
+
if (clear_list) {
hlist_for_each_entry_safe(rule, node, &hdev->fd_rule_list,
rule_node) {
- hclge_fd_tcam_config(hdev, HCLGE_FD_STAGE_1, true,
- rule->location, NULL, false);
hlist_del(&rule->rule_node);
kfree(rule);
- hdev->hclge_fd_rule_num--;
}
- } else {
- hlist_for_each_entry_safe(rule, node, &hdev->fd_rule_list,
- rule_node)
- hclge_fd_tcam_config(hdev, HCLGE_FD_STAGE_1, true,
- rule->location, NULL, false);
+ hdev->fd_active_type = HCLGE_FD_RULE_NONE;
+ hdev->hclge_fd_rule_num = 0;
+ bitmap_zero(hdev->fd_bmap,
+ hdev->fd_cfg.rule_num[HCLGE_FD_STAGE_1]);
}
+
+ spin_unlock_bh(&hdev->fd_rule_lock);
}
static int hclge_restore_fd_entries(struct hnae3_handle *handle)
@@ -5283,6 +5428,7 @@ static int hclge_restore_fd_entries(struct hnae3_handle *handle)
if (!hdev->fd_en)
return 0;
+ spin_lock_bh(&hdev->fd_rule_lock);
hlist_for_each_entry_safe(rule, node, &hdev->fd_rule_list, rule_node) {
ret = hclge_config_action(hdev, HCLGE_FD_STAGE_1, rule);
if (!ret)
@@ -5292,11 +5438,18 @@ static int hclge_restore_fd_entries(struct hnae3_handle *handle)
dev_warn(&hdev->pdev->dev,
"Restore rule %d failed, remove it\n",
rule->location);
+ clear_bit(rule->location, hdev->fd_bmap);
hlist_del(&rule->rule_node);
kfree(rule);
hdev->hclge_fd_rule_num--;
}
}
+
+ if (hdev->hclge_fd_rule_num)
+ hdev->fd_active_type = HCLGE_FD_EP_ACTIVE;
+
+ spin_unlock_bh(&hdev->fd_rule_lock);
+
return 0;
}
@@ -5329,13 +5482,18 @@ static int hclge_get_fd_rule_info(struct hnae3_handle *handle,
fs = (struct ethtool_rx_flow_spec *)&cmd->fs;
+ spin_lock_bh(&hdev->fd_rule_lock);
+
hlist_for_each_entry_safe(rule, node2, &hdev->fd_rule_list, rule_node) {
if (rule->location >= fs->location)
break;
}
- if (!rule || fs->location != rule->location)
+ if (!rule || fs->location != rule->location) {
+ spin_unlock_bh(&hdev->fd_rule_lock);
+
return -ENOENT;
+ }
fs->flow_type = rule->flow_type;
switch (fs->flow_type & ~(FLOW_EXT | FLOW_MAC_EXT)) {
@@ -5343,16 +5501,16 @@ static int hclge_get_fd_rule_info(struct hnae3_handle *handle,
case TCP_V4_FLOW:
case UDP_V4_FLOW:
fs->h_u.tcp_ip4_spec.ip4src =
- cpu_to_be32(rule->tuples.src_ip[3]);
+ cpu_to_be32(rule->tuples.src_ip[IPV4_INDEX]);
fs->m_u.tcp_ip4_spec.ip4src =
- rule->unused_tuple & BIT(INNER_SRC_IP) ?
- 0 : cpu_to_be32(rule->tuples_mask.src_ip[3]);
+ rule->unused_tuple & BIT(INNER_SRC_IP) ?
+ 0 : cpu_to_be32(rule->tuples_mask.src_ip[IPV4_INDEX]);
fs->h_u.tcp_ip4_spec.ip4dst =
- cpu_to_be32(rule->tuples.dst_ip[3]);
+ cpu_to_be32(rule->tuples.dst_ip[IPV4_INDEX]);
fs->m_u.tcp_ip4_spec.ip4dst =
- rule->unused_tuple & BIT(INNER_DST_IP) ?
- 0 : cpu_to_be32(rule->tuples_mask.dst_ip[3]);
+ rule->unused_tuple & BIT(INNER_DST_IP) ?
+ 0 : cpu_to_be32(rule->tuples_mask.dst_ip[IPV4_INDEX]);
fs->h_u.tcp_ip4_spec.psrc = cpu_to_be16(rule->tuples.src_port);
fs->m_u.tcp_ip4_spec.psrc =
@@ -5372,16 +5530,16 @@ static int hclge_get_fd_rule_info(struct hnae3_handle *handle,
break;
case IP_USER_FLOW:
fs->h_u.usr_ip4_spec.ip4src =
- cpu_to_be32(rule->tuples.src_ip[3]);
+ cpu_to_be32(rule->tuples.src_ip[IPV4_INDEX]);
fs->m_u.tcp_ip4_spec.ip4src =
- rule->unused_tuple & BIT(INNER_SRC_IP) ?
- 0 : cpu_to_be32(rule->tuples_mask.src_ip[3]);
+ rule->unused_tuple & BIT(INNER_SRC_IP) ?
+ 0 : cpu_to_be32(rule->tuples_mask.src_ip[IPV4_INDEX]);
fs->h_u.usr_ip4_spec.ip4dst =
- cpu_to_be32(rule->tuples.dst_ip[3]);
+ cpu_to_be32(rule->tuples.dst_ip[IPV4_INDEX]);
fs->m_u.usr_ip4_spec.ip4dst =
- rule->unused_tuple & BIT(INNER_DST_IP) ?
- 0 : cpu_to_be32(rule->tuples_mask.dst_ip[3]);
+ rule->unused_tuple & BIT(INNER_DST_IP) ?
+ 0 : cpu_to_be32(rule->tuples_mask.dst_ip[IPV4_INDEX]);
fs->h_u.usr_ip4_spec.tos = rule->tuples.ip_tos;
fs->m_u.usr_ip4_spec.tos =
@@ -5400,20 +5558,22 @@ static int hclge_get_fd_rule_info(struct hnae3_handle *handle,
case TCP_V6_FLOW:
case UDP_V6_FLOW:
cpu_to_be32_array(fs->h_u.tcp_ip6_spec.ip6src,
- rule->tuples.src_ip, 4);
+ rule->tuples.src_ip, IPV6_SIZE);
if (rule->unused_tuple & BIT(INNER_SRC_IP))
- memset(fs->m_u.tcp_ip6_spec.ip6src, 0, sizeof(int) * 4);
+ memset(fs->m_u.tcp_ip6_spec.ip6src, 0,
+ sizeof(int) * IPV6_SIZE);
else
cpu_to_be32_array(fs->m_u.tcp_ip6_spec.ip6src,
- rule->tuples_mask.src_ip, 4);
+ rule->tuples_mask.src_ip, IPV6_SIZE);
cpu_to_be32_array(fs->h_u.tcp_ip6_spec.ip6dst,
- rule->tuples.dst_ip, 4);
+ rule->tuples.dst_ip, IPV6_SIZE);
if (rule->unused_tuple & BIT(INNER_DST_IP))
- memset(fs->m_u.tcp_ip6_spec.ip6dst, 0, sizeof(int) * 4);
+ memset(fs->m_u.tcp_ip6_spec.ip6dst, 0,
+ sizeof(int) * IPV6_SIZE);
else
cpu_to_be32_array(fs->m_u.tcp_ip6_spec.ip6dst,
- rule->tuples_mask.dst_ip, 4);
+ rule->tuples_mask.dst_ip, IPV6_SIZE);
fs->h_u.tcp_ip6_spec.psrc = cpu_to_be16(rule->tuples.src_port);
fs->m_u.tcp_ip6_spec.psrc =
@@ -5428,20 +5588,22 @@ static int hclge_get_fd_rule_info(struct hnae3_handle *handle,
break;
case IPV6_USER_FLOW:
cpu_to_be32_array(fs->h_u.usr_ip6_spec.ip6src,
- rule->tuples.src_ip, 4);
+ rule->tuples.src_ip, IPV6_SIZE);
if (rule->unused_tuple & BIT(INNER_SRC_IP))
- memset(fs->m_u.usr_ip6_spec.ip6src, 0, sizeof(int) * 4);
+ memset(fs->m_u.usr_ip6_spec.ip6src, 0,
+ sizeof(int) * IPV6_SIZE);
else
cpu_to_be32_array(fs->m_u.usr_ip6_spec.ip6src,
- rule->tuples_mask.src_ip, 4);
+ rule->tuples_mask.src_ip, IPV6_SIZE);
cpu_to_be32_array(fs->h_u.usr_ip6_spec.ip6dst,
- rule->tuples.dst_ip, 4);
+ rule->tuples.dst_ip, IPV6_SIZE);
if (rule->unused_tuple & BIT(INNER_DST_IP))
- memset(fs->m_u.usr_ip6_spec.ip6dst, 0, sizeof(int) * 4);
+ memset(fs->m_u.usr_ip6_spec.ip6dst, 0,
+ sizeof(int) * IPV6_SIZE);
else
cpu_to_be32_array(fs->m_u.usr_ip6_spec.ip6dst,
- rule->tuples_mask.dst_ip, 4);
+ rule->tuples_mask.dst_ip, IPV6_SIZE);
fs->h_u.usr_ip6_spec.l4_proto = rule->tuples.ip_proto;
fs->m_u.usr_ip6_spec.l4_proto =
@@ -5474,6 +5636,7 @@ static int hclge_get_fd_rule_info(struct hnae3_handle *handle,
break;
default:
+ spin_unlock_bh(&hdev->fd_rule_lock);
return -EOPNOTSUPP;
}
@@ -5505,6 +5668,8 @@ static int hclge_get_fd_rule_info(struct hnae3_handle *handle,
fs->ring_cookie |= vf_id;
}
+ spin_unlock_bh(&hdev->fd_rule_lock);
+
return 0;
}
@@ -5522,20 +5687,208 @@ static int hclge_get_all_rules(struct hnae3_handle *handle,
cmd->data = hdev->fd_cfg.rule_num[HCLGE_FD_STAGE_1];
+ spin_lock_bh(&hdev->fd_rule_lock);
hlist_for_each_entry_safe(rule, node2,
&hdev->fd_rule_list, rule_node) {
- if (cnt == cmd->rule_cnt)
+ if (cnt == cmd->rule_cnt) {
+ spin_unlock_bh(&hdev->fd_rule_lock);
return -EMSGSIZE;
+ }
rule_locs[cnt] = rule->location;
cnt++;
}
+ spin_unlock_bh(&hdev->fd_rule_lock);
+
cmd->rule_cnt = cnt;
return 0;
}
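+/* extract n_proto, ip_proto, dst_port and the IPv4/IPv6 addresses from the
+ * parsed flow keys into an hclge_fd_rule_tuples used for aRFS rule matching
+ */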
+static void hclge_fd_get_flow_tuples(const struct flow_keys *fkeys,
+ struct hclge_fd_rule_tuples *tuples)
+{
+ tuples->ether_proto = be16_to_cpu(fkeys->basic.n_proto);
+ tuples->ip_proto = fkeys->basic.ip_proto;
+ tuples->dst_port = be16_to_cpu(fkeys->ports.dst);
+
+ if (fkeys->basic.n_proto == htons(ETH_P_IP)) {
+ tuples->src_ip[3] = be32_to_cpu(fkeys->addrs.v4addrs.src);
+ tuples->dst_ip[3] = be32_to_cpu(fkeys->addrs.v4addrs.dst);
+ } else {
+ memcpy(tuples->src_ip,
+ fkeys->addrs.v6addrs.src.in6_u.u6_addr32,
+ sizeof(tuples->src_ip));
+ memcpy(tuples->dst_ip,
+ fkeys->addrs.v6addrs.dst.in6_u.u6_addr32,
+ sizeof(tuples->dst_ip));
+ }
+}
+
+/* traverse all rules, check whether an existing rule has the same tuples */
+static struct hclge_fd_rule *
+hclge_fd_search_flow_keys(struct hclge_dev *hdev,
+ const struct hclge_fd_rule_tuples *tuples)
+{
+ struct hclge_fd_rule *rule = NULL;
+ struct hlist_node *node;
+
+ hlist_for_each_entry_safe(rule, node, &hdev->fd_rule_list, rule_node) {
+ if (!memcmp(tuples, &rule->tuples, sizeof(*tuples)))
+ return rule;
+ }
+
+ return NULL;
+}
+
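+/* fill an aRFS rule from the flow tuples: mark the unused tuple fields,
+ * derive the flow type from the ether/ip protocol and use a full mask
+ */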
+static void hclge_fd_build_arfs_rule(const struct hclge_fd_rule_tuples *tuples,
+ struct hclge_fd_rule *rule)
+{
+ rule->unused_tuple = BIT(INNER_SRC_MAC) | BIT(INNER_DST_MAC) |
+ BIT(INNER_VLAN_TAG_FST) | BIT(INNER_IP_TOS) |
+ BIT(INNER_SRC_PORT);
+ rule->action = 0;
+ rule->vf_id = 0;
+ rule->rule_type = HCLGE_FD_ARFS_ACTIVE;
+ if (tuples->ether_proto == ETH_P_IP) {
+ if (tuples->ip_proto == IPPROTO_TCP)
+ rule->flow_type = TCP_V4_FLOW;
+ else
+ rule->flow_type = UDP_V4_FLOW;
+ } else {
+ if (tuples->ip_proto == IPPROTO_TCP)
+ rule->flow_type = TCP_V6_FLOW;
+ else
+ rule->flow_type = UDP_V6_FLOW;
+ }
+ memcpy(&rule->tuples, tuples, sizeof(rule->tuples));
+ memset(&rule->tuples_mask, 0xFF, sizeof(rule->tuples_mask));
+}
+
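+/* add or update a flow director rule for an aRFS flow; returns the rule
+ * location on success, -EOPNOTSUPP while ethtool rules are active, or a
+ * negative errno on failure
+ */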
+static int hclge_add_fd_entry_by_arfs(struct hnae3_handle *handle, u16 queue_id,
+ u16 flow_id, struct flow_keys *fkeys)
+{
+ struct hclge_vport *vport = hclge_get_vport(handle);
+ struct hclge_fd_rule_tuples new_tuples;
+ struct hclge_dev *hdev = vport->back;
+ struct hclge_fd_rule *rule;
+ u16 tmp_queue_id;
+ u16 bit_id;
+ int ret;
+
+ if (!hnae3_dev_fd_supported(hdev))
+ return -EOPNOTSUPP;
+
+ memset(&new_tuples, 0, sizeof(new_tuples));
+ hclge_fd_get_flow_tuples(fkeys, &new_tuples);
+
+ spin_lock_bh(&hdev->fd_rule_lock);
+
+ /* when there is already an fd rule added by the user,
+ * arfs should not work
+ */
+ if (hdev->fd_active_type == HCLGE_FD_EP_ACTIVE) {
+ spin_unlock_bh(&hdev->fd_rule_lock);
+
+ return -EOPNOTSUPP;
+ }
+
+ /* check whether a flow director filter exists for this flow:
+ * if not, create a new filter for it;
+ * if a filter exists with a different queue id, modify the filter;
+ * if a filter exists with the same queue id, do nothing
+ */
+ rule = hclge_fd_search_flow_keys(hdev, &new_tuples);
+ if (!rule) {
+ bit_id = find_first_zero_bit(hdev->fd_bmap, MAX_FD_FILTER_NUM);
+ if (bit_id >= hdev->fd_cfg.rule_num[HCLGE_FD_STAGE_1]) {
+ spin_unlock_bh(&hdev->fd_rule_lock);
+
+ return -ENOSPC;
+ }
+
+ rule = kzalloc(sizeof(*rule), GFP_KERNEL);
+ if (!rule) {
+ spin_unlock_bh(&hdev->fd_rule_lock);
+
+ return -ENOMEM;
+ }
+
+ set_bit(bit_id, hdev->fd_bmap);
+ rule->location = bit_id;
+ rule->flow_id = flow_id;
+ rule->queue_id = queue_id;
+ hclge_fd_build_arfs_rule(&new_tuples, rule);
+ ret = hclge_fd_config_rule(hdev, rule);
+
+ spin_unlock_bh(&hdev->fd_rule_lock);
+
+ if (ret)
+ return ret;
+
+ return rule->location;
+ }
+
+ spin_unlock_bh(&hdev->fd_rule_lock);
+
+ if (rule->queue_id == queue_id)
+ return rule->location;
+
+ tmp_queue_id = rule->queue_id;
+ rule->queue_id = queue_id;
+ ret = hclge_config_action(hdev, HCLGE_FD_STAGE_1, rule);
+ if (ret) {
+ rule->queue_id = tmp_queue_id;
+ return ret;
+ }
+
+ return rule->location;
+}
+
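+/* called from the periodic service task: remove aRFS rules whose flows
+ * have expired according to rps_may_expire_flow()
+ */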
+static void hclge_rfs_filter_expire(struct hclge_dev *hdev)
+{
+#ifdef CONFIG_RFS_ACCEL
+ struct hnae3_handle *handle = &hdev->vport[0].nic;
+ struct hclge_fd_rule *rule;
+ struct hlist_node *node;
+ HLIST_HEAD(del_list);
+
+ spin_lock_bh(&hdev->fd_rule_lock);
+ if (hdev->fd_active_type != HCLGE_FD_ARFS_ACTIVE) {
+ spin_unlock_bh(&hdev->fd_rule_lock);
+ return;
+ }
+ hlist_for_each_entry_safe(rule, node, &hdev->fd_rule_list, rule_node) {
+ if (rps_may_expire_flow(handle->netdev, rule->queue_id,
+ rule->flow_id, rule->location)) {
+ hlist_del_init(&rule->rule_node);
+ hlist_add_head(&rule->rule_node, &del_list);
+ hdev->hclge_fd_rule_num--;
+ clear_bit(rule->location, hdev->fd_bmap);
+ }
+ }
+ spin_unlock_bh(&hdev->fd_rule_lock);
+
+ hlist_for_each_entry_safe(rule, node, &del_list, rule_node) {
+ hclge_fd_tcam_config(hdev, HCLGE_FD_STAGE_1, true,
+ rule->location, NULL, false);
+ kfree(rule);
+ }
+#endif
+}
+
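+/* flush all aRFS rules; called before an ethtool rule is added and when
+ * the port is stopped
+ */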
+static void hclge_clear_arfs_rules(struct hnae3_handle *handle)
+{
+#ifdef CONFIG_RFS_ACCEL
+ struct hclge_vport *vport = hclge_get_vport(handle);
+ struct hclge_dev *hdev = vport->back;
+
+ if (hdev->fd_active_type == HCLGE_FD_ARFS_ACTIVE)
+ hclge_del_all_fd_entries(handle, true);
+#endif
+}
+
static bool hclge_get_hw_reset_stat(struct hnae3_handle *handle)
{
struct hclge_vport *vport = hclge_get_vport(handle);
@@ -5565,10 +5918,12 @@ static void hclge_enable_fd(struct hnae3_handle *handle, bool enable)
{
struct hclge_vport *vport = hclge_get_vport(handle);
struct hclge_dev *hdev = vport->back;
+ bool clear;
hdev->fd_en = enable;
+ clear = hdev->fd_active_type == HCLGE_FD_ARFS_ACTIVE;
if (!enable)
- hclge_del_all_fd_entries(handle, false);
+ hclge_del_all_fd_entries(handle, clear);
else
hclge_restore_fd_entries(handle);
}
@@ -5582,20 +5937,20 @@ static void hclge_cfg_mac_mode(struct hclge_dev *hdev, bool enable)
int ret;
hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_CONFIG_MAC_MODE, false);
- hnae3_set_bit(loop_en, HCLGE_MAC_TX_EN_B, enable);
- hnae3_set_bit(loop_en, HCLGE_MAC_RX_EN_B, enable);
- hnae3_set_bit(loop_en, HCLGE_MAC_PAD_TX_B, enable);
- hnae3_set_bit(loop_en, HCLGE_MAC_PAD_RX_B, enable);
- hnae3_set_bit(loop_en, HCLGE_MAC_1588_TX_B, 0);
- hnae3_set_bit(loop_en, HCLGE_MAC_1588_RX_B, 0);
- hnae3_set_bit(loop_en, HCLGE_MAC_APP_LP_B, 0);
- hnae3_set_bit(loop_en, HCLGE_MAC_LINE_LP_B, 0);
- hnae3_set_bit(loop_en, HCLGE_MAC_FCS_TX_B, enable);
- hnae3_set_bit(loop_en, HCLGE_MAC_RX_FCS_B, enable);
- hnae3_set_bit(loop_en, HCLGE_MAC_RX_FCS_STRIP_B, enable);
- hnae3_set_bit(loop_en, HCLGE_MAC_TX_OVERSIZE_TRUNCATE_B, enable);
- hnae3_set_bit(loop_en, HCLGE_MAC_RX_OVERSIZE_TRUNCATE_B, enable);
- hnae3_set_bit(loop_en, HCLGE_MAC_TX_UNDER_MIN_ERR_B, enable);
+
+ if (enable) {
+ hnae3_set_bit(loop_en, HCLGE_MAC_TX_EN_B, 1U);
+ hnae3_set_bit(loop_en, HCLGE_MAC_RX_EN_B, 1U);
+ hnae3_set_bit(loop_en, HCLGE_MAC_PAD_TX_B, 1U);
+ hnae3_set_bit(loop_en, HCLGE_MAC_PAD_RX_B, 1U);
+ hnae3_set_bit(loop_en, HCLGE_MAC_FCS_TX_B, 1U);
+ hnae3_set_bit(loop_en, HCLGE_MAC_RX_FCS_B, 1U);
+ hnae3_set_bit(loop_en, HCLGE_MAC_RX_FCS_STRIP_B, 1U);
+ hnae3_set_bit(loop_en, HCLGE_MAC_TX_OVERSIZE_TRUNCATE_B, 1U);
+ hnae3_set_bit(loop_en, HCLGE_MAC_RX_OVERSIZE_TRUNCATE_B, 1U);
+ hnae3_set_bit(loop_en, HCLGE_MAC_TX_UNDER_MIN_ERR_B, 1U);
+ }
+
req->txrx_pad_fcs_loop_en = cpu_to_le32(loop_en);
ret = hclge_cmd_send(&hdev->hw, &desc, 1);
@@ -5726,7 +6081,7 @@ static int hclge_set_serdes_loopback(struct hclge_dev *hdev, bool en,
return -EBUSY;
}
-static int hclge_tqp_enable(struct hclge_dev *hdev, int tqp_id,
+static int hclge_tqp_enable(struct hclge_dev *hdev, unsigned int tqp_id,
int stream_id, bool enable)
{
struct hclge_desc desc;
@@ -5737,7 +6092,8 @@ static int hclge_tqp_enable(struct hclge_dev *hdev, int tqp_id,
hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_CFG_COM_TQP_QUEUE, false);
req->tqp_id = cpu_to_le16(tqp_id & HCLGE_RING_ID_MASK);
req->stream_id = cpu_to_le16(stream_id);
- req->enable |= enable << HCLGE_TQP_ENABLE_B;
+ if (enable)
+ req->enable |= 1U << HCLGE_TQP_ENABLE_B;
ret = hclge_cmd_send(&hdev->hw, &desc, 1);
if (ret)
@@ -5838,6 +6194,8 @@ static void hclge_ae_stop(struct hnae3_handle *handle)
set_bit(HCLGE_STATE_DOWN, &hdev->state);
+ hclge_clear_arfs_rules(handle);
+
/* If it is not PF reset, the firmware will disable the MAC,
* so it only need to stop phy here.
*/
@@ -5903,11 +6261,11 @@ static int hclge_get_mac_vlan_cmd_status(struct hclge_vport *vport,
if (op == HCLGE_MAC_VLAN_ADD) {
if ((!resp_code) || (resp_code == 1)) {
return_status = 0;
- } else if (resp_code == 2) {
+ } else if (resp_code == HCLGE_ADD_UC_OVERFLOW) {
return_status = -ENOSPC;
dev_err(&hdev->pdev->dev,
"add mac addr failed for uc_overflow.\n");
- } else if (resp_code == 3) {
+ } else if (resp_code == HCLGE_ADD_MC_OVERFLOW) {
return_status = -ENOSPC;
dev_err(&hdev->pdev->dev,
"add mac addr failed for mc_overflow.\n");
@@ -5952,13 +6310,15 @@ static int hclge_get_mac_vlan_cmd_status(struct hclge_vport *vport,
static int hclge_update_desc_vfid(struct hclge_desc *desc, int vfid, bool clr)
{
- int word_num;
- int bit_num;
+#define HCLGE_VF_NUM_IN_FIRST_DESC 192
+
+ unsigned int word_num;
+ unsigned int bit_num;
if (vfid > 255 || vfid < 0)
return -EIO;
- if (vfid >= 0 && vfid <= 191) {
+ if (vfid >= 0 && vfid < HCLGE_VF_NUM_IN_FIRST_DESC) {
word_num = vfid / 32;
bit_num = vfid % 32;
if (clr)
@@ -5966,7 +6326,7 @@ static int hclge_update_desc_vfid(struct hclge_desc *desc, int vfid, bool clr)
else
desc[1].data[word_num] |= cpu_to_le32(1 << bit_num);
} else {
- word_num = (vfid - 192) / 32;
+ word_num = (vfid - HCLGE_VF_NUM_IN_FIRST_DESC) / 32;
bit_num = vfid % 32;
if (clr)
desc[2].data[word_num] &= cpu_to_le32(~(1 << bit_num));
@@ -6149,6 +6509,10 @@ static int hclge_init_umv_space(struct hclge_dev *hdev)
mutex_init(&hdev->umv_mutex);
hdev->max_umv_size = allocated_size;
+ /* divide max_umv_size by (hdev->num_req_vfs + 2), in order to
+ * reserve some unicast mac vlan table entries shared by the pf
+ * and its vfs.
+ */
hdev->priv_umv_size = hdev->max_umv_size / (hdev->num_req_vfs + 2);
hdev->share_umv_size = hdev->priv_umv_size +
hdev->max_umv_size % (hdev->num_req_vfs + 2);
@@ -6181,7 +6545,9 @@ static int hclge_set_umv_space(struct hclge_dev *hdev, u16 space_size,
req = (struct hclge_umv_spc_alc_cmd *)desc.data;
hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_MAC_VLAN_ALLOCATE, false);
- hnae3_set_bit(req->allocate, HCLGE_UMV_SPC_ALC_B, !is_alloc);
+ if (!is_alloc)
+ hnae3_set_bit(req->allocate, HCLGE_UMV_SPC_ALC_B, 1);
+
req->space_size = cpu_to_le32(space_size);
ret = hclge_cmd_send(&hdev->hw, &desc, 1);
@@ -6270,8 +6636,7 @@ int hclge_add_uc_addr_common(struct hclge_vport *vport,
is_multicast_ether_addr(addr)) {
dev_err(&hdev->pdev->dev,
"Set_uc mac err! invalid mac:%pM. is_zero:%d,is_br=%d,is_mul=%d\n",
- addr,
- is_zero_ether_addr(addr),
+ addr, is_zero_ether_addr(addr),
is_broadcast_ether_addr(addr),
is_multicast_ether_addr(addr));
return -EINVAL;
@@ -6338,9 +6703,8 @@ int hclge_rm_uc_addr_common(struct hclge_vport *vport,
if (is_zero_ether_addr(addr) ||
is_broadcast_ether_addr(addr) ||
is_multicast_ether_addr(addr)) {
- dev_dbg(&hdev->pdev->dev,
- "Remove mac err! invalid mac:%pM.\n",
- addr);
+ dev_dbg(&hdev->pdev->dev, "Remove mac err! invalid mac:%pM.\n",
+ addr);
return -EINVAL;
}
@@ -6381,18 +6745,16 @@ int hclge_add_mc_addr_common(struct hclge_vport *vport,
hnae3_set_bit(req.entry_type, HCLGE_MAC_VLAN_BIT0_EN_B, 0);
hclge_prepare_mac_addr(&req, addr, true);
status = hclge_lookup_mac_vlan_tbl(vport, &req, desc, true);
- if (!status) {
- /* This mac addr exist, update VFID for it */
- hclge_update_desc_vfid(desc, vport->vport_id, false);
- status = hclge_add_mac_vlan_tbl(vport, &req, desc);
- } else {
+ if (status) {
/* This mac addr do not exist, add new entry for it */
memset(desc[0].data, 0, sizeof(desc[0].data));
memset(desc[1].data, 0, sizeof(desc[0].data));
memset(desc[2].data, 0, sizeof(desc[0].data));
- hclge_update_desc_vfid(desc, vport->vport_id, false);
- status = hclge_add_mac_vlan_tbl(vport, &req, desc);
}
+ status = hclge_update_desc_vfid(desc, vport->vport_id, false);
+ if (status)
+ return status;
+ status = hclge_add_mac_vlan_tbl(vport, &req, desc);
if (status == -ENOSPC)
dev_err(&hdev->pdev->dev, "mc mac vlan table is full\n");
@@ -6430,7 +6792,9 @@ int hclge_rm_mc_addr_common(struct hclge_vport *vport,
status = hclge_lookup_mac_vlan_tbl(vport, &req, desc, true);
if (!status) {
/* This mac addr exist, remove this handle's VFID for it */
- hclge_update_desc_vfid(desc, vport->vport_id, true);
+ status = hclge_update_desc_vfid(desc, vport->vport_id, true);
+ if (status)
+ return status;
if (hclge_is_all_function_id_zero(desc))
/* All the vfid is zero, so need to delete this entry */
@@ -6759,7 +7123,7 @@ static void hclge_enable_vlan_filter(struct hnae3_handle *handle, bool enable)
handle->netdev_flags &= ~HNAE3_VLAN_FLTR;
}
-static int hclge_set_vf_vlan_common(struct hclge_dev *hdev, int vfid,
+static int hclge_set_vf_vlan_common(struct hclge_dev *hdev, u16 vfid,
bool is_kill, u16 vlan, u8 qos,
__be16 proto)
{
@@ -6771,6 +7135,12 @@ static int hclge_set_vf_vlan_common(struct hclge_dev *hdev, int vfid,
u8 vf_byte_off;
int ret;
+ /* if the vf vlan table is full, firmware will disable the vf vlan
+ * filter, so it is useless and unnecessary to add a new vlan id to
+ * the vf vlan filter
+ */
+ if (test_bit(vfid, hdev->vf_vlan_full) && !is_kill)
+ return 0;
+
hclge_cmd_setup_basic_desc(&desc[0],
HCLGE_OPC_VLAN_FILTER_VF_CFG, false);
hclge_cmd_setup_basic_desc(&desc[1],
@@ -6806,6 +7176,7 @@ static int hclge_set_vf_vlan_common(struct hclge_dev *hdev, int vfid,
return 0;
if (req0->resp_code == HCLGE_VF_VLAN_NO_ENTRY) {
+ set_bit(vfid, hdev->vf_vlan_full);
dev_warn(&hdev->pdev->dev,
"vf vlan table is full, vf vlan filter is disabled\n");
return 0;
@@ -6819,12 +7190,13 @@ static int hclge_set_vf_vlan_common(struct hclge_dev *hdev, int vfid,
if (!req0->resp_code)
return 0;
- if (req0->resp_code == HCLGE_VF_VLAN_DEL_NO_FOUND) {
- dev_warn(&hdev->pdev->dev,
- "vlan %d filter is not in vf vlan table\n",
- vlan);
+ /* when the vf vlan table is full, the vf vlan filter is disabled
+ * and new vlan ids are not added into the vf vlan table.
+ * Just return 0 without a warning, to avoid massive verbose
+ * logs at unload time.
+ */
+ if (req0->resp_code == HCLGE_VF_VLAN_DEL_NO_FOUND)
return 0;
- }
dev_err(&hdev->pdev->dev,
"Kill vf vlan filter fail, ret =%d.\n",
@@ -7140,10 +7512,6 @@ static void hclge_add_vport_vlan_table(struct hclge_vport *vport, u16 vlan_id,
{
struct hclge_vport_vlan_cfg *vlan;
- /* vlan 0 is reserved */
- if (!vlan_id)
- return;
-
vlan = kzalloc(sizeof(*vlan), GFP_KERNEL);
if (!vlan)
return;
@@ -7238,6 +7606,43 @@ void hclge_uninit_vport_vlan_table(struct hclge_dev *hdev)
mutex_unlock(&hdev->vport_cfg_mutex);
}
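+/* re-program each vport's port based vlan, or its vport vlan list, into
+ * the hardware vlan filter table
+ */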
+static void hclge_restore_vlan_table(struct hnae3_handle *handle)
+{
+ struct hclge_vport *vport = hclge_get_vport(handle);
+ struct hclge_vport_vlan_cfg *vlan, *tmp;
+ struct hclge_dev *hdev = vport->back;
+ u16 vlan_proto, qos;
+ u16 state, vlan_id;
+ int i;
+
+ mutex_lock(&hdev->vport_cfg_mutex);
+ for (i = 0; i < hdev->num_alloc_vport; i++) {
+ vport = &hdev->vport[i];
+ vlan_proto = vport->port_base_vlan_cfg.vlan_info.vlan_proto;
+ vlan_id = vport->port_base_vlan_cfg.vlan_info.vlan_tag;
+ qos = vport->port_base_vlan_cfg.vlan_info.qos;
+ state = vport->port_base_vlan_cfg.state;
+
+ if (state != HNAE3_PORT_BASE_VLAN_DISABLE) {
+ hclge_set_vlan_filter_hw(hdev, htons(vlan_proto),
+ vport->vport_id, vlan_id, qos,
+ false);
+ continue;
+ }
+
+ list_for_each_entry_safe(vlan, tmp, &vport->vlan_list, node) {
+ if (vlan->hd_tbl_status)
+ hclge_set_vlan_filter_hw(hdev,
+ htons(ETH_P_8021Q),
+ vport->vport_id,
+ vlan->vlan_id, 0,
+ false);
+ }
+ }
+
+ mutex_unlock(&hdev->vport_cfg_mutex);
+}
+
int hclge_en_hw_strip_rxvtag(struct hnae3_handle *handle, bool enable)
{
struct hclge_vport *vport = hclge_get_vport(handle);
@@ -7415,11 +7820,20 @@ int hclge_set_vlan_filter(struct hnae3_handle *handle, __be16 proto,
bool writen_to_tbl = false;
int ret = 0;
- /* when port based VLAN enabled, we use port based VLAN as the VLAN
- * filter entry. In this case, we don't update VLAN filter table
- * when user add new VLAN or remove exist VLAN, just update the vport
- * VLAN list. The VLAN id in VLAN list won't be writen in VLAN filter
- * table until port based VLAN disabled
+ /* When the device is resetting, firmware is unable to handle the
+ * mailbox. Just record the vlan id, and remove it after the
+ * reset has finished.
+ */
+ if (test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state) && is_kill) {
+ set_bit(vlan_id, vport->vlan_del_fail_bmap);
+ return -EBUSY;
+ }
+
+ /* When port based vlan is enabled, we use the port based vlan as the
+ * vlan filter entry. In this case, we don't update the vlan filter
+ * table when the user adds a new vlan or removes an existing one,
+ * we just update the vport vlan list. The vlan ids in the vlan list
+ * won't be written into the vlan filter table until port based vlan
+ * is disabled
*/
if (handle->port_base_vlan_state == HNAE3_PORT_BASE_VLAN_DISABLE) {
ret = hclge_set_vlan_filter_hw(hdev, proto, vport->vport_id,
@@ -7427,16 +7841,53 @@ int hclge_set_vlan_filter(struct hnae3_handle *handle, __be16 proto,
writen_to_tbl = true;
}
- if (ret)
- return ret;
+ if (!ret) {
+ if (is_kill)
+ hclge_rm_vport_vlan_table(vport, vlan_id, false);
+ else
+ hclge_add_vport_vlan_table(vport, vlan_id,
+ writen_to_tbl);
+ } else if (is_kill) {
+ /* When removing the hw vlan filter fails, record the vlan id
+ * and try to remove it from hw later, to stay consistent
+ * with the stack
+ */
+ set_bit(vlan_id, vport->vlan_del_fail_bmap);
+ }
+ return ret;
+}
- if (is_kill)
- hclge_rm_vport_vlan_table(vport, vlan_id, false);
- else
- hclge_add_vport_vlan_table(vport, vlan_id,
- writen_to_tbl);
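+/* retry the hardware vlan deletions recorded in vlan_del_fail_bmap from
+ * the service task, bounded by HCLGE_MAX_SYNC_COUNT entries per run
+ */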
+static void hclge_sync_vlan_filter(struct hclge_dev *hdev)
+{
+#define HCLGE_MAX_SYNC_COUNT 60
- return 0;
+ int i, ret, sync_cnt = 0;
+ u16 vlan_id;
+
+ /* start from vport 1 for PF is always alive */
+ for (i = 0; i < hdev->num_alloc_vport; i++) {
+ struct hclge_vport *vport = &hdev->vport[i];
+
+ vlan_id = find_first_bit(vport->vlan_del_fail_bmap,
+ VLAN_N_VID);
+ while (vlan_id != VLAN_N_VID) {
+ ret = hclge_set_vlan_filter_hw(hdev, htons(ETH_P_8021Q),
+ vport->vport_id, vlan_id,
+ 0, true);
+ if (ret && ret != -EINVAL)
+ return;
+
+ clear_bit(vlan_id, vport->vlan_del_fail_bmap);
+ hclge_rm_vport_vlan_table(vport, vlan_id, false);
+
+ sync_cnt++;
+ if (sync_cnt >= HCLGE_MAX_SYNC_COUNT)
+ return;
+
+ vlan_id = find_first_bit(vport->vlan_del_fail_bmap,
+ VLAN_N_VID);
+ }
+ }
}
static int hclge_set_mac_mtu(struct hclge_dev *hdev, int new_mps)
@@ -7463,7 +7914,7 @@ static int hclge_set_mtu(struct hnae3_handle *handle, int new_mtu)
int hclge_set_vport_mtu(struct hclge_vport *vport, int new_mtu)
{
struct hclge_dev *hdev = vport->back;
- int i, max_frm_size, ret = 0;
+ int i, max_frm_size, ret;
max_frm_size = new_mtu + ETH_HLEN + ETH_FCS_LEN + 2 * VLAN_HLEN;
if (max_frm_size < HCLGE_MAC_MIN_FRAME ||
@@ -7523,7 +7974,8 @@ static int hclge_send_reset_tqp_cmd(struct hclge_dev *hdev, u16 queue_id,
req = (struct hclge_reset_tqp_queue_cmd *)desc.data;
req->tqp_id = cpu_to_le16(queue_id & HCLGE_RING_ID_MASK);
- hnae3_set_bit(req->reset_req, HCLGE_TQP_RESET_B, enable);
+ if (enable)
+ hnae3_set_bit(req->reset_req, HCLGE_TQP_RESET_B, 1U);
ret = hclge_cmd_send(&hdev->hw, &desc, 1);
if (ret) {
@@ -7574,7 +8026,7 @@ int hclge_reset_tqp(struct hnae3_handle *handle, u16 queue_id)
int reset_try_times = 0;
int reset_status;
u16 queue_gid;
- int ret = 0;
+ int ret;
queue_gid = hclge_covert_handle_qid_global(handle, queue_id);
@@ -7591,7 +8043,6 @@ int hclge_reset_tqp(struct hnae3_handle *handle, u16 queue_id)
return ret;
}
- reset_try_times = 0;
while (reset_try_times++ < HCLGE_TQP_RESET_TRY_TIMES) {
/* Wait for tqp hw reset */
msleep(20);
@@ -7630,7 +8081,6 @@ void hclge_reset_vf_queue(struct hclge_vport *vport, u16 queue_id)
return;
}
- reset_try_times = 0;
while (reset_try_times++ < HCLGE_TQP_RESET_TRY_TIMES) {
/* Wait for tqp hw reset */
msleep(20);
@@ -7700,7 +8150,7 @@ int hclge_cfg_flowctrl(struct hclge_dev *hdev)
{
struct phy_device *phydev = hdev->hw.mac.phydev;
u16 remote_advertising = 0;
- u16 local_advertising = 0;
+ u16 local_advertising;
u32 rx_pause, tx_pause;
u8 flowctl;
@@ -7733,8 +8183,9 @@ static void hclge_get_pauseparam(struct hnae3_handle *handle, u32 *auto_neg,
{
struct hclge_vport *vport = hclge_get_vport(handle);
struct hclge_dev *hdev = vport->back;
+ struct phy_device *phydev = hdev->hw.mac.phydev;
- *auto_neg = hclge_get_autoneg(handle);
+ *auto_neg = phydev ? hclge_get_autoneg(handle) : 0;
if (hdev->tm_info.fc_mode == HCLGE_FC_PFC) {
*rx_en = 0;
@@ -7765,11 +8216,13 @@ static int hclge_set_pauseparam(struct hnae3_handle *handle, u32 auto_neg,
struct phy_device *phydev = hdev->hw.mac.phydev;
u32 fc_autoneg;
- fc_autoneg = hclge_get_autoneg(handle);
- if (auto_neg != fc_autoneg) {
- dev_info(&hdev->pdev->dev,
- "To change autoneg please use: ethtool -s <dev> autoneg <on|off>\n");
- return -EOPNOTSUPP;
+ if (phydev) {
+ fc_autoneg = hclge_get_autoneg(handle);
+ if (auto_neg != fc_autoneg) {
+ dev_info(&hdev->pdev->dev,
+ "To change autoneg please use: ethtool -s <dev> autoneg <on|off>\n");
+ return -EOPNOTSUPP;
+ }
}
if (hdev->tm_info.fc_mode == HCLGE_FC_PFC) {
@@ -7780,16 +8233,13 @@ static int hclge_set_pauseparam(struct hnae3_handle *handle, u32 auto_neg,
hclge_set_flowctrl_adv(hdev, rx_en, tx_en);
- if (!fc_autoneg)
+ if (!auto_neg)
return hclge_cfg_pauseparam(hdev, rx_en, tx_en);
if (phydev)
return phy_start_aneg(phydev);
- if (hdev->pdev->revision == 0x20)
- return -EOPNOTSUPP;
-
- return hclge_restart_autoneg(handle);
+ return -EOPNOTSUPP;
}
static void hclge_get_ksettings_an_result(struct hnae3_handle *handle,
@@ -7825,7 +8275,8 @@ static void hclge_get_mdix_mode(struct hnae3_handle *handle,
struct hclge_vport *vport = hclge_get_vport(handle);
struct hclge_dev *hdev = vport->back;
struct phy_device *phydev = hdev->hw.mac.phydev;
- int mdix_ctrl, mdix, retval, is_resolved;
+ int mdix_ctrl, mdix, is_resolved;
+ unsigned int retval;
if (!phydev) {
*tp_mdix_ctrl = ETH_TP_MDI_INVALID;
@@ -7894,6 +8345,102 @@ static void hclge_info_show(struct hclge_dev *hdev)
dev_info(dev, "PF info end.\n");
}
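+/* init the NIC client instance and enable the NIC hw error interrupts;
+ * rolls the client back if a reset starts during initialization
+ */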
+static int hclge_init_nic_client_instance(struct hnae3_ae_dev *ae_dev,
+ struct hclge_vport *vport)
+{
+ struct hnae3_client *client = vport->nic.client;
+ struct hclge_dev *hdev = ae_dev->priv;
+ int rst_cnt;
+ int ret;
+
+ rst_cnt = hdev->rst_stats.reset_cnt;
+ ret = client->ops->init_instance(&vport->nic);
+ if (ret)
+ return ret;
+
+ set_bit(HCLGE_STATE_NIC_REGISTERED, &hdev->state);
+ if (test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state) ||
+ rst_cnt != hdev->rst_stats.reset_cnt) {
+ ret = -EBUSY;
+ goto init_nic_err;
+ }
+
+ /* Enable nic hw error interrupts */
+ ret = hclge_config_nic_hw_error(hdev, true);
+ if (ret) {
+ dev_err(&ae_dev->pdev->dev,
+ "fail(%d) to enable hw error interrupts\n", ret);
+ goto init_nic_err;
+ }
+
+ hnae3_set_client_init_flag(client, ae_dev, 1);
+
+ if (netif_msg_drv(&hdev->vport->nic))
+ hclge_info_show(hdev);
+
+ return ret;
+
+init_nic_err:
+ clear_bit(HCLGE_STATE_NIC_REGISTERED, &hdev->state);
+ while (test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state))
+ msleep(HCLGE_WAIT_RESET_DONE);
+
+ client->ops->uninit_instance(&vport->nic, 0);
+
+ return ret;
+}
+
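+/* init the RoCE client instance once both NIC and RoCE clients are
+ * registered, and enable the RoCE RAS interrupts; rolls the client back
+ * if a reset starts during initialization
+ */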
+static int hclge_init_roce_client_instance(struct hnae3_ae_dev *ae_dev,
+ struct hclge_vport *vport)
+{
+ struct hnae3_client *client = vport->roce.client;
+ struct hclge_dev *hdev = ae_dev->priv;
+ int rst_cnt;
+ int ret;
+
+ if (!hnae3_dev_roce_supported(hdev) || !hdev->roce_client ||
+ !hdev->nic_client)
+ return 0;
+
+ client = hdev->roce_client;
+ ret = hclge_init_roce_base_info(vport);
+ if (ret)
+ return ret;
+
+ rst_cnt = hdev->rst_stats.reset_cnt;
+ ret = client->ops->init_instance(&vport->roce);
+ if (ret)
+ return ret;
+
+ set_bit(HCLGE_STATE_ROCE_REGISTERED, &hdev->state);
+ if (test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state) ||
+ rst_cnt != hdev->rst_stats.reset_cnt) {
+ ret = -EBUSY;
+ goto init_roce_err;
+ }
+
+ /* Enable roce ras interrupts */
+ ret = hclge_config_rocee_ras_interrupt(hdev, true);
+ if (ret) {
+ dev_err(&ae_dev->pdev->dev,
+ "fail(%d) to enable roce ras interrupts\n", ret);
+ goto init_roce_err;
+ }
+
+ hnae3_set_client_init_flag(client, ae_dev, 1);
+
+ return 0;
+
+init_roce_err:
+ clear_bit(HCLGE_STATE_ROCE_REGISTERED, &hdev->state);
+ while (test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state))
+ msleep(HCLGE_WAIT_RESET_DONE);
+
+ hdev->roce_client->ops->uninit_instance(&vport->roce, 0);
+
+ return ret;
+}
+
static int hclge_init_client_instance(struct hnae3_client *client,
struct hnae3_ae_dev *ae_dev)
{
@@ -7909,41 +8456,13 @@ static int hclge_init_client_instance(struct hnae3_client *client,
hdev->nic_client = client;
vport->nic.client = client;
- ret = client->ops->init_instance(&vport->nic);
+ ret = hclge_init_nic_client_instance(ae_dev, vport);
if (ret)
goto clear_nic;
- hnae3_set_client_init_flag(client, ae_dev, 1);
-
- if (netif_msg_drv(&hdev->vport->nic))
- hclge_info_show(hdev);
-
- if (hdev->roce_client &&
- hnae3_dev_roce_supported(hdev)) {
- struct hnae3_client *rc = hdev->roce_client;
-
- ret = hclge_init_roce_base_info(vport);
- if (ret)
- goto clear_roce;
-
- ret = rc->ops->init_instance(&vport->roce);
- if (ret)
- goto clear_roce;
-
- hnae3_set_client_init_flag(hdev->roce_client,
- ae_dev, 1);
- }
-
- break;
- case HNAE3_CLIENT_UNIC:
- hdev->nic_client = client;
- vport->nic.client = client;
-
- ret = client->ops->init_instance(&vport->nic);
+ ret = hclge_init_roce_client_instance(ae_dev, vport);
if (ret)
- goto clear_nic;
-
- hnae3_set_client_init_flag(client, ae_dev, 1);
+ goto clear_roce;
break;
case HNAE3_CLIENT_ROCE:
@@ -7952,17 +8471,9 @@ static int hclge_init_client_instance(struct hnae3_client *client,
vport->roce.client = client;
}
- if (hdev->roce_client && hdev->nic_client) {
- ret = hclge_init_roce_base_info(vport);
- if (ret)
- goto clear_roce;
-
- ret = client->ops->init_instance(&vport->roce);
- if (ret)
- goto clear_roce;
-
- hnae3_set_client_init_flag(client, ae_dev, 1);
- }
+ ret = hclge_init_roce_client_instance(ae_dev, vport);
+ if (ret)
+ goto clear_roce;
break;
default:
@@ -7970,7 +8481,7 @@ static int hclge_init_client_instance(struct hnae3_client *client,
}
}
- return 0;
+ return ret;
clear_nic:
hdev->nic_client = NULL;
@@ -7992,6 +8503,10 @@ static void hclge_uninit_client_instance(struct hnae3_client *client,
for (i = 0; i < hdev->num_vmdq_vport + 1; i++) {
vport = &hdev->vport[i];
if (hdev->roce_client) {
+ clear_bit(HCLGE_STATE_ROCE_REGISTERED, &hdev->state);
+ while (test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state))
+ msleep(HCLGE_WAIT_RESET_DONE);
+
hdev->roce_client->ops->uninit_instance(&vport->roce,
0);
hdev->roce_client = NULL;
@@ -8000,6 +8515,10 @@ static void hclge_uninit_client_instance(struct hnae3_client *client,
if (client->type == HNAE3_CLIENT_ROCE)
return;
if (hdev->nic_client && client->ops->uninit_instance) {
+ clear_bit(HCLGE_STATE_NIC_REGISTERED, &hdev->state);
+ while (test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state))
+ msleep(HCLGE_WAIT_RESET_DONE);
+
client->ops->uninit_instance(&vport->nic, 0);
hdev->nic_client = NULL;
vport->nic.client = NULL;
@@ -8081,6 +8600,7 @@ static void hclge_state_init(struct hclge_dev *hdev)
static void hclge_state_uninit(struct hclge_dev *hdev)
{
set_bit(HCLGE_STATE_DOWN, &hdev->state);
+ set_bit(HCLGE_STATE_REMOVING, &hdev->state);
if (hdev->service_timer.function)
del_timer_sync(&hdev->service_timer);
@@ -8122,6 +8642,23 @@ static void hclge_flr_done(struct hnae3_ae_dev *ae_dev)
set_bit(HNAE3_FLR_DONE, &hdev->flr_state);
}
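+/* ask the firmware to clear the FUNC_RST_ING state of every VF */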
+static void hclge_clear_resetting_state(struct hclge_dev *hdev)
+{
+ u16 i;
+
+ for (i = 0; i < hdev->num_alloc_vport; i++) {
+ struct hclge_vport *vport = &hdev->vport[i];
+ int ret;
+
+ /* Send cmd to clear VF's FUNC_RST_ING */
+ ret = hclge_set_vf_rst(hdev, vport->vport_id, false);
+ if (ret)
+ dev_warn(&hdev->pdev->dev,
+ "clear vf(%d) rst failed %d!\n",
+ vport->vport_id, ret);
+ }
+}
+
static int hclge_init_ae_dev(struct hnae3_ae_dev *ae_dev)
{
struct pci_dev *pdev = ae_dev->pdev;
@@ -8143,6 +8680,7 @@ static int hclge_init_ae_dev(struct hnae3_ae_dev *ae_dev)
mutex_init(&hdev->vport_lock);
mutex_init(&hdev->vport_cfg_mutex);
+ spin_lock_init(&hdev->fd_rule_lock);
ret = hclge_pci_init(hdev);
if (ret) {
@@ -8270,13 +8808,6 @@ static int hclge_init_ae_dev(struct hnae3_ae_dev *ae_dev)
goto err_mdiobus_unreg;
}
- ret = hclge_hw_error_set_state(hdev, true);
- if (ret) {
- dev_err(&pdev->dev,
- "fail(%d) to enable hw error interrupts\n", ret);
- goto err_mdiobus_unreg;
- }
-
INIT_KFIFO(hdev->mac_tnl_log);
hclge_dcb_ops_set(hdev);
@@ -8288,6 +8819,22 @@ static int hclge_init_ae_dev(struct hnae3_ae_dev *ae_dev)
INIT_WORK(&hdev->mbx_service_task, hclge_mailbox_service_task);
hclge_clear_all_event_cause(hdev);
+ hclge_clear_resetting_state(hdev);
+
+ /* Log and clear the hw errors that have already occurred */
+ hclge_handle_all_hns_hw_errors(ae_dev);
+
+ /* request a delayed reset for error recovery, because an immediate
+ * global reset on one PF would affect the pending initialization of
+ * the other PFs
+ */
+ if (ae_dev->hw_err_reset_req) {
+ enum hnae3_reset_type reset_level;
+
+ reset_level = hclge_get_reset_level(ae_dev,
+ &ae_dev->hw_err_reset_req);
+ hclge_set_def_reset_request(ae_dev, reset_level);
+ mod_timer(&hdev->reset_timer, jiffies + HCLGE_RESET_INTERVAL);
+ }
/* Enable MISC vector(vector0) */
hclge_enable_vector(&hdev->misc_vector, true);
@@ -8342,6 +8889,7 @@ static int hclge_reset_ae_dev(struct hnae3_ae_dev *ae_dev)
hclge_stats_clear(hdev);
memset(hdev->vlan_table, 0, sizeof(hdev->vlan_table));
+ memset(hdev->vf_vlan_full, 0, sizeof(hdev->vf_vlan_full));
ret = hclge_cmd_init(hdev);
if (ret) {
@@ -8393,21 +8941,31 @@ static int hclge_reset_ae_dev(struct hnae3_ae_dev *ae_dev)
ret = hclge_init_fd_config(hdev);
if (ret) {
- dev_err(&pdev->dev,
- "fd table init fail, ret=%d\n", ret);
+ dev_err(&pdev->dev, "fd table init fail, ret=%d\n", ret);
return ret;
}
/* Re-enable the hw error interrupts because
- * the interrupts get disabled on core/global reset.
+ * the interrupts get disabled on global reset.
*/
- ret = hclge_hw_error_set_state(hdev, true);
+ ret = hclge_config_nic_hw_error(hdev, true);
if (ret) {
dev_err(&pdev->dev,
- "fail(%d) to re-enable HNS hw error interrupts\n", ret);
+ "fail(%d) to re-enable NIC hw error interrupts\n",
+ ret);
return ret;
}
+ if (hdev->roce_client) {
+ ret = hclge_config_rocee_ras_interrupt(hdev, true);
+ if (ret) {
+ dev_err(&pdev->dev,
+ "fail(%d) to re-enable roce ras interrupts\n",
+ ret);
+ return ret;
+ }
+ }
+
hclge_reset_vport_state(hdev);
dev_info(&pdev->dev, "Reset done, %s driver initialization finished.\n",
@@ -8432,8 +8990,11 @@ static void hclge_uninit_ae_dev(struct hnae3_ae_dev *ae_dev)
hclge_enable_vector(&hdev->misc_vector, false);
synchronize_irq(hdev->misc_vector.vector_irq);
+ /* Disable all hw interrupts */
hclge_config_mac_tnl_int(hdev, false);
- hclge_hw_error_set_state(hdev, false);
+ hclge_config_nic_hw_error(hdev, false);
+ hclge_config_rocee_ras_interrupt(hdev, false);
+
hclge_cmd_uninit(hdev);
hclge_misc_irq_uninit(hdev);
hclge_pci_uninit(hdev);
@@ -8478,15 +9039,16 @@ static int hclge_set_channels(struct hnae3_handle *handle, u32 new_tqps_num,
{
struct hclge_vport *vport = hclge_get_vport(handle);
struct hnae3_knic_private_info *kinfo = &vport->nic.kinfo;
+ u16 tc_offset[HCLGE_MAX_TC_NUM] = {0};
struct hclge_dev *hdev = vport->back;
+ u16 tc_size[HCLGE_MAX_TC_NUM] = {0};
int cur_rss_size = kinfo->rss_size;
int cur_tqps = kinfo->num_tqps;
- u16 tc_offset[HCLGE_MAX_TC_NUM];
u16 tc_valid[HCLGE_MAX_TC_NUM];
- u16 tc_size[HCLGE_MAX_TC_NUM];
u16 roundup_size;
u32 *rss_indir;
- int ret, i;
+ unsigned int i;
+ int ret;
kinfo->req_rss_size = new_tqps_num;
@@ -8571,10 +9133,12 @@ static int hclge_get_32_bit_regs(struct hclge_dev *hdev, u32 regs_num,
void *data)
{
#define HCLGE_32_BIT_REG_RTN_DATANUM 8
+#define HCLGE_32_BIT_DESC_NODATA_LEN 2
struct hclge_desc *desc;
u32 *reg_val = data;
__le32 *desc_data;
+ int nodata_num;
int cmd_num;
int i, k, n;
int ret;
@@ -8582,7 +9146,9 @@ static int hclge_get_32_bit_regs(struct hclge_dev *hdev, u32 regs_num,
if (regs_num == 0)
return 0;
- cmd_num = DIV_ROUND_UP(regs_num + 2, HCLGE_32_BIT_REG_RTN_DATANUM);
+ nodata_num = HCLGE_32_BIT_DESC_NODATA_LEN;
+ cmd_num = DIV_ROUND_UP(regs_num + nodata_num,
+ HCLGE_32_BIT_REG_RTN_DATANUM);
desc = kcalloc(cmd_num, sizeof(struct hclge_desc), GFP_KERNEL);
if (!desc)
return -ENOMEM;
@@ -8599,7 +9165,7 @@ static int hclge_get_32_bit_regs(struct hclge_dev *hdev, u32 regs_num,
for (i = 0; i < cmd_num; i++) {
if (i == 0) {
desc_data = (__le32 *)(&desc[i].data[0]);
- n = HCLGE_32_BIT_REG_RTN_DATANUM - 2;
+ n = HCLGE_32_BIT_REG_RTN_DATANUM - nodata_num;
} else {
desc_data = (__le32 *)(&desc[i]);
n = HCLGE_32_BIT_REG_RTN_DATANUM;
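Illustration, not part of the patch: the hunk above replaces the magic "+ 2" with HCLGE_32_BIT_DESC_NODATA_LEN, because the first descriptor of a register-dump command loses that many 32-bit words to command metadata while every later descriptor carries the full HCLGE_32_BIT_REG_RTN_DATANUM. A minimal userspace sketch of that sizing math, reusing only the constants shown above:
#include <stdio.h>

#define HCLGE_32_BIT_REG_RTN_DATANUM 8  /* 32-bit words per descriptor  */
#define HCLGE_32_BIT_DESC_NODATA_LEN 2  /* words reserved in first desc */

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

static unsigned int desc_needed(unsigned int regs_num)
{
        /* The first descriptor gives up NODATA_LEN words to the header, so
         * the payload spread over descriptors is regs_num + NODATA_LEN.
         */
        return DIV_ROUND_UP(regs_num + HCLGE_32_BIT_DESC_NODATA_LEN,
                            HCLGE_32_BIT_REG_RTN_DATANUM);
}

int main(void)
{
        printf("60 regs -> %u descriptors\n", desc_needed(60)); /* prints 8 */
        return 0;
}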
@@ -8621,10 +9187,12 @@ static int hclge_get_64_bit_regs(struct hclge_dev *hdev, u32 regs_num,
void *data)
{
#define HCLGE_64_BIT_REG_RTN_DATANUM 4
+#define HCLGE_64_BIT_DESC_NODATA_LEN 1
struct hclge_desc *desc;
u64 *reg_val = data;
__le64 *desc_data;
+ int nodata_len;
int cmd_num;
int i, k, n;
int ret;
@@ -8632,7 +9200,9 @@ static int hclge_get_64_bit_regs(struct hclge_dev *hdev, u32 regs_num,
if (regs_num == 0)
return 0;
- cmd_num = DIV_ROUND_UP(regs_num + 1, HCLGE_64_BIT_REG_RTN_DATANUM);
+ nodata_len = HCLGE_64_BIT_DESC_NODATA_LEN;
+ cmd_num = DIV_ROUND_UP(regs_num + nodata_len,
+ HCLGE_64_BIT_REG_RTN_DATANUM);
desc = kcalloc(cmd_num, sizeof(struct hclge_desc), GFP_KERNEL);
if (!desc)
return -ENOMEM;
@@ -8649,7 +9219,7 @@ static int hclge_get_64_bit_regs(struct hclge_dev *hdev, u32 regs_num,
for (i = 0; i < cmd_num; i++) {
if (i == 0) {
desc_data = (__le64 *)(&desc[i].data[0]);
- n = HCLGE_64_BIT_REG_RTN_DATANUM - 1;
+ n = HCLGE_64_BIT_REG_RTN_DATANUM - nodata_len;
} else {
desc_data = (__le64 *)(&desc[i]);
n = HCLGE_64_BIT_REG_RTN_DATANUM;
@@ -8876,6 +9446,7 @@ static const struct hnae3_ae_ops hclge_ops = {
.set_autoneg = hclge_set_autoneg,
.get_autoneg = hclge_get_autoneg,
.restart_autoneg = hclge_restart_autoneg,
+ .halt_autoneg = hclge_halt_autoneg,
.get_pauseparam = hclge_get_pauseparam,
.set_pauseparam = hclge_set_pauseparam,
.set_mtu = hclge_set_mtu,
@@ -8892,6 +9463,7 @@ static const struct hnae3_ae_ops hclge_ops = {
.set_vf_vlan_filter = hclge_set_vf_vlan_filter,
.enable_hw_strip_rxvtag = hclge_en_hw_strip_rxvtag,
.reset_event = hclge_reset_event,
+ .get_reset_level = hclge_get_reset_level,
.set_default_reset_request = hclge_set_def_reset_request,
.get_tqps_and_rss_info = hclge_get_tqps_and_rss_info,
.set_channels = hclge_set_channels,
@@ -8908,6 +9480,7 @@ static const struct hnae3_ae_ops hclge_ops = {
.get_fd_all_rules = hclge_get_all_rules,
.restore_fd_rules = hclge_restore_fd_entries,
.enable_fd = hclge_enable_fd,
+ .add_arfs_entry = hclge_add_fd_entry_by_arfs,
.dbg_run_cmd = hclge_dbg_run_cmd,
.handle_hw_ras_error = hclge_handle_hw_ras_error,
.get_hw_reset_stat = hclge_get_hw_reset_stat,
@@ -8918,6 +9491,7 @@ static const struct hnae3_ae_ops hclge_ops = {
.set_timer_task = hclge_set_timer_task,
.mac_connect_phy = hclge_mac_connect_phy,
.mac_disconnect_phy = hclge_mac_disconnect_phy,
+ .restore_vlan_table = hclge_restore_vlan_table,
};
static struct hnae3_ae_algo ae_algo = {
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
index dd06b11187b0..6a12285f4c76 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
@@ -201,6 +201,8 @@ enum HCLGE_DEV_STATE {
HCLGE_STATE_DOWN,
HCLGE_STATE_DISABLED,
HCLGE_STATE_REMOVING,
+ HCLGE_STATE_NIC_REGISTERED,
+ HCLGE_STATE_ROCE_REGISTERED,
HCLGE_STATE_SERVICE_INITED,
HCLGE_STATE_SERVICE_SCHED,
HCLGE_STATE_RST_SERVICE_SCHED,
@@ -472,6 +474,7 @@ enum HCLGE_FD_KEY_TYPE {
enum HCLGE_FD_STAGE {
HCLGE_FD_STAGE_1,
HCLGE_FD_STAGE_2,
+ MAX_STAGE_NUM,
};
/* OUTER_XXX indicates tuples in tunnel header of tunnel packet
@@ -526,7 +529,7 @@ enum HCLGE_FD_META_DATA {
struct key_info {
u8 key_type;
- u8 key_length;
+ u8 key_length; /* use bit as unit */
};
static const struct key_info meta_data_key_info[] = {
@@ -578,6 +581,16 @@ static const struct key_info tuple_key_info[] = {
#define MAX_KEY_BYTES (MAX_KEY_DWORDS * 4)
#define MAX_META_DATA_LENGTH 32
+/* Assigned by firmware; the real filter number for each PF may be smaller. */
+#define MAX_FD_FILTER_NUM 4096
+#define HCLGE_FD_ARFS_EXPIRE_TIMER_INTERVAL 5
+
+enum HCLGE_FD_ACTIVE_RULE_TYPE {
+ HCLGE_FD_RULE_NONE,
+ HCLGE_FD_ARFS_ACTIVE,
+ HCLGE_FD_EP_ACTIVE,
+};
+
enum HCLGE_FD_PACKET_TYPE {
NIC_PACKET,
ROCE_PACKET,
@@ -600,18 +613,23 @@ struct hclge_fd_key_cfg {
struct hclge_fd_cfg {
u8 fd_mode;
- u16 max_key_length;
+ u16 max_key_length; /* use bit as unit */
u32 proto_support;
- u32 rule_num[2]; /* rule entry number */
- u16 cnt_num[2]; /* rule hit counter number */
- struct hclge_fd_key_cfg key_cfg[2];
+ u32 rule_num[MAX_STAGE_NUM]; /* rule entry number */
+ u16 cnt_num[MAX_STAGE_NUM]; /* rule hit counter number */
+ struct hclge_fd_key_cfg key_cfg[MAX_STAGE_NUM];
};
+#define IPV4_INDEX 3
+#define IPV6_SIZE 4
struct hclge_fd_rule_tuples {
- u8 src_mac[6];
- u8 dst_mac[6];
- u32 src_ip[4];
- u32 dst_ip[4];
+ u8 src_mac[ETH_ALEN];
+ u8 dst_mac[ETH_ALEN];
+ /* Compatible with both IPv4 and IPv6 addresses.
+ * An IPv4 address is stored in src/dst_ip[3].
+ */
+ u32 src_ip[IPV6_SIZE];
+ u32 dst_ip[IPV6_SIZE];
u16 src_port;
u16 dst_port;
u16 vlan_tag1;
@@ -630,6 +648,8 @@ struct hclge_fd_rule {
u16 vf_id;
u16 queue_id;
u16 location;
+ u16 flow_id; /* only used for arfs */
+ enum HCLGE_FD_ACTIVE_RULE_TYPE rule_type;
};
struct hclge_fd_ad_data {
@@ -679,6 +699,20 @@ struct hclge_mac_tnl_stats {
u32 status;
};
+#define HCLGE_RESET_INTERVAL (10 * HZ)
+#define HCLGE_WAIT_RESET_DONE 100
+
+#pragma pack(1)
+struct hclge_vf_vlan_cfg {
+ u8 mbx_cmd;
+ u8 subcode;
+ u8 is_kill;
+ u16 vlan;
+ u16 proto;
+};
+
+#pragma pack()
+
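Illustration, not part of the patch: the #pragma pack(1) struct above lets the PF overlay hclge_vf_vlan_cfg directly onto the raw mailbox message instead of memcpy'ing fields out of msg[] at hard-coded offsets (the hclge_mbx.c hunk further down does exactly that cast). A hedged userspace sketch of the same overlay pattern; the byte layout follows the struct above, everything else is invented for the example:
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#pragma pack(1)
struct vf_vlan_cfg {            /* mirrors hclge_vf_vlan_cfg above */
        uint8_t  mbx_cmd;
        uint8_t  subcode;
        uint8_t  is_kill;
        uint16_t vlan;
        uint16_t proto;
};
#pragma pack()

int main(void)
{
        uint8_t msg[7] = { 0 };
        uint16_t vlan = 100, proto = 0x8100;

        msg[0] = 1;                              /* mbx_cmd   */
        msg[1] = 2;                              /* subcode   */
        msg[2] = 1;                              /* is_kill   */
        memcpy(&msg[3], &vlan, sizeof(vlan));    /* vlan id   */
        memcpy(&msg[5], &proto, sizeof(proto));  /* ethertype */

        /* With pack(1) the struct fields line up with the raw bytes; the
         * driver does the equivalent cast on its descriptor buffer.
         */
        struct vf_vlan_cfg *cfg = (struct vf_vlan_cfg *)msg;

        printf("kill=%u vlan=%u proto=0x%x\n",
               cfg->is_kill, cfg->vlan, cfg->proto);
        return 0;
}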
/* For each bit of TCAM entry, it uses a pair of 'x' and
* 'y' to indicate which value to match, like below:
* ----------------------------------
@@ -806,10 +840,15 @@ struct hclge_dev {
struct hclge_vlan_type_cfg vlan_type_cfg;
unsigned long vlan_table[VLAN_N_VID][BITS_TO_LONGS(HCLGE_VPORT_NUM)];
+ unsigned long vf_vlan_full[BITS_TO_LONGS(HCLGE_VPORT_NUM)];
struct hclge_fd_cfg fd_cfg;
struct hlist_head fd_rule_list;
+ spinlock_t fd_rule_lock; /* protect fd_rule_list and fd_bmap */
u16 hclge_fd_rule_num;
+ u16 fd_arfs_expire_timer;
+ unsigned long fd_bmap[BITS_TO_LONGS(MAX_FD_FILTER_NUM)];
+ enum HCLGE_FD_ACTIVE_RULE_TYPE fd_active_type;
u8 fd_en;
u16 wanted_umv_size;
@@ -891,13 +930,14 @@ struct hclge_vport {
u32 bw_limit; /* VSI BW Limit (0 = disabled) */
u8 dwrr;
+ unsigned long vlan_del_fail_bmap[BITS_TO_LONGS(VLAN_N_VID)];
struct hclge_port_base_vlan_config port_base_vlan_cfg;
struct hclge_tx_vtag_cfg txvlan_cfg;
struct hclge_rx_vtag_cfg rxvlan_cfg;
u16 used_umv_num;
- int vport_id;
+ u16 vport_id;
struct hclge_dev *back; /* Back reference to associated dev */
struct hnae3_handle nic;
struct hnae3_handle roce;
@@ -959,7 +999,7 @@ int hclge_func_reset_cmd(struct hclge_dev *hdev, int func_id);
int hclge_vport_start(struct hclge_vport *vport);
void hclge_vport_stop(struct hclge_vport *vport);
int hclge_set_vport_mtu(struct hclge_vport *vport, int new_mtu);
-int hclge_dbg_run_cmd(struct hnae3_handle *handle, char *cmd_buf);
+int hclge_dbg_run_cmd(struct hnae3_handle *handle, const char *cmd_buf);
u16 hclge_covert_handle_qid_global(struct hnae3_handle *handle, u16 queue_id);
int hclge_notify_client(struct hclge_dev *hdev,
enum hnae3_reset_notify_type type);
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
index 0e04e63f2a94..a38ac7cfe16b 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
@@ -29,6 +29,10 @@ static int hclge_gen_resp_to_vf(struct hclge_vport *vport,
"PF fail to gen resp to VF len %d exceeds max len %d\n",
resp_data_len,
HCLGE_MBX_MAX_RESP_DATA_SIZE);
+ /* If resp_data_len is too long, clamp it to the maximum length
+ * and still return the msg to the VF.
+ */
+ resp_data_len = HCLGE_MBX_MAX_RESP_DATA_SIZE;
}
hclge_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_MBX_PF_TO_VF, false);
@@ -93,7 +97,7 @@ int hclge_inform_reset_assert_to_vf(struct hclge_vport *vport)
else if (hdev->reset_type == HNAE3_FLR_RESET)
reset_type = HNAE3_VF_FULL_RESET;
else
- return -EINVAL;
+ reset_type = HNAE3_VF_FUNC_RESET;
memcpy(&msg_data[0], &reset_type, sizeof(u16));
@@ -192,12 +196,10 @@ static int hclge_map_unmap_ring_to_vf_vector(struct hclge_vport *vport, bool en,
return ret;
ret = hclge_bind_ring_with_vector(vport, vector_id, en, &ring_chain);
- if (ret)
- return ret;
hclge_free_vector_ring_chain(&ring_chain);
- return 0;
+ return ret;
}
static int hclge_set_vf_promisc_mode(struct hclge_vport *vport,
@@ -308,21 +310,23 @@ int hclge_push_vf_port_base_vlan_info(struct hclge_vport *vport, u8 vfid,
static int hclge_set_vf_vlan_cfg(struct hclge_vport *vport,
struct hclge_mbx_vf_to_pf_cmd *mbx_req)
{
+ struct hclge_vf_vlan_cfg *msg_cmd;
int status = 0;
- if (mbx_req->msg[1] == HCLGE_MBX_VLAN_FILTER) {
+ msg_cmd = (struct hclge_vf_vlan_cfg *)mbx_req->msg;
+ if (msg_cmd->subcode == HCLGE_MBX_VLAN_FILTER) {
struct hnae3_handle *handle = &vport->nic;
u16 vlan, proto;
bool is_kill;
- is_kill = !!mbx_req->msg[2];
- memcpy(&vlan, &mbx_req->msg[3], sizeof(vlan));
- memcpy(&proto, &mbx_req->msg[5], sizeof(proto));
+ is_kill = !!msg_cmd->is_kill;
+ vlan = msg_cmd->vlan;
+ proto = msg_cmd->proto;
status = hclge_set_vlan_filter(handle, cpu_to_be16(proto),
vlan, is_kill);
- } else if (mbx_req->msg[1] == HCLGE_MBX_VLAN_RX_OFF_CFG) {
+ } else if (msg_cmd->subcode == HCLGE_MBX_VLAN_RX_OFF_CFG) {
struct hnae3_handle *handle = &vport->nic;
- bool en = mbx_req->msg[2] ? true : false;
+ bool en = msg_cmd->is_kill ? true : false;
status = hclge_en_hw_strip_rxvtag(handle, en);
} else if (mbx_req->msg[1] == HCLGE_MBX_PORT_BASE_VLAN_CFG) {
@@ -365,13 +369,14 @@ static int hclge_get_vf_tcinfo(struct hclge_vport *vport,
{
struct hnae3_knic_private_info *kinfo = &vport->nic.kinfo;
u8 vf_tc_map = 0;
- int i, ret;
+ unsigned int i;
+ int ret;
for (i = 0; i < kinfo->num_tc; i++)
vf_tc_map |= BIT(i);
ret = hclge_gen_resp_to_vf(vport, mbx_req, 0, &vf_tc_map,
- sizeof(u8));
+ sizeof(vf_tc_map));
return ret;
}
@@ -553,7 +558,8 @@ void hclge_mbx_handler(struct hclge_dev *hdev)
struct hclge_mbx_vf_to_pf_cmd *req;
struct hclge_vport *vport;
struct hclge_desc *desc;
- int ret, flag;
+ unsigned int flag;
+ int ret;
/* handle all the mailbox requests in the queue */
while (!hclge_cmd_crq_empty(&hdev->hw)) {
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c
index 1e8134892d77..abb1b438564e 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c
@@ -55,9 +55,9 @@ static int hclge_mdio_write(struct mii_bus *bus, int phyid, int regnum,
mdio_cmd = (struct hclge_mdio_cfg_cmd *)desc.data;
hnae3_set_field(mdio_cmd->phyid, HCLGE_MDIO_PHYID_M,
- HCLGE_MDIO_PHYID_S, phyid);
+ HCLGE_MDIO_PHYID_S, (u32)phyid);
hnae3_set_field(mdio_cmd->phyad, HCLGE_MDIO_PHYREG_M,
- HCLGE_MDIO_PHYREG_S, regnum);
+ HCLGE_MDIO_PHYREG_S, (u32)regnum);
hnae3_set_bit(mdio_cmd->ctrl_bit, HCLGE_MDIO_CTRL_START_B, 1);
hnae3_set_field(mdio_cmd->ctrl_bit, HCLGE_MDIO_CTRL_ST_M,
@@ -93,9 +93,9 @@ static int hclge_mdio_read(struct mii_bus *bus, int phyid, int regnum)
mdio_cmd = (struct hclge_mdio_cfg_cmd *)desc.data;
hnae3_set_field(mdio_cmd->phyid, HCLGE_MDIO_PHYID_M,
- HCLGE_MDIO_PHYID_S, phyid);
+ HCLGE_MDIO_PHYID_S, (u32)phyid);
hnae3_set_field(mdio_cmd->phyad, HCLGE_MDIO_PHYREG_M,
- HCLGE_MDIO_PHYREG_S, regnum);
+ HCLGE_MDIO_PHYREG_S, (u32)regnum);
hnae3_set_bit(mdio_cmd->ctrl_bit, HCLGE_MDIO_CTRL_START_B, 1);
hnae3_set_field(mdio_cmd->ctrl_bit, HCLGE_MDIO_CTRL_ST_M,
@@ -224,6 +224,13 @@ int hclge_mac_connect_phy(struct hnae3_handle *handle)
linkmode_and(phydev->supported, phydev->supported, mask);
linkmode_copy(phydev->advertising, phydev->supported);
+ /* The supported flags are Pause and Asym Pause, but the default
+ * advertising should be rx on / tx on, so clear Asym Pause from
+ * the advertising flags.
+ */
+ linkmode_clear_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,
+ phydev->advertising);
+
return 0;
}
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
index a7bbb6d3091a..3f41fa2bc414 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
@@ -43,18 +43,23 @@ enum hclge_shaper_level {
static int hclge_shaper_para_calc(u32 ir, u8 shaper_level,
u8 *ir_b, u8 *ir_u, u8 *ir_s)
{
+#define DIVISOR_CLK (1000 * 8)
+#define DIVISOR_IR_B_126 (126 * DIVISOR_CLK)
+
const u16 tick_array[HCLGE_SHAPER_LVL_CNT] = {
6 * 256, /* Prioriy level */
6 * 32, /* Prioriy group level */
6 * 8, /* Port level */
6 * 256 /* Qset level */
};
- u8 ir_u_calc = 0, ir_s_calc = 0;
+ u8 ir_u_calc = 0;
+ u8 ir_s_calc = 0;
u32 ir_calc;
u32 tick;
/* Calc tick */
- if (shaper_level >= HCLGE_SHAPER_LVL_CNT)
+ if (shaper_level >= HCLGE_SHAPER_LVL_CNT ||
+ ir > HCLGE_ETHER_MAX_RATE)
return -EINVAL;
tick = tick_array[shaper_level];
@@ -66,7 +71,7 @@ static int hclge_shaper_para_calc(u32 ir, u8 shaper_level,
* ir_calc = ---------------- * 1000
* tick * 1
*/
- ir_calc = (1008000 + (tick >> 1) - 1) / tick;
+ ir_calc = (DIVISOR_IR_B_126 + (tick >> 1) - 1) / tick;
if (ir_calc == ir) {
*ir_b = 126;
@@ -78,27 +83,28 @@ static int hclge_shaper_para_calc(u32 ir, u8 shaper_level,
/* Increasing the denominator to select ir_s value */
while (ir_calc > ir) {
ir_s_calc++;
- ir_calc = 1008000 / (tick * (1 << ir_s_calc));
+ ir_calc = DIVISOR_IR_B_126 / (tick * (1 << ir_s_calc));
}
if (ir_calc == ir)
*ir_b = 126;
else
- *ir_b = (ir * tick * (1 << ir_s_calc) + 4000) / 8000;
+ *ir_b = (ir * tick * (1 << ir_s_calc) +
+ (DIVISOR_CLK >> 1)) / DIVISOR_CLK;
} else {
/* Increasing the numerator to select ir_u value */
u32 numerator;
while (ir_calc < ir) {
ir_u_calc++;
- numerator = 1008000 * (1 << ir_u_calc);
+ numerator = DIVISOR_IR_B_126 * (1 << ir_u_calc);
ir_calc = (numerator + (tick >> 1)) / tick;
}
if (ir_calc == ir) {
*ir_b = 126;
} else {
- u32 denominator = (8000 * (1 << --ir_u_calc));
+ u32 denominator = (DIVISOR_CLK * (1 << --ir_u_calc));
*ir_b = (ir * tick + (denominator >> 1)) / denominator;
}
}
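Illustration, not part of the patch: the hunk above only replaces the magic numbers 1008000 and 8000 with DIVISOR_IR_B_126 (126 * DIVISOR_CLK) and DIVISOR_CLK (1000 * 8). Reading the code, the underlying rate model appears to be rate ~= ir_b * DIVISOR_CLK * 2^ir_u / (tick * 2^ir_s), with round-to-nearest division, so ir_b = 126 and ir_u = ir_s = 0 gives the base rate DIVISOR_IR_B_126 / tick. A small userspace sketch of that model under those assumptions (units abstract, tick value taken from the table above):
#include <stdint.h>
#include <stdio.h>

#define DIVISOR_CLK      (1000 * 8)
#define DIVISOR_IR_B_126 (126 * DIVISOR_CLK)

static uint32_t shaper_rate(uint8_t ir_b, uint8_t ir_u, uint8_t ir_s,
                            uint16_t tick)
{
        /* Round to nearest, matching the "+ (x >> 1)" style in the driver. */
        uint64_t num = ((uint64_t)ir_b * DIVISOR_CLK) << ir_u;
        uint64_t den = (uint64_t)tick << ir_s;

        return (uint32_t)((num + (den >> 1)) / den);
}

int main(void)
{
        /* Port-level tick is 6 * 8 = 48 in the tick_array above. */
        printf("base rate at port level: %u\n", shaper_rate(126, 0, 0, 48));
        return 0;
}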
@@ -119,14 +125,13 @@ static int hclge_pfc_stats_get(struct hclge_dev *hdev,
opcode == HCLGE_OPC_QUERY_PFC_TX_PKT_CNT))
return -EINVAL;
- for (i = 0; i < HCLGE_TM_PFC_PKT_GET_CMD_NUM; i++) {
+ for (i = 0; i < HCLGE_TM_PFC_PKT_GET_CMD_NUM - 1; i++) {
hclge_cmd_setup_basic_desc(&desc[i], opcode, true);
- if (i != (HCLGE_TM_PFC_PKT_GET_CMD_NUM - 1))
- desc[i].flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT);
- else
- desc[i].flag &= ~cpu_to_le16(HCLGE_CMD_FLAG_NEXT);
+ desc[i].flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT);
}
+ hclge_cmd_setup_basic_desc(&desc[i], opcode, true);
+
ret = hclge_cmd_send(&hdev->hw, desc, HCLGE_TM_PFC_PKT_GET_CMD_NUM);
if (ret)
return ret;
@@ -219,8 +224,7 @@ int hclge_pause_addr_cfg(struct hclge_dev *hdev, const u8 *mac_addr)
trans_gap = pause_param->pause_trans_gap;
trans_time = le16_to_cpu(pause_param->pause_trans_time);
- return hclge_pause_param_cfg(hdev, mac_addr, trans_gap,
- trans_time);
+ return hclge_pause_param_cfg(hdev, mac_addr, trans_gap, trans_time);
}
static int hclge_fill_pri_array(struct hclge_dev *hdev, u8 *pri, u8 pri_id)
@@ -361,29 +365,36 @@ static int hclge_tm_qs_weight_cfg(struct hclge_dev *hdev, u16 qs_id,
return hclge_cmd_send(&hdev->hw, &desc, 1);
}
+static u32 hclge_tm_get_shapping_para(u8 ir_b, u8 ir_u, u8 ir_s,
+ u8 bs_b, u8 bs_s)
+{
+ u32 shapping_para = 0;
+
+ hclge_tm_set_field(shapping_para, IR_B, ir_b);
+ hclge_tm_set_field(shapping_para, IR_U, ir_u);
+ hclge_tm_set_field(shapping_para, IR_S, ir_s);
+ hclge_tm_set_field(shapping_para, BS_B, bs_b);
+ hclge_tm_set_field(shapping_para, BS_S, bs_s);
+
+ return shapping_para;
+}
+
static int hclge_tm_pg_shapping_cfg(struct hclge_dev *hdev,
enum hclge_shap_bucket bucket, u8 pg_id,
- u8 ir_b, u8 ir_u, u8 ir_s, u8 bs_b, u8 bs_s)
+ u32 shapping_para)
{
struct hclge_pg_shapping_cmd *shap_cfg_cmd;
enum hclge_opcode_type opcode;
struct hclge_desc desc;
- u32 shapping_para = 0;
opcode = bucket ? HCLGE_OPC_TM_PG_P_SHAPPING :
- HCLGE_OPC_TM_PG_C_SHAPPING;
+ HCLGE_OPC_TM_PG_C_SHAPPING;
hclge_cmd_setup_basic_desc(&desc, opcode, false);
shap_cfg_cmd = (struct hclge_pg_shapping_cmd *)desc.data;
shap_cfg_cmd->pg_id = pg_id;
- hclge_tm_set_field(shapping_para, IR_B, ir_b);
- hclge_tm_set_field(shapping_para, IR_U, ir_u);
- hclge_tm_set_field(shapping_para, IR_S, ir_s);
- hclge_tm_set_field(shapping_para, BS_B, bs_b);
- hclge_tm_set_field(shapping_para, BS_S, bs_s);
-
shap_cfg_cmd->pg_shapping_para = cpu_to_le32(shapping_para);
return hclge_cmd_send(&hdev->hw, &desc, 1);
@@ -397,7 +408,7 @@ static int hclge_tm_port_shaper_cfg(struct hclge_dev *hdev)
u8 ir_u, ir_b, ir_s;
int ret;
- ret = hclge_shaper_para_calc(HCLGE_ETHER_MAX_RATE,
+ ret = hclge_shaper_para_calc(hdev->hw.mac.speed,
HCLGE_SHAPER_LVL_PORT,
&ir_b, &ir_u, &ir_s);
if (ret)
@@ -406,11 +417,9 @@ static int hclge_tm_port_shaper_cfg(struct hclge_dev *hdev)
hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_TM_PORT_SHAPPING, false);
shap_cfg_cmd = (struct hclge_port_shapping_cmd *)desc.data;
- hclge_tm_set_field(shapping_para, IR_B, ir_b);
- hclge_tm_set_field(shapping_para, IR_U, ir_u);
- hclge_tm_set_field(shapping_para, IR_S, ir_s);
- hclge_tm_set_field(shapping_para, BS_B, HCLGE_SHAPER_BS_U_DEF);
- hclge_tm_set_field(shapping_para, BS_S, HCLGE_SHAPER_BS_S_DEF);
+ shapping_para = hclge_tm_get_shapping_para(ir_b, ir_u, ir_s,
+ HCLGE_SHAPER_BS_U_DEF,
+ HCLGE_SHAPER_BS_S_DEF);
shap_cfg_cmd->port_shapping_para = cpu_to_le32(shapping_para);
@@ -419,16 +428,14 @@ static int hclge_tm_port_shaper_cfg(struct hclge_dev *hdev)
static int hclge_tm_pri_shapping_cfg(struct hclge_dev *hdev,
enum hclge_shap_bucket bucket, u8 pri_id,
- u8 ir_b, u8 ir_u, u8 ir_s,
- u8 bs_b, u8 bs_s)
+ u32 shapping_para)
{
struct hclge_pri_shapping_cmd *shap_cfg_cmd;
enum hclge_opcode_type opcode;
struct hclge_desc desc;
- u32 shapping_para = 0;
opcode = bucket ? HCLGE_OPC_TM_PRI_P_SHAPPING :
- HCLGE_OPC_TM_PRI_C_SHAPPING;
+ HCLGE_OPC_TM_PRI_C_SHAPPING;
hclge_cmd_setup_basic_desc(&desc, opcode, false);
@@ -436,12 +443,6 @@ static int hclge_tm_pri_shapping_cfg(struct hclge_dev *hdev,
shap_cfg_cmd->pri_id = pri_id;
- hclge_tm_set_field(shapping_para, IR_B, ir_b);
- hclge_tm_set_field(shapping_para, IR_U, ir_u);
- hclge_tm_set_field(shapping_para, IR_S, ir_s);
- hclge_tm_set_field(shapping_para, BS_B, bs_b);
- hclge_tm_set_field(shapping_para, BS_S, bs_s);
-
shap_cfg_cmd->pri_shapping_para = cpu_to_le32(shapping_para);
return hclge_cmd_send(&hdev->hw, &desc, 1);
@@ -531,6 +532,7 @@ static void hclge_tm_vport_tc_info_update(struct hclge_vport *vport)
max_rss_size = min_t(u16, hdev->rss_size_max,
vport->alloc_tqps / kinfo->num_tc);
+ /* Set to user value, no larger than max_rss_size. */
if (kinfo->req_rss_size != kinfo->rss_size && kinfo->req_rss_size &&
kinfo->req_rss_size <= max_rss_size) {
dev_info(&hdev->pdev->dev, "rss changes from %d to %d\n",
@@ -538,6 +540,7 @@ static void hclge_tm_vport_tc_info_update(struct hclge_vport *vport)
kinfo->rss_size = kinfo->req_rss_size;
} else if (kinfo->rss_size > max_rss_size ||
(!kinfo->req_rss_size && kinfo->rss_size < max_rss_size)) {
+ /* Set to the maximum specification value (max_rss_size). */
dev_info(&hdev->pdev->dev, "rss changes from %d to %d\n",
kinfo->rss_size, max_rss_size);
kinfo->rss_size = max_rss_size;
@@ -595,8 +598,10 @@ static void hclge_tm_tc_info_init(struct hclge_dev *hdev)
hdev->tm_info.prio_tc[i] =
(i >= hdev->tm_info.num_tc) ? 0 : i;
- /* DCB is enabled if we have more than 1 TC */
- if (hdev->tm_info.num_tc > 1)
+ /* DCB is enabled if we have more than 1 TC or pfc_en is
+ * non-zero.
+ */
+ if (hdev->tm_info.num_tc > 1 || hdev->tm_info.pfc_en)
hdev->flag |= HCLGE_FLAG_DCB_ENABLE;
else
hdev->flag &= ~HCLGE_FLAG_DCB_ENABLE;
@@ -604,12 +609,14 @@ static void hclge_tm_tc_info_init(struct hclge_dev *hdev)
static void hclge_tm_pg_info_init(struct hclge_dev *hdev)
{
+#define BW_PERCENT 100
+
u8 i;
for (i = 0; i < hdev->tm_info.num_pg; i++) {
int k;
- hdev->tm_info.pg_dwrr[i] = i ? 0 : 100;
+ hdev->tm_info.pg_dwrr[i] = i ? 0 : BW_PERCENT;
hdev->tm_info.pg_info[i].pg_id = i;
hdev->tm_info.pg_info[i].pg_sch_mode = HCLGE_SCH_MODE_DWRR;
@@ -621,7 +628,7 @@ static void hclge_tm_pg_info_init(struct hclge_dev *hdev)
hdev->tm_info.pg_info[i].tc_bit_map = hdev->hw_tc_map;
for (k = 0; k < hdev->tm_info.num_tc; k++)
- hdev->tm_info.pg_info[i].tc_dwrr[k] = 100;
+ hdev->tm_info.pg_info[i].tc_dwrr[k] = BW_PERCENT;
}
}
@@ -682,6 +689,7 @@ static int hclge_tm_pg_to_pri_map(struct hclge_dev *hdev)
static int hclge_tm_pg_shaper_cfg(struct hclge_dev *hdev)
{
u8 ir_u, ir_b, ir_s;
+ u32 shaper_para;
int ret;
u32 i;
@@ -699,18 +707,21 @@ static int hclge_tm_pg_shaper_cfg(struct hclge_dev *hdev)
if (ret)
return ret;
+ shaper_para = hclge_tm_get_shapping_para(0, 0, 0,
+ HCLGE_SHAPER_BS_U_DEF,
+ HCLGE_SHAPER_BS_S_DEF);
ret = hclge_tm_pg_shapping_cfg(hdev,
HCLGE_TM_SHAP_C_BUCKET, i,
- 0, 0, 0, HCLGE_SHAPER_BS_U_DEF,
- HCLGE_SHAPER_BS_S_DEF);
+ shaper_para);
if (ret)
return ret;
+ shaper_para = hclge_tm_get_shapping_para(ir_b, ir_u, ir_s,
+ HCLGE_SHAPER_BS_U_DEF,
+ HCLGE_SHAPER_BS_S_DEF);
ret = hclge_tm_pg_shapping_cfg(hdev,
HCLGE_TM_SHAP_P_BUCKET, i,
- ir_b, ir_u, ir_s,
- HCLGE_SHAPER_BS_U_DEF,
- HCLGE_SHAPER_BS_S_DEF);
+ shaper_para);
if (ret)
return ret;
}
@@ -730,8 +741,7 @@ static int hclge_tm_pg_dwrr_cfg(struct hclge_dev *hdev)
/* pg to prio */
for (i = 0; i < hdev->tm_info.num_pg; i++) {
/* Cfg dwrr */
- ret = hclge_tm_pg_weight_cfg(hdev, i,
- hdev->tm_info.pg_dwrr[i]);
+ ret = hclge_tm_pg_weight_cfg(hdev, i, hdev->tm_info.pg_dwrr[i]);
if (ret)
return ret;
}
@@ -811,6 +821,7 @@ static int hclge_tm_pri_q_qs_cfg(struct hclge_dev *hdev)
static int hclge_tm_pri_tc_base_shaper_cfg(struct hclge_dev *hdev)
{
u8 ir_u, ir_b, ir_s;
+ u32 shaper_para;
int ret;
u32 i;
@@ -822,17 +833,19 @@ static int hclge_tm_pri_tc_base_shaper_cfg(struct hclge_dev *hdev)
if (ret)
return ret;
- ret = hclge_tm_pri_shapping_cfg(
- hdev, HCLGE_TM_SHAP_C_BUCKET, i,
- 0, 0, 0, HCLGE_SHAPER_BS_U_DEF,
- HCLGE_SHAPER_BS_S_DEF);
+ shaper_para = hclge_tm_get_shapping_para(0, 0, 0,
+ HCLGE_SHAPER_BS_U_DEF,
+ HCLGE_SHAPER_BS_S_DEF);
+ ret = hclge_tm_pri_shapping_cfg(hdev, HCLGE_TM_SHAP_C_BUCKET, i,
+ shaper_para);
if (ret)
return ret;
- ret = hclge_tm_pri_shapping_cfg(
- hdev, HCLGE_TM_SHAP_P_BUCKET, i,
- ir_b, ir_u, ir_s, HCLGE_SHAPER_BS_U_DEF,
- HCLGE_SHAPER_BS_S_DEF);
+ shaper_para = hclge_tm_get_shapping_para(ir_b, ir_u, ir_s,
+ HCLGE_SHAPER_BS_U_DEF,
+ HCLGE_SHAPER_BS_S_DEF);
+ ret = hclge_tm_pri_shapping_cfg(hdev, HCLGE_TM_SHAP_P_BUCKET, i,
+ shaper_para);
if (ret)
return ret;
}
@@ -844,6 +857,7 @@ static int hclge_tm_pri_vnet_base_shaper_pri_cfg(struct hclge_vport *vport)
{
struct hclge_dev *hdev = vport->back;
u8 ir_u, ir_b, ir_s;
+ u32 shaper_para;
int ret;
ret = hclge_shaper_para_calc(vport->bw_limit, HCLGE_SHAPER_LVL_VF,
@@ -851,18 +865,19 @@ static int hclge_tm_pri_vnet_base_shaper_pri_cfg(struct hclge_vport *vport)
if (ret)
return ret;
+ shaper_para = hclge_tm_get_shapping_para(0, 0, 0,
+ HCLGE_SHAPER_BS_U_DEF,
+ HCLGE_SHAPER_BS_S_DEF);
ret = hclge_tm_pri_shapping_cfg(hdev, HCLGE_TM_SHAP_C_BUCKET,
- vport->vport_id,
- 0, 0, 0, HCLGE_SHAPER_BS_U_DEF,
- HCLGE_SHAPER_BS_S_DEF);
+ vport->vport_id, shaper_para);
if (ret)
return ret;
+ shaper_para = hclge_tm_get_shapping_para(ir_b, ir_u, ir_s,
+ HCLGE_SHAPER_BS_U_DEF,
+ HCLGE_SHAPER_BS_S_DEF);
ret = hclge_tm_pri_shapping_cfg(hdev, HCLGE_TM_SHAP_P_BUCKET,
- vport->vport_id,
- ir_b, ir_u, ir_s,
- HCLGE_SHAPER_BS_U_DEF,
- HCLGE_SHAPER_BS_S_DEF);
+ vport->vport_id, shaper_para);
if (ret)
return ret;
@@ -964,7 +979,7 @@ static int hclge_tm_ets_tc_dwrr_cfg(struct hclge_dev *hdev)
struct hclge_ets_tc_weight_cmd *ets_weight;
struct hclge_desc desc;
- int i;
+ unsigned int i;
hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_ETS_TC_WEIGHT, false);
ets_weight = (struct hclge_ets_tc_weight_cmd *)desc.data;
@@ -1124,6 +1139,9 @@ static int hclge_tm_schd_mode_vnet_base_cfg(struct hclge_vport *vport)
int ret;
u8 i;
+ if (vport->vport_id >= HNAE3_MAX_TC)
+ return -EINVAL;
+
ret = hclge_tm_pri_schd_mode_cfg(hdev, vport->vport_id);
if (ret)
return ret;
@@ -1212,8 +1230,8 @@ static int hclge_pause_param_setup_hw(struct hclge_dev *hdev)
struct hclge_mac *mac = &hdev->hw.mac;
return hclge_pause_param_cfg(hdev, mac->mac_addr,
- HCLGE_DEFAULT_PAUSE_TRANS_GAP,
- HCLGE_DEFAULT_PAUSE_TRANS_TIME);
+ HCLGE_DEFAULT_PAUSE_TRANS_GAP,
+ HCLGE_DEFAULT_PAUSE_TRANS_TIME);
}
static int hclge_pfc_setup_hw(struct hclge_dev *hdev)
@@ -1358,7 +1376,8 @@ void hclge_tm_prio_tc_info_update(struct hclge_dev *hdev, u8 *prio_tc)
void hclge_tm_schd_info_update(struct hclge_dev *hdev, u8 num_tc)
{
- u8 i, bit_map = 0;
+ u8 bit_map = 0;
+ u8 i;
hdev->tm_info.num_tc = num_tc;
@@ -1375,6 +1394,19 @@ void hclge_tm_schd_info_update(struct hclge_dev *hdev, u8 num_tc)
hclge_tm_schd_info_init(hdev);
}
+void hclge_tm_pfc_info_update(struct hclge_dev *hdev)
+{
+ /* DCB is enabled if we have more than 1 TC or pfc_en is
+ * non-zero.
+ */
+ if (hdev->tm_info.num_tc > 1 || hdev->tm_info.pfc_en)
+ hdev->flag |= HCLGE_FLAG_DCB_ENABLE;
+ else
+ hdev->flag &= ~HCLGE_FLAG_DCB_ENABLE;
+
+ hclge_pfc_info_init(hdev);
+}
+
int hclge_tm_init_hw(struct hclge_dev *hdev, bool init)
{
int ret;
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h
index f60e540c7a62..818610988d34 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h
@@ -12,7 +12,7 @@
#define HCLGE_TM_PORT_BASE_MODE_MSK BIT(0)
-#define HCLGE_DEFAULT_PAUSE_TRANS_GAP 0xFF
+#define HCLGE_DEFAULT_PAUSE_TRANS_GAP 0x7F
#define HCLGE_DEFAULT_PAUSE_TRANS_TIME 0xFFFF
/* SP or DWRR */
@@ -147,6 +147,7 @@ int hclge_pause_setup_hw(struct hclge_dev *hdev, bool init);
int hclge_tm_schd_setup_hw(struct hclge_dev *hdev);
void hclge_tm_prio_tc_info_update(struct hclge_dev *hdev, u8 *prio_tc);
void hclge_tm_schd_info_update(struct hclge_dev *hdev, u8 num_tc);
+void hclge_tm_pfc_info_update(struct hclge_dev *hdev);
int hclge_tm_dwrr_cfg(struct hclge_dev *hdev);
int hclge_tm_init_hw(struct hclge_dev *hdev, bool init);
int hclge_mac_pause_en_cfg(struct hclge_dev *hdev, bool tx, bool rx);
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/Makefile b/drivers/net/ethernet/hisilicon/hns3/hns3vf/Makefile
index 6193f8fa7cf3..53804d95ea90 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/Makefile
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/Makefile
@@ -6,4 +6,4 @@
ccflags-y := -I $(srctree)/drivers/net/ethernet/hisilicon/hns3
obj-$(CONFIG_HNS3_HCLGEVF) += hclgevf.o
-hclgevf-objs = hclgevf_main.o hclgevf_cmd.o hclgevf_mbx.o
\ No newline at end of file
+hclgevf-objs = hclgevf_main.o hclgevf_cmd.o hclgevf_mbx.o
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c
index 71f356fc2446..652b796044e3 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c
@@ -98,7 +98,6 @@ static void hclgevf_cmd_config_regs(struct hclgevf_cmq_ring *ring)
hclgevf_write_dev(hw, HCLGEVF_NIC_CSQ_BASEADDR_H_REG, reg_val);
reg_val = (ring->desc_num >> HCLGEVF_NIC_CMQ_DESC_NUM_S);
- reg_val |= HCLGEVF_NIC_CMQ_ENABLE;
hclgevf_write_dev(hw, HCLGEVF_NIC_CSQ_DEPTH_REG, reg_val);
hclgevf_write_dev(hw, HCLGEVF_NIC_CSQ_HEAD_REG, 0);
@@ -110,7 +109,6 @@ static void hclgevf_cmd_config_regs(struct hclgevf_cmq_ring *ring)
hclgevf_write_dev(hw, HCLGEVF_NIC_CRQ_BASEADDR_H_REG, reg_val);
reg_val = (ring->desc_num >> HCLGEVF_NIC_CMQ_DESC_NUM_S);
- reg_val |= HCLGEVF_NIC_CMQ_ENABLE;
hclgevf_write_dev(hw, HCLGEVF_NIC_CRQ_DEPTH_REG, reg_val);
hclgevf_write_dev(hw, HCLGEVF_NIC_CRQ_HEAD_REG, 0);
@@ -179,6 +177,38 @@ void hclgevf_cmd_setup_basic_desc(struct hclgevf_desc *desc,
desc->flag &= cpu_to_le16(~HCLGEVF_CMD_FLAG_WR);
}
+static int hclgevf_cmd_convert_err_code(u16 desc_ret)
+{
+ switch (desc_ret) {
+ case HCLGEVF_CMD_EXEC_SUCCESS:
+ return 0;
+ case HCLGEVF_CMD_NO_AUTH:
+ return -EPERM;
+ case HCLGEVF_CMD_NOT_SUPPORTED:
+ return -EOPNOTSUPP;
+ case HCLGEVF_CMD_QUEUE_FULL:
+ return -EXFULL;
+ case HCLGEVF_CMD_NEXT_ERR:
+ return -ENOSR;
+ case HCLGEVF_CMD_UNEXE_ERR:
+ return -ENOTBLK;
+ case HCLGEVF_CMD_PARA_ERR:
+ return -EINVAL;
+ case HCLGEVF_CMD_RESULT_ERR:
+ return -ERANGE;
+ case HCLGEVF_CMD_TIMEOUT:
+ return -ETIME;
+ case HCLGEVF_CMD_HILINK_ERR:
+ return -ENOLINK;
+ case HCLGEVF_CMD_QUEUE_ILLEGAL:
+ return -ENXIO;
+ case HCLGEVF_CMD_INVALID:
+ return -EBADR;
+ default:
+ return -EIO;
+ }
+}
+
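Illustration, not part of the patch: with hclgevf_cmd_convert_err_code() the firmware's numeric return codes map onto distinct errnos instead of a blanket -EIO, so callers can branch on the failure class. A userspace stand-in of that idea; the numeric code values mirror the enum added further down, the wrapper itself is invented:
#include <errno.h>
#include <stdio.h>

static int convert_err_code(unsigned int desc_ret)
{
        switch (desc_ret) {
        case 0:  return 0;            /* EXEC_SUCCESS                 */
        case 2:  return -EOPNOTSUPP;  /* NOT_SUPPORTED                */
        case 8:  return -ETIME;       /* TIMEOUT                      */
        default: return -EIO;         /* everything else stays opaque */
        }
}

int main(void)
{
        int ret = convert_err_code(2);

        if (ret == -EOPNOTSUPP)
                printf("command not supported; treat the feature as absent\n");
        else if (ret == -ETIME)
                printf("command timed out; a retry or reset may be warranted\n");
        return 0;
}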
/* hclgevf_cmd_send - send command to command queue
* @hw: pointer to the hw struct
* @desc: prefilled descriptor for describing the command
@@ -190,6 +220,7 @@ void hclgevf_cmd_setup_basic_desc(struct hclgevf_desc *desc,
int hclgevf_cmd_send(struct hclgevf_hw *hw, struct hclgevf_desc *desc, int num)
{
struct hclgevf_dev *hdev = (struct hclgevf_dev *)hw->hdev;
+ struct hclgevf_cmq_ring *csq = &hw->cmq.csq;
struct hclgevf_desc *desc_to_use;
bool complete = false;
u32 timeout = 0;
@@ -201,8 +232,17 @@ int hclgevf_cmd_send(struct hclgevf_hw *hw, struct hclgevf_desc *desc, int num)
spin_lock_bh(&hw->cmq.csq.lock);
- if (num > hclgevf_ring_space(&hw->cmq.csq) ||
- test_bit(HCLGEVF_STATE_CMD_DISABLE, &hdev->state)) {
+ if (test_bit(HCLGEVF_STATE_CMD_DISABLE, &hdev->state)) {
+ spin_unlock_bh(&hw->cmq.csq.lock);
+ return -EBUSY;
+ }
+
+ if (num > hclgevf_ring_space(&hw->cmq.csq)) {
+ /* If the CMDQ ring is full, the SW HEAD and HW HEAD may differ,
+ * so update the SW HEAD pointer csq->next_to_clean from hardware.
+ */
+ csq->next_to_clean = hclgevf_read_dev(hw,
+ HCLGEVF_NIC_CSQ_HEAD_REG);
spin_unlock_bh(&hw->cmq.csq.lock);
return -EBUSY;
}
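Illustration, not part of the patch: the new branch above re-reads the hardware HEAD register into csq->next_to_clean when the CSQ looks full, because descriptors the hardware has already consumed free ring space that the stale software head has not yet accounted for. A minimal userspace sketch of ring-space arithmetic of that shape (field names follow the driver, the numbers are invented):
#include <stdio.h>

struct ring {
        int desc_num;       /* total descriptors        */
        int next_to_use;    /* SW producer index        */
        int next_to_clean;  /* SW consumer (head) index */
};

static int ring_space(const struct ring *r)
{
        int used = (r->next_to_use - r->next_to_clean + r->desc_num)
                   % r->desc_num;

        return r->desc_num - 1 - used;
}

int main(void)
{
        struct ring csq = { .desc_num = 1024, .next_to_use = 1000,
                            .next_to_clean = 4 };

        printf("space before HEAD resync: %d\n", ring_space(&csq));
        csq.next_to_clean = 900;  /* pretend HW HEAD reports 900 consumed */
        printf("space after  HEAD resync: %d\n", ring_space(&csq));
        return 0;
}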
@@ -251,11 +291,7 @@ int hclgevf_cmd_send(struct hclgevf_hw *hw, struct hclgevf_desc *desc, int num)
else
retval = le16_to_cpu(desc[0].retval);
- if ((enum hclgevf_cmd_return_status)retval ==
- HCLGEVF_CMD_EXEC_SUCCESS)
- status = 0;
- else
- status = -EIO;
+ status = hclgevf_cmd_convert_err_code(retval);
hw->cmq.last_status = (enum hclgevf_cmd_status)retval;
ntc++;
handle++;
@@ -265,14 +301,13 @@ int hclgevf_cmd_send(struct hclgevf_hw *hw, struct hclgevf_desc *desc, int num)
}
if (!complete)
- status = -EAGAIN;
+ status = -EBADE;
/* Clean the command send queue */
handle = hclgevf_cmd_csq_clean(hw);
- if (handle != num) {
+ if (handle != num)
dev_warn(&hdev->pdev->dev,
"cleaned %d, need to clean %d\n", handle, num);
- }
spin_unlock_bh(&hw->cmq.csq.lock);
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.h b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.h
index 47030b42341f..127a434a56f3 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.h
@@ -46,9 +46,17 @@ struct hclgevf_cmq_ring {
enum hclgevf_cmd_return_status {
HCLGEVF_CMD_EXEC_SUCCESS = 0,
- HCLGEVF_CMD_NO_AUTH = 1,
- HCLGEVF_CMD_NOT_EXEC = 2,
- HCLGEVF_CMD_QUEUE_FULL = 3,
+ HCLGEVF_CMD_NO_AUTH = 1,
+ HCLGEVF_CMD_NOT_SUPPORTED = 2,
+ HCLGEVF_CMD_QUEUE_FULL = 3,
+ HCLGEVF_CMD_NEXT_ERR = 4,
+ HCLGEVF_CMD_UNEXE_ERR = 5,
+ HCLGEVF_CMD_PARA_ERR = 6,
+ HCLGEVF_CMD_RESULT_ERR = 7,
+ HCLGEVF_CMD_TIMEOUT = 8,
+ HCLGEVF_CMD_HILINK_ERR = 9,
+ HCLGEVF_CMD_QUEUE_ILLEGAL = 10,
+ HCLGEVF_CMD_INVALID = 11,
};
enum hclgevf_cmd_status {
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
index 5d53467ee2d2..a13a0e101c3b 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
@@ -11,6 +11,8 @@
#define HCLGEVF_NAME "hclgevf"
+#define HCLGEVF_RESET_MAX_FAIL_CNT 5
+
static int hclgevf_reset_hdev(struct hclgevf_dev *hdev);
static struct hnae3_ae_algo ae_algovf;
@@ -83,8 +85,7 @@ static const u32 tqp_intr_reg_addr_list[] = {HCLGEVF_TQP_INTR_CTRL_REG,
HCLGEVF_TQP_INTR_GL2_REG,
HCLGEVF_TQP_INTR_RL_REG};
-static inline struct hclgevf_dev *hclgevf_ae_get_hdev(
- struct hnae3_handle *handle)
+static struct hclgevf_dev *hclgevf_ae_get_hdev(struct hnae3_handle *handle)
{
if (!handle->client)
return container_of(handle, struct hclgevf_dev, nic);
@@ -232,7 +233,7 @@ static int hclgevf_get_tc_info(struct hclgevf_dev *hdev)
int status;
status = hclgevf_send_mbx_msg(hdev, HCLGE_MBX_GET_TCINFO, 0, NULL, 0,
- true, &resp_msg, sizeof(u8));
+ true, &resp_msg, sizeof(resp_msg));
if (status) {
dev_err(&hdev->pdev->dev,
"VF request to get TC info from PF failed %d",
@@ -321,7 +322,8 @@ static u16 hclgevf_get_qid_global(struct hnae3_handle *handle, u16 queue_id)
memcpy(&msg_data[0], &queue_id, sizeof(queue_id));
ret = hclgevf_send_mbx_msg(hdev, HCLGE_MBX_GET_QID_IN_PF, 0, msg_data,
- 2, true, resp_data, 2);
+ sizeof(msg_data), true, resp_data,
+ sizeof(resp_data));
if (!ret)
qid_in_pf = *(u16 *)resp_data;
@@ -382,7 +384,7 @@ static int hclgevf_knic_setup(struct hclgevf_dev *hdev)
struct hnae3_handle *nic = &hdev->nic;
struct hnae3_knic_private_info *kinfo;
u16 new_tqps = hdev->num_tqps;
- int i;
+ unsigned int i;
kinfo = &nic->kinfo;
kinfo->num_tc = 0;
@@ -418,7 +420,7 @@ static void hclgevf_request_link_info(struct hclgevf_dev *hdev)
u8 resp_msg;
status = hclgevf_send_mbx_msg(hdev, HCLGE_MBX_GET_LINK_STATUS, 0, NULL,
- 0, false, &resp_msg, sizeof(u8));
+ 0, false, &resp_msg, sizeof(resp_msg));
if (status)
dev_err(&hdev->pdev->dev,
"VF failed to fetch link status(%d) from PF", status);
@@ -453,11 +455,13 @@ static void hclgevf_update_link_mode(struct hclgevf_dev *hdev)
u8 resp_msg;
send_msg = HCLGEVF_ADVERTISING;
- hclgevf_send_mbx_msg(hdev, HCLGE_MBX_GET_LINK_MODE, 0, &send_msg,
- sizeof(u8), false, &resp_msg, sizeof(u8));
+ hclgevf_send_mbx_msg(hdev, HCLGE_MBX_GET_LINK_MODE, 0,
+ &send_msg, sizeof(send_msg), false,
+ &resp_msg, sizeof(resp_msg));
send_msg = HCLGEVF_SUPPORTED;
- hclgevf_send_mbx_msg(hdev, HCLGE_MBX_GET_LINK_MODE, 0, &send_msg,
- sizeof(u8), false, &resp_msg, sizeof(u8));
+ hclgevf_send_mbx_msg(hdev, HCLGE_MBX_GET_LINK_MODE, 0,
+ &send_msg, sizeof(send_msg), false,
+ &resp_msg, sizeof(resp_msg));
}
static int hclgevf_set_handle_info(struct hclgevf_dev *hdev)
@@ -470,12 +474,6 @@ static int hclgevf_set_handle_info(struct hclgevf_dev *hdev)
nic->numa_node_mask = hdev->numa_node_mask;
nic->flags |= HNAE3_SUPPORT_VF;
- if (hdev->ae_dev->dev_type != HNAE3_DEV_KNIC) {
- dev_err(&hdev->pdev->dev, "unsupported device type %d\n",
- hdev->ae_dev->dev_type);
- return -EINVAL;
- }
-
ret = hclgevf_knic_setup(hdev);
if (ret)
dev_err(&hdev->pdev->dev, "VF knic setup failed %d\n",
@@ -544,14 +542,16 @@ static int hclgevf_set_rss_algo_key(struct hclgevf_dev *hdev,
const u8 hfunc, const u8 *key)
{
struct hclgevf_rss_config_cmd *req;
+ unsigned int key_offset = 0;
struct hclgevf_desc desc;
- int key_offset;
+ int key_counts;
int key_size;
int ret;
+ key_counts = HCLGEVF_RSS_KEY_SIZE;
req = (struct hclgevf_rss_config_cmd *)desc.data;
- for (key_offset = 0; key_offset < 3; key_offset++) {
+ while (key_counts) {
hclgevf_cmd_setup_basic_desc(&desc,
HCLGEVF_OPC_RSS_GENERIC_CONFIG,
false);
@@ -560,15 +560,12 @@ static int hclgevf_set_rss_algo_key(struct hclgevf_dev *hdev,
req->hash_config |=
(key_offset << HCLGEVF_RSS_HASH_KEY_OFFSET_B);
- if (key_offset == 2)
- key_size =
- HCLGEVF_RSS_KEY_SIZE - HCLGEVF_RSS_HASH_KEY_NUM * 2;
- else
- key_size = HCLGEVF_RSS_HASH_KEY_NUM;
-
+ key_size = min(HCLGEVF_RSS_HASH_KEY_NUM, key_counts);
memcpy(req->hash_key,
key + key_offset * HCLGEVF_RSS_HASH_KEY_NUM, key_size);
+ key_counts -= key_size;
+ key_offset++;
ret = hclgevf_cmd_send(&hdev->hw, &desc, 1);
if (ret) {
dev_err(&hdev->pdev->dev,
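Illustration, not part of the patch: the rewritten loop above replaces the hard-coded three iterations (16 + 16 + remainder bytes of key per descriptor, judging by the old key_offset == 2 special case) with a generic "consume the key in descriptor-sized chunks" loop, so it keeps working if the key size ever changes. A standalone sketch of that chunking idiom, assuming a 40-byte key and 16 key bytes per descriptor:
#include <stdio.h>

#define RSS_KEY_SIZE      40  /* assumed total key length          */
#define HASH_KEY_PER_DESC 16  /* assumed key bytes per descriptor  */

int main(void)
{
        int remaining = RSS_KEY_SIZE;
        int key_offset = 0;

        while (remaining) {
                int chunk = remaining < HASH_KEY_PER_DESC ?
                            remaining : HASH_KEY_PER_DESC;

                /* One descriptor per chunk: offsets 0 and 1 carry 16 bytes,
                 * offset 2 carries the 8-byte tail.
                 */
                printf("desc %d: copy %d bytes from key[%d]\n",
                       key_offset, chunk, key_offset * HASH_KEY_PER_DESC);

                remaining -= chunk;
                key_offset++;
        }
        return 0;
}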
@@ -631,7 +628,7 @@ static int hclgevf_set_rss_tc_mode(struct hclgevf_dev *hdev, u16 rss_size)
struct hclgevf_desc desc;
u16 roundup_size;
int status;
- int i;
+ unsigned int i;
req = (struct hclgevf_rss_tc_mode_cmd *)desc.data;
@@ -997,6 +994,8 @@ static int hclgevf_bind_ring_to_vector(struct hnae3_handle *handle, bool en,
u8 type;
req = (struct hclge_mbx_vf_to_pf_cmd *)desc.data;
+ type = en ? HCLGE_MBX_MAP_RING_TO_VECTOR :
+ HCLGE_MBX_UNMAP_RING_TO_VECTOR;
for (node = ring_chain; node; node = node->next) {
int idx_offset = HCLGE_MBX_RING_MAP_BASIC_MSG_NUM +
@@ -1006,9 +1005,6 @@ static int hclgevf_bind_ring_to_vector(struct hnae3_handle *handle, bool en,
hclgevf_cmd_setup_basic_desc(&desc,
HCLGEVF_OPC_MBX_VF_TO_PF,
false);
- type = en ?
- HCLGE_MBX_MAP_RING_TO_VECTOR :
- HCLGE_MBX_UNMAP_RING_TO_VECTOR;
req->msg[0] = type;
req->msg[1] = vector_id;
}
@@ -1134,7 +1130,7 @@ static int hclgevf_set_promisc_mode(struct hclgevf_dev *hdev, bool en_bc_pmc)
return hclgevf_cmd_set_promisc_mode(hdev, en_bc_pmc);
}
-static int hclgevf_tqp_enable(struct hclgevf_dev *hdev, int tqp_id,
+static int hclgevf_tqp_enable(struct hclgevf_dev *hdev, unsigned int tqp_id,
int stream_id, bool enable)
{
struct hclgevf_cfg_com_tqp_queue_cmd *req;
@@ -1147,7 +1143,8 @@ static int hclgevf_tqp_enable(struct hclgevf_dev *hdev, int tqp_id,
false);
req->tqp_id = cpu_to_le16(tqp_id & HCLGEVF_RING_ID_MASK);
req->stream_id = cpu_to_le16(stream_id);
- req->enable |= enable << HCLGEVF_TQP_ENABLE_B;
+ if (enable)
+ req->enable |= 1U << HCLGEVF_TQP_ENABLE_B;
status = hclgevf_cmd_send(&hdev->hw, &desc, 1);
if (status)
@@ -1193,7 +1190,7 @@ static int hclgevf_set_mac_addr(struct hnae3_handle *handle, void *p,
HCLGE_MBX_MAC_VLAN_UC_MODIFY;
status = hclgevf_send_mbx_msg(hdev, HCLGE_MBX_SET_UNICAST,
- subcode, msg_data, ETH_ALEN * 2,
+ subcode, msg_data, sizeof(msg_data),
true, NULL, 0);
if (!status)
ether_addr_copy(hdev->hw.mac.mac_addr, new_mac_addr);
@@ -1248,19 +1245,61 @@ static int hclgevf_set_vlan_filter(struct hnae3_handle *handle,
#define HCLGEVF_VLAN_MBX_MSG_LEN 5
struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
u8 msg_data[HCLGEVF_VLAN_MBX_MSG_LEN];
+ int ret;
- if (vlan_id > 4095)
+ if (vlan_id > HCLGEVF_MAX_VLAN_ID)
return -EINVAL;
if (proto != htons(ETH_P_8021Q))
return -EPROTONOSUPPORT;
+ /* While the device is resetting, the firmware cannot handle the
+ * mailbox. Just record the vlan id and remove it after the reset
+ * has finished.
+ */
+ if (test_bit(HCLGEVF_STATE_RST_HANDLING, &hdev->state) && is_kill) {
+ set_bit(vlan_id, hdev->vlan_del_fail_bmap);
+ return -EBUSY;
+ }
+
msg_data[0] = is_kill;
memcpy(&msg_data[1], &vlan_id, sizeof(vlan_id));
memcpy(&msg_data[3], &proto, sizeof(proto));
- return hclgevf_send_mbx_msg(hdev, HCLGE_MBX_SET_VLAN,
- HCLGE_MBX_VLAN_FILTER, msg_data,
- HCLGEVF_VLAN_MBX_MSG_LEN, false, NULL, 0);
+ ret = hclgevf_send_mbx_msg(hdev, HCLGE_MBX_SET_VLAN,
+ HCLGE_MBX_VLAN_FILTER, msg_data,
+ HCLGEVF_VLAN_MBX_MSG_LEN, false, NULL, 0);
+
+ /* When removing the hw vlan filter fails, record the vlan id and
+ * try to remove it from hw later, to stay consistent with the
+ * stack.
+ */
+ if (is_kill && ret)
+ set_bit(vlan_id, hdev->vlan_del_fail_bmap);
+
+ return ret;
+}
+
+static void hclgevf_sync_vlan_filter(struct hclgevf_dev *hdev)
+{
+#define HCLGEVF_MAX_SYNC_COUNT 60
+ struct hnae3_handle *handle = &hdev->nic;
+ int ret, sync_cnt = 0;
+ u16 vlan_id;
+
+ vlan_id = find_first_bit(hdev->vlan_del_fail_bmap, VLAN_N_VID);
+ while (vlan_id != VLAN_N_VID) {
+ ret = hclgevf_set_vlan_filter(handle, htons(ETH_P_8021Q),
+ vlan_id, true);
+ if (ret)
+ return;
+
+ clear_bit(vlan_id, hdev->vlan_del_fail_bmap);
+ sync_cnt++;
+ if (sync_cnt >= HCLGEVF_MAX_SYNC_COUNT)
+ return;
+
+ vlan_id = find_first_bit(hdev->vlan_del_fail_bmap, VLAN_N_VID);
+ }
}
static int hclgevf_en_hw_strip_rxvtag(struct hnae3_handle *handle, bool enable)
@@ -1280,7 +1319,7 @@ static int hclgevf_reset_tqp(struct hnae3_handle *handle, u16 queue_id)
u8 msg_data[2];
int ret;
- memcpy(&msg_data[0], &queue_id, sizeof(queue_id));
+ memcpy(msg_data, &queue_id, sizeof(queue_id));
/* disable vf queue before send queue reset msg to PF */
ret = hclgevf_tqp_enable(hdev, queue_id, 0, false);
@@ -1288,7 +1327,7 @@ static int hclgevf_reset_tqp(struct hnae3_handle *handle, u16 queue_id)
return ret;
return hclgevf_send_mbx_msg(hdev, HCLGE_MBX_QUEUE_RESET, 0, msg_data,
- 2, true, NULL, 0);
+ sizeof(msg_data), true, NULL, 0);
}
static int hclgevf_set_mtu(struct hnae3_handle *handle, int new_mtu)
@@ -1306,6 +1345,10 @@ static int hclgevf_notify_client(struct hclgevf_dev *hdev,
struct hnae3_handle *handle = &hdev->nic;
int ret;
+ if (!test_bit(HCLGEVF_STATE_NIC_REGISTERED, &hdev->state) ||
+ !client)
+ return 0;
+
if (!client->ops->reset_notify)
return -EOPNOTSUPP;
@@ -1410,6 +1453,8 @@ static int hclgevf_reset_stack(struct hclgevf_dev *hdev)
static int hclgevf_reset_prepare_wait(struct hclgevf_dev *hdev)
{
+#define HCLGEVF_RESET_SYNC_TIME 100
+
int ret = 0;
switch (hdev->reset_type) {
@@ -1427,13 +1472,34 @@ static int hclgevf_reset_prepare_wait(struct hclgevf_dev *hdev)
}
set_bit(HCLGEVF_STATE_CMD_DISABLE, &hdev->state);
-
+ /* inform hardware that preparatory work is done */
+ msleep(HCLGEVF_RESET_SYNC_TIME);
+ hclgevf_write_dev(&hdev->hw, HCLGEVF_NIC_CSQ_DEPTH_REG,
+ HCLGEVF_NIC_CMQ_ENABLE);
dev_info(&hdev->pdev->dev, "prepare reset(%d) wait done, ret:%d\n",
hdev->reset_type, ret);
return ret;
}
+static void hclgevf_reset_err_handle(struct hclgevf_dev *hdev)
+{
+ hdev->rst_stats.rst_fail_cnt++;
+ dev_err(&hdev->pdev->dev, "failed to reset VF(%d)\n",
+ hdev->rst_stats.rst_fail_cnt);
+
+ if (hdev->rst_stats.rst_fail_cnt < HCLGEVF_RESET_MAX_FAIL_CNT)
+ set_bit(hdev->reset_type, &hdev->reset_pending);
+
+ if (hclgevf_is_reset_pending(hdev)) {
+ set_bit(HCLGEVF_RESET_PENDING, &hdev->reset_state);
+ hclgevf_reset_task_schedule(hdev);
+ } else {
+ hclgevf_write_dev(&hdev->hw, HCLGEVF_NIC_CSQ_DEPTH_REG,
+ HCLGEVF_NIC_CMQ_ENABLE);
+ }
+}
+
static int hclgevf_reset(struct hclgevf_dev *hdev)
{
struct hnae3_ae_dev *ae_dev = pci_get_drvdata(hdev->pdev);
@@ -1490,19 +1556,13 @@ static int hclgevf_reset(struct hclgevf_dev *hdev)
hdev->last_reset_time = jiffies;
ae_dev->reset_type = HNAE3_NONE_RESET;
hdev->rst_stats.rst_done_cnt++;
+ hdev->rst_stats.rst_fail_cnt = 0;
return ret;
err_reset_lock:
rtnl_unlock();
err_reset:
- /* When VF reset failed, only the higher level reset asserted by PF
- * can restore it, so re-initialize the command queue to receive
- * this higher reset event.
- */
- hclgevf_cmd_init(hdev);
- dev_err(&hdev->pdev->dev, "failed to reset VF\n");
- if (hclgevf_is_reset_pending(hdev))
- hclgevf_reset_task_schedule(hdev);
+ hclgevf_reset_err_handle(hdev);
return ret;
}
@@ -1612,7 +1672,8 @@ static void hclgevf_get_misc_vector(struct hclgevf_dev *hdev)
void hclgevf_reset_task_schedule(struct hclgevf_dev *hdev)
{
- if (!test_bit(HCLGEVF_STATE_RST_SERVICE_SCHED, &hdev->state)) {
+ if (!test_bit(HCLGEVF_STATE_RST_SERVICE_SCHED, &hdev->state) &&
+ !test_bit(HCLGEVF_STATE_REMOVING, &hdev->state)) {
set_bit(HCLGEVF_STATE_RST_SERVICE_SCHED, &hdev->state);
schedule_work(&hdev->rst_service_task);
}
@@ -1648,7 +1709,8 @@ static void hclgevf_service_timer(struct timer_list *t)
{
struct hclgevf_dev *hdev = from_timer(hdev, t, service_timer);
- mod_timer(&hdev->service_timer, jiffies + 5 * HZ);
+ mod_timer(&hdev->service_timer, jiffies +
+ HCLGEVF_GENERAL_TASK_INTERVAL * HZ);
hdev->stats_timer++;
hclgevf_task_schedule(hdev);
@@ -1668,9 +1730,9 @@ static void hclgevf_reset_service_task(struct work_struct *work)
if (test_and_clear_bit(HCLGEVF_RESET_PENDING,
&hdev->reset_state)) {
/* PF has intimated that it is about to reset the hardware.
- * We now have to poll & check if harware has actually completed
- * the reset sequence. On hardware reset completion, VF needs to
- * reset the client and ae device.
+ * We now have to poll & check if hardware has actually
+ * completed the reset sequence. On hardware reset completion,
+ * VF needs to reset the client and ae device.
*/
hdev->reset_attempts = 0;
@@ -1686,7 +1748,7 @@ static void hclgevf_reset_service_task(struct work_struct *work)
} else if (test_and_clear_bit(HCLGEVF_RESET_REQUESTED,
&hdev->reset_state)) {
/* we could be here when either of below happens:
- * 1. reset was initiated due to watchdog timeout due to
+ * 1. reset was initiated due to watchdog timeout caused by
* a. IMP was earlier reset and our TX got choked down and
* which resulted in watchdog reacting and inducing VF
* reset. This also means our cmdq would be unreliable.
@@ -1748,7 +1810,8 @@ static void hclgevf_keep_alive_timer(struct timer_list *t)
struct hclgevf_dev *hdev = from_timer(hdev, t, keep_alive_timer);
schedule_work(&hdev->keep_alive_task);
- mod_timer(&hdev->keep_alive_timer, jiffies + 2 * HZ);
+ mod_timer(&hdev->keep_alive_timer, jiffies +
+ HCLGEVF_KEEP_ALIVE_TASK_INTERVAL * HZ);
}
static void hclgevf_keep_alive_task(struct work_struct *work)
@@ -1763,7 +1826,7 @@ static void hclgevf_keep_alive_task(struct work_struct *work)
return;
ret = hclgevf_send_mbx_msg(hdev, HCLGE_MBX_KEEP_ALIVE, 0, NULL,
- 0, false, &respmsg, sizeof(u8));
+ 0, false, &respmsg, sizeof(respmsg));
if (ret)
dev_err(&hdev->pdev->dev,
"VF sends keep alive cmd failed(=%d)\n", ret);
@@ -1789,6 +1852,8 @@ static void hclgevf_service_task(struct work_struct *work)
hclgevf_update_link_mode(hdev);
+ hclgevf_sync_vlan_filter(hdev);
+
hclgevf_deferred_task_schedule(hdev);
clear_bit(HCLGEVF_STATE_SERVICE_SCHED, &hdev->state);
@@ -1995,7 +2060,7 @@ static int hclgevf_rss_init_hw(struct hclgevf_dev *hdev)
}
- /* Initialize RSS indirect table for each vport */
+ /* Initialize RSS indirect table */
for (i = 0; i < HCLGEVF_RSS_IND_TBL_SIZE; i++)
rss_cfg->rss_indirection_tbl[i] = i % hdev->rss_size_max;
@@ -2008,9 +2073,6 @@ static int hclgevf_rss_init_hw(struct hclgevf_dev *hdev)
static int hclgevf_init_vlan_config(struct hclgevf_dev *hdev)
{
- /* other vlan config(like, VLAN TX/RX offload) would also be added
- * here later
- */
return hclgevf_set_vlan_filter(&hdev->nic, htons(ETH_P_8021Q), 0,
false);
}
@@ -2032,7 +2094,6 @@ static int hclgevf_ae_start(struct hnae3_handle *handle)
{
struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
- /* reset tqp stats */
hclgevf_reset_tqp_stats(handle);
hclgevf_request_link_info(hdev);
@@ -2056,7 +2117,6 @@ static void hclgevf_ae_stop(struct hnae3_handle *handle)
if (hclgevf_reset_tqp(handle, i))
break;
- /* reset tqp stats */
hclgevf_reset_tqp_stats(handle);
hclgevf_update_link_status(hdev, 0);
}
@@ -2080,7 +2140,8 @@ static int hclgevf_client_start(struct hnae3_handle *handle)
if (ret)
return ret;
- mod_timer(&hdev->keep_alive_timer, jiffies + 2 * HZ);
+ mod_timer(&hdev->keep_alive_timer, jiffies +
+ HCLGEVF_KEEP_ALIVE_TASK_INTERVAL * HZ);
return 0;
}
@@ -2123,6 +2184,7 @@ static void hclgevf_state_init(struct hclgevf_dev *hdev)
static void hclgevf_state_uninit(struct hclgevf_dev *hdev)
{
set_bit(HCLGEVF_STATE_DOWN, &hdev->state);
+ set_bit(HCLGEVF_STATE_REMOVING, &hdev->state);
if (hdev->keep_alive_timer.function)
del_timer_sync(&hdev->keep_alive_timer);
@@ -2249,49 +2311,68 @@ static void hclgevf_info_show(struct hclgevf_dev *hdev)
dev_info(dev, "VF info end.\n");
}
-static int hclgevf_init_client_instance(struct hnae3_client *client,
- struct hnae3_ae_dev *ae_dev)
+static int hclgevf_init_nic_client_instance(struct hnae3_ae_dev *ae_dev,
+ struct hnae3_client *client)
{
struct hclgevf_dev *hdev = ae_dev->priv;
int ret;
- switch (client->type) {
- case HNAE3_CLIENT_KNIC:
- hdev->nic_client = client;
- hdev->nic.client = client;
+ ret = client->ops->init_instance(&hdev->nic);
+ if (ret)
+ return ret;
- ret = client->ops->init_instance(&hdev->nic);
- if (ret)
- goto clear_nic;
+ set_bit(HCLGEVF_STATE_NIC_REGISTERED, &hdev->state);
+ hnae3_set_client_init_flag(client, ae_dev, 1);
- hnae3_set_client_init_flag(client, ae_dev, 1);
+ if (netif_msg_drv(&hdev->nic))
+ hclgevf_info_show(hdev);
- if (netif_msg_drv(&hdev->nic))
- hclgevf_info_show(hdev);
+ return 0;
+}
- if (hdev->roce_client && hnae3_dev_roce_supported(hdev)) {
- struct hnae3_client *rc = hdev->roce_client;
+static int hclgevf_init_roce_client_instance(struct hnae3_ae_dev *ae_dev,
+ struct hnae3_client *client)
+{
+ struct hclgevf_dev *hdev = ae_dev->priv;
+ int ret;
- ret = hclgevf_init_roce_base_info(hdev);
- if (ret)
- goto clear_roce;
- ret = rc->ops->init_instance(&hdev->roce);
- if (ret)
- goto clear_roce;
+ if (!hnae3_dev_roce_supported(hdev) || !hdev->roce_client ||
+ !hdev->nic_client)
+ return 0;
- hnae3_set_client_init_flag(hdev->roce_client, ae_dev,
- 1);
- }
- break;
- case HNAE3_CLIENT_UNIC:
+ ret = hclgevf_init_roce_base_info(hdev);
+ if (ret)
+ return ret;
+
+ ret = client->ops->init_instance(&hdev->roce);
+ if (ret)
+ return ret;
+
+ hnae3_set_client_init_flag(client, ae_dev, 1);
+
+ return 0;
+}
+
+static int hclgevf_init_client_instance(struct hnae3_client *client,
+ struct hnae3_ae_dev *ae_dev)
+{
+ struct hclgevf_dev *hdev = ae_dev->priv;
+ int ret;
+
+ switch (client->type) {
+ case HNAE3_CLIENT_KNIC:
hdev->nic_client = client;
hdev->nic.client = client;
- ret = client->ops->init_instance(&hdev->nic);
+ ret = hclgevf_init_nic_client_instance(ae_dev, client);
if (ret)
goto clear_nic;
- hnae3_set_client_init_flag(client, ae_dev, 1);
+ ret = hclgevf_init_roce_client_instance(ae_dev,
+ hdev->roce_client);
+ if (ret)
+ goto clear_roce;
+
break;
case HNAE3_CLIENT_ROCE:
if (hnae3_dev_roce_supported(hdev)) {
@@ -2299,17 +2380,10 @@ static int hclgevf_init_client_instance(struct hnae3_client *client,
hdev->roce.client = client;
}
- if (hdev->roce_client && hdev->nic_client) {
- ret = hclgevf_init_roce_base_info(hdev);
- if (ret)
- goto clear_roce;
-
- ret = client->ops->init_instance(&hdev->roce);
- if (ret)
- goto clear_roce;
- }
+ ret = hclgevf_init_roce_client_instance(ae_dev, client);
+ if (ret)
+ goto clear_roce;
- hnae3_set_client_init_flag(client, ae_dev, 1);
break;
default:
return -EINVAL;
@@ -2342,6 +2416,8 @@ static void hclgevf_uninit_client_instance(struct hnae3_client *client,
/* un-init nic/unic, if this was not called by roce client */
if (client->ops->uninit_instance && hdev->nic_client &&
client->type != HNAE3_CLIENT_ROCE) {
+ clear_bit(HCLGEVF_STATE_NIC_REGISTERED, &hdev->state);
+
client->ops->uninit_instance(&hdev->nic, 0);
hdev->nic_client = NULL;
hdev->nic.client = NULL;
@@ -2512,6 +2588,12 @@ static int hclgevf_reset_hdev(struct hclgevf_dev *hdev)
return ret;
}
+ if (pdev->revision >= 0x21) {
+ ret = hclgevf_set_promisc_mode(hdev, true);
+ if (ret)
+ return ret;
+ }
+
dev_info(&hdev->pdev->dev, "Reset done\n");
return 0;
@@ -2591,9 +2673,11 @@ static int hclgevf_init_hdev(struct hclgevf_dev *hdev)
* firmware makes sure broadcast packets can be accepted.
* For revision 0x21, default to enable broadcast promisc mode.
*/
- ret = hclgevf_set_promisc_mode(hdev, true);
- if (ret)
- goto err_config;
+ if (pdev->revision >= 0x21) {
+ ret = hclgevf_set_promisc_mode(hdev, true);
+ if (ret)
+ goto err_config;
+ }
/* Initialize RSS for this VF */
ret = hclgevf_rss_init_hw(hdev);
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
index cc52f54f8c08..5a9e30998a8f 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
@@ -4,6 +4,7 @@
#ifndef __HCLGEVF_MAIN_H
#define __HCLGEVF_MAIN_H
#include <linux/fs.h>
+#include <linux/if_vlan.h>
#include <linux/types.h>
#include "hclge_mbx.h"
#include "hclgevf_cmd.h"
@@ -12,9 +13,12 @@
#define HCLGEVF_MOD_VERSION "1.0"
#define HCLGEVF_DRIVER_NAME "hclgevf"
+#define HCLGEVF_MAX_VLAN_ID 4095
#define HCLGEVF_MISC_VECTOR_NUM 0
#define HCLGEVF_INVALID_VPORT 0xffff
+#define HCLGEVF_GENERAL_TASK_INTERVAL 5
+#define HCLGEVF_KEEP_ALIVE_TASK_INTERVAL 2
/* This number actually depends upon the total number of VFs
* created by physical function. But the maximum number of
@@ -130,6 +134,8 @@ enum hclgevf_states {
HCLGEVF_STATE_DOWN,
HCLGEVF_STATE_DISABLED,
HCLGEVF_STATE_IRQ_INITED,
+ HCLGEVF_STATE_REMOVING,
+ HCLGEVF_STATE_NIC_REGISTERED,
/* task states */
HCLGEVF_STATE_SERVICE_SCHED,
HCLGEVF_STATE_RST_SERVICE_SCHED,
@@ -220,6 +226,7 @@ struct hclgevf_rst_stats {
u32 vf_rst_cnt; /* the number of VF reset */
u32 rst_done_cnt; /* the number of reset completed */
u32 hw_rst_done_cnt; /* the number of HW reset completed */
+ u32 rst_fail_cnt; /* the number of VF reset fail */
};
struct hclgevf_dev {
@@ -265,6 +272,8 @@ struct hclgevf_dev {
u16 *vector_status;
int *vector_irq;
+ unsigned long vlan_del_fail_bmap[BITS_TO_LONGS(VLAN_N_VID)];
+
bool mbx_event_pending;
struct hclgevf_mbx_resp_status mbx_resp; /* mailbox response */
struct hclgevf_mbx_arq_ring arq; /* mailbox async rx queue */
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
index 30f2e9352cf3..f60b80bd605e 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
@@ -102,7 +102,8 @@ int hclgevf_send_mbx_msg(struct hclgevf_dev *hdev, u16 code, u16 subcode,
~HCLGE_MBX_NEED_RESP_BIT;
req->msg[0] = code;
req->msg[1] = subcode;
- memcpy(&req->msg[2], msg_data, msg_len);
+ if (msg_data)
+ memcpy(&req->msg[2], msg_data, msg_len);
/* synchronous send */
if (need_resp) {
diff --git a/drivers/net/ethernet/huawei/hinic/Makefile b/drivers/net/ethernet/huawei/hinic/Makefile
index 99de5b6607d5..fe88ab88cacc 100644
--- a/drivers/net/ethernet/huawei/hinic/Makefile
+++ b/drivers/net/ethernet/huawei/hinic/Makefile
@@ -4,4 +4,4 @@ obj-$(CONFIG_HINIC) += hinic.o
hinic-y := hinic_main.o hinic_tx.o hinic_rx.o hinic_port.o hinic_hw_dev.o \
hinic_hw_io.o hinic_hw_qp.o hinic_hw_cmdq.o hinic_hw_wq.o \
hinic_hw_mgmt.o hinic_hw_api_cmd.o hinic_hw_eqs.o hinic_hw_if.o \
- hinic_common.o
+ hinic_common.o hinic_ethtool.o
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_dev.h b/drivers/net/ethernet/huawei/hinic/hinic_dev.h
index 353276fdcaed..a209b14160cc 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_dev.h
+++ b/drivers/net/ethernet/huawei/hinic/hinic_dev.h
@@ -22,6 +22,7 @@
enum hinic_flags {
HINIC_LINK_UP = BIT(0),
HINIC_INTF_UP = BIT(1),
+ HINIC_RSS_ENABLE = BIT(2),
};
struct hinic_rx_mode_work {
@@ -29,6 +30,23 @@ struct hinic_rx_mode_work {
u32 rx_mode;
};
+struct hinic_rss_type {
+ u8 tcp_ipv6_ext;
+ u8 ipv6_ext;
+ u8 tcp_ipv6;
+ u8 ipv6;
+ u8 tcp_ipv4;
+ u8 ipv4;
+ u8 udp_ipv6;
+ u8 udp_ipv4;
+};
+
+enum hinic_rss_hash_type {
+ HINIC_RSS_HASH_ENGINE_TYPE_XOR,
+ HINIC_RSS_HASH_ENGINE_TYPE_TOEP,
+ HINIC_RSS_HASH_ENGINE_TYPE_MAX,
+};
+
struct hinic_dev {
struct net_device *netdev;
struct hinic_hwdev *hwdev;
@@ -36,6 +54,8 @@ struct hinic_dev {
u32 msg_enable;
unsigned int tx_weight;
unsigned int rx_weight;
+ u16 num_qps;
+ u16 max_qps;
unsigned int flags;
@@ -50,6 +70,14 @@ struct hinic_dev {
struct hinic_txq_stats tx_stats;
struct hinic_rxq_stats rx_stats;
+
+ u8 rss_tmpl_idx;
+ u8 rss_hash_engine;
+ u16 num_rss;
+ u16 rss_limit;
+ struct hinic_rss_type rss_type;
+ u8 *rss_hkey_user;
+ s32 *rss_indir_user;
};
#endif
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_ethtool.c b/drivers/net/ethernet/huawei/hinic/hinic_ethtool.c
new file mode 100644
index 000000000000..60ec48fe4144
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic/hinic_ethtool.c
@@ -0,0 +1,762 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Huawei HiNIC PCI Express Linux driver
+ * Copyright(c) 2017 Huawei Technologies Co., Ltd
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * for more details.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/interrupt.h>
+#include <linux/etherdevice.h>
+#include <linux/netdevice.h>
+#include <linux/if_vlan.h>
+#include <linux/ethtool.h>
+#include <linux/vmalloc.h>
+
+#include "hinic_hw_qp.h"
+#include "hinic_hw_dev.h"
+#include "hinic_port.h"
+#include "hinic_tx.h"
+#include "hinic_rx.h"
+#include "hinic_dev.h"
+
+static void set_link_speed(struct ethtool_link_ksettings *link_ksettings,
+ enum hinic_speed speed)
+{
+ switch (speed) {
+ case HINIC_SPEED_10MB_LINK:
+ link_ksettings->base.speed = SPEED_10;
+ break;
+
+ case HINIC_SPEED_100MB_LINK:
+ link_ksettings->base.speed = SPEED_100;
+ break;
+
+ case HINIC_SPEED_1000MB_LINK:
+ link_ksettings->base.speed = SPEED_1000;
+ break;
+
+ case HINIC_SPEED_10GB_LINK:
+ link_ksettings->base.speed = SPEED_10000;
+ break;
+
+ case HINIC_SPEED_25GB_LINK:
+ link_ksettings->base.speed = SPEED_25000;
+ break;
+
+ case HINIC_SPEED_40GB_LINK:
+ link_ksettings->base.speed = SPEED_40000;
+ break;
+
+ case HINIC_SPEED_100GB_LINK:
+ link_ksettings->base.speed = SPEED_100000;
+ break;
+
+ default:
+ link_ksettings->base.speed = SPEED_UNKNOWN;
+ break;
+ }
+}
+
+static int hinic_get_link_ksettings(struct net_device *netdev,
+ struct ethtool_link_ksettings
+ *link_ksettings)
+{
+ struct hinic_dev *nic_dev = netdev_priv(netdev);
+ enum hinic_port_link_state link_state;
+ struct hinic_port_cap port_cap;
+ int err;
+
+ ethtool_link_ksettings_zero_link_mode(link_ksettings, advertising);
+ ethtool_link_ksettings_add_link_mode(link_ksettings, supported,
+ Autoneg);
+
+ link_ksettings->base.speed = SPEED_UNKNOWN;
+ link_ksettings->base.autoneg = AUTONEG_DISABLE;
+ link_ksettings->base.duplex = DUPLEX_UNKNOWN;
+
+ err = hinic_port_get_cap(nic_dev, &port_cap);
+ if (err)
+ return err;
+
+ err = hinic_port_link_state(nic_dev, &link_state);
+ if (err)
+ return err;
+
+ if (link_state != HINIC_LINK_STATE_UP)
+ return err;
+
+ set_link_speed(link_ksettings, port_cap.speed);
+
+ if (!!(port_cap.autoneg_cap & HINIC_AUTONEG_SUPPORTED))
+ ethtool_link_ksettings_add_link_mode(link_ksettings,
+ advertising, Autoneg);
+
+ if (port_cap.autoneg_state == HINIC_AUTONEG_ACTIVE)
+ link_ksettings->base.autoneg = AUTONEG_ENABLE;
+
+ link_ksettings->base.duplex = (port_cap.duplex == HINIC_DUPLEX_FULL) ?
+ DUPLEX_FULL : DUPLEX_HALF;
+ return 0;
+}
+
+static void hinic_get_drvinfo(struct net_device *netdev,
+ struct ethtool_drvinfo *info)
+{
+ struct hinic_dev *nic_dev = netdev_priv(netdev);
+ u8 mgmt_ver[HINIC_MGMT_VERSION_MAX_LEN] = {0};
+ struct hinic_hwdev *hwdev = nic_dev->hwdev;
+ struct hinic_hwif *hwif = hwdev->hwif;
+ int err;
+
+ strlcpy(info->driver, HINIC_DRV_NAME, sizeof(info->driver));
+ strlcpy(info->bus_info, pci_name(hwif->pdev), sizeof(info->bus_info));
+
+ err = hinic_get_mgmt_version(nic_dev, mgmt_ver);
+ if (err)
+ return;
+
+ snprintf(info->fw_version, sizeof(info->fw_version), "%s", mgmt_ver);
+}
+
+static void hinic_get_ringparam(struct net_device *netdev,
+ struct ethtool_ringparam *ring)
+{
+ ring->rx_max_pending = HINIC_RQ_DEPTH;
+ ring->tx_max_pending = HINIC_SQ_DEPTH;
+ ring->rx_pending = HINIC_RQ_DEPTH;
+ ring->tx_pending = HINIC_SQ_DEPTH;
+}
+
+static void hinic_get_channels(struct net_device *netdev,
+ struct ethtool_channels *channels)
+{
+ struct hinic_dev *nic_dev = netdev_priv(netdev);
+ struct hinic_hwdev *hwdev = nic_dev->hwdev;
+
+ channels->max_rx = hwdev->nic_cap.max_qps;
+ channels->max_tx = hwdev->nic_cap.max_qps;
+ channels->max_other = 0;
+ channels->max_combined = 0;
+ channels->rx_count = hinic_hwdev_num_qps(hwdev);
+ channels->tx_count = hinic_hwdev_num_qps(hwdev);
+ channels->other_count = 0;
+ channels->combined_count = 0;
+}
+
+static int hinic_get_rss_hash_opts(struct hinic_dev *nic_dev,
+ struct ethtool_rxnfc *cmd)
+{
+ struct hinic_rss_type rss_type = { 0 };
+ int err;
+
+ cmd->data = 0;
+
+ if (!(nic_dev->flags & HINIC_RSS_ENABLE))
+ return 0;
+
+ err = hinic_get_rss_type(nic_dev, nic_dev->rss_tmpl_idx,
+ &rss_type);
+ if (err)
+ return err;
+
+ cmd->data = RXH_IP_SRC | RXH_IP_DST;
+ switch (cmd->flow_type) {
+ case TCP_V4_FLOW:
+ if (rss_type.tcp_ipv4)
+ cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
+ break;
+ case TCP_V6_FLOW:
+ if (rss_type.tcp_ipv6)
+ cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
+ break;
+ case UDP_V4_FLOW:
+ if (rss_type.udp_ipv4)
+ cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
+ break;
+ case UDP_V6_FLOW:
+ if (rss_type.udp_ipv6)
+ cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
+ break;
+ case IPV4_FLOW:
+ case IPV6_FLOW:
+ break;
+ default:
+ cmd->data = 0;
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int set_l4_rss_hash_ops(struct ethtool_rxnfc *cmd,
+ struct hinic_rss_type *rss_type)
+{
+ u8 rss_l4_en = 0;
+
+ switch (cmd->data & (RXH_L4_B_0_1 | RXH_L4_B_2_3)) {
+ case 0:
+ rss_l4_en = 0;
+ break;
+ case (RXH_L4_B_0_1 | RXH_L4_B_2_3):
+ rss_l4_en = 1;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ switch (cmd->flow_type) {
+ case TCP_V4_FLOW:
+ rss_type->tcp_ipv4 = rss_l4_en;
+ break;
+ case TCP_V6_FLOW:
+ rss_type->tcp_ipv6 = rss_l4_en;
+ break;
+ case UDP_V4_FLOW:
+ rss_type->udp_ipv4 = rss_l4_en;
+ break;
+ case UDP_V6_FLOW:
+ rss_type->udp_ipv6 = rss_l4_en;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int hinic_set_rss_hash_opts(struct hinic_dev *nic_dev,
+ struct ethtool_rxnfc *cmd)
+{
+ struct hinic_rss_type *rss_type = &nic_dev->rss_type;
+ int err;
+
+ if (!(nic_dev->flags & HINIC_RSS_ENABLE)) {
+ cmd->data = 0;
+ return -EOPNOTSUPP;
+ }
+
+ /* RSS does not support anything other than hashing
+ * to queues on src and dst IPs and ports
+ */
+ if (cmd->data & ~(RXH_IP_SRC | RXH_IP_DST | RXH_L4_B_0_1 |
+ RXH_L4_B_2_3))
+ return -EINVAL;
+
+ /* We need at least the IP SRC and DEST fields for hashing */
+ if (!(cmd->data & RXH_IP_SRC) || !(cmd->data & RXH_IP_DST))
+ return -EINVAL;
+
+ err = hinic_get_rss_type(nic_dev,
+ nic_dev->rss_tmpl_idx, rss_type);
+ if (err)
+ return -EFAULT;
+
+ switch (cmd->flow_type) {
+ case TCP_V4_FLOW:
+ case TCP_V6_FLOW:
+ case UDP_V4_FLOW:
+ case UDP_V6_FLOW:
+ err = set_l4_rss_hash_ops(cmd, rss_type);
+ if (err)
+ return err;
+ break;
+ case IPV4_FLOW:
+ rss_type->ipv4 = 1;
+ break;
+ case IPV6_FLOW:
+ rss_type->ipv6 = 1;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ err = hinic_set_rss_type(nic_dev, nic_dev->rss_tmpl_idx,
+ *rss_type);
+ if (err)
+ return -EFAULT;
+
+ return 0;
+}
+
+static int __set_rss_rxfh(struct net_device *netdev,
+ const u32 *indir, const u8 *key)
+{
+ struct hinic_dev *nic_dev = netdev_priv(netdev);
+ int err;
+
+ if (indir) {
+ if (!nic_dev->rss_indir_user) {
+ nic_dev->rss_indir_user =
+ kzalloc(sizeof(u32) * HINIC_RSS_INDIR_SIZE,
+ GFP_KERNEL);
+ if (!nic_dev->rss_indir_user)
+ return -ENOMEM;
+ }
+
+ memcpy(nic_dev->rss_indir_user, indir,
+ sizeof(u32) * HINIC_RSS_INDIR_SIZE);
+
+ err = hinic_rss_set_indir_tbl(nic_dev,
+ nic_dev->rss_tmpl_idx, indir);
+ if (err)
+ return -EFAULT;
+ }
+
+ if (key) {
+ if (!nic_dev->rss_hkey_user) {
+ nic_dev->rss_hkey_user =
+ kzalloc(HINIC_RSS_KEY_SIZE * 2, GFP_KERNEL);
+
+ if (!nic_dev->rss_hkey_user)
+ return -ENOMEM;
+ }
+
+ memcpy(nic_dev->rss_hkey_user, key, HINIC_RSS_KEY_SIZE);
+
+ err = hinic_rss_set_template_tbl(nic_dev,
+ nic_dev->rss_tmpl_idx, key);
+ if (err)
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static int hinic_get_rxnfc(struct net_device *netdev,
+ struct ethtool_rxnfc *cmd, u32 *rule_locs)
+{
+ struct hinic_dev *nic_dev = netdev_priv(netdev);
+ int err = 0;
+
+ switch (cmd->cmd) {
+ case ETHTOOL_GRXRINGS:
+ cmd->data = nic_dev->num_qps;
+ break;
+ case ETHTOOL_GRXFH:
+ err = hinic_get_rss_hash_opts(nic_dev, cmd);
+ break;
+ default:
+ err = -EOPNOTSUPP;
+ break;
+ }
+
+ return err;
+}
+
+static int hinic_set_rxnfc(struct net_device *netdev, struct ethtool_rxnfc *cmd)
+{
+ struct hinic_dev *nic_dev = netdev_priv(netdev);
+ int err = 0;
+
+ switch (cmd->cmd) {
+ case ETHTOOL_SRXFH:
+ err = hinic_set_rss_hash_opts(nic_dev, cmd);
+ break;
+ default:
+ err = -EOPNOTSUPP;
+ break;
+ }
+
+ return err;
+}
+
+static int hinic_get_rxfh(struct net_device *netdev,
+ u32 *indir, u8 *key, u8 *hfunc)
+{
+ struct hinic_dev *nic_dev = netdev_priv(netdev);
+ u8 hash_engine_type = 0;
+ int err = 0;
+
+ if (!(nic_dev->flags & HINIC_RSS_ENABLE))
+ return -EOPNOTSUPP;
+
+ if (hfunc) {
+ err = hinic_rss_get_hash_engine(nic_dev,
+ nic_dev->rss_tmpl_idx,
+ &hash_engine_type);
+ if (err)
+ return -EFAULT;
+
+ *hfunc = hash_engine_type ? ETH_RSS_HASH_TOP : ETH_RSS_HASH_XOR;
+ }
+
+ if (indir) {
+ err = hinic_rss_get_indir_tbl(nic_dev,
+ nic_dev->rss_tmpl_idx, indir);
+ if (err)
+ return -EFAULT;
+ }
+
+ if (key)
+ err = hinic_rss_get_template_tbl(nic_dev,
+ nic_dev->rss_tmpl_idx, key);
+
+ return err;
+}
+
+static int hinic_set_rxfh(struct net_device *netdev, const u32 *indir,
+ const u8 *key, const u8 hfunc)
+{
+ struct hinic_dev *nic_dev = netdev_priv(netdev);
+ int err = 0;
+
+ if (!(nic_dev->flags & HINIC_RSS_ENABLE))
+ return -EOPNOTSUPP;
+
+ if (hfunc != ETH_RSS_HASH_NO_CHANGE) {
+ if (hfunc != ETH_RSS_HASH_TOP && hfunc != ETH_RSS_HASH_XOR)
+ return -EOPNOTSUPP;
+
+ nic_dev->rss_hash_engine = (hfunc == ETH_RSS_HASH_XOR) ?
+ HINIC_RSS_HASH_ENGINE_TYPE_XOR :
+ HINIC_RSS_HASH_ENGINE_TYPE_TOEP;
+ err = hinic_rss_set_hash_engine
+ (nic_dev, nic_dev->rss_tmpl_idx,
+ nic_dev->rss_hash_engine);
+ if (err)
+ return -EFAULT;
+ }
+
+ err = __set_rss_rxfh(netdev, indir, key);
+
+ return err;
+}
+
+static u32 hinic_get_rxfh_key_size(struct net_device *netdev)
+{
+ return HINIC_RSS_KEY_SIZE;
+}
+
+static u32 hinic_get_rxfh_indir_size(struct net_device *netdev)
+{
+ return HINIC_RSS_INDIR_SIZE;
+}
+
+#define ARRAY_LEN(arr) ((int)((int)sizeof(arr) / (int)sizeof(arr[0])))
+
+#define HINIC_FUNC_STAT(_stat_item) { \
+ .name = #_stat_item, \
+ .size = FIELD_SIZEOF(struct hinic_vport_stats, _stat_item), \
+ .offset = offsetof(struct hinic_vport_stats, _stat_item) \
+}
+
+static struct hinic_stats hinic_function_stats[] = {
+ HINIC_FUNC_STAT(tx_unicast_pkts_vport),
+ HINIC_FUNC_STAT(tx_unicast_bytes_vport),
+ HINIC_FUNC_STAT(tx_multicast_pkts_vport),
+ HINIC_FUNC_STAT(tx_multicast_bytes_vport),
+ HINIC_FUNC_STAT(tx_broadcast_pkts_vport),
+ HINIC_FUNC_STAT(tx_broadcast_bytes_vport),
+
+ HINIC_FUNC_STAT(rx_unicast_pkts_vport),
+ HINIC_FUNC_STAT(rx_unicast_bytes_vport),
+ HINIC_FUNC_STAT(rx_multicast_pkts_vport),
+ HINIC_FUNC_STAT(rx_multicast_bytes_vport),
+ HINIC_FUNC_STAT(rx_broadcast_pkts_vport),
+ HINIC_FUNC_STAT(rx_broadcast_bytes_vport),
+
+ HINIC_FUNC_STAT(tx_discard_vport),
+ HINIC_FUNC_STAT(rx_discard_vport),
+ HINIC_FUNC_STAT(tx_err_vport),
+ HINIC_FUNC_STAT(rx_err_vport),
+};
+
+#define HINIC_PORT_STAT(_stat_item) { \
+ .name = #_stat_item, \
+ .size = FIELD_SIZEOF(struct hinic_phy_port_stats, _stat_item), \
+ .offset = offsetof(struct hinic_phy_port_stats, _stat_item) \
+}
+
+static struct hinic_stats hinic_port_stats[] = {
+ HINIC_PORT_STAT(mac_rx_total_pkt_num),
+ HINIC_PORT_STAT(mac_rx_total_oct_num),
+ HINIC_PORT_STAT(mac_rx_bad_pkt_num),
+ HINIC_PORT_STAT(mac_rx_bad_oct_num),
+ HINIC_PORT_STAT(mac_rx_good_pkt_num),
+ HINIC_PORT_STAT(mac_rx_good_oct_num),
+ HINIC_PORT_STAT(mac_rx_uni_pkt_num),
+ HINIC_PORT_STAT(mac_rx_multi_pkt_num),
+ HINIC_PORT_STAT(mac_rx_broad_pkt_num),
+ HINIC_PORT_STAT(mac_tx_total_pkt_num),
+ HINIC_PORT_STAT(mac_tx_total_oct_num),
+ HINIC_PORT_STAT(mac_tx_bad_pkt_num),
+ HINIC_PORT_STAT(mac_tx_bad_oct_num),
+ HINIC_PORT_STAT(mac_tx_good_pkt_num),
+ HINIC_PORT_STAT(mac_tx_good_oct_num),
+ HINIC_PORT_STAT(mac_tx_uni_pkt_num),
+ HINIC_PORT_STAT(mac_tx_multi_pkt_num),
+ HINIC_PORT_STAT(mac_tx_broad_pkt_num),
+ HINIC_PORT_STAT(mac_rx_fragment_pkt_num),
+ HINIC_PORT_STAT(mac_rx_undersize_pkt_num),
+ HINIC_PORT_STAT(mac_rx_undermin_pkt_num),
+ HINIC_PORT_STAT(mac_rx_64_oct_pkt_num),
+ HINIC_PORT_STAT(mac_rx_65_127_oct_pkt_num),
+ HINIC_PORT_STAT(mac_rx_128_255_oct_pkt_num),
+ HINIC_PORT_STAT(mac_rx_256_511_oct_pkt_num),
+ HINIC_PORT_STAT(mac_rx_512_1023_oct_pkt_num),
+ HINIC_PORT_STAT(mac_rx_1024_1518_oct_pkt_num),
+ HINIC_PORT_STAT(mac_rx_1519_2047_oct_pkt_num),
+ HINIC_PORT_STAT(mac_rx_2048_4095_oct_pkt_num),
+ HINIC_PORT_STAT(mac_rx_4096_8191_oct_pkt_num),
+ HINIC_PORT_STAT(mac_rx_8192_9216_oct_pkt_num),
+ HINIC_PORT_STAT(mac_rx_9217_12287_oct_pkt_num),
+ HINIC_PORT_STAT(mac_rx_12288_16383_oct_pkt_num),
+ HINIC_PORT_STAT(mac_rx_1519_max_good_pkt_num),
+ HINIC_PORT_STAT(mac_rx_1519_max_bad_pkt_num),
+ HINIC_PORT_STAT(mac_rx_oversize_pkt_num),
+ HINIC_PORT_STAT(mac_rx_jabber_pkt_num),
+ HINIC_PORT_STAT(mac_rx_pause_num),
+ HINIC_PORT_STAT(mac_rx_pfc_pkt_num),
+ HINIC_PORT_STAT(mac_rx_pfc_pri0_pkt_num),
+ HINIC_PORT_STAT(mac_rx_pfc_pri1_pkt_num),
+ HINIC_PORT_STAT(mac_rx_pfc_pri2_pkt_num),
+ HINIC_PORT_STAT(mac_rx_pfc_pri3_pkt_num),
+ HINIC_PORT_STAT(mac_rx_pfc_pri4_pkt_num),
+ HINIC_PORT_STAT(mac_rx_pfc_pri5_pkt_num),
+ HINIC_PORT_STAT(mac_rx_pfc_pri6_pkt_num),
+ HINIC_PORT_STAT(mac_rx_pfc_pri7_pkt_num),
+ HINIC_PORT_STAT(mac_rx_control_pkt_num),
+ HINIC_PORT_STAT(mac_rx_sym_err_pkt_num),
+ HINIC_PORT_STAT(mac_rx_fcs_err_pkt_num),
+ HINIC_PORT_STAT(mac_rx_send_app_good_pkt_num),
+ HINIC_PORT_STAT(mac_rx_send_app_bad_pkt_num),
+ HINIC_PORT_STAT(mac_tx_fragment_pkt_num),
+ HINIC_PORT_STAT(mac_tx_undersize_pkt_num),
+ HINIC_PORT_STAT(mac_tx_undermin_pkt_num),
+ HINIC_PORT_STAT(mac_tx_64_oct_pkt_num),
+ HINIC_PORT_STAT(mac_tx_65_127_oct_pkt_num),
+ HINIC_PORT_STAT(mac_tx_128_255_oct_pkt_num),
+ HINIC_PORT_STAT(mac_tx_256_511_oct_pkt_num),
+ HINIC_PORT_STAT(mac_tx_512_1023_oct_pkt_num),
+ HINIC_PORT_STAT(mac_tx_1024_1518_oct_pkt_num),
+ HINIC_PORT_STAT(mac_tx_1519_2047_oct_pkt_num),
+ HINIC_PORT_STAT(mac_tx_2048_4095_oct_pkt_num),
+ HINIC_PORT_STAT(mac_tx_4096_8191_oct_pkt_num),
+ HINIC_PORT_STAT(mac_tx_8192_9216_oct_pkt_num),
+ HINIC_PORT_STAT(mac_tx_9217_12287_oct_pkt_num),
+ HINIC_PORT_STAT(mac_tx_12288_16383_oct_pkt_num),
+ HINIC_PORT_STAT(mac_tx_1519_max_good_pkt_num),
+ HINIC_PORT_STAT(mac_tx_1519_max_bad_pkt_num),
+ HINIC_PORT_STAT(mac_tx_oversize_pkt_num),
+ HINIC_PORT_STAT(mac_tx_jabber_pkt_num),
+ HINIC_PORT_STAT(mac_tx_pause_num),
+ HINIC_PORT_STAT(mac_tx_pfc_pkt_num),
+ HINIC_PORT_STAT(mac_tx_pfc_pri0_pkt_num),
+ HINIC_PORT_STAT(mac_tx_pfc_pri1_pkt_num),
+ HINIC_PORT_STAT(mac_tx_pfc_pri2_pkt_num),
+ HINIC_PORT_STAT(mac_tx_pfc_pri3_pkt_num),
+ HINIC_PORT_STAT(mac_tx_pfc_pri4_pkt_num),
+ HINIC_PORT_STAT(mac_tx_pfc_pri5_pkt_num),
+ HINIC_PORT_STAT(mac_tx_pfc_pri6_pkt_num),
+ HINIC_PORT_STAT(mac_tx_pfc_pri7_pkt_num),
+ HINIC_PORT_STAT(mac_tx_control_pkt_num),
+ HINIC_PORT_STAT(mac_tx_err_all_pkt_num),
+ HINIC_PORT_STAT(mac_tx_from_app_good_pkt_num),
+ HINIC_PORT_STAT(mac_tx_from_app_bad_pkt_num),
+};
+
+#define HINIC_TXQ_STAT(_stat_item) { \
+ .name = "txq%d_"#_stat_item, \
+ .size = FIELD_SIZEOF(struct hinic_txq_stats, _stat_item), \
+ .offset = offsetof(struct hinic_txq_stats, _stat_item) \
+}
+
+static struct hinic_stats hinic_tx_queue_stats[] = {
+ HINIC_TXQ_STAT(pkts),
+ HINIC_TXQ_STAT(bytes),
+ HINIC_TXQ_STAT(tx_busy),
+ HINIC_TXQ_STAT(tx_wake),
+ HINIC_TXQ_STAT(tx_dropped),
+ HINIC_TXQ_STAT(big_frags_pkts),
+};
+
+#define HINIC_RXQ_STAT(_stat_item) { \
+ .name = "rxq%d_"#_stat_item, \
+ .size = FIELD_SIZEOF(struct hinic_rxq_stats, _stat_item), \
+ .offset = offsetof(struct hinic_rxq_stats, _stat_item) \
+}
+
+static struct hinic_stats hinic_rx_queue_stats[] = {
+ HINIC_RXQ_STAT(pkts),
+ HINIC_RXQ_STAT(bytes),
+ HINIC_RXQ_STAT(errors),
+ HINIC_RXQ_STAT(csum_errors),
+ HINIC_RXQ_STAT(other_errors),
+};
+
+static void get_drv_queue_stats(struct hinic_dev *nic_dev, u64 *data)
+{
+ struct hinic_txq_stats txq_stats;
+ struct hinic_rxq_stats rxq_stats;
+ u16 i = 0, j = 0, qid = 0;
+ char *p;
+
+ for (qid = 0; qid < nic_dev->num_qps; qid++) {
+ if (!nic_dev->txqs)
+ break;
+
+ hinic_txq_get_stats(&nic_dev->txqs[qid], &txq_stats);
+ for (j = 0; j < ARRAY_LEN(hinic_tx_queue_stats); j++, i++) {
+ p = (char *)&txq_stats +
+ hinic_tx_queue_stats[j].offset;
+ data[i] = (hinic_tx_queue_stats[j].size ==
+ sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
+ }
+ }
+
+ for (qid = 0; qid < nic_dev->num_qps; qid++) {
+ if (!nic_dev->rxqs)
+ break;
+
+ hinic_rxq_get_stats(&nic_dev->rxqs[qid], &rxq_stats);
+ for (j = 0; j < ARRAY_LEN(hinic_rx_queue_stats); j++, i++) {
+ p = (char *)&rxq_stats +
+ hinic_rx_queue_stats[j].offset;
+ data[i] = (hinic_rx_queue_stats[j].size ==
+ sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
+ }
+ }
+}
+
+static void hinic_get_ethtool_stats(struct net_device *netdev,
+ struct ethtool_stats *stats, u64 *data)
+{
+ struct hinic_dev *nic_dev = netdev_priv(netdev);
+ struct hinic_vport_stats vport_stats = {0};
+ struct hinic_phy_port_stats *port_stats;
+ u16 i = 0, j = 0;
+ char *p;
+ int err;
+
+ err = hinic_get_vport_stats(nic_dev, &vport_stats);
+ if (err)
+ netif_err(nic_dev, drv, netdev,
+ "Failed to get vport stats from firmware\n");
+
+ for (j = 0; j < ARRAY_LEN(hinic_function_stats); j++, i++) {
+ p = (char *)&vport_stats + hinic_function_stats[j].offset;
+ data[i] = (hinic_function_stats[j].size ==
+ sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
+ }
+
+ port_stats = kzalloc(sizeof(*port_stats), GFP_KERNEL);
+ if (!port_stats) {
+ memset(&data[i], 0,
+ ARRAY_LEN(hinic_port_stats) * sizeof(*data));
+ i += ARRAY_LEN(hinic_port_stats);
+ goto get_drv_stats;
+ }
+
+ err = hinic_get_phy_port_stats(nic_dev, port_stats);
+ if (err)
+ netif_err(nic_dev, drv, netdev,
+ "Failed to get port stats from firmware\n");
+
+ for (j = 0; j < ARRAY_LEN(hinic_port_stats); j++, i++) {
+ p = (char *)port_stats + hinic_port_stats[j].offset;
+ data[i] = (hinic_port_stats[j].size ==
+ sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
+ }
+
+ kfree(port_stats);
+
+get_drv_stats:
+ get_drv_queue_stats(nic_dev, data + i);
+}
+
+static int hinic_get_sset_count(struct net_device *netdev, int sset)
+{
+ struct hinic_dev *nic_dev = netdev_priv(netdev);
+ int count, q_num;
+
+ switch (sset) {
+ case ETH_SS_STATS:
+ q_num = nic_dev->num_qps;
+ count = ARRAY_LEN(hinic_function_stats) +
+ (ARRAY_LEN(hinic_tx_queue_stats) +
+ ARRAY_LEN(hinic_rx_queue_stats)) * q_num;
+
+ count += ARRAY_LEN(hinic_port_stats);
+
+ return count;
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
+static void hinic_get_strings(struct net_device *netdev,
+ u32 stringset, u8 *data)
+{
+ struct hinic_dev *nic_dev = netdev_priv(netdev);
+ char *p = (char *)data;
+ u16 i, j;
+
+ switch (stringset) {
+ case ETH_SS_STATS:
+ for (i = 0; i < ARRAY_LEN(hinic_function_stats); i++) {
+ memcpy(p, hinic_function_stats[i].name,
+ ETH_GSTRING_LEN);
+ p += ETH_GSTRING_LEN;
+ }
+
+ for (i = 0; i < ARRAY_LEN(hinic_port_stats); i++) {
+ memcpy(p, hinic_port_stats[i].name,
+ ETH_GSTRING_LEN);
+ p += ETH_GSTRING_LEN;
+ }
+
+ for (i = 0; i < nic_dev->num_qps; i++) {
+ for (j = 0; j < ARRAY_LEN(hinic_tx_queue_stats); j++) {
+ sprintf(p, hinic_tx_queue_stats[j].name, i);
+ p += ETH_GSTRING_LEN;
+ }
+ }
+
+ for (i = 0; i < nic_dev->num_qps; i++) {
+ for (j = 0; j < ARRAY_LEN(hinic_rx_queue_stats); j++) {
+ sprintf(p, hinic_rx_queue_stats[j].name, i);
+ p += ETH_GSTRING_LEN;
+ }
+ }
+
+ return;
+ default:
+ return;
+ }
+}
+
+static const struct ethtool_ops hinic_ethtool_ops = {
+ .get_link_ksettings = hinic_get_link_ksettings,
+ .get_drvinfo = hinic_get_drvinfo,
+ .get_link = ethtool_op_get_link,
+ .get_ringparam = hinic_get_ringparam,
+ .get_channels = hinic_get_channels,
+ .get_rxnfc = hinic_get_rxnfc,
+ .set_rxnfc = hinic_set_rxnfc,
+ .get_rxfh_key_size = hinic_get_rxfh_key_size,
+ .get_rxfh_indir_size = hinic_get_rxfh_indir_size,
+ .get_rxfh = hinic_get_rxfh,
+ .set_rxfh = hinic_set_rxfh,
+ .get_sset_count = hinic_get_sset_count,
+ .get_ethtool_stats = hinic_get_ethtool_stats,
+ .get_strings = hinic_get_strings,
+};
+
+void hinic_set_ethtool_ops(struct net_device *netdev)
+{
+ netdev->ethtool_ops = &hinic_ethtool_ops;
+}
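The new hinic_ethtool.c ties .get_sset_count, .get_strings and .get_ethtool_stats together: the ethtool core sizes the user buffers from get_sset_count(), so all three callbacks must walk the stat tables in the same order and produce the same count. A minimal sketch of that contract is below; the demo_* table and callbacks are hypothetical, not the driver's.

#include <linux/errno.h>
#include <linux/ethtool.h>
#include <linux/kernel.h>
#include <linux/netdevice.h>
#include <linux/string.h>
#include <linux/types.h>

struct demo_stat {
        char name[ETH_GSTRING_LEN];
        u64 value;
};

/* Hypothetical stat table; the real driver walks several tables per queue. */
static struct demo_stat demo_stats[] = {
        { "rx_pkts", 0 },
        { "tx_pkts", 0 },
};

static int demo_get_sset_count(struct net_device *netdev, int sset)
{
        if (sset != ETH_SS_STATS)
                return -EOPNOTSUPP;

        return ARRAY_SIZE(demo_stats);
}

static void demo_get_strings(struct net_device *netdev, u32 sset, u8 *data)
{
        int i;

        if (sset != ETH_SS_STATS)
                return;

        for (i = 0; i < ARRAY_SIZE(demo_stats); i++)
                memcpy(data + i * ETH_GSTRING_LEN, demo_stats[i].name,
                       ETH_GSTRING_LEN);
}

static void demo_get_ethtool_stats(struct net_device *netdev,
                                   struct ethtool_stats *stats, u64 *data)
{
        int i;

        /* must fill exactly get_sset_count() entries, in get_strings() order */
        for (i = 0; i < ARRAY_SIZE(demo_stats); i++)
                data[i] = demo_stats[i].value;
}

static const struct ethtool_ops demo_ethtool_ops = {
        .get_sset_count         = demo_get_sset_count,
        .get_strings            = demo_get_strings,
        .get_ethtool_stats      = demo_get_ethtool_stats,
};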
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c
index 408705687de6..6f2cf569a283 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c
@@ -89,9 +89,6 @@ static int get_capability(struct hinic_hwdev *hwdev,
if (nic_cap->num_qps > HINIC_Q_CTXT_MAX)
nic_cap->num_qps = HINIC_Q_CTXT_MAX;
- /* num_qps must be power of 2 */
- nic_cap->num_qps = BIT(fls(nic_cap->num_qps) - 1);
-
nic_cap->max_qps = dev_cap->max_sqs + 1;
if (nic_cap->max_qps != (dev_cap->max_rqs + 1))
return -EFAULT;
@@ -304,6 +301,8 @@ static int set_hw_ioctxt(struct hinic_hwdev *hwdev, unsigned int rq_depth,
hw_ioctxt.set_cmdq_depth = HW_IOCTXT_SET_CMDQ_DEPTH_DEFAULT;
hw_ioctxt.cmdq_depth = 0;
+ hw_ioctxt.lro_en = 1;
+
hw_ioctxt.rq_depth = ilog2(rq_depth);
hw_ioctxt.rx_buf_sz_idx = HINIC_RX_BUF_SZ_IDX;
@@ -872,6 +871,13 @@ void hinic_free_hwdev(struct hinic_hwdev *hwdev)
hinic_free_hwif(hwdev->hwif);
}
+int hinic_hwdev_max_num_qps(struct hinic_hwdev *hwdev)
+{
+ struct hinic_cap *nic_cap = &hwdev->nic_cap;
+
+ return nic_cap->max_qps;
+}
+
/**
* hinic_hwdev_num_qps - return the number QPs available for use
* @hwdev: the NIC HW device
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.h b/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.h
index a0a5b7434ad7..b069045de416 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.h
+++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.h
@@ -41,21 +41,73 @@ enum hinic_port_cmd {
HINIC_PORT_CMD_GET_LINK_STATE = 24,
+ HINIC_PORT_CMD_SET_LRO = 25,
+
HINIC_PORT_CMD_SET_RX_CSUM = 26,
+ HINIC_PORT_CMD_SET_RX_VLAN_OFFLOAD = 27,
+
+ HINIC_PORT_CMD_GET_PORT_STATISTICS = 28,
+
+ HINIC_PORT_CMD_CLEAR_PORT_STATISTICS = 29,
+
+ HINIC_PORT_CMD_GET_VPORT_STAT = 30,
+
+ HINIC_PORT_CMD_CLEAN_VPORT_STAT = 31,
+
+ HINIC_PORT_CMD_GET_RSS_TEMPLATE_INDIR_TBL = 37,
+
HINIC_PORT_CMD_SET_PORT_STATE = 41,
+ HINIC_PORT_CMD_SET_RSS_TEMPLATE_TBL = 43,
+
+ HINIC_PORT_CMD_GET_RSS_TEMPLATE_TBL = 44,
+
+ HINIC_PORT_CMD_SET_RSS_HASH_ENGINE = 45,
+
+ HINIC_PORT_CMD_GET_RSS_HASH_ENGINE = 46,
+
+ HINIC_PORT_CMD_GET_RSS_CTX_TBL = 47,
+
+ HINIC_PORT_CMD_SET_RSS_CTX_TBL = 48,
+
+ HINIC_PORT_CMD_RSS_TEMP_MGR = 49,
+
+ HINIC_PORT_CMD_RSS_CFG = 66,
+
HINIC_PORT_CMD_FWCTXT_INIT = 69,
+ HINIC_PORT_CMD_GET_MGMT_VERSION = 88,
+
HINIC_PORT_CMD_SET_FUNC_STATE = 93,
HINIC_PORT_CMD_GET_GLOBAL_QPN = 102,
HINIC_PORT_CMD_SET_TSO = 112,
+ HINIC_PORT_CMD_SET_RQ_IQ_MAP = 115,
+
HINIC_PORT_CMD_GET_CAP = 170,
+
+ HINIC_PORT_CMD_SET_LRO_TIMER = 244,
};
+enum hinic_ucode_cmd {
+ HINIC_UCODE_CMD_MODIFY_QUEUE_CONTEXT = 0,
+ HINIC_UCODE_CMD_CLEAN_QUEUE_CONTEXT,
+ HINIC_UCODE_CMD_ARM_SQ,
+ HINIC_UCODE_CMD_ARM_RQ,
+ HINIC_UCODE_CMD_SET_RSS_INDIR_TABLE,
+ HINIC_UCODE_CMD_SET_RSS_CONTEXT_TABLE,
+ HINIC_UCODE_CMD_GET_RSS_INDIR_TABLE,
+ HINIC_UCODE_CMD_GET_RSS_CONTEXT_TABLE,
+ HINIC_UCODE_CMD_SET_IQ_ENABLE,
+ HINIC_UCODE_CMD_SET_RQ_FLUSH = 10
+};
+
+#define NIC_RSS_CMD_TEMP_ALLOC 0x01
+#define NIC_RSS_CMD_TEMP_FREE 0x02
+
enum hinic_mgmt_msg_cmd {
HINIC_MGMT_MSG_CMD_BASE = 160,
@@ -97,7 +149,7 @@ struct hinic_cmd_hw_ioctxt {
u8 set_cmdq_depth;
u8 cmdq_depth;
- u8 rsvd2;
+ u8 lro_en;
u8 rsvd3;
u8 rsvd4;
u8 rsvd5;
@@ -215,6 +267,8 @@ struct hinic_hwdev *hinic_init_hwdev(struct pci_dev *pdev);
void hinic_free_hwdev(struct hinic_hwdev *hwdev);
+int hinic_hwdev_max_num_qps(struct hinic_hwdev *hwdev);
+
int hinic_hwdev_num_qps(struct hinic_hwdev *hwdev);
struct hinic_sq *hinic_hwdev_get_sq(struct hinic_hwdev *hwdev, int i);
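In struct hinic_cmd_hw_ioctxt the reserved byte rsvd2 becomes lro_en, which keeps the size and offsets of the message exchanged with management firmware unchanged. When a reserved field in a wire structure is repurposed like this, a compile-time layout check is a cheap safety net. The sketch below uses hypothetical demo_* names and an assumed layout; it is not part of the patch.

#include <linux/build_bug.h>
#include <linux/stddef.h>
#include <linux/types.h>

/* Hypothetical firmware message: a reserved byte repurposed as an enable flag */
struct demo_hw_ioctxt {
        u16 func_idx;           /* offset 0 */
        u16 rsvd1;              /* offset 2 */
        u8  set_cmdq_depth;     /* offset 4 */
        u8  cmdq_depth;         /* offset 5 */
        u8  lro_en;             /* offset 6, previously a reserved byte */
        u8  rsvd3;              /* offset 7 */
        u32 rq_depth;           /* offset 8 */
};

static inline void demo_check_layout(void)
{
        /* fails the build if the structure no longer matches the wire format */
        BUILD_BUG_ON(sizeof(struct demo_hw_ioctxt) != 12);
        BUILD_BUG_ON(offsetof(struct demo_hw_ioctxt, lro_en) != 6);
}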
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_io.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_io.c
index 2d07bdd17432..d66f86fa3f46 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_io.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_io.c
@@ -36,6 +36,7 @@
enum io_cmd {
IO_CMD_MODIFY_QUEUE_CTXT = 0,
+ IO_CMD_CLEAN_QUEUE_CTXT,
};
static void init_db_area_idx(struct hinic_free_db_area *free_db_area)
@@ -201,6 +202,59 @@ static int write_qp_ctxts(struct hinic_func_to_io *func_to_io, u16 base_qpn,
write_rq_ctxts(func_to_io, base_qpn, num_qps));
}
+static int hinic_clean_queue_offload_ctxt(struct hinic_func_to_io *func_to_io,
+ enum hinic_qp_ctxt_type ctxt_type)
+{
+ struct hinic_hwif *hwif = func_to_io->hwif;
+ struct hinic_clean_queue_ctxt *ctxt_block;
+ struct pci_dev *pdev = hwif->pdev;
+ struct hinic_cmdq_buf cmdq_buf;
+ u64 out_param = 0;
+ int err;
+
+ err = hinic_alloc_cmdq_buf(&func_to_io->cmdqs, &cmdq_buf);
+ if (err) {
+ dev_err(&pdev->dev, "Failed to allocate cmdq buf\n");
+ return err;
+ }
+
+ ctxt_block = cmdq_buf.buf;
+ ctxt_block->cmdq_hdr.num_queues = func_to_io->max_qps;
+ ctxt_block->cmdq_hdr.queue_type = ctxt_type;
+ ctxt_block->cmdq_hdr.addr_offset = 0;
+
+ /* TSO/LRO ctxt size: 0x0:0B; 0x1:160B; 0x2:200B; 0x3:240B */
+ ctxt_block->ctxt_size = 0x3;
+
+ hinic_cpu_to_be32(ctxt_block, sizeof(*ctxt_block));
+
+ cmdq_buf.size = sizeof(*ctxt_block);
+
+ err = hinic_cmdq_direct_resp(&func_to_io->cmdqs, HINIC_MOD_L2NIC,
+ IO_CMD_CLEAN_QUEUE_CTXT,
+ &cmdq_buf, &out_param);
+
+ if (err || out_param) {
+ dev_err(&pdev->dev, "Failed to clean offload ctxts, err: %d, out_param: 0x%llx\n",
+ err, out_param);
+
+ err = -EFAULT;
+ }
+
+ hinic_free_cmdq_buf(&func_to_io->cmdqs, &cmdq_buf);
+
+ return err;
+}
+
+static int hinic_clean_qp_offload_ctxt(struct hinic_func_to_io *func_to_io)
+{
+ /* clean LRO/TSO context space */
+ return (hinic_clean_queue_offload_ctxt(func_to_io,
+ HINIC_QP_CTXT_TYPE_SQ) ||
+ hinic_clean_queue_offload_ctxt(func_to_io,
+ HINIC_QP_CTXT_TYPE_RQ));
+}
+
/**
* init_qp - Initialize a Queue Pair
* @func_to_io: func to io channel that holds the IO components
@@ -372,6 +426,12 @@ int hinic_io_create_qps(struct hinic_func_to_io *func_to_io,
goto err_write_qp_ctxts;
}
+ err = hinic_clean_qp_offload_ctxt(func_to_io);
+ if (err) {
+ dev_err(&pdev->dev, "Failed to clean QP contexts space\n");
+ goto err_write_qp_ctxts;
+ }
+
return 0;
err_write_qp_ctxts:
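The clean-context command added above fills a block of 32-bit words in a cmdq buffer, converts the whole block to big endian and then checks both the return code and the out_param written back by the microcode. A stripped-down sketch of the in-place endianness conversion step is shown below with hypothetical demo_* names; it mirrors what a cpu_to_be32() loop over the buffer does.

#include <asm/byteorder.h>
#include <linux/types.h>

/* Hypothetical command block made only of 32-bit words */
struct demo_clean_ctxt {
        u32 num_queues;
        u32 queue_type;
        u32 addr_offset;
        u32 ctxt_size;
};

/* Convert every 32-bit word of a block to big endian before handing it
 * to firmware.
 */
static void demo_block_to_be32(void *data, int len)
{
        u32 *word = data;
        int i;

        len = len / sizeof(u32);

        for (i = 0; i < len; i++) {
                *word = cpu_to_be32(*word);
                word++;
        }
}

It would be called as demo_block_to_be32(ctxt_block, sizeof(*ctxt_block)) before queueing the buffer.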
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_qp_ctxt.h b/drivers/net/ethernet/huawei/hinic/hinic_hw_qp_ctxt.h
index 1856fdcc1e32..00900a6640ad 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_qp_ctxt.h
+++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_qp_ctxt.h
@@ -192,6 +192,11 @@ struct hinic_rq_ctxt {
u32 wq_block_lo_pfn;
};
+struct hinic_clean_queue_ctxt {
+ struct hinic_qp_ctxt_header cmdq_hdr;
+ u32 ctxt_size;
+};
+
struct hinic_sq_ctxt_block {
struct hinic_qp_ctxt_header hdr;
struct hinic_sq_ctxt sq_ctxt[HINIC_Q_CTXT_MAX];
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_wqe.h b/drivers/net/ethernet/huawei/hinic/hinic_hw_wqe.h
index 8991c9a5ef04..f4b6d2c1061f 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_wqe.h
+++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_wqe.h
@@ -210,6 +210,57 @@
#define HINIC_MSS_DEFAULT 0x3E00
#define HINIC_MSS_MIN 0x50
+#define RQ_CQE_STATUS_NUM_LRO_SHIFT 16
+#define RQ_CQE_STATUS_NUM_LRO_MASK 0xFFU
+
+#define RQ_CQE_STATUS_GET(val, member) (((val) >> \
+ RQ_CQE_STATUS_##member##_SHIFT) & \
+ RQ_CQE_STATUS_##member##_MASK)
+
+#define HINIC_GET_RX_NUM_LRO(status) \
+ RQ_CQE_STATUS_GET(status, NUM_LRO)
+
+#define RQ_CQE_OFFOLAD_TYPE_PKT_TYPE_SHIFT 0
+#define RQ_CQE_OFFOLAD_TYPE_PKT_TYPE_MASK 0xFFFU
+#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_SHIFT 21
+#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_MASK 0x1U
+
+#define RQ_CQE_OFFOLAD_TYPE_GET(val, member) (((val) >> \
+ RQ_CQE_OFFOLAD_TYPE_##member##_SHIFT) & \
+ RQ_CQE_OFFOLAD_TYPE_##member##_MASK)
+
+#define HINIC_GET_RX_PKT_TYPE(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, PKT_TYPE)
+
+#define HINIC_GET_RX_VLAN_OFFLOAD_EN(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, VLAN_EN)
+
+#define RQ_CQE_SGE_VLAN_MASK 0xFFFFU
+#define RQ_CQE_SGE_VLAN_SHIFT 0
+
+#define RQ_CQE_SGE_GET(val, member) (((val) >> \
+ RQ_CQE_SGE_##member##_SHIFT) & \
+ RQ_CQE_SGE_##member##_MASK)
+
+#define HINIC_GET_RX_VLAN_TAG(vlan_len) \
+ RQ_CQE_SGE_GET(vlan_len, VLAN)
+
+#define HINIC_RSS_TYPE_VALID_SHIFT 23
+#define HINIC_RSS_TYPE_TCP_IPV6_EXT_SHIFT 24
+#define HINIC_RSS_TYPE_IPV6_EXT_SHIFT 25
+#define HINIC_RSS_TYPE_TCP_IPV6_SHIFT 26
+#define HINIC_RSS_TYPE_IPV6_SHIFT 27
+#define HINIC_RSS_TYPE_TCP_IPV4_SHIFT 28
+#define HINIC_RSS_TYPE_IPV4_SHIFT 29
+#define HINIC_RSS_TYPE_UDP_IPV6_SHIFT 30
+#define HINIC_RSS_TYPE_UDP_IPV4_SHIFT 31
+
+#define HINIC_RSS_TYPE_SET(val, member) \
+ (((u32)(val) & 0x1) << HINIC_RSS_TYPE_##member##_SHIFT)
+
+#define HINIC_RSS_TYPE_GET(val, member) \
+ (((u32)(val) >> HINIC_RSS_TYPE_##member##_SHIFT) & 0x1)
+
enum hinic_l4offload_type {
HINIC_L4_OFF_DISABLE = 0,
HINIC_TCP_OFFLOAD_ENABLE = 1,
@@ -363,7 +414,7 @@ struct hinic_rq_cqe {
u32 status;
u32 len;
- u32 rsvd2;
+ u32 offload_type;
u32 rsvd3;
u32 rsvd4;
u32 rsvd5;
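The new RQ_CQE_* and HINIC_RSS_TYPE_* definitions use the usual shift/mask idiom for packing several hardware fields into one 32-bit word. The round-trip example below shows the same pattern with hypothetical DEMO_* field names; it is illustrative only, not the driver's macros.

#include <linux/types.h>

/* Hypothetical 32-bit descriptor word: a VLAN-enable bit plus a packet type */
#define DEMO_VLAN_EN_SHIFT      21
#define DEMO_VLAN_EN_MASK       0x1U
#define DEMO_PKT_TYPE_SHIFT     0
#define DEMO_PKT_TYPE_MASK      0xFFFU

#define DEMO_SET(val, member) \
        (((u32)(val) & DEMO_##member##_MASK) << DEMO_##member##_SHIFT)
#define DEMO_GET(word, member) \
        (((word) >> DEMO_##member##_SHIFT) & DEMO_##member##_MASK)

static u32 demo_pack(u32 pkt_type, u32 vlan_en)
{
        return DEMO_SET(pkt_type, PKT_TYPE) | DEMO_SET(vlan_en, VLAN_EN);
}

static bool demo_vlan_enabled(u32 word)
{
        return DEMO_GET(word, VLAN_EN);
}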
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_main.c b/drivers/net/ethernet/huawei/hinic/hinic_main.c
index b695d29d364c..2411ad270c98 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_main.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_main.c
@@ -53,6 +53,10 @@ MODULE_PARM_DESC(rx_weight, "Number Rx packets for NAPI budget (default=64)");
NETIF_MSG_IFUP | \
NETIF_MSG_TX_ERR | NETIF_MSG_RX_ERR)
+#define HINIC_LRO_MAX_WQE_NUM_DEFAULT 8
+
+#define HINIC_LRO_RX_TIMER_DEFAULT 16
+
#define VLAN_BITMAP_SIZE(nic_dev) (ALIGN(VLAN_N_VID, 8) / 8)
#define work_to_rx_mode_work(work) \
@@ -63,137 +67,9 @@ MODULE_PARM_DESC(rx_weight, "Number Rx packets for NAPI budget (default=64)");
static int change_mac_addr(struct net_device *netdev, const u8 *addr);
-static void set_link_speed(struct ethtool_link_ksettings *link_ksettings,
- enum hinic_speed speed)
-{
- switch (speed) {
- case HINIC_SPEED_10MB_LINK:
- link_ksettings->base.speed = SPEED_10;
- break;
-
- case HINIC_SPEED_100MB_LINK:
- link_ksettings->base.speed = SPEED_100;
- break;
-
- case HINIC_SPEED_1000MB_LINK:
- link_ksettings->base.speed = SPEED_1000;
- break;
-
- case HINIC_SPEED_10GB_LINK:
- link_ksettings->base.speed = SPEED_10000;
- break;
-
- case HINIC_SPEED_25GB_LINK:
- link_ksettings->base.speed = SPEED_25000;
- break;
-
- case HINIC_SPEED_40GB_LINK:
- link_ksettings->base.speed = SPEED_40000;
- break;
-
- case HINIC_SPEED_100GB_LINK:
- link_ksettings->base.speed = SPEED_100000;
- break;
-
- default:
- link_ksettings->base.speed = SPEED_UNKNOWN;
- break;
- }
-}
-
-static int hinic_get_link_ksettings(struct net_device *netdev,
- struct ethtool_link_ksettings
- *link_ksettings)
-{
- struct hinic_dev *nic_dev = netdev_priv(netdev);
- enum hinic_port_link_state link_state;
- struct hinic_port_cap port_cap;
- int err;
-
- ethtool_link_ksettings_zero_link_mode(link_ksettings, advertising);
- ethtool_link_ksettings_add_link_mode(link_ksettings, supported,
- Autoneg);
-
- link_ksettings->base.speed = SPEED_UNKNOWN;
- link_ksettings->base.autoneg = AUTONEG_DISABLE;
- link_ksettings->base.duplex = DUPLEX_UNKNOWN;
-
- err = hinic_port_get_cap(nic_dev, &port_cap);
- if (err) {
- netif_err(nic_dev, drv, netdev,
- "Failed to get port capabilities\n");
- return err;
- }
-
- err = hinic_port_link_state(nic_dev, &link_state);
- if (err) {
- netif_err(nic_dev, drv, netdev,
- "Failed to get port link state\n");
- return err;
- }
-
- if (link_state != HINIC_LINK_STATE_UP) {
- netif_info(nic_dev, drv, netdev, "No link\n");
- return err;
- }
-
- set_link_speed(link_ksettings, port_cap.speed);
-
- if (!!(port_cap.autoneg_cap & HINIC_AUTONEG_SUPPORTED))
- ethtool_link_ksettings_add_link_mode(link_ksettings,
- advertising, Autoneg);
-
- if (port_cap.autoneg_state == HINIC_AUTONEG_ACTIVE)
- link_ksettings->base.autoneg = AUTONEG_ENABLE;
-
- link_ksettings->base.duplex = (port_cap.duplex == HINIC_DUPLEX_FULL) ?
- DUPLEX_FULL : DUPLEX_HALF;
- return 0;
-}
-
-static void hinic_get_drvinfo(struct net_device *netdev,
- struct ethtool_drvinfo *info)
-{
- struct hinic_dev *nic_dev = netdev_priv(netdev);
- struct hinic_hwdev *hwdev = nic_dev->hwdev;
- struct hinic_hwif *hwif = hwdev->hwif;
-
- strlcpy(info->driver, HINIC_DRV_NAME, sizeof(info->driver));
- strlcpy(info->bus_info, pci_name(hwif->pdev), sizeof(info->bus_info));
-}
-
-static void hinic_get_ringparam(struct net_device *netdev,
- struct ethtool_ringparam *ring)
-{
- ring->rx_max_pending = HINIC_RQ_DEPTH;
- ring->tx_max_pending = HINIC_SQ_DEPTH;
- ring->rx_pending = HINIC_RQ_DEPTH;
- ring->tx_pending = HINIC_SQ_DEPTH;
-}
-
-static void hinic_get_channels(struct net_device *netdev,
- struct ethtool_channels *channels)
-{
- struct hinic_dev *nic_dev = netdev_priv(netdev);
- struct hinic_hwdev *hwdev = nic_dev->hwdev;
-
- channels->max_rx = hwdev->nic_cap.max_qps;
- channels->max_tx = hwdev->nic_cap.max_qps;
- channels->max_other = 0;
- channels->max_combined = 0;
- channels->rx_count = hinic_hwdev_num_qps(hwdev);
- channels->tx_count = hinic_hwdev_num_qps(hwdev);
- channels->other_count = 0;
- channels->combined_count = 0;
-}
-
-static const struct ethtool_ops hinic_ethtool_ops = {
- .get_link_ksettings = hinic_get_link_ksettings,
- .get_drvinfo = hinic_get_drvinfo,
- .get_link = ethtool_op_get_link,
- .get_ringparam = hinic_get_ringparam,
- .get_channels = hinic_get_channels,
-};
+static int set_features(struct hinic_dev *nic_dev,
+ netdev_features_t pre_features,
+ netdev_features_t features, bool force_change);
static void update_rx_stats(struct hinic_dev *nic_dev, struct hinic_rxq *rxq)
{
@@ -207,6 +83,9 @@ static void update_rx_stats(struct hinic_dev *nic_dev, struct hinic_rxq *rxq)
u64_stats_update_begin(&nic_rx_stats->syncp);
nic_rx_stats->bytes += rx_stats.bytes;
nic_rx_stats->pkts += rx_stats.pkts;
+ nic_rx_stats->errors += rx_stats.errors;
+ nic_rx_stats->csum_errors += rx_stats.csum_errors;
+ nic_rx_stats->other_errors += rx_stats.other_errors;
u64_stats_update_end(&nic_rx_stats->syncp);
hinic_rxq_clean_stats(rxq);
@@ -227,6 +106,7 @@ static void update_tx_stats(struct hinic_dev *nic_dev, struct hinic_txq *txq)
nic_tx_stats->tx_busy += tx_stats.tx_busy;
nic_tx_stats->tx_wake += tx_stats.tx_wake;
nic_tx_stats->tx_dropped += tx_stats.tx_dropped;
+ nic_tx_stats->big_frags_pkts += tx_stats.big_frags_pkts;
u64_stats_update_end(&nic_tx_stats->syncp);
hinic_txq_clean_stats(txq);
@@ -363,11 +243,135 @@ static void free_rxqs(struct hinic_dev *nic_dev)
nic_dev->rxqs = NULL;
}
+static int hinic_configure_max_qnum(struct hinic_dev *nic_dev)
+{
+ int err;
+
+ err = hinic_set_max_qnum(nic_dev, nic_dev->hwdev->nic_cap.max_qps);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+static int hinic_rss_init(struct hinic_dev *nic_dev)
+{
+ u8 default_rss_key[HINIC_RSS_KEY_SIZE];
+ u8 tmpl_idx = nic_dev->rss_tmpl_idx;
+ u32 *indir_tbl;
+ int err, i;
+
+ indir_tbl = kcalloc(HINIC_RSS_INDIR_SIZE, sizeof(u32), GFP_KERNEL);
+ if (!indir_tbl)
+ return -ENOMEM;
+
+ netdev_rss_key_fill(default_rss_key, sizeof(default_rss_key));
+ for (i = 0; i < HINIC_RSS_INDIR_SIZE; i++)
+ indir_tbl[i] = ethtool_rxfh_indir_default(i, nic_dev->num_rss);
+
+ err = hinic_rss_set_template_tbl(nic_dev, tmpl_idx, default_rss_key);
+ if (err)
+ goto out;
+
+ err = hinic_rss_set_indir_tbl(nic_dev, tmpl_idx, indir_tbl);
+ if (err)
+ goto out;
+
+ err = hinic_set_rss_type(nic_dev, tmpl_idx, nic_dev->rss_type);
+ if (err)
+ goto out;
+
+ err = hinic_rss_set_hash_engine(nic_dev, tmpl_idx,
+ nic_dev->rss_hash_engine);
+ if (err)
+ goto out;
+
+ err = hinic_rss_cfg(nic_dev, 1, tmpl_idx);
+ if (err)
+ goto out;
+
+out:
+ kfree(indir_tbl);
+ return err;
+}
+
+static void hinic_rss_deinit(struct hinic_dev *nic_dev)
+{
+ hinic_rss_cfg(nic_dev, 0, nic_dev->rss_tmpl_idx);
+}
+
+static void hinic_init_rss_parameters(struct hinic_dev *nic_dev)
+{
+ nic_dev->rss_hash_engine = HINIC_RSS_HASH_ENGINE_TYPE_XOR;
+ nic_dev->rss_type.tcp_ipv6_ext = 1;
+ nic_dev->rss_type.ipv6_ext = 1;
+ nic_dev->rss_type.tcp_ipv6 = 1;
+ nic_dev->rss_type.ipv6 = 1;
+ nic_dev->rss_type.tcp_ipv4 = 1;
+ nic_dev->rss_type.ipv4 = 1;
+ nic_dev->rss_type.udp_ipv6 = 1;
+ nic_dev->rss_type.udp_ipv4 = 1;
+}
+
+static void hinic_enable_rss(struct hinic_dev *nic_dev)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ struct hinic_hwdev *hwdev = nic_dev->hwdev;
+ struct hinic_hwif *hwif = hwdev->hwif;
+ struct pci_dev *pdev = hwif->pdev;
+ int i, node, err = 0;
+ u16 num_cpus = 0;
+
+ nic_dev->max_qps = hinic_hwdev_max_num_qps(hwdev);
+ if (nic_dev->max_qps <= 1) {
+ nic_dev->flags &= ~HINIC_RSS_ENABLE;
+ nic_dev->rss_limit = nic_dev->max_qps;
+ nic_dev->num_qps = nic_dev->max_qps;
+ nic_dev->num_rss = nic_dev->max_qps;
+
+ return;
+ }
+
+ err = hinic_rss_template_alloc(nic_dev, &nic_dev->rss_tmpl_idx);
+ if (err) {
+ netif_err(nic_dev, drv, netdev,
+ "Failed to alloc tmpl_idx for rss, can't enable rss for this function\n");
+ nic_dev->flags &= ~HINIC_RSS_ENABLE;
+ nic_dev->max_qps = 1;
+ nic_dev->rss_limit = nic_dev->max_qps;
+ nic_dev->num_qps = nic_dev->max_qps;
+ nic_dev->num_rss = nic_dev->max_qps;
+
+ return;
+ }
+
+ nic_dev->flags |= HINIC_RSS_ENABLE;
+
+ for (i = 0; i < num_online_cpus(); i++) {
+ node = cpu_to_node(i);
+ if (node == dev_to_node(&pdev->dev))
+ num_cpus++;
+ }
+
+ if (!num_cpus)
+ num_cpus = num_online_cpus();
+
+ nic_dev->num_qps = min_t(u16, nic_dev->max_qps, num_cpus);
+
+ nic_dev->rss_limit = nic_dev->num_qps;
+ nic_dev->num_rss = nic_dev->num_qps;
+
+ hinic_init_rss_parameters(nic_dev);
+ err = hinic_rss_init(nic_dev);
+ if (err)
+ netif_err(nic_dev, drv, netdev, "Failed to init rss\n");
+}
+
static int hinic_open(struct net_device *netdev)
{
struct hinic_dev *nic_dev = netdev_priv(netdev);
enum hinic_port_link_state link_state;
- int err, ret, num_qps;
+ int err, ret;
if (!(nic_dev->flags & HINIC_INTF_UP)) {
err = hinic_hwdev_ifup(nic_dev->hwdev);
@@ -392,9 +396,17 @@ static int hinic_open(struct net_device *netdev)
goto err_create_rxqs;
}
- num_qps = hinic_hwdev_num_qps(nic_dev->hwdev);
- netif_set_real_num_tx_queues(netdev, num_qps);
- netif_set_real_num_rx_queues(netdev, num_qps);
+ hinic_enable_rss(nic_dev);
+
+ err = hinic_configure_max_qnum(nic_dev);
+ if (err) {
+ netif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to configure the maximum number of queues\n");
+ goto err_port_state;
+ }
+
+ netif_set_real_num_tx_queues(netdev, nic_dev->num_qps);
+ netif_set_real_num_rx_queues(netdev, nic_dev->num_qps);
err = hinic_port_set_state(nic_dev, HINIC_PORT_ENABLE);
if (err) {
@@ -450,9 +462,12 @@ err_func_port_state:
if (ret)
netif_warn(nic_dev, drv, netdev,
"Failed to revert port state\n");
-
err_port_state:
free_rxqs(nic_dev);
+ if (nic_dev->flags & HINIC_RSS_ENABLE) {
+ hinic_rss_deinit(nic_dev);
+ hinic_rss_template_free(nic_dev, nic_dev->rss_tmpl_idx);
+ }
err_create_rxqs:
free_txqs(nic_dev);
@@ -496,6 +511,11 @@ static int hinic_close(struct net_device *netdev)
return err;
}
+ if (nic_dev->flags & HINIC_RSS_ENABLE) {
+ hinic_rss_deinit(nic_dev);
+ hinic_rss_template_free(nic_dev, nic_dev->rss_tmpl_idx);
+ }
+
free_rxqs(nic_dev);
free_txqs(nic_dev);
@@ -715,7 +735,6 @@ static void set_rx_mode(struct work_struct *work)
{
struct hinic_rx_mode_work *rx_mode_work = work_to_rx_mode_work(work);
struct hinic_dev *nic_dev = rx_mode_work_to_nic_dev(rx_mode_work);
- struct netdev_hw_addr *ha;
netif_info(nic_dev, drv, nic_dev->netdev, "set rx mode work\n");
@@ -723,9 +742,6 @@ static void set_rx_mode(struct work_struct *work)
__dev_uc_sync(nic_dev->netdev, add_mac_addr, remove_mac_addr);
__dev_mc_sync(nic_dev->netdev, add_mac_addr, remove_mac_addr);
-
- netdev_for_each_mc_addr(ha, nic_dev->netdev)
- add_mac_addr(nic_dev->netdev, ha->addr);
}
static void hinic_set_rx_mode(struct net_device *netdev)
@@ -776,12 +792,36 @@ static void hinic_get_stats64(struct net_device *netdev,
stats->rx_bytes = nic_rx_stats->bytes;
stats->rx_packets = nic_rx_stats->pkts;
+ stats->rx_errors = nic_rx_stats->errors;
stats->tx_bytes = nic_tx_stats->bytes;
stats->tx_packets = nic_tx_stats->pkts;
stats->tx_errors = nic_tx_stats->tx_dropped;
}
+static int hinic_set_features(struct net_device *netdev,
+ netdev_features_t features)
+{
+ struct hinic_dev *nic_dev = netdev_priv(netdev);
+
+ return set_features(nic_dev, nic_dev->netdev->features,
+ features, false);
+}
+
+static netdev_features_t hinic_fix_features(struct net_device *netdev,
+ netdev_features_t features)
+{
+ struct hinic_dev *nic_dev = netdev_priv(netdev);
+
+ /* If Rx checksum is disabled, then LRO should also be disabled */
+ if (!(features & NETIF_F_RXCSUM)) {
+ netif_info(nic_dev, drv, netdev, "disabling LRO as RXCSUM is off\n");
+ features &= ~NETIF_F_LRO;
+ }
+
+ return features;
+}
+
static const struct net_device_ops hinic_netdev_ops = {
.ndo_open = hinic_open,
.ndo_stop = hinic_close,
@@ -794,13 +834,16 @@ static const struct net_device_ops hinic_netdev_ops = {
.ndo_start_xmit = hinic_xmit_frame,
.ndo_tx_timeout = hinic_tx_timeout,
.ndo_get_stats64 = hinic_get_stats64,
+ .ndo_fix_features = hinic_fix_features,
+ .ndo_set_features = hinic_set_features,
};
static void netdev_features_init(struct net_device *netdev)
{
netdev->hw_features = NETIF_F_SG | NETIF_F_HIGHDMA | NETIF_F_IP_CSUM |
NETIF_F_IPV6_CSUM | NETIF_F_TSO | NETIF_F_TSO6 |
- NETIF_F_RXCSUM;
+ NETIF_F_RXCSUM | NETIF_F_LRO |
+ NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_CTAG_RX;
netdev->vlan_features = netdev->hw_features;
@@ -873,6 +916,18 @@ static int set_features(struct hinic_dev *nic_dev,
if (changed & NETIF_F_RXCSUM)
err = hinic_set_rx_csum_offload(nic_dev, csum_en);
+ if (changed & NETIF_F_LRO) {
+ err = hinic_set_rx_lro_state(nic_dev,
+ !!(features & NETIF_F_LRO),
+ HINIC_LRO_RX_TIMER_DEFAULT,
+ HINIC_LRO_MAX_WQE_NUM_DEFAULT);
+ }
+
+ if (changed & NETIF_F_HW_VLAN_CTAG_RX)
+ err = hinic_set_rx_vlan_offload(nic_dev,
+ !!(features &
+ NETIF_F_HW_VLAN_CTAG_RX));
+
return err;
}
@@ -912,8 +967,8 @@ static int nic_dev_init(struct pci_dev *pdev)
goto err_alloc_etherdev;
}
+ hinic_set_ethtool_ops(netdev);
netdev->netdev_ops = &hinic_netdev_ops;
- netdev->ethtool_ops = &hinic_ethtool_ops;
netdev->max_mtu = ETH_MAX_MTU;
nic_dev = netdev_priv(netdev);
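hinic_enable_rss() in the hunk above sizes the queue count from the number of online CPUs that sit on the NIC's NUMA node, falling back to all online CPUs when none match, and then clamps it to the hardware maximum. A compact sketch of that selection is below, using the generic cpumask/NUMA helpers; demo_pick_num_qps() and its arguments are hypothetical.

#include <linux/cpumask.h>
#include <linux/device.h>
#include <linux/kernel.h>
#include <linux/topology.h>
#include <linux/types.h>

/* Prefer CPUs local to the device's NUMA node when choosing a queue count */
static u16 demo_pick_num_qps(struct device *dev, u16 max_qps)
{
        int node = dev_to_node(dev);
        u16 num_cpus = 0;
        int cpu;

        for_each_online_cpu(cpu) {
                if (cpu_to_node(cpu) == node)
                        num_cpus++;
        }

        if (!num_cpus)
                num_cpus = num_online_cpus();

        return min_t(u16, max_qps, num_cpus);
}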
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_port.c b/drivers/net/ethernet/huawei/hinic/hinic_port.c
index 4b3b7d39e437..1e389a004e50 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_port.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_port.c
@@ -430,3 +430,641 @@ int hinic_set_rx_csum_offload(struct hinic_dev *nic_dev, u32 en)
return 0;
}
+
+int hinic_set_rx_vlan_offload(struct hinic_dev *nic_dev, u8 en)
+{
+ struct hinic_hwdev *hwdev = nic_dev->hwdev;
+ struct hinic_vlan_cfg vlan_cfg;
+ struct hinic_hwif *hwif;
+ struct pci_dev *pdev;
+ u16 out_size;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ hwif = hwdev->hwif;
+ pdev = hwif->pdev;
+ vlan_cfg.func_id = HINIC_HWIF_FUNC_IDX(hwif);
+ vlan_cfg.vlan_rx_offload = en;
+
+ err = hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_SET_RX_VLAN_OFFLOAD,
+ &vlan_cfg, sizeof(vlan_cfg),
+ &vlan_cfg, &out_size);
+ if (err || !out_size || vlan_cfg.status) {
+ dev_err(&pdev->dev,
+ "Failed to set rx vlan offload, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, vlan_cfg.status, out_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int hinic_set_max_qnum(struct hinic_dev *nic_dev, u8 num_rqs)
+{
+ struct hinic_hwdev *hwdev = nic_dev->hwdev;
+ struct hinic_hwif *hwif = hwdev->hwif;
+ struct pci_dev *pdev = hwif->pdev;
+ struct hinic_rq_num rq_num = { 0 };
+ u16 out_size = sizeof(rq_num);
+ int err;
+
+ rq_num.func_id = HINIC_HWIF_FUNC_IDX(hwif);
+ rq_num.num_rqs = num_rqs;
+ rq_num.rq_depth = ilog2(HINIC_SQ_DEPTH);
+
+ err = hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_SET_RQ_IQ_MAP,
+ &rq_num, sizeof(rq_num),
+ &rq_num, &out_size);
+ if (err || !out_size || rq_num.status) {
+ dev_err(&pdev->dev,
+ "Failed to rxq number, ret = %d\n",
+ rq_num.status);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int hinic_set_rx_lro(struct hinic_dev *nic_dev, u8 ipv4_en, u8 ipv6_en,
+ u8 max_wqe_num)
+{
+ struct hinic_hwdev *hwdev = nic_dev->hwdev;
+ struct hinic_hwif *hwif = hwdev->hwif;
+ struct hinic_lro_config lro_cfg = { 0 };
+ struct pci_dev *pdev = hwif->pdev;
+ u16 out_size = sizeof(lro_cfg);
+ int err;
+
+ lro_cfg.func_id = HINIC_HWIF_FUNC_IDX(hwif);
+ lro_cfg.lro_ipv4_en = ipv4_en;
+ lro_cfg.lro_ipv6_en = ipv6_en;
+ lro_cfg.lro_max_wqe_num = max_wqe_num;
+
+ err = hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_SET_LRO,
+ &lro_cfg, sizeof(lro_cfg),
+ &lro_cfg, &out_size);
+ if (err || !out_size || lro_cfg.status) {
+ dev_err(&pdev->dev,
+ "Failed to set lro offload, ret = %d\n",
+ lro_cfg.status);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int hinic_set_rx_lro_timer(struct hinic_dev *nic_dev, u32 timer_value)
+{
+ struct hinic_hwdev *hwdev = nic_dev->hwdev;
+ struct hinic_lro_timer lro_timer = { 0 };
+ struct hinic_hwif *hwif = hwdev->hwif;
+ struct pci_dev *pdev = hwif->pdev;
+ u16 out_size = sizeof(lro_timer);
+ int err;
+
+ lro_timer.status = 0;
+ lro_timer.type = 0;
+ lro_timer.enable = 1;
+ lro_timer.timer = timer_value;
+
+ err = hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_SET_LRO_TIMER,
+ &lro_timer, sizeof(lro_timer),
+ &lro_timer, &out_size);
+ if (lro_timer.status == 0xFF) {
+ /* status 0xFF means the firmware does not support this command; treat it as success */
+ lro_timer.status = 0;
+ dev_dbg(&pdev->dev,
+ "Set lro timer not supported by the current FW version, it will be 1ms default\n");
+ }
+
+ if (err || !out_size || lro_timer.status) {
+ dev_err(&pdev->dev,
+ "Failed to set lro timer, ret = %d\n",
+ lro_timer.status);
+
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int hinic_set_rx_lro_state(struct hinic_dev *nic_dev, u8 lro_en,
+ u32 lro_timer, u32 wqe_num)
+{
+ struct hinic_hwdev *hwdev = nic_dev->hwdev;
+ u8 ipv4_en;
+ u8 ipv6_en;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ ipv4_en = lro_en ? 1 : 0;
+ ipv6_en = lro_en ? 1 : 0;
+
+ err = hinic_set_rx_lro(nic_dev, ipv4_en, ipv6_en, (u8)wqe_num);
+ if (err)
+ return err;
+
+ err = hinic_set_rx_lro_timer(nic_dev, lro_timer);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+int hinic_rss_set_indir_tbl(struct hinic_dev *nic_dev, u32 tmpl_idx,
+ const u32 *indir_table)
+{
+ struct hinic_rss_indirect_tbl *indir_tbl;
+ struct hinic_func_to_io *func_to_io;
+ struct hinic_cmdq_buf cmd_buf;
+ struct hinic_hwdev *hwdev;
+ struct hinic_hwif *hwif;
+ struct pci_dev *pdev;
+ u32 indir_size;
+ u64 out_param;
+ int err, i;
+ u32 *temp;
+
+ hwdev = nic_dev->hwdev;
+ func_to_io = &hwdev->func_to_io;
+ hwif = hwdev->hwif;
+ pdev = hwif->pdev;
+
+ err = hinic_alloc_cmdq_buf(&func_to_io->cmdqs, &cmd_buf);
+ if (err) {
+ dev_err(&pdev->dev, "Failed to allocate cmdq buf\n");
+ return err;
+ }
+
+ cmd_buf.size = sizeof(*indir_tbl);
+
+ indir_tbl = cmd_buf.buf;
+ indir_tbl->group_index = cpu_to_be32(tmpl_idx);
+
+ for (i = 0; i < HINIC_RSS_INDIR_SIZE; i++) {
+ indir_tbl->entry[i] = indir_table[i];
+
+ if (0x3 == (i & 0x3)) {
+ temp = (u32 *)&indir_tbl->entry[i - 3];
+ *temp = cpu_to_be32(*temp);
+ }
+ }
+
+ /* cfg the rss indirect table by command queue */
+ indir_size = HINIC_RSS_INDIR_SIZE / 2;
+ indir_tbl->offset = 0;
+ indir_tbl->size = cpu_to_be32(indir_size);
+
+ err = hinic_cmdq_direct_resp(&func_to_io->cmdqs, HINIC_MOD_L2NIC,
+ HINIC_UCODE_CMD_SET_RSS_INDIR_TABLE,
+ &cmd_buf, &out_param);
+ if (err || out_param != 0) {
+ dev_err(&pdev->dev, "Failed to set rss indir table\n");
+ err = -EFAULT;
+ goto free_buf;
+ }
+
+ indir_tbl->offset = cpu_to_be32(indir_size);
+ indir_tbl->size = cpu_to_be32(indir_size);
+ memcpy(&indir_tbl->entry[0], &indir_tbl->entry[indir_size], indir_size);
+
+ err = hinic_cmdq_direct_resp(&func_to_io->cmdqs, HINIC_MOD_L2NIC,
+ HINIC_UCODE_CMD_SET_RSS_INDIR_TABLE,
+ &cmd_buf, &out_param);
+ if (err || out_param != 0) {
+ dev_err(&pdev->dev, "Failed to set rss indir table\n");
+ err = -EFAULT;
+ }
+
+free_buf:
+ hinic_free_cmdq_buf(&func_to_io->cmdqs, &cmd_buf);
+
+ return err;
+}
+
+int hinic_rss_get_indir_tbl(struct hinic_dev *nic_dev, u32 tmpl_idx,
+ u32 *indir_table)
+{
+ struct hinic_rss_indir_table rss_cfg = { 0 };
+ struct hinic_hwdev *hwdev = nic_dev->hwdev;
+ struct hinic_hwif *hwif = hwdev->hwif;
+ struct pci_dev *pdev = hwif->pdev;
+ u16 out_size = sizeof(rss_cfg);
+ int err = 0, i;
+
+ rss_cfg.func_id = HINIC_HWIF_FUNC_IDX(hwif);
+ rss_cfg.template_id = tmpl_idx;
+
+ err = hinic_port_msg_cmd(hwdev,
+ HINIC_PORT_CMD_GET_RSS_TEMPLATE_INDIR_TBL,
+ &rss_cfg, sizeof(rss_cfg), &rss_cfg,
+ &out_size);
+ if (err || !out_size || rss_cfg.status) {
+ dev_err(&pdev->dev, "Failed to get indir table, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, rss_cfg.status, out_size);
+ return -EINVAL;
+ }
+
+ hinic_be32_to_cpu(rss_cfg.indir, HINIC_RSS_INDIR_SIZE);
+ for (i = 0; i < HINIC_RSS_INDIR_SIZE; i++)
+ indir_table[i] = rss_cfg.indir[i];
+
+ return 0;
+}
+
+int hinic_set_rss_type(struct hinic_dev *nic_dev, u32 tmpl_idx,
+ struct hinic_rss_type rss_type)
+{
+ struct hinic_rss_context_tbl *ctx_tbl;
+ struct hinic_func_to_io *func_to_io;
+ struct hinic_cmdq_buf cmd_buf;
+ struct hinic_hwdev *hwdev;
+ struct hinic_hwif *hwif;
+ struct pci_dev *pdev;
+ u64 out_param;
+ u32 ctx = 0;
+ int err;
+
+ hwdev = nic_dev->hwdev;
+ func_to_io = &hwdev->func_to_io;
+ hwif = hwdev->hwif;
+ pdev = hwif->pdev;
+
+ err = hinic_alloc_cmdq_buf(&func_to_io->cmdqs, &cmd_buf);
+ if (err) {
+ dev_err(&pdev->dev, "Failed to allocate cmd buf\n");
+ return -ENOMEM;
+ }
+
+ ctx |= HINIC_RSS_TYPE_SET(1, VALID) |
+ HINIC_RSS_TYPE_SET(rss_type.ipv4, IPV4) |
+ HINIC_RSS_TYPE_SET(rss_type.ipv6, IPV6) |
+ HINIC_RSS_TYPE_SET(rss_type.ipv6_ext, IPV6_EXT) |
+ HINIC_RSS_TYPE_SET(rss_type.tcp_ipv4, TCP_IPV4) |
+ HINIC_RSS_TYPE_SET(rss_type.tcp_ipv6, TCP_IPV6) |
+ HINIC_RSS_TYPE_SET(rss_type.tcp_ipv6_ext, TCP_IPV6_EXT) |
+ HINIC_RSS_TYPE_SET(rss_type.udp_ipv4, UDP_IPV4) |
+ HINIC_RSS_TYPE_SET(rss_type.udp_ipv6, UDP_IPV6);
+
+ cmd_buf.size = sizeof(struct hinic_rss_context_tbl);
+
+ ctx_tbl = (struct hinic_rss_context_tbl *)cmd_buf.buf;
+ ctx_tbl->group_index = cpu_to_be32(tmpl_idx);
+ ctx_tbl->offset = 0;
+ ctx_tbl->size = sizeof(u32);
+ ctx_tbl->size = cpu_to_be32(ctx_tbl->size);
+ ctx_tbl->rsvd = 0;
+ ctx_tbl->ctx = cpu_to_be32(ctx);
+
+ /* cfg the rss context table by command queue */
+ err = hinic_cmdq_direct_resp(&func_to_io->cmdqs, HINIC_MOD_L2NIC,
+ HINIC_UCODE_CMD_SET_RSS_CONTEXT_TABLE,
+ &cmd_buf, &out_param);
+
+ hinic_free_cmdq_buf(&func_to_io->cmdqs, &cmd_buf);
+
+ if (err || out_param != 0) {
+ dev_err(&pdev->dev, "Failed to set rss context table, err: %d\n",
+ err);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int hinic_get_rss_type(struct hinic_dev *nic_dev, u32 tmpl_idx,
+ struct hinic_rss_type *rss_type)
+{
+ struct hinic_rss_context_table ctx_tbl = { 0 };
+ struct hinic_hwdev *hwdev = nic_dev->hwdev;
+ struct hinic_hwif *hwif;
+ struct pci_dev *pdev;
+ u16 out_size = sizeof(ctx_tbl);
+ int err;
+
+ if (!hwdev || !rss_type)
+ return -EINVAL;
+
+ hwif = hwdev->hwif;
+ pdev = hwif->pdev;
+
+ ctx_tbl.func_id = HINIC_HWIF_FUNC_IDX(hwif);
+ ctx_tbl.template_id = tmpl_idx;
+
+ err = hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_GET_RSS_CTX_TBL,
+ &ctx_tbl, sizeof(ctx_tbl),
+ &ctx_tbl, &out_size);
+ if (err || !out_size || ctx_tbl.status) {
+ dev_err(&pdev->dev, "Failed to get hash type, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, ctx_tbl.status, out_size);
+ return -EINVAL;
+ }
+
+ rss_type->ipv4 = HINIC_RSS_TYPE_GET(ctx_tbl.context, IPV4);
+ rss_type->ipv6 = HINIC_RSS_TYPE_GET(ctx_tbl.context, IPV6);
+ rss_type->ipv6_ext = HINIC_RSS_TYPE_GET(ctx_tbl.context, IPV6_EXT);
+ rss_type->tcp_ipv4 = HINIC_RSS_TYPE_GET(ctx_tbl.context, TCP_IPV4);
+ rss_type->tcp_ipv6 = HINIC_RSS_TYPE_GET(ctx_tbl.context, TCP_IPV6);
+ rss_type->tcp_ipv6_ext = HINIC_RSS_TYPE_GET(ctx_tbl.context,
+ TCP_IPV6_EXT);
+ rss_type->udp_ipv4 = HINIC_RSS_TYPE_GET(ctx_tbl.context, UDP_IPV4);
+ rss_type->udp_ipv6 = HINIC_RSS_TYPE_GET(ctx_tbl.context, UDP_IPV6);
+
+ return 0;
+}
+
+int hinic_rss_set_template_tbl(struct hinic_dev *nic_dev, u32 template_id,
+ const u8 *temp)
+{
+ struct hinic_hwdev *hwdev = nic_dev->hwdev;
+ struct hinic_hwif *hwif = hwdev->hwif;
+ struct hinic_rss_key rss_key = { 0 };
+ struct pci_dev *pdev = hwif->pdev;
+ u16 out_size;
+ int err;
+
+ rss_key.func_id = HINIC_HWIF_FUNC_IDX(hwif);
+ rss_key.template_id = template_id;
+ memcpy(rss_key.key, temp, HINIC_RSS_KEY_SIZE);
+
+ err = hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_SET_RSS_TEMPLATE_TBL,
+ &rss_key, sizeof(rss_key),
+ &rss_key, &out_size);
+ if (err || !out_size || rss_key.status) {
+ dev_err(&pdev->dev,
+ "Failed to set rss hash key, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, rss_key.status, out_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int hinic_rss_get_template_tbl(struct hinic_dev *nic_dev, u32 tmpl_idx,
+ u8 *temp)
+{
+ struct hinic_rss_template_key temp_key = { 0 };
+ struct hinic_hwdev *hwdev = nic_dev->hwdev;
+ struct hinic_hwif *hwif;
+ struct pci_dev *pdev;
+ u16 out_size = sizeof(temp_key);
+ int err;
+
+ if (!hwdev || !temp)
+ return -EINVAL;
+
+ hwif = hwdev->hwif;
+ pdev = hwif->pdev;
+
+ temp_key.func_id = HINIC_HWIF_FUNC_IDX(hwif);
+ temp_key.template_id = tmpl_idx;
+
+ err = hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_GET_RSS_TEMPLATE_TBL,
+ &temp_key, sizeof(temp_key),
+ &temp_key, &out_size);
+ if (err || !out_size || temp_key.status) {
+ dev_err(&pdev->dev, "Failed to set hash key, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, temp_key.status, out_size);
+ return -EINVAL;
+ }
+
+ memcpy(temp, temp_key.key, HINIC_RSS_KEY_SIZE);
+
+ return 0;
+}
+
+int hinic_rss_set_hash_engine(struct hinic_dev *nic_dev, u8 template_id,
+ u8 type)
+{
+ struct hinic_rss_engine_type rss_engine = { 0 };
+ struct hinic_hwdev *hwdev = nic_dev->hwdev;
+ struct hinic_hwif *hwif = hwdev->hwif;
+ struct pci_dev *pdev = hwif->pdev;
+ u16 out_size;
+ int err;
+
+ rss_engine.func_id = HINIC_HWIF_FUNC_IDX(hwif);
+ rss_engine.hash_engine = type;
+ rss_engine.template_id = template_id;
+
+ err = hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_SET_RSS_HASH_ENGINE,
+ &rss_engine, sizeof(rss_engine),
+ &rss_engine, &out_size);
+ if (err || !out_size || rss_engine.status) {
+ dev_err(&pdev->dev,
+ "Failed to set hash engine, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, rss_engine.status, out_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int hinic_rss_get_hash_engine(struct hinic_dev *nic_dev, u8 tmpl_idx, u8 *type)
+{
+ struct hinic_rss_engine_type hash_type = { 0 };
+ struct hinic_hwdev *hwdev = nic_dev->hwdev;
+ struct hinic_hwif *hwif;
+ struct pci_dev *pdev;
+ u16 out_size = sizeof(hash_type);
+ int err;
+
+ if (!hwdev || !type)
+ return -EINVAL;
+
+ hwif = hwdev->hwif;
+ pdev = hwif->pdev;
+
+ hash_type.func_id = HINIC_HWIF_FUNC_IDX(hwif);
+ hash_type.template_id = tmpl_idx;
+
+ err = hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_GET_RSS_HASH_ENGINE,
+ &hash_type, sizeof(hash_type),
+ &hash_type, &out_size);
+ if (err || !out_size || hash_type.status) {
+ dev_err(&pdev->dev, "Failed to get hash engine, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, hash_type.status, out_size);
+ return -EINVAL;
+ }
+
+ *type = hash_type.hash_engine;
+ return 0;
+}
+
+int hinic_rss_cfg(struct hinic_dev *nic_dev, u8 rss_en, u8 template_id)
+{
+ struct hinic_hwdev *hwdev = nic_dev->hwdev;
+ struct hinic_rss_config rss_cfg = { 0 };
+ struct hinic_hwif *hwif = hwdev->hwif;
+ struct pci_dev *pdev = hwif->pdev;
+ u16 out_size;
+ int err;
+
+ rss_cfg.func_id = HINIC_HWIF_FUNC_IDX(hwif);
+ rss_cfg.rss_en = rss_en;
+ rss_cfg.template_id = template_id;
+ rss_cfg.rq_priority_number = 0;
+
+ err = hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_RSS_CFG,
+ &rss_cfg, sizeof(rss_cfg),
+ &rss_cfg, &out_size);
+ if (err || !out_size || rss_cfg.status) {
+ dev_err(&pdev->dev,
+ "Failed to set rss cfg, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, rss_cfg.status, out_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int hinic_rss_template_alloc(struct hinic_dev *nic_dev, u8 *tmpl_idx)
+{
+ struct hinic_rss_template_mgmt template_mgmt = { 0 };
+ struct hinic_hwdev *hwdev = nic_dev->hwdev;
+ struct hinic_hwif *hwif = hwdev->hwif;
+ struct pci_dev *pdev = hwif->pdev;
+ u16 out_size;
+ int err;
+
+ template_mgmt.func_id = HINIC_HWIF_FUNC_IDX(hwif);
+ template_mgmt.cmd = NIC_RSS_CMD_TEMP_ALLOC;
+
+ err = hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_RSS_TEMP_MGR,
+ &template_mgmt, sizeof(template_mgmt),
+ &template_mgmt, &out_size);
+ if (err || !out_size || template_mgmt.status) {
+ dev_err(&pdev->dev, "Failed to alloc rss template, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, template_mgmt.status, out_size);
+ return -EINVAL;
+ }
+
+ *tmpl_idx = template_mgmt.template_id;
+
+ return 0;
+}
+
+int hinic_rss_template_free(struct hinic_dev *nic_dev, u8 tmpl_idx)
+{
+ struct hinic_rss_template_mgmt template_mgmt = { 0 };
+ struct hinic_hwdev *hwdev = nic_dev->hwdev;
+ struct hinic_hwif *hwif = hwdev->hwif;
+ struct pci_dev *pdev = hwif->pdev;
+ u16 out_size;
+ int err;
+
+ template_mgmt.func_id = HINIC_HWIF_FUNC_IDX(hwif);
+ template_mgmt.template_id = tmpl_idx;
+ template_mgmt.cmd = NIC_RSS_CMD_TEMP_FREE;
+
+ err = hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_RSS_TEMP_MGR,
+ &template_mgmt, sizeof(template_mgmt),
+ &template_mgmt, &out_size);
+ if (err || !out_size || template_mgmt.status) {
+ dev_err(&pdev->dev, "Failed to free rss template, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, template_mgmt.status, out_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int hinic_get_vport_stats(struct hinic_dev *nic_dev,
+ struct hinic_vport_stats *stats)
+{
+ struct hinic_cmd_vport_stats vport_stats = { 0 };
+ struct hinic_port_stats_info stats_info = { 0 };
+ struct hinic_hwdev *hwdev = nic_dev->hwdev;
+ struct hinic_hwif *hwif = hwdev->hwif;
+ u16 out_size = sizeof(vport_stats);
+ struct pci_dev *pdev = hwif->pdev;
+ int err;
+
+ stats_info.stats_version = HINIC_PORT_STATS_VERSION;
+ stats_info.func_id = HINIC_HWIF_FUNC_IDX(hwif);
+ stats_info.stats_size = sizeof(vport_stats);
+
+ err = hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_GET_VPORT_STAT,
+ &stats_info, sizeof(stats_info),
+ &vport_stats, &out_size);
+ if (err || !out_size || vport_stats.status) {
+ dev_err(&pdev->dev,
+ "Failed to get function statistics, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, vport_stats.status, out_size);
+ return -EFAULT;
+ }
+
+ memcpy(stats, &vport_stats.stats, sizeof(*stats));
+ return 0;
+}
+
+int hinic_get_phy_port_stats(struct hinic_dev *nic_dev,
+ struct hinic_phy_port_stats *stats)
+{
+ struct hinic_port_stats_info stats_info = { 0 };
+ struct hinic_hwdev *hwdev = nic_dev->hwdev;
+ struct hinic_hwif *hwif = hwdev->hwif;
+ struct hinic_port_stats *port_stats;
+ u16 out_size = sizeof(*port_stats);
+ struct pci_dev *pdev = hwif->pdev;
+ int err;
+
+ port_stats = kzalloc(sizeof(*port_stats), GFP_KERNEL);
+ if (!port_stats)
+ return -ENOMEM;
+
+ stats_info.stats_version = HINIC_PORT_STATS_VERSION;
+ stats_info.stats_size = sizeof(*port_stats);
+
+ err = hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_GET_PORT_STATISTICS,
+ &stats_info, sizeof(stats_info),
+ port_stats, &out_size);
+ if (err || !out_size || port_stats->status) {
+ dev_err(&pdev->dev,
+ "Failed to get port statistics, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, port_stats->status, out_size);
+ err = -EINVAL;
+ goto out;
+ }
+
+ memcpy(stats, &port_stats->stats, sizeof(*stats));
+
+out:
+ kfree(port_stats);
+
+ return err;
+}
+
+int hinic_get_mgmt_version(struct hinic_dev *nic_dev, u8 *mgmt_ver)
+{
+ struct hinic_hwdev *hwdev = nic_dev->hwdev;
+ struct hinic_version_info up_ver = {0};
+ struct hinic_hwif *hwif;
+ struct pci_dev *pdev;
+ u16 out_size;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ hwif = hwdev->hwif;
+ pdev = hwif->pdev;
+
+ err = hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_GET_MGMT_VERSION,
+ &up_ver, sizeof(up_ver), &up_ver,
+ &out_size);
+ if (err || !out_size || up_ver.status) {
+ dev_err(&pdev->dev,
+ "Failed to get mgmt version, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, up_ver.status, out_size);
+ return -EINVAL;
+ }
+
+ snprintf(mgmt_ver, HINIC_MGMT_VERSION_MAX_LEN, "%s", up_ver.ver);
+
+ return 0;
+}
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_port.h b/drivers/net/ethernet/huawei/hinic/hinic_port.h
index c562afd206be..44772fd47fc1 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_port.h
+++ b/drivers/net/ethernet/huawei/hinic/hinic_port.h
@@ -13,6 +13,22 @@
#include "hinic_dev.h"
+#define HINIC_RSS_KEY_SIZE 40
+#define HINIC_RSS_INDIR_SIZE 256
+#define HINIC_PORT_STATS_VERSION 0
+#define HINIC_FW_VERSION_NAME 16
+#define HINIC_COMPILE_TIME_LEN 20
+#define HINIC_MGMT_VERSION_MAX_LEN 32
+
+struct hinic_version_info {
+ u8 status;
+ u8 version;
+ u8 rsvd[6];
+
+ u8 ver[HINIC_FW_VERSION_NAME];
+ u8 time[HINIC_COMPILE_TIME_LEN];
+};
+
enum hinic_rx_mode {
HINIC_RX_MODE_UC = BIT(0),
HINIC_RX_MODE_MC = BIT(1),
@@ -183,6 +199,313 @@ struct hinic_checksum_offload {
u16 rsvd1;
u32 rx_csum_offload;
};
+
+struct hinic_rq_num {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_id;
+ u16 rsvd1[33];
+ u32 num_rqs;
+ u32 rq_depth;
+};
+
+struct hinic_lro_config {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_id;
+ u16 rsvd1;
+ u8 lro_ipv4_en;
+ u8 lro_ipv6_en;
+ u8 lro_max_wqe_num;
+ u8 resv2[13];
+};
+
+struct hinic_lro_timer {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u8 type; /* 0: set timer value, 1: get timer value */
+ u8 enable; /* when setting the LRO timer, enable should be 1 */
+ u16 rsvd1;
+ u32 timer;
+};
+
+struct hinic_vlan_cfg {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_id;
+ u8 vlan_rx_offload;
+ u8 rsvd1[5];
+};
+
+struct hinic_rss_template_mgmt {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_id;
+ u8 cmd;
+ u8 template_id;
+ u8 rsvd1[4];
+};
+
+struct hinic_rss_template_key {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_id;
+ u8 template_id;
+ u8 rsvd1;
+ u8 key[HINIC_RSS_KEY_SIZE];
+};
+
+struct hinic_rss_context_tbl {
+ u32 group_index;
+ u32 offset;
+ u32 size;
+ u32 rsvd;
+ u32 ctx;
+};
+
+struct hinic_rss_context_table {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_id;
+ u8 template_id;
+ u8 rsvd1;
+ u32 context;
+};
+
+struct hinic_rss_indirect_tbl {
+ u32 group_index;
+ u32 offset;
+ u32 size;
+ u32 rsvd;
+ u8 entry[HINIC_RSS_INDIR_SIZE];
+};
+
+struct hinic_rss_indir_table {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_id;
+ u8 template_id;
+ u8 rsvd1;
+ u8 indir[HINIC_RSS_INDIR_SIZE];
+};
+
+struct hinic_rss_key {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_id;
+ u8 template_id;
+ u8 rsvd1;
+ u8 key[HINIC_RSS_KEY_SIZE];
+};
+
+struct hinic_rss_engine_type {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_id;
+ u8 template_id;
+ u8 hash_engine;
+ u8 rsvd1[4];
+};
+
+struct hinic_rss_config {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_id;
+ u8 rss_en;
+ u8 template_id;
+ u8 rq_priority_number;
+ u8 rsvd1[11];
+};
+
+struct hinic_stats {
+ char name[ETH_GSTRING_LEN];
+ u32 size;
+ int offset;
+};
+
+struct hinic_vport_stats {
+ u64 tx_unicast_pkts_vport;
+ u64 tx_unicast_bytes_vport;
+ u64 tx_multicast_pkts_vport;
+ u64 tx_multicast_bytes_vport;
+ u64 tx_broadcast_pkts_vport;
+ u64 tx_broadcast_bytes_vport;
+
+ u64 rx_unicast_pkts_vport;
+ u64 rx_unicast_bytes_vport;
+ u64 rx_multicast_pkts_vport;
+ u64 rx_multicast_bytes_vport;
+ u64 rx_broadcast_pkts_vport;
+ u64 rx_broadcast_bytes_vport;
+
+ u64 tx_discard_vport;
+ u64 rx_discard_vport;
+ u64 tx_err_vport;
+ u64 rx_err_vport;
+};
+
+struct hinic_phy_port_stats {
+ u64 mac_rx_total_pkt_num;
+ u64 mac_rx_total_oct_num;
+ u64 mac_rx_bad_pkt_num;
+ u64 mac_rx_bad_oct_num;
+ u64 mac_rx_good_pkt_num;
+ u64 mac_rx_good_oct_num;
+ u64 mac_rx_uni_pkt_num;
+ u64 mac_rx_multi_pkt_num;
+ u64 mac_rx_broad_pkt_num;
+
+ u64 mac_tx_total_pkt_num;
+ u64 mac_tx_total_oct_num;
+ u64 mac_tx_bad_pkt_num;
+ u64 mac_tx_bad_oct_num;
+ u64 mac_tx_good_pkt_num;
+ u64 mac_tx_good_oct_num;
+ u64 mac_tx_uni_pkt_num;
+ u64 mac_tx_multi_pkt_num;
+ u64 mac_tx_broad_pkt_num;
+
+ u64 mac_rx_fragment_pkt_num;
+ u64 mac_rx_undersize_pkt_num;
+ u64 mac_rx_undermin_pkt_num;
+ u64 mac_rx_64_oct_pkt_num;
+ u64 mac_rx_65_127_oct_pkt_num;
+ u64 mac_rx_128_255_oct_pkt_num;
+ u64 mac_rx_256_511_oct_pkt_num;
+ u64 mac_rx_512_1023_oct_pkt_num;
+ u64 mac_rx_1024_1518_oct_pkt_num;
+ u64 mac_rx_1519_2047_oct_pkt_num;
+ u64 mac_rx_2048_4095_oct_pkt_num;
+ u64 mac_rx_4096_8191_oct_pkt_num;
+ u64 mac_rx_8192_9216_oct_pkt_num;
+ u64 mac_rx_9217_12287_oct_pkt_num;
+ u64 mac_rx_12288_16383_oct_pkt_num;
+ u64 mac_rx_1519_max_bad_pkt_num;
+ u64 mac_rx_1519_max_good_pkt_num;
+ u64 mac_rx_oversize_pkt_num;
+ u64 mac_rx_jabber_pkt_num;
+
+ u64 mac_rx_pause_num;
+ u64 mac_rx_pfc_pkt_num;
+ u64 mac_rx_pfc_pri0_pkt_num;
+ u64 mac_rx_pfc_pri1_pkt_num;
+ u64 mac_rx_pfc_pri2_pkt_num;
+ u64 mac_rx_pfc_pri3_pkt_num;
+ u64 mac_rx_pfc_pri4_pkt_num;
+ u64 mac_rx_pfc_pri5_pkt_num;
+ u64 mac_rx_pfc_pri6_pkt_num;
+ u64 mac_rx_pfc_pri7_pkt_num;
+ u64 mac_rx_control_pkt_num;
+ u64 mac_rx_y1731_pkt_num;
+ u64 mac_rx_sym_err_pkt_num;
+ u64 mac_rx_fcs_err_pkt_num;
+ u64 mac_rx_send_app_good_pkt_num;
+ u64 mac_rx_send_app_bad_pkt_num;
+
+ u64 mac_tx_fragment_pkt_num;
+ u64 mac_tx_undersize_pkt_num;
+ u64 mac_tx_undermin_pkt_num;
+ u64 mac_tx_64_oct_pkt_num;
+ u64 mac_tx_65_127_oct_pkt_num;
+ u64 mac_tx_128_255_oct_pkt_num;
+ u64 mac_tx_256_511_oct_pkt_num;
+ u64 mac_tx_512_1023_oct_pkt_num;
+ u64 mac_tx_1024_1518_oct_pkt_num;
+ u64 mac_tx_1519_2047_oct_pkt_num;
+ u64 mac_tx_2048_4095_oct_pkt_num;
+ u64 mac_tx_4096_8191_oct_pkt_num;
+ u64 mac_tx_8192_9216_oct_pkt_num;
+ u64 mac_tx_9217_12287_oct_pkt_num;
+ u64 mac_tx_12288_16383_oct_pkt_num;
+ u64 mac_tx_1519_max_bad_pkt_num;
+ u64 mac_tx_1519_max_good_pkt_num;
+ u64 mac_tx_oversize_pkt_num;
+ u64 mac_tx_jabber_pkt_num;
+
+ u64 mac_tx_pause_num;
+ u64 mac_tx_pfc_pkt_num;
+ u64 mac_tx_pfc_pri0_pkt_num;
+ u64 mac_tx_pfc_pri1_pkt_num;
+ u64 mac_tx_pfc_pri2_pkt_num;
+ u64 mac_tx_pfc_pri3_pkt_num;
+ u64 mac_tx_pfc_pri4_pkt_num;
+ u64 mac_tx_pfc_pri5_pkt_num;
+ u64 mac_tx_pfc_pri6_pkt_num;
+ u64 mac_tx_pfc_pri7_pkt_num;
+ u64 mac_tx_control_pkt_num;
+ u64 mac_tx_y1731_pkt_num;
+ u64 mac_tx_1588_pkt_num;
+ u64 mac_tx_err_all_pkt_num;
+ u64 mac_tx_from_app_good_pkt_num;
+ u64 mac_tx_from_app_bad_pkt_num;
+
+ u64 mac_rx_higig2_ext_pkt_num;
+ u64 mac_rx_higig2_message_pkt_num;
+ u64 mac_rx_higig2_error_pkt_num;
+ u64 mac_rx_higig2_cpu_ctrl_pkt_num;
+ u64 mac_rx_higig2_unicast_pkt_num;
+ u64 mac_rx_higig2_broadcast_pkt_num;
+ u64 mac_rx_higig2_l2_multicast_pkt_num;
+ u64 mac_rx_higig2_l3_multicast_pkt_num;
+
+ u64 mac_tx_higig2_message_pkt_num;
+ u64 mac_tx_higig2_ext_pkt_num;
+ u64 mac_tx_higig2_cpu_ctrl_pkt_num;
+ u64 mac_tx_higig2_unicast_pkt_num;
+ u64 mac_tx_higig2_broadcast_pkt_num;
+ u64 mac_tx_higig2_l2_multicast_pkt_num;
+ u64 mac_tx_higig2_l3_multicast_pkt_num;
+};
+
+struct hinic_port_stats_info {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_id;
+ u16 rsvd1;
+ u32 stats_version;
+ u32 stats_size;
+};
+
+struct hinic_port_stats {
+ u8 status;
+ u8 version;
+ u8 rsvd[6];
+
+ struct hinic_phy_port_stats stats;
+};
+
+struct hinic_cmd_vport_stats {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ struct hinic_vport_stats stats;
+};
+
int hinic_port_add_mac(struct hinic_dev *nic_dev, const u8 *addr,
u16 vlan_id);
@@ -211,7 +534,55 @@ int hinic_port_set_func_state(struct hinic_dev *nic_dev,
int hinic_port_get_cap(struct hinic_dev *nic_dev,
struct hinic_port_cap *port_cap);
+int hinic_set_max_qnum(struct hinic_dev *nic_dev, u8 num_rqs);
+
int hinic_port_set_tso(struct hinic_dev *nic_dev, enum hinic_tso_state state);
int hinic_set_rx_csum_offload(struct hinic_dev *nic_dev, u32 en);
+
+int hinic_set_rx_lro_state(struct hinic_dev *nic_dev, u8 lro_en,
+ u32 lro_timer, u32 wqe_num);
+
+int hinic_set_rss_type(struct hinic_dev *nic_dev, u32 tmpl_idx,
+ struct hinic_rss_type rss_type);
+
+int hinic_rss_set_indir_tbl(struct hinic_dev *nic_dev, u32 tmpl_idx,
+ const u32 *indir_table);
+
+int hinic_rss_set_template_tbl(struct hinic_dev *nic_dev, u32 template_id,
+ const u8 *temp);
+
+int hinic_rss_set_hash_engine(struct hinic_dev *nic_dev, u8 template_id,
+ u8 type);
+
+int hinic_rss_cfg(struct hinic_dev *nic_dev, u8 rss_en, u8 template_id);
+
+int hinic_rss_template_alloc(struct hinic_dev *nic_dev, u8 *tmpl_idx);
+
+int hinic_rss_template_free(struct hinic_dev *nic_dev, u8 tmpl_idx);
+
+void hinic_set_ethtool_ops(struct net_device *netdev);
+
+int hinic_get_rss_type(struct hinic_dev *nic_dev, u32 tmpl_idx,
+ struct hinic_rss_type *rss_type);
+
+int hinic_rss_get_indir_tbl(struct hinic_dev *nic_dev, u32 tmpl_idx,
+ u32 *indir_table);
+
+int hinic_rss_get_template_tbl(struct hinic_dev *nic_dev, u32 tmpl_idx,
+ u8 *temp);
+
+int hinic_rss_get_hash_engine(struct hinic_dev *nic_dev, u8 tmpl_idx,
+ u8 *type);
+
+int hinic_get_phy_port_stats(struct hinic_dev *nic_dev,
+ struct hinic_phy_port_stats *stats);
+
+int hinic_get_vport_stats(struct hinic_dev *nic_dev,
+ struct hinic_vport_stats *stats);
+
+int hinic_set_rx_vlan_offload(struct hinic_dev *nic_dev, u8 en);
+
+int hinic_get_mgmt_version(struct hinic_dev *nic_dev, u8 *mgmt_ver);
+
#endif
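
For orientation only, here is a minimal sketch of how the RSS helpers declared above chain together: allocate a template, program the hash key and engine, then enable RSS, freeing the template on any failure. The function name, the "engine" parameter, and the simplified error handling are assumptions for illustration; the driver's real call sites live in its ethtool/init paths, which are not part of this hunk.

static int hinic_rss_example_init(struct hinic_dev *nic_dev,
				  const u8 key[HINIC_RSS_KEY_SIZE], u8 engine)
{
	u8 tmpl_idx;
	int err;

	/* reserve a hardware RSS template for this function */
	err = hinic_rss_template_alloc(nic_dev, &tmpl_idx);
	if (err)
		return err;

	/* program the 40-byte hash key into the template */
	err = hinic_rss_set_template_tbl(nic_dev, tmpl_idx, key);
	if (err)
		goto free_tmpl;

	/* select the hash engine (e.g. Toeplitz or XOR) */
	err = hinic_rss_set_hash_engine(nic_dev, tmpl_idx, engine);
	if (err)
		goto free_tmpl;

	/* rss_en = 1 turns RSS on for this function using the template */
	err = hinic_rss_cfg(nic_dev, 1, tmpl_idx);
	if (err)
		goto free_tmpl;

	return 0;

free_tmpl:
	hinic_rss_template_free(nic_dev, tmpl_idx);
	return err;
}

Teardown would mirror this: hinic_rss_cfg(nic_dev, 0, tmpl_idx) followed by hinic_rss_template_free().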
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_rx.c b/drivers/net/ethernet/huawei/hinic/hinic_rx.c
index 0850ea83d6c1..56ea6d692f1c 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_rx.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_rx.c
@@ -18,6 +18,7 @@
#include <linux/dma-mapping.h>
#include <linux/prefetch.h>
#include <linux/cpumask.h>
+#include <linux/if_vlan.h>
#include <asm/barrier.h>
#include "hinic_common.h"
@@ -36,6 +37,15 @@
#define RX_IRQ_NO_RESEND_TIMER 0
#define HINIC_RX_BUFFER_WRITE 16
+#define HINIC_RX_IPV6_PKT 7
+#define LRO_PKT_HDR_LEN_IPV4 66
+#define LRO_PKT_HDR_LEN_IPV6 86
+#define LRO_REPLENISH_THLD 256
+
+#define LRO_PKT_HDR_LEN(cqe) \
+ (HINIC_GET_RX_PKT_TYPE(be32_to_cpu((cqe)->offload_type)) == \
+ HINIC_RX_IPV6_PKT ? LRO_PKT_HDR_LEN_IPV6 : LRO_PKT_HDR_LEN_IPV4)
+
/**
* hinic_rxq_clean_stats - Clean the statistics of specific queue
* @rxq: Logical Rx Queue
@@ -47,6 +57,9 @@ void hinic_rxq_clean_stats(struct hinic_rxq *rxq)
u64_stats_update_begin(&rxq_stats->syncp);
rxq_stats->pkts = 0;
rxq_stats->bytes = 0;
+ rxq_stats->errors = 0;
+ rxq_stats->csum_errors = 0;
+ rxq_stats->other_errors = 0;
u64_stats_update_end(&rxq_stats->syncp);
}
@@ -65,6 +78,10 @@ void hinic_rxq_get_stats(struct hinic_rxq *rxq, struct hinic_rxq_stats *stats)
start = u64_stats_fetch_begin(&rxq_stats->syncp);
stats->pkts = rxq_stats->pkts;
stats->bytes = rxq_stats->bytes;
+ stats->errors = rxq_stats->csum_errors +
+ rxq_stats->other_errors;
+ stats->csum_errors = rxq_stats->csum_errors;
+ stats->other_errors = rxq_stats->other_errors;
} while (u64_stats_fetch_retry(&rxq_stats->syncp, start));
u64_stats_update_end(&stats->syncp);
}
@@ -81,27 +98,25 @@ static void rxq_stats_init(struct hinic_rxq *rxq)
hinic_rxq_clean_stats(rxq);
}
-static void rx_csum(struct hinic_rxq *rxq, u16 cons_idx,
+static void rx_csum(struct hinic_rxq *rxq, u32 status,
struct sk_buff *skb)
{
struct net_device *netdev = rxq->netdev;
- struct hinic_rq_cqe *cqe;
- struct hinic_rq *rq;
u32 csum_err;
- u32 status;
- rq = rxq->rq;
- cqe = rq->cqe[cons_idx];
- status = be32_to_cpu(cqe->status);
csum_err = HINIC_RQ_CQE_STATUS_GET(status, CSUM_ERR);
if (!(netdev->features & NETIF_F_RXCSUM))
return;
- if (!csum_err)
+ if (!csum_err) {
skb->ip_summed = CHECKSUM_UNNECESSARY;
- else
+ } else {
+ if (!(csum_err & (HINIC_RX_CSUM_HW_CHECK_NONE |
+ HINIC_RX_CSUM_IPSU_OTHER_ERR)))
+ rxq->rxq_stats.csum_errors++;
skb->ip_summed = CHECKSUM_NONE;
+ }
}
/**
* rx_alloc_skb - allocate skb and map it to dma address
@@ -311,13 +326,21 @@ static int rx_recv_jumbo_pkt(struct hinic_rxq *rxq, struct sk_buff *head_skb,
static int rxq_recv(struct hinic_rxq *rxq, int budget)
{
struct hinic_qp *qp = container_of(rxq->rq, struct hinic_qp, rq);
+ struct net_device *netdev = rxq->netdev;
u64 pkt_len = 0, rx_bytes = 0;
+ struct hinic_rq *rq = rxq->rq;
struct hinic_rq_wqe *rq_wqe;
unsigned int free_wqebbs;
+ struct hinic_rq_cqe *cqe;
int num_wqes, pkts = 0;
struct hinic_sge sge;
+ unsigned int status;
struct sk_buff *skb;
- u16 ci;
+ u32 offload_type;
+ u16 ci, num_lro;
+ u16 num_wqe = 0;
+ u32 vlan_len;
+ u16 vid;
while (pkts < budget) {
num_wqes = 0;
@@ -327,11 +350,13 @@ static int rxq_recv(struct hinic_rxq *rxq, int budget)
if (!rq_wqe)
break;
+ cqe = rq->cqe[ci];
+ status = be32_to_cpu(cqe->status);
hinic_rq_get_sge(rxq->rq, rq_wqe, ci, &sge);
rx_unmap_skb(rxq, hinic_sge_to_dma(&sge));
- rx_csum(rxq, ci, skb);
+ rx_csum(rxq, status, skb);
prefetch(skb->data);
@@ -345,9 +370,17 @@ static int rxq_recv(struct hinic_rxq *rxq, int budget)
HINIC_RX_BUF_SZ, ci);
}
- hinic_rq_put_wqe(rxq->rq, ci,
+ hinic_rq_put_wqe(rq, ci,
(num_wqes + 1) * HINIC_RQ_WQE_SIZE);
+ offload_type = be32_to_cpu(cqe->offload_type);
+ vlan_len = be32_to_cpu(cqe->len);
+ if ((netdev->features & NETIF_F_HW_VLAN_CTAG_RX) &&
+ HINIC_GET_RX_VLAN_OFFLOAD_EN(offload_type)) {
+ vid = HINIC_GET_RX_VLAN_TAG(vlan_len);
+ __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), vid);
+ }
+
skb_record_rx_queue(skb, qp->q_id);
skb->protocol = eth_type_trans(skb, rxq->netdev);
@@ -355,6 +388,21 @@ static int rxq_recv(struct hinic_rxq *rxq, int budget)
pkts++;
rx_bytes += pkt_len;
+
+ num_lro = HINIC_GET_RX_NUM_LRO(status);
+ if (num_lro) {
+ rx_bytes += ((num_lro - 1) *
+ LRO_PKT_HDR_LEN(cqe));
+
+ num_wqe +=
+ (u16)(pkt_len >> rxq->rx_buff_shift) +
+ ((pkt_len & (rxq->buf_len - 1)) ? 1 : 0);
+ }
+
+ cqe->status = 0;
+
+ if (num_wqe >= LRO_REPLENISH_THLD)
+ break;
}
free_wqebbs = hinic_get_rq_free_wqebbs(rxq->rq);
@@ -469,20 +517,20 @@ int hinic_init_rxq(struct hinic_rxq *rxq, struct hinic_rq *rq,
struct net_device *netdev)
{
struct hinic_qp *qp = container_of(rq, struct hinic_qp, rq);
- int err, pkts, irqname_len;
+ int err, pkts;
rxq->netdev = netdev;
rxq->rq = rq;
+ rxq->buf_len = HINIC_RX_BUF_SZ;
+ rxq->rx_buff_shift = ilog2(HINIC_RX_BUF_SZ);
rxq_stats_init(rxq);
- irqname_len = snprintf(NULL, 0, "hinic_rxq%d", qp->q_id) + 1;
- rxq->irq_name = devm_kzalloc(&netdev->dev, irqname_len, GFP_KERNEL);
+ rxq->irq_name = devm_kasprintf(&netdev->dev, GFP_KERNEL,
+ "hinic_rxq%d", qp->q_id);
if (!rxq->irq_name)
return -ENOMEM;
- sprintf(rxq->irq_name, "hinic_rxq%d", qp->q_id);
-
pkts = rx_alloc_pkts(rxq);
if (!pkts) {
err = -ENOMEM;
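
A brief worked example of the LRO accounting added to rxq_recv() above (an illustration, not part of the patch): for an IPv4 flow LRO_PKT_HDR_LEN(cqe) evaluates to 66, so if the hardware reports num_lro = 4 coalesced segments in one CQE, rx_bytes grows by an extra (4 - 1) * 66 = 198 bytes on top of pkt_len, presumably to approximate the on-wire bytes of the headers stripped from the merged segments. num_wqe is likewise advanced by pkt_len / HINIC_RX_BUF_SZ (rounded up), so once LRO_REPLENISH_THLD worth of buffers has been consumed the loop bails out and lets the ring be replenished before it runs dry.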
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_rx.h b/drivers/net/ethernet/huawei/hinic/hinic_rx.h
index bc797498a87f..507dcbae9085 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_rx.h
+++ b/drivers/net/ethernet/huawei/hinic/hinic_rx.h
@@ -21,7 +21,10 @@
struct hinic_rxq_stats {
u64 pkts;
u64 bytes;
-
+ u64 errors;
+ u64 csum_errors;
+ u64 other_errors;
+ u64 alloc_skb_err;
struct u64_stats_sync syncp;
};
@@ -32,6 +35,8 @@ struct hinic_rxq {
struct hinic_rxq_stats rxq_stats;
char *irq_name;
+ u16 buf_len;
+ u32 rx_buff_shift;
struct napi_struct napi;
};
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_tx.c b/drivers/net/ethernet/huawei/hinic/hinic_tx.c
index b9fd8d720349..9c78251f9c39 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_tx.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_tx.c
@@ -83,6 +83,7 @@ void hinic_txq_clean_stats(struct hinic_txq *txq)
txq_stats->tx_busy = 0;
txq_stats->tx_wake = 0;
txq_stats->tx_dropped = 0;
+ txq_stats->big_frags_pkts = 0;
u64_stats_update_end(&txq_stats->syncp);
}
@@ -104,6 +105,7 @@ void hinic_txq_get_stats(struct hinic_txq *txq, struct hinic_txq_stats *stats)
stats->tx_busy = txq_stats->tx_busy;
stats->tx_wake = txq_stats->tx_wake;
stats->tx_dropped = txq_stats->tx_dropped;
+ stats->big_frags_pkts = txq_stats->big_frags_pkts;
} while (u64_stats_fetch_retry(&txq_stats->syncp, start));
u64_stats_update_end(&stats->syncp);
}
@@ -405,10 +407,20 @@ static int offload_csum(struct hinic_sq_task *task, u32 *queue_info,
return 1;
}
+static void offload_vlan(struct hinic_sq_task *task, u32 *queue_info,
+ u16 vlan_tag, u16 vlan_pri)
+{
+ task->pkt_info0 |= HINIC_SQ_TASK_INFO0_SET(vlan_tag, VLAN_TAG) |
+ HINIC_SQ_TASK_INFO0_SET(1U, VLAN_OFFLOAD);
+
+ *queue_info |= HINIC_SQ_CTRL_SET(vlan_pri, QUEUE_INFO_PRI);
+}
+
static int hinic_tx_offload(struct sk_buff *skb, struct hinic_sq_task *task,
u32 *queue_info)
{
enum hinic_offload_type offload = 0;
+ u16 vlan_tag;
int enabled;
enabled = offload_tso(task, queue_info, skb);
@@ -422,6 +434,13 @@ static int hinic_tx_offload(struct sk_buff *skb, struct hinic_sq_task *task,
return -EPROTONOSUPPORT;
}
+ if (unlikely(skb_vlan_tag_present(skb))) {
+ vlan_tag = skb_vlan_tag_get(skb);
+ offload_vlan(task, queue_info, vlan_tag,
+ vlan_tag >> VLAN_PRIO_SHIFT);
+ offload |= TX_OFFLOAD_VLAN;
+ }
+
if (offload)
hinic_task_set_l2hdr(task, skb_network_offset(skb));
@@ -464,6 +483,12 @@ netdev_tx_t hinic_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
}
nr_sges = skb_shinfo(skb)->nr_frags + 1;
+ if (nr_sges > 17) {
+ u64_stats_update_begin(&txq->txq_stats.syncp);
+ txq->txq_stats.big_frags_pkts++;
+ u64_stats_update_end(&txq->txq_stats.syncp);
+ }
+
if (nr_sges > txq->max_sges) {
netdev_err(netdev, "Too many Tx sges\n");
goto skb_error;
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_tx.h b/drivers/net/ethernet/huawei/hinic/hinic_tx.h
index ca5f537fc383..f158b7db7fb8 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_tx.h
+++ b/drivers/net/ethernet/huawei/hinic/hinic_tx.h
@@ -21,6 +21,7 @@ struct hinic_txq_stats {
u64 tx_busy;
u64 tx_wake;
u64 tx_dropped;
+ u64 big_frags_pkts;
struct u64_stats_sync syncp;
};
diff --git a/drivers/net/ethernet/intel/e1000/e1000_main.c b/drivers/net/ethernet/intel/e1000/e1000_main.c
index 551de8c2fef2..f703fa58458e 100644
--- a/drivers/net/ethernet/intel/e1000/e1000_main.c
+++ b/drivers/net/ethernet/intel/e1000/e1000_main.c
@@ -3019,7 +3019,7 @@ static void e1000_tx_queue(struct e1000_adapter *adapter,
* applicable for weak-ordered memory model archs,
* such as IA-64).
*/
- wmb();
+ dma_wmb();
tx_ring->next_to_use = i;
}
@@ -4540,7 +4540,7 @@ e1000_alloc_jumbo_rx_buffers(struct e1000_adapter *adapter,
* applicable for weak-ordered memory model archs,
* such as IA-64).
*/
- wmb();
+ dma_wmb();
writel(i, adapter->hw.hw_addr + rx_ring->rdt);
}
}
@@ -4655,7 +4655,7 @@ static void e1000_alloc_rx_buffers(struct e1000_adapter *adapter,
* applicable for weak-ordered memory model archs,
* such as IA-64).
*/
- wmb();
+ dma_wmb();
writel(i, hw->hw_addr + rx_ring->rdt);
}
}
diff --git a/drivers/net/ethernet/intel/e1000e/80003es2lan.c b/drivers/net/ethernet/intel/e1000e/80003es2lan.c
index f86d55657959..4b103cca8a39 100644
--- a/drivers/net/ethernet/intel/e1000e/80003es2lan.c
+++ b/drivers/net/ethernet/intel/e1000e/80003es2lan.c
@@ -680,7 +680,7 @@ static s32 e1000_reset_hw_80003es2lan(struct e1000_hw *hw)
ew32(TCTL, E1000_TCTL_PSP);
e1e_flush();
- usleep_range(10000, 20000);
+ usleep_range(10000, 11000);
ctrl = er32(CTRL);
diff --git a/drivers/net/ethernet/intel/e1000e/82571.c b/drivers/net/ethernet/intel/e1000e/82571.c
index b9309302c29e..2c1bab377b2a 100644
--- a/drivers/net/ethernet/intel/e1000e/82571.c
+++ b/drivers/net/ethernet/intel/e1000e/82571.c
@@ -959,7 +959,7 @@ static s32 e1000_reset_hw_82571(struct e1000_hw *hw)
ew32(TCTL, tctl);
e1e_flush();
- usleep_range(10000, 20000);
+ usleep_range(10000, 11000);
/* Must acquire the MDIO ownership before MAC reset.
* Ownership defaults to firmware after a reset.
diff --git a/drivers/net/ethernet/intel/e1000e/defines.h b/drivers/net/ethernet/intel/e1000e/defines.h
index fd550dee4982..63c3c79380a1 100644
--- a/drivers/net/ethernet/intel/e1000e/defines.h
+++ b/drivers/net/ethernet/intel/e1000e/defines.h
@@ -222,6 +222,9 @@
#define E1000_STATUS_PHYRA 0x00000400 /* PHY Reset Asserted */
#define E1000_STATUS_GIO_MASTER_ENABLE 0x00080000 /* Master Req status */
+/* PCIm function state */
+#define E1000_STATUS_PCIM_STATE 0x40000000
+
#define HALF_DUPLEX 1
#define FULL_DUPLEX 2
diff --git a/drivers/net/ethernet/intel/e1000e/e1000.h b/drivers/net/ethernet/intel/e1000e/e1000.h
index be13227f1697..34cd67951aec 100644
--- a/drivers/net/ethernet/intel/e1000e/e1000.h
+++ b/drivers/net/ethernet/intel/e1000e/e1000.h
@@ -186,12 +186,13 @@ struct e1000_phy_regs {
/* board specific private data structure */
struct e1000_adapter {
- struct timer_list watchdog_timer;
struct timer_list phy_info_timer;
struct timer_list blink_timer;
struct work_struct reset_task;
- struct work_struct watchdog_task;
+ struct delayed_work watchdog_task;
+
+ struct workqueue_struct *e1000_workqueue;
const struct e1000_info *ei;
diff --git a/drivers/net/ethernet/intel/e1000e/ethtool.c b/drivers/net/ethernet/intel/e1000e/ethtool.c
index 02ebf208f48b..08342698386d 100644
--- a/drivers/net/ethernet/intel/e1000e/ethtool.c
+++ b/drivers/net/ethernet/intel/e1000e/ethtool.c
@@ -1014,7 +1014,7 @@ static int e1000_intr_test(struct e1000_adapter *adapter, u64 *data)
/* Disable all the interrupts */
ew32(IMC, 0xFFFFFFFF);
e1e_flush();
- usleep_range(10000, 20000);
+ usleep_range(10000, 11000);
/* Test each interrupt */
for (i = 0; i < 10; i++) {
@@ -1046,7 +1046,7 @@ static int e1000_intr_test(struct e1000_adapter *adapter, u64 *data)
ew32(IMC, mask);
ew32(ICS, mask);
e1e_flush();
- usleep_range(10000, 20000);
+ usleep_range(10000, 11000);
if (adapter->test_icr & mask) {
*data = 3;
@@ -1064,7 +1064,7 @@ static int e1000_intr_test(struct e1000_adapter *adapter, u64 *data)
ew32(IMS, mask);
ew32(ICS, mask);
e1e_flush();
- usleep_range(10000, 20000);
+ usleep_range(10000, 11000);
if (!(adapter->test_icr & mask)) {
*data = 4;
@@ -1082,7 +1082,7 @@ static int e1000_intr_test(struct e1000_adapter *adapter, u64 *data)
ew32(IMC, ~mask & 0x00007FFF);
ew32(ICS, ~mask & 0x00007FFF);
e1e_flush();
- usleep_range(10000, 20000);
+ usleep_range(10000, 11000);
if (adapter->test_icr) {
*data = 5;
@@ -1094,7 +1094,7 @@ static int e1000_intr_test(struct e1000_adapter *adapter, u64 *data)
/* Disable all the interrupts */
ew32(IMC, 0xFFFFFFFF);
e1e_flush();
- usleep_range(10000, 20000);
+ usleep_range(10000, 11000);
/* Unhook test interrupt handler */
free_irq(irq, netdev);
@@ -1470,7 +1470,7 @@ static int e1000_set_82571_fiber_loopback(struct e1000_adapter *adapter)
*/
ew32(SCTL, E1000_SCTL_ENABLE_SERDES_LOOPBACK);
e1e_flush();
- usleep_range(10000, 20000);
+ usleep_range(10000, 11000);
return 0;
}
@@ -1584,7 +1584,7 @@ static void e1000_loopback_cleanup(struct e1000_adapter *adapter)
hw->phy.media_type == e1000_media_type_internal_serdes) {
ew32(SCTL, E1000_SCTL_DISABLE_SERDES_LOOPBACK);
e1e_flush();
- usleep_range(10000, 20000);
+ usleep_range(10000, 11000);
break;
}
/* Fall Through */
diff --git a/drivers/net/ethernet/intel/e1000e/ich8lan.c b/drivers/net/ethernet/intel/e1000e/ich8lan.c
index cdae0efde8e6..395b05701480 100644
--- a/drivers/net/ethernet/intel/e1000e/ich8lan.c
+++ b/drivers/net/ethernet/intel/e1000e/ich8lan.c
@@ -271,7 +271,7 @@ static void e1000_toggle_lanphypc_pch_lpt(struct e1000_hw *hw)
u16 count = 20;
do {
- usleep_range(5000, 10000);
+ usleep_range(5000, 6000);
} while (!(er32(CTRL_EXT) & E1000_CTRL_EXT_LPCD) && count--);
msleep(30);
@@ -405,7 +405,7 @@ out:
/* Ungate automatic PHY configuration on non-managed 82579 */
if ((hw->mac.type == e1000_pch2lan) &&
!(fwsm & E1000_ICH_FWSM_FW_VALID)) {
- usleep_range(10000, 20000);
+ usleep_range(10000, 11000);
e1000_gate_hw_phy_config_ich8lan(hw, false);
}
@@ -531,7 +531,7 @@ static s32 e1000_init_phy_params_ich8lan(struct e1000_hw *hw)
phy->id = 0;
while ((e1000_phy_unknown == e1000e_get_phy_type_from_id(phy->id)) &&
(i++ < 100)) {
- usleep_range(1000, 2000);
+ usleep_range(1000, 1100);
ret_val = e1000e_get_phy_id(hw);
if (ret_val)
return ret_val;
@@ -1244,7 +1244,7 @@ static s32 e1000_disable_ulp_lpt_lp(struct e1000_hw *hw, bool force)
goto out;
}
- usleep_range(10000, 20000);
+ usleep_range(10000, 11000);
}
e_dbg("ULP_CONFIG_DONE cleared after %dmsec\n", i * 10);
@@ -1999,7 +1999,7 @@ static s32 e1000_check_reset_block_ich8lan(struct e1000_hw *hw)
while ((blocked = !(er32(FWSM) & E1000_ICH_FWSM_RSPCIPHY)) &&
(i++ < 30))
- usleep_range(10000, 20000);
+ usleep_range(10000, 11000);
return blocked ? E1000_BLK_PHY_RESET : 0;
}
@@ -2818,7 +2818,7 @@ static s32 e1000_post_phy_reset_ich8lan(struct e1000_hw *hw)
return 0;
/* Allow time for h/w to get to quiescent state after reset */
- usleep_range(10000, 20000);
+ usleep_range(10000, 11000);
/* Perform any necessary post-reset workarounds */
switch (hw->mac.type) {
@@ -2854,7 +2854,7 @@ static s32 e1000_post_phy_reset_ich8lan(struct e1000_hw *hw)
if (hw->mac.type == e1000_pch2lan) {
/* Ungate automatic PHY configuration on non-managed 82579 */
if (!(er32(FWSM) & E1000_ICH_FWSM_FW_VALID)) {
- usleep_range(10000, 20000);
+ usleep_range(10000, 11000);
e1000_gate_hw_phy_config_ich8lan(hw, false);
}
@@ -3875,7 +3875,7 @@ release:
*/
if (!ret_val) {
nvm->ops.reload(hw);
- usleep_range(10000, 20000);
+ usleep_range(10000, 11000);
}
out:
@@ -4026,7 +4026,7 @@ release:
*/
if (!ret_val) {
nvm->ops.reload(hw);
- usleep_range(10000, 20000);
+ usleep_range(10000, 11000);
}
out:
@@ -4650,7 +4650,7 @@ static s32 e1000_reset_hw_ich8lan(struct e1000_hw *hw)
ew32(TCTL, E1000_TCTL_PSP);
e1e_flush();
- usleep_range(10000, 20000);
+ usleep_range(10000, 11000);
/* Workaround for ICH8 bit corruption issue in FIFO memory */
if (hw->mac.type == e1000_ich8lan) {
diff --git a/drivers/net/ethernet/intel/e1000e/mac.c b/drivers/net/ethernet/intel/e1000e/mac.c
index 4abd55d646c5..e531976f8a67 100644
--- a/drivers/net/ethernet/intel/e1000e/mac.c
+++ b/drivers/net/ethernet/intel/e1000e/mac.c
@@ -797,7 +797,7 @@ static s32 e1000_poll_fiber_serdes_link_generic(struct e1000_hw *hw)
* milliseconds even if the other end is doing it in SW).
*/
for (i = 0; i < FIBER_LINK_UP_LIMIT; i++) {
- usleep_range(10000, 20000);
+ usleep_range(10000, 11000);
status = er32(STATUS);
if (status & E1000_STATUS_LU)
break;
diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
index 0e09bede42a2..e4baa13b3cda 100644
--- a/drivers/net/ethernet/intel/e1000e/netdev.c
+++ b/drivers/net/ethernet/intel/e1000e/netdev.c
@@ -1780,7 +1780,8 @@ static irqreturn_t e1000_intr_msi(int __always_unused irq, void *data)
}
/* guard against interrupt when we're going down */
if (!test_bit(__E1000_DOWN, &adapter->state))
- mod_timer(&adapter->watchdog_timer, jiffies + 1);
+ queue_delayed_work(adapter->e1000_workqueue,
+ &adapter->watchdog_task, 1);
}
/* Reset on uncorrectable ECC error */
@@ -1860,7 +1861,8 @@ static irqreturn_t e1000_intr(int __always_unused irq, void *data)
}
/* guard against interrupt when we're going down */
if (!test_bit(__E1000_DOWN, &adapter->state))
- mod_timer(&adapter->watchdog_timer, jiffies + 1);
+ queue_delayed_work(adapter->e1000_workqueue,
+ &adapter->watchdog_task, 1);
}
/* Reset on uncorrectable ECC error */
@@ -1905,7 +1907,8 @@ static irqreturn_t e1000_msix_other(int __always_unused irq, void *data)
hw->mac.get_link_status = true;
/* guard against interrupt when we're going down */
if (!test_bit(__E1000_DOWN, &adapter->state))
- mod_timer(&adapter->watchdog_timer, jiffies + 1);
+ queue_delayed_work(adapter->e1000_workqueue,
+ &adapter->watchdog_task, 1);
}
if (!test_bit(__E1000_DOWN, &adapter->state))
@@ -3208,7 +3211,7 @@ static void e1000_configure_rx(struct e1000_adapter *adapter)
if (!(adapter->flags2 & FLAG2_NO_DISABLE_RX))
ew32(RCTL, rctl & ~E1000_RCTL_EN);
e1e_flush();
- usleep_range(10000, 20000);
+ usleep_range(10000, 11000);
if (adapter->flags2 & FLAG2_DMA_BURST) {
/* set the writeback threshold (only takes effect if the RDTR
@@ -4046,12 +4049,12 @@ void e1000e_reset(struct e1000_adapter *adapter)
case e1000_pch_lpt:
case e1000_pch_spt:
case e1000_pch_cnp:
- fc->refresh_time = 0x0400;
+ fc->refresh_time = 0xFFFF;
+ fc->pause_time = 0xFFFF;
if (adapter->netdev->mtu <= ETH_DATA_LEN) {
fc->high_water = 0x05C20;
fc->low_water = 0x05048;
- fc->pause_time = 0x0650;
break;
}
@@ -4208,7 +4211,7 @@ void e1000e_up(struct e1000_adapter *adapter)
e1000_configure_msix(adapter);
e1000_irq_enable(adapter);
- netif_start_queue(adapter->netdev);
+ /* Tx queue started by the watchdog task when link is up */
e1000e_trigger_lsc(adapter);
}
@@ -4272,13 +4275,12 @@ void e1000e_down(struct e1000_adapter *adapter, bool reset)
/* flush both disables and wait for them to finish */
e1e_flush();
- usleep_range(10000, 20000);
+ usleep_range(10000, 11000);
e1000_irq_disable(adapter);
napi_synchronize(&adapter->napi);
- del_timer_sync(&adapter->watchdog_timer);
del_timer_sync(&adapter->phy_info_timer);
spin_lock(&adapter->stats64_lock);
@@ -4310,7 +4312,7 @@ void e1000e_reinit_locked(struct e1000_adapter *adapter)
{
might_sleep();
while (test_and_set_bit(__E1000_RESETTING, &adapter->state))
- usleep_range(1000, 2000);
+ usleep_range(1000, 1100);
e1000e_down(adapter, true);
e1000e_up(adapter);
clear_bit(__E1000_RESETTING, &adapter->state);
@@ -4606,6 +4608,7 @@ int e1000e_open(struct net_device *netdev)
pm_runtime_get_sync(&pdev->dev);
netif_carrier_off(netdev);
+ netif_stop_queue(netdev);
/* allocate transmit descriptors */
err = e1000e_setup_tx_resources(adapter->tx_ring);
@@ -4666,7 +4669,6 @@ int e1000e_open(struct net_device *netdev)
e1000_irq_enable(adapter);
adapter->tx_hang_recheck = false;
- netif_start_queue(netdev);
hw->mac.get_link_status = true;
pm_runtime_put(&pdev->dev);
@@ -4707,7 +4709,7 @@ int e1000e_close(struct net_device *netdev)
int count = E1000_CHECK_RESET_COUNT;
while (test_bit(__E1000_RESETTING, &adapter->state) && count--)
- usleep_range(10000, 20000);
+ usleep_range(10000, 11000);
WARN_ON(test_bit(__E1000_RESETTING, &adapter->state));
@@ -5150,31 +5152,18 @@ static void e1000e_check_82574_phy_workaround(struct e1000_adapter *adapter)
}
}
-/**
- * e1000_watchdog - Timer Call-back
- * @data: pointer to adapter cast into an unsigned long
- **/
-static void e1000_watchdog(struct timer_list *t)
-{
- struct e1000_adapter *adapter = from_timer(adapter, t, watchdog_timer);
-
- /* Do the rest outside of interrupt context */
- schedule_work(&adapter->watchdog_task);
-
- /* TODO: make this use queue_delayed_work() */
-}
-
static void e1000_watchdog_task(struct work_struct *work)
{
struct e1000_adapter *adapter = container_of(work,
struct e1000_adapter,
- watchdog_task);
+ watchdog_task.work);
struct net_device *netdev = adapter->netdev;
struct e1000_mac_info *mac = &adapter->hw.mac;
struct e1000_phy_info *phy = &adapter->hw.phy;
struct e1000_ring *tx_ring = adapter->tx_ring;
+ u32 dmoff_exit_timeout = 100, tries = 0;
struct e1000_hw *hw = &adapter->hw;
- u32 link, tctl;
+ u32 link, tctl, pcim_state;
if (test_bit(__E1000_DOWN, &adapter->state))
return;
@@ -5199,6 +5188,21 @@ static void e1000_watchdog_task(struct work_struct *work)
/* Cancel scheduled suspend requests. */
pm_runtime_resume(netdev->dev.parent);
+ /* Checking if MAC is in DMoff state */
+ pcim_state = er32(STATUS);
+ while (pcim_state & E1000_STATUS_PCIM_STATE) {
+ if (tries++ == dmoff_exit_timeout) {
+ e_dbg("Error in exiting dmoff\n");
+ break;
+ }
+ usleep_range(10000, 20000);
+ pcim_state = er32(STATUS);
+
+ /* Checking if MAC exited DMoff state */
+ if (!(pcim_state & E1000_STATUS_PCIM_STATE))
+ e1000_phy_hw_reset(&adapter->hw);
+ }
+
/* update snapshot of PHY registers on LSC */
e1000_phy_read_status(adapter);
mac->ops.get_link_up_info(&adapter->hw,
@@ -5288,6 +5292,7 @@ static void e1000_watchdog_task(struct work_struct *work)
if (phy->ops.cfg_on_link_up)
phy->ops.cfg_on_link_up(hw);
+ netif_wake_queue(netdev);
netif_carrier_on(netdev);
if (!test_bit(__E1000_DOWN, &adapter->state))
@@ -5301,6 +5306,7 @@ static void e1000_watchdog_task(struct work_struct *work)
/* Link status message must follow this format */
pr_info("%s NIC Link is Down\n", adapter->netdev->name);
netif_carrier_off(netdev);
+ netif_stop_queue(netdev);
if (!test_bit(__E1000_DOWN, &adapter->state))
mod_timer(&adapter->phy_info_timer,
round_jiffies(jiffies + 2 * HZ));
@@ -5308,13 +5314,8 @@ static void e1000_watchdog_task(struct work_struct *work)
/* 8000ES2LAN requires a Rx packet buffer work-around
* on link down event; reset the controller to flush
* the Rx packet buffer.
- *
- * If the link is lost the controller stops DMA, but
- * if there is queued Tx work it cannot be done. So
- * reset the controller to flush the Tx packet buffers.
*/
- if ((adapter->flags & FLAG_RX_NEEDS_RESTART) ||
- e1000_desc_unused(tx_ring) + 1 < tx_ring->count)
+ if (adapter->flags & FLAG_RX_NEEDS_RESTART)
adapter->flags |= FLAG_RESTART_NOW;
else
pm_schedule_suspend(netdev->dev.parent,
@@ -5337,6 +5338,14 @@ link_up:
adapter->gotc_old = adapter->stats.gotc;
spin_unlock(&adapter->stats64_lock);
+ /* If the link is lost the controller stops DMA, but
+ * if there is queued Tx work it cannot be done. So
+ * reset the controller to flush the Tx packet buffers.
+ */
+ if (!netif_carrier_ok(netdev) &&
+ (e1000_desc_unused(tx_ring) + 1 < tx_ring->count))
+ adapter->flags |= FLAG_RESTART_NOW;
+
/* If reset is necessary, do it outside of interrupt context. */
if (adapter->flags & FLAG_RESTART_NOW) {
schedule_work(&adapter->reset_task);
@@ -5395,8 +5404,9 @@ link_up:
/* Reset the timer */
if (!test_bit(__E1000_DOWN, &adapter->state))
- mod_timer(&adapter->watchdog_timer,
- round_jiffies(jiffies + 2 * HZ));
+ queue_delayed_work(adapter->e1000_workqueue,
+ &adapter->watchdog_task,
+ round_jiffies(2 * HZ));
}
#define E1000_TX_FLAGS_CSUM 0x00000001
@@ -6016,7 +6026,7 @@ static int e1000_change_mtu(struct net_device *netdev, int new_mtu)
}
while (test_and_set_bit(__E1000_RESETTING, &adapter->state))
- usleep_range(1000, 2000);
+ usleep_range(1000, 1100);
/* e1000e_down -> e1000e_reset dependent on max_frame_size & mtu */
adapter->max_frame_size = max_frame;
e_info("changing MTU from %d to %d\n", netdev->mtu, new_mtu);
@@ -6296,7 +6306,7 @@ static int e1000e_pm_freeze(struct device *dev)
int count = E1000_CHECK_RESET_COUNT;
while (test_bit(__E1000_RESETTING, &adapter->state) && count--)
- usleep_range(10000, 20000);
+ usleep_range(10000, 11000);
WARN_ON(test_bit(__E1000_RESETTING, &adapter->state));
@@ -6711,7 +6721,7 @@ static int e1000e_pm_runtime_suspend(struct device *dev)
int count = E1000_CHECK_RESET_COUNT;
while (test_bit(__E1000_RESETTING, &adapter->state) && count--)
- usleep_range(10000, 20000);
+ usleep_range(10000, 11000);
WARN_ON(test_bit(__E1000_RESETTING, &adapter->state));
@@ -7251,11 +7261,21 @@ static int e1000_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
goto err_eeprom;
}
- timer_setup(&adapter->watchdog_timer, e1000_watchdog, 0);
+ adapter->e1000_workqueue = alloc_workqueue("%s", WQ_MEM_RECLAIM, 0,
+ e1000e_driver_name);
+
+ if (!adapter->e1000_workqueue) {
+ err = -ENOMEM;
+ goto err_workqueue;
+ }
+
+ INIT_DELAYED_WORK(&adapter->watchdog_task, e1000_watchdog_task);
+ queue_delayed_work(adapter->e1000_workqueue, &adapter->watchdog_task,
+ 0);
+
timer_setup(&adapter->phy_info_timer, e1000_update_phy_info, 0);
INIT_WORK(&adapter->reset_task, e1000_reset_task);
- INIT_WORK(&adapter->watchdog_task, e1000_watchdog_task);
INIT_WORK(&adapter->downshift_task, e1000e_downshift_workaround);
INIT_WORK(&adapter->update_phy_task, e1000e_update_phy_task);
INIT_WORK(&adapter->print_hang_task, e1000_print_hw_hang);
@@ -7349,6 +7369,9 @@ static int e1000_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
return 0;
err_register:
+ flush_workqueue(adapter->e1000_workqueue);
+ destroy_workqueue(adapter->e1000_workqueue);
+err_workqueue:
if (!(adapter->flags & FLAG_HAS_AMT))
e1000e_release_hw_control(adapter);
err_eeprom:
@@ -7395,15 +7418,17 @@ static void e1000_remove(struct pci_dev *pdev)
*/
if (!down)
set_bit(__E1000_DOWN, &adapter->state);
- del_timer_sync(&adapter->watchdog_timer);
del_timer_sync(&adapter->phy_info_timer);
cancel_work_sync(&adapter->reset_task);
- cancel_work_sync(&adapter->watchdog_task);
cancel_work_sync(&adapter->downshift_task);
cancel_work_sync(&adapter->update_phy_task);
cancel_work_sync(&adapter->print_hang_task);
+ cancel_delayed_work(&adapter->watchdog_task);
+ flush_workqueue(adapter->e1000_workqueue);
+ destroy_workqueue(adapter->e1000_workqueue);
+
if (adapter->flags & FLAG_HAS_HW_TIMESTAMP) {
cancel_work_sync(&adapter->tx_hwtstamp_work);
if (adapter->tx_hwtstamp_skb) {
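
The e1000e change above replaces the watchdog_timer/watchdog_task pair with a delayed work item queued on a private WQ_MEM_RECLAIM workqueue. A condensed sketch of that pattern follows, using hypothetical names (struct foo, foo_watchdog_task, foo_probe) rather than the driver's real ones; it only illustrates the lifecycle, not the driver's link-state logic.

#include <linux/workqueue.h>

struct foo {
	struct workqueue_struct *wq;
	struct delayed_work watchdog_task;
};

static void foo_watchdog_task(struct work_struct *work)
{
	struct foo *adapter = container_of(work, struct foo,
					   watchdog_task.work);

	/* ... periodic link/stats work previously done from the timer ... */

	/* re-arm ourselves; replaces mod_timer(..., jiffies + 2 * HZ) */
	queue_delayed_work(adapter->wq, &adapter->watchdog_task, 2 * HZ);
}

static int foo_probe(struct foo *adapter, const char *name)
{
	/* dedicated queue so the watchdog can run during memory reclaim */
	adapter->wq = alloc_workqueue("%s", WQ_MEM_RECLAIM, 0, name);
	if (!adapter->wq)
		return -ENOMEM;

	INIT_DELAYED_WORK(&adapter->watchdog_task, foo_watchdog_task);
	queue_delayed_work(adapter->wq, &adapter->watchdog_task, 0);
	return 0;
}

static void foo_remove(struct foo *adapter)
{
	cancel_delayed_work_sync(&adapter->watchdog_task);
	destroy_workqueue(adapter->wq);
}

Interrupt handlers then call queue_delayed_work(wq, &watchdog_task, 1) where they previously used mod_timer(&watchdog_timer, jiffies + 1), as the hunks above show.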
diff --git a/drivers/net/ethernet/intel/e1000e/nvm.c b/drivers/net/ethernet/intel/e1000e/nvm.c
index 937f9af22d26..e609f4df86f4 100644
--- a/drivers/net/ethernet/intel/e1000e/nvm.c
+++ b/drivers/net/ethernet/intel/e1000e/nvm.c
@@ -392,7 +392,7 @@ s32 e1000e_write_nvm_spi(struct e1000_hw *hw, u16 offset, u16 words, u16 *data)
break;
}
}
- usleep_range(10000, 20000);
+ usleep_range(10000, 11000);
nvm->ops.release(hw);
}
diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
index 7ce42040b851..84bd06901014 100644
--- a/drivers/net/ethernet/intel/i40e/i40e.h
+++ b/drivers/net/ethernet/intel/i40e/i40e.h
@@ -27,6 +27,7 @@
#include <net/ip6_checksum.h>
#include <linux/ethtool.h>
#include <linux/if_vlan.h>
+#include <linux/if_macvlan.h>
#include <linux/if_bridge.h>
#include <linux/clocksource.h>
#include <linux/net_tstamp.h>
@@ -295,8 +296,6 @@ struct i40e_cloud_filter {
u8 tunnel_type;
};
-#define I40E_ETH_P_LLDP 0x88cc
-
#define I40E_DCB_PRIO_TYPE_STRICT 0
#define I40E_DCB_PRIO_TYPE_ETS 1
#define I40E_DCB_STRICT_PRIO_CREDITS 127
@@ -414,6 +413,11 @@ struct i40e_flex_pit {
u8 pit_index;
};
+struct i40e_fwd_adapter {
+ struct net_device *netdev;
+ int bit_no;
+};
+
struct i40e_channel {
struct list_head list;
bool initialized;
@@ -428,11 +432,25 @@ struct i40e_channel {
struct i40e_aqc_vsi_properties_data info;
u64 max_tx_rate;
+ struct i40e_fwd_adapter *fwd;
/* track this channel belongs to which VSI */
struct i40e_vsi *parent_vsi;
};
+static inline bool i40e_is_channel_macvlan(struct i40e_channel *ch)
+{
+ return !!ch->fwd;
+}
+
+static inline u8 *i40e_channel_mac(struct i40e_channel *ch)
+{
+ if (i40e_is_channel_macvlan(ch))
+ return ch->fwd->netdev->dev_addr;
+ else
+ return NULL;
+}
+
/* struct that defines the Ethernet device */
struct i40e_pf {
struct pci_dev *pdev;
@@ -777,7 +795,8 @@ struct i40e_vsi {
u16 alloc_queue_pairs; /* Allocated Tx/Rx queues */
u16 req_queue_pairs; /* User requested queue pairs */
u16 num_queue_pairs; /* Used tx and rx pairs */
- u16 num_desc;
+ u16 num_tx_desc;
+ u16 num_rx_desc;
enum i40e_vsi_type type; /* VSI type, e.g., LAN, FCoE, etc */
s16 vf_id; /* Virtual function ID for SRIOV VSIs */
@@ -814,6 +833,13 @@ struct i40e_vsi {
struct list_head ch_list;
u16 tc_seid_map[I40E_MAX_TRAFFIC_CLASS];
+ /* macvlan fields */
+#define I40E_MAX_MACVLANS 128 /* Max HW vectors - 1 on FVL */
+#define I40E_MIN_MACVLAN_VECTORS 2 /* Min vectors to enable macvlans */
+ DECLARE_BITMAP(fwd_bitmask, I40E_MAX_MACVLANS);
+ struct list_head macvlan_list;
+ int macvlan_cnt;
+
void *priv; /* client driver data reference. */
/* VSI specific handlers */
diff --git a/drivers/net/ethernet/intel/i40e/i40e_adminq.c b/drivers/net/ethernet/intel/i40e/i40e_adminq.c
index 243dcd4bec19..814acbe79ffd 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_adminq.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_adminq.c
@@ -675,7 +675,7 @@ static u16 i40e_clean_asq(struct i40e_hw *hw)
desc = I40E_ADMINQ_DESC(*asq, ntc);
details = I40E_ADMINQ_DETAILS(*asq, ntc);
while (rd32(hw, hw->aq.asq.head) != ntc) {
- i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE,
+ i40e_debug(hw, I40E_DEBUG_AQ_COMMAND,
"ntc %d head %d.\n", ntc, rd32(hw, hw->aq.asq.head));
if (details->callback) {
@@ -835,7 +835,7 @@ i40e_status i40e_asq_send_command(struct i40e_hw *hw,
}
/* bump the tail */
- i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE, "AQTX: desc and buffer:\n");
+ i40e_debug(hw, I40E_DEBUG_AQ_COMMAND, "AQTX: desc and buffer:\n");
i40e_debug_aq(hw, I40E_DEBUG_AQ_COMMAND, (void *)desc_on_ring,
buff, buff_size);
(hw->aq.asq.next_to_use)++;
@@ -886,7 +886,7 @@ i40e_status i40e_asq_send_command(struct i40e_hw *hw,
hw->aq.asq_last_status = (enum i40e_admin_queue_err)retval;
}
- i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE,
+ i40e_debug(hw, I40E_DEBUG_AQ_COMMAND,
"AQTX: desc and buffer writeback:\n");
i40e_debug_aq(hw, I40E_DEBUG_AQ_COMMAND, (void *)desc, buff, buff_size);
@@ -995,7 +995,7 @@ i40e_status i40e_clean_arq_element(struct i40e_hw *hw,
memcpy(e->msg_buf, hw->aq.arq.r.arq_bi[desc_idx].va,
e->msg_len);
- i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE, "AQRX: desc and buffer:\n");
+ i40e_debug(hw, I40E_DEBUG_AQ_COMMAND, "AQRX: desc and buffer:\n");
i40e_debug_aq(hw, I40E_DEBUG_AQ_COMMAND, (void *)desc, e->msg_buf,
hw->aq.arq_buf_size);
diff --git a/drivers/net/ethernet/intel/i40e/i40e_common.c b/drivers/net/ethernet/intel/i40e/i40e_common.c
index ecb1adaa54ec..906cf68d3453 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_common.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_common.c
@@ -281,47 +281,49 @@ void i40e_debug_aq(struct i40e_hw *hw, enum i40e_debug_mask mask, void *desc,
void *buffer, u16 buf_len)
{
struct i40e_aq_desc *aq_desc = (struct i40e_aq_desc *)desc;
+ u32 effective_mask = hw->debug_mask & mask;
+ char prefix[27];
u16 len;
u8 *buf = (u8 *)buffer;
- if ((!(mask & hw->debug_mask)) || (desc == NULL))
+ if (!effective_mask || !desc)
return;
len = le16_to_cpu(aq_desc->datalen);
- i40e_debug(hw, mask,
+ i40e_debug(hw, mask & I40E_DEBUG_AQ_DESCRIPTOR,
"AQ CMD: opcode 0x%04X, flags 0x%04X, datalen 0x%04X, retval 0x%04X\n",
le16_to_cpu(aq_desc->opcode),
le16_to_cpu(aq_desc->flags),
le16_to_cpu(aq_desc->datalen),
le16_to_cpu(aq_desc->retval));
- i40e_debug(hw, mask, "\tcookie (h,l) 0x%08X 0x%08X\n",
+ i40e_debug(hw, mask & I40E_DEBUG_AQ_DESCRIPTOR,
+ "\tcookie (h,l) 0x%08X 0x%08X\n",
le32_to_cpu(aq_desc->cookie_high),
le32_to_cpu(aq_desc->cookie_low));
- i40e_debug(hw, mask, "\tparam (0,1) 0x%08X 0x%08X\n",
+ i40e_debug(hw, mask & I40E_DEBUG_AQ_DESCRIPTOR,
+ "\tparam (0,1) 0x%08X 0x%08X\n",
le32_to_cpu(aq_desc->params.internal.param0),
le32_to_cpu(aq_desc->params.internal.param1));
- i40e_debug(hw, mask, "\taddr (h,l) 0x%08X 0x%08X\n",
+ i40e_debug(hw, mask & I40E_DEBUG_AQ_DESCRIPTOR,
+ "\taddr (h,l) 0x%08X 0x%08X\n",
le32_to_cpu(aq_desc->params.external.addr_high),
le32_to_cpu(aq_desc->params.external.addr_low));
- if ((buffer != NULL) && (aq_desc->datalen != 0)) {
+ if (buffer && buf_len != 0 && len != 0 &&
+ (effective_mask & I40E_DEBUG_AQ_DESC_BUFFER)) {
i40e_debug(hw, mask, "AQ CMD Buffer:\n");
if (buf_len < len)
len = buf_len;
- /* write the full 16-byte chunks */
- if (hw->debug_mask & mask) {
- char prefix[27];
-
- snprintf(prefix, sizeof(prefix),
- "i40e %02x:%02x.%x: \t0x",
- hw->bus.bus_id,
- hw->bus.device,
- hw->bus.func);
-
- print_hex_dump(KERN_INFO, prefix, DUMP_PREFIX_OFFSET,
- 16, 1, buf, len, false);
- }
+
+ snprintf(prefix, sizeof(prefix),
+ "i40e %02x:%02x.%x: \t0x",
+ hw->bus.bus_id,
+ hw->bus.device,
+ hw->bus.func);
+
+ print_hex_dump(KERN_INFO, prefix, DUMP_PREFIX_OFFSET,
+ 16, 1, buf, len, false);
}
}
@@ -1859,8 +1861,7 @@ i40e_status i40e_aq_get_link_info(struct i40e_hw *hw,
hw->aq.fw_min_ver < 40)) && hw_link_info->phy_type == 0xE)
hw_link_info->phy_type = I40E_PHY_TYPE_10GBASE_SFPP_CU;
- if (hw->aq.api_maj_ver == I40E_FW_API_VERSION_MAJOR &&
- hw->aq.api_min_ver >= 7) {
+ if (hw->flags & I40E_HW_FLAG_AQ_PHY_ACCESS_CAPABLE) {
__le32 tmp;
memcpy(&tmp, resp->link_type, sizeof(tmp));
diff --git a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
index 7ea4f09229e4..55d20acfcf70 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
@@ -333,8 +333,9 @@ static void i40e_dbg_dump_vsi_seid(struct i40e_pf *pf, int seid)
" seid = %d, id = %d, uplink_seid = %d\n",
vsi->seid, vsi->id, vsi->uplink_seid);
dev_info(&pf->pdev->dev,
- " base_queue = %d, num_queue_pairs = %d, num_desc = %d\n",
- vsi->base_queue, vsi->num_queue_pairs, vsi->num_desc);
+ " base_queue = %d, num_queue_pairs = %d, num_tx_desc = %d, num_rx_desc = %d\n",
+ vsi->base_queue, vsi->num_queue_pairs, vsi->num_tx_desc,
+ vsi->num_rx_desc);
dev_info(&pf->pdev->dev, " type = %i\n", vsi->type);
if (vsi->type == I40E_VSI_SRIOV)
dev_info(&pf->pdev->dev, " VF ID = %i\n", vsi->vf_id);
@@ -1330,7 +1331,7 @@ static ssize_t i40e_dbg_command_write(struct file *filp,
}
ret = i40e_aq_add_rem_control_packet_filter(&pf->hw,
pf->hw.mac.addr,
- I40E_ETH_P_LLDP, 0,
+ ETH_P_LLDP, 0,
pf->vsi[pf->lan_vsi]->seid,
0, true, NULL, NULL);
if (ret) {
@@ -1348,7 +1349,7 @@ static ssize_t i40e_dbg_command_write(struct file *filp,
ret = i40e_aq_add_rem_control_packet_filter(&pf->hw,
pf->hw.mac.addr,
- I40E_ETH_P_LLDP, 0,
+ ETH_P_LLDP, 0,
pf->vsi[pf->lan_vsi]->seid,
0, false, NULL, NULL);
if (ret) {
diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
index 7545b21bee3c..527eb52c5401 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
@@ -1982,6 +1982,8 @@ static int i40e_set_ringparam(struct net_device *netdev,
if (i40e_enabled_xdp_vsi(vsi))
vsi->xdp_rings[i]->count = new_tx_count;
}
+ vsi->num_tx_desc = new_tx_count;
+ vsi->num_rx_desc = new_rx_count;
goto done;
}
@@ -2118,6 +2120,8 @@ rx_unwind:
rx_rings = NULL;
}
+ vsi->num_tx_desc = new_tx_count;
+ vsi->num_rx_desc = new_rx_count;
i40e_up(vsi);
free_tx:
@@ -4852,9 +4856,12 @@ static u32 i40e_get_priv_flags(struct net_device *dev)
static int i40e_set_priv_flags(struct net_device *dev, u32 flags)
{
struct i40e_netdev_priv *np = netdev_priv(dev);
+ u64 orig_flags, new_flags, changed_flags;
+ enum i40e_admin_queue_err adq_err;
struct i40e_vsi *vsi = np->vsi;
struct i40e_pf *pf = vsi->back;
- u64 orig_flags, new_flags, changed_flags;
+ bool is_reset_needed;
+ i40e_status status;
u32 i, j;
orig_flags = READ_ONCE(pf->flags);
@@ -4898,6 +4905,10 @@ static int i40e_set_priv_flags(struct net_device *dev, u32 flags)
flags_complete:
changed_flags = orig_flags ^ new_flags;
+ is_reset_needed = !!(changed_flags & (I40E_FLAG_VEB_STATS_ENABLED |
+ I40E_FLAG_LEGACY_RX | I40E_FLAG_SOURCE_PRUNING_DISABLED |
+ I40E_FLAG_DISABLE_FW_LLDP));
+
/* Before we finalize any flag changes, we need to perform some
* checks to ensure that the changes are supported and safe.
*/
@@ -4932,13 +4943,6 @@ flags_complete:
return -EOPNOTSUPP;
}
- /* Now that we've checked to ensure that the new flags are valid, load
- * them into place. Since we only modify flags either (a) during
- * initialization or (b) while holding the RTNL lock, we don't need
- * anything fancy here.
- */
- pf->flags = new_flags;
-
/* Process any additional changes needed as a result of flag changes.
* The changed_flags value reflects the list of bits that were
* changed in the code above.
@@ -4946,7 +4950,7 @@ flags_complete:
/* Flush current ATR settings if ATR was disabled */
if ((changed_flags & I40E_FLAG_FD_ATR_ENABLED) &&
- !(pf->flags & I40E_FLAG_FD_ATR_ENABLED)) {
+ !(new_flags & I40E_FLAG_FD_ATR_ENABLED)) {
set_bit(__I40E_FD_ATR_AUTO_DISABLED, pf->state);
set_bit(__I40E_FD_FLUSH_REQUESTED, pf->state);
}
@@ -4955,7 +4959,7 @@ flags_complete:
u16 sw_flags = 0, valid_flags = 0;
int ret;
- if (!(pf->flags & I40E_FLAG_TRUE_PROMISC_SUPPORT))
+ if (!(new_flags & I40E_FLAG_TRUE_PROMISC_SUPPORT))
sw_flags = I40E_AQ_SET_SWITCH_CFG_PROMISC;
valid_flags = I40E_AQ_SET_SWITCH_CFG_PROMISC;
ret = i40e_aq_set_switch_config(&pf->hw, sw_flags, valid_flags,
@@ -4974,13 +4978,13 @@ flags_complete:
(changed_flags & I40E_FLAG_BASE_R_FEC)) {
u8 fec_cfg = 0;
- if (pf->flags & I40E_FLAG_RS_FEC &&
- pf->flags & I40E_FLAG_BASE_R_FEC) {
+ if (new_flags & I40E_FLAG_RS_FEC &&
+ new_flags & I40E_FLAG_BASE_R_FEC) {
fec_cfg = I40E_AQ_SET_FEC_AUTO;
- } else if (pf->flags & I40E_FLAG_RS_FEC) {
+ } else if (new_flags & I40E_FLAG_RS_FEC) {
fec_cfg = (I40E_AQ_SET_FEC_REQUEST_RS |
I40E_AQ_SET_FEC_ABILITY_RS);
- } else if (pf->flags & I40E_FLAG_BASE_R_FEC) {
+ } else if (new_flags & I40E_FLAG_BASE_R_FEC) {
fec_cfg = (I40E_AQ_SET_FEC_REQUEST_KR |
I40E_AQ_SET_FEC_ABILITY_KR);
}
@@ -4988,14 +4992,14 @@ flags_complete:
dev_warn(&pf->pdev->dev, "Cannot change FEC config\n");
}
- if ((changed_flags & pf->flags &
+ if ((changed_flags & new_flags &
I40E_FLAG_LINK_DOWN_ON_CLOSE_ENABLED) &&
- (pf->flags & I40E_FLAG_MFP_ENABLED))
+ (new_flags & I40E_FLAG_MFP_ENABLED))
dev_warn(&pf->pdev->dev,
"Turning on link-down-on-close flag may affect other partitions\n");
if (changed_flags & I40E_FLAG_DISABLE_FW_LLDP) {
- if (pf->flags & I40E_FLAG_DISABLE_FW_LLDP) {
+ if (new_flags & I40E_FLAG_DISABLE_FW_LLDP) {
struct i40e_dcbx_config *dcbcfg;
i40e_aq_stop_lldp(&pf->hw, true, false, NULL);
@@ -5013,17 +5017,43 @@ flags_complete:
dcbcfg->pfc.willing = 1;
dcbcfg->pfc.pfccap = I40E_MAX_TRAFFIC_CLASS;
} else {
- i40e_aq_start_lldp(&pf->hw, false, NULL);
+ status = i40e_aq_start_lldp(&pf->hw, false, NULL);
+ if (status) {
+ adq_err = pf->hw.aq.asq_last_status;
+ switch (adq_err) {
+ case I40E_AQ_RC_EEXIST:
+ dev_warn(&pf->pdev->dev,
+ "FW LLDP agent is already running\n");
+ is_reset_needed = false;
+ break;
+ case I40E_AQ_RC_EPERM:
+ dev_warn(&pf->pdev->dev,
+ "Device configuration forbids SW from starting the LLDP agent.\n");
+ return -EINVAL;
+ default:
+ dev_warn(&pf->pdev->dev,
+ "Starting FW LLDP agent failed: error: %s, %s\n",
+ i40e_stat_str(&pf->hw,
+ status),
+ i40e_aq_str(&pf->hw,
+ adq_err));
+ return -EINVAL;
+ }
+ }
}
}
+ /* Now that we've checked to ensure that the new flags are valid, load
+ * them into place. Since we only modify flags either (a) during
+ * initialization or (b) while holding the RTNL lock, we don't need
+ * anything fancy here.
+ */
+ pf->flags = new_flags;
+
/* Issue reset to cause things to take effect, as additional bits
* are added we will need to create a mask of bits requiring reset
*/
- if (changed_flags & (I40E_FLAG_VEB_STATS_ENABLED |
- I40E_FLAG_LEGACY_RX |
- I40E_FLAG_SOURCE_PRUNING_DISABLED |
- I40E_FLAG_DISABLE_FW_LLDP))
+ if (is_reset_needed)
i40e_do_reset(pf, BIT(__I40E_PF_RESET_REQUESTED), true);
return 0;
@@ -5181,6 +5211,16 @@ static int i40e_get_module_eeprom(struct net_device *netdev,
return 0;
}
+static int i40e_get_eee(struct net_device *netdev, struct ethtool_eee *edata)
+{
+ return -EOPNOTSUPP;
+}
+
+static int i40e_set_eee(struct net_device *netdev, struct ethtool_eee *edata)
+{
+ return -EOPNOTSUPP;
+}
+
static const struct ethtool_ops i40e_ethtool_recovery_mode_ops = {
.set_eeprom = i40e_set_eeprom,
.get_eeprom_len = i40e_get_eeprom_len,
@@ -5208,6 +5248,8 @@ static const struct ethtool_ops i40e_ethtool_ops = {
.set_rxnfc = i40e_set_rxnfc,
.self_test = i40e_diag_test,
.get_strings = i40e_get_strings,
+ .get_eee = i40e_get_eee,
+ .set_eee = i40e_set_eee,
.set_phys_id = i40e_set_phys_id,
.get_sset_count = i40e_get_sset_count,
.get_ethtool_stats = i40e_get_ethtool_stats,
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index 320562b39686..9ebbe3da61bb 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -32,7 +32,7 @@ static const char i40e_driver_string[] =
__stringify(DRV_VERSION_MINOR) "." \
__stringify(DRV_VERSION_BUILD) DRV_KERN
const char i40e_driver_version_str[] = DRV_VERSION;
-static const char i40e_copyright[] = "Copyright (c) 2013 - 2014 Intel Corporation.";
+static const char i40e_copyright[] = "Copyright (c) 2013 - 2019 Intel Corporation.";
/* a bit of forward declarations */
static void i40e_vsi_reinit_locked(struct i40e_vsi *vsi);
@@ -636,9 +636,6 @@ void i40e_update_eth_stats(struct i40e_vsi *vsi)
i40e_stat_update32(hw, I40E_GLV_RUPP(stat_idx),
vsi->stat_offsets_loaded,
&oes->rx_unknown_protocol, &es->rx_unknown_protocol);
- i40e_stat_update32(hw, I40E_GLV_TEPC(stat_idx),
- vsi->stat_offsets_loaded,
- &oes->tx_errors, &es->tx_errors);
i40e_stat_update48(hw, I40E_GLV_GORCH(stat_idx),
I40E_GLV_GORCL(stat_idx),
@@ -5864,8 +5861,10 @@ static int i40e_add_channel(struct i40e_pf *pf, u16 uplink_seid,
return -ENOENT;
}
- /* Success, update channel */
- ch->enabled_tc = enabled_tc;
+ /* Success, update channel, set enabled_tc only if the channel
+ * is not a macvlan
+ */
+ ch->enabled_tc = !i40e_is_channel_macvlan(ch) && enabled_tc;
ch->seid = ctxt.seid;
ch->vsi_number = ctxt.vsi_number;
ch->stat_counter_idx = cpu_to_le16(ctxt.info.stat_counter_idx);
@@ -6413,6 +6412,50 @@ static int i40e_resume_port_tx(struct i40e_pf *pf)
}
/**
+ * i40e_update_dcb_config
+ * @hw: pointer to the HW struct
+ * @enable_mib_change: enable MIB change event
+ *
+ * Update DCB configuration from the firmware
+ **/
+static enum i40e_status_code
+i40e_update_dcb_config(struct i40e_hw *hw, bool enable_mib_change)
+{
+ struct i40e_lldp_variables lldp_cfg;
+ i40e_status ret;
+
+ if (!hw->func_caps.dcb)
+ return I40E_NOT_SUPPORTED;
+
+ /* Read LLDP NVM area */
+ ret = i40e_read_lldp_cfg(hw, &lldp_cfg);
+ if (ret)
+ return I40E_ERR_NOT_READY;
+
+ /* Get DCBX status */
+ ret = i40e_get_dcbx_status(hw, &hw->dcbx_status);
+ if (ret)
+ return ret;
+
+ /* Check the DCBX Status */
+ if (hw->dcbx_status == I40E_DCBX_STATUS_DONE ||
+ hw->dcbx_status == I40E_DCBX_STATUS_IN_PROGRESS) {
+ /* Get current DCBX configuration */
+ ret = i40e_get_dcb_config(hw);
+ if (ret)
+ return ret;
+ } else if (hw->dcbx_status == I40E_DCBX_STATUS_DISABLED) {
+ return I40E_ERR_NOT_READY;
+ }
+
+ /* Configure the LLDP MIB change event */
+ if (enable_mib_change)
+ ret = i40e_aq_cfg_lldp_mib_change_event(hw, true, NULL);
+
+ return ret;
+}
+
+/**
* i40e_init_pf_dcb - Initialize DCB configuration
* @pf: PF being configured
*
@@ -6428,11 +6471,13 @@ static int i40e_init_pf_dcb(struct i40e_pf *pf)
* Also do not enable DCBx if FW LLDP agent is disabled
*/
if ((pf->hw_features & I40E_HW_NO_DCB_SUPPORT) ||
- (pf->flags & I40E_FLAG_DISABLE_FW_LLDP))
+ (pf->flags & I40E_FLAG_DISABLE_FW_LLDP)) {
+ dev_info(&pf->pdev->dev, "DCB is not supported or FW LLDP is disabled\n");
+ err = I40E_NOT_SUPPORTED;
goto out;
+ }
- /* Get the initial DCB configuration */
- err = i40e_init_dcb(hw, true);
+ err = i40e_update_dcb_config(hw, true);
if (!err) {
/* Device/Function is not DCBX capable */
if ((!hw->func_caps.dcb) ||
@@ -6869,6 +6914,489 @@ static void i40e_vsi_set_default_tc_config(struct i40e_vsi *vsi)
}
/**
+ * i40e_del_macvlan_filter - Delete the mac filter on the macvlan channel VSI
+ * @hw: pointer to the HW structure
+ * @seid: seid of the channel VSI
+ * @macaddr: the mac address to apply as a filter
+ * @aq_err: store the admin Q error
+ *
+ * This function deletes a mac filter on the channel VSI which serves as the
+ * macvlan. Returns 0 on success.
+ **/
+static i40e_status i40e_del_macvlan_filter(struct i40e_hw *hw, u16 seid,
+ const u8 *macaddr, int *aq_err)
+{
+ struct i40e_aqc_remove_macvlan_element_data element;
+ i40e_status status;
+
+ memset(&element, 0, sizeof(element));
+ ether_addr_copy(element.mac_addr, macaddr);
+ element.vlan_tag = 0;
+ element.flags = I40E_AQC_MACVLAN_DEL_PERFECT_MATCH;
+ status = i40e_aq_remove_macvlan(hw, seid, &element, 1, NULL);
+ *aq_err = hw->aq.asq_last_status;
+
+ return status;
+}
+
+/**
+ * i40e_add_macvlan_filter - Add a mac filter on the macvlan channel VSI
+ * @hw: pointer to the HW structure
+ * @seid: seid of the channel VSI
+ * @macaddr: the mac address to apply as a filter
+ * @aq_err: store the admin Q error
+ *
+ * This function adds a mac filter on the channel VSI which serves as the
+ * macvlan. Returns 0 on success.
+ **/
+static i40e_status i40e_add_macvlan_filter(struct i40e_hw *hw, u16 seid,
+ const u8 *macaddr, int *aq_err)
+{
+ struct i40e_aqc_add_macvlan_element_data element;
+ i40e_status status;
+ u16 cmd_flags = 0;
+
+ ether_addr_copy(element.mac_addr, macaddr);
+ element.vlan_tag = 0;
+ element.queue_number = 0;
+ element.match_method = I40E_AQC_MM_ERR_NO_RES;
+ cmd_flags |= I40E_AQC_MACVLAN_ADD_PERFECT_MATCH;
+ element.flags = cpu_to_le16(cmd_flags);
+ status = i40e_aq_add_macvlan(hw, seid, &element, 1, NULL);
+ *aq_err = hw->aq.asq_last_status;
+
+ return status;
+}
+
+/**
+ * i40e_reset_ch_rings - Reset the queue contexts in a channel
+ * @vsi: the VSI we want to access
+ * @ch: the channel we want to access
+ */
+static void i40e_reset_ch_rings(struct i40e_vsi *vsi, struct i40e_channel *ch)
+{
+ struct i40e_ring *tx_ring, *rx_ring;
+ u16 pf_q;
+ int i;
+
+ for (i = 0; i < ch->num_queue_pairs; i++) {
+ pf_q = ch->base_queue + i;
+ tx_ring = vsi->tx_rings[pf_q];
+ tx_ring->ch = NULL;
+ rx_ring = vsi->rx_rings[pf_q];
+ rx_ring->ch = NULL;
+ }
+}
+
+/**
+ * i40e_free_macvlan_channels - Free the macvlan channel VSIs
+ * @vsi: the VSI we want to access
+ *
+ * This function frees the Qs of the channel VSI from
+ * the stack and also deletes the channel VSIs which
+ * serve as macvlans.
+ */
+static void i40e_free_macvlan_channels(struct i40e_vsi *vsi)
+{
+ struct i40e_channel *ch, *ch_tmp;
+ int ret;
+
+ if (list_empty(&vsi->macvlan_list))
+ return;
+
+ list_for_each_entry_safe(ch, ch_tmp, &vsi->macvlan_list, list) {
+ struct i40e_vsi *parent_vsi;
+
+ if (i40e_is_channel_macvlan(ch)) {
+ i40e_reset_ch_rings(vsi, ch);
+ clear_bit(ch->fwd->bit_no, vsi->fwd_bitmask);
+ netdev_unbind_sb_channel(vsi->netdev, ch->fwd->netdev);
+ netdev_set_sb_channel(ch->fwd->netdev, 0);
+ kfree(ch->fwd);
+ ch->fwd = NULL;
+ }
+
+ list_del(&ch->list);
+ parent_vsi = ch->parent_vsi;
+ if (!parent_vsi || !ch->initialized) {
+ kfree(ch);
+ continue;
+ }
+
+ /* remove the VSI */
+ ret = i40e_aq_delete_element(&vsi->back->hw, ch->seid,
+ NULL);
+ if (ret)
+ dev_err(&vsi->back->pdev->dev,
+ "unable to remove channel (%d) for parent VSI(%d)\n",
+ ch->seid, parent_vsi->seid);
+ kfree(ch);
+ }
+ vsi->macvlan_cnt = 0;
+}
+
+/**
+ * i40e_fwd_ring_up - bring the macvlan device up
+ * @vsi: the VSI we want to access
+ * @vdev: macvlan netdevice
+ * @fwd: the private fwd structure
+ */
+static int i40e_fwd_ring_up(struct i40e_vsi *vsi, struct net_device *vdev,
+ struct i40e_fwd_adapter *fwd)
+{
+ int ret = 0, num_tc = 1, i, aq_err;
+ struct i40e_channel *ch, *ch_tmp;
+ struct i40e_pf *pf = vsi->back;
+ struct i40e_hw *hw = &pf->hw;
+
+ if (list_empty(&vsi->macvlan_list))
+ return -EINVAL;
+
+ /* Go through the list and find an available channel */
+ list_for_each_entry_safe(ch, ch_tmp, &vsi->macvlan_list, list) {
+ if (!i40e_is_channel_macvlan(ch)) {
+ ch->fwd = fwd;
+ /* record configuration for macvlan interface in vdev */
+ for (i = 0; i < num_tc; i++)
+ netdev_bind_sb_channel_queue(vsi->netdev, vdev,
+ i,
+ ch->num_queue_pairs,
+ ch->base_queue);
+ for (i = 0; i < ch->num_queue_pairs; i++) {
+ struct i40e_ring *tx_ring, *rx_ring;
+ u16 pf_q;
+
+ pf_q = ch->base_queue + i;
+
+ /* Get to TX ring ptr */
+ tx_ring = vsi->tx_rings[pf_q];
+ tx_ring->ch = ch;
+
+ /* Get the RX ring ptr */
+ rx_ring = vsi->rx_rings[pf_q];
+ rx_ring->ch = ch;
+ }
+ break;
+ }
+ }
+
+ /* Guarantee all rings are updated before we update the
+ * MAC address filter.
+ */
+ wmb();
+
+ /* Add a mac filter */
+ ret = i40e_add_macvlan_filter(hw, ch->seid, vdev->dev_addr, &aq_err);
+ if (ret) {
+ /* if we cannot add the MAC rule then disable the offload */
+ macvlan_release_l2fw_offload(vdev);
+ for (i = 0; i < ch->num_queue_pairs; i++) {
+ struct i40e_ring *rx_ring;
+ u16 pf_q;
+
+ pf_q = ch->base_queue + i;
+ rx_ring = vsi->rx_rings[pf_q];
+ rx_ring->netdev = NULL;
+ }
+ dev_info(&pf->pdev->dev,
+ "Error adding mac filter on macvlan err %s, aq_err %s\n",
+ i40e_stat_str(hw, ret),
+ i40e_aq_str(hw, aq_err));
+ netdev_err(vdev, "L2fwd offload disabled due to L2 filter error\n");
+ }
+
+ return ret;
+}
+
+/**
+ * i40e_setup_macvlans - create the channels which will be macvlans
+ * @vsi: the VSI we want to access
+ * @macvlan_cnt: no. of macvlans to be setup
+ * @qcnt: no. of Qs per macvlan
+ * @vdev: macvlan netdevice
+ */
+static int i40e_setup_macvlans(struct i40e_vsi *vsi, u16 macvlan_cnt, u16 qcnt,
+ struct net_device *vdev)
+{
+ struct i40e_pf *pf = vsi->back;
+ struct i40e_hw *hw = &pf->hw;
+ struct i40e_vsi_context ctxt;
+ u16 sections, qmap, num_qps;
+ struct i40e_channel *ch;
+ int i, pow, ret = 0;
+ u8 offset = 0;
+
+ if (vsi->type != I40E_VSI_MAIN || !macvlan_cnt)
+ return -EINVAL;
+
+ num_qps = vsi->num_queue_pairs - (macvlan_cnt * qcnt);
+
+ /* find the next higher power-of-2 of num queue pairs */
+ pow = fls(roundup_pow_of_two(num_qps) - 1);
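+ /* Illustrative example (not from the original patch): with
+ * num_qps = 12, roundup_pow_of_two(12) = 16, so pow = fls(15) = 4
+ * and the queue map spans 2^4 = 16 queues.
+ */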
+
+ qmap = (offset << I40E_AQ_VSI_TC_QUE_OFFSET_SHIFT) |
+ (pow << I40E_AQ_VSI_TC_QUE_NUMBER_SHIFT);
+
+ /* Setup context bits for the main VSI */
+ sections = I40E_AQ_VSI_PROP_QUEUE_MAP_VALID;
+ sections |= I40E_AQ_VSI_PROP_SCHED_VALID;
+ memset(&ctxt, 0, sizeof(ctxt));
+ ctxt.seid = vsi->seid;
+ ctxt.pf_num = vsi->back->hw.pf_id;
+ ctxt.vf_num = 0;
+ ctxt.uplink_seid = vsi->uplink_seid;
+ ctxt.info = vsi->info;
+ ctxt.info.tc_mapping[0] = cpu_to_le16(qmap);
+ ctxt.info.mapping_flags |= cpu_to_le16(I40E_AQ_VSI_QUE_MAP_CONTIG);
+ ctxt.info.queue_mapping[0] = cpu_to_le16(vsi->base_queue);
+ ctxt.info.valid_sections |= cpu_to_le16(sections);
+
+ /* Reconfigure RSS for main VSI with new max queue count */
+ vsi->rss_size = max_t(u16, num_qps, qcnt);
+ ret = i40e_vsi_config_rss(vsi);
+ if (ret) {
+ dev_info(&pf->pdev->dev,
+ "Failed to reconfig RSS for num_queues (%u)\n",
+ vsi->rss_size);
+ return ret;
+ }
+ vsi->reconfig_rss = true;
+ dev_dbg(&vsi->back->pdev->dev,
+ "Reconfigured RSS with num_queues (%u)\n", vsi->rss_size);
+ vsi->next_base_queue = num_qps;
+ vsi->cnt_q_avail = vsi->num_queue_pairs - num_qps;
+
+ /* Update the VSI after updating the VSI queue-mapping
+ * information
+ */
+ ret = i40e_aq_update_vsi_params(hw, &ctxt, NULL);
+ if (ret) {
+ dev_info(&pf->pdev->dev,
+ "Update vsi tc config failed, err %s aq_err %s\n",
+ i40e_stat_str(hw, ret),
+ i40e_aq_str(hw, hw->aq.asq_last_status));
+ return ret;
+ }
+ /* update the local VSI info with updated queue map */
+ i40e_vsi_update_queue_map(vsi, &ctxt);
+ vsi->info.valid_sections = 0;
+
+ /* Create channels for macvlans */
+ INIT_LIST_HEAD(&vsi->macvlan_list);
+ for (i = 0; i < macvlan_cnt; i++) {
+ ch = kzalloc(sizeof(*ch), GFP_KERNEL);
+ if (!ch) {
+ ret = -ENOMEM;
+ goto err_free;
+ }
+ INIT_LIST_HEAD(&ch->list);
+ ch->num_queue_pairs = qcnt;
+ if (!i40e_setup_channel(pf, vsi, ch)) {
+ ret = -EINVAL;
+ goto err_free;
+ }
+ ch->parent_vsi = vsi;
+ vsi->cnt_q_avail -= ch->num_queue_pairs;
+ vsi->macvlan_cnt++;
+ list_add_tail(&ch->list, &vsi->macvlan_list);
+ }
+
+ return ret;
+
+err_free:
+ dev_info(&pf->pdev->dev, "Failed to setup macvlans\n");
+ i40e_free_macvlan_channels(vsi);
+
+ return ret;
+}
+
+/**
+ * i40e_fwd_add - configure macvlans
+ * @netdev: net device to configure
+ * @vdev: macvlan netdevice
+ **/
+static void *i40e_fwd_add(struct net_device *netdev, struct net_device *vdev)
+{
+ struct i40e_netdev_priv *np = netdev_priv(netdev);
+ u16 q_per_macvlan = 0, macvlan_cnt = 0, vectors;
+ struct i40e_vsi *vsi = np->vsi;
+ struct i40e_pf *pf = vsi->back;
+ struct i40e_fwd_adapter *fwd;
+ int avail_macvlan, ret;
+
+ if ((pf->flags & I40E_FLAG_DCB_ENABLED)) {
+ netdev_info(netdev, "Macvlans are not supported when DCB is enabled\n");
+ return ERR_PTR(-EINVAL);
+ }
+ if ((pf->flags & I40E_FLAG_TC_MQPRIO)) {
+ netdev_info(netdev, "Macvlans are not supported when HW TC offload is on\n");
+ return ERR_PTR(-EINVAL);
+ }
+ if (pf->num_lan_msix < I40E_MIN_MACVLAN_VECTORS) {
+ netdev_info(netdev, "Not enough vectors available to support macvlans\n");
+ return ERR_PTR(-EINVAL);
+ }
+
+ /* The macvlan device has to be a single Q device so that the
+ * tc_to_txq field can be reused to pick the tx queue.
+ */
+ if (netif_is_multiqueue(vdev))
+ return ERR_PTR(-ERANGE);
+
+ if (!vsi->macvlan_cnt) {
+ /* reserve bit 0 for the pf device */
+ set_bit(0, vsi->fwd_bitmask);
+
+ /* Try to reserve as many queues as possible for macvlans. First
+ * reserve 3/4th of the max vectors, then half, then a quarter, and
+ * calculate the Qs per macvlan as you go.
+ */
+ vectors = pf->num_lan_msix;
+ if (vectors <= I40E_MAX_MACVLANS && vectors > 64) {
+ /* allocate 4 Qs per macvlan and 32 Qs to the PF */
+ q_per_macvlan = 4;
+ macvlan_cnt = (vectors - 32) / 4;
+ } else if (vectors <= 64 && vectors > 32) {
+ /* allocate 2 Qs per macvlan and 16 Qs to the PF */
+ q_per_macvlan = 2;
+ macvlan_cnt = (vectors - 16) / 2;
+ } else if (vectors <= 32 && vectors > 16) {
+ /* allocate 1 Q per macvlan and 16 Qs to the PF */
+ q_per_macvlan = 1;
+ macvlan_cnt = vectors - 16;
+ } else if (vectors <= 16 && vectors > 8) {
+ /* allocate 1 Q per macvlan and 8 Qs to the PF */
+ q_per_macvlan = 1;
+ macvlan_cnt = vectors - 8;
+ } else {
+ /* allocate 1 Q per macvlan and 1 Q to the PF */
+ q_per_macvlan = 1;
+ macvlan_cnt = vectors - 1;
+ }
+
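+ /* Illustrative example (not from the original patch): with
+ * pf->num_lan_msix = 48 the second branch applies, giving
+ * q_per_macvlan = 2 and macvlan_cnt = (48 - 16) / 2 = 16.
+ */
+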
+ if (macvlan_cnt == 0)
+ return ERR_PTR(-EBUSY);
+
+ /* Quiesce VSI queues */
+ i40e_quiesce_vsi(vsi);
+
+ /* sets up the macvlans but does not "enable" them */
+ ret = i40e_setup_macvlans(vsi, macvlan_cnt, q_per_macvlan,
+ vdev);
+ if (ret)
+ return ERR_PTR(ret);
+
+ /* Unquiesce VSI */
+ i40e_unquiesce_vsi(vsi);
+ }
+ avail_macvlan = find_first_zero_bit(vsi->fwd_bitmask,
+ vsi->macvlan_cnt);
+ if (avail_macvlan >= I40E_MAX_MACVLANS)
+ return ERR_PTR(-EBUSY);
+
+ /* create the fwd struct */
+ fwd = kzalloc(sizeof(*fwd), GFP_KERNEL);
+ if (!fwd)
+ return ERR_PTR(-ENOMEM);
+
+ set_bit(avail_macvlan, vsi->fwd_bitmask);
+ fwd->bit_no = avail_macvlan;
+ netdev_set_sb_channel(vdev, avail_macvlan);
+ fwd->netdev = vdev;
+
+ if (!netif_running(netdev))
+ return fwd;
+
+ /* Set fwd ring up */
+ ret = i40e_fwd_ring_up(vsi, vdev, fwd);
+ if (ret) {
+ /* unbind the queues and drop the subordinate channel config */
+ netdev_unbind_sb_channel(netdev, vdev);
+ netdev_set_sb_channel(vdev, 0);
+
+ kfree(fwd);
+ return ERR_PTR(-EINVAL);
+ }
+
+ return fwd;
+}
+
+/**
+ * i40e_del_all_macvlans - Delete all the mac filters on the channels
+ * @vsi: the VSI we want to access
+ */
+static void i40e_del_all_macvlans(struct i40e_vsi *vsi)
+{
+ struct i40e_channel *ch, *ch_tmp;
+ struct i40e_pf *pf = vsi->back;
+ struct i40e_hw *hw = &pf->hw;
+ int aq_err, ret = 0;
+
+ if (list_empty(&vsi->macvlan_list))
+ return;
+
+ list_for_each_entry_safe(ch, ch_tmp, &vsi->macvlan_list, list) {
+ if (i40e_is_channel_macvlan(ch)) {
+ ret = i40e_del_macvlan_filter(hw, ch->seid,
+ i40e_channel_mac(ch),
+ &aq_err);
+ if (!ret) {
+ /* Reset queue contexts */
+ i40e_reset_ch_rings(vsi, ch);
+ clear_bit(ch->fwd->bit_no, vsi->fwd_bitmask);
+ netdev_unbind_sb_channel(vsi->netdev,
+ ch->fwd->netdev);
+ netdev_set_sb_channel(ch->fwd->netdev, 0);
+ kfree(ch->fwd);
+ ch->fwd = NULL;
+ }
+ }
+ }
+}
+
+/**
+ * i40e_fwd_del - delete macvlan interfaces
+ * @netdev: net device to configure
+ * @vdev: macvlan netdevice
+ */
+static void i40e_fwd_del(struct net_device *netdev, void *vdev)
+{
+ struct i40e_netdev_priv *np = netdev_priv(netdev);
+ struct i40e_fwd_adapter *fwd = vdev;
+ struct i40e_channel *ch, *ch_tmp;
+ struct i40e_vsi *vsi = np->vsi;
+ struct i40e_pf *pf = vsi->back;
+ struct i40e_hw *hw = &pf->hw;
+ int aq_err, ret = 0;
+
+ /* Find the channel associated with the macvlan and del mac filter */
+ list_for_each_entry_safe(ch, ch_tmp, &vsi->macvlan_list, list) {
+ if (i40e_is_channel_macvlan(ch) &&
+ ether_addr_equal(i40e_channel_mac(ch),
+ fwd->netdev->dev_addr)) {
+ ret = i40e_del_macvlan_filter(hw, ch->seid,
+ i40e_channel_mac(ch),
+ &aq_err);
+ if (!ret) {
+ /* Reset queue contexts */
+ i40e_reset_ch_rings(vsi, ch);
+ clear_bit(ch->fwd->bit_no, vsi->fwd_bitmask);
+ netdev_unbind_sb_channel(netdev, fwd->netdev);
+ netdev_set_sb_channel(fwd->netdev, 0);
+ kfree(ch->fwd);
+ ch->fwd = NULL;
+ } else {
+ dev_info(&pf->pdev->dev,
+ "Error deleting mac filter on macvlan err %s, aq_err %s\n",
+ i40e_stat_str(hw, ret),
+ i40e_aq_str(hw, aq_err));
+ }
+ break;
+ }
+ }
+}
+
+/**
* i40e_setup_tc - configure multiple traffic classes
* @netdev: net device to configure
* @type_data: tc offload data
@@ -6963,6 +7491,10 @@ config_tc:
vsi->seid);
need_reset = true;
goto exit;
+ } else {
+ dev_info(&vsi->back->pdev->dev,
+ "Setup channel (id:%u) utilizing num_queues %d\n",
+ vsi->seid, vsi->tc_config.tc_info[0].qcount);
}
if (pf->flags & I40E_FLAG_TC_MQPRIO) {
@@ -7227,15 +7759,15 @@ int i40e_add_del_cloud_filter_big_buf(struct i40e_vsi *vsi,
/**
* i40e_parse_cls_flower - Parse tc flower filters provided by kernel
* @vsi: Pointer to VSI
- * @cls_flower: Pointer to struct tc_cls_flower_offload
+ * @cls_flower: Pointer to struct flow_cls_offload
* @filter: Pointer to cloud filter structure
*
**/
static int i40e_parse_cls_flower(struct i40e_vsi *vsi,
- struct tc_cls_flower_offload *f,
+ struct flow_cls_offload *f,
struct i40e_cloud_filter *filter)
{
- struct flow_rule *rule = tc_cls_flower_offload_flow_rule(f);
+ struct flow_rule *rule = flow_cls_offload_flow_rule(f);
struct flow_dissector *dissector = rule->match.dissector;
u16 n_proto_mask = 0, n_proto_key = 0, addr_type = 0;
struct i40e_pf *pf = vsi->back;
@@ -7469,11 +8001,11 @@ static int i40e_handle_tclass(struct i40e_vsi *vsi, u32 tc,
/**
* i40e_configure_clsflower - Configure tc flower filters
* @vsi: Pointer to VSI
- * @cls_flower: Pointer to struct tc_cls_flower_offload
+ * @cls_flower: Pointer to struct flow_cls_offload
*
**/
static int i40e_configure_clsflower(struct i40e_vsi *vsi,
- struct tc_cls_flower_offload *cls_flower)
+ struct flow_cls_offload *cls_flower)
{
int tc = tc_classid_to_hwtc(vsi->netdev, cls_flower->classid);
struct i40e_cloud_filter *filter = NULL;
@@ -7565,11 +8097,11 @@ static struct i40e_cloud_filter *i40e_find_cloud_filter(struct i40e_vsi *vsi,
/**
* i40e_delete_clsflower - Remove tc flower filters
* @vsi: Pointer to VSI
- * @cls_flower: Pointer to struct tc_cls_flower_offload
+ * @cls_flower: Pointer to struct flow_cls_offload
*
**/
static int i40e_delete_clsflower(struct i40e_vsi *vsi,
- struct tc_cls_flower_offload *cls_flower)
+ struct flow_cls_offload *cls_flower)
{
struct i40e_cloud_filter *filter = NULL;
struct i40e_pf *pf = vsi->back;
@@ -7612,16 +8144,16 @@ static int i40e_delete_clsflower(struct i40e_vsi *vsi,
* @type_data: offload data
**/
static int i40e_setup_tc_cls_flower(struct i40e_netdev_priv *np,
- struct tc_cls_flower_offload *cls_flower)
+ struct flow_cls_offload *cls_flower)
{
struct i40e_vsi *vsi = np->vsi;
switch (cls_flower->command) {
- case TC_CLSFLOWER_REPLACE:
+ case FLOW_CLS_REPLACE:
return i40e_configure_clsflower(vsi, cls_flower);
- case TC_CLSFLOWER_DESTROY:
+ case FLOW_CLS_DESTROY:
return i40e_delete_clsflower(vsi, cls_flower);
- case TC_CLSFLOWER_STATS:
+ case FLOW_CLS_STATS:
return -EOPNOTSUPP;
default:
return -EOPNOTSUPP;
@@ -7645,34 +8177,21 @@ static int i40e_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
}
}
-static int i40e_setup_tc_block(struct net_device *dev,
- struct tc_block_offload *f)
-{
- struct i40e_netdev_priv *np = netdev_priv(dev);
-
- if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
- return -EOPNOTSUPP;
-
- switch (f->command) {
- case TC_BLOCK_BIND:
- return tcf_block_cb_register(f->block, i40e_setup_tc_block_cb,
- np, np, f->extack);
- case TC_BLOCK_UNBIND:
- tcf_block_cb_unregister(f->block, i40e_setup_tc_block_cb, np);
- return 0;
- default:
- return -EOPNOTSUPP;
- }
-}
+static LIST_HEAD(i40e_block_cb_list);
static int __i40e_setup_tc(struct net_device *netdev, enum tc_setup_type type,
void *type_data)
{
+ struct i40e_netdev_priv *np = netdev_priv(netdev);
+
switch (type) {
case TC_SETUP_QDISC_MQPRIO:
return i40e_setup_tc(netdev, type_data);
case TC_SETUP_BLOCK:
- return i40e_setup_tc_block(netdev, type_data);
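+ /* flow_block_cb_setup_simple() now handles the binder-type check
+ * and the bind/unbind registration that i40e_setup_tc_block()
+ * used to open-code.
+ */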
+ return flow_block_cb_setup_simple(type_data,
+ &i40e_block_cb_list,
+ i40e_setup_tc_block_cb,
+ np, np, true);
default:
return -EOPNOTSUPP;
}
@@ -8570,7 +9089,7 @@ static void i40e_link_event(struct i40e_pf *pf)
/* Notify the base of the switch tree connected to
* the link. Floating VEBs are not notified.
*/
- if (pf->lan_veb != I40E_NO_VEB && pf->veb[pf->lan_veb])
+ if (pf->lan_veb < I40E_MAX_VEB && pf->veb[pf->lan_veb])
i40e_veb_link_event(pf->veb[pf->lan_veb], new_link);
else
i40e_vsi_link_event(vsi, new_link);
@@ -10031,8 +10550,12 @@ static int i40e_set_num_rings_in_vsi(struct i40e_vsi *vsi)
switch (vsi->type) {
case I40E_VSI_MAIN:
vsi->alloc_queue_pairs = pf->num_lan_qps;
- vsi->num_desc = ALIGN(I40E_DEFAULT_NUM_DESCRIPTORS,
- I40E_REQ_DESCRIPTOR_MULTIPLE);
+ if (!vsi->num_tx_desc)
+ vsi->num_tx_desc = ALIGN(I40E_DEFAULT_NUM_DESCRIPTORS,
+ I40E_REQ_DESCRIPTOR_MULTIPLE);
+ if (!vsi->num_rx_desc)
+ vsi->num_rx_desc = ALIGN(I40E_DEFAULT_NUM_DESCRIPTORS,
+ I40E_REQ_DESCRIPTOR_MULTIPLE);
if (pf->flags & I40E_FLAG_MSIX_ENABLED)
vsi->num_q_vectors = pf->num_lan_msix;
else
@@ -10042,22 +10565,32 @@ static int i40e_set_num_rings_in_vsi(struct i40e_vsi *vsi)
case I40E_VSI_FDIR:
vsi->alloc_queue_pairs = 1;
- vsi->num_desc = ALIGN(I40E_FDIR_RING_COUNT,
- I40E_REQ_DESCRIPTOR_MULTIPLE);
+ vsi->num_tx_desc = ALIGN(I40E_FDIR_RING_COUNT,
+ I40E_REQ_DESCRIPTOR_MULTIPLE);
+ vsi->num_rx_desc = ALIGN(I40E_FDIR_RING_COUNT,
+ I40E_REQ_DESCRIPTOR_MULTIPLE);
vsi->num_q_vectors = pf->num_fdsb_msix;
break;
case I40E_VSI_VMDQ2:
vsi->alloc_queue_pairs = pf->num_vmdq_qps;
- vsi->num_desc = ALIGN(I40E_DEFAULT_NUM_DESCRIPTORS,
- I40E_REQ_DESCRIPTOR_MULTIPLE);
+ if (!vsi->num_tx_desc)
+ vsi->num_tx_desc = ALIGN(I40E_DEFAULT_NUM_DESCRIPTORS,
+ I40E_REQ_DESCRIPTOR_MULTIPLE);
+ if (!vsi->num_rx_desc)
+ vsi->num_rx_desc = ALIGN(I40E_DEFAULT_NUM_DESCRIPTORS,
+ I40E_REQ_DESCRIPTOR_MULTIPLE);
vsi->num_q_vectors = pf->num_vmdq_msix;
break;
case I40E_VSI_SRIOV:
vsi->alloc_queue_pairs = pf->num_vf_qps;
- vsi->num_desc = ALIGN(I40E_DEFAULT_NUM_DESCRIPTORS,
- I40E_REQ_DESCRIPTOR_MULTIPLE);
+ if (!vsi->num_tx_desc)
+ vsi->num_tx_desc = ALIGN(I40E_DEFAULT_NUM_DESCRIPTORS,
+ I40E_REQ_DESCRIPTOR_MULTIPLE);
+ if (!vsi->num_rx_desc)
+ vsi->num_rx_desc = ALIGN(I40E_DEFAULT_NUM_DESCRIPTORS,
+ I40E_REQ_DESCRIPTOR_MULTIPLE);
break;
default:
@@ -10333,7 +10866,7 @@ static int i40e_alloc_rings(struct i40e_vsi *vsi)
ring->vsi = vsi;
ring->netdev = vsi->netdev;
ring->dev = &pf->pdev->dev;
- ring->count = vsi->num_desc;
+ ring->count = vsi->num_tx_desc;
ring->size = 0;
ring->dcb_tc = 0;
if (vsi->back->hw_features & I40E_HW_WB_ON_ITR_CAPABLE)
@@ -10350,7 +10883,7 @@ static int i40e_alloc_rings(struct i40e_vsi *vsi)
ring->vsi = vsi;
ring->netdev = NULL;
ring->dev = &pf->pdev->dev;
- ring->count = vsi->num_desc;
+ ring->count = vsi->num_tx_desc;
ring->size = 0;
ring->dcb_tc = 0;
if (vsi->back->hw_features & I40E_HW_WB_ON_ITR_CAPABLE)
@@ -10366,7 +10899,7 @@ setup_rx:
ring->vsi = vsi;
ring->netdev = vsi->netdev;
ring->dev = &pf->pdev->dev;
- ring->count = vsi->num_desc;
+ ring->count = vsi->num_rx_desc;
ring->size = 0;
ring->dcb_tc = 0;
ring->itr_setting = pf->rx_itr_default;
@@ -11604,6 +12137,9 @@ static int i40e_set_features(struct net_device *netdev,
return -EINVAL;
}
+ if (!(features & NETIF_F_HW_L2FW_DOFFLOAD) && vsi->macvlan_cnt)
+ i40e_del_all_macvlans(vsi);
+
need_reset = i40e_set_ntuple(pf, features);
if (need_reset)
@@ -12348,6 +12884,8 @@ static const struct net_device_ops i40e_netdev_ops = {
.ndo_bpf = i40e_xdp,
.ndo_xdp_xmit = i40e_xdp_xmit,
.ndo_xsk_async_xmit = i40e_xsk_async_xmit,
+ .ndo_dfwd_add_station = i40e_fwd_add,
+ .ndo_dfwd_del_station = i40e_fwd_del,
};
/**
@@ -12407,6 +12945,9 @@ static int i40e_config_netdev(struct i40e_vsi *vsi)
/* record features VLANs can make use of */
netdev->vlan_features |= hw_enc_features | NETIF_F_TSO_MANGLEID;
+ /* enable macvlan offloads */
+ netdev->hw_features |= NETIF_F_HW_L2FW_DOFFLOAD;
+
hw_features = hw_enc_features |
NETIF_F_HW_VLAN_CTAG_TX |
NETIF_F_HW_VLAN_CTAG_RX;
@@ -12519,7 +13060,7 @@ int i40e_is_vsi_uplink_mode_veb(struct i40e_vsi *vsi)
struct i40e_pf *pf = vsi->back;
/* Uplink is not a bridge so default to VEB */
- if (vsi->veb_idx == I40E_NO_VEB)
+ if (vsi->veb_idx >= I40E_MAX_VEB)
return 1;
veb = pf->veb[vsi->veb_idx];
@@ -13577,7 +14118,7 @@ static void i40e_setup_pf_switch_element(struct i40e_pf *pf,
/* Main VEB? */
if (uplink_seid != pf->mac_seid)
break;
- if (pf->lan_veb == I40E_NO_VEB) {
+ if (pf->lan_veb >= I40E_MAX_VEB) {
int v;
/* find existing or else empty VEB */
@@ -13587,13 +14128,15 @@ static void i40e_setup_pf_switch_element(struct i40e_pf *pf,
break;
}
}
- if (pf->lan_veb == I40E_NO_VEB) {
+ if (pf->lan_veb >= I40E_MAX_VEB) {
v = i40e_veb_mem_alloc(pf);
if (v < 0)
break;
pf->lan_veb = v;
}
}
+ if (pf->lan_veb >= I40E_MAX_VEB)
+ break;
pf->veb[pf->lan_veb]->seid = seid;
pf->veb[pf->lan_veb]->uplink_seid = pf->mac_seid;
@@ -13747,7 +14290,7 @@ static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit)
/* Set up the PF VSI associated with the PF's main VSI
* that is already in the HW switch
*/
- if (pf->lan_veb != I40E_NO_VEB && pf->veb[pf->lan_veb])
+ if (pf->lan_veb < I40E_MAX_VEB && pf->veb[pf->lan_veb])
uplink_seid = pf->veb[pf->lan_veb]->seid;
else
uplink_seid = pf->mac_seid;
@@ -14203,7 +14746,17 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
pf->ioremap_len = min_t(int, pci_resource_len(pdev, 0),
I40E_MAX_CSR_SPACE);
-
+ /* I40E_GLGEN_STAT_CLEAR is believed to be the highest register the
+ * driver reads, so make sure the BAR is at least that large before
+ * mapping it, to avoid a kernel panic on an out-of-range access.
+ */
+ if (pf->ioremap_len < I40E_GLGEN_STAT_CLEAR) {
+ dev_err(&pdev->dev, "Cannot map registers, bar size 0x%X too small, aborting\n",
+ pf->ioremap_len);
+ err = -ENOMEM;
+ goto err_ioremap;
+ }
hw->hw_addr = ioremap(pci_resource_start(pdev, 0), pf->ioremap_len);
if (!hw->hw_addr) {
err = -EIO;
@@ -14388,6 +14941,11 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
pci_set_drvdata(pdev, pf);
pci_save_state(pdev);
+ dev_info(&pdev->dev,
+ (pf->flags & I40E_FLAG_DISABLE_FW_LLDP) ?
+ "FW LLDP is disabled\n" :
+ "FW LLDP is enabled\n");
+
/* Enable FW to write default DCB config on link-up */
i40e_aq_set_dcb_parameters(hw, true, NULL);
diff --git a/drivers/net/ethernet/intel/i40e/i40e_prototype.h b/drivers/net/ethernet/intel/i40e/i40e_prototype.h
index 882627073dce..eac88bcc6c06 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_prototype.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_prototype.h
@@ -350,6 +350,10 @@ i40e_virtchnl_link_speed(enum i40e_aq_link_speed link_speed)
return VIRTCHNL_LINK_SPEED_100MB;
case I40E_LINK_SPEED_1GB:
return VIRTCHNL_LINK_SPEED_1GB;
+ case I40E_LINK_SPEED_2_5GB:
+ return VIRTCHNL_LINK_SPEED_2_5GB;
+ case I40E_LINK_SPEED_5GB:
+ return VIRTCHNL_LINK_SPEED_5GB;
case I40E_LINK_SPEED_10GB:
return VIRTCHNL_LINK_SPEED_10GB;
case I40E_LINK_SPEED_40GB:
diff --git a/drivers/net/ethernet/intel/i40e/i40e_ptp.c b/drivers/net/ethernet/intel/i40e/i40e_ptp.c
index 439c35f0c581..11394a52e21c 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_ptp.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_ptp.c
@@ -140,8 +140,7 @@ static int i40e_ptp_adjfreq(struct ptp_clock_info *ptp, s32 ppb)
* @ptp: The PTP clock structure
* @delta: Offset in nanoseconds to adjust the PHC time by
*
- * Adjust the frequency of the PHC by the indicated parts per billion from the
- * base frequency.
+ * Adjust the current clock time by a delta specified in nanoseconds.
**/
static int i40e_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta)
{
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index 20a283702c9f..2a2fe3ec7926 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -774,7 +774,7 @@ void i40e_detect_recover_hung(struct i40e_vsi *vsi)
static bool i40e_clean_tx_irq(struct i40e_vsi *vsi,
struct i40e_ring *tx_ring, int napi_budget)
{
- u16 i = tx_ring->next_to_clean;
+ int i = tx_ring->next_to_clean;
struct i40e_tx_buffer *tx_buf;
struct i40e_tx_desc *tx_head;
struct i40e_tx_desc *tx_desc;
diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
index 479bc60c8f71..02b09a8ad54c 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
@@ -440,7 +440,7 @@ static int i40e_config_iwarp_qvlist(struct i40e_vf *vf,
struct virtchnl_iwarp_qv_info *qv_info;
u32 v_idx, i, reg_idx, reg;
u32 next_q_idx, next_q_type;
- u32 msix_vf, size;
+ u32 msix_vf;
int ret = 0;
msix_vf = pf->hw.func_caps.num_msix_vectors_vf;
@@ -454,11 +454,10 @@ static int i40e_config_iwarp_qvlist(struct i40e_vf *vf,
goto err_out;
}
- size = sizeof(struct virtchnl_iwarp_qvlist_info) +
- (sizeof(struct virtchnl_iwarp_qv_info) *
- (qvlist_info->num_vectors - 1));
kfree(vf->qvlist_info);
- vf->qvlist_info = kzalloc(size, GFP_KERNEL);
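+ /* The struct declares qv_info[1], so struct_size() adds room for
+ * num_vectors - 1 further entries (num_vectors total).
+ */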
+ vf->qvlist_info = kzalloc(struct_size(vf->qvlist_info, qv_info,
+ qvlist_info->num_vectors - 1),
+ GFP_KERNEL);
if (!vf->qvlist_info) {
ret = -ENOMEM;
goto err_out;
@@ -470,14 +469,15 @@ static int i40e_config_iwarp_qvlist(struct i40e_vf *vf,
qv_info = &qvlist_info->qv_info[i];
if (!qv_info)
continue;
- v_idx = qv_info->v_idx;
/* Validate vector id belongs to this vf */
- if (!i40e_vc_isvalid_vector_id(vf, v_idx)) {
+ if (!i40e_vc_isvalid_vector_id(vf, qv_info->v_idx)) {
ret = -EINVAL;
goto err_free;
}
+ v_idx = qv_info->v_idx;
+
vf->qvlist_info->qv_info[i] = *qv_info;
reg_idx = ((msix_vf - 1) * vf->vf_id) + (v_idx - 1);
@@ -1845,7 +1845,7 @@ static int i40e_vc_get_vf_resources_msg(struct i40e_vf *vf, u8 *msg)
i40e_status aq_ret = 0;
struct i40e_vsi *vsi;
int num_vsis = 1;
- int len = 0;
+ size_t len = 0;
int ret;
if (!test_bit(I40E_VF_STATE_INIT, &vf->vf_states)) {
@@ -1853,9 +1853,7 @@ static int i40e_vc_get_vf_resources_msg(struct i40e_vf *vf, u8 *msg)
goto err;
}
- len = (sizeof(struct virtchnl_vf_resource) +
- sizeof(struct virtchnl_vsi_resource) * num_vsis);
-
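+ /* struct_size() accounts for the trailing vsi_res[] array and
+ * saturates on overflow.
+ */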
+ len = struct_size(vfres, vsi_res, num_vsis);
vfres = kzalloc(len, GFP_KERNEL);
if (!vfres) {
aq_ret = I40E_ERR_NO_MEMORY;
@@ -2135,8 +2133,13 @@ static int i40e_vc_config_queues_msg(struct i40e_vf *vf, u8 *msg)
}
}
- if (vf->adq_enabled)
+ if (vf->adq_enabled) {
+ if (idx >= ARRAY_SIZE(vf->ch)) {
+ aq_ret = I40E_ERR_NO_AVAILABLE_VSI;
+ goto error_param;
+ }
vsi_id = vf->ch[idx].vsi_id;
+ }
if (i40e_config_vsi_rx_queue(vf, vsi_id, vsi_queue_id,
&qpi->rxq) ||
@@ -2152,6 +2155,10 @@ static int i40e_vc_config_queues_msg(struct i40e_vf *vf, u8 *msg)
* to its appropriate VSIs based on TC mapping
**/
if (vf->adq_enabled) {
+ if (idx >= ARRAY_SIZE(vf->ch)) {
+ aq_ret = I40E_ERR_NO_AVAILABLE_VSI;
+ goto error_param;
+ }
if (j == (vf->ch[idx].num_qps - 1)) {
idx++;
j = 0; /* resetting the queue count */
@@ -2318,7 +2325,6 @@ static int i40e_vc_enable_queues_msg(struct i40e_vf *vf, u8 *msg)
struct virtchnl_queue_select *vqs =
(struct virtchnl_queue_select *)msg;
struct i40e_pf *pf = vf->pf;
- u16 vsi_id = vqs->vsi_id;
i40e_status aq_ret = 0;
int i;
@@ -2327,7 +2333,7 @@ static int i40e_vc_enable_queues_msg(struct i40e_vf *vf, u8 *msg)
goto error_param;
}
- if (!i40e_vc_isvalid_vsi_id(vf, vsi_id)) {
+ if (!i40e_vc_isvalid_vsi_id(vf, vqs->vsi_id)) {
aq_ret = I40E_ERR_PARAM;
goto error_param;
}
@@ -2427,18 +2433,14 @@ static int i40e_vc_request_queues_msg(struct i40e_vf *vf, u8 *msg)
{
struct virtchnl_vf_res_request *vfres =
(struct virtchnl_vf_res_request *)msg;
- int req_pairs = vfres->num_queue_pairs;
- int cur_pairs = vf->num_queue_pairs;
+ u16 req_pairs = vfres->num_queue_pairs;
+ u8 cur_pairs = vf->num_queue_pairs;
struct i40e_pf *pf = vf->pf;
if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states))
return -EINVAL;
- if (req_pairs <= 0) {
- dev_err(&pf->pdev->dev,
- "VF %d tried to request %d queues. Ignoring.\n",
- vf->vf_id, req_pairs);
- } else if (req_pairs > I40E_MAX_VF_QUEUES) {
+ if (req_pairs > I40E_MAX_VF_QUEUES) {
dev_err(&pf->pdev->dev,
"VF %d tried to request more than %d queues.\n",
vf->vf_id,
@@ -2509,7 +2511,7 @@ error_param:
* MAC filters: 16 for multicast, 1 for MAC, 1 for broadcast
*/
#define I40E_VC_MAX_MAC_ADDR_PER_VF (16 + 1 + 1)
-#define I40E_VC_MAX_VLAN_PER_VF 8
+#define I40E_VC_MAX_VLAN_PER_VF 16
/**
* i40e_check_vf_permission
@@ -2587,12 +2589,11 @@ static int i40e_vc_add_mac_addr_msg(struct i40e_vf *vf, u8 *msg)
(struct virtchnl_ether_addr_list *)msg;
struct i40e_pf *pf = vf->pf;
struct i40e_vsi *vsi = NULL;
- u16 vsi_id = al->vsi_id;
i40e_status ret = 0;
int i;
if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) ||
- !i40e_vc_isvalid_vsi_id(vf, vsi_id)) {
+ !i40e_vc_isvalid_vsi_id(vf, al->vsi_id)) {
ret = I40E_ERR_PARAM;
goto error_param;
}
@@ -2657,12 +2658,11 @@ static int i40e_vc_del_mac_addr_msg(struct i40e_vf *vf, u8 *msg)
(struct virtchnl_ether_addr_list *)msg;
struct i40e_pf *pf = vf->pf;
struct i40e_vsi *vsi = NULL;
- u16 vsi_id = al->vsi_id;
i40e_status ret = 0;
int i;
if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) ||
- !i40e_vc_isvalid_vsi_id(vf, vsi_id)) {
+ !i40e_vc_isvalid_vsi_id(vf, al->vsi_id)) {
ret = I40E_ERR_PARAM;
goto error_param;
}
@@ -2726,7 +2726,6 @@ static int i40e_vc_add_vlan_msg(struct i40e_vf *vf, u8 *msg)
(struct virtchnl_vlan_filter_list *)msg;
struct i40e_pf *pf = vf->pf;
struct i40e_vsi *vsi = NULL;
- u16 vsi_id = vfl->vsi_id;
i40e_status aq_ret = 0;
int i;
@@ -2737,7 +2736,7 @@ static int i40e_vc_add_vlan_msg(struct i40e_vf *vf, u8 *msg)
goto error_param;
}
if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) ||
- !i40e_vc_isvalid_vsi_id(vf, vsi_id)) {
+ !i40e_vc_isvalid_vsi_id(vf, vfl->vsi_id)) {
aq_ret = I40E_ERR_PARAM;
goto error_param;
}
@@ -2798,12 +2797,11 @@ static int i40e_vc_remove_vlan_msg(struct i40e_vf *vf, u8 *msg)
(struct virtchnl_vlan_filter_list *)msg;
struct i40e_pf *pf = vf->pf;
struct i40e_vsi *vsi = NULL;
- u16 vsi_id = vfl->vsi_id;
i40e_status aq_ret = 0;
int i;
if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) ||
- !i40e_vc_isvalid_vsi_id(vf, vsi_id)) {
+ !i40e_vc_isvalid_vsi_id(vf, vfl->vsi_id)) {
aq_ret = I40E_ERR_PARAM;
goto error_param;
}
@@ -2920,11 +2918,10 @@ static int i40e_vc_config_rss_key(struct i40e_vf *vf, u8 *msg)
(struct virtchnl_rss_key *)msg;
struct i40e_pf *pf = vf->pf;
struct i40e_vsi *vsi = NULL;
- u16 vsi_id = vrk->vsi_id;
i40e_status aq_ret = 0;
if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) ||
- !i40e_vc_isvalid_vsi_id(vf, vsi_id) ||
+ !i40e_vc_isvalid_vsi_id(vf, vrk->vsi_id) ||
(vrk->key_len != I40E_HKEY_ARRAY_SIZE)) {
aq_ret = I40E_ERR_PARAM;
goto err;
@@ -2951,16 +2948,22 @@ static int i40e_vc_config_rss_lut(struct i40e_vf *vf, u8 *msg)
(struct virtchnl_rss_lut *)msg;
struct i40e_pf *pf = vf->pf;
struct i40e_vsi *vsi = NULL;
- u16 vsi_id = vrl->vsi_id;
i40e_status aq_ret = 0;
+ u16 i;
if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) ||
- !i40e_vc_isvalid_vsi_id(vf, vsi_id) ||
+ !i40e_vc_isvalid_vsi_id(vf, vrl->vsi_id) ||
(vrl->lut_entries != I40E_VF_HLUT_ARRAY_SIZE)) {
aq_ret = I40E_ERR_PARAM;
goto err;
}
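+ /* Each LUT entry must reference a queue the VF actually owns */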
+ for (i = 0; i < vrl->lut_entries; i++)
+ if (vrl->lut[i] >= vf->num_queue_pairs) {
+ aq_ret = I40E_ERR_PARAM;
+ goto err;
+ }
+
vsi = pf->vsi[vf->lan_vsi_idx];
aq_ret = i40e_config_rss(vsi, NULL, vrl->lut, I40E_VF_HLUT_ARRAY_SIZE);
/* send the response to the VF */
@@ -3041,14 +3044,15 @@ err:
**/
static int i40e_vc_enable_vlan_stripping(struct i40e_vf *vf, u8 *msg)
{
- struct i40e_vsi *vsi = vf->pf->vsi[vf->lan_vsi_idx];
i40e_status aq_ret = 0;
+ struct i40e_vsi *vsi;
if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
aq_ret = I40E_ERR_PARAM;
goto err;
}
+ vsi = vf->pf->vsi[vf->lan_vsi_idx];
i40e_vlan_stripping_enable(vsi);
/* send the response to the VF */
@@ -3066,14 +3070,15 @@ err:
**/
static int i40e_vc_disable_vlan_stripping(struct i40e_vf *vf, u8 *msg)
{
- struct i40e_vsi *vsi = vf->pf->vsi[vf->lan_vsi_idx];
i40e_status aq_ret = 0;
+ struct i40e_vsi *vsi;
if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
aq_ret = I40E_ERR_PARAM;
goto err;
}
+ vsi = vf->pf->vsi[vf->lan_vsi_idx];
i40e_vlan_stripping_disable(vsi);
/* send the response to the VF */
@@ -3531,8 +3536,9 @@ static int i40e_vc_add_qch_msg(struct i40e_vf *vf, u8 *msg)
(struct virtchnl_tc_info *)msg;
struct i40e_pf *pf = vf->pf;
struct i40e_link_status *ls = &pf->hw.phy.link_info;
- int i, adq_request_qps = 0, speed = 0;
+ int i, adq_request_qps = 0;
i40e_status aq_ret = 0;
+ u64 speed = 0;
if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
aq_ret = I40E_ERR_PARAM;
@@ -3558,8 +3564,8 @@ static int i40e_vc_add_qch_msg(struct i40e_vf *vf, u8 *msg)
/* max number of traffic classes for VF currently capped at 4 */
if (!tci->num_tc || tci->num_tc > I40E_MAX_VF_VSI) {
dev_err(&pf->pdev->dev,
- "VF %d trying to set %u TCs, valid range 1-4 TCs per VF\n",
- vf->vf_id, tci->num_tc);
+ "VF %d trying to set %u TCs, valid range 1-%u TCs per VF\n",
+ vf->vf_id, tci->num_tc, I40E_MAX_VF_VSI);
aq_ret = I40E_ERR_PARAM;
goto err;
}
@@ -3569,8 +3575,9 @@ static int i40e_vc_add_qch_msg(struct i40e_vf *vf, u8 *msg)
if (!tci->list[i].count ||
tci->list[i].count > I40E_DEFAULT_QUEUES_PER_VF) {
dev_err(&pf->pdev->dev,
- "VF %d: TC %d trying to set %u queues, valid range 1-4 queues per TC\n",
- vf->vf_id, i, tci->list[i].count);
+ "VF %d: TC %d trying to set %u queues, valid range 1-%u queues per TC\n",
+ vf->vf_id, i, tci->list[i].count,
+ I40E_DEFAULT_QUEUES_PER_VF);
aq_ret = I40E_ERR_PARAM;
goto err;
}
@@ -3730,19 +3737,6 @@ int i40e_vc_process_vf_msg(struct i40e_pf *pf, s16 vf_id, u32 v_opcode,
/* perform basic checks on the msg */
ret = virtchnl_vc_validate_vf_msg(&vf->vf_ver, v_opcode, msg, msglen);
- /* perform additional checks specific to this driver */
- if (v_opcode == VIRTCHNL_OP_CONFIG_RSS_KEY) {
- struct virtchnl_rss_key *vrk = (struct virtchnl_rss_key *)msg;
-
- if (vrk->key_len != I40E_HKEY_ARRAY_SIZE)
- ret = -EINVAL;
- } else if (v_opcode == VIRTCHNL_OP_CONFIG_RSS_LUT) {
- struct virtchnl_rss_lut *vrl = (struct virtchnl_rss_lut *)msg;
-
- if (vrl->lut_entries != I40E_VF_HLUT_ARRAY_SIZE)
- ret = -EINVAL;
- }
-
if (ret) {
i40e_vc_send_resp_to_vf(vf, v_opcode, I40E_ERR_PARAM);
dev_err(&pf->pdev->dev, "Invalid message from VF %d, opcode %d, len %d\n",
@@ -3943,6 +3937,11 @@ int i40e_ndo_set_vf_mac(struct net_device *netdev, int vf_id, u8 *mac)
int bkt;
u8 i;
+ if (test_and_set_bit(__I40E_VIRTCHNL_OP_PENDING, pf->state)) {
+ dev_warn(&pf->pdev->dev, "Unable to configure VFs, other operation is pending.\n");
+ return -EAGAIN;
+ }
+
/* validate the request */
ret = i40e_validate_vf(pf, vf_id);
if (ret)
@@ -3967,11 +3966,6 @@ int i40e_ndo_set_vf_mac(struct net_device *netdev, int vf_id, u8 *mac)
goto error_param;
}
- if (test_and_set_bit(__I40E_VIRTCHNL_OP_PENDING, pf->state)) {
- dev_warn(&pf->pdev->dev, "Unable to configure VFs, other operation is pending.\n");
- return -EAGAIN;
- }
-
if (is_multicast_ether_addr(mac)) {
dev_err(&pf->pdev->dev,
"Invalid Ethernet address %pM for VF %d\n", mac, vf_id);
@@ -4302,10 +4296,8 @@ int i40e_ndo_get_vf_config(struct net_device *netdev,
vf = &pf->vf[vf_id];
/* first vsi is always the LAN vsi */
vsi = pf->vsi[vf->lan_vsi_idx];
- if (!test_bit(I40E_VF_STATE_INIT, &vf->vf_states)) {
- dev_err(&pf->pdev->dev, "VF %d still in reset. Try again.\n",
- vf_id);
- ret = -EAGAIN;
+ if (!vsi) {
+ ret = -ENOENT;
goto error_param;
}
diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.c b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
index 1b17486543ac..32bad014d76c 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_xsk.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
@@ -215,6 +215,7 @@ static int i40e_run_xdp_zc(struct i40e_ring *rx_ring, struct xdp_buff *xdp)
break;
default:
bpf_warn_invalid_xdp_action(act);
+ /* fall through */
case XDP_ABORTED:
trace_xdp_exception(rx_ring->netdev, xdp_prog, act);
/* fallthrough -- handle aborts by dropping packet */
@@ -640,8 +641,8 @@ static bool i40e_xmit_zc(struct i40e_ring *xdp_ring, unsigned int budget)
struct i40e_tx_desc *tx_desc = NULL;
struct i40e_tx_buffer *tx_bi;
bool work_done = true;
+ struct xdp_desc desc;
dma_addr_t dma;
- u32 len;
while (budget-- > 0) {
if (!unlikely(I40E_DESC_UNUSED(xdp_ring))) {
@@ -650,21 +651,23 @@ static bool i40e_xmit_zc(struct i40e_ring *xdp_ring, unsigned int budget)
break;
}
- if (!xsk_umem_consume_tx(xdp_ring->xsk_umem, &dma, &len))
+ if (!xsk_umem_consume_tx(xdp_ring->xsk_umem, &desc))
break;
- dma_sync_single_for_device(xdp_ring->dev, dma, len,
+ dma = xdp_umem_get_dma(xdp_ring->xsk_umem, desc.addr);
+
+ dma_sync_single_for_device(xdp_ring->dev, dma, desc.len,
DMA_BIDIRECTIONAL);
tx_bi = &xdp_ring->tx_bi[xdp_ring->next_to_use];
- tx_bi->bytecount = len;
+ tx_bi->bytecount = desc.len;
tx_desc = I40E_TX_DESC(xdp_ring, xdp_ring->next_to_use);
tx_desc->buffer_addr = cpu_to_le64(dma);
tx_desc->cmd_type_offset_bsz =
build_ctob(I40E_TX_DESC_CMD_ICRC
| I40E_TX_DESC_CMD_EOP,
- 0, len, 0);
+ 0, desc.len, 0);
xdp_ring->next_to_use++;
if (xdp_ring->next_to_use == xdp_ring->count)
diff --git a/drivers/net/ethernet/intel/iavf/Makefile b/drivers/net/ethernet/intel/iavf/Makefile
index 9cbb5743ed12..c997063ed728 100644
--- a/drivers/net/ethernet/intel/iavf/Makefile
+++ b/drivers/net/ethernet/intel/iavf/Makefile
@@ -12,4 +12,4 @@ subdir-ccflags-y += -I$(src)
obj-$(CONFIG_IAVF) += iavf.o
iavf-objs := iavf_main.o iavf_ethtool.o iavf_virtchnl.o \
- iavf_txrx.o iavf_common.o i40e_adminq.o iavf_client.o
+ iavf_txrx.o iavf_common.o iavf_adminq.o iavf_client.o
diff --git a/drivers/net/ethernet/intel/iavf/i40e_adminq_cmd.h b/drivers/net/ethernet/intel/iavf/i40e_adminq_cmd.h
deleted file mode 100644
index e5ae4a1c0cff..000000000000
--- a/drivers/net/ethernet/intel/iavf/i40e_adminq_cmd.h
+++ /dev/null
@@ -1,530 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 2013 - 2018 Intel Corporation. */
-
-#ifndef _I40E_ADMINQ_CMD_H_
-#define _I40E_ADMINQ_CMD_H_
-
-/* This header file defines the i40e Admin Queue commands and is shared between
- * i40e Firmware and Software. Do not change the names in this file to IAVF
- * because this file should be diff-able against the i40e version, even
- * though many parts have been removed in this VF version.
- *
- * This file needs to comply with the Linux Kernel coding style.
- */
-
-#define I40E_FW_API_VERSION_MAJOR 0x0001
-#define I40E_FW_API_VERSION_MINOR_X722 0x0005
-#define I40E_FW_API_VERSION_MINOR_X710 0x0008
-
-#define I40E_FW_MINOR_VERSION(_h) ((_h)->mac.type == I40E_MAC_XL710 ? \
- I40E_FW_API_VERSION_MINOR_X710 : \
- I40E_FW_API_VERSION_MINOR_X722)
-
-/* API version 1.7 implements additional link and PHY-specific APIs */
-#define I40E_MINOR_VER_GET_LINK_INFO_XL710 0x0007
-
-struct i40e_aq_desc {
- __le16 flags;
- __le16 opcode;
- __le16 datalen;
- __le16 retval;
- __le32 cookie_high;
- __le32 cookie_low;
- union {
- struct {
- __le32 param0;
- __le32 param1;
- __le32 param2;
- __le32 param3;
- } internal;
- struct {
- __le32 param0;
- __le32 param1;
- __le32 addr_high;
- __le32 addr_low;
- } external;
- u8 raw[16];
- } params;
-};
-
-/* Flags sub-structure
- * |0 |1 |2 |3 |4 |5 |6 |7 |8 |9 |10 |11 |12 |13 |14 |15 |
- * |DD |CMP|ERR|VFE| * * RESERVED * * |LB |RD |VFC|BUF|SI |EI |FE |
- */
-
-/* command flags and offsets*/
-#define I40E_AQ_FLAG_DD_SHIFT 0
-#define I40E_AQ_FLAG_CMP_SHIFT 1
-#define I40E_AQ_FLAG_ERR_SHIFT 2
-#define I40E_AQ_FLAG_VFE_SHIFT 3
-#define I40E_AQ_FLAG_LB_SHIFT 9
-#define I40E_AQ_FLAG_RD_SHIFT 10
-#define I40E_AQ_FLAG_VFC_SHIFT 11
-#define I40E_AQ_FLAG_BUF_SHIFT 12
-#define I40E_AQ_FLAG_SI_SHIFT 13
-#define I40E_AQ_FLAG_EI_SHIFT 14
-#define I40E_AQ_FLAG_FE_SHIFT 15
-
-#define I40E_AQ_FLAG_DD BIT(I40E_AQ_FLAG_DD_SHIFT) /* 0x1 */
-#define I40E_AQ_FLAG_CMP BIT(I40E_AQ_FLAG_CMP_SHIFT) /* 0x2 */
-#define I40E_AQ_FLAG_ERR BIT(I40E_AQ_FLAG_ERR_SHIFT) /* 0x4 */
-#define I40E_AQ_FLAG_VFE BIT(I40E_AQ_FLAG_VFE_SHIFT) /* 0x8 */
-#define I40E_AQ_FLAG_LB BIT(I40E_AQ_FLAG_LB_SHIFT) /* 0x200 */
-#define I40E_AQ_FLAG_RD BIT(I40E_AQ_FLAG_RD_SHIFT) /* 0x400 */
-#define I40E_AQ_FLAG_VFC BIT(I40E_AQ_FLAG_VFC_SHIFT) /* 0x800 */
-#define I40E_AQ_FLAG_BUF BIT(I40E_AQ_FLAG_BUF_SHIFT) /* 0x1000 */
-#define I40E_AQ_FLAG_SI BIT(I40E_AQ_FLAG_SI_SHIFT) /* 0x2000 */
-#define I40E_AQ_FLAG_EI BIT(I40E_AQ_FLAG_EI_SHIFT) /* 0x4000 */
-#define I40E_AQ_FLAG_FE BIT(I40E_AQ_FLAG_FE_SHIFT) /* 0x8000 */
-
-/* error codes */
-enum i40e_admin_queue_err {
- I40E_AQ_RC_OK = 0, /* success */
- I40E_AQ_RC_EPERM = 1, /* Operation not permitted */
- I40E_AQ_RC_ENOENT = 2, /* No such element */
- I40E_AQ_RC_ESRCH = 3, /* Bad opcode */
- I40E_AQ_RC_EINTR = 4, /* operation interrupted */
- I40E_AQ_RC_EIO = 5, /* I/O error */
- I40E_AQ_RC_ENXIO = 6, /* No such resource */
- I40E_AQ_RC_E2BIG = 7, /* Arg too long */
- I40E_AQ_RC_EAGAIN = 8, /* Try again */
- I40E_AQ_RC_ENOMEM = 9, /* Out of memory */
- I40E_AQ_RC_EACCES = 10, /* Permission denied */
- I40E_AQ_RC_EFAULT = 11, /* Bad address */
- I40E_AQ_RC_EBUSY = 12, /* Device or resource busy */
- I40E_AQ_RC_EEXIST = 13, /* object already exists */
- I40E_AQ_RC_EINVAL = 14, /* Invalid argument */
- I40E_AQ_RC_ENOTTY = 15, /* Not a typewriter */
- I40E_AQ_RC_ENOSPC = 16, /* No space left or alloc failure */
- I40E_AQ_RC_ENOSYS = 17, /* Function not implemented */
- I40E_AQ_RC_ERANGE = 18, /* Parameter out of range */
- I40E_AQ_RC_EFLUSHED = 19, /* Cmd flushed due to prev cmd error */
- I40E_AQ_RC_BAD_ADDR = 20, /* Descriptor contains a bad pointer */
- I40E_AQ_RC_EMODE = 21, /* Op not allowed in current dev mode */
- I40E_AQ_RC_EFBIG = 22, /* File too large */
-};
-
-/* Admin Queue command opcodes */
-enum i40e_admin_queue_opc {
- /* aq commands */
- i40e_aqc_opc_get_version = 0x0001,
- i40e_aqc_opc_driver_version = 0x0002,
- i40e_aqc_opc_queue_shutdown = 0x0003,
- i40e_aqc_opc_set_pf_context = 0x0004,
-
- /* resource ownership */
- i40e_aqc_opc_request_resource = 0x0008,
- i40e_aqc_opc_release_resource = 0x0009,
-
- i40e_aqc_opc_list_func_capabilities = 0x000A,
- i40e_aqc_opc_list_dev_capabilities = 0x000B,
-
- /* Proxy commands */
- i40e_aqc_opc_set_proxy_config = 0x0104,
- i40e_aqc_opc_set_ns_proxy_table_entry = 0x0105,
-
- /* LAA */
- i40e_aqc_opc_mac_address_read = 0x0107,
- i40e_aqc_opc_mac_address_write = 0x0108,
-
- /* PXE */
- i40e_aqc_opc_clear_pxe_mode = 0x0110,
-
- /* WoL commands */
- i40e_aqc_opc_set_wol_filter = 0x0120,
- i40e_aqc_opc_get_wake_reason = 0x0121,
-
- /* internal switch commands */
- i40e_aqc_opc_get_switch_config = 0x0200,
- i40e_aqc_opc_add_statistics = 0x0201,
- i40e_aqc_opc_remove_statistics = 0x0202,
- i40e_aqc_opc_set_port_parameters = 0x0203,
- i40e_aqc_opc_get_switch_resource_alloc = 0x0204,
- i40e_aqc_opc_set_switch_config = 0x0205,
- i40e_aqc_opc_rx_ctl_reg_read = 0x0206,
- i40e_aqc_opc_rx_ctl_reg_write = 0x0207,
-
- i40e_aqc_opc_add_vsi = 0x0210,
- i40e_aqc_opc_update_vsi_parameters = 0x0211,
- i40e_aqc_opc_get_vsi_parameters = 0x0212,
-
- i40e_aqc_opc_add_pv = 0x0220,
- i40e_aqc_opc_update_pv_parameters = 0x0221,
- i40e_aqc_opc_get_pv_parameters = 0x0222,
-
- i40e_aqc_opc_add_veb = 0x0230,
- i40e_aqc_opc_update_veb_parameters = 0x0231,
- i40e_aqc_opc_get_veb_parameters = 0x0232,
-
- i40e_aqc_opc_delete_element = 0x0243,
-
- i40e_aqc_opc_add_macvlan = 0x0250,
- i40e_aqc_opc_remove_macvlan = 0x0251,
- i40e_aqc_opc_add_vlan = 0x0252,
- i40e_aqc_opc_remove_vlan = 0x0253,
- i40e_aqc_opc_set_vsi_promiscuous_modes = 0x0254,
- i40e_aqc_opc_add_tag = 0x0255,
- i40e_aqc_opc_remove_tag = 0x0256,
- i40e_aqc_opc_add_multicast_etag = 0x0257,
- i40e_aqc_opc_remove_multicast_etag = 0x0258,
- i40e_aqc_opc_update_tag = 0x0259,
- i40e_aqc_opc_add_control_packet_filter = 0x025A,
- i40e_aqc_opc_remove_control_packet_filter = 0x025B,
- i40e_aqc_opc_add_cloud_filters = 0x025C,
- i40e_aqc_opc_remove_cloud_filters = 0x025D,
- i40e_aqc_opc_clear_wol_switch_filters = 0x025E,
-
- i40e_aqc_opc_add_mirror_rule = 0x0260,
- i40e_aqc_opc_delete_mirror_rule = 0x0261,
-
- /* Dynamic Device Personalization */
- i40e_aqc_opc_write_personalization_profile = 0x0270,
- i40e_aqc_opc_get_personalization_profile_list = 0x0271,
-
- /* DCB commands */
- i40e_aqc_opc_dcb_ignore_pfc = 0x0301,
- i40e_aqc_opc_dcb_updated = 0x0302,
- i40e_aqc_opc_set_dcb_parameters = 0x0303,
-
- /* TX scheduler */
- i40e_aqc_opc_configure_vsi_bw_limit = 0x0400,
- i40e_aqc_opc_configure_vsi_ets_sla_bw_limit = 0x0406,
- i40e_aqc_opc_configure_vsi_tc_bw = 0x0407,
- i40e_aqc_opc_query_vsi_bw_config = 0x0408,
- i40e_aqc_opc_query_vsi_ets_sla_config = 0x040A,
- i40e_aqc_opc_configure_switching_comp_bw_limit = 0x0410,
-
- i40e_aqc_opc_enable_switching_comp_ets = 0x0413,
- i40e_aqc_opc_modify_switching_comp_ets = 0x0414,
- i40e_aqc_opc_disable_switching_comp_ets = 0x0415,
- i40e_aqc_opc_configure_switching_comp_ets_bw_limit = 0x0416,
- i40e_aqc_opc_configure_switching_comp_bw_config = 0x0417,
- i40e_aqc_opc_query_switching_comp_ets_config = 0x0418,
- i40e_aqc_opc_query_port_ets_config = 0x0419,
- i40e_aqc_opc_query_switching_comp_bw_config = 0x041A,
- i40e_aqc_opc_suspend_port_tx = 0x041B,
- i40e_aqc_opc_resume_port_tx = 0x041C,
- i40e_aqc_opc_configure_partition_bw = 0x041D,
- /* hmc */
- i40e_aqc_opc_query_hmc_resource_profile = 0x0500,
- i40e_aqc_opc_set_hmc_resource_profile = 0x0501,
-
- /* phy commands*/
- i40e_aqc_opc_get_phy_abilities = 0x0600,
- i40e_aqc_opc_set_phy_config = 0x0601,
- i40e_aqc_opc_set_mac_config = 0x0603,
- i40e_aqc_opc_set_link_restart_an = 0x0605,
- i40e_aqc_opc_get_link_status = 0x0607,
- i40e_aqc_opc_set_phy_int_mask = 0x0613,
- i40e_aqc_opc_get_local_advt_reg = 0x0614,
- i40e_aqc_opc_set_local_advt_reg = 0x0615,
- i40e_aqc_opc_get_partner_advt = 0x0616,
- i40e_aqc_opc_set_lb_modes = 0x0618,
- i40e_aqc_opc_get_phy_wol_caps = 0x0621,
- i40e_aqc_opc_set_phy_debug = 0x0622,
- i40e_aqc_opc_upload_ext_phy_fm = 0x0625,
- i40e_aqc_opc_run_phy_activity = 0x0626,
- i40e_aqc_opc_set_phy_register = 0x0628,
- i40e_aqc_opc_get_phy_register = 0x0629,
-
- /* NVM commands */
- i40e_aqc_opc_nvm_read = 0x0701,
- i40e_aqc_opc_nvm_erase = 0x0702,
- i40e_aqc_opc_nvm_update = 0x0703,
- i40e_aqc_opc_nvm_config_read = 0x0704,
- i40e_aqc_opc_nvm_config_write = 0x0705,
- i40e_aqc_opc_oem_post_update = 0x0720,
- i40e_aqc_opc_thermal_sensor = 0x0721,
-
- /* virtualization commands */
- i40e_aqc_opc_send_msg_to_pf = 0x0801,
- i40e_aqc_opc_send_msg_to_vf = 0x0802,
- i40e_aqc_opc_send_msg_to_peer = 0x0803,
-
- /* alternate structure */
- i40e_aqc_opc_alternate_write = 0x0900,
- i40e_aqc_opc_alternate_write_indirect = 0x0901,
- i40e_aqc_opc_alternate_read = 0x0902,
- i40e_aqc_opc_alternate_read_indirect = 0x0903,
- i40e_aqc_opc_alternate_write_done = 0x0904,
- i40e_aqc_opc_alternate_set_mode = 0x0905,
- i40e_aqc_opc_alternate_clear_port = 0x0906,
-
- /* LLDP commands */
- i40e_aqc_opc_lldp_get_mib = 0x0A00,
- i40e_aqc_opc_lldp_update_mib = 0x0A01,
- i40e_aqc_opc_lldp_add_tlv = 0x0A02,
- i40e_aqc_opc_lldp_update_tlv = 0x0A03,
- i40e_aqc_opc_lldp_delete_tlv = 0x0A04,
- i40e_aqc_opc_lldp_stop = 0x0A05,
- i40e_aqc_opc_lldp_start = 0x0A06,
-
- /* Tunnel commands */
- i40e_aqc_opc_add_udp_tunnel = 0x0B00,
- i40e_aqc_opc_del_udp_tunnel = 0x0B01,
- i40e_aqc_opc_set_rss_key = 0x0B02,
- i40e_aqc_opc_set_rss_lut = 0x0B03,
- i40e_aqc_opc_get_rss_key = 0x0B04,
- i40e_aqc_opc_get_rss_lut = 0x0B05,
-
- /* Async Events */
- i40e_aqc_opc_event_lan_overflow = 0x1001,
-
- /* OEM commands */
- i40e_aqc_opc_oem_parameter_change = 0xFE00,
- i40e_aqc_opc_oem_device_status_change = 0xFE01,
- i40e_aqc_opc_oem_ocsd_initialize = 0xFE02,
- i40e_aqc_opc_oem_ocbb_initialize = 0xFE03,
-
- /* debug commands */
- i40e_aqc_opc_debug_read_reg = 0xFF03,
- i40e_aqc_opc_debug_write_reg = 0xFF04,
- i40e_aqc_opc_debug_modify_reg = 0xFF07,
- i40e_aqc_opc_debug_dump_internals = 0xFF08,
-};
-
-/* command structures and indirect data structures */
-
-/* Structure naming conventions:
- * - no suffix for direct command descriptor structures
- * - _data for indirect sent data
- * - _resp for indirect return data (data which is both will use _data)
- * - _completion for direct return data
- * - _element_ for repeated elements (may also be _data or _resp)
- *
- * Command structures are expected to overlay the params.raw member of the basic
- * descriptor, and as such cannot exceed 16 bytes in length.
- */
-
-/* This macro is used to generate a compilation error if a structure
- * is not exactly the correct length. It gives a divide by zero error if the
- * structure is not of the correct size, otherwise it creates an enum that is
- * never used.
- */
-#define I40E_CHECK_STRUCT_LEN(n, X) enum i40e_static_assert_enum_##X \
- { i40e_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }
-
-/* This macro is used extensively to ensure that command structures are 16
- * bytes in length as they have to map to the raw array of that size.
- */
-#define I40E_CHECK_CMD_LENGTH(X) I40E_CHECK_STRUCT_LEN(16, X)
-
-/* Queue Shutdown (direct 0x0003) */
-struct i40e_aqc_queue_shutdown {
- __le32 driver_unloading;
-#define I40E_AQ_DRIVER_UNLOADING 0x1
- u8 reserved[12];
-};
-
-I40E_CHECK_CMD_LENGTH(i40e_aqc_queue_shutdown);
-
-struct i40e_aqc_vsi_properties_data {
- /* first 96 byte are written by SW */
- __le16 valid_sections;
-#define I40E_AQ_VSI_PROP_SWITCH_VALID 0x0001
-#define I40E_AQ_VSI_PROP_SECURITY_VALID 0x0002
-#define I40E_AQ_VSI_PROP_VLAN_VALID 0x0004
-#define I40E_AQ_VSI_PROP_CAS_PV_VALID 0x0008
-#define I40E_AQ_VSI_PROP_INGRESS_UP_VALID 0x0010
-#define I40E_AQ_VSI_PROP_EGRESS_UP_VALID 0x0020
-#define I40E_AQ_VSI_PROP_QUEUE_MAP_VALID 0x0040
-#define I40E_AQ_VSI_PROP_QUEUE_OPT_VALID 0x0080
-#define I40E_AQ_VSI_PROP_OUTER_UP_VALID 0x0100
-#define I40E_AQ_VSI_PROP_SCHED_VALID 0x0200
- /* switch section */
- __le16 switch_id; /* 12bit id combined with flags below */
-#define I40E_AQ_VSI_SW_ID_SHIFT 0x0000
-#define I40E_AQ_VSI_SW_ID_MASK (0xFFF << I40E_AQ_VSI_SW_ID_SHIFT)
-#define I40E_AQ_VSI_SW_ID_FLAG_NOT_STAG 0x1000
-#define I40E_AQ_VSI_SW_ID_FLAG_ALLOW_LB 0x2000
-#define I40E_AQ_VSI_SW_ID_FLAG_LOCAL_LB 0x4000
- u8 sw_reserved[2];
- /* security section */
- u8 sec_flags;
-#define I40E_AQ_VSI_SEC_FLAG_ALLOW_DEST_OVRD 0x01
-#define I40E_AQ_VSI_SEC_FLAG_ENABLE_VLAN_CHK 0x02
-#define I40E_AQ_VSI_SEC_FLAG_ENABLE_MAC_CHK 0x04
- u8 sec_reserved;
- /* VLAN section */
- __le16 pvid; /* VLANS include priority bits */
- __le16 fcoe_pvid;
- u8 port_vlan_flags;
-#define I40E_AQ_VSI_PVLAN_MODE_SHIFT 0x00
-#define I40E_AQ_VSI_PVLAN_MODE_MASK (0x03 << \
- I40E_AQ_VSI_PVLAN_MODE_SHIFT)
-#define I40E_AQ_VSI_PVLAN_MODE_TAGGED 0x01
-#define I40E_AQ_VSI_PVLAN_MODE_UNTAGGED 0x02
-#define I40E_AQ_VSI_PVLAN_MODE_ALL 0x03
-#define I40E_AQ_VSI_PVLAN_INSERT_PVID 0x04
-#define I40E_AQ_VSI_PVLAN_EMOD_SHIFT 0x03
-#define I40E_AQ_VSI_PVLAN_EMOD_MASK (0x3 << \
- I40E_AQ_VSI_PVLAN_EMOD_SHIFT)
-#define I40E_AQ_VSI_PVLAN_EMOD_STR_BOTH 0x0
-#define I40E_AQ_VSI_PVLAN_EMOD_STR_UP 0x08
-#define I40E_AQ_VSI_PVLAN_EMOD_STR 0x10
-#define I40E_AQ_VSI_PVLAN_EMOD_NOTHING 0x18
- u8 pvlan_reserved[3];
- /* ingress egress up sections */
- __le32 ingress_table; /* bitmap, 3 bits per up */
-#define I40E_AQ_VSI_UP_TABLE_UP0_SHIFT 0
-#define I40E_AQ_VSI_UP_TABLE_UP0_MASK (0x7 << \
- I40E_AQ_VSI_UP_TABLE_UP0_SHIFT)
-#define I40E_AQ_VSI_UP_TABLE_UP1_SHIFT 3
-#define I40E_AQ_VSI_UP_TABLE_UP1_MASK (0x7 << \
- I40E_AQ_VSI_UP_TABLE_UP1_SHIFT)
-#define I40E_AQ_VSI_UP_TABLE_UP2_SHIFT 6
-#define I40E_AQ_VSI_UP_TABLE_UP2_MASK (0x7 << \
- I40E_AQ_VSI_UP_TABLE_UP2_SHIFT)
-#define I40E_AQ_VSI_UP_TABLE_UP3_SHIFT 9
-#define I40E_AQ_VSI_UP_TABLE_UP3_MASK (0x7 << \
- I40E_AQ_VSI_UP_TABLE_UP3_SHIFT)
-#define I40E_AQ_VSI_UP_TABLE_UP4_SHIFT 12
-#define I40E_AQ_VSI_UP_TABLE_UP4_MASK (0x7 << \
- I40E_AQ_VSI_UP_TABLE_UP4_SHIFT)
-#define I40E_AQ_VSI_UP_TABLE_UP5_SHIFT 15
-#define I40E_AQ_VSI_UP_TABLE_UP5_MASK (0x7 << \
- I40E_AQ_VSI_UP_TABLE_UP5_SHIFT)
-#define I40E_AQ_VSI_UP_TABLE_UP6_SHIFT 18
-#define I40E_AQ_VSI_UP_TABLE_UP6_MASK (0x7 << \
- I40E_AQ_VSI_UP_TABLE_UP6_SHIFT)
-#define I40E_AQ_VSI_UP_TABLE_UP7_SHIFT 21
-#define I40E_AQ_VSI_UP_TABLE_UP7_MASK (0x7 << \
- I40E_AQ_VSI_UP_TABLE_UP7_SHIFT)
- __le32 egress_table; /* same defines as for ingress table */
- /* cascaded PV section */
- __le16 cas_pv_tag;
- u8 cas_pv_flags;
-#define I40E_AQ_VSI_CAS_PV_TAGX_SHIFT 0x00
-#define I40E_AQ_VSI_CAS_PV_TAGX_MASK (0x03 << \
- I40E_AQ_VSI_CAS_PV_TAGX_SHIFT)
-#define I40E_AQ_VSI_CAS_PV_TAGX_LEAVE 0x00
-#define I40E_AQ_VSI_CAS_PV_TAGX_REMOVE 0x01
-#define I40E_AQ_VSI_CAS_PV_TAGX_COPY 0x02
-#define I40E_AQ_VSI_CAS_PV_INSERT_TAG 0x10
-#define I40E_AQ_VSI_CAS_PV_ETAG_PRUNE 0x20
-#define I40E_AQ_VSI_CAS_PV_ACCEPT_HOST_TAG 0x40
- u8 cas_pv_reserved;
- /* queue mapping section */
- __le16 mapping_flags;
-#define I40E_AQ_VSI_QUE_MAP_CONTIG 0x0
-#define I40E_AQ_VSI_QUE_MAP_NONCONTIG 0x1
- __le16 queue_mapping[16];
-#define I40E_AQ_VSI_QUEUE_SHIFT 0x0
-#define I40E_AQ_VSI_QUEUE_MASK (0x7FF << I40E_AQ_VSI_QUEUE_SHIFT)
- __le16 tc_mapping[8];
-#define I40E_AQ_VSI_TC_QUE_OFFSET_SHIFT 0
-#define I40E_AQ_VSI_TC_QUE_OFFSET_MASK (0x1FF << \
- I40E_AQ_VSI_TC_QUE_OFFSET_SHIFT)
-#define I40E_AQ_VSI_TC_QUE_NUMBER_SHIFT 9
-#define I40E_AQ_VSI_TC_QUE_NUMBER_MASK (0x7 << \
- I40E_AQ_VSI_TC_QUE_NUMBER_SHIFT)
- /* queueing option section */
- u8 queueing_opt_flags;
-#define I40E_AQ_VSI_QUE_OPT_MULTICAST_UDP_ENA 0x04
-#define I40E_AQ_VSI_QUE_OPT_UNICAST_UDP_ENA 0x08
-#define I40E_AQ_VSI_QUE_OPT_TCP_ENA 0x10
-#define I40E_AQ_VSI_QUE_OPT_FCOE_ENA 0x20
-#define I40E_AQ_VSI_QUE_OPT_RSS_LUT_PF 0x00
-#define I40E_AQ_VSI_QUE_OPT_RSS_LUT_VSI 0x40
- u8 queueing_opt_reserved[3];
- /* scheduler section */
- u8 up_enable_bits;
- u8 sched_reserved;
- /* outer up section */
- __le32 outer_up_table; /* same structure and defines as ingress tbl */
- u8 cmd_reserved[8];
- /* last 32 bytes are written by FW */
- __le16 qs_handle[8];
-#define I40E_AQ_VSI_QS_HANDLE_INVALID 0xFFFF
- __le16 stat_counter_idx;
- __le16 sched_id;
- u8 resp_reserved[12];
-};
-
-I40E_CHECK_STRUCT_LEN(128, i40e_aqc_vsi_properties_data);
-
-/* Get VEB Parameters (direct 0x0232)
- * uses i40e_aqc_switch_seid for the descriptor
- */
-struct i40e_aqc_get_veb_parameters_completion {
- __le16 seid;
- __le16 switch_id;
- __le16 veb_flags; /* only the first/last flags from 0x0230 is valid */
- __le16 statistic_index;
- __le16 vebs_used;
- __le16 vebs_free;
- u8 reserved[4];
-};
-
-I40E_CHECK_CMD_LENGTH(i40e_aqc_get_veb_parameters_completion);
-
-#define I40E_LINK_SPEED_100MB_SHIFT 0x1
-#define I40E_LINK_SPEED_1000MB_SHIFT 0x2
-#define I40E_LINK_SPEED_10GB_SHIFT 0x3
-#define I40E_LINK_SPEED_40GB_SHIFT 0x4
-#define I40E_LINK_SPEED_20GB_SHIFT 0x5
-#define I40E_LINK_SPEED_25GB_SHIFT 0x6
-
-enum i40e_aq_link_speed {
- I40E_LINK_SPEED_UNKNOWN = 0,
- I40E_LINK_SPEED_100MB = BIT(I40E_LINK_SPEED_100MB_SHIFT),
- I40E_LINK_SPEED_1GB = BIT(I40E_LINK_SPEED_1000MB_SHIFT),
- I40E_LINK_SPEED_10GB = BIT(I40E_LINK_SPEED_10GB_SHIFT),
- I40E_LINK_SPEED_40GB = BIT(I40E_LINK_SPEED_40GB_SHIFT),
- I40E_LINK_SPEED_20GB = BIT(I40E_LINK_SPEED_20GB_SHIFT),
- I40E_LINK_SPEED_25GB = BIT(I40E_LINK_SPEED_25GB_SHIFT),
-};
-
-/* Send to PF command (indirect 0x0801) id is only used by PF
- * Send to VF command (indirect 0x0802) id is only used by PF
- * Send to Peer PF command (indirect 0x0803)
- */
-struct i40e_aqc_pf_vf_message {
- __le32 id;
- u8 reserved[4];
- __le32 addr_high;
- __le32 addr_low;
-};
-
-I40E_CHECK_CMD_LENGTH(i40e_aqc_pf_vf_message);
-
-struct i40e_aqc_get_set_rss_key {
-#define I40E_AQC_SET_RSS_KEY_VSI_VALID BIT(15)
-#define I40E_AQC_SET_RSS_KEY_VSI_ID_SHIFT 0
-#define I40E_AQC_SET_RSS_KEY_VSI_ID_MASK (0x3FF << \
- I40E_AQC_SET_RSS_KEY_VSI_ID_SHIFT)
- __le16 vsi_id;
- u8 reserved[6];
- __le32 addr_high;
- __le32 addr_low;
-};
-
-I40E_CHECK_CMD_LENGTH(i40e_aqc_get_set_rss_key);
-
-struct i40e_aqc_get_set_rss_key_data {
- u8 standard_rss_key[0x28];
- u8 extended_hash_key[0xc];
-};
-
-I40E_CHECK_STRUCT_LEN(0x34, i40e_aqc_get_set_rss_key_data);
-
-struct i40e_aqc_get_set_rss_lut {
-#define I40E_AQC_SET_RSS_LUT_VSI_VALID BIT(15)
-#define I40E_AQC_SET_RSS_LUT_VSI_ID_SHIFT 0
-#define I40E_AQC_SET_RSS_LUT_VSI_ID_MASK (0x3FF << \
- I40E_AQC_SET_RSS_LUT_VSI_ID_SHIFT)
- __le16 vsi_id;
-#define I40E_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT 0
-#define I40E_AQC_SET_RSS_LUT_TABLE_TYPE_MASK \
- BIT(I40E_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT)
-
-#define I40E_AQC_SET_RSS_LUT_TABLE_TYPE_VSI 0
-#define I40E_AQC_SET_RSS_LUT_TABLE_TYPE_PF 1
- __le16 flags;
- u8 reserved[4];
- __le32 addr_high;
- __le32 addr_low;
-};
-
-I40E_CHECK_CMD_LENGTH(i40e_aqc_get_set_rss_lut);
-#endif /* _I40E_ADMINQ_CMD_H_ */
diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h
index 272d76b733aa..9fc635d816d2 100644
--- a/drivers/net/ethernet/intel/iavf/iavf.h
+++ b/drivers/net/ethernet/intel/iavf/iavf.h
@@ -109,7 +109,7 @@ struct iavf_q_vector {
/* Helper macros to switch between ints/sec and what the register uses.
* And yes, it's the same math going both ways. The lowest value
- * supported by all of the i40e hardware is 8.
+ * supported by all of the iavf hardware is 8.
*/
#define EITR_INTS_PER_SEC_TO_REG(_eitr) \
((_eitr) ? (1000000000 / ((_eitr) * 256)) : 8)
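
The conversion above is worth seeing with numbers: a minimal standalone sketch (plain C, compiled outside the kernel; the macro is copied from the hunk above so the snippet is self-contained) shows the round trip the comment describes and the fallback of 8 when the requested rate is zero.

/* Illustrative only; not part of the patch. */
#include <stdio.h>

#define EITR_INTS_PER_SEC_TO_REG(_eitr) \
	((_eitr) ? (1000000000 / ((_eitr) * 256)) : 8)

int main(void)
{
	unsigned int ints_per_sec = 20000;
	unsigned int reg = EITR_INTS_PER_SEC_TO_REG(ints_per_sec);

	/* 1000000000 / (20000 * 256) = 195 */
	printf("%u ints/sec -> reg %u\n", ints_per_sec, reg);

	/* applying the same formula to the register value lands back near
	 * 20000 (20032), which is what "same math going both ways" means
	 */
	printf("reg %u -> ~%u ints/sec\n", reg, EITR_INTS_PER_SEC_TO_REG(reg));
	return 0;
}
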
@@ -171,6 +171,7 @@ enum iavf_state_t {
__IAVF_INIT_GET_RESOURCES, /* aq msg sent, awaiting reply */
__IAVF_INIT_SW, /* got resources, setting up structs */
__IAVF_RESETTING, /* in reset */
+ __IAVF_COMM_FAILED, /* communication with PF failed */
/* Below here, watchdog is running */
__IAVF_DOWN, /* ready, can be opened */
__IAVF_DOWN_PENDING, /* descending, waiting for watchdog */
@@ -216,7 +217,6 @@ struct iavf_cloud_filter {
/* board specific private data structure */
struct iavf_adapter {
- struct timer_list watchdog_timer;
struct work_struct reset_task;
struct work_struct adminq_task;
struct delayed_work client_task;
@@ -244,7 +244,7 @@ struct iavf_adapter {
int num_iwarp_msix;
int iwarp_base_vector;
u32 client_pending;
- struct i40e_client_instance *cinst;
+ struct iavf_client_instance *cinst;
struct msix_entry *msix_entries;
u32 flags;
@@ -303,7 +303,7 @@ struct iavf_adapter {
enum iavf_state_t state;
unsigned long crit_section;
- struct work_struct watchdog_task;
+ struct delayed_work watchdog_task;
bool netdev_registered;
bool link_up;
enum virtchnl_link_speed link_speed;
@@ -351,7 +351,7 @@ struct iavf_adapter {
/* Ethtool Private Flags */
/* lan device, used by client interface */
-struct i40e_device {
+struct iavf_device {
struct list_head list;
struct iavf_adapter *vf;
};
@@ -359,6 +359,7 @@ struct i40e_device {
/* needed by iavf_ethtool.c */
extern char iavf_driver_name[];
extern const char iavf_driver_version[];
+extern struct workqueue_struct *iavf_wq;
int iavf_up(struct iavf_adapter *adapter);
void iavf_down(struct iavf_adapter *adapter);
@@ -402,7 +403,7 @@ void iavf_enable_vlan_stripping(struct iavf_adapter *adapter);
void iavf_disable_vlan_stripping(struct iavf_adapter *adapter);
void iavf_virtchnl_completion(struct iavf_adapter *adapter,
enum virtchnl_ops v_opcode,
- iavf_status v_retval, u8 *msg, u16 msglen);
+ enum iavf_status v_retval, u8 *msg, u16 msglen);
int iavf_config_rss(struct iavf_adapter *adapter);
int iavf_lan_add_device(struct iavf_adapter *adapter);
int iavf_lan_del_device(struct iavf_adapter *adapter);
diff --git a/drivers/net/ethernet/intel/iavf/i40e_adminq.c b/drivers/net/ethernet/intel/iavf/iavf_adminq.c
index fca1ecfd9f71..9fa3fa99b4c2 100644
--- a/drivers/net/ethernet/intel/iavf/i40e_adminq.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_adminq.c
@@ -4,16 +4,16 @@
#include "iavf_status.h"
#include "iavf_type.h"
#include "iavf_register.h"
-#include "i40e_adminq.h"
+#include "iavf_adminq.h"
#include "iavf_prototype.h"
/**
- * i40e_adminq_init_regs - Initialize AdminQ registers
+ * iavf_adminq_init_regs - Initialize AdminQ registers
* @hw: pointer to the hardware structure
*
* This assumes the alloc_asq and alloc_arq functions have already been called
**/
-static void i40e_adminq_init_regs(struct iavf_hw *hw)
+static void iavf_adminq_init_regs(struct iavf_hw *hw)
{
/* set head and tail registers in our local struct */
hw->aq.asq.tail = IAVF_VF_ATQT1;
@@ -29,24 +29,24 @@ static void i40e_adminq_init_regs(struct iavf_hw *hw)
}
/**
- * i40e_alloc_adminq_asq_ring - Allocate Admin Queue send rings
+ * iavf_alloc_adminq_asq_ring - Allocate Admin Queue send rings
* @hw: pointer to the hardware structure
**/
-static iavf_status i40e_alloc_adminq_asq_ring(struct iavf_hw *hw)
+static enum iavf_status iavf_alloc_adminq_asq_ring(struct iavf_hw *hw)
{
- iavf_status ret_code;
+ enum iavf_status ret_code;
ret_code = iavf_allocate_dma_mem(hw, &hw->aq.asq.desc_buf,
- i40e_mem_atq_ring,
+ iavf_mem_atq_ring,
(hw->aq.num_asq_entries *
- sizeof(struct i40e_aq_desc)),
+ sizeof(struct iavf_aq_desc)),
IAVF_ADMINQ_DESC_ALIGNMENT);
if (ret_code)
return ret_code;
ret_code = iavf_allocate_virt_mem(hw, &hw->aq.asq.cmd_buf,
(hw->aq.num_asq_entries *
- sizeof(struct i40e_asq_cmd_details)));
+ sizeof(struct iavf_asq_cmd_details)));
if (ret_code) {
iavf_free_dma_mem(hw, &hw->aq.asq.desc_buf);
return ret_code;
@@ -56,55 +56,55 @@ static iavf_status i40e_alloc_adminq_asq_ring(struct iavf_hw *hw)
}
/**
- * i40e_alloc_adminq_arq_ring - Allocate Admin Queue receive rings
+ * iavf_alloc_adminq_arq_ring - Allocate Admin Queue receive rings
* @hw: pointer to the hardware structure
**/
-static iavf_status i40e_alloc_adminq_arq_ring(struct iavf_hw *hw)
+static enum iavf_status iavf_alloc_adminq_arq_ring(struct iavf_hw *hw)
{
- iavf_status ret_code;
+ enum iavf_status ret_code;
ret_code = iavf_allocate_dma_mem(hw, &hw->aq.arq.desc_buf,
- i40e_mem_arq_ring,
+ iavf_mem_arq_ring,
(hw->aq.num_arq_entries *
- sizeof(struct i40e_aq_desc)),
+ sizeof(struct iavf_aq_desc)),
IAVF_ADMINQ_DESC_ALIGNMENT);
return ret_code;
}
/**
- * i40e_free_adminq_asq - Free Admin Queue send rings
+ * iavf_free_adminq_asq - Free Admin Queue send rings
* @hw: pointer to the hardware structure
*
* This assumes the posted send buffers have already been cleaned
* and de-allocated
**/
-static void i40e_free_adminq_asq(struct iavf_hw *hw)
+static void iavf_free_adminq_asq(struct iavf_hw *hw)
{
iavf_free_dma_mem(hw, &hw->aq.asq.desc_buf);
}
/**
- * i40e_free_adminq_arq - Free Admin Queue receive rings
+ * iavf_free_adminq_arq - Free Admin Queue receive rings
* @hw: pointer to the hardware structure
*
* This assumes the posted receive buffers have already been cleaned
* and de-allocated
**/
-static void i40e_free_adminq_arq(struct iavf_hw *hw)
+static void iavf_free_adminq_arq(struct iavf_hw *hw)
{
iavf_free_dma_mem(hw, &hw->aq.arq.desc_buf);
}
/**
- * i40e_alloc_arq_bufs - Allocate pre-posted buffers for the receive queue
+ * iavf_alloc_arq_bufs - Allocate pre-posted buffers for the receive queue
* @hw: pointer to the hardware structure
**/
-static iavf_status i40e_alloc_arq_bufs(struct iavf_hw *hw)
+static enum iavf_status iavf_alloc_arq_bufs(struct iavf_hw *hw)
{
- struct i40e_aq_desc *desc;
+ struct iavf_aq_desc *desc;
struct iavf_dma_mem *bi;
- iavf_status ret_code;
+ enum iavf_status ret_code;
int i;
/* We'll be allocating the buffer info memory first, then we can
@@ -123,7 +123,7 @@ static iavf_status i40e_alloc_arq_bufs(struct iavf_hw *hw)
for (i = 0; i < hw->aq.num_arq_entries; i++) {
bi = &hw->aq.arq.r.arq_bi[i];
ret_code = iavf_allocate_dma_mem(hw, bi,
- i40e_mem_arq_buf,
+ iavf_mem_arq_buf,
hw->aq.arq_buf_size,
IAVF_ADMINQ_DESC_ALIGNMENT);
if (ret_code)
@@ -132,9 +132,9 @@ static iavf_status i40e_alloc_arq_bufs(struct iavf_hw *hw)
/* now configure the descriptors for use */
desc = IAVF_ADMINQ_DESC(hw->aq.arq, i);
- desc->flags = cpu_to_le16(I40E_AQ_FLAG_BUF);
- if (hw->aq.arq_buf_size > I40E_AQ_LARGE_BUF)
- desc->flags |= cpu_to_le16(I40E_AQ_FLAG_LB);
+ desc->flags = cpu_to_le16(IAVF_AQ_FLAG_BUF);
+ if (hw->aq.arq_buf_size > IAVF_AQ_LARGE_BUF)
+ desc->flags |= cpu_to_le16(IAVF_AQ_FLAG_LB);
desc->opcode = 0;
/* This is in accordance with Admin queue design, there is no
* register for buffer size configuration
@@ -165,13 +165,13 @@ unwind_alloc_arq_bufs:
}
/**
- * i40e_alloc_asq_bufs - Allocate empty buffer structs for the send queue
+ * iavf_alloc_asq_bufs - Allocate empty buffer structs for the send queue
* @hw: pointer to the hardware structure
**/
-static iavf_status i40e_alloc_asq_bufs(struct iavf_hw *hw)
+static enum iavf_status iavf_alloc_asq_bufs(struct iavf_hw *hw)
{
struct iavf_dma_mem *bi;
- iavf_status ret_code;
+ enum iavf_status ret_code;
int i;
/* No mapped memory needed yet, just the buffer info structures */
@@ -186,7 +186,7 @@ static iavf_status i40e_alloc_asq_bufs(struct iavf_hw *hw)
for (i = 0; i < hw->aq.num_asq_entries; i++) {
bi = &hw->aq.asq.r.asq_bi[i];
ret_code = iavf_allocate_dma_mem(hw, bi,
- i40e_mem_asq_buf,
+ iavf_mem_asq_buf,
hw->aq.asq_buf_size,
IAVF_ADMINQ_DESC_ALIGNMENT);
if (ret_code)
@@ -206,10 +206,10 @@ unwind_alloc_asq_bufs:
}
/**
- * i40e_free_arq_bufs - Free receive queue buffer info elements
+ * iavf_free_arq_bufs - Free receive queue buffer info elements
* @hw: pointer to the hardware structure
**/
-static void i40e_free_arq_bufs(struct iavf_hw *hw)
+static void iavf_free_arq_bufs(struct iavf_hw *hw)
{
int i;
@@ -225,10 +225,10 @@ static void i40e_free_arq_bufs(struct iavf_hw *hw)
}
/**
- * i40e_free_asq_bufs - Free send queue buffer info elements
+ * iavf_free_asq_bufs - Free send queue buffer info elements
* @hw: pointer to the hardware structure
**/
-static void i40e_free_asq_bufs(struct iavf_hw *hw)
+static void iavf_free_asq_bufs(struct iavf_hw *hw)
{
int i;
@@ -248,14 +248,14 @@ static void i40e_free_asq_bufs(struct iavf_hw *hw)
}
/**
- * i40e_config_asq_regs - configure ASQ registers
+ * iavf_config_asq_regs - configure ASQ registers
* @hw: pointer to the hardware structure
*
* Configure base address and length registers for the transmit queue
**/
-static iavf_status i40e_config_asq_regs(struct iavf_hw *hw)
+static enum iavf_status iavf_config_asq_regs(struct iavf_hw *hw)
{
- iavf_status ret_code = 0;
+ enum iavf_status ret_code = 0;
u32 reg = 0;
/* Clear Head and Tail */
@@ -271,20 +271,20 @@ static iavf_status i40e_config_asq_regs(struct iavf_hw *hw)
/* Check one register to verify that config was applied */
reg = rd32(hw, hw->aq.asq.bal);
if (reg != lower_32_bits(hw->aq.asq.desc_buf.pa))
- ret_code = I40E_ERR_ADMIN_QUEUE_ERROR;
+ ret_code = IAVF_ERR_ADMIN_QUEUE_ERROR;
return ret_code;
}
/**
- * i40e_config_arq_regs - ARQ register configuration
+ * iavf_config_arq_regs - ARQ register configuration
* @hw: pointer to the hardware structure
*
* Configure base address and length registers for the receive (event queue)
**/
-static iavf_status i40e_config_arq_regs(struct iavf_hw *hw)
+static enum iavf_status iavf_config_arq_regs(struct iavf_hw *hw)
{
- iavf_status ret_code = 0;
+ enum iavf_status ret_code = 0;
u32 reg = 0;
/* Clear Head and Tail */
@@ -303,13 +303,13 @@ static iavf_status i40e_config_arq_regs(struct iavf_hw *hw)
/* Check one register to verify that config was applied */
reg = rd32(hw, hw->aq.arq.bal);
if (reg != lower_32_bits(hw->aq.arq.desc_buf.pa))
- ret_code = I40E_ERR_ADMIN_QUEUE_ERROR;
+ ret_code = IAVF_ERR_ADMIN_QUEUE_ERROR;
return ret_code;
}
/**
- * i40e_init_asq - main initialization routine for ASQ
+ * iavf_init_asq - main initialization routine for ASQ
* @hw: pointer to the hardware structure
*
* This is the main initialization routine for the Admin Send Queue
@@ -321,20 +321,20 @@ static iavf_status i40e_config_arq_regs(struct iavf_hw *hw)
* Do *NOT* hold the lock when calling this as the memory allocation routines
* called are not going to be atomic context safe
**/
-static iavf_status i40e_init_asq(struct iavf_hw *hw)
+static enum iavf_status iavf_init_asq(struct iavf_hw *hw)
{
- iavf_status ret_code = 0;
+ enum iavf_status ret_code = 0;
if (hw->aq.asq.count > 0) {
/* queue already initialized */
- ret_code = I40E_ERR_NOT_READY;
+ ret_code = IAVF_ERR_NOT_READY;
goto init_adminq_exit;
}
/* verify input for valid configuration */
if ((hw->aq.num_asq_entries == 0) ||
(hw->aq.asq_buf_size == 0)) {
- ret_code = I40E_ERR_CONFIG;
+ ret_code = IAVF_ERR_CONFIG;
goto init_adminq_exit;
}
@@ -342,17 +342,17 @@ static iavf_status i40e_init_asq(struct iavf_hw *hw)
hw->aq.asq.next_to_clean = 0;
/* allocate the ring memory */
- ret_code = i40e_alloc_adminq_asq_ring(hw);
+ ret_code = iavf_alloc_adminq_asq_ring(hw);
if (ret_code)
goto init_adminq_exit;
/* allocate buffers in the rings */
- ret_code = i40e_alloc_asq_bufs(hw);
+ ret_code = iavf_alloc_asq_bufs(hw);
if (ret_code)
goto init_adminq_free_rings;
/* initialize base registers */
- ret_code = i40e_config_asq_regs(hw);
+ ret_code = iavf_config_asq_regs(hw);
if (ret_code)
goto init_adminq_free_rings;
@@ -361,14 +361,14 @@ static iavf_status i40e_init_asq(struct iavf_hw *hw)
goto init_adminq_exit;
init_adminq_free_rings:
- i40e_free_adminq_asq(hw);
+ iavf_free_adminq_asq(hw);
init_adminq_exit:
return ret_code;
}
/**
- * i40e_init_arq - initialize ARQ
+ * iavf_init_arq - initialize ARQ
* @hw: pointer to the hardware structure
*
* The main initialization routine for the Admin Receive (Event) Queue.
@@ -380,20 +380,20 @@ init_adminq_exit:
* Do *NOT* hold the lock when calling this as the memory allocation routines
* called are not going to be atomic context safe
**/
-static iavf_status i40e_init_arq(struct iavf_hw *hw)
+static enum iavf_status iavf_init_arq(struct iavf_hw *hw)
{
- iavf_status ret_code = 0;
+ enum iavf_status ret_code = 0;
if (hw->aq.arq.count > 0) {
/* queue already initialized */
- ret_code = I40E_ERR_NOT_READY;
+ ret_code = IAVF_ERR_NOT_READY;
goto init_adminq_exit;
}
/* verify input for valid configuration */
if ((hw->aq.num_arq_entries == 0) ||
(hw->aq.arq_buf_size == 0)) {
- ret_code = I40E_ERR_CONFIG;
+ ret_code = IAVF_ERR_CONFIG;
goto init_adminq_exit;
}
@@ -401,17 +401,17 @@ static iavf_status i40e_init_arq(struct iavf_hw *hw)
hw->aq.arq.next_to_clean = 0;
/* allocate the ring memory */
- ret_code = i40e_alloc_adminq_arq_ring(hw);
+ ret_code = iavf_alloc_adminq_arq_ring(hw);
if (ret_code)
goto init_adminq_exit;
/* allocate buffers in the rings */
- ret_code = i40e_alloc_arq_bufs(hw);
+ ret_code = iavf_alloc_arq_bufs(hw);
if (ret_code)
goto init_adminq_free_rings;
/* initialize base registers */
- ret_code = i40e_config_arq_regs(hw);
+ ret_code = iavf_config_arq_regs(hw);
if (ret_code)
goto init_adminq_free_rings;
@@ -420,26 +420,26 @@ static iavf_status i40e_init_arq(struct iavf_hw *hw)
goto init_adminq_exit;
init_adminq_free_rings:
- i40e_free_adminq_arq(hw);
+ iavf_free_adminq_arq(hw);
init_adminq_exit:
return ret_code;
}
/**
- * i40e_shutdown_asq - shutdown the ASQ
+ * iavf_shutdown_asq - shutdown the ASQ
* @hw: pointer to the hardware structure
*
* The main shutdown routine for the Admin Send Queue
**/
-static iavf_status i40e_shutdown_asq(struct iavf_hw *hw)
+static enum iavf_status iavf_shutdown_asq(struct iavf_hw *hw)
{
- iavf_status ret_code = 0;
+ enum iavf_status ret_code = 0;
mutex_lock(&hw->aq.asq_mutex);
if (hw->aq.asq.count == 0) {
- ret_code = I40E_ERR_NOT_READY;
+ ret_code = IAVF_ERR_NOT_READY;
goto shutdown_asq_out;
}
@@ -453,7 +453,7 @@ static iavf_status i40e_shutdown_asq(struct iavf_hw *hw)
hw->aq.asq.count = 0; /* to indicate uninitialized queue */
/* free ring buffers */
- i40e_free_asq_bufs(hw);
+ iavf_free_asq_bufs(hw);
shutdown_asq_out:
mutex_unlock(&hw->aq.asq_mutex);
@@ -461,19 +461,19 @@ shutdown_asq_out:
}
/**
- * i40e_shutdown_arq - shutdown ARQ
+ * iavf_shutdown_arq - shutdown ARQ
* @hw: pointer to the hardware structure
*
* The main shutdown routine for the Admin Receive Queue
**/
-static iavf_status i40e_shutdown_arq(struct iavf_hw *hw)
+static enum iavf_status iavf_shutdown_arq(struct iavf_hw *hw)
{
- iavf_status ret_code = 0;
+ enum iavf_status ret_code = 0;
mutex_lock(&hw->aq.arq_mutex);
if (hw->aq.arq.count == 0) {
- ret_code = I40E_ERR_NOT_READY;
+ ret_code = IAVF_ERR_NOT_READY;
goto shutdown_arq_out;
}
@@ -487,7 +487,7 @@ static iavf_status i40e_shutdown_arq(struct iavf_hw *hw)
hw->aq.arq.count = 0; /* to indicate uninitialized queue */
/* free ring buffers */
- i40e_free_arq_bufs(hw);
+ iavf_free_arq_bufs(hw);
shutdown_arq_out:
mutex_unlock(&hw->aq.arq_mutex);
@@ -505,32 +505,32 @@ shutdown_arq_out:
* - hw->aq.arq_buf_size
* - hw->aq.asq_buf_size
**/
-iavf_status iavf_init_adminq(struct iavf_hw *hw)
+enum iavf_status iavf_init_adminq(struct iavf_hw *hw)
{
- iavf_status ret_code;
+ enum iavf_status ret_code;
/* verify input for valid configuration */
if ((hw->aq.num_arq_entries == 0) ||
(hw->aq.num_asq_entries == 0) ||
(hw->aq.arq_buf_size == 0) ||
(hw->aq.asq_buf_size == 0)) {
- ret_code = I40E_ERR_CONFIG;
+ ret_code = IAVF_ERR_CONFIG;
goto init_adminq_exit;
}
/* Set up register offsets */
- i40e_adminq_init_regs(hw);
+ iavf_adminq_init_regs(hw);
/* setup ASQ command write back timeout */
- hw->aq.asq_cmd_timeout = I40E_ASQ_CMD_TIMEOUT;
+ hw->aq.asq_cmd_timeout = IAVF_ASQ_CMD_TIMEOUT;
/* allocate the ASQ */
- ret_code = i40e_init_asq(hw);
+ ret_code = iavf_init_asq(hw);
if (ret_code)
goto init_adminq_destroy_locks;
/* allocate the ARQ */
- ret_code = i40e_init_arq(hw);
+ ret_code = iavf_init_arq(hw);
if (ret_code)
goto init_adminq_free_asq;
@@ -538,7 +538,7 @@ iavf_status iavf_init_adminq(struct iavf_hw *hw)
goto init_adminq_exit;
init_adminq_free_asq:
- i40e_shutdown_asq(hw);
+ iavf_shutdown_asq(hw);
init_adminq_destroy_locks:
init_adminq_exit:
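
As the comment on iavf_init_adminq() above spells out, the caller must set the queue depths and buffer sizes before calling it, otherwise the function bails out with IAVF_ERR_CONFIG. A minimal sketch of that call order follows; the function name and the numeric depths/sizes are illustrative placeholders, not values taken from this patch.

/* Sketch only: shows the contract documented above. */
static int example_bring_up_adminq(struct iavf_hw *hw)
{
	enum iavf_status status;

	hw->aq.num_asq_entries = 32;	/* send queue depth (placeholder) */
	hw->aq.num_arq_entries = 32;	/* receive/event queue depth */
	hw->aq.asq_buf_size = 512;	/* per-command buffer size, bytes */
	hw->aq.arq_buf_size = 512;

	status = iavf_init_adminq(hw);	/* allocates rings, programs the regs */
	if (status)
		return -EIO;	/* iavf_aq_rc_to_posix() gives a finer mapping */

	return 0;
}
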
@@ -549,53 +549,53 @@ init_adminq_exit:
* iavf_shutdown_adminq - shutdown routine for the Admin Queue
* @hw: pointer to the hardware structure
**/
-iavf_status iavf_shutdown_adminq(struct iavf_hw *hw)
+enum iavf_status iavf_shutdown_adminq(struct iavf_hw *hw)
{
- iavf_status ret_code = 0;
+ enum iavf_status ret_code = 0;
if (iavf_check_asq_alive(hw))
iavf_aq_queue_shutdown(hw, true);
- i40e_shutdown_asq(hw);
- i40e_shutdown_arq(hw);
+ iavf_shutdown_asq(hw);
+ iavf_shutdown_arq(hw);
return ret_code;
}
/**
- * i40e_clean_asq - cleans Admin send queue
+ * iavf_clean_asq - cleans Admin send queue
* @hw: pointer to the hardware structure
*
* returns the number of free desc
**/
-static u16 i40e_clean_asq(struct iavf_hw *hw)
+static u16 iavf_clean_asq(struct iavf_hw *hw)
{
struct iavf_adminq_ring *asq = &hw->aq.asq;
- struct i40e_asq_cmd_details *details;
+ struct iavf_asq_cmd_details *details;
u16 ntc = asq->next_to_clean;
- struct i40e_aq_desc desc_cb;
- struct i40e_aq_desc *desc;
+ struct iavf_aq_desc desc_cb;
+ struct iavf_aq_desc *desc;
desc = IAVF_ADMINQ_DESC(*asq, ntc);
- details = I40E_ADMINQ_DETAILS(*asq, ntc);
+ details = IAVF_ADMINQ_DETAILS(*asq, ntc);
while (rd32(hw, hw->aq.asq.head) != ntc) {
iavf_debug(hw, IAVF_DEBUG_AQ_MESSAGE,
"ntc %d head %d.\n", ntc, rd32(hw, hw->aq.asq.head));
if (details->callback) {
- I40E_ADMINQ_CALLBACK cb_func =
- (I40E_ADMINQ_CALLBACK)details->callback;
+ IAVF_ADMINQ_CALLBACK cb_func =
+ (IAVF_ADMINQ_CALLBACK)details->callback;
desc_cb = *desc;
cb_func(hw, &desc_cb);
}
- memset((void *)desc, 0, sizeof(struct i40e_aq_desc));
+ memset((void *)desc, 0, sizeof(struct iavf_aq_desc));
memset((void *)details, 0,
- sizeof(struct i40e_asq_cmd_details));
+ sizeof(struct iavf_asq_cmd_details));
ntc++;
if (ntc == asq->count)
ntc = 0;
desc = IAVF_ADMINQ_DESC(*asq, ntc);
- details = I40E_ADMINQ_DETAILS(*asq, ntc);
+ details = IAVF_ADMINQ_DETAILS(*asq, ntc);
}
asq->next_to_clean = ntc;
@@ -629,16 +629,17 @@ bool iavf_asq_done(struct iavf_hw *hw)
* This is the main send command driver routine for the Admin Queue send
* queue. It runs the queue, cleans the queue, etc
**/
-iavf_status iavf_asq_send_command(struct iavf_hw *hw, struct i40e_aq_desc *desc,
- void *buff, /* can be NULL */
- u16 buff_size,
- struct i40e_asq_cmd_details *cmd_details)
+enum iavf_status iavf_asq_send_command(struct iavf_hw *hw,
+ struct iavf_aq_desc *desc,
+ void *buff, /* can be NULL */
+ u16 buff_size,
+ struct iavf_asq_cmd_details *cmd_details)
{
struct iavf_dma_mem *dma_buff = NULL;
- struct i40e_asq_cmd_details *details;
- struct i40e_aq_desc *desc_on_ring;
+ struct iavf_asq_cmd_details *details;
+ struct iavf_aq_desc *desc_on_ring;
bool cmd_completed = false;
- iavf_status status = 0;
+ enum iavf_status status = 0;
u16 retval = 0;
u32 val = 0;
@@ -647,21 +648,21 @@ iavf_status iavf_asq_send_command(struct iavf_hw *hw, struct i40e_aq_desc *desc,
if (hw->aq.asq.count == 0) {
iavf_debug(hw, IAVF_DEBUG_AQ_MESSAGE,
"AQTX: Admin queue not initialized.\n");
- status = I40E_ERR_QUEUE_EMPTY;
+ status = IAVF_ERR_QUEUE_EMPTY;
goto asq_send_command_error;
}
- hw->aq.asq_last_status = I40E_AQ_RC_OK;
+ hw->aq.asq_last_status = IAVF_AQ_RC_OK;
val = rd32(hw, hw->aq.asq.head);
if (val >= hw->aq.num_asq_entries) {
iavf_debug(hw, IAVF_DEBUG_AQ_MESSAGE,
"AQTX: head overrun at %d\n", val);
- status = I40E_ERR_QUEUE_EMPTY;
+ status = IAVF_ERR_QUEUE_EMPTY;
goto asq_send_command_error;
}
- details = I40E_ADMINQ_DETAILS(hw->aq.asq, hw->aq.asq.next_to_use);
+ details = IAVF_ADMINQ_DETAILS(hw->aq.asq, hw->aq.asq.next_to_use);
if (cmd_details) {
*details = *cmd_details;
@@ -676,7 +677,7 @@ iavf_status iavf_asq_send_command(struct iavf_hw *hw, struct i40e_aq_desc *desc,
cpu_to_le32(lower_32_bits(details->cookie));
}
} else {
- memset(details, 0, sizeof(struct i40e_asq_cmd_details));
+ memset(details, 0, sizeof(struct iavf_asq_cmd_details));
}
/* clear requested flags and then set additional flags if defined */
@@ -688,7 +689,7 @@ iavf_status iavf_asq_send_command(struct iavf_hw *hw, struct i40e_aq_desc *desc,
IAVF_DEBUG_AQ_MESSAGE,
"AQTX: Invalid buffer size: %d.\n",
buff_size);
- status = I40E_ERR_INVALID_SIZE;
+ status = IAVF_ERR_INVALID_SIZE;
goto asq_send_command_error;
}
@@ -696,7 +697,7 @@ iavf_status iavf_asq_send_command(struct iavf_hw *hw, struct i40e_aq_desc *desc,
iavf_debug(hw,
IAVF_DEBUG_AQ_MESSAGE,
"AQTX: Async flag not set along with postpone flag");
- status = I40E_ERR_PARAM;
+ status = IAVF_ERR_PARAM;
goto asq_send_command_error;
}
@@ -707,11 +708,11 @@ iavf_status iavf_asq_send_command(struct iavf_hw *hw, struct i40e_aq_desc *desc,
/* the clean function called here could be called in a separate thread
* in case of asynchronous completions
*/
- if (i40e_clean_asq(hw) == 0) {
+ if (iavf_clean_asq(hw) == 0) {
iavf_debug(hw,
IAVF_DEBUG_AQ_MESSAGE,
"AQTX: Error queue is full.\n");
- status = I40E_ERR_ADMIN_QUEUE_FULL;
+ status = IAVF_ERR_ADMIN_QUEUE_FULL;
goto asq_send_command_error;
}
@@ -780,13 +781,13 @@ iavf_status iavf_asq_send_command(struct iavf_hw *hw, struct i40e_aq_desc *desc,
retval &= 0xff;
}
cmd_completed = true;
- if ((enum i40e_admin_queue_err)retval == I40E_AQ_RC_OK)
+ if ((enum iavf_admin_queue_err)retval == IAVF_AQ_RC_OK)
status = 0;
- else if ((enum i40e_admin_queue_err)retval == I40E_AQ_RC_EBUSY)
- status = I40E_ERR_NOT_READY;
+ else if ((enum iavf_admin_queue_err)retval == IAVF_AQ_RC_EBUSY)
+ status = IAVF_ERR_NOT_READY;
else
- status = I40E_ERR_ADMIN_QUEUE_ERROR;
- hw->aq.asq_last_status = (enum i40e_admin_queue_err)retval;
+ status = IAVF_ERR_ADMIN_QUEUE_ERROR;
+ hw->aq.asq_last_status = (enum iavf_admin_queue_err)retval;
}
iavf_debug(hw, IAVF_DEBUG_AQ_MESSAGE,
@@ -803,11 +804,11 @@ iavf_status iavf_asq_send_command(struct iavf_hw *hw, struct i40e_aq_desc *desc,
if (rd32(hw, hw->aq.asq.len) & IAVF_VF_ATQLEN1_ATQCRIT_MASK) {
iavf_debug(hw, IAVF_DEBUG_AQ_MESSAGE,
"AQTX: AQ Critical error.\n");
- status = I40E_ERR_ADMIN_QUEUE_CRITICAL_ERROR;
+ status = IAVF_ERR_ADMIN_QUEUE_CRITICAL_ERROR;
} else {
iavf_debug(hw, IAVF_DEBUG_AQ_MESSAGE,
"AQTX: Writeback timeout.\n");
- status = I40E_ERR_ADMIN_QUEUE_TIMEOUT;
+ status = IAVF_ERR_ADMIN_QUEUE_TIMEOUT;
}
}
@@ -823,12 +824,12 @@ asq_send_command_error:
*
* Fill the desc with default values
**/
-void iavf_fill_default_direct_cmd_desc(struct i40e_aq_desc *desc, u16 opcode)
+void iavf_fill_default_direct_cmd_desc(struct iavf_aq_desc *desc, u16 opcode)
{
/* zero out the desc */
- memset((void *)desc, 0, sizeof(struct i40e_aq_desc));
+ memset((void *)desc, 0, sizeof(struct iavf_aq_desc));
desc->opcode = cpu_to_le16(opcode);
- desc->flags = cpu_to_le16(I40E_AQ_FLAG_SI);
+ desc->flags = cpu_to_le16(IAVF_AQ_FLAG_SI);
}
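
A direct command carries everything in the 16-byte descriptor itself, so a caller only needs the defaults plus an opcode. Below is a hedged sketch of what a wrapper such as iavf_aq_queue_shutdown() (used earlier in this file but not shown in this hunk) roughly amounts to; the function name here is illustrative.

/* Illustrative sketch of sending a direct (no buffer) admin command. */
static enum iavf_status example_send_queue_shutdown(struct iavf_hw *hw,
						    bool unloading)
{
	struct iavf_aq_desc desc;
	struct iavf_aqc_queue_shutdown *cmd =
		(struct iavf_aqc_queue_shutdown *)&desc.params.raw;

	iavf_fill_default_direct_cmd_desc(&desc, iavf_aqc_opc_queue_shutdown);
	if (unloading)
		cmd->driver_unloading = cpu_to_le32(IAVF_AQ_DRIVER_UNLOADING);

	/* direct command: no indirect buffer attached */
	return iavf_asq_send_command(hw, &desc, NULL, 0, NULL);
}
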
/**
@@ -841,13 +842,13 @@ void iavf_fill_default_direct_cmd_desc(struct i40e_aq_desc *desc, u16 opcode)
* the contents through e. It can also return how many events are
* left to process through 'pending'
**/
-iavf_status iavf_clean_arq_element(struct iavf_hw *hw,
- struct i40e_arq_event_info *e,
- u16 *pending)
+enum iavf_status iavf_clean_arq_element(struct iavf_hw *hw,
+ struct iavf_arq_event_info *e,
+ u16 *pending)
{
u16 ntc = hw->aq.arq.next_to_clean;
- struct i40e_aq_desc *desc;
- iavf_status ret_code = 0;
+ struct iavf_aq_desc *desc;
+ enum iavf_status ret_code = 0;
struct iavf_dma_mem *bi;
u16 desc_idx;
u16 datalen;
@@ -863,7 +864,7 @@ iavf_status iavf_clean_arq_element(struct iavf_hw *hw,
if (hw->aq.arq.count == 0) {
iavf_debug(hw, IAVF_DEBUG_AQ_MESSAGE,
"AQRX: Admin queue not initialized.\n");
- ret_code = I40E_ERR_QUEUE_EMPTY;
+ ret_code = IAVF_ERR_QUEUE_EMPTY;
goto clean_arq_element_err;
}
@@ -871,7 +872,7 @@ iavf_status iavf_clean_arq_element(struct iavf_hw *hw,
ntu = rd32(hw, hw->aq.arq.head) & IAVF_VF_ARQH1_ARQH_MASK;
if (ntu == ntc) {
/* nothing to do - shouldn't need to update ring's values */
- ret_code = I40E_ERR_ADMIN_QUEUE_NO_WORK;
+ ret_code = IAVF_ERR_ADMIN_QUEUE_NO_WORK;
goto clean_arq_element_out;
}
@@ -880,10 +881,10 @@ iavf_status iavf_clean_arq_element(struct iavf_hw *hw,
desc_idx = ntc;
hw->aq.arq_last_status =
- (enum i40e_admin_queue_err)le16_to_cpu(desc->retval);
+ (enum iavf_admin_queue_err)le16_to_cpu(desc->retval);
flags = le16_to_cpu(desc->flags);
- if (flags & I40E_AQ_FLAG_ERR) {
- ret_code = I40E_ERR_ADMIN_QUEUE_ERROR;
+ if (flags & IAVF_AQ_FLAG_ERR) {
+ ret_code = IAVF_ERR_ADMIN_QUEUE_ERROR;
iavf_debug(hw,
IAVF_DEBUG_AQ_MESSAGE,
"AQRX: Event received with error 0x%X.\n",
@@ -906,11 +907,11 @@ iavf_status iavf_clean_arq_element(struct iavf_hw *hw,
* size
*/
bi = &hw->aq.arq.r.arq_bi[ntc];
- memset((void *)desc, 0, sizeof(struct i40e_aq_desc));
+ memset((void *)desc, 0, sizeof(struct iavf_aq_desc));
- desc->flags = cpu_to_le16(I40E_AQ_FLAG_BUF);
- if (hw->aq.arq_buf_size > I40E_AQ_LARGE_BUF)
- desc->flags |= cpu_to_le16(I40E_AQ_FLAG_LB);
+ desc->flags = cpu_to_le16(IAVF_AQ_FLAG_BUF);
+ if (hw->aq.arq_buf_size > IAVF_AQ_LARGE_BUF)
+ desc->flags |= cpu_to_le16(IAVF_AQ_FLAG_LB);
desc->datalen = cpu_to_le16((u16)bi->size);
desc->params.external.addr_high = cpu_to_le32(upper_32_bits(bi->pa));
desc->params.external.addr_low = cpu_to_le32(lower_32_bits(bi->pa));
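
The hunk above re-arms each receive descriptor once its event has been consumed; the consuming side is a simple drain loop around iavf_clean_arq_element(). A sketch of such a loop is shown below; the function name and the 4096-byte buffer size are placeholders, and the real loop lives in the driver's adminq task, which is not part of this hunk.

/* Sketch only: drain the admin receive queue. */
static void example_drain_arq(struct iavf_hw *hw)
{
	struct iavf_arq_event_info event;
	enum iavf_status ret;
	u16 pending;

	event.buf_len = 4096;	/* placeholder buffer size */
	event.msg_buf = kzalloc(event.buf_len, GFP_KERNEL);
	if (!event.msg_buf)
		return;

	do {
		ret = iavf_clean_arq_element(hw, &event, &pending);
		if (ret == IAVF_ERR_ADMIN_QUEUE_NO_WORK)
			break;	/* ring is empty */
		if (ret)
			break;	/* hard error */

		/* event.desc and event.msg_len now describe one message;
		 * a real caller would dispatch on the opcode here.
		 */
	} while (pending);

	kfree(event.msg_buf);
}
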
diff --git a/drivers/net/ethernet/intel/iavf/i40e_adminq.h b/drivers/net/ethernet/intel/iavf/iavf_adminq.h
index ee983889eab0..baf2fe26f302 100644
--- a/drivers/net/ethernet/intel/iavf/i40e_adminq.h
+++ b/drivers/net/ethernet/intel/iavf/iavf_adminq.h
@@ -6,10 +6,10 @@
#include "iavf_osdep.h"
#include "iavf_status.h"
-#include "i40e_adminq_cmd.h"
+#include "iavf_adminq_cmd.h"
#define IAVF_ADMINQ_DESC(R, i) \
- (&(((struct i40e_aq_desc *)((R).desc_buf.va))[i]))
+ (&(((struct iavf_aq_desc *)((R).desc_buf.va))[i]))
#define IAVF_ADMINQ_DESC_ALIGNMENT 4096
@@ -39,22 +39,22 @@ struct iavf_adminq_ring {
};
/* ASQ transaction details */
-struct i40e_asq_cmd_details {
- void *callback; /* cast from type I40E_ADMINQ_CALLBACK */
+struct iavf_asq_cmd_details {
+ void *callback; /* cast from type IAVF_ADMINQ_CALLBACK */
u64 cookie;
u16 flags_ena;
u16 flags_dis;
bool async;
bool postpone;
- struct i40e_aq_desc *wb_desc;
+ struct iavf_aq_desc *wb_desc;
};
-#define I40E_ADMINQ_DETAILS(R, i) \
- (&(((struct i40e_asq_cmd_details *)((R).cmd_buf.va))[i]))
+#define IAVF_ADMINQ_DETAILS(R, i) \
+ (&(((struct iavf_asq_cmd_details *)((R).cmd_buf.va))[i]))
/* ARQ event information */
-struct i40e_arq_event_info {
- struct i40e_aq_desc desc;
+struct iavf_arq_event_info {
+ struct iavf_aq_desc desc;
u16 msg_len;
u16 buf_len;
u8 *msg_buf;
@@ -79,45 +79,45 @@ struct iavf_adminq_info {
struct mutex arq_mutex; /* Receive queue lock */
/* last status values on send and receive queues */
- enum i40e_admin_queue_err asq_last_status;
- enum i40e_admin_queue_err arq_last_status;
+ enum iavf_admin_queue_err asq_last_status;
+ enum iavf_admin_queue_err arq_last_status;
};
/**
- * i40e_aq_rc_to_posix - convert errors to user-land codes
+ * iavf_aq_rc_to_posix - convert errors to user-land codes
* aq_ret: AdminQ handler error code can override aq_rc
* aq_rc: AdminQ firmware error code to convert
**/
-static inline int i40e_aq_rc_to_posix(int aq_ret, int aq_rc)
+static inline int iavf_aq_rc_to_posix(int aq_ret, int aq_rc)
{
int aq_to_posix[] = {
- 0, /* I40E_AQ_RC_OK */
- -EPERM, /* I40E_AQ_RC_EPERM */
- -ENOENT, /* I40E_AQ_RC_ENOENT */
- -ESRCH, /* I40E_AQ_RC_ESRCH */
- -EINTR, /* I40E_AQ_RC_EINTR */
- -EIO, /* I40E_AQ_RC_EIO */
- -ENXIO, /* I40E_AQ_RC_ENXIO */
- -E2BIG, /* I40E_AQ_RC_E2BIG */
- -EAGAIN, /* I40E_AQ_RC_EAGAIN */
- -ENOMEM, /* I40E_AQ_RC_ENOMEM */
- -EACCES, /* I40E_AQ_RC_EACCES */
- -EFAULT, /* I40E_AQ_RC_EFAULT */
- -EBUSY, /* I40E_AQ_RC_EBUSY */
- -EEXIST, /* I40E_AQ_RC_EEXIST */
- -EINVAL, /* I40E_AQ_RC_EINVAL */
- -ENOTTY, /* I40E_AQ_RC_ENOTTY */
- -ENOSPC, /* I40E_AQ_RC_ENOSPC */
- -ENOSYS, /* I40E_AQ_RC_ENOSYS */
- -ERANGE, /* I40E_AQ_RC_ERANGE */
- -EPIPE, /* I40E_AQ_RC_EFLUSHED */
- -ESPIPE, /* I40E_AQ_RC_BAD_ADDR */
- -EROFS, /* I40E_AQ_RC_EMODE */
- -EFBIG, /* I40E_AQ_RC_EFBIG */
+ 0, /* IAVF_AQ_RC_OK */
+ -EPERM, /* IAVF_AQ_RC_EPERM */
+ -ENOENT, /* IAVF_AQ_RC_ENOENT */
+ -ESRCH, /* IAVF_AQ_RC_ESRCH */
+ -EINTR, /* IAVF_AQ_RC_EINTR */
+ -EIO, /* IAVF_AQ_RC_EIO */
+ -ENXIO, /* IAVF_AQ_RC_ENXIO */
+ -E2BIG, /* IAVF_AQ_RC_E2BIG */
+ -EAGAIN, /* IAVF_AQ_RC_EAGAIN */
+ -ENOMEM, /* IAVF_AQ_RC_ENOMEM */
+ -EACCES, /* IAVF_AQ_RC_EACCES */
+ -EFAULT, /* IAVF_AQ_RC_EFAULT */
+ -EBUSY, /* IAVF_AQ_RC_EBUSY */
+ -EEXIST, /* IAVF_AQ_RC_EEXIST */
+ -EINVAL, /* IAVF_AQ_RC_EINVAL */
+ -ENOTTY, /* IAVF_AQ_RC_ENOTTY */
+ -ENOSPC, /* IAVF_AQ_RC_ENOSPC */
+ -ENOSYS, /* IAVF_AQ_RC_ENOSYS */
+ -ERANGE, /* IAVF_AQ_RC_ERANGE */
+ -EPIPE, /* IAVF_AQ_RC_EFLUSHED */
+ -ESPIPE, /* IAVF_AQ_RC_BAD_ADDR */
+ -EROFS, /* IAVF_AQ_RC_EMODE */
+ -EFBIG, /* IAVF_AQ_RC_EFBIG */
};
/* aq_rc is invalid if AQ timed out */
- if (aq_ret == I40E_ERR_ADMIN_QUEUE_TIMEOUT)
+ if (aq_ret == IAVF_ERR_ADMIN_QUEUE_TIMEOUT)
return -EAGAIN;
if (!((u32)aq_rc < (sizeof(aq_to_posix) / sizeof((aq_to_posix)[0]))))
@@ -127,9 +127,9 @@ static inline int i40e_aq_rc_to_posix(int aq_ret, int aq_rc)
}
/* general information */
-#define I40E_AQ_LARGE_BUF 512
-#define I40E_ASQ_CMD_TIMEOUT 250000 /* usecs */
+#define IAVF_AQ_LARGE_BUF 512
+#define IAVF_ASQ_CMD_TIMEOUT 250000 /* usecs */
-void iavf_fill_default_direct_cmd_desc(struct i40e_aq_desc *desc, u16 opcode);
+void iavf_fill_default_direct_cmd_desc(struct iavf_aq_desc *desc, u16 opcode);
#endif /* _IAVF_ADMINQ_H_ */
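
To make the mapping above concrete: the firmware return code indexes the table, and a driver-side AdminQ timeout overrides whatever (now meaningless) code the firmware left behind. A small illustrative wrapper follows, assuming the in-range case returns the corresponding table entry, as the comments above imply.

/* Illustrative only. */
static int example_aq_to_errno(struct iavf_hw *hw, enum iavf_status aq_ret)
{
	/*
	 * iavf_aq_rc_to_posix(IAVF_ERR_ADMIN_QUEUE_TIMEOUT, anything) == -EAGAIN
	 * iavf_aq_rc_to_posix(0, IAVF_AQ_RC_ENOMEM)                   == -ENOMEM
	 * iavf_aq_rc_to_posix(0, IAVF_AQ_RC_OK)                       == 0
	 */
	return iavf_aq_rc_to_posix(aq_ret, hw->aq.asq_last_status);
}
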
diff --git a/drivers/net/ethernet/intel/iavf/iavf_adminq_cmd.h b/drivers/net/ethernet/intel/iavf/iavf_adminq_cmd.h
new file mode 100644
index 000000000000..bc512308557b
--- /dev/null
+++ b/drivers/net/ethernet/intel/iavf/iavf_adminq_cmd.h
@@ -0,0 +1,528 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2013 - 2018 Intel Corporation. */
+
+#ifndef _IAVF_ADMINQ_CMD_H_
+#define _IAVF_ADMINQ_CMD_H_
+
+/* This header file defines the iavf Admin Queue commands and is shared between
+ * iavf Firmware and Software.
+ *
+ * This file needs to comply with the Linux Kernel coding style.
+ */
+
+#define IAVF_FW_API_VERSION_MAJOR 0x0001
+#define IAVF_FW_API_VERSION_MINOR_X722 0x0005
+#define IAVF_FW_API_VERSION_MINOR_X710 0x0008
+
+#define IAVF_FW_MINOR_VERSION(_h) ((_h)->mac.type == IAVF_MAC_XL710 ? \
+ IAVF_FW_API_VERSION_MINOR_X710 : \
+ IAVF_FW_API_VERSION_MINOR_X722)
+
+/* API version 1.7 implements additional link and PHY-specific APIs */
+#define IAVF_MINOR_VER_GET_LINK_INFO_XL710 0x0007
+
+struct iavf_aq_desc {
+ __le16 flags;
+ __le16 opcode;
+ __le16 datalen;
+ __le16 retval;
+ __le32 cookie_high;
+ __le32 cookie_low;
+ union {
+ struct {
+ __le32 param0;
+ __le32 param1;
+ __le32 param2;
+ __le32 param3;
+ } internal;
+ struct {
+ __le32 param0;
+ __le32 param1;
+ __le32 addr_high;
+ __le32 addr_low;
+ } external;
+ u8 raw[16];
+ } params;
+};
+
+/* Flags sub-structure
+ * |0 |1 |2 |3 |4 |5 |6 |7 |8 |9 |10 |11 |12 |13 |14 |15 |
+ * |DD |CMP|ERR|VFE| * * RESERVED * * |LB |RD |VFC|BUF|SI |EI |FE |
+ */
+
+/* command flags and offsets */
+#define IAVF_AQ_FLAG_DD_SHIFT 0
+#define IAVF_AQ_FLAG_CMP_SHIFT 1
+#define IAVF_AQ_FLAG_ERR_SHIFT 2
+#define IAVF_AQ_FLAG_VFE_SHIFT 3
+#define IAVF_AQ_FLAG_LB_SHIFT 9
+#define IAVF_AQ_FLAG_RD_SHIFT 10
+#define IAVF_AQ_FLAG_VFC_SHIFT 11
+#define IAVF_AQ_FLAG_BUF_SHIFT 12
+#define IAVF_AQ_FLAG_SI_SHIFT 13
+#define IAVF_AQ_FLAG_EI_SHIFT 14
+#define IAVF_AQ_FLAG_FE_SHIFT 15
+
+#define IAVF_AQ_FLAG_DD BIT(IAVF_AQ_FLAG_DD_SHIFT) /* 0x1 */
+#define IAVF_AQ_FLAG_CMP BIT(IAVF_AQ_FLAG_CMP_SHIFT) /* 0x2 */
+#define IAVF_AQ_FLAG_ERR BIT(IAVF_AQ_FLAG_ERR_SHIFT) /* 0x4 */
+#define IAVF_AQ_FLAG_VFE BIT(IAVF_AQ_FLAG_VFE_SHIFT) /* 0x8 */
+#define IAVF_AQ_FLAG_LB BIT(IAVF_AQ_FLAG_LB_SHIFT) /* 0x200 */
+#define IAVF_AQ_FLAG_RD BIT(IAVF_AQ_FLAG_RD_SHIFT) /* 0x400 */
+#define IAVF_AQ_FLAG_VFC BIT(IAVF_AQ_FLAG_VFC_SHIFT) /* 0x800 */
+#define IAVF_AQ_FLAG_BUF BIT(IAVF_AQ_FLAG_BUF_SHIFT) /* 0x1000 */
+#define IAVF_AQ_FLAG_SI BIT(IAVF_AQ_FLAG_SI_SHIFT) /* 0x2000 */
+#define IAVF_AQ_FLAG_EI BIT(IAVF_AQ_FLAG_EI_SHIFT) /* 0x4000 */
+#define IAVF_AQ_FLAG_FE BIT(IAVF_AQ_FLAG_FE_SHIFT) /* 0x8000 */
+
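
These bits are OR-ed together per descriptor; for instance, the receive-buffer setup earlier in this patch sets BUF and adds LB for buffers larger than IAVF_AQ_LARGE_BUF. A standalone check of that encoding (BIT() is redefined locally so the snippet compiles outside the kernel):

/* Standalone illustration of the flag encoding documented above. */
#include <assert.h>

#define BIT(n)			(1U << (n))
#define IAVF_AQ_FLAG_LB		BIT(9)	/* 0x200 */
#define IAVF_AQ_FLAG_BUF	BIT(12)	/* 0x1000 */

int main(void)
{
	/* an indirect command using a large attached buffer */
	unsigned int flags = IAVF_AQ_FLAG_BUF | IAVF_AQ_FLAG_LB;

	assert(flags == 0x1200);
	return 0;
}
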
+/* error codes */
+enum iavf_admin_queue_err {
+ IAVF_AQ_RC_OK = 0, /* success */
+ IAVF_AQ_RC_EPERM = 1, /* Operation not permitted */
+ IAVF_AQ_RC_ENOENT = 2, /* No such element */
+ IAVF_AQ_RC_ESRCH = 3, /* Bad opcode */
+ IAVF_AQ_RC_EINTR = 4, /* operation interrupted */
+ IAVF_AQ_RC_EIO = 5, /* I/O error */
+ IAVF_AQ_RC_ENXIO = 6, /* No such resource */
+ IAVF_AQ_RC_E2BIG = 7, /* Arg too long */
+ IAVF_AQ_RC_EAGAIN = 8, /* Try again */
+ IAVF_AQ_RC_ENOMEM = 9, /* Out of memory */
+ IAVF_AQ_RC_EACCES = 10, /* Permission denied */
+ IAVF_AQ_RC_EFAULT = 11, /* Bad address */
+ IAVF_AQ_RC_EBUSY = 12, /* Device or resource busy */
+ IAVF_AQ_RC_EEXIST = 13, /* object already exists */
+ IAVF_AQ_RC_EINVAL = 14, /* Invalid argument */
+ IAVF_AQ_RC_ENOTTY = 15, /* Not a typewriter */
+ IAVF_AQ_RC_ENOSPC = 16, /* No space left or alloc failure */
+ IAVF_AQ_RC_ENOSYS = 17, /* Function not implemented */
+ IAVF_AQ_RC_ERANGE = 18, /* Parameter out of range */
+ IAVF_AQ_RC_EFLUSHED = 19, /* Cmd flushed due to prev cmd error */
+ IAVF_AQ_RC_BAD_ADDR = 20, /* Descriptor contains a bad pointer */
+ IAVF_AQ_RC_EMODE = 21, /* Op not allowed in current dev mode */
+ IAVF_AQ_RC_EFBIG = 22, /* File too large */
+};
+
+/* Admin Queue command opcodes */
+enum iavf_admin_queue_opc {
+ /* aq commands */
+ iavf_aqc_opc_get_version = 0x0001,
+ iavf_aqc_opc_driver_version = 0x0002,
+ iavf_aqc_opc_queue_shutdown = 0x0003,
+ iavf_aqc_opc_set_pf_context = 0x0004,
+
+ /* resource ownership */
+ iavf_aqc_opc_request_resource = 0x0008,
+ iavf_aqc_opc_release_resource = 0x0009,
+
+ iavf_aqc_opc_list_func_capabilities = 0x000A,
+ iavf_aqc_opc_list_dev_capabilities = 0x000B,
+
+ /* Proxy commands */
+ iavf_aqc_opc_set_proxy_config = 0x0104,
+ iavf_aqc_opc_set_ns_proxy_table_entry = 0x0105,
+
+ /* LAA */
+ iavf_aqc_opc_mac_address_read = 0x0107,
+ iavf_aqc_opc_mac_address_write = 0x0108,
+
+ /* PXE */
+ iavf_aqc_opc_clear_pxe_mode = 0x0110,
+
+ /* WoL commands */
+ iavf_aqc_opc_set_wol_filter = 0x0120,
+ iavf_aqc_opc_get_wake_reason = 0x0121,
+
+ /* internal switch commands */
+ iavf_aqc_opc_get_switch_config = 0x0200,
+ iavf_aqc_opc_add_statistics = 0x0201,
+ iavf_aqc_opc_remove_statistics = 0x0202,
+ iavf_aqc_opc_set_port_parameters = 0x0203,
+ iavf_aqc_opc_get_switch_resource_alloc = 0x0204,
+ iavf_aqc_opc_set_switch_config = 0x0205,
+ iavf_aqc_opc_rx_ctl_reg_read = 0x0206,
+ iavf_aqc_opc_rx_ctl_reg_write = 0x0207,
+
+ iavf_aqc_opc_add_vsi = 0x0210,
+ iavf_aqc_opc_update_vsi_parameters = 0x0211,
+ iavf_aqc_opc_get_vsi_parameters = 0x0212,
+
+ iavf_aqc_opc_add_pv = 0x0220,
+ iavf_aqc_opc_update_pv_parameters = 0x0221,
+ iavf_aqc_opc_get_pv_parameters = 0x0222,
+
+ iavf_aqc_opc_add_veb = 0x0230,
+ iavf_aqc_opc_update_veb_parameters = 0x0231,
+ iavf_aqc_opc_get_veb_parameters = 0x0232,
+
+ iavf_aqc_opc_delete_element = 0x0243,
+
+ iavf_aqc_opc_add_macvlan = 0x0250,
+ iavf_aqc_opc_remove_macvlan = 0x0251,
+ iavf_aqc_opc_add_vlan = 0x0252,
+ iavf_aqc_opc_remove_vlan = 0x0253,
+ iavf_aqc_opc_set_vsi_promiscuous_modes = 0x0254,
+ iavf_aqc_opc_add_tag = 0x0255,
+ iavf_aqc_opc_remove_tag = 0x0256,
+ iavf_aqc_opc_add_multicast_etag = 0x0257,
+ iavf_aqc_opc_remove_multicast_etag = 0x0258,
+ iavf_aqc_opc_update_tag = 0x0259,
+ iavf_aqc_opc_add_control_packet_filter = 0x025A,
+ iavf_aqc_opc_remove_control_packet_filter = 0x025B,
+ iavf_aqc_opc_add_cloud_filters = 0x025C,
+ iavf_aqc_opc_remove_cloud_filters = 0x025D,
+ iavf_aqc_opc_clear_wol_switch_filters = 0x025E,
+
+ iavf_aqc_opc_add_mirror_rule = 0x0260,
+ iavf_aqc_opc_delete_mirror_rule = 0x0261,
+
+ /* Dynamic Device Personalization */
+ iavf_aqc_opc_write_personalization_profile = 0x0270,
+ iavf_aqc_opc_get_personalization_profile_list = 0x0271,
+
+ /* DCB commands */
+ iavf_aqc_opc_dcb_ignore_pfc = 0x0301,
+ iavf_aqc_opc_dcb_updated = 0x0302,
+ iavf_aqc_opc_set_dcb_parameters = 0x0303,
+
+ /* TX scheduler */
+ iavf_aqc_opc_configure_vsi_bw_limit = 0x0400,
+ iavf_aqc_opc_configure_vsi_ets_sla_bw_limit = 0x0406,
+ iavf_aqc_opc_configure_vsi_tc_bw = 0x0407,
+ iavf_aqc_opc_query_vsi_bw_config = 0x0408,
+ iavf_aqc_opc_query_vsi_ets_sla_config = 0x040A,
+ iavf_aqc_opc_configure_switching_comp_bw_limit = 0x0410,
+
+ iavf_aqc_opc_enable_switching_comp_ets = 0x0413,
+ iavf_aqc_opc_modify_switching_comp_ets = 0x0414,
+ iavf_aqc_opc_disable_switching_comp_ets = 0x0415,
+ iavf_aqc_opc_configure_switching_comp_ets_bw_limit = 0x0416,
+ iavf_aqc_opc_configure_switching_comp_bw_config = 0x0417,
+ iavf_aqc_opc_query_switching_comp_ets_config = 0x0418,
+ iavf_aqc_opc_query_port_ets_config = 0x0419,
+ iavf_aqc_opc_query_switching_comp_bw_config = 0x041A,
+ iavf_aqc_opc_suspend_port_tx = 0x041B,
+ iavf_aqc_opc_resume_port_tx = 0x041C,
+ iavf_aqc_opc_configure_partition_bw = 0x041D,
+ /* hmc */
+ iavf_aqc_opc_query_hmc_resource_profile = 0x0500,
+ iavf_aqc_opc_set_hmc_resource_profile = 0x0501,
+
+ /* phy commands */
+ iavf_aqc_opc_get_phy_abilities = 0x0600,
+ iavf_aqc_opc_set_phy_config = 0x0601,
+ iavf_aqc_opc_set_mac_config = 0x0603,
+ iavf_aqc_opc_set_link_restart_an = 0x0605,
+ iavf_aqc_opc_get_link_status = 0x0607,
+ iavf_aqc_opc_set_phy_int_mask = 0x0613,
+ iavf_aqc_opc_get_local_advt_reg = 0x0614,
+ iavf_aqc_opc_set_local_advt_reg = 0x0615,
+ iavf_aqc_opc_get_partner_advt = 0x0616,
+ iavf_aqc_opc_set_lb_modes = 0x0618,
+ iavf_aqc_opc_get_phy_wol_caps = 0x0621,
+ iavf_aqc_opc_set_phy_debug = 0x0622,
+ iavf_aqc_opc_upload_ext_phy_fm = 0x0625,
+ iavf_aqc_opc_run_phy_activity = 0x0626,
+ iavf_aqc_opc_set_phy_register = 0x0628,
+ iavf_aqc_opc_get_phy_register = 0x0629,
+
+ /* NVM commands */
+ iavf_aqc_opc_nvm_read = 0x0701,
+ iavf_aqc_opc_nvm_erase = 0x0702,
+ iavf_aqc_opc_nvm_update = 0x0703,
+ iavf_aqc_opc_nvm_config_read = 0x0704,
+ iavf_aqc_opc_nvm_config_write = 0x0705,
+ iavf_aqc_opc_oem_post_update = 0x0720,
+ iavf_aqc_opc_thermal_sensor = 0x0721,
+
+ /* virtualization commands */
+ iavf_aqc_opc_send_msg_to_pf = 0x0801,
+ iavf_aqc_opc_send_msg_to_vf = 0x0802,
+ iavf_aqc_opc_send_msg_to_peer = 0x0803,
+
+ /* alternate structure */
+ iavf_aqc_opc_alternate_write = 0x0900,
+ iavf_aqc_opc_alternate_write_indirect = 0x0901,
+ iavf_aqc_opc_alternate_read = 0x0902,
+ iavf_aqc_opc_alternate_read_indirect = 0x0903,
+ iavf_aqc_opc_alternate_write_done = 0x0904,
+ iavf_aqc_opc_alternate_set_mode = 0x0905,
+ iavf_aqc_opc_alternate_clear_port = 0x0906,
+
+ /* LLDP commands */
+ iavf_aqc_opc_lldp_get_mib = 0x0A00,
+ iavf_aqc_opc_lldp_update_mib = 0x0A01,
+ iavf_aqc_opc_lldp_add_tlv = 0x0A02,
+ iavf_aqc_opc_lldp_update_tlv = 0x0A03,
+ iavf_aqc_opc_lldp_delete_tlv = 0x0A04,
+ iavf_aqc_opc_lldp_stop = 0x0A05,
+ iavf_aqc_opc_lldp_start = 0x0A06,
+
+ /* Tunnel commands */
+ iavf_aqc_opc_add_udp_tunnel = 0x0B00,
+ iavf_aqc_opc_del_udp_tunnel = 0x0B01,
+ iavf_aqc_opc_set_rss_key = 0x0B02,
+ iavf_aqc_opc_set_rss_lut = 0x0B03,
+ iavf_aqc_opc_get_rss_key = 0x0B04,
+ iavf_aqc_opc_get_rss_lut = 0x0B05,
+
+ /* Async Events */
+ iavf_aqc_opc_event_lan_overflow = 0x1001,
+
+ /* OEM commands */
+ iavf_aqc_opc_oem_parameter_change = 0xFE00,
+ iavf_aqc_opc_oem_device_status_change = 0xFE01,
+ iavf_aqc_opc_oem_ocsd_initialize = 0xFE02,
+ iavf_aqc_opc_oem_ocbb_initialize = 0xFE03,
+
+ /* debug commands */
+ iavf_aqc_opc_debug_read_reg = 0xFF03,
+ iavf_aqc_opc_debug_write_reg = 0xFF04,
+ iavf_aqc_opc_debug_modify_reg = 0xFF07,
+ iavf_aqc_opc_debug_dump_internals = 0xFF08,
+};
+
+/* command structures and indirect data structures */
+
+/* Structure naming conventions:
+ * - no suffix for direct command descriptor structures
+ * - _data for indirect sent data
+ * - _resp for indirect return data (data which is both will use _data)
+ * - _completion for direct return data
+ * - _element_ for repeated elements (may also be _data or _resp)
+ *
+ * Command structures are expected to overlay the params.raw member of the basic
+ * descriptor, and as such cannot exceed 16 bytes in length.
+ */
+
+/* This macro is used to generate a compilation error if a structure
+ * is not exactly the correct length. It gives a divide by zero error if the
+ * structure is not of the correct size, otherwise it creates an enum that is
+ * never used.
+ */
+#define IAVF_CHECK_STRUCT_LEN(n, X) enum iavf_static_assert_enum_##X \
+ { iavf_static_assert_##X = (n) / ((sizeof(struct X) == (n)) ? 1 : 0) }
+
+/* This macro is used extensively to ensure that command structures are 16
+ * bytes in length as they have to map to the raw array of that size.
+ */
+#define IAVF_CHECK_CMD_LENGTH(X) IAVF_CHECK_STRUCT_LEN(16, X)
+
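
The divide-by-zero trick above is a pre-C11 static assert: when the size matches, the enumerator evaluates to n / 1; when it does not, the compiler rejects the constant expression n / 0. A standalone sketch with a hypothetical struct (the macro is a local copy and the struct name is made up):

/* Standalone illustration of the compile-time size check above. */
#define CHECK_STRUCT_LEN(n, X) enum static_assert_enum_##X \
	{ static_assert_##X = (n) / ((sizeof(struct X) == (n)) ? 1 : 0) }

struct example_cmd {
	unsigned char raw[16];
};

CHECK_STRUCT_LEN(16, example_cmd);	/* compiles: 16 / 1 */
/* CHECK_STRUCT_LEN(20, example_cmd); would not: division by zero */

int main(void)
{
	return 0;
}
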
+/* Queue Shutdown (direct 0x0003) */
+struct iavf_aqc_queue_shutdown {
+ __le32 driver_unloading;
+#define IAVF_AQ_DRIVER_UNLOADING 0x1
+ u8 reserved[12];
+};
+
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_queue_shutdown);
+
+struct iavf_aqc_vsi_properties_data {
+ /* first 96 byte are written by SW */
+ __le16 valid_sections;
+#define IAVF_AQ_VSI_PROP_SWITCH_VALID 0x0001
+#define IAVF_AQ_VSI_PROP_SECURITY_VALID 0x0002
+#define IAVF_AQ_VSI_PROP_VLAN_VALID 0x0004
+#define IAVF_AQ_VSI_PROP_CAS_PV_VALID 0x0008
+#define IAVF_AQ_VSI_PROP_INGRESS_UP_VALID 0x0010
+#define IAVF_AQ_VSI_PROP_EGRESS_UP_VALID 0x0020
+#define IAVF_AQ_VSI_PROP_QUEUE_MAP_VALID 0x0040
+#define IAVF_AQ_VSI_PROP_QUEUE_OPT_VALID 0x0080
+#define IAVF_AQ_VSI_PROP_OUTER_UP_VALID 0x0100
+#define IAVF_AQ_VSI_PROP_SCHED_VALID 0x0200
+ /* switch section */
+ __le16 switch_id; /* 12bit id combined with flags below */
+#define IAVF_AQ_VSI_SW_ID_SHIFT 0x0000
+#define IAVF_AQ_VSI_SW_ID_MASK (0xFFF << IAVF_AQ_VSI_SW_ID_SHIFT)
+#define IAVF_AQ_VSI_SW_ID_FLAG_NOT_STAG 0x1000
+#define IAVF_AQ_VSI_SW_ID_FLAG_ALLOW_LB 0x2000
+#define IAVF_AQ_VSI_SW_ID_FLAG_LOCAL_LB 0x4000
+ u8 sw_reserved[2];
+ /* security section */
+ u8 sec_flags;
+#define IAVF_AQ_VSI_SEC_FLAG_ALLOW_DEST_OVRD 0x01
+#define IAVF_AQ_VSI_SEC_FLAG_ENABLE_VLAN_CHK 0x02
+#define IAVF_AQ_VSI_SEC_FLAG_ENABLE_MAC_CHK 0x04
+ u8 sec_reserved;
+ /* VLAN section */
+ __le16 pvid; /* VLANS include priority bits */
+ __le16 fcoe_pvid;
+ u8 port_vlan_flags;
+#define IAVF_AQ_VSI_PVLAN_MODE_SHIFT 0x00
+#define IAVF_AQ_VSI_PVLAN_MODE_MASK (0x03 << \
+ IAVF_AQ_VSI_PVLAN_MODE_SHIFT)
+#define IAVF_AQ_VSI_PVLAN_MODE_TAGGED 0x01
+#define IAVF_AQ_VSI_PVLAN_MODE_UNTAGGED 0x02
+#define IAVF_AQ_VSI_PVLAN_MODE_ALL 0x03
+#define IAVF_AQ_VSI_PVLAN_INSERT_PVID 0x04
+#define IAVF_AQ_VSI_PVLAN_EMOD_SHIFT 0x03
+#define IAVF_AQ_VSI_PVLAN_EMOD_MASK (0x3 << \
+ IAVF_AQ_VSI_PVLAN_EMOD_SHIFT)
+#define IAVF_AQ_VSI_PVLAN_EMOD_STR_BOTH 0x0
+#define IAVF_AQ_VSI_PVLAN_EMOD_STR_UP 0x08
+#define IAVF_AQ_VSI_PVLAN_EMOD_STR 0x10
+#define IAVF_AQ_VSI_PVLAN_EMOD_NOTHING 0x18
+ u8 pvlan_reserved[3];
+ /* ingress egress up sections */
+ __le32 ingress_table; /* bitmap, 3 bits per up */
+#define IAVF_AQ_VSI_UP_TABLE_UP0_SHIFT 0
+#define IAVF_AQ_VSI_UP_TABLE_UP0_MASK (0x7 << \
+ IAVF_AQ_VSI_UP_TABLE_UP0_SHIFT)
+#define IAVF_AQ_VSI_UP_TABLE_UP1_SHIFT 3
+#define IAVF_AQ_VSI_UP_TABLE_UP1_MASK (0x7 << \
+ IAVF_AQ_VSI_UP_TABLE_UP1_SHIFT)
+#define IAVF_AQ_VSI_UP_TABLE_UP2_SHIFT 6
+#define IAVF_AQ_VSI_UP_TABLE_UP2_MASK (0x7 << \
+ IAVF_AQ_VSI_UP_TABLE_UP2_SHIFT)
+#define IAVF_AQ_VSI_UP_TABLE_UP3_SHIFT 9
+#define IAVF_AQ_VSI_UP_TABLE_UP3_MASK (0x7 << \
+ IAVF_AQ_VSI_UP_TABLE_UP3_SHIFT)
+#define IAVF_AQ_VSI_UP_TABLE_UP4_SHIFT 12
+#define IAVF_AQ_VSI_UP_TABLE_UP4_MASK (0x7 << \
+ IAVF_AQ_VSI_UP_TABLE_UP4_SHIFT)
+#define IAVF_AQ_VSI_UP_TABLE_UP5_SHIFT 15
+#define IAVF_AQ_VSI_UP_TABLE_UP5_MASK (0x7 << \
+ IAVF_AQ_VSI_UP_TABLE_UP5_SHIFT)
+#define IAVF_AQ_VSI_UP_TABLE_UP6_SHIFT 18
+#define IAVF_AQ_VSI_UP_TABLE_UP6_MASK (0x7 << \
+ IAVF_AQ_VSI_UP_TABLE_UP6_SHIFT)
+#define IAVF_AQ_VSI_UP_TABLE_UP7_SHIFT 21
+#define IAVF_AQ_VSI_UP_TABLE_UP7_MASK (0x7 << \
+ IAVF_AQ_VSI_UP_TABLE_UP7_SHIFT)
+ __le32 egress_table; /* same defines as for ingress table */
+ /* cascaded PV section */
+ __le16 cas_pv_tag;
+ u8 cas_pv_flags;
+#define IAVF_AQ_VSI_CAS_PV_TAGX_SHIFT 0x00
+#define IAVF_AQ_VSI_CAS_PV_TAGX_MASK (0x03 << \
+ IAVF_AQ_VSI_CAS_PV_TAGX_SHIFT)
+#define IAVF_AQ_VSI_CAS_PV_TAGX_LEAVE 0x00
+#define IAVF_AQ_VSI_CAS_PV_TAGX_REMOVE 0x01
+#define IAVF_AQ_VSI_CAS_PV_TAGX_COPY 0x02
+#define IAVF_AQ_VSI_CAS_PV_INSERT_TAG 0x10
+#define IAVF_AQ_VSI_CAS_PV_ETAG_PRUNE 0x20
+#define IAVF_AQ_VSI_CAS_PV_ACCEPT_HOST_TAG 0x40
+ u8 cas_pv_reserved;
+ /* queue mapping section */
+ __le16 mapping_flags;
+#define IAVF_AQ_VSI_QUE_MAP_CONTIG 0x0
+#define IAVF_AQ_VSI_QUE_MAP_NONCONTIG 0x1
+ __le16 queue_mapping[16];
+#define IAVF_AQ_VSI_QUEUE_SHIFT 0x0
+#define IAVF_AQ_VSI_QUEUE_MASK (0x7FF << IAVF_AQ_VSI_QUEUE_SHIFT)
+ __le16 tc_mapping[8];
+#define IAVF_AQ_VSI_TC_QUE_OFFSET_SHIFT 0
+#define IAVF_AQ_VSI_TC_QUE_OFFSET_MASK (0x1FF << \
+ IAVF_AQ_VSI_TC_QUE_OFFSET_SHIFT)
+#define IAVF_AQ_VSI_TC_QUE_NUMBER_SHIFT 9
+#define IAVF_AQ_VSI_TC_QUE_NUMBER_MASK (0x7 << \
+ IAVF_AQ_VSI_TC_QUE_NUMBER_SHIFT)
+ /* queueing option section */
+ u8 queueing_opt_flags;
+#define IAVF_AQ_VSI_QUE_OPT_MULTICAST_UDP_ENA 0x04
+#define IAVF_AQ_VSI_QUE_OPT_UNICAST_UDP_ENA 0x08
+#define IAVF_AQ_VSI_QUE_OPT_TCP_ENA 0x10
+#define IAVF_AQ_VSI_QUE_OPT_FCOE_ENA 0x20
+#define IAVF_AQ_VSI_QUE_OPT_RSS_LUT_PF 0x00
+#define IAVF_AQ_VSI_QUE_OPT_RSS_LUT_VSI 0x40
+ u8 queueing_opt_reserved[3];
+ /* scheduler section */
+ u8 up_enable_bits;
+ u8 sched_reserved;
+ /* outer up section */
+ __le32 outer_up_table; /* same structure and defines as ingress tbl */
+ u8 cmd_reserved[8];
+ /* last 32 bytes are written by FW */
+ __le16 qs_handle[8];
+#define IAVF_AQ_VSI_QS_HANDLE_INVALID 0xFFFF
+ __le16 stat_counter_idx;
+ __le16 sched_id;
+ u8 resp_reserved[12];
+};
+
+IAVF_CHECK_STRUCT_LEN(128, iavf_aqc_vsi_properties_data);
+
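
The ingress/egress tables in the structure above pack one 3-bit value per user priority into a 32-bit word, which is what the per-UP SHIFT/MASK pairs are for. A short sketch of reading and writing a single entry; the helper names are illustrative, and props is assumed to point at a struct iavf_aqc_vsi_properties_data.

/* Illustrative helpers for the 3-bits-per-UP tables above. */
static u8 example_get_up_entry(__le32 table, unsigned int shift, u32 mask)
{
	return (le32_to_cpu(table) & mask) >> shift;
}

static __le32 example_set_up_entry(__le32 table, unsigned int shift, u32 mask,
				   u8 val)
{
	u32 v = le32_to_cpu(table);

	v = (v & ~mask) | (((u32)val << shift) & mask);
	return cpu_to_le32(v);
}

/*
 * e.g. reading UP3 of the ingress table:
 *	example_get_up_entry(props->ingress_table,
 *			     IAVF_AQ_VSI_UP_TABLE_UP3_SHIFT,
 *			     IAVF_AQ_VSI_UP_TABLE_UP3_MASK);
 */
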
+/* Get VEB Parameters (direct 0x0232)
+ * uses iavf_aqc_switch_seid for the descriptor
+ */
+struct iavf_aqc_get_veb_parameters_completion {
+ __le16 seid;
+ __le16 switch_id;
+ __le16 veb_flags; /* only the first/last flags from 0x0230 are valid */
+ __le16 statistic_index;
+ __le16 vebs_used;
+ __le16 vebs_free;
+ u8 reserved[4];
+};
+
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_get_veb_parameters_completion);
+
+#define IAVF_LINK_SPEED_100MB_SHIFT 0x1
+#define IAVF_LINK_SPEED_1000MB_SHIFT 0x2
+#define IAVF_LINK_SPEED_10GB_SHIFT 0x3
+#define IAVF_LINK_SPEED_40GB_SHIFT 0x4
+#define IAVF_LINK_SPEED_20GB_SHIFT 0x5
+#define IAVF_LINK_SPEED_25GB_SHIFT 0x6
+
+enum iavf_aq_link_speed {
+ IAVF_LINK_SPEED_UNKNOWN = 0,
+ IAVF_LINK_SPEED_100MB = BIT(IAVF_LINK_SPEED_100MB_SHIFT),
+ IAVF_LINK_SPEED_1GB = BIT(IAVF_LINK_SPEED_1000MB_SHIFT),
+ IAVF_LINK_SPEED_10GB = BIT(IAVF_LINK_SPEED_10GB_SHIFT),
+ IAVF_LINK_SPEED_40GB = BIT(IAVF_LINK_SPEED_40GB_SHIFT),
+ IAVF_LINK_SPEED_20GB = BIT(IAVF_LINK_SPEED_20GB_SHIFT),
+ IAVF_LINK_SPEED_25GB = BIT(IAVF_LINK_SPEED_25GB_SHIFT),
+};
+
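
Each enum value above is a single bit derived from its shift (for example IAVF_LINK_SPEED_10GB == BIT(3) == 0x08 and IAVF_LINK_SPEED_40GB == BIT(4) == 0x10), so a reported speed can be tested with a plain mask; the helper below is illustrative.

/* Illustrative speed test. */
static bool example_link_is_40g(enum iavf_aq_link_speed speed)
{
	return (speed & IAVF_LINK_SPEED_40GB) != 0;
}
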
+/* Send to PF command (indirect 0x0801) id is only used by PF
+ * Send to VF command (indirect 0x0802) id is only used by PF
+ * Send to Peer PF command (indirect 0x0803)
+ */
+struct iavf_aqc_pf_vf_message {
+ __le32 id;
+ u8 reserved[4];
+ __le32 addr_high;
+ __le32 addr_low;
+};
+
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_pf_vf_message);
+
+struct iavf_aqc_get_set_rss_key {
+#define IAVF_AQC_SET_RSS_KEY_VSI_VALID BIT(15)
+#define IAVF_AQC_SET_RSS_KEY_VSI_ID_SHIFT 0
+#define IAVF_AQC_SET_RSS_KEY_VSI_ID_MASK (0x3FF << \
+ IAVF_AQC_SET_RSS_KEY_VSI_ID_SHIFT)
+ __le16 vsi_id;
+ u8 reserved[6];
+ __le32 addr_high;
+ __le32 addr_low;
+};
+
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_get_set_rss_key);
+
+struct iavf_aqc_get_set_rss_key_data {
+ u8 standard_rss_key[0x28];
+ u8 extended_hash_key[0xc];
+};
+
+IAVF_CHECK_STRUCT_LEN(0x34, iavf_aqc_get_set_rss_key_data);
+
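
The 0x34-byte key structure above travels as the indirect buffer of the set/get RSS key command, while the descriptor carries the VSI id. Below is a hedged sketch of composing such a command from the pieces defined in this header; the wrapper name is illustrative and the driver's real helper is not shown in this patch.

/* Illustrative sketch; mirrors the descriptor layout defined above. */
static enum iavf_status
example_set_rss_key(struct iavf_hw *hw, u16 vsi_id,
		    struct iavf_aqc_get_set_rss_key_data *key)
{
	struct iavf_aq_desc desc;
	struct iavf_aqc_get_set_rss_key *cmd =
		(struct iavf_aqc_get_set_rss_key *)&desc.params.raw;

	iavf_fill_default_direct_cmd_desc(&desc, iavf_aqc_opc_set_rss_key);

	/* indirect command: the key itself goes in the attached buffer */
	desc.flags |= cpu_to_le16(IAVF_AQ_FLAG_BUF | IAVF_AQ_FLAG_RD);

	cmd->vsi_id = cpu_to_le16(((vsi_id << IAVF_AQC_SET_RSS_KEY_VSI_ID_SHIFT) &
				   IAVF_AQC_SET_RSS_KEY_VSI_ID_MASK) |
				  IAVF_AQC_SET_RSS_KEY_VSI_VALID);

	return iavf_asq_send_command(hw, &desc, key, sizeof(*key), NULL);
}
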
+struct iavf_aqc_get_set_rss_lut {
+#define IAVF_AQC_SET_RSS_LUT_VSI_VALID BIT(15)
+#define IAVF_AQC_SET_RSS_LUT_VSI_ID_SHIFT 0
+#define IAVF_AQC_SET_RSS_LUT_VSI_ID_MASK (0x3FF << \
+ IAVF_AQC_SET_RSS_LUT_VSI_ID_SHIFT)
+ __le16 vsi_id;
+#define IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT 0
+#define IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_MASK \
+ BIT(IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT)
+
+#define IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_VSI 0
+#define IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_PF 1
+ __le16 flags;
+ u8 reserved[4];
+ __le32 addr_high;
+ __le32 addr_low;
+};
+
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_get_set_rss_lut);
+#endif /* _IAVF_ADMINQ_CMD_H_ */
diff --git a/drivers/net/ethernet/intel/iavf/iavf_alloc.h b/drivers/net/ethernet/intel/iavf/iavf_alloc.h
index bf2753146f30..2711573c14ec 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_alloc.h
+++ b/drivers/net/ethernet/intel/iavf/iavf_alloc.h
@@ -20,12 +20,15 @@ enum iavf_memory_type {
};
/* prototype for functions used for dynamic memory allocation */
-iavf_status iavf_allocate_dma_mem(struct iavf_hw *hw, struct iavf_dma_mem *mem,
- enum iavf_memory_type type,
- u64 size, u32 alignment);
-iavf_status iavf_free_dma_mem(struct iavf_hw *hw, struct iavf_dma_mem *mem);
-iavf_status iavf_allocate_virt_mem(struct iavf_hw *hw,
- struct iavf_virt_mem *mem, u32 size);
-iavf_status iavf_free_virt_mem(struct iavf_hw *hw, struct iavf_virt_mem *mem);
+enum iavf_status iavf_allocate_dma_mem(struct iavf_hw *hw,
+ struct iavf_dma_mem *mem,
+ enum iavf_memory_type type,
+ u64 size, u32 alignment);
+enum iavf_status iavf_free_dma_mem(struct iavf_hw *hw,
+ struct iavf_dma_mem *mem);
+enum iavf_status iavf_allocate_virt_mem(struct iavf_hw *hw,
+ struct iavf_virt_mem *mem, u32 size);
+enum iavf_status iavf_free_virt_mem(struct iavf_hw *hw,
+ struct iavf_virt_mem *mem);
#endif /* _IAVF_ALLOC_H_ */
diff --git a/drivers/net/ethernet/intel/iavf/iavf_client.c b/drivers/net/ethernet/intel/iavf/iavf_client.c
index aea45364fd1c..0c77e4171808 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_client.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_client.c
@@ -10,19 +10,19 @@
static
const char iavf_client_interface_version_str[] = IAVF_CLIENT_VERSION_STR;
-static struct i40e_client *vf_registered_client;
-static LIST_HEAD(i40e_devices);
+static struct iavf_client *vf_registered_client;
+static LIST_HEAD(iavf_devices);
static DEFINE_MUTEX(iavf_device_mutex);
-static u32 iavf_client_virtchnl_send(struct i40e_info *ldev,
- struct i40e_client *client,
+static u32 iavf_client_virtchnl_send(struct iavf_info *ldev,
+ struct iavf_client *client,
u8 *msg, u16 len);
-static int iavf_client_setup_qvlist(struct i40e_info *ldev,
- struct i40e_client *client,
- struct i40e_qvlist_info *qvlist_info);
+static int iavf_client_setup_qvlist(struct iavf_info *ldev,
+ struct iavf_client *client,
+ struct iavf_qvlist_info *qvlist_info);
-static struct i40e_ops iavf_lan_ops = {
+static struct iavf_ops iavf_lan_ops = {
.virtchnl_send = iavf_client_virtchnl_send,
.setup_qvlist = iavf_client_setup_qvlist,
};
@@ -33,11 +33,11 @@ static struct i40e_ops iavf_lan_ops = {
* @params: client param struct
**/
static
-void iavf_client_get_params(struct iavf_vsi *vsi, struct i40e_params *params)
+void iavf_client_get_params(struct iavf_vsi *vsi, struct iavf_params *params)
{
int i;
- memset(params, 0, sizeof(struct i40e_params));
+ memset(params, 0, sizeof(struct iavf_params));
params->mtu = vsi->netdev->mtu;
params->link_up = vsi->back->link_up;
@@ -57,7 +57,7 @@ void iavf_client_get_params(struct iavf_vsi *vsi, struct i40e_params *params)
**/
void iavf_notify_client_message(struct iavf_vsi *vsi, u8 *msg, u16 len)
{
- struct i40e_client_instance *cinst;
+ struct iavf_client_instance *cinst;
if (!vsi)
return;
@@ -81,8 +81,8 @@ void iavf_notify_client_message(struct iavf_vsi *vsi, u8 *msg, u16 len)
**/
void iavf_notify_client_l2_params(struct iavf_vsi *vsi)
{
- struct i40e_client_instance *cinst;
- struct i40e_params params;
+ struct iavf_client_instance *cinst;
+ struct iavf_params params;
if (!vsi)
return;
@@ -110,7 +110,7 @@ void iavf_notify_client_l2_params(struct iavf_vsi *vsi)
void iavf_notify_client_open(struct iavf_vsi *vsi)
{
struct iavf_adapter *adapter = vsi->back;
- struct i40e_client_instance *cinst = adapter->cinst;
+ struct iavf_client_instance *cinst = adapter->cinst;
int ret;
if (!cinst || !cinst->client || !cinst->client->ops ||
@@ -119,10 +119,10 @@ void iavf_notify_client_open(struct iavf_vsi *vsi)
"Cannot locate client instance open function\n");
return;
}
- if (!(test_bit(__I40E_CLIENT_INSTANCE_OPENED, &cinst->state))) {
+ if (!(test_bit(__IAVF_CLIENT_INSTANCE_OPENED, &cinst->state))) {
ret = cinst->client->ops->open(&cinst->lan_info, cinst->client);
if (!ret)
- set_bit(__I40E_CLIENT_INSTANCE_OPENED, &cinst->state);
+ set_bit(__IAVF_CLIENT_INSTANCE_OPENED, &cinst->state);
}
}
@@ -132,17 +132,17 @@ void iavf_notify_client_open(struct iavf_vsi *vsi)
*
* Return 0 on success or < 0 on error
**/
-static int iavf_client_release_qvlist(struct i40e_info *ldev)
+static int iavf_client_release_qvlist(struct iavf_info *ldev)
{
struct iavf_adapter *adapter = ldev->vf;
- iavf_status err;
+ enum iavf_status err;
if (adapter->aq_required)
return -EAGAIN;
err = iavf_aq_send_msg_to_pf(&adapter->hw,
VIRTCHNL_OP_RELEASE_IWARP_IRQ_MAP,
- I40E_SUCCESS, NULL, 0, NULL);
+ IAVF_SUCCESS, NULL, 0, NULL);
if (err)
dev_err(&adapter->pdev->dev,
@@ -162,7 +162,7 @@ static int iavf_client_release_qvlist(struct i40e_info *ldev)
void iavf_notify_client_close(struct iavf_vsi *vsi, bool reset)
{
struct iavf_adapter *adapter = vsi->back;
- struct i40e_client_instance *cinst = adapter->cinst;
+ struct iavf_client_instance *cinst = adapter->cinst;
if (!cinst || !cinst->client || !cinst->client->ops ||
!cinst->client->ops->close) {
@@ -172,7 +172,7 @@ void iavf_notify_client_close(struct iavf_vsi *vsi, bool reset)
}
cinst->client->ops->close(&cinst->lan_info, cinst->client, reset);
iavf_client_release_qvlist(&cinst->lan_info);
- clear_bit(__I40E_CLIENT_INSTANCE_OPENED, &cinst->state);
+ clear_bit(__IAVF_CLIENT_INSTANCE_OPENED, &cinst->state);
}
/**
@@ -181,13 +181,13 @@ void iavf_notify_client_close(struct iavf_vsi *vsi, bool reset)
*
* Returns cinst ptr on success, NULL on failure
**/
-static struct i40e_client_instance *
+static struct iavf_client_instance *
iavf_client_add_instance(struct iavf_adapter *adapter)
{
- struct i40e_client_instance *cinst = NULL;
+ struct iavf_client_instance *cinst = NULL;
struct iavf_vsi *vsi = &adapter->vsi;
struct netdev_hw_addr *mac = NULL;
- struct i40e_params params;
+ struct iavf_params params;
if (!vf_registered_client)
goto out;
@@ -205,7 +205,7 @@ iavf_client_add_instance(struct iavf_adapter *adapter)
cinst->lan_info.netdev = vsi->netdev;
cinst->lan_info.pcidev = adapter->pdev;
cinst->lan_info.fid = 0;
- cinst->lan_info.ftype = I40E_CLIENT_FTYPE_VF;
+ cinst->lan_info.ftype = IAVF_CLIENT_FTYPE_VF;
cinst->lan_info.hw_addr = adapter->hw.hw_addr;
cinst->lan_info.ops = &iavf_lan_ops;
cinst->lan_info.version.major = IAVF_CLIENT_VERSION_MAJOR;
@@ -213,7 +213,7 @@ iavf_client_add_instance(struct iavf_adapter *adapter)
cinst->lan_info.version.build = IAVF_CLIENT_VERSION_BUILD;
iavf_client_get_params(vsi, &params);
cinst->lan_info.params = params;
- set_bit(__I40E_CLIENT_INSTANCE_NONE, &cinst->state);
+ set_bit(__IAVF_CLIENT_INSTANCE_NONE, &cinst->state);
cinst->lan_info.msix_count = adapter->num_iwarp_msix;
cinst->lan_info.msix_entries =
@@ -250,8 +250,8 @@ void iavf_client_del_instance(struct iavf_adapter *adapter)
**/
void iavf_client_subtask(struct iavf_adapter *adapter)
{
- struct i40e_client *client = vf_registered_client;
- struct i40e_client_instance *cinst;
+ struct iavf_client *client = vf_registered_client;
+ struct iavf_client_instance *cinst;
int ret = 0;
if (adapter->state < __IAVF_DOWN)
@@ -269,13 +269,13 @@ void iavf_client_subtask(struct iavf_adapter *adapter)
dev_info(&adapter->pdev->dev, "Added instance of Client %s\n",
client->name);
- if (!test_bit(__I40E_CLIENT_INSTANCE_OPENED, &cinst->state)) {
+ if (!test_bit(__IAVF_CLIENT_INSTANCE_OPENED, &cinst->state)) {
/* Send an Open request to the client */
if (client->ops && client->ops->open)
ret = client->ops->open(&cinst->lan_info, client);
if (!ret)
- set_bit(__I40E_CLIENT_INSTANCE_OPENED,
+ set_bit(__IAVF_CLIENT_INSTANCE_OPENED,
&cinst->state);
else
/* remove client instance */
@@ -291,11 +291,11 @@ void iavf_client_subtask(struct iavf_adapter *adapter)
**/
int iavf_lan_add_device(struct iavf_adapter *adapter)
{
- struct i40e_device *ldev;
+ struct iavf_device *ldev;
int ret = 0;
mutex_lock(&iavf_device_mutex);
- list_for_each_entry(ldev, &i40e_devices, list) {
+ list_for_each_entry(ldev, &iavf_devices, list) {
if (ldev->vf == adapter) {
ret = -EEXIST;
goto out;
@@ -308,7 +308,7 @@ int iavf_lan_add_device(struct iavf_adapter *adapter)
}
ldev->vf = adapter;
INIT_LIST_HEAD(&ldev->list);
- list_add(&ldev->list, &i40e_devices);
+ list_add(&ldev->list, &iavf_devices);
dev_info(&adapter->pdev->dev, "Added LAN device bus=0x%02x dev=0x%02x func=0x%02x\n",
adapter->hw.bus.bus_id, adapter->hw.bus.device,
adapter->hw.bus.func);
@@ -331,11 +331,11 @@ out:
**/
int iavf_lan_del_device(struct iavf_adapter *adapter)
{
- struct i40e_device *ldev, *tmp;
+ struct iavf_device *ldev, *tmp;
int ret = -ENODEV;
mutex_lock(&iavf_device_mutex);
- list_for_each_entry_safe(ldev, tmp, &i40e_devices, list) {
+ list_for_each_entry_safe(ldev, tmp, &iavf_devices, list) {
if (ldev->vf == adapter) {
dev_info(&adapter->pdev->dev,
"Deleted LAN device bus=0x%02x dev=0x%02x func=0x%02x\n",
@@ -357,24 +357,24 @@ int iavf_lan_del_device(struct iavf_adapter *adapter)
* @client: pointer to the registered client
*
**/
-static void iavf_client_release(struct i40e_client *client)
+static void iavf_client_release(struct iavf_client *client)
{
- struct i40e_client_instance *cinst;
- struct i40e_device *ldev;
+ struct iavf_client_instance *cinst;
+ struct iavf_device *ldev;
struct iavf_adapter *adapter;
mutex_lock(&iavf_device_mutex);
- list_for_each_entry(ldev, &i40e_devices, list) {
+ list_for_each_entry(ldev, &iavf_devices, list) {
adapter = ldev->vf;
cinst = adapter->cinst;
if (!cinst)
continue;
- if (test_bit(__I40E_CLIENT_INSTANCE_OPENED, &cinst->state)) {
+ if (test_bit(__IAVF_CLIENT_INSTANCE_OPENED, &cinst->state)) {
if (client->ops && client->ops->close)
client->ops->close(&cinst->lan_info, client,
false);
iavf_client_release_qvlist(&cinst->lan_info);
- clear_bit(__I40E_CLIENT_INSTANCE_OPENED, &cinst->state);
+ clear_bit(__IAVF_CLIENT_INSTANCE_OPENED, &cinst->state);
dev_warn(&adapter->pdev->dev,
"Client %s instance closed\n", client->name);
@@ -392,13 +392,13 @@ static void iavf_client_release(struct i40e_client *client)
* @client: pointer to the registered client
*
**/
-static void iavf_client_prepare(struct i40e_client *client)
+static void iavf_client_prepare(struct iavf_client *client)
{
- struct i40e_device *ldev;
+ struct iavf_device *ldev;
struct iavf_adapter *adapter;
mutex_lock(&iavf_device_mutex);
- list_for_each_entry(ldev, &i40e_devices, list) {
+ list_for_each_entry(ldev, &iavf_devices, list) {
adapter = ldev->vf;
/* Signal the watchdog to service the client */
adapter->flags |= IAVF_FLAG_SERVICE_CLIENT_REQUESTED;
@@ -415,18 +415,18 @@ static void iavf_client_prepare(struct i40e_client *client)
*
* Return 0 on success or < 0 on error
**/
-static u32 iavf_client_virtchnl_send(struct i40e_info *ldev,
- struct i40e_client *client,
+static u32 iavf_client_virtchnl_send(struct iavf_info *ldev,
+ struct iavf_client *client,
u8 *msg, u16 len)
{
struct iavf_adapter *adapter = ldev->vf;
- iavf_status err;
+ enum iavf_status err;
if (adapter->aq_required)
return -EAGAIN;
err = iavf_aq_send_msg_to_pf(&adapter->hw, VIRTCHNL_OP_IWARP,
- I40E_SUCCESS, msg, len, NULL);
+ IAVF_SUCCESS, msg, len, NULL);
if (err)
dev_err(&adapter->pdev->dev, "Unable to send iWarp message to PF, error %d, aq status %d\n",
err, adapter->hw.aq.asq_last_status);
@@ -442,16 +442,16 @@ static u32 iavf_client_virtchnl_send(struct i40e_info *ldev,
*
* Return 0 on success or < 0 on error
**/
-static int iavf_client_setup_qvlist(struct i40e_info *ldev,
- struct i40e_client *client,
- struct i40e_qvlist_info *qvlist_info)
+static int iavf_client_setup_qvlist(struct iavf_info *ldev,
+ struct iavf_client *client,
+ struct iavf_qvlist_info *qvlist_info)
{
struct virtchnl_iwarp_qvlist_info *v_qvlist_info;
struct iavf_adapter *adapter = ldev->vf;
- struct i40e_qv_info *qv_info;
- iavf_status err;
+ struct iavf_qv_info *qv_info;
+ enum iavf_status err;
u32 v_idx, i;
- u32 msg_size;
+ size_t msg_size;
if (adapter->aq_required)
return -EAGAIN;
@@ -469,13 +469,12 @@ static int iavf_client_setup_qvlist(struct i40e_info *ldev,
}
v_qvlist_info = (struct virtchnl_iwarp_qvlist_info *)qvlist_info;
- msg_size = sizeof(struct virtchnl_iwarp_qvlist_info) +
- (sizeof(struct virtchnl_iwarp_qv_info) *
- (v_qvlist_info->num_vectors - 1));
+ msg_size = struct_size(v_qvlist_info, qv_info,
+ v_qvlist_info->num_vectors - 1);
adapter->client_pending |= BIT(VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP);
err = iavf_aq_send_msg_to_pf(&adapter->hw,
- VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP, I40E_SUCCESS,
+ VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP, IAVF_SUCCESS,
(u8 *)v_qvlist_info, msg_size, NULL);
if (err) {
@@ -499,12 +498,12 @@ out:
}
/**
- * iavf_register_client - Register a i40e client driver with the L2 driver
- * @client: pointer to the i40e_client struct
+ * iavf_register_client - Register an iavf client driver with the L2 driver
+ * @client: pointer to the iavf_client struct
*
* Returns 0 on success or non-0 on error
**/
-int iavf_register_client(struct i40e_client *client)
+int iavf_register_client(struct iavf_client *client)
{
int ret = 0;
@@ -550,12 +549,12 @@ out:
EXPORT_SYMBOL(iavf_register_client);
/**
- * iavf_unregister_client - Unregister a i40e client driver with the L2 driver
- * @client: pointer to the i40e_client struct
+ * iavf_unregister_client - Unregister an iavf client driver with the L2 driver
+ * @client: pointer to the iavf_client struct
*
* Returns 0 on success or non-0 on error
**/
-int iavf_unregister_client(struct i40e_client *client)
+int iavf_unregister_client(struct iavf_client *client)
{
int ret = 0;
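
The qvlist hunk above replaces an open-coded size calculation with struct_size() from <linux/overflow.h>, which sizes a structure ending in a trailing array and saturates rather than wraps on overflow. A minimal sketch of the pattern follows; the "demo" types and function are hypothetical and only stand in for the driver's qvlist message.

/* Sketch only: sizing a message that ends in a one-element trailing
 * array, the way iavf_client_setup_qvlist() now does with struct_size().
 * The "demo" names are made up for illustration.
 */
#include <linux/overflow.h>
#include <linux/slab.h>
#include <linux/types.h>

struct demo_entry {
	u32 v_idx;
	u16 ceq_idx;
	u16 aeq_idx;
};

struct demo_qvlist {
	u32 num_vectors;
	struct demo_entry entries[1];	/* one element already in sizeof() */
};

static struct demo_qvlist *demo_alloc_qvlist(u32 num_vectors)
{
	struct demo_qvlist *q;

	/* num_vectors - 1 extra elements beyond the one declared above;
	 * struct_size() saturates to SIZE_MAX instead of wrapping.
	 */
	q = kzalloc(struct_size(q, entries, num_vectors - 1), GFP_KERNEL);
	if (q)
		q->num_vectors = num_vectors;
	return q;
}
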
diff --git a/drivers/net/ethernet/intel/iavf/iavf_client.h b/drivers/net/ethernet/intel/iavf/iavf_client.h
index e216fc9dfd81..9a7cf39ea75a 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_client.h
+++ b/drivers/net/ethernet/intel/iavf/iavf_client.h
@@ -17,86 +17,86 @@
__stringify(IAVF_CLIENT_VERSION_MINOR) "." \
__stringify(IAVF_CLIENT_VERSION_BUILD)
-struct i40e_client_version {
+struct iavf_client_version {
u8 major;
u8 minor;
u8 build;
u8 rsvd;
};
-enum i40e_client_state {
- __I40E_CLIENT_NULL,
- __I40E_CLIENT_REGISTERED
+enum iavf_client_state {
+ __IAVF_CLIENT_NULL,
+ __IAVF_CLIENT_REGISTERED
};
-enum i40e_client_instance_state {
- __I40E_CLIENT_INSTANCE_NONE,
- __I40E_CLIENT_INSTANCE_OPENED,
+enum iavf_client_instance_state {
+ __IAVF_CLIENT_INSTANCE_NONE,
+ __IAVF_CLIENT_INSTANCE_OPENED,
};
-struct i40e_ops;
-struct i40e_client;
+struct iavf_ops;
+struct iavf_client;
/* HW does not define a type value for AEQ; only for RX/TX and CEQ.
* In order for us to keep the interface simple, SW will define a
* unique type value for AEQ.
*/
-#define I40E_QUEUE_TYPE_PE_AEQ 0x80
-#define I40E_QUEUE_INVALID_IDX 0xFFFF
+#define IAVF_QUEUE_TYPE_PE_AEQ 0x80
+#define IAVF_QUEUE_INVALID_IDX 0xFFFF
-struct i40e_qv_info {
+struct iavf_qv_info {
u32 v_idx; /* msix_vector */
u16 ceq_idx;
u16 aeq_idx;
u8 itr_idx;
};
-struct i40e_qvlist_info {
+struct iavf_qvlist_info {
u32 num_vectors;
- struct i40e_qv_info qv_info[1];
+ struct iavf_qv_info qv_info[1];
};
-#define I40E_CLIENT_MSIX_ALL 0xFFFFFFFF
+#define IAVF_CLIENT_MSIX_ALL 0xFFFFFFFF
/* set of LAN parameters useful for clients managed by LAN */
/* Struct to hold per priority info */
-struct i40e_prio_qos_params {
+struct iavf_prio_qos_params {
u16 qs_handle; /* qs handle for prio */
u8 tc; /* TC mapped to prio */
u8 reserved;
};
-#define I40E_CLIENT_MAX_USER_PRIORITY 8
+#define IAVF_CLIENT_MAX_USER_PRIORITY 8
/* Struct to hold Client QoS */
-struct i40e_qos_params {
- struct i40e_prio_qos_params prio_qos[I40E_CLIENT_MAX_USER_PRIORITY];
+struct iavf_qos_params {
+ struct iavf_prio_qos_params prio_qos[IAVF_CLIENT_MAX_USER_PRIORITY];
};
-struct i40e_params {
- struct i40e_qos_params qos;
+struct iavf_params {
+ struct iavf_qos_params qos;
u16 mtu;
u16 link_up; /* boolean */
};
/* Structure to hold LAN device info for a client device */
-struct i40e_info {
- struct i40e_client_version version;
+struct iavf_info {
+ struct iavf_client_version version;
u8 lanmac[6];
struct net_device *netdev;
struct pci_dev *pcidev;
u8 __iomem *hw_addr;
u8 fid; /* function id, PF id or VF id */
-#define I40E_CLIENT_FTYPE_PF 0
-#define I40E_CLIENT_FTYPE_VF 1
+#define IAVF_CLIENT_FTYPE_PF 0
+#define IAVF_CLIENT_FTYPE_VF 1
u8 ftype; /* function type, PF or VF */
void *vf; /* cast to iavf_adapter */
/* All L2 params that could change during the life span of the device
* and needs to be communicated to the client when they change
*/
- struct i40e_params params;
- struct i40e_ops *ops;
+ struct iavf_params params;
+ struct iavf_ops *ops;
u16 msix_count; /* number of msix vectors*/
/* Array down below will be dynamically allocated based on msix_count */
@@ -104,66 +104,66 @@ struct i40e_info {
u16 itr_index; /* Which ITR index the PE driver is suppose to use */
};
-struct i40e_ops {
+struct iavf_ops {
/* setup_q_vector_list enables queues with a particular vector */
- int (*setup_qvlist)(struct i40e_info *ldev, struct i40e_client *client,
- struct i40e_qvlist_info *qv_info);
+ int (*setup_qvlist)(struct iavf_info *ldev, struct iavf_client *client,
+ struct iavf_qvlist_info *qv_info);
- u32 (*virtchnl_send)(struct i40e_info *ldev, struct i40e_client *client,
+ u32 (*virtchnl_send)(struct iavf_info *ldev, struct iavf_client *client,
u8 *msg, u16 len);
/* If the PE Engine is unresponsive, RDMA driver can request a reset.*/
- void (*request_reset)(struct i40e_info *ldev,
- struct i40e_client *client);
+ void (*request_reset)(struct iavf_info *ldev,
+ struct iavf_client *client);
};
-struct i40e_client_ops {
+struct iavf_client_ops {
/* Should be called from register_client() or whenever the driver is
* ready to create a specific client instance.
*/
- int (*open)(struct i40e_info *ldev, struct i40e_client *client);
+ int (*open)(struct iavf_info *ldev, struct iavf_client *client);
/* Should be closed when netdev is unavailable or when unregister
* call comes in. If the close happens due to a reset, set the reset
* bit to true.
*/
- void (*close)(struct i40e_info *ldev, struct i40e_client *client,
+ void (*close)(struct iavf_info *ldev, struct iavf_client *client,
bool reset);
/* called when some l2 managed parameters changes - mss */
- void (*l2_param_change)(struct i40e_info *ldev,
- struct i40e_client *client,
- struct i40e_params *params);
+ void (*l2_param_change)(struct iavf_info *ldev,
+ struct iavf_client *client,
+ struct iavf_params *params);
/* called when a message is received from the PF */
- int (*virtchnl_receive)(struct i40e_info *ldev,
- struct i40e_client *client,
+ int (*virtchnl_receive)(struct iavf_info *ldev,
+ struct iavf_client *client,
u8 *msg, u16 len);
};
/* Client device */
-struct i40e_client_instance {
+struct iavf_client_instance {
struct list_head list;
- struct i40e_info lan_info;
- struct i40e_client *client;
+ struct iavf_info lan_info;
+ struct iavf_client *client;
unsigned long state;
};
-struct i40e_client {
+struct iavf_client {
struct list_head list; /* list of registered clients */
char name[IAVF_CLIENT_STR_LENGTH];
- struct i40e_client_version version;
+ struct iavf_client_version version;
unsigned long state; /* client state */
atomic_t ref_cnt; /* Count of all the client devices of this kind */
u32 flags;
-#define I40E_CLIENT_FLAGS_LAUNCH_ON_PROBE BIT(0)
-#define I40E_TX_FLAGS_NOTIFY_OTHER_EVENTS BIT(2)
+#define IAVF_CLIENT_FLAGS_LAUNCH_ON_PROBE BIT(0)
+#define IAVF_TX_FLAGS_NOTIFY_OTHER_EVENTS BIT(2)
u8 type;
-#define I40E_CLIENT_IWARP 0
- struct i40e_client_ops *ops; /* client ops provided by the client */
+#define IAVF_CLIENT_IWARP 0
+ struct iavf_client_ops *ops; /* client ops provided by the client */
};
/* used by clients */
-int iavf_register_client(struct i40e_client *client);
-int iavf_unregister_client(struct i40e_client *client);
+int iavf_register_client(struct iavf_client *client);
+int iavf_unregister_client(struct iavf_client *client);
#endif /* _IAVF_CLIENT_H_ */
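
The header above also renames the client registration interface that an RDMA module would consume. As a rough illustration of how that interface is meant to be used, here is a minimal, hypothetical client; the "demo_rdma" module, its callbacks, and the direct include of the driver-private header are assumptions made for the sketch, while the structures and the register/unregister calls are the ones declared above.

/* Sketch only: a minimal consumer of the renamed iavf client API. */
#include <linux/module.h>
#include "iavf_client.h"	/* assumes access to the driver's private header */

static int demo_open(struct iavf_info *ldev, struct iavf_client *client)
{
	/* bring up PE resources via ldev->ops->setup_qvlist()/virtchnl_send() */
	return 0;
}

static void demo_close(struct iavf_info *ldev, struct iavf_client *client,
		       bool reset)
{
	/* tear down; "reset" tells whether a VF reset triggered the close */
}

static struct iavf_client_ops demo_ops = {
	.open = demo_open,
	.close = demo_close,
};

static struct iavf_client demo_client = {
	.name = "demo_rdma",
	.version = {
		.major = IAVF_CLIENT_VERSION_MAJOR,
		.minor = IAVF_CLIENT_VERSION_MINOR,
		.build = IAVF_CLIENT_VERSION_BUILD,
	},
	.type = IAVF_CLIENT_IWARP,
	.ops = &demo_ops,
};

static int __init demo_rdma_init(void)
{
	return iavf_register_client(&demo_client);
}
module_init(demo_rdma_init);

static void __exit demo_rdma_exit(void)
{
	iavf_unregister_client(&demo_client);
}
module_exit(demo_rdma_exit);

MODULE_LICENSE("GPL v2");
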
diff --git a/drivers/net/ethernet/intel/iavf/iavf_common.c b/drivers/net/ethernet/intel/iavf/iavf_common.c
index 768369c89e77..8547fc8fdfd6 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_common.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_common.c
@@ -2,7 +2,7 @@
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#include "iavf_type.h"
-#include "i40e_adminq.h"
+#include "iavf_adminq.h"
#include "iavf_prototype.h"
#include <linux/avf/virtchnl.h>
@@ -13,9 +13,9 @@
* This function sets the mac type of the adapter based on the
* vendor ID and device ID stored in the hw structure.
**/
-iavf_status iavf_set_mac_type(struct iavf_hw *hw)
+enum iavf_status iavf_set_mac_type(struct iavf_hw *hw)
{
- iavf_status status = 0;
+ enum iavf_status status = 0;
if (hw->vendor_id == PCI_VENDOR_ID_INTEL) {
switch (hw->device_id) {
@@ -32,7 +32,7 @@ iavf_status iavf_set_mac_type(struct iavf_hw *hw)
break;
}
} else {
- status = I40E_ERR_DEVICE_NOT_SUPPORTED;
+ status = IAVF_ERR_DEVICE_NOT_SUPPORTED;
}
hw_dbg(hw, "found mac: %d, returns: %d\n", hw->mac.type, status);
@@ -44,55 +44,55 @@ iavf_status iavf_set_mac_type(struct iavf_hw *hw)
* @hw: pointer to the HW structure
* @aq_err: the AQ error code to convert
**/
-const char *iavf_aq_str(struct iavf_hw *hw, enum i40e_admin_queue_err aq_err)
+const char *iavf_aq_str(struct iavf_hw *hw, enum iavf_admin_queue_err aq_err)
{
switch (aq_err) {
- case I40E_AQ_RC_OK:
+ case IAVF_AQ_RC_OK:
return "OK";
- case I40E_AQ_RC_EPERM:
- return "I40E_AQ_RC_EPERM";
- case I40E_AQ_RC_ENOENT:
- return "I40E_AQ_RC_ENOENT";
- case I40E_AQ_RC_ESRCH:
- return "I40E_AQ_RC_ESRCH";
- case I40E_AQ_RC_EINTR:
- return "I40E_AQ_RC_EINTR";
- case I40E_AQ_RC_EIO:
- return "I40E_AQ_RC_EIO";
- case I40E_AQ_RC_ENXIO:
- return "I40E_AQ_RC_ENXIO";
- case I40E_AQ_RC_E2BIG:
- return "I40E_AQ_RC_E2BIG";
- case I40E_AQ_RC_EAGAIN:
- return "I40E_AQ_RC_EAGAIN";
- case I40E_AQ_RC_ENOMEM:
- return "I40E_AQ_RC_ENOMEM";
- case I40E_AQ_RC_EACCES:
- return "I40E_AQ_RC_EACCES";
- case I40E_AQ_RC_EFAULT:
- return "I40E_AQ_RC_EFAULT";
- case I40E_AQ_RC_EBUSY:
- return "I40E_AQ_RC_EBUSY";
- case I40E_AQ_RC_EEXIST:
- return "I40E_AQ_RC_EEXIST";
- case I40E_AQ_RC_EINVAL:
- return "I40E_AQ_RC_EINVAL";
- case I40E_AQ_RC_ENOTTY:
- return "I40E_AQ_RC_ENOTTY";
- case I40E_AQ_RC_ENOSPC:
- return "I40E_AQ_RC_ENOSPC";
- case I40E_AQ_RC_ENOSYS:
- return "I40E_AQ_RC_ENOSYS";
- case I40E_AQ_RC_ERANGE:
- return "I40E_AQ_RC_ERANGE";
- case I40E_AQ_RC_EFLUSHED:
- return "I40E_AQ_RC_EFLUSHED";
- case I40E_AQ_RC_BAD_ADDR:
- return "I40E_AQ_RC_BAD_ADDR";
- case I40E_AQ_RC_EMODE:
- return "I40E_AQ_RC_EMODE";
- case I40E_AQ_RC_EFBIG:
- return "I40E_AQ_RC_EFBIG";
+ case IAVF_AQ_RC_EPERM:
+ return "IAVF_AQ_RC_EPERM";
+ case IAVF_AQ_RC_ENOENT:
+ return "IAVF_AQ_RC_ENOENT";
+ case IAVF_AQ_RC_ESRCH:
+ return "IAVF_AQ_RC_ESRCH";
+ case IAVF_AQ_RC_EINTR:
+ return "IAVF_AQ_RC_EINTR";
+ case IAVF_AQ_RC_EIO:
+ return "IAVF_AQ_RC_EIO";
+ case IAVF_AQ_RC_ENXIO:
+ return "IAVF_AQ_RC_ENXIO";
+ case IAVF_AQ_RC_E2BIG:
+ return "IAVF_AQ_RC_E2BIG";
+ case IAVF_AQ_RC_EAGAIN:
+ return "IAVF_AQ_RC_EAGAIN";
+ case IAVF_AQ_RC_ENOMEM:
+ return "IAVF_AQ_RC_ENOMEM";
+ case IAVF_AQ_RC_EACCES:
+ return "IAVF_AQ_RC_EACCES";
+ case IAVF_AQ_RC_EFAULT:
+ return "IAVF_AQ_RC_EFAULT";
+ case IAVF_AQ_RC_EBUSY:
+ return "IAVF_AQ_RC_EBUSY";
+ case IAVF_AQ_RC_EEXIST:
+ return "IAVF_AQ_RC_EEXIST";
+ case IAVF_AQ_RC_EINVAL:
+ return "IAVF_AQ_RC_EINVAL";
+ case IAVF_AQ_RC_ENOTTY:
+ return "IAVF_AQ_RC_ENOTTY";
+ case IAVF_AQ_RC_ENOSPC:
+ return "IAVF_AQ_RC_ENOSPC";
+ case IAVF_AQ_RC_ENOSYS:
+ return "IAVF_AQ_RC_ENOSYS";
+ case IAVF_AQ_RC_ERANGE:
+ return "IAVF_AQ_RC_ERANGE";
+ case IAVF_AQ_RC_EFLUSHED:
+ return "IAVF_AQ_RC_EFLUSHED";
+ case IAVF_AQ_RC_BAD_ADDR:
+ return "IAVF_AQ_RC_BAD_ADDR";
+ case IAVF_AQ_RC_EMODE:
+ return "IAVF_AQ_RC_EMODE";
+ case IAVF_AQ_RC_EFBIG:
+ return "IAVF_AQ_RC_EFBIG";
}
snprintf(hw->err_str, sizeof(hw->err_str), "%d", aq_err);
@@ -104,143 +104,143 @@ const char *iavf_aq_str(struct iavf_hw *hw, enum i40e_admin_queue_err aq_err)
* @hw: pointer to the HW structure
* @stat_err: the status error code to convert
**/
-const char *iavf_stat_str(struct iavf_hw *hw, iavf_status stat_err)
+const char *iavf_stat_str(struct iavf_hw *hw, enum iavf_status stat_err)
{
switch (stat_err) {
case 0:
return "OK";
- case I40E_ERR_NVM:
- return "I40E_ERR_NVM";
- case I40E_ERR_NVM_CHECKSUM:
- return "I40E_ERR_NVM_CHECKSUM";
- case I40E_ERR_PHY:
- return "I40E_ERR_PHY";
- case I40E_ERR_CONFIG:
- return "I40E_ERR_CONFIG";
- case I40E_ERR_PARAM:
- return "I40E_ERR_PARAM";
- case I40E_ERR_MAC_TYPE:
- return "I40E_ERR_MAC_TYPE";
- case I40E_ERR_UNKNOWN_PHY:
- return "I40E_ERR_UNKNOWN_PHY";
- case I40E_ERR_LINK_SETUP:
- return "I40E_ERR_LINK_SETUP";
- case I40E_ERR_ADAPTER_STOPPED:
- return "I40E_ERR_ADAPTER_STOPPED";
- case I40E_ERR_INVALID_MAC_ADDR:
- return "I40E_ERR_INVALID_MAC_ADDR";
- case I40E_ERR_DEVICE_NOT_SUPPORTED:
- return "I40E_ERR_DEVICE_NOT_SUPPORTED";
- case I40E_ERR_MASTER_REQUESTS_PENDING:
- return "I40E_ERR_MASTER_REQUESTS_PENDING";
- case I40E_ERR_INVALID_LINK_SETTINGS:
- return "I40E_ERR_INVALID_LINK_SETTINGS";
- case I40E_ERR_AUTONEG_NOT_COMPLETE:
- return "I40E_ERR_AUTONEG_NOT_COMPLETE";
- case I40E_ERR_RESET_FAILED:
- return "I40E_ERR_RESET_FAILED";
- case I40E_ERR_SWFW_SYNC:
- return "I40E_ERR_SWFW_SYNC";
- case I40E_ERR_NO_AVAILABLE_VSI:
- return "I40E_ERR_NO_AVAILABLE_VSI";
- case I40E_ERR_NO_MEMORY:
- return "I40E_ERR_NO_MEMORY";
- case I40E_ERR_BAD_PTR:
- return "I40E_ERR_BAD_PTR";
- case I40E_ERR_RING_FULL:
- return "I40E_ERR_RING_FULL";
- case I40E_ERR_INVALID_PD_ID:
- return "I40E_ERR_INVALID_PD_ID";
- case I40E_ERR_INVALID_QP_ID:
- return "I40E_ERR_INVALID_QP_ID";
- case I40E_ERR_INVALID_CQ_ID:
- return "I40E_ERR_INVALID_CQ_ID";
- case I40E_ERR_INVALID_CEQ_ID:
- return "I40E_ERR_INVALID_CEQ_ID";
- case I40E_ERR_INVALID_AEQ_ID:
- return "I40E_ERR_INVALID_AEQ_ID";
- case I40E_ERR_INVALID_SIZE:
- return "I40E_ERR_INVALID_SIZE";
- case I40E_ERR_INVALID_ARP_INDEX:
- return "I40E_ERR_INVALID_ARP_INDEX";
- case I40E_ERR_INVALID_FPM_FUNC_ID:
- return "I40E_ERR_INVALID_FPM_FUNC_ID";
- case I40E_ERR_QP_INVALID_MSG_SIZE:
- return "I40E_ERR_QP_INVALID_MSG_SIZE";
- case I40E_ERR_QP_TOOMANY_WRS_POSTED:
- return "I40E_ERR_QP_TOOMANY_WRS_POSTED";
- case I40E_ERR_INVALID_FRAG_COUNT:
- return "I40E_ERR_INVALID_FRAG_COUNT";
- case I40E_ERR_QUEUE_EMPTY:
- return "I40E_ERR_QUEUE_EMPTY";
- case I40E_ERR_INVALID_ALIGNMENT:
- return "I40E_ERR_INVALID_ALIGNMENT";
- case I40E_ERR_FLUSHED_QUEUE:
- return "I40E_ERR_FLUSHED_QUEUE";
- case I40E_ERR_INVALID_PUSH_PAGE_INDEX:
- return "I40E_ERR_INVALID_PUSH_PAGE_INDEX";
- case I40E_ERR_INVALID_IMM_DATA_SIZE:
- return "I40E_ERR_INVALID_IMM_DATA_SIZE";
- case I40E_ERR_TIMEOUT:
- return "I40E_ERR_TIMEOUT";
- case I40E_ERR_OPCODE_MISMATCH:
- return "I40E_ERR_OPCODE_MISMATCH";
- case I40E_ERR_CQP_COMPL_ERROR:
- return "I40E_ERR_CQP_COMPL_ERROR";
- case I40E_ERR_INVALID_VF_ID:
- return "I40E_ERR_INVALID_VF_ID";
- case I40E_ERR_INVALID_HMCFN_ID:
- return "I40E_ERR_INVALID_HMCFN_ID";
- case I40E_ERR_BACKING_PAGE_ERROR:
- return "I40E_ERR_BACKING_PAGE_ERROR";
- case I40E_ERR_NO_PBLCHUNKS_AVAILABLE:
- return "I40E_ERR_NO_PBLCHUNKS_AVAILABLE";
- case I40E_ERR_INVALID_PBLE_INDEX:
- return "I40E_ERR_INVALID_PBLE_INDEX";
- case I40E_ERR_INVALID_SD_INDEX:
- return "I40E_ERR_INVALID_SD_INDEX";
- case I40E_ERR_INVALID_PAGE_DESC_INDEX:
- return "I40E_ERR_INVALID_PAGE_DESC_INDEX";
- case I40E_ERR_INVALID_SD_TYPE:
- return "I40E_ERR_INVALID_SD_TYPE";
- case I40E_ERR_MEMCPY_FAILED:
- return "I40E_ERR_MEMCPY_FAILED";
- case I40E_ERR_INVALID_HMC_OBJ_INDEX:
- return "I40E_ERR_INVALID_HMC_OBJ_INDEX";
- case I40E_ERR_INVALID_HMC_OBJ_COUNT:
- return "I40E_ERR_INVALID_HMC_OBJ_COUNT";
- case I40E_ERR_INVALID_SRQ_ARM_LIMIT:
- return "I40E_ERR_INVALID_SRQ_ARM_LIMIT";
- case I40E_ERR_SRQ_ENABLED:
- return "I40E_ERR_SRQ_ENABLED";
- case I40E_ERR_ADMIN_QUEUE_ERROR:
- return "I40E_ERR_ADMIN_QUEUE_ERROR";
- case I40E_ERR_ADMIN_QUEUE_TIMEOUT:
- return "I40E_ERR_ADMIN_QUEUE_TIMEOUT";
- case I40E_ERR_BUF_TOO_SHORT:
- return "I40E_ERR_BUF_TOO_SHORT";
- case I40E_ERR_ADMIN_QUEUE_FULL:
- return "I40E_ERR_ADMIN_QUEUE_FULL";
- case I40E_ERR_ADMIN_QUEUE_NO_WORK:
- return "I40E_ERR_ADMIN_QUEUE_NO_WORK";
- case I40E_ERR_BAD_IWARP_CQE:
- return "I40E_ERR_BAD_IWARP_CQE";
- case I40E_ERR_NVM_BLANK_MODE:
- return "I40E_ERR_NVM_BLANK_MODE";
- case I40E_ERR_NOT_IMPLEMENTED:
- return "I40E_ERR_NOT_IMPLEMENTED";
- case I40E_ERR_PE_DOORBELL_NOT_ENABLED:
- return "I40E_ERR_PE_DOORBELL_NOT_ENABLED";
- case I40E_ERR_DIAG_TEST_FAILED:
- return "I40E_ERR_DIAG_TEST_FAILED";
- case I40E_ERR_NOT_READY:
- return "I40E_ERR_NOT_READY";
- case I40E_NOT_SUPPORTED:
- return "I40E_NOT_SUPPORTED";
- case I40E_ERR_FIRMWARE_API_VERSION:
- return "I40E_ERR_FIRMWARE_API_VERSION";
- case I40E_ERR_ADMIN_QUEUE_CRITICAL_ERROR:
- return "I40E_ERR_ADMIN_QUEUE_CRITICAL_ERROR";
+ case IAVF_ERR_NVM:
+ return "IAVF_ERR_NVM";
+ case IAVF_ERR_NVM_CHECKSUM:
+ return "IAVF_ERR_NVM_CHECKSUM";
+ case IAVF_ERR_PHY:
+ return "IAVF_ERR_PHY";
+ case IAVF_ERR_CONFIG:
+ return "IAVF_ERR_CONFIG";
+ case IAVF_ERR_PARAM:
+ return "IAVF_ERR_PARAM";
+ case IAVF_ERR_MAC_TYPE:
+ return "IAVF_ERR_MAC_TYPE";
+ case IAVF_ERR_UNKNOWN_PHY:
+ return "IAVF_ERR_UNKNOWN_PHY";
+ case IAVF_ERR_LINK_SETUP:
+ return "IAVF_ERR_LINK_SETUP";
+ case IAVF_ERR_ADAPTER_STOPPED:
+ return "IAVF_ERR_ADAPTER_STOPPED";
+ case IAVF_ERR_INVALID_MAC_ADDR:
+ return "IAVF_ERR_INVALID_MAC_ADDR";
+ case IAVF_ERR_DEVICE_NOT_SUPPORTED:
+ return "IAVF_ERR_DEVICE_NOT_SUPPORTED";
+ case IAVF_ERR_MASTER_REQUESTS_PENDING:
+ return "IAVF_ERR_MASTER_REQUESTS_PENDING";
+ case IAVF_ERR_INVALID_LINK_SETTINGS:
+ return "IAVF_ERR_INVALID_LINK_SETTINGS";
+ case IAVF_ERR_AUTONEG_NOT_COMPLETE:
+ return "IAVF_ERR_AUTONEG_NOT_COMPLETE";
+ case IAVF_ERR_RESET_FAILED:
+ return "IAVF_ERR_RESET_FAILED";
+ case IAVF_ERR_SWFW_SYNC:
+ return "IAVF_ERR_SWFW_SYNC";
+ case IAVF_ERR_NO_AVAILABLE_VSI:
+ return "IAVF_ERR_NO_AVAILABLE_VSI";
+ case IAVF_ERR_NO_MEMORY:
+ return "IAVF_ERR_NO_MEMORY";
+ case IAVF_ERR_BAD_PTR:
+ return "IAVF_ERR_BAD_PTR";
+ case IAVF_ERR_RING_FULL:
+ return "IAVF_ERR_RING_FULL";
+ case IAVF_ERR_INVALID_PD_ID:
+ return "IAVF_ERR_INVALID_PD_ID";
+ case IAVF_ERR_INVALID_QP_ID:
+ return "IAVF_ERR_INVALID_QP_ID";
+ case IAVF_ERR_INVALID_CQ_ID:
+ return "IAVF_ERR_INVALID_CQ_ID";
+ case IAVF_ERR_INVALID_CEQ_ID:
+ return "IAVF_ERR_INVALID_CEQ_ID";
+ case IAVF_ERR_INVALID_AEQ_ID:
+ return "IAVF_ERR_INVALID_AEQ_ID";
+ case IAVF_ERR_INVALID_SIZE:
+ return "IAVF_ERR_INVALID_SIZE";
+ case IAVF_ERR_INVALID_ARP_INDEX:
+ return "IAVF_ERR_INVALID_ARP_INDEX";
+ case IAVF_ERR_INVALID_FPM_FUNC_ID:
+ return "IAVF_ERR_INVALID_FPM_FUNC_ID";
+ case IAVF_ERR_QP_INVALID_MSG_SIZE:
+ return "IAVF_ERR_QP_INVALID_MSG_SIZE";
+ case IAVF_ERR_QP_TOOMANY_WRS_POSTED:
+ return "IAVF_ERR_QP_TOOMANY_WRS_POSTED";
+ case IAVF_ERR_INVALID_FRAG_COUNT:
+ return "IAVF_ERR_INVALID_FRAG_COUNT";
+ case IAVF_ERR_QUEUE_EMPTY:
+ return "IAVF_ERR_QUEUE_EMPTY";
+ case IAVF_ERR_INVALID_ALIGNMENT:
+ return "IAVF_ERR_INVALID_ALIGNMENT";
+ case IAVF_ERR_FLUSHED_QUEUE:
+ return "IAVF_ERR_FLUSHED_QUEUE";
+ case IAVF_ERR_INVALID_PUSH_PAGE_INDEX:
+ return "IAVF_ERR_INVALID_PUSH_PAGE_INDEX";
+ case IAVF_ERR_INVALID_IMM_DATA_SIZE:
+ return "IAVF_ERR_INVALID_IMM_DATA_SIZE";
+ case IAVF_ERR_TIMEOUT:
+ return "IAVF_ERR_TIMEOUT";
+ case IAVF_ERR_OPCODE_MISMATCH:
+ return "IAVF_ERR_OPCODE_MISMATCH";
+ case IAVF_ERR_CQP_COMPL_ERROR:
+ return "IAVF_ERR_CQP_COMPL_ERROR";
+ case IAVF_ERR_INVALID_VF_ID:
+ return "IAVF_ERR_INVALID_VF_ID";
+ case IAVF_ERR_INVALID_HMCFN_ID:
+ return "IAVF_ERR_INVALID_HMCFN_ID";
+ case IAVF_ERR_BACKING_PAGE_ERROR:
+ return "IAVF_ERR_BACKING_PAGE_ERROR";
+ case IAVF_ERR_NO_PBLCHUNKS_AVAILABLE:
+ return "IAVF_ERR_NO_PBLCHUNKS_AVAILABLE";
+ case IAVF_ERR_INVALID_PBLE_INDEX:
+ return "IAVF_ERR_INVALID_PBLE_INDEX";
+ case IAVF_ERR_INVALID_SD_INDEX:
+ return "IAVF_ERR_INVALID_SD_INDEX";
+ case IAVF_ERR_INVALID_PAGE_DESC_INDEX:
+ return "IAVF_ERR_INVALID_PAGE_DESC_INDEX";
+ case IAVF_ERR_INVALID_SD_TYPE:
+ return "IAVF_ERR_INVALID_SD_TYPE";
+ case IAVF_ERR_MEMCPY_FAILED:
+ return "IAVF_ERR_MEMCPY_FAILED";
+ case IAVF_ERR_INVALID_HMC_OBJ_INDEX:
+ return "IAVF_ERR_INVALID_HMC_OBJ_INDEX";
+ case IAVF_ERR_INVALID_HMC_OBJ_COUNT:
+ return "IAVF_ERR_INVALID_HMC_OBJ_COUNT";
+ case IAVF_ERR_INVALID_SRQ_ARM_LIMIT:
+ return "IAVF_ERR_INVALID_SRQ_ARM_LIMIT";
+ case IAVF_ERR_SRQ_ENABLED:
+ return "IAVF_ERR_SRQ_ENABLED";
+ case IAVF_ERR_ADMIN_QUEUE_ERROR:
+ return "IAVF_ERR_ADMIN_QUEUE_ERROR";
+ case IAVF_ERR_ADMIN_QUEUE_TIMEOUT:
+ return "IAVF_ERR_ADMIN_QUEUE_TIMEOUT";
+ case IAVF_ERR_BUF_TOO_SHORT:
+ return "IAVF_ERR_BUF_TOO_SHORT";
+ case IAVF_ERR_ADMIN_QUEUE_FULL:
+ return "IAVF_ERR_ADMIN_QUEUE_FULL";
+ case IAVF_ERR_ADMIN_QUEUE_NO_WORK:
+ return "IAVF_ERR_ADMIN_QUEUE_NO_WORK";
+ case IAVF_ERR_BAD_IWARP_CQE:
+ return "IAVF_ERR_BAD_IWARP_CQE";
+ case IAVF_ERR_NVM_BLANK_MODE:
+ return "IAVF_ERR_NVM_BLANK_MODE";
+ case IAVF_ERR_NOT_IMPLEMENTED:
+ return "IAVF_ERR_NOT_IMPLEMENTED";
+ case IAVF_ERR_PE_DOORBELL_NOT_ENABLED:
+ return "IAVF_ERR_PE_DOORBELL_NOT_ENABLED";
+ case IAVF_ERR_DIAG_TEST_FAILED:
+ return "IAVF_ERR_DIAG_TEST_FAILED";
+ case IAVF_ERR_NOT_READY:
+ return "IAVF_ERR_NOT_READY";
+ case IAVF_NOT_SUPPORTED:
+ return "IAVF_NOT_SUPPORTED";
+ case IAVF_ERR_FIRMWARE_API_VERSION:
+ return "IAVF_ERR_FIRMWARE_API_VERSION";
+ case IAVF_ERR_ADMIN_QUEUE_CRITICAL_ERROR:
+ return "IAVF_ERR_ADMIN_QUEUE_CRITICAL_ERROR";
}
snprintf(hw->err_str, sizeof(hw->err_str), "%d", stat_err);
@@ -260,7 +260,7 @@ const char *iavf_stat_str(struct iavf_hw *hw, iavf_status stat_err)
void iavf_debug_aq(struct iavf_hw *hw, enum iavf_debug_mask mask, void *desc,
void *buffer, u16 buf_len)
{
- struct i40e_aq_desc *aq_desc = (struct i40e_aq_desc *)desc;
+ struct iavf_aq_desc *aq_desc = (struct iavf_aq_desc *)desc;
u8 *buf = (u8 *)buffer;
if ((!(mask & hw->debug_mask)) || !desc)
@@ -327,17 +327,17 @@ bool iavf_check_asq_alive(struct iavf_hw *hw)
* Tell the Firmware that we're shutting down the AdminQ and whether
* or not the driver is unloading as well.
**/
-iavf_status iavf_aq_queue_shutdown(struct iavf_hw *hw, bool unloading)
+enum iavf_status iavf_aq_queue_shutdown(struct iavf_hw *hw, bool unloading)
{
- struct i40e_aq_desc desc;
- struct i40e_aqc_queue_shutdown *cmd =
- (struct i40e_aqc_queue_shutdown *)&desc.params.raw;
- iavf_status status;
+ struct iavf_aq_desc desc;
+ struct iavf_aqc_queue_shutdown *cmd =
+ (struct iavf_aqc_queue_shutdown *)&desc.params.raw;
+ enum iavf_status status;
- iavf_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_queue_shutdown);
+ iavf_fill_default_direct_cmd_desc(&desc, iavf_aqc_opc_queue_shutdown);
if (unloading)
- cmd->driver_unloading = cpu_to_le32(I40E_AQ_DRIVER_UNLOADING);
+ cmd->driver_unloading = cpu_to_le32(IAVF_AQ_DRIVER_UNLOADING);
status = iavf_asq_send_command(hw, &desc, NULL, 0, NULL);
return status;
@@ -354,43 +354,43 @@ iavf_status iavf_aq_queue_shutdown(struct iavf_hw *hw, bool unloading)
*
* Internal function to get or set RSS look up table
**/
-static iavf_status iavf_aq_get_set_rss_lut(struct iavf_hw *hw,
- u16 vsi_id, bool pf_lut,
- u8 *lut, u16 lut_size,
- bool set)
+static enum iavf_status iavf_aq_get_set_rss_lut(struct iavf_hw *hw,
+ u16 vsi_id, bool pf_lut,
+ u8 *lut, u16 lut_size,
+ bool set)
{
- iavf_status status;
- struct i40e_aq_desc desc;
- struct i40e_aqc_get_set_rss_lut *cmd_resp =
- (struct i40e_aqc_get_set_rss_lut *)&desc.params.raw;
+ enum iavf_status status;
+ struct iavf_aq_desc desc;
+ struct iavf_aqc_get_set_rss_lut *cmd_resp =
+ (struct iavf_aqc_get_set_rss_lut *)&desc.params.raw;
if (set)
iavf_fill_default_direct_cmd_desc(&desc,
- i40e_aqc_opc_set_rss_lut);
+ iavf_aqc_opc_set_rss_lut);
else
iavf_fill_default_direct_cmd_desc(&desc,
- i40e_aqc_opc_get_rss_lut);
+ iavf_aqc_opc_get_rss_lut);
/* Indirect command */
- desc.flags |= cpu_to_le16((u16)I40E_AQ_FLAG_BUF);
- desc.flags |= cpu_to_le16((u16)I40E_AQ_FLAG_RD);
+ desc.flags |= cpu_to_le16((u16)IAVF_AQ_FLAG_BUF);
+ desc.flags |= cpu_to_le16((u16)IAVF_AQ_FLAG_RD);
cmd_resp->vsi_id =
cpu_to_le16((u16)((vsi_id <<
- I40E_AQC_SET_RSS_LUT_VSI_ID_SHIFT) &
- I40E_AQC_SET_RSS_LUT_VSI_ID_MASK));
- cmd_resp->vsi_id |= cpu_to_le16((u16)I40E_AQC_SET_RSS_LUT_VSI_VALID);
+ IAVF_AQC_SET_RSS_LUT_VSI_ID_SHIFT) &
+ IAVF_AQC_SET_RSS_LUT_VSI_ID_MASK));
+ cmd_resp->vsi_id |= cpu_to_le16((u16)IAVF_AQC_SET_RSS_LUT_VSI_VALID);
if (pf_lut)
cmd_resp->flags |= cpu_to_le16((u16)
- ((I40E_AQC_SET_RSS_LUT_TABLE_TYPE_PF <<
- I40E_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT) &
- I40E_AQC_SET_RSS_LUT_TABLE_TYPE_MASK));
+ ((IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_PF <<
+ IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT) &
+ IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_MASK));
else
cmd_resp->flags |= cpu_to_le16((u16)
- ((I40E_AQC_SET_RSS_LUT_TABLE_TYPE_VSI <<
- I40E_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT) &
- I40E_AQC_SET_RSS_LUT_TABLE_TYPE_MASK));
+ ((IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_VSI <<
+ IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT) &
+ IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_MASK));
status = iavf_asq_send_command(hw, &desc, lut, lut_size, NULL);
@@ -407,8 +407,8 @@ static iavf_status iavf_aq_get_set_rss_lut(struct iavf_hw *hw,
*
* get the RSS lookup table, PF or VSI type
**/
-iavf_status iavf_aq_get_rss_lut(struct iavf_hw *hw, u16 vsi_id,
- bool pf_lut, u8 *lut, u16 lut_size)
+enum iavf_status iavf_aq_get_rss_lut(struct iavf_hw *hw, u16 vsi_id,
+ bool pf_lut, u8 *lut, u16 lut_size)
{
return iavf_aq_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size,
false);
@@ -424,8 +424,8 @@ iavf_status iavf_aq_get_rss_lut(struct iavf_hw *hw, u16 vsi_id,
*
* set the RSS lookup table, PF or VSI type
**/
-iavf_status iavf_aq_set_rss_lut(struct iavf_hw *hw, u16 vsi_id,
- bool pf_lut, u8 *lut, u16 lut_size)
+enum iavf_status iavf_aq_set_rss_lut(struct iavf_hw *hw, u16 vsi_id,
+ bool pf_lut, u8 *lut, u16 lut_size)
{
return iavf_aq_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size, true);
}
@@ -439,33 +439,33 @@ iavf_status iavf_aq_set_rss_lut(struct iavf_hw *hw, u16 vsi_id,
*
* get the RSS key per VSI
**/
-static
+static enum
iavf_status iavf_aq_get_set_rss_key(struct iavf_hw *hw, u16 vsi_id,
- struct i40e_aqc_get_set_rss_key_data *key,
+ struct iavf_aqc_get_set_rss_key_data *key,
bool set)
{
- iavf_status status;
- struct i40e_aq_desc desc;
- struct i40e_aqc_get_set_rss_key *cmd_resp =
- (struct i40e_aqc_get_set_rss_key *)&desc.params.raw;
- u16 key_size = sizeof(struct i40e_aqc_get_set_rss_key_data);
+ enum iavf_status status;
+ struct iavf_aq_desc desc;
+ struct iavf_aqc_get_set_rss_key *cmd_resp =
+ (struct iavf_aqc_get_set_rss_key *)&desc.params.raw;
+ u16 key_size = sizeof(struct iavf_aqc_get_set_rss_key_data);
if (set)
iavf_fill_default_direct_cmd_desc(&desc,
- i40e_aqc_opc_set_rss_key);
+ iavf_aqc_opc_set_rss_key);
else
iavf_fill_default_direct_cmd_desc(&desc,
- i40e_aqc_opc_get_rss_key);
+ iavf_aqc_opc_get_rss_key);
/* Indirect command */
- desc.flags |= cpu_to_le16((u16)I40E_AQ_FLAG_BUF);
- desc.flags |= cpu_to_le16((u16)I40E_AQ_FLAG_RD);
+ desc.flags |= cpu_to_le16((u16)IAVF_AQ_FLAG_BUF);
+ desc.flags |= cpu_to_le16((u16)IAVF_AQ_FLAG_RD);
cmd_resp->vsi_id =
cpu_to_le16((u16)((vsi_id <<
- I40E_AQC_SET_RSS_KEY_VSI_ID_SHIFT) &
- I40E_AQC_SET_RSS_KEY_VSI_ID_MASK));
- cmd_resp->vsi_id |= cpu_to_le16((u16)I40E_AQC_SET_RSS_KEY_VSI_VALID);
+ IAVF_AQC_SET_RSS_KEY_VSI_ID_SHIFT) &
+ IAVF_AQC_SET_RSS_KEY_VSI_ID_MASK));
+ cmd_resp->vsi_id |= cpu_to_le16((u16)IAVF_AQC_SET_RSS_KEY_VSI_VALID);
status = iavf_asq_send_command(hw, &desc, key, key_size, NULL);
@@ -479,8 +479,8 @@ iavf_status iavf_aq_get_set_rss_key(struct iavf_hw *hw, u16 vsi_id,
* @key: pointer to key info struct
*
**/
-iavf_status iavf_aq_get_rss_key(struct iavf_hw *hw, u16 vsi_id,
- struct i40e_aqc_get_set_rss_key_data *key)
+enum iavf_status iavf_aq_get_rss_key(struct iavf_hw *hw, u16 vsi_id,
+ struct iavf_aqc_get_set_rss_key_data *key)
{
return iavf_aq_get_set_rss_key(hw, vsi_id, key, false);
}
@@ -493,8 +493,8 @@ iavf_status iavf_aq_get_rss_key(struct iavf_hw *hw, u16 vsi_id,
*
* set the RSS key per VSI
**/
-iavf_status iavf_aq_set_rss_key(struct iavf_hw *hw, u16 vsi_id,
- struct i40e_aqc_get_set_rss_key_data *key)
+enum iavf_status iavf_aq_set_rss_key(struct iavf_hw *hw, u16 vsi_id,
+ struct iavf_aqc_get_set_rss_key_data *key)
{
return iavf_aq_get_set_rss_key(hw, vsi_id, key, true);
}
@@ -515,7 +515,7 @@ iavf_status iavf_aq_set_rss_key(struct iavf_hw *hw, u16 vsi_id,
* IF NOT iavf_ptype_lookup[ptype].known
* THEN
* Packet is unknown
- * ELSE IF iavf_ptype_lookup[ptype].outer_ip == I40E_RX_PTYPE_OUTER_IP
+ * ELSE IF iavf_ptype_lookup[ptype].outer_ip == IAVF_RX_PTYPE_OUTER_IP
* Use the rest of the fields to look at the tunnels, inner protocols, etc
* ELSE
* Use the enum iavf_rx_l2_ptype to decode the packet type
@@ -877,24 +877,25 @@ struct iavf_rx_ptype_decoded iavf_ptype_lookup[] = {
* is sent asynchronously, i.e. iavf_asq_send_command() does not wait for
* completion before returning.
**/
-iavf_status iavf_aq_send_msg_to_pf(struct iavf_hw *hw,
- enum virtchnl_ops v_opcode,
- iavf_status v_retval, u8 *msg, u16 msglen,
- struct i40e_asq_cmd_details *cmd_details)
+enum iavf_status iavf_aq_send_msg_to_pf(struct iavf_hw *hw,
+ enum virtchnl_ops v_opcode,
+ enum iavf_status v_retval,
+ u8 *msg, u16 msglen,
+ struct iavf_asq_cmd_details *cmd_details)
{
- struct i40e_asq_cmd_details details;
- struct i40e_aq_desc desc;
- iavf_status status;
+ struct iavf_asq_cmd_details details;
+ struct iavf_aq_desc desc;
+ enum iavf_status status;
- iavf_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_send_msg_to_pf);
- desc.flags |= cpu_to_le16((u16)I40E_AQ_FLAG_SI);
+ iavf_fill_default_direct_cmd_desc(&desc, iavf_aqc_opc_send_msg_to_pf);
+ desc.flags |= cpu_to_le16((u16)IAVF_AQ_FLAG_SI);
desc.cookie_high = cpu_to_le32(v_opcode);
desc.cookie_low = cpu_to_le32(v_retval);
if (msglen) {
- desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF
- | I40E_AQ_FLAG_RD));
- if (msglen > I40E_AQ_LARGE_BUF)
- desc.flags |= cpu_to_le16((u16)I40E_AQ_FLAG_LB);
+ desc.flags |= cpu_to_le16((u16)(IAVF_AQ_FLAG_BUF
+ | IAVF_AQ_FLAG_RD));
+ if (msglen > IAVF_AQ_LARGE_BUF)
+ desc.flags |= cpu_to_le16((u16)IAVF_AQ_FLAG_LB);
desc.datalen = cpu_to_le16(msglen);
}
if (!cmd_details) {
@@ -948,7 +949,7 @@ void iavf_vf_parse_hw_config(struct iavf_hw *hw,
* as none will be forthcoming. Immediately after calling this function,
* the admin queue should be shut down and (optionally) reinitialized.
**/
-iavf_status iavf_vf_reset(struct iavf_hw *hw)
+enum iavf_status iavf_vf_reset(struct iavf_hw *hw)
{
return iavf_aq_send_msg_to_pf(hw, VIRTCHNL_OP_RESET_VF,
0, NULL, 0, NULL);
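
The conversions above keep the established error-reporting helpers, now spelled iavf_stat_str() and iavf_aq_str(), which decode an enum iavf_status and the admin queue return code respectively. A short, hypothetical caller shows the usual reporting idiom; only the helper and send functions are taken from the code above, and the sketch assumes the driver's own headers are included.

/* Sketch only: reporting an enum iavf_status together with the last
 * admin queue return code.  demo_send() and its arguments are made up.
 */
static void demo_send(struct iavf_adapter *adapter, u8 *msg, u16 len)
{
	struct iavf_hw *hw = &adapter->hw;
	enum iavf_status status;

	status = iavf_aq_send_msg_to_pf(hw, VIRTCHNL_OP_IWARP,
					IAVF_SUCCESS, msg, len, NULL);
	if (status)
		dev_err(&adapter->pdev->dev,
			"send failed, err %s aq_err %s\n",
			iavf_stat_str(hw, status),
			iavf_aq_str(hw, hw->aq.asq_last_status));
}
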
diff --git a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
index 9f87304109fe..dad3eec8ccd8 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
@@ -280,10 +280,10 @@ static int iavf_get_link_ksettings(struct net_device *netdev,
cmd->base.port = PORT_NONE;
/* Set speed and duplex */
switch (adapter->link_speed) {
- case I40E_LINK_SPEED_40GB:
+ case IAVF_LINK_SPEED_40GB:
cmd->base.speed = SPEED_40000;
break;
- case I40E_LINK_SPEED_25GB:
+ case IAVF_LINK_SPEED_25GB:
#ifdef SPEED_25000
cmd->base.speed = SPEED_25000;
#else
@@ -291,16 +291,16 @@ static int iavf_get_link_ksettings(struct net_device *netdev,
"Speed is 25G, display not supported by this version of ethtool.\n");
#endif
break;
- case I40E_LINK_SPEED_20GB:
+ case IAVF_LINK_SPEED_20GB:
cmd->base.speed = SPEED_20000;
break;
- case I40E_LINK_SPEED_10GB:
+ case IAVF_LINK_SPEED_10GB:
cmd->base.speed = SPEED_10000;
break;
- case I40E_LINK_SPEED_1GB:
+ case IAVF_LINK_SPEED_1GB:
cmd->base.speed = SPEED_1000;
break;
- case I40E_LINK_SPEED_100MB:
+ case IAVF_LINK_SPEED_100MB:
cmd->base.speed = SPEED_100;
break;
default:
@@ -510,7 +510,7 @@ static int iavf_set_priv_flags(struct net_device *netdev, u32 flags)
if (changed_flags & IAVF_FLAG_LEGACY_RX) {
if (netif_running(netdev)) {
adapter->flags |= IAVF_FLAG_RESET_NEEDED;
- schedule_work(&adapter->reset_task);
+ queue_work(iavf_wq, &adapter->reset_task);
}
}
@@ -622,7 +622,7 @@ static int iavf_set_ringparam(struct net_device *netdev,
if (netif_running(netdev)) {
adapter->flags |= IAVF_FLAG_RESET_NEEDED;
- schedule_work(&adapter->reset_task);
+ queue_work(iavf_wq, &adapter->reset_task);
}
return 0;
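
The ethtool hunks above move reset scheduling from schedule_work(), which uses the shared system workqueue, onto the driver's dedicated iavf_wq via queue_work(). A small sketch of that general pattern follows; the "demo" names and the workqueue flags are illustrative and not necessarily those the driver uses.

/* Sketch only: queueing driver work on a dedicated workqueue. */
#include <linux/workqueue.h>

static struct workqueue_struct *demo_wq;

static void demo_reset_task(struct work_struct *work)
{
	/* reset handling would run here, off the system workqueue */
}

static DECLARE_WORK(demo_work, demo_reset_task);

static int demo_wq_init(void)
{
	demo_wq = alloc_workqueue("demo_wq", WQ_UNBOUND | WQ_MEM_RECLAIM, 1);
	if (!demo_wq)
		return -ENOMEM;

	/* equivalent of queue_work(iavf_wq, &adapter->reset_task) above */
	queue_work(demo_wq, &demo_work);
	return 0;
}

static void demo_wq_exit(void)
{
	/* flushes pending work before freeing the workqueue */
	destroy_workqueue(demo_wq);
}
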
diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
index 4569d69a2b55..9d2b50964a08 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
@@ -14,6 +14,8 @@
static int iavf_setup_all_tx_resources(struct iavf_adapter *adapter);
static int iavf_setup_all_rx_resources(struct iavf_adapter *adapter);
static int iavf_close(struct net_device *netdev);
+static int iavf_init_get_resources(struct iavf_adapter *adapter);
+static int iavf_check_reset_complete(struct iavf_hw *hw);
char iavf_driver_name[] = "iavf";
static const char iavf_driver_string[] =
@@ -57,7 +59,8 @@ MODULE_DESCRIPTION("Intel(R) Ethernet Adaptive Virtual Function Network Driver")
MODULE_LICENSE("GPL v2");
MODULE_VERSION(DRV_VERSION);
-static struct workqueue_struct *iavf_wq;
+static const struct net_device_ops iavf_netdev_ops;
+struct workqueue_struct *iavf_wq;
/**
* iavf_allocate_dma_mem_d - OS specific memory alloc for shared code
@@ -66,14 +69,14 @@ static struct workqueue_struct *iavf_wq;
* @size: size of memory requested
* @alignment: what to align the allocation to
**/
-iavf_status iavf_allocate_dma_mem_d(struct iavf_hw *hw,
- struct iavf_dma_mem *mem,
- u64 size, u32 alignment)
+enum iavf_status iavf_allocate_dma_mem_d(struct iavf_hw *hw,
+ struct iavf_dma_mem *mem,
+ u64 size, u32 alignment)
{
struct iavf_adapter *adapter = (struct iavf_adapter *)hw->back;
if (!mem)
- return I40E_ERR_PARAM;
+ return IAVF_ERR_PARAM;
mem->size = ALIGN(size, alignment);
mem->va = dma_alloc_coherent(&adapter->pdev->dev, mem->size,
@@ -81,7 +84,7 @@ iavf_status iavf_allocate_dma_mem_d(struct iavf_hw *hw,
if (mem->va)
return 0;
else
- return I40E_ERR_NO_MEMORY;
+ return IAVF_ERR_NO_MEMORY;
}
/**
@@ -89,12 +92,13 @@ iavf_status iavf_allocate_dma_mem_d(struct iavf_hw *hw,
* @hw: pointer to the HW structure
* @mem: ptr to mem struct to free
**/
-iavf_status iavf_free_dma_mem_d(struct iavf_hw *hw, struct iavf_dma_mem *mem)
+enum iavf_status iavf_free_dma_mem_d(struct iavf_hw *hw,
+ struct iavf_dma_mem *mem)
{
struct iavf_adapter *adapter = (struct iavf_adapter *)hw->back;
if (!mem || !mem->va)
- return I40E_ERR_PARAM;
+ return IAVF_ERR_PARAM;
dma_free_coherent(&adapter->pdev->dev, mem->size,
mem->va, (dma_addr_t)mem->pa);
return 0;
@@ -106,11 +110,11 @@ iavf_status iavf_free_dma_mem_d(struct iavf_hw *hw, struct iavf_dma_mem *mem)
* @mem: ptr to mem struct to fill out
* @size: size of memory requested
**/
-iavf_status iavf_allocate_virt_mem_d(struct iavf_hw *hw,
- struct iavf_virt_mem *mem, u32 size)
+enum iavf_status iavf_allocate_virt_mem_d(struct iavf_hw *hw,
+ struct iavf_virt_mem *mem, u32 size)
{
if (!mem)
- return I40E_ERR_PARAM;
+ return IAVF_ERR_PARAM;
mem->size = size;
mem->va = kzalloc(size, GFP_KERNEL);
@@ -118,7 +122,7 @@ iavf_status iavf_allocate_virt_mem_d(struct iavf_hw *hw,
if (mem->va)
return 0;
else
- return I40E_ERR_NO_MEMORY;
+ return IAVF_ERR_NO_MEMORY;
}
/**
@@ -126,10 +130,11 @@ iavf_status iavf_allocate_virt_mem_d(struct iavf_hw *hw,
* @hw: pointer to the HW structure
* @mem: ptr to mem struct to free
**/
-iavf_status iavf_free_virt_mem_d(struct iavf_hw *hw, struct iavf_virt_mem *mem)
+enum iavf_status iavf_free_virt_mem_d(struct iavf_hw *hw,
+ struct iavf_virt_mem *mem)
{
if (!mem)
- return I40E_ERR_PARAM;
+ return IAVF_ERR_PARAM;
/* it's ok to kfree a NULL pointer */
kfree(mem->va);
@@ -168,7 +173,7 @@ void iavf_schedule_reset(struct iavf_adapter *adapter)
if (!(adapter->flags &
(IAVF_FLAG_RESET_PENDING | IAVF_FLAG_RESET_NEEDED))) {
adapter->flags |= IAVF_FLAG_RESET_NEEDED;
- schedule_work(&adapter->reset_task);
+ queue_work(iavf_wq, &adapter->reset_task);
}
}
@@ -287,7 +292,7 @@ static irqreturn_t iavf_msix_aq(int irq, void *data)
rd32(hw, IAVF_VFINT_ICR0_ENA1);
/* schedule work on the private workqueue */
- schedule_work(&adapter->adminq_task);
+ queue_work(iavf_wq, &adapter->adminq_task);
return IRQ_HANDLED;
}
@@ -657,14 +662,13 @@ iavf_vlan_filter *iavf_add_vlan(struct iavf_adapter *adapter, u16 vlan)
f = iavf_find_vlan(adapter, vlan);
if (!f) {
- f = kzalloc(sizeof(*f), GFP_KERNEL);
+ f = kzalloc(sizeof(*f), GFP_ATOMIC);
if (!f)
goto clearout;
f->vlan = vlan;
- INIT_LIST_HEAD(&f->list);
- list_add(&f->list, &adapter->vlan_filter_list);
+ list_add_tail(&f->list, &adapter->vlan_filter_list);
f->add = true;
adapter->aq_required |= IAVF_FLAG_AQ_ADD_VLAN_FILTER;
}
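
In the iavf_add_vlan() hunk just above, the filter allocation switches to GFP_ATOMIC and the new entry is appended with list_add_tail(). The filter list is manipulated under a spinlock elsewhere in the driver, and allocations made while a spinlock is held must not sleep, which rules out GFP_KERNEL there. A generic, hypothetical sketch of that rule:

/* Sketch only: allocating a list entry while a spinlock is held.
 * All names here are made up; only the GFP_ATOMIC/list_add_tail()
 * pattern mirrors the hunk above.
 */
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct demo_filter {
	struct list_head list;
	u16 vlan;
};

static DEFINE_SPINLOCK(demo_lock);
static LIST_HEAD(demo_filters);

static int demo_add_filter(u16 vlan)
{
	struct demo_filter *f;
	int ret = 0;

	spin_lock_bh(&demo_lock);
	f = kzalloc(sizeof(*f), GFP_ATOMIC);	/* must not sleep here */
	if (!f) {
		ret = -ENOMEM;
		goto out;
	}
	f->vlan = vlan;
	list_add_tail(&f->list, &demo_filters);	/* keep insertion order */
out:
	spin_unlock_bh(&demo_lock);
	return ret;
}
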
@@ -979,7 +983,7 @@ static void iavf_up_complete(struct iavf_adapter *adapter)
adapter->aq_required |= IAVF_FLAG_AQ_ENABLE_QUEUES;
if (CLIENT_ENABLED(adapter))
adapter->flags |= IAVF_FLAG_CLIENT_NEEDS_OPEN;
- mod_timer_pending(&adapter->watchdog_timer, jiffies + 1);
+ mod_delayed_work(iavf_wq, &adapter->watchdog_task, 0);
}
/**
@@ -1043,7 +1047,7 @@ void iavf_down(struct iavf_adapter *adapter)
adapter->aq_required |= IAVF_FLAG_AQ_DISABLE_QUEUES;
}
- mod_timer_pending(&adapter->watchdog_timer, jiffies + 1);
+ mod_delayed_work(iavf_wq, &adapter->watchdog_task, 0);
}
/**
@@ -1227,8 +1231,8 @@ out:
**/
static int iavf_config_rss_aq(struct iavf_adapter *adapter)
{
- struct i40e_aqc_get_set_rss_key_data *rss_key =
- (struct i40e_aqc_get_set_rss_key_data *)adapter->rss_key;
+ struct iavf_aqc_get_set_rss_key_data *rss_key =
+ (struct iavf_aqc_get_set_rss_key_data *)adapter->rss_key;
struct iavf_hw *hw = &adapter->hw;
int ret = 0;
@@ -1532,136 +1536,66 @@ err:
}
/**
- * iavf_watchdog_timer - Periodic call-back timer
- * @data: pointer to adapter disguised as unsigned long
- **/
-static void iavf_watchdog_timer(struct timer_list *t)
-{
- struct iavf_adapter *adapter = from_timer(adapter, t,
- watchdog_timer);
-
- schedule_work(&adapter->watchdog_task);
- /* timer will be rescheduled in watchdog task */
-}
-
-/**
- * iavf_watchdog_task - Periodic call-back task
- * @work: pointer to work_struct
+ * iavf_process_aq_command - process aq_required flags
+ * and send the corresponding AQ command
+ * @adapter: pointer to iavf adapter structure
+ *
+ * Returns 0 on success
+ * Returns an error code if no command was sent
+ * or if the command failed.
**/
-static void iavf_watchdog_task(struct work_struct *work)
+static int iavf_process_aq_command(struct iavf_adapter *adapter)
{
- struct iavf_adapter *adapter = container_of(work,
- struct iavf_adapter,
- watchdog_task);
- struct iavf_hw *hw = &adapter->hw;
- u32 reg_val;
-
- if (test_and_set_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section))
- goto restart_watchdog;
-
- if (adapter->flags & IAVF_FLAG_PF_COMMS_FAILED) {
- reg_val = rd32(hw, IAVF_VFGEN_RSTAT) &
- IAVF_VFGEN_RSTAT_VFR_STATE_MASK;
- if ((reg_val == VIRTCHNL_VFR_VFACTIVE) ||
- (reg_val == VIRTCHNL_VFR_COMPLETED)) {
- /* A chance for redemption! */
- dev_err(&adapter->pdev->dev, "Hardware came out of reset. Attempting reinit.\n");
- adapter->state = __IAVF_STARTUP;
- adapter->flags &= ~IAVF_FLAG_PF_COMMS_FAILED;
- schedule_delayed_work(&adapter->init_task, 10);
- clear_bit(__IAVF_IN_CRITICAL_TASK,
- &adapter->crit_section);
- /* Don't reschedule the watchdog, since we've restarted
- * the init task. When init_task contacts the PF and
- * gets everything set up again, it'll restart the
- * watchdog for us. Down, boy. Sit. Stay. Woof.
- */
- return;
- }
- adapter->aq_required = 0;
- adapter->current_op = VIRTCHNL_OP_UNKNOWN;
- goto watchdog_done;
- }
-
- if ((adapter->state < __IAVF_DOWN) ||
- (adapter->flags & IAVF_FLAG_RESET_PENDING))
- goto watchdog_done;
-
- /* check for reset */
- reg_val = rd32(hw, IAVF_VF_ARQLEN1) & IAVF_VF_ARQLEN1_ARQENABLE_MASK;
- if (!(adapter->flags & IAVF_FLAG_RESET_PENDING) && !reg_val) {
- adapter->state = __IAVF_RESETTING;
- adapter->flags |= IAVF_FLAG_RESET_PENDING;
- dev_err(&adapter->pdev->dev, "Hardware reset detected\n");
- schedule_work(&adapter->reset_task);
- adapter->aq_required = 0;
- adapter->current_op = VIRTCHNL_OP_UNKNOWN;
- goto watchdog_done;
- }
-
- /* Process admin queue tasks. After init, everything gets done
- * here so we don't race on the admin queue.
- */
- if (adapter->current_op) {
- if (!iavf_asq_done(hw)) {
- dev_dbg(&adapter->pdev->dev, "Admin queue timeout\n");
- iavf_send_api_ver(adapter);
- }
- goto watchdog_done;
- }
- if (adapter->aq_required & IAVF_FLAG_AQ_GET_CONFIG) {
- iavf_send_vf_config_msg(adapter);
- goto watchdog_done;
- }
-
+ if (adapter->aq_required & IAVF_FLAG_AQ_GET_CONFIG)
+ return iavf_send_vf_config_msg(adapter);
if (adapter->aq_required & IAVF_FLAG_AQ_DISABLE_QUEUES) {
iavf_disable_queues(adapter);
- goto watchdog_done;
+ return 0;
}
if (adapter->aq_required & IAVF_FLAG_AQ_MAP_VECTORS) {
iavf_map_queues(adapter);
- goto watchdog_done;
+ return 0;
}
if (adapter->aq_required & IAVF_FLAG_AQ_ADD_MAC_FILTER) {
iavf_add_ether_addrs(adapter);
- goto watchdog_done;
+ return 0;
}
if (adapter->aq_required & IAVF_FLAG_AQ_ADD_VLAN_FILTER) {
iavf_add_vlans(adapter);
- goto watchdog_done;
+ return 0;
}
if (adapter->aq_required & IAVF_FLAG_AQ_DEL_MAC_FILTER) {
iavf_del_ether_addrs(adapter);
- goto watchdog_done;
+ return 0;
}
if (adapter->aq_required & IAVF_FLAG_AQ_DEL_VLAN_FILTER) {
iavf_del_vlans(adapter);
- goto watchdog_done;
+ return 0;
}
if (adapter->aq_required & IAVF_FLAG_AQ_ENABLE_VLAN_STRIPPING) {
iavf_enable_vlan_stripping(adapter);
- goto watchdog_done;
+ return 0;
}
if (adapter->aq_required & IAVF_FLAG_AQ_DISABLE_VLAN_STRIPPING) {
iavf_disable_vlan_stripping(adapter);
- goto watchdog_done;
+ return 0;
}
if (adapter->aq_required & IAVF_FLAG_AQ_CONFIGURE_QUEUES) {
iavf_configure_queues(adapter);
- goto watchdog_done;
+ return 0;
}
if (adapter->aq_required & IAVF_FLAG_AQ_ENABLE_QUEUES) {
iavf_enable_queues(adapter);
- goto watchdog_done;
+ return 0;
}
if (adapter->aq_required & IAVF_FLAG_AQ_CONFIGURE_RSS) {
@@ -1669,81 +1603,414 @@ static void iavf_watchdog_task(struct work_struct *work)
* PF, so we don't have to set current_op as we will
* not get a response through the ARQ.
*/
- iavf_init_rss(adapter);
adapter->aq_required &= ~IAVF_FLAG_AQ_CONFIGURE_RSS;
- goto watchdog_done;
+ return 0;
}
if (adapter->aq_required & IAVF_FLAG_AQ_GET_HENA) {
iavf_get_hena(adapter);
- goto watchdog_done;
+ return 0;
}
if (adapter->aq_required & IAVF_FLAG_AQ_SET_HENA) {
iavf_set_hena(adapter);
- goto watchdog_done;
+ return 0;
}
if (adapter->aq_required & IAVF_FLAG_AQ_SET_RSS_KEY) {
iavf_set_rss_key(adapter);
- goto watchdog_done;
+ return 0;
}
if (adapter->aq_required & IAVF_FLAG_AQ_SET_RSS_LUT) {
iavf_set_rss_lut(adapter);
- goto watchdog_done;
+ return 0;
}
if (adapter->aq_required & IAVF_FLAG_AQ_REQUEST_PROMISC) {
iavf_set_promiscuous(adapter, FLAG_VF_UNICAST_PROMISC |
FLAG_VF_MULTICAST_PROMISC);
- goto watchdog_done;
+ return 0;
}
if (adapter->aq_required & IAVF_FLAG_AQ_REQUEST_ALLMULTI) {
iavf_set_promiscuous(adapter, FLAG_VF_MULTICAST_PROMISC);
- goto watchdog_done;
+ return 0;
}
if ((adapter->aq_required & IAVF_FLAG_AQ_RELEASE_PROMISC) &&
(adapter->aq_required & IAVF_FLAG_AQ_RELEASE_ALLMULTI)) {
iavf_set_promiscuous(adapter, 0);
- goto watchdog_done;
+ return 0;
}
if (adapter->aq_required & IAVF_FLAG_AQ_ENABLE_CHANNELS) {
iavf_enable_channels(adapter);
- goto watchdog_done;
+ return 0;
}
if (adapter->aq_required & IAVF_FLAG_AQ_DISABLE_CHANNELS) {
iavf_disable_channels(adapter);
- goto watchdog_done;
+ return 0;
}
-
if (adapter->aq_required & IAVF_FLAG_AQ_ADD_CLOUD_FILTER) {
iavf_add_cloud_filter(adapter);
- goto watchdog_done;
+ return 0;
}
if (adapter->aq_required & IAVF_FLAG_AQ_DEL_CLOUD_FILTER) {
iavf_del_cloud_filter(adapter);
+ return 0;
+ }
+ if (adapter->aq_required & IAVF_FLAG_AQ_DEL_CLOUD_FILTER) {
+ iavf_del_cloud_filter(adapter);
+ return 0;
+ }
+ if (adapter->aq_required & IAVF_FLAG_AQ_ADD_CLOUD_FILTER) {
+ iavf_add_cloud_filter(adapter);
+ return 0;
+ }
+ return -EAGAIN;
+}
+
+/**
+ * iavf_startup - first step of driver startup
+ * @adapter: board private structure
+ *
+ * Function processes the __IAVF_STARTUP driver state.
+ * On success the state is changed to __IAVF_INIT_VERSION_CHECK;
+ * on failure it returns -EAGAIN.
+ **/
+static int iavf_startup(struct iavf_adapter *adapter)
+{
+ struct pci_dev *pdev = adapter->pdev;
+ struct iavf_hw *hw = &adapter->hw;
+ int err;
+
+ WARN_ON(adapter->state != __IAVF_STARTUP);
+
+ /* driver loaded, probe complete */
+ adapter->flags &= ~IAVF_FLAG_PF_COMMS_FAILED;
+ adapter->flags &= ~IAVF_FLAG_RESET_PENDING;
+ err = iavf_set_mac_type(hw);
+ if (err) {
+ dev_err(&pdev->dev, "Failed to set MAC type (%d)\n", err);
+ goto err;
+ }
+
+ err = iavf_check_reset_complete(hw);
+ if (err) {
+ dev_info(&pdev->dev, "Device is still in reset (%d), retrying\n",
+ err);
+ goto err;
+ }
+ hw->aq.num_arq_entries = IAVF_AQ_LEN;
+ hw->aq.num_asq_entries = IAVF_AQ_LEN;
+ hw->aq.arq_buf_size = IAVF_MAX_AQ_BUF_SIZE;
+ hw->aq.asq_buf_size = IAVF_MAX_AQ_BUF_SIZE;
+
+ err = iavf_init_adminq(hw);
+ if (err) {
+ dev_err(&pdev->dev, "Failed to init Admin Queue (%d)\n", err);
+ goto err;
+ }
+ err = iavf_send_api_ver(adapter);
+ if (err) {
+ dev_err(&pdev->dev, "Unable to send to PF (%d)\n", err);
+ iavf_shutdown_adminq(hw);
+ goto err;
+ }
+ adapter->state = __IAVF_INIT_VERSION_CHECK;
+err:
+ return err;
+}
+
+/**
+ * iavf_init_version_check - second step of driver startup
+ * @adapter: board private structure
+ *
+ * Function processes the __IAVF_INIT_VERSION_CHECK driver state.
+ * On success the state is changed to __IAVF_INIT_GET_RESOURCES;
+ * on failure it returns -EAGAIN.
+ **/
+static int iavf_init_version_check(struct iavf_adapter *adapter)
+{
+ struct pci_dev *pdev = adapter->pdev;
+ struct iavf_hw *hw = &adapter->hw;
+ int err = -EAGAIN;
+
+ WARN_ON(adapter->state != __IAVF_INIT_VERSION_CHECK);
+
+ if (!iavf_asq_done(hw)) {
+ dev_err(&pdev->dev, "Admin queue command never completed\n");
+ iavf_shutdown_adminq(hw);
+ adapter->state = __IAVF_STARTUP;
+ goto err;
+ }
+
+ /* aq msg sent, awaiting reply */
+ err = iavf_verify_api_ver(adapter);
+ if (err) {
+ if (err == IAVF_ERR_ADMIN_QUEUE_NO_WORK)
+ err = iavf_send_api_ver(adapter);
+ else
+ dev_err(&pdev->dev, "Unsupported PF API version %d.%d, expected %d.%d\n",
+ adapter->pf_version.major,
+ adapter->pf_version.minor,
+ VIRTCHNL_VERSION_MAJOR,
+ VIRTCHNL_VERSION_MINOR);
+ goto err;
+ }
+ err = iavf_send_vf_config_msg(adapter);
+ if (err) {
+ dev_err(&pdev->dev, "Unable to send config request (%d)\n",
+ err);
+ goto err;
+ }
+ adapter->state = __IAVF_INIT_GET_RESOURCES;
+
+err:
+ return err;
+}
+
+/**
+ * iavf_init_get_resources - third step of driver startup
+ * @adapter: board private structure
+ *
+ * Function processes the __IAVF_INIT_GET_RESOURCES driver state and
+ * finishes the driver initialization procedure.
+ * On success the state is changed to __IAVF_DOWN;
+ * on failure it returns -EAGAIN.
+ **/
+static int iavf_init_get_resources(struct iavf_adapter *adapter)
+{
+ struct net_device *netdev = adapter->netdev;
+ struct pci_dev *pdev = adapter->pdev;
+ struct iavf_hw *hw = &adapter->hw;
+ int err = 0, bufsz;
+
+ WARN_ON(adapter->state != __IAVF_INIT_GET_RESOURCES);
+ /* aq msg sent, awaiting reply */
+ if (!adapter->vf_res) {
+ bufsz = sizeof(struct virtchnl_vf_resource) +
+ (IAVF_MAX_VF_VSI *
+ sizeof(struct virtchnl_vsi_resource));
+ adapter->vf_res = kzalloc(bufsz, GFP_KERNEL);
+ if (!adapter->vf_res)
+ goto err;
+ }
+ err = iavf_get_vf_config(adapter);
+ if (err == IAVF_ERR_ADMIN_QUEUE_NO_WORK) {
+ err = iavf_send_vf_config_msg(adapter);
+ goto err;
+ } else if (err == IAVF_ERR_PARAM) {
+ /* We only get ERR_PARAM if the device is in a very bad
+ * state or if we've been disabled for previous bad
+ * behavior. Either way, we're done now.
+ */
+ iavf_shutdown_adminq(hw);
+ dev_err(&pdev->dev, "Unable to get VF config due to PF error condition, not retrying\n");
+ return 0;
+ }
+ if (err) {
+ dev_err(&pdev->dev, "Unable to get VF config (%d)\n", err);
+ goto err_alloc;
+ }
+
+ if (iavf_process_config(adapter))
+ goto err_alloc;
+ adapter->current_op = VIRTCHNL_OP_UNKNOWN;
+
+ adapter->flags |= IAVF_FLAG_RX_CSUM_ENABLED;
+
+ netdev->netdev_ops = &iavf_netdev_ops;
+ iavf_set_ethtool_ops(netdev);
+ netdev->watchdog_timeo = 5 * HZ;
+
+ /* MTU range: 68 - 9710 */
+ netdev->min_mtu = ETH_MIN_MTU;
+ netdev->max_mtu = IAVF_MAX_RXBUFFER - IAVF_PACKET_HDR_PAD;
+
+ if (!is_valid_ether_addr(adapter->hw.mac.addr)) {
+ dev_info(&pdev->dev, "Invalid MAC address %pM, using random\n",
+ adapter->hw.mac.addr);
+ eth_hw_addr_random(netdev);
+ ether_addr_copy(adapter->hw.mac.addr, netdev->dev_addr);
+ } else {
+ adapter->flags |= IAVF_FLAG_ADDR_SET_BY_PF;
+ ether_addr_copy(netdev->dev_addr, adapter->hw.mac.addr);
+ ether_addr_copy(netdev->perm_addr, adapter->hw.mac.addr);
+ }
+
+ adapter->tx_desc_count = IAVF_DEFAULT_TXD;
+ adapter->rx_desc_count = IAVF_DEFAULT_RXD;
+ err = iavf_init_interrupt_scheme(adapter);
+ if (err)
+ goto err_sw_init;
+ iavf_map_rings_to_vectors(adapter);
+ if (adapter->vf_res->vf_cap_flags &
+ VIRTCHNL_VF_OFFLOAD_WB_ON_ITR)
+ adapter->flags |= IAVF_FLAG_WB_ON_ITR_CAPABLE;
+
+ err = iavf_request_misc_irq(adapter);
+ if (err)
+ goto err_sw_init;
+
+ netif_carrier_off(netdev);
+ adapter->link_up = false;
+
+ /* set the semaphore to prevent any callbacks after device registration
+ * up to time when state of driver will be set to __IAVF_DOWN
+ */
+ rtnl_lock();
+ if (!adapter->netdev_registered) {
+ err = register_netdevice(netdev);
+ if (err) {
+ rtnl_unlock();
+ goto err_register;
+ }
+ }
+
+ adapter->netdev_registered = true;
+
+ netif_tx_stop_all_queues(netdev);
+ if (CLIENT_ALLOWED(adapter)) {
+ err = iavf_lan_add_device(adapter);
+ if (err) {
+ rtnl_unlock();
+ dev_info(&pdev->dev, "Failed to add VF to client API service list: %d\n",
+ err);
+ }
+ }
+ dev_info(&pdev->dev, "MAC address: %pM\n", adapter->hw.mac.addr);
+ if (netdev->features & NETIF_F_GRO)
+ dev_info(&pdev->dev, "GRO is enabled\n");
+
+ adapter->state = __IAVF_DOWN;
+ set_bit(__IAVF_VSI_DOWN, adapter->vsi.state);
+ rtnl_unlock();
+
+ iavf_misc_irq_enable(adapter);
+ wake_up(&adapter->down_waitqueue);
+
+ adapter->rss_key = kzalloc(adapter->rss_key_size, GFP_KERNEL);
+ adapter->rss_lut = kzalloc(adapter->rss_lut_size, GFP_KERNEL);
+ if (!adapter->rss_key || !adapter->rss_lut)
+ goto err_mem;
+ if (RSS_AQ(adapter))
+ adapter->aq_required |= IAVF_FLAG_AQ_CONFIGURE_RSS;
+ else
+ iavf_init_rss(adapter);
+
+ return err;
+err_mem:
+ iavf_free_rss(adapter);
+err_register:
+ iavf_free_misc_irq(adapter);
+err_sw_init:
+ iavf_reset_interrupt_capability(adapter);
+err_alloc:
+ kfree(adapter->vf_res);
+ adapter->vf_res = NULL;
+err:
+ return err;
+}
+
+/**
+ * iavf_watchdog_task - Periodic call-back task
+ * @work: pointer to work_struct
+ **/
+static void iavf_watchdog_task(struct work_struct *work)
+{
+ struct iavf_adapter *adapter = container_of(work,
+ struct iavf_adapter,
+ watchdog_task.work);
+ struct iavf_hw *hw = &adapter->hw;
+ u32 reg_val;
+
+ if (test_and_set_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section))
+ goto restart_watchdog;
+
+ if (adapter->flags & IAVF_FLAG_PF_COMMS_FAILED)
+ adapter->state = __IAVF_COMM_FAILED;
+
+ switch (adapter->state) {
+ case __IAVF_COMM_FAILED:
+ reg_val = rd32(hw, IAVF_VFGEN_RSTAT) &
+ IAVF_VFGEN_RSTAT_VFR_STATE_MASK;
+ if (reg_val == VIRTCHNL_VFR_VFACTIVE ||
+ reg_val == VIRTCHNL_VFR_COMPLETED) {
+ /* A chance for redemption! */
+ dev_err(&adapter->pdev->dev,
+ "Hardware came out of reset. Attempting reinit.\n");
+ adapter->state = __IAVF_STARTUP;
+ adapter->flags &= ~IAVF_FLAG_PF_COMMS_FAILED;
+ queue_delayed_work(iavf_wq, &adapter->init_task, 10);
+ clear_bit(__IAVF_IN_CRITICAL_TASK,
+ &adapter->crit_section);
+ /* Don't reschedule the watchdog, since we've restarted
+ * the init task. When init_task contacts the PF and
+ * gets everything set up again, it'll restart the
+ * watchdog for us. Down, boy. Sit. Stay. Woof.
+ */
+ return;
+ }
+ adapter->aq_required = 0;
+ adapter->current_op = VIRTCHNL_OP_UNKNOWN;
+ clear_bit(__IAVF_IN_CRITICAL_TASK,
+ &adapter->crit_section);
+ queue_delayed_work(iavf_wq,
+ &adapter->watchdog_task,
+ msecs_to_jiffies(10));
goto watchdog_done;
+ case __IAVF_RESETTING:
+ clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section);
+ queue_delayed_work(iavf_wq, &adapter->watchdog_task, HZ * 2);
+ return;
+ case __IAVF_DOWN:
+ case __IAVF_DOWN_PENDING:
+ case __IAVF_TESTING:
+ case __IAVF_RUNNING:
+ if (adapter->current_op) {
+ if (!iavf_asq_done(hw)) {
+ dev_dbg(&adapter->pdev->dev,
+ "Admin queue timeout\n");
+ iavf_send_api_ver(adapter);
+ }
+ } else {
+ if (!iavf_process_aq_command(adapter) &&
+ adapter->state == __IAVF_RUNNING)
+ iavf_request_stats(adapter);
+ }
+ break;
+ case __IAVF_REMOVE:
+ clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section);
+ return;
+ default:
+ goto restart_watchdog;
}
- schedule_delayed_work(&adapter->client_task, msecs_to_jiffies(5));
+ /* check for hw reset */
+ reg_val = rd32(hw, IAVF_VF_ARQLEN1) & IAVF_VF_ARQLEN1_ARQENABLE_MASK;
+ if (!reg_val) {
+ adapter->state = __IAVF_RESETTING;
+ adapter->flags |= IAVF_FLAG_RESET_PENDING;
+ adapter->aq_required = 0;
+ adapter->current_op = VIRTCHNL_OP_UNKNOWN;
+ dev_err(&adapter->pdev->dev, "Hardware reset detected\n");
+ queue_work(iavf_wq, &adapter->reset_task);
+ goto watchdog_done;
+ }
- if (adapter->state == __IAVF_RUNNING)
- iavf_request_stats(adapter);
+ schedule_delayed_work(&adapter->client_task, msecs_to_jiffies(5));
watchdog_done:
- if (adapter->state == __IAVF_RUNNING)
+ if (adapter->state == __IAVF_RUNNING ||
+ adapter->state == __IAVF_COMM_FAILED)
iavf_detect_recover_hung(&adapter->vsi);
clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section);
restart_watchdog:
- if (adapter->state == __IAVF_REMOVE)
- return;
if (adapter->aq_required)
- mod_timer(&adapter->watchdog_timer,
- jiffies + msecs_to_jiffies(20));
+ queue_delayed_work(iavf_wq, &adapter->watchdog_task,
+ msecs_to_jiffies(20));
else
- mod_timer(&adapter->watchdog_timer, jiffies + (HZ * 2));
- schedule_work(&adapter->adminq_task);
+ queue_delayed_work(iavf_wq, &adapter->watchdog_task, HZ * 2);
+ queue_work(iavf_wq, &adapter->adminq_task);
}
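
The watchdog rework above (together with the iavf_probe/iavf_remove hunks further down) replaces the driver's watchdog timer with a delayed work item queued on the dedicated iavf_wq workqueue. Below is a minimal sketch of that timer-to-delayed-work pattern; my_dev, my_wq and my_poll are hypothetical names, only the workqueue API calls are real.

#include <linux/errno.h>
#include <linux/jiffies.h>
#include <linux/kernel.h>
#include <linux/workqueue.h>

struct my_dev {
	struct delayed_work poll_work;
};

static struct workqueue_struct *my_wq;

static void my_poll(struct work_struct *work)
{
	struct my_dev *dev = container_of(work, struct my_dev, poll_work.work);

	/* ... periodic housekeeping ... */

	/* self re-arm; this replaces mod_timer() in the old code */
	queue_delayed_work(my_wq, &dev->poll_work, msecs_to_jiffies(20));
}

static int my_start(struct my_dev *dev)
{
	my_wq = alloc_workqueue("my_wq", WQ_MEM_RECLAIM, 0);
	if (!my_wq)
		return -ENOMEM;
	INIT_DELAYED_WORK(&dev->poll_work, my_poll);
	queue_delayed_work(my_wq, &dev->poll_work, HZ);
	return 0;
}

static void my_stop(struct my_dev *dev)
{
	/* waits for an in-flight callback, like del_timer_sync() did */
	cancel_delayed_work_sync(&dev->poll_work);
	destroy_workqueue(my_wq);
}

The practical difference from a timer is that the callback runs in process context and cancel_delayed_work_sync() waits for a running callback to finish, which is why the del_timer_sync() call in iavf_remove() becomes cancel_delayed_work_sync() later in this patch.
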
static void iavf_disable_vf(struct iavf_adapter *adapter)
@@ -1967,7 +2234,7 @@ continue_reset:
adapter->aq_required |= IAVF_FLAG_AQ_ADD_CLOUD_FILTER;
iavf_misc_irq_enable(adapter);
- mod_timer(&adapter->watchdog_timer, jiffies + 2);
+ mod_delayed_work(iavf_wq, &adapter->watchdog_task, 2);
/* We were running when the reset started, so we need to restore some
* state here.
@@ -2020,9 +2287,9 @@ static void iavf_adminq_task(struct work_struct *work)
struct iavf_adapter *adapter =
container_of(work, struct iavf_adapter, adminq_task);
struct iavf_hw *hw = &adapter->hw;
- struct i40e_arq_event_info event;
+ struct iavf_arq_event_info event;
enum virtchnl_ops v_op;
- iavf_status ret, v_ret;
+ enum iavf_status ret, v_ret;
u32 val, oldval;
u16 pending;
@@ -2037,7 +2304,7 @@ static void iavf_adminq_task(struct work_struct *work)
do {
ret = iavf_clean_arq_element(hw, &event, &pending);
v_op = (enum virtchnl_ops)le32_to_cpu(event.desc.cookie_high);
- v_ret = (iavf_status)le32_to_cpu(event.desc.cookie_low);
+ v_ret = (enum iavf_status)le32_to_cpu(event.desc.cookie_low);
if (ret || !v_op)
break; /* No event to process or error cleaning ARQ */
@@ -2239,22 +2506,22 @@ static int iavf_validate_tx_bandwidth(struct iavf_adapter *adapter,
int speed = 0, ret = 0;
switch (adapter->link_speed) {
- case I40E_LINK_SPEED_40GB:
+ case IAVF_LINK_SPEED_40GB:
speed = 40000;
break;
- case I40E_LINK_SPEED_25GB:
+ case IAVF_LINK_SPEED_25GB:
speed = 25000;
break;
- case I40E_LINK_SPEED_20GB:
+ case IAVF_LINK_SPEED_20GB:
speed = 20000;
break;
- case I40E_LINK_SPEED_10GB:
+ case IAVF_LINK_SPEED_10GB:
speed = 10000;
break;
- case I40E_LINK_SPEED_1GB:
+ case IAVF_LINK_SPEED_1GB:
speed = 1000;
break;
- case I40E_LINK_SPEED_100MB:
+ case IAVF_LINK_SPEED_100MB:
speed = 100;
break;
default:
@@ -2432,14 +2699,14 @@ exit:
/**
* iavf_parse_cls_flower - Parse tc flower filters provided by kernel
* @adapter: board private structure
- * @cls_flower: pointer to struct tc_cls_flower_offload
+ * @cls_flower: pointer to struct flow_cls_offload
* @filter: pointer to cloud filter structure
*/
static int iavf_parse_cls_flower(struct iavf_adapter *adapter,
- struct tc_cls_flower_offload *f,
+ struct flow_cls_offload *f,
struct iavf_cloud_filter *filter)
{
- struct flow_rule *rule = tc_cls_flower_offload_flow_rule(f);
+ struct flow_rule *rule = flow_cls_offload_flow_rule(f);
struct flow_dissector *dissector = rule->match.dissector;
u16 n_proto_mask = 0;
u16 n_proto_key = 0;
@@ -2508,7 +2775,7 @@ static int iavf_parse_cls_flower(struct iavf_adapter *adapter,
} else {
dev_err(&adapter->pdev->dev, "Bad ether dest mask %pM\n",
match.mask->dst);
- return I40E_ERR_CONFIG;
+ return IAVF_ERR_CONFIG;
}
}
@@ -2518,7 +2785,7 @@ static int iavf_parse_cls_flower(struct iavf_adapter *adapter,
} else {
dev_err(&adapter->pdev->dev, "Bad ether src mask %pM\n",
match.mask->src);
- return I40E_ERR_CONFIG;
+ return IAVF_ERR_CONFIG;
}
}
@@ -2553,7 +2820,7 @@ static int iavf_parse_cls_flower(struct iavf_adapter *adapter,
} else {
dev_err(&adapter->pdev->dev, "Bad vlan mask %u\n",
match.mask->vlan_id);
- return I40E_ERR_CONFIG;
+ return IAVF_ERR_CONFIG;
}
}
vf->mask.tcp_spec.vlan_id |= cpu_to_be16(0xffff);
@@ -2577,7 +2844,7 @@ static int iavf_parse_cls_flower(struct iavf_adapter *adapter,
} else {
dev_err(&adapter->pdev->dev, "Bad ip dst mask 0x%08x\n",
be32_to_cpu(match.mask->dst));
- return I40E_ERR_CONFIG;
+ return IAVF_ERR_CONFIG;
}
}
@@ -2587,13 +2854,13 @@ static int iavf_parse_cls_flower(struct iavf_adapter *adapter,
} else {
dev_err(&adapter->pdev->dev, "Bad ip src mask 0x%08x\n",
be32_to_cpu(match.mask->dst));
- return I40E_ERR_CONFIG;
+ return IAVF_ERR_CONFIG;
}
}
if (field_flags & IAVF_CLOUD_FIELD_TEN_ID) {
dev_info(&adapter->pdev->dev, "Tenant id not allowed for ip filter\n");
- return I40E_ERR_CONFIG;
+ return IAVF_ERR_CONFIG;
}
if (match.key->dst) {
vf->mask.tcp_spec.dst_ip[0] |= cpu_to_be32(0xffffffff);
@@ -2614,7 +2881,7 @@ static int iavf_parse_cls_flower(struct iavf_adapter *adapter,
if (ipv6_addr_any(&match.mask->dst)) {
dev_err(&adapter->pdev->dev, "Bad ipv6 dst mask 0x%02x\n",
IPV6_ADDR_ANY);
- return I40E_ERR_CONFIG;
+ return IAVF_ERR_CONFIG;
}
/* src and dest IPv6 address should not be LOOPBACK
@@ -2624,7 +2891,7 @@ static int iavf_parse_cls_flower(struct iavf_adapter *adapter,
ipv6_addr_loopback(&match.key->src)) {
dev_err(&adapter->pdev->dev,
"ipv6 addr should not be loopback\n");
- return I40E_ERR_CONFIG;
+ return IAVF_ERR_CONFIG;
}
if (!ipv6_addr_any(&match.mask->dst) ||
!ipv6_addr_any(&match.mask->src))
@@ -2649,7 +2916,7 @@ static int iavf_parse_cls_flower(struct iavf_adapter *adapter,
} else {
dev_err(&adapter->pdev->dev, "Bad src port mask %u\n",
be16_to_cpu(match.mask->src));
- return I40E_ERR_CONFIG;
+ return IAVF_ERR_CONFIG;
}
}
@@ -2659,7 +2926,7 @@ static int iavf_parse_cls_flower(struct iavf_adapter *adapter,
} else {
dev_err(&adapter->pdev->dev, "Bad dst port mask %u\n",
be16_to_cpu(match.mask->dst));
- return I40E_ERR_CONFIG;
+ return IAVF_ERR_CONFIG;
}
}
if (match.key->dst) {
@@ -2704,10 +2971,10 @@ static int iavf_handle_tclass(struct iavf_adapter *adapter, u32 tc,
/**
* iavf_configure_clsflower - Add tc flower filters
* @adapter: board private structure
- * @cls_flower: Pointer to struct tc_cls_flower_offload
+ * @cls_flower: Pointer to struct flow_cls_offload
*/
static int iavf_configure_clsflower(struct iavf_adapter *adapter,
- struct tc_cls_flower_offload *cls_flower)
+ struct flow_cls_offload *cls_flower)
{
int tc = tc_classid_to_hwtc(adapter->netdev, cls_flower->classid);
struct iavf_cloud_filter *filter = NULL;
@@ -2783,10 +3050,10 @@ static struct iavf_cloud_filter *iavf_find_cf(struct iavf_adapter *adapter,
/**
* iavf_delete_clsflower - Remove tc flower filters
* @adapter: board private structure
- * @cls_flower: Pointer to struct tc_cls_flower_offload
+ * @cls_flower: Pointer to struct flow_cls_offload
*/
static int iavf_delete_clsflower(struct iavf_adapter *adapter,
- struct tc_cls_flower_offload *cls_flower)
+ struct flow_cls_offload *cls_flower)
{
struct iavf_cloud_filter *filter = NULL;
int err = 0;
@@ -2810,17 +3077,17 @@ static int iavf_delete_clsflower(struct iavf_adapter *adapter,
* @type_data: offload data
*/
static int iavf_setup_tc_cls_flower(struct iavf_adapter *adapter,
- struct tc_cls_flower_offload *cls_flower)
+ struct flow_cls_offload *cls_flower)
{
if (cls_flower->common.chain_index)
return -EOPNOTSUPP;
switch (cls_flower->command) {
- case TC_CLSFLOWER_REPLACE:
+ case FLOW_CLS_REPLACE:
return iavf_configure_clsflower(adapter, cls_flower);
- case TC_CLSFLOWER_DESTROY:
+ case FLOW_CLS_DESTROY:
return iavf_delete_clsflower(adapter, cls_flower);
- case TC_CLSFLOWER_STATS:
+ case FLOW_CLS_STATS:
return -EOPNOTSUPP;
default:
return -EOPNOTSUPP;
@@ -2846,34 +3113,7 @@ static int iavf_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
}
}
-/**
- * iavf_setup_tc_block - register callbacks for tc
- * @netdev: network interface device structure
- * @f: tc offload data
- *
- * This function registers block callbacks for tc
- * offloads
- **/
-static int iavf_setup_tc_block(struct net_device *dev,
- struct tc_block_offload *f)
-{
- struct iavf_adapter *adapter = netdev_priv(dev);
-
- if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
- return -EOPNOTSUPP;
-
- switch (f->command) {
- case TC_BLOCK_BIND:
- return tcf_block_cb_register(f->block, iavf_setup_tc_block_cb,
- adapter, adapter, f->extack);
- case TC_BLOCK_UNBIND:
- tcf_block_cb_unregister(f->block, iavf_setup_tc_block_cb,
- adapter);
- return 0;
- default:
- return -EOPNOTSUPP;
- }
-}
+static LIST_HEAD(iavf_block_cb_list);
/**
* iavf_setup_tc - configure multiple traffic classes
@@ -2889,11 +3129,16 @@ static int iavf_setup_tc_block(struct net_device *dev,
static int iavf_setup_tc(struct net_device *netdev, enum tc_setup_type type,
void *type_data)
{
+ struct iavf_adapter *adapter = netdev_priv(netdev);
+
switch (type) {
case TC_SETUP_QDISC_MQPRIO:
return __iavf_setup_tc(netdev, type_data);
case TC_SETUP_BLOCK:
- return iavf_setup_tc_block(netdev, type_data);
+ return flow_block_cb_setup_simple(type_data,
+ &iavf_block_cb_list,
+ iavf_setup_tc_block_cb,
+ adapter, adapter, true);
default:
return -EOPNOTSUPP;
}
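
With iavf_setup_tc_block() removed, TC_SETUP_BLOCK is now funnelled through flow_block_cb_setup_simple() with a driver-owned callback list, as the hunk shows. A stripped-down sketch of the same wiring follows; my_priv, my_block_cb and my_ndo_setup_tc are hypothetical, while the helper and its argument order (cb, cb_ident, cb_priv, ingress_only) are taken from the hunk itself.

#include <linux/netdevice.h>
#include <net/flow_offload.h>
#include <net/pkt_cls.h>

struct my_priv {
	struct net_device *netdev;
};

static LIST_HEAD(my_block_cb_list);

/* filter programming elided; a real driver would parse the flow rule here */
static int my_setup_cls_flower(struct my_priv *priv,
			       struct flow_cls_offload *f)
{
	return -EOPNOTSUPP;
}

static int my_block_cb(enum tc_setup_type type, void *type_data, void *cb_priv)
{
	struct my_priv *priv = cb_priv;

	switch (type) {
	case TC_SETUP_CLSFLOWER:
		return my_setup_cls_flower(priv, type_data);
	default:
		return -EOPNOTSUPP;
	}
}

static int my_ndo_setup_tc(struct net_device *dev, enum tc_setup_type type,
			   void *type_data)
{
	struct my_priv *priv = netdev_priv(dev);

	if (type != TC_SETUP_BLOCK)
		return -EOPNOTSUPP;

	/* final 'true' restricts the binding to ingress, as in the hunk above */
	return flow_block_cb_setup_simple(type_data, &my_block_cb_list,
					  my_block_cb, priv, priv, true);
}
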
@@ -2908,7 +3153,7 @@ static int iavf_setup_tc(struct net_device *netdev, enum tc_setup_type type,
* The open entry point is called when a network interface is made
* active by the system (IFF_UP). At this point all resources needed
* for transmit and receive operations are allocated, the interrupt
- * handler is registered with the OS, the watchdog timer is started,
+ * handler is registered with the OS, the watchdog is started,
* and the stack is notified that the interface is ready.
**/
static int iavf_open(struct net_device *netdev)
@@ -3020,7 +3265,7 @@ static int iavf_close(struct net_device *netdev)
status = wait_event_timeout(adapter->down_waitqueue,
adapter->state == __IAVF_DOWN,
- msecs_to_jiffies(200));
+ msecs_to_jiffies(500));
if (!status)
netdev_warn(netdev, "Device resources not yet released\n");
return 0;
@@ -3043,7 +3288,7 @@ static int iavf_change_mtu(struct net_device *netdev, int new_mtu)
adapter->flags |= IAVF_FLAG_SERVICE_CLIENT_REQUESTED;
}
adapter->flags |= IAVF_FLAG_RESET_NEEDED;
- schedule_work(&adapter->reset_task);
+ queue_work(iavf_wq, &adapter->reset_task);
return 0;
}
@@ -3348,217 +3593,41 @@ int iavf_process_config(struct iavf_adapter *adapter)
static void iavf_init_task(struct work_struct *work)
{
struct iavf_adapter *adapter = container_of(work,
- struct iavf_adapter,
- init_task.work);
- struct net_device *netdev = adapter->netdev;
+ struct iavf_adapter,
+ init_task.work);
struct iavf_hw *hw = &adapter->hw;
- struct pci_dev *pdev = adapter->pdev;
- int err, bufsz;
switch (adapter->state) {
case __IAVF_STARTUP:
- /* driver loaded, probe complete */
- adapter->flags &= ~IAVF_FLAG_PF_COMMS_FAILED;
- adapter->flags &= ~IAVF_FLAG_RESET_PENDING;
- err = iavf_set_mac_type(hw);
- if (err) {
- dev_err(&pdev->dev, "Failed to set MAC type (%d)\n",
- err);
- goto err;
- }
- err = iavf_check_reset_complete(hw);
- if (err) {
- dev_info(&pdev->dev, "Device is still in reset (%d), retrying\n",
- err);
- goto err;
- }
- hw->aq.num_arq_entries = IAVF_AQ_LEN;
- hw->aq.num_asq_entries = IAVF_AQ_LEN;
- hw->aq.arq_buf_size = IAVF_MAX_AQ_BUF_SIZE;
- hw->aq.asq_buf_size = IAVF_MAX_AQ_BUF_SIZE;
-
- err = iavf_init_adminq(hw);
- if (err) {
- dev_err(&pdev->dev, "Failed to init Admin Queue (%d)\n",
- err);
- goto err;
- }
- err = iavf_send_api_ver(adapter);
- if (err) {
- dev_err(&pdev->dev, "Unable to send to PF (%d)\n", err);
- iavf_shutdown_adminq(hw);
- goto err;
- }
- adapter->state = __IAVF_INIT_VERSION_CHECK;
- goto restart;
+ if (iavf_startup(adapter) < 0)
+ goto init_failed;
+ break;
case __IAVF_INIT_VERSION_CHECK:
- if (!iavf_asq_done(hw)) {
- dev_err(&pdev->dev, "Admin queue command never completed\n");
- iavf_shutdown_adminq(hw);
- adapter->state = __IAVF_STARTUP;
- goto err;
- }
-
- /* aq msg sent, awaiting reply */
- err = iavf_verify_api_ver(adapter);
- if (err) {
- if (err == I40E_ERR_ADMIN_QUEUE_NO_WORK)
- err = iavf_send_api_ver(adapter);
- else
- dev_err(&pdev->dev, "Unsupported PF API version %d.%d, expected %d.%d\n",
- adapter->pf_version.major,
- adapter->pf_version.minor,
- VIRTCHNL_VERSION_MAJOR,
- VIRTCHNL_VERSION_MINOR);
- goto err;
- }
- err = iavf_send_vf_config_msg(adapter);
- if (err) {
- dev_err(&pdev->dev, "Unable to send config request (%d)\n",
- err);
- goto err;
- }
- adapter->state = __IAVF_INIT_GET_RESOURCES;
- goto restart;
- case __IAVF_INIT_GET_RESOURCES:
- /* aq msg sent, awaiting reply */
- if (!adapter->vf_res) {
- bufsz = sizeof(struct virtchnl_vf_resource) +
- (IAVF_MAX_VF_VSI *
- sizeof(struct virtchnl_vsi_resource));
- adapter->vf_res = kzalloc(bufsz, GFP_KERNEL);
- if (!adapter->vf_res)
- goto err;
- }
- err = iavf_get_vf_config(adapter);
- if (err == I40E_ERR_ADMIN_QUEUE_NO_WORK) {
- err = iavf_send_vf_config_msg(adapter);
- goto err;
- } else if (err == I40E_ERR_PARAM) {
- /* We only get ERR_PARAM if the device is in a very bad
- * state or if we've been disabled for previous bad
- * behavior. Either way, we're done now.
- */
- iavf_shutdown_adminq(hw);
- dev_err(&pdev->dev, "Unable to get VF config due to PF error condition, not retrying\n");
- return;
- }
- if (err) {
- dev_err(&pdev->dev, "Unable to get VF config (%d)\n",
- err);
- goto err_alloc;
- }
- adapter->state = __IAVF_INIT_SW;
+ if (iavf_init_version_check(adapter) < 0)
+ goto init_failed;
break;
+ case __IAVF_INIT_GET_RESOURCES:
+ if (iavf_init_get_resources(adapter) < 0)
+ goto init_failed;
+ return;
default:
- goto err_alloc;
- }
-
- if (iavf_process_config(adapter))
- goto err_alloc;
- adapter->current_op = VIRTCHNL_OP_UNKNOWN;
-
- adapter->flags |= IAVF_FLAG_RX_CSUM_ENABLED;
-
- netdev->netdev_ops = &iavf_netdev_ops;
- iavf_set_ethtool_ops(netdev);
- netdev->watchdog_timeo = 5 * HZ;
-
- /* MTU range: 68 - 9710 */
- netdev->min_mtu = ETH_MIN_MTU;
- netdev->max_mtu = IAVF_MAX_RXBUFFER - IAVF_PACKET_HDR_PAD;
-
- if (!is_valid_ether_addr(adapter->hw.mac.addr)) {
- dev_info(&pdev->dev, "Invalid MAC address %pM, using random\n",
- adapter->hw.mac.addr);
- eth_hw_addr_random(netdev);
- ether_addr_copy(adapter->hw.mac.addr, netdev->dev_addr);
- } else {
- adapter->flags |= IAVF_FLAG_ADDR_SET_BY_PF;
- ether_addr_copy(netdev->dev_addr, adapter->hw.mac.addr);
- ether_addr_copy(netdev->perm_addr, adapter->hw.mac.addr);
- }
-
- timer_setup(&adapter->watchdog_timer, iavf_watchdog_timer, 0);
- mod_timer(&adapter->watchdog_timer, jiffies + 1);
-
- adapter->tx_desc_count = IAVF_DEFAULT_TXD;
- adapter->rx_desc_count = IAVF_DEFAULT_RXD;
- err = iavf_init_interrupt_scheme(adapter);
- if (err)
- goto err_sw_init;
- iavf_map_rings_to_vectors(adapter);
- if (adapter->vf_res->vf_cap_flags &
- VIRTCHNL_VF_OFFLOAD_WB_ON_ITR)
- adapter->flags |= IAVF_FLAG_WB_ON_ITR_CAPABLE;
-
- err = iavf_request_misc_irq(adapter);
- if (err)
- goto err_sw_init;
-
- netif_carrier_off(netdev);
- adapter->link_up = false;
-
- if (!adapter->netdev_registered) {
- err = register_netdev(netdev);
- if (err)
- goto err_register;
- }
-
- adapter->netdev_registered = true;
-
- netif_tx_stop_all_queues(netdev);
- if (CLIENT_ALLOWED(adapter)) {
- err = iavf_lan_add_device(adapter);
- if (err)
- dev_info(&pdev->dev, "Failed to add VF to client API service list: %d\n",
- err);
+ goto init_failed;
}
- dev_info(&pdev->dev, "MAC address: %pM\n", adapter->hw.mac.addr);
- if (netdev->features & NETIF_F_GRO)
- dev_info(&pdev->dev, "GRO is enabled\n");
-
- adapter->state = __IAVF_DOWN;
- set_bit(__IAVF_VSI_DOWN, adapter->vsi.state);
- iavf_misc_irq_enable(adapter);
- wake_up(&adapter->down_waitqueue);
-
- adapter->rss_key = kzalloc(adapter->rss_key_size, GFP_KERNEL);
- adapter->rss_lut = kzalloc(adapter->rss_lut_size, GFP_KERNEL);
- if (!adapter->rss_key || !adapter->rss_lut)
- goto err_mem;
-
- if (RSS_AQ(adapter)) {
- adapter->aq_required |= IAVF_FLAG_AQ_CONFIGURE_RSS;
- mod_timer_pending(&adapter->watchdog_timer, jiffies + 1);
- } else {
- iavf_init_rss(adapter);
- }
- return;
-restart:
- schedule_delayed_work(&adapter->init_task, msecs_to_jiffies(30));
+ queue_delayed_work(iavf_wq, &adapter->init_task,
+ msecs_to_jiffies(30));
return;
-err_mem:
- iavf_free_rss(adapter);
-err_register:
- iavf_free_misc_irq(adapter);
-err_sw_init:
- iavf_reset_interrupt_capability(adapter);
-err_alloc:
- kfree(adapter->vf_res);
- adapter->vf_res = NULL;
-err:
- /* Things went into the weeds, so try again later */
+init_failed:
if (++adapter->aq_wait_count > IAVF_AQ_MAX_ERR) {
- dev_err(&pdev->dev, "Failed to communicate with PF; waiting before retry\n");
+ dev_err(&adapter->pdev->dev,
+ "Failed to communicate with PF; waiting before retry\n");
adapter->flags |= IAVF_FLAG_PF_COMMS_FAILED;
iavf_shutdown_adminq(hw);
adapter->state = __IAVF_STARTUP;
- schedule_delayed_work(&adapter->init_task, HZ * 5);
+ queue_delayed_work(iavf_wq, &adapter->init_task, HZ * 5);
return;
}
- schedule_delayed_work(&adapter->init_task, HZ);
+ queue_delayed_work(iavf_wq, &adapter->init_task, HZ);
}
/**
@@ -3683,11 +3752,11 @@ static int iavf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
INIT_WORK(&adapter->reset_task, iavf_reset_task);
INIT_WORK(&adapter->adminq_task, iavf_adminq_task);
- INIT_WORK(&adapter->watchdog_task, iavf_watchdog_task);
+ INIT_DELAYED_WORK(&adapter->watchdog_task, iavf_watchdog_task);
INIT_DELAYED_WORK(&adapter->client_task, iavf_client_task);
INIT_DELAYED_WORK(&adapter->init_task, iavf_init_task);
- schedule_delayed_work(&adapter->init_task,
- msecs_to_jiffies(5 * (pdev->devfn & 0x07)));
+ queue_delayed_work(iavf_wq, &adapter->init_task,
+ msecs_to_jiffies(5 * (pdev->devfn & 0x07)));
/* Setup the wait queue for indicating transition to down status */
init_waitqueue_head(&adapter->down_waitqueue);
@@ -3783,7 +3852,7 @@ static int iavf_resume(struct pci_dev *pdev)
return err;
}
- schedule_work(&adapter->reset_task);
+ queue_work(iavf_wq, &adapter->reset_task);
netif_device_attach(netdev);
@@ -3843,8 +3912,7 @@ static void iavf_remove(struct pci_dev *pdev)
iavf_reset_interrupt_capability(adapter);
iavf_free_q_vectors(adapter);
- if (adapter->watchdog_timer.function)
- del_timer_sync(&adapter->watchdog_timer);
+ cancel_delayed_work_sync(&adapter->watchdog_task);
cancel_work_sync(&adapter->adminq_task);
diff --git a/drivers/net/ethernet/intel/iavf/iavf_osdep.h b/drivers/net/ethernet/intel/iavf/iavf_osdep.h
index e6e0b0328706..a452ce90679a 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_osdep.h
+++ b/drivers/net/ethernet/intel/iavf/iavf_osdep.h
@@ -44,9 +44,12 @@ struct iavf_virt_mem {
#define iavf_allocate_virt_mem(h, m, s) iavf_allocate_virt_mem_d(h, m, s)
#define iavf_free_virt_mem(h, m) iavf_free_virt_mem_d(h, m)
-#define iavf_debug(h, m, s, ...) iavf_debug_d(h, m, s, ##__VA_ARGS__)
-extern void iavf_debug_d(void *hw, u32 mask, char *fmt_str, ...)
- __attribute__ ((format(gnu_printf, 3, 4)));
+#define iavf_debug(h, m, s, ...) \
+do { \
+ if (((m) & (h)->debug_mask)) \
+ pr_info("iavf %02x:%02x.%x " s, \
+ (h)->bus.bus_id, (h)->bus.device, \
+ (h)->bus.func, ##__VA_ARGS__); \
+} while (0)
-typedef enum iavf_status_code iavf_status;
#endif /* _IAVF_OSDEP_H_ */
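
The new iavf_debug() above swaps the out-of-line gnu_printf helper for the common kernel idiom of a mask-gated pr_info() wrapped in do { } while (0), which keeps the macro a single statement and therefore safe in unbraced if/else branches. A small sketch of the same pattern, with hypothetical names (my_ctx, MY_DBG_RX, my_dbg):

#include <linux/bits.h>
#include <linux/printk.h>
#include <linux/types.h>

struct my_ctx {
	u32 debug_mask;
};

#define MY_DBG_RX	BIT(0)

#define my_dbg(ctx, mask, fmt, ...)				\
do {								\
	if ((mask) & (ctx)->debug_mask)				\
		pr_info("mydrv: " fmt, ##__VA_ARGS__);		\
} while (0)

static void my_rx_event(struct my_ctx *ctx, int len)
{
	/* the macro expands to one statement, so the else still pairs correctly */
	if (len)
		my_dbg(ctx, MY_DBG_RX, "received %d bytes\n", len);
	else
		pr_warn("mydrv: empty frame\n");
}
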
diff --git a/drivers/net/ethernet/intel/iavf/iavf_prototype.h b/drivers/net/ethernet/intel/iavf/iavf_prototype.h
index d6685103af39..edebfbbcffdc 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_prototype.h
+++ b/drivers/net/ethernet/intel/iavf/iavf_prototype.h
@@ -16,39 +16,40 @@
*/
/* adminq functions */
-iavf_status iavf_init_adminq(struct iavf_hw *hw);
-iavf_status iavf_shutdown_adminq(struct iavf_hw *hw);
-void i40e_adminq_init_ring_data(struct iavf_hw *hw);
-iavf_status iavf_clean_arq_element(struct iavf_hw *hw,
- struct i40e_arq_event_info *e,
- u16 *events_pending);
-iavf_status iavf_asq_send_command(struct iavf_hw *hw, struct i40e_aq_desc *desc,
- void *buff, /* can be NULL */
- u16 buff_size,
- struct i40e_asq_cmd_details *cmd_details);
+enum iavf_status iavf_init_adminq(struct iavf_hw *hw);
+enum iavf_status iavf_shutdown_adminq(struct iavf_hw *hw);
+void iavf_adminq_init_ring_data(struct iavf_hw *hw);
+enum iavf_status iavf_clean_arq_element(struct iavf_hw *hw,
+ struct iavf_arq_event_info *e,
+ u16 *events_pending);
+enum iavf_status iavf_asq_send_command(struct iavf_hw *hw,
+ struct iavf_aq_desc *desc,
+ void *buff, /* can be NULL */
+ u16 buff_size,
+ struct iavf_asq_cmd_details *cmd_details);
bool iavf_asq_done(struct iavf_hw *hw);
/* debug function for adminq */
void iavf_debug_aq(struct iavf_hw *hw, enum iavf_debug_mask mask,
void *desc, void *buffer, u16 buf_len);
-void i40e_idle_aq(struct iavf_hw *hw);
+void iavf_idle_aq(struct iavf_hw *hw);
void iavf_resume_aq(struct iavf_hw *hw);
bool iavf_check_asq_alive(struct iavf_hw *hw);
-iavf_status iavf_aq_queue_shutdown(struct iavf_hw *hw, bool unloading);
-const char *iavf_aq_str(struct iavf_hw *hw, enum i40e_admin_queue_err aq_err);
-const char *iavf_stat_str(struct iavf_hw *hw, iavf_status stat_err);
+enum iavf_status iavf_aq_queue_shutdown(struct iavf_hw *hw, bool unloading);
+const char *iavf_aq_str(struct iavf_hw *hw, enum iavf_admin_queue_err aq_err);
+const char *iavf_stat_str(struct iavf_hw *hw, enum iavf_status stat_err);
-iavf_status iavf_aq_get_rss_lut(struct iavf_hw *hw, u16 seid,
- bool pf_lut, u8 *lut, u16 lut_size);
-iavf_status iavf_aq_set_rss_lut(struct iavf_hw *hw, u16 seid,
- bool pf_lut, u8 *lut, u16 lut_size);
-iavf_status iavf_aq_get_rss_key(struct iavf_hw *hw, u16 seid,
- struct i40e_aqc_get_set_rss_key_data *key);
-iavf_status iavf_aq_set_rss_key(struct iavf_hw *hw, u16 seid,
- struct i40e_aqc_get_set_rss_key_data *key);
+enum iavf_status iavf_aq_get_rss_lut(struct iavf_hw *hw, u16 seid,
+ bool pf_lut, u8 *lut, u16 lut_size);
+enum iavf_status iavf_aq_set_rss_lut(struct iavf_hw *hw, u16 seid,
+ bool pf_lut, u8 *lut, u16 lut_size);
+enum iavf_status iavf_aq_get_rss_key(struct iavf_hw *hw, u16 seid,
+ struct iavf_aqc_get_set_rss_key_data *key);
+enum iavf_status iavf_aq_set_rss_key(struct iavf_hw *hw, u16 seid,
+ struct iavf_aqc_get_set_rss_key_data *key);
-iavf_status iavf_set_mac_type(struct iavf_hw *hw);
+enum iavf_status iavf_set_mac_type(struct iavf_hw *hw);
extern struct iavf_rx_ptype_decoded iavf_ptype_lookup[];
@@ -59,9 +60,10 @@ static inline struct iavf_rx_ptype_decoded decode_rx_desc_ptype(u8 ptype)
void iavf_vf_parse_hw_config(struct iavf_hw *hw,
struct virtchnl_vf_resource *msg);
-iavf_status iavf_vf_reset(struct iavf_hw *hw);
-iavf_status iavf_aq_send_msg_to_pf(struct iavf_hw *hw,
- enum virtchnl_ops v_opcode,
- iavf_status v_retval, u8 *msg, u16 msglen,
- struct i40e_asq_cmd_details *cmd_details);
+enum iavf_status iavf_vf_reset(struct iavf_hw *hw);
+enum iavf_status iavf_aq_send_msg_to_pf(struct iavf_hw *hw,
+ enum virtchnl_ops v_opcode,
+ enum iavf_status v_retval,
+ u8 *msg, u16 msglen,
+ struct iavf_asq_cmd_details *cmd_details);
#endif /* _IAVF_PROTOTYPE_H_ */
diff --git a/drivers/net/ethernet/intel/iavf/iavf_status.h b/drivers/net/ethernet/intel/iavf/iavf_status.h
index 46742fab7b8c..46e3d1f6b604 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_status.h
+++ b/drivers/net/ethernet/intel/iavf/iavf_status.h
@@ -5,74 +5,74 @@
#define _IAVF_STATUS_H_
/* Error Codes */
-enum iavf_status_code {
- I40E_SUCCESS = 0,
- I40E_ERR_NVM = -1,
- I40E_ERR_NVM_CHECKSUM = -2,
- I40E_ERR_PHY = -3,
- I40E_ERR_CONFIG = -4,
- I40E_ERR_PARAM = -5,
- I40E_ERR_MAC_TYPE = -6,
- I40E_ERR_UNKNOWN_PHY = -7,
- I40E_ERR_LINK_SETUP = -8,
- I40E_ERR_ADAPTER_STOPPED = -9,
- I40E_ERR_INVALID_MAC_ADDR = -10,
- I40E_ERR_DEVICE_NOT_SUPPORTED = -11,
- I40E_ERR_MASTER_REQUESTS_PENDING = -12,
- I40E_ERR_INVALID_LINK_SETTINGS = -13,
- I40E_ERR_AUTONEG_NOT_COMPLETE = -14,
- I40E_ERR_RESET_FAILED = -15,
- I40E_ERR_SWFW_SYNC = -16,
- I40E_ERR_NO_AVAILABLE_VSI = -17,
- I40E_ERR_NO_MEMORY = -18,
- I40E_ERR_BAD_PTR = -19,
- I40E_ERR_RING_FULL = -20,
- I40E_ERR_INVALID_PD_ID = -21,
- I40E_ERR_INVALID_QP_ID = -22,
- I40E_ERR_INVALID_CQ_ID = -23,
- I40E_ERR_INVALID_CEQ_ID = -24,
- I40E_ERR_INVALID_AEQ_ID = -25,
- I40E_ERR_INVALID_SIZE = -26,
- I40E_ERR_INVALID_ARP_INDEX = -27,
- I40E_ERR_INVALID_FPM_FUNC_ID = -28,
- I40E_ERR_QP_INVALID_MSG_SIZE = -29,
- I40E_ERR_QP_TOOMANY_WRS_POSTED = -30,
- I40E_ERR_INVALID_FRAG_COUNT = -31,
- I40E_ERR_QUEUE_EMPTY = -32,
- I40E_ERR_INVALID_ALIGNMENT = -33,
- I40E_ERR_FLUSHED_QUEUE = -34,
- I40E_ERR_INVALID_PUSH_PAGE_INDEX = -35,
- I40E_ERR_INVALID_IMM_DATA_SIZE = -36,
- I40E_ERR_TIMEOUT = -37,
- I40E_ERR_OPCODE_MISMATCH = -38,
- I40E_ERR_CQP_COMPL_ERROR = -39,
- I40E_ERR_INVALID_VF_ID = -40,
- I40E_ERR_INVALID_HMCFN_ID = -41,
- I40E_ERR_BACKING_PAGE_ERROR = -42,
- I40E_ERR_NO_PBLCHUNKS_AVAILABLE = -43,
- I40E_ERR_INVALID_PBLE_INDEX = -44,
- I40E_ERR_INVALID_SD_INDEX = -45,
- I40E_ERR_INVALID_PAGE_DESC_INDEX = -46,
- I40E_ERR_INVALID_SD_TYPE = -47,
- I40E_ERR_MEMCPY_FAILED = -48,
- I40E_ERR_INVALID_HMC_OBJ_INDEX = -49,
- I40E_ERR_INVALID_HMC_OBJ_COUNT = -50,
- I40E_ERR_INVALID_SRQ_ARM_LIMIT = -51,
- I40E_ERR_SRQ_ENABLED = -52,
- I40E_ERR_ADMIN_QUEUE_ERROR = -53,
- I40E_ERR_ADMIN_QUEUE_TIMEOUT = -54,
- I40E_ERR_BUF_TOO_SHORT = -55,
- I40E_ERR_ADMIN_QUEUE_FULL = -56,
- I40E_ERR_ADMIN_QUEUE_NO_WORK = -57,
- I40E_ERR_BAD_IWARP_CQE = -58,
- I40E_ERR_NVM_BLANK_MODE = -59,
- I40E_ERR_NOT_IMPLEMENTED = -60,
- I40E_ERR_PE_DOORBELL_NOT_ENABLED = -61,
- I40E_ERR_DIAG_TEST_FAILED = -62,
- I40E_ERR_NOT_READY = -63,
- I40E_NOT_SUPPORTED = -64,
- I40E_ERR_FIRMWARE_API_VERSION = -65,
- I40E_ERR_ADMIN_QUEUE_CRITICAL_ERROR = -66,
+enum iavf_status {
+ IAVF_SUCCESS = 0,
+ IAVF_ERR_NVM = -1,
+ IAVF_ERR_NVM_CHECKSUM = -2,
+ IAVF_ERR_PHY = -3,
+ IAVF_ERR_CONFIG = -4,
+ IAVF_ERR_PARAM = -5,
+ IAVF_ERR_MAC_TYPE = -6,
+ IAVF_ERR_UNKNOWN_PHY = -7,
+ IAVF_ERR_LINK_SETUP = -8,
+ IAVF_ERR_ADAPTER_STOPPED = -9,
+ IAVF_ERR_INVALID_MAC_ADDR = -10,
+ IAVF_ERR_DEVICE_NOT_SUPPORTED = -11,
+ IAVF_ERR_MASTER_REQUESTS_PENDING = -12,
+ IAVF_ERR_INVALID_LINK_SETTINGS = -13,
+ IAVF_ERR_AUTONEG_NOT_COMPLETE = -14,
+ IAVF_ERR_RESET_FAILED = -15,
+ IAVF_ERR_SWFW_SYNC = -16,
+ IAVF_ERR_NO_AVAILABLE_VSI = -17,
+ IAVF_ERR_NO_MEMORY = -18,
+ IAVF_ERR_BAD_PTR = -19,
+ IAVF_ERR_RING_FULL = -20,
+ IAVF_ERR_INVALID_PD_ID = -21,
+ IAVF_ERR_INVALID_QP_ID = -22,
+ IAVF_ERR_INVALID_CQ_ID = -23,
+ IAVF_ERR_INVALID_CEQ_ID = -24,
+ IAVF_ERR_INVALID_AEQ_ID = -25,
+ IAVF_ERR_INVALID_SIZE = -26,
+ IAVF_ERR_INVALID_ARP_INDEX = -27,
+ IAVF_ERR_INVALID_FPM_FUNC_ID = -28,
+ IAVF_ERR_QP_INVALID_MSG_SIZE = -29,
+ IAVF_ERR_QP_TOOMANY_WRS_POSTED = -30,
+ IAVF_ERR_INVALID_FRAG_COUNT = -31,
+ IAVF_ERR_QUEUE_EMPTY = -32,
+ IAVF_ERR_INVALID_ALIGNMENT = -33,
+ IAVF_ERR_FLUSHED_QUEUE = -34,
+ IAVF_ERR_INVALID_PUSH_PAGE_INDEX = -35,
+ IAVF_ERR_INVALID_IMM_DATA_SIZE = -36,
+ IAVF_ERR_TIMEOUT = -37,
+ IAVF_ERR_OPCODE_MISMATCH = -38,
+ IAVF_ERR_CQP_COMPL_ERROR = -39,
+ IAVF_ERR_INVALID_VF_ID = -40,
+ IAVF_ERR_INVALID_HMCFN_ID = -41,
+ IAVF_ERR_BACKING_PAGE_ERROR = -42,
+ IAVF_ERR_NO_PBLCHUNKS_AVAILABLE = -43,
+ IAVF_ERR_INVALID_PBLE_INDEX = -44,
+ IAVF_ERR_INVALID_SD_INDEX = -45,
+ IAVF_ERR_INVALID_PAGE_DESC_INDEX = -46,
+ IAVF_ERR_INVALID_SD_TYPE = -47,
+ IAVF_ERR_MEMCPY_FAILED = -48,
+ IAVF_ERR_INVALID_HMC_OBJ_INDEX = -49,
+ IAVF_ERR_INVALID_HMC_OBJ_COUNT = -50,
+ IAVF_ERR_INVALID_SRQ_ARM_LIMIT = -51,
+ IAVF_ERR_SRQ_ENABLED = -52,
+ IAVF_ERR_ADMIN_QUEUE_ERROR = -53,
+ IAVF_ERR_ADMIN_QUEUE_TIMEOUT = -54,
+ IAVF_ERR_BUF_TOO_SHORT = -55,
+ IAVF_ERR_ADMIN_QUEUE_FULL = -56,
+ IAVF_ERR_ADMIN_QUEUE_NO_WORK = -57,
+ IAVF_ERR_BAD_IWARP_CQE = -58,
+ IAVF_ERR_NVM_BLANK_MODE = -59,
+ IAVF_ERR_NOT_IMPLEMENTED = -60,
+ IAVF_ERR_PE_DOORBELL_NOT_ENABLED = -61,
+ IAVF_ERR_DIAG_TEST_FAILED = -62,
+ IAVF_ERR_NOT_READY = -63,
+ IAVF_NOT_SUPPORTED = -64,
+ IAVF_ERR_FIRMWARE_API_VERSION = -65,
+ IAVF_ERR_ADMIN_QUEUE_CRITICAL_ERROR = -66,
};
#endif /* _IAVF_STATUS_H_ */
diff --git a/drivers/net/ethernet/intel/iavf/iavf_trace.h b/drivers/net/ethernet/intel/iavf/iavf_trace.h
index 1474f5539751..1058e68a02b4 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_trace.h
+++ b/drivers/net/ethernet/intel/iavf/iavf_trace.h
@@ -17,8 +17,8 @@
/* See trace-events-sample.h for a detailed description of why this
* guard clause is different from most normal include files.
*/
-#if !defined(_I40E_TRACE_H_) || defined(TRACE_HEADER_MULTI_READ)
-#define _I40E_TRACE_H_
+#if !defined(_IAVF_TRACE_H_) || defined(TRACE_HEADER_MULTI_READ)
+#define _IAVF_TRACE_H_
#include <linux/tracepoint.h>
diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
index 06d1509d57f7..0cca1b589b56 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
@@ -190,7 +190,7 @@ void iavf_detect_recover_hung(struct iavf_vsi *vsi)
static bool iavf_clean_tx_irq(struct iavf_vsi *vsi,
struct iavf_ring *tx_ring, int napi_budget)
{
- u16 i = tx_ring->next_to_clean;
+ int i = tx_ring->next_to_clean;
struct iavf_tx_buffer *tx_buf;
struct iavf_tx_desc *tx_desc;
unsigned int total_bytes = 0, total_packets = 0;
@@ -379,19 +379,19 @@ static inline unsigned int iavf_itr_divisor(struct iavf_q_vector *q_vector)
unsigned int divisor;
switch (q_vector->adapter->link_speed) {
- case I40E_LINK_SPEED_40GB:
+ case IAVF_LINK_SPEED_40GB:
divisor = IAVF_ITR_ADAPTIVE_MIN_INC * 1024;
break;
- case I40E_LINK_SPEED_25GB:
- case I40E_LINK_SPEED_20GB:
+ case IAVF_LINK_SPEED_25GB:
+ case IAVF_LINK_SPEED_20GB:
divisor = IAVF_ITR_ADAPTIVE_MIN_INC * 512;
break;
default:
- case I40E_LINK_SPEED_10GB:
+ case IAVF_LINK_SPEED_10GB:
divisor = IAVF_ITR_ADAPTIVE_MIN_INC * 256;
break;
- case I40E_LINK_SPEED_1GB:
- case I40E_LINK_SPEED_100MB:
+ case IAVF_LINK_SPEED_1GB:
+ case IAVF_LINK_SPEED_100MB:
divisor = IAVF_ITR_ADAPTIVE_MIN_INC * 32;
break;
}
@@ -1236,6 +1236,9 @@ static void iavf_add_rx_frag(struct iavf_ring *rx_ring,
unsigned int truesize = SKB_DATA_ALIGN(size + iavf_rx_offset(rx_ring));
#endif
+ if (!size)
+ return;
+
skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buffer->page,
rx_buffer->page_offset, size, truesize);
@@ -1260,6 +1263,9 @@ static struct iavf_rx_buffer *iavf_get_rx_buffer(struct iavf_ring *rx_ring,
{
struct iavf_rx_buffer *rx_buffer;
+ if (!size)
+ return NULL;
+
rx_buffer = &rx_ring->rx_bi[rx_ring->next_to_clean];
prefetchw(rx_buffer->page);
@@ -1290,7 +1296,7 @@ static struct sk_buff *iavf_construct_skb(struct iavf_ring *rx_ring,
struct iavf_rx_buffer *rx_buffer,
unsigned int size)
{
- void *va = page_address(rx_buffer->page) + rx_buffer->page_offset;
+ void *va;
#if (PAGE_SIZE < 8192)
unsigned int truesize = iavf_rx_pg_size(rx_ring) / 2;
#else
@@ -1299,7 +1305,10 @@ static struct sk_buff *iavf_construct_skb(struct iavf_ring *rx_ring,
unsigned int headlen;
struct sk_buff *skb;
+ if (!rx_buffer)
+ return NULL;
/* prefetch first cache line of first page */
+ va = page_address(rx_buffer->page) + rx_buffer->page_offset;
prefetch(va);
#if L1_CACHE_BYTES < 128
prefetch(va + L1_CACHE_BYTES);
@@ -1354,7 +1363,7 @@ static struct sk_buff *iavf_build_skb(struct iavf_ring *rx_ring,
struct iavf_rx_buffer *rx_buffer,
unsigned int size)
{
- void *va = page_address(rx_buffer->page) + rx_buffer->page_offset;
+ void *va;
#if (PAGE_SIZE < 8192)
unsigned int truesize = iavf_rx_pg_size(rx_ring) / 2;
#else
@@ -1363,7 +1372,10 @@ static struct sk_buff *iavf_build_skb(struct iavf_ring *rx_ring,
#endif
struct sk_buff *skb;
+ if (!rx_buffer)
+ return NULL;
/* prefetch first cache line of first page */
+ va = page_address(rx_buffer->page) + rx_buffer->page_offset;
prefetch(va);
#if L1_CACHE_BYTES < 128
prefetch(va + L1_CACHE_BYTES);
@@ -1398,6 +1410,9 @@ static struct sk_buff *iavf_build_skb(struct iavf_ring *rx_ring,
static void iavf_put_rx_buffer(struct iavf_ring *rx_ring,
struct iavf_rx_buffer *rx_buffer)
{
+ if (!rx_buffer)
+ return;
+
if (iavf_can_reuse_rx_page(rx_buffer)) {
/* hand second half of page back to the ring */
iavf_reuse_rx_page(rx_ring, rx_buffer);
@@ -1496,11 +1511,12 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
* verified the descriptor has been written back.
*/
dma_rmb();
+#define IAVF_RXD_DD BIT(IAVF_RX_DESC_STATUS_DD_SHIFT)
+ if (!iavf_test_staterr(rx_desc, IAVF_RXD_DD))
+ break;
size = (qword & IAVF_RXD_QW1_LENGTH_PBUF_MASK) >>
IAVF_RXD_QW1_LENGTH_PBUF_SHIFT;
- if (!size)
- break;
iavf_trace(clean_rx_irq, rx_ring, rx_desc, skb);
rx_buffer = iavf_get_rx_buffer(rx_ring, size);
@@ -1516,7 +1532,8 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
/* exit if we failed to retrieve a buffer */
if (!skb) {
rx_ring->rx_stats.alloc_buff_failed++;
- rx_buffer->pagecnt_bias++;
+ if (rx_buffer)
+ rx_buffer->pagecnt_bias++;
break;
}
diff --git a/drivers/net/ethernet/intel/iavf/iavf_type.h b/drivers/net/ethernet/intel/iavf/iavf_type.h
index ca89583613fb..7190a40c540c 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_type.h
+++ b/drivers/net/ethernet/intel/iavf/iavf_type.h
@@ -7,7 +7,7 @@
#include "iavf_status.h"
#include "iavf_osdep.h"
#include "iavf_register.h"
-#include "i40e_adminq.h"
+#include "iavf_adminq.h"
#include "iavf_devids.h"
#define IAVF_RXQ_CTX_DBUFF_SHIFT 7
@@ -21,7 +21,7 @@
/* forward declaration */
struct iavf_hw;
-typedef void (*I40E_ADMINQ_CALLBACK)(struct iavf_hw *, struct i40e_aq_desc *);
+typedef void (*IAVF_ADMINQ_CALLBACK)(struct iavf_hw *, struct iavf_aq_desc *);
/* Data type manipulation macros. */
diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
index e64751da0921..d49d58a6de80 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
@@ -22,7 +22,7 @@ static int iavf_send_pf_msg(struct iavf_adapter *adapter,
enum virtchnl_ops op, u8 *msg, u16 len)
{
struct iavf_hw *hw = &adapter->hw;
- iavf_status err;
+ enum iavf_status err;
if (adapter->flags & IAVF_FLAG_PF_COMMS_FAILED)
return 0; /* nothing to see here, move along */
@@ -41,7 +41,7 @@ static int iavf_send_pf_msg(struct iavf_adapter *adapter,
*
* Send API version admin queue message to the PF. The reply is not checked
* in this function. Returns 0 if the message was successfully
- * sent, or one of the I40E_ADMIN_QUEUE_ERROR_ statuses if not.
+ * sent, or one of the IAVF_ADMIN_QUEUE_ERROR_ statuses if not.
**/
int iavf_send_api_ver(struct iavf_adapter *adapter)
{
@@ -60,16 +60,16 @@ int iavf_send_api_ver(struct iavf_adapter *adapter)
*
* Compare API versions with the PF. Must be called after admin queue is
* initialized. Returns 0 if API versions match, -EIO if they do not,
- * I40E_ERR_ADMIN_QUEUE_NO_WORK if the admin queue is empty, and any errors
+ * IAVF_ERR_ADMIN_QUEUE_NO_WORK if the admin queue is empty, and any errors
* from the firmware are propagated.
**/
int iavf_verify_api_ver(struct iavf_adapter *adapter)
{
struct virtchnl_version_info *pf_vvi;
struct iavf_hw *hw = &adapter->hw;
- struct i40e_arq_event_info event;
+ struct iavf_arq_event_info event;
enum virtchnl_ops op;
- iavf_status err;
+ enum iavf_status err;
event.buf_len = IAVF_MAX_AQ_BUF_SIZE;
event.msg_buf = kzalloc(event.buf_len, GFP_KERNEL);
@@ -92,7 +92,7 @@ int iavf_verify_api_ver(struct iavf_adapter *adapter)
}
- err = (iavf_status)le32_to_cpu(event.desc.cookie_low);
+ err = (enum iavf_status)le32_to_cpu(event.desc.cookie_low);
if (err)
goto out_alloc;
@@ -123,7 +123,7 @@ out:
*
* Send VF configuration request admin queue message to the PF. The reply
* is not checked in this function. Returns 0 if the message was
- * successfully sent, or one of the I40E_ADMIN_QUEUE_ERROR_ statuses if not.
+ * successfully sent, or one of the IAVF_ADMIN_QUEUE_ERROR_ statuses if not.
**/
int iavf_send_vf_config_msg(struct iavf_adapter *adapter)
{
@@ -189,9 +189,9 @@ static void iavf_validate_num_queues(struct iavf_adapter *adapter)
int iavf_get_vf_config(struct iavf_adapter *adapter)
{
struct iavf_hw *hw = &adapter->hw;
- struct i40e_arq_event_info event;
+ struct iavf_arq_event_info event;
enum virtchnl_ops op;
- iavf_status err;
+ enum iavf_status err;
u16 len;
len = sizeof(struct virtchnl_vf_resource) +
@@ -216,7 +216,7 @@ int iavf_get_vf_config(struct iavf_adapter *adapter)
break;
}
- err = (iavf_status)le32_to_cpu(event.desc.cookie_low);
+ err = (enum iavf_status)le32_to_cpu(event.desc.cookie_low);
memcpy(adapter->vf_res, event.msg_buf, min(event.msg_len, len));
/* some PFs send more queues than we should have so validate that
@@ -242,7 +242,8 @@ void iavf_configure_queues(struct iavf_adapter *adapter)
struct virtchnl_vsi_queue_config_info *vqci;
struct virtchnl_queue_pair_info *vqpi;
int pairs = adapter->num_active_queues;
- int i, len, max_frame = IAVF_MAX_RXBUFFER;
+ int i, max_frame = IAVF_MAX_RXBUFFER;
+ size_t len;
if (adapter->current_op != VIRTCHNL_OP_UNKNOWN) {
/* bail because we already have a command pending */
@@ -251,8 +252,7 @@ void iavf_configure_queues(struct iavf_adapter *adapter)
return;
}
adapter->current_op = VIRTCHNL_OP_CONFIG_VSI_QUEUES;
- len = sizeof(struct virtchnl_vsi_queue_config_info) +
- (sizeof(struct virtchnl_queue_pair_info) * pairs);
+ len = struct_size(vqci, qpair, pairs);
vqci = kzalloc(len, GFP_KERNEL);
if (!vqci)
return;
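
This and the following virtchnl hunks replace open-coded sizeof(header) + n * sizeof(element) arithmetic with struct_size() from <linux/overflow.h>, which computes the allocation size of a structure ending in a flexible array member and saturates rather than wrapping on overflow. A hedged sketch with a made-up structure (my_list and my_elem are hypothetical; the helper is real):

#include <linux/overflow.h>
#include <linux/slab.h>
#include <linux/types.h>

struct my_elem {
	u32 id;
};

struct my_list {
	u16 count;
	struct my_elem elems[];		/* flexible array member */
};

static struct my_list *my_list_alloc(u16 n)
{
	/* same as sizeof(struct my_list) + n * sizeof(struct my_elem),
	 * but saturates to SIZE_MAX on overflow so kzalloc() just fails
	 */
	struct my_list *list = kzalloc(struct_size(list, elems, n), GFP_KERNEL);

	if (list)
		list->count = n;
	return list;
}
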
@@ -351,8 +351,9 @@ void iavf_map_queues(struct iavf_adapter *adapter)
{
struct virtchnl_irq_map_info *vimi;
struct virtchnl_vector_map *vecmap;
- int v_idx, q_vectors, len;
struct iavf_q_vector *q_vector;
+ int v_idx, q_vectors;
+ size_t len;
if (adapter->current_op != VIRTCHNL_OP_UNKNOWN) {
/* bail because we already have a command pending */
@@ -364,9 +365,7 @@ void iavf_map_queues(struct iavf_adapter *adapter)
q_vectors = adapter->num_msix_vectors - NONQ_VECS;
- len = sizeof(struct virtchnl_irq_map_info) +
- (adapter->num_msix_vectors *
- sizeof(struct virtchnl_vector_map));
+ len = struct_size(vimi, vecmap, adapter->num_msix_vectors);
vimi = kzalloc(len, GFP_KERNEL);
if (!vimi)
return;
@@ -416,7 +415,7 @@ int iavf_request_queues(struct iavf_adapter *adapter, int num)
return -EBUSY;
}
- vfres.num_queue_pairs = num;
+ vfres.num_queue_pairs = min_t(int, num, num_online_cpus());
adapter->current_op = VIRTCHNL_OP_REQUEST_QUEUES;
adapter->flags |= IAVF_FLAG_REINIT_ITR_NEEDED;
@@ -433,9 +432,10 @@ int iavf_request_queues(struct iavf_adapter *adapter, int num)
void iavf_add_ether_addrs(struct iavf_adapter *adapter)
{
struct virtchnl_ether_addr_list *veal;
- int len, i = 0, count = 0;
struct iavf_mac_filter *f;
+ int i = 0, count = 0;
bool more = false;
+ size_t len;
if (adapter->current_op != VIRTCHNL_OP_UNKNOWN) {
/* bail because we already have a command pending */
@@ -457,15 +457,13 @@ void iavf_add_ether_addrs(struct iavf_adapter *adapter)
}
adapter->current_op = VIRTCHNL_OP_ADD_ETH_ADDR;
- len = sizeof(struct virtchnl_ether_addr_list) +
- (count * sizeof(struct virtchnl_ether_addr));
+ len = struct_size(veal, list, count);
if (len > IAVF_MAX_AQ_BUF_SIZE) {
dev_warn(&adapter->pdev->dev, "Too many add MAC changes in one request\n");
count = (IAVF_MAX_AQ_BUF_SIZE -
sizeof(struct virtchnl_ether_addr_list)) /
sizeof(struct virtchnl_ether_addr);
- len = sizeof(struct virtchnl_ether_addr_list) +
- (count * sizeof(struct virtchnl_ether_addr));
+ len = struct_size(veal, list, count);
more = true;
}
@@ -505,8 +503,9 @@ void iavf_del_ether_addrs(struct iavf_adapter *adapter)
{
struct virtchnl_ether_addr_list *veal;
struct iavf_mac_filter *f, *ftmp;
- int len, i = 0, count = 0;
+ int i = 0, count = 0;
bool more = false;
+ size_t len;
if (adapter->current_op != VIRTCHNL_OP_UNKNOWN) {
/* bail because we already have a command pending */
@@ -528,15 +527,13 @@ void iavf_del_ether_addrs(struct iavf_adapter *adapter)
}
adapter->current_op = VIRTCHNL_OP_DEL_ETH_ADDR;
- len = sizeof(struct virtchnl_ether_addr_list) +
- (count * sizeof(struct virtchnl_ether_addr));
+ len = struct_size(veal, list, count);
if (len > IAVF_MAX_AQ_BUF_SIZE) {
dev_warn(&adapter->pdev->dev, "Too many delete MAC changes in one request\n");
count = (IAVF_MAX_AQ_BUF_SIZE -
sizeof(struct virtchnl_ether_addr_list)) /
sizeof(struct virtchnl_ether_addr);
- len = sizeof(struct virtchnl_ether_addr_list) +
- (count * sizeof(struct virtchnl_ether_addr));
+ len = struct_size(veal, list, count);
more = true;
}
veal = kzalloc(len, GFP_ATOMIC);
@@ -938,22 +935,22 @@ static void iavf_print_link_message(struct iavf_adapter *adapter)
}
switch (adapter->link_speed) {
- case I40E_LINK_SPEED_40GB:
+ case IAVF_LINK_SPEED_40GB:
speed = "40 G";
break;
- case I40E_LINK_SPEED_25GB:
+ case IAVF_LINK_SPEED_25GB:
speed = "25 G";
break;
- case I40E_LINK_SPEED_20GB:
+ case IAVF_LINK_SPEED_20GB:
speed = "20 G";
break;
- case I40E_LINK_SPEED_10GB:
+ case IAVF_LINK_SPEED_10GB:
speed = "10 G";
break;
- case I40E_LINK_SPEED_1GB:
+ case IAVF_LINK_SPEED_1GB:
speed = "1000 M";
break;
- case I40E_LINK_SPEED_100MB:
+ case IAVF_LINK_SPEED_100MB:
speed = "100 M";
break;
default:
@@ -973,7 +970,7 @@ static void iavf_print_link_message(struct iavf_adapter *adapter)
void iavf_enable_channels(struct iavf_adapter *adapter)
{
struct virtchnl_tc_info *vti = NULL;
- u16 len;
+ size_t len;
int i;
if (adapter->current_op != VIRTCHNL_OP_UNKNOWN) {
@@ -983,9 +980,7 @@ void iavf_enable_channels(struct iavf_adapter *adapter)
return;
}
- len = (adapter->num_tc * sizeof(struct virtchnl_channel_info)) +
- sizeof(struct virtchnl_tc_info);
-
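+ /* virtchnl_tc_info declares list[1] rather than a true flexible array,
+ * which is presumably why the element count here is num_tc - 1
+ */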
+ len = struct_size(vti, list, adapter->num_tc - 1);
vti = kzalloc(len, GFP_KERNEL);
if (!vti)
return;
@@ -1184,8 +1179,8 @@ void iavf_request_reset(struct iavf_adapter *adapter)
* This function handles the reply messages.
**/
void iavf_virtchnl_completion(struct iavf_adapter *adapter,
- enum virtchnl_ops v_opcode, iavf_status v_retval,
- u8 *msg, u16 msglen)
+ enum virtchnl_ops v_opcode,
+ enum iavf_status v_retval, u8 *msg, u16 msglen)
{
struct net_device *netdev = adapter->netdev;
@@ -1238,7 +1233,7 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
if (!(adapter->flags & IAVF_FLAG_RESET_PENDING)) {
adapter->flags |= IAVF_FLAG_RESET_PENDING;
dev_info(&adapter->pdev->dev, "Scheduling reset task\n");
- schedule_work(&adapter->reset_task);
+ queue_work(iavf_wq, &adapter->reset_task);
}
break;
default:
diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index 792e6e42030e..9ee6b55553c0 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -44,15 +44,22 @@
extern const char ice_drv_ver[];
#define ICE_BAR0 0
#define ICE_REQ_DESC_MULTIPLE 32
-#define ICE_MIN_NUM_DESC ICE_REQ_DESC_MULTIPLE
+#define ICE_MIN_NUM_DESC 64
#define ICE_MAX_NUM_DESC 8160
-/* set default number of Rx/Tx descriptors to the minimum between
- * ICE_MAX_NUM_DESC and the number of descriptors to fill up an entire page
+#define ICE_DFLT_MIN_RX_DESC 512
+/* If the number of descriptors that fit in one page (clamped to
+ * ICE_MAX_NUM_DESC) is at least ICE_DFLT_MIN_RX_DESC, use that page-based
+ * value as the default; otherwise fall back to ICE_DFLT_MIN_RX_DESC.
+ */
+#define ICE_DFLT_NUM_RX_DESC \
+ min_t(u16, ICE_MAX_NUM_DESC, \
+ max_t(u16, ALIGN(PAGE_SIZE / sizeof(union ice_32byte_rx_desc), \
+ ICE_REQ_DESC_MULTIPLE), \
+ ICE_DFLT_MIN_RX_DESC))
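
As a worked example of the new default (assuming 4 KiB pages and the 32-byte union ice_32byte_rx_desc): 4096 / 32 = 128 descriptors fit in a page, ALIGN(128, ICE_REQ_DESC_MULTIPLE) = 128, which is below ICE_DFLT_MIN_RX_DESC, so the default becomes 512. On a 64 KiB-page system the page-based value (2048) wins instead, and ICE_MAX_NUM_DESC (8160) caps the result in either case.
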
+/* set default number of Tx descriptors to the minimum between ICE_MAX_NUM_DESC
+ * and the number of descriptors to fill up an entire page
*/
-#define ICE_DFLT_NUM_RX_DESC min_t(u16, ICE_MAX_NUM_DESC, \
- ALIGN(PAGE_SIZE / \
- sizeof(union ice_32byte_rx_desc), \
- ICE_REQ_DESC_MULTIPLE))
#define ICE_DFLT_NUM_TX_DESC min_t(u16, ICE_MAX_NUM_DESC, \
ALIGN(PAGE_SIZE / \
sizeof(struct ice_tx_desc), \
@@ -160,7 +167,7 @@ struct ice_tc_cfg {
struct ice_res_tracker {
u16 num_entries;
- u16 search_hint;
+ u16 end;
u16 list[1];
};
@@ -182,6 +189,7 @@ struct ice_sw {
};
enum ice_state {
+ __ICE_TESTING,
__ICE_DOWN,
__ICE_NEEDS_RESTART,
__ICE_PREPARED_FOR_RESET, /* set by driver when prepared */
@@ -244,8 +252,7 @@ struct ice_vsi {
u32 rx_buf_failed;
u32 rx_page_failed;
int num_q_vectors;
- int sw_base_vector; /* Irq base for OS reserved vectors */
- int hw_base_vector; /* HW (absolute) index of a vector */
+ int base_vector; /* IRQ base for OS reserved vectors */
enum ice_vsi_type type;
u16 vsi_num; /* HW (absolute) index of this VSI */
u16 idx; /* software index in pf->vsi[] */
@@ -277,10 +284,10 @@ struct ice_vsi {
struct list_head tmp_sync_list; /* MAC filters to be synced */
struct list_head tmp_unsync_list; /* MAC filters to be unsynced */
- u8 irqs_ready;
- u8 current_isup; /* Sync 'link up' logging */
- u8 stat_offsets_loaded;
- u8 vlan_ena;
+ u8 irqs_ready:1;
+ u8 current_isup:1; /* Sync 'link up' logging */
+ u8 stat_offsets_loaded:1;
+ u8 vlan_ena:1;
/* queue information */
u8 tx_mapping_mode; /* ICE_MAP_MODE_[CONTIG|SCATTER] */
@@ -330,7 +337,7 @@ enum ice_pf_flags {
ICE_FLAG_DCB_CAPABLE,
ICE_FLAG_DCB_ENA,
ICE_FLAG_LINK_DOWN_ON_CLOSE_ENA,
- ICE_FLAG_DISABLE_FW_LLDP,
+ ICE_FLAG_ENABLE_FW_LLDP,
ICE_FLAG_ETHTOOL_CTXT, /* set when ethtool holds RTNL lock */
ICE_PF_FLAGS_NBITS /* must be last */
};
@@ -340,10 +347,12 @@ struct ice_pf {
/* OS reserved IRQ details */
struct msix_entry *msix_entries;
- struct ice_res_tracker *sw_irq_tracker;
-
- /* HW reserved Interrupts for this PF */
- struct ice_res_tracker *hw_irq_tracker;
+ struct ice_res_tracker *irq_tracker;
+ /* First MSIX vector used by SR-IOV VFs. Calculated by subtracting the
+ * number of MSIX vectors needed for all SR-IOV VFs from the number of
+ * MSIX vectors allowed on this PF.
+ */
+ u16 sriov_base_vector;
struct ice_vsi **vsi; /* VSIs created by the driver */
struct ice_sw *first_sw; /* first switch created by firmware */
@@ -365,10 +374,8 @@ struct ice_pf {
struct mutex sw_mutex; /* lock for protecting VSI alloc flow */
u32 msg_enable;
u32 hw_csum_rx_error;
- u32 sw_oicr_idx; /* Other interrupt cause SW vector index */
+ u32 oicr_idx; /* Other interrupt cause MSIX vector index */
u32 num_avail_sw_msix; /* remaining MSIX SW vectors left unclaimed */
- u32 hw_oicr_idx; /* Other interrupt cause vector HW index */
- u32 num_avail_hw_msix; /* remaining HW MSIX vectors left unclaimed */
u32 num_lan_msix; /* Total MSIX vectors for base driver */
u16 num_lan_tx; /* num LAN Tx queues setup */
u16 num_lan_rx; /* num LAN Rx queues setup */
@@ -384,7 +391,7 @@ struct ice_pf {
struct ice_hw_port_stats stats;
struct ice_hw_port_stats stats_prev;
struct ice_hw hw;
- u8 stat_prev_loaded; /* has previous stats been loaded */
+ u8 stat_prev_loaded:1; /* has previous stats been loaded */
#ifdef CONFIG_DCB
u16 dcbx_cap;
#endif /* CONFIG_DCB */
@@ -392,6 +399,7 @@ struct ice_pf {
unsigned long tx_timeout_last_recovery;
u32 tx_timeout_recovery_level;
char int_name[ICE_INT_NAME_STR_LEN];
+ u32 sw_int_count;
};
struct ice_netdev_priv {
@@ -409,7 +417,7 @@ ice_irq_dynamic_ena(struct ice_hw *hw, struct ice_vsi *vsi,
struct ice_q_vector *q_vector)
{
u32 vector = (vsi && q_vector) ? q_vector->reg_idx :
- ((struct ice_pf *)hw->back)->hw_oicr_idx;
+ ((struct ice_pf *)hw->back)->oicr_idx;
int itr = ICE_ITR_NONE;
u32 val;
@@ -444,17 +452,22 @@ ice_find_vsi_by_type(struct ice_pf *pf, enum ice_vsi_type type)
return NULL;
}
+int ice_vsi_setup_tx_rings(struct ice_vsi *vsi);
+int ice_vsi_setup_rx_rings(struct ice_vsi *vsi);
void ice_set_ethtool_ops(struct net_device *netdev);
int ice_up(struct ice_vsi *vsi);
int ice_down(struct ice_vsi *vsi);
+int ice_vsi_cfg(struct ice_vsi *vsi);
+struct ice_vsi *ice_lb_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi);
int ice_set_rss(struct ice_vsi *vsi, u8 *seed, u8 *lut, u16 lut_size);
int ice_get_rss(struct ice_vsi *vsi, u8 *seed, u8 *lut, u16 lut_size);
void ice_fill_rss_lut(u8 *lut, u16 rss_table_size, u16 rss_size);
void ice_print_link_msg(struct ice_vsi *vsi, bool isup);
-void ice_napi_del(struct ice_vsi *vsi);
#ifdef CONFIG_DCB
int ice_pf_ena_all_vsi(struct ice_pf *pf, bool locked);
void ice_pf_dis_all_vsi(struct ice_pf *pf, bool locked);
#endif /* CONFIG_DCB */
+int ice_open(struct net_device *netdev);
+int ice_stop(struct net_device *netdev);
#endif /* _ICE_H_ */
diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
index 6ef083002f5b..765e3c2ed045 100644
--- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
@@ -35,8 +35,8 @@ struct ice_aqc_get_ver {
/* Queue Shutdown (direct 0x0003) */
struct ice_aqc_q_shutdown {
-#define ICE_AQC_DRIVER_UNLOADING BIT(0)
__le32 driver_unloading;
+#define ICE_AQC_DRIVER_UNLOADING BIT(0)
u8 reserved[12];
};
@@ -120,11 +120,9 @@ struct ice_aqc_manage_mac_read {
#define ICE_AQC_MAN_MAC_WOL_ADDR_VALID BIT(7)
#define ICE_AQC_MAN_MAC_READ_S 4
#define ICE_AQC_MAN_MAC_READ_M (0xF << ICE_AQC_MAN_MAC_READ_S)
- u8 lport_num;
- u8 lport_num_valid;
-#define ICE_AQC_MAN_MAC_PORT_NUM_IS_VALID BIT(0)
+ u8 rsvd[2];
u8 num_addr; /* Used in response */
- u8 reserved[3];
+ u8 rsvd1[3];
__le32 addr_high;
__le32 addr_low;
};
@@ -140,7 +138,7 @@ struct ice_aqc_manage_mac_read_resp {
/* Manage MAC address, write command - direct (0x0108) */
struct ice_aqc_manage_mac_write {
- u8 port_num;
+ u8 rsvd;
u8 flags;
#define ICE_AQC_MAN_MAC_WR_MC_MAG_EN BIT(0)
#define ICE_AQC_MAN_MAC_WR_WOL_LAA_PFR_KEEP BIT(1)
@@ -920,6 +918,8 @@ struct ice_aqc_get_phy_caps_data {
#define ICE_AQC_PHY_EN_LINK BIT(3)
#define ICE_AQC_PHY_AN_MODE BIT(4)
#define ICE_AQC_GET_PHY_EN_MOD_QUAL BIT(5)
+#define ICE_AQC_PHY_EN_AUTO_FEC BIT(7)
+#define ICE_AQC_PHY_CAPS_MASK ICE_M(0xff, 0)
u8 low_power_ctrl;
#define ICE_AQC_PHY_EN_D3COLD_LOW_POWER_AUTONEG BIT(0)
__le16 eee_cap;
@@ -932,6 +932,7 @@ struct ice_aqc_get_phy_caps_data {
#define ICE_AQC_PHY_EEE_EN_40GBASE_KR4 BIT(6)
__le16 eeer_value;
u8 phy_id_oui[4]; /* PHY/Module ID connected on the port */
+ u8 phy_fw_ver[8];
u8 link_fec_options;
#define ICE_AQC_PHY_FEC_10G_KR_40G_KR4_EN BIT(0)
#define ICE_AQC_PHY_FEC_10G_KR_40G_KR4_REQ BIT(1)
@@ -940,6 +941,8 @@ struct ice_aqc_get_phy_caps_data {
#define ICE_AQC_PHY_FEC_25G_RS_544_REQ BIT(4)
#define ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN BIT(6)
#define ICE_AQC_PHY_FEC_25G_KR_CLAUSE74_EN BIT(7)
+#define ICE_AQC_PHY_FEC_MASK ICE_M(0xdf, 0)
+ u8 rsvd1; /* Byte 35 reserved */
u8 extended_compliance_code;
#define ICE_MODULE_TYPE_TOTAL_BYTE 3
u8 module_type[ICE_MODULE_TYPE_TOTAL_BYTE];
@@ -954,13 +957,14 @@ struct ice_aqc_get_phy_caps_data {
#define ICE_AQC_MOD_TYPE_BYTE2_SFP_PLUS 0xA0
#define ICE_AQC_MOD_TYPE_BYTE2_QSFP_PLUS 0x86
u8 qualified_module_count;
+ u8 rsvd2[7]; /* Bytes 47:41 reserved */
#define ICE_AQC_QUAL_MOD_COUNT_MAX 16
struct {
u8 v_oui[3];
- u8 rsvd1;
+ u8 rsvd3;
u8 v_part[16];
__le32 v_rev;
- __le64 rsvd8;
+ __le64 rsvd4;
} qual_modules[ICE_AQC_QUAL_MOD_COUNT_MAX];
};
@@ -1062,6 +1066,7 @@ struct ice_aqc_get_link_status_data {
#define ICE_AQ_LINK_25G_KR_FEC_EN BIT(0)
#define ICE_AQ_LINK_25G_RS_528_FEC_EN BIT(1)
#define ICE_AQ_LINK_25G_RS_544_FEC_EN BIT(2)
+#define ICE_AQ_FEC_MASK ICE_M(0x7, 0)
/* Pacing Config */
#define ICE_AQ_CFG_PACING_S 3
#define ICE_AQ_CFG_PACING_M (0xF << ICE_AQ_CFG_PACING_S)
@@ -1112,6 +1117,14 @@ struct ice_aqc_set_event_mask {
u8 reserved1[6];
};
+/* Set MAC Loopback command (direct 0x0620) */
+struct ice_aqc_set_mac_lb {
+ u8 lb_mode;
+#define ICE_AQ_MAC_LB_EN BIT(0)
+#define ICE_AQ_MAC_LB_OSC_CLK BIT(1)
+ u8 reserved[15];
+};
+
/* Set Port Identification LED (direct, 0x06E9) */
struct ice_aqc_set_port_id_led {
u8 lport_num;
@@ -1145,6 +1158,17 @@ struct ice_aqc_nvm {
__le32 addr_low;
};
+/* NVM Checksum Command (direct, 0x0706) */
+struct ice_aqc_nvm_checksum {
+ u8 flags;
+#define ICE_AQC_NVM_CHECKSUM_VERIFY BIT(0)
+#define ICE_AQC_NVM_CHECKSUM_RECALC BIT(1)
+ u8 rsvd;
+ __le16 checksum; /* Used only by response */
+#define ICE_AQC_NVM_CHECKSUM_CORRECT 0xBABA
+ u8 rsvd2[12];
+};
+
/**
* Send to PF command (indirect 0x0801) ID is only used by PF
*
@@ -1249,7 +1273,7 @@ struct ice_aqc_get_cee_dcb_cfg_resp {
};
/* Set Local LLDP MIB (indirect 0x0A08)
- * Used to replace the local MIB of a given LLDP agent. e.g. DCBx
+ * Used to replace the local MIB of a given LLDP agent. e.g. DCBX
*/
struct ice_aqc_lldp_set_local_mib {
u8 type;
@@ -1266,7 +1290,7 @@ struct ice_aqc_lldp_set_local_mib {
};
/* Stop/Start LLDP Agent (direct 0x0A09)
- * Used for stopping/starting specific LLDP agent. e.g. DCBx.
+ * Used for stopping/starting specific LLDP agent. e.g. DCBX.
* The same structure is used for the response, with the command field
* being used as the status field.
*/
@@ -1539,6 +1563,7 @@ struct ice_aq_desc {
struct ice_aqc_query_txsched_res query_sched_res;
struct ice_aqc_query_port_ets port_ets;
struct ice_aqc_nvm nvm;
+ struct ice_aqc_nvm_checksum nvm_checksum;
struct ice_aqc_pf_vf_msg virt;
struct ice_aqc_lldp_get_mib lldp_get_mib;
struct ice_aqc_lldp_set_mib_change lldp_set_event;
@@ -1554,6 +1579,7 @@ struct ice_aq_desc {
struct ice_aqc_add_update_free_vsi_resp add_update_free_vsi_res;
struct ice_aqc_fw_logging fw_logging;
struct ice_aqc_get_clear_fw_log get_clear_fw_log;
+ struct ice_aqc_set_mac_lb set_mac_lb;
struct ice_aqc_alloc_free_res_cmd sw_res_ctrl;
struct ice_aqc_set_event_mask set_event_mask;
struct ice_aqc_get_link_status get_link_status;
@@ -1642,10 +1668,12 @@ enum ice_adminq_opc {
ice_aqc_opc_restart_an = 0x0605,
ice_aqc_opc_get_link_status = 0x0607,
ice_aqc_opc_set_event_mask = 0x0613,
+ ice_aqc_opc_set_mac_lb = 0x0620,
ice_aqc_opc_set_port_id_led = 0x06E9,
/* NVM commands */
ice_aqc_opc_nvm_read = 0x0701,
+ ice_aqc_opc_nvm_checksum = 0x0706,
/* PF/VF mailbox commands */
ice_mbx_opc_send_msg_to_pf = 0x0801,
@@ -1671,6 +1699,7 @@ enum ice_adminq_opc {
/* debug commands */
ice_aqc_opc_fw_logging = 0xFF09,
+ ice_aqc_opc_fw_logging_info = 0xFF10,
};
#endif /* _ICE_ADMINQ_CMD_H_ */
diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
index da7878529929..2e0731c1e1a3 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.c
+++ b/drivers/net/ethernet/intel/ice/ice_common.c
@@ -51,9 +51,6 @@ static enum ice_status ice_set_mac_type(struct ice_hw *hw)
*/
void ice_dev_onetime_setup(struct ice_hw *hw)
{
- /* configure Rx - set non pxe mode */
- wr32(hw, GLLAN_RCTL_0, 0x1);
-
#define MBX_PF_VT_PFALLOC 0x00231E80
/* set VFs per PF */
wr32(hw, MBX_PF_VT_PFALLOC, rd32(hw, PF_VT_PFALLOC_HIF));
@@ -307,6 +304,8 @@ ice_aq_get_link_info(struct ice_port_info *pi, bool ena_lse,
hw_link_info->an_info = link_data.an_info;
hw_link_info->ext_info = link_data.ext_info;
hw_link_info->max_frame_size = le16_to_cpu(link_data.max_frame_size);
+ hw_link_info->fec_info = link_data.cfg & ICE_AQ_FEC_MASK;
+ hw_link_info->topo_media_conflict = link_data.topo_media_conflict;
hw_link_info->pacing = link_data.cfg & ICE_AQ_CFG_PACING_M;
/* update fc info */
@@ -476,6 +475,49 @@ static void ice_cleanup_fltr_mgmt_struct(struct ice_hw *hw)
ICE_FW_LOG_DESC_SIZE(ICE_AQC_FW_LOG_ID_MAX)
/**
+ * ice_get_fw_log_cfg - get FW logging configuration
+ * @hw: pointer to the HW struct
+ */
+static enum ice_status ice_get_fw_log_cfg(struct ice_hw *hw)
+{
+ struct ice_aqc_fw_logging_data *config;
+ struct ice_aq_desc desc;
+ enum ice_status status;
+ u16 size;
+
+ size = ICE_FW_LOG_DESC_SIZE_MAX;
+ config = devm_kzalloc(ice_hw_to_dev(hw), size, GFP_KERNEL);
+ if (!config)
+ return ICE_ERR_NO_MEMORY;
+
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_fw_logging_info);
+
+ desc.flags |= cpu_to_le16(ICE_AQ_FLAG_BUF);
+ desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD);
+
+ status = ice_aq_send_cmd(hw, &desc, config, size, NULL);
+ if (!status) {
+ u16 i;
+
+ /* Save FW logging information into the HW structure */
+ for (i = 0; i < ICE_AQC_FW_LOG_ID_MAX; i++) {
+ u16 v, m, flgs;
+
+ v = le16_to_cpu(config->entry[i]);
+ m = (v & ICE_AQC_FW_LOG_ID_M) >> ICE_AQC_FW_LOG_ID_S;
+ flgs = (v & ICE_AQC_FW_LOG_EN_M) >> ICE_AQC_FW_LOG_EN_S;
+
+ if (m < ICE_AQC_FW_LOG_ID_MAX)
+ hw->fw_log.evnts[m].cur = flgs;
+ }
+ }
+
+ devm_kfree(ice_hw_to_dev(hw), config);
+
+ return status;
+}
+
+/**
* ice_cfg_fw_log - configure FW logging
* @hw: pointer to the HW struct
* @enable: enable certain FW logging events if true, disable all if false
@@ -529,6 +571,11 @@ static enum ice_status ice_cfg_fw_log(struct ice_hw *hw, bool enable)
(!hw->fw_log.actv_evnts || !ice_check_sq_alive(hw, &hw->adminq)))
return 0;
+ /* Get current FW log settings */
+ status = ice_get_fw_log_cfg(hw);
+ if (status)
+ return status;
+
ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_fw_logging);
cmd = &desc.params.fw_logging;
@@ -634,17 +681,17 @@ out:
*/
void ice_output_fw_log(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf)
{
- ice_debug(hw, ICE_DBG_AQ_MSG, "[ FW Log Msg Start ]\n");
- ice_debug_array(hw, ICE_DBG_AQ_MSG, 16, 1, (u8 *)buf,
+ ice_debug(hw, ICE_DBG_FW_LOG, "[ FW Log Msg Start ]\n");
+ ice_debug_array(hw, ICE_DBG_FW_LOG, 16, 1, (u8 *)buf,
le16_to_cpu(desc->datalen));
- ice_debug(hw, ICE_DBG_AQ_MSG, "[ FW Log Msg End ]\n");
+ ice_debug(hw, ICE_DBG_FW_LOG, "[ FW Log Msg End ]\n");
}
/**
* ice_get_itr_intrl_gran - determine int/intrl granularity
* @hw: pointer to the HW struct
*
- * Determines the itr/intrl granularities based on the maximum aggregate
+ * Determines the ITR/intrl granularities based on the maximum aggregate
* bandwidth according to the device's configuration during power-on.
*/
static void ice_get_itr_intrl_gran(struct ice_hw *hw)
@@ -815,6 +862,10 @@ err_unroll_cqinit:
/**
* ice_deinit_hw - unroll initialization operations done by ice_init_hw
* @hw: pointer to the hardware structure
+ *
+ * This should be called only during nominal operation, not as a result of
+ * ice_init_hw() failing since ice_init_hw() will take care of unrolling
+ * applicable initializations if it fails for any reason.
*/
void ice_deinit_hw(struct ice_hw *hw)
{
@@ -1447,6 +1498,7 @@ ice_parse_caps(struct ice_hw *hw, void *buf, u32 cap_count,
struct ice_hw_func_caps *func_p = NULL;
struct ice_hw_dev_caps *dev_p = NULL;
struct ice_hw_common_caps *caps;
+ char const *prefix;
u32 i;
if (!buf)
@@ -1457,9 +1509,11 @@ ice_parse_caps(struct ice_hw *hw, void *buf, u32 cap_count,
if (opc == ice_aqc_opc_list_dev_caps) {
dev_p = &hw->dev_caps;
caps = &dev_p->common_cap;
+ prefix = "dev cap";
} else if (opc == ice_aqc_opc_list_func_caps) {
func_p = &hw->func_caps;
caps = &func_p->common_cap;
+ prefix = "func cap";
} else {
ice_debug(hw, ICE_DBG_INIT, "wrong opcode\n");
return;
@@ -1475,28 +1529,29 @@ ice_parse_caps(struct ice_hw *hw, void *buf, u32 cap_count,
case ICE_AQC_CAPS_VALID_FUNCTIONS:
caps->valid_functions = number;
ice_debug(hw, ICE_DBG_INIT,
- "HW caps: Valid Functions = %d\n",
+ "%s: valid functions = %d\n", prefix,
caps->valid_functions);
break;
case ICE_AQC_CAPS_SRIOV:
caps->sr_iov_1_1 = (number == 1);
ice_debug(hw, ICE_DBG_INIT,
- "HW caps: SR-IOV = %d\n", caps->sr_iov_1_1);
+ "%s: SR-IOV = %d\n", prefix,
+ caps->sr_iov_1_1);
break;
case ICE_AQC_CAPS_VF:
if (dev_p) {
dev_p->num_vfs_exposed = number;
ice_debug(hw, ICE_DBG_INIT,
- "HW caps: VFs exposed = %d\n",
+ "%s: VFs exposed = %d\n", prefix,
dev_p->num_vfs_exposed);
} else if (func_p) {
func_p->num_allocd_vfs = number;
func_p->vf_base_id = logical_id;
ice_debug(hw, ICE_DBG_INIT,
- "HW caps: VFs allocated = %d\n",
+ "%s: VFs allocated = %d\n", prefix,
func_p->num_allocd_vfs);
ice_debug(hw, ICE_DBG_INIT,
- "HW caps: VF base_id = %d\n",
+ "%s: VF base_id = %d\n", prefix,
func_p->vf_base_id);
}
break;
@@ -1504,69 +1559,69 @@ ice_parse_caps(struct ice_hw *hw, void *buf, u32 cap_count,
if (dev_p) {
dev_p->num_vsi_allocd_to_host = number;
ice_debug(hw, ICE_DBG_INIT,
- "HW caps: Dev.VSI cnt = %d\n",
+ "%s: num VSI alloc to host = %d\n",
+ prefix,
dev_p->num_vsi_allocd_to_host);
} else if (func_p) {
func_p->guar_num_vsi =
ice_get_num_per_func(hw, ICE_MAX_VSI);
ice_debug(hw, ICE_DBG_INIT,
- "HW caps: Func.VSI cnt = %d\n",
- number);
+ "%s: num guaranteed VSI (fw) = %d\n",
+ prefix, number);
+ ice_debug(hw, ICE_DBG_INIT,
+ "%s: num guaranteed VSI = %d\n",
+ prefix, func_p->guar_num_vsi);
}
break;
case ICE_AQC_CAPS_RSS:
caps->rss_table_size = number;
caps->rss_table_entry_width = logical_id;
ice_debug(hw, ICE_DBG_INIT,
- "HW caps: RSS table size = %d\n",
+ "%s: RSS table size = %d\n", prefix,
caps->rss_table_size);
ice_debug(hw, ICE_DBG_INIT,
- "HW caps: RSS table width = %d\n",
+ "%s: RSS table width = %d\n", prefix,
caps->rss_table_entry_width);
break;
case ICE_AQC_CAPS_RXQS:
caps->num_rxq = number;
caps->rxq_first_id = phys_id;
ice_debug(hw, ICE_DBG_INIT,
- "HW caps: Num Rx Qs = %d\n", caps->num_rxq);
+ "%s: num Rx queues = %d\n", prefix,
+ caps->num_rxq);
ice_debug(hw, ICE_DBG_INIT,
- "HW caps: Rx first queue ID = %d\n",
+ "%s: Rx first queue ID = %d\n", prefix,
caps->rxq_first_id);
break;
case ICE_AQC_CAPS_TXQS:
caps->num_txq = number;
caps->txq_first_id = phys_id;
ice_debug(hw, ICE_DBG_INIT,
- "HW caps: Num Tx Qs = %d\n", caps->num_txq);
+ "%s: num Tx queues = %d\n", prefix,
+ caps->num_txq);
ice_debug(hw, ICE_DBG_INIT,
- "HW caps: Tx first queue ID = %d\n",
+ "%s: Tx first queue ID = %d\n", prefix,
caps->txq_first_id);
break;
case ICE_AQC_CAPS_MSIX:
caps->num_msix_vectors = number;
caps->msix_vector_first_id = phys_id;
ice_debug(hw, ICE_DBG_INIT,
- "HW caps: MSIX vector count = %d\n",
+ "%s: MSIX vector count = %d\n", prefix,
caps->num_msix_vectors);
ice_debug(hw, ICE_DBG_INIT,
- "HW caps: MSIX first vector index = %d\n",
+ "%s: MSIX first vector index = %d\n", prefix,
caps->msix_vector_first_id);
break;
case ICE_AQC_CAPS_MAX_MTU:
caps->max_mtu = number;
- if (dev_p)
- ice_debug(hw, ICE_DBG_INIT,
- "HW caps: Dev.MaxMTU = %d\n",
- caps->max_mtu);
- else if (func_p)
- ice_debug(hw, ICE_DBG_INIT,
- "HW caps: func.MaxMTU = %d\n",
- caps->max_mtu);
+ ice_debug(hw, ICE_DBG_INIT, "%s: max MTU = %d\n",
+ prefix, caps->max_mtu);
break;
default:
ice_debug(hw, ICE_DBG_INIT,
- "HW caps: Unknown capability[%d]: 0x%x\n", i,
- cap);
+ "%s: unknown capability[%d]: 0x%x\n", prefix,
+ i, cap);
break;
}
}
@@ -1947,36 +2002,37 @@ ice_aq_set_phy_cfg(struct ice_hw *hw, u8 lport,
*/
enum ice_status ice_update_link_info(struct ice_port_info *pi)
{
- struct ice_aqc_get_phy_caps_data *pcaps;
- struct ice_phy_info *phy_info;
+ struct ice_link_status *li;
enum ice_status status;
- struct ice_hw *hw;
if (!pi)
return ICE_ERR_PARAM;
- hw = pi->hw;
-
- pcaps = devm_kzalloc(ice_hw_to_dev(hw), sizeof(*pcaps), GFP_KERNEL);
- if (!pcaps)
- return ICE_ERR_NO_MEMORY;
+ li = &pi->phy.link_info;
- phy_info = &pi->phy;
status = ice_aq_get_link_info(pi, true, NULL, NULL);
if (status)
- goto out;
+ return status;
+
+ if (li->link_info & ICE_AQ_MEDIA_AVAILABLE) {
+ struct ice_aqc_get_phy_caps_data *pcaps;
+ struct ice_hw *hw;
+
+ hw = pi->hw;
+ pcaps = devm_kzalloc(ice_hw_to_dev(hw), sizeof(*pcaps),
+ GFP_KERNEL);
+ if (!pcaps)
+ return ICE_ERR_NO_MEMORY;
- if (phy_info->link_info.link_info & ICE_AQ_MEDIA_AVAILABLE) {
status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_SW_CFG,
pcaps, NULL);
- if (status)
- goto out;
+ if (!status)
+ memcpy(li->module_type, &pcaps->module_type,
+ sizeof(li->module_type));
- memcpy(phy_info->link_info.module_type, &pcaps->module_type,
- sizeof(phy_info->link_info.module_type));
+ devm_kfree(ice_hw_to_dev(hw), pcaps);
}
-out:
- devm_kfree(ice_hw_to_dev(hw), pcaps);
+
return status;
}
@@ -2081,6 +2137,74 @@ out:
}
/**
+ * ice_copy_phy_caps_to_cfg - Copy PHY ability data to configuration data
+ * @caps: PHY ability structure to copy data from
+ * @cfg: PHY configuration structure to copy data to
+ *
+ * Helper function to copy AQC PHY get ability data to PHY set configuration
+ * data structure
+ */
+void
+ice_copy_phy_caps_to_cfg(struct ice_aqc_get_phy_caps_data *caps,
+ struct ice_aqc_set_phy_cfg_data *cfg)
+{
+ if (!caps || !cfg)
+ return;
+
+ cfg->phy_type_low = caps->phy_type_low;
+ cfg->phy_type_high = caps->phy_type_high;
+ cfg->caps = caps->caps;
+ cfg->low_power_ctrl = caps->low_power_ctrl;
+ cfg->eee_cap = caps->eee_cap;
+ cfg->eeer_value = caps->eeer_value;
+ cfg->link_fec_opt = caps->link_fec_options;
+}
+
+/**
+ * ice_cfg_phy_fec - Configure PHY FEC data based on FEC mode
+ * @cfg: PHY configuration data to set FEC mode
+ * @fec: FEC mode to configure
+ *
+ * Caller should copy ice_aqc_get_phy_caps_data.caps ICE_AQC_PHY_EN_AUTO_FEC
+ * (bit 7) and ice_aqc_get_phy_caps_data.link_fec_options to cfg.caps
+ * ICE_AQ_PHY_ENA_AUTO_FEC (bit 7) and cfg.link_fec_options before calling.
+ */
+void
+ice_cfg_phy_fec(struct ice_aqc_set_phy_cfg_data *cfg, enum ice_fec_mode fec)
+{
+ switch (fec) {
+ case ICE_FEC_BASER:
+ /* Clear auto FEC and RS bits, and AND BASE-R ability
+ * bits and OR request bits.
+ */
+ cfg->caps &= ~ICE_AQC_PHY_EN_AUTO_FEC;
+ cfg->link_fec_opt &= ICE_AQC_PHY_FEC_10G_KR_40G_KR4_EN |
+ ICE_AQC_PHY_FEC_25G_KR_CLAUSE74_EN;
+ cfg->link_fec_opt |= ICE_AQC_PHY_FEC_10G_KR_40G_KR4_REQ |
+ ICE_AQC_PHY_FEC_25G_KR_REQ;
+ break;
+ case ICE_FEC_RS:
+ /* Clear auto FEC and BASE-R bits, and AND RS ability
+ * bits and OR request bits.
+ */
+ cfg->caps &= ~ICE_AQC_PHY_EN_AUTO_FEC;
+ cfg->link_fec_opt &= ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN;
+ cfg->link_fec_opt |= ICE_AQC_PHY_FEC_25G_RS_528_REQ |
+ ICE_AQC_PHY_FEC_25G_RS_544_REQ;
+ break;
+ case ICE_FEC_NONE:
+ /* Clear auto FEC and all FEC option bits. */
+ cfg->caps &= ~ICE_AQC_PHY_EN_AUTO_FEC;
+ cfg->link_fec_opt &= ~ICE_AQC_PHY_FEC_MASK;
+ break;
+ case ICE_FEC_AUTO:
+ /* AND auto FEC bit, and all caps bits. */
+ cfg->caps &= ICE_AQC_PHY_CAPS_MASK;
+ break;
+ }
+}
+
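To make the caller contract above concrete, here is a condensed sketch of how a caller seeds the configuration from the reported PHY abilities before selecting a FEC mode; the helper name is hypothetical, and the full flow appears in ice_set_fec_cfg() in the ice_ethtool.c hunk later in this diff.

	static void
	ice_fec_caller_sketch(struct ice_aqc_get_phy_caps_data *caps,
			      struct ice_aqc_set_phy_cfg_data *cfg,
			      enum ice_fec_mode fec)
	{
		/* Copy the auto-FEC ability bit and the FEC option bits from the
		 * reported abilities, as required above, then apply the mode.
		 */
		cfg->caps |= (caps->caps & ICE_AQC_PHY_EN_AUTO_FEC);
		cfg->link_fec_opt = caps->link_fec_options;
		ice_cfg_phy_fec(cfg, fec);
	}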
+/**
* ice_get_link_status - get status of the HW network link
* @pi: port information structure
* @link_up: pointer to bool (true/false = linkup/linkdown)
@@ -2169,6 +2293,29 @@ ice_aq_set_event_mask(struct ice_hw *hw, u8 port_num, u16 mask,
}
/**
+ * ice_aq_set_mac_loopback
+ * @hw: pointer to the HW struct
+ * @ena_lpbk: Enable or Disable loopback
+ * @cd: pointer to command details structure or NULL
+ *
+ * Enable/disable loopback on a given port
+ */
+enum ice_status
+ice_aq_set_mac_loopback(struct ice_hw *hw, bool ena_lpbk, struct ice_sq_cd *cd)
+{
+ struct ice_aqc_set_mac_lb *cmd;
+ struct ice_aq_desc desc;
+
+ cmd = &desc.params.set_mac_lb;
+
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_mac_lb);
+ if (ena_lpbk)
+ cmd->lb_mode = ICE_AQ_MAC_LB_EN;
+
+ return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+/**
* ice_aq_set_port_id_led
* @pi: pointer to the port information
* @is_orig_mode: is this LED set to original mode (by the net-list)
@@ -2552,7 +2699,7 @@ do_aq:
ice_debug(hw, ICE_DBG_SCHED, "VM%d disable failed %d\n",
vmvf_num, hw->adminq.sq_last_status);
else
- ice_debug(hw, ICE_DBG_SCHED, "disable Q %d failed %d\n",
+ ice_debug(hw, ICE_DBG_SCHED, "disable queue %d failed %d\n",
le16_to_cpu(qg_list[0].q_id[0]),
hw->adminq.sq_last_status);
}
@@ -2924,7 +3071,6 @@ ice_dis_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_queues,
if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
return ICE_ERR_CFG;
-
if (!num_queues) {
/* if queue is disabled already yet the disable queue command
* has to be sent to complete the VF reset, then call
diff --git a/drivers/net/ethernet/intel/ice/ice_common.h b/drivers/net/ethernet/intel/ice/ice_common.h
index f1ddebf45231..d1f8353fe6bb 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.h
+++ b/drivers/net/ethernet/intel/ice/ice_common.h
@@ -9,6 +9,8 @@
#include "ice_switch.h"
#include <linux/avf/virtchnl.h>
+enum ice_status ice_nvm_validate_checksum(struct ice_hw *hw);
+
void
ice_debug_cq(struct ice_hw *hw, u32 mask, void *desc, void *buf, u16 buf_len);
enum ice_status ice_init_hw(struct ice_hw *hw);
@@ -84,7 +86,11 @@ ice_aq_set_phy_cfg(struct ice_hw *hw, u8 lport,
enum ice_status
ice_set_fc(struct ice_port_info *pi, u8 *aq_failures,
bool ena_auto_link_update);
-
+void
+ice_cfg_phy_fec(struct ice_aqc_set_phy_cfg_data *cfg, enum ice_fec_mode fec);
+void
+ice_copy_phy_caps_to_cfg(struct ice_aqc_get_phy_caps_data *caps,
+ struct ice_aqc_set_phy_cfg_data *cfg);
enum ice_status
ice_aq_set_link_restart_an(struct ice_port_info *pi, bool ena_link,
struct ice_sq_cd *cd);
@@ -95,6 +101,9 @@ enum ice_status
ice_aq_set_event_mask(struct ice_hw *hw, u8 port_num, u16 mask,
struct ice_sq_cd *cd);
enum ice_status
+ice_aq_set_mac_loopback(struct ice_hw *hw, bool ena_lpbk, struct ice_sq_cd *cd);
+
+enum ice_status
ice_aq_set_port_id_led(struct ice_port_info *pi, bool is_orig_mode,
struct ice_sq_cd *cd);
diff --git a/drivers/net/ethernet/intel/ice/ice_controlq.c b/drivers/net/ethernet/intel/ice/ice_controlq.c
index cc8cb5fdcdc1..e91ac4df0242 100644
--- a/drivers/net/ethernet/intel/ice/ice_controlq.c
+++ b/drivers/net/ethernet/intel/ice/ice_controlq.c
@@ -439,7 +439,7 @@ do { \
/* free the buffer info list */ \
if ((qi)->ring.cmd_buf) \
devm_kfree(ice_hw_to_dev(hw), (qi)->ring.cmd_buf); \
- /* free dma head */ \
+ /* free DMA head */ \
devm_kfree(ice_hw_to_dev(hw), (qi)->ring.dma_head); \
} while (0)
diff --git a/drivers/net/ethernet/intel/ice/ice_controlq.h b/drivers/net/ethernet/intel/ice/ice_controlq.h
index e0585394d984..44945c2165d8 100644
--- a/drivers/net/ethernet/intel/ice/ice_controlq.h
+++ b/drivers/net/ethernet/intel/ice/ice_controlq.h
@@ -35,7 +35,7 @@ enum ice_ctl_q {
#define ICE_CTL_Q_SQ_CMD_TIMEOUT 250 /* msecs */
struct ice_ctl_q_ring {
- void *dma_head; /* Virtual address to dma head */
+ void *dma_head; /* Virtual address to DMA head */
struct ice_dma_mem desc_buf; /* descriptor ring memory */
void *cmd_buf; /* command buffer memory */
diff --git a/drivers/net/ethernet/intel/ice/ice_dcb.c b/drivers/net/ethernet/intel/ice/ice_dcb.c
index 8bbf48e04a1c..c2002ded65f6 100644
--- a/drivers/net/ethernet/intel/ice/ice_dcb.c
+++ b/drivers/net/ethernet/intel/ice/ice_dcb.c
@@ -82,12 +82,14 @@ ice_aq_cfg_lldp_mib_change(struct ice_hw *hw, bool ena_update,
* @hw: pointer to the HW struct
* @shutdown_lldp_agent: True if LLDP Agent needs to be Shutdown
* False if LLDP Agent needs to be Stopped
+ * @persist: True if Stop/Shutdown of LLDP Agent needs to be persistent across
+ * reboots
* @cd: pointer to command details structure or NULL
*
* Stop or Shutdown the embedded LLDP Agent (0x0A05)
*/
enum ice_status
-ice_aq_stop_lldp(struct ice_hw *hw, bool shutdown_lldp_agent,
+ice_aq_stop_lldp(struct ice_hw *hw, bool shutdown_lldp_agent, bool persist,
struct ice_sq_cd *cd)
{
struct ice_aqc_lldp_stop *cmd;
@@ -100,17 +102,22 @@ ice_aq_stop_lldp(struct ice_hw *hw, bool shutdown_lldp_agent,
if (shutdown_lldp_agent)
cmd->command |= ICE_AQ_LLDP_AGENT_SHUTDOWN;
+ if (persist)
+ cmd->command |= ICE_AQ_LLDP_AGENT_PERSIST_DIS;
+
return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
}
/**
* ice_aq_start_lldp
* @hw: pointer to the HW struct
+ * @persist: True if Start of LLDP Agent needs to be persistent across reboots
* @cd: pointer to command details structure or NULL
*
* Start the embedded LLDP Agent on all ports. (0x0A06)
*/
-enum ice_status ice_aq_start_lldp(struct ice_hw *hw, struct ice_sq_cd *cd)
+enum ice_status
+ice_aq_start_lldp(struct ice_hw *hw, bool persist, struct ice_sq_cd *cd)
{
struct ice_aqc_lldp_start *cmd;
struct ice_aq_desc desc;
@@ -121,6 +128,9 @@ enum ice_status ice_aq_start_lldp(struct ice_hw *hw, struct ice_sq_cd *cd)
cmd->command = ICE_AQ_LLDP_AGENT_START;
+ if (persist)
+ cmd->command |= ICE_AQ_LLDP_AGENT_PERSIST_ENA;
+
return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
}
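A minimal sketch of how the new persist flag might be used to bring the FW LLDP agent up so the setting survives reboots and then (re)start the DCBX agent on top of it. The helper name is hypothetical and error handling is omitted; it only strings together the two AQ wrappers whose prototypes appear in this patch.

	static void ice_fw_lldp_on_sketch(struct ice_hw *hw)
	{
		bool dcbx_agent_status;

		/* Start the embedded LLDP agent, persisting across reboots */
		ice_aq_start_lldp(hw, true, NULL);

		/* Then ask firmware to start the DCBX agent */
		ice_aq_start_stop_dcbx(hw, true, &dcbx_agent_status, NULL);
	}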
@@ -163,7 +173,7 @@ ice_aq_set_lldp_mib(struct ice_hw *hw, u8 mib_type, void *buf, u16 buf_size,
*
* Get the DCBX status from the Firmware
*/
-u8 ice_get_dcbx_status(struct ice_hw *hw)
+static u8 ice_get_dcbx_status(struct ice_hw *hw)
{
u32 reg;
@@ -614,7 +624,8 @@ ice_parse_org_tlv(struct ice_lldp_org_tlv *tlv, struct ice_dcbx_cfg *dcbcfg)
*
* Parse DCB configuration from the LLDPDU
*/
-enum ice_status ice_lldp_to_dcb_cfg(u8 *lldpmib, struct ice_dcbx_cfg *dcbcfg)
+static enum ice_status
+ice_lldp_to_dcb_cfg(u8 *lldpmib, struct ice_dcbx_cfg *dcbcfg)
{
struct ice_lldp_org_tlv *tlv;
enum ice_status ret = 0;
@@ -658,13 +669,13 @@ enum ice_status ice_lldp_to_dcb_cfg(u8 *lldpmib, struct ice_dcbx_cfg *dcbcfg)
/**
* ice_aq_get_dcb_cfg
* @hw: pointer to the HW struct
- * @mib_type: mib type for the query
+ * @mib_type: MIB type for the query
* @bridgetype: bridge type for the query (remote)
* @dcbcfg: store for LLDPDU data
*
* Query DCB configuration from the firmware
*/
-static enum ice_status
+enum ice_status
ice_aq_get_dcb_cfg(struct ice_hw *hw, u8 mib_type, u8 bridgetype,
struct ice_dcbx_cfg *dcbcfg)
{
@@ -689,13 +700,13 @@ ice_aq_get_dcb_cfg(struct ice_hw *hw, u8 mib_type, u8 bridgetype,
}
/**
- * ice_aq_start_stop_dcbx - Start/Stop DCBx service in FW
+ * ice_aq_start_stop_dcbx - Start/Stop DCBX service in FW
* @hw: pointer to the HW struct
- * @start_dcbx_agent: True if DCBx Agent needs to be started
- * False if DCBx Agent needs to be stopped
- * @dcbx_agent_status: FW indicates back the DCBx agent status
- * True if DCBx Agent is active
- * False if DCBx Agent is stopped
+ * @start_dcbx_agent: True if DCBX Agent needs to be started
+ * False if DCBX Agent needs to be stopped
+ * @dcbx_agent_status: FW indicates back the DCBX agent status
+ * True if DCBX Agent is active
+ * False if DCBX Agent is stopped
* @cd: pointer to command details structure or NULL
*
* Start/Stop the embedded dcbx Agent. In case that this wrapper function
diff --git a/drivers/net/ethernet/intel/ice/ice_dcb.h b/drivers/net/ethernet/intel/ice/ice_dcb.h
index e7d4416e3a66..522e1452abe2 100644
--- a/drivers/net/ethernet/intel/ice/ice_dcb.h
+++ b/drivers/net/ethernet/intel/ice/ice_dcb.h
@@ -120,8 +120,9 @@ struct ice_cee_app_prio {
u8 prio_map;
} __packed;
-u8 ice_get_dcbx_status(struct ice_hw *hw);
-enum ice_status ice_lldp_to_dcb_cfg(u8 *lldpmib, struct ice_dcbx_cfg *dcbcfg);
+enum ice_status
+ice_aq_get_dcb_cfg(struct ice_hw *hw, u8 mib_type, u8 bridgetype,
+ struct ice_dcbx_cfg *dcbcfg);
enum ice_status ice_get_dcb_cfg(struct ice_port_info *pi);
enum ice_status ice_set_dcb_cfg(struct ice_port_info *pi);
enum ice_status ice_init_dcb(struct ice_hw *hw);
@@ -131,9 +132,10 @@ ice_query_port_ets(struct ice_port_info *pi,
struct ice_sq_cd *cmd_details);
#ifdef CONFIG_DCB
enum ice_status
-ice_aq_stop_lldp(struct ice_hw *hw, bool shutdown_lldp_agent,
+ice_aq_stop_lldp(struct ice_hw *hw, bool shutdown_lldp_agent, bool persist,
struct ice_sq_cd *cd);
-enum ice_status ice_aq_start_lldp(struct ice_hw *hw, struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_start_lldp(struct ice_hw *hw, bool persist, struct ice_sq_cd *cd);
enum ice_status
ice_aq_start_stop_dcbx(struct ice_hw *hw, bool start_dcbx_agent,
bool *dcbx_agent_status, struct ice_sq_cd *cd);
@@ -144,6 +146,7 @@ ice_aq_cfg_lldp_mib_change(struct ice_hw *hw, bool ena_update,
static inline enum ice_status
ice_aq_stop_lldp(struct ice_hw __always_unused *hw,
bool __always_unused shutdown_lldp_agent,
+ bool __always_unused persist,
struct ice_sq_cd __always_unused *cd)
{
return 0;
@@ -151,6 +154,7 @@ ice_aq_stop_lldp(struct ice_hw __always_unused *hw,
static inline enum ice_status
ice_aq_start_lldp(struct ice_hw __always_unused *hw,
+ bool __always_unused persist,
struct ice_sq_cd __always_unused *cd)
{
return 0;
diff --git a/drivers/net/ethernet/intel/ice/ice_dcb_lib.c b/drivers/net/ethernet/intel/ice/ice_dcb_lib.c
index 3e81af1884fc..fe88b127ca42 100644
--- a/drivers/net/ethernet/intel/ice/ice_dcb_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_dcb_lib.c
@@ -120,12 +120,14 @@ static void ice_pf_dcb_recfg(struct ice_pf *pf)
tc_map = ICE_DFLT_TRAFFIC_CLASS;
ret = ice_vsi_cfg_tc(pf->vsi[v], tc_map);
- if (ret)
+ if (ret) {
dev_err(&pf->pdev->dev,
"Failed to config TC for VSI index: %d\n",
pf->vsi[v]->idx);
- else
- ice_vsi_map_rings_to_vectors(pf->vsi[v]);
+ continue;
+ }
+
+ ice_vsi_map_rings_to_vectors(pf->vsi[v]);
}
}
@@ -133,8 +135,10 @@ static void ice_pf_dcb_recfg(struct ice_pf *pf)
* ice_pf_dcb_cfg - Apply new DCB configuration
* @pf: pointer to the PF struct
* @new_cfg: DCBX config to apply
+ * @locked: is the RTNL held
*/
-static int ice_pf_dcb_cfg(struct ice_pf *pf, struct ice_dcbx_cfg *new_cfg)
+static
+int ice_pf_dcb_cfg(struct ice_pf *pf, struct ice_dcbx_cfg *new_cfg, bool locked)
{
struct ice_dcbx_cfg *old_cfg, *curr_cfg;
struct ice_aqc_port_ets_elem buf = { 0 };
@@ -163,7 +167,8 @@ static int ice_pf_dcb_cfg(struct ice_pf *pf, struct ice_dcbx_cfg *new_cfg)
/* avoid race conditions by holding the lock while disabling and
* re-enabling the VSI
*/
- rtnl_lock();
+ if (!locked)
+ rtnl_lock();
ice_pf_dis_all_vsi(pf, true);
memcpy(curr_cfg, new_cfg, sizeof(*curr_cfg));
@@ -192,7 +197,8 @@ static int ice_pf_dcb_cfg(struct ice_pf *pf, struct ice_dcbx_cfg *new_cfg)
out:
ice_pf_ena_all_vsi(pf, true);
- rtnl_unlock();
+ if (!locked)
+ rtnl_unlock();
devm_kfree(&pf->pdev->dev, old_cfg);
return ret;
}
@@ -271,15 +277,16 @@ dcb_error:
prev_cfg->etscfg.tcbwtable[0] = ICE_TC_MAX_BW;
prev_cfg->etscfg.tsatable[0] = ICE_IEEE_TSA_ETS;
memcpy(&prev_cfg->etsrec, &prev_cfg->etscfg, sizeof(prev_cfg->etsrec));
- ice_pf_dcb_cfg(pf, prev_cfg);
+ ice_pf_dcb_cfg(pf, prev_cfg, false);
devm_kfree(&pf->pdev->dev, prev_cfg);
}
/**
* ice_dcb_init_cfg - set the initial DCB config in SW
- * @pf: pf to apply config to
+ * @pf: PF to apply config to
+ * @locked: Is the RTNL held
*/
-static int ice_dcb_init_cfg(struct ice_pf *pf)
+static int ice_dcb_init_cfg(struct ice_pf *pf, bool locked)
{
struct ice_dcbx_cfg *newcfg;
struct ice_port_info *pi;
@@ -294,7 +301,7 @@ static int ice_dcb_init_cfg(struct ice_pf *pf)
memset(&pi->local_dcbx_cfg, 0, sizeof(*newcfg));
dev_info(&pf->pdev->dev, "Configuring initial DCB values\n");
- if (ice_pf_dcb_cfg(pf, newcfg))
+ if (ice_pf_dcb_cfg(pf, newcfg, locked))
ret = -EINVAL;
devm_kfree(&pf->pdev->dev, newcfg);
@@ -304,9 +311,10 @@ static int ice_dcb_init_cfg(struct ice_pf *pf)
/**
* ice_dcb_sw_default_config - Apply a default DCB config
- * @pf: pf to apply config to
+ * @pf: PF to apply config to
+ * @locked: was this function called with RTNL held
*/
-static int ice_dcb_sw_dflt_cfg(struct ice_pf *pf)
+static int ice_dcb_sw_dflt_cfg(struct ice_pf *pf, bool locked)
{
struct ice_aqc_port_ets_elem buf = { 0 };
struct ice_dcbx_cfg *dcbcfg;
@@ -338,7 +346,7 @@ static int ice_dcb_sw_dflt_cfg(struct ice_pf *pf)
dcbcfg->app[0].priority = 3;
dcbcfg->app[0].prot_id = ICE_APP_PROT_ID_FCOE;
- ret = ice_pf_dcb_cfg(pf, dcbcfg);
+ ret = ice_pf_dcb_cfg(pf, dcbcfg, locked);
devm_kfree(&pf->pdev->dev, dcbcfg);
if (ret)
return ret;
@@ -348,9 +356,10 @@ static int ice_dcb_sw_dflt_cfg(struct ice_pf *pf)
/**
* ice_init_pf_dcb - initialize DCB for a PF
- * @pf: pf to initiialize DCB for
+ * @pf: PF to initialize DCB for
+ * @locked: Was function called with RTNL held
*/
-int ice_init_pf_dcb(struct ice_pf *pf)
+int ice_init_pf_dcb(struct ice_pf *pf, bool locked)
{
struct device *dev = &pf->pdev->dev;
struct ice_port_info *port_info;
@@ -360,33 +369,10 @@ int ice_init_pf_dcb(struct ice_pf *pf)
port_info = hw->port_info;
- /* check if device is DCB capable */
- if (!hw->func_caps.common_cap.dcb) {
- dev_dbg(dev, "DCB not supported\n");
- return -EOPNOTSUPP;
- }
-
- /* Best effort to put DCBx and LLDP into a good state */
- port_info->dcbx_status = ice_get_dcbx_status(hw);
- if (port_info->dcbx_status != ICE_DCBX_STATUS_DONE &&
- port_info->dcbx_status != ICE_DCBX_STATUS_IN_PROGRESS) {
- bool dcbx_status;
-
- /* Attempt to start LLDP engine. Ignore errors
- * as this will error if it is already started
- */
- ice_aq_start_lldp(hw, NULL);
-
- /* Attempt to start DCBX. Ignore errors as this
- * will error if it is already started
- */
- ice_aq_start_stop_dcbx(hw, true, &dcbx_status, NULL);
- }
-
err = ice_init_dcb(hw);
if (err) {
- /* FW LLDP not in usable state, default to SW DCBx/LLDP */
- dev_info(&pf->pdev->dev, "FW LLDP not in usable state\n");
+ /* FW LLDP is not active, default to SW DCBX/LLDP */
+ dev_info(&pf->pdev->dev, "FW LLDP is not active\n");
hw->port_info->dcbx_status = ICE_DCBX_STATUS_NOT_STARTED;
hw->port_info->is_sw_lldp = true;
}
@@ -398,15 +384,16 @@ int ice_init_pf_dcb(struct ice_pf *pf)
if (port_info->is_sw_lldp) {
sw_default = 1;
dev_info(&pf->pdev->dev, "DCBx/LLDP in SW mode.\n");
+ clear_bit(ICE_FLAG_ENABLE_FW_LLDP, pf->flags);
+ } else {
+ set_bit(ICE_FLAG_ENABLE_FW_LLDP, pf->flags);
}
- if (port_info->dcbx_status == ICE_DCBX_STATUS_NOT_STARTED) {
- sw_default = 1;
+ if (port_info->dcbx_status == ICE_DCBX_STATUS_NOT_STARTED)
dev_info(&pf->pdev->dev, "DCBX not started\n");
- }
if (sw_default) {
- err = ice_dcb_sw_dflt_cfg(pf);
+ err = ice_dcb_sw_dflt_cfg(pf, locked);
if (err) {
dev_err(&pf->pdev->dev,
"Failed to set local DCB config %d\n", err);
@@ -425,7 +412,7 @@ int ice_init_pf_dcb(struct ice_pf *pf)
set_bit(ICE_FLAG_DCB_CAPABLE, pf->flags);
- err = ice_dcb_init_cfg(pf);
+ err = ice_dcb_init_cfg(pf, locked);
if (err)
goto dcb_init_err;
@@ -515,6 +502,55 @@ ice_tx_prepare_vlan_flags_dcb(struct ice_ring *tx_ring,
}
/**
+ * ice_dcb_need_recfg - Check if DCB needs reconfig
+ * @pf: board private structure
+ * @old_cfg: current DCB config
+ * @new_cfg: new DCB config
+ */
+static bool ice_dcb_need_recfg(struct ice_pf *pf, struct ice_dcbx_cfg *old_cfg,
+ struct ice_dcbx_cfg *new_cfg)
+{
+ bool need_reconfig = false;
+
+ /* Check if ETS configuration has changed */
+ if (memcmp(&new_cfg->etscfg, &old_cfg->etscfg,
+ sizeof(new_cfg->etscfg))) {
+ /* If Priority Table has changed reconfig is needed */
+ if (memcmp(&new_cfg->etscfg.prio_table,
+ &old_cfg->etscfg.prio_table,
+ sizeof(new_cfg->etscfg.prio_table))) {
+ need_reconfig = true;
+ dev_dbg(&pf->pdev->dev, "ETS UP2TC changed.\n");
+ }
+
+ if (memcmp(&new_cfg->etscfg.tcbwtable,
+ &old_cfg->etscfg.tcbwtable,
+ sizeof(new_cfg->etscfg.tcbwtable)))
+ dev_dbg(&pf->pdev->dev, "ETS TC BW Table changed.\n");
+
+ if (memcmp(&new_cfg->etscfg.tsatable,
+ &old_cfg->etscfg.tsatable,
+ sizeof(new_cfg->etscfg.tsatable)))
+ dev_dbg(&pf->pdev->dev, "ETS TSA Table changed.\n");
+ }
+
+ /* Check if PFC configuration has changed */
+ if (memcmp(&new_cfg->pfc, &old_cfg->pfc, sizeof(new_cfg->pfc))) {
+ need_reconfig = true;
+ dev_dbg(&pf->pdev->dev, "PFC config change detected.\n");
+ }
+
+ /* Check if APP Table has changed */
+ if (memcmp(&new_cfg->app, &old_cfg->app, sizeof(new_cfg->app))) {
+ need_reconfig = true;
+ dev_dbg(&pf->pdev->dev, "APP Table change detected.\n");
+ }
+
+ dev_dbg(&pf->pdev->dev, "dcb need_reconfig=%d\n", need_reconfig);
+ return need_reconfig;
+}
+
+/**
* ice_dcb_process_lldp_set_mib_change - Process MIB change
* @pf: ptr to ice_pf
* @event: pointer to the admin queue receive event
@@ -523,29 +559,95 @@ void
ice_dcb_process_lldp_set_mib_change(struct ice_pf *pf,
struct ice_rq_event_info *event)
{
- if (pf->dcbx_cap & DCB_CAP_DCBX_LLD_MANAGED) {
- struct ice_dcbx_cfg *dcbcfg, *prev_cfg;
- int err;
-
- prev_cfg = &pf->hw.port_info->local_dcbx_cfg;
- dcbcfg = devm_kmemdup(&pf->pdev->dev, prev_cfg,
- sizeof(*dcbcfg), GFP_KERNEL);
- if (!dcbcfg)
+ struct ice_aqc_port_ets_elem buf = { 0 };
+ struct ice_aqc_lldp_get_mib *mib;
+ struct ice_dcbx_cfg tmp_dcbx_cfg;
+ bool need_reconfig = false;
+ struct ice_port_info *pi;
+ u8 type;
+ int ret;
+
+ /* Not DCB capable or capability disabled */
+ if (!(test_bit(ICE_FLAG_DCB_CAPABLE, pf->flags)))
+ return;
+
+ if (pf->dcbx_cap & DCB_CAP_DCBX_HOST) {
+ dev_dbg(&pf->pdev->dev,
+ "MIB Change Event in HOST mode\n");
+ return;
+ }
+
+ pi = pf->hw.port_info;
+ mib = (struct ice_aqc_lldp_get_mib *)&event->desc.params.raw;
+ /* Ignore if event is not for Nearest Bridge */
+ type = ((mib->type >> ICE_AQ_LLDP_BRID_TYPE_S) &
+ ICE_AQ_LLDP_BRID_TYPE_M);
+ dev_dbg(&pf->pdev->dev, "LLDP event MIB bridge type 0x%x\n", type);
+ if (type != ICE_AQ_LLDP_BRID_TYPE_NEAREST_BRID)
+ return;
+
+ /* Check MIB Type and return if event for Remote MIB update */
+ type = mib->type & ICE_AQ_LLDP_MIB_TYPE_M;
+ dev_dbg(&pf->pdev->dev,
+ "LLDP event mib type %s\n", type ? "remote" : "local");
+ if (type == ICE_AQ_LLDP_MIB_REMOTE) {
+ /* Update the remote cached instance and return */
+ ret = ice_aq_get_dcb_cfg(pi->hw, ICE_AQ_LLDP_MIB_REMOTE,
+ ICE_AQ_LLDP_BRID_TYPE_NEAREST_BRID,
+ &pi->remote_dcbx_cfg);
+ if (ret) {
+ dev_err(&pf->pdev->dev, "Failed to get remote DCB config\n");
return;
+ }
+ }
- err = ice_lldp_to_dcb_cfg(event->msg_buf, dcbcfg);
- if (!err)
- ice_pf_dcb_cfg(pf, dcbcfg);
+ /* store the old configuration */
+ tmp_dcbx_cfg = pf->hw.port_info->local_dcbx_cfg;
- devm_kfree(&pf->pdev->dev, dcbcfg);
+ /* Reset the old DCBX configuration data */
+ memset(&pi->local_dcbx_cfg, 0, sizeof(pi->local_dcbx_cfg));
- /* Get updated DCBx data from firmware */
- err = ice_get_dcb_cfg(pf->hw.port_info);
- if (err)
- dev_err(&pf->pdev->dev,
- "Failed to get DCB config\n");
- } else {
+ /* Get updated DCBX data from firmware */
+ ret = ice_get_dcb_cfg(pf->hw.port_info);
+ if (ret) {
+ dev_err(&pf->pdev->dev, "Failed to get DCB config\n");
+ return;
+ }
+
+ /* No change detected in DCBX configs */
+ if (!memcmp(&tmp_dcbx_cfg, &pi->local_dcbx_cfg, sizeof(tmp_dcbx_cfg))) {
dev_dbg(&pf->pdev->dev,
- "MIB Change Event in HOST mode\n");
+ "No change detected in DCBX configuration.\n");
+ return;
+ }
+
+ need_reconfig = ice_dcb_need_recfg(pf, &tmp_dcbx_cfg,
+ &pi->local_dcbx_cfg);
+ if (!need_reconfig)
+ return;
+
+ /* Enable DCB tagging only when more than one TC */
+ if (ice_dcb_get_num_tc(&pi->local_dcbx_cfg) > 1) {
+ dev_dbg(&pf->pdev->dev, "DCB tagging enabled (num TC > 1)\n");
+ set_bit(ICE_FLAG_DCB_ENA, pf->flags);
+ } else {
+ dev_dbg(&pf->pdev->dev, "DCB tagging disabled (num TC = 1)\n");
+ clear_bit(ICE_FLAG_DCB_ENA, pf->flags);
}
+
+ rtnl_lock();
+ ice_pf_dis_all_vsi(pf, true);
+
+ ret = ice_query_port_ets(pf->hw.port_info, &buf, sizeof(buf), NULL);
+ if (ret) {
+ dev_err(&pf->pdev->dev, "Query Port ETS failed\n");
+ rtnl_unlock();
+ return;
+ }
+
+ /* changes in configuration update VSI */
+ ice_pf_dcb_recfg(pf);
+
+ ice_pf_ena_all_vsi(pf, true);
+ rtnl_unlock();
}
diff --git a/drivers/net/ethernet/intel/ice/ice_dcb_lib.h b/drivers/net/ethernet/intel/ice/ice_dcb_lib.h
index ca7b76faa03c..819081053ff5 100644
--- a/drivers/net/ethernet/intel/ice/ice_dcb_lib.h
+++ b/drivers/net/ethernet/intel/ice/ice_dcb_lib.h
@@ -14,7 +14,7 @@ void ice_dcb_rebuild(struct ice_pf *pf);
u8 ice_dcb_get_ena_tc(struct ice_dcbx_cfg *dcbcfg);
u8 ice_dcb_get_num_tc(struct ice_dcbx_cfg *dcbcfg);
void ice_vsi_cfg_dcb_rings(struct ice_vsi *vsi);
-int ice_init_pf_dcb(struct ice_pf *pf);
+int ice_init_pf_dcb(struct ice_pf *pf, bool locked);
void ice_update_dcb_stats(struct ice_pf *pf);
int
ice_tx_prepare_vlan_flags_dcb(struct ice_ring *tx_ring,
@@ -40,7 +40,8 @@ static inline u8 ice_dcb_get_num_tc(struct ice_dcbx_cfg __always_unused *dcbcfg)
return 1;
}
-static inline int ice_init_pf_dcb(struct ice_pf *pf)
+static inline int
+ice_init_pf_dcb(struct ice_pf *pf, bool __always_unused locked)
{
dev_dbg(&pf->pdev->dev, "DCB not supported\n");
return -EOPNOTSUPP;
diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
index 1341fde8d53f..52083a63dee6 100644
--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
+++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
@@ -45,22 +45,40 @@ static int ice_q_stats_len(struct net_device *netdev)
ICE_VSI_STATS_LEN + ice_q_stats_len(n))
static const struct ice_stats ice_gstrings_vsi_stats[] = {
- ICE_VSI_STAT("tx_unicast", eth_stats.tx_unicast),
ICE_VSI_STAT("rx_unicast", eth_stats.rx_unicast),
- ICE_VSI_STAT("tx_multicast", eth_stats.tx_multicast),
+ ICE_VSI_STAT("tx_unicast", eth_stats.tx_unicast),
ICE_VSI_STAT("rx_multicast", eth_stats.rx_multicast),
- ICE_VSI_STAT("tx_broadcast", eth_stats.tx_broadcast),
+ ICE_VSI_STAT("tx_multicast", eth_stats.tx_multicast),
ICE_VSI_STAT("rx_broadcast", eth_stats.rx_broadcast),
- ICE_VSI_STAT("tx_bytes", eth_stats.tx_bytes),
+ ICE_VSI_STAT("tx_broadcast", eth_stats.tx_broadcast),
ICE_VSI_STAT("rx_bytes", eth_stats.rx_bytes),
- ICE_VSI_STAT("rx_discards", eth_stats.rx_discards),
- ICE_VSI_STAT("tx_errors", eth_stats.tx_errors),
- ICE_VSI_STAT("tx_linearize", tx_linearize),
+ ICE_VSI_STAT("tx_bytes", eth_stats.tx_bytes),
+ ICE_VSI_STAT("rx_dropped", eth_stats.rx_discards),
ICE_VSI_STAT("rx_unknown_protocol", eth_stats.rx_unknown_protocol),
ICE_VSI_STAT("rx_alloc_fail", rx_buf_failed),
ICE_VSI_STAT("rx_pg_alloc_fail", rx_page_failed),
+ ICE_VSI_STAT("tx_errors", eth_stats.tx_errors),
+ ICE_VSI_STAT("tx_linearize", tx_linearize),
+};
+
+enum ice_ethtool_test_id {
+ ICE_ETH_TEST_REG = 0,
+ ICE_ETH_TEST_EEPROM,
+ ICE_ETH_TEST_INTR,
+ ICE_ETH_TEST_LOOP,
+ ICE_ETH_TEST_LINK,
};
+static const char ice_gstrings_test[][ETH_GSTRING_LEN] = {
+ "Register test (offline)",
+ "EEPROM test (offline)",
+ "Interrupt test (offline)",
+ "Loopback test (offline)",
+ "Link test (on/offline)",
+};
+
+#define ICE_TEST_LEN (sizeof(ice_gstrings_test) / ETH_GSTRING_LEN)
+
/* These PF_STATs might look like duplicates of some NETDEV_STATs,
* but they aren't. This device is capable of supporting multiple
* VSIs/netdevs on a single PF. The NETDEV_STATs are for individual
@@ -71,45 +89,45 @@ static const struct ice_stats ice_gstrings_vsi_stats[] = {
* is queried on the base PF netdev.
*/
static const struct ice_stats ice_gstrings_pf_stats[] = {
- ICE_PF_STAT("port.tx_bytes", stats.eth.tx_bytes),
- ICE_PF_STAT("port.rx_bytes", stats.eth.rx_bytes),
- ICE_PF_STAT("port.tx_unicast", stats.eth.tx_unicast),
- ICE_PF_STAT("port.rx_unicast", stats.eth.rx_unicast),
- ICE_PF_STAT("port.tx_multicast", stats.eth.tx_multicast),
- ICE_PF_STAT("port.rx_multicast", stats.eth.rx_multicast),
- ICE_PF_STAT("port.tx_broadcast", stats.eth.tx_broadcast),
- ICE_PF_STAT("port.rx_broadcast", stats.eth.rx_broadcast),
- ICE_PF_STAT("port.tx_errors", stats.eth.tx_errors),
- ICE_PF_STAT("port.tx_size_64", stats.tx_size_64),
- ICE_PF_STAT("port.rx_size_64", stats.rx_size_64),
- ICE_PF_STAT("port.tx_size_127", stats.tx_size_127),
- ICE_PF_STAT("port.rx_size_127", stats.rx_size_127),
- ICE_PF_STAT("port.tx_size_255", stats.tx_size_255),
- ICE_PF_STAT("port.rx_size_255", stats.rx_size_255),
- ICE_PF_STAT("port.tx_size_511", stats.tx_size_511),
- ICE_PF_STAT("port.rx_size_511", stats.rx_size_511),
- ICE_PF_STAT("port.tx_size_1023", stats.tx_size_1023),
- ICE_PF_STAT("port.rx_size_1023", stats.rx_size_1023),
- ICE_PF_STAT("port.tx_size_1522", stats.tx_size_1522),
- ICE_PF_STAT("port.rx_size_1522", stats.rx_size_1522),
- ICE_PF_STAT("port.tx_size_big", stats.tx_size_big),
- ICE_PF_STAT("port.rx_size_big", stats.rx_size_big),
- ICE_PF_STAT("port.link_xon_tx", stats.link_xon_tx),
- ICE_PF_STAT("port.link_xon_rx", stats.link_xon_rx),
- ICE_PF_STAT("port.link_xoff_tx", stats.link_xoff_tx),
- ICE_PF_STAT("port.link_xoff_rx", stats.link_xoff_rx),
- ICE_PF_STAT("port.tx_dropped_link_down", stats.tx_dropped_link_down),
- ICE_PF_STAT("port.rx_undersize", stats.rx_undersize),
- ICE_PF_STAT("port.rx_fragments", stats.rx_fragments),
- ICE_PF_STAT("port.rx_oversize", stats.rx_oversize),
- ICE_PF_STAT("port.rx_jabber", stats.rx_jabber),
- ICE_PF_STAT("port.rx_csum_bad", hw_csum_rx_error),
- ICE_PF_STAT("port.rx_length_errors", stats.rx_len_errors),
- ICE_PF_STAT("port.rx_dropped", stats.eth.rx_discards),
- ICE_PF_STAT("port.rx_crc_errors", stats.crc_errors),
- ICE_PF_STAT("port.illegal_bytes", stats.illegal_bytes),
- ICE_PF_STAT("port.mac_local_faults", stats.mac_local_faults),
- ICE_PF_STAT("port.mac_remote_faults", stats.mac_remote_faults),
+ ICE_PF_STAT("rx_bytes.nic", stats.eth.rx_bytes),
+ ICE_PF_STAT("tx_bytes.nic", stats.eth.tx_bytes),
+ ICE_PF_STAT("rx_unicast.nic", stats.eth.rx_unicast),
+ ICE_PF_STAT("tx_unicast.nic", stats.eth.tx_unicast),
+ ICE_PF_STAT("rx_multicast.nic", stats.eth.rx_multicast),
+ ICE_PF_STAT("tx_multicast.nic", stats.eth.tx_multicast),
+ ICE_PF_STAT("rx_broadcast.nic", stats.eth.rx_broadcast),
+ ICE_PF_STAT("tx_broadcast.nic", stats.eth.tx_broadcast),
+ ICE_PF_STAT("tx_errors.nic", stats.eth.tx_errors),
+ ICE_PF_STAT("rx_size_64.nic", stats.rx_size_64),
+ ICE_PF_STAT("tx_size_64.nic", stats.tx_size_64),
+ ICE_PF_STAT("rx_size_127.nic", stats.rx_size_127),
+ ICE_PF_STAT("tx_size_127.nic", stats.tx_size_127),
+ ICE_PF_STAT("rx_size_255.nic", stats.rx_size_255),
+ ICE_PF_STAT("tx_size_255.nic", stats.tx_size_255),
+ ICE_PF_STAT("rx_size_511.nic", stats.rx_size_511),
+ ICE_PF_STAT("tx_size_511.nic", stats.tx_size_511),
+ ICE_PF_STAT("rx_size_1023.nic", stats.rx_size_1023),
+ ICE_PF_STAT("tx_size_1023.nic", stats.tx_size_1023),
+ ICE_PF_STAT("rx_size_1522.nic", stats.rx_size_1522),
+ ICE_PF_STAT("tx_size_1522.nic", stats.tx_size_1522),
+ ICE_PF_STAT("rx_size_big.nic", stats.rx_size_big),
+ ICE_PF_STAT("tx_size_big.nic", stats.tx_size_big),
+ ICE_PF_STAT("link_xon_rx.nic", stats.link_xon_rx),
+ ICE_PF_STAT("link_xon_tx.nic", stats.link_xon_tx),
+ ICE_PF_STAT("link_xoff_rx.nic", stats.link_xoff_rx),
+ ICE_PF_STAT("link_xoff_tx.nic", stats.link_xoff_tx),
+ ICE_PF_STAT("tx_dropped_link_down.nic", stats.tx_dropped_link_down),
+ ICE_PF_STAT("rx_undersize.nic", stats.rx_undersize),
+ ICE_PF_STAT("rx_fragments.nic", stats.rx_fragments),
+ ICE_PF_STAT("rx_oversize.nic", stats.rx_oversize),
+ ICE_PF_STAT("rx_jabber.nic", stats.rx_jabber),
+ ICE_PF_STAT("rx_csum_bad.nic", hw_csum_rx_error),
+ ICE_PF_STAT("rx_length_errors.nic", stats.rx_len_errors),
+ ICE_PF_STAT("rx_dropped.nic", stats.eth.rx_discards),
+ ICE_PF_STAT("rx_crc_errors.nic", stats.crc_errors),
+ ICE_PF_STAT("illegal_bytes.nic", stats.illegal_bytes),
+ ICE_PF_STAT("mac_local_faults.nic", stats.mac_local_faults),
+ ICE_PF_STAT("mac_remote_faults.nic", stats.mac_remote_faults),
};
static const u32 ice_regs_dump_list[] = {
@@ -120,6 +138,9 @@ static const u32 ice_regs_dump_list[] = {
QINT_RQCTL(0),
PFINT_OICR_ENA,
QRX_ITR(0),
+ PF0INT_ITR_0(0),
+ PF0INT_ITR_1(0),
+ PF0INT_ITR_2(0),
};
struct ice_priv_flag {
@@ -134,7 +155,7 @@ struct ice_priv_flag {
static const struct ice_priv_flag ice_gstrings_priv_flags[] = {
ICE_PRIV_FLAG("link-down-on-close", ICE_FLAG_LINK_DOWN_ON_CLOSE_ENA),
- ICE_PRIV_FLAG("disable-fw-lldp", ICE_FLAG_DISABLE_FW_LLDP),
+ ICE_PRIV_FLAG("enable-fw-lldp", ICE_FLAG_ENABLE_FW_LLDP),
};
#define ICE_PRIV_FLAG_ARRAY_SIZE ARRAY_SIZE(ice_gstrings_priv_flags)
@@ -278,6 +299,571 @@ out:
return ret;
}
+/**
+ * ice_active_vfs - check if there are any active VFs
+ * @pf: board private structure
+ *
+ * Returns true if an active VF is found, otherwise returns false
+ */
+static bool ice_active_vfs(struct ice_pf *pf)
+{
+ struct ice_vf *vf = pf->vf;
+ int i;
+
+ for (i = 0; i < pf->num_alloc_vfs; i++, vf++)
+ if (test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states))
+ return true;
+ return false;
+}
+
+/**
+ * ice_link_test - perform a link test on a given net_device
+ * @netdev: network interface device structure
+ *
+ * This function performs one of the self-tests required by ethtool.
+ * Returns 0 on success, non-zero on failure.
+ */
+static u64 ice_link_test(struct net_device *netdev)
+{
+ struct ice_netdev_priv *np = netdev_priv(netdev);
+ enum ice_status status;
+ bool link_up = false;
+
+ netdev_info(netdev, "link test\n");
+ status = ice_get_link_status(np->vsi->port_info, &link_up);
+ if (status) {
+ netdev_err(netdev, "link query error, status = %d\n", status);
+ return 1;
+ }
+
+ if (!link_up)
+ return 2;
+
+ return 0;
+}
+
+/**
+ * ice_eeprom_test - perform an EEPROM test on a given net_device
+ * @netdev: network interface device structure
+ *
+ * This function performs one of the self-tests required by ethtool.
+ * Returns 0 on success, non-zero on failure.
+ */
+static u64 ice_eeprom_test(struct net_device *netdev)
+{
+ struct ice_netdev_priv *np = netdev_priv(netdev);
+ struct ice_pf *pf = np->vsi->back;
+
+ netdev_info(netdev, "EEPROM test\n");
+ return !!(ice_nvm_validate_checksum(&pf->hw));
+}
+
+/**
+ * ice_reg_pattern_test
+ * @hw: pointer to the HW struct
+ * @reg: reg to be tested
+ * @mask: bits to be touched
+ */
+static int ice_reg_pattern_test(struct ice_hw *hw, u32 reg, u32 mask)
+{
+ struct ice_pf *pf = (struct ice_pf *)hw->back;
+ static const u32 patterns[] = {
+ 0x5A5A5A5A, 0xA5A5A5A5,
+ 0x00000000, 0xFFFFFFFF
+ };
+ u32 val, orig_val;
+ int i;
+
+ orig_val = rd32(hw, reg);
+ for (i = 0; i < ARRAY_SIZE(patterns); ++i) {
+ u32 pattern = patterns[i] & mask;
+
+ wr32(hw, reg, pattern);
+ val = rd32(hw, reg);
+ if (val == pattern)
+ continue;
+ dev_err(&pf->pdev->dev,
+ "%s: reg pattern test failed - reg 0x%08x pat 0x%08x val 0x%08x\n"
+ , __func__, reg, pattern, val);
+ return 1;
+ }
+
+ wr32(hw, reg, orig_val);
+ val = rd32(hw, reg);
+ if (val != orig_val) {
+ dev_err(&pf->pdev->dev,
+ "%s: reg restore test failed - reg 0x%08x orig 0x%08x val 0x%08x\n"
+ , __func__, reg, orig_val, val);
+ return 1;
+ }
+
+ return 0;
+}
+
+/**
+ * ice_reg_test - perform a register test on a given net_device
+ * @netdev: network interface device structure
+ *
+ * This function performs one of the self-tests required by ethtool.
+ * Returns 0 on success, non-zero on failure.
+ */
+static u64 ice_reg_test(struct net_device *netdev)
+{
+ struct ice_netdev_priv *np = netdev_priv(netdev);
+ struct ice_hw *hw = np->vsi->port_info->hw;
+ u32 int_elements = hw->func_caps.common_cap.num_msix_vectors ?
+ hw->func_caps.common_cap.num_msix_vectors - 1 : 1;
+ struct ice_diag_reg_test_info {
+ u32 address;
+ u32 mask;
+ u32 elem_num;
+ u32 elem_size;
+ } ice_reg_list[] = {
+ {GLINT_ITR(0, 0), 0x00000fff, int_elements,
+ GLINT_ITR(0, 1) - GLINT_ITR(0, 0)},
+ {GLINT_ITR(1, 0), 0x00000fff, int_elements,
+ GLINT_ITR(1, 1) - GLINT_ITR(1, 0)},
+ {GLINT_ITR(2, 0), 0x00000fff, int_elements,
+ GLINT_ITR(2, 1) - GLINT_ITR(2, 0)},
+ {GLINT_CTL, 0xffff0001, 1, 0}
+ };
+ int i;
+
+ netdev_dbg(netdev, "Register test\n");
+ for (i = 0; i < ARRAY_SIZE(ice_reg_list); ++i) {
+ u32 j;
+
+ for (j = 0; j < ice_reg_list[i].elem_num; ++j) {
+ u32 mask = ice_reg_list[i].mask;
+ u32 reg = ice_reg_list[i].address +
+ (j * ice_reg_list[i].elem_size);
+
+ /* bail on failure (non-zero return) */
+ if (ice_reg_pattern_test(hw, reg, mask))
+ return 1;
+ }
+ }
+
+ return 0;
+}
+
+/**
+ * ice_lbtest_prepare_rings - configure Tx/Rx test rings
+ * @vsi: pointer to the VSI structure
+ *
+ * Function configures rings of a VSI for loopback test without
+ * enabling interrupts or informing the kernel about new queues.
+ *
+ * Returns 0 on success, negative on failure.
+ */
+static int ice_lbtest_prepare_rings(struct ice_vsi *vsi)
+{
+ int status;
+
+ status = ice_vsi_setup_tx_rings(vsi);
+ if (status)
+ goto err_setup_tx_ring;
+
+ status = ice_vsi_setup_rx_rings(vsi);
+ if (status)
+ goto err_setup_rx_ring;
+
+ status = ice_vsi_cfg(vsi);
+ if (status)
+ goto err_setup_rx_ring;
+
+ status = ice_vsi_start_rx_rings(vsi);
+ if (status)
+ goto err_start_rx_ring;
+
+ return status;
+
+err_start_rx_ring:
+ ice_vsi_free_rx_rings(vsi);
+err_setup_rx_ring:
+ ice_vsi_stop_lan_tx_rings(vsi, ICE_NO_RESET, 0);
+err_setup_tx_ring:
+ ice_vsi_free_tx_rings(vsi);
+
+ return status;
+}
+
+/**
+ * ice_lbtest_disable_rings - disable Tx/Rx test rings after loopback test
+ * @vsi: pointer to the VSI structure
+ *
+ * Function stops and frees VSI rings after a loopback test.
+ * Returns 0 on success, negative on failure.
+ */
+static int ice_lbtest_disable_rings(struct ice_vsi *vsi)
+{
+ int status;
+
+ status = ice_vsi_stop_lan_tx_rings(vsi, ICE_NO_RESET, 0);
+ if (status)
+ netdev_err(vsi->netdev, "Failed to stop Tx rings, VSI %d error %d\n",
+ vsi->vsi_num, status);
+
+ status = ice_vsi_stop_rx_rings(vsi);
+ if (status)
+ netdev_err(vsi->netdev, "Failed to stop Rx rings, VSI %d error %d\n",
+ vsi->vsi_num, status);
+
+ ice_vsi_free_tx_rings(vsi);
+ ice_vsi_free_rx_rings(vsi);
+
+ return status;
+}
+
+/**
+ * ice_lbtest_create_frame - create test packet
+ * @pf: pointer to the PF structure
+ * @ret_data: allocated frame buffer
+ * @size: size of the packet data
+ *
+ * Function allocates a frame with a test pattern on specific offsets.
+ * Returns 0 on success, non-zero on failure.
+ */
+static int ice_lbtest_create_frame(struct ice_pf *pf, u8 **ret_data, u16 size)
+{
+ u8 *data;
+
+ if (!pf)
+ return -EINVAL;
+
+ data = devm_kzalloc(&pf->pdev->dev, size, GFP_KERNEL);
+ if (!data)
+ return -ENOMEM;
+
+ /* Since the ethernet test frame should always be at least
+ * 64 bytes long, fill some octets in the payload with test data.
+ */
+ memset(data, 0xFF, size);
+ data[32] = 0xDE;
+ data[42] = 0xAD;
+ data[44] = 0xBE;
+ data[46] = 0xEF;
+
+ *ret_data = data;
+
+ return 0;
+}
+
+/**
+ * ice_lbtest_check_frame - verify received loopback frame
+ * @frame: pointer to the raw packet data
+ *
+ * Function verifies received test frame with a pattern.
+ * Returns true if frame matches the pattern, false otherwise.
+ */
+static bool ice_lbtest_check_frame(u8 *frame)
+{
+ /* Validate bytes of a frame under offsets chosen earlier */
+ if (frame[32] == 0xDE &&
+ frame[42] == 0xAD &&
+ frame[44] == 0xBE &&
+ frame[46] == 0xEF &&
+ frame[48] == 0xFF)
+ return true;
+
+ return false;
+}
+
+/**
+ * ice_diag_send - send test frames to the test ring
+ * @tx_ring: pointer to the transmit ring
+ * @data: pointer to the raw packet data
+ * @size: size of the packet to send
+ *
+ * Function sends loopback packets on a test Tx ring.
+ */
+static int ice_diag_send(struct ice_ring *tx_ring, u8 *data, u16 size)
+{
+ struct ice_tx_desc *tx_desc;
+ struct ice_tx_buf *tx_buf;
+ dma_addr_t dma;
+ u64 td_cmd;
+
+ tx_desc = ICE_TX_DESC(tx_ring, tx_ring->next_to_use);
+ tx_buf = &tx_ring->tx_buf[tx_ring->next_to_use];
+
+ dma = dma_map_single(tx_ring->dev, data, size, DMA_TO_DEVICE);
+ if (dma_mapping_error(tx_ring->dev, dma))
+ return -EINVAL;
+
+ tx_desc->buf_addr = cpu_to_le64(dma);
+
+ /* These flags are required for a descriptor to be pushed out */
+ td_cmd = (u64)(ICE_TX_DESC_CMD_EOP | ICE_TX_DESC_CMD_RS);
+ tx_desc->cmd_type_offset_bsz =
+ cpu_to_le64(ICE_TX_DESC_DTYPE_DATA |
+ (td_cmd << ICE_TXD_QW1_CMD_S) |
+ ((u64)0 << ICE_TXD_QW1_OFFSET_S) |
+ ((u64)size << ICE_TXD_QW1_TX_BUF_SZ_S) |
+ ((u64)0 << ICE_TXD_QW1_L2TAG1_S));
+
+ tx_buf->next_to_watch = tx_desc;
+
+ /* Force memory write to complete before letting h/w know
+ * there are new descriptors to fetch.
+ */
+ wmb();
+
+ tx_ring->next_to_use++;
+ if (tx_ring->next_to_use >= tx_ring->count)
+ tx_ring->next_to_use = 0;
+
+ writel_relaxed(tx_ring->next_to_use, tx_ring->tail);
+
+ /* Wait until the packets get transmitted to the receive queue. */
+ usleep_range(1000, 2000);
+ dma_unmap_single(tx_ring->dev, dma, size, DMA_TO_DEVICE);
+
+ return 0;
+}
+
+#define ICE_LB_FRAME_SIZE 64
+/**
+ * ice_lbtest_receive_frames - receive and verify test frames
+ * @rx_ring: pointer to the receive ring
+ *
+ * Function receives loopback packets and verifies their correctness.
+ * Returns number of received valid frames.
+ */
+static int ice_lbtest_receive_frames(struct ice_ring *rx_ring)
+{
+ struct ice_rx_buf *rx_buf;
+ int valid_frames, i;
+ u8 *received_buf;
+
+ valid_frames = 0;
+
+ for (i = 0; i < rx_ring->count; i++) {
+ union ice_32b_rx_flex_desc *rx_desc;
+
+ rx_desc = ICE_RX_DESC(rx_ring, i);
+
+ if (!(rx_desc->wb.status_error0 &
+ cpu_to_le16(ICE_TX_DESC_CMD_EOP | ICE_TX_DESC_CMD_RS)))
+ continue;
+
+ rx_buf = &rx_ring->rx_buf[i];
+ received_buf = page_address(rx_buf->page);
+
+ if (ice_lbtest_check_frame(received_buf))
+ valid_frames++;
+ }
+
+ return valid_frames;
+}
+
+/**
+ * ice_loopback_test - perform a loopback test on a given net_device
+ * @netdev: network interface device structure
+ *
+ * This function performs one of the self-tests required by ethtool.
+ * Returns 0 on success, non-zero on failure.
+ */
+static u64 ice_loopback_test(struct net_device *netdev)
+{
+ struct ice_netdev_priv *np = netdev_priv(netdev);
+ struct ice_vsi *orig_vsi = np->vsi, *test_vsi;
+ struct ice_pf *pf = orig_vsi->back;
+ struct ice_ring *tx_ring, *rx_ring;
+ u8 broadcast[ETH_ALEN], ret = 0;
+ int num_frames, valid_frames;
+ LIST_HEAD(tmp_list);
+ u8 *tx_frame;
+ int i;
+
+ netdev_info(netdev, "loopback test\n");
+
+ test_vsi = ice_lb_vsi_setup(pf, pf->hw.port_info);
+ if (!test_vsi) {
+ netdev_err(netdev, "Failed to create a VSI for the loopback test");
+ return 1;
+ }
+
+ test_vsi->netdev = netdev;
+ tx_ring = test_vsi->tx_rings[0];
+ rx_ring = test_vsi->rx_rings[0];
+
+ if (ice_lbtest_prepare_rings(test_vsi)) {
+ ret = 2;
+ goto lbtest_vsi_close;
+ }
+
+ if (ice_alloc_rx_bufs(rx_ring, rx_ring->count)) {
+ ret = 3;
+ goto lbtest_rings_dis;
+ }
+
+ /* Enable MAC loopback in firmware */
+ if (ice_aq_set_mac_loopback(&pf->hw, true, NULL)) {
+ ret = 4;
+ goto lbtest_mac_dis;
+ }
+
+ /* Test VSI needs to receive broadcast packets */
+ eth_broadcast_addr(broadcast);
+ if (ice_add_mac_to_list(test_vsi, &tmp_list, broadcast)) {
+ ret = 5;
+ goto lbtest_mac_dis;
+ }
+
+ if (ice_add_mac(&pf->hw, &tmp_list)) {
+ ret = 6;
+ goto free_mac_list;
+ }
+
+ if (ice_lbtest_create_frame(pf, &tx_frame, ICE_LB_FRAME_SIZE)) {
+ ret = 7;
+ goto remove_mac_filters;
+ }
+
+ num_frames = min_t(int, tx_ring->count, 32);
+ for (i = 0; i < num_frames; i++) {
+ if (ice_diag_send(tx_ring, tx_frame, ICE_LB_FRAME_SIZE)) {
+ ret = 8;
+ goto lbtest_free_frame;
+ }
+ }
+
+ valid_frames = ice_lbtest_receive_frames(rx_ring);
+ if (!valid_frames)
+ ret = 9;
+ else if (valid_frames != num_frames)
+ ret = 10;
+
+lbtest_free_frame:
+ devm_kfree(&pf->pdev->dev, tx_frame);
+remove_mac_filters:
+ if (ice_remove_mac(&pf->hw, &tmp_list))
+ netdev_err(netdev, "Could not remove MAC filter for the test VSI");
+free_mac_list:
+ ice_free_fltr_list(&pf->pdev->dev, &tmp_list);
+lbtest_mac_dis:
+ /* Disable MAC loopback after the test is completed. */
+ if (ice_aq_set_mac_loopback(&pf->hw, false, NULL))
+ netdev_err(netdev, "Could not disable MAC loopback\n");
+lbtest_rings_dis:
+ if (ice_lbtest_disable_rings(test_vsi))
+ netdev_err(netdev, "Could not disable test rings\n");
+lbtest_vsi_close:
+ test_vsi->netdev = NULL;
+ if (ice_vsi_release(test_vsi))
+ netdev_err(netdev, "Failed to remove the test VSI");
+
+ return ret;
+}
+
+/**
+ * ice_intr_test - perform an interrupt test on a given net_device
+ * @netdev: network interface device structure
+ *
+ * This function performs one of the self-tests required by ethtool.
+ * Returns 0 on success, non-zero on failure.
+ */
+static u64 ice_intr_test(struct net_device *netdev)
+{
+ struct ice_netdev_priv *np = netdev_priv(netdev);
+ struct ice_pf *pf = np->vsi->back;
+ u16 swic_old = pf->sw_int_count;
+
+ netdev_info(netdev, "interrupt test\n");
+
+ wr32(&pf->hw, GLINT_DYN_CTL(pf->oicr_idx),
+ GLINT_DYN_CTL_SW_ITR_INDX_M |
+ GLINT_DYN_CTL_INTENA_MSK_M |
+ GLINT_DYN_CTL_SWINT_TRIG_M);
+
+ usleep_range(1000, 2000);
+ return (swic_old == pf->sw_int_count);
+}
+
+/**
+ * ice_self_test - handler function for performing a self-test by ethtool
+ * @netdev: network interface device structure
+ * @eth_test: ethtool_test structure
+ * @data: required by ethtool.self_test
+ *
+ * This function is called after invoking 'ethtool -t devname' command where
+ * devname is the name of the network device on which ethtool should operate.
+ * It performs a set of self-tests to check if a device works properly.
+ */
+static void
+ice_self_test(struct net_device *netdev, struct ethtool_test *eth_test,
+ u64 *data)
+{
+ struct ice_netdev_priv *np = netdev_priv(netdev);
+ bool if_running = netif_running(netdev);
+ struct ice_pf *pf = np->vsi->back;
+
+ if (eth_test->flags == ETH_TEST_FL_OFFLINE) {
+ netdev_info(netdev, "offline testing starting\n");
+
+ set_bit(__ICE_TESTING, pf->state);
+
+ if (ice_active_vfs(pf)) {
+ dev_warn(&pf->pdev->dev,
+ "Please take active VFs and Netqueues offline and restart the adapter before running NIC diagnostics\n");
+ data[ICE_ETH_TEST_REG] = 1;
+ data[ICE_ETH_TEST_EEPROM] = 1;
+ data[ICE_ETH_TEST_INTR] = 1;
+ data[ICE_ETH_TEST_LOOP] = 1;
+ data[ICE_ETH_TEST_LINK] = 1;
+ eth_test->flags |= ETH_TEST_FL_FAILED;
+ clear_bit(__ICE_TESTING, pf->state);
+ goto skip_ol_tests;
+ }
+ /* If the device is online then take it offline */
+ if (if_running)
+ /* indicate we're in test mode */
+ ice_stop(netdev);
+
+ data[ICE_ETH_TEST_LINK] = ice_link_test(netdev);
+ data[ICE_ETH_TEST_EEPROM] = ice_eeprom_test(netdev);
+ data[ICE_ETH_TEST_INTR] = ice_intr_test(netdev);
+ data[ICE_ETH_TEST_LOOP] = ice_loopback_test(netdev);
+ data[ICE_ETH_TEST_REG] = ice_reg_test(netdev);
+
+ if (data[ICE_ETH_TEST_LINK] ||
+ data[ICE_ETH_TEST_EEPROM] ||
+ data[ICE_ETH_TEST_LOOP] ||
+ data[ICE_ETH_TEST_INTR] ||
+ data[ICE_ETH_TEST_REG])
+ eth_test->flags |= ETH_TEST_FL_FAILED;
+
+ clear_bit(__ICE_TESTING, pf->state);
+
+ if (if_running) {
+ int status = ice_open(netdev);
+
+ if (status) {
+ dev_err(&pf->pdev->dev,
+ "Could not open device %s, err %d",
+ pf->int_name, status);
+ }
+ }
+ } else {
+ /* Online tests */
+ netdev_info(netdev, "online testing starting\n");
+
+ data[ICE_ETH_TEST_LINK] = ice_link_test(netdev);
+ if (data[ICE_ETH_TEST_LINK])
+ eth_test->flags |= ETH_TEST_FL_FAILED;
+
+ /* Offline-only tests are not run in online mode; pass them by default */
+ data[ICE_ETH_TEST_REG] = 0;
+ data[ICE_ETH_TEST_EEPROM] = 0;
+ data[ICE_ETH_TEST_INTR] = 0;
+ data[ICE_ETH_TEST_LOOP] = 0;
+ }
+
+skip_ol_tests:
+ netdev_info(netdev, "testing finished\n");
+}
+
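The self-test handler above is presumably wired into the driver's struct ethtool_ops, whose definition is elsewhere in ice_ethtool.c and not part of this hunk. A sketch of the relevant members only, assuming the surrounding ops table (named ice_ethtool_ops in the sketch) also reports ICE_TEST_LEN for ETH_SS_TEST from its get_sset_count callback:

	/* Sketch only: the real ice_ethtool_ops definition is not shown here */
	static const struct ethtool_ops ice_ethtool_ops_sketch = {
		.get_strings	= ice_get_strings,	/* provides ETH_SS_TEST names */
		.self_test	= ice_self_test,	/* runs the tests added above */
	};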
static void ice_get_strings(struct net_device *netdev, u32 stringset, u8 *data)
{
struct ice_netdev_priv *np = netdev_priv(netdev);
@@ -295,17 +881,17 @@ static void ice_get_strings(struct net_device *netdev, u32 stringset, u8 *data)
ice_for_each_alloc_txq(vsi, i) {
snprintf(p, ETH_GSTRING_LEN,
- "tx-queue-%u.tx_packets", i);
+ "tx_queue_%u_packets", i);
p += ETH_GSTRING_LEN;
- snprintf(p, ETH_GSTRING_LEN, "tx-queue-%u.tx_bytes", i);
+ snprintf(p, ETH_GSTRING_LEN, "tx_queue_%u_bytes", i);
p += ETH_GSTRING_LEN;
}
ice_for_each_alloc_rxq(vsi, i) {
snprintf(p, ETH_GSTRING_LEN,
- "rx-queue-%u.rx_packets", i);
+ "rx_queue_%u_packets", i);
p += ETH_GSTRING_LEN;
- snprintf(p, ETH_GSTRING_LEN, "rx-queue-%u.rx_bytes", i);
+ snprintf(p, ETH_GSTRING_LEN, "rx_queue_%u_bytes", i);
p += ETH_GSTRING_LEN;
}
@@ -320,21 +906,24 @@ static void ice_get_strings(struct net_device *netdev, u32 stringset, u8 *data)
for (i = 0; i < ICE_MAX_USER_PRIORITY; i++) {
snprintf(p, ETH_GSTRING_LEN,
- "port.tx-priority-%u-xon", i);
+ "tx_priority_%u_xon.nic", i);
p += ETH_GSTRING_LEN;
snprintf(p, ETH_GSTRING_LEN,
- "port.tx-priority-%u-xoff", i);
+ "tx_priority_%u_xoff.nic", i);
p += ETH_GSTRING_LEN;
}
for (i = 0; i < ICE_MAX_USER_PRIORITY; i++) {
snprintf(p, ETH_GSTRING_LEN,
- "port.rx-priority-%u-xon", i);
+ "rx_priority_%u_xon.nic", i);
p += ETH_GSTRING_LEN;
snprintf(p, ETH_GSTRING_LEN,
- "port.rx-priority-%u-xoff", i);
+ "rx_priority_%u_xoff.nic", i);
p += ETH_GSTRING_LEN;
}
break;
+ case ETH_SS_TEST:
+ memcpy(data, ice_gstrings_test, ICE_TEST_LEN * ETH_GSTRING_LEN);
+ break;
case ETH_SS_PRIV_FLAGS:
for (i = 0; i < ICE_PRIV_FLAG_ARRAY_SIZE; i++) {
snprintf(p, ETH_GSTRING_LEN, "%s",
@@ -371,6 +960,185 @@ ice_set_phys_id(struct net_device *netdev, enum ethtool_phys_id_state state)
}
/**
+ * ice_set_fec_cfg - Set link FEC options
+ * @netdev: network interface device structure
+ * @req_fec: FEC mode to configure
+ */
+static int ice_set_fec_cfg(struct net_device *netdev, enum ice_fec_mode req_fec)
+{
+ struct ice_netdev_priv *np = netdev_priv(netdev);
+ struct ice_aqc_set_phy_cfg_data config = { 0 };
+ struct ice_aqc_get_phy_caps_data *caps;
+ struct ice_vsi *vsi = np->vsi;
+ u8 sw_cfg_caps, sw_cfg_fec;
+ struct ice_port_info *pi;
+ enum ice_status status;
+ int err = 0;
+
+ pi = vsi->port_info;
+ if (!pi)
+ return -EOPNOTSUPP;
+
+ /* Changing FEC parameters is only supported on the PF VSI */
+ if (vsi->type != ICE_VSI_PF) {
+ netdev_info(netdev, "Changing FEC parameters only supported for PF VSI\n");
+ return -EOPNOTSUPP;
+ }
+
+ /* Get last SW configuration */
+ caps = devm_kzalloc(&vsi->back->pdev->dev, sizeof(*caps), GFP_KERNEL);
+ if (!caps)
+ return -ENOMEM;
+
+ status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_SW_CFG,
+ caps, NULL);
+ if (status) {
+ err = -EAGAIN;
+ goto done;
+ }
+
+ /* Copy SW configuration returned from PHY caps to PHY config */
+ ice_copy_phy_caps_to_cfg(caps, &config);
+ sw_cfg_caps = caps->caps;
+ sw_cfg_fec = caps->link_fec_options;
+
+ /* Get topology caps, then copy PHY FEC topology caps to PHY config */
+ memset(caps, 0, sizeof(*caps));
+
+ status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_TOPO_CAP,
+ caps, NULL);
+ if (status) {
+ err = -EAGAIN;
+ goto done;
+ }
+
+ config.caps |= (caps->caps & ICE_AQC_PHY_EN_AUTO_FEC);
+ config.link_fec_opt = caps->link_fec_options;
+
+ ice_cfg_phy_fec(&config, req_fec);
+
+ /* If FEC mode has changed, then set PHY configuration and enable AN. */
+ if ((config.caps & ICE_AQ_PHY_ENA_AUTO_FEC) !=
+ (sw_cfg_caps & ICE_AQC_PHY_EN_AUTO_FEC) ||
+ config.link_fec_opt != sw_cfg_fec) {
+ if (caps->caps & ICE_AQC_PHY_AN_MODE)
+ config.caps |= ICE_AQ_PHY_ENA_AUTO_LINK_UPDT;
+
+ status = ice_aq_set_phy_cfg(pi->hw, pi->lport, &config, NULL);
+
+ if (status)
+ err = -EAGAIN;
+ }
+
+done:
+ devm_kfree(&vsi->back->pdev->dev, caps);
+ return err;
+}
+
+/**
+ * ice_set_fecparam - Set FEC link options
+ * @netdev: network interface device structure
+ * @fecparam: Ethtool structure containing the FEC parameters to set
+ */
+static int
+ice_set_fecparam(struct net_device *netdev, struct ethtool_fecparam *fecparam)
+{
+ struct ice_netdev_priv *np = netdev_priv(netdev);
+ struct ice_vsi *vsi = np->vsi;
+ enum ice_fec_mode fec;
+
+ switch (fecparam->fec) {
+ case ETHTOOL_FEC_AUTO:
+ fec = ICE_FEC_AUTO;
+ break;
+ case ETHTOOL_FEC_RS:
+ fec = ICE_FEC_RS;
+ break;
+ case ETHTOOL_FEC_BASER:
+ fec = ICE_FEC_BASER;
+ break;
+ case ETHTOOL_FEC_OFF:
+ case ETHTOOL_FEC_NONE:
+ fec = ICE_FEC_NONE;
+ break;
+ default:
+ dev_warn(&vsi->back->pdev->dev, "Unsupported FEC mode: %d\n",
+ fecparam->fec);
+ return -EINVAL;
+ }
+
+ return ice_set_fec_cfg(netdev, fec);
+}
+
+/**
+ * ice_get_fecparam - Get link FEC options
+ * @netdev: network interface device structure
+ * @fecparam: Ethtool structure to retrieve FEC parameters
+ */
+static int
+ice_get_fecparam(struct net_device *netdev, struct ethtool_fecparam *fecparam)
+{
+ struct ice_netdev_priv *np = netdev_priv(netdev);
+ struct ice_aqc_get_phy_caps_data *caps;
+ struct ice_link_status *link_info;
+ struct ice_vsi *vsi = np->vsi;
+ struct ice_port_info *pi;
+ enum ice_status status;
+ int err = 0;
+
+ pi = vsi->port_info;
+
+ if (!pi)
+ return -EOPNOTSUPP;
+ link_info = &pi->phy.link_info;
+
+ /* Set FEC mode based on negotiated link info */
+ switch (link_info->fec_info) {
+ case ICE_AQ_LINK_25G_KR_FEC_EN:
+ fecparam->active_fec = ETHTOOL_FEC_BASER;
+ break;
+ case ICE_AQ_LINK_25G_RS_528_FEC_EN:
+ /* fall through */
+ case ICE_AQ_LINK_25G_RS_544_FEC_EN:
+ fecparam->active_fec = ETHTOOL_FEC_RS;
+ break;
+ default:
+ fecparam->active_fec = ETHTOOL_FEC_OFF;
+ break;
+ }
+
+ caps = devm_kzalloc(&vsi->back->pdev->dev, sizeof(*caps), GFP_KERNEL);
+ if (!caps)
+ return -ENOMEM;
+
+ status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_TOPO_CAP,
+ caps, NULL);
+ if (status) {
+ err = -EAGAIN;
+ goto done;
+ }
+
+ /* Set supported/configured FEC modes based on PHY capability */
+ if (caps->caps & ICE_AQC_PHY_EN_AUTO_FEC)
+ fecparam->fec |= ETHTOOL_FEC_AUTO;
+ if (caps->link_fec_options & ICE_AQC_PHY_FEC_10G_KR_40G_KR4_EN ||
+ caps->link_fec_options & ICE_AQC_PHY_FEC_10G_KR_40G_KR4_REQ ||
+ caps->link_fec_options & ICE_AQC_PHY_FEC_25G_KR_CLAUSE74_EN ||
+ caps->link_fec_options & ICE_AQC_PHY_FEC_25G_KR_REQ)
+ fecparam->fec |= ETHTOOL_FEC_BASER;
+ if (caps->link_fec_options & ICE_AQC_PHY_FEC_25G_RS_528_REQ ||
+ caps->link_fec_options & ICE_AQC_PHY_FEC_25G_RS_544_REQ ||
+ caps->link_fec_options & ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN)
+ fecparam->fec |= ETHTOOL_FEC_RS;
+ if (caps->link_fec_options == 0)
+ fecparam->fec |= ETHTOOL_FEC_OFF;
+
+done:
+ devm_kfree(&vsi->back->pdev->dev, caps);
+ return err;
+}
+
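A rough userspace counterpart to the two callbacks above uses the legacy ETHTOOL_GFECPARAM ioctl (what `ethtool --show-fec` drives). This is a hedged sketch; the interface name is a placeholder.

#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(void)
{
        struct ethtool_fecparam fec = { .cmd = ETHTOOL_GFECPARAM };
        struct ifreq ifr = { 0 };
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);    /* placeholder name */
        ifr.ifr_data = (void *)&fec;
        if (fd < 0 || ioctl(fd, SIOCETHTOOL, &ifr) < 0)
                return 1;
        /* fec.fec carries the supported/configured modes and fec.active_fec
         * the negotiated one, as filled in by ice_get_fecparam() above */
        printf("configured 0x%x active 0x%x\n", fec.fec, fec.active_fec);
        return 0;
}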
+/**
* ice_get_priv_flags - report device private flags
* @netdev: network interface device structure
*
@@ -433,10 +1201,11 @@ static int ice_set_priv_flags(struct net_device *netdev, u32 flags)
bitmap_xor(change_flags, pf->flags, orig_flags, ICE_PF_FLAGS_NBITS);
- if (test_bit(ICE_FLAG_DISABLE_FW_LLDP, change_flags)) {
- if (test_bit(ICE_FLAG_DISABLE_FW_LLDP, pf->flags)) {
+ if (test_bit(ICE_FLAG_ENABLE_FW_LLDP, change_flags)) {
+ if (!test_bit(ICE_FLAG_ENABLE_FW_LLDP, pf->flags)) {
enum ice_status status;
+ /* Disable FW LLDP engine */
status = ice_aq_cfg_lldp_mib_change(&pf->hw, false,
NULL);
/* If unregistering for LLDP events fails, this is
@@ -450,7 +1219,7 @@ static int ice_set_priv_flags(struct net_device *netdev, u32 flags)
/* The AQ call to stop the FW LLDP agent will generate
* an error if the agent is already stopped.
*/
- status = ice_aq_stop_lldp(&pf->hw, true, NULL);
+ status = ice_aq_stop_lldp(&pf->hw, true, true, NULL);
if (status)
dev_warn(&pf->pdev->dev,
"Fail to stop LLDP agent\n");
@@ -458,9 +1227,14 @@ static int ice_set_priv_flags(struct net_device *netdev, u32 flags)
* will likely not need DCB, so failure to init is
* not a concern of ethtool
*/
- status = ice_init_pf_dcb(pf);
+ status = ice_init_pf_dcb(pf, true);
if (status)
dev_warn(&pf->pdev->dev, "Fail to init DCB\n");
+
+ /* Forward LLDP packets to default VSI so that they
+ * are passed up the stack
+ */
+ ice_cfg_sw_lldp(vsi, false, true);
} else {
enum ice_status status;
bool dcbx_agent_status;
@@ -468,12 +1242,12 @@ static int ice_set_priv_flags(struct net_device *netdev, u32 flags)
/* AQ command to start FW LLDP agent will return an
* error if the agent is already started
*/
- status = ice_aq_start_lldp(&pf->hw, NULL);
+ status = ice_aq_start_lldp(&pf->hw, true, NULL);
if (status)
dev_warn(&pf->pdev->dev,
"Fail to start LLDP Agent\n");
- /* AQ command to start FW DCBx agent will fail if
+ /* AQ command to start FW DCBX agent will fail if
* the agent is already started
*/
status = ice_aq_start_stop_dcbx(&pf->hw, true,
@@ -491,15 +1265,14 @@ static int ice_set_priv_flags(struct net_device *netdev, u32 flags)
* registration/init failed but do not return error
* state to ethtool
*/
- status = ice_aq_cfg_lldp_mib_change(&pf->hw, false,
- NULL);
- if (status)
- dev_dbg(&pf->pdev->dev,
- "Fail to reg for MIB change\n");
-
- status = ice_init_pf_dcb(pf);
+ status = ice_init_pf_dcb(pf, true);
if (status)
dev_dbg(&pf->pdev->dev, "Fail to init DCB\n");
+
+ /* Remove rule to direct LLDP packets to default VSI.
+ * The FW LLDP engine will now be consuming them.
+ */
+ ice_cfg_sw_lldp(vsi, false, false);
}
}
clear_bit(ICE_FLAG_ETHTOOL_CTXT, pf->flags);
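Reduced to its decision, the flag handling above boils down to the sketch below, where the driver calls are collapsed into comments and the helper name is purely illustrative.

#include <stdbool.h>
#include <stdio.h>

/* flag set   -> FW LLDP/DCBX agents run and consume LLDP frames
 * flag clear -> agents stopped, a switch rule forwards Rx LLDP to the stack */
static void apply_fw_lldp_flag(bool flag_changed, bool flag_now_set)
{
        if (!flag_changed)
                return;
        if (flag_now_set)
                printf("start FW LLDP and DCBX agents, re-init DCB, remove SW Rx LLDP rule\n");
        else
                printf("stop FW LLDP agent, re-init DCB, add SW Rx LLDP rule\n");
}

int main(void)
{
        apply_fw_lldp_flag(true, false);        /* user cleared the flag */
        apply_fw_lldp_flag(true, true);         /* user set the flag */
        return 0;
}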
@@ -529,6 +1302,8 @@ static int ice_get_sset_count(struct net_device *netdev, int sset)
* not safe.
*/
return ICE_ALL_STATS_LEN(netdev);
+ case ETH_SS_TEST:
+ return ICE_TEST_LEN;
case ETH_SS_PRIV_FLAGS:
return ICE_PRIV_FLAG_ARRAY_SIZE;
default:
@@ -628,7 +1403,8 @@ ice_phy_type_to_ethtool(struct net_device *netdev,
phy_types_low & ICE_PHY_TYPE_LOW_100M_SGMII) {
ethtool_link_ksettings_add_link_mode(ks, supported,
100baseT_Full);
- if (hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_100MB)
+ if (!hw_link_info->req_speeds ||
+ hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_100MB)
ethtool_link_ksettings_add_link_mode(ks, advertising,
100baseT_Full);
}
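Here and in the hunks that follow, the advertising test becomes the same two-part guard: advertise a link mode when no specific speed was requested (the autoneg default) or when the requested-speed mask includes it. A self-contained restatement, with a hypothetical helper name:

#include <stdbool.h>
#include <stdio.h>

static bool should_advertise(unsigned int req_speeds, unsigned int speed_bit)
{
        return !req_speeds || (req_speeds & speed_bit);
}

int main(void)
{
        /* nothing requested: advertise every supported mode */
        printf("%d\n", should_advertise(0, 0x10));      /* 1 */
        /* a specific speed requested: advertise only matching modes */
        printf("%d\n", should_advertise(0x4, 0x10));    /* 0 */
        return 0;
}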
@@ -636,14 +1412,16 @@ ice_phy_type_to_ethtool(struct net_device *netdev,
phy_types_low & ICE_PHY_TYPE_LOW_1G_SGMII) {
ethtool_link_ksettings_add_link_mode(ks, supported,
1000baseT_Full);
- if (hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_1000MB)
+ if (!hw_link_info->req_speeds ||
+ hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_1000MB)
ethtool_link_ksettings_add_link_mode(ks, advertising,
1000baseT_Full);
}
if (phy_types_low & ICE_PHY_TYPE_LOW_1000BASE_KX) {
ethtool_link_ksettings_add_link_mode(ks, supported,
1000baseKX_Full);
- if (hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_1000MB)
+ if (!hw_link_info->req_speeds ||
+ hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_1000MB)
ethtool_link_ksettings_add_link_mode(ks, advertising,
1000baseKX_Full);
}
@@ -651,14 +1429,16 @@ ice_phy_type_to_ethtool(struct net_device *netdev,
phy_types_low & ICE_PHY_TYPE_LOW_1000BASE_LX) {
ethtool_link_ksettings_add_link_mode(ks, supported,
1000baseX_Full);
- if (hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_1000MB)
+ if (!hw_link_info->req_speeds ||
+ hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_1000MB)
ethtool_link_ksettings_add_link_mode(ks, advertising,
1000baseX_Full);
}
if (phy_types_low & ICE_PHY_TYPE_LOW_2500BASE_T) {
ethtool_link_ksettings_add_link_mode(ks, supported,
2500baseT_Full);
- if (hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_2500MB)
+ if (!hw_link_info->req_speeds ||
+ hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_2500MB)
ethtool_link_ksettings_add_link_mode(ks, advertising,
2500baseT_Full);
}
@@ -666,7 +1446,8 @@ ice_phy_type_to_ethtool(struct net_device *netdev,
phy_types_low & ICE_PHY_TYPE_LOW_2500BASE_KX) {
ethtool_link_ksettings_add_link_mode(ks, supported,
2500baseX_Full);
- if (hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_2500MB)
+ if (!hw_link_info->req_speeds ||
+ hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_2500MB)
ethtool_link_ksettings_add_link_mode(ks, advertising,
2500baseX_Full);
}
@@ -674,7 +1455,8 @@ ice_phy_type_to_ethtool(struct net_device *netdev,
phy_types_low & ICE_PHY_TYPE_LOW_5GBASE_KR) {
ethtool_link_ksettings_add_link_mode(ks, supported,
5000baseT_Full);
- if (hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_5GB)
+ if (!hw_link_info->req_speeds ||
+ hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_5GB)
ethtool_link_ksettings_add_link_mode(ks, advertising,
5000baseT_Full);
}
@@ -684,28 +1466,32 @@ ice_phy_type_to_ethtool(struct net_device *netdev,
phy_types_low & ICE_PHY_TYPE_LOW_10G_SFI_C2C) {
ethtool_link_ksettings_add_link_mode(ks, supported,
10000baseT_Full);
- if (hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_10GB)
+ if (!hw_link_info->req_speeds ||
+ hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_10GB)
ethtool_link_ksettings_add_link_mode(ks, advertising,
10000baseT_Full);
}
if (phy_types_low & ICE_PHY_TYPE_LOW_10GBASE_KR_CR1) {
ethtool_link_ksettings_add_link_mode(ks, supported,
10000baseKR_Full);
- if (hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_10GB)
+ if (!hw_link_info->req_speeds ||
+ hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_10GB)
ethtool_link_ksettings_add_link_mode(ks, advertising,
10000baseKR_Full);
}
if (phy_types_low & ICE_PHY_TYPE_LOW_10GBASE_SR) {
ethtool_link_ksettings_add_link_mode(ks, supported,
10000baseSR_Full);
- if (hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_10GB)
+ if (!hw_link_info->req_speeds ||
+ hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_10GB)
ethtool_link_ksettings_add_link_mode(ks, advertising,
10000baseSR_Full);
}
if (phy_types_low & ICE_PHY_TYPE_LOW_10GBASE_LR) {
ethtool_link_ksettings_add_link_mode(ks, supported,
10000baseLR_Full);
- if (hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_10GB)
+ if (!hw_link_info->req_speeds ||
+ hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_10GB)
ethtool_link_ksettings_add_link_mode(ks, advertising,
10000baseLR_Full);
}
@@ -717,7 +1503,8 @@ ice_phy_type_to_ethtool(struct net_device *netdev,
phy_types_low & ICE_PHY_TYPE_LOW_25G_AUI_C2C) {
ethtool_link_ksettings_add_link_mode(ks, supported,
25000baseCR_Full);
- if (hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_25GB)
+ if (!hw_link_info->req_speeds ||
+ hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_25GB)
ethtool_link_ksettings_add_link_mode(ks, advertising,
25000baseCR_Full);
}
@@ -725,7 +1512,8 @@ ice_phy_type_to_ethtool(struct net_device *netdev,
phy_types_low & ICE_PHY_TYPE_LOW_25GBASE_LR) {
ethtool_link_ksettings_add_link_mode(ks, supported,
25000baseSR_Full);
- if (hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_25GB)
+ if (!hw_link_info->req_speeds ||
+ hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_25GB)
ethtool_link_ksettings_add_link_mode(ks, advertising,
25000baseSR_Full);
}
@@ -734,14 +1522,16 @@ ice_phy_type_to_ethtool(struct net_device *netdev,
phy_types_low & ICE_PHY_TYPE_LOW_25GBASE_KR1) {
ethtool_link_ksettings_add_link_mode(ks, supported,
25000baseKR_Full);
- if (hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_25GB)
+ if (!hw_link_info->req_speeds ||
+ hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_25GB)
ethtool_link_ksettings_add_link_mode(ks, advertising,
25000baseKR_Full);
}
if (phy_types_low & ICE_PHY_TYPE_LOW_40GBASE_KR4) {
ethtool_link_ksettings_add_link_mode(ks, supported,
40000baseKR4_Full);
- if (hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_40GB)
+ if (!hw_link_info->req_speeds ||
+ hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_40GB)
ethtool_link_ksettings_add_link_mode(ks, advertising,
40000baseKR4_Full);
}
@@ -750,21 +1540,24 @@ ice_phy_type_to_ethtool(struct net_device *netdev,
phy_types_low & ICE_PHY_TYPE_LOW_40G_XLAUI) {
ethtool_link_ksettings_add_link_mode(ks, supported,
40000baseCR4_Full);
- if (hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_40GB)
+ if (!hw_link_info->req_speeds ||
+ hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_40GB)
ethtool_link_ksettings_add_link_mode(ks, advertising,
40000baseCR4_Full);
}
if (phy_types_low & ICE_PHY_TYPE_LOW_40GBASE_SR4) {
ethtool_link_ksettings_add_link_mode(ks, supported,
40000baseSR4_Full);
- if (hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_40GB)
+ if (!hw_link_info->req_speeds ||
+ hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_40GB)
ethtool_link_ksettings_add_link_mode(ks, advertising,
40000baseSR4_Full);
}
if (phy_types_low & ICE_PHY_TYPE_LOW_40GBASE_LR4) {
ethtool_link_ksettings_add_link_mode(ks, supported,
40000baseLR4_Full);
- if (hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_40GB)
+ if (!hw_link_info->req_speeds ||
+ hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_40GB)
ethtool_link_ksettings_add_link_mode(ks, advertising,
40000baseLR4_Full);
}
@@ -779,7 +1572,8 @@ ice_phy_type_to_ethtool(struct net_device *netdev,
phy_types_low & ICE_PHY_TYPE_LOW_50G_AUI1) {
ethtool_link_ksettings_add_link_mode(ks, supported,
50000baseCR2_Full);
- if (hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_50GB)
+ if (!hw_link_info->req_speeds ||
+ hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_50GB)
ethtool_link_ksettings_add_link_mode(ks, advertising,
50000baseCR2_Full);
}
@@ -787,7 +1581,8 @@ ice_phy_type_to_ethtool(struct net_device *netdev,
phy_types_low & ICE_PHY_TYPE_LOW_50GBASE_KR_PAM4) {
ethtool_link_ksettings_add_link_mode(ks, supported,
50000baseKR2_Full);
- if (hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_50GB)
+ if (!hw_link_info->req_speeds ||
+ hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_50GB)
ethtool_link_ksettings_add_link_mode(ks, advertising,
50000baseKR2_Full);
}
@@ -797,7 +1592,8 @@ ice_phy_type_to_ethtool(struct net_device *netdev,
phy_types_low & ICE_PHY_TYPE_LOW_50GBASE_LR) {
ethtool_link_ksettings_add_link_mode(ks, supported,
50000baseSR2_Full);
- if (hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_50GB)
+ if (!hw_link_info->req_speeds ||
+ hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_50GB)
ethtool_link_ksettings_add_link_mode(ks, advertising,
50000baseSR2_Full);
}
@@ -814,7 +1610,8 @@ ice_phy_type_to_ethtool(struct net_device *netdev,
phy_types_high & ICE_PHY_TYPE_HIGH_100G_AUI2) {
ethtool_link_ksettings_add_link_mode(ks, supported,
100000baseCR4_Full);
- if (hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_100GB)
+ if (!hw_link_info->req_speeds ||
+ hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_100GB)
need_add_adv_mode = true;
}
if (need_add_adv_mode) {
@@ -826,7 +1623,8 @@ ice_phy_type_to_ethtool(struct net_device *netdev,
phy_types_low & ICE_PHY_TYPE_LOW_100GBASE_SR2) {
ethtool_link_ksettings_add_link_mode(ks, supported,
100000baseSR4_Full);
- if (hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_100GB)
+ if (!hw_link_info->req_speeds ||
+ hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_100GB)
need_add_adv_mode = true;
}
if (need_add_adv_mode) {
@@ -838,7 +1636,8 @@ ice_phy_type_to_ethtool(struct net_device *netdev,
phy_types_low & ICE_PHY_TYPE_LOW_100GBASE_DR) {
ethtool_link_ksettings_add_link_mode(ks, supported,
100000baseLR4_ER4_Full);
- if (hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_100GB)
+ if (!hw_link_info->req_speeds ||
+ hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_100GB)
need_add_adv_mode = true;
}
if (need_add_adv_mode) {
@@ -851,7 +1650,8 @@ ice_phy_type_to_ethtool(struct net_device *netdev,
phy_types_high & ICE_PHY_TYPE_HIGH_100GBASE_KR2_PAM4) {
ethtool_link_ksettings_add_link_mode(ks, supported,
100000baseKR4_Full);
- if (hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_100GB)
+ if (!hw_link_info->req_speeds ||
+ hw_link_info->req_speeds & ICE_AQ_LINK_SPEED_100GB)
need_add_adv_mode = true;
}
if (need_add_adv_mode)
@@ -1275,6 +2075,7 @@ ice_get_link_ksettings(struct net_device *netdev,
struct ethtool_link_ksettings *ks)
{
struct ice_netdev_priv *np = netdev_priv(netdev);
+ struct ice_aqc_get_phy_caps_data *caps;
struct ice_link_status *hw_link_info;
struct ice_vsi *vsi = np->vsi;
@@ -1345,6 +2146,40 @@ ice_get_link_ksettings(struct net_device *netdev,
break;
}
+ caps = devm_kzalloc(&vsi->back->pdev->dev, sizeof(*caps), GFP_KERNEL);
+ if (!caps)
+ goto done;
+
+ if (ice_aq_get_phy_caps(vsi->port_info, false, ICE_AQC_REPORT_TOPO_CAP,
+ caps, NULL))
+ netdev_info(netdev, "Get phy capability failed.\n");
+
+ /* Set supported FEC modes based on PHY capability */
+ ethtool_link_ksettings_add_link_mode(ks, supported, FEC_NONE);
+
+ if (caps->link_fec_options & ICE_AQC_PHY_FEC_10G_KR_40G_KR4_EN ||
+ caps->link_fec_options & ICE_AQC_PHY_FEC_25G_KR_CLAUSE74_EN)
+ ethtool_link_ksettings_add_link_mode(ks, supported, FEC_BASER);
+ if (caps->link_fec_options & ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN)
+ ethtool_link_ksettings_add_link_mode(ks, supported, FEC_RS);
+
+ if (ice_aq_get_phy_caps(vsi->port_info, false, ICE_AQC_REPORT_SW_CFG,
+ caps, NULL))
+ netdev_info(netdev, "Get phy capability failed.\n");
+
+ /* Set advertised FEC modes based on PHY capability */
+ ethtool_link_ksettings_add_link_mode(ks, advertising, FEC_NONE);
+
+ if (caps->link_fec_options & ICE_AQC_PHY_FEC_10G_KR_40G_KR4_REQ ||
+ caps->link_fec_options & ICE_AQC_PHY_FEC_25G_KR_REQ)
+ ethtool_link_ksettings_add_link_mode(ks, advertising,
+ FEC_BASER);
+ if (caps->link_fec_options & ICE_AQC_PHY_FEC_25G_RS_528_REQ ||
+ caps->link_fec_options & ICE_AQC_PHY_FEC_25G_RS_544_REQ)
+ ethtool_link_ksettings_add_link_mode(ks, advertising, FEC_RS);
+
+done:
+ devm_kfree(&vsi->back->pdev->dev, caps);
return 0;
}
@@ -2371,8 +3206,7 @@ ice_set_rc_coalesce(enum ice_container_type c_type, struct ethtool_coalesce *ec,
if (ec->rx_coalesce_usecs_high != rc->ring->q_vector->intrl) {
rc->ring->q_vector->intrl = ec->rx_coalesce_usecs_high;
- wr32(&pf->hw, GLINT_RATE(vsi->hw_base_vector +
- rc->ring->q_vector->v_idx),
+ wr32(&pf->hw, GLINT_RATE(rc->ring->q_vector->reg_idx),
ice_intrl_usec_to_reg(ec->rx_coalesce_usecs_high,
pf->hw.intrl_gran));
}
@@ -2533,6 +3367,7 @@ static const struct ethtool_ops ice_ethtool_ops = {
.get_regs = ice_get_regs,
.get_msglevel = ice_get_msglevel,
.set_msglevel = ice_set_msglevel,
+ .self_test = ice_self_test,
.get_link = ethtool_op_get_link,
.get_eeprom_len = ice_get_eeprom_len,
.get_eeprom = ice_get_eeprom,
@@ -2557,6 +3392,8 @@ static const struct ethtool_ops ice_ethtool_ops = {
.get_ts_info = ethtool_op_get_ts_info,
.get_per_queue_coalesce = ice_get_per_q_coalesce,
.set_per_queue_coalesce = ice_set_per_q_coalesce,
+ .get_fecparam = ice_get_fecparam,
+ .set_fecparam = ice_set_fecparam,
};
/**
diff --git a/drivers/net/ethernet/intel/ice/ice_hw_autogen.h b/drivers/net/ethernet/intel/ice/ice_hw_autogen.h
index ec25f26069b0..6c5ce05742b1 100644
--- a/drivers/net/ethernet/intel/ice/ice_hw_autogen.h
+++ b/drivers/net/ethernet/intel/ice/ice_hw_autogen.h
@@ -6,6 +6,9 @@
#ifndef _ICE_HW_AUTOGEN_H_
#define _ICE_HW_AUTOGEN_H_
+#define PF0INT_ITR_0(_i) (0x03000004 + ((_i) * 4096))
+#define PF0INT_ITR_1(_i) (0x03000008 + ((_i) * 4096))
+#define PF0INT_ITR_2(_i) (0x0300000C + ((_i) * 4096))
#define QTX_COMM_DBELL(_DBQM) (0x002C0000 + ((_DBQM) * 4))
#define QTX_COMM_HEAD(_DBQM) (0x000E0000 + ((_DBQM) * 4))
#define QTX_COMM_HEAD_HEAD_S 0
@@ -155,6 +158,7 @@
#define PFINT_OICR_HMC_ERR_M BIT(26)
#define PFINT_OICR_PE_CRITERR_M BIT(28)
#define PFINT_OICR_VFLR_M BIT(29)
+#define PFINT_OICR_SWINT_M BIT(31)
#define PFINT_OICR_CTL 0x0016CA80
#define PFINT_OICR_CTL_MSIX_INDX_M ICE_M(0x7FF, 0)
#define PFINT_OICR_CTL_ITR_INDX_S 11
diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index fbf1eba0cc2a..a19f5920733b 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -137,6 +137,8 @@ ice_setup_tx_ctx(struct ice_ring *ring, struct ice_tlan_ctx *tlan_ctx, u16 pf_q)
* for PF or EMP this field should be set to zero
*/
switch (vsi->type) {
+ case ICE_VSI_LB:
+ /* fall through */
case ICE_VSI_PF:
tlan_ctx->vmvf_type = ICE_TLAN_CTX_VMVF_TYPE_PF;
break;
@@ -251,6 +253,10 @@ static int ice_vsi_alloc_arrays(struct ice_vsi *vsi)
if (!vsi->rx_rings)
goto err_rxrings;
+ /* There is no need to allocate q_vectors for a loopback VSI. */
+ if (vsi->type == ICE_VSI_LB)
+ return 0;
+
/* allocate memory for q_vector pointers */
vsi->q_vectors = devm_kcalloc(&pf->pdev->dev, vsi->num_q_vectors,
sizeof(*vsi->q_vectors), GFP_KERNEL);
@@ -275,6 +281,8 @@ static void ice_vsi_set_num_desc(struct ice_vsi *vsi)
{
switch (vsi->type) {
case ICE_VSI_PF:
+ /* fall through */
+ case ICE_VSI_LB:
vsi->num_rx_desc = ICE_DFLT_NUM_RX_DESC;
vsi->num_tx_desc = ICE_DFLT_NUM_TX_DESC;
break;
@@ -313,10 +321,14 @@ static void ice_vsi_set_num_qs(struct ice_vsi *vsi, u16 vf_id)
vsi->alloc_rxq = vf->num_vf_qs;
/* pf->num_vf_msix includes (VF miscellaneous vector +
* data queue interrupts). Since vsi->num_q_vectors is number
- * of queues vectors, subtract 1 from the original vector
- * count
+ * of queue vectors, subtract 1 (ICE_NONQ_VECS_VF) from the
+ * original vector count
*/
- vsi->num_q_vectors = pf->num_vf_msix - 1;
+ vsi->num_q_vectors = pf->num_vf_msix - ICE_NONQ_VECS_VF;
+ break;
+ case ICE_VSI_LB:
+ vsi->alloc_txq = 1;
+ vsi->alloc_rxq = 1;
break;
default:
dev_warn(&pf->pdev->dev, "Unknown VSI type %d\n", vsi->type);
@@ -516,6 +528,10 @@ ice_vsi_alloc(struct ice_pf *pf, enum ice_vsi_type type, u16 vf_id)
if (ice_vsi_alloc_arrays(vsi))
goto err_rings;
break;
+ case ICE_VSI_LB:
+ if (ice_vsi_alloc_arrays(vsi))
+ goto err_rings;
+ break;
default:
dev_warn(&pf->pdev->dev, "Unknown VSI type %d\n", vsi->type);
goto unlock_pf;
@@ -732,6 +748,8 @@ static void ice_vsi_set_rss_params(struct ice_vsi *vsi)
BIT(cap->rss_table_entry_width));
vsi->rss_lut_type = ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_VSI;
break;
+ case ICE_VSI_LB:
+ break;
default:
dev_warn(&pf->pdev->dev, "Unknown VSI type %d\n",
vsi->type);
@@ -924,6 +942,9 @@ static void ice_set_rss_vsi_ctx(struct ice_vsi_ctx *ctxt, struct ice_vsi *vsi)
lut_type = ICE_AQ_VSI_Q_OPT_RSS_LUT_VSI;
hash_type = ICE_AQ_VSI_Q_OPT_RSS_TPLZ;
break;
+ case ICE_VSI_LB:
+ dev_dbg(&pf->pdev->dev, "Unsupported VSI type %d\n", vsi->type);
+ return;
default:
dev_warn(&pf->pdev->dev, "Unknown VSI type %d\n", vsi->type);
return;
@@ -955,6 +976,8 @@ static int ice_vsi_init(struct ice_vsi *vsi)
ctxt->info = vsi->info;
switch (vsi->type) {
+ case ICE_VSI_LB:
+ /* fall through */
case ICE_VSI_PF:
ctxt->flags = ICE_AQ_VSI_TYPE_PF;
break;
@@ -1145,61 +1168,32 @@ err_out:
static int ice_vsi_setup_vector_base(struct ice_vsi *vsi)
{
struct ice_pf *pf = vsi->back;
- int num_q_vectors = 0;
+ u16 num_q_vectors;
+
+ /* SRIOV doesn't grab irq_tracker entries for each VSI */
+ if (vsi->type == ICE_VSI_VF)
+ return 0;
- if (vsi->sw_base_vector || vsi->hw_base_vector) {
- dev_dbg(&pf->pdev->dev, "VSI %d has non-zero HW base vector %d or SW base vector %d\n",
- vsi->vsi_num, vsi->hw_base_vector, vsi->sw_base_vector);
+ if (vsi->base_vector) {
+ dev_dbg(&pf->pdev->dev, "VSI %d has non-zero base vector %d\n",
+ vsi->vsi_num, vsi->base_vector);
return -EEXIST;
}
if (!test_bit(ICE_FLAG_MSIX_ENA, pf->flags))
return -ENOENT;
- switch (vsi->type) {
- case ICE_VSI_PF:
- num_q_vectors = vsi->num_q_vectors;
- /* reserve slots from OS requested IRQs */
- vsi->sw_base_vector = ice_get_res(pf, pf->sw_irq_tracker,
- num_q_vectors, vsi->idx);
- if (vsi->sw_base_vector < 0) {
- dev_err(&pf->pdev->dev,
- "Failed to get tracking for %d SW vectors for VSI %d, err=%d\n",
- num_q_vectors, vsi->vsi_num,
- vsi->sw_base_vector);
- return -ENOENT;
- }
- pf->num_avail_sw_msix -= num_q_vectors;
-
- /* reserve slots from HW interrupts */
- vsi->hw_base_vector = ice_get_res(pf, pf->hw_irq_tracker,
- num_q_vectors, vsi->idx);
- break;
- case ICE_VSI_VF:
- /* take VF misc vector and data vectors into account */
- num_q_vectors = pf->num_vf_msix;
- /* For VF VSI, reserve slots only from HW interrupts */
- vsi->hw_base_vector = ice_get_res(pf, pf->hw_irq_tracker,
- num_q_vectors, vsi->idx);
- break;
- default:
- dev_warn(&pf->pdev->dev, "Unknown VSI type %d\n", vsi->type);
- break;
- }
-
- if (vsi->hw_base_vector < 0) {
+ num_q_vectors = vsi->num_q_vectors;
+ /* reserve slots from OS requested IRQs */
+ vsi->base_vector = ice_get_res(pf, pf->irq_tracker, num_q_vectors,
+ vsi->idx);
+ if (vsi->base_vector < 0) {
dev_err(&pf->pdev->dev,
- "Failed to get tracking for %d HW vectors for VSI %d, err=%d\n",
- num_q_vectors, vsi->vsi_num, vsi->hw_base_vector);
- if (vsi->type != ICE_VSI_VF) {
- ice_free_res(pf->sw_irq_tracker,
- vsi->sw_base_vector, vsi->idx);
- pf->num_avail_sw_msix += num_q_vectors;
- }
+ "Failed to get tracking for %d vectors for VSI %d, err=%d\n",
+ num_q_vectors, vsi->vsi_num, vsi->base_vector);
return -ENOENT;
}
-
- pf->num_avail_hw_msix -= num_q_vectors;
+ pf->num_avail_sw_msix -= num_q_vectors;
return 0;
}
@@ -1842,8 +1836,73 @@ ice_cfg_itr(struct ice_hw *hw, struct ice_q_vector *q_vector)
}
/**
+ * ice_cfg_txq_interrupt - configure interrupt on Tx queue
+ * @vsi: the VSI being configured
+ * @txq: Tx queue being mapped to MSI-X vector
+ * @msix_idx: MSI-X vector index within the function
+ * @itr_idx: ITR index of the interrupt cause
+ *
+ * Configure interrupt on Tx queue by associating Tx queue to MSI-X vector
+ * within the function space.
+ */
+#ifdef CONFIG_PCI_IOV
+void
+ice_cfg_txq_interrupt(struct ice_vsi *vsi, u16 txq, u16 msix_idx, u16 itr_idx)
+#else
+static void
+ice_cfg_txq_interrupt(struct ice_vsi *vsi, u16 txq, u16 msix_idx, u16 itr_idx)
+#endif /* CONFIG_PCI_IOV */
+{
+ struct ice_pf *pf = vsi->back;
+ struct ice_hw *hw = &pf->hw;
+ u32 val;
+
+ itr_idx = (itr_idx << QINT_TQCTL_ITR_INDX_S) & QINT_TQCTL_ITR_INDX_M;
+
+ val = QINT_TQCTL_CAUSE_ENA_M | itr_idx |
+ ((msix_idx << QINT_TQCTL_MSIX_INDX_S) & QINT_TQCTL_MSIX_INDX_M);
+
+ wr32(hw, QINT_TQCTL(vsi->txq_map[txq]), val);
+}
+
+/**
+ * ice_cfg_rxq_interrupt - configure interrupt on Rx queue
+ * @vsi: the VSI being configured
+ * @rxq: Rx queue being mapped to MSI-X vector
+ * @msix_idx: MSI-X vector index within the function
+ * @itr_idx: ITR index of the interrupt cause
+ *
+ * Configure interrupt on Rx queue by associating Rx queue to MSI-X vector
+ * within the function space.
+ */
+#ifdef CONFIG_PCI_IOV
+void
+ice_cfg_rxq_interrupt(struct ice_vsi *vsi, u16 rxq, u16 msix_idx, u16 itr_idx)
+#else
+static void
+ice_cfg_rxq_interrupt(struct ice_vsi *vsi, u16 rxq, u16 msix_idx, u16 itr_idx)
+#endif /* CONFIG_PCI_IOV */
+{
+ struct ice_pf *pf = vsi->back;
+ struct ice_hw *hw = &pf->hw;
+ u32 val;
+
+ itr_idx = (itr_idx << QINT_RQCTL_ITR_INDX_S) & QINT_RQCTL_ITR_INDX_M;
+
+ val = QINT_RQCTL_CAUSE_ENA_M | itr_idx |
+ ((msix_idx << QINT_RQCTL_MSIX_INDX_S) & QINT_RQCTL_MSIX_INDX_M);
+
+ wr32(hw, QINT_RQCTL(vsi->rxq_map[rxq]), val);
+
+ ice_flush(hw);
+}
+
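Both new helpers compose the queue-interrupt control value the same way. The standalone sketch below restates that composition; the shift/mask constants are illustrative placeholders, not the ice_hw_autogen.h definitions.

#include <stdio.h>

#define QCTL_CAUSE_ENA  (1u << 30)      /* placeholder bit positions */
#define QCTL_ITR_SHIFT  11
#define QCTL_ITR_MASK   (0x3u << QCTL_ITR_SHIFT)
#define QCTL_MSIX_SHIFT 0
#define QCTL_MSIX_MASK  0x7ffu

static unsigned int qctl_value(unsigned int msix_idx, unsigned int itr_idx)
{
        return QCTL_CAUSE_ENA |
               ((itr_idx << QCTL_ITR_SHIFT) & QCTL_ITR_MASK) |
               ((msix_idx << QCTL_MSIX_SHIFT) & QCTL_MSIX_MASK);
}

int main(void)
{
        printf("0x%08x\n", qctl_value(3, 1));   /* MSI-X vector 3, ITR index 1 */
        return 0;
}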
+/**
* ice_vsi_cfg_msix - MSIX mode Interrupt Config in the HW
* @vsi: the VSI being configured
+ *
+ * This configures MSIX mode interrupts for the PF VSI, and should not be used
+ * for the VF VSI.
*/
void ice_vsi_cfg_msix(struct ice_vsi *vsi)
{
@@ -1873,43 +1932,17 @@ void ice_vsi_cfg_msix(struct ice_vsi *vsi)
* tracked for this PF.
*/
for (q = 0; q < q_vector->num_ring_tx; q++) {
- int itr_idx = (q_vector->tx.itr_idx <<
- QINT_TQCTL_ITR_INDX_S) &
- QINT_TQCTL_ITR_INDX_M;
- u32 val;
-
- if (vsi->type == ICE_VSI_VF)
- val = QINT_TQCTL_CAUSE_ENA_M | itr_idx |
- (((i + 1) << QINT_TQCTL_MSIX_INDX_S) &
- QINT_TQCTL_MSIX_INDX_M);
- else
- val = QINT_TQCTL_CAUSE_ENA_M | itr_idx |
- ((reg_idx << QINT_TQCTL_MSIX_INDX_S) &
- QINT_TQCTL_MSIX_INDX_M);
- wr32(hw, QINT_TQCTL(vsi->txq_map[txq]), val);
+ ice_cfg_txq_interrupt(vsi, txq, reg_idx,
+ q_vector->tx.itr_idx);
txq++;
}
for (q = 0; q < q_vector->num_ring_rx; q++) {
- int itr_idx = (q_vector->rx.itr_idx <<
- QINT_RQCTL_ITR_INDX_S) &
- QINT_RQCTL_ITR_INDX_M;
- u32 val;
-
- if (vsi->type == ICE_VSI_VF)
- val = QINT_RQCTL_CAUSE_ENA_M | itr_idx |
- (((i + 1) << QINT_RQCTL_MSIX_INDX_S) &
- QINT_RQCTL_MSIX_INDX_M);
- else
- val = QINT_RQCTL_CAUSE_ENA_M | itr_idx |
- ((reg_idx << QINT_RQCTL_MSIX_INDX_S) &
- QINT_RQCTL_MSIX_INDX_M);
- wr32(hw, QINT_RQCTL(vsi->rxq_map[rxq]), val);
+ ice_cfg_rxq_interrupt(vsi, rxq, reg_idx,
+ q_vector->rx.itr_idx);
rxq++;
}
}
-
- ice_flush(hw);
}
/**
@@ -2024,6 +2057,19 @@ int ice_vsi_stop_rx_rings(struct ice_vsi *vsi)
}
/**
+ * ice_trigger_sw_intr - trigger a software interrupt
+ * @hw: pointer to the HW structure
+ * @q_vector: interrupt vector to trigger the software interrupt for
+ */
+void ice_trigger_sw_intr(struct ice_hw *hw, struct ice_q_vector *q_vector)
+{
+ wr32(hw, GLINT_DYN_CTL(q_vector->reg_idx),
+ (ICE_ITR_NONE << GLINT_DYN_CTL_ITR_INDX_S) |
+ GLINT_DYN_CTL_SWINT_TRIG_M |
+ GLINT_DYN_CTL_INTENA_M);
+}
+
+/**
* ice_vsi_stop_tx_rings - Disable Tx rings
* @vsi: the VSI being configured
* @rst_src: reset source
@@ -2070,8 +2116,9 @@ ice_vsi_stop_tx_rings(struct ice_vsi *vsi, enum ice_disq_rst_src rst_src,
break;
for (i = 0; i < vsi->tc_cfg.tc_info[tc].qcount_tx; i++) {
- if (!rings || !rings[q_idx] ||
- !rings[q_idx]->q_vector) {
+ struct ice_q_vector *q_vector;
+
+ if (!rings || !rings[q_idx]) {
err = -EINVAL;
goto err_out;
}
@@ -2091,9 +2138,10 @@ ice_vsi_stop_tx_rings(struct ice_vsi *vsi, enum ice_disq_rst_src rst_src,
/* trigger a software interrupt for the vector
* associated to the queue to schedule NAPI handler
*/
- wr32(hw, GLINT_DYN_CTL(rings[i]->q_vector->reg_idx),
- GLINT_DYN_CTL_SWINT_TRIG_M |
- GLINT_DYN_CTL_INTENA_MSK_M);
+ q_vector = rings[i]->q_vector;
+ if (q_vector)
+ ice_trigger_sw_intr(hw, q_vector);
+
q_idx++;
}
status = ice_dis_vsi_txq(vsi->port_info, vsi->idx, tc,
@@ -2234,7 +2282,14 @@ ice_vsi_set_q_vectors_reg_idx(struct ice_vsi *vsi)
goto clear_reg_idx;
}
- q_vector->reg_idx = q_vector->v_idx + vsi->hw_base_vector;
+ if (vsi->type == ICE_VSI_VF) {
+ struct ice_vf *vf = &vsi->back->vf[vsi->vf_id];
+
+ q_vector->reg_idx = ice_calc_vf_reg_idx(vf, q_vector);
+ } else {
+ q_vector->reg_idx =
+ q_vector->v_idx + vsi->base_vector;
+ }
}
return 0;
@@ -2291,6 +2346,54 @@ ice_vsi_add_rem_eth_mac(struct ice_vsi *vsi, bool add_rule)
}
/**
+ * ice_cfg_sw_lldp - Config switch rules for LLDP packet handling
+ * @vsi: the VSI being configured
+ * @tx: bool to determine Tx or Rx rule
+ * @create: bool to determine whether to create or remove the rule
+ */
+void ice_cfg_sw_lldp(struct ice_vsi *vsi, bool tx, bool create)
+{
+ struct ice_fltr_list_entry *list;
+ struct ice_pf *pf = vsi->back;
+ LIST_HEAD(tmp_add_list);
+ enum ice_status status;
+
+ list = devm_kzalloc(&pf->pdev->dev, sizeof(*list), GFP_KERNEL);
+ if (!list)
+ return;
+
+ list->fltr_info.lkup_type = ICE_SW_LKUP_ETHERTYPE;
+ list->fltr_info.vsi_handle = vsi->idx;
+ list->fltr_info.l_data.ethertype_mac.ethertype = ETH_P_LLDP;
+
+ if (tx) {
+ list->fltr_info.fltr_act = ICE_DROP_PACKET;
+ list->fltr_info.flag = ICE_FLTR_TX;
+ list->fltr_info.src_id = ICE_SRC_ID_VSI;
+ } else {
+ list->fltr_info.fltr_act = ICE_FWD_TO_VSI;
+ list->fltr_info.flag = ICE_FLTR_RX;
+ list->fltr_info.src_id = ICE_SRC_ID_LPORT;
+ }
+
+ INIT_LIST_HEAD(&list->list_entry);
+ list_add(&list->list_entry, &tmp_add_list);
+
+ if (create)
+ status = ice_add_eth_mac(&pf->hw, &tmp_add_list);
+ else
+ status = ice_remove_eth_mac(&pf->hw, &tmp_add_list);
+
+ if (status)
+ dev_err(&pf->pdev->dev,
+ "Fail %s %s LLDP rule on VSI %i error: %d\n",
+ create ? "adding" : "removing", tx ? "TX" : "RX",
+ vsi->vsi_num, status);
+
+ ice_free_fltr_list(&pf->pdev->dev, &tmp_add_list);
+}
+
+/**
* ice_vsi_setup - Set up a VSI by a given type
* @pf: board private structure
* @pi: pointer to the port_info instance
@@ -2310,6 +2413,7 @@ ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi,
{
u16 max_txqs[ICE_MAX_TRAFFIC_CLASS] = { 0 };
struct device *dev = &pf->pdev->dev;
+ enum ice_status status;
struct ice_vsi *vsi;
int ret, i;
@@ -2389,23 +2493,24 @@ ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi,
if (ret)
goto unroll_alloc_q_vector;
- /* Setup Vector base only during VF init phase or when VF asks
- * for more vectors than assigned number. In all other cases,
- * assign hw_base_vector to the value given earlier.
- */
- if (test_bit(ICE_VF_STATE_CFG_INTR, pf->vf[vf_id].vf_states)) {
- ret = ice_vsi_setup_vector_base(vsi);
- if (ret)
- goto unroll_vector_base;
- } else {
- vsi->hw_base_vector = pf->vf[vf_id].first_vector_idx;
- }
ret = ice_vsi_set_q_vectors_reg_idx(vsi);
if (ret)
goto unroll_vector_base;
pf->q_left_tx -= vsi->alloc_txq;
pf->q_left_rx -= vsi->alloc_rxq;
+
+ /* Do not exit if configuring RSS had an issue; at least
+ * receive traffic on the first queue. Hence there is no need
+ * to capture the return value.
+ */
+ if (test_bit(ICE_FLAG_RSS_ENA, pf->flags))
+ ice_vsi_cfg_rss_lut_key(vsi);
+ break;
+ case ICE_VSI_LB:
+ ret = ice_vsi_alloc_rings(vsi);
+ if (ret)
+ goto unroll_vsi_init;
break;
default:
/* clean up the resources and exit */
@@ -2416,12 +2521,12 @@ ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi,
for (i = 0; i < vsi->tc_cfg.numtc; i++)
max_txqs[i] = pf->num_lan_tx;
- ret = ice_cfg_vsi_lan(vsi->port_info, vsi->idx, vsi->tc_cfg.ena_tc,
- max_txqs);
- if (ret) {
+ status = ice_cfg_vsi_lan(vsi->port_info, vsi->idx, vsi->tc_cfg.ena_tc,
+ max_txqs);
+ if (status) {
dev_err(&pf->pdev->dev,
"VSI %d failed lan queue config, error %d\n",
- vsi->vsi_num, ret);
+ vsi->vsi_num, status);
goto unroll_vector_base;
}
@@ -2430,19 +2535,28 @@ ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi,
* out PAUSE or PFC frames. If enabled, FW can still send FC frames.
* The rule is added once for PF VSI in order to create appropriate
* recipe, since VSI/VSI list is ignored with drop action...
+ * Also add rules to handle LLDP Tx and Rx packets. Tx LLDP packets
+ * need to be dropped so that VFs cannot send LLDP packets to reconfig
+ * DCB settings in the HW. Also, if the FW DCBX engine is not running
+ * then Rx LLDP packets need to be redirected up the stack.
*/
- if (vsi->type == ICE_VSI_PF)
+ if (vsi->type == ICE_VSI_PF) {
ice_vsi_add_rem_eth_mac(vsi, true);
+ /* Tx LLDP packets */
+ ice_cfg_sw_lldp(vsi, true, true);
+
+ /* Rx LLDP packets */
+ if (!test_bit(ICE_FLAG_ENABLE_FW_LLDP, pf->flags))
+ ice_cfg_sw_lldp(vsi, false, true);
+ }
+
return vsi;
unroll_vector_base:
/* reclaim SW interrupts back to the common pool */
- ice_free_res(pf->sw_irq_tracker, vsi->sw_base_vector, vsi->idx);
+ ice_free_res(pf->irq_tracker, vsi->base_vector, vsi->idx);
pf->num_avail_sw_msix += vsi->num_q_vectors;
- /* reclaim HW interrupt back to the common pool */
- ice_free_res(pf->hw_irq_tracker, vsi->hw_base_vector, vsi->idx);
- pf->num_avail_hw_msix += vsi->num_q_vectors;
unroll_alloc_q_vector:
ice_vsi_free_q_vectors(vsi);
unroll_vsi_init:
@@ -2463,17 +2577,17 @@ unroll_get_qs:
static void ice_vsi_release_msix(struct ice_vsi *vsi)
{
struct ice_pf *pf = vsi->back;
- u16 vector = vsi->hw_base_vector;
struct ice_hw *hw = &pf->hw;
u32 txq = 0;
u32 rxq = 0;
int i, q;
- for (i = 0; i < vsi->num_q_vectors; i++, vector++) {
+ for (i = 0; i < vsi->num_q_vectors; i++) {
struct ice_q_vector *q_vector = vsi->q_vectors[i];
+ u16 reg_idx = q_vector->reg_idx;
- wr32(hw, GLINT_ITR(ICE_IDX_ITR0, vector), 0);
- wr32(hw, GLINT_ITR(ICE_IDX_ITR1, vector), 0);
+ wr32(hw, GLINT_ITR(ICE_IDX_ITR0, reg_idx), 0);
+ wr32(hw, GLINT_ITR(ICE_IDX_ITR1, reg_idx), 0);
for (q = 0; q < q_vector->num_ring_tx; q++) {
wr32(hw, QINT_TQCTL(vsi->txq_map[txq]), 0);
txq++;
@@ -2495,7 +2609,7 @@ static void ice_vsi_release_msix(struct ice_vsi *vsi)
void ice_vsi_free_irq(struct ice_vsi *vsi)
{
struct ice_pf *pf = vsi->back;
- int base = vsi->sw_base_vector;
+ int base = vsi->base_vector;
if (test_bit(ICE_FLAG_MSIX_ENA, pf->flags)) {
int i;
@@ -2591,11 +2705,11 @@ int ice_free_res(struct ice_res_tracker *res, u16 index, u16 id)
int count = 0;
int i;
- if (!res || index >= res->num_entries)
+ if (!res || index >= res->end)
return -EINVAL;
id |= ICE_RES_VALID_BIT;
- for (i = index; i < res->num_entries && res->list[i] == id; i++) {
+ for (i = index; i < res->end && res->list[i] == id; i++) {
res->list[i] = 0;
count++;
}
@@ -2613,10 +2727,9 @@ int ice_free_res(struct ice_res_tracker *res, u16 index, u16 id)
*/
static int ice_search_res(struct ice_res_tracker *res, u16 needed, u16 id)
{
- int start = res->search_hint;
- int end = start;
+ int start = 0, end = 0;
- if ((start + needed) > res->num_entries)
+ if (needed > res->end)
return -ENOMEM;
id |= ICE_RES_VALID_BIT;
@@ -2625,7 +2738,7 @@ static int ice_search_res(struct ice_res_tracker *res, u16 needed, u16 id)
/* skip already allocated entries */
if (res->list[end++] & ICE_RES_VALID_BIT) {
start = end;
- if ((start + needed) > res->num_entries)
+ if ((start + needed) > res->end)
break;
}
@@ -2636,13 +2749,9 @@ static int ice_search_res(struct ice_res_tracker *res, u16 needed, u16 id)
while (i != end)
res->list[i++] = id;
- if (end == res->num_entries)
- end = 0;
-
- res->search_hint = end;
return start;
}
- } while (1);
+ } while (end < res->end);
return -ENOMEM;
}
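With the search hint removed, ice_search_res() is a plain first-fit scan from index zero. Below is a self-contained model of that scan, assuming entries carrying a VALID bit are taken and a run of `needed` free slots is claimed for `id` (all names illustrative).

#include <stdio.h>

#define RES_VALID_BIT 0x8000

static int res_search(unsigned short *list, int size, int needed, unsigned short id)
{
        int start = 0, i = 0;

        if (needed > size)
                return -1;

        id |= RES_VALID_BIT;
        do {
                /* skip already allocated entries */
                if (list[i++] & RES_VALID_BIT) {
                        start = i;
                        if (start + needed > size)
                                break;
                } else if (i - start == needed) {
                        int j;

                        for (j = start; j < i; j++)
                                list[j] = id;
                        return start;
                }
        } while (i < size);

        return -1;
}

int main(void)
{
        unsigned short list[8] = { RES_VALID_BIT | 1, 0, 0,
                                   RES_VALID_BIT | 1, 0, 0, 0, 0 };

        printf("%d\n", res_search(list, 8, 3, 2));      /* first 3-slot run starts at 4 */
        return 0;
}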
@@ -2654,16 +2763,11 @@ static int ice_search_res(struct ice_res_tracker *res, u16 needed, u16 id)
* @needed: size of the block needed
* @id: identifier to track owner
*
- * Returns the base item index of the block, or -ENOMEM for error
- * The search_hint trick and lack of advanced fit-finding only works
- * because we're highly likely to have all the same sized requests.
- * Linear search time and any fragmentation should be minimal.
+ * Returns the base item index of the block, or negative for error
*/
int
ice_get_res(struct ice_pf *pf, struct ice_res_tracker *res, u16 needed, u16 id)
{
- int ret;
-
if (!res || !pf)
return -EINVAL;
@@ -2674,16 +2778,7 @@ ice_get_res(struct ice_pf *pf, struct ice_res_tracker *res, u16 needed, u16 id)
return -EINVAL;
}
- /* search based on search_hint */
- ret = ice_search_res(res, needed, id);
-
- if (ret < 0) {
- /* previous search failed. Reset search hint and try again */
- res->search_hint = 0;
- ret = ice_search_res(res, needed, id);
- }
-
- return ret;
+ return ice_search_res(res, needed, id);
}
/**
@@ -2692,7 +2787,7 @@ ice_get_res(struct ice_pf *pf, struct ice_res_tracker *res, u16 needed, u16 id)
*/
void ice_vsi_dis_irq(struct ice_vsi *vsi)
{
- int base = vsi->sw_base_vector;
+ int base = vsi->base_vector;
struct ice_pf *pf = vsi->back;
struct ice_hw *hw = &pf->hw;
u32 val;
@@ -2738,6 +2833,21 @@ void ice_vsi_dis_irq(struct ice_vsi *vsi)
}
/**
+ * ice_napi_del - Remove NAPI handler for the VSI
+ * @vsi: VSI for which NAPI handler is to be removed
+ */
+void ice_napi_del(struct ice_vsi *vsi)
+{
+ int v_idx;
+
+ if (!vsi->netdev)
+ return;
+
+ ice_for_each_q_vector(vsi, v_idx)
+ netif_napi_del(&vsi->q_vectors[v_idx]->napi);
+}
+
+/**
* ice_vsi_release - Delete a VSI and free its resources
* @vsi: the VSI being removed
*
@@ -2745,60 +2855,61 @@ void ice_vsi_dis_irq(struct ice_vsi *vsi)
*/
int ice_vsi_release(struct ice_vsi *vsi)
{
- struct ice_vf *vf = NULL;
struct ice_pf *pf;
if (!vsi->back)
return -ENODEV;
pf = vsi->back;
- if (vsi->type == ICE_VSI_VF)
- vf = &pf->vf[vsi->vf_id];
- /* do not unregister and free netdevs while driver is in the reset
- * recovery pending state. Since reset/rebuild happens through PF
- * service task workqueue, its not a good idea to unregister netdev
- * that is associated to the PF that is running the work queue items
- * currently. This is done to avoid check_flush_dependency() warning
- * on this wq
+ /* do not unregister while driver is in the reset recovery pending
+ * state. Since reset/rebuild happens through PF service task workqueue,
+ * it's not a good idea to unregister netdev that is associated to the
+ * PF that is running the work queue items currently. This is done to
+ * avoid check_flush_dependency() warning on this wq
*/
- if (vsi->netdev && !ice_is_reset_in_progress(pf->state)) {
- ice_napi_del(vsi);
+ if (vsi->netdev && !ice_is_reset_in_progress(pf->state))
unregister_netdev(vsi->netdev);
- free_netdev(vsi->netdev);
- vsi->netdev = NULL;
- }
if (test_bit(ICE_FLAG_RSS_ENA, pf->flags))
ice_rss_clean(vsi);
/* Disable VSI and free resources */
- ice_vsi_dis_irq(vsi);
+ if (vsi->type != ICE_VSI_LB)
+ ice_vsi_dis_irq(vsi);
ice_vsi_close(vsi);
- /* reclaim interrupt vectors back to PF */
+ /* SR-IOV determines needed MSIX resources all at once instead of per
+ * VSI since when VFs are spawned we know how many VFs there are and how
+ * many interrupts each VF needs. SR-IOV MSIX resources are also
+ * cleared in the same manner.
+ */
if (vsi->type != ICE_VSI_VF) {
/* reclaim SW interrupts back to the common pool */
- ice_free_res(pf->sw_irq_tracker, vsi->sw_base_vector, vsi->idx);
+ ice_free_res(pf->irq_tracker, vsi->base_vector, vsi->idx);
pf->num_avail_sw_msix += vsi->num_q_vectors;
- /* reclaim HW interrupts back to the common pool */
- ice_free_res(pf->hw_irq_tracker, vsi->hw_base_vector, vsi->idx);
- pf->num_avail_hw_msix += vsi->num_q_vectors;
- } else if (test_bit(ICE_VF_STATE_CFG_INTR, vf->vf_states)) {
- /* Reclaim VF resources back only while freeing all VFs or
- * vector reassignment is requested
- */
- ice_free_res(pf->hw_irq_tracker, vf->first_vector_idx,
- vsi->idx);
- pf->num_avail_hw_msix += pf->num_vf_msix;
}
- if (vsi->type == ICE_VSI_PF)
+ if (vsi->type == ICE_VSI_PF) {
ice_vsi_add_rem_eth_mac(vsi, false);
+ ice_cfg_sw_lldp(vsi, true, false);
+ /* The Rx rule will only exist to remove if the LLDP FW
+ * engine is currently stopped
+ */
+ if (!test_bit(ICE_FLAG_ENABLE_FW_LLDP, pf->flags))
+ ice_cfg_sw_lldp(vsi, false, false);
+ }
ice_remove_vsi_fltr(&pf->hw, vsi->idx);
ice_rm_vsi_lan_cfg(vsi->port_info, vsi->idx);
ice_vsi_delete(vsi);
ice_vsi_free_q_vectors(vsi);
+
+ /* make sure unregister_netdev() was called by checking __ICE_DOWN */
+ if (vsi->netdev && test_bit(__ICE_DOWN, vsi->state)) {
+ free_netdev(vsi->netdev);
+ vsi->netdev = NULL;
+ }
+
ice_vsi_clear_rings(vsi);
ice_vsi_put_qs(vsi);
@@ -2825,6 +2936,7 @@ int ice_vsi_rebuild(struct ice_vsi *vsi)
{
u16 max_txqs[ICE_MAX_TRAFFIC_CLASS] = { 0 };
struct ice_vf *vf = NULL;
+ enum ice_status status;
struct ice_pf *pf;
int ret, i;
@@ -2838,24 +2950,17 @@ int ice_vsi_rebuild(struct ice_vsi *vsi)
ice_rm_vsi_lan_cfg(vsi->port_info, vsi->idx);
ice_vsi_free_q_vectors(vsi);
+ /* SR-IOV determines needed MSIX resources all at once instead of per
+ * VSI since when VFs are spawned we know how many VFs there are and how
+ * many interrupts each VF needs. SR-IOV MSIX resources are also
+ * cleared in the same manner.
+ */
if (vsi->type != ICE_VSI_VF) {
/* reclaim SW interrupts back to the common pool */
- ice_free_res(pf->sw_irq_tracker, vsi->sw_base_vector, vsi->idx);
+ ice_free_res(pf->irq_tracker, vsi->base_vector, vsi->idx);
pf->num_avail_sw_msix += vsi->num_q_vectors;
- vsi->sw_base_vector = 0;
- /* reclaim HW interrupts back to the common pool */
- ice_free_res(pf->hw_irq_tracker, vsi->hw_base_vector,
- vsi->idx);
- pf->num_avail_hw_msix += vsi->num_q_vectors;
- } else {
- /* Reclaim VF resources back to the common pool for reset and
- * and rebuild, with vector reassignment
- */
- ice_free_res(pf->hw_irq_tracker, vf->first_vector_idx,
- vsi->idx);
- pf->num_avail_hw_msix += pf->num_vf_msix;
+ vsi->base_vector = 0;
}
- vsi->hw_base_vector = 0;
ice_vsi_clear_rings(vsi);
ice_vsi_free_arrays(vsi);
@@ -2881,10 +2986,6 @@ int ice_vsi_rebuild(struct ice_vsi *vsi)
if (ret)
goto err_rings;
- ret = ice_vsi_setup_vector_base(vsi);
- if (ret)
- goto err_vectors;
-
ret = ice_vsi_set_q_vectors_reg_idx(vsi);
if (ret)
goto err_vectors;
@@ -2929,12 +3030,12 @@ int ice_vsi_rebuild(struct ice_vsi *vsi)
for (i = 0; i < vsi->tc_cfg.numtc; i++)
max_txqs[i] = pf->num_lan_tx;
- ret = ice_cfg_vsi_lan(vsi->port_info, vsi->idx, vsi->tc_cfg.ena_tc,
- max_txqs);
- if (ret) {
+ status = ice_cfg_vsi_lan(vsi->port_info, vsi->idx, vsi->tc_cfg.ena_tc,
+ max_txqs);
+ if (status) {
dev_err(&pf->pdev->dev,
"VSI %d failed lan queue config, error %d\n",
- vsi->vsi_num, ret);
+ vsi->vsi_num, status);
goto err_vectors;
}
return 0;
@@ -2956,7 +3057,7 @@ err_vsi:
/**
* ice_is_reset_in_progress - check for a reset in progress
- * @state: pf state field
+ * @state: PF state field
*/
bool ice_is_reset_in_progress(unsigned long *state)
{
diff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h
index a91d3553cc89..6e43ef03bfc3 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.h
+++ b/drivers/net/ethernet/intel/ice/ice_lib.h
@@ -19,6 +19,14 @@ int ice_vsi_cfg_lan_txqs(struct ice_vsi *vsi);
void ice_vsi_cfg_msix(struct ice_vsi *vsi);
+#ifdef CONFIG_PCI_IOV
+void
+ice_cfg_txq_interrupt(struct ice_vsi *vsi, u16 txq, u16 msix_idx, u16 itr_idx);
+
+void
+ice_cfg_rxq_interrupt(struct ice_vsi *vsi, u16 rxq, u16 msix_idx, u16 itr_idx);
+#endif /* CONFIG_PCI_IOV */
+
int ice_vsi_add_vlan(struct ice_vsi *vsi, u16 vid);
int ice_vsi_kill_vlan(struct ice_vsi *vsi, u16 vid);
@@ -37,6 +45,8 @@ ice_vsi_stop_lan_tx_rings(struct ice_vsi *vsi, enum ice_disq_rst_src rst_src,
int ice_cfg_vlan_pruning(struct ice_vsi *vsi, bool ena, bool vlan_promisc);
+void ice_cfg_sw_lldp(struct ice_vsi *vsi, bool tx, bool create);
+
void ice_vsi_delete(struct ice_vsi *vsi);
int ice_vsi_clear(struct ice_vsi *vsi);
@@ -49,6 +59,8 @@ struct ice_vsi *
ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi,
enum ice_vsi_type type, u16 vf_id);
+void ice_napi_del(struct ice_vsi *vsi);
+
int ice_vsi_release(struct ice_vsi *vsi);
void ice_vsi_close(struct ice_vsi *vsi);
@@ -64,6 +76,8 @@ bool ice_is_reset_in_progress(unsigned long *state);
void ice_vsi_free_q_vectors(struct ice_vsi *vsi);
+void ice_trigger_sw_intr(struct ice_hw *hw, struct ice_q_vector *q_vector);
+
void ice_vsi_put_qs(struct ice_vsi *vsi);
#ifdef CONFIG_DCB
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 7843abf4d44d..28ec0d57941d 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -61,9 +61,10 @@ static u32 ice_get_tx_pending(struct ice_ring *ring)
static void ice_check_for_hang_subtask(struct ice_pf *pf)
{
struct ice_vsi *vsi = NULL;
+ struct ice_hw *hw;
unsigned int i;
- u32 v, v_idx;
int packets;
+ u32 v;
ice_for_each_vsi(pf, v)
if (pf->vsi[v] && pf->vsi[v]->type == ICE_VSI_PF) {
@@ -77,12 +78,12 @@ static void ice_check_for_hang_subtask(struct ice_pf *pf)
if (!(vsi->netdev && netif_carrier_ok(vsi->netdev)))
return;
+ hw = &vsi->back->hw;
+
for (i = 0; i < vsi->num_txq; i++) {
struct ice_ring *tx_ring = vsi->tx_rings[i];
if (tx_ring && tx_ring->desc) {
- int itr = ICE_ITR_NONE;
-
/* If packet counter has not changed the queue is
* likely stalled, so force an interrupt for this
* queue.
@@ -93,12 +94,7 @@ static void ice_check_for_hang_subtask(struct ice_pf *pf)
packets = tx_ring->stats.pkts & INT_MAX;
if (tx_ring->tx_stats.prev_pkt == packets) {
/* Trigger sw interrupt to revive the queue */
- v_idx = tx_ring->q_vector->v_idx;
- wr32(&vsi->back->hw,
- GLINT_DYN_CTL(vsi->hw_base_vector + v_idx),
- (itr << GLINT_DYN_CTL_ITR_INDX_S) |
- GLINT_DYN_CTL_SWINT_TRIG_M |
- GLINT_DYN_CTL_INTENA_MSK_M);
+ ice_trigger_sw_intr(hw, tx_ring->q_vector);
continue;
}
@@ -113,6 +109,67 @@ static void ice_check_for_hang_subtask(struct ice_pf *pf)
}
/**
+ * ice_init_mac_fltr - Set initial MAC filters
+ * @pf: board private structure
+ *
+ * Set initial set of MAC filters for PF VSI; configure filters for permanent
+ * address and broadcast address. If an error is encountered, netdevice will be
+ * unregistered.
+ */
+static int ice_init_mac_fltr(struct ice_pf *pf)
+{
+ LIST_HEAD(tmp_add_list);
+ u8 broadcast[ETH_ALEN];
+ struct ice_vsi *vsi;
+ int status;
+
+ vsi = ice_find_vsi_by_type(pf, ICE_VSI_PF);
+ if (!vsi)
+ return -EINVAL;
+
+ /* To add a MAC filter, first add the MAC to a list and then
+ * pass the list to ice_add_mac.
+ */
+
+ /* Add a unicast MAC filter so the VSI can get its packets */
+ status = ice_add_mac_to_list(vsi, &tmp_add_list,
+ vsi->port_info->mac.perm_addr);
+ if (status)
+ goto unregister;
+
+ /* VSI needs to receive broadcast traffic, so add the broadcast
+ * MAC address to the list as well.
+ */
+ eth_broadcast_addr(broadcast);
+ status = ice_add_mac_to_list(vsi, &tmp_add_list, broadcast);
+ if (status)
+ goto free_mac_list;
+
+ /* Program MAC filters for entries in tmp_add_list */
+ status = ice_add_mac(&pf->hw, &tmp_add_list);
+ if (status)
+ status = -ENOMEM;
+
+free_mac_list:
+ ice_free_fltr_list(&pf->pdev->dev, &tmp_add_list);
+
+unregister:
+ /* We aren't useful with no MAC filters, so unregister if we
+ * had an error
+ */
+ if (status && vsi->netdev->reg_state == NETREG_REGISTERED) {
+ dev_err(&pf->pdev->dev,
+ "Could not add MAC filters error %d. Unregistering device\n",
+ status);
+ unregister_netdev(vsi->netdev);
+ free_netdev(vsi->netdev);
+ vsi->netdev = NULL;
+ }
+
+ return status;
+}
+
+/**
* ice_add_mac_to_sync_list - creates list of MAC addresses to be synced
* @netdev: the net device on which the sync is happening
* @addr: MAC address to sync
@@ -567,7 +624,11 @@ static void ice_reset_subtask(struct ice_pf *pf)
*/
void ice_print_link_msg(struct ice_vsi *vsi, bool isup)
{
+ struct ice_aqc_get_phy_caps_data *caps;
+ enum ice_status status;
+ const char *fec_req;
const char *speed;
+ const char *fec;
const char *fc;
if (!vsi)
@@ -584,6 +645,12 @@ void ice_print_link_msg(struct ice_vsi *vsi, bool isup)
}
switch (vsi->port_info->phy.link_info.link_speed) {
+ case ICE_AQ_LINK_SPEED_100GB:
+ speed = "100 G";
+ break;
+ case ICE_AQ_LINK_SPEED_50GB:
+ speed = "50 G";
+ break;
case ICE_AQ_LINK_SPEED_40GB:
speed = "40 G";
break;
@@ -615,13 +682,13 @@ void ice_print_link_msg(struct ice_vsi *vsi, bool isup)
switch (vsi->port_info->fc.current_mode) {
case ICE_FC_FULL:
- fc = "RX/TX";
+ fc = "Rx/Tx";
break;
case ICE_FC_TX_PAUSE:
- fc = "TX";
+ fc = "Tx";
break;
case ICE_FC_RX_PAUSE:
- fc = "RX";
+ fc = "Rx";
break;
case ICE_FC_NONE:
fc = "None";
@@ -631,8 +698,47 @@ void ice_print_link_msg(struct ice_vsi *vsi, bool isup)
break;
}
- netdev_info(vsi->netdev, "NIC Link is up %sbps, Flow Control: %s\n",
- speed, fc);
+ /* Get FEC mode based on negotiated link info */
+ switch (vsi->port_info->phy.link_info.fec_info) {
+ case ICE_AQ_LINK_25G_RS_528_FEC_EN:
+ /* fall through */
+ case ICE_AQ_LINK_25G_RS_544_FEC_EN:
+ fec = "RS-FEC";
+ break;
+ case ICE_AQ_LINK_25G_KR_FEC_EN:
+ fec = "FC-FEC/BASE-R";
+ break;
+ default:
+ fec = "NONE";
+ break;
+ }
+
+ /* Get FEC mode requested based on PHY caps last SW configuration */
+ caps = devm_kzalloc(&vsi->back->pdev->dev, sizeof(*caps), GFP_KERNEL);
+ if (!caps) {
+ fec_req = "Unknown";
+ goto done;
+ }
+
+ status = ice_aq_get_phy_caps(vsi->port_info, false,
+ ICE_AQC_REPORT_SW_CFG, caps, NULL);
+ if (status)
+ netdev_info(vsi->netdev, "Get phy capability failed.\n");
+
+ if (caps->link_fec_options & ICE_AQC_PHY_FEC_25G_RS_528_REQ ||
+ caps->link_fec_options & ICE_AQC_PHY_FEC_25G_RS_544_REQ)
+ fec_req = "RS-FEC";
+ else if (caps->link_fec_options & ICE_AQC_PHY_FEC_10G_KR_40G_KR4_REQ ||
+ caps->link_fec_options & ICE_AQC_PHY_FEC_25G_KR_REQ)
+ fec_req = "FC-FEC/BASE-R";
+ else
+ fec_req = "NONE";
+
+ devm_kfree(&vsi->back->pdev->dev, caps);
+
+done:
+ netdev_info(vsi->netdev, "NIC Link is up %sbps, Requested FEC: %s, FEC: %s, Flow Control: %s\n",
+ speed, fec_req, fec, fc);
}
/**
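With the FEC fields added, a link-up message from this function reads, for example (illustrative values):

        NIC Link is up 25 Gbps, Requested FEC: RS-FEC, FEC: RS-FEC, Flow Control: Rx/Tx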
@@ -664,7 +770,7 @@ static void ice_vsi_link_event(struct ice_vsi *vsi, bool link_up)
/**
* ice_link_event - process the link event
- * @pf: pf that the link event is associated with
+ * @pf: PF that the link event is associated with
* @pi: port_info for the port that the link event is associated with
* @link_up: true if the physical link is up and false if it is down
* @link_speed: current link speed received from the link event
@@ -774,7 +880,7 @@ static int ice_init_link_events(struct ice_port_info *pi)
/**
* ice_handle_link_event - handle link event via ARQ
- * @pf: pf that the link event is associated with
+ * @pf: PF that the link event is associated with
* @event: event structure containing link status info
*/
static int
@@ -1161,16 +1267,16 @@ static void ice_handle_mdd_event(struct ice_pf *pf)
}
}
- /* see if one of the VFs needs to be reset */
- for (i = 0; i < pf->num_alloc_vfs && mdd_detected; i++) {
+ /* check to see if one of the VFs caused the MDD */
+ for (i = 0; i < pf->num_alloc_vfs; i++) {
struct ice_vf *vf = &pf->vf[i];
- mdd_detected = false;
+ bool vf_mdd_detected = false;
reg = rd32(hw, VP_MDET_TX_PQM(i));
if (reg & VP_MDET_TX_PQM_VALID_M) {
wr32(hw, VP_MDET_TX_PQM(i), 0xFFFF);
- mdd_detected = true;
+ vf_mdd_detected = true;
dev_info(&pf->pdev->dev, "TX driver issue detected on VF %d\n",
i);
}
@@ -1178,7 +1284,7 @@ static void ice_handle_mdd_event(struct ice_pf *pf)
reg = rd32(hw, VP_MDET_TX_TCLAN(i));
if (reg & VP_MDET_TX_TCLAN_VALID_M) {
wr32(hw, VP_MDET_TX_TCLAN(i), 0xFFFF);
- mdd_detected = true;
+ vf_mdd_detected = true;
dev_info(&pf->pdev->dev, "TX driver issue detected on VF %d\n",
i);
}
@@ -1186,7 +1292,7 @@ static void ice_handle_mdd_event(struct ice_pf *pf)
reg = rd32(hw, VP_MDET_TX_TDPU(i));
if (reg & VP_MDET_TX_TDPU_VALID_M) {
wr32(hw, VP_MDET_TX_TDPU(i), 0xFFFF);
- mdd_detected = true;
+ vf_mdd_detected = true;
dev_info(&pf->pdev->dev, "TX driver issue detected on VF %d\n",
i);
}
@@ -1194,19 +1300,18 @@ static void ice_handle_mdd_event(struct ice_pf *pf)
reg = rd32(hw, VP_MDET_RX(i));
if (reg & VP_MDET_RX_VALID_M) {
wr32(hw, VP_MDET_RX(i), 0xFFFF);
- mdd_detected = true;
+ vf_mdd_detected = true;
dev_info(&pf->pdev->dev, "RX driver issue detected on VF %d\n",
i);
}
- if (mdd_detected) {
+ if (vf_mdd_detected) {
vf->num_mdd_events++;
- dev_info(&pf->pdev->dev,
- "Use PF Control I/F to re-enable the VF\n");
- set_bit(ICE_VF_STATE_DIS, vf->vf_states);
+ if (vf->num_mdd_events > 1)
+ dev_info(&pf->pdev->dev, "VF %d has had %llu MDD events since last boot\n",
+ i, vf->num_mdd_events);
}
}
-
}
/**
@@ -1327,7 +1432,7 @@ static int ice_vsi_req_irq_msix(struct ice_vsi *vsi, char *basename)
{
int q_vectors = vsi->num_q_vectors;
struct ice_pf *pf = vsi->back;
- int base = vsi->sw_base_vector;
+ int base = vsi->base_vector;
int rx_int_idx = 0;
int tx_int_idx = 0;
int vector, err;
@@ -1408,7 +1513,7 @@ static void ice_ena_misc_vector(struct ice_pf *pf)
wr32(hw, PFINT_OICR_ENA, val);
/* SW_ITR_IDX = 0, but don't change INTENA */
- wr32(hw, GLINT_DYN_CTL(pf->hw_oicr_idx),
+ wr32(hw, GLINT_DYN_CTL(pf->oicr_idx),
GLINT_DYN_CTL_SW_ITR_INDX_M | GLINT_DYN_CTL_INTENA_MSK_M);
}
@@ -1430,6 +1535,11 @@ static irqreturn_t ice_misc_intr(int __always_unused irq, void *data)
oicr = rd32(hw, PFINT_OICR);
ena_mask = rd32(hw, PFINT_OICR_ENA);
+ if (oicr & PFINT_OICR_SWINT_M) {
+ ena_mask &= ~PFINT_OICR_SWINT_M;
+ pf->sw_int_count++;
+ }
+
if (oicr & PFINT_OICR_MAL_DETECT_M) {
ena_mask &= ~PFINT_OICR_MAL_DETECT_M;
set_bit(__ICE_MDD_EVENT_PENDING, pf->state);
@@ -1556,15 +1666,13 @@ static void ice_free_irq_msix_misc(struct ice_pf *pf)
ice_flush(hw);
if (test_bit(ICE_FLAG_MSIX_ENA, pf->flags) && pf->msix_entries) {
- synchronize_irq(pf->msix_entries[pf->sw_oicr_idx].vector);
+ synchronize_irq(pf->msix_entries[pf->oicr_idx].vector);
devm_free_irq(&pf->pdev->dev,
- pf->msix_entries[pf->sw_oicr_idx].vector, pf);
+ pf->msix_entries[pf->oicr_idx].vector, pf);
}
pf->num_avail_sw_msix += 1;
- ice_free_res(pf->sw_irq_tracker, pf->sw_oicr_idx, ICE_RES_MISC_VEC_ID);
- pf->num_avail_hw_msix += 1;
- ice_free_res(pf->hw_irq_tracker, pf->hw_oicr_idx, ICE_RES_MISC_VEC_ID);
+ ice_free_res(pf->irq_tracker, pf->oicr_idx, ICE_RES_MISC_VEC_ID);
}
/**
@@ -1618,43 +1726,31 @@ static int ice_req_irq_msix_misc(struct ice_pf *pf)
if (ice_is_reset_in_progress(pf->state))
goto skip_req_irq;
- /* reserve one vector in sw_irq_tracker for misc interrupts */
- oicr_idx = ice_get_res(pf, pf->sw_irq_tracker, 1, ICE_RES_MISC_VEC_ID);
+ /* reserve one vector in irq_tracker for misc interrupts */
+ oicr_idx = ice_get_res(pf, pf->irq_tracker, 1, ICE_RES_MISC_VEC_ID);
if (oicr_idx < 0)
return oicr_idx;
pf->num_avail_sw_msix -= 1;
- pf->sw_oicr_idx = oicr_idx;
-
- /* reserve one vector in hw_irq_tracker for misc interrupts */
- oicr_idx = ice_get_res(pf, pf->hw_irq_tracker, 1, ICE_RES_MISC_VEC_ID);
- if (oicr_idx < 0) {
- ice_free_res(pf->sw_irq_tracker, 1, ICE_RES_MISC_VEC_ID);
- pf->num_avail_sw_msix += 1;
- return oicr_idx;
- }
- pf->num_avail_hw_msix -= 1;
- pf->hw_oicr_idx = oicr_idx;
+ pf->oicr_idx = oicr_idx;
err = devm_request_irq(&pf->pdev->dev,
- pf->msix_entries[pf->sw_oicr_idx].vector,
+ pf->msix_entries[pf->oicr_idx].vector,
ice_misc_intr, 0, pf->int_name, pf);
if (err) {
dev_err(&pf->pdev->dev,
"devm_request_irq for %s failed: %d\n",
pf->int_name, err);
- ice_free_res(pf->sw_irq_tracker, 1, ICE_RES_MISC_VEC_ID);
+ ice_free_res(pf->irq_tracker, 1, ICE_RES_MISC_VEC_ID);
pf->num_avail_sw_msix += 1;
- ice_free_res(pf->hw_irq_tracker, 1, ICE_RES_MISC_VEC_ID);
- pf->num_avail_hw_msix += 1;
return err;
}
skip_req_irq:
ice_ena_misc_vector(pf);
- ice_ena_ctrlq_interrupts(hw, pf->hw_oicr_idx);
- wr32(hw, GLINT_ITR(ICE_RX_ITR, pf->hw_oicr_idx),
+ ice_ena_ctrlq_interrupts(hw, pf->oicr_idx);
+ wr32(hw, GLINT_ITR(ICE_RX_ITR, pf->oicr_idx),
ITR_REG_ALIGN(ICE_ITR_8K) >> ICE_ITR_GRAN_S);
ice_flush(hw);
@@ -1664,21 +1760,6 @@ skip_req_irq:
}
/**
- * ice_napi_del - Remove NAPI handler for the VSI
- * @vsi: VSI for which NAPI handler is to be removed
- */
-void ice_napi_del(struct ice_vsi *vsi)
-{
- int v_idx;
-
- if (!vsi->netdev)
- return;
-
- ice_for_each_q_vector(vsi, v_idx)
- netif_napi_del(&vsi->q_vectors[v_idx]->napi);
-}
-
-/**
* ice_napi_add - register NAPI handler for the VSI
* @vsi: VSI for which NAPI handler is to be registered
*
@@ -1803,8 +1884,8 @@ void ice_fill_rss_lut(u8 *lut, u16 rss_table_size, u16 rss_size)
* @pf: board private structure
* @pi: pointer to the port_info instance
*
- * Returns pointer to the successfully allocated VSI sw struct on success,
- * otherwise returns NULL on failure.
+ * Returns pointer to the successfully allocated VSI software struct
+ * on success, otherwise returns NULL on failure.
*/
static struct ice_vsi *
ice_pf_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi)
@@ -1813,6 +1894,20 @@ ice_pf_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi)
}
/**
+ * ice_lb_vsi_setup - Set up a loopback VSI
+ * @pf: board private structure
+ * @pi: pointer to the port_info instance
+ *
+ * Returns pointer to the successfully allocated VSI software struct
+ * on success, otherwise returns NULL on failure.
+ */
+struct ice_vsi *
+ice_lb_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi)
+{
+ return ice_vsi_setup(pf, pi, ICE_VSI_LB, ICE_INVAL_VFID);
+}
+
+/**
* ice_vlan_rx_add_vid - Add a VLAN ID filter to HW offload
* @netdev: network interface to be adjusted
* @proto: unused protocol
@@ -1900,8 +1995,6 @@ ice_vlan_rx_kill_vid(struct net_device *netdev, __always_unused __be16 proto,
*/
static int ice_setup_pf_sw(struct ice_pf *pf)
{
- LIST_HEAD(tmp_add_list);
- u8 broadcast[ETH_ALEN];
struct ice_vsi *vsi;
int status = 0;
@@ -1926,38 +2019,12 @@ static int ice_setup_pf_sw(struct ice_pf *pf)
*/
ice_napi_add(vsi);
- /* To add a MAC filter, first add the MAC to a list and then
- * pass the list to ice_add_mac.
- */
-
- /* Add a unicast MAC filter so the VSI can get its packets */
- status = ice_add_mac_to_list(vsi, &tmp_add_list,
- vsi->port_info->mac.perm_addr);
+ status = ice_init_mac_fltr(pf);
if (status)
goto unroll_napi_add;
- /* VSI needs to receive broadcast traffic, so add the broadcast
- * MAC address to the list as well.
- */
- eth_broadcast_addr(broadcast);
- status = ice_add_mac_to_list(vsi, &tmp_add_list, broadcast);
- if (status)
- goto free_mac_list;
-
- /* program MAC filters for entries in tmp_add_list */
- status = ice_add_mac(&pf->hw, &tmp_add_list);
- if (status) {
- dev_err(&pf->pdev->dev, "Could not add MAC filters\n");
- status = -ENOMEM;
- goto free_mac_list;
- }
-
- ice_free_fltr_list(&pf->pdev->dev, &tmp_add_list);
return status;
-free_mac_list:
- ice_free_fltr_list(&pf->pdev->dev, &tmp_add_list);
-
unroll_napi_add:
if (vsi) {
ice_napi_del(vsi);
@@ -2149,14 +2216,9 @@ static void ice_clear_interrupt_scheme(struct ice_pf *pf)
if (test_bit(ICE_FLAG_MSIX_ENA, pf->flags))
ice_dis_msix(pf);
- if (pf->sw_irq_tracker) {
- devm_kfree(&pf->pdev->dev, pf->sw_irq_tracker);
- pf->sw_irq_tracker = NULL;
- }
-
- if (pf->hw_irq_tracker) {
- devm_kfree(&pf->pdev->dev, pf->hw_irq_tracker);
- pf->hw_irq_tracker = NULL;
+ if (pf->irq_tracker) {
+ devm_kfree(&pf->pdev->dev, pf->irq_tracker);
+ pf->irq_tracker = NULL;
}
}
@@ -2166,7 +2228,7 @@ static void ice_clear_interrupt_scheme(struct ice_pf *pf)
*/
static int ice_init_interrupt_scheme(struct ice_pf *pf)
{
- int vectors = 0, hw_vectors = 0;
+ int vectors;
if (test_bit(ICE_FLAG_MSIX_ENA, pf->flags))
vectors = ice_ena_msix_range(pf);
@@ -2177,31 +2239,18 @@ static int ice_init_interrupt_scheme(struct ice_pf *pf)
return vectors;
/* set up vector assignment tracking */
- pf->sw_irq_tracker =
- devm_kzalloc(&pf->pdev->dev, sizeof(*pf->sw_irq_tracker) +
+ pf->irq_tracker =
+ devm_kzalloc(&pf->pdev->dev, sizeof(*pf->irq_tracker) +
(sizeof(u16) * vectors), GFP_KERNEL);
- if (!pf->sw_irq_tracker) {
+ if (!pf->irq_tracker) {
ice_dis_msix(pf);
return -ENOMEM;
}
/* populate SW interrupts pool with number of OS granted IRQs. */
pf->num_avail_sw_msix = vectors;
- pf->sw_irq_tracker->num_entries = vectors;
-
- /* set up HW vector assignment tracking */
- hw_vectors = pf->hw.func_caps.common_cap.num_msix_vectors;
- pf->hw_irq_tracker =
- devm_kzalloc(&pf->pdev->dev, sizeof(*pf->hw_irq_tracker) +
- (sizeof(u16) * hw_vectors), GFP_KERNEL);
- if (!pf->hw_irq_tracker) {
- ice_clear_interrupt_scheme(pf);
- return -ENOMEM;
- }
-
- /* populate HW interrupts pool with number of HW supported irqs. */
- pf->num_avail_hw_msix = hw_vectors;
- pf->hw_irq_tracker->num_entries = hw_vectors;
+ pf->irq_tracker->num_entries = vectors;
+ pf->irq_tracker->end = pf->irq_tracker->num_entries;
return 0;
}
@@ -2252,7 +2301,7 @@ ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent)
if (!pf)
return -ENOMEM;
- /* set up for high or low dma */
+ /* set up for high or low DMA */
err = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
if (err)
err = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
@@ -2302,7 +2351,7 @@ ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent)
ice_init_pf(pf);
- err = ice_init_pf_dcb(pf);
+ err = ice_init_pf_dcb(pf, false);
if (err) {
clear_bit(ICE_FLAG_DCB_CAPABLE, pf->flags);
clear_bit(ICE_FLAG_DCB_ENA, pf->flags);
@@ -2368,7 +2417,7 @@ ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent)
err = ice_setup_pf_sw(pf);
if (err) {
- dev_err(dev, "probe failed due to setup pf switch:%d\n", err);
+ dev_err(dev, "probe failed due to setup PF switch:%d\n", err);
goto err_alloc_sw_unroll;
}
@@ -2625,7 +2674,7 @@ static int __init ice_module_init(void)
status = pci_register_driver(&ice_driver);
if (status) {
- pr_err("failed to register pci driver, err %d\n", status);
+ pr_err("failed to register PCI driver, err %d\n", status);
destroy_workqueue(ice_wq);
}
@@ -2725,21 +2774,21 @@ free_lists:
ice_free_fltr_list(&pf->pdev->dev, &a_mac_list);
if (err) {
- netdev_err(netdev, "can't set mac %pM. filter update failed\n",
+ netdev_err(netdev, "can't set MAC %pM. filter update failed\n",
mac);
return err;
}
/* change the netdev's MAC address */
memcpy(netdev->dev_addr, mac, netdev->addr_len);
- netdev_dbg(vsi->netdev, "updated mac address to %pM\n",
+ netdev_dbg(vsi->netdev, "updated MAC address to %pM\n",
netdev->dev_addr);
/* write new MAC address to the firmware */
flags = ICE_AQC_MAN_MAC_UPDATE_LAA_WOL;
status = ice_aq_manage_mac_write(hw, mac, flags, NULL);
if (status) {
- netdev_err(netdev, "can't set mac %pM. write to firmware failed.\n",
+ netdev_err(netdev, "can't set MAC %pM. write to firmware failed.\n",
mac);
}
return 0;
@@ -2876,6 +2925,13 @@ ice_set_features(struct net_device *netdev, netdev_features_t features)
(netdev->features & NETIF_F_HW_VLAN_CTAG_TX))
ret = ice_vsi_manage_vlan_insertion(vsi);
+ if ((features & NETIF_F_HW_VLAN_CTAG_FILTER) &&
+ !(netdev->features & NETIF_F_HW_VLAN_CTAG_FILTER))
+ ret = ice_cfg_vlan_pruning(vsi, true, false);
+ else if (!(features & NETIF_F_HW_VLAN_CTAG_FILTER) &&
+ (netdev->features & NETIF_F_HW_VLAN_CTAG_FILTER))
+ ret = ice_cfg_vlan_pruning(vsi, false, false);
+
return ret;
}
@@ -2901,7 +2957,7 @@ static int ice_vsi_vlan_setup(struct ice_vsi *vsi)
*
* Return 0 on success and negative value on error
*/
-static int ice_vsi_cfg(struct ice_vsi *vsi)
+int ice_vsi_cfg(struct ice_vsi *vsi)
{
int err;
@@ -2933,7 +2989,7 @@ static void ice_napi_enable_all(struct ice_vsi *vsi)
if (!vsi->netdev)
return;
- ice_for_each_q_vector(vsi, q_idx) {
+ ice_for_each_q_vector(vsi, q_idx) {
struct ice_q_vector *q_vector = vsi->q_vectors[q_idx];
if (q_vector->rx.ring || q_vector->tx.ring)
@@ -3456,7 +3512,7 @@ int ice_down(struct ice_vsi *vsi)
*
* Return 0 on success, negative on failure
*/
-static int ice_vsi_setup_tx_rings(struct ice_vsi *vsi)
+int ice_vsi_setup_tx_rings(struct ice_vsi *vsi)
{
int i, err = 0;
@@ -3482,7 +3538,7 @@ static int ice_vsi_setup_tx_rings(struct ice_vsi *vsi)
*
* Return 0 on success, negative on failure
*/
-static int ice_vsi_setup_rx_rings(struct ice_vsi *vsi)
+int ice_vsi_setup_rx_rings(struct ice_vsi *vsi)
{
int i, err = 0;
@@ -3658,7 +3714,7 @@ static int ice_pf_ena_all_vsi(struct ice_pf *pf, bool locked)
}
/**
- * ice_vsi_rebuild_all - rebuild all VSIs in pf
+ * ice_vsi_rebuild_all - rebuild all VSIs in PF
* @pf: the PF
*/
static int ice_vsi_rebuild_all(struct ice_pf *pf)
@@ -3728,7 +3784,7 @@ static int ice_vsi_replay_all(struct ice_pf *pf)
/**
* ice_rebuild - rebuild after reset
- * @pf: pf to rebuild
+ * @pf: PF to rebuild
*/
static void ice_rebuild(struct ice_pf *pf)
{
@@ -3740,7 +3796,7 @@ static void ice_rebuild(struct ice_pf *pf)
if (test_bit(__ICE_DOWN, pf->state))
goto clear_recovery;
- dev_dbg(dev, "rebuilding pf\n");
+ dev_dbg(dev, "rebuilding PF\n");
ret = ice_init_all_ctrlq(hw);
if (ret) {
@@ -3768,12 +3824,6 @@ static void ice_rebuild(struct ice_pf *pf)
ice_dcb_rebuild(pf);
- /* reset search_hint of irq_trackers to 0 since interrupts are
- * reclaimed and could be allocated from beginning during VSI rebuild
- */
- pf->sw_irq_tracker->search_hint = 0;
- pf->hw_irq_tracker->search_hint = 0;
-
err = ice_vsi_rebuild_all(pf);
if (err) {
dev_err(dev, "ice_vsi_rebuild_all failed\n");
@@ -3857,16 +3907,16 @@ static int ice_change_mtu(struct net_device *netdev, int new_mtu)
u8 count = 0;
if (new_mtu == netdev->mtu) {
- netdev_warn(netdev, "mtu is already %u\n", netdev->mtu);
+ netdev_warn(netdev, "MTU is already %u\n", netdev->mtu);
return 0;
}
if (new_mtu < netdev->min_mtu) {
- netdev_err(netdev, "new mtu invalid. min_mtu is %d\n",
+ netdev_err(netdev, "new MTU invalid. min_mtu is %d\n",
netdev->min_mtu);
return -EINVAL;
} else if (new_mtu > netdev->max_mtu) {
- netdev_err(netdev, "new mtu invalid. max_mtu is %d\n",
+ netdev_err(netdev, "new MTU invalid. max_mtu is %d\n",
netdev->max_mtu);
return -EINVAL;
}
@@ -3882,7 +3932,7 @@ static int ice_change_mtu(struct net_device *netdev, int new_mtu)
} while (count < 100);
if (count == 100) {
- netdev_err(netdev, "can't change mtu. Device is busy\n");
+ netdev_err(netdev, "can't change MTU. Device is busy\n");
return -EBUSY;
}
@@ -3894,18 +3944,18 @@ static int ice_change_mtu(struct net_device *netdev, int new_mtu)
err = ice_down(vsi);
if (err) {
- netdev_err(netdev, "change mtu if_up err %d\n", err);
+ netdev_err(netdev, "change MTU if_up err %d\n", err);
return err;
}
err = ice_up(vsi);
if (err) {
- netdev_err(netdev, "change mtu if_up err %d\n", err);
+ netdev_err(netdev, "change MTU if_up err %d\n", err);
return err;
}
}
- netdev_dbg(netdev, "changed mtu to %d\n", new_mtu);
+ netdev_info(netdev, "changed MTU to %d\n", new_mtu);
return 0;
}
@@ -4241,7 +4291,7 @@ static void ice_tx_timeout(struct net_device *netdev)
*
* Returns 0 on success, negative value on failure
*/
-static int ice_open(struct net_device *netdev)
+int ice_open(struct net_device *netdev)
{
struct ice_netdev_priv *np = netdev_priv(netdev);
struct ice_vsi *vsi = np->vsi;
@@ -4278,7 +4328,7 @@ static int ice_open(struct net_device *netdev)
*
* Returns success only - not allowed to fail
*/
-static int ice_stop(struct net_device *netdev)
+int ice_stop(struct net_device *netdev)
{
struct ice_netdev_priv *np = netdev_priv(netdev);
struct ice_vsi *vsi = np->vsi;
diff --git a/drivers/net/ethernet/intel/ice/ice_nvm.c b/drivers/net/ethernet/intel/ice/ice_nvm.c
index 62571d33d0d6..bcb431f1bd92 100644
--- a/drivers/net/ethernet/intel/ice/ice_nvm.c
+++ b/drivers/net/ethernet/intel/ice/ice_nvm.c
@@ -119,7 +119,7 @@ ice_read_sr_word_aq(struct ice_hw *hw, u16 offset, u16 *data)
status = ice_read_sr_aq(hw, offset, 1, data, true);
if (!status)
- *data = le16_to_cpu(*(__le16 *)data);
+ *data = le16_to_cpu(*(__force __le16 *)data);
return status;
}
@@ -174,7 +174,7 @@ ice_read_sr_buf_aq(struct ice_hw *hw, u16 offset, u16 *words, u16 *data)
} while (words_read < *words);
for (i = 0; i < *words; i++)
- data[i] = le16_to_cpu(((__le16 *)data)[i]);
+ data[i] = le16_to_cpu(((__force __le16 *)data)[i]);
read_nvm_buf_aq_exit:
*words = words_read;
@@ -316,3 +316,34 @@ ice_read_sr_buf(struct ice_hw *hw, u16 offset, u16 *words, u16 *data)
return status;
}
+
+/**
+ * ice_nvm_validate_checksum
+ * @hw: pointer to the HW struct
+ *
+ * Verify NVM PFA checksum validity (0x0706)
+ */
+enum ice_status ice_nvm_validate_checksum(struct ice_hw *hw)
+{
+ struct ice_aqc_nvm_checksum *cmd;
+ struct ice_aq_desc desc;
+ enum ice_status status;
+
+ status = ice_acquire_nvm(hw, ICE_RES_READ);
+ if (status)
+ return status;
+
+ cmd = &desc.params.nvm_checksum;
+
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_nvm_checksum);
+ cmd->flags = ICE_AQC_NVM_CHECKSUM_VERIFY;
+
+ status = ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+ ice_release_nvm(hw);
+
+ if (!status)
+ if (le16_to_cpu(cmd->checksum) != ICE_AQC_NVM_CHECKSUM_CORRECT)
+ status = ICE_ERR_NVM_CHECKSUM;
+
+ return status;
+}
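
The ice_nvm_validate_checksum() helper added above acquires the NVM semaphore, sends AQ opcode 0x0706 and translates a failed verification into the new ICE_ERR_NVM_CHECKSUM code. A minimal sketch of how a caller could consume it is below; the call site and the message wording are illustrative assumptions, not taken from this series.

/* Hedged sketch: report an NVM checksum problem during PF bring-up.
 * Assumes the admin queue has already been initialized on pf->hw.
 */
static void example_check_nvm(struct ice_pf *pf)
{
	enum ice_status status = ice_nvm_validate_checksum(&pf->hw);

	if (status == ICE_ERR_NVM_CHECKSUM)
		dev_err(&pf->pdev->dev, "NVM PFA checksum is invalid\n");
	else if (status)
		dev_err(&pf->pdev->dev, "NVM checksum verification AQ command failed\n");
}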
diff --git a/drivers/net/ethernet/intel/ice/ice_sched.c b/drivers/net/ethernet/intel/ice/ice_sched.c
index 8d49f83be7a5..2a232504379d 100644
--- a/drivers/net/ethernet/intel/ice/ice_sched.c
+++ b/drivers/net/ethernet/intel/ice/ice_sched.c
@@ -683,10 +683,10 @@ ice_sched_add_elems(struct ice_port_info *pi, struct ice_sched_node *tc_node,
u16 i, num_groups_added = 0;
enum ice_status status = 0;
struct ice_hw *hw = pi->hw;
- u16 buf_size;
+ size_t buf_size;
u32 teid;
- buf_size = sizeof(*buf) + sizeof(*buf->generic) * (num_nodes - 1);
+ buf_size = struct_size(buf, generic, num_nodes - 1);
buf = devm_kzalloc(ice_hw_to_dev(hw), buf_size, GFP_KERNEL);
if (!buf)
return ICE_ERR_NO_MEMORY;
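
The struct_size() conversion above computes the same allocation size as the old open-coded expression, sizeof(*buf) plus (num_nodes - 1) trailing elements, but saturates at SIZE_MAX on overflow so a wrapped multiplication cannot yield an undersized buffer. A stand-alone sketch of the idiom with a hypothetical structure (not the driver's AQ buffer):

#include <linux/overflow.h>
#include <linux/slab.h>

struct demo_elems {
	__le16 num_elems;
	struct { __le32 teid; } generic[1];	/* one-element trailing array */
};

static struct demo_elems *demo_alloc(u16 num_nodes)
{
	struct demo_elems *buf;

	/* struct_size(buf, generic, num_nodes - 1) ==
	 *	sizeof(*buf) + (num_nodes - 1) * sizeof(buf->generic[0]),
	 * except that overflow yields SIZE_MAX and kzalloc() then fails.
	 */
	buf = kzalloc(struct_size(buf, generic, num_nodes - 1), GFP_KERNEL);
	return buf;
}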
diff --git a/drivers/net/ethernet/intel/ice/ice_status.h b/drivers/net/ethernet/intel/ice/ice_status.h
index 17afe6acb18a..c01597885629 100644
--- a/drivers/net/ethernet/intel/ice/ice_status.h
+++ b/drivers/net/ethernet/intel/ice/ice_status.h
@@ -26,6 +26,7 @@ enum ice_status {
ICE_ERR_IN_USE = -16,
ICE_ERR_MAX_LIMIT = -17,
ICE_ERR_RESET_ONGOING = -18,
+ ICE_ERR_NVM_CHECKSUM = -51,
ICE_ERR_BUF_TOO_SHORT = -52,
ICE_ERR_NVM_BLANK_MODE = -53,
ICE_ERR_AQ_ERROR = -100,
diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c
index 9f1f595ae7e6..8271fd651725 100644
--- a/drivers/net/ethernet/intel/ice/ice_switch.c
+++ b/drivers/net/ethernet/intel/ice/ice_switch.c
@@ -799,7 +799,7 @@ ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info,
daddr = f_info->l_data.ethertype_mac.mac_addr;
/* fall-through */
case ICE_SW_LKUP_ETHERTYPE:
- off = (__be16 *)(eth_hdr + ICE_ETH_ETHTYPE_OFFSET);
+ off = (__force __be16 *)(eth_hdr + ICE_ETH_ETHTYPE_OFFSET);
*off = cpu_to_be16(f_info->l_data.ethertype_mac.ethertype);
break;
case ICE_SW_LKUP_MAC_VLAN:
@@ -829,7 +829,7 @@ ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info,
ether_addr_copy(eth_hdr + ICE_ETH_DA_OFFSET, daddr);
if (!(vlan_id > ICE_MAX_VLAN_ID)) {
- off = (__be16 *)(eth_hdr + ICE_ETH_VLAN_TCI_OFFSET);
+ off = (__force __be16 *)(eth_hdr + ICE_ETH_VLAN_TCI_OFFSET);
*off = cpu_to_be16(vlan_id);
}
@@ -1973,6 +1973,10 @@ ice_add_vlan(struct ice_hw *hw, struct list_head *v_list)
* ice_add_eth_mac - Add ethertype and MAC based filter rule
* @hw: pointer to the hardware structure
* @em_list: list of ether type MAC filter, MAC is optional
+ *
+ * This function requires the caller to populate the entries in
+ * the filter list with the necessary fields (including flags to
+ * indicate Tx or Rx rules).
*/
enum ice_status
ice_add_eth_mac(struct ice_hw *hw, struct list_head *em_list)
@@ -1990,7 +1994,6 @@ ice_add_eth_mac(struct ice_hw *hw, struct list_head *em_list)
l_type != ICE_SW_LKUP_ETHERTYPE)
return ICE_ERR_PARAM;
- em_list_itr->fltr_info.flag = ICE_FLTR_TX;
em_list_itr->status = ice_add_rule_internal(hw, l_type,
em_list_itr);
if (em_list_itr->status)
diff --git a/drivers/net/ethernet/intel/ice/ice_switch.h b/drivers/net/ethernet/intel/ice/ice_switch.h
index 732b0b9b2e15..cb123fbe30be 100644
--- a/drivers/net/ethernet/intel/ice/ice_switch.h
+++ b/drivers/net/ethernet/intel/ice/ice_switch.h
@@ -8,9 +8,11 @@
#define ICE_SW_CFG_MAX_BUF_LEN 2048
#define ICE_DFLT_VSI_INVAL 0xff
+#define ICE_FLTR_RX BIT(0)
+#define ICE_FLTR_TX BIT(1)
+#define ICE_FLTR_TX_RX (ICE_FLTR_RX | ICE_FLTR_TX)
#define ICE_VSI_INVAL_ID 0xffff
#define ICE_INVAL_Q_HANDLE 0xFFFF
-#define ICE_INVAL_Q_HANDLE 0xFFFF
/* VSI queue context structure */
struct ice_q_ctx {
@@ -69,9 +71,6 @@ struct ice_fltr_info {
/* rule ID returned by firmware once filter rule is created */
u16 fltr_rule_id;
u16 flag;
-#define ICE_FLTR_RX BIT(0)
-#define ICE_FLTR_TX BIT(1)
-#define ICE_FLTR_TX_RX (ICE_FLTR_RX | ICE_FLTR_TX)
/* Source VSI for LOOKUP_TX or source port for LOOKUP_RX */
u16 src;
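
Moving ICE_FLTR_RX/ICE_FLTR_TX out of struct ice_fltr_info pairs with the ice_switch.c change above: ice_add_eth_mac() no longer forces ICE_FLTR_TX, so callers pick the direction themselves. A hedged sketch of filling one ethertype entry follows; struct ice_fltr_list_entry and every field not visible in these hunks (forwarding action, source/VSI id) are assumptions, and a real caller must populate them as well.

/* Sketch only: the direction flag is now chosen by the caller. */
static void example_fill_lldp_entry(struct ice_fltr_list_entry *entry)
{
	entry->fltr_info.lkup_type = ICE_SW_LKUP_ETHERTYPE;
	entry->fltr_info.flag = ICE_FLTR_TX;	/* was hard-coded in ice_add_eth_mac() */
	entry->fltr_info.l_data.ethertype_mac.ethertype = 0x88CC;	/* LLDP */
	/* forwarding action, src, and VSI fields intentionally omitted here */
}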
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index 2364eaf33d23..3c83230434b6 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -55,7 +55,7 @@ void ice_clean_tx_ring(struct ice_ring *tx_ring)
if (!tx_ring->tx_buf)
return;
- /* Free all the Tx ring sk_bufss */
+ /* Free all the Tx ring sk_buffs */
for (i = 0; i < tx_ring->count; i++)
ice_unmap_and_free_tx_buf(tx_ring, &tx_ring->tx_buf[i]);
@@ -1101,7 +1101,7 @@ static int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)
* ice_adjust_itr_by_size_and_speed - Adjust ITR based on current traffic
* @port_info: port_info structure containing the current link speed
* @avg_pkt_size: average size of Tx or Rx packets based on clean routine
- * @itr: itr value to update
+ * @itr: ITR value to update
*
* Calculate how big of an increment should be applied to the ITR value passed
* in based on wmem_default, SKB overhead, Ethernet overhead, and the current
@@ -1316,7 +1316,7 @@ clear_counts:
*/
static u32 ice_buildreg_itr(u16 itr_idx, u16 itr)
{
- /* The itr value is reported in microseconds, and the register value is
+ /* The ITR value is reported in microseconds, and the register value is
* recorded in 2 microsecond units. For this reason we only need to
* shift by the GLINT_DYN_CTL_INTERVAL_S - ICE_ITR_GRAN_S to apply this
* granularity as a shift instead of division. The mask makes sure the
@@ -1645,7 +1645,7 @@ ice_tx_map(struct ice_ring *tx_ring, struct ice_tx_buf *first,
return;
dma_error:
- /* clear dma mappings for failed tx_buf map */
+ /* clear DMA mappings for failed tx_buf map */
for (;;) {
tx_buf = &tx_ring->tx_buf[i];
ice_unmap_and_free_tx_buf(tx_ring, tx_buf);
@@ -1874,10 +1874,10 @@ int ice_tso(struct ice_tx_buf *first, struct ice_tx_offload_params *off)
cd_mss = skb_shinfo(skb)->gso_size;
/* record cdesc_qw1 with TSO parameters */
- off->cd_qw1 |= ICE_TX_DESC_DTYPE_CTX |
- (ICE_TX_CTX_DESC_TSO << ICE_TXD_CTX_QW1_CMD_S) |
- (cd_tso_len << ICE_TXD_CTX_QW1_TSO_LEN_S) |
- (cd_mss << ICE_TXD_CTX_QW1_MSS_S);
+ off->cd_qw1 |= (u64)(ICE_TX_DESC_DTYPE_CTX |
+ (ICE_TX_CTX_DESC_TSO << ICE_TXD_CTX_QW1_CMD_S) |
+ (cd_tso_len << ICE_TXD_CTX_QW1_TSO_LEN_S) |
+ (cd_mss << ICE_TXD_CTX_QW1_MSS_S));
first->tx_flags |= ICE_TX_FLAGS_TSO;
return 1;
}
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h
index 66e05032ee56..ec76aba347b9 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.h
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.h
@@ -58,19 +58,19 @@ struct ice_tx_buf {
unsigned int bytecount;
unsigned short gso_segs;
u32 tx_flags;
- DEFINE_DMA_UNMAP_ADDR(dma);
DEFINE_DMA_UNMAP_LEN(len);
+ DEFINE_DMA_UNMAP_ADDR(dma);
};
struct ice_tx_offload_params {
- u8 header_len;
+ u64 cd_qw1;
+ struct ice_ring *tx_ring;
u32 td_cmd;
u32 td_offset;
u32 td_l2tag1;
- u16 cd_l2tag2;
u32 cd_tunnel_params;
- u64 cd_qw1;
- struct ice_ring *tx_ring;
+ u16 cd_l2tag2;
+ u8 header_len;
};
struct ice_rx_buf {
@@ -150,6 +150,7 @@ enum ice_rx_dtype {
/* descriptor ring, associated with a VSI */
struct ice_ring {
+ /* CL1 - 1st cacheline starts here */
struct ice_ring *next; /* pointer to next ring in q_vector */
void *desc; /* Descriptor ring memory */
struct device *dev; /* Used for DMA mapping */
@@ -161,11 +162,11 @@ struct ice_ring {
struct ice_tx_buf *tx_buf;
struct ice_rx_buf *rx_buf;
};
+ /* CL2 - 2nd cacheline starts here */
u16 q_index; /* Queue number of ring */
- u32 txq_teid; /* Added Tx queue TEID */
-#ifdef CONFIG_DCB
- u8 dcb_tc; /* Traffic class of ring */
-#endif /* CONFIG_DCB */
+ u16 q_handle; /* Queue handle per TC */
+
+ u8 ring_active:1; /* is ring online or not */
u16 count; /* Number of descriptors */
u16 reg_idx; /* HW register index of the ring */
@@ -173,8 +174,7 @@ struct ice_ring {
/* used in interrupt processing */
u16 next_to_use;
u16 next_to_clean;
-
- u8 ring_active; /* is ring online or not */
+ u16 next_to_alloc;
/* stats structs */
struct ice_q_stats stats;
@@ -184,10 +184,17 @@ struct ice_ring {
struct ice_rxq_stats rx_stats;
};
- unsigned int size; /* length of descriptor ring in bytes */
- dma_addr_t dma; /* physical address of ring */
struct rcu_head rcu; /* to avoid race on free */
- u16 next_to_alloc;
+ /* CLX - the below items are only accessed infrequently and should be
+ * in their own cache line if possible
+ */
+ dma_addr_t dma; /* physical address of ring */
+ unsigned int size; /* length of descriptor ring in bytes */
+ u32 txq_teid; /* Added Tx queue TEID */
+ u16 rx_buf_len;
+#ifdef CONFIG_DCB
+ u8 dcb_tc; /* Traffic class of ring */
+#endif /* CONFIG_DCB */
} ____cacheline_internodealigned_in_smp;
struct ice_ring_container {
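
The ice_ring reshuffle above keeps the fields touched in the hot path in the first two cachelines and moves dma, size, txq_teid, rx_buf_len and dcb_tc behind the rcu_head. A throwaway sketch of how the resulting offsets could be inspected during development (not part of the patch):

/* Development aid only: print where the hot/cold boundary lands. */
static void example_dump_ring_layout(void)
{
	pr_info("ice_ring: q_index at %zu, rcu at %zu, cacheline is %d bytes\n",
		offsetof(struct ice_ring, q_index),
		offsetof(struct ice_ring, rcu),
		SMP_CACHE_BYTES);
}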
diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h
index a862af4cbf78..24bbef8bbe69 100644
--- a/drivers/net/ethernet/intel/ice/ice_type.h
+++ b/drivers/net/ethernet/intel/ice/ice_type.h
@@ -23,6 +23,7 @@ static inline bool ice_is_tc_ena(u8 bitmap, u8 tc)
/* debug masks - set these bits in hw->debug_mask to control output */
#define ICE_DBG_INIT BIT_ULL(1)
+#define ICE_DBG_FW_LOG BIT_ULL(3)
#define ICE_DBG_LINK BIT_ULL(4)
#define ICE_DBG_PHY BIT_ULL(5)
#define ICE_DBG_QCTX BIT_ULL(6)
@@ -61,6 +62,13 @@ enum ice_fc_mode {
ICE_FC_DFLT
};
+enum ice_fec_mode {
+ ICE_FEC_NONE = 0,
+ ICE_FEC_RS,
+ ICE_FEC_BASER,
+ ICE_FEC_AUTO
+};
+
enum ice_set_fc_aq_failures {
ICE_SET_FC_AQ_FAIL_NONE = 0,
ICE_SET_FC_AQ_FAIL_GET,
@@ -86,12 +94,14 @@ enum ice_media_type {
enum ice_vsi_type {
ICE_VSI_PF = 0,
ICE_VSI_VF,
+ ICE_VSI_LB = 6,
};
struct ice_link_status {
/* Refer to ice_aq_phy_type for bits definition */
u64 phy_type_low;
u64 phy_type_high;
+ u8 topo_media_conflict;
u16 max_frame_size;
u16 link_speed;
u16 req_speeds;
@@ -99,6 +109,7 @@ struct ice_link_status {
u8 link_info;
u8 an_info;
u8 ext_info;
+ u8 fec_info;
u8 pacing;
/* Refer to #define from module_type[ICE_MODULE_TYPE_TOTAL_BYTE] of
* ice_aqc_get_phy_caps structure
@@ -423,7 +434,7 @@ struct ice_hw {
struct ice_fw_log_cfg fw_log;
/* Device max aggregate bandwidths corresponding to the GL_PWR_MODE_CTL
- * register. Used for determining the itr/intrl granularity during
+ * register. Used for determining the ITR/intrl granularity during
* initialization.
*/
#define ICE_MAX_AGG_BW_200G 0x0
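
The new ice_fec_mode enum gives the driver a FEC abstraction to carry between the AQ PHY capability data and user-facing interfaces. Below is a hedged sketch of one plausible mapping onto the generic ethtool FEC flags; the mapping is an assumption, not code from this series.

static u32 example_fec_to_ethtool(enum ice_fec_mode fec)
{
	switch (fec) {
	case ICE_FEC_RS:
		return ETHTOOL_FEC_RS;
	case ICE_FEC_BASER:
		return ETHTOOL_FEC_BASER;
	case ICE_FEC_AUTO:
		return ETHTOOL_FEC_AUTO;
	case ICE_FEC_NONE:
	default:
		return ETHTOOL_FEC_OFF;
	}
}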
diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
index a805cbdd69be..5d24b539648f 100644
--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
@@ -103,7 +103,7 @@ ice_set_pfe_link_forced(struct ice_vf *vf, struct virtchnl_pf_event *pfe,
u16 link_speed;
if (link_up)
- link_speed = ICE_AQ_LINK_SPEED_40GB;
+ link_speed = ICE_AQ_LINK_SPEED_100GB;
else
link_speed = ICE_AQ_LINK_SPEED_UNKNOWN;
@@ -141,32 +141,20 @@ static void ice_vc_notify_vf_link_state(struct ice_vf *vf)
}
/**
- * ice_get_vf_vector - get VF interrupt vector register offset
- * @vf_msix: number of MSIx vector per VF on a PF
- * @vf_id: VF identifier
- * @i: index of MSIx vector
- */
-static u32 ice_get_vf_vector(int vf_msix, int vf_id, int i)
-{
- return ((i == 0) ? VFINT_DYN_CTLN(vf_id) :
- VFINT_DYN_CTLN(((vf_msix - 1) * (vf_id)) + (i - 1)));
-}
-
-/**
* ice_free_vf_res - Free a VF's resources
* @vf: pointer to the VF info
*/
static void ice_free_vf_res(struct ice_vf *vf)
{
struct ice_pf *pf = vf->pf;
- int i, pf_vf_msix;
+ int i, last_vector_idx;
/* First, disable VF's configuration API to prevent OS from
* accessing the VF's VSI after it's freed or invalidated.
*/
clear_bit(ICE_VF_STATE_INIT, vf->vf_states);
- /* free vsi & disconnect it from the parent uplink */
+ /* free VSI and disconnect it from the parent uplink */
if (vf->lan_vsi_idx) {
ice_vsi_release(pf->vsi[vf->lan_vsi_idx]);
vf->lan_vsi_idx = 0;
@@ -174,13 +162,10 @@ static void ice_free_vf_res(struct ice_vf *vf)
vf->num_mac = 0;
}
- pf_vf_msix = pf->num_vf_msix;
+ last_vector_idx = vf->first_vector_idx + pf->num_vf_msix - 1;
/* Disable interrupts so that VF starts in a known state */
- for (i = 0; i < pf_vf_msix; i++) {
- u32 reg_idx;
-
- reg_idx = ice_get_vf_vector(pf_vf_msix, vf->vf_id, i);
- wr32(&pf->hw, reg_idx, VFINT_DYN_CTLN_CLEARPBA_M);
+ for (i = vf->first_vector_idx; i <= last_vector_idx; i++) {
+ wr32(&pf->hw, GLINT_DYN_CTL(i), GLINT_DYN_CTL_CLEARPBA_M);
ice_flush(&pf->hw);
}
/* reset some of the state variables keeping track of the resources */
@@ -205,8 +190,7 @@ static void ice_dis_vf_mappings(struct ice_vf *vf)
wr32(hw, VPINT_ALLOC(vf->vf_id), 0);
wr32(hw, VPINT_ALLOC_PCI(vf->vf_id), 0);
- first = vf->first_vector_idx +
- hw->func_caps.common_cap.msix_vector_first_id;
+ first = vf->first_vector_idx;
last = first + pf->num_vf_msix - 1;
for (v = first; v <= last; v++) {
u32 reg;
@@ -232,6 +216,42 @@ static void ice_dis_vf_mappings(struct ice_vf *vf)
}
/**
+ * ice_sriov_free_msix_res - Reset/free any used MSIX resources
+ * @pf: pointer to the PF structure
+ *
+ * If MSIX entries from the pf->irq_tracker were needed then we need to
+ * reset the irq_tracker->end and give back the entries we needed to
+ * num_avail_sw_msix.
+ *
+ * If no MSIX entries were taken from the pf->irq_tracker then just clear
+ * the pf->sriov_base_vector.
+ *
+ * Returns 0 on success, and -EINVAL on error.
+ */
+static int ice_sriov_free_msix_res(struct ice_pf *pf)
+{
+ struct ice_res_tracker *res;
+
+ if (!pf)
+ return -EINVAL;
+
+ res = pf->irq_tracker;
+ if (!res)
+ return -EINVAL;
+
+ /* give back irq_tracker resources used */
+ if (pf->sriov_base_vector < res->num_entries) {
+ res->end = res->num_entries;
+ pf->num_avail_sw_msix +=
+ res->num_entries - pf->sriov_base_vector;
+ }
+
+ pf->sriov_base_vector = 0;
+
+ return 0;
+}
+
+/**
* ice_free_vfs - Free all VFs
* @pf: pointer to the PF structure
*/
@@ -246,15 +266,6 @@ void ice_free_vfs(struct ice_pf *pf)
while (test_and_set_bit(__ICE_VF_DIS, pf->state))
usleep_range(1000, 2000);
- /* Disable IOV before freeing resources. This lets any VF drivers
- * running in the host get themselves cleaned up before we yank
- * the carpet out from underneath their feet.
- */
- if (!pci_vfs_assigned(pf->pdev))
- pci_disable_sriov(pf->pdev);
- else
- dev_warn(&pf->pdev->dev, "VFs are assigned - not disabling SR-IOV\n");
-
/* Avoid wait time by stopping all VFs at the same time */
for (i = 0; i < pf->num_alloc_vfs; i++) {
struct ice_vsi *vsi;
@@ -270,6 +281,15 @@ void ice_free_vfs(struct ice_pf *pf)
clear_bit(ICE_VF_STATE_ENA, pf->vf[i].vf_states);
}
+ /* Disable IOV before freeing resources. This lets any VF drivers
+ * running in the host get themselves cleaned up before we yank
+ * the carpet out from underneath their feet.
+ */
+ if (!pci_vfs_assigned(pf->pdev))
+ pci_disable_sriov(pf->pdev);
+ else
+ dev_warn(&pf->pdev->dev, "VFs are assigned - not disabling SR-IOV\n");
+
tmp = pf->num_alloc_vfs;
pf->num_vf_qps = 0;
pf->num_alloc_vfs = 0;
@@ -288,6 +308,10 @@ void ice_free_vfs(struct ice_pf *pf)
}
}
+ if (ice_sriov_free_msix_res(pf))
+ dev_err(&pf->pdev->dev,
+ "Failed to free MSIX resources used by SR-IOV\n");
+
devm_kfree(&pf->pdev->dev, pf->vf);
pf->vf = NULL;
@@ -457,6 +481,22 @@ ice_vf_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi, u16 vf_id)
}
/**
+ * ice_calc_vf_first_vector_idx - Calculate absolute MSIX vector index in HW
+ * @pf: pointer to PF structure
+ * @vf: pointer to VF that the first MSIX vector index is being calculated for
+ *
+ * This returns the first MSIX vector index in HW that is used by this VF and
+ * this will always be the OICR index in the AVF driver so any functionality
+ * using vf->first_vector_idx for queue configuration will have to increment by
+ * 1 to avoid meddling with the OICR index.
+ */
+static int ice_calc_vf_first_vector_idx(struct ice_pf *pf, struct ice_vf *vf)
+{
+ return pf->hw.func_caps.common_cap.msix_vector_first_id +
+ pf->sriov_base_vector + vf->vf_id * pf->num_vf_msix;
+}
+
+/**
* ice_alloc_vsi_res - Setup VF VSI and its resources
* @vf: pointer to the VF structure
*
@@ -470,8 +510,10 @@ static int ice_alloc_vsi_res(struct ice_vf *vf)
struct ice_vsi *vsi;
int status = 0;
- vsi = ice_vf_vsi_setup(pf, pf->hw.port_info, vf->vf_id);
+ /* first vector index is the VFs OICR index */
+ vf->first_vector_idx = ice_calc_vf_first_vector_idx(pf, vf);
+ vsi = ice_vf_vsi_setup(pf, pf->hw.port_info, vf->vf_id);
if (!vsi) {
dev_err(&pf->pdev->dev, "Failed to create VF VSI\n");
return -ENOMEM;
@@ -480,14 +522,6 @@ static int ice_alloc_vsi_res(struct ice_vf *vf)
vf->lan_vsi_idx = vsi->idx;
vf->lan_vsi_num = vsi->vsi_num;
- /* first vector index is the VFs OICR index */
- vf->first_vector_idx = vsi->hw_base_vector;
- /* Since hw_base_vector holds the vector where data queue interrupts
- * starts, increment by 1 since VFs allocated vectors include OICR intr
- * as well.
- */
- vsi->hw_base_vector += 1;
-
/* Check if port VLAN exist before, and restore it accordingly */
if (vf->port_vlan_id) {
ice_vsi_manage_pvid(vsi, vf->port_vlan_id, true);
@@ -580,8 +614,7 @@ static void ice_ena_vf_mappings(struct ice_vf *vf)
hw = &pf->hw;
vsi = pf->vsi[vf->lan_vsi_idx];
- first = vf->first_vector_idx +
- hw->func_caps.common_cap.msix_vector_first_id;
+ first = vf->first_vector_idx;
last = (first + pf->num_vf_msix) - 1;
abs_vf_id = vf->vf_id + hw->func_caps.vf_base_id;
@@ -687,6 +720,97 @@ ice_determine_res(struct ice_pf *pf, u16 avail_res, u16 max_res, u16 min_res)
}
/**
+ * ice_calc_vf_reg_idx - Calculate the VF's register index in the PF space
+ * @vf: VF to calculate the register index for
+ * @q_vector: a q_vector associated to the VF
+ */
+int ice_calc_vf_reg_idx(struct ice_vf *vf, struct ice_q_vector *q_vector)
+{
+ struct ice_pf *pf;
+
+ if (!vf || !q_vector)
+ return -EINVAL;
+
+ pf = vf->pf;
+
+ /* always add one to account for the OICR being the first MSIX */
+ return pf->sriov_base_vector + pf->num_vf_msix * vf->vf_id +
+ q_vector->v_idx + 1;
+}
+
+/**
+ * ice_get_max_valid_res_idx - Get the max valid resource index
+ * @res: pointer to the resource to find the max valid index for
+ *
+ * Start from the end of the ice_res_tracker and return right when we find the
+ * first res->list entry with the ICE_RES_VALID_BIT set. This function is only
+ * valid for SR-IOV because it is the only consumer that manipulates the
+ * res->end and this is always called when res->end is set to res->num_entries.
+ */
+static int ice_get_max_valid_res_idx(struct ice_res_tracker *res)
+{
+ int i;
+
+ if (!res)
+ return -EINVAL;
+
+ for (i = res->num_entries - 1; i >= 0; i--)
+ if (res->list[i] & ICE_RES_VALID_BIT)
+ return i;
+
+ return 0;
+}
+
+/**
+ * ice_sriov_set_msix_res - Set any used MSIX resources
+ * @pf: pointer to PF structure
+ * @num_msix_needed: number of MSIX vectors needed for all SR-IOV VFs
+ *
+ * This function allows SR-IOV resources to be taken from the end of the PF's
+ * allowed HW MSIX vectors so in many cases the irq_tracker will not
+ * be needed. In these cases we just set the pf->sriov_base_vector and return
+ * success.
+ *
+ * If SR-IOV needs to use any pf->irq_tracker entries it updates the
+ * irq_tracker->end based on the first entry needed for SR-IOV. This makes it
+ * so any calls to ice_get_res() using the irq_tracker will not try to use
+ * resources at or beyond the newly set value.
+ *
+ * Return 0 on success, and -EINVAL when there are not enough MSIX vectors
+ * in the PF's space available for SR-IOV.
+ */
+static int ice_sriov_set_msix_res(struct ice_pf *pf, u16 num_msix_needed)
+{
+ int max_valid_res_idx = ice_get_max_valid_res_idx(pf->irq_tracker);
+ u16 pf_total_msix_vectors =
+ pf->hw.func_caps.common_cap.num_msix_vectors;
+ struct ice_res_tracker *res = pf->irq_tracker;
+ int sriov_base_vector;
+
+ if (max_valid_res_idx < 0)
+ return max_valid_res_idx;
+
+ sriov_base_vector = pf_total_msix_vectors - num_msix_needed;
+
+ /* make sure we only grab irq_tracker entries from the list end and
+ * that we have enough available MSIX vectors
+ */
+ if (sriov_base_vector <= max_valid_res_idx)
+ return -EINVAL;
+
+ pf->sriov_base_vector = sriov_base_vector;
+
+ /* dip into irq_tracker entries and update used resources */
+ if (num_msix_needed > (pf_total_msix_vectors - res->num_entries)) {
+ pf->num_avail_sw_msix -=
+ res->num_entries - pf->sriov_base_vector;
+ res->end = pf->sriov_base_vector;
+ }
+
+ return 0;
+}
+
+/**
* ice_check_avail_res - check if vectors and queues are available
* @pf: pointer to the PF structure
*
@@ -696,11 +820,16 @@ ice_determine_res(struct ice_pf *pf, u16 avail_res, u16 max_res, u16 min_res)
*/
static int ice_check_avail_res(struct ice_pf *pf)
{
- u16 num_msix, num_txq, num_rxq;
+ int max_valid_res_idx = ice_get_max_valid_res_idx(pf->irq_tracker);
+ u16 num_msix, num_txq, num_rxq, num_avail_msix;
- if (!pf->num_alloc_vfs)
+ if (!pf->num_alloc_vfs || max_valid_res_idx < 0)
return -EINVAL;
+ /* add 1 to max_valid_res_idx to account for it being 0-based */
+ num_avail_msix = pf->hw.func_caps.common_cap.num_msix_vectors -
+ (max_valid_res_idx + 1);
+
/* Grab from HW interrupts common pool
* Note: By the time the user decides it needs more vectors in a VF
* its already too late since one must decide this prior to creating the
@@ -717,11 +846,11 @@ static int ice_check_avail_res(struct ice_pf *pf)
* grab default interrupt vectors (5 as supported by AVF driver).
*/
if (pf->num_alloc_vfs <= 16) {
- num_msix = ice_determine_res(pf, pf->num_avail_hw_msix,
+ num_msix = ice_determine_res(pf, num_avail_msix,
ICE_MAX_INTR_PER_VF,
ICE_MIN_INTR_PER_VF);
} else if (pf->num_alloc_vfs <= ICE_MAX_VF_COUNT) {
- num_msix = ice_determine_res(pf, pf->num_avail_hw_msix,
+ num_msix = ice_determine_res(pf, num_avail_msix,
ICE_DFLT_INTR_PER_VF,
ICE_MIN_INTR_PER_VF);
} else {
@@ -750,6 +879,9 @@ static int ice_check_avail_res(struct ice_pf *pf)
if (!num_txq || !num_rxq)
return -EIO;
+ if (ice_sriov_set_msix_res(pf, num_msix * pf->num_alloc_vfs))
+ return -EINVAL;
+
/* since AVF driver works with only queue pairs which means, it expects
* to have equal number of Rx and Tx queues, so take the minimum of
* available Tx or Rx queues
@@ -938,6 +1070,10 @@ bool ice_reset_all_vfs(struct ice_pf *pf, bool is_vflr)
vf->num_vf_qs = 0;
}
+ if (ice_sriov_free_msix_res(pf))
+ dev_err(&pf->pdev->dev,
+ "Failed to free MSIX resources used by SR-IOV\n");
+
if (ice_check_avail_res(pf)) {
dev_err(&pf->pdev->dev,
"Cannot allocate VF resources, try with fewer number of VFs\n");
@@ -1119,7 +1255,7 @@ static int ice_alloc_vfs(struct ice_pf *pf, u16 num_alloc_vfs)
int i, ret;
/* Disable global interrupt 0 so we don't try to handle the VFLR. */
- wr32(hw, GLINT_DYN_CTL(pf->hw_oicr_idx),
+ wr32(hw, GLINT_DYN_CTL(pf->oicr_idx),
ICE_ITR_NONE << GLINT_DYN_CTL_ITR_INDX_S);
ice_flush(hw);
@@ -1134,7 +1270,7 @@ static int ice_alloc_vfs(struct ice_pf *pf, u16 num_alloc_vfs)
GFP_KERNEL);
if (!vfs) {
ret = -ENOMEM;
- goto err_unroll_sriov;
+ goto err_pci_disable_sriov;
}
pf->vf = vfs;
@@ -1154,12 +1290,19 @@ static int ice_alloc_vfs(struct ice_pf *pf, u16 num_alloc_vfs)
pf->num_alloc_vfs = num_alloc_vfs;
/* VF resources get allocated during reset */
- if (!ice_reset_all_vfs(pf, true))
+ if (!ice_reset_all_vfs(pf, true)) {
+ ret = -EIO;
goto err_unroll_sriov;
+ }
goto err_unroll_intr;
err_unroll_sriov:
+ pf->vf = NULL;
+ devm_kfree(&pf->pdev->dev, vfs);
+ vfs = NULL;
+ pf->num_alloc_vfs = 0;
+err_pci_disable_sriov:
pci_disable_sriov(pf->pdev);
err_unroll_intr:
/* rearm interrupts here */
@@ -1168,8 +1311,8 @@ err_unroll_intr:
}
/**
- * ice_pf_state_is_nominal - checks the pf for nominal state
- * @pf: pointer to pf to check
+ * ice_pf_state_is_nominal - checks the PF for nominal state
+ * @pf: pointer to PF to check
*
* Check the PF's state for a collection of bits that would indicate
* the PF is in a state that would inhibit normal operation for
@@ -1496,7 +1639,7 @@ static void ice_vc_reset_vf_msg(struct ice_vf *vf)
/**
* ice_find_vsi_from_id
- * @pf: the pf structure to search for the VSI
+ * @pf: the PF structure to search for the VSI
* @id: ID of the VSI it is searching for
*
* searches for the VSI with the given ID
@@ -1807,28 +1950,37 @@ error_param:
static int ice_vc_cfg_irq_map_msg(struct ice_vf *vf, u8 *msg)
{
enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS;
- struct virtchnl_irq_map_info *irqmap_info =
- (struct virtchnl_irq_map_info *)msg;
+ struct virtchnl_irq_map_info *irqmap_info;
u16 vsi_id, vsi_q_id, vector_id;
struct virtchnl_vector_map *map;
- struct ice_vsi *vsi = NULL;
struct ice_pf *pf = vf->pf;
+ u16 num_q_vectors_mapped;
+ struct ice_vsi *vsi;
unsigned long qmap;
- u16 num_q_vectors;
int i;
- num_q_vectors = irqmap_info->num_vectors - ICE_NONQ_VECS_VF;
+ irqmap_info = (struct virtchnl_irq_map_info *)msg;
+ num_q_vectors_mapped = irqmap_info->num_vectors;
+
vsi = pf->vsi[vf->lan_vsi_idx];
+ if (!vsi) {
+ v_ret = VIRTCHNL_STATUS_ERR_PARAM;
+ goto error_param;
+ }
+ /* Check to make sure number of VF vectors mapped is not greater than
+ * number of VF vectors originally allocated, and check that
+ * there is actually at least a single VF queue vector mapped
+ */
if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states) ||
- !vsi || vsi->num_q_vectors < num_q_vectors ||
- irqmap_info->num_vectors == 0) {
+ pf->num_vf_msix < num_q_vectors_mapped ||
+ !irqmap_info->num_vectors) {
v_ret = VIRTCHNL_STATUS_ERR_PARAM;
goto error_param;
}
- for (i = 0; i < num_q_vectors; i++) {
- struct ice_q_vector *q_vector = vsi->q_vectors[i];
+ for (i = 0; i < num_q_vectors_mapped; i++) {
+ struct ice_q_vector *q_vector;
map = &irqmap_info->vecmap[i];
@@ -1836,7 +1988,21 @@ static int ice_vc_cfg_irq_map_msg(struct ice_vf *vf, u8 *msg)
vsi_id = map->vsi_id;
/* validate msg params */
if (!(vector_id < pf->hw.func_caps.common_cap
- .num_msix_vectors) || !ice_vc_isvalid_vsi_id(vf, vsi_id)) {
+ .num_msix_vectors) || !ice_vc_isvalid_vsi_id(vf, vsi_id) ||
+ (!vector_id && (map->rxq_map || map->txq_map))) {
+ v_ret = VIRTCHNL_STATUS_ERR_PARAM;
+ goto error_param;
+ }
+
+ /* No need to map VF miscellaneous or rogue vector */
+ if (!vector_id)
+ continue;
+
+ /* Subtract non queue vector from vector_id passed by VF
+ * to get actual number of VSI queue vector array index
+ */
+ q_vector = vsi->q_vectors[vector_id - ICE_NONQ_VECS_VF];
+ if (!q_vector) {
v_ret = VIRTCHNL_STATUS_ERR_PARAM;
goto error_param;
}
@@ -1852,6 +2018,8 @@ static int ice_vc_cfg_irq_map_msg(struct ice_vf *vf, u8 *msg)
q_vector->num_ring_rx++;
q_vector->rx.itr_idx = map->rxitr_idx;
vsi->rx_rings[vsi_q_id]->q_vector = q_vector;
+ ice_cfg_rxq_interrupt(vsi, vsi_q_id, vector_id,
+ q_vector->rx.itr_idx);
}
qmap = map->txq_map;
@@ -1864,11 +2032,11 @@ static int ice_vc_cfg_irq_map_msg(struct ice_vf *vf, u8 *msg)
q_vector->num_ring_tx++;
q_vector->tx.itr_idx = map->txitr_idx;
vsi->tx_rings[vsi_q_id]->q_vector = q_vector;
+ ice_cfg_txq_interrupt(vsi, vsi_q_id, vector_id,
+ q_vector->tx.itr_idx);
}
}
- if (vsi)
- ice_vsi_cfg_msix(vsi);
error_param:
/* send the response to the VF */
return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_CONFIG_IRQ_MAP, v_ret,
@@ -1903,9 +2071,8 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg)
}
vsi = pf->vsi[vf->lan_vsi_idx];
- if (!vsi) {
+ if (!vsi)
goto error_param;
- }
if (qci->num_queue_pairs > ICE_MAX_BASE_QS_PER_VF) {
dev_err(&pf->pdev->dev,
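
The SR-IOV changes above carve the VF MSIX vectors off the top of the PF's hardware range and only trim irq_tracker->end when that carve overlaps the software tracker. A worked sketch with made-up sizes (not from any datasheet) shows the two outcomes:

/* Hypothetical sizes purely for illustration. */
static int example_sriov_carve(void)
{
	const int pf_total_msix = 1024;		/* func_caps num_msix_vectors */
	const int tracker_entries = 256;	/* irq_tracker->num_entries */
	int needed = 8 * 17;			/* 8 VFs, 17 vectors each = 136 */
	int base = pf_total_msix - needed;	/* sriov_base_vector = 888 */

	/* 888 >= 256, so the VF block sits entirely above the tracker:
	 * res->end stays at num_entries and no SW vectors are surrendered.
	 * With 64 VFs (1088 needed) the carve would start below the highest
	 * in-use tracker entry and ice_sriov_set_msix_res() returns -EINVAL.
	 */
	if (needed > pf_total_msix - tracker_entries)
		return tracker_entries - base;	/* SW vectors given back */
	return 0;
}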
diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h
index 3725aea16840..c3ca522c245a 100644
--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h
+++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h
@@ -49,29 +49,34 @@ struct ice_vf {
struct ice_pf *pf;
s16 vf_id; /* VF ID in the PF space */
- u32 driver_caps; /* reported by VF driver */
+ u16 lan_vsi_idx; /* index into PF struct */
int first_vector_idx; /* first vector index of this VF */
struct ice_sw *vf_sw_id; /* switch ID the VF VSIs connect to */
struct virtchnl_version_info vf_ver;
+ u32 driver_caps; /* reported by VF driver */
struct virtchnl_ether_addr dflt_lan_addr;
u16 port_vlan_id;
- u8 pf_set_mac; /* VF MAC address set by VMM admin */
- u8 trusted;
- u16 lan_vsi_idx; /* index into PF struct */
+ u8 pf_set_mac:1; /* VF MAC address set by VMM admin */
+ u8 trusted:1;
+ u8 spoofchk:1;
+ u8 link_forced:1;
+ u8 link_up:1; /* only valid if VF link is forced */
+ /* VSI indices - actual VSI pointers are maintained in the PF structure
+ * When assigned, these will be non-zero, because VSI 0 is always
+ * the main LAN VSI for the PF.
+ */
u16 lan_vsi_num; /* ID as used by firmware */
+ unsigned int tx_rate; /* Tx bandwidth limit in Mbps */
+ DECLARE_BITMAP(vf_states, ICE_VF_STATES_NBITS); /* VF runtime states */
+
u64 num_mdd_events; /* number of MDD events detected */
u64 num_inval_msgs; /* number of continuous invalid msgs */
u64 num_valid_msgs; /* number of valid msgs detected */
unsigned long vf_caps; /* VF's adv. capabilities */
- DECLARE_BITMAP(vf_states, ICE_VF_STATES_NBITS); /* VF runtime states */
- unsigned int tx_rate; /* Tx bandwidth limit in Mbps */
- u8 link_forced;
- u8 link_up; /* only valid if VF link is forced */
- u8 spoofchk;
+ u8 num_req_qs; /* num of queue pairs requested by VF */
u16 num_mac;
u16 num_vlan;
u16 num_vf_qs; /* num of queue configured per VF */
- u8 num_req_qs; /* num of queue pairs requested by VF */
};
#ifdef CONFIG_PCI_IOV
@@ -96,6 +101,8 @@ int ice_set_vf_trust(struct net_device *netdev, int vf_id, bool trusted);
int ice_set_vf_link_state(struct net_device *netdev, int vf_id, int link_state);
int ice_set_vf_spoofchk(struct net_device *netdev, int vf_id, bool ena);
+
+int ice_calc_vf_reg_idx(struct ice_vf *vf, struct ice_q_vector *q_vector);
#else /* CONFIG_PCI_IOV */
#define ice_process_vflr_event(pf) do {} while (0)
#define ice_free_vfs(pf) do {} while (0)
@@ -161,5 +168,11 @@ ice_set_vf_link_state(struct net_device __always_unused *netdev,
return -EOPNOTSUPP;
}
+static inline int
+ice_calc_vf_reg_idx(struct ice_vf __always_unused *vf,
+ struct ice_q_vector __always_unused *q_vector)
+{
+ return 0;
+}
#endif /* CONFIG_PCI_IOV */
#endif /* _ICE_VIRTCHNL_PF_H_ */
diff --git a/drivers/net/ethernet/intel/igb/e1000_82575.c b/drivers/net/ethernet/intel/igb/e1000_82575.c
index bafdcf70a353..3ec2ce0725d5 100644
--- a/drivers/net/ethernet/intel/igb/e1000_82575.c
+++ b/drivers/net/ethernet/intel/igb/e1000_82575.c
@@ -638,7 +638,7 @@ static s32 igb_get_invariants_82575(struct e1000_hw *hw)
dev_spec->sgmii_active = true;
break;
}
- /* fall through for I2C based SGMII */
+ /* fall through - for I2C based SGMII */
case E1000_CTRL_EXT_LINK_MODE_PCIE_SERDES:
/* read media type from SFP EEPROM */
ret_val = igb_set_sfp_media_type_82575(hw);
diff --git a/drivers/net/ethernet/intel/igb/e1000_regs.h b/drivers/net/ethernet/intel/igb/e1000_regs.h
index 0ad737d2f289..9cb49980ec2d 100644
--- a/drivers/net/ethernet/intel/igb/e1000_regs.h
+++ b/drivers/net/ethernet/intel/igb/e1000_regs.h
@@ -409,6 +409,8 @@ do { \
#define E1000_I210_TQAVCC(_n) (0x3004 + ((_n) * 0x40))
#define E1000_I210_TQAVHC(_n) (0x300C + ((_n) * 0x40))
+#define E1000_I210_RR2DCDELAY 0x5BF4
+
#define E1000_INVM_DATA_REG(_n) (0x12120 + 4*(_n))
#define E1000_INVM_SIZE 64 /* Number of INVM Data Registers */
diff --git a/drivers/net/ethernet/intel/igb/igb_ethtool.c b/drivers/net/ethernet/intel/igb/igb_ethtool.c
index c645d9e648e0..3182b059bf55 100644
--- a/drivers/net/ethernet/intel/igb/igb_ethtool.c
+++ b/drivers/net/ethernet/intel/igb/igb_ethtool.c
@@ -448,7 +448,7 @@ static void igb_set_msglevel(struct net_device *netdev, u32 data)
static int igb_get_regs_len(struct net_device *netdev)
{
-#define IGB_REGS_LEN 739
+#define IGB_REGS_LEN 740
return IGB_REGS_LEN * sizeof(u32);
}
@@ -675,41 +675,44 @@ static void igb_get_regs(struct net_device *netdev,
regs_buff[554] = adapter->stats.b2ogprc;
}
- if (hw->mac.type != e1000_82576)
- return;
- for (i = 0; i < 12; i++)
- regs_buff[555 + i] = rd32(E1000_SRRCTL(i + 4));
- for (i = 0; i < 4; i++)
- regs_buff[567 + i] = rd32(E1000_PSRTYPE(i + 4));
- for (i = 0; i < 12; i++)
- regs_buff[571 + i] = rd32(E1000_RDBAL(i + 4));
- for (i = 0; i < 12; i++)
- regs_buff[583 + i] = rd32(E1000_RDBAH(i + 4));
- for (i = 0; i < 12; i++)
- regs_buff[595 + i] = rd32(E1000_RDLEN(i + 4));
- for (i = 0; i < 12; i++)
- regs_buff[607 + i] = rd32(E1000_RDH(i + 4));
- for (i = 0; i < 12; i++)
- regs_buff[619 + i] = rd32(E1000_RDT(i + 4));
- for (i = 0; i < 12; i++)
- regs_buff[631 + i] = rd32(E1000_RXDCTL(i + 4));
-
- for (i = 0; i < 12; i++)
- regs_buff[643 + i] = rd32(E1000_TDBAL(i + 4));
- for (i = 0; i < 12; i++)
- regs_buff[655 + i] = rd32(E1000_TDBAH(i + 4));
- for (i = 0; i < 12; i++)
- regs_buff[667 + i] = rd32(E1000_TDLEN(i + 4));
- for (i = 0; i < 12; i++)
- regs_buff[679 + i] = rd32(E1000_TDH(i + 4));
- for (i = 0; i < 12; i++)
- regs_buff[691 + i] = rd32(E1000_TDT(i + 4));
- for (i = 0; i < 12; i++)
- regs_buff[703 + i] = rd32(E1000_TXDCTL(i + 4));
- for (i = 0; i < 12; i++)
- regs_buff[715 + i] = rd32(E1000_TDWBAL(i + 4));
- for (i = 0; i < 12; i++)
- regs_buff[727 + i] = rd32(E1000_TDWBAH(i + 4));
+ if (hw->mac.type == e1000_82576) {
+ for (i = 0; i < 12; i++)
+ regs_buff[555 + i] = rd32(E1000_SRRCTL(i + 4));
+ for (i = 0; i < 4; i++)
+ regs_buff[567 + i] = rd32(E1000_PSRTYPE(i + 4));
+ for (i = 0; i < 12; i++)
+ regs_buff[571 + i] = rd32(E1000_RDBAL(i + 4));
+ for (i = 0; i < 12; i++)
+ regs_buff[583 + i] = rd32(E1000_RDBAH(i + 4));
+ for (i = 0; i < 12; i++)
+ regs_buff[595 + i] = rd32(E1000_RDLEN(i + 4));
+ for (i = 0; i < 12; i++)
+ regs_buff[607 + i] = rd32(E1000_RDH(i + 4));
+ for (i = 0; i < 12; i++)
+ regs_buff[619 + i] = rd32(E1000_RDT(i + 4));
+ for (i = 0; i < 12; i++)
+ regs_buff[631 + i] = rd32(E1000_RXDCTL(i + 4));
+
+ for (i = 0; i < 12; i++)
+ regs_buff[643 + i] = rd32(E1000_TDBAL(i + 4));
+ for (i = 0; i < 12; i++)
+ regs_buff[655 + i] = rd32(E1000_TDBAH(i + 4));
+ for (i = 0; i < 12; i++)
+ regs_buff[667 + i] = rd32(E1000_TDLEN(i + 4));
+ for (i = 0; i < 12; i++)
+ regs_buff[679 + i] = rd32(E1000_TDH(i + 4));
+ for (i = 0; i < 12; i++)
+ regs_buff[691 + i] = rd32(E1000_TDT(i + 4));
+ for (i = 0; i < 12; i++)
+ regs_buff[703 + i] = rd32(E1000_TXDCTL(i + 4));
+ for (i = 0; i < 12; i++)
+ regs_buff[715 + i] = rd32(E1000_TDWBAL(i + 4));
+ for (i = 0; i < 12; i++)
+ regs_buff[727 + i] = rd32(E1000_TDWBAH(i + 4));
+ }
+
+ if (hw->mac.type == e1000_i210 || hw->mac.type == e1000_i211)
+ regs_buff[739] = rd32(E1000_I210_RR2DCDELAY);
}
static int igb_get_eeprom_len(struct net_device *netdev)
diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
index 39f33afc479c..b4df3e319467 100644
--- a/drivers/net/ethernet/intel/igb/igb_main.c
+++ b/drivers/net/ethernet/intel/igb/igb_main.c
@@ -753,6 +753,7 @@ u32 igb_rd32(struct e1000_hw *hw, u32 reg)
struct net_device *netdev = igb->netdev;
hw->hw_addr = NULL;
netdev_err(netdev, "PCIe link lost\n");
+ WARN(1, "igb: Failed to read reg 0x%x!\n", reg);
}
return value;
@@ -2577,11 +2578,11 @@ static int igb_offload_cbs(struct igb_adapter *adapter,
#define VLAN_PRIO_FULL_MASK (0x07)
static int igb_parse_cls_flower(struct igb_adapter *adapter,
- struct tc_cls_flower_offload *f,
+ struct flow_cls_offload *f,
int traffic_class,
struct igb_nfc_filter *input)
{
- struct flow_rule *rule = tc_cls_flower_offload_flow_rule(f);
+ struct flow_rule *rule = flow_cls_offload_flow_rule(f);
struct flow_dissector *dissector = rule->match.dissector;
struct netlink_ext_ack *extack = f->common.extack;
@@ -2659,7 +2660,7 @@ static int igb_parse_cls_flower(struct igb_adapter *adapter,
}
static int igb_configure_clsflower(struct igb_adapter *adapter,
- struct tc_cls_flower_offload *cls_flower)
+ struct flow_cls_offload *cls_flower)
{
struct netlink_ext_ack *extack = cls_flower->common.extack;
struct igb_nfc_filter *filter, *f;
@@ -2721,7 +2722,7 @@ err_parse:
}
static int igb_delete_clsflower(struct igb_adapter *adapter,
- struct tc_cls_flower_offload *cls_flower)
+ struct flow_cls_offload *cls_flower)
{
struct igb_nfc_filter *filter;
int err;
@@ -2751,14 +2752,14 @@ out:
}
static int igb_setup_tc_cls_flower(struct igb_adapter *adapter,
- struct tc_cls_flower_offload *cls_flower)
+ struct flow_cls_offload *cls_flower)
{
switch (cls_flower->command) {
- case TC_CLSFLOWER_REPLACE:
+ case FLOW_CLS_REPLACE:
return igb_configure_clsflower(adapter, cls_flower);
- case TC_CLSFLOWER_DESTROY:
+ case FLOW_CLS_DESTROY:
return igb_delete_clsflower(adapter, cls_flower);
- case TC_CLSFLOWER_STATS:
+ case FLOW_CLS_STATS:
return -EOPNOTSUPP;
default:
return -EOPNOTSUPP;
@@ -2782,25 +2783,6 @@ static int igb_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
}
}
-static int igb_setup_tc_block(struct igb_adapter *adapter,
- struct tc_block_offload *f)
-{
- if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
- return -EOPNOTSUPP;
-
- switch (f->command) {
- case TC_BLOCK_BIND:
- return tcf_block_cb_register(f->block, igb_setup_tc_block_cb,
- adapter, adapter, f->extack);
- case TC_BLOCK_UNBIND:
- tcf_block_cb_unregister(f->block, igb_setup_tc_block_cb,
- adapter);
- return 0;
- default:
- return -EOPNOTSUPP;
- }
-}
-
static int igb_offload_txtime(struct igb_adapter *adapter,
struct tc_etf_qopt_offload *qopt)
{
@@ -2824,6 +2806,8 @@ static int igb_offload_txtime(struct igb_adapter *adapter,
return 0;
}
+static LIST_HEAD(igb_block_cb_list);
+
static int igb_setup_tc(struct net_device *dev, enum tc_setup_type type,
void *type_data)
{
@@ -2833,7 +2817,11 @@ static int igb_setup_tc(struct net_device *dev, enum tc_setup_type type,
case TC_SETUP_QDISC_CBS:
return igb_offload_cbs(adapter, type_data);
case TC_SETUP_BLOCK:
- return igb_setup_tc_block(adapter, type_data);
+ return flow_block_cb_setup_simple(type_data,
+ &igb_block_cb_list,
+ igb_setup_tc_block_cb,
+ adapter, adapter, true);
+
case TC_SETUP_QDISC_ETF:
return igb_offload_txtime(adapter, type_data);
@@ -5687,6 +5675,7 @@ static void igb_tx_ctxtdesc(struct igb_ring *tx_ring,
*/
if (tx_ring->launchtime_enable) {
ts = ns_to_timespec64(first->skb->tstamp);
+ first->skb->tstamp = 0;
context_desc->seqnum_seed = cpu_to_le32(ts.tv_nsec / 32);
} else {
context_desc->seqnum_seed = 0;
@@ -6695,7 +6684,7 @@ static int __igb_notify_dca(struct device *dev, void *data)
igb_setup_dca(adapter);
break;
}
- /* Fall Through since DCA is disabled. */
+ /* Fall Through - since DCA is disabled. */
case DCA_PROVIDER_REMOVE:
if (adapter->flags & IGB_FLAG_DCA_ENABLED) {
/* without this a class_device is left
diff --git a/drivers/net/ethernet/intel/igc/igc_base.c b/drivers/net/ethernet/intel/igc/igc_base.c
index 51a8b8769c67..59258d791106 100644
--- a/drivers/net/ethernet/intel/igc/igc_base.c
+++ b/drivers/net/ethernet/intel/igc/igc_base.c
@@ -10,50 +10,6 @@
#include "igc.h"
/**
- * igc_set_pcie_completion_timeout - set pci-e completion timeout
- * @hw: pointer to the HW structure
- */
-static s32 igc_set_pcie_completion_timeout(struct igc_hw *hw)
-{
- u32 gcr = rd32(IGC_GCR);
- u16 pcie_devctl2;
- s32 ret_val = 0;
-
- /* only take action if timeout value is defaulted to 0 */
- if (gcr & IGC_GCR_CMPL_TMOUT_MASK)
- goto out;
-
- /* if capabilities version is type 1 we can write the
- * timeout of 10ms to 200ms through the GCR register
- */
- if (!(gcr & IGC_GCR_CAP_VER2)) {
- gcr |= IGC_GCR_CMPL_TMOUT_10ms;
- goto out;
- }
-
- /* for version 2 capabilities we need to write the config space
- * directly in order to set the completion timeout value for
- * 16ms to 55ms
- */
- ret_val = igc_read_pcie_cap_reg(hw, PCIE_DEVICE_CONTROL2,
- &pcie_devctl2);
- if (ret_val)
- goto out;
-
- pcie_devctl2 |= PCIE_DEVICE_CONTROL2_16ms;
-
- ret_val = igc_write_pcie_cap_reg(hw, PCIE_DEVICE_CONTROL2,
- &pcie_devctl2);
-out:
- /* disable completion timeout resend */
- gcr &= ~IGC_GCR_CMPL_TMOUT_RESEND;
-
- wr32(IGC_GCR, gcr);
-
- return ret_val;
-}
-
-/**
* igc_reset_hw_base - Reset hardware
* @hw: pointer to the HW structure
*
@@ -72,11 +28,6 @@ static s32 igc_reset_hw_base(struct igc_hw *hw)
if (ret_val)
hw_dbg("PCI-E Master disable polling has failed.\n");
- /* set the completion timeout for interface */
- ret_val = igc_set_pcie_completion_timeout(hw);
- if (ret_val)
- hw_dbg("PCI-E Set completion timeout has failed.\n");
-
hw_dbg("Masking off all interrupts\n");
wr32(IGC_IMC, 0xffffffff);
diff --git a/drivers/net/ethernet/intel/igc/igc_defines.h b/drivers/net/ethernet/intel/igc/igc_defines.h
index a9a30268de59..fc0ccfe38a20 100644
--- a/drivers/net/ethernet/intel/igc/igc_defines.h
+++ b/drivers/net/ethernet/intel/igc/igc_defines.h
@@ -5,8 +5,8 @@
#define _IGC_DEFINES_H_
/* Number of Transmit and Receive Descriptors must be a multiple of 8 */
-#define REQ_TX_DESCRIPTOR_MULTIPLE 8
-#define REQ_RX_DESCRIPTOR_MULTIPLE 8
+#define REQ_TX_DESCRIPTOR_MULTIPLE 8
+#define REQ_RX_DESCRIPTOR_MULTIPLE 8
#define IGC_CTRL_EXT_DRV_LOAD 0x10000000 /* Drv loaded bit for FW */
@@ -29,12 +29,6 @@
/* Status of Master requests. */
#define IGC_STATUS_GIO_MASTER_ENABLE 0x00080000
-/* PCI Express Control */
-#define IGC_GCR_CMPL_TMOUT_MASK 0x0000F000
-#define IGC_GCR_CMPL_TMOUT_10ms 0x00001000
-#define IGC_GCR_CMPL_TMOUT_RESEND 0x00010000
-#define IGC_GCR_CAP_VER2 0x00040000
-
/* Receive Address
* Number of high/low register pairs in the RAR. The RAR (Receive Address
* Registers) holds the directed and multicast addresses that we monitor.
@@ -72,6 +66,9 @@
#define IGC_CONNSW_AUTOSENSE_EN 0x1
+/* As per the EAS the maximum supported size is 9.5KB (9728 bytes) */
+#define MAX_JUMBO_FRAME_SIZE 0x2600
+
/* PBA constants */
#define IGC_PBA_34K 0x0022
@@ -264,9 +261,6 @@
#define IGC_TCTL_RTLC 0x01000000 /* Re-transmit on late collision */
#define IGC_TCTL_MULR 0x10000000 /* Multiple request support */
-#define IGC_CT_SHIFT 4
-#define IGC_COLLISION_THRESHOLD 15
-
/* Flow Control Constants */
#define FLOW_CONTROL_ADDRESS_LOW 0x00C28001
#define FLOW_CONTROL_ADDRESS_HIGH 0x00000100
@@ -398,7 +392,7 @@
#define IGC_MDIC_ERROR 0x40000000
#define IGC_MDIC_DEST 0x80000000
-#define IGC_N0_QUEUE -1
+#define IGC_N0_QUEUE -1
#define IGC_MAX_MAC_HDR_LEN 127
#define IGC_MAX_NETWORK_HDR_LEN 511
diff --git a/drivers/net/ethernet/intel/igc/igc_hw.h b/drivers/net/ethernet/intel/igc/igc_hw.h
index 7c88b7bd4799..1039a224ac80 100644
--- a/drivers/net/ethernet/intel/igc/igc_hw.h
+++ b/drivers/net/ethernet/intel/igc/igc_hw.h
@@ -114,11 +114,8 @@ struct igc_nvm_operations {
struct igc_phy_operations {
s32 (*acquire)(struct igc_hw *hw);
- s32 (*check_polarity)(struct igc_hw *hw);
s32 (*check_reset_block)(struct igc_hw *hw);
s32 (*force_speed_duplex)(struct igc_hw *hw);
- s32 (*get_cfg_done)(struct igc_hw *hw);
- s32 (*get_cable_length)(struct igc_hw *hw);
s32 (*get_phy_info)(struct igc_hw *hw);
s32 (*read_reg)(struct igc_hw *hw, u32 address, u16 *data);
void (*release)(struct igc_hw *hw);
diff --git a/drivers/net/ethernet/intel/igc/igc_mac.c b/drivers/net/ethernet/intel/igc/igc_mac.c
index f7683d3ae47c..ba4646737288 100644
--- a/drivers/net/ethernet/intel/igc/igc_mac.c
+++ b/drivers/net/ethernet/intel/igc/igc_mac.c
@@ -8,7 +8,6 @@
#include "igc_hw.h"
/* forward declaration */
-static s32 igc_set_default_fc(struct igc_hw *hw);
static s32 igc_set_fc_watermarks(struct igc_hw *hw);
/**
@@ -96,13 +95,10 @@ s32 igc_setup_link(struct igc_hw *hw)
goto out;
/* If requested flow control is set to default, set flow control
- * based on the EEPROM flow control settings.
+ * to the both 'rx' and 'tx' pause frames.
*/
- if (hw->fc.requested_mode == igc_fc_default) {
- ret_val = igc_set_default_fc(hw);
- if (ret_val)
- goto out;
- }
+ if (hw->fc.requested_mode == igc_fc_default)
+ hw->fc.requested_mode = igc_fc_full;
/* We want to save off the original Flow Control configuration just
* in case we get disconnected and then reconnected into a different
@@ -136,19 +132,6 @@ out:
}
/**
- * igc_set_default_fc - Set flow control default values
- * @hw: pointer to the HW structure
- *
- * Read the EEPROM for the default values for flow control and store the
- * values.
- */
-static s32 igc_set_default_fc(struct igc_hw *hw)
-{
- hw->fc.requested_mode = igc_fc_full;
- return 0;
-}
-
-/**
* igc_force_mac_fc - Force the MAC's flow control settings
* @hw: pointer to the HW structure
*
diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
index 34fa0e60a780..93f3b4e6185b 100644
--- a/drivers/net/ethernet/intel/igc/igc_main.c
+++ b/drivers/net/ethernet/intel/igc/igc_main.c
@@ -72,6 +72,27 @@ void igc_reset(struct igc_adapter *adapter)
{
struct pci_dev *pdev = adapter->pdev;
struct igc_hw *hw = &adapter->hw;
+ struct igc_fc_info *fc = &hw->fc;
+ u32 pba, hwm;
+
+ /* Repartition PBA for greater than 9k MTU if required */
+ pba = IGC_PBA_34K;
+
+ /* flow control settings
+ * The high water mark must be low enough to fit one full frame
+ * after transmitting the pause frame. As such we must have enough
+ * space to allow for us to complete our current transmit and then
+ * receive the frame that is in progress from the link partner.
+ * Set it to:
+ * - the full Rx FIFO size minus one full Tx plus one full Rx frame
+ */
+ hwm = (pba << 10) - (adapter->max_frame_size + MAX_JUMBO_FRAME_SIZE);
+
+ fc->high_water = hwm & 0xFFFFFFF0; /* 16-byte granularity */
+ fc->low_water = fc->high_water - 16;
+ fc->pause_time = 0xFFFF;
+ fc->send_xon = 1;
+ fc->current_mode = fc->requested_mode;
hw->mac.ops.reset_hw(hw);
@@ -3934,6 +3955,7 @@ u32 igc_rd32(struct igc_hw *hw, u32 reg)
hw->hw_addr = NULL;
netif_device_detach(netdev);
netdev_err(netdev, "PCIe link lost, device now detached\n");
+ WARN(1, "igc: Failed to read reg 0x%x!\n", reg);
}
return value;
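
A minimal sketch of the flow-control watermark arithmetic that the igc_reset() hunk above introduces, assuming the IGC_PBA_34K and MAX_JUMBO_FRAME_SIZE values shown in the diff; the standalone helper name and the 1522-byte example frame are placeholders for the sketch, not driver code:

/*
 * Illustrative only -- not part of the patch. Reproduces the watermark
 * arithmetic from the igc_reset() hunk above in a standalone program.
 */
#include <stdint.h>
#include <stdio.h>

#define IGC_PBA_34K		0x0022		/* packet buffer size, in KB units */
#define MAX_JUMBO_FRAME_SIZE	0x2600		/* 9728 bytes */

struct fc_watermarks {
	uint32_t high_water;
	uint32_t low_water;
};

/* Compute pause-frame watermarks for a given maximum frame size. */
static struct fc_watermarks igc_fc_watermarks(uint32_t pba, uint32_t max_frame_size)
{
	struct fc_watermarks fc;
	uint32_t hwm;

	/* Rx FIFO size in bytes (pba is in KB), minus one Tx plus one Rx frame */
	hwm = (pba << 10) - (max_frame_size + MAX_JUMBO_FRAME_SIZE);

	fc.high_water = hwm & 0xFFFFFFF0;	/* round down to 16-byte granularity */
	fc.low_water = fc.high_water - 16;
	return fc;
}

int main(void)
{
	/* e.g. a standard 1522-byte maximum frame */
	struct fc_watermarks fc = igc_fc_watermarks(IGC_PBA_34K, 1522);

	printf("high_water=%u low_water=%u\n",
	       (unsigned)fc.high_water, (unsigned)fc.low_water);
	return 0;
}
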
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe.h b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
index 08d85e336bd4..39e73ad60352 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe.h
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
@@ -50,8 +50,6 @@
#define IXGBE_MAX_RXD 4096
#define IXGBE_MIN_RXD 64
-#define IXGBE_ETH_P_LLDP 0x88CC
-
/* flow control */
#define IXGBE_MIN_FCRTL 0x40
#define IXGBE_MAX_FCRTL 0x7FF80
@@ -635,6 +633,7 @@ struct ixgbe_adapter {
/* XDP */
int num_xdp_queues;
struct ixgbe_ring *xdp_ring[MAX_XDP_QUEUES];
+ unsigned long *af_xdp_zc_qps; /* tracks AF_XDP ZC enabled rings */
/* TX */
struct ixgbe_ring *tx_ring[MAX_TX_QUEUES] ____cacheline_aligned_in_smp;
@@ -774,11 +773,6 @@ struct ixgbe_adapter {
#ifdef CONFIG_IXGBE_IPSEC
struct ixgbe_ipsec *ipsec;
#endif /* CONFIG_IXGBE_IPSEC */
-
- /* AF_XDP zero-copy */
- struct xdp_umem **xsk_umems;
- u16 num_xsk_umems_used;
- u16 num_xsk_umems;
};
static inline u8 ixgbe_max_rss_indices(struct ixgbe_adapter *adapter)
@@ -1039,4 +1033,10 @@ static inline int ixgbe_ipsec_vf_add_sa(struct ixgbe_adapter *adapter,
static inline int ixgbe_ipsec_vf_del_sa(struct ixgbe_adapter *adapter,
u32 *mbuf, u32 vf) { return -EACCES; }
#endif /* CONFIG_IXGBE_IPSEC */
+
+static inline bool ixgbe_enabled_xdp_adapter(struct ixgbe_adapter *adapter)
+{
+ return !!adapter->xdp_prog;
+}
+
#endif /* _IXGBE_H_ */
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
index acba067cc15a..7c52ae8ac005 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
@@ -3226,7 +3226,8 @@ static int ixgbe_get_module_info(struct net_device *dev,
page_swap = true;
}
- if (sff8472_rev == IXGBE_SFF_SFF_8472_UNSUP || page_swap) {
+ if (sff8472_rev == IXGBE_SFF_SFF_8472_UNSUP || page_swap ||
+ !(addr_mode & IXGBE_SFF_DDM_IMPLEMENTED)) {
/* We have a SFP, but it does not support SFF-8472 */
modinfo->type = ETH_MODULE_SFF_8079;
modinfo->eeprom_len = ETH_MODULE_SFF_8079_LEN;
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
index ff85ce5791a3..31629fc7e820 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
@@ -842,6 +842,9 @@ void ixgbe_ipsec_vf_clear(struct ixgbe_adapter *adapter, u32 vf)
struct ixgbe_ipsec *ipsec = adapter->ipsec;
int i;
+ if (!ipsec)
+ return;
+
/* search rx sa table */
for (i = 0; i < IXGBE_IPSEC_MAX_SA_COUNT && ipsec->num_rx_sa; i++) {
if (!ipsec->rx_tbl[i].used)
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index 57fd9ee6de66..cbaf712d6529 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -6288,6 +6288,10 @@ static int ixgbe_sw_init(struct ixgbe_adapter *adapter,
if (ixgbe_init_rss_key(adapter))
return -ENOMEM;
+ adapter->af_xdp_zc_qps = bitmap_zalloc(MAX_XDP_QUEUES, GFP_KERNEL);
+ if (!adapter->af_xdp_zc_qps)
+ return -ENOMEM;
+
/* Set MAC specific capability flags and exceptions */
switch (hw->mac.type) {
case ixgbe_mac_82598EB:
@@ -9603,27 +9607,6 @@ static int ixgbe_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
}
}
-static int ixgbe_setup_tc_block(struct net_device *dev,
- struct tc_block_offload *f)
-{
- struct ixgbe_adapter *adapter = netdev_priv(dev);
-
- if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
- return -EOPNOTSUPP;
-
- switch (f->command) {
- case TC_BLOCK_BIND:
- return tcf_block_cb_register(f->block, ixgbe_setup_tc_block_cb,
- adapter, adapter, f->extack);
- case TC_BLOCK_UNBIND:
- tcf_block_cb_unregister(f->block, ixgbe_setup_tc_block_cb,
- adapter);
- return 0;
- default:
- return -EOPNOTSUPP;
- }
-}
-
static int ixgbe_setup_tc_mqprio(struct net_device *dev,
struct tc_mqprio_qopt *mqprio)
{
@@ -9631,12 +9614,19 @@ static int ixgbe_setup_tc_mqprio(struct net_device *dev,
return ixgbe_setup_tc(dev, mqprio->num_tc);
}
+static LIST_HEAD(ixgbe_block_cb_list);
+
static int __ixgbe_setup_tc(struct net_device *dev, enum tc_setup_type type,
void *type_data)
{
+ struct ixgbe_adapter *adapter = netdev_priv(dev);
+
switch (type) {
case TC_SETUP_BLOCK:
- return ixgbe_setup_tc_block(dev, type_data);
+ return flow_block_cb_setup_simple(type_data,
+ &ixgbe_block_cb_list,
+ ixgbe_setup_tc_block_cb,
+ adapter, adapter, true);
case TC_SETUP_QDISC_MQPRIO:
return ixgbe_setup_tc_mqprio(dev, type_data);
default:
@@ -11161,6 +11151,7 @@ err_sw_init:
kfree(adapter->jump_tables[0]);
kfree(adapter->mac_table);
kfree(adapter->rss_key);
+ bitmap_free(adapter->af_xdp_zc_qps);
err_ioremap:
disable_dev = !test_and_set_bit(__IXGBE_DISABLED, &adapter->state);
free_netdev(netdev);
@@ -11249,6 +11240,7 @@ static void ixgbe_remove(struct pci_dev *pdev)
kfree(adapter->mac_table);
kfree(adapter->rss_key);
+ bitmap_free(adapter->af_xdp_zc_qps);
disable_dev = !test_and_set_bit(__IXGBE_DISABLED, &adapter->state);
free_netdev(netdev);
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h
index 214b01085718..6544c4539c0d 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h
@@ -45,6 +45,7 @@
#define IXGBE_SFF_SOFT_RS_SELECT_10G 0x8
#define IXGBE_SFF_SOFT_RS_SELECT_1G 0x0
#define IXGBE_SFF_ADDRESSING_MODE 0x4
+#define IXGBE_SFF_DDM_IMPLEMENTED 0x40
#define IXGBE_SFF_QSFP_DA_ACTIVE_CABLE 0x1
#define IXGBE_SFF_QSFP_DA_PASSIVE_CABLE 0x8
#define IXGBE_SFF_QSFP_CONNECTOR_NOT_SEPARABLE 0x23
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c
index d81a50dc9535..0be13a90ff79 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c
@@ -72,13 +72,13 @@
#define IXGBE_INCPER_SHIFT_82599 24
#define IXGBE_OVERFLOW_PERIOD (HZ * 30)
-#define IXGBE_PTP_TX_TIMEOUT (HZ * 15)
+#define IXGBE_PTP_TX_TIMEOUT (HZ)
-/* half of a one second clock period, for use with PPS signal. We have to use
- * this instead of something pre-defined like IXGBE_PTP_PPS_HALF_SECOND, in
- * order to force at least 64bits of precision for shifting
+/* We use our own definitions instead of NSEC_PER_SEC because we want to mark
+ * the value as a ULL to force precision when bit shifting.
*/
-#define IXGBE_PTP_PPS_HALF_SECOND 500000000ULL
+#define NS_PER_SEC 1000000000ULL
+#define NS_PER_HALF_SEC 500000000ULL
/* In contrast, the X550 controller has two registers, SYSTIMEH and SYSTIMEL
* which contain measurements of seconds and nanoseconds respectively. This
@@ -141,23 +141,26 @@
#define MAX_TIMADJ 0x7FFFFFFF
/**
- * ixgbe_ptp_setup_sdp_x540
+ * ixgbe_ptp_setup_sdp_X540
* @adapter: private adapter structure
*
* this function enables or disables the clock out feature on SDP0 for
- * the X540 device. It will create a 1second periodic output that can
+ * the X540 device. It will create a 1 second periodic output that can
* be used as the PPS (via an interrupt).
*
- * It calculates when the systime will be on an exact second, and then
- * aligns the start of the PPS signal to that value. The shift is
- * necessary because it can change based on the link speed.
+ * It calculates when the system time will be on an exact second, and then
+ * aligns the start of the PPS signal to that value.
+ *
+ * This works by using the cycle counter shift and mult values in reverse, and
+ * assumes that the values we're shifting will not overflow.
*/
-static void ixgbe_ptp_setup_sdp_x540(struct ixgbe_adapter *adapter)
+static void ixgbe_ptp_setup_sdp_X540(struct ixgbe_adapter *adapter)
{
+ struct cyclecounter *cc = &adapter->hw_cc;
struct ixgbe_hw *hw = &adapter->hw;
- int shift = adapter->hw_cc.shift;
u32 esdp, tsauxc, clktiml, clktimh, trgttiml, trgttimh, rem;
- u64 ns = 0, clock_edge = 0;
+ u64 ns = 0, clock_edge = 0, clock_period;
+ unsigned long flags;
/* disable the pin first */
IXGBE_WRITE_REG(hw, IXGBE_TSAUXC, 0x0);
@@ -177,26 +180,33 @@ static void ixgbe_ptp_setup_sdp_x540(struct ixgbe_adapter *adapter)
/* enable the Clock Out feature on SDP0, and allow
* interrupts to occur when the pin changes
*/
- tsauxc = IXGBE_TSAUXC_EN_CLK |
- IXGBE_TSAUXC_SYNCLK |
- IXGBE_TSAUXC_SDP0_INT;
+ tsauxc = (IXGBE_TSAUXC_EN_CLK |
+ IXGBE_TSAUXC_SYNCLK |
+ IXGBE_TSAUXC_SDP0_INT);
- /* clock period (or pulse length) */
- clktiml = (u32)(IXGBE_PTP_PPS_HALF_SECOND << shift);
- clktimh = (u32)((IXGBE_PTP_PPS_HALF_SECOND << shift) >> 32);
-
- /* Account for the cyclecounter wrap-around value by
- * using the converted ns value of the current time to
- * check for when the next aligned second would occur.
+ /* Determine the clock time period to use. This assumes that the
+ * cycle counter shift is small enough to avoid overflow.
*/
- clock_edge |= (u64)IXGBE_READ_REG(hw, IXGBE_SYSTIML);
- clock_edge |= (u64)IXGBE_READ_REG(hw, IXGBE_SYSTIMH) << 32;
- ns = timecounter_cyc2time(&adapter->hw_tc, clock_edge);
+ clock_period = div_u64((NS_PER_HALF_SEC << cc->shift), cc->mult);
+ clktiml = (u32)(clock_period);
+ clktimh = (u32)(clock_period >> 32);
- div_u64_rem(ns, IXGBE_PTP_PPS_HALF_SECOND, &rem);
- clock_edge += ((IXGBE_PTP_PPS_HALF_SECOND - (u64)rem) << shift);
+ /* Read the current clock time, and save the cycle counter value */
+ spin_lock_irqsave(&adapter->tmreg_lock, flags);
+ ns = timecounter_read(&adapter->hw_tc);
+ clock_edge = adapter->hw_tc.cycle_last;
+ spin_unlock_irqrestore(&adapter->tmreg_lock, flags);
+
+ /* Figure out how many seconds to add in order to round up */
+ div_u64_rem(ns, NS_PER_SEC, &rem);
+
+ /* Figure out how many nanoseconds to add to round the clock edge up
+ * to the next full second
+ */
+ rem = (NS_PER_SEC - rem);
- /* specify the initial clock start time */
+ /* Adjust the clock edge to align with the next full second. */
+ clock_edge += div_u64(((u64)rem << cc->shift), cc->mult);
trgttiml = (u32)clock_edge;
trgttimh = (u32)(clock_edge >> 32);
@@ -212,8 +222,100 @@ static void ixgbe_ptp_setup_sdp_x540(struct ixgbe_adapter *adapter)
}
/**
+ * ixgbe_ptp_setup_sdp_X550
+ * @adapter: private adapter structure
+ *
+ * Enable or disable a clock output signal on SDP 0 for X550 hardware.
+ *
+ * Use the target time feature to align the output signal on the next full
+ * second.
+ *
+ * This works by using the cycle counter shift and mult values in reverse, and
+ * assumes that the values we're shifting will not overflow.
+ */
+static void ixgbe_ptp_setup_sdp_X550(struct ixgbe_adapter *adapter)
+{
+ u32 esdp, tsauxc, freqout, trgttiml, trgttimh, rem, tssdp;
+ struct cyclecounter *cc = &adapter->hw_cc;
+ struct ixgbe_hw *hw = &adapter->hw;
+ u64 ns = 0, clock_edge = 0;
+ struct timespec64 ts;
+ unsigned long flags;
+
+ /* disable the pin first */
+ IXGBE_WRITE_REG(hw, IXGBE_TSAUXC, 0x0);
+ IXGBE_WRITE_FLUSH(hw);
+
+ if (!(adapter->flags2 & IXGBE_FLAG2_PTP_PPS_ENABLED))
+ return;
+
+ esdp = IXGBE_READ_REG(hw, IXGBE_ESDP);
+
+ /* enable the SDP0 pin as output, and connected to the
+ * native function for Timesync (ClockOut)
+ */
+ esdp |= IXGBE_ESDP_SDP0_DIR |
+ IXGBE_ESDP_SDP0_NATIVE;
+
+ /* enable the Clock Out feature on SDP0, and use Target Time 0 to
+ * enable generation of interrupts on the clock change.
+ */
+#define IXGBE_TSAUXC_DIS_TS_CLEAR 0x40000000
+ tsauxc = (IXGBE_TSAUXC_EN_CLK | IXGBE_TSAUXC_ST0 |
+ IXGBE_TSAUXC_EN_TT0 | IXGBE_TSAUXC_SDP0_INT |
+ IXGBE_TSAUXC_DIS_TS_CLEAR);
+
+ tssdp = (IXGBE_TSSDP_TS_SDP0_EN |
+ IXGBE_TSSDP_TS_SDP0_CLK0);
+
+ /* Determine the clock time period to use. This assumes that the
+ * cycle counter shift is small enough to avoid overflowing a 32bit
+ * value.
+ */
+ freqout = div_u64(NS_PER_HALF_SEC << cc->shift, cc->mult);
+
+ /* Read the current clock time, and save the cycle counter value */
+ spin_lock_irqsave(&adapter->tmreg_lock, flags);
+ ns = timecounter_read(&adapter->hw_tc);
+ clock_edge = adapter->hw_tc.cycle_last;
+ spin_unlock_irqrestore(&adapter->tmreg_lock, flags);
+
+ /* Figure out how far past the next second we are */
+ div_u64_rem(ns, NS_PER_SEC, &rem);
+
+ /* Figure out how many nanoseconds to add to round the clock edge up
+ * to the next full second
+ */
+ rem = (NS_PER_SEC - rem);
+
+ /* Adjust the clock edge to align with the next full second. */
+ clock_edge += div_u64(((u64)rem << cc->shift), cc->mult);
+
+ /* X550 hardware stores the time in 32bits of 'billions of cycles' and
+ * 32bits of 'cycles'. There's no guarantee that cycles represents
+ * nanoseconds. However, we can use the math from a timespec64 to
+ * convert into the hardware representation.
+ *
+ * See ixgbe_ptp_read_X550() for more details.
+ */
+ ts = ns_to_timespec64(clock_edge);
+ trgttiml = (u32)ts.tv_nsec;
+ trgttimh = (u32)ts.tv_sec;
+
+ IXGBE_WRITE_REG(hw, IXGBE_FREQOUT0, freqout);
+ IXGBE_WRITE_REG(hw, IXGBE_TRGTTIML0, trgttiml);
+ IXGBE_WRITE_REG(hw, IXGBE_TRGTTIMH0, trgttimh);
+
+ IXGBE_WRITE_REG(hw, IXGBE_ESDP, esdp);
+ IXGBE_WRITE_REG(hw, IXGBE_TSSDP, tssdp);
+ IXGBE_WRITE_REG(hw, IXGBE_TSAUXC, tsauxc);
+
+ IXGBE_WRITE_FLUSH(hw);
+}
+
+/**
* ixgbe_ptp_read_X550 - read cycle counter value
- * @hw_cc: cyclecounter structure
+ * @cc: cyclecounter structure
*
* This function reads SYSTIME registers. It is called by the cyclecounter
* structure to convert from internal representation into nanoseconds. We need
@@ -221,10 +323,10 @@ static void ixgbe_ptp_setup_sdp_x540(struct ixgbe_adapter *adapter)
* result of SYSTIME is 32bits of "billions of cycles" and 32 bits of
* "cycles", rather than seconds and nanoseconds.
*/
-static u64 ixgbe_ptp_read_X550(const struct cyclecounter *hw_cc)
+static u64 ixgbe_ptp_read_X550(const struct cyclecounter *cc)
{
struct ixgbe_adapter *adapter =
- container_of(hw_cc, struct ixgbe_adapter, hw_cc);
+ container_of(cc, struct ixgbe_adapter, hw_cc);
struct ixgbe_hw *hw = &adapter->hw;
struct timespec64 ts;
@@ -838,6 +940,15 @@ void ixgbe_ptp_rx_rgtstamp(struct ixgbe_q_vector *q_vector,
ixgbe_ptp_convert_to_hwtstamp(adapter, skb_hwtstamps(skb), regval);
}
+/**
+ * ixgbe_ptp_get_ts_config - get current hardware timestamping configuration
+ * @adapter: pointer to adapter structure
+ * @ifr: ioctl data
+ *
+ * This function returns the current timestamping settings. Rather than
+ * attempt to deconstruct registers to fill in the values, simply keep a copy
+ * of the old settings around, and return a copy when requested.
+ */
int ixgbe_ptp_get_ts_config(struct ixgbe_adapter *adapter, struct ifreq *ifr)
{
struct hwtstamp_config *config = &adapter->tstamp_config;
@@ -1253,7 +1364,7 @@ static long ixgbe_ptp_create_clock(struct ixgbe_adapter *adapter)
adapter->ptp_caps.gettimex64 = ixgbe_ptp_gettimex;
adapter->ptp_caps.settime64 = ixgbe_ptp_settime;
adapter->ptp_caps.enable = ixgbe_ptp_feature_enable;
- adapter->ptp_setup_sdp = ixgbe_ptp_setup_sdp_x540;
+ adapter->ptp_setup_sdp = ixgbe_ptp_setup_sdp_X540;
break;
case ixgbe_mac_82599EB:
snprintf(adapter->ptp_caps.name,
@@ -1280,13 +1391,13 @@ static long ixgbe_ptp_create_clock(struct ixgbe_adapter *adapter)
adapter->ptp_caps.n_alarm = 0;
adapter->ptp_caps.n_ext_ts = 0;
adapter->ptp_caps.n_per_out = 0;
- adapter->ptp_caps.pps = 0;
+ adapter->ptp_caps.pps = 1;
adapter->ptp_caps.adjfreq = ixgbe_ptp_adjfreq_X550;
adapter->ptp_caps.adjtime = ixgbe_ptp_adjtime;
adapter->ptp_caps.gettimex64 = ixgbe_ptp_gettimex;
adapter->ptp_caps.settime64 = ixgbe_ptp_settime;
adapter->ptp_caps.enable = ixgbe_ptp_feature_enable;
- adapter->ptp_setup_sdp = NULL;
+ adapter->ptp_setup_sdp = ixgbe_ptp_setup_sdp_X550;
break;
default:
adapter->ptp_clock = NULL;
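
A minimal sketch of the clock-edge rounding used by the new ixgbe_ptp_setup_sdp_X540()/_X550() paths above, which run the cyclecounter shift/mult conversion in reverse to express half a second and the distance to the next full second in device cycles. Plain 64-bit division stands in for div_u64(), and the shift, mult, and sample time values below are invented for the example, not taken from real hardware:

/*
 * Illustrative only -- not part of the patch. User-space model of the
 * "shift/mult in reverse" rounding described in the ixgbe_ptp hunks above.
 */
#include <stdint.h>
#include <stdio.h>

#define NS_PER_SEC	1000000000ULL
#define NS_PER_HALF_SEC	 500000000ULL

int main(void)
{
	/* Assumed cyclecounter parameters: with shift 22, 1 cycle ~= 2 ns. */
	uint32_t shift = 22;
	uint32_t mult  = 0x800000;

	/* Pretend current time and the cycle count it was read at. */
	uint64_t ns = 1234567890123ULL;		/* timecounter_read() result */
	uint64_t clock_edge = 987654321ULL;	/* hw_tc.cycle_last */

	/* Half-second pulse length, converted from ns back into cycles. */
	uint64_t clock_period = (NS_PER_HALF_SEC << shift) / mult;

	/* Nanoseconds needed to round the current time up to a full second. */
	uint64_t rem = NS_PER_SEC - (ns % NS_PER_SEC);

	/* Advance the cycle-counter edge by the same amount, in cycles. */
	clock_edge += (rem << shift) / mult;

	printf("clock_period=%llu cycles, aligned edge=%llu\n",
	       (unsigned long long)clock_period,
	       (unsigned long long)clock_edge);
	return 0;
}
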
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
index 345701af7749..537dfff585e0 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
@@ -1645,7 +1645,7 @@ int ixgbe_ndo_set_vf_spoofchk(struct net_device *netdev, int vf, bool setting)
IXGBE_WRITE_REG(hw, IXGBE_ETQF(IXGBE_ETQF_FILTER_LLDP),
(IXGBE_ETQF_FILTER_EN |
IXGBE_ETQF_TX_ANTISPOOF |
- IXGBE_ETH_P_LLDP));
+ ETH_P_LLDP));
IXGBE_WRITE_REG(hw, IXGBE_ETQF(IXGBE_ETQF_FILTER_FC),
(IXGBE_ETQF_FILTER_EN |
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h
index 84f2dba39e36..2be1c4c72435 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h
@@ -1067,6 +1067,7 @@ struct ixgbe_nvm_version {
#define IXGBE_AUXSTMPL1 0x08C44 /* Auxiliary Time Stamp 1 register Low - RO */
#define IXGBE_AUXSTMPH1 0x08C48 /* Auxiliary Time Stamp 1 register High - RO */
#define IXGBE_TSIM 0x08C68 /* TimeSync Interrupt Mask Register - RW */
+#define IXGBE_TSSDP 0x0003C /* TimeSync SDP Configuration Register - RW */
/* Diagnostic Registers */
#define IXGBE_RDSTATCTL 0x02C20
@@ -2240,11 +2241,18 @@ enum {
#define IXGBE_RXDCTL_RLPML_EN 0x00008000
#define IXGBE_RXDCTL_VME 0x40000000 /* VLAN mode enable */
-#define IXGBE_TSAUXC_EN_CLK 0x00000004
-#define IXGBE_TSAUXC_SYNCLK 0x00000008
-#define IXGBE_TSAUXC_SDP0_INT 0x00000040
+#define IXGBE_TSAUXC_EN_CLK 0x00000004
+#define IXGBE_TSAUXC_SYNCLK 0x00000008
+#define IXGBE_TSAUXC_SDP0_INT 0x00000040
+#define IXGBE_TSAUXC_EN_TT0 0x00000001
+#define IXGBE_TSAUXC_EN_TT1 0x00000002
+#define IXGBE_TSAUXC_ST0 0x00000010
#define IXGBE_TSAUXC_DISABLE_SYSTIME 0x80000000
+#define IXGBE_TSSDP_TS_SDP0_SEL_MASK 0x000000C0
+#define IXGBE_TSSDP_TS_SDP0_CLK0 0x00000080
+#define IXGBE_TSSDP_TS_SDP0_EN 0x00000100
+
#define IXGBE_TSYNCTXCTL_VALID 0x00000001 /* Tx timestamp valid */
#define IXGBE_TSYNCTXCTL_ENABLED 0x00000010 /* Tx timestamping enabled */
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
index bfe95ce0bd7f..6b609553329f 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
@@ -14,57 +14,10 @@ struct xdp_umem *ixgbe_xsk_umem(struct ixgbe_adapter *adapter,
bool xdp_on = READ_ONCE(adapter->xdp_prog);
int qid = ring->ring_idx;
- if (!adapter->xsk_umems || !adapter->xsk_umems[qid] ||
- qid >= adapter->num_xsk_umems || !xdp_on)
+ if (!xdp_on || !test_bit(qid, adapter->af_xdp_zc_qps))
return NULL;
- return adapter->xsk_umems[qid];
-}
-
-static int ixgbe_alloc_xsk_umems(struct ixgbe_adapter *adapter)
-{
- if (adapter->xsk_umems)
- return 0;
-
- adapter->num_xsk_umems_used = 0;
- adapter->num_xsk_umems = adapter->num_rx_queues;
- adapter->xsk_umems = kcalloc(adapter->num_xsk_umems,
- sizeof(*adapter->xsk_umems),
- GFP_KERNEL);
- if (!adapter->xsk_umems) {
- adapter->num_xsk_umems = 0;
- return -ENOMEM;
- }
-
- return 0;
-}
-
-static int ixgbe_add_xsk_umem(struct ixgbe_adapter *adapter,
- struct xdp_umem *umem,
- u16 qid)
-{
- int err;
-
- err = ixgbe_alloc_xsk_umems(adapter);
- if (err)
- return err;
-
- adapter->xsk_umems[qid] = umem;
- adapter->num_xsk_umems_used++;
-
- return 0;
-}
-
-static void ixgbe_remove_xsk_umem(struct ixgbe_adapter *adapter, u16 qid)
-{
- adapter->xsk_umems[qid] = NULL;
- adapter->num_xsk_umems_used--;
-
- if (adapter->num_xsk_umems == 0) {
- kfree(adapter->xsk_umems);
- adapter->xsk_umems = NULL;
- adapter->num_xsk_umems = 0;
- }
+ return xdp_get_umem_from_qid(adapter->netdev, qid);
}
static int ixgbe_xsk_umem_dma_map(struct ixgbe_adapter *adapter,
@@ -113,6 +66,7 @@ static int ixgbe_xsk_umem_enable(struct ixgbe_adapter *adapter,
struct xdp_umem *umem,
u16 qid)
{
+ struct net_device *netdev = adapter->netdev;
struct xdp_umem_fq_reuse *reuseq;
bool if_running;
int err;
@@ -120,12 +74,9 @@ static int ixgbe_xsk_umem_enable(struct ixgbe_adapter *adapter,
if (qid >= adapter->num_rx_queues)
return -EINVAL;
- if (adapter->xsk_umems) {
- if (qid >= adapter->num_xsk_umems)
- return -EINVAL;
- if (adapter->xsk_umems[qid])
- return -EBUSY;
- }
+ if (qid >= netdev->real_num_rx_queues ||
+ qid >= netdev->real_num_tx_queues)
+ return -EINVAL;
reuseq = xsk_reuseq_prepare(adapter->rx_ring[0]->count);
if (!reuseq)
@@ -138,14 +89,12 @@ static int ixgbe_xsk_umem_enable(struct ixgbe_adapter *adapter,
return err;
if_running = netif_running(adapter->netdev) &&
- READ_ONCE(adapter->xdp_prog);
+ ixgbe_enabled_xdp_adapter(adapter);
if (if_running)
ixgbe_txrx_ring_disable(adapter, qid);
- err = ixgbe_add_xsk_umem(adapter, umem, qid);
- if (err)
- return err;
+ set_bit(qid, adapter->af_xdp_zc_qps);
if (if_running) {
ixgbe_txrx_ring_enable(adapter, qid);
@@ -161,20 +110,21 @@ static int ixgbe_xsk_umem_enable(struct ixgbe_adapter *adapter,
static int ixgbe_xsk_umem_disable(struct ixgbe_adapter *adapter, u16 qid)
{
+ struct xdp_umem *umem;
bool if_running;
- if (!adapter->xsk_umems || qid >= adapter->num_xsk_umems ||
- !adapter->xsk_umems[qid])
+ umem = xdp_get_umem_from_qid(adapter->netdev, qid);
+ if (!umem)
return -EINVAL;
if_running = netif_running(adapter->netdev) &&
- READ_ONCE(adapter->xdp_prog);
+ ixgbe_enabled_xdp_adapter(adapter);
if (if_running)
ixgbe_txrx_ring_disable(adapter, qid);
- ixgbe_xsk_umem_dma_unmap(adapter, adapter->xsk_umems[qid]);
- ixgbe_remove_xsk_umem(adapter, qid);
+ clear_bit(qid, adapter->af_xdp_zc_qps);
+ ixgbe_xsk_umem_dma_unmap(adapter, umem);
if (if_running)
ixgbe_txrx_ring_enable(adapter, qid);
@@ -621,8 +571,9 @@ static bool ixgbe_xmit_zc(struct ixgbe_ring *xdp_ring, unsigned int budget)
union ixgbe_adv_tx_desc *tx_desc = NULL;
struct ixgbe_tx_buffer *tx_bi;
bool work_done = true;
- u32 len, cmd_type;
+ struct xdp_desc desc;
dma_addr_t dma;
+ u32 cmd_type;
while (budget-- > 0) {
if (unlikely(!ixgbe_desc_unused(xdp_ring)) ||
@@ -631,15 +582,18 @@ static bool ixgbe_xmit_zc(struct ixgbe_ring *xdp_ring, unsigned int budget)
break;
}
- if (!xsk_umem_consume_tx(xdp_ring->xsk_umem, &dma, &len))
+ if (!xsk_umem_consume_tx(xdp_ring->xsk_umem, &desc))
break;
- dma_sync_single_for_device(xdp_ring->dev, dma, len,
+ dma = xdp_umem_get_dma(xdp_ring->xsk_umem, desc.addr);
+
+ dma_sync_single_for_device(xdp_ring->dev, dma, desc.len,
DMA_BIDIRECTIONAL);
tx_bi = &xdp_ring->tx_buffer_info[xdp_ring->next_to_use];
- tx_bi->bytecount = len;
+ tx_bi->bytecount = desc.len;
tx_bi->xdpf = NULL;
+ tx_bi->gso_segs = 1;
tx_desc = IXGBE_TX_DESC(xdp_ring, xdp_ring->next_to_use);
tx_desc->read.buffer_addr = cpu_to_le64(dma);
@@ -648,10 +602,10 @@ static bool ixgbe_xmit_zc(struct ixgbe_ring *xdp_ring, unsigned int budget)
cmd_type = IXGBE_ADVTXD_DTYP_DATA |
IXGBE_ADVTXD_DCMD_DEXT |
IXGBE_ADVTXD_DCMD_IFCS;
- cmd_type |= len | IXGBE_TXD_CMD;
+ cmd_type |= desc.len | IXGBE_TXD_CMD;
tx_desc->read.cmd_type_len = cpu_to_le32(cmd_type);
tx_desc->read.olinfo_status =
- cpu_to_le32(len << IXGBE_ADVTXD_PAYLEN_SHIFT);
+ cpu_to_le32(desc.len << IXGBE_ADVTXD_PAYLEN_SHIFT);
xdp_ring->next_to_use++;
if (xdp_ring->next_to_use == xdp_ring->count)
@@ -704,7 +658,6 @@ bool ixgbe_clean_xdp_tx_irq(struct ixgbe_q_vector *q_vector,
xsk_frames++;
tx_bi->xdpf = NULL;
- total_bytes += tx_bi->bytecount;
tx_bi++;
tx_desc++;
@@ -753,7 +706,7 @@ int ixgbe_xsk_async_xmit(struct net_device *dev, u32 qid)
if (qid >= adapter->num_xdp_queues)
return -ENXIO;
- if (!adapter->xsk_umems || !adapter->xsk_umems[qid])
+ if (!adapter->xdp_ring[qid]->xsk_umem)
return -ENXIO;
ring = adapter->xdp_ring[qid];
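
A minimal model of the AF_XDP zero-copy bookkeeping change above, where a per-adapter bitmap of queue ids replaces the driver-private xsk_umems array and the umem itself is fetched from the core via xdp_get_umem_from_qid() when needed. The helper names and the 64-queue limit here are assumptions of the sketch, not ixgbe code:

/*
 * Illustrative only -- not part of the patch. Stand-in for the kernel's
 * set_bit()/clear_bit()/test_bit() on adapter->af_xdp_zc_qps.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_QUEUES 64

struct adapter_model {
	uint64_t af_xdp_zc_qps;	/* one bit per queue id */
};

static void zc_enable(struct adapter_model *a, unsigned int qid)
{
	a->af_xdp_zc_qps |= 1ULL << qid;		/* set_bit(qid, ...) */
}

static void zc_disable(struct adapter_model *a, unsigned int qid)
{
	a->af_xdp_zc_qps &= ~(1ULL << qid);		/* clear_bit(qid, ...) */
}

static bool zc_enabled(const struct adapter_model *a, unsigned int qid)
{
	return a->af_xdp_zc_qps & (1ULL << qid);	/* test_bit(qid, ...) */
}

int main(void)
{
	struct adapter_model a = { 0 };

	zc_enable(&a, 3);
	printf("qid 3 zc: %d, qid 5 zc: %d\n", zc_enabled(&a, 3), zc_enabled(&a, 5));
	zc_disable(&a, 3);
	printf("qid 3 zc after disable: %d\n", zc_enabled(&a, 3));
	return 0;
}
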
diff --git a/drivers/net/ethernet/intel/ixgbevf/ethtool.c b/drivers/net/ethernet/intel/ixgbevf/ethtool.c
index 5399787e07af..54459b69c948 100644
--- a/drivers/net/ethernet/intel/ixgbevf/ethtool.c
+++ b/drivers/net/ethernet/intel/ixgbevf/ethtool.c
@@ -85,22 +85,16 @@ static int ixgbevf_get_link_ksettings(struct net_device *netdev,
struct ethtool_link_ksettings *cmd)
{
struct ixgbevf_adapter *adapter = netdev_priv(netdev);
- struct ixgbe_hw *hw = &adapter->hw;
- u32 link_speed = 0;
- bool link_up;
ethtool_link_ksettings_zero_link_mode(cmd, supported);
ethtool_link_ksettings_add_link_mode(cmd, supported, 10000baseT_Full);
cmd->base.autoneg = AUTONEG_DISABLE;
cmd->base.port = -1;
- hw->mac.get_link_status = 1;
- hw->mac.ops.check_link(hw, &link_speed, &link_up, false);
-
- if (link_up) {
+ if (adapter->link_up) {
__u32 speed = SPEED_10000;
- switch (link_speed) {
+ switch (adapter->link_speed) {
case IXGBE_LINK_SPEED_10GB_FULL:
speed = SPEED_10000;
break;
diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
index d189ed247665..d2b41f9f87f8 100644
--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
@@ -1423,6 +1423,9 @@ static void ixgbevf_update_itr(struct ixgbevf_q_vector *q_vector,
*/
/* what was last interrupt timeslice? */
timepassed_us = q_vector->itr >> 2;
+ if (timepassed_us == 0)
+ return;
+
bytes_perint = bytes / timepassed_us; /* bytes/usec */
switch (itr_setting) {
diff --git a/drivers/net/ethernet/intel/ixgbevf/vf.c b/drivers/net/ethernet/intel/ixgbevf/vf.c
index cd3b81300cc7..d5ce49636548 100644
--- a/drivers/net/ethernet/intel/ixgbevf/vf.c
+++ b/drivers/net/ethernet/intel/ixgbevf/vf.c
@@ -508,9 +508,8 @@ static s32 ixgbevf_update_mc_addr_list_vf(struct ixgbe_hw *hw,
vector_list[i++] = ixgbevf_mta_vector(hw, ha->addr);
}
- ixgbevf_write_msg_read_ack(hw, msgbuf, msgbuf, IXGBE_VFMAILBOX_SIZE);
-
- return 0;
+ return ixgbevf_write_msg_read_ack(hw, msgbuf, msgbuf,
+ IXGBE_VFMAILBOX_SIZE);
}
/**
diff --git a/drivers/net/ethernet/marvell/mvmdio.c b/drivers/net/ethernet/marvell/mvmdio.c
index c5dac6bd2be4..f660cc2b8258 100644
--- a/drivers/net/ethernet/marvell/mvmdio.c
+++ b/drivers/net/ethernet/marvell/mvmdio.c
@@ -64,7 +64,7 @@
struct orion_mdio_dev {
void __iomem *regs;
- struct clk *clk[3];
+ struct clk *clk[4];
/*
* If we have access to the error interrupt pin (which is
* somewhat misnamed as it not only reflects internal errors
@@ -321,11 +321,19 @@ static int orion_mdio_probe(struct platform_device *pdev)
for (i = 0; i < ARRAY_SIZE(dev->clk); i++) {
dev->clk[i] = of_clk_get(pdev->dev.of_node, i);
+ if (PTR_ERR(dev->clk[i]) == -EPROBE_DEFER) {
+ ret = -EPROBE_DEFER;
+ goto out_clk;
+ }
if (IS_ERR(dev->clk[i]))
break;
clk_prepare_enable(dev->clk[i]);
}
+ if (!IS_ERR(of_clk_get(pdev->dev.of_node, ARRAY_SIZE(dev->clk))))
+ dev_warn(&pdev->dev, "unsupported number of clocks, limiting to the first "
+ __stringify(ARRAY_SIZE(dev->clk)) "\n");
+
dev->err_interrupt = platform_get_irq(pdev, 0);
if (dev->err_interrupt > 0 &&
resource_size(r) < MVMDIO_ERR_INT_MASK + 4) {
@@ -362,6 +370,7 @@ out_mdio:
if (dev->err_interrupt > 0)
writel(0, dev->regs + MVMDIO_ERR_INT_MASK);
+out_clk:
for (i = 0; i < ARRAY_SIZE(dev->clk); i++) {
if (IS_ERR(dev->clk[i]))
break;
diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 269bd73be1a0..895bfed26a8a 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -437,6 +437,7 @@ struct mvneta_port {
struct device_node *dn;
unsigned int tx_csum_limit;
struct phylink *phylink;
+ struct phylink_config phylink_config;
struct phy *comphy;
struct mvneta_bm *bm_priv;
@@ -1118,7 +1119,7 @@ static void mvneta_bm_update_mtu(struct mvneta_port *pp, int mtu)
SKB_DATA_ALIGN(MVNETA_RX_BUF_SIZE(bm_pool->pkt_size));
/* Fill entire long pool */
- num = hwbm_pool_add(hwbm_pool, hwbm_pool->size, GFP_ATOMIC);
+ num = hwbm_pool_add(hwbm_pool, hwbm_pool->size);
if (num != hwbm_pool->size) {
WARN(1, "pool %d: %d of %d allocated\n",
bm_pool->id, num, hwbm_pool->size);
@@ -3356,9 +3357,11 @@ static int mvneta_set_mac_addr(struct net_device *dev, void *addr)
return 0;
}
-static void mvneta_validate(struct net_device *ndev, unsigned long *supported,
+static void mvneta_validate(struct phylink_config *config,
+ unsigned long *supported,
struct phylink_link_state *state)
{
+ struct net_device *ndev = to_net_dev(config->dev);
struct mvneta_port *pp = netdev_priv(ndev);
__ETHTOOL_DECLARE_LINK_MODE_MASK(mask) = { 0, };
@@ -3408,9 +3411,10 @@ static void mvneta_validate(struct net_device *ndev, unsigned long *supported,
phylink_helper_basex_speed(state);
}
-static int mvneta_mac_link_state(struct net_device *ndev,
+static int mvneta_mac_link_state(struct phylink_config *config,
struct phylink_link_state *state)
{
+ struct net_device *ndev = to_net_dev(config->dev);
struct mvneta_port *pp = netdev_priv(ndev);
u32 gmac_stat;
@@ -3438,8 +3442,9 @@ static int mvneta_mac_link_state(struct net_device *ndev,
return 1;
}
-static void mvneta_mac_an_restart(struct net_device *ndev)
+static void mvneta_mac_an_restart(struct phylink_config *config)
{
+ struct net_device *ndev = to_net_dev(config->dev);
struct mvneta_port *pp = netdev_priv(ndev);
u32 gmac_an = mvreg_read(pp, MVNETA_GMAC_AUTONEG_CONFIG);
@@ -3449,9 +3454,10 @@ static void mvneta_mac_an_restart(struct net_device *ndev)
gmac_an & ~MVNETA_GMAC_INBAND_RESTART_AN);
}
-static void mvneta_mac_config(struct net_device *ndev, unsigned int mode,
- const struct phylink_link_state *state)
+static void mvneta_mac_config(struct phylink_config *config, unsigned int mode,
+ const struct phylink_link_state *state)
{
+ struct net_device *ndev = to_net_dev(config->dev);
struct mvneta_port *pp = netdev_priv(ndev);
u32 new_ctrl0, gmac_ctrl0 = mvreg_read(pp, MVNETA_GMAC_CTRL_0);
u32 new_ctrl2, gmac_ctrl2 = mvreg_read(pp, MVNETA_GMAC_CTRL_2);
@@ -3581,9 +3587,10 @@ static void mvneta_set_eee(struct mvneta_port *pp, bool enable)
mvreg_write(pp, MVNETA_LPI_CTRL_1, lpi_ctl1);
}
-static void mvneta_mac_link_down(struct net_device *ndev, unsigned int mode,
- phy_interface_t interface)
+static void mvneta_mac_link_down(struct phylink_config *config,
+ unsigned int mode, phy_interface_t interface)
{
+ struct net_device *ndev = to_net_dev(config->dev);
struct mvneta_port *pp = netdev_priv(ndev);
u32 val;
@@ -3600,10 +3607,11 @@ static void mvneta_mac_link_down(struct net_device *ndev, unsigned int mode,
mvneta_set_eee(pp, false);
}
-static void mvneta_mac_link_up(struct net_device *ndev, unsigned int mode,
+static void mvneta_mac_link_up(struct phylink_config *config, unsigned int mode,
phy_interface_t interface,
struct phy_device *phy)
{
+ struct net_device *ndev = to_net_dev(config->dev);
struct mvneta_port *pp = netdev_priv(ndev);
u32 val;
@@ -4500,8 +4508,14 @@ static int mvneta_probe(struct platform_device *pdev)
comphy = NULL;
}
- phylink = phylink_create(dev, pdev->dev.fwnode, phy_mode,
- &mvneta_phylink_ops);
+ pp = netdev_priv(dev);
+ spin_lock_init(&pp->lock);
+
+ pp->phylink_config.dev = &dev->dev;
+ pp->phylink_config.type = PHYLINK_NETDEV;
+
+ phylink = phylink_create(&pp->phylink_config, pdev->dev.fwnode,
+ phy_mode, &mvneta_phylink_ops);
if (IS_ERR(phylink)) {
err = PTR_ERR(phylink);
goto err_free_irq;
@@ -4513,8 +4527,6 @@ static int mvneta_probe(struct platform_device *pdev)
dev->ethtool_ops = &mvneta_eth_tool_ops;
- pp = netdev_priv(dev);
- spin_lock_init(&pp->lock);
pp->phylink = phylink;
pp->comphy = comphy;
pp->phy_interface = phy_mode;
diff --git a/drivers/net/ethernet/marvell/mvneta_bm.c b/drivers/net/ethernet/marvell/mvneta_bm.c
index de468e1bdba9..82ee2bcca6fd 100644
--- a/drivers/net/ethernet/marvell/mvneta_bm.c
+++ b/drivers/net/ethernet/marvell/mvneta_bm.c
@@ -190,7 +190,7 @@ struct mvneta_bm_pool *mvneta_bm_pool_use(struct mvneta_bm *priv, u8 pool_id,
SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
hwbm_pool->construct = mvneta_bm_construct;
hwbm_pool->priv = new_pool;
- spin_lock_init(&hwbm_pool->lock);
+ mutex_init(&hwbm_pool->buf_lock);
/* Create new pool */
err = mvneta_bm_pool_create(priv, new_pool);
@@ -201,7 +201,7 @@ struct mvneta_bm_pool *mvneta_bm_pool_use(struct mvneta_bm *priv, u8 pool_id,
}
/* Allocate buffers for this pool */
- num = hwbm_pool_add(hwbm_pool, hwbm_pool->size, GFP_ATOMIC);
+ num = hwbm_pool_add(hwbm_pool, hwbm_pool->size);
if (num != hwbm_pool->size) {
WARN(1, "pool %d: %d of %d allocated\n",
new_pool->id, num, hwbm_pool->size);
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
index 6171270a016c..4d9564ba68f6 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
@@ -148,6 +148,8 @@
#define MVPP22_CLS_C2_ATTR2 0x1b6c
#define MVPP22_CLS_C2_ATTR2_RSS_EN BIT(30)
#define MVPP22_CLS_C2_ATTR3 0x1b70
+#define MVPP22_CLS_C2_TCAM_CTRL 0x1b90
+#define MVPP22_CLS_C2_TCAM_BYPASS_FIFO BIT(0)
/* Descriptor Manager Top Registers */
#define MVPP2_RXQ_NUM_REG 0x2040
@@ -327,8 +329,26 @@
#define MVPP22_BM_ADDR_HIGH_VIRT_RLS_MASK 0xff00
#define MVPP22_BM_ADDR_HIGH_VIRT_RLS_SHIFT 8
+/* Packet Processor per-port counters */
+#define MVPP2_OVERRUN_ETH_DROP 0x7000
+#define MVPP2_CLS_ETH_DROP 0x7020
+
/* Hit counters registers */
#define MVPP2_CTRS_IDX 0x7040
+#define MVPP22_CTRS_TX_CTR(port, txq) ((txq) | ((port) << 3) | BIT(7))
+#define MVPP2_TX_DESC_ENQ_CTR 0x7100
+#define MVPP2_TX_DESC_ENQ_TO_DDR_CTR 0x7104
+#define MVPP2_TX_BUFF_ENQ_TO_DDR_CTR 0x7108
+#define MVPP2_TX_DESC_ENQ_HW_FWD_CTR 0x710c
+#define MVPP2_RX_DESC_ENQ_CTR 0x7120
+#define MVPP2_TX_PKTS_DEQ_CTR 0x7130
+#define MVPP2_TX_PKTS_FULL_QUEUE_DROP_CTR 0x7200
+#define MVPP2_TX_PKTS_EARLY_DROP_CTR 0x7204
+#define MVPP2_TX_PKTS_BM_DROP_CTR 0x7208
+#define MVPP2_TX_PKTS_BM_MC_DROP_CTR 0x720c
+#define MVPP2_RX_PKTS_FULL_QUEUE_DROP_CTR 0x7220
+#define MVPP2_RX_PKTS_EARLY_DROP_CTR 0x7224
+#define MVPP2_RX_PKTS_BM_DROP_CTR 0x7228
#define MVPP2_CLS_DEC_TBL_HIT_CTR 0x7700
#define MVPP2_CLS_FLOW_TBL_HIT_CTR 0x7704
@@ -624,6 +644,7 @@
#define MVPP2_N_RFS_RULES (MVPP2_N_RFS_ENTRIES_PER_FLOW * 7)
/* RSS constants */
+#define MVPP22_N_RSS_TABLES 8
#define MVPP22_RSS_TABLE_ENTRIES 32
/* IPv6 max L3 address size */
@@ -725,6 +746,10 @@ enum mvpp2_prs_l3_cast {
/* Definitions */
struct mvpp2_dbgfs_entries;
+struct mvpp2_rss_table {
+ u32 indir[MVPP22_RSS_TABLE_ENTRIES];
+};
+
/* Shared Packet Processor resources */
struct mvpp2 {
/* Shared registers' base addresses */
@@ -788,6 +813,9 @@ struct mvpp2 {
/* Debugfs entries private data */
struct mvpp2_dbgfs_entries *dbgfs_entries;
+
+ /* RSS Indirection tables */
+ struct mvpp2_rss_table *rss_tables[MVPP22_N_RSS_TABLES];
};
struct mvpp2_pcpu_stats {
@@ -905,6 +933,7 @@ struct mvpp2_port {
phy_interface_t phy_interface;
struct phylink *phylink;
+ struct phylink_config phylink_config;
struct phy *comphy;
struct mvpp2_bm_pool *pool_long;
@@ -919,12 +948,14 @@ struct mvpp2_port {
u32 tx_time_coal;
- /* RSS indirection table */
- u32 indir[MVPP22_RSS_TABLE_ENTRIES];
-
/* List of steering rules active on that port */
- struct mvpp2_ethtool_fs *rfs_rules[MVPP2_N_RFS_RULES];
+ struct mvpp2_ethtool_fs *rfs_rules[MVPP2_N_RFS_ENTRIES_PER_FLOW];
int n_rfs_rules;
+
+ /* Each port has its own view of the rss contexts, so that it can number
+ * them from 0
+ */
+ int rss_ctx[MVPP22_N_RSS_TABLES];
};
/* The mvpp2_tx_desc and mvpp2_rx_desc structures describe the
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
index a57d17ab91f0..35478cba2aa5 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
@@ -44,17 +44,17 @@ static const struct mvpp2_cls_flow cls_flows[MVPP2_N_PRS_FLOWS] = {
/* TCP over IPv4 flows, Not fragmented, with vlan tag */
MVPP2_DEF_FLOW(MVPP22_FLOW_TCP4, MVPP2_FL_IP4_TCP_NF_TAG,
- MVPP22_CLS_HEK_IP4_5T | MVPP22_CLS_HEK_OPT_VLAN,
+ MVPP22_CLS_HEK_IP4_5T | MVPP22_CLS_HEK_TAGGED,
MVPP2_PRS_RI_L3_IP4 | MVPP2_PRS_RI_L4_TCP,
MVPP2_PRS_IP_MASK),
MVPP2_DEF_FLOW(MVPP22_FLOW_TCP4, MVPP2_FL_IP4_TCP_NF_TAG,
- MVPP22_CLS_HEK_IP4_5T | MVPP22_CLS_HEK_OPT_VLAN,
+ MVPP22_CLS_HEK_IP4_5T | MVPP22_CLS_HEK_TAGGED,
MVPP2_PRS_RI_L3_IP4_OPT | MVPP2_PRS_RI_L4_TCP,
MVPP2_PRS_IP_MASK),
MVPP2_DEF_FLOW(MVPP22_FLOW_TCP4, MVPP2_FL_IP4_TCP_NF_TAG,
- MVPP22_CLS_HEK_IP4_5T | MVPP22_CLS_HEK_OPT_VLAN,
+ MVPP22_CLS_HEK_IP4_5T | MVPP22_CLS_HEK_TAGGED,
MVPP2_PRS_RI_L3_IP4_OTHER | MVPP2_PRS_RI_L4_TCP,
MVPP2_PRS_IP_MASK),
@@ -79,17 +79,17 @@ static const struct mvpp2_cls_flow cls_flows[MVPP2_N_PRS_FLOWS] = {
/* TCP over IPv4 flows, fragmented, with vlan tag */
MVPP2_DEF_FLOW(MVPP22_FLOW_TCP4, MVPP2_FL_IP4_TCP_FRAG_TAG,
- MVPP22_CLS_HEK_IP4_2T | MVPP22_CLS_HEK_OPT_VLAN,
+ MVPP22_CLS_HEK_IP4_2T | MVPP22_CLS_HEK_TAGGED,
MVPP2_PRS_RI_L3_IP4 | MVPP2_PRS_RI_L4_TCP,
MVPP2_PRS_IP_MASK),
MVPP2_DEF_FLOW(MVPP22_FLOW_TCP4, MVPP2_FL_IP4_TCP_FRAG_TAG,
- MVPP22_CLS_HEK_IP4_2T | MVPP22_CLS_HEK_OPT_VLAN,
+ MVPP22_CLS_HEK_IP4_2T | MVPP22_CLS_HEK_TAGGED,
MVPP2_PRS_RI_L3_IP4_OPT | MVPP2_PRS_RI_L4_TCP,
MVPP2_PRS_IP_MASK),
MVPP2_DEF_FLOW(MVPP22_FLOW_TCP4, MVPP2_FL_IP4_TCP_FRAG_TAG,
- MVPP22_CLS_HEK_IP4_2T | MVPP22_CLS_HEK_OPT_VLAN,
+ MVPP22_CLS_HEK_IP4_2T | MVPP22_CLS_HEK_TAGGED,
MVPP2_PRS_RI_L3_IP4_OTHER | MVPP2_PRS_RI_L4_TCP,
MVPP2_PRS_IP_MASK),
@@ -114,17 +114,17 @@ static const struct mvpp2_cls_flow cls_flows[MVPP2_N_PRS_FLOWS] = {
/* UDP over IPv4 flows, Not fragmented, with vlan tag */
MVPP2_DEF_FLOW(MVPP22_FLOW_UDP4, MVPP2_FL_IP4_UDP_NF_TAG,
- MVPP22_CLS_HEK_IP4_5T | MVPP22_CLS_HEK_OPT_VLAN,
+ MVPP22_CLS_HEK_IP4_5T | MVPP22_CLS_HEK_TAGGED,
MVPP2_PRS_RI_L3_IP4 | MVPP2_PRS_RI_L4_UDP,
MVPP2_PRS_IP_MASK),
MVPP2_DEF_FLOW(MVPP22_FLOW_UDP4, MVPP2_FL_IP4_UDP_NF_TAG,
- MVPP22_CLS_HEK_IP4_5T | MVPP22_CLS_HEK_OPT_VLAN,
+ MVPP22_CLS_HEK_IP4_5T | MVPP22_CLS_HEK_TAGGED,
MVPP2_PRS_RI_L3_IP4_OPT | MVPP2_PRS_RI_L4_UDP,
MVPP2_PRS_IP_MASK),
MVPP2_DEF_FLOW(MVPP22_FLOW_UDP4, MVPP2_FL_IP4_UDP_NF_TAG,
- MVPP22_CLS_HEK_IP4_5T | MVPP22_CLS_HEK_OPT_VLAN,
+ MVPP22_CLS_HEK_IP4_5T | MVPP22_CLS_HEK_TAGGED,
MVPP2_PRS_RI_L3_IP4_OTHER | MVPP2_PRS_RI_L4_UDP,
MVPP2_PRS_IP_MASK),
@@ -149,17 +149,17 @@ static const struct mvpp2_cls_flow cls_flows[MVPP2_N_PRS_FLOWS] = {
/* UDP over IPv4 flows, fragmented, with vlan tag */
MVPP2_DEF_FLOW(MVPP22_FLOW_UDP4, MVPP2_FL_IP4_UDP_FRAG_TAG,
- MVPP22_CLS_HEK_IP4_2T | MVPP22_CLS_HEK_OPT_VLAN,
+ MVPP22_CLS_HEK_IP4_2T | MVPP22_CLS_HEK_TAGGED,
MVPP2_PRS_RI_L3_IP4 | MVPP2_PRS_RI_L4_UDP,
MVPP2_PRS_IP_MASK),
MVPP2_DEF_FLOW(MVPP22_FLOW_UDP4, MVPP2_FL_IP4_UDP_FRAG_TAG,
- MVPP22_CLS_HEK_IP4_2T | MVPP22_CLS_HEK_OPT_VLAN,
+ MVPP22_CLS_HEK_IP4_2T | MVPP22_CLS_HEK_TAGGED,
MVPP2_PRS_RI_L3_IP4_OPT | MVPP2_PRS_RI_L4_UDP,
MVPP2_PRS_IP_MASK),
MVPP2_DEF_FLOW(MVPP22_FLOW_UDP4, MVPP2_FL_IP4_UDP_FRAG_TAG,
- MVPP22_CLS_HEK_IP4_2T | MVPP22_CLS_HEK_OPT_VLAN,
+ MVPP22_CLS_HEK_IP4_2T | MVPP22_CLS_HEK_TAGGED,
MVPP2_PRS_RI_L3_IP4_OTHER | MVPP2_PRS_RI_L4_UDP,
MVPP2_PRS_IP_MASK),
@@ -178,12 +178,12 @@ static const struct mvpp2_cls_flow cls_flows[MVPP2_N_PRS_FLOWS] = {
/* TCP over IPv6 flows, not fragmented, with vlan tag */
MVPP2_DEF_FLOW(MVPP22_FLOW_TCP6, MVPP2_FL_IP6_TCP_NF_TAG,
- MVPP22_CLS_HEK_IP6_5T | MVPP22_CLS_HEK_OPT_VLAN,
+ MVPP22_CLS_HEK_IP6_5T | MVPP22_CLS_HEK_TAGGED,
MVPP2_PRS_RI_L3_IP6 | MVPP2_PRS_RI_L4_TCP,
MVPP2_PRS_IP_MASK),
MVPP2_DEF_FLOW(MVPP22_FLOW_TCP6, MVPP2_FL_IP6_TCP_NF_TAG,
- MVPP22_CLS_HEK_IP6_5T | MVPP22_CLS_HEK_OPT_VLAN,
+ MVPP22_CLS_HEK_IP6_5T | MVPP22_CLS_HEK_TAGGED,
MVPP2_PRS_RI_L3_IP6_EXT | MVPP2_PRS_RI_L4_TCP,
MVPP2_PRS_IP_MASK),
@@ -202,13 +202,13 @@ static const struct mvpp2_cls_flow cls_flows[MVPP2_N_PRS_FLOWS] = {
/* TCP over IPv6 flows, fragmented, with vlan tag */
MVPP2_DEF_FLOW(MVPP22_FLOW_TCP6, MVPP2_FL_IP6_TCP_FRAG_TAG,
- MVPP22_CLS_HEK_IP6_2T | MVPP22_CLS_HEK_OPT_VLAN,
+ MVPP22_CLS_HEK_IP6_2T | MVPP22_CLS_HEK_TAGGED,
MVPP2_PRS_RI_L3_IP6 | MVPP2_PRS_RI_IP_FRAG_TRUE |
MVPP2_PRS_RI_L4_TCP,
MVPP2_PRS_IP_MASK),
MVPP2_DEF_FLOW(MVPP22_FLOW_TCP6, MVPP2_FL_IP6_TCP_FRAG_TAG,
- MVPP22_CLS_HEK_IP6_2T | MVPP22_CLS_HEK_OPT_VLAN,
+ MVPP22_CLS_HEK_IP6_2T | MVPP22_CLS_HEK_TAGGED,
MVPP2_PRS_RI_L3_IP6_EXT | MVPP2_PRS_RI_IP_FRAG_TRUE |
MVPP2_PRS_RI_L4_TCP,
MVPP2_PRS_IP_MASK),
@@ -228,12 +228,12 @@ static const struct mvpp2_cls_flow cls_flows[MVPP2_N_PRS_FLOWS] = {
/* UDP over IPv6 flows, not fragmented, with vlan tag */
MVPP2_DEF_FLOW(MVPP22_FLOW_UDP6, MVPP2_FL_IP6_UDP_NF_TAG,
- MVPP22_CLS_HEK_IP6_5T | MVPP22_CLS_HEK_OPT_VLAN,
+ MVPP22_CLS_HEK_IP6_5T | MVPP22_CLS_HEK_TAGGED,
MVPP2_PRS_RI_L3_IP6 | MVPP2_PRS_RI_L4_UDP,
MVPP2_PRS_IP_MASK),
MVPP2_DEF_FLOW(MVPP22_FLOW_UDP6, MVPP2_FL_IP6_UDP_NF_TAG,
- MVPP22_CLS_HEK_IP6_5T | MVPP22_CLS_HEK_OPT_VLAN,
+ MVPP22_CLS_HEK_IP6_5T | MVPP22_CLS_HEK_TAGGED,
MVPP2_PRS_RI_L3_IP6_EXT | MVPP2_PRS_RI_L4_UDP,
MVPP2_PRS_IP_MASK),
@@ -252,13 +252,13 @@ static const struct mvpp2_cls_flow cls_flows[MVPP2_N_PRS_FLOWS] = {
/* UDP over IPv6 flows, fragmented, with vlan tag */
MVPP2_DEF_FLOW(MVPP22_FLOW_UDP6, MVPP2_FL_IP6_UDP_FRAG_TAG,
- MVPP22_CLS_HEK_IP6_2T | MVPP22_CLS_HEK_OPT_VLAN,
+ MVPP22_CLS_HEK_IP6_2T | MVPP22_CLS_HEK_TAGGED,
MVPP2_PRS_RI_L3_IP6 | MVPP2_PRS_RI_IP_FRAG_TRUE |
MVPP2_PRS_RI_L4_UDP,
MVPP2_PRS_IP_MASK),
MVPP2_DEF_FLOW(MVPP22_FLOW_UDP6, MVPP2_FL_IP6_UDP_FRAG_TAG,
- MVPP22_CLS_HEK_IP6_2T | MVPP22_CLS_HEK_OPT_VLAN,
+ MVPP22_CLS_HEK_IP6_2T | MVPP22_CLS_HEK_TAGGED,
MVPP2_PRS_RI_L3_IP6_EXT | MVPP2_PRS_RI_IP_FRAG_TRUE |
MVPP2_PRS_RI_L4_UDP,
MVPP2_PRS_IP_MASK),
@@ -279,15 +279,15 @@ static const struct mvpp2_cls_flow cls_flows[MVPP2_N_PRS_FLOWS] = {
/* IPv4 flows, with vlan tag */
MVPP2_DEF_FLOW(MVPP22_FLOW_IP4, MVPP2_FL_IP4_TAG,
- MVPP22_CLS_HEK_IP4_2T | MVPP22_CLS_HEK_OPT_VLAN,
+ MVPP22_CLS_HEK_IP4_2T | MVPP22_CLS_HEK_TAGGED,
MVPP2_PRS_RI_L3_IP4,
MVPP2_PRS_RI_L3_PROTO_MASK),
MVPP2_DEF_FLOW(MVPP22_FLOW_IP4, MVPP2_FL_IP4_TAG,
- MVPP22_CLS_HEK_IP4_2T | MVPP22_CLS_HEK_OPT_VLAN,
+ MVPP22_CLS_HEK_IP4_2T | MVPP22_CLS_HEK_TAGGED,
MVPP2_PRS_RI_L3_IP4_OPT,
MVPP2_PRS_RI_L3_PROTO_MASK),
MVPP2_DEF_FLOW(MVPP22_FLOW_IP4, MVPP2_FL_IP4_TAG,
- MVPP22_CLS_HEK_IP4_2T | MVPP22_CLS_HEK_OPT_VLAN,
+ MVPP22_CLS_HEK_IP4_2T | MVPP22_CLS_HEK_TAGGED,
MVPP2_PRS_RI_L3_IP4_OTHER,
MVPP2_PRS_RI_L3_PROTO_MASK),
@@ -303,11 +303,11 @@ static const struct mvpp2_cls_flow cls_flows[MVPP2_N_PRS_FLOWS] = {
/* IPv6 flows, with vlan tag */
MVPP2_DEF_FLOW(MVPP22_FLOW_IP6, MVPP2_FL_IP6_TAG,
- MVPP22_CLS_HEK_IP6_2T | MVPP22_CLS_HEK_OPT_VLAN,
+ MVPP22_CLS_HEK_IP6_2T | MVPP22_CLS_HEK_TAGGED,
MVPP2_PRS_RI_L3_IP6,
MVPP2_PRS_RI_L3_PROTO_MASK),
MVPP2_DEF_FLOW(MVPP22_FLOW_IP6, MVPP2_FL_IP6_TAG,
- MVPP22_CLS_HEK_IP6_2T | MVPP22_CLS_HEK_OPT_VLAN,
+ MVPP22_CLS_HEK_IP6_2T | MVPP22_CLS_HEK_TAGGED,
MVPP2_PRS_RI_L3_IP6,
MVPP2_PRS_RI_L3_PROTO_MASK),
@@ -548,6 +548,8 @@ void mvpp2_cls_c2_read(struct mvpp2 *priv, int index,
static int mvpp2_cls_ethtool_flow_to_type(int flow_type)
{
switch (flow_type & ~(FLOW_EXT | FLOW_MAC_EXT | FLOW_RSS)) {
+ case ETHER_FLOW:
+ return MVPP22_FLOW_ETHERNET;
case TCP_V4_FLOW:
return MVPP22_FLOW_TCP4;
case TCP_V6_FLOW:
@@ -596,7 +598,7 @@ static void mvpp2_cls_flow_init(struct mvpp2 *priv,
mvpp2_cls_flow_eng_set(&fe, MVPP22_CLS_ENGINE_C2);
mvpp2_cls_flow_port_id_sel(&fe, true);
- mvpp2_cls_flow_lu_type_set(&fe, MVPP22_FLOW_ETHERNET);
+ mvpp2_cls_flow_lu_type_set(&fe, MVPP22_CLS_LU_TYPE_ALL);
/* Add all ports */
for (i = 0; i < MVPP2_MAX_PORTS; i++)
@@ -655,6 +657,9 @@ static int mvpp2_flow_set_hek_fields(struct mvpp2_cls_flow_entry *fe,
case MVPP22_CLS_HEK_OPT_VLAN:
field_id = MVPP22_CLS_FIELD_VLAN;
break;
+ case MVPP22_CLS_HEK_OPT_VLAN_PRI:
+ field_id = MVPP22_CLS_FIELD_VLAN_PRI;
+ break;
case MVPP22_CLS_HEK_OPT_IP4SA:
field_id = MVPP22_CLS_FIELD_IP4SA;
break;
@@ -689,6 +694,10 @@ static int mvpp2_cls_hek_field_size(u32 field)
switch (field) {
case MVPP22_CLS_HEK_OPT_MAC_DA:
return 48;
+ case MVPP22_CLS_HEK_OPT_VLAN:
+ return 12;
+ case MVPP22_CLS_HEK_OPT_VLAN_PRI:
+ return 3;
case MVPP22_CLS_HEK_OPT_IP4SA:
case MVPP22_CLS_HEK_OPT_IP4DA:
return 32;
@@ -777,6 +786,9 @@ u16 mvpp2_flow_get_hek_fields(struct mvpp2_cls_flow_entry *fe)
case MVPP22_CLS_FIELD_VLAN:
hash_opts |= MVPP22_CLS_HEK_OPT_VLAN;
break;
+ case MVPP22_CLS_FIELD_VLAN_PRI:
+ hash_opts |= MVPP22_CLS_HEK_OPT_VLAN_PRI;
+ break;
case MVPP22_CLS_FIELD_L3_PROTO:
hash_opts |= MVPP22_CLS_HEK_OPT_L3_PROTO;
break;
@@ -861,7 +873,7 @@ static void mvpp2_port_c2_cls_init(struct mvpp2_port *port)
/* Match on Lookup Type */
c2.tcam[4] |= MVPP22_CLS_C2_TCAM_EN(MVPP22_CLS_C2_LU_TYPE(MVPP2_CLS_LU_TYPE_MASK));
- c2.tcam[4] |= MVPP22_CLS_C2_LU_TYPE(MVPP22_FLOW_ETHERNET);
+ c2.tcam[4] |= MVPP22_CLS_C2_LU_TYPE(MVPP22_CLS_LU_TYPE_ALL);
/* Update RSS status after matching this entry */
c2.act = MVPP22_CLS_C2_ACT_RSS_EN(MVPP22_C2_UPD_LOCK);
@@ -923,6 +935,12 @@ void mvpp2_cls_init(struct mvpp2 *priv)
mvpp2_cls_c2_write(priv, &c2);
}
+ /* Disable the FIFO stages in C2 engine, which are only used in BIST
+ * mode
+ */
+ mvpp2_write(priv, MVPP22_CLS_C2_TCAM_CTRL,
+ MVPP22_CLS_C2_TCAM_BYPASS_FIFO);
+
mvpp2_cls_port_init_flows(priv);
}
@@ -963,12 +981,22 @@ u32 mvpp2_cls_c2_hit_count(struct mvpp2 *priv, int c2_index)
return mvpp2_read(priv, MVPP22_CLS_C2_HIT_CTR);
}
-static void mvpp2_rss_port_c2_enable(struct mvpp2_port *port)
+static void mvpp2_rss_port_c2_enable(struct mvpp2_port *port, u32 ctx)
{
struct mvpp2_cls_c2_entry c2;
+ u8 qh, ql;
mvpp2_cls_c2_read(port->priv, MVPP22_CLS_C2_RSS_ENTRY(port->id), &c2);
+ /* The RxQ number is used to select the RSS table. It that case, we set
+ * it to be the ctx number.
+ */
+ qh = (ctx >> 3) & MVPP22_CLS_C2_ATTR0_QHIGH_MASK;
+ ql = ctx & MVPP22_CLS_C2_ATTR0_QLOW_MASK;
+
+ c2.attr[0] = MVPP22_CLS_C2_ATTR0_QHIGH(qh) |
+ MVPP22_CLS_C2_ATTR0_QLOW(ql);
+
c2.attr[2] |= MVPP22_CLS_C2_ATTR2_RSS_EN;
mvpp2_cls_c2_write(port->priv, &c2);
@@ -977,22 +1005,45 @@ static void mvpp2_rss_port_c2_enable(struct mvpp2_port *port)
static void mvpp2_rss_port_c2_disable(struct mvpp2_port *port)
{
struct mvpp2_cls_c2_entry c2;
+ u8 qh, ql;
mvpp2_cls_c2_read(port->priv, MVPP22_CLS_C2_RSS_ENTRY(port->id), &c2);
+ /* Reset the default destination RxQ to the port's first rx queue. */
+ qh = (port->first_rxq >> 3) & MVPP22_CLS_C2_ATTR0_QHIGH_MASK;
+ ql = port->first_rxq & MVPP22_CLS_C2_ATTR0_QLOW_MASK;
+
+ c2.attr[0] = MVPP22_CLS_C2_ATTR0_QHIGH(qh) |
+ MVPP22_CLS_C2_ATTR0_QLOW(ql);
+
c2.attr[2] &= ~MVPP22_CLS_C2_ATTR2_RSS_EN;
mvpp2_cls_c2_write(port->priv, &c2);
}
-void mvpp22_port_rss_enable(struct mvpp2_port *port)
+static inline int mvpp22_rss_ctx(struct mvpp2_port *port, int port_rss_ctx)
+{
+ return port->rss_ctx[port_rss_ctx];
+}
+
+int mvpp22_port_rss_enable(struct mvpp2_port *port)
{
- mvpp2_rss_port_c2_enable(port);
+ if (mvpp22_rss_ctx(port, 0) < 0)
+ return -EINVAL;
+
+ mvpp2_rss_port_c2_enable(port, mvpp22_rss_ctx(port, 0));
+
+ return 0;
}
-void mvpp22_port_rss_disable(struct mvpp2_port *port)
+int mvpp22_port_rss_disable(struct mvpp2_port *port)
{
+ if (mvpp22_rss_ctx(port, 0) < 0)
+ return -EINVAL;
+
mvpp2_rss_port_c2_disable(port);
+
+ return 0;
}
static void mvpp22_port_c2_lookup_disable(struct mvpp2_port *port, int entry)
@@ -1029,7 +1080,7 @@ static int mvpp2_port_c2_tcam_rule_add(struct mvpp2_port *port,
struct flow_action_entry *act;
struct mvpp2_cls_c2_entry c2;
u8 qh, ql, pmap;
- int index;
+ int index, ctx;
memset(&c2, 0, sizeof(c2));
@@ -1042,13 +1093,13 @@ static int mvpp2_port_c2_tcam_rule_add(struct mvpp2_port *port,
rule->c2_index = c2.index;
- c2.tcam[0] = (rule->c2_tcam & 0xffff) |
+ c2.tcam[3] = (rule->c2_tcam & 0xffff) |
((rule->c2_tcam_mask & 0xffff) << 16);
- c2.tcam[1] = ((rule->c2_tcam >> 16) & 0xffff) |
+ c2.tcam[2] = ((rule->c2_tcam >> 16) & 0xffff) |
(((rule->c2_tcam_mask >> 16) & 0xffff) << 16);
- c2.tcam[2] = ((rule->c2_tcam >> 32) & 0xffff) |
+ c2.tcam[1] = ((rule->c2_tcam >> 32) & 0xffff) |
(((rule->c2_tcam_mask >> 32) & 0xffff) << 16);
- c2.tcam[3] = ((rule->c2_tcam >> 48) & 0xffff) |
+ c2.tcam[0] = ((rule->c2_tcam >> 48) & 0xffff) |
(((rule->c2_tcam_mask >> 48) & 0xffff) << 16);
pmap = BIT(port->id);
@@ -1069,14 +1120,36 @@ static int mvpp2_port_c2_tcam_rule_add(struct mvpp2_port *port,
*/
c2.act = MVPP22_CLS_C2_ACT_COLOR(MVPP22_C2_COL_NO_UPD_LOCK);
+ /* Update RSS status after matching this entry */
+ if (act->queue.ctx)
+ c2.attr[2] |= MVPP22_CLS_C2_ATTR2_RSS_EN;
+
+ /* Always lock the RSS_EN decision. We might have high prio
+ * rules steering to an RXQ, and a lower one steering to RSS,
+ * we don't want the low prio RSS rule overwriting this flag.
+ */
+ c2.act = MVPP22_CLS_C2_ACT_RSS_EN(MVPP22_C2_UPD_LOCK);
+
/* Mark packet as "forwarded to software", needed for RSS */
c2.act |= MVPP22_CLS_C2_ACT_FWD(MVPP22_C2_FWD_SW_LOCK);
c2.act |= MVPP22_CLS_C2_ACT_QHIGH(MVPP22_C2_UPD_LOCK) |
MVPP22_CLS_C2_ACT_QLOW(MVPP22_C2_UPD_LOCK);
- qh = ((act->queue.index + port->first_rxq) >> 3) & MVPP22_CLS_C2_ATTR0_QHIGH_MASK;
- ql = (act->queue.index + port->first_rxq) & MVPP22_CLS_C2_ATTR0_QLOW_MASK;
+ if (act->queue.ctx) {
+ /* Get the global ctx number */
+ ctx = mvpp22_rss_ctx(port, act->queue.ctx);
+ if (ctx < 0)
+ return -EINVAL;
+
+ qh = (ctx >> 3) & MVPP22_CLS_C2_ATTR0_QHIGH_MASK;
+ ql = ctx & MVPP22_CLS_C2_ATTR0_QLOW_MASK;
+ } else {
+ qh = ((act->queue.index + port->first_rxq) >> 3) &
+ MVPP22_CLS_C2_ATTR0_QHIGH_MASK;
+ ql = (act->queue.index + port->first_rxq) &
+ MVPP22_CLS_C2_ATTR0_QLOW_MASK;
+ }
c2.attr[0] = MVPP22_CLS_C2_ATTR0_QHIGH(qh) |
MVPP22_CLS_C2_ATTR0_QLOW(ql);
@@ -1140,6 +1213,9 @@ static int mvpp2_port_flt_rfs_rule_insert(struct mvpp2_port *port,
if (!flow)
return 0;
+ if ((rule->hek_fields & flow->supported_hash_opts) != rule->hek_fields)
+ continue;
+
index = MVPP2_CLS_FLT_C2_RFS(port->id, flow->flow_id, rule->loc);
mvpp2_cls_flow_read(priv, index, &fe);
@@ -1158,7 +1234,44 @@ static int mvpp2_port_flt_rfs_rule_insert(struct mvpp2_port *port,
static int mvpp2_cls_c2_build_match(struct mvpp2_rfs_rule *rule)
{
struct flow_rule *flow = rule->flow;
- int offs = 64;
+ int offs = 0;
+
+ /* The order of insertion in C2 tcam must match the order in which
+ * the fields are found in the header
+ */
+ if (flow_rule_match_key(flow, FLOW_DISSECTOR_KEY_VLAN)) {
+ struct flow_match_vlan match;
+
+ flow_rule_match_vlan(flow, &match);
+ if (match.mask->vlan_id) {
+ rule->hek_fields |= MVPP22_CLS_HEK_OPT_VLAN;
+
+ rule->c2_tcam |= ((u64)match.key->vlan_id) << offs;
+ rule->c2_tcam_mask |= ((u64)match.mask->vlan_id) << offs;
+
+ /* Don't update the offset yet */
+ }
+
+ if (match.mask->vlan_priority) {
+ rule->hek_fields |= MVPP22_CLS_HEK_OPT_VLAN_PRI;
+
+ /* VLAN pri is always at offset 13 relative to the
+ * current offset
+ */
+ rule->c2_tcam |= ((u64)match.key->vlan_priority) <<
+ (offs + 13);
+ rule->c2_tcam_mask |= ((u64)match.mask->vlan_priority) <<
+ (offs + 13);
+ }
+
+ if (match.mask->vlan_dei)
+ return -EOPNOTSUPP;
+
+ /* vlan id and prio always seem to take a full 16-bit slot in
+ * the Header Extracted Key.
+ */
+ offs += 16;
+ }
if (flow_rule_match_key(flow, FLOW_DISSECTOR_KEY_PORTS)) {
struct flow_match_ports match;
@@ -1166,18 +1279,18 @@ static int mvpp2_cls_c2_build_match(struct mvpp2_rfs_rule *rule)
flow_rule_match_ports(flow, &match);
if (match.mask->src) {
rule->hek_fields |= MVPP22_CLS_HEK_OPT_L4SIP;
- offs -= mvpp2_cls_hek_field_size(MVPP22_CLS_HEK_OPT_L4SIP);
rule->c2_tcam |= ((u64)ntohs(match.key->src)) << offs;
rule->c2_tcam_mask |= ((u64)ntohs(match.mask->src)) << offs;
+ offs += mvpp2_cls_hek_field_size(MVPP22_CLS_HEK_OPT_L4SIP);
}
if (match.mask->dst) {
rule->hek_fields |= MVPP22_CLS_HEK_OPT_L4DIP;
- offs -= mvpp2_cls_hek_field_size(MVPP22_CLS_HEK_OPT_L4DIP);
rule->c2_tcam |= ((u64)ntohs(match.key->dst)) << offs;
rule->c2_tcam_mask |= ((u64)ntohs(match.mask->dst)) << offs;
+ offs += mvpp2_cls_hek_field_size(MVPP22_CLS_HEK_OPT_L4DIP);
}
}
@@ -1196,6 +1309,13 @@ static int mvpp2_cls_rfs_parse_rule(struct mvpp2_rfs_rule *rule)
if (act->id != FLOW_ACTION_QUEUE && act->id != FLOW_ACTION_DROP)
return -EOPNOTSUPP;
+ /* When both an RSS context and an queue index are set, the index
+ * is considered as an offset to be added to the indirection table
+ * entries. We don't support this, so reject this rule.
+ */
+ if (act->queue.ctx && act->queue.index)
+ return -EOPNOTSUPP;
+
/* For now, only use the C2 engine which has a HEK size limited to 64
* bits for TCAM matching.
*/
@@ -1212,7 +1332,7 @@ int mvpp2_ethtool_cls_rule_get(struct mvpp2_port *port,
{
struct mvpp2_ethtool_fs *efs;
- if (rxnfc->fs.location >= MVPP2_N_RFS_RULES)
+ if (rxnfc->fs.location >= MVPP2_N_RFS_ENTRIES_PER_FLOW)
return -EINVAL;
efs = port->rfs_rules[rxnfc->fs.location];
@@ -1232,8 +1352,7 @@ int mvpp2_ethtool_cls_rule_ins(struct mvpp2_port *port,
struct mvpp2_ethtool_fs *efs, *old_efs;
int ret = 0;
- if (info->fs.location >= 4 ||
- info->fs.location < 0)
+ if (info->fs.location >= MVPP2_N_RFS_ENTRIES_PER_FLOW)
return -EINVAL;
efs = kzalloc(sizeof(*efs), GFP_KERNEL);
@@ -1242,6 +1361,12 @@ int mvpp2_ethtool_cls_rule_ins(struct mvpp2_port *port,
input.fs = &info->fs;
+ /* We need to manually set the rss_ctx, since this info isn't present
+ * in info->fs
+ */
+ if (info->fs.flow_type & FLOW_RSS)
+ input.rss_ctx = info->rss_context;
+
ethtool_rule = ethtool_rx_flow_rule_create(&input);
if (IS_ERR(ethtool_rule)) {
ret = PTR_ERR(ethtool_rule);
@@ -1250,6 +1375,10 @@ int mvpp2_ethtool_cls_rule_ins(struct mvpp2_port *port,
efs->rule.flow = ethtool_rule->rule;
efs->rule.flow_type = mvpp2_cls_ethtool_flow_to_type(info->fs.flow_type);
+ if (efs->rule.flow_type < 0) {
+ ret = efs->rule.flow_type;
+ goto clean_rule;
+ }
ret = mvpp2_cls_rfs_parse_rule(&efs->rule);
if (ret)
@@ -1328,19 +1457,160 @@ static inline u32 mvpp22_rxfh_indir(struct mvpp2_port *port, u32 rxq)
return port->first_rxq + ((rxq * nrxqs + rxq / cpus) % port->nrxqs);
}
-void mvpp22_rss_fill_table(struct mvpp2_port *port, u32 table)
+static void mvpp22_rss_fill_table(struct mvpp2_port *port,
+ struct mvpp2_rss_table *table,
+ u32 rss_ctx)
{
struct mvpp2 *priv = port->priv;
int i;
for (i = 0; i < MVPP22_RSS_TABLE_ENTRIES; i++) {
- u32 sel = MVPP22_RSS_INDEX_TABLE(table) |
+ u32 sel = MVPP22_RSS_INDEX_TABLE(rss_ctx) |
MVPP22_RSS_INDEX_TABLE_ENTRY(i);
mvpp2_write(priv, MVPP22_RSS_INDEX, sel);
mvpp2_write(priv, MVPP22_RSS_TABLE_ENTRY,
- mvpp22_rxfh_indir(port, port->indir[i]));
+ mvpp22_rxfh_indir(port, table->indir[i]));
+ }
+}
+
+static int mvpp22_rss_context_create(struct mvpp2_port *port, u32 *rss_ctx)
+{
+ struct mvpp2 *priv = port->priv;
+ u32 ctx;
+
+ /* Find the first free RSS table */
+ for (ctx = 0; ctx < MVPP22_N_RSS_TABLES; ctx++) {
+ if (!priv->rss_tables[ctx])
+ break;
+ }
+
+ if (ctx == MVPP22_N_RSS_TABLES)
+ return -EINVAL;
+
+ priv->rss_tables[ctx] = kzalloc(sizeof(*priv->rss_tables[ctx]),
+ GFP_KERNEL);
+ if (!priv->rss_tables[ctx])
+ return -ENOMEM;
+
+ *rss_ctx = ctx;
+
+ /* Set the table width: replace the whole classifier Rx queue number
+ * with the ones configured in RSS table entries.
+ */
+ mvpp2_write(priv, MVPP22_RSS_INDEX, MVPP22_RSS_INDEX_TABLE(ctx));
+ mvpp2_write(priv, MVPP22_RSS_WIDTH, 8);
+
+ mvpp2_write(priv, MVPP22_RSS_INDEX, MVPP22_RSS_INDEX_QUEUE(ctx));
+ mvpp2_write(priv, MVPP22_RXQ2RSS_TABLE, MVPP22_RSS_TABLE_POINTER(ctx));
+
+ return 0;
+}
+
+int mvpp22_port_rss_ctx_create(struct mvpp2_port *port, u32 *port_ctx)
+{
+ u32 rss_ctx;
+ int ret, i;
+
+ ret = mvpp22_rss_context_create(port, &rss_ctx);
+ if (ret)
+ return ret;
+
+ /* Find the first available context number in the port, starting from 1.
+ * Context 0 on each port is reserved for the default context.
+ */
+ for (i = 1; i < MVPP22_N_RSS_TABLES; i++) {
+ if (port->rss_ctx[i] < 0)
+ break;
+ }
+
+ if (i == MVPP22_N_RSS_TABLES)
+ return -EINVAL;
+
+ port->rss_ctx[i] = rss_ctx;
+ *port_ctx = i;
+
+ return 0;
+}
+
+static struct mvpp2_rss_table *mvpp22_rss_table_get(struct mvpp2 *priv,
+ int rss_ctx)
+{
+ if (rss_ctx < 0 || rss_ctx >= MVPP22_N_RSS_TABLES)
+ return NULL;
+
+ return priv->rss_tables[rss_ctx];
+}
+
+int mvpp22_port_rss_ctx_delete(struct mvpp2_port *port, u32 port_ctx)
+{
+ struct mvpp2 *priv = port->priv;
+ struct ethtool_rxnfc *rxnfc;
+ int i, rss_ctx, ret;
+
+ rss_ctx = mvpp22_rss_ctx(port, port_ctx);
+
+ if (rss_ctx < 0 || rss_ctx >= MVPP22_N_RSS_TABLES)
+ return -EINVAL;
+
+ /* Invalidate any active classification rule that uses this context */
+ for (i = 0; i < MVPP2_N_RFS_ENTRIES_PER_FLOW; i++) {
+ if (!port->rfs_rules[i])
+ continue;
+
+ rxnfc = &port->rfs_rules[i]->rxnfc;
+ if (!(rxnfc->fs.flow_type & FLOW_RSS) ||
+ rxnfc->rss_context != port_ctx)
+ continue;
+
+ ret = mvpp2_ethtool_cls_rule_del(port, rxnfc);
+ if (ret) {
+ netdev_warn(port->dev,
+ "couldn't remove classification rule %d associated to this context",
+ rxnfc->fs.location);
+ }
}
+
+ kfree(priv->rss_tables[rss_ctx]);
+
+ priv->rss_tables[rss_ctx] = NULL;
+ port->rss_ctx[port_ctx] = -1;
+
+ return 0;
+}
+
+int mvpp22_port_rss_ctx_indir_set(struct mvpp2_port *port, u32 port_ctx,
+ const u32 *indir)
+{
+ int rss_ctx = mvpp22_rss_ctx(port, port_ctx);
+ struct mvpp2_rss_table *rss_table = mvpp22_rss_table_get(port->priv,
+ rss_ctx);
+
+ if (!rss_table)
+ return -EINVAL;
+
+ memcpy(rss_table->indir, indir,
+ MVPP22_RSS_TABLE_ENTRIES * sizeof(rss_table->indir[0]));
+
+ mvpp22_rss_fill_table(port, rss_table, rss_ctx);
+
+ return 0;
+}
+
+int mvpp22_port_rss_ctx_indir_get(struct mvpp2_port *port, u32 port_ctx,
+ u32 *indir)
+{
+ int rss_ctx = mvpp22_rss_ctx(port, port_ctx);
+ struct mvpp2_rss_table *rss_table = mvpp22_rss_table_get(port->priv,
+ rss_ctx);
+
+ if (!rss_table)
+ return -EINVAL;
+
+ memcpy(indir, rss_table->indir,
+ MVPP22_RSS_TABLE_ENTRIES * sizeof(rss_table->indir[0]));
+
+ return 0;
}
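
A minimal sketch (not taken from the patch; error handling, locking and the indirection values are arbitrary) of how the per-port RSS context helpers above are meant to compose, e.g. from the ethtool rxfh-context hooks added later in this patch:

	u32 indir[MVPP22_RSS_TABLE_ENTRIES];
	u32 ctx;
	int i, ret;

	ret = mvpp22_port_rss_ctx_create(port, &ctx);	/* returns ctx >= 1 */
	if (ret)
		return ret;

	for (i = 0; i < MVPP22_RSS_TABLE_ENTRIES; i++)
		indir[i] = i % port->nrxqs;		/* spread hashes over all rxqs */

	ret = mvpp22_port_rss_ctx_indir_set(port, ctx, indir);
	if (ret)
		mvpp22_port_rss_ctx_delete(port, ctx);
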
int mvpp2_ethtool_rxfh_set(struct mvpp2_port *port, struct ethtool_rxnfc *info)
@@ -1424,32 +1694,32 @@ int mvpp2_ethtool_rxfh_get(struct mvpp2_port *port, struct ethtool_rxnfc *info)
return 0;
}
-void mvpp22_port_rss_init(struct mvpp2_port *port)
+int mvpp22_port_rss_init(struct mvpp2_port *port)
{
- struct mvpp2 *priv = port->priv;
- int i;
+ struct mvpp2_rss_table *table;
+ u32 context = 0;
+ int i, ret;
- /* Set the table width: replace the whole classifier Rx queue number
- * with the ones configured in RSS table entries.
- */
- mvpp2_write(priv, MVPP22_RSS_INDEX, MVPP22_RSS_INDEX_TABLE(port->id));
- mvpp2_write(priv, MVPP22_RSS_WIDTH, 8);
+ for (i = 0; i < MVPP22_N_RSS_TABLES; i++)
+ port->rss_ctx[i] = -1;
- /* The default RxQ is used as a key to select the RSS table to use.
- * We use one RSS table per port.
- */
- mvpp2_write(priv, MVPP22_RSS_INDEX,
- MVPP22_RSS_INDEX_QUEUE(port->first_rxq));
- mvpp2_write(priv, MVPP22_RXQ2RSS_TABLE,
- MVPP22_RSS_TABLE_POINTER(port->id));
+ ret = mvpp22_rss_context_create(port, &context);
+ if (ret)
+ return ret;
+
+ table = mvpp22_rss_table_get(port->priv, context);
+ if (!table)
+ return -EINVAL;
+
+ port->rss_ctx[0] = context;
/* Configure the first table to evenly distribute the packets across
* real Rx Queues. The table entries map a hash to a port Rx Queue.
*/
for (i = 0; i < MVPP22_RSS_TABLE_ENTRIES; i++)
- port->indir[i] = ethtool_rxfh_indir_default(i, port->nrxqs);
+ table->indir[i] = ethtool_rxfh_indir_default(i, port->nrxqs);
- mvpp22_rss_fill_table(port, port->id);
+ mvpp22_rss_fill_table(port, table, mvpp22_rss_ctx(port, 0));
/* Configure default flows */
mvpp2_port_rss_hash_opts_set(port, MVPP22_FLOW_IP4, MVPP22_CLS_HEK_IP4_2T);
@@ -1458,4 +1728,6 @@ void mvpp22_port_rss_init(struct mvpp2_port *port)
mvpp2_port_rss_hash_opts_set(port, MVPP22_FLOW_TCP6, MVPP22_CLS_HEK_IP6_5T);
mvpp2_port_rss_hash_opts_set(port, MVPP22_FLOW_UDP4, MVPP22_CLS_HEK_IP4_5T);
mvpp2_port_rss_hash_opts_set(port, MVPP22_FLOW_UDP6, MVPP22_CLS_HEK_IP6_5T);
+
+ return 0;
}
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h
index 56b617375a65..8867f25afab4 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.h
@@ -33,15 +33,16 @@ enum mvpp2_cls_engine {
};
#define MVPP22_CLS_HEK_OPT_MAC_DA BIT(0)
-#define MVPP22_CLS_HEK_OPT_VLAN BIT(1)
-#define MVPP22_CLS_HEK_OPT_L3_PROTO BIT(2)
-#define MVPP22_CLS_HEK_OPT_IP4SA BIT(3)
-#define MVPP22_CLS_HEK_OPT_IP4DA BIT(4)
-#define MVPP22_CLS_HEK_OPT_IP6SA BIT(5)
-#define MVPP22_CLS_HEK_OPT_IP6DA BIT(6)
-#define MVPP22_CLS_HEK_OPT_L4SIP BIT(7)
-#define MVPP22_CLS_HEK_OPT_L4DIP BIT(8)
-#define MVPP22_CLS_HEK_N_FIELDS 9
+#define MVPP22_CLS_HEK_OPT_VLAN_PRI BIT(1)
+#define MVPP22_CLS_HEK_OPT_VLAN BIT(2)
+#define MVPP22_CLS_HEK_OPT_L3_PROTO BIT(3)
+#define MVPP22_CLS_HEK_OPT_IP4SA BIT(4)
+#define MVPP22_CLS_HEK_OPT_IP4DA BIT(5)
+#define MVPP22_CLS_HEK_OPT_IP6SA BIT(6)
+#define MVPP22_CLS_HEK_OPT_IP6DA BIT(7)
+#define MVPP22_CLS_HEK_OPT_L4SIP BIT(8)
+#define MVPP22_CLS_HEK_OPT_L4DIP BIT(9)
+#define MVPP22_CLS_HEK_N_FIELDS 10
#define MVPP22_CLS_HEK_L4_OPTS (MVPP22_CLS_HEK_OPT_L4SIP | \
MVPP22_CLS_HEK_OPT_L4DIP)
@@ -59,8 +60,12 @@ enum mvpp2_cls_engine {
#define MVPP22_CLS_HEK_IP6_5T (MVPP22_CLS_HEK_IP6_2T | \
MVPP22_CLS_HEK_L4_OPTS)
+#define MVPP22_CLS_HEK_TAGGED (MVPP22_CLS_HEK_OPT_VLAN | \
+ MVPP22_CLS_HEK_OPT_VLAN_PRI)
+
enum mvpp2_cls_field_id {
MVPP22_CLS_FIELD_MAC_DA = 0x03,
+ MVPP22_CLS_FIELD_VLAN_PRI = 0x05,
MVPP22_CLS_FIELD_VLAN = 0x06,
MVPP22_CLS_FIELD_L3_PROTO = 0x0f,
MVPP22_CLS_FIELD_IP4SA = 0x10,
@@ -180,6 +185,11 @@ enum mvpp2_prs_flow {
/* LU Type defined for all engines, and specified in the flow table */
#define MVPP2_CLS_LU_TYPE_MASK 0x3f
+enum mvpp2_cls_lu_type {
+ /* rule->loc is used as a lu-type for the entries 0 - 62. */
+ MVPP22_CLS_LU_TYPE_ALL = 63,
+};
+
#define MVPP2_N_FLOWS (MVPP2_FL_LAST - MVPP2_FL_START)
struct mvpp2_cls_flow {
@@ -249,11 +259,18 @@ struct mvpp2_cls_lookup_entry {
u32 data;
};
-void mvpp22_rss_fill_table(struct mvpp2_port *port, u32 table);
-void mvpp22_port_rss_init(struct mvpp2_port *port);
+int mvpp22_port_rss_init(struct mvpp2_port *port);
+
+int mvpp22_port_rss_enable(struct mvpp2_port *port);
+int mvpp22_port_rss_disable(struct mvpp2_port *port);
+
+int mvpp22_port_rss_ctx_create(struct mvpp2_port *port, u32 *rss_ctx);
+int mvpp22_port_rss_ctx_delete(struct mvpp2_port *port, u32 rss_ctx);
-void mvpp22_port_rss_enable(struct mvpp2_port *port);
-void mvpp22_port_rss_disable(struct mvpp2_port *port);
+int mvpp22_port_rss_ctx_indir_set(struct mvpp2_port *port, u32 rss_ctx,
+ const u32 *indir);
+int mvpp22_port_rss_ctx_indir_get(struct mvpp2_port *port, u32 rss_ctx,
+ u32 *indir);
int mvpp2_ethtool_rxfh_get(struct mvpp2_port *port, struct ethtool_rxnfc *info);
int mvpp2_ethtool_rxfh_set(struct mvpp2_port *port, struct ethtool_rxnfc *info);
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
index d8e5241097a9..c51f1d5b550b 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
@@ -56,9 +56,9 @@ static struct {
/* The prototype is added here to be used in start_dev when using ACPI. This
* will be removed once phylink is used for all modes (dt+ACPI).
*/
-static void mvpp2_mac_config(struct net_device *dev, unsigned int mode,
+static void mvpp2_mac_config(struct phylink_config *config, unsigned int mode,
const struct phylink_link_state *state);
-static void mvpp2_mac_link_up(struct net_device *dev, unsigned int mode,
+static void mvpp2_mac_link_up(struct phylink_config *config, unsigned int mode,
phy_interface_t interface, struct phy_device *phy);
/* Queue modes */
@@ -1258,6 +1258,17 @@ static u64 mvpp2_read_count(struct mvpp2_port *port,
return val;
}
+/* Some counters are accessed indirectly by first writing an index to
+ * MVPP2_CTRS_IDX. The index can represent various resources depending on the
+ * register we access: a hit counter for some classification tables, or a
+ * counter specific to an rxq, a txq or a buffer pool.
+ */
+static u32 mvpp2_read_index(struct mvpp2 *priv, u32 index, u32 reg)
+{
+ mvpp2_write(priv, MVPP2_CTRS_IDX, index);
+ return mvpp2_read(priv, reg);
+}
+
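
As a usage sketch (not in the patch), reading the "descriptors enqueued" counter of, say, TX queue 2 of this port goes through the index register first; the index encoding mirrors what mvpp2_read_stats() does below:

	u64 txq2_enq = mvpp2_read_index(port->priv,
					MVPP22_CTRS_TX_CTR(port->id, 2),
					MVPP2_TX_DESC_ENQ_CTR);
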
/* Due to the fact that software statistics and hardware statistics are, by
* design, incremented at different moments in the chain of packet processing,
* it is very likely that incoming packets could have been dropped after being
@@ -1267,7 +1278,7 @@ static u64 mvpp2_read_count(struct mvpp2_port *port,
* Hence, statistics gathered from userspace with ifconfig (software) and
* ethtool (hardware) cannot be compared.
*/
-static const struct mvpp2_ethtool_counter mvpp2_ethtool_regs[] = {
+static const struct mvpp2_ethtool_counter mvpp2_ethtool_mib_regs[] = {
{ MVPP2_MIB_GOOD_OCTETS_RCVD, "good_octets_received", true },
{ MVPP2_MIB_BAD_OCTETS_RCVD, "bad_octets_received" },
{ MVPP2_MIB_CRC_ERRORS_SENT, "crc_errors_sent" },
@@ -1297,31 +1308,114 @@ static const struct mvpp2_ethtool_counter mvpp2_ethtool_regs[] = {
{ MVPP2_MIB_LATE_COLLISION, "late_collision" },
};
+static const struct mvpp2_ethtool_counter mvpp2_ethtool_port_regs[] = {
+ { MVPP2_OVERRUN_ETH_DROP, "rx_fifo_or_parser_overrun_drops" },
+ { MVPP2_CLS_ETH_DROP, "rx_classifier_drops" },
+};
+
+static const struct mvpp2_ethtool_counter mvpp2_ethtool_txq_regs[] = {
+ { MVPP2_TX_DESC_ENQ_CTR, "txq_%d_desc_enqueue" },
+ { MVPP2_TX_DESC_ENQ_TO_DDR_CTR, "txq_%d_desc_enqueue_to_ddr" },
+ { MVPP2_TX_BUFF_ENQ_TO_DDR_CTR, "txq_%d_buff_enqueue_to_ddr" },
+ { MVPP2_TX_DESC_ENQ_HW_FWD_CTR, "txq_%d_desc_hardware_forwarded" },
+ { MVPP2_TX_PKTS_DEQ_CTR, "txq_%d_packets_dequeued" },
+ { MVPP2_TX_PKTS_FULL_QUEUE_DROP_CTR, "txq_%d_queue_full_drops" },
+ { MVPP2_TX_PKTS_EARLY_DROP_CTR, "txq_%d_packets_early_drops" },
+ { MVPP2_TX_PKTS_BM_DROP_CTR, "txq_%d_packets_bm_drops" },
+ { MVPP2_TX_PKTS_BM_MC_DROP_CTR, "txq_%d_packets_rep_bm_drops" },
+};
+
+static const struct mvpp2_ethtool_counter mvpp2_ethtool_rxq_regs[] = {
+ { MVPP2_RX_DESC_ENQ_CTR, "rxq_%d_desc_enqueue" },
+ { MVPP2_RX_PKTS_FULL_QUEUE_DROP_CTR, "rxq_%d_queue_full_drops" },
+ { MVPP2_RX_PKTS_EARLY_DROP_CTR, "rxq_%d_packets_early_drops" },
+ { MVPP2_RX_PKTS_BM_DROP_CTR, "rxq_%d_packets_bm_drops" },
+};
+
+#define MVPP2_N_ETHTOOL_STATS(ntxqs, nrxqs) (ARRAY_SIZE(mvpp2_ethtool_mib_regs) + \
+ ARRAY_SIZE(mvpp2_ethtool_port_regs) + \
+ (ARRAY_SIZE(mvpp2_ethtool_txq_regs) * (ntxqs)) + \
+ (ARRAY_SIZE(mvpp2_ethtool_rxq_regs) * (nrxqs)))
+
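
As a worked example of the count above (array sizes read off the tables it references; the queue counts are only an assumption):

	/* A port with ntxqs = 4 and nrxqs = 4 exposes
	 *	ARRAY_SIZE(mvpp2_ethtool_mib_regs)	MIB counters
	 *	+ 2					per-port counters
	 *	+ 9 * 4					per-txq counters
	 *	+ 4 * 4					per-rxq counters
	 * u64 values, in exactly the order get_strings() and mvpp2_read_stats()
	 * walk them: MIB, per-port, per-txq, then per-rxq.
	 */
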
static void mvpp2_ethtool_get_strings(struct net_device *netdev, u32 sset,
u8 *data)
{
- if (sset == ETH_SS_STATS) {
- int i;
+ struct mvpp2_port *port = netdev_priv(netdev);
+ int i, q;
- for (i = 0; i < ARRAY_SIZE(mvpp2_ethtool_regs); i++)
- strscpy(data + i * ETH_GSTRING_LEN,
- mvpp2_ethtool_regs[i].string, ETH_GSTRING_LEN);
+ if (sset != ETH_SS_STATS)
+ return;
+
+ for (i = 0; i < ARRAY_SIZE(mvpp2_ethtool_mib_regs); i++) {
+ strscpy(data, mvpp2_ethtool_mib_regs[i].string,
+ ETH_GSTRING_LEN);
+ data += ETH_GSTRING_LEN;
+ }
+
+ for (i = 0; i < ARRAY_SIZE(mvpp2_ethtool_port_regs); i++) {
+ strscpy(data, mvpp2_ethtool_port_regs[i].string,
+ ETH_GSTRING_LEN);
+ data += ETH_GSTRING_LEN;
+ }
+
+ for (q = 0; q < port->ntxqs; q++) {
+ for (i = 0; i < ARRAY_SIZE(mvpp2_ethtool_txq_regs); i++) {
+ snprintf(data, ETH_GSTRING_LEN,
+ mvpp2_ethtool_txq_regs[i].string, q);
+ data += ETH_GSTRING_LEN;
+ }
+ }
+
+ for (q = 0; q < port->nrxqs; q++) {
+ for (i = 0; i < ARRAY_SIZE(mvpp2_ethtool_rxq_regs); i++) {
+ snprintf(data, ETH_GSTRING_LEN,
+ mvpp2_ethtool_rxq_regs[i].string,
+ q);
+ data += ETH_GSTRING_LEN;
+ }
}
}
+static void mvpp2_read_stats(struct mvpp2_port *port)
+{
+ u64 *pstats;
+ int i, q;
+
+ pstats = port->ethtool_stats;
+
+ for (i = 0; i < ARRAY_SIZE(mvpp2_ethtool_mib_regs); i++)
+ *pstats++ += mvpp2_read_count(port, &mvpp2_ethtool_mib_regs[i]);
+
+ for (i = 0; i < ARRAY_SIZE(mvpp2_ethtool_port_regs); i++)
+ *pstats++ += mvpp2_read(port->priv,
+ mvpp2_ethtool_port_regs[i].offset +
+ 4 * port->id);
+
+ for (q = 0; q < port->ntxqs; q++)
+ for (i = 0; i < ARRAY_SIZE(mvpp2_ethtool_txq_regs); i++)
+ *pstats++ += mvpp2_read_index(port->priv,
+ MVPP22_CTRS_TX_CTR(port->id, q),
+ mvpp2_ethtool_txq_regs[i].offset);
+
+ /* Rxqs are numbered from 0 from the user standpoint, but not from the
+ * driver's. We need to add the port->first_rxq offset.
+ */
+ for (q = 0; q < port->nrxqs; q++)
+ for (i = 0; i < ARRAY_SIZE(mvpp2_ethtool_rxq_regs); i++)
+ *pstats++ += mvpp2_read_index(port->priv,
+ port->first_rxq + q,
+ mvpp2_ethtool_rxq_regs[i].offset);
+}
+
static void mvpp2_gather_hw_statistics(struct work_struct *work)
{
struct delayed_work *del_work = to_delayed_work(work);
struct mvpp2_port *port = container_of(del_work, struct mvpp2_port,
stats_work);
- u64 *pstats;
- int i;
mutex_lock(&port->gather_stats_lock);
- pstats = port->ethtool_stats;
- for (i = 0; i < ARRAY_SIZE(mvpp2_ethtool_regs); i++)
- *pstats++ += mvpp2_read_count(port, &mvpp2_ethtool_regs[i]);
+ mvpp2_read_stats(port);
/* No need to read the counters again right after this function if it
* was called asynchronously by the user (i.e. via ethtool).
@@ -1345,27 +1439,24 @@ static void mvpp2_ethtool_get_stats(struct net_device *dev,
mutex_lock(&port->gather_stats_lock);
memcpy(data, port->ethtool_stats,
- sizeof(u64) * ARRAY_SIZE(mvpp2_ethtool_regs));
+ sizeof(u64) * MVPP2_N_ETHTOOL_STATS(port->ntxqs, port->nrxqs));
mutex_unlock(&port->gather_stats_lock);
}
static int mvpp2_ethtool_get_sset_count(struct net_device *dev, int sset)
{
+ struct mvpp2_port *port = netdev_priv(dev);
+
if (sset == ETH_SS_STATS)
- return ARRAY_SIZE(mvpp2_ethtool_regs);
+ return MVPP2_N_ETHTOOL_STATS(port->ntxqs, port->nrxqs);
return -EOPNOTSUPP;
}
static void mvpp2_mac_reset_assert(struct mvpp2_port *port)
{
- unsigned int i;
u32 val;
- /* Read the GOP statistics to reset the hardware counters */
- for (i = 0; i < ARRAY_SIZE(mvpp2_ethtool_regs); i++)
- mvpp2_read_count(port, &mvpp2_ethtool_regs[i]);
-
val = readl(port->base + MVPP2_GMAC_CTRL_2_REG) |
MVPP2_GMAC_PORT_RESET_MASK;
writel(val, port->base + MVPP2_GMAC_CTRL_2_REG);
@@ -3237,9 +3328,9 @@ static void mvpp2_start_dev(struct mvpp2_port *port)
struct phylink_link_state state = {
.interface = port->phy_interface,
};
- mvpp2_mac_config(port->dev, MLO_AN_INBAND, &state);
- mvpp2_mac_link_up(port->dev, MLO_AN_INBAND, port->phy_interface,
- NULL);
+ mvpp2_mac_config(&port->phylink_config, MLO_AN_INBAND, &state);
+ mvpp2_mac_link_up(&port->phylink_config, MLO_AN_INBAND,
+ port->phy_interface, NULL);
}
netif_tx_start_all_queues(port->dev);
@@ -3954,7 +4045,7 @@ static int mvpp2_ethtool_get_rxnfc(struct net_device *dev,
ret = mvpp2_ethtool_cls_rule_get(port, info);
break;
case ETHTOOL_GRXCLSRLALL:
- for (i = 0; i < MVPP2_N_RFS_RULES; i++) {
+ for (i = 0; i < MVPP2_N_RFS_ENTRIES_PER_FLOW; i++) {
if (port->rfs_rules[i])
rules[loc++] = i;
}
@@ -4000,24 +4091,25 @@ static int mvpp2_ethtool_get_rxfh(struct net_device *dev, u32 *indir, u8 *key,
u8 *hfunc)
{
struct mvpp2_port *port = netdev_priv(dev);
+ int ret = 0;
if (!mvpp22_rss_is_supported())
return -EOPNOTSUPP;
if (indir)
- memcpy(indir, port->indir,
- ARRAY_SIZE(port->indir) * sizeof(port->indir[0]));
+ ret = mvpp22_port_rss_ctx_indir_get(port, 0, indir);
if (hfunc)
*hfunc = ETH_RSS_HASH_CRC32;
- return 0;
+ return ret;
}
static int mvpp2_ethtool_set_rxfh(struct net_device *dev, const u32 *indir,
const u8 *key, const u8 hfunc)
{
struct mvpp2_port *port = netdev_priv(dev);
+ int ret = 0;
if (!mvpp22_rss_is_supported())
return -EOPNOTSUPP;
@@ -4028,15 +4120,58 @@ static int mvpp2_ethtool_set_rxfh(struct net_device *dev, const u32 *indir,
if (key)
return -EOPNOTSUPP;
- if (indir) {
- memcpy(port->indir, indir,
- ARRAY_SIZE(port->indir) * sizeof(port->indir[0]));
- mvpp22_rss_fill_table(port, port->id);
- }
+ if (indir)
+ ret = mvpp22_port_rss_ctx_indir_set(port, 0, indir);
- return 0;
+ return ret;
+}
+
+static int mvpp2_ethtool_get_rxfh_context(struct net_device *dev, u32 *indir,
+ u8 *key, u8 *hfunc, u32 rss_context)
+{
+ struct mvpp2_port *port = netdev_priv(dev);
+ int ret = 0;
+
+ if (!mvpp22_rss_is_supported())
+ return -EOPNOTSUPP;
+
+ if (hfunc)
+ *hfunc = ETH_RSS_HASH_CRC32;
+
+ if (indir)
+ ret = mvpp22_port_rss_ctx_indir_get(port, rss_context, indir);
+
+ return ret;
}
+static int mvpp2_ethtool_set_rxfh_context(struct net_device *dev,
+ const u32 *indir, const u8 *key,
+ const u8 hfunc, u32 *rss_context,
+ bool delete)
+{
+ struct mvpp2_port *port = netdev_priv(dev);
+ int ret;
+
+ if (!mvpp22_rss_is_supported())
+ return -EOPNOTSUPP;
+
+ if (hfunc != ETH_RSS_HASH_NO_CHANGE && hfunc != ETH_RSS_HASH_CRC32)
+ return -EOPNOTSUPP;
+
+ if (key)
+ return -EOPNOTSUPP;
+
+ if (delete)
+ return mvpp22_port_rss_ctx_delete(port, *rss_context);
+
+ if (*rss_context == ETH_RXFH_CONTEXT_ALLOC) {
+ ret = mvpp22_port_rss_ctx_create(port, rss_context);
+ if (ret)
+ return ret;
+ }
+
+ return mvpp22_port_rss_ctx_indir_set(port, *rss_context, indir);
+}
/* Device ops */
static const struct net_device_ops mvpp2_netdev_ops = {
@@ -4073,7 +4208,8 @@ static const struct ethtool_ops mvpp2_eth_tool_ops = {
.get_rxfh_indir_size = mvpp2_ethtool_get_rxfh_indir_size,
.get_rxfh = mvpp2_ethtool_get_rxfh,
.set_rxfh = mvpp2_ethtool_set_rxfh,
-
+ .get_rxfh_context = mvpp2_ethtool_get_rxfh_context,
+ .set_rxfh_context = mvpp2_ethtool_set_rxfh_context,
};
/* Used for PPv2.1, or PPv2.2 with the old Device Tree binding that
@@ -4327,6 +4463,11 @@ static int mvpp2_port_init(struct mvpp2_port *port)
if (err)
goto err_free_percpu;
+ /* Clear all port stats */
+ mvpp2_read_stats(port);
+ memset(port->ethtool_stats, 0,
+ MVPP2_N_ETHTOOL_STATS(port->ntxqs, port->nrxqs) * sizeof(u64));
+
return 0;
err_free_percpu:
@@ -4416,11 +4557,12 @@ static void mvpp2_port_copy_mac_addr(struct net_device *dev, struct mvpp2 *priv,
eth_hw_addr_random(dev);
}
-static void mvpp2_phylink_validate(struct net_device *dev,
+static void mvpp2_phylink_validate(struct phylink_config *config,
unsigned long *supported,
struct phylink_link_state *state)
{
- struct mvpp2_port *port = netdev_priv(dev);
+ struct mvpp2_port *port = container_of(config, struct mvpp2_port,
+ phylink_config);
__ETHTOOL_DECLARE_LINK_MODE_MASK(mask) = { 0, };
/* Invalid combinations */
@@ -4544,10 +4686,11 @@ static void mvpp2_gmac_link_state(struct mvpp2_port *port,
state->pause |= MLO_PAUSE_TX;
}
-static int mvpp2_phylink_mac_link_state(struct net_device *dev,
+static int mvpp2_phylink_mac_link_state(struct phylink_config *config,
struct phylink_link_state *state)
{
- struct mvpp2_port *port = netdev_priv(dev);
+ struct mvpp2_port *port = container_of(config, struct mvpp2_port,
+ phylink_config);
if (port->priv->hw_version == MVPP22 && port->gop_id == 0) {
u32 mode = readl(port->base + MVPP22_XLG_CTRL3_REG);
@@ -4563,9 +4706,10 @@ static int mvpp2_phylink_mac_link_state(struct net_device *dev,
return 1;
}
-static void mvpp2_mac_an_restart(struct net_device *dev)
+static void mvpp2_mac_an_restart(struct phylink_config *config)
{
- struct mvpp2_port *port = netdev_priv(dev);
+ struct mvpp2_port *port = container_of(config, struct mvpp2_port,
+ phylink_config);
u32 val = readl(port->base + MVPP2_GMAC_AUTONEG_CONFIG);
writel(val | MVPP2_GMAC_IN_BAND_RESTART_AN,
@@ -4750,9 +4894,10 @@ static void mvpp2_gmac_config(struct mvpp2_port *port, unsigned int mode,
}
}
-static void mvpp2_mac_config(struct net_device *dev, unsigned int mode,
+static void mvpp2_mac_config(struct phylink_config *config, unsigned int mode,
const struct phylink_link_state *state)
{
+ struct net_device *dev = to_net_dev(config->dev);
struct mvpp2_port *port = netdev_priv(dev);
bool change_interface = port->phy_interface != state->interface;
@@ -4792,9 +4937,10 @@ static void mvpp2_mac_config(struct net_device *dev, unsigned int mode,
mvpp2_port_enable(port);
}
-static void mvpp2_mac_link_up(struct net_device *dev, unsigned int mode,
+static void mvpp2_mac_link_up(struct phylink_config *config, unsigned int mode,
phy_interface_t interface, struct phy_device *phy)
{
+ struct net_device *dev = to_net_dev(config->dev);
struct mvpp2_port *port = netdev_priv(dev);
u32 val;
@@ -4819,9 +4965,10 @@ static void mvpp2_mac_link_up(struct net_device *dev, unsigned int mode,
netif_tx_wake_all_queues(dev);
}
-static void mvpp2_mac_link_down(struct net_device *dev, unsigned int mode,
- phy_interface_t interface)
+static void mvpp2_mac_link_down(struct phylink_config *config,
+ unsigned int mode, phy_interface_t interface)
{
+ struct net_device *dev = to_net_dev(config->dev);
struct mvpp2_port *port = netdev_priv(dev);
u32 val;
@@ -5002,7 +5149,7 @@ static int mvpp2_port_probe(struct platform_device *pdev,
}
port->ethtool_stats = devm_kcalloc(&pdev->dev,
- ARRAY_SIZE(mvpp2_ethtool_regs),
+ MVPP2_N_ETHTOOL_STATS(ntxqs, nrxqs),
sizeof(u64), GFP_KERNEL);
if (!port->ethtool_stats) {
err = -ENOMEM;
@@ -5078,8 +5225,11 @@ static int mvpp2_port_probe(struct platform_device *pdev,
/* Phylink isn't used w/ ACPI as of now */
if (port_node) {
- phylink = phylink_create(dev, port_fwnode, phy_mode,
- &mvpp2_phylink_ops);
+ port->phylink_config.dev = &dev->dev;
+ port->phylink_config.type = PHYLINK_NETDEV;
+
+ phylink = phylink_create(&port->phylink_config, port_fwnode,
+ phy_mode, &mvpp2_phylink_ops);
if (IS_ERR(phylink)) {
err = PTR_ERR(phylink);
goto err_free_port_pcpu;
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
index ae2240074d8e..5692c6087bbb 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
@@ -312,7 +312,8 @@ static void mvpp2_prs_sram_shift_set(struct mvpp2_prs_entry *pe, int shift,
}
/* Set value */
- pe->sram[MVPP2_BIT_TO_WORD(MVPP2_PRS_SRAM_SHIFT_OFFS)] = shift & MVPP2_PRS_SRAM_SHIFT_MASK;
+ pe->sram[MVPP2_BIT_TO_WORD(MVPP2_PRS_SRAM_SHIFT_OFFS)] |=
+ shift & MVPP2_PRS_SRAM_SHIFT_MASK;
/* Reset and set operation */
mvpp2_prs_sram_bits_clear(pe, MVPP2_PRS_SRAM_OP_SEL_SHIFT_OFFS,
diff --git a/drivers/net/ethernet/mediatek/Makefile b/drivers/net/ethernet/mediatek/Makefile
index d41a2414c575..2d8362f9341b 100644
--- a/drivers/net/ethernet/mediatek/Makefile
+++ b/drivers/net/ethernet/mediatek/Makefile
@@ -3,4 +3,5 @@
# Makefile for the Mediatek SoCs built-in ethernet macs
#
-obj-$(CONFIG_NET_MEDIATEK_SOC) += mtk_eth_soc.o
+obj-$(CONFIG_NET_MEDIATEK_SOC) += mtk_eth.o
+mtk_eth-y := mtk_eth_soc.o mtk_sgmii.o mtk_eth_path.o
diff --git a/drivers/net/ethernet/mediatek/mtk_eth_path.c b/drivers/net/ethernet/mediatek/mtk_eth_path.c
new file mode 100644
index 000000000000..7f05880cf9ef
--- /dev/null
+++ b/drivers/net/ethernet/mediatek/mtk_eth_path.c
@@ -0,0 +1,352 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2018-2019 MediaTek Inc.
+
+/* A library for configuring the path from GMAC/GDM to the target PHY
+ *
+ * Author: Sean Wang <sean.wang@mediatek.com>
+ *
+ */
+
+#include <linux/phy.h>
+#include <linux/regmap.h>
+
+#include "mtk_eth_soc.h"
+
+struct mtk_eth_muxc {
+ const char *name;
+ int cap_bit;
+ int (*set_path)(struct mtk_eth *eth, int path);
+};
+
+static const char *mtk_eth_path_name(int path)
+{
+ switch (path) {
+ case MTK_ETH_PATH_GMAC1_RGMII:
+ return "gmac1_rgmii";
+ case MTK_ETH_PATH_GMAC1_TRGMII:
+ return "gmac1_trgmii";
+ case MTK_ETH_PATH_GMAC1_SGMII:
+ return "gmac1_sgmii";
+ case MTK_ETH_PATH_GMAC2_RGMII:
+ return "gmac2_rgmii";
+ case MTK_ETH_PATH_GMAC2_SGMII:
+ return "gmac2_sgmii";
+ case MTK_ETH_PATH_GMAC2_GEPHY:
+ return "gmac2_gephy";
+ case MTK_ETH_PATH_GDM1_ESW:
+ return "gdm1_esw";
+ default:
+ return "unknown path";
+ }
+}
+
+static int set_mux_gdm1_to_gmac1_esw(struct mtk_eth *eth, int path)
+{
+ bool updated = true;
+ u32 val, mask, set;
+
+ switch (path) {
+ case MTK_ETH_PATH_GMAC1_SGMII:
+ mask = ~(u32)MTK_MUX_TO_ESW;
+ set = 0;
+ break;
+ case MTK_ETH_PATH_GDM1_ESW:
+ mask = ~(u32)MTK_MUX_TO_ESW;
+ set = MTK_MUX_TO_ESW;
+ break;
+ default:
+ updated = false;
+ break;
+ }
+
+ if (updated) {
+ val = mtk_r32(eth, MTK_MAC_MISC);
+ val = (val & mask) | set;
+ mtk_w32(eth, val, MTK_MAC_MISC);
+ }
+
+ dev_dbg(eth->dev, "path %s in %s updated = %d\n",
+ mtk_eth_path_name(path), __func__, updated);
+
+ return 0;
+}
+
+static int set_mux_gmac2_gmac0_to_gephy(struct mtk_eth *eth, int path)
+{
+ unsigned int val = 0;
+ bool updated = true;
+
+ switch (path) {
+ case MTK_ETH_PATH_GMAC2_GEPHY:
+ val = ~(u32)GEPHY_MAC_SEL;
+ break;
+ default:
+ updated = false;
+ break;
+ }
+
+ if (updated)
+ regmap_update_bits(eth->infra, INFRA_MISC2, GEPHY_MAC_SEL, val);
+
+ dev_dbg(eth->dev, "path %s in %s updated = %d\n",
+ mtk_eth_path_name(path), __func__, updated);
+
+ return 0;
+}
+
+static int set_mux_u3_gmac2_to_qphy(struct mtk_eth *eth, int path)
+{
+ unsigned int val = 0;
+ bool updated = true;
+
+ switch (path) {
+ case MTK_ETH_PATH_GMAC2_SGMII:
+ val = CO_QPHY_SEL;
+ break;
+ default:
+ updated = false;
+ break;
+ }
+
+ if (updated)
+ regmap_update_bits(eth->infra, INFRA_MISC2, CO_QPHY_SEL, val);
+
+ dev_dbg(eth->dev, "path %s in %s updated = %d\n",
+ mtk_eth_path_name(path), __func__, updated);
+
+ return 0;
+}
+
+static int set_mux_gmac1_gmac2_to_sgmii_rgmii(struct mtk_eth *eth, int path)
+{
+ unsigned int val = 0;
+ bool updated = true;
+
+ switch (path) {
+ case MTK_ETH_PATH_GMAC1_SGMII:
+ val = SYSCFG0_SGMII_GMAC1;
+ break;
+ case MTK_ETH_PATH_GMAC2_SGMII:
+ val = SYSCFG0_SGMII_GMAC2;
+ break;
+ case MTK_ETH_PATH_GMAC1_RGMII:
+ case MTK_ETH_PATH_GMAC2_RGMII:
+ regmap_read(eth->ethsys, ETHSYS_SYSCFG0, &val);
+ val &= SYSCFG0_SGMII_MASK;
+
+ if ((path == MTK_GMAC1_RGMII && val == SYSCFG0_SGMII_GMAC1) ||
+ (path == MTK_GMAC2_RGMII && val == SYSCFG0_SGMII_GMAC2))
+ val = 0;
+ else
+ updated = false;
+ break;
+ default:
+ updated = false;
+ break;
+ }
+
+ if (updated)
+ regmap_update_bits(eth->ethsys, ETHSYS_SYSCFG0,
+ SYSCFG0_SGMII_MASK, val);
+
+ dev_dbg(eth->dev, "path %s in %s updated = %d\n",
+ mtk_eth_path_name(path), __func__, updated);
+
+ return 0;
+}
+
+static int set_mux_gmac12_to_gephy_sgmii(struct mtk_eth *eth, int path)
+{
+ unsigned int val = 0;
+ bool updated = true;
+
+ regmap_read(eth->ethsys, ETHSYS_SYSCFG0, &val);
+
+ switch (path) {
+ case MTK_ETH_PATH_GMAC1_SGMII:
+ val |= SYSCFG0_SGMII_GMAC1_V2;
+ break;
+ case MTK_ETH_PATH_GMAC2_GEPHY:
+ val &= ~(u32)SYSCFG0_SGMII_GMAC2_V2;
+ break;
+ case MTK_ETH_PATH_GMAC2_SGMII:
+ val |= SYSCFG0_SGMII_GMAC2_V2;
+ break;
+ default:
+ updated = false;
+ }
+
+ if (updated)
+ regmap_update_bits(eth->ethsys, ETHSYS_SYSCFG0,
+ SYSCFG0_SGMII_MASK, val);
+
+ dev_dbg(eth->dev, "path %s in %s updated = %d\n",
+ mtk_eth_path_name(path), __func__, updated);
+
+ return 0;
+}
+
+static const struct mtk_eth_muxc mtk_eth_muxc[] = {
+ {
+ .name = "mux_gdm1_to_gmac1_esw",
+ .cap_bit = MTK_ETH_MUX_GDM1_TO_GMAC1_ESW,
+ .set_path = set_mux_gdm1_to_gmac1_esw,
+ }, {
+ .name = "mux_gmac2_gmac0_to_gephy",
+ .cap_bit = MTK_ETH_MUX_GMAC2_GMAC0_TO_GEPHY,
+ .set_path = set_mux_gmac2_gmac0_to_gephy,
+ }, {
+ .name = "mux_u3_gmac2_to_qphy",
+ .cap_bit = MTK_ETH_MUX_U3_GMAC2_TO_QPHY,
+ .set_path = set_mux_u3_gmac2_to_qphy,
+ }, {
+ .name = "mux_gmac1_gmac2_to_sgmii_rgmii",
+ .cap_bit = MTK_ETH_MUX_GMAC1_GMAC2_TO_SGMII_RGMII,
+ .set_path = set_mux_gmac1_gmac2_to_sgmii_rgmii,
+ }, {
+ .name = "mux_gmac12_to_gephy_sgmii",
+ .cap_bit = MTK_ETH_MUX_GMAC12_TO_GEPHY_SGMII,
+ .set_path = set_mux_gmac12_to_gephy_sgmii,
+ },
+};
+
+static int mtk_eth_mux_setup(struct mtk_eth *eth, int path)
+{
+ int i, err = 0;
+
+ if (!MTK_HAS_CAPS(eth->soc->caps, path)) {
+ dev_err(eth->dev, "path %s isn't support on the SoC\n",
+ mtk_eth_path_name(path));
+ return -EINVAL;
+ }
+
+ if (!MTK_HAS_CAPS(eth->soc->caps, MTK_MUX))
+ return 0;
+
+ /* Setup MUX in path fabric */
+ for (i = 0; i < ARRAY_SIZE(mtk_eth_muxc); i++) {
+ if (MTK_HAS_CAPS(eth->soc->caps, mtk_eth_muxc[i].cap_bit)) {
+ err = mtk_eth_muxc[i].set_path(eth, path);
+ if (err)
+ goto out;
+ } else {
+ dev_dbg(eth->dev, "mux %s isn't present on the SoC\n",
+ mtk_eth_muxc[i].name);
+ }
+ }
+
+out:
+ return err;
+}
+
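
To make the fan-out concrete, a worked example derived from the MT7629 capability mask defined later in mtk_eth_soc.h (illustrative only, not part of the patch):

	/* mtk_eth_mux_setup(eth, MTK_ETH_PATH_GMAC2_SGMII) on an MT7629-class
	 * SoC visits every advertised mux:
	 *  - mux_gdm1_to_gmac1_esw and mux_gmac2_gmac0_to_gephy hit their
	 *    default case and leave the hardware untouched;
	 *  - mux_u3_gmac2_to_qphy hands the QPHY to GMAC2 (CO_QPHY_SEL);
	 *  - mux_gmac12_to_gephy_sgmii selects SGMII for GMAC2 in SYSCFG0.
	 */
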
+static int mtk_gmac_sgmii_path_setup(struct mtk_eth *eth, int mac_id)
+{
+ unsigned int val = 0;
+ int sid, err, path;
+
+ path = (mac_id == 0) ? MTK_ETH_PATH_GMAC1_SGMII :
+ MTK_ETH_PATH_GMAC2_SGMII;
+
+ /* Setup proper MUXes along the path */
+ err = mtk_eth_mux_setup(eth, path);
+ if (err)
+ return err;
+
+ /* The path from GMAC to SGMII will be enabled once the SGMIISYS setup
+ * is done.
+ */
+ regmap_read(eth->ethsys, ETHSYS_SYSCFG0, &val);
+
+ regmap_update_bits(eth->ethsys, ETHSYS_SYSCFG0,
+ SYSCFG0_SGMII_MASK, ~(u32)SYSCFG0_SGMII_MASK);
+
+ /* Decide how GMAC and SGMIISYS are mapped */
+ sid = (MTK_HAS_CAPS(eth->soc->caps, MTK_SHARED_SGMII)) ? 0 : mac_id;
+
+ /* Setup SGMIISYS with the determined property */
+ if (MTK_HAS_FLAGS(eth->sgmii->flags[sid], MTK_SGMII_PHYSPEED_AN))
+ err = mtk_sgmii_setup_mode_an(eth->sgmii, sid);
+ else
+ err = mtk_sgmii_setup_mode_force(eth->sgmii, sid);
+
+ if (err)
+ return err;
+
+ regmap_update_bits(eth->ethsys, ETHSYS_SYSCFG0,
+ SYSCFG0_SGMII_MASK, val);
+
+ return 0;
+}
+
+static int mtk_gmac_gephy_path_setup(struct mtk_eth *eth, int mac_id)
+{
+ int err, path = 0;
+
+ if (mac_id == 1)
+ path = MTK_ETH_PATH_GMAC2_GEPHY;
+
+ if (!path)
+ return -EINVAL;
+
+ /* Setup proper MUXes along the path */
+ err = mtk_eth_mux_setup(eth, path);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+static int mtk_gmac_rgmii_path_setup(struct mtk_eth *eth, int mac_id)
+{
+ int err, path;
+
+ path = (mac_id == 0) ? MTK_ETH_PATH_GMAC1_RGMII :
+ MTK_ETH_PATH_GMAC2_RGMII;
+
+ /* Setup proper MUXes along the path */
+ err = mtk_eth_mux_setup(eth, path);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+int mtk_setup_hw_path(struct mtk_eth *eth, int mac_id, int phymode)
+{
+ int err;
+
+ switch (phymode) {
+ case PHY_INTERFACE_MODE_TRGMII:
+ case PHY_INTERFACE_MODE_RGMII_TXID:
+ case PHY_INTERFACE_MODE_RGMII_RXID:
+ case PHY_INTERFACE_MODE_RGMII_ID:
+ case PHY_INTERFACE_MODE_RGMII:
+ case PHY_INTERFACE_MODE_MII:
+ case PHY_INTERFACE_MODE_REVMII:
+ case PHY_INTERFACE_MODE_RMII:
+ if (MTK_HAS_CAPS(eth->soc->caps, MTK_RGMII)) {
+ err = mtk_gmac_rgmii_path_setup(eth, mac_id);
+ if (err)
+ return err;
+ }
+ break;
+ case PHY_INTERFACE_MODE_SGMII:
+ if (MTK_HAS_CAPS(eth->soc->caps, MTK_SGMII)) {
+ err = mtk_gmac_sgmii_path_setup(eth, mac_id);
+ if (err)
+ return err;
+ }
+ break;
+ case PHY_INTERFACE_MODE_GMII:
+ if (MTK_HAS_CAPS(eth->soc->caps, MTK_GEPHY)) {
+ err = mtk_gmac_gephy_path_setup(eth, mac_id);
+ if (err)
+ return err;
+ }
+ break;
+ default:
+ break;
+ }
+
+ return 0;
+}
diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
index 6cfffb64cd51..b20b3a5a1ebb 100644
--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
@@ -48,8 +48,10 @@ static const struct mtk_ethtool_stats {
};
static const char * const mtk_clks_source_name[] = {
- "ethif", "esw", "gp0", "gp1", "gp2", "trgpll", "sgmii_tx250m",
- "sgmii_rx250m", "sgmii_cdr_ref", "sgmii_cdr_fb", "sgmii_ck", "eth2pll"
+ "ethif", "sgmiitop", "esw", "gp0", "gp1", "gp2", "fe", "trgpll",
+ "sgmii_tx250m", "sgmii_rx250m", "sgmii_cdr_ref", "sgmii_cdr_fb",
+ "sgmii2_tx250m", "sgmii2_rx250m", "sgmii2_cdr_ref", "sgmii2_cdr_fb",
+ "sgmii_ck", "eth2pll",
};
void mtk_w32(struct mtk_eth *eth, u32 val, unsigned reg)
@@ -132,6 +134,31 @@ static int mtk_mdio_read(struct mii_bus *bus, int phy_addr, int phy_reg)
return _mtk_mdio_read(eth, phy_addr, phy_reg);
}
+static int mt7621_gmac0_rgmii_adjust(struct mtk_eth *eth,
+ phy_interface_t interface)
+{
+ u32 val;
+
+ /* Check DDR memory type.
+ * Currently TRGMII mode with DDR2 memory is not supported.
+ */
+ regmap_read(eth->ethsys, ETHSYS_SYSCFG, &val);
+ if (interface == PHY_INTERFACE_MODE_TRGMII &&
+ val & SYSCFG_DRAM_TYPE_DDR2) {
+ dev_err(eth->dev,
+ "TRGMII mode with DDR2 memory is not supported!\n");
+ return -EOPNOTSUPP;
+ }
+
+ val = (interface == PHY_INTERFACE_MODE_TRGMII) ?
+ ETHSYS_TRGMII_MT7621_DDR_PLL : 0;
+
+ regmap_update_bits(eth->ethsys, ETHSYS_CLKCFG0,
+ ETHSYS_TRGMII_MT7621_MASK, val);
+
+ return 0;
+}
+
static void mtk_gmac0_rgmii_adjust(struct mtk_eth *eth, int speed)
{
u32 val;
@@ -159,47 +186,6 @@ static void mtk_gmac0_rgmii_adjust(struct mtk_eth *eth, int speed)
mtk_w32(eth, val, TRGMII_TCK_CTRL);
}
-static void mtk_gmac_sgmii_hw_setup(struct mtk_eth *eth, int mac_id)
-{
- u32 val;
-
- /* Setup the link timer and QPHY power up inside SGMIISYS */
- regmap_write(eth->sgmiisys, SGMSYS_PCS_LINK_TIMER,
- SGMII_LINK_TIMER_DEFAULT);
-
- regmap_read(eth->sgmiisys, SGMSYS_SGMII_MODE, &val);
- val |= SGMII_REMOTE_FAULT_DIS;
- regmap_write(eth->sgmiisys, SGMSYS_SGMII_MODE, val);
-
- regmap_read(eth->sgmiisys, SGMSYS_PCS_CONTROL_1, &val);
- val |= SGMII_AN_RESTART;
- regmap_write(eth->sgmiisys, SGMSYS_PCS_CONTROL_1, val);
-
- regmap_read(eth->sgmiisys, SGMSYS_QPHY_PWR_STATE_CTRL, &val);
- val &= ~SGMII_PHYA_PWD;
- regmap_write(eth->sgmiisys, SGMSYS_QPHY_PWR_STATE_CTRL, val);
-
- /* Determine MUX for which GMAC uses the SGMII interface */
- if (MTK_HAS_CAPS(eth->soc->caps, MTK_DUAL_GMAC_SHARED_SGMII)) {
- regmap_read(eth->ethsys, ETHSYS_SYSCFG0, &val);
- val &= ~SYSCFG0_SGMII_MASK;
- val |= !mac_id ? SYSCFG0_SGMII_GMAC1 : SYSCFG0_SGMII_GMAC2;
- regmap_write(eth->ethsys, ETHSYS_SYSCFG0, val);
-
- dev_info(eth->dev, "setup shared sgmii for gmac=%d\n",
- mac_id);
- }
-
- /* Setup the GMAC1 going through SGMII path when SoC also support
- * ESW on GMAC1
- */
- if (MTK_HAS_CAPS(eth->soc->caps, MTK_GMAC1_ESW | MTK_GMAC1_SGMII) &&
- !mac_id) {
- mtk_w32(eth, 0, MTK_MAC_MISC);
- dev_info(eth->dev, "setup gmac1 going through sgmii");
- }
-}
-
static void mtk_phy_link_adjust(struct net_device *dev)
{
struct mtk_mac *mac = netdev_priv(dev);
@@ -222,9 +208,17 @@ static void mtk_phy_link_adjust(struct net_device *dev)
break;
}
- if (MTK_HAS_CAPS(mac->hw->soc->caps, MTK_GMAC1_TRGMII) &&
- !mac->id && !mac->trgmii)
- mtk_gmac0_rgmii_adjust(mac->hw, dev->phydev->speed);
+ if (MTK_HAS_CAPS(mac->hw->soc->caps, MTK_GMAC1_TRGMII) && !mac->id) {
+ if (MTK_HAS_CAPS(mac->hw->soc->caps, MTK_TRGMII_MT7621_CLK)) {
+ if (mt7621_gmac0_rgmii_adjust(mac->hw,
+ dev->phydev->interface))
+ return;
+ } else {
+ if (!mac->trgmii)
+ mtk_gmac0_rgmii_adjust(mac->hw,
+ dev->phydev->speed);
+ }
+ }
if (dev->phydev->link)
mcr |= MAC_MCR_FORCE_LINK;
@@ -289,6 +283,7 @@ static int mtk_phy_connect(struct net_device *dev)
struct mtk_eth *eth;
struct device_node *np;
u32 val;
+ int err;
eth = mac->hw;
np = of_parse_phandle(mac->of_node, "phy-handle", 0);
@@ -298,6 +293,10 @@ static int mtk_phy_connect(struct net_device *dev)
if (!np)
return -ENODEV;
+ err = mtk_setup_hw_path(eth, mac->id, of_get_phy_mode(np));
+ if (err)
+ goto err_phy;
+
mac->ge_mode = 0;
switch (of_get_phy_mode(np)) {
case PHY_INTERFACE_MODE_TRGMII:
@@ -306,12 +305,10 @@ static int mtk_phy_connect(struct net_device *dev)
case PHY_INTERFACE_MODE_RGMII_RXID:
case PHY_INTERFACE_MODE_RGMII_ID:
case PHY_INTERFACE_MODE_RGMII:
- break;
case PHY_INTERFACE_MODE_SGMII:
- if (MTK_HAS_CAPS(eth->soc->caps, MTK_SGMII))
- mtk_gmac_sgmii_hw_setup(eth, mac->id);
break;
case PHY_INTERFACE_MODE_MII:
+ case PHY_INTERFACE_MODE_GMII:
mac->ge_mode = 1;
break;
case PHY_INTERFACE_MODE_REVMII:
@@ -2477,16 +2474,28 @@ static int mtk_probe(struct platform_device *pdev)
return PTR_ERR(eth->ethsys);
}
- if (MTK_HAS_CAPS(eth->soc->caps, MTK_SGMII)) {
- eth->sgmiisys =
- syscon_regmap_lookup_by_phandle(pdev->dev.of_node,
- "mediatek,sgmiisys");
- if (IS_ERR(eth->sgmiisys)) {
- dev_err(&pdev->dev, "no sgmiisys regmap found\n");
- return PTR_ERR(eth->sgmiisys);
+ if (MTK_HAS_CAPS(eth->soc->caps, MTK_INFRA)) {
+ eth->infra = syscon_regmap_lookup_by_phandle(pdev->dev.of_node,
+ "mediatek,infracfg");
+ if (IS_ERR(eth->infra)) {
+ dev_err(&pdev->dev, "no infracfg regmap found\n");
+ return PTR_ERR(eth->infra);
}
}
+ if (MTK_HAS_CAPS(eth->soc->caps, MTK_SGMII)) {
+ eth->sgmii = devm_kzalloc(eth->dev, sizeof(*eth->sgmii),
+ GFP_KERNEL);
+ if (!eth->sgmii)
+ return -ENOMEM;
+
+ err = mtk_sgmii_init(eth->sgmii, pdev->dev.of_node,
+ eth->soc->ana_rgc3);
+
+ if (err)
+ return err;
+ }
+
if (eth->soc->required_pctl) {
eth->pctl = syscon_regmap_lookup_by_phandle(pdev->dev.of_node,
"mediatek,pctl");
@@ -2625,34 +2634,43 @@ static int mtk_remove(struct platform_device *pdev)
}
static const struct mtk_soc_data mt2701_data = {
- .caps = MTK_GMAC1_TRGMII | MTK_HWLRO,
+ .caps = MT7623_CAPS | MTK_HWLRO,
.required_clks = MT7623_CLKS_BITMAP,
.required_pctl = true,
};
static const struct mtk_soc_data mt7621_data = {
- .caps = MTK_SHARED_INT,
+ .caps = MT7621_CAPS,
.required_clks = MT7621_CLKS_BITMAP,
.required_pctl = false,
};
static const struct mtk_soc_data mt7622_data = {
- .caps = MTK_DUAL_GMAC_SHARED_SGMII | MTK_GMAC1_ESW | MTK_HWLRO,
+ .ana_rgc3 = 0x2028,
+ .caps = MT7622_CAPS | MTK_HWLRO,
.required_clks = MT7622_CLKS_BITMAP,
.required_pctl = false,
};
static const struct mtk_soc_data mt7623_data = {
- .caps = MTK_GMAC1_TRGMII | MTK_HWLRO,
+ .caps = MT7623_CAPS | MTK_HWLRO,
.required_clks = MT7623_CLKS_BITMAP,
.required_pctl = true,
};
+static const struct mtk_soc_data mt7629_data = {
+ .ana_rgc3 = 0x128,
+ .caps = MT7629_CAPS | MTK_HWLRO,
+ .required_clks = MT7629_CLKS_BITMAP,
+ .required_pctl = false,
+};
+
const struct of_device_id of_mtk_match[] = {
{ .compatible = "mediatek,mt2701-eth", .data = &mt2701_data},
{ .compatible = "mediatek,mt7621-eth", .data = &mt7621_data},
{ .compatible = "mediatek,mt7622-eth", .data = &mt7622_data},
{ .compatible = "mediatek,mt7623-eth", .data = &mt7623_data},
+ { .compatible = "mediatek,mt7629-eth", .data = &mt7629_data},
{},
};
MODULE_DEVICE_TABLE(of, of_mtk_match);
diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
index baa85d5601e7..c6be599ed94d 100644
--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
@@ -9,6 +9,10 @@
#ifndef MTK_ETH_H
#define MTK_ETH_H
+#include <linux/dma-mapping.h>
+#include <linux/netdevice.h>
+#include <linux/of_net.h>
+#include <linux/u64_stats_sync.h>
#include <linux/refcount.h>
#define MTK_QDMA_PAGE_SIZE 2048
@@ -359,17 +363,27 @@
#define MT7622_ETH 7622
#define MT7621_ETH 7621
+/* ethernet system control register */
+#define ETHSYS_SYSCFG 0x10
+#define SYSCFG_DRAM_TYPE_DDR2 BIT(4)
+
/* ethernet subsystem config register */
#define ETHSYS_SYSCFG0 0x14
#define SYSCFG0_GE_MASK 0x3
#define SYSCFG0_GE_MODE(x, y) (x << (12 + (y * 2)))
-#define SYSCFG0_SGMII_MASK (3 << 8)
-#define SYSCFG0_SGMII_GMAC1 ((2 << 8) & GENMASK(9, 8))
-#define SYSCFG0_SGMII_GMAC2 ((3 << 8) & GENMASK(9, 8))
+#define SYSCFG0_SGMII_MASK GENMASK(9, 8)
+#define SYSCFG0_SGMII_GMAC1 ((2 << 8) & SYSCFG0_SGMII_MASK)
+#define SYSCFG0_SGMII_GMAC2 ((3 << 8) & SYSCFG0_SGMII_MASK)
+#define SYSCFG0_SGMII_GMAC1_V2 BIT(9)
+#define SYSCFG0_SGMII_GMAC2_V2 BIT(8)
+
/* ethernet subsystem clock register */
#define ETHSYS_CLKCFG0 0x2c
#define ETHSYS_TRGMII_CLK_SEL362_5 BIT(11)
+#define ETHSYS_TRGMII_MT7621_MASK (BIT(5) | BIT(6))
+#define ETHSYS_TRGMII_MT7621_APLL BIT(6)
+#define ETHSYS_TRGMII_MT7621_DDR_PLL BIT(5)
/* ethernet reset control register */
#define ETHSYS_RSTCTRL 0x34
@@ -393,6 +407,11 @@
#define SGMSYS_QPHY_PWR_STATE_CTRL 0xe8
#define SGMII_PHYA_PWD BIT(4)
+/* Infrasys subsystem config registers */
+#define INFRA_MISC2 0x70c
+#define CO_QPHY_SEL BIT(0)
+#define GEPHY_MAC_SEL BIT(1)
+
struct mtk_rx_dma {
unsigned int rxd1;
unsigned int rxd2;
@@ -457,15 +476,21 @@ enum mtk_tx_flags {
*/
enum mtk_clks_map {
MTK_CLK_ETHIF,
+ MTK_CLK_SGMIITOP,
MTK_CLK_ESW,
MTK_CLK_GP0,
MTK_CLK_GP1,
MTK_CLK_GP2,
+ MTK_CLK_FE,
MTK_CLK_TRGPLL,
MTK_CLK_SGMII_TX_250M,
MTK_CLK_SGMII_RX_250M,
MTK_CLK_SGMII_CDR_REF,
MTK_CLK_SGMII_CDR_FB,
+ MTK_CLK_SGMII2_TX_250M,
+ MTK_CLK_SGMII2_RX_250M,
+ MTK_CLK_SGMII2_CDR_REF,
+ MTK_CLK_SGMII2_CDR_FB,
MTK_CLK_SGMII_CK,
MTK_CLK_ETH2PLL,
MTK_CLK_MAX
@@ -484,6 +509,19 @@ enum mtk_clks_map {
BIT(MTK_CLK_SGMII_CK) | \
BIT(MTK_CLK_ETH2PLL))
#define MT7621_CLKS_BITMAP (0)
+#define MT7629_CLKS_BITMAP (BIT(MTK_CLK_ETHIF) | BIT(MTK_CLK_ESW) | \
+ BIT(MTK_CLK_GP0) | BIT(MTK_CLK_GP1) | \
+ BIT(MTK_CLK_GP2) | BIT(MTK_CLK_FE) | \
+ BIT(MTK_CLK_SGMII_TX_250M) | \
+ BIT(MTK_CLK_SGMII_RX_250M) | \
+ BIT(MTK_CLK_SGMII_CDR_REF) | \
+ BIT(MTK_CLK_SGMII_CDR_FB) | \
+ BIT(MTK_CLK_SGMII2_TX_250M) | \
+ BIT(MTK_CLK_SGMII2_RX_250M) | \
+ BIT(MTK_CLK_SGMII2_CDR_REF) | \
+ BIT(MTK_CLK_SGMII2_CDR_FB) | \
+ BIT(MTK_CLK_SGMII_CK) | \
+ BIT(MTK_CLK_ETH2PLL) | BIT(MTK_CLK_SGMIITOP))
enum mtk_dev_state {
MTK_HW_INIT,
@@ -554,21 +592,120 @@ struct mtk_rx_ring {
u32 crx_idx_reg;
};
-#define MTK_TRGMII BIT(0)
-#define MTK_GMAC1_TRGMII (BIT(1) | MTK_TRGMII)
-#define MTK_ESW BIT(4)
-#define MTK_GMAC1_ESW (BIT(5) | MTK_ESW)
-#define MTK_SGMII BIT(8)
-#define MTK_GMAC1_SGMII (BIT(9) | MTK_SGMII)
-#define MTK_GMAC2_SGMII (BIT(10) | MTK_SGMII)
-#define MTK_DUAL_GMAC_SHARED_SGMII (BIT(11) | MTK_GMAC1_SGMII | \
- MTK_GMAC2_SGMII)
-#define MTK_HWLRO BIT(12)
-#define MTK_SHARED_INT BIT(13)
+enum mkt_eth_capabilities {
+ MTK_RGMII_BIT = 0,
+ MTK_TRGMII_BIT,
+ MTK_SGMII_BIT,
+ MTK_ESW_BIT,
+ MTK_GEPHY_BIT,
+ MTK_MUX_BIT,
+ MTK_INFRA_BIT,
+ MTK_SHARED_SGMII_BIT,
+ MTK_HWLRO_BIT,
+ MTK_SHARED_INT_BIT,
+ MTK_TRGMII_MT7621_CLK_BIT,
+
+ /* MUX BITS */
+ MTK_ETH_MUX_GDM1_TO_GMAC1_ESW_BIT,
+ MTK_ETH_MUX_GMAC2_GMAC0_TO_GEPHY_BIT,
+ MTK_ETH_MUX_U3_GMAC2_TO_QPHY_BIT,
+ MTK_ETH_MUX_GMAC1_GMAC2_TO_SGMII_RGMII_BIT,
+ MTK_ETH_MUX_GMAC12_TO_GEPHY_SGMII_BIT,
+
+ /* PATH BITS */
+ MTK_ETH_PATH_GMAC1_RGMII_BIT,
+ MTK_ETH_PATH_GMAC1_TRGMII_BIT,
+ MTK_ETH_PATH_GMAC1_SGMII_BIT,
+ MTK_ETH_PATH_GMAC2_RGMII_BIT,
+ MTK_ETH_PATH_GMAC2_SGMII_BIT,
+ MTK_ETH_PATH_GMAC2_GEPHY_BIT,
+ MTK_ETH_PATH_GDM1_ESW_BIT,
+};
+
+/* Supported hardware group on SoCs */
+#define MTK_RGMII BIT(MTK_RGMII_BIT)
+#define MTK_TRGMII BIT(MTK_TRGMII_BIT)
+#define MTK_SGMII BIT(MTK_SGMII_BIT)
+#define MTK_ESW BIT(MTK_ESW_BIT)
+#define MTK_GEPHY BIT(MTK_GEPHY_BIT)
+#define MTK_MUX BIT(MTK_MUX_BIT)
+#define MTK_INFRA BIT(MTK_INFRA_BIT)
+#define MTK_SHARED_SGMII BIT(MTK_SHARED_SGMII_BIT)
+#define MTK_HWLRO BIT(MTK_HWLRO_BIT)
+#define MTK_SHARED_INT BIT(MTK_SHARED_INT_BIT)
+#define MTK_TRGMII_MT7621_CLK BIT(MTK_TRGMII_MT7621_CLK_BIT)
+
+#define MTK_ETH_MUX_GDM1_TO_GMAC1_ESW \
+ BIT(MTK_ETH_MUX_GDM1_TO_GMAC1_ESW_BIT)
+#define MTK_ETH_MUX_GMAC2_GMAC0_TO_GEPHY \
+ BIT(MTK_ETH_MUX_GMAC2_GMAC0_TO_GEPHY_BIT)
+#define MTK_ETH_MUX_U3_GMAC2_TO_QPHY \
+ BIT(MTK_ETH_MUX_U3_GMAC2_TO_QPHY_BIT)
+#define MTK_ETH_MUX_GMAC1_GMAC2_TO_SGMII_RGMII \
+ BIT(MTK_ETH_MUX_GMAC1_GMAC2_TO_SGMII_RGMII_BIT)
+#define MTK_ETH_MUX_GMAC12_TO_GEPHY_SGMII \
+ BIT(MTK_ETH_MUX_GMAC12_TO_GEPHY_SGMII_BIT)
+
+/* Supported path present on SoCs */
+#define MTK_ETH_PATH_GMAC1_RGMII BIT(MTK_ETH_PATH_GMAC1_RGMII_BIT)
+#define MTK_ETH_PATH_GMAC1_TRGMII BIT(MTK_ETH_PATH_GMAC1_TRGMII_BIT)
+#define MTK_ETH_PATH_GMAC1_SGMII BIT(MTK_ETH_PATH_GMAC1_SGMII_BIT)
+#define MTK_ETH_PATH_GMAC2_RGMII BIT(MTK_ETH_PATH_GMAC2_RGMII_BIT)
+#define MTK_ETH_PATH_GMAC2_SGMII BIT(MTK_ETH_PATH_GMAC2_SGMII_BIT)
+#define MTK_ETH_PATH_GMAC2_GEPHY BIT(MTK_ETH_PATH_GMAC2_GEPHY_BIT)
+#define MTK_ETH_PATH_GDM1_ESW BIT(MTK_ETH_PATH_GDM1_ESW_BIT)
+
+#define MTK_GMAC1_RGMII (MTK_ETH_PATH_GMAC1_RGMII | MTK_RGMII)
+#define MTK_GMAC1_TRGMII (MTK_ETH_PATH_GMAC1_TRGMII | MTK_TRGMII)
+#define MTK_GMAC1_SGMII (MTK_ETH_PATH_GMAC1_SGMII | MTK_SGMII)
+#define MTK_GMAC2_RGMII (MTK_ETH_PATH_GMAC2_RGMII | MTK_RGMII)
+#define MTK_GMAC2_SGMII (MTK_ETH_PATH_GMAC2_SGMII | MTK_SGMII)
+#define MTK_GMAC2_GEPHY (MTK_ETH_PATH_GMAC2_GEPHY | MTK_GEPHY)
+#define MTK_GDM1_ESW (MTK_ETH_PATH_GDM1_ESW | MTK_ESW)
+
+/* MUXes present on SoCs */
+/* 0: GDM1 -> GMAC1, 1: GDM1 -> ESW */
+#define MTK_MUX_GDM1_TO_GMAC1_ESW (MTK_ETH_MUX_GDM1_TO_GMAC1_ESW | MTK_MUX)
+
+/* 0: GMAC2 -> GEPHY, 1: GMAC0 -> GePHY */
+#define MTK_MUX_GMAC2_GMAC0_TO_GEPHY \
+ (MTK_ETH_MUX_GMAC2_GMAC0_TO_GEPHY | MTK_MUX | MTK_INFRA)
+
+/* 0: U3 -> QPHY, 1: GMAC2 -> QPHY */
+#define MTK_MUX_U3_GMAC2_TO_QPHY \
+ (MTK_ETH_MUX_U3_GMAC2_TO_QPHY | MTK_MUX | MTK_INFRA)
+
+/* 2: GMAC1 -> SGMII, 3: GMAC2 -> SGMII */
+#define MTK_MUX_GMAC1_GMAC2_TO_SGMII_RGMII \
+ (MTK_ETH_MUX_GMAC1_GMAC2_TO_SGMII_RGMII | MTK_MUX | \
+ MTK_SHARED_SGMII)
+
+/* 0: GMACx -> GEPHY, 1: GMACx -> SGMII where x is 1 or 2 */
+#define MTK_MUX_GMAC12_TO_GEPHY_SGMII \
+ (MTK_ETH_MUX_GMAC12_TO_GEPHY_SGMII | MTK_MUX)
+
#define MTK_HAS_CAPS(caps, _x) (((caps) & (_x)) == (_x))
+#define MT7621_CAPS (MTK_GMAC1_RGMII | MTK_GMAC1_TRGMII | \
+ MTK_GMAC2_RGMII | MTK_SHARED_INT | MTK_TRGMII_MT7621_CLK)
+
+#define MT7622_CAPS (MTK_GMAC1_RGMII | MTK_GMAC1_SGMII | MTK_GMAC2_RGMII | \
+ MTK_GMAC2_SGMII | MTK_GDM1_ESW | \
+ MTK_MUX_GDM1_TO_GMAC1_ESW | \
+ MTK_MUX_GMAC1_GMAC2_TO_SGMII_RGMII)
+
+#define MT7623_CAPS (MTK_GMAC1_RGMII | MTK_GMAC1_TRGMII | MTK_GMAC2_RGMII)
+
+#define MT7629_CAPS (MTK_GMAC1_SGMII | MTK_GMAC2_SGMII | MTK_GMAC2_GEPHY | \
+ MTK_GDM1_ESW | MTK_MUX_GDM1_TO_GMAC1_ESW | \
+ MTK_MUX_GMAC2_GMAC0_TO_GEPHY | \
+ MTK_MUX_U3_GMAC2_TO_QPHY | \
+ MTK_MUX_GMAC12_TO_GEPHY_SGMII)
+
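
Because each per-SoC mask folds the hardware bit into the corresponding path bit, a single MTK_HAS_CAPS() test covers both; a worked expansion for MT7622 (values as defined above, shown only for illustration):

	/* MTK_HAS_CAPS(MT7622_CAPS, MTK_GMAC1_SGMII)
	 *	== ((MT7622_CAPS & (MTK_ETH_PATH_GMAC1_SGMII | MTK_SGMII))
	 *	    == (MTK_ETH_PATH_GMAC1_SGMII | MTK_SGMII))
	 *	== true, since MT7622_CAPS contains MTK_GMAC1_SGMII
	 */
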
/* struct mtk_eth_data - This is the structure holding all differences
* among various platforms
+ * @ana_rgc3: The offset for register ANA_RGC3 related to
+ * sgmiisys syscon
* @caps Flags showing the extra capabilities of the SoC
* @required_clks Flags showing the bitmap of clocks required on
* the target SoC
@@ -576,6 +713,7 @@ struct mtk_rx_ring {
* the extra setup for those pins used by GMAC.
*/
struct mtk_soc_data {
+ u32 ana_rgc3;
u32 caps;
u32 required_clks;
bool required_pctl;
@@ -584,6 +722,26 @@ struct mtk_soc_data {
/* currently no SoC has more than 2 macs */
#define MTK_MAX_DEVS 2
+#define MTK_SGMII_PHYSPEED_AN BIT(31)
+#define MTK_SGMII_PHYSPEED_MASK GENMASK(2, 0)
+#define MTK_SGMII_PHYSPEED_1000 BIT(0)
+#define MTK_SGMII_PHYSPEED_2500 BIT(1)
+#define MTK_HAS_FLAGS(flags, _x) (((flags) & (_x)) == (_x))
+
+/* struct mtk_sgmii - This is the structure holding sgmii regmap and its
+ * characteristics
+ * @regmap: The register map pointing at the range used to setup
+ * SGMII modes
+ * @flags: The flags selecting which mode the SGMII should run in
+ * @ana_rgc3: The offset of register ANA_RGC3 relative to the regmap
+ */
+
+struct mtk_sgmii {
+ struct regmap *regmap[MTK_MAX_DEVS];
+ u32 flags[MTK_MAX_DEVS];
+ u32 ana_rgc3;
+};
+
/* struct mtk_eth - This is the main data structure for holding the state
* of the driver
* @dev: The device pointer
@@ -599,8 +757,8 @@ struct mtk_soc_data {
* @msg_enable: Ethtool msg level
* @ethsys: The register map pointing at the range used to setup
* MII modes
- * @sgmiisys: The register map pointing at the range used to setup
- * SGMII modes
+ * @infra: The register map pointing at the range used to setup
+ * SGMII and GePHY path
* @pctl: The register map pointing at the range used to setup
* GMAC port drive/slew values
* @dma_refcnt: track how many netdevs are using the DMA engine
@@ -632,7 +790,8 @@ struct mtk_eth {
u32 msg_enable;
unsigned long sysclk;
struct regmap *ethsys;
- struct regmap *sgmiisys;
+ struct regmap *infra;
+ struct mtk_sgmii *sgmii;
struct regmap *pctl;
bool hwlro;
refcount_t dma_refcnt;
@@ -683,4 +842,10 @@ void mtk_stats_update_mac(struct mtk_mac *mac);
void mtk_w32(struct mtk_eth *eth, u32 val, unsigned reg);
u32 mtk_r32(struct mtk_eth *eth, unsigned reg);
+int mtk_sgmii_init(struct mtk_sgmii *ss, struct device_node *np,
+ u32 ana_rgc3);
+int mtk_sgmii_setup_mode_an(struct mtk_sgmii *ss, int id);
+int mtk_sgmii_setup_mode_force(struct mtk_sgmii *ss, int id);
+int mtk_setup_hw_path(struct mtk_eth *eth, int mac_id, int phymode);
+
#endif /* MTK_ETH_H */
diff --git a/drivers/net/ethernet/mediatek/mtk_sgmii.c b/drivers/net/ethernet/mediatek/mtk_sgmii.c
new file mode 100644
index 000000000000..136f90ce5a65
--- /dev/null
+++ b/drivers/net/ethernet/mediatek/mtk_sgmii.c
@@ -0,0 +1,105 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2018-2019 MediaTek Inc.
+
+/* A library for the MediaTek SGMII circuit
+ *
+ * Author: Sean Wang <sean.wang@mediatek.com>
+ *
+ */
+
+#include <linux/mfd/syscon.h>
+#include <linux/of.h>
+#include <linux/regmap.h>
+
+#include "mtk_eth_soc.h"
+
+int mtk_sgmii_init(struct mtk_sgmii *ss, struct device_node *r, u32 ana_rgc3)
+{
+ struct device_node *np;
+ const char *str;
+ int i, err;
+
+ ss->ana_rgc3 = ana_rgc3;
+
+ for (i = 0; i < MTK_MAX_DEVS; i++) {
+ np = of_parse_phandle(r, "mediatek,sgmiisys", i);
+ if (!np)
+ break;
+
+ ss->regmap[i] = syscon_node_to_regmap(np);
+ if (IS_ERR(ss->regmap[i]))
+ return PTR_ERR(ss->regmap[i]);
+
+ err = of_property_read_string(np, "mediatek,physpeed", &str);
+ if (err)
+ return err;
+
+ if (!strcmp(str, "2500"))
+ ss->flags[i] |= MTK_SGMII_PHYSPEED_2500;
+ else if (!strcmp(str, "1000"))
+ ss->flags[i] |= MTK_SGMII_PHYSPEED_1000;
+ else if (!strcmp(str, "auto"))
+ ss->flags[i] |= MTK_SGMII_PHYSPEED_AN;
+ else
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int mtk_sgmii_setup_mode_an(struct mtk_sgmii *ss, int id)
+{
+ unsigned int val;
+
+ if (!ss->regmap[id])
+ return -EINVAL;
+
+ /* Setup the link timer and QPHY power up inside SGMIISYS */
+ regmap_write(ss->regmap[id], SGMSYS_PCS_LINK_TIMER,
+ SGMII_LINK_TIMER_DEFAULT);
+
+ regmap_read(ss->regmap[id], SGMSYS_SGMII_MODE, &val);
+ val |= SGMII_REMOTE_FAULT_DIS;
+ regmap_write(ss->regmap[id], SGMSYS_SGMII_MODE, val);
+
+ regmap_read(ss->regmap[id], SGMSYS_PCS_CONTROL_1, &val);
+ val |= SGMII_AN_RESTART;
+ regmap_write(ss->regmap[id], SGMSYS_PCS_CONTROL_1, val);
+
+ regmap_read(ss->regmap[id], SGMSYS_QPHY_PWR_STATE_CTRL, &val);
+ val &= ~SGMII_PHYA_PWD;
+ regmap_write(ss->regmap[id], SGMSYS_QPHY_PWR_STATE_CTRL, val);
+
+ return 0;
+}
+
+int mtk_sgmii_setup_mode_force(struct mtk_sgmii *ss, int id)
+{
+ unsigned int val;
+ int mode;
+
+ if (!ss->regmap[id])
+ return -EINVAL;
+
+ regmap_read(ss->regmap[id], ss->ana_rgc3, &val);
+ val &= ~GENMASK(3, 2);
+ mode = ss->flags[id] & MTK_SGMII_PHYSPEED_MASK;
+ val |= (mode == MTK_SGMII_PHYSPEED_1000) ? 0 : BIT(2);
+ regmap_write(ss->regmap[id], ss->ana_rgc3, val);
+
+ /* Disable SGMII AN */
+ regmap_read(ss->regmap[id], SGMSYS_PCS_CONTROL_1, &val);
+ val &= ~BIT(12);
+ regmap_write(ss->regmap[id], SGMSYS_PCS_CONTROL_1, val);
+
+ /* SGMII force mode setting */
+ val = 0x31120019;
+ regmap_write(ss->regmap[id], SGMSYS_SGMII_MODE, val);
+
+ /* Release PHYA power down state */
+ regmap_read(ss->regmap[id], SGMSYS_QPHY_PWR_STATE_CTRL, &val);
+ val &= ~SGMII_PHYA_PWD;
+ regmap_write(ss->regmap[id], SGMSYS_QPHY_PWR_STATE_CTRL, val);
+
+ return 0;
+}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/Kconfig b/drivers/net/ethernet/mellanox/mlx5/core/Kconfig
index 2391e3cfb56b..37fef8cd25e3 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/Kconfig
+++ b/drivers/net/ethernet/mellanox/mlx5/core/Kconfig
@@ -34,6 +34,7 @@ config MLX5_CORE_EN
depends on NETDEVICES && ETHERNET && INET && PCI && MLX5_CORE
depends on IPV6=y || IPV6=n || MLX5_CORE=m
select PAGE_POOL
+ select DIMLIB
default n
---help---
Ethernet support in Mellanox Technologies ConnectX-4 NIC.
@@ -96,26 +97,60 @@ config MLX5_CORE_IPOIB
---help---
MLX5 IPoIB offloads & acceleration support.
+config MLX5_FPGA_IPSEC
+ bool "Mellanox Technologies IPsec Innova support"
+ depends on MLX5_CORE
+ depends on MLX5_FPGA
+ default n
+ help
+ Build IPsec support for the Innova family of network cards by Mellanox
+ Technologies. Innova network cards are comprised of a ConnectX chip
+ and an FPGA chip on one board. If you select this option, the
+ mlx5_core driver will include the Innova FPGA core and allow building
+ sandbox-specific client drivers.
+
config MLX5_EN_IPSEC
bool "IPSec XFRM cryptography-offload accelaration"
- depends on MLX5_ACCEL
depends on MLX5_CORE_EN
depends on XFRM_OFFLOAD
depends on INET_ESP_OFFLOAD || INET6_ESP_OFFLOAD
+ depends on MLX5_FPGA_IPSEC
default n
- ---help---
+ help
Build support for IPsec cryptography-offload acceleration in the NIC.
Note: Support for hardware with this capability needs to be selected
for this option to become available.
-config MLX5_EN_TLS
- bool "TLS cryptography-offload accelaration"
+config MLX5_FPGA_TLS
+ bool "Mellanox Technologies TLS Innova support"
+ depends on TLS_DEVICE
+ depends on TLS=y || MLX5_CORE=m
+ depends on MLX5_FPGA
+ default n
+ help
+ Build TLS support for the Innova family of network cards by Mellanox
+ Technologies. Innova network cards are comprised of a ConnectX chip
+ and an FPGA chip on one board. If you select this option, the
+ mlx5_core driver will include the Innova FPGA core and allow building
+ sandbox-specific client drivers.
+
+config MLX5_TLS
+ bool "Mellanox Technologies TLS Connect-X support"
depends on MLX5_CORE_EN
depends on TLS_DEVICE
depends on TLS=y || MLX5_CORE=m
- depends on MLX5_ACCEL
+ select MLX5_ACCEL
default n
- ---help---
- Build support for TLS cryptography-offload accelaration in the NIC.
- Note: Support for hardware with this capability needs to be selected
- for this option to become available.
+ help
+ Build TLS support for the Connect-X family of network cards by Mellanox
+ Technologies.
+
+config MLX5_EN_TLS
+ bool "TLS cryptography-offload accelaration"
+ depends on MLX5_CORE_EN
+ depends on MLX5_FPGA_TLS || MLX5_TLS
+ default y
+ help
+ Build support for TLS cryptography-offload acceleration in the NIC.
+ Note: Support for hardware with this capability needs to be selected
+ for this option to become available.
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/Makefile b/drivers/net/ethernet/mellanox/mlx5/core/Makefile
index 243368dc23db..57d2cc666fe3 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/Makefile
+++ b/drivers/net/ethernet/mellanox/mlx5/core/Makefile
@@ -13,9 +13,10 @@ obj-$(CONFIG_MLX5_CORE) += mlx5_core.o
#
mlx5_core-y := main.o cmd.o debugfs.o fw.o eq.o uar.o pagealloc.o \
health.o mcg.o cq.o alloc.o qp.o port.o mr.o pd.o \
- transobj.o vport.o sriov.o fs_cmd.o fs_core.o \
+ transobj.o vport.o sriov.o fs_cmd.o fs_core.o pci_irq.o \
fs_counters.o rl.o lag.o dev.o events.o wq.o lib/gid.o \
- lib/devcom.o diag/fs_tracepoint.o diag/fw_tracer.o
+ lib/devcom.o lib/pci_vsc.o diag/fs_tracepoint.o \
+ diag/fw_tracer.o diag/crdump.o devlink.o
#
# Netdev basic
@@ -23,7 +24,7 @@ mlx5_core-y := main.o cmd.o debugfs.o fw.o eq.o uar.o pagealloc.o \
mlx5_core-$(CONFIG_MLX5_CORE_EN) += en_main.o en_common.o en_fs.o en_ethtool.o \
en_tx.o en_rx.o en_dim.o en_txrx.o en/xdp.o en_stats.o \
en_selftest.o en/port.o en/monitor_stats.o en/reporter_tx.o \
- en/params.o
+ en/params.o en/xsk/umem.o en/xsk/setup.o en/xsk/rx.o en/xsk/tx.o
#
# Netdev extra
@@ -31,12 +32,15 @@ mlx5_core-$(CONFIG_MLX5_CORE_EN) += en_main.o en_common.o en_fs.o en_ethtool.o \
mlx5_core-$(CONFIG_MLX5_EN_ARFS) += en_arfs.o
mlx5_core-$(CONFIG_MLX5_EN_RXNFC) += en_fs_ethtool.o
mlx5_core-$(CONFIG_MLX5_CORE_EN_DCB) += en_dcbnl.o en/port_buffer.o
-mlx5_core-$(CONFIG_MLX5_ESWITCH) += en_rep.o en_tc.o en/tc_tun.o lib/port_tun.o lag_mp.o
+mlx5_core-$(CONFIG_MLX5_ESWITCH) += en_rep.o en_tc.o en/tc_tun.o lib/port_tun.o lag_mp.o \
+ lib/geneve.o en/tc_tun_vxlan.o en/tc_tun_gre.o \
+ en/tc_tun_geneve.o
#
# Core extra
#
-mlx5_core-$(CONFIG_MLX5_ESWITCH) += eswitch.o eswitch_offloads.o ecpf.o rdma.o
+mlx5_core-$(CONFIG_MLX5_ESWITCH) += eswitch.o eswitch_offloads.o eswitch_offloads_termtbl.o \
+ ecpf.o rdma.o
mlx5_core-$(CONFIG_MLX5_MPFS) += lib/mpfs.o
mlx5_core-$(CONFIG_VXLAN) += lib/vxlan.o
mlx5_core-$(CONFIG_PTP_1588_CLOCK) += lib/clock.o
@@ -49,12 +53,14 @@ mlx5_core-$(CONFIG_MLX5_CORE_IPOIB) += ipoib/ipoib.o ipoib/ethtool.o ipoib/ipoib
#
# Accelerations & FPGA
#
-mlx5_core-$(CONFIG_MLX5_ACCEL) += accel/ipsec.o accel/tls.o
+mlx5_core-$(CONFIG_MLX5_FPGA_IPSEC) += fpga/ipsec.o
+mlx5_core-$(CONFIG_MLX5_FPGA_TLS) += fpga/tls.o
+mlx5_core-$(CONFIG_MLX5_ACCEL) += lib/crypto.o accel/tls.o accel/ipsec.o
-mlx5_core-$(CONFIG_MLX5_FPGA) += fpga/cmd.o fpga/core.o fpga/conn.o fpga/sdk.o \
- fpga/ipsec.o fpga/tls.o
+mlx5_core-$(CONFIG_MLX5_FPGA) += fpga/cmd.o fpga/core.o fpga/conn.o fpga/sdk.o
mlx5_core-$(CONFIG_MLX5_EN_IPSEC) += en_accel/ipsec.o en_accel/ipsec_rxtx.o \
en_accel/ipsec_stats.o
-mlx5_core-$(CONFIG_MLX5_EN_TLS) += en_accel/tls.o en_accel/tls_rxtx.o en_accel/tls_stats.o
+mlx5_core-$(CONFIG_MLX5_EN_TLS) += en_accel/tls.o en_accel/tls_rxtx.o en_accel/tls_stats.o \
+ en_accel/ktls.o en_accel/ktls_tx.o
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/accel/ipsec.c b/drivers/net/ethernet/mellanox/mlx5/core/accel/ipsec.c
index 9f1b1939716a..eddc34e4a762 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/accel/ipsec.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/accel/ipsec.c
@@ -31,6 +31,8 @@
*
*/
+#ifdef CONFIG_MLX5_FPGA_IPSEC
+
#include <linux/mlx5/device.h>
#include "accel/ipsec.h"
@@ -74,6 +76,11 @@ int mlx5_accel_ipsec_init(struct mlx5_core_dev *mdev)
return mlx5_fpga_ipsec_init(mdev);
}
+void mlx5_accel_ipsec_build_fs_cmds(void)
+{
+ mlx5_fpga_ipsec_build_fs_cmds();
+}
+
void mlx5_accel_ipsec_cleanup(struct mlx5_core_dev *mdev)
{
mlx5_fpga_ipsec_cleanup(mdev);
@@ -107,3 +114,5 @@ int mlx5_accel_esp_modify_xfrm(struct mlx5_accel_esp_xfrm *xfrm,
return mlx5_fpga_esp_modify_xfrm(xfrm, attrs);
}
EXPORT_SYMBOL_GPL(mlx5_accel_esp_modify_xfrm);
+
+#endif
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/accel/ipsec.h b/drivers/net/ethernet/mellanox/mlx5/core/accel/ipsec.h
index 024dbd22a89b..530e428d46ab 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/accel/ipsec.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/accel/ipsec.h
@@ -37,7 +37,7 @@
#include <linux/mlx5/driver.h>
#include <linux/mlx5/accel.h>
-#ifdef CONFIG_MLX5_ACCEL
+#ifdef CONFIG_MLX5_FPGA_IPSEC
#define MLX5_IPSEC_DEV(mdev) (mlx5_accel_ipsec_device_caps(mdev) & \
MLX5_ACCEL_IPSEC_CAP_DEVICE)
@@ -54,6 +54,7 @@ void *mlx5_accel_esp_create_hw_context(struct mlx5_core_dev *mdev,
void mlx5_accel_esp_free_hw_context(void *context);
int mlx5_accel_ipsec_init(struct mlx5_core_dev *mdev);
+void mlx5_accel_ipsec_build_fs_cmds(void);
void mlx5_accel_ipsec_cleanup(struct mlx5_core_dev *mdev);
#else
@@ -79,6 +80,10 @@ static inline int mlx5_accel_ipsec_init(struct mlx5_core_dev *mdev)
return 0;
}
+static inline void mlx5_accel_ipsec_build_fs_cmds(void)
+{
+}
+
static inline void mlx5_accel_ipsec_cleanup(struct mlx5_core_dev *mdev)
{
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/accel/tls.c b/drivers/net/ethernet/mellanox/mlx5/core/accel/tls.c
index da7bd26368f9..cab708af3422 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/accel/tls.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/accel/tls.c
@@ -35,6 +35,9 @@
#include "accel/tls.h"
#include "mlx5_core.h"
+#include "lib/mlx5.h"
+
+#ifdef CONFIG_MLX5_FPGA_TLS
#include "fpga/tls.h"
int mlx5_accel_tls_add_flow(struct mlx5_core_dev *mdev, void *flow,
@@ -61,7 +64,8 @@ int mlx5_accel_tls_resync_rx(struct mlx5_core_dev *mdev, u32 handle, u32 seq,
bool mlx5_accel_is_tls_device(struct mlx5_core_dev *mdev)
{
- return mlx5_fpga_is_tls_device(mdev);
+ return mlx5_fpga_is_tls_device(mdev) ||
+ mlx5_accel_is_ktls_device(mdev);
}
u32 mlx5_accel_tls_device_caps(struct mlx5_core_dev *mdev)
@@ -78,3 +82,42 @@ void mlx5_accel_tls_cleanup(struct mlx5_core_dev *mdev)
{
mlx5_fpga_tls_cleanup(mdev);
}
+#endif
+
+#ifdef CONFIG_MLX5_TLS
+int mlx5_ktls_create_key(struct mlx5_core_dev *mdev,
+ struct tls_crypto_info *crypto_info,
+ u32 *p_key_id)
+{
+ u32 sz_bytes;
+ void *key;
+
+ switch (crypto_info->cipher_type) {
+ case TLS_CIPHER_AES_GCM_128: {
+ struct tls12_crypto_info_aes_gcm_128 *info =
+ (struct tls12_crypto_info_aes_gcm_128 *)crypto_info;
+
+ key = info->key;
+ sz_bytes = sizeof(info->key);
+ break;
+ }
+ case TLS_CIPHER_AES_GCM_256: {
+ struct tls12_crypto_info_aes_gcm_256 *info =
+ (struct tls12_crypto_info_aes_gcm_256 *)crypto_info;
+
+ key = info->key;
+ sz_bytes = sizeof(info->key);
+ break;
+ }
+ default:
+ return -EINVAL;
+ }
+
+ return mlx5_create_encryption_key(mdev, key, sz_bytes, p_key_id);
+}
+
+void mlx5_ktls_destroy_key(struct mlx5_core_dev *mdev, u32 key_id)
+{
+ mlx5_destroy_encryption_key(mdev, key_id);
+}
+#endif
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/accel/tls.h b/drivers/net/ethernet/mellanox/mlx5/core/accel/tls.h
index def4093ebfae..879321b21616 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/accel/tls.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/accel/tls.h
@@ -37,8 +37,51 @@
#include <linux/mlx5/driver.h>
#include <linux/tls.h>
-#ifdef CONFIG_MLX5_ACCEL
+#ifdef CONFIG_MLX5_TLS
+int mlx5_ktls_create_key(struct mlx5_core_dev *mdev,
+ struct tls_crypto_info *crypto_info,
+ u32 *p_key_id);
+void mlx5_ktls_destroy_key(struct mlx5_core_dev *mdev, u32 key_id);
+static inline bool mlx5_accel_is_ktls_device(struct mlx5_core_dev *mdev)
+{
+ if (!MLX5_CAP_GEN(mdev, tls))
+ return false;
+
+ if (!MLX5_CAP_GEN(mdev, log_max_dek))
+ return false;
+
+ return MLX5_CAP_TLS(mdev, tls_1_2_aes_gcm_128);
+}
+
+static inline bool mlx5e_ktls_type_check(struct mlx5_core_dev *mdev,
+ struct tls_crypto_info *crypto_info)
+{
+ switch (crypto_info->cipher_type) {
+ case TLS_CIPHER_AES_GCM_128:
+ if (crypto_info->version == TLS_1_2_VERSION)
+ return MLX5_CAP_TLS(mdev, tls_1_2_aes_gcm_128);
+ break;
+ }
+
+ return false;
+}
+#else
+static inline int
+mlx5_ktls_create_key(struct mlx5_core_dev *mdev,
+ struct tls_crypto_info *crypto_info,
+ u32 *p_key_id) { return -ENOTSUPP; }
+static inline void
+mlx5_ktls_destroy_key(struct mlx5_core_dev *mdev, u32 key_id) {}
+
+static inline bool
+mlx5_accel_is_ktls_device(struct mlx5_core_dev *mdev) { return false; }
+static inline bool
+mlx5e_ktls_type_check(struct mlx5_core_dev *mdev,
+ struct tls_crypto_info *crypto_info) { return false; }
+#endif
+
+#ifdef CONFIG_MLX5_FPGA_TLS
enum {
MLX5_ACCEL_TLS_TX = BIT(0),
MLX5_ACCEL_TLS_RX = BIT(1),
@@ -84,11 +127,13 @@ static inline void mlx5_accel_tls_del_flow(struct mlx5_core_dev *mdev, u32 swid,
bool direction_sx) { }
static inline int mlx5_accel_tls_resync_rx(struct mlx5_core_dev *mdev, u32 handle,
u32 seq, u64 rcd_sn) { return 0; }
-static inline bool mlx5_accel_is_tls_device(struct mlx5_core_dev *mdev) { return false; }
+static inline bool mlx5_accel_is_tls_device(struct mlx5_core_dev *mdev)
+{
+ return mlx5_accel_is_ktls_device(mdev);
+}
static inline u32 mlx5_accel_tls_device_caps(struct mlx5_core_dev *mdev) { return 0; }
static inline int mlx5_accel_tls_init(struct mlx5_core_dev *mdev) { return 0; }
static inline void mlx5_accel_tls_cleanup(struct mlx5_core_dev *mdev) { }
-
#endif
#endif /* __MLX5_ACCEL_TLS_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
index e94686c42000..8cdd7e66f8df 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
@@ -316,7 +316,7 @@ static int mlx5_internal_err_ret_value(struct mlx5_core_dev *dev, u16 op,
case MLX5_CMD_OP_DESTROY_GENERAL_OBJECT:
case MLX5_CMD_OP_DEALLOC_MEMIC:
case MLX5_CMD_OP_PAGE_FAULT_RESUME:
- case MLX5_CMD_OP_QUERY_HOST_PARAMS:
+ case MLX5_CMD_OP_QUERY_ESW_FUNCTIONS:
return MLX5_CMD_STAT_OK;
case MLX5_CMD_OP_QUERY_HCA_CAP:
@@ -632,7 +632,7 @@ const char *mlx5_command_str(int command)
MLX5_COMMAND_STR_CASE(QUERY_MODIFY_HEADER_CONTEXT);
MLX5_COMMAND_STR_CASE(ALLOC_MEMIC);
MLX5_COMMAND_STR_CASE(DEALLOC_MEMIC);
- MLX5_COMMAND_STR_CASE(QUERY_HOST_PARAMS);
+ MLX5_COMMAND_STR_CASE(QUERY_ESW_FUNCTIONS);
MLX5_COMMAND_STR_CASE(CREATE_UCTX);
MLX5_COMMAND_STR_CASE(DESTROY_UCTX);
MLX5_COMMAND_STR_CASE(CREATE_UMEM);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cq.c b/drivers/net/ethernet/mellanox/mlx5/core/cq.c
index 713a17ee3751..818edc63e428 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/cq.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/cq.c
@@ -58,7 +58,7 @@ void mlx5_cq_tasklet_cb(unsigned long data)
list_for_each_entry_safe(mcq, temp, &ctx->process_list,
tasklet_ctx.list) {
list_del_init(&mcq->tasklet_ctx.list);
- mcq->tasklet_ctx.comp(mcq);
+ mcq->tasklet_ctx.comp(mcq, NULL);
mlx5_cq_put(mcq);
if (time_after(jiffies, end))
break;
@@ -68,7 +68,8 @@ void mlx5_cq_tasklet_cb(unsigned long data)
tasklet_schedule(&ctx->task);
}
-static void mlx5_add_cq_to_tasklet(struct mlx5_core_cq *cq)
+static void mlx5_add_cq_to_tasklet(struct mlx5_core_cq *cq,
+ struct mlx5_eqe *eqe)
{
unsigned long flags;
struct mlx5_eq_tasklet *tasklet_ctx = cq->tasklet_ctx.priv;
@@ -87,11 +88,10 @@ static void mlx5_add_cq_to_tasklet(struct mlx5_core_cq *cq)
}
int mlx5_core_create_cq(struct mlx5_core_dev *dev, struct mlx5_core_cq *cq,
- u32 *in, int inlen)
+ u32 *in, int inlen, u32 *out, int outlen)
{
int eqn = MLX5_GET(cqc, MLX5_ADDR_OF(create_cq_in, in, cq_context), c_eqn);
u32 dout[MLX5_ST_SZ_DW(destroy_cq_out)];
- u32 out[MLX5_ST_SZ_DW(create_cq_out)];
u32 din[MLX5_ST_SZ_DW(destroy_cq_in)];
struct mlx5_eq_comp *eq;
int err;
@@ -100,9 +100,9 @@ int mlx5_core_create_cq(struct mlx5_core_dev *dev, struct mlx5_core_cq *cq,
if (IS_ERR(eq))
return PTR_ERR(eq);
- memset(out, 0, sizeof(out));
+ memset(out, 0, outlen);
MLX5_SET(create_cq_in, in, opcode, MLX5_CMD_OP_CREATE_CQ);
- err = mlx5_cmd_exec(dev, in, inlen, out, sizeof(out));
+ err = mlx5_cmd_exec(dev, in, inlen, out, outlen);
if (err)
return err;
@@ -158,13 +158,8 @@ int mlx5_core_destroy_cq(struct mlx5_core_dev *dev, struct mlx5_core_cq *cq)
u32 in[MLX5_ST_SZ_DW(destroy_cq_in)] = {0};
int err;
- err = mlx5_eq_del_cq(mlx5_get_async_eq(dev), cq);
- if (err)
- return err;
-
- err = mlx5_eq_del_cq(&cq->eq->core, cq);
- if (err)
- return err;
+ mlx5_eq_del_cq(mlx5_get_async_eq(dev), cq);
+ mlx5_eq_del_cq(&cq->eq->core, cq);
MLX5_SET(destroy_cq_in, in, opcode, MLX5_CMD_OP_DESTROY_CQ);
MLX5_SET(destroy_cq_in, in, cqn, cq->cqn);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/dev.c b/drivers/net/ethernet/mellanox/mlx5/core/dev.c
index f6b1da99e6c2..5bb6a26ea267 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/dev.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/dev.c
@@ -311,13 +311,20 @@ static u32 mlx5_gen_pci_id(struct mlx5_core_dev *dev)
/* Must be called with intf_mutex held */
struct mlx5_core_dev *mlx5_get_next_phys_dev(struct mlx5_core_dev *dev)
{
- u32 pci_id = mlx5_gen_pci_id(dev);
struct mlx5_core_dev *res = NULL;
struct mlx5_core_dev *tmp_dev;
struct mlx5_priv *priv;
+ u32 pci_id;
+ if (!mlx5_core_is_pf(dev))
+ return NULL;
+
+ pci_id = mlx5_gen_pci_id(dev);
list_for_each_entry(priv, &mlx5_dev_list, dev_list) {
tmp_dev = container_of(priv, struct mlx5_core_dev, priv);
+ if (!mlx5_core_is_pf(tmp_dev))
+ continue;
+
if ((dev != tmp_dev) && (mlx5_gen_pci_id(tmp_dev) == pci_id)) {
res = tmp_dev;
break;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
new file mode 100644
index 000000000000..a400f4430c28
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
@@ -0,0 +1,118 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+/* Copyright (c) 2019 Mellanox Technologies */
+
+#include <devlink.h>
+
+#include "mlx5_core.h"
+#include "eswitch.h"
+
+static int mlx5_devlink_flash_update(struct devlink *devlink,
+ const char *file_name,
+ const char *component,
+ struct netlink_ext_ack *extack)
+{
+ struct mlx5_core_dev *dev = devlink_priv(devlink);
+ const struct firmware *fw;
+ int err;
+
+ if (component)
+ return -EOPNOTSUPP;
+
+ err = request_firmware_direct(&fw, file_name, &dev->pdev->dev);
+ if (err)
+ return err;
+
+ return mlx5_firmware_flash(dev, fw, extack);
+}
+
+static u8 mlx5_fw_ver_major(u32 version)
+{
+ return (version >> 24) & 0xff;
+}
+
+static u8 mlx5_fw_ver_minor(u32 version)
+{
+ return (version >> 16) & 0xff;
+}
+
+static u16 mlx5_fw_ver_subminor(u32 version)
+{
+ return version & 0xffff;
+}
+
+#define DEVLINK_FW_STRING_LEN 32
+
+static int
+mlx5_devlink_info_get(struct devlink *devlink, struct devlink_info_req *req,
+ struct netlink_ext_ack *extack)
+{
+ struct mlx5_core_dev *dev = devlink_priv(devlink);
+ char version_str[DEVLINK_FW_STRING_LEN];
+ u32 running_fw, stored_fw;
+ int err;
+
+ err = devlink_info_driver_name_put(req, DRIVER_NAME);
+ if (err)
+ return err;
+
+ err = devlink_info_version_fixed_put(req, "fw.psid", dev->board_id);
+ if (err)
+ return err;
+
+ err = mlx5_fw_version_query(dev, &running_fw, &stored_fw);
+ if (err)
+ return err;
+
+ snprintf(version_str, sizeof(version_str), "%d.%d.%04d",
+ mlx5_fw_ver_major(running_fw), mlx5_fw_ver_minor(running_fw),
+ mlx5_fw_ver_subminor(running_fw));
+ err = devlink_info_version_running_put(req, "fw.version", version_str);
+ if (err)
+ return err;
+
+ /* no pending version, return running (stored) version */
+ if (stored_fw == 0)
+ stored_fw = running_fw;
+
+ snprintf(version_str, sizeof(version_str), "%d.%d.%04d",
+ mlx5_fw_ver_major(stored_fw), mlx5_fw_ver_minor(stored_fw),
+ mlx5_fw_ver_subminor(stored_fw));
+ err = devlink_info_version_stored_put(req, "fw.version", version_str);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+static const struct devlink_ops mlx5_devlink_ops = {
+#ifdef CONFIG_MLX5_ESWITCH
+ .eswitch_mode_set = mlx5_devlink_eswitch_mode_set,
+ .eswitch_mode_get = mlx5_devlink_eswitch_mode_get,
+ .eswitch_inline_mode_set = mlx5_devlink_eswitch_inline_mode_set,
+ .eswitch_inline_mode_get = mlx5_devlink_eswitch_inline_mode_get,
+ .eswitch_encap_mode_set = mlx5_devlink_eswitch_encap_mode_set,
+ .eswitch_encap_mode_get = mlx5_devlink_eswitch_encap_mode_get,
+#endif
+ .flash_update = mlx5_devlink_flash_update,
+ .info_get = mlx5_devlink_info_get,
+};
+
+struct devlink *mlx5_devlink_alloc(void)
+{
+ return devlink_alloc(&mlx5_devlink_ops, sizeof(struct mlx5_core_dev));
+}
+
+void mlx5_devlink_free(struct devlink *devlink)
+{
+ devlink_free(devlink);
+}
+
+int mlx5_devlink_register(struct devlink *devlink, struct device *dev)
+{
+ return devlink_register(devlink, dev);
+}
+
+void mlx5_devlink_unregister(struct devlink *devlink)
+{
+ devlink_unregister(devlink);
+}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/devlink.h b/drivers/net/ethernet/mellanox/mlx5/core/devlink.h
new file mode 100644
index 000000000000..d0ba03774ddf
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/devlink.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
+/* Copyright (c) 2019, Mellanox Technologies */
+
+#ifndef __MLX5_DEVLINK_H__
+#define __MLX5_DEVLINK_H__
+
+#include <net/devlink.h>
+
+struct devlink *mlx5_devlink_alloc(void);
+void mlx5_devlink_free(struct devlink *devlink);
+int mlx5_devlink_register(struct devlink *devlink, struct device *dev);
+void mlx5_devlink_unregister(struct devlink *devlink);
+
+#endif /* __MLX5_DEVLINK_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/crdump.c b/drivers/net/ethernet/mellanox/mlx5/core/diag/crdump.c
new file mode 100644
index 000000000000..28d02749d3c4
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/crdump.c
@@ -0,0 +1,115 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+/* Copyright (c) 2019 Mellanox Technologies */
+
+#include <linux/mlx5/driver.h>
+#include "mlx5_core.h"
+#include "lib/pci_vsc.h"
+#include "lib/mlx5.h"
+
+#define BAD_ACCESS 0xBADACCE5
+#define MLX5_PROTECTED_CR_SCAN_CRSPACE 0x7
+
+static bool mlx5_crdump_enabled(struct mlx5_core_dev *dev)
+{
+ return !!dev->priv.health.crdump_size;
+}
+
+static int mlx5_crdump_fill(struct mlx5_core_dev *dev, u32 *cr_data)
+{
+ u32 crdump_size = dev->priv.health.crdump_size;
+ int i, ret;
+
+ for (i = 0; i < (crdump_size / 4); i++)
+ cr_data[i] = BAD_ACCESS;
+
+ ret = mlx5_vsc_gw_read_block_fast(dev, cr_data, crdump_size);
+ if (ret <= 0) {
+ if (ret == 0)
+ return -EIO;
+ return ret;
+ }
+
+ if (crdump_size != ret) {
+ mlx5_core_warn(dev, "failed to read full dump, read %d out of %u\n",
+ ret, crdump_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int mlx5_crdump_collect(struct mlx5_core_dev *dev, u32 *cr_data)
+{
+ int ret;
+
+ if (!mlx5_crdump_enabled(dev))
+ return -ENODEV;
+
+ ret = mlx5_vsc_gw_lock(dev);
+ if (ret) {
+ mlx5_core_warn(dev, "crdump: failed to lock vsc gw err %d\n",
+ ret);
+ return ret;
+ }
+ /* Verify no other PF is running cr-dump or sw reset */
+ ret = mlx5_vsc_sem_set_space(dev, MLX5_SEMAPHORE_SW_RESET,
+ MLX5_VSC_LOCK);
+ if (ret) {
+ mlx5_core_warn(dev, "Failed to lock SW reset semaphore\n");
+ goto unlock_gw;
+ }
+
+ ret = mlx5_vsc_gw_set_space(dev, MLX5_VSC_SPACE_SCAN_CRSPACE, NULL);
+ if (ret)
+ goto unlock_sem;
+
+ ret = mlx5_crdump_fill(dev, cr_data);
+
+unlock_sem:
+ mlx5_vsc_sem_set_space(dev, MLX5_SEMAPHORE_SW_RESET, MLX5_VSC_UNLOCK);
+unlock_gw:
+ mlx5_vsc_gw_unlock(dev);
+ return ret;
+}
+
+int mlx5_crdump_enable(struct mlx5_core_dev *dev)
+{
+ struct mlx5_priv *priv = &dev->priv;
+ u32 space_size;
+ int ret;
+
+ if (!mlx5_core_is_pf(dev) || !mlx5_vsc_accessible(dev) ||
+ mlx5_crdump_enabled(dev))
+ return 0;
+
+ ret = mlx5_vsc_gw_lock(dev);
+ if (ret)
+ return ret;
+
+ /* Check if space is supported and get space size */
+ ret = mlx5_vsc_gw_set_space(dev, MLX5_VSC_SPACE_SCAN_CRSPACE,
+ &space_size);
+ if (ret) {
+ /* Unlock and mask error since space is not supported */
+ mlx5_vsc_gw_unlock(dev);
+ return 0;
+ }
+
+ if (!space_size) {
+ mlx5_core_warn(dev, "Invalid Crspace size, zero\n");
+ mlx5_vsc_gw_unlock(dev);
+ return -EINVAL;
+ }
+
+ ret = mlx5_vsc_gw_unlock(dev);
+ if (ret)
+ return ret;
+
+ priv->health.crdump_size = space_size;
+ return 0;
+}
+
+void mlx5_crdump_disable(struct mlx5_core_dev *dev)
+{
+ dev->priv.health.crdump_size = 0;
+}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/fs_tracepoint.h b/drivers/net/ethernet/mellanox/mlx5/core/diag/fs_tracepoint.h
index a4cf123e3f17..ddf1b87f1bc0 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/diag/fs_tracepoint.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/fs_tracepoint.h
@@ -187,6 +187,7 @@ TRACE_EVENT(mlx5_fs_set_fte,
__field(u32, index)
__field(u32, action)
__field(u32, flow_tag)
+ __field(u32, flow_source)
__field(u8, mask_enable)
__field(int, new_fte)
__array(u32, mask_outer, MLX5_ST_SZ_DW(fte_match_set_lyr_2_4))
@@ -204,7 +205,8 @@ TRACE_EVENT(mlx5_fs_set_fte,
__entry->index = fte->index;
__entry->action = fte->action.action;
__entry->mask_enable = __entry->fg->mask.match_criteria_enable;
- __entry->flow_tag = fte->action.flow_tag;
+ __entry->flow_tag = fte->flow_context.flow_tag;
+ __entry->flow_source = fte->flow_context.flow_source;
memcpy(__entry->mask_outer,
MLX5_ADDR_OF(fte_match_param,
&__entry->fg->mask.match_criteria,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
index 6999f4486e9e..8a4930c8bf62 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
@@ -243,6 +243,19 @@ free_strings_db:
return -ENOMEM;
}
+static void
+mlx5_fw_tracer_init_saved_traces_array(struct mlx5_fw_tracer *tracer)
+{
+ tracer->st_arr.saved_traces_index = 0;
+ mutex_init(&tracer->st_arr.lock);
+}
+
+static void
+mlx5_fw_tracer_clean_saved_traces_array(struct mlx5_fw_tracer *tracer)
+{
+ mutex_destroy(&tracer->st_arr.lock);
+}
+
static void mlx5_tracer_read_strings_db(struct work_struct *work)
{
struct mlx5_fw_tracer *tracer = container_of(work, struct mlx5_fw_tracer,
@@ -522,6 +535,24 @@ static void mlx5_fw_tracer_clean_ready_list(struct mlx5_fw_tracer *tracer)
list_del(&str_frmt->list);
}
+static void mlx5_fw_tracer_save_trace(struct mlx5_fw_tracer *tracer,
+ u64 timestamp, bool lost,
+ u8 event_id, char *msg)
+{
+ struct mlx5_fw_trace_data *trace_data;
+
+ mutex_lock(&tracer->st_arr.lock);
+ trace_data = &tracer->st_arr.straces[tracer->st_arr.saved_traces_index];
+ trace_data->timestamp = timestamp;
+ trace_data->lost = lost;
+ trace_data->event_id = event_id;
+ strncpy(trace_data->msg, msg, TRACE_STR_MSG);
+
+ tracer->st_arr.saved_traces_index =
+ (tracer->st_arr.saved_traces_index + 1) & (SAVED_TRACES_NUM - 1);
+ mutex_unlock(&tracer->st_arr.lock);
+}
+
static void mlx5_tracer_print_trace(struct tracer_string_format *str_frmt,
struct mlx5_core_dev *dev,
u64 trace_timestamp)
@@ -540,6 +571,9 @@ static void mlx5_tracer_print_trace(struct tracer_string_format *str_frmt,
trace_mlx5_fw(dev->tracer, trace_timestamp, str_frmt->lost,
str_frmt->event_id, tmp);
+ mlx5_fw_tracer_save_trace(dev->tracer, trace_timestamp,
+ str_frmt->lost, str_frmt->event_id, tmp);
+
/* remove it from hash */
mlx5_tracer_clean_message(str_frmt);
}
@@ -786,6 +820,109 @@ static void mlx5_fw_tracer_ownership_change(struct work_struct *work)
mlx5_fw_tracer_start(tracer);
}
+static int mlx5_fw_tracer_set_core_dump_reg(struct mlx5_core_dev *dev,
+ u32 *in, int size_in)
+{
+ u32 out[MLX5_ST_SZ_DW(core_dump_reg)] = {};
+
+ if (!MLX5_CAP_DEBUG(dev, core_dump_general) &&
+ !MLX5_CAP_DEBUG(dev, core_dump_qp))
+ return -EOPNOTSUPP;
+
+ return mlx5_core_access_reg(dev, in, size_in, out, sizeof(out),
+ MLX5_REG_CORE_DUMP, 0, 1);
+}
+
+int mlx5_fw_tracer_trigger_core_dump_general(struct mlx5_core_dev *dev)
+{
+ struct mlx5_fw_tracer *tracer = dev->tracer;
+ u32 in[MLX5_ST_SZ_DW(core_dump_reg)] = {};
+ int err;
+
+ if (!MLX5_CAP_DEBUG(dev, core_dump_general) || !tracer)
+ return -EOPNOTSUPP;
+ if (!tracer->owner)
+ return -EPERM;
+
+ MLX5_SET(core_dump_reg, in, core_dump_type, 0x0);
+
+ err = mlx5_fw_tracer_set_core_dump_reg(dev, in, sizeof(in));
+ if (err)
+ return err;
+ queue_work(tracer->work_queue, &tracer->handle_traces_work);
+ flush_workqueue(tracer->work_queue);
+ return 0;
+}
+
+static int
+mlx5_devlink_fmsg_fill_trace(struct devlink_fmsg *fmsg,
+ struct mlx5_fw_trace_data *trace_data)
+{
+ int err;
+
+ err = devlink_fmsg_obj_nest_start(fmsg);
+ if (err)
+ return err;
+
+ err = devlink_fmsg_u64_pair_put(fmsg, "timestamp", trace_data->timestamp);
+ if (err)
+ return err;
+
+ err = devlink_fmsg_bool_pair_put(fmsg, "lost", trace_data->lost);
+ if (err)
+ return err;
+
+ err = devlink_fmsg_u8_pair_put(fmsg, "event_id", trace_data->event_id);
+ if (err)
+ return err;
+
+ err = devlink_fmsg_string_pair_put(fmsg, "msg", trace_data->msg);
+ if (err)
+ return err;
+
+ err = devlink_fmsg_obj_nest_end(fmsg);
+ if (err)
+ return err;
+ return 0;
+}
+
+int mlx5_fw_tracer_get_saved_traces_objects(struct mlx5_fw_tracer *tracer,
+ struct devlink_fmsg *fmsg)
+{
+ struct mlx5_fw_trace_data *straces = tracer->st_arr.straces;
+ u32 index, start_index, end_index;
+ u32 saved_traces_index;
+ int err;
+
+ if (!straces[0].timestamp)
+ return -ENOMSG;
+
+ mutex_lock(&tracer->st_arr.lock);
+ saved_traces_index = tracer->st_arr.saved_traces_index;
+ if (straces[saved_traces_index].timestamp)
+ start_index = saved_traces_index;
+ else
+ start_index = 0;
+ end_index = (saved_traces_index - 1) & (SAVED_TRACES_NUM - 1);
+
+ err = devlink_fmsg_arr_pair_nest_start(fmsg, "dump fw traces");
+ if (err)
+ goto unlock;
+ index = start_index;
+ while (index != end_index) {
+ err = mlx5_devlink_fmsg_fill_trace(fmsg, &straces[index]);
+ if (err)
+ goto unlock;
+
+ index = (index + 1) & (SAVED_TRACES_NUM - 1);
+ }
+
+ err = devlink_fmsg_arr_pair_nest_end(fmsg);
+unlock:
+ mutex_unlock(&tracer->st_arr.lock);
+ return err;
+}
+
/* Create software resources (Buffers, etc ..) */
struct mlx5_fw_tracer *mlx5_fw_tracer_create(struct mlx5_core_dev *dev)
{
@@ -833,6 +970,7 @@ struct mlx5_fw_tracer *mlx5_fw_tracer_create(struct mlx5_core_dev *dev)
goto free_log_buf;
}
+ mlx5_fw_tracer_init_saved_traces_array(tracer);
mlx5_core_dbg(dev, "FWTracer: Tracer created\n");
return tracer;
@@ -917,6 +1055,7 @@ void mlx5_fw_tracer_destroy(struct mlx5_fw_tracer *tracer)
cancel_work_sync(&tracer->read_fw_strings_work);
mlx5_fw_tracer_clean_ready_list(tracer);
mlx5_fw_tracer_clean_print_hash(tracer);
+ mlx5_fw_tracer_clean_saved_traces_array(tracer);
mlx5_fw_tracer_free_strings_db(tracer);
mlx5_fw_tracer_destroy_log_buf(tracer);
flush_workqueue(tracer->work_queue);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.h b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.h
index a8b8747f2b61..40601fba80ba 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.h
@@ -46,6 +46,9 @@
#define TRACER_BLOCK_SIZE_BYTE 256
#define TRACES_PER_BLOCK 32
+#define TRACE_STR_MSG 256
+#define SAVED_TRACES_NUM 8192
+
#define TRACER_MAX_PARAMS 7
#define MESSAGE_HASH_BITS 6
#define MESSAGE_HASH_SIZE BIT(MESSAGE_HASH_BITS)
@@ -53,6 +56,13 @@
#define MASK_52_7 (0x1FFFFFFFFFFF80)
#define MASK_6_0 (0x7F)
+struct mlx5_fw_trace_data {
+ u64 timestamp;
+ bool lost;
+ u8 event_id;
+ char msg[TRACE_STR_MSG];
+};
+
struct mlx5_fw_tracer {
struct mlx5_core_dev *dev;
struct mlx5_nb nb;
@@ -83,6 +93,13 @@ struct mlx5_fw_tracer {
u32 consumer_index;
} buff;
+ /* Saved Traces Array */
+ struct {
+ struct mlx5_fw_trace_data straces[SAVED_TRACES_NUM];
+ u32 saved_traces_index;
+ struct mutex lock; /* Protect st_arr access */
+ } st_arr;
+
u64 last_timestamp;
struct work_struct handle_traces_work;
struct hlist_head hash[MESSAGE_HASH_SIZE];
@@ -171,5 +188,8 @@ struct mlx5_fw_tracer *mlx5_fw_tracer_create(struct mlx5_core_dev *dev);
int mlx5_fw_tracer_init(struct mlx5_fw_tracer *tracer);
void mlx5_fw_tracer_cleanup(struct mlx5_fw_tracer *tracer);
void mlx5_fw_tracer_destroy(struct mlx5_fw_tracer *tracer);
+int mlx5_fw_tracer_trigger_core_dump_general(struct mlx5_core_dev *dev);
+int mlx5_fw_tracer_get_saved_traces_objects(struct mlx5_fw_tracer *tracer,
+ struct devlink_fmsg *fmsg);
#endif
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/ecpf.c b/drivers/net/ethernet/mellanox/mlx5/core/ecpf.c
index 0ccd6d40baf7..d2228e37450f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/ecpf.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/ecpf.c
@@ -83,30 +83,3 @@ void mlx5_ec_cleanup(struct mlx5_core_dev *dev)
mlx5_peer_pf_cleanup(dev);
}
-
-static int mlx5_query_host_params_context(struct mlx5_core_dev *dev,
- u32 *out, int outlen)
-{
- u32 in[MLX5_ST_SZ_DW(query_host_params_in)] = {};
-
- MLX5_SET(query_host_params_in, in, opcode,
- MLX5_CMD_OP_QUERY_HOST_PARAMS);
-
- return mlx5_cmd_exec(dev, in, sizeof(in), out, outlen);
-}
-
-int mlx5_query_host_params_num_vfs(struct mlx5_core_dev *dev, int *num_vf)
-{
- u32 out[MLX5_ST_SZ_DW(query_host_params_out)] = {};
- int err;
-
- err = mlx5_query_host_params_context(dev, out, sizeof(out));
- if (err)
- return err;
-
- *num_vf = MLX5_GET(query_host_params_out, out,
- host_params_context.host_num_of_vfs);
- mlx5_core_dbg(dev, "host_num_of_vfs %d\n", *num_vf);
-
- return 0;
-}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/ecpf.h b/drivers/net/ethernet/mellanox/mlx5/core/ecpf.h
index 346372df218f..d3d7a00a02ac 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/ecpf.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/ecpf.h
@@ -16,7 +16,6 @@ enum {
bool mlx5_read_embedded_cpu(struct mlx5_core_dev *dev);
int mlx5_ec_init(struct mlx5_core_dev *dev);
void mlx5_ec_cleanup(struct mlx5_core_dev *dev);
-int mlx5_query_host_params_num_vfs(struct mlx5_core_dev *dev, int *num_vf);
#else /* CONFIG_MLX5_ESWITCH */
@@ -24,9 +23,6 @@ static inline bool
mlx5_read_embedded_cpu(struct mlx5_core_dev *dev) { return false; }
static inline int mlx5_ec_init(struct mlx5_core_dev *dev) { return 0; }
static inline void mlx5_ec_cleanup(struct mlx5_core_dev *dev) {}
-static inline int
-mlx5_query_host_params_num_vfs(struct mlx5_core_dev *dev, int *num_vf)
-{ return -EOPNOTSUPP; }
#endif /* CONFIG_MLX5_ESWITCH */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index cc6797e24571..263558875f20 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -48,7 +48,7 @@
#include <linux/rhashtable.h>
#include <net/switchdev.h>
#include <net/xdp.h>
-#include <linux/net_dim.h>
+#include <linux/dim.h>
#include <linux/bits.h>
#include "wq.h"
#include "mlx5_core.h"
@@ -137,6 +137,7 @@ struct page_pool;
#define MLX5E_MAX_NUM_CHANNELS (MLX5E_INDIR_RQT_SIZE >> 1)
#define MLX5E_MAX_NUM_SQS (MLX5E_MAX_NUM_CHANNELS * MLX5E_MAX_NUM_TC)
#define MLX5E_TX_CQ_POLL_BUDGET 128
+#define MLX5E_TX_XSK_POLL_BUDGET 64
#define MLX5E_SQ_RECOVER_MIN_INTERVAL 500 /* msecs */
#define MLX5E_UMR_WQE_INLINE_SZ \
@@ -155,6 +156,11 @@ do { \
##__VA_ARGS__); \
} while (0)
+enum mlx5e_rq_group {
+ MLX5E_RQ_GROUP_REGULAR,
+ MLX5E_RQ_GROUP_XSK,
+ MLX5E_NUM_RQ_GROUPS /* Keep last. */
+};
static inline u16 mlx5_min_rx_wqes(int wq_type, u32 wq_size)
{
@@ -179,7 +185,8 @@ static inline int mlx5e_get_max_num_channels(struct mlx5_core_dev *mdev)
/* Use this function to get max num channels after netdev was created */
static inline int mlx5e_get_netdev_max_channels(struct net_device *netdev)
{
- return min_t(unsigned int, netdev->num_rx_queues,
+ return min_t(unsigned int,
+ netdev->num_rx_queues / MLX5E_NUM_RQ_GROUPS,
netdev->num_tx_queues);
}
@@ -202,7 +209,10 @@ struct mlx5e_umr_wqe {
struct mlx5_wqe_ctrl_seg ctrl;
struct mlx5_wqe_umr_ctrl_seg uctrl;
struct mlx5_mkey_seg mkc;
- struct mlx5_mtt inline_mtts[0];
+ union {
+ struct mlx5_mtt inline_mtts[0];
+ u8 tls_static_params_ctx[0];
+ };
};
extern const char mlx5e_self_tests[][ETH_GSTRING_LEN];
@@ -238,9 +248,9 @@ struct mlx5e_params {
u16 num_channels;
u8 num_tc;
bool rx_cqe_compress_def;
- struct net_dim_cq_moder rx_cq_moderation;
- struct net_dim_cq_moder tx_cq_moderation;
bool tunneled_offload_en;
+ struct dim_cq_moder rx_cq_moderation;
+ struct dim_cq_moder tx_cq_moderation;
bool lro_en;
u8 tx_min_inline_mode;
bool vlan_strip_disable;
@@ -250,6 +260,7 @@ struct mlx5e_params {
u32 lro_timeout;
u32 pflags;
struct bpf_prog *xdp_prog;
+ struct mlx5e_xsk *xsk;
unsigned int sw_mtu;
int hard_mtu;
};
@@ -325,6 +336,9 @@ struct mlx5e_tx_wqe_info {
u32 num_bytes;
u8 num_wqebbs;
u8 num_dma;
+#ifdef CONFIG_MLX5_EN_TLS
+ skb_frag_t *resync_dump_frag;
+#endif
};
enum mlx5e_dma_map_type {
@@ -348,6 +362,13 @@ enum {
struct mlx5e_sq_wqe_info {
u8 opcode;
+
+ /* Auxiliary data for different opcodes. */
+ union {
+ struct {
+ struct mlx5e_rq *rq;
+ } umr;
+ };
};
struct mlx5e_txqsq {
@@ -356,7 +377,7 @@ struct mlx5e_txqsq {
/* dirtied @completion */
u16 cc;
u32 dma_fifo_cc;
- struct net_dim dim; /* Adaptive Moderation */
+ struct dim dim; /* Adaptive Moderation */
/* dirtied @xmit */
u16 pc ____cacheline_aligned_in_smp;
@@ -375,6 +396,7 @@ struct mlx5e_txqsq {
void __iomem *uar_map;
struct netdev_queue *txq;
u32 sqn;
+ u16 stop_room;
u8 min_inline_mode;
struct device *pdev;
__be32 mkey_be;
@@ -392,14 +414,55 @@ struct mlx5e_txqsq {
} ____cacheline_aligned_in_smp;
struct mlx5e_dma_info {
- struct page *page;
- dma_addr_t addr;
+ dma_addr_t addr;
+ union {
+ struct page *page;
+ struct {
+ u64 handle;
+ void *data;
+ } xsk;
+ };
+};
+
+/* XDP packets can be transmitted in different ways. On completion, we need to
+ * distinguish between them to clean up things in a proper way.
+ */
+enum mlx5e_xdp_xmit_mode {
+ /* An xdp_frame was transmitted due to either XDP_REDIRECT from another
+ * device or XDP_TX from an XSK RQ. The frame has to be unmapped and
+ * returned.
+ */
+ MLX5E_XDP_XMIT_MODE_FRAME,
+
+ /* The xdp_frame was created in place as a result of XDP_TX from a
+ * regular RQ. No DMA remapping happened, and the page belongs to us.
+ */
+ MLX5E_XDP_XMIT_MODE_PAGE,
+
+ /* No xdp_frame was created at all, the transmit happened from a UMEM
+ * page. The UMEM Completion Ring producer pointer has to be increased.
+ */
+ MLX5E_XDP_XMIT_MODE_XSK,
};
struct mlx5e_xdp_info {
- struct xdp_frame *xdpf;
- dma_addr_t dma_addr;
- struct mlx5e_dma_info di;
+ enum mlx5e_xdp_xmit_mode mode;
+ union {
+ struct {
+ struct xdp_frame *xdpf;
+ dma_addr_t dma_addr;
+ } frame;
+ struct {
+ struct mlx5e_rq *rq;
+ struct mlx5e_dma_info di;
+ } page;
+ };
+};
+
+struct mlx5e_xdp_xmit_data {
+ dma_addr_t dma_addr;
+ void *data;
+ u32 len;
};
struct mlx5e_xdp_info_fifo {
@@ -425,8 +488,12 @@ struct mlx5e_xdp_mpwqe {
};
struct mlx5e_xdpsq;
-typedef bool (*mlx5e_fp_xmit_xdp_frame)(struct mlx5e_xdpsq*,
- struct mlx5e_xdp_info*);
+typedef int (*mlx5e_fp_xmit_xdp_frame_check)(struct mlx5e_xdpsq *);
+typedef bool (*mlx5e_fp_xmit_xdp_frame)(struct mlx5e_xdpsq *,
+ struct mlx5e_xdp_xmit_data *,
+ struct mlx5e_xdp_info *,
+ int);
+
struct mlx5e_xdpsq {
/* data path */
@@ -443,8 +510,10 @@ struct mlx5e_xdpsq {
struct mlx5e_cq cq;
/* read only */
+ struct xdp_umem *umem;
struct mlx5_wq_cyc wq;
struct mlx5e_xdpsq_stats *stats;
+ mlx5e_fp_xmit_xdp_frame_check xmit_xdp_frame_check;
mlx5e_fp_xmit_xdp_frame xmit_xdp_frame;
struct {
struct mlx5e_xdp_wqe_info *wqe_info;
@@ -487,12 +556,6 @@ struct mlx5e_icosq {
struct mlx5e_channel *channel;
} ____cacheline_aligned_in_smp;
-static inline bool
-mlx5e_wqc_has_room_for(struct mlx5_wq_cyc *wq, u16 cc, u16 pc, u16 n)
-{
- return (mlx5_wq_cyc_ctr2ix(wq, cc - pc) >= n) || (cc == pc);
-}
-
struct mlx5e_wqe_frag_info {
struct mlx5e_dma_info *di;
u32 offset;
@@ -571,9 +634,11 @@ struct mlx5e_rq {
u8 log_stride_sz;
u8 umr_in_progress;
u8 umr_last_bulk;
+ u8 umr_completed;
} mpwqe;
};
struct {
+ u16 umem_headroom;
u16 headroom;
u8 map_dir; /* dma map direction */
} buff;
@@ -596,14 +661,18 @@ struct mlx5e_rq {
int ix;
unsigned int hw_mtu;
- struct net_dim dim; /* Dynamic Interrupt Moderation */
+ struct dim dim; /* Dynamic Interrupt Moderation */
/* XDP */
struct bpf_prog *xdp_prog;
- struct mlx5e_xdpsq xdpsq;
+ struct mlx5e_xdpsq *xdpsq;
DECLARE_BITMAP(flags, 8);
struct page_pool *page_pool;
+ /* AF_XDP zero-copy */
+ struct zero_copy_allocator zca;
+ struct xdp_umem *umem;
+
/* control */
struct mlx5_wq_ctrl wq_ctrl;
__be32 mkey_be;
@@ -616,9 +685,15 @@ struct mlx5e_rq {
struct xdp_rxq_info xdp_rxq;
} ____cacheline_aligned_in_smp;
+enum mlx5e_channel_state {
+ MLX5E_CHANNEL_STATE_XSK,
+ MLX5E_CHANNEL_NUM_STATES
+};
+
struct mlx5e_channel {
/* data path */
struct mlx5e_rq rq;
+ struct mlx5e_xdpsq rq_xdpsq;
struct mlx5e_txqsq sq[MLX5E_MAX_NUM_TC];
struct mlx5e_icosq icosq; /* internal control operations */
bool xdp;
@@ -631,6 +706,13 @@ struct mlx5e_channel {
/* XDP_REDIRECT */
struct mlx5e_xdpsq xdpsq;
+ /* AF_XDP zero-copy */
+ struct mlx5e_rq xskrq;
+ struct mlx5e_xdpsq xsksq;
+ struct mlx5e_icosq xskicosq;
+ /* xskicosq can be accessed from any CPU - the spinlock protects it. */
+ spinlock_t xskicosq_lock;
+
/* data path - accessed per napi poll */
struct irq_desc *irq_desc;
struct mlx5e_ch_stats *stats;
@@ -639,6 +721,7 @@ struct mlx5e_channel {
struct mlx5e_priv *priv;
struct mlx5_core_dev *mdev;
struct hwtstamp_config *tstamp;
+ DECLARE_BITMAP(state, MLX5E_CHANNEL_NUM_STATES);
int ix;
int cpu;
cpumask_var_t xps_cpumask;
@@ -654,14 +737,17 @@ struct mlx5e_channel_stats {
struct mlx5e_ch_stats ch;
struct mlx5e_sq_stats sq[MLX5E_MAX_NUM_TC];
struct mlx5e_rq_stats rq;
+ struct mlx5e_rq_stats xskrq;
struct mlx5e_xdpsq_stats rq_xdpsq;
struct mlx5e_xdpsq_stats xdpsq;
+ struct mlx5e_xdpsq_stats xsksq;
} ____cacheline_aligned_in_smp;
enum {
MLX5E_STATE_OPENED,
MLX5E_STATE_DESTROYING,
MLX5E_STATE_XDP_TX_ENABLED,
+ MLX5E_STATE_XDP_OPEN,
};
struct mlx5e_rqt {
@@ -694,6 +780,17 @@ struct mlx5e_modify_sq_param {
int rl_index;
};
+struct mlx5e_xsk {
+ /* UMEMs are stored separately from channels, because we don't want to
+ * lose them when channels are recreated. The kernel also stores UMEMs,
+ * but it doesn't distinguish between zero-copy and non-zero-copy UMEMs,
+ * so rely on our mechanism.
+ */
+ struct xdp_umem **umems;
+ u16 refcnt;
+ bool ever_used;
+};
+
struct mlx5e_priv {
/* priv data path fields - start */
struct mlx5e_txqsq *txq2sq[MLX5E_MAX_NUM_CHANNELS * MLX5E_MAX_NUM_TC];
@@ -714,6 +811,7 @@ struct mlx5e_priv {
struct mlx5e_tir indir_tir[MLX5E_NUM_INDIR_TIRS];
struct mlx5e_tir inner_indir_tir[MLX5E_NUM_INDIR_TIRS];
struct mlx5e_tir direct_tir[MLX5E_MAX_NUM_CHANNELS];
+ struct mlx5e_tir xsk_tir[MLX5E_MAX_NUM_CHANNELS];
struct mlx5e_rss_params rss_params;
u32 tx_rates[MLX5E_MAX_NUM_SQS];
@@ -750,6 +848,7 @@ struct mlx5e_priv {
struct mlx5e_tls *tls;
#endif
struct devlink_health_reporter *tx_reporter;
+ struct mlx5e_xsk xsk;
};
struct mlx5e_profile {
@@ -763,6 +862,7 @@ struct mlx5e_profile {
void (*cleanup_tx)(struct mlx5e_priv *priv);
void (*enable)(struct mlx5e_priv *priv);
void (*disable)(struct mlx5e_priv *priv);
+ int (*update_rx)(struct mlx5e_priv *priv);
void (*update_stats)(struct mlx5e_priv *priv);
void (*update_carrier)(struct mlx5e_priv *priv);
struct {
@@ -781,7 +881,7 @@ netdev_tx_t mlx5e_sq_xmit(struct mlx5e_txqsq *sq, struct sk_buff *skb,
struct mlx5e_tx_wqe *wqe, u16 pi, bool xmit_more);
void mlx5e_trigger_irq(struct mlx5e_icosq *sq);
-void mlx5e_completion_event(struct mlx5_core_cq *mcq);
+void mlx5e_completion_event(struct mlx5_core_cq *mcq, struct mlx5_eqe *eqe);
void mlx5e_cq_error_event(struct mlx5_core_cq *mcq, enum mlx5_event event);
int mlx5e_napi_poll(struct napi_struct *napi, int budget);
bool mlx5e_poll_tx_cq(struct mlx5e_cq *cq, int napi_budget);
@@ -793,11 +893,13 @@ bool mlx5e_striding_rq_possible(struct mlx5_core_dev *mdev,
struct mlx5e_params *params);
void mlx5e_page_dma_unmap(struct mlx5e_rq *rq, struct mlx5e_dma_info *dma_info);
-void mlx5e_page_release(struct mlx5e_rq *rq, struct mlx5e_dma_info *dma_info,
- bool recycle);
+void mlx5e_page_release_dynamic(struct mlx5e_rq *rq,
+ struct mlx5e_dma_info *dma_info,
+ bool recycle);
void mlx5e_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe);
void mlx5e_handle_rx_cqe_mpwrq(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe);
bool mlx5e_post_rx_wqes(struct mlx5e_rq *rq);
+void mlx5e_poll_ico_cq(struct mlx5e_cq *cq);
bool mlx5e_post_rx_mpwqes(struct mlx5e_rq *rq);
void mlx5e_dealloc_rx_wqe(struct mlx5e_rq *rq, u16 ix);
void mlx5e_dealloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix);
@@ -853,6 +955,30 @@ void mlx5e_build_indir_tir_ctx_hash(struct mlx5e_rss_params *rss_params,
void mlx5e_modify_tirs_hash(struct mlx5e_priv *priv, void *in, int inlen);
struct mlx5e_tirc_config mlx5e_tirc_get_default_config(enum mlx5e_traffic_types tt);
+struct mlx5e_xsk_param;
+
+struct mlx5e_rq_param;
+int mlx5e_open_rq(struct mlx5e_channel *c, struct mlx5e_params *params,
+ struct mlx5e_rq_param *param, struct mlx5e_xsk_param *xsk,
+ struct xdp_umem *umem, struct mlx5e_rq *rq);
+int mlx5e_wait_for_min_rx_wqes(struct mlx5e_rq *rq, int wait_time);
+void mlx5e_deactivate_rq(struct mlx5e_rq *rq);
+void mlx5e_close_rq(struct mlx5e_rq *rq);
+
+struct mlx5e_sq_param;
+int mlx5e_open_icosq(struct mlx5e_channel *c, struct mlx5e_params *params,
+ struct mlx5e_sq_param *param, struct mlx5e_icosq *sq);
+void mlx5e_close_icosq(struct mlx5e_icosq *sq);
+int mlx5e_open_xdpsq(struct mlx5e_channel *c, struct mlx5e_params *params,
+ struct mlx5e_sq_param *param, struct xdp_umem *umem,
+ struct mlx5e_xdpsq *sq, bool is_redirect);
+void mlx5e_close_xdpsq(struct mlx5e_xdpsq *sq);
+
+struct mlx5e_cq_param;
+int mlx5e_open_cq(struct mlx5e_channel *c, struct dim_cq_moder moder,
+ struct mlx5e_cq_param *param, struct mlx5e_cq *cq);
+void mlx5e_close_cq(struct mlx5e_cq *cq);
+
int mlx5e_open_locked(struct net_device *netdev);
int mlx5e_close_locked(struct net_device *netdev);
@@ -898,102 +1024,6 @@ static inline bool mlx5_tx_swp_supported(struct mlx5_core_dev *mdev)
MLX5_CAP_ETH(mdev, swp_csum) && MLX5_CAP_ETH(mdev, swp_lso);
}
-struct mlx5e_swp_spec {
- __be16 l3_proto;
- u8 l4_proto;
- u8 is_tun;
- __be16 tun_l3_proto;
- u8 tun_l4_proto;
-};
-
-static inline void
-mlx5e_set_eseg_swp(struct sk_buff *skb, struct mlx5_wqe_eth_seg *eseg,
- struct mlx5e_swp_spec *swp_spec)
-{
- /* SWP offsets are in 2-bytes words */
- eseg->swp_outer_l3_offset = skb_network_offset(skb) / 2;
- if (swp_spec->l3_proto == htons(ETH_P_IPV6))
- eseg->swp_flags |= MLX5_ETH_WQE_SWP_OUTER_L3_IPV6;
- if (swp_spec->l4_proto) {
- eseg->swp_outer_l4_offset = skb_transport_offset(skb) / 2;
- if (swp_spec->l4_proto == IPPROTO_UDP)
- eseg->swp_flags |= MLX5_ETH_WQE_SWP_OUTER_L4_UDP;
- }
-
- if (swp_spec->is_tun) {
- eseg->swp_inner_l3_offset = skb_inner_network_offset(skb) / 2;
- if (swp_spec->tun_l3_proto == htons(ETH_P_IPV6))
- eseg->swp_flags |= MLX5_ETH_WQE_SWP_INNER_L3_IPV6;
- } else { /* typically for ipsec when xfrm mode != XFRM_MODE_TUNNEL */
- eseg->swp_inner_l3_offset = skb_network_offset(skb) / 2;
- if (swp_spec->l3_proto == htons(ETH_P_IPV6))
- eseg->swp_flags |= MLX5_ETH_WQE_SWP_INNER_L3_IPV6;
- }
- switch (swp_spec->tun_l4_proto) {
- case IPPROTO_UDP:
- eseg->swp_flags |= MLX5_ETH_WQE_SWP_INNER_L4_UDP;
- /* fall through */
- case IPPROTO_TCP:
- eseg->swp_inner_l4_offset = skb_inner_transport_offset(skb) / 2;
- break;
- }
-}
-
-static inline void mlx5e_sq_fetch_wqe(struct mlx5e_txqsq *sq,
- struct mlx5e_tx_wqe **wqe,
- u16 *pi)
-{
- struct mlx5_wq_cyc *wq = &sq->wq;
-
- *pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
- *wqe = mlx5_wq_cyc_get_wqe(wq, *pi);
- memset(*wqe, 0, sizeof(**wqe));
-}
-
-static inline
-struct mlx5e_tx_wqe *mlx5e_post_nop(struct mlx5_wq_cyc *wq, u32 sqn, u16 *pc)
-{
- u16 pi = mlx5_wq_cyc_ctr2ix(wq, *pc);
- struct mlx5e_tx_wqe *wqe = mlx5_wq_cyc_get_wqe(wq, pi);
- struct mlx5_wqe_ctrl_seg *cseg = &wqe->ctrl;
-
- memset(cseg, 0, sizeof(*cseg));
-
- cseg->opmod_idx_opcode = cpu_to_be32((*pc << 8) | MLX5_OPCODE_NOP);
- cseg->qpn_ds = cpu_to_be32((sqn << 8) | 0x01);
-
- (*pc)++;
-
- return wqe;
-}
-
-static inline
-void mlx5e_notify_hw(struct mlx5_wq_cyc *wq, u16 pc,
- void __iomem *uar_map,
- struct mlx5_wqe_ctrl_seg *ctrl)
-{
- ctrl->fm_ce_se = MLX5_WQE_CTRL_CQ_UPDATE;
- /* ensure wqe is visible to device before updating doorbell record */
- dma_wmb();
-
- *wq->db = cpu_to_be32(pc);
-
- /* ensure doorbell record is visible to device before ringing the
- * doorbell
- */
- wmb();
-
- mlx5_write64((__be32 *)ctrl, uar_map);
-}
-
-static inline void mlx5e_cq_arm(struct mlx5e_cq *cq)
-{
- struct mlx5_core_cq *mcq;
-
- mcq = &cq->mcq;
- mlx5_cq_arm(mcq, MLX5_CQ_DB_REQ_NOT, mcq->uar->map, cq->wq.cc);
-}
-
extern const struct ethtool_ops mlx5e_ethtool_ops;
#ifdef CONFIG_MLX5_CORE_EN_DCB
extern const struct dcbnl_rtnl_ops mlx5e_dcbnl_ops;
@@ -1023,17 +1053,17 @@ int mlx5e_create_indirect_rqt(struct mlx5e_priv *priv);
int mlx5e_create_indirect_tirs(struct mlx5e_priv *priv, bool inner_ttc);
void mlx5e_destroy_indirect_tirs(struct mlx5e_priv *priv, bool inner_ttc);
-int mlx5e_create_direct_rqts(struct mlx5e_priv *priv);
-void mlx5e_destroy_direct_rqts(struct mlx5e_priv *priv);
-int mlx5e_create_direct_tirs(struct mlx5e_priv *priv);
-void mlx5e_destroy_direct_tirs(struct mlx5e_priv *priv);
+int mlx5e_create_direct_rqts(struct mlx5e_priv *priv, struct mlx5e_tir *tirs);
+void mlx5e_destroy_direct_rqts(struct mlx5e_priv *priv, struct mlx5e_tir *tirs);
+int mlx5e_create_direct_tirs(struct mlx5e_priv *priv, struct mlx5e_tir *tirs);
+void mlx5e_destroy_direct_tirs(struct mlx5e_priv *priv, struct mlx5e_tir *tirs);
void mlx5e_destroy_rqt(struct mlx5e_priv *priv, struct mlx5e_rqt *rqt);
-int mlx5e_create_tis(struct mlx5_core_dev *mdev, int tc,
- u32 underlay_qpn, u32 *tisn);
+int mlx5e_create_tis(struct mlx5_core_dev *mdev, void *in, u32 *tisn);
void mlx5e_destroy_tis(struct mlx5_core_dev *mdev, u32 tisn);
int mlx5e_create_tises(struct mlx5e_priv *priv);
+int mlx5e_update_nic_rx(struct mlx5e_priv *priv);
void mlx5e_update_carrier(struct mlx5e_priv *priv);
int mlx5e_close(struct net_device *netdev);
int mlx5e_open(struct net_device *netdev);
@@ -1075,8 +1105,6 @@ u32 mlx5e_ethtool_get_rxfh_key_size(struct mlx5e_priv *priv);
u32 mlx5e_ethtool_get_rxfh_indir_size(struct mlx5e_priv *priv);
int mlx5e_ethtool_get_ts_info(struct mlx5e_priv *priv,
struct ethtool_ts_info *info);
-int mlx5e_ethtool_flash_device(struct mlx5e_priv *priv,
- struct ethtool_flash *flash);
void mlx5e_ethtool_get_pauseparam(struct mlx5e_priv *priv,
struct ethtool_pauseparam *pauseparam);
int mlx5e_ethtool_set_pauseparam(struct mlx5e_priv *priv,
@@ -1097,6 +1125,7 @@ void mlx5e_detach_netdev(struct mlx5e_priv *priv);
void mlx5e_destroy_netdev(struct mlx5e_priv *priv);
void mlx5e_set_netdev_mtu_boundaries(struct mlx5e_priv *priv);
void mlx5e_build_nic_params(struct mlx5_core_dev *mdev,
+ struct mlx5e_xsk *xsk,
struct mlx5e_rss_params *rss_params,
struct mlx5e_params *params,
u16 max_channels, u16 mtu);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
index d3744bffbae3..79301d116667 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
@@ -3,65 +3,102 @@
#include "en/params.h"
-u32 mlx5e_rx_get_linear_frag_sz(struct mlx5e_params *params)
+static inline bool mlx5e_rx_is_xdp(struct mlx5e_params *params,
+ struct mlx5e_xsk_param *xsk)
{
- u16 hw_mtu = MLX5E_SW2HW_MTU(params, params->sw_mtu);
- u16 linear_rq_headroom = params->xdp_prog ?
- XDP_PACKET_HEADROOM : MLX5_RX_HEADROOM;
- u32 frag_sz;
+ return params->xdp_prog || xsk;
+}
+
+u16 mlx5e_get_linear_rq_headroom(struct mlx5e_params *params,
+ struct mlx5e_xsk_param *xsk)
+{
+ u16 headroom = NET_IP_ALIGN;
+
+ if (mlx5e_rx_is_xdp(params, xsk)) {
+ headroom += XDP_PACKET_HEADROOM;
+ if (xsk)
+ headroom += xsk->headroom;
+ } else {
+ headroom += MLX5_RX_HEADROOM;
+ }
+
+ return headroom;
+}
+
+u32 mlx5e_rx_get_linear_frag_sz(struct mlx5e_params *params,
+ struct mlx5e_xsk_param *xsk)
+{
+ u32 hw_mtu = MLX5E_SW2HW_MTU(params, params->sw_mtu);
+ u16 linear_rq_headroom = mlx5e_get_linear_rq_headroom(params, xsk);
+ u32 frag_sz = linear_rq_headroom + hw_mtu;
- linear_rq_headroom += NET_IP_ALIGN;
+ /* AF_XDP doesn't build SKBs in place. */
+ if (!xsk)
+ frag_sz = MLX5_SKB_FRAG_SZ(frag_sz);
- frag_sz = MLX5_SKB_FRAG_SZ(linear_rq_headroom + hw_mtu);
+ /* XDP in mlx5e doesn't support multiple packets per page. */
+ if (mlx5e_rx_is_xdp(params, xsk))
+ frag_sz = max_t(u32, frag_sz, PAGE_SIZE);
- if (params->xdp_prog && frag_sz < PAGE_SIZE)
- frag_sz = PAGE_SIZE;
+ /* Even if we can go with a smaller fragment size, we must not put
+ * multiple packets into a single frame.
+ */
+ if (xsk)
+ frag_sz = max_t(u32, frag_sz, xsk->chunk_size);
return frag_sz;
}
-u8 mlx5e_mpwqe_log_pkts_per_wqe(struct mlx5e_params *params)
+u8 mlx5e_mpwqe_log_pkts_per_wqe(struct mlx5e_params *params,
+ struct mlx5e_xsk_param *xsk)
{
- u32 linear_frag_sz = mlx5e_rx_get_linear_frag_sz(params);
+ u32 linear_frag_sz = mlx5e_rx_get_linear_frag_sz(params, xsk);
return MLX5_MPWRQ_LOG_WQE_SZ - order_base_2(linear_frag_sz);
}
-bool mlx5e_rx_is_linear_skb(struct mlx5e_params *params)
+bool mlx5e_rx_is_linear_skb(struct mlx5e_params *params,
+ struct mlx5e_xsk_param *xsk)
{
- u32 frag_sz = mlx5e_rx_get_linear_frag_sz(params);
+ /* AF_XDP allocates SKBs on XDP_PASS - ensure they don't occupy more
+ * than one page. For this, check both with and without xsk.
+ */
+ u32 linear_frag_sz = max(mlx5e_rx_get_linear_frag_sz(params, xsk),
+ mlx5e_rx_get_linear_frag_sz(params, NULL));
- return !params->lro_en && frag_sz <= PAGE_SIZE;
+ return !params->lro_en && linear_frag_sz <= PAGE_SIZE;
}
#define MLX5_MAX_MPWQE_LOG_WQE_STRIDE_SZ ((BIT(__mlx5_bit_sz(wq, log_wqe_stride_size)) - 1) + \
MLX5_MPWQE_LOG_STRIDE_SZ_BASE)
bool mlx5e_rx_mpwqe_is_linear_skb(struct mlx5_core_dev *mdev,
- struct mlx5e_params *params)
+ struct mlx5e_params *params,
+ struct mlx5e_xsk_param *xsk)
{
- u32 frag_sz = mlx5e_rx_get_linear_frag_sz(params);
+ u32 linear_frag_sz = mlx5e_rx_get_linear_frag_sz(params, xsk);
s8 signed_log_num_strides_param;
u8 log_num_strides;
- if (!mlx5e_rx_is_linear_skb(params))
+ if (!mlx5e_rx_is_linear_skb(params, xsk))
return false;
- if (order_base_2(frag_sz) > MLX5_MAX_MPWQE_LOG_WQE_STRIDE_SZ)
+ if (order_base_2(linear_frag_sz) > MLX5_MAX_MPWQE_LOG_WQE_STRIDE_SZ)
return false;
if (MLX5_CAP_GEN(mdev, ext_stride_num_range))
return true;
- log_num_strides = MLX5_MPWRQ_LOG_WQE_SZ - order_base_2(frag_sz);
+ log_num_strides = MLX5_MPWRQ_LOG_WQE_SZ - order_base_2(linear_frag_sz);
signed_log_num_strides_param =
(s8)log_num_strides - MLX5_MPWQE_LOG_NUM_STRIDES_BASE;
return signed_log_num_strides_param >= 0;
}
-u8 mlx5e_mpwqe_get_log_rq_size(struct mlx5e_params *params)
+u8 mlx5e_mpwqe_get_log_rq_size(struct mlx5e_params *params,
+ struct mlx5e_xsk_param *xsk)
{
- u8 log_pkts_per_wqe = mlx5e_mpwqe_log_pkts_per_wqe(params);
+ u8 log_pkts_per_wqe = mlx5e_mpwqe_log_pkts_per_wqe(params, xsk);
/* Numbers are unsigned, don't subtract to avoid underflow. */
if (params->log_rq_mtu_frames <
@@ -72,33 +109,30 @@ u8 mlx5e_mpwqe_get_log_rq_size(struct mlx5e_params *params)
}
u8 mlx5e_mpwqe_get_log_stride_size(struct mlx5_core_dev *mdev,
- struct mlx5e_params *params)
+ struct mlx5e_params *params,
+ struct mlx5e_xsk_param *xsk)
{
- if (mlx5e_rx_mpwqe_is_linear_skb(mdev, params))
- return order_base_2(mlx5e_rx_get_linear_frag_sz(params));
+ if (mlx5e_rx_mpwqe_is_linear_skb(mdev, params, xsk))
+ return order_base_2(mlx5e_rx_get_linear_frag_sz(params, xsk));
return MLX5_MPWRQ_DEF_LOG_STRIDE_SZ(mdev);
}
u8 mlx5e_mpwqe_get_log_num_strides(struct mlx5_core_dev *mdev,
- struct mlx5e_params *params)
+ struct mlx5e_params *params,
+ struct mlx5e_xsk_param *xsk)
{
return MLX5_MPWRQ_LOG_WQE_SZ -
- mlx5e_mpwqe_get_log_stride_size(mdev, params);
+ mlx5e_mpwqe_get_log_stride_size(mdev, params, xsk);
}
u16 mlx5e_get_rq_headroom(struct mlx5_core_dev *mdev,
- struct mlx5e_params *params)
+ struct mlx5e_params *params,
+ struct mlx5e_xsk_param *xsk)
{
- u16 linear_rq_headroom = params->xdp_prog ?
- XDP_PACKET_HEADROOM : MLX5_RX_HEADROOM;
- bool is_linear_skb;
-
- linear_rq_headroom += NET_IP_ALIGN;
-
- is_linear_skb = (params->rq_wq_type == MLX5_WQ_TYPE_CYCLIC) ?
- mlx5e_rx_is_linear_skb(params) :
- mlx5e_rx_mpwqe_is_linear_skb(mdev, params);
+ bool is_linear_skb = (params->rq_wq_type == MLX5_WQ_TYPE_CYCLIC) ?
+ mlx5e_rx_is_linear_skb(params, xsk) :
+ mlx5e_rx_mpwqe_is_linear_skb(mdev, params, xsk);
- return is_linear_skb ? linear_rq_headroom : 0;
+ return is_linear_skb ? mlx5e_get_linear_rq_headroom(params, xsk) : 0;
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.h b/drivers/net/ethernet/mellanox/mlx5/core/en/params.h
index b106a0236f36..bd882b5ee9a7 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.h
@@ -6,17 +6,119 @@
#include "en.h"
-u32 mlx5e_rx_get_linear_frag_sz(struct mlx5e_params *params);
-u8 mlx5e_mpwqe_log_pkts_per_wqe(struct mlx5e_params *params);
-bool mlx5e_rx_is_linear_skb(struct mlx5e_params *params);
+struct mlx5e_xsk_param {
+ u16 headroom;
+ u16 chunk_size;
+};
+
+struct mlx5e_rq_param {
+ u32 rqc[MLX5_ST_SZ_DW(rqc)];
+ struct mlx5_wq_param wq;
+ struct mlx5e_rq_frags_info frags_info;
+};
+
+struct mlx5e_sq_param {
+ u32 sqc[MLX5_ST_SZ_DW(sqc)];
+ struct mlx5_wq_param wq;
+ bool is_mpw;
+};
+
+struct mlx5e_cq_param {
+ u32 cqc[MLX5_ST_SZ_DW(cqc)];
+ struct mlx5_wq_param wq;
+ u16 eq_ix;
+ u8 cq_period_mode;
+};
+
+struct mlx5e_channel_param {
+ struct mlx5e_rq_param rq;
+ struct mlx5e_sq_param sq;
+ struct mlx5e_sq_param xdp_sq;
+ struct mlx5e_sq_param icosq;
+ struct mlx5e_cq_param rx_cq;
+ struct mlx5e_cq_param tx_cq;
+ struct mlx5e_cq_param icosq_cq;
+};
+
+static inline bool mlx5e_qid_get_ch_if_in_group(struct mlx5e_params *params,
+ u16 qid,
+ enum mlx5e_rq_group group,
+ u16 *ix)
+{
+ int nch = params->num_channels;
+ int ch = qid - nch * group;
+
+ if (ch < 0 || ch >= nch)
+ return false;
+
+ *ix = ch;
+ return true;
+}
+
+static inline void mlx5e_qid_get_ch_and_group(struct mlx5e_params *params,
+ u16 qid,
+ u16 *ix,
+ enum mlx5e_rq_group *group)
+{
+ u16 nch = params->num_channels;
+
+ *ix = qid % nch;
+ *group = qid / nch;
+}
+
+static inline bool mlx5e_qid_validate(struct mlx5e_params *params, u64 qid)
+{
+ return qid < params->num_channels * MLX5E_NUM_RQ_GROUPS;
+}
+
+/* Parameter calculations */
+
+u16 mlx5e_get_linear_rq_headroom(struct mlx5e_params *params,
+ struct mlx5e_xsk_param *xsk);
+u32 mlx5e_rx_get_linear_frag_sz(struct mlx5e_params *params,
+ struct mlx5e_xsk_param *xsk);
+u8 mlx5e_mpwqe_log_pkts_per_wqe(struct mlx5e_params *params,
+ struct mlx5e_xsk_param *xsk);
+bool mlx5e_rx_is_linear_skb(struct mlx5e_params *params,
+ struct mlx5e_xsk_param *xsk);
bool mlx5e_rx_mpwqe_is_linear_skb(struct mlx5_core_dev *mdev,
- struct mlx5e_params *params);
-u8 mlx5e_mpwqe_get_log_rq_size(struct mlx5e_params *params);
+ struct mlx5e_params *params,
+ struct mlx5e_xsk_param *xsk);
+u8 mlx5e_mpwqe_get_log_rq_size(struct mlx5e_params *params,
+ struct mlx5e_xsk_param *xsk);
u8 mlx5e_mpwqe_get_log_stride_size(struct mlx5_core_dev *mdev,
- struct mlx5e_params *params);
+ struct mlx5e_params *params,
+ struct mlx5e_xsk_param *xsk);
u8 mlx5e_mpwqe_get_log_num_strides(struct mlx5_core_dev *mdev,
- struct mlx5e_params *params);
+ struct mlx5e_params *params,
+ struct mlx5e_xsk_param *xsk);
u16 mlx5e_get_rq_headroom(struct mlx5_core_dev *mdev,
- struct mlx5e_params *params);
+ struct mlx5e_params *params,
+ struct mlx5e_xsk_param *xsk);
+
+/* Build queue parameters */
+
+void mlx5e_build_rq_param(struct mlx5e_priv *priv,
+ struct mlx5e_params *params,
+ struct mlx5e_xsk_param *xsk,
+ struct mlx5e_rq_param *param);
+void mlx5e_build_sq_param_common(struct mlx5e_priv *priv,
+ struct mlx5e_sq_param *param);
+void mlx5e_build_rx_cq_param(struct mlx5e_priv *priv,
+ struct mlx5e_params *params,
+ struct mlx5e_xsk_param *xsk,
+ struct mlx5e_cq_param *param);
+void mlx5e_build_tx_cq_param(struct mlx5e_priv *priv,
+ struct mlx5e_params *params,
+ struct mlx5e_cq_param *param);
+void mlx5e_build_ico_cq_param(struct mlx5e_priv *priv,
+ u8 log_wq_size,
+ struct mlx5e_cq_param *param);
+void mlx5e_build_icosq_param(struct mlx5e_priv *priv,
+ u8 log_wq_size,
+ struct mlx5e_sq_param *param);
+void mlx5e_build_xdpsq_param(struct mlx5e_priv *priv,
+ struct mlx5e_params *params,
+ struct mlx5e_sq_param *param);
#endif /* __MLX5_EN_PARAMS_H__ */
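To illustrate the qid layout used by mlx5e_qid_get_ch_if_in_group() and mlx5e_qid_get_ch_and_group() above (qid = group * num_channels + channel, with group 0 = regular RQs and group 1 = XSK RQs), here is a standalone sketch; the channel count and queue id are made-up example values, not driver code:

    #include <stdio.h>

    int main(void)
    {
            unsigned int nch = 8;            /* stands in for params->num_channels */
            unsigned int qid = 10;           /* netdev RX queue id */
            unsigned int ix = qid % nch;     /* channel index -> 2 */
            unsigned int group = qid / nch;  /* 0 = regular RQ, 1 = XSK RQ -> 1 */

            /* mlx5e_qid_get_ch_if_in_group(params, 10, MLX5E_RQ_GROUP_REGULAR, &ix)
             * would compute ch = 10 - 8 * 0 = 10, outside [0, 8), and return false;
             * with MLX5E_RQ_GROUP_XSK it computes ch = 10 - 8 * 1 = 2 and returns true.
             */
            printf("qid %u -> channel %u, group %u\n", qid, ix, group);
            return 0;
    }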
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
index 231e7cdfc6f7..a6a52806be45 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
@@ -3,8 +3,22 @@
#include <net/vxlan.h>
#include <net/gre.h>
-#include "lib/vxlan.h"
+#include <net/geneve.h>
#include "en/tc_tun.h"
+#include "en_tc.h"
+
+struct mlx5e_tc_tunnel *mlx5e_get_tc_tun(struct net_device *tunnel_dev)
+{
+ if (netif_is_vxlan(tunnel_dev))
+ return &vxlan_tunnel;
+ else if (netif_is_geneve(tunnel_dev))
+ return &geneve_tunnel;
+ else if (netif_is_gretap(tunnel_dev) ||
+ netif_is_ip6gretap(tunnel_dev))
+ return &gre_tunnel;
+ else
+ return NULL;
+}
static int get_route_and_out_devs(struct mlx5e_priv *priv,
struct net_device *dev,
@@ -34,7 +48,8 @@ static int get_route_and_out_devs(struct mlx5e_priv *priv,
*route_dev = dev;
if (is_vlan_dev(*route_dev))
*out_dev = uplink_dev;
- else if (mlx5e_eswitch_rep(dev))
+ else if (mlx5e_eswitch_rep(dev) &&
+ mlx5e_is_valid_eswitch_fwd_dev(priv, dev))
*out_dev = *route_dev;
else
return -EOPNOTSUPP;
@@ -142,63 +157,15 @@ static int mlx5e_route_lookup_ipv6(struct mlx5e_priv *priv,
return 0;
}
-static int mlx5e_gen_vxlan_header(char buf[], struct ip_tunnel_key *tun_key)
-{
- __be32 tun_id = tunnel_id_to_key32(tun_key->tun_id);
- struct udphdr *udp = (struct udphdr *)(buf);
- struct vxlanhdr *vxh = (struct vxlanhdr *)
- ((char *)udp + sizeof(struct udphdr));
-
- udp->dest = tun_key->tp_dst;
- vxh->vx_flags = VXLAN_HF_VNI;
- vxh->vx_vni = vxlan_vni_field(tun_id);
-
- return 0;
-}
-
-static int mlx5e_gen_gre_header(char buf[], struct ip_tunnel_key *tun_key)
-{
- __be32 tun_id = tunnel_id_to_key32(tun_key->tun_id);
- int hdr_len;
- struct gre_base_hdr *greh = (struct gre_base_hdr *)(buf);
-
- /* the HW does not calculate GRE csum or sequences */
- if (tun_key->tun_flags & (TUNNEL_CSUM | TUNNEL_SEQ))
- return -EOPNOTSUPP;
-
- greh->protocol = htons(ETH_P_TEB);
-
- /* GRE key */
- hdr_len = gre_calc_hlen(tun_key->tun_flags);
- greh->flags = gre_tnl_flags_to_gre_flags(tun_key->tun_flags);
- if (tun_key->tun_flags & TUNNEL_KEY) {
- __be32 *ptr = (__be32 *)(((u8 *)greh) + hdr_len - 4);
-
- *ptr = tun_id;
- }
-
- return 0;
-}
-
static int mlx5e_gen_ip_tunnel_header(char buf[], __u8 *ip_proto,
struct mlx5e_encap_entry *e)
{
- int err = 0;
- struct ip_tunnel_key *key = &e->tun_info.key;
-
- if (e->tunnel_type == MLX5E_TC_TUNNEL_TYPE_VXLAN) {
- *ip_proto = IPPROTO_UDP;
- err = mlx5e_gen_vxlan_header(buf, key);
- } else if (e->tunnel_type == MLX5E_TC_TUNNEL_TYPE_GRETAP) {
- *ip_proto = IPPROTO_GRE;
- err = mlx5e_gen_gre_header(buf, key);
- } else {
- pr_warn("mlx5: Cannot generate tunnel header for tunnel type (%d)\n"
- , e->tunnel_type);
- err = -EOPNOTSUPP;
+ if (!e->tunnel) {
+ pr_warn("mlx5: Cannot generate tunnel header for this tunnel\n");
+ return -EOPNOTSUPP;
}
- return err;
+ return e->tunnel->generate_ip_tun_hdr(buf, ip_proto, e);
}
static char *gen_eth_tnl_hdr(char *buf, struct net_device *dev,
@@ -230,7 +197,7 @@ int mlx5e_tc_tun_create_header_ipv4(struct mlx5e_priv *priv,
struct mlx5e_encap_entry *e)
{
int max_encap_size = MLX5_CAP_ESW(priv->mdev, max_encap_header_size);
- struct ip_tunnel_key *tun_key = &e->tun_info.key;
+ const struct ip_tunnel_key *tun_key = &e->tun_info->key;
struct net_device *out_dev, *route_dev;
struct neighbour *n = NULL;
struct flowi4 fl4 = {};
@@ -254,7 +221,7 @@ int mlx5e_tc_tun_create_header_ipv4(struct mlx5e_priv *priv,
ipv4_encap_size =
(is_vlan_dev(route_dev) ? VLAN_ETH_HLEN : ETH_HLEN) +
sizeof(struct iphdr) +
- e->tunnel_hlen;
+ e->tunnel->calc_hlen(e);
if (max_encap_size < ipv4_encap_size) {
mlx5_core_warn(priv->mdev, "encap size %d too big, max supported is %d\n",
@@ -346,7 +313,7 @@ int mlx5e_tc_tun_create_header_ipv6(struct mlx5e_priv *priv,
struct mlx5e_encap_entry *e)
{
int max_encap_size = MLX5_CAP_ESW(priv->mdev, max_encap_header_size);
- struct ip_tunnel_key *tun_key = &e->tun_info.key;
+ const struct ip_tunnel_key *tun_key = &e->tun_info->key;
struct net_device *out_dev, *route_dev;
struct neighbour *n = NULL;
struct flowi6 fl6 = {};
@@ -370,7 +337,7 @@ int mlx5e_tc_tun_create_header_ipv6(struct mlx5e_priv *priv,
ipv6_encap_size =
(is_vlan_dev(route_dev) ? VLAN_ETH_HLEN : ETH_HLEN) +
sizeof(struct ipv6hdr) +
- e->tunnel_hlen;
+ e->tunnel->calc_hlen(e);
if (max_encap_size < ipv6_encap_size) {
mlx5_core_warn(priv->mdev, "encap size %d too big, max supported is %d\n",
@@ -456,27 +423,12 @@ out:
return err;
}
-int mlx5e_tc_tun_get_type(struct net_device *tunnel_dev)
-{
- if (netif_is_vxlan(tunnel_dev))
- return MLX5E_TC_TUNNEL_TYPE_VXLAN;
- else if (netif_is_gretap(tunnel_dev) ||
- netif_is_ip6gretap(tunnel_dev))
- return MLX5E_TC_TUNNEL_TYPE_GRETAP;
- else
- return MLX5E_TC_TUNNEL_TYPE_UNKNOWN;
-}
-
bool mlx5e_tc_tun_device_to_offload(struct mlx5e_priv *priv,
struct net_device *netdev)
{
- int tunnel_type = mlx5e_tc_tun_get_type(netdev);
+ struct mlx5e_tc_tunnel *tunnel = mlx5e_get_tc_tun(netdev);
- if (tunnel_type == MLX5E_TC_TUNNEL_TYPE_VXLAN &&
- MLX5_CAP_ESW(priv->mdev, vxlan_encap_decap))
- return true;
- else if (tunnel_type == MLX5E_TC_TUNNEL_TYPE_GRETAP &&
- MLX5_CAP_ESW(priv->mdev, nvgre_encap_decap))
+ if (tunnel && tunnel->can_offload(priv))
return true;
else
return false;
@@ -487,71 +439,87 @@ int mlx5e_tc_tun_init_encap_attr(struct net_device *tunnel_dev,
struct mlx5e_encap_entry *e,
struct netlink_ext_ack *extack)
{
- e->tunnel_type = mlx5e_tc_tun_get_type(tunnel_dev);
+ struct mlx5e_tc_tunnel *tunnel = mlx5e_get_tc_tun(tunnel_dev);
- if (e->tunnel_type == MLX5E_TC_TUNNEL_TYPE_VXLAN) {
- int dst_port = be16_to_cpu(e->tun_info.key.tp_dst);
-
- if (!mlx5_vxlan_lookup_port(priv->mdev->vxlan, dst_port)) {
- NL_SET_ERR_MSG_MOD(extack,
- "vxlan udp dport was not registered with the HW");
- netdev_warn(priv->netdev,
- "%d isn't an offloaded vxlan udp dport\n",
- dst_port);
- return -EOPNOTSUPP;
- }
- e->reformat_type = MLX5_REFORMAT_TYPE_L2_TO_VXLAN;
- e->tunnel_hlen = VXLAN_HLEN;
- } else if (e->tunnel_type == MLX5E_TC_TUNNEL_TYPE_GRETAP) {
- e->reformat_type = MLX5_REFORMAT_TYPE_L2_TO_NVGRE;
- e->tunnel_hlen = gre_calc_hlen(e->tun_info.key.tun_flags);
- } else {
+ if (!tunnel) {
e->reformat_type = -1;
- e->tunnel_hlen = -1;
return -EOPNOTSUPP;
}
- return 0;
+
+ return tunnel->init_encap_attr(tunnel_dev, priv, e, extack);
}
-static int mlx5e_tc_tun_parse_vxlan(struct mlx5e_priv *priv,
- struct mlx5_flow_spec *spec,
- struct tc_cls_flower_offload *f,
- void *headers_c,
- void *headers_v)
+int mlx5e_tc_tun_parse(struct net_device *filter_dev,
+ struct mlx5e_priv *priv,
+ struct mlx5_flow_spec *spec,
+ struct flow_cls_offload *f,
+ void *headers_c,
+ void *headers_v, u8 *match_level)
{
- struct flow_rule *rule = tc_cls_flower_offload_flow_rule(f);
+ struct mlx5e_tc_tunnel *tunnel = mlx5e_get_tc_tun(filter_dev);
+ int err = 0;
+
+ if (!tunnel) {
+ netdev_warn(priv->netdev,
+ "decapsulation offload is not supported for %s net device\n",
+ mlx5e_netdev_kind(filter_dev));
+ err = -EOPNOTSUPP;
+ goto out;
+ }
+
+ *match_level = tunnel->match_level;
+
+ if (tunnel->parse_udp_ports) {
+ err = tunnel->parse_udp_ports(priv, spec, f,
+ headers_c, headers_v);
+ if (err)
+ goto out;
+ }
+
+ if (tunnel->parse_tunnel) {
+ err = tunnel->parse_tunnel(priv, spec, f,
+ headers_c, headers_v);
+ if (err)
+ goto out;
+ }
+
+out:
+ return err;
+}
+
+int mlx5e_tc_tun_parse_udp_ports(struct mlx5e_priv *priv,
+ struct mlx5_flow_spec *spec,
+ struct flow_cls_offload *f,
+ void *headers_c,
+ void *headers_v)
+{
+ struct flow_rule *rule = flow_cls_offload_flow_rule(f);
struct netlink_ext_ack *extack = f->common.extack;
- void *misc_c = MLX5_ADDR_OF(fte_match_param,
- spec->match_criteria,
- misc_parameters);
- void *misc_v = MLX5_ADDR_OF(fte_match_param,
- spec->match_value,
- misc_parameters);
struct flow_match_ports enc_ports;
- flow_rule_match_enc_ports(rule, &enc_ports);
-
/* Full udp dst port must be given */
- if (!flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_PORTS) ||
- memchr_inv(&enc_ports.mask->dst, 0xff, sizeof(enc_ports.mask->dst))) {
+
+ if (!flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_PORTS)) {
NL_SET_ERR_MSG_MOD(extack,
- "VXLAN decap filter must include enc_dst_port condition");
+ "UDP tunnel decap filter must include enc_dst_port condition");
netdev_warn(priv->netdev,
- "VXLAN decap filter must include enc_dst_port condition\n");
+ "UDP tunnel decap filter must include enc_dst_port condition\n");
return -EOPNOTSUPP;
}
- /* udp dst port must be knonwn as a VXLAN port */
- if (!mlx5_vxlan_lookup_port(priv->mdev->vxlan, be16_to_cpu(enc_ports.key->dst))) {
+ flow_rule_match_enc_ports(rule, &enc_ports);
+
+ if (memchr_inv(&enc_ports.mask->dst, 0xff,
+ sizeof(enc_ports.mask->dst))) {
NL_SET_ERR_MSG_MOD(extack,
- "Matched UDP port is not registered as a VXLAN port");
+ "UDP tunnel decap filter must match enc_dst_port fully");
netdev_warn(priv->netdev,
- "UDP port %d is not registered as a VXLAN port\n",
- be16_to_cpu(enc_ports.key->dst));
+ "UDP tunnel decap filter must match enc_dst_port fully\n");
return -EOPNOTSUPP;
}
- /* dst UDP port is valid here */
+ /* match on UDP protocol and dst port number */
+
MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, headers_c, ip_protocol);
MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_UDP);
@@ -560,92 +528,15 @@ static int mlx5e_tc_tun_parse_vxlan(struct mlx5e_priv *priv,
MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport,
ntohs(enc_ports.key->dst));
+ /* UDP src port on outer header is generated by HW,
+ * so it is probably a bad idea to request matching it.
+ * Nonetheless, it is allowed.
+ */
+
MLX5_SET(fte_match_set_lyr_2_4, headers_c, udp_sport,
ntohs(enc_ports.mask->src));
MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_sport,
ntohs(enc_ports.key->src));
- /* match on VNI */
- if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_KEYID)) {
- struct flow_match_enc_keyid enc_keyid;
-
- flow_rule_match_enc_keyid(rule, &enc_keyid);
-
- MLX5_SET(fte_match_set_misc, misc_c, vxlan_vni,
- be32_to_cpu(enc_keyid.mask->keyid));
- MLX5_SET(fte_match_set_misc, misc_v, vxlan_vni,
- be32_to_cpu(enc_keyid.key->keyid));
- }
- return 0;
-}
-
-static int mlx5e_tc_tun_parse_gretap(struct mlx5e_priv *priv,
- struct mlx5_flow_spec *spec,
- struct tc_cls_flower_offload *f,
- void *outer_headers_c,
- void *outer_headers_v)
-{
- void *misc_c = MLX5_ADDR_OF(fte_match_param, spec->match_criteria,
- misc_parameters);
- void *misc_v = MLX5_ADDR_OF(fte_match_param, spec->match_value,
- misc_parameters);
- struct flow_rule *rule = tc_cls_flower_offload_flow_rule(f);
-
- if (!MLX5_CAP_ESW(priv->mdev, nvgre_encap_decap)) {
- NL_SET_ERR_MSG_MOD(f->common.extack,
- "GRE HW offloading is not supported");
- netdev_warn(priv->netdev, "GRE HW offloading is not supported\n");
- return -EOPNOTSUPP;
- }
-
- MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, outer_headers_c, ip_protocol);
- MLX5_SET(fte_match_set_lyr_2_4, outer_headers_v,
- ip_protocol, IPPROTO_GRE);
-
- /* gre protocol*/
- MLX5_SET_TO_ONES(fte_match_set_misc, misc_c, gre_protocol);
- MLX5_SET(fte_match_set_misc, misc_v, gre_protocol, ETH_P_TEB);
-
- /* gre key */
- if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_KEYID)) {
- struct flow_match_enc_keyid enc_keyid;
-
- flow_rule_match_enc_keyid(rule, &enc_keyid);
- MLX5_SET(fte_match_set_misc, misc_c,
- gre_key.key, be32_to_cpu(enc_keyid.mask->keyid));
- MLX5_SET(fte_match_set_misc, misc_v,
- gre_key.key, be32_to_cpu(enc_keyid.key->keyid));
- }
-
return 0;
}
-
-int mlx5e_tc_tun_parse(struct net_device *filter_dev,
- struct mlx5e_priv *priv,
- struct mlx5_flow_spec *spec,
- struct tc_cls_flower_offload *f,
- void *headers_c,
- void *headers_v, u8 *match_level)
-{
- int tunnel_type;
- int err = 0;
-
- tunnel_type = mlx5e_tc_tun_get_type(filter_dev);
- if (tunnel_type == MLX5E_TC_TUNNEL_TYPE_VXLAN) {
- *match_level = MLX5_MATCH_L4;
- err = mlx5e_tc_tun_parse_vxlan(priv, spec, f,
- headers_c, headers_v);
- } else if (tunnel_type == MLX5E_TC_TUNNEL_TYPE_GRETAP) {
- *match_level = MLX5_MATCH_L3;
- err = mlx5e_tc_tun_parse_gretap(priv, spec, f,
- headers_c, headers_v);
- } else {
- netdev_warn(priv->netdev,
- "decapsulation offload is not supported for %s (kind: \"%s\")\n",
- netdev_name(filter_dev),
- mlx5e_netdev_kind(filter_dev));
-
- return -EOPNOTSUPP;
- }
- return err;
-}
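With the dispatch above, tc_tun.c no longer carries any tunnel-specific logic; callers only see the generic entry points. A minimal sketch of how a flower decap offload path would consume them (the caller shape is an assumption for illustration, not actual en_tc.c code):

/* Hedged sketch: offloading a decap match via the generic tc_tun API. */
static int example_parse_decap(struct mlx5e_priv *priv,
                               struct net_device *filter_dev,
                               struct mlx5_flow_spec *spec,
                               struct flow_cls_offload *f,
                               void *headers_c, void *headers_v)
{
        u8 match_level = 0;

        if (!mlx5e_tc_tun_device_to_offload(priv, filter_dev))
                return -EOPNOTSUPP;     /* no ops table or missing HW capability */

        /* dispatches to the matched tunnel's parse_udp_ports()/parse_tunnel() */
        return mlx5e_tc_tun_parse(filter_dev, priv, spec, f,
                                  headers_c, headers_v, &match_level);
}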
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.h b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.h
index b63f15de899d..c362b9225dc2 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.h
@@ -14,9 +14,41 @@
enum {
MLX5E_TC_TUNNEL_TYPE_UNKNOWN,
MLX5E_TC_TUNNEL_TYPE_VXLAN,
- MLX5E_TC_TUNNEL_TYPE_GRETAP
+ MLX5E_TC_TUNNEL_TYPE_GENEVE,
+ MLX5E_TC_TUNNEL_TYPE_GRETAP,
};
+struct mlx5e_tc_tunnel {
+ int tunnel_type;
+ enum mlx5_flow_match_level match_level;
+
+ bool (*can_offload)(struct mlx5e_priv *priv);
+ int (*calc_hlen)(struct mlx5e_encap_entry *e);
+ int (*init_encap_attr)(struct net_device *tunnel_dev,
+ struct mlx5e_priv *priv,
+ struct mlx5e_encap_entry *e,
+ struct netlink_ext_ack *extack);
+ int (*generate_ip_tun_hdr)(char buf[],
+ __u8 *ip_proto,
+ struct mlx5e_encap_entry *e);
+ int (*parse_udp_ports)(struct mlx5e_priv *priv,
+ struct mlx5_flow_spec *spec,
+ struct flow_cls_offload *f,
+ void *headers_c,
+ void *headers_v);
+ int (*parse_tunnel)(struct mlx5e_priv *priv,
+ struct mlx5_flow_spec *spec,
+ struct flow_cls_offload *f,
+ void *headers_c,
+ void *headers_v);
+};
+
+extern struct mlx5e_tc_tunnel vxlan_tunnel;
+extern struct mlx5e_tc_tunnel geneve_tunnel;
+extern struct mlx5e_tc_tunnel gre_tunnel;
+
+struct mlx5e_tc_tunnel *mlx5e_get_tc_tun(struct net_device *tunnel_dev);
+
int mlx5e_tc_tun_init_encap_attr(struct net_device *tunnel_dev,
struct mlx5e_priv *priv,
struct mlx5e_encap_entry *e,
@@ -30,15 +62,20 @@ int mlx5e_tc_tun_create_header_ipv6(struct mlx5e_priv *priv,
struct net_device *mirred_dev,
struct mlx5e_encap_entry *e);
-int mlx5e_tc_tun_get_type(struct net_device *tunnel_dev);
bool mlx5e_tc_tun_device_to_offload(struct mlx5e_priv *priv,
struct net_device *netdev);
int mlx5e_tc_tun_parse(struct net_device *filter_dev,
struct mlx5e_priv *priv,
struct mlx5_flow_spec *spec,
- struct tc_cls_flower_offload *f,
+ struct flow_cls_offload *f,
void *headers_c,
void *headers_v, u8 *match_level);
+int mlx5e_tc_tun_parse_udp_ports(struct mlx5e_priv *priv,
+ struct mlx5_flow_spec *spec,
+ struct flow_cls_offload *f,
+ void *headers_c,
+ void *headers_v);
+
#endif //__MLX5_EN_TC_TUNNEL_H__
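The three extern ops tables are now the only per-tunnel knowledge the core needs, so supporting another encapsulation means supplying one more ops table plus a hook in mlx5e_get_tc_tun(), a new enum value and a Makefile entry. A skeletal sketch with a purely hypothetical "foo" tunnel (names and stubs are illustrative, not a real protocol):

/* tc_tun_foo.c -- illustrative skeleton only, not part of this series */
#include "en/tc_tun.h"

extern struct mlx5e_tc_tunnel foo_tunnel;       /* would really live in tc_tun.h */

static bool mlx5e_tc_tun_can_offload_foo(struct mlx5e_priv *priv)
{
        return false;           /* report the relevant device capability here */
}

static int mlx5e_tc_tun_calc_hlen_foo(struct mlx5e_encap_entry *e)
{
        return 0;               /* encap header length derived from e->tun_info */
}

static int mlx5e_tc_tun_init_encap_attr_foo(struct net_device *tunnel_dev,
                                            struct mlx5e_priv *priv,
                                            struct mlx5e_encap_entry *e,
                                            struct netlink_ext_ack *extack)
{
        e->tunnel = &foo_tunnel;
        return -EOPNOTSUPP;     /* set e->reformat_type once supported */
}

static int mlx5e_gen_ip_tunnel_header_foo(char buf[], __u8 *ip_proto,
                                          struct mlx5e_encap_entry *e)
{
        return -EOPNOTSUPP;     /* write the outer L4+ tunnel header into buf */
}

static int mlx5e_tc_tun_parse_foo(struct mlx5e_priv *priv,
                                  struct mlx5_flow_spec *spec,
                                  struct flow_cls_offload *f,
                                  void *headers_c, void *headers_v)
{
        return -EOPNOTSUPP;     /* translate flower enc_* keys into the spec */
}

struct mlx5e_tc_tunnel foo_tunnel = {
        .tunnel_type         = MLX5E_TC_TUNNEL_TYPE_UNKNOWN, /* new enum value in practice */
        .match_level         = MLX5_MATCH_L3,
        .can_offload         = mlx5e_tc_tun_can_offload_foo,
        .calc_hlen           = mlx5e_tc_tun_calc_hlen_foo,
        .init_encap_attr     = mlx5e_tc_tun_init_encap_attr_foo,
        .generate_ip_tun_hdr = mlx5e_gen_ip_tunnel_header_foo,
        .parse_udp_ports     = NULL,    /* only needed for UDP-based tunnels */
        .parse_tunnel        = mlx5e_tc_tun_parse_foo,
};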
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_geneve.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_geneve.c
new file mode 100644
index 000000000000..951ea26d96bc
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_geneve.c
@@ -0,0 +1,335 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+/* Copyright (c) 2018 Mellanox Technologies. */
+
+#include <net/geneve.h>
+#include "lib/geneve.h"
+#include "en/tc_tun.h"
+
+#define MLX5E_GENEVE_VER 0
+
+static bool mlx5e_tc_tun_can_offload_geneve(struct mlx5e_priv *priv)
+{
+ return !!(MLX5_CAP_GEN(priv->mdev, flex_parser_protocols) & MLX5_FLEX_PROTO_GENEVE);
+}
+
+static int mlx5e_tc_tun_calc_hlen_geneve(struct mlx5e_encap_entry *e)
+{
+ return sizeof(struct udphdr) +
+ sizeof(struct genevehdr) +
+ e->tun_info->options_len;
+}
+
+static int mlx5e_tc_tun_check_udp_dport_geneve(struct mlx5e_priv *priv,
+ struct flow_cls_offload *f)
+{
+ struct flow_rule *rule = flow_cls_offload_flow_rule(f);
+ struct netlink_ext_ack *extack = f->common.extack;
+ struct flow_match_ports enc_ports;
+
+ if (!flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_PORTS))
+ return -EOPNOTSUPP;
+
+ flow_rule_match_enc_ports(rule, &enc_ports);
+
+ /* Currently we support only default GENEVE
+ * port, so udp dst port must match.
+ */
+ if (be16_to_cpu(enc_ports.key->dst) != GENEVE_UDP_PORT) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "Matched UDP dst port is not registered as a GENEVE port");
+ netdev_warn(priv->netdev,
+ "UDP port %d is not registered as a GENEVE port\n",
+ be16_to_cpu(enc_ports.key->dst));
+ return -EOPNOTSUPP;
+ }
+
+ return 0;
+}
+
+static int mlx5e_tc_tun_parse_udp_ports_geneve(struct mlx5e_priv *priv,
+ struct mlx5_flow_spec *spec,
+ struct flow_cls_offload *f,
+ void *headers_c,
+ void *headers_v)
+{
+ int err;
+
+ err = mlx5e_tc_tun_parse_udp_ports(priv, spec, f, headers_c, headers_v);
+ if (err)
+ return err;
+
+ return mlx5e_tc_tun_check_udp_dport_geneve(priv, f);
+}
+
+static int mlx5e_tc_tun_init_encap_attr_geneve(struct net_device *tunnel_dev,
+ struct mlx5e_priv *priv,
+ struct mlx5e_encap_entry *e,
+ struct netlink_ext_ack *extack)
+{
+ e->tunnel = &geneve_tunnel;
+
+ /* Reformat type for GENEVE encap is similar to VXLAN:
+ * in both cases the HW adds in the same place a
+ * defined encapsulation header that the SW provides.
+ */
+ e->reformat_type = MLX5_REFORMAT_TYPE_L2_TO_VXLAN;
+ return 0;
+}
+
+static void mlx5e_tunnel_id_to_vni(__be64 tun_id, __u8 *vni)
+{
+#ifdef __BIG_ENDIAN
+ vni[0] = (__force __u8)(tun_id >> 16);
+ vni[1] = (__force __u8)(tun_id >> 8);
+ vni[2] = (__force __u8)tun_id;
+#else
+ vni[0] = (__force __u8)((__force u64)tun_id >> 40);
+ vni[1] = (__force __u8)((__force u64)tun_id >> 48);
+ vni[2] = (__force __u8)((__force u64)tun_id >> 56);
+#endif
+}
+
+static int mlx5e_gen_ip_tunnel_header_geneve(char buf[],
+ __u8 *ip_proto,
+ struct mlx5e_encap_entry *e)
+{
+ const struct ip_tunnel_info *tun_info = e->tun_info;
+ struct udphdr *udp = (struct udphdr *)(buf);
+ struct genevehdr *geneveh;
+
+ geneveh = (struct genevehdr *)((char *)udp + sizeof(struct udphdr));
+
+ *ip_proto = IPPROTO_UDP;
+
+ udp->dest = tun_info->key.tp_dst;
+
+ memset(geneveh, 0, sizeof(*geneveh));
+ geneveh->ver = MLX5E_GENEVE_VER;
+ geneveh->opt_len = tun_info->options_len / 4;
+ geneveh->oam = !!(tun_info->key.tun_flags & TUNNEL_OAM);
+ geneveh->critical = !!(tun_info->key.tun_flags & TUNNEL_CRIT_OPT);
+ mlx5e_tunnel_id_to_vni(tun_info->key.tun_id, geneveh->vni);
+ geneveh->proto_type = htons(ETH_P_TEB);
+
+ if (tun_info->key.tun_flags & TUNNEL_GENEVE_OPT) {
+ if (!geneveh->opt_len)
+ return -EOPNOTSUPP;
+ ip_tunnel_info_opts_get(geneveh->options, tun_info);
+ }
+
+ return 0;
+}
+
+static int mlx5e_tc_tun_parse_geneve_vni(struct mlx5e_priv *priv,
+ struct mlx5_flow_spec *spec,
+ struct flow_cls_offload *f)
+{
+ struct flow_rule *rule = flow_cls_offload_flow_rule(f);
+ struct netlink_ext_ack *extack = f->common.extack;
+ struct flow_match_enc_keyid enc_keyid;
+ void *misc_c, *misc_v;
+
+ misc_c = MLX5_ADDR_OF(fte_match_param, spec->match_criteria, misc_parameters);
+ misc_v = MLX5_ADDR_OF(fte_match_param, spec->match_value, misc_parameters);
+
+ if (!flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_KEYID))
+ return 0;
+
+ flow_rule_match_enc_keyid(rule, &enc_keyid);
+
+ if (!enc_keyid.mask->keyid)
+ return 0;
+
+ if (!MLX5_CAP_ESW_FLOWTABLE_FDB(priv->mdev, ft_field_support.outer_geneve_vni)) {
+ NL_SET_ERR_MSG_MOD(extack, "Matching on GENEVE VNI is not supported");
+ netdev_warn(priv->netdev, "Matching on GENEVE VNI is not supported\n");
+ return -EOPNOTSUPP;
+ }
+
+ MLX5_SET(fte_match_set_misc, misc_c, geneve_vni, be32_to_cpu(enc_keyid.mask->keyid));
+ MLX5_SET(fte_match_set_misc, misc_v, geneve_vni, be32_to_cpu(enc_keyid.key->keyid));
+
+ return 0;
+}
+
+static int mlx5e_tc_tun_parse_geneve_options(struct mlx5e_priv *priv,
+ struct mlx5_flow_spec *spec,
+ struct flow_cls_offload *f)
+{
+ u8 max_tlv_option_data_len = MLX5_CAP_GEN(priv->mdev, max_geneve_tlv_option_data_len);
+ u8 max_tlv_options = MLX5_CAP_GEN(priv->mdev, max_geneve_tlv_options);
+ struct flow_rule *rule = flow_cls_offload_flow_rule(f);
+ struct netlink_ext_ack *extack = f->common.extack;
+ void *misc_c, *misc_v, *misc_3_c, *misc_3_v;
+ struct geneve_opt *option_key, *option_mask;
+ __be32 opt_data_key = 0, opt_data_mask = 0;
+ struct flow_match_enc_opts enc_opts;
+ int res = 0;
+
+ misc_c = MLX5_ADDR_OF(fte_match_param, spec->match_criteria, misc_parameters);
+ misc_v = MLX5_ADDR_OF(fte_match_param, spec->match_value, misc_parameters);
+ misc_3_c = MLX5_ADDR_OF(fte_match_param, spec->match_criteria, misc_parameters_3);
+ misc_3_v = MLX5_ADDR_OF(fte_match_param, spec->match_value, misc_parameters_3);
+
+ if (!flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_OPTS))
+ return 0;
+
+ flow_rule_match_enc_opts(rule, &enc_opts);
+
+ if (memchr_inv(&enc_opts.mask->data, 0, sizeof(enc_opts.mask->data)) &&
+ !MLX5_CAP_ESW_FLOWTABLE_FDB(priv->mdev,
+ ft_field_support.geneve_tlv_option_0_data)) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "Matching on GENEVE options is not supported");
+ netdev_warn(priv->netdev,
+ "Matching on GENEVE options is not supported\n");
+ return -EOPNOTSUPP;
+ }
+
+ /* make sure that we're talking about GENEVE options */
+
+ if (enc_opts.key->dst_opt_type != TUNNEL_GENEVE_OPT) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "Matching on GENEVE options: option type is not GENEVE");
+ netdev_warn(priv->netdev,
+ "Matching on GENEVE options: option type is not GENEVE\n");
+ return -EOPNOTSUPP;
+ }
+
+ if (enc_opts.mask->len &&
+ !MLX5_CAP_ESW_FLOWTABLE_FDB(priv->mdev,
+ ft_field_support.outer_geneve_opt_len)) {
+ NL_SET_ERR_MSG_MOD(extack, "Matching on GENEVE options len is not supported");
+ netdev_warn(priv->netdev,
+ "Matching on GENEVE options len is not supported\n");
+ return -EOPNOTSUPP;
+ }
+
+ /* max_geneve_tlv_option_data_len comes in multiples of 4 bytes, and it
+ * doesn't include the TLV option header. 'geneve_opt_len' is a total
+ * len of all the options, including the headers, also multiples of 4
+ * bytes. Len that comes from the dissector is in bytes.
+ */
+
+ if ((enc_opts.key->len / 4) > ((max_tlv_option_data_len + 1) * max_tlv_options)) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "Matching on GENEVE options: unsupported options len");
+ netdev_warn(priv->netdev,
+ "Matching on GENEVE options: unsupported options len (len=%d)\n",
+ enc_opts.key->len);
+ return -EOPNOTSUPP;
+ }
+
+ MLX5_SET(fte_match_set_misc, misc_c, geneve_opt_len, enc_opts.mask->len / 4);
+ MLX5_SET(fte_match_set_misc, misc_v, geneve_opt_len, enc_opts.key->len / 4);
+
+ /* we support matching on one option only, so just get it */
+ option_key = (struct geneve_opt *)&enc_opts.key->data[0];
+ option_mask = (struct geneve_opt *)&enc_opts.mask->data[0];
+
+ if (option_key->length > max_tlv_option_data_len) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "Matching on GENEVE options: unsupported option len");
+ netdev_warn(priv->netdev,
+ "Matching on GENEVE options: unsupported option len (key=%d, mask=%d)\n",
+ option_key->length, option_mask->length);
+ return -EOPNOTSUPP;
+ }
+
+ /* data can't be all 0 - fail to offload such rule */
+ if (!memchr_inv(option_key->opt_data, 0, option_key->length * 4)) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "Matching on GENEVE options: can't match on 0 data field");
+ netdev_warn(priv->netdev,
+ "Matching on GENEVE options: can't match on 0 data field\n");
+ return -EOPNOTSUPP;
+ }
+
+ /* add new GENEVE TLV options object */
+ res = mlx5_geneve_tlv_option_add(priv->mdev->geneve, option_key);
+ if (res) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "Matching on GENEVE options: failed creating TLV opt object");
+ netdev_warn(priv->netdev,
+ "Matching on GENEVE options: failed creating TLV opt object (class:type:len = 0x%x:0x%x:%d)\n",
+ be16_to_cpu(option_key->opt_class),
+ option_key->type, option_key->length);
+ return res;
+ }
+
+ /* In general, after creating the object, need to query it
+ * in order to check which option data to set in misc3.
+ * But we support only geneve_tlv_option_0_data, so no
+ * point querying at this stage.
+ */
+
+ memcpy(&opt_data_key, option_key->opt_data, option_key->length * 4);
+ memcpy(&opt_data_mask, option_mask->opt_data, option_mask->length * 4);
+ MLX5_SET(fte_match_set_misc3, misc_3_v,
+ geneve_tlv_option_0_data, be32_to_cpu(opt_data_key));
+ MLX5_SET(fte_match_set_misc3, misc_3_c,
+ geneve_tlv_option_0_data, be32_to_cpu(opt_data_mask));
+
+ spec->match_criteria_enable |= MLX5_MATCH_MISC_PARAMETERS_3;
+
+ return 0;
+}
+
+static int mlx5e_tc_tun_parse_geneve_params(struct mlx5e_priv *priv,
+ struct mlx5_flow_spec *spec,
+ struct flow_cls_offload *f)
+{
+ void *misc_c = MLX5_ADDR_OF(fte_match_param, spec->match_criteria, misc_parameters);
+ void *misc_v = MLX5_ADDR_OF(fte_match_param, spec->match_value, misc_parameters);
+ struct netlink_ext_ack *extack = f->common.extack;
+
+ /* match on OAM - packets with OAM bit on should NOT be offloaded */
+
+ if (!MLX5_CAP_ESW_FLOWTABLE_FDB(priv->mdev, ft_field_support.outer_geneve_oam)) {
+ NL_SET_ERR_MSG_MOD(extack, "Matching on GENEVE OAM is not supported");
+ netdev_warn(priv->netdev, "Matching on GENEVE OAM is not supported\n");
+ return -EOPNOTSUPP;
+ }
+ MLX5_SET_TO_ONES(fte_match_set_misc, misc_c, geneve_oam);
+ MLX5_SET(fte_match_set_misc, misc_v, geneve_oam, 0);
+
+ /* Match on GENEVE protocol. We support only Transparent Eth Bridge. */
+
+ if (MLX5_CAP_ESW_FLOWTABLE_FDB(priv->mdev,
+ ft_field_support.outer_geneve_protocol_type)) {
+ MLX5_SET_TO_ONES(fte_match_set_misc, misc_c, geneve_protocol_type);
+ MLX5_SET(fte_match_set_misc, misc_v, geneve_protocol_type, ETH_P_TEB);
+ }
+
+ return 0;
+}
+
+static int mlx5e_tc_tun_parse_geneve(struct mlx5e_priv *priv,
+ struct mlx5_flow_spec *spec,
+ struct flow_cls_offload *f,
+ void *headers_c,
+ void *headers_v)
+{
+ int err;
+
+ err = mlx5e_tc_tun_parse_geneve_params(priv, spec, f);
+ if (err)
+ return err;
+
+ err = mlx5e_tc_tun_parse_geneve_vni(priv, spec, f);
+ if (err)
+ return err;
+
+ return mlx5e_tc_tun_parse_geneve_options(priv, spec, f);
+}
+
+struct mlx5e_tc_tunnel geneve_tunnel = {
+ .tunnel_type = MLX5E_TC_TUNNEL_TYPE_GENEVE,
+ .match_level = MLX5_MATCH_L4,
+ .can_offload = mlx5e_tc_tun_can_offload_geneve,
+ .calc_hlen = mlx5e_tc_tun_calc_hlen_geneve,
+ .init_encap_attr = mlx5e_tc_tun_init_encap_attr_geneve,
+ .generate_ip_tun_hdr = mlx5e_gen_ip_tunnel_header_geneve,
+ .parse_udp_ports = mlx5e_tc_tun_parse_udp_ports_geneve,
+ .parse_tunnel = mlx5e_tc_tun_parse_geneve,
+};
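mlx5e_tunnel_id_to_vni() boils down to one fact: the 24-bit VNI is the low three bytes of the numeric tunnel id, i.e. bytes 5..7 of its big-endian (__be64) on-wire representation; the #ifdef merely re-derives those bytes from the host-order view on either endianness. A standalone user-space check of that arithmetic (illustrative, not driver code):

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <endian.h>

int main(void)
{
        uint64_t tun_id_be = htobe64(0x123456); /* tunnel id 0x123456, stored as __be64 */
        uint8_t bytes[8], vni[3];

        memcpy(bytes, &tun_id_be, sizeof(bytes));
        memcpy(vni, &bytes[5], 3);              /* VNI = bytes 5..7 of the BE value */
        printf("vni = %02x%02x%02x\n", vni[0], vni[1], vni[2]);  /* prints 123456 */
        return 0;
}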
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_gre.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_gre.c
new file mode 100644
index 000000000000..58b13192df23
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_gre.c
@@ -0,0 +1,95 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+/* Copyright (c) 2018 Mellanox Technologies. */
+
+#include <net/gre.h>
+#include "en/tc_tun.h"
+
+static bool mlx5e_tc_tun_can_offload_gretap(struct mlx5e_priv *priv)
+{
+ return !!MLX5_CAP_ESW(priv->mdev, nvgre_encap_decap);
+}
+
+static int mlx5e_tc_tun_calc_hlen_gretap(struct mlx5e_encap_entry *e)
+{
+ return gre_calc_hlen(e->tun_info->key.tun_flags);
+}
+
+static int mlx5e_tc_tun_init_encap_attr_gretap(struct net_device *tunnel_dev,
+ struct mlx5e_priv *priv,
+ struct mlx5e_encap_entry *e,
+ struct netlink_ext_ack *extack)
+{
+ e->tunnel = &gre_tunnel;
+ e->reformat_type = MLX5_REFORMAT_TYPE_L2_TO_NVGRE;
+ return 0;
+}
+
+static int mlx5e_gen_ip_tunnel_header_gretap(char buf[],
+ __u8 *ip_proto,
+ struct mlx5e_encap_entry *e)
+{
+ const struct ip_tunnel_key *tun_key = &e->tun_info->key;
+ struct gre_base_hdr *greh = (struct gre_base_hdr *)(buf);
+ __be32 tun_id = tunnel_id_to_key32(tun_key->tun_id);
+ int hdr_len;
+
+ *ip_proto = IPPROTO_GRE;
+
+ /* the HW does not calculate GRE csum or sequences */
+ if (tun_key->tun_flags & (TUNNEL_CSUM | TUNNEL_SEQ))
+ return -EOPNOTSUPP;
+
+ greh->protocol = htons(ETH_P_TEB);
+
+ /* GRE key */
+ hdr_len = mlx5e_tc_tun_calc_hlen_gretap(e);
+ greh->flags = gre_tnl_flags_to_gre_flags(tun_key->tun_flags);
+ if (tun_key->tun_flags & TUNNEL_KEY) {
+ __be32 *ptr = (__be32 *)(((u8 *)greh) + hdr_len - 4);
+ *ptr = tun_id;
+ }
+
+ return 0;
+}
+
+static int mlx5e_tc_tun_parse_gretap(struct mlx5e_priv *priv,
+ struct mlx5_flow_spec *spec,
+ struct flow_cls_offload *f,
+ void *headers_c,
+ void *headers_v)
+{
+ void *misc_c = MLX5_ADDR_OF(fte_match_param, spec->match_criteria, misc_parameters);
+ void *misc_v = MLX5_ADDR_OF(fte_match_param, spec->match_value, misc_parameters);
+ struct flow_rule *rule = flow_cls_offload_flow_rule(f);
+
+ MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, headers_c, ip_protocol);
+ MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_GRE);
+
+ /* gre protocol */
+ MLX5_SET_TO_ONES(fte_match_set_misc, misc_c, gre_protocol);
+ MLX5_SET(fte_match_set_misc, misc_v, gre_protocol, ETH_P_TEB);
+
+ /* gre key */
+ if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_KEYID)) {
+ struct flow_match_enc_keyid enc_keyid;
+
+ flow_rule_match_enc_keyid(rule, &enc_keyid);
+ MLX5_SET(fte_match_set_misc, misc_c,
+ gre_key.key, be32_to_cpu(enc_keyid.mask->keyid));
+ MLX5_SET(fte_match_set_misc, misc_v,
+ gre_key.key, be32_to_cpu(enc_keyid.key->keyid));
+ }
+
+ return 0;
+}
+
+struct mlx5e_tc_tunnel gre_tunnel = {
+ .tunnel_type = MLX5E_TC_TUNNEL_TYPE_GRETAP,
+ .match_level = MLX5_MATCH_L3,
+ .can_offload = mlx5e_tc_tun_can_offload_gretap,
+ .calc_hlen = mlx5e_tc_tun_calc_hlen_gretap,
+ .init_encap_attr = mlx5e_tc_tun_init_encap_attr_gretap,
+ .generate_ip_tun_hdr = mlx5e_gen_ip_tunnel_header_gretap,
+ .parse_udp_ports = NULL,
+ .parse_tunnel = mlx5e_tc_tun_parse_gretap,
+};
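Because TUNNEL_CSUM and TUNNEL_SEQ are rejected in the header generator above, the only optional word the driver ever emits is the GRE key, which sits immediately after the 4-byte base header; with only TUNNEL_KEY set, gre_calc_hlen() returns 8, so writing at hdr_len - 4 always lands on the key slot. A standalone sketch of that 8-byte keyed gretap header (illustrative, not driver code):

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>

int main(void)
{
        uint8_t hdr[8];
        uint16_t flags = htons(0x2000);         /* K bit set, version 0 */
        uint16_t proto = htons(0x6558);         /* ETH_P_TEB, as in the driver */
        uint32_t key   = htonl(42);             /* tunnel key taken from tun_id */

        memcpy(&hdr[0], &flags, 2);             /* C|R|K|S flags + version */
        memcpy(&hdr[2], &proto, 2);             /* encapsulated protocol */
        memcpy(&hdr[4], &key, 4);               /* key word at hdr_len - 4 == offset 4 */

        printf("keyed gretap header: %zu bytes, key at offset 4\n", sizeof(hdr));
        return 0;
}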
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_vxlan.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_vxlan.c
new file mode 100644
index 000000000000..37b176801bcc
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_vxlan.c
@@ -0,0 +1,151 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+/* Copyright (c) 2018 Mellanox Technologies. */
+
+#include <net/vxlan.h>
+#include "lib/vxlan.h"
+#include "en/tc_tun.h"
+
+static bool mlx5e_tc_tun_can_offload_vxlan(struct mlx5e_priv *priv)
+{
+ return !!MLX5_CAP_ESW(priv->mdev, vxlan_encap_decap);
+}
+
+static int mlx5e_tc_tun_calc_hlen_vxlan(struct mlx5e_encap_entry *e)
+{
+ return VXLAN_HLEN;
+}
+
+static int mlx5e_tc_tun_check_udp_dport_vxlan(struct mlx5e_priv *priv,
+ struct flow_cls_offload *f)
+{
+ struct flow_rule *rule = flow_cls_offload_flow_rule(f);
+ struct netlink_ext_ack *extack = f->common.extack;
+ struct flow_match_ports enc_ports;
+
+ if (!flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_PORTS))
+ return -EOPNOTSUPP;
+
+ flow_rule_match_enc_ports(rule, &enc_ports);
+
+ /* check the UDP destination port validity */
+
+ if (!mlx5_vxlan_lookup_port(priv->mdev->vxlan,
+ be16_to_cpu(enc_ports.key->dst))) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "Matched UDP dst port is not registered as a VXLAN port");
+ netdev_warn(priv->netdev,
+ "UDP port %d is not registered as a VXLAN port\n",
+ be16_to_cpu(enc_ports.key->dst));
+ return -EOPNOTSUPP;
+ }
+
+ return 0;
+}
+
+static int mlx5e_tc_tun_parse_udp_ports_vxlan(struct mlx5e_priv *priv,
+ struct mlx5_flow_spec *spec,
+ struct flow_cls_offload *f,
+ void *headers_c,
+ void *headers_v)
+{
+ int err = 0;
+
+ err = mlx5e_tc_tun_parse_udp_ports(priv, spec, f, headers_c, headers_v);
+ if (err)
+ return err;
+
+ return mlx5e_tc_tun_check_udp_dport_vxlan(priv, f);
+}
+
+static int mlx5e_tc_tun_init_encap_attr_vxlan(struct net_device *tunnel_dev,
+ struct mlx5e_priv *priv,
+ struct mlx5e_encap_entry *e,
+ struct netlink_ext_ack *extack)
+{
+ int dst_port = be16_to_cpu(e->tun_info->key.tp_dst);
+
+ e->tunnel = &vxlan_tunnel;
+
+ if (!mlx5_vxlan_lookup_port(priv->mdev->vxlan, dst_port)) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "vxlan udp dport was not registered with the HW");
+ netdev_warn(priv->netdev,
+ "%d isn't an offloaded vxlan udp dport\n",
+ dst_port);
+ return -EOPNOTSUPP;
+ }
+
+ e->reformat_type = MLX5_REFORMAT_TYPE_L2_TO_VXLAN;
+ return 0;
+}
+
+static int mlx5e_gen_ip_tunnel_header_vxlan(char buf[],
+ __u8 *ip_proto,
+ struct mlx5e_encap_entry *e)
+{
+ const struct ip_tunnel_key *tun_key = &e->tun_info->key;
+ __be32 tun_id = tunnel_id_to_key32(tun_key->tun_id);
+ struct udphdr *udp = (struct udphdr *)(buf);
+ struct vxlanhdr *vxh;
+
+ vxh = (struct vxlanhdr *)((char *)udp + sizeof(struct udphdr));
+ *ip_proto = IPPROTO_UDP;
+
+ udp->dest = tun_key->tp_dst;
+ vxh->vx_flags = VXLAN_HF_VNI;
+ vxh->vx_vni = vxlan_vni_field(tun_id);
+
+ return 0;
+}
+
+static int mlx5e_tc_tun_parse_vxlan(struct mlx5e_priv *priv,
+ struct mlx5_flow_spec *spec,
+ struct flow_cls_offload *f,
+ void *headers_c,
+ void *headers_v)
+{
+ struct flow_rule *rule = flow_cls_offload_flow_rule(f);
+ struct netlink_ext_ack *extack = f->common.extack;
+ struct flow_match_enc_keyid enc_keyid;
+ void *misc_c, *misc_v;
+
+ misc_c = MLX5_ADDR_OF(fte_match_param, spec->match_criteria, misc_parameters);
+ misc_v = MLX5_ADDR_OF(fte_match_param, spec->match_value, misc_parameters);
+
+ if (!flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_KEYID))
+ return 0;
+
+ flow_rule_match_enc_keyid(rule, &enc_keyid);
+
+ if (!enc_keyid.mask->keyid)
+ return 0;
+
+ /* match on VNI is required */
+
+ if (!MLX5_CAP_ESW_FLOWTABLE_FDB(priv->mdev,
+ ft_field_support.outer_vxlan_vni)) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "Matching on VXLAN VNI is not supported");
+ netdev_warn(priv->netdev,
+ "Matching on VXLAN VNI is not supported\n");
+ return -EOPNOTSUPP;
+ }
+
+ MLX5_SET(fte_match_set_misc, misc_c, vxlan_vni,
+ be32_to_cpu(enc_keyid.mask->keyid));
+ MLX5_SET(fte_match_set_misc, misc_v, vxlan_vni,
+ be32_to_cpu(enc_keyid.key->keyid));
+
+ return 0;
+}
+
+struct mlx5e_tc_tunnel vxlan_tunnel = {
+ .tunnel_type = MLX5E_TC_TUNNEL_TYPE_VXLAN,
+ .match_level = MLX5_MATCH_L4,
+ .can_offload = mlx5e_tc_tun_can_offload_vxlan,
+ .calc_hlen = mlx5e_tc_tun_calc_hlen_vxlan,
+ .init_encap_attr = mlx5e_tc_tun_init_encap_attr_vxlan,
+ .generate_ip_tun_hdr = mlx5e_gen_ip_tunnel_header_vxlan,
+ .parse_udp_ports = mlx5e_tc_tun_parse_udp_ports_vxlan,
+ .parse_tunnel = mlx5e_tc_tun_parse_vxlan,
+};
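For VXLAN the header length is constant: VXLAN_HLEN is sizeof(struct udphdr) + sizeof(struct vxlanhdr) = 16 bytes, so the encap buffers built by mlx5e_tc_tun_create_header_ipv4/6() come to 50 bytes for plain IPv4 (54 with a VLAN tag) and 70 for IPv6, which is what gets compared against max_encap_header_size. A trivial standalone check of that arithmetic:

#include <stdio.h>

int main(void)
{
        int eth = 14, vlan_eth = 18, ipv4 = 20, ipv6 = 40;
        int vxlan_hlen = 8 /* udphdr */ + 8 /* vxlanhdr */;

        printf("ipv4 encap:      %d bytes\n", eth + ipv4 + vxlan_hlen);      /* 50 */
        printf("ipv4+vlan encap: %d bytes\n", vlan_eth + ipv4 + vxlan_hlen); /* 54 */
        printf("ipv6 encap:      %d bytes\n", eth + ipv6 + vxlan_hlen);      /* 70 */
        return 0;
}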
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
new file mode 100644
index 000000000000..ddfe19adb3d9
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
@@ -0,0 +1,208 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
+/* Copyright (c) 2019 Mellanox Technologies. */
+
+#ifndef __MLX5_EN_TXRX_H___
+#define __MLX5_EN_TXRX_H___
+
+#include "en.h"
+
+#define MLX5E_SQ_NOPS_ROOM MLX5_SEND_WQE_MAX_WQEBBS
+#define MLX5E_SQ_STOP_ROOM (MLX5_SEND_WQE_MAX_WQEBBS +\
+ MLX5E_SQ_NOPS_ROOM)
+
+#ifndef CONFIG_MLX5_EN_TLS
+#define MLX5E_SQ_TLS_ROOM (0)
+#else
+/* TLS offload requires additional stop_room for:
+ * - a resync SKB.
+ * kTLS offload requires additional stop_room for:
+ * - static params WQE,
+ * - progress params WQE, and
+ * - resync DUMP per frag.
+ */
+#define MLX5E_SQ_TLS_ROOM \
+ (MLX5_SEND_WQE_MAX_WQEBBS + \
+ MLX5E_KTLS_STATIC_WQEBBS + MLX5E_KTLS_PROGRESS_WQEBBS + \
+ MAX_SKB_FRAGS * MLX5E_KTLS_MAX_DUMP_WQEBBS)
+#endif
+
+#define INL_HDR_START_SZ (sizeof(((struct mlx5_wqe_eth_seg *)NULL)->inline_hdr.start))
+
+static inline bool
+mlx5e_wqc_has_room_for(struct mlx5_wq_cyc *wq, u16 cc, u16 pc, u16 n)
+{
+ return (mlx5_wq_cyc_ctr2ix(wq, cc - pc) >= n) || (cc == pc);
+}
+
+static inline void *
+mlx5e_sq_fetch_wqe(struct mlx5e_txqsq *sq, size_t size, u16 *pi)
+{
+ struct mlx5_wq_cyc *wq = &sq->wq;
+ void *wqe;
+
+ *pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
+ wqe = mlx5_wq_cyc_get_wqe(wq, *pi);
+ memset(wqe, 0, size);
+
+ return wqe;
+}
+
+static inline struct mlx5e_tx_wqe *
+mlx5e_post_nop(struct mlx5_wq_cyc *wq, u32 sqn, u16 *pc)
+{
+ u16 pi = mlx5_wq_cyc_ctr2ix(wq, *pc);
+ struct mlx5e_tx_wqe *wqe = mlx5_wq_cyc_get_wqe(wq, pi);
+ struct mlx5_wqe_ctrl_seg *cseg = &wqe->ctrl;
+
+ memset(cseg, 0, sizeof(*cseg));
+
+ cseg->opmod_idx_opcode = cpu_to_be32((*pc << 8) | MLX5_OPCODE_NOP);
+ cseg->qpn_ds = cpu_to_be32((sqn << 8) | 0x01);
+
+ (*pc)++;
+
+ return wqe;
+}
+
+static inline struct mlx5e_tx_wqe *
+mlx5e_post_nop_fence(struct mlx5_wq_cyc *wq, u32 sqn, u16 *pc)
+{
+ u16 pi = mlx5_wq_cyc_ctr2ix(wq, *pc);
+ struct mlx5e_tx_wqe *wqe = mlx5_wq_cyc_get_wqe(wq, pi);
+ struct mlx5_wqe_ctrl_seg *cseg = &wqe->ctrl;
+
+ memset(cseg, 0, sizeof(*cseg));
+
+ cseg->opmod_idx_opcode = cpu_to_be32((*pc << 8) | MLX5_OPCODE_NOP);
+ cseg->qpn_ds = cpu_to_be32((sqn << 8) | 0x01);
+ cseg->fm_ce_se = MLX5_FENCE_MODE_INITIATOR_SMALL;
+
+ (*pc)++;
+
+ return wqe;
+}
+
+static inline void
+mlx5e_fill_sq_frag_edge(struct mlx5e_txqsq *sq, struct mlx5_wq_cyc *wq,
+ u16 pi, u16 nnops)
+{
+ struct mlx5e_tx_wqe_info *edge_wi, *wi = &sq->db.wqe_info[pi];
+
+ edge_wi = wi + nnops;
+
+ /* fill sq frag edge with nops to avoid wqe wrapping two pages */
+ for (; wi < edge_wi; wi++) {
+ wi->skb = NULL;
+ wi->num_wqebbs = 1;
+ mlx5e_post_nop(wq, sq->sqn, &sq->pc);
+ }
+ sq->stats->nop += nnops;
+}
+
+static inline void
+mlx5e_notify_hw(struct mlx5_wq_cyc *wq, u16 pc, void __iomem *uar_map,
+ struct mlx5_wqe_ctrl_seg *ctrl)
+{
+ ctrl->fm_ce_se = MLX5_WQE_CTRL_CQ_UPDATE;
+ /* ensure wqe is visible to device before updating doorbell record */
+ dma_wmb();
+
+ *wq->db = cpu_to_be32(pc);
+
+ /* ensure doorbell record is visible to device before ringing the
+ * doorbell
+ */
+ wmb();
+
+ mlx5_write64((__be32 *)ctrl, uar_map);
+}
+
+static inline bool mlx5e_transport_inline_tx_wqe(struct mlx5e_tx_wqe *wqe)
+{
+ return !!wqe->ctrl.tisn;
+}
+
+static inline void mlx5e_cq_arm(struct mlx5e_cq *cq)
+{
+ struct mlx5_core_cq *mcq;
+
+ mcq = &cq->mcq;
+ mlx5_cq_arm(mcq, MLX5_CQ_DB_REQ_NOT, mcq->uar->map, cq->wq.cc);
+}
+
+static inline struct mlx5e_sq_dma *
+mlx5e_dma_get(struct mlx5e_txqsq *sq, u32 i)
+{
+ return &sq->db.dma_fifo[i & sq->dma_fifo_mask];
+}
+
+static inline void
+mlx5e_dma_push(struct mlx5e_txqsq *sq, dma_addr_t addr, u32 size,
+ enum mlx5e_dma_map_type map_type)
+{
+ struct mlx5e_sq_dma *dma = mlx5e_dma_get(sq, sq->dma_fifo_pc++);
+
+ dma->addr = addr;
+ dma->size = size;
+ dma->type = map_type;
+}
+
+static inline void
+mlx5e_tx_dma_unmap(struct device *pdev, struct mlx5e_sq_dma *dma)
+{
+ switch (dma->type) {
+ case MLX5E_DMA_MAP_SINGLE:
+ dma_unmap_single(pdev, dma->addr, dma->size, DMA_TO_DEVICE);
+ break;
+ case MLX5E_DMA_MAP_PAGE:
+ dma_unmap_page(pdev, dma->addr, dma->size, DMA_TO_DEVICE);
+ break;
+ default:
+ WARN_ONCE(true, "mlx5e_tx_dma_unmap unknown DMA type!\n");
+ }
+}
+
+/* SW parser related functions */
+
+struct mlx5e_swp_spec {
+ __be16 l3_proto;
+ u8 l4_proto;
+ u8 is_tun;
+ __be16 tun_l3_proto;
+ u8 tun_l4_proto;
+};
+
+static inline void
+mlx5e_set_eseg_swp(struct sk_buff *skb, struct mlx5_wqe_eth_seg *eseg,
+ struct mlx5e_swp_spec *swp_spec)
+{
+ /* SWP offsets are in 2-bytes words */
+ eseg->swp_outer_l3_offset = skb_network_offset(skb) / 2;
+ if (swp_spec->l3_proto == htons(ETH_P_IPV6))
+ eseg->swp_flags |= MLX5_ETH_WQE_SWP_OUTER_L3_IPV6;
+ if (swp_spec->l4_proto) {
+ eseg->swp_outer_l4_offset = skb_transport_offset(skb) / 2;
+ if (swp_spec->l4_proto == IPPROTO_UDP)
+ eseg->swp_flags |= MLX5_ETH_WQE_SWP_OUTER_L4_UDP;
+ }
+
+ if (swp_spec->is_tun) {
+ eseg->swp_inner_l3_offset = skb_inner_network_offset(skb) / 2;
+ if (swp_spec->tun_l3_proto == htons(ETH_P_IPV6))
+ eseg->swp_flags |= MLX5_ETH_WQE_SWP_INNER_L3_IPV6;
+ } else { /* typically for ipsec when xfrm mode != XFRM_MODE_TUNNEL */
+ eseg->swp_inner_l3_offset = skb_network_offset(skb) / 2;
+ if (swp_spec->l3_proto == htons(ETH_P_IPV6))
+ eseg->swp_flags |= MLX5_ETH_WQE_SWP_INNER_L3_IPV6;
+ }
+ switch (swp_spec->tun_l4_proto) {
+ case IPPROTO_UDP:
+ eseg->swp_flags |= MLX5_ETH_WQE_SWP_INNER_L4_UDP;
+ /* fall through */
+ case IPPROTO_TCP:
+ eseg->swp_inner_l4_offset = skb_inner_transport_offset(skb) / 2;
+ break;
+ }
+}
+
+#endif
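The helpers above encode the doorbell protocol: mlx5e_notify_hw() issues dma_wmb() before updating the doorbell record and wmb() before the MMIO write, so the device never observes the doorbell ahead of the WQE. A minimal sketch of posting a fenced NOP and ringing the doorbell using only these helpers (assumes a valid struct mlx5e_txqsq as defined in en.h; illustrative, not part of the driver):

static inline void example_flush_sq(struct mlx5e_txqsq *sq)
{
        struct mlx5_wq_cyc *wq = &sq->wq;
        struct mlx5e_tx_wqe *nop;

        if (!mlx5e_wqc_has_room_for(wq, sq->cc, sq->pc, 1))
                return;                         /* SQ is full */

        nop = mlx5e_post_nop_fence(wq, sq->sqn, &sq->pc);

        /* WQE and doorbell record become visible before the MMIO doorbell */
        mlx5e_notify_hw(wq, sq->pc, sq->uar_map, &nop->ctrl);
}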
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
index eb8ef78e5626..b0b982cf69bb 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
@@ -31,11 +31,13 @@
*/
#include <linux/bpf_trace.h>
+#include <net/xdp_sock.h>
#include "en/xdp.h"
+#include "en/params.h"
-int mlx5e_xdp_max_mtu(struct mlx5e_params *params)
+int mlx5e_xdp_max_mtu(struct mlx5e_params *params, struct mlx5e_xsk_param *xsk)
{
- int hr = NET_IP_ALIGN + XDP_PACKET_HEADROOM;
+ int hr = mlx5e_get_linear_rq_headroom(params, xsk);
/* Let S := SKB_DATA_ALIGN(sizeof(struct skb_shared_info)).
* The condition checked in mlx5e_rx_is_linear_skb is:
@@ -54,25 +56,70 @@ int mlx5e_xdp_max_mtu(struct mlx5e_params *params)
}
static inline bool
-mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_dma_info *di,
- struct xdp_buff *xdp)
+mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq,
+ struct mlx5e_dma_info *di, struct xdp_buff *xdp)
{
+ struct mlx5e_xdp_xmit_data xdptxd;
struct mlx5e_xdp_info xdpi;
+ struct xdp_frame *xdpf;
+ dma_addr_t dma_addr;
- xdpi.xdpf = convert_to_xdp_frame(xdp);
- if (unlikely(!xdpi.xdpf))
+ xdpf = convert_to_xdp_frame(xdp);
+ if (unlikely(!xdpf))
return false;
- xdpi.dma_addr = di->addr + (xdpi.xdpf->data - (void *)xdpi.xdpf);
- dma_sync_single_for_device(sq->pdev, xdpi.dma_addr,
- xdpi.xdpf->len, PCI_DMA_TODEVICE);
- xdpi.di = *di;
- return sq->xmit_xdp_frame(sq, &xdpi);
+ xdptxd.data = xdpf->data;
+ xdptxd.len = xdpf->len;
+
+ if (xdp->rxq->mem.type == MEM_TYPE_ZERO_COPY) {
+ /* The xdp_buff was in the UMEM and was copied into a newly
+ * allocated page. The UMEM page was returned via the ZCA, and
+ * this new page has to be mapped at this point and has to be
+ * unmapped and returned via xdp_return_frame on completion.
+ */
+
+ /* Prevent double recycling of the UMEM page. Even in case this
+ * function returns false, the xdp_buff shouldn't be recycled,
+ * as it was already done in xdp_convert_zc_to_xdp_frame.
+ */
+ __set_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags); /* non-atomic */
+
+ xdpi.mode = MLX5E_XDP_XMIT_MODE_FRAME;
+
+ dma_addr = dma_map_single(sq->pdev, xdptxd.data, xdptxd.len,
+ DMA_TO_DEVICE);
+ if (dma_mapping_error(sq->pdev, dma_addr)) {
+ xdp_return_frame(xdpf);
+ return false;
+ }
+
+ xdptxd.dma_addr = dma_addr;
+ xdpi.frame.xdpf = xdpf;
+ xdpi.frame.dma_addr = dma_addr;
+ } else {
+ /* Driver assumes that convert_to_xdp_frame returns an xdp_frame
+ * that points to the same memory region as the original
+ * xdp_buff. It allows to map the memory only once and to use
+ * the DMA_BIDIRECTIONAL mode.
+ */
+
+ xdpi.mode = MLX5E_XDP_XMIT_MODE_PAGE;
+
+ dma_addr = di->addr + (xdpf->data - (void *)xdpf);
+ dma_sync_single_for_device(sq->pdev, dma_addr, xdptxd.len,
+ DMA_TO_DEVICE);
+
+ xdptxd.dma_addr = dma_addr;
+ xdpi.page.rq = rq;
+ xdpi.page.di = *di;
+ }
+
+ return sq->xmit_xdp_frame(sq, &xdptxd, &xdpi, 0);
}
/* returns true if packet was consumed by xdp */
bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct mlx5e_dma_info *di,
- void *va, u16 *rx_headroom, u32 *len)
+ void *va, u16 *rx_headroom, u32 *len, bool xsk)
{
struct bpf_prog *prog = READ_ONCE(rq->xdp_prog);
struct xdp_buff xdp;
@@ -86,16 +133,20 @@ bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct mlx5e_dma_info *di,
xdp_set_data_meta_invalid(&xdp);
xdp.data_end = xdp.data + *len;
xdp.data_hard_start = va;
+ if (xsk)
+ xdp.handle = di->xsk.handle;
xdp.rxq = &rq->xdp_rxq;
act = bpf_prog_run_xdp(prog, &xdp);
+ if (xsk)
+ xdp.handle += xdp.data - xdp.data_hard_start;
switch (act) {
case XDP_PASS:
*rx_headroom = xdp.data - xdp.data_hard_start;
*len = xdp.data_end - xdp.data;
return false;
case XDP_TX:
- if (unlikely(!mlx5e_xmit_xdp_buff(&rq->xdpsq, di, &xdp)))
+ if (unlikely(!mlx5e_xmit_xdp_buff(rq->xdpsq, rq, di, &xdp)))
goto xdp_abort;
__set_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags); /* non-atomic */
return true;
@@ -106,7 +157,8 @@ bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct mlx5e_dma_info *di,
goto xdp_abort;
__set_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags);
__set_bit(MLX5E_RQ_FLAG_XDP_REDIRECT, rq->flags);
- mlx5e_page_dma_unmap(rq, di);
+ if (!xsk)
+ mlx5e_page_dma_unmap(rq, di);
rq->stats->xdp_redirect++;
return true;
default:
@@ -160,7 +212,7 @@ static void mlx5e_xdp_mpwqe_session_start(struct mlx5e_xdpsq *sq)
stats->mpwqe++;
}
-static void mlx5e_xdp_mpwqe_complete(struct mlx5e_xdpsq *sq)
+void mlx5e_xdp_mpwqe_complete(struct mlx5e_xdpsq *sq)
{
struct mlx5_wq_cyc *wq = &sq->wq;
struct mlx5e_xdp_mpwqe *session = &sq->mpwqe;
@@ -183,32 +235,55 @@ static void mlx5e_xdp_mpwqe_complete(struct mlx5e_xdpsq *sq)
session->wqe = NULL; /* Close session */
}
+enum {
+ MLX5E_XDP_CHECK_OK = 1,
+ MLX5E_XDP_CHECK_START_MPWQE = 2,
+};
+
+static int mlx5e_xmit_xdp_frame_check_mpwqe(struct mlx5e_xdpsq *sq)
+{
+ if (unlikely(!sq->mpwqe.wqe)) {
+ if (unlikely(!mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc,
+ MLX5_SEND_WQE_MAX_WQEBBS))) {
+ /* SQ is full, ring doorbell */
+ mlx5e_xmit_xdp_doorbell(sq);
+ sq->stats->full++;
+ return -EBUSY;
+ }
+
+ return MLX5E_XDP_CHECK_START_MPWQE;
+ }
+
+ return MLX5E_XDP_CHECK_OK;
+}
+
static bool mlx5e_xmit_xdp_frame_mpwqe(struct mlx5e_xdpsq *sq,
- struct mlx5e_xdp_info *xdpi)
+ struct mlx5e_xdp_xmit_data *xdptxd,
+ struct mlx5e_xdp_info *xdpi,
+ int check_result)
{
struct mlx5e_xdp_mpwqe *session = &sq->mpwqe;
struct mlx5e_xdpsq_stats *stats = sq->stats;
- struct xdp_frame *xdpf = xdpi->xdpf;
-
- if (unlikely(sq->hw_mtu < xdpf->len)) {
+ if (unlikely(xdptxd->len > sq->hw_mtu)) {
stats->err++;
return false;
}
- if (unlikely(!session->wqe)) {
- if (unlikely(!mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc,
- MLX5_SEND_WQE_MAX_WQEBBS))) {
- /* SQ is full, ring doorbell */
- mlx5e_xmit_xdp_doorbell(sq);
- stats->full++;
- return false;
- }
+ if (!check_result)
+ check_result = mlx5e_xmit_xdp_frame_check_mpwqe(sq);
+ if (unlikely(check_result < 0))
+ return false;
+ if (check_result == MLX5E_XDP_CHECK_START_MPWQE) {
+ /* Start the session when nothing can fail, so it's guaranteed
+ * that if there is an active session, it has at least one dseg,
+ * and it's safe to complete it at any time.
+ */
mlx5e_xdp_mpwqe_session_start(sq);
}
- mlx5e_xdp_mpwqe_add_dseg(sq, xdpi, stats);
+ mlx5e_xdp_mpwqe_add_dseg(sq, xdptxd, stats);
if (unlikely(session->complete ||
session->ds_count == session->max_ds_count))
@@ -219,7 +294,22 @@ static bool mlx5e_xmit_xdp_frame_mpwqe(struct mlx5e_xdpsq *sq,
return true;
}
-static bool mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xdp_info *xdpi)
+static int mlx5e_xmit_xdp_frame_check(struct mlx5e_xdpsq *sq)
+{
+ if (unlikely(!mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, 1))) {
+ /* SQ is full, ring doorbell */
+ mlx5e_xmit_xdp_doorbell(sq);
+ sq->stats->full++;
+ return -EBUSY;
+ }
+
+ return MLX5E_XDP_CHECK_OK;
+}
+
+static bool mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq,
+ struct mlx5e_xdp_xmit_data *xdptxd,
+ struct mlx5e_xdp_info *xdpi,
+ int check_result)
{
struct mlx5_wq_cyc *wq = &sq->wq;
u16 pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
@@ -229,9 +319,8 @@ static bool mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xdp_info *
struct mlx5_wqe_eth_seg *eseg = &wqe->eth;
struct mlx5_wqe_data_seg *dseg = wqe->data;
- struct xdp_frame *xdpf = xdpi->xdpf;
- dma_addr_t dma_addr = xdpi->dma_addr;
- unsigned int dma_len = xdpf->len;
+ dma_addr_t dma_addr = xdptxd->dma_addr;
+ u32 dma_len = xdptxd->len;
struct mlx5e_xdpsq_stats *stats = sq->stats;
@@ -242,18 +331,16 @@ static bool mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xdp_info *
return false;
}
- if (unlikely(!mlx5e_wqc_has_room_for(wq, sq->cc, sq->pc, 1))) {
- /* SQ is full, ring doorbell */
- mlx5e_xmit_xdp_doorbell(sq);
- stats->full++;
+ if (!check_result)
+ check_result = mlx5e_xmit_xdp_frame_check(sq);
+ if (unlikely(check_result < 0))
return false;
- }
cseg->fm_ce_se = 0;
/* copy the inline part if required */
if (sq->min_inline_mode != MLX5_INLINE_MODE_NONE) {
- memcpy(eseg->inline_hdr.start, xdpf->data, MLX5E_XDP_MIN_INLINE);
+ memcpy(eseg->inline_hdr.start, xdptxd->data, MLX5E_XDP_MIN_INLINE);
eseg->inline_hdr.sz = cpu_to_be16(MLX5E_XDP_MIN_INLINE);
dma_len -= MLX5E_XDP_MIN_INLINE;
dma_addr += MLX5E_XDP_MIN_INLINE;
@@ -277,7 +364,7 @@ static bool mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xdp_info *
static void mlx5e_free_xdpsq_desc(struct mlx5e_xdpsq *sq,
struct mlx5e_xdp_wqe_info *wi,
- struct mlx5e_rq *rq,
+ u32 *xsk_frames,
bool recycle)
{
struct mlx5e_xdp_info_fifo *xdpi_fifo = &sq->db.xdpi_fifo;
@@ -286,22 +373,32 @@ static void mlx5e_free_xdpsq_desc(struct mlx5e_xdpsq *sq,
for (i = 0; i < wi->num_pkts; i++) {
struct mlx5e_xdp_info xdpi = mlx5e_xdpi_fifo_pop(xdpi_fifo);
- if (rq) {
- /* XDP_TX */
- mlx5e_page_release(rq, &xdpi.di, recycle);
- } else {
- /* XDP_REDIRECT */
- dma_unmap_single(sq->pdev, xdpi.dma_addr,
- xdpi.xdpf->len, DMA_TO_DEVICE);
- xdp_return_frame(xdpi.xdpf);
+ switch (xdpi.mode) {
+ case MLX5E_XDP_XMIT_MODE_FRAME:
+ /* XDP_TX from the XSK RQ and XDP_REDIRECT */
+ dma_unmap_single(sq->pdev, xdpi.frame.dma_addr,
+ xdpi.frame.xdpf->len, DMA_TO_DEVICE);
+ xdp_return_frame(xdpi.frame.xdpf);
+ break;
+ case MLX5E_XDP_XMIT_MODE_PAGE:
+ /* XDP_TX from the regular RQ */
+ mlx5e_page_release_dynamic(xdpi.page.rq, &xdpi.page.di, recycle);
+ break;
+ case MLX5E_XDP_XMIT_MODE_XSK:
+ /* AF_XDP send */
+ (*xsk_frames)++;
+ break;
+ default:
+ WARN_ON_ONCE(true);
}
}
}
-bool mlx5e_poll_xdpsq_cq(struct mlx5e_cq *cq, struct mlx5e_rq *rq)
+bool mlx5e_poll_xdpsq_cq(struct mlx5e_cq *cq)
{
struct mlx5e_xdpsq *sq;
struct mlx5_cqe64 *cqe;
+ u32 xsk_frames = 0;
u16 sqcc;
int i;
@@ -343,10 +440,13 @@ bool mlx5e_poll_xdpsq_cq(struct mlx5e_cq *cq, struct mlx5e_rq *rq)
sqcc += wi->num_wqebbs;
- mlx5e_free_xdpsq_desc(sq, wi, rq, true);
+ mlx5e_free_xdpsq_desc(sq, wi, &xsk_frames, true);
} while (!last_wqe);
} while ((++i < MLX5E_TX_CQ_POLL_BUDGET) && (cqe = mlx5_cqwq_get_cqe(&cq->wq)));
+ if (xsk_frames)
+ xsk_umem_complete_tx(sq->umem, xsk_frames);
+
sq->stats->cqes += i;
mlx5_cqwq_update_db_record(&cq->wq);
@@ -358,8 +458,10 @@ bool mlx5e_poll_xdpsq_cq(struct mlx5e_cq *cq, struct mlx5e_rq *rq)
return (i == MLX5E_TX_CQ_POLL_BUDGET);
}
-void mlx5e_free_xdpsq_descs(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq)
+void mlx5e_free_xdpsq_descs(struct mlx5e_xdpsq *sq)
{
+ u32 xsk_frames = 0;
+
while (sq->cc != sq->pc) {
struct mlx5e_xdp_wqe_info *wi;
u16 ci;
@@ -369,8 +471,11 @@ void mlx5e_free_xdpsq_descs(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq)
sq->cc += wi->num_wqebbs;
- mlx5e_free_xdpsq_desc(sq, wi, rq, false);
+ mlx5e_free_xdpsq_desc(sq, wi, &xsk_frames, false);
}
+
+ if (xsk_frames)
+ xsk_umem_complete_tx(sq->umem, xsk_frames);
}
int mlx5e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
@@ -398,21 +503,27 @@ int mlx5e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
for (i = 0; i < n; i++) {
struct xdp_frame *xdpf = frames[i];
+ struct mlx5e_xdp_xmit_data xdptxd;
struct mlx5e_xdp_info xdpi;
- xdpi.dma_addr = dma_map_single(sq->pdev, xdpf->data, xdpf->len,
- DMA_TO_DEVICE);
- if (unlikely(dma_mapping_error(sq->pdev, xdpi.dma_addr))) {
+ xdptxd.data = xdpf->data;
+ xdptxd.len = xdpf->len;
+ xdptxd.dma_addr = dma_map_single(sq->pdev, xdptxd.data,
+ xdptxd.len, DMA_TO_DEVICE);
+
+ if (unlikely(dma_mapping_error(sq->pdev, xdptxd.dma_addr))) {
xdp_return_frame_rx_napi(xdpf);
drops++;
continue;
}
- xdpi.xdpf = xdpf;
+ xdpi.mode = MLX5E_XDP_XMIT_MODE_FRAME;
+ xdpi.frame.xdpf = xdpf;
+ xdpi.frame.dma_addr = xdptxd.dma_addr;
- if (unlikely(!sq->xmit_xdp_frame(sq, &xdpi))) {
- dma_unmap_single(sq->pdev, xdpi.dma_addr,
- xdpf->len, DMA_TO_DEVICE);
+ if (unlikely(!sq->xmit_xdp_frame(sq, &xdptxd, &xdpi, 0))) {
+ dma_unmap_single(sq->pdev, xdptxd.dma_addr,
+ xdptxd.len, DMA_TO_DEVICE);
xdp_return_frame_rx_napi(xdpf);
drops++;
}
@@ -429,7 +540,7 @@ int mlx5e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
void mlx5e_xdp_rx_poll_complete(struct mlx5e_rq *rq)
{
- struct mlx5e_xdpsq *xdpsq = &rq->xdpsq;
+ struct mlx5e_xdpsq *xdpsq = rq->xdpsq;
if (xdpsq->mpwqe.wqe)
mlx5e_xdp_mpwqe_complete(xdpsq);
@@ -444,6 +555,8 @@ void mlx5e_xdp_rx_poll_complete(struct mlx5e_rq *rq)
void mlx5e_set_xmit_fp(struct mlx5e_xdpsq *sq, bool is_mpw)
{
+ sq->xmit_xdp_frame_check = is_mpw ?
+ mlx5e_xmit_xdp_frame_check_mpwqe : mlx5e_xmit_xdp_frame_check;
sq->xmit_xdp_frame = is_mpw ?
mlx5e_xmit_xdp_frame_mpwqe : mlx5e_xmit_xdp_frame;
}
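The split between the *_check() helpers and the xmit functions lets a batching sender (for example the new AF_XDP TX path) run the ring-space check once per frame and feed the positive result straight into the xmit call, while passing 0 keeps the old check-inside-xmit behaviour. A hedged sketch of that calling convention (fill_xdptxd_from_desc() is a hypothetical helper, not part of the driver):

static bool example_xmit_one(struct mlx5e_xdpsq *sq)
{
        struct mlx5e_xdp_xmit_data xdptxd;
        struct mlx5e_xdp_info xdpi = { .mode = MLX5E_XDP_XMIT_MODE_XSK };
        int check;

        check = sq->xmit_xdp_frame_check(sq);   /* OK, START_MPWQE or -EBUSY */
        if (unlikely(check < 0))
                return false;                   /* SQ full, doorbell already rung */

        fill_xdptxd_from_desc(sq, &xdptxd);     /* hypothetical: fills data, len, dma_addr */

        return sq->xmit_xdp_frame(sq, &xdptxd, &xdpi, check);
}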
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
index 8b537a4b0840..b90923932668 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
@@ -33,17 +33,20 @@
#define __MLX5_EN_XDP_H__
#include "en.h"
+#include "en/txrx.h"
#define MLX5E_XDP_MIN_INLINE (ETH_HLEN + VLAN_HLEN)
#define MLX5E_XDP_TX_EMPTY_DS_COUNT \
(sizeof(struct mlx5e_tx_wqe) / MLX5_SEND_WQE_DS)
#define MLX5E_XDP_TX_DS_COUNT (MLX5E_XDP_TX_EMPTY_DS_COUNT + 1 /* SG DS */)
-int mlx5e_xdp_max_mtu(struct mlx5e_params *params);
+struct mlx5e_xsk_param;
+int mlx5e_xdp_max_mtu(struct mlx5e_params *params, struct mlx5e_xsk_param *xsk);
bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct mlx5e_dma_info *di,
- void *va, u16 *rx_headroom, u32 *len);
-bool mlx5e_poll_xdpsq_cq(struct mlx5e_cq *cq, struct mlx5e_rq *rq);
-void mlx5e_free_xdpsq_descs(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq);
+ void *va, u16 *rx_headroom, u32 *len, bool xsk);
+void mlx5e_xdp_mpwqe_complete(struct mlx5e_xdpsq *sq);
+bool mlx5e_poll_xdpsq_cq(struct mlx5e_cq *cq);
+void mlx5e_free_xdpsq_descs(struct mlx5e_xdpsq *sq);
void mlx5e_set_xmit_fp(struct mlx5e_xdpsq *sq, bool is_mpw);
void mlx5e_xdp_rx_poll_complete(struct mlx5e_rq *rq);
int mlx5e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
@@ -66,6 +69,21 @@ static inline bool mlx5e_xdp_tx_is_enabled(struct mlx5e_priv *priv)
return test_bit(MLX5E_STATE_XDP_TX_ENABLED, &priv->state);
}
+static inline void mlx5e_xdp_set_open(struct mlx5e_priv *priv)
+{
+ set_bit(MLX5E_STATE_XDP_OPEN, &priv->state);
+}
+
+static inline void mlx5e_xdp_set_closed(struct mlx5e_priv *priv)
+{
+ clear_bit(MLX5E_STATE_XDP_OPEN, &priv->state);
+}
+
+static inline bool mlx5e_xdp_is_open(struct mlx5e_priv *priv)
+{
+ return test_bit(MLX5E_STATE_XDP_OPEN, &priv->state);
+}
+
static inline void mlx5e_xmit_xdp_doorbell(struct mlx5e_xdpsq *sq)
{
if (sq->doorbell_cseg) {
@@ -97,15 +115,14 @@ static inline void mlx5e_xdp_update_inline_state(struct mlx5e_xdpsq *sq)
}
static inline void
-mlx5e_xdp_mpwqe_add_dseg(struct mlx5e_xdpsq *sq, struct mlx5e_xdp_info *xdpi,
+mlx5e_xdp_mpwqe_add_dseg(struct mlx5e_xdpsq *sq,
+ struct mlx5e_xdp_xmit_data *xdptxd,
struct mlx5e_xdpsq_stats *stats)
{
struct mlx5e_xdp_mpwqe *session = &sq->mpwqe;
- dma_addr_t dma_addr = xdpi->dma_addr;
- struct xdp_frame *xdpf = xdpi->xdpf;
struct mlx5_wqe_data_seg *dseg =
(struct mlx5_wqe_data_seg *)session->wqe + session->ds_count;
- u16 dma_len = xdpf->len;
+ u32 dma_len = xdptxd->len;
session->pkt_count++;
@@ -124,7 +141,7 @@ mlx5e_xdp_mpwqe_add_dseg(struct mlx5e_xdpsq *sq, struct mlx5e_xdp_info *xdpi,
}
inline_dseg->byte_count = cpu_to_be32(dma_len | MLX5_INLINE_SEG);
- memcpy(inline_dseg->data, xdpf->data, dma_len);
+ memcpy(inline_dseg->data, xdptxd->data, dma_len);
session->ds_count += ds_cnt;
stats->inlnw++;
@@ -132,7 +149,7 @@ mlx5e_xdp_mpwqe_add_dseg(struct mlx5e_xdpsq *sq, struct mlx5e_xdp_info *xdpi,
}
no_inline:
- dseg->addr = cpu_to_be64(dma_addr);
+ dseg->addr = cpu_to_be64(xdptxd->dma_addr);
dseg->byte_count = cpu_to_be32(dma_len);
dseg->lkey = sq->mkey_be;
session->ds_count++;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/Makefile b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/Makefile
new file mode 100644
index 000000000000..5ee42991900a
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/Makefile
@@ -0,0 +1 @@
+subdir-ccflags-y += -I$(src)/../..
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
new file mode 100644
index 000000000000..6a55573ec8f2
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
@@ -0,0 +1,192 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+/* Copyright (c) 2019 Mellanox Technologies. */
+
+#include "rx.h"
+#include "en/xdp.h"
+#include <net/xdp_sock.h>
+
+/* RX data path */
+
+bool mlx5e_xsk_pages_enough_umem(struct mlx5e_rq *rq, int count)
+{
+ /* Check in advance that we have enough frames, instead of allocating
+ * one-by-one, failing and moving frames to the Reuse Ring.
+ */
+ return xsk_umem_has_addrs_rq(rq->umem, count);
+}
+
+int mlx5e_xsk_page_alloc_umem(struct mlx5e_rq *rq,
+ struct mlx5e_dma_info *dma_info)
+{
+ struct xdp_umem *umem = rq->umem;
+ u64 handle;
+
+ if (!xsk_umem_peek_addr_rq(umem, &handle))
+ return -ENOMEM;
+
+ dma_info->xsk.handle = handle + rq->buff.umem_headroom;
+ dma_info->xsk.data = xdp_umem_get_data(umem, dma_info->xsk.handle);
+
+ /* No need to add headroom to the DMA address. In striding RQ case, we
+ * just provide pages for UMR, and headroom is counted at the setup
+ * stage when creating a WQE. In non-striding RQ case, headroom is
+ * accounted in mlx5e_alloc_rx_wqe.
+ */
+ dma_info->addr = xdp_umem_get_dma(umem, handle);
+
+ xsk_umem_discard_addr_rq(umem);
+
+ dma_sync_single_for_device(rq->pdev, dma_info->addr, PAGE_SIZE,
+ DMA_BIDIRECTIONAL);
+
+ return 0;
+}
+
+static inline void mlx5e_xsk_recycle_frame(struct mlx5e_rq *rq, u64 handle)
+{
+ xsk_umem_fq_reuse(rq->umem, handle & rq->umem->chunk_mask);
+}
+
+/* XSKRQ uses pages from UMEM, they must not be released. They are returned to
+ * the userspace if possible, and if not, this function is called to reuse them
+ * in the driver.
+ */
+void mlx5e_xsk_page_release(struct mlx5e_rq *rq,
+ struct mlx5e_dma_info *dma_info)
+{
+ mlx5e_xsk_recycle_frame(rq, dma_info->xsk.handle);
+}
+
+/* Return a frame back to the hardware to fill in again. It is used by XDP when
+ * the XDP program returns XDP_TX or XDP_REDIRECT not to an XSKMAP.
+ */
+void mlx5e_xsk_zca_free(struct zero_copy_allocator *zca, unsigned long handle)
+{
+ struct mlx5e_rq *rq = container_of(zca, struct mlx5e_rq, zca);
+
+ mlx5e_xsk_recycle_frame(rq, handle);
+}
+
+static struct sk_buff *mlx5e_xsk_construct_skb(struct mlx5e_rq *rq, void *data,
+ u32 cqe_bcnt)
+{
+ struct sk_buff *skb;
+
+ skb = napi_alloc_skb(rq->cq.napi, cqe_bcnt);
+ if (unlikely(!skb)) {
+ rq->stats->buff_alloc_err++;
+ return NULL;
+ }
+
+ skb_put_data(skb, data, cqe_bcnt);
+
+ return skb;
+}
+
+struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
+ struct mlx5e_mpw_info *wi,
+ u16 cqe_bcnt,
+ u32 head_offset,
+ u32 page_idx)
+{
+ struct mlx5e_dma_info *di = &wi->umr.dma_info[page_idx];
+ u16 rx_headroom = rq->buff.headroom - rq->buff.umem_headroom;
+ u32 cqe_bcnt32 = cqe_bcnt;
+ void *va, *data;
+ u32 frag_size;
+ bool consumed;
+
+ /* Check packet size. Note LRO doesn't use linear SKB */
+ if (unlikely(cqe_bcnt > rq->hw_mtu)) {
+ rq->stats->oversize_pkts_sw_drop++;
+ return NULL;
+ }
+
+ /* head_offset is not used in this function, because di->xsk.data and
+ * di->addr point directly to the necessary place. Furthermore, in the
+ * current implementation, one page = one packet = one frame, so
+ * head_offset should always be 0.
+ */
+ WARN_ON_ONCE(head_offset);
+
+ va = di->xsk.data;
+ data = va + rx_headroom;
+ frag_size = rq->buff.headroom + cqe_bcnt32;
+
+ dma_sync_single_for_cpu(rq->pdev, di->addr, frag_size, DMA_BIDIRECTIONAL);
+ prefetch(data);
+
+ rcu_read_lock();
+ consumed = mlx5e_xdp_handle(rq, di, va, &rx_headroom, &cqe_bcnt32, true);
+ rcu_read_unlock();
+
+ /* Possible flows:
+ * - XDP_REDIRECT to XSKMAP:
+ * The page is owned by the userspace from now.
+ * - XDP_TX and other XDP_REDIRECTs:
+ * The page was returned by ZCA and recycled.
+ * - XDP_DROP:
+ * Recycle the page.
+ * - XDP_PASS:
+ * Allocate an SKB, copy the data and recycle the page.
+ *
+ * Pages to be recycled go to the Reuse Ring on MPWQE deallocation. Its
+ * size is the same as the Driver RX Ring's size, and pages for WQEs are
+ * allocated first from the Reuse Ring, so it has enough space.
+ */
+
+ if (likely(consumed)) {
+ if (likely(__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)))
+ __set_bit(page_idx, wi->xdp_xmit_bitmap); /* non-atomic */
+ return NULL; /* page/packet was consumed by XDP */
+ }
+
+ /* XDP_PASS: copy the data from the UMEM to a new SKB and reuse the
+ * frame. On SKB allocation failure, NULL is returned.
+ */
+ return mlx5e_xsk_construct_skb(rq, data, cqe_bcnt32);
+}
+
+struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq,
+ struct mlx5_cqe64 *cqe,
+ struct mlx5e_wqe_frag_info *wi,
+ u32 cqe_bcnt)
+{
+ struct mlx5e_dma_info *di = wi->di;
+ u16 rx_headroom = rq->buff.headroom - rq->buff.umem_headroom;
+ void *va, *data;
+ bool consumed;
+ u32 frag_size;
+
+ /* wi->offset is not used in this function, because di->xsk.data and
+ * di->addr point directly to the necessary place. Furthermore, in the
+ * current implementation, one page = one packet = one frame, so
+ * wi->offset should always be 0.
+ */
+ WARN_ON_ONCE(wi->offset);
+
+ va = di->xsk.data;
+ data = va + rx_headroom;
+ frag_size = rq->buff.headroom + cqe_bcnt;
+
+ dma_sync_single_for_cpu(rq->pdev, di->addr, frag_size, DMA_BIDIRECTIONAL);
+ prefetch(data);
+
+ if (unlikely(get_cqe_opcode(cqe) != MLX5_CQE_RESP_SEND)) {
+ rq->stats->wqe_err++;
+ return NULL;
+ }
+
+ rcu_read_lock();
+ consumed = mlx5e_xdp_handle(rq, di, va, &rx_headroom, &cqe_bcnt, true);
+ rcu_read_unlock();
+
+ if (likely(consumed))
+ return NULL; /* page/packet was consumed by XDP */
+
+ /* XDP_PASS: copy the data from the UMEM to a new SKB. The frame reuse
+ * will be handled by mlx5e_put_rx_frag.
+ * On SKB allocation failure, NULL is returned.
+ */
+ return mlx5e_xsk_construct_skb(rq, data, cqe_bcnt);
+}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h
new file mode 100644
index 000000000000..307b923a1361
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
+/* Copyright (c) 2019 Mellanox Technologies. */
+
+#ifndef __MLX5_EN_XSK_RX_H__
+#define __MLX5_EN_XSK_RX_H__
+
+#include "en.h"
+
+/* RX data path */
+
+bool mlx5e_xsk_pages_enough_umem(struct mlx5e_rq *rq, int count);
+int mlx5e_xsk_page_alloc_umem(struct mlx5e_rq *rq,
+ struct mlx5e_dma_info *dma_info);
+void mlx5e_xsk_page_release(struct mlx5e_rq *rq,
+ struct mlx5e_dma_info *dma_info);
+void mlx5e_xsk_zca_free(struct zero_copy_allocator *zca, unsigned long handle);
+struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
+ struct mlx5e_mpw_info *wi,
+ u16 cqe_bcnt,
+ u32 head_offset,
+ u32 page_idx);
+struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq,
+ struct mlx5_cqe64 *cqe,
+ struct mlx5e_wqe_frag_info *wi,
+ u32 cqe_bcnt);
+
+#endif /* __MLX5_EN_XSK_RX_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
new file mode 100644
index 000000000000..aaffa6f68dc0
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
@@ -0,0 +1,223 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+/* Copyright (c) 2019 Mellanox Technologies. */
+
+#include "setup.h"
+#include "en/params.h"
+
+bool mlx5e_validate_xsk_param(struct mlx5e_params *params,
+ struct mlx5e_xsk_param *xsk,
+ struct mlx5_core_dev *mdev)
+{
+ /* AF_XDP doesn't support frames larger than PAGE_SIZE, and the current
+ * mlx5e XDP implementation doesn't support multiple packets per page.
+ */
+ if (xsk->chunk_size != PAGE_SIZE)
+ return false;
+
+ /* Current MTU and XSK headroom don't allow packets to fit the frames. */
+ if (mlx5e_rx_get_linear_frag_sz(params, xsk) > xsk->chunk_size)
+ return false;
+
+ /* frag_sz is different for regular and XSK RQs, so ensure that linear
+ * SKB mode is possible.
+ */
+ switch (params->rq_wq_type) {
+ case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ:
+ return mlx5e_rx_mpwqe_is_linear_skb(mdev, params, xsk);
+ default: /* MLX5_WQ_TYPE_CYCLIC */
+ return mlx5e_rx_is_linear_skb(params, xsk);
+ }
+}
+
+static void mlx5e_build_xskicosq_param(struct mlx5e_priv *priv,
+ u8 log_wq_size,
+ struct mlx5e_sq_param *param)
+{
+ void *sqc = param->sqc;
+ void *wq = MLX5_ADDR_OF(sqc, sqc, wq);
+
+ mlx5e_build_sq_param_common(priv, param);
+
+ MLX5_SET(wq, wq, log_wq_sz, log_wq_size);
+}
+
+static void mlx5e_build_xsk_cparam(struct mlx5e_priv *priv,
+ struct mlx5e_params *params,
+ struct mlx5e_xsk_param *xsk,
+ struct mlx5e_channel_param *cparam)
+{
+ const u8 xskicosq_size = MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE;
+
+ mlx5e_build_rq_param(priv, params, xsk, &cparam->rq);
+ mlx5e_build_xdpsq_param(priv, params, &cparam->xdp_sq);
+ mlx5e_build_xskicosq_param(priv, xskicosq_size, &cparam->icosq);
+ mlx5e_build_rx_cq_param(priv, params, xsk, &cparam->rx_cq);
+ mlx5e_build_tx_cq_param(priv, params, &cparam->tx_cq);
+ mlx5e_build_ico_cq_param(priv, xskicosq_size, &cparam->icosq_cq);
+}
+
+int mlx5e_open_xsk(struct mlx5e_priv *priv, struct mlx5e_params *params,
+ struct mlx5e_xsk_param *xsk, struct xdp_umem *umem,
+ struct mlx5e_channel *c)
+{
+ struct mlx5e_channel_param cparam = {};
+ struct dim_cq_moder icocq_moder = {};
+ int err;
+
+ if (!mlx5e_validate_xsk_param(params, xsk, priv->mdev))
+ return -EINVAL;
+
+ mlx5e_build_xsk_cparam(priv, params, xsk, &cparam);
+
+ err = mlx5e_open_cq(c, params->rx_cq_moderation, &cparam.rx_cq, &c->xskrq.cq);
+ if (unlikely(err))
+ return err;
+
+ err = mlx5e_open_rq(c, params, &cparam.rq, xsk, umem, &c->xskrq);
+ if (unlikely(err))
+ goto err_close_rx_cq;
+
+ err = mlx5e_open_cq(c, params->tx_cq_moderation, &cparam.tx_cq, &c->xsksq.cq);
+ if (unlikely(err))
+ goto err_close_rq;
+
+ /* Create a separate SQ, so that when the UMEM is disabled, this SQ can
+ * be closed safely and stop receiving CQEs. Otherwise, e.g. if the XDPSQ
+ * were used instead, we might run into trouble when the UMEM is disabled
+ * and then re-enabled, while the SQ keeps receiving CQEs from the old
+ * UMEM.
+ */
+ err = mlx5e_open_xdpsq(c, params, &cparam.xdp_sq, umem, &c->xsksq, true);
+ if (unlikely(err))
+ goto err_close_tx_cq;
+
+ err = mlx5e_open_cq(c, icocq_moder, &cparam.icosq_cq, &c->xskicosq.cq);
+ if (unlikely(err))
+ goto err_close_sq;
+
+ /* Create a dedicated SQ for posting NOPs whenever we need an IRQ to be
+ * triggered and NAPI to be called on the correct CPU.
+ */
+ err = mlx5e_open_icosq(c, params, &cparam.icosq, &c->xskicosq);
+ if (unlikely(err))
+ goto err_close_icocq;
+
+ spin_lock_init(&c->xskicosq_lock);
+
+ set_bit(MLX5E_CHANNEL_STATE_XSK, c->state);
+
+ return 0;
+
+err_close_icocq:
+ mlx5e_close_cq(&c->xskicosq.cq);
+
+err_close_sq:
+ mlx5e_close_xdpsq(&c->xsksq);
+
+err_close_tx_cq:
+ mlx5e_close_cq(&c->xsksq.cq);
+
+err_close_rq:
+ mlx5e_close_rq(&c->xskrq);
+
+err_close_rx_cq:
+ mlx5e_close_cq(&c->xskrq.cq);
+
+ return err;
+}
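
mlx5e_open_xsk() above follows the usual open-in-order, unwind-in-reverse error handling: each failing step jumps to a label that tears down only what has already been set up. A minimal user-space sketch of the pattern under assumed names (open_a, open_b and close_a are hypothetical, not driver functions):

#include <errno.h>
#include <stdio.h>

/* Illustrative sketch of the goto-unwind pattern: undo only the steps that
 * already succeeded, in reverse order.
 */
static int open_a(void) { return 0; }
static int open_b(void) { return -ENOMEM; }     /* simulate a failure */
static void close_a(void) { puts("close_a"); }

static int open_both(void)
{
        int err;

        err = open_a();
        if (err)
                return err;

        err = open_b();
        if (err)
                goto err_close_a;

        return 0;

err_close_a:
        close_a();
        return err;
}

int main(void)
{
        return open_both() ? 1 : 0;
}
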
+
+void mlx5e_close_xsk(struct mlx5e_channel *c)
+{
+ clear_bit(MLX5E_CHANNEL_STATE_XSK, c->state);
+ napi_synchronize(&c->napi);
+
+ mlx5e_close_rq(&c->xskrq);
+ mlx5e_close_cq(&c->xskrq.cq);
+ mlx5e_close_icosq(&c->xskicosq);
+ mlx5e_close_cq(&c->xskicosq.cq);
+ mlx5e_close_xdpsq(&c->xsksq);
+ mlx5e_close_cq(&c->xsksq.cq);
+}
+
+void mlx5e_activate_xsk(struct mlx5e_channel *c)
+{
+ set_bit(MLX5E_RQ_STATE_ENABLED, &c->xskrq.state);
+ /* TX queue is created active. */
+ mlx5e_trigger_irq(&c->xskicosq);
+}
+
+void mlx5e_deactivate_xsk(struct mlx5e_channel *c)
+{
+ mlx5e_deactivate_rq(&c->xskrq);
+ /* TX queue is disabled on close. */
+}
+
+static int mlx5e_redirect_xsk_rqt(struct mlx5e_priv *priv, u16 ix, u32 rqn)
+{
+ struct mlx5e_redirect_rqt_param direct_rrp = {
+ .is_rss = false,
+ {
+ .rqn = rqn,
+ },
+ };
+
+ u32 rqtn = priv->xsk_tir[ix].rqt.rqtn;
+
+ return mlx5e_redirect_rqt(priv, rqtn, 1, direct_rrp);
+}
+
+int mlx5e_xsk_redirect_rqt_to_channel(struct mlx5e_priv *priv, struct mlx5e_channel *c)
+{
+ return mlx5e_redirect_xsk_rqt(priv, c->ix, c->xskrq.rqn);
+}
+
+int mlx5e_xsk_redirect_rqt_to_drop(struct mlx5e_priv *priv, u16 ix)
+{
+ return mlx5e_redirect_xsk_rqt(priv, ix, priv->drop_rq.rqn);
+}
+
+int mlx5e_xsk_redirect_rqts_to_channels(struct mlx5e_priv *priv, struct mlx5e_channels *chs)
+{
+ int err, i;
+
+ if (!priv->xsk.refcnt)
+ return 0;
+
+ for (i = 0; i < chs->num; i++) {
+ struct mlx5e_channel *c = chs->c[i];
+
+ if (!test_bit(MLX5E_CHANNEL_STATE_XSK, c->state))
+ continue;
+
+ err = mlx5e_xsk_redirect_rqt_to_channel(priv, c);
+ if (unlikely(err))
+ goto err_stop;
+ }
+
+ return 0;
+
+err_stop:
+ for (i--; i >= 0; i--) {
+ if (!test_bit(MLX5E_CHANNEL_STATE_XSK, chs->c[i]->state))
+ continue;
+
+ mlx5e_xsk_redirect_rqt_to_drop(priv, i);
+ }
+
+ return err;
+}
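
The redirect loop above rolls back on partial failure: if channel i cannot be redirected, every channel that was already switched is pointed back at the drop RQ. A small user-space sketch of that rollback shape; redirect() and undo() are hypothetical stand-ins:

#include <errno.h>
#include <stdio.h>

/* Illustrative sketch: on failure at index i, undo indices i-1..0, mirroring
 * the "for (i--; i >= 0; i--)" rollback above.
 */
static int redirect(int i) { return i == 2 ? -EINVAL : 0; }
static void undo(int i) { printf("undo %d\n", i); }

static int redirect_all(int n)
{
        int err, i;

        for (i = 0; i < n; i++) {
                err = redirect(i);
                if (err)
                        goto err_rollback;
        }

        return 0;

err_rollback:
        while (i--)
                undo(i);

        return err;
}

int main(void)
{
        return redirect_all(4) ? 1 : 0;
}
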
+
+void mlx5e_xsk_redirect_rqts_to_drop(struct mlx5e_priv *priv, struct mlx5e_channels *chs)
+{
+ int i;
+
+ if (!priv->xsk.refcnt)
+ return;
+
+ for (i = 0; i < chs->num; i++) {
+ if (!test_bit(MLX5E_CHANNEL_STATE_XSK, chs->c[i]->state))
+ continue;
+
+ mlx5e_xsk_redirect_rqt_to_drop(priv, i);
+ }
+}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.h
new file mode 100644
index 000000000000..0dd11b81c046
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
+/* Copyright (c) 2019 Mellanox Technologies. */
+
+#ifndef __MLX5_EN_XSK_SETUP_H__
+#define __MLX5_EN_XSK_SETUP_H__
+
+#include "en.h"
+
+struct mlx5e_xsk_param;
+
+bool mlx5e_validate_xsk_param(struct mlx5e_params *params,
+ struct mlx5e_xsk_param *xsk,
+ struct mlx5_core_dev *mdev);
+int mlx5e_open_xsk(struct mlx5e_priv *priv, struct mlx5e_params *params,
+ struct mlx5e_xsk_param *xsk, struct xdp_umem *umem,
+ struct mlx5e_channel *c);
+void mlx5e_close_xsk(struct mlx5e_channel *c);
+void mlx5e_activate_xsk(struct mlx5e_channel *c);
+void mlx5e_deactivate_xsk(struct mlx5e_channel *c);
+int mlx5e_xsk_redirect_rqt_to_channel(struct mlx5e_priv *priv, struct mlx5e_channel *c);
+int mlx5e_xsk_redirect_rqt_to_drop(struct mlx5e_priv *priv, u16 ix);
+int mlx5e_xsk_redirect_rqts_to_channels(struct mlx5e_priv *priv, struct mlx5e_channels *chs);
+void mlx5e_xsk_redirect_rqts_to_drop(struct mlx5e_priv *priv, struct mlx5e_channels *chs);
+
+#endif /* __MLX5_EN_XSK_SETUP_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/tx.c
new file mode 100644
index 000000000000..35e188cf4ea4
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/tx.c
@@ -0,0 +1,111 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+/* Copyright (c) 2019 Mellanox Technologies. */
+
+#include "tx.h"
+#include "umem.h"
+#include "en/xdp.h"
+#include "en/params.h"
+#include <net/xdp_sock.h>
+
+int mlx5e_xsk_async_xmit(struct net_device *dev, u32 qid)
+{
+ struct mlx5e_priv *priv = netdev_priv(dev);
+ struct mlx5e_params *params = &priv->channels.params;
+ struct mlx5e_channel *c;
+ u16 ix;
+
+ if (unlikely(!mlx5e_xdp_is_open(priv)))
+ return -ENETDOWN;
+
+ if (unlikely(!mlx5e_qid_get_ch_if_in_group(params, qid, MLX5E_RQ_GROUP_XSK, &ix)))
+ return -EINVAL;
+
+ c = priv->channels.c[ix];
+
+ if (unlikely(!test_bit(MLX5E_CHANNEL_STATE_XSK, c->state)))
+ return -ENXIO;
+
+ if (!napi_if_scheduled_mark_missed(&c->napi)) {
+ spin_lock(&c->xskicosq_lock);
+ mlx5e_trigger_irq(&c->xskicosq);
+ spin_unlock(&c->xskicosq_lock);
+ }
+
+ return 0;
+}
+
+/* When TX fails (because of the size of the packet), we need to get completions
+ * in order, so post a NOP to get a CQE. Since AF_XDP doesn't distinguish
+ * between successful TX and errors, handling in mlx5e_poll_xdpsq_cq is the
+ * same.
+ */
+static void mlx5e_xsk_tx_post_err(struct mlx5e_xdpsq *sq,
+ struct mlx5e_xdp_info *xdpi)
+{
+ u16 pi = mlx5_wq_cyc_ctr2ix(&sq->wq, sq->pc);
+ struct mlx5e_xdp_wqe_info *wi = &sq->db.wqe_info[pi];
+ struct mlx5e_tx_wqe *nopwqe;
+
+ wi->num_wqebbs = 1;
+ wi->num_pkts = 1;
+
+ nopwqe = mlx5e_post_nop(&sq->wq, sq->sqn, &sq->pc);
+ mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, xdpi);
+ sq->doorbell_cseg = &nopwqe->ctrl;
+}
+
+bool mlx5e_xsk_tx(struct mlx5e_xdpsq *sq, unsigned int budget)
+{
+ struct xdp_umem *umem = sq->umem;
+ struct mlx5e_xdp_info xdpi;
+ struct mlx5e_xdp_xmit_data xdptxd;
+ bool work_done = true;
+ bool flush = false;
+
+ xdpi.mode = MLX5E_XDP_XMIT_MODE_XSK;
+
+ for (; budget; budget--) {
+ int check_result = sq->xmit_xdp_frame_check(sq);
+ struct xdp_desc desc;
+
+ if (unlikely(check_result < 0)) {
+ work_done = false;
+ break;
+ }
+
+ if (!xsk_umem_consume_tx(umem, &desc)) {
+ /* TX will get stuck until something wakes it up by
+ * triggering NAPI. Currently it's expected that the
+ * application calls sendto() if there are consumed but not yet
+ * completed frames.
+ */
+ break;
+ }
+
+ xdptxd.dma_addr = xdp_umem_get_dma(umem, desc.addr);
+ xdptxd.data = xdp_umem_get_data(umem, desc.addr);
+ xdptxd.len = desc.len;
+
+ dma_sync_single_for_device(sq->pdev, xdptxd.dma_addr,
+ xdptxd.len, DMA_BIDIRECTIONAL);
+
+ if (unlikely(!sq->xmit_xdp_frame(sq, &xdptxd, &xdpi, check_result))) {
+ if (sq->mpwqe.wqe)
+ mlx5e_xdp_mpwqe_complete(sq);
+
+ mlx5e_xsk_tx_post_err(sq, &xdpi);
+ }
+
+ flush = true;
+ }
+
+ if (flush) {
+ if (sq->mpwqe.wqe)
+ mlx5e_xdp_mpwqe_complete(sq);
+ mlx5e_xmit_xdp_doorbell(sq);
+
+ xsk_umem_consume_tx_done(umem);
+ }
+
+ return !(budget && work_done);
+}
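
mlx5e_xsk_tx() above is a budget-bounded consume loop: it pulls descriptors from the UMEM TX ring until the budget is exhausted or the ring runs dry, and rings the doorbell once for the whole batch. A user-space sketch of that loop shape; consume(), post() and flush_doorbell() are hypothetical stand-ins:

#include <stdbool.h>
#include <stdio.h>

/* Illustrative sketch: batch TX with a NAPI-style budget and a single
 * doorbell at the end instead of one per packet.
 */
static bool consume(int *desc)
{
        static int left = 3;            /* pretend three descriptors are queued */

        if (!left)
                return false;
        *desc = left--;
        return true;
}

static void post(int desc) { printf("post %d\n", desc); }
static void flush_doorbell(void) { puts("doorbell"); }

static bool tx(unsigned int budget)
{
        bool flush = false;
        int desc;

        for (; budget; budget--) {
                if (!consume(&desc))
                        break;          /* ring empty: wait for the next wakeup */
                post(desc);
                flush = true;
        }

        if (flush)
                flush_doorbell();

        return !budget;                 /* true: budget exhausted, more work may remain */
}

int main(void)
{
        return tx(8) ? 1 : 0;
}
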
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/tx.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/tx.h
new file mode 100644
index 000000000000..7add18bf78d8
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/tx.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
+/* Copyright (c) 2019 Mellanox Technologies. */
+
+#ifndef __MLX5_EN_XSK_TX_H__
+#define __MLX5_EN_XSK_TX_H__
+
+#include "en.h"
+
+/* TX data path */
+
+int mlx5e_xsk_async_xmit(struct net_device *dev, u32 qid);
+
+bool mlx5e_xsk_tx(struct mlx5e_xdpsq *sq, unsigned int budget);
+
+#endif /* __MLX5_EN_XSK_TX_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/umem.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/umem.c
new file mode 100644
index 000000000000..4baaa5788320
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/umem.c
@@ -0,0 +1,267 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+/* Copyright (c) 2019 Mellanox Technologies. */
+
+#include <net/xdp_sock.h>
+#include "umem.h"
+#include "setup.h"
+#include "en/params.h"
+
+static int mlx5e_xsk_map_umem(struct mlx5e_priv *priv,
+ struct xdp_umem *umem)
+{
+ struct device *dev = priv->mdev->device;
+ u32 i;
+
+ for (i = 0; i < umem->npgs; i++) {
+ dma_addr_t dma = dma_map_page(dev, umem->pgs[i], 0, PAGE_SIZE,
+ DMA_BIDIRECTIONAL);
+
+ if (unlikely(dma_mapping_error(dev, dma)))
+ goto err_unmap;
+ umem->pages[i].dma = dma;
+ }
+
+ return 0;
+
+err_unmap:
+ while (i--) {
+ dma_unmap_page(dev, umem->pages[i].dma, PAGE_SIZE,
+ DMA_BIDIRECTIONAL);
+ umem->pages[i].dma = 0;
+ }
+
+ return -ENOMEM;
+}
+
+static void mlx5e_xsk_unmap_umem(struct mlx5e_priv *priv,
+ struct xdp_umem *umem)
+{
+ struct device *dev = priv->mdev->device;
+ u32 i;
+
+ for (i = 0; i < umem->npgs; i++) {
+ dma_unmap_page(dev, umem->pages[i].dma, PAGE_SIZE,
+ DMA_BIDIRECTIONAL);
+ umem->pages[i].dma = 0;
+ }
+}
+
+static int mlx5e_xsk_get_umems(struct mlx5e_xsk *xsk)
+{
+ if (!xsk->umems) {
+ xsk->umems = kcalloc(MLX5E_MAX_NUM_CHANNELS,
+ sizeof(*xsk->umems), GFP_KERNEL);
+ if (unlikely(!xsk->umems))
+ return -ENOMEM;
+ }
+
+ xsk->refcnt++;
+ xsk->ever_used = true;
+
+ return 0;
+}
+
+static void mlx5e_xsk_put_umems(struct mlx5e_xsk *xsk)
+{
+ if (!--xsk->refcnt) {
+ kfree(xsk->umems);
+ xsk->umems = NULL;
+ }
+}
+
+static int mlx5e_xsk_add_umem(struct mlx5e_xsk *xsk, struct xdp_umem *umem, u16 ix)
+{
+ int err;
+
+ err = mlx5e_xsk_get_umems(xsk);
+ if (unlikely(err))
+ return err;
+
+ xsk->umems[ix] = umem;
+ return 0;
+}
+
+static void mlx5e_xsk_remove_umem(struct mlx5e_xsk *xsk, u16 ix)
+{
+ xsk->umems[ix] = NULL;
+
+ mlx5e_xsk_put_umems(xsk);
+}
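
The get/put pair above implements a refcounted, lazily allocated table: the UMEM array is allocated on the first add and freed when the last UMEM is removed. A minimal user-space sketch of that pattern; struct registry and its fields are hypothetical names:

#include <stdlib.h>

/* Illustrative sketch: allocate a table on first use, free it when the last
 * reference is dropped, as mlx5e_xsk_get_umems()/mlx5e_xsk_put_umems() do.
 */
struct registry {
        void **table;
        unsigned int refcnt;
};

static int registry_get(struct registry *r, unsigned int slots)
{
        if (!r->table) {
                r->table = calloc(slots, sizeof(*r->table));
                if (!r->table)
                        return -1;
        }
        r->refcnt++;
        return 0;
}

static void registry_put(struct registry *r)
{
        if (!--r->refcnt) {
                free(r->table);
                r->table = NULL;
        }
}

int main(void)
{
        struct registry r = { 0 };

        if (registry_get(&r, 16))
                return 1;
        registry_put(&r);
        return 0;
}
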
+
+static bool mlx5e_xsk_is_umem_sane(struct xdp_umem *umem)
+{
+ return umem->headroom <= 0xffff && umem->chunk_size_nohr <= 0xffff;
+}
+
+void mlx5e_build_xsk_param(struct xdp_umem *umem, struct mlx5e_xsk_param *xsk)
+{
+ xsk->headroom = umem->headroom;
+ xsk->chunk_size = umem->chunk_size_nohr + umem->headroom;
+}
+
+static int mlx5e_xsk_enable_locked(struct mlx5e_priv *priv,
+ struct xdp_umem *umem, u16 ix)
+{
+ struct mlx5e_params *params = &priv->channels.params;
+ struct mlx5e_xsk_param xsk;
+ struct mlx5e_channel *c;
+ int err;
+
+ if (unlikely(mlx5e_xsk_get_umem(&priv->channels.params, &priv->xsk, ix)))
+ return -EBUSY;
+
+ if (unlikely(!mlx5e_xsk_is_umem_sane(umem)))
+ return -EINVAL;
+
+ err = mlx5e_xsk_map_umem(priv, umem);
+ if (unlikely(err))
+ return err;
+
+ err = mlx5e_xsk_add_umem(&priv->xsk, umem, ix);
+ if (unlikely(err))
+ goto err_unmap_umem;
+
+ mlx5e_build_xsk_param(umem, &xsk);
+
+ if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) {
+ /* XSK objects will be created on open. */
+ goto validate_closed;
+ }
+
+ if (!params->xdp_prog) {
+ /* XSK objects will be created when an XDP program is set,
+ * and the channels are reopened.
+ */
+ goto validate_closed;
+ }
+
+ c = priv->channels.c[ix];
+
+ err = mlx5e_open_xsk(priv, params, &xsk, umem, c);
+ if (unlikely(err))
+ goto err_remove_umem;
+
+ mlx5e_activate_xsk(c);
+
+ /* Don't wait for WQEs, because the newer xdpsock sample doesn't provide
+ * any Fill Ring entries at the setup stage.
+ */
+
+ err = mlx5e_xsk_redirect_rqt_to_channel(priv, priv->channels.c[ix]);
+ if (unlikely(err))
+ goto err_deactivate;
+
+ return 0;
+
+err_deactivate:
+ mlx5e_deactivate_xsk(c);
+ mlx5e_close_xsk(c);
+
+err_remove_umem:
+ mlx5e_xsk_remove_umem(&priv->xsk, ix);
+
+err_unmap_umem:
+ mlx5e_xsk_unmap_umem(priv, umem);
+
+ return err;
+
+validate_closed:
+ /* Check the configuration in advance, rather than fail at a later stage
+ * (in mlx5e_xdp_set or on open) and end up with no channels.
+ */
+ if (!mlx5e_validate_xsk_param(params, &xsk, priv->mdev)) {
+ err = -EINVAL;
+ goto err_remove_umem;
+ }
+
+ return 0;
+}
+
+static int mlx5e_xsk_disable_locked(struct mlx5e_priv *priv, u16 ix)
+{
+ struct xdp_umem *umem = mlx5e_xsk_get_umem(&priv->channels.params,
+ &priv->xsk, ix);
+ struct mlx5e_channel *c;
+
+ if (unlikely(!umem))
+ return -EINVAL;
+
+ if (!test_bit(MLX5E_STATE_OPENED, &priv->state))
+ goto remove_umem;
+
+ /* XSK RQ and SQ are only created if XDP program is set. */
+ if (!priv->channels.params.xdp_prog)
+ goto remove_umem;
+
+ c = priv->channels.c[ix];
+ mlx5e_xsk_redirect_rqt_to_drop(priv, ix);
+ mlx5e_deactivate_xsk(c);
+ mlx5e_close_xsk(c);
+
+remove_umem:
+ mlx5e_xsk_remove_umem(&priv->xsk, ix);
+ mlx5e_xsk_unmap_umem(priv, umem);
+
+ return 0;
+}
+
+static int mlx5e_xsk_enable_umem(struct mlx5e_priv *priv, struct xdp_umem *umem,
+ u16 ix)
+{
+ int err;
+
+ mutex_lock(&priv->state_lock);
+ err = mlx5e_xsk_enable_locked(priv, umem, ix);
+ mutex_unlock(&priv->state_lock);
+
+ return err;
+}
+
+static int mlx5e_xsk_disable_umem(struct mlx5e_priv *priv, u16 ix)
+{
+ int err;
+
+ mutex_lock(&priv->state_lock);
+ err = mlx5e_xsk_disable_locked(priv, ix);
+ mutex_unlock(&priv->state_lock);
+
+ return err;
+}
+
+int mlx5e_xsk_setup_umem(struct net_device *dev, struct xdp_umem *umem, u16 qid)
+{
+ struct mlx5e_priv *priv = netdev_priv(dev);
+ struct mlx5e_params *params = &priv->channels.params;
+ u16 ix;
+
+ if (unlikely(!mlx5e_qid_get_ch_if_in_group(params, qid, MLX5E_RQ_GROUP_XSK, &ix)))
+ return -EINVAL;
+
+ return umem ? mlx5e_xsk_enable_umem(priv, umem, ix) :
+ mlx5e_xsk_disable_umem(priv, ix);
+}
+
+int mlx5e_xsk_resize_reuseq(struct xdp_umem *umem, u32 nentries)
+{
+ struct xdp_umem_fq_reuse *reuseq;
+
+ reuseq = xsk_reuseq_prepare(nentries);
+ if (unlikely(!reuseq))
+ return -ENOMEM;
+ xsk_reuseq_free(xsk_reuseq_swap(umem, reuseq));
+
+ return 0;
+}
+
+u16 mlx5e_xsk_first_unused_channel(struct mlx5e_params *params, struct mlx5e_xsk *xsk)
+{
+ u16 res = xsk->refcnt ? params->num_channels : 0;
+
+ while (res) {
+ if (mlx5e_xsk_get_umem(params, xsk, res - 1))
+ break;
+ --res;
+ }
+
+ return res;
+}
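
mlx5e_xsk_first_unused_channel() scans from the top of the channel range down to the highest slot that still holds a UMEM, so the caller knows how many queue IDs must stay reserved. A small user-space sketch of the scan; first_unused() and the NULL-means-unused convention are assumptions for illustration:

#include <stdio.h>

/* Illustrative sketch: return the number of slots needed to cover the
 * highest slot still in use (0 if none), scanning from the top.
 */
static unsigned int first_unused(void **slots, unsigned int n)
{
        unsigned int res = n;

        while (res) {
                if (slots[res - 1])
                        break;
                --res;
        }
        return res;
}

int main(void)
{
        int dummy;
        void *slots[8] = { [2] = &dummy };      /* only slot 2 is in use */

        printf("%u\n", first_unused(slots, 8)); /* prints 3 */
        return 0;
}
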
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/umem.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/umem.h
new file mode 100644
index 000000000000..25b4cbe58b54
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/umem.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
+/* Copyright (c) 2019 Mellanox Technologies. */
+
+#ifndef __MLX5_EN_XSK_UMEM_H__
+#define __MLX5_EN_XSK_UMEM_H__
+
+#include "en.h"
+
+static inline struct xdp_umem *mlx5e_xsk_get_umem(struct mlx5e_params *params,
+ struct mlx5e_xsk *xsk, u16 ix)
+{
+ if (!xsk || !xsk->umems)
+ return NULL;
+
+ if (unlikely(ix >= params->num_channels))
+ return NULL;
+
+ return xsk->umems[ix];
+}
+
+struct mlx5e_xsk_param;
+void mlx5e_build_xsk_param(struct xdp_umem *umem, struct mlx5e_xsk_param *xsk);
+
+/* .ndo_bpf callback. */
+int mlx5e_xsk_setup_umem(struct net_device *dev, struct xdp_umem *umem, u16 qid);
+
+int mlx5e_xsk_resize_reuseq(struct xdp_umem *umem, u32 nentries);
+
+u16 mlx5e_xsk_first_unused_channel(struct mlx5e_params *params, struct mlx5e_xsk *xsk);
+
+#endif /* __MLX5_EN_XSK_UMEM_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h
index 6da7c88742dc..3022463f2284 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h
@@ -39,6 +39,7 @@
#include "en_accel/ipsec_rxtx.h"
#include "en_accel/tls_rxtx.h"
#include "en.h"
+#include "en/txrx.h"
#if IS_ENABLED(CONFIG_GENEVE)
static inline bool mlx5_geneve_tx_allowed(struct mlx5_core_dev *mdev)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.h
index ca47c0540904..db84500b024f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.h
@@ -39,6 +39,7 @@
#include <linux/skbuff.h>
#include <net/xfrm.h>
#include "en.h"
+#include "en/txrx.h"
struct sk_buff *mlx5e_ipsec_handle_rx_skb(struct net_device *netdev,
struct sk_buff *skb, u32 *cqe_bcnt);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.c
new file mode 100644
index 000000000000..d2ff74d52720
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.c
@@ -0,0 +1,93 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+// Copyright (c) 2019 Mellanox Technologies.
+
+#include "en.h"
+#include "en_accel/ktls.h"
+
+static int mlx5e_ktls_create_tis(struct mlx5_core_dev *mdev, u32 *tisn)
+{
+ u32 in[MLX5_ST_SZ_DW(create_tis_in)] = {};
+ void *tisc;
+
+ tisc = MLX5_ADDR_OF(create_tis_in, in, ctx);
+
+ MLX5_SET(tisc, tisc, tls_en, 1);
+
+ return mlx5e_create_tis(mdev, in, tisn);
+}
+
+static int mlx5e_ktls_add(struct net_device *netdev, struct sock *sk,
+ enum tls_offload_ctx_dir direction,
+ struct tls_crypto_info *crypto_info,
+ u32 start_offload_tcp_sn)
+{
+ struct mlx5e_priv *priv = netdev_priv(netdev);
+ struct mlx5e_ktls_offload_context_tx *tx_priv;
+ struct tls_context *tls_ctx = tls_get_ctx(sk);
+ struct mlx5_core_dev *mdev = priv->mdev;
+ int err;
+
+ if (WARN_ON(direction != TLS_OFFLOAD_CTX_DIR_TX))
+ return -EINVAL;
+
+ if (WARN_ON(!mlx5e_ktls_type_check(mdev, crypto_info)))
+ return -EOPNOTSUPP;
+
+ tx_priv = kvzalloc(sizeof(*tx_priv), GFP_KERNEL);
+ if (!tx_priv)
+ return -ENOMEM;
+
+ tx_priv->expected_seq = start_offload_tcp_sn;
+ tx_priv->crypto_info = crypto_info;
+ mlx5e_set_ktls_tx_priv_ctx(tls_ctx, tx_priv);
+
+ /* The tc and underlay_qpn values are not in use for a TLS TIS. */
+ err = mlx5e_ktls_create_tis(mdev, &tx_priv->tisn);
+ if (err)
+ goto create_tis_fail;
+
+ err = mlx5_ktls_create_key(mdev, crypto_info, &tx_priv->key_id);
+ if (err)
+ goto encryption_key_create_fail;
+
+ mlx5e_ktls_tx_offload_set_pending(tx_priv);
+
+ return 0;
+
+encryption_key_create_fail:
+ mlx5e_destroy_tis(priv->mdev, tx_priv->tisn);
+create_tis_fail:
+ kvfree(tx_priv);
+ return err;
+}
+
+static void mlx5e_ktls_del(struct net_device *netdev,
+ struct tls_context *tls_ctx,
+ enum tls_offload_ctx_dir direction)
+{
+ struct mlx5e_priv *priv = netdev_priv(netdev);
+ struct mlx5e_ktls_offload_context_tx *tx_priv =
+ mlx5e_get_ktls_tx_priv_ctx(tls_ctx);
+
+ mlx5_ktls_destroy_key(priv->mdev, tx_priv->key_id);
+ mlx5e_destroy_tis(priv->mdev, tx_priv->tisn);
+ kvfree(tx_priv);
+}
+
+static const struct tlsdev_ops mlx5e_ktls_ops = {
+ .tls_dev_add = mlx5e_ktls_add,
+ .tls_dev_del = mlx5e_ktls_del,
+};
+
+void mlx5e_ktls_build_netdev(struct mlx5e_priv *priv)
+{
+ struct net_device *netdev = priv->netdev;
+
+ if (!mlx5_accel_is_ktls_device(priv->mdev))
+ return;
+
+ netdev->hw_features |= NETIF_F_HW_TLS_TX;
+ netdev->features |= NETIF_F_HW_TLS_TX;
+
+ netdev->tlsdev_ops = &mlx5e_ktls_ops;
+}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h
new file mode 100644
index 000000000000..407da83474ef
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h
@@ -0,0 +1,97 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
+/* Copyright (c) 2019 Mellanox Technologies. */
+
+#ifndef __MLX5E_KTLS_H__
+#define __MLX5E_KTLS_H__
+
+#include "en.h"
+
+#ifdef CONFIG_MLX5_EN_TLS
+#include <net/tls.h>
+#include "accel/tls.h"
+
+#define MLX5E_KTLS_STATIC_UMR_WQE_SZ \
+ (sizeof(struct mlx5e_umr_wqe) + MLX5_ST_SZ_BYTES(tls_static_params))
+#define MLX5E_KTLS_STATIC_WQEBBS \
+ (DIV_ROUND_UP(MLX5E_KTLS_STATIC_UMR_WQE_SZ, MLX5_SEND_WQE_BB))
+
+#define MLX5E_KTLS_PROGRESS_WQE_SZ \
+ (sizeof(struct mlx5e_tx_wqe) + MLX5_ST_SZ_BYTES(tls_progress_params))
+#define MLX5E_KTLS_PROGRESS_WQEBBS \
+ (DIV_ROUND_UP(MLX5E_KTLS_PROGRESS_WQE_SZ, MLX5_SEND_WQE_BB))
+#define MLX5E_KTLS_MAX_DUMP_WQEBBS 2
+
+enum {
+ MLX5E_TLS_PROGRESS_PARAMS_AUTH_STATE_NO_OFFLOAD = 0,
+ MLX5E_TLS_PROGRESS_PARAMS_AUTH_STATE_OFFLOAD = 1,
+ MLX5E_TLS_PROGRESS_PARAMS_AUTH_STATE_AUTHENTICATION = 2,
+};
+
+enum {
+ MLX5E_TLS_PROGRESS_PARAMS_RECORD_TRACKER_STATE_START = 0,
+ MLX5E_TLS_PROGRESS_PARAMS_RECORD_TRACKER_STATE_SEARCHING = 1,
+ MLX5E_TLS_PROGRESS_PARAMS_RECORD_TRACKER_STATE_TRACKING = 2,
+};
+
+struct mlx5e_ktls_offload_context_tx {
+ struct tls_offload_context_tx *tx_ctx;
+ struct tls_crypto_info *crypto_info;
+ u32 expected_seq;
+ u32 tisn;
+ u32 key_id;
+ bool ctx_post_pending;
+};
+
+struct mlx5e_ktls_offload_context_tx_shadow {
+ struct tls_offload_context_tx tx_ctx;
+ struct mlx5e_ktls_offload_context_tx *priv_tx;
+};
+
+static inline void
+mlx5e_set_ktls_tx_priv_ctx(struct tls_context *tls_ctx,
+ struct mlx5e_ktls_offload_context_tx *priv_tx)
+{
+ struct tls_offload_context_tx *tx_ctx = tls_offload_ctx_tx(tls_ctx);
+ struct mlx5e_ktls_offload_context_tx_shadow *shadow;
+
+ BUILD_BUG_ON(sizeof(*shadow) > TLS_OFFLOAD_CONTEXT_SIZE_TX);
+
+ shadow = (struct mlx5e_ktls_offload_context_tx_shadow *)tx_ctx;
+
+ shadow->priv_tx = priv_tx;
+ priv_tx->tx_ctx = tx_ctx;
+}
+
+static inline struct mlx5e_ktls_offload_context_tx *
+mlx5e_get_ktls_tx_priv_ctx(struct tls_context *tls_ctx)
+{
+ struct tls_offload_context_tx *tx_ctx = tls_offload_ctx_tx(tls_ctx);
+ struct mlx5e_ktls_offload_context_tx_shadow *shadow;
+
+ BUILD_BUG_ON(sizeof(*shadow) > TLS_OFFLOAD_CONTEXT_SIZE_TX);
+
+ shadow = (struct mlx5e_ktls_offload_context_tx_shadow *)tx_ctx;
+
+ return shadow->priv_tx;
+}
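
The shadow-context helpers above tuck a driver-private pointer into the space the TLS stack reserves for the offload context, guarded by a compile-time size check. A user-space sketch of the same trick with hypothetical struct names and a static_assert standing in for BUILD_BUG_ON():

#include <assert.h>
#include <stdio.h>

#define RESERVED_CTX_SIZE 64            /* space the framework reserves per context */

struct framework_ctx {                  /* the part the framework itself defines */
        char used[32];
};

struct shadow_ctx {                     /* driver view: framework part + private pointer */
        struct framework_ctx base;
        void *priv;
};

/* Compile-time guarantee that the shadow fits the reserved space. */
static_assert(sizeof(struct shadow_ctx) <= RESERVED_CTX_SIZE,
              "driver shadow must fit the reserved context space");

static void set_priv(struct framework_ctx *ctx, void *priv)
{
        ((struct shadow_ctx *)ctx)->priv = priv;
}

static void *get_priv(struct framework_ctx *ctx)
{
        return ((struct shadow_ctx *)ctx)->priv;
}

int main(void)
{
        _Alignas(void *) char area[RESERVED_CTX_SIZE] = { 0 };
        struct framework_ctx *ctx = (struct framework_ctx *)area;
        int token = 42;

        set_priv(ctx, &token);
        printf("%d\n", *(int *)get_priv(ctx));
        return 0;
}
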
+
+void mlx5e_ktls_build_netdev(struct mlx5e_priv *priv);
+void mlx5e_ktls_tx_offload_set_pending(struct mlx5e_ktls_offload_context_tx *priv_tx);
+
+struct sk_buff *mlx5e_ktls_handle_tx_skb(struct net_device *netdev,
+ struct mlx5e_txqsq *sq,
+ struct sk_buff *skb,
+ struct mlx5e_tx_wqe **wqe, u16 *pi);
+void mlx5e_ktls_tx_handle_resync_dump_comp(struct mlx5e_txqsq *sq,
+ struct mlx5e_tx_wqe_info *wi,
+ struct mlx5e_sq_dma *dma);
+
+#else
+
+static inline void mlx5e_ktls_build_netdev(struct mlx5e_priv *priv)
+{
+}
+
+#endif
+
+#endif /* __MLX5E_KTLS_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
new file mode 100644
index 000000000000..5c08891806f0
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
@@ -0,0 +1,460 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+// Copyright (c) 2019 Mellanox Technologies.
+
+#include <linux/tls.h>
+#include "en.h"
+#include "en/txrx.h"
+#include "en_accel/ktls.h"
+
+enum {
+ MLX5E_STATIC_PARAMS_CONTEXT_TLS_1_2 = 0x2,
+};
+
+enum {
+ MLX5E_ENCRYPTION_STANDARD_TLS = 0x1,
+};
+
+#define EXTRACT_INFO_FIELDS do { \
+ salt = info->salt; \
+ rec_seq = info->rec_seq; \
+ salt_sz = sizeof(info->salt); \
+ rec_seq_sz = sizeof(info->rec_seq); \
+} while (0)
+
+static void
+fill_static_params_ctx(void *ctx, struct mlx5e_ktls_offload_context_tx *priv_tx)
+{
+ struct tls_crypto_info *crypto_info = priv_tx->crypto_info;
+ char *initial_rn, *gcm_iv;
+ u16 salt_sz, rec_seq_sz;
+ char *salt, *rec_seq;
+ u8 tls_version;
+
+ switch (crypto_info->cipher_type) {
+ case TLS_CIPHER_AES_GCM_128: {
+ struct tls12_crypto_info_aes_gcm_128 *info =
+ (struct tls12_crypto_info_aes_gcm_128 *)crypto_info;
+
+ EXTRACT_INFO_FIELDS;
+ break;
+ }
+ default:
+ WARN_ON(1);
+ return;
+ }
+
+ gcm_iv = MLX5_ADDR_OF(tls_static_params, ctx, gcm_iv);
+ initial_rn = MLX5_ADDR_OF(tls_static_params, ctx, initial_record_number);
+
+ memcpy(gcm_iv, salt, salt_sz);
+ memcpy(initial_rn, rec_seq, rec_seq_sz);
+
+ tls_version = MLX5E_STATIC_PARAMS_CONTEXT_TLS_1_2;
+
+ MLX5_SET(tls_static_params, ctx, tls_version, tls_version);
+ MLX5_SET(tls_static_params, ctx, const_1, 1);
+ MLX5_SET(tls_static_params, ctx, const_2, 2);
+ MLX5_SET(tls_static_params, ctx, encryption_standard,
+ MLX5E_ENCRYPTION_STANDARD_TLS);
+ MLX5_SET(tls_static_params, ctx, dek_index, priv_tx->key_id);
+}
+
+static void
+build_static_params(struct mlx5e_umr_wqe *wqe, u16 pc, u32 sqn,
+ struct mlx5e_ktls_offload_context_tx *priv_tx,
+ bool fence)
+{
+ struct mlx5_wqe_ctrl_seg *cseg = &wqe->ctrl;
+ struct mlx5_wqe_umr_ctrl_seg *ucseg = &wqe->uctrl;
+
+#define STATIC_PARAMS_DS_CNT \
+ DIV_ROUND_UP(MLX5E_KTLS_STATIC_UMR_WQE_SZ, MLX5_SEND_WQE_DS)
+
+ cseg->opmod_idx_opcode = cpu_to_be32((pc << 8) | MLX5_OPCODE_UMR |
+ (MLX5_OPC_MOD_TLS_TIS_STATIC_PARAMS << 24));
+ cseg->qpn_ds = cpu_to_be32((sqn << MLX5_WQE_CTRL_QPN_SHIFT) |
+ STATIC_PARAMS_DS_CNT);
+ cseg->fm_ce_se = fence ? MLX5_FENCE_MODE_INITIATOR_SMALL : 0;
+ cseg->imm = cpu_to_be32(priv_tx->tisn);
+
+ ucseg->flags = MLX5_UMR_INLINE;
+ ucseg->bsf_octowords = cpu_to_be16(MLX5_ST_SZ_BYTES(tls_static_params) / 16);
+
+ fill_static_params_ctx(wqe->tls_static_params_ctx, priv_tx);
+}
+
+static void
+fill_progress_params_ctx(void *ctx, struct mlx5e_ktls_offload_context_tx *priv_tx)
+{
+ MLX5_SET(tls_progress_params, ctx, pd, priv_tx->tisn);
+ MLX5_SET(tls_progress_params, ctx, record_tracker_state,
+ MLX5E_TLS_PROGRESS_PARAMS_RECORD_TRACKER_STATE_START);
+ MLX5_SET(tls_progress_params, ctx, auth_state,
+ MLX5E_TLS_PROGRESS_PARAMS_AUTH_STATE_NO_OFFLOAD);
+}
+
+static void
+build_progress_params(struct mlx5e_tx_wqe *wqe, u16 pc, u32 sqn,
+ struct mlx5e_ktls_offload_context_tx *priv_tx,
+ bool fence)
+{
+ struct mlx5_wqe_ctrl_seg *cseg = &wqe->ctrl;
+
+#define PROGRESS_PARAMS_DS_CNT \
+ DIV_ROUND_UP(MLX5E_KTLS_PROGRESS_WQE_SZ, MLX5_SEND_WQE_DS)
+
+ cseg->opmod_idx_opcode =
+ cpu_to_be32((pc << 8) | MLX5_OPCODE_SET_PSV |
+ (MLX5_OPC_MOD_TLS_TIS_PROGRESS_PARAMS << 24));
+ cseg->qpn_ds = cpu_to_be32((sqn << MLX5_WQE_CTRL_QPN_SHIFT) |
+ PROGRESS_PARAMS_DS_CNT);
+ cseg->fm_ce_se = fence ? MLX5_FENCE_MODE_INITIATOR_SMALL : 0;
+
+ fill_progress_params_ctx(wqe->data, priv_tx);
+}
+
+static void tx_fill_wi(struct mlx5e_txqsq *sq,
+ u16 pi, u8 num_wqebbs,
+ skb_frag_t *resync_dump_frag)
+{
+ struct mlx5e_tx_wqe_info *wi = &sq->db.wqe_info[pi];
+
+ wi->skb = NULL;
+ wi->num_wqebbs = num_wqebbs;
+ wi->resync_dump_frag = resync_dump_frag;
+}
+
+void mlx5e_ktls_tx_offload_set_pending(struct mlx5e_ktls_offload_context_tx *priv_tx)
+{
+ priv_tx->ctx_post_pending = true;
+}
+
+static bool
+mlx5e_ktls_tx_offload_test_and_clear_pending(struct mlx5e_ktls_offload_context_tx *priv_tx)
+{
+ bool ret = priv_tx->ctx_post_pending;
+
+ priv_tx->ctx_post_pending = false;
+
+ return ret;
+}
+
+static void
+post_static_params(struct mlx5e_txqsq *sq,
+ struct mlx5e_ktls_offload_context_tx *priv_tx,
+ bool fence)
+{
+ struct mlx5e_umr_wqe *umr_wqe;
+ u16 pi;
+
+ umr_wqe = mlx5e_sq_fetch_wqe(sq, MLX5E_KTLS_STATIC_UMR_WQE_SZ, &pi);
+ build_static_params(umr_wqe, sq->pc, sq->sqn, priv_tx, fence);
+ tx_fill_wi(sq, pi, MLX5E_KTLS_STATIC_WQEBBS, NULL);
+ sq->pc += MLX5E_KTLS_STATIC_WQEBBS;
+}
+
+static void
+post_progress_params(struct mlx5e_txqsq *sq,
+ struct mlx5e_ktls_offload_context_tx *priv_tx,
+ bool fence)
+{
+ struct mlx5e_tx_wqe *wqe;
+ u16 pi;
+
+ wqe = mlx5e_sq_fetch_wqe(sq, MLX5E_KTLS_PROGRESS_WQE_SZ, &pi);
+ build_progress_params(wqe, sq->pc, sq->sqn, priv_tx, fence);
+ tx_fill_wi(sq, pi, MLX5E_KTLS_PROGRESS_WQEBBS, NULL);
+ sq->pc += MLX5E_KTLS_PROGRESS_WQEBBS;
+}
+
+static void
+mlx5e_ktls_tx_post_param_wqes(struct mlx5e_txqsq *sq,
+ struct mlx5e_ktls_offload_context_tx *priv_tx,
+ bool skip_static_post, bool fence_first_post)
+{
+ bool progress_fence = skip_static_post || !fence_first_post;
+
+ if (!skip_static_post)
+ post_static_params(sq, priv_tx, fence_first_post);
+
+ post_progress_params(sq, priv_tx, progress_fence);
+}
+
+struct tx_sync_info {
+ u64 rcd_sn;
+ s32 sync_len;
+ int nr_frags;
+ skb_frag_t *frags[MAX_SKB_FRAGS];
+};
+
+static bool tx_sync_info_get(struct mlx5e_ktls_offload_context_tx *priv_tx,
+ u32 tcp_seq, struct tx_sync_info *info)
+{
+ struct tls_offload_context_tx *tx_ctx = priv_tx->tx_ctx;
+ struct tls_record_info *record;
+ int remaining, i = 0;
+ unsigned long flags;
+ bool ret = true;
+
+ spin_lock_irqsave(&tx_ctx->lock, flags);
+ record = tls_get_record(tx_ctx, tcp_seq, &info->rcd_sn);
+
+ if (unlikely(!record)) {
+ ret = false;
+ goto out;
+ }
+
+ if (unlikely(tcp_seq < tls_record_start_seq(record))) {
+ if (!tls_record_is_start_marker(record))
+ ret = false;
+ goto out;
+ }
+
+ info->sync_len = tcp_seq - tls_record_start_seq(record);
+ remaining = info->sync_len;
+ while (remaining > 0) {
+ skb_frag_t *frag = &record->frags[i];
+
+ __skb_frag_ref(frag);
+ remaining -= skb_frag_size(frag);
+ info->frags[i++] = frag;
+ }
+ /* reduce the part which will be sent with the original SKB */
+ if (remaining < 0)
+ skb_frag_size_add(info->frags[i - 1], remaining);
+ info->nr_frags = i;
+out:
+ spin_unlock_irqrestore(&tx_ctx->lock, flags);
+ return ret;
+}
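
The loop above collects record fragments until they cover sync_len bytes and then trims the last fragment by the overshoot (remaining is negative at that point). A tiny user-space sketch of that arithmetic, with plain ints standing in for skb_frag_t:

#include <stdio.h>

int main(void)
{
        int frags[] = { 1000, 1000, 1000 };     /* fragment sizes of the record */
        int sync_len = 2300;                    /* bytes to resend before the SKB */
        int remaining = sync_len;
        int i = 0;

        while (remaining > 0)
                remaining -= frags[i++];

        /* Trim the overshoot from the last fragment, as
         * skb_frag_size_add(info->frags[i - 1], remaining) does above.
         */
        if (remaining < 0)
                frags[i - 1] += remaining;

        printf("used %d frags, last trimmed to %d bytes\n", i, frags[i - 1]);
        return 0;
}
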
+
+static void
+tx_post_resync_params(struct mlx5e_txqsq *sq,
+ struct mlx5e_ktls_offload_context_tx *priv_tx,
+ u64 rcd_sn)
+{
+ struct tls_crypto_info *crypto_info = priv_tx->crypto_info;
+ __be64 rn_be = cpu_to_be64(rcd_sn);
+ bool skip_static_post;
+ u16 rec_seq_sz;
+ char *rec_seq;
+
+ switch (crypto_info->cipher_type) {
+ case TLS_CIPHER_AES_GCM_128: {
+ struct tls12_crypto_info_aes_gcm_128 *info =
+ (struct tls12_crypto_info_aes_gcm_128 *)crypto_info;
+
+ rec_seq = info->rec_seq;
+ rec_seq_sz = sizeof(info->rec_seq);
+ break;
+ }
+ default:
+ WARN_ON(1);
+ return;
+ }
+
+ skip_static_post = !memcmp(rec_seq, &rn_be, rec_seq_sz);
+ if (!skip_static_post)
+ memcpy(rec_seq, &rn_be, rec_seq_sz);
+
+ mlx5e_ktls_tx_post_param_wqes(sq, priv_tx, skip_static_post, true);
+}
+
+static int
+tx_post_resync_dump(struct mlx5e_txqsq *sq, struct sk_buff *skb,
+ skb_frag_t *frag, u32 tisn, bool first)
+{
+ struct mlx5_wqe_ctrl_seg *cseg;
+ struct mlx5_wqe_eth_seg *eseg;
+ struct mlx5_wqe_data_seg *dseg;
+ struct mlx5e_tx_wqe *wqe;
+ dma_addr_t dma_addr = 0;
+ u16 ds_cnt, ds_cnt_inl;
+ u8 num_wqebbs;
+ u16 pi, ihs;
+ int fsz;
+
+ ds_cnt = sizeof(*wqe) / MLX5_SEND_WQE_DS;
+ ihs = eth_get_headlen(skb->dev, skb->data, skb_headlen(skb));
+ ds_cnt_inl = DIV_ROUND_UP(ihs - INL_HDR_START_SZ, MLX5_SEND_WQE_DS);
+ ds_cnt += ds_cnt_inl;
+ ds_cnt += 1; /* one frag */
+
+ wqe = mlx5e_sq_fetch_wqe(sq, sizeof(*wqe), &pi);
+
+ num_wqebbs = DIV_ROUND_UP(ds_cnt, MLX5_SEND_WQEBB_NUM_DS);
+
+ cseg = &wqe->ctrl;
+ eseg = &wqe->eth;
+ dseg = wqe->data;
+
+ cseg->opmod_idx_opcode = cpu_to_be32((sq->pc << 8) | MLX5_OPCODE_DUMP);
+ cseg->qpn_ds = cpu_to_be32((sq->sqn << 8) | ds_cnt);
+ cseg->imm = cpu_to_be32(tisn);
+ cseg->fm_ce_se = first ? MLX5_FENCE_MODE_INITIATOR_SMALL : 0;
+
+ eseg->inline_hdr.sz = cpu_to_be16(ihs);
+ memcpy(eseg->inline_hdr.start, skb->data, ihs);
+ dseg += ds_cnt_inl;
+
+ fsz = skb_frag_size(frag);
+ dma_addr = skb_frag_dma_map(sq->pdev, frag, 0, fsz,
+ DMA_TO_DEVICE);
+ if (unlikely(dma_mapping_error(sq->pdev, dma_addr)))
+ return -ENOMEM;
+
+ dseg->addr = cpu_to_be64(dma_addr);
+ dseg->lkey = sq->mkey_be;
+ dseg->byte_count = cpu_to_be32(fsz);
+ mlx5e_dma_push(sq, dma_addr, fsz, MLX5E_DMA_MAP_PAGE);
+
+ tx_fill_wi(sq, pi, num_wqebbs, frag);
+ sq->pc += num_wqebbs;
+
+ WARN(num_wqebbs > MLX5E_KTLS_MAX_DUMP_WQEBBS,
+ "unexpected DUMP num_wqebbs, %d > %d",
+ num_wqebbs, MLX5E_KTLS_MAX_DUMP_WQEBBS);
+
+ return 0;
+}
+
+void mlx5e_ktls_tx_handle_resync_dump_comp(struct mlx5e_txqsq *sq,
+ struct mlx5e_tx_wqe_info *wi,
+ struct mlx5e_sq_dma *dma)
+{
+ struct mlx5e_sq_stats *stats = sq->stats;
+
+ mlx5e_tx_dma_unmap(sq->pdev, dma);
+ __skb_frag_unref(wi->resync_dump_frag);
+ stats->tls_dump_packets++;
+ stats->tls_dump_bytes += wi->num_bytes;
+}
+
+static void tx_post_fence_nop(struct mlx5e_txqsq *sq)
+{
+ struct mlx5_wq_cyc *wq = &sq->wq;
+ u16 pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
+
+ tx_fill_wi(sq, pi, 1, NULL);
+
+ mlx5e_post_nop_fence(wq, sq->sqn, &sq->pc);
+}
+
+static struct sk_buff *
+mlx5e_ktls_tx_handle_ooo(struct mlx5e_ktls_offload_context_tx *priv_tx,
+ struct mlx5e_txqsq *sq,
+ struct sk_buff *skb,
+ u32 seq)
+{
+ struct mlx5e_sq_stats *stats = sq->stats;
+ struct mlx5_wq_cyc *wq = &sq->wq;
+ struct tx_sync_info info = {};
+ u16 contig_wqebbs_room, pi;
+ u8 num_wqebbs;
+ int i;
+
+ if (!tx_sync_info_get(priv_tx, seq, &info)) {
+ /* We might get here if a retransmission reaches the driver
+ * after the relevant record is acked.
+ * It should be safe to drop the packet in this case.
+ */
+ stats->tls_drop_no_sync_data++;
+ goto err_out;
+ }
+
+ if (unlikely(info.sync_len < 0)) {
+ u32 payload;
+ int headln;
+
+ headln = skb_transport_offset(skb) + tcp_hdrlen(skb);
+ payload = skb->len - headln;
+ if (likely(payload <= -info.sync_len))
+ return skb;
+
+ stats->tls_drop_bypass_req++;
+ goto err_out;
+ }
+
+ stats->tls_ooo++;
+
+ num_wqebbs = MLX5E_KTLS_STATIC_WQEBBS + MLX5E_KTLS_PROGRESS_WQEBBS +
+ (info.nr_frags ? info.nr_frags * MLX5E_KTLS_MAX_DUMP_WQEBBS : 1);
+ pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
+ contig_wqebbs_room = mlx5_wq_cyc_get_contig_wqebbs(wq, pi);
+ if (unlikely(contig_wqebbs_room < num_wqebbs))
+ mlx5e_fill_sq_frag_edge(sq, wq, pi, contig_wqebbs_room);
+
+ tx_post_resync_params(sq, priv_tx, info.rcd_sn);
+
+ for (i = 0; i < info.nr_frags; i++)
+ if (tx_post_resync_dump(sq, skb, info.frags[i],
+ priv_tx->tisn, !i))
+ goto err_out;
+
+ /* If no dump WQE was sent, we need to have a fence NOP WQE before the
+ * actual data xmit.
+ */
+ if (!info.nr_frags)
+ tx_post_fence_nop(sq);
+
+ return skb;
+
+err_out:
+ dev_kfree_skb_any(skb);
+ return NULL;
+}
+
+struct sk_buff *mlx5e_ktls_handle_tx_skb(struct net_device *netdev,
+ struct mlx5e_txqsq *sq,
+ struct sk_buff *skb,
+ struct mlx5e_tx_wqe **wqe, u16 *pi)
+{
+ struct mlx5e_ktls_offload_context_tx *priv_tx;
+ struct mlx5e_sq_stats *stats = sq->stats;
+ struct mlx5_wqe_ctrl_seg *cseg;
+ struct tls_context *tls_ctx;
+ int datalen;
+ u32 seq;
+
+ if (!skb->sk || !tls_is_sk_tx_device_offloaded(skb->sk))
+ goto out;
+
+ datalen = skb->len - (skb_transport_offset(skb) + tcp_hdrlen(skb));
+ if (!datalen)
+ goto out;
+
+ tls_ctx = tls_get_ctx(skb->sk);
+ if (unlikely(tls_ctx->netdev != netdev))
+ goto err_out;
+
+ priv_tx = mlx5e_get_ktls_tx_priv_ctx(tls_ctx);
+
+ if (unlikely(mlx5e_ktls_tx_offload_test_and_clear_pending(priv_tx))) {
+ mlx5e_ktls_tx_post_param_wqes(sq, priv_tx, false, false);
+ *wqe = mlx5e_sq_fetch_wqe(sq, sizeof(**wqe), pi);
+ stats->tls_ctx++;
+ }
+
+ seq = ntohl(tcp_hdr(skb)->seq);
+ if (unlikely(priv_tx->expected_seq != seq)) {
+ skb = mlx5e_ktls_tx_handle_ooo(priv_tx, sq, skb, seq);
+ if (unlikely(!skb))
+ goto out;
+ *wqe = mlx5e_sq_fetch_wqe(sq, sizeof(**wqe), pi);
+ }
+
+ priv_tx->expected_seq = seq + datalen;
+
+ cseg = &(*wqe)->ctrl;
+ cseg->imm = cpu_to_be32(priv_tx->tisn);
+
+ stats->tls_encrypted_packets += skb_is_gso(skb) ? skb_shinfo(skb)->gso_segs : 1;
+ stats->tls_encrypted_bytes += datalen;
+
+out:
+ return skb;
+
+err_out:
+ dev_kfree_skb_any(skb);
+ return NULL;
+}
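
mlx5e_ktls_handle_tx_skb() keeps a running expected_seq: a packet whose TCP sequence number matches it flows straight through, anything else takes the resync path, and the counter advances by the payload length. A user-space sketch of that bookkeeping; struct tx_state and needs_resync() are hypothetical names:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct tx_state {
        uint32_t expected_seq;
};

/* Illustrative sketch: report whether a packet is out of order with respect
 * to the device state, and advance the expected sequence number.
 */
static bool needs_resync(struct tx_state *st, uint32_t seq, uint32_t datalen)
{
        bool ooo = (st->expected_seq != seq);

        st->expected_seq = seq + datalen;
        return ooo;
}

int main(void)
{
        struct tx_state st = { .expected_seq = 1000 };

        printf("%d\n", needs_resync(&st, 1000, 500));   /* in order -> 0 */
        printf("%d\n", needs_resync(&st, 1200, 100));   /* retransmit -> 1 */
        return 0;
}
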
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls.c
index e88340e196f7..fba561ffe1d4 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls.c
@@ -160,25 +160,31 @@ static void mlx5e_tls_del(struct net_device *netdev,
direction == TLS_OFFLOAD_CTX_DIR_TX);
}
-static void mlx5e_tls_resync_rx(struct net_device *netdev, struct sock *sk,
- u32 seq, u64 rcd_sn)
+static int mlx5e_tls_resync(struct net_device *netdev, struct sock *sk,
+ u32 seq, u8 *rcd_sn_data,
+ enum tls_offload_ctx_dir direction)
{
struct tls_context *tls_ctx = tls_get_ctx(sk);
struct mlx5e_priv *priv = netdev_priv(netdev);
struct mlx5e_tls_offload_context_rx *rx_ctx;
+ u64 rcd_sn = *(u64 *)rcd_sn_data;
+ if (WARN_ON_ONCE(direction != TLS_OFFLOAD_CTX_DIR_RX))
+ return -EINVAL;
rx_ctx = mlx5e_get_tls_rx_context(tls_ctx);
netdev_info(netdev, "resyncing seq %d rcd %lld\n", seq,
be64_to_cpu(rcd_sn));
mlx5_accel_tls_resync_rx(priv->mdev, rx_ctx->handle, seq, rcd_sn);
atomic64_inc(&priv->tls->sw_stats.rx_tls_resync_reply);
+
+ return 0;
}
static const struct tlsdev_ops mlx5e_tls_ops = {
.tls_dev_add = mlx5e_tls_add,
.tls_dev_del = mlx5e_tls_del,
- .tls_dev_resync_rx = mlx5e_tls_resync_rx,
+ .tls_dev_resync = mlx5e_tls_resync,
};
void mlx5e_tls_build_netdev(struct mlx5e_priv *priv)
@@ -186,6 +192,11 @@ void mlx5e_tls_build_netdev(struct mlx5e_priv *priv)
struct net_device *netdev = priv->netdev;
u32 caps;
+ if (mlx5_accel_is_ktls_device(priv->mdev)) {
+ mlx5e_ktls_build_netdev(priv);
+ return;
+ }
+
if (!mlx5_accel_is_tls_device(priv->mdev))
return;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls.h
index 3f5d72163b56..9015f3f7792d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls.h
@@ -33,8 +33,10 @@
#ifndef __MLX5E_TLS_H__
#define __MLX5E_TLS_H__
-#ifdef CONFIG_MLX5_EN_TLS
+#include "accel/tls.h"
+#include "en_accel/ktls.h"
+#ifdef CONFIG_MLX5_EN_TLS
#include <net/tls.h>
#include "en.h"
@@ -94,7 +96,12 @@ int mlx5e_tls_get_stats(struct mlx5e_priv *priv, u64 *data);
#else
-static inline void mlx5e_tls_build_netdev(struct mlx5e_priv *priv) { }
+static inline void mlx5e_tls_build_netdev(struct mlx5e_priv *priv)
+{
+ if (mlx5_accel_is_ktls_device(priv->mdev))
+ mlx5e_ktls_build_netdev(priv);
+}
+
static inline int mlx5e_tls_init(struct mlx5e_priv *priv) { return 0; }
static inline void mlx5e_tls_cleanup(struct mlx5e_priv *priv) { }
static inline int mlx5e_tls_get_count(struct mlx5e_priv *priv) { return 0; }
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls_rxtx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls_rxtx.c
index 439bf5953885..71384ad1a443 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls_rxtx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls_rxtx.c
@@ -248,7 +248,7 @@ mlx5e_tls_handle_ooo(struct mlx5e_tls_offload_context_tx *context,
mlx5e_tls_complete_sync_skb(skb, nskb, tcp_seq, headln,
cpu_to_be64(info.rcd_sn));
mlx5e_sq_xmit(sq, nskb, *wqe, *pi, true);
- mlx5e_sq_fetch_wqe(sq, wqe, pi);
+ *wqe = mlx5e_sq_fetch_wqe(sq, sizeof(**wqe), pi);
return skb;
err_out:
@@ -269,6 +269,11 @@ struct sk_buff *mlx5e_tls_handle_tx_skb(struct net_device *netdev,
int datalen;
u32 skb_seq;
+ if (MLX5_CAP_GEN(sq->channel->mdev, tls)) {
+ skb = mlx5e_ktls_handle_tx_skb(netdev, sq, skb, wqe, pi);
+ goto out;
+ }
+
if (!skb->sk || !tls_is_sk_tx_device_offloaded(skb->sk))
goto out;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls_rxtx.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls_rxtx.h
index 311667ec71b8..90bc1f2384c8 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls_rxtx.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls_rxtx.h
@@ -38,6 +38,7 @@
#include <linux/skbuff.h>
#include "en.h"
+#include "en/txrx.h"
struct sk_buff *mlx5e_tls_handle_tx_skb(struct net_device *netdev,
struct mlx5e_txqsq *sq,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c b/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
index 554672edf8c3..8dd31b5c740c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
@@ -680,7 +680,7 @@ static void mlx5e_dcbnl_getpermhwaddr(struct net_device *netdev,
memset(perm_addr, 0xff, MAX_ADDR_LEN);
- mlx5_query_nic_vport_mac_address(priv->mdev, 0, perm_addr);
+ mlx5_query_mac_address(priv->mdev, perm_addr);
}
static void mlx5e_dcbnl_setpgtccfgtx(struct net_device *netdev,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_dim.c b/drivers/net/ethernet/mellanox/mlx5/core/en_dim.c
index d67adf70a97b..ca9cfbf57d8f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_dim.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_dim.c
@@ -30,22 +30,22 @@
* SOFTWARE.
*/
-#include <linux/net_dim.h>
+#include <linux/dim.h>
#include "en.h"
static void
-mlx5e_complete_dim_work(struct net_dim *dim, struct net_dim_cq_moder moder,
+mlx5e_complete_dim_work(struct dim *dim, struct dim_cq_moder moder,
struct mlx5_core_dev *mdev, struct mlx5_core_cq *mcq)
{
mlx5_core_modify_cq_moderation(mdev, mcq, moder.usec, moder.pkts);
- dim->state = NET_DIM_START_MEASURE;
+ dim->state = DIM_START_MEASURE;
}
void mlx5e_rx_dim_work(struct work_struct *work)
{
- struct net_dim *dim = container_of(work, struct net_dim, work);
+ struct dim *dim = container_of(work, struct dim, work);
struct mlx5e_rq *rq = container_of(dim, struct mlx5e_rq, dim);
- struct net_dim_cq_moder cur_moder =
+ struct dim_cq_moder cur_moder =
net_dim_get_rx_moderation(dim->mode, dim->profile_ix);
mlx5e_complete_dim_work(dim, cur_moder, rq->mdev, &rq->cq.mcq);
@@ -53,9 +53,9 @@ void mlx5e_rx_dim_work(struct work_struct *work)
void mlx5e_tx_dim_work(struct work_struct *work)
{
- struct net_dim *dim = container_of(work, struct net_dim, work);
+ struct dim *dim = container_of(work, struct dim, work);
struct mlx5e_txqsq *sq = container_of(dim, struct mlx5e_txqsq, dim);
- struct net_dim_cq_moder cur_moder =
+ struct dim_cq_moder cur_moder =
net_dim_get_tx_moderation(dim->mode, dim->profile_ix);
mlx5e_complete_dim_work(dim, cur_moder, sq->cq.mdev, &sq->cq.mcq);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
index dd764e0471f2..126ec4181286 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
@@ -32,6 +32,7 @@
#include "en.h"
#include "en/port.h"
+#include "en/xsk/umem.h"
#include "lib/clock.h"
void mlx5e_ethtool_get_drvinfo(struct mlx5e_priv *priv,
@@ -46,7 +47,7 @@ void mlx5e_ethtool_get_drvinfo(struct mlx5e_priv *priv,
"%d.%d.%04d (%.16s)",
fw_rev_maj(mdev), fw_rev_min(mdev), fw_rev_sub(mdev),
mdev->board_id);
- strlcpy(drvinfo->bus_info, pci_name(mdev->pdev),
+ strlcpy(drvinfo->bus_info, dev_name(mdev->device),
sizeof(drvinfo->bus_info));
}
@@ -388,8 +389,17 @@ static int mlx5e_set_ringparam(struct net_device *dev,
void mlx5e_ethtool_get_channels(struct mlx5e_priv *priv,
struct ethtool_channels *ch)
{
+ mutex_lock(&priv->state_lock);
+
ch->max_combined = mlx5e_get_netdev_max_channels(priv->netdev);
ch->combined_count = priv->channels.params.num_channels;
+ if (priv->xsk.refcnt) {
+ /* The upper half are XSK queues. */
+ ch->max_combined *= 2;
+ ch->combined_count *= 2;
+ }
+
+ mutex_unlock(&priv->state_lock);
}
static void mlx5e_get_channels(struct net_device *dev,
@@ -403,6 +413,7 @@ static void mlx5e_get_channels(struct net_device *dev,
int mlx5e_ethtool_set_channels(struct mlx5e_priv *priv,
struct ethtool_channels *ch)
{
+ struct mlx5e_params *cur_params = &priv->channels.params;
unsigned int count = ch->combined_count;
struct mlx5e_channels new_channels = {};
bool arfs_enabled;
@@ -414,16 +425,26 @@ int mlx5e_ethtool_set_channels(struct mlx5e_priv *priv,
return -EINVAL;
}
- if (priv->channels.params.num_channels == count)
+ if (cur_params->num_channels == count)
return 0;
mutex_lock(&priv->state_lock);
+ /* Don't allow changing the number of channels if there is an active
+ * XSK, because the numeration of the XSK and regular RQs will change.
+ */
+ if (priv->xsk.refcnt) {
+ err = -EINVAL;
+ netdev_err(priv->netdev, "%s: AF_XDP is active, cannot change the number of channels\n",
+ __func__);
+ goto out;
+ }
+
new_channels.params = priv->channels.params;
new_channels.params.num_channels = count;
if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) {
- priv->channels.params = new_channels.params;
+ *cur_params = new_channels.params;
if (!netif_is_rxfh_configured(priv->netdev))
mlx5e_build_default_indir_rqt(priv->rss_params.indirection_rqt,
MLX5E_INDIR_RQT_SIZE, count);
@@ -466,7 +487,7 @@ static int mlx5e_set_channels(struct net_device *dev,
int mlx5e_ethtool_get_coalesce(struct mlx5e_priv *priv,
struct ethtool_coalesce *coal)
{
- struct net_dim_cq_moder *rx_moder, *tx_moder;
+ struct dim_cq_moder *rx_moder, *tx_moder;
if (!MLX5_CAP_GEN(priv->mdev, cq_moderation))
return -EOPNOTSUPP;
@@ -521,7 +542,7 @@ mlx5e_set_priv_channels_coalesce(struct mlx5e_priv *priv, struct ethtool_coalesc
int mlx5e_ethtool_set_coalesce(struct mlx5e_priv *priv,
struct ethtool_coalesce *coal)
{
- struct net_dim_cq_moder *rx_moder, *tx_moder;
+ struct dim_cq_moder *rx_moder, *tx_moder;
struct mlx5_core_dev *mdev = priv->mdev;
struct mlx5e_channels new_channels = {};
int err = 0;
@@ -1867,40 +1888,6 @@ static u32 mlx5e_get_priv_flags(struct net_device *netdev)
return priv->channels.params.pflags;
}
-int mlx5e_ethtool_flash_device(struct mlx5e_priv *priv,
- struct ethtool_flash *flash)
-{
- struct mlx5_core_dev *mdev = priv->mdev;
- struct net_device *dev = priv->netdev;
- const struct firmware *fw;
- int err;
-
- if (flash->region != ETHTOOL_FLASH_ALL_REGIONS)
- return -EOPNOTSUPP;
-
- err = request_firmware_direct(&fw, flash->data, &dev->dev);
- if (err)
- return err;
-
- dev_hold(dev);
- rtnl_unlock();
-
- err = mlx5_firmware_flash(mdev, fw);
- release_firmware(fw);
-
- rtnl_lock();
- dev_put(dev);
- return err;
-}
-
-static int mlx5e_flash_device(struct net_device *dev,
- struct ethtool_flash *flash)
-{
- struct mlx5e_priv *priv = netdev_priv(dev);
-
- return mlx5e_ethtool_flash_device(priv, flash);
-}
-
#ifndef CONFIG_MLX5_EN_RXNFC
/* When CONFIG_MLX5_EN_RXNFC=n we only support ETHTOOL_GRXRINGS
* otherwise this function will be defined from en_fs_ethtool.c
@@ -1939,7 +1926,6 @@ const struct ethtool_ops mlx5e_ethtool_ops = {
#ifdef CONFIG_MLX5_EN_RXNFC
.set_rxnfc = mlx5e_set_rxnfc,
#endif
- .flash_device = mlx5e_flash_device,
.get_tunable = mlx5e_get_tunable,
.set_tunable = mlx5e_set_tunable,
.get_pauseparam = mlx5e_get_pauseparam,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c
index 4421c10f58ae..ea3a490b569a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c
@@ -32,6 +32,8 @@
#include <linux/mlx5/fs.h>
#include "en.h"
+#include "en/params.h"
+#include "en/xsk/umem.h"
struct mlx5e_ethtool_rule {
struct list_head list;
@@ -414,6 +416,14 @@ add_ethtool_flow_rule(struct mlx5e_priv *priv,
if (fs->ring_cookie == RX_CLS_FLOW_DISC) {
flow_act.action = MLX5_FLOW_CONTEXT_ACTION_DROP;
} else {
+ struct mlx5e_params *params = &priv->channels.params;
+ enum mlx5e_rq_group group;
+ struct mlx5e_tir *tir;
+ u16 ix;
+
+ mlx5e_qid_get_ch_and_group(params, fs->ring_cookie, &ix, &group);
+ tir = group == MLX5E_RQ_GROUP_XSK ? priv->xsk_tir : priv->direct_tir;
+
dst = kzalloc(sizeof(*dst), GFP_KERNEL);
if (!dst) {
err = -ENOMEM;
@@ -421,12 +431,12 @@ add_ethtool_flow_rule(struct mlx5e_priv *priv,
}
dst->type = MLX5_FLOW_DESTINATION_TYPE_TIR;
- dst->tir_num = priv->direct_tir[fs->ring_cookie].tirn;
+ dst->tir_num = tir[ix].tirn;
flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
}
spec->match_criteria_enable = (!outer_header_zero(spec->match_criteria));
- flow_act.flow_tag = MLX5_FS_DEFAULT_FLOW_TAG;
+ spec->flow_context.flow_tag = MLX5_FS_DEFAULT_FLOW_TAG;
rule = mlx5_add_flow_rules(ft, spec, &flow_act, dst, dst ? 1 : 0);
if (IS_ERR(rule)) {
err = PTR_ERR(rule);
@@ -600,9 +610,9 @@ static int validate_flow(struct mlx5e_priv *priv,
if (fs->location >= MAX_NUM_OF_ETHTOOL_RULES)
return -ENOSPC;
- if (fs->ring_cookie >= priv->channels.params.num_channels &&
- fs->ring_cookie != RX_CLS_FLOW_DISC)
- return -EINVAL;
+ if (fs->ring_cookie != RX_CLS_FLOW_DISC)
+ if (!mlx5e_qid_validate(&priv->channels.params, fs->ring_cookie))
+ return -EINVAL;
switch (fs->flow_type & ~(FLOW_EXT | FLOW_MAC_EXT)) {
case ETHER_FLOW:
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index a8e8350b38aa..6d0ae87c8ded 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -38,8 +38,10 @@
#include <linux/bpf.h>
#include <linux/if_bridge.h>
#include <net/page_pool.h>
+#include <net/xdp_sock.h>
#include "eswitch.h"
#include "en.h"
+#include "en/txrx.h"
#include "en_tc.h"
#include "en_rep.h"
#include "en_accel/ipsec.h"
@@ -56,35 +58,11 @@
#include "en/monitor_stats.h"
#include "en/reporter.h"
#include "en/params.h"
+#include "en/xsk/umem.h"
+#include "en/xsk/setup.h"
+#include "en/xsk/rx.h"
+#include "en/xsk/tx.h"
-struct mlx5e_rq_param {
- u32 rqc[MLX5_ST_SZ_DW(rqc)];
- struct mlx5_wq_param wq;
- struct mlx5e_rq_frags_info frags_info;
-};
-
-struct mlx5e_sq_param {
- u32 sqc[MLX5_ST_SZ_DW(sqc)];
- struct mlx5_wq_param wq;
- bool is_mpw;
-};
-
-struct mlx5e_cq_param {
- u32 cqc[MLX5_ST_SZ_DW(cqc)];
- struct mlx5_wq_param wq;
- u16 eq_ix;
- u8 cq_period_mode;
-};
-
-struct mlx5e_channel_param {
- struct mlx5e_rq_param rq;
- struct mlx5e_sq_param sq;
- struct mlx5e_sq_param xdp_sq;
- struct mlx5e_sq_param icosq;
- struct mlx5e_cq_param rx_cq;
- struct mlx5e_cq_param tx_cq;
- struct mlx5e_cq_param icosq_cq;
-};
bool mlx5e_check_fragmented_striding_rq_cap(struct mlx5_core_dev *mdev)
{
@@ -114,18 +92,31 @@ void mlx5e_init_rq_type_params(struct mlx5_core_dev *mdev,
mlx5_core_info(mdev, "MLX5E: StrdRq(%d) RqSz(%ld) StrdSz(%ld) RxCqeCmprss(%d)\n",
params->rq_wq_type == MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ,
params->rq_wq_type == MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ ?
- BIT(mlx5e_mpwqe_get_log_rq_size(params)) :
+ BIT(mlx5e_mpwqe_get_log_rq_size(params, NULL)) :
BIT(params->log_rq_mtu_frames),
- BIT(mlx5e_mpwqe_get_log_stride_size(mdev, params)),
+ BIT(mlx5e_mpwqe_get_log_stride_size(mdev, params, NULL)),
MLX5E_GET_PFLAG(params, MLX5E_PFLAG_RX_CQE_COMPRESS));
}
bool mlx5e_striding_rq_possible(struct mlx5_core_dev *mdev,
struct mlx5e_params *params)
{
- return mlx5e_check_fragmented_striding_rq_cap(mdev) &&
- !MLX5_IPSEC_DEV(mdev) &&
- !(params->xdp_prog && !mlx5e_rx_mpwqe_is_linear_skb(mdev, params));
+ if (!mlx5e_check_fragmented_striding_rq_cap(mdev))
+ return false;
+
+ if (MLX5_IPSEC_DEV(mdev))
+ return false;
+
+ if (params->xdp_prog) {
+ /* XSK params are not considered here. If striding RQ is in use,
+ * and an XSK is being opened, mlx5e_rx_mpwqe_is_linear_skb will
+ * be called with the known XSK params.
+ */
+ if (!mlx5e_rx_mpwqe_is_linear_skb(mdev, params, NULL))
+ return false;
+ }
+
+ return true;
}
void mlx5e_set_rq_type(struct mlx5_core_dev *mdev, struct mlx5e_params *params)
@@ -394,6 +385,8 @@ static void mlx5e_free_di_list(struct mlx5e_rq *rq)
static int mlx5e_alloc_rq(struct mlx5e_channel *c,
struct mlx5e_params *params,
+ struct mlx5e_xsk_param *xsk,
+ struct xdp_umem *umem,
struct mlx5e_rq_param *rqp,
struct mlx5e_rq *rq)
{
@@ -401,6 +394,8 @@ static int mlx5e_alloc_rq(struct mlx5e_channel *c,
struct mlx5_core_dev *mdev = c->mdev;
void *rqc = rqp->rqc;
void *rqc_wq = MLX5_ADDR_OF(rqc, rqc, wq);
+ u32 num_xsk_frames = 0;
+ u32 rq_xdp_ix;
u32 pool_size;
int wq_sz;
int err;
@@ -417,7 +412,13 @@ static int mlx5e_alloc_rq(struct mlx5e_channel *c,
rq->ix = c->ix;
rq->mdev = mdev;
rq->hw_mtu = MLX5E_SW2HW_MTU(params, params->sw_mtu);
- rq->stats = &c->priv->channel_stats[c->ix].rq;
+ rq->xdpsq = &c->rq_xdpsq;
+ rq->umem = umem;
+
+ if (rq->umem)
+ rq->stats = &c->priv->channel_stats[c->ix].xskrq;
+ else
+ rq->stats = &c->priv->channel_stats[c->ix].rq;
rq->xdp_prog = params->xdp_prog ? bpf_prog_inc(params->xdp_prog) : NULL;
if (IS_ERR(rq->xdp_prog)) {
@@ -426,12 +427,16 @@ static int mlx5e_alloc_rq(struct mlx5e_channel *c,
goto err_rq_wq_destroy;
}
- err = xdp_rxq_info_reg(&rq->xdp_rxq, rq->netdev, rq->ix);
+ rq_xdp_ix = rq->ix;
+ if (xsk)
+ rq_xdp_ix += params->num_channels * MLX5E_RQ_GROUP_XSK;
+ err = xdp_rxq_info_reg(&rq->xdp_rxq, rq->netdev, rq_xdp_ix);
if (err < 0)
goto err_rq_wq_destroy;
rq->buff.map_dir = rq->xdp_prog ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE;
- rq->buff.headroom = mlx5e_get_rq_headroom(mdev, params);
+ rq->buff.headroom = mlx5e_get_rq_headroom(mdev, params, xsk);
+ rq->buff.umem_headroom = xsk ? xsk->headroom : 0;
pool_size = 1 << params->log_rq_mtu_frames;
switch (rq->wq_type) {
@@ -445,7 +450,12 @@ static int mlx5e_alloc_rq(struct mlx5e_channel *c,
wq_sz = mlx5_wq_ll_get_size(&rq->mpwqe.wq);
- pool_size = MLX5_MPWRQ_PAGES_PER_WQE << mlx5e_mpwqe_get_log_rq_size(params);
+ if (xsk)
+ num_xsk_frames = wq_sz <<
+ mlx5e_mpwqe_get_log_num_strides(mdev, params, xsk);
+
+ pool_size = MLX5_MPWRQ_PAGES_PER_WQE <<
+ mlx5e_mpwqe_get_log_rq_size(params, xsk);
rq->post_wqes = mlx5e_post_rx_mpwqes;
rq->dealloc_wqe = mlx5e_dealloc_rx_mpwqe;
@@ -464,12 +474,15 @@ static int mlx5e_alloc_rq(struct mlx5e_channel *c,
goto err_rq_wq_destroy;
}
- rq->mpwqe.skb_from_cqe_mpwrq =
- mlx5e_rx_mpwqe_is_linear_skb(mdev, params) ?
- mlx5e_skb_from_cqe_mpwrq_linear :
- mlx5e_skb_from_cqe_mpwrq_nonlinear;
- rq->mpwqe.log_stride_sz = mlx5e_mpwqe_get_log_stride_size(mdev, params);
- rq->mpwqe.num_strides = BIT(mlx5e_mpwqe_get_log_num_strides(mdev, params));
+ rq->mpwqe.skb_from_cqe_mpwrq = xsk ?
+ mlx5e_xsk_skb_from_cqe_mpwrq_linear :
+ mlx5e_rx_mpwqe_is_linear_skb(mdev, params, NULL) ?
+ mlx5e_skb_from_cqe_mpwrq_linear :
+ mlx5e_skb_from_cqe_mpwrq_nonlinear;
+
+ rq->mpwqe.log_stride_sz = mlx5e_mpwqe_get_log_stride_size(mdev, params, xsk);
+ rq->mpwqe.num_strides =
+ BIT(mlx5e_mpwqe_get_log_num_strides(mdev, params, xsk));
err = mlx5e_create_rq_umr_mkey(mdev, rq);
if (err)
@@ -490,6 +503,9 @@ static int mlx5e_alloc_rq(struct mlx5e_channel *c,
wq_sz = mlx5_wq_cyc_get_size(&rq->wqe.wq);
+ if (xsk)
+ num_xsk_frames = wq_sz << rq->wqe.info.log_num_frags;
+
rq->wqe.info = rqp->frags_info;
rq->wqe.frags =
kvzalloc_node(array_size(sizeof(*rq->wqe.frags),
@@ -503,6 +519,7 @@ static int mlx5e_alloc_rq(struct mlx5e_channel *c,
err = mlx5e_init_di_list(rq, wq_sz, c->cpu);
if (err)
goto err_free;
+
rq->post_wqes = mlx5e_post_rx_wqes;
rq->dealloc_wqe = mlx5e_dealloc_rx_wqe;
@@ -518,33 +535,49 @@ static int mlx5e_alloc_rq(struct mlx5e_channel *c,
goto err_free;
}
- rq->wqe.skb_from_cqe = mlx5e_rx_is_linear_skb(params) ?
- mlx5e_skb_from_cqe_linear :
- mlx5e_skb_from_cqe_nonlinear;
+ rq->wqe.skb_from_cqe = xsk ?
+ mlx5e_xsk_skb_from_cqe_linear :
+ mlx5e_rx_is_linear_skb(params, NULL) ?
+ mlx5e_skb_from_cqe_linear :
+ mlx5e_skb_from_cqe_nonlinear;
rq->mkey_be = c->mkey_be;
}
- /* Create a page_pool and register it with rxq */
- pp_params.order = 0;
- pp_params.flags = 0; /* No-internal DMA mapping in page_pool */
- pp_params.pool_size = pool_size;
- pp_params.nid = cpu_to_node(c->cpu);
- pp_params.dev = c->pdev;
- pp_params.dma_dir = rq->buff.map_dir;
-
- /* page_pool can be used even when there is no rq->xdp_prog,
- * given page_pool does not handle DMA mapping there is no
- * required state to clear. And page_pool gracefully handle
- * elevated refcnt.
- */
- rq->page_pool = page_pool_create(&pp_params);
- if (IS_ERR(rq->page_pool)) {
- err = PTR_ERR(rq->page_pool);
- rq->page_pool = NULL;
- goto err_free;
+ if (xsk) {
+ err = mlx5e_xsk_resize_reuseq(umem, num_xsk_frames);
+ if (unlikely(err)) {
+ mlx5_core_err(mdev, "Unable to allocate the Reuse Ring for %u frames\n",
+ num_xsk_frames);
+ goto err_free;
+ }
+
+ rq->zca.free = mlx5e_xsk_zca_free;
+ err = xdp_rxq_info_reg_mem_model(&rq->xdp_rxq,
+ MEM_TYPE_ZERO_COPY,
+ &rq->zca);
+ } else {
+ /* Create a page_pool and register it with rxq */
+ pp_params.order = 0;
+ pp_params.flags = 0; /* No-internal DMA mapping in page_pool */
+ pp_params.pool_size = pool_size;
+ pp_params.nid = cpu_to_node(c->cpu);
+ pp_params.dev = c->pdev;
+ pp_params.dma_dir = rq->buff.map_dir;
+
+ /* page_pool can be used even when there is no rq->xdp_prog,
+ * given that page_pool does not handle DMA mapping, there is no
+ * required state to clear. And page_pool gracefully handles
+ * elevated refcnt.
+ */
+ rq->page_pool = page_pool_create(&pp_params);
+ if (IS_ERR(rq->page_pool)) {
+ err = PTR_ERR(rq->page_pool);
+ rq->page_pool = NULL;
+ goto err_free;
+ }
+ err = xdp_rxq_info_reg_mem_model(&rq->xdp_rxq,
+ MEM_TYPE_PAGE_POOL, rq->page_pool);
}
- err = xdp_rxq_info_reg_mem_model(&rq->xdp_rxq,
- MEM_TYPE_PAGE_POOL, rq->page_pool);
if (err)
goto err_free;
@@ -584,11 +617,11 @@ static int mlx5e_alloc_rq(struct mlx5e_channel *c,
switch (params->rx_cq_moderation.cq_period_mode) {
case MLX5_CQ_PERIOD_MODE_START_FROM_CQE:
- rq->dim.mode = NET_DIM_CQ_PERIOD_MODE_START_FROM_CQE;
+ rq->dim.mode = DIM_CQ_PERIOD_MODE_START_FROM_CQE;
break;
case MLX5_CQ_PERIOD_MODE_START_FROM_EQE:
default:
- rq->dim.mode = NET_DIM_CQ_PERIOD_MODE_START_FROM_EQE;
+ rq->dim.mode = DIM_CQ_PERIOD_MODE_START_FROM_EQE;
}
rq->page_cache.head = 0;
@@ -611,8 +644,7 @@ err_rq_wq_destroy:
if (rq->xdp_prog)
bpf_prog_put(rq->xdp_prog);
xdp_rxq_info_unreg(&rq->xdp_rxq);
- if (rq->page_pool)
- page_pool_destroy(rq->page_pool);
+ page_pool_destroy(rq->page_pool);
mlx5_wq_destroy(&rq->wq_ctrl);
return err;
@@ -625,10 +657,6 @@ static void mlx5e_free_rq(struct mlx5e_rq *rq)
if (rq->xdp_prog)
bpf_prog_put(rq->xdp_prog);
- xdp_rxq_info_unreg(&rq->xdp_rxq);
- if (rq->page_pool)
- page_pool_destroy(rq->page_pool);
-
switch (rq->wq_type) {
case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ:
kvfree(rq->mpwqe.info);
@@ -643,8 +671,15 @@ static void mlx5e_free_rq(struct mlx5e_rq *rq)
i = (i + 1) & (MLX5E_CACHE_SIZE - 1)) {
struct mlx5e_dma_info *dma_info = &rq->page_cache.page_cache[i];
- mlx5e_page_release(rq, dma_info, false);
+ /* With AF_XDP, page_cache is not used, so this loop is not
+ * entered, and it's safe to call mlx5e_page_release_dynamic
+ * directly.
+ */
+ mlx5e_page_release_dynamic(rq, dma_info, false);
}
+
+ xdp_rxq_info_unreg(&rq->xdp_rxq);
+ page_pool_destroy(rq->page_pool);
mlx5_wq_destroy(&rq->wq_ctrl);
}
@@ -778,7 +813,7 @@ static void mlx5e_destroy_rq(struct mlx5e_rq *rq)
mlx5_core_destroy_rq(rq->mdev, rq->rqn);
}
-static int mlx5e_wait_for_min_rx_wqes(struct mlx5e_rq *rq, int wait_time)
+int mlx5e_wait_for_min_rx_wqes(struct mlx5e_rq *rq, int wait_time)
{
unsigned long exp_time = jiffies + msecs_to_jiffies(wait_time);
struct mlx5e_channel *c = rq->channel;
@@ -836,14 +871,13 @@ static void mlx5e_free_rx_descs(struct mlx5e_rq *rq)
}
-static int mlx5e_open_rq(struct mlx5e_channel *c,
- struct mlx5e_params *params,
- struct mlx5e_rq_param *param,
- struct mlx5e_rq *rq)
+int mlx5e_open_rq(struct mlx5e_channel *c, struct mlx5e_params *params,
+ struct mlx5e_rq_param *param, struct mlx5e_xsk_param *xsk,
+ struct xdp_umem *umem, struct mlx5e_rq *rq)
{
int err;
- err = mlx5e_alloc_rq(c, params, param, rq);
+ err = mlx5e_alloc_rq(c, params, xsk, umem, param, rq);
if (err)
return err;
@@ -881,13 +915,13 @@ static void mlx5e_activate_rq(struct mlx5e_rq *rq)
mlx5e_trigger_irq(&rq->channel->icosq);
}
-static void mlx5e_deactivate_rq(struct mlx5e_rq *rq)
+void mlx5e_deactivate_rq(struct mlx5e_rq *rq)
{
clear_bit(MLX5E_RQ_STATE_ENABLED, &rq->state);
napi_synchronize(&rq->channel->napi); /* prevent mlx5e_post_rx_wqes */
}
-static void mlx5e_close_rq(struct mlx5e_rq *rq)
+void mlx5e_close_rq(struct mlx5e_rq *rq)
{
cancel_work_sync(&rq->dim.work);
mlx5e_destroy_rq(rq);
@@ -940,6 +974,7 @@ static int mlx5e_alloc_xdpsq_db(struct mlx5e_xdpsq *sq, int numa)
static int mlx5e_alloc_xdpsq(struct mlx5e_channel *c,
struct mlx5e_params *params,
+ struct xdp_umem *umem,
struct mlx5e_sq_param *param,
struct mlx5e_xdpsq *sq,
bool is_redirect)
@@ -955,9 +990,13 @@ static int mlx5e_alloc_xdpsq(struct mlx5e_channel *c,
sq->uar_map = mdev->mlx5e_res.bfreg.map;
sq->min_inline_mode = params->tx_min_inline_mode;
sq->hw_mtu = MLX5E_SW2HW_MTU(params, params->sw_mtu);
- sq->stats = is_redirect ?
- &c->priv->channel_stats[c->ix].xdpsq :
- &c->priv->channel_stats[c->ix].rq_xdpsq;
+ sq->umem = umem;
+
+ sq->stats = sq->umem ?
+ &c->priv->channel_stats[c->ix].xsksq :
+ is_redirect ?
+ &c->priv->channel_stats[c->ix].xdpsq :
+ &c->priv->channel_stats[c->ix].rq_xdpsq;
param->wq.db_numa_node = cpu_to_node(c->cpu);
err = mlx5_wq_cyc_create(mdev, &param->wq, sqc_wq, wq, &sq->wq_ctrl);
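The hunks above pick the per-channel stats group with nested conditionals: an RQ backed by an XSK umem counts into xskrq, and an XDP SQ counts into xsksq, xdpsq (redirect) or rq_xdpsq. A minimal standalone sketch of that selection, using local names rather than driver symbols:

#include <stdbool.h>
#include <stdio.h>

enum stats_group { STATS_RQ, STATS_XSKRQ, STATS_RQ_XDPSQ, STATS_XDPSQ, STATS_XSKSQ };

/* RQ counters: an XSK-backed RQ goes to the xskrq group. */
static enum stats_group pick_rq_stats(bool has_umem)
{
        return has_umem ? STATS_XSKRQ : STATS_RQ;
}

/* XDP SQ counters: umem wins, then redirect vs. RQ-attached XDP SQ. */
static enum stats_group pick_xdpsq_stats(bool has_umem, bool is_redirect)
{
        if (has_umem)
                return STATS_XSKSQ;
        return is_redirect ? STATS_XDPSQ : STATS_RQ_XDPSQ;
}

int main(void)
{
        printf("%d %d\n", pick_rq_stats(true), pick_xdpsq_stats(false, true));
        return 0;
}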
@@ -1087,11 +1126,14 @@ static int mlx5e_alloc_txqsq(struct mlx5e_channel *c,
sq->uar_map = mdev->mlx5e_res.bfreg.map;
sq->min_inline_mode = params->tx_min_inline_mode;
sq->stats = &c->priv->channel_stats[c->ix].sq[tc];
+ sq->stop_room = MLX5E_SQ_STOP_ROOM;
INIT_WORK(&sq->recover_work, mlx5e_tx_err_cqe_work);
if (MLX5_IPSEC_DEV(c->priv->mdev))
set_bit(MLX5E_SQ_STATE_IPSEC, &sq->state);
- if (mlx5_accel_is_tls_device(c->priv->mdev))
+ if (mlx5_accel_is_tls_device(c->priv->mdev)) {
set_bit(MLX5E_SQ_STATE_TLS, &sq->state);
+ sq->stop_room += MLX5E_SQ_TLS_ROOM;
+ }
param->wq.db_numa_node = cpu_to_node(c->cpu);
err = mlx5_wq_cyc_create(mdev, &param->wq, sqc_wq, wq, &sq->wq_ctrl);
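Here stop_room is the reserve of send-queue space kept free so a worst-case transmit still fits before the queue is stopped, and enabling TLS offload grows that reserve. A rough standalone sketch of the idea; the sizes are made up and do not reproduce MLX5E_SQ_STOP_ROOM or MLX5E_SQ_TLS_ROOM:

#include <stdbool.h>
#include <stdio.h>

#define SQ_BASE_STOP_ROOM 16 /* assumed: worst-case entries for one packet */
#define SQ_TLS_EXTRA_ROOM  8 /* assumed: extra entries reserved for TLS    */

static unsigned int sq_stop_room(bool tls_enabled)
{
        unsigned int room = SQ_BASE_STOP_ROOM;

        if (tls_enabled)
                room += SQ_TLS_EXTRA_ROOM; /* TLS resync needs more headroom */
        return room;
}

/* The queue keeps transmitting only while this stays true. */
static bool sq_has_room(unsigned int free_entries, bool tls_enabled)
{
        return free_entries >= sq_stop_room(tls_enabled);
}

int main(void)
{
        printf("%d\n", sq_has_room(20, true)); /* 20 < 24, prints 0 */
        return 0;
}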
@@ -1337,10 +1379,8 @@ static void mlx5e_tx_err_cqe_work(struct work_struct *recover_work)
mlx5e_tx_reporter_err_cqe(sq);
}
-static int mlx5e_open_icosq(struct mlx5e_channel *c,
- struct mlx5e_params *params,
- struct mlx5e_sq_param *param,
- struct mlx5e_icosq *sq)
+int mlx5e_open_icosq(struct mlx5e_channel *c, struct mlx5e_params *params,
+ struct mlx5e_sq_param *param, struct mlx5e_icosq *sq)
{
struct mlx5e_create_sq_param csp = {};
int err;
@@ -1366,7 +1406,7 @@ err_free_icosq:
return err;
}
-static void mlx5e_close_icosq(struct mlx5e_icosq *sq)
+void mlx5e_close_icosq(struct mlx5e_icosq *sq)
{
struct mlx5e_channel *c = sq->channel;
@@ -1377,16 +1417,14 @@ static void mlx5e_close_icosq(struct mlx5e_icosq *sq)
mlx5e_free_icosq(sq);
}
-static int mlx5e_open_xdpsq(struct mlx5e_channel *c,
- struct mlx5e_params *params,
- struct mlx5e_sq_param *param,
- struct mlx5e_xdpsq *sq,
- bool is_redirect)
+int mlx5e_open_xdpsq(struct mlx5e_channel *c, struct mlx5e_params *params,
+ struct mlx5e_sq_param *param, struct xdp_umem *umem,
+ struct mlx5e_xdpsq *sq, bool is_redirect)
{
struct mlx5e_create_sq_param csp = {};
int err;
- err = mlx5e_alloc_xdpsq(c, params, param, sq, is_redirect);
+ err = mlx5e_alloc_xdpsq(c, params, umem, param, sq, is_redirect);
if (err)
return err;
@@ -1440,7 +1478,7 @@ err_free_xdpsq:
return err;
}
-static void mlx5e_close_xdpsq(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq)
+void mlx5e_close_xdpsq(struct mlx5e_xdpsq *sq)
{
struct mlx5e_channel *c = sq->channel;
@@ -1448,7 +1486,7 @@ static void mlx5e_close_xdpsq(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq)
napi_synchronize(&c->napi);
mlx5e_destroy_sq(c->mdev, sq->sqn);
- mlx5e_free_xdpsq_descs(sq, rq);
+ mlx5e_free_xdpsq_descs(sq);
mlx5e_free_xdpsq(sq);
}
@@ -1518,6 +1556,7 @@ static void mlx5e_free_cq(struct mlx5e_cq *cq)
static int mlx5e_create_cq(struct mlx5e_cq *cq, struct mlx5e_cq_param *param)
{
+ u32 out[MLX5_ST_SZ_DW(create_cq_out)];
struct mlx5_core_dev *mdev = cq->mdev;
struct mlx5_core_cq *mcq = &cq->mcq;
@@ -1552,7 +1591,7 @@ static int mlx5e_create_cq(struct mlx5e_cq *cq, struct mlx5e_cq_param *param)
MLX5_ADAPTER_PAGE_SHIFT);
MLX5_SET64(cqc, cqc, dbr_addr, cq->wq_ctrl.db.dma);
- err = mlx5_core_create_cq(mdev, mcq, in, inlen);
+ err = mlx5_core_create_cq(mdev, mcq, in, inlen, out, sizeof(out));
kvfree(in);
@@ -1569,10 +1608,8 @@ static void mlx5e_destroy_cq(struct mlx5e_cq *cq)
mlx5_core_destroy_cq(cq->mdev, &cq->mcq);
}
-static int mlx5e_open_cq(struct mlx5e_channel *c,
- struct net_dim_cq_moder moder,
- struct mlx5e_cq_param *param,
- struct mlx5e_cq *cq)
+int mlx5e_open_cq(struct mlx5e_channel *c, struct dim_cq_moder moder,
+ struct mlx5e_cq_param *param, struct mlx5e_cq *cq)
{
struct mlx5_core_dev *mdev = c->mdev;
int err;
@@ -1595,7 +1632,7 @@ err_free_cq:
return err;
}
-static void mlx5e_close_cq(struct mlx5e_cq *cq)
+void mlx5e_close_cq(struct mlx5e_cq *cq)
{
mlx5e_destroy_cq(cq);
mlx5e_free_cq(cq);
@@ -1769,49 +1806,16 @@ static void mlx5e_free_xps_cpumask(struct mlx5e_channel *c)
free_cpumask_var(c->xps_cpumask);
}
-static int mlx5e_open_channel(struct mlx5e_priv *priv, int ix,
- struct mlx5e_params *params,
- struct mlx5e_channel_param *cparam,
- struct mlx5e_channel **cp)
+static int mlx5e_open_queues(struct mlx5e_channel *c,
+ struct mlx5e_params *params,
+ struct mlx5e_channel_param *cparam)
{
- int cpu = cpumask_first(mlx5_comp_irq_get_affinity_mask(priv->mdev, ix));
- struct net_dim_cq_moder icocq_moder = {0, 0};
- struct net_device *netdev = priv->netdev;
- struct mlx5e_channel *c;
- unsigned int irq;
+ struct dim_cq_moder icocq_moder = {0, 0};
int err;
- int eqn;
-
- err = mlx5_vector2eqn(priv->mdev, ix, &eqn, &irq);
- if (err)
- return err;
-
- c = kvzalloc_node(sizeof(*c), GFP_KERNEL, cpu_to_node(cpu));
- if (!c)
- return -ENOMEM;
-
- c->priv = priv;
- c->mdev = priv->mdev;
- c->tstamp = &priv->tstamp;
- c->ix = ix;
- c->cpu = cpu;
- c->pdev = priv->mdev->device;
- c->netdev = priv->netdev;
- c->mkey_be = cpu_to_be32(priv->mdev->mlx5e_res.mkey.key);
- c->num_tc = params->num_tc;
- c->xdp = !!params->xdp_prog;
- c->stats = &priv->channel_stats[ix].ch;
- c->irq_desc = irq_to_desc(irq);
-
- err = mlx5e_alloc_xps_cpumask(c, params);
- if (err)
- goto err_free_channel;
-
- netif_napi_add(netdev, &c->napi, mlx5e_napi_poll, 64);
err = mlx5e_open_cq(c, icocq_moder, &cparam->icosq_cq, &c->icosq.cq);
if (err)
- goto err_napi_del;
+ return err;
err = mlx5e_open_tx_cqs(c, params, cparam);
if (err)
@@ -1827,7 +1831,7 @@ static int mlx5e_open_channel(struct mlx5e_priv *priv, int ix,
/* XDP SQ CQ params are same as normal TXQ sq CQ params */
err = c->xdp ? mlx5e_open_cq(c, params->tx_cq_moderation,
- &cparam->tx_cq, &c->rq.xdpsq.cq) : 0;
+ &cparam->tx_cq, &c->rq_xdpsq.cq) : 0;
if (err)
goto err_close_rx_cq;
@@ -1841,20 +1845,21 @@ static int mlx5e_open_channel(struct mlx5e_priv *priv, int ix,
if (err)
goto err_close_icosq;
- err = c->xdp ? mlx5e_open_xdpsq(c, params, &cparam->xdp_sq, &c->rq.xdpsq, false) : 0;
- if (err)
- goto err_close_sqs;
+ if (c->xdp) {
+ err = mlx5e_open_xdpsq(c, params, &cparam->xdp_sq, NULL,
+ &c->rq_xdpsq, false);
+ if (err)
+ goto err_close_sqs;
+ }
- err = mlx5e_open_rq(c, params, &cparam->rq, &c->rq);
+ err = mlx5e_open_rq(c, params, &cparam->rq, NULL, NULL, &c->rq);
if (err)
goto err_close_xdp_sq;
- err = mlx5e_open_xdpsq(c, params, &cparam->xdp_sq, &c->xdpsq, true);
+ err = mlx5e_open_xdpsq(c, params, &cparam->xdp_sq, NULL, &c->xdpsq, true);
if (err)
goto err_close_rq;
- *cp = c;
-
return 0;
err_close_rq:
@@ -1862,7 +1867,7 @@ err_close_rq:
err_close_xdp_sq:
if (c->xdp)
- mlx5e_close_xdpsq(&c->rq.xdpsq, &c->rq);
+ mlx5e_close_xdpsq(&c->rq_xdpsq);
err_close_sqs:
mlx5e_close_sqs(c);
@@ -1872,8 +1877,9 @@ err_close_icosq:
err_disable_napi:
napi_disable(&c->napi);
+
if (c->xdp)
- mlx5e_close_cq(&c->rq.xdpsq.cq);
+ mlx5e_close_cq(&c->rq_xdpsq.cq);
err_close_rx_cq:
mlx5e_close_cq(&c->rq.cq);
@@ -1887,6 +1893,85 @@ err_close_tx_cqs:
err_close_icosq_cq:
mlx5e_close_cq(&c->icosq.cq);
+ return err;
+}
+
+static void mlx5e_close_queues(struct mlx5e_channel *c)
+{
+ mlx5e_close_xdpsq(&c->xdpsq);
+ mlx5e_close_rq(&c->rq);
+ if (c->xdp)
+ mlx5e_close_xdpsq(&c->rq_xdpsq);
+ mlx5e_close_sqs(c);
+ mlx5e_close_icosq(&c->icosq);
+ napi_disable(&c->napi);
+ if (c->xdp)
+ mlx5e_close_cq(&c->rq_xdpsq.cq);
+ mlx5e_close_cq(&c->rq.cq);
+ mlx5e_close_cq(&c->xdpsq.cq);
+ mlx5e_close_tx_cqs(c);
+ mlx5e_close_cq(&c->icosq.cq);
+}
+
+static int mlx5e_open_channel(struct mlx5e_priv *priv, int ix,
+ struct mlx5e_params *params,
+ struct mlx5e_channel_param *cparam,
+ struct xdp_umem *umem,
+ struct mlx5e_channel **cp)
+{
+ int cpu = cpumask_first(mlx5_comp_irq_get_affinity_mask(priv->mdev, ix));
+ struct net_device *netdev = priv->netdev;
+ struct mlx5e_xsk_param xsk;
+ struct mlx5e_channel *c;
+ unsigned int irq;
+ int err;
+ int eqn;
+
+ err = mlx5_vector2eqn(priv->mdev, ix, &eqn, &irq);
+ if (err)
+ return err;
+
+ c = kvzalloc_node(sizeof(*c), GFP_KERNEL, cpu_to_node(cpu));
+ if (!c)
+ return -ENOMEM;
+
+ c->priv = priv;
+ c->mdev = priv->mdev;
+ c->tstamp = &priv->tstamp;
+ c->ix = ix;
+ c->cpu = cpu;
+ c->pdev = priv->mdev->device;
+ c->netdev = priv->netdev;
+ c->mkey_be = cpu_to_be32(priv->mdev->mlx5e_res.mkey.key);
+ c->num_tc = params->num_tc;
+ c->xdp = !!params->xdp_prog;
+ c->stats = &priv->channel_stats[ix].ch;
+ c->irq_desc = irq_to_desc(irq);
+
+ err = mlx5e_alloc_xps_cpumask(c, params);
+ if (err)
+ goto err_free_channel;
+
+ netif_napi_add(netdev, &c->napi, mlx5e_napi_poll, 64);
+
+ err = mlx5e_open_queues(c, params, cparam);
+ if (unlikely(err))
+ goto err_napi_del;
+
+ if (umem) {
+ mlx5e_build_xsk_param(umem, &xsk);
+ err = mlx5e_open_xsk(priv, params, &xsk, umem, c);
+ if (unlikely(err))
+ goto err_close_queues;
+ }
+
+ *cp = c;
+
+ return 0;
+
+err_close_queues:
+ mlx5e_close_queues(c);
+
err_napi_del:
netif_napi_del(&c->napi);
mlx5e_free_xps_cpumask(c);
@@ -1905,12 +1990,18 @@ static void mlx5e_activate_channel(struct mlx5e_channel *c)
mlx5e_activate_txqsq(&c->sq[tc]);
mlx5e_activate_rq(&c->rq);
netif_set_xps_queue(c->netdev, c->xps_cpumask, c->ix);
+
+ if (test_bit(MLX5E_CHANNEL_STATE_XSK, c->state))
+ mlx5e_activate_xsk(c);
}
static void mlx5e_deactivate_channel(struct mlx5e_channel *c)
{
int tc;
+ if (test_bit(MLX5E_CHANNEL_STATE_XSK, c->state))
+ mlx5e_deactivate_xsk(c);
+
mlx5e_deactivate_rq(&c->rq);
for (tc = 0; tc < c->num_tc; tc++)
mlx5e_deactivate_txqsq(&c->sq[tc]);
@@ -1918,19 +2009,9 @@ static void mlx5e_deactivate_channel(struct mlx5e_channel *c)
static void mlx5e_close_channel(struct mlx5e_channel *c)
{
- mlx5e_close_xdpsq(&c->xdpsq, NULL);
- mlx5e_close_rq(&c->rq);
- if (c->xdp)
- mlx5e_close_xdpsq(&c->rq.xdpsq, &c->rq);
- mlx5e_close_sqs(c);
- mlx5e_close_icosq(&c->icosq);
- napi_disable(&c->napi);
- if (c->xdp)
- mlx5e_close_cq(&c->rq.xdpsq.cq);
- mlx5e_close_cq(&c->rq.cq);
- mlx5e_close_cq(&c->xdpsq.cq);
- mlx5e_close_tx_cqs(c);
- mlx5e_close_cq(&c->icosq.cq);
+ if (test_bit(MLX5E_CHANNEL_STATE_XSK, c->state))
+ mlx5e_close_xsk(c);
+ mlx5e_close_queues(c);
netif_napi_del(&c->napi);
mlx5e_free_xps_cpumask(c);
@@ -1941,6 +2022,7 @@ static void mlx5e_close_channel(struct mlx5e_channel *c)
static void mlx5e_build_rq_frags_info(struct mlx5_core_dev *mdev,
struct mlx5e_params *params,
+ struct mlx5e_xsk_param *xsk,
struct mlx5e_rq_frags_info *info)
{
u32 byte_count = MLX5E_SW2HW_MTU(params, params->sw_mtu);
@@ -1953,10 +2035,10 @@ static void mlx5e_build_rq_frags_info(struct mlx5_core_dev *mdev,
byte_count += MLX5E_METADATA_ETHER_LEN;
#endif
- if (mlx5e_rx_is_linear_skb(params)) {
+ if (mlx5e_rx_is_linear_skb(params, xsk)) {
int frag_stride;
- frag_stride = mlx5e_rx_get_linear_frag_sz(params);
+ frag_stride = mlx5e_rx_get_linear_frag_sz(params, xsk);
frag_stride = roundup_pow_of_two(frag_stride);
info->arr[0].frag_size = byte_count;
@@ -2014,9 +2096,10 @@ static u8 mlx5e_get_rq_log_wq_sz(void *rqc)
return MLX5_GET(wq, wq, log_wq_sz);
}
-static void mlx5e_build_rq_param(struct mlx5e_priv *priv,
- struct mlx5e_params *params,
- struct mlx5e_rq_param *param)
+void mlx5e_build_rq_param(struct mlx5e_priv *priv,
+ struct mlx5e_params *params,
+ struct mlx5e_xsk_param *xsk,
+ struct mlx5e_rq_param *param)
{
struct mlx5_core_dev *mdev = priv->mdev;
void *rqc = param->rqc;
@@ -2026,16 +2109,16 @@ static void mlx5e_build_rq_param(struct mlx5e_priv *priv,
switch (params->rq_wq_type) {
case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ:
MLX5_SET(wq, wq, log_wqe_num_of_strides,
- mlx5e_mpwqe_get_log_num_strides(mdev, params) -
+ mlx5e_mpwqe_get_log_num_strides(mdev, params, xsk) -
MLX5_MPWQE_LOG_NUM_STRIDES_BASE);
MLX5_SET(wq, wq, log_wqe_stride_size,
- mlx5e_mpwqe_get_log_stride_size(mdev, params) -
+ mlx5e_mpwqe_get_log_stride_size(mdev, params, xsk) -
MLX5_MPWQE_LOG_STRIDE_SZ_BASE);
- MLX5_SET(wq, wq, log_wq_sz, mlx5e_mpwqe_get_log_rq_size(params));
+ MLX5_SET(wq, wq, log_wq_sz, mlx5e_mpwqe_get_log_rq_size(params, xsk));
break;
default: /* MLX5_WQ_TYPE_CYCLIC */
MLX5_SET(wq, wq, log_wq_sz, params->log_rq_mtu_frames);
- mlx5e_build_rq_frags_info(mdev, params, &param->frags_info);
+ mlx5e_build_rq_frags_info(mdev, params, xsk, &param->frags_info);
ndsegs = param->frags_info.num_frags;
}
@@ -2066,8 +2149,8 @@ static void mlx5e_build_drop_rq_param(struct mlx5e_priv *priv,
param->wq.buf_numa_node = dev_to_node(mdev->device);
}
-static void mlx5e_build_sq_param_common(struct mlx5e_priv *priv,
- struct mlx5e_sq_param *param)
+void mlx5e_build_sq_param_common(struct mlx5e_priv *priv,
+ struct mlx5e_sq_param *param)
{
void *sqc = param->sqc;
void *wq = MLX5_ADDR_OF(sqc, sqc, wq);
@@ -2103,9 +2186,10 @@ static void mlx5e_build_common_cq_param(struct mlx5e_priv *priv,
MLX5_SET(cqc, cqc, cqe_sz, CQE_STRIDE_128_PAD);
}
-static void mlx5e_build_rx_cq_param(struct mlx5e_priv *priv,
- struct mlx5e_params *params,
- struct mlx5e_cq_param *param)
+void mlx5e_build_rx_cq_param(struct mlx5e_priv *priv,
+ struct mlx5e_params *params,
+ struct mlx5e_xsk_param *xsk,
+ struct mlx5e_cq_param *param)
{
struct mlx5_core_dev *mdev = priv->mdev;
void *cqc = param->cqc;
@@ -2113,8 +2197,8 @@ static void mlx5e_build_rx_cq_param(struct mlx5e_priv *priv,
switch (params->rq_wq_type) {
case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ:
- log_cq_size = mlx5e_mpwqe_get_log_rq_size(params) +
- mlx5e_mpwqe_get_log_num_strides(mdev, params);
+ log_cq_size = mlx5e_mpwqe_get_log_rq_size(params, xsk) +
+ mlx5e_mpwqe_get_log_num_strides(mdev, params, xsk);
break;
default: /* MLX5_WQ_TYPE_CYCLIC */
log_cq_size = params->log_rq_mtu_frames;
@@ -2130,9 +2214,9 @@ static void mlx5e_build_rx_cq_param(struct mlx5e_priv *priv,
param->cq_period_mode = params->rx_cq_moderation.cq_period_mode;
}
-static void mlx5e_build_tx_cq_param(struct mlx5e_priv *priv,
- struct mlx5e_params *params,
- struct mlx5e_cq_param *param)
+void mlx5e_build_tx_cq_param(struct mlx5e_priv *priv,
+ struct mlx5e_params *params,
+ struct mlx5e_cq_param *param)
{
void *cqc = param->cqc;
@@ -2142,9 +2226,9 @@ static void mlx5e_build_tx_cq_param(struct mlx5e_priv *priv,
param->cq_period_mode = params->tx_cq_moderation.cq_period_mode;
}
-static void mlx5e_build_ico_cq_param(struct mlx5e_priv *priv,
- u8 log_wq_size,
- struct mlx5e_cq_param *param)
+void mlx5e_build_ico_cq_param(struct mlx5e_priv *priv,
+ u8 log_wq_size,
+ struct mlx5e_cq_param *param)
{
void *cqc = param->cqc;
@@ -2152,12 +2236,12 @@ static void mlx5e_build_ico_cq_param(struct mlx5e_priv *priv,
mlx5e_build_common_cq_param(priv, param);
- param->cq_period_mode = NET_DIM_CQ_PERIOD_MODE_START_FROM_EQE;
+ param->cq_period_mode = DIM_CQ_PERIOD_MODE_START_FROM_EQE;
}
-static void mlx5e_build_icosq_param(struct mlx5e_priv *priv,
- u8 log_wq_size,
- struct mlx5e_sq_param *param)
+void mlx5e_build_icosq_param(struct mlx5e_priv *priv,
+ u8 log_wq_size,
+ struct mlx5e_sq_param *param)
{
void *sqc = param->sqc;
void *wq = MLX5_ADDR_OF(sqc, sqc, wq);
@@ -2168,9 +2252,9 @@ static void mlx5e_build_icosq_param(struct mlx5e_priv *priv,
MLX5_SET(sqc, sqc, reg_umr, MLX5_CAP_ETH(priv->mdev, reg_umr_sq));
}
-static void mlx5e_build_xdpsq_param(struct mlx5e_priv *priv,
- struct mlx5e_params *params,
- struct mlx5e_sq_param *param)
+void mlx5e_build_xdpsq_param(struct mlx5e_priv *priv,
+ struct mlx5e_params *params,
+ struct mlx5e_sq_param *param)
{
void *sqc = param->sqc;
void *wq = MLX5_ADDR_OF(sqc, sqc, wq);
@@ -2198,14 +2282,14 @@ static void mlx5e_build_channel_param(struct mlx5e_priv *priv,
{
u8 icosq_log_wq_sz;
- mlx5e_build_rq_param(priv, params, &cparam->rq);
+ mlx5e_build_rq_param(priv, params, NULL, &cparam->rq);
icosq_log_wq_sz = mlx5e_build_icosq_log_wq_sz(params, &cparam->rq);
mlx5e_build_sq_param(priv, params, &cparam->sq);
mlx5e_build_xdpsq_param(priv, params, &cparam->xdp_sq);
mlx5e_build_icosq_param(priv, icosq_log_wq_sz, &cparam->icosq);
- mlx5e_build_rx_cq_param(priv, params, &cparam->rx_cq);
+ mlx5e_build_rx_cq_param(priv, params, NULL, &cparam->rx_cq);
mlx5e_build_tx_cq_param(priv, params, &cparam->tx_cq);
mlx5e_build_ico_cq_param(priv, icosq_log_wq_sz, &cparam->icosq_cq);
}
@@ -2226,7 +2310,12 @@ int mlx5e_open_channels(struct mlx5e_priv *priv,
mlx5e_build_channel_param(priv, &chs->params, cparam);
for (i = 0; i < chs->num; i++) {
- err = mlx5e_open_channel(priv, i, &chs->params, cparam, &chs->c[i]);
+ struct xdp_umem *umem = NULL;
+
+ if (chs->params.xdp_prog)
+ umem = mlx5e_xsk_get_umem(&chs->params, chs->params.xsk, i);
+
+ err = mlx5e_open_channel(priv, i, &chs->params, cparam, umem, &chs->c[i]);
if (err)
goto err_close_channels;
}
@@ -2268,6 +2357,10 @@ static int mlx5e_wait_channels_min_rx_wqes(struct mlx5e_channels *chs)
int timeout = err ? 0 : MLX5E_RQ_WQES_TIMEOUT;
err |= mlx5e_wait_for_min_rx_wqes(&chs->c[i]->rq, timeout);
+
+ /* Don't wait on the XSK RQ, because the newer xdpsock sample
+ * doesn't provide any Fill Ring entries at the setup stage.
+ */
}
return err ? -ETIMEDOUT : 0;
@@ -2340,35 +2433,35 @@ int mlx5e_create_indirect_rqt(struct mlx5e_priv *priv)
return err;
}
-int mlx5e_create_direct_rqts(struct mlx5e_priv *priv)
+int mlx5e_create_direct_rqts(struct mlx5e_priv *priv, struct mlx5e_tir *tirs)
{
- struct mlx5e_rqt *rqt;
+ const int max_nch = mlx5e_get_netdev_max_channels(priv->netdev);
int err;
int ix;
- for (ix = 0; ix < mlx5e_get_netdev_max_channels(priv->netdev); ix++) {
- rqt = &priv->direct_tir[ix].rqt;
- err = mlx5e_create_rqt(priv, 1 /*size */, rqt);
- if (err)
+ for (ix = 0; ix < max_nch; ix++) {
+ err = mlx5e_create_rqt(priv, 1 /*size */, &tirs[ix].rqt);
+ if (unlikely(err))
goto err_destroy_rqts;
}
return 0;
err_destroy_rqts:
- mlx5_core_warn(priv->mdev, "create direct rqts failed, %d\n", err);
+ mlx5_core_warn(priv->mdev, "create rqts failed, %d\n", err);
for (ix--; ix >= 0; ix--)
- mlx5e_destroy_rqt(priv, &priv->direct_tir[ix].rqt);
+ mlx5e_destroy_rqt(priv, &tirs[ix].rqt);
return err;
}
-void mlx5e_destroy_direct_rqts(struct mlx5e_priv *priv)
+void mlx5e_destroy_direct_rqts(struct mlx5e_priv *priv, struct mlx5e_tir *tirs)
{
+ const int max_nch = mlx5e_get_netdev_max_channels(priv->netdev);
int i;
- for (i = 0; i < mlx5e_get_netdev_max_channels(priv->netdev); i++)
- mlx5e_destroy_rqt(priv, &priv->direct_tir[i].rqt);
+ for (i = 0; i < max_nch; i++)
+ mlx5e_destroy_rqt(priv, &tirs[i].rqt);
}
static int mlx5e_rx_hash_fn(int hfunc)
@@ -2788,11 +2881,12 @@ static void mlx5e_build_tx2sq_maps(struct mlx5e_priv *priv)
void mlx5e_activate_priv_channels(struct mlx5e_priv *priv)
{
int num_txqs = priv->channels.num * priv->channels.params.num_tc;
+ int num_rxqs = priv->channels.num * MLX5E_NUM_RQ_GROUPS;
struct net_device *netdev = priv->netdev;
mlx5e_netdev_set_tcs(netdev);
netif_set_real_num_tx_queues(netdev, num_txqs);
- netif_set_real_num_rx_queues(netdev, priv->channels.num);
+ netif_set_real_num_rx_queues(netdev, num_rxqs);
mlx5e_build_tx2sq_maps(priv);
mlx5e_activate_channels(&priv->channels);
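Together with the rq_xdp_ix offset added in mlx5e_alloc_rq above, this hunk sizes the netdev for one RX queue per channel per RQ group, so the XSK RQ of channel ix lands at ix + num_channels * group. A standalone sketch of that indexing, assuming the two groups (regular and XSK) that this series appears to introduce:

#include <stdio.h>

enum rq_group { RQ_GROUP_REGULAR, RQ_GROUP_XSK, RQ_NUM_GROUPS };

/* Netdev RX queue index of channel 'ix' in a given RQ group. */
static unsigned int rxq_index(unsigned int ix, unsigned int num_channels,
                              enum rq_group group)
{
        return ix + (unsigned int)group * num_channels;
}

int main(void)
{
        unsigned int num_channels = 8;

        printf("total rxqs: %u\n", num_channels * RQ_NUM_GROUPS);
        printf("xsk rxq of channel 3: %u\n",
               rxq_index(3, num_channels, RQ_GROUP_XSK)); /* prints 11 */
        return 0;
}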
@@ -2804,10 +2898,14 @@ void mlx5e_activate_priv_channels(struct mlx5e_priv *priv)
mlx5e_wait_channels_min_rx_wqes(&priv->channels);
mlx5e_redirect_rqts_to_channels(priv, &priv->channels);
+
+ mlx5e_xsk_redirect_rqts_to_channels(priv, &priv->channels);
}
void mlx5e_deactivate_priv_channels(struct mlx5e_priv *priv)
{
+ mlx5e_xsk_redirect_rqts_to_drop(priv, &priv->channels);
+
mlx5e_redirect_rqts_to_drop(priv);
if (mlx5e_is_vport_rep(priv))
@@ -2847,7 +2945,7 @@ static void mlx5e_switch_priv_channels(struct mlx5e_priv *priv,
if (hw_modify)
hw_modify(priv);
- mlx5e_refresh_tirs(priv, false);
+ priv->profile->update_rx(priv);
mlx5e_activate_priv_channels(priv);
/* return carrier back if needed */
@@ -2886,15 +2984,18 @@ void mlx5e_timestamp_init(struct mlx5e_priv *priv)
int mlx5e_open_locked(struct net_device *netdev)
{
struct mlx5e_priv *priv = netdev_priv(netdev);
+ bool is_xdp = priv->channels.params.xdp_prog;
int err;
set_bit(MLX5E_STATE_OPENED, &priv->state);
+ if (is_xdp)
+ mlx5e_xdp_set_open(priv);
err = mlx5e_open_channels(priv, &priv->channels);
if (err)
goto err_clear_state_opened_flag;
- mlx5e_refresh_tirs(priv, false);
+ priv->profile->update_rx(priv);
mlx5e_activate_priv_channels(priv);
if (priv->profile->update_carrier)
priv->profile->update_carrier(priv);
@@ -2903,6 +3004,8 @@ int mlx5e_open_locked(struct net_device *netdev)
return 0;
err_clear_state_opened_flag:
+ if (is_xdp)
+ mlx5e_xdp_set_closed(priv);
clear_bit(MLX5E_STATE_OPENED, &priv->state);
return err;
}
@@ -2934,6 +3037,8 @@ int mlx5e_close_locked(struct net_device *netdev)
if (!test_bit(MLX5E_STATE_OPENED, &priv->state))
return 0;
+ if (priv->channels.params.xdp_prog)
+ mlx5e_xdp_set_closed(priv);
clear_bit(MLX5E_STATE_OPENED, &priv->state);
netif_carrier_off(priv->netdev);
@@ -3045,20 +3150,19 @@ void mlx5e_close_drop_rq(struct mlx5e_rq *drop_rq)
mlx5e_free_cq(&drop_rq->cq);
}
-int mlx5e_create_tis(struct mlx5_core_dev *mdev, int tc,
- u32 underlay_qpn, u32 *tisn)
+int mlx5e_create_tis(struct mlx5_core_dev *mdev, void *in, u32 *tisn)
{
- u32 in[MLX5_ST_SZ_DW(create_tis_in)] = {0};
void *tisc = MLX5_ADDR_OF(create_tis_in, in, ctx);
- MLX5_SET(tisc, tisc, prio, tc << 1);
- MLX5_SET(tisc, tisc, underlay_qpn, underlay_qpn);
MLX5_SET(tisc, tisc, transport_domain, mdev->mlx5e_res.td.tdn);
+ if (MLX5_GET(tisc, tisc, tls_en))
+ MLX5_SET(tisc, tisc, pd, mdev->mlx5e_res.pdn);
+
if (mlx5_lag_is_lacp_owner(mdev))
MLX5_SET(tisc, tisc, strict_lag_tx_port_affinity, 1);
- return mlx5_core_create_tis(mdev, in, sizeof(in), tisn);
+ return mlx5_core_create_tis(mdev, in, MLX5_ST_SZ_BYTES(create_tis_in), tisn);
}
void mlx5e_destroy_tis(struct mlx5_core_dev *mdev, u32 tisn)
@@ -3072,7 +3176,14 @@ int mlx5e_create_tises(struct mlx5e_priv *priv)
int tc;
for (tc = 0; tc < priv->profile->max_tc; tc++) {
- err = mlx5e_create_tis(priv->mdev, tc, 0, &priv->tisn[tc]);
+ u32 in[MLX5_ST_SZ_DW(create_tis_in)] = {};
+ void *tisc;
+
+ tisc = MLX5_ADDR_OF(create_tis_in, in, ctx);
+
+ MLX5_SET(tisc, tisc, prio, tc << 1);
+
+ err = mlx5e_create_tis(priv->mdev, in, &priv->tisn[tc]);
if (err)
goto err_close_tises;
}
@@ -3190,13 +3301,13 @@ err_destroy_inner_tirs:
return err;
}
-int mlx5e_create_direct_tirs(struct mlx5e_priv *priv)
+int mlx5e_create_direct_tirs(struct mlx5e_priv *priv, struct mlx5e_tir *tirs)
{
- int nch = mlx5e_get_netdev_max_channels(priv->netdev);
+ const int max_nch = mlx5e_get_netdev_max_channels(priv->netdev);
struct mlx5e_tir *tir;
void *tirc;
int inlen;
- int err;
+ int err = 0;
u32 *in;
int ix;
@@ -3205,25 +3316,24 @@ int mlx5e_create_direct_tirs(struct mlx5e_priv *priv)
if (!in)
return -ENOMEM;
- for (ix = 0; ix < nch; ix++) {
+ for (ix = 0; ix < max_nch; ix++) {
memset(in, 0, inlen);
- tir = &priv->direct_tir[ix];
+ tir = &tirs[ix];
tirc = MLX5_ADDR_OF(create_tir_in, in, ctx);
- mlx5e_build_direct_tir_ctx(priv, priv->direct_tir[ix].rqt.rqtn, tirc);
+ mlx5e_build_direct_tir_ctx(priv, tir->rqt.rqtn, tirc);
err = mlx5e_create_tir(priv->mdev, tir, in, inlen);
- if (err)
+ if (unlikely(err))
goto err_destroy_ch_tirs;
}
- kvfree(in);
-
- return 0;
+ goto out;
err_destroy_ch_tirs:
- mlx5_core_warn(priv->mdev, "create direct tirs failed, %d\n", err);
+ mlx5_core_warn(priv->mdev, "create tirs failed, %d\n", err);
for (ix--; ix >= 0; ix--)
- mlx5e_destroy_tir(priv->mdev, &priv->direct_tir[ix]);
+ mlx5e_destroy_tir(priv->mdev, &tirs[ix]);
+out:
kvfree(in);
return err;
@@ -3243,13 +3353,13 @@ void mlx5e_destroy_indirect_tirs(struct mlx5e_priv *priv, bool inner_ttc)
mlx5e_destroy_tir(priv->mdev, &priv->inner_indir_tir[i]);
}
-void mlx5e_destroy_direct_tirs(struct mlx5e_priv *priv)
+void mlx5e_destroy_direct_tirs(struct mlx5e_priv *priv, struct mlx5e_tir *tirs)
{
- int nch = mlx5e_get_netdev_max_channels(priv->netdev);
+ const int max_nch = mlx5e_get_netdev_max_channels(priv->netdev);
int i;
- for (i = 0; i < nch; i++)
- mlx5e_destroy_tir(priv->mdev, &priv->direct_tir[i]);
+ for (i = 0; i < max_nch; i++)
+ mlx5e_destroy_tir(priv->mdev, &tirs[i]);
}
static int mlx5e_modify_channels_scatter_fcs(struct mlx5e_channels *chs, bool enable)
@@ -3316,17 +3426,17 @@ out:
#ifdef CONFIG_MLX5_ESWITCH
static int mlx5e_setup_tc_cls_flower(struct mlx5e_priv *priv,
- struct tc_cls_flower_offload *cls_flower,
+ struct flow_cls_offload *cls_flower,
int flags)
{
switch (cls_flower->command) {
- case TC_CLSFLOWER_REPLACE:
+ case FLOW_CLS_REPLACE:
return mlx5e_configure_flower(priv->netdev, priv, cls_flower,
flags);
- case TC_CLSFLOWER_DESTROY:
+ case FLOW_CLS_DESTROY:
return mlx5e_delete_flower(priv->netdev, priv, cls_flower,
flags);
- case TC_CLSFLOWER_STATS:
+ case FLOW_CLS_STATS:
return mlx5e_stats_flower(priv->netdev, priv, cls_flower,
flags);
default:
@@ -3347,36 +3457,22 @@ static int mlx5e_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
return -EOPNOTSUPP;
}
}
-
-static int mlx5e_setup_tc_block(struct net_device *dev,
- struct tc_block_offload *f)
-{
- struct mlx5e_priv *priv = netdev_priv(dev);
-
- if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
- return -EOPNOTSUPP;
-
- switch (f->command) {
- case TC_BLOCK_BIND:
- return tcf_block_cb_register(f->block, mlx5e_setup_tc_block_cb,
- priv, priv, f->extack);
- case TC_BLOCK_UNBIND:
- tcf_block_cb_unregister(f->block, mlx5e_setup_tc_block_cb,
- priv);
- return 0;
- default:
- return -EOPNOTSUPP;
- }
-}
#endif
+static LIST_HEAD(mlx5e_block_cb_list);
+
static int mlx5e_setup_tc(struct net_device *dev, enum tc_setup_type type,
void *type_data)
{
+ struct mlx5e_priv *priv = netdev_priv(dev);
+
switch (type) {
#ifdef CONFIG_MLX5_ESWITCH
case TC_SETUP_BLOCK:
- return mlx5e_setup_tc_block(dev, type_data);
+ return flow_block_cb_setup_simple(type_data,
+ &mlx5e_block_cb_list,
+ mlx5e_setup_tc_block_cb,
+ priv, priv, true);
#endif
case TC_SETUP_QDISC_MQPRIO:
return mlx5e_setup_tc_mqprio(dev, type_data);
@@ -3391,11 +3487,12 @@ void mlx5e_fold_sw_stats64(struct mlx5e_priv *priv, struct rtnl_link_stats64 *s)
for (i = 0; i < mlx5e_get_netdev_max_channels(priv->netdev); i++) {
struct mlx5e_channel_stats *channel_stats = &priv->channel_stats[i];
+ struct mlx5e_rq_stats *xskrq_stats = &channel_stats->xskrq;
struct mlx5e_rq_stats *rq_stats = &channel_stats->rq;
int j;
- s->rx_packets += rq_stats->packets;
- s->rx_bytes += rq_stats->bytes;
+ s->rx_packets += rq_stats->packets + xskrq_stats->packets;
+ s->rx_bytes += rq_stats->bytes + xskrq_stats->bytes;
for (j = 0; j < priv->max_opened_tc; j++) {
struct mlx5e_sq_stats *sq_stats = &channel_stats->sq[j];
@@ -3494,6 +3591,13 @@ static int set_feature_lro(struct net_device *netdev, bool enable)
mutex_lock(&priv->state_lock);
+ if (enable && priv->xsk.refcnt) {
+ netdev_warn(netdev, "LRO is incompatible with AF_XDP (%hu XSKs are active)\n",
+ priv->xsk.refcnt);
+ err = -EINVAL;
+ goto out;
+ }
+
old_params = &priv->channels.params;
if (enable && !MLX5E_GET_PFLAG(old_params, MLX5E_PFLAG_RX_STRIDING_RQ)) {
netdev_warn(netdev, "can't set LRO with legacy RQ\n");
@@ -3507,8 +3611,8 @@ static int set_feature_lro(struct net_device *netdev, bool enable)
new_channels.params.lro_en = enable;
if (old_params->rq_wq_type != MLX5_WQ_TYPE_CYCLIC) {
- if (mlx5e_rx_mpwqe_is_linear_skb(mdev, old_params) ==
- mlx5e_rx_mpwqe_is_linear_skb(mdev, &new_channels.params))
+ if (mlx5e_rx_mpwqe_is_linear_skb(mdev, old_params, NULL) ==
+ mlx5e_rx_mpwqe_is_linear_skb(mdev, &new_channels.params, NULL))
reset = false;
}
@@ -3698,6 +3802,43 @@ static netdev_features_t mlx5e_fix_features(struct net_device *netdev,
return features;
}
+static bool mlx5e_xsk_validate_mtu(struct net_device *netdev,
+ struct mlx5e_channels *chs,
+ struct mlx5e_params *new_params,
+ struct mlx5_core_dev *mdev)
+{
+ u16 ix;
+
+ for (ix = 0; ix < chs->params.num_channels; ix++) {
+ struct xdp_umem *umem = mlx5e_xsk_get_umem(&chs->params, chs->params.xsk, ix);
+ struct mlx5e_xsk_param xsk;
+
+ if (!umem)
+ continue;
+
+ mlx5e_build_xsk_param(umem, &xsk);
+
+ if (!mlx5e_validate_xsk_param(new_params, &xsk, mdev)) {
+ u32 hr = mlx5e_get_linear_rq_headroom(new_params, &xsk);
+ int max_mtu_frame, max_mtu_page, max_mtu;
+
+ /* Two criteria must be met:
+ * 1. HW MTU + all headrooms <= XSK frame size.
+ * 2. Size of SKBs allocated on XDP_PASS <= PAGE_SIZE.
+ */
+ max_mtu_frame = MLX5E_HW2SW_MTU(new_params, xsk.chunk_size - hr);
+ max_mtu_page = mlx5e_xdp_max_mtu(new_params, &xsk);
+ max_mtu = min(max_mtu_frame, max_mtu_page);
+
+ netdev_err(netdev, "MTU %d is too big for an XSK running on channel %hu. Try MTU <= %d\n",
+ new_params->sw_mtu, ix, max_mtu);
+ return false;
+ }
+ }
+
+ return true;
+}
+
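The two criteria in mlx5e_xsk_validate_mtu reduce to simple arithmetic: the MTU plus every headroom must fit in one XSK frame, and an skb built on XDP_PASS must still fit in one page. A standalone sketch of that bound; all sizes below are illustrative assumptions, not the driver's MLX5E_HW2SW_MTU or mlx5e_xdp_max_mtu math:

#include <stdio.h>

#define PAGE_SZ         4096 /* assumed page size                          */
#define ETH_OVERHEAD      18 /* assumed L2 header + FCS, illustration only */
#define SKB_OVERHEAD     320 /* assumed skb_shared_info footprint          */

static int min_int(int a, int b) { return a < b ? a : b; }

int main(void)
{
        int chunk_size = 2048; /* XSK frame (chunk) size        */
        int headroom   = 256;  /* XDP headroom + umem headroom  */

        /* 1. HW MTU + all headrooms must fit in one XSK frame. */
        int max_mtu_frame = chunk_size - headroom - ETH_OVERHEAD;
        /* 2. An skb built on XDP_PASS must still fit in one page. */
        int max_mtu_page = PAGE_SZ - headroom - SKB_OVERHEAD - ETH_OVERHEAD;

        printf("max MTU = %d\n", min_int(max_mtu_frame, max_mtu_page));
        return 0;
}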
int mlx5e_change_mtu(struct net_device *netdev, int new_mtu,
change_hw_mtu_cb set_mtu_cb)
{
@@ -3718,18 +3859,31 @@ int mlx5e_change_mtu(struct net_device *netdev, int new_mtu,
new_channels.params.sw_mtu = new_mtu;
if (params->xdp_prog &&
- !mlx5e_rx_is_linear_skb(&new_channels.params)) {
+ !mlx5e_rx_is_linear_skb(&new_channels.params, NULL)) {
netdev_err(netdev, "MTU(%d) > %d is not allowed while XDP enabled\n",
- new_mtu, mlx5e_xdp_max_mtu(params));
+ new_mtu, mlx5e_xdp_max_mtu(params, NULL));
+ err = -EINVAL;
+ goto out;
+ }
+
+ if (priv->xsk.refcnt &&
+ !mlx5e_xsk_validate_mtu(netdev, &priv->channels,
+ &new_channels.params, priv->mdev)) {
err = -EINVAL;
goto out;
}
if (params->rq_wq_type == MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ) {
- bool is_linear = mlx5e_rx_mpwqe_is_linear_skb(priv->mdev, &new_channels.params);
- u8 ppw_old = mlx5e_mpwqe_log_pkts_per_wqe(params);
- u8 ppw_new = mlx5e_mpwqe_log_pkts_per_wqe(&new_channels.params);
+ bool is_linear = mlx5e_rx_mpwqe_is_linear_skb(priv->mdev,
+ &new_channels.params,
+ NULL);
+ u8 ppw_old = mlx5e_mpwqe_log_pkts_per_wqe(params, NULL);
+ u8 ppw_new = mlx5e_mpwqe_log_pkts_per_wqe(&new_channels.params, NULL);
+
+ /* If XSK is active, XSK RQs are linear. */
+ is_linear |= priv->xsk.refcnt;
+ /* Always reset in linear mode - hw_mtu is used in data path. */
reset = reset && (is_linear || (ppw_old != ppw_new));
}
@@ -4162,16 +4316,29 @@ static int mlx5e_xdp_allowed(struct mlx5e_priv *priv, struct bpf_prog *prog)
new_channels.params = priv->channels.params;
new_channels.params.xdp_prog = prog;
- if (!mlx5e_rx_is_linear_skb(&new_channels.params)) {
+ /* No XSK params: AF_XDP can't be enabled yet at the point of setting
+ * the XDP program.
+ */
+ if (!mlx5e_rx_is_linear_skb(&new_channels.params, NULL)) {
netdev_warn(netdev, "XDP is not allowed with MTU(%d) > %d\n",
new_channels.params.sw_mtu,
- mlx5e_xdp_max_mtu(&new_channels.params));
+ mlx5e_xdp_max_mtu(&new_channels.params, NULL));
return -EINVAL;
}
return 0;
}
+static int mlx5e_xdp_update_state(struct mlx5e_priv *priv)
+{
+ if (priv->channels.params.xdp_prog)
+ mlx5e_xdp_set_open(priv);
+ else
+ mlx5e_xdp_set_closed(priv);
+
+ return 0;
+}
+
static int mlx5e_xdp_set(struct net_device *netdev, struct bpf_prog *prog)
{
struct mlx5e_priv *priv = netdev_priv(netdev);
@@ -4192,8 +4359,6 @@ static int mlx5e_xdp_set(struct net_device *netdev, struct bpf_prog *prog)
/* no need for full reset when exchanging programs */
reset = (!priv->channels.params.xdp_prog || !prog);
- if (was_opened && reset)
- mlx5e_close_locked(netdev);
if (was_opened && !reset) {
/* num_channels is invariant here, so we can take the
* batched reference right upfront.
@@ -4205,20 +4370,31 @@ static int mlx5e_xdp_set(struct net_device *netdev, struct bpf_prog *prog)
}
}
- /* exchange programs, extra prog reference we got from caller
- * as long as we don't fail from this point onwards.
- */
- old_prog = xchg(&priv->channels.params.xdp_prog, prog);
+ if (was_opened && reset) {
+ struct mlx5e_channels new_channels = {};
+
+ new_channels.params = priv->channels.params;
+ new_channels.params.xdp_prog = prog;
+ mlx5e_set_rq_type(priv->mdev, &new_channels.params);
+ old_prog = priv->channels.params.xdp_prog;
+
+ err = mlx5e_safe_switch_channels(priv, &new_channels, mlx5e_xdp_update_state);
+ if (err)
+ goto unlock;
+ } else {
+ /* exchange programs, extra prog reference we got from caller
+ * as long as we don't fail from this point onwards.
+ */
+ old_prog = xchg(&priv->channels.params.xdp_prog, prog);
+ }
+
if (old_prog)
bpf_prog_put(old_prog);
- if (reset) /* change RQ type according to priv->xdp_prog */
+ if (!was_opened && reset) /* change RQ type according to priv->xdp_prog */
mlx5e_set_rq_type(priv->mdev, &priv->channels.params);
- if (was_opened && reset)
- err = mlx5e_open_locked(netdev);
-
- if (!test_bit(MLX5E_STATE_OPENED, &priv->state) || reset)
+ if (!was_opened || reset)
goto unlock;
/* exchanging programs w/o reset, we update ref counts on behalf
@@ -4226,19 +4402,29 @@ static int mlx5e_xdp_set(struct net_device *netdev, struct bpf_prog *prog)
*/
for (i = 0; i < priv->channels.num; i++) {
struct mlx5e_channel *c = priv->channels.c[i];
+ bool xsk_open = test_bit(MLX5E_CHANNEL_STATE_XSK, c->state);
clear_bit(MLX5E_RQ_STATE_ENABLED, &c->rq.state);
+ if (xsk_open)
+ clear_bit(MLX5E_RQ_STATE_ENABLED, &c->xskrq.state);
napi_synchronize(&c->napi);
/* prevent mlx5e_poll_rx_cq from accessing rq->xdp_prog */
old_prog = xchg(&c->rq.xdp_prog, prog);
+ if (old_prog)
+ bpf_prog_put(old_prog);
+
+ if (xsk_open) {
+ old_prog = xchg(&c->xskrq.xdp_prog, prog);
+ if (old_prog)
+ bpf_prog_put(old_prog);
+ }
set_bit(MLX5E_RQ_STATE_ENABLED, &c->rq.state);
+ if (xsk_open)
+ set_bit(MLX5E_RQ_STATE_ENABLED, &c->xskrq.state);
/* napi_schedule in case we have missed anything */
napi_schedule(&c->napi);
-
- if (old_prog)
- bpf_prog_put(old_prog);
}
unlock:
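The no-reset path above quiesces each RQ (clear the ENABLED bit, napi_synchronize), swaps the program pointer with xchg, drops the old reference, and re-enables. A userspace model of that hand-over using C11 atomics; it mirrors the sequence only in spirit and is not kernel code:

#include <stdatomic.h>
#include <stdlib.h>

struct prog {
        atomic_int refcnt;
};

static void prog_put(struct prog *p)
{
        if (p && atomic_fetch_sub(&p->refcnt, 1) == 1)
                free(p); /* last reference dropped */
}

struct rxq {
        atomic_bool enabled;
        _Atomic(struct prog *) prog;
};

static void rxq_swap_prog(struct rxq *q, struct prog *new_prog)
{
        struct prog *old;

        atomic_store(&q->enabled, false); /* like clearing the ENABLED bit      */
        /* the kernel waits here for in-flight polling (napi_synchronize)       */
        old = atomic_exchange(&q->prog, new_prog);
        prog_put(old);                    /* release the replaced program       */
        atomic_store(&q->enabled, true);  /* resume and reschedule the poll     */
}

int main(void)
{
        struct prog *a = calloc(1, sizeof(*a));
        struct prog *b = calloc(1, sizeof(*b));
        struct rxq q;

        atomic_init(&a->refcnt, 1);
        atomic_init(&b->refcnt, 1);
        atomic_init(&q.enabled, true);
        atomic_init(&q.prog, a);
        rxq_swap_prog(&q, b); /* 'a' is released, 'b' is now attached */
        return 0;
}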
@@ -4269,6 +4455,9 @@ static int mlx5e_xdp(struct net_device *dev, struct netdev_bpf *xdp)
case XDP_QUERY_PROG:
xdp->prog_id = mlx5e_xdp_query(dev);
return 0;
+ case XDP_SETUP_XSK_UMEM:
+ return mlx5e_xsk_setup_umem(dev, xdp->xsk.umem,
+ xdp->xsk.queue_id);
default:
return -EINVAL;
}
@@ -4351,6 +4540,7 @@ const struct net_device_ops mlx5e_netdev_ops = {
.ndo_tx_timeout = mlx5e_tx_timeout,
.ndo_bpf = mlx5e_xdp,
.ndo_xdp_xmit = mlx5e_xdp_xmit,
+ .ndo_xsk_async_xmit = mlx5e_xsk_async_xmit,
#ifdef CONFIG_MLX5_EN_ARFS
.ndo_rx_flow_steer = mlx5e_rx_flow_steer,
#endif
@@ -4420,9 +4610,9 @@ static bool slow_pci_heuristic(struct mlx5_core_dev *mdev)
link_speed > MLX5E_SLOW_PCI_RATIO * pci_bw;
}
-static struct net_dim_cq_moder mlx5e_get_def_tx_moderation(u8 cq_period_mode)
+static struct dim_cq_moder mlx5e_get_def_tx_moderation(u8 cq_period_mode)
{
- struct net_dim_cq_moder moder;
+ struct dim_cq_moder moder;
moder.cq_period_mode = cq_period_mode;
moder.pkts = MLX5E_PARAMS_DEFAULT_TX_CQ_MODERATION_PKTS;
@@ -4433,9 +4623,9 @@ static struct net_dim_cq_moder mlx5e_get_def_tx_moderation(u8 cq_period_mode)
return moder;
}
-static struct net_dim_cq_moder mlx5e_get_def_rx_moderation(u8 cq_period_mode)
+static struct dim_cq_moder mlx5e_get_def_rx_moderation(u8 cq_period_mode)
{
- struct net_dim_cq_moder moder;
+ struct dim_cq_moder moder;
moder.cq_period_mode = cq_period_mode;
moder.pkts = MLX5E_PARAMS_DEFAULT_RX_CQ_MODERATION_PKTS;
@@ -4449,8 +4639,8 @@ static struct net_dim_cq_moder mlx5e_get_def_rx_moderation(u8 cq_period_mode)
static u8 mlx5_to_net_dim_cq_period_mode(u8 cq_period_mode)
{
return cq_period_mode == MLX5_CQ_PERIOD_MODE_START_FROM_CQE ?
- NET_DIM_CQ_PERIOD_MODE_START_FROM_CQE :
- NET_DIM_CQ_PERIOD_MODE_START_FROM_EQE;
+ DIM_CQ_PERIOD_MODE_START_FROM_CQE :
+ DIM_CQ_PERIOD_MODE_START_FROM_EQE;
}
void mlx5e_set_tx_cq_mode_params(struct mlx5e_params *params, u8 cq_period_mode)
@@ -4502,11 +4692,13 @@ void mlx5e_build_rq_params(struct mlx5_core_dev *mdev,
* - Striding RQ configuration is not possible/supported.
* - Slow PCI heuristic.
* - Legacy RQ would use linear SKB while Striding RQ would use non-linear.
+ *
+ * No XSK params: checking the availability of striding RQ in general.
*/
if (!slow_pci_heuristic(mdev) &&
mlx5e_striding_rq_possible(mdev, params) &&
- (mlx5e_rx_mpwqe_is_linear_skb(mdev, params) ||
- !mlx5e_rx_is_linear_skb(params)))
+ (mlx5e_rx_mpwqe_is_linear_skb(mdev, params, NULL) ||
+ !mlx5e_rx_is_linear_skb(params, NULL)))
MLX5E_SET_PFLAG(params, MLX5E_PFLAG_RX_STRIDING_RQ, true);
mlx5e_set_rq_type(mdev, params);
mlx5e_init_rq_type_params(mdev, params);
@@ -4528,6 +4720,7 @@ void mlx5e_build_rss_params(struct mlx5e_rss_params *rss_params,
}
void mlx5e_build_nic_params(struct mlx5_core_dev *mdev,
+ struct mlx5e_xsk *xsk,
struct mlx5e_rss_params *rss_params,
struct mlx5e_params *params,
u16 max_channels, u16 mtu)
@@ -4563,9 +4756,11 @@ void mlx5e_build_nic_params(struct mlx5_core_dev *mdev,
/* HW LRO */
/* TODO: && MLX5_CAP_ETH(mdev, lro_cap) */
- if (params->rq_wq_type == MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ)
- if (!mlx5e_rx_mpwqe_is_linear_skb(mdev, params))
+ if (params->rq_wq_type == MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ) {
+ /* No XSK params: checking the availability of striding RQ in general. */
+ if (!mlx5e_rx_mpwqe_is_linear_skb(mdev, params, NULL))
params->lro_en = !slow_pci_heuristic(mdev);
+ }
params->lro_timeout = mlx5e_choose_lro_timeout(mdev, MLX5E_DEFAULT_LRO_TIMEOUT);
/* CQ moderation params */
@@ -4584,13 +4779,16 @@ void mlx5e_build_nic_params(struct mlx5_core_dev *mdev,
mlx5e_build_rss_params(rss_params, params->num_channels);
params->tunneled_offload_en =
mlx5e_tunnel_inner_ft_supported(mdev);
+
+ /* AF_XDP */
+ params->xsk = xsk;
}
static void mlx5e_set_netdev_dev_addr(struct net_device *netdev)
{
struct mlx5e_priv *priv = netdev_priv(netdev);
- mlx5_query_nic_vport_mac_address(priv->mdev, 0, netdev->dev_addr);
+ mlx5_query_mac_address(priv->mdev, netdev->dev_addr);
if (is_zero_ether_addr(netdev->dev_addr) &&
!MLX5_CAP_GEN(priv->mdev, vport_group_manager)) {
eth_hw_addr_random(netdev);
@@ -4619,14 +4817,18 @@ static void mlx5e_build_nic_netdev(struct net_device *netdev)
netdev->ethtool_ops = &mlx5e_ethtool_ops;
netdev->vlan_features |= NETIF_F_SG;
- netdev->vlan_features |= NETIF_F_IP_CSUM;
- netdev->vlan_features |= NETIF_F_IPV6_CSUM;
+ netdev->vlan_features |= NETIF_F_HW_CSUM;
netdev->vlan_features |= NETIF_F_GRO;
netdev->vlan_features |= NETIF_F_TSO;
netdev->vlan_features |= NETIF_F_TSO6;
netdev->vlan_features |= NETIF_F_RXCSUM;
netdev->vlan_features |= NETIF_F_RXHASH;
+ netdev->mpls_features |= NETIF_F_SG;
+ netdev->mpls_features |= NETIF_F_HW_CSUM;
+ netdev->mpls_features |= NETIF_F_TSO;
+ netdev->mpls_features |= NETIF_F_TSO6;
+
netdev->hw_enc_features |= NETIF_F_HW_VLAN_CTAG_TX;
netdev->hw_enc_features |= NETIF_F_HW_VLAN_CTAG_RX;
@@ -4642,8 +4844,7 @@ static void mlx5e_build_nic_netdev(struct net_device *netdev)
if (mlx5_vxlan_allowed(mdev->vxlan) || mlx5_geneve_tx_allowed(mdev) ||
MLX5_CAP_ETH(mdev, tunnel_stateless_gre)) {
- netdev->hw_enc_features |= NETIF_F_IP_CSUM;
- netdev->hw_enc_features |= NETIF_F_IPV6_CSUM;
+ netdev->hw_enc_features |= NETIF_F_HW_CSUM;
netdev->hw_enc_features |= NETIF_F_TSO;
netdev->hw_enc_features |= NETIF_F_TSO6;
netdev->hw_enc_features |= NETIF_F_GSO_PARTIAL;
@@ -4756,7 +4957,7 @@ static int mlx5e_nic_init(struct mlx5_core_dev *mdev,
if (err)
return err;
- mlx5e_build_nic_params(mdev, rss, &priv->channels.params,
+ mlx5e_build_nic_params(mdev, &priv->xsk, rss, &priv->channels.params,
mlx5e_get_netdev_max_channels(netdev),
netdev->mtu);
@@ -4798,7 +4999,7 @@ static int mlx5e_init_nic_rx(struct mlx5e_priv *priv)
if (err)
goto err_close_drop_rq;
- err = mlx5e_create_direct_rqts(priv);
+ err = mlx5e_create_direct_rqts(priv, priv->direct_tir);
if (err)
goto err_destroy_indirect_rqts;
@@ -4806,14 +5007,22 @@ static int mlx5e_init_nic_rx(struct mlx5e_priv *priv)
if (err)
goto err_destroy_direct_rqts;
- err = mlx5e_create_direct_tirs(priv);
+ err = mlx5e_create_direct_tirs(priv, priv->direct_tir);
if (err)
goto err_destroy_indirect_tirs;
+ err = mlx5e_create_direct_rqts(priv, priv->xsk_tir);
+ if (unlikely(err))
+ goto err_destroy_direct_tirs;
+
+ err = mlx5e_create_direct_tirs(priv, priv->xsk_tir);
+ if (unlikely(err))
+ goto err_destroy_xsk_rqts;
+
err = mlx5e_create_flow_steering(priv);
if (err) {
mlx5_core_warn(mdev, "create flow steering failed, %d\n", err);
- goto err_destroy_direct_tirs;
+ goto err_destroy_xsk_tirs;
}
err = mlx5e_tc_nic_init(priv);
@@ -4824,12 +5033,16 @@ static int mlx5e_init_nic_rx(struct mlx5e_priv *priv)
err_destroy_flow_steering:
mlx5e_destroy_flow_steering(priv);
+err_destroy_xsk_tirs:
+ mlx5e_destroy_direct_tirs(priv, priv->xsk_tir);
+err_destroy_xsk_rqts:
+ mlx5e_destroy_direct_rqts(priv, priv->xsk_tir);
err_destroy_direct_tirs:
- mlx5e_destroy_direct_tirs(priv);
+ mlx5e_destroy_direct_tirs(priv, priv->direct_tir);
err_destroy_indirect_tirs:
mlx5e_destroy_indirect_tirs(priv, true);
err_destroy_direct_rqts:
- mlx5e_destroy_direct_rqts(priv);
+ mlx5e_destroy_direct_rqts(priv, priv->direct_tir);
err_destroy_indirect_rqts:
mlx5e_destroy_rqt(priv, &priv->indir_rqt);
err_close_drop_rq:
@@ -4843,9 +5056,11 @@ static void mlx5e_cleanup_nic_rx(struct mlx5e_priv *priv)
{
mlx5e_tc_nic_cleanup(priv);
mlx5e_destroy_flow_steering(priv);
- mlx5e_destroy_direct_tirs(priv);
+ mlx5e_destroy_direct_tirs(priv, priv->xsk_tir);
+ mlx5e_destroy_direct_rqts(priv, priv->xsk_tir);
+ mlx5e_destroy_direct_tirs(priv, priv->direct_tir);
mlx5e_destroy_indirect_tirs(priv, true);
- mlx5e_destroy_direct_rqts(priv);
+ mlx5e_destroy_direct_rqts(priv, priv->direct_tir);
mlx5e_destroy_rqt(priv, &priv->indir_rqt);
mlx5e_close_drop_rq(&priv->drop_rq);
mlx5e_destroy_q_counters(priv);
@@ -4927,6 +5142,11 @@ static void mlx5e_nic_disable(struct mlx5e_priv *priv)
mlx5_lag_remove(mdev);
}
+int mlx5e_update_nic_rx(struct mlx5e_priv *priv)
+{
+ return mlx5e_refresh_tirs(priv, false);
+}
+
static const struct mlx5e_profile mlx5e_nic_profile = {
.init = mlx5e_nic_init,
.cleanup = mlx5e_nic_cleanup,
@@ -4936,6 +5156,7 @@ static const struct mlx5e_profile mlx5e_nic_profile = {
.cleanup_tx = mlx5e_cleanup_nic_tx,
.enable = mlx5e_nic_enable,
.disable = mlx5e_nic_disable,
+ .update_rx = mlx5e_update_nic_rx,
.update_stats = mlx5e_update_ndo_stats,
.update_carrier = mlx5e_update_carrier,
.rx_handlers.handle_rx_cqe = mlx5e_handle_rx_cqe,
@@ -4995,7 +5216,7 @@ struct net_device *mlx5e_create_netdev(struct mlx5_core_dev *mdev,
netdev = alloc_etherdev_mqs(sizeof(struct mlx5e_priv),
nch * profile->max_tc,
- nch);
+ nch * MLX5E_NUM_RQ_GROUPS);
if (!netdev) {
mlx5_core_err(mdev, "alloc_etherdev_mqs() failed\n");
return NULL;
@@ -5133,7 +5354,7 @@ static void *mlx5e_add(struct mlx5_core_dev *mdev)
#ifdef CONFIG_MLX5_ESWITCH
if (MLX5_ESWITCH_MANAGER(mdev) &&
- mlx5_eswitch_mode(mdev->priv.eswitch) == SRIOV_OFFLOADS) {
+ mlx5_eswitch_mode(mdev->priv.eswitch) == MLX5_ESWITCH_OFFLOADS) {
mlx5e_rep_register_vport_reps(mdev);
return mdev;
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
index 2f406b161bcf..10ef90a7bddd 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
@@ -37,6 +37,7 @@
#include <net/act_api.h>
#include <net/netevent.h>
#include <net/arp.h>
+#include <net/devlink.h>
#include "eswitch.h"
#include "en.h"
@@ -128,7 +129,7 @@ static void mlx5e_rep_get_strings(struct net_device *dev,
}
}
-static void mlx5e_vf_rep_update_hw_counters(struct mlx5e_priv *priv)
+static void mlx5e_rep_update_hw_counters(struct mlx5e_priv *priv)
{
struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
struct mlx5e_rep_priv *rpriv = priv->ppriv;
@@ -166,17 +167,6 @@ static void mlx5e_uplink_rep_update_hw_counters(struct mlx5e_priv *priv)
vport_stats->tx_bytes = PPORT_802_3_GET(pstats, a_octets_transmitted_ok);
}
-static void mlx5e_rep_update_hw_counters(struct mlx5e_priv *priv)
-{
- struct mlx5e_rep_priv *rpriv = priv->ppriv;
- struct mlx5_eswitch_rep *rep = rpriv->rep;
-
- if (rep->vport == MLX5_VPORT_UPLINK)
- mlx5e_uplink_rep_update_hw_counters(priv);
- else
- mlx5e_vf_rep_update_hw_counters(priv);
-}
-
static void mlx5e_rep_update_sw_counters(struct mlx5e_priv *priv)
{
struct mlx5e_sw_stats *s = &priv->stats.sw;
@@ -203,7 +193,7 @@ static void mlx5e_rep_get_ethtool_stats(struct net_device *dev,
mutex_lock(&priv->state_lock);
mlx5e_rep_update_sw_counters(priv);
- mlx5e_rep_update_hw_counters(priv);
+ priv->profile->update_stats(priv);
mutex_unlock(&priv->state_lock);
for (i = 0; i < NUM_VPORT_REP_SW_COUNTERS; i++)
@@ -363,7 +353,7 @@ static int mlx5e_uplink_rep_set_link_ksettings(struct net_device *netdev,
return mlx5e_ethtool_set_link_ksettings(priv, link_ksettings);
}
-static const struct ethtool_ops mlx5e_vf_rep_ethtool_ops = {
+static const struct ethtool_ops mlx5e_rep_ethtool_ops = {
.get_drvinfo = mlx5e_rep_get_drvinfo,
.get_link = ethtool_op_get_link,
.get_strings = mlx5e_rep_get_strings,
@@ -402,30 +392,19 @@ static const struct ethtool_ops mlx5e_uplink_rep_ethtool_ops = {
static int mlx5e_rep_get_port_parent_id(struct net_device *dev,
struct netdev_phys_item_id *ppid)
{
- struct mlx5e_priv *priv = netdev_priv(dev);
- struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
- struct net_device *uplink_upper = NULL;
- struct mlx5e_priv *uplink_priv = NULL;
- struct net_device *uplink_dev;
-
- if (esw->mode == SRIOV_NONE)
- return -EOPNOTSUPP;
+ struct mlx5_eswitch *esw;
+ struct mlx5e_priv *priv;
+ u64 parent_id;
- uplink_dev = mlx5_eswitch_uplink_get_proto_dev(esw, REP_ETH);
- if (uplink_dev) {
- uplink_upper = netdev_master_upper_dev_get(uplink_dev);
- uplink_priv = netdev_priv(uplink_dev);
- }
+ priv = netdev_priv(dev);
+ esw = priv->mdev->priv.eswitch;
- ppid->id_len = ETH_ALEN;
- if (uplink_upper && mlx5_lag_is_sriov(uplink_priv->mdev)) {
- ether_addr_copy(ppid->id, uplink_upper->dev_addr);
- } else {
- struct mlx5e_rep_priv *rpriv = priv->ppriv;
- struct mlx5_eswitch_rep *rep = rpriv->rep;
+ if (esw->mode == MLX5_ESWITCH_NONE)
+ return -EOPNOTSUPP;
- ether_addr_copy(ppid->id, rep->hw_id);
- }
+ parent_id = mlx5_query_nic_system_image_guid(priv->mdev);
+ ppid->id_len = sizeof(parent_id);
+ memcpy(ppid->id, &parent_id, sizeof(parent_id));
return 0;
}
@@ -436,7 +415,7 @@ static void mlx5e_sqs2vport_stop(struct mlx5_eswitch *esw,
struct mlx5e_rep_sq *rep_sq, *tmp;
struct mlx5e_rep_priv *rpriv;
- if (esw->mode != SRIOV_OFFLOADS)
+ if (esw->mode != MLX5_ESWITCH_OFFLOADS)
return;
rpriv = mlx5e_rep_to_rep_priv(rep);
@@ -457,7 +436,7 @@ static int mlx5e_sqs2vport_start(struct mlx5_eswitch *esw,
int err;
int i;
- if (esw->mode != SRIOV_OFFLOADS)
+ if (esw->mode != MLX5_ESWITCH_OFFLOADS)
return 0;
rpriv = mlx5e_rep_to_rep_priv(rep);
@@ -677,7 +656,7 @@ static void mlx5e_rep_indr_clean_block_privs(struct mlx5e_rep_priv *rpriv)
static int
mlx5e_rep_indr_offload(struct net_device *netdev,
- struct tc_cls_flower_offload *flower,
+ struct flow_cls_offload *flower,
struct mlx5e_rep_indr_block_priv *indr_priv)
{
struct mlx5e_priv *priv = netdev_priv(indr_priv->rpriv->netdev);
@@ -685,13 +664,13 @@ mlx5e_rep_indr_offload(struct net_device *netdev,
int err = 0;
switch (flower->command) {
- case TC_CLSFLOWER_REPLACE:
+ case FLOW_CLS_REPLACE:
err = mlx5e_configure_flower(netdev, priv, flower, flags);
break;
- case TC_CLSFLOWER_DESTROY:
+ case FLOW_CLS_DESTROY:
err = mlx5e_delete_flower(netdev, priv, flower, flags);
break;
- case TC_CLSFLOWER_STATS:
+ case FLOW_CLS_STATS:
err = mlx5e_stats_flower(netdev, priv, flower, flags);
break;
default:
@@ -714,23 +693,39 @@ static int mlx5e_rep_indr_setup_block_cb(enum tc_setup_type type,
}
}
+static void mlx5e_rep_indr_tc_block_unbind(void *cb_priv)
+{
+ struct mlx5e_rep_indr_block_priv *indr_priv = cb_priv;
+
+ list_del(&indr_priv->list);
+ kfree(indr_priv);
+}
+
+static LIST_HEAD(mlx5e_block_cb_list);
+
static int
mlx5e_rep_indr_setup_tc_block(struct net_device *netdev,
struct mlx5e_rep_priv *rpriv,
- struct tc_block_offload *f)
+ struct flow_block_offload *f)
{
struct mlx5e_rep_indr_block_priv *indr_priv;
- int err = 0;
+ struct flow_block_cb *block_cb;
- if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
+ if (f->binder_type != FLOW_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
return -EOPNOTSUPP;
+ f->driver_block_list = &mlx5e_block_cb_list;
+
switch (f->command) {
- case TC_BLOCK_BIND:
+ case FLOW_BLOCK_BIND:
indr_priv = mlx5e_rep_indr_block_priv_lookup(rpriv, netdev);
if (indr_priv)
return -EEXIST;
+ if (flow_block_cb_is_busy(mlx5e_rep_indr_setup_block_cb,
+ indr_priv, &mlx5e_block_cb_list))
+ return -EBUSY;
+
indr_priv = kmalloc(sizeof(*indr_priv), GFP_KERNEL);
if (!indr_priv)
return -ENOMEM;
@@ -740,26 +735,32 @@ mlx5e_rep_indr_setup_tc_block(struct net_device *netdev,
list_add(&indr_priv->list,
&rpriv->uplink_priv.tc_indr_block_priv_list);
- err = tcf_block_cb_register(f->block,
- mlx5e_rep_indr_setup_block_cb,
- indr_priv, indr_priv, f->extack);
- if (err) {
+ block_cb = flow_block_cb_alloc(f->net,
+ mlx5e_rep_indr_setup_block_cb,
+ indr_priv, indr_priv,
+ mlx5e_rep_indr_tc_block_unbind);
+ if (IS_ERR(block_cb)) {
list_del(&indr_priv->list);
kfree(indr_priv);
+ return PTR_ERR(block_cb);
}
+ flow_block_cb_add(block_cb, f);
+ list_add_tail(&block_cb->driver_list, &mlx5e_block_cb_list);
- return err;
- case TC_BLOCK_UNBIND:
+ return 0;
+ case FLOW_BLOCK_UNBIND:
indr_priv = mlx5e_rep_indr_block_priv_lookup(rpriv, netdev);
if (!indr_priv)
return -ENOENT;
- tcf_block_cb_unregister(f->block,
- mlx5e_rep_indr_setup_block_cb,
- indr_priv);
- list_del(&indr_priv->list);
- kfree(indr_priv);
+ block_cb = flow_block_cb_lookup(f,
+ mlx5e_rep_indr_setup_block_cb,
+ indr_priv);
+ if (!block_cb)
+ return -ENOENT;
+ flow_block_cb_remove(block_cb, f);
+ list_del(&block_cb->driver_list);
return 0;
default:
return -EOPNOTSUPP;
@@ -1101,7 +1102,7 @@ void mlx5e_rep_encap_entry_detach(struct mlx5e_priv *priv,
mlx5_tun_entropy_refcount_dec(tun_entropy, e->reformat_type);
}
-static int mlx5e_vf_rep_open(struct net_device *dev)
+static int mlx5e_rep_open(struct net_device *dev)
{
struct mlx5e_priv *priv = netdev_priv(dev);
struct mlx5e_rep_priv *rpriv = priv->ppriv;
@@ -1124,7 +1125,7 @@ unlock:
return err;
}
-static int mlx5e_vf_rep_close(struct net_device *dev)
+static int mlx5e_rep_close(struct net_device *dev)
{
struct mlx5e_priv *priv = netdev_priv(dev);
struct mlx5e_rep_priv *rpriv = priv->ppriv;
@@ -1141,42 +1142,18 @@ static int mlx5e_vf_rep_close(struct net_device *dev)
return ret;
}
-static int mlx5e_rep_get_phys_port_name(struct net_device *dev,
- char *buf, size_t len)
-{
- struct mlx5e_priv *priv = netdev_priv(dev);
- struct mlx5e_rep_priv *rpriv = priv->ppriv;
- struct mlx5_eswitch_rep *rep = rpriv->rep;
- unsigned int fn;
- int ret;
-
- fn = PCI_FUNC(priv->mdev->pdev->devfn);
- if (fn >= MLX5_MAX_PORTS)
- return -EOPNOTSUPP;
-
- if (rep->vport == MLX5_VPORT_UPLINK)
- ret = snprintf(buf, len, "p%d", fn);
- else
- ret = snprintf(buf, len, "pf%dvf%d", fn, rep->vport - 1);
-
- if (ret >= len)
- return -EOPNOTSUPP;
-
- return 0;
-}
-
static int
mlx5e_rep_setup_tc_cls_flower(struct mlx5e_priv *priv,
- struct tc_cls_flower_offload *cls_flower, int flags)
+ struct flow_cls_offload *cls_flower, int flags)
{
switch (cls_flower->command) {
- case TC_CLSFLOWER_REPLACE:
+ case FLOW_CLS_REPLACE:
return mlx5e_configure_flower(priv->netdev, priv, cls_flower,
flags);
- case TC_CLSFLOWER_DESTROY:
+ case FLOW_CLS_DESTROY:
return mlx5e_delete_flower(priv->netdev, priv, cls_flower,
flags);
- case TC_CLSFLOWER_STATS:
+ case FLOW_CLS_STATS:
return mlx5e_stats_flower(priv->netdev, priv, cls_flower,
flags);
default:
@@ -1198,32 +1175,16 @@ static int mlx5e_rep_setup_tc_cb(enum tc_setup_type type, void *type_data,
}
}
-static int mlx5e_rep_setup_tc_block(struct net_device *dev,
- struct tc_block_offload *f)
-{
- struct mlx5e_priv *priv = netdev_priv(dev);
-
- if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
- return -EOPNOTSUPP;
-
- switch (f->command) {
- case TC_BLOCK_BIND:
- return tcf_block_cb_register(f->block, mlx5e_rep_setup_tc_cb,
- priv, priv, f->extack);
- case TC_BLOCK_UNBIND:
- tcf_block_cb_unregister(f->block, mlx5e_rep_setup_tc_cb, priv);
- return 0;
- default:
- return -EOPNOTSUPP;
- }
-}
-
static int mlx5e_rep_setup_tc(struct net_device *dev, enum tc_setup_type type,
void *type_data)
{
+ struct mlx5e_priv *priv = netdev_priv(dev);
+
switch (type) {
case TC_SETUP_BLOCK:
- return mlx5e_rep_setup_tc_block(dev, type_data);
+ return flow_block_cb_setup_simple(type_data, NULL,
+ mlx5e_rep_setup_tc_cb,
+ priv, priv, true);
default:
return -EOPNOTSUPP;
}
@@ -1276,7 +1237,7 @@ static int mlx5e_rep_get_offload_stats(int attr_id, const struct net_device *dev
}
static void
-mlx5e_vf_rep_get_stats(struct net_device *dev, struct rtnl_link_stats64 *stats)
+mlx5e_rep_get_stats(struct net_device *dev, struct rtnl_link_stats64 *stats)
{
struct mlx5e_priv *priv = netdev_priv(dev);
@@ -1285,7 +1246,7 @@ mlx5e_vf_rep_get_stats(struct net_device *dev, struct rtnl_link_stats64 *stats)
memcpy(stats, &priv->stats.vf_vport, sizeof(*stats));
}
-static int mlx5e_vf_rep_change_mtu(struct net_device *netdev, int new_mtu)
+static int mlx5e_rep_change_mtu(struct net_device *netdev, int new_mtu)
{
return mlx5e_change_mtu(netdev, new_mtu, NULL);
}
@@ -1318,17 +1279,24 @@ static int mlx5e_uplink_rep_set_vf_vlan(struct net_device *dev, int vf, u16 vlan
return 0;
}
-static const struct net_device_ops mlx5e_netdev_ops_vf_rep = {
- .ndo_open = mlx5e_vf_rep_open,
- .ndo_stop = mlx5e_vf_rep_close,
+static struct devlink_port *mlx5e_get_devlink_port(struct net_device *dev)
+{
+ struct mlx5e_priv *priv = netdev_priv(dev);
+ struct mlx5e_rep_priv *rpriv = priv->ppriv;
+
+ return &rpriv->dl_port;
+}
+
+static const struct net_device_ops mlx5e_netdev_ops_rep = {
+ .ndo_open = mlx5e_rep_open,
+ .ndo_stop = mlx5e_rep_close,
.ndo_start_xmit = mlx5e_xmit,
- .ndo_get_phys_port_name = mlx5e_rep_get_phys_port_name,
.ndo_setup_tc = mlx5e_rep_setup_tc,
- .ndo_get_stats64 = mlx5e_vf_rep_get_stats,
+ .ndo_get_devlink_port = mlx5e_get_devlink_port,
+ .ndo_get_stats64 = mlx5e_rep_get_stats,
.ndo_has_offload_stats = mlx5e_rep_has_offload_stats,
.ndo_get_offload_stats = mlx5e_rep_get_offload_stats,
- .ndo_change_mtu = mlx5e_vf_rep_change_mtu,
- .ndo_get_port_parent_id = mlx5e_rep_get_port_parent_id,
+ .ndo_change_mtu = mlx5e_rep_change_mtu,
};
static const struct net_device_ops mlx5e_netdev_ops_uplink_rep = {
@@ -1336,8 +1304,8 @@ static const struct net_device_ops mlx5e_netdev_ops_uplink_rep = {
.ndo_stop = mlx5e_close,
.ndo_start_xmit = mlx5e_xmit,
.ndo_set_mac_address = mlx5e_uplink_rep_set_mac,
- .ndo_get_phys_port_name = mlx5e_rep_get_phys_port_name,
.ndo_setup_tc = mlx5e_rep_setup_tc,
+ .ndo_get_devlink_port = mlx5e_get_devlink_port,
.ndo_get_stats64 = mlx5e_get_stats,
.ndo_has_offload_stats = mlx5e_rep_has_offload_stats,
.ndo_get_offload_stats = mlx5e_rep_get_offload_stats,
@@ -1350,13 +1318,12 @@ static const struct net_device_ops mlx5e_netdev_ops_uplink_rep = {
.ndo_get_vf_config = mlx5e_get_vf_config,
.ndo_get_vf_stats = mlx5e_get_vf_stats,
.ndo_set_vf_vlan = mlx5e_uplink_rep_set_vf_vlan,
- .ndo_get_port_parent_id = mlx5e_rep_get_port_parent_id,
.ndo_set_features = mlx5e_set_features,
};
bool mlx5e_eswitch_rep(struct net_device *netdev)
{
- if (netdev->netdev_ops == &mlx5e_netdev_ops_vf_rep ||
+ if (netdev->netdev_ops == &mlx5e_netdev_ops_rep ||
netdev->netdev_ops == &mlx5e_netdev_ops_uplink_rep)
return true;
@@ -1412,16 +1379,16 @@ static void mlx5e_build_rep_netdev(struct net_device *netdev)
SET_NETDEV_DEV(netdev, mdev->device);
netdev->netdev_ops = &mlx5e_netdev_ops_uplink_rep;
/* we want a persistent mac for the uplink rep */
- mlx5_query_nic_vport_mac_address(mdev, 0, netdev->dev_addr);
+ mlx5_query_mac_address(mdev, netdev->dev_addr);
netdev->ethtool_ops = &mlx5e_uplink_rep_ethtool_ops;
#ifdef CONFIG_MLX5_CORE_EN_DCB
if (MLX5_CAP_GEN(mdev, qos))
netdev->dcbnl_ops = &mlx5e_dcbnl_ops;
#endif
} else {
- netdev->netdev_ops = &mlx5e_netdev_ops_vf_rep;
+ netdev->netdev_ops = &mlx5e_netdev_ops_rep;
eth_hw_addr_random(netdev);
- netdev->ethtool_ops = &mlx5e_vf_rep_ethtool_ops;
+ netdev->ethtool_ops = &mlx5e_rep_ethtool_ops;
}
netdev->watchdog_timeo = 15 * HZ;
@@ -1530,7 +1497,7 @@ static int mlx5e_init_rep_rx(struct mlx5e_priv *priv)
if (err)
goto err_close_drop_rq;
- err = mlx5e_create_direct_rqts(priv);
+ err = mlx5e_create_direct_rqts(priv, priv->direct_tir);
if (err)
goto err_destroy_indirect_rqts;
@@ -1538,7 +1505,7 @@ static int mlx5e_init_rep_rx(struct mlx5e_priv *priv)
if (err)
goto err_destroy_direct_rqts;
- err = mlx5e_create_direct_tirs(priv);
+ err = mlx5e_create_direct_tirs(priv, priv->direct_tir);
if (err)
goto err_destroy_indirect_tirs;
@@ -1555,11 +1522,11 @@ static int mlx5e_init_rep_rx(struct mlx5e_priv *priv)
err_destroy_ttc_table:
mlx5e_destroy_ttc_table(priv, &priv->fs.ttc);
err_destroy_direct_tirs:
- mlx5e_destroy_direct_tirs(priv);
+ mlx5e_destroy_direct_tirs(priv, priv->direct_tir);
err_destroy_indirect_tirs:
mlx5e_destroy_indirect_tirs(priv, false);
err_destroy_direct_rqts:
- mlx5e_destroy_direct_rqts(priv);
+ mlx5e_destroy_direct_rqts(priv, priv->direct_tir);
err_destroy_indirect_rqts:
mlx5e_destroy_rqt(priv, &priv->indir_rqt);
err_close_drop_rq:
@@ -1573,9 +1540,9 @@ static void mlx5e_cleanup_rep_rx(struct mlx5e_priv *priv)
mlx5_del_flow_rules(rpriv->vport_rx_rule);
mlx5e_destroy_ttc_table(priv, &priv->fs.ttc);
- mlx5e_destroy_direct_tirs(priv);
+ mlx5e_destroy_direct_tirs(priv, priv->direct_tir);
mlx5e_destroy_indirect_tirs(priv, false);
- mlx5e_destroy_direct_rqts(priv);
+ mlx5e_destroy_direct_rqts(priv, priv->direct_tir);
mlx5e_destroy_rqt(priv, &priv->indir_rqt);
mlx5e_close_drop_rq(&priv->drop_rq);
}
@@ -1642,11 +1609,16 @@ static void mlx5e_cleanup_rep_tx(struct mlx5e_priv *priv)
}
}
-static void mlx5e_vf_rep_enable(struct mlx5e_priv *priv)
+static void mlx5e_rep_enable(struct mlx5e_priv *priv)
{
mlx5e_set_netdev_mtu_boundaries(priv);
}
+static int mlx5e_update_rep_rx(struct mlx5e_priv *priv)
+{
+ return 0;
+}
+
static int uplink_rep_async_event(struct notifier_block *nb, unsigned long event, void *data)
{
struct mlx5e_priv *priv = container_of(nb, struct mlx5e_priv, events_nb);
@@ -1714,15 +1686,16 @@ static void mlx5e_uplink_rep_disable(struct mlx5e_priv *priv)
mlx5_lag_remove(mdev);
}
-static const struct mlx5e_profile mlx5e_vf_rep_profile = {
+static const struct mlx5e_profile mlx5e_rep_profile = {
.init = mlx5e_init_rep,
.cleanup = mlx5e_cleanup_rep,
.init_rx = mlx5e_init_rep_rx,
.cleanup_rx = mlx5e_cleanup_rep_rx,
.init_tx = mlx5e_init_rep_tx,
.cleanup_tx = mlx5e_cleanup_rep_tx,
- .enable = mlx5e_vf_rep_enable,
- .update_stats = mlx5e_vf_rep_update_hw_counters,
+ .enable = mlx5e_rep_enable,
+ .update_rx = mlx5e_update_rep_rx,
+ .update_stats = mlx5e_rep_update_hw_counters,
.rx_handlers.handle_rx_cqe = mlx5e_handle_rx_cqe_rep,
.rx_handlers.handle_rx_cqe_mpwqe = mlx5e_handle_rx_cqe_mpwrq,
.max_tc = 1,
@@ -1737,6 +1710,7 @@ static const struct mlx5e_profile mlx5e_uplink_rep_profile = {
.cleanup_tx = mlx5e_cleanup_rep_tx,
.enable = mlx5e_uplink_rep_enable,
.disable = mlx5e_uplink_rep_disable,
+ .update_rx = mlx5e_update_rep_rx,
.update_stats = mlx5e_uplink_rep_update_hw_counters,
.update_carrier = mlx5e_update_carrier,
.rx_handlers.handle_rx_cqe = mlx5e_handle_rx_cqe_rep,
@@ -1744,6 +1718,55 @@ static const struct mlx5e_profile mlx5e_uplink_rep_profile = {
.max_tc = MLX5E_MAX_NUM_TC,
};
+static bool
+is_devlink_port_supported(const struct mlx5_core_dev *dev,
+ const struct mlx5e_rep_priv *rpriv)
+{
+ return rpriv->rep->vport == MLX5_VPORT_UPLINK ||
+ rpriv->rep->vport == MLX5_VPORT_PF ||
+ mlx5_eswitch_is_vf_vport(dev->priv.eswitch, rpriv->rep->vport);
+}
+
+static int register_devlink_port(struct mlx5_core_dev *dev,
+ struct mlx5e_rep_priv *rpriv)
+{
+ struct devlink *devlink = priv_to_devlink(dev);
+ struct mlx5_eswitch_rep *rep = rpriv->rep;
+ struct netdev_phys_item_id ppid = {};
+ int ret;
+
+ if (!is_devlink_port_supported(dev, rpriv))
+ return 0;
+
+ ret = mlx5e_rep_get_port_parent_id(rpriv->netdev, &ppid);
+ if (ret)
+ return ret;
+
+ if (rep->vport == MLX5_VPORT_UPLINK)
+ devlink_port_attrs_set(&rpriv->dl_port,
+ DEVLINK_PORT_FLAVOUR_PHYSICAL,
+ PCI_FUNC(dev->pdev->devfn), false, 0,
+ &ppid.id[0], ppid.id_len);
+ else if (rep->vport == MLX5_VPORT_PF)
+ devlink_port_attrs_pci_pf_set(&rpriv->dl_port,
+ &ppid.id[0], ppid.id_len,
+ dev->pdev->devfn);
+ else if (mlx5_eswitch_is_vf_vport(dev->priv.eswitch, rpriv->rep->vport))
+ devlink_port_attrs_pci_vf_set(&rpriv->dl_port,
+ &ppid.id[0], ppid.id_len,
+ dev->pdev->devfn,
+ rep->vport - 1);
+
+ return devlink_port_register(devlink, &rpriv->dl_port, rep->vport);
+}
+
+static void unregister_devlink_port(struct mlx5_core_dev *dev,
+ struct mlx5e_rep_priv *rpriv)
+{
+ if (is_devlink_port_supported(dev, rpriv))
+ devlink_port_unregister(&rpriv->dl_port);
+}
+
/* e-Switch vport representors */
static int
mlx5e_vport_rep_load(struct mlx5_core_dev *dev, struct mlx5_eswitch_rep *rep)
@@ -1761,7 +1784,8 @@ mlx5e_vport_rep_load(struct mlx5_core_dev *dev, struct mlx5_eswitch_rep *rep)
rpriv->rep = rep;
nch = mlx5e_get_max_num_channels(dev);
- profile = (rep->vport == MLX5_VPORT_UPLINK) ? &mlx5e_uplink_rep_profile : &mlx5e_vf_rep_profile;
+ profile = (rep->vport == MLX5_VPORT_UPLINK) ?
+ &mlx5e_uplink_rep_profile : &mlx5e_rep_profile;
netdev = mlx5e_create_netdev(dev, profile, nch, rpriv);
if (!netdev) {
pr_warn("Failed to create representor netdev for vport %d\n",
@@ -1771,7 +1795,7 @@ mlx5e_vport_rep_load(struct mlx5_core_dev *dev, struct mlx5_eswitch_rep *rep)
}
rpriv->netdev = netdev;
- rep->rep_if[REP_ETH].priv = rpriv;
+ rep->rep_data[REP_ETH].priv = rpriv;
INIT_LIST_HEAD(&rpriv->vport_sqs_list);
if (rep->vport == MLX5_VPORT_UPLINK) {
@@ -1794,15 +1818,27 @@ mlx5e_vport_rep_load(struct mlx5_core_dev *dev, struct mlx5_eswitch_rep *rep)
goto err_detach_netdev;
}
+ err = register_devlink_port(dev, rpriv);
+ if (err) {
+ esw_warn(dev, "Failed to register devlink port %d\n",
+ rep->vport);
+ goto err_neigh_cleanup;
+ }
+
err = register_netdev(netdev);
if (err) {
pr_warn("Failed to register representor netdev for vport %d\n",
rep->vport);
- goto err_neigh_cleanup;
+ goto err_devlink_cleanup;
}
+ if (is_devlink_port_supported(dev, rpriv))
+ devlink_port_type_eth_set(&rpriv->dl_port, netdev);
return 0;
+err_devlink_cleanup:
+ unregister_devlink_port(dev, rpriv);
+
err_neigh_cleanup:
mlx5e_rep_neigh_cleanup(rpriv);
@@ -1825,9 +1861,13 @@ mlx5e_vport_rep_unload(struct mlx5_eswitch_rep *rep)
struct mlx5e_rep_priv *rpriv = mlx5e_rep_to_rep_priv(rep);
struct net_device *netdev = rpriv->netdev;
struct mlx5e_priv *priv = netdev_priv(netdev);
+ struct mlx5_core_dev *dev = priv->mdev;
void *ppriv = priv->ppriv;
+ if (is_devlink_port_supported(dev, rpriv))
+ devlink_port_type_clear(&rpriv->dl_port);
unregister_netdev(netdev);
+ unregister_devlink_port(dev, rpriv);
mlx5e_rep_neigh_cleanup(rpriv);
mlx5e_detach_netdev(priv);
if (rep->vport == MLX5_VPORT_UPLINK)
@@ -1845,16 +1885,17 @@ static void *mlx5e_vport_rep_get_proto_dev(struct mlx5_eswitch_rep *rep)
return rpriv->netdev;
}
+static const struct mlx5_eswitch_rep_ops rep_ops = {
+ .load = mlx5e_vport_rep_load,
+ .unload = mlx5e_vport_rep_unload,
+ .get_proto_dev = mlx5e_vport_rep_get_proto_dev
+};
+
void mlx5e_rep_register_vport_reps(struct mlx5_core_dev *mdev)
{
struct mlx5_eswitch *esw = mdev->priv.eswitch;
- struct mlx5_eswitch_rep_if rep_if = {};
-
- rep_if.load = mlx5e_vport_rep_load;
- rep_if.unload = mlx5e_vport_rep_unload;
- rep_if.get_proto_dev = mlx5e_vport_rep_get_proto_dev;
- mlx5_eswitch_register_vport_reps(esw, &rep_if, REP_ETH);
+ mlx5_eswitch_register_vport_reps(esw, &rep_ops, REP_ETH);
}
void mlx5e_rep_unregister_vport_reps(struct mlx5_core_dev *mdev)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.h b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.h
index 83b573b1abac..c56e6ee4350c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.h
@@ -86,12 +86,13 @@ struct mlx5e_rep_priv {
struct mlx5_flow_handle *vport_rx_rule;
struct list_head vport_sqs_list;
struct mlx5_rep_uplink_priv uplink_priv; /* valid for uplink rep */
+ struct devlink_port dl_port;
};
static inline
struct mlx5e_rep_priv *mlx5e_rep_to_rep_priv(struct mlx5_eswitch_rep *rep)
{
- return (struct mlx5e_rep_priv *)rep->rep_if[REP_ETH].priv;
+ return rep->rep_data[REP_ETH].priv;
}
struct mlx5e_neigh {
@@ -150,13 +151,12 @@ struct mlx5e_encap_entry {
struct hlist_node encap_hlist;
struct list_head flows;
u32 encap_id;
- struct ip_tunnel_info tun_info;
+ const struct ip_tunnel_info *tun_info;
unsigned char h_dest[ETH_ALEN]; /* destination eth addr */
struct net_device *out_dev;
struct net_device *route_dev;
- int tunnel_type;
- int tunnel_hlen;
+ struct mlx5e_tc_tunnel *tunnel;
int reformat_type;
u8 flags;
char *encap_header;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index 13133e7f088e..56a2f4666c47 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -34,6 +34,7 @@
#include <linux/ip.h>
#include <linux/ipv6.h>
#include <linux/tcp.h>
+#include <linux/indirect_call_wrapper.h>
#include <net/ip6_checksum.h>
#include <net/page_pool.h>
#include <net/inet_ecn.h>
@@ -46,6 +47,7 @@
#include "en_accel/tls_rxtx.h"
#include "lib/clock.h"
#include "en/xdp.h"
+#include "en/xsk/rx.h"
static inline bool mlx5e_rx_hw_stamp(struct hwtstamp_config *config)
{
@@ -234,8 +236,8 @@ static inline bool mlx5e_rx_cache_get(struct mlx5e_rq *rq,
return true;
}
-static inline int mlx5e_page_alloc_mapped(struct mlx5e_rq *rq,
- struct mlx5e_dma_info *dma_info)
+static inline int mlx5e_page_alloc_pool(struct mlx5e_rq *rq,
+ struct mlx5e_dma_info *dma_info)
{
if (mlx5e_rx_cache_get(rq, dma_info))
return 0;
@@ -247,7 +249,7 @@ static inline int mlx5e_page_alloc_mapped(struct mlx5e_rq *rq,
dma_info->addr = dma_map_page(rq->pdev, dma_info->page, 0,
PAGE_SIZE, rq->buff.map_dir);
if (unlikely(dma_mapping_error(rq->pdev, dma_info->addr))) {
- put_page(dma_info->page);
+ page_pool_recycle_direct(rq->page_pool, dma_info->page);
dma_info->page = NULL;
return -ENOMEM;
}
@@ -255,13 +257,23 @@ static inline int mlx5e_page_alloc_mapped(struct mlx5e_rq *rq,
return 0;
}
+static inline int mlx5e_page_alloc(struct mlx5e_rq *rq,
+ struct mlx5e_dma_info *dma_info)
+{
+ if (rq->umem)
+ return mlx5e_xsk_page_alloc_umem(rq, dma_info);
+ else
+ return mlx5e_page_alloc_pool(rq, dma_info);
+}
+
void mlx5e_page_dma_unmap(struct mlx5e_rq *rq, struct mlx5e_dma_info *dma_info)
{
dma_unmap_page(rq->pdev, dma_info->addr, PAGE_SIZE, rq->buff.map_dir);
}
-void mlx5e_page_release(struct mlx5e_rq *rq, struct mlx5e_dma_info *dma_info,
- bool recycle)
+void mlx5e_page_release_dynamic(struct mlx5e_rq *rq,
+ struct mlx5e_dma_info *dma_info,
+ bool recycle)
{
if (likely(recycle)) {
if (mlx5e_rx_cache_put(rq, dma_info))
@@ -271,10 +283,25 @@ void mlx5e_page_release(struct mlx5e_rq *rq, struct mlx5e_dma_info *dma_info,
page_pool_recycle_direct(rq->page_pool, dma_info->page);
} else {
mlx5e_page_dma_unmap(rq, dma_info);
+ page_pool_release_page(rq->page_pool, dma_info->page);
put_page(dma_info->page);
}
}
+static inline void mlx5e_page_release(struct mlx5e_rq *rq,
+ struct mlx5e_dma_info *dma_info,
+ bool recycle)
+{
+ if (rq->umem)
+ /* The `recycle` parameter is ignored, and the page is always
+ * put into the Reuse Ring, because there is no way to return
+ * the page to the userspace when the interface goes down.
+ */
+ mlx5e_xsk_page_release(rq, dma_info);
+ else
+ mlx5e_page_release_dynamic(rq, dma_info, recycle);
+}
+
static inline int mlx5e_get_rx_frag(struct mlx5e_rq *rq,
struct mlx5e_wqe_frag_info *frag)
{
@@ -286,7 +313,7 @@ static inline int mlx5e_get_rx_frag(struct mlx5e_rq *rq,
* offset) should just use the new one without replenishing again
* by themselves.
*/
- err = mlx5e_page_alloc_mapped(rq, frag->di);
+ err = mlx5e_page_alloc(rq, frag->di);
return err;
}
@@ -352,6 +379,13 @@ static int mlx5e_alloc_rx_wqes(struct mlx5e_rq *rq, u16 ix, u8 wqe_bulk)
int err;
int i;
+ if (rq->umem) {
+ int pages_desired = wqe_bulk << rq->wqe.info.log_num_frags;
+
+ if (unlikely(!mlx5e_xsk_pages_enough_umem(rq, pages_desired)))
+ return -ENOMEM;
+ }
+
for (i = 0; i < wqe_bulk; i++) {
struct mlx5e_rx_wqe_cyc *wqe = mlx5_wq_cyc_get_wqe(wq, ix + i);
@@ -399,11 +433,17 @@ mlx5e_copy_skb_header(struct device *pdev, struct sk_buff *skb,
static void
mlx5e_free_rx_mpwqe(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi, bool recycle)
{
- const bool no_xdp_xmit =
- bitmap_empty(wi->xdp_xmit_bitmap, MLX5_MPWRQ_PAGES_PER_WQE);
+ bool no_xdp_xmit;
struct mlx5e_dma_info *dma_info = wi->umr.dma_info;
int i;
+ /* A common case for AF_XDP. */
+ if (bitmap_full(wi->xdp_xmit_bitmap, MLX5_MPWRQ_PAGES_PER_WQE))
+ return;
+
+ no_xdp_xmit = bitmap_empty(wi->xdp_xmit_bitmap,
+ MLX5_MPWRQ_PAGES_PER_WQE);
+
for (i = 0; i < MLX5_MPWRQ_PAGES_PER_WQE; i++)
if (no_xdp_xmit || !test_bit(i, wi->xdp_xmit_bitmap))
mlx5e_page_release(rq, &dma_info[i], recycle);
@@ -425,11 +465,6 @@ static void mlx5e_post_rx_mpwqe(struct mlx5e_rq *rq, u8 n)
mlx5_wq_ll_update_db_record(wq);
}
-static inline u16 mlx5e_icosq_wrap_cnt(struct mlx5e_icosq *sq)
-{
- return mlx5_wq_cyc_get_ctr_wrap_cnt(&sq->wq, sq->pc);
-}
-
static inline void mlx5e_fill_icosq_frag_edge(struct mlx5e_icosq *sq,
struct mlx5_wq_cyc *wq,
u16 pi, u16 nnops)
@@ -457,6 +492,12 @@ static int mlx5e_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
int err;
int i;
+ if (rq->umem &&
+ unlikely(!mlx5e_xsk_pages_enough_umem(rq, MLX5_MPWRQ_PAGES_PER_WQE))) {
+ err = -ENOMEM;
+ goto err;
+ }
+
pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
contig_wqebbs_room = mlx5_wq_cyc_get_contig_wqebbs(wq, pi);
if (unlikely(contig_wqebbs_room < MLX5E_UMR_WQEBBS)) {
@@ -465,12 +506,10 @@ static int mlx5e_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
}
umr_wqe = mlx5_wq_cyc_get_wqe(wq, pi);
- if (unlikely(mlx5e_icosq_wrap_cnt(sq) < 2))
- memcpy(umr_wqe, &rq->mpwqe.umr_wqe,
- offsetof(struct mlx5e_umr_wqe, inline_mtts));
+ memcpy(umr_wqe, &rq->mpwqe.umr_wqe, offsetof(struct mlx5e_umr_wqe, inline_mtts));
for (i = 0; i < MLX5_MPWRQ_PAGES_PER_WQE; i++, dma_info++) {
- err = mlx5e_page_alloc_mapped(rq, dma_info);
+ err = mlx5e_page_alloc(rq, dma_info);
if (unlikely(err))
goto err_unmap;
umr_wqe->inline_mtts[i].ptag = cpu_to_be64(dma_info->addr | MLX5_EN_WR);
@@ -485,6 +524,7 @@ static int mlx5e_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
umr_wqe->uctrl.xlt_offset = cpu_to_be16(xlt_offset);
sq->db.ico_wqe[pi].opcode = MLX5_OPCODE_UMR;
+ sq->db.ico_wqe[pi].umr.rq = rq;
sq->pc += MLX5E_UMR_WQEBBS;
sq->doorbell_cseg = &umr_wqe->ctrl;
@@ -496,6 +536,8 @@ err_unmap:
dma_info--;
mlx5e_page_release(rq, dma_info, true);
}
+
+err:
rq->stats->buff_alloc_err++;
return err;
@@ -542,11 +584,10 @@ bool mlx5e_post_rx_wqes(struct mlx5e_rq *rq)
return !!err;
}
-static void mlx5e_poll_ico_cq(struct mlx5e_cq *cq, struct mlx5e_rq *rq)
+void mlx5e_poll_ico_cq(struct mlx5e_cq *cq)
{
struct mlx5e_icosq *sq = container_of(cq, struct mlx5e_icosq, cq);
struct mlx5_cqe64 *cqe;
- u8 completed_umr = 0;
u16 sqcc;
int i;
@@ -587,7 +628,7 @@ static void mlx5e_poll_ico_cq(struct mlx5e_cq *cq, struct mlx5e_rq *rq)
if (likely(wi->opcode == MLX5_OPCODE_UMR)) {
sqcc += MLX5E_UMR_WQEBBS;
- completed_umr++;
+ wi->umr.rq->mpwqe.umr_completed++;
} else if (likely(wi->opcode == MLX5_OPCODE_NOP)) {
sqcc++;
} else {
@@ -603,24 +644,25 @@ static void mlx5e_poll_ico_cq(struct mlx5e_cq *cq, struct mlx5e_rq *rq)
sq->cc = sqcc;
mlx5_cqwq_update_db_record(&cq->wq);
-
- if (likely(completed_umr)) {
- mlx5e_post_rx_mpwqe(rq, completed_umr);
- rq->mpwqe.umr_in_progress -= completed_umr;
- }
}
bool mlx5e_post_rx_mpwqes(struct mlx5e_rq *rq)
{
struct mlx5e_icosq *sq = &rq->channel->icosq;
struct mlx5_wq_ll *wq = &rq->mpwqe.wq;
+ u8 umr_completed = rq->mpwqe.umr_completed;
+ int alloc_err = 0;
u8 missing, i;
u16 head;
if (unlikely(!test_bit(MLX5E_RQ_STATE_ENABLED, &rq->state)))
return false;
- mlx5e_poll_ico_cq(&sq->cq, rq);
+ if (umr_completed) {
+ mlx5e_post_rx_mpwqe(rq, umr_completed);
+ rq->mpwqe.umr_in_progress -= umr_completed;
+ rq->mpwqe.umr_completed = 0;
+ }
missing = mlx5_wq_ll_missing(wq) - rq->mpwqe.umr_in_progress;
@@ -634,7 +676,9 @@ bool mlx5e_post_rx_mpwqes(struct mlx5e_rq *rq)
head = rq->mpwqe.actual_wq_head;
i = missing;
do {
- if (unlikely(mlx5e_alloc_rx_mpwqe(rq, head)))
+ alloc_err = mlx5e_alloc_rx_mpwqe(rq, head);
+
+ if (unlikely(alloc_err))
break;
head = mlx5_wq_ll_get_wqe_next_ix(wq, head);
} while (--i);
@@ -648,6 +692,12 @@ bool mlx5e_post_rx_mpwqes(struct mlx5e_rq *rq)
rq->mpwqe.umr_in_progress += rq->mpwqe.umr_last_bulk;
rq->mpwqe.actual_wq_head = head;
+ /* If XSK Fill Ring doesn't have enough frames, busy poll by
+ * rescheduling the NAPI poll.
+ */
+ if (unlikely(alloc_err == -ENOMEM && rq->umem))
+ return true;
+
return false;
}
@@ -1016,7 +1066,7 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe,
}
rcu_read_lock();
- consumed = mlx5e_xdp_handle(rq, di, va, &rx_headroom, &cqe_bcnt);
+ consumed = mlx5e_xdp_handle(rq, di, va, &rx_headroom, &cqe_bcnt, false);
rcu_read_unlock();
if (consumed)
return NULL; /* page/packet was consumed by XDP */
@@ -1092,7 +1142,10 @@ void mlx5e_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
wi = get_frag(rq, ci);
cqe_bcnt = be32_to_cpu(cqe->byte_cnt);
- skb = rq->wqe.skb_from_cqe(rq, cqe, wi, cqe_bcnt);
+ skb = INDIRECT_CALL_2(rq->wqe.skb_from_cqe,
+ mlx5e_skb_from_cqe_linear,
+ mlx5e_skb_from_cqe_nonlinear,
+ rq, cqe, wi, cqe_bcnt);
if (!skb) {
/* probably for XDP */
if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) {
@@ -1230,7 +1283,7 @@ mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
prefetch(data);
rcu_read_lock();
- consumed = mlx5e_xdp_handle(rq, di, va, &rx_headroom, &cqe_bcnt32);
+ consumed = mlx5e_xdp_handle(rq, di, va, &rx_headroom, &cqe_bcnt32, false);
rcu_read_unlock();
if (consumed) {
if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags))
@@ -1279,8 +1332,10 @@ void mlx5e_handle_rx_cqe_mpwrq(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
cqe_bcnt = mpwrq_get_cqe_byte_cnt(cqe);
- skb = rq->mpwqe.skb_from_cqe_mpwrq(rq, wi, cqe_bcnt, head_offset,
- page_idx);
+ skb = INDIRECT_CALL_2(rq->mpwqe.skb_from_cqe_mpwrq,
+ mlx5e_skb_from_cqe_mpwrq_linear,
+ mlx5e_skb_from_cqe_mpwrq_nonlinear,
+ rq, wi, cqe_bcnt, head_offset, page_idx);
if (!skb)
goto mpwrq_cqe_out;
@@ -1327,7 +1382,8 @@ int mlx5e_poll_rx_cq(struct mlx5e_cq *cq, int budget)
mlx5_cqwq_pop(cqwq);
- rq->handle_rx_cqe(rq, cqe);
+ INDIRECT_CALL_2(rq->handle_rx_cqe, mlx5e_handle_rx_cqe_mpwrq,
+ mlx5e_handle_rx_cqe, rq, cqe);
} while ((++work_done < budget) && (cqe = mlx5_cqwq_get_cqe(cqwq)));
out:
@@ -1437,7 +1493,10 @@ void mlx5i_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
wi = get_frag(rq, ci);
cqe_bcnt = be32_to_cpu(cqe->byte_cnt);
- skb = rq->wqe.skb_from_cqe(rq, cqe, wi, cqe_bcnt);
+ skb = INDIRECT_CALL_2(rq->wqe.skb_from_cqe,
+ mlx5e_skb_from_cqe_linear,
+ mlx5e_skb_from_cqe_nonlinear,
+ rq, cqe, wi, cqe_bcnt);
if (!skb)
goto wq_free_wqe;
@@ -1469,7 +1528,10 @@ void mlx5e_ipsec_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
wi = get_frag(rq, ci);
cqe_bcnt = be32_to_cpu(cqe->byte_cnt);
- skb = rq->wqe.skb_from_cqe(rq, cqe, wi, cqe_bcnt);
+ skb = INDIRECT_CALL_2(rq->wqe.skb_from_cqe,
+ mlx5e_skb_from_cqe_linear,
+ mlx5e_skb_from_cqe_nonlinear,
+ rq, cqe, wi, cqe_bcnt);
if (unlikely(!skb)) {
/* a DROP, save the page-reuse checks */
mlx5e_free_rx_wqe(rq, wi, true);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c b/drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c
index 4382ef85488c..840ec945ccba 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c
@@ -64,7 +64,7 @@ static int mlx5e_test_health_info(struct mlx5e_priv *priv)
{
struct mlx5_core_health *health = &priv->mdev->priv.health;
- return health->sick ? 1 : 0;
+ return health->fatal_error ? 1 : 0;
}
static int mlx5e_test_link_state(struct mlx5e_priv *priv)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
index 483d321d2151..539b4d3656da 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
@@ -48,8 +48,15 @@ static const struct counter_desc sw_stats_desc[] = {
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_nop) },
#ifdef CONFIG_MLX5_EN_TLS
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_encrypted_packets) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_encrypted_bytes) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_ctx) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_ooo) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_resync_bytes) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_drop_no_sync_data) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_drop_bypass_req) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_dump_packets) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_dump_bytes) },
#endif
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_lro_packets) },
@@ -104,7 +111,33 @@ static const struct counter_desc sw_stats_desc[] = {
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, ch_poll) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, ch_arm) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, ch_aff_change) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, ch_force_irq) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, ch_eq_rearm) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_xsk_packets) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_xsk_bytes) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_xsk_csum_complete) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_xsk_csum_unnecessary) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_xsk_csum_unnecessary_inner) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_xsk_csum_none) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_xsk_ecn_mark) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_xsk_removed_vlan_packets) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_xsk_xdp_drop) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_xsk_xdp_redirect) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_xsk_wqe_err) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_xsk_mpwqe_filler_cqes) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_xsk_mpwqe_filler_strides) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_xsk_oversize_pkts_sw_drop) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_xsk_buff_alloc_err) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_xsk_cqe_compress_blks) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_xsk_cqe_compress_pkts) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_xsk_congst_umr) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_xsk_arfs_err) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_xsk_xmit) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_xsk_mpwqe) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_xsk_inlnw) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_xsk_full) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_xsk_err) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_xsk_cqes) },
};
#define NUM_SW_COUNTERS ARRAY_SIZE(sw_stats_desc)
@@ -144,6 +177,8 @@ static void mlx5e_grp_sw_update_stats(struct mlx5e_priv *priv)
&priv->channel_stats[i];
struct mlx5e_xdpsq_stats *xdpsq_red_stats = &channel_stats->xdpsq;
struct mlx5e_xdpsq_stats *xdpsq_stats = &channel_stats->rq_xdpsq;
+ struct mlx5e_xdpsq_stats *xsksq_stats = &channel_stats->xsksq;
+ struct mlx5e_rq_stats *xskrq_stats = &channel_stats->xskrq;
struct mlx5e_rq_stats *rq_stats = &channel_stats->rq;
struct mlx5e_ch_stats *ch_stats = &channel_stats->ch;
int j;
@@ -186,6 +221,7 @@ static void mlx5e_grp_sw_update_stats(struct mlx5e_priv *priv)
s->ch_poll += ch_stats->poll;
s->ch_arm += ch_stats->arm;
s->ch_aff_change += ch_stats->aff_change;
+ s->ch_force_irq += ch_stats->force_irq;
s->ch_eq_rearm += ch_stats->eq_rearm;
/* xdp redirect */
s->tx_xdp_xmit += xdpsq_red_stats->xmit;
@@ -194,6 +230,32 @@ static void mlx5e_grp_sw_update_stats(struct mlx5e_priv *priv)
s->tx_xdp_full += xdpsq_red_stats->full;
s->tx_xdp_err += xdpsq_red_stats->err;
s->tx_xdp_cqes += xdpsq_red_stats->cqes;
+ /* AF_XDP zero-copy */
+ s->rx_xsk_packets += xskrq_stats->packets;
+ s->rx_xsk_bytes += xskrq_stats->bytes;
+ s->rx_xsk_csum_complete += xskrq_stats->csum_complete;
+ s->rx_xsk_csum_unnecessary += xskrq_stats->csum_unnecessary;
+ s->rx_xsk_csum_unnecessary_inner += xskrq_stats->csum_unnecessary_inner;
+ s->rx_xsk_csum_none += xskrq_stats->csum_none;
+ s->rx_xsk_ecn_mark += xskrq_stats->ecn_mark;
+ s->rx_xsk_removed_vlan_packets += xskrq_stats->removed_vlan_packets;
+ s->rx_xsk_xdp_drop += xskrq_stats->xdp_drop;
+ s->rx_xsk_xdp_redirect += xskrq_stats->xdp_redirect;
+ s->rx_xsk_wqe_err += xskrq_stats->wqe_err;
+ s->rx_xsk_mpwqe_filler_cqes += xskrq_stats->mpwqe_filler_cqes;
+ s->rx_xsk_mpwqe_filler_strides += xskrq_stats->mpwqe_filler_strides;
+ s->rx_xsk_oversize_pkts_sw_drop += xskrq_stats->oversize_pkts_sw_drop;
+ s->rx_xsk_buff_alloc_err += xskrq_stats->buff_alloc_err;
+ s->rx_xsk_cqe_compress_blks += xskrq_stats->cqe_compress_blks;
+ s->rx_xsk_cqe_compress_pkts += xskrq_stats->cqe_compress_pkts;
+ s->rx_xsk_congst_umr += xskrq_stats->congst_umr;
+ s->rx_xsk_arfs_err += xskrq_stats->arfs_err;
+ s->tx_xsk_xmit += xsksq_stats->xmit;
+ s->tx_xsk_mpwqe += xsksq_stats->mpwqe;
+ s->tx_xsk_inlnw += xsksq_stats->inlnw;
+ s->tx_xsk_full += xsksq_stats->full;
+ s->tx_xsk_err += xsksq_stats->err;
+ s->tx_xsk_cqes += xsksq_stats->cqes;
for (j = 0; j < priv->max_opened_tc; j++) {
struct mlx5e_sq_stats *sq_stats = &channel_stats->sq[j];
@@ -216,8 +278,15 @@ static void mlx5e_grp_sw_update_stats(struct mlx5e_priv *priv)
s->tx_csum_none += sq_stats->csum_none;
s->tx_csum_partial += sq_stats->csum_partial;
#ifdef CONFIG_MLX5_EN_TLS
- s->tx_tls_ooo += sq_stats->tls_ooo;
- s->tx_tls_resync_bytes += sq_stats->tls_resync_bytes;
+ s->tx_tls_encrypted_packets += sq_stats->tls_encrypted_packets;
+ s->tx_tls_encrypted_bytes += sq_stats->tls_encrypted_bytes;
+ s->tx_tls_ctx += sq_stats->tls_ctx;
+ s->tx_tls_ooo += sq_stats->tls_ooo;
+ s->tx_tls_resync_bytes += sq_stats->tls_resync_bytes;
+ s->tx_tls_drop_no_sync_data += sq_stats->tls_drop_no_sync_data;
+ s->tx_tls_drop_bypass_req += sq_stats->tls_drop_bypass_req;
+ s->tx_tls_dump_bytes += sq_stats->tls_dump_bytes;
+ s->tx_tls_dump_packets += sq_stats->tls_dump_packets;
#endif
s->tx_cqes += sq_stats->cqes;
}
@@ -1238,6 +1307,16 @@ static const struct counter_desc sq_stats_desc[] = {
{ MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, csum_partial_inner) },
{ MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, added_vlan_packets) },
{ MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, nop) },
+#ifdef CONFIG_MLX5_EN_TLS
+ { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_encrypted_packets) },
+ { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_encrypted_bytes) },
+ { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_ctx) },
+ { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_ooo) },
+ { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_drop_no_sync_data) },
+ { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_drop_bypass_req) },
+ { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_dump_packets) },
+ { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_dump_bytes) },
+#endif
{ MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, csum_none) },
{ MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, stopped) },
{ MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, dropped) },
@@ -1266,11 +1345,43 @@ static const struct counter_desc xdpsq_stats_desc[] = {
{ MLX5E_DECLARE_XDPSQ_STAT(struct mlx5e_xdpsq_stats, cqes) },
};
+static const struct counter_desc xskrq_stats_desc[] = {
+ { MLX5E_DECLARE_XSKRQ_STAT(struct mlx5e_rq_stats, packets) },
+ { MLX5E_DECLARE_XSKRQ_STAT(struct mlx5e_rq_stats, bytes) },
+ { MLX5E_DECLARE_XSKRQ_STAT(struct mlx5e_rq_stats, csum_complete) },
+ { MLX5E_DECLARE_XSKRQ_STAT(struct mlx5e_rq_stats, csum_unnecessary) },
+ { MLX5E_DECLARE_XSKRQ_STAT(struct mlx5e_rq_stats, csum_unnecessary_inner) },
+ { MLX5E_DECLARE_XSKRQ_STAT(struct mlx5e_rq_stats, csum_none) },
+ { MLX5E_DECLARE_XSKRQ_STAT(struct mlx5e_rq_stats, ecn_mark) },
+ { MLX5E_DECLARE_XSKRQ_STAT(struct mlx5e_rq_stats, removed_vlan_packets) },
+ { MLX5E_DECLARE_XSKRQ_STAT(struct mlx5e_rq_stats, xdp_drop) },
+ { MLX5E_DECLARE_XSKRQ_STAT(struct mlx5e_rq_stats, xdp_redirect) },
+ { MLX5E_DECLARE_XSKRQ_STAT(struct mlx5e_rq_stats, wqe_err) },
+ { MLX5E_DECLARE_XSKRQ_STAT(struct mlx5e_rq_stats, mpwqe_filler_cqes) },
+ { MLX5E_DECLARE_XSKRQ_STAT(struct mlx5e_rq_stats, mpwqe_filler_strides) },
+ { MLX5E_DECLARE_XSKRQ_STAT(struct mlx5e_rq_stats, oversize_pkts_sw_drop) },
+ { MLX5E_DECLARE_XSKRQ_STAT(struct mlx5e_rq_stats, buff_alloc_err) },
+ { MLX5E_DECLARE_XSKRQ_STAT(struct mlx5e_rq_stats, cqe_compress_blks) },
+ { MLX5E_DECLARE_XSKRQ_STAT(struct mlx5e_rq_stats, cqe_compress_pkts) },
+ { MLX5E_DECLARE_XSKRQ_STAT(struct mlx5e_rq_stats, congst_umr) },
+ { MLX5E_DECLARE_XSKRQ_STAT(struct mlx5e_rq_stats, arfs_err) },
+};
+
+static const struct counter_desc xsksq_stats_desc[] = {
+ { MLX5E_DECLARE_XSKSQ_STAT(struct mlx5e_xdpsq_stats, xmit) },
+ { MLX5E_DECLARE_XSKSQ_STAT(struct mlx5e_xdpsq_stats, mpwqe) },
+ { MLX5E_DECLARE_XSKSQ_STAT(struct mlx5e_xdpsq_stats, inlnw) },
+ { MLX5E_DECLARE_XSKSQ_STAT(struct mlx5e_xdpsq_stats, full) },
+ { MLX5E_DECLARE_XSKSQ_STAT(struct mlx5e_xdpsq_stats, err) },
+ { MLX5E_DECLARE_XSKSQ_STAT(struct mlx5e_xdpsq_stats, cqes) },
+};
+
static const struct counter_desc ch_stats_desc[] = {
{ MLX5E_DECLARE_CH_STAT(struct mlx5e_ch_stats, events) },
{ MLX5E_DECLARE_CH_STAT(struct mlx5e_ch_stats, poll) },
{ MLX5E_DECLARE_CH_STAT(struct mlx5e_ch_stats, arm) },
{ MLX5E_DECLARE_CH_STAT(struct mlx5e_ch_stats, aff_change) },
+ { MLX5E_DECLARE_CH_STAT(struct mlx5e_ch_stats, force_irq) },
{ MLX5E_DECLARE_CH_STAT(struct mlx5e_ch_stats, eq_rearm) },
};
@@ -1278,6 +1389,8 @@ static const struct counter_desc ch_stats_desc[] = {
#define NUM_SQ_STATS ARRAY_SIZE(sq_stats_desc)
#define NUM_XDPSQ_STATS ARRAY_SIZE(xdpsq_stats_desc)
#define NUM_RQ_XDPSQ_STATS ARRAY_SIZE(rq_xdpsq_stats_desc)
+#define NUM_XSKRQ_STATS ARRAY_SIZE(xskrq_stats_desc)
+#define NUM_XSKSQ_STATS ARRAY_SIZE(xsksq_stats_desc)
#define NUM_CH_STATS ARRAY_SIZE(ch_stats_desc)
static int mlx5e_grp_channels_get_num_stats(struct mlx5e_priv *priv)
@@ -1288,13 +1401,16 @@ static int mlx5e_grp_channels_get_num_stats(struct mlx5e_priv *priv)
(NUM_CH_STATS * max_nch) +
(NUM_SQ_STATS * max_nch * priv->max_opened_tc) +
(NUM_RQ_XDPSQ_STATS * max_nch) +
- (NUM_XDPSQ_STATS * max_nch);
+ (NUM_XDPSQ_STATS * max_nch) +
+ (NUM_XSKRQ_STATS * max_nch * priv->xsk.ever_used) +
+ (NUM_XSKSQ_STATS * max_nch * priv->xsk.ever_used);
}
static int mlx5e_grp_channels_fill_strings(struct mlx5e_priv *priv, u8 *data,
int idx)
{
int max_nch = mlx5e_get_netdev_max_channels(priv->netdev);
+ bool is_xsk = priv->xsk.ever_used;
int i, j, tc;
for (i = 0; i < max_nch; i++)
@@ -1306,6 +1422,9 @@ static int mlx5e_grp_channels_fill_strings(struct mlx5e_priv *priv, u8 *data,
for (j = 0; j < NUM_RQ_STATS; j++)
sprintf(data + (idx++) * ETH_GSTRING_LEN,
rq_stats_desc[j].format, i);
+ for (j = 0; j < NUM_XSKRQ_STATS * is_xsk; j++)
+ sprintf(data + (idx++) * ETH_GSTRING_LEN,
+ xskrq_stats_desc[j].format, i);
for (j = 0; j < NUM_RQ_XDPSQ_STATS; j++)
sprintf(data + (idx++) * ETH_GSTRING_LEN,
rq_xdpsq_stats_desc[j].format, i);
@@ -1318,10 +1437,14 @@ static int mlx5e_grp_channels_fill_strings(struct mlx5e_priv *priv, u8 *data,
sq_stats_desc[j].format,
priv->channel_tc2txq[i][tc]);
- for (i = 0; i < max_nch; i++)
+ for (i = 0; i < max_nch; i++) {
+ for (j = 0; j < NUM_XSKSQ_STATS * is_xsk; j++)
+ sprintf(data + (idx++) * ETH_GSTRING_LEN,
+ xsksq_stats_desc[j].format, i);
for (j = 0; j < NUM_XDPSQ_STATS; j++)
sprintf(data + (idx++) * ETH_GSTRING_LEN,
xdpsq_stats_desc[j].format, i);
+ }
return idx;
}
@@ -1330,6 +1453,7 @@ static int mlx5e_grp_channels_fill_stats(struct mlx5e_priv *priv, u64 *data,
int idx)
{
int max_nch = mlx5e_get_netdev_max_channels(priv->netdev);
+ bool is_xsk = priv->xsk.ever_used;
int i, j, tc;
for (i = 0; i < max_nch; i++)
@@ -1343,6 +1467,10 @@ static int mlx5e_grp_channels_fill_stats(struct mlx5e_priv *priv, u64 *data,
data[idx++] =
MLX5E_READ_CTR64_CPU(&priv->channel_stats[i].rq,
rq_stats_desc, j);
+ for (j = 0; j < NUM_XSKRQ_STATS * is_xsk; j++)
+ data[idx++] =
+ MLX5E_READ_CTR64_CPU(&priv->channel_stats[i].xskrq,
+ xskrq_stats_desc, j);
for (j = 0; j < NUM_RQ_XDPSQ_STATS; j++)
data[idx++] =
MLX5E_READ_CTR64_CPU(&priv->channel_stats[i].rq_xdpsq,
@@ -1356,11 +1484,16 @@ static int mlx5e_grp_channels_fill_stats(struct mlx5e_priv *priv, u64 *data,
MLX5E_READ_CTR64_CPU(&priv->channel_stats[i].sq[tc],
sq_stats_desc, j);
- for (i = 0; i < max_nch; i++)
+ for (i = 0; i < max_nch; i++) {
+ for (j = 0; j < NUM_XSKSQ_STATS * is_xsk; j++)
+ data[idx++] =
+ MLX5E_READ_CTR64_CPU(&priv->channel_stats[i].xsksq,
+ xsksq_stats_desc, j);
for (j = 0; j < NUM_XDPSQ_STATS; j++)
data[idx++] =
MLX5E_READ_CTR64_CPU(&priv->channel_stats[i].xdpsq,
xdpsq_stats_desc, j);
+ }
return idx;
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
index cdddcc46971b..76ac111e14d0 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
@@ -46,6 +46,8 @@
#define MLX5E_DECLARE_TX_STAT(type, fld) "tx%d_"#fld, offsetof(type, fld)
#define MLX5E_DECLARE_XDPSQ_STAT(type, fld) "tx%d_xdp_"#fld, offsetof(type, fld)
#define MLX5E_DECLARE_RQ_XDPSQ_STAT(type, fld) "rx%d_xdp_tx_"#fld, offsetof(type, fld)
+#define MLX5E_DECLARE_XSKRQ_STAT(type, fld) "rx%d_xsk_"#fld, offsetof(type, fld)
+#define MLX5E_DECLARE_XSKSQ_STAT(type, fld) "tx%d_xsk_"#fld, offsetof(type, fld)
#define MLX5E_DECLARE_CH_STAT(type, fld) "ch%d_"#fld, offsetof(type, fld)
struct counter_desc {
@@ -116,12 +118,46 @@ struct mlx5e_sw_stats {
u64 ch_poll;
u64 ch_arm;
u64 ch_aff_change;
+ u64 ch_force_irq;
u64 ch_eq_rearm;
#ifdef CONFIG_MLX5_EN_TLS
+ u64 tx_tls_encrypted_packets;
+ u64 tx_tls_encrypted_bytes;
+ u64 tx_tls_ctx;
u64 tx_tls_ooo;
u64 tx_tls_resync_bytes;
+ u64 tx_tls_drop_no_sync_data;
+ u64 tx_tls_drop_bypass_req;
+ u64 tx_tls_dump_packets;
+ u64 tx_tls_dump_bytes;
#endif
+
+ u64 rx_xsk_packets;
+ u64 rx_xsk_bytes;
+ u64 rx_xsk_csum_complete;
+ u64 rx_xsk_csum_unnecessary;
+ u64 rx_xsk_csum_unnecessary_inner;
+ u64 rx_xsk_csum_none;
+ u64 rx_xsk_ecn_mark;
+ u64 rx_xsk_removed_vlan_packets;
+ u64 rx_xsk_xdp_drop;
+ u64 rx_xsk_xdp_redirect;
+ u64 rx_xsk_wqe_err;
+ u64 rx_xsk_mpwqe_filler_cqes;
+ u64 rx_xsk_mpwqe_filler_strides;
+ u64 rx_xsk_oversize_pkts_sw_drop;
+ u64 rx_xsk_buff_alloc_err;
+ u64 rx_xsk_cqe_compress_blks;
+ u64 rx_xsk_cqe_compress_pkts;
+ u64 rx_xsk_congst_umr;
+ u64 rx_xsk_arfs_err;
+ u64 tx_xsk_xmit;
+ u64 tx_xsk_mpwqe;
+ u64 tx_xsk_inlnw;
+ u64 tx_xsk_full;
+ u64 tx_xsk_err;
+ u64 tx_xsk_cqes;
};
struct mlx5e_qcounter_stats {
@@ -227,8 +263,15 @@ struct mlx5e_sq_stats {
u64 added_vlan_packets;
u64 nop;
#ifdef CONFIG_MLX5_EN_TLS
+ u64 tls_encrypted_packets;
+ u64 tls_encrypted_bytes;
+ u64 tls_ctx;
u64 tls_ooo;
u64 tls_resync_bytes;
+ u64 tls_drop_no_sync_data;
+ u64 tls_drop_bypass_req;
+ u64 tls_dump_packets;
+ u64 tls_dump_bytes;
#endif
/* less likely accessed in data path */
u64 csum_none;
@@ -256,6 +299,7 @@ struct mlx5e_ch_stats {
u64 poll;
u64 arm;
u64 aff_change;
+ u64 force_irq;
u64 eq_rearm;
};
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index e40c60d1631f..2d6436257f9d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -53,6 +53,7 @@
#include "en/port.h"
#include "en/tc_tun.h"
#include "lib/devcom.h"
+#include "lib/geneve.h"
struct mlx5_nic_flow_attr {
u32 action;
@@ -126,7 +127,7 @@ struct mlx5e_tc_flow {
};
struct mlx5e_tc_flow_parse_attr {
- struct ip_tunnel_info tun_info[MLX5_MAX_FLOW_FWD_VPORTS];
+ const struct ip_tunnel_info *tun_info[MLX5_MAX_FLOW_FWD_VPORTS];
struct net_device *filter_dev;
struct mlx5_flow_spec spec;
int num_mod_hdr_actions;
@@ -716,19 +717,22 @@ mlx5e_tc_add_nic_flow(struct mlx5e_priv *priv,
struct mlx5e_tc_flow *flow,
struct netlink_ext_ack *extack)
{
+ struct mlx5_flow_context *flow_context = &parse_attr->spec.flow_context;
struct mlx5_nic_flow_attr *attr = flow->nic_attr;
struct mlx5_core_dev *dev = priv->mdev;
struct mlx5_flow_destination dest[2] = {};
struct mlx5_flow_act flow_act = {
.action = attr->action,
- .flow_tag = attr->flow_tag,
.reformat_id = 0,
- .flags = FLOW_ACT_HAS_TAG | FLOW_ACT_NO_APPEND,
+ .flags = FLOW_ACT_NO_APPEND,
};
struct mlx5_fc *counter = NULL;
bool table_created = false;
int err, dest_ix = 0;
+ flow_context->flags |= FLOW_CONTEXT_HAS_TAG;
+ flow_context->flow_tag = attr->flow_tag;
+
if (flow->flags & MLX5E_TC_FLOW_HAIRPIN) {
err = mlx5e_hairpin_flow_add(priv, flow, parse_attr, extack);
if (err) {
@@ -799,7 +803,7 @@ mlx5e_tc_add_nic_flow(struct mlx5e_priv *priv,
}
if (attr->match_level != MLX5_MATCH_NONE)
- parse_attr->spec.match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
+ parse_attr->spec.match_criteria_enable |= MLX5_MATCH_OUTER_HEADERS;
flow->rule[0] = mlx5_add_flow_rules(priv->fs.tc.t, &parse_attr->spec,
&flow_act, dest, dest_ix);
@@ -1063,6 +1067,19 @@ err_max_prio_chain:
return err;
}
+static bool mlx5_flow_has_geneve_opt(struct mlx5e_tc_flow *flow)
+{
+ struct mlx5_flow_spec *spec = &flow->esw_attr->parse_attr->spec;
+ void *headers_v = MLX5_ADDR_OF(fte_match_param,
+ spec->match_value,
+ misc_parameters_3);
+ u32 geneve_tlv_opt_0_data = MLX5_GET(fte_match_set_misc3,
+ headers_v,
+ geneve_tlv_option_0_data);
+
+ return !!geneve_tlv_opt_0_data;
+}
+
static void mlx5e_tc_del_fdb_flow(struct mlx5e_priv *priv,
struct mlx5e_tc_flow *flow)
{
@@ -1084,6 +1101,9 @@ static void mlx5e_tc_del_fdb_flow(struct mlx5e_priv *priv,
mlx5e_tc_unoffload_fdb_rules(esw, flow, attr);
}
+ if (mlx5_flow_has_geneve_opt(flow))
+ mlx5_geneve_tlv_option_del(priv->mdev->geneve);
+
mlx5_eswitch_del_vlan_action(esw, attr);
for (out_index = 0; out_index < MLX5_MAX_FLOW_FWD_VPORTS; out_index++)
@@ -1330,7 +1350,7 @@ static void mlx5e_tc_del_flow(struct mlx5e_priv *priv,
static int parse_tunnel_attr(struct mlx5e_priv *priv,
struct mlx5_flow_spec *spec,
- struct tc_cls_flower_offload *f,
+ struct flow_cls_offload *f,
struct net_device *filter_dev, u8 *match_level)
{
struct netlink_ext_ack *extack = f->common.extack;
@@ -1338,8 +1358,7 @@ static int parse_tunnel_attr(struct mlx5e_priv *priv,
outer_headers);
void *headers_v = MLX5_ADDR_OF(fte_match_param, spec->match_value,
outer_headers);
- struct flow_rule *rule = tc_cls_flower_offload_flow_rule(f);
- struct flow_match_control enc_control;
+ struct flow_rule *rule = flow_cls_offload_flow_rule(f);
int err;
err = mlx5e_tc_tun_parse(filter_dev, priv, spec, f,
@@ -1350,9 +1369,7 @@ static int parse_tunnel_attr(struct mlx5e_priv *priv,
return err;
}
- flow_rule_match_enc_control(rule, &enc_control);
-
- if (enc_control.key->addr_type == FLOW_DISSECTOR_KEY_IPV4_ADDRS) {
+ if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS)) {
struct flow_match_ipv4_addrs match;
flow_rule_match_enc_ipv4_addrs(rule, &match);
@@ -1372,7 +1389,7 @@ static int parse_tunnel_attr(struct mlx5e_priv *priv,
MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, headers_c, ethertype);
MLX5_SET(fte_match_set_lyr_2_4, headers_v, ethertype, ETH_P_IP);
- } else if (enc_control.key->addr_type == FLOW_DISSECTOR_KEY_IPV6_ADDRS) {
+ } else if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_IPV6_ADDRS)) {
struct flow_match_ipv6_addrs match;
flow_rule_match_enc_ipv6_addrs(rule, &match);
@@ -1461,7 +1478,7 @@ static void *get_match_headers_value(u32 flags,
static int __parse_cls_flower(struct mlx5e_priv *priv,
struct mlx5_flow_spec *spec,
- struct tc_cls_flower_offload *f,
+ struct flow_cls_offload *f,
struct net_device *filter_dev,
u8 *match_level, u8 *tunnel_match_level)
{
@@ -1474,7 +1491,7 @@ static int __parse_cls_flower(struct mlx5e_priv *priv,
misc_parameters);
void *misc_v = MLX5_ADDR_OF(fte_match_param, spec->match_value,
misc_parameters);
- struct flow_rule *rule = tc_cls_flower_offload_flow_rule(f);
+ struct flow_rule *rule = flow_cls_offload_flow_rule(f);
struct flow_dissector *dissector = rule->match.dissector;
u16 addr_type = 0;
u8 ip_proto = 0;
@@ -1497,29 +1514,21 @@ static int __parse_cls_flower(struct mlx5e_priv *priv,
BIT(FLOW_DISSECTOR_KEY_ENC_CONTROL) |
BIT(FLOW_DISSECTOR_KEY_TCP) |
BIT(FLOW_DISSECTOR_KEY_IP) |
- BIT(FLOW_DISSECTOR_KEY_ENC_IP))) {
+ BIT(FLOW_DISSECTOR_KEY_ENC_IP) |
+ BIT(FLOW_DISSECTOR_KEY_ENC_OPTS))) {
NL_SET_ERR_MSG_MOD(extack, "Unsupported key");
netdev_warn(priv->netdev, "Unsupported key used: 0x%x\n",
dissector->used_keys);
return -EOPNOTSUPP;
}
- if ((flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS) ||
- flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_KEYID) ||
- flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_PORTS)) &&
- flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_CONTROL)) {
- struct flow_match_control match;
-
- flow_rule_match_enc_control(rule, &match);
- switch (match.key->addr_type) {
- case FLOW_DISSECTOR_KEY_IPV4_ADDRS:
- case FLOW_DISSECTOR_KEY_IPV6_ADDRS:
- if (parse_tunnel_attr(priv, spec, f, filter_dev, tunnel_match_level))
- return -EOPNOTSUPP;
- break;
- default:
+ if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS) ||
+ flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_IPV6_ADDRS) ||
+ flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_KEYID) ||
+ flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_PORTS) ||
+ flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_OPTS)) {
+ if (parse_tunnel_attr(priv, spec, f, filter_dev, tunnel_match_level))
return -EOPNOTSUPP;
- }
/* In decap flow, header pointers should point to the inner
* headers, outer header were already set by parse_tunnel_attr
@@ -1822,7 +1831,7 @@ static int __parse_cls_flower(struct mlx5e_priv *priv,
static int parse_cls_flower(struct mlx5e_priv *priv,
struct mlx5e_tc_flow *flow,
struct mlx5_flow_spec *spec,
- struct tc_cls_flower_offload *f,
+ struct flow_cls_offload *f,
struct net_device *filter_dev)
{
struct netlink_ext_ack *extack = f->common.extack;
@@ -2581,21 +2590,21 @@ static int parse_tc_nic_actions(struct mlx5e_priv *priv,
}
struct encap_key {
- struct ip_tunnel_key *ip_tun_key;
- int tunnel_type;
+ const struct ip_tunnel_key *ip_tun_key;
+ struct mlx5e_tc_tunnel *tc_tunnel;
};
static inline int cmp_encap_info(struct encap_key *a,
struct encap_key *b)
{
return memcmp(a->ip_tun_key, b->ip_tun_key, sizeof(*a->ip_tun_key)) ||
- a->tunnel_type != b->tunnel_type;
+ a->tc_tunnel->tunnel_type != b->tc_tunnel->tunnel_type;
}
static inline int hash_encap_info(struct encap_key *key)
{
return jhash(key->ip_tun_key, sizeof(*key->ip_tun_key),
- key->tunnel_type);
+ key->tc_tunnel->tunnel_type);
}
@@ -2625,7 +2634,7 @@ static int mlx5e_attach_encap(struct mlx5e_priv *priv,
struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
struct mlx5_esw_flow_attr *attr = flow->esw_attr;
struct mlx5e_tc_flow_parse_attr *parse_attr;
- struct ip_tunnel_info *tun_info;
+ const struct ip_tunnel_info *tun_info;
struct encap_key key, e_key;
struct mlx5e_encap_entry *e;
unsigned short family;
@@ -2634,17 +2643,17 @@ static int mlx5e_attach_encap(struct mlx5e_priv *priv,
int err = 0;
parse_attr = attr->parse_attr;
- tun_info = &parse_attr->tun_info[out_index];
+ tun_info = parse_attr->tun_info[out_index];
family = ip_tunnel_info_af(tun_info);
key.ip_tun_key = &tun_info->key;
- key.tunnel_type = mlx5e_tc_tun_get_type(mirred_dev);
+ key.tc_tunnel = mlx5e_get_tc_tun(mirred_dev);
hash_key = hash_encap_info(&key);
hash_for_each_possible_rcu(esw->offloads.encap_tbl, e,
encap_hlist, hash_key) {
- e_key.ip_tun_key = &e->tun_info.key;
- e_key.tunnel_type = e->tunnel_type;
+ e_key.ip_tun_key = &e->tun_info->key;
+ e_key.tc_tunnel = e->tunnel;
if (!cmp_encap_info(&e_key, &key)) {
found = true;
break;
@@ -2659,7 +2668,7 @@ static int mlx5e_attach_encap(struct mlx5e_priv *priv,
if (!e)
return -ENOMEM;
- e->tun_info = *tun_info;
+ e->tun_info = tun_info;
err = mlx5e_tc_tun_init_encap_attr(mirred_dev, priv, e, extack);
if (err)
goto out_err;
@@ -2793,6 +2802,16 @@ static int add_vlan_pop_action(struct mlx5e_priv *priv,
return err;
}
+bool mlx5e_is_valid_eswitch_fwd_dev(struct mlx5e_priv *priv,
+ struct net_device *out_dev)
+{
+ if (is_merged_eswitch_dev(priv, out_dev))
+ return true;
+
+ return mlx5e_eswitch_rep(out_dev) &&
+ same_hw_devs(priv, netdev_priv(out_dev));
+}
+
static int parse_tc_fdb_actions(struct mlx5e_priv *priv,
struct flow_action *flow_action,
struct mlx5e_tc_flow *flow,
@@ -2858,9 +2877,7 @@ static int parse_tc_fdb_actions(struct mlx5e_priv *priv,
action |= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST |
MLX5_FLOW_CONTEXT_ACTION_COUNT;
- if (netdev_port_same_parent_id(priv->netdev,
- out_dev) ||
- is_merged_eswitch_dev(priv, out_dev)) {
+ if (netdev_port_same_parent_id(priv->netdev, out_dev)) {
struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
struct net_device *uplink_dev = mlx5_eswitch_uplink_get_proto_dev(esw, REP_ETH);
struct net_device *uplink_upper = netdev_master_upper_dev_get(uplink_dev);
@@ -2877,6 +2894,7 @@ static int parse_tc_fdb_actions(struct mlx5e_priv *priv,
if (err)
return err;
}
+
if (is_vlan_dev(parse_attr->filter_dev)) {
err = add_vlan_pop_action(priv, attr,
&action);
@@ -2884,8 +2902,13 @@ static int parse_tc_fdb_actions(struct mlx5e_priv *priv,
return err;
}
- if (!mlx5e_eswitch_rep(out_dev))
+ if (!mlx5e_is_valid_eswitch_fwd_dev(priv, out_dev)) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "devices are not on same switch HW, can't offload forwarding");
+ pr_err("devices %s %s not on same switch HW, can't offload forwarding\n",
+ priv->netdev->name, out_dev->name);
return -EOPNOTSUPP;
+ }
out_priv = netdev_priv(out_dev);
rpriv = out_priv->ppriv;
@@ -2895,7 +2918,7 @@ static int parse_tc_fdb_actions(struct mlx5e_priv *priv,
} else if (encap) {
parse_attr->mirred_ifindex[attr->out_count] =
out_dev->ifindex;
- parse_attr->tun_info[attr->out_count] = *info;
+ parse_attr->tun_info[attr->out_count] = info;
encap = false;
attr->dests[attr->out_count].flags |=
MLX5_ESW_DEST_ENCAP;
@@ -3092,7 +3115,7 @@ static bool is_peer_flow_needed(struct mlx5e_tc_flow *flow)
static int
mlx5e_alloc_flow(struct mlx5e_priv *priv, int attr_size,
- struct tc_cls_flower_offload *f, u16 flow_flags,
+ struct flow_cls_offload *f, u16 flow_flags,
struct mlx5e_tc_flow_parse_attr **__parse_attr,
struct mlx5e_tc_flow **__flow)
{
@@ -3126,7 +3149,7 @@ static void
mlx5e_flow_esw_attr_init(struct mlx5_esw_flow_attr *esw_attr,
struct mlx5e_priv *priv,
struct mlx5e_tc_flow_parse_attr *parse_attr,
- struct tc_cls_flower_offload *f,
+ struct flow_cls_offload *f,
struct mlx5_eswitch_rep *in_rep,
struct mlx5_core_dev *in_mdev)
{
@@ -3148,13 +3171,13 @@ mlx5e_flow_esw_attr_init(struct mlx5_esw_flow_attr *esw_attr,
static struct mlx5e_tc_flow *
__mlx5e_add_fdb_flow(struct mlx5e_priv *priv,
- struct tc_cls_flower_offload *f,
+ struct flow_cls_offload *f,
u16 flow_flags,
struct net_device *filter_dev,
struct mlx5_eswitch_rep *in_rep,
struct mlx5_core_dev *in_mdev)
{
- struct flow_rule *rule = tc_cls_flower_offload_flow_rule(f);
+ struct flow_rule *rule = flow_cls_offload_flow_rule(f);
struct netlink_ext_ack *extack = f->common.extack;
struct mlx5e_tc_flow_parse_attr *parse_attr;
struct mlx5e_tc_flow *flow;
@@ -3198,7 +3221,7 @@ out:
return ERR_PTR(err);
}
-static int mlx5e_tc_add_fdb_peer_flow(struct tc_cls_flower_offload *f,
+static int mlx5e_tc_add_fdb_peer_flow(struct flow_cls_offload *f,
struct mlx5e_tc_flow *flow,
u16 flow_flags)
{
@@ -3250,7 +3273,7 @@ out:
static int
mlx5e_add_fdb_flow(struct mlx5e_priv *priv,
- struct tc_cls_flower_offload *f,
+ struct flow_cls_offload *f,
u16 flow_flags,
struct net_device *filter_dev,
struct mlx5e_tc_flow **__flow)
@@ -3284,12 +3307,12 @@ out:
static int
mlx5e_add_nic_flow(struct mlx5e_priv *priv,
- struct tc_cls_flower_offload *f,
+ struct flow_cls_offload *f,
u16 flow_flags,
struct net_device *filter_dev,
struct mlx5e_tc_flow **__flow)
{
- struct flow_rule *rule = tc_cls_flower_offload_flow_rule(f);
+ struct flow_rule *rule = flow_cls_offload_flow_rule(f);
struct netlink_ext_ack *extack = f->common.extack;
struct mlx5e_tc_flow_parse_attr *parse_attr;
struct mlx5e_tc_flow *flow;
@@ -3335,7 +3358,7 @@ out:
static int
mlx5e_tc_add_flow(struct mlx5e_priv *priv,
- struct tc_cls_flower_offload *f,
+ struct flow_cls_offload *f,
int flags,
struct net_device *filter_dev,
struct mlx5e_tc_flow **flow)
@@ -3349,7 +3372,7 @@ mlx5e_tc_add_flow(struct mlx5e_priv *priv,
if (!tc_can_offload_extack(priv->netdev, f->common.extack))
return -EOPNOTSUPP;
- if (esw && esw->mode == SRIOV_OFFLOADS)
+ if (esw && esw->mode == MLX5_ESWITCH_OFFLOADS)
err = mlx5e_add_fdb_flow(priv, f, flow_flags,
filter_dev, flow);
else
@@ -3360,7 +3383,7 @@ mlx5e_tc_add_flow(struct mlx5e_priv *priv,
}
int mlx5e_configure_flower(struct net_device *dev, struct mlx5e_priv *priv,
- struct tc_cls_flower_offload *f, int flags)
+ struct flow_cls_offload *f, int flags)
{
struct netlink_ext_ack *extack = f->common.extack;
struct rhashtable *tc_ht = get_tc_ht(priv, flags);
@@ -3407,7 +3430,7 @@ static bool same_flow_direction(struct mlx5e_tc_flow *flow, int flags)
}
int mlx5e_delete_flower(struct net_device *dev, struct mlx5e_priv *priv,
- struct tc_cls_flower_offload *f, int flags)
+ struct flow_cls_offload *f, int flags)
{
struct rhashtable *tc_ht = get_tc_ht(priv, flags);
struct mlx5e_tc_flow *flow;
@@ -3426,7 +3449,7 @@ int mlx5e_delete_flower(struct net_device *dev, struct mlx5e_priv *priv,
}
int mlx5e_stats_flower(struct net_device *dev, struct mlx5e_priv *priv,
- struct tc_cls_flower_offload *f, int flags)
+ struct flow_cls_offload *f, int flags)
{
struct mlx5_devcom *devcom = priv->mdev->priv.devcom;
struct rhashtable *tc_ht = get_tc_ht(priv, flags);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h
index f62e81902d27..3ab39275ca7d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h
@@ -54,12 +54,12 @@ int mlx5e_tc_esw_init(struct rhashtable *tc_ht);
void mlx5e_tc_esw_cleanup(struct rhashtable *tc_ht);
int mlx5e_configure_flower(struct net_device *dev, struct mlx5e_priv *priv,
- struct tc_cls_flower_offload *f, int flags);
+ struct flow_cls_offload *f, int flags);
int mlx5e_delete_flower(struct net_device *dev, struct mlx5e_priv *priv,
- struct tc_cls_flower_offload *f, int flags);
+ struct flow_cls_offload *f, int flags);
int mlx5e_stats_flower(struct net_device *dev, struct mlx5e_priv *priv,
- struct tc_cls_flower_offload *f, int flags);
+ struct flow_cls_offload *f, int flags);
struct mlx5e_encap_entry;
void mlx5e_tc_encap_flows_add(struct mlx5e_priv *priv,
@@ -74,6 +74,9 @@ int mlx5e_tc_num_filters(struct mlx5e_priv *priv, int flags);
void mlx5e_tc_reoffload_flows_work(struct work_struct *work);
+bool mlx5e_is_valid_eswitch_fwd_dev(struct mlx5e_priv *priv,
+ struct net_device *out_dev);
+
#else /* CONFIG_MLX5_ESWITCH */
static inline int mlx5e_tc_nic_init(struct mlx5e_priv *priv) { return 0; }
static inline void mlx5e_tc_nic_cleanup(struct mlx5e_priv *priv) {}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
index 701e5dc75bb0..600e92cb629a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
@@ -35,55 +35,12 @@
#include <net/geneve.h>
#include <net/dsfield.h>
#include "en.h"
+#include "en/txrx.h"
#include "ipoib/ipoib.h"
#include "en_accel/en_accel.h"
+#include "en_accel/ktls.h"
#include "lib/clock.h"
-#define MLX5E_SQ_NOPS_ROOM MLX5_SEND_WQE_MAX_WQEBBS
-
-#ifndef CONFIG_MLX5_EN_TLS
-#define MLX5E_SQ_STOP_ROOM (MLX5_SEND_WQE_MAX_WQEBBS +\
- MLX5E_SQ_NOPS_ROOM)
-#else
-/* TLS offload requires MLX5E_SQ_STOP_ROOM to have
- * enough room for a resync SKB, a normal SKB and a NOP
- */
-#define MLX5E_SQ_STOP_ROOM (2 * MLX5_SEND_WQE_MAX_WQEBBS +\
- MLX5E_SQ_NOPS_ROOM)
-#endif
-
-static inline void mlx5e_tx_dma_unmap(struct device *pdev,
- struct mlx5e_sq_dma *dma)
-{
- switch (dma->type) {
- case MLX5E_DMA_MAP_SINGLE:
- dma_unmap_single(pdev, dma->addr, dma->size, DMA_TO_DEVICE);
- break;
- case MLX5E_DMA_MAP_PAGE:
- dma_unmap_page(pdev, dma->addr, dma->size, DMA_TO_DEVICE);
- break;
- default:
- WARN_ONCE(true, "mlx5e_tx_dma_unmap unknown DMA type!\n");
- }
-}
-
-static inline struct mlx5e_sq_dma *mlx5e_dma_get(struct mlx5e_txqsq *sq, u32 i)
-{
- return &sq->db.dma_fifo[i & sq->dma_fifo_mask];
-}
-
-static inline void mlx5e_dma_push(struct mlx5e_txqsq *sq,
- dma_addr_t addr,
- u32 size,
- enum mlx5e_dma_map_type map_type)
-{
- struct mlx5e_sq_dma *dma = mlx5e_dma_get(sq, sq->dma_fifo_pc++);
-
- dma->addr = addr;
- dma->size = size;
- dma->type = map_type;
-}
-
static void mlx5e_dma_unmap_wqe_err(struct mlx5e_txqsq *sq, u8 num_dma)
{
int i;
@@ -277,23 +234,6 @@ dma_unmap_wqe_err:
return -ENOMEM;
}
-static inline void mlx5e_fill_sq_frag_edge(struct mlx5e_txqsq *sq,
- struct mlx5_wq_cyc *wq,
- u16 pi, u16 nnops)
-{
- struct mlx5e_tx_wqe_info *edge_wi, *wi = &sq->db.wqe_info[pi];
-
- edge_wi = wi + nnops;
-
- /* fill sq frag edge with nops to avoid wqe wrapping two pages */
- for (; wi < edge_wi; wi++) {
- wi->skb = NULL;
- wi->num_wqebbs = 1;
- mlx5e_post_nop(wq, sq->sqn, &sq->pc);
- }
- sq->stats->nop += nnops;
-}
-
static inline void
mlx5e_txwqe_complete(struct mlx5e_txqsq *sq, struct sk_buff *skb,
u8 opcode, u16 ds_cnt, u8 num_wqebbs, u32 num_bytes, u8 num_dma,
@@ -301,6 +241,7 @@ mlx5e_txwqe_complete(struct mlx5e_txqsq *sq, struct sk_buff *skb,
bool xmit_more)
{
struct mlx5_wq_cyc *wq = &sq->wq;
+ bool send_doorbell;
wi->num_bytes = num_bytes;
wi->num_dma = num_dma;
@@ -310,23 +251,21 @@ mlx5e_txwqe_complete(struct mlx5e_txqsq *sq, struct sk_buff *skb,
cseg->opmod_idx_opcode = cpu_to_be32((sq->pc << 8) | opcode);
cseg->qpn_ds = cpu_to_be32((sq->sqn << 8) | ds_cnt);
- netdev_tx_sent_queue(sq->txq, num_bytes);
-
if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP))
skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
sq->pc += wi->num_wqebbs;
- if (unlikely(!mlx5e_wqc_has_room_for(wq, sq->cc, sq->pc, MLX5E_SQ_STOP_ROOM))) {
+ if (unlikely(!mlx5e_wqc_has_room_for(wq, sq->cc, sq->pc, sq->stop_room))) {
netif_tx_stop_queue(sq->txq);
sq->stats->stopped++;
}
- if (!xmit_more || netif_xmit_stopped(sq->txq))
+ send_doorbell = __netdev_tx_sent_queue(sq->txq, num_bytes,
+ xmit_more);
+ if (send_doorbell)
mlx5e_notify_hw(wq, sq->pc, sq->uar_map, cseg);
}
-#define INL_HDR_START_SZ (sizeof(((struct mlx5_wqe_eth_seg *)NULL)->inline_hdr.start))
-
netdev_tx_t mlx5e_sq_xmit(struct mlx5e_txqsq *sq, struct sk_buff *skb,
struct mlx5e_tx_wqe *wqe, u16 pi, bool xmit_more)
{
@@ -353,9 +292,12 @@ netdev_tx_t mlx5e_sq_xmit(struct mlx5e_txqsq *sq, struct sk_buff *skb,
num_bytes = skb->len + (skb_shinfo(skb)->gso_segs - 1) * ihs;
stats->packets += skb_shinfo(skb)->gso_segs;
} else {
+ u8 mode = mlx5e_transport_inline_tx_wqe(wqe) ?
+ MLX5_INLINE_MODE_TCP_UDP : sq->min_inline_mode;
+
opcode = MLX5_OPCODE_SEND;
mss = 0;
- ihs = mlx5e_calc_min_inline(sq->min_inline_mode, skb);
+ ihs = mlx5e_calc_min_inline(mode, skb);
num_bytes = max_t(unsigned int, skb->len, ETH_ZLEN);
stats->packets++;
}
@@ -380,11 +322,17 @@ netdev_tx_t mlx5e_sq_xmit(struct mlx5e_txqsq *sq, struct sk_buff *skb,
#ifdef CONFIG_MLX5_EN_IPSEC
struct mlx5_wqe_eth_seg cur_eth = wqe->eth;
#endif
+#ifdef CONFIG_MLX5_EN_TLS
+ struct mlx5_wqe_ctrl_seg cur_ctrl = wqe->ctrl;
+#endif
mlx5e_fill_sq_frag_edge(sq, wq, pi, contig_wqebbs_room);
- mlx5e_sq_fetch_wqe(sq, &wqe, &pi);
+ wqe = mlx5e_sq_fetch_wqe(sq, sizeof(*wqe), &pi);
#ifdef CONFIG_MLX5_EN_IPSEC
wqe->eth = cur_eth;
#endif
+#ifdef CONFIG_MLX5_EN_TLS
+ wqe->ctrl = cur_ctrl;
+#endif
}
/* fill wqe */
@@ -443,7 +391,7 @@ netdev_tx_t mlx5e_xmit(struct sk_buff *skb, struct net_device *dev)
u16 pi;
sq = priv->txq2sq[skb_get_queue_mapping(skb)];
- mlx5e_sq_fetch_wqe(sq, &wqe, &pi);
+ wqe = mlx5e_sq_fetch_wqe(sq, sizeof(*wqe), &pi);
/* might send skbs and update wqe and pi */
skb = mlx5e_accel_handle_tx(skb, sq, dev, &wqe, &pi);
@@ -531,8 +479,16 @@ bool mlx5e_poll_tx_cq(struct mlx5e_cq *cq, int napi_budget)
wi = &sq->db.wqe_info[ci];
skb = wi->skb;
- if (unlikely(!skb)) { /* nop */
- sqcc++;
+ if (unlikely(!skb)) {
+#ifdef CONFIG_MLX5_EN_TLS
+ if (wi->resync_dump_frag) {
+ struct mlx5e_sq_dma *dma =
+ mlx5e_dma_get(sq, dma_fifo_cc++);
+
+ mlx5e_ktls_tx_handle_resync_dump_comp(sq, wi, dma);
+ }
+#endif
+ sqcc += wi->num_wqebbs;
continue;
}
@@ -574,8 +530,7 @@ bool mlx5e_poll_tx_cq(struct mlx5e_cq *cq, int napi_budget)
netdev_tx_completed_queue(sq->txq, npkts, nbytes);
if (netif_tx_queue_stopped(sq->txq) &&
- mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc,
- MLX5E_SQ_STOP_ROOM) &&
+ mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, sq->stop_room) &&
!test_bit(MLX5E_SQ_STATE_RECOVERING, &sq->state)) {
netif_tx_wake_queue(sq->txq);
stats->wake++;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c
index f9862bf75491..c50b6f0769c8 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c
@@ -33,6 +33,7 @@
#include <linux/irq.h>
#include "en.h"
#include "en/xdp.h"
+#include "en/xsk/tx.h"
static inline bool mlx5e_channel_no_affinity_change(struct mlx5e_channel *c)
{
@@ -48,26 +49,24 @@ static inline bool mlx5e_channel_no_affinity_change(struct mlx5e_channel *c)
static void mlx5e_handle_tx_dim(struct mlx5e_txqsq *sq)
{
struct mlx5e_sq_stats *stats = sq->stats;
- struct net_dim_sample dim_sample;
+ struct dim_sample dim_sample;
if (unlikely(!test_bit(MLX5E_SQ_STATE_AM, &sq->state)))
return;
- net_dim_sample(sq->cq.event_ctr, stats->packets, stats->bytes,
- &dim_sample);
+ dim_update_sample(sq->cq.event_ctr, stats->packets, stats->bytes, &dim_sample);
net_dim(&sq->dim, dim_sample);
}
static void mlx5e_handle_rx_dim(struct mlx5e_rq *rq)
{
struct mlx5e_rq_stats *stats = rq->stats;
- struct net_dim_sample dim_sample;
+ struct dim_sample dim_sample;
if (unlikely(!test_bit(MLX5E_RQ_STATE_AM, &rq->state)))
return;
- net_dim_sample(rq->cq.event_ctr, stats->packets, stats->bytes,
- &dim_sample);
+ dim_update_sample(rq->cq.event_ctr, stats->packets, stats->bytes, &dim_sample);
net_dim(&rq->dim, dim_sample);
}
@@ -87,7 +86,12 @@ int mlx5e_napi_poll(struct napi_struct *napi, int budget)
struct mlx5e_channel *c = container_of(napi, struct mlx5e_channel,
napi);
struct mlx5e_ch_stats *ch_stats = c->stats;
+ struct mlx5e_xdpsq *xsksq = &c->xsksq;
+ struct mlx5e_rq *xskrq = &c->xskrq;
struct mlx5e_rq *rq = &c->rq;
+ bool xsk_open = test_bit(MLX5E_CHANNEL_STATE_XSK, c->state);
+ bool aff_change = false;
+ bool busy_xsk = false;
bool busy = false;
int work_done = 0;
int i;
@@ -97,22 +101,38 @@ int mlx5e_napi_poll(struct napi_struct *napi, int budget)
for (i = 0; i < c->num_tc; i++)
busy |= mlx5e_poll_tx_cq(&c->sq[i].cq, budget);
- busy |= mlx5e_poll_xdpsq_cq(&c->xdpsq.cq, NULL);
+ busy |= mlx5e_poll_xdpsq_cq(&c->xdpsq.cq);
if (c->xdp)
- busy |= mlx5e_poll_xdpsq_cq(&rq->xdpsq.cq, rq);
+ busy |= mlx5e_poll_xdpsq_cq(&c->rq_xdpsq.cq);
if (likely(budget)) { /* budget=0 means: don't poll rx rings */
- work_done = mlx5e_poll_rx_cq(&rq->cq, budget);
+ if (xsk_open)
+ work_done = mlx5e_poll_rx_cq(&xskrq->cq, budget);
+
+ if (likely(budget - work_done))
+ work_done += mlx5e_poll_rx_cq(&rq->cq, budget - work_done);
+
busy |= work_done == budget;
}
- busy |= c->rq.post_wqes(rq);
+ mlx5e_poll_ico_cq(&c->icosq.cq);
+
+ busy |= rq->post_wqes(rq);
+ if (xsk_open) {
+ mlx5e_poll_ico_cq(&c->xskicosq.cq);
+ busy |= mlx5e_poll_xdpsq_cq(&xsksq->cq);
+ busy_xsk |= mlx5e_xsk_tx(xsksq, MLX5E_TX_XSK_POLL_BUDGET);
+ busy_xsk |= xskrq->post_wqes(xskrq);
+ }
+
+ busy |= busy_xsk;
if (busy) {
if (likely(mlx5e_channel_no_affinity_change(c)))
return budget;
ch_stats->aff_change++;
+ aff_change = true;
if (budget && work_done == budget)
work_done--;
}
@@ -133,10 +153,22 @@ int mlx5e_napi_poll(struct napi_struct *napi, int budget)
mlx5e_cq_arm(&c->icosq.cq);
mlx5e_cq_arm(&c->xdpsq.cq);
+ if (xsk_open) {
+ mlx5e_handle_rx_dim(xskrq);
+ mlx5e_cq_arm(&c->xskicosq.cq);
+ mlx5e_cq_arm(&xsksq->cq);
+ mlx5e_cq_arm(&xskrq->cq);
+ }
+
+ if (unlikely(aff_change && busy_xsk)) {
+ mlx5e_trigger_irq(&c->icosq);
+ ch_stats->force_irq++;
+ }
+
return work_done;
}
-void mlx5e_completion_event(struct mlx5_core_cq *mcq)
+void mlx5e_completion_event(struct mlx5_core_cq *mcq, struct mlx5_eqe *eqe)
{
struct mlx5e_cq *cq = container_of(mcq, struct mlx5e_cq, mcq);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
index 23883d1fa22f..41f25ea2e8d9 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
@@ -61,17 +61,21 @@ enum {
MLX5_EQ_DOORBEL_OFFSET = 0x40,
};
-struct mlx5_irq_info {
- cpumask_var_t mask;
- char name[MLX5_MAX_IRQ_NAME];
- void *context; /* dev_id provided to request_irq */
+/* The budget must be smaller than MLX5_NUM_SPARE_EQE to guarantee that we
+ * update the ci before we have polled all the entries in the EQ. Since it is
+ * used to set the EQ size, the budget is also smaller than the EQ size.
+ */
+enum {
+ MLX5_EQ_POLLING_BUDGET = 128,
};
+static_assert(MLX5_EQ_POLLING_BUDGET <= MLX5_NUM_SPARE_EQE);
+
struct mlx5_eq_table {
struct list_head comp_eqs_list;
- struct mlx5_eq pages_eq;
- struct mlx5_eq cmd_eq;
- struct mlx5_eq async_eq;
+ struct mlx5_eq_async pages_eq;
+ struct mlx5_eq_async cmd_eq;
+ struct mlx5_eq_async async_eq;
struct atomic_notifier_head nh[MLX5_EVENT_TYPE_MAX];
@@ -79,11 +83,8 @@ struct mlx5_eq_table {
struct mlx5_nb cq_err_nb;
struct mutex lock; /* sync async eqs creations */
- int num_comp_vectors;
- struct mlx5_irq_info *irq_info;
-#ifdef CONFIG_RFS_ACCEL
- struct cpu_rmap *rmap;
-#endif
+ int num_comp_eqs;
+ struct mlx5_irq_table *irq_table;
};
#define MLX5_ASYNC_EVENT_MASK ((1ull << MLX5_EVENT_TYPE_PATH_MIG) | \
@@ -124,16 +125,24 @@ static struct mlx5_core_cq *mlx5_eq_cq_get(struct mlx5_eq *eq, u32 cqn)
return cq;
}
-static irqreturn_t mlx5_eq_comp_int(int irq, void *eq_ptr)
+static int mlx5_eq_comp_int(struct notifier_block *nb,
+ __always_unused unsigned long action,
+ __always_unused void *data)
{
- struct mlx5_eq_comp *eq_comp = eq_ptr;
- struct mlx5_eq *eq = eq_ptr;
+ struct mlx5_eq_comp *eq_comp =
+ container_of(nb, struct mlx5_eq_comp, irq_nb);
+ struct mlx5_eq *eq = &eq_comp->core;
struct mlx5_eqe *eqe;
- int set_ci = 0;
+ int num_eqes = 0;
u32 cqn = -1;
- while ((eqe = next_eqe_sw(eq))) {
+ eqe = next_eqe_sw(eq);
+ if (!eqe)
+ goto out;
+
+ do {
struct mlx5_core_cq *cq;
+
/* Make sure we read EQ entry contents after we've
* checked the ownership bit.
*/
@@ -144,33 +153,23 @@ static irqreturn_t mlx5_eq_comp_int(int irq, void *eq_ptr)
cq = mlx5_eq_cq_get(eq, cqn);
if (likely(cq)) {
++cq->arm_sn;
- cq->comp(cq);
+ cq->comp(cq, eqe);
mlx5_cq_put(cq);
} else {
mlx5_core_warn(eq->dev, "Completion event for bogus CQ 0x%x\n", cqn);
}
++eq->cons_index;
- ++set_ci;
- /* The HCA will think the queue has overflowed if we
- * don't tell it we've been processing events. We
- * create our EQs with MLX5_NUM_SPARE_EQE extra
- * entries, so we must update our consumer index at
- * least that often.
- */
- if (unlikely(set_ci >= MLX5_NUM_SPARE_EQE)) {
- eq_update_ci(eq, 0);
- set_ci = 0;
- }
- }
+ } while ((++num_eqes < MLX5_EQ_POLLING_BUDGET) && (eqe = next_eqe_sw(eq)));
+out:
eq_update_ci(eq, 1);
if (cqn != -1)
tasklet_schedule(&eq_comp->tasklet_ctx.task);
- return IRQ_HANDLED;
+ return 0;
}
/* Some architectures don't latch interrupts when they are disabled, so using
@@ -184,25 +183,32 @@ u32 mlx5_eq_poll_irq_disabled(struct mlx5_eq_comp *eq)
disable_irq(eq->core.irqn);
count_eqe = eq->core.cons_index;
- mlx5_eq_comp_int(eq->core.irqn, eq);
+ mlx5_eq_comp_int(&eq->irq_nb, 0, NULL);
count_eqe = eq->core.cons_index - count_eqe;
enable_irq(eq->core.irqn);
return count_eqe;
}
-static irqreturn_t mlx5_eq_async_int(int irq, void *eq_ptr)
+static int mlx5_eq_async_int(struct notifier_block *nb,
+ unsigned long action, void *data)
{
- struct mlx5_eq *eq = eq_ptr;
+ struct mlx5_eq_async *eq_async =
+ container_of(nb, struct mlx5_eq_async, irq_nb);
+ struct mlx5_eq *eq = &eq_async->core;
struct mlx5_eq_table *eqt;
struct mlx5_core_dev *dev;
struct mlx5_eqe *eqe;
- int set_ci = 0;
+ int num_eqes = 0;
dev = eq->dev;
eqt = dev->priv.eq_table;
- while ((eqe = next_eqe_sw(eq))) {
+ eqe = next_eqe_sw(eq);
+ if (!eqe)
+ goto out;
+
+ do {
/*
* Make sure we read EQ entry contents after we've
* checked the ownership bit.
@@ -217,23 +223,13 @@ static irqreturn_t mlx5_eq_async_int(int irq, void *eq_ptr)
atomic_notifier_call_chain(&eqt->nh[MLX5_EVENT_TYPE_NOTIFY_ANY], eqe->type, eqe);
++eq->cons_index;
- ++set_ci;
- /* The HCA will think the queue has overflowed if we
- * don't tell it we've been processing events. We
- * create our EQs with MLX5_NUM_SPARE_EQE extra
- * entries, so we must update our consumer index at
- * least that often.
- */
- if (unlikely(set_ci >= MLX5_NUM_SPARE_EQE)) {
- eq_update_ci(eq, 0);
- set_ci = 0;
- }
- }
+ } while ((++num_eqes < MLX5_EQ_POLLING_BUDGET) && (eqe = next_eqe_sw(eq)));
+out:
eq_update_ci(eq, 1);
- return IRQ_HANDLED;
+ return 0;
}
static void init_eq_buf(struct mlx5_eq *eq)
@@ -248,22 +244,19 @@ static void init_eq_buf(struct mlx5_eq *eq)
}
static int
-create_map_eq(struct mlx5_core_dev *dev, struct mlx5_eq *eq, const char *name,
+create_map_eq(struct mlx5_core_dev *dev, struct mlx5_eq *eq,
struct mlx5_eq_param *param)
{
- struct mlx5_eq_table *eq_table = dev->priv.eq_table;
struct mlx5_cq_table *cq_table = &eq->cq_table;
u32 out[MLX5_ST_SZ_DW(create_eq_out)] = {0};
struct mlx5_priv *priv = &dev->priv;
- u8 vecidx = param->index;
+ u8 vecidx = param->irq_index;
__be64 *pas;
void *eqc;
int inlen;
u32 *in;
int err;
-
- if (eq_table->irq_info[vecidx].context)
- return -EEXIST;
+ int i;
/* Init CQ table */
memset(cq_table, 0, sizeof(*cq_table));
@@ -291,10 +284,12 @@ create_map_eq(struct mlx5_core_dev *dev, struct mlx5_eq *eq, const char *name,
mlx5_fill_page_array(&eq->buf, pas);
MLX5_SET(create_eq_in, in, opcode, MLX5_CMD_OP_CREATE_EQ);
- if (!param->mask && MLX5_CAP_GEN(dev, log_max_uctx))
+ if (!param->mask[0] && MLX5_CAP_GEN(dev, log_max_uctx))
MLX5_SET(create_eq_in, in, uid, MLX5_SHARED_RESOURCE_UID);
- MLX5_SET64(create_eq_in, in, event_bitmask, param->mask);
+ for (i = 0; i < 4; i++)
+ MLX5_ARRAY_SET64(create_eq_in, in, event_bitmask, i,
+ param->mask[i]);
eqc = MLX5_ADDR_OF(create_eq_in, in, eq_context_entry);
MLX5_SET(eqc, eqc, log_eq_size, ilog2(eq->nent));
@@ -307,34 +302,19 @@ create_map_eq(struct mlx5_core_dev *dev, struct mlx5_eq *eq, const char *name,
if (err)
goto err_in;
- snprintf(eq_table->irq_info[vecidx].name, MLX5_MAX_IRQ_NAME, "%s@pci:%s",
- name, pci_name(dev->pdev));
- eq_table->irq_info[vecidx].context = param->context;
-
eq->vecidx = vecidx;
eq->eqn = MLX5_GET(create_eq_out, out, eq_number);
eq->irqn = pci_irq_vector(dev->pdev, vecidx);
eq->dev = dev;
eq->doorbell = priv->uar->map + MLX5_EQ_DOORBEL_OFFSET;
- err = request_irq(eq->irqn, param->handler, 0,
- eq_table->irq_info[vecidx].name, param->context);
- if (err)
- goto err_eq;
err = mlx5_debug_eq_add(dev, eq);
if (err)
- goto err_irq;
-
- /* EQs are created in ARMED state
- */
- eq_update_ci(eq, 1);
+ goto err_eq;
kvfree(in);
return 0;
-err_irq:
- free_irq(eq->irqn, eq);
-
err_eq:
mlx5_cmd_destroy_eq(dev, eq->eqn);
@@ -346,18 +326,48 @@ err_buf:
return err;
}
-static int destroy_unmap_eq(struct mlx5_core_dev *dev, struct mlx5_eq *eq)
+/**
+ * mlx5_eq_enable - Enable EQ for receiving EQEs
+ * @dev: Device which owns the eq
+ * @eq: EQ to enable
+ * @nb: Notifier call block
+ * Must be called after the EQ is created in the device.
+ */
+int mlx5_eq_enable(struct mlx5_core_dev *dev, struct mlx5_eq *eq,
+ struct notifier_block *nb)
{
struct mlx5_eq_table *eq_table = dev->priv.eq_table;
- struct mlx5_irq_info *irq_info;
int err;
- irq_info = &eq_table->irq_info[eq->vecidx];
+ err = mlx5_irq_attach_nb(eq_table->irq_table, eq->vecidx, nb);
+ if (!err)
+ eq_update_ci(eq, 1);
- mlx5_debug_eq_remove(dev, eq);
+ return err;
+}
+EXPORT_SYMBOL(mlx5_eq_enable);
+
+/**
+ * mlx5_eq_disable - Disable EQ for receiving EQEs
+ * @dev: Device which owns the eq
+ * @eq: EQ to disable
+ * @nb: Notifier call block
+ * Must be called before the EQ is destroyed.
+ */
+void mlx5_eq_disable(struct mlx5_core_dev *dev, struct mlx5_eq *eq,
+ struct notifier_block *nb)
+{
+ struct mlx5_eq_table *eq_table = dev->priv.eq_table;
+
+ mlx5_irq_detach_nb(eq_table->irq_table, eq->vecidx, nb);
+}
+EXPORT_SYMBOL(mlx5_eq_disable);
+
+static int destroy_unmap_eq(struct mlx5_core_dev *dev, struct mlx5_eq *eq)
+{
+ int err;
- free_irq(eq->irqn, irq_info->context);
- irq_info->context = NULL;
+ mlx5_debug_eq_remove(dev, eq);
err = mlx5_cmd_destroy_eq(dev, eq->eqn);
if (err)
@@ -382,7 +392,7 @@ int mlx5_eq_add_cq(struct mlx5_eq *eq, struct mlx5_core_cq *cq)
return err;
}
-int mlx5_eq_del_cq(struct mlx5_eq *eq, struct mlx5_core_cq *cq)
+void mlx5_eq_del_cq(struct mlx5_eq *eq, struct mlx5_core_cq *cq)
{
struct mlx5_cq_table *table = &eq->cq_table;
struct mlx5_core_cq *tmp;
@@ -392,16 +402,14 @@ int mlx5_eq_del_cq(struct mlx5_eq *eq, struct mlx5_core_cq *cq)
spin_unlock(&table->lock);
if (!tmp) {
- mlx5_core_warn(eq->dev, "cq 0x%x not found in eq 0x%x tree\n", eq->eqn, cq->cqn);
- return -ENOENT;
- }
-
- if (tmp != cq) {
- mlx5_core_warn(eq->dev, "corruption on cqn 0x%x in eq 0x%x\n", eq->eqn, cq->cqn);
- return -EINVAL;
+		mlx5_core_dbg(eq->dev, "cq 0x%x not found in eq 0x%x tree\n",
+			      cq->cqn, eq->eqn);
+ return;
}
- return 0;
+ if (tmp != cq)
+		mlx5_core_dbg(eq->dev, "corruption on cqn 0x%x in eq 0x%x\n",
+			      cq->cqn, eq->eqn);
}
int mlx5_eq_table_init(struct mlx5_core_dev *dev)
@@ -423,6 +431,7 @@ int mlx5_eq_table_init(struct mlx5_core_dev *dev)
for (i = 0; i < MLX5_EVENT_TYPE_MAX; i++)
ATOMIC_INIT_NOTIFIER_HEAD(&eq_table->nh[i]);
+ eq_table->irq_table = dev->priv.irq_table;
return 0;
kvfree_eq_table:
@@ -439,19 +448,20 @@ void mlx5_eq_table_cleanup(struct mlx5_core_dev *dev)
/* Async EQs */
-static int create_async_eq(struct mlx5_core_dev *dev, const char *name,
+static int create_async_eq(struct mlx5_core_dev *dev,
struct mlx5_eq *eq, struct mlx5_eq_param *param)
{
struct mlx5_eq_table *eq_table = dev->priv.eq_table;
int err;
mutex_lock(&eq_table->lock);
- if (param->index >= MLX5_EQ_MAX_ASYNC_EQS) {
- err = -ENOSPC;
+ /* Async EQs must share irq index 0 */
+ if (param->irq_index != 0) {
+ err = -EINVAL;
goto unlock;
}
- err = create_map_eq(dev, eq, name, param);
+ err = create_map_eq(dev, eq, param);
unlock:
mutex_unlock(&eq_table->lock);
return err;
@@ -480,7 +490,7 @@ static int cq_err_event_notifier(struct notifier_block *nb,
/* type == MLX5_EVENT_TYPE_CQ_ERROR */
eqt = mlx5_nb_cof(nb, struct mlx5_eq_table, cq_err_nb);
- eq = &eqt->async_eq;
+ eq = &eqt->async_eq.core;
eqe = data;
cqn = be32_to_cpu(eqe->data.cq_err.cqn) & 0xffffff;
@@ -493,14 +503,31 @@ static int cq_err_event_notifier(struct notifier_block *nb,
return NOTIFY_OK;
}
- cq->event(cq, type);
+ if (cq->event)
+ cq->event(cq, type);
mlx5_cq_put(cq);
return NOTIFY_OK;
}
-static u64 gather_async_events_mask(struct mlx5_core_dev *dev)
+static void gather_user_async_events(struct mlx5_core_dev *dev, u64 mask[4])
+{
+ __be64 *user_unaffiliated_events;
+ __be64 *user_affiliated_events;
+ int i;
+
+ user_affiliated_events =
+ MLX5_CAP_DEV_EVENT(dev, user_affiliated_events);
+ user_unaffiliated_events =
+ MLX5_CAP_DEV_EVENT(dev, user_unaffiliated_events);
+
+ for (i = 0; i < 4; i++)
+ mask[i] |= be64_to_cpu(user_affiliated_events[i] |
+ user_unaffiliated_events[i]);
+}
+
+static void gather_async_events_mask(struct mlx5_core_dev *dev, u64 mask[4])
{
u64 async_event_mask = MLX5_ASYNC_EVENT_MASK;
@@ -533,10 +560,14 @@ static u64 gather_async_events_mask(struct mlx5_core_dev *dev)
if (MLX5_CAP_GEN(dev, max_num_of_monitor_counters))
async_event_mask |= (1ull << MLX5_EVENT_TYPE_MONITOR_COUNTER);
- if (mlx5_core_is_ecpf_esw_manager(dev))
- async_event_mask |= (1ull << MLX5_EVENT_TYPE_HOST_PARAMS_CHANGE);
+ if (mlx5_eswitch_is_funcs_handler(dev))
+ async_event_mask |=
+ (1ull << MLX5_EVENT_TYPE_ESW_FUNCTIONS_CHANGED);
- return async_event_mask;
+ mask[0] = async_event_mask;
+
+ if (MLX5_CAP_GEN(dev, event_cap))
+ gather_user_async_events(dev, mask);
}
static int create_async_eqs(struct mlx5_core_dev *dev)
@@ -548,55 +579,76 @@ static int create_async_eqs(struct mlx5_core_dev *dev)
MLX5_NB_INIT(&table->cq_err_nb, cq_err_event_notifier, CQ_ERROR);
mlx5_eq_notifier_register(dev, &table->cq_err_nb);
+ table->cmd_eq.irq_nb.notifier_call = mlx5_eq_async_int;
param = (struct mlx5_eq_param) {
- .index = MLX5_EQ_CMD_IDX,
- .mask = 1ull << MLX5_EVENT_TYPE_CMD,
+ .irq_index = 0,
.nent = MLX5_NUM_CMD_EQE,
- .context = &table->cmd_eq,
- .handler = mlx5_eq_async_int,
};
- err = create_async_eq(dev, "mlx5_cmd_eq", &table->cmd_eq, &param);
+
+ param.mask[0] = 1ull << MLX5_EVENT_TYPE_CMD;
+ err = create_async_eq(dev, &table->cmd_eq.core, &param);
if (err) {
mlx5_core_warn(dev, "failed to create cmd EQ %d\n", err);
goto err0;
}
-
+ err = mlx5_eq_enable(dev, &table->cmd_eq.core, &table->cmd_eq.irq_nb);
+ if (err) {
+ mlx5_core_warn(dev, "failed to enable cmd EQ %d\n", err);
+ goto err1;
+ }
mlx5_cmd_use_events(dev);
+ table->async_eq.irq_nb.notifier_call = mlx5_eq_async_int;
param = (struct mlx5_eq_param) {
- .index = MLX5_EQ_ASYNC_IDX,
- .mask = gather_async_events_mask(dev),
+ .irq_index = 0,
.nent = MLX5_NUM_ASYNC_EQE,
- .context = &table->async_eq,
- .handler = mlx5_eq_async_int,
};
- err = create_async_eq(dev, "mlx5_async_eq", &table->async_eq, &param);
+
+ gather_async_events_mask(dev, param.mask);
+ err = create_async_eq(dev, &table->async_eq.core, &param);
if (err) {
mlx5_core_warn(dev, "failed to create async EQ %d\n", err);
- goto err1;
+ goto err2;
+ }
+ err = mlx5_eq_enable(dev, &table->async_eq.core,
+ &table->async_eq.irq_nb);
+ if (err) {
+ mlx5_core_warn(dev, "failed to enable async EQ %d\n", err);
+ goto err3;
}
+ table->pages_eq.irq_nb.notifier_call = mlx5_eq_async_int;
param = (struct mlx5_eq_param) {
- .index = MLX5_EQ_PAGEREQ_IDX,
- .mask = 1 << MLX5_EVENT_TYPE_PAGE_REQUEST,
+ .irq_index = 0,
.nent = /* TODO: sriov max_vf + */ 1,
- .context = &table->pages_eq,
- .handler = mlx5_eq_async_int,
};
- err = create_async_eq(dev, "mlx5_pages_eq", &table->pages_eq, &param);
+
+ param.mask[0] = 1ull << MLX5_EVENT_TYPE_PAGE_REQUEST;
+ err = create_async_eq(dev, &table->pages_eq.core, &param);
if (err) {
mlx5_core_warn(dev, "failed to create pages EQ %d\n", err);
- goto err2;
+ goto err4;
+ }
+ err = mlx5_eq_enable(dev, &table->pages_eq.core,
+ &table->pages_eq.irq_nb);
+ if (err) {
+ mlx5_core_warn(dev, "failed to enable pages EQ %d\n", err);
+ goto err5;
}
return err;
+err5:
+ destroy_async_eq(dev, &table->pages_eq.core);
+err4:
+ mlx5_eq_disable(dev, &table->async_eq.core, &table->async_eq.irq_nb);
+err3:
+ destroy_async_eq(dev, &table->async_eq.core);
err2:
- destroy_async_eq(dev, &table->async_eq);
-
-err1:
mlx5_cmd_use_polling(dev);
- destroy_async_eq(dev, &table->cmd_eq);
+ mlx5_eq_disable(dev, &table->cmd_eq.core, &table->cmd_eq.irq_nb);
+err1:
+ destroy_async_eq(dev, &table->cmd_eq.core);
err0:
mlx5_eq_notifier_unregister(dev, &table->cq_err_nb);
return err;
@@ -607,19 +659,22 @@ static void destroy_async_eqs(struct mlx5_core_dev *dev)
struct mlx5_eq_table *table = dev->priv.eq_table;
int err;
- err = destroy_async_eq(dev, &table->pages_eq);
+ mlx5_eq_disable(dev, &table->pages_eq.core, &table->pages_eq.irq_nb);
+ err = destroy_async_eq(dev, &table->pages_eq.core);
if (err)
mlx5_core_err(dev, "failed to destroy pages eq, err(%d)\n",
err);
- err = destroy_async_eq(dev, &table->async_eq);
+ mlx5_eq_disable(dev, &table->async_eq.core, &table->async_eq.irq_nb);
+ err = destroy_async_eq(dev, &table->async_eq.core);
if (err)
mlx5_core_err(dev, "failed to destroy async eq, err(%d)\n",
err);
mlx5_cmd_use_polling(dev);
- err = destroy_async_eq(dev, &table->cmd_eq);
+ mlx5_eq_disable(dev, &table->cmd_eq.core, &table->cmd_eq.irq_nb);
+ err = destroy_async_eq(dev, &table->cmd_eq.core);
if (err)
mlx5_core_err(dev, "failed to destroy command eq, err(%d)\n",
err);
@@ -629,24 +684,24 @@ static void destroy_async_eqs(struct mlx5_core_dev *dev)
struct mlx5_eq *mlx5_get_async_eq(struct mlx5_core_dev *dev)
{
- return &dev->priv.eq_table->async_eq;
+ return &dev->priv.eq_table->async_eq.core;
}
void mlx5_eq_synchronize_async_irq(struct mlx5_core_dev *dev)
{
- synchronize_irq(dev->priv.eq_table->async_eq.irqn);
+ synchronize_irq(dev->priv.eq_table->async_eq.core.irqn);
}
void mlx5_eq_synchronize_cmd_irq(struct mlx5_core_dev *dev)
{
- synchronize_irq(dev->priv.eq_table->cmd_eq.irqn);
+ synchronize_irq(dev->priv.eq_table->cmd_eq.core.irqn);
}
/* Generic EQ API for mlx5_core consumers
* Needed For RDMA ODP EQ for now
*/
struct mlx5_eq *
-mlx5_eq_create_generic(struct mlx5_core_dev *dev, const char *name,
+mlx5_eq_create_generic(struct mlx5_core_dev *dev,
struct mlx5_eq_param *param)
{
struct mlx5_eq *eq = kvzalloc(sizeof(*eq), GFP_KERNEL);
@@ -655,7 +710,7 @@ mlx5_eq_create_generic(struct mlx5_core_dev *dev, const char *name,
if (!eq)
return ERR_PTR(-ENOMEM);
- err = create_async_eq(dev, name, eq, param);
+ err = create_async_eq(dev, eq, param);
if (err) {
kvfree(eq);
eq = ERR_PTR(err);
@@ -713,84 +768,14 @@ void mlx5_eq_update_ci(struct mlx5_eq *eq, u32 cc, bool arm)
}
EXPORT_SYMBOL(mlx5_eq_update_ci);
-/* Completion EQs */
-
-static int set_comp_irq_affinity_hint(struct mlx5_core_dev *mdev, int i)
-{
- struct mlx5_priv *priv = &mdev->priv;
- int vecidx = MLX5_EQ_VEC_COMP_BASE + i;
- int irq = pci_irq_vector(mdev->pdev, vecidx);
- struct mlx5_irq_info *irq_info = &priv->eq_table->irq_info[vecidx];
-
- if (!zalloc_cpumask_var(&irq_info->mask, GFP_KERNEL)) {
- mlx5_core_warn(mdev, "zalloc_cpumask_var failed");
- return -ENOMEM;
- }
-
- cpumask_set_cpu(cpumask_local_spread(i, priv->numa_node),
- irq_info->mask);
-
- if (IS_ENABLED(CONFIG_SMP) &&
- irq_set_affinity_hint(irq, irq_info->mask))
- mlx5_core_warn(mdev, "irq_set_affinity_hint failed, irq 0x%.4x", irq);
-
- return 0;
-}
-
-static void clear_comp_irq_affinity_hint(struct mlx5_core_dev *mdev, int i)
-{
- int vecidx = MLX5_EQ_VEC_COMP_BASE + i;
- struct mlx5_priv *priv = &mdev->priv;
- int irq = pci_irq_vector(mdev->pdev, vecidx);
- struct mlx5_irq_info *irq_info = &priv->eq_table->irq_info[vecidx];
-
- irq_set_affinity_hint(irq, NULL);
- free_cpumask_var(irq_info->mask);
-}
-
-static int set_comp_irq_affinity_hints(struct mlx5_core_dev *mdev)
-{
- int err;
- int i;
-
- for (i = 0; i < mdev->priv.eq_table->num_comp_vectors; i++) {
- err = set_comp_irq_affinity_hint(mdev, i);
- if (err)
- goto err_out;
- }
-
- return 0;
-
-err_out:
- for (i--; i >= 0; i--)
- clear_comp_irq_affinity_hint(mdev, i);
-
- return err;
-}
-
-static void clear_comp_irqs_affinity_hints(struct mlx5_core_dev *mdev)
-{
- int i;
-
- for (i = 0; i < mdev->priv.eq_table->num_comp_vectors; i++)
- clear_comp_irq_affinity_hint(mdev, i);
-}
-
static void destroy_comp_eqs(struct mlx5_core_dev *dev)
{
struct mlx5_eq_table *table = dev->priv.eq_table;
struct mlx5_eq_comp *eq, *n;
- clear_comp_irqs_affinity_hints(dev);
-
-#ifdef CONFIG_RFS_ACCEL
- if (table->rmap) {
- free_irq_cpu_rmap(table->rmap);
- table->rmap = NULL;
- }
-#endif
list_for_each_entry_safe(eq, n, &table->comp_eqs_list, list) {
list_del(&eq->list);
+ mlx5_eq_disable(dev, &eq->core, &eq->irq_nb);
if (destroy_unmap_eq(dev, &eq->core))
mlx5_core_warn(dev, "failed to destroy comp EQ 0x%x\n",
eq->core.eqn);
@@ -802,23 +787,17 @@ static void destroy_comp_eqs(struct mlx5_core_dev *dev)
static int create_comp_eqs(struct mlx5_core_dev *dev)
{
struct mlx5_eq_table *table = dev->priv.eq_table;
- char name[MLX5_MAX_IRQ_NAME];
struct mlx5_eq_comp *eq;
- int ncomp_vec;
+ int ncomp_eqs;
int nent;
int err;
int i;
INIT_LIST_HEAD(&table->comp_eqs_list);
- ncomp_vec = table->num_comp_vectors;
+ ncomp_eqs = table->num_comp_eqs;
nent = MLX5_COMP_EQ_SIZE;
-#ifdef CONFIG_RFS_ACCEL
- table->rmap = alloc_irq_cpu_rmap(ncomp_vec);
- if (!table->rmap)
- return -ENOMEM;
-#endif
- for (i = 0; i < ncomp_vec; i++) {
- int vecidx = i + MLX5_EQ_VEC_COMP_BASE;
+ for (i = 0; i < ncomp_eqs; i++) {
+ int vecidx = i + MLX5_IRQ_VEC_COMP_BASE;
struct mlx5_eq_param param = {};
eq = kzalloc(sizeof(*eq), GFP_KERNEL);
@@ -833,33 +812,28 @@ static int create_comp_eqs(struct mlx5_core_dev *dev)
tasklet_init(&eq->tasklet_ctx.task, mlx5_cq_tasklet_cb,
(unsigned long)&eq->tasklet_ctx);
-#ifdef CONFIG_RFS_ACCEL
- irq_cpu_rmap_add(table->rmap, pci_irq_vector(dev->pdev, vecidx));
-#endif
- snprintf(name, MLX5_MAX_IRQ_NAME, "mlx5_comp%d", i);
+ eq->irq_nb.notifier_call = mlx5_eq_comp_int;
param = (struct mlx5_eq_param) {
- .index = vecidx,
- .mask = 0,
+ .irq_index = vecidx,
.nent = nent,
- .context = &eq->core,
- .handler = mlx5_eq_comp_int
};
- err = create_map_eq(dev, &eq->core, name, &param);
+ err = create_map_eq(dev, &eq->core, &param);
+ if (err) {
+ kfree(eq);
+ goto clean;
+ }
+ err = mlx5_eq_enable(dev, &eq->core, &eq->irq_nb);
if (err) {
+ destroy_unmap_eq(dev, &eq->core);
kfree(eq);
goto clean;
}
+
mlx5_core_dbg(dev, "allocated completion EQN %d\n", eq->core.eqn);
/* add tail, to keep the list ordered, for mlx5_vector2eqn to work */
list_add_tail(&eq->list, &table->comp_eqs_list);
}
- err = set_comp_irq_affinity_hints(dev);
- if (err) {
- mlx5_core_err(dev, "Failed to alloc affinity hint cpumask\n");
- goto clean;
- }
-
return 0;
clean:
@@ -890,22 +864,24 @@ EXPORT_SYMBOL(mlx5_vector2eqn);
unsigned int mlx5_comp_vectors_count(struct mlx5_core_dev *dev)
{
- return dev->priv.eq_table->num_comp_vectors;
+ return dev->priv.eq_table->num_comp_eqs;
}
EXPORT_SYMBOL(mlx5_comp_vectors_count);
struct cpumask *
mlx5_comp_irq_get_affinity_mask(struct mlx5_core_dev *dev, int vector)
{
- /* TODO: consider irq_get_affinity_mask(irq) */
- return dev->priv.eq_table->irq_info[vector + MLX5_EQ_VEC_COMP_BASE].mask;
+ int vecidx = vector + MLX5_IRQ_VEC_COMP_BASE;
+
+ return mlx5_irq_get_affinity_mask(dev->priv.eq_table->irq_table,
+ vecidx);
}
EXPORT_SYMBOL(mlx5_comp_irq_get_affinity_mask);
#ifdef CONFIG_RFS_ACCEL
struct cpu_rmap *mlx5_eq_table_get_rmap(struct mlx5_core_dev *dev)
{
- return dev->priv.eq_table->rmap;
+ return mlx5_irq_get_rmap(dev->priv.eq_table->irq_table);
}
#endif
@@ -926,82 +902,19 @@ struct mlx5_eq_comp *mlx5_eqn2comp_eq(struct mlx5_core_dev *dev, int eqn)
void mlx5_core_eq_free_irqs(struct mlx5_core_dev *dev)
{
struct mlx5_eq_table *table = dev->priv.eq_table;
- int i, max_eqs;
-
- clear_comp_irqs_affinity_hints(dev);
-
-#ifdef CONFIG_RFS_ACCEL
- if (table->rmap) {
- free_irq_cpu_rmap(table->rmap);
- table->rmap = NULL;
- }
-#endif
mutex_lock(&table->lock); /* sync with create/destroy_async_eq */
- max_eqs = table->num_comp_vectors + MLX5_EQ_VEC_COMP_BASE;
- for (i = max_eqs - 1; i >= 0; i--) {
- if (!table->irq_info[i].context)
- continue;
- free_irq(pci_irq_vector(dev->pdev, i), table->irq_info[i].context);
- table->irq_info[i].context = NULL;
- }
+ mlx5_irq_table_destroy(dev);
mutex_unlock(&table->lock);
- pci_free_irq_vectors(dev->pdev);
-}
-
-static int alloc_irq_vectors(struct mlx5_core_dev *dev)
-{
- struct mlx5_priv *priv = &dev->priv;
- struct mlx5_eq_table *table = priv->eq_table;
- int num_eqs = MLX5_CAP_GEN(dev, max_num_eqs) ?
- MLX5_CAP_GEN(dev, max_num_eqs) :
- 1 << MLX5_CAP_GEN(dev, log_max_eq);
- int nvec;
- int err;
-
- nvec = MLX5_CAP_GEN(dev, num_ports) * num_online_cpus() +
- MLX5_EQ_VEC_COMP_BASE;
- nvec = min_t(int, nvec, num_eqs);
- if (nvec <= MLX5_EQ_VEC_COMP_BASE)
- return -ENOMEM;
-
- table->irq_info = kcalloc(nvec, sizeof(*table->irq_info), GFP_KERNEL);
- if (!table->irq_info)
- return -ENOMEM;
-
- nvec = pci_alloc_irq_vectors(dev->pdev, MLX5_EQ_VEC_COMP_BASE + 1,
- nvec, PCI_IRQ_MSIX);
- if (nvec < 0) {
- err = nvec;
- goto err_free_irq_info;
- }
-
- table->num_comp_vectors = nvec - MLX5_EQ_VEC_COMP_BASE;
-
- return 0;
-
-err_free_irq_info:
- kfree(table->irq_info);
- return err;
-}
-
-static void free_irq_vectors(struct mlx5_core_dev *dev)
-{
- struct mlx5_priv *priv = &dev->priv;
-
- pci_free_irq_vectors(dev->pdev);
- kfree(priv->eq_table->irq_info);
}
int mlx5_eq_table_create(struct mlx5_core_dev *dev)
{
+ struct mlx5_eq_table *eq_table = dev->priv.eq_table;
int err;
- err = alloc_irq_vectors(dev);
- if (err) {
- mlx5_core_err(dev, "alloc irq vectors failed\n");
- return err;
- }
+ eq_table->num_comp_eqs =
+ mlx5_irq_get_num_comp(eq_table->irq_table);
err = create_async_eqs(dev);
if (err) {
@@ -1019,7 +932,6 @@ int mlx5_eq_table_create(struct mlx5_core_dev *dev)
err_comp_eqs:
destroy_async_eqs(dev);
err_async_eqs:
- free_irq_vectors(dev);
return err;
}
@@ -1027,7 +939,6 @@ void mlx5_eq_table_destroy(struct mlx5_core_dev *dev)
{
destroy_comp_eqs(dev);
destroy_async_eqs(dev);
- free_irq_vectors(dev);
}
int mlx5_eq_notifier_register(struct mlx5_core_dev *dev, struct mlx5_nb *nb)
@@ -1039,6 +950,7 @@ int mlx5_eq_notifier_register(struct mlx5_core_dev *dev, struct mlx5_nb *nb)
return atomic_notifier_chain_register(&eqt->nh[nb->event_type], &nb->nb);
}
+EXPORT_SYMBOL(mlx5_eq_notifier_register);
int mlx5_eq_notifier_unregister(struct mlx5_core_dev *dev, struct mlx5_nb *nb)
{
@@ -1049,3 +961,4 @@ int mlx5_eq_notifier_unregister(struct mlx5_core_dev *dev, struct mlx5_nb *nb)
return atomic_notifier_chain_unregister(&eqt->nh[nb->event_type], &nb->nb);
}
+EXPORT_SYMBOL(mlx5_eq_notifier_unregister);
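The eq.c changes above move IRQ ownership out of create_map_eq(): consumers now attach a notifier block with mlx5_eq_enable() (which also arms the EQ) and detach it with mlx5_eq_disable() before destroying the EQ. The following is a minimal sketch of the expected calling sequence for the generic EQ API; it is illustrative only. example_eq_handler() and example_setup_eq() are made-up names, and the mlx5_eq_destroy_generic() teardown counterpart is assumed rather than shown in this section.

static int example_eq_handler(struct notifier_block *nb,
			      unsigned long action, void *data)
{
	/* Poll EQEs here, e.g. with an EQE accessor such as
	 * mlx5_eq_get_eqe() (assumed helper), and acknowledge them with
	 * mlx5_eq_update_ci(), which is exported above.
	 */
	return NOTIFY_OK;
}

static int example_setup_eq(struct mlx5_core_dev *dev,
			    struct notifier_block *nb,
			    struct mlx5_eq **out_eq)
{
	struct mlx5_eq_param param = {
		.irq_index = 0,			/* async EQs share IRQ index 0 */
		.nent = MLX5_NUM_ASYNC_EQE,	/* illustrative size */
	};
	struct mlx5_eq *eq;
	int err;

	nb->notifier_call = example_eq_handler;

	eq = mlx5_eq_create_generic(dev, &param);
	if (IS_ERR(eq))
		return PTR_ERR(eq);

	/* Attach the notifier to the shared IRQ and arm the EQ. */
	err = mlx5_eq_enable(dev, eq, nb);
	if (err) {
		mlx5_eq_destroy_generic(dev, eq);	/* assumed counterpart */
		return err;
	}

	*out_eq = eq;
	return 0;
}

Teardown mirrors setup: call mlx5_eq_disable(dev, eq, nb) first and destroy the EQ afterwards, matching the ordering used by destroy_comp_eqs() and destroy_async_eqs() above.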
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index 6a921e24cd5e..7281f8d6cba6 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -134,6 +134,30 @@ static int modify_esw_vport_context_cmd(struct mlx5_core_dev *dev, u16 vport,
return mlx5_cmd_exec(dev, in, inlen, out, sizeof(out));
}
+int mlx5_eswitch_modify_esw_vport_context(struct mlx5_eswitch *esw, u16 vport,
+ void *in, int inlen)
+{
+ return modify_esw_vport_context_cmd(esw->dev, vport, in, inlen);
+}
+
+static int query_esw_vport_context_cmd(struct mlx5_core_dev *dev, u16 vport,
+ void *out, int outlen)
+{
+ u32 in[MLX5_ST_SZ_DW(query_esw_vport_context_in)] = {};
+
+ MLX5_SET(query_esw_vport_context_in, in, opcode,
+ MLX5_CMD_OP_QUERY_ESW_VPORT_CONTEXT);
+	MLX5_SET(query_esw_vport_context_in, in, vport_number, vport);
+	MLX5_SET(query_esw_vport_context_in, in, other_vport, 1);
+ return mlx5_cmd_exec(dev, in, sizeof(in), out, outlen);
+}
+
+int mlx5_eswitch_query_esw_vport_context(struct mlx5_eswitch *esw, u16 vport,
+ void *out, int outlen)
+{
+ return query_esw_vport_context_cmd(esw->dev, vport, out, outlen);
+}
+
static int modify_esw_vport_cvlan(struct mlx5_core_dev *dev, u16 vport,
u16 vlan, u8 qos, u8 set_flags)
{
@@ -473,7 +497,7 @@ static int esw_add_uc_addr(struct mlx5_eswitch *esw, struct vport_addr *vaddr)
fdb_add:
/* SRIOV is enabled: Forward UC MAC to vport */
- if (esw->fdb_table.legacy.fdb && esw->mode == SRIOV_LEGACY)
+ if (esw->fdb_table.legacy.fdb && esw->mode == MLX5_ESWITCH_LEGACY)
vaddr->flow_rule = esw_fdb_set_vport_rule(esw, mac, vport);
esw_debug(esw->dev, "\tADDED UC MAC: vport[%d] %pM fr(%p)\n",
@@ -873,7 +897,7 @@ static void esw_vport_change_handle_locked(struct mlx5_vport *vport)
struct mlx5_eswitch *esw = dev->priv.eswitch;
u8 mac[ETH_ALEN];
- mlx5_query_nic_vport_mac_address(dev, vport->vport, mac);
+ mlx5_query_nic_vport_mac_address(dev, vport->vport, true, mac);
esw_debug(dev, "vport[%d] Context Changed: perm mac: %pM\n",
vport->vport, mac);
@@ -939,7 +963,7 @@ int esw_vport_enable_egress_acl(struct mlx5_eswitch *esw,
vport->vport, MLX5_CAP_ESW_EGRESS_ACL(dev, log_max_ft_size));
root_ns = mlx5_get_flow_vport_acl_namespace(dev, MLX5_FLOW_NAMESPACE_ESW_EGRESS,
- vport->vport);
+ mlx5_eswitch_vport_num_to_index(esw, vport->vport));
if (!root_ns) {
esw_warn(dev, "Failed to get E-Switch egress flow namespace for vport (%d)\n", vport->vport);
return -EOPNOTSUPP;
@@ -1057,7 +1081,7 @@ int esw_vport_enable_ingress_acl(struct mlx5_eswitch *esw,
vport->vport, MLX5_CAP_ESW_INGRESS_ACL(dev, log_max_ft_size));
root_ns = mlx5_get_flow_vport_acl_namespace(dev, MLX5_FLOW_NAMESPACE_ESW_INGRESS,
- vport->vport);
+ mlx5_eswitch_vport_num_to_index(esw, vport->vport));
if (!root_ns) {
esw_warn(dev, "Failed to get E-Switch ingress flow namespace for vport (%d)\n", vport->vport);
return -EOPNOTSUPP;
@@ -1168,6 +1192,8 @@ void esw_vport_cleanup_ingress_rules(struct mlx5_eswitch *esw,
vport->ingress.drop_rule = NULL;
vport->ingress.allow_rule = NULL;
+
+ esw_vport_del_ingress_acl_modify_metadata(esw, vport);
}
void esw_vport_disable_ingress_acl(struct mlx5_eswitch *esw,
@@ -1527,6 +1553,7 @@ static void esw_apply_vport_conf(struct mlx5_eswitch *esw,
struct mlx5_vport *vport)
{
u16 vport_num = vport->vport;
+ int flags;
if (esw->manager_vport == vport_num)
return;
@@ -1544,11 +1571,13 @@ static void esw_apply_vport_conf(struct mlx5_eswitch *esw,
vport->info.node_guid);
}
+ flags = (vport->info.vlan || vport->info.qos) ?
+ SET_VLAN_STRIP | SET_VLAN_INSERT : 0;
modify_esw_vport_cvlan(esw->dev, vport_num, vport->info.vlan, vport->info.qos,
- (vport->info.vlan || vport->info.qos));
+ flags);
/* Only legacy mode needs ACLs */
- if (esw->mode == SRIOV_LEGACY) {
+ if (esw->mode == MLX5_ESWITCH_LEGACY) {
esw_vport_ingress_config(esw, vport);
esw_vport_egress_config(esw, vport);
}
@@ -1600,7 +1629,7 @@ static void esw_enable_vport(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
esw_debug(esw->dev, "Enabling VPORT(%d)\n", vport_num);
/* Create steering drop counters for ingress and egress ACLs */
- if (vport_num && esw->mode == SRIOV_LEGACY)
+ if (vport_num && esw->mode == MLX5_ESWITCH_LEGACY)
esw_vport_create_drop_counters(vport);
/* Restore old vport configuration */
@@ -1654,7 +1683,7 @@ static void esw_disable_vport(struct mlx5_eswitch *esw,
vport->enabled_events = 0;
esw_vport_disable_qos(esw, vport);
if (esw->manager_vport != vport_num &&
- esw->mode == SRIOV_LEGACY) {
+ esw->mode == MLX5_ESWITCH_LEGACY) {
mlx5_modify_vport_admin_state(esw->dev,
MLX5_VPORT_STATE_OP_MOD_ESW_VPORT,
vport_num, 1,
@@ -1686,54 +1715,91 @@ static int eswitch_vport_event(struct notifier_block *nb,
return NOTIFY_OK;
}
+/**
+ * mlx5_esw_query_functions - Returns raw output about functions state
+ * @dev: Pointer to device to query
+ *
+ * mlx5_esw_query_functions() allocates and returns the raw functions-changed
+ * output from the device on success, otherwise an ERR_PTR. The caller must
+ * free the memory using kvfree() when a valid pointer is returned.
+ */
+const u32 *mlx5_esw_query_functions(struct mlx5_core_dev *dev)
+{
+ int outlen = MLX5_ST_SZ_BYTES(query_esw_functions_out);
+ u32 in[MLX5_ST_SZ_DW(query_esw_functions_in)] = {};
+ u32 *out;
+ int err;
+
+ out = kvzalloc(outlen, GFP_KERNEL);
+ if (!out)
+ return ERR_PTR(-ENOMEM);
+
+ MLX5_SET(query_esw_functions_in, in, opcode,
+ MLX5_CMD_OP_QUERY_ESW_FUNCTIONS);
+
+ err = mlx5_cmd_exec(dev, in, sizeof(in), out, outlen);
+ if (!err)
+ return out;
+
+ kvfree(out);
+ return ERR_PTR(err);
+}
+
+static void mlx5_eswitch_event_handlers_register(struct mlx5_eswitch *esw)
+{
+ MLX5_NB_INIT(&esw->nb, eswitch_vport_event, NIC_VPORT_CHANGE);
+ mlx5_eq_notifier_register(esw->dev, &esw->nb);
+
+ if (esw->mode == MLX5_ESWITCH_OFFLOADS && mlx5_eswitch_is_funcs_handler(esw->dev)) {
+ MLX5_NB_INIT(&esw->esw_funcs.nb, mlx5_esw_funcs_changed_handler,
+ ESW_FUNCTIONS_CHANGED);
+ mlx5_eq_notifier_register(esw->dev, &esw->esw_funcs.nb);
+ }
+}
+
+static void mlx5_eswitch_event_handlers_unregister(struct mlx5_eswitch *esw)
+{
+ if (esw->mode == MLX5_ESWITCH_OFFLOADS && mlx5_eswitch_is_funcs_handler(esw->dev))
+ mlx5_eq_notifier_unregister(esw->dev, &esw->esw_funcs.nb);
+
+ mlx5_eq_notifier_unregister(esw->dev, &esw->nb);
+
+ flush_workqueue(esw->work_queue);
+}
+
/* Public E-Switch API */
#define ESW_ALLOWED(esw) ((esw) && MLX5_ESWITCH_MANAGER((esw)->dev))
-int mlx5_eswitch_enable_sriov(struct mlx5_eswitch *esw, int nvfs, int mode)
+int mlx5_eswitch_enable(struct mlx5_eswitch *esw, int mode)
{
- int vf_nvports = 0, total_nvports = 0;
struct mlx5_vport *vport;
int err;
int i, enabled_events;
if (!ESW_ALLOWED(esw) ||
!MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, ft_support)) {
- esw_warn(esw->dev, "E-Switch FDB is not supported, aborting ...\n");
+ esw_warn(esw->dev, "FDB is not supported, aborting ...\n");
return -EOPNOTSUPP;
}
if (!MLX5_CAP_ESW_INGRESS_ACL(esw->dev, ft_support))
- esw_warn(esw->dev, "E-Switch ingress ACL is not supported by FW\n");
+ esw_warn(esw->dev, "ingress ACL is not supported by FW\n");
if (!MLX5_CAP_ESW_EGRESS_ACL(esw->dev, ft_support))
- esw_warn(esw->dev, "E-Switch engress ACL is not supported by FW\n");
-
- esw_info(esw->dev, "E-Switch enable SRIOV: nvfs(%d) mode (%d)\n", nvfs, mode);
-
- if (mode == SRIOV_OFFLOADS) {
- if (mlx5_core_is_ecpf_esw_manager(esw->dev)) {
- err = mlx5_query_host_params_num_vfs(esw->dev, &vf_nvports);
- if (err)
- return err;
- total_nvports = esw->total_vports;
- } else {
- vf_nvports = nvfs;
- total_nvports = nvfs + MLX5_SPECIAL_VPORTS(esw->dev);
- }
- }
+		esw_warn(esw->dev, "egress ACL is not supported by FW\n");
esw->mode = mode;
mlx5_lag_update(esw->dev);
- if (mode == SRIOV_LEGACY) {
+ if (mode == MLX5_ESWITCH_LEGACY) {
err = esw_create_legacy_table(esw);
if (err)
goto abort;
} else {
mlx5_reload_interface(esw->dev, MLX5_INTERFACE_PROTOCOL_ETH);
mlx5_reload_interface(esw->dev, MLX5_INTERFACE_PROTOCOL_IB);
- err = esw_offloads_init(esw, vf_nvports, total_nvports);
+ err = esw_offloads_init(esw);
}
if (err)
@@ -1743,11 +1809,8 @@ int mlx5_eswitch_enable_sriov(struct mlx5_eswitch *esw, int nvfs, int mode)
if (err)
esw_warn(esw->dev, "Failed to create eswitch TSAR");
- /* Don't enable vport events when in SRIOV_OFFLOADS mode, since:
- * 1. L2 table (MPFS) is programmed by PF/VF representors netdevs set_rx_mode
- * 2. FDB/Eswitch is programmed by user space tools
- */
- enabled_events = (mode == SRIOV_LEGACY) ? SRIOV_VPORT_EVENTS : 0;
+ enabled_events = (mode == MLX5_ESWITCH_LEGACY) ? SRIOV_VPORT_EVENTS :
+ UC_ADDR_CHANGE;
/* Enable PF vport */
vport = mlx5_eswitch_get_vport(esw, MLX5_VPORT_PF);
@@ -1760,22 +1823,21 @@ int mlx5_eswitch_enable_sriov(struct mlx5_eswitch *esw, int nvfs, int mode)
}
/* Enable VF vports */
- mlx5_esw_for_each_vf_vport(esw, i, vport, nvfs)
+ mlx5_esw_for_each_vf_vport(esw, i, vport, esw->esw_funcs.num_vfs)
esw_enable_vport(esw, vport, enabled_events);
- if (mode == SRIOV_LEGACY) {
- MLX5_NB_INIT(&esw->nb, eswitch_vport_event, NIC_VPORT_CHANGE);
- mlx5_eq_notifier_register(esw->dev, &esw->nb);
- }
+ mlx5_eswitch_event_handlers_register(esw);
+
+ esw_info(esw->dev, "Enable: mode(%s), nvfs(%d), active vports(%d)\n",
+ mode == MLX5_ESWITCH_LEGACY ? "LEGACY" : "OFFLOADS",
+ esw->esw_funcs.num_vfs, esw->enabled_vports);
- esw_info(esw->dev, "SRIOV enabled: active vports(%d)\n",
- esw->enabled_vports);
return 0;
abort:
- esw->mode = SRIOV_NONE;
+ esw->mode = MLX5_ESWITCH_NONE;
- if (mode == SRIOV_OFFLOADS) {
+ if (mode == MLX5_ESWITCH_OFFLOADS) {
mlx5_reload_interface(esw->dev, MLX5_INTERFACE_PROTOCOL_IB);
mlx5_reload_interface(esw->dev, MLX5_INTERFACE_PROTOCOL_ETH);
}
@@ -1783,23 +1845,22 @@ abort:
return err;
}
-void mlx5_eswitch_disable_sriov(struct mlx5_eswitch *esw)
+void mlx5_eswitch_disable(struct mlx5_eswitch *esw)
{
struct esw_mc_addr *mc_promisc;
struct mlx5_vport *vport;
int old_mode;
int i;
- if (!ESW_ALLOWED(esw) || esw->mode == SRIOV_NONE)
+ if (!ESW_ALLOWED(esw) || esw->mode == MLX5_ESWITCH_NONE)
return;
- esw_info(esw->dev, "disable SRIOV: active vports(%d) mode(%d)\n",
- esw->enabled_vports, esw->mode);
+ esw_info(esw->dev, "Disable: mode(%s), nvfs(%d), active vports(%d)\n",
+ esw->mode == MLX5_ESWITCH_LEGACY ? "LEGACY" : "OFFLOADS",
+ esw->esw_funcs.num_vfs, esw->enabled_vports);
mc_promisc = &esw->mc_promisc;
-
- if (esw->mode == SRIOV_LEGACY)
- mlx5_eq_notifier_unregister(esw->dev, &esw->nb);
+ mlx5_eswitch_event_handlers_unregister(esw);
mlx5_esw_for_all_vports(esw, i, vport)
esw_disable_vport(esw, vport);
@@ -1809,17 +1870,17 @@ void mlx5_eswitch_disable_sriov(struct mlx5_eswitch *esw)
esw_destroy_tsar(esw);
- if (esw->mode == SRIOV_LEGACY)
+ if (esw->mode == MLX5_ESWITCH_LEGACY)
esw_destroy_legacy_table(esw);
- else if (esw->mode == SRIOV_OFFLOADS)
+ else if (esw->mode == MLX5_ESWITCH_OFFLOADS)
esw_offloads_cleanup(esw);
old_mode = esw->mode;
- esw->mode = SRIOV_NONE;
+ esw->mode = MLX5_ESWITCH_NONE;
mlx5_lag_update(esw->dev);
- if (old_mode == SRIOV_OFFLOADS) {
+ if (old_mode == MLX5_ESWITCH_OFFLOADS) {
mlx5_reload_interface(esw->dev, MLX5_INTERFACE_PROTOCOL_IB);
mlx5_reload_interface(esw->dev, MLX5_INTERFACE_PROTOCOL_ETH);
}
@@ -1827,14 +1888,16 @@ void mlx5_eswitch_disable_sriov(struct mlx5_eswitch *esw)
int mlx5_eswitch_init(struct mlx5_core_dev *dev)
{
- int total_vports = MLX5_TOTAL_VPORTS(dev);
struct mlx5_eswitch *esw;
struct mlx5_vport *vport;
+ int total_vports;
int err, i;
if (!MLX5_VPORT_MANAGER(dev))
return 0;
+ total_vports = mlx5_eswitch_get_total_vports(dev);
+
esw_info(dev,
"Total vports %d, per vport: max uc(%d) max mc(%d)\n",
total_vports,
@@ -1847,6 +1910,7 @@ int mlx5_eswitch_init(struct mlx5_core_dev *dev)
esw->dev = dev;
esw->manager_vport = mlx5_eswitch_manager_vport(dev);
+ esw->first_host_vport = mlx5_eswitch_first_host_vport_num(dev);
esw->work_queue = create_singlethread_workqueue("mlx5_esw_wq");
if (!esw->work_queue) {
@@ -1880,7 +1944,7 @@ int mlx5_eswitch_init(struct mlx5_core_dev *dev)
}
esw->enabled_vports = 0;
- esw->mode = SRIOV_NONE;
+ esw->mode = MLX5_ESWITCH_NONE;
esw->offloads.inline_mode = MLX5_INLINE_MODE_NONE;
if (MLX5_CAP_ESW_FLOWTABLE_FDB(dev, reformat) &&
MLX5_CAP_ESW_FLOWTABLE_FDB(dev, decap))
@@ -1950,7 +2014,7 @@ int mlx5_eswitch_set_vport_mac(struct mlx5_eswitch *esw,
ether_addr_copy(evport->info.mac, mac);
evport->info.node_guid = node_guid;
- if (evport->enabled && esw->mode == SRIOV_LEGACY)
+ if (evport->enabled && esw->mode == MLX5_ESWITCH_LEGACY)
err = esw_vport_ingress_config(esw, evport);
unlock:
@@ -2034,7 +2098,7 @@ int __mlx5_eswitch_set_vport_vlan(struct mlx5_eswitch *esw,
evport->info.vlan = vlan;
evport->info.qos = qos;
- if (evport->enabled && esw->mode == SRIOV_LEGACY) {
+ if (evport->enabled && esw->mode == MLX5_ESWITCH_LEGACY) {
err = esw_vport_ingress_config(esw, evport);
if (err)
goto unlock;
@@ -2076,7 +2140,7 @@ int mlx5_eswitch_set_vport_spoofchk(struct mlx5_eswitch *esw,
mlx5_core_warn(esw->dev,
"Spoofchk in set while MAC is invalid, vport(%d)\n",
evport->vport);
- if (evport->enabled && esw->mode == SRIOV_LEGACY)
+ if (evport->enabled && esw->mode == MLX5_ESWITCH_LEGACY)
err = esw_vport_ingress_config(esw, evport);
if (err)
evport->info.spoofchk = pschk;
@@ -2172,7 +2236,7 @@ int mlx5_eswitch_set_vepa(struct mlx5_eswitch *esw, u8 setting)
return -EPERM;
mutex_lock(&esw->state_lock);
- if (esw->mode != SRIOV_LEGACY) {
+ if (esw->mode != MLX5_ESWITCH_LEGACY) {
err = -EOPNOTSUPP;
goto out;
}
@@ -2195,7 +2259,7 @@ int mlx5_eswitch_get_vepa(struct mlx5_eswitch *esw, u8 *setting)
return -EPERM;
mutex_lock(&esw->state_lock);
- if (esw->mode != SRIOV_LEGACY) {
+ if (esw->mode != MLX5_ESWITCH_LEGACY) {
err = -EOPNOTSUPP;
goto out;
}
@@ -2338,7 +2402,7 @@ static int mlx5_eswitch_query_vport_drop_stats(struct mlx5_core_dev *dev,
u64 bytes = 0;
int err = 0;
- if (!vport->enabled || esw->mode != SRIOV_LEGACY)
+ if (!vport->enabled || esw->mode != MLX5_ESWITCH_LEGACY)
return 0;
if (vport->egress.drop_counter)
@@ -2448,16 +2512,27 @@ free_out:
u8 mlx5_eswitch_mode(struct mlx5_eswitch *esw)
{
- return ESW_ALLOWED(esw) ? esw->mode : SRIOV_NONE;
+ return ESW_ALLOWED(esw) ? esw->mode : MLX5_ESWITCH_NONE;
}
EXPORT_SYMBOL_GPL(mlx5_eswitch_mode);
+enum devlink_eswitch_encap_mode
+mlx5_eswitch_get_encap_mode(const struct mlx5_core_dev *dev)
+{
+ struct mlx5_eswitch *esw;
+
+ esw = dev->priv.eswitch;
+ return ESW_ALLOWED(esw) ? esw->offloads.encap :
+ DEVLINK_ESWITCH_ENCAP_MODE_NONE;
+}
+EXPORT_SYMBOL(mlx5_eswitch_get_encap_mode);
+
bool mlx5_esw_lag_prereq(struct mlx5_core_dev *dev0, struct mlx5_core_dev *dev1)
{
- if ((dev0->priv.eswitch->mode == SRIOV_NONE &&
- dev1->priv.eswitch->mode == SRIOV_NONE) ||
- (dev0->priv.eswitch->mode == SRIOV_OFFLOADS &&
- dev1->priv.eswitch->mode == SRIOV_OFFLOADS))
+ if ((dev0->priv.eswitch->mode == MLX5_ESWITCH_NONE &&
+ dev1->priv.eswitch->mode == MLX5_ESWITCH_NONE) ||
+ (dev0->priv.eswitch->mode == MLX5_ESWITCH_OFFLOADS &&
+ dev1->priv.eswitch->mode == MLX5_ESWITCH_OFFLOADS))
return true;
return false;
@@ -2466,6 +2541,26 @@ bool mlx5_esw_lag_prereq(struct mlx5_core_dev *dev0, struct mlx5_core_dev *dev1)
bool mlx5_esw_multipath_prereq(struct mlx5_core_dev *dev0,
struct mlx5_core_dev *dev1)
{
- return (dev0->priv.eswitch->mode == SRIOV_OFFLOADS &&
- dev1->priv.eswitch->mode == SRIOV_OFFLOADS);
+ return (dev0->priv.eswitch->mode == MLX5_ESWITCH_OFFLOADS &&
+ dev1->priv.eswitch->mode == MLX5_ESWITCH_OFFLOADS);
+}
+
+void mlx5_eswitch_update_num_of_vfs(struct mlx5_eswitch *esw, const int num_vfs)
+{
+ const u32 *out;
+
+ WARN_ON_ONCE(esw->mode != MLX5_ESWITCH_NONE);
+
+ if (!mlx5_core_is_ecpf_esw_manager(esw->dev)) {
+ esw->esw_funcs.num_vfs = num_vfs;
+ return;
+ }
+
+ out = mlx5_esw_query_functions(esw->dev);
+ if (IS_ERR(out))
+ return;
+
+ esw->esw_funcs.num_vfs = MLX5_GET(query_esw_functions_out, out,
+ host_params_context.host_num_of_vfs);
+ kvfree(out);
}
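The eswitch.c changes rename the SR-IOV entry points to mlx5_eswitch_enable()/mlx5_eswitch_disable(), with the VF count carried in esw->esw_funcs.num_vfs via mlx5_eswitch_update_num_of_vfs(). Below is a hedged sketch of the assumed caller sequence; example_sriov_enable()/example_sriov_disable() are made-up names, and the real SR-IOV glue lives outside this section.

static int example_sriov_enable(struct mlx5_eswitch *esw, int num_vfs)
{
	/* Publish the VF count while the eswitch is still in
	 * MLX5_ESWITCH_NONE mode (mlx5_eswitch_update_num_of_vfs() warns
	 * otherwise), then bring the eswitch up in the requested mode.
	 */
	mlx5_eswitch_update_num_of_vfs(esw, num_vfs);
	return mlx5_eswitch_enable(esw, MLX5_ESWITCH_LEGACY);
}

static void example_sriov_disable(struct mlx5_eswitch *esw)
{
	mlx5_eswitch_disable(esw);
}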
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index d043d6f9797d..a38e8a3c7c9a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -68,6 +68,8 @@ struct vport_ingress {
struct mlx5_flow_group *allow_spoofchk_only_grp;
struct mlx5_flow_group *allow_untagged_only_grp;
struct mlx5_flow_group *drop_grp;
+ int modify_metadata_id;
+ struct mlx5_flow_handle *modify_metadata_rule;
struct mlx5_flow_handle *allow_rule;
struct mlx5_flow_handle *drop_rule;
struct mlx5_fc *drop_counter;
@@ -173,9 +175,12 @@ struct mlx5_esw_offload {
struct mutex peer_mutex;
DECLARE_HASHTABLE(encap_tbl, 8);
DECLARE_HASHTABLE(mod_hdr_tbl, 8);
+ DECLARE_HASHTABLE(termtbl_tbl, 8);
+ struct mutex termtbl_mutex; /* protects termtbl hash */
+ const struct mlx5_eswitch_rep_ops *rep_ops[NUM_REP_TYPES];
u8 inline_mode;
u64 num_flows;
- u8 encap;
+ enum devlink_eswitch_encap_mode encap;
};
/* E-Switch MC FDB table hash node */
@@ -190,11 +195,15 @@ struct mlx5_host_work {
struct mlx5_eswitch *esw;
};
-struct mlx5_host_info {
+struct mlx5_esw_functions {
struct mlx5_nb nb;
u16 num_vfs;
};
+enum {
+ MLX5_ESWITCH_VPORT_MATCH_METADATA = BIT(0),
+};
+
struct mlx5_eswitch {
struct mlx5_core_dev *dev;
struct mlx5_nb nb;
@@ -202,6 +211,7 @@ struct mlx5_eswitch {
struct hlist_head mc_table[MLX5_L2_ADDR_HASH_SIZE];
struct workqueue_struct *work_queue;
struct mlx5_vport *vports;
+ u32 flags;
int total_vports;
int enabled_vports;
/* Synchronize between vport change events
@@ -219,12 +229,12 @@ struct mlx5_eswitch {
int mode;
int nvports;
u16 manager_vport;
- struct mlx5_host_info host_info;
+ u16 first_host_vport;
+ struct mlx5_esw_functions esw_funcs;
};
void esw_offloads_cleanup(struct mlx5_eswitch *esw);
-int esw_offloads_init(struct mlx5_eswitch *esw, int vf_nvports,
- int total_nvports);
+int esw_offloads_init(struct mlx5_eswitch *esw);
void esw_offloads_cleanup_reps(struct mlx5_eswitch *esw);
int esw_offloads_init_reps(struct mlx5_eswitch *esw);
void esw_vport_cleanup_ingress_rules(struct mlx5_eswitch *esw,
@@ -239,12 +249,14 @@ void esw_vport_disable_egress_acl(struct mlx5_eswitch *esw,
struct mlx5_vport *vport);
void esw_vport_disable_ingress_acl(struct mlx5_eswitch *esw,
struct mlx5_vport *vport);
+void esw_vport_del_ingress_acl_modify_metadata(struct mlx5_eswitch *esw,
+ struct mlx5_vport *vport);
/* E-Switch API */
int mlx5_eswitch_init(struct mlx5_core_dev *dev);
void mlx5_eswitch_cleanup(struct mlx5_eswitch *esw);
-int mlx5_eswitch_enable_sriov(struct mlx5_eswitch *esw, int nvfs, int mode);
-void mlx5_eswitch_disable_sriov(struct mlx5_eswitch *esw);
+int mlx5_eswitch_enable(struct mlx5_eswitch *esw, int mode);
+void mlx5_eswitch_disable(struct mlx5_eswitch *esw);
int mlx5_eswitch_set_vport_mac(struct mlx5_eswitch *esw,
u16 vport, u8 mac[ETH_ALEN]);
int mlx5_eswitch_set_vport_state(struct mlx5_eswitch *esw,
@@ -266,8 +278,32 @@ int mlx5_eswitch_get_vport_stats(struct mlx5_eswitch *esw,
struct ifla_vf_stats *vf_stats);
void mlx5_eswitch_del_send_to_vport_rule(struct mlx5_flow_handle *rule);
+int mlx5_eswitch_modify_esw_vport_context(struct mlx5_eswitch *esw, u16 vport,
+ void *in, int inlen);
+int mlx5_eswitch_query_esw_vport_context(struct mlx5_eswitch *esw, u16 vport,
+ void *out, int outlen);
+
struct mlx5_flow_spec;
struct mlx5_esw_flow_attr;
+struct mlx5_termtbl_handle;
+
+bool
+mlx5_eswitch_termtbl_required(struct mlx5_eswitch *esw,
+ struct mlx5_flow_act *flow_act,
+ struct mlx5_flow_spec *spec);
+
+struct mlx5_flow_handle *
+mlx5_eswitch_add_termtbl_rule(struct mlx5_eswitch *esw,
+ struct mlx5_flow_table *ft,
+ struct mlx5_flow_spec *spec,
+ struct mlx5_esw_flow_attr *attr,
+ struct mlx5_flow_act *flow_act,
+ struct mlx5_flow_destination *dest,
+ int num_dest);
+
+void
+mlx5_eswitch_termtbl_put(struct mlx5_eswitch *esw,
+ struct mlx5_termtbl_handle *tt);
struct mlx5_flow_handle *
mlx5_eswitch_add_offloaded_rule(struct mlx5_eswitch *esw,
@@ -338,6 +374,7 @@ struct mlx5_esw_flow_attr {
struct mlx5_eswitch_rep *rep;
struct mlx5_core_dev *mdev;
u32 encap_id;
+ struct mlx5_termtbl_handle *termtbl;
} dests[MLX5_MAX_FLOW_FWD_VPORTS];
u32 mod_hdr_id;
u8 match_level;
@@ -355,10 +392,12 @@ int mlx5_devlink_eswitch_mode_get(struct devlink *devlink, u16 *mode);
int mlx5_devlink_eswitch_inline_mode_set(struct devlink *devlink, u8 mode,
struct netlink_ext_ack *extack);
int mlx5_devlink_eswitch_inline_mode_get(struct devlink *devlink, u8 *mode);
-int mlx5_eswitch_inline_mode_get(struct mlx5_eswitch *esw, int nvfs, u8 *mode);
-int mlx5_devlink_eswitch_encap_mode_set(struct devlink *devlink, u8 encap,
+int mlx5_eswitch_inline_mode_get(struct mlx5_eswitch *esw, u8 *mode);
+int mlx5_devlink_eswitch_encap_mode_set(struct devlink *devlink,
+ enum devlink_eswitch_encap_mode encap,
struct netlink_ext_ack *extack);
-int mlx5_devlink_eswitch_encap_mode_get(struct devlink *devlink, u8 *encap);
+int mlx5_devlink_eswitch_encap_mode_get(struct devlink *devlink,
+ enum devlink_eswitch_encap_mode *encap);
void *mlx5_eswitch_get_uplink_priv(struct mlx5_eswitch *esw, u8 rep_type);
int mlx5_eswitch_add_vlan_action(struct mlx5_eswitch *esw,
@@ -386,6 +425,8 @@ bool mlx5_esw_lag_prereq(struct mlx5_core_dev *dev0,
bool mlx5_esw_multipath_prereq(struct mlx5_core_dev *dev0,
struct mlx5_core_dev *dev1);
+const u32 *mlx5_esw_query_functions(struct mlx5_core_dev *dev);
+
#define MLX5_DEBUG_ESWITCH_MASK BIT(3)
#define esw_info(__dev, format, ...) \
@@ -404,6 +445,24 @@ static inline u16 mlx5_eswitch_manager_vport(struct mlx5_core_dev *dev)
MLX5_VPORT_ECPF : MLX5_VPORT_PF;
}
+static inline u16 mlx5_eswitch_first_host_vport_num(struct mlx5_core_dev *dev)
+{
+ return mlx5_core_is_ecpf_esw_manager(dev) ?
+ MLX5_VPORT_PF : MLX5_VPORT_FIRST_VF;
+}
+
+static inline bool mlx5_eswitch_is_funcs_handler(struct mlx5_core_dev *dev)
+{
+ /* Ideally, a device that handles the functions-changed event, such as
+ * an eswitch manager, should advertise the esw_functions_changed
+ * capability regardless of whether it is an ECPF or a PF. However,
+ * some ECPF-based devices do not set this capability, so the ECPF
+ * check is ORed in to cover them.
+ */
+ return MLX5_CAP_ESW(dev, esw_functions_changed) ||
+ mlx5_core_is_ecpf_esw_manager(dev);
+}
+
static inline int mlx5_eswitch_uplink_idx(struct mlx5_eswitch *esw)
{
/* The uplink always resides at the last element of the array. */
@@ -488,16 +547,47 @@ void mlx5e_tc_clean_fdb_peer_flows(struct mlx5_eswitch *esw);
#define mlx5_esw_for_each_vf_vport_num_reverse(esw, vport, nvfs) \
for ((vport) = (nvfs); (vport) >= MLX5_VPORT_FIRST_VF; (vport)--)
+/* Includes host PF (vport 0) if it's not esw manager. */
+#define mlx5_esw_for_each_host_func_rep(esw, i, rep, nvfs) \
+ for ((i) = (esw)->first_host_vport; \
+ (rep) = &(esw)->offloads.vport_reps[i], \
+ (i) <= (nvfs); (i)++)
+
+#define mlx5_esw_for_each_host_func_rep_reverse(esw, i, rep, nvfs) \
+ for ((i) = (nvfs); \
+ (rep) = &(esw)->offloads.vport_reps[i], \
+ (i) >= (esw)->first_host_vport; (i)--)
+
+#define mlx5_esw_for_each_host_func_vport(esw, vport, nvfs) \
+ for ((vport) = (esw)->first_host_vport; \
+ (vport) <= (nvfs); (vport)++)
+
+#define mlx5_esw_for_each_host_func_vport_reverse(esw, vport, nvfs) \
+ for ((vport) = (nvfs); \
+ (vport) >= (esw)->first_host_vport; (vport)--)
+
struct mlx5_vport *__must_check
mlx5_eswitch_get_vport(struct mlx5_eswitch *esw, u16 vport_num);
+bool mlx5_eswitch_is_vf_vport(const struct mlx5_eswitch *esw, u16 vport_num);
+
+void mlx5_eswitch_update_num_of_vfs(struct mlx5_eswitch *esw, const int num_vfs);
+int mlx5_esw_funcs_changed_handler(struct notifier_block *nb, unsigned long type, void *data);
+
#else /* CONFIG_MLX5_ESWITCH */
/* eswitch API stubs */
static inline int mlx5_eswitch_init(struct mlx5_core_dev *dev) { return 0; }
static inline void mlx5_eswitch_cleanup(struct mlx5_eswitch *esw) {}
-static inline int mlx5_eswitch_enable_sriov(struct mlx5_eswitch *esw, int nvfs, int mode) { return 0; }
-static inline void mlx5_eswitch_disable_sriov(struct mlx5_eswitch *esw) {}
+static inline int mlx5_eswitch_enable(struct mlx5_eswitch *esw, int mode) { return 0; }
+static inline void mlx5_eswitch_disable(struct mlx5_eswitch *esw) {}
static inline bool mlx5_esw_lag_prereq(struct mlx5_core_dev *dev0, struct mlx5_core_dev *dev1) { return true; }
+static inline bool mlx5_eswitch_is_funcs_handler(struct mlx5_core_dev *dev) { return false; }
+static inline const u32 *mlx5_esw_query_functions(struct mlx5_core_dev *dev)
+{
+ return ERR_PTR(-EOPNOTSUPP);
+}
+
+static inline void mlx5_eswitch_update_num_of_vfs(struct mlx5_eswitch *esw, const int num_vfs) {}
#define FDB_MAX_CHAIN 1
#define FDB_SLOW_PATH_CHAIN (FDB_MAX_CHAIN + 1)
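
For reference, a minimal sketch of how a caller might use the reworked header API, which drops the explicit VF-count arguments in favour of the eswitch's own esw_funcs.num_vfs and the host-function iterators. The helper name and body below are illustrative only and are not part of the patch.

/* Illustrative only: walk the host function vports without passing an
 * nvfs argument around; the range starts at esw->first_host_vport (the
 * host PF when this device is the ECPF eswitch manager, otherwise the
 * first VF) and ends at the current number of VFs.
 */
static void example_walk_host_func_vports(struct mlx5_eswitch *esw)
{
	int vport;

	mlx5_esw_for_each_host_func_vport(esw, vport, esw->esw_funcs.num_vfs) {
		/* per-vport work would go here */
	}
}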
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index 47b446d30f71..8ed4497929b9 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -41,7 +41,6 @@
#include "en.h"
#include "fs_core.h"
#include "lib/devcom.h"
-#include "ecpf.h"
#include "lib/eq.h"
/* There are two match-all miss flows, one for unicast dst mac and
@@ -89,6 +88,53 @@ u16 mlx5_eswitch_get_prio_range(struct mlx5_eswitch *esw)
return 1;
}
+static void
+mlx5_eswitch_set_rule_source_port(struct mlx5_eswitch *esw,
+ struct mlx5_flow_spec *spec,
+ struct mlx5_esw_flow_attr *attr)
+{
+ void *misc2;
+ void *misc;
+
+ /* Use metadata matching because a vport is not represented by a single
+ * VHCA in dual-port RoCE mode, and matching on source vport may fail.
+ */
+ if (mlx5_eswitch_vport_match_metadata_enabled(esw)) {
+ misc2 = MLX5_ADDR_OF(fte_match_param, spec->match_value, misc_parameters_2);
+ MLX5_SET(fte_match_set_misc2, misc2, metadata_reg_c_0,
+ mlx5_eswitch_get_vport_metadata_for_match(attr->in_mdev->priv.eswitch,
+ attr->in_rep->vport));
+
+ misc2 = MLX5_ADDR_OF(fte_match_param, spec->match_criteria, misc_parameters_2);
+ MLX5_SET_TO_ONES(fte_match_set_misc2, misc2, metadata_reg_c_0);
+
+ spec->match_criteria_enable |= MLX5_MATCH_MISC_PARAMETERS_2;
+ misc = MLX5_ADDR_OF(fte_match_param, spec->match_criteria, misc_parameters);
+ if (memchr_inv(misc, 0, MLX5_ST_SZ_BYTES(fte_match_set_misc)))
+ spec->match_criteria_enable |= MLX5_MATCH_MISC_PARAMETERS;
+ } else {
+ misc = MLX5_ADDR_OF(fte_match_param, spec->match_value, misc_parameters);
+ MLX5_SET(fte_match_set_misc, misc, source_port, attr->in_rep->vport);
+
+ if (MLX5_CAP_ESW(esw->dev, merged_eswitch))
+ MLX5_SET(fte_match_set_misc, misc,
+ source_eswitch_owner_vhca_id,
+ MLX5_CAP_GEN(attr->in_mdev, vhca_id));
+
+ misc = MLX5_ADDR_OF(fte_match_param, spec->match_criteria, misc_parameters);
+ MLX5_SET_TO_ONES(fte_match_set_misc, misc, source_port);
+ if (MLX5_CAP_ESW(esw->dev, merged_eswitch))
+ MLX5_SET_TO_ONES(fte_match_set_misc, misc,
+ source_eswitch_owner_vhca_id);
+
+ spec->match_criteria_enable |= MLX5_MATCH_MISC_PARAMETERS;
+ }
+
+ if (MLX5_CAP_ESW_FLOWTABLE(esw->dev, flow_source) &&
+ attr->in_rep->vport == MLX5_VPORT_UPLINK)
+ spec->flow_context.flow_source = MLX5_FLOW_CONTEXT_FLOW_SOURCE_UPLINK;
+}
+
struct mlx5_flow_handle *
mlx5_eswitch_add_offloaded_rule(struct mlx5_eswitch *esw,
struct mlx5_flow_spec *spec,
@@ -100,9 +146,8 @@ mlx5_eswitch_add_offloaded_rule(struct mlx5_eswitch *esw,
struct mlx5_flow_handle *rule;
struct mlx5_flow_table *fdb;
int j, i = 0;
- void *misc;
- if (esw->mode != SRIOV_OFFLOADS)
+ if (esw->mode != MLX5_ESWITCH_OFFLOADS)
return ERR_PTR(-EOPNOTSUPP);
flow_act.action = attr->action;
@@ -160,21 +205,8 @@ mlx5_eswitch_add_offloaded_rule(struct mlx5_eswitch *esw,
i++;
}
- misc = MLX5_ADDR_OF(fte_match_param, spec->match_value, misc_parameters);
- MLX5_SET(fte_match_set_misc, misc, source_port, attr->in_rep->vport);
-
- if (MLX5_CAP_ESW(esw->dev, merged_eswitch))
- MLX5_SET(fte_match_set_misc, misc,
- source_eswitch_owner_vhca_id,
- MLX5_CAP_GEN(attr->in_mdev, vhca_id));
+ mlx5_eswitch_set_rule_source_port(esw, spec, attr);
- misc = MLX5_ADDR_OF(fte_match_param, spec->match_criteria, misc_parameters);
- MLX5_SET_TO_ONES(fte_match_set_misc, misc, source_port);
- if (MLX5_CAP_ESW(esw->dev, merged_eswitch))
- MLX5_SET_TO_ONES(fte_match_set_misc, misc,
- source_eswitch_owner_vhca_id);
-
- spec->match_criteria_enable = MLX5_MATCH_MISC_PARAMETERS;
if (flow_act.action & MLX5_FLOW_CONTEXT_ACTION_DECAP) {
if (attr->tunnel_match_level != MLX5_MATCH_NONE)
spec->match_criteria_enable |= MLX5_MATCH_OUTER_HEADERS;
@@ -193,7 +225,11 @@ mlx5_eswitch_add_offloaded_rule(struct mlx5_eswitch *esw,
goto err_esw_get;
}
- rule = mlx5_add_flow_rules(fdb, spec, &flow_act, dest, i);
+ if (mlx5_eswitch_termtbl_required(esw, &flow_act, spec))
+ rule = mlx5_eswitch_add_termtbl_rule(esw, fdb, spec, attr,
+ &flow_act, dest, i);
+ else
+ rule = mlx5_add_flow_rules(fdb, spec, &flow_act, dest, i);
if (IS_ERR(rule))
goto err_add_rule;
else
@@ -220,7 +256,6 @@ mlx5_eswitch_add_fwd_rule(struct mlx5_eswitch *esw,
struct mlx5_flow_table *fast_fdb;
struct mlx5_flow_table *fwd_fdb;
struct mlx5_flow_handle *rule;
- void *misc;
int i;
fast_fdb = esw_get_prio_table(esw, attr->chain, attr->prio, 0);
@@ -252,25 +287,11 @@ mlx5_eswitch_add_fwd_rule(struct mlx5_eswitch *esw,
dest[i].ft = fwd_fdb,
i++;
- misc = MLX5_ADDR_OF(fte_match_param, spec->match_value, misc_parameters);
- MLX5_SET(fte_match_set_misc, misc, source_port, attr->in_rep->vport);
-
- if (MLX5_CAP_ESW(esw->dev, merged_eswitch))
- MLX5_SET(fte_match_set_misc, misc,
- source_eswitch_owner_vhca_id,
- MLX5_CAP_GEN(attr->in_mdev, vhca_id));
+ mlx5_eswitch_set_rule_source_port(esw, spec, attr);
- misc = MLX5_ADDR_OF(fte_match_param, spec->match_criteria, misc_parameters);
- MLX5_SET_TO_ONES(fte_match_set_misc, misc, source_port);
- if (MLX5_CAP_ESW(esw->dev, merged_eswitch))
- MLX5_SET_TO_ONES(fte_match_set_misc, misc,
- source_eswitch_owner_vhca_id);
-
- if (attr->match_level == MLX5_MATCH_NONE)
- spec->match_criteria_enable = MLX5_MATCH_MISC_PARAMETERS;
- else
- spec->match_criteria_enable = MLX5_MATCH_OUTER_HEADERS |
- MLX5_MATCH_MISC_PARAMETERS;
+ spec->match_criteria_enable |= MLX5_MATCH_MISC_PARAMETERS;
+ if (attr->match_level != MLX5_MATCH_NONE)
+ spec->match_criteria_enable |= MLX5_MATCH_OUTER_HEADERS;
rule = mlx5_add_flow_rules(fast_fdb, spec, &flow_act, dest, i);
@@ -295,8 +316,16 @@ __mlx5_eswitch_del_rule(struct mlx5_eswitch *esw,
bool fwd_rule)
{
bool split = (attr->split_count > 0);
+ int i;
mlx5_del_flow_rules(rule);
+
+ /* unref the term table */
+ for (i = 0; i < MLX5_MAX_FLOW_FWD_VPORTS; i++) {
+ if (attr->dests[i].termtbl)
+ mlx5_eswitch_termtbl_put(esw, attr->dests[i].termtbl);
+ }
+
esw->offloads.num_flows--;
if (fwd_rule) {
@@ -328,12 +357,11 @@ mlx5_eswitch_del_fwd_rule(struct mlx5_eswitch *esw,
static int esw_set_global_vlan_pop(struct mlx5_eswitch *esw, u8 val)
{
struct mlx5_eswitch_rep *rep;
- int vf_vport, err = 0;
+ int i, err = 0;
esw_debug(esw->dev, "%s applying global %s policy\n", __func__, val ? "pop" : "none");
- for (vf_vport = 1; vf_vport < esw->enabled_vports; vf_vport++) {
- rep = &esw->offloads.vport_reps[vf_vport];
- if (atomic_read(&rep->rep_if[REP_ETH].state) != REP_LOADED)
+ mlx5_esw_for_each_host_func_rep(esw, i, rep, esw->esw_funcs.num_vfs) {
+ if (atomic_read(&rep->rep_data[REP_ETH].state) != REP_LOADED)
continue;
err = __mlx5_eswitch_set_vport_vlan(esw, rep->vport, 0, 0, val);
@@ -559,23 +587,87 @@ void mlx5_eswitch_del_send_to_vport_rule(struct mlx5_flow_handle *rule)
mlx5_del_flow_rules(rule);
}
-static void peer_miss_rules_setup(struct mlx5_core_dev *peer_dev,
+static int mlx5_eswitch_enable_passing_vport_metadata(struct mlx5_eswitch *esw)
+{
+ u32 out[MLX5_ST_SZ_DW(query_esw_vport_context_out)] = {};
+ u32 in[MLX5_ST_SZ_DW(modify_esw_vport_context_in)] = {};
+ u8 fdb_to_vport_reg_c_id;
+ int err;
+
+ err = mlx5_eswitch_query_esw_vport_context(esw, esw->manager_vport,
+ out, sizeof(out));
+ if (err)
+ return err;
+
+ fdb_to_vport_reg_c_id = MLX5_GET(query_esw_vport_context_out, out,
+ esw_vport_context.fdb_to_vport_reg_c_id);
+
+ fdb_to_vport_reg_c_id |= MLX5_FDB_TO_VPORT_REG_C_0;
+ MLX5_SET(modify_esw_vport_context_in, in,
+ esw_vport_context.fdb_to_vport_reg_c_id, fdb_to_vport_reg_c_id);
+
+ MLX5_SET(modify_esw_vport_context_in, in,
+ field_select.fdb_to_vport_reg_c_id, 1);
+
+ return mlx5_eswitch_modify_esw_vport_context(esw, esw->manager_vport,
+ in, sizeof(in));
+}
+
+static int mlx5_eswitch_disable_passing_vport_metadata(struct mlx5_eswitch *esw)
+{
+ u32 out[MLX5_ST_SZ_DW(query_esw_vport_context_out)] = {};
+ u32 in[MLX5_ST_SZ_DW(modify_esw_vport_context_in)] = {};
+ u8 fdb_to_vport_reg_c_id;
+ int err;
+
+ err = mlx5_eswitch_query_esw_vport_context(esw, esw->manager_vport,
+ out, sizeof(out));
+ if (err)
+ return err;
+
+ fdb_to_vport_reg_c_id = MLX5_GET(query_esw_vport_context_out, out,
+ esw_vport_context.fdb_to_vport_reg_c_id);
+
+ fdb_to_vport_reg_c_id &= ~MLX5_FDB_TO_VPORT_REG_C_0;
+
+ MLX5_SET(modify_esw_vport_context_in, in,
+ esw_vport_context.fdb_to_vport_reg_c_id, fdb_to_vport_reg_c_id);
+
+ MLX5_SET(modify_esw_vport_context_in, in,
+ field_select.fdb_to_vport_reg_c_id, 1);
+
+ return mlx5_eswitch_modify_esw_vport_context(esw, esw->manager_vport,
+ in, sizeof(in));
+}
+
+static void peer_miss_rules_setup(struct mlx5_eswitch *esw,
+ struct mlx5_core_dev *peer_dev,
struct mlx5_flow_spec *spec,
struct mlx5_flow_destination *dest)
{
- void *misc = MLX5_ADDR_OF(fte_match_param, spec->match_value,
- misc_parameters);
+ void *misc;
- MLX5_SET(fte_match_set_misc, misc, source_eswitch_owner_vhca_id,
- MLX5_CAP_GEN(peer_dev, vhca_id));
+ if (mlx5_eswitch_vport_match_metadata_enabled(esw)) {
+ misc = MLX5_ADDR_OF(fte_match_param, spec->match_criteria,
+ misc_parameters_2);
+ MLX5_SET_TO_ONES(fte_match_set_misc2, misc, metadata_reg_c_0);
- spec->match_criteria_enable = MLX5_MATCH_MISC_PARAMETERS;
+ spec->match_criteria_enable = MLX5_MATCH_MISC_PARAMETERS_2;
+ } else {
+ misc = MLX5_ADDR_OF(fte_match_param, spec->match_value,
+ misc_parameters);
- misc = MLX5_ADDR_OF(fte_match_param, spec->match_criteria,
- misc_parameters);
- MLX5_SET_TO_ONES(fte_match_set_misc, misc, source_port);
- MLX5_SET_TO_ONES(fte_match_set_misc, misc,
- source_eswitch_owner_vhca_id);
+ MLX5_SET(fte_match_set_misc, misc, source_eswitch_owner_vhca_id,
+ MLX5_CAP_GEN(peer_dev, vhca_id));
+
+ spec->match_criteria_enable = MLX5_MATCH_MISC_PARAMETERS;
+
+ misc = MLX5_ADDR_OF(fte_match_param, spec->match_criteria,
+ misc_parameters);
+ MLX5_SET_TO_ONES(fte_match_set_misc, misc, source_port);
+ MLX5_SET_TO_ONES(fte_match_set_misc, misc,
+ source_eswitch_owner_vhca_id);
+ }
dest->type = MLX5_FLOW_DESTINATION_TYPE_VPORT;
dest->vport.num = peer_dev->priv.eswitch->manager_vport;
@@ -583,6 +675,26 @@ static void peer_miss_rules_setup(struct mlx5_core_dev *peer_dev,
dest->vport.flags |= MLX5_FLOW_DEST_VPORT_VHCA_ID;
}
+static void esw_set_peer_miss_rule_source_port(struct mlx5_eswitch *esw,
+ struct mlx5_eswitch *peer_esw,
+ struct mlx5_flow_spec *spec,
+ u16 vport)
+{
+ void *misc;
+
+ if (mlx5_eswitch_vport_match_metadata_enabled(esw)) {
+ misc = MLX5_ADDR_OF(fte_match_param, spec->match_value,
+ misc_parameters_2);
+ MLX5_SET(fte_match_set_misc2, misc, metadata_reg_c_0,
+ mlx5_eswitch_get_vport_metadata_for_match(peer_esw,
+ vport));
+ } else {
+ misc = MLX5_ADDR_OF(fte_match_param, spec->match_value,
+ misc_parameters);
+ MLX5_SET(fte_match_set_misc, misc, source_port, vport);
+ }
+}
+
static int esw_add_fdb_peer_miss_rules(struct mlx5_eswitch *esw,
struct mlx5_core_dev *peer_dev)
{
@@ -600,7 +712,7 @@ static int esw_add_fdb_peer_miss_rules(struct mlx5_eswitch *esw,
if (!spec)
return -ENOMEM;
- peer_miss_rules_setup(peer_dev, spec, &dest);
+ peer_miss_rules_setup(esw, peer_dev, spec, &dest);
flows = kvzalloc(nvports * sizeof(*flows), GFP_KERNEL);
if (!flows) {
@@ -613,7 +725,9 @@ static int esw_add_fdb_peer_miss_rules(struct mlx5_eswitch *esw,
misc_parameters);
if (mlx5_core_is_ecpf_esw_manager(esw->dev)) {
- MLX5_SET(fte_match_set_misc, misc, source_port, MLX5_VPORT_PF);
+ esw_set_peer_miss_rule_source_port(esw, peer_dev->priv.eswitch,
+ spec, MLX5_VPORT_PF);
+
flow = mlx5_add_flow_rules(esw->fdb_table.offloads.slow_fdb,
spec, &flow_act, &dest, 1);
if (IS_ERR(flow)) {
@@ -635,7 +749,10 @@ static int esw_add_fdb_peer_miss_rules(struct mlx5_eswitch *esw,
}
mlx5_esw_for_each_vf_vport_num(esw, i, mlx5_core_max_vfs(esw->dev)) {
- MLX5_SET(fte_match_set_misc, misc, source_port, i);
+ esw_set_peer_miss_rule_source_port(esw,
+ peer_dev->priv.eswitch,
+ spec, i);
+
flow = mlx5_add_flow_rules(esw->fdb_table.offloads.slow_fdb,
spec, &flow_act, &dest, 1);
if (IS_ERR(flow)) {
@@ -919,6 +1036,30 @@ static void esw_destroy_offloads_fast_fdb_tables(struct mlx5_eswitch *esw)
#define MAX_PF_SQ 256
#define MAX_SQ_NVPORTS 32
+static void esw_set_flow_group_source_port(struct mlx5_eswitch *esw,
+ u32 *flow_group_in)
+{
+ void *match_criteria = MLX5_ADDR_OF(create_flow_group_in,
+ flow_group_in,
+ match_criteria);
+
+ if (mlx5_eswitch_vport_match_metadata_enabled(esw)) {
+ MLX5_SET(create_flow_group_in, flow_group_in,
+ match_criteria_enable,
+ MLX5_MATCH_MISC_PARAMETERS_2);
+
+ MLX5_SET_TO_ONES(fte_match_param, match_criteria,
+ misc_parameters_2.metadata_reg_c_0);
+ } else {
+ MLX5_SET(create_flow_group_in, flow_group_in,
+ match_criteria_enable,
+ MLX5_MATCH_MISC_PARAMETERS);
+
+ MLX5_SET_TO_ONES(fte_match_param, match_criteria,
+ misc_parameters.source_port);
+ }
+}
+
static int esw_create_offloads_fdb_tables(struct mlx5_eswitch *esw, int nvports)
{
int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
@@ -1016,19 +1157,21 @@ static int esw_create_offloads_fdb_tables(struct mlx5_eswitch *esw, int nvports)
/* create peer esw miss group */
memset(flow_group_in, 0, inlen);
- MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable,
- MLX5_MATCH_MISC_PARAMETERS);
- match_criteria = MLX5_ADDR_OF(create_flow_group_in, flow_group_in,
- match_criteria);
+ esw_set_flow_group_source_port(esw, flow_group_in);
+
+ if (!mlx5_eswitch_vport_match_metadata_enabled(esw)) {
+ match_criteria = MLX5_ADDR_OF(create_flow_group_in,
+ flow_group_in,
+ match_criteria);
- MLX5_SET_TO_ONES(fte_match_param, match_criteria,
- misc_parameters.source_port);
- MLX5_SET_TO_ONES(fte_match_param, match_criteria,
- misc_parameters.source_eswitch_owner_vhca_id);
+ MLX5_SET_TO_ONES(fte_match_param, match_criteria,
+ misc_parameters.source_eswitch_owner_vhca_id);
+
+ MLX5_SET(create_flow_group_in, flow_group_in,
+ source_eswitch_owner_vhca_id_valid, 1);
+ }
- MLX5_SET(create_flow_group_in, flow_group_in,
- source_eswitch_owner_vhca_id_valid, 1);
MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, ix);
MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index,
ix + esw->total_vports - 1);
@@ -1142,7 +1285,6 @@ static int esw_create_vport_rx_group(struct mlx5_eswitch *esw, int nvports)
int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
struct mlx5_flow_group *g;
u32 *flow_group_in;
- void *match_criteria, *misc;
int err = 0;
nvports = nvports + MLX5_ESW_MISS_FLOWS;
@@ -1152,12 +1294,8 @@ static int esw_create_vport_rx_group(struct mlx5_eswitch *esw, int nvports)
/* create vport rx group */
memset(flow_group_in, 0, inlen);
- MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable,
- MLX5_MATCH_MISC_PARAMETERS);
- match_criteria = MLX5_ADDR_OF(create_flow_group_in, flow_group_in, match_criteria);
- misc = MLX5_ADDR_OF(fte_match_param, match_criteria, misc_parameters);
- MLX5_SET_TO_ONES(fte_match_set_misc, misc, source_port);
+ esw_set_flow_group_source_port(esw, flow_group_in);
MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 0);
MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, nvports - 1);
@@ -1196,13 +1334,24 @@ mlx5_eswitch_create_vport_rx_rule(struct mlx5_eswitch *esw, u16 vport,
goto out;
}
- misc = MLX5_ADDR_OF(fte_match_param, spec->match_value, misc_parameters);
- MLX5_SET(fte_match_set_misc, misc, source_port, vport);
+ if (mlx5_eswitch_vport_match_metadata_enabled(esw)) {
+ misc = MLX5_ADDR_OF(fte_match_param, spec->match_value, misc_parameters_2);
+ MLX5_SET(fte_match_set_misc2, misc, metadata_reg_c_0,
+ mlx5_eswitch_get_vport_metadata_for_match(esw, vport));
- misc = MLX5_ADDR_OF(fte_match_param, spec->match_criteria, misc_parameters);
- MLX5_SET_TO_ONES(fte_match_set_misc, misc, source_port);
+ misc = MLX5_ADDR_OF(fte_match_param, spec->match_criteria, misc_parameters_2);
+ MLX5_SET_TO_ONES(fte_match_set_misc2, misc, metadata_reg_c_0);
- spec->match_criteria_enable = MLX5_MATCH_MISC_PARAMETERS;
+ spec->match_criteria_enable = MLX5_MATCH_MISC_PARAMETERS_2;
+ } else {
+ misc = MLX5_ADDR_OF(fte_match_param, spec->match_value, misc_parameters);
+ MLX5_SET(fte_match_set_misc, misc, source_port, vport);
+
+ misc = MLX5_ADDR_OF(fte_match_param, spec->match_criteria, misc_parameters);
+ MLX5_SET_TO_ONES(fte_match_set_misc, misc, source_port);
+
+ spec->match_criteria_enable = MLX5_MATCH_MISC_PARAMETERS;
+ }
flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
flow_rule = mlx5_add_flow_rules(esw->offloads.ft_offloads, spec,
@@ -1220,21 +1369,22 @@ out:
static int esw_offloads_start(struct mlx5_eswitch *esw,
struct netlink_ext_ack *extack)
{
- int err, err1, num_vfs = esw->dev->priv.sriov.num_vfs;
+ int err, err1;
- if (esw->mode != SRIOV_LEGACY &&
+ if (esw->mode != MLX5_ESWITCH_LEGACY &&
!mlx5_core_is_ecpf_esw_manager(esw->dev)) {
NL_SET_ERR_MSG_MOD(extack,
"Can't set offloads mode, SRIOV legacy not enabled");
return -EINVAL;
}
- mlx5_eswitch_disable_sriov(esw);
- err = mlx5_eswitch_enable_sriov(esw, num_vfs, SRIOV_OFFLOADS);
+ mlx5_eswitch_disable(esw);
+ mlx5_eswitch_update_num_of_vfs(esw, esw->dev->priv.sriov.num_vfs);
+ err = mlx5_eswitch_enable(esw, MLX5_ESWITCH_OFFLOADS);
if (err) {
NL_SET_ERR_MSG_MOD(extack,
"Failed setting eswitch to offloads");
- err1 = mlx5_eswitch_enable_sriov(esw, num_vfs, SRIOV_LEGACY);
+ err1 = mlx5_eswitch_enable(esw, MLX5_ESWITCH_LEGACY);
if (err1) {
NL_SET_ERR_MSG_MOD(extack,
"Failed setting eswitch back to legacy");
@@ -1242,7 +1392,6 @@ static int esw_offloads_start(struct mlx5_eswitch *esw,
}
if (esw->offloads.inline_mode == MLX5_INLINE_MODE_NONE) {
if (mlx5_eswitch_inline_mode_get(esw,
- num_vfs,
&esw->offloads.inline_mode)) {
esw->offloads.inline_mode = MLX5_INLINE_MODE_L2;
NL_SET_ERR_MSG_MOD(extack,
@@ -1259,11 +1408,11 @@ void esw_offloads_cleanup_reps(struct mlx5_eswitch *esw)
int esw_offloads_init_reps(struct mlx5_eswitch *esw)
{
- int total_vports = MLX5_TOTAL_VPORTS(esw->dev);
+ int total_vports = esw->total_vports;
struct mlx5_core_dev *dev = esw->dev;
struct mlx5_eswitch_rep *rep;
u8 hw_id[ETH_ALEN], rep_type;
- int vport;
+ int vport_index;
esw->offloads.vport_reps = kcalloc(total_vports,
sizeof(struct mlx5_eswitch_rep),
@@ -1271,14 +1420,15 @@ int esw_offloads_init_reps(struct mlx5_eswitch *esw)
if (!esw->offloads.vport_reps)
return -ENOMEM;
- mlx5_query_nic_vport_mac_address(dev, 0, hw_id);
+ mlx5_query_mac_address(dev, hw_id);
- mlx5_esw_for_all_reps(esw, vport, rep) {
- rep->vport = mlx5_eswitch_index_to_vport_num(esw, vport);
+ mlx5_esw_for_all_reps(esw, vport_index, rep) {
+ rep->vport = mlx5_eswitch_index_to_vport_num(esw, vport_index);
+ rep->vport_index = vport_index;
ether_addr_copy(rep->hw_id, hw_id);
for (rep_type = 0; rep_type < NUM_REP_TYPES; rep_type++)
- atomic_set(&rep->rep_if[rep_type].state,
+ atomic_set(&rep->rep_data[rep_type].state,
REP_UNREGISTERED);
}
@@ -1288,9 +1438,9 @@ int esw_offloads_init_reps(struct mlx5_eswitch *esw)
static void __esw_offloads_unload_rep(struct mlx5_eswitch *esw,
struct mlx5_eswitch_rep *rep, u8 rep_type)
{
- if (atomic_cmpxchg(&rep->rep_if[rep_type].state,
+ if (atomic_cmpxchg(&rep->rep_data[rep_type].state,
REP_LOADED, REP_REGISTERED) == REP_LOADED)
- rep->rep_if[rep_type].unload(rep);
+ esw->offloads.rep_ops[rep_type]->unload(rep);
}
static void __unload_reps_special_vport(struct mlx5_eswitch *esw, u8 rep_type)
@@ -1329,21 +1479,20 @@ static void esw_offloads_unload_vf_reps(struct mlx5_eswitch *esw, int nvports)
__unload_reps_vf_vport(esw, nvports, rep_type);
}
-static void __unload_reps_all_vport(struct mlx5_eswitch *esw, int nvports,
- u8 rep_type)
+static void __unload_reps_all_vport(struct mlx5_eswitch *esw, u8 rep_type)
{
- __unload_reps_vf_vport(esw, nvports, rep_type);
+ __unload_reps_vf_vport(esw, esw->esw_funcs.num_vfs, rep_type);
/* Special vports must be the last to unload. */
__unload_reps_special_vport(esw, rep_type);
}
-static void esw_offloads_unload_all_reps(struct mlx5_eswitch *esw, int nvports)
+static void esw_offloads_unload_all_reps(struct mlx5_eswitch *esw)
{
u8 rep_type = NUM_REP_TYPES;
while (rep_type-- > 0)
- __unload_reps_all_vport(esw, nvports, rep_type);
+ __unload_reps_all_vport(esw, rep_type);
}
static int __esw_offloads_load_rep(struct mlx5_eswitch *esw,
@@ -1351,11 +1500,11 @@ static int __esw_offloads_load_rep(struct mlx5_eswitch *esw,
{
int err = 0;
- if (atomic_cmpxchg(&rep->rep_if[rep_type].state,
+ if (atomic_cmpxchg(&rep->rep_data[rep_type].state,
REP_REGISTERED, REP_LOADED) == REP_REGISTERED) {
- err = rep->rep_if[rep_type].load(esw->dev, rep);
+ err = esw->offloads.rep_ops[rep_type]->load(esw->dev, rep);
if (err)
- atomic_set(&rep->rep_if[rep_type].state,
+ atomic_set(&rep->rep_data[rep_type].state,
REP_REGISTERED);
}
@@ -1419,6 +1568,26 @@ err_vf:
return err;
}
+static int __load_reps_all_vport(struct mlx5_eswitch *esw, u8 rep_type)
+{
+ int err;
+
+ /* Special vports must be loaded first; the uplink rep creates mdev resources. */
+ err = __load_reps_special_vport(esw, rep_type);
+ if (err)
+ return err;
+
+ err = __load_reps_vf_vport(esw, esw->esw_funcs.num_vfs, rep_type);
+ if (err)
+ goto err_vfs;
+
+ return 0;
+
+err_vfs:
+ __unload_reps_special_vport(esw, rep_type);
+ return err;
+}
+
static int esw_offloads_load_vf_reps(struct mlx5_eswitch *esw, int nvports)
{
u8 rep_type = 0;
@@ -1438,34 +1607,13 @@ err_reps:
return err;
}
-static int __load_reps_all_vport(struct mlx5_eswitch *esw, int nvports,
- u8 rep_type)
-{
- int err;
-
- /* Special vports must be loaded first. */
- err = __load_reps_special_vport(esw, rep_type);
- if (err)
- return err;
-
- err = __load_reps_vf_vport(esw, nvports, rep_type);
- if (err)
- goto err_vfs;
-
- return 0;
-
-err_vfs:
- __unload_reps_special_vport(esw, rep_type);
- return err;
-}
-
-static int esw_offloads_load_all_reps(struct mlx5_eswitch *esw, int nvports)
+static int esw_offloads_load_all_reps(struct mlx5_eswitch *esw)
{
u8 rep_type = 0;
int err;
for (rep_type = 0; rep_type < NUM_REP_TYPES; rep_type++) {
- err = __load_reps_all_vport(esw, nvports, rep_type);
+ err = __load_reps_all_vport(esw, rep_type);
if (err)
goto err_reps;
}
@@ -1474,7 +1622,7 @@ static int esw_offloads_load_all_reps(struct mlx5_eswitch *esw, int nvports)
err_reps:
while (rep_type-- > 0)
- __unload_reps_all_vport(esw, nvports, rep_type);
+ __unload_reps_all_vport(esw, rep_type);
return err;
}
@@ -1510,6 +1658,10 @@ static int mlx5_esw_offloads_devcom_event(int event,
switch (event) {
case ESW_OFFLOADS_DEVCOM_PAIR:
+ if (mlx5_eswitch_vport_match_metadata_enabled(esw) !=
+ mlx5_eswitch_vport_match_metadata_enabled(peer_esw))
+ break;
+
err = mlx5_esw_offloads_pair(esw, peer_esw);
if (err)
goto err_out;
@@ -1578,32 +1730,16 @@ static void esw_offloads_devcom_cleanup(struct mlx5_eswitch *esw)
static int esw_vport_ingress_prio_tag_config(struct mlx5_eswitch *esw,
struct mlx5_vport *vport)
{
- struct mlx5_core_dev *dev = esw->dev;
struct mlx5_flow_act flow_act = {0};
struct mlx5_flow_spec *spec;
int err = 0;
/* For prio tag mode, there is only one FTE:
- * 1) Untagged packets - push prio tag VLAN, allow
+ * 1) Untagged packets - push prio tag VLAN and modify metadata if
+ * required, allow
* Unmatched traffic is allowed by default
*/
- if (!MLX5_CAP_ESW_INGRESS_ACL(dev, ft_support))
- return -EOPNOTSUPP;
-
- esw_vport_cleanup_ingress_rules(esw, vport);
-
- err = esw_vport_enable_ingress_acl(esw, vport);
- if (err) {
- mlx5_core_warn(esw->dev,
- "failed to enable prio tag ingress acl (%d) on vport[%d]\n",
- err, vport->vport);
- return err;
- }
-
- esw_debug(esw->dev,
- "vport[%d] configure ingress rules\n", vport->vport);
-
spec = kvzalloc(sizeof(*spec), GFP_KERNEL);
if (!spec) {
err = -ENOMEM;
@@ -1619,6 +1755,12 @@ static int esw_vport_ingress_prio_tag_config(struct mlx5_eswitch *esw,
flow_act.vlan[0].ethtype = ETH_P_8021Q;
flow_act.vlan[0].vid = 0;
flow_act.vlan[0].prio = 0;
+
+ if (vport->ingress.modify_metadata_rule) {
+ flow_act.action |= MLX5_FLOW_CONTEXT_ACTION_MOD_HDR;
+ flow_act.modify_id = vport->ingress.modify_metadata_id;
+ }
+
vport->ingress.allow_rule =
mlx5_add_flow_rules(vport->ingress.acl, spec,
&flow_act, NULL, 0);
@@ -1639,6 +1781,58 @@ out_no_mem:
return err;
}
+static int esw_vport_add_ingress_acl_modify_metadata(struct mlx5_eswitch *esw,
+ struct mlx5_vport *vport)
+{
+ u8 action[MLX5_UN_SZ_BYTES(set_action_in_add_action_in_auto)] = {};
+ struct mlx5_flow_act flow_act = {};
+ struct mlx5_flow_spec spec = {};
+ int err = 0;
+
+ MLX5_SET(set_action_in, action, action_type, MLX5_ACTION_TYPE_SET);
+ MLX5_SET(set_action_in, action, field, MLX5_ACTION_IN_FIELD_METADATA_REG_C_0);
+ MLX5_SET(set_action_in, action, data,
+ mlx5_eswitch_get_vport_metadata_for_match(esw, vport->vport));
+
+ err = mlx5_modify_header_alloc(esw->dev, MLX5_FLOW_NAMESPACE_ESW_INGRESS,
+ 1, action, &vport->ingress.modify_metadata_id);
+ if (err) {
+ esw_warn(esw->dev,
+ "failed to alloc modify header for vport %d ingress acl (%d)\n",
+ vport->vport, err);
+ return err;
+ }
+
+ flow_act.action = MLX5_FLOW_CONTEXT_ACTION_MOD_HDR | MLX5_FLOW_CONTEXT_ACTION_ALLOW;
+ flow_act.modify_id = vport->ingress.modify_metadata_id;
+ vport->ingress.modify_metadata_rule = mlx5_add_flow_rules(vport->ingress.acl,
+ &spec, &flow_act, NULL, 0);
+ if (IS_ERR(vport->ingress.modify_metadata_rule)) {
+ err = PTR_ERR(vport->ingress.modify_metadata_rule);
+ esw_warn(esw->dev,
+ "failed to add setting metadata rule for vport %d ingress acl, err(%d)\n",
+ vport->vport, err);
+ vport->ingress.modify_metadata_rule = NULL;
+ goto out;
+ }
+
+out:
+ if (err)
+ mlx5_modify_header_dealloc(esw->dev, vport->ingress.modify_metadata_id);
+ return err;
+}
+
+void esw_vport_del_ingress_acl_modify_metadata(struct mlx5_eswitch *esw,
+ struct mlx5_vport *vport)
+{
+ if (vport->ingress.modify_metadata_rule) {
+ mlx5_del_flow_rules(vport->ingress.modify_metadata_rule);
+ mlx5_modify_header_dealloc(esw->dev, vport->ingress.modify_metadata_id);
+
+ vport->ingress.modify_metadata_rule = NULL;
+ }
+}
+
static int esw_vport_egress_prio_tag_config(struct mlx5_eswitch *esw,
struct mlx5_vport *vport)
{
@@ -1646,6 +1840,9 @@ static int esw_vport_egress_prio_tag_config(struct mlx5_eswitch *esw,
struct mlx5_flow_spec *spec;
int err = 0;
+ if (!MLX5_CAP_GEN(esw->dev, prio_tag_required))
+ return 0;
+
/* For prio tag mode, there is only one FTE:
* 1) prio tag packets - pop the prio tag VLAN, allow
* Unmatched traffic is allowed by default
@@ -1699,27 +1896,98 @@ out_no_mem:
return err;
}
-static int esw_prio_tag_acls_config(struct mlx5_eswitch *esw, int nvports)
+static int esw_vport_ingress_common_config(struct mlx5_eswitch *esw,
+ struct mlx5_vport *vport)
{
- struct mlx5_vport *vport = NULL;
- int i, j;
int err;
- mlx5_esw_for_each_vf_vport(esw, i, vport, nvports) {
+ if (!mlx5_eswitch_vport_match_metadata_enabled(esw) &&
+ !MLX5_CAP_GEN(esw->dev, prio_tag_required))
+ return 0;
+
+ esw_vport_cleanup_ingress_rules(esw, vport);
+
+ err = esw_vport_enable_ingress_acl(esw, vport);
+ if (err) {
+ esw_warn(esw->dev,
+ "failed to enable ingress acl (%d) on vport[%d]\n",
+ err, vport->vport);
+ return err;
+ }
+
+ esw_debug(esw->dev,
+ "vport[%d] configure ingress rules\n", vport->vport);
+
+ if (mlx5_eswitch_vport_match_metadata_enabled(esw)) {
+ err = esw_vport_add_ingress_acl_modify_metadata(esw, vport);
+ if (err)
+ goto out;
+ }
+
+ if (MLX5_CAP_GEN(esw->dev, prio_tag_required) &&
+ mlx5_eswitch_is_vf_vport(esw, vport->vport)) {
err = esw_vport_ingress_prio_tag_config(esw, vport);
if (err)
- goto err_ingress;
- err = esw_vport_egress_prio_tag_config(esw, vport);
+ goto out;
+ }
+
+out:
+ if (err)
+ esw_vport_disable_ingress_acl(esw, vport);
+ return err;
+}
+
+static bool
+esw_check_vport_match_metadata_supported(const struct mlx5_eswitch *esw)
+{
+ if (!MLX5_CAP_ESW(esw->dev, esw_uplink_ingress_acl))
+ return false;
+
+ if (!(MLX5_CAP_ESW_FLOWTABLE(esw->dev, fdb_to_vport_reg_c_id) &
+ MLX5_FDB_TO_VPORT_REG_C_0))
+ return false;
+
+ if (!MLX5_CAP_ESW_FLOWTABLE(esw->dev, flow_source))
+ return false;
+
+ if (mlx5_core_is_ecpf_esw_manager(esw->dev) ||
+ mlx5_ecpf_vport_exists(esw->dev))
+ return false;
+
+ return true;
+}
+
+static int esw_create_offloads_acl_tables(struct mlx5_eswitch *esw)
+{
+ struct mlx5_vport *vport;
+ int i, j;
+ int err;
+
+ if (esw_check_vport_match_metadata_supported(esw))
+ esw->flags |= MLX5_ESWITCH_VPORT_MATCH_METADATA;
+
+ mlx5_esw_for_all_vports(esw, i, vport) {
+ err = esw_vport_ingress_common_config(esw, vport);
if (err)
- goto err_egress;
+ goto err_ingress;
+
+ if (mlx5_eswitch_is_vf_vport(esw, vport->vport)) {
+ err = esw_vport_egress_prio_tag_config(esw, vport);
+ if (err)
+ goto err_egress;
+ }
}
+ if (mlx5_eswitch_vport_match_metadata_enabled(esw))
+ esw_info(esw->dev, "Use metadata reg_c as source vport to match\n");
+
return 0;
err_egress:
esw_vport_disable_ingress_acl(esw, vport);
err_ingress:
- mlx5_esw_for_each_vf_vport_reverse(esw, j, vport, i - 1) {
+ for (j = MLX5_VPORT_PF; j < i; j++) {
+ vport = &esw->vports[j];
esw_vport_disable_egress_acl(esw, vport);
esw_vport_disable_ingress_acl(esw, vport);
}
@@ -1727,40 +1995,46 @@ err_ingress:
return err;
}
-static void esw_prio_tag_acls_cleanup(struct mlx5_eswitch *esw)
+static void esw_destroy_offloads_acl_tables(struct mlx5_eswitch *esw)
{
struct mlx5_vport *vport;
int i;
- mlx5_esw_for_each_vf_vport(esw, i, vport, esw->dev->priv.sriov.num_vfs) {
+ mlx5_esw_for_all_vports(esw, i, vport) {
esw_vport_disable_egress_acl(esw, vport);
esw_vport_disable_ingress_acl(esw, vport);
}
+
+ esw->flags &= ~MLX5_ESWITCH_VPORT_MATCH_METADATA;
}
-static int esw_offloads_steering_init(struct mlx5_eswitch *esw, int vf_nvports,
- int nvports)
+static int esw_offloads_steering_init(struct mlx5_eswitch *esw)
{
+ int num_vfs = esw->esw_funcs.num_vfs;
+ int total_vports;
int err;
+ if (mlx5_core_is_ecpf_esw_manager(esw->dev))
+ total_vports = esw->total_vports;
+ else
+ total_vports = num_vfs + MLX5_SPECIAL_VPORTS(esw->dev);
+
memset(&esw->fdb_table.offloads, 0, sizeof(struct offloads_fdb));
mutex_init(&esw->fdb_table.offloads.fdb_prio_lock);
- if (MLX5_CAP_GEN(esw->dev, prio_tag_required)) {
- err = esw_prio_tag_acls_config(esw, vf_nvports);
- if (err)
- return err;
- }
-
- err = esw_create_offloads_fdb_tables(esw, nvports);
+ err = esw_create_offloads_acl_tables(esw);
if (err)
return err;
- err = esw_create_offloads_table(esw, nvports);
+ err = esw_create_offloads_fdb_tables(esw, total_vports);
+ if (err)
+ goto create_fdb_err;
+
+ err = esw_create_offloads_table(esw, total_vports);
if (err)
goto create_ft_err;
- err = esw_create_vport_rx_group(esw, nvports);
+ err = esw_create_vport_rx_group(esw, total_vports);
if (err)
goto create_fg_err;
@@ -1772,6 +2046,9 @@ create_fg_err:
create_ft_err:
esw_destroy_offloads_fdb_tables(esw);
+create_fdb_err:
+ esw_destroy_offloads_acl_tables(esw);
+
return err;
}
@@ -1780,88 +2057,105 @@ static void esw_offloads_steering_cleanup(struct mlx5_eswitch *esw)
esw_destroy_vport_rx_group(esw);
esw_destroy_offloads_table(esw);
esw_destroy_offloads_fdb_tables(esw);
- if (MLX5_CAP_GEN(esw->dev, prio_tag_required))
- esw_prio_tag_acls_cleanup(esw);
+ esw_destroy_offloads_acl_tables(esw);
}
-static void esw_host_params_event_handler(struct work_struct *work)
+static void
+esw_vfs_changed_event_handler(struct mlx5_eswitch *esw, const u32 *out)
{
- struct mlx5_host_work *host_work;
- struct mlx5_eswitch *esw;
- int err, num_vf = 0;
+ bool host_pf_disabled;
+ u16 new_num_vfs;
- host_work = container_of(work, struct mlx5_host_work, work);
- esw = host_work->esw;
+ new_num_vfs = MLX5_GET(query_esw_functions_out, out,
+ host_params_context.host_num_of_vfs);
+ host_pf_disabled = MLX5_GET(query_esw_functions_out, out,
+ host_params_context.host_pf_disabled);
- err = mlx5_query_host_params_num_vfs(esw->dev, &num_vf);
- if (err || num_vf == esw->host_info.num_vfs)
- goto out;
+ if (new_num_vfs == esw->esw_funcs.num_vfs || host_pf_disabled)
+ return;
/* Number of VFs can only change from "0 to x" or "x to 0". */
- if (esw->host_info.num_vfs > 0) {
- esw_offloads_unload_vf_reps(esw, esw->host_info.num_vfs);
+ if (esw->esw_funcs.num_vfs > 0) {
+ esw_offloads_unload_vf_reps(esw, esw->esw_funcs.num_vfs);
} else {
- err = esw_offloads_load_vf_reps(esw, num_vf);
+ int err;
+ err = esw_offloads_load_vf_reps(esw, new_num_vfs);
if (err)
- goto out;
+ return;
}
+ esw->esw_funcs.num_vfs = new_num_vfs;
+}
+
+static void esw_functions_changed_event_handler(struct work_struct *work)
+{
+ struct mlx5_host_work *host_work;
+ struct mlx5_eswitch *esw;
+ const u32 *out;
- esw->host_info.num_vfs = num_vf;
+ host_work = container_of(work, struct mlx5_host_work, work);
+ esw = host_work->esw;
+ out = mlx5_esw_query_functions(esw->dev);
+ if (IS_ERR(out))
+ goto out;
+
+ esw_vfs_changed_event_handler(esw, out);
+ kvfree(out);
out:
kfree(host_work);
}
-static int esw_host_params_event(struct notifier_block *nb,
- unsigned long type, void *data)
+int mlx5_esw_funcs_changed_handler(struct notifier_block *nb, unsigned long type, void *data)
{
+ struct mlx5_esw_functions *esw_funcs;
struct mlx5_host_work *host_work;
- struct mlx5_host_info *host_info;
struct mlx5_eswitch *esw;
host_work = kzalloc(sizeof(*host_work), GFP_ATOMIC);
if (!host_work)
return NOTIFY_DONE;
- host_info = mlx5_nb_cof(nb, struct mlx5_host_info, nb);
- esw = container_of(host_info, struct mlx5_eswitch, host_info);
+ esw_funcs = mlx5_nb_cof(nb, struct mlx5_esw_functions, nb);
+ esw = container_of(esw_funcs, struct mlx5_eswitch, esw_funcs);
host_work->esw = esw;
- INIT_WORK(&host_work->work, esw_host_params_event_handler);
+ INIT_WORK(&host_work->work, esw_functions_changed_event_handler);
queue_work(esw->work_queue, &host_work->work);
return NOTIFY_OK;
}
-int esw_offloads_init(struct mlx5_eswitch *esw, int vf_nvports,
- int total_nvports)
+int esw_offloads_init(struct mlx5_eswitch *esw)
{
int err;
- err = esw_offloads_steering_init(esw, vf_nvports, total_nvports);
+ err = esw_offloads_steering_init(esw);
if (err)
return err;
- err = esw_offloads_load_all_reps(esw, vf_nvports);
+ if (mlx5_eswitch_vport_match_metadata_enabled(esw)) {
+ err = mlx5_eswitch_enable_passing_vport_metadata(esw);
+ if (err)
+ goto err_vport_metadata;
+ }
+
+ err = esw_offloads_load_all_reps(esw);
if (err)
goto err_reps;
esw_offloads_devcom_init(esw);
-
- if (mlx5_core_is_ecpf_esw_manager(esw->dev)) {
- MLX5_NB_INIT(&esw->host_info.nb, esw_host_params_event,
- HOST_PARAMS_CHANGE);
- mlx5_eq_notifier_register(esw->dev, &esw->host_info.nb);
- esw->host_info.num_vfs = vf_nvports;
- }
+ mutex_init(&esw->offloads.termtbl_mutex);
mlx5_rdma_enable_roce(esw->dev);
return 0;
err_reps:
+ if (mlx5_eswitch_vport_match_metadata_enabled(esw))
+ mlx5_eswitch_disable_passing_vport_metadata(esw);
+err_vport_metadata:
esw_offloads_steering_cleanup(esw);
return err;
}
@@ -1869,13 +2163,13 @@ err_reps:
static int esw_offloads_stop(struct mlx5_eswitch *esw,
struct netlink_ext_ack *extack)
{
- int err, err1, num_vfs = esw->dev->priv.sriov.num_vfs;
+ int err, err1;
- mlx5_eswitch_disable_sriov(esw);
- err = mlx5_eswitch_enable_sriov(esw, num_vfs, SRIOV_LEGACY);
+ mlx5_eswitch_disable(esw);
+ err = mlx5_eswitch_enable(esw, MLX5_ESWITCH_LEGACY);
if (err) {
NL_SET_ERR_MSG_MOD(extack, "Failed setting eswitch to legacy");
- err1 = mlx5_eswitch_enable_sriov(esw, num_vfs, SRIOV_OFFLOADS);
+ err1 = mlx5_eswitch_enable(esw, MLX5_ESWITCH_OFFLOADS);
if (err1) {
NL_SET_ERR_MSG_MOD(extack,
"Failed setting eswitch back to offloads");
@@ -1887,19 +2181,11 @@ static int esw_offloads_stop(struct mlx5_eswitch *esw,
void esw_offloads_cleanup(struct mlx5_eswitch *esw)
{
- u16 num_vfs;
-
- if (mlx5_core_is_ecpf_esw_manager(esw->dev)) {
- mlx5_eq_notifier_unregister(esw->dev, &esw->host_info.nb);
- flush_workqueue(esw->work_queue);
- num_vfs = esw->host_info.num_vfs;
- } else {
- num_vfs = esw->dev->priv.sriov.num_vfs;
- }
-
mlx5_rdma_disable_roce(esw->dev);
esw_offloads_devcom_cleanup(esw);
- esw_offloads_unload_all_reps(esw, num_vfs);
+ esw_offloads_unload_all_reps(esw);
+ if (mlx5_eswitch_vport_match_metadata_enabled(esw))
+ mlx5_eswitch_disable_passing_vport_metadata(esw);
esw_offloads_steering_cleanup(esw);
}
@@ -1907,10 +2193,10 @@ static int esw_mode_from_devlink(u16 mode, u16 *mlx5_mode)
{
switch (mode) {
case DEVLINK_ESWITCH_MODE_LEGACY:
- *mlx5_mode = SRIOV_LEGACY;
+ *mlx5_mode = MLX5_ESWITCH_LEGACY;
break;
case DEVLINK_ESWITCH_MODE_SWITCHDEV:
- *mlx5_mode = SRIOV_OFFLOADS;
+ *mlx5_mode = MLX5_ESWITCH_OFFLOADS;
break;
default:
return -EINVAL;
@@ -1922,10 +2208,10 @@ static int esw_mode_from_devlink(u16 mode, u16 *mlx5_mode)
static int esw_mode_to_devlink(u16 mlx5_mode, u16 *mode)
{
switch (mlx5_mode) {
- case SRIOV_LEGACY:
+ case MLX5_ESWITCH_LEGACY:
*mode = DEVLINK_ESWITCH_MODE_LEGACY;
break;
- case SRIOV_OFFLOADS:
+ case MLX5_ESWITCH_OFFLOADS:
*mode = DEVLINK_ESWITCH_MODE_SWITCHDEV;
break;
default:
@@ -1989,7 +2275,7 @@ static int mlx5_devlink_eswitch_check(struct devlink *devlink)
if (!MLX5_ESWITCH_MANAGER(dev))
return -EPERM;
- if (dev->priv.eswitch->mode == SRIOV_NONE &&
+ if (dev->priv.eswitch->mode == MLX5_ESWITCH_NONE &&
!mlx5_core_is_ecpf_esw_manager(dev))
return -EOPNOTSUPP;
@@ -2040,7 +2326,7 @@ int mlx5_devlink_eswitch_inline_mode_set(struct devlink *devlink, u8 mode,
{
struct mlx5_core_dev *dev = devlink_priv(devlink);
struct mlx5_eswitch *esw = dev->priv.eswitch;
- int err, vport;
+ int err, vport, num_vport;
u8 mlx5_mode;
err = mlx5_devlink_eswitch_check(devlink);
@@ -2069,7 +2355,7 @@ int mlx5_devlink_eswitch_inline_mode_set(struct devlink *devlink, u8 mode,
if (err)
goto out;
- for (vport = 1; vport < esw->enabled_vports; vport++) {
+ mlx5_esw_for_each_host_func_vport(esw, vport, esw->esw_funcs.num_vfs) {
err = mlx5_modify_nic_vport_min_inline(dev, vport, mlx5_mode);
if (err) {
NL_SET_ERR_MSG_MOD(extack,
@@ -2082,7 +2368,8 @@ int mlx5_devlink_eswitch_inline_mode_set(struct devlink *devlink, u8 mode,
return 0;
revert_inline_mode:
- while (--vport > 0)
+ num_vport = --vport;
+ mlx5_esw_for_each_host_func_vport_reverse(esw, vport, num_vport)
mlx5_modify_nic_vport_min_inline(dev,
vport,
esw->offloads.inline_mode);
@@ -2103,7 +2390,7 @@ int mlx5_devlink_eswitch_inline_mode_get(struct devlink *devlink, u8 *mode)
return esw_inline_mode_to_devlink(esw->offloads.inline_mode, mode);
}
-int mlx5_eswitch_inline_mode_get(struct mlx5_eswitch *esw, int nvfs, u8 *mode)
+int mlx5_eswitch_inline_mode_get(struct mlx5_eswitch *esw, u8 *mode)
{
u8 prev_mlx5_mode, mlx5_mode = MLX5_INLINE_MODE_L2;
struct mlx5_core_dev *dev = esw->dev;
@@ -2112,7 +2399,7 @@ int mlx5_eswitch_inline_mode_get(struct mlx5_eswitch *esw, int nvfs, u8 *mode)
if (!MLX5_CAP_GEN(dev, vport_group_manager))
return -EOPNOTSUPP;
- if (esw->mode == SRIOV_NONE)
+ if (esw->mode == MLX5_ESWITCH_NONE)
return -EOPNOTSUPP;
switch (MLX5_CAP_ETH(dev, wqe_inline_mode)) {
@@ -2127,9 +2414,10 @@ int mlx5_eswitch_inline_mode_get(struct mlx5_eswitch *esw, int nvfs, u8 *mode)
}
query_vports:
- for (vport = 1; vport <= nvfs; vport++) {
+ mlx5_query_nic_vport_min_inline(dev, esw->first_host_vport, &prev_mlx5_mode);
+ mlx5_esw_for_each_host_func_vport(esw, vport, esw->esw_funcs.num_vfs) {
mlx5_query_nic_vport_min_inline(dev, vport, &mlx5_mode);
- if (vport > 1 && prev_mlx5_mode != mlx5_mode)
+ if (prev_mlx5_mode != mlx5_mode)
return -EINVAL;
prev_mlx5_mode = mlx5_mode;
}
@@ -2139,7 +2427,8 @@ out:
return 0;
}
-int mlx5_devlink_eswitch_encap_mode_set(struct devlink *devlink, u8 encap,
+int mlx5_devlink_eswitch_encap_mode_set(struct devlink *devlink,
+ enum devlink_eswitch_encap_mode encap,
struct netlink_ext_ack *extack)
{
struct mlx5_core_dev *dev = devlink_priv(devlink);
@@ -2158,7 +2447,7 @@ int mlx5_devlink_eswitch_encap_mode_set(struct devlink *devlink, u8 encap,
if (encap && encap != DEVLINK_ESWITCH_ENCAP_MODE_BASIC)
return -EOPNOTSUPP;
- if (esw->mode == SRIOV_LEGACY) {
+ if (esw->mode == MLX5_ESWITCH_LEGACY) {
esw->offloads.encap = encap;
return 0;
}
@@ -2188,7 +2477,8 @@ int mlx5_devlink_eswitch_encap_mode_set(struct devlink *devlink, u8 encap,
return err;
}
-int mlx5_devlink_eswitch_encap_mode_get(struct devlink *devlink, u8 *encap)
+int mlx5_devlink_eswitch_encap_mode_get(struct devlink *devlink,
+ enum devlink_eswitch_encap_mode *encap)
{
struct mlx5_core_dev *dev = devlink_priv(devlink);
struct mlx5_eswitch *esw = dev->priv.eswitch;
@@ -2203,36 +2493,31 @@ int mlx5_devlink_eswitch_encap_mode_get(struct devlink *devlink, u8 *encap)
}
void mlx5_eswitch_register_vport_reps(struct mlx5_eswitch *esw,
- struct mlx5_eswitch_rep_if *__rep_if,
+ const struct mlx5_eswitch_rep_ops *ops,
u8 rep_type)
{
- struct mlx5_eswitch_rep_if *rep_if;
+ struct mlx5_eswitch_rep_data *rep_data;
struct mlx5_eswitch_rep *rep;
int i;
+ esw->offloads.rep_ops[rep_type] = ops;
mlx5_esw_for_all_reps(esw, i, rep) {
- rep_if = &rep->rep_if[rep_type];
- rep_if->load = __rep_if->load;
- rep_if->unload = __rep_if->unload;
- rep_if->get_proto_dev = __rep_if->get_proto_dev;
- rep_if->priv = __rep_if->priv;
-
- atomic_set(&rep_if->state, REP_REGISTERED);
+ rep_data = &rep->rep_data[rep_type];
+ atomic_set(&rep_data->state, REP_REGISTERED);
}
}
EXPORT_SYMBOL(mlx5_eswitch_register_vport_reps);
void mlx5_eswitch_unregister_vport_reps(struct mlx5_eswitch *esw, u8 rep_type)
{
- u16 max_vf = mlx5_core_max_vfs(esw->dev);
struct mlx5_eswitch_rep *rep;
int i;
- if (esw->mode == SRIOV_OFFLOADS)
- __unload_reps_all_vport(esw, max_vf, rep_type);
+ if (esw->mode == MLX5_ESWITCH_OFFLOADS)
+ __unload_reps_all_vport(esw, rep_type);
mlx5_esw_for_all_reps(esw, i, rep)
- atomic_set(&rep->rep_if[rep_type].state, REP_UNREGISTERED);
+ atomic_set(&rep->rep_data[rep_type].state, REP_UNREGISTERED);
}
EXPORT_SYMBOL(mlx5_eswitch_unregister_vport_reps);
@@ -2241,7 +2526,7 @@ void *mlx5_eswitch_get_uplink_priv(struct mlx5_eswitch *esw, u8 rep_type)
struct mlx5_eswitch_rep *rep;
rep = mlx5_eswitch_get_rep(esw, MLX5_VPORT_UPLINK);
- return rep->rep_if[rep_type].priv;
+ return rep->rep_data[rep_type].priv;
}
void *mlx5_eswitch_get_proto_dev(struct mlx5_eswitch *esw,
@@ -2252,9 +2537,9 @@ void *mlx5_eswitch_get_proto_dev(struct mlx5_eswitch *esw,
rep = mlx5_eswitch_get_rep(esw, vport);
- if (atomic_read(&rep->rep_if[rep_type].state) == REP_LOADED &&
- rep->rep_if[rep_type].get_proto_dev)
- return rep->rep_if[rep_type].get_proto_dev(rep);
+ if (atomic_read(&rep->rep_data[rep_type].state) == REP_LOADED &&
+ esw->offloads.rep_ops[rep_type]->get_proto_dev)
+ return esw->offloads.rep_ops[rep_type]->get_proto_dev(rep);
return NULL;
}
EXPORT_SYMBOL(mlx5_eswitch_get_proto_dev);
@@ -2271,3 +2556,22 @@ struct mlx5_eswitch_rep *mlx5_eswitch_vport_rep(struct mlx5_eswitch *esw,
return mlx5_eswitch_get_rep(esw, vport);
}
EXPORT_SYMBOL(mlx5_eswitch_vport_rep);
+
+bool mlx5_eswitch_is_vf_vport(const struct mlx5_eswitch *esw, u16 vport_num)
+{
+ return vport_num >= MLX5_VPORT_FIRST_VF &&
+ vport_num <= esw->dev->priv.sriov.max_vfs;
+}
+
+bool mlx5_eswitch_vport_match_metadata_enabled(const struct mlx5_eswitch *esw)
+{
+ return !!(esw->flags & MLX5_ESWITCH_VPORT_MATCH_METADATA);
+}
+EXPORT_SYMBOL(mlx5_eswitch_vport_match_metadata_enabled);
+
+u32 mlx5_eswitch_get_vport_metadata_for_match(const struct mlx5_eswitch *esw,
+ u16 vport_num)
+{
+ return ((MLX5_CAP_GEN(esw->dev, vhca_id) & 0xffff) << 16) | vport_num;
+}
+EXPORT_SYMBOL(mlx5_eswitch_get_vport_metadata_for_match);
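
For context, a minimal sketch (not part of the patch) of how a consumer could match on the source-vport metadata instead of source_port, mirroring the pattern used above in mlx5_eswitch_set_rule_source_port; the metadata value packs the eswitch owner's vhca_id in the upper 16 bits and the vport number in the lower 16. The helper name is hypothetical.

/* Illustrative sketch only: build a reg_c_0 metadata match for a vport,
 * assuming metadata-based source matching is enabled on this eswitch.
 */
static void example_set_metadata_match(struct mlx5_eswitch *esw,
				       struct mlx5_flow_spec *spec,
				       u16 vport_num)
{
	void *misc2;

	misc2 = MLX5_ADDR_OF(fte_match_param, spec->match_value,
			     misc_parameters_2);
	MLX5_SET(fte_match_set_misc2, misc2, metadata_reg_c_0,
		 mlx5_eswitch_get_vport_metadata_for_match(esw, vport_num));

	misc2 = MLX5_ADDR_OF(fte_match_param, spec->match_criteria,
			     misc_parameters_2);
	MLX5_SET_TO_ONES(fte_match_set_misc2, misc2, metadata_reg_c_0);

	spec->match_criteria_enable |= MLX5_MATCH_MISC_PARAMETERS_2;
}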
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c
new file mode 100644
index 000000000000..1d55a324a17e
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c
@@ -0,0 +1,277 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+// Copyright (c) 2019 Mellanox Technologies.
+
+#include <linux/mlx5/fs.h>
+#include "eswitch.h"
+
+struct mlx5_termtbl_handle {
+ struct hlist_node termtbl_hlist;
+
+ struct mlx5_flow_table *termtbl;
+ struct mlx5_flow_act flow_act;
+ struct mlx5_flow_destination dest;
+
+ struct mlx5_flow_handle *rule;
+ int ref_count;
+};
+
+static u32
+mlx5_eswitch_termtbl_hash(struct mlx5_flow_act *flow_act,
+ struct mlx5_flow_destination *dest)
+{
+ u32 hash;
+
+ hash = jhash_1word(flow_act->action, 0);
+ hash = jhash((const void *)&flow_act->vlan,
+ sizeof(flow_act->vlan), hash);
+ hash = jhash((const void *)&dest->vport.num,
+ sizeof(dest->vport.num), hash);
+ hash = jhash((const void *)&dest->vport.vhca_id,
+ sizeof(dest->vport.num), hash);
+ return hash;
+}
+
+static int
+mlx5_eswitch_termtbl_cmp(struct mlx5_flow_act *flow_act1,
+ struct mlx5_flow_destination *dest1,
+ struct mlx5_flow_act *flow_act2,
+ struct mlx5_flow_destination *dest2)
+{
+ return flow_act1->action != flow_act2->action ||
+ dest1->vport.num != dest2->vport.num ||
+ dest1->vport.vhca_id != dest2->vport.vhca_id ||
+ memcmp(&flow_act1->vlan, &flow_act2->vlan,
+ sizeof(flow_act1->vlan));
+}
+
+static int
+mlx5_eswitch_termtbl_create(struct mlx5_core_dev *dev,
+ struct mlx5_termtbl_handle *tt,
+ struct mlx5_flow_act *flow_act)
+{
+ static const struct mlx5_flow_spec spec = {};
+ struct mlx5_flow_namespace *root_ns;
+ int prio, flags;
+ int err;
+
+ root_ns = mlx5_get_flow_namespace(dev, MLX5_FLOW_NAMESPACE_FDB);
+ if (!root_ns) {
+ esw_warn(dev, "Failed to get FDB flow namespace\n");
+ return -EOPNOTSUPP;
+ }
+
+ /* Since this is the terminating action, the termination table uses
+ * the same priority as the slow path.
+ */
+ prio = FDB_SLOW_PATH;
+ flags = MLX5_FLOW_TABLE_TERMINATION;
+ tt->termtbl = mlx5_create_auto_grouped_flow_table(root_ns, prio, 1, 1,
+ 0, flags);
+ if (IS_ERR(tt->termtbl)) {
+ esw_warn(dev, "Failed to create termination table\n");
+ return -EOPNOTSUPP;
+ }
+
+ tt->rule = mlx5_add_flow_rules(tt->termtbl, &spec, flow_act,
+ &tt->dest, 1);
+
+ if (IS_ERR(tt->rule)) {
+ esw_warn(dev, "Failed to create termination table rule\n");
+ goto add_flow_err;
+ }
+ return 0;
+
+add_flow_err:
+ err = mlx5_destroy_flow_table(tt->termtbl);
+ if (err)
+ esw_warn(dev, "Failed to destroy termination table\n");
+
+ return -EOPNOTSUPP;
+}
+
+static struct mlx5_termtbl_handle *
+mlx5_eswitch_termtbl_get_create(struct mlx5_eswitch *esw,
+ struct mlx5_flow_act *flow_act,
+ struct mlx5_flow_destination *dest)
+{
+ struct mlx5_termtbl_handle *tt;
+ bool found = false;
+ u32 hash_key;
+ int err;
+
+ mutex_lock(&esw->offloads.termtbl_mutex);
+
+ hash_key = mlx5_eswitch_termtbl_hash(flow_act, dest);
+ hash_for_each_possible(esw->offloads.termtbl_tbl, tt,
+ termtbl_hlist, hash_key) {
+ if (!mlx5_eswitch_termtbl_cmp(&tt->flow_act, &tt->dest,
+ flow_act, dest)) {
+ found = true;
+ break;
+ }
+ }
+ if (found)
+ goto tt_add_ref;
+
+ tt = kzalloc(sizeof(*tt), GFP_KERNEL);
+ if (!tt) {
+ err = -ENOMEM;
+ goto tt_create_err;
+ }
+
+ tt->dest.type = MLX5_FLOW_DESTINATION_TYPE_VPORT;
+ tt->dest.vport.num = dest->vport.num;
+ tt->dest.vport.vhca_id = dest->vport.vhca_id;
+ memcpy(&tt->flow_act, flow_act, sizeof(*flow_act));
+
+ err = mlx5_eswitch_termtbl_create(esw->dev, tt, flow_act);
+ if (err) {
+ esw_warn(esw->dev, "Failed to create termination table\n");
+ goto tt_create_err;
+ }
+ hash_add(esw->offloads.termtbl_tbl, &tt->termtbl_hlist, hash_key);
+tt_add_ref:
+ tt->ref_count++;
+ mutex_unlock(&esw->offloads.termtbl_mutex);
+ return tt;
+tt_create_err:
+ kfree(tt);
+ mutex_unlock(&esw->offloads.termtbl_mutex);
+ return ERR_PTR(err);
+}
+
+void
+mlx5_eswitch_termtbl_put(struct mlx5_eswitch *esw,
+ struct mlx5_termtbl_handle *tt)
+{
+ mutex_lock(&esw->offloads.termtbl_mutex);
+ if (--tt->ref_count == 0)
+ hash_del(&tt->termtbl_hlist);
+ mutex_unlock(&esw->offloads.termtbl_mutex);
+
+ if (!tt->ref_count) {
+ mlx5_del_flow_rules(tt->rule);
+ mlx5_destroy_flow_table(tt->termtbl);
+ kfree(tt);
+ }
+}
+
+static void
+mlx5_eswitch_termtbl_actions_move(struct mlx5_flow_act *src,
+ struct mlx5_flow_act *dst)
+{
+ if (!(src->action & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH))
+ return;
+
+ src->action &= ~MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH;
+ dst->action |= MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH;
+ memcpy(&dst->vlan[0], &src->vlan[0], sizeof(src->vlan[0]));
+ memset(&src->vlan[0], 0, sizeof(src->vlan[0]));
+
+ if (!(src->action & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH_2))
+ return;
+
+ src->action &= ~MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH_2;
+ dst->action |= MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH_2;
+ memcpy(&dst->vlan[1], &src->vlan[1], sizeof(src->vlan[1]));
+ memset(&src->vlan[1], 0, sizeof(src->vlan[1]));
+}
+
+bool
+mlx5_eswitch_termtbl_required(struct mlx5_eswitch *esw,
+ struct mlx5_flow_act *flow_act,
+ struct mlx5_flow_spec *spec)
+{
+ u32 port_mask = MLX5_GET(fte_match_param, spec->match_criteria,
+ misc_parameters.source_port);
+ u32 port_value = MLX5_GET(fte_match_param, spec->match_value,
+ misc_parameters.source_port);
+
+ if (!MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, termination_table))
+ return false;
+
+ /* push vlan on RX */
+ return (flow_act->action & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH) &&
+ ((port_mask & port_value) == MLX5_VPORT_UPLINK);
+}
+
+struct mlx5_flow_handle *
+mlx5_eswitch_add_termtbl_rule(struct mlx5_eswitch *esw,
+ struct mlx5_flow_table *fdb,
+ struct mlx5_flow_spec *spec,
+ struct mlx5_esw_flow_attr *attr,
+ struct mlx5_flow_act *flow_act,
+ struct mlx5_flow_destination *dest,
+ int num_dest)
+{
+ struct mlx5_flow_act term_tbl_act = {};
+ struct mlx5_flow_handle *rule = NULL;
+ bool term_table_created = false;
+ int num_vport_dests = 0;
+ int i, curr_dest;
+
+ mlx5_eswitch_termtbl_actions_move(flow_act, &term_tbl_act);
+ term_tbl_act.action |= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
+
+ for (i = 0; i < num_dest; i++) {
+ struct mlx5_termtbl_handle *tt;
+
+ /* only vport destinations can be terminated */
+ if (dest[i].type != MLX5_FLOW_DESTINATION_TYPE_VPORT)
+ continue;
+
+ /* get the terminating table for the action list */
+ tt = mlx5_eswitch_termtbl_get_create(esw, &term_tbl_act,
+ &dest[i]);
+ if (IS_ERR(tt)) {
+ esw_warn(esw->dev, "Failed to create termination table\n");
+ goto revert_changes;
+ }
+ attr->dests[num_vport_dests].termtbl = tt;
+ num_vport_dests++;
+
+ /* link the destination with the termination table */
+ dest[i].type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
+ dest[i].ft = tt->termtbl;
+ term_table_created = true;
+ }
+
+ /* at least one destination should reference a termination table */
+ if (!term_table_created)
+ goto revert_changes;
+
+ /* create the FTE */
+ rule = mlx5_add_flow_rules(fdb, spec, flow_act, dest, num_dest);
+ if (IS_ERR(rule))
+ goto revert_changes;
+
+ goto out;
+
+revert_changes:
+ /* revert the changes that were made to the original flow_act
+ * and fall back to the original rule actions
+ */
+ mlx5_eswitch_termtbl_actions_move(&term_tbl_act, flow_act);
+
+ for (curr_dest = 0; curr_dest < num_vport_dests; curr_dest++) {
+ struct mlx5_termtbl_handle *tt = attr->dests[curr_dest].termtbl;
+
+ /* search for the destination associated with the
+ * current term table
+ */
+ for (i = 0; i < num_dest; i++) {
+ if (dest[i].ft != tt->termtbl)
+ continue;
+
+ memset(&dest[i], 0, sizeof(dest[i]));
+ dest[i].type = MLX5_FLOW_DESTINATION_TYPE_VPORT;
+ dest[i].vport.num = tt->dest.vport.num;
+ dest[i].vport.vhca_id = tt->dest.vport.vhca_id;
+ mlx5_eswitch_termtbl_put(esw, tt);
+ break;
+ }
+ }
+ rule = mlx5_add_flow_rules(fdb, spec, flow_act, dest, num_dest);
+out:
+ return rule;
+}
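
To make the reference counting in the new termination-table cache easier to follow, here is a sketch of the caller-side release path (illustrative only, not part of the patch): mlx5_eswitch_add_termtbl_rule() takes a reference per vport destination and records it in attr->dests[i].termtbl, and the rule-deletion path in eswitch_offloads.c releases those references with mlx5_eswitch_termtbl_put(). The function name below is hypothetical.

/* Sketch mirroring __mlx5_eswitch_del_rule() above: every termination
 * table referenced by the rule's attr is put once when the rule is
 * deleted, freeing the table when its ref_count drops to zero.
 */
static void example_release_termtbls(struct mlx5_eswitch *esw,
				     struct mlx5_esw_flow_attr *attr)
{
	int i;

	for (i = 0; i < MLX5_MAX_FLOW_FWD_VPORTS; i++) {
		if (attr->dests[i].termtbl)
			mlx5_eswitch_termtbl_put(esw, attr->dests[i].termtbl);
	}
}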
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/events.c b/drivers/net/ethernet/mellanox/mlx5/core/events.c
index a81e8d2168d8..8bcf3426b9c6 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/events.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/events.c
@@ -108,8 +108,8 @@ static const char *eqe_type_str(u8 type)
return "MLX5_EVENT_TYPE_STALL_EVENT";
case MLX5_EVENT_TYPE_CMD:
return "MLX5_EVENT_TYPE_CMD";
- case MLX5_EVENT_TYPE_HOST_PARAMS_CHANGE:
- return "MLX5_EVENT_TYPE_HOST_PARAMS_CHANGE";
+ case MLX5_EVENT_TYPE_ESW_FUNCTIONS_CHANGED:
+ return "MLX5_EVENT_TYPE_ESW_FUNCTIONS_CHANGED";
case MLX5_EVENT_TYPE_PAGE_REQUEST:
return "MLX5_EVENT_TYPE_PAGE_REQUEST";
case MLX5_EVENT_TYPE_PAGE_FAULT:
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fpga/conn.c b/drivers/net/ethernet/mellanox/mlx5/core/fpga/conn.c
index ca2296a2f9ee..4c50efe4e7f1 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fpga/conn.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fpga/conn.c
@@ -414,7 +414,8 @@ static void mlx5_fpga_conn_cq_tasklet(unsigned long data)
mlx5_fpga_conn_cqes(conn, MLX5_FPGA_CQ_BUDGET);
}
-static void mlx5_fpga_conn_cq_complete(struct mlx5_core_cq *mcq)
+static void mlx5_fpga_conn_cq_complete(struct mlx5_core_cq *mcq,
+ struct mlx5_eqe *eqe)
{
struct mlx5_fpga_conn *conn;
@@ -429,6 +430,7 @@ static int mlx5_fpga_conn_create_cq(struct mlx5_fpga_conn *conn, int cq_size)
struct mlx5_fpga_device *fdev = conn->fdev;
struct mlx5_core_dev *mdev = fdev->mdev;
u32 temp_cqc[MLX5_ST_SZ_DW(cqc)] = {0};
+ u32 out[MLX5_ST_SZ_DW(create_cq_out)];
struct mlx5_wq_param wqp;
struct mlx5_cqe64 *cqe;
int inlen, err, eqn;
@@ -476,7 +478,7 @@ static int mlx5_fpga_conn_create_cq(struct mlx5_fpga_conn *conn, int cq_size)
pas = (__be64 *)MLX5_ADDR_OF(create_cq_in, in, pas);
mlx5_fill_page_frag_array(&conn->cq.wq_ctrl.buf, pas);
- err = mlx5_core_create_cq(mdev, &conn->cq.mcq, in, inlen);
+ err = mlx5_core_create_cq(mdev, &conn->cq.mcq, in, inlen, out, sizeof(out));
kvfree(in);
if (err)
@@ -867,7 +869,7 @@ struct mlx5_fpga_conn *mlx5_fpga_conn_create(struct mlx5_fpga_device *fdev,
conn->cb_arg = attr->cb_arg;
remote_mac = MLX5_ADDR_OF(fpga_qpc, conn->fpga_qpc, remote_mac_47_32);
- err = mlx5_query_nic_vport_mac_address(fdev->mdev, 0, remote_mac);
+ err = mlx5_query_mac_address(fdev->mdev, remote_mac);
if (err) {
mlx5_fpga_err(fdev, "Failed to query local MAC: %d\n", err);
ret = ERR_PTR(err);
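
A brief sketch (hedged, not from the patch) of the interface shape this hunk adapts to: CQ completion callbacks now also receive the triggering EQE, and mlx5_core_create_cq() takes an explicit output mailbox. The example function names are hypothetical.

/* Illustrative only: a completion handler with the new two-argument
 * signature, and a create call that supplies the output mailbox.
 */
static void example_cq_complete(struct mlx5_core_cq *mcq,
				struct mlx5_eqe *eqe)
{
	/* eqe describes the completion event that fired for this CQ */
}

static int example_create_cq(struct mlx5_core_dev *mdev,
			     struct mlx5_core_cq *mcq, u32 *in, int inlen)
{
	u32 out[MLX5_ST_SZ_DW(create_cq_out)] = {};

	return mlx5_core_create_cq(mdev, mcq, in, inlen, out, sizeof(out));
}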
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fpga/ipsec.c b/drivers/net/ethernet/mellanox/mlx5/core/fpga/ipsec.c
index 52c47d3dd5a5..c76da309506b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fpga/ipsec.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fpga/ipsec.c
@@ -636,7 +636,8 @@ static bool mlx5_is_fpga_egress_ipsec_rule(struct mlx5_core_dev *dev,
u8 match_criteria_enable,
const u32 *match_c,
const u32 *match_v,
- struct mlx5_flow_act *flow_act)
+ struct mlx5_flow_act *flow_act,
+ struct mlx5_flow_context *flow_context)
{
const void *outer_c = MLX5_ADDR_OF(fte_match_param, match_c,
outer_headers);
@@ -655,7 +656,7 @@ static bool mlx5_is_fpga_egress_ipsec_rule(struct mlx5_core_dev *dev,
(match_criteria_enable &
~(MLX5_MATCH_OUTER_HEADERS | MLX5_MATCH_MISC_PARAMETERS)) ||
(flow_act->action & ~(MLX5_FLOW_CONTEXT_ACTION_ENCRYPT | MLX5_FLOW_CONTEXT_ACTION_ALLOW)) ||
- (flow_act->flags & FLOW_ACT_HAS_TAG))
+ (flow_context->flags & FLOW_CONTEXT_HAS_TAG))
return false;
return true;
@@ -767,7 +768,8 @@ mlx5_fpga_ipsec_fs_create_sa_ctx(struct mlx5_core_dev *mdev,
fg->mask.match_criteria_enable,
fg->mask.match_criteria,
fte->val,
- &fte->action))
+ &fte->action,
+ &fte->flow_context))
return ERR_PTR(-EINVAL);
else if (!mlx5_is_fpga_ipsec_rule(mdev,
fg->mask.match_criteria_enable,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fpga/ipsec.h b/drivers/net/ethernet/mellanox/mlx5/core/fpga/ipsec.h
index 2b5e63b0d4d6..382985e65b48 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fpga/ipsec.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fpga/ipsec.h
@@ -37,8 +37,6 @@
#include "accel/ipsec.h"
#include "fs_cmd.h"
-#ifdef CONFIG_MLX5_FPGA
-
u32 mlx5_fpga_ipsec_device_caps(struct mlx5_core_dev *mdev);
unsigned int mlx5_fpga_ipsec_counters_count(struct mlx5_core_dev *mdev);
int mlx5_fpga_ipsec_counters_read(struct mlx5_core_dev *mdev, u64 *counters,
@@ -66,77 +64,4 @@ int mlx5_fpga_esp_modify_xfrm(struct mlx5_accel_esp_xfrm *xfrm,
const struct mlx5_flow_cmds *
mlx5_fs_cmd_get_default_ipsec_fpga_cmds(enum fs_flow_table_type type);
-#else
-
-static inline u32 mlx5_fpga_ipsec_device_caps(struct mlx5_core_dev *mdev)
-{
- return 0;
-}
-
-static inline unsigned int
-mlx5_fpga_ipsec_counters_count(struct mlx5_core_dev *mdev)
-{
- return 0;
-}
-
-static inline int mlx5_fpga_ipsec_counters_read(struct mlx5_core_dev *mdev,
- u64 *counters)
-{
- return 0;
-}
-
-static inline void *
-mlx5_fpga_ipsec_create_sa_ctx(struct mlx5_core_dev *mdev,
- struct mlx5_accel_esp_xfrm *accel_xfrm,
- const __be32 saddr[4],
- const __be32 daddr[4],
- const __be32 spi, bool is_ipv6)
-{
- return NULL;
-}
-
-static inline void mlx5_fpga_ipsec_delete_sa_ctx(void *context)
-{
-}
-
-static inline int mlx5_fpga_ipsec_init(struct mlx5_core_dev *mdev)
-{
- return 0;
-}
-
-static inline void mlx5_fpga_ipsec_cleanup(struct mlx5_core_dev *mdev)
-{
-}
-
-static inline void mlx5_fpga_ipsec_build_fs_cmds(void)
-{
-}
-
-static inline struct mlx5_accel_esp_xfrm *
-mlx5_fpga_esp_create_xfrm(struct mlx5_core_dev *mdev,
- const struct mlx5_accel_esp_xfrm_attrs *attrs,
- u32 flags)
-{
- return ERR_PTR(-EOPNOTSUPP);
-}
-
-static inline void mlx5_fpga_esp_destroy_xfrm(struct mlx5_accel_esp_xfrm *xfrm)
-{
-}
-
-static inline int
-mlx5_fpga_esp_modify_xfrm(struct mlx5_accel_esp_xfrm *xfrm,
- const struct mlx5_accel_esp_xfrm_attrs *attrs)
-{
- return -EOPNOTSUPP;
-}
-
-static inline const struct mlx5_flow_cmds *
-mlx5_fs_cmd_get_default_ipsec_fpga_cmds(enum fs_flow_table_type type)
-{
- return mlx5_fs_cmd_get_default(type);
-}
-
-#endif /* CONFIG_MLX5_FPGA */
-
#endif /* __MLX5_FPGA_SADB_H__ */
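
The hunk above drops the block of static-inline stubs that previously backed these declarations when CONFIG_MLX5_FPGA was not set. The general Kconfig-gated stub pattern it used is sketched below with made-up names (CONFIG_FOO, foo_init); this is only an illustration of the idiom, not the mlx5 header.

#include <stdio.h>

/* #define CONFIG_FOO 1   -- define when the real implementation is built in */

#ifdef CONFIG_FOO
int foo_init(void);                 /* provided by the optional module */
#else
static inline int foo_init(void)    /* fallback: feature compiled out */
{
	return 0;
}
#endif /* CONFIG_FOO */

int main(void)
{
	printf("foo_init() -> %d\n", foo_init());
	return 0;
}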
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c
index 013b1ca4a791..7ac1249eadc3 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c
@@ -147,6 +147,7 @@ static int mlx5_cmd_create_flow_table(struct mlx5_flow_root_namespace *ns,
{
int en_encap = !!(ft->flags & MLX5_FLOW_TABLE_TUNNEL_EN_REFORMAT);
int en_decap = !!(ft->flags & MLX5_FLOW_TABLE_TUNNEL_EN_DECAP);
+ int term = !!(ft->flags & MLX5_FLOW_TABLE_TERMINATION);
u32 out[MLX5_ST_SZ_DW(create_flow_table_out)] = {0};
u32 in[MLX5_ST_SZ_DW(create_flow_table_in)] = {0};
struct mlx5_core_dev *dev = ns->dev;
@@ -167,6 +168,8 @@ static int mlx5_cmd_create_flow_table(struct mlx5_flow_root_namespace *ns,
en_decap);
MLX5_SET(create_flow_table_in, in, flow_table_context.reformat_en,
en_encap);
+ MLX5_SET(create_flow_table_in, in, flow_table_context.termination_table,
+ term);
switch (ft->op_mod) {
case FS_FT_OP_MOD_NORMAL:
@@ -393,7 +396,11 @@ static int mlx5_cmd_set_fte(struct mlx5_core_dev *dev,
in_flow_context = MLX5_ADDR_OF(set_fte_in, in, flow_context);
MLX5_SET(flow_context, in_flow_context, group_id, group_id);
- MLX5_SET(flow_context, in_flow_context, flow_tag, fte->action.flow_tag);
+ MLX5_SET(flow_context, in_flow_context, flow_tag,
+ fte->flow_context.flow_tag);
+ MLX5_SET(flow_context, in_flow_context, flow_source,
+ fte->flow_context.flow_source);
+
MLX5_SET(flow_context, in_flow_context, extended_destination,
extended_dest);
if (extended_dest) {
@@ -768,6 +775,10 @@ int mlx5_modify_header_alloc(struct mlx5_core_dev *dev,
max_actions = MLX5_CAP_FLOWTABLE_NIC_TX(dev, max_modify_header_actions);
table_type = FS_FT_NIC_TX;
break;
+ case MLX5_FLOW_NAMESPACE_ESW_INGRESS:
+ max_actions = MLX5_CAP_ESW_INGRESS_ACL(dev, max_modify_header_actions);
+ table_type = FS_FT_ESW_INGRESS_ACL;
+ break;
default:
return -EOPNOTSUPP;
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
index fe76c6fd6d80..3e99799bdb40 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
@@ -584,7 +584,7 @@ err_ida_remove:
}
static struct fs_fte *alloc_fte(struct mlx5_flow_table *ft,
- u32 *match_value,
+ const struct mlx5_flow_spec *spec,
struct mlx5_flow_act *flow_act)
{
struct mlx5_flow_steering *steering = get_steering(&ft->node);
@@ -594,9 +594,10 @@ static struct fs_fte *alloc_fte(struct mlx5_flow_table *ft,
if (!fte)
return ERR_PTR(-ENOMEM);
- memcpy(fte->val, match_value, sizeof(fte->val));
+ memcpy(fte->val, &spec->match_value, sizeof(fte->val));
fte->node.type = FS_TYPE_FLOW_ENTRY;
fte->action = *flow_act;
+ fte->flow_context = spec->flow_context;
tree_init_node(&fte->node, NULL, del_sw_fte);
@@ -612,7 +613,7 @@ static void dealloc_flow_group(struct mlx5_flow_steering *steering,
static struct mlx5_flow_group *alloc_flow_group(struct mlx5_flow_steering *steering,
u8 match_criteria_enable,
- void *match_criteria,
+ const void *match_criteria,
int start_index,
int end_index)
{
@@ -642,7 +643,7 @@ static struct mlx5_flow_group *alloc_flow_group(struct mlx5_flow_steering *steer
static struct mlx5_flow_group *alloc_insert_flow_group(struct mlx5_flow_table *ft,
u8 match_criteria_enable,
- void *match_criteria,
+ const void *match_criteria,
int start_index,
int end_index,
struct list_head *prev)
@@ -1285,7 +1286,7 @@ free_handle:
}
static struct mlx5_flow_group *alloc_auto_flow_group(struct mlx5_flow_table *ft,
- struct mlx5_flow_spec *spec)
+ const struct mlx5_flow_spec *spec)
{
struct list_head *prev = &ft->node.children;
struct mlx5_flow_group *fg;
@@ -1430,7 +1431,9 @@ static bool check_conflicting_actions(u32 action1, u32 action2)
return false;
}
-static int check_conflicting_ftes(struct fs_fte *fte, const struct mlx5_flow_act *flow_act)
+static int check_conflicting_ftes(struct fs_fte *fte,
+ const struct mlx5_flow_context *flow_context,
+ const struct mlx5_flow_act *flow_act)
{
if (check_conflicting_actions(flow_act->action, fte->action.action)) {
mlx5_core_warn(get_dev(&fte->node),
@@ -1438,12 +1441,12 @@ static int check_conflicting_ftes(struct fs_fte *fte, const struct mlx5_flow_act
return -EEXIST;
}
- if ((flow_act->flags & FLOW_ACT_HAS_TAG) &&
- fte->action.flow_tag != flow_act->flow_tag) {
+ if ((flow_context->flags & FLOW_CONTEXT_HAS_TAG) &&
+ fte->flow_context.flow_tag != flow_context->flow_tag) {
mlx5_core_warn(get_dev(&fte->node),
"FTE flow tag %u already exists with different flow tag %u\n",
- fte->action.flow_tag,
- flow_act->flow_tag);
+ fte->flow_context.flow_tag,
+ flow_context->flow_tag);
return -EEXIST;
}
@@ -1451,7 +1454,7 @@ static int check_conflicting_ftes(struct fs_fte *fte, const struct mlx5_flow_act
}
static struct mlx5_flow_handle *add_rule_fg(struct mlx5_flow_group *fg,
- u32 *match_value,
+ const struct mlx5_flow_spec *spec,
struct mlx5_flow_act *flow_act,
struct mlx5_flow_destination *dest,
int dest_num,
@@ -1462,7 +1465,7 @@ static struct mlx5_flow_handle *add_rule_fg(struct mlx5_flow_group *fg,
int i;
int ret;
- ret = check_conflicting_ftes(fte, flow_act);
+ ret = check_conflicting_ftes(fte, &spec->flow_context, flow_act);
if (ret)
return ERR_PTR(ret);
@@ -1536,7 +1539,7 @@ static void free_match_list(struct match_list_head *head)
static int build_match_list(struct match_list_head *match_head,
struct mlx5_flow_table *ft,
- struct mlx5_flow_spec *spec)
+ const struct mlx5_flow_spec *spec)
{
struct rhlist_head *tmp, *list;
struct mlx5_flow_group *g;
@@ -1589,7 +1592,7 @@ static u64 matched_fgs_get_version(struct list_head *match_head)
static struct fs_fte *
lookup_fte_locked(struct mlx5_flow_group *g,
- u32 *match_value,
+ const u32 *match_value,
bool take_write)
{
struct fs_fte *fte_tmp;
@@ -1622,7 +1625,7 @@ out:
static struct mlx5_flow_handle *
try_add_to_existing_fg(struct mlx5_flow_table *ft,
struct list_head *match_head,
- struct mlx5_flow_spec *spec,
+ const struct mlx5_flow_spec *spec,
struct mlx5_flow_act *flow_act,
struct mlx5_flow_destination *dest,
int dest_num,
@@ -1637,7 +1640,7 @@ try_add_to_existing_fg(struct mlx5_flow_table *ft,
u64 version;
int err;
- fte = alloc_fte(ft, spec->match_value, flow_act);
+ fte = alloc_fte(ft, spec, flow_act);
if (IS_ERR(fte))
return ERR_PTR(-ENOMEM);
@@ -1653,8 +1656,7 @@ search_again_locked:
fte_tmp = lookup_fte_locked(g, spec->match_value, take_write);
if (!fte_tmp)
continue;
- rule = add_rule_fg(g, spec->match_value,
- flow_act, dest, dest_num, fte_tmp);
+ rule = add_rule_fg(g, spec, flow_act, dest, dest_num, fte_tmp);
up_write_ref_node(&fte_tmp->node, false);
tree_put_node(&fte_tmp->node, false);
kmem_cache_free(steering->ftes_cache, fte);
@@ -1701,8 +1703,7 @@ skip_search:
nested_down_write_ref_node(&fte->node, FS_LOCK_CHILD);
up_write_ref_node(&g->node, false);
- rule = add_rule_fg(g, spec->match_value,
- flow_act, dest, dest_num, fte);
+ rule = add_rule_fg(g, spec, flow_act, dest, dest_num, fte);
up_write_ref_node(&fte->node, false);
tree_put_node(&fte->node, false);
return rule;
@@ -1715,7 +1716,7 @@ out:
static struct mlx5_flow_handle *
_mlx5_add_flow_rules(struct mlx5_flow_table *ft,
- struct mlx5_flow_spec *spec,
+ const struct mlx5_flow_spec *spec,
struct mlx5_flow_act *flow_act,
struct mlx5_flow_destination *dest,
int dest_num)
@@ -1788,7 +1789,7 @@ search_again_locked:
if (err)
goto err_release_fg;
- fte = alloc_fte(ft, spec->match_value, flow_act);
+ fte = alloc_fte(ft, spec, flow_act);
if (IS_ERR(fte)) {
err = PTR_ERR(fte);
goto err_release_fg;
@@ -1802,8 +1803,7 @@ search_again_locked:
nested_down_write_ref_node(&fte->node, FS_LOCK_CHILD);
up_write_ref_node(&g->node, false);
- rule = add_rule_fg(g, spec->match_value, flow_act, dest,
- dest_num, fte);
+ rule = add_rule_fg(g, spec, flow_act, dest, dest_num, fte);
up_write_ref_node(&fte->node, false);
tree_put_node(&fte->node, false);
tree_put_node(&g->node, false);
@@ -1823,7 +1823,7 @@ static bool fwd_next_prio_supported(struct mlx5_flow_table *ft)
struct mlx5_flow_handle *
mlx5_add_flow_rules(struct mlx5_flow_table *ft,
- struct mlx5_flow_spec *spec,
+ const struct mlx5_flow_spec *spec,
struct mlx5_flow_act *flow_act,
struct mlx5_flow_destination *dest,
int num_dest)
@@ -2092,7 +2092,7 @@ struct mlx5_flow_namespace *mlx5_get_flow_vport_acl_namespace(struct mlx5_core_d
{
struct mlx5_flow_steering *steering = dev->priv.steering;
- if (!steering || vport >= MLX5_TOTAL_VPORTS(dev))
+ if (!steering || vport >= mlx5_eswitch_get_total_vports(dev))
return NULL;
switch (type) {
@@ -2423,7 +2423,7 @@ static void cleanup_egress_acls_root_ns(struct mlx5_core_dev *dev)
if (!steering->esw_egress_root_ns)
return;
- for (i = 0; i < MLX5_TOTAL_VPORTS(dev); i++)
+ for (i = 0; i < mlx5_eswitch_get_total_vports(dev); i++)
cleanup_root_ns(steering->esw_egress_root_ns[i]);
kfree(steering->esw_egress_root_ns);
@@ -2438,7 +2438,7 @@ static void cleanup_ingress_acls_root_ns(struct mlx5_core_dev *dev)
if (!steering->esw_ingress_root_ns)
return;
- for (i = 0; i < MLX5_TOTAL_VPORTS(dev); i++)
+ for (i = 0; i < mlx5_eswitch_get_total_vports(dev); i++)
cleanup_root_ns(steering->esw_ingress_root_ns[i]);
kfree(steering->esw_ingress_root_ns);
@@ -2606,16 +2606,18 @@ static int init_ingress_acl_root_ns(struct mlx5_flow_steering *steering, int vpo
static int init_egress_acls_root_ns(struct mlx5_core_dev *dev)
{
struct mlx5_flow_steering *steering = dev->priv.steering;
+ int total_vports = mlx5_eswitch_get_total_vports(dev);
int err;
int i;
- steering->esw_egress_root_ns = kcalloc(MLX5_TOTAL_VPORTS(dev),
- sizeof(*steering->esw_egress_root_ns),
- GFP_KERNEL);
+ steering->esw_egress_root_ns =
+ kcalloc(total_vports,
+ sizeof(*steering->esw_egress_root_ns),
+ GFP_KERNEL);
if (!steering->esw_egress_root_ns)
return -ENOMEM;
- for (i = 0; i < MLX5_TOTAL_VPORTS(dev); i++) {
+ for (i = 0; i < total_vports; i++) {
err = init_egress_acl_root_ns(steering, i);
if (err)
goto cleanup_root_ns;
@@ -2634,16 +2636,18 @@ cleanup_root_ns:
static int init_ingress_acls_root_ns(struct mlx5_core_dev *dev)
{
struct mlx5_flow_steering *steering = dev->priv.steering;
+ int total_vports = mlx5_eswitch_get_total_vports(dev);
int err;
int i;
- steering->esw_ingress_root_ns = kcalloc(MLX5_TOTAL_VPORTS(dev),
- sizeof(*steering->esw_ingress_root_ns),
- GFP_KERNEL);
+ steering->esw_ingress_root_ns =
+ kcalloc(total_vports,
+ sizeof(*steering->esw_ingress_root_ns),
+ GFP_KERNEL);
if (!steering->esw_ingress_root_ns)
return -ENOMEM;
- for (i = 0; i < MLX5_TOTAL_VPORTS(dev); i++) {
+ for (i = 0; i < total_vports; i++) {
err = init_ingress_acl_root_ns(steering, i);
if (err)
goto cleanup_root_ns;
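
The flow tag now travels in a per-spec flow_context rather than in flow_act, and check_conflicting_ftes() compares tags through it: an incoming entry that carries a tag may not reuse an existing entry whose stored tag differs. A tiny stand-alone sketch of that rule, with illustrative types rather than the mlx5 structures:

#include <stdbool.h>
#include <stdio.h>

#define HAS_TAG (1u << 0)

struct flow_ctx {
	unsigned int flags;
	unsigned int flow_tag;
};

/* reject a tagged incoming entry whose tag differs from the one already
 * stored on the matching entry
 */
static bool tag_conflict(const struct flow_ctx *existing,
			 const struct flow_ctx *incoming)
{
	return (incoming->flags & HAS_TAG) &&
	       existing->flow_tag != incoming->flow_tag;
}

int main(void)
{
	struct flow_ctx existing = { HAS_TAG, 5 };
	struct flow_ctx incoming = { HAS_TAG, 9 };

	printf("conflict: %d\n", tag_conflict(&existing, &incoming));
	return 0;
}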
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
index a08c3d09a50f..c48c382f926f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
@@ -170,6 +170,7 @@ struct fs_fte {
u32 val[MLX5_ST_SZ_DW_MATCH_PARAM];
u32 dests_size;
u32 index;
+ struct mlx5_flow_context flow_context;
struct mlx5_flow_act action;
enum fs_fte_status status;
struct mlx5_fc *counter;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c
index c6c28f56aa29..b3762123a69c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c
@@ -102,13 +102,15 @@ static struct list_head *mlx5_fc_counters_lookup_next(struct mlx5_core_dev *dev,
struct mlx5_fc_stats *fc_stats = &dev->priv.fc_stats;
unsigned long next_id = (unsigned long)id + 1;
struct mlx5_fc *counter;
+ unsigned long tmp;
rcu_read_lock();
/* skip counters that are in idr, but not yet in counters list */
- while ((counter = idr_get_next_ul(&fc_stats->counters_idr,
- &next_id)) != NULL &&
- list_empty(&counter->list))
- next_id++;
+ idr_for_each_entry_continue_ul(&fc_stats->counters_idr,
+ counter, tmp, next_id) {
+ if (!list_empty(&counter->list))
+ break;
+ }
rcu_read_unlock();
return counter ? &counter->list : &fc_stats->counters;
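
The rewritten lookup walks counters in id order starting just after the given id and returns the first one that has already been linked into the counters list, skipping entries that exist in the idr but are not yet listed. A self-contained sketch of that logic, with a plain sorted array standing in for the idr:

#include <stdbool.h>
#include <stdio.h>

struct counter {
	unsigned long id;
	bool on_list;   /* already linked into the counters list */
};

static const struct counter *lookup_next(const struct counter *tbl, int n,
					 unsigned long id)
{
	for (int i = 0; i < n; i++) {
		if (tbl[i].id <= id)
			continue;           /* start strictly after 'id' */
		if (tbl[i].on_list)
			return &tbl[i];     /* first fully initialised entry */
	}
	return NULL;                        /* caller falls back to the list head */
}

int main(void)
{
	const struct counter tbl[] = {
		{ 1, true }, { 2, false }, { 4, true },
	};
	const struct counter *c = lookup_next(tbl, 3, 1);

	printf("next listed counter id: %lu\n", c ? c->id : 0UL);
	return 0;
}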
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fw.c b/drivers/net/ethernet/mellanox/mlx5/core/fw.c
index 1ab6f7e3bec6..a19790dee7b2 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fw.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fw.c
@@ -37,6 +37,37 @@
#include "mlx5_core.h"
#include "../../mlxfw/mlxfw.h"
+enum {
+ MCQS_IDENTIFIER_BOOT_IMG = 0x1,
+ MCQS_IDENTIFIER_OEM_NVCONFIG = 0x4,
+ MCQS_IDENTIFIER_MLNX_NVCONFIG = 0x5,
+ MCQS_IDENTIFIER_CS_TOKEN = 0x6,
+ MCQS_IDENTIFIER_DBG_TOKEN = 0x7,
+ MCQS_IDENTIFIER_GEARBOX = 0xA,
+};
+
+enum {
+ MCQS_UPDATE_STATE_IDLE,
+ MCQS_UPDATE_STATE_IN_PROGRESS,
+ MCQS_UPDATE_STATE_APPLIED,
+ MCQS_UPDATE_STATE_ACTIVE,
+ MCQS_UPDATE_STATE_ACTIVE_PENDING_RESET,
+ MCQS_UPDATE_STATE_FAILED,
+ MCQS_UPDATE_STATE_CANCELED,
+ MCQS_UPDATE_STATE_BUSY,
+};
+
+enum {
+ MCQI_INFO_TYPE_CAPABILITIES = 0x0,
+ MCQI_INFO_TYPE_VERSION = 0x1,
+ MCQI_INFO_TYPE_ACTIVATION_METHOD = 0x5,
+};
+
+enum {
+ MCQI_FW_RUNNING_VERSION = 0,
+ MCQI_FW_STORED_VERSION = 1,
+};
+
static int mlx5_cmd_query_adapter(struct mlx5_core_dev *dev, u32 *out,
int outlen)
{
@@ -202,6 +233,18 @@ int mlx5_query_hca_caps(struct mlx5_core_dev *dev)
return err;
}
+ if (MLX5_CAP_GEN(dev, event_cap)) {
+ err = mlx5_core_get_caps(dev, MLX5_CAP_DEV_EVENT);
+ if (err)
+ return err;
+ }
+
+ if (MLX5_CAP_GEN(dev, tls)) {
+ err = mlx5_core_get_caps(dev, MLX5_CAP_TLS);
+ if (err)
+ return err;
+ }
+
return 0;
}
@@ -392,33 +435,49 @@ static int mlx5_reg_mcda_set(struct mlx5_core_dev *dev,
}
static int mlx5_reg_mcqi_query(struct mlx5_core_dev *dev,
- u16 component_index,
- u32 *max_component_size,
- u8 *log_mcda_word_size,
- u16 *mcda_max_write_size)
+ u16 component_index, bool read_pending,
+ u8 info_type, u16 data_size, void *mcqi_data)
{
- u32 out[MLX5_ST_SZ_DW(mcqi_reg) + MLX5_ST_SZ_DW(mcqi_cap)];
- int offset = MLX5_ST_SZ_DW(mcqi_reg);
- u32 in[MLX5_ST_SZ_DW(mcqi_reg)];
+ u32 out[MLX5_ST_SZ_DW(mcqi_reg) + MLX5_UN_SZ_DW(mcqi_reg_data)] = {};
+ u32 in[MLX5_ST_SZ_DW(mcqi_reg)] = {};
+ void *data;
int err;
- memset(in, 0, sizeof(in));
- memset(out, 0, sizeof(out));
-
MLX5_SET(mcqi_reg, in, component_index, component_index);
- MLX5_SET(mcqi_reg, in, data_size, MLX5_ST_SZ_BYTES(mcqi_cap));
+ MLX5_SET(mcqi_reg, in, read_pending_component, read_pending);
+ MLX5_SET(mcqi_reg, in, info_type, info_type);
+ MLX5_SET(mcqi_reg, in, data_size, data_size);
err = mlx5_core_access_reg(dev, in, sizeof(in), out,
- sizeof(out), MLX5_REG_MCQI, 0, 0);
+ MLX5_ST_SZ_BYTES(mcqi_reg) + data_size,
+ MLX5_REG_MCQI, 0, 0);
if (err)
- goto out;
+ return err;
- *max_component_size = MLX5_GET(mcqi_cap, out + offset, max_component_size);
- *log_mcda_word_size = MLX5_GET(mcqi_cap, out + offset, log_mcda_word_size);
- *mcda_max_write_size = MLX5_GET(mcqi_cap, out + offset, mcda_max_write_size);
+ data = MLX5_ADDR_OF(mcqi_reg, out, data);
+ memcpy(mcqi_data, data, data_size);
-out:
- return err;
+ return 0;
+}
+
+static int mlx5_reg_mcqi_caps_query(struct mlx5_core_dev *dev, u16 component_index,
+ u32 *max_component_size, u8 *log_mcda_word_size,
+ u16 *mcda_max_write_size)
+{
+ u32 mcqi_reg[MLX5_ST_SZ_DW(mcqi_cap)] = {};
+ int err;
+
+ err = mlx5_reg_mcqi_query(dev, component_index, 0,
+ MCQI_INFO_TYPE_CAPABILITIES,
+ MLX5_ST_SZ_BYTES(mcqi_cap), mcqi_reg);
+ if (err)
+ return err;
+
+ *max_component_size = MLX5_GET(mcqi_cap, mcqi_reg, max_component_size);
+ *log_mcda_word_size = MLX5_GET(mcqi_cap, mcqi_reg, log_mcda_word_size);
+ *mcda_max_write_size = MLX5_GET(mcqi_cap, mcqi_reg, mcda_max_write_size);
+
+ return 0;
}
struct mlx5_mlxfw_dev {
@@ -434,8 +493,13 @@ static int mlx5_component_query(struct mlxfw_dev *mlxfw_dev,
container_of(mlxfw_dev, struct mlx5_mlxfw_dev, mlxfw_dev);
struct mlx5_core_dev *dev = mlx5_mlxfw_dev->mlx5_core_dev;
- return mlx5_reg_mcqi_query(dev, component_index, p_max_size,
- p_align_bits, p_max_write_size);
+ if (!MLX5_CAP_GEN(dev, mcam_reg) || !MLX5_CAP_MCAM_REG(dev, mcqi)) {
+ mlx5_core_warn(dev, "caps query isn't supported by running FW\n");
+ return -EOPNOTSUPP;
+ }
+
+ return mlx5_reg_mcqi_caps_query(dev, component_index, p_max_size,
+ p_align_bits, p_max_write_size);
}
static int mlx5_fsm_lock(struct mlxfw_dev *mlxfw_dev, u32 *fwhandle)
@@ -552,7 +616,8 @@ static const struct mlxfw_dev_ops mlx5_mlxfw_dev_ops = {
};
int mlx5_firmware_flash(struct mlx5_core_dev *dev,
- const struct firmware *firmware)
+ const struct firmware *firmware,
+ struct netlink_ext_ack *extack)
{
struct mlx5_mlxfw_dev mlx5_mlxfw_dev = {
.mlxfw_dev = {
@@ -571,5 +636,133 @@ int mlx5_firmware_flash(struct mlx5_core_dev *dev,
return -EOPNOTSUPP;
}
- return mlxfw_firmware_flash(&mlx5_mlxfw_dev.mlxfw_dev, firmware);
+ return mlxfw_firmware_flash(&mlx5_mlxfw_dev.mlxfw_dev,
+ firmware, extack);
+}
+
+static int mlx5_reg_mcqi_version_query(struct mlx5_core_dev *dev,
+ u16 component_index, bool read_pending,
+ u32 *mcqi_version_out)
+{
+ return mlx5_reg_mcqi_query(dev, component_index, read_pending,
+ MCQI_INFO_TYPE_VERSION,
+ MLX5_ST_SZ_BYTES(mcqi_version),
+ mcqi_version_out);
+}
+
+static int mlx5_reg_mcqs_query(struct mlx5_core_dev *dev, u32 *out,
+ u16 component_index)
+{
+ u8 out_sz = MLX5_ST_SZ_BYTES(mcqs_reg);
+ u32 in[MLX5_ST_SZ_DW(mcqs_reg)] = {};
+ int err;
+
+ memset(out, 0, out_sz);
+
+ MLX5_SET(mcqs_reg, in, component_index, component_index);
+
+ err = mlx5_core_access_reg(dev, in, sizeof(in), out,
+ out_sz, MLX5_REG_MCQS, 0, 0);
+ return err;
+}
+
+/* scans component indices sequentially to find the boot img index */
+static int mlx5_get_boot_img_component_index(struct mlx5_core_dev *dev)
+{
+ u32 out[MLX5_ST_SZ_DW(mcqs_reg)] = {};
+ u16 identifier, component_idx = 0;
+ bool quit;
+ int err;
+
+ do {
+ err = mlx5_reg_mcqs_query(dev, out, component_idx);
+ if (err)
+ return err;
+
+ identifier = MLX5_GET(mcqs_reg, out, identifier);
+ quit = !!MLX5_GET(mcqs_reg, out, last_index_flag);
+ quit |= identifier == MCQS_IDENTIFIER_BOOT_IMG;
+ } while (!quit && ++component_idx);
+
+ if (identifier != MCQS_IDENTIFIER_BOOT_IMG) {
+ mlx5_core_warn(dev, "mcqs: can't find boot_img component ix, last scanned idx %d\n",
+ component_idx);
+ return -EOPNOTSUPP;
+ }
+
+ return component_idx;
+}
+
+static int
+mlx5_fw_image_pending(struct mlx5_core_dev *dev,
+ int component_index,
+ bool *pending_version_exists)
+{
+ u32 out[MLX5_ST_SZ_DW(mcqs_reg)];
+ u8 component_update_state;
+ int err;
+
+ err = mlx5_reg_mcqs_query(dev, out, component_index);
+ if (err)
+ return err;
+
+ component_update_state = MLX5_GET(mcqs_reg, out, component_update_state);
+
+ if (component_update_state == MCQS_UPDATE_STATE_IDLE) {
+ *pending_version_exists = false;
+ } else if (component_update_state == MCQS_UPDATE_STATE_ACTIVE_PENDING_RESET) {
+ *pending_version_exists = true;
+ } else {
+ mlx5_core_warn(dev,
+ "mcqs: can't read pending fw version while fw state is %d\n",
+ component_update_state);
+ return -ENODATA;
+ }
+ return 0;
+}
+
+int mlx5_fw_version_query(struct mlx5_core_dev *dev,
+ u32 *running_ver, u32 *pending_ver)
+{
+ u32 reg_mcqi_version[MLX5_ST_SZ_DW(mcqi_version)] = {};
+ bool pending_version_exists;
+ int component_index;
+ int err;
+
+ if (!MLX5_CAP_GEN(dev, mcam_reg) || !MLX5_CAP_MCAM_REG(dev, mcqi) ||
+ !MLX5_CAP_MCAM_REG(dev, mcqs)) {
+ mlx5_core_warn(dev, "fw query isn't supported by the FW\n");
+ return -EOPNOTSUPP;
+ }
+
+ component_index = mlx5_get_boot_img_component_index(dev);
+ if (component_index < 0)
+ return component_index;
+
+ err = mlx5_reg_mcqi_version_query(dev, component_index,
+ MCQI_FW_RUNNING_VERSION,
+ reg_mcqi_version);
+ if (err)
+ return err;
+
+ *running_ver = MLX5_GET(mcqi_version, reg_mcqi_version, version);
+
+ err = mlx5_fw_image_pending(dev, component_index, &pending_version_exists);
+ if (err)
+ return err;
+
+ if (!pending_version_exists) {
+ *pending_ver = 0;
+ return 0;
+ }
+
+ err = mlx5_reg_mcqi_version_query(dev, component_index,
+ MCQI_FW_STORED_VERSION,
+ reg_mcqi_version);
+ if (err)
+ return err;
+
+ *pending_ver = MLX5_GET(mcqi_version, reg_mcqi_version, version);
+
+ return 0;
}
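
The new version-query path scans MCQS components in order until it finds the boot image, reads the running version through MCQI, and reads a stored (pending) version only when the component's update state says one is waiting for activation. A simplified, mocked-up walk-through of that flow (not device code; all values are invented):

#include <stdio.h>

enum { ID_BOOT_IMG = 0x1, ID_OTHER = 0x5 };
enum { STATE_IDLE, STATE_ACTIVE_PENDING_RESET };

struct component {
	int identifier;
	int update_state;
	unsigned int running_ver;
	unsigned int stored_ver;
	int last;                    /* stands in for last_index_flag */
};

static int boot_img_index(const struct component *c)
{
	for (int i = 0; ; i++) {
		if (c[i].identifier == ID_BOOT_IMG)
			return i;
		if (c[i].last)
			return -1;   /* scanned everything, no boot image */
	}
}

int main(void)
{
	const struct component comps[] = {
		{ ID_OTHER,    STATE_IDLE,                 0, 0, 0 },
		{ ID_BOOT_IMG, STATE_ACTIVE_PENDING_RESET, 0x1001, 0x1002, 1 },
	};
	int idx = boot_img_index(comps);

	if (idx < 0)
		return 1;
	printf("running fw: %#x\n", comps[idx].running_ver);
	if (comps[idx].update_state == STATE_ACTIVE_PENDING_RESET)
		printf("pending fw: %#x\n", comps[idx].stored_ver);
	return 0;
}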
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/health.c b/drivers/net/ethernet/mellanox/mlx5/core/health.c
index a2656f4008d9..2fe6923f7ce0 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/health.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/health.c
@@ -40,6 +40,8 @@
#include "mlx5_core.h"
#include "lib/eq.h"
#include "lib/mlx5.h"
+#include "lib/pci_vsc.h"
+#include "diag/fw_tracer.h"
enum {
MLX5_HEALTH_POLL_INTERVAL = 2 * HZ,
@@ -62,12 +64,20 @@ enum {
enum {
MLX5_DROP_NEW_HEALTH_WORK,
- MLX5_DROP_NEW_RECOVERY_WORK,
+};
+
+enum {
+ MLX5_SENSOR_NO_ERR = 0,
+ MLX5_SENSOR_PCI_COMM_ERR = 1,
+ MLX5_SENSOR_PCI_ERR = 2,
+ MLX5_SENSOR_NIC_DISABLED = 3,
+ MLX5_SENSOR_NIC_SW_RESET = 4,
+ MLX5_SENSOR_FW_SYND_RFR = 5,
};
u8 mlx5_get_nic_state(struct mlx5_core_dev *dev)
{
- return (ioread32be(&dev->iseg->cmdq_addr_l_sz) >> 8) & 3;
+ return (ioread32be(&dev->iseg->cmdq_addr_l_sz) >> 8) & 7;
}
void mlx5_set_nic_state(struct mlx5_core_dev *dev, u8 state)
@@ -80,18 +90,105 @@ void mlx5_set_nic_state(struct mlx5_core_dev *dev, u8 state)
&dev->iseg->cmdq_addr_l_sz);
}
-static int in_fatal(struct mlx5_core_dev *dev)
+static bool sensor_pci_not_working(struct mlx5_core_dev *dev)
{
struct mlx5_core_health *health = &dev->priv.health;
struct health_buffer __iomem *h = health->health;
+ /* Offline PCI reads return 0xffffffff */
+ return (ioread32be(&h->fw_ver) == 0xffffffff);
+}
+
+static bool sensor_fw_synd_rfr(struct mlx5_core_dev *dev)
+{
+ struct mlx5_core_health *health = &dev->priv.health;
+ struct health_buffer __iomem *h = health->health;
+ u32 rfr = ioread32be(&h->rfr) >> MLX5_RFR_OFFSET;
+ u8 synd = ioread8(&h->synd);
+
+ if (rfr && synd)
+ mlx5_core_dbg(dev, "FW requests reset, synd: %d\n", synd);
+ return rfr && synd;
+}
+
+static u32 check_fatal_sensors(struct mlx5_core_dev *dev)
+{
+ if (sensor_pci_not_working(dev))
+ return MLX5_SENSOR_PCI_COMM_ERR;
+ if (pci_channel_offline(dev->pdev))
+ return MLX5_SENSOR_PCI_ERR;
if (mlx5_get_nic_state(dev) == MLX5_NIC_IFC_DISABLED)
- return 1;
+ return MLX5_SENSOR_NIC_DISABLED;
+ if (mlx5_get_nic_state(dev) == MLX5_NIC_IFC_SW_RESET)
+ return MLX5_SENSOR_NIC_SW_RESET;
+ if (sensor_fw_synd_rfr(dev))
+ return MLX5_SENSOR_FW_SYND_RFR;
- if (ioread32be(&h->fw_ver) == 0xffffffff)
- return 1;
+ return MLX5_SENSOR_NO_ERR;
+}
- return 0;
+static int lock_sem_sw_reset(struct mlx5_core_dev *dev, bool lock)
+{
+ enum mlx5_vsc_state state;
+ int ret;
+
+ if (!mlx5_core_is_pf(dev))
+ return -EBUSY;
+
+ /* Try to lock GW access; this stage doesn't return
+ * EBUSY, because a locked GW does not mean that another PF
+ * has already started the reset.
+ */
+ ret = mlx5_vsc_gw_lock(dev);
+ if (ret == -EBUSY)
+ return -EINVAL;
+ if (ret)
+ return ret;
+
+ state = lock ? MLX5_VSC_LOCK : MLX5_VSC_UNLOCK;
+ /* At this stage, if the return status == EBUSY, then we know
+ * for sure that another PF started the reset, so don't allow
+ * another reset.
+ */
+ ret = mlx5_vsc_sem_set_space(dev, MLX5_SEMAPHORE_SW_RESET, state);
+ if (ret)
+ mlx5_core_warn(dev, "Failed to lock SW reset semaphore\n");
+
+ /* Unlock GW access */
+ mlx5_vsc_gw_unlock(dev);
+
+ return ret;
+}
+
+static bool reset_fw_if_needed(struct mlx5_core_dev *dev)
+{
+ bool supported = (ioread32be(&dev->iseg->initializing) >>
+ MLX5_FW_RESET_SUPPORTED_OFFSET) & 1;
+ u32 fatal_error;
+
+ if (!supported)
+ return false;
+
+ /* The reset only needs to be issued by one PF. The health buffer is
+ * shared between all functions, and will be cleared during a reset.
+ * Check again to avoid a redundant 2nd reset. If the fatal error was
+ * PCI related, a reset won't help.
+ */
+ fatal_error = check_fatal_sensors(dev);
+ if (fatal_error == MLX5_SENSOR_PCI_COMM_ERR ||
+ fatal_error == MLX5_SENSOR_NIC_DISABLED ||
+ fatal_error == MLX5_SENSOR_NIC_SW_RESET) {
+ mlx5_core_warn(dev, "Not issuing FW reset. Either it's already done or won't help.");
+ return false;
+ }
+
+ mlx5_core_warn(dev, "Issuing FW Reset\n");
+ /* Write the NIC interface field to initiate the reset; the command
+ * interface address also resides here, so don't overwrite it.
+ */
+ mlx5_set_nic_state(dev, MLX5_NIC_IFC_SW_RESET);
+
+ return true;
}
void mlx5_enter_error_state(struct mlx5_core_dev *dev, bool force)
@@ -99,14 +196,65 @@ void mlx5_enter_error_state(struct mlx5_core_dev *dev, bool force)
mutex_lock(&dev->intf_state_mutex);
if (dev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR)
goto unlock;
+ if (dev->state == MLX5_DEVICE_STATE_UNINITIALIZED) {
+ dev->state = MLX5_DEVICE_STATE_INTERNAL_ERROR;
+ goto unlock;
+ }
- mlx5_core_err(dev, "start\n");
- if (pci_channel_offline(dev->pdev) || in_fatal(dev) || force) {
+ if (check_fatal_sensors(dev) || force) {
dev->state = MLX5_DEVICE_STATE_INTERNAL_ERROR;
mlx5_cmd_flush(dev);
}
mlx5_notifier_call_chain(dev->priv.events, MLX5_DEV_EVENT_SYS_ERROR, (void *)1);
+unlock:
+ mutex_unlock(&dev->intf_state_mutex);
+}
+
+#define MLX5_CRDUMP_WAIT_MS 60000
+#define MLX5_FW_RESET_WAIT_MS 1000
+void mlx5_error_sw_reset(struct mlx5_core_dev *dev)
+{
+ unsigned long end, delay_ms = MLX5_FW_RESET_WAIT_MS;
+ int lock = -EBUSY;
+
+ mutex_lock(&dev->intf_state_mutex);
+ if (dev->state != MLX5_DEVICE_STATE_INTERNAL_ERROR)
+ goto unlock;
+
+ mlx5_core_err(dev, "start\n");
+
+ if (check_fatal_sensors(dev) == MLX5_SENSOR_FW_SYND_RFR) {
+ /* Get cr-dump and reset FW semaphore */
+ lock = lock_sem_sw_reset(dev, true);
+
+ if (lock == -EBUSY) {
+ delay_ms = MLX5_CRDUMP_WAIT_MS;
+ goto recover_from_sw_reset;
+ }
+ /* Execute SW reset */
+ reset_fw_if_needed(dev);
+ }
+
+recover_from_sw_reset:
+ /* Recover from SW reset */
+ end = jiffies + msecs_to_jiffies(delay_ms);
+ do {
+ if (mlx5_get_nic_state(dev) == MLX5_NIC_IFC_DISABLED)
+ break;
+
+ cond_resched();
+ } while (!time_after(jiffies, end));
+
+ if (mlx5_get_nic_state(dev) != MLX5_NIC_IFC_DISABLED) {
+ dev_err(&dev->pdev->dev, "NIC IFC still %d after %lums.\n",
+ mlx5_get_nic_state(dev), delay_ms);
+ }
+
+ /* Release FW semaphore if you are the lock owner */
+ if (!lock)
+ lock_sem_sw_reset(dev, false);
+
mlx5_core_err(dev, "end\n");
unlock:
@@ -129,6 +277,20 @@ static void mlx5_handle_bad_state(struct mlx5_core_dev *dev)
case MLX5_NIC_IFC_NO_DRAM_NIC:
mlx5_core_warn(dev, "Expected to see disabled NIC but it is no dram nic\n");
break;
+
+ case MLX5_NIC_IFC_SW_RESET:
+ /* The IFC mode field is 3 bits, so it will read 0x7 in 2 cases:
+ * 1. PCI has been disabled (i.e. PCI-AER, PF driver unloaded
+ * and this is a VF); this is not recoverable by SW reset.
+ * Logging of this is handled elsewhere.
+ * 2. FW reset has been issued by another function, driver can
+ * be reloaded to recover after the mode switches to
+ * MLX5_NIC_IFC_DISABLED.
+ */
+ if (dev->priv.health.fatal_error != MLX5_SENSOR_PCI_COMM_ERR)
+ mlx5_core_warn(dev, "NIC SW reset in progress\n");
+ break;
+
default:
mlx5_core_warn(dev, "Expected to see disabled NIC but it is has invalid value %d\n",
nic_interface);
@@ -137,52 +299,32 @@ static void mlx5_handle_bad_state(struct mlx5_core_dev *dev)
mlx5_disable_device(dev);
}
-static void health_recover(struct work_struct *work)
-{
- struct mlx5_core_health *health;
- struct delayed_work *dwork;
- struct mlx5_core_dev *dev;
- struct mlx5_priv *priv;
- u8 nic_state;
-
- dwork = container_of(work, struct delayed_work, work);
- health = container_of(dwork, struct mlx5_core_health, recover_work);
- priv = container_of(health, struct mlx5_priv, health);
- dev = container_of(priv, struct mlx5_core_dev, priv);
-
- nic_state = mlx5_get_nic_state(dev);
- if (nic_state == MLX5_NIC_IFC_INVALID) {
- mlx5_core_err(dev, "health recovery flow aborted since the nic state is invalid\n");
- return;
- }
-
- mlx5_core_err(dev, "starting health recovery flow\n");
- mlx5_recover_device(dev);
-}
-
/* How much time to wait until health resetting the driver (in msecs) */
-#define MLX5_RECOVERY_DELAY_MSECS 60000
-static void health_care(struct work_struct *work)
+#define MLX5_RECOVERY_WAIT_MSECS 60000
+static int mlx5_health_try_recover(struct mlx5_core_dev *dev)
{
- unsigned long recover_delay = msecs_to_jiffies(MLX5_RECOVERY_DELAY_MSECS);
- struct mlx5_core_health *health;
- struct mlx5_core_dev *dev;
- struct mlx5_priv *priv;
- unsigned long flags;
+ unsigned long end;
- health = container_of(work, struct mlx5_core_health, work);
- priv = container_of(health, struct mlx5_priv, health);
- dev = container_of(priv, struct mlx5_core_dev, priv);
mlx5_core_warn(dev, "handling bad device here\n");
mlx5_handle_bad_state(dev);
+ end = jiffies + msecs_to_jiffies(MLX5_RECOVERY_WAIT_MSECS);
+ while (sensor_pci_not_working(dev)) {
+ if (time_after(jiffies, end)) {
+ mlx5_core_err(dev,
+ "health recovery flow aborted, PCI reads still not working\n");
+ return -EIO;
+ }
+ msleep(100);
+ }
- spin_lock_irqsave(&health->wq_lock, flags);
- if (!test_bit(MLX5_DROP_NEW_RECOVERY_WORK, &health->flags))
- schedule_delayed_work(&health->recover_work, recover_delay);
- else
- mlx5_core_err(dev,
- "new health works are not permitted at this stage\n");
- spin_unlock_irqrestore(&health->wq_lock, flags);
+ mlx5_core_err(dev, "starting health recovery flow\n");
+ mlx5_recover_device(dev);
+ if (!test_bit(MLX5_INTERFACE_STATE_UP, &dev->intf_state) ||
+ check_fatal_sensors(dev)) {
+ mlx5_core_err(dev, "health recovery failed\n");
+ return -EIO;
+ }
+ return 0;
}
static const char *hsynd_str(u8 synd)
@@ -246,6 +388,282 @@ static void print_health_info(struct mlx5_core_dev *dev)
mlx5_core_err(dev, "raw fw_ver 0x%08x\n", fw);
}
+static int
+mlx5_fw_reporter_diagnose(struct devlink_health_reporter *reporter,
+ struct devlink_fmsg *fmsg)
+{
+ struct mlx5_core_dev *dev = devlink_health_reporter_priv(reporter);
+ struct mlx5_core_health *health = &dev->priv.health;
+ struct health_buffer __iomem *h = health->health;
+ u8 synd;
+ int err;
+
+ synd = ioread8(&h->synd);
+ err = devlink_fmsg_u8_pair_put(fmsg, "Syndrome", synd);
+ if (err || !synd)
+ return err;
+ return devlink_fmsg_string_pair_put(fmsg, "Description", hsynd_str(synd));
+}
+
+struct mlx5_fw_reporter_ctx {
+ u8 err_synd;
+ int miss_counter;
+};
+
+static int
+mlx5_fw_reporter_ctx_pairs_put(struct devlink_fmsg *fmsg,
+ struct mlx5_fw_reporter_ctx *fw_reporter_ctx)
+{
+ int err;
+
+ err = devlink_fmsg_u8_pair_put(fmsg, "syndrome",
+ fw_reporter_ctx->err_synd);
+ if (err)
+ return err;
+ err = devlink_fmsg_u32_pair_put(fmsg, "fw_miss_counter",
+ fw_reporter_ctx->miss_counter);
+ if (err)
+ return err;
+ return 0;
+}
+
+static int
+mlx5_fw_reporter_heath_buffer_data_put(struct mlx5_core_dev *dev,
+ struct devlink_fmsg *fmsg)
+{
+ struct mlx5_core_health *health = &dev->priv.health;
+ struct health_buffer __iomem *h = health->health;
+ int err;
+ int i;
+
+ if (!ioread8(&h->synd))
+ return 0;
+
+ err = devlink_fmsg_pair_nest_start(fmsg, "health buffer");
+ if (err)
+ return err;
+ err = devlink_fmsg_obj_nest_start(fmsg);
+ if (err)
+ return err;
+ err = devlink_fmsg_arr_pair_nest_start(fmsg, "assert_var");
+ if (err)
+ return err;
+
+ for (i = 0; i < ARRAY_SIZE(h->assert_var); i++) {
+ err = devlink_fmsg_u32_put(fmsg, ioread32be(h->assert_var + i));
+ if (err)
+ return err;
+ }
+ err = devlink_fmsg_arr_pair_nest_end(fmsg);
+ if (err)
+ return err;
+ err = devlink_fmsg_u32_pair_put(fmsg, "assert_exit_ptr",
+ ioread32be(&h->assert_exit_ptr));
+ if (err)
+ return err;
+ err = devlink_fmsg_u32_pair_put(fmsg, "assert_callra",
+ ioread32be(&h->assert_callra));
+ if (err)
+ return err;
+ err = devlink_fmsg_u32_pair_put(fmsg, "hw_id", ioread32be(&h->hw_id));
+ if (err)
+ return err;
+ err = devlink_fmsg_u8_pair_put(fmsg, "irisc_index",
+ ioread8(&h->irisc_index));
+ if (err)
+ return err;
+ err = devlink_fmsg_u8_pair_put(fmsg, "synd", ioread8(&h->synd));
+ if (err)
+ return err;
+ err = devlink_fmsg_u32_pair_put(fmsg, "ext_synd",
+ ioread16be(&h->ext_synd));
+ if (err)
+ return err;
+ err = devlink_fmsg_u32_pair_put(fmsg, "raw_fw_ver",
+ ioread32be(&h->fw_ver));
+ if (err)
+ return err;
+ err = devlink_fmsg_obj_nest_end(fmsg);
+ if (err)
+ return err;
+ return devlink_fmsg_pair_nest_end(fmsg);
+}
+
+static int
+mlx5_fw_reporter_dump(struct devlink_health_reporter *reporter,
+ struct devlink_fmsg *fmsg, void *priv_ctx)
+{
+ struct mlx5_core_dev *dev = devlink_health_reporter_priv(reporter);
+ int err;
+
+ err = mlx5_fw_tracer_trigger_core_dump_general(dev);
+ if (err)
+ return err;
+
+ if (priv_ctx) {
+ struct mlx5_fw_reporter_ctx *fw_reporter_ctx = priv_ctx;
+
+ err = mlx5_fw_reporter_ctx_pairs_put(fmsg, fw_reporter_ctx);
+ if (err)
+ return err;
+ }
+
+ err = mlx5_fw_reporter_heath_buffer_data_put(dev, fmsg);
+ if (err)
+ return err;
+ return mlx5_fw_tracer_get_saved_traces_objects(dev->tracer, fmsg);
+}
+
+static void mlx5_fw_reporter_err_work(struct work_struct *work)
+{
+ struct mlx5_fw_reporter_ctx fw_reporter_ctx;
+ struct mlx5_core_health *health;
+
+ health = container_of(work, struct mlx5_core_health, report_work);
+
+ if (IS_ERR_OR_NULL(health->fw_reporter))
+ return;
+
+ fw_reporter_ctx.err_synd = health->synd;
+ fw_reporter_ctx.miss_counter = health->miss_counter;
+ if (fw_reporter_ctx.err_synd) {
+ devlink_health_report(health->fw_reporter,
+ "FW syndrom reported", &fw_reporter_ctx);
+ return;
+ }
+ if (fw_reporter_ctx.miss_counter)
+ devlink_health_report(health->fw_reporter,
+ "FW miss counter reported",
+ &fw_reporter_ctx);
+}
+
+static const struct devlink_health_reporter_ops mlx5_fw_reporter_ops = {
+ .name = "fw",
+ .diagnose = mlx5_fw_reporter_diagnose,
+ .dump = mlx5_fw_reporter_dump,
+};
+
+static int
+mlx5_fw_fatal_reporter_recover(struct devlink_health_reporter *reporter,
+ void *priv_ctx)
+{
+ struct mlx5_core_dev *dev = devlink_health_reporter_priv(reporter);
+
+ return mlx5_health_try_recover(dev);
+}
+
+#define MLX5_CR_DUMP_CHUNK_SIZE 256
+static int
+mlx5_fw_fatal_reporter_dump(struct devlink_health_reporter *reporter,
+ struct devlink_fmsg *fmsg, void *priv_ctx)
+{
+ struct mlx5_core_dev *dev = devlink_health_reporter_priv(reporter);
+ u32 crdump_size = dev->priv.health.crdump_size;
+ u32 *cr_data;
+ u32 data_size;
+ u32 offset;
+ int err;
+
+ if (!mlx5_core_is_pf(dev))
+ return -EPERM;
+
+ cr_data = kvmalloc(crdump_size, GFP_KERNEL);
+ if (!cr_data)
+ return -ENOMEM;
+ err = mlx5_crdump_collect(dev, cr_data);
+ if (err)
+ return err;
+
+ if (priv_ctx) {
+ struct mlx5_fw_reporter_ctx *fw_reporter_ctx = priv_ctx;
+
+ err = mlx5_fw_reporter_ctx_pairs_put(fmsg, fw_reporter_ctx);
+ if (err)
+ goto free_data;
+ }
+
+ err = devlink_fmsg_arr_pair_nest_start(fmsg, "crdump_data");
+ if (err)
+ goto free_data;
+ for (offset = 0; offset < crdump_size; offset += data_size) {
+ if (crdump_size - offset < MLX5_CR_DUMP_CHUNK_SIZE)
+ data_size = crdump_size - offset;
+ else
+ data_size = MLX5_CR_DUMP_CHUNK_SIZE;
+ err = devlink_fmsg_binary_put(fmsg, cr_data, data_size);
+ if (err)
+ goto free_data;
+ }
+ err = devlink_fmsg_arr_pair_nest_end(fmsg);
+
+free_data:
+ kfree(cr_data);
+ return err;
+}
+
+static void mlx5_fw_fatal_reporter_err_work(struct work_struct *work)
+{
+ struct mlx5_fw_reporter_ctx fw_reporter_ctx;
+ struct mlx5_core_health *health;
+ struct mlx5_core_dev *dev;
+ struct mlx5_priv *priv;
+
+ health = container_of(work, struct mlx5_core_health, fatal_report_work);
+ priv = container_of(health, struct mlx5_priv, health);
+ dev = container_of(priv, struct mlx5_core_dev, priv);
+
+ mlx5_enter_error_state(dev, false);
+ if (IS_ERR_OR_NULL(health->fw_fatal_reporter)) {
+ if (mlx5_health_try_recover(dev))
+ mlx5_core_err(dev, "health recovery failed\n");
+ return;
+ }
+ fw_reporter_ctx.err_synd = health->synd;
+ fw_reporter_ctx.miss_counter = health->miss_counter;
+ devlink_health_report(health->fw_fatal_reporter,
+ "FW fatal error reported", &fw_reporter_ctx);
+}
+
+static const struct devlink_health_reporter_ops mlx5_fw_fatal_reporter_ops = {
+ .name = "fw_fatal",
+ .recover = mlx5_fw_fatal_reporter_recover,
+ .dump = mlx5_fw_fatal_reporter_dump,
+};
+
+#define MLX5_REPORTER_FW_GRACEFUL_PERIOD 1200000
+static void mlx5_fw_reporters_create(struct mlx5_core_dev *dev)
+{
+ struct mlx5_core_health *health = &dev->priv.health;
+ struct devlink *devlink = priv_to_devlink(dev);
+
+ health->fw_reporter =
+ devlink_health_reporter_create(devlink, &mlx5_fw_reporter_ops,
+ 0, false, dev);
+ if (IS_ERR(health->fw_reporter))
+ mlx5_core_warn(dev, "Failed to create fw reporter, err = %ld\n",
+ PTR_ERR(health->fw_reporter));
+
+ health->fw_fatal_reporter =
+ devlink_health_reporter_create(devlink,
+ &mlx5_fw_fatal_reporter_ops,
+ MLX5_REPORTER_FW_GRACEFUL_PERIOD,
+ true, dev);
+ if (IS_ERR(health->fw_fatal_reporter))
+ mlx5_core_warn(dev, "Failed to create fw fatal reporter, err = %ld\n",
+ PTR_ERR(health->fw_fatal_reporter));
+}
+
+static void mlx5_fw_reporters_destroy(struct mlx5_core_dev *dev)
+{
+ struct mlx5_core_health *health = &dev->priv.health;
+
+ if (!IS_ERR_OR_NULL(health->fw_reporter))
+ devlink_health_reporter_destroy(health->fw_reporter);
+
+ if (!IS_ERR_OR_NULL(health->fw_fatal_reporter))
+ devlink_health_reporter_destroy(health->fw_fatal_reporter);
+}
+
static unsigned long get_next_poll_jiffies(void)
{
unsigned long next;
@@ -264,7 +682,7 @@ void mlx5_trigger_health_work(struct mlx5_core_dev *dev)
spin_lock_irqsave(&health->wq_lock, flags);
if (!test_bit(MLX5_DROP_NEW_HEALTH_WORK, &health->flags))
- queue_work(health->wq, &health->work);
+ queue_work(health->wq, &health->fatal_report_work);
else
mlx5_core_err(dev, "new health works are not permitted at this stage\n");
spin_unlock_irqrestore(&health->wq_lock, flags);
@@ -274,6 +692,9 @@ static void poll_health(struct timer_list *t)
{
struct mlx5_core_dev *dev = from_timer(dev, t, priv.health.timer);
struct mlx5_core_health *health = &dev->priv.health;
+ struct health_buffer __iomem *h = health->health;
+ u32 fatal_error;
+ u8 prev_synd;
u32 count;
if (dev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR)
@@ -289,10 +710,19 @@ static void poll_health(struct timer_list *t)
if (health->miss_counter == MAX_MISSES) {
mlx5_core_err(dev, "device's health compromised - reached miss count\n");
print_health_info(dev);
+ queue_work(health->wq, &health->report_work);
}
- if (in_fatal(dev) && !health->sick) {
- health->sick = true;
+ prev_synd = health->synd;
+ health->synd = ioread8(&h->synd);
+ if (health->synd && health->synd != prev_synd)
+ queue_work(health->wq, &health->report_work);
+
+ fatal_error = check_fatal_sensors(dev);
+
+ if (fatal_error && !health->fatal_error) {
+ mlx5_core_err(dev, "Fatal error %u detected\n", fatal_error);
+ dev->priv.health.fatal_error = fatal_error;
print_health_info(dev);
mlx5_trigger_health_work(dev);
}
@@ -306,9 +736,8 @@ void mlx5_start_health_poll(struct mlx5_core_dev *dev)
struct mlx5_core_health *health = &dev->priv.health;
timer_setup(&health->timer, poll_health, 0);
- health->sick = 0;
+ health->fatal_error = MLX5_SENSOR_NO_ERR;
clear_bit(MLX5_DROP_NEW_HEALTH_WORK, &health->flags);
- clear_bit(MLX5_DROP_NEW_RECOVERY_WORK, &health->flags);
health->health = &dev->iseg->health;
health->health_counter = &dev->iseg->health_counter;
@@ -324,7 +753,6 @@ void mlx5_stop_health_poll(struct mlx5_core_dev *dev, bool disable_health)
if (disable_health) {
spin_lock_irqsave(&health->wq_lock, flags);
set_bit(MLX5_DROP_NEW_HEALTH_WORK, &health->flags);
- set_bit(MLX5_DROP_NEW_RECOVERY_WORK, &health->flags);
spin_unlock_irqrestore(&health->wq_lock, flags);
}
@@ -338,21 +766,9 @@ void mlx5_drain_health_wq(struct mlx5_core_dev *dev)
spin_lock_irqsave(&health->wq_lock, flags);
set_bit(MLX5_DROP_NEW_HEALTH_WORK, &health->flags);
- set_bit(MLX5_DROP_NEW_RECOVERY_WORK, &health->flags);
spin_unlock_irqrestore(&health->wq_lock, flags);
- cancel_delayed_work_sync(&health->recover_work);
- cancel_work_sync(&health->work);
-}
-
-void mlx5_drain_health_recovery(struct mlx5_core_dev *dev)
-{
- struct mlx5_core_health *health = &dev->priv.health;
- unsigned long flags;
-
- spin_lock_irqsave(&health->wq_lock, flags);
- set_bit(MLX5_DROP_NEW_RECOVERY_WORK, &health->flags);
- spin_unlock_irqrestore(&health->wq_lock, flags);
- cancel_delayed_work_sync(&dev->priv.health.recover_work);
+ cancel_work_sync(&health->report_work);
+ cancel_work_sync(&health->fatal_report_work);
}
void mlx5_health_flush(struct mlx5_core_dev *dev)
@@ -367,6 +783,7 @@ void mlx5_health_cleanup(struct mlx5_core_dev *dev)
struct mlx5_core_health *health = &dev->priv.health;
destroy_workqueue(health->wq);
+ mlx5_fw_reporters_destroy(dev);
}
int mlx5_health_init(struct mlx5_core_dev *dev)
@@ -374,20 +791,26 @@ int mlx5_health_init(struct mlx5_core_dev *dev)
struct mlx5_core_health *health;
char *name;
+ mlx5_fw_reporters_create(dev);
+
health = &dev->priv.health;
name = kmalloc(64, GFP_KERNEL);
if (!name)
- return -ENOMEM;
+ goto out_err;
strcpy(name, "mlx5_health");
strcat(name, dev_name(dev->device));
health->wq = create_singlethread_workqueue(name);
kfree(name);
if (!health->wq)
- return -ENOMEM;
+ goto out_err;
spin_lock_init(&health->wq_lock);
- INIT_WORK(&health->work, health_care);
- INIT_DELAYED_WORK(&health->recover_work, health_recover);
+ INIT_WORK(&health->fatal_report_work, mlx5_fw_fatal_reporter_err_work);
+ INIT_WORK(&health->report_work, mlx5_fw_reporter_err_work);
return 0;
+
+out_err:
+ mlx5_fw_reporters_destroy(dev);
+ return -ENOMEM;
}
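
check_fatal_sensors() above returns the first matching sensor in a fixed severity order; the PCI check comes first because the later NIC-state and FW-syndrome checks read the device over that same link. A minimal sketch of the "first matching sensor wins" pattern, with stand-in types and predicates rather than the real device reads:

#include <stdbool.h>
#include <stdio.h>

enum sensor {
	SENSOR_NO_ERR,
	SENSOR_PCI_COMM_ERR,
	SENSOR_NIC_DISABLED,
	SENSOR_FW_SYND_RFR,
};

struct dev_state {
	bool pci_dead;       /* reads return all-ones */
	bool nic_disabled;
	bool fw_syndrome;    /* FW requested reset */
};

static enum sensor check_fatal_sensors(const struct dev_state *d)
{
	if (d->pci_dead)
		return SENSOR_PCI_COMM_ERR;
	if (d->nic_disabled)
		return SENSOR_NIC_DISABLED;
	if (d->fw_syndrome)
		return SENSOR_FW_SYND_RFR;
	return SENSOR_NO_ERR;
}

int main(void)
{
	struct dev_state d = { .pci_dead = false, .nic_disabled = false,
			       .fw_syndrome = true };

	printf("fatal sensor: %d\n", check_fatal_sensors(&d));
	return 0;
}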
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ethtool.c
index 90cb50fe17fd..ebd81f6b556e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ethtool.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ethtool.c
@@ -122,14 +122,6 @@ static int mlx5i_get_ts_info(struct net_device *netdev,
return mlx5e_ethtool_get_ts_info(priv, info);
}
-static int mlx5i_flash_device(struct net_device *netdev,
- struct ethtool_flash *flash)
-{
- struct mlx5e_priv *priv = mlx5i_epriv(netdev);
-
- return mlx5e_ethtool_flash_device(priv, flash);
-}
-
enum mlx5_ptys_width {
MLX5_PTYS_WIDTH_1X = 1 << 0,
MLX5_PTYS_WIDTH_2X = 1 << 1,
@@ -241,7 +233,6 @@ const struct ethtool_ops mlx5i_ethtool_ops = {
.get_ethtool_stats = mlx5i_get_ethtool_stats,
.get_ringparam = mlx5i_get_ringparam,
.set_ringparam = mlx5i_set_ringparam,
- .flash_device = mlx5i_flash_device,
.get_channels = mlx5i_get_channels,
.set_channels = mlx5i_set_channels,
.get_coalesce = mlx5i_get_coalesce,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c
index 9ca492b430d8..faf197d53743 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c
@@ -87,7 +87,7 @@ int mlx5i_init(struct mlx5_core_dev *mdev,
mlx5e_set_netdev_mtu_boundaries(priv);
netdev->mtu = netdev->max_mtu;
- mlx5e_build_nic_params(mdev, &priv->rss_params, &priv->channels.params,
+ mlx5e_build_nic_params(mdev, NULL, &priv->rss_params, &priv->channels.params,
mlx5e_get_netdev_max_channels(netdev),
netdev->mtu);
mlx5i_build_nic_params(mdev, &priv->channels.params);
@@ -258,6 +258,18 @@ void mlx5i_destroy_underlay_qp(struct mlx5_core_dev *mdev, struct mlx5_core_qp *
mlx5_core_destroy_qp(mdev, qp);
}
+int mlx5i_create_tis(struct mlx5_core_dev *mdev, u32 underlay_qpn, u32 *tisn)
+{
+ u32 in[MLX5_ST_SZ_DW(create_tis_in)] = {};
+ void *tisc;
+
+ tisc = MLX5_ADDR_OF(create_tis_in, in, ctx);
+
+ MLX5_SET(tisc, tisc, underlay_qpn, underlay_qpn);
+
+ return mlx5e_create_tis(mdev, in, tisn);
+}
+
static int mlx5i_init_tx(struct mlx5e_priv *priv)
{
struct mlx5i_priv *ipriv = priv->ppriv;
@@ -269,7 +281,7 @@ static int mlx5i_init_tx(struct mlx5e_priv *priv)
return err;
}
- err = mlx5e_create_tis(priv->mdev, 0 /* tc */, ipriv->qp.qpn, &priv->tisn[0]);
+ err = mlx5i_create_tis(priv->mdev, ipriv->qp.qpn, &priv->tisn[0]);
if (err) {
mlx5_core_warn(priv->mdev, "create tis failed, %d\n", err);
goto err_destroy_underlay_qp;
@@ -365,7 +377,7 @@ static int mlx5i_init_rx(struct mlx5e_priv *priv)
if (err)
goto err_close_drop_rq;
- err = mlx5e_create_direct_rqts(priv);
+ err = mlx5e_create_direct_rqts(priv, priv->direct_tir);
if (err)
goto err_destroy_indirect_rqts;
@@ -373,7 +385,7 @@ static int mlx5i_init_rx(struct mlx5e_priv *priv)
if (err)
goto err_destroy_direct_rqts;
- err = mlx5e_create_direct_tirs(priv);
+ err = mlx5e_create_direct_tirs(priv, priv->direct_tir);
if (err)
goto err_destroy_indirect_tirs;
@@ -384,11 +396,11 @@ static int mlx5i_init_rx(struct mlx5e_priv *priv)
return 0;
err_destroy_direct_tirs:
- mlx5e_destroy_direct_tirs(priv);
+ mlx5e_destroy_direct_tirs(priv, priv->direct_tir);
err_destroy_indirect_tirs:
mlx5e_destroy_indirect_tirs(priv, true);
err_destroy_direct_rqts:
- mlx5e_destroy_direct_rqts(priv);
+ mlx5e_destroy_direct_rqts(priv, priv->direct_tir);
err_destroy_indirect_rqts:
mlx5e_destroy_rqt(priv, &priv->indir_rqt);
err_close_drop_rq:
@@ -401,9 +413,9 @@ err_destroy_q_counters:
static void mlx5i_cleanup_rx(struct mlx5e_priv *priv)
{
mlx5i_destroy_flow_steering(priv);
- mlx5e_destroy_direct_tirs(priv);
+ mlx5e_destroy_direct_tirs(priv, priv->direct_tir);
mlx5e_destroy_indirect_tirs(priv, true);
- mlx5e_destroy_direct_rqts(priv);
+ mlx5e_destroy_direct_rqts(priv, priv->direct_tir);
mlx5e_destroy_rqt(priv, &priv->indir_rqt);
mlx5e_close_drop_rq(&priv->drop_rq);
mlx5e_destroy_q_counters(priv);
@@ -418,6 +430,7 @@ static const struct mlx5e_profile mlx5i_nic_profile = {
.cleanup_rx = mlx5i_cleanup_rx,
.enable = NULL, /* mlx5i_enable */
.disable = NULL, /* mlx5i_disable */
+ .update_rx = mlx5e_update_nic_rx,
.update_stats = NULL, /* mlx5i_update_stats */
.update_carrier = NULL, /* no HW update in IB link */
.rx_handlers.handle_rx_cqe = mlx5i_handle_rx_cqe,
@@ -526,7 +539,7 @@ static int mlx5i_open(struct net_device *netdev)
if (err)
goto err_remove_fs_underlay_qp;
- mlx5e_refresh_tirs(epriv, false);
+ epriv->profile->update_rx(epriv);
mlx5e_activate_priv_channels(epriv);
mutex_unlock(&epriv->state_lock);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.h b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.h
index e19ba3fcd1b7..c87962cab921 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.h
@@ -59,6 +59,8 @@ struct mlx5i_priv {
char *mlx5e_priv[0];
};
+int mlx5i_create_tis(struct mlx5_core_dev *mdev, u32 underlay_qpn, u32 *tisn);
+
/* Underlay QP create/destroy functions */
int mlx5i_create_underlay_qp(struct mlx5_core_dev *mdev, struct mlx5_core_qp *qp);
void mlx5i_destroy_underlay_qp(struct mlx5_core_dev *mdev, struct mlx5_core_qp *qp);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib_vlan.c b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib_vlan.c
index b491b8f5fd6b..6e56fa769d2e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib_vlan.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib_vlan.c
@@ -210,7 +210,7 @@ static int mlx5i_pkey_open(struct net_device *netdev)
goto err_unint_underlay_qp;
}
- err = mlx5e_create_tis(mdev, 0 /* tc */, ipriv->qp.qpn, &epriv->tisn[0]);
+ err = mlx5i_create_tis(mdev, ipriv->qp.qpn, &epriv->tisn[0]);
if (err) {
mlx5_core_warn(mdev, "create child tis failed, %d\n", err);
goto err_remove_rx_uderlay_qp;
@@ -221,7 +221,7 @@ static int mlx5i_pkey_open(struct net_device *netdev)
mlx5_core_warn(mdev, "opening child channels failed, %d\n", err);
goto err_clear_state_opened_flag;
}
- mlx5e_refresh_tirs(epriv, false);
+ epriv->profile->update_rx(epriv);
mlx5e_activate_priv_channels(epriv);
mutex_unlock(&epriv->state_lock);
@@ -350,6 +350,7 @@ static const struct mlx5e_profile mlx5i_pkey_nic_profile = {
.cleanup_rx = mlx5i_pkey_cleanup_rx,
.enable = NULL,
.disable = NULL,
+ .update_rx = mlx5e_update_nic_rx,
.update_stats = NULL,
.rx_handlers.handle_rx_cqe = mlx5i_handle_rx_cqe,
.rx_handlers.handle_rx_cqe_mpwqe = NULL, /* Not supported */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag.c b/drivers/net/ethernet/mellanox/mlx5/core/lag.c
index 959605559858..c5ef2ff26465 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lag.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lag.c
@@ -305,8 +305,8 @@ static void mlx5_do_bond(struct mlx5_lag *ldev)
!mlx5_sriov_is_enabled(dev1);
#ifdef CONFIG_MLX5_ESWITCH
- roce_lag &= dev0->priv.eswitch->mode == SRIOV_NONE &&
- dev1->priv.eswitch->mode == SRIOV_NONE;
+ roce_lag &= dev0->priv.eswitch->mode == MLX5_ESWITCH_NONE &&
+ dev1->priv.eswitch->mode == MLX5_ESWITCH_NONE;
#endif
if (roce_lag)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c b/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c
index 8212bfd05733..e69766393990 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lag_mp.c
@@ -2,6 +2,7 @@
/* Copyright (c) 2019 Mellanox Technologies. */
#include <linux/netdevice.h>
+#include <net/nexthop.h>
#include "lag.h"
#include "lag_mp.h"
#include "mlx5_core.h"
@@ -110,6 +111,8 @@ static void mlx5_lag_fib_route_event(struct mlx5_lag *ldev,
struct fib_info *fi)
{
struct lag_mp *mp = &ldev->lag_mp;
+ struct fib_nh *fib_nh0, *fib_nh1;
+ unsigned int nhs;
/* Handle delete event */
if (event == FIB_EVENT_ENTRY_DEL) {
@@ -120,9 +123,11 @@ static void mlx5_lag_fib_route_event(struct mlx5_lag *ldev,
}
/* Handle add/replace event */
- if (fi->fib_nhs == 1) {
+ nhs = fib_info_num_path(fi);
+ if (nhs == 1) {
if (__mlx5_lag_is_active(ldev)) {
- struct net_device *nh_dev = fi->fib_nh[0].fib_nh_dev;
+ struct fib_nh *nh = fib_info_nh(fi, 0);
+ struct net_device *nh_dev = nh->fib_nh_dev;
int i = mlx5_lag_dev_get_netdev_idx(ldev, nh_dev);
mlx5_lag_set_port_affinity(ldev, ++i);
@@ -130,14 +135,16 @@ static void mlx5_lag_fib_route_event(struct mlx5_lag *ldev,
return;
}
- if (fi->fib_nhs != 2)
+ if (nhs != 2)
return;
/* Verify next hops are ports of the same hca */
- if (!(fi->fib_nh[0].fib_nh_dev == ldev->pf[0].netdev &&
- fi->fib_nh[1].fib_nh_dev == ldev->pf[1].netdev) &&
- !(fi->fib_nh[0].fib_nh_dev == ldev->pf[1].netdev &&
- fi->fib_nh[1].fib_nh_dev == ldev->pf[0].netdev)) {
+ fib_nh0 = fib_info_nh(fi, 0);
+ fib_nh1 = fib_info_nh(fi, 1);
+ if (!(fib_nh0->fib_nh_dev == ldev->pf[0].netdev &&
+ fib_nh1->fib_nh_dev == ldev->pf[1].netdev) &&
+ !(fib_nh0->fib_nh_dev == ldev->pf[1].netdev &&
+ fib_nh1->fib_nh_dev == ldev->pf[0].netdev)) {
mlx5_core_warn(ldev->pf[0].dev, "Multipath offload require two ports of the same HCA\n");
return;
}
@@ -174,7 +181,7 @@ static void mlx5_lag_fib_nexthop_event(struct mlx5_lag *ldev,
mlx5_lag_set_port_affinity(ldev, i);
}
} else if (event == FIB_EVENT_NH_ADD &&
- fi->fib_nhs == 2) {
+ fib_info_num_path(fi) == 2) {
mlx5_lag_set_port_affinity(ldev, 0);
}
}
@@ -238,6 +245,7 @@ static int mlx5_lag_fib_event(struct notifier_block *nb,
struct mlx5_fib_event_work *fib_work;
struct fib_entry_notifier_info *fen_info;
struct fib_nh_notifier_info *fnh_info;
+ struct net_device *fib_dev;
struct fib_info *fi;
if (info->family != AF_INET)
@@ -254,8 +262,13 @@ static int mlx5_lag_fib_event(struct notifier_block *nb,
fen_info = container_of(info, struct fib_entry_notifier_info,
info);
fi = fen_info->fi;
- if (fi->fib_dev != ldev->pf[0].netdev &&
- fi->fib_dev != ldev->pf[1].netdev) {
+ if (fi->nh) {
+ NL_SET_ERR_MSG_MOD(info->extack, "IPv4 route with nexthop objects is not supported");
+ return notifier_from_errno(-EINVAL);
+ }
+ fib_dev = fib_info_nh(fen_info->fi, 0)->fib_nh_dev;
+ if (fib_dev != ldev->pf[0].netdev &&
+ fib_dev != ldev->pf[1].netdev) {
return NOTIFY_DONE;
}
fib_work = mlx5_lag_init_fib_work(ldev, event);
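
With the nexthop accessors in place, the offload decision itself is unchanged: a route qualifies for LAG multipath only if it has exactly two next hops and they are the bond's two PF netdevs, in either order. A stand-alone sketch of that check, with plain ints standing in for the netdev pointers:

#include <stdbool.h>
#include <stdio.h>

struct route {
	int nhs;          /* number of next hops */
	int nh_dev[2];    /* device ids of the next hops */
};

static bool lag_can_offload(const struct route *r, int pf0, int pf1)
{
	if (r->nhs != 2)
		return false;
	return (r->nh_dev[0] == pf0 && r->nh_dev[1] == pf1) ||
	       (r->nh_dev[0] == pf1 && r->nh_dev[1] == pf0);
}

int main(void)
{
	struct route r = { 2, { 11, 10 } };

	printf("offloadable: %d\n", lag_can_offload(&r, 10, 11));
	return 0;
}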
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/crypto.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/crypto.c
new file mode 100644
index 000000000000..ea9ee88491e5
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/crypto.c
@@ -0,0 +1,72 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+// Copyright (c) 2019 Mellanox Technologies.
+
+#include "mlx5_core.h"
+
+int mlx5_create_encryption_key(struct mlx5_core_dev *mdev,
+ void *key, u32 sz_bytes,
+ u32 *p_key_id)
+{
+ u32 in[MLX5_ST_SZ_DW(create_encryption_key_in)] = {};
+ u32 out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)];
+ u32 sz_bits = sz_bytes * BITS_PER_BYTE;
+ u8 general_obj_key_size;
+ u64 general_obj_types;
+ void *obj, *key_p;
+ int err;
+
+ obj = MLX5_ADDR_OF(create_encryption_key_in, in, encryption_key_object);
+ key_p = MLX5_ADDR_OF(encryption_key_obj, obj, key);
+
+ general_obj_types = MLX5_CAP_GEN_64(mdev, general_obj_types);
+ if (!(general_obj_types &
+ MLX5_HCA_CAP_GENERAL_OBJECT_TYPES_ENCRYPTION_KEY))
+ return -EINVAL;
+
+ switch (sz_bits) {
+ case 128:
+ general_obj_key_size =
+ MLX5_GENERAL_OBJECT_TYPE_ENCRYPTION_KEY_KEY_SIZE_128;
+ break;
+ case 256:
+ general_obj_key_size =
+ MLX5_GENERAL_OBJECT_TYPE_ENCRYPTION_KEY_KEY_SIZE_256;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ memcpy(key_p, key, sz_bytes);
+
+ MLX5_SET(encryption_key_obj, obj, key_size, general_obj_key_size);
+ MLX5_SET(encryption_key_obj, obj, key_type,
+ MLX5_GENERAL_OBJECT_TYPE_ENCRYPTION_KEY_TYPE_DEK);
+ MLX5_SET(general_obj_in_cmd_hdr, in, opcode,
+ MLX5_CMD_OP_CREATE_GENERAL_OBJECT);
+ MLX5_SET(general_obj_in_cmd_hdr, in, obj_type,
+ MLX5_GENERAL_OBJECT_TYPES_ENCRYPTION_KEY);
+ MLX5_SET(encryption_key_obj, obj, pd, mdev->mlx5e_res.pdn);
+
+ err = mlx5_cmd_exec(mdev, in, sizeof(in), out, sizeof(out));
+ if (!err)
+ *p_key_id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id);
+
+ /* avoid leaking key on the stack */
+ memzero_explicit(in, sizeof(in));
+
+ return err;
+}
+
+void mlx5_destroy_encryption_key(struct mlx5_core_dev *mdev, u32 key_id)
+{
+ u32 in[MLX5_ST_SZ_DW(general_obj_in_cmd_hdr)] = {};
+ u32 out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)];
+
+ MLX5_SET(general_obj_in_cmd_hdr, in, opcode,
+ MLX5_CMD_OP_DESTROY_GENERAL_OBJECT);
+ MLX5_SET(general_obj_in_cmd_hdr, in, obj_type,
+ MLX5_GENERAL_OBJECT_TYPES_ENCRYPTION_KEY);
+ MLX5_SET(general_obj_in_cmd_hdr, in, obj_id, key_id);
+
+ mlx5_cmd_exec(mdev, in, sizeof(in), out, sizeof(out));
+}
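
A minimal usage sketch of the encryption-key helpers added above; the caller names and the kTLS context are hypothetical and only illustrate the create/destroy pairing (128- or 256-bit keys, other sizes fail with -EINVAL).

/* Hypothetical caller of the helpers above; mdev, key and keylen are
 * assumed to come from a consumer such as a kTLS offload path.
 */
static int example_dek_create(struct mlx5_core_dev *mdev,
			      const void *key, u32 keylen, u32 *p_dek_id)
{
	/* keylen must be 16 or 32 bytes; other sizes return -EINVAL */
	return mlx5_create_encryption_key(mdev, (void *)key, keylen, p_dek_id);
}

static void example_dek_destroy(struct mlx5_core_dev *mdev, u32 dek_id)
{
	/* releases the DEK general object created by the helper above */
	mlx5_destroy_encryption_key(mdev, dek_id);
}
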
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/eq.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/eq.h
index c0fb6d72b695..3dfab91ae5f2 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/eq.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/eq.h
@@ -7,7 +7,6 @@
#include <linux/mlx5/eq.h>
#include <linux/mlx5/cq.h>
-#define MLX5_MAX_IRQ_NAME (32)
#define MLX5_EQE_SIZE (sizeof(struct mlx5_eqe))
struct mlx5_eq_tasklet {
@@ -36,8 +35,14 @@ struct mlx5_eq {
struct mlx5_rsc_debug *dbg;
};
+struct mlx5_eq_async {
+ struct mlx5_eq core;
+ struct notifier_block irq_nb;
+};
+
struct mlx5_eq_comp {
- struct mlx5_eq core; /* Must be first */
+ struct mlx5_eq core;
+ struct notifier_block irq_nb;
struct mlx5_eq_tasklet tasklet_ctx;
struct list_head list;
};
@@ -70,7 +75,7 @@ int mlx5_eq_table_create(struct mlx5_core_dev *dev);
void mlx5_eq_table_destroy(struct mlx5_core_dev *dev);
int mlx5_eq_add_cq(struct mlx5_eq *eq, struct mlx5_core_cq *cq);
-int mlx5_eq_del_cq(struct mlx5_eq *eq, struct mlx5_core_cq *cq);
+void mlx5_eq_del_cq(struct mlx5_eq *eq, struct mlx5_core_cq *cq);
struct mlx5_eq_comp *mlx5_eqn2comp_eq(struct mlx5_core_dev *dev, int eqn);
struct mlx5_eq *mlx5_get_async_eq(struct mlx5_core_dev *dev);
void mlx5_cq_tasklet_cb(unsigned long data);
@@ -92,7 +97,4 @@ void mlx5_core_eq_free_irqs(struct mlx5_core_dev *dev);
struct cpu_rmap *mlx5_eq_table_get_rmap(struct mlx5_core_dev *dev);
#endif
-int mlx5_eq_notifier_register(struct mlx5_core_dev *dev, struct mlx5_nb *nb);
-int mlx5_eq_notifier_unregister(struct mlx5_core_dev *dev, struct mlx5_nb *nb);
-
#endif
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/geneve.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/geneve.c
new file mode 100644
index 000000000000..23361a9ae4fa
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/geneve.c
@@ -0,0 +1,157 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+/* Copyright (c) 2019 Mellanox Technologies. */
+
+#include <linux/kernel.h>
+#include "mlx5_core.h"
+#include "geneve.h"
+
+struct mlx5_geneve {
+ struct mlx5_core_dev *mdev;
+ __be16 opt_class;
+ u8 opt_type;
+ u32 obj_id;
+ struct mutex sync_lock; /* protect GENEVE obj operations */
+ u32 refcount;
+};
+
+static int mlx5_geneve_tlv_option_create(struct mlx5_core_dev *mdev,
+ __be16 class,
+ u8 type,
+ u8 len)
+{
+ u32 in[MLX5_ST_SZ_DW(create_geneve_tlv_option_in)] = {};
+ u32 out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {};
+ u64 general_obj_types;
+ void *hdr, *opt;
+ u16 obj_id;
+ int err;
+
+ general_obj_types = MLX5_CAP_GEN_64(mdev, general_obj_types);
+ if (!(general_obj_types & MLX5_GENERAL_OBJ_TYPES_CAP_GENEVE_TLV_OPT))
+ return -EINVAL;
+
+ hdr = MLX5_ADDR_OF(create_geneve_tlv_option_in, in, hdr);
+ opt = MLX5_ADDR_OF(create_geneve_tlv_option_in, in, geneve_tlv_opt);
+
+ MLX5_SET(general_obj_in_cmd_hdr, hdr, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT);
+ MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_type, MLX5_OBJ_TYPE_GENEVE_TLV_OPT);
+
+ MLX5_SET(geneve_tlv_option, opt, option_class, be16_to_cpu(class));
+ MLX5_SET(geneve_tlv_option, opt, option_type, type);
+ MLX5_SET(geneve_tlv_option, opt, option_data_length, len);
+
+ err = mlx5_cmd_exec(mdev, in, sizeof(in), out, sizeof(out));
+ if (err)
+ return err;
+
+ obj_id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id);
+ return obj_id;
+}
+
+static void mlx5_geneve_tlv_option_destroy(struct mlx5_core_dev *mdev, u16 obj_id)
+{
+ u32 out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {};
+ u32 in[MLX5_ST_SZ_DW(general_obj_in_cmd_hdr)] = {};
+
+ MLX5_SET(general_obj_in_cmd_hdr, in, opcode, MLX5_CMD_OP_DESTROY_GENERAL_OBJECT);
+ MLX5_SET(general_obj_in_cmd_hdr, in, obj_type, MLX5_OBJ_TYPE_GENEVE_TLV_OPT);
+ MLX5_SET(general_obj_in_cmd_hdr, in, obj_id, obj_id);
+
+ mlx5_cmd_exec(mdev, in, sizeof(in), out, sizeof(out));
+}
+
+int mlx5_geneve_tlv_option_add(struct mlx5_geneve *geneve, struct geneve_opt *opt)
+{
+ int res = 0;
+
+ if (IS_ERR_OR_NULL(geneve))
+ return -EOPNOTSUPP;
+
+ mutex_lock(&geneve->sync_lock);
+
+ if (geneve->refcount) {
+ if (geneve->opt_class == opt->opt_class &&
+ geneve->opt_type == opt->type) {
+ /* We already have TLV options obj allocated */
+ geneve->refcount++;
+ } else {
+ /* TLV options obj allocated, but its params
+ * do not match the new request.
+ * We support only one such object.
+ */
+ mlx5_core_warn(geneve->mdev,
+ "Won't create Geneve TLV opt object with class:type:len = 0x%x:0x%x:%d (another class:type already exists)\n",
+ be16_to_cpu(opt->opt_class),
+ opt->type,
+ opt->length);
+ res = -EOPNOTSUPP;
+ goto unlock;
+ }
+ } else {
+ /* We don't have any TLV options obj allocated */
+
+ res = mlx5_geneve_tlv_option_create(geneve->mdev,
+ opt->opt_class,
+ opt->type,
+ opt->length);
+ if (res < 0) {
+ mlx5_core_warn(geneve->mdev,
+ "Failed creating Geneve TLV opt object class:type:len = 0x%x:0x%x:%d (err=%d)\n",
+ be16_to_cpu(opt->opt_class),
+ opt->type, opt->length, res);
+ goto unlock;
+ }
+ geneve->opt_class = opt->opt_class;
+ geneve->opt_type = opt->type;
+ geneve->obj_id = res;
+ geneve->refcount++;
+ }
+
+unlock:
+ mutex_unlock(&geneve->sync_lock);
+ return res;
+}
+
+void mlx5_geneve_tlv_option_del(struct mlx5_geneve *geneve)
+{
+ if (IS_ERR_OR_NULL(geneve))
+ return;
+
+ mutex_lock(&geneve->sync_lock);
+ if (--geneve->refcount == 0) {
+ /* We've just removed the last user of Geneve option.
+ * Now delete the object in FW.
+ */
+ mlx5_geneve_tlv_option_destroy(geneve->mdev, geneve->obj_id);
+
+ geneve->opt_class = 0;
+ geneve->opt_type = 0;
+ geneve->obj_id = 0;
+ }
+ mutex_unlock(&geneve->sync_lock);
+}
+
+struct mlx5_geneve *mlx5_geneve_create(struct mlx5_core_dev *mdev)
+{
+ struct mlx5_geneve *geneve =
+ kzalloc(sizeof(*geneve), GFP_KERNEL);
+
+ if (!geneve)
+ return ERR_PTR(-ENOMEM);
+ geneve->mdev = mdev;
+ mutex_init(&geneve->sync_lock);
+
+ return geneve;
+}
+
+void mlx5_geneve_destroy(struct mlx5_geneve *geneve)
+{
+ if (IS_ERR_OR_NULL(geneve))
+ return;
+
+ /* Lockless since we are unloading */
+ if (geneve->refcount)
+ mlx5_geneve_tlv_option_destroy(geneve->mdev, geneve->obj_id);
+
+ kfree(geneve);
+}
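
A hedged sketch of how a consumer might use the single-object refcounting implemented above; the function name and the flow-rule step are hypothetical, while the add/del pairing and the error codes follow the code in this file.

/* Hypothetical consumer of the TLV option refcounting above; the geneve
 * handle and the geneve_opt (class/type/length) are assumed to come from
 * tunnel parsing code.
 */
static int example_offload_geneve_opt(struct mlx5_geneve *geneve,
				      struct geneve_opt *opt)
{
	int res;

	/* Succeeds for the first class/type or a matching one; a different
	 * class/type is rejected with -EOPNOTSUPP.
	 */
	res = mlx5_geneve_tlv_option_add(geneve, opt);
	if (res < 0)
		return res;

	/* ... install flow rules matching on the option here ... */

	/* Drop the reference; the FW object is destroyed with the last user */
	mlx5_geneve_tlv_option_del(geneve);
	return 0;
}
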
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/geneve.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/geneve.h
new file mode 100644
index 000000000000..adee0cbba19c
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/geneve.h
@@ -0,0 +1,33 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
+/* Copyright (c) 2019 Mellanox Technologies. */
+
+#ifndef __MLX5_GENEVE_H__
+#define __MLX5_GENEVE_H__
+
+#include <net/geneve.h>
+#include <linux/mlx5/driver.h>
+
+struct mlx5_geneve;
+
+#ifdef CONFIG_MLX5_ESWITCH
+
+struct mlx5_geneve *mlx5_geneve_create(struct mlx5_core_dev *mdev);
+void mlx5_geneve_destroy(struct mlx5_geneve *geneve);
+
+int mlx5_geneve_tlv_option_add(struct mlx5_geneve *geneve, struct geneve_opt *opt);
+void mlx5_geneve_tlv_option_del(struct mlx5_geneve *geneve);
+
+#else /* CONFIG_MLX5_ESWITCH */
+
+static inline struct mlx5_geneve
+*mlx5_geneve_create(struct mlx5_core_dev *mdev) { return NULL; }
+static inline void
+mlx5_geneve_destroy(struct mlx5_geneve *geneve) {}
+static inline int
+mlx5_geneve_tlv_option_add(struct mlx5_geneve *geneve, struct geneve_opt *opt) { return 0; }
+static inline void
+mlx5_geneve_tlv_option_del(struct mlx5_geneve *geneve) {}
+
+#endif /* CONFIG_MLX5_ESWITCH */
+
+#endif /* __MLX5_GENEVE_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h
index 397a2847867a..b99d469e4e64 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h
@@ -41,6 +41,9 @@ int mlx5_core_reserve_gids(struct mlx5_core_dev *dev, unsigned int count);
void mlx5_core_unreserve_gids(struct mlx5_core_dev *dev, unsigned int count);
int mlx5_core_reserved_gid_alloc(struct mlx5_core_dev *dev, int *gid_index);
void mlx5_core_reserved_gid_free(struct mlx5_core_dev *dev, int gid_index);
+int mlx5_crdump_enable(struct mlx5_core_dev *dev);
+void mlx5_crdump_disable(struct mlx5_core_dev *dev);
+int mlx5_crdump_collect(struct mlx5_core_dev *dev, u32 *cr_data);
/* TODO move to lib/events.h */
@@ -76,4 +79,9 @@ struct mlx5_pme_stats {
void mlx5_get_pme_stats(struct mlx5_core_dev *dev, struct mlx5_pme_stats *stats);
int mlx5_notifier_call_chain(struct mlx5_events *events, unsigned int event, void *data);
+/* Crypto */
+int mlx5_create_encryption_key(struct mlx5_core_dev *mdev,
+ void *key, u32 sz_bytes, u32 *p_key_id);
+void mlx5_destroy_encryption_key(struct mlx5_core_dev *mdev, u32 key_id);
+
#endif
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/mpfs.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/mpfs.c
index a71d5b9c7ab2..3118e8d66407 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/mpfs.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/mpfs.c
@@ -67,6 +67,7 @@ static int del_l2table_entry_cmd(struct mlx5_core_dev *dev, u32 index)
struct l2table_node {
struct l2addr_node node;
u32 index; /* index in HW l2 table */
+ int ref_count;
};
struct mlx5_mpfs {
@@ -134,8 +135,8 @@ int mlx5_mpfs_add_mac(struct mlx5_core_dev *dev, u8 *mac)
{
struct mlx5_mpfs *mpfs = dev->priv.mpfs;
struct l2table_node *l2addr;
+ int err = 0;
u32 index;
- int err;
if (!MLX5_ESWITCH_MANAGER(dev))
return 0;
@@ -144,30 +145,35 @@ int mlx5_mpfs_add_mac(struct mlx5_core_dev *dev, u8 *mac)
l2addr = l2addr_hash_find(mpfs->hash, mac, struct l2table_node);
if (l2addr) {
- err = -EEXIST;
- goto abort;
+ l2addr->ref_count++;
+ goto out;
}
err = alloc_l2table_index(mpfs, &index);
if (err)
- goto abort;
+ goto out;
l2addr = l2addr_hash_add(mpfs->hash, mac, struct l2table_node, GFP_KERNEL);
if (!l2addr) {
- free_l2table_index(mpfs, index);
err = -ENOMEM;
- goto abort;
+ goto hash_add_err;
}
- l2addr->index = index;
err = set_l2table_entry_cmd(dev, index, mac);
- if (err) {
- l2addr_hash_del(l2addr);
- free_l2table_index(mpfs, index);
- }
+ if (err)
+ goto set_table_entry_err;
+
+ l2addr->index = index;
+ l2addr->ref_count = 1;
mlx5_core_dbg(dev, "MPFS mac added %pM, index (%d)\n", mac, index);
-abort:
+ goto out;
+
+set_table_entry_err:
+ l2addr_hash_del(l2addr);
+hash_add_err:
+ free_l2table_index(mpfs, index);
+out:
mutex_unlock(&mpfs->lock);
return err;
}
@@ -190,6 +196,9 @@ int mlx5_mpfs_del_mac(struct mlx5_core_dev *dev, u8 *mac)
goto unlock;
}
+ if (--l2addr->ref_count > 0)
+ goto unlock;
+
index = l2addr->index;
del_l2table_entry_cmd(dev, index);
l2addr_hash_del(l2addr);
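
A short sketch of the effect of the new per-MAC reference count; the scenario (two users sharing one unicast MAC) is hypothetical, the add/del semantics follow the code above.

/* Hypothetical illustration of the ref_count added above: the L2 table
 * entry is programmed on the first add and removed on the last delete.
 */
static int example_mpfs_shared_mac(struct mlx5_core_dev *dev, u8 *mac)
{
	int err;

	err = mlx5_mpfs_add_mac(dev, mac);	/* programs HW entry, ref = 1 */
	if (err)
		return err;

	err = mlx5_mpfs_add_mac(dev, mac);	/* ref = 2, no HW command issued */
	if (err)
		goto del_first;

	mlx5_mpfs_del_mac(dev, mac);		/* ref = 1, entry kept */
del_first:
	mlx5_mpfs_del_mac(dev, mac);		/* ref = 0, HW entry removed */
	return err;
}
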
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c
new file mode 100644
index 000000000000..6b774e0c2766
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c
@@ -0,0 +1,316 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+/* Copyright (c) 2019 Mellanox Technologies */
+
+#include <linux/pci.h>
+#include "mlx5_core.h"
+#include "pci_vsc.h"
+
+#define MLX5_EXTRACT_C(source, offset, size) \
+ ((((u32)(source)) >> (offset)) & MLX5_ONES32(size))
+#define MLX5_EXTRACT(src, start, len) \
+ (((len) == 32) ? (src) : MLX5_EXTRACT_C(src, start, len))
+#define MLX5_ONES32(size) \
+ ((size) ? (0xffffffff >> (32 - (size))) : 0)
+#define MLX5_MASK32(offset, size) \
+ (MLX5_ONES32(size) << (offset))
+#define MLX5_MERGE_C(rsrc1, rsrc2, start, len) \
+ ((((rsrc2) << (start)) & (MLX5_MASK32((start), (len)))) | \
+ ((rsrc1) & (~MLX5_MASK32((start), (len)))))
+#define MLX5_MERGE(rsrc1, rsrc2, start, len) \
+ (((len) == 32) ? (rsrc2) : MLX5_MERGE_C(rsrc1, rsrc2, start, len))
+#define vsc_read(dev, offset, val) \
+ pci_read_config_dword((dev)->pdev, (dev)->vsc_addr + (offset), (val))
+#define vsc_write(dev, offset, val) \
+ pci_write_config_dword((dev)->pdev, (dev)->vsc_addr + (offset), (val))
+#define VSC_MAX_RETRIES 2048
+
+enum {
+ VSC_CTRL_OFFSET = 0x4,
+ VSC_COUNTER_OFFSET = 0x8,
+ VSC_SEMAPHORE_OFFSET = 0xc,
+ VSC_ADDR_OFFSET = 0x10,
+ VSC_DATA_OFFSET = 0x14,
+
+ VSC_FLAG_BIT_OFFS = 31,
+ VSC_FLAG_BIT_LEN = 1,
+
+ VSC_SYND_BIT_OFFS = 30,
+ VSC_SYND_BIT_LEN = 1,
+
+ VSC_ADDR_BIT_OFFS = 0,
+ VSC_ADDR_BIT_LEN = 30,
+
+ VSC_SPACE_BIT_OFFS = 0,
+ VSC_SPACE_BIT_LEN = 16,
+
+ VSC_SIZE_VLD_BIT_OFFS = 28,
+ VSC_SIZE_VLD_BIT_LEN = 1,
+
+ VSC_STATUS_BIT_OFFS = 29,
+ VSC_STATUS_BIT_LEN = 3,
+};
+
+void mlx5_pci_vsc_init(struct mlx5_core_dev *dev)
+{
+ if (!mlx5_core_is_pf(dev))
+ return;
+
+ dev->vsc_addr = pci_find_capability(dev->pdev,
+ PCI_CAP_ID_VNDR);
+ if (!dev->vsc_addr)
+ mlx5_core_warn(dev, "Failed to find a valid PCI vendor-specific capability\n");
+}
+
+int mlx5_vsc_gw_lock(struct mlx5_core_dev *dev)
+{
+ u32 counter = 0;
+ int retries = 0;
+ u32 lock_val;
+ int ret;
+
+ pci_cfg_access_lock(dev->pdev);
+ do {
+ if (retries > VSC_MAX_RETRIES) {
+ ret = -EBUSY;
+ goto pci_unlock;
+ }
+
+ /* Check if semaphore is already locked */
+ ret = vsc_read(dev, VSC_SEMAPHORE_OFFSET, &lock_val);
+ if (ret)
+ goto pci_unlock;
+
+ if (lock_val) {
+ retries++;
+ usleep_range(1000, 2000);
+ continue;
+ }
+
+ /* Read the counter and write its value to the semaphore; if the
+ * value read back is the same, the semaphore was acquired
+ * successfully.
+ */
+ ret = vsc_read(dev, VSC_COUNTER_OFFSET, &counter);
+ if (ret)
+ goto pci_unlock;
+
+ ret = vsc_write(dev, VSC_SEMAPHORE_OFFSET, counter);
+ if (ret)
+ goto pci_unlock;
+
+ ret = vsc_read(dev, VSC_SEMAPHORE_OFFSET, &lock_val);
+ if (ret)
+ goto pci_unlock;
+
+ retries++;
+ } while (counter != lock_val);
+
+ return 0;
+
+pci_unlock:
+ pci_cfg_access_unlock(dev->pdev);
+ return ret;
+}
+
+int mlx5_vsc_gw_unlock(struct mlx5_core_dev *dev)
+{
+ int ret;
+
+ ret = vsc_write(dev, VSC_SEMAPHORE_OFFSET, MLX5_VSC_UNLOCK);
+ pci_cfg_access_unlock(dev->pdev);
+ return ret;
+}
+
+int mlx5_vsc_gw_set_space(struct mlx5_core_dev *dev, u16 space,
+ u32 *ret_space_size)
+{
+ int ret;
+ u32 val = 0;
+
+ if (!mlx5_vsc_accessible(dev))
+ return -EINVAL;
+
+ if (ret_space_size)
+ *ret_space_size = 0;
+
+ /* Read the current control register value */
+ ret = vsc_read(dev, VSC_CTRL_OFFSET, &val);
+ if (ret)
+ goto out;
+
+ /* Try to select the requested space */
+ val = MLX5_MERGE(val, space, VSC_SPACE_BIT_OFFS, VSC_SPACE_BIT_LEN);
+ ret = vsc_write(dev, VSC_CTRL_OFFSET, val);
+ if (ret)
+ goto out;
+
+ /* Verify the space selection was accepted */
+ ret = vsc_read(dev, VSC_CTRL_OFFSET, &val);
+ if (ret)
+ goto out;
+
+ if (MLX5_EXTRACT(val, VSC_STATUS_BIT_OFFS, VSC_STATUS_BIT_LEN) == 0)
+ return -EINVAL;
+
+ /* Get space max address if indicated by size valid bit */
+ if (ret_space_size &&
+ MLX5_EXTRACT(val, VSC_SIZE_VLD_BIT_OFFS, VSC_SIZE_VLD_BIT_LEN)) {
+ ret = vsc_read(dev, VSC_ADDR_OFFSET, &val);
+ if (ret) {
+ mlx5_core_warn(dev, "Failed to get max space size\n");
+ goto out;
+ }
+ *ret_space_size = MLX5_EXTRACT(val, VSC_ADDR_BIT_OFFS,
+ VSC_ADDR_BIT_LEN);
+ }
+ return 0;
+
+out:
+ return ret;
+}
+
+static int mlx5_vsc_wait_on_flag(struct mlx5_core_dev *dev, u8 expected_val)
+{
+ int retries = 0;
+ u32 flag;
+ int ret;
+
+ do {
+ if (retries > VSC_MAX_RETRIES)
+ return -EBUSY;
+
+ ret = vsc_read(dev, VSC_ADDR_OFFSET, &flag);
+ if (ret)
+ return ret;
+ flag = MLX5_EXTRACT(flag, VSC_FLAG_BIT_OFFS, VSC_FLAG_BIT_LEN);
+ retries++;
+
+ if ((retries & 0xf) == 0)
+ usleep_range(1000, 2000);
+
+ } while (flag != expected_val);
+
+ return 0;
+}
+
+static int mlx5_vsc_gw_write(struct mlx5_core_dev *dev, unsigned int address,
+ u32 data)
+{
+ int ret;
+
+ if (MLX5_EXTRACT(address, VSC_SYND_BIT_OFFS,
+ VSC_FLAG_BIT_LEN + VSC_SYND_BIT_LEN))
+ return -EINVAL;
+
+ /* Set flag to 0x1 */
+ address = MLX5_MERGE(address, 1, VSC_FLAG_BIT_OFFS, 1);
+ ret = vsc_write(dev, VSC_DATA_OFFSET, data);
+ if (ret)
+ goto out;
+
+ ret = vsc_write(dev, VSC_ADDR_OFFSET, address);
+ if (ret)
+ goto out;
+
+ /* Wait for the flag to be cleared */
+ ret = mlx5_vsc_wait_on_flag(dev, 0);
+
+out:
+ return ret;
+}
+
+static int mlx5_vsc_gw_read(struct mlx5_core_dev *dev, unsigned int address,
+ u32 *data)
+{
+ int ret;
+
+ if (MLX5_EXTRACT(address, VSC_SYND_BIT_OFFS,
+ VSC_FLAG_BIT_LEN + VSC_SYND_BIT_LEN))
+ return -EINVAL;
+
+ ret = vsc_write(dev, VSC_ADDR_OFFSET, address);
+ if (ret)
+ goto out;
+
+ ret = mlx5_vsc_wait_on_flag(dev, 1);
+ if (ret)
+ goto out;
+
+ ret = vsc_read(dev, VSC_DATA_OFFSET, data);
+out:
+ return ret;
+}
+
+static int mlx5_vsc_gw_read_fast(struct mlx5_core_dev *dev,
+ unsigned int read_addr,
+ unsigned int *next_read_addr,
+ u32 *data)
+{
+ int ret;
+
+ ret = mlx5_vsc_gw_read(dev, read_addr, data);
+ if (ret)
+ goto out;
+
+ ret = vsc_read(dev, VSC_ADDR_OFFSET, next_read_addr);
+ if (ret)
+ goto out;
+
+ *next_read_addr = MLX5_EXTRACT(*next_read_addr, VSC_ADDR_BIT_OFFS,
+ VSC_ADDR_BIT_LEN);
+
+ if (*next_read_addr <= read_addr)
+ ret = -EINVAL;
+out:
+ return ret;
+}
+
+int mlx5_vsc_gw_read_block_fast(struct mlx5_core_dev *dev, u32 *data,
+ int length)
+{
+ unsigned int next_read_addr = 0;
+ unsigned int read_addr = 0;
+
+ while (read_addr < length) {
+ if (mlx5_vsc_gw_read_fast(dev, read_addr, &next_read_addr,
+ &data[(read_addr >> 2)]))
+ return read_addr;
+
+ read_addr = next_read_addr;
+ }
+ return length;
+}
+
+int mlx5_vsc_sem_set_space(struct mlx5_core_dev *dev, u16 space,
+ enum mlx5_vsc_state state)
+{
+ u32 data, id = 0;
+ int ret;
+
+ ret = mlx5_vsc_gw_set_space(dev, MLX5_SEMAPHORE_SPACE_DOMAIN, NULL);
+ if (ret) {
+ mlx5_core_warn(dev, "Failed to set gw space %d\n", ret);
+ return ret;
+ }
+
+ if (state == MLX5_VSC_LOCK) {
+ /* Get a unique ID based on the counter */
+ ret = vsc_read(dev, VSC_COUNTER_OFFSET, &id);
+ if (ret)
+ return ret;
+ }
+
+ /* Try to modify lock */
+ ret = mlx5_vsc_gw_write(dev, space, id);
+ if (ret)
+ return ret;
+
+ /* Verify lock was modified */
+ ret = mlx5_vsc_gw_read(dev, space, &data);
+ if (ret)
+ return -EINVAL;
+
+ if (data != id)
+ return -EBUSY;
+
+ return 0;
+}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.h
new file mode 100644
index 000000000000..64272a6d7754
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
+/* Copyright (c) 2019 Mellanox Technologies */
+
+#ifndef __MLX5_PCI_VSC_H__
+#define __MLX5_PCI_VSC_H__
+
+enum mlx5_vsc_state {
+ MLX5_VSC_UNLOCK,
+ MLX5_VSC_LOCK,
+};
+
+enum {
+ MLX5_VSC_SPACE_SCAN_CRSPACE = 0x7,
+};
+
+void mlx5_pci_vsc_init(struct mlx5_core_dev *dev);
+int mlx5_vsc_gw_lock(struct mlx5_core_dev *dev);
+int mlx5_vsc_gw_unlock(struct mlx5_core_dev *dev);
+int mlx5_vsc_gw_set_space(struct mlx5_core_dev *dev, u16 space,
+ u32 *ret_space_size);
+int mlx5_vsc_gw_read_block_fast(struct mlx5_core_dev *dev, u32 *data,
+ int length);
+
+static inline bool mlx5_vsc_accessible(struct mlx5_core_dev *dev)
+{
+ return !!dev->vsc_addr;
+}
+
+int mlx5_vsc_sem_set_space(struct mlx5_core_dev *dev, u16 space,
+ enum mlx5_vsc_state state);
+
+#endif /* __MLX5_PCI_VSC_H__ */
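
A hedged sketch of the intended lock/set_space/read/unlock sequence through the VSC gateway, roughly what the crdump support added elsewhere in this series relies on; the function name, buffer handling and output parameters are hypothetical.

/* Hypothetical read of the crspace scan space via the VSC gateway. */
static int example_vsc_read_crspace(struct mlx5_core_dev *dev,
				    u32 **data_out, u32 *size_out)
{
	u32 space_size;
	u32 *data;
	int ret;

	ret = mlx5_vsc_gw_lock(dev);
	if (ret)
		return ret;

	ret = mlx5_vsc_gw_set_space(dev, MLX5_VSC_SPACE_SCAN_CRSPACE,
				    &space_size);
	if (ret)
		goto unlock;

	data = kvzalloc(space_size, GFP_KERNEL);
	if (!data) {
		ret = -ENOMEM;
		goto unlock;
	}

	/* returns the number of bytes actually read */
	*size_out = mlx5_vsc_gw_read_block_fast(dev, data, space_size);
	*data_out = data;
unlock:
	mlx5_vsc_gw_unlock(dev);
	return ret;
}
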
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
index 23d53163ce15..b15b27a497fc 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
@@ -56,6 +56,7 @@
#include "fs_core.h"
#include "lib/mpfs.h"
#include "eswitch.h"
+#include "devlink.h"
#include "lib/mlx5.h"
#include "fpga/core.h"
#include "fpga/ipsec.h"
@@ -63,7 +64,9 @@
#include "accel/tls.h"
#include "lib/clock.h"
#include "lib/vxlan.h"
+#include "lib/geneve.h"
#include "lib/devcom.h"
+#include "lib/pci_vsc.h"
#include "diag/fw_tracer.h"
#include "ecpf.h"
@@ -169,18 +172,28 @@ static struct mlx5_profile profile[] = {
#define FW_INIT_TIMEOUT_MILI 2000
#define FW_INIT_WAIT_MS 2
-#define FW_PRE_INIT_TIMEOUT_MILI 10000
+#define FW_PRE_INIT_TIMEOUT_MILI 120000
+#define FW_INIT_WARN_MESSAGE_INTERVAL 20000
-static int wait_fw_init(struct mlx5_core_dev *dev, u32 max_wait_mili)
+static int wait_fw_init(struct mlx5_core_dev *dev, u32 max_wait_mili,
+ u32 warn_time_mili)
{
+ unsigned long warn = jiffies + msecs_to_jiffies(warn_time_mili);
unsigned long end = jiffies + msecs_to_jiffies(max_wait_mili);
int err = 0;
+ BUILD_BUG_ON(FW_PRE_INIT_TIMEOUT_MILI < FW_INIT_WARN_MESSAGE_INTERVAL);
+
while (fw_initializing(dev)) {
if (time_after(jiffies, end)) {
err = -EBUSY;
break;
}
+ if (warn_time_mili && time_after(jiffies, warn)) {
+ mlx5_core_warn(dev, "Waiting for FW initialization, timeout abort in %ds\n",
+ jiffies_to_msecs(end - warn) / 1000);
+ warn = jiffies + msecs_to_jiffies(warn_time_mili);
+ }
msleep(FW_INIT_WAIT_MS);
}
@@ -721,8 +734,7 @@ static int mlx5_pci_init(struct mlx5_core_dev *dev, struct pci_dev *pdev,
struct mlx5_priv *priv = &dev->priv;
int err = 0;
- priv->pci_dev_data = id->driver_data;
-
+ mutex_init(&dev->pci_status_mutex);
pci_set_drvdata(dev->pdev, dev);
dev->bar_addr = pci_resource_start(pdev, 0);
@@ -761,6 +773,8 @@ static int mlx5_pci_init(struct mlx5_core_dev *dev, struct pci_dev *pdev,
goto err_clr_master;
}
+ mlx5_pci_vsc_init(dev);
+
return 0;
err_clr_master:
@@ -794,10 +808,16 @@ static int mlx5_init_once(struct mlx5_core_dev *dev)
goto err_devcom;
}
+ err = mlx5_irq_table_init(dev);
+ if (err) {
+ mlx5_core_err(dev, "failed to initialize irq table\n");
+ goto err_devcom;
+ }
+
err = mlx5_eq_table_init(dev);
if (err) {
mlx5_core_err(dev, "failed to initialize eq\n");
- goto err_devcom;
+ goto err_irq_cleanup;
}
err = mlx5_events_init(dev);
@@ -821,6 +841,7 @@ static int mlx5_init_once(struct mlx5_core_dev *dev)
mlx5_init_clock(dev);
dev->vxlan = mlx5_vxlan_create(dev);
+ dev->geneve = mlx5_geneve_create(dev);
err = mlx5_init_rl_table(dev);
if (err) {
@@ -834,37 +855,38 @@ static int mlx5_init_once(struct mlx5_core_dev *dev)
goto err_rl_cleanup;
}
- err = mlx5_eswitch_init(dev);
+ err = mlx5_sriov_init(dev);
if (err) {
- mlx5_core_err(dev, "Failed to init eswitch %d\n", err);
+ mlx5_core_err(dev, "Failed to init sriov %d\n", err);
goto err_mpfs_cleanup;
}
- err = mlx5_sriov_init(dev);
+ err = mlx5_eswitch_init(dev);
if (err) {
- mlx5_core_err(dev, "Failed to init sriov %d\n", err);
- goto err_eswitch_cleanup;
+ mlx5_core_err(dev, "Failed to init eswitch %d\n", err);
+ goto err_sriov_cleanup;
}
err = mlx5_fpga_init(dev);
if (err) {
mlx5_core_err(dev, "Failed to init fpga device %d\n", err);
- goto err_sriov_cleanup;
+ goto err_eswitch_cleanup;
}
dev->tracer = mlx5_fw_tracer_create(dev);
return 0;
-err_sriov_cleanup:
- mlx5_sriov_cleanup(dev);
err_eswitch_cleanup:
mlx5_eswitch_cleanup(dev->priv.eswitch);
+err_sriov_cleanup:
+ mlx5_sriov_cleanup(dev);
err_mpfs_cleanup:
mlx5_mpfs_cleanup(dev);
err_rl_cleanup:
mlx5_cleanup_rl_table(dev);
err_tables_cleanup:
+ mlx5_geneve_destroy(dev->geneve);
mlx5_vxlan_destroy(dev->vxlan);
mlx5_cleanup_mkey_table(dev);
mlx5_cleanup_qp_table(dev);
@@ -873,6 +895,8 @@ err_events_cleanup:
mlx5_events_cleanup(dev);
err_eq_cleanup:
mlx5_eq_table_cleanup(dev);
+err_irq_cleanup:
+ mlx5_irq_table_cleanup(dev);
err_devcom:
mlx5_devcom_unregister_device(dev->priv.devcom);
@@ -883,10 +907,11 @@ static void mlx5_cleanup_once(struct mlx5_core_dev *dev)
{
mlx5_fw_tracer_destroy(dev->tracer);
mlx5_fpga_cleanup(dev);
- mlx5_sriov_cleanup(dev);
mlx5_eswitch_cleanup(dev->priv.eswitch);
+ mlx5_sriov_cleanup(dev);
mlx5_mpfs_cleanup(dev);
mlx5_cleanup_rl_table(dev);
+ mlx5_geneve_destroy(dev->geneve);
mlx5_vxlan_destroy(dev->vxlan);
mlx5_cleanup_clock(dev);
mlx5_cleanup_reserved_gids(dev);
@@ -895,6 +920,7 @@ static void mlx5_cleanup_once(struct mlx5_core_dev *dev)
mlx5_cq_debugfs_cleanup(dev);
mlx5_events_cleanup(dev);
mlx5_eq_table_cleanup(dev);
+ mlx5_irq_table_cleanup(dev);
mlx5_devcom_unregister_device(dev->priv.devcom);
}
@@ -911,7 +937,7 @@ static int mlx5_function_setup(struct mlx5_core_dev *dev, bool boot)
/* wait for firmware to accept the initialization segment configuration
 */
- err = wait_fw_init(dev, FW_PRE_INIT_TIMEOUT_MILI);
+ err = wait_fw_init(dev, FW_PRE_INIT_TIMEOUT_MILI, FW_INIT_WARN_MESSAGE_INTERVAL);
if (err) {
mlx5_core_err(dev, "Firmware over %d MS in pre-initializing state, aborting\n",
FW_PRE_INIT_TIMEOUT_MILI);
@@ -924,7 +950,7 @@ static int mlx5_function_setup(struct mlx5_core_dev *dev, bool boot)
return err;
}
- err = wait_fw_init(dev, FW_INIT_TIMEOUT_MILI);
+ err = wait_fw_init(dev, FW_INIT_TIMEOUT_MILI, 0);
if (err) {
mlx5_core_err(dev, "Firmware over %d MS in initializing state, aborting\n",
FW_INIT_TIMEOUT_MILI);
@@ -1028,6 +1054,12 @@ static int mlx5_load(struct mlx5_core_dev *dev)
mlx5_events_start(dev);
mlx5_pagealloc_start(dev);
+ err = mlx5_irq_table_create(dev);
+ if (err) {
+ mlx5_core_err(dev, "Failed to alloc IRQs\n");
+ goto err_irq_table;
+ }
+
err = mlx5_eq_table_create(dev);
if (err) {
mlx5_core_err(dev, "Failed to create EQs\n");
@@ -1099,6 +1131,8 @@ err_fpga_start:
err_fw_tracer:
mlx5_eq_table_destroy(dev);
err_eq_table:
+ mlx5_irq_table_destroy(dev);
+err_irq_table:
mlx5_pagealloc_stop(dev);
mlx5_events_stop(dev);
mlx5_put_uars_page(dev, dev->priv.uar);
@@ -1115,6 +1149,7 @@ static void mlx5_unload(struct mlx5_core_dev *dev)
mlx5_fpga_device_stop(dev);
mlx5_fw_tracer_cleanup(dev->tracer);
mlx5_eq_table_destroy(dev);
+ mlx5_irq_table_destroy(dev);
mlx5_pagealloc_stop(dev);
mlx5_events_stop(dev);
mlx5_put_uars_page(dev, dev->priv.uar);
@@ -1183,7 +1218,7 @@ static int mlx5_unload_one(struct mlx5_core_dev *dev, bool cleanup)
int err = 0;
if (cleanup)
- mlx5_drain_health_recovery(dev);
+ mlx5_drain_health_wq(dev);
mutex_lock(&dev->intf_state_mutex);
if (!test_bit(MLX5_INTERFACE_STATE_UP, &dev->intf_state)) {
@@ -1210,17 +1245,6 @@ out:
return err;
}
-static const struct devlink_ops mlx5_devlink_ops = {
-#ifdef CONFIG_MLX5_ESWITCH
- .eswitch_mode_set = mlx5_devlink_eswitch_mode_set,
- .eswitch_mode_get = mlx5_devlink_eswitch_mode_get,
- .eswitch_inline_mode_set = mlx5_devlink_eswitch_inline_mode_set,
- .eswitch_inline_mode_get = mlx5_devlink_eswitch_inline_mode_get,
- .eswitch_encap_mode_set = mlx5_devlink_eswitch_encap_mode_set,
- .eswitch_encap_mode_get = mlx5_devlink_eswitch_encap_mode_get,
-#endif
-};
-
static int mlx5_mdev_init(struct mlx5_core_dev *dev, int profile_idx)
{
struct mlx5_priv *priv = &dev->priv;
@@ -1230,7 +1254,6 @@ static int mlx5_mdev_init(struct mlx5_core_dev *dev, int profile_idx)
INIT_LIST_HEAD(&priv->ctx_list);
spin_lock_init(&priv->ctx_lock);
- mutex_init(&dev->pci_status_mutex);
mutex_init(&dev->intf_state_mutex);
mutex_init(&priv->bfregs.reg_head.lock);
@@ -1282,9 +1305,9 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *id)
struct devlink *devlink;
int err;
- devlink = devlink_alloc(&mlx5_devlink_ops, sizeof(*dev));
+ devlink = mlx5_devlink_alloc();
if (!devlink) {
- dev_err(&pdev->dev, "kzalloc failed\n");
+ dev_err(&pdev->dev, "devlink alloc failed\n");
return -ENOMEM;
}
@@ -1292,6 +1315,9 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *id)
dev->device = &pdev->dev;
dev->pdev = pdev;
+ dev->coredev_type = id->driver_data & MLX5_PCI_DEV_IS_VF ?
+ MLX5_COREDEV_VF : MLX5_COREDEV_PF;
+
err = mlx5_mdev_init(dev, prof_sel);
if (err)
goto mdev_init_err;
@@ -1312,10 +1338,14 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *id)
request_module_nowait(MLX5_IB_MOD);
- err = devlink_register(devlink, &pdev->dev);
+ err = mlx5_devlink_register(devlink, &pdev->dev);
if (err)
goto clean_load;
+ err = mlx5_crdump_enable(dev);
+ if (err)
+ dev_err(&pdev->dev, "mlx5_crdump_enable failed with error code %d\n", err);
+
pci_save_state(pdev);
return 0;
@@ -1327,7 +1357,7 @@ err_load_one:
pci_init_err:
mlx5_mdev_uninit(dev);
mdev_init_err:
- devlink_free(devlink);
+ mlx5_devlink_free(devlink);
return err;
}
@@ -1337,7 +1367,8 @@ static void remove_one(struct pci_dev *pdev)
struct mlx5_core_dev *dev = pci_get_drvdata(pdev);
struct devlink *devlink = priv_to_devlink(dev);
- devlink_unregister(devlink);
+ mlx5_crdump_disable(dev);
+ mlx5_devlink_unregister(devlink);
mlx5_unregister_device(dev);
if (mlx5_unload_one(dev, true)) {
@@ -1348,7 +1379,7 @@ static void remove_one(struct pci_dev *pdev)
mlx5_pci_close(dev);
mlx5_mdev_uninit(dev);
- devlink_free(devlink);
+ mlx5_devlink_free(devlink);
}
static pci_ers_result_t mlx5_pci_err_detected(struct pci_dev *pdev,
@@ -1359,12 +1390,10 @@ static pci_ers_result_t mlx5_pci_err_detected(struct pci_dev *pdev,
mlx5_core_info(dev, "%s was called\n", __func__);
mlx5_enter_error_state(dev, false);
+ mlx5_error_sw_reset(dev);
mlx5_unload_one(dev, false);
- /* In case of kernel call drain the health wq */
- if (state) {
- mlx5_drain_health_wq(dev);
- mlx5_pci_disable_device(dev);
- }
+ mlx5_drain_health_wq(dev);
+ mlx5_pci_disable_device(dev);
return state == pci_channel_io_perm_failure ?
PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_NEED_RESET;
@@ -1532,7 +1561,8 @@ MODULE_DEVICE_TABLE(pci, mlx5_core_pci_table);
void mlx5_disable_device(struct mlx5_core_dev *dev)
{
- mlx5_pci_err_detected(dev->pdev, 0);
+ mlx5_error_sw_reset(dev);
+ mlx5_unload_one(dev, false);
}
void mlx5_recover_device(struct mlx5_core_dev *dev)
@@ -1570,7 +1600,7 @@ static int __init init(void)
get_random_bytes(&sw_owner_id, sizeof(sw_owner_id));
mlx5_core_verify_params();
- mlx5_fpga_ipsec_build_fs_cmds();
+ mlx5_accel_ipsec_build_fs_cmds();
mlx5_register_debugfs();
err = pci_register_driver(&mlx5_core_driver);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
index 22e69d4813e4..471bbc48bc1f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
@@ -111,6 +111,11 @@ enum {
MLX5_DRIVER_SYND = 0xbadd00de,
};
+enum mlx5_semaphore_space_address {
+ MLX5_SEMAPHORE_SPACE_DOMAIN = 0xA,
+ MLX5_SEMAPHORE_SW_RESET = 0x20,
+};
+
int mlx5_query_hca_caps(struct mlx5_core_dev *dev);
int mlx5_query_board_id(struct mlx5_core_dev *dev);
int mlx5_cmd_init_hca(struct mlx5_core_dev *dev, uint32_t *sw_owner_id);
@@ -118,6 +123,7 @@ int mlx5_cmd_teardown_hca(struct mlx5_core_dev *dev);
int mlx5_cmd_force_teardown_hca(struct mlx5_core_dev *dev);
int mlx5_cmd_fast_teardown_hca(struct mlx5_core_dev *dev);
void mlx5_enter_error_state(struct mlx5_core_dev *dev, bool force);
+void mlx5_error_sw_reset(struct mlx5_core_dev *dev);
void mlx5_disable_device(struct mlx5_core_dev *dev);
void mlx5_recover_device(struct mlx5_core_dev *dev);
int mlx5_sriov_init(struct mlx5_core_dev *dev);
@@ -153,6 +159,19 @@ int mlx5_query_qcam_reg(struct mlx5_core_dev *mdev, u32 *qcam,
void mlx5_lag_add(struct mlx5_core_dev *dev, struct net_device *netdev);
void mlx5_lag_remove(struct mlx5_core_dev *dev);
+int mlx5_irq_table_init(struct mlx5_core_dev *dev);
+void mlx5_irq_table_cleanup(struct mlx5_core_dev *dev);
+int mlx5_irq_table_create(struct mlx5_core_dev *dev);
+void mlx5_irq_table_destroy(struct mlx5_core_dev *dev);
+int mlx5_irq_attach_nb(struct mlx5_irq_table *irq_table, int vecidx,
+ struct notifier_block *nb);
+int mlx5_irq_detach_nb(struct mlx5_irq_table *irq_table, int vecidx,
+ struct notifier_block *nb);
+struct cpumask *
+mlx5_irq_get_affinity_mask(struct mlx5_irq_table *irq_table, int vecidx);
+struct cpu_rmap *mlx5_irq_get_rmap(struct mlx5_irq_table *table);
+int mlx5_irq_get_num_comp(struct mlx5_irq_table *table);
+
int mlx5_events_init(struct mlx5_core_dev *dev);
void mlx5_events_cleanup(struct mlx5_core_dev *dev);
void mlx5_events_start(struct mlx5_core_dev *dev);
@@ -184,7 +203,10 @@ int mlx5_set_mtppse(struct mlx5_core_dev *mdev, u8 pin, u8 arm, u8 mode);
MLX5_CAP_MCAM_FEATURE((mdev), mtpps_fs) && \
MLX5_CAP_MCAM_FEATURE((mdev), mtpps_enh_out_per_adj))
-int mlx5_firmware_flash(struct mlx5_core_dev *dev, const struct firmware *fw);
+int mlx5_firmware_flash(struct mlx5_core_dev *dev, const struct firmware *fw,
+ struct netlink_ext_ack *extack);
+int mlx5_fw_version_query(struct mlx5_core_dev *dev,
+ u32 *running_ver, u32 *stored_ver);
void mlx5e_init(void);
void mlx5e_cleanup(void);
@@ -213,7 +235,7 @@ enum {
MLX5_NIC_IFC_FULL = 0,
MLX5_NIC_IFC_DISABLED = 1,
MLX5_NIC_IFC_NO_DRAM_NIC = 2,
- MLX5_NIC_IFC_INVALID = 3
+ MLX5_NIC_IFC_SW_RESET = 7
};
u8 mlx5_get_nic_state(struct mlx5_core_dev *dev);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/mr.c b/drivers/net/ethernet/mellanox/mlx5/core/mr.c
index ea744d8466ea..9231b39d18b2 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/mr.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/mr.c
@@ -38,15 +38,12 @@
void mlx5_init_mkey_table(struct mlx5_core_dev *dev)
{
- struct mlx5_mkey_table *table = &dev->priv.mkey_table;
-
- memset(table, 0, sizeof(*table));
- rwlock_init(&table->lock);
- INIT_RADIX_TREE(&table->tree, GFP_ATOMIC);
+ xa_init_flags(&dev->priv.mkey_table, XA_FLAGS_LOCK_IRQ);
}
void mlx5_cleanup_mkey_table(struct mlx5_core_dev *dev)
{
+ WARN_ON(!xa_empty(&dev->priv.mkey_table));
}
int mlx5_core_create_mkey_cb(struct mlx5_core_dev *dev,
@@ -56,8 +53,8 @@ int mlx5_core_create_mkey_cb(struct mlx5_core_dev *dev,
mlx5_async_cbk_t callback,
struct mlx5_async_work *context)
{
- struct mlx5_mkey_table *table = &dev->priv.mkey_table;
u32 lout[MLX5_ST_SZ_DW(create_mkey_out)] = {0};
+ struct xarray *mkeys = &dev->priv.mkey_table;
u32 mkey_index;
void *mkc;
int err;
@@ -88,12 +85,10 @@ int mlx5_core_create_mkey_cb(struct mlx5_core_dev *dev,
mlx5_core_dbg(dev, "out 0x%x, key 0x%x, mkey 0x%x\n",
mkey_index, key, mkey->key);
- /* connect to mkey tree */
- write_lock_irq(&table->lock);
- err = radix_tree_insert(&table->tree, mlx5_base_mkey(mkey->key), mkey);
- write_unlock_irq(&table->lock);
+ err = xa_err(xa_store_irq(mkeys, mlx5_base_mkey(mkey->key), mkey,
+ GFP_KERNEL));
if (err) {
- mlx5_core_warn(dev, "failed radix tree insert of mkey 0x%x, %d\n",
+ mlx5_core_warn(dev, "failed xarray insert of mkey 0x%x, %d\n",
mlx5_base_mkey(mkey->key), err);
mlx5_core_destroy_mkey(dev, mkey);
}
@@ -114,17 +109,17 @@ EXPORT_SYMBOL(mlx5_core_create_mkey);
int mlx5_core_destroy_mkey(struct mlx5_core_dev *dev,
struct mlx5_core_mkey *mkey)
{
- struct mlx5_mkey_table *table = &dev->priv.mkey_table;
u32 out[MLX5_ST_SZ_DW(destroy_mkey_out)] = {0};
u32 in[MLX5_ST_SZ_DW(destroy_mkey_in)] = {0};
+ struct xarray *mkeys = &dev->priv.mkey_table;
struct mlx5_core_mkey *deleted_mkey;
unsigned long flags;
- write_lock_irqsave(&table->lock, flags);
- deleted_mkey = radix_tree_delete(&table->tree, mlx5_base_mkey(mkey->key));
- write_unlock_irqrestore(&table->lock, flags);
+ xa_lock_irqsave(mkeys, flags);
+ deleted_mkey = __xa_erase(mkeys, mlx5_base_mkey(mkey->key));
+ xa_unlock_irqrestore(mkeys, flags);
if (!deleted_mkey) {
- mlx5_core_dbg(dev, "failed radix tree delete of mkey 0x%x\n",
+ mlx5_core_dbg(dev, "failed xarray delete of mkey 0x%x\n",
mlx5_base_mkey(mkey->key));
return -ENOENT;
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
new file mode 100644
index 000000000000..373981a659c7
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
@@ -0,0 +1,334 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+/* Copyright (c) 2019 Mellanox Technologies. */
+
+#include <linux/interrupt.h>
+#include <linux/notifier.h>
+#include <linux/module.h>
+#include <linux/mlx5/driver.h>
+#include "mlx5_core.h"
+#ifdef CONFIG_RFS_ACCEL
+#include <linux/cpu_rmap.h>
+#endif
+
+#define MLX5_MAX_IRQ_NAME (32)
+
+struct mlx5_irq {
+ struct atomic_notifier_head nh;
+ cpumask_var_t mask;
+ char name[MLX5_MAX_IRQ_NAME];
+};
+
+struct mlx5_irq_table {
+ struct mlx5_irq *irq;
+ int nvec;
+#ifdef CONFIG_RFS_ACCEL
+ struct cpu_rmap *rmap;
+#endif
+};
+
+int mlx5_irq_table_init(struct mlx5_core_dev *dev)
+{
+ struct mlx5_irq_table *irq_table;
+
+ irq_table = kvzalloc(sizeof(*irq_table), GFP_KERNEL);
+ if (!irq_table)
+ return -ENOMEM;
+
+ dev->priv.irq_table = irq_table;
+ return 0;
+}
+
+void mlx5_irq_table_cleanup(struct mlx5_core_dev *dev)
+{
+ kvfree(dev->priv.irq_table);
+}
+
+int mlx5_irq_get_num_comp(struct mlx5_irq_table *table)
+{
+ return table->nvec - MLX5_IRQ_VEC_COMP_BASE;
+}
+
+static struct mlx5_irq *mlx5_irq_get(struct mlx5_core_dev *dev, int vecidx)
+{
+ struct mlx5_irq_table *irq_table = dev->priv.irq_table;
+
+ return &irq_table->irq[vecidx];
+}
+
+int mlx5_irq_attach_nb(struct mlx5_irq_table *irq_table, int vecidx,
+ struct notifier_block *nb)
+{
+ struct mlx5_irq *irq;
+
+ irq = &irq_table->irq[vecidx];
+ return atomic_notifier_chain_register(&irq->nh, nb);
+}
+
+int mlx5_irq_detach_nb(struct mlx5_irq_table *irq_table, int vecidx,
+ struct notifier_block *nb)
+{
+ struct mlx5_irq *irq;
+
+ irq = &irq_table->irq[vecidx];
+ return atomic_notifier_chain_unregister(&irq->nh, nb);
+}
+
+static irqreturn_t mlx5_irq_int_handler(int irq, void *nh)
+{
+ atomic_notifier_call_chain(nh, 0, NULL);
+ return IRQ_HANDLED;
+}
+
+static void irq_set_name(char *name, int vecidx)
+{
+ if (vecidx == 0) {
+ snprintf(name, MLX5_MAX_IRQ_NAME, "mlx5_async");
+ return;
+ }
+
+ snprintf(name, MLX5_MAX_IRQ_NAME, "mlx5_comp%d",
+ vecidx - MLX5_IRQ_VEC_COMP_BASE);
+ return;
+}
+
+static int request_irqs(struct mlx5_core_dev *dev, int nvec)
+{
+ char name[MLX5_MAX_IRQ_NAME];
+ int err;
+ int i;
+
+ for (i = 0; i < nvec; i++) {
+ struct mlx5_irq *irq = mlx5_irq_get(dev, i);
+ int irqn = pci_irq_vector(dev->pdev, i);
+
+ irq_set_name(name, i);
+ ATOMIC_INIT_NOTIFIER_HEAD(&irq->nh);
+ snprintf(irq->name, MLX5_MAX_IRQ_NAME,
+ "%s@pci:%s", name, pci_name(dev->pdev));
+ err = request_irq(irqn, mlx5_irq_int_handler, 0, irq->name,
+ &irq->nh);
+ if (err) {
+ mlx5_core_err(dev, "Failed to request irq\n");
+ goto err_request_irq;
+ }
+ }
+ return 0;
+
+err_request_irq:
+ for (; i >= 0; i--) {
+ struct mlx5_irq *irq = mlx5_irq_get(dev, i);
+ int irqn = pci_irq_vector(dev->pdev, i);
+
+ free_irq(irqn, &irq->nh);
+ }
+ return err;
+}
+
+static void irq_clear_rmap(struct mlx5_core_dev *dev)
+{
+#ifdef CONFIG_RFS_ACCEL
+ struct mlx5_irq_table *irq_table = dev->priv.irq_table;
+
+ free_irq_cpu_rmap(irq_table->rmap);
+#endif
+}
+
+static int irq_set_rmap(struct mlx5_core_dev *mdev)
+{
+ int err = 0;
+#ifdef CONFIG_RFS_ACCEL
+ struct mlx5_irq_table *irq_table = mdev->priv.irq_table;
+ int num_affinity_vec;
+ int vecidx;
+
+ num_affinity_vec = mlx5_irq_get_num_comp(irq_table);
+ irq_table->rmap = alloc_irq_cpu_rmap(num_affinity_vec);
+ if (!irq_table->rmap) {
+ err = -ENOMEM;
+ mlx5_core_err(mdev, "Failed to allocate cpu_rmap. err %d", err);
+ goto err_out;
+ }
+
+ vecidx = MLX5_IRQ_VEC_COMP_BASE;
+ for (; vecidx < irq_table->nvec; vecidx++) {
+ err = irq_cpu_rmap_add(irq_table->rmap,
+ pci_irq_vector(mdev->pdev, vecidx));
+ if (err) {
+ mlx5_core_err(mdev, "irq_cpu_rmap_add failed. err %d",
+ err);
+ goto err_irq_cpu_rmap_add;
+ }
+ }
+ return 0;
+
+err_irq_cpu_rmap_add:
+ irq_clear_rmap(mdev);
+err_out:
+#endif
+ return err;
+}
+
+/* Completion IRQ vectors */
+
+static int set_comp_irq_affinity_hint(struct mlx5_core_dev *mdev, int i)
+{
+ int vecidx = MLX5_IRQ_VEC_COMP_BASE + i;
+ struct mlx5_irq *irq;
+ int irqn;
+
+ irq = mlx5_irq_get(mdev, vecidx);
+ irqn = pci_irq_vector(mdev->pdev, vecidx);
+ if (!zalloc_cpumask_var(&irq->mask, GFP_KERNEL)) {
+ mlx5_core_warn(mdev, "zalloc_cpumask_var failed");
+ return -ENOMEM;
+ }
+
+ cpumask_set_cpu(cpumask_local_spread(i, mdev->priv.numa_node),
+ irq->mask);
+ if (IS_ENABLED(CONFIG_SMP) &&
+ irq_set_affinity_hint(irqn, irq->mask))
+ mlx5_core_warn(mdev, "irq_set_affinity_hint failed, irq 0x%.4x",
+ irqn);
+
+ return 0;
+}
+
+static void clear_comp_irq_affinity_hint(struct mlx5_core_dev *mdev, int i)
+{
+ int vecidx = MLX5_IRQ_VEC_COMP_BASE + i;
+ struct mlx5_irq *irq;
+ int irqn;
+
+ irq = mlx5_irq_get(mdev, vecidx);
+ irqn = pci_irq_vector(mdev->pdev, vecidx);
+ irq_set_affinity_hint(irqn, NULL);
+ free_cpumask_var(irq->mask);
+}
+
+static int set_comp_irq_affinity_hints(struct mlx5_core_dev *mdev)
+{
+ int nvec = mlx5_irq_get_num_comp(mdev->priv.irq_table);
+ int err;
+ int i;
+
+ for (i = 0; i < nvec; i++) {
+ err = set_comp_irq_affinity_hint(mdev, i);
+ if (err)
+ goto err_out;
+ }
+
+ return 0;
+
+err_out:
+ for (i--; i >= 0; i--)
+ clear_comp_irq_affinity_hint(mdev, i);
+
+ return err;
+}
+
+static void clear_comp_irqs_affinity_hints(struct mlx5_core_dev *mdev)
+{
+ int nvec = mlx5_irq_get_num_comp(mdev->priv.irq_table);
+ int i;
+
+ for (i = 0; i < nvec; i++)
+ clear_comp_irq_affinity_hint(mdev, i);
+}
+
+struct cpumask *
+mlx5_irq_get_affinity_mask(struct mlx5_irq_table *irq_table, int vecidx)
+{
+ return irq_table->irq[vecidx].mask;
+}
+
+#ifdef CONFIG_RFS_ACCEL
+struct cpu_rmap *mlx5_irq_get_rmap(struct mlx5_irq_table *irq_table)
+{
+ return irq_table->rmap;
+}
+#endif
+
+static void unrequest_irqs(struct mlx5_core_dev *dev)
+{
+ struct mlx5_irq_table *table = dev->priv.irq_table;
+ int i;
+
+ for (i = 0; i < table->nvec; i++)
+ free_irq(pci_irq_vector(dev->pdev, i),
+ &mlx5_irq_get(dev, i)->nh);
+}
+
+int mlx5_irq_table_create(struct mlx5_core_dev *dev)
+{
+ struct mlx5_priv *priv = &dev->priv;
+ struct mlx5_irq_table *table = priv->irq_table;
+ int num_eqs = MLX5_CAP_GEN(dev, max_num_eqs) ?
+ MLX5_CAP_GEN(dev, max_num_eqs) :
+ 1 << MLX5_CAP_GEN(dev, log_max_eq);
+ int nvec;
+ int err;
+
+ nvec = MLX5_CAP_GEN(dev, num_ports) * num_online_cpus() +
+ MLX5_IRQ_VEC_COMP_BASE;
+ nvec = min_t(int, nvec, num_eqs);
+ if (nvec <= MLX5_IRQ_VEC_COMP_BASE)
+ return -ENOMEM;
+
+ table->irq = kcalloc(nvec, sizeof(*table->irq), GFP_KERNEL);
+ if (!table->irq)
+ return -ENOMEM;
+
+ nvec = pci_alloc_irq_vectors(dev->pdev, MLX5_IRQ_VEC_COMP_BASE + 1,
+ nvec, PCI_IRQ_MSIX);
+ if (nvec < 0) {
+ err = nvec;
+ goto err_free_irq;
+ }
+
+ table->nvec = nvec;
+
+ err = irq_set_rmap(dev);
+ if (err)
+ goto err_set_rmap;
+
+ err = request_irqs(dev, nvec);
+ if (err)
+ goto err_request_irqs;
+
+ err = set_comp_irq_affinity_hints(dev);
+ if (err) {
+ mlx5_core_err(dev, "Failed to alloc affinity hint cpumask\n");
+ goto err_set_affinity;
+ }
+
+ return 0;
+
+err_set_affinity:
+ unrequest_irqs(dev);
+err_request_irqs:
+ irq_clear_rmap(dev);
+err_set_rmap:
+ pci_free_irq_vectors(dev->pdev);
+err_free_irq:
+ kfree(table->irq);
+ return err;
+}
+
+void mlx5_irq_table_destroy(struct mlx5_core_dev *dev)
+{
+ struct mlx5_irq_table *table = dev->priv.irq_table;
+ int i;
+
+ /* free_irq() requires that the affinity hint and rmap be cleared
+ * before it is called. This is why set_rmap is asymmetric: it is
+ * called after alloc_irq but before request_irq.
+ */
+ irq_clear_rmap(dev);
+ clear_comp_irqs_affinity_hints(dev);
+ for (i = 0; i < table->nvec; i++)
+ free_irq(pci_irq_vector(dev->pdev, i),
+ &mlx5_irq_get(dev, i)->nh);
+ pci_free_irq_vectors(dev->pdev);
+ kfree(table->irq);
+}
+
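
A hedged sketch of how a consumer attaches a notifier to one of the vectors managed above, mirroring what the EQ code does with its irq_nb members; the handler and helper names are made up, and vector 0 is the async vector per irq_set_name().

/* Hypothetical consumer of the notifier API above. */
static int example_irq_notifier(struct notifier_block *nb,
				unsigned long action, void *data)
{
	/* invoked from mlx5_irq_int_handler() via the atomic notifier chain */
	return NOTIFY_OK;
}

static int example_attach_async_nb(struct mlx5_core_dev *dev,
				   struct notifier_block *nb)
{
	nb->notifier_call = example_irq_notifier;
	return mlx5_irq_attach_nb(dev->priv.irq_table, 0, nb);
}

static void example_detach_async_nb(struct mlx5_core_dev *dev,
				    struct notifier_block *nb)
{
	mlx5_irq_detach_nb(dev->priv.irq_table, 0, nb);
}
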
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/rdma.c b/drivers/net/ethernet/mellanox/mlx5/core/rdma.c
index 86f77456f873..17ce9dd56b13 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/rdma.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/rdma.c
@@ -106,10 +106,10 @@ static int mlx5_rdma_enable_roce_steering(struct mlx5_core_dev *dev)
return 0;
-destroy_flow_table:
- mlx5_destroy_flow_table(ft);
destroy_flow_group:
mlx5_destroy_flow_group(fg);
+destroy_flow_table:
+ mlx5_destroy_flow_table(ft);
free:
kvfree(spec);
kvfree(flow_group_in);
@@ -126,7 +126,7 @@ static void mlx5_rdma_make_default_gid(struct mlx5_core_dev *dev, union ib_gid *
{
u8 hw_id[ETH_ALEN];
- mlx5_query_nic_vport_mac_address(dev, 0, hw_id);
+ mlx5_query_mac_address(dev, hw_id);
gid->global.subnet_prefix = cpu_to_be64(0xfe80000000000000LL);
addrconf_addr_eui48(&gid->raw[8], hw_id);
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/sriov.c b/drivers/net/ethernet/mellanox/mlx5/core/sriov.c
index a249b3c3843d..61fcfd8b39b4 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/sriov.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/sriov.c
@@ -74,17 +74,11 @@ static int mlx5_device_enable_sriov(struct mlx5_core_dev *dev, int num_vfs)
int err;
int vf;
- if (sriov->enabled_vfs) {
- mlx5_core_warn(dev,
- "failed to enable SRIOV on device, already enabled with %d vfs\n",
- sriov->enabled_vfs);
- return -EBUSY;
- }
-
if (!MLX5_ESWITCH_MANAGER(dev))
goto enable_vfs_hca;
- err = mlx5_eswitch_enable_sriov(dev->priv.eswitch, num_vfs, SRIOV_LEGACY);
+ mlx5_eswitch_update_num_of_vfs(dev->priv.eswitch, num_vfs);
+ err = mlx5_eswitch_enable(dev->priv.eswitch, MLX5_ESWITCH_LEGACY);
if (err) {
mlx5_core_warn(dev,
"failed to enable eswitch SRIOV (%d)\n", err);
@@ -99,7 +93,6 @@ enable_vfs_hca:
continue;
}
sriov->vfs_ctx[vf].enabled = 1;
- sriov->enabled_vfs++;
if (MLX5_CAP_GEN(dev, port_type) == MLX5_CAP_PORT_TYPE_IB) {
err = sriov_restore_guids(dev, vf);
if (err) {
@@ -118,13 +111,11 @@ enable_vfs_hca:
static void mlx5_device_disable_sriov(struct mlx5_core_dev *dev)
{
struct mlx5_core_sriov *sriov = &dev->priv.sriov;
+ int num_vfs = pci_num_vf(dev->pdev);
int err;
int vf;
- if (!sriov->enabled_vfs)
- goto out;
-
- for (vf = 0; vf < sriov->num_vfs; vf++) {
+ for (vf = num_vfs - 1; vf >= 0; vf--) {
if (!sriov->vfs_ctx[vf].enabled)
continue;
err = mlx5_core_disable_hca(dev, vf + 1);
@@ -133,12 +124,10 @@ static void mlx5_device_disable_sriov(struct mlx5_core_dev *dev)
continue;
}
sriov->vfs_ctx[vf].enabled = 0;
- sriov->enabled_vfs--;
}
-out:
if (MLX5_ESWITCH_MANAGER(dev))
- mlx5_eswitch_disable_sriov(dev->priv.eswitch);
+ mlx5_eswitch_disable(dev->priv.eswitch);
if (mlx5_wait_for_pages(dev, &dev->priv.vfs_pages))
mlx5_core_warn(dev, "timeout reclaiming VFs pages\n");
@@ -191,13 +180,11 @@ int mlx5_core_sriov_configure(struct pci_dev *pdev, int num_vfs)
int mlx5_sriov_attach(struct mlx5_core_dev *dev)
{
- struct mlx5_core_sriov *sriov = &dev->priv.sriov;
-
- if (!mlx5_core_is_pf(dev) || !sriov->num_vfs)
+ if (!mlx5_core_is_pf(dev) || !pci_num_vf(dev->pdev))
return 0;
/* If SR-IOV VFs exist at the PCI level, enable them at the device level */
- return mlx5_device_enable_sriov(dev, sriov->num_vfs);
+ return mlx5_device_enable_sriov(dev, pci_num_vf(dev->pdev));
}
void mlx5_sriov_detach(struct mlx5_core_dev *dev)
@@ -208,6 +195,30 @@ void mlx5_sriov_detach(struct mlx5_core_dev *dev)
mlx5_device_disable_sriov(dev);
}
+static u16 mlx5_get_max_vfs(struct mlx5_core_dev *dev)
+{
+ u16 host_total_vfs;
+ const u32 *out;
+
+ if (mlx5_core_is_ecpf_esw_manager(dev)) {
+ out = mlx5_esw_query_functions(dev);
+
+ /* Old FW doesn't support getting total_vfs from esw func
+ * but supports getting it from pci_sriov.
+ */
+ if (IS_ERR(out))
+ goto done;
+ host_total_vfs = MLX5_GET(query_esw_functions_out, out,
+ host_params_context.host_total_vfs);
+ kvfree(out);
+ if (host_total_vfs)
+ return host_total_vfs;
+ }
+
+done:
+ return pci_sriov_get_totalvfs(dev->pdev);
+}
+
int mlx5_sriov_init(struct mlx5_core_dev *dev)
{
struct mlx5_core_sriov *sriov = &dev->priv.sriov;
@@ -218,6 +229,7 @@ int mlx5_sriov_init(struct mlx5_core_dev *dev)
return 0;
total_vfs = pci_sriov_get_totalvfs(pdev);
+ sriov->max_vfs = mlx5_get_max_vfs(dev);
sriov->num_vfs = pci_num_vf(pdev);
sriov->vfs_ctx = kcalloc(total_vfs, sizeof(*sriov->vfs_ctx), GFP_KERNEL);
if (!sriov->vfs_ctx)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/vport.c b/drivers/net/ethernet/mellanox/mlx5/core/vport.c
index 95cdc8cbcba4..c912d82ca64b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/vport.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/vport.c
@@ -34,6 +34,7 @@
#include <linux/etherdevice.h>
#include <linux/mlx5/driver.h>
#include <linux/mlx5/vport.h>
+#include <linux/mlx5/eswitch.h>
#include "mlx5_core.h"
/* Mutex to hold while enabling or disabling RoCE */
@@ -155,11 +156,12 @@ int mlx5_modify_nic_vport_min_inline(struct mlx5_core_dev *mdev,
}
int mlx5_query_nic_vport_mac_address(struct mlx5_core_dev *mdev,
- u16 vport, u8 *addr)
+ u16 vport, bool other, u8 *addr)
{
- u32 *out;
int outlen = MLX5_ST_SZ_BYTES(query_nic_vport_context_out);
+ u32 in[MLX5_ST_SZ_DW(query_nic_vport_context_in)] = {};
u8 *out_addr;
+ u32 *out;
int err;
out = kvzalloc(outlen, GFP_KERNEL);
@@ -169,7 +171,12 @@ int mlx5_query_nic_vport_mac_address(struct mlx5_core_dev *mdev,
out_addr = MLX5_ADDR_OF(query_nic_vport_context_out, out,
nic_vport_context.permanent_address);
- err = mlx5_query_nic_vport_context(mdev, vport, out, outlen);
+ MLX5_SET(query_nic_vport_context_in, in, opcode,
+ MLX5_CMD_OP_QUERY_NIC_VPORT_CONTEXT);
+ MLX5_SET(query_nic_vport_context_in, in, vport_number, vport);
+ MLX5_SET(query_nic_vport_context_in, in, other_vport, other);
+
+ err = mlx5_cmd_exec(mdev, in, sizeof(in), out, outlen);
if (!err)
ether_addr_copy(addr, &out_addr[2]);
@@ -178,6 +185,12 @@ int mlx5_query_nic_vport_mac_address(struct mlx5_core_dev *mdev,
}
EXPORT_SYMBOL_GPL(mlx5_query_nic_vport_mac_address);
+int mlx5_query_mac_address(struct mlx5_core_dev *mdev, u8 *addr)
+{
+ return mlx5_query_nic_vport_mac_address(mdev, 0, false, addr);
+}
+EXPORT_SYMBOL_GPL(mlx5_query_mac_address);
+
int mlx5_modify_nic_vport_mac_address(struct mlx5_core_dev *mdev,
u16 vport, u8 *addr)
{
@@ -194,9 +207,7 @@ int mlx5_modify_nic_vport_mac_address(struct mlx5_core_dev *mdev,
MLX5_SET(modify_nic_vport_context_in, in,
field_select.permanent_address, 1);
MLX5_SET(modify_nic_vport_context_in, in, vport_number, vport);
-
- if (vport)
- MLX5_SET(modify_nic_vport_context_in, in, other_vport, 1);
+ MLX5_SET(modify_nic_vport_context_in, in, other_vport, 1);
nic_vport_ctx = MLX5_ADDR_OF(modify_nic_vport_context_in,
in, nic_vport_context);
@@ -291,9 +302,7 @@ int mlx5_query_nic_vport_mac_list(struct mlx5_core_dev *dev,
MLX5_CMD_OP_QUERY_NIC_VPORT_CONTEXT);
MLX5_SET(query_nic_vport_context_in, in, allowed_list_type, list_type);
MLX5_SET(query_nic_vport_context_in, in, vport_number, vport);
-
- if (vport)
- MLX5_SET(query_nic_vport_context_in, in, other_vport, 1);
+ MLX5_SET(query_nic_vport_context_in, in, other_vport, 1);
err = mlx5_cmd_exec(dev, in, sizeof(in), out, out_sz);
if (err)
@@ -483,7 +492,7 @@ int mlx5_modify_nic_vport_node_guid(struct mlx5_core_dev *mdev,
MLX5_SET(modify_nic_vport_context_in, in,
field_select.node_guid, 1);
MLX5_SET(modify_nic_vport_context_in, in, vport_number, vport);
- MLX5_SET(modify_nic_vport_context_in, in, other_vport, !!vport);
+ MLX5_SET(modify_nic_vport_context_in, in, other_vport, 1);
nic_vport_context = MLX5_ADDR_OF(modify_nic_vport_context_in,
in, nic_vport_context);
@@ -1157,3 +1166,17 @@ u64 mlx5_query_nic_system_image_guid(struct mlx5_core_dev *mdev)
return tmp;
}
EXPORT_SYMBOL_GPL(mlx5_query_nic_system_image_guid);
+
+/**
+ * mlx5_eswitch_get_total_vports - Get total vports of the eswitch
+ *
+ * @dev: Pointer to core device
+ *
+ * mlx5_eswitch_get_total_vports returns total number of vports for
+ * the eswitch.
+ */
+u16 mlx5_eswitch_get_total_vports(const struct mlx5_core_dev *dev)
+{
+ return MLX5_SPECIAL_VPORTS(dev) + mlx5_core_max_vfs(dev);
+}
+EXPORT_SYMBOL(mlx5_eswitch_get_total_vports);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/wq.h b/drivers/net/ethernet/mellanox/mlx5/core/wq.h
index 1f87cce421e0..f1ec58c9e9e3 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/wq.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/wq.h
@@ -134,11 +134,6 @@ static inline void mlx5_wq_cyc_update_db_record(struct mlx5_wq_cyc *wq)
*wq->db = cpu_to_be32(wq->wqe_ctr);
}
-static inline u16 mlx5_wq_cyc_get_ctr_wrap_cnt(struct mlx5_wq_cyc *wq, u16 ctr)
-{
- return ctr >> wq->fbc.log_sz;
-}
-
static inline u16 mlx5_wq_cyc_ctr2ix(struct mlx5_wq_cyc *wq, u16 ctr)
{
return ctr & wq->fbc.sz_m1;
diff --git a/drivers/net/ethernet/mellanox/mlxfw/mlxfw.h b/drivers/net/ethernet/mellanox/mlxfw/mlxfw.h
index 14c0c62f8e73..c50e74ab02c4 100644
--- a/drivers/net/ethernet/mellanox/mlxfw/mlxfw.h
+++ b/drivers/net/ethernet/mellanox/mlxfw/mlxfw.h
@@ -5,6 +5,7 @@
#define _MLXFW_H
#include <linux/firmware.h>
+#include <linux/netlink.h>
enum mlxfw_fsm_state {
MLXFW_FSM_STATE_IDLE,
@@ -57,6 +58,10 @@ struct mlxfw_dev_ops {
void (*fsm_cancel)(struct mlxfw_dev *mlxfw_dev, u32 fwhandle);
void (*fsm_release)(struct mlxfw_dev *mlxfw_dev, u32 fwhandle);
+
+ void (*status_notify)(struct mlxfw_dev *mlxfw_dev,
+ const char *msg, const char *comp_name,
+ u32 done_bytes, u32 total_bytes);
};
struct mlxfw_dev {
@@ -67,11 +72,13 @@ struct mlxfw_dev {
#if IS_REACHABLE(CONFIG_MLXFW)
int mlxfw_firmware_flash(struct mlxfw_dev *mlxfw_dev,
- const struct firmware *firmware);
+ const struct firmware *firmware,
+ struct netlink_ext_ack *extack);
#else
static inline
int mlxfw_firmware_flash(struct mlxfw_dev *mlxfw_dev,
- const struct firmware *firmware)
+ const struct firmware *firmware,
+ struct netlink_ext_ack *extack)
{
return -EOPNOTSUPP;
}
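
A hedged sketch of the driver-side hookup for the new status_notify callback and extack argument; the names are hypothetical and the remaining mandatory ops are omitted for brevity.

/* Hypothetical mlxfw consumer wiring up flash status notifications. */
static void example_status_notify(struct mlxfw_dev *mlxfw_dev,
				  const char *msg, const char *comp_name,
				  u32 done_bytes, u32 total_bytes)
{
	/* e.g. forward the progress to devlink flash status notifications */
}

static const struct mlxfw_dev_ops example_ops = {
	/* .component_query, .fsm_lock, ... omitted */
	.status_notify	= example_status_notify,
};

static int example_flash(struct mlxfw_dev *mlxfw_dev,
			 const struct firmware *firmware,
			 struct netlink_ext_ack *extack)
{
	return mlxfw_firmware_flash(mlxfw_dev, firmware, extack);
}
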
diff --git a/drivers/net/ethernet/mellanox/mlxfw/mlxfw_fsm.c b/drivers/net/ethernet/mellanox/mlxfw/mlxfw_fsm.c
index 240c027e5f07..67990406cba2 100644
--- a/drivers/net/ethernet/mellanox/mlxfw/mlxfw_fsm.c
+++ b/drivers/net/ethernet/mellanox/mlxfw/mlxfw_fsm.c
@@ -39,8 +39,19 @@ static const char * const mlxfw_fsm_state_err_str[] = {
"unknown error"
};
+static void mlxfw_status_notify(struct mlxfw_dev *mlxfw_dev,
+ const char *msg, const char *comp_name,
+ u32 done_bytes, u32 total_bytes)
+{
+ if (!mlxfw_dev->ops->status_notify)
+ return;
+ mlxfw_dev->ops->status_notify(mlxfw_dev, msg, comp_name,
+ done_bytes, total_bytes);
+}
+
static int mlxfw_fsm_state_wait(struct mlxfw_dev *mlxfw_dev, u32 fwhandle,
- enum mlxfw_fsm_state fsm_state)
+ enum mlxfw_fsm_state fsm_state,
+ struct netlink_ext_ack *extack)
{
enum mlxfw_fsm_state_err fsm_state_err;
enum mlxfw_fsm_state curr_fsm_state;
@@ -57,11 +68,13 @@ retry:
if (fsm_state_err != MLXFW_FSM_STATE_ERR_OK) {
pr_err("Firmware flash failed: %s\n",
mlxfw_fsm_state_err_str[fsm_state_err]);
+ NL_SET_ERR_MSG_MOD(extack, "Firmware flash failed");
return -EINVAL;
}
if (curr_fsm_state != fsm_state) {
if (--times == 0) {
pr_err("Timeout reached on FSM state change");
+ NL_SET_ERR_MSG_MOD(extack, "Timeout reached on FSM state change");
return -ETIMEDOUT;
}
msleep(MLXFW_FSM_STATE_WAIT_CYCLE_MS);
@@ -76,16 +89,20 @@ retry:
static int mlxfw_flash_component(struct mlxfw_dev *mlxfw_dev,
u32 fwhandle,
- struct mlxfw_mfa2_component *comp)
+ struct mlxfw_mfa2_component *comp,
+ struct netlink_ext_ack *extack)
{
u16 comp_max_write_size;
u8 comp_align_bits;
u32 comp_max_size;
+ char comp_name[8];
u16 block_size;
u8 *block_ptr;
u32 offset;
int err;
+ sprintf(comp_name, "%u", comp->index);
+
err = mlxfw_dev->ops->component_query(mlxfw_dev, comp->index,
&comp_max_size, &comp_align_bits,
&comp_max_write_size);
@@ -96,6 +113,7 @@ static int mlxfw_flash_component(struct mlxfw_dev *mlxfw_dev,
if (comp->data_size > comp_max_size) {
pr_err("Component %d is of size %d which is bigger than limit %d\n",
comp->index, comp->data_size, comp_max_size);
+ NL_SET_ERR_MSG_MOD(extack, "Component is bigger than limit");
return -EINVAL;
}
@@ -103,6 +121,7 @@ static int mlxfw_flash_component(struct mlxfw_dev *mlxfw_dev,
comp_align_bits);
pr_debug("Component update\n");
+ mlxfw_status_notify(mlxfw_dev, "Updating component", comp_name, 0, 0);
err = mlxfw_dev->ops->fsm_component_update(mlxfw_dev, fwhandle,
comp->index,
comp->data_size);
@@ -110,11 +129,13 @@ static int mlxfw_flash_component(struct mlxfw_dev *mlxfw_dev,
return err;
err = mlxfw_fsm_state_wait(mlxfw_dev, fwhandle,
- MLXFW_FSM_STATE_DOWNLOAD);
+ MLXFW_FSM_STATE_DOWNLOAD, extack);
if (err)
goto err_out;
pr_debug("Component download\n");
+ mlxfw_status_notify(mlxfw_dev, "Downloading component",
+ comp_name, 0, comp->data_size);
for (offset = 0;
offset < MLXFW_ALIGN_UP(comp->data_size, comp_align_bits);
offset += comp_max_write_size) {
@@ -126,15 +147,20 @@ static int mlxfw_flash_component(struct mlxfw_dev *mlxfw_dev,
offset);
if (err)
goto err_out;
+ mlxfw_status_notify(mlxfw_dev, "Downloading component",
+ comp_name, offset + block_size,
+ comp->data_size);
}
pr_debug("Component verify\n");
+ mlxfw_status_notify(mlxfw_dev, "Verifying component", comp_name, 0, 0);
err = mlxfw_dev->ops->fsm_component_verify(mlxfw_dev, fwhandle,
comp->index);
if (err)
goto err_out;
- err = mlxfw_fsm_state_wait(mlxfw_dev, fwhandle, MLXFW_FSM_STATE_LOCKED);
+ err = mlxfw_fsm_state_wait(mlxfw_dev, fwhandle,
+ MLXFW_FSM_STATE_LOCKED, extack);
if (err)
goto err_out;
return 0;
@@ -145,7 +171,8 @@ err_out:
}
static int mlxfw_flash_components(struct mlxfw_dev *mlxfw_dev, u32 fwhandle,
- struct mlxfw_mfa2_file *mfa2_file)
+ struct mlxfw_mfa2_file *mfa2_file,
+ struct netlink_ext_ack *extack)
{
u32 component_count;
int err;
@@ -156,6 +183,7 @@ static int mlxfw_flash_components(struct mlxfw_dev *mlxfw_dev, u32 fwhandle,
&component_count);
if (err) {
pr_err("Could not find device PSID in MFA2 file\n");
+ NL_SET_ERR_MSG_MOD(extack, "Could not find device PSID in MFA2 file");
return err;
}
@@ -168,7 +196,7 @@ static int mlxfw_flash_components(struct mlxfw_dev *mlxfw_dev, u32 fwhandle,
return PTR_ERR(comp);
pr_info("Flashing component type %d\n", comp->index);
- err = mlxfw_flash_component(mlxfw_dev, fwhandle, comp);
+ err = mlxfw_flash_component(mlxfw_dev, fwhandle, comp, extack);
mlxfw_mfa2_file_component_put(comp);
if (err)
return err;
@@ -177,7 +205,8 @@ static int mlxfw_flash_components(struct mlxfw_dev *mlxfw_dev, u32 fwhandle,
}
int mlxfw_firmware_flash(struct mlxfw_dev *mlxfw_dev,
- const struct firmware *firmware)
+ const struct firmware *firmware,
+ struct netlink_ext_ack *extack)
{
struct mlxfw_mfa2_file *mfa2_file;
u32 fwhandle;
@@ -185,6 +214,7 @@ int mlxfw_firmware_flash(struct mlxfw_dev *mlxfw_dev,
if (!mlxfw_mfa2_check(firmware)) {
pr_err("Firmware file is not MFA2\n");
+ NL_SET_ERR_MSG_MOD(extack, "Firmware file is not MFA2");
return -EINVAL;
}
@@ -193,29 +223,35 @@ int mlxfw_firmware_flash(struct mlxfw_dev *mlxfw_dev,
return PTR_ERR(mfa2_file);
pr_info("Initialize firmware flash process\n");
+ mlxfw_status_notify(mlxfw_dev, "Initializing firmware flash process",
+ NULL, 0, 0);
err = mlxfw_dev->ops->fsm_lock(mlxfw_dev, &fwhandle);
if (err) {
pr_err("Could not lock the firmware FSM\n");
+ NL_SET_ERR_MSG_MOD(extack, "Could not lock the firmware FSM");
goto err_fsm_lock;
}
err = mlxfw_fsm_state_wait(mlxfw_dev, fwhandle,
- MLXFW_FSM_STATE_LOCKED);
+ MLXFW_FSM_STATE_LOCKED, extack);
if (err)
goto err_state_wait_idle_to_locked;
- err = mlxfw_flash_components(mlxfw_dev, fwhandle, mfa2_file);
+ err = mlxfw_flash_components(mlxfw_dev, fwhandle, mfa2_file, extack);
if (err)
goto err_flash_components;
pr_debug("Activate image\n");
+ mlxfw_status_notify(mlxfw_dev, "Activating image", NULL, 0, 0);
err = mlxfw_dev->ops->fsm_activate(mlxfw_dev, fwhandle);
if (err) {
pr_err("Could not activate the downloaded image\n");
+ NL_SET_ERR_MSG_MOD(extack, "Could not activate the downloaded image");
goto err_fsm_activate;
}
- err = mlxfw_fsm_state_wait(mlxfw_dev, fwhandle, MLXFW_FSM_STATE_LOCKED);
+ err = mlxfw_fsm_state_wait(mlxfw_dev, fwhandle,
+ MLXFW_FSM_STATE_LOCKED, extack);
if (err)
goto err_state_wait_activate_to_locked;
@@ -223,6 +259,7 @@ int mlxfw_firmware_flash(struct mlxfw_dev *mlxfw_dev,
mlxfw_dev->ops->fsm_release(mlxfw_dev, fwhandle);
pr_info("Firmware flash done.\n");
+ mlxfw_status_notify(mlxfw_dev, "Firmware flash done", NULL, 0, 0);
mlxfw_mfa2_file_fini(mfa2_file);
return 0;
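For readers following the new status_notify hook: below is a minimal sketch of how a driver sitting behind mlxfw might implement it, assuming the usual mlxsw pattern of recovering the driver context with container_of() and forwarding the report to devlink's flash-status helper. The structure and function names follow the Spectrum driver's style but are shown here only as an illustration, not as a verbatim copy of that code.

static void mlxsw_sp_fw_flash_status_notify(struct mlxfw_dev *mlxfw_dev,
					    const char *msg,
					    const char *comp_name,
					    u32 done_bytes, u32 total_bytes)
{
	/* Recover the wrapping device and forward the progress report to
	 * devlink, which emits it to userspace as a flash-update status
	 * notification.
	 */
	struct mlxsw_sp_mlxfw_dev *mlxsw_sp_mlxfw_dev =
		container_of(mlxfw_dev, struct mlxsw_sp_mlxfw_dev, mlxfw_dev);
	struct mlxsw_sp *mlxsw_sp = mlxsw_sp_mlxfw_dev->mlxsw_sp;

	devlink_flash_update_status_notify(priv_to_devlink(mlxsw_sp->core),
					   msg, comp_name, done_bytes,
					   total_bytes);
}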
diff --git a/drivers/net/ethernet/mellanox/mlxsw/Kconfig b/drivers/net/ethernet/mellanox/mlxsw/Kconfig
index 11ded0bc7d98..06c80343d9ed 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/Kconfig
+++ b/drivers/net/ethernet/mellanox/mlxsw/Kconfig
@@ -83,6 +83,8 @@ config MLXSW_SPECTRUM
select PARMAN
select OBJAGG
select MLXFW
+ imply PTP_1588_CLOCK
+ select NET_PTP_CLASSIFY if PTP_1588_CLOCK
default m
---help---
This driver supports Mellanox Technologies Spectrum Ethernet
diff --git a/drivers/net/ethernet/mellanox/mlxsw/Makefile b/drivers/net/ethernet/mellanox/mlxsw/Makefile
index c4dc72e1ce63..171b36bd8a4e 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/Makefile
+++ b/drivers/net/ethernet/mellanox/mlxsw/Makefile
@@ -31,5 +31,6 @@ mlxsw_spectrum-objs := spectrum.o spectrum_buffers.o \
spectrum_nve.o spectrum_nve_vxlan.o \
spectrum_dpipe.o
mlxsw_spectrum-$(CONFIG_MLXSW_SPECTRUM_DCB) += spectrum_dcb.o
+mlxsw_spectrum-$(CONFIG_PTP_1588_CLOCK) += spectrum_ptp.o
obj-$(CONFIG_MLXSW_MINIMAL) += mlxsw_minimal.o
mlxsw_minimal-objs := minimal.o
diff --git a/drivers/net/ethernet/mellanox/mlxsw/cmd.h b/drivers/net/ethernet/mellanox/mlxsw/cmd.h
index 0772e4339b33..5ffdfb532cb7 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/cmd.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/cmd.h
@@ -317,6 +317,18 @@ MLXSW_ITEM64(cmd_mbox, query_fw, doorbell_page_offset, 0x40, 0, 64);
*/
MLXSW_ITEM32(cmd_mbox, query_fw, doorbell_page_bar, 0x48, 30, 2);
+/* cmd_mbox_query_fw_free_running_clock_offset
+ * The offset of the free running clock page
+ */
+MLXSW_ITEM64(cmd_mbox, query_fw, free_running_clock_offset, 0x50, 0, 64);
+
+/* cmd_mbox_query_fw_fr_rn_clk_bar
+ * PCI base address register (BAR) of the free running clock page
+ * 0: BAR 0
+ * 1: 64 bit BAR
+ */
+MLXSW_ITEM32(cmd_mbox, query_fw, fr_rn_clk_bar, 0x58, 30, 2);
+
/* QUERY_BOARDINFO - Query Board Information
* -----------------------------------------
* OpMod == 0 (N/A), INMmod == 0 (N/A)
diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.c b/drivers/net/ethernet/mellanox/mlxsw/core.c
index 6ee6de7f0160..17ceac7505e5 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/core.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/core.c
@@ -1003,6 +1003,20 @@ static int mlxsw_devlink_core_bus_device_reload(struct devlink *devlink,
return err;
}
+static int mlxsw_devlink_flash_update(struct devlink *devlink,
+ const char *file_name,
+ const char *component,
+ struct netlink_ext_ack *extack)
+{
+ struct mlxsw_core *mlxsw_core = devlink_priv(devlink);
+ struct mlxsw_driver *mlxsw_driver = mlxsw_core->driver;
+
+ if (!mlxsw_driver->flash_update)
+ return -EOPNOTSUPP;
+ return mlxsw_driver->flash_update(mlxsw_core, file_name,
+ component, extack);
+}
+
static const struct devlink_ops mlxsw_devlink_ops = {
.reload = mlxsw_devlink_core_bus_device_reload,
.port_type_set = mlxsw_devlink_port_type_set,
@@ -1019,6 +1033,7 @@ static const struct devlink_ops mlxsw_devlink_ops = {
.sb_occ_port_pool_get = mlxsw_devlink_sb_occ_port_pool_get,
.sb_occ_tc_port_bind_get = mlxsw_devlink_sb_occ_tc_port_bind_get,
.info_get = mlxsw_devlink_info_get,
+ .flash_update = mlxsw_devlink_flash_update,
};
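With the op plumbed into mlxsw_devlink_ops, the flash can be driven entirely from userspace; a hedged example invocation (device and file names made up for illustration) would be `devlink dev flash pci/0000:03:00.0 file mellanox/fw.mfa2`, with the status notifications described above feeding that command's progress output.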
static int
@@ -1098,6 +1113,12 @@ __mlxsw_core_bus_device_register(const struct mlxsw_bus_info *mlxsw_bus_info,
goto err_register_params;
}
+ if (mlxsw_driver->init) {
+ err = mlxsw_driver->init(mlxsw_core, mlxsw_bus_info);
+ if (err)
+ goto err_driver_init;
+ }
+
err = mlxsw_hwmon_init(mlxsw_core, mlxsw_bus_info, &mlxsw_core->hwmon);
if (err)
goto err_hwmon_init;
@@ -1107,22 +1128,17 @@ __mlxsw_core_bus_device_register(const struct mlxsw_bus_info *mlxsw_bus_info,
if (err)
goto err_thermal_init;
- if (mlxsw_driver->init) {
- err = mlxsw_driver->init(mlxsw_core, mlxsw_bus_info);
- if (err)
- goto err_driver_init;
- }
-
if (mlxsw_driver->params_register && !reload)
devlink_params_publish(devlink);
return 0;
-err_driver_init:
- mlxsw_thermal_fini(mlxsw_core->thermal);
err_thermal_init:
mlxsw_hwmon_fini(mlxsw_core->hwmon);
err_hwmon_init:
+ if (mlxsw_core->driver->fini)
+ mlxsw_core->driver->fini(mlxsw_core);
+err_driver_init:
if (mlxsw_driver->params_unregister && !reload)
mlxsw_driver->params_unregister(mlxsw_core);
err_register_params:
@@ -1187,10 +1203,10 @@ void mlxsw_core_bus_device_unregister(struct mlxsw_core *mlxsw_core,
if (mlxsw_core->driver->params_unregister && !reload)
devlink_params_unpublish(devlink);
- if (mlxsw_core->driver->fini)
- mlxsw_core->driver->fini(mlxsw_core);
mlxsw_thermal_fini(mlxsw_core->thermal);
mlxsw_hwmon_fini(mlxsw_core->hwmon);
+ if (mlxsw_core->driver->fini)
+ mlxsw_core->driver->fini(mlxsw_core);
if (mlxsw_core->driver->params_unregister && !reload)
mlxsw_core->driver->params_unregister(mlxsw_core);
if (!reload)
@@ -1229,6 +1245,15 @@ int mlxsw_core_skb_transmit(struct mlxsw_core *mlxsw_core, struct sk_buff *skb,
}
EXPORT_SYMBOL(mlxsw_core_skb_transmit);
+void mlxsw_core_ptp_transmitted(struct mlxsw_core *mlxsw_core,
+ struct sk_buff *skb, u8 local_port)
+{
+ if (mlxsw_core->driver->ptp_transmitted)
+ mlxsw_core->driver->ptp_transmitted(mlxsw_core, skb,
+ local_port);
+}
+EXPORT_SYMBOL(mlxsw_core_ptp_transmitted);
+
static bool __is_rx_listener_equal(const struct mlxsw_rx_listener *rxl_a,
const struct mlxsw_rx_listener *rxl_b)
{
@@ -2010,6 +2035,18 @@ int mlxsw_core_resources_query(struct mlxsw_core *mlxsw_core, char *mbox,
}
EXPORT_SYMBOL(mlxsw_core_resources_query);
+u32 mlxsw_core_read_frc_h(struct mlxsw_core *mlxsw_core)
+{
+ return mlxsw_core->bus->read_frc_h(mlxsw_core->bus_priv);
+}
+EXPORT_SYMBOL(mlxsw_core_read_frc_h);
+
+u32 mlxsw_core_read_frc_l(struct mlxsw_core *mlxsw_core)
+{
+ return mlxsw_core->bus->read_frc_l(mlxsw_core->bus_priv);
+}
+EXPORT_SYMBOL(mlxsw_core_read_frc_l);
+
static int __init mlxsw_core_module_init(void)
{
int err;
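As a rough illustration of how the new read_frc_h/read_frc_l exports can be consumed, here is a minimal sketch of assembling a 64-bit free-running-counter sample. The wrap-around re-read is a generic pattern and the function name is invented for the example; this is not the PTP code added elsewhere in this series.

static u64 example_read_frc(struct mlxsw_core *mlxsw_core)
{
	u32 frc_h1, frc_h2, frc_l;

	/* Read high, then low, then high again; if the high word changed,
	 * the low word wrapped between the two reads and must be re-read.
	 */
	frc_h1 = mlxsw_core_read_frc_h(mlxsw_core);
	frc_l = mlxsw_core_read_frc_l(mlxsw_core);
	frc_h2 = mlxsw_core_read_frc_h(mlxsw_core);
	if (frc_h1 != frc_h2)
		frc_l = mlxsw_core_read_frc_l(mlxsw_core);
	return ((u64) frc_h2 << 32) | frc_l;
}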
diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.h b/drivers/net/ethernet/mellanox/mlxsw/core.h
index e3832cb5bdda..8efcff4b59cb 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/core.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/core.h
@@ -48,6 +48,8 @@ bool mlxsw_core_skb_transmit_busy(struct mlxsw_core *mlxsw_core,
const struct mlxsw_tx_info *tx_info);
int mlxsw_core_skb_transmit(struct mlxsw_core *mlxsw_core, struct sk_buff *skb,
const struct mlxsw_tx_info *tx_info);
+void mlxsw_core_ptp_transmitted(struct mlxsw_core *mlxsw_core,
+ struct sk_buff *skb, u8 local_port);
struct mlxsw_rx_listener {
void (*func)(struct sk_buff *skb, u8 local_port, void *priv);
@@ -284,6 +286,9 @@ struct mlxsw_driver {
unsigned int sb_index, u16 tc_index,
enum devlink_sb_pool_type pool_type,
u32 *p_cur, u32 *p_max);
+ int (*flash_update)(struct mlxsw_core *mlxsw_core,
+ const char *file_name, const char *component,
+ struct netlink_ext_ack *extack);
void (*txhdr_construct)(struct sk_buff *skb,
const struct mlxsw_tx_info *tx_info);
int (*resources_register)(struct mlxsw_core *mlxsw_core);
@@ -293,6 +298,13 @@ struct mlxsw_driver {
u64 *p_linear_size);
int (*params_register)(struct mlxsw_core *mlxsw_core);
void (*params_unregister)(struct mlxsw_core *mlxsw_core);
+
+ /* Notify a driver that a timestamped packet was transmitted. Driver
+ * is responsible for freeing the passed-in SKB.
+ */
+ void (*ptp_transmitted)(struct mlxsw_core *mlxsw_core,
+ struct sk_buff *skb, u8 local_port);
+
u8 txhdr_len;
const struct mlxsw_config_profile *profile;
bool res_query_enabled;
@@ -306,6 +318,9 @@ int mlxsw_core_kvd_sizes_get(struct mlxsw_core *mlxsw_core,
void mlxsw_core_fw_flash_start(struct mlxsw_core *mlxsw_core);
void mlxsw_core_fw_flash_end(struct mlxsw_core *mlxsw_core);
+u32 mlxsw_core_read_frc_h(struct mlxsw_core *mlxsw_core);
+u32 mlxsw_core_read_frc_l(struct mlxsw_core *mlxsw_core);
+
bool mlxsw_core_res_valid(struct mlxsw_core *mlxsw_core,
enum mlxsw_res_id res_id);
@@ -336,6 +351,8 @@ struct mlxsw_bus {
char *in_mbox, size_t in_mbox_size,
char *out_mbox, size_t out_mbox_size,
u8 *p_status);
+ u32 (*read_frc_h)(void *bus_priv);
+ u32 (*read_frc_l)(void *bus_priv);
u8 features;
};
@@ -353,7 +370,8 @@ struct mlxsw_bus_info {
struct mlxsw_fw_rev fw_rev;
u8 vsd[MLXSW_CMD_BOARDINFO_VSD_LEN];
u8 psid[MLXSW_CMD_BOARDINFO_PSID_LEN];
- u8 low_frequency;
+ u8 low_frequency:1,
+ read_frc_capable:1;
};
struct mlxsw_hwmon;
@@ -409,4 +427,14 @@ enum mlxsw_devlink_param_id {
MLXSW_DEVLINK_PARAM_ID_ACL_REGION_REHASH_INTERVAL,
};
+struct mlxsw_skb_cb {
+ struct mlxsw_tx_info tx_info;
+};
+
+static inline struct mlxsw_skb_cb *mlxsw_skb_cb(struct sk_buff *skb)
+{
+ BUILD_BUG_ON(sizeof(struct mlxsw_skb_cb) > sizeof(skb->cb));
+ return (struct mlxsw_skb_cb *) skb->cb;
+}
+
#endif
diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.c b/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.c
index cb3e663b1d37..feb4672a5ac0 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.c
@@ -30,8 +30,9 @@ static bool mlxsw_afk_blocks_check(struct mlxsw_afk *mlxsw_afk)
elinst = &block->instances[j];
if (elinst->type != elinst->info->type ||
- elinst->item.size.bits !=
- elinst->info->item.size.bits)
+ (!elinst->avoid_size_check &&
+ elinst->item.size.bits !=
+ elinst->info->item.size.bits))
return false;
}
}
@@ -385,12 +386,12 @@ EXPORT_SYMBOL(mlxsw_afk_values_add_buf);
static void mlxsw_sp_afk_encode_u32(const struct mlxsw_item *storage_item,
const struct mlxsw_item *output_item,
- char *storage, char *output)
+ char *storage, char *output, int diff)
{
u32 value;
value = __mlxsw_item_get32(storage, storage_item, 0);
- __mlxsw_item_set32(output, output_item, 0, value);
+ __mlxsw_item_set32(output, output_item, 0, value + diff);
}
static void mlxsw_sp_afk_encode_buf(const struct mlxsw_item *storage_item,
@@ -406,14 +407,14 @@ static void mlxsw_sp_afk_encode_buf(const struct mlxsw_item *storage_item,
static void
mlxsw_sp_afk_encode_one(const struct mlxsw_afk_element_inst *elinst,
- char *output, char *storage)
+ char *output, char *storage, int u32_diff)
{
const struct mlxsw_item *storage_item = &elinst->info->item;
const struct mlxsw_item *output_item = &elinst->item;
if (elinst->type == MLXSW_AFK_ELEMENT_TYPE_U32)
mlxsw_sp_afk_encode_u32(storage_item, output_item,
- storage, output);
+ storage, output, u32_diff);
else if (elinst->type == MLXSW_AFK_ELEMENT_TYPE_BUF)
mlxsw_sp_afk_encode_buf(storage_item, output_item,
storage, output);
@@ -446,9 +447,10 @@ void mlxsw_afk_encode(struct mlxsw_afk *mlxsw_afk,
continue;
mlxsw_sp_afk_encode_one(elinst, block_key,
- values->storage.key);
+ values->storage.key,
+ elinst->u32_key_diff);
mlxsw_sp_afk_encode_one(elinst, block_mask,
- values->storage.mask);
+ values->storage.mask, 0);
}
mlxsw_afk->ops->encode_block(key, i, block_key);
diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.h b/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.h
index 4a625cdf3e7c..cb229b55ecc4 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.h
@@ -74,7 +74,7 @@ struct mlxsw_afk_element_info {
* define an internal storage geometry.
*/
static const struct mlxsw_afk_element_info mlxsw_afk_element_infos[] = {
- MLXSW_AFK_ELEMENT_INFO_U32(SRC_SYS_PORT, 0x00, 16, 8),
+ MLXSW_AFK_ELEMENT_INFO_U32(SRC_SYS_PORT, 0x00, 16, 16),
MLXSW_AFK_ELEMENT_INFO_BUF(DMAC_32_47, 0x04, 2),
MLXSW_AFK_ELEMENT_INFO_BUF(DMAC_0_31, 0x06, 4),
MLXSW_AFK_ELEMENT_INFO_BUF(SMAC_32_47, 0x0A, 2),
@@ -107,9 +107,14 @@ struct mlxsw_afk_element_inst { /* element instance in actual block */
const struct mlxsw_afk_element_info *info;
enum mlxsw_afk_element_type type;
struct mlxsw_item item; /* element geometry in block */
+ int u32_key_diff; /* adjustment (if any) applied to a u32 value
+ * before it is written to the key
+ */
+ bool avoid_size_check;
};
-#define MLXSW_AFK_ELEMENT_INST(_type, _element, _offset, _shift, _size) \
+#define MLXSW_AFK_ELEMENT_INST(_type, _element, _offset, \
+ _shift, _size, _u32_key_diff, _avoid_size_check) \
{ \
.info = &mlxsw_afk_element_infos[MLXSW_AFK_ELEMENT_##_element], \
.type = _type, \
@@ -119,15 +124,24 @@ struct mlxsw_afk_element_inst { /* element instance in actual block */
.size = {.bits = _size}, \
.name = #_element, \
}, \
+ .u32_key_diff = _u32_key_diff, \
+ .avoid_size_check = _avoid_size_check, \
}
#define MLXSW_AFK_ELEMENT_INST_U32(_element, _offset, _shift, _size) \
MLXSW_AFK_ELEMENT_INST(MLXSW_AFK_ELEMENT_TYPE_U32, \
- _element, _offset, _shift, _size)
+ _element, _offset, _shift, _size, 0, false)
+
+#define MLXSW_AFK_ELEMENT_INST_EXT_U32(_element, _offset, \
+ _shift, _size, _key_diff, \
+ _avoid_size_check) \
+ MLXSW_AFK_ELEMENT_INST(MLXSW_AFK_ELEMENT_TYPE_U32, \
+ _element, _offset, _shift, _size, \
+ _key_diff, _avoid_size_check)
#define MLXSW_AFK_ELEMENT_INST_BUF(_element, _offset, _size) \
MLXSW_AFK_ELEMENT_INST(MLXSW_AFK_ELEMENT_TYPE_BUF, \
- _element, _offset, 0, _size)
+ _element, _offset, 0, _size, 0, false)
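To make the new parameters concrete, here is a hedged sketch of how a key block might use the extended macro; the element choice, offset and diff value are invented for illustration and are not taken from an actual Spectrum key block.

static struct mlxsw_afk_element_inst example_afk_block[] = {
	/* Encode SRC_SYS_PORT into an 8-bit field, subtracting 1 from the
	 * value on encode and skipping the check against the (now 16-bit)
	 * storage geometry.
	 */
	MLXSW_AFK_ELEMENT_INST_EXT_U32(SRC_SYS_PORT, 0x04, 0, 8, -1, true),
};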
struct mlxsw_afk_block {
u16 encoding; /* block ID */
diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_env.c b/drivers/net/ethernet/mellanox/mlxsw/core_env.c
index 72539a9a3847..d2c7ce67c300 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/core_env.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/core_env.c
@@ -92,33 +92,20 @@ int mlxsw_env_module_temp_thresholds_get(struct mlxsw_core *core, int module,
u16 temp;
} temp_thresh;
char mcia_pl[MLXSW_REG_MCIA_LEN] = {0};
- char mtbr_pl[MLXSW_REG_MTBR_LEN] = {0};
- u16 module_temp;
+ char mtmp_pl[MLXSW_REG_MTMP_LEN];
+ unsigned int module_temp;
bool qsfp;
int err;
- mlxsw_reg_mtbr_pack(mtbr_pl, MLXSW_REG_MTBR_BASE_MODULE_INDEX + module,
- 1);
- err = mlxsw_reg_query(core, MLXSW_REG(mtbr), mtbr_pl);
+ mlxsw_reg_mtmp_pack(mtmp_pl, MLXSW_REG_MTMP_MODULE_INDEX_MIN + module,
+ false, false);
+ err = mlxsw_reg_query(core, MLXSW_REG(mtmp), mtmp_pl);
if (err)
return err;
-
- /* Don't read temperature thresholds for module with no valid info. */
- mlxsw_reg_mtbr_temp_unpack(mtbr_pl, 0, &module_temp, NULL);
- switch (module_temp) {
- case MLXSW_REG_MTBR_BAD_SENS_INFO: /* fall-through */
- case MLXSW_REG_MTBR_NO_CONN: /* fall-through */
- case MLXSW_REG_MTBR_NO_TEMP_SENS: /* fall-through */
- case MLXSW_REG_MTBR_INDEX_NA:
+ mlxsw_reg_mtmp_unpack(mtmp_pl, &module_temp, NULL, NULL);
+ if (!module_temp) {
*temp = 0;
return 0;
- default:
- /* Do not consider thresholds for zero temperature. */
- if (MLXSW_REG_MTMP_TEMP_TO_MC(module_temp) == 0) {
- *temp = 0;
- return 0;
- }
- break;
}
/* Read Free Side Device Temperature Thresholds from page 03h
diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_hwmon.c b/drivers/net/ethernet/mellanox/mlxsw/core_hwmon.c
index 496dc904c5ed..5b00726c4346 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/core_hwmon.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/core_hwmon.c
@@ -23,6 +23,14 @@ struct mlxsw_hwmon_attr {
char name[32];
};
+static int mlxsw_hwmon_get_attr_index(int index, int count)
+{
+ if (index >= count)
+ return index % count + MLXSW_REG_MTMP_GBOX_INDEX_MIN;
+
+ return index;
+}
+
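For example (numbers illustrative only): with 8 module sensors, the gearbox hwmon attributes are created with indices 8, 9, ..., so mlxsw_hwmon_get_attr_index() maps attribute index 8 to MTMP sensor index 256 (MLXSW_REG_MTMP_GBOX_INDEX_MIN) and index 9 to 257, while ASIC and module attributes below the count pass through unchanged.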
struct mlxsw_hwmon {
struct mlxsw_core *core;
const struct mlxsw_bus_info *bus_info;
@@ -33,6 +41,7 @@ struct mlxsw_hwmon {
struct mlxsw_hwmon_attr hwmon_attrs[MLXSW_HWMON_ATTR_COUNT];
unsigned int attrs_count;
u8 sensor_count;
+ u8 module_sensor_count;
};
static ssize_t mlxsw_hwmon_temp_show(struct device *dev,
@@ -43,18 +52,19 @@ static ssize_t mlxsw_hwmon_temp_show(struct device *dev,
container_of(attr, struct mlxsw_hwmon_attr, dev_attr);
struct mlxsw_hwmon *mlxsw_hwmon = mlwsw_hwmon_attr->hwmon;
char mtmp_pl[MLXSW_REG_MTMP_LEN];
- unsigned int temp;
+ int temp, index;
int err;
- mlxsw_reg_mtmp_pack(mtmp_pl, mlwsw_hwmon_attr->type_index,
- false, false);
+ index = mlxsw_hwmon_get_attr_index(mlwsw_hwmon_attr->type_index,
+ mlxsw_hwmon->module_sensor_count);
+ mlxsw_reg_mtmp_pack(mtmp_pl, index, false, false);
err = mlxsw_reg_query(mlxsw_hwmon->core, MLXSW_REG(mtmp), mtmp_pl);
if (err) {
dev_err(mlxsw_hwmon->bus_info->dev, "Failed to query temp sensor\n");
return err;
}
mlxsw_reg_mtmp_unpack(mtmp_pl, &temp, NULL, NULL);
- return sprintf(buf, "%u\n", temp);
+ return sprintf(buf, "%d\n", temp);
}
static ssize_t mlxsw_hwmon_temp_max_show(struct device *dev,
@@ -65,18 +75,19 @@ static ssize_t mlxsw_hwmon_temp_max_show(struct device *dev,
container_of(attr, struct mlxsw_hwmon_attr, dev_attr);
struct mlxsw_hwmon *mlxsw_hwmon = mlwsw_hwmon_attr->hwmon;
char mtmp_pl[MLXSW_REG_MTMP_LEN];
- unsigned int temp_max;
+ int temp_max, index;
int err;
- mlxsw_reg_mtmp_pack(mtmp_pl, mlwsw_hwmon_attr->type_index,
- false, false);
+ index = mlxsw_hwmon_get_attr_index(mlwsw_hwmon_attr->type_index,
+ mlxsw_hwmon->module_sensor_count);
+ mlxsw_reg_mtmp_pack(mtmp_pl, index, false, false);
err = mlxsw_reg_query(mlxsw_hwmon->core, MLXSW_REG(mtmp), mtmp_pl);
if (err) {
dev_err(mlxsw_hwmon->bus_info->dev, "Failed to query temp sensor\n");
return err;
}
mlxsw_reg_mtmp_unpack(mtmp_pl, NULL, &temp_max, NULL);
- return sprintf(buf, "%u\n", temp_max);
+ return sprintf(buf, "%d\n", temp_max);
}
static ssize_t mlxsw_hwmon_temp_rst_store(struct device *dev,
@@ -88,6 +99,7 @@ static ssize_t mlxsw_hwmon_temp_rst_store(struct device *dev,
struct mlxsw_hwmon *mlxsw_hwmon = mlwsw_hwmon_attr->hwmon;
char mtmp_pl[MLXSW_REG_MTMP_LEN];
unsigned long val;
+ int index;
int err;
err = kstrtoul(buf, 10, &val);
@@ -96,7 +108,9 @@ static ssize_t mlxsw_hwmon_temp_rst_store(struct device *dev,
if (val != 1)
return -EINVAL;
- mlxsw_reg_mtmp_pack(mtmp_pl, mlwsw_hwmon_attr->type_index, true, true);
+ index = mlxsw_hwmon_get_attr_index(mlwsw_hwmon_attr->type_index,
+ mlxsw_hwmon->module_sensor_count);
+ mlxsw_reg_mtmp_pack(mtmp_pl, index, true, true);
err = mlxsw_reg_write(mlxsw_hwmon->core, MLXSW_REG(mtmp), mtmp_pl);
if (err) {
dev_err(mlxsw_hwmon->bus_info->dev, "Failed to reset temp sensor history\n");
@@ -198,40 +212,20 @@ static ssize_t mlxsw_hwmon_module_temp_show(struct device *dev,
struct mlxsw_hwmon_attr *mlwsw_hwmon_attr =
container_of(attr, struct mlxsw_hwmon_attr, dev_attr);
struct mlxsw_hwmon *mlxsw_hwmon = mlwsw_hwmon_attr->hwmon;
- char mtbr_pl[MLXSW_REG_MTBR_LEN] = {0};
- u16 temp;
+ char mtmp_pl[MLXSW_REG_MTMP_LEN];
u8 module;
+ int temp;
int err;
module = mlwsw_hwmon_attr->type_index - mlxsw_hwmon->sensor_count;
- mlxsw_reg_mtbr_pack(mtbr_pl, MLXSW_REG_MTBR_BASE_MODULE_INDEX + module,
- 1);
- err = mlxsw_reg_query(mlxsw_hwmon->core, MLXSW_REG(mtbr), mtbr_pl);
- if (err) {
- dev_err(dev, "Failed to query module temperature sensor\n");
+ mlxsw_reg_mtmp_pack(mtmp_pl, MLXSW_REG_MTMP_MODULE_INDEX_MIN + module,
+ false, false);
+ err = mlxsw_reg_query(mlxsw_hwmon->core, MLXSW_REG(mtmp), mtmp_pl);
+ if (err)
return err;
- }
-
- mlxsw_reg_mtbr_temp_unpack(mtbr_pl, 0, &temp, NULL);
- /* Update status and temperature cache. */
- switch (temp) {
- case MLXSW_REG_MTBR_NO_CONN: /* fall-through */
- case MLXSW_REG_MTBR_NO_TEMP_SENS: /* fall-through */
- case MLXSW_REG_MTBR_INDEX_NA:
- temp = 0;
- break;
- case MLXSW_REG_MTBR_BAD_SENS_INFO:
- /* Untrusted cable is connected. Reading temperature from its
- * sensor is faulty.
- */
- temp = 0;
- break;
- default:
- temp = MLXSW_REG_MTMP_TEMP_TO_MC(temp);
- break;
- }
+ mlxsw_reg_mtmp_unpack(mtmp_pl, &temp, NULL, NULL);
- return sprintf(buf, "%u\n", temp);
+ return sprintf(buf, "%d\n", temp);
}
static ssize_t mlxsw_hwmon_module_temp_fault_show(struct device *dev,
@@ -333,6 +327,20 @@ mlxsw_hwmon_module_temp_label_show(struct device *dev,
mlwsw_hwmon_attr->type_index);
}
+static ssize_t
+mlxsw_hwmon_gbox_temp_label_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ struct mlxsw_hwmon_attr *mlwsw_hwmon_attr =
+ container_of(attr, struct mlxsw_hwmon_attr, dev_attr);
+ struct mlxsw_hwmon *mlxsw_hwmon = mlwsw_hwmon_attr->hwmon;
+ int index = mlwsw_hwmon_attr->type_index -
+ mlxsw_hwmon->module_sensor_count + 1;
+
+ return sprintf(buf, "gearbox %03u\n", index);
+}
+
enum mlxsw_hwmon_attr_type {
MLXSW_HWMON_ATTR_TYPE_TEMP,
MLXSW_HWMON_ATTR_TYPE_TEMP_MAX,
@@ -345,6 +353,7 @@ enum mlxsw_hwmon_attr_type {
MLXSW_HWMON_ATTR_TYPE_TEMP_MODULE_CRIT,
MLXSW_HWMON_ATTR_TYPE_TEMP_MODULE_EMERG,
MLXSW_HWMON_ATTR_TYPE_TEMP_MODULE_LABEL,
+ MLXSW_HWMON_ATTR_TYPE_TEMP_GBOX_LABEL,
};
static void mlxsw_hwmon_attr_add(struct mlxsw_hwmon *mlxsw_hwmon,
@@ -428,6 +437,13 @@ static void mlxsw_hwmon_attr_add(struct mlxsw_hwmon *mlxsw_hwmon,
snprintf(mlxsw_hwmon_attr->name, sizeof(mlxsw_hwmon_attr->name),
"temp%u_label", num + 1);
break;
+ case MLXSW_HWMON_ATTR_TYPE_TEMP_GBOX_LABEL:
+ mlxsw_hwmon_attr->dev_attr.show =
+ mlxsw_hwmon_gbox_temp_label_show;
+ mlxsw_hwmon_attr->dev_attr.attr.mode = 0444;
+ snprintf(mlxsw_hwmon_attr->name, sizeof(mlxsw_hwmon_attr->name),
+ "temp%u_label", num + 1);
+ break;
default:
WARN_ON(1);
}
@@ -556,6 +572,54 @@ static int mlxsw_hwmon_module_init(struct mlxsw_hwmon *mlxsw_hwmon)
index, index);
index++;
}
+ mlxsw_hwmon->module_sensor_count = index;
+
+ return 0;
+}
+
+static int mlxsw_hwmon_gearbox_init(struct mlxsw_hwmon *mlxsw_hwmon)
+{
+ int index, max_index, sensor_index;
+ char mgpir_pl[MLXSW_REG_MGPIR_LEN];
+ char mtmp_pl[MLXSW_REG_MTMP_LEN];
+ u8 gbox_num;
+ int err;
+
+ mlxsw_reg_mgpir_pack(mgpir_pl);
+ err = mlxsw_reg_query(mlxsw_hwmon->core, MLXSW_REG(mgpir), mgpir_pl);
+ if (err)
+ return err;
+
+ mlxsw_reg_mgpir_unpack(mgpir_pl, &gbox_num, NULL, NULL);
+ if (!gbox_num)
+ return 0;
+
+ index = mlxsw_hwmon->module_sensor_count;
+ max_index = mlxsw_hwmon->module_sensor_count + gbox_num;
+ while (index < max_index) {
+ sensor_index = index % mlxsw_hwmon->module_sensor_count +
+ MLXSW_REG_MTMP_GBOX_INDEX_MIN;
+ mlxsw_reg_mtmp_pack(mtmp_pl, sensor_index, true, true);
+ err = mlxsw_reg_write(mlxsw_hwmon->core,
+ MLXSW_REG(mtmp), mtmp_pl);
+ if (err) {
+ dev_err(mlxsw_hwmon->bus_info->dev, "Failed to setup temp sensor number %d\n",
+ sensor_index);
+ return err;
+ }
+ mlxsw_hwmon_attr_add(mlxsw_hwmon, MLXSW_HWMON_ATTR_TYPE_TEMP,
+ index, index);
+ mlxsw_hwmon_attr_add(mlxsw_hwmon,
+ MLXSW_HWMON_ATTR_TYPE_TEMP_MAX, index,
+ index);
+ mlxsw_hwmon_attr_add(mlxsw_hwmon,
+ MLXSW_HWMON_ATTR_TYPE_TEMP_RST, index,
+ index);
+ mlxsw_hwmon_attr_add(mlxsw_hwmon,
+ MLXSW_HWMON_ATTR_TYPE_TEMP_GBOX_LABEL,
+ index, index);
+ index++;
+ }
return 0;
}
@@ -586,6 +650,10 @@ int mlxsw_hwmon_init(struct mlxsw_core *mlxsw_core,
if (err)
goto err_temp_module_init;
+ err = mlxsw_hwmon_gearbox_init(mlxsw_hwmon);
+ if (err)
+ goto err_temp_gearbox_init;
+
mlxsw_hwmon->groups[0] = &mlxsw_hwmon->group;
mlxsw_hwmon->group.attrs = mlxsw_hwmon->attrs;
@@ -602,6 +670,7 @@ int mlxsw_hwmon_init(struct mlxsw_core *mlxsw_core,
return 0;
err_hwmon_register:
+err_temp_gearbox_init:
err_temp_module_init:
err_fans_init:
err_temp_init:
diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c b/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c
index d3e851e7ca72..35a1dc89c28a 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c
@@ -23,6 +23,7 @@
#define MLXSW_THERMAL_HYSTERESIS_TEMP 5000 /* 5C */
#define MLXSW_THERMAL_MODULE_TEMP_SHIFT (MLXSW_THERMAL_HYSTERESIS_TEMP * 2)
#define MLXSW_THERMAL_ZONE_MAX_NAME 16
+#define MLXSW_THERMAL_TEMP_SCORE_MAX GENMASK(31, 0)
#define MLXSW_THERMAL_MAX_STATE 10
#define MLXSW_THERMAL_MAX_DUTY 255
/* Minimum and maximum fan allowed speed in percent: from 20% to 100%. Values
@@ -98,7 +99,7 @@ struct mlxsw_thermal_module {
struct thermal_zone_device *tzdev;
struct mlxsw_thermal_trip trips[MLXSW_THERMAL_NUM_TRIPS];
enum thermal_device_mode mode;
- int module;
+ int module; /* Module or gearbox number */
};
struct mlxsw_thermal {
@@ -111,6 +112,10 @@ struct mlxsw_thermal {
struct mlxsw_thermal_trip trips[MLXSW_THERMAL_NUM_TRIPS];
enum thermal_device_mode mode;
struct mlxsw_thermal_module *tz_module_arr;
+ struct mlxsw_thermal_module *tz_gearbox_arr;
+ u8 tz_gearbox_num;
+ unsigned int tz_highest_score;
+ struct thermal_zone_device *tz_highest_dev;
};
static inline u8 mlxsw_state_to_duty(int state)
@@ -195,6 +200,34 @@ mlxsw_thermal_module_trips_update(struct device *dev, struct mlxsw_core *core,
return 0;
}
+static void mlxsw_thermal_tz_score_update(struct mlxsw_thermal *thermal,
+ struct thermal_zone_device *tzdev,
+ struct mlxsw_thermal_trip *trips,
+ int temp)
+{
+ struct mlxsw_thermal_trip *trip = trips;
+ unsigned int score, delta, i, shift = 1;
+
+ /* Calculate the thermal zone score. If the temperature is above the
+ * critical threshold, the score is set to MLXSW_THERMAL_TEMP_SCORE_MAX.
+ */
+ score = MLXSW_THERMAL_TEMP_SCORE_MAX;
+ for (i = MLXSW_THERMAL_TEMP_TRIP_NORM; i < MLXSW_THERMAL_NUM_TRIPS;
+ i++, trip++) {
+ if (temp < trip->temp) {
+ delta = DIV_ROUND_CLOSEST(temp, trip->temp - temp);
+ score = delta * shift;
+ break;
+ }
+ shift *= 256;
+ }
+
+ if (score > thermal->tz_highest_score) {
+ thermal->tz_highest_score = score;
+ thermal->tz_highest_dev = tzdev;
+ }
+}
+
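A worked example with made-up numbers: suppose the normal trip is 75000 m°C and the current reading is 70000 m°C. The first trip already exceeds the temperature, so delta = DIV_ROUND_CLOSEST(70000, 75000 - 70000) = 14 and the score is 14 * 1 = 14. A zone that has already passed its normal trip has shift multiplied by 256 before the next comparison, so hotter zones always score higher than cooler ones, and the hottest zone ends up cached in tz_highest_dev for the trend callback added below.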
static int mlxsw_thermal_bind(struct thermal_zone_device *tzdev,
struct thermal_cooling_device *cdev)
{
@@ -279,7 +312,7 @@ static int mlxsw_thermal_get_temp(struct thermal_zone_device *tzdev,
struct mlxsw_thermal *thermal = tzdev->devdata;
struct device *dev = thermal->bus_info->dev;
char mtmp_pl[MLXSW_REG_MTMP_LEN];
- unsigned int temp;
+ int temp;
int err;
mlxsw_reg_mtmp_pack(mtmp_pl, 0, false, false);
@@ -290,8 +323,11 @@ static int mlxsw_thermal_get_temp(struct thermal_zone_device *tzdev,
return err;
}
mlxsw_reg_mtmp_unpack(mtmp_pl, &temp, NULL, NULL);
+ if (temp > 0)
+ mlxsw_thermal_tz_score_update(thermal, tzdev, thermal->trips,
+ temp);
- *p_temp = (int) temp;
+ *p_temp = temp;
return 0;
}
@@ -351,6 +387,22 @@ static int mlxsw_thermal_set_trip_hyst(struct thermal_zone_device *tzdev,
return 0;
}
+static int mlxsw_thermal_trend_get(struct thermal_zone_device *tzdev,
+ int trip, enum thermal_trend *trend)
+{
+ struct mlxsw_thermal_module *tz = tzdev->devdata;
+ struct mlxsw_thermal *thermal = tz->parent;
+
+ if (trip < 0 || trip >= MLXSW_THERMAL_NUM_TRIPS)
+ return -EINVAL;
+
+ if (tzdev == thermal->tz_highest_dev)
+ return 1;
+
+ *trend = THERMAL_TREND_STABLE;
+ return 0;
+}
+
static struct thermal_zone_device_ops mlxsw_thermal_ops = {
.bind = mlxsw_thermal_bind,
.unbind = mlxsw_thermal_unbind,
@@ -362,6 +414,7 @@ static struct thermal_zone_device_ops mlxsw_thermal_ops = {
.set_trip_temp = mlxsw_thermal_set_trip_temp,
.get_trip_hyst = mlxsw_thermal_get_trip_hyst,
.set_trip_hyst = mlxsw_thermal_set_trip_hyst,
+ .get_trend = mlxsw_thermal_trend_get,
};
static int mlxsw_thermal_module_bind(struct thermal_zone_device *tzdev,
@@ -449,39 +502,33 @@ static int mlxsw_thermal_module_temp_get(struct thermal_zone_device *tzdev,
struct mlxsw_thermal_module *tz = tzdev->devdata;
struct mlxsw_thermal *thermal = tz->parent;
struct device *dev = thermal->bus_info->dev;
- char mtbr_pl[MLXSW_REG_MTBR_LEN];
- u16 temp;
+ char mtmp_pl[MLXSW_REG_MTMP_LEN];
+ int temp;
int err;
/* Read module temperature. */
- mlxsw_reg_mtbr_pack(mtbr_pl, MLXSW_REG_MTBR_BASE_MODULE_INDEX +
- tz->module, 1);
- err = mlxsw_reg_query(thermal->core, MLXSW_REG(mtbr), mtbr_pl);
- if (err)
- return err;
-
- mlxsw_reg_mtbr_temp_unpack(mtbr_pl, 0, &temp, NULL);
- /* Update temperature. */
- switch (temp) {
- case MLXSW_REG_MTBR_NO_CONN: /* fall-through */
- case MLXSW_REG_MTBR_NO_TEMP_SENS: /* fall-through */
- case MLXSW_REG_MTBR_INDEX_NA: /* fall-through */
- case MLXSW_REG_MTBR_BAD_SENS_INFO:
+ mlxsw_reg_mtmp_pack(mtmp_pl, MLXSW_REG_MTMP_MODULE_INDEX_MIN +
+ tz->module, false, false);
+ err = mlxsw_reg_query(thermal->core, MLXSW_REG(mtmp), mtmp_pl);
+ if (err) {
+ /* Do not return an error - a broken module sensor would
+ * otherwise cause error message flooding.
+ */
temp = 0;
- break;
- default:
- temp = MLXSW_REG_MTMP_TEMP_TO_MC(temp);
- /* Reset all trip point. */
- mlxsw_thermal_module_trips_reset(tz);
- /* Update trip points. */
- err = mlxsw_thermal_module_trips_update(dev, thermal->core,
- tz);
- if (err)
- return err;
- break;
+ *p_temp = (int) temp;
+ return 0;
}
+ mlxsw_reg_mtmp_unpack(mtmp_pl, &temp, NULL, NULL);
+ *p_temp = temp;
+
+ if (!temp)
+ return 0;
+
+ /* Update trip points. */
+ err = mlxsw_thermal_module_trips_update(dev, thermal->core, tz);
+ if (!err && temp > 0)
+ mlxsw_thermal_tz_score_update(thermal, tzdev, tz->trips, temp);
- *p_temp = (int) temp;
return 0;
}
@@ -545,10 +592,6 @@ mlxsw_thermal_module_trip_hyst_set(struct thermal_zone_device *tzdev, int trip,
return 0;
}
-static struct thermal_zone_params mlxsw_thermal_module_params = {
- .governor_name = "user_space",
-};
-
static struct thermal_zone_device_ops mlxsw_thermal_module_ops = {
.bind = mlxsw_thermal_module_bind,
.unbind = mlxsw_thermal_module_unbind,
@@ -560,6 +603,46 @@ static struct thermal_zone_device_ops mlxsw_thermal_module_ops = {
.set_trip_temp = mlxsw_thermal_module_trip_temp_set,
.get_trip_hyst = mlxsw_thermal_module_trip_hyst_get,
.set_trip_hyst = mlxsw_thermal_module_trip_hyst_set,
+ .get_trend = mlxsw_thermal_trend_get,
+};
+
+static int mlxsw_thermal_gearbox_temp_get(struct thermal_zone_device *tzdev,
+ int *p_temp)
+{
+ struct mlxsw_thermal_module *tz = tzdev->devdata;
+ struct mlxsw_thermal *thermal = tz->parent;
+ char mtmp_pl[MLXSW_REG_MTMP_LEN];
+ u16 index;
+ int temp;
+ int err;
+
+ index = MLXSW_REG_MTMP_GBOX_INDEX_MIN + tz->module;
+ mlxsw_reg_mtmp_pack(mtmp_pl, index, false, false);
+
+ err = mlxsw_reg_query(thermal->core, MLXSW_REG(mtmp), mtmp_pl);
+ if (err)
+ return err;
+
+ mlxsw_reg_mtmp_unpack(mtmp_pl, &temp, NULL, NULL);
+ if (temp > 0)
+ mlxsw_thermal_tz_score_update(thermal, tzdev, tz->trips, temp);
+
+ *p_temp = temp;
+ return 0;
+}
+
+static struct thermal_zone_device_ops mlxsw_thermal_gearbox_ops = {
+ .bind = mlxsw_thermal_module_bind,
+ .unbind = mlxsw_thermal_module_unbind,
+ .get_mode = mlxsw_thermal_module_mode_get,
+ .set_mode = mlxsw_thermal_module_mode_set,
+ .get_temp = mlxsw_thermal_gearbox_temp_get,
+ .get_trip_type = mlxsw_thermal_module_trip_type_get,
+ .get_trip_temp = mlxsw_thermal_module_trip_temp_get,
+ .set_trip_temp = mlxsw_thermal_module_trip_temp_set,
+ .get_trip_hyst = mlxsw_thermal_module_trip_hyst_get,
+ .set_trip_hyst = mlxsw_thermal_module_trip_hyst_set,
+ .get_trend = mlxsw_thermal_trend_get,
};
static int mlxsw_thermal_get_max_state(struct thermal_cooling_device *cdev,
@@ -675,13 +758,13 @@ mlxsw_thermal_module_tz_init(struct mlxsw_thermal_module *module_tz)
MLXSW_THERMAL_TRIP_MASK,
module_tz,
&mlxsw_thermal_module_ops,
- &mlxsw_thermal_module_params,
- 0, 0);
+ NULL, 0, 0);
if (IS_ERR(module_tz->tzdev)) {
err = PTR_ERR(module_tz->tzdev);
return err;
}
+ module_tz->mode = THERMAL_DEVICE_ENABLED;
return 0;
}
@@ -787,6 +870,92 @@ mlxsw_thermal_modules_fini(struct mlxsw_thermal *thermal)
kfree(thermal->tz_module_arr);
}
+static int
+mlxsw_thermal_gearbox_tz_init(struct mlxsw_thermal_module *gearbox_tz)
+{
+ char tz_name[MLXSW_THERMAL_ZONE_MAX_NAME];
+
+ snprintf(tz_name, sizeof(tz_name), "mlxsw-gearbox%d",
+ gearbox_tz->module + 1);
+ gearbox_tz->tzdev = thermal_zone_device_register(tz_name,
+ MLXSW_THERMAL_NUM_TRIPS,
+ MLXSW_THERMAL_TRIP_MASK,
+ gearbox_tz,
+ &mlxsw_thermal_gearbox_ops,
+ NULL, 0, 0);
+ if (IS_ERR(gearbox_tz->tzdev))
+ return PTR_ERR(gearbox_tz->tzdev);
+
+ gearbox_tz->mode = THERMAL_DEVICE_ENABLED;
+ return 0;
+}
+
+static void
+mlxsw_thermal_gearbox_tz_fini(struct mlxsw_thermal_module *gearbox_tz)
+{
+ thermal_zone_device_unregister(gearbox_tz->tzdev);
+}
+
+static int
+mlxsw_thermal_gearboxes_init(struct device *dev, struct mlxsw_core *core,
+ struct mlxsw_thermal *thermal)
+{
+ struct mlxsw_thermal_module *gearbox_tz;
+ char mgpir_pl[MLXSW_REG_MGPIR_LEN];
+ int i;
+ int err;
+
+ if (!mlxsw_core_res_query_enabled(core))
+ return 0;
+
+ mlxsw_reg_mgpir_pack(mgpir_pl);
+ err = mlxsw_reg_query(core, MLXSW_REG(mgpir), mgpir_pl);
+ if (err)
+ return err;
+
+ mlxsw_reg_mgpir_unpack(mgpir_pl, &thermal->tz_gearbox_num, NULL, NULL);
+ if (!thermal->tz_gearbox_num)
+ return 0;
+
+ thermal->tz_gearbox_arr = kcalloc(thermal->tz_gearbox_num,
+ sizeof(*thermal->tz_gearbox_arr),
+ GFP_KERNEL);
+ if (!thermal->tz_gearbox_arr)
+ return -ENOMEM;
+
+ for (i = 0; i < thermal->tz_gearbox_num; i++) {
+ gearbox_tz = &thermal->tz_gearbox_arr[i];
+ memcpy(gearbox_tz->trips, default_thermal_trips,
+ sizeof(thermal->trips));
+ gearbox_tz->module = i;
+ gearbox_tz->parent = thermal;
+ err = mlxsw_thermal_gearbox_tz_init(gearbox_tz);
+ if (err)
+ goto err_unreg_tz_gearbox;
+ }
+
+ return 0;
+
+err_unreg_tz_gearbox:
+ for (i--; i >= 0; i--)
+ mlxsw_thermal_gearbox_tz_fini(&thermal->tz_gearbox_arr[i]);
+ kfree(thermal->tz_gearbox_arr);
+ return err;
+}
+
+static void
+mlxsw_thermal_gearboxes_fini(struct mlxsw_thermal *thermal)
+{
+ int i;
+
+ if (!mlxsw_core_res_query_enabled(thermal->core))
+ return;
+
+ for (i = thermal->tz_gearbox_num - 1; i >= 0; i--)
+ mlxsw_thermal_gearbox_tz_fini(&thermal->tz_gearbox_arr[i]);
+ kfree(thermal->tz_gearbox_arr);
+}
+
int mlxsw_thermal_init(struct mlxsw_core *core,
const struct mlxsw_bus_info *bus_info,
struct mlxsw_thermal **p_thermal)
@@ -877,10 +1046,16 @@ int mlxsw_thermal_init(struct mlxsw_core *core,
if (err)
goto err_unreg_tzdev;
+ err = mlxsw_thermal_gearboxes_init(dev, core, thermal);
+ if (err)
+ goto err_unreg_modules_tzdev;
+
thermal->mode = THERMAL_DEVICE_ENABLED;
*p_thermal = thermal;
return 0;
+err_unreg_modules_tzdev:
+ mlxsw_thermal_modules_fini(thermal);
err_unreg_tzdev:
if (thermal->tzdev) {
thermal_zone_device_unregister(thermal->tzdev);
@@ -899,6 +1074,7 @@ void mlxsw_thermal_fini(struct mlxsw_thermal *thermal)
{
int i;
+ mlxsw_thermal_gearboxes_fini(thermal);
mlxsw_thermal_modules_fini(thermal);
if (thermal->tzdev) {
thermal_zone_device_unregister(thermal->tzdev);
diff --git a/drivers/net/ethernet/mellanox/mlxsw/i2c.c b/drivers/net/ethernet/mellanox/mlxsw/i2c.c
index 06aea1999518..95f408d0e103 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/i2c.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/i2c.c
@@ -43,11 +43,10 @@
#define MLXSW_I2C_PREP_SIZE (MLXSW_I2C_ADDR_WIDTH + 28)
#define MLXSW_I2C_MBOX_SIZE 20
#define MLXSW_I2C_MBOX_OUT_PARAM_OFF 12
-#define MLXSW_I2C_MAX_BUFF_SIZE 32
#define MLXSW_I2C_MBOX_OFFSET_BITS 20
#define MLXSW_I2C_MBOX_SIZE_BITS 12
#define MLXSW_I2C_ADDR_BUF_SIZE 4
-#define MLXSW_I2C_BLK_MAX 32
+#define MLXSW_I2C_BLK_DEF 32
#define MLXSW_I2C_RETRY 5
#define MLXSW_I2C_TIMEOUT_MSECS 5000
#define MLXSW_I2C_MAX_DATA_SIZE 256
@@ -62,6 +61,7 @@
* @dev: I2C device;
* @core: switch core pointer;
* @bus_info: bus info block;
+ * @block_size: maximum block size allowed to pass to the underlying layer;
*/
struct mlxsw_i2c {
struct {
@@ -74,6 +74,7 @@ struct mlxsw_i2c {
struct device *dev;
struct mlxsw_core *core;
struct mlxsw_bus_info bus_info;
+ u16 block_size;
};
#define MLXSW_I2C_READ_MSG(_client, _addr_buf, _buf, _len) { \
@@ -315,20 +316,26 @@ mlxsw_i2c_write(struct device *dev, size_t in_mbox_size, u8 *in_mbox, int num,
struct i2c_client *client = to_i2c_client(dev);
struct mlxsw_i2c *mlxsw_i2c = i2c_get_clientdata(client);
unsigned long timeout = msecs_to_jiffies(MLXSW_I2C_TIMEOUT_MSECS);
- u8 tran_buf[MLXSW_I2C_MAX_BUFF_SIZE + MLXSW_I2C_ADDR_BUF_SIZE];
int off = mlxsw_i2c->cmd.mb_off_in, chunk_size, i, j;
unsigned long end;
+ u8 *tran_buf;
struct i2c_msg write_tran =
- MLXSW_I2C_WRITE_MSG(client, tran_buf, MLXSW_I2C_PUSH_CMD_SIZE);
+ MLXSW_I2C_WRITE_MSG(client, NULL, MLXSW_I2C_PUSH_CMD_SIZE);
int err;
+ tran_buf = kmalloc(mlxsw_i2c->block_size + MLXSW_I2C_ADDR_BUF_SIZE,
+ GFP_KERNEL);
+ if (!tran_buf)
+ return -ENOMEM;
+
+ write_tran.buf = tran_buf;
for (i = 0; i < num; i++) {
- chunk_size = (in_mbox_size > MLXSW_I2C_BLK_MAX) ?
- MLXSW_I2C_BLK_MAX : in_mbox_size;
+ chunk_size = (in_mbox_size > mlxsw_i2c->block_size) ?
+ mlxsw_i2c->block_size : in_mbox_size;
write_tran.len = MLXSW_I2C_ADDR_WIDTH + chunk_size;
mlxsw_i2c_set_slave_addr(tran_buf, off);
memcpy(&tran_buf[MLXSW_I2C_ADDR_BUF_SIZE], in_mbox +
- MLXSW_I2C_BLK_MAX * i, chunk_size);
+ mlxsw_i2c->block_size * i, chunk_size);
j = 0;
end = jiffies + timeout;
@@ -342,9 +349,10 @@ mlxsw_i2c_write(struct device *dev, size_t in_mbox_size, u8 *in_mbox, int num,
(j++ < MLXSW_I2C_RETRY));
if (err != 1) {
- if (!err)
+ if (!err) {
err = -EIO;
- return err;
+ goto mlxsw_i2c_write_exit;
+ }
}
off += chunk_size;
@@ -355,24 +363,27 @@ mlxsw_i2c_write(struct device *dev, size_t in_mbox_size, u8 *in_mbox, int num,
err = mlxsw_i2c_write_cmd(client, mlxsw_i2c, 0);
if (err) {
dev_err(&client->dev, "Could not start transaction");
- return -EIO;
+ err = -EIO;
+ goto mlxsw_i2c_write_exit;
}
/* Wait until go bit is cleared. */
err = mlxsw_i2c_wait_go_bit(client, mlxsw_i2c, p_status);
if (err) {
dev_err(&client->dev, "HW semaphore is not released");
- return err;
+ goto mlxsw_i2c_write_exit;
}
/* Validate transaction completion status. */
if (*p_status) {
dev_err(&client->dev, "Bad transaction completion status %x\n",
*p_status);
- return -EIO;
+ err = -EIO;
}
- return 0;
+mlxsw_i2c_write_exit:
+ kfree(tran_buf);
+ return err;
}
/* Routine executes I2C command. */
@@ -395,8 +406,8 @@ mlxsw_i2c_cmd(struct device *dev, u16 opcode, u32 in_mod, size_t in_mbox_size,
if (in_mbox) {
reg_size = mlxsw_i2c_get_reg_size(in_mbox);
- num = reg_size / MLXSW_I2C_BLK_MAX;
- if (reg_size % MLXSW_I2C_BLK_MAX)
+ num = reg_size / mlxsw_i2c->block_size;
+ if (reg_size % mlxsw_i2c->block_size)
num++;
if (mutex_lock_interruptible(&mlxsw_i2c->cmd.lock) < 0) {
@@ -416,7 +427,7 @@ mlxsw_i2c_cmd(struct device *dev, u16 opcode, u32 in_mod, size_t in_mbox_size,
} else {
/* No input mailbox is case of initialization query command. */
reg_size = MLXSW_I2C_MAX_DATA_SIZE;
- num = reg_size / MLXSW_I2C_BLK_MAX;
+ num = reg_size / mlxsw_i2c->block_size;
if (mutex_lock_interruptible(&mlxsw_i2c->cmd.lock) < 0) {
dev_err(&client->dev, "Could not acquire lock");
@@ -432,8 +443,8 @@ mlxsw_i2c_cmd(struct device *dev, u16 opcode, u32 in_mod, size_t in_mbox_size,
/* Send read transaction to get output mailbox content. */
read_tran[1].buf = out_mbox;
for (i = 0; i < num; i++) {
- chunk_size = (reg_size > MLXSW_I2C_BLK_MAX) ?
- MLXSW_I2C_BLK_MAX : reg_size;
+ chunk_size = (reg_size > mlxsw_i2c->block_size) ?
+ mlxsw_i2c->block_size : reg_size;
read_tran[1].len = chunk_size;
mlxsw_i2c_set_slave_addr(tran_buf, off);
@@ -509,8 +520,20 @@ mlxsw_i2c_init(void *bus_priv, struct mlxsw_core *mlxsw_core,
if (!mbox)
return -ENOMEM;
+ err = mlxsw_cmd_query_fw(mlxsw_core, mbox);
+ if (err)
+ goto mbox_put;
+
+ mlxsw_i2c->bus_info.fw_rev.major =
+ mlxsw_cmd_mbox_query_fw_fw_rev_major_get(mbox);
+ mlxsw_i2c->bus_info.fw_rev.minor =
+ mlxsw_cmd_mbox_query_fw_fw_rev_minor_get(mbox);
+ mlxsw_i2c->bus_info.fw_rev.subminor =
+ mlxsw_cmd_mbox_query_fw_fw_rev_subminor_get(mbox);
+
err = mlxsw_core_resources_query(mlxsw_core, mbox, res);
+mbox_put:
mlxsw_cmd_mbox_free(mbox);
return err;
}
@@ -534,6 +557,7 @@ static const struct mlxsw_bus mlxsw_i2c_bus = {
static int mlxsw_i2c_probe(struct i2c_client *client,
const struct i2c_device_id *id)
{
+ const struct i2c_adapter_quirks *quirks = client->adapter->quirks;
struct mlxsw_i2c *mlxsw_i2c;
u8 status;
int err;
@@ -542,6 +566,22 @@ static int mlxsw_i2c_probe(struct i2c_client *client,
if (!mlxsw_i2c)
return -ENOMEM;
+ if (quirks) {
+ if ((quirks->max_read_len &&
+ quirks->max_read_len < MLXSW_I2C_BLK_DEF) ||
+ (quirks->max_write_len &&
+ quirks->max_write_len < MLXSW_I2C_BLK_DEF)) {
+ dev_err(&client->dev, "Insufficient transaction buffer length\n");
+ return -EOPNOTSUPP;
+ }
+
+ mlxsw_i2c->block_size = max_t(u16, MLXSW_I2C_BLK_DEF,
+ min_t(u16, quirks->max_read_len,
+ quirks->max_write_len));
+ } else {
+ mlxsw_i2c->block_size = MLXSW_I2C_BLK_DEF;
+ }
+
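For instance (values invented for illustration), an adapter whose quirks limit reads to 128 bytes and writes to 64 bytes yields block_size = max(32, min(128, 64)) = 64, while an adapter limited to 16-byte transfers is rejected with -EOPNOTSUPP because it cannot carry even the default 32-byte block.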
i2c_set_clientdata(client, mlxsw_i2c);
mutex_init(&mlxsw_i2c->cmd.lock);
diff --git a/drivers/net/ethernet/mellanox/mlxsw/minimal.c b/drivers/net/ethernet/mellanox/mlxsw/minimal.c
index cf2114273b72..471b0ca6d69a 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/minimal.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/minimal.c
@@ -67,6 +67,23 @@ static const struct net_device_ops mlxsw_m_port_netdev_ops = {
.ndo_get_devlink_port = mlxsw_m_port_get_devlink_port,
};
+static void mlxsw_m_module_get_drvinfo(struct net_device *dev,
+ struct ethtool_drvinfo *drvinfo)
+{
+ struct mlxsw_m_port *mlxsw_m_port = netdev_priv(dev);
+ struct mlxsw_m *mlxsw_m = mlxsw_m_port->mlxsw_m;
+
+ strlcpy(drvinfo->driver, mlxsw_m->bus_info->device_kind,
+ sizeof(drvinfo->driver));
+ snprintf(drvinfo->fw_version, sizeof(drvinfo->fw_version),
+ "%d.%d.%d",
+ mlxsw_m->bus_info->fw_rev.major,
+ mlxsw_m->bus_info->fw_rev.minor,
+ mlxsw_m->bus_info->fw_rev.subminor);
+ strlcpy(drvinfo->bus_info, mlxsw_m->bus_info->device_name,
+ sizeof(drvinfo->bus_info));
+}
+
static int mlxsw_m_get_module_info(struct net_device *netdev,
struct ethtool_modinfo *modinfo)
{
@@ -88,6 +105,7 @@ mlxsw_m_get_module_eeprom(struct net_device *netdev, struct ethtool_eeprom *ee,
}
static const struct ethtool_ops mlxsw_m_port_ethtool_ops = {
+ .get_drvinfo = mlxsw_m_module_get_drvinfo,
.get_module_info = mlxsw_m_get_module_info,
.get_module_eeprom = mlxsw_m_get_module_eeprom,
};
diff --git a/drivers/net/ethernet/mellanox/mlxsw/pci.c b/drivers/net/ethernet/mellanox/mlxsw/pci.c
index b40455f8293d..051b19388a81 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/pci.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/pci.c
@@ -102,6 +102,7 @@ struct mlxsw_pci_queue_type_group {
struct mlxsw_pci {
struct pci_dev *pdev;
u8 __iomem *hw_addr;
+ u64 free_running_clock_offset;
struct mlxsw_pci_queue_type_group queues[MLXSW_PCI_QUEUE_TYPE_COUNT];
u32 doorbell_offset;
struct mlxsw_core *core;
@@ -507,17 +508,28 @@ static void mlxsw_pci_cqe_sdq_handle(struct mlxsw_pci *mlxsw_pci,
{
struct pci_dev *pdev = mlxsw_pci->pdev;
struct mlxsw_pci_queue_elem_info *elem_info;
+ struct mlxsw_tx_info tx_info;
char *wqe;
struct sk_buff *skb;
int i;
spin_lock(&q->lock);
elem_info = mlxsw_pci_queue_elem_info_consumer_get(q);
+ tx_info = mlxsw_skb_cb(elem_info->u.sdq.skb)->tx_info;
skb = elem_info->u.sdq.skb;
wqe = elem_info->elem;
for (i = 0; i < MLXSW_PCI_WQE_SG_ENTRIES; i++)
mlxsw_pci_wqe_frag_unmap(mlxsw_pci, wqe, i, DMA_TO_DEVICE);
- dev_kfree_skb_any(skb);
+
+ if (unlikely(!tx_info.is_emad &&
+ skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)) {
+ mlxsw_core_ptp_transmitted(mlxsw_pci->core, skb,
+ tx_info.local_port);
+ skb = NULL;
+ }
+
+ if (skb)
+ dev_kfree_skb_any(skb);
elem_info->u.sdq.skb = NULL;
if (q->consumer_counter++ != consumer_counter_limit)
@@ -1414,6 +1426,15 @@ static int mlxsw_pci_init(void *bus_priv, struct mlxsw_core *mlxsw_core,
mlxsw_pci->doorbell_offset =
mlxsw_cmd_mbox_query_fw_doorbell_page_offset_get(mbox);
+ if (mlxsw_cmd_mbox_query_fw_fr_rn_clk_bar_get(mbox) != 0) {
+ dev_err(&pdev->dev, "Unsupported free running clock BAR queried from hw\n");
+ err = -EINVAL;
+ goto err_fr_rn_clk_bar;
+ }
+
+ mlxsw_pci->free_running_clock_offset =
+ mlxsw_cmd_mbox_query_fw_free_running_clock_offset_get(mbox);
+
num_pages = mlxsw_cmd_mbox_query_fw_fw_pages_get(mbox);
err = mlxsw_pci_fw_area_init(mlxsw_pci, mbox, num_pages);
if (err)
@@ -1469,6 +1490,7 @@ err_query_resources:
err_boardinfo:
mlxsw_pci_fw_area_fini(mlxsw_pci);
err_fw_area_init:
+err_fr_rn_clk_bar:
err_doorbell_page_bar:
err_iface_rev:
err_query_fw:
@@ -1537,6 +1559,7 @@ static int mlxsw_pci_skb_transmit(void *bus_priv, struct sk_buff *skb,
err = -EAGAIN;
goto unlock;
}
+ mlxsw_skb_cb(skb)->tx_info = *tx_info;
elem_info->u.sdq.skb = skb;
wqe = elem_info->elem;
@@ -1560,6 +1583,9 @@ static int mlxsw_pci_skb_transmit(void *bus_priv, struct sk_buff *skb,
goto unmap_frags;
}
+ if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP))
+ skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
+
/* Set unused sq entries byte count to zero. */
for (i++; i < MLXSW_PCI_WQE_SG_ENTRIES; i++)
mlxsw_pci_wqe_byte_count_set(wqe, i, 0);
@@ -1672,6 +1698,24 @@ static int mlxsw_pci_cmd_exec(void *bus_priv, u16 opcode, u8 opcode_mod,
return err;
}
+static u32 mlxsw_pci_read_frc_h(void *bus_priv)
+{
+ struct mlxsw_pci *mlxsw_pci = bus_priv;
+ u64 frc_offset;
+
+ frc_offset = mlxsw_pci->free_running_clock_offset;
+ return mlxsw_pci_read32(mlxsw_pci, FREE_RUNNING_CLOCK_H(frc_offset));
+}
+
+static u32 mlxsw_pci_read_frc_l(void *bus_priv)
+{
+ struct mlxsw_pci *mlxsw_pci = bus_priv;
+ u64 frc_offset;
+
+ frc_offset = mlxsw_pci->free_running_clock_offset;
+ return mlxsw_pci_read32(mlxsw_pci, FREE_RUNNING_CLOCK_L(frc_offset));
+}
+
static const struct mlxsw_bus mlxsw_pci_bus = {
.kind = "pci",
.init = mlxsw_pci_init,
@@ -1679,6 +1723,8 @@ static const struct mlxsw_bus mlxsw_pci_bus = {
.skb_transmit_busy = mlxsw_pci_skb_transmit_busy,
.skb_transmit = mlxsw_pci_skb_transmit,
.cmd_exec = mlxsw_pci_cmd_exec,
+ .read_frc_h = mlxsw_pci_read_frc_h,
+ .read_frc_l = mlxsw_pci_read_frc_l,
.features = MLXSW_BUS_F_TXRX | MLXSW_BUS_F_RESET,
};
@@ -1740,6 +1786,7 @@ static int mlxsw_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
mlxsw_pci->bus_info.device_kind = driver_name;
mlxsw_pci->bus_info.device_name = pci_name(mlxsw_pci->pdev);
mlxsw_pci->bus_info.dev = &pdev->dev;
+ mlxsw_pci->bus_info.read_frc_capable = true;
mlxsw_pci->id = id;
err = mlxsw_core_bus_device_register(&mlxsw_pci->bus_info,
diff --git a/drivers/net/ethernet/mellanox/mlxsw/pci_hw.h b/drivers/net/ethernet/mellanox/mlxsw/pci_hw.h
index 8648ca171254..e57e42e2d2b2 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/pci_hw.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/pci_hw.h
@@ -43,6 +43,9 @@
#define MLXSW_PCI_DOORBELL(offset, type_offset, num) \
((offset) + (type_offset) + (num) * 4)
+#define MLXSW_PCI_FREE_RUNNING_CLOCK_H(offset) (offset)
+#define MLXSW_PCI_FREE_RUNNING_CLOCK_L(offset) ((offset) + 4)
+
#define MLXSW_PCI_CQS_MAX 96
#define MLXSW_PCI_EQS_COUNT 2
#define MLXSW_PCI_EQ_ASYNC_NUM 0
diff --git a/drivers/net/ethernet/mellanox/mlxsw/reg.h b/drivers/net/ethernet/mellanox/mlxsw/reg.h
index 7ed63ed657c7..ead36702549a 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/reg.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/reg.h
@@ -3515,6 +3515,18 @@ MLXSW_ITEM32(reg, qeec, next_element_index, 0x08, 0, 8);
*/
MLXSW_ITEM32(reg, qeec, mise, 0x0C, 31, 1);
+/* reg_qeec_ptps
+ * PTP shaper
+ * 0: regular shaper mode
+ * 1: PTP oriented shaper
+ * Allowed only for hierarchy 0
+ * Not supported for CPU port
+ * Note that ptps mode may affect the shaper rates of all hierarchies
+ * Supported only on Spectrum-1
+ * Access: RW
+ */
+MLXSW_ITEM32(reg, qeec, ptps, 0x0C, 29, 1);
+
enum {
MLXSW_REG_QEEC_BYTES_MODE,
MLXSW_REG_QEEC_PACKETS_MODE,
@@ -3601,6 +3613,16 @@ static inline void mlxsw_reg_qeec_pack(char *payload, u8 local_port,
mlxsw_reg_qeec_next_element_index_set(payload, next_index);
}
+static inline void mlxsw_reg_qeec_ptps_pack(char *payload, u8 local_port,
+ bool ptps)
+{
+ MLXSW_REG_ZERO(qeec, payload);
+ mlxsw_reg_qeec_local_port_set(payload, local_port);
+ mlxsw_reg_qeec_element_hierarchy_set(payload,
+ MLXSW_REG_QEEC_HIERARCY_PORT);
+ mlxsw_reg_qeec_ptps_set(payload, ptps);
+}
+
/* QRWE - QoS ReWrite Enable
* -------------------------
* This register configures the rewrite enable per receive port.
@@ -3814,6 +3836,112 @@ mlxsw_reg_qtctm_pack(char *payload, u8 local_port, bool mc)
mlxsw_reg_qtctm_mc_set(payload, mc);
}
+/* QPSC - QoS PTP Shaper Configuration Register
+ * --------------------------------------------
+ * The QPSC allows advanced configuration of the shapers when QEEC.ptps=1.
+ * Supported only on Spectrum-1.
+ */
+#define MLXSW_REG_QPSC_ID 0x401B
+#define MLXSW_REG_QPSC_LEN 0x28
+
+MLXSW_REG_DEFINE(qpsc, MLXSW_REG_QPSC_ID, MLXSW_REG_QPSC_LEN);
+
+enum mlxsw_reg_qpsc_port_speed {
+ MLXSW_REG_QPSC_PORT_SPEED_100M,
+ MLXSW_REG_QPSC_PORT_SPEED_1G,
+ MLXSW_REG_QPSC_PORT_SPEED_10G,
+ MLXSW_REG_QPSC_PORT_SPEED_25G,
+};
+
+/* reg_qpsc_port_speed
+ * Port speed.
+ * Access: Index
+ */
+MLXSW_ITEM32(reg, qpsc, port_speed, 0x00, 0, 4);
+
+/* reg_qpsc_shaper_time_exp
+ * The base-time-interval for updating the shapers tokens (for all hierarchies).
+ * shaper_update_rate = 2 ^ shaper_time_exp * (1 + shaper_time_mantissa) * 32nSec
+ * shaper_rate = 64bit * shaper_inc / shaper_update_rate
+ * Access: RW
+ */
+MLXSW_ITEM32(reg, qpsc, shaper_time_exp, 0x04, 16, 4);
+
+/* reg_qpsc_shaper_time_mantissa
+ * The base-time-interval for updating the shapers tokens (for all hierarchies).
+ * shaper_update_rate = 2 ^ shaper_time_exp * (1 + shaper_time_mantissa) * 32nSec
+ * shaper_rate = 64bit * shaper_inc / shaper_update_rate
+ * Access: RW
+ */
+MLXSW_ITEM32(reg, qpsc, shaper_time_mantissa, 0x04, 0, 5);
+
+/* reg_qpsc_shaper_inc
+ * Number of tokens added to shaper on each update.
+ * Units of 8B.
+ * Access: RW
+ */
+MLXSW_ITEM32(reg, qpsc, shaper_inc, 0x08, 0, 5);
+
+/* reg_qpsc_shaper_bs
+ * Max shaper Burst size.
+ * Burst size is 2 ^ max_shaper_bs * 512 [bits]
+ * Range is: 5..25 (from 2KB..2GB)
+ * Access: RW
+ */
+MLXSW_ITEM32(reg, qpsc, shaper_bs, 0x0C, 0, 6);
+
+/* reg_qpsc_ptsc_we
+ * Write enable to port_to_shaper_credits.
+ * Access: WO
+ */
+MLXSW_ITEM32(reg, qpsc, ptsc_we, 0x10, 31, 1);
+
+/* reg_qpsc_port_to_shaper_credits
+ * For split ports: range 1..57
+ * For non-split ports: range 1..112
+ * Written only when ptsc_we is set.
+ * Access: RW
+ */
+MLXSW_ITEM32(reg, qpsc, port_to_shaper_credits, 0x10, 0, 8);
+
+/* reg_qpsc_ing_timestamp_inc
+ * Ingress timestamp increment.
+ * 2's complement.
+ * The timestamp of MTPPTR at ingress will be incremented by this value. Global
+ * value for all ports.
+ * Same units as used by MTPPTR.
+ * Access: RW
+ */
+MLXSW_ITEM32(reg, qpsc, ing_timestamp_inc, 0x20, 0, 32);
+
+/* reg_qpsc_egr_timestamp_inc
+ * Egress timestamp increment.
+ * 2's complement.
+ * The timestamp of MTPPTR at egress will be incremented by this value. Global
+ * value for all ports.
+ * Same units as used by MTPPTR.
+ * Access: RW
+ */
+MLXSW_ITEM32(reg, qpsc, egr_timestamp_inc, 0x24, 0, 32);
+
+static inline void
+mlxsw_reg_qpsc_pack(char *payload, enum mlxsw_reg_qpsc_port_speed port_speed,
+ u8 shaper_time_exp, u8 shaper_time_mantissa, u8 shaper_inc,
+ u8 shaper_bs, u8 port_to_shaper_credits,
+ int ing_timestamp_inc, int egr_timestamp_inc)
+{
+ MLXSW_REG_ZERO(qpsc, payload);
+ mlxsw_reg_qpsc_port_speed_set(payload, port_speed);
+ mlxsw_reg_qpsc_shaper_time_exp_set(payload, shaper_time_exp);
+ mlxsw_reg_qpsc_shaper_time_mantissa_set(payload, shaper_time_mantissa);
+ mlxsw_reg_qpsc_shaper_inc_set(payload, shaper_inc);
+ mlxsw_reg_qpsc_shaper_bs_set(payload, shaper_bs);
+ mlxsw_reg_qpsc_ptsc_we_set(payload, true);
+ mlxsw_reg_qpsc_port_to_shaper_credits_set(payload, port_to_shaper_credits);
+ mlxsw_reg_qpsc_ing_timestamp_inc_set(payload, ing_timestamp_inc);
+ mlxsw_reg_qpsc_egr_timestamp_inc_set(payload, egr_timestamp_inc);
+}
+
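Plugging illustrative numbers into the formulas above: with shaper_time_exp = 1, shaper_time_mantissa = 4 and shaper_inc = 12, the update interval is 2^1 * (1 + 4) * 32 ns = 320 ns and the resulting shaper rate is 64 bit * 12 / 320 ns = 2.4 Gbit/s. These values are chosen only to show how the fields combine; they are not the settings the driver programs.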
/* PMLP - Ports Module to Local Port Register
* ------------------------------------------
* Configures the assignment of modules to local ports.
@@ -5292,6 +5420,8 @@ enum mlxsw_reg_htgt_trap_group {
MLXSW_REG_HTGT_TRAP_GROUP_SP_IPV6_MLD,
MLXSW_REG_HTGT_TRAP_GROUP_SP_IPV6_ND,
MLXSW_REG_HTGT_TRAP_GROUP_SP_LBERROR,
+ MLXSW_REG_HTGT_TRAP_GROUP_SP_PTP0,
+ MLXSW_REG_HTGT_TRAP_GROUP_SP_PTP1,
};
/* reg_htgt_trap_group
@@ -8039,16 +8169,21 @@ MLXSW_ITEM32(reg, mtcap, sensor_count, 0x00, 0, 7);
MLXSW_REG_DEFINE(mtmp, MLXSW_REG_MTMP_ID, MLXSW_REG_MTMP_LEN);
+#define MLXSW_REG_MTMP_MODULE_INDEX_MIN 64
+#define MLXSW_REG_MTMP_GBOX_INDEX_MIN 256
/* reg_mtmp_sensor_index
* Sensors index to access.
* 64-127 of sensor_index are mapped to the SFP+/QSFP modules sequentially
* (module 0 is mapped to sensor_index 64).
* Access: Index
*/
-MLXSW_ITEM32(reg, mtmp, sensor_index, 0x00, 0, 7);
+MLXSW_ITEM32(reg, mtmp, sensor_index, 0x00, 0, 12);
/* Convert to milli degrees Celsius */
-#define MLXSW_REG_MTMP_TEMP_TO_MC(val) (val * 125)
+#define MLXSW_REG_MTMP_TEMP_TO_MC(val) ({ typeof(val) v_ = (val); \
+ ((v_) >= 0) ? ((v_) * 125) : \
+ ((s16)((GENMASK(15, 0) + (v_) + 1) \
+ * 125)); })
/* reg_mtmp_temperature
* Temperature reading from the sensor. Reading is in 0.125 Celsius
@@ -8107,7 +8242,7 @@ MLXSW_ITEM32(reg, mtmp, temperature_threshold_lo, 0x10, 0, 16);
*/
MLXSW_ITEM_BUF(reg, mtmp, sensor_name, 0x18, MLXSW_REG_MTMP_SENSOR_NAME_SIZE);
-static inline void mlxsw_reg_mtmp_pack(char *payload, u8 sensor_index,
+static inline void mlxsw_reg_mtmp_pack(char *payload, u16 sensor_index,
bool max_temp_enable,
bool max_temp_reset)
{
@@ -8119,11 +8254,10 @@ static inline void mlxsw_reg_mtmp_pack(char *payload, u8 sensor_index,
MLXSW_REG_MTMP_THRESH_HI);
}
-static inline void mlxsw_reg_mtmp_unpack(char *payload, unsigned int *p_temp,
- unsigned int *p_max_temp,
- char *sensor_name)
+static inline void mlxsw_reg_mtmp_unpack(char *payload, int *p_temp,
+ int *p_max_temp, char *sensor_name)
{
- u16 temp;
+ s16 temp;
if (p_temp) {
temp = mlxsw_reg_mtmp_temperature_get(payload);
@@ -8156,7 +8290,7 @@ MLXSW_REG_DEFINE(mtbr, MLXSW_REG_MTBR_ID, MLXSW_REG_MTBR_LEN);
* 64-127 are mapped to the SFP+/QSFP modules sequentially).
* Access: Index
*/
-MLXSW_ITEM32(reg, mtbr, base_sensor_index, 0x00, 0, 7);
+MLXSW_ITEM32(reg, mtbr, base_sensor_index, 0x00, 0, 12);
/* reg_mtbr_num_rec
* Request: Number of records to read
@@ -8183,7 +8317,7 @@ MLXSW_ITEM32_INDEXED(reg, mtbr, rec_max_temp, MLXSW_REG_MTBR_BASE_LEN, 16,
MLXSW_ITEM32_INDEXED(reg, mtbr, rec_temp, MLXSW_REG_MTBR_BASE_LEN, 0, 16,
MLXSW_REG_MTBR_REC_LEN, 0x00, false);
-static inline void mlxsw_reg_mtbr_pack(char *payload, u8 base_sensor_index,
+static inline void mlxsw_reg_mtbr_pack(char *payload, u16 base_sensor_index,
u8 num_rec)
{
MLXSW_REG_ZERO(mtbr, payload);
@@ -8689,6 +8823,107 @@ static inline void mlxsw_reg_mlcr_pack(char *payload, u8 local_port,
MLXSW_REG_MLCR_DURATION_MAX : 0);
}
+/* MTPPS - Management Pulse Per Second Register
+ * --------------------------------------------
+ * This register provides the device PPS capabilities, configure the PPS in and
+ * out modules and holds the PPS in time stamp.
+ */
+#define MLXSW_REG_MTPPS_ID 0x9053
+#define MLXSW_REG_MTPPS_LEN 0x3C
+
+MLXSW_REG_DEFINE(mtpps, MLXSW_REG_MTPPS_ID, MLXSW_REG_MTPPS_LEN);
+
+/* reg_mtpps_enable
+ * Enables the PPS functionality for the specific pin.
+ * A boolean variable.
+ * Access: RW
+ */
+MLXSW_ITEM32(reg, mtpps, enable, 0x20, 31, 1);
+
+enum mlxsw_reg_mtpps_pin_mode {
+ MLXSW_REG_MTPPS_PIN_MODE_VIRTUAL_PIN = 0x2,
+};
+
+/* reg_mtpps_pin_mode
+ * Pin mode to be used. The mode must comply with the supported modes of the
+ * requested pin.
+ * Access: RW
+ */
+MLXSW_ITEM32(reg, mtpps, pin_mode, 0x20, 8, 4);
+
+#define MLXSW_REG_MTPPS_PIN_SP_VIRTUAL_PIN 7
+
+/* reg_mtpps_pin
+ * Pin to be configured or queried out of the supported pins.
+ * Access: Index
+ */
+MLXSW_ITEM32(reg, mtpps, pin, 0x20, 0, 8);
+
+/* reg_mtpps_time_stamp
+ * When pin_mode = pps_in, the latched device time when it was triggered from
+ * the external GPIO pin.
+ * When pin_mode = pps_out or virtual_pin or pps_out_and_virtual_pin, the target
+ * time at which to generate the next output signal.
+ * Time is in units of device clock.
+ * Access: RW
+ */
+MLXSW_ITEM64(reg, mtpps, time_stamp, 0x28, 0, 64);
+
+static inline void
+mlxsw_reg_mtpps_vpin_pack(char *payload, u64 time_stamp)
+{
+ MLXSW_REG_ZERO(mtpps, payload);
+ mlxsw_reg_mtpps_pin_set(payload, MLXSW_REG_MTPPS_PIN_SP_VIRTUAL_PIN);
+ mlxsw_reg_mtpps_pin_mode_set(payload,
+ MLXSW_REG_MTPPS_PIN_MODE_VIRTUAL_PIN);
+ mlxsw_reg_mtpps_enable_set(payload, true);
+ mlxsw_reg_mtpps_time_stamp_set(payload, time_stamp);
+}
+
+/* MTUTC - Management UTC Register
+ * -------------------------------
+ * Configures the HW UTC counter.
+ */
+#define MLXSW_REG_MTUTC_ID 0x9055
+#define MLXSW_REG_MTUTC_LEN 0x1C
+
+MLXSW_REG_DEFINE(mtutc, MLXSW_REG_MTUTC_ID, MLXSW_REG_MTUTC_LEN);
+
+enum mlxsw_reg_mtutc_operation {
+ MLXSW_REG_MTUTC_OPERATION_SET_TIME_AT_NEXT_SEC = 0,
+ MLXSW_REG_MTUTC_OPERATION_ADJUST_FREQ = 3,
+};
+
+/* reg_mtutc_operation
+ * Operation.
+ * Access: OP
+ */
+MLXSW_ITEM32(reg, mtutc, operation, 0x00, 0, 4);
+
+/* reg_mtutc_freq_adjustment
+ * Frequency adjustment: Every PPS the HW frequency will be
+ * adjusted by this value. Units of HW clock, where HW counts
+ * 10^9 HW clocks for 1 HW second.
+ * Access: RW
+ */
+MLXSW_ITEM32(reg, mtutc, freq_adjustment, 0x04, 0, 32);
+
+/* reg_mtutc_utc_sec
+ * UTC seconds.
+ * Access: WO
+ */
+MLXSW_ITEM32(reg, mtutc, utc_sec, 0x10, 0, 32);
+
+static inline void
+mlxsw_reg_mtutc_pack(char *payload, enum mlxsw_reg_mtutc_operation oper,
+ u32 freq_adj, u32 utc_sec)
+{
+ MLXSW_REG_ZERO(mtutc, payload);
+ mlxsw_reg_mtutc_operation_set(payload, oper);
+ mlxsw_reg_mtutc_freq_adjustment_set(payload, freq_adj);
+ mlxsw_reg_mtutc_utc_sec_set(payload, utc_sec);
+}
+
/* MCQI - Management Component Query Information
* ---------------------------------------------
* This register allows querying information about firmware components.
@@ -9043,6 +9278,267 @@ static inline void mlxsw_reg_mprs_pack(char *payload, u16 parsing_depth,
mlxsw_reg_mprs_vxlan_udp_dport_set(payload, vxlan_udp_dport);
}
+/* MOGCR - Monitoring Global Configuration Register
+ * ------------------------------------------------
+ */
+#define MLXSW_REG_MOGCR_ID 0x9086
+#define MLXSW_REG_MOGCR_LEN 0x20
+
+MLXSW_REG_DEFINE(mogcr, MLXSW_REG_MOGCR_ID, MLXSW_REG_MOGCR_LEN);
+
+/* reg_mogcr_ptp_iftc
+ * PTP Ingress FIFO Trap Clear
+ * The PTP_ING_FIFO trap provides MTPPTR with clr according
+ * to this value. Default 0.
+ * Reserved for IB switches and for SwitchX/-2 and Spectrum-2.
+ * Access: RW
+ */
+MLXSW_ITEM32(reg, mogcr, ptp_iftc, 0x00, 1, 1);
+
+/* reg_mogcr_ptp_eftc
+ * PTP Egress FIFO Trap Clear
+ * The PTP_EGR_FIFO trap provides MTPPTR with clr according
+ * to this value. Default 0.
+ * Reserved for IB switches and for SwitchX/-2 and Spectrum-2.
+ * Access: RW
+ */
+MLXSW_ITEM32(reg, mogcr, ptp_eftc, 0x00, 0, 1);
+
+/* MTPPPC - Time Precision Packet Port Configuration
+ * -------------------------------------------------
+ * This register serves for configuration of which PTP messages should be
+ * timestamped. This is a global configuration, despite the register name.
+ *
+ * Reserved when Spectrum-2.
+ */
+#define MLXSW_REG_MTPPPC_ID 0x9090
+#define MLXSW_REG_MTPPPC_LEN 0x28
+
+MLXSW_REG_DEFINE(mtpppc, MLXSW_REG_MTPPPC_ID, MLXSW_REG_MTPPPC_LEN);
+
+/* reg_mtpppc_ing_timestamp_message_type
+ * Bitwise vector of PTP message types to timestamp at ingress.
+ * MessageType field as defined by IEEE 1588
+ * Each bit corresponds to a value (e.g. Bit0: Sync, Bit1: Delay_Req)
+ * Default all 0
+ * Access: RW
+ */
+MLXSW_ITEM32(reg, mtpppc, ing_timestamp_message_type, 0x08, 0, 16);
+
+/* reg_mtpppc_egr_timestamp_message_type
+ * Bitwise vector of PTP message types to timestamp at egress.
+ * MessageType field as defined by IEEE 1588
+ * Each bit corresponds to a value (e.g. Bit0: Sync, Bit1: Delay_Req)
+ * Default all 0
+ * Access: RW
+ */
+MLXSW_ITEM32(reg, mtpppc, egr_timestamp_message_type, 0x0C, 0, 16);
+
+static inline void mlxsw_reg_mtpppc_pack(char *payload, u16 ing, u16 egr)
+{
+ MLXSW_REG_ZERO(mtpppc, payload);
+ mlxsw_reg_mtpppc_ing_timestamp_message_type_set(payload, ing);
+ mlxsw_reg_mtpppc_egr_timestamp_message_type_set(payload, egr);
+}
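The ing/egr arguments are IEEE 1588 MessageType bitmasks. A hedged sketch of a caller enabling timestamping of the four event messages in both directions; the PTP_MSG_* constants below are illustrative (taken from the IEEE 1588 MessageType encoding), not definitions from this patch, and error handling is minimal:

/* Illustrative MessageType values per IEEE 1588 (event messages). */
#define PTP_MSG_SYNC		0
#define PTP_MSG_DELAY_REQ	1
#define PTP_MSG_PDELAY_REQ	2
#define PTP_MSG_PDELAY_RESP	3

static int example_mtpppc_enable_events(struct mlxsw_core *mlxsw_core)
{
	char mtpppc_pl[MLXSW_REG_MTPPPC_LEN];
	u16 types = BIT(PTP_MSG_SYNC) | BIT(PTP_MSG_DELAY_REQ) |
		    BIT(PTP_MSG_PDELAY_REQ) | BIT(PTP_MSG_PDELAY_RESP);

	/* Timestamp the same event messages at both ingress and egress. */
	mlxsw_reg_mtpppc_pack(mtpppc_pl, types, types);
	return mlxsw_reg_write(mlxsw_core, MLXSW_REG(mtpppc), mtpppc_pl);
}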
+
+/* MTPPTR - Time Precision Packet Timestamping Reading
+ * ---------------------------------------------------
+ * The MTPPTR is used for reading the per port PTP timestamp FIFO.
+ * There is a trap for packets which are latched to the timestamp FIFO, thus the
+ * SW knows which FIFO to read. Note that packets enter the FIFO before been
+ * trapped. The sequence number is used to synchronize the timestamp FIFO
+ * entries and the trapped packets.
+ * Reserved when Spectrum-2.
+ */
+
+#define MLXSW_REG_MTPPTR_ID 0x9091
+#define MLXSW_REG_MTPPTR_BASE_LEN 0x10 /* base length, without records */
+#define MLXSW_REG_MTPPTR_REC_LEN 0x10 /* record length */
+#define MLXSW_REG_MTPPTR_REC_MAX_COUNT 4
+#define MLXSW_REG_MTPPTR_LEN (MLXSW_REG_MTPPTR_BASE_LEN + \
+ MLXSW_REG_MTPPTR_REC_LEN * MLXSW_REG_MTPPTR_REC_MAX_COUNT)
+
+MLXSW_REG_DEFINE(mtpptr, MLXSW_REG_MTPPTR_ID, MLXSW_REG_MTPPTR_LEN);
+
+/* reg_mtpptr_local_port
+ * Not supported for CPU port.
+ * Access: Index
+ */
+MLXSW_ITEM32(reg, mtpptr, local_port, 0x00, 16, 8);
+
+enum mlxsw_reg_mtpptr_dir {
+ MLXSW_REG_MTPPTR_DIR_INGRESS,
+ MLXSW_REG_MTPPTR_DIR_EGRESS,
+};
+
+/* reg_mtpptr_dir
+ * Direction.
+ * Access: Index
+ */
+MLXSW_ITEM32(reg, mtpptr, dir, 0x00, 0, 1);
+
+/* reg_mtpptr_clr
+ * Clear the records.
+ * Access: OP
+ */
+MLXSW_ITEM32(reg, mtpptr, clr, 0x04, 31, 1);
+
+/* reg_mtpptr_num_rec
+ * Number of valid records in the response
+ * Range 0.. cap_ptp_timestamp_fifo
+ * Access: RO
+ */
+MLXSW_ITEM32(reg, mtpptr, num_rec, 0x08, 0, 4);
+
+/* reg_mtpptr_rec_message_type
+ * MessageType field as defined by IEEE 1588. Each bit corresponds to a value
+ * (e.g. Bit0: Sync, Bit1: Delay_Req)
+ * Access: RO
+ */
+MLXSW_ITEM32_INDEXED(reg, mtpptr, rec_message_type,
+ MLXSW_REG_MTPPTR_BASE_LEN, 8, 4,
+ MLXSW_REG_MTPPTR_REC_LEN, 0, false);
+
+/* reg_mtpptr_rec_domain_number
+ * DomainNumber field as defined by IEEE 1588
+ * Access: RO
+ */
+MLXSW_ITEM32_INDEXED(reg, mtpptr, rec_domain_number,
+ MLXSW_REG_MTPPTR_BASE_LEN, 0, 8,
+ MLXSW_REG_MTPPTR_REC_LEN, 0, false);
+
+/* reg_mtpptr_rec_sequence_id
+ * SequenceId field as defined by IEEE 1588
+ * Access: RO
+ */
+MLXSW_ITEM32_INDEXED(reg, mtpptr, rec_sequence_id,
+ MLXSW_REG_MTPPTR_BASE_LEN, 0, 16,
+ MLXSW_REG_MTPPTR_REC_LEN, 0x4, false);
+
+/* reg_mtpptr_rec_timestamp_high
+ * Timestamp of when the PTP packet passed through the port. Units of PLL
+ * clock time.
+ * For Spectrum-1, the PLL clock is 156.25 MHz and the PLL clock time is 6.4 nsec.
+ * Access: RO
+ */
+MLXSW_ITEM32_INDEXED(reg, mtpptr, rec_timestamp_high,
+ MLXSW_REG_MTPPTR_BASE_LEN, 0, 32,
+ MLXSW_REG_MTPPTR_REC_LEN, 0x8, false);
+
+/* reg_mtpptr_rec_timestamp_low
+ * See rec_timestamp_high.
+ * Access: RO
+ */
+MLXSW_ITEM32_INDEXED(reg, mtpptr, rec_timestamp_low,
+ MLXSW_REG_MTPPTR_BASE_LEN, 0, 32,
+ MLXSW_REG_MTPPTR_REC_LEN, 0xC, false);
+
+static inline void mlxsw_reg_mtpptr_unpack(const char *payload,
+ unsigned int rec,
+ u8 *p_message_type,
+ u8 *p_domain_number,
+ u16 *p_sequence_id,
+ u64 *p_timestamp)
+{
+ u32 timestamp_high, timestamp_low;
+
+ *p_message_type = mlxsw_reg_mtpptr_rec_message_type_get(payload, rec);
+ *p_domain_number = mlxsw_reg_mtpptr_rec_domain_number_get(payload, rec);
+ *p_sequence_id = mlxsw_reg_mtpptr_rec_sequence_id_get(payload, rec);
+ timestamp_high = mlxsw_reg_mtpptr_rec_timestamp_high_get(payload, rec);
+ timestamp_low = mlxsw_reg_mtpptr_rec_timestamp_low_get(payload, rec);
+ *p_timestamp = (u64)timestamp_high << 32 | timestamp_low;
+}
+
+/* MTPTPT - Monitoring Precision Time Protocol Trap Register
+ * ---------------------------------------------------------
+ * This register is used for configuring under which trap to deliver PTP
+ * packets depending on type of the packet.
+ */
+#define MLXSW_REG_MTPTPT_ID 0x9092
+#define MLXSW_REG_MTPTPT_LEN 0x08
+
+MLXSW_REG_DEFINE(mtptpt, MLXSW_REG_MTPTPT_ID, MLXSW_REG_MTPTPT_LEN);
+
+enum mlxsw_reg_mtptpt_trap_id {
+ MLXSW_REG_MTPTPT_TRAP_ID_PTP0,
+ MLXSW_REG_MTPTPT_TRAP_ID_PTP1,
+};
+
+/* reg_mtptpt_trap_id
+ * Trap id.
+ * Access: Index
+ */
+MLXSW_ITEM32(reg, mtptpt, trap_id, 0x00, 0, 4);
+
+/* reg_mtptpt_message_type
+ * Bitwise vector of PTP message types to trap. This is a necessary but not
+ * sufficient condition, since trapping must also be enabled per port. See
+ * MTPPPC.
+ * Message types are defined by IEEE 1588. Each bit corresponds to a value (e.g.
+ * Bit0: Sync, Bit1: Delay_Req)
+ */
+MLXSW_ITEM32(reg, mtptpt, message_type, 0x04, 0, 16);
+
+static inline void mlxsw_reg_mtptptp_pack(char *payload,
+ enum mlxsw_reg_mtptpt_trap_id trap_id,
+ u16 message_type)
+{
+ MLXSW_REG_ZERO(mtptpt, payload);
+ mlxsw_reg_mtptpt_trap_id_set(payload, trap_id);
+ mlxsw_reg_mtptpt_message_type_set(payload, message_type);
+}
+
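A hypothetical sketch of how the two trap IDs might be used together: steer PTP event messages (MessageType 0-3) to PTP0 and the remaining general messages to PTP1, matching the policer/priority split given to these trap groups later in the patch. The function name is made up:

static int example_mtptpt_split(struct mlxsw_core *mlxsw_core)
{
	char mtptpt_pl[MLXSW_REG_MTPTPT_LEN];
	int err;

	/* Event messages (Sync, Delay_Req, PDelay_Req, PDelay_Resp) -> PTP0. */
	mlxsw_reg_mtptptp_pack(mtptpt_pl, MLXSW_REG_MTPTPT_TRAP_ID_PTP0,
			       GENMASK(3, 0));
	err = mlxsw_reg_write(mlxsw_core, MLXSW_REG(mtptpt), mtptpt_pl);
	if (err)
		return err;

	/* General messages -> PTP1. */
	mlxsw_reg_mtptptp_pack(mtptpt_pl, MLXSW_REG_MTPTPT_TRAP_ID_PTP1,
			       GENMASK(15, 4));
	return mlxsw_reg_write(mlxsw_core, MLXSW_REG(mtptpt), mtptpt_pl);
}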
+/* MGPIR - Management General Peripheral Information Register
+ * ----------------------------------------------------------
+ * The MGPIR register allows software to query general hardware and firmware
+ * information about peripheral entities.
+ */
+#define MLXSW_REG_MGPIR_ID 0x9100
+#define MLXSW_REG_MGPIR_LEN 0xA0
+
+MLXSW_REG_DEFINE(mgpir, MLXSW_REG_MGPIR_ID, MLXSW_REG_MGPIR_LEN);
+
+enum mlxsw_reg_mgpir_device_type {
+ MLXSW_REG_MGPIR_DEVICE_TYPE_NONE,
+ MLXSW_REG_MGPIR_DEVICE_TYPE_GEARBOX_DIE,
+};
+
+/* device_type
+ * Access: RO
+ */
+MLXSW_ITEM32(reg, mgpir, device_type, 0x00, 24, 4);
+
+/* devices_per_flash
+ * Number of devices of device_type per flash (a flash can be shared by a few
+ * devices).
+ * Access: RO
+ */
+MLXSW_ITEM32(reg, mgpir, devices_per_flash, 0x00, 16, 8);
+
+/* num_of_devices
+ * Number of devices of device_type.
+ * Access: RO
+ */
+MLXSW_ITEM32(reg, mgpir, num_of_devices, 0x00, 0, 8);
+
+static inline void mlxsw_reg_mgpir_pack(char *payload)
+{
+ MLXSW_REG_ZERO(mgpir, payload);
+}
+
+static inline void
+mlxsw_reg_mgpir_unpack(char *payload, u8 *num_of_devices,
+ enum mlxsw_reg_mgpir_device_type *device_type,
+ u8 *devices_per_flash)
+{
+ if (num_of_devices)
+ *num_of_devices = mlxsw_reg_mgpir_num_of_devices_get(payload);
+ if (device_type)
+ *device_type = mlxsw_reg_mgpir_device_type_get(payload);
+ if (devices_per_flash)
+ *devices_per_flash =
+ mlxsw_reg_mgpir_devices_per_flash_get(payload);
+}
+
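A hypothetical caller sketch showing the intended query flow: pack, execute the query, then unpack only the fields of interest (the NULL argument skips devices_per_flash). The function name is made up; mlxsw_reg_query() is the existing core helper for read accesses:

static int example_mgpir_num_gearboxes(struct mlxsw_core *mlxsw_core,
				       u8 *p_num_gearboxes)
{
	char mgpir_pl[MLXSW_REG_MGPIR_LEN];
	enum mlxsw_reg_mgpir_device_type device_type;
	u8 num_of_devices;
	int err;

	mlxsw_reg_mgpir_pack(mgpir_pl);
	err = mlxsw_reg_query(mlxsw_core, MLXSW_REG(mgpir), mgpir_pl);
	if (err)
		return err;

	mlxsw_reg_mgpir_unpack(mgpir_pl, &num_of_devices, &device_type, NULL);
	*p_num_gearboxes =
		device_type == MLXSW_REG_MGPIR_DEVICE_TYPE_GEARBOX_DIE ?
		num_of_devices : 0;
	return 0;
}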
/* TNGCR - Tunneling NVE General Configuration Register
* ----------------------------------------------------
* The TNGCR register is used for setting up the NVE Tunneling configuration.
@@ -10006,6 +10502,7 @@ static const struct mlxsw_reg_info *mlxsw_reg_infos[] = {
MLXSW_REG(qpdsm),
MLXSW_REG(qpdpm),
MLXSW_REG(qtctm),
+ MLXSW_REG(qpsc),
MLXSW_REG(pmlp),
MLXSW_REG(pmtu),
MLXSW_REG(ptys),
@@ -10052,12 +10549,19 @@ static const struct mlxsw_reg_info *mlxsw_reg_infos[] = {
MLXSW_REG(mgir),
MLXSW_REG(mrsr),
MLXSW_REG(mlcr),
+ MLXSW_REG(mtpps),
+ MLXSW_REG(mtutc),
MLXSW_REG(mpsc),
MLXSW_REG(mcqi),
MLXSW_REG(mcc),
MLXSW_REG(mcda),
MLXSW_REG(mgpc),
MLXSW_REG(mprs),
+ MLXSW_REG(mogcr),
+ MLXSW_REG(mtpppc),
+ MLXSW_REG(mtpptr),
+ MLXSW_REG(mtptpt),
+ MLXSW_REG(mgpir),
MLXSW_REG(tngcr),
MLXSW_REG(tnumt),
MLXSW_REG(tnqcr),
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
index 23204356ad88..4d34d42b3b0e 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
@@ -41,6 +41,7 @@
#include "spectrum_dpipe.h"
#include "spectrum_acl_flex_actions.h"
#include "spectrum_span.h"
+#include "spectrum_ptp.h"
#include "../mlxfw/mlxfw.h"
#define MLXSW_SP_FWREV_MINOR_TO_BRANCH(minor) ((minor) / 100)
@@ -146,6 +147,35 @@ struct mlxsw_sp_mlxfw_dev {
struct mlxsw_sp *mlxsw_sp;
};
+struct mlxsw_sp_ptp_ops {
+ struct mlxsw_sp_ptp_clock *
+ (*clock_init)(struct mlxsw_sp *mlxsw_sp, struct device *dev);
+ void (*clock_fini)(struct mlxsw_sp_ptp_clock *clock);
+
+ struct mlxsw_sp_ptp_state *(*init)(struct mlxsw_sp *mlxsw_sp);
+ void (*fini)(struct mlxsw_sp_ptp_state *ptp_state);
+
+ /* Notify a driver that a packet that might be PTP was received. Driver
+ * is responsible for freeing the passed-in SKB.
+ */
+ void (*receive)(struct mlxsw_sp *mlxsw_sp, struct sk_buff *skb,
+ u8 local_port);
+
+ /* Notify a driver that a timestamped packet was transmitted. Driver
+ * is responsible for freeing the passed-in SKB.
+ */
+ void (*transmitted)(struct mlxsw_sp *mlxsw_sp, struct sk_buff *skb,
+ u8 local_port);
+
+ int (*hwtstamp_get)(struct mlxsw_sp_port *mlxsw_sp_port,
+ struct hwtstamp_config *config);
+ int (*hwtstamp_set)(struct mlxsw_sp_port *mlxsw_sp_port,
+ struct hwtstamp_config *config);
+ void (*shaper_work)(struct work_struct *work);
+ int (*get_ts_info)(struct mlxsw_sp *mlxsw_sp,
+ struct ethtool_ts_info *info);
+};
+
static int mlxsw_sp_component_query(struct mlxfw_dev *mlxfw_dev,
u16 component_index, u32 *p_max_size,
u8 *p_align_bits, u16 *p_max_write_size)
@@ -294,6 +324,19 @@ static void mlxsw_sp_fsm_release(struct mlxfw_dev *mlxfw_dev, u32 fwhandle)
mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(mcc), mcc_pl);
}
+static void mlxsw_sp_status_notify(struct mlxfw_dev *mlxfw_dev,
+ const char *msg, const char *comp_name,
+ u32 done_bytes, u32 total_bytes)
+{
+ struct mlxsw_sp_mlxfw_dev *mlxsw_sp_mlxfw_dev =
+ container_of(mlxfw_dev, struct mlxsw_sp_mlxfw_dev, mlxfw_dev);
+ struct mlxsw_sp *mlxsw_sp = mlxsw_sp_mlxfw_dev->mlxsw_sp;
+
+ devlink_flash_update_status_notify(priv_to_devlink(mlxsw_sp->core),
+ msg, comp_name,
+ done_bytes, total_bytes);
+}
+
static const struct mlxfw_dev_ops mlxsw_sp_mlxfw_dev_ops = {
.component_query = mlxsw_sp_component_query,
.fsm_lock = mlxsw_sp_fsm_lock,
@@ -303,11 +346,13 @@ static const struct mlxfw_dev_ops mlxsw_sp_mlxfw_dev_ops = {
.fsm_activate = mlxsw_sp_fsm_activate,
.fsm_query_state = mlxsw_sp_fsm_query_state,
.fsm_cancel = mlxsw_sp_fsm_cancel,
- .fsm_release = mlxsw_sp_fsm_release
+ .fsm_release = mlxsw_sp_fsm_release,
+ .status_notify = mlxsw_sp_status_notify,
};
static int mlxsw_sp_firmware_flash(struct mlxsw_sp *mlxsw_sp,
- const struct firmware *firmware)
+ const struct firmware *firmware,
+ struct netlink_ext_ack *extack)
{
struct mlxsw_sp_mlxfw_dev mlxsw_sp_mlxfw_dev = {
.mlxfw_dev = {
@@ -320,7 +365,10 @@ static int mlxsw_sp_firmware_flash(struct mlxsw_sp *mlxsw_sp,
int err;
mlxsw_core_fw_flash_start(mlxsw_sp->core);
- err = mlxfw_firmware_flash(&mlxsw_sp_mlxfw_dev.mlxfw_dev, firmware);
+ devlink_flash_update_begin_notify(priv_to_devlink(mlxsw_sp->core));
+ err = mlxfw_firmware_flash(&mlxsw_sp_mlxfw_dev.mlxfw_dev,
+ firmware, extack);
+ devlink_flash_update_end_notify(priv_to_devlink(mlxsw_sp->core));
mlxsw_core_fw_flash_end(mlxsw_sp->core);
return err;
@@ -374,7 +422,7 @@ static int mlxsw_sp_fw_rev_validate(struct mlxsw_sp *mlxsw_sp)
return err;
}
- err = mlxsw_sp_firmware_flash(mlxsw_sp, firmware);
+ err = mlxsw_sp_firmware_flash(mlxsw_sp, firmware, NULL);
release_firmware(firmware);
if (err)
dev_err(mlxsw_sp->bus_info->dev, "Could not upgrade firmware\n");
@@ -388,6 +436,27 @@ static int mlxsw_sp_fw_rev_validate(struct mlxsw_sp *mlxsw_sp)
return 0;
}
+static int mlxsw_sp_flash_update(struct mlxsw_core *mlxsw_core,
+ const char *file_name, const char *component,
+ struct netlink_ext_ack *extack)
+{
+ struct mlxsw_sp *mlxsw_sp = mlxsw_core_driver_priv(mlxsw_core);
+ const struct firmware *firmware;
+ int err;
+
+ if (component)
+ return -EOPNOTSUPP;
+
+ err = request_firmware_direct(&firmware, file_name,
+ mlxsw_sp->bus_info->dev);
+ if (err)
+ return err;
+ err = mlxsw_sp_firmware_flash(mlxsw_sp, firmware, extack);
+ release_firmware(firmware);
+
+ return err;
+}
+
int mlxsw_sp_flow_counter_get(struct mlxsw_sp *mlxsw_sp,
unsigned int counter_index, u64 *packets,
u64 *bytes)
@@ -738,6 +807,8 @@ static netdev_tx_t mlxsw_sp_port_xmit(struct sk_buff *skb,
u64 len;
int err;
+ memset(skb->cb, 0, sizeof(struct mlxsw_skb_cb));
+
if (mlxsw_core_skb_transmit_busy(mlxsw_sp->core, &tx_info))
return NETDEV_TX_BUSY;
@@ -1437,21 +1508,21 @@ static int mlxsw_sp_setup_tc_cls_matchall(struct mlxsw_sp_port *mlxsw_sp_port,
static int
mlxsw_sp_setup_tc_cls_flower(struct mlxsw_sp_acl_block *acl_block,
- struct tc_cls_flower_offload *f)
+ struct flow_cls_offload *f)
{
struct mlxsw_sp *mlxsw_sp = mlxsw_sp_acl_block_mlxsw_sp(acl_block);
switch (f->command) {
- case TC_CLSFLOWER_REPLACE:
+ case FLOW_CLS_REPLACE:
return mlxsw_sp_flower_replace(mlxsw_sp, acl_block, f);
- case TC_CLSFLOWER_DESTROY:
+ case FLOW_CLS_DESTROY:
mlxsw_sp_flower_destroy(mlxsw_sp, acl_block, f);
return 0;
- case TC_CLSFLOWER_STATS:
+ case FLOW_CLS_STATS:
return mlxsw_sp_flower_stats(mlxsw_sp, acl_block, f);
- case TC_CLSFLOWER_TMPLT_CREATE:
+ case FLOW_CLS_TMPLT_CREATE:
return mlxsw_sp_flower_tmplt_create(mlxsw_sp, acl_block, f);
- case TC_CLSFLOWER_TMPLT_DESTROY:
+ case FLOW_CLS_TMPLT_DESTROY:
mlxsw_sp_flower_tmplt_destroy(mlxsw_sp, acl_block, f);
return 0;
default:
@@ -1514,33 +1585,45 @@ static int mlxsw_sp_setup_tc_block_cb_flower(enum tc_setup_type type,
}
}
+static void mlxsw_sp_tc_block_flower_release(void *cb_priv)
+{
+ struct mlxsw_sp_acl_block *acl_block = cb_priv;
+
+ mlxsw_sp_acl_block_destroy(acl_block);
+}
+
+static LIST_HEAD(mlxsw_sp_block_cb_list);
+
static int
mlxsw_sp_setup_tc_block_flower_bind(struct mlxsw_sp_port *mlxsw_sp_port,
- struct tcf_block *block, bool ingress,
- struct netlink_ext_ack *extack)
+ struct flow_block_offload *f, bool ingress)
{
struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
struct mlxsw_sp_acl_block *acl_block;
- struct tcf_block_cb *block_cb;
+ struct flow_block_cb *block_cb;
+ bool register_block = false;
int err;
- block_cb = tcf_block_cb_lookup(block, mlxsw_sp_setup_tc_block_cb_flower,
- mlxsw_sp);
+ block_cb = flow_block_cb_lookup(f, mlxsw_sp_setup_tc_block_cb_flower,
+ mlxsw_sp);
if (!block_cb) {
- acl_block = mlxsw_sp_acl_block_create(mlxsw_sp, block->net);
+ acl_block = mlxsw_sp_acl_block_create(mlxsw_sp, f->net);
if (!acl_block)
return -ENOMEM;
- block_cb = __tcf_block_cb_register(block,
- mlxsw_sp_setup_tc_block_cb_flower,
- mlxsw_sp, acl_block, extack);
+ block_cb = flow_block_cb_alloc(f->net,
+ mlxsw_sp_setup_tc_block_cb_flower,
+ mlxsw_sp, acl_block,
+ mlxsw_sp_tc_block_flower_release);
if (IS_ERR(block_cb)) {
+ mlxsw_sp_acl_block_destroy(acl_block);
err = PTR_ERR(block_cb);
goto err_cb_register;
}
+ register_block = true;
} else {
- acl_block = tcf_block_cb_priv(block_cb);
+ acl_block = flow_block_cb_priv(block_cb);
}
- tcf_block_cb_incref(block_cb);
+ flow_block_cb_incref(block_cb);
err = mlxsw_sp_acl_block_bind(mlxsw_sp, acl_block,
mlxsw_sp_port, ingress);
if (err)
@@ -1551,28 +1634,31 @@ mlxsw_sp_setup_tc_block_flower_bind(struct mlxsw_sp_port *mlxsw_sp_port,
else
mlxsw_sp_port->eg_acl_block = acl_block;
+ if (register_block) {
+ flow_block_cb_add(block_cb, f);
+ list_add_tail(&block_cb->driver_list, &mlxsw_sp_block_cb_list);
+ }
+
return 0;
err_block_bind:
- if (!tcf_block_cb_decref(block_cb)) {
- __tcf_block_cb_unregister(block, block_cb);
+ if (!flow_block_cb_decref(block_cb))
+ flow_block_cb_free(block_cb);
err_cb_register:
- mlxsw_sp_acl_block_destroy(acl_block);
- }
return err;
}
static void
mlxsw_sp_setup_tc_block_flower_unbind(struct mlxsw_sp_port *mlxsw_sp_port,
- struct tcf_block *block, bool ingress)
+ struct flow_block_offload *f, bool ingress)
{
struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
struct mlxsw_sp_acl_block *acl_block;
- struct tcf_block_cb *block_cb;
+ struct flow_block_cb *block_cb;
int err;
- block_cb = tcf_block_cb_lookup(block, mlxsw_sp_setup_tc_block_cb_flower,
- mlxsw_sp);
+ block_cb = flow_block_cb_lookup(f, mlxsw_sp_setup_tc_block_cb_flower,
+ mlxsw_sp);
if (!block_cb)
return;
@@ -1581,50 +1667,63 @@ mlxsw_sp_setup_tc_block_flower_unbind(struct mlxsw_sp_port *mlxsw_sp_port,
else
mlxsw_sp_port->eg_acl_block = NULL;
- acl_block = tcf_block_cb_priv(block_cb);
+ acl_block = flow_block_cb_priv(block_cb);
err = mlxsw_sp_acl_block_unbind(mlxsw_sp, acl_block,
mlxsw_sp_port, ingress);
- if (!err && !tcf_block_cb_decref(block_cb)) {
- __tcf_block_cb_unregister(block, block_cb);
- mlxsw_sp_acl_block_destroy(acl_block);
+ if (!err && !flow_block_cb_decref(block_cb)) {
+ flow_block_cb_remove(block_cb, f);
+ list_del(&block_cb->driver_list);
}
}
static int mlxsw_sp_setup_tc_block(struct mlxsw_sp_port *mlxsw_sp_port,
- struct tc_block_offload *f)
+ struct flow_block_offload *f)
{
+ struct flow_block_cb *block_cb;
tc_setup_cb_t *cb;
bool ingress;
int err;
- if (f->binder_type == TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS) {
+ if (f->binder_type == FLOW_BLOCK_BINDER_TYPE_CLSACT_INGRESS) {
cb = mlxsw_sp_setup_tc_block_cb_matchall_ig;
ingress = true;
- } else if (f->binder_type == TCF_BLOCK_BINDER_TYPE_CLSACT_EGRESS) {
+ } else if (f->binder_type == FLOW_BLOCK_BINDER_TYPE_CLSACT_EGRESS) {
cb = mlxsw_sp_setup_tc_block_cb_matchall_eg;
ingress = false;
} else {
return -EOPNOTSUPP;
}
+ f->driver_block_list = &mlxsw_sp_block_cb_list;
+
switch (f->command) {
- case TC_BLOCK_BIND:
- err = tcf_block_cb_register(f->block, cb, mlxsw_sp_port,
- mlxsw_sp_port, f->extack);
- if (err)
- return err;
- err = mlxsw_sp_setup_tc_block_flower_bind(mlxsw_sp_port,
- f->block, ingress,
- f->extack);
+ case FLOW_BLOCK_BIND:
+ if (flow_block_cb_is_busy(cb, mlxsw_sp_port,
+ &mlxsw_sp_block_cb_list))
+ return -EBUSY;
+
+ block_cb = flow_block_cb_alloc(f->net, cb, mlxsw_sp_port,
+ mlxsw_sp_port, NULL);
+ if (IS_ERR(block_cb))
+ return PTR_ERR(block_cb);
+ err = mlxsw_sp_setup_tc_block_flower_bind(mlxsw_sp_port, f,
+ ingress);
if (err) {
- tcf_block_cb_unregister(f->block, cb, mlxsw_sp_port);
+ flow_block_cb_free(block_cb);
return err;
}
+ flow_block_cb_add(block_cb, f);
+ list_add_tail(&block_cb->driver_list, &mlxsw_sp_block_cb_list);
return 0;
- case TC_BLOCK_UNBIND:
+ case FLOW_BLOCK_UNBIND:
mlxsw_sp_setup_tc_block_flower_unbind(mlxsw_sp_port,
- f->block, ingress);
- tcf_block_cb_unregister(f->block, cb, mlxsw_sp_port);
+ f, ingress);
+ block_cb = flow_block_cb_lookup(f, cb, mlxsw_sp_port);
+ if (!block_cb)
+ return -ENOENT;
+
+ flow_block_cb_remove(block_cb, f);
+ list_del(&block_cb->driver_list);
return 0;
default:
return -EOPNOTSUPP;
@@ -1745,6 +1844,65 @@ mlxsw_sp_port_get_devlink_port(struct net_device *dev)
mlxsw_sp_port->local_port);
}
+static int mlxsw_sp_port_hwtstamp_set(struct mlxsw_sp_port *mlxsw_sp_port,
+ struct ifreq *ifr)
+{
+ struct hwtstamp_config config;
+ int err;
+
+ if (copy_from_user(&config, ifr->ifr_data, sizeof(config)))
+ return -EFAULT;
+
+ err = mlxsw_sp_port->mlxsw_sp->ptp_ops->hwtstamp_set(mlxsw_sp_port,
+ &config);
+ if (err)
+ return err;
+
+ if (copy_to_user(ifr->ifr_data, &config, sizeof(config)))
+ return -EFAULT;
+
+ return 0;
+}
+
+static int mlxsw_sp_port_hwtstamp_get(struct mlxsw_sp_port *mlxsw_sp_port,
+ struct ifreq *ifr)
+{
+ struct hwtstamp_config config;
+ int err;
+
+ err = mlxsw_sp_port->mlxsw_sp->ptp_ops->hwtstamp_get(mlxsw_sp_port,
+ &config);
+ if (err)
+ return err;
+
+ if (copy_to_user(ifr->ifr_data, &config, sizeof(config)))
+ return -EFAULT;
+
+ return 0;
+}
+
+static inline void mlxsw_sp_port_ptp_clear(struct mlxsw_sp_port *mlxsw_sp_port)
+{
+ struct hwtstamp_config config = {0};
+
+ mlxsw_sp_port->mlxsw_sp->ptp_ops->hwtstamp_set(mlxsw_sp_port, &config);
+}
+
+static int
+mlxsw_sp_port_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+{
+ struct mlxsw_sp_port *mlxsw_sp_port = netdev_priv(dev);
+
+ switch (cmd) {
+ case SIOCSHWTSTAMP:
+ return mlxsw_sp_port_hwtstamp_set(mlxsw_sp_port, ifr);
+ case SIOCGHWTSTAMP:
+ return mlxsw_sp_port_hwtstamp_get(mlxsw_sp_port, ifr);
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
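For context, a userspace sketch of how an application reaches the SIOCSHWTSTAMP handler added above (illustrative, not part of the patch; the filter choice is an assumption):

#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/net_tstamp.h>
#include <linux/sockios.h>

static int enable_ptp_timestamping(const char *ifname)
{
	struct hwtstamp_config cfg = {
		.tx_type = HWTSTAMP_TX_ON,
		.rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT,
	};
	struct ifreq ifr;
	int fd, err;

	fd = socket(AF_INET, SOCK_DGRAM, 0);
	if (fd < 0)
		return -1;

	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
	ifr.ifr_data = (char *)&cfg;

	err = ioctl(fd, SIOCSHWTSTAMP, &ifr); /* 0 on success */
	close(fd);
	return err;
}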
static const struct net_device_ops mlxsw_sp_port_netdev_ops = {
.ndo_open = mlxsw_sp_port_open,
.ndo_stop = mlxsw_sp_port_stop,
@@ -1760,6 +1918,7 @@ static const struct net_device_ops mlxsw_sp_port_netdev_ops = {
.ndo_vlan_rx_kill_vid = mlxsw_sp_port_kill_vid,
.ndo_set_features = mlxsw_sp_set_features,
.ndo_get_devlink_port = mlxsw_sp_port_get_devlink_port,
+ .ndo_do_ioctl = mlxsw_sp_port_ioctl,
};
static void mlxsw_sp_port_get_drvinfo(struct net_device *dev,
@@ -2525,28 +2684,33 @@ mlxsw_sp1_from_ptys_link(struct mlxsw_sp *mlxsw_sp, u32 ptys_eth_proto,
}
}
+static u32
+mlxsw_sp1_from_ptys_speed(struct mlxsw_sp *mlxsw_sp, u32 ptys_eth_proto)
+{
+ int i;
+
+ for (i = 0; i < MLXSW_SP1_PORT_LINK_MODE_LEN; i++) {
+ if (ptys_eth_proto & mlxsw_sp1_port_link_mode[i].mask)
+ return mlxsw_sp1_port_link_mode[i].speed;
+ }
+
+ return SPEED_UNKNOWN;
+}
+
static void
mlxsw_sp1_from_ptys_speed_duplex(struct mlxsw_sp *mlxsw_sp, bool carrier_ok,
u32 ptys_eth_proto,
struct ethtool_link_ksettings *cmd)
{
- u32 speed = SPEED_UNKNOWN;
- u8 duplex = DUPLEX_UNKNOWN;
- int i;
+ cmd->base.speed = SPEED_UNKNOWN;
+ cmd->base.duplex = DUPLEX_UNKNOWN;
if (!carrier_ok)
- goto out;
+ return;
- for (i = 0; i < MLXSW_SP1_PORT_LINK_MODE_LEN; i++) {
- if (ptys_eth_proto & mlxsw_sp1_port_link_mode[i].mask) {
- speed = mlxsw_sp1_port_link_mode[i].speed;
- duplex = DUPLEX_FULL;
- break;
- }
- }
-out:
- cmd->base.speed = speed;
- cmd->base.duplex = duplex;
+ cmd->base.speed = mlxsw_sp1_from_ptys_speed(mlxsw_sp, ptys_eth_proto);
+ if (cmd->base.speed != SPEED_UNKNOWN)
+ cmd->base.duplex = DUPLEX_FULL;
}
static u32
@@ -2617,6 +2781,7 @@ static const struct mlxsw_sp_port_type_speed_ops
mlxsw_sp1_port_type_speed_ops = {
.from_ptys_supported_port = mlxsw_sp1_from_ptys_supported_port,
.from_ptys_link = mlxsw_sp1_from_ptys_link,
+ .from_ptys_speed = mlxsw_sp1_from_ptys_speed,
.from_ptys_speed_duplex = mlxsw_sp1_from_ptys_speed_duplex,
.to_ptys_advert_link = mlxsw_sp1_to_ptys_advert_link,
.to_ptys_speed = mlxsw_sp1_to_ptys_speed,
@@ -2867,28 +3032,33 @@ mlxsw_sp2_from_ptys_link(struct mlxsw_sp *mlxsw_sp, u32 ptys_eth_proto,
}
}
+static u32
+mlxsw_sp2_from_ptys_speed(struct mlxsw_sp *mlxsw_sp, u32 ptys_eth_proto)
+{
+ int i;
+
+ for (i = 0; i < MLXSW_SP2_PORT_LINK_MODE_LEN; i++) {
+ if (ptys_eth_proto & mlxsw_sp2_port_link_mode[i].mask)
+ return mlxsw_sp2_port_link_mode[i].speed;
+ }
+
+ return SPEED_UNKNOWN;
+}
+
static void
mlxsw_sp2_from_ptys_speed_duplex(struct mlxsw_sp *mlxsw_sp, bool carrier_ok,
u32 ptys_eth_proto,
struct ethtool_link_ksettings *cmd)
{
- u32 speed = SPEED_UNKNOWN;
- u8 duplex = DUPLEX_UNKNOWN;
- int i;
+ cmd->base.speed = SPEED_UNKNOWN;
+ cmd->base.duplex = DUPLEX_UNKNOWN;
if (!carrier_ok)
- goto out;
+ return;
- for (i = 0; i < MLXSW_SP2_PORT_LINK_MODE_LEN; i++) {
- if (ptys_eth_proto & mlxsw_sp2_port_link_mode[i].mask) {
- speed = mlxsw_sp2_port_link_mode[i].speed;
- duplex = DUPLEX_FULL;
- break;
- }
- }
-out:
- cmd->base.speed = speed;
- cmd->base.duplex = duplex;
+ cmd->base.speed = mlxsw_sp2_from_ptys_speed(mlxsw_sp, ptys_eth_proto);
+ if (cmd->base.speed != SPEED_UNKNOWN)
+ cmd->base.duplex = DUPLEX_FULL;
}
static bool
@@ -2999,6 +3169,7 @@ static const struct mlxsw_sp_port_type_speed_ops
mlxsw_sp2_port_type_speed_ops = {
.from_ptys_supported_port = mlxsw_sp2_from_ptys_supported_port,
.from_ptys_link = mlxsw_sp2_from_ptys_link,
+ .from_ptys_speed = mlxsw_sp2_from_ptys_speed,
.from_ptys_speed_duplex = mlxsw_sp2_from_ptys_speed_duplex,
.to_ptys_advert_link = mlxsw_sp2_to_ptys_advert_link,
.to_ptys_speed = mlxsw_sp2_to_ptys_speed,
@@ -3159,31 +3330,6 @@ mlxsw_sp_port_set_link_ksettings(struct net_device *dev,
return 0;
}
-static int mlxsw_sp_flash_device(struct net_device *dev,
- struct ethtool_flash *flash)
-{
- struct mlxsw_sp_port *mlxsw_sp_port = netdev_priv(dev);
- struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
- const struct firmware *firmware;
- int err;
-
- if (flash->region != ETHTOOL_FLASH_ALL_REGIONS)
- return -EOPNOTSUPP;
-
- dev_hold(dev);
- rtnl_unlock();
-
- err = request_firmware_direct(&firmware, flash->data, &dev->dev);
- if (err)
- goto out;
- err = mlxsw_sp_firmware_flash(mlxsw_sp, firmware);
- release_firmware(firmware);
-out:
- rtnl_lock();
- dev_put(dev);
- return err;
-}
-
static int mlxsw_sp_get_module_info(struct net_device *netdev,
struct ethtool_modinfo *modinfo)
{
@@ -3213,6 +3359,15 @@ static int mlxsw_sp_get_module_eeprom(struct net_device *netdev,
return err;
}
+static int
+mlxsw_sp_get_ts_info(struct net_device *netdev, struct ethtool_ts_info *info)
+{
+ struct mlxsw_sp_port *mlxsw_sp_port = netdev_priv(netdev);
+ struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
+
+ return mlxsw_sp->ptp_ops->get_ts_info(mlxsw_sp, info);
+}
+
static const struct ethtool_ops mlxsw_sp_port_ethtool_ops = {
.get_drvinfo = mlxsw_sp_port_get_drvinfo,
.get_link = ethtool_op_get_link,
@@ -3224,9 +3379,9 @@ static const struct ethtool_ops mlxsw_sp_port_ethtool_ops = {
.get_sset_count = mlxsw_sp_port_get_sset_count,
.get_link_ksettings = mlxsw_sp_port_get_link_ksettings,
.set_link_ksettings = mlxsw_sp_port_set_link_ksettings,
- .flash_device = mlxsw_sp_flash_device,
.get_module_info = mlxsw_sp_get_module_info,
.get_module_eeprom = mlxsw_sp_get_module_eeprom,
+ .get_ts_info = mlxsw_sp_get_ts_info,
};
static int
@@ -3343,8 +3498,9 @@ static int mlxsw_sp_port_ets_init(struct mlxsw_sp_port *mlxsw_sp_port)
return err;
}
- /* Make sure the max shaper is disabled in all hierarchies that
- * support it.
+ /* Make sure the max shaper is disabled in all hierarchies that support
+ * it. Note that this disables ptps (PTP shaper), but that is intended
+ * for the initial configuration.
*/
err = mlxsw_sp_port_ets_maxrate_set(mlxsw_sp_port,
MLXSW_REG_QEEC_HIERARCY_PORT, 0, 0,
@@ -3589,6 +3745,9 @@ static int mlxsw_sp_port_create(struct mlxsw_sp *mlxsw_sp, u8 local_port,
}
mlxsw_sp_port->default_vlan = mlxsw_sp_port_vlan;
+ INIT_DELAYED_WORK(&mlxsw_sp_port->ptp.shaper_dw,
+ mlxsw_sp->ptp_ops->shaper_work);
+
mlxsw_sp->ports[local_port] = mlxsw_sp_port;
err = register_netdev(dev);
if (err) {
@@ -3643,6 +3802,8 @@ static void mlxsw_sp_port_remove(struct mlxsw_sp *mlxsw_sp, u8 local_port)
struct mlxsw_sp_port *mlxsw_sp_port = mlxsw_sp->ports[local_port];
cancel_delayed_work_sync(&mlxsw_sp_port->periodic_hw_stats.update_dw);
+ cancel_delayed_work_sync(&mlxsw_sp_port->ptp.shaper_dw);
+ mlxsw_sp_port_ptp_clear(mlxsw_sp_port);
mlxsw_core_port_clear(mlxsw_sp->core, local_port, mlxsw_sp);
unregister_netdev(mlxsw_sp_port->dev); /* This calls ndo_stop */
mlxsw_sp->ports[local_port] = NULL;
@@ -3927,14 +4088,55 @@ static void mlxsw_sp_pude_event_func(const struct mlxsw_reg_info *reg,
if (status == MLXSW_PORT_OPER_STATUS_UP) {
netdev_info(mlxsw_sp_port->dev, "link up\n");
netif_carrier_on(mlxsw_sp_port->dev);
+ mlxsw_core_schedule_dw(&mlxsw_sp_port->ptp.shaper_dw, 0);
} else {
netdev_info(mlxsw_sp_port->dev, "link down\n");
netif_carrier_off(mlxsw_sp_port->dev);
}
}
-static void mlxsw_sp_rx_listener_no_mark_func(struct sk_buff *skb,
- u8 local_port, void *priv)
+static void mlxsw_sp1_ptp_fifo_event_func(struct mlxsw_sp *mlxsw_sp,
+ char *mtpptr_pl, bool ingress)
+{
+ u8 local_port;
+ u8 num_rec;
+ int i;
+
+ local_port = mlxsw_reg_mtpptr_local_port_get(mtpptr_pl);
+ num_rec = mlxsw_reg_mtpptr_num_rec_get(mtpptr_pl);
+ for (i = 0; i < num_rec; i++) {
+ u8 domain_number;
+ u8 message_type;
+ u16 sequence_id;
+ u64 timestamp;
+
+ mlxsw_reg_mtpptr_unpack(mtpptr_pl, i, &message_type,
+ &domain_number, &sequence_id,
+ &timestamp);
+ mlxsw_sp1_ptp_got_timestamp(mlxsw_sp, ingress, local_port,
+ message_type, domain_number,
+ sequence_id, timestamp);
+ }
+}
+
+static void mlxsw_sp1_ptp_ing_fifo_event_func(const struct mlxsw_reg_info *reg,
+ char *mtpptr_pl, void *priv)
+{
+ struct mlxsw_sp *mlxsw_sp = priv;
+
+ mlxsw_sp1_ptp_fifo_event_func(mlxsw_sp, mtpptr_pl, true);
+}
+
+static void mlxsw_sp1_ptp_egr_fifo_event_func(const struct mlxsw_reg_info *reg,
+ char *mtpptr_pl, void *priv)
+{
+ struct mlxsw_sp *mlxsw_sp = priv;
+
+ mlxsw_sp1_ptp_fifo_event_func(mlxsw_sp, mtpptr_pl, false);
+}
+
+void mlxsw_sp_rx_listener_no_mark_func(struct sk_buff *skb,
+ u8 local_port, void *priv)
{
struct mlxsw_sp *mlxsw_sp = priv;
struct mlxsw_sp_port *mlxsw_sp_port = mlxsw_sp->ports[local_port];
@@ -4008,6 +4210,14 @@ out:
consume_skb(skb);
}
+static void mlxsw_sp_rx_listener_ptp(struct sk_buff *skb, u8 local_port,
+ void *priv)
+{
+ struct mlxsw_sp *mlxsw_sp = priv;
+
+ mlxsw_sp->ptp_ops->receive(mlxsw_sp, skb, local_port);
+}
+
#define MLXSW_SP_RXL_NO_MARK(_trap_id, _action, _trap_group, _is_ctrl) \
MLXSW_RXL(mlxsw_sp_rx_listener_no_mark_func, _trap_id, _action, \
_is_ctrl, SP_##_trap_group, DISCARD)
@@ -4029,7 +4239,8 @@ static const struct mlxsw_listener mlxsw_sp_listener[] = {
/* L2 traps */
MLXSW_SP_RXL_NO_MARK(STP, TRAP_TO_CPU, STP, true),
MLXSW_SP_RXL_NO_MARK(LACP, TRAP_TO_CPU, LACP, true),
- MLXSW_SP_RXL_NO_MARK(LLDP, TRAP_TO_CPU, LLDP, true),
+ MLXSW_RXL(mlxsw_sp_rx_listener_ptp, LLDP, TRAP_TO_CPU,
+ false, SP_LLDP, DISCARD),
MLXSW_SP_RXL_MARK(DHCP, MIRROR_TO_CPU, DHCP, false),
MLXSW_SP_RXL_MARK(IGMP_QUERY, MIRROR_TO_CPU, IGMP, false),
MLXSW_SP_RXL_NO_MARK(IGMP_V1_REPORT, TRAP_TO_CPU, IGMP, false),
@@ -4098,6 +4309,16 @@ static const struct mlxsw_listener mlxsw_sp_listener[] = {
/* NVE traps */
MLXSW_SP_RXL_MARK(NVE_ENCAP_ARP, TRAP_TO_CPU, ARP, false),
MLXSW_SP_RXL_NO_MARK(NVE_DECAP_ARP, TRAP_TO_CPU, ARP, false),
+ /* PTP traps */
+ MLXSW_RXL(mlxsw_sp_rx_listener_ptp, PTP0, TRAP_TO_CPU,
+ false, SP_PTP0, DISCARD),
+ MLXSW_SP_RXL_NO_MARK(PTP1, TRAP_TO_CPU, PTP1, false),
+};
+
+static const struct mlxsw_listener mlxsw_sp1_listener[] = {
+ /* Events */
+ MLXSW_EVENTL(mlxsw_sp1_ptp_egr_fifo_event_func, PTP_EGR_FIFO, SP_PTP0),
+ MLXSW_EVENTL(mlxsw_sp1_ptp_ing_fifo_event_func, PTP_ING_FIFO, SP_PTP0),
};
static int mlxsw_sp_cpu_policers_set(struct mlxsw_core *mlxsw_core)
@@ -4149,6 +4370,14 @@ static int mlxsw_sp_cpu_policers_set(struct mlxsw_core *mlxsw_core)
rate = 1024;
burst_size = 7;
break;
+ case MLXSW_REG_HTGT_TRAP_GROUP_SP_PTP0:
+ rate = 24 * 1024;
+ burst_size = 12;
+ break;
+ case MLXSW_REG_HTGT_TRAP_GROUP_SP_PTP1:
+ rate = 19 * 1024;
+ burst_size = 12;
+ break;
default:
continue;
}
@@ -4187,6 +4416,7 @@ static int mlxsw_sp_trap_groups_set(struct mlxsw_core *mlxsw_core)
case MLXSW_REG_HTGT_TRAP_GROUP_SP_LLDP:
case MLXSW_REG_HTGT_TRAP_GROUP_SP_OSPF:
case MLXSW_REG_HTGT_TRAP_GROUP_SP_PIM:
+ case MLXSW_REG_HTGT_TRAP_GROUP_SP_PTP0:
priority = 5;
tc = 5;
break;
@@ -4204,6 +4434,7 @@ static int mlxsw_sp_trap_groups_set(struct mlxsw_core *mlxsw_core)
case MLXSW_REG_HTGT_TRAP_GROUP_SP_ARP:
case MLXSW_REG_HTGT_TRAP_GROUP_SP_IPV6_ND:
case MLXSW_REG_HTGT_TRAP_GROUP_SP_RPF:
+ case MLXSW_REG_HTGT_TRAP_GROUP_SP_PTP1:
priority = 2;
tc = 2;
break;
@@ -4237,22 +4468,16 @@ static int mlxsw_sp_trap_groups_set(struct mlxsw_core *mlxsw_core)
return 0;
}
-static int mlxsw_sp_traps_init(struct mlxsw_sp *mlxsw_sp)
+static int mlxsw_sp_traps_register(struct mlxsw_sp *mlxsw_sp,
+ const struct mlxsw_listener listeners[],
+ size_t listeners_count)
{
int i;
int err;
- err = mlxsw_sp_cpu_policers_set(mlxsw_sp->core);
- if (err)
- return err;
-
- err = mlxsw_sp_trap_groups_set(mlxsw_sp->core);
- if (err)
- return err;
-
- for (i = 0; i < ARRAY_SIZE(mlxsw_sp_listener); i++) {
+ for (i = 0; i < listeners_count; i++) {
err = mlxsw_core_trap_register(mlxsw_sp->core,
- &mlxsw_sp_listener[i],
+ &listeners[i],
mlxsw_sp);
if (err)
goto err_listener_register;
@@ -4263,23 +4488,63 @@ static int mlxsw_sp_traps_init(struct mlxsw_sp *mlxsw_sp)
err_listener_register:
for (i--; i >= 0; i--) {
mlxsw_core_trap_unregister(mlxsw_sp->core,
- &mlxsw_sp_listener[i],
+ &listeners[i],
mlxsw_sp);
}
return err;
}
-static void mlxsw_sp_traps_fini(struct mlxsw_sp *mlxsw_sp)
+static void mlxsw_sp_traps_unregister(struct mlxsw_sp *mlxsw_sp,
+ const struct mlxsw_listener listeners[],
+ size_t listeners_count)
{
int i;
- for (i = 0; i < ARRAY_SIZE(mlxsw_sp_listener); i++) {
+ for (i = 0; i < listeners_count; i++) {
mlxsw_core_trap_unregister(mlxsw_sp->core,
- &mlxsw_sp_listener[i],
+ &listeners[i],
mlxsw_sp);
}
}
+static int mlxsw_sp_traps_init(struct mlxsw_sp *mlxsw_sp)
+{
+ int err;
+
+ err = mlxsw_sp_cpu_policers_set(mlxsw_sp->core);
+ if (err)
+ return err;
+
+ err = mlxsw_sp_trap_groups_set(mlxsw_sp->core);
+ if (err)
+ return err;
+
+ err = mlxsw_sp_traps_register(mlxsw_sp, mlxsw_sp_listener,
+ ARRAY_SIZE(mlxsw_sp_listener));
+ if (err)
+ return err;
+
+ err = mlxsw_sp_traps_register(mlxsw_sp, mlxsw_sp->listeners,
+ mlxsw_sp->listeners_count);
+ if (err)
+ goto err_extra_traps_init;
+
+ return 0;
+
+err_extra_traps_init:
+ mlxsw_sp_traps_unregister(mlxsw_sp, mlxsw_sp_listener,
+ ARRAY_SIZE(mlxsw_sp_listener));
+ return err;
+}
+
+static void mlxsw_sp_traps_fini(struct mlxsw_sp *mlxsw_sp)
+{
+ mlxsw_sp_traps_unregister(mlxsw_sp, mlxsw_sp->listeners,
+ mlxsw_sp->listeners_count);
+ mlxsw_sp_traps_unregister(mlxsw_sp, mlxsw_sp_listener,
+ ARRAY_SIZE(mlxsw_sp_listener));
+}
+
#define MLXSW_SP_LAG_SEED_INIT 0xcafecafe
static int mlxsw_sp_lag_init(struct mlxsw_sp *mlxsw_sp)
@@ -4332,6 +4597,32 @@ static int mlxsw_sp_basic_trap_groups_set(struct mlxsw_core *mlxsw_core)
return mlxsw_reg_write(mlxsw_core, MLXSW_REG(htgt), htgt_pl);
}
+static const struct mlxsw_sp_ptp_ops mlxsw_sp1_ptp_ops = {
+ .clock_init = mlxsw_sp1_ptp_clock_init,
+ .clock_fini = mlxsw_sp1_ptp_clock_fini,
+ .init = mlxsw_sp1_ptp_init,
+ .fini = mlxsw_sp1_ptp_fini,
+ .receive = mlxsw_sp1_ptp_receive,
+ .transmitted = mlxsw_sp1_ptp_transmitted,
+ .hwtstamp_get = mlxsw_sp1_ptp_hwtstamp_get,
+ .hwtstamp_set = mlxsw_sp1_ptp_hwtstamp_set,
+ .shaper_work = mlxsw_sp1_ptp_shaper_work,
+ .get_ts_info = mlxsw_sp1_ptp_get_ts_info,
+};
+
+static const struct mlxsw_sp_ptp_ops mlxsw_sp2_ptp_ops = {
+ .clock_init = mlxsw_sp2_ptp_clock_init,
+ .clock_fini = mlxsw_sp2_ptp_clock_fini,
+ .init = mlxsw_sp2_ptp_init,
+ .fini = mlxsw_sp2_ptp_fini,
+ .receive = mlxsw_sp2_ptp_receive,
+ .transmitted = mlxsw_sp2_ptp_transmitted,
+ .hwtstamp_get = mlxsw_sp2_ptp_hwtstamp_get,
+ .hwtstamp_set = mlxsw_sp2_ptp_hwtstamp_set,
+ .shaper_work = mlxsw_sp2_ptp_shaper_work,
+ .get_ts_info = mlxsw_sp2_ptp_get_ts_info,
+};
+
static int mlxsw_sp_netdevice_event(struct notifier_block *unused,
unsigned long event, void *ptr);
@@ -4429,6 +4720,28 @@ static int mlxsw_sp_init(struct mlxsw_core *mlxsw_core,
goto err_router_init;
}
+ if (mlxsw_sp->bus_info->read_frc_capable) {
+ /* NULL is a valid return value from clock_init */
+ mlxsw_sp->clock =
+ mlxsw_sp->ptp_ops->clock_init(mlxsw_sp,
+ mlxsw_sp->bus_info->dev);
+ if (IS_ERR(mlxsw_sp->clock)) {
+ err = PTR_ERR(mlxsw_sp->clock);
+ dev_err(mlxsw_sp->bus_info->dev, "Failed to init ptp clock\n");
+ goto err_ptp_clock_init;
+ }
+ }
+
+ if (mlxsw_sp->clock) {
+ /* NULL is a valid return value from ptp_ops->init */
+ mlxsw_sp->ptp_state = mlxsw_sp->ptp_ops->init(mlxsw_sp);
+ if (IS_ERR(mlxsw_sp->ptp_state)) {
+ err = PTR_ERR(mlxsw_sp->ptp_state);
+ dev_err(mlxsw_sp->bus_info->dev, "Failed to initialize PTP\n");
+ goto err_ptp_init;
+ }
+ }
+
/* Initialize netdevice notifier after router and SPAN is initialized,
* so that the event handler can use router structures and call SPAN
* respin.
@@ -4459,6 +4772,12 @@ err_ports_create:
err_dpipe_init:
unregister_netdevice_notifier(&mlxsw_sp->netdevice_nb);
err_netdev_notifier:
+ if (mlxsw_sp->clock)
+ mlxsw_sp->ptp_ops->fini(mlxsw_sp->ptp_state);
+err_ptp_init:
+ if (mlxsw_sp->clock)
+ mlxsw_sp->ptp_ops->clock_fini(mlxsw_sp->clock);
+err_ptp_clock_init:
mlxsw_sp_router_fini(mlxsw_sp);
err_router_init:
mlxsw_sp_acl_fini(mlxsw_sp);
@@ -4502,6 +4821,9 @@ static int mlxsw_sp1_init(struct mlxsw_core *mlxsw_core,
mlxsw_sp->rif_ops_arr = mlxsw_sp1_rif_ops_arr;
mlxsw_sp->sb_vals = &mlxsw_sp1_sb_vals;
mlxsw_sp->port_type_speed_ops = &mlxsw_sp1_port_type_speed_ops;
+ mlxsw_sp->ptp_ops = &mlxsw_sp1_ptp_ops;
+ mlxsw_sp->listeners = mlxsw_sp1_listener;
+ mlxsw_sp->listeners_count = ARRAY_SIZE(mlxsw_sp1_listener);
return mlxsw_sp_init(mlxsw_core, mlxsw_bus_info);
}
@@ -4521,6 +4843,7 @@ static int mlxsw_sp2_init(struct mlxsw_core *mlxsw_core,
mlxsw_sp->rif_ops_arr = mlxsw_sp2_rif_ops_arr;
mlxsw_sp->sb_vals = &mlxsw_sp2_sb_vals;
mlxsw_sp->port_type_speed_ops = &mlxsw_sp2_port_type_speed_ops;
+ mlxsw_sp->ptp_ops = &mlxsw_sp2_ptp_ops;
return mlxsw_sp_init(mlxsw_core, mlxsw_bus_info);
}
@@ -4532,6 +4855,10 @@ static void mlxsw_sp_fini(struct mlxsw_core *mlxsw_core)
mlxsw_sp_ports_remove(mlxsw_sp);
mlxsw_sp_dpipe_fini(mlxsw_sp);
unregister_netdevice_notifier(&mlxsw_sp->netdevice_nb);
+ if (mlxsw_sp->clock) {
+ mlxsw_sp->ptp_ops->fini(mlxsw_sp->ptp_state);
+ mlxsw_sp->ptp_ops->clock_fini(mlxsw_sp->clock);
+ }
mlxsw_sp_router_fini(mlxsw_sp);
mlxsw_sp_acl_fini(mlxsw_sp);
mlxsw_sp_nve_fini(mlxsw_sp);
@@ -4874,6 +5201,15 @@ static void mlxsw_sp2_params_unregister(struct mlxsw_core *mlxsw_core)
mlxsw_sp_params_unregister(mlxsw_core);
}
+static void mlxsw_sp_ptp_transmitted(struct mlxsw_core *mlxsw_core,
+ struct sk_buff *skb, u8 local_port)
+{
+ struct mlxsw_sp *mlxsw_sp = mlxsw_core_driver_priv(mlxsw_core);
+
+ skb_pull(skb, MLXSW_TXHDR_LEN);
+ mlxsw_sp->ptp_ops->transmitted(mlxsw_sp, skb, local_port);
+}
+
static struct mlxsw_driver mlxsw_sp1_driver = {
.kind = mlxsw_sp1_driver_name,
.priv_size = sizeof(struct mlxsw_sp),
@@ -4892,11 +5228,13 @@ static struct mlxsw_driver mlxsw_sp1_driver = {
.sb_occ_max_clear = mlxsw_sp_sb_occ_max_clear,
.sb_occ_port_pool_get = mlxsw_sp_sb_occ_port_pool_get,
.sb_occ_tc_port_bind_get = mlxsw_sp_sb_occ_tc_port_bind_get,
+ .flash_update = mlxsw_sp_flash_update,
.txhdr_construct = mlxsw_sp_txhdr_construct,
.resources_register = mlxsw_sp1_resources_register,
.kvd_sizes_get = mlxsw_sp_kvd_sizes_get,
.params_register = mlxsw_sp_params_register,
.params_unregister = mlxsw_sp_params_unregister,
+ .ptp_transmitted = mlxsw_sp_ptp_transmitted,
.txhdr_len = MLXSW_TXHDR_LEN,
.profile = &mlxsw_sp1_config_profile,
.res_query_enabled = true,
@@ -4920,10 +5258,12 @@ static struct mlxsw_driver mlxsw_sp2_driver = {
.sb_occ_max_clear = mlxsw_sp_sb_occ_max_clear,
.sb_occ_port_pool_get = mlxsw_sp_sb_occ_port_pool_get,
.sb_occ_tc_port_bind_get = mlxsw_sp_sb_occ_tc_port_bind_get,
+ .flash_update = mlxsw_sp_flash_update,
.txhdr_construct = mlxsw_sp_txhdr_construct,
.resources_register = mlxsw_sp2_resources_register,
.params_register = mlxsw_sp2_params_register,
.params_unregister = mlxsw_sp2_params_unregister,
+ .ptp_transmitted = mlxsw_sp_ptp_transmitted,
.txhdr_len = MLXSW_TXHDR_LEN,
.profile = &mlxsw_sp2_config_profile,
.res_query_enabled = true,
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
index 8601b3041acd..a252b080dda9 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
@@ -136,6 +136,8 @@ struct mlxsw_sp_acl_tcam_ops;
struct mlxsw_sp_nve_ops;
struct mlxsw_sp_sb_vals;
struct mlxsw_sp_port_type_speed_ops;
+struct mlxsw_sp_ptp_state;
+struct mlxsw_sp_ptp_ops;
struct mlxsw_sp {
struct mlxsw_sp_port **ports;
@@ -155,6 +157,8 @@ struct mlxsw_sp {
struct mlxsw_sp_kvdl *kvdl;
struct mlxsw_sp_nve *nve;
struct notifier_block netdevice_nb;
+ struct mlxsw_sp_ptp_clock *clock;
+ struct mlxsw_sp_ptp_state *ptp_state;
struct mlxsw_sp_counter_pool *counter_pool;
struct {
@@ -172,6 +176,9 @@ struct mlxsw_sp {
const struct mlxsw_sp_rif_ops **rif_ops_arr;
const struct mlxsw_sp_sb_vals *sb_vals;
const struct mlxsw_sp_port_type_speed_ops *port_type_speed_ops;
+ const struct mlxsw_sp_ptp_ops *ptp_ops;
+ const struct mlxsw_listener *listeners;
+ size_t listeners_count;
};
static inline struct mlxsw_sp_upper *
@@ -259,6 +266,12 @@ struct mlxsw_sp_port {
unsigned acl_rule_count;
struct mlxsw_sp_acl_block *ing_acl_block;
struct mlxsw_sp_acl_block *eg_acl_block;
+ struct {
+ struct delayed_work shaper_dw;
+ struct hwtstamp_config hwtstamp_config;
+ u16 ing_types;
+ u16 egr_types;
+ } ptp;
};
struct mlxsw_sp_port_type_speed_ops {
@@ -267,6 +280,7 @@ struct mlxsw_sp_port_type_speed_ops {
struct ethtool_link_ksettings *cmd);
void (*from_ptys_link)(struct mlxsw_sp *mlxsw_sp, u32 ptys_eth_proto,
unsigned long *mode);
+ u32 (*from_ptys_speed)(struct mlxsw_sp *mlxsw_sp, u32 ptys_eth_proto);
void (*from_ptys_speed_duplex)(struct mlxsw_sp *mlxsw_sp,
bool carrier_ok, u32 ptys_eth_proto,
struct ethtool_link_ksettings *cmd);
@@ -435,6 +449,8 @@ struct mlxsw_sp_fid *mlxsw_sp_bridge_fid_get(struct mlxsw_sp *mlxsw_sp,
extern struct notifier_block mlxsw_sp_switchdev_notifier;
/* spectrum.c */
+void mlxsw_sp_rx_listener_no_mark_func(struct sk_buff *skb,
+ u8 local_port, void *priv);
int mlxsw_sp_port_ets_set(struct mlxsw_sp_port *mlxsw_sp_port,
enum mlxsw_reg_qeec_hr hr, u8 index, u8 next_index,
bool dwrr, u8 dwrr_weight);
@@ -620,6 +636,15 @@ enum mlxsw_sp_acl_profile {
MLXSW_SP_ACL_PROFILE_MR,
};
+struct mlxsw_sp_acl_block {
+ struct list_head binding_list;
+ struct mlxsw_sp_acl_ruleset *ruleset_zero;
+ struct mlxsw_sp *mlxsw_sp;
+ unsigned int rule_count;
+ unsigned int disable_count;
+ struct net *net;
+};
+
struct mlxsw_afk *mlxsw_sp_acl_afk(struct mlxsw_sp_acl *acl);
struct mlxsw_sp *mlxsw_sp_acl_block_mlxsw_sp(struct mlxsw_sp_acl_block *block);
unsigned int mlxsw_sp_acl_block_rule_count(struct mlxsw_sp_acl_block *block);
@@ -782,19 +807,19 @@ extern const struct mlxsw_afk_ops mlxsw_sp2_afk_ops;
/* spectrum_flower.c */
int mlxsw_sp_flower_replace(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_acl_block *block,
- struct tc_cls_flower_offload *f);
+ struct flow_cls_offload *f);
void mlxsw_sp_flower_destroy(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_acl_block *block,
- struct tc_cls_flower_offload *f);
+ struct flow_cls_offload *f);
int mlxsw_sp_flower_stats(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_acl_block *block,
- struct tc_cls_flower_offload *f);
+ struct flow_cls_offload *f);
int mlxsw_sp_flower_tmplt_create(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_acl_block *block,
- struct tc_cls_flower_offload *f);
+ struct flow_cls_offload *f);
void mlxsw_sp_flower_tmplt_destroy(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_acl_block *block,
- struct tc_cls_flower_offload *f);
+ struct flow_cls_offload *f);
/* spectrum_qdisc.c */
int mlxsw_sp_tc_qdisc_init(struct mlxsw_sp_port *mlxsw_sp_port);
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c
index a146a44634e9..e8ac90564dbe 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c
@@ -45,14 +45,6 @@ struct mlxsw_sp_acl_block_binding {
bool ingress;
};
-struct mlxsw_sp_acl_block {
- struct list_head binding_list;
- struct mlxsw_sp_acl_ruleset *ruleset_zero;
- struct mlxsw_sp *mlxsw_sp;
- unsigned int rule_count;
- unsigned int disable_count;
-};
-
struct mlxsw_sp_acl_ruleset_ht_key {
struct mlxsw_sp_acl_block *block;
u32 chain_index;
@@ -221,6 +213,7 @@ struct mlxsw_sp_acl_block *mlxsw_sp_acl_block_create(struct mlxsw_sp *mlxsw_sp,
return NULL;
INIT_LIST_HEAD(&block->binding_list);
block->mlxsw_sp = mlxsw_sp;
+ block->net = net;
return block;
}
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_flex_keys.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_flex_keys.c
index 2a998dea4f39..279c241f76f0 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_flex_keys.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_flex_keys.c
@@ -12,7 +12,7 @@ static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_l2_dmac[] = {
MLXSW_AFK_ELEMENT_INST_BUF(DMAC_0_31, 0x02, 4),
MLXSW_AFK_ELEMENT_INST_U32(PCP, 0x08, 13, 3),
MLXSW_AFK_ELEMENT_INST_U32(VID, 0x08, 0, 12),
- MLXSW_AFK_ELEMENT_INST_U32(SRC_SYS_PORT, 0x0C, 0, 8),
+ MLXSW_AFK_ELEMENT_INST_U32(SRC_SYS_PORT, 0x0C, 0, 16),
};
static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_l2_smac[] = {
@@ -20,7 +20,7 @@ static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_l2_smac[] = {
MLXSW_AFK_ELEMENT_INST_BUF(SMAC_0_31, 0x02, 4),
MLXSW_AFK_ELEMENT_INST_U32(PCP, 0x08, 13, 3),
MLXSW_AFK_ELEMENT_INST_U32(VID, 0x08, 0, 12),
- MLXSW_AFK_ELEMENT_INST_U32(SRC_SYS_PORT, 0x0C, 0, 8),
+ MLXSW_AFK_ELEMENT_INST_U32(SRC_SYS_PORT, 0x0C, 0, 16),
};
static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_l2_smac_ex[] = {
@@ -32,13 +32,13 @@ static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_l2_smac_ex[] = {
static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_sip[] = {
MLXSW_AFK_ELEMENT_INST_BUF(SRC_IP_0_31, 0x00, 4),
MLXSW_AFK_ELEMENT_INST_U32(IP_PROTO, 0x08, 0, 8),
- MLXSW_AFK_ELEMENT_INST_U32(SRC_SYS_PORT, 0x0C, 0, 8),
+ MLXSW_AFK_ELEMENT_INST_U32(SRC_SYS_PORT, 0x0C, 0, 16),
};
static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_dip[] = {
MLXSW_AFK_ELEMENT_INST_BUF(DST_IP_0_31, 0x00, 4),
MLXSW_AFK_ELEMENT_INST_U32(IP_PROTO, 0x08, 0, 8),
- MLXSW_AFK_ELEMENT_INST_U32(SRC_SYS_PORT, 0x0C, 0, 8),
+ MLXSW_AFK_ELEMENT_INST_U32(SRC_SYS_PORT, 0x0C, 0, 16),
};
static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4[] = {
@@ -149,7 +149,7 @@ static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_mac_4[] = {
static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_mac_5[] = {
MLXSW_AFK_ELEMENT_INST_U32(VID, 0x04, 16, 12),
- MLXSW_AFK_ELEMENT_INST_U32(SRC_SYS_PORT, 0x04, 0, 8), /* RX_ACL_SYSTEM_PORT */
+ MLXSW_AFK_ELEMENT_INST_EXT_U32(SRC_SYS_PORT, 0x04, 0, 8, -1, true), /* RX_ACL_SYSTEM_PORT */
};
static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_0[] = {
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
index 96b23c856f4d..202e9a246019 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
@@ -120,8 +120,51 @@ static int mlxsw_sp_flower_parse_actions(struct mlxsw_sp *mlxsw_sp,
return 0;
}
+static int mlxsw_sp_flower_parse_meta(struct mlxsw_sp_acl_rule_info *rulei,
+ struct flow_cls_offload *f,
+ struct mlxsw_sp_acl_block *block)
+{
+ struct flow_rule *rule = flow_cls_offload_flow_rule(f);
+ struct mlxsw_sp_port *mlxsw_sp_port;
+ struct net_device *ingress_dev;
+ struct flow_match_meta match;
+
+ if (!flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_META))
+ return 0;
+
+ flow_rule_match_meta(rule, &match);
+ if (match.mask->ingress_ifindex != 0xFFFFFFFF) {
+ NL_SET_ERR_MSG_MOD(f->common.extack, "Unsupported ingress ifindex mask");
+ return -EINVAL;
+ }
+
+ ingress_dev = __dev_get_by_index(block->net,
+ match.key->ingress_ifindex);
+ if (!ingress_dev) {
+ NL_SET_ERR_MSG_MOD(f->common.extack, "Can't find specified ingress port to match on");
+ return -EINVAL;
+ }
+
+ if (!mlxsw_sp_port_dev_check(ingress_dev)) {
+ NL_SET_ERR_MSG_MOD(f->common.extack, "Can't match on non-mlxsw ingress port");
+ return -EINVAL;
+ }
+
+ mlxsw_sp_port = netdev_priv(ingress_dev);
+ if (mlxsw_sp_port->mlxsw_sp != block->mlxsw_sp) {
+ NL_SET_ERR_MSG_MOD(f->common.extack, "Can't match on a port from different device");
+ return -EINVAL;
+ }
+
+ mlxsw_sp_acl_rulei_keymask_u32(rulei,
+ MLXSW_AFK_ELEMENT_SRC_SYS_PORT,
+ mlxsw_sp_port->local_port,
+ 0xFFFFFFFF);
+ return 0;
+}
+
static void mlxsw_sp_flower_parse_ipv4(struct mlxsw_sp_acl_rule_info *rulei,
- struct tc_cls_flower_offload *f)
+ struct flow_cls_offload *f)
{
struct flow_match_ipv4_addrs match;
@@ -136,7 +179,7 @@ static void mlxsw_sp_flower_parse_ipv4(struct mlxsw_sp_acl_rule_info *rulei,
}
static void mlxsw_sp_flower_parse_ipv6(struct mlxsw_sp_acl_rule_info *rulei,
- struct tc_cls_flower_offload *f)
+ struct flow_cls_offload *f)
{
struct flow_match_ipv6_addrs match;
@@ -170,10 +213,10 @@ static void mlxsw_sp_flower_parse_ipv6(struct mlxsw_sp_acl_rule_info *rulei,
static int mlxsw_sp_flower_parse_ports(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_acl_rule_info *rulei,
- struct tc_cls_flower_offload *f,
+ struct flow_cls_offload *f,
u8 ip_proto)
{
- const struct flow_rule *rule = tc_cls_flower_offload_flow_rule(f);
+ const struct flow_rule *rule = flow_cls_offload_flow_rule(f);
struct flow_match_ports match;
if (!flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_PORTS))
@@ -197,10 +240,10 @@ static int mlxsw_sp_flower_parse_ports(struct mlxsw_sp *mlxsw_sp,
static int mlxsw_sp_flower_parse_tcp(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_acl_rule_info *rulei,
- struct tc_cls_flower_offload *f,
+ struct flow_cls_offload *f,
u8 ip_proto)
{
- const struct flow_rule *rule = tc_cls_flower_offload_flow_rule(f);
+ const struct flow_rule *rule = flow_cls_offload_flow_rule(f);
struct flow_match_tcp match;
if (!flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_TCP))
@@ -222,10 +265,10 @@ static int mlxsw_sp_flower_parse_tcp(struct mlxsw_sp *mlxsw_sp,
static int mlxsw_sp_flower_parse_ip(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_acl_rule_info *rulei,
- struct tc_cls_flower_offload *f,
+ struct flow_cls_offload *f,
u16 n_proto)
{
- const struct flow_rule *rule = tc_cls_flower_offload_flow_rule(f);
+ const struct flow_rule *rule = flow_cls_offload_flow_rule(f);
struct flow_match_ip match;
if (!flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_IP))
@@ -256,9 +299,9 @@ static int mlxsw_sp_flower_parse_ip(struct mlxsw_sp *mlxsw_sp,
static int mlxsw_sp_flower_parse(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_acl_block *block,
struct mlxsw_sp_acl_rule_info *rulei,
- struct tc_cls_flower_offload *f)
+ struct flow_cls_offload *f)
{
- struct flow_rule *rule = tc_cls_flower_offload_flow_rule(f);
+ struct flow_rule *rule = flow_cls_offload_flow_rule(f);
struct flow_dissector *dissector = rule->match.dissector;
u16 n_proto_mask = 0;
u16 n_proto_key = 0;
@@ -267,7 +310,8 @@ static int mlxsw_sp_flower_parse(struct mlxsw_sp *mlxsw_sp,
int err;
if (dissector->used_keys &
- ~(BIT(FLOW_DISSECTOR_KEY_CONTROL) |
+ ~(BIT(FLOW_DISSECTOR_KEY_META) |
+ BIT(FLOW_DISSECTOR_KEY_CONTROL) |
BIT(FLOW_DISSECTOR_KEY_BASIC) |
BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS) |
BIT(FLOW_DISSECTOR_KEY_IPV4_ADDRS) |
@@ -283,6 +327,10 @@ static int mlxsw_sp_flower_parse(struct mlxsw_sp *mlxsw_sp,
mlxsw_sp_acl_rulei_priority(rulei, f->common.prio);
+ err = mlxsw_sp_flower_parse_meta(rulei, f, block);
+ if (err)
+ return err;
+
if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_CONTROL)) {
struct flow_match_control match;
@@ -378,7 +426,7 @@ static int mlxsw_sp_flower_parse(struct mlxsw_sp *mlxsw_sp,
int mlxsw_sp_flower_replace(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_acl_block *block,
- struct tc_cls_flower_offload *f)
+ struct flow_cls_offload *f)
{
struct mlxsw_sp_acl_rule_info *rulei;
struct mlxsw_sp_acl_ruleset *ruleset;
@@ -425,7 +473,7 @@ err_rule_create:
void mlxsw_sp_flower_destroy(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_acl_block *block,
- struct tc_cls_flower_offload *f)
+ struct flow_cls_offload *f)
{
struct mlxsw_sp_acl_ruleset *ruleset;
struct mlxsw_sp_acl_rule *rule;
@@ -447,7 +495,7 @@ void mlxsw_sp_flower_destroy(struct mlxsw_sp *mlxsw_sp,
int mlxsw_sp_flower_stats(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_acl_block *block,
- struct tc_cls_flower_offload *f)
+ struct flow_cls_offload *f)
{
struct mlxsw_sp_acl_ruleset *ruleset;
struct mlxsw_sp_acl_rule *rule;
@@ -483,7 +531,7 @@ err_rule_get_stats:
int mlxsw_sp_flower_tmplt_create(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_acl_block *block,
- struct tc_cls_flower_offload *f)
+ struct flow_cls_offload *f)
{
struct mlxsw_sp_acl_ruleset *ruleset;
struct mlxsw_sp_acl_rule_info rulei;
@@ -504,7 +552,7 @@ int mlxsw_sp_flower_tmplt_create(struct mlxsw_sp *mlxsw_sp,
void mlxsw_sp_flower_tmplt_destroy(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_acl_block *block,
- struct tc_cls_flower_offload *f)
+ struct flow_cls_offload *f)
{
struct mlxsw_sp_acl_ruleset *ruleset;
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ptp.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ptp.c
new file mode 100644
index 000000000000..bd9c2bc2d5d6
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ptp.c
@@ -0,0 +1,1111 @@
+// SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0
+/* Copyright (c) 2019 Mellanox Technologies. All rights reserved */
+
+#include <linux/ptp_clock_kernel.h>
+#include <linux/clocksource.h>
+#include <linux/timecounter.h>
+#include <linux/spinlock.h>
+#include <linux/device.h>
+#include <linux/rhashtable.h>
+#include <linux/ptp_classify.h>
+#include <linux/if_ether.h>
+#include <linux/if_vlan.h>
+#include <linux/net_tstamp.h>
+
+#include "spectrum.h"
+#include "spectrum_ptp.h"
+#include "core.h"
+
+#define MLXSW_SP1_PTP_CLOCK_CYCLES_SHIFT 29
+#define MLXSW_SP1_PTP_CLOCK_FREQ_KHZ 156257 /* 6.4nSec */
+#define MLXSW_SP1_PTP_CLOCK_MASK 64
+
+#define MLXSW_SP1_PTP_HT_GC_INTERVAL 500 /* ms */
+
+/* How long, approximately, unmatched entries should stay in the hash table
+ * before they are collected. Should be evenly divisible by the GC interval.
+ */
+#define MLXSW_SP1_PTP_HT_GC_TIMEOUT 1000 /* ms */
+
+struct mlxsw_sp_ptp_state {
+ struct mlxsw_sp *mlxsw_sp;
+ struct rhashtable unmatched_ht;
+ spinlock_t unmatched_lock; /* protects the HT */
+ struct delayed_work ht_gc_dw;
+ u32 gc_cycle;
+};
+
+struct mlxsw_sp1_ptp_key {
+ u8 local_port;
+ u8 message_type;
+ u16 sequence_id;
+ u8 domain_number;
+ bool ingress;
+};
+
+struct mlxsw_sp1_ptp_unmatched {
+ struct mlxsw_sp1_ptp_key key;
+ struct rhash_head ht_node;
+ struct rcu_head rcu;
+ struct sk_buff *skb;
+ u64 timestamp;
+ u32 gc_cycle;
+};
+
+static const struct rhashtable_params mlxsw_sp1_ptp_unmatched_ht_params = {
+ .key_len = sizeof_field(struct mlxsw_sp1_ptp_unmatched, key),
+ .key_offset = offsetof(struct mlxsw_sp1_ptp_unmatched, key),
+ .head_offset = offsetof(struct mlxsw_sp1_ptp_unmatched, ht_node),
+};
+
+struct mlxsw_sp_ptp_clock {
+ struct mlxsw_core *core;
+ spinlock_t lock; /* protect this structure */
+ struct cyclecounter cycles;
+ struct timecounter tc;
+ u32 nominal_c_mult;
+ struct ptp_clock *ptp;
+ struct ptp_clock_info ptp_info;
+ unsigned long overflow_period;
+ struct delayed_work overflow_work;
+};
+
+static u64 __mlxsw_sp1_ptp_read_frc(struct mlxsw_sp_ptp_clock *clock,
+ struct ptp_system_timestamp *sts)
+{
+ struct mlxsw_core *mlxsw_core = clock->core;
+ u32 frc_h1, frc_h2, frc_l;
+
+ frc_h1 = mlxsw_core_read_frc_h(mlxsw_core);
+ ptp_read_system_prets(sts);
+ frc_l = mlxsw_core_read_frc_l(mlxsw_core);
+ ptp_read_system_postts(sts);
+ frc_h2 = mlxsw_core_read_frc_h(mlxsw_core);
+
+ if (frc_h1 != frc_h2) {
+ /* wrap around */
+ ptp_read_system_prets(sts);
+ frc_l = mlxsw_core_read_frc_l(mlxsw_core);
+ ptp_read_system_postts(sts);
+ }
+
+ return (u64) frc_l | (u64) frc_h2 << 32;
+}
+
+static u64 mlxsw_sp1_ptp_read_frc(const struct cyclecounter *cc)
+{
+ struct mlxsw_sp_ptp_clock *clock =
+ container_of(cc, struct mlxsw_sp_ptp_clock, cycles);
+
+ return __mlxsw_sp1_ptp_read_frc(clock, NULL) & cc->mask;
+}
+
+static int
+mlxsw_sp1_ptp_phc_adjfreq(struct mlxsw_sp_ptp_clock *clock, int freq_adj)
+{
+ struct mlxsw_core *mlxsw_core = clock->core;
+ char mtutc_pl[MLXSW_REG_MTUTC_LEN];
+
+ mlxsw_reg_mtutc_pack(mtutc_pl, MLXSW_REG_MTUTC_OPERATION_ADJUST_FREQ,
+ freq_adj, 0);
+ return mlxsw_reg_write(mlxsw_core, MLXSW_REG(mtutc), mtutc_pl);
+}
+
+static u64 mlxsw_sp1_ptp_ns2cycles(const struct timecounter *tc, u64 nsec)
+{
+ u64 cycles = (u64) nsec;
+
+ cycles <<= tc->cc->shift;
+ cycles = div_u64(cycles, tc->cc->mult);
+
+ return cycles;
+}
+
+static int
+mlxsw_sp1_ptp_phc_settime(struct mlxsw_sp_ptp_clock *clock, u64 nsec)
+{
+ struct mlxsw_core *mlxsw_core = clock->core;
+ u64 next_sec, next_sec_in_nsec, cycles;
+ char mtutc_pl[MLXSW_REG_MTUTC_LEN];
+ char mtpps_pl[MLXSW_REG_MTPPS_LEN];
+ int err;
+
+ next_sec = div_u64(nsec, NSEC_PER_SEC) + 1;
+ next_sec_in_nsec = next_sec * NSEC_PER_SEC;
+
+ spin_lock_bh(&clock->lock);
+ cycles = mlxsw_sp1_ptp_ns2cycles(&clock->tc, next_sec_in_nsec);
+ spin_unlock_bh(&clock->lock);
+
+ mlxsw_reg_mtpps_vpin_pack(mtpps_pl, cycles);
+ err = mlxsw_reg_write(mlxsw_core, MLXSW_REG(mtpps), mtpps_pl);
+ if (err)
+ return err;
+
+ mlxsw_reg_mtutc_pack(mtutc_pl,
+ MLXSW_REG_MTUTC_OPERATION_SET_TIME_AT_NEXT_SEC,
+ 0, next_sec);
+ return mlxsw_reg_write(mlxsw_core, MLXSW_REG(mtutc), mtutc_pl);
+}
+
+static int mlxsw_sp1_ptp_adjfine(struct ptp_clock_info *ptp, long scaled_ppm)
+{
+ struct mlxsw_sp_ptp_clock *clock =
+ container_of(ptp, struct mlxsw_sp_ptp_clock, ptp_info);
+ int neg_adj = 0;
+ u32 diff;
+ u64 adj;
+ s32 ppb;
+
+ ppb = scaled_ppm_to_ppb(scaled_ppm);
+
+ if (ppb < 0) {
+ neg_adj = 1;
+ ppb = -ppb;
+ }
+
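+ /* Adjust the timecounter multiplier: diff = nominal_c_mult * |ppb| /
+ * NSEC_PER_SEC, applied as a plus or minus offset from the nominal
+ * multiplier. The HW UTC clock frequency is adjusted separately below
+ * through the MTUTC register.
+ */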
+ adj = clock->nominal_c_mult;
+ adj *= ppb;
+ diff = div_u64(adj, NSEC_PER_SEC);
+
+ spin_lock_bh(&clock->lock);
+ timecounter_read(&clock->tc);
+ clock->cycles.mult = neg_adj ? clock->nominal_c_mult - diff :
+ clock->nominal_c_mult + diff;
+ spin_unlock_bh(&clock->lock);
+
+ return mlxsw_sp1_ptp_phc_adjfreq(clock, neg_adj ? -ppb : ppb);
+}
+
+static int mlxsw_sp1_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta)
+{
+ struct mlxsw_sp_ptp_clock *clock =
+ container_of(ptp, struct mlxsw_sp_ptp_clock, ptp_info);
+ u64 nsec;
+
+ spin_lock_bh(&clock->lock);
+ timecounter_adjtime(&clock->tc, delta);
+ nsec = timecounter_read(&clock->tc);
+ spin_unlock_bh(&clock->lock);
+
+ return mlxsw_sp1_ptp_phc_settime(clock, nsec);
+}
+
+static int mlxsw_sp1_ptp_gettimex(struct ptp_clock_info *ptp,
+ struct timespec64 *ts,
+ struct ptp_system_timestamp *sts)
+{
+ struct mlxsw_sp_ptp_clock *clock =
+ container_of(ptp, struct mlxsw_sp_ptp_clock, ptp_info);
+ u64 cycles, nsec;
+
+ spin_lock_bh(&clock->lock);
+ cycles = __mlxsw_sp1_ptp_read_frc(clock, sts);
+ nsec = timecounter_cyc2time(&clock->tc, cycles);
+ spin_unlock_bh(&clock->lock);
+
+ *ts = ns_to_timespec64(nsec);
+
+ return 0;
+}
+
+static int mlxsw_sp1_ptp_settime(struct ptp_clock_info *ptp,
+ const struct timespec64 *ts)
+{
+ struct mlxsw_sp_ptp_clock *clock =
+ container_of(ptp, struct mlxsw_sp_ptp_clock, ptp_info);
+ u64 nsec = timespec64_to_ns(ts);
+
+ spin_lock_bh(&clock->lock);
+ timecounter_init(&clock->tc, &clock->cycles, nsec);
+ nsec = timecounter_read(&clock->tc);
+ spin_unlock_bh(&clock->lock);
+
+ return mlxsw_sp1_ptp_phc_settime(clock, nsec);
+}
+
+static const struct ptp_clock_info mlxsw_sp1_ptp_clock_info = {
+ .owner = THIS_MODULE,
+ .name = "mlxsw_sp_clock",
+ .max_adj = 100000000,
+ .adjfine = mlxsw_sp1_ptp_adjfine,
+ .adjtime = mlxsw_sp1_ptp_adjtime,
+ .gettimex64 = mlxsw_sp1_ptp_gettimex,
+ .settime64 = mlxsw_sp1_ptp_settime,
+};
+
+static void mlxsw_sp1_ptp_clock_overflow(struct work_struct *work)
+{
+ struct delayed_work *dwork = to_delayed_work(work);
+ struct mlxsw_sp_ptp_clock *clock;
+
+ clock = container_of(dwork, struct mlxsw_sp_ptp_clock, overflow_work);
+
+ spin_lock_bh(&clock->lock);
+ timecounter_read(&clock->tc);
+ spin_unlock_bh(&clock->lock);
+ mlxsw_core_schedule_dw(&clock->overflow_work, clock->overflow_period);
+}
+
+struct mlxsw_sp_ptp_clock *
+mlxsw_sp1_ptp_clock_init(struct mlxsw_sp *mlxsw_sp, struct device *dev)
+{
+ u64 overflow_cycles, nsec, frac = 0;
+ struct mlxsw_sp_ptp_clock *clock;
+ int err;
+
+ clock = kzalloc(sizeof(*clock), GFP_KERNEL);
+ if (!clock)
+ return ERR_PTR(-ENOMEM);
+
+ spin_lock_init(&clock->lock);
+ clock->cycles.read = mlxsw_sp1_ptp_read_frc;
+ clock->cycles.shift = MLXSW_SP1_PTP_CLOCK_CYCLES_SHIFT;
+ clock->cycles.mult = clocksource_khz2mult(MLXSW_SP1_PTP_CLOCK_FREQ_KHZ,
+ clock->cycles.shift);
+ clock->nominal_c_mult = clock->cycles.mult;
+ clock->cycles.mask = CLOCKSOURCE_MASK(MLXSW_SP1_PTP_CLOCK_MASK);
+ clock->core = mlxsw_sp->core;
+
+ timecounter_init(&clock->tc, &clock->cycles,
+ ktime_to_ns(ktime_get_real()));
+
+ /* Calculate the period for the overflow watchdog, to make sure the
+ * counter is checked at least twice every wrap-around.
+ * The period is derived from the minimum of the maximum HW cycle count
+ * (the clock source mask) and the maximum number of cycles that can be
+ * multiplied by the clock multiplier without the result exceeding
+ * 64 bits.
+ */
+ overflow_cycles = div64_u64(~0ULL >> 1, clock->cycles.mult);
+ overflow_cycles = min(overflow_cycles, div_u64(clock->cycles.mask, 3));
+
+ nsec = cyclecounter_cyc2ns(&clock->cycles, overflow_cycles, 0, &frac);
+ clock->overflow_period = nsecs_to_jiffies(nsec);
+
+ INIT_DELAYED_WORK(&clock->overflow_work, mlxsw_sp1_ptp_clock_overflow);
+ mlxsw_core_schedule_dw(&clock->overflow_work, 0);
+
+ clock->ptp_info = mlxsw_sp1_ptp_clock_info;
+ clock->ptp = ptp_clock_register(&clock->ptp_info, dev);
+ if (IS_ERR(clock->ptp)) {
+ err = PTR_ERR(clock->ptp);
+ dev_err(dev, "ptp_clock_register failed %d\n", err);
+ goto err_ptp_clock_register;
+ }
+
+ return clock;
+
+err_ptp_clock_register:
+ cancel_delayed_work_sync(&clock->overflow_work);
+ kfree(clock);
+ return ERR_PTR(err);
+}
+
+void mlxsw_sp1_ptp_clock_fini(struct mlxsw_sp_ptp_clock *clock)
+{
+ ptp_clock_unregister(clock->ptp);
+ cancel_delayed_work_sync(&clock->overflow_work);
+ kfree(clock);
+}
+
+static int mlxsw_sp_ptp_parse(struct sk_buff *skb,
+ u8 *p_domain_number,
+ u8 *p_message_type,
+ u16 *p_sequence_id)
+{
+ unsigned int offset = 0;
+ unsigned int ptp_class;
+ u8 *data;
+
+ data = skb_mac_header(skb);
+ ptp_class = ptp_classify_raw(skb);
+
+ switch (ptp_class & PTP_CLASS_VMASK) {
+ case PTP_CLASS_V1:
+ case PTP_CLASS_V2:
+ break;
+ default:
+ return -ERANGE;
+ }
+
+ if (ptp_class & PTP_CLASS_VLAN)
+ offset += VLAN_HLEN;
+
+ switch (ptp_class & PTP_CLASS_PMASK) {
+ case PTP_CLASS_IPV4:
+ offset += ETH_HLEN + IPV4_HLEN(data + offset) + UDP_HLEN;
+ break;
+ case PTP_CLASS_IPV6:
+ offset += ETH_HLEN + IP6_HLEN + UDP_HLEN;
+ break;
+ case PTP_CLASS_L2:
+ offset += ETH_HLEN;
+ break;
+ default:
+ return -ERANGE;
+ }
+
+ /* PTP header is 34 bytes. */
+ if (skb->len < offset + 34)
+ return -EINVAL;
+
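+ /* messageType is the low nibble of the first header byte, domainNumber
+ * is at offset 4 and sequenceId at offsets 30-31 (big endian).
+ */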
+ *p_message_type = data[offset] & 0x0f;
+ *p_domain_number = data[offset + 4];
+ *p_sequence_id = (u16)(data[offset + 30]) << 8 | data[offset + 31];
+ return 0;
+}
+
+/* Returns NULL on successful insertion, a pointer on conflict, or an ERR_PTR on
+ * error.
+ */
+static struct mlxsw_sp1_ptp_unmatched *
+mlxsw_sp1_ptp_unmatched_save(struct mlxsw_sp *mlxsw_sp,
+ struct mlxsw_sp1_ptp_key key,
+ struct sk_buff *skb,
+ u64 timestamp)
+{
+ int cycles = MLXSW_SP1_PTP_HT_GC_TIMEOUT / MLXSW_SP1_PTP_HT_GC_INTERVAL;
+ struct mlxsw_sp_ptp_state *ptp_state = mlxsw_sp->ptp_state;
+ struct mlxsw_sp1_ptp_unmatched *unmatched;
+ struct mlxsw_sp1_ptp_unmatched *conflict;
+
+ unmatched = kzalloc(sizeof(*unmatched), GFP_ATOMIC);
+ if (!unmatched)
+ return ERR_PTR(-ENOMEM);
+
+ unmatched->key = key;
+ unmatched->skb = skb;
+ unmatched->timestamp = timestamp;
+ unmatched->gc_cycle = mlxsw_sp->ptp_state->gc_cycle + cycles;
+
+ conflict = rhashtable_lookup_get_insert_fast(&ptp_state->unmatched_ht,
+ &unmatched->ht_node,
+ mlxsw_sp1_ptp_unmatched_ht_params);
+ if (conflict)
+ kfree(unmatched);
+
+ return conflict;
+}
+
+static struct mlxsw_sp1_ptp_unmatched *
+mlxsw_sp1_ptp_unmatched_lookup(struct mlxsw_sp *mlxsw_sp,
+ struct mlxsw_sp1_ptp_key key)
+{
+ return rhashtable_lookup(&mlxsw_sp->ptp_state->unmatched_ht, &key,
+ mlxsw_sp1_ptp_unmatched_ht_params);
+}
+
+static int
+mlxsw_sp1_ptp_unmatched_remove(struct mlxsw_sp *mlxsw_sp,
+ struct mlxsw_sp1_ptp_unmatched *unmatched)
+{
+ return rhashtable_remove_fast(&mlxsw_sp->ptp_state->unmatched_ht,
+ &unmatched->ht_node,
+ mlxsw_sp1_ptp_unmatched_ht_params);
+}
+
+/* This function is called in the following scenarios:
+ *
+ * 1) When a packet is matched with its timestamp.
+ * 2) In several situations when it is necessary to immediately pass on
+ * an SKB without a timestamp.
+ * 3) From GC indirectly through mlxsw_sp1_ptp_unmatched_finish().
+ * This case is similar to 2) above.
+ */
+static void mlxsw_sp1_ptp_packet_finish(struct mlxsw_sp *mlxsw_sp,
+ struct sk_buff *skb, u8 local_port,
+ bool ingress,
+ struct skb_shared_hwtstamps *hwtstamps)
+{
+ struct mlxsw_sp_port *mlxsw_sp_port;
+
+ /* Between capturing the packet and finishing it, there is a window of
+ * opportunity for the originating port to go away (e.g. due to a
+ * split). Also make sure the SKB device reference is still valid.
+ */
+ mlxsw_sp_port = mlxsw_sp->ports[local_port];
+ if (!(mlxsw_sp_port && (!skb->dev || skb->dev == mlxsw_sp_port->dev))) {
+ dev_kfree_skb_any(skb);
+ return;
+ }
+
+ if (ingress) {
+ if (hwtstamps)
+ *skb_hwtstamps(skb) = *hwtstamps;
+ mlxsw_sp_rx_listener_no_mark_func(skb, local_port, mlxsw_sp);
+ } else {
+ /* skb_tstamp_tx() allows hwtstamps to be NULL. */
+ skb_tstamp_tx(skb, hwtstamps);
+ dev_kfree_skb_any(skb);
+ }
+}
+
+static void mlxsw_sp1_packet_timestamp(struct mlxsw_sp *mlxsw_sp,
+ struct mlxsw_sp1_ptp_key key,
+ struct sk_buff *skb,
+ u64 timestamp)
+{
+ struct skb_shared_hwtstamps hwtstamps;
+ u64 nsec;
+
+ spin_lock_bh(&mlxsw_sp->clock->lock);
+ nsec = timecounter_cyc2time(&mlxsw_sp->clock->tc, timestamp);
+ spin_unlock_bh(&mlxsw_sp->clock->lock);
+
+ hwtstamps.hwtstamp = ns_to_ktime(nsec);
+ mlxsw_sp1_ptp_packet_finish(mlxsw_sp, skb,
+ key.local_port, key.ingress, &hwtstamps);
+}
+
+static void
+mlxsw_sp1_ptp_unmatched_finish(struct mlxsw_sp *mlxsw_sp,
+ struct mlxsw_sp1_ptp_unmatched *unmatched)
+{
+ if (unmatched->skb && unmatched->timestamp)
+ mlxsw_sp1_packet_timestamp(mlxsw_sp, unmatched->key,
+ unmatched->skb,
+ unmatched->timestamp);
+ else if (unmatched->skb)
+ mlxsw_sp1_ptp_packet_finish(mlxsw_sp, unmatched->skb,
+ unmatched->key.local_port,
+ unmatched->key.ingress, NULL);
+ kfree_rcu(unmatched, rcu);
+}
+
+static void mlxsw_sp1_ptp_unmatched_free_fn(void *ptr, void *arg)
+{
+ struct mlxsw_sp1_ptp_unmatched *unmatched = ptr;
+
+ /* This is invoked at a point where the ports are gone already. Nothing
+ * to do with whatever is left in the HT but to free it.
+ */
+ if (unmatched->skb)
+ dev_kfree_skb_any(unmatched->skb);
+ kfree_rcu(unmatched, rcu);
+}
+
+static void mlxsw_sp1_ptp_got_piece(struct mlxsw_sp *mlxsw_sp,
+ struct mlxsw_sp1_ptp_key key,
+ struct sk_buff *skb, u64 timestamp)
+{
+ struct mlxsw_sp1_ptp_unmatched *unmatched, *conflict;
+ int err;
+
+ rcu_read_lock();
+
+ unmatched = mlxsw_sp1_ptp_unmatched_lookup(mlxsw_sp, key);
+
+ spin_lock(&mlxsw_sp->ptp_state->unmatched_lock);
+
+ if (unmatched) {
+ /* There was an unmatched entry when we looked, but it may have
+ * been removed before we took the lock.
+ */
+ err = mlxsw_sp1_ptp_unmatched_remove(mlxsw_sp, unmatched);
+ if (err)
+ unmatched = NULL;
+ }
+
+ if (!unmatched) {
+ /* We have no unmatched entry, but one may have been added after
+ * we looked, but before we took the lock.
+ */
+ unmatched = mlxsw_sp1_ptp_unmatched_save(mlxsw_sp, key,
+ skb, timestamp);
+ if (IS_ERR(unmatched)) {
+ if (skb)
+ mlxsw_sp1_ptp_packet_finish(mlxsw_sp, skb,
+ key.local_port,
+ key.ingress, NULL);
+ unmatched = NULL;
+ } else if (unmatched) {
+ /* Save just told us, under lock, that the entry is
+ * there, so this has to work.
+ */
+ err = mlxsw_sp1_ptp_unmatched_remove(mlxsw_sp,
+ unmatched);
+ WARN_ON_ONCE(err);
+ }
+ }
+
+ /* If unmatched is non-NULL here, it comes either from the lookup, or
+ * from the save attempt above. In either case the entry was removed
+ * from the hash table. If unmatched is NULL, a new unmatched entry was
+ * added to the hash table, and there was no conflict.
+ */
+
+ if (skb && unmatched && unmatched->timestamp) {
+ unmatched->skb = skb;
+ } else if (timestamp && unmatched && unmatched->skb) {
+ unmatched->timestamp = timestamp;
+ } else if (unmatched) {
+ /* unmatched holds an older entry of the same type: either an
+ * skb if we are handling an skb, or a timestamp if we are
+ * handling a timestamp. We can't match that up, so save what we have.
+ */
+ conflict = mlxsw_sp1_ptp_unmatched_save(mlxsw_sp, key,
+ skb, timestamp);
+ if (IS_ERR(conflict)) {
+ if (skb)
+ mlxsw_sp1_ptp_packet_finish(mlxsw_sp, skb,
+ key.local_port,
+ key.ingress, NULL);
+ } else {
+ /* Above, we removed an object with this key from the
+ * hash table, under lock, so conflict cannot be a
+ * valid pointer.
+ */
+ WARN_ON_ONCE(conflict);
+ }
+ }
+
+ spin_unlock(&mlxsw_sp->ptp_state->unmatched_lock);
+
+ if (unmatched)
+ mlxsw_sp1_ptp_unmatched_finish(mlxsw_sp, unmatched);
+
+ rcu_read_unlock();
+}
+
+static void mlxsw_sp1_ptp_got_packet(struct mlxsw_sp *mlxsw_sp,
+ struct sk_buff *skb, u8 local_port,
+ bool ingress)
+{
+ struct mlxsw_sp_port *mlxsw_sp_port;
+ struct mlxsw_sp1_ptp_key key;
+ u8 types;
+ int err;
+
+ mlxsw_sp_port = mlxsw_sp->ports[local_port];
+ if (!mlxsw_sp_port)
+ goto immediate;
+
+ types = ingress ? mlxsw_sp_port->ptp.ing_types :
+ mlxsw_sp_port->ptp.egr_types;
+ if (!types)
+ goto immediate;
+
+ memset(&key, 0, sizeof(key));
+ key.local_port = local_port;
+ key.ingress = ingress;
+
+ err = mlxsw_sp_ptp_parse(skb, &key.domain_number, &key.message_type,
+ &key.sequence_id);
+ if (err)
+ goto immediate;
+
+ /* For packets whose timestamping was not enabled on this port, don't
+ * bother trying to match the timestamp.
+ */
+ if (!((1 << key.message_type) & types))
+ goto immediate;
+
+ mlxsw_sp1_ptp_got_piece(mlxsw_sp, key, skb, 0);
+ return;
+
+immediate:
+ mlxsw_sp1_ptp_packet_finish(mlxsw_sp, skb, local_port, ingress, NULL);
+}
+
+void mlxsw_sp1_ptp_got_timestamp(struct mlxsw_sp *mlxsw_sp, bool ingress,
+ u8 local_port, u8 message_type,
+ u8 domain_number, u16 sequence_id,
+ u64 timestamp)
+{
+ struct mlxsw_sp_port *mlxsw_sp_port;
+ struct mlxsw_sp1_ptp_key key;
+ u8 types;
+
+ mlxsw_sp_port = mlxsw_sp->ports[local_port];
+ if (!mlxsw_sp_port)
+ return;
+
+ types = ingress ? mlxsw_sp_port->ptp.ing_types :
+ mlxsw_sp_port->ptp.egr_types;
+
+ /* For message types whose timestamping was not enabled on this port,
+ * don't bother with the timestamp.
+ */
+ if (!((1 << message_type) & types))
+ return;
+
+ memset(&key, 0, sizeof(key));
+ key.local_port = local_port;
+ key.domain_number = domain_number;
+ key.message_type = message_type;
+ key.sequence_id = sequence_id;
+ key.ingress = ingress;
+
+ mlxsw_sp1_ptp_got_piece(mlxsw_sp, key, NULL, timestamp);
+}
+
+void mlxsw_sp1_ptp_receive(struct mlxsw_sp *mlxsw_sp, struct sk_buff *skb,
+ u8 local_port)
+{
+ skb_reset_mac_header(skb);
+ mlxsw_sp1_ptp_got_packet(mlxsw_sp, skb, local_port, true);
+}
+
+void mlxsw_sp1_ptp_transmitted(struct mlxsw_sp *mlxsw_sp,
+ struct sk_buff *skb, u8 local_port)
+{
+ mlxsw_sp1_ptp_got_packet(mlxsw_sp, skb, local_port, false);
+}
+
+static void
+mlxsw_sp1_ptp_ht_gc_collect(struct mlxsw_sp_ptp_state *ptp_state,
+ struct mlxsw_sp1_ptp_unmatched *unmatched)
+{
+ int err;
+
+ /* If an unmatched entry has an SKB, it has to be handed over to the
+ * networking stack. This is usually done from a trap handler, which is
+ * invoked in a softirq context. Here we are going to do it in process
+ * context. If that were to be interrupted by a softirq, it could cause
+ * a deadlock when an attempt is made to take an already-taken lock
+ * somewhere along the sending path. Disable softirqs to prevent this.
+ */
+ local_bh_disable();
+
+ spin_lock(&ptp_state->unmatched_lock);
+ err = rhashtable_remove_fast(&ptp_state->unmatched_ht,
+ &unmatched->ht_node,
+ mlxsw_sp1_ptp_unmatched_ht_params);
+ spin_unlock(&ptp_state->unmatched_lock);
+
+ if (err)
+ /* The packet was matched with a timestamp during the walk. */
+ goto out;
+
+ /* mlxsw_sp1_ptp_unmatched_finish() invokes netif_receive_skb(). While
+ * the comment at that function states that it can only be called in
+ * soft IRQ context, this pattern of local_bh_disable() +
+ * netif_receive_skb(), in process context, is seen elsewhere in the
+ * kernel, notably in pktgen.
+ */
+ mlxsw_sp1_ptp_unmatched_finish(ptp_state->mlxsw_sp, unmatched);
+
+out:
+ local_bh_enable();
+}
+
+static void mlxsw_sp1_ptp_ht_gc(struct work_struct *work)
+{
+ struct delayed_work *dwork = to_delayed_work(work);
+ struct mlxsw_sp1_ptp_unmatched *unmatched;
+ struct mlxsw_sp_ptp_state *ptp_state;
+ struct rhashtable_iter iter;
+ u32 gc_cycle;
+ void *obj;
+
+ ptp_state = container_of(dwork, struct mlxsw_sp_ptp_state, ht_gc_dw);
+ gc_cycle = ptp_state->gc_cycle++;
+
+ rhashtable_walk_enter(&ptp_state->unmatched_ht, &iter);
+ rhashtable_walk_start(&iter);
+ while ((obj = rhashtable_walk_next(&iter))) {
+ if (IS_ERR(obj))
+ continue;
+
+ unmatched = obj;
+ if (unmatched->gc_cycle <= gc_cycle)
+ mlxsw_sp1_ptp_ht_gc_collect(ptp_state, unmatched);
+ }
+ rhashtable_walk_stop(&iter);
+ rhashtable_walk_exit(&iter);
+
+ mlxsw_core_schedule_dw(&ptp_state->ht_gc_dw,
+ MLXSW_SP1_PTP_HT_GC_INTERVAL);
+}
+
+static int mlxsw_sp_ptp_mtptpt_set(struct mlxsw_sp *mlxsw_sp,
+ enum mlxsw_reg_mtptpt_trap_id trap_id,
+ u16 message_type)
+{
+ char mtptpt_pl[MLXSW_REG_MTPTPT_LEN];
+
+ mlxsw_reg_mtptptp_pack(mtptpt_pl, trap_id, message_type);
+ return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(mtptpt), mtptpt_pl);
+}
+
+static int mlxsw_sp1_ptp_set_fifo_clr_on_trap(struct mlxsw_sp *mlxsw_sp,
+ bool clr)
+{
+ char mogcr_pl[MLXSW_REG_MOGCR_LEN] = {0};
+ int err;
+
+ err = mlxsw_reg_query(mlxsw_sp->core, MLXSW_REG(mogcr), mogcr_pl);
+ if (err)
+ return err;
+
+ mlxsw_reg_mogcr_ptp_iftc_set(mogcr_pl, clr);
+ mlxsw_reg_mogcr_ptp_eftc_set(mogcr_pl, clr);
+ return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(mogcr), mogcr_pl);
+}
+
+static int mlxsw_sp1_ptp_mtpppc_set(struct mlxsw_sp *mlxsw_sp,
+ u16 ing_types, u16 egr_types)
+{
+ char mtpppc_pl[MLXSW_REG_MTPPPC_LEN];
+
+ mlxsw_reg_mtpppc_pack(mtpppc_pl, ing_types, egr_types);
+ return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(mtpppc), mtpppc_pl);
+}
+
+struct mlxsw_sp1_ptp_shaper_params {
+ u32 ethtool_speed;
+ enum mlxsw_reg_qpsc_port_speed port_speed;
+ u8 shaper_time_exp;
+ u8 shaper_time_mantissa;
+ u8 shaper_inc;
+ u8 shaper_bs;
+ u8 port_to_shaper_credits;
+ int ing_timestamp_inc;
+ int egr_timestamp_inc;
+};
+
+static const struct mlxsw_sp1_ptp_shaper_params
+mlxsw_sp1_ptp_shaper_params[] = {
+ {
+ .ethtool_speed = SPEED_100,
+ .port_speed = MLXSW_REG_QPSC_PORT_SPEED_100M,
+ .shaper_time_exp = 4,
+ .shaper_time_mantissa = 12,
+ .shaper_inc = 9,
+ .shaper_bs = 1,
+ .port_to_shaper_credits = 1,
+ .ing_timestamp_inc = -313,
+ .egr_timestamp_inc = 313,
+ },
+ {
+ .ethtool_speed = SPEED_1000,
+ .port_speed = MLXSW_REG_QPSC_PORT_SPEED_1G,
+ .shaper_time_exp = 0,
+ .shaper_time_mantissa = 12,
+ .shaper_inc = 6,
+ .shaper_bs = 0,
+ .port_to_shaper_credits = 1,
+ .ing_timestamp_inc = -35,
+ .egr_timestamp_inc = 35,
+ },
+ {
+ .ethtool_speed = SPEED_10000,
+ .port_speed = MLXSW_REG_QPSC_PORT_SPEED_10G,
+ .shaper_time_exp = 0,
+ .shaper_time_mantissa = 2,
+ .shaper_inc = 14,
+ .shaper_bs = 1,
+ .port_to_shaper_credits = 1,
+ .ing_timestamp_inc = -11,
+ .egr_timestamp_inc = 11,
+ },
+ {
+ .ethtool_speed = SPEED_25000,
+ .port_speed = MLXSW_REG_QPSC_PORT_SPEED_25G,
+ .shaper_time_exp = 0,
+ .shaper_time_mantissa = 0,
+ .shaper_inc = 11,
+ .shaper_bs = 1,
+ .port_to_shaper_credits = 1,
+ .ing_timestamp_inc = -14,
+ .egr_timestamp_inc = 14,
+ },
+};
+
+#define MLXSW_SP1_PTP_SHAPER_PARAMS_LEN ARRAY_SIZE(mlxsw_sp1_ptp_shaper_params)
+
+static int mlxsw_sp1_ptp_shaper_params_set(struct mlxsw_sp *mlxsw_sp)
+{
+ const struct mlxsw_sp1_ptp_shaper_params *params;
+ char qpsc_pl[MLXSW_REG_QPSC_LEN];
+ int i, err;
+
+ for (i = 0; i < MLXSW_SP1_PTP_SHAPER_PARAMS_LEN; i++) {
+ params = &mlxsw_sp1_ptp_shaper_params[i];
+ mlxsw_reg_qpsc_pack(qpsc_pl, params->port_speed,
+ params->shaper_time_exp,
+ params->shaper_time_mantissa,
+ params->shaper_inc, params->shaper_bs,
+ params->port_to_shaper_credits,
+ params->ing_timestamp_inc,
+ params->egr_timestamp_inc);
+ err = mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(qpsc), qpsc_pl);
+ if (err)
+ return err;
+ }
+
+ return 0;
+}
+
+struct mlxsw_sp_ptp_state *mlxsw_sp1_ptp_init(struct mlxsw_sp *mlxsw_sp)
+{
+ struct mlxsw_sp_ptp_state *ptp_state;
+ u16 message_type;
+ int err;
+
+ err = mlxsw_sp1_ptp_shaper_params_set(mlxsw_sp);
+ if (err)
+ return ERR_PTR(err);
+
+ ptp_state = kzalloc(sizeof(*ptp_state), GFP_KERNEL);
+ if (!ptp_state)
+ return ERR_PTR(-ENOMEM);
+ ptp_state->mlxsw_sp = mlxsw_sp;
+
+ spin_lock_init(&ptp_state->unmatched_lock);
+
+ err = rhashtable_init(&ptp_state->unmatched_ht,
+ &mlxsw_sp1_ptp_unmatched_ht_params);
+ if (err)
+ goto err_hashtable_init;
+
+ /* Deliver these message types as PTP0. */
+ message_type = BIT(MLXSW_SP_PTP_MESSAGE_TYPE_SYNC) |
+ BIT(MLXSW_SP_PTP_MESSAGE_TYPE_DELAY_REQ) |
+ BIT(MLXSW_SP_PTP_MESSAGE_TYPE_PDELAY_REQ) |
+ BIT(MLXSW_SP_PTP_MESSAGE_TYPE_PDELAY_RESP);
+ err = mlxsw_sp_ptp_mtptpt_set(mlxsw_sp, MLXSW_REG_MTPTPT_TRAP_ID_PTP0,
+ message_type);
+ if (err)
+ goto err_mtptpt_set;
+
+ /* Everything else is PTP1. */
+ message_type = ~message_type;
+ err = mlxsw_sp_ptp_mtptpt_set(mlxsw_sp, MLXSW_REG_MTPTPT_TRAP_ID_PTP1,
+ message_type);
+ if (err)
+ goto err_mtptpt1_set;
+
+ err = mlxsw_sp1_ptp_set_fifo_clr_on_trap(mlxsw_sp, true);
+ if (err)
+ goto err_fifo_clr;
+
+ INIT_DELAYED_WORK(&ptp_state->ht_gc_dw, mlxsw_sp1_ptp_ht_gc);
+ mlxsw_core_schedule_dw(&ptp_state->ht_gc_dw,
+ MLXSW_SP1_PTP_HT_GC_INTERVAL);
+ return ptp_state;
+
+err_fifo_clr:
+ mlxsw_sp_ptp_mtptpt_set(mlxsw_sp, MLXSW_REG_MTPTPT_TRAP_ID_PTP1, 0);
+err_mtptpt1_set:
+ mlxsw_sp_ptp_mtptpt_set(mlxsw_sp, MLXSW_REG_MTPTPT_TRAP_ID_PTP0, 0);
+err_mtptpt_set:
+ rhashtable_destroy(&ptp_state->unmatched_ht);
+err_hashtable_init:
+ kfree(ptp_state);
+ return ERR_PTR(err);
+}
+
+void mlxsw_sp1_ptp_fini(struct mlxsw_sp_ptp_state *ptp_state)
+{
+ struct mlxsw_sp *mlxsw_sp = ptp_state->mlxsw_sp;
+
+ cancel_delayed_work_sync(&ptp_state->ht_gc_dw);
+ mlxsw_sp1_ptp_mtpppc_set(mlxsw_sp, 0, 0);
+ mlxsw_sp1_ptp_set_fifo_clr_on_trap(mlxsw_sp, false);
+ mlxsw_sp_ptp_mtptpt_set(mlxsw_sp, MLXSW_REG_MTPTPT_TRAP_ID_PTP1, 0);
+ mlxsw_sp_ptp_mtptpt_set(mlxsw_sp, MLXSW_REG_MTPTPT_TRAP_ID_PTP0, 0);
+ rhashtable_free_and_destroy(&ptp_state->unmatched_ht,
+ &mlxsw_sp1_ptp_unmatched_free_fn, NULL);
+ kfree(ptp_state);
+}
+
+int mlxsw_sp1_ptp_hwtstamp_get(struct mlxsw_sp_port *mlxsw_sp_port,
+ struct hwtstamp_config *config)
+{
+ *config = mlxsw_sp_port->ptp.hwtstamp_config;
+ return 0;
+}
+
+static int mlxsw_sp_ptp_get_message_types(const struct hwtstamp_config *config,
+ u16 *p_ing_types, u16 *p_egr_types,
+ enum hwtstamp_rx_filters *p_rx_filter)
+{
+ enum hwtstamp_rx_filters rx_filter = config->rx_filter;
+ enum hwtstamp_tx_types tx_type = config->tx_type;
+ u16 ing_types = 0x00;
+ u16 egr_types = 0x00;
+
+ switch (tx_type) {
+ case HWTSTAMP_TX_OFF:
+ egr_types = 0x00;
+ break;
+ case HWTSTAMP_TX_ON:
+ egr_types = 0xff;
+ break;
+ case HWTSTAMP_TX_ONESTEP_SYNC:
+ return -ERANGE;
+ }
+
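+ /* Each bit in ing_types corresponds to a PTP messageType value:
+ * bit 0 is Sync, bit 1 is Delay_Req, bits 0-3 together cover all
+ * event messages.
+ */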
+ switch (rx_filter) {
+ case HWTSTAMP_FILTER_NONE:
+ ing_types = 0x00;
+ break;
+ case HWTSTAMP_FILTER_PTP_V1_L4_SYNC:
+ case HWTSTAMP_FILTER_PTP_V2_L4_SYNC:
+ case HWTSTAMP_FILTER_PTP_V2_L2_SYNC:
+ case HWTSTAMP_FILTER_PTP_V2_SYNC:
+ ing_types = 0x01;
+ break;
+ case HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ:
+ case HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ:
+ case HWTSTAMP_FILTER_PTP_V2_L2_DELAY_REQ:
+ case HWTSTAMP_FILTER_PTP_V2_DELAY_REQ:
+ ing_types = 0x02;
+ break;
+ case HWTSTAMP_FILTER_PTP_V1_L4_EVENT:
+ case HWTSTAMP_FILTER_PTP_V2_L4_EVENT:
+ case HWTSTAMP_FILTER_PTP_V2_L2_EVENT:
+ case HWTSTAMP_FILTER_PTP_V2_EVENT:
+ ing_types = 0x0f;
+ break;
+ case HWTSTAMP_FILTER_ALL:
+ ing_types = 0xff;
+ break;
+ case HWTSTAMP_FILTER_SOME:
+ case HWTSTAMP_FILTER_NTP_ALL:
+ return -ERANGE;
+ }
+
+ *p_ing_types = ing_types;
+ *p_egr_types = egr_types;
+ *p_rx_filter = rx_filter;
+ return 0;
+}
+
+static int mlxsw_sp1_ptp_mtpppc_update(struct mlxsw_sp_port *mlxsw_sp_port,
+ u16 ing_types, u16 egr_types)
+{
+ struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
+ struct mlxsw_sp_port *tmp;
+ int i;
+
+ /* MTPPPC configures timestamping globally, not per port. Find the
+ * configuration that contains all configured timestamping requests.
+ */
+ for (i = 1; i < mlxsw_core_max_ports(mlxsw_sp->core); i++) {
+ tmp = mlxsw_sp->ports[i];
+ if (tmp && tmp != mlxsw_sp_port) {
+ ing_types |= tmp->ptp.ing_types;
+ egr_types |= tmp->ptp.egr_types;
+ }
+ }
+
+ return mlxsw_sp1_ptp_mtpppc_set(mlxsw_sp_port->mlxsw_sp,
+ ing_types, egr_types);
+}
+
+static bool mlxsw_sp1_ptp_hwtstamp_enabled(struct mlxsw_sp_port *mlxsw_sp_port)
+{
+ return mlxsw_sp_port->ptp.ing_types || mlxsw_sp_port->ptp.egr_types;
+}
+
+static int
+mlxsw_sp1_ptp_port_shaper_set(struct mlxsw_sp_port *mlxsw_sp_port, bool enable)
+{
+ struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
+ char qeec_pl[MLXSW_REG_QEEC_LEN];
+
+ mlxsw_reg_qeec_ptps_pack(qeec_pl, mlxsw_sp_port->local_port, enable);
+ return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(qeec), qeec_pl);
+}
+
+static int mlxsw_sp1_ptp_port_shaper_check(struct mlxsw_sp_port *mlxsw_sp_port)
+{
+ const struct mlxsw_sp_port_type_speed_ops *port_type_speed_ops;
+ struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
+ char ptys_pl[MLXSW_REG_PTYS_LEN];
+ u32 eth_proto_oper, speed;
+ bool ptps = false;
+ int err, i;
+
+ if (!mlxsw_sp1_ptp_hwtstamp_enabled(mlxsw_sp_port))
+ return mlxsw_sp1_ptp_port_shaper_set(mlxsw_sp_port, false);
+
+ port_type_speed_ops = mlxsw_sp->port_type_speed_ops;
+ port_type_speed_ops->reg_ptys_eth_pack(mlxsw_sp, ptys_pl,
+ mlxsw_sp_port->local_port, 0,
+ false);
+ err = mlxsw_reg_query(mlxsw_sp->core, MLXSW_REG(ptys), ptys_pl);
+ if (err)
+ return err;
+ port_type_speed_ops->reg_ptys_eth_unpack(mlxsw_sp, ptys_pl, NULL, NULL,
+ &eth_proto_oper);
+
+ speed = port_type_speed_ops->from_ptys_speed(mlxsw_sp, eth_proto_oper);
+ for (i = 0; i < MLXSW_SP1_PTP_SHAPER_PARAMS_LEN; i++) {
+ if (mlxsw_sp1_ptp_shaper_params[i].ethtool_speed == speed) {
+ ptps = true;
+ break;
+ }
+ }
+
+ return mlxsw_sp1_ptp_port_shaper_set(mlxsw_sp_port, ptps);
+}
+
+void mlxsw_sp1_ptp_shaper_work(struct work_struct *work)
+{
+ struct delayed_work *dwork = to_delayed_work(work);
+ struct mlxsw_sp_port *mlxsw_sp_port;
+ int err;
+
+ mlxsw_sp_port = container_of(dwork, struct mlxsw_sp_port,
+ ptp.shaper_dw);
+
+ if (!mlxsw_sp1_ptp_hwtstamp_enabled(mlxsw_sp_port))
+ return;
+
+ err = mlxsw_sp1_ptp_port_shaper_check(mlxsw_sp_port);
+ if (err)
+ netdev_err(mlxsw_sp_port->dev, "Failed to set up PTP shaper\n");
+}
+
+int mlxsw_sp1_ptp_hwtstamp_set(struct mlxsw_sp_port *mlxsw_sp_port,
+ struct hwtstamp_config *config)
+{
+ enum hwtstamp_rx_filters rx_filter;
+ u16 ing_types;
+ u16 egr_types;
+ int err;
+
+ err = mlxsw_sp_ptp_get_message_types(config, &ing_types, &egr_types,
+ &rx_filter);
+ if (err)
+ return err;
+
+ err = mlxsw_sp1_ptp_mtpppc_update(mlxsw_sp_port, ing_types, egr_types);
+ if (err)
+ return err;
+
+ mlxsw_sp_port->ptp.hwtstamp_config = *config;
+ mlxsw_sp_port->ptp.ing_types = ing_types;
+ mlxsw_sp_port->ptp.egr_types = egr_types;
+
+ err = mlxsw_sp1_ptp_port_shaper_check(mlxsw_sp_port);
+ if (err)
+ return err;
+
+ /* Notify the ioctl caller what we are actually timestamping. */
+ config->rx_filter = rx_filter;
+
+ return 0;
+}
+
+int mlxsw_sp1_ptp_get_ts_info(struct mlxsw_sp *mlxsw_sp,
+ struct ethtool_ts_info *info)
+{
+ info->phc_index = ptp_clock_index(mlxsw_sp->clock->ptp);
+
+ info->so_timestamping = SOF_TIMESTAMPING_TX_HARDWARE |
+ SOF_TIMESTAMPING_RX_HARDWARE |
+ SOF_TIMESTAMPING_RAW_HARDWARE;
+
+ info->tx_types = BIT(HWTSTAMP_TX_OFF) |
+ BIT(HWTSTAMP_TX_ON);
+
+ info->rx_filters = BIT(HWTSTAMP_FILTER_NONE) |
+ BIT(HWTSTAMP_FILTER_ALL);
+
+ return 0;
+}
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ptp.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ptp.h
new file mode 100644
index 000000000000..72e55f6926b9
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ptp.h
@@ -0,0 +1,186 @@
+/* SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0 */
+/* Copyright (c) 2019 Mellanox Technologies. All rights reserved */
+
+#ifndef _MLXSW_SPECTRUM_PTP_H
+#define _MLXSW_SPECTRUM_PTP_H
+
+#include <linux/device.h>
+#include <linux/rhashtable.h>
+
+struct mlxsw_sp;
+struct mlxsw_sp_port;
+struct mlxsw_sp_ptp_clock;
+
+enum {
+ MLXSW_SP_PTP_MESSAGE_TYPE_SYNC,
+ MLXSW_SP_PTP_MESSAGE_TYPE_DELAY_REQ,
+ MLXSW_SP_PTP_MESSAGE_TYPE_PDELAY_REQ,
+ MLXSW_SP_PTP_MESSAGE_TYPE_PDELAY_RESP,
+};
+
+static inline int mlxsw_sp_ptp_get_ts_info_noptp(struct ethtool_ts_info *info)
+{
+ info->so_timestamping = SOF_TIMESTAMPING_RX_SOFTWARE |
+ SOF_TIMESTAMPING_SOFTWARE;
+ info->phc_index = -1;
+ return 0;
+}
+
+#if IS_REACHABLE(CONFIG_PTP_1588_CLOCK)
+
+struct mlxsw_sp_ptp_clock *
+mlxsw_sp1_ptp_clock_init(struct mlxsw_sp *mlxsw_sp, struct device *dev);
+
+void mlxsw_sp1_ptp_clock_fini(struct mlxsw_sp_ptp_clock *clock);
+
+struct mlxsw_sp_ptp_state *mlxsw_sp1_ptp_init(struct mlxsw_sp *mlxsw_sp);
+
+void mlxsw_sp1_ptp_fini(struct mlxsw_sp_ptp_state *ptp_state);
+
+void mlxsw_sp1_ptp_receive(struct mlxsw_sp *mlxsw_sp, struct sk_buff *skb,
+ u8 local_port);
+
+void mlxsw_sp1_ptp_transmitted(struct mlxsw_sp *mlxsw_sp,
+ struct sk_buff *skb, u8 local_port);
+
+void mlxsw_sp1_ptp_got_timestamp(struct mlxsw_sp *mlxsw_sp, bool ingress,
+ u8 local_port, u8 message_type,
+ u8 domain_number, u16 sequence_id,
+ u64 timestamp);
+
+int mlxsw_sp1_ptp_hwtstamp_get(struct mlxsw_sp_port *mlxsw_sp_port,
+ struct hwtstamp_config *config);
+
+int mlxsw_sp1_ptp_hwtstamp_set(struct mlxsw_sp_port *mlxsw_sp_port,
+ struct hwtstamp_config *config);
+
+void mlxsw_sp1_ptp_shaper_work(struct work_struct *work);
+
+int mlxsw_sp1_ptp_get_ts_info(struct mlxsw_sp *mlxsw_sp,
+ struct ethtool_ts_info *info);
+
+#else
+
+static inline struct mlxsw_sp_ptp_clock *
+mlxsw_sp1_ptp_clock_init(struct mlxsw_sp *mlxsw_sp, struct device *dev)
+{
+ return NULL;
+}
+
+static inline void mlxsw_sp1_ptp_clock_fini(struct mlxsw_sp_ptp_clock *clock)
+{
+}
+
+static inline struct mlxsw_sp_ptp_state *
+mlxsw_sp1_ptp_init(struct mlxsw_sp *mlxsw_sp)
+{
+ return NULL;
+}
+
+static inline void mlxsw_sp1_ptp_fini(struct mlxsw_sp_ptp_state *ptp_state)
+{
+}
+
+static inline void mlxsw_sp1_ptp_receive(struct mlxsw_sp *mlxsw_sp,
+ struct sk_buff *skb, u8 local_port)
+{
+ mlxsw_sp_rx_listener_no_mark_func(skb, local_port, mlxsw_sp);
+}
+
+static inline void mlxsw_sp1_ptp_transmitted(struct mlxsw_sp *mlxsw_sp,
+ struct sk_buff *skb, u8 local_port)
+{
+ dev_kfree_skb_any(skb);
+}
+
+static inline void
+mlxsw_sp1_ptp_got_timestamp(struct mlxsw_sp *mlxsw_sp, bool ingress,
+ u8 local_port, u8 message_type,
+ u8 domain_number,
+ u16 sequence_id, u64 timestamp)
+{
+}
+
+static inline int
+mlxsw_sp1_ptp_hwtstamp_get(struct mlxsw_sp_port *mlxsw_sp_port,
+ struct hwtstamp_config *config)
+{
+ return -EOPNOTSUPP;
+}
+
+static inline int
+mlxsw_sp1_ptp_hwtstamp_set(struct mlxsw_sp_port *mlxsw_sp_port,
+ struct hwtstamp_config *config)
+{
+ return -EOPNOTSUPP;
+}
+
+static inline void mlxsw_sp1_ptp_shaper_work(struct work_struct *work)
+{
+}
+
+static inline int mlxsw_sp1_ptp_get_ts_info(struct mlxsw_sp *mlxsw_sp,
+ struct ethtool_ts_info *info)
+{
+ return mlxsw_sp_ptp_get_ts_info_noptp(info);
+}
+
+#endif
+
+static inline struct mlxsw_sp_ptp_clock *
+mlxsw_sp2_ptp_clock_init(struct mlxsw_sp *mlxsw_sp, struct device *dev)
+{
+ return NULL;
+}
+
+static inline void mlxsw_sp2_ptp_clock_fini(struct mlxsw_sp_ptp_clock *clock)
+{
+}
+
+static inline struct mlxsw_sp_ptp_state *
+mlxsw_sp2_ptp_init(struct mlxsw_sp *mlxsw_sp)
+{
+ return NULL;
+}
+
+static inline void mlxsw_sp2_ptp_fini(struct mlxsw_sp_ptp_state *ptp_state)
+{
+}
+
+static inline void mlxsw_sp2_ptp_receive(struct mlxsw_sp *mlxsw_sp,
+ struct sk_buff *skb, u8 local_port)
+{
+ mlxsw_sp_rx_listener_no_mark_func(skb, local_port, mlxsw_sp);
+}
+
+static inline void mlxsw_sp2_ptp_transmitted(struct mlxsw_sp *mlxsw_sp,
+ struct sk_buff *skb, u8 local_port)
+{
+ dev_kfree_skb_any(skb);
+}
+
+static inline int
+mlxsw_sp2_ptp_hwtstamp_get(struct mlxsw_sp_port *mlxsw_sp_port,
+ struct hwtstamp_config *config)
+{
+ return -EOPNOTSUPP;
+}
+
+static inline int
+mlxsw_sp2_ptp_hwtstamp_set(struct mlxsw_sp_port *mlxsw_sp_port,
+ struct hwtstamp_config *config)
+{
+ return -EOPNOTSUPP;
+}
+
+static inline void mlxsw_sp2_ptp_shaper_work(struct work_struct *work)
+{
+}
+
+static inline int mlxsw_sp2_ptp_get_ts_info(struct mlxsw_sp *mlxsw_sp,
+ struct ethtool_ts_info *info)
+{
+ return mlxsw_sp_ptp_get_ts_info_noptp(info);
+}
+
+#endif
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
index ef554739dd54..e618be7ce6c6 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
@@ -21,6 +21,7 @@
#include <net/arp.h>
#include <net/ip_fib.h>
#include <net/ip6_fib.h>
+#include <net/nexthop.h>
#include <net/fib_rules.h>
#include <net/ip_tunnels.h>
#include <net/l3mdev.h>
@@ -2887,7 +2888,7 @@ mlxsw_sp_nexthop6_group_cmp(const struct mlxsw_sp_nexthop_group *nh_grp,
return false;
list_for_each_entry(mlxsw_sp_rt6, &fib6_entry->rt6_list, list) {
- struct fib6_nh *fib6_nh = &mlxsw_sp_rt6->rt->fib6_nh;
+ struct fib6_nh *fib6_nh = mlxsw_sp_rt6->rt->fib6_nh;
struct in6_addr *gw;
int ifindex, weight;
@@ -2959,7 +2960,7 @@ mlxsw_sp_nexthop6_group_hash(struct mlxsw_sp_fib6_entry *fib6_entry, u32 seed)
struct net_device *dev;
list_for_each_entry(mlxsw_sp_rt6, &fib6_entry->rt6_list, list) {
- dev = mlxsw_sp_rt6->rt->fib6_nh.fib_nh_dev;
+ dev = mlxsw_sp_rt6->rt->fib6_nh->fib_nh_dev;
val ^= dev->ifindex;
}
@@ -3883,23 +3884,25 @@ static void mlxsw_sp_nexthop_rif_gone_sync(struct mlxsw_sp *mlxsw_sp,
}
static bool mlxsw_sp_fi_is_gateway(const struct mlxsw_sp *mlxsw_sp,
- const struct fib_info *fi)
+ struct fib_info *fi)
{
- return fi->fib_nh->fib_nh_scope == RT_SCOPE_LINK ||
- mlxsw_sp_nexthop4_ipip_type(mlxsw_sp, fi->fib_nh, NULL);
+ const struct fib_nh *nh = fib_info_nh(fi, 0);
+
+ return nh->fib_nh_scope == RT_SCOPE_LINK ||
+ mlxsw_sp_nexthop4_ipip_type(mlxsw_sp, nh, NULL);
}
static struct mlxsw_sp_nexthop_group *
mlxsw_sp_nexthop4_group_create(struct mlxsw_sp *mlxsw_sp, struct fib_info *fi)
{
+ unsigned int nhs = fib_info_num_path(fi);
struct mlxsw_sp_nexthop_group *nh_grp;
struct mlxsw_sp_nexthop *nh;
struct fib_nh *fib_nh;
int i;
int err;
- nh_grp = kzalloc(struct_size(nh_grp, nexthops, fi->fib_nhs),
- GFP_KERNEL);
+ nh_grp = kzalloc(struct_size(nh_grp, nexthops, nhs), GFP_KERNEL);
if (!nh_grp)
return ERR_PTR(-ENOMEM);
nh_grp->priv = fi;
@@ -3907,11 +3910,11 @@ mlxsw_sp_nexthop4_group_create(struct mlxsw_sp *mlxsw_sp, struct fib_info *fi)
nh_grp->neigh_tbl = &arp_tbl;
nh_grp->gateway = mlxsw_sp_fi_is_gateway(mlxsw_sp, fi);
- nh_grp->count = fi->fib_nhs;
+ nh_grp->count = nhs;
fib_info_hold(fi);
for (i = 0; i < nh_grp->count; i++) {
nh = &nh_grp->nexthops[i];
- fib_nh = &fi->fib_nh[i];
+ fib_nh = fib_info_nh(fi, i);
err = mlxsw_sp_nexthop4_init(mlxsw_sp, nh_grp, nh, fib_nh);
if (err)
goto err_nexthop4_init;
@@ -4027,9 +4030,9 @@ mlxsw_sp_rt6_nexthop(struct mlxsw_sp_nexthop_group *nh_grp,
struct mlxsw_sp_nexthop *nh = &nh_grp->nexthops[i];
struct fib6_info *rt = mlxsw_sp_rt6->rt;
- if (nh->rif && nh->rif->dev == rt->fib6_nh.fib_nh_dev &&
+ if (nh->rif && nh->rif->dev == rt->fib6_nh->fib_nh_dev &&
ipv6_addr_equal((const struct in6_addr *) &nh->gw_addr,
- &rt->fib6_nh.fib_nh_gw6))
+ &rt->fib6_nh->fib_nh_gw6))
return nh;
continue;
}
@@ -4089,13 +4092,13 @@ mlxsw_sp_fib6_entry_offload_set(struct mlxsw_sp_fib_entry *fib_entry)
if (fib_entry->type == MLXSW_SP_FIB_ENTRY_TYPE_LOCAL ||
fib_entry->type == MLXSW_SP_FIB_ENTRY_TYPE_BLACKHOLE) {
list_first_entry(&fib6_entry->rt6_list, struct mlxsw_sp_rt6,
- list)->rt->fib6_nh.fib_nh_flags |= RTNH_F_OFFLOAD;
+ list)->rt->fib6_nh->fib_nh_flags |= RTNH_F_OFFLOAD;
return;
}
list_for_each_entry(mlxsw_sp_rt6, &fib6_entry->rt6_list, list) {
struct mlxsw_sp_nexthop_group *nh_grp = fib_entry->nh_group;
- struct fib6_nh *fib6_nh = &mlxsw_sp_rt6->rt->fib6_nh;
+ struct fib6_nh *fib6_nh = mlxsw_sp_rt6->rt->fib6_nh;
struct mlxsw_sp_nexthop *nh;
nh = mlxsw_sp_rt6_nexthop(nh_grp, mlxsw_sp_rt6);
@@ -4117,7 +4120,7 @@ mlxsw_sp_fib6_entry_offload_unset(struct mlxsw_sp_fib_entry *fib_entry)
list_for_each_entry(mlxsw_sp_rt6, &fib6_entry->rt6_list, list) {
struct fib6_info *rt = mlxsw_sp_rt6->rt;
- rt->fib6_nh.fib_nh_flags &= ~RTNH_F_OFFLOAD;
+ rt->fib6_nh->fib_nh_flags &= ~RTNH_F_OFFLOAD;
}
}
@@ -4349,9 +4352,9 @@ mlxsw_sp_fib4_entry_type_set(struct mlxsw_sp *mlxsw_sp,
const struct fib_entry_notifier_info *fen_info,
struct mlxsw_sp_fib_entry *fib_entry)
{
+ struct net_device *dev = fib_info_nh(fen_info->fi, 0)->fib_nh_dev;
union mlxsw_sp_l3addr dip = { .addr4 = htonl(fen_info->dst) };
u32 tb_id = mlxsw_sp_fix_tb_id(fen_info->tb_id);
- struct net_device *dev = fen_info->fi->fib_dev;
struct mlxsw_sp_ipip_entry *ipip_entry;
struct fib_info *fi = fen_info->fi;
@@ -4995,7 +4998,8 @@ static void mlxsw_sp_rt6_destroy(struct mlxsw_sp_rt6 *mlxsw_sp_rt6)
static bool mlxsw_sp_fib6_rt_can_mp(const struct fib6_info *rt)
{
/* RTF_CACHE routes are ignored */
- return !(rt->fib6_flags & RTF_ADDRCONF) && rt->fib6_nh.fib_nh_gw_family;
+ return !(rt->fib6_flags & RTF_ADDRCONF) &&
+ rt->fib6_nh->fib_nh_gw_family;
}
static struct fib6_info *
@@ -5054,8 +5058,8 @@ static bool mlxsw_sp_nexthop6_ipip_type(const struct mlxsw_sp *mlxsw_sp,
const struct fib6_info *rt,
enum mlxsw_sp_ipip_type *ret)
{
- return rt->fib6_nh.fib_nh_dev &&
- mlxsw_sp_netdev_ipip_type(mlxsw_sp, rt->fib6_nh.fib_nh_dev, ret);
+ return rt->fib6_nh->fib_nh_dev &&
+ mlxsw_sp_netdev_ipip_type(mlxsw_sp, rt->fib6_nh->fib_nh_dev, ret);
}
static int mlxsw_sp_nexthop6_type_init(struct mlxsw_sp *mlxsw_sp,
@@ -5065,7 +5069,7 @@ static int mlxsw_sp_nexthop6_type_init(struct mlxsw_sp *mlxsw_sp,
{
const struct mlxsw_sp_ipip_ops *ipip_ops;
struct mlxsw_sp_ipip_entry *ipip_entry;
- struct net_device *dev = rt->fib6_nh.fib_nh_dev;
+ struct net_device *dev = rt->fib6_nh->fib_nh_dev;
struct mlxsw_sp_rif *rif;
int err;
@@ -5108,11 +5112,11 @@ static int mlxsw_sp_nexthop6_init(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_nexthop *nh,
const struct fib6_info *rt)
{
- struct net_device *dev = rt->fib6_nh.fib_nh_dev;
+ struct net_device *dev = rt->fib6_nh->fib_nh_dev;
nh->nh_grp = nh_grp;
- nh->nh_weight = rt->fib6_nh.fib_nh_weight;
- memcpy(&nh->gw_addr, &rt->fib6_nh.fib_nh_gw6, sizeof(nh->gw_addr));
+ nh->nh_weight = rt->fib6_nh->fib_nh_weight;
+ memcpy(&nh->gw_addr, &rt->fib6_nh->fib_nh_gw6, sizeof(nh->gw_addr));
mlxsw_sp_nexthop_counter_alloc(mlxsw_sp, nh);
list_add_tail(&nh->router_list_node, &mlxsw_sp->router->nexthop_list);
@@ -5135,7 +5139,7 @@ static void mlxsw_sp_nexthop6_fini(struct mlxsw_sp *mlxsw_sp,
static bool mlxsw_sp_rt6_is_gateway(const struct mlxsw_sp *mlxsw_sp,
const struct fib6_info *rt)
{
- return rt->fib6_nh.fib_nh_gw_family ||
+ return rt->fib6_nh->fib_nh_gw_family ||
mlxsw_sp_nexthop6_ipip_type(mlxsw_sp, rt, NULL);
}
@@ -5274,17 +5278,21 @@ err_nexthop6_group_get:
static int
mlxsw_sp_fib6_entry_nexthop_add(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_fib6_entry *fib6_entry,
- struct fib6_info *rt)
+ struct fib6_info **rt_arr, unsigned int nrt6)
{
struct mlxsw_sp_rt6 *mlxsw_sp_rt6;
- int err;
+ int err, i;
- mlxsw_sp_rt6 = mlxsw_sp_rt6_create(rt);
- if (IS_ERR(mlxsw_sp_rt6))
- return PTR_ERR(mlxsw_sp_rt6);
+ for (i = 0; i < nrt6; i++) {
+ mlxsw_sp_rt6 = mlxsw_sp_rt6_create(rt_arr[i]);
+ if (IS_ERR(mlxsw_sp_rt6)) {
+ err = PTR_ERR(mlxsw_sp_rt6);
+ goto err_rt6_create;
+ }
- list_add_tail(&mlxsw_sp_rt6->list, &fib6_entry->rt6_list);
- fib6_entry->nrt6++;
+ list_add_tail(&mlxsw_sp_rt6->list, &fib6_entry->rt6_list);
+ fib6_entry->nrt6++;
+ }
err = mlxsw_sp_nexthop6_group_update(mlxsw_sp, fib6_entry);
if (err)
@@ -5293,27 +5301,38 @@ mlxsw_sp_fib6_entry_nexthop_add(struct mlxsw_sp *mlxsw_sp,
return 0;
err_nexthop6_group_update:
- fib6_entry->nrt6--;
- list_del(&mlxsw_sp_rt6->list);
- mlxsw_sp_rt6_destroy(mlxsw_sp_rt6);
+ i = nrt6;
+err_rt6_create:
+ for (i--; i >= 0; i--) {
+ fib6_entry->nrt6--;
+ mlxsw_sp_rt6 = list_last_entry(&fib6_entry->rt6_list,
+ struct mlxsw_sp_rt6, list);
+ list_del(&mlxsw_sp_rt6->list);
+ mlxsw_sp_rt6_destroy(mlxsw_sp_rt6);
+ }
return err;
}
static void
mlxsw_sp_fib6_entry_nexthop_del(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_fib6_entry *fib6_entry,
- struct fib6_info *rt)
+ struct fib6_info **rt_arr, unsigned int nrt6)
{
struct mlxsw_sp_rt6 *mlxsw_sp_rt6;
+ int i;
- mlxsw_sp_rt6 = mlxsw_sp_fib6_entry_rt_find(fib6_entry, rt);
- if (WARN_ON(!mlxsw_sp_rt6))
- return;
+ for (i = 0; i < nrt6; i++) {
+ mlxsw_sp_rt6 = mlxsw_sp_fib6_entry_rt_find(fib6_entry,
+ rt_arr[i]);
+ if (WARN_ON_ONCE(!mlxsw_sp_rt6))
+ continue;
+
+ fib6_entry->nrt6--;
+ list_del(&mlxsw_sp_rt6->list);
+ mlxsw_sp_rt6_destroy(mlxsw_sp_rt6);
+ }
- fib6_entry->nrt6--;
- list_del(&mlxsw_sp_rt6->list);
mlxsw_sp_nexthop6_group_update(mlxsw_sp, fib6_entry);
- mlxsw_sp_rt6_destroy(mlxsw_sp_rt6);
}
static void mlxsw_sp_fib6_entry_type_set(struct mlxsw_sp *mlxsw_sp,
@@ -5354,29 +5373,32 @@ mlxsw_sp_fib6_entry_rt_destroy_all(struct mlxsw_sp_fib6_entry *fib6_entry)
static struct mlxsw_sp_fib6_entry *
mlxsw_sp_fib6_entry_create(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_fib_node *fib_node,
- struct fib6_info *rt)
+ struct fib6_info **rt_arr, unsigned int nrt6)
{
struct mlxsw_sp_fib6_entry *fib6_entry;
struct mlxsw_sp_fib_entry *fib_entry;
struct mlxsw_sp_rt6 *mlxsw_sp_rt6;
- int err;
+ int err, i;
fib6_entry = kzalloc(sizeof(*fib6_entry), GFP_KERNEL);
if (!fib6_entry)
return ERR_PTR(-ENOMEM);
fib_entry = &fib6_entry->common;
- mlxsw_sp_rt6 = mlxsw_sp_rt6_create(rt);
- if (IS_ERR(mlxsw_sp_rt6)) {
- err = PTR_ERR(mlxsw_sp_rt6);
- goto err_rt6_create;
+ INIT_LIST_HEAD(&fib6_entry->rt6_list);
+
+ for (i = 0; i < nrt6; i++) {
+ mlxsw_sp_rt6 = mlxsw_sp_rt6_create(rt_arr[i]);
+ if (IS_ERR(mlxsw_sp_rt6)) {
+ err = PTR_ERR(mlxsw_sp_rt6);
+ goto err_rt6_create;
+ }
+ list_add_tail(&mlxsw_sp_rt6->list, &fib6_entry->rt6_list);
+ fib6_entry->nrt6++;
}
- mlxsw_sp_fib6_entry_type_set(mlxsw_sp, fib_entry, mlxsw_sp_rt6->rt);
+ mlxsw_sp_fib6_entry_type_set(mlxsw_sp, fib_entry, rt_arr[0]);
- INIT_LIST_HEAD(&fib6_entry->rt6_list);
- list_add_tail(&mlxsw_sp_rt6->list, &fib6_entry->rt6_list);
- fib6_entry->nrt6 = 1;
err = mlxsw_sp_nexthop6_group_get(mlxsw_sp, fib6_entry);
if (err)
goto err_nexthop6_group_get;
@@ -5386,9 +5408,15 @@ mlxsw_sp_fib6_entry_create(struct mlxsw_sp *mlxsw_sp,
return fib6_entry;
err_nexthop6_group_get:
- list_del(&mlxsw_sp_rt6->list);
- mlxsw_sp_rt6_destroy(mlxsw_sp_rt6);
+ i = nrt6;
err_rt6_create:
+ for (i--; i >= 0; i--) {
+ fib6_entry->nrt6--;
+ mlxsw_sp_rt6 = list_last_entry(&fib6_entry->rt6_list,
+ struct mlxsw_sp_rt6, list);
+ list_del(&mlxsw_sp_rt6->list);
+ mlxsw_sp_rt6_destroy(mlxsw_sp_rt6);
+ }
kfree(fib6_entry);
return ERR_PTR(err);
}
@@ -5431,16 +5459,16 @@ mlxsw_sp_fib6_node_entry_find(const struct mlxsw_sp_fib_node *fib_node,
static int
mlxsw_sp_fib6_node_list_insert(struct mlxsw_sp_fib6_entry *new6_entry,
- bool replace)
+ bool *p_replace)
{
struct mlxsw_sp_fib_node *fib_node = new6_entry->common.fib_node;
struct fib6_info *nrt = mlxsw_sp_fib6_entry_rt(new6_entry);
struct mlxsw_sp_fib6_entry *fib6_entry;
- fib6_entry = mlxsw_sp_fib6_node_entry_find(fib_node, nrt, replace);
+ fib6_entry = mlxsw_sp_fib6_node_entry_find(fib_node, nrt, *p_replace);
- if (replace && WARN_ON(!fib6_entry))
- return -EINVAL;
+ if (*p_replace && !fib6_entry)
+ *p_replace = false;
if (fib6_entry) {
list_add_tail(&new6_entry->common.list,
@@ -5475,11 +5503,11 @@ mlxsw_sp_fib6_node_list_remove(struct mlxsw_sp_fib6_entry *fib6_entry)
static int mlxsw_sp_fib6_node_entry_link(struct mlxsw_sp *mlxsw_sp,
struct mlxsw_sp_fib6_entry *fib6_entry,
- bool replace)
+ bool *p_replace)
{
int err;
- err = mlxsw_sp_fib6_node_list_insert(fib6_entry, replace);
+ err = mlxsw_sp_fib6_node_list_insert(fib6_entry, p_replace);
if (err)
return err;
@@ -5552,10 +5580,12 @@ static void mlxsw_sp_fib6_entry_replace(struct mlxsw_sp *mlxsw_sp,
}
static int mlxsw_sp_router_fib6_add(struct mlxsw_sp *mlxsw_sp,
- struct fib6_info *rt, bool replace)
+ struct fib6_info **rt_arr,
+ unsigned int nrt6, bool replace)
{
struct mlxsw_sp_fib6_entry *fib6_entry;
struct mlxsw_sp_fib_node *fib_node;
+ struct fib6_info *rt = rt_arr[0];
int err;
if (mlxsw_sp->router->aborted)
@@ -5580,19 +5610,21 @@ static int mlxsw_sp_router_fib6_add(struct mlxsw_sp *mlxsw_sp,
*/
fib6_entry = mlxsw_sp_fib6_node_mp_entry_find(fib_node, rt, replace);
if (fib6_entry) {
- err = mlxsw_sp_fib6_entry_nexthop_add(mlxsw_sp, fib6_entry, rt);
+ err = mlxsw_sp_fib6_entry_nexthop_add(mlxsw_sp, fib6_entry,
+ rt_arr, nrt6);
if (err)
goto err_fib6_entry_nexthop_add;
return 0;
}
- fib6_entry = mlxsw_sp_fib6_entry_create(mlxsw_sp, fib_node, rt);
+ fib6_entry = mlxsw_sp_fib6_entry_create(mlxsw_sp, fib_node, rt_arr,
+ nrt6);
if (IS_ERR(fib6_entry)) {
err = PTR_ERR(fib6_entry);
goto err_fib6_entry_create;
}
- err = mlxsw_sp_fib6_node_entry_link(mlxsw_sp, fib6_entry, replace);
+ err = mlxsw_sp_fib6_node_entry_link(mlxsw_sp, fib6_entry, &replace);
if (err)
goto err_fib6_node_entry_link;
@@ -5609,10 +5641,12 @@ err_fib6_entry_nexthop_add:
}
static void mlxsw_sp_router_fib6_del(struct mlxsw_sp *mlxsw_sp,
- struct fib6_info *rt)
+ struct fib6_info **rt_arr,
+ unsigned int nrt6)
{
struct mlxsw_sp_fib6_entry *fib6_entry;
struct mlxsw_sp_fib_node *fib_node;
+ struct fib6_info *rt = rt_arr[0];
if (mlxsw_sp->router->aborted)
return;
@@ -5624,11 +5658,12 @@ static void mlxsw_sp_router_fib6_del(struct mlxsw_sp *mlxsw_sp,
if (WARN_ON(!fib6_entry))
return;
- /* If route is part of a multipath entry, but not the last one
- * removed, then only reduce its nexthop group.
+ /* If not all the nexthops are deleted, then only reduce the nexthop
+ * group.
*/
- if (!list_is_singular(&fib6_entry->rt6_list)) {
- mlxsw_sp_fib6_entry_nexthop_del(mlxsw_sp, fib6_entry, rt);
+ if (nrt6 != fib6_entry->nrt6) {
+ mlxsw_sp_fib6_entry_nexthop_del(mlxsw_sp, fib6_entry, rt_arr,
+ nrt6);
return;
}
@@ -5889,10 +5924,15 @@ static void mlxsw_sp_router_fib_abort(struct mlxsw_sp *mlxsw_sp)
dev_warn(mlxsw_sp->bus_info->dev, "Failed to set abort trap.\n");
}
+struct mlxsw_sp_fib6_event_work {
+ struct fib6_info **rt_arr;
+ unsigned int nrt6;
+};
+
struct mlxsw_sp_fib_event_work {
struct work_struct work;
union {
- struct fib6_entry_notifier_info fen6_info;
+ struct mlxsw_sp_fib6_event_work fib6_work;
struct fib_entry_notifier_info fen_info;
struct fib_rule_notifier_info fr_info;
struct fib_nh_notifier_info fnh_info;
@@ -5903,6 +5943,54 @@ struct mlxsw_sp_fib_event_work {
unsigned long event;
};
+static int
+mlxsw_sp_router_fib6_work_init(struct mlxsw_sp_fib6_event_work *fib6_work,
+ struct fib6_entry_notifier_info *fen6_info)
+{
+ struct fib6_info *rt = fen6_info->rt;
+ struct fib6_info **rt_arr;
+ struct fib6_info *iter;
+ unsigned int nrt6;
+ int i = 0;
+
+ nrt6 = fen6_info->nsiblings + 1;
+
+ rt_arr = kcalloc(nrt6, sizeof(struct fib6_info *), GFP_ATOMIC);
+ if (!rt_arr)
+ return -ENOMEM;
+
+ fib6_work->rt_arr = rt_arr;
+ fib6_work->nrt6 = nrt6;
+
+ rt_arr[0] = rt;
+ fib6_info_hold(rt);
+
+ if (!fen6_info->nsiblings)
+ return 0;
+
+ list_for_each_entry(iter, &rt->fib6_siblings, fib6_siblings) {
+ if (i == fen6_info->nsiblings)
+ break;
+
+ rt_arr[i + 1] = iter;
+ fib6_info_hold(iter);
+ i++;
+ }
+ WARN_ON_ONCE(i != fen6_info->nsiblings);
+
+ return 0;
+}
+
+static void
+mlxsw_sp_router_fib6_work_fini(struct mlxsw_sp_fib6_event_work *fib6_work)
+{
+ int i;
+
+ for (i = 0; i < fib6_work->nrt6; i++)
+ mlxsw_sp_rt6_release(fib6_work->rt_arr[i]);
+ kfree(fib6_work->rt_arr);
+}
+
static void mlxsw_sp_router_fib4_event_work(struct work_struct *work)
{
struct mlxsw_sp_fib_event_work *fib_work =
@@ -5961,18 +6049,21 @@ static void mlxsw_sp_router_fib6_event_work(struct work_struct *work)
switch (fib_work->event) {
case FIB_EVENT_ENTRY_REPLACE: /* fall through */
- case FIB_EVENT_ENTRY_APPEND: /* fall through */
case FIB_EVENT_ENTRY_ADD:
replace = fib_work->event == FIB_EVENT_ENTRY_REPLACE;
err = mlxsw_sp_router_fib6_add(mlxsw_sp,
- fib_work->fen6_info.rt, replace);
+ fib_work->fib6_work.rt_arr,
+ fib_work->fib6_work.nrt6,
+ replace);
if (err)
mlxsw_sp_router_fib_abort(mlxsw_sp);
- mlxsw_sp_rt6_release(fib_work->fen6_info.rt);
+ mlxsw_sp_router_fib6_work_fini(&fib_work->fib6_work);
break;
case FIB_EVENT_ENTRY_DEL:
- mlxsw_sp_router_fib6_del(mlxsw_sp, fib_work->fen6_info.rt);
- mlxsw_sp_rt6_release(fib_work->fen6_info.rt);
+ mlxsw_sp_router_fib6_del(mlxsw_sp,
+ fib_work->fib6_work.rt_arr,
+ fib_work->fib6_work.nrt6);
+ mlxsw_sp_router_fib6_work_fini(&fib_work->fib6_work);
break;
case FIB_EVENT_RULE_ADD:
/* if we get here, a rule was added that we do not support.
@@ -6061,22 +6152,26 @@ static void mlxsw_sp_router_fib4_event(struct mlxsw_sp_fib_event_work *fib_work,
}
}
-static void mlxsw_sp_router_fib6_event(struct mlxsw_sp_fib_event_work *fib_work,
- struct fib_notifier_info *info)
+static int mlxsw_sp_router_fib6_event(struct mlxsw_sp_fib_event_work *fib_work,
+ struct fib_notifier_info *info)
{
struct fib6_entry_notifier_info *fen6_info;
+ int err;
switch (fib_work->event) {
case FIB_EVENT_ENTRY_REPLACE: /* fall through */
- case FIB_EVENT_ENTRY_APPEND: /* fall through */
case FIB_EVENT_ENTRY_ADD: /* fall through */
case FIB_EVENT_ENTRY_DEL:
fen6_info = container_of(info, struct fib6_entry_notifier_info,
info);
- fib_work->fen6_info = *fen6_info;
- fib6_info_hold(fib_work->fen6_info.rt);
+ err = mlxsw_sp_router_fib6_work_init(&fib_work->fib6_work,
+ fen6_info);
+ if (err)
+ return err;
break;
}
+
+ return 0;
}
static void
@@ -6185,6 +6280,20 @@ static int mlxsw_sp_router_fib_event(struct notifier_block *nb,
NL_SET_ERR_MSG_MOD(info->extack, "IPv6 gateway with IPv4 route is not supported");
return notifier_from_errno(-EINVAL);
}
+ if (fen_info->fi->nh) {
+ NL_SET_ERR_MSG_MOD(info->extack, "IPv4 route with nexthop objects is not supported");
+ return notifier_from_errno(-EINVAL);
+ }
+ } else if (info->family == AF_INET6) {
+ struct fib6_entry_notifier_info *fen6_info;
+
+ fen6_info = container_of(info,
+ struct fib6_entry_notifier_info,
+ info);
+ if (fen6_info->rt->nh) {
+ NL_SET_ERR_MSG_MOD(info->extack, "IPv6 route with nexthop objects is not supported");
+ return notifier_from_errno(-EINVAL);
+ }
}
break;
}
@@ -6203,7 +6312,9 @@ static int mlxsw_sp_router_fib_event(struct notifier_block *nb,
break;
case AF_INET6:
INIT_WORK(&fib_work->work, mlxsw_sp_router_fib6_event_work);
- mlxsw_sp_router_fib6_event(fib_work, info);
+ err = mlxsw_sp_router_fib6_event(fib_work, info);
+ if (err)
+ goto err_fib_event;
break;
case RTNL_FAMILY_IP6MR:
case RTNL_FAMILY_IPMR:
@@ -6215,6 +6326,10 @@ static int mlxsw_sp_router_fib_event(struct notifier_block *nb,
mlxsw_core_schedule_work(&fib_work->work);
return NOTIFY_DONE;
+
+err_fib_event:
+ kfree(fib_work);
+ return NOTIFY_BAD;
}
struct mlxsw_sp_rif *
diff --git a/drivers/net/ethernet/mellanox/mlxsw/switchx2.c b/drivers/net/ethernet/mellanox/mlxsw/switchx2.c
index fc4f19167262..bdab96f5bc70 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/switchx2.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/switchx2.c
@@ -299,6 +299,8 @@ static netdev_tx_t mlxsw_sx_port_xmit(struct sk_buff *skb,
u64 len;
int err;
+ memset(skb->cb, 0, sizeof(struct mlxsw_skb_cb));
+
if (mlxsw_core_skb_transmit_busy(mlxsw_sx->core, &tx_info))
return NETDEV_TX_BUSY;
diff --git a/drivers/net/ethernet/mellanox/mlxsw/trap.h b/drivers/net/ethernet/mellanox/mlxsw/trap.h
index 451216dd7f6b..19202bdb5105 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/trap.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/trap.h
@@ -17,6 +17,8 @@ enum {
MLXSW_TRAP_ID_MVRP = 0x15,
MLXSW_TRAP_ID_RPVST = 0x16,
MLXSW_TRAP_ID_DHCP = 0x19,
+ MLXSW_TRAP_ID_PTP0 = 0x28,
+ MLXSW_TRAP_ID_PTP1 = 0x29,
MLXSW_TRAP_ID_IGMP_QUERY = 0x30,
MLXSW_TRAP_ID_IGMP_V1_REPORT = 0x31,
MLXSW_TRAP_ID_IGMP_V2_REPORT = 0x32,
@@ -76,6 +78,10 @@ enum {
enum mlxsw_event_trap_id {
/* Port Up/Down event generated by hardware */
MLXSW_TRAP_ID_PUDE = 0x8,
+ /* PTP Ingress FIFO has a new entry */
+ MLXSW_TRAP_ID_PTP_ING_FIFO = 0x2D,
+ /* PTP Egress FIFO has a new entry */
+ MLXSW_TRAP_ID_PTP_EGR_FIFO = 0x2E,
};
#endif /* _MLXSW_TRAP_H */
diff --git a/drivers/net/ethernet/mscc/Makefile b/drivers/net/ethernet/mscc/Makefile
index cb52a3b128ae..9a36c26095c8 100644
--- a/drivers/net/ethernet/mscc/Makefile
+++ b/drivers/net/ethernet/mscc/Makefile
@@ -1,5 +1,5 @@
# SPDX-License-Identifier: (GPL-2.0 OR MIT)
obj-$(CONFIG_MSCC_OCELOT_SWITCH) += mscc_ocelot_common.o
mscc_ocelot_common-y := ocelot.o ocelot_io.o
-mscc_ocelot_common-y += ocelot_regs.o
+mscc_ocelot_common-y += ocelot_regs.o ocelot_tc.o ocelot_police.o ocelot_ace.o ocelot_flower.o
obj-$(CONFIG_MSCC_OCELOT_SWITCH_OCELOT) += ocelot_board.o
diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c
index 02ad11e0b0d8..b71e4ecbe469 100644
--- a/drivers/net/ethernet/mscc/ocelot.c
+++ b/drivers/net/ethernet/mscc/ocelot.c
@@ -22,6 +22,7 @@
#include <net/switchdev.h>
#include "ocelot.h"
+#include "ocelot_ace.h"
#define TABLE_UPDATE_SLEEP_US 10
#define TABLE_UPDATE_TIMEOUT_US 100000
@@ -130,6 +131,13 @@ static void ocelot_mact_init(struct ocelot *ocelot)
ocelot_write(ocelot, MACACCESS_CMD_INIT, ANA_TABLES_MACACCESS);
}
+static void ocelot_vcap_enable(struct ocelot *ocelot, struct ocelot_port *port)
+{
+ ocelot_write_gix(ocelot, ANA_PORT_VCAP_S2_CFG_S2_ENA |
+ ANA_PORT_VCAP_S2_CFG_S2_IP6_CFG(0xa),
+ ANA_PORT_VCAP_S2_CFG, port->chip_port);
+}
+
static inline u32 ocelot_vlant_read_vlanaccess(struct ocelot *ocelot)
{
return ocelot_read(ocelot, ANA_TABLES_VLANACCESS);
@@ -884,6 +892,13 @@ static int ocelot_set_features(struct net_device *dev,
struct ocelot_port *port = netdev_priv(dev);
netdev_features_t changed = dev->features ^ features;
+ if ((dev->features & NETIF_F_HW_TC) > (features & NETIF_F_HW_TC) &&
+ port->tc.offload_cnt) {
+ netdev_err(dev,
+ "Cannot disable HW TC offload while offloads active\n");
+ return -EBUSY;
+ }
+
if (changed & NETIF_F_HW_VLAN_CTAG_FILTER)
ocelot_vlan_mode(port, features);
@@ -917,6 +932,7 @@ static const struct net_device_ops ocelot_port_netdev_ops = {
.ndo_vlan_rx_kill_vid = ocelot_vlan_rx_kill_vid,
.ndo_set_features = ocelot_set_features,
.ndo_get_port_parent_id = ocelot_get_port_parent_id,
+ .ndo_setup_tc = ocelot_setup_tc,
};
static void ocelot_get_strings(struct net_device *netdev, u32 sset, u8 *data)
@@ -1636,8 +1652,9 @@ int ocelot_probe_port(struct ocelot *ocelot, u8 port,
dev->netdev_ops = &ocelot_port_netdev_ops;
dev->ethtool_ops = &ocelot_ethtool_ops;
- dev->hw_features |= NETIF_F_HW_VLAN_CTAG_FILTER | NETIF_F_RXFCS;
- dev->features |= NETIF_F_HW_VLAN_CTAG_FILTER;
+ dev->hw_features |= NETIF_F_HW_VLAN_CTAG_FILTER | NETIF_F_RXFCS |
+ NETIF_F_HW_TC;
+ dev->features |= NETIF_F_HW_VLAN_CTAG_FILTER | NETIF_F_HW_TC;
memcpy(dev->dev_addr, ocelot->base_mac, ETH_ALEN);
dev->dev_addr[ETH_ALEN - 1] += port;
@@ -1653,6 +1670,9 @@ int ocelot_probe_port(struct ocelot *ocelot, u8 port,
/* Basic L2 initialization */
ocelot_vlan_port_apply(ocelot, ocelot_port);
+ /* Enable vcap lookups */
+ ocelot_vcap_enable(ocelot, ocelot_port);
+
return 0;
err_register_netdev:
@@ -1687,6 +1707,7 @@ int ocelot_init(struct ocelot *ocelot)
ocelot_mact_init(ocelot);
ocelot_vlan_init(ocelot);
+ ocelot_ace_init(ocelot);
for (port = 0; port < ocelot->num_phys_ports; port++) {
/* Clear all counters (5 groups) */
@@ -1799,6 +1820,7 @@ void ocelot_deinit(struct ocelot *ocelot)
{
destroy_workqueue(ocelot->stats_queue);
mutex_destroy(&ocelot->stats_lock);
+ ocelot_ace_deinit();
}
EXPORT_SYMBOL(ocelot_deinit);
diff --git a/drivers/net/ethernet/mscc/ocelot.h b/drivers/net/ethernet/mscc/ocelot.h
index 541fe41e60b0..f7eeb4806897 100644
--- a/drivers/net/ethernet/mscc/ocelot.h
+++ b/drivers/net/ethernet/mscc/ocelot.h
@@ -22,6 +22,7 @@
#include "ocelot_rew.h"
#include "ocelot_sys.h"
#include "ocelot_qs.h"
+#include "ocelot_tc.h"
#define PGID_AGGR 64
#define PGID_SRC 80
@@ -68,6 +69,7 @@ enum ocelot_target {
QSYS,
REW,
SYS,
+ S2,
HSIO,
TARGET_MAX,
};
@@ -334,6 +336,13 @@ enum ocelot_reg {
SYS_CM_DATA_RD,
SYS_CM_OP,
SYS_CM_DATA,
+ S2_CORE_UPDATE_CTRL = S2 << TARGET_OFFSET,
+ S2_CORE_MV_CFG,
+ S2_CACHE_ENTRY_DAT,
+ S2_CACHE_MASK_DAT,
+ S2_CACHE_ACTION_DAT,
+ S2_CACHE_CNT_DAT,
+ S2_CACHE_TG_DAT,
};
enum ocelot_regfield {
@@ -454,6 +463,8 @@ struct ocelot_port {
phy_interface_t phy_mode;
struct phy *serdes;
+
+ struct ocelot_port_tc tc;
};
u32 __ocelot_read_ix(struct ocelot *ocelot, u32 reg, u32 offset);
diff --git a/drivers/net/ethernet/mscc/ocelot_ace.c b/drivers/net/ethernet/mscc/ocelot_ace.c
new file mode 100644
index 000000000000..39aca1ab4687
--- /dev/null
+++ b/drivers/net/ethernet/mscc/ocelot_ace.c
@@ -0,0 +1,782 @@
+// SPDX-License-Identifier: (GPL-2.0 OR MIT)
+/* Microsemi Ocelot Switch driver
+ * Copyright (c) 2019 Microsemi Corporation
+ */
+
+#include <linux/iopoll.h>
+#include <linux/proc_fs.h>
+
+#include "ocelot_ace.h"
+#include "ocelot_vcap.h"
+#include "ocelot_s2.h"
+
+#define OCELOT_POLICER_DISCARD 0x17f
+
+static struct ocelot_acl_block *acl_block;
+
+struct vcap_props {
+ const char *name; /* Symbolic name */
+ u16 tg_width; /* Type-group width (in bits) */
+ u16 sw_count; /* Sub word count */
+ u16 entry_count; /* Entry count */
+ u16 entry_words; /* Number of entry words */
+ u16 entry_width; /* Entry width (in bits) */
+ u16 action_count; /* Action count */
+ u16 action_words; /* Number of action words */
+ u16 action_width; /* Action width (in bits) */
+ u16 action_type_width; /* Action type width (in bits) */
+ struct {
+ u16 width; /* Action type width (in bits) */
+ u16 count; /* Action type sub word count */
+ } action_table[2];
+ u16 counter_words; /* Number of counter words */
+ u16 counter_width; /* Counter width (in bits) */
+};
+
+#define ENTRY_WIDTH 32
+#define BITS_TO_32BIT(x) (1 + (((x) - 1) / ENTRY_WIDTH))
+
+static const struct vcap_props vcap_is2 = {
+ .name = "IS2",
+ .tg_width = 2,
+ .sw_count = 4,
+ .entry_count = VCAP_IS2_CNT,
+ .entry_words = BITS_TO_32BIT(VCAP_IS2_ENTRY_WIDTH),
+ .entry_width = VCAP_IS2_ENTRY_WIDTH,
+ .action_count = (VCAP_IS2_CNT + VCAP_PORT_CNT + 2),
+ .action_words = BITS_TO_32BIT(VCAP_IS2_ACTION_WIDTH),
+ .action_width = (VCAP_IS2_ACTION_WIDTH),
+ .action_type_width = 1,
+ .action_table = {
+ {
+ .width = (IS2_AO_ACL_ID + IS2_AL_ACL_ID),
+ .count = 2
+ },
+ {
+ .width = 6,
+ .count = 4
+ },
+ },
+ .counter_words = BITS_TO_32BIT(4 * ENTRY_WIDTH),
+ .counter_width = ENTRY_WIDTH,
+};
+
+enum vcap_sel {
+ VCAP_SEL_ENTRY = 0x1,
+ VCAP_SEL_ACTION = 0x2,
+ VCAP_SEL_COUNTER = 0x4,
+ VCAP_SEL_ALL = 0x7,
+};
+
+enum vcap_cmd {
+ VCAP_CMD_WRITE = 0, /* Copy from Cache to TCAM */
+ VCAP_CMD_READ = 1, /* Copy from TCAM to Cache */
+ VCAP_CMD_MOVE_UP = 2, /* Move <count> up */
+ VCAP_CMD_MOVE_DOWN = 3, /* Move <count> down */
+ VCAP_CMD_INITIALIZE = 4, /* Write all (from cache) */
+};
+
+#define VCAP_ENTRY_WIDTH 12 /* Max entry width (32bit words) */
+#define VCAP_COUNTER_WIDTH 4 /* Max counter width (32bit words) */
+
+struct vcap_data {
+ u32 entry[VCAP_ENTRY_WIDTH]; /* ENTRY_DAT */
+ u32 mask[VCAP_ENTRY_WIDTH]; /* MASK_DAT */
+ u32 action[VCAP_ENTRY_WIDTH]; /* ACTION_DAT */
+ u32 counter[VCAP_COUNTER_WIDTH]; /* CNT_DAT */
+ u32 tg; /* TG_DAT */
+ u32 type; /* Action type */
+ u32 tg_sw; /* Current type-group */
+ u32 cnt; /* Current counter */
+ u32 key_offset; /* Current entry offset */
+ u32 action_offset; /* Current action offset */
+ u32 counter_offset; /* Current counter offset */
+ u32 tg_value; /* Current type-group value */
+ u32 tg_mask; /* Current type-group mask */
+};
+
+static u32 vcap_s2_read_update_ctrl(struct ocelot *oc)
+{
+ return ocelot_read(oc, S2_CORE_UPDATE_CTRL);
+}
+
+static void vcap_cmd(struct ocelot *oc, u16 ix, int cmd, int sel)
+{
+ u32 value = (S2_CORE_UPDATE_CTRL_UPDATE_CMD(cmd) |
+ S2_CORE_UPDATE_CTRL_UPDATE_ADDR(ix) |
+ S2_CORE_UPDATE_CTRL_UPDATE_SHOT);
+
+ if ((sel & VCAP_SEL_ENTRY) && ix >= vcap_is2.entry_count)
+ return;
+
+ if (!(sel & VCAP_SEL_ENTRY))
+ value |= S2_CORE_UPDATE_CTRL_UPDATE_ENTRY_DIS;
+
+ if (!(sel & VCAP_SEL_ACTION))
+ value |= S2_CORE_UPDATE_CTRL_UPDATE_ACTION_DIS;
+
+ if (!(sel & VCAP_SEL_COUNTER))
+ value |= S2_CORE_UPDATE_CTRL_UPDATE_CNT_DIS;
+
+ ocelot_write(oc, value, S2_CORE_UPDATE_CTRL);
+ readx_poll_timeout(vcap_s2_read_update_ctrl, oc, value,
+ (value & S2_CORE_UPDATE_CTRL_UPDATE_SHOT) == 0,
+ 10, 100000);
+}
+
+/* Convert from 0-based row to VCAP entry row and run command */
+static void vcap_row_cmd(struct ocelot *oc, u32 row, int cmd, int sel)
+{
+ vcap_cmd(oc, vcap_is2.entry_count - row - 1, cmd, sel);
+}
+
+static void vcap_entry2cache(struct ocelot *oc, struct vcap_data *data)
+{
+ u32 i;
+
+ for (i = 0; i < vcap_is2.entry_words; i++) {
+ ocelot_write_rix(oc, data->entry[i], S2_CACHE_ENTRY_DAT, i);
+ ocelot_write_rix(oc, ~data->mask[i], S2_CACHE_MASK_DAT, i);
+ }
+ ocelot_write(oc, data->tg, S2_CACHE_TG_DAT);
+}
+
+static void vcap_cache2entry(struct ocelot *oc, struct vcap_data *data)
+{
+ u32 i;
+
+ for (i = 0; i < vcap_is2.entry_words; i++) {
+ data->entry[i] = ocelot_read_rix(oc, S2_CACHE_ENTRY_DAT, i);
+ // Invert mask
+ data->mask[i] = ~ocelot_read_rix(oc, S2_CACHE_MASK_DAT, i);
+ }
+ data->tg = ocelot_read(oc, S2_CACHE_TG_DAT);
+}
+
+static void vcap_action2cache(struct ocelot *oc, struct vcap_data *data)
+{
+ u32 i, width, mask;
+
+ /* Encode action type */
+ width = vcap_is2.action_type_width;
+ if (width) {
+ mask = GENMASK(width, 0);
+ data->action[0] = ((data->action[0] & ~mask) | data->type);
+ }
+
+ for (i = 0; i < vcap_is2.action_words; i++)
+ ocelot_write_rix(oc, data->action[i], S2_CACHE_ACTION_DAT, i);
+
+ for (i = 0; i < vcap_is2.counter_words; i++)
+ ocelot_write_rix(oc, data->counter[i], S2_CACHE_CNT_DAT, i);
+}
+
+static void vcap_cache2action(struct ocelot *oc, struct vcap_data *data)
+{
+ u32 i, width;
+
+ for (i = 0; i < vcap_is2.action_words; i++)
+ data->action[i] = ocelot_read_rix(oc, S2_CACHE_ACTION_DAT, i);
+
+ for (i = 0; i < vcap_is2.counter_words; i++)
+ data->counter[i] = ocelot_read_rix(oc, S2_CACHE_CNT_DAT, i);
+
+ /* Extract action type */
+ width = vcap_is2.action_type_width;
+ data->type = (width ? (data->action[0] & GENMASK(width, 0)) : 0);
+}
+
+/* Calculate offsets for entry */
+static void is2_data_get(struct vcap_data *data, int ix)
+{
+ u32 i, col, offset, count, cnt, base, width = vcap_is2.tg_width;
+
+ count = (data->tg_sw == VCAP_TG_HALF ? 2 : 4);
+ col = (ix % 2);
+ cnt = (vcap_is2.sw_count / count);
+ base = (vcap_is2.sw_count - col * cnt - cnt);
+ data->tg_value = 0;
+ data->tg_mask = 0;
+ for (i = 0; i < cnt; i++) {
+ offset = ((base + i) * width);
+ data->tg_value |= (data->tg_sw << offset);
+ data->tg_mask |= GENMASK(offset + width - 1, offset);
+ }
+
+ /* Calculate key/action/counter offsets */
+ col = (count - col - 1);
+ data->key_offset = (base * vcap_is2.entry_width) / vcap_is2.sw_count;
+ data->counter_offset = (cnt * col * vcap_is2.counter_width);
+ i = data->type;
+ width = vcap_is2.action_table[i].width;
+ cnt = vcap_is2.action_table[i].count;
+ data->action_offset =
+ (((cnt * col * width) / count) + vcap_is2.action_type_width);
+}
+
+static void vcap_data_set(u32 *data, u32 offset, u32 len, u32 value)
+{
+ u32 i, v, m;
+
+ for (i = 0; i < len; i++, offset++) {
+ v = data[offset / ENTRY_WIDTH];
+ m = (1 << (offset % ENTRY_WIDTH));
+ if (value & (1 << i))
+ v |= m;
+ else
+ v &= ~m;
+ data[offset / ENTRY_WIDTH] = v;
+ }
+}
+
+static u32 vcap_data_get(u32 *data, u32 offset, u32 len)
+{
+ u32 i, v, m, value = 0;
+
+ for (i = 0; i < len; i++, offset++) {
+ v = data[offset / ENTRY_WIDTH];
+ m = (1 << (offset % ENTRY_WIDTH));
+ if (v & m)
+ value |= (1 << i);
+ }
+ return value;
+}
+
+static void vcap_key_set(struct vcap_data *data, u32 offset, u32 width,
+ u32 value, u32 mask)
+{
+ vcap_data_set(data->entry, offset + data->key_offset, width, value);
+ vcap_data_set(data->mask, offset + data->key_offset, width, mask);
+}
+
+static void vcap_key_bytes_set(struct vcap_data *data, u32 offset, u8 *val,
+ u8 *msk, u32 count)
+{
+ u32 i, j, n = 0, value = 0, mask = 0;
+
+ /* Data wider than 32 bits is split up into chunks of at most 32 bits.
+ * The 32 LSB of the data are written to the 32 MSB of the TCAM.
+ */
+ offset += (count * 8);
+ for (i = 0; i < count; i++) {
+ j = (count - i - 1);
+ value += (val[j] << n);
+ mask += (msk[j] << n);
+ n += 8;
+ if (n == ENTRY_WIDTH || (i + 1) == count) {
+ offset -= n;
+ vcap_key_set(data, offset, n, value, mask);
+ n = 0;
+ value = 0;
+ mask = 0;
+ }
+ }
+}
+
+static void vcap_key_l4_port_set(struct vcap_data *data, u32 offset,
+ struct ocelot_vcap_udp_tcp *port)
+{
+ vcap_key_set(data, offset, 16, port->value, port->mask);
+}
+
+static void vcap_key_bit_set(struct vcap_data *data, u32 offset,
+ enum ocelot_vcap_bit val)
+{
+ vcap_key_set(data, offset, 1, val == OCELOT_VCAP_BIT_1 ? 1 : 0,
+ val == OCELOT_VCAP_BIT_ANY ? 0 : 1);
+}
+
+#define VCAP_KEY_SET(fld, val, msk) \
+ vcap_key_set(&data, IS2_HKO_##fld, IS2_HKL_##fld, val, msk)
+#define VCAP_KEY_ANY_SET(fld) \
+ vcap_key_set(&data, IS2_HKO_##fld, IS2_HKL_##fld, 0, 0)
+#define VCAP_KEY_BIT_SET(fld, val) vcap_key_bit_set(&data, IS2_HKO_##fld, val)
+#define VCAP_KEY_BYTES_SET(fld, val, msk) \
+ vcap_key_bytes_set(&data, IS2_HKO_##fld, val, msk, IS2_HKL_##fld / 8)
+
+static void vcap_action_set(struct vcap_data *data, u32 offset, u32 width,
+ u32 value)
+{
+ vcap_data_set(data->action, offset + data->action_offset, width, value);
+}
+
+#define VCAP_ACT_SET(fld, val) \
+ vcap_action_set(data, IS2_AO_##fld, IS2_AL_##fld, val)
+
+static void is2_action_set(struct vcap_data *data,
+ enum ocelot_ace_action action)
+{
+ switch (action) {
+ case OCELOT_ACL_ACTION_DROP:
+ VCAP_ACT_SET(PORT_MASK, 0x0);
+ VCAP_ACT_SET(MASK_MODE, 0x1);
+ VCAP_ACT_SET(POLICE_ENA, 0x1);
+ VCAP_ACT_SET(POLICE_IDX, OCELOT_POLICER_DISCARD);
+ VCAP_ACT_SET(CPU_QU_NUM, 0x0);
+ VCAP_ACT_SET(CPU_COPY_ENA, 0x0);
+ break;
+ case OCELOT_ACL_ACTION_TRAP:
+ VCAP_ACT_SET(PORT_MASK, 0x0);
+ VCAP_ACT_SET(MASK_MODE, 0x0);
+ VCAP_ACT_SET(POLICE_ENA, 0x0);
+ VCAP_ACT_SET(POLICE_IDX, 0x0);
+ VCAP_ACT_SET(CPU_QU_NUM, 0x0);
+ VCAP_ACT_SET(CPU_COPY_ENA, 0x1);
+ break;
+ }
+}
+
+static void is2_entry_set(struct ocelot *ocelot, int ix,
+ struct ocelot_ace_rule *ace)
+{
+ u32 val, msk, type, type_mask = 0xf, i, count;
+ struct ocelot_ace_vlan *tag = &ace->vlan;
+ struct ocelot_vcap_u64 payload;
+ struct vcap_data data;
+ int row = (ix / 2);
+
+ memset(&payload, 0, sizeof(payload));
+ memset(&data, 0, sizeof(data));
+
+ /* Read row */
+ vcap_row_cmd(ocelot, row, VCAP_CMD_READ, VCAP_SEL_ALL);
+ vcap_cache2entry(ocelot, &data);
+ vcap_cache2action(ocelot, &data);
+
+ data.tg_sw = VCAP_TG_HALF;
+ is2_data_get(&data, ix);
+ data.tg = (data.tg & ~data.tg_mask);
+ if (ace->prio != 0)
+ data.tg |= data.tg_value;
+
+ data.type = IS2_ACTION_TYPE_NORMAL;
+
+ VCAP_KEY_ANY_SET(PAG);
+ VCAP_KEY_SET(IGR_PORT_MASK, 0, ~BIT(ace->chip_port));
+ VCAP_KEY_BIT_SET(FIRST, OCELOT_VCAP_BIT_1);
+ VCAP_KEY_BIT_SET(HOST_MATCH, OCELOT_VCAP_BIT_ANY);
+ VCAP_KEY_BIT_SET(L2_MC, ace->dmac_mc);
+ VCAP_KEY_BIT_SET(L2_BC, ace->dmac_bc);
+ VCAP_KEY_BIT_SET(VLAN_TAGGED, tag->tagged);
+ VCAP_KEY_SET(VID, tag->vid.value, tag->vid.mask);
+ VCAP_KEY_SET(PCP, tag->pcp.value[0], tag->pcp.mask[0]);
+ VCAP_KEY_BIT_SET(DEI, tag->dei);
+
+ switch (ace->type) {
+ case OCELOT_ACE_TYPE_ETYPE: {
+ struct ocelot_ace_frame_etype *etype = &ace->frame.etype;
+
+ type = IS2_TYPE_ETYPE;
+ VCAP_KEY_BYTES_SET(L2_DMAC, etype->dmac.value,
+ etype->dmac.mask);
+ VCAP_KEY_BYTES_SET(L2_SMAC, etype->smac.value,
+ etype->smac.mask);
+ VCAP_KEY_BYTES_SET(MAC_ETYPE_ETYPE, etype->etype.value,
+ etype->etype.mask);
+ VCAP_KEY_ANY_SET(MAC_ETYPE_L2_PAYLOAD); // Clear unused bits
+ vcap_key_bytes_set(&data, IS2_HKO_MAC_ETYPE_L2_PAYLOAD,
+ etype->data.value, etype->data.mask, 2);
+ break;
+ }
+ case OCELOT_ACE_TYPE_LLC: {
+ struct ocelot_ace_frame_llc *llc = &ace->frame.llc;
+
+ type = IS2_TYPE_LLC;
+ VCAP_KEY_BYTES_SET(L2_DMAC, llc->dmac.value, llc->dmac.mask);
+ VCAP_KEY_BYTES_SET(L2_SMAC, llc->smac.value, llc->smac.mask);
+ for (i = 0; i < 4; i++) {
+ payload.value[i] = llc->llc.value[i];
+ payload.mask[i] = llc->llc.mask[i];
+ }
+ VCAP_KEY_BYTES_SET(MAC_LLC_L2_LLC, payload.value, payload.mask);
+ break;
+ }
+ case OCELOT_ACE_TYPE_SNAP: {
+ struct ocelot_ace_frame_snap *snap = &ace->frame.snap;
+
+ type = IS2_TYPE_SNAP;
+ VCAP_KEY_BYTES_SET(L2_DMAC, snap->dmac.value, snap->dmac.mask);
+ VCAP_KEY_BYTES_SET(L2_SMAC, snap->smac.value, snap->smac.mask);
+ VCAP_KEY_BYTES_SET(MAC_SNAP_L2_SNAP,
+ ace->frame.snap.snap.value,
+ ace->frame.snap.snap.mask);
+ break;
+ }
+ case OCELOT_ACE_TYPE_ARP: {
+ struct ocelot_ace_frame_arp *arp = &ace->frame.arp;
+
+ type = IS2_TYPE_ARP;
+ VCAP_KEY_BYTES_SET(MAC_ARP_L2_SMAC, arp->smac.value,
+ arp->smac.mask);
+ VCAP_KEY_BIT_SET(MAC_ARP_ARP_ADDR_SPACE_OK, arp->ethernet);
+ VCAP_KEY_BIT_SET(MAC_ARP_ARP_PROTO_SPACE_OK, arp->ip);
+ VCAP_KEY_BIT_SET(MAC_ARP_ARP_LEN_OK, arp->length);
+ VCAP_KEY_BIT_SET(MAC_ARP_ARP_TGT_MATCH, arp->dmac_match);
+ VCAP_KEY_BIT_SET(MAC_ARP_ARP_SENDER_MATCH, arp->smac_match);
+ VCAP_KEY_BIT_SET(MAC_ARP_ARP_OPCODE_UNKNOWN, arp->unknown);
+
+ /* OPCODE is inverse, bit 0 is reply flag, bit 1 is RARP flag */
+ val = ((arp->req == OCELOT_VCAP_BIT_0 ? 1 : 0) |
+ (arp->arp == OCELOT_VCAP_BIT_0 ? 2 : 0));
+ msk = ((arp->req == OCELOT_VCAP_BIT_ANY ? 0 : 1) |
+ (arp->arp == OCELOT_VCAP_BIT_ANY ? 0 : 2));
+ VCAP_KEY_SET(MAC_ARP_ARP_OPCODE, val, msk);
+ vcap_key_bytes_set(&data, IS2_HKO_MAC_ARP_L3_IP4_DIP,
+ arp->dip.value.addr, arp->dip.mask.addr, 4);
+ vcap_key_bytes_set(&data, IS2_HKO_MAC_ARP_L3_IP4_SIP,
+ arp->sip.value.addr, arp->sip.mask.addr, 4);
+ VCAP_KEY_ANY_SET(MAC_ARP_DIP_EQ_SIP);
+ break;
+ }
+ case OCELOT_ACE_TYPE_IPV4:
+ case OCELOT_ACE_TYPE_IPV6: {
+ enum ocelot_vcap_bit sip_eq_dip, sport_eq_dport, seq_zero, tcp;
+ enum ocelot_vcap_bit ttl, fragment, options, tcp_ack, tcp_urg;
+ enum ocelot_vcap_bit tcp_fin, tcp_syn, tcp_rst, tcp_psh;
+ struct ocelot_ace_frame_ipv4 *ipv4 = NULL;
+ struct ocelot_ace_frame_ipv6 *ipv6 = NULL;
+ struct ocelot_vcap_udp_tcp *sport, *dport;
+ struct ocelot_vcap_ipv4 sip, dip;
+ struct ocelot_vcap_u8 proto, ds;
+ struct ocelot_vcap_u48 *ip_data;
+
+ if (ace->type == OCELOT_ACE_TYPE_IPV4) {
+ ipv4 = &ace->frame.ipv4;
+ ttl = ipv4->ttl;
+ fragment = ipv4->fragment;
+ options = ipv4->options;
+ proto = ipv4->proto;
+ ds = ipv4->ds;
+ ip_data = &ipv4->data;
+ sip = ipv4->sip;
+ dip = ipv4->dip;
+ sport = &ipv4->sport;
+ dport = &ipv4->dport;
+ tcp_fin = ipv4->tcp_fin;
+ tcp_syn = ipv4->tcp_syn;
+ tcp_rst = ipv4->tcp_rst;
+ tcp_psh = ipv4->tcp_psh;
+ tcp_ack = ipv4->tcp_ack;
+ tcp_urg = ipv4->tcp_urg;
+ sip_eq_dip = ipv4->sip_eq_dip;
+ sport_eq_dport = ipv4->sport_eq_dport;
+ seq_zero = ipv4->seq_zero;
+ } else {
+ ipv6 = &ace->frame.ipv6;
+ ttl = ipv6->ttl;
+ fragment = OCELOT_VCAP_BIT_ANY;
+ options = OCELOT_VCAP_BIT_ANY;
+ proto = ipv6->proto;
+ ds = ipv6->ds;
+ ip_data = &ipv6->data;
+ for (i = 0; i < 8; i++) {
+ val = ipv6->sip.value[i + 8];
+ msk = ipv6->sip.mask[i + 8];
+ if (i < 4) {
+ dip.value.addr[i] = val;
+ dip.mask.addr[i] = msk;
+ } else {
+ sip.value.addr[i - 4] = val;
+ sip.mask.addr[i - 4] = msk;
+ }
+ }
+ sport = &ipv6->sport;
+ dport = &ipv6->dport;
+ tcp_fin = ipv6->tcp_fin;
+ tcp_syn = ipv6->tcp_syn;
+ tcp_rst = ipv6->tcp_rst;
+ tcp_psh = ipv6->tcp_psh;
+ tcp_ack = ipv6->tcp_ack;
+ tcp_urg = ipv6->tcp_urg;
+ sip_eq_dip = ipv6->sip_eq_dip;
+ sport_eq_dport = ipv6->sport_eq_dport;
+ seq_zero = ipv6->seq_zero;
+ }
+
+ VCAP_KEY_BIT_SET(IP4,
+ ipv4 ? OCELOT_VCAP_BIT_1 : OCELOT_VCAP_BIT_0);
+ VCAP_KEY_BIT_SET(L3_FRAGMENT, fragment);
+ VCAP_KEY_ANY_SET(L3_FRAG_OFS_GT0);
+ VCAP_KEY_BIT_SET(L3_OPTIONS, options);
+ VCAP_KEY_BIT_SET(L3_TTL_GT0, ttl);
+ VCAP_KEY_BYTES_SET(L3_TOS, ds.value, ds.mask);
+ vcap_key_bytes_set(&data, IS2_HKO_L3_IP4_DIP, dip.value.addr,
+ dip.mask.addr, 4);
+ vcap_key_bytes_set(&data, IS2_HKO_L3_IP4_SIP, sip.value.addr,
+ sip.mask.addr, 4);
+ VCAP_KEY_BIT_SET(DIP_EQ_SIP, sip_eq_dip);
+ val = proto.value[0];
+ msk = proto.mask[0];
+ type = IS2_TYPE_IP_UDP_TCP;
+ if (msk == 0xff && (val == 6 || val == 17)) {
+ /* UDP/TCP protocol match */
+ tcp = (val == 6 ?
+ OCELOT_VCAP_BIT_1 : OCELOT_VCAP_BIT_0);
+ VCAP_KEY_BIT_SET(IP4_TCP_UDP_TCP, tcp);
+ vcap_key_l4_port_set(&data,
+ IS2_HKO_IP4_TCP_UDP_L4_DPORT,
+ dport);
+ vcap_key_l4_port_set(&data,
+ IS2_HKO_IP4_TCP_UDP_L4_SPORT,
+ sport);
+ VCAP_KEY_ANY_SET(IP4_TCP_UDP_L4_RNG);
+ VCAP_KEY_BIT_SET(IP4_TCP_UDP_SPORT_EQ_DPORT,
+ sport_eq_dport);
+ VCAP_KEY_BIT_SET(IP4_TCP_UDP_SEQUENCE_EQ0, seq_zero);
+ VCAP_KEY_BIT_SET(IP4_TCP_UDP_L4_FIN, tcp_fin);
+ VCAP_KEY_BIT_SET(IP4_TCP_UDP_L4_SYN, tcp_syn);
+ VCAP_KEY_BIT_SET(IP4_TCP_UDP_L4_RST, tcp_rst);
+ VCAP_KEY_BIT_SET(IP4_TCP_UDP_L4_PSH, tcp_psh);
+ VCAP_KEY_BIT_SET(IP4_TCP_UDP_L4_ACK, tcp_ack);
+ VCAP_KEY_BIT_SET(IP4_TCP_UDP_L4_URG, tcp_urg);
+ VCAP_KEY_ANY_SET(IP4_TCP_UDP_L4_1588_DOM);
+ VCAP_KEY_ANY_SET(IP4_TCP_UDP_L4_1588_VER);
+ } else {
+ if (msk == 0) {
+ /* Any IP protocol match */
+ type_mask = IS2_TYPE_MASK_IP_ANY;
+ } else {
+ /* Non-UDP/TCP protocol match */
+ type = IS2_TYPE_IP_OTHER;
+ for (i = 0; i < 6; i++) {
+ payload.value[i] = ip_data->value[i];
+ payload.mask[i] = ip_data->mask[i];
+ }
+ }
+ VCAP_KEY_BYTES_SET(IP4_OTHER_L3_PROTO, proto.value,
+ proto.mask);
+ VCAP_KEY_BYTES_SET(IP4_OTHER_L3_PAYLOAD, payload.value,
+ payload.mask);
+ }
+ break;
+ }
+ case OCELOT_ACE_TYPE_ANY:
+ default:
+ type = 0;
+ type_mask = 0;
+ count = (vcap_is2.entry_width / 2);
+ for (i = (IS2_HKO_PCP + IS2_HKL_PCP); i < count;
+ i += ENTRY_WIDTH) {
+ /* Clear entry data */
+ vcap_key_set(&data, i, min(32u, count - i), 0, 0);
+ }
+ break;
+ }
+
+ VCAP_KEY_SET(TYPE, type, type_mask);
+ is2_action_set(&data, ace->action);
+ vcap_data_set(data.counter, data.counter_offset, vcap_is2.counter_width,
+ ace->stats.pkts);
+
+ /* Write row */
+ vcap_entry2cache(ocelot, &data);
+ vcap_action2cache(ocelot, &data);
+ vcap_row_cmd(ocelot, row, VCAP_CMD_WRITE, VCAP_SEL_ALL);
+}
+
+static void is2_entry_get(struct ocelot_ace_rule *rule, int ix)
+{
+ struct ocelot *op = rule->port->ocelot;
+ struct vcap_data data;
+ int row = (ix / 2);
+ u32 cnt;
+
+ vcap_row_cmd(op, row, VCAP_CMD_READ, VCAP_SEL_COUNTER);
+ vcap_cache2action(op, &data);
+ data.tg_sw = VCAP_TG_HALF;
+ is2_data_get(&data, ix);
+ cnt = vcap_data_get(data.counter, data.counter_offset,
+ vcap_is2.counter_width);
+
+ rule->stats.pkts = cnt;
+}
+
+static void ocelot_ace_rule_add(struct ocelot_acl_block *block,
+ struct ocelot_ace_rule *rule)
+{
+ struct ocelot_ace_rule *tmp;
+ struct list_head *pos, *n;
+
+ block->count++;
+
+ if (list_empty(&block->rules)) {
+ list_add(&rule->list, &block->rules);
+ return;
+ }
+
+ list_for_each_safe(pos, n, &block->rules) {
+ tmp = list_entry(pos, struct ocelot_ace_rule, list);
+ if (rule->prio < tmp->prio)
+ break;
+ }
+ list_add(&rule->list, pos->prev);
+}
+
+static int ocelot_ace_rule_get_index_id(struct ocelot_acl_block *block,
+ struct ocelot_ace_rule *rule)
+{
+ struct ocelot_ace_rule *tmp;
+ int index = -1;
+
+ list_for_each_entry(tmp, &block->rules, list) {
+ ++index;
+ if (rule->id == tmp->id)
+ break;
+ }
+ return index;
+}
+
+static struct ocelot_ace_rule*
+ocelot_ace_rule_get_rule_index(struct ocelot_acl_block *block, int index)
+{
+ struct ocelot_ace_rule *tmp;
+ int i = 0;
+
+ list_for_each_entry(tmp, &block->rules, list) {
+ if (i == index)
+ return tmp;
+ ++i;
+ }
+
+ return NULL;
+}
+
+int ocelot_ace_rule_offload_add(struct ocelot_ace_rule *rule)
+{
+ struct ocelot_ace_rule *ace;
+ int i, index;
+
+ /* Add rule to the linked list */
+ ocelot_ace_rule_add(acl_block, rule);
+
+ /* Get the index of the inserted rule */
+ index = ocelot_ace_rule_get_index_id(acl_block, rule);
+
+ /* Move the existing rules down to make room for the new rule */
+ for (i = acl_block->count - 1; i > index; i--) {
+ ace = ocelot_ace_rule_get_rule_index(acl_block, i);
+ is2_entry_set(rule->port->ocelot, i, ace);
+ }
+
+ /* Now insert the new rule */
+ is2_entry_set(rule->port->ocelot, index, rule);
+ return 0;
+}
+
+static void ocelot_ace_rule_del(struct ocelot_acl_block *block,
+ struct ocelot_ace_rule *rule)
+{
+ struct ocelot_ace_rule *tmp;
+ struct list_head *pos, *q;
+
+ list_for_each_safe(pos, q, &block->rules) {
+ tmp = list_entry(pos, struct ocelot_ace_rule, list);
+ if (tmp->id == rule->id) {
+ list_del(pos);
+ kfree(tmp);
+ }
+ }
+
+ block->count--;
+}
+
+int ocelot_ace_rule_offload_del(struct ocelot_ace_rule *rule)
+{
+ struct ocelot_ace_rule del_ace;
+ struct ocelot_ace_rule *ace;
+ int i, index;
+
+ memset(&del_ace, 0, sizeof(del_ace));
+
+ /* Get the index of the rule */
+ index = ocelot_ace_rule_get_index_id(acl_block, rule);
+
+ /* Delete rule */
+ ocelot_ace_rule_del(acl_block, rule);
+
+ /* Move up all the rules that followed the deleted rule */
+ for (i = index; i < acl_block->count; i++) {
+ ace = ocelot_ace_rule_get_rule_index(acl_block, i);
+ is2_entry_set(rule->port->ocelot, i, ace);
+ }
+
+ /* Now delete the last rule, because it is duplicated */
+ is2_entry_set(rule->port->ocelot, acl_block->count, &del_ace);
+
+ return 0;
+}
+
+int ocelot_ace_rule_stats_update(struct ocelot_ace_rule *rule)
+{
+ struct ocelot_ace_rule *tmp;
+ int index;
+
+ index = ocelot_ace_rule_get_index_id(acl_block, rule);
+ is2_entry_get(rule, index);
+
+ /* After we get the result we need to clear the counters */
+ tmp = ocelot_ace_rule_get_rule_index(acl_block, index);
+ tmp->stats.pkts = 0;
+ is2_entry_set(rule->port->ocelot, index, tmp);
+
+ return 0;
+}
+
+static struct ocelot_acl_block *ocelot_acl_block_create(struct ocelot *ocelot)
+{
+ struct ocelot_acl_block *block;
+
+ block = kzalloc(sizeof(*block), GFP_KERNEL);
+ if (!block)
+ return NULL;
+
+ INIT_LIST_HEAD(&block->rules);
+ block->count = 0;
+ block->ocelot = ocelot;
+
+ return block;
+}
+
+static void ocelot_acl_block_destroy(struct ocelot_acl_block *block)
+{
+ kfree(block);
+}
+
+int ocelot_ace_init(struct ocelot *ocelot)
+{
+ struct vcap_data data;
+
+ memset(&data, 0, sizeof(data));
+ vcap_entry2cache(ocelot, &data);
+ ocelot_write(ocelot, vcap_is2.entry_count, S2_CORE_MV_CFG);
+ vcap_cmd(ocelot, 0, VCAP_CMD_INITIALIZE, VCAP_SEL_ENTRY);
+
+ vcap_action2cache(ocelot, &data);
+ ocelot_write(ocelot, vcap_is2.action_count, S2_CORE_MV_CFG);
+ vcap_cmd(ocelot, 0, VCAP_CMD_INITIALIZE,
+ VCAP_SEL_ACTION | VCAP_SEL_COUNTER);
+
+ /* Create a policer that will drop the frames for the cpu.
+ * This policer will be used as the action in the ACL rules that drop
+ * frames.
+ */
+ ocelot_write_gix(ocelot, 0x299, ANA_POL_MODE_CFG,
+ OCELOT_POLICER_DISCARD);
+ ocelot_write_gix(ocelot, 0x1, ANA_POL_PIR_CFG,
+ OCELOT_POLICER_DISCARD);
+ ocelot_write_gix(ocelot, 0x3fffff, ANA_POL_PIR_STATE,
+ OCELOT_POLICER_DISCARD);
+ ocelot_write_gix(ocelot, 0x0, ANA_POL_CIR_CFG,
+ OCELOT_POLICER_DISCARD);
+ ocelot_write_gix(ocelot, 0x3fffff, ANA_POL_CIR_STATE,
+ OCELOT_POLICER_DISCARD);
+
+ acl_block = ocelot_acl_block_create(ocelot);
+
+ return 0;
+}
+
+void ocelot_ace_deinit(void)
+{
+ ocelot_acl_block_destroy(acl_block);
+}
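A minimal, self-contained sketch (not part of the patch) of the bit-packing scheme used by vcap_data_set()/vcap_data_get() above: an n-bit field is written bit by bit into an array of 32-bit cache words starting at an arbitrary bit offset, so a field may straddle a word boundary. The demo_data_set() name and the example values are illustrative only.

#include <stdio.h>
#include <stdint.h>

#define ENTRY_WIDTH 32

/* Same loop as vcap_data_set(): write the low `len` bits of `value` into
 * the 32-bit word array `data`, starting at bit `offset`.
 */
static void demo_data_set(uint32_t *data, uint32_t offset, uint32_t len,
			  uint32_t value)
{
	uint32_t i, v, m;

	for (i = 0; i < len; i++, offset++) {
		v = data[offset / ENTRY_WIDTH];
		m = 1u << (offset % ENTRY_WIDTH);
		if (value & (1u << i))
			v |= m;
		else
			v &= ~m;
		data[offset / ENTRY_WIDTH] = v;
	}
}

int main(void)
{
	uint32_t words[2] = { 0, 0 };

	/* A 12-bit field (a VID, say) written at bit offset 28 straddles
	 * the word boundary: value bits 0-3 land in bits 28-31 of word 0,
	 * value bits 4-11 land in bits 0-7 of word 1.
	 */
	demo_data_set(words, 28, 12, 0xabc);
	printf("word0=0x%08x word1=0x%08x\n", words[0], words[1]);
	/* Prints: word0=0xc0000000 word1=0x000000ab */
	return 0;
}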
diff --git a/drivers/net/ethernet/mscc/ocelot_ace.h b/drivers/net/ethernet/mscc/ocelot_ace.h
new file mode 100644
index 000000000000..e98944c87259
--- /dev/null
+++ b/drivers/net/ethernet/mscc/ocelot_ace.h
@@ -0,0 +1,232 @@
+/* SPDX-License-Identifier: (GPL-2.0 OR MIT) */
+/* Microsemi Ocelot Switch driver
+ * Copyright (c) 2019 Microsemi Corporation
+ */
+
+#ifndef _MSCC_OCELOT_ACE_H_
+#define _MSCC_OCELOT_ACE_H_
+
+#include "ocelot.h"
+#include <net/sch_generic.h>
+#include <net/pkt_cls.h>
+
+struct ocelot_ipv4 {
+ u8 addr[4];
+};
+
+enum ocelot_vcap_bit {
+ OCELOT_VCAP_BIT_ANY,
+ OCELOT_VCAP_BIT_0,
+ OCELOT_VCAP_BIT_1
+};
+
+struct ocelot_vcap_u8 {
+ u8 value[1];
+ u8 mask[1];
+};
+
+struct ocelot_vcap_u16 {
+ u8 value[2];
+ u8 mask[2];
+};
+
+struct ocelot_vcap_u24 {
+ u8 value[3];
+ u8 mask[3];
+};
+
+struct ocelot_vcap_u32 {
+ u8 value[4];
+ u8 mask[4];
+};
+
+struct ocelot_vcap_u40 {
+ u8 value[5];
+ u8 mask[5];
+};
+
+struct ocelot_vcap_u48 {
+ u8 value[6];
+ u8 mask[6];
+};
+
+struct ocelot_vcap_u64 {
+ u8 value[8];
+ u8 mask[8];
+};
+
+struct ocelot_vcap_u128 {
+ u8 value[16];
+ u8 mask[16];
+};
+
+struct ocelot_vcap_vid {
+ u16 value;
+ u16 mask;
+};
+
+struct ocelot_vcap_ipv4 {
+ struct ocelot_ipv4 value;
+ struct ocelot_ipv4 mask;
+};
+
+struct ocelot_vcap_udp_tcp {
+ u16 value;
+ u16 mask;
+};
+
+enum ocelot_ace_type {
+ OCELOT_ACE_TYPE_ANY,
+ OCELOT_ACE_TYPE_ETYPE,
+ OCELOT_ACE_TYPE_LLC,
+ OCELOT_ACE_TYPE_SNAP,
+ OCELOT_ACE_TYPE_ARP,
+ OCELOT_ACE_TYPE_IPV4,
+ OCELOT_ACE_TYPE_IPV6
+};
+
+struct ocelot_ace_vlan {
+ struct ocelot_vcap_vid vid; /* VLAN ID (12 bit) */
+ struct ocelot_vcap_u8 pcp; /* PCP (3 bit) */
+ enum ocelot_vcap_bit dei; /* DEI */
+ enum ocelot_vcap_bit tagged; /* Tagged/untagged frame */
+};
+
+struct ocelot_ace_frame_etype {
+ struct ocelot_vcap_u48 dmac;
+ struct ocelot_vcap_u48 smac;
+ struct ocelot_vcap_u16 etype;
+ struct ocelot_vcap_u16 data; /* MAC data */
+};
+
+struct ocelot_ace_frame_llc {
+ struct ocelot_vcap_u48 dmac;
+ struct ocelot_vcap_u48 smac;
+
+ /* LLC header: DSAP at byte 0, SSAP at byte 1, Control at byte 2 */
+ struct ocelot_vcap_u32 llc;
+};
+
+struct ocelot_ace_frame_snap {
+ struct ocelot_vcap_u48 dmac;
+ struct ocelot_vcap_u48 smac;
+
+ /* SNAP header: Organization Code at byte 0, Type at byte 3 */
+ struct ocelot_vcap_u40 snap;
+};
+
+struct ocelot_ace_frame_arp {
+ struct ocelot_vcap_u48 smac;
+ enum ocelot_vcap_bit arp; /* Opcode ARP/RARP */
+ enum ocelot_vcap_bit req; /* Opcode request/reply */
+ enum ocelot_vcap_bit unknown; /* Opcode unknown */
+ enum ocelot_vcap_bit smac_match; /* Sender MAC matches SMAC */
+ enum ocelot_vcap_bit dmac_match; /* Target MAC matches DMAC */
+
+ /**< Protocol addr. length 4, hardware length 6 */
+ enum ocelot_vcap_bit length;
+
+ enum ocelot_vcap_bit ip; /* Protocol address type IP */
+ enum ocelot_vcap_bit ethernet; /* Hardware address type Ethernet */
+ struct ocelot_vcap_ipv4 sip; /* Sender IP address */
+ struct ocelot_vcap_ipv4 dip; /* Target IP address */
+};
+
+struct ocelot_ace_frame_ipv4 {
+ enum ocelot_vcap_bit ttl; /* TTL zero */
+ enum ocelot_vcap_bit fragment; /* Fragment */
+ enum ocelot_vcap_bit options; /* Header options */
+ struct ocelot_vcap_u8 ds;
+ struct ocelot_vcap_u8 proto; /* Protocol */
+ struct ocelot_vcap_ipv4 sip; /* Source IP address */
+ struct ocelot_vcap_ipv4 dip; /* Destination IP address */
+ struct ocelot_vcap_u48 data; /* Not UDP/TCP: IP data */
+ struct ocelot_vcap_udp_tcp sport; /* UDP/TCP: Source port */
+ struct ocelot_vcap_udp_tcp dport; /* UDP/TCP: Destination port */
+ enum ocelot_vcap_bit tcp_fin;
+ enum ocelot_vcap_bit tcp_syn;
+ enum ocelot_vcap_bit tcp_rst;
+ enum ocelot_vcap_bit tcp_psh;
+ enum ocelot_vcap_bit tcp_ack;
+ enum ocelot_vcap_bit tcp_urg;
+ enum ocelot_vcap_bit sip_eq_dip; /* SIP equals DIP */
+ enum ocelot_vcap_bit sport_eq_dport; /* SPORT equals DPORT */
+ enum ocelot_vcap_bit seq_zero; /* TCP sequence number is zero */
+};
+
+struct ocelot_ace_frame_ipv6 {
+ struct ocelot_vcap_u8 proto; /* IPv6 protocol */
+ struct ocelot_vcap_u128 sip; /* IPv6 source (byte 0-7 ignored) */
+ enum ocelot_vcap_bit ttl; /* TTL zero */
+ struct ocelot_vcap_u8 ds;
+ struct ocelot_vcap_u48 data; /* Not UDP/TCP: IP data */
+ struct ocelot_vcap_udp_tcp sport;
+ struct ocelot_vcap_udp_tcp dport;
+ enum ocelot_vcap_bit tcp_fin;
+ enum ocelot_vcap_bit tcp_syn;
+ enum ocelot_vcap_bit tcp_rst;
+ enum ocelot_vcap_bit tcp_psh;
+ enum ocelot_vcap_bit tcp_ack;
+ enum ocelot_vcap_bit tcp_urg;
+ enum ocelot_vcap_bit sip_eq_dip; /* SIP equals DIP */
+ enum ocelot_vcap_bit sport_eq_dport; /* SPORT equals DPORT */
+ enum ocelot_vcap_bit seq_zero; /* TCP sequence number is zero */
+};
+
+enum ocelot_ace_action {
+ OCELOT_ACL_ACTION_DROP,
+ OCELOT_ACL_ACTION_TRAP,
+};
+
+struct ocelot_ace_stats {
+ u64 bytes;
+ u64 pkts;
+ u64 used;
+};
+
+struct ocelot_ace_rule {
+ struct list_head list;
+ struct ocelot_port *port;
+
+ u16 prio;
+ u32 id;
+
+ enum ocelot_ace_action action;
+ struct ocelot_ace_stats stats;
+ int chip_port;
+
+ enum ocelot_vcap_bit dmac_mc;
+ enum ocelot_vcap_bit dmac_bc;
+ struct ocelot_ace_vlan vlan;
+
+ enum ocelot_ace_type type;
+ union {
+ /* OCELOT_ACE_TYPE_ANY: No specific fields */
+ struct ocelot_ace_frame_etype etype;
+ struct ocelot_ace_frame_llc llc;
+ struct ocelot_ace_frame_snap snap;
+ struct ocelot_ace_frame_arp arp;
+ struct ocelot_ace_frame_ipv4 ipv4;
+ struct ocelot_ace_frame_ipv6 ipv6;
+ } frame;
+};
+
+struct ocelot_acl_block {
+ struct list_head rules;
+ struct ocelot *ocelot;
+ int count;
+};
+
+int ocelot_ace_rule_offload_add(struct ocelot_ace_rule *rule);
+int ocelot_ace_rule_offload_del(struct ocelot_ace_rule *rule);
+int ocelot_ace_rule_stats_update(struct ocelot_ace_rule *rule);
+
+int ocelot_ace_init(struct ocelot *ocelot);
+void ocelot_ace_deinit(void);
+
+int ocelot_setup_tc_block_flower_bind(struct ocelot_port *port,
+ struct flow_block_offload *f);
+void ocelot_setup_tc_block_flower_unbind(struct ocelot_port *port,
+ struct flow_block_offload *f);
+
+#endif /* _MSCC_OCELOT_ACE_H_ */
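A sketch of how the structures declared above are meant to be filled in, mirroring what ocelot_flower.c does for an L2 match with a drop action. Driver context is assumed; example_drop_from_smac(), the hard-coded MAC and the prio/cookie values are illustrative, while the struct fields and ocelot_ace_rule_offload_add() are the ones declared in this header.

#include <linux/etherdevice.h>
#include <linux/slab.h>

#include "ocelot_ace.h"

static int example_drop_from_smac(struct ocelot_port *port, u32 cookie)
{
	static const u8 smac[ETH_ALEN] = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 };
	struct ocelot_ace_rule *rule;

	rule = kzalloc(sizeof(*rule), GFP_KERNEL);
	if (!rule)
		return -ENOMEM;

	rule->port = port;
	rule->chip_port = port->chip_port;
	rule->prio = 1;		/* non-zero, or is2_entry_set() leaves the entry disabled */
	rule->id = cookie;	/* normally the tc filter cookie */
	rule->action = OCELOT_ACL_ACTION_DROP;

	/* Match any EtherType frame whose source MAC equals smac */
	rule->type = OCELOT_ACE_TYPE_ETYPE;
	ether_addr_copy(rule->frame.etype.smac.value, smac);
	eth_broadcast_addr(rule->frame.etype.smac.mask);	/* care about all 48 bits */

	return ocelot_ace_rule_offload_add(rule);
}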
diff --git a/drivers/net/ethernet/mscc/ocelot_board.c b/drivers/net/ethernet/mscc/ocelot_board.c
index e7f90101d2e0..58bde1a9eacb 100644
--- a/drivers/net/ethernet/mscc/ocelot_board.c
+++ b/drivers/net/ethernet/mscc/ocelot_board.c
@@ -188,6 +188,7 @@ static int mscc_ocelot_probe(struct platform_device *pdev)
{ QSYS, "qsys" },
{ ANA, "ana" },
{ QS, "qs" },
+ { S2, "s2" },
};
if (!np && !pdev->dev.platform_data)
diff --git a/drivers/net/ethernet/mscc/ocelot_flower.c b/drivers/net/ethernet/mscc/ocelot_flower.c
new file mode 100644
index 000000000000..7aaddc09c185
--- /dev/null
+++ b/drivers/net/ethernet/mscc/ocelot_flower.c
@@ -0,0 +1,363 @@
+// SPDX-License-Identifier: (GPL-2.0 OR MIT)
+/* Microsemi Ocelot Switch driver
+ * Copyright (c) 2019 Microsemi Corporation
+ */
+
+#include <net/pkt_cls.h>
+#include <net/tc_act/tc_gact.h>
+
+#include "ocelot_ace.h"
+
+struct ocelot_port_block {
+ struct ocelot_acl_block *block;
+ struct ocelot_port *port;
+};
+
+static u16 get_prio(u32 prio)
+{
+ /* prio starts from 0x1000 while the ids start from 0 */
+ return prio >> 16;
+}
+
+static int ocelot_flower_parse_action(struct flow_cls_offload *f,
+ struct ocelot_ace_rule *rule)
+{
+ const struct flow_action_entry *a;
+ int i;
+
+ if (f->rule->action.num_entries != 1)
+ return -EOPNOTSUPP;
+
+ flow_action_for_each(i, a, &f->rule->action) {
+ switch (a->id) {
+ case FLOW_ACTION_DROP:
+ rule->action = OCELOT_ACL_ACTION_DROP;
+ break;
+ case FLOW_ACTION_TRAP:
+ rule->action = OCELOT_ACL_ACTION_TRAP;
+ break;
+ default:
+ return -EOPNOTSUPP;
+ }
+ }
+
+ return 0;
+}
+
+static int ocelot_flower_parse(struct flow_cls_offload *f,
+ struct ocelot_ace_rule *ocelot_rule)
+{
+ struct flow_rule *rule = flow_cls_offload_flow_rule(f);
+ struct flow_dissector *dissector = rule->match.dissector;
+
+ if (dissector->used_keys &
+ ~(BIT(FLOW_DISSECTOR_KEY_CONTROL) |
+ BIT(FLOW_DISSECTOR_KEY_BASIC) |
+ BIT(FLOW_DISSECTOR_KEY_PORTS) |
+ BIT(FLOW_DISSECTOR_KEY_VLAN) |
+ BIT(FLOW_DISSECTOR_KEY_IPV4_ADDRS) |
+ BIT(FLOW_DISSECTOR_KEY_IPV6_ADDRS) |
+ BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS))) {
+ return -EOPNOTSUPP;
+ }
+
+ if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_CONTROL)) {
+ struct flow_match_control match;
+
+ flow_rule_match_control(rule, &match);
+ }
+
+ if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ETH_ADDRS)) {
+ struct flow_match_eth_addrs match;
+ u16 proto = ntohs(f->common.protocol);
+
+ /* The hw supports MAC matches only for the MAC_ETYPE key,
+ * therefore if other matches (port, tcp flags, etc.) are added
+ * then just bail out
+ */
+ if ((dissector->used_keys &
+ (BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS) |
+ BIT(FLOW_DISSECTOR_KEY_BASIC) |
+ BIT(FLOW_DISSECTOR_KEY_CONTROL))) !=
+ (BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS) |
+ BIT(FLOW_DISSECTOR_KEY_BASIC) |
+ BIT(FLOW_DISSECTOR_KEY_CONTROL)))
+ return -EOPNOTSUPP;
+
+ if (proto == ETH_P_IP ||
+ proto == ETH_P_IPV6 ||
+ proto == ETH_P_ARP)
+ return -EOPNOTSUPP;
+
+ flow_rule_match_eth_addrs(rule, &match);
+ ocelot_rule->type = OCELOT_ACE_TYPE_ETYPE;
+ ether_addr_copy(ocelot_rule->frame.etype.dmac.value,
+ match.key->dst);
+ ether_addr_copy(ocelot_rule->frame.etype.smac.value,
+ match.key->src);
+ ether_addr_copy(ocelot_rule->frame.etype.dmac.mask,
+ match.mask->dst);
+ ether_addr_copy(ocelot_rule->frame.etype.smac.mask,
+ match.mask->src);
+ goto finished_key_parsing;
+ }
+
+ if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_BASIC)) {
+ struct flow_match_basic match;
+
+ flow_rule_match_basic(rule, &match);
+ if (ntohs(match.key->n_proto) == ETH_P_IP) {
+ ocelot_rule->type = OCELOT_ACE_TYPE_IPV4;
+ ocelot_rule->frame.ipv4.proto.value[0] =
+ match.key->ip_proto;
+ ocelot_rule->frame.ipv4.proto.mask[0] =
+ match.mask->ip_proto;
+ }
+ if (ntohs(match.key->n_proto) == ETH_P_IPV6) {
+ ocelot_rule->type = OCELOT_ACE_TYPE_IPV6;
+ ocelot_rule->frame.ipv6.proto.value[0] =
+ match.key->ip_proto;
+ ocelot_rule->frame.ipv6.proto.mask[0] =
+ match.mask->ip_proto;
+ }
+ }
+
+ if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_IPV4_ADDRS) &&
+ ntohs(f->common.protocol) == ETH_P_IP) {
+ struct flow_match_ipv4_addrs match;
+ u8 *tmp;
+
+ flow_rule_match_ipv4_addrs(rule, &match);
+ tmp = &ocelot_rule->frame.ipv4.sip.value.addr[0];
+ memcpy(tmp, &match.key->src, 4);
+
+ tmp = &ocelot_rule->frame.ipv4.sip.mask.addr[0];
+ memcpy(tmp, &match.mask->src, 4);
+
+ tmp = &ocelot_rule->frame.ipv4.dip.value.addr[0];
+ memcpy(tmp, &match.key->dst, 4);
+
+ tmp = &ocelot_rule->frame.ipv4.dip.mask.addr[0];
+ memcpy(tmp, &match.mask->dst, 4);
+ }
+
+ if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_IPV6_ADDRS) &&
+ ntohs(f->common.protocol) == ETH_P_IPV6) {
+ return -EOPNOTSUPP;
+ }
+
+ if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_PORTS)) {
+ struct flow_match_ports match;
+
+ flow_rule_match_ports(rule, &match);
+ ocelot_rule->frame.ipv4.sport.value = ntohs(match.key->src);
+ ocelot_rule->frame.ipv4.sport.mask = ntohs(match.mask->src);
+ ocelot_rule->frame.ipv4.dport.value = ntohs(match.key->dst);
+ ocelot_rule->frame.ipv4.dport.mask = ntohs(match.mask->dst);
+ }
+
+ if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_VLAN)) {
+ struct flow_match_vlan match;
+
+ flow_rule_match_vlan(rule, &match);
+ ocelot_rule->type = OCELOT_ACE_TYPE_ANY;
+ ocelot_rule->vlan.vid.value = match.key->vlan_id;
+ ocelot_rule->vlan.vid.mask = match.mask->vlan_id;
+ ocelot_rule->vlan.pcp.value[0] = match.key->vlan_priority;
+ ocelot_rule->vlan.pcp.mask[0] = match.mask->vlan_priority;
+ }
+
+finished_key_parsing:
+ ocelot_rule->prio = get_prio(f->common.prio);
+ ocelot_rule->id = f->cookie;
+ return ocelot_flower_parse_action(f, ocelot_rule);
+}
+
+static
+struct ocelot_ace_rule *ocelot_ace_rule_create(struct flow_cls_offload *f,
+ struct ocelot_port_block *block)
+{
+ struct ocelot_ace_rule *rule;
+
+ rule = kzalloc(sizeof(*rule), GFP_KERNEL);
+ if (!rule)
+ return NULL;
+
+ rule->port = block->port;
+ rule->chip_port = block->port->chip_port;
+ return rule;
+}
+
+static int ocelot_flower_replace(struct flow_cls_offload *f,
+ struct ocelot_port_block *port_block)
+{
+ struct ocelot_ace_rule *rule;
+ int ret;
+
+ rule = ocelot_ace_rule_create(f, port_block);
+ if (!rule)
+ return -ENOMEM;
+
+ ret = ocelot_flower_parse(f, rule);
+ if (ret) {
+ kfree(rule);
+ return ret;
+ }
+
+ ret = ocelot_ace_rule_offload_add(rule);
+ if (ret)
+ return ret;
+
+ port_block->port->tc.offload_cnt++;
+ return 0;
+}
+
+static int ocelot_flower_destroy(struct flow_cls_offload *f,
+ struct ocelot_port_block *port_block)
+{
+ struct ocelot_ace_rule rule;
+ int ret;
+
+ rule.prio = get_prio(f->common.prio);
+ rule.port = port_block->port;
+ rule.id = f->cookie;
+
+ ret = ocelot_ace_rule_offload_del(&rule);
+ if (ret)
+ return ret;
+
+ port_block->port->tc.offload_cnt--;
+ return 0;
+}
+
+static int ocelot_flower_stats_update(struct flow_cls_offload *f,
+ struct ocelot_port_block *port_block)
+{
+ struct ocelot_ace_rule rule;
+ int ret;
+
+ rule.prio = get_prio(f->common.prio);
+ rule.port = port_block->port;
+ rule.id = f->cookie;
+ ret = ocelot_ace_rule_stats_update(&rule);
+ if (ret)
+ return ret;
+
+ flow_stats_update(&f->stats, 0x0, rule.stats.pkts, 0x0);
+ return 0;
+}
+
+static int ocelot_setup_tc_cls_flower(struct flow_cls_offload *f,
+ struct ocelot_port_block *port_block)
+{
+ switch (f->command) {
+ case FLOW_CLS_REPLACE:
+ return ocelot_flower_replace(f, port_block);
+ case FLOW_CLS_DESTROY:
+ return ocelot_flower_destroy(f, port_block);
+ case FLOW_CLS_STATS:
+ return ocelot_flower_stats_update(f, port_block);
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
+static int ocelot_setup_tc_block_cb_flower(enum tc_setup_type type,
+ void *type_data, void *cb_priv)
+{
+ struct ocelot_port_block *port_block = cb_priv;
+
+ if (!tc_cls_can_offload_and_chain0(port_block->port->dev, type_data))
+ return -EOPNOTSUPP;
+
+ switch (type) {
+ case TC_SETUP_CLSFLOWER:
+ return ocelot_setup_tc_cls_flower(type_data, cb_priv);
+ case TC_SETUP_CLSMATCHALL:
+ return 0;
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
+static struct ocelot_port_block*
+ocelot_port_block_create(struct ocelot_port *port)
+{
+ struct ocelot_port_block *port_block;
+
+ port_block = kzalloc(sizeof(*port_block), GFP_KERNEL);
+ if (!port_block)
+ return NULL;
+
+ port_block->port = port;
+
+ return port_block;
+}
+
+static void ocelot_port_block_destroy(struct ocelot_port_block *block)
+{
+ kfree(block);
+}
+
+static void ocelot_tc_block_unbind(void *cb_priv)
+{
+ struct ocelot_port_block *port_block = cb_priv;
+
+ ocelot_port_block_destroy(port_block);
+}
+
+int ocelot_setup_tc_block_flower_bind(struct ocelot_port *port,
+ struct flow_block_offload *f)
+{
+ struct ocelot_port_block *port_block;
+ struct flow_block_cb *block_cb;
+ int ret;
+
+ if (f->binder_type == FLOW_BLOCK_BINDER_TYPE_CLSACT_EGRESS)
+ return -EOPNOTSUPP;
+
+ block_cb = flow_block_cb_lookup(f, ocelot_setup_tc_block_cb_flower,
+ port);
+ if (!block_cb) {
+ port_block = ocelot_port_block_create(port);
+ if (!port_block)
+ return -ENOMEM;
+
+ block_cb = flow_block_cb_alloc(f->net,
+ ocelot_setup_tc_block_cb_flower,
+ port, port_block,
+ ocelot_tc_block_unbind);
+ if (IS_ERR(block_cb)) {
+ ret = PTR_ERR(block_cb);
+ goto err_cb_register;
+ }
+ flow_block_cb_add(block_cb, f);
+ list_add_tail(&block_cb->driver_list, f->driver_block_list);
+ } else {
+ port_block = flow_block_cb_priv(block_cb);
+ }
+
+ flow_block_cb_incref(block_cb);
+ return 0;
+
+err_cb_register:
+ ocelot_port_block_destroy(port_block);
+
+ return ret;
+}
+
+void ocelot_setup_tc_block_flower_unbind(struct ocelot_port *port,
+ struct flow_block_offload *f)
+{
+ struct flow_block_cb *block_cb;
+
+ block_cb = flow_block_cb_lookup(f, ocelot_setup_tc_block_cb_flower,
+ port);
+ if (!block_cb)
+ return;
+
+ if (!flow_block_cb_decref(block_cb)) {
+ flow_block_cb_remove(block_cb, f);
+ list_del(&block_cb->driver_list);
+ }
+}
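A standalone note (not part of the patch) on get_prio() above: f->common.prio carries the tc filter preference in its upper 16 bits, which is why the helper shifts right by 16. Assuming that layout, a filter installed with "pref 10" arrives as 10 << 16; the demo_get_prio() name and the example value below are illustrative only.

#include <stdio.h>
#include <stdint.h>

static uint16_t demo_get_prio(uint32_t prio)
{
	/* Same operation as get_prio() in ocelot_flower.c */
	return prio >> 16;
}

int main(void)
{
	uint32_t common_prio = 10u << 16;	/* illustrative "pref 10" filter */

	printf("pref = %u\n", demo_get_prio(common_prio));
	/* Prints: pref = 10 */
	return 0;
}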
diff --git a/drivers/net/ethernet/mscc/ocelot_police.c b/drivers/net/ethernet/mscc/ocelot_police.c
new file mode 100644
index 000000000000..701e82dd749a
--- /dev/null
+++ b/drivers/net/ethernet/mscc/ocelot_police.c
@@ -0,0 +1,227 @@
+// SPDX-License-Identifier: (GPL-2.0 OR MIT)
+/* Microsemi Ocelot Switch driver
+ *
+ * Copyright (c) 2019 Microsemi Corporation
+ */
+
+#include "ocelot_police.h"
+
+enum mscc_qos_rate_mode {
+ MSCC_QOS_RATE_MODE_DISABLED, /* Policer/shaper disabled */
+ MSCC_QOS_RATE_MODE_LINE, /* Measure line rate in kbps incl. IPG */
+ MSCC_QOS_RATE_MODE_DATA, /* Measure data rate in kbps excl. IPG */
+ MSCC_QOS_RATE_MODE_FRAME, /* Measure frame rate in fps */
+ __MSCC_QOS_RATE_MODE_END,
+ NUM_MSCC_QOS_RATE_MODE = __MSCC_QOS_RATE_MODE_END,
+ MSCC_QOS_RATE_MODE_MAX = __MSCC_QOS_RATE_MODE_END - 1,
+};
+
+/* Types for ANA:POL[0-192]:POL_MODE_CFG.FRM_MODE */
+#define POL_MODE_LINERATE 0 /* Incl IPG. Unit: 33 1/3 kbps, 4096 bytes */
+#define POL_MODE_DATARATE 1 /* Excl IPG. Unit: 33 1/3 kbps, 4096 bytes */
+#define POL_MODE_FRMRATE_HI 2 /* Unit: 33 1/3 fps, 32.8 frames */
+#define POL_MODE_FRMRATE_LO 3 /* Unit: 1/3 fps, 0.3 frames */
+
+/* Policer indexes */
+#define POL_IX_PORT 0 /* 0-11 : Port policers */
+#define POL_IX_QUEUE 32 /* 32-127 : Queue policers */
+
+/* Default policer order */
+#define POL_ORDER 0x1d3 /* Ocelot policer order: Serial (QoS -> Port -> VCAP) */
+
+struct qos_policer_conf {
+ enum mscc_qos_rate_mode mode;
+ bool dlb; /* Enable DLB (dual leaky bucket) mode */
+ bool cf; /* Coupling flag (ignored in SLB mode) */
+ u32 cir; /* CIR in kbps/fps (ignored in SLB mode) */
+ u32 cbs; /* CBS in bytes/frames (ignored in SLB mode) */
+ u32 pir; /* PIR in kbps/fps */
+ u32 pbs; /* PBS in bytes/frames */
+ u8 ipg; /* Size of IPG when MSCC_QOS_RATE_MODE_LINE is chosen */
+};
+
+static int qos_policer_conf_set(struct ocelot_port *port, u32 pol_ix,
+ struct qos_policer_conf *conf)
+{
+ u32 cf = 0, cir_ena = 0, frm_mode = POL_MODE_LINERATE;
+ u32 cir = 0, cbs = 0, pir = 0, pbs = 0;
+ bool cir_discard = 0, pir_discard = 0;
+ struct ocelot *ocelot = port->ocelot;
+ u32 pbs_max = 0, cbs_max = 0;
+ u8 ipg = 20;
+ u32 value;
+
+ pir = conf->pir;
+ pbs = conf->pbs;
+
+ switch (conf->mode) {
+ case MSCC_QOS_RATE_MODE_LINE:
+ case MSCC_QOS_RATE_MODE_DATA:
+ if (conf->mode == MSCC_QOS_RATE_MODE_LINE) {
+ frm_mode = POL_MODE_LINERATE;
+ ipg = min_t(u8, GENMASK(4, 0), conf->ipg);
+ } else {
+ frm_mode = POL_MODE_DATARATE;
+ }
+ if (conf->dlb) {
+ cir_ena = 1;
+ cir = conf->cir;
+ cbs = conf->cbs;
+ if (cir == 0 && cbs == 0) {
+ /* Discard cir frames */
+ cir_discard = 1;
+ } else {
+ cir = DIV_ROUND_UP(cir, 100);
+ cir *= 3; /* 33 1/3 kbps */
+ cbs = DIV_ROUND_UP(cbs, 4096);
+ cbs = (cbs ? cbs : 1); /* No zero burst size */
+ cbs_max = 60; /* Limit burst size */
+ cf = conf->cf;
+ if (cf)
+ pir += conf->cir;
+ }
+ }
+ if (pir == 0 && pbs == 0) {
+ /* Discard PIR frames */
+ pir_discard = 1;
+ } else {
+ pir = DIV_ROUND_UP(pir, 100);
+ pir *= 3; /* 33 1/3 kbps */
+ pbs = DIV_ROUND_UP(pbs, 4096);
+ pbs = (pbs ? pbs : 1); /* No zero burst size */
+ pbs_max = 60; /* Limit burst size */
+ }
+ break;
+ case MSCC_QOS_RATE_MODE_FRAME:
+ if (pir >= 100) {
+ frm_mode = POL_MODE_FRMRATE_HI;
+ pir = DIV_ROUND_UP(pir, 100);
+ pir *= 3; /* 33 1/3 fps */
+ pbs = (pbs * 10) / 328; /* 32.8 frames */
+ pbs = (pbs ? pbs : 1); /* No zero burst size */
+ pbs_max = GENMASK(6, 0); /* Limit burst size */
+ } else {
+ frm_mode = POL_MODE_FRMRATE_LO;
+ if (pir == 0 && pbs == 0) {
+ /* Discard all frames */
+ pir_discard = 1;
+ cir_discard = 1;
+ } else {
+ pir *= 3; /* 1/3 fps */
+ pbs = (pbs * 10) / 3; /* 0.3 frames */
+ pbs = (pbs ? pbs : 1); /* No zero burst size */
+ pbs_max = 61; /* Limit burst size */
+ }
+ }
+ break;
+ default: /* MSCC_QOS_RATE_MODE_DISABLED */
+ /* Disable policer using maximum rate and zero burst */
+ pir = GENMASK(15, 0);
+ pbs = 0;
+ break;
+ }
+
+ /* Check limits */
+ if (pir > GENMASK(15, 0)) {
+ netdev_err(port->dev, "Invalid pir\n");
+ return -EINVAL;
+ }
+
+ if (cir > GENMASK(15, 0)) {
+ netdev_err(port->dev, "Invalid cir\n");
+ return -EINVAL;
+ }
+
+ if (pbs > pbs_max) {
+ netdev_err(port->dev, "Invalid pbs\n");
+ return -EINVAL;
+ }
+
+ if (cbs > cbs_max) {
+ netdev_err(port->dev, "Invalid cbs\n");
+ return -EINVAL;
+ }
+
+ value = (ANA_POL_MODE_CFG_IPG_SIZE(ipg) |
+ ANA_POL_MODE_CFG_FRM_MODE(frm_mode) |
+ (cf ? ANA_POL_MODE_CFG_DLB_COUPLED : 0) |
+ (cir_ena ? ANA_POL_MODE_CFG_CIR_ENA : 0) |
+ ANA_POL_MODE_CFG_OVERSHOOT_ENA);
+
+ ocelot_write_gix(ocelot, value, ANA_POL_MODE_CFG, pol_ix);
+
+ ocelot_write_gix(ocelot,
+ ANA_POL_PIR_CFG_PIR_RATE(pir) |
+ ANA_POL_PIR_CFG_PIR_BURST(pbs),
+ ANA_POL_PIR_CFG, pol_ix);
+
+ ocelot_write_gix(ocelot,
+ (pir_discard ? GENMASK(22, 0) : 0),
+ ANA_POL_PIR_STATE, pol_ix);
+
+ ocelot_write_gix(ocelot,
+ ANA_POL_CIR_CFG_CIR_RATE(cir) |
+ ANA_POL_CIR_CFG_CIR_BURST(cbs),
+ ANA_POL_CIR_CFG, pol_ix);
+
+ ocelot_write_gix(ocelot,
+ (cir_discard ? GENMASK(22, 0) : 0),
+ ANA_POL_CIR_STATE, pol_ix);
+
+ return 0;
+}
+
+int ocelot_port_policer_add(struct ocelot_port *port,
+ struct ocelot_policer *pol)
+{
+ struct ocelot *ocelot = port->ocelot;
+ struct qos_policer_conf pp = { 0 };
+ int err;
+
+ if (!pol)
+ return -EINVAL;
+
+ pp.mode = MSCC_QOS_RATE_MODE_DATA;
+ pp.pir = pol->rate;
+ pp.pbs = pol->burst;
+
+ netdev_dbg(port->dev,
+ "%s: port %u pir %u kbps, pbs %u bytes\n",
+ __func__, port->chip_port, pp.pir, pp.pbs);
+
+ err = qos_policer_conf_set(port, POL_IX_PORT + port->chip_port, &pp);
+ if (err)
+ return err;
+
+ ocelot_rmw_gix(ocelot,
+ ANA_PORT_POL_CFG_PORT_POL_ENA |
+ ANA_PORT_POL_CFG_POL_ORDER(POL_ORDER),
+ ANA_PORT_POL_CFG_PORT_POL_ENA |
+ ANA_PORT_POL_CFG_POL_ORDER_M,
+ ANA_PORT_POL_CFG, port->chip_port);
+
+ return 0;
+}
+
+int ocelot_port_policer_del(struct ocelot_port *port)
+{
+ struct ocelot *ocelot = port->ocelot;
+ struct qos_policer_conf pp = { 0 };
+ int err;
+
+ netdev_dbg(port->dev, "%s: port %u\n", __func__, port->chip_port);
+
+ pp.mode = MSCC_QOS_RATE_MODE_DISABLED;
+
+ err = qos_policer_conf_set(port, POL_IX_PORT + port->chip_port, &pp);
+ if (err)
+ return err;
+
+ ocelot_rmw_gix(ocelot,
+ ANA_PORT_POL_CFG_POL_ORDER(POL_ORDER),
+ ANA_PORT_POL_CFG_PORT_POL_ENA |
+ ANA_PORT_POL_CFG_POL_ORDER_M,
+ ANA_PORT_POL_CFG, port->chip_port);
+
+ return 0;
+}
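A standalone arithmetic check (not part of the patch) of the unit conversions performed by qos_policer_conf_set() above: in line/data-rate mode the PIR/CIR registers are programmed in steps of 33 1/3 kbps (hence DIV_ROUND_UP(rate, 100) * 3) and the burst registers in steps of 4096 bytes, as the POL_MODE_* comments state. The example values are arbitrary.

#include <stdio.h>
#include <stdint.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

int main(void)
{
	uint32_t pir_kbps = 100000;	/* requested peak rate: 100 Mbit/s */
	uint32_t pbs_bytes = 8192;	/* requested peak burst: 8 KiB */

	uint32_t pir = DIV_ROUND_UP(pir_kbps, 100) * 3;	/* 3000 x 33 1/3 kbps */
	uint32_t pbs = DIV_ROUND_UP(pbs_bytes, 4096);	/* 2 x 4096 bytes */

	printf("PIR_RATE=%u PIR_BURST=%u\n", pir, pbs);
	/* Prints: PIR_RATE=3000 PIR_BURST=2 */
	return 0;
}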
diff --git a/drivers/net/ethernet/mscc/ocelot_police.h b/drivers/net/ethernet/mscc/ocelot_police.h
new file mode 100644
index 000000000000..d1137f79efda
--- /dev/null
+++ b/drivers/net/ethernet/mscc/ocelot_police.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: (GPL-2.0 OR MIT) */
+/* Microsemi Ocelot Switch driver
+ *
+ * Copyright (c) 2019 Microsemi Corporation
+ */
+
+#ifndef _MSCC_OCELOT_POLICE_H_
+#define _MSCC_OCELOT_POLICE_H_
+
+#include "ocelot.h"
+
+struct ocelot_policer {
+ u32 rate; /* kilobits per second */
+ u32 burst; /* bytes */
+};
+
+int ocelot_port_policer_add(struct ocelot_port *port,
+ struct ocelot_policer *pol);
+
+int ocelot_port_policer_del(struct ocelot_port *port);
+
+#endif /* _MSCC_OCELOT_POLICE_H_ */
diff --git a/drivers/net/ethernet/mscc/ocelot_regs.c b/drivers/net/ethernet/mscc/ocelot_regs.c
index 9271af18b93b..6c387f994ec5 100644
--- a/drivers/net/ethernet/mscc/ocelot_regs.c
+++ b/drivers/net/ethernet/mscc/ocelot_regs.c
@@ -224,12 +224,23 @@ static const u32 ocelot_sys_regmap[] = {
REG(SYS_PTP_CFG, 0x0006c4),
};
+static const u32 ocelot_s2_regmap[] = {
+ REG(S2_CORE_UPDATE_CTRL, 0x000000),
+ REG(S2_CORE_MV_CFG, 0x000004),
+ REG(S2_CACHE_ENTRY_DAT, 0x000008),
+ REG(S2_CACHE_MASK_DAT, 0x000108),
+ REG(S2_CACHE_ACTION_DAT, 0x000208),
+ REG(S2_CACHE_CNT_DAT, 0x000308),
+ REG(S2_CACHE_TG_DAT, 0x000388),
+};
+
static const u32 *ocelot_regmap[] = {
[ANA] = ocelot_ana_regmap,
[QS] = ocelot_qs_regmap,
[QSYS] = ocelot_qsys_regmap,
[REW] = ocelot_rew_regmap,
[SYS] = ocelot_sys_regmap,
+ [S2] = ocelot_s2_regmap,
};
static const struct reg_field ocelot_regfields[] = {
diff --git a/drivers/net/ethernet/mscc/ocelot_s2.h b/drivers/net/ethernet/mscc/ocelot_s2.h
new file mode 100644
index 000000000000..80107bec2e45
--- /dev/null
+++ b/drivers/net/ethernet/mscc/ocelot_s2.h
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: (GPL-2.0 OR MIT) */
+/* Microsemi Ocelot Switch driver
+ * Copyright (c) 2018 Microsemi Corporation
+ */
+
+#ifndef _OCELOT_S2_CORE_H_
+#define _OCELOT_S2_CORE_H_
+
+#define S2_CORE_UPDATE_CTRL_UPDATE_CMD(x) (((x) << 22) & GENMASK(24, 22))
+#define S2_CORE_UPDATE_CTRL_UPDATE_CMD_M GENMASK(24, 22)
+#define S2_CORE_UPDATE_CTRL_UPDATE_CMD_X(x) (((x) & GENMASK(24, 22)) >> 22)
+#define S2_CORE_UPDATE_CTRL_UPDATE_ENTRY_DIS BIT(21)
+#define S2_CORE_UPDATE_CTRL_UPDATE_ACTION_DIS BIT(20)
+#define S2_CORE_UPDATE_CTRL_UPDATE_CNT_DIS BIT(19)
+#define S2_CORE_UPDATE_CTRL_UPDATE_ADDR(x) (((x) << 3) & GENMASK(18, 3))
+#define S2_CORE_UPDATE_CTRL_UPDATE_ADDR_M GENMASK(18, 3)
+#define S2_CORE_UPDATE_CTRL_UPDATE_ADDR_X(x) (((x) & GENMASK(18, 3)) >> 3)
+#define S2_CORE_UPDATE_CTRL_UPDATE_SHOT BIT(2)
+#define S2_CORE_UPDATE_CTRL_CLEAR_CACHE BIT(1)
+#define S2_CORE_UPDATE_CTRL_MV_TRAFFIC_IGN BIT(0)
+
+#define S2_CORE_MV_CFG_MV_NUM_POS(x) (((x) << 16) & GENMASK(31, 16))
+#define S2_CORE_MV_CFG_MV_NUM_POS_M GENMASK(31, 16)
+#define S2_CORE_MV_CFG_MV_NUM_POS_X(x) (((x) & GENMASK(31, 16)) >> 16)
+#define S2_CORE_MV_CFG_MV_SIZE(x) ((x) & GENMASK(15, 0))
+#define S2_CORE_MV_CFG_MV_SIZE_M GENMASK(15, 0)
+
+#define S2_CACHE_ENTRY_DAT_RSZ 0x4
+
+#define S2_CACHE_MASK_DAT_RSZ 0x4
+
+#define S2_CACHE_ACTION_DAT_RSZ 0x4
+
+#define S2_CACHE_CNT_DAT_RSZ 0x4
+
+#define S2_STICKY_VCAP_ROW_DELETED_STICKY BIT(0)
+
+#define S2_BIST_CTRL_TCAM_BIST BIT(1)
+#define S2_BIST_CTRL_TCAM_INIT BIT(0)
+
+#define S2_BIST_CFG_TCAM_BIST_SOE_ENA BIT(8)
+#define S2_BIST_CFG_TCAM_HCG_DIS BIT(7)
+#define S2_BIST_CFG_TCAM_CG_DIS BIT(6)
+#define S2_BIST_CFG_TCAM_BIAS(x) ((x) & GENMASK(5, 0))
+#define S2_BIST_CFG_TCAM_BIAS_M GENMASK(5, 0)
+
+#define S2_BIST_STAT_BIST_RT_ERR BIT(15)
+#define S2_BIST_STAT_BIST_PENC_ERR BIT(14)
+#define S2_BIST_STAT_BIST_COMP_ERR BIT(13)
+#define S2_BIST_STAT_BIST_ADDR_ERR BIT(12)
+#define S2_BIST_STAT_BIST_BL1E_ERR BIT(11)
+#define S2_BIST_STAT_BIST_BL1_ERR BIT(10)
+#define S2_BIST_STAT_BIST_BL0E_ERR BIT(9)
+#define S2_BIST_STAT_BIST_BL0_ERR BIT(8)
+#define S2_BIST_STAT_BIST_PH1_ERR BIT(7)
+#define S2_BIST_STAT_BIST_PH0_ERR BIT(6)
+#define S2_BIST_STAT_BIST_PV1_ERR BIT(5)
+#define S2_BIST_STAT_BIST_PV0_ERR BIT(4)
+#define S2_BIST_STAT_BIST_RUN BIT(3)
+#define S2_BIST_STAT_BIST_ERR BIT(2)
+#define S2_BIST_STAT_BIST_BUSY BIT(1)
+#define S2_BIST_STAT_TCAM_RDY BIT(0)
+
+#endif /* _OCELOT_S2_CORE_H_ */
diff --git a/drivers/net/ethernet/mscc/ocelot_tc.c b/drivers/net/ethernet/mscc/ocelot_tc.c
new file mode 100644
index 000000000000..9e6464ffae5d
--- /dev/null
+++ b/drivers/net/ethernet/mscc/ocelot_tc.c
@@ -0,0 +1,197 @@
+// SPDX-License-Identifier: (GPL-2.0 OR MIT)
+/* Microsemi Ocelot Switch TC driver
+ *
+ * Copyright (c) 2019 Microsemi Corporation
+ */
+
+#include "ocelot_tc.h"
+#include "ocelot_police.h"
+#include "ocelot_ace.h"
+#include <net/pkt_cls.h>
+
+static int ocelot_setup_tc_cls_matchall(struct ocelot_port *port,
+ struct tc_cls_matchall_offload *f,
+ bool ingress)
+{
+ struct netlink_ext_ack *extack = f->common.extack;
+ struct ocelot_policer pol = { 0 };
+ struct flow_action_entry *action;
+ int err;
+
+ netdev_dbg(port->dev, "%s: port %u command %d cookie %lu\n",
+ __func__, port->chip_port, f->command, f->cookie);
+
+ if (!ingress) {
+ NL_SET_ERR_MSG_MOD(extack, "Only ingress is supported");
+ return -EOPNOTSUPP;
+ }
+
+ switch (f->command) {
+ case TC_CLSMATCHALL_REPLACE:
+ if (!flow_offload_has_one_action(&f->rule->action)) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "Only one action is supported");
+ return -EOPNOTSUPP;
+ }
+
+ if (port->tc.block_shared) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "Rate limit is not supported on shared blocks");
+ return -EOPNOTSUPP;
+ }
+
+ action = &f->rule->action.entries[0];
+
+ if (action->id != FLOW_ACTION_POLICE) {
+ NL_SET_ERR_MSG_MOD(extack, "Unsupported action");
+ return -EOPNOTSUPP;
+ }
+
+ if (port->tc.police_id && port->tc.police_id != f->cookie) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "Only one policer per port is supported\n");
+ return -EEXIST;
+ }
+
+ pol.rate = (u32)div_u64(action->police.rate_bytes_ps, 1000) * 8;
+ pol.burst = (u32)div_u64(action->police.rate_bytes_ps *
+ PSCHED_NS2TICKS(action->police.burst),
+ PSCHED_TICKS_PER_SEC);
+
+ err = ocelot_port_policer_add(port, &pol);
+ if (err) {
+ NL_SET_ERR_MSG_MOD(extack, "Could not add policer\n");
+ return err;
+ }
+
+ port->tc.police_id = f->cookie;
+ port->tc.offload_cnt++;
+ return 0;
+ case TC_CLSMATCHALL_DESTROY:
+ if (port->tc.police_id != f->cookie)
+ return -ENOENT;
+
+ err = ocelot_port_policer_del(port);
+ if (err) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "Could not delete policer\n");
+ return err;
+ }
+ port->tc.police_id = 0;
+ port->tc.offload_cnt--;
+ return 0;
+ case TC_CLSMATCHALL_STATS: /* fall through */
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
+static int ocelot_setup_tc_block_cb(enum tc_setup_type type,
+ void *type_data,
+ void *cb_priv, bool ingress)
+{
+ struct ocelot_port *port = cb_priv;
+
+ if (!tc_cls_can_offload_and_chain0(port->dev, type_data))
+ return -EOPNOTSUPP;
+
+ switch (type) {
+ case TC_SETUP_CLSMATCHALL:
+ netdev_dbg(port->dev, "tc_block_cb: TC_SETUP_CLSMATCHALL %s\n",
+ ingress ? "ingress" : "egress");
+
+ return ocelot_setup_tc_cls_matchall(port, type_data, ingress);
+ case TC_SETUP_CLSFLOWER:
+ return 0;
+ default:
+ netdev_dbg(port->dev, "tc_block_cb: type %d %s\n",
+ type,
+ ingress ? "ingress" : "egress");
+
+ return -EOPNOTSUPP;
+ }
+}
+
+static int ocelot_setup_tc_block_cb_ig(enum tc_setup_type type,
+ void *type_data,
+ void *cb_priv)
+{
+ return ocelot_setup_tc_block_cb(type, type_data,
+ cb_priv, true);
+}
+
+static int ocelot_setup_tc_block_cb_eg(enum tc_setup_type type,
+ void *type_data,
+ void *cb_priv)
+{
+ return ocelot_setup_tc_block_cb(type, type_data,
+ cb_priv, false);
+}
+
+static LIST_HEAD(ocelot_block_cb_list);
+
+static int ocelot_setup_tc_block(struct ocelot_port *port,
+ struct flow_block_offload *f)
+{
+ struct flow_block_cb *block_cb;
+ tc_setup_cb_t *cb;
+ int err;
+
+ netdev_dbg(port->dev, "tc_block command %d, binder_type %d\n",
+ f->command, f->binder_type);
+
+ if (f->binder_type == FLOW_BLOCK_BINDER_TYPE_CLSACT_INGRESS) {
+ cb = ocelot_setup_tc_block_cb_ig;
+ port->tc.block_shared = f->block_shared;
+ } else if (f->binder_type == FLOW_BLOCK_BINDER_TYPE_CLSACT_EGRESS) {
+ cb = ocelot_setup_tc_block_cb_eg;
+ } else {
+ return -EOPNOTSUPP;
+ }
+
+ f->driver_block_list = &ocelot_block_cb_list;
+
+ switch (f->command) {
+ case FLOW_BLOCK_BIND:
+ if (flow_block_cb_is_busy(cb, port, &ocelot_block_cb_list))
+ return -EBUSY;
+
+ block_cb = flow_block_cb_alloc(f->net, cb, port, port, NULL);
+ if (IS_ERR(block_cb))
+ return PTR_ERR(block_cb);
+
+ err = ocelot_setup_tc_block_flower_bind(port, f);
+ if (err < 0) {
+ flow_block_cb_free(block_cb);
+ return err;
+ }
+ flow_block_cb_add(block_cb, f);
+ list_add_tail(&block_cb->driver_list, f->driver_block_list);
+ return 0;
+ case FLOW_BLOCK_UNBIND:
+ block_cb = flow_block_cb_lookup(f, cb, port);
+ if (!block_cb)
+ return -ENOENT;
+
+ ocelot_setup_tc_block_flower_unbind(port, f);
+ flow_block_cb_remove(block_cb, f);
+ list_del(&block_cb->driver_list);
+ return 0;
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
+int ocelot_setup_tc(struct net_device *dev, enum tc_setup_type type,
+ void *type_data)
+{
+ struct ocelot_port *port = netdev_priv(dev);
+
+ switch (type) {
+ case TC_SETUP_BLOCK:
+ return ocelot_setup_tc_block(port, type_data);
+ default:
+ return -EOPNOTSUPP;
+ }
+ return 0;
+}
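A standalone check (not part of the patch) of the rate conversion in ocelot_setup_tc_cls_matchall() above: tc delivers the police rate in bytes per second, while ocelot_port_policer_add() expects kilobits per second, hence div_u64(rate_bytes_ps, 1000) * 8. The burst is similarly turned into a byte budget from the same rate and the policing burst time via the PSCHED tick macros; only the rate step is reproduced here, with an arbitrary example value.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t rate_bytes_ps = 12500000ULL;	/* 100 Mbit/s expressed in bytes/s */
	uint32_t rate_kbps = (uint32_t)(rate_bytes_ps / 1000) * 8;

	printf("%llu bytes/s -> %u kbps\n",
	       (unsigned long long)rate_bytes_ps, rate_kbps);
	/* Prints: 12500000 bytes/s -> 100000 kbps */
	return 0;
}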
diff --git a/drivers/net/ethernet/mscc/ocelot_tc.h b/drivers/net/ethernet/mscc/ocelot_tc.h
new file mode 100644
index 000000000000..61757c2250a6
--- /dev/null
+++ b/drivers/net/ethernet/mscc/ocelot_tc.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: (GPL-2.0 OR MIT) */
+/* Microsemi Ocelot Switch driver
+ *
+ * Copyright (c) 2019 Microsemi Corporation
+ */
+
+#ifndef _MSCC_OCELOT_TC_H_
+#define _MSCC_OCELOT_TC_H_
+
+#include <linux/netdevice.h>
+
+struct ocelot_port_tc {
+ bool block_shared;
+ unsigned long offload_cnt;
+
+ unsigned long police_id;
+};
+
+int ocelot_setup_tc(struct net_device *dev, enum tc_setup_type type,
+ void *type_data);
+
+#endif /* _MSCC_OCELOT_TC_H_ */
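The port driver is expected to hand TC offload requests to the helper declared above from its .ndo_setup_tc callback. A minimal wiring sketch follows (the ops structure name is a placeholder, not taken from this patch):

static const struct net_device_ops ocelot_port_netdev_ops = {
	/* ... other callbacks ... */
	.ndo_setup_tc	= ocelot_setup_tc,
};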
diff --git a/drivers/net/ethernet/mscc/ocelot_vcap.h b/drivers/net/ethernet/mscc/ocelot_vcap.h
new file mode 100644
index 000000000000..e22eac1da783
--- /dev/null
+++ b/drivers/net/ethernet/mscc/ocelot_vcap.h
@@ -0,0 +1,403 @@
+/* SPDX-License-Identifier: (GPL-2.0 OR MIT)
+ * Microsemi Ocelot Switch driver
+ * Copyright (c) 2019 Microsemi Corporation
+ */
+
+#ifndef _OCELOT_VCAP_H_
+#define _OCELOT_VCAP_H_
+
+/* =================================================================
+ * VCAP Common
+ * =================================================================
+ */
+
+/* VCAP Type-Group values */
+#define VCAP_TG_NONE 0 /* Entry is invalid */
+#define VCAP_TG_FULL 1 /* Full entry */
+#define VCAP_TG_HALF 2 /* Half entry */
+#define VCAP_TG_QUARTER 3 /* Quarter entry */
+
+/* =================================================================
+ * VCAP IS2
+ * =================================================================
+ */
+
+#define VCAP_IS2_CNT 64
+#define VCAP_IS2_ENTRY_WIDTH 376
+#define VCAP_IS2_ACTION_WIDTH 99
+#define VCAP_PORT_CNT 11
+
+/* IS2 half key types */
+#define IS2_TYPE_ETYPE 0
+#define IS2_TYPE_LLC 1
+#define IS2_TYPE_SNAP 2
+#define IS2_TYPE_ARP 3
+#define IS2_TYPE_IP_UDP_TCP 4
+#define IS2_TYPE_IP_OTHER 5
+#define IS2_TYPE_IPV6 6
+#define IS2_TYPE_OAM 7
+#define IS2_TYPE_SMAC_SIP6 8
+#define IS2_TYPE_ANY 100 /* Pseudo type */
+
+/* IS2 half key type mask for matching any IP */
+#define IS2_TYPE_MASK_IP_ANY 0xe
+
+/* IS2 action types */
+#define IS2_ACTION_TYPE_NORMAL 0
+#define IS2_ACTION_TYPE_SMAC_SIP 1
+
+/* IS2 MASK_MODE values */
+#define IS2_ACT_MASK_MODE_NONE 0
+#define IS2_ACT_MASK_MODE_FILTER 1
+#define IS2_ACT_MASK_MODE_POLICY 2
+#define IS2_ACT_MASK_MODE_REDIR 3
+
+/* IS2 REW_OP values */
+#define IS2_ACT_REW_OP_NONE 0
+#define IS2_ACT_REW_OP_PTP_ONE 2
+#define IS2_ACT_REW_OP_PTP_TWO 3
+#define IS2_ACT_REW_OP_SPECIAL 8
+#define IS2_ACT_REW_OP_PTP_ORG 9
+#define IS2_ACT_REW_OP_PTP_ONE_SUB_DELAY_1 (IS2_ACT_REW_OP_PTP_ONE | (1 << 3))
+#define IS2_ACT_REW_OP_PTP_ONE_SUB_DELAY_2 (IS2_ACT_REW_OP_PTP_ONE | (2 << 3))
+#define IS2_ACT_REW_OP_PTP_ONE_ADD_DELAY (IS2_ACT_REW_OP_PTP_ONE | (1 << 5))
+#define IS2_ACT_REW_OP_PTP_ONE_ADD_SUB BIT(7)
+
+#define VCAP_PORT_WIDTH 4
+
+/* IS2 quarter key - SMAC_SIP4 */
+#define IS2_QKO_IGR_PORT 0
+#define IS2_QKL_IGR_PORT VCAP_PORT_WIDTH
+#define IS2_QKO_L2_SMAC (IS2_QKO_IGR_PORT + IS2_QKL_IGR_PORT)
+#define IS2_QKL_L2_SMAC 48
+#define IS2_QKO_L3_IP4_SIP (IS2_QKO_L2_SMAC + IS2_QKL_L2_SMAC)
+#define IS2_QKL_L3_IP4_SIP 32
+
+/* IS2 half key - common */
+#define IS2_HKO_TYPE 0
+#define IS2_HKL_TYPE 4
+#define IS2_HKO_FIRST (IS2_HKO_TYPE + IS2_HKL_TYPE)
+#define IS2_HKL_FIRST 1
+#define IS2_HKO_PAG (IS2_HKO_FIRST + IS2_HKL_FIRST)
+#define IS2_HKL_PAG 8
+#define IS2_HKO_IGR_PORT_MASK (IS2_HKO_PAG + IS2_HKL_PAG)
+#define IS2_HKL_IGR_PORT_MASK (VCAP_PORT_CNT + 1)
+#define IS2_HKO_SERVICE_FRM (IS2_HKO_IGR_PORT_MASK + IS2_HKL_IGR_PORT_MASK)
+#define IS2_HKL_SERVICE_FRM 1
+#define IS2_HKO_HOST_MATCH (IS2_HKO_SERVICE_FRM + IS2_HKL_SERVICE_FRM)
+#define IS2_HKL_HOST_MATCH 1
+#define IS2_HKO_L2_MC (IS2_HKO_HOST_MATCH + IS2_HKL_HOST_MATCH)
+#define IS2_HKL_L2_MC 1
+#define IS2_HKO_L2_BC (IS2_HKO_L2_MC + IS2_HKL_L2_MC)
+#define IS2_HKL_L2_BC 1
+#define IS2_HKO_VLAN_TAGGED (IS2_HKO_L2_BC + IS2_HKL_L2_BC)
+#define IS2_HKL_VLAN_TAGGED 1
+#define IS2_HKO_VID (IS2_HKO_VLAN_TAGGED + IS2_HKL_VLAN_TAGGED)
+#define IS2_HKL_VID 12
+#define IS2_HKO_DEI (IS2_HKO_VID + IS2_HKL_VID)
+#define IS2_HKL_DEI 1
+#define IS2_HKO_PCP (IS2_HKO_DEI + IS2_HKL_DEI)
+#define IS2_HKL_PCP 3
+
+/* IS2 half key - MAC_ETYPE/MAC_LLC/MAC_SNAP/OAM common */
+#define IS2_HKO_L2_DMAC (IS2_HKO_PCP + IS2_HKL_PCP)
+#define IS2_HKL_L2_DMAC 48
+#define IS2_HKO_L2_SMAC (IS2_HKO_L2_DMAC + IS2_HKL_L2_DMAC)
+#define IS2_HKL_L2_SMAC 48
+
+/* IS2 half key - MAC_ETYPE */
+#define IS2_HKO_MAC_ETYPE_ETYPE (IS2_HKO_L2_SMAC + IS2_HKL_L2_SMAC)
+#define IS2_HKL_MAC_ETYPE_ETYPE 16
+#define IS2_HKO_MAC_ETYPE_L2_PAYLOAD \
+ (IS2_HKO_MAC_ETYPE_ETYPE + IS2_HKL_MAC_ETYPE_ETYPE)
+#define IS2_HKL_MAC_ETYPE_L2_PAYLOAD 27
+
+/* IS2 half key - MAC_LLC */
+#define IS2_HKO_MAC_LLC_L2_LLC IS2_HKO_MAC_ETYPE_ETYPE
+#define IS2_HKL_MAC_LLC_L2_LLC 40
+
+/* IS2 half key - MAC_SNAP */
+#define IS2_HKO_MAC_SNAP_L2_SNAP IS2_HKO_MAC_ETYPE_ETYPE
+#define IS2_HKL_MAC_SNAP_L2_SNAP 40
+
+/* IS2 half key - ARP */
+#define IS2_HKO_MAC_ARP_L2_SMAC IS2_HKO_L2_DMAC
+#define IS2_HKL_MAC_ARP_L2_SMAC 48
+#define IS2_HKO_MAC_ARP_ARP_ADDR_SPACE_OK \
+ (IS2_HKO_MAC_ARP_L2_SMAC + IS2_HKL_MAC_ARP_L2_SMAC)
+#define IS2_HKL_MAC_ARP_ARP_ADDR_SPACE_OK 1
+#define IS2_HKO_MAC_ARP_ARP_PROTO_SPACE_OK \
+ (IS2_HKO_MAC_ARP_ARP_ADDR_SPACE_OK + IS2_HKL_MAC_ARP_ARP_ADDR_SPACE_OK)
+#define IS2_HKL_MAC_ARP_ARP_PROTO_SPACE_OK 1
+#define IS2_HKO_MAC_ARP_ARP_LEN_OK \
+ (IS2_HKO_MAC_ARP_ARP_PROTO_SPACE_OK + \
+ IS2_HKL_MAC_ARP_ARP_PROTO_SPACE_OK)
+#define IS2_HKL_MAC_ARP_ARP_LEN_OK 1
+#define IS2_HKO_MAC_ARP_ARP_TGT_MATCH \
+ (IS2_HKO_MAC_ARP_ARP_LEN_OK + IS2_HKL_MAC_ARP_ARP_LEN_OK)
+#define IS2_HKL_MAC_ARP_ARP_TGT_MATCH 1
+#define IS2_HKO_MAC_ARP_ARP_SENDER_MATCH \
+ (IS2_HKO_MAC_ARP_ARP_TGT_MATCH + IS2_HKL_MAC_ARP_ARP_TGT_MATCH)
+#define IS2_HKL_MAC_ARP_ARP_SENDER_MATCH 1
+#define IS2_HKO_MAC_ARP_ARP_OPCODE_UNKNOWN \
+ (IS2_HKO_MAC_ARP_ARP_SENDER_MATCH + IS2_HKL_MAC_ARP_ARP_SENDER_MATCH)
+#define IS2_HKL_MAC_ARP_ARP_OPCODE_UNKNOWN 1
+#define IS2_HKO_MAC_ARP_ARP_OPCODE \
+ (IS2_HKO_MAC_ARP_ARP_OPCODE_UNKNOWN + \
+ IS2_HKL_MAC_ARP_ARP_OPCODE_UNKNOWN)
+#define IS2_HKL_MAC_ARP_ARP_OPCODE 2
+#define IS2_HKO_MAC_ARP_L3_IP4_DIP \
+ (IS2_HKO_MAC_ARP_ARP_OPCODE + IS2_HKL_MAC_ARP_ARP_OPCODE)
+#define IS2_HKL_MAC_ARP_L3_IP4_DIP 32
+#define IS2_HKO_MAC_ARP_L3_IP4_SIP \
+ (IS2_HKO_MAC_ARP_L3_IP4_DIP + IS2_HKL_MAC_ARP_L3_IP4_DIP)
+#define IS2_HKL_MAC_ARP_L3_IP4_SIP 32
+#define IS2_HKO_MAC_ARP_DIP_EQ_SIP \
+ (IS2_HKO_MAC_ARP_L3_IP4_SIP + IS2_HKL_MAC_ARP_L3_IP4_SIP)
+#define IS2_HKL_MAC_ARP_DIP_EQ_SIP 1
+
+/* IS2 half key - IP4_TCP_UDP/IP4_OTHER common */
+#define IS2_HKO_IP4 IS2_HKO_L2_DMAC
+#define IS2_HKL_IP4 1
+#define IS2_HKO_L3_FRAGMENT (IS2_HKO_IP4 + IS2_HKL_IP4)
+#define IS2_HKL_L3_FRAGMENT 1
+#define IS2_HKO_L3_FRAG_OFS_GT0 (IS2_HKO_L3_FRAGMENT + IS2_HKL_L3_FRAGMENT)
+#define IS2_HKL_L3_FRAG_OFS_GT0 1
+#define IS2_HKO_L3_OPTIONS (IS2_HKO_L3_FRAG_OFS_GT0 + IS2_HKL_L3_FRAG_OFS_GT0)
+#define IS2_HKL_L3_OPTIONS 1
+#define IS2_HKO_L3_TTL_GT0 (IS2_HKO_L3_OPTIONS + IS2_HKL_L3_OPTIONS)
+#define IS2_HKL_L3_TTL_GT0 1
+#define IS2_HKO_L3_TOS (IS2_HKO_L3_TTL_GT0 + IS2_HKL_L3_TTL_GT0)
+#define IS2_HKL_L3_TOS 8
+#define IS2_HKO_L3_IP4_DIP (IS2_HKO_L3_TOS + IS2_HKL_L3_TOS)
+#define IS2_HKL_L3_IP4_DIP 32
+#define IS2_HKO_L3_IP4_SIP (IS2_HKO_L3_IP4_DIP + IS2_HKL_L3_IP4_DIP)
+#define IS2_HKL_L3_IP4_SIP 32
+#define IS2_HKO_DIP_EQ_SIP (IS2_HKO_L3_IP4_SIP + IS2_HKL_L3_IP4_SIP)
+#define IS2_HKL_DIP_EQ_SIP 1
+
+/* IS2 half key - IP4_TCP_UDP */
+#define IS2_HKO_IP4_TCP_UDP_TCP (IS2_HKO_DIP_EQ_SIP + IS2_HKL_DIP_EQ_SIP)
+#define IS2_HKL_IP4_TCP_UDP_TCP 1
+#define IS2_HKO_IP4_TCP_UDP_L4_DPORT \
+ (IS2_HKO_IP4_TCP_UDP_TCP + IS2_HKL_IP4_TCP_UDP_TCP)
+#define IS2_HKL_IP4_TCP_UDP_L4_DPORT 16
+#define IS2_HKO_IP4_TCP_UDP_L4_SPORT \
+ (IS2_HKO_IP4_TCP_UDP_L4_DPORT + IS2_HKL_IP4_TCP_UDP_L4_DPORT)
+#define IS2_HKL_IP4_TCP_UDP_L4_SPORT 16
+#define IS2_HKO_IP4_TCP_UDP_L4_RNG \
+ (IS2_HKO_IP4_TCP_UDP_L4_SPORT + IS2_HKL_IP4_TCP_UDP_L4_SPORT)
+#define IS2_HKL_IP4_TCP_UDP_L4_RNG 8
+#define IS2_HKO_IP4_TCP_UDP_SPORT_EQ_DPORT \
+ (IS2_HKO_IP4_TCP_UDP_L4_RNG + IS2_HKL_IP4_TCP_UDP_L4_RNG)
+#define IS2_HKL_IP4_TCP_UDP_SPORT_EQ_DPORT 1
+#define IS2_HKO_IP4_TCP_UDP_SEQUENCE_EQ0 \
+ (IS2_HKO_IP4_TCP_UDP_SPORT_EQ_DPORT + \
+ IS2_HKL_IP4_TCP_UDP_SPORT_EQ_DPORT)
+#define IS2_HKL_IP4_TCP_UDP_SEQUENCE_EQ0 1
+#define IS2_HKO_IP4_TCP_UDP_L4_FIN \
+ (IS2_HKO_IP4_TCP_UDP_SEQUENCE_EQ0 + IS2_HKL_IP4_TCP_UDP_SEQUENCE_EQ0)
+#define IS2_HKL_IP4_TCP_UDP_L4_FIN 1
+#define IS2_HKO_IP4_TCP_UDP_L4_SYN \
+ (IS2_HKO_IP4_TCP_UDP_L4_FIN + IS2_HKL_IP4_TCP_UDP_L4_FIN)
+#define IS2_HKL_IP4_TCP_UDP_L4_SYN 1
+#define IS2_HKO_IP4_TCP_UDP_L4_RST \
+ (IS2_HKO_IP4_TCP_UDP_L4_SYN + IS2_HKL_IP4_TCP_UDP_L4_SYN)
+#define IS2_HKL_IP4_TCP_UDP_L4_RST 1
+#define IS2_HKO_IP4_TCP_UDP_L4_PSH \
+ (IS2_HKO_IP4_TCP_UDP_L4_RST + IS2_HKL_IP4_TCP_UDP_L4_RST)
+#define IS2_HKL_IP4_TCP_UDP_L4_PSH 1
+#define IS2_HKO_IP4_TCP_UDP_L4_ACK \
+ (IS2_HKO_IP4_TCP_UDP_L4_PSH + IS2_HKL_IP4_TCP_UDP_L4_PSH)
+#define IS2_HKL_IP4_TCP_UDP_L4_ACK 1
+#define IS2_HKO_IP4_TCP_UDP_L4_URG \
+ (IS2_HKO_IP4_TCP_UDP_L4_ACK + IS2_HKL_IP4_TCP_UDP_L4_ACK)
+#define IS2_HKL_IP4_TCP_UDP_L4_URG 1
+#define IS2_HKO_IP4_TCP_UDP_L4_1588_DOM \
+ (IS2_HKO_IP4_TCP_UDP_L4_URG + IS2_HKL_IP4_TCP_UDP_L4_URG)
+#define IS2_HKL_IP4_TCP_UDP_L4_1588_DOM 8
+#define IS2_HKO_IP4_TCP_UDP_L4_1588_VER \
+ (IS2_HKO_IP4_TCP_UDP_L4_1588_DOM + IS2_HKL_IP4_TCP_UDP_L4_1588_DOM)
+#define IS2_HKL_IP4_TCP_UDP_L4_1588_VER 4
+
+/* IS2 half key - IP4_OTHER */
+#define IS2_HKO_IP4_OTHER_L3_PROTO IS2_HKO_IP4_TCP_UDP_TCP
+#define IS2_HKL_IP4_OTHER_L3_PROTO 8
+#define IS2_HKO_IP4_OTHER_L3_PAYLOAD \
+ (IS2_HKO_IP4_OTHER_L3_PROTO + IS2_HKL_IP4_OTHER_L3_PROTO)
+#define IS2_HKL_IP4_OTHER_L3_PAYLOAD 56
+
+/* IS2 half key - IP6_STD */
+#define IS2_HKO_IP6_STD_L3_TTL_GT0 IS2_HKO_L2_DMAC
+#define IS2_HKL_IP6_STD_L3_TTL_GT0 1
+#define IS2_HKO_IP6_STD_L3_IP6_SIP \
+ (IS2_HKO_IP6_STD_L3_TTL_GT0 + IS2_HKL_IP6_STD_L3_TTL_GT0)
+#define IS2_HKL_IP6_STD_L3_IP6_SIP 128
+#define IS2_HKO_IP6_STD_L3_PROTO \
+ (IS2_HKO_IP6_STD_L3_IP6_SIP + IS2_HKL_IP6_STD_L3_IP6_SIP)
+#define IS2_HKL_IP6_STD_L3_PROTO 8
+
+/* IS2 half key - OAM */
+#define IS2_HKO_OAM_OAM_MEL_FLAGS IS2_HKO_MAC_ETYPE_ETYPE
+#define IS2_HKL_OAM_OAM_MEL_FLAGS 7
+#define IS2_HKO_OAM_OAM_VER \
+ (IS2_HKO_OAM_OAM_MEL_FLAGS + IS2_HKL_OAM_OAM_MEL_FLAGS)
+#define IS2_HKL_OAM_OAM_VER 5
+#define IS2_HKO_OAM_OAM_OPCODE (IS2_HKO_OAM_OAM_VER + IS2_HKL_OAM_OAM_VER)
+#define IS2_HKL_OAM_OAM_OPCODE 8
+#define IS2_HKO_OAM_OAM_FLAGS (IS2_HKO_OAM_OAM_OPCODE + IS2_HKL_OAM_OAM_OPCODE)
+#define IS2_HKL_OAM_OAM_FLAGS 8
+#define IS2_HKO_OAM_OAM_MEPID (IS2_HKO_OAM_OAM_FLAGS + IS2_HKL_OAM_OAM_FLAGS)
+#define IS2_HKL_OAM_OAM_MEPID 16
+#define IS2_HKO_OAM_OAM_CCM_CNTS_EQ0 \
+ (IS2_HKO_OAM_OAM_MEPID + IS2_HKL_OAM_OAM_MEPID)
+#define IS2_HKL_OAM_OAM_CCM_CNTS_EQ0 1
+
+/* IS2 half key - SMAC_SIP6 */
+#define IS2_HKO_SMAC_SIP6_IGR_PORT IS2_HKL_TYPE
+#define IS2_HKL_SMAC_SIP6_IGR_PORT VCAP_PORT_WIDTH
+#define IS2_HKO_SMAC_SIP6_L2_SMAC \
+ (IS2_HKO_SMAC_SIP6_IGR_PORT + IS2_HKL_SMAC_SIP6_IGR_PORT)
+#define IS2_HKL_SMAC_SIP6_L2_SMAC 48
+#define IS2_HKO_SMAC_SIP6_L3_IP6_SIP \
+ (IS2_HKO_SMAC_SIP6_L2_SMAC + IS2_HKL_SMAC_SIP6_L2_SMAC)
+#define IS2_HKL_SMAC_SIP6_L3_IP6_SIP 128
+
+/* IS2 full key - common */
+#define IS2_FKO_TYPE 0
+#define IS2_FKL_TYPE 2
+#define IS2_FKO_FIRST (IS2_FKO_TYPE + IS2_FKL_TYPE)
+#define IS2_FKL_FIRST 1
+#define IS2_FKO_PAG (IS2_FKO_FIRST + IS2_FKL_FIRST)
+#define IS2_FKL_PAG 8
+#define IS2_FKO_IGR_PORT_MASK (IS2_FKO_PAG + IS2_FKL_PAG)
+#define IS2_FKL_IGR_PORT_MASK (VCAP_PORT_CNT + 1)
+#define IS2_FKO_SERVICE_FRM (IS2_FKO_IGR_PORT_MASK + IS2_FKL_IGR_PORT_MASK)
+#define IS2_FKL_SERVICE_FRM 1
+#define IS2_FKO_HOST_MATCH (IS2_FKO_SERVICE_FRM + IS2_FKL_SERVICE_FRM)
+#define IS2_FKL_HOST_MATCH 1
+#define IS2_FKO_L2_MC (IS2_FKO_HOST_MATCH + IS2_FKL_HOST_MATCH)
+#define IS2_FKL_L2_MC 1
+#define IS2_FKO_L2_BC (IS2_FKO_L2_MC + IS2_FKL_L2_MC)
+#define IS2_FKL_L2_BC 1
+#define IS2_FKO_VLAN_TAGGED (IS2_FKO_L2_BC + IS2_FKL_L2_BC)
+#define IS2_FKL_VLAN_TAGGED 1
+#define IS2_FKO_VID (IS2_FKO_VLAN_TAGGED + IS2_FKL_VLAN_TAGGED)
+#define IS2_FKL_VID 12
+#define IS2_FKO_DEI (IS2_FKO_VID + IS2_FKL_VID)
+#define IS2_FKL_DEI 1
+#define IS2_FKO_PCP (IS2_FKO_DEI + IS2_FKL_DEI)
+#define IS2_FKL_PCP 3
+
+/* IS2 full key - IP6_TCP_UDP/IP6_OTHER common */
+#define IS2_FKO_L3_TTL_GT0 (IS2_FKO_PCP + IS2_FKL_PCP)
+#define IS2_FKL_L3_TTL_GT0 1
+#define IS2_FKO_L3_TOS (IS2_FKO_L3_TTL_GT0 + IS2_FKL_L3_TTL_GT0)
+#define IS2_FKL_L3_TOS 8
+#define IS2_FKO_L3_IP6_DIP (IS2_FKO_L3_TOS + IS2_FKL_L3_TOS)
+#define IS2_FKL_L3_IP6_DIP 128
+#define IS2_FKO_L3_IP6_SIP (IS2_FKO_L3_IP6_DIP + IS2_FKL_L3_IP6_DIP)
+#define IS2_FKL_L3_IP6_SIP 128
+#define IS2_FKO_DIP_EQ_SIP (IS2_FKO_L3_IP6_SIP + IS2_FKL_L3_IP6_SIP)
+#define IS2_FKL_DIP_EQ_SIP 1
+
+/* IS2 full key - IP6_TCP_UDP */
+#define IS2_FKO_IP6_TCP_UDP_TCP (IS2_FKO_DIP_EQ_SIP + IS2_FKL_DIP_EQ_SIP)
+#define IS2_FKL_IP6_TCP_UDP_TCP 1
+#define IS2_FKO_IP6_TCP_UDP_L4_DPORT \
+ (IS2_FKO_IP6_TCP_UDP_TCP + IS2_FKL_IP6_TCP_UDP_TCP)
+#define IS2_FKL_IP6_TCP_UDP_L4_DPORT 16
+#define IS2_FKO_IP6_TCP_UDP_L4_SPORT \
+ (IS2_FKO_IP6_TCP_UDP_L4_DPORT + IS2_FKL_IP6_TCP_UDP_L4_DPORT)
+#define IS2_FKL_IP6_TCP_UDP_L4_SPORT 16
+#define IS2_FKO_IP6_TCP_UDP_L4_RNG \
+ (IS2_FKO_IP6_TCP_UDP_L4_SPORT + IS2_FKL_IP6_TCP_UDP_L4_SPORT)
+#define IS2_FKL_IP6_TCP_UDP_L4_RNG 8
+#define IS2_FKO_IP6_TCP_UDP_SPORT_EQ_DPORT \
+ (IS2_FKO_IP6_TCP_UDP_L4_RNG + IS2_FKL_IP6_TCP_UDP_L4_RNG)
+#define IS2_FKL_IP6_TCP_UDP_SPORT_EQ_DPORT 1
+#define IS2_FKO_IP6_TCP_UDP_SEQUENCE_EQ0 \
+ (IS2_FKO_IP6_TCP_UDP_SPORT_EQ_DPORT + \
+ IS2_FKL_IP6_TCP_UDP_SPORT_EQ_DPORT)
+#define IS2_FKL_IP6_TCP_UDP_SEQUENCE_EQ0 1
+#define IS2_FKO_IP6_TCP_UDP_L4_FIN \
+ (IS2_FKO_IP6_TCP_UDP_SEQUENCE_EQ0 + IS2_FKL_IP6_TCP_UDP_SEQUENCE_EQ0)
+#define IS2_FKL_IP6_TCP_UDP_L4_FIN 1
+#define IS2_FKO_IP6_TCP_UDP_L4_SYN \
+ (IS2_FKO_IP6_TCP_UDP_L4_FIN + IS2_FKL_IP6_TCP_UDP_L4_FIN)
+#define IS2_FKL_IP6_TCP_UDP_L4_SYN 1
+#define IS2_FKO_IP6_TCP_UDP_L4_RST \
+ (IS2_FKO_IP6_TCP_UDP_L4_SYN + IS2_FKL_IP6_TCP_UDP_L4_SYN)
+#define IS2_FKL_IP6_TCP_UDP_L4_RST 1
+#define IS2_FKO_IP6_TCP_UDP_L4_PSH \
+ (IS2_FKO_IP6_TCP_UDP_L4_RST + IS2_FKL_IP6_TCP_UDP_L4_RST)
+#define IS2_FKL_IP6_TCP_UDP_L4_PSH 1
+#define IS2_FKO_IP6_TCP_UDP_L4_ACK \
+ (IS2_FKO_IP6_TCP_UDP_L4_PSH + IS2_FKL_IP6_TCP_UDP_L4_PSH)
+#define IS2_FKL_IP6_TCP_UDP_L4_ACK 1
+#define IS2_FKO_IP6_TCP_UDP_L4_URG \
+ (IS2_FKO_IP6_TCP_UDP_L4_ACK + IS2_FKL_IP6_TCP_UDP_L4_ACK)
+#define IS2_FKL_IP6_TCP_UDP_L4_URG 1
+#define IS2_FKO_IP6_TCP_UDP_L4_1588_DOM \
+ (IS2_FKO_IP6_TCP_UDP_L4_URG + IS2_FKL_IP6_TCP_UDP_L4_URG)
+#define IS2_FKL_IP6_TCP_UDP_L4_1588_DOM 8
+#define IS2_FKO_IP6_TCP_UDP_L4_1588_VER \
+ (IS2_FKO_IP6_TCP_UDP_L4_1588_DOM + IS2_FKL_IP6_TCP_UDP_L4_1588_DOM)
+#define IS2_FKL_IP6_TCP_UDP_L4_1588_VER 4
+
+/* IS2 full key - IP6_OTHER */
+#define IS2_FKO_IP6_OTHER_L3_PROTO IS2_FKO_IP6_TCP_UDP_TCP
+#define IS2_FKL_IP6_OTHER_L3_PROTO 8
+#define IS2_FKO_IP6_OTHER_L3_PAYLOAD \
+ (IS2_FKO_IP6_OTHER_L3_PROTO + IS2_FKL_IP6_OTHER_L3_PROTO)
+#define IS2_FKL_IP6_OTHER_L3_PAYLOAD 56
+
+/* IS2 full key - CUSTOM */
+#define IS2_FKO_CUSTOM_CUSTOM_TYPE IS2_FKO_L3_TTL_GT0
+#define IS2_FKL_CUSTOM_CUSTOM_TYPE 1
+#define IS2_FKO_CUSTOM_CUSTOM \
+ (IS2_FKO_CUSTOM_CUSTOM_TYPE + IS2_FKL_CUSTOM_CUSTOM_TYPE)
+#define IS2_FKL_CUSTOM_CUSTOM 320
+
+/* IS2 action - BASE_TYPE */
+#define IS2_AO_HIT_ME_ONCE 0
+#define IS2_AL_HIT_ME_ONCE 1
+#define IS2_AO_CPU_COPY_ENA (IS2_AO_HIT_ME_ONCE + IS2_AL_HIT_ME_ONCE)
+#define IS2_AL_CPU_COPY_ENA 1
+#define IS2_AO_CPU_QU_NUM (IS2_AO_CPU_COPY_ENA + IS2_AL_CPU_COPY_ENA)
+#define IS2_AL_CPU_QU_NUM 3
+#define IS2_AO_MASK_MODE (IS2_AO_CPU_QU_NUM + IS2_AL_CPU_QU_NUM)
+#define IS2_AL_MASK_MODE 2
+#define IS2_AO_MIRROR_ENA (IS2_AO_MASK_MODE + IS2_AL_MASK_MODE)
+#define IS2_AL_MIRROR_ENA 1
+#define IS2_AO_LRN_DIS (IS2_AO_MIRROR_ENA + IS2_AL_MIRROR_ENA)
+#define IS2_AL_LRN_DIS 1
+#define IS2_AO_POLICE_ENA (IS2_AO_LRN_DIS + IS2_AL_LRN_DIS)
+#define IS2_AL_POLICE_ENA 1
+#define IS2_AO_POLICE_IDX (IS2_AO_POLICE_ENA + IS2_AL_POLICE_ENA)
+#define IS2_AL_POLICE_IDX 9
+#define IS2_AO_POLICE_VCAP_ONLY (IS2_AO_POLICE_IDX + IS2_AL_POLICE_IDX)
+#define IS2_AL_POLICE_VCAP_ONLY 1
+#define IS2_AO_PORT_MASK (IS2_AO_POLICE_VCAP_ONLY + IS2_AL_POLICE_VCAP_ONLY)
+#define IS2_AL_PORT_MASK VCAP_PORT_CNT
+#define IS2_AO_REW_OP (IS2_AO_PORT_MASK + IS2_AL_PORT_MASK)
+#define IS2_AL_REW_OP 9
+#define IS2_AO_LM_CNT_DIS (IS2_AO_REW_OP + IS2_AL_REW_OP)
+#define IS2_AL_LM_CNT_DIS 1
+#define IS2_AO_ISDX_ENA \
+ (IS2_AO_LM_CNT_DIS + IS2_AL_LM_CNT_DIS + 1) /* Reserved bit */
+#define IS2_AL_ISDX_ENA 1
+#define IS2_AO_ACL_ID (IS2_AO_ISDX_ENA + IS2_AL_ISDX_ENA)
+#define IS2_AL_ACL_ID 6
+
+/* IS2 action - SMAC_SIP */
+#define IS2_AO_SMAC_SIP_CPU_COPY_ENA 0
+#define IS2_AL_SMAC_SIP_CPU_COPY_ENA 1
+#define IS2_AO_SMAC_SIP_CPU_QU_NUM 1
+#define IS2_AL_SMAC_SIP_CPU_QU_NUM 3
+#define IS2_AO_SMAC_SIP_FWD_KILL_ENA 4
+#define IS2_AL_SMAC_SIP_FWD_KILL_ENA 1
+#define IS2_AO_SMAC_SIP_HOST_MATCH 5
+#define IS2_AL_SMAC_SIP_HOST_MATCH 1
+
+#endif /* _OCELOT_VCAP_H_ */
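Each IS2 key/action field above is described by an offset macro (the position of its first bit) and a length macro, with every offset defined as the previous field's offset plus that field's length. A small compile-time sketch of how the chaining resolves for the first few half-key fields; illustrative only, the asserted values follow directly from the definitions above:

#include "ocelot_vcap.h"

/* TYPE occupies bits 0-3, FIRST bit 4, PAG bits 5-12, so the ingress
 * port mask starts at bit 13 and is VCAP_PORT_CNT + 1 = 12 bits wide.
 */
_Static_assert(IS2_HKO_FIRST == 4, "FIRST follows the 4-bit TYPE field");
_Static_assert(IS2_HKO_PAG == 5, "PAG follows the 1-bit FIRST field");
_Static_assert(IS2_HKO_IGR_PORT_MASK == 13, "port mask follows the 8-bit PAG");
_Static_assert(IS2_HKL_IGR_PORT_MASK == 12, "11 front ports plus the CPU port");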
diff --git a/drivers/net/ethernet/netronome/Kconfig b/drivers/net/ethernet/netronome/Kconfig
index 4ad5109059e0..bac5be4d4f43 100644
--- a/drivers/net/ethernet/netronome/Kconfig
+++ b/drivers/net/ethernet/netronome/Kconfig
@@ -20,6 +20,7 @@ config NFP
tristate "Netronome(R) NFP4000/NFP6000 NIC driver"
depends on PCI && PCI_MSI
depends on VXLAN || VXLAN=n
+ depends on TLS && TLS_DEVICE || TLS_DEVICE=n
select NET_DEVLINK
---help---
This driver supports the Netronome(R) NFP4000/NFP6000 based
diff --git a/drivers/net/ethernet/netronome/nfp/Makefile b/drivers/net/ethernet/netronome/nfp/Makefile
index 87bf784f8e8f..2805641965f3 100644
--- a/drivers/net/ethernet/netronome/nfp/Makefile
+++ b/drivers/net/ethernet/netronome/nfp/Makefile
@@ -16,6 +16,7 @@ nfp-objs := \
nfpcore/nfp_rtsym.o \
nfpcore/nfp_target.o \
ccm.o \
+ ccm_mbox.o \
nfp_asm.o \
nfp_app.o \
nfp_app_nic.o \
@@ -34,6 +35,11 @@ nfp-objs := \
nfp_shared_buf.o \
nic/main.o
+ifeq ($(CONFIG_TLS_DEVICE),y)
+nfp-objs += \
+ crypto/tls.o
+endif
+
ifeq ($(CONFIG_NFP_APP_FLOWER),y)
nfp-objs += \
flower/action.o \
diff --git a/drivers/net/ethernet/netronome/nfp/abm/cls.c b/drivers/net/ethernet/netronome/nfp/abm/cls.c
index ff3913085665..23ebddfb9532 100644
--- a/drivers/net/ethernet/netronome/nfp/abm/cls.c
+++ b/drivers/net/ethernet/netronome/nfp/abm/cls.c
@@ -262,22 +262,12 @@ static int nfp_abm_setup_tc_block_cb(enum tc_setup_type type,
}
}
+static LIST_HEAD(nfp_abm_block_cb_list);
+
int nfp_abm_setup_cls_block(struct net_device *netdev, struct nfp_repr *repr,
- struct tc_block_offload *f)
+ struct flow_block_offload *f)
{
- if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_EGRESS)
- return -EOPNOTSUPP;
-
- switch (f->command) {
- case TC_BLOCK_BIND:
- return tcf_block_cb_register(f->block,
- nfp_abm_setup_tc_block_cb,
- repr, repr, f->extack);
- case TC_BLOCK_UNBIND:
- tcf_block_cb_unregister(f->block, nfp_abm_setup_tc_block_cb,
- repr);
- return 0;
- default:
- return -EOPNOTSUPP;
- }
+ return flow_block_cb_setup_simple(f, &nfp_abm_block_cb_list,
+ nfp_abm_setup_tc_block_cb,
+ repr, repr, true);
}
diff --git a/drivers/net/ethernet/netronome/nfp/abm/main.h b/drivers/net/ethernet/netronome/nfp/abm/main.h
index 49749c60885e..48746c9c6224 100644
--- a/drivers/net/ethernet/netronome/nfp/abm/main.h
+++ b/drivers/net/ethernet/netronome/nfp/abm/main.h
@@ -247,7 +247,7 @@ int nfp_abm_setup_tc_mq(struct net_device *netdev, struct nfp_abm_link *alink,
int nfp_abm_setup_tc_gred(struct net_device *netdev, struct nfp_abm_link *alink,
struct tc_gred_qopt_offload *opt);
int nfp_abm_setup_cls_block(struct net_device *netdev, struct nfp_repr *repr,
- struct tc_block_offload *opt);
+ struct flow_block_offload *opt);
int nfp_abm_ctrl_read_params(struct nfp_abm_link *alink);
int nfp_abm_ctrl_find_addrs(struct nfp_abm *abm);
diff --git a/drivers/net/ethernet/netronome/nfp/bpf/jit.c b/drivers/net/ethernet/netronome/nfp/bpf/jit.c
index d4bf0e694541..4054b70d7719 100644
--- a/drivers/net/ethernet/netronome/nfp/bpf/jit.c
+++ b/drivers/net/ethernet/netronome/nfp/bpf/jit.c
@@ -623,6 +623,13 @@ static void wrp_immed(struct nfp_prog *nfp_prog, swreg dst, u32 imm)
}
static void
+wrp_zext(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, u8 dst)
+{
+ if (meta->flags & FLAG_INSN_DO_ZEXT)
+ wrp_immed(nfp_prog, reg_both(dst + 1), 0);
+}
+
+static void
wrp_immed_relo(struct nfp_prog *nfp_prog, swreg dst, u32 imm,
enum nfp_relo_type relo)
{
@@ -858,7 +865,8 @@ static int nfp_cpp_memcpy(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
}
static int
-data_ld(struct nfp_prog *nfp_prog, swreg offset, u8 dst_gpr, int size)
+data_ld(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, swreg offset,
+ u8 dst_gpr, int size)
{
unsigned int i;
u16 shift, sz;
@@ -881,14 +889,15 @@ data_ld(struct nfp_prog *nfp_prog, swreg offset, u8 dst_gpr, int size)
wrp_mov(nfp_prog, reg_both(dst_gpr + i), reg_xfer(i));
if (i < 2)
- wrp_immed(nfp_prog, reg_both(dst_gpr + 1), 0);
+ wrp_zext(nfp_prog, meta, dst_gpr);
return 0;
}
static int
-data_ld_host_order(struct nfp_prog *nfp_prog, u8 dst_gpr,
- swreg lreg, swreg rreg, int size, enum cmd_mode mode)
+data_ld_host_order(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
+ u8 dst_gpr, swreg lreg, swreg rreg, int size,
+ enum cmd_mode mode)
{
unsigned int i;
u8 mask, sz;
@@ -911,33 +920,34 @@ data_ld_host_order(struct nfp_prog *nfp_prog, u8 dst_gpr,
wrp_mov(nfp_prog, reg_both(dst_gpr + i), reg_xfer(i));
if (i < 2)
- wrp_immed(nfp_prog, reg_both(dst_gpr + 1), 0);
+ wrp_zext(nfp_prog, meta, dst_gpr);
return 0;
}
static int
-data_ld_host_order_addr32(struct nfp_prog *nfp_prog, u8 src_gpr, swreg offset,
- u8 dst_gpr, u8 size)
+data_ld_host_order_addr32(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
+ u8 src_gpr, swreg offset, u8 dst_gpr, u8 size)
{
- return data_ld_host_order(nfp_prog, dst_gpr, reg_a(src_gpr), offset,
- size, CMD_MODE_32b);
+ return data_ld_host_order(nfp_prog, meta, dst_gpr, reg_a(src_gpr),
+ offset, size, CMD_MODE_32b);
}
static int
-data_ld_host_order_addr40(struct nfp_prog *nfp_prog, u8 src_gpr, swreg offset,
- u8 dst_gpr, u8 size)
+data_ld_host_order_addr40(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
+ u8 src_gpr, swreg offset, u8 dst_gpr, u8 size)
{
swreg rega, regb;
addr40_offset(nfp_prog, src_gpr, offset, &rega, &regb);
- return data_ld_host_order(nfp_prog, dst_gpr, rega, regb,
+ return data_ld_host_order(nfp_prog, meta, dst_gpr, rega, regb,
size, CMD_MODE_40b_BA);
}
static int
-construct_data_ind_ld(struct nfp_prog *nfp_prog, u16 offset, u16 src, u8 size)
+construct_data_ind_ld(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
+ u16 offset, u16 src, u8 size)
{
swreg tmp_reg;
@@ -953,10 +963,12 @@ construct_data_ind_ld(struct nfp_prog *nfp_prog, u16 offset, u16 src, u8 size)
emit_br_relo(nfp_prog, BR_BLO, BR_OFF_RELO, 0, RELO_BR_GO_ABORT);
/* Load data */
- return data_ld(nfp_prog, imm_b(nfp_prog), 0, size);
+ return data_ld(nfp_prog, meta, imm_b(nfp_prog), 0, size);
}
-static int construct_data_ld(struct nfp_prog *nfp_prog, u16 offset, u8 size)
+static int
+construct_data_ld(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
+ u16 offset, u8 size)
{
swreg tmp_reg;
@@ -967,7 +979,7 @@ static int construct_data_ld(struct nfp_prog *nfp_prog, u16 offset, u8 size)
/* Load data */
tmp_reg = re_load_imm_any(nfp_prog, offset, imm_b(nfp_prog));
- return data_ld(nfp_prog, tmp_reg, 0, size);
+ return data_ld(nfp_prog, meta, tmp_reg, 0, size);
}
static int
@@ -1204,7 +1216,7 @@ mem_op_stack(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
}
if (clr_gpr && size < 8)
- wrp_immed(nfp_prog, reg_both(gpr + 1), 0);
+ wrp_zext(nfp_prog, meta, gpr);
while (size) {
u32 slice_end;
@@ -1305,9 +1317,10 @@ wrp_alu32_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
enum alu_op alu_op)
{
const struct bpf_insn *insn = &meta->insn;
+ u8 dst = insn->dst_reg * 2;
- wrp_alu_imm(nfp_prog, insn->dst_reg * 2, alu_op, insn->imm);
- wrp_immed(nfp_prog, reg_both(insn->dst_reg * 2 + 1), 0);
+ wrp_alu_imm(nfp_prog, dst, alu_op, insn->imm);
+ wrp_zext(nfp_prog, meta, dst);
return 0;
}
@@ -1319,7 +1332,7 @@ wrp_alu32_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
u8 dst = meta->insn.dst_reg * 2, src = meta->insn.src_reg * 2;
emit_alu(nfp_prog, reg_both(dst), reg_a(dst), alu_op, reg_b(src));
- wrp_immed(nfp_prog, reg_both(meta->insn.dst_reg * 2 + 1), 0);
+ wrp_zext(nfp_prog, meta, dst);
return 0;
}
@@ -2396,12 +2409,14 @@ static int neg_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
u8 dst = meta->insn.dst_reg * 2;
emit_alu(nfp_prog, reg_both(dst), reg_imm(0), ALU_OP_SUB, reg_b(dst));
- wrp_immed(nfp_prog, reg_both(meta->insn.dst_reg * 2 + 1), 0);
+ wrp_zext(nfp_prog, meta, dst);
return 0;
}
-static int __ashr_imm(struct nfp_prog *nfp_prog, u8 dst, u8 shift_amt)
+static int
+__ashr_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, u8 dst,
+ u8 shift_amt)
{
if (shift_amt) {
/* Set signedness bit (MSB of result). */
@@ -2410,7 +2425,7 @@ static int __ashr_imm(struct nfp_prog *nfp_prog, u8 dst, u8 shift_amt)
emit_shf(nfp_prog, reg_both(dst), reg_none(), SHF_OP_ASHR,
reg_b(dst), SHF_SC_R_SHF, shift_amt);
}
- wrp_immed(nfp_prog, reg_both(dst + 1), 0);
+ wrp_zext(nfp_prog, meta, dst);
return 0;
}
@@ -2425,7 +2440,7 @@ static int ashr_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
umin = meta->umin_src;
umax = meta->umax_src;
if (umin == umax)
- return __ashr_imm(nfp_prog, dst, umin);
+ return __ashr_imm(nfp_prog, meta, dst, umin);
src = insn->src_reg * 2;
/* NOTE: the first insn will set both indirect shift amount (source A)
@@ -2434,7 +2449,7 @@ static int ashr_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
emit_alu(nfp_prog, reg_none(), reg_a(src), ALU_OP_OR, reg_b(dst));
emit_shf_indir(nfp_prog, reg_both(dst), reg_none(), SHF_OP_ASHR,
reg_b(dst), SHF_SC_R_SHF);
- wrp_immed(nfp_prog, reg_both(dst + 1), 0);
+ wrp_zext(nfp_prog, meta, dst);
return 0;
}
@@ -2444,15 +2459,17 @@ static int ashr_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
const struct bpf_insn *insn = &meta->insn;
u8 dst = insn->dst_reg * 2;
- return __ashr_imm(nfp_prog, dst, insn->imm);
+ return __ashr_imm(nfp_prog, meta, dst, insn->imm);
}
-static int __shr_imm(struct nfp_prog *nfp_prog, u8 dst, u8 shift_amt)
+static int
+__shr_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, u8 dst,
+ u8 shift_amt)
{
if (shift_amt)
emit_shf(nfp_prog, reg_both(dst), reg_none(), SHF_OP_NONE,
reg_b(dst), SHF_SC_R_SHF, shift_amt);
- wrp_immed(nfp_prog, reg_both(dst + 1), 0);
+ wrp_zext(nfp_prog, meta, dst);
return 0;
}
@@ -2461,7 +2478,7 @@ static int shr_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
const struct bpf_insn *insn = &meta->insn;
u8 dst = insn->dst_reg * 2;
- return __shr_imm(nfp_prog, dst, insn->imm);
+ return __shr_imm(nfp_prog, meta, dst, insn->imm);
}
static int shr_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
@@ -2474,22 +2491,24 @@ static int shr_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
umin = meta->umin_src;
umax = meta->umax_src;
if (umin == umax)
- return __shr_imm(nfp_prog, dst, umin);
+ return __shr_imm(nfp_prog, meta, dst, umin);
src = insn->src_reg * 2;
emit_alu(nfp_prog, reg_none(), reg_a(src), ALU_OP_OR, reg_imm(0));
emit_shf_indir(nfp_prog, reg_both(dst), reg_none(), SHF_OP_NONE,
reg_b(dst), SHF_SC_R_SHF);
- wrp_immed(nfp_prog, reg_both(dst + 1), 0);
+ wrp_zext(nfp_prog, meta, dst);
return 0;
}
-static int __shl_imm(struct nfp_prog *nfp_prog, u8 dst, u8 shift_amt)
+static int
+__shl_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, u8 dst,
+ u8 shift_amt)
{
if (shift_amt)
emit_shf(nfp_prog, reg_both(dst), reg_none(), SHF_OP_NONE,
reg_b(dst), SHF_SC_L_SHF, shift_amt);
- wrp_immed(nfp_prog, reg_both(dst + 1), 0);
+ wrp_zext(nfp_prog, meta, dst);
return 0;
}
@@ -2498,7 +2517,7 @@ static int shl_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
const struct bpf_insn *insn = &meta->insn;
u8 dst = insn->dst_reg * 2;
- return __shl_imm(nfp_prog, dst, insn->imm);
+ return __shl_imm(nfp_prog, meta, dst, insn->imm);
}
static int shl_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
@@ -2511,11 +2530,11 @@ static int shl_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
umin = meta->umin_src;
umax = meta->umax_src;
if (umin == umax)
- return __shl_imm(nfp_prog, dst, umin);
+ return __shl_imm(nfp_prog, meta, dst, umin);
src = insn->src_reg * 2;
shl_reg64_lt32_low(nfp_prog, dst, src);
- wrp_immed(nfp_prog, reg_both(dst + 1), 0);
+ wrp_zext(nfp_prog, meta, dst);
return 0;
}
@@ -2577,34 +2596,34 @@ static int imm_ld8(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
static int data_ld1(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
{
- return construct_data_ld(nfp_prog, meta->insn.imm, 1);
+ return construct_data_ld(nfp_prog, meta, meta->insn.imm, 1);
}
static int data_ld2(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
{
- return construct_data_ld(nfp_prog, meta->insn.imm, 2);
+ return construct_data_ld(nfp_prog, meta, meta->insn.imm, 2);
}
static int data_ld4(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
{
- return construct_data_ld(nfp_prog, meta->insn.imm, 4);
+ return construct_data_ld(nfp_prog, meta, meta->insn.imm, 4);
}
static int data_ind_ld1(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
{
- return construct_data_ind_ld(nfp_prog, meta->insn.imm,
+ return construct_data_ind_ld(nfp_prog, meta, meta->insn.imm,
meta->insn.src_reg * 2, 1);
}
static int data_ind_ld2(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
{
- return construct_data_ind_ld(nfp_prog, meta->insn.imm,
+ return construct_data_ind_ld(nfp_prog, meta, meta->insn.imm,
meta->insn.src_reg * 2, 2);
}
static int data_ind_ld4(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
{
- return construct_data_ind_ld(nfp_prog, meta->insn.imm,
+ return construct_data_ind_ld(nfp_prog, meta, meta->insn.imm,
meta->insn.src_reg * 2, 4);
}
@@ -2682,7 +2701,7 @@ mem_ldx_data(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
tmp_reg = re_load_imm_any(nfp_prog, meta->insn.off, imm_b(nfp_prog));
- return data_ld_host_order_addr32(nfp_prog, meta->insn.src_reg * 2,
+ return data_ld_host_order_addr32(nfp_prog, meta, meta->insn.src_reg * 2,
tmp_reg, meta->insn.dst_reg * 2, size);
}
@@ -2694,7 +2713,7 @@ mem_ldx_emem(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
tmp_reg = re_load_imm_any(nfp_prog, meta->insn.off, imm_b(nfp_prog));
- return data_ld_host_order_addr40(nfp_prog, meta->insn.src_reg * 2,
+ return data_ld_host_order_addr40(nfp_prog, meta, meta->insn.src_reg * 2,
tmp_reg, meta->insn.dst_reg * 2, size);
}
@@ -2755,7 +2774,7 @@ mem_ldx_data_from_pktcache_unaligned(struct nfp_prog *nfp_prog,
wrp_reg_subpart(nfp_prog, dst_lo, src_lo, len_lo, off);
if (!len_mid) {
- wrp_immed(nfp_prog, dst_hi, 0);
+ wrp_zext(nfp_prog, meta, dst_gpr);
return 0;
}
@@ -2763,7 +2782,7 @@ mem_ldx_data_from_pktcache_unaligned(struct nfp_prog *nfp_prog,
if (size <= REG_WIDTH) {
wrp_reg_or_subpart(nfp_prog, dst_lo, src_mid, len_mid, len_lo);
- wrp_immed(nfp_prog, dst_hi, 0);
+ wrp_zext(nfp_prog, meta, dst_gpr);
} else {
swreg src_hi = reg_xfer(idx + 2);
@@ -2794,10 +2813,10 @@ mem_ldx_data_from_pktcache_aligned(struct nfp_prog *nfp_prog,
if (size < REG_WIDTH) {
wrp_reg_subpart(nfp_prog, dst_lo, src_lo, size, 0);
- wrp_immed(nfp_prog, dst_hi, 0);
+ wrp_zext(nfp_prog, meta, dst_gpr);
} else if (size == REG_WIDTH) {
wrp_mov(nfp_prog, dst_lo, src_lo);
- wrp_immed(nfp_prog, dst_hi, 0);
+ wrp_zext(nfp_prog, meta, dst_gpr);
} else {
swreg src_hi = reg_xfer(idx + 1);
diff --git a/drivers/net/ethernet/netronome/nfp/bpf/main.c b/drivers/net/ethernet/netronome/nfp/bpf/main.c
index 9c136da25221..1c9fb11470df 100644
--- a/drivers/net/ethernet/netronome/nfp/bpf/main.c
+++ b/drivers/net/ethernet/netronome/nfp/bpf/main.c
@@ -160,35 +160,19 @@ static int nfp_bpf_setup_tc_block_cb(enum tc_setup_type type,
return 0;
}
-static int nfp_bpf_setup_tc_block(struct net_device *netdev,
- struct tc_block_offload *f)
-{
- struct nfp_net *nn = netdev_priv(netdev);
-
- if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
- return -EOPNOTSUPP;
-
- switch (f->command) {
- case TC_BLOCK_BIND:
- return tcf_block_cb_register(f->block,
- nfp_bpf_setup_tc_block_cb,
- nn, nn, f->extack);
- case TC_BLOCK_UNBIND:
- tcf_block_cb_unregister(f->block,
- nfp_bpf_setup_tc_block_cb,
- nn);
- return 0;
- default:
- return -EOPNOTSUPP;
- }
-}
+static LIST_HEAD(nfp_bpf_block_cb_list);
static int nfp_bpf_setup_tc(struct nfp_app *app, struct net_device *netdev,
enum tc_setup_type type, void *type_data)
{
+ struct nfp_net *nn = netdev_priv(netdev);
+
switch (type) {
case TC_SETUP_BLOCK:
- return nfp_bpf_setup_tc_block(netdev, type_data);
+ return flow_block_cb_setup_simple(type_data,
+ &nfp_bpf_block_cb_list,
+ nfp_bpf_setup_tc_block_cb,
+ nn, nn, true);
default:
return -EOPNOTSUPP;
}
diff --git a/drivers/net/ethernet/netronome/nfp/bpf/main.h b/drivers/net/ethernet/netronome/nfp/bpf/main.h
index e54d1ac84df2..57d6ff51e980 100644
--- a/drivers/net/ethernet/netronome/nfp/bpf/main.h
+++ b/drivers/net/ethernet/netronome/nfp/bpf/main.h
@@ -238,6 +238,8 @@ struct nfp_bpf_reg_state {
#define FLAG_INSN_SKIP_PREC_DEPENDENT BIT(4)
/* Instruction is optimized by the verifier */
#define FLAG_INSN_SKIP_VERIFIER_OPT BIT(5)
+/* Instruction needs to zero extend to high 32-bit */
+#define FLAG_INSN_DO_ZEXT BIT(6)
#define FLAG_INSN_SKIP_MASK (FLAG_INSN_SKIP_NOOP | \
FLAG_INSN_SKIP_PREC_DEPENDENT | \
diff --git a/drivers/net/ethernet/netronome/nfp/bpf/verifier.c b/drivers/net/ethernet/netronome/nfp/bpf/verifier.c
index 36f56eb4cbe2..e92ee510fd52 100644
--- a/drivers/net/ethernet/netronome/nfp/bpf/verifier.c
+++ b/drivers/net/ethernet/netronome/nfp/bpf/verifier.c
@@ -744,6 +744,17 @@ continue_subprog:
goto continue_subprog;
}
+static void nfp_bpf_insn_flag_zext(struct nfp_prog *nfp_prog,
+ struct bpf_insn_aux_data *aux)
+{
+ struct nfp_insn_meta *meta;
+
+ list_for_each_entry(meta, &nfp_prog->insns, l) {
+ if (aux[meta->n].zext_dst)
+ meta->flags |= FLAG_INSN_DO_ZEXT;
+ }
+}
+
int nfp_bpf_finalize(struct bpf_verifier_env *env)
{
struct bpf_subprog_info *info;
@@ -784,6 +795,7 @@ int nfp_bpf_finalize(struct bpf_verifier_env *env)
return -EOPNOTSUPP;
}
+ nfp_bpf_insn_flag_zext(nfp_prog, env->insn_aux_data);
return 0;
}
diff --git a/drivers/net/ethernet/netronome/nfp/ccm.c b/drivers/net/ethernet/netronome/nfp/ccm.c
index 94476e41e261..71afd111bae3 100644
--- a/drivers/net/ethernet/netronome/nfp/ccm.c
+++ b/drivers/net/ethernet/netronome/nfp/ccm.c
@@ -7,9 +7,6 @@
#include "nfp_app.h"
#include "nfp_net.h"
-#define NFP_CCM_TYPE_REPLY_BIT 7
-#define __NFP_CCM_REPLY(req) (BIT(NFP_CCM_TYPE_REPLY_BIT) | (req))
-
#define ccm_warn(app, msg...) nn_dp_warn(&(app)->ctrl->dp, msg)
#define NFP_CCM_TAG_ALLOC_SPAN (U16_MAX / 4)
diff --git a/drivers/net/ethernet/netronome/nfp/ccm.h b/drivers/net/ethernet/netronome/nfp/ccm.h
index ac963b128203..a460c75522be 100644
--- a/drivers/net/ethernet/netronome/nfp/ccm.h
+++ b/drivers/net/ethernet/netronome/nfp/ccm.h
@@ -9,6 +9,7 @@
#include <linux/wait.h>
struct nfp_app;
+struct nfp_net;
/* Firmware ABI */
@@ -21,15 +22,27 @@ enum nfp_ccm_type {
NFP_CCM_TYPE_BPF_MAP_GETNEXT = 6,
NFP_CCM_TYPE_BPF_MAP_GETFIRST = 7,
NFP_CCM_TYPE_BPF_BPF_EVENT = 8,
+ NFP_CCM_TYPE_CRYPTO_RESET = 9,
+ NFP_CCM_TYPE_CRYPTO_ADD = 10,
+ NFP_CCM_TYPE_CRYPTO_DEL = 11,
+ NFP_CCM_TYPE_CRYPTO_UPDATE = 12,
__NFP_CCM_TYPE_MAX,
};
#define NFP_CCM_ABI_VERSION 1
+#define NFP_CCM_TYPE_REPLY_BIT 7
+#define __NFP_CCM_REPLY(req) (BIT(NFP_CCM_TYPE_REPLY_BIT) | (req))
+
struct nfp_ccm_hdr {
- u8 type;
- u8 ver;
- __be16 tag;
+ union {
+ struct {
+ u8 type;
+ u8 ver;
+ __be16 tag;
+ };
+ __be32 raw;
+ };
};
static inline u8 nfp_ccm_get_type(struct sk_buff *skb)
@@ -41,15 +54,31 @@ static inline u8 nfp_ccm_get_type(struct sk_buff *skb)
return hdr->type;
}
-static inline unsigned int nfp_ccm_get_tag(struct sk_buff *skb)
+static inline __be16 __nfp_ccm_get_tag(struct sk_buff *skb)
{
struct nfp_ccm_hdr *hdr;
hdr = (struct nfp_ccm_hdr *)skb->data;
- return be16_to_cpu(hdr->tag);
+ return hdr->tag;
+}
+
+static inline unsigned int nfp_ccm_get_tag(struct sk_buff *skb)
+{
+ return be16_to_cpu(__nfp_ccm_get_tag(skb));
}
+#define NFP_NET_MBOX_TLV_TYPE GENMASK(31, 16)
+#define NFP_NET_MBOX_TLV_LEN GENMASK(15, 0)
+
+enum nfp_ccm_mbox_tlv_type {
+ NFP_NET_MBOX_TLV_TYPE_UNKNOWN = 0,
+ NFP_NET_MBOX_TLV_TYPE_END = 1,
+ NFP_NET_MBOX_TLV_TYPE_MSG = 2,
+ NFP_NET_MBOX_TLV_TYPE_MSG_NOSUP = 3,
+ NFP_NET_MBOX_TLV_TYPE_RESV = 4,
+};
+
/* Implementation */
/**
@@ -71,7 +100,7 @@ struct nfp_ccm {
u16 tag_alloc_last;
struct sk_buff_head replies;
- struct wait_queue_head wq;
+ wait_queue_head_t wq;
};
int nfp_ccm_init(struct nfp_ccm *ccm, struct nfp_app *app);
@@ -80,4 +109,23 @@ void nfp_ccm_rx(struct nfp_ccm *ccm, struct sk_buff *skb);
struct sk_buff *
nfp_ccm_communicate(struct nfp_ccm *ccm, struct sk_buff *skb,
enum nfp_ccm_type type, unsigned int reply_size);
+
+int nfp_ccm_mbox_alloc(struct nfp_net *nn);
+void nfp_ccm_mbox_free(struct nfp_net *nn);
+int nfp_ccm_mbox_init(struct nfp_net *nn);
+void nfp_ccm_mbox_clean(struct nfp_net *nn);
+bool nfp_ccm_mbox_fits(struct nfp_net *nn, unsigned int size);
+struct sk_buff *
+nfp_ccm_mbox_msg_alloc(struct nfp_net *nn, unsigned int req_size,
+ unsigned int reply_size, gfp_t flags);
+int __nfp_ccm_mbox_communicate(struct nfp_net *nn, struct sk_buff *skb,
+ enum nfp_ccm_type type,
+ unsigned int reply_size,
+ unsigned int max_reply_size, bool critical);
+int nfp_ccm_mbox_communicate(struct nfp_net *nn, struct sk_buff *skb,
+ enum nfp_ccm_type type,
+ unsigned int reply_size,
+ unsigned int max_reply_size);
+int nfp_ccm_mbox_post(struct nfp_net *nn, struct sk_buff *skb,
+ enum nfp_ccm_type type, unsigned int max_reply_size);
#endif
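The mailbox helpers declared here operate on an skb that already contains the control message body; nfp_ccm_mbox_communicate() blocks until the firmware reply has been written back into the same skb, or frees the skb on error. A minimal caller sketch, assuming a hypothetical fixed-size request/reply pair (struct my_req, struct my_reply and MY_CCM_TYPE are placeholders, not part of this patch):

#include <linux/netdevice.h>
#include <linux/skbuff.h>

#include "ccm.h"
#include "nfp_net.h"

/* Placeholder message type for the sketch; real callers use their own */
#define MY_CCM_TYPE	NFP_CCM_TYPE_CRYPTO_RESET

struct my_req {			/* hypothetical request layout */
	struct nfp_ccm_hdr hdr;	/* filled in by the CCM mailbox code */
	__be32 arg;
};

struct my_reply {		/* hypothetical reply layout */
	struct nfp_ccm_hdr hdr;
	__be32 error;
};

static int my_mbox_request(struct nfp_net *nn, u32 arg)
{
	struct my_reply *reply;
	struct my_req *req;
	struct sk_buff *skb;
	int err;

	/* Allocates room for max(request, reply) and reserves the request */
	skb = nfp_ccm_mbox_msg_alloc(nn, sizeof(*req), sizeof(*reply),
				     GFP_KERNEL);
	if (!skb)
		return -ENOMEM;

	req = (struct my_req *)skb->data;
	req->arg = cpu_to_be32(arg);

	/* Consumes the skb on error; on success the reply overwrites it */
	err = nfp_ccm_mbox_communicate(nn, skb, MY_CCM_TYPE,
				       sizeof(*reply), sizeof(*reply));
	if (err)
		return err;

	reply = (struct my_reply *)skb->data;
	err = reply->error ? -EIO : 0;
	dev_consume_skb_any(skb);
	return err;
}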
diff --git a/drivers/net/ethernet/netronome/nfp/ccm_mbox.c b/drivers/net/ethernet/netronome/nfp/ccm_mbox.c
new file mode 100644
index 000000000000..f0783aa9e66e
--- /dev/null
+++ b/drivers/net/ethernet/netronome/nfp/ccm_mbox.c
@@ -0,0 +1,743 @@
+// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+/* Copyright (C) 2019 Netronome Systems, Inc. */
+
+#include <linux/bitfield.h>
+#include <linux/io.h>
+#include <linux/skbuff.h>
+
+#include "ccm.h"
+#include "nfp_net.h"
+
+/* CCM messages via the mailbox. CMSGs get wrapped into simple TLVs
+ * and copied into the mailbox. Multiple messages can be copied to
+ * form a batch. Threads come in with CMSG formed in an skb, then
+ * enqueue that skb onto the request queue. If threads skb is first
+ * in queue this thread will handle the mailbox operation. It copies
+ * up to 64 messages into the mailbox (making sure that both requests
+ * and replies will fit. After FW is done processing the batch it
+ * copies the data out and wakes waiting threads.
+ * If a thread is waiting it either gets its the message completed
+ * (response is copied into the same skb as the request, overwriting
+ * it), or becomes the first in queue.
+ * Completions and next-to-run are signaled via the control buffer
+ * to limit potential cache line bounces.
+ */
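+
+/* Illustrative layout of one batch in the mailbox (a sketch; offsets
+ * are relative to tlv_caps.mbox_off + NFP_NET_CFG_MBOX_SIMPLE_VAL):
+ *
+ *   [ MSG TLV hdr  ][ cmsg #1 request, padded to 4B ]
+ *   [ RESV TLV hdr ][ extra room if the reply is larger than the request ]
+ *   [ MSG TLV hdr  ][ cmsg #2 request, padded to 4B ]
+ *   [ END TLV hdr  ]
+ *
+ * Once the mailbox reconfig completes, FW has replaced each MSG TLV body
+ * with its reply and nfp_ccm_mbox_copy_out() matches replies back to the
+ * queued requests by CCM tag.
+ */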
+
+#define NFP_CCM_MBOX_BATCH_LIMIT 64
+#define NFP_CCM_TIMEOUT (NFP_NET_POLL_TIMEOUT * 1000)
+#define NFP_CCM_MAX_QLEN 1024
+
+enum nfp_net_mbox_cmsg_state {
+ NFP_NET_MBOX_CMSG_STATE_QUEUED,
+ NFP_NET_MBOX_CMSG_STATE_NEXT,
+ NFP_NET_MBOX_CMSG_STATE_BUSY,
+ NFP_NET_MBOX_CMSG_STATE_REPLY_FOUND,
+ NFP_NET_MBOX_CMSG_STATE_DONE,
+};
+
+/**
+ * struct nfp_ccm_mbox_cmsg_cb - CCM mailbox specific info
+ * @state: processing state (/stage) of the message
+ * @err: error encountered during processing if any
+ * @max_len: max(request_len, reply_len)
+ * @exp_reply: expected reply length (0 means don't validate)
+ * @posted: the message was posted and nobody waits for the reply
+ */
+struct nfp_ccm_mbox_cmsg_cb {
+ enum nfp_net_mbox_cmsg_state state;
+ int err;
+ unsigned int max_len;
+ unsigned int exp_reply;
+ bool posted;
+};
+
+static u32 nfp_ccm_mbox_max_msg(struct nfp_net *nn)
+{
+ return round_down(nn->tlv_caps.mbox_len, 4) -
+ NFP_NET_CFG_MBOX_SIMPLE_VAL - /* common mbox command header */
+ 4 * 2; /* Msg TLV plus End TLV headers */
+}
+
+static void
+nfp_ccm_mbox_msg_init(struct sk_buff *skb, unsigned int exp_reply, int max_len)
+{
+ struct nfp_ccm_mbox_cmsg_cb *cb = (void *)skb->cb;
+
+ cb->state = NFP_NET_MBOX_CMSG_STATE_QUEUED;
+ cb->err = 0;
+ cb->max_len = max_len;
+ cb->exp_reply = exp_reply;
+ cb->posted = false;
+}
+
+static int nfp_ccm_mbox_maxlen(const struct sk_buff *skb)
+{
+ struct nfp_ccm_mbox_cmsg_cb *cb = (void *)skb->cb;
+
+ return cb->max_len;
+}
+
+static bool nfp_ccm_mbox_done(struct sk_buff *skb)
+{
+ struct nfp_ccm_mbox_cmsg_cb *cb = (void *)skb->cb;
+
+ return cb->state == NFP_NET_MBOX_CMSG_STATE_DONE;
+}
+
+static bool nfp_ccm_mbox_in_progress(struct sk_buff *skb)
+{
+ struct nfp_ccm_mbox_cmsg_cb *cb = (void *)skb->cb;
+
+ return cb->state != NFP_NET_MBOX_CMSG_STATE_QUEUED &&
+ cb->state != NFP_NET_MBOX_CMSG_STATE_NEXT;
+}
+
+static void nfp_ccm_mbox_set_busy(struct sk_buff *skb)
+{
+ struct nfp_ccm_mbox_cmsg_cb *cb = (void *)skb->cb;
+
+ cb->state = NFP_NET_MBOX_CMSG_STATE_BUSY;
+}
+
+static bool nfp_ccm_mbox_is_posted(struct sk_buff *skb)
+{
+ struct nfp_ccm_mbox_cmsg_cb *cb = (void *)skb->cb;
+
+ return cb->posted;
+}
+
+static void nfp_ccm_mbox_mark_posted(struct sk_buff *skb)
+{
+ struct nfp_ccm_mbox_cmsg_cb *cb = (void *)skb->cb;
+
+ cb->posted = true;
+}
+
+static bool nfp_ccm_mbox_is_first(struct nfp_net *nn, struct sk_buff *skb)
+{
+ return skb_queue_is_first(&nn->mbox_cmsg.queue, skb);
+}
+
+static bool nfp_ccm_mbox_should_run(struct nfp_net *nn, struct sk_buff *skb)
+{
+ struct nfp_ccm_mbox_cmsg_cb *cb = (void *)skb->cb;
+
+ return cb->state == NFP_NET_MBOX_CMSG_STATE_NEXT;
+}
+
+static void nfp_ccm_mbox_mark_next_runner(struct nfp_net *nn)
+{
+ struct nfp_ccm_mbox_cmsg_cb *cb;
+ struct sk_buff *skb;
+
+ skb = skb_peek(&nn->mbox_cmsg.queue);
+ if (!skb)
+ return;
+
+ cb = (void *)skb->cb;
+ cb->state = NFP_NET_MBOX_CMSG_STATE_NEXT;
+ if (cb->posted)
+ queue_work(nn->mbox_cmsg.workq, &nn->mbox_cmsg.runq_work);
+}
+
+static void
+nfp_ccm_mbox_write_tlv(struct nfp_net *nn, u32 off, u32 type, u32 len)
+{
+ nn_writel(nn, off,
+ FIELD_PREP(NFP_NET_MBOX_TLV_TYPE, type) |
+ FIELD_PREP(NFP_NET_MBOX_TLV_LEN, len));
+}
+
+static void nfp_ccm_mbox_copy_in(struct nfp_net *nn, struct sk_buff *last)
+{
+ struct sk_buff *skb;
+ int reserve, i, cnt;
+ __be32 *data;
+ u32 off, len;
+
+ off = nn->tlv_caps.mbox_off + NFP_NET_CFG_MBOX_SIMPLE_VAL;
+ skb = __skb_peek(&nn->mbox_cmsg.queue);
+ while (true) {
+ nfp_ccm_mbox_write_tlv(nn, off, NFP_NET_MBOX_TLV_TYPE_MSG,
+ skb->len);
+ off += 4;
+
+ /* Write data word by word, skb->data should be aligned */
+ data = (__be32 *)skb->data;
+ cnt = skb->len / 4;
+ for (i = 0 ; i < cnt; i++) {
+ nn_writel(nn, off, be32_to_cpu(data[i]));
+ off += 4;
+ }
+ if (skb->len & 3) {
+ __be32 tmp = 0;
+
+ memcpy(&tmp, &data[i], skb->len & 3);
+ nn_writel(nn, off, be32_to_cpu(tmp));
+ off += 4;
+ }
+
+ /* Reserve space if reply is bigger */
+ len = round_up(skb->len, 4);
+ reserve = nfp_ccm_mbox_maxlen(skb) - len;
+ if (reserve > 0) {
+ nfp_ccm_mbox_write_tlv(nn, off,
+ NFP_NET_MBOX_TLV_TYPE_RESV,
+ reserve);
+ off += 4 + reserve;
+ }
+
+ if (skb == last)
+ break;
+ skb = skb_queue_next(&nn->mbox_cmsg.queue, skb);
+ }
+
+ nfp_ccm_mbox_write_tlv(nn, off, NFP_NET_MBOX_TLV_TYPE_END, 0);
+}
+
+static struct sk_buff *
+nfp_ccm_mbox_find_req(struct nfp_net *nn, __be16 tag, struct sk_buff *last)
+{
+ struct sk_buff *skb;
+
+ skb = __skb_peek(&nn->mbox_cmsg.queue);
+ while (true) {
+ if (__nfp_ccm_get_tag(skb) == tag)
+ return skb;
+
+ if (skb == last)
+ return NULL;
+ skb = skb_queue_next(&nn->mbox_cmsg.queue, skb);
+ }
+}
+
+static void nfp_ccm_mbox_copy_out(struct nfp_net *nn, struct sk_buff *last)
+{
+ struct nfp_ccm_mbox_cmsg_cb *cb;
+ u8 __iomem *data, *end;
+ struct sk_buff *skb;
+
+ data = nn->dp.ctrl_bar + nn->tlv_caps.mbox_off +
+ NFP_NET_CFG_MBOX_SIMPLE_VAL;
+ end = data + nn->tlv_caps.mbox_len;
+
+ while (true) {
+ unsigned int length, offset, type;
+ struct nfp_ccm_hdr hdr;
+ u32 tlv_hdr;
+
+ tlv_hdr = readl(data);
+ type = FIELD_GET(NFP_NET_MBOX_TLV_TYPE, tlv_hdr);
+ length = FIELD_GET(NFP_NET_MBOX_TLV_LEN, tlv_hdr);
+ offset = data - nn->dp.ctrl_bar;
+
+ /* Advance past the header */
+ data += 4;
+
+ if (data + length > end) {
+ nn_dp_warn(&nn->dp, "mailbox oversized TLV type:%d offset:%u len:%u\n",
+ type, offset, length);
+ break;
+ }
+
+ if (type == NFP_NET_MBOX_TLV_TYPE_END)
+ break;
+ if (type == NFP_NET_MBOX_TLV_TYPE_RESV)
+ goto next_tlv;
+ if (type != NFP_NET_MBOX_TLV_TYPE_MSG &&
+ type != NFP_NET_MBOX_TLV_TYPE_MSG_NOSUP) {
+ nn_dp_warn(&nn->dp, "mailbox unknown TLV type:%d offset:%u len:%u\n",
+ type, offset, length);
+ break;
+ }
+
+ if (length < 4) {
+ nn_dp_warn(&nn->dp, "mailbox msg too short to contain header TLV type:%d offset:%u len:%u\n",
+ type, offset, length);
+ break;
+ }
+
+ hdr.raw = cpu_to_be32(readl(data));
+
+ skb = nfp_ccm_mbox_find_req(nn, hdr.tag, last);
+ if (!skb) {
+ nn_dp_warn(&nn->dp, "mailbox request not found:%u\n",
+ be16_to_cpu(hdr.tag));
+ break;
+ }
+ cb = (void *)skb->cb;
+
+ if (type == NFP_NET_MBOX_TLV_TYPE_MSG_NOSUP) {
+ nn_dp_warn(&nn->dp,
+ "mailbox msg not supported type:%d\n",
+ nfp_ccm_get_type(skb));
+ cb->err = -EIO;
+ goto next_tlv;
+ }
+
+ if (hdr.type != __NFP_CCM_REPLY(nfp_ccm_get_type(skb))) {
+ nn_dp_warn(&nn->dp, "mailbox msg reply wrong type:%u expected:%lu\n",
+ hdr.type,
+ __NFP_CCM_REPLY(nfp_ccm_get_type(skb)));
+ cb->err = -EIO;
+ goto next_tlv;
+ }
+ if (cb->exp_reply && length != cb->exp_reply) {
+ nn_dp_warn(&nn->dp, "mailbox msg reply wrong size type:%u expected:%u have:%u\n",
+ hdr.type, cb->exp_reply, length);
+ cb->err = -EIO;
+ goto next_tlv;
+ }
+ if (length > cb->max_len) {
+ nn_dp_warn(&nn->dp, "mailbox msg oversized reply type:%u max:%u have:%u\n",
+ hdr.type, cb->max_len, length);
+ cb->err = -EIO;
+ goto next_tlv;
+ }
+
+ if (!cb->posted) {
+ __be32 *skb_data;
+ int i, cnt;
+
+ if (length <= skb->len)
+ __skb_trim(skb, length);
+ else
+ skb_put(skb, length - skb->len);
+
+ /* We overcopy here slightly, but that's okay,
+ * the skb is large enough, and the garbage will
+ * be ignored (beyond skb->len).
+ */
+ skb_data = (__be32 *)skb->data;
+ memcpy(skb_data, &hdr, 4);
+
+ cnt = DIV_ROUND_UP(length, 4);
+ for (i = 1 ; i < cnt; i++)
+ skb_data[i] = cpu_to_be32(readl(data + i * 4));
+ }
+
+ cb->state = NFP_NET_MBOX_CMSG_STATE_REPLY_FOUND;
+next_tlv:
+ data += round_up(length, 4);
+ if (data + 4 > end) {
+ nn_dp_warn(&nn->dp,
+ "reached end of MBOX without END TLV\n");
+ break;
+ }
+ }
+
+ smp_wmb(); /* order the skb->data vs. cb->state */
+ spin_lock_bh(&nn->mbox_cmsg.queue.lock);
+ do {
+ skb = __skb_dequeue(&nn->mbox_cmsg.queue);
+ cb = (void *)skb->cb;
+
+ if (cb->state != NFP_NET_MBOX_CMSG_STATE_REPLY_FOUND) {
+ cb->err = -ENOENT;
+ smp_wmb(); /* order the cb->err vs. cb->state */
+ }
+ cb->state = NFP_NET_MBOX_CMSG_STATE_DONE;
+
+ if (cb->posted) {
+ if (cb->err)
+ nn_dp_warn(&nn->dp,
+ "mailbox posted msg failed type:%u err:%d\n",
+ nfp_ccm_get_type(skb), cb->err);
+ dev_consume_skb_any(skb);
+ }
+ } while (skb != last);
+
+ nfp_ccm_mbox_mark_next_runner(nn);
+ spin_unlock_bh(&nn->mbox_cmsg.queue.lock);
+}
+
+static void
+nfp_ccm_mbox_mark_all_err(struct nfp_net *nn, struct sk_buff *last, int err)
+{
+ struct nfp_ccm_mbox_cmsg_cb *cb;
+ struct sk_buff *skb;
+
+ spin_lock_bh(&nn->mbox_cmsg.queue.lock);
+ do {
+ skb = __skb_dequeue(&nn->mbox_cmsg.queue);
+ cb = (void *)skb->cb;
+
+ cb->err = err;
+ smp_wmb(); /* order the cb->err vs. cb->state */
+ cb->state = NFP_NET_MBOX_CMSG_STATE_DONE;
+ } while (skb != last);
+
+ nfp_ccm_mbox_mark_next_runner(nn);
+ spin_unlock_bh(&nn->mbox_cmsg.queue.lock);
+}
+
+static void nfp_ccm_mbox_run_queue_unlock(struct nfp_net *nn)
+ __releases(&nn->mbox_cmsg.queue.lock)
+{
+ int space = nn->tlv_caps.mbox_len - NFP_NET_CFG_MBOX_SIMPLE_VAL;
+ struct sk_buff *skb, *last;
+ int cnt, err;
+
+ space -= 4; /* for End TLV */
+
+ /* First skb must fit, because it's ours and we checked it fits */
+ cnt = 1;
+ last = skb = __skb_peek(&nn->mbox_cmsg.queue);
+ space -= 4 + nfp_ccm_mbox_maxlen(skb);
+
+ while (!skb_queue_is_last(&nn->mbox_cmsg.queue, last)) {
+ skb = skb_queue_next(&nn->mbox_cmsg.queue, last);
+ space -= 4 + nfp_ccm_mbox_maxlen(skb);
+ if (space < 0)
+ break;
+ last = skb;
+ nfp_ccm_mbox_set_busy(skb);
+ cnt++;
+ if (cnt == NFP_CCM_MBOX_BATCH_LIMIT)
+ break;
+ }
+ spin_unlock_bh(&nn->mbox_cmsg.queue.lock);
+
+ /* Now we own all skbs marked in progress; new requests may arrive
+ * at the end of the queue.
+ */
+
+ nn_ctrl_bar_lock(nn);
+
+ nfp_ccm_mbox_copy_in(nn, last);
+
+ err = nfp_net_mbox_reconfig(nn, NFP_NET_CFG_MBOX_CMD_TLV_CMSG);
+ if (!err)
+ nfp_ccm_mbox_copy_out(nn, last);
+ else
+ nfp_ccm_mbox_mark_all_err(nn, last, -EIO);
+
+ nn_ctrl_bar_unlock(nn);
+
+ wake_up_all(&nn->mbox_cmsg.wq);
+}
+
+static int nfp_ccm_mbox_skb_return(struct sk_buff *skb)
+{
+ struct nfp_ccm_mbox_cmsg_cb *cb = (void *)skb->cb;
+
+ if (cb->err)
+ dev_kfree_skb_any(skb);
+ return cb->err;
+}
+
+/* If the wait timed out but the command is already in progress we have
+ * to wait until it finishes. Runners have ownership of the skbs marked
+ * as busy.
+ */
+static int
+nfp_ccm_mbox_unlink_unlock(struct nfp_net *nn, struct sk_buff *skb,
+ enum nfp_ccm_type type)
+ __releases(&nn->mbox_cmsg.queue.lock)
+{
+ bool was_first;
+
+ if (nfp_ccm_mbox_in_progress(skb)) {
+ spin_unlock_bh(&nn->mbox_cmsg.queue.lock);
+
+ wait_event(nn->mbox_cmsg.wq, nfp_ccm_mbox_done(skb));
+ smp_rmb(); /* pairs with smp_wmb() after data is written */
+ return nfp_ccm_mbox_skb_return(skb);
+ }
+
+ was_first = nfp_ccm_mbox_should_run(nn, skb);
+ __skb_unlink(skb, &nn->mbox_cmsg.queue);
+ if (was_first)
+ nfp_ccm_mbox_mark_next_runner(nn);
+
+ spin_unlock_bh(&nn->mbox_cmsg.queue.lock);
+
+ if (was_first)
+ wake_up_all(&nn->mbox_cmsg.wq);
+
+ nn_dp_warn(&nn->dp, "time out waiting for mbox response to 0x%02x\n",
+ type);
+ return -ETIMEDOUT;
+}
+
+static int
+nfp_ccm_mbox_msg_prepare(struct nfp_net *nn, struct sk_buff *skb,
+ enum nfp_ccm_type type,
+ unsigned int reply_size, unsigned int max_reply_size,
+ gfp_t flags)
+{
+ const unsigned int mbox_max = nfp_ccm_mbox_max_msg(nn);
+ unsigned int max_len;
+ ssize_t undersize;
+ int err;
+
+ if (unlikely(!(nn->tlv_caps.mbox_cmsg_types & BIT(type)))) {
+ nn_dp_warn(&nn->dp,
+ "message type %d not supported by mailbox\n", type);
+ return -EINVAL;
+ }
+
+ /* If the reply size is unknown, assume it will take the entire
+ * mailbox; callers should do their best to ensure this never
+ * happens.
+ */
+ if (!max_reply_size)
+ max_reply_size = mbox_max;
+ max_reply_size = round_up(max_reply_size, 4);
+
+ /* Make sure we can fit the entire reply into the skb,
+ * and that we don't have to slow down the mbox handler
+ * with allocations.
+ */
+ undersize = max_reply_size - (skb_end_pointer(skb) - skb->data);
+ if (undersize > 0) {
+ err = pskb_expand_head(skb, 0, undersize, flags);
+ if (err) {
+ nn_dp_warn(&nn->dp,
+ "can't allocate reply buffer for mailbox\n");
+ return err;
+ }
+ }
+
+ /* Make sure that request and response both fit into the mailbox */
+ max_len = max(max_reply_size, round_up(skb->len, 4));
+ if (max_len > mbox_max) {
+ nn_dp_warn(&nn->dp,
+ "message too big for tha mailbox: %u/%u vs %u\n",
+ skb->len, max_reply_size, mbox_max);
+ return -EMSGSIZE;
+ }
+
+ nfp_ccm_mbox_msg_init(skb, reply_size, max_len);
+
+ return 0;
+}
+
+static int
+nfp_ccm_mbox_msg_enqueue(struct nfp_net *nn, struct sk_buff *skb,
+ enum nfp_ccm_type type, bool critical)
+{
+ struct nfp_ccm_hdr *hdr;
+
+ assert_spin_locked(&nn->mbox_cmsg.queue.lock);
+
+ if (!critical && nn->mbox_cmsg.queue.qlen >= NFP_CCM_MAX_QLEN) {
+ nn_dp_warn(&nn->dp, "mailbox request queue too long\n");
+ return -EBUSY;
+ }
+
+ hdr = (void *)skb->data;
+ hdr->ver = NFP_CCM_ABI_VERSION;
+ hdr->type = type;
+ hdr->tag = cpu_to_be16(nn->mbox_cmsg.tag++);
+
+ __skb_queue_tail(&nn->mbox_cmsg.queue, skb);
+
+ return 0;
+}
+
+int __nfp_ccm_mbox_communicate(struct nfp_net *nn, struct sk_buff *skb,
+ enum nfp_ccm_type type,
+ unsigned int reply_size,
+ unsigned int max_reply_size, bool critical)
+{
+ int err;
+
+ err = nfp_ccm_mbox_msg_prepare(nn, skb, type, reply_size,
+ max_reply_size, GFP_KERNEL);
+ if (err)
+ goto err_free_skb;
+
+ spin_lock_bh(&nn->mbox_cmsg.queue.lock);
+
+ err = nfp_ccm_mbox_msg_enqueue(nn, skb, type, critical);
+ if (err)
+ goto err_unlock;
+
+ /* First in queue takes the mailbox lock and processes the batch */
+ if (!nfp_ccm_mbox_is_first(nn, skb)) {
+ bool to;
+
+ spin_unlock_bh(&nn->mbox_cmsg.queue.lock);
+
+ to = !wait_event_timeout(nn->mbox_cmsg.wq,
+ nfp_ccm_mbox_done(skb) ||
+ nfp_ccm_mbox_should_run(nn, skb),
+ msecs_to_jiffies(NFP_CCM_TIMEOUT));
+
+ /* fast path for those completed by another thread */
+ if (nfp_ccm_mbox_done(skb)) {
+ smp_rmb(); /* pairs with wmb after data is written */
+ return nfp_ccm_mbox_skb_return(skb);
+ }
+
+ spin_lock_bh(&nn->mbox_cmsg.queue.lock);
+
+ if (!nfp_ccm_mbox_is_first(nn, skb)) {
+ WARN_ON(!to);
+
+ err = nfp_ccm_mbox_unlink_unlock(nn, skb, type);
+ if (err)
+ goto err_free_skb;
+ return 0;
+ }
+ }
+
+ /* run queue expects the lock held */
+ nfp_ccm_mbox_run_queue_unlock(nn);
+ return nfp_ccm_mbox_skb_return(skb);
+
+err_unlock:
+ spin_unlock_bh(&nn->mbox_cmsg.queue.lock);
+err_free_skb:
+ dev_kfree_skb_any(skb);
+ return err;
+}
+
+int nfp_ccm_mbox_communicate(struct nfp_net *nn, struct sk_buff *skb,
+ enum nfp_ccm_type type,
+ unsigned int reply_size,
+ unsigned int max_reply_size)
+{
+ return __nfp_ccm_mbox_communicate(nn, skb, type, reply_size,
+ max_reply_size, false);
+}
+
+static void nfp_ccm_mbox_post_runq_work(struct work_struct *work)
+{
+ struct sk_buff *skb;
+ struct nfp_net *nn;
+
+ nn = container_of(work, struct nfp_net, mbox_cmsg.runq_work);
+
+ spin_lock_bh(&nn->mbox_cmsg.queue.lock);
+
+ skb = __skb_peek(&nn->mbox_cmsg.queue);
+ if (WARN_ON(!skb || !nfp_ccm_mbox_is_posted(skb) ||
+ !nfp_ccm_mbox_should_run(nn, skb))) {
+ spin_unlock_bh(&nn->mbox_cmsg.queue.lock);
+ return;
+ }
+
+ nfp_ccm_mbox_run_queue_unlock(nn);
+}
+
+static void nfp_ccm_mbox_post_wait_work(struct work_struct *work)
+{
+ struct sk_buff *skb;
+ struct nfp_net *nn;
+ int err;
+
+ nn = container_of(work, struct nfp_net, mbox_cmsg.wait_work);
+
+ skb = skb_peek(&nn->mbox_cmsg.queue);
+ if (WARN_ON(!skb || !nfp_ccm_mbox_is_posted(skb)))
+ /* Should never happen, so it's unclear what to do here. */
+ goto exit_unlock_wake;
+
+ err = nfp_net_mbox_reconfig_wait_posted(nn);
+ if (!err)
+ nfp_ccm_mbox_copy_out(nn, skb);
+ else
+ nfp_ccm_mbox_mark_all_err(nn, skb, -EIO);
+exit_unlock_wake:
+ nn_ctrl_bar_unlock(nn);
+ wake_up_all(&nn->mbox_cmsg.wq);
+}
+
+int nfp_ccm_mbox_post(struct nfp_net *nn, struct sk_buff *skb,
+ enum nfp_ccm_type type, unsigned int max_reply_size)
+{
+ int err;
+
+ err = nfp_ccm_mbox_msg_prepare(nn, skb, type, 0, max_reply_size,
+ GFP_ATOMIC);
+ if (err)
+ goto err_free_skb;
+
+ nfp_ccm_mbox_mark_posted(skb);
+
+ spin_lock_bh(&nn->mbox_cmsg.queue.lock);
+
+ err = nfp_ccm_mbox_msg_enqueue(nn, skb, type, false);
+ if (err)
+ goto err_unlock;
+
+ if (nfp_ccm_mbox_is_first(nn, skb)) {
+ if (nn_ctrl_bar_trylock(nn)) {
+ nfp_ccm_mbox_copy_in(nn, skb);
+ nfp_net_mbox_reconfig_post(nn,
+ NFP_NET_CFG_MBOX_CMD_TLV_CMSG);
+ queue_work(nn->mbox_cmsg.workq,
+ &nn->mbox_cmsg.wait_work);
+ } else {
+ nfp_ccm_mbox_mark_next_runner(nn);
+ }
+ }
+
+ spin_unlock_bh(&nn->mbox_cmsg.queue.lock);
+
+ return 0;
+
+err_unlock:
+ spin_unlock_bh(&nn->mbox_cmsg.queue.lock);
+err_free_skb:
+ dev_kfree_skb_any(skb);
+ return err;
+}
+
+struct sk_buff *
+nfp_ccm_mbox_msg_alloc(struct nfp_net *nn, unsigned int req_size,
+ unsigned int reply_size, gfp_t flags)
+{
+ unsigned int max_size;
+ struct sk_buff *skb;
+
+ if (!reply_size)
+ max_size = nfp_ccm_mbox_max_msg(nn);
+ else
+ max_size = max(req_size, reply_size);
+ max_size = round_up(max_size, 4);
+
+ skb = alloc_skb(max_size, flags);
+ if (!skb)
+ return NULL;
+
+ skb_put(skb, req_size);
+
+ return skb;
+}
+
+bool nfp_ccm_mbox_fits(struct nfp_net *nn, unsigned int size)
+{
+ return nfp_ccm_mbox_max_msg(nn) >= size;
+}
+
+int nfp_ccm_mbox_init(struct nfp_net *nn)
+{
+ return 0;
+}
+
+void nfp_ccm_mbox_clean(struct nfp_net *nn)
+{
+ drain_workqueue(nn->mbox_cmsg.workq);
+}
+
+int nfp_ccm_mbox_alloc(struct nfp_net *nn)
+{
+ skb_queue_head_init(&nn->mbox_cmsg.queue);
+ init_waitqueue_head(&nn->mbox_cmsg.wq);
+ INIT_WORK(&nn->mbox_cmsg.wait_work, nfp_ccm_mbox_post_wait_work);
+ INIT_WORK(&nn->mbox_cmsg.runq_work, nfp_ccm_mbox_post_runq_work);
+
+ nn->mbox_cmsg.workq = alloc_workqueue("nfp-ccm-mbox", WQ_UNBOUND, 0);
+ if (!nn->mbox_cmsg.workq)
+ return -ENOMEM;
+ return 0;
+}
+
+void nfp_ccm_mbox_free(struct nfp_net *nn)
+{
+ destroy_workqueue(nn->mbox_cmsg.workq);
+ WARN_ON(!skb_queue_empty(&nn->mbox_cmsg.queue));
+}
diff --git a/drivers/net/ethernet/netronome/nfp/crypto/crypto.h b/drivers/net/ethernet/netronome/nfp/crypto/crypto.h
new file mode 100644
index 000000000000..60372ddf69f0
--- /dev/null
+++ b/drivers/net/ethernet/netronome/nfp/crypto/crypto.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) */
+/* Copyright (C) 2019 Netronome Systems, Inc. */
+
+#ifndef NFP_CRYPTO_H
+#define NFP_CRYPTO_H 1
+
+struct nfp_net_tls_offload_ctx {
+ __be32 fw_handle[2];
+
+ u8 rx_end[0];
+ /* Tx-only fields follow - the Rx side does not have enough driver state
+ * to fit these.
+ */
+
+ u32 next_seq;
+};
+
+#ifdef CONFIG_TLS_DEVICE
+int nfp_net_tls_init(struct nfp_net *nn);
+#else
+static inline int nfp_net_tls_init(struct nfp_net *nn)
+{
+ return 0;
+}
+#endif
+
+#endif
diff --git a/drivers/net/ethernet/netronome/nfp/crypto/fw.h b/drivers/net/ethernet/netronome/nfp/crypto/fw.h
new file mode 100644
index 000000000000..67413d946c4a
--- /dev/null
+++ b/drivers/net/ethernet/netronome/nfp/crypto/fw.h
@@ -0,0 +1,84 @@
+/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) */
+/* Copyright (C) 2019 Netronome Systems, Inc. */
+
+#ifndef NFP_CRYPTO_FW_H
+#define NFP_CRYPTO_FW_H 1
+
+#include "../ccm.h"
+
+#define NFP_NET_CRYPTO_OP_TLS_1_2_AES_GCM_128_ENC 0
+#define NFP_NET_CRYPTO_OP_TLS_1_2_AES_GCM_128_DEC 1
+
+struct nfp_crypto_reply_simple {
+ struct nfp_ccm_hdr hdr;
+ __be32 error;
+};
+
+struct nfp_crypto_req_reset {
+ struct nfp_ccm_hdr hdr;
+ __be32 ep_id;
+};
+
+#define NFP_NET_TLS_IPVER GENMASK(15, 12)
+#define NFP_NET_TLS_VLAN GENMASK(11, 0)
+#define NFP_NET_TLS_VLAN_UNUSED 4095
+
+struct nfp_crypto_req_add_front {
+ struct nfp_ccm_hdr hdr;
+ __be32 ep_id;
+ u8 resv[3];
+ u8 opcode;
+ u8 key_len;
+ __be16 ipver_vlan __packed;
+ u8 l4_proto;
+#define NFP_NET_TLS_NON_ADDR_KEY_LEN 8
+ u8 l3_addrs[0];
+};
+
+struct nfp_crypto_req_add_back {
+ __be16 src_port;
+ __be16 dst_port;
+ __be32 key[8];
+ __be32 salt;
+ __be32 iv[2];
+ __be32 counter;
+ __be32 rec_no[2];
+ __be32 tcp_seq;
+};
+
+struct nfp_crypto_req_add_v4 {
+ struct nfp_crypto_req_add_front front;
+ __be32 src_ip;
+ __be32 dst_ip;
+ struct nfp_crypto_req_add_back back;
+};
+
+struct nfp_crypto_req_add_v6 {
+ struct nfp_crypto_req_add_front front;
+ __be32 src_ip[4];
+ __be32 dst_ip[4];
+ struct nfp_crypto_req_add_back back;
+};
+
+struct nfp_crypto_reply_add {
+ struct nfp_ccm_hdr hdr;
+ __be32 error;
+ __be32 handle[2];
+};
+
+struct nfp_crypto_req_del {
+ struct nfp_ccm_hdr hdr;
+ __be32 ep_id;
+ __be32 handle[2];
+};
+
+struct nfp_crypto_req_update {
+ struct nfp_ccm_hdr hdr;
+ __be32 ep_id;
+ u8 resv[3];
+ u8 opcode;
+ __be32 handle[2];
+ __be32 rec_no[2];
+ __be32 tcp_seq;
+};
+#endif
diff --git a/drivers/net/ethernet/netronome/nfp/crypto/tls.c b/drivers/net/ethernet/netronome/nfp/crypto/tls.c
new file mode 100644
index 000000000000..96a96b35c0ca
--- /dev/null
+++ b/drivers/net/ethernet/netronome/nfp/crypto/tls.c
@@ -0,0 +1,522 @@
+// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+/* Copyright (C) 2019 Netronome Systems, Inc. */
+
+#include <linux/bitfield.h>
+#include <linux/ipv6.h>
+#include <linux/skbuff.h>
+#include <linux/string.h>
+#include <net/tls.h>
+
+#include "../ccm.h"
+#include "../nfp_net.h"
+#include "crypto.h"
+#include "fw.h"
+
+#define NFP_NET_TLS_CCM_MBOX_OPS_MASK \
+ (BIT(NFP_CCM_TYPE_CRYPTO_RESET) | \
+ BIT(NFP_CCM_TYPE_CRYPTO_ADD) | \
+ BIT(NFP_CCM_TYPE_CRYPTO_DEL) | \
+ BIT(NFP_CCM_TYPE_CRYPTO_UPDATE))
+
+#define NFP_NET_TLS_OPCODE_MASK_RX \
+ BIT(NFP_NET_CRYPTO_OP_TLS_1_2_AES_GCM_128_DEC)
+
+#define NFP_NET_TLS_OPCODE_MASK_TX \
+ BIT(NFP_NET_CRYPTO_OP_TLS_1_2_AES_GCM_128_ENC)
+
+#define NFP_NET_TLS_OPCODE_MASK \
+ (NFP_NET_TLS_OPCODE_MASK_RX | NFP_NET_TLS_OPCODE_MASK_TX)
+
+static void nfp_net_crypto_set_op(struct nfp_net *nn, u8 opcode, bool on)
+{
+ u32 off, val;
+
+ off = nn->tlv_caps.crypto_enable_off + round_down(opcode / 8, 4);
+
+ val = nn_readl(nn, off);
+ if (on)
+ val |= BIT(opcode & 31);
+ else
+ val &= ~BIT(opcode & 31);
+ nn_writel(nn, off, val);
+}
+
+static bool
+__nfp_net_tls_conn_cnt_changed(struct nfp_net *nn, int add,
+ enum tls_offload_ctx_dir direction)
+{
+ u8 opcode;
+ int cnt;
+
+ if (direction == TLS_OFFLOAD_CTX_DIR_TX) {
+ opcode = NFP_NET_CRYPTO_OP_TLS_1_2_AES_GCM_128_ENC;
+ nn->ktls_tx_conn_cnt += add;
+ cnt = nn->ktls_tx_conn_cnt;
+ nn->dp.ktls_tx = !!nn->ktls_tx_conn_cnt;
+ } else {
+ opcode = NFP_NET_CRYPTO_OP_TLS_1_2_AES_GCM_128_DEC;
+ nn->ktls_rx_conn_cnt += add;
+ cnt = nn->ktls_rx_conn_cnt;
+ }
+
+ /* Care only about 0 -> 1 and 1 -> 0 transitions */
+ if (cnt > 1)
+ return false;
+
+ nfp_net_crypto_set_op(nn, opcode, cnt);
+ return true;
+}
+
+static int
+nfp_net_tls_conn_cnt_changed(struct nfp_net *nn, int add,
+ enum tls_offload_ctx_dir direction)
+{
+ int ret = 0;
+
+ /* Use the BAR lock to protect the connection counts */
+ nn_ctrl_bar_lock(nn);
+ if (__nfp_net_tls_conn_cnt_changed(nn, add, direction)) {
+ ret = __nfp_net_reconfig(nn, NFP_NET_CFG_UPDATE_CRYPTO);
+ /* Undo the cnt adjustment if the reconfig failed */
+ if (ret)
+ __nfp_net_tls_conn_cnt_changed(nn, -add, direction);
+ }
+ nn_ctrl_bar_unlock(nn);
+
+ return ret;
+}
+
+static int
+nfp_net_tls_conn_add(struct nfp_net *nn, enum tls_offload_ctx_dir direction)
+{
+ return nfp_net_tls_conn_cnt_changed(nn, 1, direction);
+}
+
+static int
+nfp_net_tls_conn_remove(struct nfp_net *nn, enum tls_offload_ctx_dir direction)
+{
+ return nfp_net_tls_conn_cnt_changed(nn, -1, direction);
+}
+
+static struct sk_buff *
+nfp_net_tls_alloc_simple(struct nfp_net *nn, size_t req_sz, gfp_t flags)
+{
+ return nfp_ccm_mbox_msg_alloc(nn, req_sz,
+ sizeof(struct nfp_crypto_reply_simple),
+ flags);
+}
+
+static int
+nfp_net_tls_communicate_simple(struct nfp_net *nn, struct sk_buff *skb,
+ const char *name, enum nfp_ccm_type type)
+{
+ struct nfp_crypto_reply_simple *reply;
+ int err;
+
+ err = __nfp_ccm_mbox_communicate(nn, skb, type,
+ sizeof(*reply), sizeof(*reply),
+ type == NFP_CCM_TYPE_CRYPTO_DEL);
+ if (err) {
+ nn_dp_warn(&nn->dp, "failed to %s TLS: %d\n", name, err);
+ return err;
+ }
+
+ reply = (void *)skb->data;
+ err = -be32_to_cpu(reply->error);
+ if (err)
+ nn_dp_warn(&nn->dp, "failed to %s TLS, fw replied: %d\n",
+ name, err);
+ dev_consume_skb_any(skb);
+
+ return err;
+}
+
+static void nfp_net_tls_del_fw(struct nfp_net *nn, __be32 *fw_handle)
+{
+ struct nfp_crypto_req_del *req;
+ struct sk_buff *skb;
+
+ skb = nfp_net_tls_alloc_simple(nn, sizeof(*req), GFP_KERNEL);
+ if (!skb)
+ return;
+
+ req = (void *)skb->data;
+ req->ep_id = 0;
+ memcpy(req->handle, fw_handle, sizeof(req->handle));
+
+ nfp_net_tls_communicate_simple(nn, skb, "delete",
+ NFP_CCM_TYPE_CRYPTO_DEL);
+}
+
+static void
+nfp_net_tls_set_ipver_vlan(struct nfp_crypto_req_add_front *front, u8 ipver)
+{
+ front->ipver_vlan = cpu_to_be16(FIELD_PREP(NFP_NET_TLS_IPVER, ipver) |
+ FIELD_PREP(NFP_NET_TLS_VLAN,
+ NFP_NET_TLS_VLAN_UNUSED));
+}
+
+static void
+nfp_net_tls_assign_conn_id(struct nfp_net *nn,
+ struct nfp_crypto_req_add_front *front)
+{
+ u32 len;
+ u64 id;
+
+ id = atomic64_inc_return(&nn->ktls_conn_id_gen);
+ len = front->key_len - NFP_NET_TLS_NON_ADDR_KEY_LEN;
+
+ memcpy(front->l3_addrs, &id, sizeof(id));
+ memset(front->l3_addrs + sizeof(id), 0, len - sizeof(id));
+}
+
+static struct nfp_crypto_req_add_back *
+nfp_net_tls_set_ipv4(struct nfp_net *nn, struct nfp_crypto_req_add_v4 *req,
+ struct sock *sk, int direction)
+{
+ struct inet_sock *inet = inet_sk(sk);
+
+ req->front.key_len += sizeof(__be32) * 2;
+
+ if (direction == TLS_OFFLOAD_CTX_DIR_TX) {
+ nfp_net_tls_assign_conn_id(nn, &req->front);
+ } else {
+ req->src_ip = inet->inet_daddr;
+ req->dst_ip = inet->inet_saddr;
+ }
+
+ return &req->back;
+}
+
+static struct nfp_crypto_req_add_back *
+nfp_net_tls_set_ipv6(struct nfp_net *nn, struct nfp_crypto_req_add_v6 *req,
+ struct sock *sk, int direction)
+{
+#if IS_ENABLED(CONFIG_IPV6)
+ struct ipv6_pinfo *np = inet6_sk(sk);
+
+ req->front.key_len += sizeof(struct in6_addr) * 2;
+
+ if (direction == TLS_OFFLOAD_CTX_DIR_TX) {
+ nfp_net_tls_assign_conn_id(nn, &req->front);
+ } else {
+ memcpy(req->src_ip, &sk->sk_v6_daddr, sizeof(req->src_ip));
+ memcpy(req->dst_ip, &np->saddr, sizeof(req->dst_ip));
+ }
+
+#endif
+ return &req->back;
+}
+
+static void
+nfp_net_tls_set_l4(struct nfp_crypto_req_add_front *front,
+ struct nfp_crypto_req_add_back *back, struct sock *sk,
+ int direction)
+{
+ struct inet_sock *inet = inet_sk(sk);
+
+ front->l4_proto = IPPROTO_TCP;
+
+ if (direction == TLS_OFFLOAD_CTX_DIR_TX) {
+ back->src_port = 0;
+ back->dst_port = 0;
+ } else {
+ back->src_port = inet->inet_dport;
+ back->dst_port = inet->inet_sport;
+ }
+}
+
+static u8 nfp_tls_1_2_dir_to_opcode(enum tls_offload_ctx_dir direction)
+{
+ switch (direction) {
+ case TLS_OFFLOAD_CTX_DIR_TX:
+ return NFP_NET_CRYPTO_OP_TLS_1_2_AES_GCM_128_ENC;
+ case TLS_OFFLOAD_CTX_DIR_RX:
+ return NFP_NET_CRYPTO_OP_TLS_1_2_AES_GCM_128_DEC;
+ default:
+ WARN_ON_ONCE(1);
+ return 0;
+ }
+}
+
+static bool
+nfp_net_cipher_supported(struct nfp_net *nn, u16 cipher_type,
+ enum tls_offload_ctx_dir direction)
+{
+ u8 bit;
+
+ switch (cipher_type) {
+ case TLS_CIPHER_AES_GCM_128:
+ if (direction == TLS_OFFLOAD_CTX_DIR_TX)
+ bit = NFP_NET_CRYPTO_OP_TLS_1_2_AES_GCM_128_ENC;
+ else
+ bit = NFP_NET_CRYPTO_OP_TLS_1_2_AES_GCM_128_DEC;
+ break;
+ default:
+ return false;
+ }
+
+ return nn->tlv_caps.crypto_ops & BIT(bit);
+}
+
+static int
+nfp_net_tls_add(struct net_device *netdev, struct sock *sk,
+ enum tls_offload_ctx_dir direction,
+ struct tls_crypto_info *crypto_info,
+ u32 start_offload_tcp_sn)
+{
+ struct tls12_crypto_info_aes_gcm_128 *tls_ci;
+ struct nfp_net *nn = netdev_priv(netdev);
+ struct nfp_crypto_req_add_front *front;
+ struct nfp_net_tls_offload_ctx *ntls;
+ struct nfp_crypto_req_add_back *back;
+ struct nfp_crypto_reply_add *reply;
+ struct sk_buff *skb;
+ size_t req_sz;
+ void *req;
+ bool ipv6;
+ int err;
+
+ BUILD_BUG_ON(sizeof(struct nfp_net_tls_offload_ctx) >
+ TLS_DRIVER_STATE_SIZE_TX);
+ BUILD_BUG_ON(offsetof(struct nfp_net_tls_offload_ctx, rx_end) >
+ TLS_DRIVER_STATE_SIZE_RX);
+
+ if (!nfp_net_cipher_supported(nn, crypto_info->cipher_type, direction))
+ return -EOPNOTSUPP;
+
+ switch (sk->sk_family) {
+#if IS_ENABLED(CONFIG_IPV6)
+ case AF_INET6:
+ if (sk->sk_ipv6only ||
+ ipv6_addr_type(&sk->sk_v6_daddr) != IPV6_ADDR_MAPPED) {
+ req_sz = sizeof(struct nfp_crypto_req_add_v6);
+ ipv6 = true;
+ break;
+ }
+#endif
+ /* fall through */
+ case AF_INET:
+ req_sz = sizeof(struct nfp_crypto_req_add_v4);
+ ipv6 = false;
+ break;
+ default:
+ return -EOPNOTSUPP;
+ }
+
+ err = nfp_net_tls_conn_add(nn, direction);
+ if (err)
+ return err;
+
+ skb = nfp_ccm_mbox_msg_alloc(nn, req_sz, sizeof(*reply), GFP_KERNEL);
+ if (!skb) {
+ err = -ENOMEM;
+ goto err_conn_remove;
+ }
+
+ front = (void *)skb->data;
+ front->ep_id = 0;
+ front->key_len = NFP_NET_TLS_NON_ADDR_KEY_LEN;
+ front->opcode = nfp_tls_1_2_dir_to_opcode(direction);
+ memset(front->resv, 0, sizeof(front->resv));
+
+ nfp_net_tls_set_ipver_vlan(front, ipv6 ? 6 : 4);
+
+ req = (void *)skb->data;
+ if (ipv6)
+ back = nfp_net_tls_set_ipv6(nn, req, sk, direction);
+ else
+ back = nfp_net_tls_set_ipv4(nn, req, sk, direction);
+
+ nfp_net_tls_set_l4(front, back, sk, direction);
+
+ back->counter = 0;
+ back->tcp_seq = cpu_to_be32(start_offload_tcp_sn);
+
+ tls_ci = (struct tls12_crypto_info_aes_gcm_128 *)crypto_info;
+ memcpy(back->key, tls_ci->key, TLS_CIPHER_AES_GCM_128_KEY_SIZE);
+ memset(&back->key[TLS_CIPHER_AES_GCM_128_KEY_SIZE / 4], 0,
+ sizeof(back->key) - TLS_CIPHER_AES_GCM_128_KEY_SIZE);
+ memcpy(back->iv, tls_ci->iv, TLS_CIPHER_AES_GCM_128_IV_SIZE);
+ memcpy(&back->salt, tls_ci->salt, TLS_CIPHER_AES_GCM_128_SALT_SIZE);
+ memcpy(back->rec_no, tls_ci->rec_seq, sizeof(tls_ci->rec_seq));
+
+ /* Get an extra ref on the skb so we can wipe the key after */
+ skb_get(skb);
+
+ err = nfp_ccm_mbox_communicate(nn, skb, NFP_CCM_TYPE_CRYPTO_ADD,
+ sizeof(*reply), sizeof(*reply));
+ reply = (void *)skb->data;
+
+ /* We depend on the CCM MBOX code not reallocating the skb we sent,
+ * so we can clear the key material out of memory.
+ */
+ if (!WARN_ON_ONCE((u8 *)back < skb->head ||
+ (u8 *)back > skb_end_pointer(skb)) &&
+ !WARN_ON_ONCE((u8 *)&reply[1] > (u8 *)back))
+ memzero_explicit(back, sizeof(*back));
+ dev_consume_skb_any(skb); /* the extra ref from skb_get() above */
+
+ if (err) {
+ nn_dp_warn(&nn->dp, "failed to add TLS: %d (%d)\n",
+ err, direction == TLS_OFFLOAD_CTX_DIR_TX);
+ /* communicate frees skb on error */
+ goto err_conn_remove;
+ }
+
+ err = -be32_to_cpu(reply->error);
+ if (err) {
+ if (err == -ENOSPC) {
+ if (!atomic_fetch_inc(&nn->ktls_no_space))
+ nn_info(nn, "HW TLS table full\n");
+ } else {
+ nn_dp_warn(&nn->dp,
+ "failed to add TLS, FW replied: %d\n", err);
+ }
+ goto err_free_skb;
+ }
+
+ if (!reply->handle[0] && !reply->handle[1]) {
+ nn_dp_warn(&nn->dp, "FW returned NULL handle\n");
+ err = -EINVAL;
+ goto err_fw_remove;
+ }
+
+ ntls = tls_driver_ctx(sk, direction);
+ memcpy(ntls->fw_handle, reply->handle, sizeof(ntls->fw_handle));
+ if (direction == TLS_OFFLOAD_CTX_DIR_TX)
+ ntls->next_seq = start_offload_tcp_sn;
+ dev_consume_skb_any(skb);
+
+ if (direction == TLS_OFFLOAD_CTX_DIR_TX)
+ return 0;
+
+ tls_offload_rx_resync_set_type(sk,
+ TLS_OFFLOAD_SYNC_TYPE_CORE_NEXT_HINT);
+ return 0;
+
+err_fw_remove:
+ nfp_net_tls_del_fw(nn, reply->handle);
+err_free_skb:
+ dev_consume_skb_any(skb);
+err_conn_remove:
+ nfp_net_tls_conn_remove(nn, direction);
+ return err;
+}
+
+static void
+nfp_net_tls_del(struct net_device *netdev, struct tls_context *tls_ctx,
+ enum tls_offload_ctx_dir direction)
+{
+ struct nfp_net *nn = netdev_priv(netdev);
+ struct nfp_net_tls_offload_ctx *ntls;
+
+ nfp_net_tls_conn_remove(nn, direction);
+
+ ntls = __tls_driver_ctx(tls_ctx, direction);
+ nfp_net_tls_del_fw(nn, ntls->fw_handle);
+}
+
+static int
+nfp_net_tls_resync(struct net_device *netdev, struct sock *sk, u32 seq,
+ u8 *rcd_sn, enum tls_offload_ctx_dir direction)
+{
+ struct nfp_net *nn = netdev_priv(netdev);
+ struct nfp_net_tls_offload_ctx *ntls;
+ struct nfp_crypto_req_update *req;
+ struct sk_buff *skb;
+ gfp_t flags;
+ int err;
+
+ flags = direction == TLS_OFFLOAD_CTX_DIR_TX ? GFP_KERNEL : GFP_ATOMIC;
+ skb = nfp_net_tls_alloc_simple(nn, sizeof(*req), flags);
+ if (!skb)
+ return -ENOMEM;
+
+ ntls = tls_driver_ctx(sk, direction);
+ req = (void *)skb->data;
+ req->ep_id = 0;
+ req->opcode = nfp_tls_1_2_dir_to_opcode(direction);
+ memset(req->resv, 0, sizeof(req->resv));
+ memcpy(req->handle, ntls->fw_handle, sizeof(ntls->fw_handle));
+ req->tcp_seq = cpu_to_be32(seq);
+ memcpy(req->rec_no, rcd_sn, sizeof(req->rec_no));
+
+ if (direction == TLS_OFFLOAD_CTX_DIR_TX) {
+ err = nfp_net_tls_communicate_simple(nn, skb, "sync",
+ NFP_CCM_TYPE_CRYPTO_UPDATE);
+ if (err)
+ return err;
+ ntls->next_seq = seq;
+ } else {
+ nfp_ccm_mbox_post(nn, skb, NFP_CCM_TYPE_CRYPTO_UPDATE,
+ sizeof(struct nfp_crypto_reply_simple));
+ }
+
+ return 0;
+}
+
+static const struct tlsdev_ops nfp_net_tls_ops = {
+ .tls_dev_add = nfp_net_tls_add,
+ .tls_dev_del = nfp_net_tls_del,
+ .tls_dev_resync = nfp_net_tls_resync,
+};
+
+static int nfp_net_tls_reset(struct nfp_net *nn)
+{
+ struct nfp_crypto_req_reset *req;
+ struct sk_buff *skb;
+
+ skb = nfp_net_tls_alloc_simple(nn, sizeof(*req), GFP_KERNEL);
+ if (!skb)
+ return -ENOMEM;
+
+ req = (void *)skb->data;
+ req->ep_id = 0;
+
+ return nfp_net_tls_communicate_simple(nn, skb, "reset",
+ NFP_CCM_TYPE_CRYPTO_RESET);
+}
+
+int nfp_net_tls_init(struct nfp_net *nn)
+{
+ struct net_device *netdev = nn->dp.netdev;
+ int err;
+
+ if (!(nn->tlv_caps.crypto_ops & NFP_NET_TLS_OPCODE_MASK))
+ return 0;
+
+ if ((nn->tlv_caps.mbox_cmsg_types & NFP_NET_TLS_CCM_MBOX_OPS_MASK) !=
+ NFP_NET_TLS_CCM_MBOX_OPS_MASK)
+ return 0;
+
+ if (!nfp_ccm_mbox_fits(nn, sizeof(struct nfp_crypto_req_add_v6))) {
+ nn_warn(nn, "disabling TLS offload - mbox too small: %d\n",
+ nn->tlv_caps.mbox_len);
+ return 0;
+ }
+
+ err = nfp_net_tls_reset(nn);
+ if (err)
+ return err;
+
+ nn_ctrl_bar_lock(nn);
+ nn_writel(nn, nn->tlv_caps.crypto_enable_off, 0);
+ err = __nfp_net_reconfig(nn, NFP_NET_CFG_UPDATE_CRYPTO);
+ nn_ctrl_bar_unlock(nn);
+ if (err)
+ return err;
+
+ if (nn->tlv_caps.crypto_ops & NFP_NET_TLS_OPCODE_MASK_RX) {
+ netdev->hw_features |= NETIF_F_HW_TLS_RX;
+ netdev->features |= NETIF_F_HW_TLS_RX;
+ }
+ if (nn->tlv_caps.crypto_ops & NFP_NET_TLS_OPCODE_MASK_TX) {
+ netdev->hw_features |= NETIF_F_HW_TLS_TX;
+ netdev->features |= NETIF_F_HW_TLS_TX;
+ }
+
+ netdev->tlsdev_ops = &nfp_net_tls_ops;
+
+ return 0;
+}
diff --git a/drivers/net/ethernet/netronome/nfp/flower/action.c b/drivers/net/ethernet/netronome/nfp/flower/action.c
index c56e31d9f8a4..5a54fe848de4 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/action.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/action.c
@@ -54,7 +54,8 @@ nfp_fl_push_vlan(struct nfp_fl_push_vlan *push_vlan,
static int
nfp_fl_pre_lag(struct nfp_app *app, const struct flow_action_entry *act,
- struct nfp_fl_payload *nfp_flow, int act_len)
+ struct nfp_fl_payload *nfp_flow, int act_len,
+ struct netlink_ext_ack *extack)
{
size_t act_size = sizeof(struct nfp_fl_pre_lag);
struct nfp_fl_pre_lag *pre_lag;
@@ -65,8 +66,10 @@ nfp_fl_pre_lag(struct nfp_app *app, const struct flow_action_entry *act,
if (!out_dev || !netif_is_lag_master(out_dev))
return 0;
- if (act_len + act_size > NFP_FL_MAX_A_SIZ)
+ if (act_len + act_size > NFP_FL_MAX_A_SIZ) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: maximum allowed action list size exceeded at LAG action");
return -EOPNOTSUPP;
+ }
/* Pre_lag action must be first on action list.
* If other actions already exist, they need to be pushed forward.
@@ -76,7 +79,7 @@ nfp_fl_pre_lag(struct nfp_app *app, const struct flow_action_entry *act,
nfp_flow->action_data, act_len);
pre_lag = (struct nfp_fl_pre_lag *)nfp_flow->action_data;
- err = nfp_flower_lag_populate_pre_action(app, out_dev, pre_lag);
+ err = nfp_flower_lag_populate_pre_action(app, out_dev, pre_lag, extack);
if (err)
return err;
@@ -93,7 +96,8 @@ nfp_fl_output(struct nfp_app *app, struct nfp_fl_output *output,
const struct flow_action_entry *act,
struct nfp_fl_payload *nfp_flow,
bool last, struct net_device *in_dev,
- enum nfp_flower_tun_type tun_type, int *tun_out_cnt)
+ enum nfp_flower_tun_type tun_type, int *tun_out_cnt,
+ struct netlink_ext_ack *extack)
{
size_t act_size = sizeof(struct nfp_fl_output);
struct nfp_flower_priv *priv = app->priv;
@@ -104,18 +108,24 @@ nfp_fl_output(struct nfp_app *app, struct nfp_fl_output *output,
output->head.len_lw = act_size >> NFP_FL_LW_SIZ;
out_dev = act->dev;
- if (!out_dev)
+ if (!out_dev) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: invalid egress interface for mirred action");
return -EOPNOTSUPP;
+ }
tmp_flags = last ? NFP_FL_OUT_FLAGS_LAST : 0;
if (tun_type) {
/* Verify the egress netdev matches the tunnel type. */
- if (!nfp_fl_netdev_is_tunnel_type(out_dev, tun_type))
+ if (!nfp_fl_netdev_is_tunnel_type(out_dev, tun_type)) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: egress interface does not match the required tunnel type");
return -EOPNOTSUPP;
+ }
- if (*tun_out_cnt)
+ if (*tun_out_cnt) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: cannot offload more than one tunnel mirred output per filter");
return -EOPNOTSUPP;
+ }
(*tun_out_cnt)++;
output->flags = cpu_to_be16(tmp_flags |
@@ -127,8 +137,10 @@ nfp_fl_output(struct nfp_app *app, struct nfp_fl_output *output,
output->flags = cpu_to_be16(tmp_flags);
gid = nfp_flower_lag_get_output_id(app, out_dev);
- if (gid < 0)
+ if (gid < 0) {
+ NL_SET_ERR_MSG_MOD(extack, "invalid entry: cannot find group id for LAG action");
return gid;
+ }
output->port = cpu_to_be32(NFP_FL_LAG_OUT | gid);
} else {
/* Set action output parameters. */
@@ -136,29 +148,58 @@ nfp_fl_output(struct nfp_app *app, struct nfp_fl_output *output,
if (nfp_netdev_is_nfp_repr(in_dev)) {
/* Confirm ingress and egress are on same device. */
- if (!netdev_port_same_parent_id(in_dev, out_dev))
+ if (!netdev_port_same_parent_id(in_dev, out_dev)) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: ingress and egress interfaces are on different devices");
return -EOPNOTSUPP;
+ }
}
- if (!nfp_netdev_is_nfp_repr(out_dev))
+ if (!nfp_netdev_is_nfp_repr(out_dev)) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: egress interface is not an nfp port");
return -EOPNOTSUPP;
+ }
output->port = cpu_to_be32(nfp_repr_get_port_id(out_dev));
- if (!output->port)
+ if (!output->port) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: invalid port id for egress interface");
return -EOPNOTSUPP;
+ }
}
nfp_flow->meta.shortcut = output->port;
return 0;
}
+static bool
+nfp_flower_tun_is_gre(struct flow_cls_offload *flow, int start_idx)
+{
+ struct flow_action_entry *act = flow->rule->action.entries;
+ int num_act = flow->rule->action.num_entries;
+ int act_idx;
+
+ /* Preparse action list for next mirred or redirect action */
+ for (act_idx = start_idx + 1; act_idx < num_act; act_idx++)
+ if (act[act_idx].id == FLOW_ACTION_REDIRECT ||
+ act[act_idx].id == FLOW_ACTION_MIRRED)
+ return netif_is_gretap(act[act_idx].dev);
+
+ return false;
+}
+
static enum nfp_flower_tun_type
-nfp_fl_get_tun_from_act_l4_port(struct nfp_app *app,
- const struct flow_action_entry *act)
+nfp_fl_get_tun_from_act(struct nfp_app *app,
+ struct flow_cls_offload *flow,
+ const struct flow_action_entry *act, int act_idx)
{
const struct ip_tunnel_info *tun = act->tunnel;
struct nfp_flower_priv *priv = app->priv;
+ /* Determine the tunnel type based on the egress netdev
+ * in the mirred action for tunnels without an L4 header.
+ */
+ if (nfp_flower_tun_is_gre(flow, act_idx))
+ return NFP_FL_TUNNEL_GRE;
+
switch (tun->key.tp_dst) {
case htons(IANA_VXLAN_UDP_PORT):
return NFP_FL_TUNNEL_VXLAN;
@@ -194,7 +235,8 @@ static struct nfp_fl_pre_tunnel *nfp_fl_pre_tunnel(char *act_data, int act_len)
static int
nfp_fl_push_geneve_options(struct nfp_fl_payload *nfp_fl, int *list_len,
- const struct flow_action_entry *act)
+ const struct flow_action_entry *act,
+ struct netlink_ext_ack *extack)
{
struct ip_tunnel_info *ip_tun = (struct ip_tunnel_info *)act->tunnel;
int opt_len, opt_cnt, act_start, tot_push_len;
@@ -212,20 +254,26 @@ nfp_fl_push_geneve_options(struct nfp_fl_payload *nfp_fl, int *list_len,
struct geneve_opt *opt = (struct geneve_opt *)src;
opt_cnt++;
- if (opt_cnt > NFP_FL_MAX_GENEVE_OPT_CNT)
+ if (opt_cnt > NFP_FL_MAX_GENEVE_OPT_CNT) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: maximum allowed number of geneve options exceeded");
return -EOPNOTSUPP;
+ }
tot_push_len += sizeof(struct nfp_fl_push_geneve) +
opt->length * 4;
- if (tot_push_len > NFP_FL_MAX_GENEVE_OPT_ACT)
+ if (tot_push_len > NFP_FL_MAX_GENEVE_OPT_ACT) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: maximum allowed action list size exceeded at push geneve options");
return -EOPNOTSUPP;
+ }
opt_len -= sizeof(struct geneve_opt) + opt->length * 4;
src += sizeof(struct geneve_opt) + opt->length * 4;
}
- if (*list_len + tot_push_len > NFP_FL_MAX_A_SIZ)
+ if (*list_len + tot_push_len > NFP_FL_MAX_A_SIZ) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: maximum allowed action list size exceeded at push geneve options");
return -EOPNOTSUPP;
+ }
act_start = *list_len;
*list_len += tot_push_len;
@@ -256,14 +304,13 @@ nfp_fl_push_geneve_options(struct nfp_fl_payload *nfp_fl, int *list_len,
}
static int
-nfp_fl_set_ipv4_udp_tun(struct nfp_app *app,
- struct nfp_fl_set_ipv4_udp_tun *set_tun,
- const struct flow_action_entry *act,
- struct nfp_fl_pre_tunnel *pre_tun,
- enum nfp_flower_tun_type tun_type,
- struct net_device *netdev)
+nfp_fl_set_ipv4_tun(struct nfp_app *app, struct nfp_fl_set_ipv4_tun *set_tun,
+ const struct flow_action_entry *act,
+ struct nfp_fl_pre_tunnel *pre_tun,
+ enum nfp_flower_tun_type tun_type,
+ struct net_device *netdev, struct netlink_ext_ack *extack)
{
- size_t act_size = sizeof(struct nfp_fl_set_ipv4_udp_tun);
+ size_t act_size = sizeof(struct nfp_fl_set_ipv4_tun);
const struct ip_tunnel_info *ip_tun = act->tunnel;
struct nfp_flower_priv *priv = app->priv;
u32 tmp_set_ip_tun_type_index = 0;
@@ -275,8 +322,10 @@ nfp_fl_set_ipv4_udp_tun(struct nfp_app *app,
NFP_FL_TUNNEL_GENEVE_OPT != TUNNEL_GENEVE_OPT);
if (ip_tun->options_len &&
(tun_type != NFP_FL_TUNNEL_GENEVE ||
- !(priv->flower_ext_feats & NFP_FL_FEATS_GENEVE_OPT)))
+ !(priv->flower_ext_feats & NFP_FL_FEATS_GENEVE_OPT))) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: loaded firmware does not support geneve options offload");
return -EOPNOTSUPP;
+ }
set_tun->head.jump_id = NFP_FL_ACTION_OPCODE_SET_IPV4_TUNNEL;
set_tun->head.len_lw = act_size >> NFP_FL_LW_SIZ;
@@ -316,8 +365,10 @@ nfp_fl_set_ipv4_udp_tun(struct nfp_app *app,
set_tun->tos = ip_tun->key.tos;
if (!(ip_tun->key.tun_flags & NFP_FL_TUNNEL_KEY) ||
- ip_tun->key.tun_flags & ~NFP_FL_SUPPORTED_IPV4_UDP_TUN_FLAGS)
+ ip_tun->key.tun_flags & ~NFP_FL_SUPPORTED_IPV4_UDP_TUN_FLAGS) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: loaded firmware does not support tunnel flag offload");
return -EOPNOTSUPP;
+ }
set_tun->tun_flags = ip_tun->key.tun_flags;
if (tun_type == NFP_FL_TUNNEL_GENEVE) {
@@ -345,18 +396,22 @@ static void nfp_fl_set_helper32(u32 value, u32 mask, u8 *p_exact, u8 *p_mask)
static int
nfp_fl_set_eth(const struct flow_action_entry *act, u32 off,
- struct nfp_fl_set_eth *set_eth)
+ struct nfp_fl_set_eth *set_eth, struct netlink_ext_ack *extack)
{
u32 exact, mask;
- if (off + 4 > ETH_ALEN * 2)
+ if (off + 4 > ETH_ALEN * 2) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: invalid pedit ethernet action");
return -EOPNOTSUPP;
+ }
mask = ~act->mangle.mask;
exact = act->mangle.val;
- if (exact & ~mask)
+ if (exact & ~mask) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: invalid pedit ethernet action");
return -EOPNOTSUPP;
+ }
nfp_fl_set_helper32(exact, mask, &set_eth->eth_addr_val[off],
&set_eth->eth_addr_mask[off]);
@@ -377,7 +432,8 @@ struct ipv4_ttl_word {
static int
nfp_fl_set_ip4(const struct flow_action_entry *act, u32 off,
struct nfp_fl_set_ip4_addrs *set_ip_addr,
- struct nfp_fl_set_ip4_ttl_tos *set_ip_ttl_tos)
+ struct nfp_fl_set_ip4_ttl_tos *set_ip_ttl_tos,
+ struct netlink_ext_ack *extack)
{
struct ipv4_ttl_word *ttl_word_mask;
struct ipv4_ttl_word *ttl_word;
@@ -389,8 +445,10 @@ nfp_fl_set_ip4(const struct flow_action_entry *act, u32 off,
mask = (__force __be32)~act->mangle.mask;
exact = (__force __be32)act->mangle.val;
- if (exact & ~mask)
+ if (exact & ~mask) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: invalid pedit IPv4 action");
return -EOPNOTSUPP;
+ }
switch (off) {
case offsetof(struct iphdr, daddr):
@@ -413,8 +471,10 @@ nfp_fl_set_ip4(const struct flow_action_entry *act, u32 off,
ttl_word_mask = (struct ipv4_ttl_word *)&mask;
ttl_word = (struct ipv4_ttl_word *)&exact;
- if (ttl_word_mask->protocol || ttl_word_mask->check)
+ if (ttl_word_mask->protocol || ttl_word_mask->check) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: invalid pedit IPv4 ttl action");
return -EOPNOTSUPP;
+ }
set_ip_ttl_tos->ipv4_ttl_mask |= ttl_word_mask->ttl;
set_ip_ttl_tos->ipv4_ttl &= ~ttl_word_mask->ttl;
@@ -429,8 +489,10 @@ nfp_fl_set_ip4(const struct flow_action_entry *act, u32 off,
tos_word = (struct iphdr *)&exact;
if (tos_word_mask->version || tos_word_mask->ihl ||
- tos_word_mask->tot_len)
+ tos_word_mask->tot_len) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: invalid pedit IPv4 tos action");
return -EOPNOTSUPP;
+ }
set_ip_ttl_tos->ipv4_tos_mask |= tos_word_mask->tos;
set_ip_ttl_tos->ipv4_tos &= ~tos_word_mask->tos;
@@ -441,6 +503,7 @@ nfp_fl_set_ip4(const struct flow_action_entry *act, u32 off,
NFP_FL_LW_SIZ;
break;
default:
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: pedit on unsupported section of IPv4 header");
return -EOPNOTSUPP;
}
@@ -468,7 +531,8 @@ struct ipv6_hop_limit_word {
static int
nfp_fl_set_ip6_hop_limit_flow_label(u32 off, __be32 exact, __be32 mask,
- struct nfp_fl_set_ipv6_tc_hl_fl *ip_hl_fl)
+ struct nfp_fl_set_ipv6_tc_hl_fl *ip_hl_fl,
+ struct netlink_ext_ack *extack)
{
struct ipv6_hop_limit_word *fl_hl_mask;
struct ipv6_hop_limit_word *fl_hl;
@@ -478,8 +542,10 @@ nfp_fl_set_ip6_hop_limit_flow_label(u32 off, __be32 exact, __be32 mask,
fl_hl_mask = (struct ipv6_hop_limit_word *)&mask;
fl_hl = (struct ipv6_hop_limit_word *)&exact;
- if (fl_hl_mask->nexthdr || fl_hl_mask->payload_len)
+ if (fl_hl_mask->nexthdr || fl_hl_mask->payload_len) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: invalid pedit IPv6 hop limit action");
return -EOPNOTSUPP;
+ }
ip_hl_fl->ipv6_hop_limit_mask |= fl_hl_mask->hop_limit;
ip_hl_fl->ipv6_hop_limit &= ~fl_hl_mask->hop_limit;
@@ -488,8 +554,10 @@ nfp_fl_set_ip6_hop_limit_flow_label(u32 off, __be32 exact, __be32 mask,
break;
case round_down(offsetof(struct ipv6hdr, flow_lbl), 4):
if (mask & ~IPV6_FLOW_LABEL_MASK ||
- exact & ~IPV6_FLOW_LABEL_MASK)
+ exact & ~IPV6_FLOW_LABEL_MASK) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: invalid pedit IPv6 flow label action");
return -EOPNOTSUPP;
+ }
ip_hl_fl->ipv6_label_mask |= mask;
ip_hl_fl->ipv6_label &= ~mask;
@@ -507,7 +575,8 @@ static int
nfp_fl_set_ip6(const struct flow_action_entry *act, u32 off,
struct nfp_fl_set_ipv6_addr *ip_dst,
struct nfp_fl_set_ipv6_addr *ip_src,
- struct nfp_fl_set_ipv6_tc_hl_fl *ip_hl_fl)
+ struct nfp_fl_set_ipv6_tc_hl_fl *ip_hl_fl,
+ struct netlink_ext_ack *extack)
{
__be32 exact, mask;
int err = 0;
@@ -517,12 +586,14 @@ nfp_fl_set_ip6(const struct flow_action_entry *act, u32 off,
mask = (__force __be32)~act->mangle.mask;
exact = (__force __be32)act->mangle.val;
- if (exact & ~mask)
+ if (exact & ~mask) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: invalid pedit IPv6 action");
return -EOPNOTSUPP;
+ }
if (off < offsetof(struct ipv6hdr, saddr)) {
err = nfp_fl_set_ip6_hop_limit_flow_label(off, exact, mask,
- ip_hl_fl);
+ ip_hl_fl, extack);
} else if (off < offsetof(struct ipv6hdr, daddr)) {
word = (off - offsetof(struct ipv6hdr, saddr)) / sizeof(exact);
nfp_fl_set_ip6_helper(NFP_FL_ACTION_OPCODE_SET_IPV6_SRC, word,
@@ -533,6 +604,7 @@ nfp_fl_set_ip6(const struct flow_action_entry *act, u32 off,
nfp_fl_set_ip6_helper(NFP_FL_ACTION_OPCODE_SET_IPV6_DST, word,
exact, mask, ip_dst);
} else {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: pedit on unsupported section of IPv6 header");
return -EOPNOTSUPP;
}
@@ -541,18 +613,23 @@ nfp_fl_set_ip6(const struct flow_action_entry *act, u32 off,
static int
nfp_fl_set_tport(const struct flow_action_entry *act, u32 off,
- struct nfp_fl_set_tport *set_tport, int opcode)
+ struct nfp_fl_set_tport *set_tport, int opcode,
+ struct netlink_ext_ack *extack)
{
u32 exact, mask;
- if (off)
+ if (off) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: pedit on unsupported section of L4 header");
return -EOPNOTSUPP;
+ }
mask = ~act->mangle.mask;
exact = act->mangle.val;
- if (exact & ~mask)
+ if (exact & ~mask) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: invalid pedit L4 action");
return -EOPNOTSUPP;
+ }
nfp_fl_set_helper32(exact, mask, set_tport->tp_port_val,
set_tport->tp_port_mask);
@@ -592,11 +669,11 @@ struct nfp_flower_pedit_acts {
};
static int
-nfp_fl_commit_mangle(struct tc_cls_flower_offload *flow, char *nfp_action,
+nfp_fl_commit_mangle(struct flow_cls_offload *flow, char *nfp_action,
int *a_len, struct nfp_flower_pedit_acts *set_act,
u32 *csum_updated)
{
- struct flow_rule *rule = tc_cls_flower_offload_flow_rule(flow);
+ struct flow_rule *rule = flow_cls_offload_flow_rule(flow);
size_t act_size = 0;
u8 ip_proto = 0;
@@ -694,8 +771,9 @@ nfp_fl_commit_mangle(struct tc_cls_flower_offload *flow, char *nfp_action,
static int
nfp_fl_pedit(const struct flow_action_entry *act,
- struct tc_cls_flower_offload *flow, char *nfp_action, int *a_len,
- u32 *csum_updated, struct nfp_flower_pedit_acts *set_act)
+ struct flow_cls_offload *flow, char *nfp_action, int *a_len,
+ u32 *csum_updated, struct nfp_flower_pedit_acts *set_act,
+ struct netlink_ext_ack *extack)
{
enum flow_action_mangle_base htype;
u32 offset;
@@ -705,21 +783,22 @@ nfp_fl_pedit(const struct flow_action_entry *act,
switch (htype) {
case TCA_PEDIT_KEY_EX_HDR_TYPE_ETH:
- return nfp_fl_set_eth(act, offset, &set_act->set_eth);
+ return nfp_fl_set_eth(act, offset, &set_act->set_eth, extack);
case TCA_PEDIT_KEY_EX_HDR_TYPE_IP4:
return nfp_fl_set_ip4(act, offset, &set_act->set_ip_addr,
- &set_act->set_ip_ttl_tos);
+ &set_act->set_ip_ttl_tos, extack);
case TCA_PEDIT_KEY_EX_HDR_TYPE_IP6:
return nfp_fl_set_ip6(act, offset, &set_act->set_ip6_dst,
&set_act->set_ip6_src,
- &set_act->set_ip6_tc_hl_fl);
+ &set_act->set_ip6_tc_hl_fl, extack);
case TCA_PEDIT_KEY_EX_HDR_TYPE_TCP:
return nfp_fl_set_tport(act, offset, &set_act->set_tport,
- NFP_FL_ACTION_OPCODE_SET_TCP);
+ NFP_FL_ACTION_OPCODE_SET_TCP, extack);
case TCA_PEDIT_KEY_EX_HDR_TYPE_UDP:
return nfp_fl_set_tport(act, offset, &set_act->set_tport,
- NFP_FL_ACTION_OPCODE_SET_UDP);
+ NFP_FL_ACTION_OPCODE_SET_UDP, extack);
default:
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: pedit on unsupported header");
return -EOPNOTSUPP;
}
}
@@ -730,7 +809,8 @@ nfp_flower_output_action(struct nfp_app *app,
struct nfp_fl_payload *nfp_fl, int *a_len,
struct net_device *netdev, bool last,
enum nfp_flower_tun_type *tun_type, int *tun_out_cnt,
- int *out_cnt, u32 *csum_updated)
+ int *out_cnt, u32 *csum_updated,
+ struct netlink_ext_ack *extack)
{
struct nfp_flower_priv *priv = app->priv;
struct nfp_fl_output *output;
@@ -739,15 +819,19 @@ nfp_flower_output_action(struct nfp_app *app,
/* If csum_updated has not been reset by now, it means HW will
* incorrectly update csums when they are not requested.
*/
- if (*csum_updated)
+ if (*csum_updated) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: set actions without updating checksums are not supported");
return -EOPNOTSUPP;
+ }
- if (*a_len + sizeof(struct nfp_fl_output) > NFP_FL_MAX_A_SIZ)
+ if (*a_len + sizeof(struct nfp_fl_output) > NFP_FL_MAX_A_SIZ) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: mirred output increases action list size beyond the allowed maximum");
return -EOPNOTSUPP;
+ }
output = (struct nfp_fl_output *)&nfp_fl->action_data[*a_len];
err = nfp_fl_output(app, output, act, nfp_fl, last, netdev, *tun_type,
- tun_out_cnt);
+ tun_out_cnt, extack);
if (err)
return err;
@@ -757,11 +841,13 @@ nfp_flower_output_action(struct nfp_app *app,
/* nfp_fl_pre_lag returns -err or size of prelag action added.
* This will be 0 if it is not egressing to a lag dev.
*/
- prelag_size = nfp_fl_pre_lag(app, act, nfp_fl, *a_len);
- if (prelag_size < 0)
+ prelag_size = nfp_fl_pre_lag(app, act, nfp_fl, *a_len, extack);
+ if (prelag_size < 0) {
return prelag_size;
- else if (prelag_size > 0 && (!last || *out_cnt))
+ } else if (prelag_size > 0 && (!last || *out_cnt)) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: LAG action has to be last action in action list");
return -EOPNOTSUPP;
+ }
*a_len += prelag_size;
}
@@ -772,14 +858,15 @@ nfp_flower_output_action(struct nfp_app *app,
static int
nfp_flower_loop_action(struct nfp_app *app, const struct flow_action_entry *act,
- struct tc_cls_flower_offload *flow,
+ struct flow_cls_offload *flow,
struct nfp_fl_payload *nfp_fl, int *a_len,
struct net_device *netdev,
enum nfp_flower_tun_type *tun_type, int *tun_out_cnt,
int *out_cnt, u32 *csum_updated,
- struct nfp_flower_pedit_acts *set_act)
+ struct nfp_flower_pedit_acts *set_act,
+ struct netlink_ext_ack *extack, int act_idx)
{
- struct nfp_fl_set_ipv4_udp_tun *set_tun;
+ struct nfp_fl_set_ipv4_tun *set_tun;
struct nfp_fl_pre_tunnel *pre_tun;
struct nfp_fl_push_vlan *psh_v;
struct nfp_fl_pop_vlan *pop_v;
@@ -792,20 +879,23 @@ nfp_flower_loop_action(struct nfp_app *app, const struct flow_action_entry *act,
case FLOW_ACTION_REDIRECT:
err = nfp_flower_output_action(app, act, nfp_fl, a_len, netdev,
true, tun_type, tun_out_cnt,
- out_cnt, csum_updated);
+ out_cnt, csum_updated, extack);
if (err)
return err;
break;
case FLOW_ACTION_MIRRED:
err = nfp_flower_output_action(app, act, nfp_fl, a_len, netdev,
false, tun_type, tun_out_cnt,
- out_cnt, csum_updated);
+ out_cnt, csum_updated, extack);
if (err)
return err;
break;
case FLOW_ACTION_VLAN_POP:
- if (*a_len + sizeof(struct nfp_fl_pop_vlan) > NFP_FL_MAX_A_SIZ)
+ if (*a_len +
+ sizeof(struct nfp_fl_pop_vlan) > NFP_FL_MAX_A_SIZ) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: maximum allowed action list size exceeded at pop vlan");
return -EOPNOTSUPP;
+ }
pop_v = (struct nfp_fl_pop_vlan *)&nfp_fl->action_data[*a_len];
nfp_fl->meta.shortcut = cpu_to_be32(NFP_FL_SC_ACT_POPV);
@@ -814,8 +904,11 @@ nfp_flower_loop_action(struct nfp_app *app, const struct flow_action_entry *act,
*a_len += sizeof(struct nfp_fl_pop_vlan);
break;
case FLOW_ACTION_VLAN_PUSH:
- if (*a_len + sizeof(struct nfp_fl_push_vlan) > NFP_FL_MAX_A_SIZ)
+ if (*a_len +
+ sizeof(struct nfp_fl_push_vlan) > NFP_FL_MAX_A_SIZ) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: maximum allowed action list size exceeded at push vlan");
return -EOPNOTSUPP;
+ }
psh_v = (struct nfp_fl_push_vlan *)&nfp_fl->action_data[*a_len];
nfp_fl->meta.shortcut = cpu_to_be32(NFP_FL_SC_ACT_NULL);
@@ -826,35 +919,41 @@ nfp_flower_loop_action(struct nfp_app *app, const struct flow_action_entry *act,
case FLOW_ACTION_TUNNEL_ENCAP: {
const struct ip_tunnel_info *ip_tun = act->tunnel;
- *tun_type = nfp_fl_get_tun_from_act_l4_port(app, act);
- if (*tun_type == NFP_FL_TUNNEL_NONE)
+ *tun_type = nfp_fl_get_tun_from_act(app, flow, act, act_idx);
+ if (*tun_type == NFP_FL_TUNNEL_NONE) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: unsupported tunnel type in action list");
return -EOPNOTSUPP;
+ }
- if (ip_tun->mode & ~NFP_FL_SUPPORTED_TUNNEL_INFO_FLAGS)
+ if (ip_tun->mode & ~NFP_FL_SUPPORTED_TUNNEL_INFO_FLAGS) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: unsupported tunnel flags in action list");
return -EOPNOTSUPP;
+ }
/* Pre-tunnel action is required for tunnel encap.
* This checks for next hop entries on NFP.
* If none, the packet falls back before applying other actions.
*/
if (*a_len + sizeof(struct nfp_fl_pre_tunnel) +
- sizeof(struct nfp_fl_set_ipv4_udp_tun) > NFP_FL_MAX_A_SIZ)
+ sizeof(struct nfp_fl_set_ipv4_tun) > NFP_FL_MAX_A_SIZ) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: maximum allowed action list size exceeded at tunnel encap");
return -EOPNOTSUPP;
+ }
pre_tun = nfp_fl_pre_tunnel(nfp_fl->action_data, *a_len);
nfp_fl->meta.shortcut = cpu_to_be32(NFP_FL_SC_ACT_NULL);
*a_len += sizeof(struct nfp_fl_pre_tunnel);
- err = nfp_fl_push_geneve_options(nfp_fl, a_len, act);
+ err = nfp_fl_push_geneve_options(nfp_fl, a_len, act, extack);
if (err)
return err;
set_tun = (void *)&nfp_fl->action_data[*a_len];
- err = nfp_fl_set_ipv4_udp_tun(app, set_tun, act, pre_tun,
- *tun_type, netdev);
+ err = nfp_fl_set_ipv4_tun(app, set_tun, act, pre_tun,
+ *tun_type, netdev, extack);
if (err)
return err;
- *a_len += sizeof(struct nfp_fl_set_ipv4_udp_tun);
+ *a_len += sizeof(struct nfp_fl_set_ipv4_tun);
}
break;
case FLOW_ACTION_TUNNEL_DECAP:
@@ -862,13 +961,15 @@ nfp_flower_loop_action(struct nfp_app *app, const struct flow_action_entry *act,
return 0;
case FLOW_ACTION_MANGLE:
if (nfp_fl_pedit(act, flow, &nfp_fl->action_data[*a_len],
- a_len, csum_updated, set_act))
+ a_len, csum_updated, set_act, extack))
return -EOPNOTSUPP;
break;
case FLOW_ACTION_CSUM:
/* csum action requests recalc of something we have not fixed */
- if (act->csum_flags & ~*csum_updated)
+ if (act->csum_flags & ~*csum_updated) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: unsupported csum update action in action list");
return -EOPNOTSUPP;
+ }
/* If we will correctly fix the csum, we can remove it from the
* csum update list, which will later be used to check support.
*/
@@ -876,6 +977,7 @@ nfp_flower_loop_action(struct nfp_app *app, const struct flow_action_entry *act,
break;
default:
/* Currently we do not handle any other actions. */
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: unsupported action in action list");
return -EOPNOTSUPP;
}
@@ -919,9 +1021,10 @@ static bool nfp_fl_check_mangle_end(struct flow_action *flow_act,
}
int nfp_flower_compile_action(struct nfp_app *app,
- struct tc_cls_flower_offload *flow,
+ struct flow_cls_offload *flow,
struct net_device *netdev,
- struct nfp_fl_payload *nfp_flow)
+ struct nfp_fl_payload *nfp_flow,
+ struct netlink_ext_ack *extack)
{
int act_len, act_cnt, err, tun_out_cnt, out_cnt, i;
struct nfp_flower_pedit_acts set_act;
@@ -942,7 +1045,8 @@ int nfp_flower_compile_action(struct nfp_app *app,
memset(&set_act, 0, sizeof(set_act));
err = nfp_flower_loop_action(app, act, flow, nfp_flow, &act_len,
netdev, &tun_type, &tun_out_cnt,
- &out_cnt, &csum_updated, &set_act);
+ &out_cnt, &csum_updated,
+ &set_act, extack, i);
if (err)
return err;
act_cnt++;
diff --git a/drivers/net/ethernet/netronome/nfp/flower/cmsg.h b/drivers/net/ethernet/netronome/nfp/flower/cmsg.h
index 537f7fc19584..0f1706ae5bfc 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/cmsg.h
+++ b/drivers/net/ethernet/netronome/nfp/flower/cmsg.h
@@ -8,6 +8,7 @@
#include <linux/skbuff.h>
#include <linux/types.h>
#include <net/geneve.h>
+#include <net/gre.h>
#include <net/vxlan.h>
#include "../nfp_app.h"
@@ -22,6 +23,7 @@
#define NFP_FLOWER_LAYER_CT BIT(6)
#define NFP_FLOWER_LAYER_VXLAN BIT(7)
+#define NFP_FLOWER_LAYER2_GRE BIT(0)
#define NFP_FLOWER_LAYER2_GENEVE BIT(5)
#define NFP_FLOWER_LAYER2_GENEVE_OP BIT(6)
@@ -37,6 +39,9 @@
#define NFP_FL_IP_FRAG_FIRST BIT(7)
#define NFP_FL_IP_FRAGMENTED BIT(6)
+/* GRE Tunnel flags */
+#define NFP_FL_GRE_FLAG_KEY BIT(2)
+
/* Compressed HW representation of TCP Flags */
#define NFP_FL_TCP_FLAG_URG BIT(4)
#define NFP_FL_TCP_FLAG_PSH BIT(3)
@@ -107,6 +112,7 @@
enum nfp_flower_tun_type {
NFP_FL_TUNNEL_NONE = 0,
+ NFP_FL_TUNNEL_GRE = 1,
NFP_FL_TUNNEL_VXLAN = 2,
NFP_FL_TUNNEL_GENEVE = 4,
};
@@ -203,7 +209,7 @@ struct nfp_fl_pre_tunnel {
__be32 extra[3];
};
-struct nfp_fl_set_ipv4_udp_tun {
+struct nfp_fl_set_ipv4_tun {
struct nfp_fl_act_head head;
__be16 reserved;
__be64 tun_id __packed;
@@ -354,6 +360,16 @@ struct nfp_flower_ipv6 {
struct in6_addr ipv6_dst;
};
+struct nfp_flower_tun_ipv4 {
+ __be32 src;
+ __be32 dst;
+};
+
+struct nfp_flower_tun_ip_ext {
+ u8 tos;
+ u8 ttl;
+};
+
/* Flow Frame IPv4 UDP TUNNEL --> Tunnel details (4W/16B)
* -----------------------------------------------------------------
* 3 2 1
@@ -371,15 +387,42 @@ struct nfp_flower_ipv6 {
* +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
*/
struct nfp_flower_ipv4_udp_tun {
- __be32 ip_src;
- __be32 ip_dst;
+ struct nfp_flower_tun_ipv4 ipv4;
__be16 reserved1;
- u8 tos;
- u8 ttl;
+ struct nfp_flower_tun_ip_ext ip_ext;
__be32 reserved2;
__be32 tun_id;
};
+/* Flow Frame GRE TUNNEL --> Tunnel details (6W/24B)
+ * -----------------------------------------------------------------
+ * 3 2 1
+ * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv4_addr_src |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv4_addr_dst |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | tun_flags | tos | ttl |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | Reserved | Ethertype |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | Key |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | Reserved |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ */
+
+struct nfp_flower_ipv4_gre_tun {
+ struct nfp_flower_tun_ipv4 ipv4;
+ __be16 tun_flags;
+ struct nfp_flower_tun_ip_ext ip_ext;
+ __be16 reserved1;
+ __be16 ethertype;
+ __be32 tun_key;
+ __be32 reserved2;
+};
+
struct nfp_flower_geneve_options {
u8 data[NFP_FL_MAX_GENEVE_OPT_KEY];
};
@@ -530,6 +573,8 @@ nfp_fl_netdev_is_tunnel_type(struct net_device *netdev,
{
if (netif_is_vxlan(netdev))
return tun_type == NFP_FL_TUNNEL_VXLAN;
+ if (netif_is_gretap(netdev))
+ return tun_type == NFP_FL_TUNNEL_GRE;
if (netif_is_geneve(netdev))
return tun_type == NFP_FL_TUNNEL_GENEVE;
@@ -546,6 +591,8 @@ static inline bool nfp_fl_is_netdev_to_offload(struct net_device *netdev)
return true;
if (netif_is_geneve(netdev))
return true;
+ if (netif_is_gretap(netdev))
+ return true;
return false;
}
diff --git a/drivers/net/ethernet/netronome/nfp/flower/lag_conf.c b/drivers/net/ethernet/netronome/nfp/flower/lag_conf.c
index 5db838f45694..63907aeb3884 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/lag_conf.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/lag_conf.c
@@ -156,7 +156,8 @@ nfp_fl_lag_find_group_for_master_with_lag(struct nfp_fl_lag *lag,
int nfp_flower_lag_populate_pre_action(struct nfp_app *app,
struct net_device *master,
- struct nfp_fl_pre_lag *pre_act)
+ struct nfp_fl_pre_lag *pre_act,
+ struct netlink_ext_ack *extack)
{
struct nfp_flower_priv *priv = app->priv;
struct nfp_fl_lag_group *group = NULL;
@@ -167,6 +168,7 @@ int nfp_flower_lag_populate_pre_action(struct nfp_app *app,
master);
if (!group) {
mutex_unlock(&priv->nfp_lag.lock);
+ NL_SET_ERR_MSG_MOD(extack, "invalid entry: group does not exist for LAG action");
return -ENOENT;
}
diff --git a/drivers/net/ethernet/netronome/nfp/flower/main.h b/drivers/net/ethernet/netronome/nfp/flower/main.h
index 40957a8dbfe6..af9441d5787f 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/main.h
+++ b/drivers/net/ethernet/netronome/nfp/flower/main.h
@@ -343,19 +343,22 @@ int nfp_flower_merge_offloaded_flows(struct nfp_app *app,
struct nfp_fl_payload *sub_flow1,
struct nfp_fl_payload *sub_flow2);
int nfp_flower_compile_flow_match(struct nfp_app *app,
- struct tc_cls_flower_offload *flow,
+ struct flow_cls_offload *flow,
struct nfp_fl_key_ls *key_ls,
struct net_device *netdev,
struct nfp_fl_payload *nfp_flow,
- enum nfp_flower_tun_type tun_type);
+ enum nfp_flower_tun_type tun_type,
+ struct netlink_ext_ack *extack);
int nfp_flower_compile_action(struct nfp_app *app,
- struct tc_cls_flower_offload *flow,
+ struct flow_cls_offload *flow,
struct net_device *netdev,
- struct nfp_fl_payload *nfp_flow);
+ struct nfp_fl_payload *nfp_flow,
+ struct netlink_ext_ack *extack);
int nfp_compile_flow_metadata(struct nfp_app *app,
- struct tc_cls_flower_offload *flow,
+ struct flow_cls_offload *flow,
struct nfp_fl_payload *nfp_flow,
- struct net_device *netdev);
+ struct net_device *netdev,
+ struct netlink_ext_ack *extack);
void __nfp_modify_flow_metadata(struct nfp_flower_priv *priv,
struct nfp_fl_payload *nfp_flow);
int nfp_modify_flow_metadata(struct nfp_app *app,
@@ -389,7 +392,8 @@ int nfp_flower_lag_netdev_event(struct nfp_flower_priv *priv,
bool nfp_flower_lag_unprocessed_msg(struct nfp_app *app, struct sk_buff *skb);
int nfp_flower_lag_populate_pre_action(struct nfp_app *app,
struct net_device *master,
- struct nfp_fl_pre_lag *pre_act);
+ struct nfp_fl_pre_lag *pre_act,
+ struct netlink_ext_ack *extack);
int nfp_flower_lag_get_output_id(struct nfp_app *app,
struct net_device *master);
void nfp_flower_qos_init(struct nfp_app *app);
diff --git a/drivers/net/ethernet/netronome/nfp/flower/match.c b/drivers/net/ethernet/netronome/nfp/flower/match.c
index bfa4bf34911d..9cc3ba17ff69 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/match.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/match.c
@@ -10,9 +10,9 @@
static void
nfp_flower_compile_meta_tci(struct nfp_flower_meta_tci *ext,
struct nfp_flower_meta_tci *msk,
- struct tc_cls_flower_offload *flow, u8 key_type)
+ struct flow_cls_offload *flow, u8 key_type)
{
- struct flow_rule *rule = tc_cls_flower_offload_flow_rule(flow);
+ struct flow_rule *rule = flow_cls_offload_flow_rule(flow);
u16 tmp_tci;
memset(ext, 0, sizeof(struct nfp_flower_meta_tci));
@@ -54,7 +54,8 @@ nfp_flower_compile_ext_meta(struct nfp_flower_ext_meta *frame, u32 key_ext)
static int
nfp_flower_compile_port(struct nfp_flower_in_port *frame, u32 cmsg_port,
- bool mask_version, enum nfp_flower_tun_type tun_type)
+ bool mask_version, enum nfp_flower_tun_type tun_type,
+ struct netlink_ext_ack *extack)
{
if (mask_version) {
frame->in_port = cpu_to_be32(~0);
@@ -64,8 +65,10 @@ nfp_flower_compile_port(struct nfp_flower_in_port *frame, u32 cmsg_port,
if (tun_type) {
frame->in_port = cpu_to_be32(NFP_FL_PORT_TYPE_TUN | tun_type);
} else {
- if (!cmsg_port)
+ if (!cmsg_port) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: invalid ingress interface for match offload");
return -EOPNOTSUPP;
+ }
frame->in_port = cpu_to_be32(cmsg_port);
}
@@ -75,9 +78,9 @@ nfp_flower_compile_port(struct nfp_flower_in_port *frame, u32 cmsg_port,
static void
nfp_flower_compile_mac(struct nfp_flower_mac_mpls *ext,
struct nfp_flower_mac_mpls *msk,
- struct tc_cls_flower_offload *flow)
+ struct flow_cls_offload *flow)
{
- struct flow_rule *rule = tc_cls_flower_offload_flow_rule(flow);
+ struct flow_rule *rule = flow_cls_offload_flow_rule(flow);
memset(ext, 0, sizeof(struct nfp_flower_mac_mpls));
memset(msk, 0, sizeof(struct nfp_flower_mac_mpls));
@@ -127,9 +130,9 @@ nfp_flower_compile_mac(struct nfp_flower_mac_mpls *ext,
static void
nfp_flower_compile_tport(struct nfp_flower_tp_ports *ext,
struct nfp_flower_tp_ports *msk,
- struct tc_cls_flower_offload *flow)
+ struct flow_cls_offload *flow)
{
- struct flow_rule *rule = tc_cls_flower_offload_flow_rule(flow);
+ struct flow_rule *rule = flow_cls_offload_flow_rule(flow);
memset(ext, 0, sizeof(struct nfp_flower_tp_ports));
memset(msk, 0, sizeof(struct nfp_flower_tp_ports));
@@ -148,9 +151,9 @@ nfp_flower_compile_tport(struct nfp_flower_tp_ports *ext,
static void
nfp_flower_compile_ip_ext(struct nfp_flower_ip_ext *ext,
struct nfp_flower_ip_ext *msk,
- struct tc_cls_flower_offload *flow)
+ struct flow_cls_offload *flow)
{
- struct flow_rule *rule = tc_cls_flower_offload_flow_rule(flow);
+ struct flow_rule *rule = flow_cls_offload_flow_rule(flow);
if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_BASIC)) {
struct flow_match_basic match;
@@ -222,9 +225,9 @@ nfp_flower_compile_ip_ext(struct nfp_flower_ip_ext *ext,
static void
nfp_flower_compile_ipv4(struct nfp_flower_ipv4 *ext,
struct nfp_flower_ipv4 *msk,
- struct tc_cls_flower_offload *flow)
+ struct flow_cls_offload *flow)
{
- struct flow_rule *rule = tc_cls_flower_offload_flow_rule(flow);
+ struct flow_rule *rule = flow_cls_offload_flow_rule(flow);
struct flow_match_ipv4_addrs match;
memset(ext, 0, sizeof(struct nfp_flower_ipv4));
@@ -244,9 +247,9 @@ nfp_flower_compile_ipv4(struct nfp_flower_ipv4 *ext,
static void
nfp_flower_compile_ipv6(struct nfp_flower_ipv6 *ext,
struct nfp_flower_ipv6 *msk,
- struct tc_cls_flower_offload *flow)
+ struct flow_cls_offload *flow)
{
- struct flow_rule *rule = tc_cls_flower_offload_flow_rule(flow);
+ struct flow_rule *rule = flow_cls_offload_flow_rule(flow);
memset(ext, 0, sizeof(struct nfp_flower_ipv6));
memset(msk, 0, sizeof(struct nfp_flower_ipv6));
@@ -266,7 +269,7 @@ nfp_flower_compile_ipv6(struct nfp_flower_ipv6 *ext,
static int
nfp_flower_compile_geneve_opt(void *ext, void *msk,
- struct tc_cls_flower_offload *flow)
+ struct flow_cls_offload *flow)
{
struct flow_match_enc_opts match;
@@ -278,11 +281,76 @@ nfp_flower_compile_geneve_opt(void *ext, void *msk,
}
static void
+nfp_flower_compile_tun_ipv4_addrs(struct nfp_flower_tun_ipv4 *ext,
+ struct nfp_flower_tun_ipv4 *msk,
+ struct flow_cls_offload *flow)
+{
+ struct flow_rule *rule = flow_cls_offload_flow_rule(flow);
+
+ if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS)) {
+ struct flow_match_ipv4_addrs match;
+
+ flow_rule_match_enc_ipv4_addrs(rule, &match);
+ ext->src = match.key->src;
+ ext->dst = match.key->dst;
+ msk->src = match.mask->src;
+ msk->dst = match.mask->dst;
+ }
+}
+
+static void
+nfp_flower_compile_tun_ip_ext(struct nfp_flower_tun_ip_ext *ext,
+ struct nfp_flower_tun_ip_ext *msk,
+ struct flow_cls_offload *flow)
+{
+ struct flow_rule *rule = flow_cls_offload_flow_rule(flow);
+
+ if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_IP)) {
+ struct flow_match_ip match;
+
+ flow_rule_match_enc_ip(rule, &match);
+ ext->tos = match.key->tos;
+ ext->ttl = match.key->ttl;
+ msk->tos = match.mask->tos;
+ msk->ttl = match.mask->ttl;
+ }
+}
+
+static void
+nfp_flower_compile_ipv4_gre_tun(struct nfp_flower_ipv4_gre_tun *ext,
+ struct nfp_flower_ipv4_gre_tun *msk,
+ struct flow_cls_offload *flow)
+{
+ struct flow_rule *rule = flow_cls_offload_flow_rule(flow);
+
+ memset(ext, 0, sizeof(struct nfp_flower_ipv4_gre_tun));
+ memset(msk, 0, sizeof(struct nfp_flower_ipv4_gre_tun));
+
+ /* NVGRE is the only supported GRE tunnel type */
+ ext->ethertype = cpu_to_be16(ETH_P_TEB);
+ msk->ethertype = cpu_to_be16(~0);
+
+ if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_KEYID)) {
+ struct flow_match_enc_keyid match;
+
+ flow_rule_match_enc_keyid(rule, &match);
+ ext->tun_key = match.key->keyid;
+ msk->tun_key = match.mask->keyid;
+
+ ext->tun_flags = cpu_to_be16(NFP_FL_GRE_FLAG_KEY);
+ msk->tun_flags = cpu_to_be16(NFP_FL_GRE_FLAG_KEY);
+ }
+
+ nfp_flower_compile_tun_ipv4_addrs(&ext->ipv4, &msk->ipv4, flow);
+ nfp_flower_compile_tun_ip_ext(&ext->ip_ext, &msk->ip_ext, flow);
+}
+
+static void
nfp_flower_compile_ipv4_udp_tun(struct nfp_flower_ipv4_udp_tun *ext,
struct nfp_flower_ipv4_udp_tun *msk,
- struct tc_cls_flower_offload *flow)
+ struct flow_cls_offload *flow)
{
- struct flow_rule *rule = tc_cls_flower_offload_flow_rule(flow);
+ struct flow_rule *rule = flow_cls_offload_flow_rule(flow);
memset(ext, 0, sizeof(struct nfp_flower_ipv4_udp_tun));
memset(msk, 0, sizeof(struct nfp_flower_ipv4_udp_tun));
@@ -298,33 +366,17 @@ nfp_flower_compile_ipv4_udp_tun(struct nfp_flower_ipv4_udp_tun *ext,
msk->tun_id = cpu_to_be32(temp_vni);
}
- if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS)) {
- struct flow_match_ipv4_addrs match;
-
- flow_rule_match_enc_ipv4_addrs(rule, &match);
- ext->ip_src = match.key->src;
- ext->ip_dst = match.key->dst;
- msk->ip_src = match.mask->src;
- msk->ip_dst = match.mask->dst;
- }
-
- if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_IP)) {
- struct flow_match_ip match;
-
- flow_rule_match_enc_ip(rule, &match);
- ext->tos = match.key->tos;
- ext->ttl = match.key->ttl;
- msk->tos = match.mask->tos;
- msk->ttl = match.mask->ttl;
- }
+ nfp_flower_compile_tun_ipv4_addrs(&ext->ipv4, &msk->ipv4, flow);
+ nfp_flower_compile_tun_ip_ext(&ext->ip_ext, &msk->ip_ext, flow);
}
int nfp_flower_compile_flow_match(struct nfp_app *app,
- struct tc_cls_flower_offload *flow,
+ struct flow_cls_offload *flow,
struct nfp_fl_key_ls *key_ls,
struct net_device *netdev,
struct nfp_fl_payload *nfp_flow,
- enum nfp_flower_tun_type tun_type)
+ enum nfp_flower_tun_type tun_type,
+ struct netlink_ext_ack *extack)
{
u32 port_id;
int err;
@@ -357,13 +409,13 @@ int nfp_flower_compile_flow_match(struct nfp_app *app,
/* Populate Exact Port data. */
err = nfp_flower_compile_port((struct nfp_flower_in_port *)ext,
- port_id, false, tun_type);
+ port_id, false, tun_type, extack);
if (err)
return err;
/* Populate Mask Port Data. */
err = nfp_flower_compile_port((struct nfp_flower_in_port *)msk,
- port_id, true, tun_type);
+ port_id, true, tun_type, extack);
if (err)
return err;
@@ -402,12 +454,27 @@ int nfp_flower_compile_flow_match(struct nfp_app *app,
msk += sizeof(struct nfp_flower_ipv6);
}
+ if (key_ls->key_layer_two & NFP_FLOWER_LAYER2_GRE) {
+ __be32 tun_dst;
+
+ nfp_flower_compile_ipv4_gre_tun((void *)ext, (void *)msk, flow);
+ tun_dst = ((struct nfp_flower_ipv4_gre_tun *)ext)->ipv4.dst;
+ ext += sizeof(struct nfp_flower_ipv4_gre_tun);
+ msk += sizeof(struct nfp_flower_ipv4_gre_tun);
+
+ /* Store the tunnel destination in the rule data.
+ * This must be present and be an exact match.
+ */
+ nfp_flow->nfp_tun_ipv4_addr = tun_dst;
+ nfp_tunnel_add_ipv4_off(app, tun_dst);
+ }
+
if (key_ls->key_layer & NFP_FLOWER_LAYER_VXLAN ||
key_ls->key_layer_two & NFP_FLOWER_LAYER2_GENEVE) {
__be32 tun_dst;
nfp_flower_compile_ipv4_udp_tun((void *)ext, (void *)msk, flow);
- tun_dst = ((struct nfp_flower_ipv4_udp_tun *)ext)->ip_dst;
+ tun_dst = ((struct nfp_flower_ipv4_udp_tun *)ext)->ipv4.dst;
ext += sizeof(struct nfp_flower_ipv4_udp_tun);
msk += sizeof(struct nfp_flower_ipv4_udp_tun);
diff --git a/drivers/net/ethernet/netronome/nfp/flower/metadata.c b/drivers/net/ethernet/netronome/nfp/flower/metadata.c
index 3d326efdc814..7c4a15e967df 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/metadata.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/metadata.c
@@ -290,9 +290,10 @@ nfp_check_mask_remove(struct nfp_app *app, char *mask_data, u32 mask_len,
}
int nfp_compile_flow_metadata(struct nfp_app *app,
- struct tc_cls_flower_offload *flow,
+ struct flow_cls_offload *flow,
struct nfp_fl_payload *nfp_flow,
- struct net_device *netdev)
+ struct net_device *netdev,
+ struct netlink_ext_ack *extack)
{
struct nfp_fl_stats_ctx_to_flow *ctx_entry;
struct nfp_flower_priv *priv = app->priv;
@@ -302,8 +303,10 @@ int nfp_compile_flow_metadata(struct nfp_app *app,
int err;
err = nfp_get_stats_entry(app, &stats_cxt);
- if (err)
+ if (err) {
+ NL_SET_ERR_MSG_MOD(extack, "invalid entry: cannot allocate new stats context");
return err;
+ }
nfp_flow->meta.host_ctx_id = cpu_to_be32(stats_cxt);
nfp_flow->meta.host_cookie = cpu_to_be64(flow->cookie);
@@ -328,6 +331,12 @@ int nfp_compile_flow_metadata(struct nfp_app *app,
if (!nfp_check_mask_add(app, nfp_flow->mask_data,
nfp_flow->meta.mask_len,
&nfp_flow->meta.flags, &new_mask_id)) {
+ NL_SET_ERR_MSG_MOD(extack, "invalid entry: cannot allocate a new mask id");
+ if (nfp_release_stats_entry(app, stats_cxt)) {
+ NL_SET_ERR_MSG_MOD(extack, "invalid entry: cannot release stats context");
+ err = -EINVAL;
+ goto err_remove_rhash;
+ }
err = -ENOENT;
goto err_remove_rhash;
}
@@ -343,6 +352,21 @@ int nfp_compile_flow_metadata(struct nfp_app *app,
check_entry = nfp_flower_search_fl_table(app, flow->cookie, netdev);
if (check_entry) {
+ NL_SET_ERR_MSG_MOD(extack, "invalid entry: cannot offload duplicate flow entry");
+ if (nfp_release_stats_entry(app, stats_cxt)) {
+ NL_SET_ERR_MSG_MOD(extack, "invalid entry: cannot release stats context");
+ err = -EINVAL;
+ goto err_remove_mask;
+ }
+
+ if (!nfp_check_mask_remove(app, nfp_flow->mask_data,
+ nfp_flow->meta.mask_len,
+ NULL, &new_mask_id)) {
+ NL_SET_ERR_MSG_MOD(extack, "invalid entry: cannot release mask id");
+ err = -EINVAL;
+ goto err_remove_mask;
+ }
+
err = -EEXIST;
goto err_remove_mask;
}
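
The NL_SET_ERR_MSG_MOD() calls added above attach a static, module-prefixed string to the netlink extended ack so the failure reason reaches the user who issued the tc command. A rough userspace model of that reporting pattern (not the kernel API; names are illustrative):

#include <stdio.h>

/* Rough model of netlink extended-ack error strings (not the kernel API). */
struct ext_ack {
	const char *msg;	/* static string reported back to the user */
};

#define MODNAME "nfp"
#define SET_ERR_MSG_MOD(extack, str)			\
	do {						\
		if (extack)				\
			(extack)->msg = MODNAME ": " str; \
	} while (0)

static int add_flow(struct ext_ack *extack, int duplicate)
{
	if (duplicate) {
		SET_ERR_MSG_MOD(extack, "invalid entry: cannot offload duplicate flow entry");
		return -17;	/* -EEXIST */
	}
	return 0;
}

int main(void)
{
	struct ext_ack ea = { 0 };

	if (add_flow(&ea, 1))
		fprintf(stderr, "offload failed: %s\n", ea.msg);
	return 0;
}
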
diff --git a/drivers/net/ethernet/netronome/nfp/flower/offload.c b/drivers/net/ethernet/netronome/nfp/flower/offload.c
index 1fbfeb43c538..7e725fa60347 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/offload.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/offload.c
@@ -52,8 +52,7 @@
#define NFP_FLOWER_WHITELIST_TUN_DISSECTOR_R \
(BIT(FLOW_DISSECTOR_KEY_ENC_CONTROL) | \
- BIT(FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS) | \
- BIT(FLOW_DISSECTOR_KEY_ENC_PORTS))
+ BIT(FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS))
#define NFP_FLOWER_MERGE_FIELDS \
(NFP_FLOWER_LAYER_PORT | \
@@ -122,9 +121,9 @@ nfp_flower_xmit_flow(struct nfp_app *app, struct nfp_fl_payload *nfp_flow,
return 0;
}
-static bool nfp_flower_check_higher_than_mac(struct tc_cls_flower_offload *f)
+static bool nfp_flower_check_higher_than_mac(struct flow_cls_offload *f)
{
- struct flow_rule *rule = tc_cls_flower_offload_flow_rule(f);
+ struct flow_rule *rule = flow_cls_offload_flow_rule(f);
return flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_IPV4_ADDRS) ||
flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_IPV6_ADDRS) ||
@@ -132,14 +131,25 @@ static bool nfp_flower_check_higher_than_mac(struct tc_cls_flower_offload *f)
flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ICMP);
}
+static bool nfp_flower_check_higher_than_l3(struct flow_cls_offload *f)
+{
+ struct flow_rule *rule = flow_cls_offload_flow_rule(f);
+
+ return flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_PORTS) ||
+ flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ICMP);
+}
+
static int
-nfp_flower_calc_opt_layer(struct flow_match_enc_opts *enc_opts,
- u32 *key_layer_two, int *key_size)
+nfp_flower_calc_opt_layer(struct flow_dissector_key_enc_opts *enc_opts,
+ u32 *key_layer_two, int *key_size,
+ struct netlink_ext_ack *extack)
{
- if (enc_opts->key->len > NFP_FL_MAX_GENEVE_OPT_KEY)
+ if (enc_opts->len > NFP_FL_MAX_GENEVE_OPT_KEY) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: geneve options exceed maximum length");
return -EOPNOTSUPP;
+ }
- if (enc_opts->key->len > 0) {
+ if (enc_opts->len > 0) {
*key_layer_two |= NFP_FLOWER_LAYER2_GENEVE_OP;
*key_size += sizeof(struct nfp_flower_geneve_options);
}
@@ -148,13 +158,65 @@ nfp_flower_calc_opt_layer(struct flow_match_enc_opts *enc_opts,
}
static int
+nfp_flower_calc_udp_tun_layer(struct flow_dissector_key_ports *enc_ports,
+ struct flow_dissector_key_enc_opts *enc_op,
+ u32 *key_layer_two, u8 *key_layer, int *key_size,
+ struct nfp_flower_priv *priv,
+ enum nfp_flower_tun_type *tun_type,
+ struct netlink_ext_ack *extack)
+{
+ int err;
+
+ switch (enc_ports->dst) {
+ case htons(IANA_VXLAN_UDP_PORT):
+ *tun_type = NFP_FL_TUNNEL_VXLAN;
+ *key_layer |= NFP_FLOWER_LAYER_VXLAN;
+ *key_size += sizeof(struct nfp_flower_ipv4_udp_tun);
+
+ if (enc_op) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: encap options not supported on vxlan tunnels");
+ return -EOPNOTSUPP;
+ }
+ break;
+ case htons(GENEVE_UDP_PORT):
+ if (!(priv->flower_ext_feats & NFP_FL_FEATS_GENEVE)) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: loaded firmware does not support geneve offload");
+ return -EOPNOTSUPP;
+ }
+ *tun_type = NFP_FL_TUNNEL_GENEVE;
+ *key_layer |= NFP_FLOWER_LAYER_EXT_META;
+ *key_size += sizeof(struct nfp_flower_ext_meta);
+ *key_layer_two |= NFP_FLOWER_LAYER2_GENEVE;
+ *key_size += sizeof(struct nfp_flower_ipv4_udp_tun);
+
+ if (!enc_op)
+ break;
+ if (!(priv->flower_ext_feats & NFP_FL_FEATS_GENEVE_OPT)) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: loaded firmware does not support geneve option offload");
+ return -EOPNOTSUPP;
+ }
+ err = nfp_flower_calc_opt_layer(enc_op, key_layer_two,
+ key_size, extack);
+ if (err)
+ return err;
+ break;
+ default:
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: tunnel type unknown");
+ return -EOPNOTSUPP;
+ }
+
+ return 0;
+}
+
+static int
nfp_flower_calculate_key_layers(struct nfp_app *app,
struct net_device *netdev,
struct nfp_fl_key_ls *ret_key_ls,
- struct tc_cls_flower_offload *flow,
- enum nfp_flower_tun_type *tun_type)
+ struct flow_cls_offload *flow,
+ enum nfp_flower_tun_type *tun_type,
+ struct netlink_ext_ack *extack)
{
- struct flow_rule *rule = tc_cls_flower_offload_flow_rule(flow);
+ struct flow_rule *rule = flow_cls_offload_flow_rule(flow);
struct flow_dissector *dissector = rule->match.dissector;
struct flow_match_basic basic = { NULL, NULL};
struct nfp_flower_priv *priv = app->priv;
@@ -163,14 +225,18 @@ nfp_flower_calculate_key_layers(struct nfp_app *app,
int key_size;
int err;
- if (dissector->used_keys & ~NFP_FLOWER_WHITELIST_DISSECTOR)
+ if (dissector->used_keys & ~NFP_FLOWER_WHITELIST_DISSECTOR) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: match not supported");
return -EOPNOTSUPP;
+ }
/* If any tun dissector is used then the required set must be used. */
if (dissector->used_keys & NFP_FLOWER_WHITELIST_TUN_DISSECTOR &&
(dissector->used_keys & NFP_FLOWER_WHITELIST_TUN_DISSECTOR_R)
- != NFP_FLOWER_WHITELIST_TUN_DISSECTOR_R)
+ != NFP_FLOWER_WHITELIST_TUN_DISSECTOR_R) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: tunnel match not supported");
return -EOPNOTSUPP;
+ }
key_layer_two = 0;
key_layer = NFP_FLOWER_LAYER_PORT;
@@ -188,8 +254,10 @@ nfp_flower_calculate_key_layers(struct nfp_app *app,
flow_rule_match_vlan(rule, &vlan);
if (!(priv->flower_ext_feats & NFP_FL_FEATS_VLAN_PCP) &&
- vlan.key->vlan_priority)
+ vlan.key->vlan_priority) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: loaded firmware does not support VLAN PCP offload");
return -EOPNOTSUPP;
+ }
}
if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_CONTROL)) {
@@ -200,56 +268,68 @@ nfp_flower_calculate_key_layers(struct nfp_app *app,
flow_rule_match_enc_control(rule, &enc_ctl);
- if (enc_ctl.mask->addr_type != 0xffff ||
- enc_ctl.key->addr_type != FLOW_DISSECTOR_KEY_IPV4_ADDRS)
+ if (enc_ctl.mask->addr_type != 0xffff) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: wildcarded protocols on tunnels are not supported");
+ return -EOPNOTSUPP;
+ }
+ if (enc_ctl.key->addr_type != FLOW_DISSECTOR_KEY_IPV4_ADDRS) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: only IPv4 tunnels are supported");
return -EOPNOTSUPP;
+ }
/* These fields are already verified as used. */
flow_rule_match_enc_ipv4_addrs(rule, &ipv4_addrs);
- if (ipv4_addrs.mask->dst != cpu_to_be32(~0))
- return -EOPNOTSUPP;
-
- flow_rule_match_enc_ports(rule, &enc_ports);
- if (enc_ports.mask->dst != cpu_to_be16(~0))
+ if (ipv4_addrs.mask->dst != cpu_to_be32(~0)) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: only an exact match IPv4 destination address is supported");
return -EOPNOTSUPP;
+ }
if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_OPTS))
flow_rule_match_enc_opts(rule, &enc_op);
- switch (enc_ports.key->dst) {
- case htons(IANA_VXLAN_UDP_PORT):
- *tun_type = NFP_FL_TUNNEL_VXLAN;
- key_layer |= NFP_FLOWER_LAYER_VXLAN;
- key_size += sizeof(struct nfp_flower_ipv4_udp_tun);
- if (enc_op.key)
+ if (!flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_PORTS)) {
+ /* check if GRE, which has no enc_ports */
+ if (netif_is_gretap(netdev)) {
+ *tun_type = NFP_FL_TUNNEL_GRE;
+ key_layer |= NFP_FLOWER_LAYER_EXT_META;
+ key_size += sizeof(struct nfp_flower_ext_meta);
+ key_layer_two |= NFP_FLOWER_LAYER2_GRE;
+ key_size +=
+ sizeof(struct nfp_flower_ipv4_gre_tun);
+
+ if (enc_op.key) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: encap options not supported on GRE tunnels");
+ return -EOPNOTSUPP;
+ }
+ } else {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: an exact match on L4 destination port is required for non-GRE tunnels");
return -EOPNOTSUPP;
- break;
- case htons(GENEVE_UDP_PORT):
- if (!(priv->flower_ext_feats & NFP_FL_FEATS_GENEVE))
+ }
+ } else {
+ flow_rule_match_enc_ports(rule, &enc_ports);
+ if (enc_ports.mask->dst != cpu_to_be16(~0)) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: only an exact match L4 destination port is supported");
return -EOPNOTSUPP;
- *tun_type = NFP_FL_TUNNEL_GENEVE;
- key_layer |= NFP_FLOWER_LAYER_EXT_META;
- key_size += sizeof(struct nfp_flower_ext_meta);
- key_layer_two |= NFP_FLOWER_LAYER2_GENEVE;
- key_size += sizeof(struct nfp_flower_ipv4_udp_tun);
+ }
- if (!enc_op.key)
- break;
- if (!(priv->flower_ext_feats & NFP_FL_FEATS_GENEVE_OPT))
- return -EOPNOTSUPP;
- err = nfp_flower_calc_opt_layer(&enc_op, &key_layer_two,
- &key_size);
+ err = nfp_flower_calc_udp_tun_layer(enc_ports.key,
+ enc_op.key,
+ &key_layer_two,
+ &key_layer,
+ &key_size, priv,
+ tun_type, extack);
if (err)
return err;
- break;
- default:
- return -EOPNOTSUPP;
- }
- /* Ensure the ingress netdev matches the expected tun type. */
- if (!nfp_fl_netdev_is_tunnel_type(netdev, *tun_type))
- return -EOPNOTSUPP;
+ /* Ensure the ingress netdev matches the expected
+ * tun type.
+ */
+ if (!nfp_fl_netdev_is_tunnel_type(netdev, *tun_type)) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: ingress netdev does not match the expected tunnel type");
+ return -EOPNOTSUPP;
+ }
+ }
}
if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_BASIC))
@@ -272,6 +352,7 @@ nfp_flower_calculate_key_layers(struct nfp_app *app,
* because we rely on it to get to the host.
*/
case cpu_to_be16(ETH_P_ARP):
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: ARP not supported");
return -EOPNOTSUPP;
case cpu_to_be16(ETH_P_MPLS_UC):
@@ -290,14 +371,15 @@ nfp_flower_calculate_key_layers(struct nfp_app *app,
/* Other ethtype - we need to check the masks for the
* remainder of the key to ensure we can offload.
*/
- if (nfp_flower_check_higher_than_mac(flow))
+ if (nfp_flower_check_higher_than_mac(flow)) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: non IPv4/IPv6 offload with L3/L4 matches not supported");
return -EOPNOTSUPP;
+ }
break;
}
}
if (basic.mask && basic.mask->ip_proto) {
- /* Ethernet type is present in the key. */
switch (basic.key->ip_proto) {
case IPPROTO_TCP:
case IPPROTO_UDP:
@@ -311,7 +393,11 @@ nfp_flower_calculate_key_layers(struct nfp_app *app,
/* Other ip proto - we need to check the masks for the
* remainder of the key to ensure we can offload.
*/
- return -EOPNOTSUPP;
+ if (nfp_flower_check_higher_than_l3(flow)) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: unknown IP protocol with L4 matches not supported");
+ return -EOPNOTSUPP;
+ }
+ break;
}
}
@@ -322,22 +408,28 @@ nfp_flower_calculate_key_layers(struct nfp_app *app,
flow_rule_match_tcp(rule, &tcp);
tcp_flags = be16_to_cpu(tcp.key->flags);
- if (tcp_flags & ~NFP_FLOWER_SUPPORTED_TCPFLAGS)
+ if (tcp_flags & ~NFP_FLOWER_SUPPORTED_TCPFLAGS) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: no match support for selected TCP flags");
return -EOPNOTSUPP;
+ }
/* We only support PSH and URG flags when either
* FIN, SYN or RST is present as well.
*/
if ((tcp_flags & (TCPHDR_PSH | TCPHDR_URG)) &&
- !(tcp_flags & (TCPHDR_FIN | TCPHDR_SYN | TCPHDR_RST)))
+ !(tcp_flags & (TCPHDR_FIN | TCPHDR_SYN | TCPHDR_RST))) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: PSH and URG is only supported when used with FIN, SYN or RST");
return -EOPNOTSUPP;
+ }
/* We need to store TCP flags in either the IPv4 or IPv6 key
* space, thus we need to ensure we include an IPv4/IPv6 key
* layer if we have not done so already.
*/
- if (!basic.key)
+ if (!basic.key) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: match on TCP flags requires a match on L3 protocol");
return -EOPNOTSUPP;
+ }
if (!(key_layer & NFP_FLOWER_LAYER_IPV4) &&
!(key_layer & NFP_FLOWER_LAYER_IPV6)) {
@@ -353,6 +445,7 @@ nfp_flower_calculate_key_layers(struct nfp_app *app,
break;
default:
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: match on TCP flags requires a match on IPv4/IPv6");
return -EOPNOTSUPP;
}
}
@@ -362,8 +455,10 @@ nfp_flower_calculate_key_layers(struct nfp_app *app,
struct flow_match_control ctl;
flow_rule_match_control(rule, &ctl);
- if (ctl.key->flags & ~NFP_FLOWER_SUPPORTED_CTLFLAGS)
+ if (ctl.key->flags & ~NFP_FLOWER_SUPPORTED_CTLFLAGS) {
+ NL_SET_ERR_MSG_MOD(extack, "unsupported offload: match on unknown control flag");
return -EOPNOTSUPP;
+ }
}
ret_key_ls->key_layer = key_layer;
@@ -771,14 +866,16 @@ int nfp_flower_merge_offloaded_flows(struct nfp_app *app,
struct nfp_fl_payload *sub_flow1,
struct nfp_fl_payload *sub_flow2)
{
- struct tc_cls_flower_offload merge_tc_off;
+ struct flow_cls_offload merge_tc_off;
struct nfp_flower_priv *priv = app->priv;
+ struct netlink_ext_ack *extack = NULL;
struct nfp_fl_payload *merge_flow;
struct nfp_fl_key_ls merge_key_ls;
int err;
ASSERT_RTNL();
+ extack = merge_tc_off.common.extack;
if (sub_flow1 == sub_flow2 ||
nfp_flower_is_merge_flow(sub_flow1) ||
nfp_flower_is_merge_flow(sub_flow2))
@@ -816,7 +913,7 @@ int nfp_flower_merge_offloaded_flows(struct nfp_app *app,
merge_tc_off.cookie = merge_flow->tc_flower_cookie;
err = nfp_compile_flow_metadata(app, &merge_tc_off, merge_flow,
- merge_flow->ingress_dev);
+ merge_flow->ingress_dev, extack);
if (err)
goto err_unlink_sub_flow2;
@@ -865,15 +962,17 @@ err_destroy_merge_flow:
*/
static int
nfp_flower_add_offload(struct nfp_app *app, struct net_device *netdev,
- struct tc_cls_flower_offload *flow)
+ struct flow_cls_offload *flow)
{
enum nfp_flower_tun_type tun_type = NFP_FL_TUNNEL_NONE;
struct nfp_flower_priv *priv = app->priv;
+ struct netlink_ext_ack *extack = NULL;
struct nfp_fl_payload *flow_pay;
struct nfp_fl_key_ls *key_layer;
struct nfp_port *port = NULL;
int err;
+ extack = flow->common.extack;
if (nfp_netdev_is_nfp_repr(netdev))
port = nfp_port_from_netdev(netdev);
@@ -882,7 +981,7 @@ nfp_flower_add_offload(struct nfp_app *app, struct net_device *netdev,
return -ENOMEM;
err = nfp_flower_calculate_key_layers(app, netdev, key_layer, flow,
- &tun_type);
+ &tun_type, extack);
if (err)
goto err_free_key_ls;
@@ -893,23 +992,25 @@ nfp_flower_add_offload(struct nfp_app *app, struct net_device *netdev,
}
err = nfp_flower_compile_flow_match(app, flow, key_layer, netdev,
- flow_pay, tun_type);
+ flow_pay, tun_type, extack);
if (err)
goto err_destroy_flow;
- err = nfp_flower_compile_action(app, flow, netdev, flow_pay);
+ err = nfp_flower_compile_action(app, flow, netdev, flow_pay, extack);
if (err)
goto err_destroy_flow;
- err = nfp_compile_flow_metadata(app, flow, flow_pay, netdev);
+ err = nfp_compile_flow_metadata(app, flow, flow_pay, netdev, extack);
if (err)
goto err_destroy_flow;
flow_pay->tc_flower_cookie = flow->cookie;
err = rhashtable_insert_fast(&priv->flow_table, &flow_pay->fl_node,
nfp_flower_table_params);
- if (err)
+ if (err) {
+ NL_SET_ERR_MSG_MOD(extack, "invalid entry: cannot insert flow into tables for offloads");
goto err_release_metadata;
+ }
err = nfp_flower_xmit_flow(app, flow_pay,
NFP_FLOWER_CMSG_TYPE_FLOW_ADD);
@@ -1024,19 +1125,23 @@ nfp_flower_del_linked_merge_flows(struct nfp_app *app,
*/
static int
nfp_flower_del_offload(struct nfp_app *app, struct net_device *netdev,
- struct tc_cls_flower_offload *flow)
+ struct flow_cls_offload *flow)
{
struct nfp_flower_priv *priv = app->priv;
+ struct netlink_ext_ack *extack = NULL;
struct nfp_fl_payload *nfp_flow;
struct nfp_port *port = NULL;
int err;
+ extack = flow->common.extack;
if (nfp_netdev_is_nfp_repr(netdev))
port = nfp_port_from_netdev(netdev);
nfp_flow = nfp_flower_search_fl_table(app, flow->cookie, netdev);
- if (!nfp_flow)
+ if (!nfp_flow) {
+ NL_SET_ERR_MSG_MOD(extack, "invalid entry: cannot remove flow that does not exist");
return -ENOENT;
+ }
err = nfp_modify_flow_metadata(app, nfp_flow);
if (err)
@@ -1127,15 +1232,19 @@ nfp_flower_update_merge_stats(struct nfp_app *app,
*/
static int
nfp_flower_get_stats(struct nfp_app *app, struct net_device *netdev,
- struct tc_cls_flower_offload *flow)
+ struct flow_cls_offload *flow)
{
struct nfp_flower_priv *priv = app->priv;
+ struct netlink_ext_ack *extack = NULL;
struct nfp_fl_payload *nfp_flow;
u32 ctx_id;
+ extack = flow->common.extack;
nfp_flow = nfp_flower_search_fl_table(app, flow->cookie, netdev);
- if (!nfp_flow)
+ if (!nfp_flow) {
+ NL_SET_ERR_MSG_MOD(extack, "invalid entry: cannot dump stats for flow that does not exist");
return -EINVAL;
+ }
ctx_id = be32_to_cpu(nfp_flow->meta.host_ctx_id);
@@ -1156,17 +1265,17 @@ nfp_flower_get_stats(struct nfp_app *app, struct net_device *netdev,
static int
nfp_flower_repr_offload(struct nfp_app *app, struct net_device *netdev,
- struct tc_cls_flower_offload *flower)
+ struct flow_cls_offload *flower)
{
if (!eth_proto_is_802_3(flower->common.protocol))
return -EOPNOTSUPP;
switch (flower->command) {
- case TC_CLSFLOWER_REPLACE:
+ case FLOW_CLS_REPLACE:
return nfp_flower_add_offload(app, netdev, flower);
- case TC_CLSFLOWER_DESTROY:
+ case FLOW_CLS_DESTROY:
return nfp_flower_del_offload(app, netdev, flower);
- case TC_CLSFLOWER_STATS:
+ case FLOW_CLS_STATS:
return nfp_flower_get_stats(app, netdev, flower);
default:
return -EOPNOTSUPP;
@@ -1193,27 +1302,45 @@ static int nfp_flower_setup_tc_block_cb(enum tc_setup_type type,
}
}
+static LIST_HEAD(nfp_block_cb_list);
+
static int nfp_flower_setup_tc_block(struct net_device *netdev,
- struct tc_block_offload *f)
+ struct flow_block_offload *f)
{
struct nfp_repr *repr = netdev_priv(netdev);
struct nfp_flower_repr_priv *repr_priv;
+ struct flow_block_cb *block_cb;
- if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
+ if (f->binder_type != FLOW_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
return -EOPNOTSUPP;
repr_priv = repr->app_priv;
- repr_priv->block_shared = tcf_block_shared(f->block);
+ repr_priv->block_shared = f->block_shared;
+ f->driver_block_list = &nfp_block_cb_list;
switch (f->command) {
- case TC_BLOCK_BIND:
- return tcf_block_cb_register(f->block,
- nfp_flower_setup_tc_block_cb,
- repr, repr, f->extack);
- case TC_BLOCK_UNBIND:
- tcf_block_cb_unregister(f->block,
- nfp_flower_setup_tc_block_cb,
- repr);
+ case FLOW_BLOCK_BIND:
+ if (flow_block_cb_is_busy(nfp_flower_setup_tc_block_cb, repr,
+ &nfp_block_cb_list))
+ return -EBUSY;
+
+ block_cb = flow_block_cb_alloc(f->net,
+ nfp_flower_setup_tc_block_cb,
+ repr, repr, NULL);
+ if (IS_ERR(block_cb))
+ return PTR_ERR(block_cb);
+
+ flow_block_cb_add(block_cb, f);
+ list_add_tail(&block_cb->driver_list, &nfp_block_cb_list);
+ return 0;
+ case FLOW_BLOCK_UNBIND:
+ block_cb = flow_block_cb_lookup(f, nfp_flower_setup_tc_block_cb,
+ repr);
+ if (!block_cb)
+ return -ENOENT;
+
+ flow_block_cb_remove(block_cb, f);
+ list_del(&block_cb->driver_list);
return 0;
default:
return -EOPNOTSUPP;
@@ -1258,7 +1385,7 @@ static int nfp_flower_setup_indr_block_cb(enum tc_setup_type type,
void *type_data, void *cb_priv)
{
struct nfp_flower_indr_block_cb_priv *priv = cb_priv;
- struct tc_cls_flower_offload *flower = type_data;
+ struct flow_cls_offload *flower = type_data;
if (flower->common.chain_index)
return -EOPNOTSUPP;
@@ -1272,21 +1399,29 @@ static int nfp_flower_setup_indr_block_cb(enum tc_setup_type type,
}
}
+static void nfp_flower_setup_indr_tc_release(void *cb_priv)
+{
+ struct nfp_flower_indr_block_cb_priv *priv = cb_priv;
+
+ list_del(&priv->list);
+ kfree(priv);
+}
+
static int
nfp_flower_setup_indr_tc_block(struct net_device *netdev, struct nfp_app *app,
- struct tc_block_offload *f)
+ struct flow_block_offload *f)
{
struct nfp_flower_indr_block_cb_priv *cb_priv;
struct nfp_flower_priv *priv = app->priv;
- int err;
+ struct flow_block_cb *block_cb;
- if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS &&
- !(f->binder_type == TCF_BLOCK_BINDER_TYPE_CLSACT_EGRESS &&
+ if (f->binder_type != FLOW_BLOCK_BINDER_TYPE_CLSACT_INGRESS &&
+ !(f->binder_type == FLOW_BLOCK_BINDER_TYPE_CLSACT_EGRESS &&
nfp_flower_internal_port_can_offload(app, netdev)))
return -EOPNOTSUPP;
switch (f->command) {
- case TC_BLOCK_BIND:
+ case FLOW_BLOCK_BIND:
cb_priv = kmalloc(sizeof(*cb_priv), GFP_KERNEL);
if (!cb_priv)
return -ENOMEM;
@@ -1295,26 +1430,32 @@ nfp_flower_setup_indr_tc_block(struct net_device *netdev, struct nfp_app *app,
cb_priv->app = app;
list_add(&cb_priv->list, &priv->indr_block_cb_priv);
- err = tcf_block_cb_register(f->block,
- nfp_flower_setup_indr_block_cb,
- cb_priv, cb_priv, f->extack);
- if (err) {
+ block_cb = flow_block_cb_alloc(f->net,
+ nfp_flower_setup_indr_block_cb,
+ cb_priv, cb_priv,
+ nfp_flower_setup_indr_tc_release);
+ if (IS_ERR(block_cb)) {
list_del(&cb_priv->list);
kfree(cb_priv);
+ return PTR_ERR(block_cb);
}
- return err;
- case TC_BLOCK_UNBIND:
+ flow_block_cb_add(block_cb, f);
+ list_add_tail(&block_cb->driver_list, &nfp_block_cb_list);
+ return 0;
+ case FLOW_BLOCK_UNBIND:
cb_priv = nfp_flower_indr_block_cb_priv_lookup(app, netdev);
if (!cb_priv)
return -ENOENT;
- tcf_block_cb_unregister(f->block,
- nfp_flower_setup_indr_block_cb,
- cb_priv);
- list_del(&cb_priv->list);
- kfree(cb_priv);
+ block_cb = flow_block_cb_lookup(f,
+ nfp_flower_setup_indr_block_cb,
+ cb_priv);
+ if (!block_cb)
+ return -ENOENT;
+ flow_block_cb_remove(block_cb, f);
+ list_del(&block_cb->driver_list);
return 0;
default:
return -EOPNOTSUPP;
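
nfp_flower_calc_udp_tun_layer() above selects the tunnel type from the exact-match UDP destination port, VXLAN on 4789 and GENEVE on 6081, and rejects anything else. A minimal standalone sketch of that dispatch, using host-order ports for simplicity:

#include <stdint.h>

/* Well-known tunnel UDP destination ports (IANA): VXLAN 4789, GENEVE 6081. */
enum tun_type {
	TUN_NONE,
	TUN_VXLAN,
	TUN_GENEVE,
	TUN_GRE,	/* detected separately, via the netdev type, no enc_ports */
};

/* Classify a UDP tunnel by its exact-match destination port; ports are in
 * host byte order here (the driver compares big-endian values directly).
 */
static enum tun_type classify_udp_tun(uint16_t dst_port)
{
	switch (dst_port) {
	case 4789:
		return TUN_VXLAN;
	case 6081:
		return TUN_GENEVE;
	default:
		return TUN_NONE;	/* unknown port: reject the offload */
	}
}
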
diff --git a/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c b/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
index 8c67505865a4..a7a80f4b722a 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
@@ -162,8 +162,7 @@ void nfp_tunnel_keep_alive(struct nfp_app *app, struct sk_buff *skb)
}
pay_len = nfp_flower_cmsg_get_data_len(skb);
- if (pay_len != sizeof(struct nfp_tun_active_tuns) +
- sizeof(struct route_ip_info) * count) {
+ if (pay_len != struct_size(payload, tun_info, count)) {
nfp_flower_cmsg_warn(app, "Corruption in tunnel keep-alive message.\n");
return;
}
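
The struct_size() helper used above computes the size of a structure that ends in a flexible array member for a given element count; the kernel version also guards against overflow. A simplified userspace equivalent, with an illustrative message layout standing in for the real one:

#include <stddef.h>
#include <stdint.h>

/* Illustrative message ending in a flexible array member, standing in for
 * the keep-alive payload; field names here are made up for the example.
 */
struct active_tuns_msg {
	uint32_t flags;
	uint32_t count;
	struct { uint32_t ip; uint32_t idx; } tun_info[];
};

/* struct_size(p, member, n) is, ignoring the kernel's overflow saturation,
 * sizeof(*p) plus n elements of the flexible array member.
 */
#define STRUCT_SIZE(p, member, n) \
	(sizeof(*(p)) + (n) * sizeof((p)->member[0]))

/* Validate a payload length against the element count it claims to carry. */
static int payload_len_ok(const struct active_tuns_msg *msg,
			  size_t pay_len, uint32_t count)
{
	return pay_len == STRUCT_SIZE(msg, tun_info, count);
}
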
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_main.c b/drivers/net/ethernet/netronome/nfp/nfp_main.c
index 948d1a4b4643..60e57f08de80 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_main.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_main.c
@@ -596,6 +596,10 @@ static int nfp_pci_probe(struct pci_dev *pdev,
struct nfp_pf *pf;
int err;
+ if (pdev->vendor == PCI_VENDOR_ID_NETRONOME &&
+ pdev->device == PCI_DEVICE_ID_NETRONOME_NFP6000_VF)
+ dev_warn(&pdev->dev, "Binding NFP VF device to the NFP PF driver, the VF driver is called 'nfp_netvf'\n");
+
err = pci_enable_device(pdev);
if (err < 0)
return err;
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net.h b/drivers/net/ethernet/netronome/nfp/nfp_net.h
index df9aff2684ed..5d6c3738b494 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net.h
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net.h
@@ -12,11 +12,14 @@
#ifndef _NFP_NET_H_
#define _NFP_NET_H_
+#include <linux/atomic.h>
#include <linux/interrupt.h>
#include <linux/list.h>
#include <linux/netdevice.h>
#include <linux/pci.h>
#include <linux/io-64-nonatomic-hi-lo.h>
+#include <linux/semaphore.h>
+#include <linux/workqueue.h>
#include <net/xdp.h>
#include "nfp_net_ctrl.h"
@@ -238,7 +241,7 @@ struct nfp_net_tx_ring {
#define PCIE_DESC_RX_I_TCP_CSUM_OK cpu_to_le16(BIT(11))
#define PCIE_DESC_RX_I_UDP_CSUM cpu_to_le16(BIT(10))
#define PCIE_DESC_RX_I_UDP_CSUM_OK cpu_to_le16(BIT(9))
-#define PCIE_DESC_RX_BPF cpu_to_le16(BIT(8))
+#define PCIE_DESC_RX_DECRYPTED cpu_to_le16(BIT(8))
#define PCIE_DESC_RX_EOP cpu_to_le16(BIT(7))
#define PCIE_DESC_RX_IP4_CSUM cpu_to_le16(BIT(6))
#define PCIE_DESC_RX_IP4_CSUM_OK cpu_to_le16(BIT(5))
@@ -365,6 +368,7 @@ struct nfp_net_rx_ring {
* @hw_csum_rx_inner_ok: Counter of packets where the inner HW checksum was OK
* @hw_csum_rx_complete: Counter of packets with CHECKSUM_COMPLETE reported
* @hw_csum_rx_error: Counter of packets with bad checksums
+ * @hw_tls_rx: Number of packets with TLS decrypted by hardware
* @tx_sync: Seqlock for atomic updates of TX stats
* @tx_pkts: Number of Transmitted packets
* @tx_bytes: Number of Transmitted bytes
@@ -372,6 +376,11 @@ struct nfp_net_rx_ring {
* @hw_csum_tx_inner: Counter of inner TX checksum offload requests
* @tx_gather: Counter of packets with Gather DMA
* @tx_lso: Counter of LSO packets sent
+ * @hw_tls_tx: Counter of TLS packets sent with crypto offloaded to HW
+ * @tls_tx_fallback: Counter of TLS packets sent which had to be encrypted
+ * by the fallback path because packets came out of order
+ * @tls_tx_no_fallback: Counter of TLS packets not sent because the fallback
+ * path could not encrypt them
* @tx_errors: How many TX errors were encountered
* @tx_busy: How often was TX busy (no space)?
* @rx_replace_buf_alloc_fail: Counter of RX buffer allocation failures
@@ -392,7 +401,7 @@ struct nfp_net_r_vector {
struct {
struct tasklet_struct tasklet;
struct sk_buff_head queue;
- struct spinlock lock;
+ spinlock_t lock;
};
};
@@ -408,22 +417,30 @@ struct nfp_net_r_vector {
u64 hw_csum_rx_ok;
u64 hw_csum_rx_inner_ok;
u64 hw_csum_rx_complete;
+ u64 hw_tls_rx;
+
+ u64 hw_csum_rx_error;
+ u64 rx_replace_buf_alloc_fail;
struct nfp_net_tx_ring *xdp_ring;
struct u64_stats_sync tx_sync;
u64 tx_pkts;
u64 tx_bytes;
- u64 hw_csum_tx;
+
+ u64 ____cacheline_aligned_in_smp hw_csum_tx;
u64 hw_csum_tx_inner;
u64 tx_gather;
u64 tx_lso;
+ u64 hw_tls_tx;
- u64 hw_csum_rx_error;
- u64 rx_replace_buf_alloc_fail;
+ u64 tls_tx_fallback;
+ u64 tls_tx_no_fallback;
u64 tx_errors;
u64 tx_busy;
+ /* Cold data follows */
+
u32 irq_vector;
irq_handler_t handler;
char name[IFNAMSIZ + 8];
@@ -458,6 +475,7 @@ struct nfp_stat_pair {
* @netdev: Backpointer to net_device structure
* @is_vf: Is the driver attached to a VF?
* @chained_metadata_format: Firmware will use new metadata format
+ * @ktls_tx: Is kTLS TX enabled?
* @rx_dma_dir: Mapping direction for RX buffers
* @rx_dma_off: Offset at which DMA packets are placed (for XDP headroom)
* @rx_offset: Offset in the RX buffers where packet data starts
@@ -482,6 +500,7 @@ struct nfp_net_dp {
u8 is_vf:1;
u8 chained_metadata_format:1;
+ u8 ktls_tx:1;
u8 rx_dma_dir;
u8 rx_offset;
@@ -549,7 +568,7 @@ struct nfp_net_dp {
* @reconfig_timer: Timer for async reading of reconfig results
* @reconfig_in_progress_update: Update FW is processing now (debug only)
* @bar_lock: vNIC config BAR access lock, protects: update,
- * mailbox area
+ * mailbox area, crypto TLV
* @link_up: Is the link up?
* @link_status_lock: Protects @link_* and ensures atomicity with BAR reading
* @rx_coalesce_usecs: RX interrupt moderation usecs delay parameter
@@ -562,6 +581,18 @@ struct nfp_net_dp {
* @tx_bar: Pointer to mapped TX queues
* @rx_bar: Pointer to mapped FL/RX queues
* @tlv_caps: Parsed TLV capabilities
+ * @ktls_tx_conn_cnt: Number of offloaded kTLS TX connections
+ * @ktls_rx_conn_cnt: Number of offloaded kTLS RX connections
+ * @ktls_conn_id_gen: Trivial generator for kTLS connection ids (for TX)
+ * @ktls_no_space: Counter of firmware rejecting kTLS connection due to
+ * lack of space
+ * @mbox_cmsg: Common Control Message via vNIC mailbox state
+ * @mbox_cmsg.queue: CCM mbox queue of pending messages
+ * @mbox_cmsg.wq: CCM mbox wait queue of waiting processes
+ * @mbox_cmsg.workq: CCM mbox work queue for @wait_work and @runq_work
+ * @mbox_cmsg.wait_work: CCM mbox posted msg reconfig wait work
+ * @mbox_cmsg.runq_work: CCM mbox posted msg queue runner work
+ * @mbox_cmsg.tag: CCM mbox message tag allocator
* @debugfs_dir: Device directory in debugfs
* @vnic_list: Entry on device vNIC list
* @pdev: Backpointer to PCI device
@@ -620,7 +651,7 @@ struct nfp_net {
struct timer_list reconfig_timer;
u32 reconfig_in_progress_update;
- struct mutex bar_lock;
+ struct semaphore bar_lock;
u32 rx_coalesce_usecs;
u32 rx_coalesce_max_frames;
@@ -637,6 +668,22 @@ struct nfp_net {
struct nfp_net_tlv_caps tlv_caps;
+ unsigned int ktls_tx_conn_cnt;
+ unsigned int ktls_rx_conn_cnt;
+
+ atomic64_t ktls_conn_id_gen;
+
+ atomic_t ktls_no_space;
+
+ struct {
+ struct sk_buff_head queue;
+ wait_queue_head_t wq;
+ struct workqueue_struct *workq;
+ struct work_struct wait_work;
+ struct work_struct runq_work;
+ u16 tag;
+ } mbox_cmsg;
+
struct dentry *debugfs_dir;
struct list_head vnic_list;
@@ -848,12 +895,17 @@ static inline void nfp_ctrl_unlock(struct nfp_net *nn)
static inline void nn_ctrl_bar_lock(struct nfp_net *nn)
{
- mutex_lock(&nn->bar_lock);
+ down(&nn->bar_lock);
+}
+
+static inline bool nn_ctrl_bar_trylock(struct nfp_net *nn)
+{
+ return !down_trylock(&nn->bar_lock);
}
static inline void nn_ctrl_bar_unlock(struct nfp_net *nn)
{
- mutex_unlock(&nn->bar_lock);
+ up(&nn->bar_lock);
}
/* Globals */
@@ -883,6 +935,7 @@ void nfp_ctrl_close(struct nfp_net *nn);
void nfp_net_set_ethtool_ops(struct net_device *netdev);
void nfp_net_info(struct nfp_net *nn);
+int __nfp_net_reconfig(struct nfp_net *nn, u32 update);
int nfp_net_reconfig(struct nfp_net *nn, u32 update);
unsigned int nfp_net_rss_key_sz(struct nfp_net *nn);
void nfp_net_rss_write_itbl(struct nfp_net *nn);
@@ -891,6 +944,8 @@ void nfp_net_coalesce_write_cfg(struct nfp_net *nn);
int nfp_net_mbox_lock(struct nfp_net *nn, unsigned int data_size);
int nfp_net_mbox_reconfig(struct nfp_net *nn, u32 mbox_cmd);
int nfp_net_mbox_reconfig_and_unlock(struct nfp_net *nn, u32 mbox_cmd);
+void nfp_net_mbox_reconfig_post(struct nfp_net *nn, u32 update);
+int nfp_net_mbox_reconfig_wait_posted(struct nfp_net *nn);
unsigned int
nfp_net_irqs_alloc(struct pci_dev *pdev, struct msix_entry *irq_entries,
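
The switch of @bar_lock from a mutex to a semaphore presumably relies on the fact that a semaphore has no owner: it can be released from a different context than the one that took it, and it offers a non-blocking try-acquire (down_trylock above). A small POSIX model of that usage:

#include <semaphore.h>
#include <stdio.h>

/* A binary semaphore used as a lock: unlike a mutex it has no owner, so it
 * may be released from a different context (for example a deferred
 * completion handler) and it supports a non-blocking try-acquire.
 */
static sem_t bar_lock;

static void bar_lock_init(void)    { sem_init(&bar_lock, 0, 1); }
static void bar_lock_take(void)    { sem_wait(&bar_lock); }
static int  bar_lock_try(void)     { return sem_trywait(&bar_lock) == 0; }
static void bar_lock_release(void) { sem_post(&bar_lock); }

int main(void)
{
	bar_lock_init();

	bar_lock_take();
	/* ... post work; a completion path elsewhere may call bar_lock_release() ... */
	bar_lock_release();

	if (bar_lock_try()) {
		printf("acquired without blocking\n");
		bar_lock_release();
	}
	return 0;
}
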
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
index 36a3bd30cfd9..9903805717da 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
@@ -23,7 +23,6 @@
#include <linux/interrupt.h>
#include <linux/ip.h>
#include <linux/ipv6.h>
-#include <linux/lockdep.h>
#include <linux/mm.h>
#include <linux/overflow.h>
#include <linux/page_ref.h>
@@ -37,14 +36,17 @@
#include <linux/vmalloc.h>
#include <linux/ktime.h>
+#include <net/tls.h>
#include <net/vxlan.h>
#include "nfpcore/nfp_nsp.h"
+#include "ccm.h"
#include "nfp_app.h"
#include "nfp_net_ctrl.h"
#include "nfp_net.h"
#include "nfp_net_sriov.h"
#include "nfp_port.h"
+#include "crypto/crypto.h"
/**
* nfp_net_get_fw_version() - Read and parse the FW version
@@ -228,6 +230,7 @@ static void nfp_net_reconfig_sync_enter(struct nfp_net *nn)
spin_lock_bh(&nn->reconfig_lock);
+ WARN_ON(nn->reconfig_sync_present);
nn->reconfig_sync_present = true;
if (nn->reconfig_timer_active) {
@@ -271,12 +274,10 @@ static void nfp_net_reconfig_wait_posted(struct nfp_net *nn)
*
* Return: Negative errno on error, 0 on success
*/
-static int __nfp_net_reconfig(struct nfp_net *nn, u32 update)
+int __nfp_net_reconfig(struct nfp_net *nn, u32 update)
{
int ret;
- lockdep_assert_held(&nn->bar_lock);
-
nfp_net_reconfig_sync_enter(nn);
nfp_net_reconfig_start(nn, update);
@@ -331,7 +332,6 @@ int nfp_net_mbox_reconfig(struct nfp_net *nn, u32 mbox_cmd)
u32 mbox = nn->tlv_caps.mbox_off;
int ret;
- lockdep_assert_held(&nn->bar_lock);
nn_writeq(nn, mbox + NFP_NET_CFG_MBOX_SIMPLE_CMD, mbox_cmd);
ret = __nfp_net_reconfig(nn, NFP_NET_CFG_UPDATE_MBOX);
@@ -343,6 +343,24 @@ int nfp_net_mbox_reconfig(struct nfp_net *nn, u32 mbox_cmd)
return -nn_readl(nn, mbox + NFP_NET_CFG_MBOX_SIMPLE_RET);
}
+void nfp_net_mbox_reconfig_post(struct nfp_net *nn, u32 mbox_cmd)
+{
+ u32 mbox = nn->tlv_caps.mbox_off;
+
+ nn_writeq(nn, mbox + NFP_NET_CFG_MBOX_SIMPLE_CMD, mbox_cmd);
+
+ nfp_net_reconfig_post(nn, NFP_NET_CFG_UPDATE_MBOX);
+}
+
+int nfp_net_mbox_reconfig_wait_posted(struct nfp_net *nn)
+{
+ u32 mbox = nn->tlv_caps.mbox_off;
+
+ nfp_net_reconfig_wait_posted(nn);
+
+ return -nn_readl(nn, mbox + NFP_NET_CFG_MBOX_SIMPLE_RET);
+}
+
int nfp_net_mbox_reconfig_and_unlock(struct nfp_net *nn, u32 mbox_cmd)
{
int ret;
@@ -804,6 +822,99 @@ static void nfp_net_tx_csum(struct nfp_net_dp *dp,
u64_stats_update_end(&r_vec->tx_sync);
}
+static struct sk_buff *
+nfp_net_tls_tx(struct nfp_net_dp *dp, struct nfp_net_r_vector *r_vec,
+ struct sk_buff *skb, u64 *tls_handle, int *nr_frags)
+{
+#ifdef CONFIG_TLS_DEVICE
+ struct nfp_net_tls_offload_ctx *ntls;
+ struct sk_buff *nskb;
+ bool resync_pending;
+ u32 datalen, seq;
+
+ if (likely(!dp->ktls_tx))
+ return skb;
+ if (!skb->sk || !tls_is_sk_tx_device_offloaded(skb->sk))
+ return skb;
+
+ datalen = skb->len - (skb_transport_offset(skb) + tcp_hdrlen(skb));
+ seq = ntohl(tcp_hdr(skb)->seq);
+ ntls = tls_driver_ctx(skb->sk, TLS_OFFLOAD_CTX_DIR_TX);
+ resync_pending = tls_offload_tx_resync_pending(skb->sk);
+ if (unlikely(resync_pending || ntls->next_seq != seq)) {
+ /* Pure ACK out of order already */
+ if (!datalen)
+ return skb;
+
+ u64_stats_update_begin(&r_vec->tx_sync);
+ r_vec->tls_tx_fallback++;
+ u64_stats_update_end(&r_vec->tx_sync);
+
+ nskb = tls_encrypt_skb(skb);
+ if (!nskb) {
+ u64_stats_update_begin(&r_vec->tx_sync);
+ r_vec->tls_tx_no_fallback++;
+ u64_stats_update_end(&r_vec->tx_sync);
+ return NULL;
+ }
+ /* encryption wasn't necessary */
+ if (nskb == skb)
+ return skb;
+ /* we don't re-check ring space */
+ if (unlikely(skb_is_nonlinear(nskb))) {
+ nn_dp_warn(dp, "tls_encrypt_skb() produced fragmented frame\n");
+ u64_stats_update_begin(&r_vec->tx_sync);
+ r_vec->tx_errors++;
+ u64_stats_update_end(&r_vec->tx_sync);
+ dev_kfree_skb_any(nskb);
+ return NULL;
+ }
+
+ /* jump forward, a TX may have gotten lost, need to sync TX */
+ if (!resync_pending && seq - ntls->next_seq < U32_MAX / 4)
+ tls_offload_tx_resync_request(nskb->sk);
+
+ *nr_frags = 0;
+ return nskb;
+ }
+
+ if (datalen) {
+ u64_stats_update_begin(&r_vec->tx_sync);
+ if (!skb_is_gso(skb))
+ r_vec->hw_tls_tx++;
+ else
+ r_vec->hw_tls_tx += skb_shinfo(skb)->gso_segs;
+ u64_stats_update_end(&r_vec->tx_sync);
+ }
+
+ memcpy(tls_handle, ntls->fw_handle, sizeof(ntls->fw_handle));
+ ntls->next_seq += datalen;
+#endif
+ return skb;
+}
+
+static void nfp_net_tls_tx_undo(struct sk_buff *skb, u64 tls_handle)
+{
+#ifdef CONFIG_TLS_DEVICE
+ struct nfp_net_tls_offload_ctx *ntls;
+ u32 datalen, seq;
+
+ if (!tls_handle)
+ return;
+ if (WARN_ON_ONCE(!skb->sk || !tls_is_sk_tx_device_offloaded(skb->sk)))
+ return;
+
+ datalen = skb->len - (skb_transport_offset(skb) + tcp_hdrlen(skb));
+ seq = ntohl(tcp_hdr(skb)->seq);
+
+ ntls = tls_driver_ctx(skb->sk, TLS_OFFLOAD_CTX_DIR_TX);
+ if (ntls->next_seq == seq + datalen)
+ ntls->next_seq = seq;
+ else
+ WARN_ON_ONCE(1);
+#endif
+}
+
static void nfp_net_tx_xmit_more_flush(struct nfp_net_tx_ring *tx_ring)
{
wmb();
@@ -811,24 +922,47 @@ static void nfp_net_tx_xmit_more_flush(struct nfp_net_tx_ring *tx_ring)
tx_ring->wr_ptr_add = 0;
}
-static int nfp_net_prep_port_id(struct sk_buff *skb)
+static int nfp_net_prep_tx_meta(struct sk_buff *skb, u64 tls_handle)
{
struct metadata_dst *md_dst = skb_metadata_dst(skb);
unsigned char *data;
+ u32 meta_id = 0;
+ int md_bytes;
- if (likely(!md_dst))
- return 0;
- if (unlikely(md_dst->type != METADATA_HW_PORT_MUX))
+ if (likely(!md_dst && !tls_handle))
return 0;
+ if (unlikely(md_dst && md_dst->type != METADATA_HW_PORT_MUX)) {
+ if (!tls_handle)
+ return 0;
+ md_dst = NULL;
+ }
+
+ md_bytes = 4 + !!md_dst * 4 + !!tls_handle * 8;
- if (unlikely(skb_cow_head(skb, 8)))
+ if (unlikely(skb_cow_head(skb, md_bytes)))
return -ENOMEM;
- data = skb_push(skb, 8);
- put_unaligned_be32(NFP_NET_META_PORTID, data);
- put_unaligned_be32(md_dst->u.port_info.port_id, data + 4);
+ meta_id = 0;
+ data = skb_push(skb, md_bytes) + md_bytes;
+ if (md_dst) {
+ data -= 4;
+ put_unaligned_be32(md_dst->u.port_info.port_id, data);
+ meta_id = NFP_NET_META_PORTID;
+ }
+ if (tls_handle) {
+ /* conn handle is opaque, we just use u64 to be able to quickly
+ * compare it to zero
+ */
+ data -= 8;
+ memcpy(data, &tls_handle, sizeof(tls_handle));
+ meta_id <<= NFP_NET_META_FIELD_SIZE;
+ meta_id |= NFP_NET_META_CONN_HANDLE;
+ }
+
+ data -= 4;
+ put_unaligned_be32(meta_id, data);
- return 8;
+ return md_bytes;
}
/**
@@ -851,6 +985,7 @@ static int nfp_net_tx(struct sk_buff *skb, struct net_device *netdev)
struct nfp_net_dp *dp;
dma_addr_t dma_addr;
unsigned int fsize;
+ u64 tls_handle = 0;
u16 qidx;
dp = &nn->dp;
@@ -872,18 +1007,21 @@ static int nfp_net_tx(struct sk_buff *skb, struct net_device *netdev)
return NETDEV_TX_BUSY;
}
- md_bytes = nfp_net_prep_port_id(skb);
- if (unlikely(md_bytes < 0)) {
+ skb = nfp_net_tls_tx(dp, r_vec, skb, &tls_handle, &nr_frags);
+ if (unlikely(!skb)) {
nfp_net_tx_xmit_more_flush(tx_ring);
- dev_kfree_skb_any(skb);
return NETDEV_TX_OK;
}
+ md_bytes = nfp_net_prep_tx_meta(skb, tls_handle);
+ if (unlikely(md_bytes < 0))
+ goto err_flush;
+
/* Start with the head skbuf */
dma_addr = dma_map_single(dp->dev, skb->data, skb_headlen(skb),
DMA_TO_DEVICE);
if (dma_mapping_error(dp->dev, dma_addr))
- goto err_free;
+ goto err_dma_err;
wr_idx = D_IDX(tx_ring, tx_ring->wr_p);
@@ -979,12 +1117,14 @@ err_unmap:
tx_ring->txbufs[wr_idx].skb = NULL;
tx_ring->txbufs[wr_idx].dma_addr = 0;
tx_ring->txbufs[wr_idx].fidx = -2;
-err_free:
+err_dma_err:
nn_dp_warn(dp, "Failed to map DMA TX buffer\n");
+err_flush:
nfp_net_tx_xmit_more_flush(tx_ring);
u64_stats_update_begin(&r_vec->tx_sync);
r_vec->tx_errors++;
u64_stats_update_end(&r_vec->tx_sync);
+ nfp_net_tls_tx_undo(skb, tls_handle);
dev_kfree_skb_any(skb);
return NETDEV_TX_OK;
}
@@ -1857,6 +1997,15 @@ static int nfp_net_rx(struct nfp_net_rx_ring *rx_ring, int budget)
nfp_net_rx_csum(dp, r_vec, rxd, &meta, skb);
+#ifdef CONFIG_TLS_DEVICE
+ if (rxd->rxd.flags & PCIE_DESC_RX_DECRYPTED) {
+ skb->decrypted = true;
+ u64_stats_update_begin(&r_vec->rx_sync);
+ r_vec->hw_tls_rx++;
+ u64_stats_update_end(&r_vec->rx_sync);
+ }
+#endif
+
if (rxd->rxd.flags & PCIE_DESC_RX_VLAN)
__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
le16_to_cpu(rxd->rxd.vlan));
@@ -3705,7 +3854,7 @@ nfp_net_alloc(struct pci_dev *pdev, void __iomem *ctrl_bar, bool needs_netdev,
nn->dp.txd_cnt = NFP_NET_TX_DESCS_DEFAULT;
nn->dp.rxd_cnt = NFP_NET_RX_DESCS_DEFAULT;
- mutex_init(&nn->bar_lock);
+ sema_init(&nn->bar_lock, 1);
spin_lock_init(&nn->reconfig_lock);
spin_lock_init(&nn->link_status_lock);
@@ -3717,6 +3866,10 @@ nfp_net_alloc(struct pci_dev *pdev, void __iomem *ctrl_bar, bool needs_netdev,
if (err)
goto err_free_nn;
+ err = nfp_ccm_mbox_alloc(nn);
+ if (err)
+ goto err_free_nn;
+
return nn;
err_free_nn:
@@ -3734,8 +3887,7 @@ err_free_nn:
void nfp_net_free(struct nfp_net *nn)
{
WARN_ON(timer_pending(&nn->reconfig_timer) || nn->reconfig_posted);
-
- mutex_destroy(&nn->bar_lock);
+ nfp_ccm_mbox_free(nn);
if (nn->dp.netdev)
free_netdev(nn->dp.netdev);
@@ -4010,14 +4162,27 @@ int nfp_net_init(struct nfp_net *nn)
if (err)
return err;
- if (nn->dp.netdev)
+ if (nn->dp.netdev) {
nfp_net_netdev_init(nn);
+ err = nfp_ccm_mbox_init(nn);
+ if (err)
+ return err;
+
+ err = nfp_net_tls_init(nn);
+ if (err)
+ goto err_clean_mbox;
+ }
+
nfp_net_vecs_init(nn);
if (!nn->dp.netdev)
return 0;
return register_netdev(nn->dp.netdev);
+
+err_clean_mbox:
+ nfp_ccm_mbox_clean(nn);
+ return err;
}
/**
@@ -4030,5 +4195,6 @@ void nfp_net_clean(struct nfp_net *nn)
return;
unregister_netdev(nn->dp.netdev);
+ nfp_ccm_mbox_clean(nn);
nfp_net_reconfig_wait_posted(nn);
}
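
nfp_net_prep_tx_meta() above prepends a 4-byte type word followed by the metadata fields, 4 bytes for a port id and 8 for a TLS connection handle, packing each field's 4-bit type into the word lowest-nibble-first. A simplified model of that layout (field ids and byte order are shown for illustration only):

#include <stdint.h>
#include <string.h>

/* Simplified model of the TX metadata prepend: a 32-bit type word whose 4-bit
 * nibbles describe the fields that follow it, lowest nibble first. The field
 * ids mirror NFP_NET_META_PORTID (5) and NFP_NET_META_CONN_HANDLE (7); the
 * real driver writes big-endian values, which is ignored here.
 */
#define META_FIELD_SIZE		4
#define META_PORTID		5
#define META_CONN_HANDLE	7

/* Writes the metadata into buf (room for 16 bytes) and returns its length. */
static int build_tx_meta(uint8_t *buf, const uint32_t *port_id, uint64_t tls_handle)
{
	int md_bytes = 4 + (port_id ? 4 : 0) + (tls_handle ? 8 : 0);
	uint8_t *data = buf + md_bytes;	/* fields are written back to front */
	uint32_t meta_id = 0;

	if (!port_id && !tls_handle)
		return 0;

	if (port_id) {
		data -= 4;
		memcpy(data, port_id, 4);
		meta_id = META_PORTID;
	}
	if (tls_handle) {
		data -= 8;
		memcpy(data, &tls_handle, 8);	/* opaque firmware connection handle */
		meta_id = (meta_id << META_FIELD_SIZE) | META_CONN_HANDLE;
	}
	data -= 4;
	memcpy(data, &meta_id, 4);		/* the type word comes first */
	return md_bytes;
}
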
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.c b/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.c
index 6d5213b5bcb0..d835c14b7257 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.c
@@ -99,6 +99,21 @@ int nfp_net_tlv_caps_parse(struct device *dev, u8 __iomem *ctrl_mem,
caps->repr_cap = readl(data);
break;
+ case NFP_NET_CFG_TLV_TYPE_MBOX_CMSG_TYPES:
+ if (length >= 4)
+ caps->mbox_cmsg_types = readl(data);
+ break;
+ case NFP_NET_CFG_TLV_TYPE_CRYPTO_OPS:
+ if (length < 32) {
+ dev_err(dev,
+ "CRYPTO OPS TLV should be at least 32B, is %dB offset:%u\n",
+ length, offset);
+ return -EINVAL;
+ }
+
+ caps->crypto_ops = readl(data);
+ caps->crypto_enable_off = data - ctrl_mem + 16;
+ break;
default:
if (!FIELD_GET(NFP_NET_CFG_TLV_HEADER_REQUIRED, hdr))
break;
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h b/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h
index 25919e338071..ee6b24e4eacd 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h
@@ -44,6 +44,7 @@
#define NFP_NET_META_MARK 2
#define NFP_NET_META_PORTID 5
#define NFP_NET_META_CSUM 6 /* checksum complete type */
+#define NFP_NET_META_CONN_HANDLE 7
#define NFP_META_PORT_ID_CTRL ~0U
@@ -135,6 +136,7 @@
#define NFP_NET_CFG_UPDATE_MACADDR (0x1 << 11) /* MAC address change */
#define NFP_NET_CFG_UPDATE_MBOX (0x1 << 12) /* Mailbox update */
#define NFP_NET_CFG_UPDATE_VF (0x1 << 13) /* VF settings change */
+#define NFP_NET_CFG_UPDATE_CRYPTO (0x1 << 14) /* Crypto on/off */
#define NFP_NET_CFG_UPDATE_ERR (0x1 << 31) /* An error occurred */
#define NFP_NET_CFG_TXRS_ENABLE 0x0008
#define NFP_NET_CFG_RXRS_ENABLE 0x0010
@@ -394,6 +396,7 @@
#define NFP_NET_CFG_MBOX_CMD_CTAG_FILTER_KILL 2
#define NFP_NET_CFG_MBOX_CMD_PCI_DSCP_PRIOMAP_SET 5
+#define NFP_NET_CFG_MBOX_CMD_TLV_CMSG 6
/**
* VLAN filtering using general use mailbox
@@ -466,6 +469,16 @@
* %NFP_NET_CFG_TLV_TYPE_REPR_CAP:
* Single word, equivalent of %NFP_NET_CFG_CAP for representors, features which
* can be used on representors.
+ *
+ * %NFP_NET_CFG_TLV_TYPE_MBOX_CMSG_TYPES:
+ * Variable, bitmap of control message types supported by the mailbox handler.
+ * Bit 0 corresponds to message type 0, bit 1 to 1, etc. Control messages are
+ * encapsulated into simple TLVs, with an end TLV and written to the Mailbox.
+ *
+ * %NFP_NET_CFG_TLV_TYPE_CRYPTO_OPS:
+ * 8 words, bitmaps of supported and enabled crypto operations.
+ * First 16B (4 words) contains a bitmap of supported crypto operations,
+ * and the next 16B contain the enabled operations.
*/
#define NFP_NET_CFG_TLV_TYPE_UNKNOWN 0
#define NFP_NET_CFG_TLV_TYPE_RESERVED 1
@@ -475,6 +488,8 @@
#define NFP_NET_CFG_TLV_TYPE_EXPERIMENTAL0 5
#define NFP_NET_CFG_TLV_TYPE_EXPERIMENTAL1 6
#define NFP_NET_CFG_TLV_TYPE_REPR_CAP 7
+#define NFP_NET_CFG_TLV_TYPE_MBOX_CMSG_TYPES 10
+#define NFP_NET_CFG_TLV_TYPE_CRYPTO_OPS 11 /* see crypto/fw.h */
struct device;
@@ -484,12 +499,18 @@ struct device;
* @mbox_off: vNIC mailbox area offset
* @mbox_len: vNIC mailbox area length
* @repr_cap: capabilities for representors
+ * @mbox_cmsg_types: cmsgs which can be passed through the mailbox
+ * @crypto_ops: supported crypto operations
+ * @crypto_enable_off: offset of crypto ops enable region
*/
struct nfp_net_tlv_caps {
u32 me_freq_mhz;
unsigned int mbox_off;
unsigned int mbox_len;
u32 repr_cap;
+ u32 mbox_cmsg_types;
+ u32 crypto_ops;
+ unsigned int crypto_enable_off;
};
int nfp_net_tlv_caps_parse(struct device *dev, u8 __iomem *ctrl_mem,
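
The two new TLVs give the driver a bitmap of control-message types the mailbox handles and a supported/enabled split for crypto operations, with the enable region starting 16 bytes after the supported bitmap (hence the crypto_enable_off computation in the parser). A minimal sketch of consuming the parsed capabilities, with field names mirroring struct nfp_net_tlv_caps:

#include <stdint.h>

/* Capabilities parsed from the vNIC TLV area; the field names mirror
 * struct nfp_net_tlv_caps above.
 */
struct tlv_caps {
	uint32_t mbox_cmsg_types;	/* bit N set: cmsg type N goes via the mailbox */
	uint32_t crypto_ops;		/* first word of the supported-ops bitmap */
	unsigned int crypto_enable_off;	/* supported bitmap offset + 16B, per the parser */
};

/* Can control message type 'type' be sent through the mailbox? */
static int mbox_cmsg_supported(const struct tlv_caps *caps, unsigned int type)
{
	return type < 32 && (caps->mbox_cmsg_types & (1u << type));
}
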
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
index 851e31e0ba8e..d9cbe84ac6ad 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
@@ -150,8 +150,9 @@ static const struct nfp_et_stat nfp_mac_et_stats[] = {
#define NN_ET_GLOBAL_STATS_LEN ARRAY_SIZE(nfp_net_et_stats)
#define NN_ET_SWITCH_STATS_LEN 9
-#define NN_RVEC_GATHER_STATS 9
+#define NN_RVEC_GATHER_STATS 13
#define NN_RVEC_PER_Q_STATS 3
+#define NN_CTRL_PATH_STATS 1
#define SFP_SFF_REV_COMPLIANCE 1
@@ -423,7 +424,8 @@ static unsigned int nfp_vnic_get_sw_stats_count(struct net_device *netdev)
{
struct nfp_net *nn = netdev_priv(netdev);
- return NN_RVEC_GATHER_STATS + nn->max_r_vecs * NN_RVEC_PER_Q_STATS;
+ return NN_RVEC_GATHER_STATS + nn->max_r_vecs * NN_RVEC_PER_Q_STATS +
+ NN_CTRL_PATH_STATS;
}
static u8 *nfp_vnic_get_sw_stats_strings(struct net_device *netdev, u8 *data)
@@ -442,10 +444,16 @@ static u8 *nfp_vnic_get_sw_stats_strings(struct net_device *netdev, u8 *data)
data = nfp_pr_et(data, "hw_rx_csum_complete");
data = nfp_pr_et(data, "hw_rx_csum_err");
data = nfp_pr_et(data, "rx_replace_buf_alloc_fail");
+ data = nfp_pr_et(data, "rx_tls_decrypted");
data = nfp_pr_et(data, "hw_tx_csum");
data = nfp_pr_et(data, "hw_tx_inner_csum");
data = nfp_pr_et(data, "tx_gather");
data = nfp_pr_et(data, "tx_lso");
+ data = nfp_pr_et(data, "tx_tls_encrypted");
+ data = nfp_pr_et(data, "tx_tls_ooo");
+ data = nfp_pr_et(data, "tx_tls_drop_no_sync_data");
+
+ data = nfp_pr_et(data, "hw_tls_no_space");
return data;
}
@@ -468,16 +476,20 @@ static u64 *nfp_vnic_get_sw_stats(struct net_device *netdev, u64 *data)
tmp[2] = nn->r_vecs[i].hw_csum_rx_complete;
tmp[3] = nn->r_vecs[i].hw_csum_rx_error;
tmp[4] = nn->r_vecs[i].rx_replace_buf_alloc_fail;
+ tmp[5] = nn->r_vecs[i].hw_tls_rx;
} while (u64_stats_fetch_retry(&nn->r_vecs[i].rx_sync, start));
do {
start = u64_stats_fetch_begin(&nn->r_vecs[i].tx_sync);
data[1] = nn->r_vecs[i].tx_pkts;
data[2] = nn->r_vecs[i].tx_busy;
- tmp[5] = nn->r_vecs[i].hw_csum_tx;
- tmp[6] = nn->r_vecs[i].hw_csum_tx_inner;
- tmp[7] = nn->r_vecs[i].tx_gather;
- tmp[8] = nn->r_vecs[i].tx_lso;
+ tmp[6] = nn->r_vecs[i].hw_csum_tx;
+ tmp[7] = nn->r_vecs[i].hw_csum_tx_inner;
+ tmp[8] = nn->r_vecs[i].tx_gather;
+ tmp[9] = nn->r_vecs[i].tx_lso;
+ tmp[10] = nn->r_vecs[i].hw_tls_tx;
+ tmp[11] = nn->r_vecs[i].tls_tx_fallback;
+ tmp[12] = nn->r_vecs[i].tls_tx_no_fallback;
} while (u64_stats_fetch_retry(&nn->r_vecs[i].tx_sync, start));
data += NN_RVEC_PER_Q_STATS;
@@ -489,6 +501,8 @@ static u64 *nfp_vnic_get_sw_stats(struct net_device *netdev, u64 *data)
for (j = 0; j < NN_RVEC_GATHER_STATS; j++)
*data++ = gathered_stats[j];
+ *data++ = atomic_read(&nn->ktls_no_space);
+
return data;
}
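
The ethtool changes above grow the per-vector "gathered" stats from 9 to 13 entries and append one control-path stat, so the tmp[] indices must stay in step with the order the stat names are emitted. A compact model of that sum-then-emit layout, with index names derived from the strings above:

#include <stdint.h>

/* Per-vector counters are snapshotted and summed into one "gathered" array;
 * the index order must match the order the stat names are emitted in.
 */
enum gather_idx {
	G_HW_RX_CSUM_OK, G_HW_RX_CSUM_INNER_OK, G_HW_RX_CSUM_COMPLETE,
	G_HW_RX_CSUM_ERR, G_RX_REPLACE_BUF_FAIL, G_RX_TLS_DECRYPTED,
	G_HW_TX_CSUM, G_HW_TX_INNER_CSUM, G_TX_GATHER, G_TX_LSO,
	G_TX_TLS_ENCRYPTED, G_TX_TLS_OOO, G_TX_TLS_DROP_NO_SYNC,
	G_NUM_STATS	/* 13, matching NN_RVEC_GATHER_STATS */
};

static void gather_stats(uint64_t out[G_NUM_STATS],
			 const uint64_t per_vec[][G_NUM_STATS], int nvecs)
{
	for (int v = 0; v < nvecs; v++)
		for (int s = 0; s < G_NUM_STATS; s++)
			out[s] += per_vec[v][s];
}
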
diff --git a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp.c b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp.c
index 42cf4fd875ea..9a08623c325d 100644
--- a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp.c
+++ b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp.c
@@ -241,11 +241,16 @@ static int nfp_nsp_check(struct nfp_nsp *state)
state->ver.major = FIELD_GET(NSP_STATUS_MAJOR, reg);
state->ver.minor = FIELD_GET(NSP_STATUS_MINOR, reg);
- if (state->ver.major != NSP_MAJOR || state->ver.minor < NSP_MINOR) {
+ if (state->ver.major != NSP_MAJOR) {
nfp_err(cpp, "Unsupported ABI %hu.%hu\n",
state->ver.major, state->ver.minor);
return -EINVAL;
}
+ if (state->ver.minor < NSP_MINOR) {
+ nfp_err(cpp, "ABI too old to support NIC operation (%u.%hu < %u.%u), please update the management FW on the flash\n",
+ NSP_MAJOR, state->ver.minor, NSP_MAJOR, NSP_MINOR);
+ return -EINVAL;
+ }
if (reg & NSP_STATUS_BUSY) {
nfp_err(cpp, "Service processor busy!\n");
diff --git a/drivers/net/ethernet/ni/nixge.c b/drivers/net/ethernet/ni/nixge.c
index 96f7a9818294..0b384f97d2fd 100644
--- a/drivers/net/ethernet/ni/nixge.c
+++ b/drivers/net/ethernet/ni/nixge.c
@@ -990,7 +990,7 @@ static void nixge_ethtools_get_drvinfo(struct net_device *ndev,
struct ethtool_drvinfo *ed)
{
strlcpy(ed->driver, "nixge", sizeof(ed->driver));
- strlcpy(ed->bus_info, "platform", sizeof(ed->driver));
+ strlcpy(ed->bus_info, "platform", sizeof(ed->bus_info));
}
static int nixge_ethtools_get_coalesce(struct net_device *ndev,
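
The nixge fix above is the classic copy-size bug: the length passed to the bounded copy must be the size of the destination being written, not of a neighbouring field. A tiny standalone illustration using plain strncpy:

#include <string.h>

/* The bound must come from the destination buffer, not a neighbouring field. */
struct drvinfo {
	char driver[32];
	char bus_info[32];
};

static void fill_drvinfo(struct drvinfo *ed)
{
	/* buggy:  strncpy(ed->bus_info, "platform", sizeof(ed->driver)); */
	strncpy(ed->bus_info, "platform", sizeof(ed->bus_info) - 1);
	ed->bus_info[sizeof(ed->bus_info) - 1] = '\0';
}
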
diff --git a/drivers/net/ethernet/pasemi/pasemi_mac.c b/drivers/net/ethernet/pasemi/pasemi_mac.c
index bf5a7bca0298..be6660128b55 100644
--- a/drivers/net/ethernet/pasemi/pasemi_mac.c
+++ b/drivers/net/ethernet/pasemi/pasemi_mac.c
@@ -1042,7 +1042,6 @@ static int pasemi_mac_phy_init(struct net_device *dev)
dn = pci_device_to_OF_node(mac->pdev);
phy_dn = of_parse_phandle(dn, "phy-handle", 0);
- of_node_put(phy_dn);
mac->link = 0;
mac->speed = 0;
@@ -1051,6 +1050,7 @@ static int pasemi_mac_phy_init(struct net_device *dev)
phydev = of_phy_connect(dev, phy_dn, &pasemi_adjust_link, 0,
PHY_INTERFACE_MODE_SGMII);
+ of_node_put(phy_dn);
if (!phydev) {
printk(KERN_ERR "%s: Could not attach to phy\n", dev->name);
return -ENODEV;
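
The pasemi change moves of_node_put() to after of_phy_connect(), keeping the reference alive across the last use of the node. A self-contained toy refcount example of that ordering rule:

#include <stdio.h>
#include <stdlib.h>

/* Toy reference counting: the reference is dropped only after the last use
 * of the object it protects.
 */
struct node {
	int refcount;
	const char *name;
};

static struct node *node_get(struct node *n) { n->refcount++; return n; }

static void node_put(struct node *n)
{
	if (--n->refcount == 0)
		free(n);	/* the object may disappear here */
}

int main(void)
{
	struct node *n = calloc(1, sizeof(*n));

	n->refcount = 1;	/* creator's reference */
	n->name = "phy";

	node_get(n);			/* a lookup hands back a referenced object */
	printf("using %s\n", n->name);	/* use while the reference is still held */
	node_put(n);			/* putting it before the use risks use-after-free */

	node_put(n);			/* drop the creator's reference; object is freed */
	return 0;
}
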
diff --git a/drivers/net/ethernet/qlogic/Kconfig b/drivers/net/ethernet/qlogic/Kconfig
index fdbb3ce00e20..a391cf6ee4b2 100644
--- a/drivers/net/ethernet/qlogic/Kconfig
+++ b/drivers/net/ethernet/qlogic/Kconfig
@@ -87,6 +87,7 @@ config QED
depends on PCI
select ZLIB_INFLATE
select CRC8
+ select NET_DEVLINK
---help---
This enables the support for ...
diff --git a/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c b/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c
index 84cb62434556..58e2eaf77014 100644
--- a/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c
+++ b/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c
@@ -3248,6 +3248,7 @@ netxen_config_indev_addr(struct netxen_adapter *adapter,
struct net_device *dev, unsigned long event)
{
struct in_device *indev;
+ struct in_ifaddr *ifa;
if (!netxen_destip_supported(adapter))
return;
@@ -3256,7 +3257,8 @@ netxen_config_indev_addr(struct netxen_adapter *adapter,
if (!indev)
return;
- for_ifa(indev) {
+ rcu_read_lock();
+ in_dev_for_each_ifa_rcu(ifa, indev) {
switch (event) {
case NETDEV_UP:
netxen_list_config_ip(adapter, ifa, NX_IP_UP);
@@ -3267,8 +3269,8 @@ netxen_config_indev_addr(struct netxen_adapter *adapter,
default:
break;
}
- } endfor_ifa(indev);
-
+ }
+ rcu_read_unlock();
in_dev_put(indev);
}
diff --git a/drivers/net/ethernet/qlogic/qed/qed.h b/drivers/net/ethernet/qlogic/qed/qed.h
index c5e96ce20f59..89fe091c958d 100644
--- a/drivers/net/ethernet/qlogic/qed/qed.h
+++ b/drivers/net/ethernet/qlogic/qed/qed.h
@@ -140,6 +140,7 @@ struct qed_cxt_mngr;
struct qed_sb_sp_info;
struct qed_ll2_info;
struct qed_mcp_info;
+struct qed_llh_info;
struct qed_rt_data {
u32 *init_val;
@@ -741,6 +742,7 @@ struct qed_dev {
#define QED_DEV_ID_MASK 0xff00
#define QED_DEV_ID_MASK_BB 0x1600
#define QED_DEV_ID_MASK_AH 0x8000
+#define QED_IS_E4(dev) (QED_IS_BB(dev) || QED_IS_AH(dev))
u16 chip_num;
#define CHIP_NUM_MASK 0xffff
@@ -801,6 +803,11 @@ struct qed_dev {
u8 num_hwfns;
struct qed_hwfn hwfns[MAX_HWFNS_PER_DEVICE];
+ /* Engine affinity */
+ u8 l2_affin_hint;
+ u8 fir_affin;
+ u8 iwarp_affin;
+
/* SRIOV */
struct qed_hw_sriov_info *p_iov_info;
#define IS_QED_SRIOV(cdev) (!!(cdev)->p_iov_info)
@@ -815,6 +822,10 @@ struct qed_dev {
/* Recovery */
bool recov_in_prog;
+ /* LLH info */
+ u8 ppfid_bitmap;
+ struct qed_llh_info *p_llh_info;
+
/* Linux specific here */
struct qede_dev *edev;
struct pci_dev *pdev;
@@ -852,6 +863,9 @@ struct qed_dev {
u32 rdma_max_inline;
u32 rdma_max_srq_sge;
u16 tunn_feature_mask;
+
+ struct devlink *dl;
+ bool iwarp_cmt;
};
#define NUM_OF_VFS(dev) (QED_IS_BB(dev) ? MAX_NUM_VFS_BB \
@@ -904,6 +918,14 @@ void qed_set_fw_mac_addr(__le16 *fw_msb,
__le16 *fw_mid, __le16 *fw_lsb, u8 *mac);
#define QED_LEADING_HWFN(dev) (&dev->hwfns[0])
+#define QED_IS_CMT(dev) ((dev)->num_hwfns > 1)
+/* Macros for getting the engine-affinitized hwfn (FIR: fcoe,iscsi,roce) */
+#define QED_FIR_AFFIN_HWFN(dev) (&(dev)->hwfns[dev->fir_affin])
+#define QED_IWARP_AFFIN_HWFN(dev) (&(dev)->hwfns[dev->iwarp_affin])
+#define QED_AFFIN_HWFN(dev) \
+ (QED_IS_IWARP_PERSONALITY(QED_LEADING_HWFN(dev)) ? \
+ QED_IWARP_AFFIN_HWFN(dev) : QED_FIR_AFFIN_HWFN(dev))
+#define QED_AFFIN_HWFN_IDX(dev) (IS_LEAD_HWFN(QED_AFFIN_HWFN(dev)) ? 0 : 1)
/* Flags for indication of required queues */
#define PQ_FLAGS_RLS (BIT(0))
@@ -923,8 +945,6 @@ u16 qed_get_cm_pq_idx_vf(struct qed_hwfn *p_hwfn, u16 vf);
u16 qed_get_cm_pq_idx_ofld_mtc(struct qed_hwfn *p_hwfn, u8 tc);
u16 qed_get_cm_pq_idx_llt_mtc(struct qed_hwfn *p_hwfn, u8 tc);
-#define QED_LEADING_HWFN(dev) (&dev->hwfns[0])
-
/* doorbell recovery mechanism */
void qed_db_recovery_dp(struct qed_hwfn *p_hwfn);
void qed_db_recovery_execute(struct qed_hwfn *p_hwfn);
diff --git a/drivers/net/ethernet/qlogic/qed/qed_cxt.c b/drivers/net/ethernet/qlogic/qed/qed_cxt.c
index e61d1d905415..8e1bdf58b9e7 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_cxt.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_cxt.c
@@ -2351,7 +2351,8 @@ qed_cxt_dynamic_ilt_alloc(struct qed_hwfn *p_hwfn,
/* Write via DMAE since the PSWRQ2_REG_ILT_MEMORY line is a wide-bus */
qed_dmae_host2grc(p_hwfn, p_ptt, (u64) (uintptr_t)&ilt_hw_entry,
- reg_offset, sizeof(ilt_hw_entry) / sizeof(u32), 0);
+ reg_offset, sizeof(ilt_hw_entry) / sizeof(u32),
+ NULL);
if (elem_type == QED_ELEM_CXT) {
u32 last_cid_allocated = (1 + (iid / elems_per_p)) *
@@ -2457,7 +2458,7 @@ qed_cxt_free_ilt_range(struct qed_hwfn *p_hwfn,
(u64) (uintptr_t) &ilt_hw_entry,
reg_offset,
sizeof(ilt_hw_entry) / sizeof(u32),
- 0);
+ NULL);
}
qed_ptt_release(p_hwfn, p_ptt);
diff --git a/drivers/net/ethernet/qlogic/qed/qed_debug.c b/drivers/net/ethernet/qlogic/qed/qed_debug.c
index ab8cacbdee3e..5ea6c4fc6050 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_debug.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_debug.c
@@ -2534,7 +2534,7 @@ static u32 qed_grc_dump_addr_range(struct qed_hwfn *p_hwfn,
(len >= s_platform_defs[dev_data->platform_id].dmae_thresh ||
wide_bus)) {
if (!qed_dmae_grc2host(p_hwfn, p_ptt, DWORDS_TO_BYTES(addr),
- (u64)(uintptr_t)(dump_buf), len, 0))
+ (u64)(uintptr_t)(dump_buf), len, NULL))
return len;
dev_data->use_dmae = 0;
DP_VERBOSE(p_hwfn,
diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev.c b/drivers/net/ethernet/qlogic/qed/qed_dev.c
index fccdb06fc5c5..a1ebc2b1ca0b 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_dev.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_dev.c
@@ -361,6 +361,927 @@ void qed_db_recovery_execute(struct qed_hwfn *p_hwfn)
/******************** Doorbell Recovery end ****************/
+/********************************** NIG LLH ***********************************/
+
+enum qed_llh_filter_type {
+ QED_LLH_FILTER_TYPE_MAC,
+ QED_LLH_FILTER_TYPE_PROTOCOL,
+};
+
+struct qed_llh_mac_filter {
+ u8 addr[ETH_ALEN];
+};
+
+struct qed_llh_protocol_filter {
+ enum qed_llh_prot_filter_type_t type;
+ u16 source_port_or_eth_type;
+ u16 dest_port;
+};
+
+union qed_llh_filter {
+ struct qed_llh_mac_filter mac;
+ struct qed_llh_protocol_filter protocol;
+};
+
+struct qed_llh_filter_info {
+ bool b_enabled;
+ u32 ref_cnt;
+ enum qed_llh_filter_type type;
+ union qed_llh_filter filter;
+};
+
+struct qed_llh_info {
+ /* Number of LLH filter banks */
+ u8 num_ppfid;
+
+#define MAX_NUM_PPFID 8
+ u8 ppfid_array[MAX_NUM_PPFID];
+
+ /* Array of filter arrays:
+ * "num_ppfid" filter banks, where each is an array of
+ * "NIG_REG_LLH_FUNC_FILTER_EN_SIZE" filters.
+ */
+ struct qed_llh_filter_info **pp_filters;
+};
+
+static void qed_llh_free(struct qed_dev *cdev)
+{
+ struct qed_llh_info *p_llh_info = cdev->p_llh_info;
+ u32 i;
+
+ if (p_llh_info) {
+ if (p_llh_info->pp_filters)
+ for (i = 0; i < p_llh_info->num_ppfid; i++)
+ kfree(p_llh_info->pp_filters[i]);
+
+ kfree(p_llh_info->pp_filters);
+ }
+
+ kfree(p_llh_info);
+ cdev->p_llh_info = NULL;
+}
+
+static int qed_llh_alloc(struct qed_dev *cdev)
+{
+ struct qed_llh_info *p_llh_info;
+ u32 size, i;
+
+ p_llh_info = kzalloc(sizeof(*p_llh_info), GFP_KERNEL);
+ if (!p_llh_info)
+ return -ENOMEM;
+ cdev->p_llh_info = p_llh_info;
+
+ for (i = 0; i < MAX_NUM_PPFID; i++) {
+ if (!(cdev->ppfid_bitmap & (0x1 << i)))
+ continue;
+
+ p_llh_info->ppfid_array[p_llh_info->num_ppfid] = i;
+ DP_VERBOSE(cdev, QED_MSG_SP, "ppfid_array[%d] = %hhd\n",
+ p_llh_info->num_ppfid, i);
+ p_llh_info->num_ppfid++;
+ }
+
+ size = p_llh_info->num_ppfid * sizeof(*p_llh_info->pp_filters);
+ p_llh_info->pp_filters = kzalloc(size, GFP_KERNEL);
+ if (!p_llh_info->pp_filters)
+ return -ENOMEM;
+
+ size = NIG_REG_LLH_FUNC_FILTER_EN_SIZE *
+ sizeof(**p_llh_info->pp_filters);
+ for (i = 0; i < p_llh_info->num_ppfid; i++) {
+ p_llh_info->pp_filters[i] = kzalloc(size, GFP_KERNEL);
+ if (!p_llh_info->pp_filters[i])
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static int qed_llh_shadow_sanity(struct qed_dev *cdev,
+ u8 ppfid, u8 filter_idx, const char *action)
+{
+ struct qed_llh_info *p_llh_info = cdev->p_llh_info;
+
+ if (ppfid >= p_llh_info->num_ppfid) {
+ DP_NOTICE(cdev,
+ "LLH shadow [%s]: using ppfid %d while only %d ppfids are available\n",
+ action, ppfid, p_llh_info->num_ppfid);
+ return -EINVAL;
+ }
+
+ if (filter_idx >= NIG_REG_LLH_FUNC_FILTER_EN_SIZE) {
+ DP_NOTICE(cdev,
+ "LLH shadow [%s]: using filter_idx %d while only %d filters are available\n",
+ action, filter_idx, NIG_REG_LLH_FUNC_FILTER_EN_SIZE);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+#define QED_LLH_INVALID_FILTER_IDX 0xff
+
+static int
+qed_llh_shadow_search_filter(struct qed_dev *cdev,
+ u8 ppfid,
+ union qed_llh_filter *p_filter, u8 *p_filter_idx)
+{
+ struct qed_llh_info *p_llh_info = cdev->p_llh_info;
+ struct qed_llh_filter_info *p_filters;
+ int rc;
+ u8 i;
+
+ rc = qed_llh_shadow_sanity(cdev, ppfid, 0, "search");
+ if (rc)
+ return rc;
+
+ *p_filter_idx = QED_LLH_INVALID_FILTER_IDX;
+
+ p_filters = p_llh_info->pp_filters[ppfid];
+ for (i = 0; i < NIG_REG_LLH_FUNC_FILTER_EN_SIZE; i++) {
+ if (!memcmp(p_filter, &p_filters[i].filter,
+ sizeof(*p_filter))) {
+ *p_filter_idx = i;
+ break;
+ }
+ }
+
+ return 0;
+}
+
+static int
+qed_llh_shadow_get_free_idx(struct qed_dev *cdev, u8 ppfid, u8 *p_filter_idx)
+{
+ struct qed_llh_info *p_llh_info = cdev->p_llh_info;
+ struct qed_llh_filter_info *p_filters;
+ int rc;
+ u8 i;
+
+ rc = qed_llh_shadow_sanity(cdev, ppfid, 0, "get_free_idx");
+ if (rc)
+ return rc;
+
+ *p_filter_idx = QED_LLH_INVALID_FILTER_IDX;
+
+ p_filters = p_llh_info->pp_filters[ppfid];
+ for (i = 0; i < NIG_REG_LLH_FUNC_FILTER_EN_SIZE; i++) {
+ if (!p_filters[i].b_enabled) {
+ *p_filter_idx = i;
+ break;
+ }
+ }
+
+ return 0;
+}
+
+static int
+__qed_llh_shadow_add_filter(struct qed_dev *cdev,
+ u8 ppfid,
+ u8 filter_idx,
+ enum qed_llh_filter_type type,
+ union qed_llh_filter *p_filter, u32 *p_ref_cnt)
+{
+ struct qed_llh_info *p_llh_info = cdev->p_llh_info;
+ struct qed_llh_filter_info *p_filters;
+ int rc;
+
+ rc = qed_llh_shadow_sanity(cdev, ppfid, filter_idx, "add");
+ if (rc)
+ return rc;
+
+ p_filters = p_llh_info->pp_filters[ppfid];
+ if (!p_filters[filter_idx].ref_cnt) {
+ p_filters[filter_idx].b_enabled = true;
+ p_filters[filter_idx].type = type;
+ memcpy(&p_filters[filter_idx].filter, p_filter,
+ sizeof(p_filters[filter_idx].filter));
+ }
+
+ *p_ref_cnt = ++p_filters[filter_idx].ref_cnt;
+
+ return 0;
+}
+
+static int
+qed_llh_shadow_add_filter(struct qed_dev *cdev,
+ u8 ppfid,
+ enum qed_llh_filter_type type,
+ union qed_llh_filter *p_filter,
+ u8 *p_filter_idx, u32 *p_ref_cnt)
+{
+ int rc;
+
+ /* Check if the same filter already exists */
+ rc = qed_llh_shadow_search_filter(cdev, ppfid, p_filter, p_filter_idx);
+ if (rc)
+ return rc;
+
+ /* Find a new entry in case of a new filter */
+ if (*p_filter_idx == QED_LLH_INVALID_FILTER_IDX) {
+ rc = qed_llh_shadow_get_free_idx(cdev, ppfid, p_filter_idx);
+ if (rc)
+ return rc;
+ }
+
+ /* No free entry was found */
+ if (*p_filter_idx == QED_LLH_INVALID_FILTER_IDX) {
+ DP_NOTICE(cdev,
+ "Failed to find an empty LLH filter to utilize [ppfid %d]\n",
+ ppfid);
+ return -EINVAL;
+ }
+
+ return __qed_llh_shadow_add_filter(cdev, ppfid, *p_filter_idx, type,
+ p_filter, p_ref_cnt);
+}
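The shadow add path above is reference counted: the first add of a given filter claims a free slot (and the caller then programs the NIG), while repeated adds of the same filter only bump ref_cnt. A stripped-down standalone sketch of that behaviour, with hypothetical types that are not part of the driver:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct filter_slot {
	int enabled;
	uint32_t ref_cnt;
};

static uint32_t slot_add(struct filter_slot *s)
{
	if (!s->ref_cnt)
		s->enabled = 1;		/* first user: caller would program the NIG here */
	return ++s->ref_cnt;
}

static uint32_t slot_remove(struct filter_slot *s)
{
	if (!--s->ref_cnt)
		memset(s, 0, sizeof(*s));	/* last user gone: release the slot */
	return s->ref_cnt;
}

int main(void)
{
	struct filter_slot s = { 0 };
	uint32_t r;

	r = slot_add(&s);	/* ref_cnt 1: hardware gets configured */
	r = slot_add(&s);	/* ref_cnt 2: shadow-only */
	r = slot_remove(&s);	/* ref_cnt 1: hardware untouched */
	r = slot_remove(&s);	/* ref_cnt 0: slot cleared */
	printf("final ref_cnt %u, enabled %d\n", r, s.enabled);
	return 0;
}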
+
+static int
+__qed_llh_shadow_remove_filter(struct qed_dev *cdev,
+ u8 ppfid, u8 filter_idx, u32 *p_ref_cnt)
+{
+ struct qed_llh_info *p_llh_info = cdev->p_llh_info;
+ struct qed_llh_filter_info *p_filters;
+ int rc;
+
+ rc = qed_llh_shadow_sanity(cdev, ppfid, filter_idx, "remove");
+ if (rc)
+ return rc;
+
+ p_filters = p_llh_info->pp_filters[ppfid];
+ if (!p_filters[filter_idx].ref_cnt) {
+ DP_NOTICE(cdev,
+ "LLH shadow: trying to remove a filter with ref_cnt=0\n");
+ return -EINVAL;
+ }
+
+ *p_ref_cnt = --p_filters[filter_idx].ref_cnt;
+ if (!p_filters[filter_idx].ref_cnt)
+ memset(&p_filters[filter_idx],
+ 0, sizeof(p_filters[filter_idx]));
+
+ return 0;
+}
+
+static int
+qed_llh_shadow_remove_filter(struct qed_dev *cdev,
+ u8 ppfid,
+ union qed_llh_filter *p_filter,
+ u8 *p_filter_idx, u32 *p_ref_cnt)
+{
+ int rc;
+
+ rc = qed_llh_shadow_search_filter(cdev, ppfid, p_filter, p_filter_idx);
+ if (rc)
+ return rc;
+
+ /* No matching filter was found */
+ if (*p_filter_idx == QED_LLH_INVALID_FILTER_IDX) {
+ DP_NOTICE(cdev, "Failed to find a filter in the LLH shadow\n");
+ return -EINVAL;
+ }
+
+ return __qed_llh_shadow_remove_filter(cdev, ppfid, *p_filter_idx,
+ p_ref_cnt);
+}
+
+static int qed_llh_abs_ppfid(struct qed_dev *cdev, u8 ppfid, u8 *p_abs_ppfid)
+{
+ struct qed_llh_info *p_llh_info = cdev->p_llh_info;
+
+ if (ppfid >= p_llh_info->num_ppfid) {
+ DP_NOTICE(cdev,
+ "ppfid %d is not valid, available indices are 0..%hhd\n",
+ ppfid, p_llh_info->num_ppfid - 1);
+ *p_abs_ppfid = 0;
+ return -EINVAL;
+ }
+
+ *p_abs_ppfid = p_llh_info->ppfid_array[ppfid];
+
+ return 0;
+}
+
+static int
+qed_llh_set_engine_affin(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+{
+ struct qed_dev *cdev = p_hwfn->cdev;
+ enum qed_eng eng;
+ u8 ppfid;
+ int rc;
+
+ rc = qed_mcp_get_engine_config(p_hwfn, p_ptt);
+ if (rc != 0 && rc != -EOPNOTSUPP) {
+ DP_NOTICE(p_hwfn,
+ "Failed to get the engine affinity configuration\n");
+ return rc;
+ }
+
+ /* RoCE PF is bound to a single engine */
+ if (QED_IS_ROCE_PERSONALITY(p_hwfn)) {
+ eng = cdev->fir_affin ? QED_ENG1 : QED_ENG0;
+ rc = qed_llh_set_roce_affinity(cdev, eng);
+ if (rc) {
+ DP_NOTICE(cdev,
+ "Failed to set the RoCE engine affinity\n");
+ return rc;
+ }
+
+ DP_VERBOSE(cdev,
+ QED_MSG_SP,
+ "LLH: Set the engine affinity of RoCE packets as %d\n",
+ eng);
+ }
+
+ /* Storage PF is bound to a single engine while L2 PF uses both */
+ if (QED_IS_FCOE_PERSONALITY(p_hwfn) || QED_IS_ISCSI_PERSONALITY(p_hwfn))
+ eng = cdev->fir_affin ? QED_ENG1 : QED_ENG0;
+ else /* L2_PERSONALITY */
+ eng = QED_BOTH_ENG;
+
+ for (ppfid = 0; ppfid < cdev->p_llh_info->num_ppfid; ppfid++) {
+ rc = qed_llh_set_ppfid_affinity(cdev, ppfid, eng);
+ if (rc) {
+ DP_NOTICE(cdev,
+ "Failed to set the engine affinity of ppfid %d\n",
+ ppfid);
+ return rc;
+ }
+ }
+
+ DP_VERBOSE(cdev, QED_MSG_SP,
+ "LLH: Set the engine affinity of non-RoCE packets as %d\n",
+ eng);
+
+ return 0;
+}
+
+static int qed_llh_hw_init_pf(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt)
+{
+ struct qed_dev *cdev = p_hwfn->cdev;
+ u8 ppfid, abs_ppfid;
+ int rc;
+
+ for (ppfid = 0; ppfid < cdev->p_llh_info->num_ppfid; ppfid++) {
+ u32 addr;
+
+ rc = qed_llh_abs_ppfid(cdev, ppfid, &abs_ppfid);
+ if (rc)
+ return rc;
+
+ addr = NIG_REG_LLH_PPFID2PFID_TBL_0 + abs_ppfid * 0x4;
+ qed_wr(p_hwfn, p_ptt, addr, p_hwfn->rel_pf_id);
+ }
+
+ if (test_bit(QED_MF_LLH_MAC_CLSS, &cdev->mf_bits) &&
+ !QED_IS_FCOE_PERSONALITY(p_hwfn)) {
+ rc = qed_llh_add_mac_filter(cdev, 0,
+ p_hwfn->hw_info.hw_mac_addr);
+ if (rc)
+ DP_NOTICE(cdev,
+ "Failed to add an LLH filter with the primary MAC\n");
+ }
+
+ if (QED_IS_CMT(cdev)) {
+ rc = qed_llh_set_engine_affin(p_hwfn, p_ptt);
+ if (rc)
+ return rc;
+ }
+
+ return 0;
+}
+
+u8 qed_llh_get_num_ppfid(struct qed_dev *cdev)
+{
+ return cdev->p_llh_info->num_ppfid;
+}
+
+#define NIG_REG_PPF_TO_ENGINE_SEL_ROCE_MASK 0x3
+#define NIG_REG_PPF_TO_ENGINE_SEL_ROCE_SHIFT 0
+#define NIG_REG_PPF_TO_ENGINE_SEL_NON_ROCE_MASK 0x3
+#define NIG_REG_PPF_TO_ENGINE_SEL_NON_ROCE_SHIFT 2
+
+int qed_llh_set_ppfid_affinity(struct qed_dev *cdev, u8 ppfid, enum qed_eng eng)
+{
+ struct qed_hwfn *p_hwfn = QED_LEADING_HWFN(cdev);
+ struct qed_ptt *p_ptt = qed_ptt_acquire(p_hwfn);
+ u32 addr, val, eng_sel;
+ u8 abs_ppfid;
+ int rc = 0;
+
+ if (!p_ptt)
+ return -EAGAIN;
+
+ if (!QED_IS_CMT(cdev))
+ goto out;
+
+ rc = qed_llh_abs_ppfid(cdev, ppfid, &abs_ppfid);
+ if (rc)
+ goto out;
+
+ switch (eng) {
+ case QED_ENG0:
+ eng_sel = 0;
+ break;
+ case QED_ENG1:
+ eng_sel = 1;
+ break;
+ case QED_BOTH_ENG:
+ eng_sel = 2;
+ break;
+ default:
+ DP_NOTICE(cdev, "Invalid affinity value for ppfid [%d]\n", eng);
+ rc = -EINVAL;
+ goto out;
+ }
+
+ addr = NIG_REG_PPF_TO_ENGINE_SEL + abs_ppfid * 0x4;
+ val = qed_rd(p_hwfn, p_ptt, addr);
+ SET_FIELD(val, NIG_REG_PPF_TO_ENGINE_SEL_NON_ROCE, eng_sel);
+ qed_wr(p_hwfn, p_ptt, addr, val);
+
+ /* The iWARP affinity is set as the affinity of ppfid 0 */
+ if (!ppfid && QED_IS_IWARP_PERSONALITY(p_hwfn))
+ cdev->iwarp_affin = (eng == QED_ENG1) ? 1 : 0;
+out:
+ qed_ptt_release(p_hwfn, p_ptt);
+
+ return rc;
+}
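The switch above encodes QED_ENG0/QED_ENG1/QED_BOTH_ENG as selector values 0/1/2 and merges the result into the non-RoCE field of the per-PPFID register with SET_FIELD(). A standalone sketch of the equivalent read-modify-write, reusing the mask/shift values defined earlier in this file (illustrative only):

#include <stdint.h>
#include <stdio.h>

#define NON_ROCE_SEL_MASK	0x3
#define NON_ROCE_SEL_SHIFT	2

int main(void)
{
	uint32_t val = 0x1;	/* assume the RoCE selector bits already hold 1 */
	uint32_t eng_sel = 2;	/* QED_BOTH_ENG */

	/* What SET_FIELD(val, ..._NON_ROCE, eng_sel) boils down to */
	val &= ~(NON_ROCE_SEL_MASK << NON_ROCE_SEL_SHIFT);
	val |= eng_sel << NON_ROCE_SEL_SHIFT;

	printf("val = 0x%x\n", val);	/* 0x9: RoCE = 1, non-RoCE = both engines */
	return 0;
}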
+
+int qed_llh_set_roce_affinity(struct qed_dev *cdev, enum qed_eng eng)
+{
+ struct qed_hwfn *p_hwfn = QED_LEADING_HWFN(cdev);
+ struct qed_ptt *p_ptt = qed_ptt_acquire(p_hwfn);
+ u32 addr, val, eng_sel;
+ u8 ppfid, abs_ppfid;
+ int rc = 0;
+
+ if (!p_ptt)
+ return -EAGAIN;
+
+ if (!QED_IS_CMT(cdev))
+ goto out;
+
+ switch (eng) {
+ case QED_ENG0:
+ eng_sel = 0;
+ break;
+ case QED_ENG1:
+ eng_sel = 1;
+ break;
+ case QED_BOTH_ENG:
+ eng_sel = 2;
+ qed_wr(p_hwfn, p_ptt, NIG_REG_LLH_ENG_CLS_ROCE_QP_SEL,
+ 0xf); /* QP bit 15 */
+ break;
+ default:
+ DP_NOTICE(cdev, "Invalid affinity value for RoCE [%d]\n", eng);
+ rc = -EINVAL;
+ goto out;
+ }
+
+ for (ppfid = 0; ppfid < cdev->p_llh_info->num_ppfid; ppfid++) {
+ rc = qed_llh_abs_ppfid(cdev, ppfid, &abs_ppfid);
+ if (rc)
+ goto out;
+
+ addr = NIG_REG_PPF_TO_ENGINE_SEL + abs_ppfid * 0x4;
+ val = qed_rd(p_hwfn, p_ptt, addr);
+ SET_FIELD(val, NIG_REG_PPF_TO_ENGINE_SEL_ROCE, eng_sel);
+ qed_wr(p_hwfn, p_ptt, addr, val);
+ }
+out:
+ qed_ptt_release(p_hwfn, p_ptt);
+
+ return rc;
+}
+
+struct qed_llh_filter_details {
+ u64 value;
+ u32 mode;
+ u32 protocol_type;
+ u32 hdr_sel;
+ u32 enable;
+};
+
+static int
+qed_llh_access_filter(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ u8 abs_ppfid,
+ u8 filter_idx,
+ struct qed_llh_filter_details *p_details)
+{
+ struct qed_dmae_params params = {0};
+ u32 addr;
+ u8 pfid;
+ int rc;
+
+ /* The NIG/LLH registers that are accessed in this function expose only
+ * 16 rows to a PF, i.e. only the 16 filters of its default ppfid.
+ * Accessing the filters of other ppfids requires pretending to be
+ * another PF.
+ * On AH the PPFID->PFID calculation is based on the relative index of
+ * a PF on its port, while on BB the pfid is simply the abs_ppfid.
+ */
+ if (QED_IS_BB(p_hwfn->cdev))
+ pfid = abs_ppfid;
+ else
+ pfid = abs_ppfid * p_hwfn->cdev->num_ports_in_engine +
+ MFW_PORT(p_hwfn);
+
+ /* Filter enable - should be done first when removing a filter */
+ if (!p_details->enable) {
+ qed_fid_pretend(p_hwfn, p_ptt,
+ pfid << PXP_PRETEND_CONCRETE_FID_PFID_SHIFT);
+
+ addr = NIG_REG_LLH_FUNC_FILTER_EN + filter_idx * 0x4;
+ qed_wr(p_hwfn, p_ptt, addr, p_details->enable);
+
+ qed_fid_pretend(p_hwfn, p_ptt,
+ p_hwfn->rel_pf_id <<
+ PXP_PRETEND_CONCRETE_FID_PFID_SHIFT);
+ }
+
+ /* Filter value */
+ addr = NIG_REG_LLH_FUNC_FILTER_VALUE + 2 * filter_idx * 0x4;
+
+ params.flags = QED_DMAE_FLAG_PF_DST;
+ params.dst_pfid = pfid;
+ rc = qed_dmae_host2grc(p_hwfn,
+ p_ptt,
+ (u64)(uintptr_t)&p_details->value,
+ addr, 2 /* size_in_dwords */,
+ &params);
+ if (rc)
+ return rc;
+
+ qed_fid_pretend(p_hwfn, p_ptt,
+ pfid << PXP_PRETEND_CONCRETE_FID_PFID_SHIFT);
+
+ /* Filter mode */
+ addr = NIG_REG_LLH_FUNC_FILTER_MODE + filter_idx * 0x4;
+ qed_wr(p_hwfn, p_ptt, addr, p_details->mode);
+
+ /* Filter protocol type */
+ addr = NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE + filter_idx * 0x4;
+ qed_wr(p_hwfn, p_ptt, addr, p_details->protocol_type);
+
+ /* Filter header select */
+ addr = NIG_REG_LLH_FUNC_FILTER_HDR_SEL + filter_idx * 0x4;
+ qed_wr(p_hwfn, p_ptt, addr, p_details->hdr_sel);
+
+ /* Filter enable - should be done last when adding a filter */
+ if (p_details->enable) {
+ addr = NIG_REG_LLH_FUNC_FILTER_EN + filter_idx * 0x4;
+ qed_wr(p_hwfn, p_ptt, addr, p_details->enable);
+ }
+
+ qed_fid_pretend(p_hwfn, p_ptt,
+ p_hwfn->rel_pf_id <<
+ PXP_PRETEND_CONCRETE_FID_PFID_SHIFT);
+
+ return 0;
+}
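Following the comment at the top of qed_llh_access_filter(), a standalone sketch of the PPFID->PFID mapping it relies on (hypothetical helper, not driver code): BB uses the absolute PPFID directly, while AH derives the PFID from the PF's position on its port.

#include <stdint.h>
#include <stdio.h>

static uint8_t ppfid_to_pfid(int is_bb, uint8_t abs_ppfid,
			     uint8_t num_ports_in_engine, uint8_t mfw_port)
{
	if (is_bb)
		return abs_ppfid;

	return abs_ppfid * num_ports_in_engine + mfw_port;
}

int main(void)
{
	/* AH with 2 ports per engine, PF on port 1, absolute PPFID 3 -> PFID 7 */
	printf("AH pfid = %u\n", ppfid_to_pfid(0, 3, 2, 1));
	/* BB: the PFID is simply the absolute PPFID */
	printf("BB pfid = %u\n", ppfid_to_pfid(1, 3, 2, 1));
	return 0;
}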
+
+static int
+qed_llh_add_filter(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ u8 abs_ppfid,
+ u8 filter_idx, u8 filter_prot_type, u32 high, u32 low)
+{
+ struct qed_llh_filter_details filter_details;
+
+ filter_details.enable = 1;
+ filter_details.value = ((u64)high << 32) | low;
+ filter_details.hdr_sel = 0;
+ filter_details.protocol_type = filter_prot_type;
+ /* Mode: 0: MAC-address classification 1: protocol classification */
+ filter_details.mode = filter_prot_type ? 1 : 0;
+
+ return qed_llh_access_filter(p_hwfn, p_ptt, abs_ppfid, filter_idx,
+ &filter_details);
+}
+
+static int
+qed_llh_remove_filter(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt, u8 abs_ppfid, u8 filter_idx)
+{
+ struct qed_llh_filter_details filter_details = {0};
+
+ return qed_llh_access_filter(p_hwfn, p_ptt, abs_ppfid, filter_idx,
+ &filter_details);
+}
+
+int qed_llh_add_mac_filter(struct qed_dev *cdev,
+ u8 ppfid, u8 mac_addr[ETH_ALEN])
+{
+ struct qed_hwfn *p_hwfn = QED_LEADING_HWFN(cdev);
+ struct qed_ptt *p_ptt = qed_ptt_acquire(p_hwfn);
+ union qed_llh_filter filter = {};
+ u8 filter_idx, abs_ppfid;
+ u32 high, low, ref_cnt;
+ int rc = 0;
+
+ if (!p_ptt)
+ return -EAGAIN;
+
+ if (!test_bit(QED_MF_LLH_MAC_CLSS, &cdev->mf_bits))
+ goto out;
+
+ memcpy(filter.mac.addr, mac_addr, ETH_ALEN);
+ rc = qed_llh_shadow_add_filter(cdev, ppfid,
+ QED_LLH_FILTER_TYPE_MAC,
+ &filter, &filter_idx, &ref_cnt);
+ if (rc)
+ goto err;
+
+ /* Configure the LLH only in case of a new filter */
+ if (ref_cnt == 1) {
+ rc = qed_llh_abs_ppfid(cdev, ppfid, &abs_ppfid);
+ if (rc)
+ goto err;
+
+ high = mac_addr[1] | (mac_addr[0] << 8);
+ low = mac_addr[5] | (mac_addr[4] << 8) | (mac_addr[3] << 16) |
+ (mac_addr[2] << 24);
+ rc = qed_llh_add_filter(p_hwfn, p_ptt, abs_ppfid, filter_idx,
+ 0, high, low);
+ if (rc)
+ goto err;
+ }
+
+ DP_VERBOSE(cdev,
+ QED_MSG_SP,
+ "LLH: Added MAC filter [%pM] to ppfid %hhd [abs %hhd] at idx %hhd [ref_cnt %d]\n",
+ mac_addr, ppfid, abs_ppfid, filter_idx, ref_cnt);
+
+ goto out;
+
+err: DP_NOTICE(cdev,
+ "LLH: Failed to add MAC filter [%pM] to ppfid %hhd\n",
+ mac_addr, ppfid);
+out:
+ qed_ptt_release(p_hwfn, p_ptt);
+
+ return rc;
+}
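The high/low packing in qed_llh_add_mac_filter() mirrors what the removed qed_llh_mac_to_filter() helper did: the two most significant MAC bytes land in the high word, the remaining four in the low word. A standalone sketch with an assumed example address:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint8_t mac[6] = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 };	/* example MAC */
	uint32_t high, low;

	high = mac[1] | (mac[0] << 8);
	low = mac[5] | (mac[4] << 8) | (mac[3] << 16) | (mac[2] << 24);

	printf("high = 0x%08x, low = 0x%08x\n", high, low);	/* 0x00000011, 0x22334455 */
	return 0;
}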
+
+static int
+qed_llh_protocol_filter_stringify(struct qed_dev *cdev,
+ enum qed_llh_prot_filter_type_t type,
+ u16 source_port_or_eth_type,
+ u16 dest_port, u8 *str, size_t str_len)
+{
+ switch (type) {
+ case QED_LLH_FILTER_ETHERTYPE:
+ snprintf(str, str_len, "Ethertype 0x%04x",
+ source_port_or_eth_type);
+ break;
+ case QED_LLH_FILTER_TCP_SRC_PORT:
+ snprintf(str, str_len, "TCP src port 0x%04x",
+ source_port_or_eth_type);
+ break;
+ case QED_LLH_FILTER_UDP_SRC_PORT:
+ snprintf(str, str_len, "UDP src port 0x%04x",
+ source_port_or_eth_type);
+ break;
+ case QED_LLH_FILTER_TCP_DEST_PORT:
+ snprintf(str, str_len, "TCP dst port 0x%04x", dest_port);
+ break;
+ case QED_LLH_FILTER_UDP_DEST_PORT:
+ snprintf(str, str_len, "UDP dst port 0x%04x", dest_port);
+ break;
+ case QED_LLH_FILTER_TCP_SRC_AND_DEST_PORT:
+ snprintf(str, str_len, "TCP src/dst ports 0x%04x/0x%04x",
+ source_port_or_eth_type, dest_port);
+ break;
+ case QED_LLH_FILTER_UDP_SRC_AND_DEST_PORT:
+ snprintf(str, str_len, "UDP src/dst ports 0x%04x/0x%04x",
+ source_port_or_eth_type, dest_port);
+ break;
+ default:
+ DP_NOTICE(cdev,
+ "Non valid LLH protocol filter type %d\n", type);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int
+qed_llh_protocol_filter_to_hilo(struct qed_dev *cdev,
+ enum qed_llh_prot_filter_type_t type,
+ u16 source_port_or_eth_type,
+ u16 dest_port, u32 *p_high, u32 *p_low)
+{
+ *p_high = 0;
+ *p_low = 0;
+
+ switch (type) {
+ case QED_LLH_FILTER_ETHERTYPE:
+ *p_high = source_port_or_eth_type;
+ break;
+ case QED_LLH_FILTER_TCP_SRC_PORT:
+ case QED_LLH_FILTER_UDP_SRC_PORT:
+ *p_low = source_port_or_eth_type << 16;
+ break;
+ case QED_LLH_FILTER_TCP_DEST_PORT:
+ case QED_LLH_FILTER_UDP_DEST_PORT:
+ *p_low = dest_port;
+ break;
+ case QED_LLH_FILTER_TCP_SRC_AND_DEST_PORT:
+ case QED_LLH_FILTER_UDP_SRC_AND_DEST_PORT:
+ *p_low = (source_port_or_eth_type << 16) | dest_port;
+ break;
+ default:
+ DP_NOTICE(cdev,
+ "Non valid LLH protocol filter type %d\n", type);
+ return -EINVAL;
+ }
+
+ return 0;
+}
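For protocol filters the packing depends on the filter type; for instance, a combined src/dst port filter puts the source port in the upper half of the low word and the destination port in the lower half. A standalone sketch with assumed example ports:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint16_t src_port = 0x0cbc, dst_port = 0x1389;	/* example ports 3260/5001 */
	uint32_t high = 0, low;

	/* QED_LLH_FILTER_TCP_SRC_AND_DEST_PORT style packing */
	low = ((uint32_t)src_port << 16) | dst_port;

	printf("high = 0x%08x, low = 0x%08x\n", high, low);	/* 0x00000000, 0x0cbc1389 */
	return 0;
}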
+
+int
+qed_llh_add_protocol_filter(struct qed_dev *cdev,
+ u8 ppfid,
+ enum qed_llh_prot_filter_type_t type,
+ u16 source_port_or_eth_type, u16 dest_port)
+{
+ struct qed_hwfn *p_hwfn = QED_LEADING_HWFN(cdev);
+ struct qed_ptt *p_ptt = qed_ptt_acquire(p_hwfn);
+ u8 filter_idx, abs_ppfid, str[32], type_bitmap;
+ union qed_llh_filter filter = {};
+ u32 high, low, ref_cnt;
+ int rc = 0;
+
+ if (!p_ptt)
+ return -EAGAIN;
+
+ if (!test_bit(QED_MF_LLH_PROTO_CLSS, &cdev->mf_bits))
+ goto out;
+
+ rc = qed_llh_protocol_filter_stringify(cdev, type,
+ source_port_or_eth_type,
+ dest_port, str, sizeof(str));
+ if (rc)
+ goto err;
+
+ filter.protocol.type = type;
+ filter.protocol.source_port_or_eth_type = source_port_or_eth_type;
+ filter.protocol.dest_port = dest_port;
+ rc = qed_llh_shadow_add_filter(cdev,
+ ppfid,
+ QED_LLH_FILTER_TYPE_PROTOCOL,
+ &filter, &filter_idx, &ref_cnt);
+ if (rc)
+ goto err;
+
+ rc = qed_llh_abs_ppfid(cdev, ppfid, &abs_ppfid);
+ if (rc)
+ goto err;
+
+ /* Configure the LLH only in case of a new filter */
+ if (ref_cnt == 1) {
+ rc = qed_llh_protocol_filter_to_hilo(cdev, type,
+ source_port_or_eth_type,
+ dest_port, &high, &low);
+ if (rc)
+ goto err;
+
+ type_bitmap = 0x1 << type;
+ rc = qed_llh_add_filter(p_hwfn, p_ptt, abs_ppfid,
+ filter_idx, type_bitmap, high, low);
+ if (rc)
+ goto err;
+ }
+
+ DP_VERBOSE(cdev,
+ QED_MSG_SP,
+ "LLH: Added protocol filter [%s] to ppfid %hhd [abs %hhd] at idx %hhd [ref_cnt %d]\n",
+ str, ppfid, abs_ppfid, filter_idx, ref_cnt);
+
+ goto out;
+
+err: DP_NOTICE(p_hwfn,
+ "LLH: Failed to add protocol filter [%s] to ppfid %hhd\n",
+ str, ppfid);
+out:
+ qed_ptt_release(p_hwfn, p_ptt);
+
+ return rc;
+}
+
+void qed_llh_remove_mac_filter(struct qed_dev *cdev,
+ u8 ppfid, u8 mac_addr[ETH_ALEN])
+{
+ struct qed_hwfn *p_hwfn = QED_LEADING_HWFN(cdev);
+ struct qed_ptt *p_ptt = qed_ptt_acquire(p_hwfn);
+ union qed_llh_filter filter = {};
+ u8 filter_idx, abs_ppfid;
+ int rc = 0;
+ u32 ref_cnt;
+
+ if (!p_ptt)
+ return;
+
+ if (!test_bit(QED_MF_LLH_MAC_CLSS, &cdev->mf_bits))
+ goto out;
+
+ ether_addr_copy(filter.mac.addr, mac_addr);
+ rc = qed_llh_shadow_remove_filter(cdev, ppfid, &filter, &filter_idx,
+ &ref_cnt);
+ if (rc)
+ goto err;
+
+ rc = qed_llh_abs_ppfid(cdev, ppfid, &abs_ppfid);
+ if (rc)
+ goto err;
+
+ /* Remove from the LLH in case the filter is not in use */
+ if (!ref_cnt) {
+ rc = qed_llh_remove_filter(p_hwfn, p_ptt, abs_ppfid,
+ filter_idx);
+ if (rc)
+ goto err;
+ }
+
+ DP_VERBOSE(cdev,
+ QED_MSG_SP,
+ "LLH: Removed MAC filter [%pM] from ppfid %hhd [abs %hhd] at idx %hhd [ref_cnt %d]\n",
+ mac_addr, ppfid, abs_ppfid, filter_idx, ref_cnt);
+
+ goto out;
+
+err: DP_NOTICE(cdev,
+ "LLH: Failed to remove MAC filter [%pM] from ppfid %hhd\n",
+ mac_addr, ppfid);
+out:
+ qed_ptt_release(p_hwfn, p_ptt);
+}
+
+void qed_llh_remove_protocol_filter(struct qed_dev *cdev,
+ u8 ppfid,
+ enum qed_llh_prot_filter_type_t type,
+ u16 source_port_or_eth_type, u16 dest_port)
+{
+ struct qed_hwfn *p_hwfn = QED_LEADING_HWFN(cdev);
+ struct qed_ptt *p_ptt = qed_ptt_acquire(p_hwfn);
+ u8 filter_idx, abs_ppfid, str[32];
+ union qed_llh_filter filter = {};
+ int rc = 0;
+ u32 ref_cnt;
+
+ if (!p_ptt)
+ return;
+
+ if (!test_bit(QED_MF_LLH_PROTO_CLSS, &cdev->mf_bits))
+ goto out;
+
+ rc = qed_llh_protocol_filter_stringify(cdev, type,
+ source_port_or_eth_type,
+ dest_port, str, sizeof(str));
+ if (rc)
+ goto err;
+
+ filter.protocol.type = type;
+ filter.protocol.source_port_or_eth_type = source_port_or_eth_type;
+ filter.protocol.dest_port = dest_port;
+ rc = qed_llh_shadow_remove_filter(cdev, ppfid, &filter, &filter_idx,
+ &ref_cnt);
+ if (rc)
+ goto err;
+
+ rc = qed_llh_abs_ppfid(cdev, ppfid, &abs_ppfid);
+ if (rc)
+ goto err;
+
+ /* Remove from the LLH in case the filter is not in use */
+ if (!ref_cnt) {
+ rc = qed_llh_remove_filter(p_hwfn, p_ptt, abs_ppfid,
+ filter_idx);
+ if (rc)
+ goto err;
+ }
+
+ DP_VERBOSE(cdev,
+ QED_MSG_SP,
+ "LLH: Removed protocol filter [%s] from ppfid %hhd [abs %hhd] at idx %hhd [ref_cnt %d]\n",
+ str, ppfid, abs_ppfid, filter_idx, ref_cnt);
+
+ goto out;
+
+err: DP_NOTICE(cdev,
+ "LLH: Failed to remove protocol filter [%s] from ppfid %hhd\n",
+ str, ppfid);
+out:
+ qed_ptt_release(p_hwfn, p_ptt);
+}
+
+/******************************* NIG LLH - End ********************************/
+
#define QED_MIN_DPIS (4)
#define QED_MIN_PWM_REGION (QED_WID_SIZE * QED_MIN_DPIS)
@@ -461,6 +1382,8 @@ void qed_resc_free(struct qed_dev *cdev)
kfree(cdev->reset_stats);
cdev->reset_stats = NULL;
+ qed_llh_free(cdev);
+
for_each_hwfn(cdev, i) {
struct qed_hwfn *p_hwfn = &cdev->hwfns[i];
@@ -1428,6 +2351,13 @@ int qed_resc_alloc(struct qed_dev *cdev)
goto alloc_err;
}
+ rc = qed_llh_alloc(cdev);
+ if (rc) {
+ DP_NOTICE(cdev,
+ "Failed to allocate memory for the llh_info structure\n");
+ goto alloc_err;
+ }
+
cdev->reset_stats = kzalloc(sizeof(*cdev->reset_stats), GFP_KERNEL);
if (!cdev->reset_stats)
goto alloc_no_mem;
@@ -1879,6 +2809,10 @@ static int qed_hw_init_port(struct qed_hwfn *p_hwfn,
{
int rc = 0;
+ /* In CMT the gate should be cleared by the 2nd hwfn */
+ if (!QED_IS_CMT(p_hwfn->cdev) || !IS_LEAD_HWFN(p_hwfn))
+ STORE_RT_REG(p_hwfn, NIG_REG_BRB_GATE_DNTFWD_PORT_RT_OFFSET, 0);
+
rc = qed_init_run(p_hwfn, p_ptt, PHASE_PORT, p_hwfn->port_id, hw_mode);
if (rc)
return rc;
@@ -1964,6 +2898,13 @@ static int qed_hw_init_pf(struct qed_hwfn *p_hwfn,
if (rc)
return rc;
+ /* Use the leading hwfn since in CMT only NIG #0 is operational */
+ if (IS_LEAD_HWFN(p_hwfn)) {
+ rc = qed_llh_hw_init_pf(p_hwfn, p_ptt);
+ if (rc)
+ return rc;
+ }
+
if (b_hw_start) {
/* enable interrupts */
qed_int_igu_enable(p_hwfn, p_ptt, int_mode);
@@ -2393,6 +3334,12 @@ int qed_hw_stop(struct qed_dev *cdev)
qed_wr(p_hwfn, p_ptt, DORQ_REG_PF_DB_ENABLE, 0);
qed_wr(p_hwfn, p_ptt, QM_REG_PF_EN, 0);
+ if (IS_LEAD_HWFN(p_hwfn) &&
+ test_bit(QED_MF_LLH_MAC_CLSS, &cdev->mf_bits) &&
+ !QED_IS_FCOE_PERSONALITY(p_hwfn))
+ qed_llh_remove_mac_filter(cdev, 0,
+ p_hwfn->hw_info.hw_mac_addr);
+
if (!cdev->recov_in_prog) {
rc = qed_mcp_unload_done(p_hwfn, p_ptt);
if (rc) {
@@ -2868,6 +3815,36 @@ static int qed_hw_set_resc_info(struct qed_hwfn *p_hwfn)
return 0;
}
+static int qed_hw_get_ppfid_bitmap(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt)
+{
+ struct qed_dev *cdev = p_hwfn->cdev;
+ u8 native_ppfid_idx;
+ int rc;
+
+ /* The native_ppfid_idx calculation differs between BB and AH */
+ if (QED_IS_BB(cdev))
+ native_ppfid_idx = p_hwfn->rel_pf_id;
+ else
+ native_ppfid_idx = p_hwfn->rel_pf_id /
+ cdev->num_ports_in_engine;
+
+ rc = qed_mcp_get_ppfid_bitmap(p_hwfn, p_ptt);
+ if (rc != 0 && rc != -EOPNOTSUPP)
+ return rc;
+ else if (rc == -EOPNOTSUPP)
+ cdev->ppfid_bitmap = 0x1 << native_ppfid_idx;
+
+ if (!(cdev->ppfid_bitmap & (0x1 << native_ppfid_idx))) {
+ DP_INFO(p_hwfn,
+ "Fix the PPFID bitmap to include the native PPFID [native_ppfid_idx %hhd, orig_bitmap 0x%hhx]\n",
+ native_ppfid_idx, cdev->ppfid_bitmap);
+ cdev->ppfid_bitmap = 0x1 << native_ppfid_idx;
+ }
+
+ return 0;
+}
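The native PPFID index computed above follows the same BB/AH split as the rest of the LLH code: on BB it is the relative PF id itself, on AH the per-port spread is divided back out. A standalone sketch (illustrative only, not driver code):

#include <stdint.h>
#include <stdio.h>

static uint8_t native_ppfid_idx(int is_bb, uint8_t rel_pf_id,
				uint8_t num_ports_in_engine)
{
	return is_bb ? rel_pf_id : rel_pf_id / num_ports_in_engine;
}

int main(void)
{
	printf("BB: %u, AH: %u\n",
	       native_ppfid_idx(1, 5, 2),	/* BB -> 5 */
	       native_ppfid_idx(0, 5, 2));	/* AH, 2 ports/engine -> 2 */
	return 0;
}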
+
static int qed_hw_get_resc(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
{
struct qed_resc_unlock_params resc_unlock_params;
@@ -2925,6 +3902,13 @@ static int qed_hw_get_resc(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
"Failed to release the resource lock for the resource allocation commands\n");
}
+ /* PPFID bitmap */
+ if (IS_LEAD_HWFN(p_hwfn)) {
+ rc = qed_hw_get_ppfid_bitmap(p_hwfn, p_ptt);
+ if (rc)
+ return rc;
+ }
+
/* Sanity for ILT */
if ((b_ah && (RESC_END(p_hwfn, QED_ILT) > PXP_NUM_ILT_RECORDS_K2)) ||
(!b_ah && (RESC_END(p_hwfn, QED_ILT) > PXP_NUM_ILT_RECORDS_BB))) {
@@ -3443,6 +4427,7 @@ static void qed_nvm_info_free(struct qed_hwfn *p_hwfn)
static int qed_hw_prepare_single(struct qed_hwfn *p_hwfn,
void __iomem *p_regview,
void __iomem *p_doorbells,
+ u64 db_phys_addr,
enum qed_pci_personality personality)
{
struct qed_dev *cdev = p_hwfn->cdev;
@@ -3451,6 +4436,7 @@ static int qed_hw_prepare_single(struct qed_hwfn *p_hwfn,
/* Split PCI bars evenly between hwfns */
p_hwfn->regview = p_regview;
p_hwfn->doorbells = p_doorbells;
+ p_hwfn->db_phys_addr = db_phys_addr;
if (IS_VF(p_hwfn->cdev))
return qed_vf_hw_prepare(p_hwfn);
@@ -3546,7 +4532,9 @@ int qed_hw_prepare(struct qed_dev *cdev,
/* Initialize the first hwfn - will learn number of hwfns */
rc = qed_hw_prepare_single(p_hwfn,
cdev->regview,
- cdev->doorbells, personality);
+ cdev->doorbells,
+ cdev->db_phys_addr,
+ personality);
if (rc)
return rc;
@@ -3555,22 +4543,25 @@ int qed_hw_prepare(struct qed_dev *cdev,
/* Initialize the rest of the hwfns */
if (cdev->num_hwfns > 1) {
void __iomem *p_regview, *p_doorbell;
- u8 __iomem *addr;
+ u64 db_phys_addr;
+ u32 offset;
/* adjust bar offset for second engine */
- addr = cdev->regview +
- qed_hw_bar_size(p_hwfn, p_hwfn->p_main_ptt,
- BAR_ID_0) / 2;
- p_regview = addr;
+ offset = qed_hw_bar_size(p_hwfn, p_hwfn->p_main_ptt,
+ BAR_ID_0) / 2;
+ p_regview = cdev->regview + offset;
+
+ offset = qed_hw_bar_size(p_hwfn, p_hwfn->p_main_ptt,
+ BAR_ID_1) / 2;
- addr = cdev->doorbells +
- qed_hw_bar_size(p_hwfn, p_hwfn->p_main_ptt,
- BAR_ID_1) / 2;
- p_doorbell = addr;
+ p_doorbell = cdev->doorbells + offset;
+
+ db_phys_addr = cdev->db_phys_addr + offset;
/* prepare second hw function */
rc = qed_hw_prepare_single(&cdev->hwfns[1], p_regview,
- p_doorbell, personality);
+ p_doorbell, db_phys_addr,
+ personality);
/* in case of error, need to free the previously
* initialized hwfn 0.
@@ -3951,269 +4942,6 @@ int qed_fw_rss_eng(struct qed_hwfn *p_hwfn, u8 src_id, u8 *dst_id)
return 0;
}
-static void qed_llh_mac_to_filter(u32 *p_high, u32 *p_low,
- u8 *p_filter)
-{
- *p_high = p_filter[1] | (p_filter[0] << 8);
- *p_low = p_filter[5] | (p_filter[4] << 8) |
- (p_filter[3] << 16) | (p_filter[2] << 24);
-}
-
-int qed_llh_add_mac_filter(struct qed_hwfn *p_hwfn,
- struct qed_ptt *p_ptt, u8 *p_filter)
-{
- u32 high = 0, low = 0, en;
- int i;
-
- if (!test_bit(QED_MF_LLH_MAC_CLSS, &p_hwfn->cdev->mf_bits))
- return 0;
-
- qed_llh_mac_to_filter(&high, &low, p_filter);
-
- /* Find a free entry and utilize it */
- for (i = 0; i < NIG_REG_LLH_FUNC_FILTER_EN_SIZE; i++) {
- en = qed_rd(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_EN + i * sizeof(u32));
- if (en)
- continue;
- qed_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_VALUE +
- 2 * i * sizeof(u32), low);
- qed_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_VALUE +
- (2 * i + 1) * sizeof(u32), high);
- qed_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_MODE + i * sizeof(u32), 0);
- qed_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE +
- i * sizeof(u32), 0);
- qed_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_EN + i * sizeof(u32), 1);
- break;
- }
- if (i >= NIG_REG_LLH_FUNC_FILTER_EN_SIZE) {
- DP_NOTICE(p_hwfn,
- "Failed to find an empty LLH filter to utilize\n");
- return -EINVAL;
- }
-
- DP_VERBOSE(p_hwfn, NETIF_MSG_HW,
- "mac: %pM is added at %d\n",
- p_filter, i);
-
- return 0;
-}
-
-void qed_llh_remove_mac_filter(struct qed_hwfn *p_hwfn,
- struct qed_ptt *p_ptt, u8 *p_filter)
-{
- u32 high = 0, low = 0;
- int i;
-
- if (!test_bit(QED_MF_LLH_MAC_CLSS, &p_hwfn->cdev->mf_bits))
- return;
-
- qed_llh_mac_to_filter(&high, &low, p_filter);
-
- /* Find the entry and clean it */
- for (i = 0; i < NIG_REG_LLH_FUNC_FILTER_EN_SIZE; i++) {
- if (qed_rd(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_VALUE +
- 2 * i * sizeof(u32)) != low)
- continue;
- if (qed_rd(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_VALUE +
- (2 * i + 1) * sizeof(u32)) != high)
- continue;
-
- qed_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_EN + i * sizeof(u32), 0);
- qed_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_VALUE + 2 * i * sizeof(u32), 0);
- qed_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_VALUE +
- (2 * i + 1) * sizeof(u32), 0);
-
- DP_VERBOSE(p_hwfn, NETIF_MSG_HW,
- "mac: %pM is removed from %d\n",
- p_filter, i);
- break;
- }
- if (i >= NIG_REG_LLH_FUNC_FILTER_EN_SIZE)
- DP_NOTICE(p_hwfn, "Tried to remove a non-configured filter\n");
-}
-
-int
-qed_llh_add_protocol_filter(struct qed_hwfn *p_hwfn,
- struct qed_ptt *p_ptt,
- u16 source_port_or_eth_type,
- u16 dest_port, enum qed_llh_port_filter_type_t type)
-{
- u32 high = 0, low = 0, en;
- int i;
-
- if (!test_bit(QED_MF_LLH_PROTO_CLSS, &p_hwfn->cdev->mf_bits))
- return 0;
-
- switch (type) {
- case QED_LLH_FILTER_ETHERTYPE:
- high = source_port_or_eth_type;
- break;
- case QED_LLH_FILTER_TCP_SRC_PORT:
- case QED_LLH_FILTER_UDP_SRC_PORT:
- low = source_port_or_eth_type << 16;
- break;
- case QED_LLH_FILTER_TCP_DEST_PORT:
- case QED_LLH_FILTER_UDP_DEST_PORT:
- low = dest_port;
- break;
- case QED_LLH_FILTER_TCP_SRC_AND_DEST_PORT:
- case QED_LLH_FILTER_UDP_SRC_AND_DEST_PORT:
- low = (source_port_or_eth_type << 16) | dest_port;
- break;
- default:
- DP_NOTICE(p_hwfn,
- "Non valid LLH protocol filter type %d\n", type);
- return -EINVAL;
- }
- /* Find a free entry and utilize it */
- for (i = 0; i < NIG_REG_LLH_FUNC_FILTER_EN_SIZE; i++) {
- en = qed_rd(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_EN + i * sizeof(u32));
- if (en)
- continue;
- qed_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_VALUE +
- 2 * i * sizeof(u32), low);
- qed_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_VALUE +
- (2 * i + 1) * sizeof(u32), high);
- qed_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_MODE + i * sizeof(u32), 1);
- qed_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE +
- i * sizeof(u32), 1 << type);
- qed_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_EN + i * sizeof(u32), 1);
- break;
- }
- if (i >= NIG_REG_LLH_FUNC_FILTER_EN_SIZE) {
- DP_NOTICE(p_hwfn,
- "Failed to find an empty LLH filter to utilize\n");
- return -EINVAL;
- }
- switch (type) {
- case QED_LLH_FILTER_ETHERTYPE:
- DP_VERBOSE(p_hwfn, NETIF_MSG_HW,
- "ETH type %x is added at %d\n",
- source_port_or_eth_type, i);
- break;
- case QED_LLH_FILTER_TCP_SRC_PORT:
- DP_VERBOSE(p_hwfn, NETIF_MSG_HW,
- "TCP src port %x is added at %d\n",
- source_port_or_eth_type, i);
- break;
- case QED_LLH_FILTER_UDP_SRC_PORT:
- DP_VERBOSE(p_hwfn, NETIF_MSG_HW,
- "UDP src port %x is added at %d\n",
- source_port_or_eth_type, i);
- break;
- case QED_LLH_FILTER_TCP_DEST_PORT:
- DP_VERBOSE(p_hwfn, NETIF_MSG_HW,
- "TCP dst port %x is added at %d\n", dest_port, i);
- break;
- case QED_LLH_FILTER_UDP_DEST_PORT:
- DP_VERBOSE(p_hwfn, NETIF_MSG_HW,
- "UDP dst port %x is added at %d\n", dest_port, i);
- break;
- case QED_LLH_FILTER_TCP_SRC_AND_DEST_PORT:
- DP_VERBOSE(p_hwfn, NETIF_MSG_HW,
- "TCP src/dst ports %x/%x are added at %d\n",
- source_port_or_eth_type, dest_port, i);
- break;
- case QED_LLH_FILTER_UDP_SRC_AND_DEST_PORT:
- DP_VERBOSE(p_hwfn, NETIF_MSG_HW,
- "UDP src/dst ports %x/%x are added at %d\n",
- source_port_or_eth_type, dest_port, i);
- break;
- }
- return 0;
-}
-
-void
-qed_llh_remove_protocol_filter(struct qed_hwfn *p_hwfn,
- struct qed_ptt *p_ptt,
- u16 source_port_or_eth_type,
- u16 dest_port,
- enum qed_llh_port_filter_type_t type)
-{
- u32 high = 0, low = 0;
- int i;
-
- if (!test_bit(QED_MF_LLH_PROTO_CLSS, &p_hwfn->cdev->mf_bits))
- return;
-
- switch (type) {
- case QED_LLH_FILTER_ETHERTYPE:
- high = source_port_or_eth_type;
- break;
- case QED_LLH_FILTER_TCP_SRC_PORT:
- case QED_LLH_FILTER_UDP_SRC_PORT:
- low = source_port_or_eth_type << 16;
- break;
- case QED_LLH_FILTER_TCP_DEST_PORT:
- case QED_LLH_FILTER_UDP_DEST_PORT:
- low = dest_port;
- break;
- case QED_LLH_FILTER_TCP_SRC_AND_DEST_PORT:
- case QED_LLH_FILTER_UDP_SRC_AND_DEST_PORT:
- low = (source_port_or_eth_type << 16) | dest_port;
- break;
- default:
- DP_NOTICE(p_hwfn,
- "Non valid LLH protocol filter type %d\n", type);
- return;
- }
-
- for (i = 0; i < NIG_REG_LLH_FUNC_FILTER_EN_SIZE; i++) {
- if (!qed_rd(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_EN + i * sizeof(u32)))
- continue;
- if (!qed_rd(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_MODE + i * sizeof(u32)))
- continue;
- if (!(qed_rd(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE +
- i * sizeof(u32)) & BIT(type)))
- continue;
- if (qed_rd(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_VALUE +
- 2 * i * sizeof(u32)) != low)
- continue;
- if (qed_rd(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_VALUE +
- (2 * i + 1) * sizeof(u32)) != high)
- continue;
-
- qed_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_EN + i * sizeof(u32), 0);
- qed_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_MODE + i * sizeof(u32), 0);
- qed_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE +
- i * sizeof(u32), 0);
- qed_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_VALUE + 2 * i * sizeof(u32), 0);
- qed_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_VALUE +
- (2 * i + 1) * sizeof(u32), 0);
- break;
- }
-
- if (i >= NIG_REG_LLH_FUNC_FILTER_EN_SIZE)
- DP_NOTICE(p_hwfn, "Tried to remove a non-configured filter\n");
-}
-
static int qed_set_coalesce(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt,
u32 hw_addr, void *p_eth_qzone,
size_t eth_qzone_size, u8 timeset)
diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev_api.h b/drivers/net/ethernet/qlogic/qed/qed_dev_api.h
index e4b4e3b78e8a..47376d4d071f 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_dev_api.h
+++ b/drivers/net/ethernet/qlogic/qed/qed_dev_api.h
@@ -241,11 +241,17 @@ enum qed_dmae_address_type_t {
#define QED_DMAE_FLAG_VF_SRC 0x00000002
#define QED_DMAE_FLAG_VF_DST 0x00000004
#define QED_DMAE_FLAG_COMPLETION_DST 0x00000008
+#define QED_DMAE_FLAG_PORT 0x00000010
+#define QED_DMAE_FLAG_PF_SRC 0x00000020
+#define QED_DMAE_FLAG_PF_DST 0x00000040
struct qed_dmae_params {
u32 flags; /* consists of QED_DMAE_FLAG_* values */
u8 src_vfid;
u8 dst_vfid;
+ u8 port_id;
+ u8 src_pfid;
+ u8 dst_pfid;
};
/**
@@ -257,7 +263,7 @@ struct qed_dmae_params {
* @param source_addr
* @param grc_addr (dmae_data_offset)
* @param size_in_dwords
- * @param flags (one of the flags defined above)
+ * @param p_params (default parameters will be used in case of NULL)
*/
int
qed_dmae_host2grc(struct qed_hwfn *p_hwfn,
@@ -265,7 +271,7 @@ qed_dmae_host2grc(struct qed_hwfn *p_hwfn,
u64 source_addr,
u32 grc_addr,
u32 size_in_dwords,
- u32 flags);
+ struct qed_dmae_params *p_params);
/**
* @brief qed_dmae_grc2host - Read data from dmae data offset
@@ -275,11 +281,11 @@ qed_dmae_host2grc(struct qed_hwfn *p_hwfn,
* @param grc_addr (dmae_data_offset)
* @param dest_addr
* @param size_in_dwords
- * @param flags - one of the flags defined above
+ * @param p_params (default parameters will be used in case of NULL)
*/
int qed_dmae_grc2host(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt,
u32 grc_addr, dma_addr_t dest_addr, u32 size_in_dwords,
- u32 flags);
+ struct qed_dmae_params *p_params);
/**
* @brief qed_dmae_host2host - copy data from a source address to a destination address
@@ -290,7 +296,7 @@ int qed_dmae_grc2host(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt,
* @param source_addr
* @param dest_addr
* @param size_in_dwords
- * @param params
+ * @param p_params (default parameters will be used in case of NULL)
*/
int qed_dmae_host2host(struct qed_hwfn *p_hwfn,
struct qed_ptt *p_ptt,
@@ -368,26 +374,66 @@ int qed_fw_rss_eng(struct qed_hwfn *p_hwfn,
u8 *dst_id);
/**
- * @brief qed_llh_add_mac_filter - configures a MAC filter in llh
+ * @brief qed_llh_get_num_ppfid - Return the number of LLH filter banks
+ * that are allocated to the PF.
*
- * @param p_hwfn
- * @param p_ptt
- * @param p_filter - MAC to add
+ * @param cdev
+ *
+ * @return u8 - Number of LLH filter banks
*/
-int qed_llh_add_mac_filter(struct qed_hwfn *p_hwfn,
- struct qed_ptt *p_ptt, u8 *p_filter);
+u8 qed_llh_get_num_ppfid(struct qed_dev *cdev);
+
+enum qed_eng {
+ QED_ENG0,
+ QED_ENG1,
+ QED_BOTH_ENG,
+};
/**
- * @brief qed_llh_remove_mac_filter - removes a MAC filter from llh
+ * @brief qed_llh_set_ppfid_affinity - Set the engine affinity for the given
+ * LLH filter bank.
+ *
+ * @param cdev
+ * @param ppfid - relative within the allocated ppfids ('0' is the default one).
+ * @param eng
+ *
+ * @return int
+ */
+int qed_llh_set_ppfid_affinity(struct qed_dev *cdev,
+ u8 ppfid, enum qed_eng eng);
+
+/**
+ * @brief qed_llh_set_roce_affinity - Set the RoCE engine affinity
+ *
+ * @param cdev
+ * @param eng
+ *
+ * @return int
+ */
+int qed_llh_set_roce_affinity(struct qed_dev *cdev, enum qed_eng eng);
+
+/**
+ * @brief qed_llh_add_mac_filter - Add an LLH MAC filter into the given filter
+ * bank.
+ *
+ * @param cdev
+ * @param ppfid - relative within the allocated ppfids ('0' is the default one).
+ * @param mac_addr - MAC to add
+ */
+int qed_llh_add_mac_filter(struct qed_dev *cdev,
+ u8 ppfid, u8 mac_addr[ETH_ALEN]);
+
+/**
+ * @brief qed_llh_remove_mac_filter - Remove an LLH MAC filter from the given
+ * filter bank.
*
- * @param p_hwfn
- * @param p_ptt
- * @param p_filter - MAC to remove
+ * @param cdev
+ * @param ppfid - relative within the allocated ppfids ('0' is the default one).
+ * @param mac_addr - MAC to remove
*/
-void qed_llh_remove_mac_filter(struct qed_hwfn *p_hwfn,
- struct qed_ptt *p_ptt, u8 *p_filter);
+void qed_llh_remove_mac_filter(struct qed_dev *cdev,
+ u8 ppfid, u8 mac_addr[ETH_ALEN]);
-enum qed_llh_port_filter_type_t {
+enum qed_llh_prot_filter_type_t {
QED_LLH_FILTER_ETHERTYPE,
QED_LLH_FILTER_TCP_SRC_PORT,
QED_LLH_FILTER_TCP_DEST_PORT,
@@ -398,36 +444,37 @@ enum qed_llh_port_filter_type_t {
};
/**
- * @brief qed_llh_add_protocol_filter - configures a protocol filter in llh
+ * @brief qed_llh_add_protocol_filter - Add a LLH protocol filter into the
+ * given filter bank.
*
- * @param p_hwfn
- * @param p_ptt
+ * @param cdev
+ * @param ppfid - relative within the allocated ppfids ('0' is the default one).
+ * @param type - type of filters and comparing
* @param source_port_or_eth_type - source port or ethertype to add
* @param dest_port - destination port to add
- * @param type - type of filters and comparing
*/
int
-qed_llh_add_protocol_filter(struct qed_hwfn *p_hwfn,
- struct qed_ptt *p_ptt,
- u16 source_port_or_eth_type,
- u16 dest_port,
- enum qed_llh_port_filter_type_t type);
+qed_llh_add_protocol_filter(struct qed_dev *cdev,
+ u8 ppfid,
+ enum qed_llh_prot_filter_type_t type,
+ u16 source_port_or_eth_type, u16 dest_port);
/**
- * @brief qed_llh_remove_protocol_filter - remove a protocol filter in llh
+ * @brief qed_llh_remove_protocol_filter - Remove an LLH protocol filter from
+ * the given filter bank.
*
- * @param p_hwfn
- * @param p_ptt
+ * @param cdev
+ * @param ppfid - relative within the allocated ppfids ('0' is the default one).
+ * @param type - type of filters and comparing
* @param source_port_or_eth_type - source port or ethertype to add
* @param dest_port - destination port to add
- * @param type - type of filters and comparing
*/
void
-qed_llh_remove_protocol_filter(struct qed_hwfn *p_hwfn,
- struct qed_ptt *p_ptt,
- u16 source_port_or_eth_type,
- u16 dest_port,
- enum qed_llh_port_filter_type_t type);
+qed_llh_remove_protocol_filter(struct qed_dev *cdev,
+ u8 ppfid,
+ enum qed_llh_prot_filter_type_t type,
+ u16 source_port_or_eth_type, u16 dest_port);
/**
* @brief Cleanup of previous driver remains prior to load
diff --git a/drivers/net/ethernet/qlogic/qed/qed_fcoe.c b/drivers/net/ethernet/qlogic/qed/qed_fcoe.c
index 46dc93d3b9b5..de31a382f58e 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_fcoe.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_fcoe.c
@@ -745,7 +745,7 @@ struct qed_hash_fcoe_con {
static int qed_fill_fcoe_dev_info(struct qed_dev *cdev,
struct qed_dev_fcoe_info *info)
{
- struct qed_hwfn *hwfn = QED_LEADING_HWFN(cdev);
+ struct qed_hwfn *hwfn = QED_AFFIN_HWFN(cdev);
int rc;
memset(info, 0, sizeof(*info));
@@ -806,15 +806,15 @@ static int qed_fcoe_stop(struct qed_dev *cdev)
return -EINVAL;
}
- p_ptt = qed_ptt_acquire(QED_LEADING_HWFN(cdev));
+ p_ptt = qed_ptt_acquire(QED_AFFIN_HWFN(cdev));
if (!p_ptt)
return -EAGAIN;
/* Stop the fcoe */
- rc = qed_sp_fcoe_func_stop(QED_LEADING_HWFN(cdev), p_ptt,
+ rc = qed_sp_fcoe_func_stop(QED_AFFIN_HWFN(cdev), p_ptt,
QED_SPQ_MODE_EBLOCK, NULL);
cdev->flags &= ~QED_FLAG_STORAGE_STARTED;
- qed_ptt_release(QED_LEADING_HWFN(cdev), p_ptt);
+ qed_ptt_release(QED_AFFIN_HWFN(cdev), p_ptt);
return rc;
}
@@ -828,8 +828,8 @@ static int qed_fcoe_start(struct qed_dev *cdev, struct qed_fcoe_tid *tasks)
return 0;
}
- rc = qed_sp_fcoe_func_start(QED_LEADING_HWFN(cdev),
- QED_SPQ_MODE_EBLOCK, NULL);
+ rc = qed_sp_fcoe_func_start(QED_AFFIN_HWFN(cdev), QED_SPQ_MODE_EBLOCK,
+ NULL);
if (rc) {
DP_NOTICE(cdev, "Failed to start fcoe\n");
return rc;
@@ -849,7 +849,7 @@ static int qed_fcoe_start(struct qed_dev *cdev, struct qed_fcoe_tid *tasks)
return -ENOMEM;
}
- rc = qed_cxt_get_tid_mem_info(QED_LEADING_HWFN(cdev), tid_info);
+ rc = qed_cxt_get_tid_mem_info(QED_AFFIN_HWFN(cdev), tid_info);
if (rc) {
DP_NOTICE(cdev, "Failed to gather task information\n");
qed_fcoe_stop(cdev);
@@ -884,7 +884,7 @@ static int qed_fcoe_acquire_conn(struct qed_dev *cdev,
}
/* Acquire the connection */
- rc = qed_fcoe_acquire_connection(QED_LEADING_HWFN(cdev), NULL,
+ rc = qed_fcoe_acquire_connection(QED_AFFIN_HWFN(cdev), NULL,
&hash_con->con);
if (rc) {
DP_NOTICE(cdev, "Failed to acquire Connection\n");
@@ -898,7 +898,7 @@ static int qed_fcoe_acquire_conn(struct qed_dev *cdev,
hash_add(cdev->connections, &hash_con->node, *handle);
if (p_doorbell)
- *p_doorbell = qed_fcoe_get_db_addr(QED_LEADING_HWFN(cdev),
+ *p_doorbell = qed_fcoe_get_db_addr(QED_AFFIN_HWFN(cdev),
*handle);
return 0;
@@ -916,7 +916,7 @@ static int qed_fcoe_release_conn(struct qed_dev *cdev, u32 handle)
}
hlist_del(&hash_con->node);
- qed_fcoe_release_connection(QED_LEADING_HWFN(cdev), hash_con->con);
+ qed_fcoe_release_connection(QED_AFFIN_HWFN(cdev), hash_con->con);
kfree(hash_con);
return 0;
@@ -971,7 +971,7 @@ static int qed_fcoe_offload_conn(struct qed_dev *cdev,
con->d_id.addr_mid = conn_info->d_id.addr_mid;
con->d_id.addr_lo = conn_info->d_id.addr_lo;
- return qed_sp_fcoe_conn_offload(QED_LEADING_HWFN(cdev), con,
+ return qed_sp_fcoe_conn_offload(QED_AFFIN_HWFN(cdev), con,
QED_SPQ_MODE_EBLOCK, NULL);
}
@@ -992,13 +992,13 @@ static int qed_fcoe_destroy_conn(struct qed_dev *cdev,
con = hash_con->con;
con->terminate_params = terminate_params;
- return qed_sp_fcoe_conn_destroy(QED_LEADING_HWFN(cdev), con,
+ return qed_sp_fcoe_conn_destroy(QED_AFFIN_HWFN(cdev), con,
QED_SPQ_MODE_EBLOCK, NULL);
}
static int qed_fcoe_stats(struct qed_dev *cdev, struct qed_fcoe_stats *stats)
{
- return qed_fcoe_get_stats(QED_LEADING_HWFN(cdev), stats);
+ return qed_fcoe_get_stats(QED_AFFIN_HWFN(cdev), stats);
}
void qed_get_protocol_stats_fcoe(struct qed_dev *cdev,
diff --git a/drivers/net/ethernet/qlogic/qed/qed_hsi.h b/drivers/net/ethernet/qlogic/qed/qed_hsi.h
index 37edaa847512..e054f6c69e3a 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_hsi.h
+++ b/drivers/net/ethernet/qlogic/qed/qed_hsi.h
@@ -12612,8 +12612,10 @@ struct public_drv_mb {
#define DRV_MSG_CODE_BIST_TEST 0x001e0000
#define DRV_MSG_CODE_SET_LED_MODE 0x00200000
-#define DRV_MSG_CODE_RESOURCE_CMD 0x00230000
+#define DRV_MSG_CODE_RESOURCE_CMD 0x00230000
#define DRV_MSG_CODE_GET_TLV_DONE 0x002f0000
+#define DRV_MSG_CODE_GET_ENGINE_CONFIG 0x00370000
+#define DRV_MSG_CODE_GET_PPFID_BITMAP 0x43000000
#define RESOURCE_CMD_REQ_RESC_MASK 0x0000001F
#define RESOURCE_CMD_REQ_RESC_SHIFT 0
@@ -12802,6 +12804,18 @@ struct public_drv_mb {
#define FW_MB_PARAM_LOAD_DONE_DID_EFUSE_ERROR (1 << 0)
+#define FW_MB_PARAM_ENG_CFG_FIR_AFFIN_VALID_MASK 0x00000001
+#define FW_MB_PARAM_ENG_CFG_FIR_AFFIN_VALID_SHIFT 0
+#define FW_MB_PARAM_ENG_CFG_FIR_AFFIN_VALUE_MASK 0x00000002
+#define FW_MB_PARAM_ENG_CFG_FIR_AFFIN_VALUE_SHIFT 1
+#define FW_MB_PARAM_ENG_CFG_L2_AFFIN_VALID_MASK 0x00000004
+#define FW_MB_PARAM_ENG_CFG_L2_AFFIN_VALID_SHIFT 2
+#define FW_MB_PARAM_ENG_CFG_L2_AFFIN_VALUE_MASK 0x00000008
+#define FW_MB_PARAM_ENG_CFG_L2_AFFIN_VALUE_SHIFT 3
+
+#define FW_MB_PARAM_PPFID_BITMAP_MASK 0xFF
+#define FW_MB_PARAM_PPFID_BITMAP_SHIFT 0
+
u32 drv_pulse_mb;
#define DRV_PULSE_SEQ_MASK 0x00007fff
#define DRV_PULSE_SYSTEM_TIME_MASK 0xffff0000
diff --git a/drivers/net/ethernet/qlogic/qed/qed_hw.c b/drivers/net/ethernet/qlogic/qed/qed_hw.c
index 72ec1c6bdf70..a4de9e3ef72c 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_hw.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_hw.c
@@ -392,11 +392,15 @@ u32 qed_vfid_to_concrete(struct qed_hwfn *p_hwfn, u8 vfid)
}
/* DMAE */
+#define QED_DMAE_FLAGS_IS_SET(params, flag) \
+ ((params) != NULL && ((params)->flags & QED_DMAE_FLAG_##flag))
+
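QED_DMAE_FLAGS_IS_SET() lets every DMAE caller pass NULL for the parameters and get default behaviour, since a NULL pointer short-circuits the flag test. A standalone sketch of the same NULL-safe pattern, with hypothetical names rather than the driver macro itself:

#include <stdint.h>
#include <stdio.h>

struct dmae_params {
	uint32_t flags;
};

#define DMAE_FLAG_PF_DST	0x40
#define FLAGS_IS_SET(p, f)	((p) != NULL && ((p)->flags & (f)))

int main(void)
{
	struct dmae_params params = { .flags = DMAE_FLAG_PF_DST };
	struct dmae_params *none = NULL;	/* "use the defaults" case */

	printf("explicit params: %d, NULL params: %d\n",
	       FLAGS_IS_SET(&params, DMAE_FLAG_PF_DST) ? 1 : 0,
	       FLAGS_IS_SET(none, DMAE_FLAG_PF_DST) ? 1 : 0);	/* 1, 0 */
	return 0;
}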
static void qed_dmae_opcode(struct qed_hwfn *p_hwfn,
const u8 is_src_type_grc,
const u8 is_dst_type_grc,
struct qed_dmae_params *p_params)
{
+ u8 src_pfid, dst_pfid, port_id;
u16 opcode_b = 0;
u32 opcode = 0;
@@ -407,14 +411,18 @@ static void qed_dmae_opcode(struct qed_hwfn *p_hwfn,
opcode |= (is_src_type_grc ? DMAE_CMD_SRC_MASK_GRC
: DMAE_CMD_SRC_MASK_PCIE) <<
DMAE_CMD_SRC_SHIFT;
- opcode |= ((p_hwfn->rel_pf_id & DMAE_CMD_SRC_PF_ID_MASK) <<
+ src_pfid = QED_DMAE_FLAGS_IS_SET(p_params, PF_SRC) ?
+ p_params->src_pfid : p_hwfn->rel_pf_id;
+ opcode |= ((src_pfid & DMAE_CMD_SRC_PF_ID_MASK) <<
DMAE_CMD_SRC_PF_ID_SHIFT);
/* The destination of the DMA can be: 0-None 1-PCIe 2-GRC 3-None */
opcode |= (is_dst_type_grc ? DMAE_CMD_DST_MASK_GRC
: DMAE_CMD_DST_MASK_PCIE) <<
DMAE_CMD_DST_SHIFT;
- opcode |= ((p_hwfn->rel_pf_id & DMAE_CMD_DST_PF_ID_MASK) <<
+ dst_pfid = QED_DMAE_FLAGS_IS_SET(p_params, PF_DST) ?
+ p_params->dst_pfid : p_hwfn->rel_pf_id;
+ opcode |= ((dst_pfid & DMAE_CMD_DST_PF_ID_MASK) <<
DMAE_CMD_DST_PF_ID_SHIFT);
/* Whether to write a completion word to the completion destination:
@@ -425,12 +433,14 @@ static void qed_dmae_opcode(struct qed_hwfn *p_hwfn,
opcode |= (DMAE_CMD_SRC_ADDR_RESET_MASK <<
DMAE_CMD_SRC_ADDR_RESET_SHIFT);
- if (p_params->flags & QED_DMAE_FLAG_COMPLETION_DST)
+ if (QED_DMAE_FLAGS_IS_SET(p_params, COMPLETION_DST))
opcode |= (1 << DMAE_CMD_COMP_FUNC_SHIFT);
opcode |= (DMAE_CMD_ENDIANITY << DMAE_CMD_ENDIANITY_MODE_SHIFT);
- opcode |= ((p_hwfn->port_id) << DMAE_CMD_PORT_ID_SHIFT);
+ port_id = (QED_DMAE_FLAGS_IS_SET(p_params, PORT)) ?
+ p_params->port_id : p_hwfn->port_id;
+ opcode |= (port_id << DMAE_CMD_PORT_ID_SHIFT);
/* reset source address in next go */
opcode |= (DMAE_CMD_SRC_ADDR_RESET_MASK <<
@@ -441,7 +451,7 @@ static void qed_dmae_opcode(struct qed_hwfn *p_hwfn,
DMAE_CMD_DST_ADDR_RESET_SHIFT);
/* SRC/DST VFID: all 1's - pf, otherwise VF id */
- if (p_params->flags & QED_DMAE_FLAG_VF_SRC) {
+ if (QED_DMAE_FLAGS_IS_SET(p_params, VF_SRC)) {
opcode |= 1 << DMAE_CMD_SRC_VF_ID_VALID_SHIFT;
opcode_b |= p_params->src_vfid << DMAE_CMD_SRC_VF_ID_SHIFT;
} else {
@@ -449,7 +459,7 @@ static void qed_dmae_opcode(struct qed_hwfn *p_hwfn,
DMAE_CMD_SRC_VF_ID_SHIFT;
}
- if (p_params->flags & QED_DMAE_FLAG_VF_DST) {
+ if (QED_DMAE_FLAGS_IS_SET(p_params, VF_DST)) {
opcode |= 1 << DMAE_CMD_DST_VF_ID_VALID_SHIFT;
opcode_b |= p_params->dst_vfid << DMAE_CMD_DST_VF_ID_SHIFT;
} else {
@@ -733,7 +743,7 @@ static int qed_dmae_execute_command(struct qed_hwfn *p_hwfn,
for (i = 0; i <= cnt_split; i++) {
offset = length_limit * i;
- if (!(p_params->flags & QED_DMAE_FLAG_RW_REPL_SRC)) {
+ if (!QED_DMAE_FLAGS_IS_SET(p_params, RW_REPL_SRC)) {
if (src_type == QED_DMAE_ADDRESS_GRC)
src_addr_split = src_addr + offset;
else
@@ -771,14 +781,12 @@ static int qed_dmae_execute_command(struct qed_hwfn *p_hwfn,
int qed_dmae_host2grc(struct qed_hwfn *p_hwfn,
struct qed_ptt *p_ptt,
- u64 source_addr, u32 grc_addr, u32 size_in_dwords, u32 flags)
+ u64 source_addr, u32 grc_addr, u32 size_in_dwords,
+ struct qed_dmae_params *p_params)
{
u32 grc_addr_in_dw = grc_addr / sizeof(u32);
- struct qed_dmae_params params;
int rc;
- memset(&params, 0, sizeof(struct qed_dmae_params));
- params.flags = flags;
mutex_lock(&p_hwfn->dmae_info.mutex);
@@ -786,7 +794,7 @@ int qed_dmae_host2grc(struct qed_hwfn *p_hwfn,
grc_addr_in_dw,
QED_DMAE_ADDRESS_HOST_VIRT,
QED_DMAE_ADDRESS_GRC,
- size_in_dwords, &params);
+ size_in_dwords, p_params);
mutex_unlock(&p_hwfn->dmae_info.mutex);
@@ -796,21 +804,19 @@ int qed_dmae_host2grc(struct qed_hwfn *p_hwfn,
int qed_dmae_grc2host(struct qed_hwfn *p_hwfn,
struct qed_ptt *p_ptt,
u32 grc_addr,
- dma_addr_t dest_addr, u32 size_in_dwords, u32 flags)
+ dma_addr_t dest_addr, u32 size_in_dwords,
+ struct qed_dmae_params *p_params)
{
u32 grc_addr_in_dw = grc_addr / sizeof(u32);
- struct qed_dmae_params params;
int rc;
- memset(&params, 0, sizeof(struct qed_dmae_params));
- params.flags = flags;
mutex_lock(&p_hwfn->dmae_info.mutex);
rc = qed_dmae_execute_command(p_hwfn, p_ptt, grc_addr_in_dw,
dest_addr, QED_DMAE_ADDRESS_GRC,
QED_DMAE_ADDRESS_HOST_VIRT,
- size_in_dwords, &params);
+ size_in_dwords, p_params);
mutex_unlock(&p_hwfn->dmae_info.mutex);
@@ -842,7 +848,6 @@ int qed_dmae_sanity(struct qed_hwfn *p_hwfn,
struct qed_ptt *p_ptt, const char *phase)
{
u32 size = PAGE_SIZE / 2, val;
- struct qed_dmae_params params;
int rc = 0;
dma_addr_t p_phys;
void *p_virt;
@@ -875,9 +880,8 @@ int qed_dmae_sanity(struct qed_hwfn *p_hwfn,
(u64)p_phys,
p_virt, (u64)(p_phys + size), (u8 *)p_virt + size, size);
- memset(&params, 0, sizeof(params));
rc = qed_dmae_host2host(p_hwfn, p_ptt, p_phys, p_phys + size,
- size / 4 /* size_in_dwords */, &params);
+ size / 4, NULL);
if (rc) {
DP_NOTICE(p_hwfn,
"DMAE sanity [%s]: qed_dmae_host2host() failed. rc = %d.\n",
diff --git a/drivers/net/ethernet/qlogic/qed/qed_init_ops.c b/drivers/net/ethernet/qlogic/qed/qed_init_ops.c
index 34193c2f1699..a868d7f88601 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_init_ops.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_init_ops.c
@@ -131,7 +131,7 @@ static int qed_init_rt(struct qed_hwfn *p_hwfn,
rc = qed_dmae_host2grc(p_hwfn, p_ptt,
(uintptr_t)(p_init_val + i),
- addr + (i << 2), segment, 0);
+ addr + (i << 2), segment, NULL);
if (rc)
return rc;
@@ -194,7 +194,7 @@ static int qed_init_array_dmae(struct qed_hwfn *p_hwfn,
} else {
rc = qed_dmae_host2grc(p_hwfn, p_ptt,
(uintptr_t)(buf + dmae_data_offset),
- addr, size, 0);
+ addr, size, NULL);
}
return rc;
@@ -205,6 +205,7 @@ static int qed_init_fill_dmae(struct qed_hwfn *p_hwfn,
u32 addr, u32 fill, u32 fill_count)
{
static u32 zero_buffer[DMAE_MAX_RW_SIZE];
+ struct qed_dmae_params params = {};
memset(zero_buffer, 0, sizeof(u32) * DMAE_MAX_RW_SIZE);
@@ -214,10 +215,10 @@ static int qed_init_fill_dmae(struct qed_hwfn *p_hwfn,
* 3. p_hwfb->temp_data,
* 4. fill_count
*/
-
+ params.flags = QED_DMAE_FLAG_RW_REPL_SRC;
return qed_dmae_host2grc(p_hwfn, p_ptt,
(uintptr_t)(&zero_buffer[0]),
- addr, fill_count, QED_DMAE_FLAG_RW_REPL_SRC);
+ addr, fill_count, &params);
}
static void qed_init_fill(struct qed_hwfn *p_hwfn,
diff --git a/drivers/net/ethernet/qlogic/qed/qed_int.c b/drivers/net/ethernet/qlogic/qed/qed_int.c
index fdfedbc8e431..4e8118a08654 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_int.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_int.c
@@ -1508,10 +1508,10 @@ void qed_int_cau_conf_sb(struct qed_hwfn *p_hwfn,
qed_dmae_host2grc(p_hwfn, p_ptt, (u64)(uintptr_t)&phys_addr,
CAU_REG_SB_ADDR_MEMORY +
- igu_sb_id * sizeof(u64), 2, 0);
+ igu_sb_id * sizeof(u64), 2, NULL);
qed_dmae_host2grc(p_hwfn, p_ptt, (u64)(uintptr_t)&sb_entry,
CAU_REG_SB_VAR_MEMORY +
- igu_sb_id * sizeof(u64), 2, 0);
+ igu_sb_id * sizeof(u64), 2, NULL);
} else {
/* Initialize Status Block Address */
STORE_RT_REG_AGG(p_hwfn,
@@ -2362,7 +2362,7 @@ int qed_int_set_timer_res(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt,
rc = qed_dmae_grc2host(p_hwfn, p_ptt, CAU_REG_SB_VAR_MEMORY +
sb_id * sizeof(u64),
- (u64)(uintptr_t)&sb_entry, 2, 0);
+ (u64)(uintptr_t)&sb_entry, 2, NULL);
if (rc) {
DP_ERR(p_hwfn, "dmae_grc2host failed %d\n", rc);
return rc;
@@ -2376,7 +2376,7 @@ int qed_int_set_timer_res(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt,
rc = qed_dmae_host2grc(p_hwfn, p_ptt,
(u64)(uintptr_t)&sb_entry,
CAU_REG_SB_VAR_MEMORY +
- sb_id * sizeof(u64), 2, 0);
+ sb_id * sizeof(u64), 2, NULL);
if (rc) {
DP_ERR(p_hwfn, "dmae_host2grc failed %d\n", rc);
return rc;
diff --git a/drivers/net/ethernet/qlogic/qed/qed_iscsi.c b/drivers/net/ethernet/qlogic/qed/qed_iscsi.c
index 4f8a685d1a55..5585c18053ec 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_iscsi.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_iscsi.c
@@ -1082,7 +1082,7 @@ struct qed_hash_iscsi_con {
static int qed_fill_iscsi_dev_info(struct qed_dev *cdev,
struct qed_dev_iscsi_info *info)
{
- struct qed_hwfn *hwfn = QED_LEADING_HWFN(cdev);
+ struct qed_hwfn *hwfn = QED_AFFIN_HWFN(cdev);
int rc;
@@ -1141,8 +1141,8 @@ static int qed_iscsi_stop(struct qed_dev *cdev)
}
/* Stop the iscsi */
- rc = qed_sp_iscsi_func_stop(QED_LEADING_HWFN(cdev),
- QED_SPQ_MODE_EBLOCK, NULL);
+ rc = qed_sp_iscsi_func_stop(QED_AFFIN_HWFN(cdev), QED_SPQ_MODE_EBLOCK,
+ NULL);
cdev->flags &= ~QED_FLAG_STORAGE_STARTED;
return rc;
@@ -1161,9 +1161,8 @@ static int qed_iscsi_start(struct qed_dev *cdev,
return 0;
}
- rc = qed_sp_iscsi_func_start(QED_LEADING_HWFN(cdev),
- QED_SPQ_MODE_EBLOCK, NULL, event_context,
- async_event_cb);
+ rc = qed_sp_iscsi_func_start(QED_AFFIN_HWFN(cdev), QED_SPQ_MODE_EBLOCK,
+ NULL, event_context, async_event_cb);
if (rc) {
DP_NOTICE(cdev, "Failed to start iscsi\n");
return rc;
@@ -1182,8 +1181,7 @@ static int qed_iscsi_start(struct qed_dev *cdev,
return -ENOMEM;
}
- rc = qed_cxt_get_tid_mem_info(QED_LEADING_HWFN(cdev),
- tid_info);
+ rc = qed_cxt_get_tid_mem_info(QED_AFFIN_HWFN(cdev), tid_info);
if (rc) {
DP_NOTICE(cdev, "Failed to gather task information\n");
qed_iscsi_stop(cdev);
@@ -1215,7 +1213,7 @@ static int qed_iscsi_acquire_conn(struct qed_dev *cdev,
return -ENOMEM;
/* Acquire the connection */
- rc = qed_iscsi_acquire_connection(QED_LEADING_HWFN(cdev), NULL,
+ rc = qed_iscsi_acquire_connection(QED_AFFIN_HWFN(cdev), NULL,
&hash_con->con);
if (rc) {
DP_NOTICE(cdev, "Failed to acquire Connection\n");
@@ -1229,7 +1227,7 @@ static int qed_iscsi_acquire_conn(struct qed_dev *cdev,
hash_add(cdev->connections, &hash_con->node, *handle);
if (p_doorbell)
- *p_doorbell = qed_iscsi_get_db_addr(QED_LEADING_HWFN(cdev),
+ *p_doorbell = qed_iscsi_get_db_addr(QED_AFFIN_HWFN(cdev),
*handle);
return 0;
@@ -1247,7 +1245,7 @@ static int qed_iscsi_release_conn(struct qed_dev *cdev, u32 handle)
}
hlist_del(&hash_con->node);
- qed_iscsi_release_connection(QED_LEADING_HWFN(cdev), hash_con->con);
+ qed_iscsi_release_connection(QED_AFFIN_HWFN(cdev), hash_con->con);
kfree(hash_con);
return 0;
@@ -1324,7 +1322,7 @@ static int qed_iscsi_offload_conn(struct qed_dev *cdev,
/* Set default values on other connection fields */
con->offl_flags = 0x1;
- return qed_sp_iscsi_conn_offload(QED_LEADING_HWFN(cdev), con,
+ return qed_sp_iscsi_conn_offload(QED_AFFIN_HWFN(cdev), con,
QED_SPQ_MODE_EBLOCK, NULL);
}
@@ -1351,7 +1349,7 @@ static int qed_iscsi_update_conn(struct qed_dev *cdev,
con->first_seq_length = conn_info->first_seq_length;
con->exp_stat_sn = conn_info->exp_stat_sn;
- return qed_sp_iscsi_conn_update(QED_LEADING_HWFN(cdev), con,
+ return qed_sp_iscsi_conn_update(QED_AFFIN_HWFN(cdev), con,
QED_SPQ_MODE_EBLOCK, NULL);
}
@@ -1366,8 +1364,7 @@ static int qed_iscsi_clear_conn_sq(struct qed_dev *cdev, u32 handle)
return -EINVAL;
}
- return qed_sp_iscsi_conn_clear_sq(QED_LEADING_HWFN(cdev),
- hash_con->con,
+ return qed_sp_iscsi_conn_clear_sq(QED_AFFIN_HWFN(cdev), hash_con->con,
QED_SPQ_MODE_EBLOCK, NULL);
}
@@ -1385,14 +1382,13 @@ static int qed_iscsi_destroy_conn(struct qed_dev *cdev,
hash_con->con->abortive_dsconnect = abrt_conn;
- return qed_sp_iscsi_conn_terminate(QED_LEADING_HWFN(cdev),
- hash_con->con,
+ return qed_sp_iscsi_conn_terminate(QED_AFFIN_HWFN(cdev), hash_con->con,
QED_SPQ_MODE_EBLOCK, NULL);
}
static int qed_iscsi_stats(struct qed_dev *cdev, struct qed_iscsi_stats *stats)
{
- return qed_iscsi_get_stats(QED_LEADING_HWFN(cdev), stats);
+ return qed_iscsi_get_stats(QED_AFFIN_HWFN(cdev), stats);
}
static int qed_iscsi_change_mac(struct qed_dev *cdev,
@@ -1407,8 +1403,7 @@ static int qed_iscsi_change_mac(struct qed_dev *cdev,
return -EINVAL;
}
- return qed_sp_iscsi_mac_update(QED_LEADING_HWFN(cdev),
- hash_con->con,
+ return qed_sp_iscsi_mac_update(QED_AFFIN_HWFN(cdev), hash_con->con,
QED_SPQ_MODE_EBLOCK, NULL);
}
diff --git a/drivers/net/ethernet/qlogic/qed/qed_iwarp.c b/drivers/net/ethernet/qlogic/qed/qed_iwarp.c
index ded556b7bab5..f380fae8799d 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_iwarp.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_iwarp.c
@@ -63,7 +63,12 @@ struct mpa_v2_hdr {
#define MPA_REV2(_mpa_rev) ((_mpa_rev) == MPA_NEGOTIATION_TYPE_ENHANCED)
#define QED_IWARP_INVALID_TCP_CID 0xffffffff
-#define QED_IWARP_RCV_WND_SIZE_DEF (256 * 1024)
+
+#define QED_IWARP_RCV_WND_SIZE_DEF_BB_2P (200 * 1024)
+#define QED_IWARP_RCV_WND_SIZE_DEF_BB_4P (100 * 1024)
+#define QED_IWARP_RCV_WND_SIZE_DEF_AH_2P (150 * 1024)
+#define QED_IWARP_RCV_WND_SIZE_DEF_AH_4P (90 * 1024)
+
#define QED_IWARP_RCV_WND_SIZE_MIN (0xffff)
#define TIMESTAMP_HEADER_SIZE (12)
#define QED_IWARP_MAX_FIN_RT_DEFAULT (2)
@@ -532,7 +537,8 @@ int qed_iwarp_destroy_qp(struct qed_hwfn *p_hwfn, struct qed_rdma_qp *qp)
/* Make sure ep is closed before returning and freeing memory. */
if (ep) {
- while (ep->state != QED_IWARP_EP_CLOSED && wait_count++ < 200)
+ while (READ_ONCE(ep->state) != QED_IWARP_EP_CLOSED &&
+ wait_count++ < 200)
msleep(100);
if (ep->state != QED_IWARP_EP_CLOSED)
@@ -1022,8 +1028,6 @@ qed_iwarp_mpa_complete(struct qed_hwfn *p_hwfn,
params.ep_context = ep;
- ep->state = QED_IWARP_EP_CLOSED;
-
switch (fw_return_code) {
case RDMA_RETURN_OK:
ep->qp->max_rd_atomic_req = ep->cm_info.ord;
@@ -1083,6 +1087,10 @@ qed_iwarp_mpa_complete(struct qed_hwfn *p_hwfn,
break;
}
+ if (fw_return_code != RDMA_RETURN_OK)
+ /* paired with READ_ONCE in destroy_qp */
+ smp_store_release(&ep->state, QED_IWARP_EP_CLOSED);
+
ep->event_cb(ep->cb_context, &params);
/* on passive side, if there is no associated QP (REJECT) we need to
@@ -2528,7 +2536,7 @@ qed_iwarp_ll2_slowpath(void *cxt,
memset(fpdu, 0, sizeof(*fpdu));
}
-static int qed_iwarp_ll2_stop(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+static int qed_iwarp_ll2_stop(struct qed_hwfn *p_hwfn)
{
struct qed_iwarp_info *iwarp_info = &p_hwfn->p_rdma_info->iwarp;
int rc = 0;
@@ -2563,8 +2571,9 @@ static int qed_iwarp_ll2_stop(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
iwarp_info->ll2_mpa_handle = QED_IWARP_HANDLE_INVAL;
}
- qed_llh_remove_mac_filter(p_hwfn,
- p_ptt, p_hwfn->p_rdma_info->iwarp.mac_addr);
+ qed_llh_remove_mac_filter(p_hwfn->cdev, 0,
+ p_hwfn->p_rdma_info->iwarp.mac_addr);
+
return rc;
}
@@ -2609,7 +2618,7 @@ qed_iwarp_ll2_alloc_buffers(struct qed_hwfn *p_hwfn,
static int
qed_iwarp_ll2_start(struct qed_hwfn *p_hwfn,
struct qed_rdma_start_in_params *params,
- struct qed_ptt *p_ptt)
+ u32 rcv_wnd_size)
{
struct qed_iwarp_info *iwarp_info;
struct qed_ll2_acquire_data data;
@@ -2628,7 +2637,7 @@ qed_iwarp_ll2_start(struct qed_hwfn *p_hwfn,
ether_addr_copy(p_hwfn->p_rdma_info->iwarp.mac_addr, params->mac_addr);
- rc = qed_llh_add_mac_filter(p_hwfn, p_ptt, params->mac_addr);
+ rc = qed_llh_add_mac_filter(p_hwfn->cdev, 0, params->mac_addr);
if (rc)
return rc;
@@ -2637,6 +2646,7 @@ qed_iwarp_ll2_start(struct qed_hwfn *p_hwfn,
cbs.rx_release_cb = qed_iwarp_ll2_rel_rx_pkt;
cbs.tx_comp_cb = qed_iwarp_ll2_comp_tx_pkt;
cbs.tx_release_cb = qed_iwarp_ll2_rel_tx_pkt;
+ cbs.slowpath_cb = NULL;
cbs.cookie = p_hwfn;
memset(&data, 0, sizeof(data));
@@ -2653,7 +2663,7 @@ qed_iwarp_ll2_start(struct qed_hwfn *p_hwfn,
rc = qed_ll2_acquire_connection(p_hwfn, &data);
if (rc) {
DP_NOTICE(p_hwfn, "Failed to acquire LL2 connection\n");
- qed_llh_remove_mac_filter(p_hwfn, p_ptt, params->mac_addr);
+ qed_llh_remove_mac_filter(p_hwfn->cdev, 0, params->mac_addr);
return rc;
}
@@ -2675,7 +2685,7 @@ qed_iwarp_ll2_start(struct qed_hwfn *p_hwfn,
data.input.conn_type = QED_LL2_TYPE_OOO;
data.input.mtu = params->max_mtu;
- n_ooo_bufs = (QED_IWARP_MAX_OOO * QED_IWARP_RCV_WND_SIZE_DEF) /
+ n_ooo_bufs = (QED_IWARP_MAX_OOO * rcv_wnd_size) /
iwarp_info->max_mtu;
n_ooo_bufs = min_t(u32, n_ooo_bufs, QED_IWARP_LL2_OOO_MAX_RX_SIZE);
@@ -2708,6 +2718,8 @@ qed_iwarp_ll2_start(struct qed_hwfn *p_hwfn,
data.input.rx_num_desc = n_ooo_bufs * 2;
data.input.tx_num_desc = data.input.rx_num_desc;
data.input.tx_max_bds_per_packet = QED_IWARP_MAX_BDS_PER_FPDU;
+ data.input.tx_tc = PKT_LB_TC;
+ data.input.tx_dest = QED_LL2_TX_DEST_LB;
data.p_connection_handle = &iwarp_info->ll2_mpa_handle;
data.input.secondary_queue = true;
data.cbs = &cbs;
@@ -2757,21 +2769,35 @@ qed_iwarp_ll2_start(struct qed_hwfn *p_hwfn,
&iwarp_info->mpa_buf_list);
return rc;
err:
- qed_iwarp_ll2_stop(p_hwfn, p_ptt);
+ qed_iwarp_ll2_stop(p_hwfn);
return rc;
}
-int qed_iwarp_setup(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt,
+static struct {
+ u32 two_ports;
+ u32 four_ports;
+} qed_iwarp_rcv_wnd_size[MAX_CHIP_IDS] = {
+ {QED_IWARP_RCV_WND_SIZE_DEF_BB_2P, QED_IWARP_RCV_WND_SIZE_DEF_BB_4P},
+ {QED_IWARP_RCV_WND_SIZE_DEF_AH_2P, QED_IWARP_RCV_WND_SIZE_DEF_AH_4P}
+};
+
+int qed_iwarp_setup(struct qed_hwfn *p_hwfn,
struct qed_rdma_start_in_params *params)
{
+ struct qed_dev *cdev = p_hwfn->cdev;
struct qed_iwarp_info *iwarp_info;
+ enum chip_ids chip_id;
u32 rcv_wnd_size;
iwarp_info = &p_hwfn->p_rdma_info->iwarp;
iwarp_info->tcp_flags = QED_IWARP_TS_EN;
- rcv_wnd_size = QED_IWARP_RCV_WND_SIZE_DEF;
+
+ chip_id = QED_IS_BB(cdev) ? CHIP_BB : CHIP_K2;
+ rcv_wnd_size = (qed_device_num_ports(cdev) == 4) ?
+ qed_iwarp_rcv_wnd_size[chip_id].four_ports :
+ qed_iwarp_rcv_wnd_size[chip_id].two_ports;
/* value 0 is used for ilog2(QED_IWARP_RCV_WND_SIZE_MIN) */
iwarp_info->rcv_wnd_scale = ilog2(rcv_wnd_size) -
@@ -2794,10 +2820,10 @@ int qed_iwarp_setup(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt,
qed_iwarp_async_event);
qed_ooo_setup(p_hwfn);
- return qed_iwarp_ll2_start(p_hwfn, params, p_ptt);
+ return qed_iwarp_ll2_start(p_hwfn, params, rcv_wnd_size);
}
-int qed_iwarp_stop(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+int qed_iwarp_stop(struct qed_hwfn *p_hwfn)
{
int rc;
@@ -2808,7 +2834,7 @@ int qed_iwarp_stop(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
qed_spq_unregister_async_cb(p_hwfn, PROTOCOLID_IWARP);
- return qed_iwarp_ll2_stop(p_hwfn, p_ptt);
+ return qed_iwarp_ll2_stop(p_hwfn);
}
static void qed_iwarp_qp_in_error(struct qed_hwfn *p_hwfn,
@@ -2825,7 +2851,9 @@ static void qed_iwarp_qp_in_error(struct qed_hwfn *p_hwfn,
params.status = (fw_return_code == IWARP_QP_IN_ERROR_GOOD_CLOSE) ?
0 : -ECONNRESET;
- ep->state = QED_IWARP_EP_CLOSED;
+ /* paired with READ_ONCE in destroy_qp */
+ smp_store_release(&ep->state, QED_IWARP_EP_CLOSED);
+
spin_lock_bh(&p_hwfn->p_rdma_info->iwarp.iw_lock);
list_del(&ep->list_entry);
spin_unlock_bh(&p_hwfn->p_rdma_info->iwarp.iw_lock);
@@ -2914,7 +2942,8 @@ qed_iwarp_tcp_connect_unsuccessful(struct qed_hwfn *p_hwfn,
params.event = QED_IWARP_EVENT_ACTIVE_COMPLETE;
params.ep_context = ep;
params.cm_info = &ep->cm_info;
- ep->state = QED_IWARP_EP_CLOSED;
+ /* paired with READ_ONCE in destroy_qp */
+ smp_store_release(&ep->state, QED_IWARP_EP_CLOSED);
switch (fw_return_code) {
case IWARP_CONN_ERROR_TCP_CONNECT_INVALID_PACKET:
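
The ep->state hunks above pair smp_store_release() on the closing paths with READ_ONCE() in the destroy-path polling loop, so the writes done while closing the endpoint are published before the state flips to QED_IWARP_EP_CLOSED. The standalone C11 sketch below shows the same publish/poll shape; it uses an acquire load, a slightly stronger portable stand-in for the driver's READ_ONCE().

/* ---- standalone C11 sketch of the publish/poll pairing above ---- */
#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>

enum ep_state { EP_INIT, EP_CLOSED };

static _Atomic enum ep_state state = EP_INIT;
static int ep_result;			/* stands in for the ep fields */

static int closer(void *arg)
{
	(void)arg;
	ep_result = 42;			/* work done before closing */
	/* paired with the acquire load in the polling loop below */
	atomic_store_explicit(&state, EP_CLOSED, memory_order_release);
	return 0;
}

int main(void)
{
	thrd_t t;

	thrd_create(&t, closer, NULL);

	/* paired with the release store in closer() */
	while (atomic_load_explicit(&state, memory_order_acquire) != EP_CLOSED)
		;

	printf("closed, result = %d\n", ep_result);	/* result is visible */
	thrd_join(t, NULL);
	return 0;
}
/* ---- end of sketch ---- */
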
diff --git a/drivers/net/ethernet/qlogic/qed/qed_iwarp.h b/drivers/net/ethernet/qlogic/qed/qed_iwarp.h
index 7ac959038324..c1b2057d23b8 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_iwarp.h
+++ b/drivers/net/ethernet/qlogic/qed/qed_iwarp.h
@@ -183,13 +183,13 @@ struct qed_iwarp_listener {
int qed_iwarp_alloc(struct qed_hwfn *p_hwfn);
-int qed_iwarp_setup(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt,
+int qed_iwarp_setup(struct qed_hwfn *p_hwfn,
struct qed_rdma_start_in_params *params);
void qed_iwarp_init_fw_ramrod(struct qed_hwfn *p_hwfn,
struct iwarp_init_func_ramrod_data *p_ramrod);
-int qed_iwarp_stop(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt);
+int qed_iwarp_stop(struct qed_hwfn *p_hwfn);
void qed_iwarp_resc_free(struct qed_hwfn *p_hwfn);
diff --git a/drivers/net/ethernet/qlogic/qed/qed_l2.c b/drivers/net/ethernet/qlogic/qed/qed_l2.c
index 57641728df69..9f36e7948222 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_l2.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_l2.c
@@ -2111,7 +2111,7 @@ int qed_get_rxq_coalesce(struct qed_hwfn *p_hwfn,
rc = qed_dmae_grc2host(p_hwfn, p_ptt, CAU_REG_SB_VAR_MEMORY +
p_cid->sb_igu_id * sizeof(u64),
- (u64)(uintptr_t)&sb_entry, 2, 0);
+ (u64)(uintptr_t)&sb_entry, 2, NULL);
if (rc) {
DP_ERR(p_hwfn, "dmae_grc2host failed %d\n", rc);
return rc;
@@ -2144,7 +2144,7 @@ int qed_get_txq_coalesce(struct qed_hwfn *p_hwfn,
rc = qed_dmae_grc2host(p_hwfn, p_ptt, CAU_REG_SB_VAR_MEMORY +
p_cid->sb_igu_id * sizeof(u64),
- (u64)(uintptr_t)&sb_entry, 2, 0);
+ (u64)(uintptr_t)&sb_entry, 2, NULL);
if (rc) {
DP_ERR(p_hwfn, "dmae_grc2host failed %d\n", rc);
return rc;
diff --git a/drivers/net/ethernet/qlogic/qed/qed_ll2.c b/drivers/net/ethernet/qlogic/qed/qed_ll2.c
index b5f419b71287..19a1a58d60f8 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_ll2.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_ll2.c
@@ -239,9 +239,8 @@ out_post1:
buffer->phys_addr = new_phys_addr;
out_post:
- rc = qed_ll2_post_rx_buffer(QED_LEADING_HWFN(cdev), cdev->ll2->handle,
- buffer->phys_addr, 0, buffer, 1);
-
+ rc = qed_ll2_post_rx_buffer(p_hwfn, cdev->ll2->handle,
+ buffer->phys_addr, 0, buffer, 1);
if (rc)
qed_ll2_dealloc_buffer(cdev, buffer);
}
@@ -926,16 +925,15 @@ static int qed_ll2_lb_txq_completion(struct qed_hwfn *p_hwfn, void *p_cookie)
return 0;
}
-static void qed_ll2_stop_ooo(struct qed_dev *cdev)
+static void qed_ll2_stop_ooo(struct qed_hwfn *p_hwfn)
{
- struct qed_hwfn *hwfn = QED_LEADING_HWFN(cdev);
- u8 *handle = &hwfn->pf_params.iscsi_pf_params.ll2_ooo_queue_id;
+ u8 *handle = &p_hwfn->pf_params.iscsi_pf_params.ll2_ooo_queue_id;
- DP_VERBOSE(cdev, QED_MSG_STORAGE, "Stopping LL2 OOO queue [%02x]\n",
- *handle);
+ DP_VERBOSE(p_hwfn, (QED_MSG_STORAGE | QED_MSG_LL2),
+ "Stopping LL2 OOO queue [%02x]\n", *handle);
- qed_ll2_terminate_connection(hwfn, *handle);
- qed_ll2_release_connection(hwfn, *handle);
+ qed_ll2_terminate_connection(p_hwfn, *handle);
+ qed_ll2_release_connection(p_hwfn, *handle);
*handle = QED_LL2_UNUSED_HANDLE;
}
@@ -1574,12 +1572,12 @@ int qed_ll2_establish_connection(void *cxt, u8 connection_handle)
if (p_ll2_conn->input.conn_type == QED_LL2_TYPE_FCOE) {
if (!test_bit(QED_MF_UFP_SPECIFIC, &p_hwfn->cdev->mf_bits))
- qed_llh_add_protocol_filter(p_hwfn, p_ptt,
- ETH_P_FCOE, 0,
- QED_LLH_FILTER_ETHERTYPE);
- qed_llh_add_protocol_filter(p_hwfn, p_ptt,
- ETH_P_FIP, 0,
- QED_LLH_FILTER_ETHERTYPE);
+ qed_llh_add_protocol_filter(p_hwfn->cdev, 0,
+ QED_LLH_FILTER_ETHERTYPE,
+ ETH_P_FCOE, 0);
+ qed_llh_add_protocol_filter(p_hwfn->cdev, 0,
+ QED_LLH_FILTER_ETHERTYPE,
+ ETH_P_FIP, 0);
}
out:
@@ -1980,12 +1978,12 @@ int qed_ll2_terminate_connection(void *cxt, u8 connection_handle)
if (p_ll2_conn->input.conn_type == QED_LL2_TYPE_FCOE) {
if (!test_bit(QED_MF_UFP_SPECIFIC, &p_hwfn->cdev->mf_bits))
- qed_llh_remove_protocol_filter(p_hwfn, p_ptt,
- ETH_P_FCOE, 0,
- QED_LLH_FILTER_ETHERTYPE);
- qed_llh_remove_protocol_filter(p_hwfn, p_ptt,
- ETH_P_FIP, 0,
- QED_LLH_FILTER_ETHERTYPE);
+ qed_llh_remove_protocol_filter(p_hwfn->cdev, 0,
+ QED_LLH_FILTER_ETHERTYPE,
+ ETH_P_FCOE, 0);
+ qed_llh_remove_protocol_filter(p_hwfn->cdev, 0,
+ QED_LLH_FILTER_ETHERTYPE,
+ ETH_P_FIP, 0);
}
out:
@@ -2086,12 +2084,12 @@ static void _qed_ll2_get_port_stats(struct qed_hwfn *p_hwfn,
TSTORM_LL2_PORT_STAT_OFFSET(MFW_PORT(p_hwfn)),
sizeof(port_stats));
- p_stats->gsi_invalid_hdr = HILO_64_REGPAIR(port_stats.gsi_invalid_hdr);
- p_stats->gsi_invalid_pkt_length =
+ p_stats->gsi_invalid_hdr += HILO_64_REGPAIR(port_stats.gsi_invalid_hdr);
+ p_stats->gsi_invalid_pkt_length +=
HILO_64_REGPAIR(port_stats.gsi_invalid_pkt_length);
- p_stats->gsi_unsupported_pkt_typ =
+ p_stats->gsi_unsupported_pkt_typ +=
HILO_64_REGPAIR(port_stats.gsi_unsupported_pkt_typ);
- p_stats->gsi_crcchksm_error =
+ p_stats->gsi_crcchksm_error +=
HILO_64_REGPAIR(port_stats.gsi_crcchksm_error);
}
@@ -2109,9 +2107,9 @@ static void _qed_ll2_get_tstats(struct qed_hwfn *p_hwfn,
CORE_LL2_TSTORM_PER_QUEUE_STAT_OFFSET(qid);
qed_memcpy_from(p_hwfn, p_ptt, &tstats, tstats_addr, sizeof(tstats));
- p_stats->packet_too_big_discard =
+ p_stats->packet_too_big_discard +=
HILO_64_REGPAIR(tstats.packet_too_big_discard);
- p_stats->no_buff_discard = HILO_64_REGPAIR(tstats.no_buff_discard);
+ p_stats->no_buff_discard += HILO_64_REGPAIR(tstats.no_buff_discard);
}
static void _qed_ll2_get_ustats(struct qed_hwfn *p_hwfn,
@@ -2128,12 +2126,12 @@ static void _qed_ll2_get_ustats(struct qed_hwfn *p_hwfn,
CORE_LL2_USTORM_PER_QUEUE_STAT_OFFSET(qid);
qed_memcpy_from(p_hwfn, p_ptt, &ustats, ustats_addr, sizeof(ustats));
- p_stats->rcv_ucast_bytes = HILO_64_REGPAIR(ustats.rcv_ucast_bytes);
- p_stats->rcv_mcast_bytes = HILO_64_REGPAIR(ustats.rcv_mcast_bytes);
- p_stats->rcv_bcast_bytes = HILO_64_REGPAIR(ustats.rcv_bcast_bytes);
- p_stats->rcv_ucast_pkts = HILO_64_REGPAIR(ustats.rcv_ucast_pkts);
- p_stats->rcv_mcast_pkts = HILO_64_REGPAIR(ustats.rcv_mcast_pkts);
- p_stats->rcv_bcast_pkts = HILO_64_REGPAIR(ustats.rcv_bcast_pkts);
+ p_stats->rcv_ucast_bytes += HILO_64_REGPAIR(ustats.rcv_ucast_bytes);
+ p_stats->rcv_mcast_bytes += HILO_64_REGPAIR(ustats.rcv_mcast_bytes);
+ p_stats->rcv_bcast_bytes += HILO_64_REGPAIR(ustats.rcv_bcast_bytes);
+ p_stats->rcv_ucast_pkts += HILO_64_REGPAIR(ustats.rcv_ucast_pkts);
+ p_stats->rcv_mcast_pkts += HILO_64_REGPAIR(ustats.rcv_mcast_pkts);
+ p_stats->rcv_bcast_pkts += HILO_64_REGPAIR(ustats.rcv_bcast_pkts);
}
static void _qed_ll2_get_pstats(struct qed_hwfn *p_hwfn,
@@ -2150,23 +2148,21 @@ static void _qed_ll2_get_pstats(struct qed_hwfn *p_hwfn,
CORE_LL2_PSTORM_PER_QUEUE_STAT_OFFSET(stats_id);
qed_memcpy_from(p_hwfn, p_ptt, &pstats, pstats_addr, sizeof(pstats));
- p_stats->sent_ucast_bytes = HILO_64_REGPAIR(pstats.sent_ucast_bytes);
- p_stats->sent_mcast_bytes = HILO_64_REGPAIR(pstats.sent_mcast_bytes);
- p_stats->sent_bcast_bytes = HILO_64_REGPAIR(pstats.sent_bcast_bytes);
- p_stats->sent_ucast_pkts = HILO_64_REGPAIR(pstats.sent_ucast_pkts);
- p_stats->sent_mcast_pkts = HILO_64_REGPAIR(pstats.sent_mcast_pkts);
- p_stats->sent_bcast_pkts = HILO_64_REGPAIR(pstats.sent_bcast_pkts);
+ p_stats->sent_ucast_bytes += HILO_64_REGPAIR(pstats.sent_ucast_bytes);
+ p_stats->sent_mcast_bytes += HILO_64_REGPAIR(pstats.sent_mcast_bytes);
+ p_stats->sent_bcast_bytes += HILO_64_REGPAIR(pstats.sent_bcast_bytes);
+ p_stats->sent_ucast_pkts += HILO_64_REGPAIR(pstats.sent_ucast_pkts);
+ p_stats->sent_mcast_pkts += HILO_64_REGPAIR(pstats.sent_mcast_pkts);
+ p_stats->sent_bcast_pkts += HILO_64_REGPAIR(pstats.sent_bcast_pkts);
}
-int qed_ll2_get_stats(void *cxt,
- u8 connection_handle, struct qed_ll2_stats *p_stats)
+static int __qed_ll2_get_stats(void *cxt, u8 connection_handle,
+ struct qed_ll2_stats *p_stats)
{
struct qed_hwfn *p_hwfn = cxt;
struct qed_ll2_info *p_ll2_conn = NULL;
struct qed_ptt *p_ptt;
- memset(p_stats, 0, sizeof(*p_stats));
-
if ((connection_handle >= QED_MAX_NUM_OF_LL2_CONNECTIONS) ||
!p_hwfn->p_ll2_info)
return -EINVAL;
@@ -2181,15 +2177,26 @@ int qed_ll2_get_stats(void *cxt,
if (p_ll2_conn->input.gsi_enable)
_qed_ll2_get_port_stats(p_hwfn, p_ptt, p_stats);
+
_qed_ll2_get_tstats(p_hwfn, p_ptt, p_ll2_conn, p_stats);
+
_qed_ll2_get_ustats(p_hwfn, p_ptt, p_ll2_conn, p_stats);
+
if (p_ll2_conn->tx_stats_en)
_qed_ll2_get_pstats(p_hwfn, p_ptt, p_ll2_conn, p_stats);
qed_ptt_release(p_hwfn, p_ptt);
+
return 0;
}
+int qed_ll2_get_stats(void *cxt,
+ u8 connection_handle, struct qed_ll2_stats *p_stats)
+{
+ memset(p_stats, 0, sizeof(*p_stats));
+ return __qed_ll2_get_stats(cxt, connection_handle, p_stats);
+}
+
static void qed_ll2b_release_rx_packet(void *cxt,
u8 connection_handle,
void *cookie,
@@ -2216,7 +2223,7 @@ struct qed_ll2_cbs ll2_cbs = {
.tx_release_cb = &qed_ll2b_complete_tx_packet,
};
-static void qed_ll2_set_conn_data(struct qed_dev *cdev,
+static void qed_ll2_set_conn_data(struct qed_hwfn *p_hwfn,
struct qed_ll2_acquire_data *data,
struct qed_ll2_params *params,
enum qed_ll2_conn_type conn_type,
@@ -2232,7 +2239,7 @@ static void qed_ll2_set_conn_data(struct qed_dev *cdev,
data->input.tx_num_desc = QED_LL2_TX_SIZE;
data->p_connection_handle = handle;
data->cbs = &ll2_cbs;
- ll2_cbs.cookie = QED_LEADING_HWFN(cdev);
+ ll2_cbs.cookie = p_hwfn;
if (lb) {
data->input.tx_tc = PKT_LB_TC;
@@ -2243,74 +2250,102 @@ static void qed_ll2_set_conn_data(struct qed_dev *cdev,
}
}
-static int qed_ll2_start_ooo(struct qed_dev *cdev,
+static int qed_ll2_start_ooo(struct qed_hwfn *p_hwfn,
struct qed_ll2_params *params)
{
- struct qed_hwfn *hwfn = QED_LEADING_HWFN(cdev);
- u8 *handle = &hwfn->pf_params.iscsi_pf_params.ll2_ooo_queue_id;
+ u8 *handle = &p_hwfn->pf_params.iscsi_pf_params.ll2_ooo_queue_id;
struct qed_ll2_acquire_data data;
int rc;
- qed_ll2_set_conn_data(cdev, &data, params,
+ qed_ll2_set_conn_data(p_hwfn, &data, params,
QED_LL2_TYPE_OOO, handle, true);
- rc = qed_ll2_acquire_connection(hwfn, &data);
+ rc = qed_ll2_acquire_connection(p_hwfn, &data);
if (rc) {
- DP_INFO(cdev, "Failed to acquire LL2 OOO connection\n");
+ DP_INFO(p_hwfn, "Failed to acquire LL2 OOO connection\n");
goto out;
}
- rc = qed_ll2_establish_connection(hwfn, *handle);
+ rc = qed_ll2_establish_connection(p_hwfn, *handle);
if (rc) {
- DP_INFO(cdev, "Failed to establist LL2 OOO connection\n");
+ DP_INFO(p_hwfn, "Failed to establish LL2 OOO connection\n");
goto fail;
}
return 0;
fail:
- qed_ll2_release_connection(hwfn, *handle);
+ qed_ll2_release_connection(p_hwfn, *handle);
out:
*handle = QED_LL2_UNUSED_HANDLE;
return rc;
}
-static int qed_ll2_start(struct qed_dev *cdev, struct qed_ll2_params *params)
+static bool qed_ll2_is_storage_eng1(struct qed_dev *cdev)
{
- struct qed_ll2_buffer *buffer, *tmp_buffer;
- enum qed_ll2_conn_type conn_type;
- struct qed_ll2_acquire_data data;
- struct qed_ptt *p_ptt;
- int rc, i;
+ return (QED_IS_FCOE_PERSONALITY(QED_LEADING_HWFN(cdev)) ||
+ QED_IS_ISCSI_PERSONALITY(QED_LEADING_HWFN(cdev))) &&
+ (QED_AFFIN_HWFN(cdev) != QED_LEADING_HWFN(cdev));
+}
+static int __qed_ll2_stop(struct qed_hwfn *p_hwfn)
+{
+ struct qed_dev *cdev = p_hwfn->cdev;
+ int rc;
- /* Initialize LL2 locks & lists */
- INIT_LIST_HEAD(&cdev->ll2->list);
- spin_lock_init(&cdev->ll2->lock);
- cdev->ll2->rx_size = NET_SKB_PAD + ETH_HLEN +
- L1_CACHE_BYTES + params->mtu;
+ rc = qed_ll2_terminate_connection(p_hwfn, cdev->ll2->handle);
+ if (rc)
+ DP_INFO(cdev, "Failed to terminate LL2 connection\n");
- /*Allocate memory for LL2 */
- DP_INFO(cdev, "Allocating LL2 buffers of size %08x bytes\n",
- cdev->ll2->rx_size);
- for (i = 0; i < QED_LL2_RX_SIZE; i++) {
- buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
- if (!buffer) {
- DP_INFO(cdev, "Failed to allocate LL2 buffers\n");
- goto fail;
- }
+ qed_ll2_release_connection(p_hwfn, cdev->ll2->handle);
- rc = qed_ll2_alloc_buffer(cdev, (u8 **)&buffer->data,
- &buffer->phys_addr);
- if (rc) {
- kfree(buffer);
- goto fail;
- }
+ return rc;
+}
- list_add_tail(&buffer->list, &cdev->ll2->list);
+static int qed_ll2_stop(struct qed_dev *cdev)
+{
+ bool b_is_storage_eng1 = qed_ll2_is_storage_eng1(cdev);
+ struct qed_hwfn *p_hwfn = QED_AFFIN_HWFN(cdev);
+ int rc = 0, rc2 = 0;
+
+ if (cdev->ll2->handle == QED_LL2_UNUSED_HANDLE)
+ return 0;
+
+ qed_llh_remove_mac_filter(cdev, 0, cdev->ll2_mac_address);
+ eth_zero_addr(cdev->ll2_mac_address);
+
+ if (QED_IS_ISCSI_PERSONALITY(p_hwfn))
+ qed_ll2_stop_ooo(p_hwfn);
+
+ /* In CMT mode, LL2 is always started on engine 0 for a storage PF */
+ if (b_is_storage_eng1) {
+ rc2 = __qed_ll2_stop(QED_LEADING_HWFN(cdev));
+ if (rc2)
+ DP_NOTICE(QED_LEADING_HWFN(cdev),
+ "Failed to stop LL2 on engine 0\n");
}
- switch (QED_LEADING_HWFN(cdev)->hw_info.personality) {
+ rc = __qed_ll2_stop(p_hwfn);
+ if (rc)
+ DP_NOTICE(p_hwfn, "Failed to stop LL2\n");
+
+ qed_ll2_kill_buffers(cdev);
+
+ cdev->ll2->handle = QED_LL2_UNUSED_HANDLE;
+
+ return rc | rc2;
+}
+
+static int __qed_ll2_start(struct qed_hwfn *p_hwfn,
+ struct qed_ll2_params *params)
+{
+ struct qed_ll2_buffer *buffer, *tmp_buffer;
+ struct qed_dev *cdev = p_hwfn->cdev;
+ enum qed_ll2_conn_type conn_type;
+ struct qed_ll2_acquire_data data;
+ int rc, rx_cnt;
+
+ switch (p_hwfn->hw_info.personality) {
case QED_PCI_FCOE:
conn_type = QED_LL2_TYPE_FCOE;
break;
@@ -2321,33 +2356,34 @@ static int qed_ll2_start(struct qed_dev *cdev, struct qed_ll2_params *params)
conn_type = QED_LL2_TYPE_ROCE;
break;
default:
+
conn_type = QED_LL2_TYPE_TEST;
}
- qed_ll2_set_conn_data(cdev, &data, params, conn_type,
+ qed_ll2_set_conn_data(p_hwfn, &data, params, conn_type,
&cdev->ll2->handle, false);
- rc = qed_ll2_acquire_connection(QED_LEADING_HWFN(cdev), &data);
+ rc = qed_ll2_acquire_connection(p_hwfn, &data);
if (rc) {
- DP_INFO(cdev, "Failed to acquire LL2 connection\n");
- goto fail;
+ DP_INFO(p_hwfn, "Failed to acquire LL2 connection\n");
+ return rc;
}
- rc = qed_ll2_establish_connection(QED_LEADING_HWFN(cdev),
- cdev->ll2->handle);
+ rc = qed_ll2_establish_connection(p_hwfn, cdev->ll2->handle);
if (rc) {
- DP_INFO(cdev, "Failed to establish LL2 connection\n");
- goto release_fail;
+ DP_INFO(p_hwfn, "Failed to establish LL2 connection\n");
+ goto release_conn;
}
/* Post all Rx buffers to FW */
spin_lock_bh(&cdev->ll2->lock);
+ rx_cnt = cdev->ll2->rx_cnt;
list_for_each_entry_safe(buffer, tmp_buffer, &cdev->ll2->list, list) {
- rc = qed_ll2_post_rx_buffer(QED_LEADING_HWFN(cdev),
+ rc = qed_ll2_post_rx_buffer(p_hwfn,
cdev->ll2->handle,
buffer->phys_addr, 0, buffer, 1);
if (rc) {
- DP_INFO(cdev,
+ DP_INFO(p_hwfn,
"Failed to post an Rx buffer; Deleting it\n");
dma_unmap_single(&cdev->pdev->dev, buffer->phys_addr,
cdev->ll2->rx_size, DMA_FROM_DEVICE);
@@ -2355,100 +2391,127 @@ static int qed_ll2_start(struct qed_dev *cdev, struct qed_ll2_params *params)
list_del(&buffer->list);
kfree(buffer);
} else {
- cdev->ll2->rx_cnt++;
+ rx_cnt++;
}
}
spin_unlock_bh(&cdev->ll2->lock);
- if (!cdev->ll2->rx_cnt) {
- DP_INFO(cdev, "Failed passing even a single Rx buffer\n");
- goto release_terminate;
+ if (rx_cnt == cdev->ll2->rx_cnt) {
+ DP_NOTICE(p_hwfn, "Failed passing even a single Rx buffer\n");
+ goto terminate_conn;
}
+ cdev->ll2->rx_cnt = rx_cnt;
+
+ return 0;
+
+terminate_conn:
+ qed_ll2_terminate_connection(p_hwfn, cdev->ll2->handle);
+release_conn:
+ qed_ll2_release_connection(p_hwfn, cdev->ll2->handle);
+ return rc;
+}
+
+static int qed_ll2_start(struct qed_dev *cdev, struct qed_ll2_params *params)
+{
+ bool b_is_storage_eng1 = qed_ll2_is_storage_eng1(cdev);
+ struct qed_hwfn *p_hwfn = QED_AFFIN_HWFN(cdev);
+ struct qed_ll2_buffer *buffer;
+ int rx_num_desc, i, rc;
if (!is_valid_ether_addr(params->ll2_mac_address)) {
- DP_INFO(cdev, "Invalid Ethernet address\n");
- goto release_terminate;
+ DP_NOTICE(cdev, "Invalid Ethernet address\n");
+ return -EINVAL;
}
- if (QED_LEADING_HWFN(cdev)->hw_info.personality == QED_PCI_ISCSI) {
- DP_VERBOSE(cdev, QED_MSG_STORAGE, "Starting OOO LL2 queue\n");
- rc = qed_ll2_start_ooo(cdev, params);
+ WARN_ON(!cdev->ll2->cbs);
+
+ /* Initialize LL2 locks & lists */
+ INIT_LIST_HEAD(&cdev->ll2->list);
+ spin_lock_init(&cdev->ll2->lock);
+
+ cdev->ll2->rx_size = NET_SKB_PAD + ETH_HLEN +
+ L1_CACHE_BYTES + params->mtu;
+
+ /* Allocate memory for LL2.
+	 * In CMT mode, in case of a storage PF which is affinitized to engine 1,
+ * LL2 is started also on engine 0 and thus we need twofold buffers.
+ */
+ rx_num_desc = QED_LL2_RX_SIZE * (b_is_storage_eng1 ? 2 : 1);
+ DP_INFO(cdev, "Allocating %d LL2 buffers of size %08x bytes\n",
+ rx_num_desc, cdev->ll2->rx_size);
+ for (i = 0; i < rx_num_desc; i++) {
+ buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
+ if (!buffer) {
+ DP_INFO(cdev, "Failed to allocate LL2 buffers\n");
+ rc = -ENOMEM;
+ goto err0;
+ }
+
+ rc = qed_ll2_alloc_buffer(cdev, (u8 **)&buffer->data,
+ &buffer->phys_addr);
if (rc) {
- DP_INFO(cdev,
- "Failed to initialize the OOO LL2 queue\n");
- goto release_terminate;
+ kfree(buffer);
+ goto err0;
}
- }
- p_ptt = qed_ptt_acquire(QED_LEADING_HWFN(cdev));
- if (!p_ptt) {
- DP_INFO(cdev, "Failed to acquire PTT\n");
- goto release_terminate;
+ list_add_tail(&buffer->list, &cdev->ll2->list);
}
- rc = qed_llh_add_mac_filter(QED_LEADING_HWFN(cdev), p_ptt,
- params->ll2_mac_address);
- qed_ptt_release(QED_LEADING_HWFN(cdev), p_ptt);
+ rc = __qed_ll2_start(p_hwfn, params);
if (rc) {
- DP_ERR(cdev, "Failed to allocate LLH filter\n");
- goto release_terminate_all;
+ DP_NOTICE(cdev, "Failed to start LL2\n");
+ goto err0;
}
- ether_addr_copy(cdev->ll2_mac_address, params->ll2_mac_address);
- return 0;
-
-release_terminate_all:
-
-release_terminate:
- qed_ll2_terminate_connection(QED_LEADING_HWFN(cdev), cdev->ll2->handle);
-release_fail:
- qed_ll2_release_connection(QED_LEADING_HWFN(cdev), cdev->ll2->handle);
-fail:
- qed_ll2_kill_buffers(cdev);
- cdev->ll2->handle = QED_LL2_UNUSED_HANDLE;
- return -EINVAL;
-}
-
-static int qed_ll2_stop(struct qed_dev *cdev)
-{
- struct qed_ptt *p_ptt;
- int rc;
-
- if (cdev->ll2->handle == QED_LL2_UNUSED_HANDLE)
- return 0;
+ /* In CMT mode, always need to start LL2 on engine 0 for a storage PF,
+	 * since broadcast/multicast packets are routed to engine 0.
+ */
+ if (b_is_storage_eng1) {
+ rc = __qed_ll2_start(QED_LEADING_HWFN(cdev), params);
+ if (rc) {
+ DP_NOTICE(QED_LEADING_HWFN(cdev),
+ "Failed to start LL2 on engine 0\n");
+ goto err1;
+ }
+ }
- p_ptt = qed_ptt_acquire(QED_LEADING_HWFN(cdev));
- if (!p_ptt) {
- DP_INFO(cdev, "Failed to acquire PTT\n");
- goto fail;
+ if (QED_IS_ISCSI_PERSONALITY(p_hwfn)) {
+ DP_VERBOSE(cdev, QED_MSG_STORAGE, "Starting OOO LL2 queue\n");
+ rc = qed_ll2_start_ooo(p_hwfn, params);
+ if (rc) {
+ DP_NOTICE(cdev, "Failed to start OOO LL2\n");
+ goto err2;
+ }
}
- qed_llh_remove_mac_filter(QED_LEADING_HWFN(cdev), p_ptt,
- cdev->ll2_mac_address);
- qed_ptt_release(QED_LEADING_HWFN(cdev), p_ptt);
- eth_zero_addr(cdev->ll2_mac_address);
+ rc = qed_llh_add_mac_filter(cdev, 0, params->ll2_mac_address);
+ if (rc) {
+ DP_NOTICE(cdev, "Failed to add an LLH filter\n");
+ goto err3;
+ }
- if (QED_LEADING_HWFN(cdev)->hw_info.personality == QED_PCI_ISCSI)
- qed_ll2_stop_ooo(cdev);
+ ether_addr_copy(cdev->ll2_mac_address, params->ll2_mac_address);
- rc = qed_ll2_terminate_connection(QED_LEADING_HWFN(cdev),
- cdev->ll2->handle);
- if (rc)
- DP_INFO(cdev, "Failed to terminate LL2 connection\n");
+ return 0;
+err3:
+ if (QED_IS_ISCSI_PERSONALITY(p_hwfn))
+ qed_ll2_stop_ooo(p_hwfn);
+err2:
+ if (b_is_storage_eng1)
+ __qed_ll2_stop(QED_LEADING_HWFN(cdev));
+err1:
+ __qed_ll2_stop(p_hwfn);
+err0:
qed_ll2_kill_buffers(cdev);
-
- qed_ll2_release_connection(QED_LEADING_HWFN(cdev), cdev->ll2->handle);
cdev->ll2->handle = QED_LL2_UNUSED_HANDLE;
-
return rc;
-fail:
- return -EINVAL;
}
static int qed_ll2_start_xmit(struct qed_dev *cdev, struct sk_buff *skb,
unsigned long xmit_flags)
{
+ struct qed_hwfn *p_hwfn = QED_AFFIN_HWFN(cdev);
struct qed_ll2_tx_pkt_info pkt;
const skb_frag_t *frag;
u8 flags = 0, nr_frags;
@@ -2506,7 +2569,7 @@ static int qed_ll2_start_xmit(struct qed_dev *cdev, struct sk_buff *skb,
* routine may run and free the SKB, so no dereferencing the SKB
* beyond this point unless skb has any fragments.
*/
- rc = qed_ll2_prepare_tx_packet(&cdev->hwfns[0], cdev->ll2->handle,
+ rc = qed_ll2_prepare_tx_packet(p_hwfn, cdev->ll2->handle,
&pkt, 1);
if (rc)
goto err;
@@ -2524,13 +2587,13 @@ static int qed_ll2_start_xmit(struct qed_dev *cdev, struct sk_buff *skb,
goto err;
}
- rc = qed_ll2_set_fragment_of_tx_packet(QED_LEADING_HWFN(cdev),
+ rc = qed_ll2_set_fragment_of_tx_packet(p_hwfn,
cdev->ll2->handle,
mapping,
skb_frag_size(frag));
/* if failed not much to do here, partial packet has been posted
- * we can't free memory, will need to wait for completion.
+ * we can't free memory, will need to wait for completion
*/
if (rc)
goto err2;
@@ -2540,18 +2603,37 @@ static int qed_ll2_start_xmit(struct qed_dev *cdev, struct sk_buff *skb,
err:
dma_unmap_single(&cdev->pdev->dev, mapping, skb->len, DMA_TO_DEVICE);
-
err2:
return rc;
}
static int qed_ll2_stats(struct qed_dev *cdev, struct qed_ll2_stats *stats)
{
+ bool b_is_storage_eng1 = qed_ll2_is_storage_eng1(cdev);
+ struct qed_hwfn *p_hwfn = QED_AFFIN_HWFN(cdev);
+ int rc;
+
if (!cdev->ll2)
return -EINVAL;
- return qed_ll2_get_stats(QED_LEADING_HWFN(cdev),
- cdev->ll2->handle, stats);
+ rc = qed_ll2_get_stats(p_hwfn, cdev->ll2->handle, stats);
+ if (rc) {
+ DP_NOTICE(p_hwfn, "Failed to get LL2 stats\n");
+ return rc;
+ }
+
+ /* In CMT mode, LL2 is always started on engine 0 for a storage PF */
+ if (b_is_storage_eng1) {
+ rc = __qed_ll2_get_stats(QED_LEADING_HWFN(cdev),
+ cdev->ll2->handle, stats);
+ if (rc) {
+ DP_NOTICE(QED_LEADING_HWFN(cdev),
+ "Failed to get LL2 stats on engine 0\n");
+ return rc;
+ }
+ }
+
+ return 0;
}
const struct qed_ll2_ops qed_ll2_ops_pass = {
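
The stats hunks above turn every '=' into '+=' and split qed_ll2_get_stats() into a wrapper that zeroes the structure once and an internal __qed_ll2_get_stats() that only accumulates; qed_ll2_stats() can then call the helper a second time for the engine-0 connection that a storage PF affinitized to engine 1 also runs. A minimal standalone sketch of that zero-once/accumulate-per-engine pattern:

/* ---- standalone sketch, not driver code ---- */
#include <stdio.h>
#include <string.h>

struct ll2_stats {
	unsigned long long rcv_pkts;
	unsigned long long sent_pkts;
};

/* Accumulates into *out, mirroring __qed_ll2_get_stats(): no memset here,
 * so it can be called once per engine.
 */
static void engine_get_stats(int engine, struct ll2_stats *out)
{
	/* Stand-in values; the driver reads these from storm memory. */
	out->rcv_pkts += 100ULL * (engine + 1);
	out->sent_pkts += 10ULL * (engine + 1);
}

/* Mirrors the exported wrapper: zero exactly once, then sum each engine. */
static void get_stats(struct ll2_stats *out, int num_engines)
{
	int i;

	memset(out, 0, sizeof(*out));
	for (i = 0; i < num_engines; i++)
		engine_get_stats(i, out);
}

int main(void)
{
	struct ll2_stats s;

	get_stats(&s, 2);
	printf("rx %llu tx %llu\n", s.rcv_pkts, s.sent_pkts);
	return 0;
}
/* ---- end of sketch ---- */
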
diff --git a/drivers/net/ethernet/qlogic/qed/qed_main.c b/drivers/net/ethernet/qlogic/qed/qed_main.c
index 6de23b56b294..829dd60ab937 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_main.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_main.c
@@ -48,6 +48,7 @@
#include <linux/crc32.h>
#include <linux/qed/qed_if.h>
#include <linux/qed/qed_ll2_if.h>
+#include <net/devlink.h>
#include "qed.h"
#include "qed_sriov.h"
@@ -342,6 +343,107 @@ static int qed_set_power_state(struct qed_dev *cdev, pci_power_t state)
return 0;
}
+struct qed_devlink {
+ struct qed_dev *cdev;
+};
+
+enum qed_devlink_param_id {
+ QED_DEVLINK_PARAM_ID_BASE = DEVLINK_PARAM_GENERIC_ID_MAX,
+ QED_DEVLINK_PARAM_ID_IWARP_CMT,
+};
+
+static int qed_dl_param_get(struct devlink *dl, u32 id,
+ struct devlink_param_gset_ctx *ctx)
+{
+ struct qed_devlink *qed_dl;
+ struct qed_dev *cdev;
+
+ qed_dl = devlink_priv(dl);
+ cdev = qed_dl->cdev;
+ ctx->val.vbool = cdev->iwarp_cmt;
+
+ return 0;
+}
+
+static int qed_dl_param_set(struct devlink *dl, u32 id,
+ struct devlink_param_gset_ctx *ctx)
+{
+ struct qed_devlink *qed_dl;
+ struct qed_dev *cdev;
+
+ qed_dl = devlink_priv(dl);
+ cdev = qed_dl->cdev;
+ cdev->iwarp_cmt = ctx->val.vbool;
+
+ return 0;
+}
+
+static const struct devlink_param qed_devlink_params[] = {
+ DEVLINK_PARAM_DRIVER(QED_DEVLINK_PARAM_ID_IWARP_CMT,
+ "iwarp_cmt", DEVLINK_PARAM_TYPE_BOOL,
+ BIT(DEVLINK_PARAM_CMODE_RUNTIME),
+ qed_dl_param_get, qed_dl_param_set, NULL),
+};
+
+static const struct devlink_ops qed_dl_ops;
+
+static int qed_devlink_register(struct qed_dev *cdev)
+{
+ union devlink_param_value value;
+ struct qed_devlink *qed_dl;
+ struct devlink *dl;
+ int rc;
+
+ dl = devlink_alloc(&qed_dl_ops, sizeof(*qed_dl));
+ if (!dl)
+ return -ENOMEM;
+
+ qed_dl = devlink_priv(dl);
+
+ cdev->dl = dl;
+ qed_dl->cdev = cdev;
+
+ rc = devlink_register(dl, &cdev->pdev->dev);
+ if (rc)
+ goto err_free;
+
+ rc = devlink_params_register(dl, qed_devlink_params,
+ ARRAY_SIZE(qed_devlink_params));
+ if (rc)
+ goto err_unregister;
+
+ value.vbool = false;
+ devlink_param_driverinit_value_set(dl,
+ QED_DEVLINK_PARAM_ID_IWARP_CMT,
+ value);
+
+ devlink_params_publish(dl);
+ cdev->iwarp_cmt = false;
+
+ return 0;
+
+err_unregister:
+ devlink_unregister(dl);
+
+err_free:
+ cdev->dl = NULL;
+ devlink_free(dl);
+
+ return rc;
+}
+
+static void qed_devlink_unregister(struct qed_dev *cdev)
+{
+ if (!cdev->dl)
+ return;
+
+ devlink_params_unregister(cdev->dl, qed_devlink_params,
+ ARRAY_SIZE(qed_devlink_params));
+
+ devlink_unregister(cdev->dl);
+ devlink_free(cdev->dl);
+}
+
/* probing */
static struct qed_dev *qed_probe(struct pci_dev *pdev,
struct qed_probe_params *params)
@@ -370,6 +472,12 @@ static struct qed_dev *qed_probe(struct pci_dev *pdev,
}
DP_INFO(cdev, "PCI init completed successfully\n");
+ rc = qed_devlink_register(cdev);
+ if (rc) {
+ DP_INFO(cdev, "Failed to register devlink.\n");
+ goto err2;
+ }
+
rc = qed_hw_prepare(cdev, QED_PCI_DEFAULT);
if (rc) {
DP_ERR(cdev, "hw prepare failed\n");
@@ -399,6 +507,8 @@ static void qed_remove(struct qed_dev *cdev)
qed_set_power_state(cdev, PCI_D3hot);
+ qed_devlink_unregister(cdev);
+
qed_free_cdev(cdev);
}
@@ -1301,26 +1411,21 @@ static u32 qed_sb_init(struct qed_dev *cdev,
{
struct qed_hwfn *p_hwfn;
struct qed_ptt *p_ptt;
- int hwfn_index;
u16 rel_sb_id;
- u8 n_hwfns;
u32 rc;
- /* RoCE uses single engine and CMT uses two engines. When using both
- * we force only a single engine. Storage uses only engine 0 too.
- */
- if (type == QED_SB_TYPE_L2_QUEUE)
- n_hwfns = cdev->num_hwfns;
- else
- n_hwfns = 1;
-
- hwfn_index = sb_id % n_hwfns;
- p_hwfn = &cdev->hwfns[hwfn_index];
- rel_sb_id = sb_id / n_hwfns;
+ /* RoCE/Storage use a single engine in CMT mode while L2 uses both */
+ if (type == QED_SB_TYPE_L2_QUEUE) {
+ p_hwfn = &cdev->hwfns[sb_id % cdev->num_hwfns];
+ rel_sb_id = sb_id / cdev->num_hwfns;
+ } else {
+ p_hwfn = QED_AFFIN_HWFN(cdev);
+ rel_sb_id = sb_id;
+ }
DP_VERBOSE(cdev, NETIF_MSG_INTR,
"hwfn [%d] <--[init]-- SB %04x [0x%04x upper]\n",
- hwfn_index, rel_sb_id, sb_id);
+ IS_LEAD_HWFN(p_hwfn) ? 0 : 1, rel_sb_id, sb_id);
if (IS_PF(p_hwfn->cdev)) {
p_ptt = qed_ptt_acquire(p_hwfn);
@@ -1339,20 +1444,26 @@ static u32 qed_sb_init(struct qed_dev *cdev,
}
static u32 qed_sb_release(struct qed_dev *cdev,
- struct qed_sb_info *sb_info, u16 sb_id)
+ struct qed_sb_info *sb_info,
+ u16 sb_id,
+ enum qed_sb_type type)
{
struct qed_hwfn *p_hwfn;
- int hwfn_index;
u16 rel_sb_id;
u32 rc;
- hwfn_index = sb_id % cdev->num_hwfns;
- p_hwfn = &cdev->hwfns[hwfn_index];
- rel_sb_id = sb_id / cdev->num_hwfns;
+ /* RoCE/Storage use a single engine in CMT mode while L2 uses both */
+ if (type == QED_SB_TYPE_L2_QUEUE) {
+ p_hwfn = &cdev->hwfns[sb_id % cdev->num_hwfns];
+ rel_sb_id = sb_id / cdev->num_hwfns;
+ } else {
+ p_hwfn = QED_AFFIN_HWFN(cdev);
+ rel_sb_id = sb_id;
+ }
DP_VERBOSE(cdev, NETIF_MSG_INTR,
"hwfn [%d] <--[init]-- SB %04x [0x%04x upper]\n",
- hwfn_index, rel_sb_id, sb_id);
+ IS_LEAD_HWFN(p_hwfn) ? 0 : 1, rel_sb_id, sb_id);
rc = qed_int_sb_release(p_hwfn, sb_info, rel_sb_id);
@@ -2372,6 +2483,11 @@ static int qed_read_module_eeprom(struct qed_dev *cdev, char *buf,
return rc;
}
+static u8 qed_get_affin_hwfn_idx(struct qed_dev *cdev)
+{
+ return QED_AFFIN_HWFN_IDX(cdev);
+}
+
static struct qed_selftest_ops qed_selftest_ops_pass = {
.selftest_memory = &qed_selftest_memory,
.selftest_interrupt = &qed_selftest_interrupt,
@@ -2419,6 +2535,7 @@ const struct qed_common_ops qed_common_ops_pass = {
.db_recovery_add = &qed_db_recovery_add,
.db_recovery_del = &qed_db_recovery_del,
.read_module_eeprom = &qed_read_module_eeprom,
+ .get_affin_hwfn_idx = &qed_get_affin_hwfn_idx,
};
void qed_get_protocol_stats(struct qed_dev *cdev,
diff --git a/drivers/net/ethernet/qlogic/qed/qed_mcp.c b/drivers/net/ethernet/qlogic/qed/qed_mcp.c
index cc27fd60d689..758702c1ce9c 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_mcp.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_mcp.c
@@ -3685,3 +3685,68 @@ int qed_mcp_set_capabilities(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
return qed_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_FEATURE_SUPPORT,
features, &mcp_resp, &mcp_param);
}
+
+int qed_mcp_get_engine_config(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+{
+ struct qed_mcp_mb_params mb_params = {0};
+ struct qed_dev *cdev = p_hwfn->cdev;
+ u8 fir_valid, l2_valid;
+ int rc;
+
+ mb_params.cmd = DRV_MSG_CODE_GET_ENGINE_CONFIG;
+ rc = qed_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+ if (rc)
+ return rc;
+
+ if (mb_params.mcp_resp == FW_MSG_CODE_UNSUPPORTED) {
+ DP_INFO(p_hwfn,
+ "The get_engine_config command is unsupported by the MFW\n");
+ return -EOPNOTSUPP;
+ }
+
+ fir_valid = QED_MFW_GET_FIELD(mb_params.mcp_param,
+ FW_MB_PARAM_ENG_CFG_FIR_AFFIN_VALID);
+ if (fir_valid)
+ cdev->fir_affin =
+ QED_MFW_GET_FIELD(mb_params.mcp_param,
+ FW_MB_PARAM_ENG_CFG_FIR_AFFIN_VALUE);
+
+ l2_valid = QED_MFW_GET_FIELD(mb_params.mcp_param,
+ FW_MB_PARAM_ENG_CFG_L2_AFFIN_VALID);
+ if (l2_valid)
+ cdev->l2_affin_hint =
+ QED_MFW_GET_FIELD(mb_params.mcp_param,
+ FW_MB_PARAM_ENG_CFG_L2_AFFIN_VALUE);
+
+ DP_INFO(p_hwfn,
+ "Engine affinity config: FIR={valid %hhd, value %hhd}, L2_hint={valid %hhd, value %hhd}\n",
+ fir_valid, cdev->fir_affin, l2_valid, cdev->l2_affin_hint);
+
+ return 0;
+}
+
+int qed_mcp_get_ppfid_bitmap(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+{
+ struct qed_mcp_mb_params mb_params = {0};
+ struct qed_dev *cdev = p_hwfn->cdev;
+ int rc;
+
+ mb_params.cmd = DRV_MSG_CODE_GET_PPFID_BITMAP;
+ rc = qed_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+ if (rc)
+ return rc;
+
+ if (mb_params.mcp_resp == FW_MSG_CODE_UNSUPPORTED) {
+ DP_INFO(p_hwfn,
+ "The get_ppfid_bitmap command is unsupported by the MFW\n");
+ return -EOPNOTSUPP;
+ }
+
+ cdev->ppfid_bitmap = QED_MFW_GET_FIELD(mb_params.mcp_param,
+ FW_MB_PARAM_PPFID_BITMAP);
+
+ DP_VERBOSE(p_hwfn, QED_MSG_SP, "PPFID bitmap 0x%hhx\n",
+ cdev->ppfid_bitmap);
+
+ return 0;
+}
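
qed_mcp_get_engine_config() and qed_mcp_get_ppfid_bitmap() follow the same mailbox pattern: issue the DRV_MSG_CODE_* command, treat FW_MSG_CODE_UNSUPPORTED as -EOPNOTSUPP, and otherwise extract valid/value fields from mcp_param with QED_MFW_GET_FIELD(). The standalone sketch below shows the mask-and-shift extraction such a GET_FIELD helper presumably performs; the mask and shift values are invented for illustration and are not the in-tree FW_MB_PARAM_* definitions.

/* ---- standalone sketch with made-up field layout ---- */
#include <stdint.h>
#include <stdio.h>

/* Hypothetical field layout -- NOT the in-tree FW_MB_PARAM_* values. */
#define ENG_CFG_FIR_AFFIN_VALID_MASK	0x00000001u
#define ENG_CFG_FIR_AFFIN_VALID_SHIFT	0
#define ENG_CFG_FIR_AFFIN_VALUE_MASK	0x00000002u
#define ENG_CFG_FIR_AFFIN_VALUE_SHIFT	1

/* Same shape as a GET_FIELD helper: mask out the field, shift it down. */
#define GET_FIELD(param, field) \
	(((param) & field##_MASK) >> field##_SHIFT)

int main(void)
{
	uint32_t mcp_param = 0x3;	/* pretend mailbox reply: valid=1, value=1 */

	if (GET_FIELD(mcp_param, ENG_CFG_FIR_AFFIN_VALID))
		printf("FIR affinity = %u\n",
		       GET_FIELD(mcp_param, ENG_CFG_FIR_AFFIN_VALUE));
	return 0;
}
/* ---- end of sketch ---- */
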
diff --git a/drivers/net/ethernet/qlogic/qed/qed_mcp.h b/drivers/net/ethernet/qlogic/qed/qed_mcp.h
index 261c1a392e2c..e4f8fe4bd062 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_mcp.h
+++ b/drivers/net/ethernet/qlogic/qed/qed_mcp.h
@@ -1186,4 +1186,20 @@ void qed_mcp_read_ufp_config(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt);
*/
int qed_mcp_nvm_info_populate(struct qed_hwfn *p_hwfn);
+/**
+ * @brief Get the engine affinity configuration.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ */
+int qed_mcp_get_engine_config(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt);
+
+/**
+ * @brief Get the PPFID bitmap.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ */
+int qed_mcp_get_ppfid_bitmap(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt);
+
#endif
diff --git a/drivers/net/ethernet/qlogic/qed/qed_ptp.c b/drivers/net/ethernet/qlogic/qed/qed_ptp.c
index 1302b308bd87..0dacf2c18c09 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_ptp.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_ptp.c
@@ -44,6 +44,8 @@
/* Add/subtract the Adjustment_Value when making a Drift adjustment */
#define QED_DRIFT_CNTR_DIRECTION_SHIFT 31
#define QED_TIMESTAMP_MASK BIT(16)
+/* Param mask for Hardware to detect/timestamp the unicast PTP packets */
+#define QED_PTP_UCAST_PARAM_MASK 0xF
static enum qed_resc_lock qed_ptcdev_to_resc(struct qed_hwfn *p_hwfn)
{
@@ -157,7 +159,8 @@ static int qed_ptp_hw_read_tx_ts(struct qed_dev *cdev, u64 *timestamp)
*timestamp = 0;
val = qed_rd(p_hwfn, p_ptt, NIG_REG_TX_LLH_PTP_BUF_SEQID);
if (!(val & QED_TIMESTAMP_MASK)) {
- DP_INFO(p_hwfn, "Invalid Tx timestamp, buf_seqid = %d\n", val);
+ DP_VERBOSE(p_hwfn, QED_MSG_DEBUG,
+ "Invalid Tx timestamp, buf_seqid = %08x\n", val);
return -EINVAL;
}
@@ -242,7 +245,8 @@ static int qed_ptp_hw_cfg_filters(struct qed_dev *cdev,
return -EINVAL;
}
- qed_wr(p_hwfn, p_ptt, NIG_REG_LLH_PTP_PARAM_MASK, 0);
+ qed_wr(p_hwfn, p_ptt, NIG_REG_LLH_PTP_PARAM_MASK,
+ QED_PTP_UCAST_PARAM_MASK);
qed_wr(p_hwfn, p_ptt, NIG_REG_LLH_PTP_RULE_MASK, rule_mask);
qed_wr(p_hwfn, p_ptt, NIG_REG_RX_PTP_EN, enable_cfg);
@@ -252,7 +256,8 @@ static int qed_ptp_hw_cfg_filters(struct qed_dev *cdev,
qed_wr(p_hwfn, p_ptt, NIG_REG_TX_LLH_PTP_RULE_MASK, 0x3FFF);
} else {
qed_wr(p_hwfn, p_ptt, NIG_REG_TX_PTP_EN, enable_cfg);
- qed_wr(p_hwfn, p_ptt, NIG_REG_TX_LLH_PTP_PARAM_MASK, 0);
+ qed_wr(p_hwfn, p_ptt, NIG_REG_TX_LLH_PTP_PARAM_MASK,
+ QED_PTP_UCAST_PARAM_MASK);
qed_wr(p_hwfn, p_ptt, NIG_REG_TX_LLH_PTP_RULE_MASK, rule_mask);
}
diff --git a/drivers/net/ethernet/qlogic/qed/qed_rdma.c b/drivers/net/ethernet/qlogic/qed/qed_rdma.c
index 7873d6dfd91f..f900fde448db 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_rdma.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_rdma.c
@@ -700,7 +700,7 @@ static int qed_rdma_setup(struct qed_hwfn *p_hwfn,
return rc;
if (QED_IS_IWARP_PERSONALITY(p_hwfn)) {
- rc = qed_iwarp_setup(p_hwfn, p_ptt, params);
+ rc = qed_iwarp_setup(p_hwfn, params);
if (rc)
return rc;
} else {
@@ -742,7 +742,7 @@ static int qed_rdma_stop(void *rdma_cxt)
(ll2_ethertype_en & 0xFFFE));
if (QED_IS_IWARP_PERSONALITY(p_hwfn)) {
- rc = qed_iwarp_stop(p_hwfn, p_ptt);
+ rc = qed_iwarp_stop(p_hwfn);
if (rc) {
qed_ptt_release(p_hwfn, p_ptt);
return rc;
@@ -803,7 +803,7 @@ static int qed_rdma_add_user(void *rdma_cxt,
dpi_start_offset +
((out_params->dpi) * p_hwfn->dpi_size));
- out_params->dpi_phys_addr = p_hwfn->cdev->db_phys_addr +
+ out_params->dpi_phys_addr = p_hwfn->db_phys_addr +
dpi_start_offset +
((out_params->dpi) * p_hwfn->dpi_size);
@@ -818,14 +818,17 @@ static struct qed_rdma_port *qed_rdma_query_port(void *rdma_cxt)
{
struct qed_hwfn *p_hwfn = (struct qed_hwfn *)rdma_cxt;
struct qed_rdma_port *p_port = p_hwfn->p_rdma_info->port;
+ struct qed_mcp_link_state *p_link_output;
DP_VERBOSE(p_hwfn, QED_MSG_RDMA, "RDMA Query port\n");
- /* Link may have changed */
- p_port->port_state = p_hwfn->mcp_info->link_output.link_up ?
- QED_RDMA_PORT_UP : QED_RDMA_PORT_DOWN;
+ /* The link state is saved only for the leading hwfn */
+ p_link_output = &QED_LEADING_HWFN(p_hwfn->cdev)->mcp_info->link_output;
- p_port->link_speed = p_hwfn->mcp_info->link_output.speed;
+ p_port->port_state = p_link_output->link_up ? QED_RDMA_PORT_UP
+ : QED_RDMA_PORT_DOWN;
+
+ p_port->link_speed = p_link_output->speed;
p_port->max_msg_size = RDMA_MAX_DATA_SIZE_IN_WQE;
@@ -870,7 +873,7 @@ static void qed_rdma_cnq_prod_update(void *rdma_cxt, u8 qz_offset, u16 prod)
static int qed_fill_rdma_dev_info(struct qed_dev *cdev,
struct qed_dev_rdma_info *info)
{
- struct qed_hwfn *p_hwfn = QED_LEADING_HWFN(cdev);
+ struct qed_hwfn *p_hwfn = QED_AFFIN_HWFN(cdev);
memset(info, 0, sizeof(*info));
@@ -889,9 +892,9 @@ static int qed_rdma_get_sb_start(struct qed_dev *cdev)
int feat_num;
if (cdev->num_hwfns > 1)
- feat_num = FEAT_NUM(QED_LEADING_HWFN(cdev), QED_PF_L2_QUE);
+ feat_num = FEAT_NUM(QED_AFFIN_HWFN(cdev), QED_PF_L2_QUE);
else
- feat_num = FEAT_NUM(QED_LEADING_HWFN(cdev), QED_PF_L2_QUE) *
+ feat_num = FEAT_NUM(QED_AFFIN_HWFN(cdev), QED_PF_L2_QUE) *
cdev->num_hwfns;
return feat_num;
@@ -899,7 +902,7 @@ static int qed_rdma_get_sb_start(struct qed_dev *cdev)
static int qed_rdma_get_min_cnq_msix(struct qed_dev *cdev)
{
- int n_cnq = FEAT_NUM(QED_LEADING_HWFN(cdev), QED_RDMA_CNQ);
+ int n_cnq = FEAT_NUM(QED_AFFIN_HWFN(cdev), QED_RDMA_CNQ);
int n_msix = cdev->int_params.rdma_msix_cnt;
return min_t(int, n_cnq, n_msix);
@@ -1653,7 +1656,7 @@ static int qed_rdma_deregister_tid(void *rdma_cxt, u32 itid)
static void *qed_rdma_get_rdma_ctx(struct qed_dev *cdev)
{
- return QED_LEADING_HWFN(cdev);
+ return QED_AFFIN_HWFN(cdev);
}
static int qed_rdma_modify_srq(void *rdma_cxt,
@@ -1881,7 +1884,7 @@ err:
static int qed_rdma_init(struct qed_dev *cdev,
struct qed_rdma_start_in_params *params)
{
- return qed_rdma_start(QED_LEADING_HWFN(cdev), params);
+ return qed_rdma_start(QED_AFFIN_HWFN(cdev), params);
}
static void qed_rdma_remove_user(void *rdma_cxt, u16 dpi)
@@ -1899,23 +1902,12 @@ static int qed_roce_ll2_set_mac_filter(struct qed_dev *cdev,
u8 *old_mac_address,
u8 *new_mac_address)
{
- struct qed_hwfn *p_hwfn = QED_LEADING_HWFN(cdev);
- struct qed_ptt *p_ptt;
int rc = 0;
- p_ptt = qed_ptt_acquire(p_hwfn);
- if (!p_ptt) {
- DP_ERR(cdev,
- "qed roce ll2 mac filter set: failed to acquire PTT\n");
- return -EINVAL;
- }
-
if (old_mac_address)
- qed_llh_remove_mac_filter(p_hwfn, p_ptt, old_mac_address);
+ qed_llh_remove_mac_filter(cdev, 0, old_mac_address);
if (new_mac_address)
- rc = qed_llh_add_mac_filter(p_hwfn, p_ptt, new_mac_address);
-
- qed_ptt_release(p_hwfn, p_ptt);
+ rc = qed_llh_add_mac_filter(cdev, 0, new_mac_address);
if (rc)
DP_ERR(cdev,
@@ -1924,6 +1916,36 @@ static int qed_roce_ll2_set_mac_filter(struct qed_dev *cdev,
return rc;
}
+static int qed_iwarp_set_engine_affin(struct qed_dev *cdev, bool b_reset)
+{
+ enum qed_eng eng;
+ u8 ppfid = 0;
+ int rc;
+
+ /* Make sure iwarp cmt mode is enabled before setting affinity */
+ if (!cdev->iwarp_cmt)
+ return -EINVAL;
+
+ if (b_reset)
+ eng = QED_BOTH_ENG;
+ else
+ eng = cdev->l2_affin_hint ? QED_ENG1 : QED_ENG0;
+
+ rc = qed_llh_set_ppfid_affinity(cdev, ppfid, eng);
+ if (rc) {
+ DP_NOTICE(cdev,
+ "Failed to set the engine affinity of ppfid %d\n",
+ ppfid);
+ return rc;
+ }
+
+ DP_VERBOSE(cdev, (QED_MSG_RDMA | QED_MSG_SP),
+ "LLH: Set the engine affinity of non-RoCE packets as %d\n",
+ eng);
+
+ return 0;
+}
+
static const struct qed_rdma_ops qed_rdma_ops_pass = {
.common = &qed_common_ops_pass,
.fill_dev_info = &qed_fill_rdma_dev_info,
@@ -1963,6 +1985,7 @@ static const struct qed_rdma_ops qed_rdma_ops_pass = {
.ll2_set_fragment_of_tx_packet = &qed_ll2_set_fragment_of_tx_packet,
.ll2_set_mac_filter = &qed_roce_ll2_set_mac_filter,
.ll2_get_stats = &qed_ll2_get_stats,
+ .iwarp_set_engine_affin = &qed_iwarp_set_engine_affin,
.iwarp_connect = &qed_iwarp_connect,
.iwarp_create_listen = &qed_iwarp_create_listen,
.iwarp_destroy_listen = &qed_iwarp_destroy_listen,
diff --git a/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h b/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h
index 5ce825ca5f24..60f850c3bdd6 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h
+++ b/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h
@@ -254,6 +254,10 @@
0x500840UL
#define NIG_REG_LLH_TAGMAC_DEF_PF_VECTOR \
0x50196cUL
+#define NIG_REG_LLH_PPFID2PFID_TBL_0 \
+ 0x501970UL
+#define NIG_REG_LLH_ENG_CLS_ROCE_QP_SEL \
+ 0x50
#define NIG_REG_LLH_CLS_TYPE_DUALMODE \
0x501964UL
#define NIG_REG_LLH_FUNC_TAG_EN 0x5019b0UL
@@ -1626,6 +1630,8 @@
#define PHY_PCIE_REG_PHY1_K2_E5 \
0x624000UL
#define NIG_REG_ROCE_DUPLICATE_TO_HOST 0x5088f0UL
+#define NIG_REG_PPF_TO_ENGINE_SEL 0x508900UL
+#define NIG_REG_PPF_TO_ENGINE_SEL_SIZE 8
#define PRS_REG_LIGHT_L2_ETHERTYPE_EN 0x1f0968UL
#define NIG_REG_LLH_ENG_CLS_ENG_ID_TBL 0x501b90UL
#define DORQ_REG_PF_DPM_ENABLE 0x100510UL
diff --git a/drivers/net/ethernet/qlogic/qed/qed_sp_commands.c b/drivers/net/ethernet/qlogic/qed/qed_sp_commands.c
index 5a495fda9e9d..7e0b795230b2 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_sp_commands.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_sp_commands.c
@@ -588,7 +588,7 @@ int qed_sp_pf_update_stag(struct qed_hwfn *p_hwfn)
{
struct qed_spq_entry *p_ent = NULL;
struct qed_sp_init_data init_data;
- int rc = -EINVAL;
+ int rc;
/* Get SPQ entry */
memset(&init_data, 0, sizeof(init_data));
diff --git a/drivers/net/ethernet/qlogic/qed/qed_sriov.c b/drivers/net/ethernet/qlogic/qed/qed_sriov.c
index 2f318aaf2b05..78f77b712b10 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_sriov.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_sriov.c
@@ -917,10 +917,11 @@ static u8 qed_iov_alloc_vf_igu_sbs(struct qed_hwfn *p_hwfn,
/* Configure igu sb in CAU which were marked valid */
qed_init_cau_sb_entry(p_hwfn, &sb_entry,
p_hwfn->rel_pf_id, vf->abs_vf_id, 1);
+
qed_dmae_host2grc(p_hwfn, p_ptt,
(u64)(uintptr_t)&sb_entry,
CAU_REG_SB_VAR_MEMORY +
- p_block->igu_sb_id * sizeof(u64), 2, 0);
+ p_block->igu_sb_id * sizeof(u64), 2, NULL);
}
vf->num_sbs = (u8) num_rx_queues;
diff --git a/drivers/net/ethernet/qlogic/qede/qede.h b/drivers/net/ethernet/qlogic/qede/qede.h
index 92fe226980fd..0e931c04fecf 100644
--- a/drivers/net/ethernet/qlogic/qede/qede.h
+++ b/drivers/net/ethernet/qlogic/qede/qede.h
@@ -92,6 +92,7 @@ struct qede_stats_common {
u64 non_coalesced_pkts;
u64 coalesced_bytes;
u64 link_change_count;
+ u64 ptp_skip_txts;
/* port */
u64 rx_64_byte_packets;
@@ -189,6 +190,7 @@ struct qede_dev {
const struct qed_eth_ops *ops;
struct qede_ptp *ptp;
+ u64 ptp_skip_txts;
struct qed_dev_eth_info dev_info;
#define QEDE_MAX_RSS_CNT(edev) ((edev)->dev_info.num_queues)
@@ -549,7 +551,7 @@ int qede_txq_has_work(struct qede_tx_queue *txq);
void qede_recycle_rx_bd_ring(struct qede_rx_queue *rxq, u8 count);
void qede_update_rx_prod(struct qede_dev *edev, struct qede_rx_queue *rxq);
int qede_add_tc_flower_fltr(struct qede_dev *edev, __be16 proto,
- struct tc_cls_flower_offload *f);
+ struct flow_cls_offload *f);
#define RX_RING_SIZE_POW 13
#define RX_RING_SIZE ((u16)BIT(RX_RING_SIZE_POW))
diff --git a/drivers/net/ethernet/qlogic/qede/qede_ethtool.c b/drivers/net/ethernet/qlogic/qede/qede_ethtool.c
index 8911a97ab0ca..e85f9fef930c 100644
--- a/drivers/net/ethernet/qlogic/qede/qede_ethtool.c
+++ b/drivers/net/ethernet/qlogic/qede/qede_ethtool.c
@@ -174,6 +174,7 @@ static const struct {
QEDE_STAT(coalesced_bytes),
QEDE_STAT(link_change_count),
+ QEDE_STAT(ptp_skip_txts),
};
#define QEDE_NUM_STATS ARRAY_SIZE(qede_stats_arr)
diff --git a/drivers/net/ethernet/qlogic/qede/qede_filter.c b/drivers/net/ethernet/qlogic/qede/qede_filter.c
index add922b93d2c..9a6a9a008714 100644
--- a/drivers/net/ethernet/qlogic/qede/qede_filter.c
+++ b/drivers/net/ethernet/qlogic/qede/qede_filter.c
@@ -1943,7 +1943,7 @@ qede_parse_flow_attr(struct qede_dev *edev, __be16 proto,
}
int qede_add_tc_flower_fltr(struct qede_dev *edev, __be16 proto,
- struct tc_cls_flower_offload *f)
+ struct flow_cls_offload *f)
{
struct qede_arfs_fltr_node *n;
int min_hlen, rc = -EINVAL;
diff --git a/drivers/net/ethernet/qlogic/qede/qede_main.c b/drivers/net/ethernet/qlogic/qede/qede_main.c
index 02a97c659e29..8d1c208f778f 100644
--- a/drivers/net/ethernet/qlogic/qede/qede_main.c
+++ b/drivers/net/ethernet/qlogic/qede/qede_main.c
@@ -390,6 +390,7 @@ void qede_fill_by_demand_stats(struct qede_dev *edev)
p_common->brb_discards = stats.common.brb_discards;
p_common->tx_mac_ctrl_frames = stats.common.tx_mac_ctrl_frames;
p_common->link_change_count = stats.common.link_change_count;
+ p_common->ptp_skip_txts = edev->ptp_skip_txts;
if (QEDE_IS_BB(edev)) {
struct qede_stats_bb *p_bb = &edev->stats.bb;
@@ -547,13 +548,13 @@ static int qede_setup_tc(struct net_device *ndev, u8 num_tc)
}
static int
-qede_set_flower(struct qede_dev *edev, struct tc_cls_flower_offload *f,
+qede_set_flower(struct qede_dev *edev, struct flow_cls_offload *f,
__be16 proto)
{
switch (f->command) {
- case TC_CLSFLOWER_REPLACE:
+ case FLOW_CLS_REPLACE:
return qede_add_tc_flower_fltr(edev, proto, f);
- case TC_CLSFLOWER_DESTROY:
+ case FLOW_CLS_DESTROY:
return qede_delete_flow_filter(edev, f->cookie);
default:
return -EOPNOTSUPP;
@@ -563,7 +564,7 @@ qede_set_flower(struct qede_dev *edev, struct tc_cls_flower_offload *f,
static int qede_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
void *cb_priv)
{
- struct tc_cls_flower_offload *f;
+ struct flow_cls_offload *f;
struct qede_dev *edev = cb_priv;
if (!tc_cls_can_offload_and_chain0(edev->ndev, type_data))
@@ -578,24 +579,7 @@ static int qede_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
}
}
-static int qede_setup_tc_block(struct qede_dev *edev,
- struct tc_block_offload *f)
-{
- if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
- return -EOPNOTSUPP;
-
- switch (f->command) {
- case TC_BLOCK_BIND:
- return tcf_block_cb_register(f->block,
- qede_setup_tc_block_cb,
- edev, edev, f->extack);
- case TC_BLOCK_UNBIND:
- tcf_block_cb_unregister(f->block, qede_setup_tc_block_cb, edev);
- return 0;
- default:
- return -EOPNOTSUPP;
- }
-}
+static LIST_HEAD(qede_block_cb_list);
static int
qede_setup_tc_offload(struct net_device *dev, enum tc_setup_type type,
@@ -606,7 +590,10 @@ qede_setup_tc_offload(struct net_device *dev, enum tc_setup_type type,
switch (type) {
case TC_SETUP_BLOCK:
- return qede_setup_tc_block(edev, type_data);
+ return flow_block_cb_setup_simple(type_data,
+ &qede_block_cb_list,
+ qede_setup_tc_block_cb,
+ edev, edev, true);
case TC_SETUP_QDISC_MQPRIO:
mqprio = type_data;
@@ -959,13 +946,13 @@ void __qede_unlock(struct qede_dev *edev)
/* This version of the lock should be used when acquiring the RTNL lock is also
* needed in addition to the internal qede lock.
*/
-void qede_lock(struct qede_dev *edev)
+static void qede_lock(struct qede_dev *edev)
{
rtnl_lock();
__qede_lock(edev);
}
-void qede_unlock(struct qede_dev *edev)
+static void qede_unlock(struct qede_dev *edev)
{
__qede_unlock(edev);
rtnl_unlock();
@@ -1306,7 +1293,8 @@ static void qede_free_mem_sb(struct qede_dev *edev, struct qed_sb_info *sb_info,
u16 sb_id)
{
if (sb_info->sb_virt) {
- edev->ops->common->sb_release(edev->cdev, sb_info, sb_id);
+ edev->ops->common->sb_release(edev->cdev, sb_info, sb_id,
+ QED_SB_TYPE_L2_QUEUE);
dma_free_coherent(&edev->pdev->dev, sizeof(*sb_info->sb_virt),
(void *)sb_info->sb_virt, sb_info->sb_phys);
memset(sb_info, 0, sizeof(*sb_info));
@@ -2231,6 +2219,8 @@ out:
if (mode != QEDE_UNLOAD_RECOVERY)
DP_NOTICE(edev, "Link is down\n");
+ edev->ptp_skip_txts = 0;
+
DP_INFO(edev, "Ending qede unload\n");
}
diff --git a/drivers/net/ethernet/qlogic/qede/qede_ptp.c b/drivers/net/ethernet/qlogic/qede/qede_ptp.c
index bddb2b5982dc..f815435cf106 100644
--- a/drivers/net/ethernet/qlogic/qede/qede_ptp.c
+++ b/drivers/net/ethernet/qlogic/qede/qede_ptp.c
@@ -30,6 +30,7 @@
* SOFTWARE.
*/
#include "qede_ptp.h"
+#define QEDE_PTP_TX_TIMEOUT (2 * HZ)
struct qede_ptp {
const struct qed_eth_ptp_ops *ops;
@@ -38,6 +39,7 @@ struct qede_ptp {
struct timecounter tc;
struct ptp_clock *clock;
struct work_struct work;
+ unsigned long ptp_tx_start;
struct qede_dev *edev;
struct sk_buff *tx_skb;
@@ -160,18 +162,30 @@ static void qede_ptp_task(struct work_struct *work)
struct qede_dev *edev;
struct qede_ptp *ptp;
u64 timestamp, ns;
+ bool timedout;
int rc;
ptp = container_of(work, struct qede_ptp, work);
edev = ptp->edev;
+ timedout = time_is_before_jiffies(ptp->ptp_tx_start +
+ QEDE_PTP_TX_TIMEOUT);
/* Read Tx timestamp registers */
spin_lock_bh(&ptp->lock);
rc = ptp->ops->read_tx_ts(edev->cdev, &timestamp);
spin_unlock_bh(&ptp->lock);
if (rc) {
- /* Reschedule to keep checking for a valid timestamp value */
- schedule_work(&ptp->work);
+ if (unlikely(timedout)) {
+ DP_INFO(edev, "Tx timestamp is not recorded\n");
+ dev_kfree_skb_any(ptp->tx_skb);
+ ptp->tx_skb = NULL;
+ clear_bit_unlock(QEDE_FLAGS_PTP_TX_IN_PRORGESS,
+ &edev->flags);
+ edev->ptp_skip_txts++;
+ } else {
+ /* Reschedule to keep checking for a valid TS value */
+ schedule_work(&ptp->work);
+ }
return;
}
@@ -514,19 +528,28 @@ void qede_ptp_tx_ts(struct qede_dev *edev, struct sk_buff *skb)
if (!ptp)
return;
- if (test_and_set_bit_lock(QEDE_FLAGS_PTP_TX_IN_PRORGESS, &edev->flags))
+ if (test_and_set_bit_lock(QEDE_FLAGS_PTP_TX_IN_PRORGESS,
+ &edev->flags)) {
+ DP_ERR(edev, "Timestamping in progress\n");
+ edev->ptp_skip_txts++;
return;
+ }
if (unlikely(!test_bit(QEDE_FLAGS_TX_TIMESTAMPING_EN, &edev->flags))) {
- DP_NOTICE(edev,
- "Tx timestamping was not enabled, this packet will not be timestamped\n");
+ DP_ERR(edev,
+ "Tx timestamping was not enabled, this packet will not be timestamped\n");
+ clear_bit_unlock(QEDE_FLAGS_PTP_TX_IN_PRORGESS, &edev->flags);
+ edev->ptp_skip_txts++;
} else if (unlikely(ptp->tx_skb)) {
- DP_NOTICE(edev,
- "The device supports only a single outstanding packet to timestamp, this packet will not be timestamped\n");
+ DP_ERR(edev,
+ "The device supports only a single outstanding packet to timestamp, this packet will not be timestamped\n");
+ clear_bit_unlock(QEDE_FLAGS_PTP_TX_IN_PRORGESS, &edev->flags);
+ edev->ptp_skip_txts++;
} else {
skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
/* schedule check for Tx timestamp */
ptp->tx_skb = skb_get(skb);
+ ptp->ptp_tx_start = jiffies;
schedule_work(&ptp->work);
}
}
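
The qede_ptp.c hunks above record ptp_tx_start = jiffies when a Tx timestamp is armed and, on each poll, give up once time_is_before_jiffies(ptp_tx_start + QEDE_PTP_TX_TIMEOUT) reports that the roughly 2*HZ deadline has passed, freeing the skb and bumping ptp_skip_txts instead of rescheduling forever. A standalone sketch of the same deadline check, with CLOCK_MONOTONIC standing in for jiffies:

/* ---- standalone sketch, not driver code ---- */
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

#define TX_TS_TIMEOUT_NS (2ULL * 1000 * 1000 * 1000)	/* ~2 * HZ */

static unsigned long long now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (unsigned long long)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

/* Mirrors the driver's poll: try to read the timestamp, reschedule while
 * the deadline has not passed, otherwise drop the request and count it.
 */
static bool poll_tx_timestamp(unsigned long long start_ns,
			      bool (*read_ts)(unsigned long long *ts),
			      unsigned long long *skipped)
{
	unsigned long long ts;

	if (read_ts(&ts)) {
		printf("timestamp %llu\n", ts);
		return true;		/* done, complete the skb */
	}

	if (now_ns() - start_ns > TX_TS_TIMEOUT_NS) {
		(*skipped)++;		/* give up, like ptp_skip_txts++ */
		return true;		/* done, but without a timestamp */
	}

	return false;			/* reschedule the work item */
}

static bool never_ready(unsigned long long *ts)
{
	(void)ts;
	return false;
}

int main(void)
{
	unsigned long long skipped = 0;
	unsigned long long start = now_ns() - TX_TS_TIMEOUT_NS - 1;

	while (!poll_tx_timestamp(start, never_ready, &skipped))
		;
	printf("skipped = %llu\n", skipped);
	return 0;
}
/* ---- end of sketch ---- */
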
diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
index 7a873002e626..c07438db30ba 100644
--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
+++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
@@ -4119,13 +4119,14 @@ static void
qlcnic_config_indev_addr(struct qlcnic_adapter *adapter,
struct net_device *dev, unsigned long event)
{
+ const struct in_ifaddr *ifa;
struct in_device *indev;
indev = in_dev_get(dev);
if (!indev)
return;
- for_ifa(indev) {
+ in_dev_for_each_ifa_rtnl(ifa, indev) {
switch (event) {
case NETDEV_UP:
qlcnic_config_ipaddr(adapter,
@@ -4138,7 +4139,7 @@ qlcnic_config_indev_addr(struct qlcnic_adapter *adapter,
default:
break;
}
- } endfor_ifa(indev);
+ }
in_dev_put(indev);
}
diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_pf.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_pf.c
index af3b037fa442..5632da05145a 100644
--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_pf.c
+++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_pf.c
@@ -1066,7 +1066,7 @@ static int qlcnic_sriov_pf_cfg_ip_cmd(struct qlcnic_bc_trans *trans,
{
struct qlcnic_vf_info *vf = trans->vf;
struct qlcnic_adapter *adapter = vf->adapter;
- int err = -EIO;
+ int err;
cmd->req.arg[1] |= vf->vp->handle << 16;
cmd->req.arg[1] |= BIT_31;
diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h
index 4bf20d0651c4..576501db2a0b 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h
@@ -4,6 +4,7 @@
#ifndef _RMNET_MAP_H_
#define _RMNET_MAP_H_
+#include <linux/if_rmnet.h>
struct rmnet_map_control_command {
u8 command_name;
@@ -31,30 +32,6 @@ enum rmnet_map_commands {
RMNET_MAP_COMMAND_ENUM_LENGTH
};
-struct rmnet_map_header {
- u8 pad_len:6;
- u8 reserved_bit:1;
- u8 cd_bit:1;
- u8 mux_id;
- __be16 pkt_len;
-} __aligned(1);
-
-struct rmnet_map_dl_csum_trailer {
- u8 reserved1;
- u8 valid:1;
- u8 reserved2:7;
- u16 csum_start_offset;
- u16 csum_length;
- __be16 csum_value;
-} __aligned(1);
-
-struct rmnet_map_ul_csum_header {
- __be16 csum_start_offset;
- u16 csum_insert_offset:14;
- u16 udp_ip4_ind:1;
- u16 csum_enabled:1;
-} __aligned(1);
-
#define RMNET_MAP_GET_MUX_ID(Y) (((struct rmnet_map_header *) \
(Y)->data)->mux_id)
#define RMNET_MAP_GET_CD_BIT(Y) (((struct rmnet_map_header *) \
diff --git a/drivers/net/ethernet/realtek/Makefile b/drivers/net/ethernet/realtek/Makefile
index 33be8c5ad0c9..d5304bad2372 100644
--- a/drivers/net/ethernet/realtek/Makefile
+++ b/drivers/net/ethernet/realtek/Makefile
@@ -6,4 +6,5 @@
obj-$(CONFIG_8139CP) += 8139cp.o
obj-$(CONFIG_8139TOO) += 8139too.o
obj-$(CONFIG_ATP) += atp.o
+r8169-objs += r8169_main.o r8169_firmware.o
obj-$(CONFIG_R8169) += r8169.o
diff --git a/drivers/net/ethernet/realtek/r8169_firmware.c b/drivers/net/ethernet/realtek/r8169_firmware.c
new file mode 100644
index 000000000000..8f54a2c832eb
--- /dev/null
+++ b/drivers/net/ethernet/realtek/r8169_firmware.c
@@ -0,0 +1,231 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* r8169_firmware.c: RealTek 8169/8168/8101 ethernet driver.
+ *
+ * Copyright (c) 2002 ShuChen <shuchen@realtek.com.tw>
+ * Copyright (c) 2003 - 2007 Francois Romieu <romieu@fr.zoreil.com>
+ * Copyright (c) a lot of people too. Please respect their work.
+ *
+ * See MAINTAINERS file for support contact information.
+ */
+
+#include <linux/delay.h>
+#include <linux/firmware.h>
+
+#include "r8169_firmware.h"
+
+enum rtl_fw_opcode {
+ PHY_READ = 0x0,
+ PHY_DATA_OR = 0x1,
+ PHY_DATA_AND = 0x2,
+ PHY_BJMPN = 0x3,
+ PHY_MDIO_CHG = 0x4,
+ PHY_CLEAR_READCOUNT = 0x7,
+ PHY_WRITE = 0x8,
+ PHY_READCOUNT_EQ_SKIP = 0x9,
+ PHY_COMP_EQ_SKIPN = 0xa,
+ PHY_COMP_NEQ_SKIPN = 0xb,
+ PHY_WRITE_PREVIOUS = 0xc,
+ PHY_SKIPN = 0xd,
+ PHY_DELAY_MS = 0xe,
+};
+
+struct fw_info {
+ u32 magic;
+ char version[RTL_VER_SIZE];
+ __le32 fw_start;
+ __le32 fw_len;
+ u8 chksum;
+} __packed;
+
+#define FW_OPCODE_SIZE sizeof(typeof(*((struct rtl_fw_phy_action *)0)->code))
+
+static bool rtl_fw_format_ok(struct rtl_fw *rtl_fw)
+{
+ const struct firmware *fw = rtl_fw->fw;
+ struct fw_info *fw_info = (struct fw_info *)fw->data;
+ struct rtl_fw_phy_action *pa = &rtl_fw->phy_action;
+
+ if (fw->size < FW_OPCODE_SIZE)
+ return false;
+
+ if (!fw_info->magic) {
+ size_t i, size, start;
+ u8 checksum = 0;
+
+ if (fw->size < sizeof(*fw_info))
+ return false;
+
+ for (i = 0; i < fw->size; i++)
+ checksum += fw->data[i];
+ if (checksum != 0)
+ return false;
+
+ start = le32_to_cpu(fw_info->fw_start);
+ if (start > fw->size)
+ return false;
+
+ size = le32_to_cpu(fw_info->fw_len);
+ if (size > (fw->size - start) / FW_OPCODE_SIZE)
+ return false;
+
+ strscpy(rtl_fw->version, fw_info->version, RTL_VER_SIZE);
+
+ pa->code = (__le32 *)(fw->data + start);
+ pa->size = size;
+ } else {
+ if (fw->size % FW_OPCODE_SIZE)
+ return false;
+
+ strscpy(rtl_fw->version, rtl_fw->fw_name, RTL_VER_SIZE);
+
+ pa->code = (__le32 *)fw->data;
+ pa->size = fw->size / FW_OPCODE_SIZE;
+ }
+
+ return true;
+}
+
+static bool rtl_fw_data_ok(struct rtl_fw *rtl_fw)
+{
+ struct rtl_fw_phy_action *pa = &rtl_fw->phy_action;
+ size_t index;
+
+ for (index = 0; index < pa->size; index++) {
+ u32 action = le32_to_cpu(pa->code[index]);
+ u32 regno = (action & 0x0fff0000) >> 16;
+
+ switch (action >> 28) {
+ case PHY_READ:
+ case PHY_DATA_OR:
+ case PHY_DATA_AND:
+ case PHY_MDIO_CHG:
+ case PHY_CLEAR_READCOUNT:
+ case PHY_WRITE:
+ case PHY_WRITE_PREVIOUS:
+ case PHY_DELAY_MS:
+ break;
+
+ case PHY_BJMPN:
+ if (regno > index)
+ goto out;
+ break;
+ case PHY_READCOUNT_EQ_SKIP:
+ if (index + 2 >= pa->size)
+ goto out;
+ break;
+ case PHY_COMP_EQ_SKIPN:
+ case PHY_COMP_NEQ_SKIPN:
+ case PHY_SKIPN:
+ if (index + 1 + regno >= pa->size)
+ goto out;
+ break;
+
+ default:
+ dev_err(rtl_fw->dev, "Invalid action 0x%08x\n", action);
+ return false;
+ }
+ }
+
+ return true;
+out:
+ dev_err(rtl_fw->dev, "Out of range of firmware\n");
+ return false;
+}
+
+void rtl_fw_write_firmware(struct rtl8169_private *tp, struct rtl_fw *rtl_fw)
+{
+ struct rtl_fw_phy_action *pa = &rtl_fw->phy_action;
+ rtl_fw_write_t fw_write = rtl_fw->phy_write;
+ rtl_fw_read_t fw_read = rtl_fw->phy_read;
+ int predata = 0, count = 0;
+ size_t index;
+
+ for (index = 0; index < pa->size; index++) {
+ u32 action = le32_to_cpu(pa->code[index]);
+ u32 data = action & 0x0000ffff;
+ u32 regno = (action & 0x0fff0000) >> 16;
+ enum rtl_fw_opcode opcode = action >> 28;
+
+ if (!action)
+ break;
+
+ switch (opcode) {
+ case PHY_READ:
+ predata = fw_read(tp, regno);
+ count++;
+ break;
+ case PHY_DATA_OR:
+ predata |= data;
+ break;
+ case PHY_DATA_AND:
+ predata &= data;
+ break;
+ case PHY_BJMPN:
+ index -= (regno + 1);
+ break;
+ case PHY_MDIO_CHG:
+ if (data == 0) {
+ fw_write = rtl_fw->phy_write;
+ fw_read = rtl_fw->phy_read;
+ } else if (data == 1) {
+ fw_write = rtl_fw->mac_mcu_write;
+ fw_read = rtl_fw->mac_mcu_read;
+ }
+
+ break;
+ case PHY_CLEAR_READCOUNT:
+ count = 0;
+ break;
+ case PHY_WRITE:
+ fw_write(tp, regno, data);
+ break;
+ case PHY_READCOUNT_EQ_SKIP:
+ if (count == data)
+ index++;
+ break;
+ case PHY_COMP_EQ_SKIPN:
+ if (predata == data)
+ index += regno;
+ break;
+ case PHY_COMP_NEQ_SKIPN:
+ if (predata != data)
+ index += regno;
+ break;
+ case PHY_WRITE_PREVIOUS:
+ fw_write(tp, regno, predata);
+ break;
+ case PHY_SKIPN:
+ index += regno;
+ break;
+ case PHY_DELAY_MS:
+ mdelay(data);
+ break;
+ }
+ }
+}
+
+void rtl_fw_release_firmware(struct rtl_fw *rtl_fw)
+{
+ release_firmware(rtl_fw->fw);
+}
+
+int rtl_fw_request_firmware(struct rtl_fw *rtl_fw)
+{
+ int rc;
+
+ rc = request_firmware(&rtl_fw->fw, rtl_fw->fw_name, rtl_fw->dev);
+ if (rc < 0)
+ goto out;
+
+ if (!rtl_fw_format_ok(rtl_fw) || !rtl_fw_data_ok(rtl_fw)) {
+ release_firmware(rtl_fw->fw);
+ rc = -EINVAL;
+ goto out;
+ }
+
+ return 0;
+out:
+ dev_err(rtl_fw->dev, "Unable to load firmware %s (%d)\n",
+ rtl_fw->fw_name, rc);
+ return rc;
+}
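
Each 32-bit action word handled above packs three fields: the opcode in bits 31:28 (see enum rtl_fw_opcode), a 12-bit register/offset argument in bits 27:16, and a 16-bit immediate in bits 15:0. A minimal decode sketch, using a hypothetical helper name and the same masks as rtl_fw_data_ok() and rtl_fw_write_firmware() above:

/* Hypothetical helper (not part of the commit): split one firmware action
 * word into the three fields used by the parser and interpreter above.
 */
static void rtl_fw_decode_action(u32 action)
{
	u32 opcode = action >> 28;		/* enum rtl_fw_opcode */
	u32 regno = (action & 0x0fff0000) >> 16;
	u32 data = action & 0x0000ffff;

	pr_debug("opcode=%#x regno=%#x data=%#x\n", opcode, regno, data);
}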
diff --git a/drivers/net/ethernet/realtek/r8169_firmware.h b/drivers/net/ethernet/realtek/r8169_firmware.h
new file mode 100644
index 000000000000..7dc348ed8345
--- /dev/null
+++ b/drivers/net/ethernet/realtek/r8169_firmware.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* r8169_firmware.h: RealTek 8169/8168/8101 ethernet driver.
+ *
+ * Copyright (c) 2002 ShuChen <shuchen@realtek.com.tw>
+ * Copyright (c) 2003 - 2007 Francois Romieu <romieu@fr.zoreil.com>
+ * Copyright (c) a lot of people too. Please respect their work.
+ *
+ * See MAINTAINERS file for support contact information.
+ */
+
+#include <linux/device.h>
+#include <linux/firmware.h>
+
+struct rtl8169_private;
+typedef void (*rtl_fw_write_t)(struct rtl8169_private *tp, int reg, int val);
+typedef int (*rtl_fw_read_t)(struct rtl8169_private *tp, int reg);
+
+#define RTL_VER_SIZE 32
+
+struct rtl_fw {
+ rtl_fw_write_t phy_write;
+ rtl_fw_read_t phy_read;
+ rtl_fw_write_t mac_mcu_write;
+ rtl_fw_read_t mac_mcu_read;
+ const struct firmware *fw;
+ const char *fw_name;
+ struct device *dev;
+
+ char version[RTL_VER_SIZE];
+
+ struct rtl_fw_phy_action {
+ __le32 *code;
+ size_t size;
+ } phy_action;
+};
+
+int rtl_fw_request_firmware(struct rtl_fw *rtl_fw);
+void rtl_fw_release_firmware(struct rtl_fw *rtl_fw);
+void rtl_fw_write_firmware(struct rtl8169_private *tp, struct rtl_fw *rtl_fw);
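
The header above is the whole interface consumed by r8169_main.c in the hunks below. As a minimal usage sketch (hypothetical helper name; the real wiring lives in rtl_request_firmware() and rtl_apply_firmware() further down), a caller fills in the accessors and metadata, then requests, applies and releases the firmware:

/* Hypothetical caller mirroring the flow of rtl_request_firmware() and
 * rtl_apply_firmware() in r8169_main.c below.
 */
static int example_apply_fw(struct rtl8169_private *tp)
{
	struct rtl_fw *rtl_fw;
	int rc;

	rtl_fw = kzalloc(sizeof(*rtl_fw), GFP_KERNEL);
	if (!rtl_fw)
		return -ENOMEM;

	rtl_fw->phy_write = rtl_writephy;	/* driver-provided accessors */
	rtl_fw->phy_read = rtl_readphy;
	rtl_fw->mac_mcu_write = mac_mcu_write;
	rtl_fw->mac_mcu_read = mac_mcu_read;
	rtl_fw->fw_name = tp->fw_name;		/* e.g. FIRMWARE_8168D_1 */
	rtl_fw->dev = tp_to_dev(tp);

	rc = rtl_fw_request_firmware(rtl_fw);	/* loads and validates the blob */
	if (rc) {
		kfree(rtl_fw);
		return rc;
	}

	rtl_fw_write_firmware(tp, rtl_fw);	/* replay the PHY opcode stream */
	rtl_fw_release_firmware(rtl_fw);
	kfree(rtl_fw);
	return 0;
}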
diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169_main.c
index d06a61f00e78..efef5453b94f 100644
--- a/drivers/net/ethernet/realtek/r8169.c
+++ b/drivers/net/ethernet/realtek/r8169_main.c
@@ -27,12 +27,13 @@
#include <linux/interrupt.h>
#include <linux/dma-mapping.h>
#include <linux/pm_runtime.h>
-#include <linux/firmware.h>
#include <linux/prefetch.h>
#include <linux/pci-aspm.h>
#include <linux/ipv6.h>
#include <net/ip6_checksum.h>
+#include "r8169_firmware.h"
+
#define MODULENAME "r8169"
#define FIRMWARE_8168D_1 "rtl_nic/rtl8168d-1.fw"
@@ -72,6 +73,8 @@ static const int multicast_filter_limit = 32;
#define R8169_TX_RING_BYTES (NUM_TX_DESC * sizeof(struct TxDesc))
#define R8169_RX_RING_BYTES (NUM_RX_DESC * sizeof(struct RxDesc))
+#define RTL_CFG_NO_GBIT 1
+
/* write/read MMIO register */
#define RTL_W8(tp, reg, val8) writeb((val8), tp->mmio_addr + (reg))
#define RTL_W16(tp, reg, val16) writew((val16), tp->mmio_addr + (reg))
@@ -81,7 +84,7 @@ static const int multicast_filter_limit = 32;
#define RTL_R32(tp, reg) readl(tp->mmio_addr + (reg))
enum mac_version {
- RTL_GIGA_MAC_VER_01 = 0,
+ /* support for ancient RTL_GIGA_MAC_VER_01 has been removed */
RTL_GIGA_MAC_VER_02,
RTL_GIGA_MAC_VER_03,
RTL_GIGA_MAC_VER_04,
@@ -132,7 +135,7 @@ enum mac_version {
RTL_GIGA_MAC_VER_49,
RTL_GIGA_MAC_VER_50,
RTL_GIGA_MAC_VER_51,
- RTL_GIGA_MAC_NONE = 0xff,
+ RTL_GIGA_MAC_NONE
};
#define JUMBO_1K ETH_DATA_LEN
@@ -146,7 +149,6 @@ static const struct {
const char *fw_name;
} rtl_chip_infos[] = {
/* PCI devices. */
- [RTL_GIGA_MAC_VER_01] = {"RTL8169" },
[RTL_GIGA_MAC_VER_02] = {"RTL8169s" },
[RTL_GIGA_MAC_VER_03] = {"RTL8110s" },
[RTL_GIGA_MAC_VER_04] = {"RTL8169sb/8110sb" },
@@ -155,7 +157,7 @@ static const struct {
/* PCI-E devices. */
[RTL_GIGA_MAC_VER_07] = {"RTL8102e" },
[RTL_GIGA_MAC_VER_08] = {"RTL8102e" },
- [RTL_GIGA_MAC_VER_09] = {"RTL8102e" },
+ [RTL_GIGA_MAC_VER_09] = {"RTL8102e/RTL8103e" },
[RTL_GIGA_MAC_VER_10] = {"RTL8101e" },
[RTL_GIGA_MAC_VER_11] = {"RTL8168b/8111b" },
[RTL_GIGA_MAC_VER_12] = {"RTL8168b/8111b" },
@@ -188,9 +190,9 @@ static const struct {
[RTL_GIGA_MAC_VER_39] = {"RTL8106e", FIRMWARE_8106E_1},
[RTL_GIGA_MAC_VER_40] = {"RTL8168g/8111g", FIRMWARE_8168G_2},
[RTL_GIGA_MAC_VER_41] = {"RTL8168g/8111g" },
- [RTL_GIGA_MAC_VER_42] = {"RTL8168g/8111g", FIRMWARE_8168G_3},
- [RTL_GIGA_MAC_VER_43] = {"RTL8106e", FIRMWARE_8106E_2},
- [RTL_GIGA_MAC_VER_44] = {"RTL8411", FIRMWARE_8411_2 },
+ [RTL_GIGA_MAC_VER_42] = {"RTL8168gu/8111gu", FIRMWARE_8168G_3},
+ [RTL_GIGA_MAC_VER_43] = {"RTL8106eus", FIRMWARE_8106E_2},
+ [RTL_GIGA_MAC_VER_44] = {"RTL8411b", FIRMWARE_8411_2 },
[RTL_GIGA_MAC_VER_45] = {"RTL8168h/8111h", FIRMWARE_8168H_1},
[RTL_GIGA_MAC_VER_46] = {"RTL8168h/8111h", FIRMWARE_8168H_2},
[RTL_GIGA_MAC_VER_47] = {"RTL8107e", FIRMWARE_8107E_1},
@@ -200,32 +202,24 @@ static const struct {
[RTL_GIGA_MAC_VER_51] = {"RTL8168ep/8111ep" },
};
-enum cfg_version {
- RTL_CFG_0 = 0x00,
- RTL_CFG_1,
- RTL_CFG_2
-};
-
static const struct pci_device_id rtl8169_pci_tbl[] = {
- { PCI_VDEVICE(REALTEK, 0x2502), RTL_CFG_1 },
- { PCI_VDEVICE(REALTEK, 0x2600), RTL_CFG_1 },
- { PCI_VDEVICE(REALTEK, 0x8129), RTL_CFG_0 },
- { PCI_VDEVICE(REALTEK, 0x8136), RTL_CFG_2 },
- { PCI_VDEVICE(REALTEK, 0x8161), RTL_CFG_1 },
- { PCI_VDEVICE(REALTEK, 0x8167), RTL_CFG_0 },
- { PCI_VDEVICE(REALTEK, 0x8168), RTL_CFG_1 },
- { PCI_VDEVICE(NCUBE, 0x8168), RTL_CFG_1 },
- { PCI_VDEVICE(REALTEK, 0x8169), RTL_CFG_0 },
+ { PCI_VDEVICE(REALTEK, 0x2502) },
+ { PCI_VDEVICE(REALTEK, 0x2600) },
+ { PCI_VDEVICE(REALTEK, 0x8129) },
+ { PCI_VDEVICE(REALTEK, 0x8136), RTL_CFG_NO_GBIT },
+ { PCI_VDEVICE(REALTEK, 0x8161) },
+ { PCI_VDEVICE(REALTEK, 0x8167) },
+ { PCI_VDEVICE(REALTEK, 0x8168) },
+ { PCI_VDEVICE(NCUBE, 0x8168) },
+ { PCI_VDEVICE(REALTEK, 0x8169) },
{ PCI_VENDOR_ID_DLINK, 0x4300,
- PCI_VENDOR_ID_DLINK, 0x4b10, 0, 0, RTL_CFG_1 },
- { PCI_VDEVICE(DLINK, 0x4300), RTL_CFG_0 },
- { PCI_VDEVICE(DLINK, 0x4302), RTL_CFG_0 },
- { PCI_VDEVICE(AT, 0xc107), RTL_CFG_0 },
- { PCI_VDEVICE(USR, 0x0116), RTL_CFG_0 },
- { PCI_VENDOR_ID_LINKSYS, 0x1032,
- PCI_ANY_ID, 0x0024, 0, 0, RTL_CFG_0 },
- { 0x0001, 0x8168,
- PCI_ANY_ID, 0x2410, 0, 0, RTL_CFG_2 },
+ PCI_VENDOR_ID_DLINK, 0x4b10, 0, 0 },
+ { PCI_VDEVICE(DLINK, 0x4300) },
+ { PCI_VDEVICE(DLINK, 0x4302) },
+ { PCI_VDEVICE(AT, 0xc107) },
+ { PCI_VDEVICE(USR, 0x0116) },
+ { PCI_VENDOR_ID_LINKSYS, 0x1032, PCI_ANY_ID, 0x0024 },
+ { 0x0001, 0x8168, PCI_ANY_ID, 0x2410 },
{}
};
@@ -406,8 +400,6 @@ enum rtl_register_content {
RxOK = 0x0001,
/* RxStatusDesc */
- RxBOVF = (1 << 24),
- RxFOVF = (1 << 23),
RxRWT = (1 << 22),
RxRES = (1 << 21),
RxRUNT = (1 << 20),
@@ -492,6 +484,7 @@ enum rtl_register_content {
PCIDAC = (1 << 4),
PCIMulRW = (1 << 3),
#define INTT_MASK GENMASK(1, 0)
+#define CPCMD_MASK (Normal_mode | RxVlan | RxChkSum | INTT_MASK)
/* rtl8169_PHYstatus */
TBI_Enable = 0x80,
@@ -503,9 +496,6 @@ enum rtl_register_content {
LinkStatus = 0x02,
FullDup = 0x01,
- /* _TBICSRBit */
- TBILinkOK = 0x02000000,
-
/* ResetCounterCommand */
CounterReset = 0x1,
@@ -578,7 +568,6 @@ enum rtl_rx_desc_bit {
};
#define RsvdMask 0x3fffc000
-#define CPCMD_QUIRK_MASK (Normal_mode | RxVlan | RxChkSum | INTT_MASK)
struct TxDesc {
__le32 opts1;
@@ -639,7 +628,7 @@ struct rtl8169_private {
struct phy_device *phydev;
struct napi_struct napi;
u32 msg_enable;
- u16 mac_version;
+ enum mac_version mac_version;
u32 cur_rx; /* Index into the Rx descriptor buffer of next Rx pkt. */
u32 cur_tx; /* Index into the Tx descriptor buffer of next Rx pkt. */
u32 dirty_tx;
@@ -652,24 +641,9 @@ struct rtl8169_private {
void *Rx_databuff[NUM_RX_DESC]; /* Rx data buffers */
struct ring_info tx_skb[NUM_TX_DESC]; /* Tx data buffers */
u16 cp_cmd;
-
u16 irq_mask;
- const struct rtl_coalesce_info *coalesce_info;
struct clk *clk;
- struct mdio_ops {
- void (*write)(struct rtl8169_private *, int, int);
- int (*read)(struct rtl8169_private *, int);
- } mdio_ops;
-
- struct jumbo_ops {
- void (*enable)(struct rtl8169_private *);
- void (*disable)(struct rtl8169_private *);
- } jumbo_ops;
-
- void (*hw_start)(struct rtl8169_private *tp);
- bool (*tso_csum)(struct rtl8169_private *, struct sk_buff *, u32 *);
-
struct {
DECLARE_BITMAP(flags, RTL_FLAG_MAX);
struct mutex mutex;
@@ -678,24 +652,14 @@ struct rtl8169_private {
unsigned irq_enabled:1;
unsigned supports_gmii:1;
+ unsigned aspm_manageable:1;
dma_addr_t counters_phys_addr;
struct rtl8169_counters *counters;
struct rtl8169_tc_offsets tc_offset;
u32 saved_wolopts;
const char *fw_name;
- struct rtl_fw {
- const struct firmware *fw;
-
-#define RTL_VER_SIZE 32
-
- char version[RTL_VER_SIZE];
-
- struct rtl_fw_phy_action {
- __le32 *code;
- size_t size;
- } phy_action;
- } *rtl_fw;
+ struct rtl_fw *rtl_fw;
u32 ocp_base;
};
@@ -759,6 +723,12 @@ static void rtl_tx_performance_tweak(struct rtl8169_private *tp, u16 force)
PCI_EXP_DEVCTL_READRQ, force);
}
+static bool rtl_is_8168evl_up(struct rtl8169_private *tp)
+{
+ return tp->mac_version >= RTL_GIGA_MAC_VER_34 &&
+ tp->mac_version != RTL_GIGA_MAC_VER_39;
+}
+
struct rtl_cond {
bool (*check)(struct rtl8169_private *);
const char *msg;
@@ -847,7 +817,7 @@ static void r8168_phy_ocp_write(struct rtl8169_private *tp, u32 reg, u32 data)
rtl_udelay_loop_wait_low(tp, &rtl_ocp_gphy_cond, 25, 10);
}
-static u16 r8168_phy_ocp_read(struct rtl8169_private *tp, u32 reg)
+static int r8168_phy_ocp_read(struct rtl8169_private *tp, u32 reg)
{
if (rtl_ocp_reg_failure(tp, reg))
return 0;
@@ -855,7 +825,7 @@ static u16 r8168_phy_ocp_read(struct rtl8169_private *tp, u32 reg)
RTL_W32(tp, GPHY_OCP, reg << 15);
return rtl_udelay_loop_wait_high(tp, &rtl_ocp_gphy_cond, 25, 10) ?
- (RTL_R32(tp, GPHY_OCP) & 0xffff) : ~0;
+ (RTL_R32(tp, GPHY_OCP) & 0xffff) : -ETIMEDOUT;
}
static void r8168_mac_ocp_write(struct rtl8169_private *tp, u32 reg, u32 data)
@@ -938,7 +908,7 @@ static int r8169_mdio_read(struct rtl8169_private *tp, int reg)
RTL_W32(tp, PHYAR, 0x0 | (reg & 0x1f) << 16);
value = rtl_udelay_loop_wait_high(tp, &rtl_phyar_cond, 25, 20) ?
- RTL_R32(tp, PHYAR) & 0xffff : ~0;
+ RTL_R32(tp, PHYAR) & 0xffff : -ETIMEDOUT;
/*
* According to hardware specs a 20us delay is required after read
@@ -978,7 +948,7 @@ static int r8168dp_1_mdio_read(struct rtl8169_private *tp, int reg)
RTL_W32(tp, EPHY_RXER_NUM, 0);
return rtl_udelay_loop_wait_high(tp, &rtl_ocpar_cond, 1000, 100) ?
- RTL_R32(tp, OCPDR) & OCPDR_DATA_MASK : ~0;
+ RTL_R32(tp, OCPDR) & OCPDR_DATA_MASK : -ETIMEDOUT;
}
#define R8168DP_1_MDIO_ACCESS_BIT 0x00020000
@@ -1015,14 +985,38 @@ static int r8168dp_2_mdio_read(struct rtl8169_private *tp, int reg)
return value;
}
-static void rtl_writephy(struct rtl8169_private *tp, int location, u32 val)
+static void rtl_writephy(struct rtl8169_private *tp, int location, int val)
{
- tp->mdio_ops.write(tp, location, val);
+ switch (tp->mac_version) {
+ case RTL_GIGA_MAC_VER_27:
+ r8168dp_1_mdio_write(tp, location, val);
+ break;
+ case RTL_GIGA_MAC_VER_28:
+ case RTL_GIGA_MAC_VER_31:
+ r8168dp_2_mdio_write(tp, location, val);
+ break;
+ case RTL_GIGA_MAC_VER_40 ... RTL_GIGA_MAC_VER_51:
+ r8168g_mdio_write(tp, location, val);
+ break;
+ default:
+ r8169_mdio_write(tp, location, val);
+ break;
+ }
}
static int rtl_readphy(struct rtl8169_private *tp, int location)
{
- return tp->mdio_ops.read(tp, location);
+ switch (tp->mac_version) {
+ case RTL_GIGA_MAC_VER_27:
+ return r8168dp_1_mdio_read(tp, location);
+ case RTL_GIGA_MAC_VER_28:
+ case RTL_GIGA_MAC_VER_31:
+ return r8168dp_2_mdio_read(tp, location);
+ case RTL_GIGA_MAC_VER_40 ... RTL_GIGA_MAC_VER_51:
+ return r8168g_mdio_read(tp, location);
+ default:
+ return r8169_mdio_read(tp, location);
+ }
}
static void rtl_patchphy(struct rtl8169_private *tp, int reg_addr, int value)
@@ -1400,9 +1394,7 @@ static void __rtl8169_set_wol(struct rtl8169_private *tp, u32 wolopts)
rtl_unlock_config_regs(tp);
- switch (tp->mac_version) {
- case RTL_GIGA_MAC_VER_34 ... RTL_GIGA_MAC_VER_38:
- case RTL_GIGA_MAC_VER_40 ... RTL_GIGA_MAC_VER_51:
+ if (rtl_is_8168evl_up(tp)) {
tmp = ARRAY_SIZE(cfg) - 1;
if (wolopts & WAKE_MAGIC)
rtl_eri_set_bits(tp, 0x0dc, ERIAR_MASK_0100,
@@ -1410,10 +1402,8 @@ static void __rtl8169_set_wol(struct rtl8169_private *tp, u32 wolopts)
else
rtl_eri_clear_bits(tp, 0x0dc, ERIAR_MASK_0100,
MagicPacket_v2);
- break;
- default:
+ } else {
tmp = ARRAY_SIZE(cfg);
- break;
}
for (i = 0; i < tmp; i++) {
@@ -1424,7 +1414,7 @@ static void __rtl8169_set_wol(struct rtl8169_private *tp, u32 wolopts)
}
switch (tp->mac_version) {
- case RTL_GIGA_MAC_VER_01 ... RTL_GIGA_MAC_VER_17:
+ case RTL_GIGA_MAC_VER_02 ... RTL_GIGA_MAC_VER_17:
options = RTL_R8(tp, Config1) & ~PMEnable;
if (wolopts)
options |= PMEnable;
@@ -1794,18 +1784,16 @@ static const struct rtl_coalesce_info rtl_coalesce_info_8168_8136[] = {
static const struct rtl_coalesce_info *rtl_coalesce_info(struct net_device *dev)
{
struct rtl8169_private *tp = netdev_priv(dev);
- struct ethtool_link_ksettings ecmd;
const struct rtl_coalesce_info *ci;
- int rc;
- rc = phy_ethtool_get_link_ksettings(dev, &ecmd);
- if (rc < 0)
- return ERR_PTR(rc);
+ if (tp->mac_version <= RTL_GIGA_MAC_VER_06)
+ ci = rtl_coalesce_info_8169;
+ else
+ ci = rtl_coalesce_info_8168_8136;
- for (ci = tp->coalesce_info; ci->speed != 0; ci++) {
- if (ecmd.base.speed == ci->speed) {
+ for (; ci->speed; ci++) {
+ if (tp->phydev->speed == ci->speed)
return ci;
- }
}
return ERR_PTR(-ELNRNG);
@@ -1954,9 +1942,7 @@ static int rtl_get_eee_supp(struct rtl8169_private *tp)
ret = phy_read_mmd(phydev, MDIO_MMD_PCS, MDIO_PCS_EEE_ABLE);
break;
case RTL_GIGA_MAC_VER_40 ... RTL_GIGA_MAC_VER_51:
- phy_write(phydev, 0x1f, 0x0a5c);
- ret = phy_read(phydev, 0x12);
- phy_write(phydev, 0x1f, 0x0000);
+ ret = phy_read_paged(phydev, 0x0a5c, 0x12);
break;
default:
ret = -EPROTONOSUPPORT;
@@ -1979,9 +1965,7 @@ static int rtl_get_eee_lpadv(struct rtl8169_private *tp)
ret = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_EEE_LPABLE);
break;
case RTL_GIGA_MAC_VER_40 ... RTL_GIGA_MAC_VER_51:
- phy_write(phydev, 0x1f, 0x0a5d);
- ret = phy_read(phydev, 0x11);
- phy_write(phydev, 0x1f, 0x0000);
+ ret = phy_read_paged(phydev, 0x0a5d, 0x11);
break;
default:
ret = -EPROTONOSUPPORT;
@@ -2004,9 +1988,7 @@ static int rtl_get_eee_adv(struct rtl8169_private *tp)
ret = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_EEE_ADV);
break;
case RTL_GIGA_MAC_VER_40 ... RTL_GIGA_MAC_VER_51:
- phy_write(phydev, 0x1f, 0x0a5d);
- ret = phy_read(phydev, 0x10);
- phy_write(phydev, 0x1f, 0x0000);
+ ret = phy_read_paged(phydev, 0x0a5d, 0x10);
break;
default:
ret = -EPROTONOSUPPORT;
@@ -2029,9 +2011,7 @@ static int rtl_set_eee_adv(struct rtl8169_private *tp, int val)
ret = phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_EEE_ADV, val);
break;
case RTL_GIGA_MAC_VER_40 ... RTL_GIGA_MAC_VER_51:
- phy_write(phydev, 0x1f, 0x0a5d);
- phy_write(phydev, 0x10, val);
- phy_write(phydev, 0x1f, 0x0000);
+ phy_write_paged(phydev, 0x0a5d, 0x10, val);
break;
default:
ret = -EPROTONOSUPPORT;
@@ -2252,7 +2232,6 @@ static void rtl8169_get_mac_version(struct rtl8169_private *tp)
{ 0xfc8, 0x100, RTL_GIGA_MAC_VER_04 },
{ 0xfc8, 0x040, RTL_GIGA_MAC_VER_03 },
{ 0xfc8, 0x008, RTL_GIGA_MAC_VER_02 },
- { 0xfc8, 0x000, RTL_GIGA_MAC_VER_01 },
/* Catch-all */
{ 0x000, 0x000, RTL_GIGA_MAC_NONE }
@@ -2292,246 +2271,10 @@ static void __rtl_writephy_batch(struct rtl8169_private *tp,
#define rtl_writephy_batch(tp, a) __rtl_writephy_batch(tp, a, ARRAY_SIZE(a))
-#define PHY_READ 0x00000000
-#define PHY_DATA_OR 0x10000000
-#define PHY_DATA_AND 0x20000000
-#define PHY_BJMPN 0x30000000
-#define PHY_MDIO_CHG 0x40000000
-#define PHY_CLEAR_READCOUNT 0x70000000
-#define PHY_WRITE 0x80000000
-#define PHY_READCOUNT_EQ_SKIP 0x90000000
-#define PHY_COMP_EQ_SKIPN 0xa0000000
-#define PHY_COMP_NEQ_SKIPN 0xb0000000
-#define PHY_WRITE_PREVIOUS 0xc0000000
-#define PHY_SKIPN 0xd0000000
-#define PHY_DELAY_MS 0xe0000000
-
-struct fw_info {
- u32 magic;
- char version[RTL_VER_SIZE];
- __le32 fw_start;
- __le32 fw_len;
- u8 chksum;
-} __packed;
-
-#define FW_OPCODE_SIZE sizeof(typeof(*((struct rtl_fw_phy_action *)0)->code))
-
-static bool rtl_fw_format_ok(struct rtl8169_private *tp, struct rtl_fw *rtl_fw)
-{
- const struct firmware *fw = rtl_fw->fw;
- struct fw_info *fw_info = (struct fw_info *)fw->data;
- struct rtl_fw_phy_action *pa = &rtl_fw->phy_action;
- char *version = rtl_fw->version;
- bool rc = false;
-
- if (fw->size < FW_OPCODE_SIZE)
- goto out;
-
- if (!fw_info->magic) {
- size_t i, size, start;
- u8 checksum = 0;
-
- if (fw->size < sizeof(*fw_info))
- goto out;
-
- for (i = 0; i < fw->size; i++)
- checksum += fw->data[i];
- if (checksum != 0)
- goto out;
-
- start = le32_to_cpu(fw_info->fw_start);
- if (start > fw->size)
- goto out;
-
- size = le32_to_cpu(fw_info->fw_len);
- if (size > (fw->size - start) / FW_OPCODE_SIZE)
- goto out;
-
- memcpy(version, fw_info->version, RTL_VER_SIZE);
-
- pa->code = (__le32 *)(fw->data + start);
- pa->size = size;
- } else {
- if (fw->size % FW_OPCODE_SIZE)
- goto out;
-
- strlcpy(version, tp->fw_name, RTL_VER_SIZE);
-
- pa->code = (__le32 *)fw->data;
- pa->size = fw->size / FW_OPCODE_SIZE;
- }
- version[RTL_VER_SIZE - 1] = 0;
-
- rc = true;
-out:
- return rc;
-}
-
-static bool rtl_fw_data_ok(struct rtl8169_private *tp, struct net_device *dev,
- struct rtl_fw_phy_action *pa)
-{
- bool rc = false;
- size_t index;
-
- for (index = 0; index < pa->size; index++) {
- u32 action = le32_to_cpu(pa->code[index]);
- u32 regno = (action & 0x0fff0000) >> 16;
-
- switch(action & 0xf0000000) {
- case PHY_READ:
- case PHY_DATA_OR:
- case PHY_DATA_AND:
- case PHY_MDIO_CHG:
- case PHY_CLEAR_READCOUNT:
- case PHY_WRITE:
- case PHY_WRITE_PREVIOUS:
- case PHY_DELAY_MS:
- break;
-
- case PHY_BJMPN:
- if (regno > index) {
- netif_err(tp, ifup, tp->dev,
- "Out of range of firmware\n");
- goto out;
- }
- break;
- case PHY_READCOUNT_EQ_SKIP:
- if (index + 2 >= pa->size) {
- netif_err(tp, ifup, tp->dev,
- "Out of range of firmware\n");
- goto out;
- }
- break;
- case PHY_COMP_EQ_SKIPN:
- case PHY_COMP_NEQ_SKIPN:
- case PHY_SKIPN:
- if (index + 1 + regno >= pa->size) {
- netif_err(tp, ifup, tp->dev,
- "Out of range of firmware\n");
- goto out;
- }
- break;
-
- default:
- netif_err(tp, ifup, tp->dev,
- "Invalid action 0x%08x\n", action);
- goto out;
- }
- }
- rc = true;
-out:
- return rc;
-}
-
-static int rtl_check_firmware(struct rtl8169_private *tp, struct rtl_fw *rtl_fw)
-{
- struct net_device *dev = tp->dev;
- int rc = -EINVAL;
-
- if (!rtl_fw_format_ok(tp, rtl_fw)) {
- netif_err(tp, ifup, dev, "invalid firmware\n");
- goto out;
- }
-
- if (rtl_fw_data_ok(tp, dev, &rtl_fw->phy_action))
- rc = 0;
-out:
- return rc;
-}
-
-static void rtl_phy_write_fw(struct rtl8169_private *tp, struct rtl_fw *rtl_fw)
-{
- struct rtl_fw_phy_action *pa = &rtl_fw->phy_action;
- struct mdio_ops org, *ops = &tp->mdio_ops;
- u32 predata, count;
- size_t index;
-
- predata = count = 0;
- org.write = ops->write;
- org.read = ops->read;
-
- for (index = 0; index < pa->size; ) {
- u32 action = le32_to_cpu(pa->code[index]);
- u32 data = action & 0x0000ffff;
- u32 regno = (action & 0x0fff0000) >> 16;
-
- if (!action)
- break;
-
- switch(action & 0xf0000000) {
- case PHY_READ:
- predata = rtl_readphy(tp, regno);
- count++;
- index++;
- break;
- case PHY_DATA_OR:
- predata |= data;
- index++;
- break;
- case PHY_DATA_AND:
- predata &= data;
- index++;
- break;
- case PHY_BJMPN:
- index -= regno;
- break;
- case PHY_MDIO_CHG:
- if (data == 0) {
- ops->write = org.write;
- ops->read = org.read;
- } else if (data == 1) {
- ops->write = mac_mcu_write;
- ops->read = mac_mcu_read;
- }
-
- index++;
- break;
- case PHY_CLEAR_READCOUNT:
- count = 0;
- index++;
- break;
- case PHY_WRITE:
- rtl_writephy(tp, regno, data);
- index++;
- break;
- case PHY_READCOUNT_EQ_SKIP:
- index += (count == data) ? 2 : 1;
- break;
- case PHY_COMP_EQ_SKIPN:
- if (predata == data)
- index += regno;
- index++;
- break;
- case PHY_COMP_NEQ_SKIPN:
- if (predata != data)
- index += regno;
- index++;
- break;
- case PHY_WRITE_PREVIOUS:
- rtl_writephy(tp, regno, predata);
- index++;
- break;
- case PHY_SKIPN:
- index += regno + 1;
- break;
- case PHY_DELAY_MS:
- mdelay(data);
- index++;
- break;
-
- default:
- BUG();
- }
- }
-
- ops->write = org.write;
- ops->read = org.read;
-}
-
static void rtl_release_firmware(struct rtl8169_private *tp)
{
if (tp->rtl_fw) {
- release_firmware(tp->rtl_fw->fw);
+ rtl_fw_release_firmware(tp->rtl_fw);
kfree(tp->rtl_fw);
tp->rtl_fw = NULL;
}
@@ -2539,9 +2282,9 @@ static void rtl_release_firmware(struct rtl8169_private *tp)
static void rtl_apply_firmware(struct rtl8169_private *tp)
{
- /* TODO: release firmware once rtl_phy_write_fw signals failures. */
+ /* TODO: release firmware if rtl_fw_write_firmware signals failure. */
if (tp->rtl_fw)
- rtl_phy_write_fw(tp, tp->rtl_fw);
+ rtl_fw_write_firmware(tp, tp->rtl_fw);
}
static void rtl_apply_firmware_cond(struct rtl8169_private *tp, u8 reg, u16 val)
@@ -2578,9 +2321,7 @@ static void rtl8168f_config_eee_phy(struct rtl8169_private *tp)
static void rtl8168g_config_eee_phy(struct rtl8169_private *tp)
{
- phy_write(tp->phydev, 0x1f, 0x0a43);
- phy_set_bits(tp->phydev, 0x11, BIT(4));
- phy_write(tp->phydev, 0x1f, 0x0000);
+ phy_modify_paged(tp->phydev, 0x0a43, 0x11, 0, BIT(4));
}
static void rtl8169s_hw_phy_config(struct rtl8169_private *tp)
@@ -2910,50 +2651,59 @@ static void rtl8168c_4_hw_phy_config(struct rtl8169_private *tp)
rtl8168c_3_hw_phy_config(tp);
}
-static void rtl8168d_1_hw_phy_config(struct rtl8169_private *tp)
-{
- static const struct phy_reg phy_reg_init_0[] = {
- /* Channel Estimation */
- { 0x1f, 0x0001 },
- { 0x06, 0x4064 },
- { 0x07, 0x2863 },
- { 0x08, 0x059c },
- { 0x09, 0x26b4 },
- { 0x0a, 0x6a19 },
- { 0x0b, 0xdcc8 },
- { 0x10, 0xf06d },
- { 0x14, 0x7f68 },
- { 0x18, 0x7fd9 },
- { 0x1c, 0xf0ff },
- { 0x1d, 0x3d9c },
- { 0x1f, 0x0003 },
- { 0x12, 0xf49f },
- { 0x13, 0x070b },
- { 0x1a, 0x05ad },
- { 0x14, 0x94c0 },
+static const struct phy_reg rtl8168d_1_phy_reg_init_0[] = {
+ /* Channel Estimation */
+ { 0x1f, 0x0001 },
+ { 0x06, 0x4064 },
+ { 0x07, 0x2863 },
+ { 0x08, 0x059c },
+ { 0x09, 0x26b4 },
+ { 0x0a, 0x6a19 },
+ { 0x0b, 0xdcc8 },
+ { 0x10, 0xf06d },
+ { 0x14, 0x7f68 },
+ { 0x18, 0x7fd9 },
+ { 0x1c, 0xf0ff },
+ { 0x1d, 0x3d9c },
+ { 0x1f, 0x0003 },
+ { 0x12, 0xf49f },
+ { 0x13, 0x070b },
+ { 0x1a, 0x05ad },
+ { 0x14, 0x94c0 },
- /*
- * Tx Error Issue
- * Enhance line driver power
- */
- { 0x1f, 0x0002 },
- { 0x06, 0x5561 },
- { 0x1f, 0x0005 },
- { 0x05, 0x8332 },
- { 0x06, 0x5561 },
+ /*
+ * Tx Error Issue
+ * Enhance line driver power
+ */
+ { 0x1f, 0x0002 },
+ { 0x06, 0x5561 },
+ { 0x1f, 0x0005 },
+ { 0x05, 0x8332 },
+ { 0x06, 0x5561 },
- /*
- * Can not link to 1Gbps with bad cable
- * Decrease SNR threshold form 21.07dB to 19.04dB
- */
- { 0x1f, 0x0001 },
- { 0x17, 0x0cc0 },
+ /*
+ * Can not link to 1Gbps with bad cable
+ * Decrease SNR threshold form 21.07dB to 19.04dB
+ */
+ { 0x1f, 0x0001 },
+ { 0x17, 0x0cc0 },
- { 0x1f, 0x0000 },
- { 0x0d, 0xf880 }
- };
+ { 0x1f, 0x0000 },
+ { 0x0d, 0xf880 }
+};
- rtl_writephy_batch(tp, phy_reg_init_0);
+static const struct phy_reg rtl8168d_1_phy_reg_init_1[] = {
+ { 0x1f, 0x0002 },
+ { 0x05, 0x669a },
+ { 0x1f, 0x0005 },
+ { 0x05, 0x8330 },
+ { 0x06, 0x669a },
+ { 0x1f, 0x0002 }
+};
+
+static void rtl8168d_1_hw_phy_config(struct rtl8169_private *tp)
+{
+ rtl_writephy_batch(tp, rtl8168d_1_phy_reg_init_0);
/*
* Rx Error Issue
@@ -2964,17 +2714,9 @@ static void rtl8168d_1_hw_phy_config(struct rtl8169_private *tp)
rtl_w0w1_phy(tp, 0x0c, 0xa200, 0x5d00);
if (rtl8168d_efuse_read(tp, 0x01) == 0xb1) {
- static const struct phy_reg phy_reg_init[] = {
- { 0x1f, 0x0002 },
- { 0x05, 0x669a },
- { 0x1f, 0x0005 },
- { 0x05, 0x8330 },
- { 0x06, 0x669a },
- { 0x1f, 0x0002 }
- };
int val;
- rtl_writephy_batch(tp, phy_reg_init);
+ rtl_writephy_batch(tp, rtl8168d_1_phy_reg_init_1);
val = rtl_readphy(tp, 0x0d);
@@ -3023,62 +2765,12 @@ static void rtl8168d_1_hw_phy_config(struct rtl8169_private *tp)
static void rtl8168d_2_hw_phy_config(struct rtl8169_private *tp)
{
- static const struct phy_reg phy_reg_init_0[] = {
- /* Channel Estimation */
- { 0x1f, 0x0001 },
- { 0x06, 0x4064 },
- { 0x07, 0x2863 },
- { 0x08, 0x059c },
- { 0x09, 0x26b4 },
- { 0x0a, 0x6a19 },
- { 0x0b, 0xdcc8 },
- { 0x10, 0xf06d },
- { 0x14, 0x7f68 },
- { 0x18, 0x7fd9 },
- { 0x1c, 0xf0ff },
- { 0x1d, 0x3d9c },
- { 0x1f, 0x0003 },
- { 0x12, 0xf49f },
- { 0x13, 0x070b },
- { 0x1a, 0x05ad },
- { 0x14, 0x94c0 },
-
- /*
- * Tx Error Issue
- * Enhance line driver power
- */
- { 0x1f, 0x0002 },
- { 0x06, 0x5561 },
- { 0x1f, 0x0005 },
- { 0x05, 0x8332 },
- { 0x06, 0x5561 },
-
- /*
- * Can not link to 1Gbps with bad cable
- * Decrease SNR threshold form 21.07dB to 19.04dB
- */
- { 0x1f, 0x0001 },
- { 0x17, 0x0cc0 },
-
- { 0x1f, 0x0000 },
- { 0x0d, 0xf880 }
- };
-
- rtl_writephy_batch(tp, phy_reg_init_0);
+ rtl_writephy_batch(tp, rtl8168d_1_phy_reg_init_0);
if (rtl8168d_efuse_read(tp, 0x01) == 0xb1) {
- static const struct phy_reg phy_reg_init[] = {
- { 0x1f, 0x0002 },
- { 0x05, 0x669a },
- { 0x1f, 0x0005 },
- { 0x05, 0x8330 },
- { 0x06, 0x669a },
-
- { 0x1f, 0x0002 }
- };
int val;
- rtl_writephy_batch(tp, phy_reg_init);
+ rtl_writephy_batch(tp, rtl8168d_1_phy_reg_init_1);
val = rtl_readphy(tp, 0x0d);
if ((val & 0x00ff) != 0x006c) {
@@ -3528,20 +3220,15 @@ static void rtl8411_hw_phy_config(struct rtl8169_private *tp)
static void rtl8168g_disable_aldps(struct rtl8169_private *tp)
{
- phy_write(tp->phydev, 0x1f, 0x0a43);
- phy_clear_bits(tp->phydev, 0x10, BIT(2));
+ phy_modify_paged(tp->phydev, 0x0a43, 0x10, BIT(2), 0);
}
static void rtl8168g_phy_adjust_10m_aldps(struct rtl8169_private *tp)
{
struct phy_device *phydev = tp->phydev;
- phy_write(phydev, 0x1f, 0x0bcc);
- phy_clear_bits(phydev, 0x14, BIT(8));
-
- phy_write(phydev, 0x1f, 0x0a44);
- phy_set_bits(phydev, 0x11, BIT(7) | BIT(6));
-
+ phy_modify_paged(phydev, 0x0bcc, 0x14, BIT(8), 0);
+ phy_modify_paged(phydev, 0x0a44, 0x11, 0, BIT(7) | BIT(6));
phy_write(phydev, 0x1f, 0x0a43);
phy_write(phydev, 0x13, 0x8084);
phy_clear_bits(phydev, 0x14, BIT(14) | BIT(13));
@@ -3552,43 +3239,36 @@ static void rtl8168g_phy_adjust_10m_aldps(struct rtl8169_private *tp)
static void rtl8168g_1_hw_phy_config(struct rtl8169_private *tp)
{
+ int ret;
+
rtl_apply_firmware(tp);
- rtl_writephy(tp, 0x1f, 0x0a46);
- if (rtl_readphy(tp, 0x10) & 0x0100) {
- rtl_writephy(tp, 0x1f, 0x0bcc);
- rtl_w0w1_phy(tp, 0x12, 0x0000, 0x8000);
- } else {
- rtl_writephy(tp, 0x1f, 0x0bcc);
- rtl_w0w1_phy(tp, 0x12, 0x8000, 0x0000);
- }
+ ret = phy_read_paged(tp->phydev, 0x0a46, 0x10);
+ if (ret & BIT(8))
+ phy_modify_paged(tp->phydev, 0x0bcc, 0x12, BIT(15), 0);
+ else
+ phy_modify_paged(tp->phydev, 0x0bcc, 0x12, 0, BIT(15));
- rtl_writephy(tp, 0x1f, 0x0a46);
- if (rtl_readphy(tp, 0x13) & 0x0100) {
- rtl_writephy(tp, 0x1f, 0x0c41);
- rtl_w0w1_phy(tp, 0x15, 0x0002, 0x0000);
- } else {
- rtl_writephy(tp, 0x1f, 0x0c41);
- rtl_w0w1_phy(tp, 0x15, 0x0000, 0x0002);
- }
+ ret = phy_read_paged(tp->phydev, 0x0a46, 0x13);
+ if (ret & BIT(8))
+ phy_modify_paged(tp->phydev, 0x0c41, 0x12, 0, BIT(1));
+ else
+ phy_modify_paged(tp->phydev, 0x0c41, 0x12, BIT(1), 0);
/* Enable PHY auto speed down */
- rtl_writephy(tp, 0x1f, 0x0a44);
- rtl_w0w1_phy(tp, 0x11, 0x000c, 0x0000);
+ phy_modify_paged(tp->phydev, 0x0a44, 0x11, 0, BIT(3) | BIT(2));
rtl8168g_phy_adjust_10m_aldps(tp);
/* EEE auto-fallback function */
- rtl_writephy(tp, 0x1f, 0x0a4b);
- rtl_w0w1_phy(tp, 0x11, 0x0004, 0x0000);
+ phy_modify_paged(tp->phydev, 0x0a4b, 0x11, 0, BIT(2));
/* Enable UC LPF tune function */
rtl_writephy(tp, 0x1f, 0x0a43);
rtl_writephy(tp, 0x13, 0x8012);
rtl_w0w1_phy(tp, 0x14, 0x8000, 0x0000);
- rtl_writephy(tp, 0x1f, 0x0c42);
- rtl_w0w1_phy(tp, 0x11, 0x4000, 0x2000);
+ phy_modify_paged(tp->phydev, 0x0c42, 0x11, BIT(13), BIT(14));
/* Improve SWR Efficiency */
rtl_writephy(tp, 0x1f, 0x0bcd);
@@ -3600,6 +3280,7 @@ static void rtl8168g_1_hw_phy_config(struct rtl8169_private *tp)
rtl_writephy(tp, 0x14, 0x1065);
rtl_writephy(tp, 0x14, 0x9065);
rtl_writephy(tp, 0x14, 0x1065);
+ rtl_writephy(tp, 0x1f, 0x0000);
rtl8168g_disable_aldps(tp);
rtl8168g_config_eee_phy(tp);
@@ -3684,14 +3365,10 @@ static void rtl8168h_1_hw_phy_config(struct rtl8169_private *tp)
rtl_writephy(tp, 0x1f, 0x0000);
/* enable GPHY 10M */
- rtl_writephy(tp, 0x1f, 0x0a44);
- rtl_w0w1_phy(tp, 0x11, 0x0800, 0x0000);
- rtl_writephy(tp, 0x1f, 0x0000);
+ phy_modify_paged(tp->phydev, 0x0a44, 0x11, 0, BIT(11));
/* SAR ADC performance */
- rtl_writephy(tp, 0x1f, 0x0bca);
- rtl_w0w1_phy(tp, 0x17, 0x4000, 0x3000);
- rtl_writephy(tp, 0x1f, 0x0000);
+ phy_modify_paged(tp->phydev, 0x0bca, 0x17, BIT(12) | BIT(13), BIT(14));
rtl_writephy(tp, 0x1f, 0x0a43);
rtl_writephy(tp, 0x13, 0x803f);
@@ -3711,9 +3388,7 @@ static void rtl8168h_1_hw_phy_config(struct rtl8169_private *tp)
rtl_writephy(tp, 0x1f, 0x0000);
/* disable phy pfm mode */
- rtl_writephy(tp, 0x1f, 0x0a44);
- rtl_w0w1_phy(tp, 0x11, 0x0000, 0x0080);
- rtl_writephy(tp, 0x1f, 0x0000);
+ phy_modify_paged(tp->phydev, 0x0a44, 0x11, BIT(7), 0);
rtl8168g_disable_aldps(tp);
rtl8168g_config_eee_phy(tp);
@@ -3743,9 +3418,7 @@ static void rtl8168h_2_hw_phy_config(struct rtl8169_private *tp)
rtl_writephy(tp, 0x1f, 0x0000);
/* enable GPHY 10M */
- rtl_writephy(tp, 0x1f, 0x0a44);
- rtl_w0w1_phy(tp, 0x11, 0x0800, 0x0000);
- rtl_writephy(tp, 0x1f, 0x0000);
+ phy_modify_paged(tp->phydev, 0x0a44, 0x11, 0, BIT(11));
r8168_mac_ocp_write(tp, 0xdd02, 0x807d);
data = r8168_mac_ocp_read(tp, 0xdd02);
@@ -3781,9 +3454,7 @@ static void rtl8168h_2_hw_phy_config(struct rtl8169_private *tp)
rtl_writephy(tp, 0x1f, 0x0000);
/* disable phy pfm mode */
- rtl_writephy(tp, 0x1f, 0x0a44);
- rtl_w0w1_phy(tp, 0x11, 0x0000, 0x0080);
- rtl_writephy(tp, 0x1f, 0x0000);
+ phy_modify_paged(tp->phydev, 0x0a44, 0x11, BIT(7), 0);
rtl8168g_disable_aldps(tp);
rtl8168g_config_eee_phy(tp);
@@ -3793,16 +3464,12 @@ static void rtl8168h_2_hw_phy_config(struct rtl8169_private *tp)
static void rtl8168ep_1_hw_phy_config(struct rtl8169_private *tp)
{
/* Enable PHY auto speed down */
- rtl_writephy(tp, 0x1f, 0x0a44);
- rtl_w0w1_phy(tp, 0x11, 0x000c, 0x0000);
- rtl_writephy(tp, 0x1f, 0x0000);
+ phy_modify_paged(tp->phydev, 0x0a44, 0x11, 0, BIT(3) | BIT(2));
rtl8168g_phy_adjust_10m_aldps(tp);
/* Enable EEE auto-fallback function */
- rtl_writephy(tp, 0x1f, 0x0a4b);
- rtl_w0w1_phy(tp, 0x11, 0x0004, 0x0000);
- rtl_writephy(tp, 0x1f, 0x0000);
+ phy_modify_paged(tp->phydev, 0x0a4b, 0x11, 0, BIT(2));
/* Enable UC LPF tune function */
rtl_writephy(tp, 0x1f, 0x0a43);
@@ -3811,9 +3478,7 @@ static void rtl8168ep_1_hw_phy_config(struct rtl8169_private *tp)
rtl_writephy(tp, 0x1f, 0x0000);
/* set rg_sel_sdm_rate */
- rtl_writephy(tp, 0x1f, 0x0c42);
- rtl_w0w1_phy(tp, 0x11, 0x4000, 0x2000);
- rtl_writephy(tp, 0x1f, 0x0000);
+ phy_modify_paged(tp->phydev, 0x0c42, 0x11, BIT(13), BIT(14));
rtl8168g_disable_aldps(tp);
rtl8168g_config_eee_phy(tp);
@@ -3831,9 +3496,7 @@ static void rtl8168ep_2_hw_phy_config(struct rtl8169_private *tp)
rtl_writephy(tp, 0x1f, 0x0000);
/* Set rg_sel_sdm_rate */
- rtl_writephy(tp, 0x1f, 0x0c42);
- rtl_w0w1_phy(tp, 0x11, 0x4000, 0x2000);
- rtl_writephy(tp, 0x1f, 0x0000);
+ phy_modify_paged(tp->phydev, 0x0c42, 0x11, BIT(13), BIT(14));
/* Channel estimation parameters */
rtl_writephy(tp, 0x1f, 0x0a43);
@@ -3985,7 +3648,6 @@ static void rtl_hw_phy_config(struct net_device *dev)
{
static const rtl_generic_fct phy_configs[] = {
/* PCI devices. */
- [RTL_GIGA_MAC_VER_01] = NULL,
[RTL_GIGA_MAC_VER_02] = rtl8169s_hw_phy_config,
[RTL_GIGA_MAC_VER_03] = rtl8169s_hw_phy_config,
[RTL_GIGA_MAC_VER_04] = rtl8169sb_hw_phy_config,
@@ -4050,12 +3712,6 @@ static void rtl_schedule_task(struct rtl8169_private *tp, enum rtl_flag flag)
schedule_work(&tp->wk.work);
}
-static bool rtl_tbi_enabled(struct rtl8169_private *tp)
-{
- return (tp->mac_version == RTL_GIGA_MAC_VER_01) &&
- (RTL_R8(tp, PHYstatus) & TBI_Enable);
-}
-
static void rtl8169_init_phy(struct net_device *dev, struct rtl8169_private *tp)
{
rtl_hw_phy_config(dev);
@@ -4124,31 +3780,6 @@ static int rtl8169_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
return phy_mii_ioctl(tp->phydev, ifr, cmd);
}
-static void rtl_init_mdio_ops(struct rtl8169_private *tp)
-{
- struct mdio_ops *ops = &tp->mdio_ops;
-
- switch (tp->mac_version) {
- case RTL_GIGA_MAC_VER_27:
- ops->write = r8168dp_1_mdio_write;
- ops->read = r8168dp_1_mdio_read;
- break;
- case RTL_GIGA_MAC_VER_28:
- case RTL_GIGA_MAC_VER_31:
- ops->write = r8168dp_2_mdio_write;
- ops->read = r8168dp_2_mdio_read;
- break;
- case RTL_GIGA_MAC_VER_40 ... RTL_GIGA_MAC_VER_51:
- ops->write = r8168g_mdio_write;
- ops->read = r8168g_mdio_read;
- break;
- default:
- ops->write = r8169_mdio_write;
- ops->read = r8169_mdio_read;
- break;
- }
-}
-
static void rtl_wol_suspend_quirk(struct rtl8169_private *tp)
{
switch (tp->mac_version) {
@@ -4168,7 +3799,7 @@ static void rtl_wol_suspend_quirk(struct rtl8169_private *tp)
}
}
-static void r8168_pll_power_down(struct rtl8169_private *tp)
+static void rtl_pll_power_down(struct rtl8169_private *tp)
{
if (r8168_check_dash(tp))
return;
@@ -4203,10 +3834,12 @@ static void r8168_pll_power_down(struct rtl8169_private *tp)
rtl_eri_clear_bits(tp, 0x1a8, ERIAR_MASK_1111, 0xfc000000);
RTL_W8(tp, PMCH, RTL_R8(tp, PMCH) & ~0x80);
break;
+ default:
+ break;
}
}
-static void r8168_pll_power_up(struct rtl8169_private *tp)
+static void rtl_pll_power_up(struct rtl8169_private *tp)
{
switch (tp->mac_version) {
case RTL_GIGA_MAC_VER_25 ... RTL_GIGA_MAC_VER_33:
@@ -4230,6 +3863,8 @@ static void r8168_pll_power_up(struct rtl8169_private *tp)
RTL_W8(tp, PMCH, RTL_R8(tp, PMCH) | 0xc0);
rtl_eri_set_bits(tp, 0x1a8, ERIAR_MASK_1111, 0xfc000000);
break;
+ default:
+ break;
}
phy_resume(tp->phydev);
@@ -4237,32 +3872,10 @@ static void r8168_pll_power_up(struct rtl8169_private *tp)
msleep(20);
}
-static void rtl_pll_power_down(struct rtl8169_private *tp)
-{
- switch (tp->mac_version) {
- case RTL_GIGA_MAC_VER_01 ... RTL_GIGA_MAC_VER_06:
- case RTL_GIGA_MAC_VER_13 ... RTL_GIGA_MAC_VER_15:
- break;
- default:
- r8168_pll_power_down(tp);
- }
-}
-
-static void rtl_pll_power_up(struct rtl8169_private *tp)
-{
- switch (tp->mac_version) {
- case RTL_GIGA_MAC_VER_01 ... RTL_GIGA_MAC_VER_06:
- case RTL_GIGA_MAC_VER_13 ... RTL_GIGA_MAC_VER_15:
- break;
- default:
- r8168_pll_power_up(tp);
- }
-}
-
static void rtl_init_rxcfg(struct rtl8169_private *tp)
{
switch (tp->mac_version) {
- case RTL_GIGA_MAC_VER_01 ... RTL_GIGA_MAC_VER_06:
+ case RTL_GIGA_MAC_VER_02 ... RTL_GIGA_MAC_VER_06:
case RTL_GIGA_MAC_VER_10 ... RTL_GIGA_MAC_VER_17:
RTL_W32(tp, RxConfig, RX_FIFO_THRESH | RX_DMA_BURST);
break;
@@ -4285,24 +3898,6 @@ static void rtl8169_init_ring_indexes(struct rtl8169_private *tp)
tp->dirty_tx = tp->cur_tx = tp->cur_rx = 0;
}
-static void rtl_hw_jumbo_enable(struct rtl8169_private *tp)
-{
- if (tp->jumbo_ops.enable) {
- rtl_unlock_config_regs(tp);
- tp->jumbo_ops.enable(tp);
- rtl_lock_config_regs(tp);
- }
-}
-
-static void rtl_hw_jumbo_disable(struct rtl8169_private *tp)
-{
- if (tp->jumbo_ops.disable) {
- rtl_unlock_config_regs(tp);
- tp->jumbo_ops.disable(tp);
- rtl_lock_config_regs(tp);
- }
-}
-
static void r8168c_hw_jumbo_enable(struct rtl8169_private *tp)
{
RTL_W8(tp, Config3, RTL_R8(tp, Config3) | Jumbo_En0);
@@ -4369,55 +3964,56 @@ static void r8168b_1_hw_jumbo_disable(struct rtl8169_private *tp)
RTL_W8(tp, Config4, RTL_R8(tp, Config4) & ~(1 << 0));
}
-static void rtl_init_jumbo_ops(struct rtl8169_private *tp)
+static void rtl_hw_jumbo_enable(struct rtl8169_private *tp)
{
- struct jumbo_ops *ops = &tp->jumbo_ops;
-
+ rtl_unlock_config_regs(tp);
switch (tp->mac_version) {
case RTL_GIGA_MAC_VER_11:
- ops->disable = r8168b_0_hw_jumbo_disable;
- ops->enable = r8168b_0_hw_jumbo_enable;
+ r8168b_0_hw_jumbo_enable(tp);
break;
case RTL_GIGA_MAC_VER_12:
case RTL_GIGA_MAC_VER_17:
- ops->disable = r8168b_1_hw_jumbo_disable;
- ops->enable = r8168b_1_hw_jumbo_enable;
+ r8168b_1_hw_jumbo_enable(tp);
break;
- case RTL_GIGA_MAC_VER_18: /* Wild guess. Needs info from Realtek. */
- case RTL_GIGA_MAC_VER_19:
- case RTL_GIGA_MAC_VER_20:
- case RTL_GIGA_MAC_VER_21: /* Wild guess. Needs info from Realtek. */
- case RTL_GIGA_MAC_VER_22:
- case RTL_GIGA_MAC_VER_23:
- case RTL_GIGA_MAC_VER_24:
- case RTL_GIGA_MAC_VER_25:
- case RTL_GIGA_MAC_VER_26:
- ops->disable = r8168c_hw_jumbo_disable;
- ops->enable = r8168c_hw_jumbo_enable;
+ case RTL_GIGA_MAC_VER_18 ... RTL_GIGA_MAC_VER_26:
+ r8168c_hw_jumbo_enable(tp);
break;
- case RTL_GIGA_MAC_VER_27:
- case RTL_GIGA_MAC_VER_28:
- ops->disable = r8168dp_hw_jumbo_disable;
- ops->enable = r8168dp_hw_jumbo_enable;
+ case RTL_GIGA_MAC_VER_27 ... RTL_GIGA_MAC_VER_28:
+ r8168dp_hw_jumbo_enable(tp);
break;
- case RTL_GIGA_MAC_VER_31: /* Wild guess. Needs info from Realtek. */
- case RTL_GIGA_MAC_VER_32:
- case RTL_GIGA_MAC_VER_33:
- case RTL_GIGA_MAC_VER_34:
- ops->disable = r8168e_hw_jumbo_disable;
- ops->enable = r8168e_hw_jumbo_enable;
+ case RTL_GIGA_MAC_VER_31 ... RTL_GIGA_MAC_VER_34:
+ r8168e_hw_jumbo_enable(tp);
break;
+ default:
+ break;
+ }
+ rtl_lock_config_regs(tp);
+}
- /*
- * No action needed for jumbo frames with 8169.
- * No jumbo for 810x at all.
- */
- case RTL_GIGA_MAC_VER_40 ... RTL_GIGA_MAC_VER_51:
+static void rtl_hw_jumbo_disable(struct rtl8169_private *tp)
+{
+ rtl_unlock_config_regs(tp);
+ switch (tp->mac_version) {
+ case RTL_GIGA_MAC_VER_11:
+ r8168b_0_hw_jumbo_disable(tp);
+ break;
+ case RTL_GIGA_MAC_VER_12:
+ case RTL_GIGA_MAC_VER_17:
+ r8168b_1_hw_jumbo_disable(tp);
+ break;
+ case RTL_GIGA_MAC_VER_18 ... RTL_GIGA_MAC_VER_26:
+ r8168c_hw_jumbo_disable(tp);
+ break;
+ case RTL_GIGA_MAC_VER_27 ... RTL_GIGA_MAC_VER_28:
+ r8168dp_hw_jumbo_disable(tp);
+ break;
+ case RTL_GIGA_MAC_VER_31 ... RTL_GIGA_MAC_VER_34:
+ r8168e_hw_jumbo_disable(tp);
+ break;
default:
- ops->disable = NULL;
- ops->enable = NULL;
break;
}
+ rtl_lock_config_regs(tp);
}
DECLARE_RTL_COND(rtl_chipcmd_cond)
@@ -4435,35 +4031,28 @@ static void rtl_hw_reset(struct rtl8169_private *tp)
static void rtl_request_firmware(struct rtl8169_private *tp)
{
struct rtl_fw *rtl_fw;
- int rc = -ENOMEM;
/* firmware loaded already or no firmware available */
if (tp->rtl_fw || !tp->fw_name)
return;
rtl_fw = kzalloc(sizeof(*rtl_fw), GFP_KERNEL);
- if (!rtl_fw)
- goto err_warn;
-
- rc = request_firmware(&rtl_fw->fw, tp->fw_name, tp_to_dev(tp));
- if (rc < 0)
- goto err_free;
-
- rc = rtl_check_firmware(tp, rtl_fw);
- if (rc < 0)
- goto err_release_firmware;
-
- tp->rtl_fw = rtl_fw;
+ if (!rtl_fw) {
+ netif_warn(tp, ifup, tp->dev, "Unable to load firmware, out of memory\n");
+ return;
+ }
- return;
+ rtl_fw->phy_write = rtl_writephy;
+ rtl_fw->phy_read = rtl_readphy;
+ rtl_fw->mac_mcu_write = mac_mcu_write;
+ rtl_fw->mac_mcu_read = mac_mcu_read;
+ rtl_fw->fw_name = tp->fw_name;
+ rtl_fw->dev = tp_to_dev(tp);
-err_release_firmware:
- release_firmware(rtl_fw->fw);
-err_free:
- kfree(rtl_fw);
-err_warn:
- netif_warn(tp, ifup, tp->dev, "unable to load firmware patch %s (%d)\n",
- tp->fw_name, rc);
+ if (rtl_fw_request_firmware(rtl_fw))
+ kfree(rtl_fw);
+ else
+ tp->rtl_fw = rtl_fw;
}
static void rtl_rx_close(struct rtl8169_private *tp)
@@ -4513,8 +4102,7 @@ static void rtl_set_tx_config_registers(struct rtl8169_private *tp)
u32 val = TX_DMA_BURST << TxDMAShift |
InterFrameGap << TxInterFrameGapShift;
- if (tp->mac_version >= RTL_GIGA_MAC_VER_34 &&
- tp->mac_version != RTL_GIGA_MAC_VER_39)
+ if (rtl_is_8168evl_up(tp))
val |= TXCFG_AUTO_FIFO;
RTL_W32(tp, TxConfig, val);
@@ -4608,53 +4196,6 @@ static void rtl_set_rx_mode(struct net_device *dev)
RTL_W32(tp, RxConfig, tmp);
}
-static void rtl_hw_start(struct rtl8169_private *tp)
-{
- rtl_unlock_config_regs(tp);
-
- tp->hw_start(tp);
-
- rtl_set_rx_max_size(tp);
- rtl_set_rx_tx_desc_registers(tp);
- rtl_lock_config_regs(tp);
-
- /* disable interrupt coalescing */
- RTL_W16(tp, IntrMitigate, 0x0000);
- /* Initially a 10 us delay. Turned it into a PCI commit. - FR */
- RTL_R8(tp, IntrMask);
- RTL_W8(tp, ChipCmd, CmdTxEnb | CmdRxEnb);
- rtl_init_rxcfg(tp);
- rtl_set_tx_config_registers(tp);
-
- rtl_set_rx_mode(tp->dev);
- /* no early-rx interrupts */
- RTL_W16(tp, MultiIntr, RTL_R16(tp, MultiIntr) & 0xf000);
- rtl_irq_enable(tp);
-}
-
-static void rtl_hw_start_8169(struct rtl8169_private *tp)
-{
- if (tp->mac_version == RTL_GIGA_MAC_VER_05)
- pci_write_config_byte(tp->pci_dev, PCI_CACHE_LINE_SIZE, 0x08);
-
- RTL_W8(tp, EarlyTxThres, NoEarlyTx);
-
- tp->cp_cmd |= PCIMulRW;
-
- if (tp->mac_version == RTL_GIGA_MAC_VER_02 ||
- tp->mac_version == RTL_GIGA_MAC_VER_03) {
- netif_dbg(tp, drv, tp->dev,
- "Set MAC Reg C+CR Offset 0xe0. Bit 3 and Bit 14 MUST be 1\n");
- tp->cp_cmd |= (1 << 14);
- }
-
- RTL_W16(tp, CPlusCmd, tp->cp_cmd);
-
- rtl8169_set_magic_reg(tp, tp->mac_version);
-
- RTL_W32(tp, RxMissed, 0);
-}
-
DECLARE_RTL_COND(rtl_csiar_cond)
{
return RTL_R32(tp, CSIAR) & CSIAR_FLAG;
@@ -4746,7 +4287,8 @@ static void rtl_pcie_state_l2l3_disable(struct rtl8169_private *tp)
static void rtl_hw_aspm_clkreq_enable(struct rtl8169_private *tp, bool enable)
{
- if (enable) {
+ /* Don't enable ASPM in the chip if OS can't control ASPM */
+ if (enable && tp->aspm_manageable) {
RTL_W8(tp, Config5, RTL_R8(tp, Config5) | ASPM_en);
RTL_W8(tp, Config2, RTL_R8(tp, Config2) | ClkReqEn);
} else {
@@ -4779,9 +4321,6 @@ static void rtl_hw_start_8168bb(struct rtl8169_private *tp)
{
RTL_W8(tp, Config3, RTL_R8(tp, Config3) & ~Beacon_en);
- tp->cp_cmd &= CPCMD_QUIRK_MASK;
- RTL_W16(tp, CPlusCmd, tp->cp_cmd);
-
if (tp->dev->mtu <= ETH_DATA_LEN) {
rtl_tx_performance_tweak(tp, PCI_EXP_DEVCTL_READRQ_4096B |
PCI_EXP_DEVCTL_NOSNOOP_EN);
@@ -4792,8 +4331,6 @@ static void rtl_hw_start_8168bef(struct rtl8169_private *tp)
{
rtl_hw_start_8168bb(tp);
- RTL_W8(tp, MaxTxPacketSize, TxPacketMax);
-
RTL_W8(tp, Config4, RTL_R8(tp, Config4) & ~(1 << 0));
}
@@ -4807,9 +4344,6 @@ static void __rtl_hw_start_8168cp(struct rtl8169_private *tp)
rtl_tx_performance_tweak(tp, PCI_EXP_DEVCTL_READRQ_4096B);
rtl_disable_clock_request(tp);
-
- tp->cp_cmd &= CPCMD_QUIRK_MASK;
- RTL_W16(tp, CPlusCmd, tp->cp_cmd);
}
static void rtl_hw_start_8168cp_1(struct rtl8169_private *tp)
@@ -4837,9 +4371,6 @@ static void rtl_hw_start_8168cp_2(struct rtl8169_private *tp)
if (tp->dev->mtu <= ETH_DATA_LEN)
rtl_tx_performance_tweak(tp, PCI_EXP_DEVCTL_READRQ_4096B);
-
- tp->cp_cmd &= CPCMD_QUIRK_MASK;
- RTL_W16(tp, CPlusCmd, tp->cp_cmd);
}
static void rtl_hw_start_8168cp_3(struct rtl8169_private *tp)
@@ -4851,13 +4382,8 @@ static void rtl_hw_start_8168cp_3(struct rtl8169_private *tp)
/* Magic. */
RTL_W8(tp, DBG_REG, 0x20);
- RTL_W8(tp, MaxTxPacketSize, TxPacketMax);
-
if (tp->dev->mtu <= ETH_DATA_LEN)
rtl_tx_performance_tweak(tp, PCI_EXP_DEVCTL_READRQ_4096B);
-
- tp->cp_cmd &= CPCMD_QUIRK_MASK;
- RTL_W16(tp, CPlusCmd, tp->cp_cmd);
}
static void rtl_hw_start_8168c_1(struct rtl8169_private *tp)
@@ -4909,13 +4435,8 @@ static void rtl_hw_start_8168d(struct rtl8169_private *tp)
rtl_disable_clock_request(tp);
- RTL_W8(tp, MaxTxPacketSize, TxPacketMax);
-
if (tp->dev->mtu <= ETH_DATA_LEN)
rtl_tx_performance_tweak(tp, PCI_EXP_DEVCTL_READRQ_4096B);
-
- tp->cp_cmd &= CPCMD_QUIRK_MASK;
- RTL_W16(tp, CPlusCmd, tp->cp_cmd);
}
static void rtl_hw_start_8168dp(struct rtl8169_private *tp)
@@ -4925,8 +4446,6 @@ static void rtl_hw_start_8168dp(struct rtl8169_private *tp)
if (tp->dev->mtu <= ETH_DATA_LEN)
rtl_tx_performance_tweak(tp, PCI_EXP_DEVCTL_READRQ_4096B);
- RTL_W8(tp, MaxTxPacketSize, TxPacketMax);
-
rtl_disable_clock_request(tp);
}
@@ -4942,8 +4461,6 @@ static void rtl_hw_start_8168d_4(struct rtl8169_private *tp)
rtl_tx_performance_tweak(tp, PCI_EXP_DEVCTL_READRQ_4096B);
- RTL_W8(tp, MaxTxPacketSize, TxPacketMax);
-
rtl_ephy_init(tp, e_info_8168d_4);
rtl_enable_clock_request(tp);
@@ -4974,8 +4491,6 @@ static void rtl_hw_start_8168e_1(struct rtl8169_private *tp)
if (tp->dev->mtu <= ETH_DATA_LEN)
rtl_tx_performance_tweak(tp, PCI_EXP_DEVCTL_READRQ_4096B);
- RTL_W8(tp, MaxTxPacketSize, TxPacketMax);
-
rtl_disable_clock_request(tp);
/* Reset tx FIFO pointer */
@@ -5007,8 +4522,6 @@ static void rtl_hw_start_8168e_2(struct rtl8169_private *tp)
rtl_eri_set_bits(tp, 0x1b0, ERIAR_MASK_0001, BIT(4));
rtl_w0w1_eri(tp, 0x0d4, ERIAR_MASK_0011, 0x0c00, 0xff00);
- RTL_W8(tp, MaxTxPacketSize, EarlySize);
-
rtl_disable_clock_request(tp);
RTL_W8(tp, MCU, RTL_R8(tp, MCU) & ~NOW_IS_OOB);
@@ -5037,8 +4550,6 @@ static void rtl_hw_start_8168f(struct rtl8169_private *tp)
rtl_eri_write(tp, 0xcc, ERIAR_MASK_1111, 0x00000050);
rtl_eri_write(tp, 0xd0, ERIAR_MASK_1111, 0x00000060);
- RTL_W8(tp, MaxTxPacketSize, EarlySize);
-
rtl_disable_clock_request(tp);
RTL_W8(tp, MCU, RTL_R8(tp, MCU) & ~NOW_IS_OOB);
@@ -5095,7 +4606,6 @@ static void rtl_hw_start_8168g(struct rtl8169_private *tp)
rtl_eri_write(tp, 0x2f8, ERIAR_MASK_0011, 0x1d8f);
RTL_W32(tp, MISC, RTL_R32(tp, MISC) & ~RXDV_GATED_EN);
- RTL_W8(tp, MaxTxPacketSize, EarlySize);
rtl_eri_write(tp, 0xc0, ERIAR_MASK_0011, 0x0000);
rtl_eri_write(tp, 0xb8, ERIAR_MASK_0011, 0x0000);
@@ -5193,7 +4703,6 @@ static void rtl_hw_start_8168h_1(struct rtl8169_private *tp)
rtl_eri_write(tp, 0x5f0, ERIAR_MASK_0011, 0x4f87);
RTL_W32(tp, MISC, RTL_R32(tp, MISC) & ~RXDV_GATED_EN);
- RTL_W8(tp, MaxTxPacketSize, EarlySize);
rtl_eri_write(tp, 0xc0, ERIAR_MASK_0011, 0x0000);
rtl_eri_write(tp, 0xb8, ERIAR_MASK_0011, 0x0000);
@@ -5269,7 +4778,6 @@ static void rtl_hw_start_8168ep(struct rtl8169_private *tp)
rtl_eri_write(tp, 0x5f0, ERIAR_MASK_0011, 0x4f87);
RTL_W32(tp, MISC, RTL_R32(tp, MISC) & ~RXDV_GATED_EN);
- RTL_W8(tp, MaxTxPacketSize, EarlySize);
rtl_eri_write(tp, 0xc0, ERIAR_MASK_0011, 0x0000);
rtl_eri_write(tp, 0xb8, ERIAR_MASK_0011, 0x0000);
@@ -5536,33 +5044,70 @@ static void rtl_hw_config(struct rtl8169_private *tp)
static void rtl_hw_start_8168(struct rtl8169_private *tp)
{
- RTL_W8(tp, MaxTxPacketSize, TxPacketMax);
+ if (tp->mac_version == RTL_GIGA_MAC_VER_13 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_16)
+ pcie_capability_set_word(tp->pci_dev, PCI_EXP_DEVCTL,
+ PCI_EXP_DEVCTL_NOSNOOP_EN);
- /* Workaround for RxFIFO overflow. */
- if (tp->mac_version == RTL_GIGA_MAC_VER_11) {
- tp->irq_mask |= RxFIFOOver;
- tp->irq_mask &= ~RxOverflow;
- }
+ if (rtl_is_8168evl_up(tp))
+ RTL_W8(tp, MaxTxPacketSize, EarlySize);
+ else
+ RTL_W8(tp, MaxTxPacketSize, TxPacketMax);
rtl_hw_config(tp);
}
-static void rtl_hw_start_8101(struct rtl8169_private *tp)
+static void rtl_hw_start_8169(struct rtl8169_private *tp)
{
- if (tp->mac_version >= RTL_GIGA_MAC_VER_30)
- tp->irq_mask &= ~RxFIFOOver;
+ if (tp->mac_version == RTL_GIGA_MAC_VER_05)
+ pci_write_config_byte(tp->pci_dev, PCI_CACHE_LINE_SIZE, 0x08);
- if (tp->mac_version == RTL_GIGA_MAC_VER_13 ||
- tp->mac_version == RTL_GIGA_MAC_VER_16)
- pcie_capability_set_word(tp->pci_dev, PCI_EXP_DEVCTL,
- PCI_EXP_DEVCTL_NOSNOOP_EN);
+ RTL_W8(tp, EarlyTxThres, NoEarlyTx);
+
+ tp->cp_cmd |= PCIMulRW;
- RTL_W8(tp, MaxTxPacketSize, TxPacketMax);
+ if (tp->mac_version == RTL_GIGA_MAC_VER_02 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_03) {
+ netif_dbg(tp, drv, tp->dev,
+ "Set MAC Reg C+CR Offset 0xe0. Bit 3 and Bit 14 MUST be 1\n");
+ tp->cp_cmd |= (1 << 14);
+ }
- tp->cp_cmd &= CPCMD_QUIRK_MASK;
RTL_W16(tp, CPlusCmd, tp->cp_cmd);
- rtl_hw_config(tp);
+ rtl8169_set_magic_reg(tp, tp->mac_version);
+
+ RTL_W32(tp, RxMissed, 0);
+}
+
+static void rtl_hw_start(struct rtl8169_private *tp)
+{
+ rtl_unlock_config_regs(tp);
+
+ tp->cp_cmd &= CPCMD_MASK;
+ RTL_W16(tp, CPlusCmd, tp->cp_cmd);
+
+ if (tp->mac_version <= RTL_GIGA_MAC_VER_06)
+ rtl_hw_start_8169(tp);
+ else
+ rtl_hw_start_8168(tp);
+
+ rtl_set_rx_max_size(tp);
+ rtl_set_rx_tx_desc_registers(tp);
+ rtl_lock_config_regs(tp);
+
+ /* disable interrupt coalescing */
+ RTL_W16(tp, IntrMitigate, 0x0000);
+ /* Initially a 10 us delay. Turned it into a PCI commit. - FR */
+ RTL_R8(tp, IntrMask);
+ RTL_W8(tp, ChipCmd, CmdTxEnb | CmdRxEnb);
+ rtl_init_rxcfg(tp);
+ rtl_set_tx_config_registers(tp);
+
+ rtl_set_rx_mode(tp->dev);
+ /* no early-rx interrupts */
+ RTL_W16(tp, MultiIntr, RTL_R16(tp, MultiIntr) & 0xf000);
+ rtl_irq_enable(tp);
}
static int rtl8169_change_mtu(struct net_device *dev, int new_mtu)
@@ -5834,7 +5379,7 @@ static netdev_tx_t rtl8169_start_xmit(struct sk_buff *skb,
static void r8169_csum_workaround(struct rtl8169_private *tp,
struct sk_buff *skb)
{
- if (skb_shinfo(skb)->gso_size) {
+ if (skb_is_gso(skb)) {
netdev_features_t features = tp->dev->features;
struct sk_buff *segs, *nskb;
@@ -5857,11 +5402,8 @@ static void r8169_csum_workaround(struct rtl8169_private *tp,
rtl8169_start_xmit(skb, tp->dev);
} else {
- struct net_device_stats *stats;
-
drop:
- stats = &tp->dev->stats;
- stats->tx_dropped++;
+ tp->dev->stats.tx_dropped++;
dev_kfree_skb_any(skb);
}
}
@@ -5889,8 +5431,7 @@ static int msdn_giant_send_check(struct sk_buff *skb)
return ret;
}
-static bool rtl8169_tso_csum_v1(struct rtl8169_private *tp,
- struct sk_buff *skb, u32 *opts)
+static void rtl8169_tso_csum_v1(struct sk_buff *skb, u32 *opts)
{
u32 mss = skb_shinfo(skb)->gso_size;
@@ -5907,8 +5448,6 @@ static bool rtl8169_tso_csum_v1(struct rtl8169_private *tp,
else
WARN_ON_ONCE(1);
}
-
- return true;
}
static bool rtl8169_tso_csum_v2(struct rtl8169_private *tp,
@@ -5998,6 +5537,18 @@ static bool rtl_tx_slots_avail(struct rtl8169_private *tp,
return slots_avail > nr_frags;
}
+/* Versions RTL8102e and from RTL8168c onwards support csum_v2 */
+static bool rtl_chip_supports_csum_v2(struct rtl8169_private *tp)
+{
+ switch (tp->mac_version) {
+ case RTL_GIGA_MAC_VER_02 ... RTL_GIGA_MAC_VER_06:
+ case RTL_GIGA_MAC_VER_10 ... RTL_GIGA_MAC_VER_17:
+ return false;
+ default:
+ return true;
+ }
+}
+
static netdev_tx_t rtl8169_start_xmit(struct sk_buff *skb,
struct net_device *dev)
{
@@ -6017,12 +5568,16 @@ static netdev_tx_t rtl8169_start_xmit(struct sk_buff *skb,
if (unlikely(le32_to_cpu(txd->opts1) & DescOwn))
goto err_stop_0;
- opts[1] = cpu_to_le32(rtl8169_tx_vlan_tag(skb));
+ opts[1] = rtl8169_tx_vlan_tag(skb);
opts[0] = DescOwn;
- if (!tp->tso_csum(tp, skb, opts)) {
- r8169_csum_workaround(tp, skb);
- return NETDEV_TX_OK;
+ if (rtl_chip_supports_csum_v2(tp)) {
+ if (!rtl8169_tso_csum_v2(tp, skb, opts)) {
+ r8169_csum_workaround(tp, skb);
+ return NETDEV_TX_OK;
+ }
+ } else {
+ rtl8169_tso_csum_v1(skb, opts);
}
len = skb_headlen(skb);
@@ -6229,7 +5784,6 @@ static struct sk_buff *rtl8169_try_rx_copy(void *data,
skb = napi_alloc_skb(&tp->napi, pkt_size);
if (skb)
skb_copy_to_linear_data(skb, data, pkt_size);
- dma_sync_single_for_device(d, addr, pkt_size, DMA_FROM_DEVICE);
return skb;
}
@@ -6264,14 +5818,8 @@ static int rtl_rx(struct net_device *dev, struct rtl8169_private *tp, u32 budget
dev->stats.rx_length_errors++;
if (status & RxCRC)
dev->stats.rx_crc_errors++;
- /* RxFOVF is a reserved bit on later chip versions */
- if (tp->mac_version == RTL_GIGA_MAC_VER_01 &&
- status & RxFOVF) {
- rtl_schedule_task(tp, RTL_FLAG_TASK_RESET_PENDING);
- dev->stats.rx_fifo_errors++;
- } else if (status & (RxRUNT | RxCRC) &&
- !(status & RxRWT) &&
- dev->features & NETIF_F_RXALL) {
+ if (status & (RxRUNT | RxCRC) && !(status & RxRWT) &&
+ dev->features & NETIF_F_RXALL) {
goto process_pkt;
}
} else {
@@ -6451,7 +5999,10 @@ static int r8169_phy_connect(struct rtl8169_private *tp)
if (ret)
return ret;
- if (!tp->supports_gmii)
+ if (tp->supports_gmii)
+ phy_remove_link_mode(phydev,
+ ETHTOOL_LINK_MODE_1000baseT_Half_BIT);
+ else
phy_set_max_speed(phydev, SPEED_100);
phy_support_asym_pause(phydev);
@@ -6884,30 +6435,18 @@ static const struct net_device_ops rtl_netdev_ops = {
};
-static const struct rtl_cfg_info {
- void (*hw_start)(struct rtl8169_private *tp);
- u16 irq_mask;
- unsigned int has_gmii:1;
- const struct rtl_coalesce_info *coalesce_info;
-} rtl_cfg_infos [] = {
- [RTL_CFG_0] = {
- .hw_start = rtl_hw_start_8169,
- .irq_mask = SYSErr | LinkChg | RxOverflow | RxFIFOOver,
- .has_gmii = 1,
- .coalesce_info = rtl_coalesce_info_8169,
- },
- [RTL_CFG_1] = {
- .hw_start = rtl_hw_start_8168,
- .irq_mask = LinkChg | RxOverflow,
- .has_gmii = 1,
- .coalesce_info = rtl_coalesce_info_8168_8136,
- },
- [RTL_CFG_2] = {
- .hw_start = rtl_hw_start_8101,
- .irq_mask = LinkChg | RxOverflow | RxFIFOOver,
- .coalesce_info = rtl_coalesce_info_8168_8136,
- }
-};
+static void rtl_set_irq_mask(struct rtl8169_private *tp)
+{
+ tp->irq_mask = RTL_EVENT_NAPI | LinkChg;
+
+ if (tp->mac_version <= RTL_GIGA_MAC_VER_06)
+ tp->irq_mask |= SYSErr | RxOverflow | RxFIFOOver;
+ else if (tp->mac_version == RTL_GIGA_MAC_VER_11)
+ /* special workaround needed */
+ tp->irq_mask |= RxFIFOOver;
+ else
+ tp->irq_mask |= RxOverflow;
+}
static int rtl_alloc_irq(struct rtl8169_private *tp)
{
@@ -6928,13 +6467,10 @@ static int rtl_alloc_irq(struct rtl8169_private *tp)
static void rtl_read_mac_address(struct rtl8169_private *tp,
u8 mac_addr[ETH_ALEN])
{
- u32 value;
-
/* Get MAC address */
- switch (tp->mac_version) {
- case RTL_GIGA_MAC_VER_35 ... RTL_GIGA_MAC_VER_38:
- case RTL_GIGA_MAC_VER_40 ... RTL_GIGA_MAC_VER_51:
- value = rtl_eri_read(tp, 0xe0);
+ if (rtl_is_8168evl_up(tp) && tp->mac_version != RTL_GIGA_MAC_VER_34) {
+ u32 value = rtl_eri_read(tp, 0xe0);
+
mac_addr[0] = (value >> 0) & 0xff;
mac_addr[1] = (value >> 8) & 0xff;
mac_addr[2] = (value >> 16) & 0xff;
@@ -6943,9 +6479,6 @@ static void rtl_read_mac_address(struct rtl8169_private *tp,
value = rtl_eri_read(tp, 0xe4);
mac_addr[4] = (value >> 0) & 0xff;
mac_addr[5] = (value >> 8) & 0xff;
- break;
- default:
- break;
}
}
@@ -7046,42 +6579,23 @@ static void rtl_hw_init_8168g(struct rtl8169_private *tp)
data |= (1 << 15);
r8168_mac_ocp_write(tp, 0xe8de, data);
- if (!rtl_udelay_loop_wait_high(tp, &rtl_link_list_ready_cond, 100, 42))
- return;
-}
-
-static void rtl_hw_init_8168ep(struct rtl8169_private *tp)
-{
- rtl8168ep_stop_cmac(tp);
- rtl_hw_init_8168g(tp);
+ rtl_udelay_loop_wait_high(tp, &rtl_link_list_ready_cond, 100, 42);
}
static void rtl_hw_initialize(struct rtl8169_private *tp)
{
switch (tp->mac_version) {
+ case RTL_GIGA_MAC_VER_49 ... RTL_GIGA_MAC_VER_51:
+ rtl8168ep_stop_cmac(tp);
+ /* fall through */
case RTL_GIGA_MAC_VER_40 ... RTL_GIGA_MAC_VER_48:
rtl_hw_init_8168g(tp);
break;
- case RTL_GIGA_MAC_VER_49 ... RTL_GIGA_MAC_VER_51:
- rtl_hw_init_8168ep(tp);
- break;
default:
break;
}
}
-/* Versions RTL8102e and from RTL8168c onwards support csum_v2 */
-static bool rtl_chip_supports_csum_v2(struct rtl8169_private *tp)
-{
- switch (tp->mac_version) {
- case RTL_GIGA_MAC_VER_01 ... RTL_GIGA_MAC_VER_06:
- case RTL_GIGA_MAC_VER_10 ... RTL_GIGA_MAC_VER_17:
- return false;
- default:
- return true;
- }
-}
-
static int rtl_jumbo_max(struct rtl8169_private *tp)
{
/* Non-GBit versions don't support jumbo frames */
@@ -7090,7 +6604,7 @@ static int rtl_jumbo_max(struct rtl8169_private *tp)
switch (tp->mac_version) {
/* RTL8169 */
- case RTL_GIGA_MAC_VER_01 ... RTL_GIGA_MAC_VER_06:
+ case RTL_GIGA_MAC_VER_02 ... RTL_GIGA_MAC_VER_06:
return JUMBO_7K;
/* RTL8168b */
case RTL_GIGA_MAC_VER_11:
@@ -7136,14 +6650,36 @@ static int rtl_get_ether_clk(struct rtl8169_private *tp)
return rc;
}
+static void rtl_init_mac_address(struct rtl8169_private *tp)
+{
+ struct net_device *dev = tp->dev;
+ u8 *mac_addr = dev->dev_addr;
+ int rc, i;
+
+ rc = eth_platform_get_mac_address(tp_to_dev(tp), mac_addr);
+ if (!rc)
+ goto done;
+
+ rtl_read_mac_address(tp, mac_addr);
+ if (is_valid_ether_addr(mac_addr))
+ goto done;
+
+ for (i = 0; i < ETH_ALEN; i++)
+ mac_addr[i] = RTL_R8(tp, MAC0 + i);
+ if (is_valid_ether_addr(mac_addr))
+ goto done;
+
+ eth_hw_addr_random(dev);
+ dev_warn(tp_to_dev(tp), "can't read MAC address, setting random one\n");
+done:
+ rtl_rar_set(tp, mac_addr);
+}
+
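Aside (not part of the patch): the new rtl_init_mac_address() tries sources in priority order -- platform data, the ERI/EEPROM read, the MAC0 registers -- and only falls back to a random address when none of them yields a valid MAC. A hedged stand-alone model of that fallback chain, with stub probes standing in for the real helpers:

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <stdbool.h>

/* Stand-ins for the real sources; the first two "fail" to show the fallback. */
static int probe_platform(uint8_t mac[6]) { (void)mac; return -1; } /* no platform data */
static void probe_eri(uint8_t mac[6]) { memset(mac, 0, 6); }        /* reads all zeroes */
static void probe_mac0_regs(uint8_t mac[6])
{
	static const uint8_t regs[6] = { 0xde, 0xad, 0xbe, 0xef, 0x00, 0x01 };
	memcpy(mac, regs, 6);
}

static bool valid(const uint8_t mac[6])
{
	static const uint8_t zero[6];
	return memcmp(mac, zero, 6) && !(mac[0] & 1); /* non-zero, not multicast */
}

int main(void)
{
	uint8_t mac[6] = { 0 };

	if (probe_platform(mac) == 0 && valid(mac))
		goto done;
	probe_eri(mac);
	if (valid(mac))
		goto done;
	probe_mac0_regs(mac);
	if (valid(mac))
		goto done;
	/* last resort would be a random, locally administered address */
done:
	printf("%02x:%02x:%02x:%02x:%02x:%02x\n",
	       mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
	return 0;
}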
static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
{
- const struct rtl_cfg_info *cfg = rtl_cfg_infos + ent->driver_data;
- /* align to u16 for is_valid_ether_addr() */
- u8 mac_addr[ETH_ALEN] __aligned(2) = {};
struct rtl8169_private *tp;
struct net_device *dev;
- int chipset, region, i;
+ int chipset, region;
int jumbo_max, rc;
dev = devm_alloc_etherdev(&pdev->dev, sizeof (*tp));
@@ -7156,7 +6692,7 @@ static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
tp->dev = dev;
tp->pci_dev = pdev;
tp->msg_enable = netif_msg_init(debug.msg_enable, R8169_MSG_DEFAULT);
- tp->supports_gmii = cfg->has_gmii;
+ tp->supports_gmii = ent->driver_data == RTL_CFG_NO_GBIT ? 0 : 1;
/* Get the *optional* external "ether_clk" used on some boards */
rc = rtl_get_ether_clk(tp);
@@ -7166,7 +6702,9 @@ static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
/* Disable ASPM completely as that cause random device stop working
* problems as well as full system hangs for some PCIe devices users.
*/
- pci_disable_link_state(pdev, PCIE_LINK_STATE_L0S | PCIE_LINK_STATE_L1);
+ rc = pci_disable_link_state(pdev, PCIE_LINK_STATE_L0S |
+ PCIE_LINK_STATE_L1);
+ tp->aspm_manageable = !rc;
/* enable device (incl. PCI PM wakeup and hotplug setup) */
rc = pcim_enable_device(pdev);
@@ -7204,23 +6742,11 @@ static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
if (tp->mac_version == RTL_GIGA_MAC_NONE)
return -ENODEV;
- if (rtl_tbi_enabled(tp)) {
- dev_err(&pdev->dev, "TBI fiber mode not supported\n");
- return -ENODEV;
- }
-
tp->cp_cmd = RTL_R16(tp, CPlusCmd);
if (sizeof(dma_addr_t) > 4 && tp->mac_version >= RTL_GIGA_MAC_VER_18 &&
- !dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64))) {
+ !dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)))
dev->features |= NETIF_F_HIGHDMA;
- } else {
- rc = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
- if (rc < 0) {
- dev_err(&pdev->dev, "DMA configuration failed\n");
- return rc;
- }
- }
rtl_init_rxcfg(tp);
@@ -7232,9 +6758,6 @@ static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
pci_set_master(pdev);
- rtl_init_mdio_ops(tp);
- rtl_init_jumbo_ops(tp);
-
chipset = tp->mac_version;
rc = rtl_alloc_irq(tp);
@@ -7248,16 +6771,7 @@ static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
u64_stats_init(&tp->rx_stats.syncp);
u64_stats_init(&tp->tx_stats.syncp);
- /* get MAC address */
- rc = eth_platform_get_mac_address(&pdev->dev, mac_addr);
- if (rc)
- rtl_read_mac_address(tp, mac_addr);
-
- if (is_valid_ether_addr(mac_addr))
- rtl_rar_set(tp, mac_addr);
-
- for (i = 0; i < ETH_ALEN; i++)
- dev->dev_addr[i] = RTL_R8(tp, MAC0 + i);
+ rtl_init_mac_address(tp);
dev->ethtool_ops = &rtl8169_ethtool_ops;
@@ -7285,12 +6799,8 @@ static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
/* Disallow toggling */
dev->hw_features &= ~NETIF_F_HW_VLAN_CTAG_RX;
- if (rtl_chip_supports_csum_v2(tp)) {
- tp->tso_csum = rtl8169_tso_csum_v2;
+ if (rtl_chip_supports_csum_v2(tp))
dev->hw_features |= NETIF_F_IPV6_CSUM | NETIF_F_TSO6;
- } else {
- tp->tso_csum = rtl8169_tso_csum_v1;
- }
dev->hw_features |= NETIF_F_RXALL;
dev->hw_features |= NETIF_F_RXFCS;
@@ -7300,9 +6810,7 @@ static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
jumbo_max = rtl_jumbo_max(tp);
dev->max_mtu = jumbo_max;
- tp->hw_start = cfg->hw_start;
- tp->irq_mask = RTL_EVENT_NAPI | cfg->irq_mask;
- tp->coalesce_info = cfg->coalesce_info;
+ rtl_set_irq_mask(tp);
tp->fw_name = rtl_chip_infos[chipset].fw_name;
diff --git a/drivers/net/ethernet/rocker/rocker_main.c b/drivers/net/ethernet/rocker/rocker_main.c
index 3e5bc1fc3c46..079f459c73a5 100644
--- a/drivers/net/ethernet/rocker/rocker_main.c
+++ b/drivers/net/ethernet/rocker/rocker_main.c
@@ -2210,6 +2210,10 @@ static int rocker_router_fib_event(struct notifier_block *nb,
NL_SET_ERR_MSG_MOD(info->extack, "IPv6 gateway with IPv4 route is not supported");
return notifier_from_errno(-EINVAL);
}
+ if (fen_info->fi->nh) {
+ NL_SET_ERR_MSG_MOD(info->extack, "IPv4 route with nexthop objects is not supported");
+ return notifier_from_errno(-EINVAL);
+ }
}
memcpy(&fib_work->fen_info, ptr, sizeof(fib_work->fen_info));
diff --git a/drivers/net/ethernet/rocker/rocker_ofdpa.c b/drivers/net/ethernet/rocker/rocker_ofdpa.c
index bdfa6a19d620..7072b249c8bd 100644
--- a/drivers/net/ethernet/rocker/rocker_ofdpa.c
+++ b/drivers/net/ethernet/rocker/rocker_ofdpa.c
@@ -18,6 +18,7 @@
#include <net/neighbour.h>
#include <net/switchdev.h>
#include <net/ip_fib.h>
+#include <net/nexthop.h>
#include <net/arp.h>
#include "rocker.h"
@@ -2282,8 +2283,8 @@ static int ofdpa_port_fib_ipv4(struct ofdpa_port *ofdpa_port, __be32 dst,
/* XXX support ECMP */
- nh = fi->fib_nh;
- nh_on_port = (fi->fib_dev == ofdpa_port->dev);
+ nh = fib_info_nh(fi, 0);
+ nh_on_port = (nh->fib_nh_dev == ofdpa_port->dev);
has_gw = !!nh->fib_nh_gw4;
if (has_gw && nh_on_port) {
@@ -2733,11 +2734,13 @@ static int ofdpa_fib4_add(struct rocker *rocker,
{
struct ofdpa *ofdpa = rocker->wpriv;
struct ofdpa_port *ofdpa_port;
+ struct fib_nh *nh;
int err;
if (ofdpa->fib_aborted)
return 0;
- ofdpa_port = ofdpa_port_dev_lower_find(fen_info->fi->fib_dev, rocker);
+ nh = fib_info_nh(fen_info->fi, 0);
+ ofdpa_port = ofdpa_port_dev_lower_find(nh->fib_nh_dev, rocker);
if (!ofdpa_port)
return 0;
err = ofdpa_port_fib_ipv4(ofdpa_port, htonl(fen_info->dst),
@@ -2745,7 +2748,7 @@ static int ofdpa_fib4_add(struct rocker *rocker,
fen_info->tb_id, 0);
if (err)
return err;
- fen_info->fi->fib_nh->fib_nh_flags |= RTNH_F_OFFLOAD;
+ nh->fib_nh_flags |= RTNH_F_OFFLOAD;
return 0;
}
@@ -2754,13 +2757,15 @@ static int ofdpa_fib4_del(struct rocker *rocker,
{
struct ofdpa *ofdpa = rocker->wpriv;
struct ofdpa_port *ofdpa_port;
+ struct fib_nh *nh;
if (ofdpa->fib_aborted)
return 0;
- ofdpa_port = ofdpa_port_dev_lower_find(fen_info->fi->fib_dev, rocker);
+ nh = fib_info_nh(fen_info->fi, 0);
+ ofdpa_port = ofdpa_port_dev_lower_find(nh->fib_nh_dev, rocker);
if (!ofdpa_port)
return 0;
- fen_info->fi->fib_nh->fib_nh_flags &= ~RTNH_F_OFFLOAD;
+ nh->fib_nh_flags &= ~RTNH_F_OFFLOAD;
return ofdpa_port_fib_ipv4(ofdpa_port, htonl(fen_info->dst),
fen_info->dst_len, fen_info->fi,
fen_info->tb_id, OFDPA_OP_FLAG_REMOVE);
@@ -2780,14 +2785,16 @@ static void ofdpa_fib4_abort(struct rocker *rocker)
spin_lock_irqsave(&ofdpa->flow_tbl_lock, flags);
hash_for_each_safe(ofdpa->flow_tbl, bkt, tmp, flow_entry, entry) {
+ struct fib_nh *nh;
+
if (flow_entry->key.tbl_id !=
ROCKER_OF_DPA_TABLE_ID_UNICAST_ROUTING)
continue;
- ofdpa_port = ofdpa_port_dev_lower_find(flow_entry->fi->fib_dev,
- rocker);
+ nh = fib_info_nh(flow_entry->fi, 0);
+ ofdpa_port = ofdpa_port_dev_lower_find(nh->fib_nh_dev, rocker);
if (!ofdpa_port)
continue;
- flow_entry->fi->fib_nh->fib_nh_flags &= ~RTNH_F_OFFLOAD;
+ nh->fib_nh_flags &= ~RTNH_F_OFFLOAD;
ofdpa_flow_tbl_del(ofdpa_port, OFDPA_OP_FLAG_REMOVE,
flow_entry);
}
diff --git a/drivers/net/ethernet/sfc/efx.c b/drivers/net/ethernet/sfc/efx.c
index 53b726bfe945..ab58b837df47 100644
--- a/drivers/net/ethernet/sfc/efx.c
+++ b/drivers/net/ethernet/sfc/efx.c
@@ -3614,11 +3614,7 @@ static int efx_pci_probe(struct pci_dev *pci_dev,
netif_warn(efx, probe, efx->net_dev,
"failed to create MTDs (%d)\n", rc);
- rc = pci_enable_pcie_error_reporting(pci_dev);
- if (rc && rc != -EINVAL)
- netif_notice(efx, probe, efx->net_dev,
- "PCIE error reporting unavailable (%d).\n",
- rc);
+ (void)pci_enable_pcie_error_reporting(pci_dev);
if (efx->type->udp_tnl_push_ports)
efx->type->udp_tnl_push_ports(efx);
diff --git a/drivers/net/ethernet/sis/sis900.c b/drivers/net/ethernet/sis/sis900.c
index 9b036c857b1d..aba6eea72f15 100644
--- a/drivers/net/ethernet/sis/sis900.c
+++ b/drivers/net/ethernet/sis/sis900.c
@@ -360,7 +360,7 @@ static int sis635_get_mac_addr(struct pci_dev *pci_dev,
* SiS962 or SiS963 model, use EEPROM to store MAC address. And EEPROM
* is shared by
* LAN and 1394. When access EEPROM, send EEREQ signal to hardware first
- * and wait for EEGNT. If EEGNT is ON, EEPROM is permitted to be access
+ * and wait for EEGNT. If EEGNT is ON, EEPROM is permitted to be accessed
* by LAN, otherwise is not. After MAC address is read from EEPROM, send
* EEDONE signal to refuse EEPROM access by LAN.
* The EEPROM map of SiS962 or SiS963 is different to SiS900.
@@ -882,7 +882,7 @@ static void mdio_reset(struct sis900_private *sp)
* mdio_read - read MII PHY register
* @net_dev: the net device to read
* @phy_id: the phy address to read
- * @location: the phy regiester id to read
+ * @location: the phy register id to read
*
* Read MII registers through MDIO and MDC
* using MDIO management frame structure and protocol(defined by ISO/IEC).
@@ -926,7 +926,7 @@ static int mdio_read(struct net_device *net_dev, int phy_id, int location)
* mdio_write - write MII PHY register
* @net_dev: the net device to write
* @phy_id: the phy address to write
- * @location: the phy regiester id to write
+ * @location: the phy register id to write
* @value: the register value to write with
*
* Write MII registers with @value through MDIO and MDC
@@ -1057,7 +1057,7 @@ sis900_open(struct net_device *net_dev)
sis900_set_mode(sis_priv, HW_SPEED_10_MBPS, FDX_CAPABLE_HALF_SELECTED);
/* Enable all known interrupts by setting the interrupt mask. */
- sw32(imr, RxSOVR | RxORN | RxERR | RxOK | TxURN | TxERR | TxIDLE | TxDESC);
+ sw32(imr, RxSOVR | RxORN | RxERR | RxOK | TxURN | TxERR | TxDESC);
sw32(cr, RxENA | sr32(cr));
sw32(ier, IE);
@@ -1101,7 +1101,7 @@ sis900_init_rxfilter (struct net_device * net_dev)
sw32(rfdr, w);
if (netif_msg_hw(sis_priv)) {
- printk(KERN_DEBUG "%s: Receive Filter Addrss[%d]=%x\n",
+ printk(KERN_DEBUG "%s: Receive Filter Address[%d]=%x\n",
net_dev->name, i, sr32(rfdr));
}
}
@@ -1148,7 +1148,7 @@ sis900_init_tx_ring(struct net_device *net_dev)
* @net_dev: the net device to initialize for
*
* Initialize the Rx descriptor ring,
- * and pre-allocate recevie buffers (socket buffer)
+ * and pre-allocate receive buffers (socket buffer)
*/
static void
@@ -1578,7 +1578,7 @@ static void sis900_tx_timeout(struct net_device *net_dev)
sw32(txdp, sis_priv->tx_ring_dma);
/* Enable all known interrupts by setting the interrupt mask. */
- sw32(imr, RxSOVR | RxORN | RxERR | RxOK | TxURN | TxERR | TxIDLE | TxDESC);
+ sw32(imr, RxSOVR | RxORN | RxERR | RxOK | TxURN | TxERR | TxDESC);
}
/**
@@ -1674,8 +1674,8 @@ static irqreturn_t sis900_interrupt(int irq, void *dev_instance)
do {
status = sr32(isr);
- if ((status & (HIBERR|TxURN|TxERR|TxIDLE|TxDESC|RxORN|RxERR|RxOK)) == 0)
- /* nothing intresting happened */
+ if ((status & (HIBERR|TxURN|TxERR|TxDESC|RxORN|RxERR|RxOK)) == 0)
+ /* nothing interesting happened */
break;
handled = 1;
@@ -1684,7 +1684,7 @@ static irqreturn_t sis900_interrupt(int irq, void *dev_instance)
/* Rx interrupt */
sis900_rx(net_dev);
- if (status & (TxURN | TxERR | TxIDLE | TxDESC))
+ if (status & (TxURN | TxERR | TxDESC))
/* Tx interrupt */
sis900_finish_xmit(net_dev);
@@ -1897,7 +1897,7 @@ static void sis900_finish_xmit (struct net_device *net_dev)
if (tx_status & OWN) {
/* The packet is not transmitted yet (owned by hardware) !
* Note: this is an almost impossible condition
- * in case of TxDESC ('descriptor interrupt') */
+ * on TxDESC interrupt ('descriptor interrupt') */
break;
}
@@ -2473,7 +2473,7 @@ static int sis900_resume(struct pci_dev *pci_dev)
sis900_set_mode(sis_priv, HW_SPEED_10_MBPS, FDX_CAPABLE_HALF_SELECTED);
/* Enable all known interrupts by setting the interrupt mask. */
- sw32(imr, RxSOVR | RxORN | RxERR | RxOK | TxURN | TxERR | TxIDLE | TxDESC);
+ sw32(imr, RxSOVR | RxORN | RxERR | RxOK | TxURN | TxERR | TxDESC);
sw32(cr, RxENA | sr32(cr));
sw32(ier, IE);
diff --git a/drivers/net/ethernet/socionext/Kconfig b/drivers/net/ethernet/socionext/Kconfig
index 25f18be27423..95e99baf3f45 100644
--- a/drivers/net/ethernet/socionext/Kconfig
+++ b/drivers/net/ethernet/socionext/Kconfig
@@ -26,6 +26,7 @@ config SNI_NETSEC
tristate "Socionext NETSEC ethernet support"
depends on (ARCH_SYNQUACER || COMPILE_TEST) && OF
select PHYLIB
+ select PAGE_POOL
select MII
---help---
Enable to add support for the SocioNext NetSec Gigabit Ethernet
diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c
index cba5881b2746..1502fe8b0456 100644
--- a/drivers/net/ethernet/socionext/netsec.c
+++ b/drivers/net/ethernet/socionext/netsec.c
@@ -9,8 +9,12 @@
#include <linux/etherdevice.h>
#include <linux/interrupt.h>
#include <linux/io.h>
+#include <linux/netlink.h>
+#include <linux/bpf.h>
+#include <linux/bpf_trace.h>
#include <net/tcp.h>
+#include <net/page_pool.h>
#include <net/ip6_checksum.h>
#define NETSEC_REG_SOFT_RST 0x104
@@ -235,22 +239,41 @@
#define DESC_NUM 256
#define NETSEC_SKB_PAD (NET_SKB_PAD + NET_IP_ALIGN)
-#define NETSEC_RX_BUF_SZ 1536
+#define NETSEC_RXBUF_HEADROOM (max(XDP_PACKET_HEADROOM, NET_SKB_PAD) + \
+ NET_IP_ALIGN)
+#define NETSEC_RX_BUF_NON_DATA (NETSEC_RXBUF_HEADROOM + \
+ SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))
#define DESC_SZ sizeof(struct netsec_de)
#define NETSEC_F_NETSEC_VER_MAJOR_NUM(x) ((x) & 0xffff0000)
+#define NETSEC_XDP_PASS 0
+#define NETSEC_XDP_CONSUMED BIT(0)
+#define NETSEC_XDP_TX BIT(1)
+#define NETSEC_XDP_REDIR BIT(2)
+#define NETSEC_XDP_RX_OK (NETSEC_XDP_PASS | NETSEC_XDP_TX | NETSEC_XDP_REDIR)
+
enum ring_id {
NETSEC_RING_TX = 0,
NETSEC_RING_RX
};
+enum buf_type {
+ TYPE_NETSEC_SKB = 0,
+ TYPE_NETSEC_XDP_TX,
+ TYPE_NETSEC_XDP_NDO,
+};
+
struct netsec_desc {
- struct sk_buff *skb;
+ union {
+ struct sk_buff *skb;
+ struct xdp_frame *xdpf;
+ };
dma_addr_t dma_addr;
void *addr;
u16 len;
+ u8 buf_type;
};
struct netsec_desc_ring {
@@ -258,11 +281,17 @@ struct netsec_desc_ring {
struct netsec_desc *desc;
void *vaddr;
u16 head, tail;
+ u16 xdp_xmit; /* netsec_xdp_xmit packets */
+ bool is_xdp;
+ struct page_pool *page_pool;
+ struct xdp_rxq_info xdp_rxq;
+ spinlock_t lock; /* XDP tx queue locking */
};
struct netsec_priv {
struct netsec_desc_ring desc_ring[NETSEC_RING_MAX];
struct ethtool_coalesce et_coalesce;
+ struct bpf_prog *xdp_prog;
spinlock_t reglock; /* protect reg access */
struct napi_struct napi;
phy_interface_t phy_interface;
@@ -600,12 +629,14 @@ static void netsec_set_rx_de(struct netsec_priv *priv,
static bool netsec_clean_tx_dring(struct netsec_priv *priv)
{
struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_TX];
- unsigned int pkts, bytes;
struct netsec_de *entry;
int tail = dring->tail;
+ unsigned int bytes;
int cnt = 0;
- pkts = 0;
+ if (dring->is_xdp)
+ spin_lock(&dring->lock);
+
bytes = 0;
entry = dring->vaddr + DESC_SZ * tail;
@@ -618,13 +649,23 @@ static bool netsec_clean_tx_dring(struct netsec_priv *priv)
eop = (entry->attr >> NETSEC_TX_LAST) & 1;
dma_rmb();
- dma_unmap_single(priv->dev, desc->dma_addr, desc->len,
- DMA_TO_DEVICE);
- if (eop) {
- pkts++;
+ /* if buf_type is either TYPE_NETSEC_SKB or
+ * TYPE_NETSEC_XDP_NDO we mapped it
+ */
+ if (desc->buf_type != TYPE_NETSEC_XDP_TX)
+ dma_unmap_single(priv->dev, desc->dma_addr, desc->len,
+ DMA_TO_DEVICE);
+
+ if (!eop)
+ goto next;
+
+ if (desc->buf_type == TYPE_NETSEC_SKB) {
bytes += desc->skb->len;
dev_kfree_skb(desc->skb);
+ } else {
+ xdp_return_frame(desc->xdpf);
}
+next:
/* clean up so netsec_uninit_pkt_dring() won't free the skb
* again
*/
@@ -641,6 +682,8 @@ static bool netsec_clean_tx_dring(struct netsec_priv *priv)
entry = dring->vaddr + DESC_SZ * tail;
cnt++;
}
+ if (dring->is_xdp)
+ spin_unlock(&dring->lock);
if (!cnt)
return false;
@@ -673,33 +716,31 @@ static void netsec_process_tx(struct netsec_priv *priv)
}
static void *netsec_alloc_rx_data(struct netsec_priv *priv,
- dma_addr_t *dma_handle, u16 *desc_len,
- bool napi)
+ dma_addr_t *dma_handle, u16 *desc_len)
+
{
- size_t total_len = SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
- size_t payload_len = NETSEC_RX_BUF_SZ;
- dma_addr_t mapping;
- void *buf;
- total_len += SKB_DATA_ALIGN(payload_len + NETSEC_SKB_PAD);
+ struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_RX];
+ enum dma_data_direction dma_dir;
+ struct page *page;
- buf = napi ? napi_alloc_frag(total_len) : netdev_alloc_frag(total_len);
- if (!buf)
+ page = page_pool_dev_alloc_pages(dring->page_pool);
+ if (!page)
return NULL;
- mapping = dma_map_single(priv->dev, buf + NETSEC_SKB_PAD, payload_len,
- DMA_FROM_DEVICE);
- if (unlikely(dma_mapping_error(priv->dev, mapping)))
- goto err_out;
-
- *dma_handle = mapping;
- *desc_len = payload_len;
-
- return buf;
+ /* We allocate the same buffer length for XDP and non-XDP cases.
+ * page_pool API will map the whole page, skip what's needed for
+ * network payloads and/or XDP
+ */
+ *dma_handle = page_pool_get_dma_addr(page) + NETSEC_RXBUF_HEADROOM;
+ /* Make sure the incoming payload fits in the page for XDP and non-XDP
+ * cases and reserve enough space for headroom + skb_shared_info
+ */
+ *desc_len = PAGE_SIZE - NETSEC_RX_BUF_NON_DATA;
+ dma_dir = page_pool_get_dma_dir(dring->page_pool);
+ dma_sync_single_for_device(priv->dev, *dma_handle, *desc_len, dma_dir);
-err_out:
- skb_free_frag(buf);
- return NULL;
+ return page_address(page);
}
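Aside (not part of the patch): with the page_pool conversion, every netsec RX buffer is one full page split into a fixed headroom, the packet data area handed to the hardware, and a trailing struct skb_shared_info. A small arithmetic sketch of that budget; the constants below are illustrative placeholders, not the kernel's exact values:

#include <stdio.h>

/* Illustrative stand-ins for the kernel constants used above. */
#define PAGE_SIZE		4096
#define XDP_PACKET_HEADROOM	256
#define NET_SKB_PAD		64
#define NET_IP_ALIGN		2
#define SMP_CACHE_BYTES		64
#define ALIGN_UP(x, a)		(((x) + (a) - 1) & ~((a) - 1))
#define SKB_SHARED_INFO_SZ	320	/* placeholder for sizeof(struct skb_shared_info) */

#define RXBUF_HEADROOM	((XDP_PACKET_HEADROOM > NET_SKB_PAD ? \
			  XDP_PACKET_HEADROOM : NET_SKB_PAD) + NET_IP_ALIGN)
#define RX_BUF_NON_DATA	(RXBUF_HEADROOM + ALIGN_UP(SKB_SHARED_INFO_SZ, SMP_CACHE_BYTES))

int main(void)
{
	/* desc_len handed to the hardware: whatever is left of the page. */
	int desc_len = PAGE_SIZE - RX_BUF_NON_DATA;

	printf("headroom=%d tailroom=%d usable payload=%d\n",
	       RXBUF_HEADROOM, RX_BUF_NON_DATA - RXBUF_HEADROOM, desc_len);
	return 0;
}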
static void netsec_rx_fill(struct netsec_priv *priv, u16 from, u16 num)
@@ -716,22 +757,201 @@ static void netsec_rx_fill(struct netsec_priv *priv, u16 from, u16 num)
}
}
+static void netsec_xdp_ring_tx_db(struct netsec_priv *priv, u16 pkts)
+{
+ if (likely(pkts))
+ netsec_write(priv, NETSEC_REG_NRM_TX_PKTCNT, pkts);
+}
+
+static void netsec_finalize_xdp_rx(struct netsec_priv *priv, u32 xdp_res,
+ u16 pkts)
+{
+ if (xdp_res & NETSEC_XDP_REDIR)
+ xdp_do_flush_map();
+
+ if (xdp_res & NETSEC_XDP_TX)
+ netsec_xdp_ring_tx_db(priv, pkts);
+}
+
+static void netsec_set_tx_de(struct netsec_priv *priv,
+ struct netsec_desc_ring *dring,
+ const struct netsec_tx_pkt_ctrl *tx_ctrl,
+ const struct netsec_desc *desc, void *buf)
+{
+ int idx = dring->head;
+ struct netsec_de *de;
+ u32 attr;
+
+ de = dring->vaddr + (DESC_SZ * idx);
+
+ attr = (1 << NETSEC_TX_SHIFT_OWN_FIELD) |
+ (1 << NETSEC_TX_SHIFT_PT_FIELD) |
+ (NETSEC_RING_GMAC << NETSEC_TX_SHIFT_TDRID_FIELD) |
+ (1 << NETSEC_TX_SHIFT_FS_FIELD) |
+ (1 << NETSEC_TX_LAST) |
+ (tx_ctrl->cksum_offload_flag << NETSEC_TX_SHIFT_CO) |
+ (tx_ctrl->tcp_seg_offload_flag << NETSEC_TX_SHIFT_SO) |
+ (1 << NETSEC_TX_SHIFT_TRS_FIELD);
+ if (idx == DESC_NUM - 1)
+ attr |= (1 << NETSEC_TX_SHIFT_LD_FIELD);
+
+ de->data_buf_addr_up = upper_32_bits(desc->dma_addr);
+ de->data_buf_addr_lw = lower_32_bits(desc->dma_addr);
+ de->buf_len_info = (tx_ctrl->tcp_seg_len << 16) | desc->len;
+ de->attr = attr;
+ /* under spin_lock if using XDP */
+ if (!dring->is_xdp)
+ dma_wmb();
+
+ dring->desc[idx] = *desc;
+ if (desc->buf_type == TYPE_NETSEC_SKB)
+ dring->desc[idx].skb = buf;
+ else if (desc->buf_type == TYPE_NETSEC_XDP_TX ||
+ desc->buf_type == TYPE_NETSEC_XDP_NDO)
+ dring->desc[idx].xdpf = buf;
+
+ /* move head ahead */
+ dring->head = (dring->head + 1) % DESC_NUM;
+}
+
+/* The current driver only supports 1 Txq, this should run under spin_lock() */
+static u32 netsec_xdp_queue_one(struct netsec_priv *priv,
+ struct xdp_frame *xdpf, bool is_ndo)
+
+{
+ struct netsec_desc_ring *tx_ring = &priv->desc_ring[NETSEC_RING_TX];
+ struct page *page = virt_to_page(xdpf->data);
+ struct netsec_tx_pkt_ctrl tx_ctrl = {};
+ struct netsec_desc tx_desc;
+ dma_addr_t dma_handle;
+ u16 filled;
+
+ if (tx_ring->head >= tx_ring->tail)
+ filled = tx_ring->head - tx_ring->tail;
+ else
+ filled = tx_ring->head + DESC_NUM - tx_ring->tail;
+
+ if (DESC_NUM - filled <= 1)
+ return NETSEC_XDP_CONSUMED;
+
+ if (is_ndo) {
+ /* this is for ndo_xdp_xmit, the buffer needs mapping before
+ * sending
+ */
+ dma_handle = dma_map_single(priv->dev, xdpf->data, xdpf->len,
+ DMA_TO_DEVICE);
+ if (dma_mapping_error(priv->dev, dma_handle))
+ return NETSEC_XDP_CONSUMED;
+ tx_desc.buf_type = TYPE_NETSEC_XDP_NDO;
+ } else {
+ /* This is the device Rx buffer from page_pool. No need to remap
+ * just sync and send it
+ */
+ struct netsec_desc_ring *rx_ring =
+ &priv->desc_ring[NETSEC_RING_RX];
+ enum dma_data_direction dma_dir =
+ page_pool_get_dma_dir(rx_ring->page_pool);
+
+ dma_handle = page_pool_get_dma_addr(page) +
+ NETSEC_RXBUF_HEADROOM;
+ dma_sync_single_for_device(priv->dev, dma_handle, xdpf->len,
+ dma_dir);
+ tx_desc.buf_type = TYPE_NETSEC_XDP_TX;
+ }
+
+ tx_desc.dma_addr = dma_handle;
+ tx_desc.addr = xdpf->data;
+ tx_desc.len = xdpf->len;
+
+ netsec_set_tx_de(priv, tx_ring, &tx_ctrl, &tx_desc, xdpf);
+
+ return NETSEC_XDP_TX;
+}
+
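Aside (not part of the patch): netsec_xdp_queue_one() first works out how full the TX ring is, handling head/tail wraparound and always keeping one slot free before accepting a frame. A stand-alone sketch of that occupancy check with arbitrary demo indices:

#include <stdio.h>
#include <stdbool.h>

#define DESC_NUM 256

/* Slots in use for a ring with head (producer) and tail (consumer) indices. */
static unsigned int ring_filled(unsigned int head, unsigned int tail)
{
	return head >= tail ? head - tail : head + DESC_NUM - tail;
}

/* Mirror of the "DESC_NUM - filled <= 1" bail-out: leave one slot free. */
static bool ring_has_room(unsigned int head, unsigned int tail)
{
	return DESC_NUM - ring_filled(head, tail) > 1;
}

int main(void)
{
	printf("head=10  tail=200 -> filled=%u room=%d\n",
	       ring_filled(10, 200), ring_has_room(10, 200));
	printf("head=255 tail=0   -> filled=%u room=%d\n",
	       ring_filled(255, 0), ring_has_room(255, 0));
	return 0;
}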
+static u32 netsec_xdp_xmit_back(struct netsec_priv *priv, struct xdp_buff *xdp)
+{
+ struct netsec_desc_ring *tx_ring = &priv->desc_ring[NETSEC_RING_TX];
+ struct xdp_frame *xdpf = convert_to_xdp_frame(xdp);
+ u32 ret;
+
+ if (unlikely(!xdpf))
+ return NETSEC_XDP_CONSUMED;
+
+ spin_lock(&tx_ring->lock);
+ ret = netsec_xdp_queue_one(priv, xdpf, false);
+ spin_unlock(&tx_ring->lock);
+
+ return ret;
+}
+
+static u32 netsec_run_xdp(struct netsec_priv *priv, struct bpf_prog *prog,
+ struct xdp_buff *xdp)
+{
+ u32 ret = NETSEC_XDP_PASS;
+ int err;
+ u32 act;
+
+ act = bpf_prog_run_xdp(prog, xdp);
+
+ switch (act) {
+ case XDP_PASS:
+ ret = NETSEC_XDP_PASS;
+ break;
+ case XDP_TX:
+ ret = netsec_xdp_xmit_back(priv, xdp);
+ if (ret != NETSEC_XDP_TX)
+ xdp_return_buff(xdp);
+ break;
+ case XDP_REDIRECT:
+ err = xdp_do_redirect(priv->ndev, xdp, prog);
+ if (!err) {
+ ret = NETSEC_XDP_REDIR;
+ } else {
+ ret = NETSEC_XDP_CONSUMED;
+ xdp_return_buff(xdp);
+ }
+ break;
+ default:
+ bpf_warn_invalid_xdp_action(act);
+ /* fall through */
+ case XDP_ABORTED:
+ trace_xdp_exception(priv->ndev, prog, act);
+ /* fall through -- handle aborts by dropping packet */
+ case XDP_DROP:
+ ret = NETSEC_XDP_CONSUMED;
+ xdp_return_buff(xdp);
+ break;
+ }
+
+ return ret;
+}
+
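Aside (not part of the patch): netsec_run_xdp() folds each per-packet XDP verdict into driver-local flag bits, and netsec_finalize_xdp_rx() acts on the accumulated result once per NAPI poll -- one redirect flush, one TX doorbell -- instead of per packet. A minimal model of that accumulate-then-flush pattern; the flag values and the packet batch are invented for the demo:

#include <stdio.h>
#include <stdint.h>

#define DEMO_XDP_PASS		0
#define DEMO_XDP_CONSUMED	(1u << 0)
#define DEMO_XDP_TX		(1u << 1)
#define DEMO_XDP_REDIR		(1u << 2)

static void flush_redirects(void)   { puts("flush redirect maps once"); }
static void ring_tx_doorbell(int n) { printf("TX doorbell for %d pkts\n", n); }

int main(void)
{
	/* Pretend per-packet verdicts from one budget's worth of RX work. */
	const uint32_t verdicts[] = {
		DEMO_XDP_PASS, DEMO_XDP_TX, DEMO_XDP_REDIR,
		DEMO_XDP_TX, DEMO_XDP_CONSUMED,
	};
	uint32_t acc = 0;
	int xdp_xmit = 0;

	for (unsigned int i = 0; i < sizeof(verdicts) / sizeof(verdicts[0]); i++) {
		acc |= verdicts[i];
		if (verdicts[i] == DEMO_XDP_TX)
			xdp_xmit++;
	}

	/* One finalize step per poll, like netsec_finalize_xdp_rx(). */
	if (acc & DEMO_XDP_REDIR)
		flush_redirects();
	if (acc & DEMO_XDP_TX)
		ring_tx_doorbell(xdp_xmit);
	return 0;
}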
static int netsec_process_rx(struct netsec_priv *priv, int budget)
{
struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_RX];
struct net_device *ndev = priv->ndev;
struct netsec_rx_pkt_info rx_info;
- struct sk_buff *skb;
+ enum dma_data_direction dma_dir;
+ struct bpf_prog *xdp_prog;
+ struct sk_buff *skb = NULL;
+ u16 xdp_xmit = 0;
+ u32 xdp_act = 0;
int done = 0;
+ rcu_read_lock();
+ xdp_prog = READ_ONCE(priv->xdp_prog);
+ dma_dir = page_pool_get_dma_dir(dring->page_pool);
+
while (done < budget) {
u16 idx = dring->tail;
struct netsec_de *de = dring->vaddr + (DESC_SZ * idx);
struct netsec_desc *desc = &dring->desc[idx];
+ struct page *page = virt_to_page(desc->addr);
+ u32 xdp_result = XDP_PASS;
u16 pkt_len, desc_len;
dma_addr_t dma_handle;
+ struct xdp_buff xdp;
void *buf_addr;
- u32 truesize;
if (de->attr & (1U << NETSEC_RX_PKT_OWN_FIELD)) {
/* reading the register clears the irq */
@@ -766,53 +986,71 @@ static int netsec_process_rx(struct netsec_priv *priv, int budget)
/* allocate a fresh buffer and map it to the hardware.
* This will eventually replace the old buffer in the hardware
*/
- buf_addr = netsec_alloc_rx_data(priv, &dma_handle, &desc_len,
- true);
+ buf_addr = netsec_alloc_rx_data(priv, &dma_handle, &desc_len);
+
if (unlikely(!buf_addr))
break;
dma_sync_single_for_cpu(priv->dev, desc->dma_addr, pkt_len,
- DMA_FROM_DEVICE);
+ dma_dir);
prefetch(desc->addr);
- truesize = SKB_DATA_ALIGN(desc->len + NETSEC_SKB_PAD) +
- SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
- skb = build_skb(desc->addr, truesize);
+ xdp.data_hard_start = desc->addr;
+ xdp.data = desc->addr + NETSEC_RXBUF_HEADROOM;
+ xdp_set_data_meta_invalid(&xdp);
+ xdp.data_end = xdp.data + pkt_len;
+ xdp.rxq = &dring->xdp_rxq;
+
+ if (xdp_prog) {
+ xdp_result = netsec_run_xdp(priv, xdp_prog, &xdp);
+ if (xdp_result != NETSEC_XDP_PASS) {
+ xdp_act |= xdp_result;
+ if (xdp_result == NETSEC_XDP_TX)
+ xdp_xmit++;
+ goto next;
+ }
+ }
+ skb = build_skb(desc->addr, desc->len + NETSEC_RX_BUF_NON_DATA);
+
if (unlikely(!skb)) {
- /* free the newly allocated buffer, we are not going to
- * use it
+ /* If skb fails recycle_direct will either unmap and
+ * free the page or refill the cache depending on the
+ * cache state. Since we paid the allocation cost if
+ * building an skb fails try to put the page into cache
*/
- dma_unmap_single(priv->dev, dma_handle, desc_len,
- DMA_FROM_DEVICE);
- skb_free_frag(buf_addr);
+ page_pool_recycle_direct(dring->page_pool, page);
netif_err(priv, drv, priv->ndev,
"rx failed to build skb\n");
break;
}
- dma_unmap_single_attrs(priv->dev, desc->dma_addr, desc->len,
- DMA_FROM_DEVICE, DMA_ATTR_SKIP_CPU_SYNC);
-
- /* Update the descriptor with the new buffer we allocated */
- desc->len = desc_len;
- desc->dma_addr = dma_handle;
- desc->addr = buf_addr;
+ page_pool_release_page(dring->page_pool, page);
- skb_reserve(skb, NETSEC_SKB_PAD);
- skb_put(skb, pkt_len);
+ skb_reserve(skb, xdp.data - xdp.data_hard_start);
+ skb_put(skb, xdp.data_end - xdp.data);
skb->protocol = eth_type_trans(skb, priv->ndev);
if (priv->rx_cksum_offload_flag &&
rx_info.rx_cksum_result == NETSEC_RX_CKSUM_OK)
skb->ip_summed = CHECKSUM_UNNECESSARY;
- if (napi_gro_receive(&priv->napi, skb) != GRO_DROP) {
+next:
+ if ((skb && napi_gro_receive(&priv->napi, skb) != GRO_DROP) ||
+ xdp_result & NETSEC_XDP_RX_OK) {
ndev->stats.rx_packets++;
- ndev->stats.rx_bytes += pkt_len;
+ ndev->stats.rx_bytes += xdp.data_end - xdp.data;
}
+ /* Update the descriptor with fresh buffers */
+ desc->len = desc_len;
+ desc->dma_addr = dma_handle;
+ desc->addr = buf_addr;
+
netsec_rx_fill(priv, idx, 1);
dring->tail = (dring->tail + 1) % DESC_NUM;
}
+ netsec_finalize_xdp_rx(priv, xdp_act, xdp_xmit);
+
+ rcu_read_unlock();
return done;
}
@@ -820,19 +1058,12 @@ static int netsec_process_rx(struct netsec_priv *priv, int budget)
static int netsec_napi_poll(struct napi_struct *napi, int budget)
{
struct netsec_priv *priv;
- int rx, done, todo;
+ int done;
priv = container_of(napi, struct netsec_priv, napi);
netsec_process_tx(priv);
-
- todo = budget;
- do {
- rx = netsec_process_rx(priv, todo);
- todo -= rx;
- } while (rx);
-
- done = budget - todo;
+ done = netsec_process_rx(priv, budget);
if (done < budget && napi_complete_done(napi, done)) {
unsigned long flags;
@@ -846,41 +1077,6 @@ static int netsec_napi_poll(struct napi_struct *napi, int budget)
return done;
}
-static void netsec_set_tx_de(struct netsec_priv *priv,
- struct netsec_desc_ring *dring,
- const struct netsec_tx_pkt_ctrl *tx_ctrl,
- const struct netsec_desc *desc,
- struct sk_buff *skb)
-{
- int idx = dring->head;
- struct netsec_de *de;
- u32 attr;
-
- de = dring->vaddr + (DESC_SZ * idx);
-
- attr = (1 << NETSEC_TX_SHIFT_OWN_FIELD) |
- (1 << NETSEC_TX_SHIFT_PT_FIELD) |
- (NETSEC_RING_GMAC << NETSEC_TX_SHIFT_TDRID_FIELD) |
- (1 << NETSEC_TX_SHIFT_FS_FIELD) |
- (1 << NETSEC_TX_LAST) |
- (tx_ctrl->cksum_offload_flag << NETSEC_TX_SHIFT_CO) |
- (tx_ctrl->tcp_seg_offload_flag << NETSEC_TX_SHIFT_SO) |
- (1 << NETSEC_TX_SHIFT_TRS_FIELD);
- if (idx == DESC_NUM - 1)
- attr |= (1 << NETSEC_TX_SHIFT_LD_FIELD);
-
- de->data_buf_addr_up = upper_32_bits(desc->dma_addr);
- de->data_buf_addr_lw = lower_32_bits(desc->dma_addr);
- de->buf_len_info = (tx_ctrl->tcp_seg_len << 16) | desc->len;
- de->attr = attr;
- dma_wmb();
-
- dring->desc[idx] = *desc;
- dring->desc[idx].skb = skb;
-
- /* move head ahead */
- dring->head = (dring->head + 1) % DESC_NUM;
-}
static int netsec_desc_used(struct netsec_desc_ring *dring)
{
@@ -927,8 +1123,12 @@ static netdev_tx_t netsec_netdev_start_xmit(struct sk_buff *skb,
u16 tso_seg_len = 0;
int filled;
+ if (dring->is_xdp)
+ spin_lock_bh(&dring->lock);
filled = netsec_desc_used(dring);
if (netsec_check_stop_tx(priv, filled)) {
+ if (dring->is_xdp)
+ spin_unlock_bh(&dring->lock);
net_warn_ratelimited("%s %s Tx queue full\n",
dev_name(priv->dev), ndev->name);
return NETDEV_TX_BUSY;
@@ -961,6 +1161,8 @@ static netdev_tx_t netsec_netdev_start_xmit(struct sk_buff *skb,
tx_desc.dma_addr = dma_map_single(priv->dev, skb->data,
skb_headlen(skb), DMA_TO_DEVICE);
if (dma_mapping_error(priv->dev, tx_desc.dma_addr)) {
+ if (dring->is_xdp)
+ spin_unlock_bh(&dring->lock);
netif_err(priv, drv, priv->ndev,
"%s: DMA mapping failed\n", __func__);
ndev->stats.tx_dropped++;
@@ -969,11 +1171,14 @@ static netdev_tx_t netsec_netdev_start_xmit(struct sk_buff *skb,
}
tx_desc.addr = skb->data;
tx_desc.len = skb_headlen(skb);
+ tx_desc.buf_type = TYPE_NETSEC_SKB;
skb_tx_timestamp(skb);
netdev_sent_queue(priv->ndev, skb->len);
netsec_set_tx_de(priv, dring, &tx_ctrl, &tx_desc, skb);
+ if (dring->is_xdp)
+ spin_unlock_bh(&dring->lock);
netsec_write(priv, NETSEC_REG_NRM_TX_PKTCNT, 1); /* submit another tx */
return NETDEV_TX_OK;
@@ -987,19 +1192,27 @@ static void netsec_uninit_pkt_dring(struct netsec_priv *priv, int id)
if (!dring->vaddr || !dring->desc)
return;
-
for (idx = 0; idx < DESC_NUM; idx++) {
desc = &dring->desc[idx];
if (!desc->addr)
continue;
- dma_unmap_single(priv->dev, desc->dma_addr, desc->len,
- id == NETSEC_RING_RX ? DMA_FROM_DEVICE :
- DMA_TO_DEVICE);
- if (id == NETSEC_RING_RX)
- skb_free_frag(desc->addr);
- else if (id == NETSEC_RING_TX)
+ if (id == NETSEC_RING_RX) {
+ struct page *page = virt_to_page(desc->addr);
+
+ page_pool_put_page(dring->page_pool, page, false);
+ } else if (id == NETSEC_RING_TX) {
+ dma_unmap_single(priv->dev, desc->dma_addr, desc->len,
+ DMA_TO_DEVICE);
dev_kfree_skb(desc->skb);
+ }
+ }
+
+ /* Rx is currently using page_pool */
+ if (id == NETSEC_RING_RX) {
+ if (xdp_rxq_info_is_reg(&dring->xdp_rxq))
+ xdp_rxq_info_unreg(&dring->xdp_rxq);
+ page_pool_destroy(dring->page_pool);
}
memset(dring->desc, 0, sizeof(struct netsec_desc) * DESC_NUM);
@@ -1029,7 +1242,6 @@ static void netsec_free_dring(struct netsec_priv *priv, int id)
static int netsec_alloc_dring(struct netsec_priv *priv, enum ring_id id)
{
struct netsec_desc_ring *dring = &priv->desc_ring[id];
- int i;
dring->vaddr = dma_alloc_coherent(priv->dev, DESC_SZ * DESC_NUM,
&dring->desc_dma, GFP_KERNEL);
@@ -1040,19 +1252,6 @@ static int netsec_alloc_dring(struct netsec_priv *priv, enum ring_id id)
if (!dring->desc)
goto err;
- if (id == NETSEC_RING_TX) {
- for (i = 0; i < DESC_NUM; i++) {
- struct netsec_de *de;
-
- de = dring->vaddr + (DESC_SZ * i);
- /* de->attr is not going to be accessed by the NIC
- * until netsec_set_tx_de() is called.
- * No need for a dma_wmb() here
- */
- de->attr = 1U << NETSEC_TX_SHIFT_OWN_FIELD;
- }
- }
-
return 0;
err:
netsec_free_dring(priv, id);
@@ -1060,10 +1259,60 @@ err:
return -ENOMEM;
}
+static void netsec_setup_tx_dring(struct netsec_priv *priv)
+{
+ struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_TX];
+ struct bpf_prog *xdp_prog = READ_ONCE(priv->xdp_prog);
+ int i;
+
+ for (i = 0; i < DESC_NUM; i++) {
+ struct netsec_de *de;
+
+ de = dring->vaddr + (DESC_SZ * i);
+ /* de->attr is not going to be accessed by the NIC
+ * until netsec_set_tx_de() is called.
+ * No need for a dma_wmb() here
+ */
+ de->attr = 1U << NETSEC_TX_SHIFT_OWN_FIELD;
+ }
+
+ if (xdp_prog)
+ dring->is_xdp = true;
+ else
+ dring->is_xdp = false;
+
+}
+
static int netsec_setup_rx_dring(struct netsec_priv *priv)
{
struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_RX];
- int i;
+ struct bpf_prog *xdp_prog = READ_ONCE(priv->xdp_prog);
+ struct page_pool_params pp_params = { 0 };
+ int i, err;
+
+ pp_params.order = 0;
+ /* internal DMA mapping in page_pool */
+ pp_params.flags = PP_FLAG_DMA_MAP;
+ pp_params.pool_size = DESC_NUM;
+ pp_params.nid = cpu_to_node(0);
+ pp_params.dev = priv->dev;
+ pp_params.dma_dir = xdp_prog ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE;
+
+ dring->page_pool = page_pool_create(&pp_params);
+ if (IS_ERR(dring->page_pool)) {
+ err = PTR_ERR(dring->page_pool);
+ dring->page_pool = NULL;
+ goto err_out;
+ }
+
+ err = xdp_rxq_info_reg(&dring->xdp_rxq, priv->ndev, 0);
+ if (err)
+ goto err_out;
+
+ err = xdp_rxq_info_reg_mem_model(&dring->xdp_rxq, MEM_TYPE_PAGE_POOL,
+ dring->page_pool);
+ if (err)
+ goto err_out;
for (i = 0; i < DESC_NUM; i++) {
struct netsec_desc *desc = &dring->desc[i];
@@ -1071,10 +1320,10 @@ static int netsec_setup_rx_dring(struct netsec_priv *priv)
void *buf;
u16 len;
- buf = netsec_alloc_rx_data(priv, &dma_handle, &len,
- false);
+ buf = netsec_alloc_rx_data(priv, &dma_handle, &len);
+
if (!buf) {
- netsec_uninit_pkt_dring(priv, NETSEC_RING_RX);
+ err = -ENOMEM;
goto err_out;
}
desc->dma_addr = dma_handle;
@@ -1087,7 +1336,8 @@ static int netsec_setup_rx_dring(struct netsec_priv *priv)
return 0;
err_out:
- return -ENOMEM;
+ netsec_uninit_pkt_dring(priv, NETSEC_RING_RX);
+ return err;
}
static int netsec_netdev_load_ucode_region(struct netsec_priv *priv, u32 reg,
@@ -1361,6 +1611,7 @@ static int netsec_netdev_open(struct net_device *ndev)
pm_runtime_get_sync(priv->dev);
+ netsec_setup_tx_dring(priv);
ret = netsec_setup_rx_dring(priv);
if (ret) {
netif_err(priv, probe, priv->ndev,
@@ -1466,6 +1717,9 @@ static int netsec_netdev_init(struct net_device *ndev)
if (ret)
goto err2;
+ spin_lock_init(&priv->desc_ring[NETSEC_RING_TX].lock);
+ spin_lock_init(&priv->desc_ring[NETSEC_RING_RX].lock);
+
return 0;
err2:
netsec_free_dring(priv, NETSEC_RING_RX);
@@ -1498,6 +1752,81 @@ static int netsec_netdev_ioctl(struct net_device *ndev, struct ifreq *ifr,
return phy_mii_ioctl(ndev->phydev, ifr, cmd);
}
+static int netsec_xdp_xmit(struct net_device *ndev, int n,
+ struct xdp_frame **frames, u32 flags)
+{
+ struct netsec_priv *priv = netdev_priv(ndev);
+ struct netsec_desc_ring *tx_ring = &priv->desc_ring[NETSEC_RING_TX];
+ int drops = 0;
+ int i;
+
+ if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
+ return -EINVAL;
+
+ spin_lock(&tx_ring->lock);
+ for (i = 0; i < n; i++) {
+ struct xdp_frame *xdpf = frames[i];
+ int err;
+
+ err = netsec_xdp_queue_one(priv, xdpf, true);
+ if (err != NETSEC_XDP_TX) {
+ xdp_return_frame_rx_napi(xdpf);
+ drops++;
+ } else {
+ tx_ring->xdp_xmit++;
+ }
+ }
+ spin_unlock(&tx_ring->lock);
+
+ if (unlikely(flags & XDP_XMIT_FLUSH)) {
+ netsec_xdp_ring_tx_db(priv, tx_ring->xdp_xmit);
+ tx_ring->xdp_xmit = 0;
+ }
+
+ return n - drops;
+}
+
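Aside (not part of the patch): netsec_xdp_xmit() follows the usual .ndo_xdp_xmit contract -- queue as many of the n frames as fit, count the drops, ring the doorbell only when XDP_XMIT_FLUSH is set, and report n - drops to the caller. A stand-alone sketch of that contract, with a stub queueing function that rejects every third frame:

#include <stdio.h>
#include <stdbool.h>

#define DEMO_XMIT_FLUSH (1u << 0)

/* Stub for netsec_xdp_queue_one(): pretend every third frame doesn't fit. */
static bool queue_one(int i)
{
	return (i % 3) != 2;
}

static void tx_doorbell(int pkts)
{
	if (pkts)
		printf("doorbell: %d queued frames\n", pkts);
}

static int demo_xdp_xmit(int n, unsigned int flags)
{
	int queued = 0, drops = 0;

	for (int i = 0; i < n; i++) {
		if (queue_one(i))
			queued++;
		else
			drops++;	/* the rejected frame is returned/freed */
	}

	if (flags & DEMO_XMIT_FLUSH)
		tx_doorbell(queued);

	return n - drops;	/* number of frames actually accepted */
}

int main(void)
{
	printf("accepted %d of 8\n", demo_xdp_xmit(8, DEMO_XMIT_FLUSH));
	return 0;
}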
+static int netsec_xdp_setup(struct netsec_priv *priv, struct bpf_prog *prog,
+ struct netlink_ext_ack *extack)
+{
+ struct net_device *dev = priv->ndev;
+ struct bpf_prog *old_prog;
+
+ /* For now just support only the usual MTU sized frames */
+ if (prog && dev->mtu > 1500) {
+ NL_SET_ERR_MSG_MOD(extack, "Jumbo frames not supported on XDP");
+ return -EOPNOTSUPP;
+ }
+
+ if (netif_running(dev))
+ netsec_netdev_stop(dev);
+
+ /* Detach old prog, if any */
+ old_prog = xchg(&priv->xdp_prog, prog);
+ if (old_prog)
+ bpf_prog_put(old_prog);
+
+ if (netif_running(dev))
+ netsec_netdev_open(dev);
+
+ return 0;
+}
+
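Aside (not part of the patch): netsec_xdp_setup() publishes the new program with xchg() and only afterwards drops the reference on the displaced one (the datapath reads the pointer with READ_ONCE() under rcu_read_lock(), and the device is stopped around the swap). A hedged userspace model of that swap-then-release ordering using C11 atomics; the program type and the refcount drop are simulated:

#include <stdio.h>
#include <stdlib.h>
#include <stdatomic.h>

struct demo_prog { const char *name; };

static _Atomic(struct demo_prog *) attached;

static void demo_prog_put(struct demo_prog *p)
{
	/* Stands in for bpf_prog_put(): release the old program. */
	if (p)
		printf("releasing %s\n", p->name);
	free(p);
}

static void demo_setup(struct demo_prog *new_prog)
{
	/* Publish the new program first, then release the displaced one. */
	struct demo_prog *old = atomic_exchange(&attached, new_prog);

	demo_prog_put(old);
}

int main(void)
{
	struct demo_prog *p = malloc(sizeof(*p));

	p->name = "xdp_pass";
	demo_setup(p);		/* attach */
	demo_setup(NULL);	/* detach: the old program is released */
	return 0;
}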
+static int netsec_xdp(struct net_device *ndev, struct netdev_bpf *xdp)
+{
+ struct netsec_priv *priv = netdev_priv(ndev);
+
+ switch (xdp->command) {
+ case XDP_SETUP_PROG:
+ return netsec_xdp_setup(priv, xdp->prog, xdp->extack);
+ case XDP_QUERY_PROG:
+ xdp->prog_id = priv->xdp_prog ? priv->xdp_prog->aux->id : 0;
+ return 0;
+ default:
+ return -EINVAL;
+ }
+}
+
static const struct net_device_ops netsec_netdev_ops = {
.ndo_init = netsec_netdev_init,
.ndo_uninit = netsec_netdev_uninit,
@@ -1508,6 +1837,8 @@ static const struct net_device_ops netsec_netdev_ops = {
.ndo_set_mac_address = eth_mac_addr,
.ndo_validate_addr = eth_validate_addr,
.ndo_do_ioctl = netsec_netdev_ioctl,
+ .ndo_xdp_xmit = netsec_xdp_xmit,
+ .ndo_bpf = netsec_xdp,
};
static int netsec_of_probe(struct platform_device *pdev,
diff --git a/drivers/net/ethernet/stmicro/stmmac/Kconfig b/drivers/net/ethernet/stmicro/stmmac/Kconfig
index 06545d7399fc..2325b40dff6e 100644
--- a/drivers/net/ethernet/stmicro/stmmac/Kconfig
+++ b/drivers/net/ethernet/stmicro/stmmac/Kconfig
@@ -1,9 +1,10 @@
# SPDX-License-Identifier: GPL-2.0-only
config STMMAC_ETH
- tristate "STMicroelectronics 10/100/1000/EQOS Ethernet driver"
+ tristate "STMicroelectronics Multi-Gigabit Ethernet driver"
depends on HAS_IOMEM && HAS_DMA
select MII
- select PHYLIB
+ select PAGE_POOL
+ select PHYLINK
select CRC32
imply PTP_1588_CLOCK
select RESET_CONTROLLER
@@ -13,6 +14,16 @@ config STMMAC_ETH
if STMMAC_ETH
+config STMMAC_SELFTESTS
+ bool "Support for STMMAC Selftests"
+ depends on INET
+ depends on STMMAC_ETH
+ default n
+ ---help---
+ This adds support for STMMAC Selftests using ethtool. Enable this
+ feature if you are facing problems with your HW and submit the test
+ results to the netdev Mailing List.
+
config STMMAC_PLATFORM
tristate "STMMAC Platform bus support"
depends on STMMAC_ETH
@@ -31,7 +42,6 @@ if STMMAC_PLATFORM
config DWMAC_DWC_QOS_ETH
tristate "Support for snps,dwc-qos-ethernet.txt DT binding."
- select PHYLIB
select CRC32
select MII
depends on OF && HAS_DMA
diff --git a/drivers/net/ethernet/stmicro/stmmac/Makefile b/drivers/net/ethernet/stmicro/stmmac/Makefile
index c529c21e9bdd..c59926d96bcc 100644
--- a/drivers/net/ethernet/stmicro/stmmac/Makefile
+++ b/drivers/net/ethernet/stmicro/stmmac/Makefile
@@ -8,6 +8,8 @@ stmmac-objs:= stmmac_main.o stmmac_ethtool.o stmmac_mdio.o ring_mode.o \
stmmac_tc.o dwxgmac2_core.o dwxgmac2_dma.o dwxgmac2_descs.o \
$(stmmac-y)
+stmmac-$(CONFIG_STMMAC_SELFTESTS) += stmmac_selftests.o
+
# Ordering matters. Generic driver must be last.
obj-$(CONFIG_STMMAC_PLATFORM) += stmmac-platform.o
obj-$(CONFIG_DWMAC_ANARION) += dwmac-anarion.o
diff --git a/drivers/net/ethernet/stmicro/stmmac/common.h b/drivers/net/ethernet/stmicro/stmmac/common.h
index ceb0d23f5041..ed872eed1cab 100644
--- a/drivers/net/ethernet/stmicro/stmmac/common.h
+++ b/drivers/net/ethernet/stmicro/stmmac/common.h
@@ -246,12 +246,13 @@ struct stmmac_safety_stats {
/* Max/Min RI Watchdog Timer count value */
#define MAX_DMA_RIWT 0xff
-#define MIN_DMA_RIWT 0x20
+#define MIN_DMA_RIWT 0x10
/* Tx coalesce parameters */
#define STMMAC_COAL_TX_TIMER 1000
#define STMMAC_MAX_COAL_TX_TICK 100000
#define STMMAC_TX_MAX_FRAMES 256
-#define STMMAC_TX_FRAMES 25
+#define STMMAC_TX_FRAMES 1
+#define STMMAC_RX_FRAMES 25
/* Packets types */
enum packets_types {
@@ -325,6 +326,7 @@ struct dma_features {
/* 802.3az - Energy-Efficient Ethernet (EEE) */
unsigned int eee;
unsigned int av;
+ unsigned int hash_tb_sz;
unsigned int tsoen;
/* TX and RX csum */
unsigned int tx_coe;
@@ -351,6 +353,7 @@ struct dma_features {
unsigned int frpsel;
unsigned int frpbs;
unsigned int frpes;
+ unsigned int addr64;
};
/* GMAC TX FIFO is 8K, Rx FIFO is 16K */
@@ -392,8 +395,12 @@ struct mac_link {
u32 speed100;
u32 speed1000;
u32 speed2500;
- u32 speed10000;
u32 duplex;
+ struct {
+ u32 speed2500;
+ u32 speed5000;
+ u32 speed10000;
+ } xgmii;
};
struct mii_regs {
@@ -414,12 +421,13 @@ struct mac_device_info {
const struct stmmac_mode_ops *mode;
const struct stmmac_hwtimestamp *ptp;
const struct stmmac_tc_ops *tc;
+ const struct stmmac_mmc_ops *mmc;
struct mii_regs mii; /* MII register Addresses */
struct mac_link link;
void __iomem *pcsr; /* vpointer to device CSRs */
- int multicast_filter_bins;
- int unicast_filter_entries;
- int mcast_bits_log2;
+ unsigned int multicast_filter_bins;
+ unsigned int unicast_filter_entries;
+ unsigned int mcast_bits_log2;
unsigned int rx_csum;
unsigned int pcs;
unsigned int pmt;
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-mediatek.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-mediatek.c
index 126b66bb73a6..79f2ee37afed 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-mediatek.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-mediatek.c
@@ -9,6 +9,7 @@
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/of_net.h>
+#include <linux/pm_runtime.h>
#include <linux/regmap.h>
#include <linux/stmmac.h>
@@ -298,6 +299,9 @@ static int mediatek_dwmac_init(struct platform_device *pdev, void *priv)
return ret;
}
+ pm_runtime_enable(&pdev->dev);
+ pm_runtime_get_sync(&pdev->dev);
+
return 0;
}
@@ -307,6 +311,9 @@ static void mediatek_dwmac_exit(struct platform_device *pdev, void *priv)
const struct mediatek_dwmac_variant *variant = plat->variant;
clk_bulk_disable_unprepare(variant->num_clks, plat->clks);
+
+ pm_runtime_put_sync(&pdev->dev);
+ pm_runtime_disable(&pdev->dev);
}
static int mediatek_dwmac_probe(struct platform_device *pdev)
@@ -349,6 +356,7 @@ static int mediatek_dwmac_probe(struct platform_device *pdev)
plat_dat->has_gmac4 = 1;
plat_dat->has_gmac = 0;
plat_dat->pmt = 0;
+ plat_dat->riwt_off = 1;
plat_dat->maxmtu = ETH_DATA_LEN;
plat_dat->bsp_priv = priv_plat;
plat_dat->init = mediatek_dwmac_init;
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
index 8bdbddeec117..c141fe783e87 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
@@ -27,9 +27,12 @@
#define SYSMGR_EMACGRP_CTRL_PHYSEL_WIDTH 2
#define SYSMGR_EMACGRP_CTRL_PHYSEL_MASK 0x00000003
#define SYSMGR_EMACGRP_CTRL_PTP_REF_CLK_MASK 0x00000010
+#define SYSMGR_GEN10_EMACGRP_CTRL_PTP_REF_CLK_MASK 0x00000100
#define SYSMGR_FPGAGRP_MODULE_REG 0x00000028
#define SYSMGR_FPGAGRP_MODULE_EMAC 0x00000004
+#define SYSMGR_FPGAINTF_EMAC_REG 0x00000070
+#define SYSMGR_FPGAINTF_EMAC_BIT 0x1
#define EMAC_SPLITTER_CTRL_REG 0x0
#define EMAC_SPLITTER_CTRL_SPEED_MASK 0x3
@@ -37,6 +40,11 @@
#define EMAC_SPLITTER_CTRL_SPEED_100 0x3
#define EMAC_SPLITTER_CTRL_SPEED_1000 0x0
+struct socfpga_dwmac;
+struct socfpga_dwmac_ops {
+ int (*set_phy_mode)(struct socfpga_dwmac *dwmac_priv);
+};
+
struct socfpga_dwmac {
int interface;
u32 reg_offset;
@@ -48,6 +56,7 @@ struct socfpga_dwmac {
void __iomem *splitter_base;
bool f2h_ptp_ref_clk;
struct tse_pcs pcs;
+ const struct socfpga_dwmac_ops *ops;
};
static void socfpga_dwmac_fix_mac_speed(void *priv, unsigned int speed)
@@ -222,25 +231,36 @@ err_node_put:
return ret;
}
-static int socfpga_dwmac_set_phy_mode(struct socfpga_dwmac *dwmac)
+static int socfpga_set_phy_mode_common(int phymode, u32 *val)
{
- struct regmap *sys_mgr_base_addr = dwmac->sys_mgr_base_addr;
- int phymode = dwmac->interface;
- u32 reg_offset = dwmac->reg_offset;
- u32 reg_shift = dwmac->reg_shift;
- u32 ctrl, val, module;
-
switch (phymode) {
case PHY_INTERFACE_MODE_RGMII:
case PHY_INTERFACE_MODE_RGMII_ID:
- val = SYSMGR_EMACGRP_CTRL_PHYSEL_ENUM_RGMII;
+ *val = SYSMGR_EMACGRP_CTRL_PHYSEL_ENUM_RGMII;
break;
case PHY_INTERFACE_MODE_MII:
case PHY_INTERFACE_MODE_GMII:
case PHY_INTERFACE_MODE_SGMII:
- val = SYSMGR_EMACGRP_CTRL_PHYSEL_ENUM_GMII_MII;
+ *val = SYSMGR_EMACGRP_CTRL_PHYSEL_ENUM_GMII_MII;
+ break;
+ case PHY_INTERFACE_MODE_RMII:
+ *val = SYSMGR_EMACGRP_CTRL_PHYSEL_ENUM_RMII;
break;
default:
+ return -EINVAL;
+ }
+ return 0;
+}
+
+static int socfpga_gen5_set_phy_mode(struct socfpga_dwmac *dwmac)
+{
+ struct regmap *sys_mgr_base_addr = dwmac->sys_mgr_base_addr;
+ int phymode = dwmac->interface;
+ u32 reg_offset = dwmac->reg_offset;
+ u32 reg_shift = dwmac->reg_shift;
+ u32 ctrl, val, module;
+
+ if (socfpga_set_phy_mode_common(phymode, &val)) {
dev_err(dwmac->dev, "bad phy mode %d\n", phymode);
return -EINVAL;
}
@@ -291,6 +311,62 @@ static int socfpga_dwmac_set_phy_mode(struct socfpga_dwmac *dwmac)
return 0;
}
+static int socfpga_gen10_set_phy_mode(struct socfpga_dwmac *dwmac)
+{
+ struct regmap *sys_mgr_base_addr = dwmac->sys_mgr_base_addr;
+ int phymode = dwmac->interface;
+ u32 reg_offset = dwmac->reg_offset;
+ u32 reg_shift = dwmac->reg_shift;
+ u32 ctrl, val, module;
+
+ if (socfpga_set_phy_mode_common(phymode, &val))
+ return -EINVAL;
+
+ /* Overwrite val to GMII if splitter core is enabled. The phymode here
+ * is the actual phy mode on phy hardware, but phy interface from
+ * EMAC core is GMII.
+ */
+ if (dwmac->splitter_base)
+ val = SYSMGR_EMACGRP_CTRL_PHYSEL_ENUM_GMII_MII;
+
+ /* Assert reset to the enet controller before changing the phy mode */
+ reset_control_assert(dwmac->stmmac_ocp_rst);
+ reset_control_assert(dwmac->stmmac_rst);
+
+ regmap_read(sys_mgr_base_addr, reg_offset, &ctrl);
+ ctrl &= ~(SYSMGR_EMACGRP_CTRL_PHYSEL_MASK);
+ ctrl |= val;
+
+ if (dwmac->f2h_ptp_ref_clk ||
+ phymode == PHY_INTERFACE_MODE_MII ||
+ phymode == PHY_INTERFACE_MODE_GMII ||
+ phymode == PHY_INTERFACE_MODE_SGMII) {
+ ctrl |= SYSMGR_GEN10_EMACGRP_CTRL_PTP_REF_CLK_MASK;
+ regmap_read(sys_mgr_base_addr, SYSMGR_FPGAINTF_EMAC_REG,
+ &module);
+ module |= (SYSMGR_FPGAINTF_EMAC_BIT << reg_shift);
+ regmap_write(sys_mgr_base_addr, SYSMGR_FPGAINTF_EMAC_REG,
+ module);
+ } else {
+ ctrl &= ~SYSMGR_GEN10_EMACGRP_CTRL_PTP_REF_CLK_MASK;
+ }
+
+ regmap_write(sys_mgr_base_addr, reg_offset, ctrl);
+
+ /* Deassert reset for the phy configuration to be sampled by
+ * the enet controller, and operation to start in requested mode
+ */
+ reset_control_deassert(dwmac->stmmac_ocp_rst);
+ reset_control_deassert(dwmac->stmmac_rst);
+ if (phymode == PHY_INTERFACE_MODE_SGMII) {
+ if (tse_pcs_init(dwmac->pcs.tse_pcs_base, &dwmac->pcs) != 0) {
+ dev_err(dwmac->dev, "Unable to initialize TSE PCS");
+ return -EINVAL;
+ }
+ }
+ return 0;
+}
+
static int socfpga_dwmac_probe(struct platform_device *pdev)
{
struct plat_stmmacenet_data *plat_dat;
@@ -300,6 +376,13 @@ static int socfpga_dwmac_probe(struct platform_device *pdev)
struct socfpga_dwmac *dwmac;
struct net_device *ndev;
struct stmmac_priv *stpriv;
+ const struct socfpga_dwmac_ops *ops;
+
+ ops = device_get_match_data(&pdev->dev);
+ if (!ops) {
+ dev_err(&pdev->dev, "no of match data provided\n");
+ return -EINVAL;
+ }
ret = stmmac_get_platform_resources(pdev, &stmmac_res);
if (ret)
@@ -330,6 +413,7 @@ static int socfpga_dwmac_probe(struct platform_device *pdev)
goto err_remove_config_dt;
}
+ dwmac->ops = ops;
plat_dat->bsp_priv = dwmac;
plat_dat->fix_mac_speed = socfpga_dwmac_fix_mac_speed;
@@ -346,7 +430,7 @@ static int socfpga_dwmac_probe(struct platform_device *pdev)
*/
dwmac->stmmac_rst = stpriv->plat->stmmac_rst;
- ret = socfpga_dwmac_set_phy_mode(dwmac);
+ ret = ops->set_phy_mode(dwmac);
if (ret)
goto err_dvr_remove;
@@ -365,8 +449,9 @@ static int socfpga_dwmac_resume(struct device *dev)
{
struct net_device *ndev = dev_get_drvdata(dev);
struct stmmac_priv *priv = netdev_priv(ndev);
+ struct socfpga_dwmac *dwmac_priv = get_stmmac_bsp_priv(dev);
- socfpga_dwmac_set_phy_mode(priv->plat->bsp_priv);
+ dwmac_priv->ops->set_phy_mode(priv->plat->bsp_priv);
/* Before the enet controller is suspended, the phy is suspended.
* This causes the phy clock to be gated. The enet controller is
@@ -393,8 +478,17 @@ static int socfpga_dwmac_resume(struct device *dev)
static SIMPLE_DEV_PM_OPS(socfpga_dwmac_pm_ops, stmmac_suspend,
socfpga_dwmac_resume);
+static const struct socfpga_dwmac_ops socfpga_gen5_ops = {
+ .set_phy_mode = socfpga_gen5_set_phy_mode,
+};
+
+static const struct socfpga_dwmac_ops socfpga_gen10_ops = {
+ .set_phy_mode = socfpga_gen10_set_phy_mode,
+};
+
static const struct of_device_id socfpga_dwmac_match[] = {
- { .compatible = "altr,socfpga-stmmac" },
+ { .compatible = "altr,socfpga-stmmac", .data = &socfpga_gen5_ops },
+ { .compatible = "altr,socfpga-stmmac-a10-s10", .data = &socfpga_gen10_ops },
{ }
};
MODULE_DEVICE_TABLE(of, socfpga_dwmac_match);
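Aside (not part of the patch): the socfpga changes move the phy-mode programming behind a per-generation ops structure selected through the .data field of the OF match table (device_get_match_data() in probe). A small stand-alone sketch of that dispatch pattern; the lookup helper and table layout are invented for the demo:

#include <stdio.h>
#include <string.h>

struct demo_ops {
	int (*set_phy_mode)(void);
};

static int gen5_set_phy_mode(void)  { puts("gen5 path");  return 0; }
static int gen10_set_phy_mode(void) { puts("gen10 path"); return 0; }

static const struct demo_ops gen5_ops  = { .set_phy_mode = gen5_set_phy_mode };
static const struct demo_ops gen10_ops = { .set_phy_mode = gen10_set_phy_mode };

/* Mimics the of_device_id table: compatible string -> per-variant ops. */
static const struct {
	const char *compatible;
	const struct demo_ops *ops;
} match[] = {
	{ "altr,socfpga-stmmac",	 &gen5_ops  },
	{ "altr,socfpga-stmmac-a10-s10", &gen10_ops },
};

static const struct demo_ops *get_match_data(const char *compatible)
{
	for (unsigned int i = 0; i < sizeof(match) / sizeof(match[0]); i++)
		if (!strcmp(match[i].compatible, compatible))
			return match[i].ops;
	return NULL;
}

int main(void)
{
	const struct demo_ops *ops = get_match_data("altr,socfpga-stmmac-a10-s10");

	if (!ops)
		return 1;	/* probe would bail out, like the -EINVAL case above */
	return ops->set_phy_mode();
}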
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
index a69c34f605b1..2856f3fe5266 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
@@ -138,6 +138,20 @@ static const struct emac_variant emac_variant_a64 = {
.tx_delay_max = 7,
};
+static const struct emac_variant emac_variant_h6 = {
+ .default_syscon_value = 0x50000,
+ .syscon_field = &sun8i_syscon_reg_field,
+ /* The "Internal PHY" of H6 is not on the die. It's on the
+ * co-packaged AC200 chip instead.
+ */
+ .soc_has_internal_phy = false,
+ .support_mii = true,
+ .support_rmii = true,
+ .support_rgmii = true,
+ .rx_delay_max = 31,
+ .tx_delay_max = 7,
+};
+
#define EMAC_BASIC_CTL0 0x00
#define EMAC_BASIC_CTL1 0x04
#define EMAC_INT_STA 0x08
@@ -275,18 +289,18 @@ static void sun8i_dwmac_dma_init(void __iomem *ioaddr,
static void sun8i_dwmac_dma_init_rx(void __iomem *ioaddr,
struct stmmac_dma_cfg *dma_cfg,
- u32 dma_rx_phy, u32 chan)
+ dma_addr_t dma_rx_phy, u32 chan)
{
/* Write RX descriptors address */
- writel(dma_rx_phy, ioaddr + EMAC_RX_DESC_LIST);
+ writel(lower_32_bits(dma_rx_phy), ioaddr + EMAC_RX_DESC_LIST);
}
static void sun8i_dwmac_dma_init_tx(void __iomem *ioaddr,
struct stmmac_dma_cfg *dma_cfg,
- u32 dma_tx_phy, u32 chan)
+ dma_addr_t dma_tx_phy, u32 chan)
{
/* Write TX descriptors address */
- writel(dma_tx_phy, ioaddr + EMAC_TX_DESC_LIST);
+ writel(lower_32_bits(dma_tx_phy), ioaddr + EMAC_TX_DESC_LIST);
}
/* sun8i_dwmac_dump_regs() - Dump EMAC address space
@@ -884,6 +898,11 @@ static int sun8i_dwmac_set_syscon(struct stmmac_priv *priv)
* address. No need to mask it again.
*/
reg |= 1 << H3_EPHY_ADDR_SHIFT;
+ } else {
+ /* For SoCs without internal PHY the PHY selection bit should be
+ * set to 0 (external PHY).
+ */
+ reg &= ~H3_EPHY_SELECT;
}
if (!of_property_read_u32(node, "allwinner,tx-delay-ps", &val)) {
@@ -977,6 +996,18 @@ static void sun8i_dwmac_exit(struct platform_device *pdev, void *priv)
regulator_disable(gmac->regulator);
}
+static void sun8i_dwmac_set_mac_loopback(void __iomem *ioaddr, bool enable)
+{
+ u32 value = readl(ioaddr + EMAC_BASIC_CTL0);
+
+ if (enable)
+ value |= EMAC_LOOPBACK;
+ else
+ value &= ~EMAC_LOOPBACK;
+
+ writel(value, ioaddr + EMAC_BASIC_CTL0);
+}
+
static const struct stmmac_ops sun8i_dwmac_ops = {
.core_init = sun8i_dwmac_core_init,
.set_mac = sun8i_dwmac_set_mac,
@@ -986,6 +1017,7 @@ static const struct stmmac_ops sun8i_dwmac_ops = {
.flow_ctrl = sun8i_dwmac_flow_ctrl,
.set_umac_addr = sun8i_dwmac_set_umac_addr,
.get_umac_addr = sun8i_dwmac_get_umac_addr,
+ .set_mac_loopback = sun8i_dwmac_set_mac_loopback,
};
static struct mac_device_info *sun8i_dwmac_setup(void *ppriv)
@@ -1203,6 +1235,8 @@ static const struct of_device_id sun8i_dwmac_match[] = {
.data = &emac_variant_r40 },
{ .compatible = "allwinner,sun50i-a64-emac",
.data = &emac_variant_a64 },
+ { .compatible = "allwinner,sun50i-h6-emac",
+ .data = &emac_variant_h6 },
{ }
};
MODULE_DEVICE_TABLE(of, sun8i_dwmac_match);
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac1000.h b/drivers/net/ethernet/stmicro/stmmac/dwmac1000.h
index b83d3a98f5f1..b70d44ac0990 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac1000.h
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac1000.h
@@ -136,6 +136,7 @@ enum inter_frame_gap {
#define GMAC_FRAME_FILTER_DAIF 0x00000008 /* DA Inverse Filtering */
#define GMAC_FRAME_FILTER_PM 0x00000010 /* Pass all multicast */
#define GMAC_FRAME_FILTER_DBF 0x00000020 /* Disable Broadcast frames */
+#define GMAC_FRAME_FILTER_PCF 0x00000080 /* Pass Control frames */
#define GMAC_FRAME_FILTER_SAIF 0x00000100 /* Inverse Filtering */
#define GMAC_FRAME_FILTER_SAF 0x00000200 /* Source Address Filter */
#define GMAC_FRAME_FILTER_HPF 0x00000400 /* Hash or perfect Filter */
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c
index 9fff81170163..3d69da112625 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c
@@ -162,7 +162,7 @@ static void dwmac1000_set_filter(struct mac_device_info *hw,
memset(mc_filter, 0, sizeof(mc_filter));
if (dev->flags & IFF_PROMISC) {
- value = GMAC_FRAME_FILTER_PR;
+ value = GMAC_FRAME_FILTER_PR | GMAC_FRAME_FILTER_PCF;
} else if (dev->flags & IFF_ALLMULTI) {
value = GMAC_FRAME_FILTER_PM; /* pass all multi */
} else if (!netdev_mc_empty(dev)) {
@@ -188,6 +188,7 @@ static void dwmac1000_set_filter(struct mac_device_info *hw,
}
}
+ value |= GMAC_FRAME_FILTER_HPF;
dwmac1000_set_mchash(ioaddr, mc_filter, mcbitslog2);
/* Handle multiple unicast addresses (perfect filtering) */
@@ -206,6 +207,12 @@ static void dwmac1000_set_filter(struct mac_device_info *hw,
GMAC_ADDR_LOW(reg));
reg++;
}
+
+ while (reg <= perfect_addr_number) {
+ writel(0, ioaddr + GMAC_ADDR_HIGH(reg));
+ writel(0, ioaddr + GMAC_ADDR_LOW(reg));
+ reg++;
+ }
}
#ifdef FRAME_FILTER_DEBUG
@@ -489,6 +496,18 @@ static void dwmac1000_debug(void __iomem *ioaddr, struct stmmac_extra_stats *x,
x->mac_gmii_rx_proto_engine++;
}
+static void dwmac1000_set_mac_loopback(void __iomem *ioaddr, bool enable)
+{
+ u32 value = readl(ioaddr + GMAC_CONTROL);
+
+ if (enable)
+ value |= GMAC_CONTROL_LM;
+ else
+ value &= ~GMAC_CONTROL_LM;
+
+ writel(value, ioaddr + GMAC_CONTROL);
+}
+
const struct stmmac_ops dwmac1000_ops = {
.core_init = dwmac1000_core_init,
.set_mac = stmmac_set_mac,
@@ -508,6 +527,7 @@ const struct stmmac_ops dwmac1000_ops = {
.pcs_ctrl_ane = dwmac1000_ctrl_ane,
.pcs_rane = dwmac1000_rane,
.pcs_get_adv_lp = dwmac1000_get_adv_lp,
+ .set_mac_loopback = dwmac1000_set_mac_loopback,
};
int dwmac1000_setup(struct stmmac_priv *priv)
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_dma.c b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_dma.c
index 1fdedf77678f..2bac49b49f73 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_dma.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_dma.c
@@ -112,18 +112,18 @@ static void dwmac1000_dma_init(void __iomem *ioaddr,
static void dwmac1000_dma_init_rx(void __iomem *ioaddr,
struct stmmac_dma_cfg *dma_cfg,
- u32 dma_rx_phy, u32 chan)
+ dma_addr_t dma_rx_phy, u32 chan)
{
/* RX descriptor base address list must be written into DMA CSR3 */
- writel(dma_rx_phy, ioaddr + DMA_RCV_BASE_ADDR);
+ writel(lower_32_bits(dma_rx_phy), ioaddr + DMA_RCV_BASE_ADDR);
}
static void dwmac1000_dma_init_tx(void __iomem *ioaddr,
struct stmmac_dma_cfg *dma_cfg,
- u32 dma_tx_phy, u32 chan)
+ dma_addr_t dma_tx_phy, u32 chan)
{
/* TX descriptor base address list must be written into DMA CSR4 */
- writel(dma_tx_phy, ioaddr + DMA_TX_BASE_ADDR);
+ writel(lower_32_bits(dma_tx_phy), ioaddr + DMA_TX_BASE_ADDR);
}
static u32 dwmac1000_configure_fc(u32 csr6, int rxfifosz)
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac100_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac100_core.c
index 8842f6627cb8..ebcad8dd99db 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac100_core.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac100_core.c
@@ -150,6 +150,18 @@ static void dwmac100_pmt(struct mac_device_info *hw, unsigned long mode)
return;
}
+static void dwmac100_set_mac_loopback(void __iomem *ioaddr, bool enable)
+{
+ u32 value = readl(ioaddr + MAC_CONTROL);
+
+ if (enable)
+ value |= MAC_CONTROL_OM;
+ else
+ value &= ~MAC_CONTROL_OM;
+
+ writel(value, ioaddr + MAC_CONTROL);
+}
+
const struct stmmac_ops dwmac100_ops = {
.core_init = dwmac100_core_init,
.set_mac = stmmac_set_mac,
@@ -161,6 +173,7 @@ const struct stmmac_ops dwmac100_ops = {
.pmt = dwmac100_pmt,
.set_umac_addr = dwmac100_set_umac_addr,
.get_umac_addr = dwmac100_get_umac_addr,
+ .set_mac_loopback = dwmac100_set_mac_loopback,
};
int dwmac100_setup(struct stmmac_priv *priv)
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac100_dma.c b/drivers/net/ethernet/stmicro/stmmac/dwmac100_dma.c
index c980cc7360a4..8f0d9bc7cab5 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac100_dma.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac100_dma.c
@@ -31,18 +31,18 @@ static void dwmac100_dma_init(void __iomem *ioaddr,
static void dwmac100_dma_init_rx(void __iomem *ioaddr,
struct stmmac_dma_cfg *dma_cfg,
- u32 dma_rx_phy, u32 chan)
+ dma_addr_t dma_rx_phy, u32 chan)
{
/* RX descriptor base addr lists must be written into DMA CSR3 */
- writel(dma_rx_phy, ioaddr + DMA_RCV_BASE_ADDR);
+ writel(lower_32_bits(dma_rx_phy), ioaddr + DMA_RCV_BASE_ADDR);
}
static void dwmac100_dma_init_tx(void __iomem *ioaddr,
struct stmmac_dma_cfg *dma_cfg,
- u32 dma_tx_phy, u32 chan)
+ dma_addr_t dma_tx_phy, u32 chan)
{
/* TX descriptor base addr lists must be written into DMA CSR4 */
- writel(dma_tx_phy, ioaddr + DMA_TX_BASE_ADDR);
+ writel(lower_32_bits(dma_tx_phy), ioaddr + DMA_TX_BASE_ADDR);
}
/* Store and Forward capability is not used at all.
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4.h b/drivers/net/ethernet/stmicro/stmmac/dwmac4.h
index 80234f12bf7f..2ed11a581d80 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4.h
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4.h
@@ -15,8 +15,7 @@
/* MAC registers */
#define GMAC_CONFIG 0x00000000
#define GMAC_PACKET_FILTER 0x00000008
-#define GMAC_HASH_TAB_0_31 0x00000010
-#define GMAC_HASH_TAB_32_63 0x00000014
+#define GMAC_HASH_TAB(x) (0x10 + (x) * 4)
#define GMAC_RX_FLOW_CTRL 0x00000090
#define GMAC_QX_TX_FLOW_CTRL(x) (0x70 + x * 4)
#define GMAC_TXQ_PRTY_MAP0 0x98
@@ -61,6 +60,8 @@
#define GMAC_PACKET_FILTER_PR BIT(0)
#define GMAC_PACKET_FILTER_HMC BIT(2)
#define GMAC_PACKET_FILTER_PM BIT(4)
+#define GMAC_PACKET_FILTER_PCF BIT(7)
+#define GMAC_PACKET_FILTER_HPF BIT(10)
#define GMAC_MAX_PERFECT_ADDRESSES 128
@@ -157,6 +158,7 @@ enum power_event {
#define GMAC_CONFIG_PS BIT(15)
#define GMAC_CONFIG_FES BIT(14)
#define GMAC_CONFIG_DM BIT(13)
+#define GMAC_CONFIG_LM BIT(12)
#define GMAC_CONFIG_DCRS BIT(9)
#define GMAC_CONFIG_TE BIT(1)
#define GMAC_CONFIG_RE BIT(0)
@@ -178,6 +180,7 @@ enum power_event {
#define GMAC_HW_FEAT_MIISEL BIT(0)
/* MAC HW features1 bitmap */
+#define GMAC_HW_HASH_TB_SZ GENMASK(25, 24)
#define GMAC_HW_FEAT_AVSEL BIT(20)
#define GMAC_HW_TSOEN BIT(18)
#define GMAC_HW_TXFIFOSIZE GENMASK(10, 6)
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
index 99d772517242..01c2e2d83e76 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
@@ -400,57 +400,74 @@ static void dwmac4_set_filter(struct mac_device_info *hw,
struct net_device *dev)
{
void __iomem *ioaddr = (void __iomem *)dev->base_addr;
- unsigned int value = 0;
+ int numhashregs = (hw->multicast_filter_bins >> 5);
+ int mcbitslog2 = hw->mcast_bits_log2;
+ unsigned int value;
+ int i;
+ value = readl(ioaddr + GMAC_PACKET_FILTER);
+ value &= ~GMAC_PACKET_FILTER_HMC;
+ value &= ~GMAC_PACKET_FILTER_HPF;
+ value &= ~GMAC_PACKET_FILTER_PCF;
+ value &= ~GMAC_PACKET_FILTER_PM;
+ value &= ~GMAC_PACKET_FILTER_PR;
if (dev->flags & IFF_PROMISC) {
- value = GMAC_PACKET_FILTER_PR;
+ value = GMAC_PACKET_FILTER_PR | GMAC_PACKET_FILTER_PCF;
} else if ((dev->flags & IFF_ALLMULTI) ||
- (netdev_mc_count(dev) > HASH_TABLE_SIZE)) {
+ (netdev_mc_count(dev) > hw->multicast_filter_bins)) {
/* Pass all multi */
- value = GMAC_PACKET_FILTER_PM;
- /* Set the 64 bits of the HASH tab. To be updated if taller
- * hash table is used
- */
- writel(0xffffffff, ioaddr + GMAC_HASH_TAB_0_31);
- writel(0xffffffff, ioaddr + GMAC_HASH_TAB_32_63);
+ value |= GMAC_PACKET_FILTER_PM;
+ /* Set all the bits of the HASH tab */
+ for (i = 0; i < numhashregs; i++)
+ writel(0xffffffff, ioaddr + GMAC_HASH_TAB(i));
} else if (!netdev_mc_empty(dev)) {
- u32 mc_filter[2];
struct netdev_hw_addr *ha;
+ u32 mc_filter[8];
/* Hash filter for multicast */
- value = GMAC_PACKET_FILTER_HMC;
+ value |= GMAC_PACKET_FILTER_HMC;
memset(mc_filter, 0, sizeof(mc_filter));
netdev_for_each_mc_addr(ha, dev) {
- /* The upper 6 bits of the calculated CRC are used to
- * index the content of the Hash Table Reg 0 and 1.
+ /* The upper n bits of the calculated CRC are used to
+ * index the contents of the hash table. The number of
+ * bits used depends on the hardware configuration
+ * selected at core configuration time.
*/
- int bit_nr =
- (bitrev32(~crc32_le(~0, ha->addr, 6)) >> 26);
- /* The most significant bit determines the register
- * to use while the other 5 bits determines the bit
- * within the selected register
+ int bit_nr = bitrev32(~crc32_le(~0, ha->addr,
+ ETH_ALEN)) >> (32 - mcbitslog2);
+ /* The most significant bit determines the register to
+ * use (H/L) while the other 5 bits determine the bit
+ * within the register.
*/
- mc_filter[bit_nr >> 5] |= (1 << (bit_nr & 0x1F));
+ mc_filter[bit_nr >> 5] |= (1 << (bit_nr & 0x1f));
}
- writel(mc_filter[0], ioaddr + GMAC_HASH_TAB_0_31);
- writel(mc_filter[1], ioaddr + GMAC_HASH_TAB_32_63);
+ for (i = 0; i < numhashregs; i++)
+ writel(mc_filter[i], ioaddr + GMAC_HASH_TAB(i));
}
+ value |= GMAC_PACKET_FILTER_HPF;
+
/* Handle multiple unicast addresses */
if (netdev_uc_count(dev) > GMAC_MAX_PERFECT_ADDRESSES) {
/* Switch to promiscuous mode if more than 128 addrs
* are required
*/
value |= GMAC_PACKET_FILTER_PR;
- } else if (!netdev_uc_empty(dev)) {
- int reg = 1;
+ } else {
struct netdev_hw_addr *ha;
+ int reg = 1;
netdev_for_each_uc_addr(ha, dev) {
dwmac4_set_umac_addr(hw, ha->addr, reg);
reg++;
}
+
+ while (reg < GMAC_MAX_PERFECT_ADDRESSES) {
+ writel(0, ioaddr + GMAC_ADDR_HIGH(reg));
+ writel(0, ioaddr + GMAC_ADDR_LOW(reg));
+ reg++;
+ }
}
writel(value, ioaddr + GMAC_PACKET_FILTER);
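The rewritten hash filter no longer assumes a fixed 64-bin table: the register count comes from multicast_filter_bins (numhashregs = bins >> 5) and the index width from mcast_bits_log2, so a 256-bin configuration uses eight GMAC_HASH_TAB(x) registers indexed by the top 8 bits of the CRC. A self-contained sketch of the bit selection, using small re-implementations of crc32_le() and bitrev32() intended to match the kernel helpers (reflected CRC-32, no final inversion):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Reflected CRC-32 with no final inversion, intended to match the kernel's
 * crc32_le() as used by the filter code. */
static uint32_t crc32_le(uint32_t crc, const uint8_t *p, size_t len)
{
        while (len--) {
                crc ^= *p++;
                for (int i = 0; i < 8; i++)
                        crc = (crc >> 1) ^ (0xEDB88320u & -(crc & 1u));
        }
        return crc;
}

static uint32_t bitrev32(uint32_t x)
{
        x = ((x & 0x55555555u) << 1) | ((x >> 1) & 0x55555555u);
        x = ((x & 0x33333333u) << 2) | ((x >> 2) & 0x33333333u);
        x = ((x & 0x0f0f0f0fu) << 4) | ((x >> 4) & 0x0f0f0f0fu);
        return (x << 24) | ((x & 0xff00u) << 8) | ((x >> 8) & 0xff00u) | (x >> 24);
}

int main(void)
{
        const uint8_t mcast[6] = { 0x01, 0x00, 0x5e, 0x00, 0x00, 0xfb }; /* 224.0.0.251 */
        int mcbitslog2 = 8;             /* 256-bin hash -> 8 x 32-bit registers */
        uint32_t mc_filter[8] = { 0 };

        /* Same computation as the netdev_for_each_mc_addr() body above. */
        uint32_t bit_nr = bitrev32(~crc32_le(~0u, mcast, 6)) >> (32 - mcbitslog2);

        mc_filter[bit_nr >> 5] |= 1u << (bit_nr & 0x1f);

        printf("bit %u -> GMAC_HASH_TAB(%u), mask 0x%08x\n",
               bit_nr, bit_nr >> 5, mc_filter[bit_nr >> 5]);
        return 0;
}
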
@@ -468,8 +485,9 @@ static void dwmac4_flow_ctrl(struct mac_device_info *hw, unsigned int duplex,
if (fc & FLOW_RX) {
pr_debug("\tReceive Flow-Control ON\n");
flow |= GMAC_RX_FLOW_CTRL_RFE;
- writel(flow, ioaddr + GMAC_RX_FLOW_CTRL);
}
+ writel(flow, ioaddr + GMAC_RX_FLOW_CTRL);
+
if (fc & FLOW_TX) {
pr_debug("\tTransmit Flow-Control ON\n");
@@ -477,7 +495,7 @@ static void dwmac4_flow_ctrl(struct mac_device_info *hw, unsigned int duplex,
pr_debug("\tduplex mode: PAUSE %d\n", pause_time);
for (queue = 0; queue < tx_cnt; queue++) {
- flow |= GMAC_TX_FLOW_CTRL_TFE;
+ flow = GMAC_TX_FLOW_CTRL_TFE;
if (duplex)
flow |=
@@ -485,6 +503,9 @@ static void dwmac4_flow_ctrl(struct mac_device_info *hw, unsigned int duplex,
writel(flow, ioaddr + GMAC_QX_TX_FLOW_CTRL(queue));
}
+ } else {
+ for (queue = 0; queue < tx_cnt; queue++)
+ writel(0, ioaddr + GMAC_QX_TX_FLOW_CTRL(queue));
}
}
@@ -700,6 +721,18 @@ static void dwmac4_debug(void __iomem *ioaddr, struct stmmac_extra_stats *x,
x->mac_gmii_rx_proto_engine++;
}
+static void dwmac4_set_mac_loopback(void __iomem *ioaddr, bool enable)
+{
+ u32 value = readl(ioaddr + GMAC_CONFIG);
+
+ if (enable)
+ value |= GMAC_CONFIG_LM;
+ else
+ value &= ~GMAC_CONFIG_LM;
+
+ writel(value, ioaddr + GMAC_CONFIG);
+}
+
const struct stmmac_ops dwmac4_ops = {
.core_init = dwmac4_core_init,
.set_mac = stmmac_set_mac,
@@ -729,6 +762,7 @@ const struct stmmac_ops dwmac4_ops = {
.pcs_get_adv_lp = dwmac4_get_adv_lp,
.debug = dwmac4_debug,
.set_filter = dwmac4_set_filter,
+ .set_mac_loopback = dwmac4_set_mac_loopback,
};
const struct stmmac_ops dwmac410_ops = {
@@ -760,6 +794,7 @@ const struct stmmac_ops dwmac410_ops = {
.pcs_get_adv_lp = dwmac4_get_adv_lp,
.debug = dwmac4_debug,
.set_filter = dwmac4_set_filter,
+ .set_mac_loopback = dwmac4_set_mac_loopback,
};
const struct stmmac_ops dwmac510_ops = {
@@ -796,6 +831,7 @@ const struct stmmac_ops dwmac510_ops = {
.safety_feat_dump = dwmac5_safety_feat_dump,
.rxp_config = dwmac5_rxp_config,
.flex_pps_config = dwmac5_flex_pps_config,
+ .set_mac_loopback = dwmac4_set_mac_loopback,
};
int dwmac4_setup(struct stmmac_priv *priv)
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_descs.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_descs.c
index cf6436d3d6c7..dbde23e7e169 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_descs.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_descs.c
@@ -443,6 +443,15 @@ static void dwmac4_clear(struct dma_desc *p)
p->des3 = 0;
}
+static int set_16kib_bfsize(int mtu)
+{
+ int ret = 0;
+
+ if (unlikely(mtu >= BUF_SIZE_8KiB))
+ ret = BUF_SIZE_16KiB;
+ return ret;
+}
+
const struct stmmac_desc_ops dwmac4_desc_ops = {
.tx_status = dwmac4_wrback_get_tx_status,
.rx_status = dwmac4_wrback_get_rx_status,
@@ -469,4 +478,6 @@ const struct stmmac_desc_ops dwmac4_desc_ops = {
.clear = dwmac4_clear,
};
-const struct stmmac_mode_ops dwmac4_ring_mode_ops = { };
+const struct stmmac_mode_ops dwmac4_ring_mode_ops = {
+ .set_16kib_bfsize = set_16kib_bfsize,
+};
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
index 0f208e13da9f..3ed5508586ef 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
@@ -70,7 +70,7 @@ static void dwmac4_dma_axi(void __iomem *ioaddr, struct stmmac_axi *axi)
static void dwmac4_dma_init_rx_chan(void __iomem *ioaddr,
struct stmmac_dma_cfg *dma_cfg,
- u32 dma_rx_phy, u32 chan)
+ dma_addr_t dma_rx_phy, u32 chan)
{
u32 value;
u32 rxpbl = dma_cfg->rxpbl ?: dma_cfg->pbl;
@@ -79,12 +79,12 @@ static void dwmac4_dma_init_rx_chan(void __iomem *ioaddr,
value = value | (rxpbl << DMA_BUS_MODE_RPBL_SHIFT);
writel(value, ioaddr + DMA_CHAN_RX_CONTROL(chan));
- writel(dma_rx_phy, ioaddr + DMA_CHAN_RX_BASE_ADDR(chan));
+ writel(lower_32_bits(dma_rx_phy), ioaddr + DMA_CHAN_RX_BASE_ADDR(chan));
}
static void dwmac4_dma_init_tx_chan(void __iomem *ioaddr,
struct stmmac_dma_cfg *dma_cfg,
- u32 dma_tx_phy, u32 chan)
+ dma_addr_t dma_tx_phy, u32 chan)
{
u32 value;
u32 txpbl = dma_cfg->txpbl ?: dma_cfg->pbl;
@@ -97,7 +97,7 @@ static void dwmac4_dma_init_tx_chan(void __iomem *ioaddr,
writel(value, ioaddr + DMA_CHAN_TX_CONTROL(chan));
- writel(dma_tx_phy, ioaddr + DMA_CHAN_TX_BASE_ADDR(chan));
+ writel(lower_32_bits(dma_tx_phy), ioaddr + DMA_CHAN_TX_BASE_ADDR(chan));
}
static void dwmac4_dma_init_channel(void __iomem *ioaddr,
@@ -351,6 +351,7 @@ static void dwmac4_get_hw_feature(void __iomem *ioaddr,
/* MAC HW feature1 */
hw_cap = readl(ioaddr + GMAC_HW_FEATURE1);
+ dma_cap->hash_tb_sz = (hw_cap & GMAC_HW_HASH_TB_SZ) >> 24;
dma_cap->av = (hw_cap & GMAC_HW_FEAT_AVSEL) >> 20;
dma_cap->tsoen = (hw_cap & GMAC_HW_TSOEN) >> 18;
/* RX and TX FIFO sizes are encoded as log2(n / 128). Undo that by
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c
index 85826524683c..f2a29a90e085 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c
@@ -85,10 +85,6 @@ void dwmac4_dma_stop_rx(void __iomem *ioaddr, u32 chan)
value &= ~DMA_CONTROL_SR;
writel(value, ioaddr + DMA_CHAN_RX_CONTROL(chan));
-
- value = readl(ioaddr + GMAC_CONFIG);
- value &= ~GMAC_CONFIG_RE;
- writel(value, ioaddr + GMAC_CONFIG);
}
void dwmac4_set_tx_ring_len(void __iomem *ioaddr, u32 len, u32 chan)
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2.h b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2.h
index 085b700a4994..7f86dffb264d 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2.h
+++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2.h
@@ -15,10 +15,14 @@
/* MAC Registers */
#define XGMAC_TX_CONFIG 0x00000000
#define XGMAC_CONFIG_SS_OFF 29
-#define XGMAC_CONFIG_SS_MASK GENMASK(30, 29)
+#define XGMAC_CONFIG_SS_MASK GENMASK(31, 29)
#define XGMAC_CONFIG_SS_10000 (0x0 << XGMAC_CONFIG_SS_OFF)
-#define XGMAC_CONFIG_SS_2500 (0x2 << XGMAC_CONFIG_SS_OFF)
-#define XGMAC_CONFIG_SS_1000 (0x3 << XGMAC_CONFIG_SS_OFF)
+#define XGMAC_CONFIG_SS_2500_GMII (0x2 << XGMAC_CONFIG_SS_OFF)
+#define XGMAC_CONFIG_SS_1000_GMII (0x3 << XGMAC_CONFIG_SS_OFF)
+#define XGMAC_CONFIG_SS_100_MII (0x4 << XGMAC_CONFIG_SS_OFF)
+#define XGMAC_CONFIG_SS_5000 (0x5 << XGMAC_CONFIG_SS_OFF)
+#define XGMAC_CONFIG_SS_2500 (0x6 << XGMAC_CONFIG_SS_OFF)
+#define XGMAC_CONFIG_SS_10_MII (0x7 << XGMAC_CONFIG_SS_OFF)
#define XGMAC_CONFIG_SARC GENMASK(22, 20)
#define XGMAC_CONFIG_SARC_SHIFT 20
#define XGMAC_CONFIG_JD BIT(16)
@@ -29,6 +33,7 @@
#define XGMAC_CONFIG_GPSL GENMASK(29, 16)
#define XGMAC_CONFIG_GPSL_SHIFT 16
#define XGMAC_CONFIG_S2KP BIT(11)
+#define XGMAC_CONFIG_LM BIT(10)
#define XGMAC_CONFIG_IPC BIT(9)
#define XGMAC_CONFIG_JE BIT(8)
#define XGMAC_CONFIG_WD BIT(7)
@@ -39,6 +44,7 @@
#define XGMAC_CORE_INIT_RX 0
#define XGMAC_PACKET_FILTER 0x00000008
#define XGMAC_FILTER_RA BIT(31)
+#define XGMAC_FILTER_PCF BIT(7)
#define XGMAC_FILTER_PM BIT(4)
#define XGMAC_FILTER_HMC BIT(2)
#define XGMAC_FILTER_PR BIT(0)
@@ -81,6 +87,7 @@
#define XGMAC_HWFEAT_GMIISEL BIT(1)
#define XGMAC_HW_FEATURE1 0x00000120
#define XGMAC_HWFEAT_TSOEN BIT(18)
+#define XGMAC_HWFEAT_ADDR64 GENMASK(15, 14)
#define XGMAC_HWFEAT_TXFIFOSIZE GENMASK(10, 6)
#define XGMAC_HWFEAT_RXFIFOSIZE GENMASK(4, 0)
#define XGMAC_HW_FEATURE2 0x00000124
@@ -166,6 +173,7 @@
#define XGMAC_EN_LPI BIT(15)
#define XGMAC_LPI_XIT_PKT BIT(14)
#define XGMAC_AAL BIT(12)
+#define XGMAC_EAME BIT(11)
#define XGMAC_BLEN GENMASK(7, 1)
#define XGMAC_BLEN256 BIT(7)
#define XGMAC_BLEN128 BIT(6)
@@ -175,6 +183,10 @@
#define XGMAC_BLEN8 BIT(2)
#define XGMAC_BLEN4 BIT(1)
#define XGMAC_UNDEF BIT(0)
+#define XGMAC_TX_EDMA_CTRL 0x00003040
+#define XGMAC_TDPS GENMASK(29, 0)
+#define XGMAC_RX_EDMA_CTRL 0x00003044
+#define XGMAC_RDPS GENMASK(29, 0)
#define XGMAC_DMA_CH_CONTROL(x) (0x00003100 + (0x80 * (x)))
#define XGMAC_PBLx8 BIT(16)
#define XGMAC_DMA_CH_TX_CONTROL(x) (0x00003104 + (0x80 * (x)))
@@ -187,7 +199,9 @@
#define XGMAC_RxPBL GENMASK(21, 16)
#define XGMAC_RxPBL_SHIFT 16
#define XGMAC_RXST BIT(0)
+#define XGMAC_DMA_CH_TxDESC_HADDR(x) (0x00003110 + (0x80 * (x)))
#define XGMAC_DMA_CH_TxDESC_LADDR(x) (0x00003114 + (0x80 * (x)))
+#define XGMAC_DMA_CH_RxDESC_HADDR(x) (0x00003118 + (0x80 * (x)))
#define XGMAC_DMA_CH_RxDESC_LADDR(x) (0x0000311c + (0x80 * (x)))
#define XGMAC_DMA_CH_TxDESC_TAIL_LPTR(x) (0x00003124 + (0x80 * (x)))
#define XGMAC_DMA_CH_RxDESC_TAIL_LPTR(x) (0x0000312c + (0x80 * (x)))
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c
index 64b8cb88ea45..0a32c96a7854 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c
@@ -36,7 +36,7 @@ static void dwxgmac2_core_init(struct mac_device_info *hw,
switch (hw->ps) {
case SPEED_10000:
- tx |= hw->link.speed10000;
+ tx |= hw->link.xgmii.speed10000;
break;
case SPEED_2500:
tx |= hw->link.speed2500;
@@ -310,7 +310,7 @@ static void dwxgmac2_set_filter(struct mac_device_info *hw,
u32 value = XGMAC_FILTER_RA;
if (dev->flags & IFF_PROMISC) {
- value |= XGMAC_FILTER_PR;
+ value |= XGMAC_FILTER_PR | XGMAC_FILTER_PCF;
} else if ((dev->flags & IFF_ALLMULTI) ||
(netdev_mc_count(dev) > HASH_TABLE_SIZE)) {
value |= XGMAC_FILTER_PM;
@@ -321,6 +321,18 @@ static void dwxgmac2_set_filter(struct mac_device_info *hw,
writel(value, ioaddr + XGMAC_PACKET_FILTER);
}
+static void dwxgmac2_set_mac_loopback(void __iomem *ioaddr, bool enable)
+{
+ u32 value = readl(ioaddr + XGMAC_RX_CONFIG);
+
+ if (enable)
+ value |= XGMAC_CONFIG_LM;
+ else
+ value &= ~XGMAC_CONFIG_LM;
+
+ writel(value, ioaddr + XGMAC_RX_CONFIG);
+}
+
const struct stmmac_ops dwxgmac210_ops = {
.core_init = dwxgmac2_core_init,
.set_mac = dwxgmac2_set_mac,
@@ -350,6 +362,7 @@ const struct stmmac_ops dwxgmac210_ops = {
.pcs_get_adv_lp = NULL,
.debug = NULL,
.set_filter = dwxgmac2_set_filter,
+ .set_mac_loopback = dwxgmac2_set_mac_loopback,
};
int dwxgmac2_setup(struct stmmac_priv *priv)
@@ -368,11 +381,13 @@ int dwxgmac2_setup(struct stmmac_priv *priv)
mac->mcast_bits_log2 = ilog2(mac->multicast_filter_bins);
mac->link.duplex = 0;
- mac->link.speed10 = 0;
- mac->link.speed100 = 0;
- mac->link.speed1000 = XGMAC_CONFIG_SS_1000;
- mac->link.speed2500 = XGMAC_CONFIG_SS_2500;
- mac->link.speed10000 = XGMAC_CONFIG_SS_10000;
+ mac->link.speed10 = XGMAC_CONFIG_SS_10_MII;
+ mac->link.speed100 = XGMAC_CONFIG_SS_100_MII;
+ mac->link.speed1000 = XGMAC_CONFIG_SS_1000_GMII;
+ mac->link.speed2500 = XGMAC_CONFIG_SS_2500_GMII;
+ mac->link.xgmii.speed2500 = XGMAC_CONFIG_SS_2500;
+ mac->link.xgmii.speed5000 = XGMAC_CONFIG_SS_5000;
+ mac->link.xgmii.speed10000 = XGMAC_CONFIG_SS_10000;
mac->link.speed_mask = XGMAC_CONFIG_SS_MASK;
mac->mii.addr = XGMAC_MDIO_ADDR;
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_descs.c b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_descs.c
index 98fa471da7c0..c4c45402b8f8 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_descs.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_descs.c
@@ -242,8 +242,8 @@ static void dwxgmac2_get_addr(struct dma_desc *p, unsigned int *addr)
static void dwxgmac2_set_addr(struct dma_desc *p, dma_addr_t addr)
{
- p->des0 = cpu_to_le32(addr);
- p->des1 = 0;
+ p->des0 = cpu_to_le32(lower_32_bits(addr));
+ p->des1 = cpu_to_le32(upper_32_bits(addr));
}
static void dwxgmac2_clear(struct dma_desc *p)
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c
index e79037f511e1..a4f236e3593e 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c
@@ -27,7 +27,7 @@ static void dwxgmac2_dma_init(void __iomem *ioaddr,
if (dma_cfg->aal)
value |= XGMAC_AAL;
- writel(value, ioaddr + XGMAC_DMA_SYSBUS_MODE);
+ writel(value | XGMAC_EAME, ioaddr + XGMAC_DMA_SYSBUS_MODE);
}
static void dwxgmac2_dma_init_chan(void __iomem *ioaddr,
@@ -44,7 +44,7 @@ static void dwxgmac2_dma_init_chan(void __iomem *ioaddr,
static void dwxgmac2_dma_init_rx_chan(void __iomem *ioaddr,
struct stmmac_dma_cfg *dma_cfg,
- u32 dma_rx_phy, u32 chan)
+ dma_addr_t phy, u32 chan)
{
u32 rxpbl = dma_cfg->rxpbl ?: dma_cfg->pbl;
u32 value;
@@ -54,12 +54,13 @@ static void dwxgmac2_dma_init_rx_chan(void __iomem *ioaddr,
value |= (rxpbl << XGMAC_RxPBL_SHIFT) & XGMAC_RxPBL;
writel(value, ioaddr + XGMAC_DMA_CH_RX_CONTROL(chan));
- writel(dma_rx_phy, ioaddr + XGMAC_DMA_CH_RxDESC_LADDR(chan));
+ writel(upper_32_bits(phy), ioaddr + XGMAC_DMA_CH_RxDESC_HADDR(chan));
+ writel(lower_32_bits(phy), ioaddr + XGMAC_DMA_CH_RxDESC_LADDR(chan));
}
static void dwxgmac2_dma_init_tx_chan(void __iomem *ioaddr,
struct stmmac_dma_cfg *dma_cfg,
- u32 dma_tx_phy, u32 chan)
+ dma_addr_t phy, u32 chan)
{
u32 txpbl = dma_cfg->txpbl ?: dma_cfg->pbl;
u32 value;
@@ -70,7 +71,8 @@ static void dwxgmac2_dma_init_tx_chan(void __iomem *ioaddr,
value |= XGMAC_OSP;
writel(value, ioaddr + XGMAC_DMA_CH_TX_CONTROL(chan));
- writel(dma_tx_phy, ioaddr + XGMAC_DMA_CH_TxDESC_LADDR(chan));
+ writel(upper_32_bits(phy), ioaddr + XGMAC_DMA_CH_TxDESC_HADDR(chan));
+ writel(lower_32_bits(phy), ioaddr + XGMAC_DMA_CH_TxDESC_LADDR(chan));
}
static void dwxgmac2_dma_axi(void __iomem *ioaddr, struct stmmac_axi *axi)
@@ -91,11 +93,11 @@ static void dwxgmac2_dma_axi(void __iomem *ioaddr, struct stmmac_axi *axi)
value |= (axi->axi_rd_osr_lmt << XGMAC_RD_OSR_LMT_SHIFT) &
XGMAC_RD_OSR_LMT;
+ if (!axi->axi_fb)
+ value |= XGMAC_UNDEF;
+
value &= ~XGMAC_BLEN;
for (i = 0; i < AXI_BLEN; i++) {
- if (axi->axi_blen[i])
- value &= ~XGMAC_UNDEF;
-
switch (axi->axi_blen[i]) {
case 256:
value |= XGMAC_BLEN256;
@@ -122,6 +124,8 @@ static void dwxgmac2_dma_axi(void __iomem *ioaddr, struct stmmac_axi *axi)
}
writel(value, ioaddr + XGMAC_DMA_SYSBUS_MODE);
+ writel(XGMAC_TDPS, ioaddr + XGMAC_TX_EDMA_CTRL);
+ writel(XGMAC_RDPS, ioaddr + XGMAC_RX_EDMA_CTRL);
}
static void dwxgmac2_dma_rx_mode(void __iomem *ioaddr, int mode,
@@ -299,10 +303,6 @@ static void dwxgmac2_dma_stop_rx(void __iomem *ioaddr, u32 chan)
value = readl(ioaddr + XGMAC_DMA_CH_RX_CONTROL(chan));
value &= ~XGMAC_RXST;
writel(value, ioaddr + XGMAC_DMA_CH_RX_CONTROL(chan));
-
- value = readl(ioaddr + XGMAC_RX_CONFIG);
- value &= ~XGMAC_CONFIG_RE;
- writel(value, ioaddr + XGMAC_RX_CONFIG);
}
static int dwxgmac2_dma_interrupt(void __iomem *ioaddr,
@@ -363,6 +363,23 @@ static void dwxgmac2_get_hw_feature(void __iomem *ioaddr,
/* MAC HW feature 1 */
hw_cap = readl(ioaddr + XGMAC_HW_FEATURE1);
dma_cap->tsoen = (hw_cap & XGMAC_HWFEAT_TSOEN) >> 18;
+
+ dma_cap->addr64 = (hw_cap & XGMAC_HWFEAT_ADDR64) >> 14;
+ switch (dma_cap->addr64) {
+ case 0:
+ dma_cap->addr64 = 32;
+ break;
+ case 1:
+ dma_cap->addr64 = 40;
+ break;
+ case 2:
+ dma_cap->addr64 = 48;
+ break;
+ default:
+ dma_cap->addr64 = 32;
+ break;
+ }
+
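The new ADDR64 capability field is a two-bit encoding of the DMA address width the core was synthesized with. The switch above maps it to 32, 40 or 48 bits and falls back to 32 for reserved values; the same decode written as a table lookup, as a stand-alone sketch:

#include <stdint.h>
#include <stdio.h>

/* GENMASK(15, 14) spelled out for a plain C build. */
#define XGMAC_HWFEAT_ADDR64     (0x3u << 14)

/* Equivalent of the switch in dwxgmac2_get_hw_feature() above:
 * encoded 0 -> 32-bit, 1 -> 40-bit, 2 -> 48-bit, reserved -> 32-bit. */
static unsigned int xgmac_addr_width(uint32_t hw_feature1)
{
        static const unsigned int width[4] = { 32, 40, 48, 32 };

        return width[(hw_feature1 & XGMAC_HWFEAT_ADDR64) >> 14];
}

int main(void)
{
        uint32_t hw_cap = 2u << 14;     /* pretend the core reports 48-bit */

        printf("DMA address width: %u bits\n", xgmac_addr_width(hw_cap));
        return 0;
}
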
dma_cap->tx_fifo_size =
128 << ((hw_cap & XGMAC_HWFEAT_TXFIFOSIZE) >> 6);
dma_cap->rx_fifo_size =
diff --git a/drivers/net/ethernet/stmicro/stmmac/hwif.c b/drivers/net/ethernet/stmicro/stmmac/hwif.c
index 81b966a8261b..6c61b753b55e 100644
--- a/drivers/net/ethernet/stmicro/stmmac/hwif.c
+++ b/drivers/net/ethernet/stmicro/stmmac/hwif.c
@@ -81,6 +81,7 @@ static const struct stmmac_hwif_entry {
const void *hwtimestamp;
const void *mode;
const void *tc;
+ const void *mmc;
int (*setup)(struct stmmac_priv *priv);
int (*quirks)(struct stmmac_priv *priv);
} stmmac_hw[] = {
@@ -100,6 +101,7 @@ static const struct stmmac_hwif_entry {
.hwtimestamp = &stmmac_ptp,
.mode = NULL,
.tc = NULL,
+ .mmc = &dwmac_mmc_ops,
.setup = dwmac100_setup,
.quirks = stmmac_dwmac1_quirks,
}, {
@@ -117,6 +119,7 @@ static const struct stmmac_hwif_entry {
.hwtimestamp = &stmmac_ptp,
.mode = NULL,
.tc = NULL,
+ .mmc = &dwmac_mmc_ops,
.setup = dwmac1000_setup,
.quirks = stmmac_dwmac1_quirks,
}, {
@@ -134,6 +137,7 @@ static const struct stmmac_hwif_entry {
.hwtimestamp = &stmmac_ptp,
.mode = NULL,
.tc = &dwmac510_tc_ops,
+ .mmc = &dwmac_mmc_ops,
.setup = dwmac4_setup,
.quirks = stmmac_dwmac4_quirks,
}, {
@@ -151,6 +155,7 @@ static const struct stmmac_hwif_entry {
.hwtimestamp = &stmmac_ptp,
.mode = &dwmac4_ring_mode_ops,
.tc = &dwmac510_tc_ops,
+ .mmc = &dwmac_mmc_ops,
.setup = dwmac4_setup,
.quirks = NULL,
}, {
@@ -168,6 +173,7 @@ static const struct stmmac_hwif_entry {
.hwtimestamp = &stmmac_ptp,
.mode = &dwmac4_ring_mode_ops,
.tc = &dwmac510_tc_ops,
+ .mmc = &dwmac_mmc_ops,
.setup = dwmac4_setup,
.quirks = NULL,
}, {
@@ -185,6 +191,7 @@ static const struct stmmac_hwif_entry {
.hwtimestamp = &stmmac_ptp,
.mode = &dwmac4_ring_mode_ops,
.tc = &dwmac510_tc_ops,
+ .mmc = &dwmac_mmc_ops,
.setup = dwmac4_setup,
.quirks = NULL,
}, {
@@ -202,6 +209,7 @@ static const struct stmmac_hwif_entry {
.hwtimestamp = &stmmac_ptp,
.mode = NULL,
.tc = &dwmac510_tc_ops,
+ .mmc = NULL,
.setup = dwxgmac2_setup,
.quirks = NULL,
},
@@ -267,6 +275,7 @@ int stmmac_hwif_init(struct stmmac_priv *priv)
mac->ptp = mac->ptp ? : entry->hwtimestamp;
mac->mode = mac->mode ? : entry->mode;
mac->tc = mac->tc ? : entry->tc;
+ mac->mmc = mac->mmc ? : entry->mmc;
priv->hw = mac;
priv->ptpaddr = priv->ioaddr + entry->regs.ptp_off;
diff --git a/drivers/net/ethernet/stmicro/stmmac/hwif.h b/drivers/net/ethernet/stmicro/stmmac/hwif.h
index 5bb00234d961..278c0dbec9d9 100644
--- a/drivers/net/ethernet/stmicro/stmmac/hwif.h
+++ b/drivers/net/ethernet/stmicro/stmmac/hwif.h
@@ -6,6 +6,7 @@
#define __STMMAC_HWIF_H__
#include <linux/netdevice.h>
+#include <linux/stmmac.h>
#define stmmac_do_void_callback(__priv, __module, __cname, __arg0, __args...) \
({ \
@@ -149,10 +150,10 @@ struct stmmac_dma_ops {
struct stmmac_dma_cfg *dma_cfg, u32 chan);
void (*init_rx_chan)(void __iomem *ioaddr,
struct stmmac_dma_cfg *dma_cfg,
- u32 dma_rx_phy, u32 chan);
+ dma_addr_t phy, u32 chan);
void (*init_tx_chan)(void __iomem *ioaddr,
struct stmmac_dma_cfg *dma_cfg,
- u32 dma_tx_phy, u32 chan);
+ dma_addr_t phy, u32 chan);
/* Configure the AXI Bus Mode Register */
void (*axi)(void __iomem *ioaddr, struct stmmac_axi *axi);
/* Dump DMA registers */
@@ -324,6 +325,8 @@ struct stmmac_ops {
int (*flex_pps_config)(void __iomem *ioaddr, int index,
struct stmmac_pps_cfg *cfg, bool enable,
u32 sub_second_inc, u32 systime_flags);
+ /* Loopback for selftests */
+ void (*set_mac_loopback)(void __iomem *ioaddr, bool enable);
};
#define stmmac_core_init(__priv, __args...) \
@@ -392,6 +395,8 @@ struct stmmac_ops {
stmmac_do_callback(__priv, mac, rxp_config, __args)
#define stmmac_flex_pps_config(__priv, __args...) \
stmmac_do_callback(__priv, mac, flex_pps_config, __args)
+#define stmmac_set_mac_loopback(__priv, __args...) \
+ stmmac_do_void_callback(__priv, mac, set_mac_loopback, __args)
/* PTP and HW Timer helpers */
struct stmmac_hwtimestamp {
@@ -464,6 +469,21 @@ struct stmmac_tc_ops {
#define stmmac_tc_setup_cbs(__priv, __args...) \
stmmac_do_callback(__priv, tc, setup_cbs, __args)
+struct stmmac_counters;
+
+struct stmmac_mmc_ops {
+ void (*ctrl)(void __iomem *ioaddr, unsigned int mode);
+ void (*intr_all_mask)(void __iomem *ioaddr);
+ void (*read)(void __iomem *ioaddr, struct stmmac_counters *mmc);
+};
+
+#define stmmac_mmc_ctrl(__priv, __args...) \
+ stmmac_do_void_callback(__priv, mmc, ctrl, __args)
+#define stmmac_mmc_intr_all_mask(__priv, __args...) \
+ stmmac_do_void_callback(__priv, mmc, intr_all_mask, __args)
+#define stmmac_mmc_read(__priv, __args...) \
+ stmmac_do_void_callback(__priv, mmc, read, __args)
+
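Moving the MMC counter helpers behind stmmac_mmc_ops means callers go through the usual hwif dispatch macros, and a core without an MMC block (the XGMAC entry sets .mmc = NULL) silently gets a no-op. A stripped-down model of that indirection, using a GNU named-variadic macro like hwif.h does; the real stmmac_do_void_callback() additionally returns -EINVAL when the callback is missing, which is omitted here:

#include <stdio.h>

struct mmc_ops {
        void (*intr_all_mask)(void *ioaddr);
};

struct priv {
        const struct mmc_ops *mmc;
        unsigned int mmc_regs[4];       /* pretend MMC register block */
};

#define do_void_callback(__priv, __module, __cname, __args...)         \
do {                                                                    \
        if ((__priv)->__module && (__priv)->__module->__cname)          \
                (__priv)->__module->__cname(__args);                    \
} while (0)

static void demo_intr_all_mask(void *ioaddr)
{
        printf("masking all MMC interrupts at %p\n", ioaddr);
}

static const struct mmc_ops demo_mmc_ops = {
        .intr_all_mask = demo_intr_all_mask,
};

int main(void)
{
        struct priv gmac = { .mmc = &demo_mmc_ops };
        struct priv xgmac = { .mmc = NULL };

        do_void_callback(&gmac, mmc, intr_all_mask, gmac.mmc_regs);
        do_void_callback(&xgmac, mmc, intr_all_mask, xgmac.mmc_regs); /* no-op */
        return 0;
}
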
struct stmmac_regs_off {
u32 ptp_off;
u32 mmc_off;
@@ -482,6 +502,7 @@ extern const struct stmmac_tc_ops dwmac510_tc_ops;
extern const struct stmmac_ops dwxgmac210_ops;
extern const struct stmmac_dma_ops dwxgmac210_dma_ops;
extern const struct stmmac_desc_ops dwxgmac210_desc_ops;
+extern const struct stmmac_mmc_ops dwmac_mmc_ops;
#define GMAC_VERSION 0x00000020 /* GMAC CORE Version */
#define GMAC4_VERSION 0x00000110 /* GMAC4+ CORE Version */
diff --git a/drivers/net/ethernet/stmicro/stmmac/mmc.h b/drivers/net/ethernet/stmicro/stmmac/mmc.h
index 6c8fdee3b25a..3587ceb9faf5 100644
--- a/drivers/net/ethernet/stmicro/stmmac/mmc.h
+++ b/drivers/net/ethernet/stmicro/stmmac/mmc.h
@@ -118,8 +118,4 @@ struct stmmac_counters {
unsigned int mmc_rx_icmp_err_octets;
};
-void dwmac_mmc_ctrl(void __iomem *ioaddr, unsigned int mode);
-void dwmac_mmc_intr_all_mask(void __iomem *ioaddr);
-void dwmac_mmc_read(void __iomem *ioaddr, struct stmmac_counters *mmc);
-
#endif /* __MMC_H__ */
diff --git a/drivers/net/ethernet/stmicro/stmmac/mmc_core.c b/drivers/net/ethernet/stmicro/stmmac/mmc_core.c
index 1d967b8f91a0..a471db6d7b11 100644
--- a/drivers/net/ethernet/stmicro/stmmac/mmc_core.c
+++ b/drivers/net/ethernet/stmicro/stmmac/mmc_core.c
@@ -10,6 +10,7 @@
#include <linux/kernel.h>
#include <linux/io.h>
+#include "hwif.h"
#include "mmc.h"
/* MAC Management Counters register offset */
@@ -118,7 +119,7 @@
#define MMC_RX_ICMP_GD_OCTETS 0x180
#define MMC_RX_ICMP_ERR_OCTETS 0x184
-void dwmac_mmc_ctrl(void __iomem *mmcaddr, unsigned int mode)
+static void dwmac_mmc_ctrl(void __iomem *mmcaddr, unsigned int mode)
{
u32 value = readl(mmcaddr + MMC_CNTRL);
@@ -131,7 +132,7 @@ void dwmac_mmc_ctrl(void __iomem *mmcaddr, unsigned int mode)
}
/* To mask all interrupts. */
-void dwmac_mmc_intr_all_mask(void __iomem *mmcaddr)
+static void dwmac_mmc_intr_all_mask(void __iomem *mmcaddr)
{
writel(MMC_DEFAULT_MASK, mmcaddr + MMC_RX_INTR_MASK);
writel(MMC_DEFAULT_MASK, mmcaddr + MMC_TX_INTR_MASK);
@@ -143,7 +144,7 @@ void dwmac_mmc_intr_all_mask(void __iomem *mmcaddr)
* counter after a read. So all the fields of the mmc struct
* have to be incremented.
*/
-void dwmac_mmc_read(void __iomem *mmcaddr, struct stmmac_counters *mmc)
+static void dwmac_mmc_read(void __iomem *mmcaddr, struct stmmac_counters *mmc)
{
mmc->mmc_tx_octetcount_gb += readl(mmcaddr + MMC_TX_OCTETCOUNT_GB);
mmc->mmc_tx_framecount_gb += readl(mmcaddr + MMC_TX_FRAMECOUNT_GB);
@@ -256,3 +257,9 @@ void dwmac_mmc_read(void __iomem *mmcaddr, struct stmmac_counters *mmc)
mmc->mmc_rx_icmp_gd_octets += readl(mmcaddr + MMC_RX_ICMP_GD_OCTETS);
mmc->mmc_rx_icmp_err_octets += readl(mmcaddr + MMC_RX_ICMP_ERR_OCTETS);
}
+
+const struct stmmac_mmc_ops dwmac_mmc_ops = {
+ .ctrl = dwmac_mmc_ctrl,
+ .intr_all_mask = dwmac_mmc_intr_all_mask,
+ .read = dwmac_mmc_read,
+};
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac.h b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
index 62a64356ad22..5cd966c154f3 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
@@ -14,12 +14,13 @@
#include <linux/clk.h>
#include <linux/stmmac.h>
-#include <linux/phy.h>
+#include <linux/phylink.h>
#include <linux/pci.h>
#include "common.h"
#include <linux/ptp_clock_kernel.h>
#include <linux/net_tstamp.h>
#include <linux/reset.h>
+#include <net/page_pool.h>
struct stmmac_resources {
void __iomem *addr;
@@ -54,13 +55,19 @@ struct stmmac_tx_queue {
u32 mss;
};
+struct stmmac_rx_buffer {
+ struct page *page;
+ dma_addr_t addr;
+};
+
struct stmmac_rx_queue {
+ u32 rx_count_frames;
u32 queue_index;
+ struct page_pool *page_pool;
+ struct stmmac_rx_buffer *buf_pool;
struct stmmac_priv *priv_data;
struct dma_extended_desc *dma_erx;
struct dma_desc *dma_rx ____cacheline_aligned_in_smp;
- struct sk_buff **rx_skbuff;
- dma_addr_t *rx_skbuff_dma;
unsigned int cur_rx;
unsigned int dirty_rx;
u32 rx_zeroc_thresh;
@@ -110,6 +117,7 @@ struct stmmac_priv {
/* Frequently used values are kept adjacent for cache effect */
u32 tx_coal_frames;
u32 tx_coal_timer;
+ u32 rx_coal_frames;
int tx_coalesce;
int hwts_tx_en;
@@ -137,14 +145,15 @@ struct stmmac_priv {
/* Generic channel for NAPI */
struct stmmac_channel channel[STMMAC_CH_MAX];
- bool oldlink;
int speed;
- int oldduplex;
unsigned int flow_ctrl;
unsigned int pause;
struct mii_bus *mii;
int mii_irq[PHY_MAX_ADDR];
+ struct phylink_config phylink_config;
+ struct phylink *phylink;
+
struct stmmac_extra_stats xstats ____cacheline_aligned_in_smp;
struct stmmac_safety_stats sstats;
struct plat_stmmacenet_data *plat;
@@ -219,4 +228,26 @@ int stmmac_dvr_probe(struct device *device,
void stmmac_disable_eee_mode(struct stmmac_priv *priv);
bool stmmac_eee_init(struct stmmac_priv *priv);
+#if IS_ENABLED(CONFIG_STMMAC_SELFTESTS)
+void stmmac_selftest_run(struct net_device *dev,
+ struct ethtool_test *etest, u64 *buf);
+void stmmac_selftest_get_strings(struct stmmac_priv *priv, u8 *data);
+int stmmac_selftest_get_count(struct stmmac_priv *priv);
+#else
+static inline void stmmac_selftest_run(struct net_device *dev,
+ struct ethtool_test *etest, u64 *buf)
+{
+ /* Not enabled */
+}
+static inline void stmmac_selftest_get_strings(struct stmmac_priv *priv,
+ u8 *data)
+{
+ /* Not enabled */
+}
+static inline int stmmac_selftest_get_count(struct stmmac_priv *priv)
+{
+ return -EOPNOTSUPP;
+}
+#endif /* CONFIG_STMMAC_SELFTESTS */
+
#endif /* __STMMAC_H__ */
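The selftest entry points follow the usual Kconfig stub pattern: when the option is disabled, callers compile against static inline no-ops instead of sprouting #ifdefs at each call site. A minimal stand-alone version of the pattern, guarded by a made-up CONFIG_DEMO_SELFTESTS symbol and a plain #ifdef rather than IS_ENABLED():

#include <errno.h>
#include <stdio.h>

/* CONFIG_DEMO_SELFTESTS is a stand-in for the Kconfig symbol. */
/* #define CONFIG_DEMO_SELFTESTS 1 */

#ifdef CONFIG_DEMO_SELFTESTS
int demo_selftest_get_count(void);
#else
static inline int demo_selftest_get_count(void)
{
        return -EOPNOTSUPP;             /* feature not built in */
}
#endif

int main(void)
{
        printf("selftest count: %d\n", demo_selftest_get_count());
        return 0;
}
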
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
index e7af3dc3dd8f..6efb66820d4c 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
@@ -12,7 +12,7 @@
#include <linux/ethtool.h>
#include <linux/interrupt.h>
#include <linux/mii.h>
-#include <linux/phy.h>
+#include <linux/phylink.h>
#include <linux/net_tstamp.h>
#include <asm/io.h>
@@ -264,7 +264,6 @@ static int stmmac_ethtool_get_link_ksettings(struct net_device *dev,
struct ethtool_link_ksettings *cmd)
{
struct stmmac_priv *priv = netdev_priv(dev);
- struct phy_device *phy = dev->phydev;
if (priv->hw->pcs & STMMAC_PCS_RGMII ||
priv->hw->pcs & STMMAC_PCS_SGMII) {
@@ -343,18 +342,7 @@ static int stmmac_ethtool_get_link_ksettings(struct net_device *dev,
return 0;
}
- if (phy == NULL) {
- pr_err("%s: %s: PHY is not registered\n",
- __func__, dev->name);
- return -ENODEV;
- }
- if (!netif_running(dev)) {
- pr_err("%s: interface is disabled: we cannot track "
- "link speed / duplex setting\n", dev->name);
- return -EBUSY;
- }
- phy_ethtool_ksettings_get(phy, cmd);
- return 0;
+ return phylink_ethtool_ksettings_get(priv->phylink, cmd);
}
static int
@@ -362,8 +350,6 @@ stmmac_ethtool_set_link_ksettings(struct net_device *dev,
const struct ethtool_link_ksettings *cmd)
{
struct stmmac_priv *priv = netdev_priv(dev);
- struct phy_device *phy = dev->phydev;
- int rc;
if (priv->hw->pcs & STMMAC_PCS_RGMII ||
priv->hw->pcs & STMMAC_PCS_SGMII) {
@@ -387,9 +373,7 @@ stmmac_ethtool_set_link_ksettings(struct net_device *dev,
return 0;
}
- rc = phy_ethtool_ksettings_set(phy, cmd);
-
- return rc;
+ return phylink_ethtool_ksettings_set(priv->phylink, cmd);
}
static u32 stmmac_ethtool_getmsglevel(struct net_device *dev)
@@ -433,6 +417,13 @@ static void stmmac_ethtool_gregs(struct net_device *dev,
NUM_DWMAC1000_DMA_REGS * 4);
}
+static int stmmac_nway_reset(struct net_device *dev)
+{
+ struct stmmac_priv *priv = netdev_priv(dev);
+
+ return phylink_ethtool_nway_reset(priv->phylink);
+}
+
static void
stmmac_get_pauseparam(struct net_device *netdev,
struct ethtool_pauseparam *pause)
@@ -440,28 +431,13 @@ stmmac_get_pauseparam(struct net_device *netdev,
struct stmmac_priv *priv = netdev_priv(netdev);
struct rgmii_adv adv_lp;
- pause->rx_pause = 0;
- pause->tx_pause = 0;
-
if (priv->hw->pcs && !stmmac_pcs_get_adv_lp(priv, priv->ioaddr, &adv_lp)) {
pause->autoneg = 1;
if (!adv_lp.pause)
return;
} else {
- if (!linkmode_test_bit(ETHTOOL_LINK_MODE_Pause_BIT,
- netdev->phydev->supported) ||
- !linkmode_test_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,
- netdev->phydev->supported))
- return;
+ phylink_ethtool_get_pauseparam(priv->phylink, pause);
}
-
- pause->autoneg = netdev->phydev->autoneg;
-
- if (priv->flow_ctrl & FLOW_RX)
- pause->rx_pause = 1;
- if (priv->flow_ctrl & FLOW_TX)
- pause->tx_pause = 1;
-
}
static int
@@ -469,39 +445,16 @@ stmmac_set_pauseparam(struct net_device *netdev,
struct ethtool_pauseparam *pause)
{
struct stmmac_priv *priv = netdev_priv(netdev);
- u32 tx_cnt = priv->plat->tx_queues_to_use;
- struct phy_device *phy = netdev->phydev;
- int new_pause = FLOW_OFF;
struct rgmii_adv adv_lp;
if (priv->hw->pcs && !stmmac_pcs_get_adv_lp(priv, priv->ioaddr, &adv_lp)) {
pause->autoneg = 1;
if (!adv_lp.pause)
return -EOPNOTSUPP;
+ return 0;
} else {
- if (!linkmode_test_bit(ETHTOOL_LINK_MODE_Pause_BIT,
- phy->supported) ||
- !linkmode_test_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,
- phy->supported))
- return -EOPNOTSUPP;
- }
-
- if (pause->rx_pause)
- new_pause |= FLOW_RX;
- if (pause->tx_pause)
- new_pause |= FLOW_TX;
-
- priv->flow_ctrl = new_pause;
- phy->autoneg = pause->autoneg;
-
- if (phy->autoneg) {
- if (netif_running(netdev))
- return phy_start_aneg(phy);
+ return phylink_ethtool_set_pauseparam(priv->phylink, pause);
}
-
- stmmac_flow_ctrl(priv, priv->hw, phy->duplex, priv->flow_ctrl,
- priv->pause, tx_cnt);
- return 0;
}
static void stmmac_get_ethtool_stats(struct net_device *dev,
@@ -527,7 +480,7 @@ static void stmmac_get_ethtool_stats(struct net_device *dev,
if (ret) {
/* If supported, for new GMAC chips expose the MMC counters */
if (priv->dma_cap.rmon) {
- dwmac_mmc_read(priv->mmcaddr, &priv->mmc);
+ stmmac_mmc_read(priv, priv->mmcaddr, &priv->mmc);
for (i = 0; i < STMMAC_MMC_STATS_LEN; i++) {
char *p;
@@ -539,7 +492,7 @@ static void stmmac_get_ethtool_stats(struct net_device *dev,
}
}
if (priv->eee_enabled) {
- int val = phy_get_eee_err(dev->phydev);
+ int val = phylink_get_eee_err(priv->phylink);
if (val)
priv->xstats.phy_eee_wakeup_error_n = val;
}
@@ -579,6 +532,8 @@ static int stmmac_get_sset_count(struct net_device *netdev, int sset)
}
return len;
+ case ETH_SS_TEST:
+ return stmmac_selftest_get_count(priv);
default:
return -EOPNOTSUPP;
}
@@ -615,6 +570,9 @@ static void stmmac_get_strings(struct net_device *dev, u32 stringset, u8 *data)
p += ETH_GSTRING_LEN;
}
break;
+ case ETH_SS_TEST:
+ stmmac_selftest_get_strings(priv, p);
+ break;
default:
WARN_ON(1);
break;
@@ -679,7 +637,7 @@ static int stmmac_ethtool_op_get_eee(struct net_device *dev,
edata->eee_active = priv->eee_active;
edata->tx_lpi_timer = priv->tx_lpi_timer;
- return phy_ethtool_get_eee(dev->phydev, edata);
+ return phylink_ethtool_get_eee(priv->phylink, edata);
}
static int stmmac_ethtool_op_set_eee(struct net_device *dev,
@@ -700,7 +658,7 @@ static int stmmac_ethtool_op_set_eee(struct net_device *dev,
return -EOPNOTSUPP;
}
- ret = phy_ethtool_set_eee(dev->phydev, edata);
+ ret = phylink_ethtool_set_eee(priv->phylink, edata);
if (ret)
return ret;
@@ -743,8 +701,10 @@ static int stmmac_get_coalesce(struct net_device *dev,
ec->tx_coalesce_usecs = priv->tx_coal_timer;
ec->tx_max_coalesced_frames = priv->tx_coal_frames;
- if (priv->use_riwt)
+ if (priv->use_riwt) {
+ ec->rx_max_coalesced_frames = priv->rx_coal_frames;
ec->rx_coalesce_usecs = stmmac_riwt2usec(priv->rx_riwt, priv);
+ }
return 0;
}
@@ -757,7 +717,7 @@ static int stmmac_set_coalesce(struct net_device *dev,
unsigned int rx_riwt;
/* Check not supported parameters */
- if ((ec->rx_max_coalesced_frames) || (ec->rx_coalesce_usecs_irq) ||
+ if ((ec->rx_coalesce_usecs_irq) ||
(ec->rx_max_coalesced_frames_irq) || (ec->tx_coalesce_usecs_irq) ||
(ec->use_adaptive_rx_coalesce) || (ec->use_adaptive_tx_coalesce) ||
(ec->pkt_rate_low) || (ec->rx_coalesce_usecs_low) ||
@@ -791,6 +751,7 @@ static int stmmac_set_coalesce(struct net_device *dev,
/* Only copy relevant parameters, ignore all others. */
priv->tx_coal_frames = ec->tx_max_coalesced_frames;
priv->tx_coal_timer = ec->tx_coalesce_usecs;
+ priv->rx_coal_frames = ec->rx_max_coalesced_frames;
priv->rx_riwt = rx_riwt;
stmmac_rx_watchdog(priv, priv->ioaddr, priv->rx_riwt, rx_cnt);
@@ -877,9 +838,10 @@ static const struct ethtool_ops stmmac_ethtool_ops = {
.get_regs = stmmac_ethtool_gregs,
.get_regs_len = stmmac_ethtool_get_regs_len,
.get_link = ethtool_op_get_link,
- .nway_reset = phy_ethtool_nway_reset,
+ .nway_reset = stmmac_nway_reset,
.get_pauseparam = stmmac_get_pauseparam,
.set_pauseparam = stmmac_set_pauseparam,
+ .self_test = stmmac_selftest_run,
.get_ethtool_stats = stmmac_get_ethtool_stats,
.get_strings = stmmac_get_strings,
.get_wol = stmmac_get_wol,
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index 06358fe5b245..c7c9e5f162e6 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -35,6 +35,7 @@
#include <linux/seq_file.h>
#endif /* CONFIG_DEBUG_FS */
#include <linux/net_tstamp.h>
+#include <linux/phylink.h>
#include <net/pkt_cls.h>
#include "stmmac_ptp.h"
#include "stmmac.h"
@@ -318,21 +319,6 @@ static inline u32 stmmac_rx_dirty(struct stmmac_priv *priv, u32 queue)
}
/**
- * stmmac_hw_fix_mac_speed - callback for speed selection
- * @priv: driver private structure
- * Description: on some platforms (e.g. ST), some HW system configuration
- * registers have to be set according to the link speed negotiated.
- */
-static inline void stmmac_hw_fix_mac_speed(struct stmmac_priv *priv)
-{
- struct net_device *ndev = priv->dev;
- struct phy_device *phydev = ndev->phydev;
-
- if (likely(priv->plat->fix_mac_speed))
- priv->plat->fix_mac_speed(priv->plat->bsp_priv, phydev->speed);
-}
-
-/**
* stmmac_enable_eee_mode - check and enter in LPI mode
* @priv: driver private structure
* Description: this function is to verify and enter in LPI mode in case of
@@ -395,14 +381,7 @@ static void stmmac_eee_ctrl_timer(struct timer_list *t)
*/
bool stmmac_eee_init(struct stmmac_priv *priv)
{
- struct net_device *ndev = priv->dev;
- int interface = priv->plat->interface;
- bool ret = false;
-
- if ((interface != PHY_INTERFACE_MODE_MII) &&
- (interface != PHY_INTERFACE_MODE_GMII) &&
- !phy_interface_mode_is_rgmii(interface))
- goto out;
+ int tx_lpi_timer = priv->tx_lpi_timer;
/* Using PCS we cannot deal with the phy registers at this stage
* so we do not support extra features like EEE.
@@ -410,52 +389,35 @@ bool stmmac_eee_init(struct stmmac_priv *priv)
if ((priv->hw->pcs == STMMAC_PCS_RGMII) ||
(priv->hw->pcs == STMMAC_PCS_TBI) ||
(priv->hw->pcs == STMMAC_PCS_RTBI))
- goto out;
-
- /* MAC core supports the EEE feature. */
- if (priv->dma_cap.eee) {
- int tx_lpi_timer = priv->tx_lpi_timer;
-
- /* Check if the PHY supports EEE */
- if (phy_init_eee(ndev->phydev, 1)) {
- /* To manage at run-time if the EEE cannot be supported
- * anymore (for example because the lp caps have been
- * changed).
- * In that case the driver disable own timers.
- */
- mutex_lock(&priv->lock);
- if (priv->eee_active) {
- netdev_dbg(priv->dev, "disable EEE\n");
- del_timer_sync(&priv->eee_ctrl_timer);
- stmmac_set_eee_timer(priv, priv->hw, 0,
- tx_lpi_timer);
- }
- priv->eee_active = 0;
- mutex_unlock(&priv->lock);
- goto out;
- }
- /* Activate the EEE and start timers */
- mutex_lock(&priv->lock);
- if (!priv->eee_active) {
- priv->eee_active = 1;
- timer_setup(&priv->eee_ctrl_timer,
- stmmac_eee_ctrl_timer, 0);
- mod_timer(&priv->eee_ctrl_timer,
- STMMAC_LPI_T(eee_timer));
-
- stmmac_set_eee_timer(priv, priv->hw,
- STMMAC_DEFAULT_LIT_LS, tx_lpi_timer);
- }
- /* Set HW EEE according to the speed */
- stmmac_set_eee_pls(priv, priv->hw, ndev->phydev->link);
+ return false;
+
+ /* Check if MAC core supports the EEE feature. */
+ if (!priv->dma_cap.eee)
+ return false;
- ret = true;
+ mutex_lock(&priv->lock);
+
+ /* Check if it needs to be deactivated */
+ if (!priv->eee_active) {
+ if (priv->eee_enabled) {
+ netdev_dbg(priv->dev, "disable EEE\n");
+ del_timer_sync(&priv->eee_ctrl_timer);
+ stmmac_set_eee_timer(priv, priv->hw, 0, tx_lpi_timer);
+ }
mutex_unlock(&priv->lock);
+ return false;
+ }
- netdev_dbg(priv->dev, "Energy-Efficient Ethernet initialized\n");
+ if (priv->eee_active && !priv->eee_enabled) {
+ timer_setup(&priv->eee_ctrl_timer, stmmac_eee_ctrl_timer, 0);
+ mod_timer(&priv->eee_ctrl_timer, STMMAC_LPI_T(eee_timer));
+ stmmac_set_eee_timer(priv, priv->hw, STMMAC_DEFAULT_LIT_LS,
+ tx_lpi_timer);
}
-out:
- return ret;
+
+ mutex_unlock(&priv->lock);
+ netdev_dbg(priv->dev, "Energy-Efficient Ethernet initialized\n");
+ return true;
}
/* stmmac_get_tx_hwtstamp - get HW TX timestamps
@@ -838,97 +800,171 @@ static void stmmac_mac_flow_ctrl(struct stmmac_priv *priv, u32 duplex)
priv->pause, tx_cnt);
}
-/**
- * stmmac_adjust_link - adjusts the link parameters
- * @dev: net device structure
- * Description: this is the helper called by the physical abstraction layer
- * drivers to communicate the phy link status. According the speed and duplex
- * this driver can invoke registered glue-logic as well.
- * It also invoke the eee initialization because it could happen when switch
- * on different networks (that are eee capable).
- */
-static void stmmac_adjust_link(struct net_device *dev)
+static void stmmac_validate(struct phylink_config *config,
+ unsigned long *supported,
+ struct phylink_link_state *state)
{
- struct stmmac_priv *priv = netdev_priv(dev);
- struct phy_device *phydev = dev->phydev;
- bool new_state = false;
-
- if (!phydev)
- return;
+ struct stmmac_priv *priv = netdev_priv(to_net_dev(config->dev));
+ __ETHTOOL_DECLARE_LINK_MODE_MASK(mac_supported) = { 0, };
+ __ETHTOOL_DECLARE_LINK_MODE_MASK(mask) = { 0, };
+ int tx_cnt = priv->plat->tx_queues_to_use;
+ int max_speed = priv->plat->max_speed;
- mutex_lock(&priv->lock);
+ phylink_set(mac_supported, 10baseT_Half);
+ phylink_set(mac_supported, 10baseT_Full);
+ phylink_set(mac_supported, 100baseT_Half);
+ phylink_set(mac_supported, 100baseT_Full);
+
+ phylink_set(mac_supported, Autoneg);
+ phylink_set(mac_supported, Pause);
+ phylink_set(mac_supported, Asym_Pause);
+ phylink_set_port_modes(mac_supported);
+
+ if (priv->plat->has_gmac ||
+ priv->plat->has_gmac4 ||
+ priv->plat->has_xgmac) {
+ phylink_set(mac_supported, 1000baseT_Half);
+ phylink_set(mac_supported, 1000baseT_Full);
+ phylink_set(mac_supported, 1000baseKX_Full);
+ }
+
+ /* Cut down 1G if asked to */
+ if ((max_speed > 0) && (max_speed < 1000)) {
+ phylink_set(mask, 1000baseT_Full);
+ phylink_set(mask, 1000baseX_Full);
+ } else if (priv->plat->has_xgmac) {
+ phylink_set(mac_supported, 2500baseT_Full);
+ phylink_set(mac_supported, 5000baseT_Full);
+ phylink_set(mac_supported, 10000baseSR_Full);
+ phylink_set(mac_supported, 10000baseLR_Full);
+ phylink_set(mac_supported, 10000baseER_Full);
+ phylink_set(mac_supported, 10000baseLRM_Full);
+ phylink_set(mac_supported, 10000baseT_Full);
+ phylink_set(mac_supported, 10000baseKX4_Full);
+ phylink_set(mac_supported, 10000baseKR_Full);
+ }
+
+ /* Half-Duplex can only work with single queue */
+ if (tx_cnt > 1) {
+ phylink_set(mask, 10baseT_Half);
+ phylink_set(mask, 100baseT_Half);
+ phylink_set(mask, 1000baseT_Half);
+ }
+
+ bitmap_and(supported, supported, mac_supported,
+ __ETHTOOL_LINK_MODE_MASK_NBITS);
+ bitmap_andnot(supported, supported, mask,
+ __ETHTOOL_LINK_MODE_MASK_NBITS);
+ bitmap_and(state->advertising, state->advertising, mac_supported,
+ __ETHTOOL_LINK_MODE_MASK_NBITS);
+ bitmap_andnot(state->advertising, state->advertising, mask,
+ __ETHTOOL_LINK_MODE_MASK_NBITS);
+}
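stmmac_validate() builds the MAC's capability set, builds a second mask of modes to veto (gigabit when max_speed caps it, half duplex when more than one TX queue is in use), then ANDs the first into supported/advertising and AND-NOTs the second out, all on ethtool link-mode bitmaps. The same set algebra as a quick sketch, with one 64-bit word standing in for the bitmap and made-up bit numbers:

#include <stdint.h>
#include <stdio.h>

/* Made-up bit numbers; the driver uses __ETHTOOL_DECLARE_LINK_MODE_MASK()
 * bitmaps and bitmap_and()/bitmap_andnot(), modelled here with one word. */
enum {
        MODE_10_HALF, MODE_10_FULL, MODE_100_HALF, MODE_100_FULL,
        MODE_1000_HALF, MODE_1000_FULL,
};
#define BIT64(n)        (1ull << (n))

int main(void)
{
        uint64_t supported = ~0ull;     /* whatever the PHY reports */
        uint64_t mac_supported = BIT64(MODE_10_HALF) | BIT64(MODE_10_FULL) |
                                 BIT64(MODE_100_HALF) | BIT64(MODE_100_FULL) |
                                 BIT64(MODE_1000_HALF) | BIT64(MODE_1000_FULL);
        uint64_t mask = 0;
        int tx_queues = 4;

        /* Half duplex cannot be offered with more than one TX queue. */
        if (tx_queues > 1)
                mask |= BIT64(MODE_10_HALF) | BIT64(MODE_100_HALF) |
                        BIT64(MODE_1000_HALF);

        supported &= mac_supported;     /* bitmap_and() */
        supported &= ~mask;             /* bitmap_andnot() */

        printf("resulting link modes: 0x%llx\n", (unsigned long long)supported);
        return 0;
}
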
- if (phydev->link) {
- u32 ctrl = readl(priv->ioaddr + MAC_CTRL_REG);
+static int stmmac_mac_link_state(struct phylink_config *config,
+ struct phylink_link_state *state)
+{
+ return -EOPNOTSUPP;
+}
- /* Now we make sure that we can be in full duplex mode.
- * If not, we operate in half-duplex mode. */
- if (phydev->duplex != priv->oldduplex) {
- new_state = true;
- if (!phydev->duplex)
- ctrl &= ~priv->hw->link.duplex;
- else
- ctrl |= priv->hw->link.duplex;
- priv->oldduplex = phydev->duplex;
- }
- /* Flow Control operation */
- if (phydev->pause)
- stmmac_mac_flow_ctrl(priv, phydev->duplex);
-
- if (phydev->speed != priv->speed) {
- new_state = true;
- ctrl &= ~priv->hw->link.speed_mask;
- switch (phydev->speed) {
- case SPEED_1000:
- ctrl |= priv->hw->link.speed1000;
- break;
- case SPEED_100:
- ctrl |= priv->hw->link.speed100;
- break;
- case SPEED_10:
- ctrl |= priv->hw->link.speed10;
- break;
- default:
- netif_warn(priv, link, priv->dev,
- "broken speed: %d\n", phydev->speed);
- phydev->speed = SPEED_UNKNOWN;
- break;
- }
- if (phydev->speed != SPEED_UNKNOWN)
- stmmac_hw_fix_mac_speed(priv);
- priv->speed = phydev->speed;
- }
+static void stmmac_mac_config(struct phylink_config *config, unsigned int mode,
+ const struct phylink_link_state *state)
+{
+ struct stmmac_priv *priv = netdev_priv(to_net_dev(config->dev));
+ u32 ctrl;
- writel(ctrl, priv->ioaddr + MAC_CTRL_REG);
+ ctrl = readl(priv->ioaddr + MAC_CTRL_REG);
+ ctrl &= ~priv->hw->link.speed_mask;
- if (!priv->oldlink) {
- new_state = true;
- priv->oldlink = true;
+ if (state->interface == PHY_INTERFACE_MODE_USXGMII) {
+ switch (state->speed) {
+ case SPEED_10000:
+ ctrl |= priv->hw->link.xgmii.speed10000;
+ break;
+ case SPEED_5000:
+ ctrl |= priv->hw->link.xgmii.speed5000;
+ break;
+ case SPEED_2500:
+ ctrl |= priv->hw->link.xgmii.speed2500;
+ break;
+ default:
+ return;
+ }
+ } else {
+ switch (state->speed) {
+ case SPEED_2500:
+ ctrl |= priv->hw->link.speed2500;
+ break;
+ case SPEED_1000:
+ ctrl |= priv->hw->link.speed1000;
+ break;
+ case SPEED_100:
+ ctrl |= priv->hw->link.speed100;
+ break;
+ case SPEED_10:
+ ctrl |= priv->hw->link.speed10;
+ break;
+ default:
+ return;
}
- } else if (priv->oldlink) {
- new_state = true;
- priv->oldlink = false;
- priv->speed = SPEED_UNKNOWN;
- priv->oldduplex = DUPLEX_UNKNOWN;
}
- if (new_state && netif_msg_link(priv))
- phy_print_status(phydev);
+ priv->speed = state->speed;
- mutex_unlock(&priv->lock);
+ if (priv->plat->fix_mac_speed)
+ priv->plat->fix_mac_speed(priv->plat->bsp_priv, state->speed);
- if (phydev->is_pseudo_fixed_link)
- /* Stop PHY layer to call the hook to adjust the link in case
- * of a switch is attached to the stmmac driver.
- */
- phydev->irq = PHY_IGNORE_INTERRUPT;
+ if (!state->duplex)
+ ctrl &= ~priv->hw->link.duplex;
else
- /* At this stage, init the EEE if supported.
- * Never called in case of fixed_link.
- */
+ ctrl |= priv->hw->link.duplex;
+
+ /* Flow Control operation */
+ if (state->pause)
+ stmmac_mac_flow_ctrl(priv, state->duplex);
+
+ writel(ctrl, priv->ioaddr + MAC_CTRL_REG);
+}
+
+static void stmmac_mac_an_restart(struct phylink_config *config)
+{
+ /* Not Supported */
+}
+
+static void stmmac_mac_link_down(struct phylink_config *config,
+ unsigned int mode, phy_interface_t interface)
+{
+ struct stmmac_priv *priv = netdev_priv(to_net_dev(config->dev));
+
+ stmmac_mac_set(priv, priv->ioaddr, false);
+ priv->eee_active = false;
+ stmmac_eee_init(priv);
+ stmmac_set_eee_pls(priv, priv->hw, false);
+}
+
+static void stmmac_mac_link_up(struct phylink_config *config,
+ unsigned int mode, phy_interface_t interface,
+ struct phy_device *phy)
+{
+ struct stmmac_priv *priv = netdev_priv(to_net_dev(config->dev));
+
+ stmmac_mac_set(priv, priv->ioaddr, true);
+ if (phy && priv->dma_cap.eee) {
+ priv->eee_active = phy_init_eee(phy, 1) >= 0;
priv->eee_enabled = stmmac_eee_init(priv);
+ stmmac_set_eee_pls(priv, priv->hw, true);
+ }
}
+static const struct phylink_mac_ops stmmac_phylink_mac_ops = {
+ .validate = stmmac_validate,
+ .mac_link_state = stmmac_mac_link_state,
+ .mac_config = stmmac_mac_config,
+ .mac_an_restart = stmmac_mac_an_restart,
+ .mac_link_down = stmmac_mac_link_down,
+ .mac_link_up = stmmac_mac_link_up,
+};
+
/**
* stmmac_check_pcs_mode - verify if RGMII/SGMII is supported
* @priv: driver private structure
@@ -965,79 +1001,48 @@ static void stmmac_check_pcs_mode(struct stmmac_priv *priv)
static int stmmac_init_phy(struct net_device *dev)
{
struct stmmac_priv *priv = netdev_priv(dev);
- u32 tx_cnt = priv->plat->tx_queues_to_use;
- struct phy_device *phydev;
- char phy_id_fmt[MII_BUS_ID_SIZE + 3];
- char bus_id[MII_BUS_ID_SIZE];
- int interface = priv->plat->interface;
- int max_speed = priv->plat->max_speed;
- priv->oldlink = false;
- priv->speed = SPEED_UNKNOWN;
- priv->oldduplex = DUPLEX_UNKNOWN;
+ struct device_node *node;
+ int ret;
- if (priv->plat->phy_node) {
- phydev = of_phy_connect(dev, priv->plat->phy_node,
- &stmmac_adjust_link, 0, interface);
- } else {
- snprintf(bus_id, MII_BUS_ID_SIZE, "stmmac-%x",
- priv->plat->bus_id);
+ node = priv->plat->phylink_node;
- snprintf(phy_id_fmt, MII_BUS_ID_SIZE + 3, PHY_ID_FMT, bus_id,
- priv->plat->phy_addr);
- netdev_dbg(priv->dev, "%s: trying to attach to %s\n", __func__,
- phy_id_fmt);
+ if (node)
+ ret = phylink_of_phy_connect(priv->phylink, node, 0);
- phydev = phy_connect(dev, phy_id_fmt, &stmmac_adjust_link,
- interface);
- }
+ /* Some DT bindings do not set up the PHY handle. Let's try to
+ * manually parse it
+ */
+ if (!node || ret) {
+ int addr = priv->plat->phy_addr;
+ struct phy_device *phydev;
- if (IS_ERR_OR_NULL(phydev)) {
- netdev_err(priv->dev, "Could not attach to PHY\n");
- if (!phydev)
+ phydev = mdiobus_get_phy(priv->mii, addr);
+ if (!phydev) {
+ netdev_err(priv->dev, "no phy at addr %d\n", addr);
return -ENODEV;
+ }
- return PTR_ERR(phydev);
+ ret = phylink_connect_phy(priv->phylink, phydev);
}
- /* Stop Advertising 1000BASE Capability if interface is not GMII */
- if ((interface == PHY_INTERFACE_MODE_MII) ||
- (interface == PHY_INTERFACE_MODE_RMII) ||
- (max_speed < 1000 && max_speed > 0))
- phy_set_max_speed(phydev, SPEED_100);
+ return ret;
+}
- /*
- * Half-duplex mode not supported with multiqueue
- * half-duplex can only works with single queue
- */
- if (tx_cnt > 1) {
- phy_remove_link_mode(phydev,
- ETHTOOL_LINK_MODE_10baseT_Half_BIT);
- phy_remove_link_mode(phydev,
- ETHTOOL_LINK_MODE_100baseT_Half_BIT);
- phy_remove_link_mode(phydev,
- ETHTOOL_LINK_MODE_1000baseT_Half_BIT);
- }
+static int stmmac_phy_setup(struct stmmac_priv *priv)
+{
+ struct fwnode_handle *fwnode = of_fwnode_handle(priv->plat->phylink_node);
+ int mode = priv->plat->interface;
+ struct phylink *phylink;
- /*
- * Broken HW is sometimes missing the pull-up resistor on the
- * MDIO line, which results in reads to non-existent devices returning
- * 0 rather than 0xffff. Catch this here and treat 0 as a non-existent
- * device as well.
- * Note: phydev->phy_id is the result of reading the UID PHY registers.
- */
- if (!priv->plat->phy_node && phydev->phy_id == 0) {
- phy_disconnect(phydev);
- return -ENODEV;
- }
+ priv->phylink_config.dev = &priv->dev->dev;
+ priv->phylink_config.type = PHYLINK_NETDEV;
- /* stmmac_adjust_link will change this to PHY_IGNORE_INTERRUPT to avoid
- * subsequent PHY polling, make sure we force a link transition if
- * we have a UP/DOWN/UP transition
- */
- if (phydev->is_pseudo_fixed_link)
- phydev->irq = PHY_POLL;
+ phylink = phylink_create(&priv->phylink_config, fwnode,
+ mode, &stmmac_phylink_mac_ops);
+ if (IS_ERR(phylink))
+ return PTR_ERR(phylink);
- phy_attached_info(phydev);
+ priv->phylink = phylink;
return 0;
}
@@ -1192,26 +1197,14 @@ static int stmmac_init_rx_buffers(struct stmmac_priv *priv, struct dma_desc *p,
int i, gfp_t flags, u32 queue)
{
struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
- struct sk_buff *skb;
+ struct stmmac_rx_buffer *buf = &rx_q->buf_pool[i];
- skb = __netdev_alloc_skb_ip_align(priv->dev, priv->dma_buf_sz, flags);
- if (!skb) {
- netdev_err(priv->dev,
- "%s: Rx init fails; skb is NULL\n", __func__);
+ buf->page = page_pool_dev_alloc_pages(rx_q->page_pool);
+ if (!buf->page)
return -ENOMEM;
- }
- rx_q->rx_skbuff[i] = skb;
- rx_q->rx_skbuff_dma[i] = dma_map_single(priv->device, skb->data,
- priv->dma_buf_sz,
- DMA_FROM_DEVICE);
- if (dma_mapping_error(priv->device, rx_q->rx_skbuff_dma[i])) {
- netdev_err(priv->dev, "%s: DMA mapping error\n", __func__);
- dev_kfree_skb_any(skb);
- return -EINVAL;
- }
-
- stmmac_set_desc_addr(priv, p, rx_q->rx_skbuff_dma[i]);
+ buf->addr = page_pool_get_dma_addr(buf->page);
+ stmmac_set_desc_addr(priv, p, buf->addr);
if (priv->dma_buf_sz == BUF_SIZE_16KiB)
stmmac_init_desc3(priv, p);
@@ -1227,13 +1220,11 @@ static int stmmac_init_rx_buffers(struct stmmac_priv *priv, struct dma_desc *p,
static void stmmac_free_rx_buffer(struct stmmac_priv *priv, u32 queue, int i)
{
struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
+ struct stmmac_rx_buffer *buf = &rx_q->buf_pool[i];
- if (rx_q->rx_skbuff[i]) {
- dma_unmap_single(priv->device, rx_q->rx_skbuff_dma[i],
- priv->dma_buf_sz, DMA_FROM_DEVICE);
- dev_kfree_skb_any(rx_q->rx_skbuff[i]);
- }
- rx_q->rx_skbuff[i] = NULL;
+ if (buf->page)
+ page_pool_put_page(rx_q->page_pool, buf->page, false);
+ buf->page = NULL;
}
/**
@@ -1316,10 +1307,6 @@ static int init_dma_rx_desc_rings(struct net_device *dev, gfp_t flags)
queue);
if (ret)
goto err_init_rx_buffers;
-
- netif_dbg(priv, probe, priv->dev, "[%p]\t[%p]\t[%x]\n",
- rx_q->rx_skbuff[i], rx_q->rx_skbuff[i]->data,
- (unsigned int)rx_q->rx_skbuff_dma[i]);
}
rx_q->cur_rx = 0;
@@ -1493,8 +1480,11 @@ static void free_dma_rx_desc_resources(struct stmmac_priv *priv)
sizeof(struct dma_extended_desc),
rx_q->dma_erx, rx_q->dma_rx_phy);
- kfree(rx_q->rx_skbuff_dma);
- kfree(rx_q->rx_skbuff);
+ kfree(rx_q->buf_pool);
+ if (rx_q->page_pool) {
+ page_pool_request_shutdown(rx_q->page_pool);
+ page_pool_destroy(rx_q->page_pool);
+ }
}
}
@@ -1546,20 +1536,29 @@ static int alloc_dma_rx_desc_resources(struct stmmac_priv *priv)
/* RX queues buffers and DMA */
for (queue = 0; queue < rx_count; queue++) {
struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
+ struct page_pool_params pp_params = { 0 };
rx_q->queue_index = queue;
rx_q->priv_data = priv;
- rx_q->rx_skbuff_dma = kmalloc_array(DMA_RX_SIZE,
- sizeof(dma_addr_t),
- GFP_KERNEL);
- if (!rx_q->rx_skbuff_dma)
+ pp_params.flags = PP_FLAG_DMA_MAP;
+ pp_params.pool_size = DMA_RX_SIZE;
+ pp_params.order = DIV_ROUND_UP(priv->dma_buf_sz, PAGE_SIZE);
+ pp_params.nid = dev_to_node(priv->device);
+ pp_params.dev = priv->device;
+ pp_params.dma_dir = DMA_FROM_DEVICE;
+
+ rx_q->page_pool = page_pool_create(&pp_params);
+ if (IS_ERR(rx_q->page_pool)) {
+ ret = PTR_ERR(rx_q->page_pool);
+ rx_q->page_pool = NULL;
goto err_dma;
+ }
- rx_q->rx_skbuff = kmalloc_array(DMA_RX_SIZE,
- sizeof(struct sk_buff *),
- GFP_KERNEL);
- if (!rx_q->rx_skbuff)
+ rx_q->buf_pool = kmalloc_array(DMA_RX_SIZE,
+ sizeof(*rx_q->buf_pool),
+ GFP_KERNEL);
+ if (!rx_q->buf_pool)
goto err_dma;
if (priv->extend_desc) {
@@ -2049,14 +2048,15 @@ static int stmmac_napi_check(struct stmmac_priv *priv, u32 chan)
struct stmmac_channel *ch = &priv->channel[chan];
if ((status & handle_rx) && (chan < priv->plat->rx_queues_to_use)) {
- stmmac_disable_dma_irq(priv, priv->ioaddr, chan);
- napi_schedule_irqoff(&ch->rx_napi);
+ if (napi_schedule_prep(&ch->rx_napi)) {
+ stmmac_disable_dma_irq(priv, priv->ioaddr, chan);
+ __napi_schedule_irqoff(&ch->rx_napi);
+ status |= handle_tx;
+ }
}
- if ((status & handle_tx) && (chan < priv->plat->tx_queues_to_use)) {
- stmmac_disable_dma_irq(priv, priv->ioaddr, chan);
+ if ((status & handle_tx) && (chan < priv->plat->tx_queues_to_use))
napi_schedule_irqoff(&ch->tx_napi);
- }
return status;
}
@@ -2118,10 +2118,10 @@ static void stmmac_mmc_setup(struct stmmac_priv *priv)
unsigned int mode = MMC_CNTRL_RESET_ON_READ | MMC_CNTRL_COUNTER_RESET |
MMC_CNTRL_PRESET | MMC_CNTRL_FULL_HALF_PRESET;
- dwmac_mmc_intr_all_mask(priv->mmcaddr);
+ stmmac_mmc_intr_all_mask(priv, priv->mmcaddr);
if (priv->dma_cap.rmon) {
- dwmac_mmc_ctrl(priv->mmcaddr, mode);
+ stmmac_mmc_ctrl(priv, priv->mmcaddr, mode);
memset(&priv->mmc, 0, sizeof(struct stmmac_counters));
} else
netdev_info(priv->dev, "No MAC Management Counters available\n");
@@ -2154,8 +2154,8 @@ static void stmmac_check_ether_addr(struct stmmac_priv *priv)
stmmac_get_umac_addr(priv, priv->hw, priv->dev->dev_addr, 0);
if (!is_valid_ether_addr(priv->dev->dev_addr))
eth_hw_addr_random(priv->dev);
- netdev_info(priv->dev, "device MAC address %pM\n",
- priv->dev->dev_addr);
+ dev_info(priv->device, "device MAC address %pM\n",
+ priv->dev->dev_addr);
}
}
@@ -2262,20 +2262,21 @@ static void stmmac_tx_timer(struct timer_list *t)
}
/**
- * stmmac_init_tx_coalesce - init tx mitigation options.
+ * stmmac_init_coalesce - init mitigation options.
* @priv: driver private structure
* Description:
- * This inits the transmit coalesce parameters: i.e. timer rate,
+ * This inits the coalesce parameters: i.e. timer rate,
* timer handler and default threshold used for enabling the
* interrupt on completion bit.
*/
-static void stmmac_init_tx_coalesce(struct stmmac_priv *priv)
+static void stmmac_init_coalesce(struct stmmac_priv *priv)
{
u32 tx_channel_count = priv->plat->tx_queues_to_use;
u32 chan;
priv->tx_coal_frames = STMMAC_TX_FRAMES;
priv->tx_coal_timer = STMMAC_COAL_TX_TIMER;
+ priv->rx_coal_frames = STMMAC_RX_FRAMES;
for (chan = 0; chan < tx_channel_count; chan++) {
struct stmmac_tx_queue *tx_q = &priv->tx_queue[chan];
@@ -2561,9 +2562,9 @@ static int stmmac_hw_setup(struct net_device *dev, bool init_ptp)
priv->tx_lpi_timer = STMMAC_DEFAULT_TWT_LS;
if (priv->use_riwt) {
- ret = stmmac_rx_watchdog(priv, priv->ioaddr, MAX_DMA_RIWT, rx_cnt);
+ ret = stmmac_rx_watchdog(priv, priv->ioaddr, MIN_DMA_RIWT, rx_cnt);
if (!ret)
- priv->rx_riwt = MAX_DMA_RIWT;
+ priv->rx_riwt = MIN_DMA_RIWT;
}
if (priv->hw->pcs)
@@ -2645,10 +2646,9 @@ static int stmmac_open(struct net_device *dev)
goto init_error;
}
- stmmac_init_tx_coalesce(priv);
+ stmmac_init_coalesce(priv);
- if (dev->phydev)
- phy_start(dev->phydev);
+ phylink_start(priv->phylink);
/* Request the IRQ lines */
ret = request_irq(dev->irq, stmmac_interrupt,
@@ -2695,8 +2695,7 @@ lpiirq_error:
wolirq_error:
free_irq(dev->irq, dev);
irq_error:
- if (dev->phydev)
- phy_stop(dev->phydev);
+ phylink_stop(priv->phylink);
for (chan = 0; chan < priv->plat->tx_queues_to_use; chan++)
del_timer_sync(&priv->tx_queue[chan].txtimer);
@@ -2705,9 +2704,7 @@ irq_error:
init_error:
free_dma_desc_resources(priv);
dma_desc_error:
- if (dev->phydev)
- phy_disconnect(dev->phydev);
-
+ phylink_disconnect_phy(priv->phylink);
return ret;
}
@@ -2726,10 +2723,8 @@ static int stmmac_release(struct net_device *dev)
del_timer_sync(&priv->eee_ctrl_timer);
/* Stop and disconnect the PHY */
- if (dev->phydev) {
- phy_stop(dev->phydev);
- phy_disconnect(dev->phydev);
- }
+ phylink_stop(priv->phylink);
+ phylink_disconnect_phy(priv->phylink);
stmmac_stop_all_queues(priv);
@@ -2772,7 +2767,7 @@ static int stmmac_release(struct net_device *dev)
* This function fills descriptor and request new descriptors according to
* buffer length to fill
*/
-static void stmmac_tso_allocator(struct stmmac_priv *priv, unsigned int des,
+static void stmmac_tso_allocator(struct stmmac_priv *priv, dma_addr_t des,
int total_len, bool last_segment, u32 queue)
{
struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue];
@@ -2783,11 +2778,18 @@ static void stmmac_tso_allocator(struct stmmac_priv *priv, unsigned int des,
tmp_len = total_len;
while (tmp_len > 0) {
+ dma_addr_t curr_addr;
+
tx_q->cur_tx = STMMAC_GET_ENTRY(tx_q->cur_tx, DMA_TX_SIZE);
WARN_ON(tx_q->tx_skbuff[tx_q->cur_tx]);
desc = tx_q->dma_tx + tx_q->cur_tx;
- desc->des0 = cpu_to_le32(des + (total_len - tmp_len));
+ curr_addr = des + (total_len - tmp_len);
+ if (priv->dma_cap.addr64 <= 32)
+ desc->des0 = cpu_to_le32(curr_addr);
+ else
+ stmmac_set_desc_addr(priv, desc, curr_addr);
+
buff_size = tmp_len >= TSO_MAX_BUFF_SIZE ?
TSO_MAX_BUFF_SIZE : tmp_len;
@@ -2833,11 +2835,12 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
struct stmmac_priv *priv = netdev_priv(dev);
int nfrags = skb_shinfo(skb)->nr_frags;
u32 queue = skb_get_queue_mapping(skb);
- unsigned int first_entry, des;
+ unsigned int first_entry;
struct stmmac_tx_queue *tx_q;
int tmp_pay_len = 0;
u32 pay_len, mss;
u8 proto_hdr_len;
+ dma_addr_t des;
int i;
tx_q = &priv->tx_queue[queue];
@@ -2894,14 +2897,19 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
tx_q->tx_skbuff_dma[first_entry].buf = des;
tx_q->tx_skbuff_dma[first_entry].len = skb_headlen(skb);
- first->des0 = cpu_to_le32(des);
+ if (priv->dma_cap.addr64 <= 32) {
+ first->des0 = cpu_to_le32(des);
- /* Fill start of payload in buff2 of first descriptor */
- if (pay_len)
- first->des1 = cpu_to_le32(des + proto_hdr_len);
+ /* Fill start of payload in buff2 of first descriptor */
+ if (pay_len)
+ first->des1 = cpu_to_le32(des + proto_hdr_len);
- /* If needed take extra descriptors to fill the remaining payload */
- tmp_pay_len = pay_len - TSO_MAX_BUFF_SIZE;
+ /* If needed take extra descriptors to fill the remaining payload */
+ tmp_pay_len = pay_len - TSO_MAX_BUFF_SIZE;
+ } else {
+ stmmac_set_desc_addr(priv, first, des);
+ tmp_pay_len = pay_len;
+ }
stmmac_tso_allocator(priv, des, tmp_pay_len, (nfrags == 0), queue);
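The addr64 checks above keep 32-bit buffer addresses in des0/des1 directly, while wider addresses go through stmmac_set_desc_addr(). On DWMAC4-class descriptors that callback is assumed to split the address roughly as sketched below; treat this as illustrative only, since the real implementation lives in the per-core descriptor ops (struct dma_desc comes from the driver's descs.h):

/* Illustrative only: split a >32-bit DMA address across des0/des1. */
static void example_set_desc_addr(struct dma_desc *p, dma_addr_t addr)
{
	p->des0 = cpu_to_le32(lower_32_bits(addr));
	p->des1 = cpu_to_le32(upper_32_bits(addr));
}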
@@ -3031,12 +3039,12 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
int i, csum_insertion = 0, is_jumbo = 0;
u32 queue = skb_get_queue_mapping(skb);
int nfrags = skb_shinfo(skb)->nr_frags;
- int entry;
- unsigned int first_entry;
struct dma_desc *desc, *first;
struct stmmac_tx_queue *tx_q;
+ unsigned int first_entry;
unsigned int enh_desc;
- unsigned int des;
+ dma_addr_t des;
+ int entry;
tx_q = &priv->tx_queue[queue];
@@ -3045,17 +3053,8 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
/* Manage oversized TCP frames for GMAC4 device */
if (skb_is_gso(skb) && priv->tso) {
- if (skb_shinfo(skb)->gso_type & (SKB_GSO_TCPV4 | SKB_GSO_TCPV6)) {
- /*
- * There is no way to determine the number of TSO
- * capable Queues. Let's use always the Queue 0
- * because if TSO is supported then at least this
- * one will be capable.
- */
- skb_set_queue_mapping(skb, 0);
-
+ if (skb_shinfo(skb)->gso_type & (SKB_GSO_TCPV4 | SKB_GSO_TCPV6))
return stmmac_tso_xmit(skb, dev);
- }
}
if (unlikely(stmmac_tx_avail(priv, queue) < nfrags + 1)) {
@@ -3281,59 +3280,38 @@ static inline void stmmac_rx_refill(struct stmmac_priv *priv, u32 queue)
int dirty = stmmac_rx_dirty(priv, queue);
unsigned int entry = rx_q->dirty_rx;
- int bfsize = priv->dma_buf_sz;
-
while (dirty-- > 0) {
+ struct stmmac_rx_buffer *buf = &rx_q->buf_pool[entry];
struct dma_desc *p;
+ bool use_rx_wd;
if (priv->extend_desc)
p = (struct dma_desc *)(rx_q->dma_erx + entry);
else
p = rx_q->dma_rx + entry;
- if (likely(!rx_q->rx_skbuff[entry])) {
- struct sk_buff *skb;
-
- skb = netdev_alloc_skb_ip_align(priv->dev, bfsize);
- if (unlikely(!skb)) {
- /* so for a while no zero-copy! */
- rx_q->rx_zeroc_thresh = STMMAC_RX_THRESH;
- if (unlikely(net_ratelimit()))
- dev_err(priv->device,
- "fail to alloc skb entry %d\n",
- entry);
+ if (!buf->page) {
+ buf->page = page_pool_dev_alloc_pages(rx_q->page_pool);
+ if (!buf->page)
break;
- }
-
- rx_q->rx_skbuff[entry] = skb;
- rx_q->rx_skbuff_dma[entry] =
- dma_map_single(priv->device, skb->data, bfsize,
- DMA_FROM_DEVICE);
- if (dma_mapping_error(priv->device,
- rx_q->rx_skbuff_dma[entry])) {
- netdev_err(priv->dev, "Rx DMA map failed\n");
- dev_kfree_skb(skb);
- break;
- }
-
- stmmac_set_desc_addr(priv, p, rx_q->rx_skbuff_dma[entry]);
- stmmac_refill_desc3(priv, rx_q, p);
-
- if (rx_q->rx_zeroc_thresh > 0)
- rx_q->rx_zeroc_thresh--;
-
- netif_dbg(priv, rx_status, priv->dev,
- "refill entry #%d\n", entry);
}
- dma_wmb();
- stmmac_set_rx_owner(priv, p, priv->use_riwt);
+ buf->addr = page_pool_get_dma_addr(buf->page);
+ stmmac_set_desc_addr(priv, p, buf->addr);
+ stmmac_refill_desc3(priv, rx_q, p);
+
+ rx_q->rx_count_frames++;
+ rx_q->rx_count_frames %= priv->rx_coal_frames;
+ use_rx_wd = priv->use_riwt && rx_q->rx_count_frames;
dma_wmb();
+ stmmac_set_rx_owner(priv, p, use_rx_wd);
entry = STMMAC_GET_ENTRY(entry, DMA_RX_SIZE);
}
rx_q->dirty_rx = entry;
+ rx_q->rx_tail_addr = rx_q->dma_rx_phy +
+ (rx_q->dirty_rx * sizeof(struct dma_desc));
stmmac_set_rx_tail_ptr(priv, priv->ioaddr, rx_q->rx_tail_addr, queue);
}
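The refill loop above now decides per descriptor whether to request an immediate completion interrupt or defer to the RX watchdog, based on rx_coal_frames. The decision boils down to the small helper sketched here (plain C, not driver code):

/* Returns true when this descriptor should rely on the RX watchdog
 * instead of raising an immediate completion interrupt.
 */
static bool example_defer_to_rx_watchdog(unsigned int *frame_count,
					 unsigned int coal_frames,
					 bool riwt_enabled)
{
	(*frame_count)++;
	*frame_count %= coal_frames;	/* hits 0 once every coal_frames buffers */

	return riwt_enabled && *frame_count;
}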
@@ -3352,9 +3330,6 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
unsigned int next_entry = rx_q->cur_rx;
int coe = priv->hw->rx_csum;
unsigned int count = 0;
- bool xmac;
-
- xmac = priv->plat->has_gmac4 || priv->plat->has_xgmac;
if (netif_msg_rx_status(priv)) {
void *rx_head;
@@ -3368,11 +3343,12 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
stmmac_display_ring(priv, rx_head, DMA_RX_SIZE, true);
}
while (count < limit) {
+ struct stmmac_rx_buffer *buf;
+ struct dma_desc *np, *p;
int entry, status;
- struct dma_desc *p;
- struct dma_desc *np;
entry = next_entry;
+ buf = &rx_q->buf_pool[entry];
if (priv->extend_desc)
p = (struct dma_desc *)(rx_q->dma_erx + entry);
@@ -3402,20 +3378,9 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
stmmac_rx_extended_status(priv, &priv->dev->stats,
&priv->xstats, rx_q->dma_erx + entry);
if (unlikely(status == discard_frame)) {
+ page_pool_recycle_direct(rx_q->page_pool, buf->page);
priv->dev->stats.rx_errors++;
- if (priv->hwts_rx_en && !priv->extend_desc) {
- /* DESC2 & DESC3 will be overwritten by device
- * with timestamp value, hence reinitialize
- * them in stmmac_rx_refill() function so that
- * device can reuse it.
- */
- dev_kfree_skb_any(rx_q->rx_skbuff[entry]);
- rx_q->rx_skbuff[entry] = NULL;
- dma_unmap_single(priv->device,
- rx_q->rx_skbuff_dma[entry],
- priv->dma_buf_sz,
- DMA_FROM_DEVICE);
- }
+ buf->page = NULL;
} else {
struct sk_buff *skb;
int frame_len;
@@ -3455,58 +3420,20 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
frame_len, status);
}
- /* The zero-copy is always used for all the sizes
- * in case of GMAC4 because it needs
- * to refill the used descriptors, always.
- */
- if (unlikely(!xmac &&
- ((frame_len < priv->rx_copybreak) ||
- stmmac_rx_threshold_count(rx_q)))) {
- skb = netdev_alloc_skb_ip_align(priv->dev,
- frame_len);
- if (unlikely(!skb)) {
- if (net_ratelimit())
- dev_warn(priv->device,
- "packet dropped\n");
- priv->dev->stats.rx_dropped++;
- continue;
- }
-
- dma_sync_single_for_cpu(priv->device,
- rx_q->rx_skbuff_dma
- [entry], frame_len,
- DMA_FROM_DEVICE);
- skb_copy_to_linear_data(skb,
- rx_q->
- rx_skbuff[entry]->data,
- frame_len);
-
- skb_put(skb, frame_len);
- dma_sync_single_for_device(priv->device,
- rx_q->rx_skbuff_dma
- [entry], frame_len,
- DMA_FROM_DEVICE);
- } else {
- skb = rx_q->rx_skbuff[entry];
- if (unlikely(!skb)) {
- if (net_ratelimit())
- netdev_err(priv->dev,
- "%s: Inconsistent Rx chain\n",
- priv->dev->name);
- priv->dev->stats.rx_dropped++;
- continue;
- }
- prefetch(skb->data - NET_IP_ALIGN);
- rx_q->rx_skbuff[entry] = NULL;
- rx_q->rx_zeroc_thresh++;
-
- skb_put(skb, frame_len);
- dma_unmap_single(priv->device,
- rx_q->rx_skbuff_dma[entry],
- priv->dma_buf_sz,
- DMA_FROM_DEVICE);
+ skb = netdev_alloc_skb_ip_align(priv->dev, frame_len);
+ if (unlikely(!skb)) {
+ priv->dev->stats.rx_dropped++;
+ continue;
}
+ dma_sync_single_for_cpu(priv->device, buf->addr,
+ frame_len, DMA_FROM_DEVICE);
+ skb_copy_to_linear_data(skb, page_address(buf->page),
+ frame_len);
+ skb_put(skb, frame_len);
+ dma_sync_single_for_device(priv->device, buf->addr,
+ frame_len, DMA_FROM_DEVICE);
+
if (netif_msg_pktdata(priv)) {
netdev_dbg(priv->dev, "frame received (%dbytes)",
frame_len);
@@ -3526,6 +3453,10 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
napi_gro_receive(&ch->rx_napi, skb);
+ /* Data payload copied into SKB, page ready for recycle */
+ page_pool_recycle_direct(rx_q->page_pool, buf->page);
+ buf->page = NULL;
+
priv->dev->stats.rx_packets++;
priv->dev->stats.rx_bytes += frame_len;
}
@@ -3568,8 +3499,8 @@ static int stmmac_napi_poll_tx(struct napi_struct *napi, int budget)
work_done = stmmac_tx_clean(priv, DMA_TX_SIZE, chan);
work_done = min(work_done, budget);
- if (work_done < budget && napi_complete_done(napi, work_done))
- stmmac_enable_dma_irq(priv, priv->ioaddr, chan);
+ if (work_done < budget)
+ napi_complete_done(napi, work_done);
/* Force transmission restart */
tx_q = &priv->tx_queue[chan];
@@ -3792,6 +3723,7 @@ static void stmmac_poll_controller(struct net_device *dev)
*/
static int stmmac_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
{
+ struct stmmac_priv *priv = netdev_priv(dev);
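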
int ret = -EOPNOTSUPP;
if (!netif_running(dev))
@@ -3801,9 +3733,7 @@ static int stmmac_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
case SIOCGMIIPHY:
case SIOCGMIIREG:
case SIOCSMIIREG:
- if (!dev->phydev)
- return -EINVAL;
- ret = phy_mii_ioctl(dev->phydev, rq, cmd);
+ ret = phylink_mii_ioctl(priv->phylink, rq, cmd);
break;
case SIOCSHWTSTAMP:
ret = stmmac_hwtstamp_set(dev, rq);
@@ -3839,23 +3769,7 @@ static int stmmac_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
return ret;
}
-static int stmmac_setup_tc_block(struct stmmac_priv *priv,
- struct tc_block_offload *f)
-{
- if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
- return -EOPNOTSUPP;
-
- switch (f->command) {
- case TC_BLOCK_BIND:
- return tcf_block_cb_register(f->block, stmmac_setup_tc_block_cb,
- priv, priv, f->extack);
- case TC_BLOCK_UNBIND:
- tcf_block_cb_unregister(f->block, stmmac_setup_tc_block_cb, priv);
- return 0;
- default:
- return -EOPNOTSUPP;
- }
-}
+static LIST_HEAD(stmmac_block_cb_list);
static int stmmac_setup_tc(struct net_device *ndev, enum tc_setup_type type,
void *type_data)
@@ -3864,7 +3778,10 @@ static int stmmac_setup_tc(struct net_device *ndev, enum tc_setup_type type,
switch (type) {
case TC_SETUP_BLOCK:
- return stmmac_setup_tc_block(priv, type_data);
+ return flow_block_cb_setup_simple(type_data,
+ &stmmac_block_cb_list,
+ stmmac_setup_tc_block_cb,
+ priv, priv, true);
case TC_SETUP_QDISC_CBS:
return stmmac_tc_setup_cbs(priv, priv, type_data);
default:
@@ -3872,6 +3789,22 @@ static int stmmac_setup_tc(struct net_device *ndev, enum tc_setup_type type,
}
}
+static u16 stmmac_select_queue(struct net_device *dev, struct sk_buff *skb,
+ struct net_device *sb_dev)
+{
+ if (skb_shinfo(skb)->gso_type & (SKB_GSO_TCPV4 | SKB_GSO_TCPV6)) {
+ /*
+ * There is no way to determine the number of TSO-
+ * capable queues. Let's always use Queue 0, because
+ * if TSO is supported then at least this one will
+ * be capable.
+ */
+ return 0;
+ }
+
+ return netdev_pick_tx(dev, skb, NULL) % dev->real_num_tx_queues;
+}
+
static int stmmac_set_mac_address(struct net_device *ndev, void *addr)
{
struct stmmac_priv *priv = netdev_priv(ndev);
@@ -4088,6 +4021,7 @@ static const struct net_device_ops stmmac_netdev_ops = {
.ndo_tx_timeout = stmmac_tx_timeout,
.ndo_do_ioctl = stmmac_ioctl,
.ndo_setup_tc = stmmac_setup_tc,
+ .ndo_select_queue = stmmac_select_queue,
#ifdef CONFIG_NET_POLL_CONTROLLER
.ndo_poll_controller = stmmac_poll_controller,
#endif
@@ -4160,6 +4094,12 @@ static int stmmac_hw_init(struct stmmac_priv *priv)
priv->plat->enh_desc = priv->dma_cap.enh_desc;
priv->plat->pmt = priv->dma_cap.pmt_remote_wake_up;
priv->hw->pmt = priv->plat->pmt;
+ if (priv->dma_cap.hash_tb_sz) {
+ priv->hw->multicast_filter_bins =
+ (BIT(priv->dma_cap.hash_tb_sz) << 5);
+ priv->hw->mcast_bits_log2 =
+ ilog2(priv->hw->multicast_filter_bins);
+ }
/* TXCOE doesn't work in thresh DMA mode */
if (priv->plat->force_thresh_dma_mode)
@@ -4237,9 +4177,8 @@ int stmmac_dvr_probe(struct device *device,
u32 queue, maxq;
int ret = 0;
- ndev = alloc_etherdev_mqs(sizeof(struct stmmac_priv),
- MTL_MAX_TX_QUEUES,
- MTL_MAX_RX_QUEUES);
+ ndev = devm_alloc_etherdev_mqs(device, sizeof(struct stmmac_priv),
+ MTL_MAX_TX_QUEUES, MTL_MAX_RX_QUEUES);
if (!ndev)
return -ENOMEM;
@@ -4271,8 +4210,7 @@ int stmmac_dvr_probe(struct device *device,
priv->wq = create_singlethread_workqueue("stmmac_wq");
if (!priv->wq) {
dev_err(priv->device, "failed to create workqueue\n");
- ret = -ENOMEM;
- goto error_wq;
+ return -ENOMEM;
}
INIT_WORK(&priv->service_task, stmmac_service_task);
@@ -4319,6 +4257,24 @@ int stmmac_dvr_probe(struct device *device,
priv->tso = true;
dev_info(priv->device, "TSO feature enabled\n");
}
+
+ if (priv->dma_cap.addr64) {
+ ret = dma_set_mask_and_coherent(device,
+ DMA_BIT_MASK(priv->dma_cap.addr64));
+ if (!ret) {
+ dev_info(priv->device, "Using %d bits DMA width\n",
+ priv->dma_cap.addr64);
+ } else {
+ ret = dma_set_mask_and_coherent(device, DMA_BIT_MASK(32));
+ if (ret) {
+ dev_err(priv->device, "Failed to set DMA Mask\n");
+ goto error_hw_init;
+ }
+
+ priv->dma_cap.addr64 = 32;
+ }
+ }
+
ndev->features |= ndev->hw_features | NETIF_F_HIGHDMA;
ndev->watchdog_timeo = msecs_to_jiffies(watchdog);
#ifdef STMMAC_VLAN_TAG_USED
@@ -4396,6 +4352,12 @@ int stmmac_dvr_probe(struct device *device,
}
}
+ ret = stmmac_phy_setup(priv);
+ if (ret) {
+ netdev_err(ndev, "failed to setup phy (%d)\n", ret);
+ goto error_phy_setup;
+ }
+
ret = register_netdev(ndev);
if (ret) {
dev_err(priv->device, "%s: ERROR %i registering the device\n",
@@ -4413,6 +4375,8 @@ int stmmac_dvr_probe(struct device *device,
return ret;
error_netdev_register:
+ phylink_destroy(priv->phylink);
+error_phy_setup:
if (priv->hw->pcs != STMMAC_PCS_RGMII &&
priv->hw->pcs != STMMAC_PCS_TBI &&
priv->hw->pcs != STMMAC_PCS_RTBI)
@@ -4428,8 +4392,6 @@ error_mdio_register:
}
error_hw_init:
destroy_workqueue(priv->wq);
-error_wq:
- free_netdev(ndev);
return ret;
}
@@ -4456,6 +4418,7 @@ int stmmac_dvr_remove(struct device *dev)
stmmac_mac_set(priv, priv->ioaddr, false);
netif_carrier_off(ndev);
unregister_netdev(ndev);
+ phylink_destroy(priv->phylink);
if (priv->plat->stmmac_rst)
reset_control_assert(priv->plat->stmmac_rst);
clk_disable_unprepare(priv->plat->pclk);
@@ -4466,7 +4429,6 @@ int stmmac_dvr_remove(struct device *dev)
stmmac_mdio_unregister(ndev);
destroy_workqueue(priv->wq);
mutex_destroy(&priv->lock);
- free_netdev(ndev);
return 0;
}
@@ -4487,8 +4449,7 @@ int stmmac_suspend(struct device *dev)
if (!ndev || !netif_running(ndev))
return 0;
- if (ndev->phydev)
- phy_stop(ndev->phydev);
+ phylink_stop(priv->phylink);
mutex_lock(&priv->lock);
@@ -4513,9 +4474,7 @@ int stmmac_suspend(struct device *dev)
}
mutex_unlock(&priv->lock);
- priv->oldlink = false;
priv->speed = SPEED_UNKNOWN;
- priv->oldduplex = DUPLEX_UNKNOWN;
return 0;
}
EXPORT_SYMBOL_GPL(stmmac_suspend);
@@ -4590,7 +4549,7 @@ int stmmac_resume(struct device *dev)
stmmac_clear_descriptors(priv);
stmmac_hw_setup(ndev, false);
- stmmac_init_tx_coalesce(priv);
+ stmmac_init_coalesce(priv);
stmmac_set_rx_mode(ndev);
stmmac_enable_all_queues(priv);
@@ -4599,8 +4558,7 @@ int stmmac_resume(struct device *dev)
mutex_unlock(&priv->lock);
- if (ndev->phydev)
- phy_start(ndev->phydev);
+ phylink_start(priv->phylink);
return 0;
}
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
index 1341bb5f693c..4304c1abc5d1 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
@@ -10,13 +10,13 @@
Maintainer: Giuseppe Cavallaro <peppe.cavallaro@st.com>
*******************************************************************************/
+#include <linux/gpio/consumer.h>
#include <linux/io.h>
#include <linux/iopoll.h>
#include <linux/mii.h>
-#include <linux/of.h>
-#include <linux/of_gpio.h>
#include <linux/of_mdio.h>
#include <linux/phy.h>
+#include <linux/property.h>
#include <linux/slab.h>
#include "dwxgmac2.h"
@@ -24,11 +24,14 @@
#define MII_BUSY 0x00000001
#define MII_WRITE 0x00000002
+#define MII_DATA_MASK GENMASK(15, 0)
/* GMAC4 defines */
#define MII_GMAC4_GOC_SHIFT 2
+#define MII_GMAC4_REG_ADDR_SHIFT 16
#define MII_GMAC4_WRITE (1 << MII_GMAC4_GOC_SHIFT)
#define MII_GMAC4_READ (3 << MII_GMAC4_GOC_SHIFT)
+#define MII_GMAC4_C45E BIT(1)
/* XGMAC defines */
#define MII_XGMAC_SADDR BIT(18)
@@ -155,22 +158,34 @@ static int stmmac_mdio_read(struct mii_bus *bus, int phyaddr, int phyreg)
struct stmmac_priv *priv = netdev_priv(ndev);
unsigned int mii_address = priv->hw->mii.addr;
unsigned int mii_data = priv->hw->mii.data;
- u32 v;
- int data;
u32 value = MII_BUSY;
+ int data = 0;
+ u32 v;
value |= (phyaddr << priv->hw->mii.addr_shift)
& priv->hw->mii.addr_mask;
value |= (phyreg << priv->hw->mii.reg_shift) & priv->hw->mii.reg_mask;
value |= (priv->clk_csr << priv->hw->mii.clk_csr_shift)
& priv->hw->mii.clk_csr_mask;
- if (priv->plat->has_gmac4)
+ if (priv->plat->has_gmac4) {
value |= MII_GMAC4_READ;
+ if (phyreg & MII_ADDR_C45) {
+ value |= MII_GMAC4_C45E;
+ value &= ~priv->hw->mii.reg_mask;
+ value |= ((phyreg >> MII_DEVADDR_C45_SHIFT) <<
+ priv->hw->mii.reg_shift) &
+ priv->hw->mii.reg_mask;
+
+ data |= (phyreg & MII_REGADDR_C45_MASK) <<
+ MII_GMAC4_REG_ADDR_SHIFT;
+ }
+ }
if (readl_poll_timeout(priv->ioaddr + mii_address, v, !(v & MII_BUSY),
100, 10000))
return -EBUSY;
+ writel(data, priv->ioaddr + mii_data);
writel(value, priv->ioaddr + mii_address);
if (readl_poll_timeout(priv->ioaddr + mii_address, v, !(v & MII_BUSY),
@@ -178,7 +193,7 @@ static int stmmac_mdio_read(struct mii_bus *bus, int phyaddr, int phyreg)
return -EBUSY;
/* Read the data from the MII data register */
- data = (int)readl(priv->ioaddr + mii_data);
+ data = (int)readl(priv->ioaddr + mii_data) & MII_DATA_MASK;
return data;
}
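For context on the new Clause 45 branch: when phylib passes a register number with MII_ADDR_C45 set, the MMD device address and the 16-bit register number share the same value. A hedged sketch of how such a value is built and taken apart again, using the constants from <linux/mdio.h> that the hunk above relies on:

#include <linux/mdio.h>
#include <linux/types.h>

/* Build a Clause 45 "phyreg" the way phylib encodes it. */
static u32 example_c45_phyreg(int devad, u16 regnum)
{
	return MII_ADDR_C45 | (devad << MII_DEVADDR_C45_SHIFT) | regnum;
}

/* Unpack it again, mirroring what the GMAC4 path above programs into
 * the address and data registers.
 */
static void example_c45_unpack(u32 phyreg, int *devad, u16 *regnum)
{
	*devad = (phyreg >> MII_DEVADDR_C45_SHIFT) & 0x1f;
	*regnum = phyreg & MII_REGADDR_C45_MASK;
}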
@@ -198,8 +213,9 @@ static int stmmac_mdio_write(struct mii_bus *bus, int phyaddr, int phyreg,
struct stmmac_priv *priv = netdev_priv(ndev);
unsigned int mii_address = priv->hw->mii.addr;
unsigned int mii_data = priv->hw->mii.data;
- u32 v;
u32 value = MII_BUSY;
+ int data = phydata;
+ u32 v;
value |= (phyaddr << priv->hw->mii.addr_shift)
& priv->hw->mii.addr_mask;
@@ -207,10 +223,21 @@ static int stmmac_mdio_write(struct mii_bus *bus, int phyaddr, int phyreg,
value |= (priv->clk_csr << priv->hw->mii.clk_csr_shift)
& priv->hw->mii.clk_csr_mask;
- if (priv->plat->has_gmac4)
+ if (priv->plat->has_gmac4) {
value |= MII_GMAC4_WRITE;
- else
+ if (phyreg & MII_ADDR_C45) {
+ value |= MII_GMAC4_C45E;
+ value &= ~priv->hw->mii.reg_mask;
+ value |= ((phyreg >> MII_DEVADDR_C45_SHIFT) <<
+ priv->hw->mii.reg_shift) &
+ priv->hw->mii.reg_mask;
+
+ data |= (phyreg & MII_REGADDR_C45_MASK) <<
+ MII_GMAC4_REG_ADDR_SHIFT;
+ }
+ } else {
value |= MII_WRITE;
+ }
/* Wait until any existing MII operation is complete */
if (readl_poll_timeout(priv->ioaddr + mii_address, v, !(v & MII_BUSY),
@@ -218,7 +245,7 @@ static int stmmac_mdio_write(struct mii_bus *bus, int phyaddr, int phyreg,
return -EBUSY;
/* Set the MII address register to write */
- writel(phydata, priv->ioaddr + mii_data);
+ writel(data, priv->ioaddr + mii_data);
writel(value, priv->ioaddr + mii_address);
/* Wait until any existing MII operation is complete */
@@ -237,51 +264,35 @@ int stmmac_mdio_reset(struct mii_bus *bus)
struct net_device *ndev = bus->priv;
struct stmmac_priv *priv = netdev_priv(ndev);
unsigned int mii_address = priv->hw->mii.addr;
- struct stmmac_mdio_bus_data *data = priv->plat->mdio_bus_data;
#ifdef CONFIG_OF
if (priv->device->of_node) {
- if (data->reset_gpio < 0) {
- struct device_node *np = priv->device->of_node;
+ struct gpio_desc *reset_gpio;
+ u32 delays[3] = { 0, 0, 0 };
- if (!np)
- return 0;
+ reset_gpio = devm_gpiod_get_optional(priv->device,
+ "snps,reset",
+ GPIOD_OUT_LOW);
+ if (IS_ERR(reset_gpio))
+ return PTR_ERR(reset_gpio);
- data->reset_gpio = of_get_named_gpio(np,
- "snps,reset-gpio", 0);
- if (data->reset_gpio < 0)
- return 0;
+ device_property_read_u32_array(priv->device,
+ "snps,reset-delays-us",
+ delays, ARRAY_SIZE(delays));
- data->active_low = of_property_read_bool(np,
- "snps,reset-active-low");
- of_property_read_u32_array(np,
- "snps,reset-delays-us", data->delays, 3);
+ if (delays[0])
+ msleep(DIV_ROUND_UP(delays[0], 1000));
- if (devm_gpio_request(priv->device, data->reset_gpio,
- "mdio-reset"))
- return 0;
- }
-
- gpio_direction_output(data->reset_gpio,
- data->active_low ? 1 : 0);
- if (data->delays[0])
- msleep(DIV_ROUND_UP(data->delays[0], 1000));
+ gpiod_set_value_cansleep(reset_gpio, 1);
+ if (delays[1])
+ msleep(DIV_ROUND_UP(delays[1], 1000));
- gpio_set_value(data->reset_gpio, data->active_low ? 0 : 1);
- if (data->delays[1])
- msleep(DIV_ROUND_UP(data->delays[1], 1000));
-
- gpio_set_value(data->reset_gpio, data->active_low ? 1 : 0);
- if (data->delays[2])
- msleep(DIV_ROUND_UP(data->delays[2], 1000));
+ gpiod_set_value_cansleep(reset_gpio, 0);
+ if (delays[2])
+ msleep(DIV_ROUND_UP(delays[2], 1000));
}
#endif
- if (data->phy_reset) {
- netdev_dbg(ndev, "stmmac_mdio_reset: calling phy_reset\n");
- data->phy_reset(priv->plat->bsp_priv);
- }
-
/* This is a workaround for problems with the STE101P PHY.
* It doesn't complete its reset until at least one clock cycle
* on MDC, so perform a dummy mdio read. To be updated for GMAC4
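The descriptor-based GPIO API used above moves the "snps,reset-active-low" polarity handling into gpiolib, so the driver only deals in logical assert/deassert values. A condensed sketch of the same reset pulse (the delays here are placeholders; the real values still come from "snps,reset-delays-us"):

#include <linux/delay.h>
#include <linux/gpio/consumer.h>

static int example_mdio_phy_reset(struct device *dev)
{
	struct gpio_desc *reset;

	reset = devm_gpiod_get_optional(dev, "snps,reset", GPIOD_OUT_LOW);
	if (IS_ERR(reset))
		return PTR_ERR(reset);
	if (!reset)
		return 0;			/* no reset line described in DT */

	gpiod_set_value_cansleep(reset, 1);	/* assert; polarity handled by gpiolib */
	usleep_range(1000, 2000);		/* placeholder delay */
	gpiod_set_value_cansleep(reset, 0);	/* release */

	return 0;
}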
@@ -318,11 +329,6 @@ int stmmac_mdio_register(struct net_device *ndev)
if (mdio_bus_data->irqs)
memcpy(new_bus->irq, mdio_bus_data->irqs, sizeof(new_bus->irq));
-#ifdef CONFIG_OF
- if (priv->device->of_node)
- mdio_bus_data->reset_gpio = -1;
-#endif
-
new_bus->name = "stmmac";
if (priv->plat->has_xgmac) {
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c
index 0bd72739a071..86f9c07a38cf 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c
@@ -63,7 +63,6 @@ static void common_default_data(struct plat_stmmacenet_data *plat)
plat->has_gmac = 1;
plat->force_sf_dma_mode = 1;
- plat->mdio_bus_data->phy_reset = NULL;
plat->mdio_bus_data->phy_mask = 0;
/* Set default value for multicast hash bins */
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
index 0f0f4b31eb7e..73fc2524372e 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
@@ -323,21 +323,6 @@ static int stmmac_dt_phy(struct plat_stmmacenet_data *plat,
{},
};
- /* If phy-handle property is passed from DT, use it as the PHY */
- plat->phy_node = of_parse_phandle(np, "phy-handle", 0);
- if (plat->phy_node)
- dev_dbg(dev, "Found phy-handle subnode\n");
-
- /* If phy-handle is not specified, check if we have a fixed-phy */
- if (!plat->phy_node && of_phy_is_fixed_link(np)) {
- if ((of_phy_register_fixed_link(np) < 0))
- return -ENODEV;
-
- dev_dbg(dev, "Found fixed-link subnode\n");
- plat->phy_node = of_node_get(np);
- mdio = false;
- }
-
if (of_match_node(need_mdio_ids, np)) {
plat->mdio_node = of_get_child_by_name(np, "mdio");
} else {
@@ -387,6 +372,13 @@ stmmac_probe_config_dt(struct platform_device *pdev, const char **mac)
*mac = of_get_mac_address(np);
plat->interface = of_get_phy_mode(np);
+ /* Some wrapper drivers still rely on phy_node. Let's save it until
+ * they are converted to phylink. */
+ plat->phy_node = of_parse_phandle(np, "phy-handle", 0);
+
+ /* PHYLINK automatically parses the phy-handle property */
+ plat->phylink_node = np;
+
/* Get max speed of operation from device tree */
if (of_property_read_u32(np, "max-speed", &plat->max_speed))
plat->max_speed = -1;
@@ -581,10 +573,6 @@ error_pclk_get:
void stmmac_remove_config_dt(struct platform_device *pdev,
struct plat_stmmacenet_data *plat)
{
- struct device_node *np = pdev->dev.of_node;
-
- if (of_phy_is_fixed_link(np))
- of_phy_deregister_fixed_link(np);
of_node_put(plat->phy_node);
of_node_put(plat->mdio_node);
}
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c
new file mode 100644
index 000000000000..a97b1ea76438
--- /dev/null
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c
@@ -0,0 +1,850 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2019 Synopsys, Inc. and/or its affiliates.
+ * stmmac Selftests Support
+ *
+ * Author: Jose Abreu <joabreu@synopsys.com>
+ */
+
+#include <linux/completion.h>
+#include <linux/ethtool.h>
+#include <linux/ip.h>
+#include <linux/phy.h>
+#include <linux/udp.h>
+#include <net/tcp.h>
+#include <net/udp.h>
+#include "stmmac.h"
+
+struct stmmachdr {
+ __be32 version;
+ __be64 magic;
+ u8 id;
+} __packed;
+
+#define STMMAC_TEST_PKT_SIZE (sizeof(struct ethhdr) + sizeof(struct iphdr) + \
+ sizeof(struct stmmachdr))
+#define STMMAC_TEST_PKT_MAGIC 0xdeadcafecafedeadULL
+#define STMMAC_LB_TIMEOUT msecs_to_jiffies(200)
+
+struct stmmac_packet_attrs {
+ int vlan;
+ int vlan_id_in;
+ int vlan_id_out;
+ unsigned char *src;
+ unsigned char *dst;
+ u32 ip_src;
+ u32 ip_dst;
+ int tcp;
+ int sport;
+ int dport;
+ u32 exp_hash;
+ int dont_wait;
+ int timeout;
+ int size;
+ int remove_sa;
+ u8 id;
+};
+
+static u8 stmmac_test_next_id;
+
+static struct sk_buff *stmmac_test_get_udp_skb(struct stmmac_priv *priv,
+ struct stmmac_packet_attrs *attr)
+{
+ struct sk_buff *skb = NULL;
+ struct udphdr *uhdr = NULL;
+ struct tcphdr *thdr = NULL;
+ struct stmmachdr *shdr;
+ struct ethhdr *ehdr;
+ struct iphdr *ihdr;
+ int iplen, size;
+
+ size = attr->size + STMMAC_TEST_PKT_SIZE;
+ if (attr->vlan) {
+ size += 4;
+ if (attr->vlan > 1)
+ size += 4;
+ }
+
+ if (attr->tcp)
+ size += sizeof(struct tcphdr);
+ else
+ size += sizeof(struct udphdr);
+
+ skb = netdev_alloc_skb(priv->dev, size);
+ if (!skb)
+ return NULL;
+
+ prefetchw(skb->data);
+ skb_reserve(skb, NET_IP_ALIGN);
+
+ if (attr->vlan > 1)
+ ehdr = skb_push(skb, ETH_HLEN + 8);
+ else if (attr->vlan)
+ ehdr = skb_push(skb, ETH_HLEN + 4);
+ else if (attr->remove_sa)
+ ehdr = skb_push(skb, ETH_HLEN - 6);
+ else
+ ehdr = skb_push(skb, ETH_HLEN);
+ skb_reset_mac_header(skb);
+
+ skb_set_network_header(skb, skb->len);
+ ihdr = skb_put(skb, sizeof(*ihdr));
+
+ skb_set_transport_header(skb, skb->len);
+ if (attr->tcp)
+ thdr = skb_put(skb, sizeof(*thdr));
+ else
+ uhdr = skb_put(skb, sizeof(*uhdr));
+
+ if (!attr->remove_sa)
+ eth_zero_addr(ehdr->h_source);
+ eth_zero_addr(ehdr->h_dest);
+ if (attr->src && !attr->remove_sa)
+ ether_addr_copy(ehdr->h_source, attr->src);
+ if (attr->dst)
+ ether_addr_copy(ehdr->h_dest, attr->dst);
+
+ if (!attr->remove_sa) {
+ ehdr->h_proto = htons(ETH_P_IP);
+ } else {
+ __be16 *ptr = (__be16 *)ehdr;
+
+ /* HACK */
+ ptr[3] = htons(ETH_P_IP);
+ }
+
+ if (attr->vlan) {
+ __be16 *tag, *proto;
+
+ if (!attr->remove_sa) {
+ tag = (void *)ehdr + ETH_HLEN;
+ proto = (void *)ehdr + (2 * ETH_ALEN);
+ } else {
+ tag = (void *)ehdr + ETH_HLEN - 6;
+ proto = (void *)ehdr + ETH_ALEN;
+ }
+
+ proto[0] = htons(ETH_P_8021Q);
+ tag[0] = htons(attr->vlan_id_out);
+ tag[1] = htons(ETH_P_IP);
+ if (attr->vlan > 1) {
+ proto[0] = htons(ETH_P_8021AD);
+ tag[1] = htons(ETH_P_8021Q);
+ tag[2] = htons(attr->vlan_id_in);
+ tag[3] = htons(ETH_P_IP);
+ }
+ }
+
+ if (attr->tcp) {
+ thdr->source = htons(attr->sport);
+ thdr->dest = htons(attr->dport);
+ thdr->doff = sizeof(struct tcphdr) / 4;
+ thdr->check = 0;
+ } else {
+ uhdr->source = htons(attr->sport);
+ uhdr->dest = htons(attr->dport);
+ uhdr->len = htons(sizeof(*shdr) + sizeof(*uhdr) + attr->size);
+ uhdr->check = 0;
+ }
+
+ ihdr->ihl = 5;
+ ihdr->ttl = 32;
+ ihdr->version = 4;
+ if (attr->tcp)
+ ihdr->protocol = IPPROTO_TCP;
+ else
+ ihdr->protocol = IPPROTO_UDP;
+ iplen = sizeof(*ihdr) + sizeof(*shdr) + attr->size;
+ if (attr->tcp)
+ iplen += sizeof(*thdr);
+ else
+ iplen += sizeof(*uhdr);
+ ihdr->tot_len = htons(iplen);
+ ihdr->frag_off = 0;
+ ihdr->saddr = 0;
+ ihdr->daddr = htonl(attr->ip_dst);
+ ihdr->tos = 0;
+ ihdr->id = 0;
+ ip_send_check(ihdr);
+
+ shdr = skb_put(skb, sizeof(*shdr));
+ shdr->version = 0;
+ shdr->magic = cpu_to_be64(STMMAC_TEST_PKT_MAGIC);
+ attr->id = stmmac_test_next_id;
+ shdr->id = stmmac_test_next_id++;
+
+ if (attr->size)
+ skb_put(skb, attr->size);
+
+ skb->csum = 0;
+ skb->ip_summed = CHECKSUM_PARTIAL;
+ if (attr->tcp) {
+ thdr->check = ~tcp_v4_check(skb->len, ihdr->saddr, ihdr->daddr, 0);
+ skb->csum_start = skb_transport_header(skb) - skb->head;
+ skb->csum_offset = offsetof(struct tcphdr, check);
+ } else {
+ udp4_hwcsum(skb, ihdr->saddr, ihdr->daddr);
+ }
+
+ skb->protocol = htons(ETH_P_IP);
+ skb->pkt_type = PACKET_HOST;
+ skb->dev = priv->dev;
+
+ return skb;
+}
+
+struct stmmac_test_priv {
+ struct stmmac_packet_attrs *packet;
+ struct packet_type pt;
+ struct completion comp;
+ int double_vlan;
+ int vlan_id;
+ int ok;
+};
+
+static int stmmac_test_loopback_validate(struct sk_buff *skb,
+ struct net_device *ndev,
+ struct packet_type *pt,
+ struct net_device *orig_ndev)
+{
+ struct stmmac_test_priv *tpriv = pt->af_packet_priv;
+ struct stmmachdr *shdr;
+ struct ethhdr *ehdr;
+ struct udphdr *uhdr;
+ struct tcphdr *thdr;
+ struct iphdr *ihdr;
+
+ skb = skb_unshare(skb, GFP_ATOMIC);
+ if (!skb)
+ goto out;
+
+ if (skb_linearize(skb))
+ goto out;
+ if (skb_headlen(skb) < (STMMAC_TEST_PKT_SIZE - ETH_HLEN))
+ goto out;
+
+ ehdr = (struct ethhdr *)skb_mac_header(skb);
+ if (tpriv->packet->dst) {
+ if (!ether_addr_equal(ehdr->h_dest, tpriv->packet->dst))
+ goto out;
+ }
+ if (tpriv->packet->src) {
+ if (!ether_addr_equal(ehdr->h_source, orig_ndev->dev_addr))
+ goto out;
+ }
+
+ ihdr = ip_hdr(skb);
+ if (tpriv->double_vlan)
+ ihdr = (struct iphdr *)(skb_network_header(skb) + 4);
+
+ if (tpriv->packet->tcp) {
+ if (ihdr->protocol != IPPROTO_TCP)
+ goto out;
+
+ thdr = (struct tcphdr *)((u8 *)ihdr + 4 * ihdr->ihl);
+ if (thdr->dest != htons(tpriv->packet->dport))
+ goto out;
+
+ shdr = (struct stmmachdr *)((u8 *)thdr + sizeof(*thdr));
+ } else {
+ if (ihdr->protocol != IPPROTO_UDP)
+ goto out;
+
+ uhdr = (struct udphdr *)((u8 *)ihdr + 4 * ihdr->ihl);
+ if (uhdr->dest != htons(tpriv->packet->dport))
+ goto out;
+
+ shdr = (struct stmmachdr *)((u8 *)uhdr + sizeof(*uhdr));
+ }
+
+ if (shdr->magic != cpu_to_be64(STMMAC_TEST_PKT_MAGIC))
+ goto out;
+ if (tpriv->packet->exp_hash && !skb->hash)
+ goto out;
+ if (tpriv->packet->id != shdr->id)
+ goto out;
+
+ tpriv->ok = true;
+ complete(&tpriv->comp);
+out:
+ kfree_skb(skb);
+ return 0;
+}
+
+static int __stmmac_test_loopback(struct stmmac_priv *priv,
+ struct stmmac_packet_attrs *attr)
+{
+ struct stmmac_test_priv *tpriv;
+ struct sk_buff *skb = NULL;
+ int ret = 0;
+
+ tpriv = kzalloc(sizeof(*tpriv), GFP_KERNEL);
+ if (!tpriv)
+ return -ENOMEM;
+
+ tpriv->ok = false;
+ init_completion(&tpriv->comp);
+
+ tpriv->pt.type = htons(ETH_P_IP);
+ tpriv->pt.func = stmmac_test_loopback_validate;
+ tpriv->pt.dev = priv->dev;
+ tpriv->pt.af_packet_priv = tpriv;
+ tpriv->packet = attr;
+ dev_add_pack(&tpriv->pt);
+
+ skb = stmmac_test_get_udp_skb(priv, attr);
+ if (!skb) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
+ skb_set_queue_mapping(skb, 0);
+ ret = dev_queue_xmit(skb);
+ if (ret)
+ goto cleanup;
+
+ if (attr->dont_wait)
+ goto cleanup;
+
+ if (!attr->timeout)
+ attr->timeout = STMMAC_LB_TIMEOUT;
+
+ wait_for_completion_timeout(&tpriv->comp, attr->timeout);
+ ret = !tpriv->ok;
+
+cleanup:
+ dev_remove_pack(&tpriv->pt);
+ kfree(tpriv);
+ return ret;
+}
+
+static int stmmac_test_mac_loopback(struct stmmac_priv *priv)
+{
+ struct stmmac_packet_attrs attr = { };
+
+ attr.dst = priv->dev->dev_addr;
+ return __stmmac_test_loopback(priv, &attr);
+}
+
+static int stmmac_test_phy_loopback(struct stmmac_priv *priv)
+{
+ struct stmmac_packet_attrs attr = { };
+ int ret;
+
+ if (!priv->dev->phydev)
+ return -EBUSY;
+
+ ret = phy_loopback(priv->dev->phydev, true);
+ if (ret)
+ return ret;
+
+ attr.dst = priv->dev->dev_addr;
+ ret = __stmmac_test_loopback(priv, &attr);
+
+ phy_loopback(priv->dev->phydev, false);
+ return ret;
+}
+
+static int stmmac_test_mmc(struct stmmac_priv *priv)
+{
+ struct stmmac_counters initial, final;
+ int ret;
+
+ memset(&initial, 0, sizeof(initial));
+ memset(&final, 0, sizeof(final));
+
+ if (!priv->dma_cap.rmon)
+ return -EOPNOTSUPP;
+
+ /* Save previous results into internal struct */
+ stmmac_mmc_read(priv, priv->mmcaddr, &priv->mmc);
+
+ ret = stmmac_test_mac_loopback(priv);
+ if (ret)
+ return ret;
+
+ /* These will be loopback results so no need to save them */
+ stmmac_mmc_read(priv, priv->mmcaddr, &final);
+
+ /*
+ * The number of MMC counters available depends on HW configuration
+ * so we just use this one to validate the feature. I hope there is
+ * not a version without this counter.
+ */
+ if (final.mmc_tx_framecount_g <= initial.mmc_tx_framecount_g)
+ return -EINVAL;
+
+ return 0;
+}
+
+static int stmmac_test_eee(struct stmmac_priv *priv)
+{
+ struct stmmac_extra_stats *initial, *final;
+ int retries = 10;
+ int ret;
+
+ if (!priv->dma_cap.eee || !priv->eee_active)
+ return -EOPNOTSUPP;
+
+ initial = kzalloc(sizeof(*initial), GFP_KERNEL);
+ if (!initial)
+ return -ENOMEM;
+
+ final = kzalloc(sizeof(*final), GFP_KERNEL);
+ if (!final) {
+ ret = -ENOMEM;
+ goto out_free_initial;
+ }
+
+ memcpy(initial, &priv->xstats, sizeof(*initial));
+
+ ret = stmmac_test_mac_loopback(priv);
+ if (ret)
+ goto out_free_final;
+
+ /* We have no traffic on the line, so sooner or later it will enter LPI */
+ while (--retries) {
+ memcpy(final, &priv->xstats, sizeof(*final));
+
+ if (final->irq_tx_path_in_lpi_mode_n >
+ initial->irq_tx_path_in_lpi_mode_n)
+ break;
+ msleep(100);
+ }
+
+ if (!retries) {
+ ret = -ETIMEDOUT;
+ goto out_free_final;
+ }
+
+ if (final->irq_tx_path_in_lpi_mode_n <=
+ initial->irq_tx_path_in_lpi_mode_n) {
+ ret = -EINVAL;
+ goto out_free_final;
+ }
+
+ if (final->irq_tx_path_exit_lpi_mode_n <=
+ initial->irq_tx_path_exit_lpi_mode_n) {
+ ret = -EINVAL;
+ goto out_free_final;
+ }
+
+out_free_final:
+ kfree(final);
+out_free_initial:
+ kfree(initial);
+ return ret;
+}
+
+static int stmmac_filter_check(struct stmmac_priv *priv)
+{
+ if (!(priv->dev->flags & IFF_PROMISC))
+ return 0;
+
+ netdev_warn(priv->dev, "Test can't be run in promiscuous mode!\n");
+ return -EOPNOTSUPP;
+}
+
+static int stmmac_test_hfilt(struct stmmac_priv *priv)
+{
+ unsigned char gd_addr[ETH_ALEN] = {0x01, 0x00, 0xcc, 0xcc, 0xdd, 0xdd};
+ unsigned char bd_addr[ETH_ALEN] = {0x09, 0x00, 0xaa, 0xaa, 0xbb, 0xbb};
+ struct stmmac_packet_attrs attr = { };
+ int ret;
+
+ ret = stmmac_filter_check(priv);
+ if (ret)
+ return ret;
+
+ ret = dev_mc_add(priv->dev, gd_addr);
+ if (ret)
+ return ret;
+
+ attr.dst = gd_addr;
+
+ /* Shall receive packet */
+ ret = __stmmac_test_loopback(priv, &attr);
+ if (ret)
+ goto cleanup;
+
+ attr.dst = bd_addr;
+
+ /* Shall NOT receive packet */
+ ret = __stmmac_test_loopback(priv, &attr);
+ ret = !ret;
+
+cleanup:
+ dev_mc_del(priv->dev, gd_addr);
+ return ret;
+}
+
+static int stmmac_test_pfilt(struct stmmac_priv *priv)
+{
+ unsigned char gd_addr[ETH_ALEN] = {0x00, 0x01, 0x44, 0x55, 0x66, 0x77};
+ unsigned char bd_addr[ETH_ALEN] = {0x08, 0x00, 0x22, 0x33, 0x44, 0x55};
+ struct stmmac_packet_attrs attr = { };
+ int ret;
+
+ if (stmmac_filter_check(priv))
+ return -EOPNOTSUPP;
+
+ ret = dev_uc_add(priv->dev, gd_addr);
+ if (ret)
+ return ret;
+
+ attr.dst = gd_addr;
+
+ /* Shall receive packet */
+ ret = __stmmac_test_loopback(priv, &attr);
+ if (ret)
+ goto cleanup;
+
+ attr.dst = bd_addr;
+
+ /* Shall NOT receive packet */
+ ret = __stmmac_test_loopback(priv, &attr);
+ ret = !ret;
+
+cleanup:
+ dev_uc_del(priv->dev, gd_addr);
+ return ret;
+}
+
+static int stmmac_dummy_sync(struct net_device *netdev, const u8 *addr)
+{
+ return 0;
+}
+
+static void stmmac_test_set_rx_mode(struct net_device *netdev)
+{
+ /* As we are in ethtool test mode we already hold the rtnl lock,
+ * so no address can be changed from userspace. We can just call
+ * the ndo_set_rx_mode() callback directly. */
+ if (netdev->netdev_ops->ndo_set_rx_mode)
+ netdev->netdev_ops->ndo_set_rx_mode(netdev);
+}
+
+static int stmmac_test_mcfilt(struct stmmac_priv *priv)
+{
+ unsigned char uc_addr[ETH_ALEN] = {0x00, 0x01, 0x44, 0x55, 0x66, 0x77};
+ unsigned char mc_addr[ETH_ALEN] = {0x01, 0x01, 0x44, 0x55, 0x66, 0x77};
+ struct stmmac_packet_attrs attr = { };
+ int ret;
+
+ if (stmmac_filter_check(priv))
+ return -EOPNOTSUPP;
+
+ /* Remove all MC addresses */
+ __dev_mc_unsync(priv->dev, NULL);
+ stmmac_test_set_rx_mode(priv->dev);
+
+ ret = dev_uc_add(priv->dev, uc_addr);
+ if (ret)
+ goto cleanup;
+
+ attr.dst = uc_addr;
+
+ /* Shall receive packet */
+ ret = __stmmac_test_loopback(priv, &attr);
+ if (ret)
+ goto cleanup;
+
+ attr.dst = mc_addr;
+
+ /* Shall NOT receive packet */
+ ret = __stmmac_test_loopback(priv, &attr);
+ ret = !ret;
+
+cleanup:
+ dev_uc_del(priv->dev, uc_addr);
+ __dev_mc_sync(priv->dev, stmmac_dummy_sync, NULL);
+ stmmac_test_set_rx_mode(priv->dev);
+ return ret;
+}
+
+static int stmmac_test_ucfilt(struct stmmac_priv *priv)
+{
+ unsigned char uc_addr[ETH_ALEN] = {0x00, 0x01, 0x44, 0x55, 0x66, 0x77};
+ unsigned char mc_addr[ETH_ALEN] = {0x01, 0x01, 0x44, 0x55, 0x66, 0x77};
+ struct stmmac_packet_attrs attr = { };
+ int ret;
+
+ if (stmmac_filter_check(priv))
+ return -EOPNOTSUPP;
+
+ /* Remove all UC addresses */
+ __dev_uc_unsync(priv->dev, NULL);
+ stmmac_test_set_rx_mode(priv->dev);
+
+ ret = dev_mc_add(priv->dev, mc_addr);
+ if (ret)
+ goto cleanup;
+
+ attr.dst = mc_addr;
+
+ /* Shall receive packet */
+ ret = __stmmac_test_loopback(priv, &attr);
+ if (ret)
+ goto cleanup;
+
+ attr.dst = uc_addr;
+
+ /* Shall NOT receive packet */
+ ret = __stmmac_test_loopback(priv, &attr);
+ ret = !ret;
+
+cleanup:
+ dev_mc_del(priv->dev, mc_addr);
+ __dev_uc_sync(priv->dev, stmmac_dummy_sync, NULL);
+ stmmac_test_set_rx_mode(priv->dev);
+ return ret;
+}
+
+static int stmmac_test_flowctrl_validate(struct sk_buff *skb,
+ struct net_device *ndev,
+ struct packet_type *pt,
+ struct net_device *orig_ndev)
+{
+ struct stmmac_test_priv *tpriv = pt->af_packet_priv;
+ struct ethhdr *ehdr;
+
+ ehdr = (struct ethhdr *)skb_mac_header(skb);
+ if (!ether_addr_equal(ehdr->h_source, orig_ndev->dev_addr))
+ goto out;
+ if (ehdr->h_proto != htons(ETH_P_PAUSE))
+ goto out;
+
+ tpriv->ok = true;
+ complete(&tpriv->comp);
+out:
+ kfree_skb(skb);
+ return 0;
+}
+
+static int stmmac_test_flowctrl(struct stmmac_priv *priv)
+{
+ unsigned char paddr[ETH_ALEN] = {0x01, 0x80, 0xC2, 0x00, 0x00, 0x01};
+ struct phy_device *phydev = priv->dev->phydev;
+ u32 rx_cnt = priv->plat->rx_queues_to_use;
+ struct stmmac_test_priv *tpriv;
+ unsigned int pkt_count;
+ int i, ret = 0;
+
+ if (!phydev || !phydev->pause)
+ return -EOPNOTSUPP;
+
+ tpriv = kzalloc(sizeof(*tpriv), GFP_KERNEL);
+ if (!tpriv)
+ return -ENOMEM;
+
+ tpriv->ok = false;
+ init_completion(&tpriv->comp);
+ tpriv->pt.type = htons(ETH_P_PAUSE);
+ tpriv->pt.func = stmmac_test_flowctrl_validate;
+ tpriv->pt.dev = priv->dev;
+ tpriv->pt.af_packet_priv = tpriv;
+ dev_add_pack(&tpriv->pt);
+
+ /* Compute the minimum number of packets needed to fill the RX FIFO */
+ pkt_count = priv->plat->rx_fifo_size;
+ if (!pkt_count)
+ pkt_count = priv->dma_cap.rx_fifo_size;
+ pkt_count /= 1400;
+ pkt_count *= 2;
+
+ for (i = 0; i < rx_cnt; i++)
+ stmmac_stop_rx(priv, priv->ioaddr, i);
+
+ ret = dev_set_promiscuity(priv->dev, 1);
+ if (ret)
+ goto cleanup;
+
+ ret = dev_mc_add(priv->dev, paddr);
+ if (ret)
+ goto cleanup;
+
+ for (i = 0; i < pkt_count; i++) {
+ struct stmmac_packet_attrs attr = { };
+
+ attr.dst = priv->dev->dev_addr;
+ attr.dont_wait = true;
+ attr.size = 1400;
+
+ ret = __stmmac_test_loopback(priv, &attr);
+ if (ret)
+ goto cleanup;
+ if (tpriv->ok)
+ break;
+ }
+
+ /* Wait for some time in case RX Watchdog is enabled */
+ msleep(200);
+
+ for (i = 0; i < rx_cnt; i++) {
+ struct stmmac_channel *ch = &priv->channel[i];
+
+ stmmac_start_rx(priv, priv->ioaddr, i);
+ local_bh_disable();
+ napi_reschedule(&ch->rx_napi);
+ local_bh_enable();
+ }
+
+ wait_for_completion_timeout(&tpriv->comp, STMMAC_LB_TIMEOUT);
+ ret = !tpriv->ok;
+
+cleanup:
+ dev_mc_del(priv->dev, paddr);
+ dev_set_promiscuity(priv->dev, -1);
+ dev_remove_pack(&tpriv->pt);
+ kfree(tpriv);
+ return ret;
+}
+
+#define STMMAC_LOOPBACK_NONE 0
+#define STMMAC_LOOPBACK_MAC 1
+#define STMMAC_LOOPBACK_PHY 2
+
+static const struct stmmac_test {
+ char name[ETH_GSTRING_LEN];
+ int lb;
+ int (*fn)(struct stmmac_priv *priv);
+} stmmac_selftests[] = {
+ {
+ .name = "MAC Loopback ",
+ .lb = STMMAC_LOOPBACK_MAC,
+ .fn = stmmac_test_mac_loopback,
+ }, {
+ .name = "PHY Loopback ",
+ .lb = STMMAC_LOOPBACK_NONE, /* Test will handle it */
+ .fn = stmmac_test_phy_loopback,
+ }, {
+ .name = "MMC Counters ",
+ .lb = STMMAC_LOOPBACK_PHY,
+ .fn = stmmac_test_mmc,
+ }, {
+ .name = "EEE ",
+ .lb = STMMAC_LOOPBACK_PHY,
+ .fn = stmmac_test_eee,
+ }, {
+ .name = "Hash Filter MC ",
+ .lb = STMMAC_LOOPBACK_PHY,
+ .fn = stmmac_test_hfilt,
+ }, {
+ .name = "Perfect Filter UC ",
+ .lb = STMMAC_LOOPBACK_PHY,
+ .fn = stmmac_test_pfilt,
+ }, {
+ .name = "MC Filter ",
+ .lb = STMMAC_LOOPBACK_PHY,
+ .fn = stmmac_test_mcfilt,
+ }, {
+ .name = "UC Filter ",
+ .lb = STMMAC_LOOPBACK_PHY,
+ .fn = stmmac_test_ucfilt,
+ }, {
+ .name = "Flow Control ",
+ .lb = STMMAC_LOOPBACK_PHY,
+ .fn = stmmac_test_flowctrl,
+ },
+};
+
+void stmmac_selftest_run(struct net_device *dev,
+ struct ethtool_test *etest, u64 *buf)
+{
+ struct stmmac_priv *priv = netdev_priv(dev);
+ int count = stmmac_selftest_get_count(priv);
+ int carrier = netif_carrier_ok(dev);
+ int i, ret;
+
+ memset(buf, 0, sizeof(*buf) * count);
+ stmmac_test_next_id = 0;
+
+ if (etest->flags != ETH_TEST_FL_OFFLINE) {
+ netdev_err(priv->dev, "Only offline tests are supported\n");
+ etest->flags |= ETH_TEST_FL_FAILED;
+ return;
+ } else if (!carrier) {
+ netdev_err(priv->dev, "You need valid Link to execute tests\n");
+ etest->flags |= ETH_TEST_FL_FAILED;
+ return;
+ }
+
+ /* We don't want extra traffic */
+ netif_carrier_off(dev);
+
+ /* Wait for the queues to drain */
+ msleep(200);
+
+ for (i = 0; i < count; i++) {
+ ret = 0;
+
+ switch (stmmac_selftests[i].lb) {
+ case STMMAC_LOOPBACK_PHY:
+ ret = -EOPNOTSUPP;
+ if (dev->phydev)
+ ret = phy_loopback(dev->phydev, true);
+ if (!ret)
+ break;
+ /* Fallthrough */
+ case STMMAC_LOOPBACK_MAC:
+ ret = stmmac_set_mac_loopback(priv, priv->ioaddr, true);
+ break;
+ case STMMAC_LOOPBACK_NONE:
+ break;
+ default:
+ ret = -EOPNOTSUPP;
+ break;
+ }
+
+ /*
+ * The first tests will always be MAC / PHY loopback. If either of
+ * them is not supported we abort early.
+ */
+ if (ret) {
+ netdev_err(priv->dev, "Loopback is not supported\n");
+ etest->flags |= ETH_TEST_FL_FAILED;
+ break;
+ }
+
+ ret = stmmac_selftests[i].fn(priv);
+ if (ret && (ret != -EOPNOTSUPP))
+ etest->flags |= ETH_TEST_FL_FAILED;
+ buf[i] = ret;
+
+ switch (stmmac_selftests[i].lb) {
+ case STMMAC_LOOPBACK_PHY:
+ ret = -EOPNOTSUPP;
+ if (dev->phydev)
+ ret = phy_loopback(dev->phydev, false);
+ if (!ret)
+ break;
+ /* Fallthrough */
+ case STMMAC_LOOPBACK_MAC:
+ stmmac_set_mac_loopback(priv, priv->ioaddr, false);
+ break;
+ default:
+ break;
+ }
+ }
+
+ /* Restart everything */
+ if (carrier)
+ netif_carrier_on(dev);
+}
+
+void stmmac_selftest_get_strings(struct stmmac_priv *priv, u8 *data)
+{
+ u8 *p = data;
+ int i;
+
+ for (i = 0; i < stmmac_selftest_get_count(priv); i++) {
+ snprintf(p, ETH_GSTRING_LEN, "%2d. %s", i + 1,
+ stmmac_selftests[i].name);
+ p += ETH_GSTRING_LEN;
+ }
+}
+
+int stmmac_selftest_get_count(struct stmmac_priv *priv)
+{
+ return ARRAY_SIZE(stmmac_selftests);
+}
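The selftests above plug into the standard ethtool self-test hooks, so they are normally run with `ethtool -t <iface> offline`. A hedged userspace sketch of the same ETHTOOL_TEST ioctl; the result buffer is sized generously because real tools query the exact test count at runtime:

/* Build with: cc -o ethtest ethtest.c ; run as root: ./ethtest eth0 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

#define MAX_TESTS 64	/* assumption: enough room for the driver's test table */

int main(int argc, char **argv)
{
	const char *ifname = argc > 1 ? argv[1] : "eth0";
	struct ethtool_test *test;
	struct ifreq ifr;
	unsigned int i;
	int fd;

	test = calloc(1, sizeof(*test) + MAX_TESTS * sizeof(test->data[0]));
	if (!test)
		return 1;
	test->cmd = ETHTOOL_TEST;
	test->flags = ETH_TEST_FL_OFFLINE;

	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
	ifr.ifr_data = (void *)test;

	fd = socket(AF_INET, SOCK_DGRAM, 0);
	if (fd < 0 || ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
		perror("SIOCETHTOOL");
		return 1;
	}

	for (i = 0; i < test->len && i < MAX_TESTS; i++)
		printf("test %u: %llu\n", i, (unsigned long long)test->data[i]);
	printf("overall: %s\n",
	       (test->flags & ETH_TEST_FL_FAILED) ? "FAILED" : "PASSED");

	close(fd);
	free(test);
	return 0;
}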
diff --git a/drivers/net/ethernet/sun/niu.c b/drivers/net/ethernet/sun/niu.c
index 6f99437a6962..0bc5863bffeb 100644
--- a/drivers/net/ethernet/sun/niu.c
+++ b/drivers/net/ethernet/sun/niu.c
@@ -1217,8 +1217,6 @@ static int link_status_1g_rgmii(struct niu *np, int *link_up_p)
spin_lock_irqsave(&np->lock, flags);
- err = -EINVAL;
-
err = mii_read(np, np->phy_addr, MII_BMSR);
if (err < 0)
goto out;
diff --git a/drivers/net/ethernet/ti/Kconfig b/drivers/net/ethernet/ti/Kconfig
index bd05a977ee7e..834afca3a019 100644
--- a/drivers/net/ethernet/ti/Kconfig
+++ b/drivers/net/ethernet/ti/Kconfig
@@ -50,6 +50,7 @@ config TI_CPSW
depends on ARCH_DAVINCI || ARCH_OMAP2PLUS || COMPILE_TEST
select TI_DAVINCI_MDIO
select MFD_SYSCON
+ select PAGE_POOL
select REGMAP
---help---
This driver supports TI's CPSW Ethernet Switch.
@@ -60,6 +61,7 @@ config TI_CPSW
config TI_CPTS
bool "TI Common Platform Time Sync (CPTS) Support"
depends on TI_CPSW || TI_KEYSTONE_NETCP || COMPILE_TEST
+ depends on COMMON_CLK
depends on POSIX_TIMERS
---help---
This driver supports the Common Platform Time Sync unit of
diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index 634fc484a0b3..f320f9a0de8b 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -31,6 +31,10 @@
#include <linux/if_vlan.h>
#include <linux/kmemleak.h>
#include <linux/sys_soc.h>
+#include <net/page_pool.h>
+#include <linux/bpf.h>
+#include <linux/bpf_trace.h>
+#include <linux/filter.h>
#include <linux/pinctrl/consumer.h>
#include <net/pkt_cls.h>
@@ -60,6 +64,10 @@ static int descs_pool_size = CPSW_CPDMA_DESCS_POOL_SIZE_DEFAULT;
module_param(descs_pool_size, int, 0444);
MODULE_PARM_DESC(descs_pool_size, "Number of CPDMA CPPI descriptors in pool");
+/* The buf includes headroom compatible with both skb and xdpf */
+#define CPSW_HEADROOM_NA (max(XDP_PACKET_HEADROOM, NET_SKB_PAD) + NET_IP_ALIGN)
+#define CPSW_HEADROOM ALIGN(CPSW_HEADROOM_NA, sizeof(long))
+
#define for_each_slave(priv, func, arg...) \
do { \
struct cpsw_slave *slave; \
@@ -74,6 +82,11 @@ MODULE_PARM_DESC(descs_pool_size, "Number of CPDMA CPPI descriptors in pool");
(func)(slave++, ##arg); \
} while (0)
+#define CPSW_XMETA_OFFSET ALIGN(sizeof(struct xdp_frame), sizeof(long))
+
+#define CPSW_XDP_CONSUMED 1
+#define CPSW_XDP_PASS 0
+
static int cpsw_ndo_vlan_rx_add_vid(struct net_device *ndev,
__be16 proto, u16 vid);
@@ -337,24 +350,58 @@ void cpsw_intr_disable(struct cpsw_common *cpsw)
return;
}
+static int cpsw_is_xdpf_handle(void *handle)
+{
+ return (unsigned long)handle & BIT(0);
+}
+
+static void *cpsw_xdpf_to_handle(struct xdp_frame *xdpf)
+{
+ return (void *)((unsigned long)xdpf | BIT(0));
+}
+
+static struct xdp_frame *cpsw_handle_to_xdpf(void *handle)
+{
+ return (struct xdp_frame *)((unsigned long)handle & ~BIT(0));
+}
+
+struct __aligned(sizeof(long)) cpsw_meta_xdp {
+ struct net_device *ndev;
+ int ch;
+};
+
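The BIT(0) trick above works because both token types are at least word aligned, leaving the low pointer bit free to mark xdp_frame handles. A tiny compile-time sanity sketch of that assumption (not part of this series):

#include <linux/build_bug.h>
#include <linux/skbuff.h>
#include <net/xdp.h>

static inline void example_token_alignment_check(void)
{
	/* The low bit of a token pointer is free only if both types are
	 * aligned to at least two bytes (they are word aligned in practice).
	 */
	BUILD_BUG_ON(__alignof__(struct sk_buff) < 2);
	BUILD_BUG_ON(__alignof__(struct xdp_frame) < 2);
}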
void cpsw_tx_handler(void *token, int len, int status)
{
+ struct cpsw_meta_xdp *xmeta;
+ struct xdp_frame *xdpf;
+ struct net_device *ndev;
struct netdev_queue *txq;
- struct sk_buff *skb = token;
- struct net_device *ndev = skb->dev;
- struct cpsw_common *cpsw = ndev_to_cpsw(ndev);
+ struct sk_buff *skb;
+ int ch;
+
+ if (cpsw_is_xdpf_handle(token)) {
+ xdpf = cpsw_handle_to_xdpf(token);
+ xmeta = (void *)xdpf + CPSW_XMETA_OFFSET;
+ ndev = xmeta->ndev;
+ ch = xmeta->ch;
+ xdp_return_frame(xdpf);
+ } else {
+ skb = token;
+ ndev = skb->dev;
+ ch = skb_get_queue_mapping(skb);
+ cpts_tx_timestamp(ndev_to_cpsw(ndev)->cpts, skb);
+ dev_kfree_skb_any(skb);
+ }
/* Check whether the queue is stopped due to stalled tx dma, if the
* queue is stopped then start the queue as we have free desc for tx
*/
- txq = netdev_get_tx_queue(ndev, skb_get_queue_mapping(skb));
+ txq = netdev_get_tx_queue(ndev, ch);
if (unlikely(netif_tx_queue_stopped(txq)))
netif_tx_wake_queue(txq);
- cpts_tx_timestamp(cpsw->cpts, skb);
ndev->stats.tx_packets++;
ndev->stats.tx_bytes += len;
- dev_kfree_skb_any(skb);
}
static void cpsw_rx_vlan_encap(struct sk_buff *skb)
@@ -400,24 +447,252 @@ static void cpsw_rx_vlan_encap(struct sk_buff *skb)
}
}
+static int cpsw_xdp_tx_frame(struct cpsw_priv *priv, struct xdp_frame *xdpf,
+ struct page *page)
+{
+ struct cpsw_common *cpsw = priv->cpsw;
+ struct cpsw_meta_xdp *xmeta;
+ struct cpdma_chan *txch;
+ dma_addr_t dma;
+ int ret, port;
+
+ xmeta = (void *)xdpf + CPSW_XMETA_OFFSET;
+ xmeta->ndev = priv->ndev;
+ xmeta->ch = 0;
+ txch = cpsw->txv[0].ch;
+
+ port = priv->emac_port + cpsw->data.dual_emac;
+ if (page) {
+ dma = page_pool_get_dma_addr(page);
+ dma += xdpf->headroom + sizeof(struct xdp_frame);
+ ret = cpdma_chan_submit_mapped(txch, cpsw_xdpf_to_handle(xdpf),
+ dma, xdpf->len, port);
+ } else {
+ if (sizeof(*xmeta) > xdpf->headroom) {
+ xdp_return_frame_rx_napi(xdpf);
+ return -EINVAL;
+ }
+
+ ret = cpdma_chan_submit(txch, cpsw_xdpf_to_handle(xdpf),
+ xdpf->data, xdpf->len, port);
+ }
+
+ if (ret) {
+ priv->ndev->stats.tx_dropped++;
+ xdp_return_frame_rx_napi(xdpf);
+ }
+
+ return ret;
+}
+
+static int cpsw_run_xdp(struct cpsw_priv *priv, int ch, struct xdp_buff *xdp,
+ struct page *page)
+{
+ struct cpsw_common *cpsw = priv->cpsw;
+ struct net_device *ndev = priv->ndev;
+ int ret = CPSW_XDP_CONSUMED;
+ struct xdp_frame *xdpf;
+ struct bpf_prog *prog;
+ u32 act;
+
+ rcu_read_lock();
+
+ prog = READ_ONCE(priv->xdp_prog);
+ if (!prog) {
+ ret = CPSW_XDP_PASS;
+ goto out;
+ }
+
+ act = bpf_prog_run_xdp(prog, xdp);
+ switch (act) {
+ case XDP_PASS:
+ ret = CPSW_XDP_PASS;
+ break;
+ case XDP_TX:
+ xdpf = convert_to_xdp_frame(xdp);
+ if (unlikely(!xdpf))
+ goto drop;
+
+ cpsw_xdp_tx_frame(priv, xdpf, page);
+ break;
+ case XDP_REDIRECT:
+ if (xdp_do_redirect(ndev, xdp, prog))
+ goto drop;
+
+ /* Have to flush here, per packet, instead of doing it in bulk
+ * at the end of the napi handler. The RX devices on this
+ * particular hardware share a common queue, so the
+ * incoming device might change per packet.
+ */
+ xdp_do_flush_map();
+ break;
+ default:
+ bpf_warn_invalid_xdp_action(act);
+ /* fall through */
+ case XDP_ABORTED:
+ trace_xdp_exception(ndev, prog, act);
+ /* fall through -- handle aborts by dropping packet */
+ case XDP_DROP:
+ goto drop;
+ }
+out:
+ rcu_read_unlock();
+ return ret;
+drop:
+ rcu_read_unlock();
+ page_pool_recycle_direct(cpsw->page_pool[ch], page);
+ return ret;
+}
+
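cpsw_run_xdp() above executes whatever BPF program userspace attached to the interface. For reference, the smallest useful such program, a libbpf-style object that passes every frame to the stack, looks roughly like this (file, section, and build/attach commands are conventional, not part of this series):

// SPDX-License-Identifier: GPL-2.0
/* Build: clang -O2 -target bpf -c xdp_pass.c -o xdp_pass.o
 * Attach: ip link set dev eth0 xdp obj xdp_pass.o sec xdp
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int xdp_pass_prog(struct xdp_md *ctx)
{
	return XDP_PASS;	/* hand every frame to the normal stack path */
}

char _license[] SEC("license") = "GPL";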
+static unsigned int cpsw_rxbuf_total_len(unsigned int len)
+{
+ len += CPSW_HEADROOM;
+ len += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+
+ return SKB_DATA_ALIGN(len);
+}
+
+static struct page_pool *cpsw_create_page_pool(struct cpsw_common *cpsw,
+ int size)
+{
+ struct page_pool_params pp_params;
+ struct page_pool *pool;
+
+ pp_params.order = 0;
+ pp_params.flags = PP_FLAG_DMA_MAP;
+ pp_params.pool_size = size;
+ pp_params.nid = NUMA_NO_NODE;
+ pp_params.dma_dir = DMA_BIDIRECTIONAL;
+ pp_params.dev = cpsw->dev;
+
+ pool = page_pool_create(&pp_params);
+ if (IS_ERR(pool))
+ dev_err(cpsw->dev, "cannot create rx page pool\n");
+
+ return pool;
+}
+
+static int cpsw_ndev_create_xdp_rxq(struct cpsw_priv *priv, int ch)
+{
+ struct cpsw_common *cpsw = priv->cpsw;
+ struct xdp_rxq_info *rxq;
+ struct page_pool *pool;
+ int ret;
+
+ pool = cpsw->page_pool[ch];
+ rxq = &priv->xdp_rxq[ch];
+
+ ret = xdp_rxq_info_reg(rxq, priv->ndev, ch);
+ if (ret)
+ return ret;
+
+ ret = xdp_rxq_info_reg_mem_model(rxq, MEM_TYPE_PAGE_POOL, pool);
+ if (ret)
+ xdp_rxq_info_unreg(rxq);
+
+ return ret;
+}
+
+static void cpsw_ndev_destroy_xdp_rxq(struct cpsw_priv *priv, int ch)
+{
+ struct xdp_rxq_info *rxq = &priv->xdp_rxq[ch];
+
+ if (!xdp_rxq_info_is_reg(rxq))
+ return;
+
+ xdp_rxq_info_unreg(rxq);
+}
+
+static int cpsw_create_rx_pool(struct cpsw_common *cpsw, int ch)
+{
+ struct page_pool *pool;
+ int ret = 0, pool_size;
+
+ pool_size = cpdma_chan_get_rx_buf_num(cpsw->rxv[ch].ch);
+ pool = cpsw_create_page_pool(cpsw, pool_size);
+ if (IS_ERR(pool))
+ ret = PTR_ERR(pool);
+ else
+ cpsw->page_pool[ch] = pool;
+
+ return ret;
+}
+
+void cpsw_destroy_xdp_rxqs(struct cpsw_common *cpsw)
+{
+ struct net_device *ndev;
+ int i, ch;
+
+ for (ch = 0; ch < cpsw->rx_ch_num; ch++) {
+ for (i = 0; i < cpsw->data.slaves; i++) {
+ ndev = cpsw->slaves[i].ndev;
+ if (!ndev)
+ continue;
+
+ cpsw_ndev_destroy_xdp_rxq(netdev_priv(ndev), ch);
+ }
+
+ page_pool_destroy(cpsw->page_pool[ch]);
+ cpsw->page_pool[ch] = NULL;
+ }
+}
+
+int cpsw_create_xdp_rxqs(struct cpsw_common *cpsw)
+{
+ struct net_device *ndev;
+ int i, ch, ret;
+
+ for (ch = 0; ch < cpsw->rx_ch_num; ch++) {
+ ret = cpsw_create_rx_pool(cpsw, ch);
+ if (ret)
+ goto err_cleanup;
+
+ /* Using the same page pool is allowed because the rx handlers
+ * of both ndevs never run simultaneously
+ */
+ for (i = 0; i < cpsw->data.slaves; i++) {
+ ndev = cpsw->slaves[i].ndev;
+ if (!ndev)
+ continue;
+
+ ret = cpsw_ndev_create_xdp_rxq(netdev_priv(ndev), ch);
+ if (ret)
+ goto err_cleanup;
+ }
+ }
+
+ return 0;
+
+err_cleanup:
+ cpsw_destroy_xdp_rxqs(cpsw);
+
+ return ret;
+}
+
static void cpsw_rx_handler(void *token, int len, int status)
{
- struct cpdma_chan *ch;
- struct sk_buff *skb = token;
- struct sk_buff *new_skb;
- struct net_device *ndev = skb->dev;
- int ret = 0, port;
- struct cpsw_common *cpsw = ndev_to_cpsw(ndev);
+ struct page *new_page, *page = token;
+ void *pa = page_address(page);
+ struct cpsw_meta_xdp *xmeta = pa + CPSW_XMETA_OFFSET;
+ struct cpsw_common *cpsw = ndev_to_cpsw(xmeta->ndev);
+ int pkt_size = cpsw->rx_packet_max;
+ int ret = 0, port, ch = xmeta->ch;
+ int headroom = CPSW_HEADROOM;
+ struct net_device *ndev = xmeta->ndev;
struct cpsw_priv *priv;
+ struct page_pool *pool;
+ struct sk_buff *skb;
+ struct xdp_buff xdp;
+ dma_addr_t dma;
- if (cpsw->data.dual_emac) {
+ if (cpsw->data.dual_emac && status >= 0) {
port = CPDMA_RX_SOURCE_PORT(status);
- if (port) {
+ if (port)
ndev = cpsw->slaves[--port].ndev;
- skb->dev = ndev;
- }
}
+ priv = netdev_priv(ndev);
+ pool = cpsw->page_pool[ch];
if (unlikely(status < 0) || unlikely(!netif_running(ndev))) {
/* In dual emac mode check for all interfaces */
if (cpsw->data.dual_emac && cpsw->usage_count &&
@@ -426,47 +701,88 @@ static void cpsw_rx_handler(void *token, int len, int status)
* is already down and the other interface is up
* and running, instead of freeing which results
* in reducing of the number of rx descriptor in
- * DMA engine, requeue skb back to cpdma.
+ * DMA engine, requeue page back to cpdma.
*/
- new_skb = skb;
+ new_page = page;
goto requeue;
}
- /* the interface is going down, skbs are purged */
- dev_kfree_skb_any(skb);
+ /* the interface is going down, pages are purged */
+ page_pool_recycle_direct(pool, page);
return;
}
- new_skb = netdev_alloc_skb_ip_align(ndev, cpsw->rx_packet_max);
- if (new_skb) {
- skb_copy_queue_mapping(new_skb, skb);
- skb_put(skb, len);
- if (status & CPDMA_RX_VLAN_ENCAP)
- cpsw_rx_vlan_encap(skb);
- priv = netdev_priv(ndev);
- if (priv->rx_ts_enabled)
- cpts_rx_timestamp(cpsw->cpts, skb);
- skb->protocol = eth_type_trans(skb, ndev);
- netif_receive_skb(skb);
- ndev->stats.rx_bytes += len;
- ndev->stats.rx_packets++;
- kmemleak_not_leak(new_skb);
- } else {
+ new_page = page_pool_dev_alloc_pages(pool);
+ if (unlikely(!new_page)) {
+ new_page = page;
ndev->stats.rx_dropped++;
- new_skb = skb;
+ goto requeue;
}
-requeue:
- if (netif_dormant(ndev)) {
- dev_kfree_skb_any(new_skb);
- return;
+ if (priv->xdp_prog) {
+ if (status & CPDMA_RX_VLAN_ENCAP) {
+ xdp.data = pa + CPSW_HEADROOM +
+ CPSW_RX_VLAN_ENCAP_HDR_SIZE;
+ xdp.data_end = xdp.data + len -
+ CPSW_RX_VLAN_ENCAP_HDR_SIZE;
+ } else {
+ xdp.data = pa + CPSW_HEADROOM;
+ xdp.data_end = xdp.data + len;
+ }
+
+ xdp_set_data_meta_invalid(&xdp);
+
+ xdp.data_hard_start = pa;
+ xdp.rxq = &priv->xdp_rxq[ch];
+
+ ret = cpsw_run_xdp(priv, ch, &xdp, page);
+ if (ret != CPSW_XDP_PASS)
+ goto requeue;
+
+ /* XDP prog might have changed packet data and boundaries */
+ len = xdp.data_end - xdp.data;
+ headroom = xdp.data - xdp.data_hard_start;
+
+ /* XDP prog can modify vlan tag, so can't use encap header */
+ status &= ~CPDMA_RX_VLAN_ENCAP;
}
- ch = cpsw->rxv[skb_get_queue_mapping(new_skb)].ch;
- ret = cpdma_chan_submit(ch, new_skb, new_skb->data,
- skb_tailroom(new_skb), 0);
- if (WARN_ON(ret < 0))
- dev_kfree_skb_any(new_skb);
+ /* pass skb to netstack if no XDP prog or returned XDP_PASS */
+ skb = build_skb(pa, cpsw_rxbuf_total_len(pkt_size));
+ if (!skb) {
+ ndev->stats.rx_dropped++;
+ page_pool_recycle_direct(pool, page);
+ goto requeue;
+ }
+
+ skb_reserve(skb, headroom);
+ skb_put(skb, len);
+ skb->dev = ndev;
+ if (status & CPDMA_RX_VLAN_ENCAP)
+ cpsw_rx_vlan_encap(skb);
+ if (priv->rx_ts_enabled)
+ cpts_rx_timestamp(cpsw->cpts, skb);
+ skb->protocol = eth_type_trans(skb, ndev);
+
+ /* unmap the page, as the netstack does not recycle page pool pages */
+ page_pool_release_page(pool, page);
+ netif_receive_skb(skb);
+
+ ndev->stats.rx_bytes += len;
+ ndev->stats.rx_packets++;
+
+requeue:
+ xmeta = page_address(new_page) + CPSW_XMETA_OFFSET;
+ xmeta->ndev = ndev;
+ xmeta->ch = ch;
+
+ dma = page_pool_get_dma_addr(new_page) + CPSW_HEADROOM;
+ ret = cpdma_chan_submit_mapped(cpsw->rxv[ch].ch, new_page, dma,
+ pkt_size, 0);
+ if (ret < 0) {
+ WARN_ON(ret == -ENOMEM);
+ page_pool_recycle_direct(pool, new_page);
+ }
}
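When an XDP program returns XDP_PASS, the handler above re-derives both the packet length and the headroom from the (possibly adjusted) data pointers before building the skb. A small userspace sketch of that pointer arithmetic; the buffer size, headroom value and header pop are made up for illustration:

#include <stdio.h>

struct xdp_buf {
	unsigned char *data_hard_start;
	unsigned char *data;
	unsigned char *data_end;
};

int main(void)
{
	unsigned char page[2048];
	unsigned int headroom = 66;	/* assumed CPSW_HEADROOM-like value */
	unsigned int pkt_len = 1000;
	struct xdp_buf xdp = {
		.data_hard_start = page,
		.data = page + headroom,
		.data_end = page + headroom + pkt_len,
	};

	/* An XDP program may move xdp.data, e.g. to pop a 14-byte header. */
	xdp.data += 14;

	/* The driver re-derives length and headroom from adjusted pointers. */
	unsigned int len = xdp.data_end - xdp.data;
	unsigned int new_headroom = xdp.data - xdp.data_hard_start;

	printf("len=%u headroom=%u\n", len, new_headroom);
	return 0;
}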
void cpsw_split_res(struct cpsw_common *cpsw)
@@ -1035,33 +1351,39 @@ static void cpsw_init_host_port(struct cpsw_priv *priv)
int cpsw_fill_rx_channels(struct cpsw_priv *priv)
{
struct cpsw_common *cpsw = priv->cpsw;
- struct sk_buff *skb;
+ struct cpsw_meta_xdp *xmeta;
+ struct page_pool *pool;
+ struct page *page;
int ch_buf_num;
int ch, i, ret;
+ dma_addr_t dma;
for (ch = 0; ch < cpsw->rx_ch_num; ch++) {
+ pool = cpsw->page_pool[ch];
ch_buf_num = cpdma_chan_get_rx_buf_num(cpsw->rxv[ch].ch);
for (i = 0; i < ch_buf_num; i++) {
- skb = __netdev_alloc_skb_ip_align(priv->ndev,
- cpsw->rx_packet_max,
- GFP_KERNEL);
- if (!skb) {
- cpsw_err(priv, ifup, "cannot allocate skb\n");
+ page = page_pool_dev_alloc_pages(pool);
+ if (!page) {
+ cpsw_err(priv, ifup, "allocate rx page err\n");
return -ENOMEM;
}
- skb_set_queue_mapping(skb, ch);
- ret = cpdma_chan_submit(cpsw->rxv[ch].ch, skb,
- skb->data, skb_tailroom(skb),
- 0);
+ xmeta = page_address(page) + CPSW_XMETA_OFFSET;
+ xmeta->ndev = priv->ndev;
+ xmeta->ch = ch;
+
+ dma = page_pool_get_dma_addr(page) + CPSW_HEADROOM;
+ ret = cpdma_chan_idle_submit_mapped(cpsw->rxv[ch].ch,
+ page, dma,
+ cpsw->rx_packet_max,
+ 0);
if (ret < 0) {
cpsw_err(priv, ifup,
- "cannot submit skb to channel %d rx, error %d\n",
+ "cannot submit page to channel %d rx, error %d\n",
ch, ret);
- kfree_skb(skb);
+ page_pool_recycle_direct(pool, page);
return ret;
}
- kmemleak_not_leak(skb);
}
cpsw_info(priv, ifup, "ch %d rx, submitted %d descriptors\n",
@@ -1397,6 +1719,13 @@ static int cpsw_ndo_open(struct net_device *ndev)
enable_irq(cpsw->irqs_table[0]);
}
+ /* create rxqs for both interfaces in dual mac mode, as they share the
+ * same pool and must be destroyed together once there are no users.
+ */
+ ret = cpsw_create_xdp_rxqs(cpsw);
+ if (ret < 0)
+ goto err_cleanup;
+
ret = cpsw_fill_rx_channels(priv);
if (ret < 0)
goto err_cleanup;
@@ -1423,7 +1752,11 @@ static int cpsw_ndo_open(struct net_device *ndev)
return 0;
err_cleanup:
- cpdma_ctlr_stop(cpsw->dma);
+ if (!cpsw->usage_count) {
+ cpdma_ctlr_stop(cpsw->dma);
+ cpsw_destroy_xdp_rxqs(cpsw);
+ }
+
for_each_slave(priv, cpsw_slave_stop, cpsw);
pm_runtime_put_sync(cpsw->dev);
netif_carrier_off(priv->ndev);
@@ -1447,6 +1780,7 @@ static int cpsw_ndo_stop(struct net_device *ndev)
cpsw_intr_disable(cpsw);
cpdma_ctlr_stop(cpsw->dma);
cpsw_ale_stop(cpsw->ale);
+ cpsw_destroy_xdp_rxqs(cpsw);
}
for_each_slave(priv, cpsw_slave_stop, cpsw);
@@ -2004,6 +2338,64 @@ static int cpsw_ndo_setup_tc(struct net_device *ndev, enum tc_setup_type type,
}
}
+static int cpsw_xdp_prog_setup(struct cpsw_priv *priv, struct netdev_bpf *bpf)
+{
+ struct bpf_prog *prog = bpf->prog;
+
+ if (!priv->xdpi.prog && !prog)
+ return 0;
+
+ if (!xdp_attachment_flags_ok(&priv->xdpi, bpf))
+ return -EBUSY;
+
+ WRITE_ONCE(priv->xdp_prog, prog);
+
+ xdp_attachment_setup(&priv->xdpi, bpf);
+
+ return 0;
+}
+
+static int cpsw_ndo_bpf(struct net_device *ndev, struct netdev_bpf *bpf)
+{
+ struct cpsw_priv *priv = netdev_priv(ndev);
+
+ switch (bpf->command) {
+ case XDP_SETUP_PROG:
+ return cpsw_xdp_prog_setup(priv, bpf);
+
+ case XDP_QUERY_PROG:
+ return xdp_attachment_query(&priv->xdpi, bpf);
+
+ default:
+ return -EINVAL;
+ }
+}
+
+static int cpsw_ndo_xdp_xmit(struct net_device *ndev, int n,
+ struct xdp_frame **frames, u32 flags)
+{
+ struct cpsw_priv *priv = netdev_priv(ndev);
+ struct xdp_frame *xdpf;
+ int i, drops = 0;
+
+ if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
+ return -EINVAL;
+
+ for (i = 0; i < n; i++) {
+ xdpf = frames[i];
+ if (xdpf->len < CPSW_MIN_PACKET_SIZE) {
+ xdp_return_frame_rx_napi(xdpf);
+ drops++;
+ continue;
+ }
+
+ if (cpsw_xdp_tx_frame(priv, xdpf, NULL))
+ drops++;
+ }
+
+ return n - drops;
+}
+
#ifdef CONFIG_NET_POLL_CONTROLLER
static void cpsw_ndo_poll_controller(struct net_device *ndev)
{
@@ -2032,6 +2424,8 @@ static const struct net_device_ops cpsw_netdev_ops = {
.ndo_vlan_rx_add_vid = cpsw_ndo_vlan_rx_add_vid,
.ndo_vlan_rx_kill_vid = cpsw_ndo_vlan_rx_kill_vid,
.ndo_setup_tc = cpsw_ndo_setup_tc,
+ .ndo_bpf = cpsw_ndo_bpf,
+ .ndo_xdp_xmit = cpsw_ndo_xdp_xmit,
};
static void cpsw_get_drvinfo(struct net_device *ndev,
@@ -2179,6 +2573,7 @@ static int cpsw_probe_dt(struct cpsw_platform_data *data,
return ret;
}
+ slave_data->slave_node = slave_node;
slave_data->phy_node = of_parse_phandle(slave_node,
"phy-handle", 0);
parp = of_get_property(slave_node, "phy_id", &lenp);
@@ -2262,8 +2657,7 @@ no_phy_slave:
static void cpsw_remove_dt(struct platform_device *pdev)
{
- struct net_device *ndev = platform_get_drvdata(pdev);
- struct cpsw_common *cpsw = ndev_to_cpsw(ndev);
+ struct cpsw_common *cpsw = platform_get_drvdata(pdev);
struct cpsw_platform_data *data = &cpsw->data;
struct device_node *node = pdev->dev.of_node;
struct device_node *slave_node;
@@ -2330,6 +2724,7 @@ static int cpsw_probe_dual_emac(struct cpsw_priv *priv)
/* register the network device */
SET_NETDEV_DEV(ndev, cpsw->dev);
+ ndev->dev.of_node = cpsw->slaves[1].data->slave_node;
ret = register_netdev(ndev);
if (ret)
dev_err(cpsw->dev, "cpsw: error registering net device\n");
@@ -2474,7 +2869,7 @@ static int cpsw_probe(struct platform_device *pdev)
goto clean_cpts;
}
- platform_set_drvdata(pdev, ndev);
+ platform_set_drvdata(pdev, cpsw);
priv = netdev_priv(ndev);
priv->cpsw = cpsw;
priv->ndev = ndev;
@@ -2507,6 +2902,7 @@ static int cpsw_probe(struct platform_device *pdev)
/* register the network device */
SET_NETDEV_DEV(ndev, dev);
+ ndev->dev.of_node = cpsw->slaves[0].data->slave_node;
ret = register_netdev(ndev);
if (ret) {
dev_err(dev, "error registering net device\n");
@@ -2567,9 +2963,8 @@ clean_runtime_disable_ret:
static int cpsw_remove(struct platform_device *pdev)
{
- struct net_device *ndev = platform_get_drvdata(pdev);
- struct cpsw_common *cpsw = ndev_to_cpsw(ndev);
- int ret;
+ struct cpsw_common *cpsw = platform_get_drvdata(pdev);
+ int i, ret;
ret = pm_runtime_get_sync(&pdev->dev);
if (ret < 0) {
@@ -2577,9 +2972,9 @@ static int cpsw_remove(struct platform_device *pdev)
return ret;
}
- if (cpsw->data.dual_emac)
- unregister_netdev(cpsw->slaves[1].ndev);
- unregister_netdev(ndev);
+ for (i = 0; i < cpsw->data.slaves; i++)
+ if (cpsw->slaves[i].ndev)
+ unregister_netdev(cpsw->slaves[i].ndev);
cpts_release(cpsw->cpts);
cpdma_ctlr_destroy(cpsw->dma);
@@ -2592,20 +2987,13 @@ static int cpsw_remove(struct platform_device *pdev)
#ifdef CONFIG_PM_SLEEP
static int cpsw_suspend(struct device *dev)
{
- struct net_device *ndev = dev_get_drvdata(dev);
- struct cpsw_common *cpsw = ndev_to_cpsw(ndev);
-
- if (cpsw->data.dual_emac) {
- int i;
+ struct cpsw_common *cpsw = dev_get_drvdata(dev);
+ int i;
- for (i = 0; i < cpsw->data.slaves; i++) {
+ for (i = 0; i < cpsw->data.slaves; i++)
+ if (cpsw->slaves[i].ndev)
if (netif_running(cpsw->slaves[i].ndev))
cpsw_ndo_stop(cpsw->slaves[i].ndev);
- }
- } else {
- if (netif_running(ndev))
- cpsw_ndo_stop(ndev);
- }
/* Select sleep pin state */
pinctrl_pm_select_sleep_state(dev);
@@ -2615,25 +3003,20 @@ static int cpsw_suspend(struct device *dev)
static int cpsw_resume(struct device *dev)
{
- struct net_device *ndev = dev_get_drvdata(dev);
- struct cpsw_common *cpsw = ndev_to_cpsw(ndev);
+ struct cpsw_common *cpsw = dev_get_drvdata(dev);
+ int i;
/* Select default pin state */
pinctrl_pm_select_default_state(dev);
/* shut up ASSERT_RTNL() warning in netif_set_real_num_tx/rx_queues */
rtnl_lock();
- if (cpsw->data.dual_emac) {
- int i;
- for (i = 0; i < cpsw->data.slaves; i++) {
+ for (i = 0; i < cpsw->data.slaves; i++)
+ if (cpsw->slaves[i].ndev)
if (netif_running(cpsw->slaves[i].ndev))
cpsw_ndo_open(cpsw->slaves[i].ndev);
- }
- } else {
- if (netif_running(ndev))
- cpsw_ndo_open(ndev);
- }
+
rtnl_unlock();
return 0;
diff --git a/drivers/net/ethernet/ti/cpsw_ethtool.c b/drivers/net/ethernet/ti/cpsw_ethtool.c
index 6d1c9ebae7cc..31248a6cc642 100644
--- a/drivers/net/ethernet/ti/cpsw_ethtool.c
+++ b/drivers/net/ethernet/ti/cpsw_ethtool.c
@@ -458,21 +458,22 @@ int cpsw_nway_reset(struct net_device *ndev)
static void cpsw_suspend_data_pass(struct net_device *ndev)
{
struct cpsw_common *cpsw = ndev_to_cpsw(ndev);
- struct cpsw_slave *slave;
int i;
/* Disable NAPI scheduling */
cpsw_intr_disable(cpsw);
/* Stop all transmit queues for every network device.
- * Disable re-using rx descriptors with dormant_on.
*/
- for (i = cpsw->data.slaves, slave = cpsw->slaves; i; i--, slave++) {
- if (!(slave->ndev && netif_running(slave->ndev)))
+ for (i = 0; i < cpsw->data.slaves; i++) {
+ ndev = cpsw->slaves[i].ndev;
+ if (!(ndev && netif_running(ndev)))
continue;
- netif_tx_stop_all_queues(slave->ndev);
- netif_dormant_on(slave->ndev);
+ netif_tx_stop_all_queues(ndev);
+
+ /* Barrier, so that stop_queue is visible to other cpus */
+ smp_mb__after_atomic();
}
/* Handle rest of tx packets and stop cpdma channels */
@@ -483,14 +484,8 @@ static int cpsw_resume_data_pass(struct net_device *ndev)
{
struct cpsw_priv *priv = netdev_priv(ndev);
struct cpsw_common *cpsw = priv->cpsw;
- struct cpsw_slave *slave;
int i, ret;
- /* Allow rx packets handling */
- for (i = cpsw->data.slaves, slave = cpsw->slaves; i; i--, slave++)
- if (slave->ndev && netif_running(slave->ndev))
- netif_dormant_off(slave->ndev);
-
/* After this receive is started */
if (cpsw->usage_count) {
ret = cpsw_fill_rx_channels(priv);
@@ -502,9 +497,11 @@ static int cpsw_resume_data_pass(struct net_device *ndev)
}
/* Resume transmit for every affected interface */
- for (i = cpsw->data.slaves, slave = cpsw->slaves; i; i--, slave++)
- if (slave->ndev && netif_running(slave->ndev))
- netif_tx_start_all_queues(slave->ndev);
+ for (i = 0; i < cpsw->data.slaves; i++) {
+ ndev = cpsw->slaves[i].ndev;
+ if (ndev && netif_running(ndev))
+ netif_tx_start_all_queues(ndev);
+ }
return 0;
}
@@ -581,14 +578,26 @@ static int cpsw_update_channels_res(struct cpsw_priv *priv, int ch_num, int rx,
return 0;
}
+static void cpsw_fail(struct cpsw_common *cpsw)
+{
+ struct net_device *ndev;
+ int i;
+
+ for (i = 0; i < cpsw->data.slaves; i++) {
+ ndev = cpsw->slaves[i].ndev;
+ if (ndev)
+ dev_close(ndev);
+ }
+}
+
int cpsw_set_channels_common(struct net_device *ndev,
struct ethtool_channels *chs,
cpdma_handler_fn rx_handler)
{
struct cpsw_priv *priv = netdev_priv(ndev);
struct cpsw_common *cpsw = priv->cpsw;
- struct cpsw_slave *slave;
- int i, ret;
+ struct net_device *sl_ndev;
+ int i, new_pools, ret;
ret = cpsw_check_ch_settings(cpsw, chs);
if (ret < 0)
@@ -596,6 +605,8 @@ int cpsw_set_channels_common(struct net_device *ndev,
cpsw_suspend_data_pass(ndev);
+ new_pools = (chs->rx_count != cpsw->rx_ch_num) && cpsw->usage_count;
+
ret = cpsw_update_channels_res(priv, chs->rx_count, 1, rx_handler);
if (ret)
goto err;
@@ -604,35 +615,40 @@ int cpsw_set_channels_common(struct net_device *ndev,
if (ret)
goto err;
- for (i = cpsw->data.slaves, slave = cpsw->slaves; i; i--, slave++) {
- if (!(slave->ndev && netif_running(slave->ndev)))
+ for (i = 0; i < cpsw->data.slaves; i++) {
+ sl_ndev = cpsw->slaves[i].ndev;
+ if (!(sl_ndev && netif_running(sl_ndev)))
continue;
/* Inform stack about new count of queues */
- ret = netif_set_real_num_tx_queues(slave->ndev,
- cpsw->tx_ch_num);
+ ret = netif_set_real_num_tx_queues(sl_ndev, cpsw->tx_ch_num);
if (ret) {
dev_err(priv->dev, "cannot set real number of tx queues\n");
goto err;
}
- ret = netif_set_real_num_rx_queues(slave->ndev,
- cpsw->rx_ch_num);
+ ret = netif_set_real_num_rx_queues(sl_ndev, cpsw->rx_ch_num);
if (ret) {
dev_err(priv->dev, "cannot set real number of rx queues\n");
goto err;
}
}
- if (cpsw->usage_count)
- cpsw_split_res(cpsw);
+ cpsw_split_res(cpsw);
+
+ if (new_pools) {
+ cpsw_destroy_xdp_rxqs(cpsw);
+ ret = cpsw_create_xdp_rxqs(cpsw);
+ if (ret)
+ goto err;
+ }
ret = cpsw_resume_data_pass(ndev);
if (!ret)
return 0;
err:
dev_err(priv->dev, "cannot update channels number, closing device\n");
- dev_close(ndev);
+ cpsw_fail(cpsw);
return ret;
}
@@ -652,9 +668,8 @@ void cpsw_get_ringparam(struct net_device *ndev,
int cpsw_set_ringparam(struct net_device *ndev,
struct ethtool_ringparam *ering)
{
- struct cpsw_priv *priv = netdev_priv(ndev);
- struct cpsw_common *cpsw = priv->cpsw;
- int ret;
+ struct cpsw_common *cpsw = ndev_to_cpsw(ndev);
+ int descs_num, ret;
/* ignore ering->tx_pending - only rx_pending adjustment is supported */
@@ -663,22 +678,34 @@ int cpsw_set_ringparam(struct net_device *ndev,
ering->rx_pending > (cpsw->descs_pool_size - CPSW_MAX_QUEUES))
return -EINVAL;
- if (ering->rx_pending == cpdma_get_num_rx_descs(cpsw->dma))
+ descs_num = cpdma_get_num_rx_descs(cpsw->dma);
+ if (ering->rx_pending == descs_num)
return 0;
cpsw_suspend_data_pass(ndev);
- cpdma_set_num_rx_descs(cpsw->dma, ering->rx_pending);
+ ret = cpdma_set_num_rx_descs(cpsw->dma, ering->rx_pending);
+ if (ret) {
+ if (cpsw_resume_data_pass(ndev))
+ goto err;
+
+ return ret;
+ }
- if (cpsw->usage_count)
- cpdma_chan_split_pool(cpsw->dma);
+ if (cpsw->usage_count) {
+ cpsw_destroy_xdp_rxqs(cpsw);
+ ret = cpsw_create_xdp_rxqs(cpsw);
+ if (ret)
+ goto err;
+ }
ret = cpsw_resume_data_pass(ndev);
if (!ret)
return 0;
-
+err:
+ cpdma_set_num_rx_descs(cpsw->dma, descs_num);
dev_err(cpsw->dev, "cannot set ring params, closing device\n");
- dev_close(ndev);
+ cpsw_fail(cpsw);
return ret;
}
diff --git a/drivers/net/ethernet/ti/cpsw_priv.h b/drivers/net/ethernet/ti/cpsw_priv.h
index 04795b97ee71..362c5a986869 100644
--- a/drivers/net/ethernet/ti/cpsw_priv.h
+++ b/drivers/net/ethernet/ti/cpsw_priv.h
@@ -272,6 +272,7 @@ struct cpsw_host_regs {
};
struct cpsw_slave_data {
+ struct device_node *slave_node;
struct device_node *phy_node;
char phy_id[MII_BUS_ID_SIZE];
int phy_if;
@@ -346,6 +347,7 @@ struct cpsw_common {
int rx_ch_num, tx_ch_num;
int speed;
int usage_count;
+ struct page_pool *page_pool[CPSW_MAX_QUEUES];
};
struct cpsw_priv {
@@ -360,6 +362,10 @@ struct cpsw_priv {
int shp_cfg_speed;
int tx_ts_enabled;
int rx_ts_enabled;
+ struct bpf_prog *xdp_prog;
+ struct xdp_rxq_info xdp_rxq[CPSW_MAX_QUEUES];
+ struct xdp_attachment_info xdpi;
+
u32 emac_port;
struct cpsw_common *cpsw;
};
@@ -391,6 +397,8 @@ int cpsw_fill_rx_channels(struct cpsw_priv *priv);
void cpsw_intr_enable(struct cpsw_common *cpsw);
void cpsw_intr_disable(struct cpsw_common *cpsw);
void cpsw_tx_handler(void *token, int len, int status);
+int cpsw_create_xdp_rxqs(struct cpsw_common *cpsw);
+void cpsw_destroy_xdp_rxqs(struct cpsw_common *cpsw);
/* ethtool */
u32 cpsw_get_msglevel(struct net_device *ndev);
diff --git a/drivers/net/ethernet/ti/cpts.c b/drivers/net/ethernet/ti/cpts.c
index e257018ada71..61136428e2c0 100644
--- a/drivers/net/ethernet/ti/cpts.c
+++ b/drivers/net/ethernet/ti/cpts.c
@@ -5,6 +5,7 @@
* Copyright (C) 2012 Richard Cochran <richardcochran@gmail.com>
*
*/
+#include <linux/clk-provider.h>
#include <linux/err.h>
#include <linux/if.h>
#include <linux/hrtimer.h>
@@ -532,6 +533,82 @@ static void cpts_calc_mult_shift(struct cpts *cpts)
freq, cpts->cc.mult, cpts->cc.shift, (ns - NSEC_PER_SEC));
}
+static int cpts_of_mux_clk_setup(struct cpts *cpts, struct device_node *node)
+{
+ struct device_node *refclk_np;
+ const char **parent_names;
+ unsigned int num_parents;
+ struct clk_hw *clk_hw;
+ int ret = -EINVAL;
+ u32 *mux_table;
+
+ refclk_np = of_get_child_by_name(node, "cpts-refclk-mux");
+ if (!refclk_np)
+ /* refclk selection is not supported on all SoCs */
+ return 0;
+
+ num_parents = of_clk_get_parent_count(refclk_np);
+ if (num_parents < 1) {
+ dev_err(cpts->dev, "mux-clock %s must have parents\n",
+ refclk_np->name);
+ goto mux_fail;
+ }
+
+ parent_names = devm_kzalloc(cpts->dev, (sizeof(char *) * num_parents),
+ GFP_KERNEL);
+
+ mux_table = devm_kzalloc(cpts->dev, sizeof(*mux_table) * num_parents,
+ GFP_KERNEL);
+ if (!mux_table || !parent_names) {
+ ret = -ENOMEM;
+ goto mux_fail;
+ }
+
+ of_clk_parent_fill(refclk_np, parent_names, num_parents);
+
+ ret = of_property_read_variable_u32_array(refclk_np, "ti,mux-tbl",
+ mux_table,
+ num_parents, num_parents);
+ if (ret < 0)
+ goto mux_fail;
+
+ clk_hw = clk_hw_register_mux_table(cpts->dev, refclk_np->name,
+ parent_names, num_parents,
+ 0,
+ &cpts->reg->rftclk_sel, 0, 0x1F,
+ 0, mux_table, NULL);
+ if (IS_ERR(clk_hw)) {
+ ret = PTR_ERR(clk_hw);
+ goto mux_fail;
+ }
+
+ ret = devm_add_action_or_reset(cpts->dev,
+ (void(*)(void *))clk_hw_unregister_mux,
+ clk_hw);
+ if (ret) {
+ dev_err(cpts->dev, "add clkmux unreg action %d", ret);
+ goto mux_fail;
+ }
+
+ ret = of_clk_add_hw_provider(refclk_np, of_clk_hw_simple_get, clk_hw);
+ if (ret)
+ goto mux_fail;
+
+ ret = devm_add_action_or_reset(cpts->dev,
+ (void(*)(void *))of_clk_del_provider,
+ refclk_np);
+ if (ret) {
+ dev_err(cpts->dev, "add clkmux provider unreg action %d", ret);
+ goto mux_fail;
+ }
+
+ return ret;
+
+mux_fail:
+ of_node_put(refclk_np);
+ return ret;
+}
+
static int cpts_of_parse(struct cpts *cpts, struct device_node *node)
{
int ret = -EINVAL;
@@ -547,7 +624,7 @@ static int cpts_of_parse(struct cpts *cpts, struct device_node *node)
(!cpts->cc.mult && cpts->cc.shift))
goto of_error;
- return 0;
+ return cpts_of_mux_clk_setup(cpts, node);
of_error:
dev_err(cpts->dev, "CPTS: Missing property in the DT.\n");
@@ -572,9 +649,14 @@ struct cpts *cpts_create(struct device *dev, void __iomem *regs,
if (ret)
return ERR_PTR(ret);
- cpts->refclk = devm_clk_get(dev, "cpts");
+ cpts->refclk = devm_get_clk_from_child(dev, node, "cpts");
+ if (IS_ERR(cpts->refclk))
+ /* try to get the clk from the dev node for compatibility */
+ cpts->refclk = devm_clk_get(dev, "cpts");
+
if (IS_ERR(cpts->refclk)) {
- dev_err(dev, "Failed to get cpts refclk\n");
+ dev_err(dev, "Failed to get cpts refclk %ld\n",
+ PTR_ERR(cpts->refclk));
return ERR_CAST(cpts->refclk);
}
diff --git a/drivers/net/ethernet/ti/cpts.h b/drivers/net/ethernet/ti/cpts.h
index 024aab6af12f..bb997c11ee15 100644
--- a/drivers/net/ethernet/ti/cpts.h
+++ b/drivers/net/ethernet/ti/cpts.h
@@ -24,7 +24,7 @@
struct cpsw_cpts {
u32 idver; /* Identification and version */
u32 control; /* Time sync control */
- u32 res1;
+ u32 rftclk_sel; /* Reference Clock Select Register */
u32 ts_push; /* Time stamp event push */
u32 ts_load_val; /* Time stamp load value */
u32 ts_load_en; /* Time stamp load enable */
diff --git a/drivers/net/ethernet/ti/davinci_cpdma.c b/drivers/net/ethernet/ti/davinci_cpdma.c
index 35bf14d8e7af..0ca2a1a254de 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.c
+++ b/drivers/net/ethernet/ti/davinci_cpdma.c
@@ -134,6 +134,15 @@ struct cpdma_control_info {
#define ACCESS_RW (ACCESS_RO | ACCESS_WO)
};
+struct submit_info {
+ struct cpdma_chan *chan;
+ int directed;
+ void *token;
+ void *data;
+ int flags;
+ int len;
+};
+
static struct cpdma_control_info controls[] = {
[CPDMA_TX_RLIM] = {CPDMA_DMACONTROL, 8, 0xffff, ACCESS_RW},
[CPDMA_CMD_IDLE] = {CPDMA_DMACONTROL, 3, 1, ACCESS_WO},
@@ -176,6 +185,8 @@ static struct cpdma_control_info controls[] = {
(directed << CPDMA_TO_PORT_SHIFT)); \
} while (0)
+#define CPDMA_DMA_EXT_MAP BIT(16)
+
static void cpdma_desc_pool_destroy(struct cpdma_ctlr *ctlr)
{
struct cpdma_desc_pool *pool = ctlr->pool;
@@ -1002,34 +1013,26 @@ static void __cpdma_chan_submit(struct cpdma_chan *chan,
}
}
-int cpdma_chan_submit(struct cpdma_chan *chan, void *token, void *data,
- int len, int directed)
+static int cpdma_chan_submit_si(struct submit_info *si)
{
+ struct cpdma_chan *chan = si->chan;
struct cpdma_ctlr *ctlr = chan->ctlr;
+ int len = si->len;
+ int swlen = len;
struct cpdma_desc __iomem *desc;
dma_addr_t buffer;
- unsigned long flags;
u32 mode;
- int ret = 0;
-
- spin_lock_irqsave(&chan->lock, flags);
-
- if (chan->state == CPDMA_STATE_TEARDOWN) {
- ret = -EINVAL;
- goto unlock_ret;
- }
+ int ret;
if (chan->count >= chan->desc_num) {
chan->stats.desc_alloc_fail++;
- ret = -ENOMEM;
- goto unlock_ret;
+ return -ENOMEM;
}
desc = cpdma_desc_alloc(ctlr->pool);
if (!desc) {
chan->stats.desc_alloc_fail++;
- ret = -ENOMEM;
- goto unlock_ret;
+ return -ENOMEM;
}
if (len < ctlr->params.min_packet_size) {
@@ -1037,16 +1040,21 @@ int cpdma_chan_submit(struct cpdma_chan *chan, void *token, void *data,
chan->stats.runt_transmit_buff++;
}
- buffer = dma_map_single(ctlr->dev, data, len, chan->dir);
- ret = dma_mapping_error(ctlr->dev, buffer);
- if (ret) {
- cpdma_desc_free(ctlr->pool, desc, 1);
- ret = -EINVAL;
- goto unlock_ret;
- }
-
mode = CPDMA_DESC_OWNER | CPDMA_DESC_SOP | CPDMA_DESC_EOP;
- cpdma_desc_to_port(chan, mode, directed);
+ cpdma_desc_to_port(chan, mode, si->directed);
+
+ if (si->flags & CPDMA_DMA_EXT_MAP) {
+ buffer = (dma_addr_t)si->data;
+ dma_sync_single_for_device(ctlr->dev, buffer, len, chan->dir);
+ swlen |= CPDMA_DMA_EXT_MAP;
+ } else {
+ buffer = dma_map_single(ctlr->dev, si->data, len, chan->dir);
+ ret = dma_mapping_error(ctlr->dev, buffer);
+ if (ret) {
+ cpdma_desc_free(ctlr->pool, desc, 1);
+ return -EINVAL;
+ }
+ }
/* Relaxed IO accessors can be used here as there is read barrier
* at the end of write sequence.
@@ -1055,9 +1063,9 @@ int cpdma_chan_submit(struct cpdma_chan *chan, void *token, void *data,
writel_relaxed(buffer, &desc->hw_buffer);
writel_relaxed(len, &desc->hw_len);
writel_relaxed(mode | len, &desc->hw_mode);
- writel_relaxed((uintptr_t)token, &desc->sw_token);
+ writel_relaxed((uintptr_t)si->token, &desc->sw_token);
writel_relaxed(buffer, &desc->sw_buffer);
- writel_relaxed(len, &desc->sw_len);
+ writel_relaxed(swlen, &desc->sw_len);
desc_read(desc, sw_len);
__cpdma_chan_submit(chan, desc);
@@ -1066,8 +1074,105 @@ int cpdma_chan_submit(struct cpdma_chan *chan, void *token, void *data,
chan_write(chan, rxfree, 1);
chan->count++;
+ return 0;
+}
-unlock_ret:
+int cpdma_chan_idle_submit(struct cpdma_chan *chan, void *token, void *data,
+ int len, int directed)
+{
+ struct submit_info si;
+ unsigned long flags;
+ int ret;
+
+ si.chan = chan;
+ si.token = token;
+ si.data = data;
+ si.len = len;
+ si.directed = directed;
+ si.flags = 0;
+
+ spin_lock_irqsave(&chan->lock, flags);
+ if (chan->state == CPDMA_STATE_TEARDOWN) {
+ spin_unlock_irqrestore(&chan->lock, flags);
+ return -EINVAL;
+ }
+
+ ret = cpdma_chan_submit_si(&si);
+ spin_unlock_irqrestore(&chan->lock, flags);
+ return ret;
+}
+
+int cpdma_chan_idle_submit_mapped(struct cpdma_chan *chan, void *token,
+ dma_addr_t data, int len, int directed)
+{
+ struct submit_info si;
+ unsigned long flags;
+ int ret;
+
+ si.chan = chan;
+ si.token = token;
+ si.data = (void *)data;
+ si.len = len;
+ si.directed = directed;
+ si.flags = CPDMA_DMA_EXT_MAP;
+
+ spin_lock_irqsave(&chan->lock, flags);
+ if (chan->state == CPDMA_STATE_TEARDOWN) {
+ spin_unlock_irqrestore(&chan->lock, flags);
+ return -EINVAL;
+ }
+
+ ret = cpdma_chan_submit_si(&si);
+ spin_unlock_irqrestore(&chan->lock, flags);
+ return ret;
+}
+
+int cpdma_chan_submit(struct cpdma_chan *chan, void *token, void *data,
+ int len, int directed)
+{
+ struct submit_info si;
+ unsigned long flags;
+ int ret;
+
+ si.chan = chan;
+ si.token = token;
+ si.data = data;
+ si.len = len;
+ si.directed = directed;
+ si.flags = 0;
+
+ spin_lock_irqsave(&chan->lock, flags);
+ if (chan->state != CPDMA_STATE_ACTIVE) {
+ spin_unlock_irqrestore(&chan->lock, flags);
+ return -EINVAL;
+ }
+
+ ret = cpdma_chan_submit_si(&si);
+ spin_unlock_irqrestore(&chan->lock, flags);
+ return ret;
+}
+
+int cpdma_chan_submit_mapped(struct cpdma_chan *chan, void *token,
+ dma_addr_t data, int len, int directed)
+{
+ struct submit_info si;
+ unsigned long flags;
+ int ret;
+
+ si.chan = chan;
+ si.token = token;
+ si.data = (void *)data;
+ si.len = len;
+ si.directed = directed;
+ si.flags = CPDMA_DMA_EXT_MAP;
+
+ spin_lock_irqsave(&chan->lock, flags);
+ if (chan->state != CPDMA_STATE_ACTIVE) {
+ spin_unlock_irqrestore(&chan->lock, flags);
+ return -EINVAL;
+ }
+
+ ret = cpdma_chan_submit_si(&si);
spin_unlock_irqrestore(&chan->lock, flags);
return ret;
}
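The four submit entry points differ only in which channel state they accept and whether the buffer was pre-mapped by the caller; each fills a struct submit_info under the channel lock and calls the common cpdma_chan_submit_si(). A condensed userspace sketch of that shape, with the lock and state check reduced to a flag:

#include <stdio.h>

struct submit_info {
	void *data;
	int len;
	int ext_mapped;
};

static int chan_active = 1;

static int submit_common(const struct submit_info *si)
{
	printf("submit len=%d ext_mapped=%d\n", si->len, si->ext_mapped);
	return 0;
}

static int submit(void *data, int len, int ext_mapped, int require_active)
{
	struct submit_info si = {
		.data = data,
		.len = len,
		.ext_mapped = ext_mapped,
	};

	/* In the driver this is spin_lock_irqsave() plus a state check */
	if (require_active && !chan_active)
		return -1;

	return submit_common(&si);
}

int main(void)
{
	char buf[64];

	submit(buf, sizeof(buf), 0, 1);	/* like cpdma_chan_submit() */
	submit(buf, sizeof(buf), 1, 0);	/* like cpdma_chan_idle_submit_mapped() */
	return 0;
}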
@@ -1097,10 +1202,17 @@ static void __cpdma_chan_free(struct cpdma_chan *chan,
uintptr_t token;
token = desc_read(desc, sw_token);
- buff_dma = desc_read(desc, sw_buffer);
origlen = desc_read(desc, sw_len);
- dma_unmap_single(ctlr->dev, buff_dma, origlen, chan->dir);
+ buff_dma = desc_read(desc, sw_buffer);
+ if (origlen & CPDMA_DMA_EXT_MAP) {
+ origlen &= ~CPDMA_DMA_EXT_MAP;
+ dma_sync_single_for_cpu(ctlr->dev, buff_dma, origlen,
+ chan->dir);
+ } else {
+ dma_unmap_single(ctlr->dev, buff_dma, origlen, chan->dir);
+ }
+
cpdma_desc_free(pool, desc, 1);
(*chan->handler)((void *)token, outlen, status);
}
@@ -1311,8 +1423,23 @@ int cpdma_get_num_tx_descs(struct cpdma_ctlr *ctlr)
return ctlr->num_tx_desc;
}
-void cpdma_set_num_rx_descs(struct cpdma_ctlr *ctlr, int num_rx_desc)
+int cpdma_set_num_rx_descs(struct cpdma_ctlr *ctlr, int num_rx_desc)
{
+ unsigned long flags;
+ int temp, ret;
+
+ spin_lock_irqsave(&ctlr->lock, flags);
+
+ temp = ctlr->num_rx_desc;
ctlr->num_rx_desc = num_rx_desc;
ctlr->num_tx_desc = ctlr->pool->num_desc - ctlr->num_rx_desc;
+ ret = cpdma_chan_split_pool(ctlr);
+ if (ret) {
+ ctlr->num_rx_desc = temp;
+ ctlr->num_tx_desc = ctlr->pool->num_desc - ctlr->num_rx_desc;
+ }
+
+ spin_unlock_irqrestore(&ctlr->lock, flags);
+
+ return ret;
}
diff --git a/drivers/net/ethernet/ti/davinci_cpdma.h b/drivers/net/ethernet/ti/davinci_cpdma.h
index 10376062dafa..d3cfe234d16a 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.h
+++ b/drivers/net/ethernet/ti/davinci_cpdma.h
@@ -77,8 +77,14 @@ int cpdma_chan_stop(struct cpdma_chan *chan);
int cpdma_chan_get_stats(struct cpdma_chan *chan,
struct cpdma_chan_stats *stats);
+int cpdma_chan_submit_mapped(struct cpdma_chan *chan, void *token,
+ dma_addr_t data, int len, int directed);
int cpdma_chan_submit(struct cpdma_chan *chan, void *token, void *data,
int len, int directed);
+int cpdma_chan_idle_submit_mapped(struct cpdma_chan *chan, void *token,
+ dma_addr_t data, int len, int directed);
+int cpdma_chan_idle_submit(struct cpdma_chan *chan, void *token, void *data,
+ int len, int directed);
int cpdma_chan_process(struct cpdma_chan *chan, int quota);
int cpdma_ctlr_int_ctrl(struct cpdma_ctlr *ctlr, bool enable);
@@ -110,8 +116,7 @@ enum cpdma_control {
int cpdma_control_get(struct cpdma_ctlr *ctlr, int control);
int cpdma_control_set(struct cpdma_ctlr *ctlr, int control, int value);
int cpdma_get_num_rx_descs(struct cpdma_ctlr *ctlr);
-void cpdma_set_num_rx_descs(struct cpdma_ctlr *ctlr, int num_rx_desc);
+int cpdma_set_num_rx_descs(struct cpdma_ctlr *ctlr, int num_rx_desc);
int cpdma_get_num_tx_descs(struct cpdma_ctlr *ctlr);
-int cpdma_chan_split_pool(struct cpdma_ctlr *ctlr);
#endif
diff --git a/drivers/net/ethernet/ti/davinci_emac.c b/drivers/net/ethernet/ti/davinci_emac.c
index 4bf65cab79e6..5f4ece0d5a73 100644
--- a/drivers/net/ethernet/ti/davinci_emac.c
+++ b/drivers/net/ethernet/ti/davinci_emac.c
@@ -1428,8 +1428,8 @@ static int emac_dev_open(struct net_device *ndev)
if (!skb)
break;
- ret = cpdma_chan_submit(priv->rxchan, skb, skb->data,
- skb_tailroom(skb), 0);
+ ret = cpdma_chan_idle_submit(priv->rxchan, skb, skb->data,
+ skb_tailroom(skb), 0);
if (WARN_ON(ret < 0))
break;
}
diff --git a/drivers/net/ethernet/ti/netcp_ethss.c b/drivers/net/ethernet/ti/netcp_ethss.c
index ec179700c184..2c1fac33136c 100644
--- a/drivers/net/ethernet/ti/netcp_ethss.c
+++ b/drivers/net/ethernet/ti/netcp_ethss.c
@@ -3554,7 +3554,7 @@ static int set_gbenu_ethss_priv(struct gbe_priv *gbe_dev,
static int gbe_probe(struct netcp_device *netcp_device, struct device *dev,
struct device_node *node, void **inst_priv)
{
- struct device_node *interfaces, *interface;
+ struct device_node *interfaces, *interface, *cpts_node;
struct device_node *secondary_ports;
struct cpsw_ale_params ale_params;
struct gbe_priv *gbe_dev;
@@ -3713,7 +3713,12 @@ static int gbe_probe(struct netcp_device *netcp_device, struct device *dev,
dev_dbg(gbe_dev->dev, "Created a gbe ale engine\n");
}
- gbe_dev->cpts = cpts_create(gbe_dev->dev, gbe_dev->cpts_reg, node);
+ cpts_node = of_get_child_by_name(node, "cpts");
+ if (!cpts_node)
+ cpts_node = of_node_get(node);
+
+ gbe_dev->cpts = cpts_create(gbe_dev->dev, gbe_dev->cpts_reg, cpts_node);
+ of_node_put(cpts_node);
if (IS_ENABLED(CONFIG_TI_CPTS) && IS_ERR(gbe_dev->cpts)) {
ret = PTR_ERR(gbe_dev->cpts);
goto free_sec_ports;
diff --git a/drivers/net/ethernet/toshiba/ps3_gelic_net.h b/drivers/net/ethernet/toshiba/ps3_gelic_net.h
index 3ecddb72f45a..051033580f0a 100644
--- a/drivers/net/ethernet/toshiba/ps3_gelic_net.h
+++ b/drivers/net/ethernet/toshiba/ps3_gelic_net.h
@@ -301,7 +301,7 @@ struct gelic_card {
*/
unsigned int irq;
struct gelic_descr *tx_top, *rx_top;
- struct gelic_descr descr[0]; /* must be the last */
+ struct gelic_descr descr[]; /* must be the last */
};
struct gelic_port {
diff --git a/drivers/net/ethernet/via/via-velocity.h b/drivers/net/ethernet/via/via-velocity.h
index c0ecc6c7b5e0..cdfe7809e3c1 100644
--- a/drivers/net/ethernet/via/via-velocity.h
+++ b/drivers/net/ethernet/via/via-velocity.h
@@ -1509,7 +1509,7 @@ static inline int velocity_get_ip(struct velocity_info *vptr)
rcu_read_lock();
in_dev = __in_dev_get_rcu(vptr->netdev);
if (in_dev != NULL) {
- ifa = (struct in_ifaddr *) in_dev->ifa_list;
+ ifa = rcu_dereference(in_dev->ifa_list);
if (ifa != NULL) {
memcpy(vptr->ip_addr, &ifa->ifa_address, 4);
res = 0;
diff --git a/drivers/net/ethernet/wiznet/w5100-spi.c b/drivers/net/ethernet/wiznet/w5100-spi.c
index 918b3e50850a..2b4126d2427d 100644
--- a/drivers/net/ethernet/wiznet/w5100-spi.c
+++ b/drivers/net/ethernet/wiznet/w5100-spi.c
@@ -15,6 +15,7 @@
#include <linux/delay.h>
#include <linux/netdevice.h>
#include <linux/of_net.h>
+#include <linux/of_device.h>
#include <linux/spi/spi.h>
#include "w5100.h"
@@ -409,14 +410,32 @@ static const struct w5100_ops w5500_ops = {
.init = w5500_spi_init,
};
+static const struct of_device_id w5100_of_match[] = {
+ { .compatible = "wiznet,w5100", .data = (const void*)W5100, },
+ { .compatible = "wiznet,w5200", .data = (const void*)W5200, },
+ { .compatible = "wiznet,w5500", .data = (const void*)W5500, },
+ { },
+};
+MODULE_DEVICE_TABLE(of, w5100_of_match);
+
static int w5100_spi_probe(struct spi_device *spi)
{
- const struct spi_device_id *id = spi_get_device_id(spi);
+ const struct of_device_id *of_id;
const struct w5100_ops *ops;
+ kernel_ulong_t driver_data;
int priv_size;
const void *mac = of_get_mac_address(spi->dev.of_node);
- switch (id->driver_data) {
+ if (spi->dev.of_node) {
+ of_id = of_match_device(w5100_of_match, &spi->dev);
+ if (!of_id)
+ return -ENODEV;
+ driver_data = (kernel_ulong_t)of_id->data;
+ } else {
+ driver_data = spi_get_device_id(spi)->driver_data;
+ }
+
+ switch (driver_data) {
case W5100:
ops = &w5100_spi_ops;
priv_size = 0;
@@ -453,6 +472,7 @@ static struct spi_driver w5100_spi_driver = {
.driver = {
.name = "w5100",
.pm = &w5100_pm_ops,
+ .of_match_table = w5100_of_match,
},
.probe = w5100_spi_probe,
.remove = w5100_spi_remove,
diff --git a/drivers/net/ethernet/xilinx/Kconfig b/drivers/net/ethernet/xilinx/Kconfig
index af96e05c5bcd..8d994cebb6b0 100644
--- a/drivers/net/ethernet/xilinx/Kconfig
+++ b/drivers/net/ethernet/xilinx/Kconfig
@@ -6,7 +6,7 @@
config NET_VENDOR_XILINX
bool "Xilinx devices"
default y
- depends on PPC || PPC32 || MICROBLAZE || ARCH_ZYNQ || MIPS || X86 || COMPILE_TEST
+ depends on PPC || PPC32 || MICROBLAZE || ARCH_ZYNQ || MIPS || X86 || ARM || COMPILE_TEST
---help---
If you have a network (Ethernet) card belonging to this class, say Y.
@@ -26,8 +26,8 @@ config XILINX_EMACLITE
config XILINX_AXI_EMAC
tristate "Xilinx 10/100/1000 AXI Ethernet support"
- depends on MICROBLAZE
- select PHYLIB
+ depends on MICROBLAZE || X86 || ARM || COMPILE_TEST
+ select PHYLINK
---help---
This driver supports the 10/100/1000 Ethernet from Xilinx for the
AXI bus interface used in Xilinx Virtex FPGAs.
diff --git a/drivers/net/ethernet/xilinx/ll_temac.h b/drivers/net/ethernet/xilinx/ll_temac.h
index 1aeda084b8f1..276292bca334 100644
--- a/drivers/net/ethernet/xilinx/ll_temac.h
+++ b/drivers/net/ethernet/xilinx/ll_temac.h
@@ -361,7 +361,7 @@ struct temac_local {
/* For synchronization of indirect register access. Must be
* shared mutex between interfaces in same TEMAC block.
*/
- struct mutex *indirect_mutex;
+ spinlock_t *indirect_lock;
u32 options; /* Current options word */
int last_link;
unsigned int temac_features;
@@ -388,8 +388,9 @@ struct temac_local {
/* xilinx_temac.c */
int temac_indirect_busywait(struct temac_local *lp);
u32 temac_indirect_in32(struct temac_local *lp, int reg);
+u32 temac_indirect_in32_locked(struct temac_local *lp, int reg);
void temac_indirect_out32(struct temac_local *lp, int reg, u32 value);
-
+void temac_indirect_out32_locked(struct temac_local *lp, int reg, u32 value);
/* xilinx_temac_mdio.c */
int temac_mdio_setup(struct temac_local *lp, struct platform_device *pdev);
diff --git a/drivers/net/ethernet/xilinx/ll_temac_main.c b/drivers/net/ethernet/xilinx/ll_temac_main.c
index 14870d659f7d..21c1b4322ea7 100644
--- a/drivers/net/ethernet/xilinx/ll_temac_main.c
+++ b/drivers/net/ethernet/xilinx/ll_temac_main.c
@@ -22,7 +22,6 @@
*
* TODO:
* - Factor out locallink DMA code into separate driver
- * - Fix multicast assignment.
* - Fix support for hardware checksumming.
* - Testing. Lots and lots of testing.
*
@@ -53,6 +52,7 @@
#include <linux/slab.h>
#include <linux/interrupt.h>
#include <linux/dma-mapping.h>
+#include <linux/processor.h>
#include <linux/platform_data/xilinx-ll-temac.h>
#include "ll_temac.h"
@@ -84,51 +84,118 @@ static void _temac_iow_le(struct temac_local *lp, int offset, u32 value)
return iowrite32(value, lp->regs + offset);
}
+static bool hard_acs_rdy(struct temac_local *lp)
+{
+ return temac_ior(lp, XTE_RDY0_OFFSET) & XTE_RDY0_HARD_ACS_RDY_MASK;
+}
+
+static bool hard_acs_rdy_or_timeout(struct temac_local *lp, ktime_t timeout)
+{
+ ktime_t cur = ktime_get();
+
+ return hard_acs_rdy(lp) || ktime_after(cur, timeout);
+}
+
+/* Poll for a maximum of 20 ms. This is similar to the 2 jiffies @ 100 Hz
+ * that was used before, and should cover MDIO bus speed down to 3200
+ * Hz.
+ */
+#define HARD_ACS_RDY_POLL_NS (20 * NSEC_PER_MSEC)
+
+/**
+ * temac_indirect_busywait - Wait for current indirect register access
+ * to complete.
+ */
int temac_indirect_busywait(struct temac_local *lp)
{
- unsigned long end = jiffies + 2;
+ ktime_t timeout = ktime_add_ns(ktime_get(), HARD_ACS_RDY_POLL_NS);
- while (!(temac_ior(lp, XTE_RDY0_OFFSET) & XTE_RDY0_HARD_ACS_RDY_MASK)) {
- if (time_before_eq(end, jiffies)) {
- WARN_ON(1);
- return -ETIMEDOUT;
- }
- usleep_range(500, 1000);
- }
- return 0;
+ spin_until_cond(hard_acs_rdy_or_timeout(lp, timeout));
+ if (WARN_ON(!hard_acs_rdy(lp)))
+ return -ETIMEDOUT;
+ else
+ return 0;
}
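The reworked busywait polls the ready bit until a deadline taken from a monotonic clock rather than counting jiffies. A userspace sketch of the same spin-until-condition-or-deadline shape, using clock_gettime() in place of ktime and a simulated register read:

#include <stdio.h>
#include <stdbool.h>
#include <time.h>

#define POLL_TIMEOUT_NS  (20 * 1000 * 1000LL)	/* 20 ms, as in the patch */

static long long now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

static bool hw_ready(void)
{
	return false;	/* simulate a stuck ready bit */
}

static int busywait(void)
{
	long long deadline = now_ns() + POLL_TIMEOUT_NS;

	while (!hw_ready() && now_ns() < deadline)
		;	/* spin; the kernel uses spin_until_cond() */

	return hw_ready() ? 0 : -1;	/* -ETIMEDOUT analogue */
}

int main(void)
{
	printf("busywait: %d\n", busywait());
	return 0;
}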
/**
- * temac_indirect_in32
- *
- * lp->indirect_mutex must be held when calling this function
+ * temac_indirect_in32 - Indirect register read access. This function
+ * must be called without lp->indirect_lock being held.
*/
u32 temac_indirect_in32(struct temac_local *lp, int reg)
{
- u32 val;
+ unsigned long flags;
+ int val;
+
+ spin_lock_irqsave(lp->indirect_lock, flags);
+ val = temac_indirect_in32_locked(lp, reg);
+ spin_unlock_irqrestore(lp->indirect_lock, flags);
+ return val;
+}
- if (temac_indirect_busywait(lp))
+/**
+ * temac_indirect_in32_locked - Indirect register read access. This
+ * function must be called with lp->indirect_lock being held. Use
+ * this together with spin_lock_irqsave/spin_lock_irqrestore to avoid
+ * repeated lock/unlock and to ensure uninterrupted access to indirect
+ * registers.
+ */
+u32 temac_indirect_in32_locked(struct temac_local *lp, int reg)
+{
+ /* This initial wait should normally not spin, as we always
+ * try to wait for indirect access to complete before
+ * releasing the indirect_lock.
+ */
+ if (WARN_ON(temac_indirect_busywait(lp)))
return -ETIMEDOUT;
+ /* Initiate read from indirect register */
temac_iow(lp, XTE_CTL0_OFFSET, reg);
- if (temac_indirect_busywait(lp))
+ /* Wait for indirect register access to complete. We really
+ * should not see timeouts here, and a timeout could even cause
+ * problems for the following indirect access, so let's make a bit
+ * of WARN noise.
+ */
+ if (WARN_ON(temac_indirect_busywait(lp)))
return -ETIMEDOUT;
- val = temac_ior(lp, XTE_LSW0_OFFSET);
-
- return val;
+ /* Value is ready now */
+ return temac_ior(lp, XTE_LSW0_OFFSET);
}
/**
- * temac_indirect_out32
- *
- * lp->indirect_mutex must be held when calling this function
+ * temac_indirect_out32 - Indirect register write access. This function
+ * must be called without lp->indirect_lock being held.
*/
void temac_indirect_out32(struct temac_local *lp, int reg, u32 value)
{
- if (temac_indirect_busywait(lp))
+ unsigned long flags;
+
+ spin_lock_irqsave(lp->indirect_lock, flags);
+ temac_indirect_out32_locked(lp, reg, value);
+ spin_unlock_irqrestore(lp->indirect_lock, flags);
+}
+
+/**
+ * temac_indirect_out32_locked - Indirect register write access. This
+ * function must be called with lp->indirect_lock being held. Use
+ * this together with spin_lock_irqsave/spin_lock_irqrestore to avoid
+ * repeated lock/unlock and to ensure uninterrupted access to indirect
+ * registers.
+ */
+void temac_indirect_out32_locked(struct temac_local *lp, int reg, u32 value)
+{
+ /* As in temac_indirect_in32_locked(), we should normally not
+ * spin here. And if it happens, we actually end up silently
+ * ignoring the write request. Ouch.
+ */
+ if (WARN_ON(temac_indirect_busywait(lp)))
return;
+ /* Initiate write to indirect register */
temac_iow(lp, XTE_LSW0_OFFSET, value);
temac_iow(lp, XTE_CTL0_OFFSET, CNTLREG_WRITE_ENABLE_MASK | reg);
- temac_indirect_busywait(lp);
+ /* As in temac_indirect_in32_locked(), we should not see timeouts
+ * here. And if it happens, we continue before the write has
+ * completed. Not good.
+ */
+ WARN_ON(temac_indirect_busywait(lp));
}
/**
@@ -344,20 +411,21 @@ out:
static void temac_do_set_mac_address(struct net_device *ndev)
{
struct temac_local *lp = netdev_priv(ndev);
+ unsigned long flags;
/* set up unicast MAC address filter set its mac address */
- mutex_lock(lp->indirect_mutex);
- temac_indirect_out32(lp, XTE_UAW0_OFFSET,
- (ndev->dev_addr[0]) |
- (ndev->dev_addr[1] << 8) |
- (ndev->dev_addr[2] << 16) |
- (ndev->dev_addr[3] << 24));
+ spin_lock_irqsave(lp->indirect_lock, flags);
+ temac_indirect_out32_locked(lp, XTE_UAW0_OFFSET,
+ (ndev->dev_addr[0]) |
+ (ndev->dev_addr[1] << 8) |
+ (ndev->dev_addr[2] << 16) |
+ (ndev->dev_addr[3] << 24));
/* There are reserved bits in EUAW1
* so don't affect them Set MAC bits [47:32] in EUAW1 */
- temac_indirect_out32(lp, XTE_UAW1_OFFSET,
- (ndev->dev_addr[4] & 0x000000ff) |
- (ndev->dev_addr[5] << 8));
- mutex_unlock(lp->indirect_mutex);
+ temac_indirect_out32_locked(lp, XTE_UAW1_OFFSET,
+ (ndev->dev_addr[4] & 0x000000ff) |
+ (ndev->dev_addr[5] << 8));
+ spin_unlock_irqrestore(lp->indirect_lock, flags);
}
static int temac_init_mac_address(struct net_device *ndev, const void *address)
@@ -383,49 +451,58 @@ static int temac_set_mac_address(struct net_device *ndev, void *p)
static void temac_set_multicast_list(struct net_device *ndev)
{
struct temac_local *lp = netdev_priv(ndev);
- u32 multi_addr_msw, multi_addr_lsw, val;
- int i;
+ u32 multi_addr_msw, multi_addr_lsw;
+ int i = 0;
+ unsigned long flags;
+ bool promisc_mode_disabled = false;
- mutex_lock(lp->indirect_mutex);
- if (ndev->flags & (IFF_ALLMULTI | IFF_PROMISC) ||
- netdev_mc_count(ndev) > MULTICAST_CAM_TABLE_NUM) {
- /*
- * We must make the kernel realise we had to move
- * into promisc mode or we start all out war on
- * the cable. If it was a promisc request the
- * flag is already set. If not we assert it.
- */
- ndev->flags |= IFF_PROMISC;
+ if (ndev->flags & (IFF_PROMISC | IFF_ALLMULTI) ||
+ (netdev_mc_count(ndev) > MULTICAST_CAM_TABLE_NUM)) {
temac_indirect_out32(lp, XTE_AFM_OFFSET, XTE_AFM_EPPRM_MASK);
dev_info(&ndev->dev, "Promiscuous mode enabled.\n");
- } else if (!netdev_mc_empty(ndev)) {
+ return;
+ }
+
+ spin_lock_irqsave(lp->indirect_lock, flags);
+
+ if (!netdev_mc_empty(ndev)) {
struct netdev_hw_addr *ha;
- i = 0;
netdev_for_each_mc_addr(ha, ndev) {
- if (i >= MULTICAST_CAM_TABLE_NUM)
+ if (WARN_ON(i >= MULTICAST_CAM_TABLE_NUM))
break;
multi_addr_msw = ((ha->addr[3] << 24) |
(ha->addr[2] << 16) |
(ha->addr[1] << 8) |
(ha->addr[0]));
- temac_indirect_out32(lp, XTE_MAW0_OFFSET,
- multi_addr_msw);
+ temac_indirect_out32_locked(lp, XTE_MAW0_OFFSET,
+ multi_addr_msw);
multi_addr_lsw = ((ha->addr[5] << 8) |
(ha->addr[4]) | (i << 16));
- temac_indirect_out32(lp, XTE_MAW1_OFFSET,
- multi_addr_lsw);
+ temac_indirect_out32_locked(lp, XTE_MAW1_OFFSET,
+ multi_addr_lsw);
i++;
}
- } else {
- val = temac_indirect_in32(lp, XTE_AFM_OFFSET);
- temac_indirect_out32(lp, XTE_AFM_OFFSET,
- val & ~XTE_AFM_EPPRM_MASK);
- temac_indirect_out32(lp, XTE_MAW0_OFFSET, 0);
- temac_indirect_out32(lp, XTE_MAW1_OFFSET, 0);
- dev_info(&ndev->dev, "Promiscuous mode disabled.\n");
}
- mutex_unlock(lp->indirect_mutex);
+
+ /* Clear all or remaining/unused address table entries */
+ while (i < MULTICAST_CAM_TABLE_NUM) {
+ temac_indirect_out32_locked(lp, XTE_MAW0_OFFSET, 0);
+ temac_indirect_out32_locked(lp, XTE_MAW1_OFFSET, i << 16);
+ i++;
+ }
+
+ /* Enable address filter block if currently disabled */
+ if (temac_indirect_in32_locked(lp, XTE_AFM_OFFSET)
+ & XTE_AFM_EPPRM_MASK) {
+ temac_indirect_out32_locked(lp, XTE_AFM_OFFSET, 0);
+ promisc_mode_disabled = true;
+ }
+
+ spin_unlock_irqrestore(lp->indirect_lock, flags);
+
+ if (promisc_mode_disabled)
+ dev_info(&ndev->dev, "Promiscuous mode disabled.\n");
}
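Each multicast CAM entry is programmed as two 32-bit words: the low four MAC bytes in MAW0, the high two bytes plus the entry index in MAW1, and any remaining entries are then cleared. A userspace sketch of that packing, with register writes replaced by printf() and the table size assumed:

#include <stdio.h>

#define CAM_TABLE_NUM 4

static void write_entry(int i, const unsigned char *mac)
{
	unsigned int msw = (mac[3] << 24) | (mac[2] << 16) |
			   (mac[1] << 8) | mac[0];
	unsigned int lsw = (mac[5] << 8) | mac[4] | (i << 16);

	printf("MAW0=%08x MAW1=%08x\n", msw, lsw);
}

int main(void)
{
	const unsigned char macs[][6] = {
		{ 0x01, 0x00, 0x5e, 0x00, 0x00, 0x01 },
		{ 0x01, 0x00, 0x5e, 0x00, 0x00, 0xfb },
	};
	int i = 0;

	for (; i < 2 && i < CAM_TABLE_NUM; i++)
		write_entry(i, macs[i]);

	/* Clear remaining/unused entries, keeping the index in MAW1 */
	for (; i < CAM_TABLE_NUM; i++)
		printf("MAW0=%08x MAW1=%08x\n", 0u, (unsigned int)(i << 16));

	return 0;
}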
static struct temac_option {
@@ -516,17 +593,19 @@ static u32 temac_setoptions(struct net_device *ndev, u32 options)
struct temac_local *lp = netdev_priv(ndev);
struct temac_option *tp = &temac_options[0];
int reg;
+ unsigned long flags;
- mutex_lock(lp->indirect_mutex);
+ spin_lock_irqsave(lp->indirect_lock, flags);
while (tp->opt) {
- reg = temac_indirect_in32(lp, tp->reg) & ~tp->m_or;
- if (options & tp->opt)
+ reg = temac_indirect_in32_locked(lp, tp->reg) & ~tp->m_or;
+ if (options & tp->opt) {
reg |= tp->m_or;
- temac_indirect_out32(lp, tp->reg, reg);
+ temac_indirect_out32_locked(lp, tp->reg, reg);
+ }
tp++;
}
+ spin_unlock_irqrestore(lp->indirect_lock, flags);
lp->options |= options;
- mutex_unlock(lp->indirect_mutex);
return 0;
}
@@ -537,6 +616,7 @@ static void temac_device_reset(struct net_device *ndev)
struct temac_local *lp = netdev_priv(ndev);
u32 timeout;
u32 val;
+ unsigned long flags;
/* Perform a software reset */
@@ -545,7 +625,6 @@ static void temac_device_reset(struct net_device *ndev)
dev_dbg(&ndev->dev, "%s()\n", __func__);
- mutex_lock(lp->indirect_mutex);
/* Reset the receiver and wait for it to finish reset */
temac_indirect_out32(lp, XTE_RXC1_OFFSET, XTE_RXC1_RXRST_MASK);
timeout = 1000;
@@ -571,8 +650,11 @@ static void temac_device_reset(struct net_device *ndev)
}
/* Disable the receiver */
- val = temac_indirect_in32(lp, XTE_RXC1_OFFSET);
- temac_indirect_out32(lp, XTE_RXC1_OFFSET, val & ~XTE_RXC1_RXEN_MASK);
+ spin_lock_irqsave(lp->indirect_lock, flags);
+ val = temac_indirect_in32_locked(lp, XTE_RXC1_OFFSET);
+ temac_indirect_out32_locked(lp, XTE_RXC1_OFFSET,
+ val & ~XTE_RXC1_RXEN_MASK);
+ spin_unlock_irqrestore(lp->indirect_lock, flags);
/* Reset Local Link (DMA) */
lp->dma_out(lp, DMA_CONTROL_REG, DMA_CONTROL_RST);
@@ -592,12 +674,12 @@ static void temac_device_reset(struct net_device *ndev)
"temac_device_reset descriptor allocation failed\n");
}
- temac_indirect_out32(lp, XTE_RXC0_OFFSET, 0);
- temac_indirect_out32(lp, XTE_RXC1_OFFSET, 0);
- temac_indirect_out32(lp, XTE_TXC_OFFSET, 0);
- temac_indirect_out32(lp, XTE_FCC_OFFSET, XTE_FCC_RXFLO_MASK);
-
- mutex_unlock(lp->indirect_mutex);
+ spin_lock_irqsave(lp->indirect_lock, flags);
+ temac_indirect_out32_locked(lp, XTE_RXC0_OFFSET, 0);
+ temac_indirect_out32_locked(lp, XTE_RXC1_OFFSET, 0);
+ temac_indirect_out32_locked(lp, XTE_TXC_OFFSET, 0);
+ temac_indirect_out32_locked(lp, XTE_FCC_OFFSET, XTE_FCC_RXFLO_MASK);
+ spin_unlock_irqrestore(lp->indirect_lock, flags);
/* Sync default options with HW
* but leave receiver and transmitter disabled. */
@@ -621,13 +703,14 @@ static void temac_adjust_link(struct net_device *ndev)
struct phy_device *phy = ndev->phydev;
u32 mii_speed;
int link_state;
+ unsigned long flags;
/* hash together the state values to decide if something has changed */
link_state = phy->speed | (phy->duplex << 1) | phy->link;
- mutex_lock(lp->indirect_mutex);
if (lp->last_link != link_state) {
- mii_speed = temac_indirect_in32(lp, XTE_EMCFG_OFFSET);
+ spin_lock_irqsave(lp->indirect_lock, flags);
+ mii_speed = temac_indirect_in32_locked(lp, XTE_EMCFG_OFFSET);
mii_speed &= ~XTE_EMCFG_LINKSPD_MASK;
switch (phy->speed) {
@@ -637,11 +720,12 @@ static void temac_adjust_link(struct net_device *ndev)
}
/* Write new speed setting out to TEMAC */
- temac_indirect_out32(lp, XTE_EMCFG_OFFSET, mii_speed);
+ temac_indirect_out32_locked(lp, XTE_EMCFG_OFFSET, mii_speed);
+ spin_unlock_irqrestore(lp->indirect_lock, flags);
+
lp->last_link = link_state;
phy_print_status(phy);
}
- mutex_unlock(lp->indirect_mutex);
}
#ifdef CONFIG_64BIT
@@ -1011,6 +1095,7 @@ static const struct net_device_ops temac_netdev_ops = {
.ndo_open = temac_open,
.ndo_stop = temac_stop,
.ndo_start_xmit = temac_start_xmit,
+ .ndo_set_rx_mode = temac_set_multicast_list,
.ndo_set_mac_address = temac_set_mac_address,
.ndo_validate_addr = eth_validate_addr,
.ndo_do_ioctl = temac_ioctl,
@@ -1076,7 +1161,6 @@ static int temac_probe(struct platform_device *pdev)
platform_set_drvdata(pdev, ndev);
SET_NETDEV_DEV(ndev, &pdev->dev);
- ndev->flags &= ~IFF_MULTICAST; /* clear multicast */
ndev->features = NETIF_F_SG;
ndev->netdev_ops = &temac_netdev_ops;
ndev->ethtool_ops = &temac_ethtool_ops;
@@ -1103,17 +1187,17 @@ static int temac_probe(struct platform_device *pdev)
/* Setup mutex for synchronization of indirect register access */
if (pdata) {
- if (!pdata->indirect_mutex) {
+ if (!pdata->indirect_lock) {
dev_err(&pdev->dev,
- "indirect_mutex missing in platform_data\n");
+ "indirect_lock missing in platform_data\n");
return -EINVAL;
}
- lp->indirect_mutex = pdata->indirect_mutex;
+ lp->indirect_lock = pdata->indirect_lock;
} else {
- lp->indirect_mutex = devm_kmalloc(&pdev->dev,
- sizeof(*lp->indirect_mutex),
- GFP_KERNEL);
- mutex_init(lp->indirect_mutex);
+ lp->indirect_lock = devm_kmalloc(&pdev->dev,
+ sizeof(*lp->indirect_lock),
+ GFP_KERNEL);
+ spin_lock_init(lp->indirect_lock);
}
/* map device registers */
diff --git a/drivers/net/ethernet/xilinx/ll_temac_mdio.c b/drivers/net/ethernet/xilinx/ll_temac_mdio.c
index a4667326f745..6fd2dea4e60f 100644
--- a/drivers/net/ethernet/xilinx/ll_temac_mdio.c
+++ b/drivers/net/ethernet/xilinx/ll_temac_mdio.c
@@ -25,14 +25,15 @@ static int temac_mdio_read(struct mii_bus *bus, int phy_id, int reg)
{
struct temac_local *lp = bus->priv;
u32 rc;
+ unsigned long flags;
/* Write the PHY address to the MIIM Access Initiator register.
* When the transfer completes, the PHY register value will appear
* in the LSW0 register */
- mutex_lock(lp->indirect_mutex);
+ spin_lock_irqsave(lp->indirect_lock, flags);
temac_iow(lp, XTE_LSW0_OFFSET, (phy_id << 5) | reg);
- rc = temac_indirect_in32(lp, XTE_MIIMAI_OFFSET);
- mutex_unlock(lp->indirect_mutex);
+ rc = temac_indirect_in32_locked(lp, XTE_MIIMAI_OFFSET);
+ spin_unlock_irqrestore(lp->indirect_lock, flags);
dev_dbg(lp->dev, "temac_mdio_read(phy_id=%i, reg=%x) == %x\n",
phy_id, reg, rc);
@@ -43,6 +44,7 @@ static int temac_mdio_read(struct mii_bus *bus, int phy_id, int reg)
static int temac_mdio_write(struct mii_bus *bus, int phy_id, int reg, u16 val)
{
struct temac_local *lp = bus->priv;
+ unsigned long flags;
dev_dbg(lp->dev, "temac_mdio_write(phy_id=%i, reg=%x, val=%x)\n",
phy_id, reg, val);
@@ -50,10 +52,10 @@ static int temac_mdio_write(struct mii_bus *bus, int phy_id, int reg, u16 val)
/* First write the desired value into the write data register
* and then write the address into the access initiator register
*/
- mutex_lock(lp->indirect_mutex);
- temac_indirect_out32(lp, XTE_MGTDR_OFFSET, val);
- temac_indirect_out32(lp, XTE_MIIMAI_OFFSET, (phy_id << 5) | reg);
- mutex_unlock(lp->indirect_mutex);
+ spin_lock_irqsave(lp->indirect_lock, flags);
+ temac_indirect_out32_locked(lp, XTE_MGTDR_OFFSET, val);
+ temac_indirect_out32_locked(lp, XTE_MIIMAI_OFFSET, (phy_id << 5) | reg);
+ spin_unlock_irqrestore(lp->indirect_lock, flags);
return 0;
}
@@ -87,9 +89,7 @@ int temac_mdio_setup(struct temac_local *lp, struct platform_device *pdev)
/* Enable the MDIO bus by asserting the enable bit and writing
* in the clock config */
- mutex_lock(lp->indirect_mutex);
temac_indirect_out32(lp, XTE_MC_OFFSET, 1 << 6 | clk_div);
- mutex_unlock(lp->indirect_mutex);
bus = devm_mdiobus_alloc(&pdev->dev);
if (!bus)
@@ -116,10 +116,8 @@ int temac_mdio_setup(struct temac_local *lp, struct platform_device *pdev)
if (rc)
return rc;
- mutex_lock(lp->indirect_mutex);
dev_dbg(lp->dev, "MDIO bus registered; MC:%x\n",
temac_indirect_in32(lp, XTE_MC_OFFSET));
- mutex_unlock(lp->indirect_mutex);
return 0;
}
diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet.h b/drivers/net/ethernet/xilinx/xilinx_axienet.h
index 011adae32b89..2dacfc85b3ba 100644
--- a/drivers/net/ethernet/xilinx/xilinx_axienet.h
+++ b/drivers/net/ethernet/xilinx/xilinx_axienet.h
@@ -13,6 +13,7 @@
#include <linux/spinlock.h>
#include <linux/interrupt.h>
#include <linux/if_vlan.h>
+#include <linux/phylink.h>
/* Packet size info */
#define XAE_HDR_SIZE 14 /* Size of Ethernet header */
@@ -83,6 +84,8 @@
#define XAXIDMA_CR_RUNSTOP_MASK 0x00000001 /* Start/stop DMA channel */
#define XAXIDMA_CR_RESET_MASK 0x00000004 /* Reset DMA engine */
+#define XAXIDMA_SR_HALT_MASK 0x00000001 /* Indicates DMA channel halted */
+
#define XAXIDMA_BD_NDESC_OFFSET 0x00 /* Next descriptor pointer */
#define XAXIDMA_BD_BUFA_OFFSET 0x08 /* Buffer address */
#define XAXIDMA_BD_CTRL_LEN_OFFSET 0x18 /* Control/buffer length */
@@ -356,9 +359,6 @@
* @app2: MM2S/S2MM User Application Field 2.
* @app3: MM2S/S2MM User Application Field 3.
* @app4: MM2S/S2MM User Application Field 4.
- * @sw_id_offset: MM2S/S2MM Sw ID
- * @reserved5: Reserved and not used
- * @reserved6: Reserved and not used
*/
struct axidma_bd {
u32 next; /* Physical address of next buffer descriptor */
@@ -373,11 +373,9 @@ struct axidma_bd {
u32 app1; /* TX start << 16 | insert */
u32 app2; /* TX csum seed */
u32 app3;
- u32 app4;
- u32 sw_id_offset;
- u32 reserved5;
- u32 reserved6;
-};
+ u32 app4; /* Last field used by HW */
+ struct sk_buff *skb;
+} __aligned(XAXIDMA_BD_MINIMUM_ALIGNMENT);
/**
* struct axienet_local - axienet private per device data
@@ -385,6 +383,7 @@ struct axidma_bd {
* @dev: Pointer to device structure
* @phy_node: Pointer to device node structure
* @mii_bus: Pointer to MII bus structure
+ * @regs_start: Resource start for axienet device addresses
* @regs: Base address for the axienet_local device address space
* @dma_regs: Base address for the axidma device address space
* @dma_err_tasklet: Tasklet structure to process Axi DMA errors
@@ -422,10 +421,17 @@ struct axienet_local {
/* Connection to PHY device */
struct device_node *phy_node;
+ struct phylink *phylink;
+ struct phylink_config phylink_config;
+
+ /* Clock for AXI bus */
+ struct clk *clk;
+
/* MDIO bus data */
struct mii_bus *mii_bus; /* MII bus reference */
/* IO registers, dma functions and IRQs */
+ resource_size_t regs_start;
void __iomem *regs;
void __iomem *dma_regs;
@@ -433,17 +439,19 @@ struct axienet_local {
int tx_irq;
int rx_irq;
+ int eth_irq;
phy_interface_t phy_mode;
u32 options; /* Current options word */
- u32 last_link;
u32 features;
/* Buffer descriptors */
struct axidma_bd *tx_bd_v;
dma_addr_t tx_bd_p;
+ u32 tx_bd_num;
struct axidma_bd *rx_bd_v;
dma_addr_t rx_bd_p;
+ u32 rx_bd_num;
u32 tx_bd_ci;
u32 tx_bd_tail;
u32 rx_bd_ci;
@@ -481,7 +489,7 @@ struct axienet_option {
*/
static inline u32 axienet_ior(struct axienet_local *lp, off_t offset)
{
- return in_be32(lp->regs + offset);
+ return ioread32(lp->regs + offset);
}
static inline u32 axinet_ior_read_mcr(struct axienet_local *lp)
@@ -501,12 +509,13 @@ static inline u32 axinet_ior_read_mcr(struct axienet_local *lp)
static inline void axienet_iow(struct axienet_local *lp, off_t offset,
u32 value)
{
- out_be32((lp->regs + offset), value);
+ iowrite32(value, lp->regs + offset);
}
/* Function prototypes visible in xilinx_axienet_mdio.c for other files */
-int axienet_mdio_setup(struct axienet_local *lp, struct device_node *np);
-int axienet_mdio_wait_until_ready(struct axienet_local *lp);
+int axienet_mdio_enable(struct axienet_local *lp);
+void axienet_mdio_disable(struct axienet_local *lp);
+int axienet_mdio_setup(struct axienet_local *lp);
void axienet_mdio_teardown(struct axienet_local *lp);
#endif /* XILINX_AXI_ENET_H */
diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
index 831967f6eff8..4fc627fb4d11 100644
--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
@@ -7,6 +7,7 @@
* Copyright (c) 2008-2009 Secret Lab Technologies Ltd.
* Copyright (c) 2010 - 2011 Michal Simek <monstr@monstr.eu>
* Copyright (c) 2010 - 2011 PetaLogix
+ * Copyright (c) 2019 SED Systems, a division of Calian Ltd.
* Copyright (c) 2010 - 2012 Xilinx, Inc. All rights reserved.
*
* This is a driver for the Xilinx Axi Ethernet which is used in the Virtex6
@@ -21,6 +22,7 @@
* - Add support for extended VLAN support.
*/
+#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/etherdevice.h>
#include <linux/module.h>
@@ -38,16 +40,18 @@
#include "xilinx_axienet.h"
-/* Descriptors defines for Tx and Rx DMA - 2^n for the best performance */
-#define TX_BD_NUM 64
-#define RX_BD_NUM 128
+/* Descriptors defines for Tx and Rx DMA */
+#define TX_BD_NUM_DEFAULT 64
+#define RX_BD_NUM_DEFAULT 1024
+#define TX_BD_NUM_MAX 4096
+#define RX_BD_NUM_MAX 4096
/* Must be shorter than length of ethtool_drvinfo.driver field to fit */
#define DRIVER_NAME "xaxienet"
#define DRIVER_DESCRIPTION "Xilinx Axi Ethernet driver"
#define DRIVER_VERSION "1.00a"
-#define AXIENET_REGS_N 32
+#define AXIENET_REGS_N 40
/* Match table for of_platform binding */
static const struct of_device_id axienet_of_match[] = {
@@ -125,7 +129,7 @@ static struct axienet_option axienet_options[] = {
*/
static inline u32 axienet_dma_in32(struct axienet_local *lp, off_t reg)
{
- return in_be32(lp->dma_regs + reg);
+ return ioread32(lp->dma_regs + reg);
}
/**
@@ -140,7 +144,7 @@ static inline u32 axienet_dma_in32(struct axienet_local *lp, off_t reg)
static inline void axienet_dma_out32(struct axienet_local *lp,
off_t reg, u32 value)
{
- out_be32((lp->dma_regs + reg), value);
+ iowrite32(value, lp->dma_regs + reg);
}
/**
@@ -156,22 +160,21 @@ static void axienet_dma_bd_release(struct net_device *ndev)
int i;
struct axienet_local *lp = netdev_priv(ndev);
- for (i = 0; i < RX_BD_NUM; i++) {
+ for (i = 0; i < lp->rx_bd_num; i++) {
dma_unmap_single(ndev->dev.parent, lp->rx_bd_v[i].phys,
lp->max_frm_size, DMA_FROM_DEVICE);
- dev_kfree_skb((struct sk_buff *)
- (lp->rx_bd_v[i].sw_id_offset));
+ dev_kfree_skb(lp->rx_bd_v[i].skb);
}
if (lp->rx_bd_v) {
dma_free_coherent(ndev->dev.parent,
- sizeof(*lp->rx_bd_v) * RX_BD_NUM,
+ sizeof(*lp->rx_bd_v) * lp->rx_bd_num,
lp->rx_bd_v,
lp->rx_bd_p);
}
if (lp->tx_bd_v) {
dma_free_coherent(ndev->dev.parent,
- sizeof(*lp->tx_bd_v) * TX_BD_NUM,
+ sizeof(*lp->tx_bd_v) * lp->tx_bd_num,
lp->tx_bd_v,
lp->tx_bd_p);
}
@@ -201,33 +204,33 @@ static int axienet_dma_bd_init(struct net_device *ndev)
/* Allocate the Tx and Rx buffer descriptors. */
lp->tx_bd_v = dma_alloc_coherent(ndev->dev.parent,
- sizeof(*lp->tx_bd_v) * TX_BD_NUM,
+ sizeof(*lp->tx_bd_v) * lp->tx_bd_num,
&lp->tx_bd_p, GFP_KERNEL);
if (!lp->tx_bd_v)
goto out;
lp->rx_bd_v = dma_alloc_coherent(ndev->dev.parent,
- sizeof(*lp->rx_bd_v) * RX_BD_NUM,
+ sizeof(*lp->rx_bd_v) * lp->rx_bd_num,
&lp->rx_bd_p, GFP_KERNEL);
if (!lp->rx_bd_v)
goto out;
- for (i = 0; i < TX_BD_NUM; i++) {
+ for (i = 0; i < lp->tx_bd_num; i++) {
lp->tx_bd_v[i].next = lp->tx_bd_p +
sizeof(*lp->tx_bd_v) *
- ((i + 1) % TX_BD_NUM);
+ ((i + 1) % lp->tx_bd_num);
}
- for (i = 0; i < RX_BD_NUM; i++) {
+ for (i = 0; i < lp->rx_bd_num; i++) {
lp->rx_bd_v[i].next = lp->rx_bd_p +
sizeof(*lp->rx_bd_v) *
- ((i + 1) % RX_BD_NUM);
+ ((i + 1) % lp->rx_bd_num);
skb = netdev_alloc_skb_ip_align(ndev, lp->max_frm_size);
if (!skb)
goto out;
- lp->rx_bd_v[i].sw_id_offset = (u32) skb;
+ lp->rx_bd_v[i].skb = skb;
lp->rx_bd_v[i].phys = dma_map_single(ndev->dev.parent,
skb->data,
lp->max_frm_size,
@@ -269,7 +272,7 @@ static int axienet_dma_bd_init(struct net_device *ndev)
axienet_dma_out32(lp, XAXIDMA_RX_CR_OFFSET,
cr | XAXIDMA_CR_RUNSTOP_MASK);
axienet_dma_out32(lp, XAXIDMA_RX_TDESC_OFFSET, lp->rx_bd_p +
- (sizeof(*lp->rx_bd_v) * (RX_BD_NUM - 1)));
+ (sizeof(*lp->rx_bd_v) * (lp->rx_bd_num - 1)));
/* Write to the RS (Run-stop) bit in the Tx channel control register.
* Tx channel is now ready to run. But only after we write to the
@@ -434,17 +437,20 @@ static void axienet_setoptions(struct net_device *ndev, u32 options)
lp->options |= options;
}
-static void __axienet_device_reset(struct axienet_local *lp, off_t offset)
+static void __axienet_device_reset(struct axienet_local *lp)
{
u32 timeout;
/* Reset Axi DMA. This would reset Axi Ethernet core as well. The reset
* process of Axi DMA takes a while to complete as all pending
* commands/transfers will be flushed or completed during this
* reset process.
+ * Note that even though both TX and RX have their own reset register,
+ * they both reset the entire DMA core, so only one needs to be used.
*/
- axienet_dma_out32(lp, offset, XAXIDMA_CR_RESET_MASK);
+ axienet_dma_out32(lp, XAXIDMA_TX_CR_OFFSET, XAXIDMA_CR_RESET_MASK);
timeout = DELAY_OF_ONE_MILLISEC;
- while (axienet_dma_in32(lp, offset) & XAXIDMA_CR_RESET_MASK) {
+ while (axienet_dma_in32(lp, XAXIDMA_TX_CR_OFFSET) &
+ XAXIDMA_CR_RESET_MASK) {
udelay(1);
if (--timeout == 0) {
netdev_err(lp->ndev, "%s: DMA reset timeout!\n",
@@ -470,8 +476,7 @@ static void axienet_device_reset(struct net_device *ndev)
u32 axienet_status;
struct axienet_local *lp = netdev_priv(ndev);
- __axienet_device_reset(lp, XAXIDMA_TX_CR_OFFSET);
- __axienet_device_reset(lp, XAXIDMA_RX_CR_OFFSET);
+ __axienet_device_reset(lp);
lp->max_frm_size = XAE_MAX_VLAN_FRAME_SIZE;
lp->options |= XAE_OPTION_VLAN;
@@ -498,6 +503,8 @@ static void axienet_device_reset(struct net_device *ndev)
axienet_status = axienet_ior(lp, XAE_IP_OFFSET);
if (axienet_status & XAE_INT_RXRJECT_MASK)
axienet_iow(lp, XAE_IS_OFFSET, XAE_INT_RXRJECT_MASK);
+ axienet_iow(lp, XAE_IE_OFFSET, lp->eth_irq > 0 ?
+ XAE_INT_RECV_ERROR_MASK : 0);
axienet_iow(lp, XAE_FCC_OFFSET, XAE_FCC_FCRX_MASK);
@@ -514,63 +521,6 @@ static void axienet_device_reset(struct net_device *ndev)
}
/**
- * axienet_adjust_link - Adjust the PHY link speed/duplex.
- * @ndev: Pointer to the net_device structure
- *
- * This function is called to change the speed and duplex setting after
- * auto negotiation is done by the PHY. This is the function that gets
- * registered with the PHY interface through the "of_phy_connect" call.
- */
-static void axienet_adjust_link(struct net_device *ndev)
-{
- u32 emmc_reg;
- u32 link_state;
- u32 setspeed = 1;
- struct axienet_local *lp = netdev_priv(ndev);
- struct phy_device *phy = ndev->phydev;
-
- link_state = phy->speed | (phy->duplex << 1) | phy->link;
- if (lp->last_link != link_state) {
- if ((phy->speed == SPEED_10) || (phy->speed == SPEED_100)) {
- if (lp->phy_mode == PHY_INTERFACE_MODE_1000BASEX)
- setspeed = 0;
- } else {
- if ((phy->speed == SPEED_1000) &&
- (lp->phy_mode == PHY_INTERFACE_MODE_MII))
- setspeed = 0;
- }
-
- if (setspeed == 1) {
- emmc_reg = axienet_ior(lp, XAE_EMMC_OFFSET);
- emmc_reg &= ~XAE_EMMC_LINKSPEED_MASK;
-
- switch (phy->speed) {
- case SPEED_1000:
- emmc_reg |= XAE_EMMC_LINKSPD_1000;
- break;
- case SPEED_100:
- emmc_reg |= XAE_EMMC_LINKSPD_100;
- break;
- case SPEED_10:
- emmc_reg |= XAE_EMMC_LINKSPD_10;
- break;
- default:
- dev_err(&ndev->dev, "Speed other than 10, 100 "
- "or 1Gbps is not supported\n");
- break;
- }
-
- axienet_iow(lp, XAE_EMMC_OFFSET, emmc_reg);
- lp->last_link = link_state;
- phy_print_status(phy);
- } else {
- netdev_err(ndev,
- "Error setting Axi Ethernet mac speed\n");
- }
- }
-}
-
-/**
* axienet_start_xmit_done - Invoked once a transmit is completed by the
* Axi DMA Tx channel.
* @ndev: Pointer to the net_device structure
@@ -595,26 +545,31 @@ static void axienet_start_xmit_done(struct net_device *ndev)
dma_unmap_single(ndev->dev.parent, cur_p->phys,
(cur_p->cntrl & XAXIDMA_BD_CTRL_LENGTH_MASK),
DMA_TO_DEVICE);
- if (cur_p->app4)
- dev_consume_skb_irq((struct sk_buff *)cur_p->app4);
+ if (cur_p->skb)
+ dev_consume_skb_irq(cur_p->skb);
/*cur_p->phys = 0;*/
cur_p->app0 = 0;
cur_p->app1 = 0;
cur_p->app2 = 0;
cur_p->app4 = 0;
cur_p->status = 0;
+ cur_p->skb = NULL;
size += status & XAXIDMA_BD_STS_ACTUAL_LEN_MASK;
packets++;
- ++lp->tx_bd_ci;
- lp->tx_bd_ci %= TX_BD_NUM;
+ if (++lp->tx_bd_ci >= lp->tx_bd_num)
+ lp->tx_bd_ci = 0;
cur_p = &lp->tx_bd_v[lp->tx_bd_ci];
status = cur_p->status;
}
ndev->stats.tx_packets += packets;
ndev->stats.tx_bytes += size;
+
+ /* Matches barrier in axienet_start_xmit */
+ smp_mb();
+
netif_wake_queue(ndev);
}
@@ -635,7 +590,7 @@ static inline int axienet_check_tx_bd_space(struct axienet_local *lp,
int num_frag)
{
struct axidma_bd *cur_p;
- cur_p = &lp->tx_bd_v[(lp->tx_bd_tail + num_frag) % TX_BD_NUM];
+ cur_p = &lp->tx_bd_v[(lp->tx_bd_tail + num_frag) % lp->tx_bd_num];
if (cur_p->status & XAXIDMA_BD_STS_ALL_MASK)
return NETDEV_TX_BUSY;
return 0;
@@ -670,9 +625,19 @@ axienet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
cur_p = &lp->tx_bd_v[lp->tx_bd_tail];
if (axienet_check_tx_bd_space(lp, num_frag)) {
- if (!netif_queue_stopped(ndev))
- netif_stop_queue(ndev);
- return NETDEV_TX_BUSY;
+ if (netif_queue_stopped(ndev))
+ return NETDEV_TX_BUSY;
+
+ netif_stop_queue(ndev);
+
+ /* Matches barrier in axienet_start_xmit_done */
+ smp_mb();
+
+ /* Space might have just been freed - check again */
+ if (axienet_check_tx_bd_space(lp, num_frag))
+ return NETDEV_TX_BUSY;
+
+ netif_wake_queue(ndev);
}
if (skb->ip_summed == CHECKSUM_PARTIAL) {
@@ -695,8 +660,8 @@ axienet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
skb_headlen(skb), DMA_TO_DEVICE);
for (ii = 0; ii < num_frag; ii++) {
- ++lp->tx_bd_tail;
- lp->tx_bd_tail %= TX_BD_NUM;
+ if (++lp->tx_bd_tail >= lp->tx_bd_num)
+ lp->tx_bd_tail = 0;
cur_p = &lp->tx_bd_v[lp->tx_bd_tail];
frag = &skb_shinfo(skb)->frags[ii];
cur_p->phys = dma_map_single(ndev->dev.parent,
@@ -707,13 +672,13 @@ axienet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
}
cur_p->cntrl |= XAXIDMA_BD_CTRL_TXEOF_MASK;
- cur_p->app4 = (unsigned long)skb;
+ cur_p->skb = skb;
tail_p = lp->tx_bd_p + sizeof(*lp->tx_bd_v) * lp->tx_bd_tail;
/* Start the transfer */
axienet_dma_out32(lp, XAXIDMA_TX_TDESC_OFFSET, tail_p);
- ++lp->tx_bd_tail;
- lp->tx_bd_tail %= TX_BD_NUM;
+ if (++lp->tx_bd_tail >= lp->tx_bd_num)
+ lp->tx_bd_tail = 0;
return NETDEV_TX_OK;
}
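The smp_mb() pair added in axienet_start_xmit_done() and in the ring-full branch above is the usual lockless stop/wake handshake between the transmit and completion paths. A condensed, driver-agnostic sketch of that pattern, where has_room() is a hypothetical stand-in for axienet_check_tx_bd_space():

#include <linux/netdevice.h>

bool has_room(struct net_device *ndev);	/* hypothetical free-space check */

/* Completion side: reclaim descriptors, publish the freed space, then wake. */
static void tx_complete(struct net_device *ndev)
{
	/* ... free finished descriptors, advancing the consumer index ... */
	smp_mb();		/* pairs with the barrier in tx_xmit() */
	netif_wake_queue(ndev);
}

/* Transmit side: if the ring looks full, stop the queue and re-check, since
 * the completion path may have freed space between the check and the stop.
 */
static netdev_tx_t tx_xmit(struct sk_buff *skb, struct net_device *ndev)
{
	if (!has_room(ndev)) {
		if (netif_queue_stopped(ndev))
			return NETDEV_TX_BUSY;

		netif_stop_queue(ndev);
		smp_mb();	/* pairs with the barrier in tx_complete() */

		if (!has_room(ndev))
			return NETDEV_TX_BUSY;
		netif_wake_queue(ndev);
	}
	/* ... map the skb and hand it to the hardware ... */
	return NETDEV_TX_OK;
}

Without the re-check after netif_stop_queue(), a completion running in that window could issue its wake before the queue is actually stopped, leaving the queue stopped with nothing left to wake it.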
@@ -742,13 +707,15 @@ static void axienet_recv(struct net_device *ndev)
while ((cur_p->status & XAXIDMA_BD_STS_COMPLETE_MASK)) {
tail_p = lp->rx_bd_p + sizeof(*lp->rx_bd_v) * lp->rx_bd_ci;
- skb = (struct sk_buff *) (cur_p->sw_id_offset);
- length = cur_p->app4 & 0x0000FFFF;
dma_unmap_single(ndev->dev.parent, cur_p->phys,
lp->max_frm_size,
DMA_FROM_DEVICE);
+ skb = cur_p->skb;
+ cur_p->skb = NULL;
+ length = cur_p->app4 & 0x0000FFFF;
+
skb_put(skb, length);
skb->protocol = eth_type_trans(skb, ndev);
/*skb_checksum_none_assert(skb);*/
@@ -783,10 +750,10 @@ static void axienet_recv(struct net_device *ndev)
DMA_FROM_DEVICE);
cur_p->cntrl = lp->max_frm_size;
cur_p->status = 0;
- cur_p->sw_id_offset = (u32) new_skb;
+ cur_p->skb = new_skb;
- ++lp->rx_bd_ci;
- lp->rx_bd_ci %= RX_BD_NUM;
+ if (++lp->rx_bd_ci >= lp->rx_bd_num)
+ lp->rx_bd_ci = 0;
cur_p = &lp->rx_bd_v[lp->rx_bd_ci];
}
@@ -802,7 +769,7 @@ static void axienet_recv(struct net_device *ndev)
* @irq: irq number
* @_ndev: net_device pointer
*
- * Return: IRQ_HANDLED for all cases.
+ * Return: IRQ_HANDLED if the device generated a TX interrupt, IRQ_NONE otherwise.
*
* This is the Axi DMA Tx done Isr. It invokes "axienet_start_xmit_done"
* to complete the BD processing.
@@ -821,7 +788,7 @@ static irqreturn_t axienet_tx_irq(int irq, void *_ndev)
goto out;
}
if (!(status & XAXIDMA_IRQ_ALL_MASK))
- dev_err(&ndev->dev, "No interrupts asserted in Tx path\n");
+ return IRQ_NONE;
if (status & XAXIDMA_IRQ_ERROR_MASK) {
dev_err(&ndev->dev, "DMA Tx error 0x%x\n", status);
dev_err(&ndev->dev, "Current BD is at: 0x%x\n",
@@ -851,7 +818,7 @@ out:
* @irq: irq number
* @_ndev: net_device pointer
*
- * Return: IRQ_HANDLED for all cases.
+ * Return: IRQ_HANDLED if the device generated an RX interrupt, IRQ_NONE otherwise.
*
* This is the Axi DMA Rx Isr. It invokes "axienet_recv" to complete the BD
* processing.
@@ -870,7 +837,7 @@ static irqreturn_t axienet_rx_irq(int irq, void *_ndev)
goto out;
}
if (!(status & XAXIDMA_IRQ_ALL_MASK))
- dev_err(&ndev->dev, "No interrupts asserted in Rx path\n");
+ return IRQ_NONE;
if (status & XAXIDMA_IRQ_ERROR_MASK) {
dev_err(&ndev->dev, "DMA Rx error 0x%x\n", status);
dev_err(&ndev->dev, "Current BD is at: 0x%x\n",
@@ -895,6 +862,35 @@ out:
return IRQ_HANDLED;
}
+/**
+ * axienet_eth_irq - Ethernet core Isr.
+ * @irq: irq number
+ * @_ndev: net_device pointer
+ *
+ * Return: IRQ_HANDLED if the device generated a core interrupt, IRQ_NONE otherwise.
+ *
+ * Handle miscellaneous conditions indicated by Ethernet core IRQ.
+ */
+static irqreturn_t axienet_eth_irq(int irq, void *_ndev)
+{
+ struct net_device *ndev = _ndev;
+ struct axienet_local *lp = netdev_priv(ndev);
+ unsigned int pending;
+
+ pending = axienet_ior(lp, XAE_IP_OFFSET);
+ if (!pending)
+ return IRQ_NONE;
+
+ if (pending & XAE_INT_RXFIFOOVR_MASK)
+ ndev->stats.rx_missed_errors++;
+
+ if (pending & XAE_INT_RXRJECT_MASK)
+ ndev->stats.rx_frame_errors++;
+
+ axienet_iow(lp, XAE_IS_OFFSET, pending);
+ return IRQ_HANDLED;
+}
+
static void axienet_dma_err_handler(unsigned long data);
/**
@@ -904,67 +900,72 @@ static void axienet_dma_err_handler(unsigned long data);
* Return: 0, on success.
* non-zero error value on failure
*
- * This is the driver open routine. It calls phy_start to start the PHY device.
+ * This is the driver open routine. It calls phylink_start to start the
+ * PHY device.
* It also allocates interrupt service routines, enables the interrupt lines
* and ISR handling. Axi Ethernet core is reset through Axi DMA core. Buffer
* descriptors are initialized.
*/
static int axienet_open(struct net_device *ndev)
{
- int ret, mdio_mcreg;
+ int ret;
struct axienet_local *lp = netdev_priv(ndev);
- struct phy_device *phydev = NULL;
dev_dbg(&ndev->dev, "axienet_open()\n");
- mdio_mcreg = axienet_ior(lp, XAE_MDIO_MC_OFFSET);
- ret = axienet_mdio_wait_until_ready(lp);
- if (ret < 0)
- return ret;
/* Disable the MDIO interface till Axi Ethernet Reset is completed.
* When we do an Axi Ethernet reset, it resets the complete core
- * including the MDIO. If MDIO is not disabled when the reset
- * process is started, MDIO will be broken afterwards.
+ * including the MDIO. MDIO must be disabled before resetting
+ * and re-enabled afterwards.
+ * Hold MDIO bus lock to avoid MDIO accesses during the reset.
*/
- axienet_iow(lp, XAE_MDIO_MC_OFFSET,
- (mdio_mcreg & (~XAE_MDIO_MC_MDIOEN_MASK)));
+ mutex_lock(&lp->mii_bus->mdio_lock);
+ axienet_mdio_disable(lp);
axienet_device_reset(ndev);
- /* Enable the MDIO */
- axienet_iow(lp, XAE_MDIO_MC_OFFSET, mdio_mcreg);
- ret = axienet_mdio_wait_until_ready(lp);
+ ret = axienet_mdio_enable(lp);
+ mutex_unlock(&lp->mii_bus->mdio_lock);
if (ret < 0)
return ret;
- if (lp->phy_node) {
- phydev = of_phy_connect(lp->ndev, lp->phy_node,
- axienet_adjust_link, 0, lp->phy_mode);
-
- if (!phydev)
- dev_err(lp->dev, "of_phy_connect() failed\n");
- else
- phy_start(phydev);
+ ret = phylink_of_phy_connect(lp->phylink, lp->dev->of_node, 0);
+ if (ret) {
+ dev_err(lp->dev, "phylink_of_phy_connect() failed: %d\n", ret);
+ return ret;
}
+ phylink_start(lp->phylink);
+
/* Enable tasklets for Axi DMA error handling */
tasklet_init(&lp->dma_err_tasklet, axienet_dma_err_handler,
(unsigned long) lp);
/* Enable interrupts for Axi DMA Tx */
- ret = request_irq(lp->tx_irq, axienet_tx_irq, 0, ndev->name, ndev);
+ ret = request_irq(lp->tx_irq, axienet_tx_irq, IRQF_SHARED,
+ ndev->name, ndev);
if (ret)
goto err_tx_irq;
/* Enable interrupts for Axi DMA Rx */
- ret = request_irq(lp->rx_irq, axienet_rx_irq, 0, ndev->name, ndev);
+ ret = request_irq(lp->rx_irq, axienet_rx_irq, IRQF_SHARED,
+ ndev->name, ndev);
if (ret)
goto err_rx_irq;
+ /* Enable interrupts for Axi Ethernet core (if defined) */
+ if (lp->eth_irq > 0) {
+ ret = request_irq(lp->eth_irq, axienet_eth_irq, IRQF_SHARED,
+ ndev->name, ndev);
+ if (ret)
+ goto err_eth_irq;
+ }
return 0;
+err_eth_irq:
+ free_irq(lp->rx_irq, ndev);
err_rx_irq:
free_irq(lp->tx_irq, ndev);
err_tx_irq:
- if (phydev)
- phy_disconnect(phydev);
+ phylink_stop(lp->phylink);
+ phylink_disconnect_phy(lp->phylink);
tasklet_kill(&lp->dma_err_tasklet);
dev_err(lp->dev, "request_irq() failed\n");
return ret;
@@ -976,34 +977,61 @@ err_tx_irq:
*
* Return: 0, on success.
*
- * This is the driver stop routine. It calls phy_disconnect to stop the PHY
+ * This is the driver stop routine. It calls phylink_disconnect to stop the PHY
* device. It also removes the interrupt handlers and disables the interrupts.
* The Axi DMA Tx/Rx BDs are released.
*/
static int axienet_stop(struct net_device *ndev)
{
- u32 cr;
+ u32 cr, sr;
+ int count;
struct axienet_local *lp = netdev_priv(ndev);
dev_dbg(&ndev->dev, "axienet_close()\n");
- cr = axienet_dma_in32(lp, XAXIDMA_RX_CR_OFFSET);
- axienet_dma_out32(lp, XAXIDMA_RX_CR_OFFSET,
- cr & (~XAXIDMA_CR_RUNSTOP_MASK));
- cr = axienet_dma_in32(lp, XAXIDMA_TX_CR_OFFSET);
- axienet_dma_out32(lp, XAXIDMA_TX_CR_OFFSET,
- cr & (~XAXIDMA_CR_RUNSTOP_MASK));
+ phylink_stop(lp->phylink);
+ phylink_disconnect_phy(lp->phylink);
+
axienet_setoptions(ndev, lp->options &
~(XAE_OPTION_TXEN | XAE_OPTION_RXEN));
+ cr = axienet_dma_in32(lp, XAXIDMA_RX_CR_OFFSET);
+ cr &= ~(XAXIDMA_CR_RUNSTOP_MASK | XAXIDMA_IRQ_ALL_MASK);
+ axienet_dma_out32(lp, XAXIDMA_RX_CR_OFFSET, cr);
+
+ cr = axienet_dma_in32(lp, XAXIDMA_TX_CR_OFFSET);
+ cr &= ~(XAXIDMA_CR_RUNSTOP_MASK | XAXIDMA_IRQ_ALL_MASK);
+ axienet_dma_out32(lp, XAXIDMA_TX_CR_OFFSET, cr);
+
+ axienet_iow(lp, XAE_IE_OFFSET, 0);
+
+ /* Give DMAs a chance to halt gracefully */
+ sr = axienet_dma_in32(lp, XAXIDMA_RX_SR_OFFSET);
+ for (count = 0; !(sr & XAXIDMA_SR_HALT_MASK) && count < 5; ++count) {
+ msleep(20);
+ sr = axienet_dma_in32(lp, XAXIDMA_RX_SR_OFFSET);
+ }
+
+ sr = axienet_dma_in32(lp, XAXIDMA_TX_SR_OFFSET);
+ for (count = 0; !(sr & XAXIDMA_SR_HALT_MASK) && count < 5; ++count) {
+ msleep(20);
+ sr = axienet_dma_in32(lp, XAXIDMA_TX_SR_OFFSET);
+ }
+
+ /* Do a reset to ensure DMA is really stopped */
+ mutex_lock(&lp->mii_bus->mdio_lock);
+ axienet_mdio_disable(lp);
+ __axienet_device_reset(lp);
+ axienet_mdio_enable(lp);
+ mutex_unlock(&lp->mii_bus->mdio_lock);
+
tasklet_kill(&lp->dma_err_tasklet);
+ if (lp->eth_irq > 0)
+ free_irq(lp->eth_irq, ndev);
free_irq(lp->tx_irq, ndev);
free_irq(lp->rx_irq, ndev);
- if (ndev->phydev)
- phy_disconnect(ndev->phydev);
-
axienet_dma_bd_release(ndev);
return 0;
}
@@ -1151,6 +1179,48 @@ static void axienet_ethtools_get_regs(struct net_device *ndev,
data[29] = axienet_ior(lp, XAE_FMI_OFFSET);
data[30] = axienet_ior(lp, XAE_AF0_OFFSET);
data[31] = axienet_ior(lp, XAE_AF1_OFFSET);
+ data[32] = axienet_dma_in32(lp, XAXIDMA_TX_CR_OFFSET);
+ data[33] = axienet_dma_in32(lp, XAXIDMA_TX_SR_OFFSET);
+ data[34] = axienet_dma_in32(lp, XAXIDMA_TX_CDESC_OFFSET);
+ data[35] = axienet_dma_in32(lp, XAXIDMA_TX_TDESC_OFFSET);
+ data[36] = axienet_dma_in32(lp, XAXIDMA_RX_CR_OFFSET);
+ data[37] = axienet_dma_in32(lp, XAXIDMA_RX_SR_OFFSET);
+ data[38] = axienet_dma_in32(lp, XAXIDMA_RX_CDESC_OFFSET);
+ data[39] = axienet_dma_in32(lp, XAXIDMA_RX_TDESC_OFFSET);
+}
+
+static void axienet_ethtools_get_ringparam(struct net_device *ndev,
+ struct ethtool_ringparam *ering)
+{
+ struct axienet_local *lp = netdev_priv(ndev);
+
+ ering->rx_max_pending = RX_BD_NUM_MAX;
+ ering->rx_mini_max_pending = 0;
+ ering->rx_jumbo_max_pending = 0;
+ ering->tx_max_pending = TX_BD_NUM_MAX;
+ ering->rx_pending = lp->rx_bd_num;
+ ering->rx_mini_pending = 0;
+ ering->rx_jumbo_pending = 0;
+ ering->tx_pending = lp->tx_bd_num;
+}
+
+static int axienet_ethtools_set_ringparam(struct net_device *ndev,
+ struct ethtool_ringparam *ering)
+{
+ struct axienet_local *lp = netdev_priv(ndev);
+
+ if (ering->rx_pending > RX_BD_NUM_MAX ||
+ ering->rx_mini_pending ||
+ ering->rx_jumbo_pending ||
+ ering->tx_pending > TX_BD_NUM_MAX)
+ return -EINVAL;
+
+ if (netif_running(ndev))
+ return -EBUSY;
+
+ lp->rx_bd_num = ering->rx_pending;
+ lp->tx_bd_num = ering->tx_pending;
+ return 0;
}
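With the ring-parameter hooks above (wired into axienet_ethtool_ops further down), the descriptor counts become tunable from userspace while the interface is down (set_ringparam returns -EBUSY otherwise): for example, "ethtool -g eth0" reports the current and maximum ring sizes, and "ethtool -G eth0 rx 2048 tx 128" resizes them (interface name and sizes illustrative).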
/**
@@ -1166,12 +1236,9 @@ static void
axienet_ethtools_get_pauseparam(struct net_device *ndev,
struct ethtool_pauseparam *epauseparm)
{
- u32 regval;
struct axienet_local *lp = netdev_priv(ndev);
- epauseparm->autoneg = 0;
- regval = axienet_ior(lp, XAE_FCC_OFFSET);
- epauseparm->tx_pause = regval & XAE_FCC_FCTX_MASK;
- epauseparm->rx_pause = regval & XAE_FCC_FCRX_MASK;
+
+ phylink_ethtool_get_pauseparam(lp->phylink, epauseparm);
}
/**
@@ -1190,27 +1257,9 @@ static int
axienet_ethtools_set_pauseparam(struct net_device *ndev,
struct ethtool_pauseparam *epauseparm)
{
- u32 regval = 0;
struct axienet_local *lp = netdev_priv(ndev);
- if (netif_running(ndev)) {
- netdev_err(ndev,
- "Please stop netif before applying configuration\n");
- return -EFAULT;
- }
-
- regval = axienet_ior(lp, XAE_FCC_OFFSET);
- if (epauseparm->tx_pause)
- regval |= XAE_FCC_FCTX_MASK;
- else
- regval &= ~XAE_FCC_FCTX_MASK;
- if (epauseparm->rx_pause)
- regval |= XAE_FCC_FCRX_MASK;
- else
- regval &= ~XAE_FCC_FCRX_MASK;
- axienet_iow(lp, XAE_FCC_OFFSET, regval);
-
- return 0;
+ return phylink_ethtool_set_pauseparam(lp->phylink, epauseparm);
}
/**
@@ -1289,17 +1338,170 @@ static int axienet_ethtools_set_coalesce(struct net_device *ndev,
return 0;
}
+static int
+axienet_ethtools_get_link_ksettings(struct net_device *ndev,
+ struct ethtool_link_ksettings *cmd)
+{
+ struct axienet_local *lp = netdev_priv(ndev);
+
+ return phylink_ethtool_ksettings_get(lp->phylink, cmd);
+}
+
+static int
+axienet_ethtools_set_link_ksettings(struct net_device *ndev,
+ const struct ethtool_link_ksettings *cmd)
+{
+ struct axienet_local *lp = netdev_priv(ndev);
+
+ return phylink_ethtool_ksettings_set(lp->phylink, cmd);
+}
+
static const struct ethtool_ops axienet_ethtool_ops = {
.get_drvinfo = axienet_ethtools_get_drvinfo,
.get_regs_len = axienet_ethtools_get_regs_len,
.get_regs = axienet_ethtools_get_regs,
.get_link = ethtool_op_get_link,
+ .get_ringparam = axienet_ethtools_get_ringparam,
+ .set_ringparam = axienet_ethtools_set_ringparam,
.get_pauseparam = axienet_ethtools_get_pauseparam,
.set_pauseparam = axienet_ethtools_set_pauseparam,
.get_coalesce = axienet_ethtools_get_coalesce,
.set_coalesce = axienet_ethtools_set_coalesce,
- .get_link_ksettings = phy_ethtool_get_link_ksettings,
- .set_link_ksettings = phy_ethtool_set_link_ksettings,
+ .get_link_ksettings = axienet_ethtools_get_link_ksettings,
+ .set_link_ksettings = axienet_ethtools_set_link_ksettings,
+};
+
+static void axienet_validate(struct phylink_config *config,
+ unsigned long *supported,
+ struct phylink_link_state *state)
+{
+ struct net_device *ndev = to_net_dev(config->dev);
+ struct axienet_local *lp = netdev_priv(ndev);
+ __ETHTOOL_DECLARE_LINK_MODE_MASK(mask) = { 0, };
+
+ /* Only support the mode we are configured for */
+ if (state->interface != PHY_INTERFACE_MODE_NA &&
+ state->interface != lp->phy_mode) {
+ netdev_warn(ndev, "Cannot use PHY mode %s, supported: %s\n",
+ phy_modes(state->interface),
+ phy_modes(lp->phy_mode));
+ bitmap_zero(supported, __ETHTOOL_LINK_MODE_MASK_NBITS);
+ return;
+ }
+
+ phylink_set(mask, Autoneg);
+ phylink_set_port_modes(mask);
+
+ phylink_set(mask, Asym_Pause);
+ phylink_set(mask, Pause);
+ phylink_set(mask, 1000baseX_Full);
+ phylink_set(mask, 10baseT_Full);
+ phylink_set(mask, 100baseT_Full);
+ phylink_set(mask, 1000baseT_Full);
+
+ bitmap_and(supported, supported, mask,
+ __ETHTOOL_LINK_MODE_MASK_NBITS);
+ bitmap_and(state->advertising, state->advertising, mask,
+ __ETHTOOL_LINK_MODE_MASK_NBITS);
+}
+
+static int axienet_mac_link_state(struct phylink_config *config,
+ struct phylink_link_state *state)
+{
+ struct net_device *ndev = to_net_dev(config->dev);
+ struct axienet_local *lp = netdev_priv(ndev);
+ u32 emmc_reg, fcc_reg;
+
+ state->interface = lp->phy_mode;
+
+ emmc_reg = axienet_ior(lp, XAE_EMMC_OFFSET);
+ if (emmc_reg & XAE_EMMC_LINKSPD_1000)
+ state->speed = SPEED_1000;
+ else if (emmc_reg & XAE_EMMC_LINKSPD_100)
+ state->speed = SPEED_100;
+ else
+ state->speed = SPEED_10;
+
+ state->pause = 0;
+ fcc_reg = axienet_ior(lp, XAE_FCC_OFFSET);
+ if (fcc_reg & XAE_FCC_FCTX_MASK)
+ state->pause |= MLO_PAUSE_TX;
+ if (fcc_reg & XAE_FCC_FCRX_MASK)
+ state->pause |= MLO_PAUSE_RX;
+
+ state->an_complete = 0;
+ state->duplex = 1;
+
+ return 1;
+}
+
+static void axienet_mac_an_restart(struct phylink_config *config)
+{
+ /* Unsupported, do nothing */
+}
+
+static void axienet_mac_config(struct phylink_config *config, unsigned int mode,
+ const struct phylink_link_state *state)
+{
+ struct net_device *ndev = to_net_dev(config->dev);
+ struct axienet_local *lp = netdev_priv(ndev);
+ u32 emmc_reg, fcc_reg;
+
+ emmc_reg = axienet_ior(lp, XAE_EMMC_OFFSET);
+ emmc_reg &= ~XAE_EMMC_LINKSPEED_MASK;
+
+ switch (state->speed) {
+ case SPEED_1000:
+ emmc_reg |= XAE_EMMC_LINKSPD_1000;
+ break;
+ case SPEED_100:
+ emmc_reg |= XAE_EMMC_LINKSPD_100;
+ break;
+ case SPEED_10:
+ emmc_reg |= XAE_EMMC_LINKSPD_10;
+ break;
+ default:
+ dev_err(&ndev->dev,
+ "Speed other than 10, 100 or 1Gbps is not supported\n");
+ break;
+ }
+
+ axienet_iow(lp, XAE_EMMC_OFFSET, emmc_reg);
+
+ fcc_reg = axienet_ior(lp, XAE_FCC_OFFSET);
+ if (state->pause & MLO_PAUSE_TX)
+ fcc_reg |= XAE_FCC_FCTX_MASK;
+ else
+ fcc_reg &= ~XAE_FCC_FCTX_MASK;
+ if (state->pause & MLO_PAUSE_RX)
+ fcc_reg |= XAE_FCC_FCRX_MASK;
+ else
+ fcc_reg &= ~XAE_FCC_FCRX_MASK;
+ axienet_iow(lp, XAE_FCC_OFFSET, fcc_reg);
+}
+
+static void axienet_mac_link_down(struct phylink_config *config,
+ unsigned int mode,
+ phy_interface_t interface)
+{
+ /* nothing meaningful to do */
+}
+
+static void axienet_mac_link_up(struct phylink_config *config,
+ unsigned int mode,
+ phy_interface_t interface,
+ struct phy_device *phy)
+{
+ /* nothing meaningful to do */
+}
+
+static const struct phylink_mac_ops axienet_phylink_ops = {
+ .validate = axienet_validate,
+ .mac_link_state = axienet_mac_link_state,
+ .mac_an_restart = axienet_mac_an_restart,
+ .mac_config = axienet_mac_config,
+ .mac_link_down = axienet_mac_link_down,
+ .mac_link_up = axienet_mac_link_up,
};
/**
@@ -1313,38 +1515,33 @@ static void axienet_dma_err_handler(unsigned long data)
{
u32 axienet_status;
u32 cr, i;
- int mdio_mcreg;
struct axienet_local *lp = (struct axienet_local *) data;
struct net_device *ndev = lp->ndev;
struct axidma_bd *cur_p;
axienet_setoptions(ndev, lp->options &
~(XAE_OPTION_TXEN | XAE_OPTION_RXEN));
- mdio_mcreg = axienet_ior(lp, XAE_MDIO_MC_OFFSET);
- axienet_mdio_wait_until_ready(lp);
/* Disable the MDIO interface till Axi Ethernet Reset is completed.
* When we do an Axi Ethernet reset, it resets the complete core
- * including the MDIO. So if MDIO is not disabled when the reset
- * process is started, MDIO will be broken afterwards.
+ * including the MDIO. MDIO must be disabled before resetting
+ * and re-enabled afterwards.
+ * Hold MDIO bus lock to avoid MDIO accesses during the reset.
*/
- axienet_iow(lp, XAE_MDIO_MC_OFFSET, (mdio_mcreg &
- ~XAE_MDIO_MC_MDIOEN_MASK));
+ mutex_lock(&lp->mii_bus->mdio_lock);
+ axienet_mdio_disable(lp);
+ __axienet_device_reset(lp);
+ axienet_mdio_enable(lp);
+ mutex_unlock(&lp->mii_bus->mdio_lock);
- __axienet_device_reset(lp, XAXIDMA_TX_CR_OFFSET);
- __axienet_device_reset(lp, XAXIDMA_RX_CR_OFFSET);
-
- axienet_iow(lp, XAE_MDIO_MC_OFFSET, mdio_mcreg);
- axienet_mdio_wait_until_ready(lp);
-
- for (i = 0; i < TX_BD_NUM; i++) {
+ for (i = 0; i < lp->tx_bd_num; i++) {
cur_p = &lp->tx_bd_v[i];
if (cur_p->phys)
dma_unmap_single(ndev->dev.parent, cur_p->phys,
(cur_p->cntrl &
XAXIDMA_BD_CTRL_LENGTH_MASK),
DMA_TO_DEVICE);
- if (cur_p->app4)
- dev_kfree_skb_irq((struct sk_buff *) cur_p->app4);
+ if (cur_p->skb)
+ dev_kfree_skb_irq(cur_p->skb);
cur_p->phys = 0;
cur_p->cntrl = 0;
cur_p->status = 0;
@@ -1353,10 +1550,10 @@ static void axienet_dma_err_handler(unsigned long data)
cur_p->app2 = 0;
cur_p->app3 = 0;
cur_p->app4 = 0;
- cur_p->sw_id_offset = 0;
+ cur_p->skb = NULL;
}
- for (i = 0; i < RX_BD_NUM; i++) {
+ for (i = 0; i < lp->rx_bd_num; i++) {
cur_p = &lp->rx_bd_v[i];
cur_p->status = 0;
cur_p->app0 = 0;
@@ -1404,7 +1601,7 @@ static void axienet_dma_err_handler(unsigned long data)
axienet_dma_out32(lp, XAXIDMA_RX_CR_OFFSET,
cr | XAXIDMA_CR_RUNSTOP_MASK);
axienet_dma_out32(lp, XAXIDMA_RX_TDESC_OFFSET, lp->rx_bd_p +
- (sizeof(*lp->rx_bd_v) * (RX_BD_NUM - 1)));
+ (sizeof(*lp->rx_bd_v) * (lp->rx_bd_num - 1)));
/* Write to the RS (Run-stop) bit in the Tx channel control register.
* Tx channel is now ready to run. But only after we write to the
@@ -1422,6 +1619,8 @@ static void axienet_dma_err_handler(unsigned long data)
axienet_status = axienet_ior(lp, XAE_IP_OFFSET);
if (axienet_status & XAE_INT_RXRJECT_MASK)
axienet_iow(lp, XAE_IS_OFFSET, XAE_INT_RXRJECT_MASK);
+ axienet_iow(lp, XAE_IE_OFFSET, lp->eth_irq > 0 ?
+ XAE_INT_RECV_ERROR_MASK : 0);
axienet_iow(lp, XAE_FCC_OFFSET, XAE_FCC_FCRX_MASK);
/* Sync default options with HW but leave receiver and
@@ -1453,7 +1652,7 @@ static int axienet_probe(struct platform_device *pdev)
struct axienet_local *lp;
struct net_device *ndev;
const void *mac_addr;
- struct resource *ethres, dmares;
+ struct resource *ethres;
u32 value;
ndev = alloc_etherdev(sizeof(*lp));
@@ -1476,6 +1675,8 @@ static int axienet_probe(struct platform_device *pdev)
lp->ndev = ndev;
lp->dev = &pdev->dev;
lp->options = XAE_OPTION_DEFAULTS;
+ lp->rx_bd_num = RX_BD_NUM_DEFAULT;
+ lp->tx_bd_num = TX_BD_NUM_DEFAULT;
/* Map device registers */
ethres = platform_get_resource(pdev, IORESOURCE_MEM, 0);
lp->regs = devm_ioremap_resource(&pdev->dev, ethres);
@@ -1484,6 +1685,7 @@ static int axienet_probe(struct platform_device *pdev)
ret = PTR_ERR(lp->regs);
goto free_netdev;
}
+ lp->regs_start = ethres->start;
/* Setup checksum offload, but default to off if not specified */
lp->features = 0;
@@ -1568,38 +1770,56 @@ static int axienet_probe(struct platform_device *pdev)
/* Find the DMA node, map the DMA registers, and decode the DMA IRQs */
np = of_parse_phandle(pdev->dev.of_node, "axistream-connected", 0);
- if (!np) {
- dev_err(&pdev->dev, "could not find DMA node\n");
- ret = -ENODEV;
- goto free_netdev;
- }
- ret = of_address_to_resource(np, 0, &dmares);
- if (ret) {
- dev_err(&pdev->dev, "unable to get DMA resource\n");
+ if (np) {
+ struct resource dmares;
+
+ ret = of_address_to_resource(np, 0, &dmares);
+ if (ret) {
+ dev_err(&pdev->dev,
+ "unable to get DMA resource\n");
+ of_node_put(np);
+ goto free_netdev;
+ }
+ lp->dma_regs = devm_ioremap_resource(&pdev->dev,
+ &dmares);
+ lp->rx_irq = irq_of_parse_and_map(np, 1);
+ lp->tx_irq = irq_of_parse_and_map(np, 0);
of_node_put(np);
- goto free_netdev;
+ lp->eth_irq = platform_get_irq(pdev, 0);
+ } else {
+ /* Check for these resources directly on the Ethernet node. */
+ struct resource *res = platform_get_resource(pdev,
+ IORESOURCE_MEM, 1);
+ if (!res) {
+ dev_err(&pdev->dev, "unable to get DMA memory resource\n");
+ ret = -ENODEV;
+ goto free_netdev;
+ }
+ lp->dma_regs = devm_ioremap_resource(&pdev->dev, res);
+ lp->rx_irq = platform_get_irq(pdev, 1);
+ lp->tx_irq = platform_get_irq(pdev, 0);
+ lp->eth_irq = platform_get_irq(pdev, 2);
}
- lp->dma_regs = devm_ioremap_resource(&pdev->dev, &dmares);
if (IS_ERR(lp->dma_regs)) {
dev_err(&pdev->dev, "could not map DMA regs\n");
ret = PTR_ERR(lp->dma_regs);
- of_node_put(np);
goto free_netdev;
}
- lp->rx_irq = irq_of_parse_and_map(np, 1);
- lp->tx_irq = irq_of_parse_and_map(np, 0);
- of_node_put(np);
if ((lp->rx_irq <= 0) || (lp->tx_irq <= 0)) {
dev_err(&pdev->dev, "could not determine irqs\n");
ret = -ENOMEM;
goto free_netdev;
}
+ /* Check for Ethernet core IRQ (optional) */
+ if (lp->eth_irq <= 0)
+ dev_info(&pdev->dev, "Ethernet core IRQ not defined\n");
+
/* Retrieve the MAC address */
mac_addr = of_get_mac_address(pdev->dev.of_node);
if (IS_ERR(mac_addr)) {
- dev_err(&pdev->dev, "could not find MAC address\n");
- goto free_netdev;
+ dev_warn(&pdev->dev, "could not find MAC address property: %ld\n",
+ PTR_ERR(mac_addr));
+ mac_addr = NULL;
}
axienet_set_mac_address(ndev, mac_addr);
@@ -1608,9 +1828,36 @@ static int axienet_probe(struct platform_device *pdev)
lp->phy_node = of_parse_phandle(pdev->dev.of_node, "phy-handle", 0);
if (lp->phy_node) {
- ret = axienet_mdio_setup(lp, pdev->dev.of_node);
+ lp->clk = devm_clk_get(&pdev->dev, NULL);
+ if (IS_ERR(lp->clk)) {
+ dev_warn(&pdev->dev, "Failed to get clock: %ld\n",
+ PTR_ERR(lp->clk));
+ lp->clk = NULL;
+ } else {
+ ret = clk_prepare_enable(lp->clk);
+ if (ret) {
+ dev_err(&pdev->dev, "Unable to enable clock: %d\n",
+ ret);
+ goto free_netdev;
+ }
+ }
+
+ ret = axienet_mdio_setup(lp);
if (ret)
- dev_warn(&pdev->dev, "error registering MDIO bus\n");
+ dev_warn(&pdev->dev,
+ "error registering MDIO bus: %d\n", ret);
+ }
+
+ lp->phylink_config.dev = &ndev->dev;
+ lp->phylink_config.type = PHYLINK_NETDEV;
+
+ lp->phylink = phylink_create(&lp->phylink_config, pdev->dev.fwnode,
+ lp->phy_mode,
+ &axienet_phylink_ops);
+ if (IS_ERR(lp->phylink)) {
+ ret = PTR_ERR(lp->phylink);
+ dev_err(&pdev->dev, "phylink_create error (%i)\n", ret);
+ goto free_netdev;
}
ret = register_netdev(lp->ndev);
@@ -1632,9 +1879,16 @@ static int axienet_remove(struct platform_device *pdev)
struct net_device *ndev = platform_get_drvdata(pdev);
struct axienet_local *lp = netdev_priv(ndev);
- axienet_mdio_teardown(lp);
unregister_netdev(ndev);
+ if (lp->phylink)
+ phylink_destroy(lp->phylink);
+
+ axienet_mdio_teardown(lp);
+
+ if (lp->clk)
+ clk_disable_unprepare(lp->clk);
+
of_node_put(lp->phy_node);
lp->phy_node = NULL;
@@ -1643,9 +1897,23 @@ static int axienet_remove(struct platform_device *pdev)
return 0;
}
+static void axienet_shutdown(struct platform_device *pdev)
+{
+ struct net_device *ndev = platform_get_drvdata(pdev);
+
+ rtnl_lock();
+ netif_device_detach(ndev);
+
+ if (netif_running(ndev))
+ dev_close(ndev);
+
+ rtnl_unlock();
+}
+
static struct platform_driver axienet_driver = {
.probe = axienet_probe,
.remove = axienet_remove,
+ .shutdown = axienet_shutdown,
.driver = {
.name = "xilinx_axienet",
.of_match_table = axienet_of_match,
diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_mdio.c b/drivers/net/ethernet/xilinx/xilinx_axienet_mdio.c
index 704babdbc8a2..435ed308d990 100644
--- a/drivers/net/ethernet/xilinx/xilinx_axienet_mdio.c
+++ b/drivers/net/ethernet/xilinx/xilinx_axienet_mdio.c
@@ -5,9 +5,11 @@
* Copyright (c) 2009 Secret Lab Technologies, Ltd.
* Copyright (c) 2010 - 2011 Michal Simek <monstr@monstr.eu>
* Copyright (c) 2010 - 2011 PetaLogix
+ * Copyright (c) 2019 SED Systems, a division of Calian Ltd.
* Copyright (c) 2010 - 2012 Xilinx, Inc. All rights reserved.
*/
+#include <linux/clk.h>
#include <linux/of_address.h>
#include <linux/of_mdio.h>
#include <linux/jiffies.h>
@@ -16,10 +18,10 @@
#include "xilinx_axienet.h"
#define MAX_MDIO_FREQ 2500000 /* 2.5 MHz */
-#define DEFAULT_CLOCK_DIVISOR XAE_MDIO_DIV_DFT
+#define DEFAULT_HOST_CLOCK 150000000 /* 150 MHz */
/* Wait till MDIO interface is ready to accept a new transaction.*/
-int axienet_mdio_wait_until_ready(struct axienet_local *lp)
+static int axienet_mdio_wait_until_ready(struct axienet_local *lp)
{
u32 val;
@@ -112,23 +114,42 @@ static int axienet_mdio_write(struct mii_bus *bus, int phy_id, int reg,
}
/**
- * axienet_mdio_setup - MDIO setup function
+ * axienet_mdio_enable - MDIO hardware setup function
* @lp: Pointer to axienet local data structure.
- * @np: Pointer to device node
*
- * Return: 0 on success, -ETIMEDOUT on a timeout, -ENOMEM when
- * mdiobus_alloc (to allocate memory for mii bus structure) fails.
+ * Return: 0 on success, -ETIMEDOUT on a timeout.
*
* Sets up the MDIO interface by initializing the MDIO clock and enabling the
- * MDIO interface in hardware. Register the MDIO interface.
+ * MDIO interface in hardware.
**/
-int axienet_mdio_setup(struct axienet_local *lp, struct device_node *np)
+int axienet_mdio_enable(struct axienet_local *lp)
{
- int ret;
u32 clk_div, host_clock;
- struct mii_bus *bus;
- struct resource res;
- struct device_node *np1;
+
+ if (lp->clk) {
+ host_clock = clk_get_rate(lp->clk);
+ } else {
+ struct device_node *np1;
+
+ /* Legacy fallback: detect CPU clock frequency and use as AXI
+ * bus clock frequency. This only works on certain platforms.
+ */
+ np1 = of_find_node_by_name(NULL, "cpu");
+ if (!np1) {
+ netdev_warn(lp->ndev, "Could not find CPU device node.\n");
+ host_clock = DEFAULT_HOST_CLOCK;
+ } else {
+ int ret = of_property_read_u32(np1, "clock-frequency",
+ &host_clock);
+ if (ret) {
+ netdev_warn(lp->ndev, "CPU clock-frequency property not found.\n");
+ host_clock = DEFAULT_HOST_CLOCK;
+ }
+ of_node_put(np1);
+ }
+ netdev_info(lp->ndev, "Setting assumed host clock to %u\n",
+ host_clock);
+ }
/* clk_div can be calculated by deriving it from the equation:
* fMDIO = fHOST / ((1 + clk_div) * 2)
@@ -155,25 +176,6 @@ int axienet_mdio_setup(struct axienet_local *lp, struct device_node *np)
* "clock-frequency" from the CPU
*/
- np1 = of_find_node_by_name(NULL, "cpu");
- if (!np1) {
- netdev_warn(lp->ndev, "Could not find CPU device node.\n");
- netdev_warn(lp->ndev,
- "Setting MDIO clock divisor to default %d\n",
- DEFAULT_CLOCK_DIVISOR);
- clk_div = DEFAULT_CLOCK_DIVISOR;
- goto issue;
- }
- if (of_property_read_u32(np1, "clock-frequency", &host_clock)) {
- netdev_warn(lp->ndev, "clock-frequency property not found.\n");
- netdev_warn(lp->ndev,
- "Setting MDIO clock divisor to default %d\n",
- DEFAULT_CLOCK_DIVISOR);
- clk_div = DEFAULT_CLOCK_DIVISOR;
- of_node_put(np1);
- goto issue;
- }
-
clk_div = (host_clock / (MAX_MDIO_FREQ * 2)) - 1;
/* If there is any remainder from the division of
* fHOST / (MAX_MDIO_FREQ * 2), then we need to add
@@ -186,12 +188,39 @@ int axienet_mdio_setup(struct axienet_local *lp, struct device_node *np)
"Setting MDIO clock divisor to %u/%u Hz host clock.\n",
clk_div, host_clock);
- of_node_put(np1);
-issue:
- axienet_iow(lp, XAE_MDIO_MC_OFFSET,
- (((u32) clk_div) | XAE_MDIO_MC_MDIOEN_MASK));
+ axienet_iow(lp, XAE_MDIO_MC_OFFSET, clk_div | XAE_MDIO_MC_MDIOEN_MASK);
- ret = axienet_mdio_wait_until_ready(lp);
+ return axienet_mdio_wait_until_ready(lp);
+}
+
+/**
+ * axienet_mdio_disable - MDIO hardware disable function
+ * @lp: Pointer to axienet local data structure.
+ *
+ * Disable the MDIO interface in hardware.
+ **/
+void axienet_mdio_disable(struct axienet_local *lp)
+{
+ axienet_iow(lp, XAE_MDIO_MC_OFFSET, 0);
+}
+
+/**
+ * axienet_mdio_setup - MDIO setup function
+ * @lp: Pointer to axienet local data structure.
+ *
+ * Return: 0 on success, -ETIMEDOUT on a timeout, -ENOMEM when
+ * mdiobus_alloc (to allocate memory for mii bus structure) fails.
+ *
+ * Sets up the MDIO interface by initializing the MDIO clock and enabling the
+ * MDIO interface in hardware. Register the MDIO interface.
+ **/
+int axienet_mdio_setup(struct axienet_local *lp)
+{
+ struct device_node *mdio_node;
+ struct mii_bus *bus;
+ int ret;
+
+ ret = axienet_mdio_enable(lp);
if (ret < 0)
return ret;
@@ -199,10 +228,8 @@ issue:
if (!bus)
return -ENOMEM;
- np1 = of_get_parent(lp->phy_node);
- of_address_to_resource(np1, 0, &res);
- snprintf(bus->id, MII_BUS_ID_SIZE, "%.8llx",
- (unsigned long long) res.start);
+ snprintf(bus->id, MII_BUS_ID_SIZE, "axienet-%.8llx",
+ (unsigned long long)lp->regs_start);
bus->priv = lp;
bus->name = "Xilinx Axi Ethernet MDIO";
@@ -211,7 +238,9 @@ issue:
bus->parent = lp->dev;
lp->mii_bus = bus;
- ret = of_mdiobus_register(bus, np1);
+ mdio_node = of_get_child_by_name(lp->dev->of_node, "mdio");
+ ret = of_mdiobus_register(bus, mdio_node);
+ of_node_put(mdio_node);
if (ret) {
mdiobus_free(bus);
lp->mii_bus = NULL;
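As a cross-check on the divisor math in axienet_mdio_enable() above (fMDIO = fHOST / ((1 + clk_div) * 2), with MAX_MDIO_FREQ = 2.5 MHz), a small stand-alone sketch of the same computation; the 133 MHz host clock is an illustrative value, not taken from the patch:

#include <stdio.h>

#define MAX_MDIO_FREQ	2500000UL	/* 2.5 MHz, as in the driver */

int main(void)
{
	unsigned long host_clock = 133000000UL;	/* illustrative AXI clock rate */
	unsigned long clk_div = host_clock / (MAX_MDIO_FREQ * 2) - 1;

	/* Any remainder means the truncated divisor would leave fMDIO above
	 * 2.5 MHz, so round the divisor up by one.
	 */
	if (host_clock % (MAX_MDIO_FREQ * 2))
		clk_div++;

	printf("clk_div=%lu -> fMDIO=%lu Hz\n",
	       clk_div, host_clock / ((1 + clk_div) * 2));
	/* 133 MHz gives clk_div=26 (~2.46 MHz); 150 MHz gives clk_div=29 (2.5 MHz exactly). */
	return 0;
}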
diff --git a/drivers/net/fddi/skfp/drvfbi.c b/drivers/net/fddi/skfp/drvfbi.c
index bdd5700e71fa..9c8aa3a95463 100644
--- a/drivers/net/fddi/skfp/drvfbi.c
+++ b/drivers/net/fddi/skfp/drvfbi.c
@@ -20,6 +20,7 @@
#include "h/supern_2.h"
#include "h/skfbiinc.h"
#include <linux/bitrev.h>
+#include <linux/pci_regs.h>
#ifndef lint
static const char ID_sccs[] = "@(#)drvfbi.c 1.63 99/02/11 (C) SK " ;
@@ -127,7 +128,7 @@ static void card_start(struct s_smc *smc)
* at very first before any other initialization functions is
* executed.
*/
- rev_id = inp(PCI_C(PCI_REV_ID)) ;
+ rev_id = inp(PCI_C(PCI_REVISION_ID)) ;
if ((rev_id & 0xf0) == SK_ML_ID_1 || (rev_id & 0xf0) == SK_ML_ID_2) {
smc->hw.hw_is_64bit = TRUE ;
} else {
diff --git a/drivers/net/fddi/skfp/h/skfbi.h b/drivers/net/fddi/skfp/h/skfbi.h
index 89557457b352..480795681719 100644
--- a/drivers/net/fddi/skfp/h/skfbi.h
+++ b/drivers/net/fddi/skfp/h/skfbi.h
@@ -24,49 +24,6 @@
* (ML) = only defined for Monalisa
*/
-/*
- * Configuration Space header
- */
-#define PCI_VENDOR_ID 0x00 /* 16 bit Vendor ID */
-#define PCI_DEVICE_ID 0x02 /* 16 bit Device ID */
-#define PCI_COMMAND 0x04 /* 16 bit Command */
-#define PCI_STATUS 0x06 /* 16 bit Status */
-#define PCI_REV_ID 0x08 /* 8 bit Revision ID */
-#define PCI_CLASS_CODE 0x09 /* 24 bit Class Code */
-#define PCI_CACHE_LSZ 0x0c /* 8 bit Cache Line Size */
-#define PCI_LAT_TIM 0x0d /* 8 bit Latency Timer */
-#define PCI_HEADER_T 0x0e /* 8 bit Header Type */
-#define PCI_BIST 0x0f /* 8 bit Built-in selftest */
-#define PCI_BASE_1ST 0x10 /* 32 bit 1st Base address */
-#define PCI_BASE_2ND 0x14 /* 32 bit 2nd Base address */
-/* Byte 18..2b: Reserved */
-#define PCI_SUB_VID 0x2c /* 16 bit Subsystem Vendor ID */
-#define PCI_SUB_ID 0x2e /* 16 bit Subsystem ID */
-#define PCI_BASE_ROM 0x30 /* 32 bit Expansion ROM Base Address */
-/* Byte 34..33: Reserved */
-#define PCI_CAP_PTR 0x34 /* 8 bit (ML) Capabilities Ptr */
-/* Byte 35..3b: Reserved */
-#define PCI_IRQ_LINE 0x3c /* 8 bit Interrupt Line */
-#define PCI_IRQ_PIN 0x3d /* 8 bit Interrupt Pin */
-#define PCI_MIN_GNT 0x3e /* 8 bit Min_Gnt */
-#define PCI_MAX_LAT 0x3f /* 8 bit Max_Lat */
-/* Device Dependent Region */
-#define PCI_OUR_REG 0x40 /* 32 bit (DV) Our Register */
-#define PCI_OUR_REG_1 0x40 /* 32 bit (ML) Our Register 1 */
-#define PCI_OUR_REG_2 0x44 /* 32 bit (ML) Our Register 2 */
-/* Power Management Region */
-#define PCI_PM_CAP_ID 0x48 /* 8 bit (ML) Power Management Cap. ID */
-#define PCI_PM_NITEM 0x49 /* 8 bit (ML) Next Item Ptr */
-#define PCI_PM_CAP_REG 0x4a /* 16 bit (ML) Power Management Capabilities */
-#define PCI_PM_CTL_STS 0x4c /* 16 bit (ML) Power Manag. Control/Status */
-/* Byte 0x4e: Reserved */
-#define PCI_PM_DAT_REG 0x4f /* 8 bit (ML) Power Manag. Data Register */
-/* VPD Region */
-#define PCI_VPD_CAP_ID 0x50 /* 8 bit (ML) VPD Cap. ID */
-#define PCI_VPD_NITEM 0x51 /* 8 bit (ML) Next Item Ptr */
-#define PCI_VPD_ADR_REG 0x52 /* 16 bit (ML) VPD Address Register */
-#define PCI_VPD_DAT_REG 0x54 /* 32 bit (ML) VPD Data Register */
-/* Byte 58..ff: Reserved */
/*
* I2C Address (PCI Config)
@@ -76,176 +33,10 @@
*/
#define I2C_ADDR_VPD 0xA0 /* I2C address for the VPD EEPROM */
-/*
- * Define Bits and Values of the registers
- */
-/* PCI_VENDOR_ID 16 bit Vendor ID */
-/* PCI_DEVICE_ID 16 bit Device ID */
-/* Values for Vendor ID and Device ID shall be patched into the code */
-/* PCI_COMMAND 16 bit Command */
-#define PCI_FBTEN 0x0200 /* Bit 9: Fast Back-To-Back enable */
-#define PCI_SERREN 0x0100 /* Bit 8: SERR enable */
-#define PCI_ADSTEP 0x0080 /* Bit 7: Address Stepping */
-#define PCI_PERREN 0x0040 /* Bit 6: Parity Report Response enable */
-#define PCI_VGA_SNOOP 0x0020 /* Bit 5: VGA palette snoop */
-#define PCI_MWIEN 0x0010 /* Bit 4: Memory write an inv cycl ena */
-#define PCI_SCYCEN 0x0008 /* Bit 3: Special Cycle enable */
-#define PCI_BMEN 0x0004 /* Bit 2: Bus Master enable */
-#define PCI_MEMEN 0x0002 /* Bit 1: Memory Space Access enable */
-#define PCI_IOEN 0x0001 /* Bit 0: IO Space Access enable */
-
-/* PCI_STATUS 16 bit Status */
-#define PCI_PERR 0x8000 /* Bit 15: Parity Error */
-#define PCI_SERR 0x4000 /* Bit 14: Signaled SERR */
-#define PCI_RMABORT 0x2000 /* Bit 13: Received Master Abort */
-#define PCI_RTABORT 0x1000 /* Bit 12: Received Target Abort */
-#define PCI_STABORT 0x0800 /* Bit 11: Sent Target Abort */
-#define PCI_DEVSEL 0x0600 /* Bit 10..9: DEVSEL Timing */
-#define PCI_DEV_FAST (0<<9) /* fast */
-#define PCI_DEV_MEDIUM (1<<9) /* medium */
-#define PCI_DEV_SLOW (2<<9) /* slow */
-#define PCI_DATAPERR 0x0100 /* Bit 8: DATA Parity error detected */
-#define PCI_FB2BCAP 0x0080 /* Bit 7: Fast Back-to-Back Capability */
-#define PCI_UDF 0x0040 /* Bit 6: User Defined Features */
-#define PCI_66MHZCAP 0x0020 /* Bit 5: 66 MHz PCI bus clock capable */
-#define PCI_NEWCAP 0x0010 /* Bit 4: New cap. list implemented */
-
-#define PCI_ERRBITS (PCI_PERR|PCI_SERR|PCI_RMABORT|PCI_STABORT|PCI_DATAPERR)
-
-/* PCI_REV_ID 8 bit Revision ID */
-/* PCI_CLASS_CODE 24 bit Class Code */
-/* Byte 2: Base Class (02) */
-/* Byte 1: SubClass (02) */
-/* Byte 0: Programming Interface (00) */
-
-/* PCI_CACHE_LSZ 8 bit Cache Line Size */
-/* Possible values: 0,2,4,8,16 */
-
-/* PCI_LAT_TIM 8 bit Latency Timer */
-
-/* PCI_HEADER_T 8 bit Header Type */
-#define PCI_HD_MF_DEV 0x80 /* Bit 7: 0= single, 1= multi-func dev */
-#define PCI_HD_TYPE 0x7f /* Bit 6..0: Header Layout 0= normal */
-
-/* PCI_BIST 8 bit Built-in selftest */
-#define PCI_BIST_CAP 0x80 /* Bit 7: BIST Capable */
-#define PCI_BIST_ST 0x40 /* Bit 6: Start BIST */
-#define PCI_BIST_RET 0x0f /* Bit 3..0: Completion Code */
-
-/* PCI_BASE_1ST 32 bit 1st Base address */
-#define PCI_MEMSIZE 0x800L /* use 2 kB Memory Base */
-#define PCI_MEMBASE_BITS 0xfffff800L /* Bit 31..11: Memory Base Address */
-#define PCI_MEMSIZE_BIIS 0x000007f0L /* Bit 10..4: Memory Size Req. */
-#define PCI_PREFEN 0x00000008L /* Bit 3: Prefetchable */
-#define PCI_MEM_TYP 0x00000006L /* Bit 2..1: Memory Type */
-#define PCI_MEM32BIT (0<<1) /* Base addr anywhere in 32 Bit range */
-#define PCI_MEM1M (1<<1) /* Base addr below 1 MegaByte */
-#define PCI_MEM64BIT (2<<1) /* Base addr anywhere in 64 Bit range */
-#define PCI_MEMSPACE 0x00000001L /* Bit 0: Memory Space Indic. */
-
-/* PCI_SUB_VID 16 bit Subsystem Vendor ID */
-/* PCI_SUB_ID 16 bit Subsystem ID */
-
-/* PCI_BASE_ROM 32 bit Expansion ROM Base Address */
-#define PCI_ROMBASE 0xfffe0000L /* Bit 31..17: ROM BASE address (1st) */
-#define PCI_ROMBASZ 0x0001c000L /* Bit 16..14: Treat as BASE or SIZE */
-#define PCI_ROMSIZE 0x00003800L /* Bit 13..11: ROM Size Requirements */
-#define PCI_ROMEN 0x00000001L /* Bit 0: Address Decode enable */
-
-/* PCI_CAP_PTR 8 bit New Capabilities Pointers */
-/* PCI_IRQ_LINE 8 bit Interrupt Line */
-/* PCI_IRQ_PIN 8 bit Interrupt Pin */
-/* PCI_MIN_GNT 8 bit Min_Gnt */
-/* PCI_MAX_LAT 8 bit Max_Lat */
-/* Device Dependent Region */
-/* PCI_OUR_REG (DV) 32 bit Our Register */
-/* PCI_OUR_REG_1 (ML) 32 bit Our Register 1 */
- /* Bit 31..29: reserved */
-#define PCI_PATCH_DIR (3L<<27) /*(DV) Bit 28..27: Ext Patchs direction */
-#define PCI_PATCH_DIR_0 (1L<<27) /*(DV) Type of the pins EXT_PATCHS<1..0> */
-#define PCI_PATCH_DIR_1 (1L<<28) /* 0 = input */
- /* 1 = output */
-#define PCI_EXT_PATCHS (3L<<25) /*(DV) Bit 26..25: Extended Patches */
-#define PCI_EXT_PATCH_0 (1L<<25) /*(DV) */
-#define PCI_EXT_PATCH_1 (1L<<26) /* CLK for MicroWire (ML) */
-#define PCI_VIO (1L<<25) /*(ML) */
-#define PCI_EN_BOOT (1L<<24) /* Bit 24: Enable BOOT via ROM */
- /* 1 = Don't boot with ROM */
- /* 0 = Boot with ROM */
-#define PCI_EN_IO (1L<<23) /* Bit 23: Mapping to IO space */
-#define PCI_EN_FPROM (1L<<22) /* Bit 22: FLASH mapped to mem? */
- /* 1 = Map Flash to Memory */
- /* 0 = Disable all addr. decoding */
-#define PCI_PAGESIZE (3L<<20) /* Bit 21..20: FLASH Page Size */
-#define PCI_PAGE_16 (0L<<20) /* 16 k pages */
-#define PCI_PAGE_32K (1L<<20) /* 32 k pages */
-#define PCI_PAGE_64K (2L<<20) /* 64 k pages */
-#define PCI_PAGE_128K (3L<<20) /* 128 k pages */
- /* Bit 19: reserved (ML) and (DV) */
-#define PCI_PAGEREG (7L<<16) /* Bit 18..16: Page Register */
- /* Bit 15: reserved */
-#define PCI_FORCE_BE (1L<<14) /* Bit 14: Assert all BEs on MR */
-#define PCI_DIS_MRL (1L<<13) /* Bit 13: Disable Mem R Line */
-#define PCI_DIS_MRM (1L<<12) /* Bit 12: Disable Mem R multip */
-#define PCI_DIS_MWI (1L<<11) /* Bit 11: Disable Mem W & inv */
-#define PCI_DISC_CLS (1L<<10) /* Bit 10: Disc: cacheLsz bound */
-#define PCI_BURST_DIS (1L<<9) /* Bit 9: Burst Disable */
-#define PCI_BYTE_SWAP (1L<<8) /*(DV) Bit 8: Byte Swap in DATA */
-#define PCI_SKEW_DAS (0xfL<<4) /* Bit 7..4: Skew Ctrl, DAS Ext */
-#define PCI_SKEW_BASE (0xfL<<0) /* Bit 3..0: Skew Ctrl, Base */
-
-/* PCI_OUR_REG_2 (ML) 32 bit Our Register 2 (Monalisa only) */
-#define PCI_VPD_WR_TH (0xffL<<24) /* Bit 24..31 VPD Write Threshold */
-#define PCI_DEV_SEL (0x7fL<<17) /* Bit 17..23 EEPROM Device Select */
-#define PCI_VPD_ROM_SZ (7L<<14) /* Bit 14..16 VPD ROM Size */
- /* Bit 12..13 reserved */
-#define PCI_PATCH_DIR2 (0xfL<<8) /* Bit 8..11 Ext Patchs dir 2..5 */
-#define PCI_PATCH_DIR_2 (1L<<8) /* Bit 8 CS for MicroWire */
-#define PCI_PATCH_DIR_3 (1L<<9)
-#define PCI_PATCH_DIR_4 (1L<<10)
-#define PCI_PATCH_DIR_5 (1L<<11)
-#define PCI_EXT_PATCHS2 (0xfL<<4) /* Bit 4..7 Extended Patches */
-#define PCI_EXT_PATCH_2 (1L<<4) /* Bit 4 CS for MicroWire */
-#define PCI_EXT_PATCH_3 (1L<<5)
-#define PCI_EXT_PATCH_4 (1L<<6)
-#define PCI_EXT_PATCH_5 (1L<<7)
-#define PCI_EN_DUMMY_RD (1L<<3) /* Bit 3 Enable Dummy Read */
-#define PCI_REV_DESC (1L<<2) /* Bit 2 Reverse Desc. Bytes */
-#define PCI_USEADDR64 (1L<<1) /* Bit 1 Use 64 Bit Addresse */
-#define PCI_USEDATA64 (1L<<0) /* Bit 0 Use 64 Bit Data bus ext*/
-
-/* Power Management Region */
-/* PCI_PM_CAP_ID 8 bit (ML) Power Management Cap. ID */
-/* PCI_PM_NITEM 8 bit (ML) Next Item Ptr */
-/* PCI_PM_CAP_REG 16 bit (ML) Power Management Capabilities*/
-#define PCI_PME_SUP (0x1f<<11) /* Bit 11..15 PM Manag. Event Support*/
-#define PCI_PM_D2_SUB (1<<10) /* Bit 10 D2 Support Bit */
-#define PCI_PM_D1_SUB (1<<9) /* Bit 9 D1 Support Bit */
- /* Bit 6..8 reserved */
-#define PCI_PM_DSI (1<<5) /* Bit 5 Device Specific Init.*/
-#define PCI_PM_APS (1<<4) /* Bit 4 Auxialiary Power Src */
-#define PCI_PME_CLOCK (1<<3) /* Bit 3 PM Event Clock */
-#define PCI_PM_VER (7<<0) /* Bit 0..2 PM PCI Spec. version */
-
-/* PCI_PM_CTL_STS 16 bit (ML) Power Manag. Control/Status */
-#define PCI_PME_STATUS (1<<15) /* Bit 15 PFA doesn't sup. PME#*/
-#define PCI_PM_DAT_SCL (3<<13) /* Bit 13..14 dat reg Scaling factor */
-#define PCI_PM_DAT_SEL (0xf<<9) /* Bit 9..12 PM data selector field */
- /* Bit 7.. 2 reserved */
-#define PCI_PM_STATE (3<<0) /* Bit 0.. 1 Power Management State */
-#define PCI_PM_STATE_D0 (0<<0) /* D0: Operational (default) */
-#define PCI_PM_STATE_D1 (1<<0) /* D1: not supported */
-#define PCI_PM_STATE_D2 (2<<0) /* D2: not supported */
-#define PCI_PM_STATE_D3 (3<<0) /* D3: HOT, Power Down and Reset */
-
-/* PCI_PM_DAT_REG 8 bit (ML) Power Manag. Data Register */
-/* VPD Region */
-/* PCI_VPD_CAP_ID 8 bit (ML) VPD Cap. ID */
-/* PCI_VPD_NITEM 8 bit (ML) Next Item Ptr */
-/* PCI_VPD_ADR_REG 16 bit (ML) VPD Address Register */
-#define PCI_VPD_FLAG (1<<15) /* Bit 15 starts VPD rd/wd cycle*/
-
-/* PCI_VPD_DAT_REG 32 bit (ML) VPD Data Register */
+
+#define PCI_ERRBITS (PCI_STATUS_DETECTED_PARITY | PCI_STATUS_SIG_SYSTEM_ERROR | PCI_STATUS_REC_MASTER_ABORT | PCI_STATUS_SIG_TARGET_ABORT | PCI_STATUS_PARITY)
+
+
/*
* Control Register File:
@@ -873,20 +664,6 @@
#define T3_MUX (3<<2) /* Bit 3..2: Mux position */
#define T3_VRAM (3<<0) /* Bit 1..0: Virtual RAM buffer Address */
-/* PCI card IDs */
-/*
- * Note: The following 4 byte definitions shall not be used! Use OEM Concept!
- */
-#define PCI_VEND_ID0 0x48 /* PCI vendor ID (SysKonnect) */
-#define PCI_VEND_ID1 0x11 /* PCI vendor ID (SysKonnect) */
- /* (High byte) */
-#define PCI_DEV_ID0 0x00 /* PCI device ID */
-#define PCI_DEV_ID1 0x40 /* PCI device ID (High byte) */
-
-/*#define PCI_CLASS 0x02*/ /* PCI class code: network device */
-#define PCI_NW_CLASS 0x02 /* PCI class code: network device */
-#define PCI_SUB_CLASS 0x02 /* PCI subclass ID: FDDI device */
-#define PCI_PROG_INTFC 0x00 /* PCI programming Interface (=0) */
/*
* address transmission from logical to physical offset address on board
diff --git a/drivers/net/fjes/fjes_debugfs.c b/drivers/net/fjes/fjes_debugfs.c
index 153fc998f9c1..2c2095e7cf1e 100644
--- a/drivers/net/fjes/fjes_debugfs.c
+++ b/drivers/net/fjes/fjes_debugfs.c
@@ -52,20 +52,11 @@ DEFINE_SHOW_ATTRIBUTE(fjes_dbg_status);
void fjes_dbg_adapter_init(struct fjes_adapter *adapter)
{
const char *name = dev_name(&adapter->plat_dev->dev);
- struct dentry *pfile;
adapter->dbg_adapter = debugfs_create_dir(name, fjes_debug_root);
- if (!adapter->dbg_adapter) {
- dev_err(&adapter->plat_dev->dev,
- "debugfs entry for %s failed\n", name);
- return;
- }
- pfile = debugfs_create_file("status", 0444, adapter->dbg_adapter,
- adapter, &fjes_dbg_status_fops);
- if (!pfile)
- dev_err(&adapter->plat_dev->dev,
- "debugfs status for %s failed\n", name);
+ debugfs_create_file("status", 0444, adapter->dbg_adapter, adapter,
+ &fjes_dbg_status_fops);
}
void fjes_dbg_adapter_exit(struct fjes_adapter *adapter)
@@ -77,8 +68,6 @@ void fjes_dbg_adapter_exit(struct fjes_adapter *adapter)
void fjes_dbg_init(void)
{
fjes_debug_root = debugfs_create_dir(fjes_driver_name, NULL);
- if (!fjes_debug_root)
- pr_info("init of debugfs failed\n");
}
void fjes_dbg_exit(void)
diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c
index fc45b749db46..ecfe26215935 100644
--- a/drivers/net/gtp.c
+++ b/drivers/net/gtp.c
@@ -285,16 +285,29 @@ static int gtp1u_udp_encap_recv(struct gtp_dev *gtp, struct sk_buff *skb)
return gtp_rx(pctx, skb, hdrlen, gtp->role);
}
-static void gtp_encap_destroy(struct sock *sk)
+static void __gtp_encap_destroy(struct sock *sk)
{
struct gtp_dev *gtp;
- gtp = rcu_dereference_sk_user_data(sk);
+ lock_sock(sk);
+ gtp = sk->sk_user_data;
if (gtp) {
+ if (gtp->sk0 == sk)
+ gtp->sk0 = NULL;
+ else
+ gtp->sk1u = NULL;
udp_sk(sk)->encap_type = 0;
rcu_assign_sk_user_data(sk, NULL);
sock_put(sk);
}
+ release_sock(sk);
+}
+
+static void gtp_encap_destroy(struct sock *sk)
+{
+ rtnl_lock();
+ __gtp_encap_destroy(sk);
+ rtnl_unlock();
}
static void gtp_encap_disable_sock(struct sock *sk)
@@ -302,7 +315,7 @@ static void gtp_encap_disable_sock(struct sock *sk)
if (!sk)
return;
- gtp_encap_destroy(sk);
+ __gtp_encap_destroy(sk);
}
static void gtp_encap_disable(struct gtp_dev *gtp)
@@ -681,7 +694,6 @@ static void gtp_dellink(struct net_device *dev, struct list_head *head)
{
struct gtp_dev *gtp = netdev_priv(dev);
- gtp_encap_disable(gtp);
gtp_hashtable_free(gtp);
list_del_rcu(&gtp->list);
unregister_netdevice_queue(dev, head);
@@ -796,7 +808,8 @@ static struct sock *gtp_encap_enable_socket(int fd, int type,
goto out_sock;
}
- if (rcu_dereference_sk_user_data(sock->sk)) {
+ lock_sock(sock->sk);
+ if (sock->sk->sk_user_data) {
sk = ERR_PTR(-EBUSY);
goto out_sock;
}
@@ -812,6 +825,7 @@ static struct sock *gtp_encap_enable_socket(int fd, int type,
setup_udp_tunnel_sock(sock_net(sock->sk), sock, &tuncfg);
out_sock:
+ release_sock(sock->sk);
sockfd_put(sock);
return sk;
}
@@ -843,8 +857,13 @@ static int gtp_encap_enable(struct gtp_dev *gtp, struct nlattr *data[])
if (data[IFLA_GTP_ROLE]) {
role = nla_get_u32(data[IFLA_GTP_ROLE]);
- if (role > GTP_ROLE_SGSN)
+ if (role > GTP_ROLE_SGSN) {
+ if (sk0)
+ gtp_encap_disable_sock(sk0);
+ if (sk1u)
+ gtp_encap_disable_sock(sk1u);
return -EINVAL;
+ }
}
gtp->sk0 = sk0;
@@ -945,7 +964,7 @@ static int ipv4_pdp_add(struct gtp_dev *gtp, struct sock *sk,
}
- pctx = kmalloc(sizeof(struct pdp_ctx), GFP_KERNEL);
+ pctx = kmalloc(sizeof(*pctx), GFP_ATOMIC);
if (pctx == NULL)
return -ENOMEM;
@@ -1034,6 +1053,7 @@ static int gtp_genl_new_pdp(struct sk_buff *skb, struct genl_info *info)
return -EINVAL;
}
+ rtnl_lock();
rcu_read_lock();
gtp = gtp_find_dev(sock_net(skb->sk), info->attrs);
@@ -1058,6 +1078,7 @@ static int gtp_genl_new_pdp(struct sk_buff *skb, struct genl_info *info)
out_unlock:
rcu_read_unlock();
+ rtnl_unlock();
return err;
}
@@ -1360,9 +1381,9 @@ late_initcall(gtp_init);
static void __exit gtp_fini(void)
{
- unregister_pernet_subsys(&gtp_net_ops);
genl_unregister_family(&gtp_genl_family);
rtnl_link_unregister(&gtp_link_ops);
+ unregister_pernet_subsys(&gtp_net_ops);
pr_info("GTP module unloaded\n");
}
diff --git a/drivers/net/loopback.c b/drivers/net/loopback.c
index 87d361666cdd..14545a8797a8 100644
--- a/drivers/net/loopback.c
+++ b/drivers/net/loopback.c
@@ -55,6 +55,13 @@
#include <net/net_namespace.h>
#include <linux/u64_stats_sync.h>
+/* blackhole_netdev - a device used for dsts that are marked expired!
+ * This is a global device (rather than per-net-ns) since it does not
+ * need to be per-ns and gets initialized at boot time.
+ */
+struct net_device *blackhole_netdev;
+EXPORT_SYMBOL(blackhole_netdev);
+
/* The higher levels take care of making this non-reentrant (it's
* called with bh's disabled).
*/
@@ -150,12 +157,14 @@ static const struct net_device_ops loopback_ops = {
.ndo_set_mac_address = eth_mac_addr,
};
-/* The loopback device is special. There is only one instance
- * per network namespace.
- */
-static void loopback_setup(struct net_device *dev)
+static void gen_lo_setup(struct net_device *dev,
+ unsigned int mtu,
+ const struct ethtool_ops *eth_ops,
+ const struct header_ops *hdr_ops,
+ const struct net_device_ops *dev_ops,
+ void (*dev_destructor)(struct net_device *dev))
{
- dev->mtu = 64 * 1024;
+ dev->mtu = mtu;
dev->hard_header_len = ETH_HLEN; /* 14 */
dev->min_header_len = ETH_HLEN; /* 14 */
dev->addr_len = ETH_ALEN; /* 6 */
@@ -174,11 +183,20 @@ static void loopback_setup(struct net_device *dev)
| NETIF_F_NETNS_LOCAL
| NETIF_F_VLAN_CHALLENGED
| NETIF_F_LOOPBACK;
- dev->ethtool_ops = &loopback_ethtool_ops;
- dev->header_ops = &eth_header_ops;
- dev->netdev_ops = &loopback_ops;
+ dev->ethtool_ops = eth_ops;
+ dev->header_ops = hdr_ops;
+ dev->netdev_ops = dev_ops;
dev->needs_free_netdev = true;
- dev->priv_destructor = loopback_dev_free;
+ dev->priv_destructor = dev_destructor;
+}
+
+/* The loopback device is special. There is only one instance
+ * per network namespace.
+ */
+static void loopback_setup(struct net_device *dev)
+{
+ gen_lo_setup(dev, (64 * 1024), &loopback_ethtool_ops, &eth_header_ops,
+ &loopback_ops, loopback_dev_free);
}
/* Setup and register the loopback device. */
@@ -213,3 +231,45 @@ out:
struct pernet_operations __net_initdata loopback_net_ops = {
.init = loopback_net_init,
};
+
+/* blackhole netdevice */
+static netdev_tx_t blackhole_netdev_xmit(struct sk_buff *skb,
+ struct net_device *dev)
+{
+ kfree_skb(skb);
+ net_warn_ratelimited("%s(): Dropping skb.\n", __func__);
+ return NETDEV_TX_OK;
+}
+
+static const struct net_device_ops blackhole_netdev_ops = {
+ .ndo_start_xmit = blackhole_netdev_xmit,
+};
+
+/* This is a dst-dummy device used specifically for invalidated
+ * DSTs and, unlike loopback, is not per-ns.
+ */
+static void blackhole_netdev_setup(struct net_device *dev)
+{
+ gen_lo_setup(dev, ETH_MIN_MTU, NULL, NULL, &blackhole_netdev_ops, NULL);
+}
+
+/* Setup and register the blackhole_netdev. */
+static int __init blackhole_netdev_init(void)
+{
+ blackhole_netdev = alloc_netdev(0, "blackhole_dev", NET_NAME_UNKNOWN,
+ blackhole_netdev_setup);
+ if (!blackhole_netdev)
+ return -ENOMEM;
+
+ rtnl_lock();
+ dev_init_scheduler(blackhole_netdev);
+ dev_activate(blackhole_netdev);
+ rtnl_unlock();
+
+ blackhole_netdev->flags |= IFF_UP | IFF_RUNNING;
+ dev_net_set(blackhole_netdev, &init_net);
+
+ return 0;
+}
+
+device_initcall(blackhole_netdev_init);
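The exported blackhole_netdev above gives core code a permanent sink to hang invalidated dsts on, so that late transmits land in blackhole_netdev_xmit() and are dropped instead of touching a device that is going away. A minimal sketch of how a consumer might re-point such a dst; the helper name is hypothetical, and the reference and locking rules of the real call sites are assumed to be handled by the caller:

/* Sketch only: swing an expired dst over to the global blackhole device.
 * Assumes the caller serializes against other users of the dst and that
 * dst->dev holds a device reference, as the usual dst code does.
 */
static void example_retarget_expired_dst(struct dst_entry *dst)
{
	struct net_device *old_dev = dst->dev;

	dev_hold(blackhole_netdev);	/* pin the sink before publishing it */
	dst->dev = blackhole_netdev;	/* further xmits hit blackhole_netdev_xmit() */
	dev_put(old_dev);		/* release the reference on the dying device */
}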
diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
index 75aebf65cd09..8f46aa1ddec0 100644
--- a/drivers/net/macsec.c
+++ b/drivers/net/macsec.c
@@ -865,6 +865,7 @@ static void macsec_reset_skb(struct sk_buff *skb, struct net_device *dev)
static void macsec_finalize_skb(struct sk_buff *skb, u8 icv_len, u8 hdr_len)
{
+ skb->ip_summed = CHECKSUM_NONE;
memmove(skb->data + hdr_len, skb->data, 2 * ETH_ALEN);
skb_pull(skb, hdr_len);
pskb_trim_unique(skb, skb->len - icv_len);
@@ -1099,10 +1100,9 @@ static rx_handler_result_t macsec_handle_frame(struct sk_buff **pskb)
}
skb = skb_unshare(skb, GFP_ATOMIC);
- if (!skb) {
- *pskb = NULL;
+ *pskb = skb;
+ if (!skb)
return RX_HANDLER_CONSUMED;
- }
pulled_sci = pskb_may_pull(skb, macsec_extra_len(true));
if (!pulled_sci) {
diff --git a/drivers/net/macvlan.c b/drivers/net/macvlan.c
index 681a882c32cd..940192c057b6 100644
--- a/drivers/net/macvlan.c
+++ b/drivers/net/macvlan.c
@@ -827,7 +827,7 @@ static int macvlan_do_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
struct ifreq ifrr;
int err = -EOPNOTSUPP;
- strncpy(ifrr.ifr_name, real_dev->name, IFNAMSIZ);
+ strscpy(ifrr.ifr_name, real_dev->name, IFNAMSIZ);
ifrr.ifr_ifru = ifr->ifr_ifru;
switch (cmd) {
diff --git a/drivers/net/netdevsim/dev.c b/drivers/net/netdevsim/dev.c
index b509b941d5ca..c5c417a3c0ce 100644
--- a/drivers/net/netdevsim/dev.c
+++ b/drivers/net/netdevsim/dev.c
@@ -38,6 +38,8 @@ static int nsim_dev_debugfs_init(struct nsim_dev *nsim_dev)
nsim_dev->ports_ddir = debugfs_create_dir("ports", nsim_dev->ddir);
if (IS_ERR_OR_NULL(nsim_dev->ports_ddir))
return PTR_ERR_OR_ZERO(nsim_dev->ports_ddir) ?: -EINVAL;
+ debugfs_create_bool("fw_update_status", 0600, nsim_dev->ddir,
+ &nsim_dev->fw_update_status);
return 0;
}
@@ -220,8 +222,49 @@ static int nsim_dev_reload(struct devlink *devlink,
return 0;
}
+#define NSIM_DEV_FLASH_SIZE 500000
+#define NSIM_DEV_FLASH_CHUNK_SIZE 1000
+#define NSIM_DEV_FLASH_CHUNK_TIME_MS 10
+
+static int nsim_dev_flash_update(struct devlink *devlink, const char *file_name,
+ const char *component,
+ struct netlink_ext_ack *extack)
+{
+ struct nsim_dev *nsim_dev = devlink_priv(devlink);
+ int i;
+
+ if (nsim_dev->fw_update_status) {
+ devlink_flash_update_begin_notify(devlink);
+ devlink_flash_update_status_notify(devlink,
+ "Preparing to flash",
+ component, 0, 0);
+ }
+
+ for (i = 0; i < NSIM_DEV_FLASH_SIZE / NSIM_DEV_FLASH_CHUNK_SIZE; i++) {
+ if (nsim_dev->fw_update_status)
+ devlink_flash_update_status_notify(devlink, "Flashing",
+ component,
+ i * NSIM_DEV_FLASH_CHUNK_SIZE,
+ NSIM_DEV_FLASH_SIZE);
+ msleep(NSIM_DEV_FLASH_CHUNK_TIME_MS);
+ }
+
+ if (nsim_dev->fw_update_status) {
+ devlink_flash_update_status_notify(devlink, "Flashing",
+ component,
+ NSIM_DEV_FLASH_SIZE,
+ NSIM_DEV_FLASH_SIZE);
+ devlink_flash_update_status_notify(devlink, "Flashing done",
+ component, 0, 0);
+ devlink_flash_update_end_notify(devlink);
+ }
+
+ return 0;
+}
+
static const struct devlink_ops nsim_dev_devlink_ops = {
.reload = nsim_dev_reload,
+ .flash_update = nsim_dev_flash_update,
};
static struct nsim_dev *
@@ -240,6 +283,7 @@ nsim_dev_create(struct nsim_bus_dev *nsim_bus_dev, unsigned int port_count)
get_random_bytes(nsim_dev->switch_id.id, nsim_dev->switch_id.id_len);
INIT_LIST_HEAD(&nsim_dev->port_list);
mutex_init(&nsim_dev->port_list_lock);
+ nsim_dev->fw_update_status = true;
nsim_dev->fib_data = nsim_fib_create();
if (IS_ERR(nsim_dev->fib_data)) {
diff --git a/drivers/net/netdevsim/netdev.c b/drivers/net/netdevsim/netdev.c
index e5c8aa08e1cd..0740940f41b1 100644
--- a/drivers/net/netdevsim/netdev.c
+++ b/drivers/net/netdevsim/netdev.c
@@ -78,26 +78,6 @@ nsim_setup_tc_block_cb(enum tc_setup_type type, void *type_data, void *cb_priv)
return nsim_bpf_setup_tc_block_cb(type, type_data, cb_priv);
}
-static int
-nsim_setup_tc_block(struct net_device *dev, struct tc_block_offload *f)
-{
- struct netdevsim *ns = netdev_priv(dev);
-
- if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
- return -EOPNOTSUPP;
-
- switch (f->command) {
- case TC_BLOCK_BIND:
- return tcf_block_cb_register(f->block, nsim_setup_tc_block_cb,
- ns, ns, f->extack);
- case TC_BLOCK_UNBIND:
- tcf_block_cb_unregister(f->block, nsim_setup_tc_block_cb, ns);
- return 0;
- default:
- return -EOPNOTSUPP;
- }
-}
-
static int nsim_set_vf_mac(struct net_device *dev, int vf, u8 *mac)
{
struct netdevsim *ns = netdev_priv(dev);
@@ -223,12 +203,19 @@ static int nsim_set_vf_link_state(struct net_device *dev, int vf, int state)
return 0;
}
+static LIST_HEAD(nsim_block_cb_list);
+
static int
nsim_setup_tc(struct net_device *dev, enum tc_setup_type type, void *type_data)
{
+ struct netdevsim *ns = netdev_priv(dev);
+
switch (type) {
case TC_SETUP_BLOCK:
- return nsim_setup_tc_block(dev, type_data);
+ return flow_block_cb_setup_simple(type_data,
+ &nsim_block_cb_list,
+ nsim_setup_tc_block_cb,
+ ns, ns, true);
default:
return -EOPNOTSUPP;
}
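flow_block_cb_setup_simple() absorbs the TC_BLOCK_BIND/TC_BLOCK_UNBIND bookkeeping that the deleted nsim_setup_tc_block() used to open-code. A sketch of how another driver's ndo_setup_tc could delegate to it; example_priv and example_block_cb are hypothetical driver-private names, the latter assumed to have the usual tc_setup_cb_t signature:

static LIST_HEAD(example_block_cb_list);

static int example_setup_tc(struct net_device *dev, enum tc_setup_type type,
			    void *type_data)
{
	struct example_priv *priv = netdev_priv(dev);	/* hypothetical priv */

	switch (type) {
	case TC_SETUP_BLOCK:
		/* The final 'true' keeps the clsact-ingress-only behaviour. */
		return flow_block_cb_setup_simple(type_data,
						  &example_block_cb_list,
						  example_block_cb,
						  priv, priv, true);
	default:
		return -EOPNOTSUPP;
	}
}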
diff --git a/drivers/net/netdevsim/netdevsim.h b/drivers/net/netdevsim/netdevsim.h
index 3f398797c2bc..79c05af2a7c0 100644
--- a/drivers/net/netdevsim/netdevsim.h
+++ b/drivers/net/netdevsim/netdevsim.h
@@ -157,6 +157,7 @@ struct nsim_dev {
struct netdev_phys_item_id switch_id;
struct list_head port_list;
struct mutex port_list_lock; /* protects port list */
+ bool fw_update_status;
};
int nsim_dev_init(void);
diff --git a/drivers/net/phy/Kconfig b/drivers/net/phy/Kconfig
index 1d406c6df790..20f14c5fbb7e 100644
--- a/drivers/net/phy/Kconfig
+++ b/drivers/net/phy/Kconfig
@@ -416,6 +416,12 @@ config NATIONAL_PHY
---help---
Currently supports the DP83865 PHY.
+config NXP_TJA11XX_PHY
+ tristate "NXP TJA11xx PHYs support"
+ depends on HWMON
+ ---help---
+ Currently supports the NXP TJA1100 and TJA1101 PHYs.
+
config QSEMI_PHY
tristate "Quality Semiconductor PHYs"
---help---
diff --git a/drivers/net/phy/Makefile b/drivers/net/phy/Makefile
index 5b5c8669499e..839acb292c38 100644
--- a/drivers/net/phy/Makefile
+++ b/drivers/net/phy/Makefile
@@ -82,6 +82,7 @@ obj-$(CONFIG_MICROCHIP_PHY) += microchip.o
obj-$(CONFIG_MICROCHIP_T1_PHY) += microchip_t1.o
obj-$(CONFIG_MICROSEMI_PHY) += mscc.o
obj-$(CONFIG_NATIONAL_PHY) += national.o
+obj-$(CONFIG_NXP_TJA11XX_PHY) += nxp-tja11xx.o
obj-$(CONFIG_QSEMI_PHY) += qsemi.o
obj-$(CONFIG_REALTEK_PHY) += realtek.o
obj-$(CONFIG_RENESAS_PHY) += uPD60620.o
diff --git a/drivers/net/phy/aquantia_main.c b/drivers/net/phy/aquantia_main.c
index 0fedd28fdb6e..3b29d381116f 100644
--- a/drivers/net/phy/aquantia_main.c
+++ b/drivers/net/phy/aquantia_main.c
@@ -27,6 +27,7 @@
#define MDIO_PHYXS_VEND_IF_STATUS_TYPE_MASK GENMASK(7, 3)
#define MDIO_PHYXS_VEND_IF_STATUS_TYPE_KR 0
#define MDIO_PHYXS_VEND_IF_STATUS_TYPE_XFI 2
+#define MDIO_PHYXS_VEND_IF_STATUS_TYPE_USXGMII 3
#define MDIO_PHYXS_VEND_IF_STATUS_TYPE_SGMII 6
#define MDIO_PHYXS_VEND_IF_STATUS_TYPE_OCSGMII 10
@@ -360,6 +361,9 @@ static int aqr107_read_status(struct phy_device *phydev)
case MDIO_PHYXS_VEND_IF_STATUS_TYPE_XFI:
phydev->interface = PHY_INTERFACE_MODE_10GKR;
break;
+ case MDIO_PHYXS_VEND_IF_STATUS_TYPE_USXGMII:
+ phydev->interface = PHY_INTERFACE_MODE_USXGMII;
+ break;
case MDIO_PHYXS_VEND_IF_STATUS_TYPE_SGMII:
phydev->interface = PHY_INTERFACE_MODE_SGMII;
break;
@@ -488,9 +492,13 @@ static int aqr107_config_init(struct phy_device *phydev)
if (phydev->interface != PHY_INTERFACE_MODE_SGMII &&
phydev->interface != PHY_INTERFACE_MODE_2500BASEX &&
phydev->interface != PHY_INTERFACE_MODE_XGMII &&
+ phydev->interface != PHY_INTERFACE_MODE_USXGMII &&
phydev->interface != PHY_INTERFACE_MODE_10GKR)
return -ENODEV;
+ WARN(phydev->interface == PHY_INTERFACE_MODE_XGMII,
+ "Your devicetree is out of date, please update it. The AQR107 family doesn't support XGMII, maybe you mean USXGMII.\n");
+
ret = aqr107_wait_reset_complete(phydev);
if (!ret)
aqr107_chip_info(phydev);
diff --git a/drivers/net/phy/bcm87xx.c b/drivers/net/phy/bcm87xx.c
index f0c0eefe2202..f6dce6850850 100644
--- a/drivers/net/phy/bcm87xx.c
+++ b/drivers/net/phy/bcm87xx.c
@@ -81,22 +81,18 @@ static int bcm87xx_of_reg_init(struct phy_device *phydev)
}
#endif /* CONFIG_OF_MDIO */
-static int bcm87xx_config_init(struct phy_device *phydev)
+static int bcm87xx_get_features(struct phy_device *phydev)
{
- linkmode_zero(phydev->supported);
linkmode_set_bit(ETHTOOL_LINK_MODE_10000baseR_FEC_BIT,
phydev->supported);
- linkmode_zero(phydev->advertising);
- linkmode_set_bit(ETHTOOL_LINK_MODE_10000baseR_FEC_BIT,
- phydev->advertising);
- phydev->state = PHY_NOLINK;
- phydev->autoneg = AUTONEG_DISABLE;
-
- bcm87xx_of_reg_init(phydev);
-
return 0;
}
+static int bcm87xx_config_init(struct phy_device *phydev)
+{
+ return bcm87xx_of_reg_init(phydev);
+}
+
static int bcm87xx_config_aneg(struct phy_device *phydev)
{
return -EINVAL;
@@ -194,7 +190,7 @@ static struct phy_driver bcm87xx_driver[] = {
.phy_id = PHY_ID_BCM8706,
.phy_id_mask = 0xffffffff,
.name = "Broadcom BCM8706",
- .features = PHY_10GBIT_FEC_FEATURES,
+ .get_features = bcm87xx_get_features,
.config_init = bcm87xx_config_init,
.config_aneg = bcm87xx_config_aneg,
.read_status = bcm87xx_read_status,
@@ -206,7 +202,7 @@ static struct phy_driver bcm87xx_driver[] = {
.phy_id = PHY_ID_BCM8727,
.phy_id_mask = 0xffffffff,
.name = "Broadcom BCM8727",
- .features = PHY_10GBIT_FEC_FEATURES,
+ .get_features = bcm87xx_get_features,
.config_init = bcm87xx_config_init,
.config_aneg = bcm87xx_config_aneg,
.read_status = bcm87xx_read_status,
diff --git a/drivers/net/phy/broadcom.c b/drivers/net/phy/broadcom.c
index 67fa05d67523..937d0059e8ac 100644
--- a/drivers/net/phy/broadcom.c
+++ b/drivers/net/phy/broadcom.c
@@ -663,6 +663,8 @@ static struct phy_driver broadcom_drivers[] = {
.config_init = bcm54xx_config_init,
.ack_interrupt = bcm_phy_ack_intr,
.config_intr = bcm_phy_config_intr,
+ .suspend = genphy_suspend,
+ .resume = genphy_resume,
}, {
.phy_id = PHY_ID_BCM5481,
.phy_id_mask = 0xfffffff0,
diff --git a/drivers/net/phy/dp83867.c b/drivers/net/phy/dp83867.c
index c71c7d0f53f0..1f1ecee0ee2f 100644
--- a/drivers/net/phy/dp83867.c
+++ b/drivers/net/phy/dp83867.c
@@ -34,6 +34,7 @@
#define DP83867_RGMIICTL 0x0032
#define DP83867_STRAP_STS1 0x006E
+#define DP83867_STRAP_STS2 0x006f
#define DP83867_RGMIIDCTL 0x0086
#define DP83867_IO_MUX_CFG 0x0170
#define DP83867_10M_SGMII_CFG 0x016F
@@ -63,19 +64,30 @@
/* STRAP_STS1 bits */
#define DP83867_STRAP_STS1_RESERVED BIT(11)
+/* STRAP_STS2 bits */
+#define DP83867_STRAP_STS2_CLK_SKEW_TX_MASK GENMASK(6, 4)
+#define DP83867_STRAP_STS2_CLK_SKEW_TX_SHIFT 4
+#define DP83867_STRAP_STS2_CLK_SKEW_RX_MASK GENMASK(2, 0)
+#define DP83867_STRAP_STS2_CLK_SKEW_RX_SHIFT 0
+#define DP83867_STRAP_STS2_CLK_SKEW_NONE BIT(2)
+
/* PHY CTRL bits */
#define DP83867_PHYCR_FIFO_DEPTH_SHIFT 14
-#define DP83867_PHYCR_FIFO_DEPTH_MASK (3 << 14)
+#define DP83867_PHYCR_FIFO_DEPTH_MAX 0x03
+#define DP83867_PHYCR_FIFO_DEPTH_MASK GENMASK(15, 14)
#define DP83867_PHYCR_RESERVED_MASK BIT(11)
/* RGMIIDCTL bits */
+#define DP83867_RGMII_TX_CLK_DELAY_MAX 0xf
#define DP83867_RGMII_TX_CLK_DELAY_SHIFT 4
+#define DP83867_RGMII_RX_CLK_DELAY_MAX 0xf
+#define DP83867_RGMII_RX_CLK_DELAY_SHIFT 0
/* IO_MUX_CFG bits */
-#define DP83867_IO_MUX_CFG_IO_IMPEDANCE_CTRL 0x1f
-
+#define DP83867_IO_MUX_CFG_IO_IMPEDANCE_MASK 0x1f
#define DP83867_IO_MUX_CFG_IO_IMPEDANCE_MAX 0x0
#define DP83867_IO_MUX_CFG_IO_IMPEDANCE_MIN 0x1f
+#define DP83867_IO_MUX_CFG_CLK_O_DISABLE BIT(6)
#define DP83867_IO_MUX_CFG_CLK_O_SEL_MASK (0x1f << 8)
#define DP83867_IO_MUX_CFG_CLK_O_SEL_SHIFT 8
@@ -89,13 +101,14 @@ enum {
};
struct dp83867_private {
- int rx_id_delay;
- int tx_id_delay;
- int fifo_depth;
+ u32 rx_id_delay;
+ u32 tx_id_delay;
+ u32 fifo_depth;
int io_impedance;
int port_mirroring;
bool rxctrl_strap_quirk;
- int clk_output_sel;
+ bool set_clk_output;
+ u32 clk_output_sel;
};
static int dp83867_ack_interrupt(struct phy_device *phydev)
@@ -157,38 +170,83 @@ static int dp83867_of_init(struct phy_device *phydev)
if (!of_node)
return -ENODEV;
- dp83867->io_impedance = -EINVAL;
-
/* Optional configuration */
ret = of_property_read_u32(of_node, "ti,clk-output-sel",
&dp83867->clk_output_sel);
- if (ret || dp83867->clk_output_sel > DP83867_CLK_O_SEL_REF_CLK)
- /* Keep the default value if ti,clk-output-sel is not set
- * or too high
+ /* If not set, keep default */
+ if (!ret) {
+ dp83867->set_clk_output = true;
+ /* Valid values are 0 to DP83867_CLK_O_SEL_REF_CLK or
+ * DP83867_CLK_O_SEL_OFF.
*/
- dp83867->clk_output_sel = DP83867_CLK_O_SEL_REF_CLK;
+ if (dp83867->clk_output_sel > DP83867_CLK_O_SEL_REF_CLK &&
+ dp83867->clk_output_sel != DP83867_CLK_O_SEL_OFF) {
+ phydev_err(phydev, "ti,clk-output-sel value %u out of range\n",
+ dp83867->clk_output_sel);
+ return -EINVAL;
+ }
+ }
if (of_property_read_bool(of_node, "ti,max-output-impedance"))
dp83867->io_impedance = DP83867_IO_MUX_CFG_IO_IMPEDANCE_MAX;
else if (of_property_read_bool(of_node, "ti,min-output-impedance"))
dp83867->io_impedance = DP83867_IO_MUX_CFG_IO_IMPEDANCE_MIN;
+ else
+ dp83867->io_impedance = -1; /* leave at default */
dp83867->rxctrl_strap_quirk = of_property_read_bool(of_node,
"ti,dp83867-rxctrl-strap-quirk");
- ret = of_property_read_u32(of_node, "ti,rx-internal-delay",
- &dp83867->rx_id_delay);
- if (ret &&
- (phydev->interface == PHY_INTERFACE_MODE_RGMII_ID ||
- phydev->interface == PHY_INTERFACE_MODE_RGMII_RXID))
- return ret;
+ /* Existing behavior was to use default pin strapping delay in rgmii
+ * mode, but rgmii should have meant no delay. Warn existing users.
+ */
+ if (phydev->interface == PHY_INTERFACE_MODE_RGMII) {
+ const u16 val = phy_read_mmd(phydev, DP83867_DEVADDR, DP83867_STRAP_STS2);
+ const u16 txskew = (val & DP83867_STRAP_STS2_CLK_SKEW_TX_MASK) >>
+ DP83867_STRAP_STS2_CLK_SKEW_TX_SHIFT;
+ const u16 rxskew = (val & DP83867_STRAP_STS2_CLK_SKEW_RX_MASK) >>
+ DP83867_STRAP_STS2_CLK_SKEW_RX_SHIFT;
+
+ if (txskew != DP83867_STRAP_STS2_CLK_SKEW_NONE ||
+ rxskew != DP83867_STRAP_STS2_CLK_SKEW_NONE)
+ phydev_warn(phydev,
+ "PHY has delays via pin strapping, but phy-mode = 'rgmii'\n"
+ "Should be 'rgmii-id' to use internal delays\n");
+ }
- ret = of_property_read_u32(of_node, "ti,tx-internal-delay",
- &dp83867->tx_id_delay);
- if (ret &&
- (phydev->interface == PHY_INTERFACE_MODE_RGMII_ID ||
- phydev->interface == PHY_INTERFACE_MODE_RGMII_TXID))
- return ret;
+ /* RX delay *must* be specified if internal delay of RX is used. */
+ if (phydev->interface == PHY_INTERFACE_MODE_RGMII_ID ||
+ phydev->interface == PHY_INTERFACE_MODE_RGMII_RXID) {
+ ret = of_property_read_u32(of_node, "ti,rx-internal-delay",
+ &dp83867->rx_id_delay);
+ if (ret) {
+ phydev_err(phydev, "ti,rx-internal-delay must be specified\n");
+ return ret;
+ }
+ if (dp83867->rx_id_delay > DP83867_RGMII_RX_CLK_DELAY_MAX) {
+ phydev_err(phydev,
+ "ti,rx-internal-delay value of %u out of range\n",
+ dp83867->rx_id_delay);
+ return -EINVAL;
+ }
+ }
+
+ /* TX delay *must* be specified if internal delay of TX is used. */
+ if (phydev->interface == PHY_INTERFACE_MODE_RGMII_ID ||
+ phydev->interface == PHY_INTERFACE_MODE_RGMII_TXID) {
+ ret = of_property_read_u32(of_node, "ti,tx-internal-delay",
+ &dp83867->tx_id_delay);
+ if (ret) {
+ phydev_err(phydev, "ti,tx-internal-delay must be specified\n");
+ return ret;
+ }
+ if (dp83867->tx_id_delay > DP83867_RGMII_TX_CLK_DELAY_MAX) {
+ phydev_err(phydev,
+ "ti,tx-internal-delay value of %u out of range\n",
+ dp83867->tx_id_delay);
+ return -EINVAL;
+ }
+ }
if (of_property_read_bool(of_node, "enet-phy-lane-swap"))
dp83867->port_mirroring = DP83867_PORT_MIRROING_EN;
@@ -196,8 +254,20 @@ static int dp83867_of_init(struct phy_device *phydev)
if (of_property_read_bool(of_node, "enet-phy-lane-no-swap"))
dp83867->port_mirroring = DP83867_PORT_MIRROING_DIS;
- return of_property_read_u32(of_node, "ti,fifo-depth",
+ ret = of_property_read_u32(of_node, "ti,fifo-depth",
&dp83867->fifo_depth);
+ if (ret) {
+ phydev_err(phydev,
+ "ti,fifo-depth property is required\n");
+ return ret;
+ }
+ if (dp83867->fifo_depth > DP83867_PHYCR_FIFO_DEPTH_MAX) {
+ phydev_err(phydev,
+ "ti,fifo-depth value %u out of range\n",
+ dp83867->fifo_depth);
+ return -EINVAL;
+ }
+ return 0;
}
#else
static int dp83867_of_init(struct phy_device *phydev)
@@ -206,25 +276,29 @@ static int dp83867_of_init(struct phy_device *phydev)
}
#endif /* CONFIG_OF_MDIO */
-static int dp83867_config_init(struct phy_device *phydev)
+static int dp83867_probe(struct phy_device *phydev)
{
struct dp83867_private *dp83867;
+
+ dp83867 = devm_kzalloc(&phydev->mdio.dev, sizeof(*dp83867),
+ GFP_KERNEL);
+ if (!dp83867)
+ return -ENOMEM;
+
+ phydev->priv = dp83867;
+
+ return 0;
+}
+
+static int dp83867_config_init(struct phy_device *phydev)
+{
+ struct dp83867_private *dp83867 = phydev->priv;
int ret, val, bs;
u16 delay;
- if (!phydev->priv) {
- dp83867 = devm_kzalloc(&phydev->mdio.dev, sizeof(*dp83867),
- GFP_KERNEL);
- if (!dp83867)
- return -ENOMEM;
-
- phydev->priv = dp83867;
- ret = dp83867_of_init(phydev);
- if (ret)
- return ret;
- } else {
- dp83867 = (struct dp83867_private *)phydev->priv;
- }
+ ret = dp83867_of_init(phydev);
+ if (ret)
+ return ret;
/* RX_DV/RX_CTRL strapped in mode 1 or mode 2 workaround */
if (dp83867->rxctrl_strap_quirk)
@@ -256,9 +330,16 @@ static int dp83867_config_init(struct phy_device *phydev)
if (ret)
return ret;
- /* Set up RGMII delays */
+ /* If rgmii mode with no internal delay is selected, we do NOT use
+ * aligned mode as one might expect. Instead we use the PHY's default
+ * based on pin strapping. And the "mode 0" default is to *use*
+ * internal delay with a value of 7 (2.00 ns).
+ *
+ * Set up RGMII delays
+ */
val = phy_read_mmd(phydev, DP83867_DEVADDR, DP83867_RGMIICTL);
+ val &= ~(DP83867_RGMII_TX_CLK_DELAY_EN | DP83867_RGMII_RX_CLK_DELAY_EN);
if (phydev->interface == PHY_INTERFACE_MODE_RGMII_ID)
val |= (DP83867_RGMII_TX_CLK_DELAY_EN | DP83867_RGMII_RX_CLK_DELAY_EN);
@@ -275,14 +356,14 @@ static int dp83867_config_init(struct phy_device *phydev)
phy_write_mmd(phydev, DP83867_DEVADDR, DP83867_RGMIIDCTL,
delay);
-
- if (dp83867->io_impedance >= 0)
- phy_modify_mmd(phydev, DP83867_DEVADDR, DP83867_IO_MUX_CFG,
- DP83867_IO_MUX_CFG_IO_IMPEDANCE_CTRL,
- dp83867->io_impedance &
- DP83867_IO_MUX_CFG_IO_IMPEDANCE_CTRL);
}
+ /* If specified, set io impedance */
+ if (dp83867->io_impedance >= 0)
+ phy_modify_mmd(phydev, DP83867_DEVADDR, DP83867_IO_MUX_CFG,
+ DP83867_IO_MUX_CFG_IO_IMPEDANCE_MASK,
+ dp83867->io_impedance);
+
if (phydev->interface == PHY_INTERFACE_MODE_SGMII) {
/* For support SPEED_10 in SGMII mode
* DP83867_10M_SGMII_RATE_ADAPT bit
@@ -321,11 +402,20 @@ static int dp83867_config_init(struct phy_device *phydev)
dp83867_config_port_mirroring(phydev);
/* Clock output selection if muxing property is set */
- if (dp83867->clk_output_sel != DP83867_CLK_O_SEL_REF_CLK)
+ if (dp83867->set_clk_output) {
+ u16 mask = DP83867_IO_MUX_CFG_CLK_O_DISABLE;
+
+ if (dp83867->clk_output_sel == DP83867_CLK_O_SEL_OFF) {
+ val = DP83867_IO_MUX_CFG_CLK_O_DISABLE;
+ } else {
+ mask |= DP83867_IO_MUX_CFG_CLK_O_SEL_MASK;
+ val = dp83867->clk_output_sel <<
+ DP83867_IO_MUX_CFG_CLK_O_SEL_SHIFT;
+ }
+
phy_modify_mmd(phydev, DP83867_DEVADDR, DP83867_IO_MUX_CFG,
- DP83867_IO_MUX_CFG_CLK_O_SEL_MASK,
- dp83867->clk_output_sel <<
- DP83867_IO_MUX_CFG_CLK_O_SEL_SHIFT);
+ mask, val);
+ }
return 0;
}
@@ -350,6 +440,7 @@ static struct phy_driver dp83867_driver[] = {
.name = "TI DP83867",
/* PHY_GBIT_FEATURES */
+ .probe = dp83867_probe,
.config_init = dp83867_config_init,
.soft_reset = dp83867_phy_reset,
diff --git a/drivers/net/phy/lxt.c b/drivers/net/phy/lxt.c
index 314486288119..356bd6472f49 100644
--- a/drivers/net/phy/lxt.c
+++ b/drivers/net/phy/lxt.c
@@ -262,6 +262,8 @@ static struct phy_driver lxt97x_driver[] = {
/* PHY_BASIC_FEATURES */
.ack_interrupt = lxt971_ack_interrupt,
.config_intr = lxt971_config_intr,
+ .suspend = genphy_suspend,
+ .resume = genphy_resume,
}, {
.phy_id = 0x00137a10,
.name = "LXT973-A2",
@@ -271,6 +273,8 @@ static struct phy_driver lxt97x_driver[] = {
.probe = lxt973_probe,
.config_aneg = lxt973_config_aneg,
.read_status = lxt973a2_read_status,
+ .suspend = genphy_suspend,
+ .resume = genphy_resume,
}, {
.phy_id = 0x00137a10,
.name = "LXT973",
@@ -279,6 +283,8 @@ static struct phy_driver lxt97x_driver[] = {
.flags = 0,
.probe = lxt973_probe,
.config_aneg = lxt973_config_aneg,
+ .suspend = genphy_suspend,
+ .resume = genphy_resume,
} };
module_phy_driver(lxt97x_driver);
diff --git a/drivers/net/phy/nxp-tja11xx.c b/drivers/net/phy/nxp-tja11xx.c
new file mode 100644
index 000000000000..b705d0bd798b
--- /dev/null
+++ b/drivers/net/phy/nxp-tja11xx.c
@@ -0,0 +1,403 @@
+// SPDX-License-Identifier: GPL-2.0
+/* NXP TJA1100 BroadR-Reach PHY driver
+ *
+ * Copyright (C) 2018 Marek Vasut <marex@denx.de>
+ */
+#include <linux/delay.h>
+#include <linux/ethtool.h>
+#include <linux/kernel.h>
+#include <linux/mii.h>
+#include <linux/module.h>
+#include <linux/phy.h>
+#include <linux/hwmon.h>
+#include <linux/bitfield.h>
+
+#define PHY_ID_MASK 0xfffffff0
+#define PHY_ID_TJA1100 0x0180dc40
+#define PHY_ID_TJA1101 0x0180dd00
+
+#define MII_ECTRL 17
+#define MII_ECTRL_LINK_CONTROL BIT(15)
+#define MII_ECTRL_POWER_MODE_MASK GENMASK(14, 11)
+#define MII_ECTRL_POWER_MODE_NO_CHANGE (0x0 << 11)
+#define MII_ECTRL_POWER_MODE_NORMAL (0x3 << 11)
+#define MII_ECTRL_POWER_MODE_STANDBY (0xc << 11)
+#define MII_ECTRL_CONFIG_EN BIT(2)
+#define MII_ECTRL_WAKE_REQUEST BIT(0)
+
+#define MII_CFG1 18
+#define MII_CFG1_AUTO_OP BIT(14)
+#define MII_CFG1_SLEEP_CONFIRM BIT(6)
+#define MII_CFG1_LED_MODE_MASK GENMASK(5, 4)
+#define MII_CFG1_LED_MODE_LINKUP 0
+#define MII_CFG1_LED_ENABLE BIT(3)
+
+#define MII_CFG2 19
+#define MII_CFG2_SLEEP_REQUEST_TO GENMASK(1, 0)
+#define MII_CFG2_SLEEP_REQUEST_TO_16MS 0x3
+
+#define MII_INTSRC 21
+#define MII_INTSRC_TEMP_ERR BIT(1)
+#define MII_INTSRC_UV_ERR BIT(3)
+
+#define MII_COMMSTAT 23
+#define MII_COMMSTAT_LINK_UP BIT(15)
+
+#define MII_GENSTAT 24
+#define MII_GENSTAT_PLL_LOCKED BIT(14)
+
+#define MII_COMMCFG 27
+#define MII_COMMCFG_AUTO_OP BIT(15)
+
+struct tja11xx_priv {
+ char *hwmon_name;
+ struct device *hwmon_dev;
+};
+
+struct tja11xx_phy_stats {
+ const char *string;
+ u8 reg;
+ u8 off;
+ u16 mask;
+};
+
+static struct tja11xx_phy_stats tja11xx_hw_stats[] = {
+ { "phy_symbol_error_count", 20, 0, GENMASK(15, 0) },
+ { "phy_polarity_detect", 25, 6, BIT(6) },
+ { "phy_open_detect", 25, 7, BIT(7) },
+ { "phy_short_detect", 25, 8, BIT(8) },
+ { "phy_rem_rcvr_count", 26, 0, GENMASK(7, 0) },
+ { "phy_loc_rcvr_count", 26, 8, GENMASK(15, 8) },
+};
+
+static int tja11xx_check(struct phy_device *phydev, u8 reg, u16 mask, u16 set)
+{
+ int i, ret;
+
+ for (i = 0; i < 200; i++) {
+ ret = phy_read(phydev, reg);
+ if (ret < 0)
+ return ret;
+
+ if ((ret & mask) == set)
+ return 0;
+
+ usleep_range(100, 150);
+ }
+
+ return -ETIMEDOUT;
+}
+
+static int phy_modify_check(struct phy_device *phydev, u8 reg,
+ u16 mask, u16 set)
+{
+ int ret;
+
+ ret = phy_modify(phydev, reg, mask, set);
+ if (ret)
+ return ret;
+
+ return tja11xx_check(phydev, reg, mask, set);
+}
+
+static int tja11xx_enable_reg_write(struct phy_device *phydev)
+{
+ return phy_set_bits(phydev, MII_ECTRL, MII_ECTRL_CONFIG_EN);
+}
+
+static int tja11xx_enable_link_control(struct phy_device *phydev)
+{
+ return phy_set_bits(phydev, MII_ECTRL, MII_ECTRL_LINK_CONTROL);
+}
+
+static int tja11xx_wakeup(struct phy_device *phydev)
+{
+ int ret;
+
+ ret = phy_read(phydev, MII_ECTRL);
+ if (ret < 0)
+ return ret;
+
+ switch (ret & MII_ECTRL_POWER_MODE_MASK) {
+ case MII_ECTRL_POWER_MODE_NO_CHANGE:
+ break;
+ case MII_ECTRL_POWER_MODE_NORMAL:
+ ret = phy_set_bits(phydev, MII_ECTRL, MII_ECTRL_WAKE_REQUEST);
+ if (ret)
+ return ret;
+
+ ret = phy_clear_bits(phydev, MII_ECTRL, MII_ECTRL_WAKE_REQUEST);
+ if (ret)
+ return ret;
+ break;
+ case MII_ECTRL_POWER_MODE_STANDBY:
+ ret = phy_modify_check(phydev, MII_ECTRL,
+ MII_ECTRL_POWER_MODE_MASK,
+ MII_ECTRL_POWER_MODE_STANDBY);
+ if (ret)
+ return ret;
+
+ ret = phy_modify(phydev, MII_ECTRL, MII_ECTRL_POWER_MODE_MASK,
+ MII_ECTRL_POWER_MODE_NORMAL);
+ if (ret)
+ return ret;
+
+ ret = phy_modify_check(phydev, MII_GENSTAT,
+ MII_GENSTAT_PLL_LOCKED,
+ MII_GENSTAT_PLL_LOCKED);
+ if (ret)
+ return ret;
+
+ return tja11xx_enable_link_control(phydev);
+ default:
+ break;
+ }
+
+ return 0;
+}
+
+static int tja11xx_soft_reset(struct phy_device *phydev)
+{
+ int ret;
+
+ ret = tja11xx_enable_reg_write(phydev);
+ if (ret)
+ return ret;
+
+ return genphy_soft_reset(phydev);
+}
+
+static int tja11xx_config_init(struct phy_device *phydev)
+{
+ int ret;
+
+ ret = tja11xx_enable_reg_write(phydev);
+ if (ret)
+ return ret;
+
+ phydev->autoneg = AUTONEG_DISABLE;
+ phydev->speed = SPEED_100;
+ phydev->duplex = DUPLEX_FULL;
+
+ switch (phydev->phy_id & PHY_ID_MASK) {
+ case PHY_ID_TJA1100:
+ ret = phy_modify(phydev, MII_CFG1,
+ MII_CFG1_AUTO_OP | MII_CFG1_LED_MODE_MASK |
+ MII_CFG1_LED_ENABLE,
+ MII_CFG1_AUTO_OP | MII_CFG1_LED_MODE_LINKUP |
+ MII_CFG1_LED_ENABLE);
+ if (ret)
+ return ret;
+ break;
+ case PHY_ID_TJA1101:
+ ret = phy_set_bits(phydev, MII_COMMCFG, MII_COMMCFG_AUTO_OP);
+ if (ret)
+ return ret;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ ret = phy_clear_bits(phydev, MII_CFG1, MII_CFG1_SLEEP_CONFIRM);
+ if (ret)
+ return ret;
+
+ ret = phy_modify(phydev, MII_CFG2, MII_CFG2_SLEEP_REQUEST_TO,
+ MII_CFG2_SLEEP_REQUEST_TO_16MS);
+ if (ret)
+ return ret;
+
+ ret = tja11xx_wakeup(phydev);
+ if (ret < 0)
+ return ret;
+
+ /* ACK interrupts by reading the status register */
+ ret = phy_read(phydev, MII_INTSRC);
+ if (ret < 0)
+ return ret;
+
+ return 0;
+}
+
+static int tja11xx_read_status(struct phy_device *phydev)
+{
+ int ret;
+
+ ret = genphy_update_link(phydev);
+ if (ret)
+ return ret;
+
+ if (phydev->link) {
+ ret = phy_read(phydev, MII_COMMSTAT);
+ if (ret < 0)
+ return ret;
+
+ if (!(ret & MII_COMMSTAT_LINK_UP))
+ phydev->link = 0;
+ }
+
+ return 0;
+}
+
+static int tja11xx_get_sset_count(struct phy_device *phydev)
+{
+ return ARRAY_SIZE(tja11xx_hw_stats);
+}
+
+static void tja11xx_get_strings(struct phy_device *phydev, u8 *data)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(tja11xx_hw_stats); i++) {
+ strncpy(data + i * ETH_GSTRING_LEN,
+ tja11xx_hw_stats[i].string, ETH_GSTRING_LEN);
+ }
+}
+
+static void tja11xx_get_stats(struct phy_device *phydev,
+ struct ethtool_stats *stats, u64 *data)
+{
+ int i, ret;
+
+ for (i = 0; i < ARRAY_SIZE(tja11xx_hw_stats); i++) {
+ ret = phy_read(phydev, tja11xx_hw_stats[i].reg);
+ if (ret < 0)
+ data[i] = U64_MAX;
+ else {
+ data[i] = ret & tja11xx_hw_stats[i].mask;
+ data[i] >>= tja11xx_hw_stats[i].off;
+ }
+ }
+}
+
+static int tja11xx_hwmon_read(struct device *dev,
+ enum hwmon_sensor_types type,
+ u32 attr, int channel, long *value)
+{
+ struct phy_device *phydev = dev_get_drvdata(dev);
+ int ret;
+
+ if (type == hwmon_in && attr == hwmon_in_lcrit_alarm) {
+ ret = phy_read(phydev, MII_INTSRC);
+ if (ret < 0)
+ return ret;
+
+ *value = !!(ret & MII_INTSRC_TEMP_ERR);
+ return 0;
+ }
+
+ if (type == hwmon_temp && attr == hwmon_temp_crit_alarm) {
+ ret = phy_read(phydev, MII_INTSRC);
+ if (ret < 0)
+ return ret;
+
+ *value = !!(ret & MII_INTSRC_UV_ERR);
+ return 0;
+ }
+
+ return -EOPNOTSUPP;
+}
+
+static umode_t tja11xx_hwmon_is_visible(const void *data,
+ enum hwmon_sensor_types type,
+ u32 attr, int channel)
+{
+ if (type == hwmon_in && attr == hwmon_in_lcrit_alarm)
+ return 0444;
+
+ if (type == hwmon_temp && attr == hwmon_temp_crit_alarm)
+ return 0444;
+
+ return 0;
+}
+
+static const struct hwmon_channel_info *tja11xx_hwmon_info[] = {
+ HWMON_CHANNEL_INFO(in, HWMON_I_LCRIT_ALARM),
+ HWMON_CHANNEL_INFO(temp, HWMON_T_CRIT_ALARM),
+ NULL
+};
+
+static const struct hwmon_ops tja11xx_hwmon_hwmon_ops = {
+ .is_visible = tja11xx_hwmon_is_visible,
+ .read = tja11xx_hwmon_read,
+};
+
+static const struct hwmon_chip_info tja11xx_hwmon_chip_info = {
+ .ops = &tja11xx_hwmon_hwmon_ops,
+ .info = tja11xx_hwmon_info,
+};
+
+static int tja11xx_probe(struct phy_device *phydev)
+{
+ struct device *dev = &phydev->mdio.dev;
+ struct tja11xx_priv *priv;
+ int i;
+
+ priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
+ if (!priv)
+ return -ENOMEM;
+
+ priv->hwmon_name = devm_kstrdup(dev, dev_name(dev), GFP_KERNEL);
+ if (!priv->hwmon_name)
+ return -ENOMEM;
+
+ for (i = 0; priv->hwmon_name[i]; i++)
+ if (hwmon_is_bad_char(priv->hwmon_name[i]))
+ priv->hwmon_name[i] = '_';
+
+ priv->hwmon_dev =
+ devm_hwmon_device_register_with_info(dev, priv->hwmon_name,
+ phydev,
+ &tja11xx_hwmon_chip_info,
+ NULL);
+
+ return PTR_ERR_OR_ZERO(priv->hwmon_dev);
+}
+
+static struct phy_driver tja11xx_driver[] = {
+ {
+ PHY_ID_MATCH_MODEL(PHY_ID_TJA1100),
+ .name = "NXP TJA1100",
+ .features = PHY_BASIC_T1_FEATURES,
+ .probe = tja11xx_probe,
+ .soft_reset = tja11xx_soft_reset,
+ .config_init = tja11xx_config_init,
+ .read_status = tja11xx_read_status,
+ .suspend = genphy_suspend,
+ .resume = genphy_resume,
+ .set_loopback = genphy_loopback,
+ /* Statistics */
+ .get_sset_count = tja11xx_get_sset_count,
+ .get_strings = tja11xx_get_strings,
+ .get_stats = tja11xx_get_stats,
+ }, {
+ PHY_ID_MATCH_MODEL(PHY_ID_TJA1101),
+ .name = "NXP TJA1101",
+ .features = PHY_BASIC_T1_FEATURES,
+ .probe = tja11xx_probe,
+ .soft_reset = tja11xx_soft_reset,
+ .config_init = tja11xx_config_init,
+ .read_status = tja11xx_read_status,
+ .suspend = genphy_suspend,
+ .resume = genphy_resume,
+ .set_loopback = genphy_loopback,
+ /* Statistics */
+ .get_sset_count = tja11xx_get_sset_count,
+ .get_strings = tja11xx_get_strings,
+ .get_stats = tja11xx_get_stats,
+ }
+};
+
+module_phy_driver(tja11xx_driver);
+
+static struct mdio_device_id __maybe_unused tja11xx_tbl[] = {
+ { PHY_ID_MATCH_MODEL(PHY_ID_TJA1100) },
+ { PHY_ID_MATCH_MODEL(PHY_ID_TJA1101) },
+ { }
+};
+
+MODULE_DEVICE_TABLE(mdio, tja11xx_tbl);
+
+MODULE_AUTHOR("Marek Vasut <marex@denx.de>");
+MODULE_DESCRIPTION("NXP TJA11xx BroadR-Reach PHY driver");
+MODULE_LICENSE("GPL");
diff --git a/drivers/net/phy/phy-core.c b/drivers/net/phy/phy-core.c
index 3daf0214a242..16667fbac8bf 100644
--- a/drivers/net/phy/phy-core.c
+++ b/drivers/net/phy/phy-core.c
@@ -8,7 +8,7 @@
const char *phy_speed_to_str(int speed)
{
- BUILD_BUG_ON_MSG(__ETHTOOL_LINK_MODE_MASK_NBITS != 67,
+ BUILD_BUG_ON_MSG(__ETHTOOL_LINK_MODE_MASK_NBITS != 69,
"Enum ethtool_link_mode_bit_indices and phylib are out of sync. "
"If a speed or mode has been added please update phy_speed_to_str "
"and the PHY settings array.\n");
@@ -131,9 +131,11 @@ static const struct phy_setting settings[] = {
PHY_SETTING( 1000, FULL, 1000baseKX_Full ),
PHY_SETTING( 1000, FULL, 1000baseT_Full ),
PHY_SETTING( 1000, HALF, 1000baseT_Half ),
+ PHY_SETTING( 1000, FULL, 1000baseT1_Full ),
PHY_SETTING( 1000, FULL, 1000baseX_Full ),
/* 100M */
PHY_SETTING( 100, FULL, 100baseT_Full ),
+ PHY_SETTING( 100, FULL, 100baseT1_Full ),
PHY_SETTING( 100, HALF, 100baseT_Half ),
/* 10M */
PHY_SETTING( 10, FULL, 10baseT_Full ),
diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
index e8885429293a..ef7aa738e0dc 100644
--- a/drivers/net/phy/phy.c
+++ b/drivers/net/phy/phy.c
@@ -29,6 +29,8 @@
#include <linux/uaccess.h>
#include <linux/atomic.h>
+#define PHY_STATE_TIME HZ
+
#define PHY_STATE_STR(_state) \
case PHY_##_state: \
return __stringify(_state); \
@@ -41,7 +43,6 @@ static const char *phy_state_to_str(enum phy_state st)
PHY_STATE_STR(UP)
PHY_STATE_STR(RUNNING)
PHY_STATE_STR(NOLINK)
- PHY_STATE_STR(FORCING)
PHY_STATE_STR(HALTED)
}
@@ -297,12 +298,8 @@ int phy_ethtool_sset(struct phy_device *phydev, struct ethtool_cmd *cmd)
linkmode_copy(phydev->advertising, advertising);
- if (AUTONEG_ENABLE == cmd->autoneg)
- linkmode_set_bit(ETHTOOL_LINK_MODE_Autoneg_BIT,
- phydev->advertising);
- else
- linkmode_clear_bit(ETHTOOL_LINK_MODE_Autoneg_BIT,
- phydev->advertising);
+ linkmode_mod_bit(ETHTOOL_LINK_MODE_Autoneg_BIT,
+ phydev->advertising, AUTONEG_ENABLE == cmd->autoneg);
phydev->duplex = cmd->duplex;
@@ -352,12 +349,8 @@ int phy_ethtool_ksettings_set(struct phy_device *phydev,
linkmode_copy(phydev->advertising, advertising);
- if (autoneg == AUTONEG_ENABLE)
- linkmode_set_bit(ETHTOOL_LINK_MODE_Autoneg_BIT,
- phydev->advertising);
- else
- linkmode_clear_bit(ETHTOOL_LINK_MODE_Autoneg_BIT,
- phydev->advertising);
+ linkmode_mod_bit(ETHTOOL_LINK_MODE_Autoneg_BIT,
+ phydev->advertising, autoneg == AUTONEG_ENABLE);
phydev->duplex = duplex;
@@ -407,6 +400,7 @@ int phy_mii_ioctl(struct phy_device *phydev, struct ifreq *ifr, int cmd)
struct mii_ioctl_data *mii_data = if_mii(ifr);
u16 val = mii_data->val_in;
bool change_autoneg = false;
+ int prtad, devad;
switch (cmd) {
case SIOCGMIIPHY:
@@ -414,14 +408,29 @@ int phy_mii_ioctl(struct phy_device *phydev, struct ifreq *ifr, int cmd)
/* fall through */
case SIOCGMIIREG:
- mii_data->val_out = mdiobus_read(phydev->mdio.bus,
- mii_data->phy_id,
- mii_data->reg_num);
+ if (mdio_phy_id_is_c45(mii_data->phy_id)) {
+ prtad = mdio_phy_id_prtad(mii_data->phy_id);
+ devad = mdio_phy_id_devad(mii_data->phy_id);
+ devad = MII_ADDR_C45 | devad << 16 | mii_data->reg_num;
+ } else {
+ prtad = mii_data->phy_id;
+ devad = mii_data->reg_num;
+ }
+ mii_data->val_out = mdiobus_read(phydev->mdio.bus, prtad,
+ devad);
return 0;
case SIOCSMIIREG:
- if (mii_data->phy_id == phydev->mdio.addr) {
- switch (mii_data->reg_num) {
+ if (mdio_phy_id_is_c45(mii_data->phy_id)) {
+ prtad = mdio_phy_id_prtad(mii_data->phy_id);
+ devad = mdio_phy_id_devad(mii_data->phy_id);
+ devad = MII_ADDR_C45 | devad << 16 | mii_data->reg_num;
+ } else {
+ prtad = mii_data->phy_id;
+ devad = mii_data->reg_num;
+ }
+ if (prtad == phydev->mdio.addr) {
+ switch (devad) {
case MII_BMCR:
if ((val & (BMCR_RESET | BMCR_ANENABLE)) == 0) {
if (phydev->autoneg == AUTONEG_ENABLE)
@@ -454,11 +463,10 @@ int phy_mii_ioctl(struct phy_device *phydev, struct ifreq *ifr, int cmd)
}
}
- mdiobus_write(phydev->mdio.bus, mii_data->phy_id,
- mii_data->reg_num, val);
+ mdiobus_write(phydev->mdio.bus, prtad, devad, val);
- if (mii_data->phy_id == phydev->mdio.addr &&
- mii_data->reg_num == MII_BMCR &&
+ if (prtad == phydev->mdio.addr &&
+ devad == MII_BMCR &&
val & BMCR_RESET)
return phy_init_hw(phydev);
@@ -478,12 +486,12 @@ int phy_mii_ioctl(struct phy_device *phydev, struct ifreq *ifr, int cmd)
}
EXPORT_SYMBOL(phy_mii_ioctl);
-static void phy_queue_state_machine(struct phy_device *phydev,
- unsigned int secs)
+void phy_queue_state_machine(struct phy_device *phydev, unsigned long jiffies)
{
mod_delayed_work(system_power_efficient_wq, &phydev->state_queue,
- secs * HZ);
+ jiffies);
}
+EXPORT_SYMBOL(phy_queue_state_machine);
static void phy_trigger_machine(struct phy_device *phydev)
{
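The SIOCGMIIREG/SIOCSMIIREG hunks above route Clause 45 accesses through mdiobus_read()/mdiobus_write() whenever the phy_id carries the C45 encoding. A userspace sketch that reads the PMA/PMD control register of the PHY at MDIO address 4 behind eth0 (interface name, address and register are illustrative; error handling trimmed):

#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/mii.h>
#include <linux/mdio.h>
#include <linux/sockios.h>

int main(void)
{
	struct ifreq ifr = { 0 };
	/* the mii_ioctl_data lives inline in the ifreq union */
	struct mii_ioctl_data *mii = (struct mii_ioctl_data *)&ifr.ifr_data;
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);
	mii->phy_id  = mdio_phy_id_c45(4, MDIO_MMD_PMAPMD);	/* port 4, MMD 1 */
	mii->reg_num = MDIO_CTRL1;

	if (fd >= 0 && ioctl(fd, SIOCGMIIREG, &ifr) == 0)
		printf("PMA/PMD CTRL1 = 0x%04x\n", mii->val_out);
	return 0;
}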
@@ -560,15 +568,8 @@ int phy_start_aneg(struct phy_device *phydev)
if (err < 0)
goto out_unlock;
- if (phy_is_started(phydev)) {
- if (phydev->autoneg == AUTONEG_ENABLE) {
- err = phy_check_link_status(phydev);
- } else {
- phydev->state = PHY_FORCING;
- phydev->link_timeout = PHY_FORCE_TIMEOUT;
- }
- }
-
+ if (phy_is_started(phydev))
+ err = phy_check_link_status(phydev);
out_unlock:
mutex_unlock(&phydev->lock);
@@ -772,8 +773,13 @@ static irqreturn_t phy_interrupt(int irq, void *phy_dat)
if (phydev->drv->did_interrupt && !phydev->drv->did_interrupt(phydev))
return IRQ_NONE;
- /* reschedule state queue work to run as soon as possible */
- phy_trigger_machine(phydev);
+ if (phydev->drv->handle_interrupt) {
+ if (phydev->drv->handle_interrupt(phydev))
+ goto phy_err;
+ } else {
+ /* reschedule state queue work to run as soon as possible */
+ phy_trigger_machine(phydev);
+ }
if (phy_clear_interrupt(phydev))
goto phy_err;
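The new drv->handle_interrupt branch lets a PHY driver take full ownership of its interrupt instead of the core unconditionally kicking the state machine. A driver-side sketch that uses the phy_queue_state_machine() helper exported earlier in this patch; the register and bit names are hypothetical:

#define EXAMPLE_MII_INTSRC		0x15	/* hypothetical vendor interrupt-source register */
#define EXAMPLE_INT_LINK_CHANGED	BIT(1)	/* hypothetical link-change bit */

static int example_handle_interrupt(struct phy_device *phydev)
{
	int irq_status = phy_read(phydev, EXAMPLE_MII_INTSRC);

	if (irq_status < 0)
		return irq_status;	/* non-zero makes phy_interrupt() bail out via phy_err */

	if (irq_status & EXAMPLE_INT_LINK_CHANGED)
		phy_queue_state_machine(phydev, 0);	/* re-run the state machine now */

	return 0;
}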
@@ -799,10 +805,10 @@ static int phy_enable_interrupts(struct phy_device *phydev)
}
/**
- * phy_request_interrupt - request interrupt for a PHY device
+ * phy_request_interrupt - request and enable interrupt for a PHY device
* @phydev: target phy_device struct
*
- * Description: Request the interrupt for the given PHY.
+ * Description: Request and enable the interrupt for the given PHY.
* If this fails, then we set irq to PHY_POLL.
* This should only be called with a valid IRQ number.
*/
@@ -817,11 +823,31 @@ void phy_request_interrupt(struct phy_device *phydev)
phydev_warn(phydev, "Error %d requesting IRQ %d, falling back to polling\n",
err, phydev->irq);
phydev->irq = PHY_POLL;
+ } else {
+ if (phy_enable_interrupts(phydev)) {
+ phydev_warn(phydev, "Can't enable interrupt, falling back to polling\n");
+ phy_free_interrupt(phydev);
+ phydev->irq = PHY_POLL;
+ }
}
}
EXPORT_SYMBOL(phy_request_interrupt);
/**
+ * phy_free_interrupt - disable and free interrupt for a PHY device
+ * @phydev: target phy_device struct
+ *
+ * Description: Disable and free the interrupt for the given PHY.
+ * This should only be called with a valid IRQ number.
+ */
+void phy_free_interrupt(struct phy_device *phydev)
+{
+ phy_disable_interrupts(phydev);
+ free_irq(phydev->irq, phydev);
+}
+EXPORT_SYMBOL(phy_free_interrupt);
+
+/**
* phy_stop - Bring down the PHY link, and stop checking the status
* @phydev: target phy_device struct
*/
@@ -835,9 +861,6 @@ void phy_stop(struct phy_device *phydev)
mutex_lock(&phydev->lock);
- if (phy_interrupt_is_valid(phydev))
- phy_disable_interrupts(phydev);
-
phydev->state = PHY_HALTED;
mutex_unlock(&phydev->lock);
@@ -864,8 +887,6 @@ EXPORT_SYMBOL(phy_stop);
*/
void phy_start(struct phy_device *phydev)
{
- int err;
-
mutex_lock(&phydev->lock);
if (phydev->state != PHY_READY && phydev->state != PHY_HALTED) {
@@ -877,13 +898,6 @@ void phy_start(struct phy_device *phydev)
/* if phy was suspended, bring the physical link up again */
__phy_resume(phydev);
- /* make sure interrupts are enabled for the PHY */
- if (phy_interrupt_is_valid(phydev)) {
- err = phy_enable_interrupts(phydev);
- if (err < 0)
- goto out;
- }
-
phydev->state = PHY_UP;
phy_start_machine(phydev);
@@ -921,20 +935,6 @@ void phy_state_machine(struct work_struct *work)
case PHY_RUNNING:
err = phy_check_link_status(phydev);
break;
- case PHY_FORCING:
- err = genphy_update_link(phydev);
- if (err)
- break;
-
- if (phydev->link) {
- phydev->state = PHY_RUNNING;
- phy_link_up(phydev);
- } else {
- if (0 == phydev->link_timeout--)
- needs_aneg = true;
- phy_link_down(phydev, false);
- }
- break;
case PHY_HALTED:
if (phydev->link) {
phydev->link = 0;
diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
index dcc93a873174..53878908adf4 100644
--- a/drivers/net/phy/phy_device.c
+++ b/drivers/net/phy/phy_device.c
@@ -89,7 +89,7 @@ EXPORT_SYMBOL_GPL(phy_10_100_features_array);
const int phy_basic_t1_features_array[2] = {
ETHTOOL_LINK_MODE_TP_BIT,
- ETHTOOL_LINK_MODE_100baseT_Full_BIT,
+ ETHTOOL_LINK_MODE_100baseT1_Full_BIT,
};
EXPORT_SYMBOL_GPL(phy_basic_t1_features_array);
@@ -948,6 +948,9 @@ int phy_connect_direct(struct net_device *dev, struct phy_device *phydev,
{
int rc;
+ if (!dev)
+ return -EINVAL;
+
rc = phy_attach_direct(dev, phydev, phydev->dev_flags, interface);
if (rc)
return rc;
@@ -1013,7 +1016,7 @@ void phy_disconnect(struct phy_device *phydev)
phy_stop(phydev);
if (phy_interrupt_is_valid(phydev))
- free_irq(phydev->irq, phydev);
+ phy_free_interrupt(phydev);
phydev->adjust_link = NULL;
@@ -1133,6 +1136,44 @@ void phy_attached_print(struct phy_device *phydev, const char *fmt, ...)
}
EXPORT_SYMBOL(phy_attached_print);
+static void phy_sysfs_create_links(struct phy_device *phydev)
+{
+ struct net_device *dev = phydev->attached_dev;
+ int err;
+
+ if (!dev)
+ return;
+
+ err = sysfs_create_link(&phydev->mdio.dev.kobj, &dev->dev.kobj,
+ "attached_dev");
+ if (err)
+ return;
+
+ err = sysfs_create_link_nowarn(&dev->dev.kobj,
+ &phydev->mdio.dev.kobj,
+ "phydev");
+ if (err) {
+ dev_err(&dev->dev, "could not add device link to %s err %d\n",
+ kobject_name(&phydev->mdio.dev.kobj),
+ err);
+ /* non-fatal - some net drivers can use one netdevice
+ * with more than one phy
+ */
+ }
+
+ phydev->sysfs_links = true;
+}
+
+static ssize_t
+phy_standalone_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct phy_device *phydev = to_phy_device(dev);
+
+ return sprintf(buf, "%d\n", !phydev->attached_dev);
+}
+static DEVICE_ATTR_RO(phy_standalone);
+
/**
* phy_attach_direct - attach a network device to a given PHY device pointer
* @dev: network device to attach
@@ -1151,9 +1192,9 @@ EXPORT_SYMBOL(phy_attached_print);
int phy_attach_direct(struct net_device *dev, struct phy_device *phydev,
u32 flags, phy_interface_t interface)
{
- struct module *ndev_owner = dev->dev.parent->driver->owner;
struct mii_bus *bus = phydev->mdio.bus;
struct device *d = &phydev->mdio.dev;
+ struct module *ndev_owner = NULL;
bool using_genphy = false;
int err;
@@ -1162,8 +1203,10 @@ int phy_attach_direct(struct net_device *dev, struct phy_device *phydev,
* our own module->refcnt here, otherwise we would not be able to
* unload later on.
*/
+ if (dev)
+ ndev_owner = dev->dev.parent->driver->owner;
if (ndev_owner != bus->owner && !try_module_get(bus->owner)) {
- dev_err(&dev->dev, "failed to get the bus module\n");
+ phydev_err(phydev, "failed to get the bus module\n");
return -EIO;
}
@@ -1182,7 +1225,7 @@ int phy_attach_direct(struct net_device *dev, struct phy_device *phydev,
}
if (!try_module_get(d->driver->owner)) {
- dev_err(&dev->dev, "failed to get the device driver module\n");
+ phydev_err(phydev, "failed to get the device driver module\n");
err = -EIO;
goto error_put_device;
}
@@ -1203,8 +1246,10 @@ int phy_attach_direct(struct net_device *dev, struct phy_device *phydev,
}
phydev->phy_link_change = phy_link_change;
- phydev->attached_dev = dev;
- dev->phydev = phydev;
+ if (dev) {
+ phydev->attached_dev = dev;
+ dev->phydev = phydev;
+ }
/* Some Ethernet drivers try to connect to a PHY device before
* calling register_netdevice() -> netdev_register_kobject() and
@@ -1216,22 +1261,13 @@ int phy_attach_direct(struct net_device *dev, struct phy_device *phydev,
*/
phydev->sysfs_links = false;
- err = sysfs_create_link(&phydev->mdio.dev.kobj, &dev->dev.kobj,
- "attached_dev");
- if (!err) {
- err = sysfs_create_link_nowarn(&dev->dev.kobj,
- &phydev->mdio.dev.kobj,
- "phydev");
- if (err) {
- dev_err(&dev->dev, "could not add device link to %s err %d\n",
- kobject_name(&phydev->mdio.dev.kobj),
- err);
- /* non-fatal - some net drivers can use one netdevice
- * with more then one phy
- */
- }
+ phy_sysfs_create_links(phydev);
- phydev->sysfs_links = true;
+ if (!phydev->attached_dev) {
+ err = sysfs_create_file(&phydev->mdio.dev.kobj,
+ &dev_attr_phy_standalone.attr);
+ if (err)
+ phydev_err(phydev, "error creating 'phy_standalone' sysfs entry\n");
}
phydev->dev_flags = flags;
@@ -1243,7 +1279,8 @@ int phy_attach_direct(struct net_device *dev, struct phy_device *phydev,
/* Initial carrier state is off as the phy is about to be
* (re)initialized.
*/
- netif_carrier_off(phydev->attached_dev);
+ if (dev)
+ netif_carrier_off(phydev->attached_dev);
/* Do initial configuration here, now that
* we have certain key parameters
@@ -1290,6 +1327,9 @@ struct phy_device *phy_attach(struct net_device *dev, const char *bus_id,
struct device *d;
int rc;
+ if (!dev)
+ return ERR_PTR(-EINVAL);
+
/* Search the list of PHY devices on the mdio bus for the
* PHY with the requested name
*/
@@ -1349,16 +1389,24 @@ EXPORT_SYMBOL_GPL(phy_driver_is_genphy_10g);
void phy_detach(struct phy_device *phydev)
{
struct net_device *dev = phydev->attached_dev;
- struct module *ndev_owner = dev->dev.parent->driver->owner;
+ struct module *ndev_owner = NULL;
struct mii_bus *bus;
if (phydev->sysfs_links) {
- sysfs_remove_link(&dev->dev.kobj, "phydev");
+ if (dev)
+ sysfs_remove_link(&dev->dev.kobj, "phydev");
sysfs_remove_link(&phydev->mdio.dev.kobj, "attached_dev");
}
+
+ if (!phydev->attached_dev)
+ sysfs_remove_file(&phydev->mdio.dev.kobj,
+ &dev_attr_phy_standalone.attr);
+
phy_suspend(phydev);
- phydev->attached_dev->phydev = NULL;
- phydev->attached_dev = NULL;
+ if (dev) {
+ phydev->attached_dev->phydev = NULL;
+ phydev->attached_dev = NULL;
+ }
phydev->phylink = NULL;
phy_led_triggers_unregister(phydev);
@@ -1381,6 +1429,8 @@ void phy_detach(struct phy_device *phydev)
bus = phydev->mdio.bus;
put_device(&phydev->mdio.dev);
+ if (dev)
+ ndev_owner = dev->dev.parent->driver->owner;
if (ndev_owner != bus->owner)
module_put(bus->owner);
@@ -1880,6 +1930,9 @@ int genphy_config_init(struct phy_device *phydev)
if (val & ESTATUS_1000_THALF)
linkmode_set_bit(ETHTOOL_LINK_MODE_1000baseT_Half_BIT,
features);
+ if (val & ESTATUS_1000_XFULL)
+ linkmode_set_bit(ETHTOOL_LINK_MODE_1000baseX_Full_BIT,
+ features);
}
linkmode_and(phydev->supported, phydev->supported, features);
@@ -1931,6 +1984,8 @@ int genphy_read_abilities(struct phy_device *phydev)
phydev->supported, val & ESTATUS_1000_TFULL);
linkmode_mod_bit(ETHTOOL_LINK_MODE_1000baseT_Half_BIT,
phydev->supported, val & ESTATUS_1000_THALF);
+ linkmode_mod_bit(ETHTOOL_LINK_MODE_1000baseX_Full_BIT,
+ phydev->supported, val & ESTATUS_1000_XFULL);
}
return 0;
diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c
index 4c0616ba314d..5d0af041b8f9 100644
--- a/drivers/net/phy/phylink.c
+++ b/drivers/net/phy/phylink.c
@@ -41,6 +41,9 @@ struct phylink {
/* private: */
struct net_device *netdev;
const struct phylink_mac_ops *ops;
+ struct phylink_config *config;
+ struct device *dev;
+ unsigned int old_link_state:1;
unsigned long phylink_disable_state; /* bitmask of disables */
struct phy_device *phydev;
@@ -56,6 +59,7 @@ struct phylink {
phy_interface_t cur_interface;
struct gpio_desc *link_gpio;
+ unsigned int link_irq;
struct timer_list link_poll;
void (*get_fixed_state)(struct net_device *dev,
struct phylink_link_state *s);
@@ -69,6 +73,23 @@ struct phylink {
struct sfp_bus *sfp_bus;
};
+#define phylink_printk(level, pl, fmt, ...) \
+ do { \
+ if ((pl)->config->type == PHYLINK_NETDEV) \
+ netdev_printk(level, (pl)->netdev, fmt, ##__VA_ARGS__); \
+ else if ((pl)->config->type == PHYLINK_DEV) \
+ dev_printk(level, (pl)->dev, fmt, ##__VA_ARGS__); \
+ } while (0)
+
+#define phylink_err(pl, fmt, ...) \
+ phylink_printk(KERN_ERR, pl, fmt, ##__VA_ARGS__)
+#define phylink_warn(pl, fmt, ...) \
+ phylink_printk(KERN_WARNING, pl, fmt, ##__VA_ARGS__)
+#define phylink_info(pl, fmt, ...) \
+ phylink_printk(KERN_INFO, pl, fmt, ##__VA_ARGS__)
+#define phylink_dbg(pl, fmt, ...) \
+ phylink_printk(KERN_DEBUG, pl, fmt, ##__VA_ARGS__)
+
/**
* phylink_set_port_modes() - set the port type modes in the ethtool mask
* @mask: ethtool link mode mask
@@ -115,7 +136,7 @@ static const char *phylink_an_mode_str(unsigned int mode)
static int phylink_validate(struct phylink *pl, unsigned long *supported,
struct phylink_link_state *state)
{
- pl->ops->validate(pl->netdev, supported, state);
+ pl->ops->validate(pl->config, supported, state);
return phylink_is_empty_linkmode(supported) ? -EINVAL : 0;
}
@@ -165,7 +186,7 @@ static int phylink_parse_fixedlink(struct phylink *pl,
ret = fwnode_property_read_u32_array(fwnode, "fixed-link",
NULL, 0);
if (ret != ARRAY_SIZE(prop)) {
- netdev_err(pl->netdev, "broken fixed-link?\n");
+ phylink_err(pl, "broken fixed-link?\n");
return -EINVAL;
}
@@ -184,8 +205,8 @@ static int phylink_parse_fixedlink(struct phylink *pl,
if (pl->link_config.speed > SPEED_1000 &&
pl->link_config.duplex != DUPLEX_FULL)
- netdev_warn(pl->netdev, "fixed link specifies half duplex for %dMbps link?\n",
- pl->link_config.speed);
+ phylink_warn(pl, "fixed link specifies half duplex for %dMbps link?\n",
+ pl->link_config.speed);
bitmap_fill(pl->supported, __ETHTOOL_LINK_MODE_MASK_NBITS);
linkmode_copy(pl->link_config.advertising, pl->supported);
@@ -198,9 +219,9 @@ static int phylink_parse_fixedlink(struct phylink *pl,
if (s) {
__set_bit(s->bit, pl->supported);
} else {
- netdev_warn(pl->netdev, "fixed link %s duplex %dMbps not recognised\n",
- pl->link_config.duplex == DUPLEX_FULL ? "full" : "half",
- pl->link_config.speed);
+ phylink_warn(pl, "fixed link %s duplex %dMbps not recognised\n",
+ pl->link_config.duplex == DUPLEX_FULL ? "full" : "half",
+ pl->link_config.speed);
}
linkmode_and(pl->link_config.advertising, pl->link_config.advertising,
@@ -225,8 +246,8 @@ static int phylink_parse_mode(struct phylink *pl, struct fwnode_handle *fwnode)
if (fwnode_property_read_string(fwnode, "managed", &managed) == 0 &&
strcmp(managed, "in-band-status") == 0) {
if (pl->link_an_mode == MLO_AN_FIXED) {
- netdev_err(pl->netdev,
- "can't use both fixed-link and in-band-status\n");
+ phylink_err(pl,
+ "can't use both fixed-link and in-band-status\n");
return -EINVAL;
}
@@ -273,17 +294,17 @@ static int phylink_parse_mode(struct phylink *pl, struct fwnode_handle *fwnode)
break;
default:
- netdev_err(pl->netdev,
- "incorrect link mode %s for in-band status\n",
- phy_modes(pl->link_config.interface));
+ phylink_err(pl,
+ "incorrect link mode %s for in-band status\n",
+ phy_modes(pl->link_config.interface));
return -EINVAL;
}
linkmode_copy(pl->link_config.advertising, pl->supported);
if (phylink_validate(pl, pl->supported, &pl->link_config)) {
- netdev_err(pl->netdev,
- "failed to validate link configuration for in-band status\n");
+ phylink_err(pl,
+ "failed to validate link configuration for in-band status\n");
return -EINVAL;
}
}
@@ -294,16 +315,16 @@ static int phylink_parse_mode(struct phylink *pl, struct fwnode_handle *fwnode)
static void phylink_mac_config(struct phylink *pl,
const struct phylink_link_state *state)
{
- netdev_dbg(pl->netdev,
- "%s: mode=%s/%s/%s/%s adv=%*pb pause=%02x link=%u an=%u\n",
- __func__, phylink_an_mode_str(pl->link_an_mode),
- phy_modes(state->interface),
- phy_speed_to_str(state->speed),
- phy_duplex_to_str(state->duplex),
- __ETHTOOL_LINK_MODE_MASK_NBITS, state->advertising,
- state->pause, state->link, state->an_enabled);
-
- pl->ops->mac_config(pl->netdev, pl->link_an_mode, state);
+ phylink_dbg(pl,
+ "%s: mode=%s/%s/%s/%s adv=%*pb pause=%02x link=%u an=%u\n",
+ __func__, phylink_an_mode_str(pl->link_an_mode),
+ phy_modes(state->interface),
+ phy_speed_to_str(state->speed),
+ phy_duplex_to_str(state->duplex),
+ __ETHTOOL_LINK_MODE_MASK_NBITS, state->advertising,
+ state->pause, state->link, state->an_enabled);
+
+ pl->ops->mac_config(pl->config, pl->link_an_mode, state);
}
static void phylink_mac_config_up(struct phylink *pl,
@@ -317,12 +338,11 @@ static void phylink_mac_an_restart(struct phylink *pl)
{
if (pl->link_config.an_enabled &&
phy_interface_mode_is_8023z(pl->link_config.interface))
- pl->ops->mac_an_restart(pl->netdev);
+ pl->ops->mac_an_restart(pl->config);
}
static int phylink_get_mac_state(struct phylink *pl, struct phylink_link_state *state)
{
- struct net_device *ndev = pl->netdev;
linkmode_copy(state->advertising, pl->link_config.advertising);
linkmode_zero(state->lp_advertising);
@@ -334,7 +354,7 @@ static int phylink_get_mac_state(struct phylink *pl, struct phylink_link_state *
state->an_complete = 0;
state->link = 1;
- return pl->ops->mac_link_state(ndev, state);
+ return pl->ops->mac_link_state(pl->config, state);
}
/* The fixed state is... fixed except for the link state,
@@ -399,11 +419,43 @@ static const char *phylink_pause_to_str(int pause)
}
}
+static void phylink_mac_link_up(struct phylink *pl,
+ struct phylink_link_state link_state)
+{
+ struct net_device *ndev = pl->netdev;
+
+ pl->cur_interface = link_state.interface;
+ pl->ops->mac_link_up(pl->config, pl->link_an_mode,
+ pl->phy_state.interface,
+ pl->phydev);
+
+ if (ndev)
+ netif_carrier_on(ndev);
+
+ phylink_info(pl,
+ "Link is Up - %s/%s - flow control %s\n",
+ phy_speed_to_str(link_state.speed),
+ phy_duplex_to_str(link_state.duplex),
+ phylink_pause_to_str(link_state.pause));
+}
+
+static void phylink_mac_link_down(struct phylink *pl)
+{
+ struct net_device *ndev = pl->netdev;
+
+ if (ndev)
+ netif_carrier_off(ndev);
+ pl->ops->mac_link_down(pl->config, pl->link_an_mode,
+ pl->cur_interface);
+ phylink_info(pl, "Link is Down\n");
+}
+
static void phylink_resolve(struct work_struct *w)
{
struct phylink *pl = container_of(w, struct phylink, resolve);
struct phylink_link_state link_state;
struct net_device *ndev = pl->netdev;
+ int link_changed;
mutex_lock(&pl->state_mutex);
if (pl->phylink_disable_state) {
@@ -446,25 +498,17 @@ static void phylink_resolve(struct work_struct *w)
}
}
- if (link_state.link != netif_carrier_ok(ndev)) {
- if (!link_state.link) {
- netif_carrier_off(ndev);
- pl->ops->mac_link_down(ndev, pl->link_an_mode,
- pl->cur_interface);
- netdev_info(ndev, "Link is Down\n");
- } else {
- pl->cur_interface = link_state.interface;
- pl->ops->mac_link_up(ndev, pl->link_an_mode,
- pl->cur_interface, pl->phydev);
-
- netif_carrier_on(ndev);
-
- netdev_info(ndev,
- "Link is Up - %s/%s - flow control %s\n",
- phy_speed_to_str(link_state.speed),
- phy_duplex_to_str(link_state.duplex),
- phylink_pause_to_str(link_state.pause));
- }
+ if (pl->netdev)
+ link_changed = (link_state.link != netif_carrier_ok(ndev));
+ else
+ link_changed = (link_state.link != pl->old_link_state);
+
+ if (link_changed) {
+ pl->old_link_state = link_state.link;
+ if (!link_state.link)
+ phylink_mac_link_down(pl);
+ else
+ phylink_mac_link_up(pl, link_state);
}
if (!link_state.link && pl->mac_link_dropped) {
pl->mac_link_dropped = false;
@@ -516,13 +560,12 @@ static int phylink_register_sfp(struct phylink *pl,
if (ret == -ENOENT)
return 0;
- netdev_err(pl->netdev, "unable to parse \"sfp\" node: %d\n",
- ret);
+ phylink_err(pl, "unable to parse \"sfp\" node: %d\n",
+ ret);
return ret;
}
- pl->sfp_bus = sfp_register_upstream(ref.fwnode, pl->netdev, pl,
- &sfp_phylink_ops);
+ pl->sfp_bus = sfp_register_upstream(ref.fwnode, pl, &sfp_phylink_ops);
if (!pl->sfp_bus)
return -ENOMEM;
@@ -543,7 +586,7 @@ static int phylink_register_sfp(struct phylink *pl,
* Returns a pointer to a &struct phylink, or an error-pointer value. Users
* must use IS_ERR() to check for errors from this function.
*/
-struct phylink *phylink_create(struct net_device *ndev,
+struct phylink *phylink_create(struct phylink_config *config,
struct fwnode_handle *fwnode,
phy_interface_t iface,
const struct phylink_mac_ops *ops)
@@ -557,7 +600,17 @@ struct phylink *phylink_create(struct net_device *ndev,
mutex_init(&pl->state_mutex);
INIT_WORK(&pl->resolve, phylink_resolve);
- pl->netdev = ndev;
+
+ pl->config = config;
+ if (config->type == PHYLINK_NETDEV) {
+ pl->netdev = to_net_dev(config->dev);
+ } else if (config->type == PHYLINK_DEV) {
+ pl->dev = config->dev;
+ } else {
+ kfree(pl);
+ return ERR_PTR(-EINVAL);
+ }
+
pl->phy_state.interface = iface;
pl->link_interface = iface;
if (iface == PHY_INTERFACE_MODE_MOCA)
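Since phylink_create() is now keyed off a struct phylink_config rather than a net_device, a MAC driver embeds the config in its private state and declares whether the instance fronts a netdev or a bare device. A sketch of the netdev case; struct example_priv (assumed to embed the phylink_config and a struct phylink pointer) and example_phylink_mac_ops are hypothetical:

static int example_attach_phylink(struct example_priv *priv,
				  struct net_device *netdev,
				  struct device_node *np,
				  phy_interface_t phy_mode)
{
	struct phylink *pl;

	priv->phylink_config.dev = &netdev->dev;	/* the netdev's embedded struct device */
	priv->phylink_config.type = PHYLINK_NETDEV;	/* lets phylink use to_net_dev(config->dev) */

	pl = phylink_create(&priv->phylink_config, of_fwnode_handle(np),
			    phy_mode, &example_phylink_mac_ops);
	if (IS_ERR(pl))
		return PTR_ERR(pl);

	priv->phylink = pl;
	return 0;
}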
@@ -612,7 +665,7 @@ void phylink_destroy(struct phylink *pl)
{
if (pl->sfp_bus)
sfp_unregister_upstream(pl->sfp_bus);
- if (!IS_ERR_OR_NULL(pl->link_gpio))
+ if (pl->link_gpio)
gpiod_put(pl->link_gpio);
cancel_work_sync(&pl->resolve);
@@ -639,10 +692,10 @@ static void phylink_phy_change(struct phy_device *phydev, bool up,
phylink_run_resolve(pl);
- netdev_dbg(pl->netdev, "phy link %s %s/%s/%s\n", up ? "up" : "down",
- phy_modes(phydev->interface),
- phy_speed_to_str(phydev->speed),
- phy_duplex_to_str(phydev->duplex));
+ phylink_dbg(pl, "phy link %s %s/%s/%s\n", up ? "up" : "down",
+ phy_modes(phydev->interface),
+ phy_speed_to_str(phydev->speed),
+ phy_duplex_to_str(phydev->duplex));
}
static int phylink_bringup_phy(struct phylink *pl, struct phy_device *phy)
@@ -675,9 +728,9 @@ static int phylink_bringup_phy(struct phylink *pl, struct phy_device *phy)
phy->phylink = pl;
phy->phy_link_change = phylink_phy_change;
- netdev_info(pl->netdev,
- "PHY [%s] driver [%s]\n", dev_name(&phy->mdio.dev),
- phy->drv->name);
+ phylink_info(pl,
+ "PHY [%s] driver [%s]\n", dev_name(&phy->mdio.dev),
+ phy->drv->name);
mutex_lock(&phy->lock);
mutex_lock(&pl->state_mutex);
@@ -690,10 +743,10 @@ static int phylink_bringup_phy(struct phylink *pl, struct phy_device *phy)
mutex_unlock(&pl->state_mutex);
mutex_unlock(&phy->lock);
- netdev_dbg(pl->netdev,
- "phy: setting supported %*pb advertising %*pb\n",
- __ETHTOOL_LINK_MODE_MASK_NBITS, pl->supported,
- __ETHTOOL_LINK_MODE_MASK_NBITS, phy->advertising);
+ phylink_dbg(pl,
+ "phy: setting supported %*pb advertising %*pb\n",
+ __ETHTOOL_LINK_MODE_MASK_NBITS, pl->supported,
+ __ETHTOOL_LINK_MODE_MASK_NBITS, phy->advertising);
if (phy_interrupt_is_valid(phy))
phy_request_interrupt(phy);
@@ -871,10 +924,19 @@ void phylink_mac_change(struct phylink *pl, bool up)
if (!up)
pl->mac_link_dropped = true;
phylink_run_resolve(pl);
- netdev_dbg(pl->netdev, "mac link %s\n", up ? "up" : "down");
+ phylink_dbg(pl, "mac link %s\n", up ? "up" : "down");
}
EXPORT_SYMBOL_GPL(phylink_mac_change);
+static irqreturn_t phylink_link_handler(int irq, void *data)
+{
+ struct phylink *pl = data;
+
+ phylink_run_resolve(pl);
+
+ return IRQ_HANDLED;
+}
+
/**
* phylink_start() - start a phylink instance
* @pl: a pointer to a &struct phylink returned from phylink_create()
@@ -887,12 +949,13 @@ void phylink_start(struct phylink *pl)
{
ASSERT_RTNL();
- netdev_info(pl->netdev, "configuring for %s/%s link mode\n",
- phylink_an_mode_str(pl->link_an_mode),
- phy_modes(pl->link_config.interface));
+ phylink_info(pl, "configuring for %s/%s link mode\n",
+ phylink_an_mode_str(pl->link_an_mode),
+ phy_modes(pl->link_config.interface));
/* Always set the carrier off */
- netif_carrier_off(pl->netdev);
+ if (pl->netdev)
+ netif_carrier_off(pl->netdev);
/* Apply the link configuration to the MAC when starting. This allows
* a fixed-link to start with the correct parameters, and also
@@ -910,7 +973,22 @@ void phylink_start(struct phylink *pl)
clear_bit(PHYLINK_DISABLE_STOPPED, &pl->phylink_disable_state);
phylink_run_resolve(pl);
- if (pl->link_an_mode == MLO_AN_FIXED && !IS_ERR(pl->link_gpio))
+ if (pl->link_an_mode == MLO_AN_FIXED && pl->link_gpio) {
+ int irq = gpiod_to_irq(pl->link_gpio);
+
+ if (irq > 0) {
+ if (!request_irq(irq, phylink_link_handler,
+ IRQF_TRIGGER_RISING |
+ IRQF_TRIGGER_FALLING,
+ "netdev link", pl))
+ pl->link_irq = irq;
+ else
+ irq = 0;
+ }
+ if (irq <= 0)
+ mod_timer(&pl->link_poll, jiffies + HZ);
+ }
+ if (pl->link_an_mode == MLO_AN_FIXED && pl->get_fixed_state)
mod_timer(&pl->link_poll, jiffies + HZ);
if (pl->sfp_bus)
sfp_upstream_start(pl->sfp_bus);
@@ -936,8 +1014,11 @@ void phylink_stop(struct phylink *pl)
phy_stop(pl->phydev);
if (pl->sfp_bus)
sfp_upstream_stop(pl->sfp_bus);
- if (pl->link_an_mode == MLO_AN_FIXED && !IS_ERR(pl->link_gpio))
- del_timer_sync(&pl->link_poll);
+ del_timer_sync(&pl->link_poll);
+ if (pl->link_irq) {
+ free_irq(pl->link_irq, pl);
+ pl->link_irq = 0;
+ }
phylink_run_resolve_and_disable(pl, PHYLINK_DISABLE_STOPPED);
}
@@ -1239,7 +1320,8 @@ int phylink_ethtool_set_pauseparam(struct phylink *pl,
switch (pl->link_an_mode) {
case MLO_AN_PHY:
/* Silently mark the carrier down, and then trigger a resolve */
- netif_carrier_off(pl->netdev);
+ if (pl->netdev)
+ netif_carrier_off(pl->netdev);
phylink_run_resolve(pl);
break;
@@ -1342,8 +1424,8 @@ EXPORT_SYMBOL_GPL(phylink_ethtool_set_eee);
*
* FIXME: should deal with negotiation state too.
*/
-static int phylink_mii_emul_read(struct net_device *ndev, unsigned int reg,
- struct phylink_link_state *state, bool aneg)
+static int phylink_mii_emul_read(unsigned int reg,
+ struct phylink_link_state *state)
{
struct fixed_phy_status fs;
int val;
@@ -1358,8 +1440,6 @@ static int phylink_mii_emul_read(struct net_device *ndev, unsigned int reg,
if (reg == MII_BMSR) {
if (!state->an_complete)
val &= ~BMSR_ANEGCOMPLETE;
- if (!aneg)
- val &= ~BMSR_ANEGCAPABLE;
}
return val;
}
@@ -1455,8 +1535,7 @@ static int phylink_mii_read(struct phylink *pl, unsigned int phy_id,
case MLO_AN_FIXED:
if (phy_id == 0) {
phylink_get_fixed_state(pl, &state);
- val = phylink_mii_emul_read(pl->netdev, reg, &state,
- true);
+ val = phylink_mii_emul_read(reg, &state);
}
break;
@@ -1469,8 +1548,7 @@ static int phylink_mii_read(struct phylink *pl, unsigned int phy_id,
if (val < 0)
return val;
- val = phylink_mii_emul_read(pl->netdev, reg, &state,
- true);
+ val = phylink_mii_emul_read(reg, &state);
}
break;
}
@@ -1573,6 +1651,20 @@ int phylink_mii_ioctl(struct phylink *pl, struct ifreq *ifr, int cmd)
}
EXPORT_SYMBOL_GPL(phylink_mii_ioctl);
+static void phylink_sfp_attach(void *upstream, struct sfp_bus *bus)
+{
+ struct phylink *pl = upstream;
+
+ pl->netdev->sfp_bus = bus;
+}
+
+static void phylink_sfp_detach(void *upstream, struct sfp_bus *bus)
+{
+ struct phylink *pl = upstream;
+
+ pl->netdev->sfp_bus = NULL;
+}
+
static int phylink_sfp_module_insert(void *upstream,
const struct sfp_eeprom_id *id)
{
@@ -1601,8 +1693,8 @@ static int phylink_sfp_module_insert(void *upstream,
/* Ignore errors if we're expecting a PHY to attach later */
ret = phylink_validate(pl, support, &config);
if (ret) {
- netdev_err(pl->netdev, "validation with support %*pb failed: %d\n",
- __ETHTOOL_LINK_MODE_MASK_NBITS, support, ret);
+ phylink_err(pl, "validation with support %*pb failed: %d\n",
+ __ETHTOOL_LINK_MODE_MASK_NBITS, support, ret);
return ret;
}
@@ -1610,26 +1702,26 @@ static int phylink_sfp_module_insert(void *upstream,
iface = sfp_select_interface(pl->sfp_bus, id, config.advertising);
if (iface == PHY_INTERFACE_MODE_NA) {
- netdev_err(pl->netdev,
- "selection of interface failed, advertisement %*pb\n",
- __ETHTOOL_LINK_MODE_MASK_NBITS, config.advertising);
+ phylink_err(pl,
+ "selection of interface failed, advertisement %*pb\n",
+ __ETHTOOL_LINK_MODE_MASK_NBITS, config.advertising);
return -EINVAL;
}
config.interface = iface;
ret = phylink_validate(pl, support1, &config);
if (ret) {
- netdev_err(pl->netdev, "validation of %s/%s with support %*pb failed: %d\n",
- phylink_an_mode_str(MLO_AN_INBAND),
- phy_modes(config.interface),
- __ETHTOOL_LINK_MODE_MASK_NBITS, support, ret);
+ phylink_err(pl, "validation of %s/%s with support %*pb failed: %d\n",
+ phylink_an_mode_str(MLO_AN_INBAND),
+ phy_modes(config.interface),
+ __ETHTOOL_LINK_MODE_MASK_NBITS, support, ret);
return ret;
}
- netdev_dbg(pl->netdev, "requesting link mode %s/%s with support %*pb\n",
- phylink_an_mode_str(MLO_AN_INBAND),
- phy_modes(config.interface),
- __ETHTOOL_LINK_MODE_MASK_NBITS, support);
+ phylink_dbg(pl, "requesting link mode %s/%s with support %*pb\n",
+ phylink_an_mode_str(MLO_AN_INBAND),
+ phy_modes(config.interface),
+ __ETHTOOL_LINK_MODE_MASK_NBITS, support);
if (phy_interface_mode_is_8023z(iface) && pl->phydev)
return -EINVAL;
@@ -1648,9 +1740,9 @@ static int phylink_sfp_module_insert(void *upstream,
changed = true;
- netdev_info(pl->netdev, "switched to %s/%s link mode\n",
- phylink_an_mode_str(MLO_AN_INBAND),
- phy_modes(config.interface));
+ phylink_info(pl, "switched to %s/%s link mode\n",
+ phylink_an_mode_str(MLO_AN_INBAND),
+ phy_modes(config.interface));
}
pl->link_port = port;
@@ -1694,6 +1786,8 @@ static void phylink_sfp_disconnect_phy(void *upstream)
}
static const struct sfp_upstream_ops sfp_phylink_ops = {
+ .attach = phylink_sfp_attach,
+ .detach = phylink_sfp_detach,
.module_insert = phylink_sfp_module_insert,
.link_up = phylink_sfp_link_up,
.link_down = phylink_sfp_link_down,
diff --git a/drivers/net/phy/sfp-bus.c b/drivers/net/phy/sfp-bus.c
index e9c187946cca..b23fc41896ef 100644
--- a/drivers/net/phy/sfp-bus.c
+++ b/drivers/net/phy/sfp-bus.c
@@ -24,7 +24,6 @@ struct sfp_bus {
const struct sfp_upstream_ops *upstream_ops;
void *upstream;
- struct net_device *netdev;
struct phy_device *phydev;
bool registered;
@@ -351,7 +350,7 @@ static int sfp_register_bus(struct sfp_bus *bus)
bus->socket_ops->attach(bus->sfp);
if (bus->started)
bus->socket_ops->start(bus->sfp);
- bus->netdev->sfp_bus = bus;
+ bus->upstream_ops->attach(bus->upstream, bus);
bus->registered = true;
return 0;
}
@@ -360,8 +359,8 @@ static void sfp_unregister_bus(struct sfp_bus *bus)
{
const struct sfp_upstream_ops *ops = bus->upstream_ops;
- bus->netdev->sfp_bus = NULL;
if (bus->registered) {
+ bus->upstream_ops->detach(bus->upstream, bus);
if (bus->started)
bus->socket_ops->stop(bus->sfp);
bus->socket_ops->detach(bus->sfp);
@@ -443,13 +442,11 @@ static void sfp_upstream_clear(struct sfp_bus *bus)
{
bus->upstream_ops = NULL;
bus->upstream = NULL;
- bus->netdev = NULL;
}
/**
* sfp_register_upstream() - Register the neighbouring device
* @fwnode: firmware node for the SFP bus
- * @ndev: network device associated with the interface
* @upstream: the upstream private data
* @ops: the upstream's &struct sfp_upstream_ops
*
@@ -460,7 +457,7 @@ static void sfp_upstream_clear(struct sfp_bus *bus)
* On error, returns %NULL.
*/
struct sfp_bus *sfp_register_upstream(struct fwnode_handle *fwnode,
- struct net_device *ndev, void *upstream,
+ void *upstream,
const struct sfp_upstream_ops *ops)
{
struct sfp_bus *bus = sfp_bus_get(fwnode);
@@ -470,7 +467,6 @@ struct sfp_bus *sfp_register_upstream(struct fwnode_handle *fwnode,
rtnl_lock();
bus->upstream_ops = ops;
bus->upstream = upstream;
- bus->netdev = ndev;
if (bus->sfp) {
ret = sfp_register_bus(bus);
@@ -592,7 +588,7 @@ struct sfp_bus *sfp_register_socket(struct device *dev, struct sfp *sfp,
bus->sfp = sfp;
bus->socket_ops = ops;
- if (bus->netdev) {
+ if (bus->upstream_ops) {
ret = sfp_register_bus(bus);
if (ret)
sfp_socket_clear(bus);
@@ -612,7 +608,7 @@ EXPORT_SYMBOL_GPL(sfp_register_socket);
void sfp_unregister_socket(struct sfp_bus *bus)
{
rtnl_lock();
- if (bus->netdev)
+ if (bus->upstream_ops)
sfp_unregister_bus(bus);
sfp_socket_clear(bus);
rtnl_unlock();
diff --git a/drivers/net/phy/sfp.c b/drivers/net/phy/sfp.c
index 71812be0ac64..2d816aadea79 100644
--- a/drivers/net/phy/sfp.c
+++ b/drivers/net/phy/sfp.c
@@ -1,4 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
+#include <linux/acpi.h>
#include <linux/ctype.h>
#include <linux/delay.h>
#include <linux/gpio/consumer.h>
@@ -184,12 +185,14 @@ struct sfp {
int (*write)(struct sfp *, bool, u8, void *, size_t);
struct gpio_desc *gpio[GPIO_MAX];
+ int gpio_irq[GPIO_MAX];
bool attached;
+ struct mutex st_mutex; /* Protects state */
unsigned int state;
struct delayed_work poll;
struct delayed_work timeout;
- struct mutex sm_mutex;
+ struct mutex sm_mutex; /* Protects state machine */
unsigned char sm_mod_state;
unsigned char sm_dev_state;
unsigned short sm_state;
@@ -1719,6 +1722,7 @@ static void sfp_check_state(struct sfp *sfp)
{
unsigned int state, i, changed;
+ mutex_lock(&sfp->st_mutex);
state = sfp_get_state(sfp);
changed = state ^ sfp->state;
changed &= SFP_F_PRESENT | SFP_F_LOS | SFP_F_TX_FAULT;
@@ -1744,6 +1748,7 @@ static void sfp_check_state(struct sfp *sfp)
sfp_sm_event(sfp, state & SFP_F_LOS ?
SFP_E_LOS_HIGH : SFP_E_LOS_LOW);
rtnl_unlock();
+ mutex_unlock(&sfp->st_mutex);
}
static irqreturn_t sfp_irq(int irq, void *data)
@@ -1774,6 +1779,7 @@ static struct sfp *sfp_alloc(struct device *dev)
sfp->dev = dev;
mutex_init(&sfp->sm_mutex);
+ mutex_init(&sfp->st_mutex);
INIT_DELAYED_WORK(&sfp->poll, sfp_poll);
INIT_DELAYED_WORK(&sfp->timeout, sfp_timeout);
@@ -1798,9 +1804,10 @@ static void sfp_cleanup(void *data)
static int sfp_probe(struct platform_device *pdev)
{
const struct sff_data *sff;
+ struct i2c_adapter *i2c;
struct sfp *sfp;
bool poll = false;
- int irq, err, i;
+ int err, i;
sfp = sfp_alloc(&pdev->dev);
if (IS_ERR(sfp))
@@ -1817,7 +1824,6 @@ static int sfp_probe(struct platform_device *pdev)
if (pdev->dev.of_node) {
struct device_node *node = pdev->dev.of_node;
const struct of_device_id *id;
- struct i2c_adapter *i2c;
struct device_node *np;
id = of_match_node(sfp_of_match, node);
@@ -1834,14 +1840,32 @@ static int sfp_probe(struct platform_device *pdev)
i2c = of_find_i2c_adapter_by_node(np);
of_node_put(np);
- if (!i2c)
- return -EPROBE_DEFER;
-
- err = sfp_i2c_configure(sfp, i2c);
- if (err < 0) {
- i2c_put_adapter(i2c);
- return err;
+ } else if (has_acpi_companion(&pdev->dev)) {
+ struct acpi_device *adev = ACPI_COMPANION(&pdev->dev);
+ struct fwnode_handle *fw = acpi_fwnode_handle(adev);
+ struct fwnode_reference_args args;
+ struct acpi_handle *acpi_handle;
+ int ret;
+
+ ret = acpi_node_get_property_reference(fw, "i2c-bus", 0, &args);
+ if (ret || !is_acpi_device_node(args.fwnode)) {
+ dev_err(&pdev->dev, "missing 'i2c-bus' property\n");
+ return -ENODEV;
}
+
+ acpi_handle = ACPI_HANDLE_FWNODE(args.fwnode);
+ i2c = i2c_acpi_find_adapter_by_handle(acpi_handle);
+ } else {
+ return -EINVAL;
+ }
+
+ if (!i2c)
+ return -EPROBE_DEFER;
+
+ err = sfp_i2c_configure(sfp, i2c);
+ if (err < 0) {
+ i2c_put_adapter(i2c);
+ return err;
}
for (i = 0; i < GPIO_MAX; i++)
@@ -1882,19 +1906,22 @@ static int sfp_probe(struct platform_device *pdev)
if (gpio_flags[i] != GPIOD_IN || !sfp->gpio[i])
continue;
- irq = gpiod_to_irq(sfp->gpio[i]);
- if (!irq) {
+ sfp->gpio_irq[i] = gpiod_to_irq(sfp->gpio[i]);
+ if (!sfp->gpio_irq[i]) {
poll = true;
continue;
}
- err = devm_request_threaded_irq(sfp->dev, irq, NULL, sfp_irq,
+ err = devm_request_threaded_irq(sfp->dev, sfp->gpio_irq[i],
+ NULL, sfp_irq,
IRQF_ONESHOT |
IRQF_TRIGGER_RISING |
IRQF_TRIGGER_FALLING,
dev_name(sfp->dev), sfp);
- if (err)
+ if (err) {
+ sfp->gpio_irq[i] = 0;
poll = true;
+ }
}
if (poll)
@@ -1925,9 +1952,26 @@ static int sfp_remove(struct platform_device *pdev)
return 0;
}
+static void sfp_shutdown(struct platform_device *pdev)
+{
+ struct sfp *sfp = platform_get_drvdata(pdev);
+ int i;
+
+ for (i = 0; i < GPIO_MAX; i++) {
+ if (!sfp->gpio_irq[i])
+ continue;
+
+ devm_free_irq(sfp->dev, sfp->gpio_irq[i], sfp);
+ }
+
+ cancel_delayed_work_sync(&sfp->poll);
+ cancel_delayed_work_sync(&sfp->timeout);
+}
+
static struct platform_driver sfp_driver = {
.probe = sfp_probe,
.remove = sfp_remove,
+ .shutdown = sfp_shutdown,
.driver = {
.name = "sfp",
.of_match_table = sfp_of_match,
diff --git a/drivers/net/plip/plip.c b/drivers/net/plip/plip.c
index 8ac33ca9ac3a..e89cdebae6f1 100644
--- a/drivers/net/plip/plip.c
+++ b/drivers/net/plip/plip.c
@@ -1008,7 +1008,7 @@ plip_rewrite_address(const struct net_device *dev, struct ethhdr *eth)
in_dev = __in_dev_get_rcu(dev);
if (in_dev) {
/* Any address will do - we take the first */
- const struct in_ifaddr *ifa = in_dev->ifa_list;
+ const struct in_ifaddr *ifa = rcu_dereference(in_dev->ifa_list);
if (ifa) {
memcpy(eth->h_source, dev->dev_addr, ETH_ALEN);
memset(eth->h_dest, 0xfc, 2);
@@ -1103,7 +1103,7 @@ plip_open(struct net_device *dev)
/* Any address will do - we take the first. We already
have the first two bytes filled with 0xfc, from
plip_init_dev(). */
- struct in_ifaddr *ifa=in_dev->ifa_list;
+ const struct in_ifaddr *ifa = rcu_dereference(in_dev->ifa_list);
if (ifa != NULL) {
memcpy(dev->dev_addr+2, &ifa->ifa_local, 4);
}
diff --git a/drivers/net/tap.c b/drivers/net/tap.c
index 8e01390c738e..dd614c2cd994 100644
--- a/drivers/net/tap.c
+++ b/drivers/net/tap.c
@@ -520,8 +520,7 @@ static int tap_open(struct inode *inode, struct file *file)
goto err;
}
- RCU_INIT_POINTER(q->sock.wq, &q->wq);
- init_waitqueue_head(&q->wq.wait);
+ init_waitqueue_head(&q->sock.wq.wait);
q->sock.type = SOCK_RAW;
q->sock.state = SS_CONNECTED;
q->sock.file = file;
@@ -579,7 +578,7 @@ static __poll_t tap_poll(struct file *file, poll_table *wait)
goto out;
mask = 0;
- poll_wait(file, &q->wq.wait, wait);
+ poll_wait(file, &q->sock.wq.wait, wait);
if (!ptr_ring_empty(&q->ring))
mask |= EPOLLIN | EPOLLRDNORM;
diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c
index 36916bf51ee6..abfa0da9bbd2 100644
--- a/drivers/net/team/team.c
+++ b/drivers/net/team/team.c
@@ -2054,9 +2054,34 @@ static void team_ethtool_get_drvinfo(struct net_device *dev,
strlcpy(drvinfo->version, UTS_RELEASE, sizeof(drvinfo->version));
}
+static int team_ethtool_get_link_ksettings(struct net_device *dev,
+ struct ethtool_link_ksettings *cmd)
+{
+ struct team *team = netdev_priv(dev);
+ unsigned long speed = 0;
+ struct team_port *port;
+
+ cmd->base.duplex = DUPLEX_UNKNOWN;
+ cmd->base.port = PORT_OTHER;
+
+ list_for_each_entry(port, &team->port_list, list) {
+ if (team_port_txable(port)) {
+ if (port->state.speed != SPEED_UNKNOWN)
+ speed += port->state.speed;
+ if (cmd->base.duplex == DUPLEX_UNKNOWN &&
+ port->state.duplex != DUPLEX_UNKNOWN)
+ cmd->base.duplex = port->state.duplex;
+ }
+ }
+ cmd->base.speed = speed ? : SPEED_UNKNOWN;
+
+ return 0;
+}
+
static const struct ethtool_ops team_ethtool_ops = {
.get_drvinfo = team_ethtool_get_drvinfo,
.get_link = ethtool_op_get_link,
+ .get_link_ksettings = team_ethtool_get_link_ksettings,
};
/***********************
diff --git a/drivers/net/tun.c b/drivers/net/tun.c
index d7c55e0fa8f4..3d443597bd04 100644
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -160,7 +160,6 @@ struct tun_pcpu_stats {
struct tun_file {
struct sock sk;
struct socket socket;
- struct socket_wq wq;
struct tun_struct __rcu *tun;
struct fasync_struct *fasync;
/* only used for fasnyc */
@@ -2165,7 +2164,7 @@ static void *tun_ring_recv(struct tun_file *tfile, int noblock, int *err)
goto out;
}
- add_wait_queue(&tfile->wq.wait, &wait);
+ add_wait_queue(&tfile->socket.wq.wait, &wait);
while (1) {
set_current_state(TASK_INTERRUPTIBLE);
@@ -2185,7 +2184,7 @@ static void *tun_ring_recv(struct tun_file *tfile, int noblock, int *err)
}
__set_current_state(TASK_RUNNING);
- remove_wait_queue(&tfile->wq.wait, &wait);
+ remove_wait_queue(&tfile->socket.wq.wait, &wait);
out:
*err = error;
@@ -3415,8 +3414,7 @@ static int tun_chr_open(struct inode *inode, struct file * file)
tfile->flags = 0;
tfile->ifindex = 0;
- init_waitqueue_head(&tfile->wq.wait);
- RCU_INIT_POINTER(tfile->socket.wq, &tfile->wq);
+ init_waitqueue_head(&tfile->socket.wq.wait);
tfile->socket.file = file;
tfile->socket.ops = &tun_socket_ops;
diff --git a/drivers/net/usb/asix_devices.c b/drivers/net/usb/asix_devices.c
index c9bc96310ed4..ef548beba684 100644
--- a/drivers/net/usb/asix_devices.c
+++ b/drivers/net/usb/asix_devices.c
@@ -226,7 +226,7 @@ static void asix_phy_reset(struct usbnet *dev, unsigned int reset_bits)
static int ax88172_bind(struct usbnet *dev, struct usb_interface *intf)
{
int ret = 0;
- u8 buf[ETH_ALEN];
+ u8 buf[ETH_ALEN] = {0};
int i;
unsigned long gpio_bits = dev->driver_info->data;
@@ -677,7 +677,7 @@ static int asix_resume(struct usb_interface *intf)
static int ax88772_bind(struct usbnet *dev, struct usb_interface *intf)
{
int ret, i;
- u8 buf[ETH_ALEN], chipcode = 0;
+ u8 buf[ETH_ALEN] = {0}, chipcode = 0;
u32 phyid;
struct asix_common_private *priv;
@@ -1061,7 +1061,7 @@ static const struct net_device_ops ax88178_netdev_ops = {
static int ax88178_bind(struct usbnet *dev, struct usb_interface *intf)
{
int ret;
- u8 buf[ETH_ALEN];
+ u8 buf[ETH_ALEN] = {0};
usbnet_get_endpoints(dev,intf);
diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
index e0dcb681cfe5..39e0768d734d 100644
--- a/drivers/net/usb/r8152.c
+++ b/drivers/net/usb/r8152.c
@@ -28,7 +28,7 @@
#define NETNEXT_VERSION "09"
/* Information for net */
-#define NET_VERSION "9"
+#define NET_VERSION "10"
#define DRIVER_VERSION "v1." NETNEXT_VERSION "." NET_VERSION
#define DRIVER_AUTHOR "Realtek linux nic maintainers <nic_swsd@realtek.com>"
@@ -53,6 +53,9 @@
#define PAL_BDC_CR 0xd1a0
#define PLA_TEREDO_TIMER 0xd2cc
#define PLA_REALWOW_TIMER 0xd2e8
+#define PLA_SUSPEND_FLAG 0xd38a
+#define PLA_INDICATE_FALG 0xd38c
+#define PLA_EXTRA_STATUS 0xd398
#define PLA_EFUSE_DATA 0xdd00
#define PLA_EFUSE_CMD 0xdd02
#define PLA_LEDSEL 0xdd90
@@ -336,6 +339,15 @@
/* PLA_BOOT_CTRL */
#define AUTOLOAD_DONE 0x0002
+/* PLA_SUSPEND_FLAG */
+#define LINK_CHG_EVENT BIT(0)
+
+/* PLA_INDICATE_FALG */
+#define UPCOMING_RUNTIME_D3 BIT(0)
+
+/* PLA_EXTRA_STATUS */
+#define LINK_CHANGE_FLAG BIT(8)
+
/* USB_USB2PHY */
#define USB2PHY_SUSPEND 0x0001
#define USB2PHY_L1 0x0002
@@ -813,6 +825,14 @@ int set_registers(struct r8152 *tp, u16 value, u16 index, u16 size, void *data)
return ret;
}
+static void rtl_set_unplug(struct r8152 *tp)
+{
+ if (tp->udev->state == USB_STATE_NOTATTACHED) {
+ set_bit(RTL8152_UNPLUG, &tp->flags);
+ smp_mb__after_atomic();
+ }
+}
+
static int generic_ocp_read(struct r8152 *tp, u16 index, u16 size,
void *data, u16 type)
{
@@ -851,7 +871,7 @@ static int generic_ocp_read(struct r8152 *tp, u16 index, u16 size,
}
if (ret == -ENODEV)
- set_bit(RTL8152_UNPLUG, &tp->flags);
+ rtl_set_unplug(tp);
return ret;
}
@@ -921,7 +941,7 @@ static int generic_ocp_write(struct r8152 *tp, u16 index, u16 byteen,
error1:
if (ret == -ENODEV)
- set_bit(RTL8152_UNPLUG, &tp->flags);
+ rtl_set_unplug(tp);
return ret;
}
@@ -1309,7 +1329,7 @@ static void read_bulk_callback(struct urb *urb)
napi_schedule(&tp->napi);
return;
case -ESHUTDOWN:
- set_bit(RTL8152_UNPLUG, &tp->flags);
+ rtl_set_unplug(tp);
netif_device_detach(tp->netdev);
return;
case -ENOENT:
@@ -1429,7 +1449,7 @@ static void intr_callback(struct urb *urb)
resubmit:
res = usb_submit_urb(urb, GFP_ATOMIC);
if (res == -ENODEV) {
- set_bit(RTL8152_UNPLUG, &tp->flags);
+ rtl_set_unplug(tp);
netif_device_detach(tp->netdev);
} else if (res) {
netif_err(tp, intr, tp->netdev,
@@ -2024,7 +2044,7 @@ static void tx_bottom(struct r8152 *tp)
struct net_device *netdev = tp->netdev;
if (res == -ENODEV) {
- set_bit(RTL8152_UNPLUG, &tp->flags);
+ rtl_set_unplug(tp);
netif_device_detach(netdev);
} else {
struct net_device_stats *stats = &netdev->stats;
@@ -2098,7 +2118,7 @@ int r8152_submit_rx(struct r8152 *tp, struct rx_agg *agg, gfp_t mem_flags)
ret = usb_submit_urb(agg->urb, mem_flags);
if (ret == -ENODEV) {
- set_bit(RTL8152_UNPLUG, &tp->flags);
+ rtl_set_unplug(tp);
netif_device_detach(tp->netdev);
} else if (ret) {
struct urb *urb = agg->urb;
@@ -2355,6 +2375,12 @@ static int rtl_stop_rx(struct r8152 *tp)
return 0;
}
+static inline void r8153b_rx_agg_chg_indicate(struct r8152 *tp)
+{
+ ocp_write_byte(tp, MCU_TYPE_USB, USB_UPT_RXDMA_OWN,
+ OWN_UPDATE | OWN_CLEAR);
+}
+
static int rtl_enable(struct r8152 *tp)
{
u32 ocp_data;
@@ -2365,6 +2391,15 @@ static int rtl_enable(struct r8152 *tp)
ocp_data |= CR_RE | CR_TE;
ocp_write_byte(tp, MCU_TYPE_PLA, PLA_CR, ocp_data);
+ switch (tp->version) {
+ case RTL_VER_08:
+ case RTL_VER_09:
+ r8153b_rx_agg_chg_indicate(tp);
+ break;
+ default:
+ break;
+ }
+
rxdy_gated_en(tp, false);
return 0;
@@ -2381,12 +2416,6 @@ static int rtl8152_enable(struct r8152 *tp)
return rtl_enable(tp);
}
-static inline void r8153b_rx_agg_chg_indicate(struct r8152 *tp)
-{
- ocp_write_byte(tp, MCU_TYPE_USB, USB_UPT_RXDMA_OWN,
- OWN_UPDATE | OWN_CLEAR);
-}
-
static void r8153_set_rx_early_timeout(struct r8152 *tp)
{
u32 ocp_data = tp->coalesce / 8;
@@ -2409,7 +2438,6 @@ static void r8153_set_rx_early_timeout(struct r8152 *tp)
128 / 8);
ocp_write_word(tp, MCU_TYPE_USB, USB_RX_EXTRA_AGGR_TMR,
ocp_data);
- r8153b_rx_agg_chg_indicate(tp);
break;
default:
@@ -2433,7 +2461,6 @@ static void r8153_set_rx_early_size(struct r8152 *tp)
case RTL_VER_09:
ocp_write_word(tp, MCU_TYPE_USB, USB_RX_EARLY_SIZE,
ocp_data / 8);
- r8153b_rx_agg_chg_indicate(tp);
break;
default:
WARN_ON_ONCE(1);
@@ -2806,20 +2833,24 @@ static void r8153b_power_cut_en(struct r8152 *tp, bool enable)
ocp_write_word(tp, MCU_TYPE_USB, USB_MISC_0, ocp_data);
}
-static void r8153b_queue_wake(struct r8152 *tp, bool enable)
+static void r8153_queue_wake(struct r8152 *tp, bool enable)
{
u32 ocp_data;
- ocp_data = ocp_read_byte(tp, MCU_TYPE_PLA, 0xd38a);
+ ocp_data = ocp_read_byte(tp, MCU_TYPE_PLA, PLA_INDICATE_FALG);
if (enable)
- ocp_data |= BIT(0);
+ ocp_data |= UPCOMING_RUNTIME_D3;
else
- ocp_data &= ~BIT(0);
- ocp_write_byte(tp, MCU_TYPE_PLA, 0xd38a, ocp_data);
+ ocp_data &= ~UPCOMING_RUNTIME_D3;
+ ocp_write_byte(tp, MCU_TYPE_PLA, PLA_INDICATE_FALG, ocp_data);
+
+ ocp_data = ocp_read_byte(tp, MCU_TYPE_PLA, PLA_SUSPEND_FLAG);
+ ocp_data &= ~LINK_CHG_EVENT;
+ ocp_write_byte(tp, MCU_TYPE_PLA, PLA_SUSPEND_FLAG, ocp_data);
- ocp_data = ocp_read_byte(tp, MCU_TYPE_PLA, 0xd38c);
- ocp_data &= ~BIT(0);
- ocp_write_byte(tp, MCU_TYPE_PLA, 0xd38c, ocp_data);
+ ocp_data = ocp_read_word(tp, MCU_TYPE_PLA, PLA_EXTRA_STATUS);
+ ocp_data &= ~LINK_CHANGE_FLAG;
+ ocp_write_word(tp, MCU_TYPE_PLA, PLA_EXTRA_STATUS, ocp_data);
}
static bool rtl_can_wakeup(struct r8152 *tp)
@@ -2887,14 +2918,14 @@ static void rtl8153_runtime_enable(struct r8152 *tp, bool enable)
static void rtl8153b_runtime_enable(struct r8152 *tp, bool enable)
{
if (enable) {
- r8153b_queue_wake(tp, true);
+ r8153_queue_wake(tp, true);
r8153b_u1u2en(tp, false);
r8153_u2p3en(tp, false);
rtl_runtime_suspend_enable(tp, true);
r8153b_ups_en(tp, true);
} else {
r8153b_ups_en(tp, false);
- r8153b_queue_wake(tp, false);
+ r8153_queue_wake(tp, false);
rtl_runtime_suspend_enable(tp, false);
r8153_u2p3en(tp, true);
r8153b_u1u2en(tp, true);
@@ -4221,7 +4252,7 @@ static void r8153b_init(struct r8152 *tp)
r8153b_power_cut_en(tp, false);
r8153b_ups_en(tp, false);
- r8153b_queue_wake(tp, false);
+ r8153_queue_wake(tp, false);
rtl_runtime_suspend_enable(tp, false);
r8153b_u1u2en(tp, true);
usb_enable_lpm(tp->udev);
@@ -4903,8 +4934,17 @@ static int rtl8152_set_coalesce(struct net_device *netdev,
if (tp->coalesce != coalesce->rx_coalesce_usecs) {
tp->coalesce = coalesce->rx_coalesce_usecs;
- if (netif_running(tp->netdev) && netif_carrier_ok(netdev))
- r8153_set_rx_early_timeout(tp);
+ if (netif_running(netdev) && netif_carrier_ok(netdev)) {
+ netif_stop_queue(netdev);
+ napi_disable(&tp->napi);
+ tp->rtl_ops.disable(tp);
+ tp->rtl_ops.enable(tp);
+ rtl_start_rx(tp);
+ clear_bit(RTL8152_SET_RX_MODE, &tp->flags);
+ _rtl8152_set_rx_mode(netdev);
+ napi_enable(&tp->napi);
+ netif_wake_queue(netdev);
+ }
}
mutex_unlock(&tp->control);
@@ -5323,10 +5363,7 @@ static void rtl8152_disconnect(struct usb_interface *intf)
usb_set_intfdata(intf, NULL);
if (tp) {
- struct usb_device *udev = tp->udev;
-
- if (udev->state == USB_STATE_NOTATTACHED)
- set_bit(RTL8152_UNPLUG, &tp->flags);
+ rtl_set_unplug(tp);
netif_napi_del(&tp->napi);
unregister_netdev(tp->netdev);
diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 52110e54e621..9f3c839f9e5f 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -38,6 +38,8 @@
#define VETH_XDP_TX BIT(0)
#define VETH_XDP_REDIR BIT(1)
+#define VETH_XDP_TX_BULK_SIZE 16
+
struct veth_rq_stats {
u64 xdp_packets;
u64 xdp_bytes;
@@ -64,6 +66,11 @@ struct veth_priv {
unsigned int requested_headroom;
};
+struct veth_xdp_tx_bq {
+ struct xdp_frame *q[VETH_XDP_TX_BULK_SIZE];
+ unsigned int count;
+};
+
/*
* ethtool interface
*/
@@ -442,13 +449,30 @@ drop:
return ret;
}
-static void veth_xdp_flush(struct net_device *dev)
+static void veth_xdp_flush_bq(struct net_device *dev, struct veth_xdp_tx_bq *bq)
+{
+ int sent, i, err = 0;
+
+ sent = veth_xdp_xmit(dev, bq->count, bq->q, 0);
+ if (sent < 0) {
+ err = sent;
+ sent = 0;
+ for (i = 0; i < bq->count; i++)
+ xdp_return_frame(bq->q[i]);
+ }
+ trace_xdp_bulk_tx(dev, sent, bq->count - sent, err);
+
+ bq->count = 0;
+}
+
+static void veth_xdp_flush(struct net_device *dev, struct veth_xdp_tx_bq *bq)
{
struct veth_priv *rcv_priv, *priv = netdev_priv(dev);
struct net_device *rcv;
struct veth_rq *rq;
rcu_read_lock();
+ veth_xdp_flush_bq(dev, bq);
rcv = rcu_dereference(priv->peer);
if (unlikely(!rcv))
goto out;
@@ -464,19 +488,26 @@ out:
rcu_read_unlock();
}
-static int veth_xdp_tx(struct net_device *dev, struct xdp_buff *xdp)
+static int veth_xdp_tx(struct net_device *dev, struct xdp_buff *xdp,
+ struct veth_xdp_tx_bq *bq)
{
struct xdp_frame *frame = convert_to_xdp_frame(xdp);
if (unlikely(!frame))
return -EOVERFLOW;
- return veth_xdp_xmit(dev, 1, &frame, 0);
+ if (unlikely(bq->count == VETH_XDP_TX_BULK_SIZE))
+ veth_xdp_flush_bq(dev, bq);
+
+ bq->q[bq->count++] = frame;
+
+ return 0;
}
static struct sk_buff *veth_xdp_rcv_one(struct veth_rq *rq,
struct xdp_frame *frame,
- unsigned int *xdp_xmit)
+ unsigned int *xdp_xmit,
+ struct veth_xdp_tx_bq *bq)
{
void *hard_start = frame->data - frame->headroom;
void *head = hard_start - sizeof(struct xdp_frame);
@@ -509,7 +540,7 @@ static struct sk_buff *veth_xdp_rcv_one(struct veth_rq *rq,
orig_frame = *frame;
xdp.data_hard_start = head;
xdp.rxq->mem = frame->mem;
- if (unlikely(veth_xdp_tx(rq->dev, &xdp) < 0)) {
+ if (unlikely(veth_xdp_tx(rq->dev, &xdp, bq) < 0)) {
trace_xdp_exception(rq->dev, xdp_prog, act);
frame = &orig_frame;
goto err_xdp;
@@ -547,6 +578,7 @@ static struct sk_buff *veth_xdp_rcv_one(struct veth_rq *rq,
goto err;
}
+ xdp_release_frame(frame);
xdp_scrub_frame(frame);
skb->protocol = eth_type_trans(skb, rq->dev);
err:
@@ -559,7 +591,8 @@ xdp_xmit:
}
static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq, struct sk_buff *skb,
- unsigned int *xdp_xmit)
+ unsigned int *xdp_xmit,
+ struct veth_xdp_tx_bq *bq)
{
u32 pktlen, headroom, act, metalen;
void *orig_data, *orig_data_end;
@@ -635,7 +668,7 @@ static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq, struct sk_buff *skb,
get_page(virt_to_page(xdp.data));
consume_skb(skb);
xdp.rxq->mem = rq->xdp_mem;
- if (unlikely(veth_xdp_tx(rq->dev, &xdp) < 0)) {
+ if (unlikely(veth_xdp_tx(rq->dev, &xdp, bq) < 0)) {
trace_xdp_exception(rq->dev, xdp_prog, act);
goto err_xdp;
}
@@ -690,7 +723,8 @@ xdp_xmit:
return NULL;
}
-static int veth_xdp_rcv(struct veth_rq *rq, int budget, unsigned int *xdp_xmit)
+static int veth_xdp_rcv(struct veth_rq *rq, int budget, unsigned int *xdp_xmit,
+ struct veth_xdp_tx_bq *bq)
{
int i, done = 0, drops = 0, bytes = 0;
@@ -706,11 +740,11 @@ static int veth_xdp_rcv(struct veth_rq *rq, int budget, unsigned int *xdp_xmit)
struct xdp_frame *frame = veth_ptr_to_xdp(ptr);
bytes += frame->len;
- skb = veth_xdp_rcv_one(rq, frame, &xdp_xmit_one);
+ skb = veth_xdp_rcv_one(rq, frame, &xdp_xmit_one, bq);
} else {
skb = ptr;
bytes += skb->len;
- skb = veth_xdp_rcv_skb(rq, skb, &xdp_xmit_one);
+ skb = veth_xdp_rcv_skb(rq, skb, &xdp_xmit_one, bq);
}
*xdp_xmit |= xdp_xmit_one;
@@ -736,10 +770,13 @@ static int veth_poll(struct napi_struct *napi, int budget)
struct veth_rq *rq =
container_of(napi, struct veth_rq, xdp_napi);
unsigned int xdp_xmit = 0;
+ struct veth_xdp_tx_bq bq;
int done;
+ bq.count = 0;
+
xdp_set_return_frame_no_direct();
- done = veth_xdp_rcv(rq, budget, &xdp_xmit);
+ done = veth_xdp_rcv(rq, budget, &xdp_xmit, &bq);
if (done < budget && napi_complete_done(napi, done)) {
/* Write rx_notify_masked before reading ptr_ring */
@@ -751,7 +788,7 @@ static int veth_poll(struct napi_struct *napi, int budget)
}
if (xdp_xmit & VETH_XDP_TX)
- veth_xdp_flush(rq->dev);
+ veth_xdp_flush(rq->dev, &bq);
if (xdp_xmit & VETH_XDP_REDIR)
xdp_do_flush_map();
xdp_clear_return_frame_no_direct();
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 0d4115c9e20b..4f3de0ac8b0b 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -26,7 +26,7 @@
static int napi_weight = NAPI_POLL_WEIGHT;
module_param(napi_weight, int, 0444);
-static bool csum = true, gso = true, napi_tx;
+static bool csum = true, gso = true, napi_tx = true;
module_param(csum, bool, 0444);
module_param(gso, bool, 0444);
module_param(napi_tx, bool, 0644);
diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c
index 89984fcab01e..3f48f05dd2a6 100644
--- a/drivers/net/vmxnet3/vmxnet3_drv.c
+++ b/drivers/net/vmxnet3/vmxnet3_drv.c
@@ -3247,6 +3247,7 @@ vmxnet3_probe_device(struct pci_dev *pdev,
.ndo_start_xmit = vmxnet3_xmit_frame,
.ndo_set_mac_address = vmxnet3_set_mac_addr,
.ndo_change_mtu = vmxnet3_change_mtu,
+ .ndo_fix_features = vmxnet3_fix_features,
.ndo_set_features = vmxnet3_set_features,
.ndo_get_stats64 = vmxnet3_get_stats64,
.ndo_tx_timeout = vmxnet3_tx_timeout,
@@ -3651,13 +3652,19 @@ vmxnet3_suspend(struct device *device)
}
if (adapter->wol & WAKE_ARP) {
- in_dev = in_dev_get(netdev);
- if (!in_dev)
+ rcu_read_lock();
+
+ in_dev = __in_dev_get_rcu(netdev);
+ if (!in_dev) {
+ rcu_read_unlock();
goto skip_arp;
+ }
- ifa = (struct in_ifaddr *)in_dev->ifa_list;
- if (!ifa)
+ ifa = rcu_dereference(in_dev->ifa_list);
+ if (!ifa) {
+ rcu_read_unlock();
goto skip_arp;
+ }
pmConf->filters[i].patternSize = ETH_HLEN + /* Ethernet header*/
sizeof(struct arphdr) + /* ARP header */
@@ -3677,7 +3684,9 @@ vmxnet3_suspend(struct device *device)
/* The Unicast IPv4 address in 'tip' field. */
arpreq += 2 * ETH_ALEN + sizeof(u32);
- *(u32 *)arpreq = ifa->ifa_address;
+ *(__be32 *)arpreq = ifa->ifa_address;
+
+ rcu_read_unlock();
/* The mask for the relevant bits. */
pmConf->filters[i].mask[0] = 0x00;
@@ -3686,7 +3695,6 @@ vmxnet3_suspend(struct device *device)
pmConf->filters[i].mask[3] = 0x00;
pmConf->filters[i].mask[4] = 0xC0; /* IPv4 TIP */
pmConf->filters[i].mask[5] = 0x03; /* IPv4 TIP */
- in_dev_put(in_dev);
pmConf->wakeUpEvents |= VMXNET3_PM_WAKEUP_FILTER;
i++;
diff --git a/drivers/net/vmxnet3/vmxnet3_ethtool.c b/drivers/net/vmxnet3/vmxnet3_ethtool.c
index 559db051a500..0a38c76688ab 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethtool.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethtool.c
@@ -257,6 +257,16 @@ vmxnet3_get_strings(struct net_device *netdev, u32 stringset, u8 *buf)
}
}
+netdev_features_t vmxnet3_fix_features(struct net_device *netdev,
+ netdev_features_t features)
+{
+ /* If Rx checksum is disabled, then LRO should also be disabled */
+ if (!(features & NETIF_F_RXCSUM))
+ features &= ~NETIF_F_LRO;
+
+ return features;
+}
+
int vmxnet3_set_features(struct net_device *netdev, netdev_features_t features)
{
struct vmxnet3_adapter *adapter = netdev_priv(netdev);
diff --git a/drivers/net/vmxnet3/vmxnet3_int.h b/drivers/net/vmxnet3/vmxnet3_int.h
index a2c554f8a61b..1cc1cd4aaa59 100644
--- a/drivers/net/vmxnet3/vmxnet3_int.h
+++ b/drivers/net/vmxnet3/vmxnet3_int.h
@@ -69,12 +69,12 @@
/*
* Version numbers
*/
-#define VMXNET3_DRIVER_VERSION_STRING "1.4.16.0-k"
+#define VMXNET3_DRIVER_VERSION_STRING "1.4.17.0-k"
/* Each byte of this 32-bit integer encodes a version number in
* VMXNET3_DRIVER_VERSION_STRING.
*/
-#define VMXNET3_DRIVER_VERSION_NUM 0x01041000
+#define VMXNET3_DRIVER_VERSION_NUM 0x01041100
#if defined(CONFIG_PCI_MSI)
/* RSS only makes sense if MSI-X is supported. */
@@ -454,6 +454,9 @@ vmxnet3_tq_destroy_all(struct vmxnet3_adapter *adapter);
void
vmxnet3_rq_destroy_all(struct vmxnet3_adapter *adapter);
+netdev_features_t
+vmxnet3_fix_features(struct net_device *netdev, netdev_features_t features);
+
int
vmxnet3_set_features(struct net_device *netdev, netdev_features_t features);
diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
index 311b0cc6eb98..54edf8956a25 100644
--- a/drivers/net/vrf.c
+++ b/drivers/net/vrf.c
@@ -1072,12 +1072,14 @@ static struct sk_buff *vrf_l3_rcv(struct net_device *vrf_dev,
#if IS_ENABLED(CONFIG_IPV6)
/* send to link-local or multicast address via interface enslaved to
* VRF device. Force lookup to VRF table without changing flow struct
+ * Note: Caller to this function must hold rcu_read_lock() and no refcnt
+ * is taken on the dst by this function.
*/
static struct dst_entry *vrf_link_scope_lookup(const struct net_device *dev,
struct flowi6 *fl6)
{
struct net *net = dev_net(dev);
- int flags = RT6_LOOKUP_F_IFACE;
+ int flags = RT6_LOOKUP_F_IFACE | RT6_LOOKUP_F_DST_NOREF;
struct dst_entry *dst = NULL;
struct rt6_info *rt;
@@ -1087,7 +1089,6 @@ static struct dst_entry *vrf_link_scope_lookup(const struct net_device *dev,
*/
if (fl6->flowi6_oif == dev->ifindex) {
dst = &net->ipv6.ip6_null_entry->dst;
- dst_hold(dst);
return dst;
}
diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
index 083f3f0bf37f..3d9bcc957f7d 100644
--- a/drivers/net/vxlan.c
+++ b/drivers/net/vxlan.c
@@ -468,14 +468,19 @@ static u32 eth_vni_hash(const unsigned char *addr, __be32 vni)
return jhash_2words(key, vni, vxlan_salt) & (FDB_HASH_SIZE - 1);
}
+static u32 fdb_head_index(struct vxlan_dev *vxlan, const u8 *mac, __be32 vni)
+{
+ if (vxlan->cfg.flags & VXLAN_F_COLLECT_METADATA)
+ return eth_vni_hash(mac, vni);
+ else
+ return eth_hash(mac);
+}
+
/* Hash chain to use given mac address */
static inline struct hlist_head *vxlan_fdb_head(struct vxlan_dev *vxlan,
const u8 *mac, __be32 vni)
{
- if (vxlan->cfg.flags & VXLAN_F_COLLECT_METADATA)
- return &vxlan->fdb_head[eth_vni_hash(mac, vni)];
- else
- return &vxlan->fdb_head[eth_hash(mac)];
+ return &vxlan->fdb_head[fdb_head_index(vxlan, mac, vni)];
}
/* Look up Ethernet address in forwarding table */
@@ -590,8 +595,8 @@ int vxlan_fdb_replay(const struct net_device *dev, __be32 vni,
return -EINVAL;
vxlan = netdev_priv(dev);
- spin_lock_bh(&vxlan->hash_lock);
for (h = 0; h < FDB_HASH_SIZE; ++h) {
+ spin_lock_bh(&vxlan->hash_lock[h]);
hlist_for_each_entry(f, &vxlan->fdb_head[h], hlist) {
if (f->vni == vni) {
list_for_each_entry(rdst, &f->remotes, list) {
@@ -599,14 +604,16 @@ int vxlan_fdb_replay(const struct net_device *dev, __be32 vni,
f, rdst,
extack);
if (rc)
- goto out;
+ goto unlock;
}
}
}
+ spin_unlock_bh(&vxlan->hash_lock[h]);
}
+ return 0;
-out:
- spin_unlock_bh(&vxlan->hash_lock);
+unlock:
+ spin_unlock_bh(&vxlan->hash_lock[h]);
return rc;
}
EXPORT_SYMBOL_GPL(vxlan_fdb_replay);
@@ -622,14 +629,15 @@ void vxlan_fdb_clear_offload(const struct net_device *dev, __be32 vni)
return;
vxlan = netdev_priv(dev);
- spin_lock_bh(&vxlan->hash_lock);
for (h = 0; h < FDB_HASH_SIZE; ++h) {
+ spin_lock_bh(&vxlan->hash_lock[h]);
hlist_for_each_entry(f, &vxlan->fdb_head[h], hlist)
if (f->vni == vni)
list_for_each_entry(rdst, &f->remotes, list)
rdst->offloaded = false;
+ spin_unlock_bh(&vxlan->hash_lock[h]);
}
- spin_unlock_bh(&vxlan->hash_lock);
+
}
EXPORT_SYMBOL_GPL(vxlan_fdb_clear_offload);
@@ -804,6 +812,14 @@ static struct vxlan_fdb *vxlan_fdb_alloc(struct vxlan_dev *vxlan,
return f;
}
+static void vxlan_fdb_insert(struct vxlan_dev *vxlan, const u8 *mac,
+ __be32 src_vni, struct vxlan_fdb *f)
+{
+ ++vxlan->addrcnt;
+ hlist_add_head_rcu(&f->hlist,
+ vxlan_fdb_head(vxlan, mac, src_vni));
+}
+
static int vxlan_fdb_create(struct vxlan_dev *vxlan,
const u8 *mac, union vxlan_addr *ip,
__u16 state, __be16 port, __be32 src_vni,
@@ -829,18 +845,13 @@ static int vxlan_fdb_create(struct vxlan_dev *vxlan,
return rc;
}
- ++vxlan->addrcnt;
- hlist_add_head_rcu(&f->hlist,
- vxlan_fdb_head(vxlan, mac, src_vni));
-
*fdb = f;
return 0;
}
-static void vxlan_fdb_free(struct rcu_head *head)
+static void __vxlan_fdb_free(struct vxlan_fdb *f)
{
- struct vxlan_fdb *f = container_of(head, struct vxlan_fdb, rcu);
struct vxlan_rdst *rd, *nd;
list_for_each_entry_safe(rd, nd, &f->remotes, list) {
@@ -850,6 +861,13 @@ static void vxlan_fdb_free(struct rcu_head *head)
kfree(f);
}
+static void vxlan_fdb_free(struct rcu_head *head)
+{
+ struct vxlan_fdb *f = container_of(head, struct vxlan_fdb, rcu);
+
+ __vxlan_fdb_free(f);
+}
+
static void vxlan_fdb_destroy(struct vxlan_dev *vxlan, struct vxlan_fdb *f,
bool do_notify, bool swdev_notify)
{
@@ -977,6 +995,7 @@ static int vxlan_fdb_update_create(struct vxlan_dev *vxlan,
if (rc < 0)
return rc;
+ vxlan_fdb_insert(vxlan, mac, src_vni, f);
rc = vxlan_fdb_notify(vxlan, f, first_remote_rtnl(f), RTM_NEWNEIGH,
swdev_notify, extack);
if (rc)
@@ -1105,6 +1124,7 @@ static int vxlan_fdb_add(struct ndmsg *ndm, struct nlattr *tb[],
__be16 port;
__be32 src_vni, vni;
u32 ifindex;
+ u32 hash_index;
int err;
if (!(ndm->ndm_state & (NUD_PERMANENT|NUD_REACHABLE))) {
@@ -1123,12 +1143,13 @@ static int vxlan_fdb_add(struct ndmsg *ndm, struct nlattr *tb[],
if (vxlan->default_dst.remote_ip.sa.sa_family != ip.sa.sa_family)
return -EAFNOSUPPORT;
- spin_lock_bh(&vxlan->hash_lock);
+ hash_index = fdb_head_index(vxlan, addr, src_vni);
+ spin_lock_bh(&vxlan->hash_lock[hash_index]);
err = vxlan_fdb_update(vxlan, addr, &ip, ndm->ndm_state, flags,
port, src_vni, vni, ifindex,
ndm->ndm_flags | NTF_VXLAN_ADDED_BY_USER,
true, extack);
- spin_unlock_bh(&vxlan->hash_lock);
+ spin_unlock_bh(&vxlan->hash_lock[hash_index]);
return err;
}
@@ -1176,16 +1197,18 @@ static int vxlan_fdb_delete(struct ndmsg *ndm, struct nlattr *tb[],
__be32 src_vni, vni;
__be16 port;
u32 ifindex;
+ u32 hash_index;
int err;
err = vxlan_fdb_parse(tb, vxlan, &ip, &port, &src_vni, &vni, &ifindex);
if (err)
return err;
- spin_lock_bh(&vxlan->hash_lock);
+ hash_index = fdb_head_index(vxlan, addr, src_vni);
+ spin_lock_bh(&vxlan->hash_lock[hash_index]);
err = __vxlan_fdb_delete(vxlan, addr, ip, port, src_vni, vni, ifindex,
true);
- spin_unlock_bh(&vxlan->hash_lock);
+ spin_unlock_bh(&vxlan->hash_lock[hash_index]);
return err;
}
@@ -1297,8 +1320,10 @@ static bool vxlan_snoop(struct net_device *dev,
f->updated = jiffies;
vxlan_fdb_notify(vxlan, f, rdst, RTM_NEWNEIGH, true, NULL);
} else {
+ u32 hash_index = fdb_head_index(vxlan, src_mac, vni);
+
/* learned new entry */
- spin_lock(&vxlan->hash_lock);
+ spin_lock(&vxlan->hash_lock[hash_index]);
/* close off race between vxlan_flush and incoming packets */
if (netif_running(dev))
@@ -1309,7 +1334,7 @@ static bool vxlan_snoop(struct net_device *dev,
vni,
vxlan->default_dst.remote_vni,
ifindex, NTF_SELF, true, NULL);
- spin_unlock(&vxlan->hash_lock);
+ spin_unlock(&vxlan->hash_lock[hash_index]);
}
return false;
@@ -2219,7 +2244,7 @@ static struct rtable *vxlan_get_route(struct vxlan_dev *vxlan, struct net_device
fl4.fl4_sport = sport;
rt = ip_route_output_key(vxlan->net, &fl4);
- if (likely(!IS_ERR(rt))) {
+ if (!IS_ERR(rt)) {
if (rt->dst.dev == dev) {
netdev_dbg(dev, "circular route to %pI4\n", &daddr);
ip_rt_put(rt);
@@ -2699,7 +2724,7 @@ static void vxlan_cleanup(struct timer_list *t)
for (h = 0; h < FDB_HASH_SIZE; ++h) {
struct hlist_node *p, *n;
- spin_lock(&vxlan->hash_lock);
+ spin_lock(&vxlan->hash_lock[h]);
hlist_for_each_safe(p, n, &vxlan->fdb_head[h]) {
struct vxlan_fdb *f
= container_of(p, struct vxlan_fdb, hlist);
@@ -2721,7 +2746,7 @@ static void vxlan_cleanup(struct timer_list *t)
} else if (time_before(timeout, next_timer))
next_timer = timeout;
}
- spin_unlock(&vxlan->hash_lock);
+ spin_unlock(&vxlan->hash_lock[h]);
}
mod_timer(&vxlan->age_timer, next_timer);
@@ -2764,12 +2789,13 @@ static int vxlan_init(struct net_device *dev)
static void vxlan_fdb_delete_default(struct vxlan_dev *vxlan, __be32 vni)
{
struct vxlan_fdb *f;
+ u32 hash_index = fdb_head_index(vxlan, all_zeros_mac, vni);
- spin_lock_bh(&vxlan->hash_lock);
+ spin_lock_bh(&vxlan->hash_lock[hash_index]);
f = __vxlan_find_mac(vxlan, all_zeros_mac, vni);
if (f)
vxlan_fdb_destroy(vxlan, f, true, true);
- spin_unlock_bh(&vxlan->hash_lock);
+ spin_unlock_bh(&vxlan->hash_lock[hash_index]);
}
static void vxlan_uninit(struct net_device *dev)
@@ -2814,9 +2840,10 @@ static void vxlan_flush(struct vxlan_dev *vxlan, bool do_all)
{
unsigned int h;
- spin_lock_bh(&vxlan->hash_lock);
for (h = 0; h < FDB_HASH_SIZE; ++h) {
struct hlist_node *p, *n;
+
+ spin_lock_bh(&vxlan->hash_lock[h]);
hlist_for_each_safe(p, n, &vxlan->fdb_head[h]) {
struct vxlan_fdb *f
= container_of(p, struct vxlan_fdb, hlist);
@@ -2826,8 +2853,8 @@ static void vxlan_flush(struct vxlan_dev *vxlan, bool do_all)
if (!is_zero_ether_addr(f->eth_addr))
vxlan_fdb_destroy(vxlan, f, true, true);
}
+ spin_unlock_bh(&vxlan->hash_lock[h]);
}
- spin_unlock_bh(&vxlan->hash_lock);
}
/* Cleanup timer and forwarding table on shutdown */
@@ -3011,7 +3038,6 @@ static void vxlan_setup(struct net_device *dev)
dev->max_mtu = ETH_MAX_MTU;
INIT_LIST_HEAD(&vxlan->next);
- spin_lock_init(&vxlan->hash_lock);
timer_setup(&vxlan->age_timer, vxlan_cleanup, TIMER_DEFERRABLE);
@@ -3019,8 +3045,10 @@ static void vxlan_setup(struct net_device *dev)
gro_cells_init(&vxlan->gro_cells, dev);
- for (h = 0; h < FDB_HASH_SIZE; ++h)
+ for (h = 0; h < FDB_HASH_SIZE; ++h) {
+ spin_lock_init(&vxlan->hash_lock[h]);
INIT_HLIST_HEAD(&vxlan->fdb_head[h]);
+ }
}
static void vxlan_ether_setup(struct net_device *dev)
@@ -3571,12 +3599,17 @@ static int __vxlan_dev_create(struct net *net, struct net_device *dev,
if (err)
goto errout;
- /* notify default fdb entry */
if (f) {
+ vxlan_fdb_insert(vxlan, all_zeros_mac,
+ vxlan->default_dst.remote_vni, f);
+
+ /* notify default fdb entry */
err = vxlan_fdb_notify(vxlan, f, first_remote_rtnl(f),
RTM_NEWNEIGH, true, extack);
- if (err)
- goto errout;
+ if (err) {
+ vxlan_fdb_destroy(vxlan, f, false, false);
+ goto unregister;
+ }
}
list_add(&vxlan->next, &vn->vxlan_list);
@@ -3588,7 +3621,8 @@ errout:
* destroy the entry by hand here.
*/
if (f)
- vxlan_fdb_destroy(vxlan, f, false, false);
+ __vxlan_fdb_free(f);
+unregister:
if (unregister)
unregister_netdevice(dev);
return err;
@@ -3914,7 +3948,9 @@ static int vxlan_changelink(struct net_device *dev, struct nlattr *tb[],
/* handle default dst entry */
if (!vxlan_addr_equal(&conf.remote_ip, &dst->remote_ip)) {
- spin_lock_bh(&vxlan->hash_lock);
+ u32 hash_index = fdb_head_index(vxlan, all_zeros_mac, conf.vni);
+
+ spin_lock_bh(&vxlan->hash_lock[hash_index]);
if (!vxlan_addr_any(&conf.remote_ip)) {
err = vxlan_fdb_update(vxlan, all_zeros_mac,
&conf.remote_ip,
@@ -3925,7 +3961,7 @@ static int vxlan_changelink(struct net_device *dev, struct nlattr *tb[],
conf.remote_ifindex,
NTF_SELF, true, extack);
if (err) {
- spin_unlock_bh(&vxlan->hash_lock);
+ spin_unlock_bh(&vxlan->hash_lock[hash_index]);
return err;
}
}
@@ -3937,7 +3973,7 @@ static int vxlan_changelink(struct net_device *dev, struct nlattr *tb[],
dst->remote_vni,
dst->remote_ifindex,
true);
- spin_unlock_bh(&vxlan->hash_lock);
+ spin_unlock_bh(&vxlan->hash_lock[hash_index]);
}
if (conf.age_interval != vxlan->cfg.age_interval)
@@ -4192,8 +4228,11 @@ vxlan_fdb_offloaded_set(struct net_device *dev,
struct vxlan_dev *vxlan = netdev_priv(dev);
struct vxlan_rdst *rdst;
struct vxlan_fdb *f;
+ u32 hash_index;
+
+ hash_index = fdb_head_index(vxlan, fdb_info->eth_addr, fdb_info->vni);
- spin_lock_bh(&vxlan->hash_lock);
+ spin_lock_bh(&vxlan->hash_lock[hash_index]);
f = vxlan_find_mac(vxlan, fdb_info->eth_addr, fdb_info->vni);
if (!f)
@@ -4209,7 +4248,7 @@ vxlan_fdb_offloaded_set(struct net_device *dev,
rdst->offloaded = fdb_info->offloaded;
out:
- spin_unlock_bh(&vxlan->hash_lock);
+ spin_unlock_bh(&vxlan->hash_lock[hash_index]);
}
static int
@@ -4218,11 +4257,13 @@ vxlan_fdb_external_learn_add(struct net_device *dev,
{
struct vxlan_dev *vxlan = netdev_priv(dev);
struct netlink_ext_ack *extack;
+ u32 hash_index;
int err;
+ hash_index = fdb_head_index(vxlan, fdb_info->eth_addr, fdb_info->vni);
extack = switchdev_notifier_info_to_extack(&fdb_info->info);
- spin_lock_bh(&vxlan->hash_lock);
+ spin_lock_bh(&vxlan->hash_lock[hash_index]);
err = vxlan_fdb_update(vxlan, fdb_info->eth_addr, &fdb_info->remote_ip,
NUD_REACHABLE,
NLM_F_CREATE | NLM_F_REPLACE,
@@ -4232,7 +4273,7 @@ vxlan_fdb_external_learn_add(struct net_device *dev,
fdb_info->remote_ifindex,
NTF_USE | NTF_SELF | NTF_EXT_LEARNED,
false, extack);
- spin_unlock_bh(&vxlan->hash_lock);
+ spin_unlock_bh(&vxlan->hash_lock[hash_index]);
return err;
}
@@ -4243,9 +4284,11 @@ vxlan_fdb_external_learn_del(struct net_device *dev,
{
struct vxlan_dev *vxlan = netdev_priv(dev);
struct vxlan_fdb *f;
+ u32 hash_index;
int err = 0;
- spin_lock_bh(&vxlan->hash_lock);
+ hash_index = fdb_head_index(vxlan, fdb_info->eth_addr, fdb_info->vni);
+ spin_lock_bh(&vxlan->hash_lock[hash_index]);
f = vxlan_find_mac(vxlan, fdb_info->eth_addr, fdb_info->vni);
if (!f)
@@ -4259,7 +4302,7 @@ vxlan_fdb_external_learn_del(struct net_device *dev,
fdb_info->remote_ifindex,
false);
- spin_unlock_bh(&vxlan->hash_lock);
+ spin_unlock_bh(&vxlan->hash_lock[hash_index]);
return err;
}
diff --git a/drivers/net/wan/hdlc_cisco.c b/drivers/net/wan/hdlc_cisco.c
index 61d8f6389c64..a030f5aa6b95 100644
--- a/drivers/net/wan/hdlc_cisco.c
+++ b/drivers/net/wan/hdlc_cisco.c
@@ -193,16 +193,15 @@ static int cisco_rx(struct sk_buff *skb)
mask = ~cpu_to_be32(0); /* is the mask correct? */
if (in_dev != NULL) {
- struct in_ifaddr **ifap = &in_dev->ifa_list;
+ const struct in_ifaddr *ifa;
- while (*ifap != NULL) {
+ in_dev_for_each_ifa_rcu(ifa, in_dev) {
if (strcmp(dev->name,
- (*ifap)->ifa_label) == 0) {
- addr = (*ifap)->ifa_local;
- mask = (*ifap)->ifa_mask;
+ ifa->ifa_label) == 0) {
+ addr = ifa->ifa_local;
+ mask = ifa->ifa_mask;
break;
}
- ifap = &(*ifap)->ifa_next;
}
cisco_keepalive_send(dev, CISCO_ADDR_REPLY,
diff --git a/drivers/net/wan/x25_asy.c b/drivers/net/wan/x25_asy.c
index d78bc838d631..914be5847386 100644
--- a/drivers/net/wan/x25_asy.c
+++ b/drivers/net/wan/x25_asy.c
@@ -602,8 +602,8 @@ static void x25_asy_close_tty(struct tty_struct *tty)
err = lapb_unregister(sl->dev);
if (err != LAPB_OK)
- pr_err("x25_asy_close: lapb_unregister error: %d\n",
- err);
+ pr_err("%s: lapb_unregister error: %d\n",
+ __func__, err);
tty->disc_data = NULL;
sl->tty = NULL;
diff --git a/drivers/net/wireless/ath/Kconfig b/drivers/net/wireless/ath/Kconfig
index af2049e99188..d98d6ac90f3d 100644
--- a/drivers/net/wireless/ath/Kconfig
+++ b/drivers/net/wireless/ath/Kconfig
@@ -1,4 +1,4 @@
-# SPDX-License-Identifier: GPL-2.0-only
+# SPDX-License-Identifier: ISC
config ATH_COMMON
tristate
diff --git a/drivers/net/wireless/ath/Makefile b/drivers/net/wireless/ath/Makefile
index e4e460b5498e..ee2b2431e5a3 100644
--- a/drivers/net/wireless/ath/Makefile
+++ b/drivers/net/wireless/ath/Makefile
@@ -1,4 +1,4 @@
-# SPDX-License-Identifier: GPL-2.0
+# SPDX-License-Identifier: ISC
obj-$(CONFIG_ATH5K) += ath5k/
obj-$(CONFIG_ATH9K_HW) += ath9k/
obj-$(CONFIG_CARL9170) += carl9170/
diff --git a/drivers/net/wireless/ath/ar5523/Kconfig b/drivers/net/wireless/ath/ar5523/Kconfig
index 75fc66983da5..41d3c9a48b08 100644
--- a/drivers/net/wireless/ath/ar5523/Kconfig
+++ b/drivers/net/wireless/ath/ar5523/Kconfig
@@ -1,4 +1,4 @@
-# SPDX-License-Identifier: GPL-2.0-only
+# SPDX-License-Identifier: ISC
config AR5523
tristate "Atheros AR5523 wireless driver support"
depends on MAC80211 && USB
diff --git a/drivers/net/wireless/ath/ar5523/Makefile b/drivers/net/wireless/ath/ar5523/Makefile
index 84fc88aa109e..34efa5772096 100644
--- a/drivers/net/wireless/ath/ar5523/Makefile
+++ b/drivers/net/wireless/ath/ar5523/Makefile
@@ -1,2 +1,2 @@
-# SPDX-License-Identifier: GPL-2.0-only
+# SPDX-License-Identifier: ISC
obj-$(CONFIG_AR5523) := ar5523.o
diff --git a/drivers/net/wireless/ath/ath10k/Kconfig b/drivers/net/wireless/ath/ath10k/Kconfig
index 3522f251fa7f..6b3ff02a373d 100644
--- a/drivers/net/wireless/ath/ath10k/Kconfig
+++ b/drivers/net/wireless/ath/ath10k/Kconfig
@@ -1,4 +1,4 @@
-# SPDX-License-Identifier: GPL-2.0-only
+# SPDX-License-Identifier: ISC
config ATH10K
tristate "Atheros 802.11ac wireless cards support"
depends on MAC80211 && HAS_DMA
diff --git a/drivers/net/wireless/ath/ath10k/ahb.c b/drivers/net/wireless/ath/ath10k/ahb.c
index 0bf726c55736..f80854180e21 100644
--- a/drivers/net/wireless/ath/ath10k/ahb.c
+++ b/drivers/net/wireless/ath/ath10k/ahb.c
@@ -740,7 +740,7 @@ static int ath10k_ahb_probe(struct platform_device *pdev)
enum ath10k_hw_rev hw_rev;
size_t size;
int ret;
- struct ath10k_bus_params bus_params;
+ struct ath10k_bus_params bus_params = {};
of_id = of_match_device(ath10k_ahb_of_match, &pdev->dev);
if (!of_id) {
diff --git a/drivers/net/wireless/ath/ath10k/core.c b/drivers/net/wireless/ath/ath10k/core.c
index aff585658fc0..dc45d16e8d21 100644
--- a/drivers/net/wireless/ath/ath10k/core.c
+++ b/drivers/net/wireless/ath/ath10k/core.c
@@ -2,7 +2,7 @@
/*
* Copyright (c) 2005-2011 Atheros Communications Inc.
* Copyright (c) 2011-2017 Qualcomm Atheros, Inc.
- * Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.
*/
#include <linux/module.h>
@@ -26,10 +26,13 @@
#include "coredump.h"
unsigned int ath10k_debug_mask;
+EXPORT_SYMBOL(ath10k_debug_mask);
+
static unsigned int ath10k_cryptmode_param;
static bool uart_print;
static bool skip_otp;
static bool rawmode;
+static bool fw_diag_log;
unsigned long ath10k_coredump_mask = BIT(ATH10K_FW_CRASH_DUMP_REGISTERS) |
BIT(ATH10K_FW_CRASH_DUMP_CE_DATA);
@@ -40,6 +43,7 @@ module_param_named(cryptmode, ath10k_cryptmode_param, uint, 0644);
module_param(uart_print, bool, 0644);
module_param(skip_otp, bool, 0644);
module_param(rawmode, bool, 0644);
+module_param(fw_diag_log, bool, 0644);
module_param_named(coredump_mask, ath10k_coredump_mask, ulong, 0444);
MODULE_PARM_DESC(debug_mask, "Debugging mask");
@@ -48,6 +52,7 @@ MODULE_PARM_DESC(skip_otp, "Skip otp failure for calibration in testmode");
MODULE_PARM_DESC(cryptmode, "Crypto mode: 0-hardware, 1-software");
MODULE_PARM_DESC(rawmode, "Use raw 802.11 frame datapath");
MODULE_PARM_DESC(coredump_mask, "Bitfield of what to include in firmware crash file");
+MODULE_PARM_DESC(fw_diag_log, "Diag based fw log debugging");
static const struct ath10k_hw_params ath10k_hw_params_list[] = {
{
@@ -83,6 +88,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.rri_on_ddr = false,
.hw_filter_reset_required = true,
.fw_diag_ce_download = false,
+ .tx_stats_over_pktlog = true,
},
{
.id = QCA988X_HW_2_0_VERSION,
@@ -117,6 +123,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.rri_on_ddr = false,
.hw_filter_reset_required = true,
.fw_diag_ce_download = false,
+ .tx_stats_over_pktlog = true,
},
{
.id = QCA9887_HW_1_0_VERSION,
@@ -152,6 +159,35 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.rri_on_ddr = false,
.hw_filter_reset_required = true,
.fw_diag_ce_download = false,
+ .tx_stats_over_pktlog = false,
+ },
+ {
+ .id = QCA6174_HW_3_2_VERSION,
+ .dev_id = QCA6174_3_2_DEVICE_ID,
+ .bus = ATH10K_BUS_SDIO,
+ .name = "qca6174 hw3.2 sdio",
+ .patch_load_addr = QCA6174_HW_3_0_PATCH_LOAD_ADDR,
+ .uart_pin = 19,
+ .otp_exe_param = 0,
+ .channel_counters_freq_hz = 88000,
+ .max_probe_resp_desc_thres = 0,
+ .cal_data_len = 0,
+ .fw = {
+ .dir = QCA6174_HW_3_0_FW_DIR,
+ .board = QCA6174_HW_3_0_BOARD_DATA_FILE,
+ .board_size = QCA6174_BOARD_DATA_SZ,
+ .board_ext_size = QCA6174_BOARD_EXT_DATA_SZ,
+ },
+ .hw_ops = &qca6174_sdio_ops,
+ .hw_clk = qca6174_clk,
+ .target_cpu_freq = 176000000,
+ .decap_align_bytes = 4,
+ .n_cipher_suites = 8,
+ .num_peers = 10,
+ .ast_skid_limit = 0x10,
+ .num_wds_entries = 0x20,
+ .uart_pin_workaround = true,
+ .tx_stats_over_pktlog = false,
},
{
.id = QCA6174_HW_2_1_VERSION,
@@ -186,6 +222,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.rri_on_ddr = false,
.hw_filter_reset_required = true,
.fw_diag_ce_download = false,
+ .tx_stats_over_pktlog = false,
},
{
.id = QCA6174_HW_2_1_VERSION,
@@ -220,6 +257,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.rri_on_ddr = false,
.hw_filter_reset_required = true,
.fw_diag_ce_download = false,
+ .tx_stats_over_pktlog = false,
},
{
.id = QCA6174_HW_3_0_VERSION,
@@ -254,6 +292,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.rri_on_ddr = false,
.hw_filter_reset_required = true,
.fw_diag_ce_download = false,
+ .tx_stats_over_pktlog = false,
},
{
.id = QCA6174_HW_3_2_VERSION,
@@ -291,6 +330,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.rri_on_ddr = false,
.hw_filter_reset_required = true,
.fw_diag_ce_download = true,
+ .tx_stats_over_pktlog = false,
},
{
.id = QCA99X0_HW_2_0_DEV_VERSION,
@@ -331,6 +371,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.rri_on_ddr = false,
.hw_filter_reset_required = true,
.fw_diag_ce_download = false,
+ .tx_stats_over_pktlog = false,
},
{
.id = QCA9984_HW_1_0_DEV_VERSION,
@@ -378,6 +419,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.rri_on_ddr = false,
.hw_filter_reset_required = true,
.fw_diag_ce_download = false,
+ .tx_stats_over_pktlog = false,
},
{
.id = QCA9888_HW_2_0_DEV_VERSION,
@@ -422,6 +464,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.rri_on_ddr = false,
.hw_filter_reset_required = true,
.fw_diag_ce_download = false,
+ .tx_stats_over_pktlog = false,
},
{
.id = QCA9377_HW_1_0_DEV_VERSION,
@@ -456,6 +499,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.rri_on_ddr = false,
.hw_filter_reset_required = true,
.fw_diag_ce_download = false,
+ .tx_stats_over_pktlog = false,
},
{
.id = QCA9377_HW_1_1_DEV_VERSION,
@@ -492,6 +536,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.rri_on_ddr = false,
.hw_filter_reset_required = true,
.fw_diag_ce_download = true,
+ .tx_stats_over_pktlog = false,
},
{
.id = QCA4019_HW_1_0_DEV_VERSION,
@@ -533,6 +578,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.rri_on_ddr = false,
.hw_filter_reset_required = true,
.fw_diag_ce_download = false,
+ .tx_stats_over_pktlog = false,
},
{
.id = WCN3990_HW_1_0_DEV_VERSION,
@@ -560,6 +606,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.rri_on_ddr = true,
.hw_filter_reset_required = false,
.fw_diag_ce_download = false,
+ .tx_stats_over_pktlog = false,
},
};
@@ -585,6 +632,7 @@ static const char *const ath10k_core_fw_feature_str[] = {
[ATH10K_FW_FEATURE_MGMT_TX_BY_REF] = "mgmt-tx-by-reference",
[ATH10K_FW_FEATURE_NON_BMI] = "non-bmi",
[ATH10K_FW_FEATURE_SINGLE_CHAN_INFO_PER_CHANNEL] = "single-chan-info-per-channel",
+ [ATH10K_FW_FEATURE_PEER_FIXED_RATE] = "peer-fixed-rate",
};
static unsigned int ath10k_core_get_fw_feature_str(char *buf,
@@ -629,7 +677,7 @@ static void ath10k_send_suspend_complete(struct ath10k *ar)
complete(&ar->target_suspend);
}
-static void ath10k_init_sdio(struct ath10k *ar)
+static void ath10k_init_sdio(struct ath10k *ar, enum ath10k_firmware_mode mode)
{
u32 param = 0;
@@ -646,7 +694,12 @@ static void ath10k_init_sdio(struct ath10k *ar)
* not big enough for mac80211 / native wifi frames. disable it
*/
param &= ~HI_ACS_FLAGS_ALT_DATA_CREDIT_SIZE;
- param |= HI_ACS_FLAGS_SDIO_SWAP_MAILBOX_SET;
+
+ if (mode == ATH10K_FIRMWARE_MODE_UTF)
+ param &= ~HI_ACS_FLAGS_SDIO_SWAP_MAILBOX_SET;
+ else
+ param |= HI_ACS_FLAGS_SDIO_SWAP_MAILBOX_SET;
+
ath10k_bmi_write32(ar, hi_acs_flags, param);
/* Explicitly set fwlog prints to zero as target may turn it on
@@ -2065,8 +2118,16 @@ static int ath10k_init_uart(struct ath10k *ar)
return ret;
}
- if (!uart_print)
+ if (!uart_print && ar->hw_params.uart_pin_workaround) {
+ ret = ath10k_bmi_write32(ar, hi_dbg_uart_txpin,
+ ar->hw_params.uart_pin);
+ if (ret) {
+ ath10k_warn(ar, "failed to set UART TX pin: %d", ret);
+ return ret;
+ }
+
return 0;
+ }
ret = ath10k_bmi_write32(ar, hi_dbg_uart_txpin, ar->hw_params.uart_pin);
if (ret) {
@@ -2139,6 +2200,7 @@ static void ath10k_core_restart(struct work_struct *work)
complete(&ar->offchan_tx_completed);
complete(&ar->install_key_done);
complete(&ar->vdev_setup_done);
+ complete(&ar->vdev_delete_done);
complete(&ar->thermal.wmi_sync);
complete(&ar->bss_survey_done);
wake_up(&ar->htt.empty_tx_wq);
@@ -2501,7 +2563,7 @@ int ath10k_core_start(struct ath10k *ar, enum ath10k_firmware_mode mode,
goto err;
if (ar->hif.bus == ATH10K_BUS_SDIO)
- ath10k_init_sdio(ar);
+ ath10k_init_sdio(ar, mode);
}
ar->htc.htc_ops.target_send_suspend_complete =
@@ -2720,6 +2782,12 @@ int ath10k_core_start(struct ath10k *ar, enum ath10k_firmware_mode mode,
if (status)
goto err_hif_stop;
+ status = ath10k_hif_set_target_log_mode(ar, fw_diag_log);
+ if (status && status != -EOPNOTSUPP) {
+ ath10k_warn(ar, "set traget log mode faileds: %d\n", status);
+ goto err_hif_stop;
+ }
+
return 0;
err_hif_stop:
@@ -3105,8 +3173,10 @@ struct ath10k *ath10k_core_create(size_t priv_size, struct device *dev,
init_completion(&ar->install_key_done);
init_completion(&ar->vdev_setup_done);
+ init_completion(&ar->vdev_delete_done);
init_completion(&ar->thermal.wmi_sync);
init_completion(&ar->bss_survey_done);
+ init_completion(&ar->peer_delete_done);
INIT_DELAYED_WORK(&ar->scan.timeout, ath10k_scan_timeout_work);
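The core.c changes above add vdev_delete_done and peer_delete_done completions; the mac.c hunks later in this patch wait on them with wait_for_completion_timeout() and a 5 * HZ timeout before tearing down driver state. A rough user-space analogue of that complete / wait-with-timeout pattern, sketched with POSIX condition variables (hypothetical helper names chosen to mirror the kernel API, not the kernel implementation):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

/* User-space stand-in for a kernel completion: wait_for_completion_timeout()
 * blocks until complete() is called or the timeout expires.
 */
struct completion {
	pthread_mutex_t lock;
	pthread_cond_t cond;
	bool done;
};

static void complete(struct completion *c)
{
	pthread_mutex_lock(&c->lock);
	c->done = true;
	pthread_cond_signal(&c->cond);
	pthread_mutex_unlock(&c->lock);
}

/* Returns true if the event completed, false on timeout. */
static bool wait_for_completion_timeout(struct completion *c, int secs)
{
	struct timespec ts;
	bool done;

	clock_gettime(CLOCK_REALTIME, &ts);
	ts.tv_sec += secs;

	pthread_mutex_lock(&c->lock);
	while (!c->done) {
		if (pthread_cond_timedwait(&c->cond, &c->lock, &ts))
			break; /* ETIMEDOUT */
	}
	done = c->done;
	pthread_mutex_unlock(&c->lock);
	return done;
}

static void *firmware_event(void *arg)
{
	complete(arg); /* e.g. WMI peer/vdev delete response arrived */
	return NULL;
}

int main(void)
{
	struct completion c = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.cond = PTHREAD_COND_INITIALIZER,
	};
	pthread_t t;

	pthread_create(&t, NULL, firmware_event, &c);
	printf("completed: %d\n", wait_for_completion_timeout(&c, 5));
	pthread_join(&t, NULL);
	return 0;
}

Build with -pthread; as in the driver, a false return is treated as a timeout and reported rather than waited on forever.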
diff --git a/drivers/net/wireless/ath/ath10k/core.h b/drivers/net/wireless/ath/ath10k/core.h
index e35aae5146f1..4d7db07db6ba 100644
--- a/drivers/net/wireless/ath/ath10k/core.h
+++ b/drivers/net/wireless/ath/ath10k/core.h
@@ -2,7 +2,7 @@
/*
* Copyright (c) 2005-2011 Atheros Communications Inc.
* Copyright (c) 2011-2017 Qualcomm Atheros, Inc.
- * Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.
*/
#ifndef _CORE_H_
@@ -196,7 +196,7 @@ struct ath10k_fw_extd_stats_peer {
struct list_head list;
u8 peer_macaddr[ETH_ALEN];
- u32 rx_duration;
+ u64 rx_duration;
};
struct ath10k_fw_stats_vdev {
@@ -400,6 +400,14 @@ struct ath10k_peer {
/* protected by ar->data_lock */
struct ieee80211_key_conf *keys[WMI_MAX_KEY_INDEX + 1];
+ union htt_rx_pn_t tids_last_pn[ATH10K_TXRX_NUM_EXT_TIDS];
+ bool tids_last_pn_valid[ATH10K_TXRX_NUM_EXT_TIDS];
+ union htt_rx_pn_t frag_tids_last_pn[ATH10K_TXRX_NUM_EXT_TIDS];
+ u32 frag_tids_seq[ATH10K_TXRX_NUM_EXT_TIDS];
+ struct {
+ enum htt_security_types sec_type;
+ int pn_len;
+ } rx_pn[ATH10K_HTT_TXRX_PEER_SECURITY_MAX];
};
struct ath10k_txq {
@@ -506,7 +514,8 @@ struct ath10k_sta {
u32 peer_ps_state;
};
-#define ATH10K_VDEV_SETUP_TIMEOUT_HZ (5 * HZ)
+#define ATH10K_VDEV_SETUP_TIMEOUT_HZ (5 * HZ)
+#define ATH10K_VDEV_DELETE_TIMEOUT_HZ (5 * HZ)
enum ath10k_beacon_state {
ATH10K_BEACON_SCHEDULED = 0,
@@ -571,6 +580,10 @@ struct ath10k_vif {
struct work_struct ap_csa_work;
struct delayed_work connection_loss_work;
struct cfg80211_bitrate_mask bitrate_mask;
+
+ /* For setting VHT peer fixed rate, protected by conf_mutex */
+ int vht_num_rates;
+ u8 vht_pfr;
};
struct ath10k_vif_iter {
@@ -614,6 +627,7 @@ struct ath10k_debug {
bool fw_stats_done;
unsigned long htt_stats_mask;
+ unsigned long reset_htt_stats;
struct delayed_work htt_stats_dwork;
struct ath10k_dfs_stats dfs_stats;
struct ath_dfs_pool_stats dfs_pool_stats;
@@ -631,6 +645,7 @@ struct ath10k_debug {
u32 nf_cal_period;
void *cal_data;
u32 enable_extd_tx_stats;
+ u8 fw_dbglog_mode;
};
enum ath10k_state {
@@ -761,6 +776,9 @@ enum ath10k_fw_features {
/* Firmware sends only one chan_info event per channel */
ATH10K_FW_FEATURE_SINGLE_CHAN_INFO_PER_CHANNEL = 20,
+ /* Firmware allows setting peer fixed rate */
+ ATH10K_FW_FEATURE_PEER_FIXED_RATE = 21,
+
/* keep last */
ATH10K_FW_FEATURE_COUNT,
};
@@ -919,6 +937,7 @@ struct ath10k_bus_params {
u32 chip_id;
enum ath10k_dev_type dev_type;
bool link_can_suspend;
+ bool hl_msdu_ids;
};
struct ath10k {
@@ -1055,6 +1074,7 @@ struct ath10k {
int last_wmi_vdev_start_status;
struct completion vdev_setup_done;
+ struct completion vdev_delete_done;
struct workqueue_struct *workqueue;
/* Auxiliary workqueue */
@@ -1189,6 +1209,7 @@ struct ath10k {
struct ath10k_radar_found_info last_radar_info;
struct work_struct radar_confirmation_work;
struct ath10k_bus_params bus_param;
+ struct completion peer_delete_done;
/* must be last */
u8 drv_priv[0] __aligned(sizeof(void *));
diff --git a/drivers/net/wireless/ath/ath10k/coredump.c b/drivers/net/wireless/ath/ath10k/coredump.c
index 45a355fb62b9..b6d2932383cf 100644
--- a/drivers/net/wireless/ath/ath10k/coredump.c
+++ b/drivers/net/wireless/ath/ath10k/coredump.c
@@ -1192,8 +1192,8 @@ static struct ath10k_dump_file_data *ath10k_coredump_build(struct ath10k *ar)
if (test_bit(ATH10K_FW_CRASH_DUMP_CE_DATA, &ath10k_coredump_mask)) {
dump_tlv = (struct ath10k_tlv_dump_data *)(buf + sofar);
dump_tlv->type = cpu_to_le32(ATH10K_FW_CRASH_DUMP_CE_DATA);
- dump_tlv->tlv_len = cpu_to_le32(sizeof(*ce_hdr) +
- CE_COUNT * sizeof(ce_hdr->entries[0]));
+ dump_tlv->tlv_len = cpu_to_le32(struct_size(ce_hdr, entries,
+ CE_COUNT));
ce_hdr = (struct ath10k_ce_crash_hdr *)(dump_tlv->tlv_data);
ce_hdr->ce_count = cpu_to_le32(CE_COUNT);
memset(ce_hdr->reserved, 0, sizeof(ce_hdr->reserved));
diff --git a/drivers/net/wireless/ath/ath10k/debug.c b/drivers/net/wireless/ath/ath10k/debug.c
index 32d967a31c65..bd2b5628f850 100644
--- a/drivers/net/wireless/ath/ath10k/debug.c
+++ b/drivers/net/wireless/ath/ath10k/debug.c
@@ -305,6 +305,9 @@ void ath10k_debug_fw_stats_process(struct ath10k *ar, struct sk_buff *skb)
if (is_end)
ar->debug.fw_stats_done = true;
+ if (stats.extended)
+ ar->debug.fw_stats.extended = true;
+
is_started = !list_empty(&ar->debug.fw_stats.pdevs);
if (is_started && !is_end) {
@@ -873,7 +876,7 @@ static int ath10k_debug_htt_stats_req(struct ath10k *ar)
cookie = get_jiffies_64();
ret = ath10k_htt_h2t_stats_req(&ar->htt, ar->debug.htt_stats_mask,
- cookie);
+ ar->debug.reset_htt_stats, cookie);
if (ret) {
ath10k_warn(ar, "failed to send htt stats request: %d\n", ret);
return ret;
@@ -922,8 +925,8 @@ static ssize_t ath10k_write_htt_stats_mask(struct file *file,
if (ret)
return ret;
- /* max 8 bit masks (for now) */
- if (mask > 0xff)
+ /* max 17 bit masks (for now) */
+ if (mask > HTT_STATS_BIT_MASK)
return -E2BIG;
mutex_lock(&ar->conf_mutex);
@@ -2469,6 +2472,44 @@ static const struct file_operations fops_ps_state_enable = {
.llseek = default_llseek,
};
+static ssize_t ath10k_write_reset_htt_stats(struct file *file,
+ const char __user *user_buf,
+ size_t count, loff_t *ppos)
+{
+ struct ath10k *ar = file->private_data;
+ unsigned long reset;
+ int ret;
+
+ ret = kstrtoul_from_user(user_buf, count, 0, &reset);
+ if (ret)
+ return ret;
+
+ if (reset == 0 || reset > 0x1ffff)
+ return -EINVAL;
+
+ mutex_lock(&ar->conf_mutex);
+
+ ar->debug.reset_htt_stats = reset;
+
+ ret = ath10k_debug_htt_stats_req(ar);
+ if (ret)
+ goto out;
+
+ ar->debug.reset_htt_stats = 0;
+ ret = count;
+
+out:
+ mutex_unlock(&ar->conf_mutex);
+ return ret;
+}
+
+static const struct file_operations fops_reset_htt_stats = {
+ .write = ath10k_write_reset_htt_stats,
+ .owner = THIS_MODULE,
+ .open = simple_open,
+ .llseek = default_llseek,
+};
+
int ath10k_debug_create(struct ath10k *ar)
{
ar->debug.cal_data = vzalloc(ATH10K_DEBUG_CAL_DATA_LEN);
@@ -2609,6 +2650,9 @@ int ath10k_debug_register(struct ath10k *ar)
debugfs_create_file("ps_state_enable", 0600, ar->debug.debugfs_phy, ar,
&fops_ps_state_enable);
+ debugfs_create_file("reset_htt_stats", 0200, ar->debug.debugfs_phy, ar,
+ &fops_reset_htt_stats);
+
return 0;
}
@@ -2620,8 +2664,8 @@ void ath10k_debug_unregister(struct ath10k *ar)
#endif /* CONFIG_ATH10K_DEBUGFS */
#ifdef CONFIG_ATH10K_DEBUG
-void ath10k_dbg(struct ath10k *ar, enum ath10k_debug_mask mask,
- const char *fmt, ...)
+void __ath10k_dbg(struct ath10k *ar, enum ath10k_debug_mask mask,
+ const char *fmt, ...)
{
struct va_format vaf;
va_list args;
@@ -2638,7 +2682,7 @@ void ath10k_dbg(struct ath10k *ar, enum ath10k_debug_mask mask,
va_end(args);
}
-EXPORT_SYMBOL(ath10k_dbg);
+EXPORT_SYMBOL(__ath10k_dbg);
void ath10k_dbg_dump(struct ath10k *ar,
enum ath10k_debug_mask mask,
@@ -2651,7 +2695,7 @@ void ath10k_dbg_dump(struct ath10k *ar,
if (ath10k_debug_mask & mask) {
if (msg)
- ath10k_dbg(ar, mask, "%s\n", msg);
+ __ath10k_dbg(ar, mask, "%s\n", msg);
for (ptr = buf; (ptr - buf) < len; ptr += 16) {
linebuflen = 0;
diff --git a/drivers/net/wireless/ath/ath10k/debug.h b/drivers/net/wireless/ath/ath10k/debug.h
index db78e855a80f..82f7eb8583d9 100644
--- a/drivers/net/wireless/ath/ath10k/debug.h
+++ b/drivers/net/wireless/ath/ath10k/debug.h
@@ -71,6 +71,9 @@ struct ath10k_pktlog_hdr {
/* FIXME: How to calculate the buffer size sanely? */
#define ATH10K_FW_STATS_BUF_SIZE (1024 * 1024)
+#define ATH10K_TX_POWER_MAX_VAL 70
+#define ATH10K_TX_POWER_MIN_VAL 0
+
extern unsigned int ath10k_debug_mask;
__printf(2, 3) void ath10k_info(struct ath10k *ar, const char *fmt, ...);
@@ -240,18 +243,18 @@ void ath10k_sta_update_rx_tid_stats_ampdu(struct ath10k *ar,
#endif /* CONFIG_MAC80211_DEBUGFS */
#ifdef CONFIG_ATH10K_DEBUG
-__printf(3, 4) void ath10k_dbg(struct ath10k *ar,
- enum ath10k_debug_mask mask,
- const char *fmt, ...);
+__printf(3, 4) void __ath10k_dbg(struct ath10k *ar,
+ enum ath10k_debug_mask mask,
+ const char *fmt, ...);
void ath10k_dbg_dump(struct ath10k *ar,
enum ath10k_debug_mask mask,
const char *msg, const char *prefix,
const void *buf, size_t len);
#else /* CONFIG_ATH10K_DEBUG */
-static inline int ath10k_dbg(struct ath10k *ar,
- enum ath10k_debug_mask dbg_mask,
- const char *fmt, ...)
+static inline int __ath10k_dbg(struct ath10k *ar,
+ enum ath10k_debug_mask dbg_mask,
+ const char *fmt, ...)
{
return 0;
}
@@ -263,4 +266,14 @@ static inline void ath10k_dbg_dump(struct ath10k *ar,
{
}
#endif /* CONFIG_ATH10K_DEBUG */
+
+/* Avoid calling __ath10k_dbg() if debug_mask is not set and tracing
+ * disabled.
+ */
+#define ath10k_dbg(ar, dbg_mask, fmt, ...) \
+do { \
+ if ((ath10k_debug_mask & dbg_mask) || \
+ trace_ath10k_log_dbg_enabled()) \
+ __ath10k_dbg(ar, dbg_mask, fmt, ##__VA_ARGS__); \
+} while (0)
#endif /* _DEBUG_H_ */
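The ath10k_dbg() wrapper added above only calls the varargs __ath10k_dbg() when the debug mask matches or tracing is enabled, so disabled debug prints cost a single test. A minimal standalone sketch of the same gating idea (user-space, hypothetical names; the ##__VA_ARGS__ form is the same GNU C extension the kernel uses):

#include <stdarg.h>
#include <stdio.h>

static unsigned int debug_mask; /* e.g. set from a module parameter */

static void __dbg(unsigned int mask, const char *fmt, ...)
{
	va_list args;

	fprintf(stderr, "[%#x] ", mask);
	va_start(args, fmt);
	vfprintf(stderr, fmt, args);
	va_end(args);
}

/* Skip the varargs call entirely unless the mask bit is set. */
#define dbg(mask, fmt, ...)					\
do {								\
	if (debug_mask & (mask))				\
		__dbg(mask, fmt, ##__VA_ARGS__);		\
} while (0)

int main(void)
{
	debug_mask = 0x2;
	dbg(0x1, "suppressed, mask bit clear\n");
	dbg(0x2, "printed, value=%d\n", 42);
	return 0;
}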
diff --git a/drivers/net/wireless/ath/ath10k/debugfs_sta.c b/drivers/net/wireless/ath/ath10k/debugfs_sta.c
index c704ae371c4d..42931a669b02 100644
--- a/drivers/net/wireless/ath/ath10k/debugfs_sta.c
+++ b/drivers/net/wireless/ath/ath10k/debugfs_sta.c
@@ -663,6 +663,13 @@ static ssize_t ath10k_dbg_sta_dump_tx_stats(struct file *file,
mutex_lock(&ar->conf_mutex);
+ if (!arsta->tx_stats) {
+ ath10k_warn(ar, "failed to get tx stats");
+ mutex_unlock(&ar->conf_mutex);
+ kfree(buf);
+ return 0;
+ }
+
spin_lock_bh(&ar->data_lock);
for (k = 0; k < ATH10K_STATS_TYPE_MAX; k++) {
for (j = 0; j < ATH10K_COUNTER_TYPE_MAX; j++) {
diff --git a/drivers/net/wireless/ath/ath10k/hif.h b/drivers/net/wireless/ath/ath10k/hif.h
index fe5417962f40..496ee34a4d78 100644
--- a/drivers/net/wireless/ath/ath10k/hif.h
+++ b/drivers/net/wireless/ath/ath10k/hif.h
@@ -12,6 +12,12 @@
#include "bmi.h"
#include "debug.h"
+/* Types of fw logging mode */
+enum ath_dbg_mode {
+ ATH10K_ENABLE_FW_LOG_DIAG,
+ ATH10K_ENABLE_FW_LOG_CE,
+};
+
struct ath10k_hif_sg_item {
u16 transfer_id;
void *transfer_context; /* NULL = tx completion callback not called */
@@ -88,6 +94,7 @@ struct ath10k_hif_ops {
int (*get_target_info)(struct ath10k *ar,
struct bmi_target_info *target_info);
+ int (*set_target_log_mode)(struct ath10k *ar, u8 fw_log_mode);
};
static inline int ath10k_hif_tx_sg(struct ath10k *ar, u8 pipe_id,
@@ -230,4 +237,12 @@ static inline int ath10k_hif_get_target_info(struct ath10k *ar,
return ar->hif.ops->get_target_info(ar, tgt_info);
}
+static inline int ath10k_hif_set_target_log_mode(struct ath10k *ar,
+ u8 fw_log_mode)
+{
+ if (!ar->hif.ops->set_target_log_mode)
+ return -EOPNOTSUPP;
+
+ return ar->hif.ops->set_target_log_mode(ar, fw_log_mode);
+}
#endif /* _HIF_H_ */
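ath10k_hif_set_target_log_mode() above treats the bus op as optional and returns -EOPNOTSUPP when it is absent, which the core.c caller earlier in this patch deliberately tolerates. A standalone sketch of that optional-callback pattern (hypothetical ops struct; EOPNOTSUPP as exposed by Linux <errno.h>):

#include <errno.h>
#include <stddef.h>
#include <stdio.h>

struct hif_ops {
	/* Optional: only some buses implement it. */
	int (*set_target_log_mode)(void *ar, unsigned char fw_log_mode);
};

static int hif_set_target_log_mode(const struct hif_ops *ops, void *ar,
				   unsigned char fw_log_mode)
{
	if (!ops->set_target_log_mode)
		return -EOPNOTSUPP;

	return ops->set_target_log_mode(ar, fw_log_mode);
}

static int snoc_set_log_mode(void *ar, unsigned char mode)
{
	printf("log mode set to %d\n", mode);
	return 0;
}

int main(void)
{
	struct hif_ops pci = { 0 };	/* op not implemented */
	struct hif_ops snoc = { .set_target_log_mode = snoc_set_log_mode };

	printf("pci:  %d\n", hif_set_target_log_mode(&pci, NULL, 1));
	printf("snoc: %d\n", hif_set_target_log_mode(&snoc, NULL, 1));
	return 0;
}

Callers that can live without the capability simply ignore the -EOPNOTSUPP case, as ath10k_core_start() does for fw_diag_log.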
diff --git a/drivers/net/wireless/ath/ath10k/htc.c b/drivers/net/wireless/ath/ath10k/htc.c
index 805a7f8a04f2..1d4d1a1992fe 100644
--- a/drivers/net/wireless/ath/ath10k/htc.c
+++ b/drivers/net/wireless/ath/ath10k/htc.c
@@ -73,6 +73,7 @@ static void ath10k_htc_prepare_tx_skb(struct ath10k_htc_ep *ep,
struct ath10k_htc_hdr *hdr;
hdr = (struct ath10k_htc_hdr *)skb->data;
+ memset(hdr, 0, sizeof(struct ath10k_htc_hdr));
hdr->eid = ep->eid;
hdr->len = __cpu_to_le16(skb->len - sizeof(*hdr));
diff --git a/drivers/net/wireless/ath/ath10k/htt.c b/drivers/net/wireless/ath/ath10k/htt.c
index d235ff3098e8..7b75200ceae5 100644
--- a/drivers/net/wireless/ath/ath10k/htt.c
+++ b/drivers/net/wireless/ath/ath10k/htt.c
@@ -257,7 +257,7 @@ int ath10k_htt_setup(struct ath10k_htt *htt)
return status;
}
- status = htt->tx_ops->htt_h2t_aggr_cfg_msg(htt,
+ status = ath10k_htt_h2t_aggr_cfg_msg(htt,
htt->max_num_ampdu,
htt->max_num_amsdu);
if (status) {
diff --git a/drivers/net/wireless/ath/ath10k/htt.h b/drivers/net/wireless/ath/ath10k/htt.h
index 4cee5492abc8..30c080094af1 100644
--- a/drivers/net/wireless/ath/ath10k/htt.h
+++ b/drivers/net/wireless/ath/ath10k/htt.h
@@ -315,6 +315,7 @@ struct htt_stats_req {
} __packed;
#define HTT_STATS_REQ_CFG_STAT_TYPE_INVALID 0xff
+#define HTT_STATS_BIT_MASK GENMASK(16, 0)
/*
* htt_oob_sync_req - request out-of-band sync
@@ -733,6 +734,20 @@ struct htt_rx_indication_hl {
struct htt_rx_indication_mpdu_range mpdu_ranges[0];
} __packed;
+struct htt_hl_rx_desc {
+ __le32 info;
+ __le32 pn_31_0;
+ union {
+ struct {
+ __le16 pn_47_32;
+ __le16 pn_63_48;
+ } pn16;
+ __le32 pn_63_32;
+ } u0;
+ __le32 pn_95_64;
+ __le32 pn_127_96;
+} __packed;
+
static inline struct htt_rx_indication_mpdu_range *
htt_rx_ind_get_mpdu_ranges(struct htt_rx_indication *rx_ind)
{
@@ -790,6 +805,21 @@ struct htt_rx_peer_unmap {
__le16 peer_id;
} __packed;
+enum htt_txrx_sec_cast_type {
+ HTT_TXRX_SEC_MCAST = 0,
+ HTT_TXRX_SEC_UCAST
+};
+
+enum htt_rx_pn_check_type {
+ HTT_RX_NON_PN_CHECK = 0,
+ HTT_RX_PN_CHECK
+};
+
+enum htt_rx_tkip_demic_type {
+ HTT_RX_NON_TKIP_MIC = 0,
+ HTT_RX_TKIP_MIC
+};
+
enum htt_security_types {
HTT_SECURITY_NONE,
HTT_SECURITY_WEP128,
@@ -803,6 +833,9 @@ enum htt_security_types {
HTT_NUM_SECURITY_TYPES /* keep this last! */
};
+#define ATH10K_HTT_TXRX_PEER_SECURITY_MAX 2
+#define ATH10K_TXRX_NUM_EXT_TIDS 19
+
enum htt_security_flags {
#define HTT_SECURITY_TYPE_MASK 0x7F
#define HTT_SECURITY_TYPE_LSB 0
@@ -1010,6 +1043,11 @@ struct htt_rx_fragment_indication {
u8 fw_msdu_rx_desc[0];
} __packed;
+#define ATH10K_IEEE80211_EXTIV BIT(5)
+#define ATH10K_IEEE80211_TKIP_MICLEN 8 /* trailing MIC */
+
+#define HTT_RX_FRAG_IND_INFO0_HEADER_LEN 16
+
#define HTT_RX_FRAG_IND_INFO0_EXT_TID_MASK 0x1F
#define HTT_RX_FRAG_IND_INFO0_EXT_TID_LSB 0
#define HTT_RX_FRAG_IND_INFO0_FLUSH_VALID_MASK 0x20
@@ -2048,6 +2086,19 @@ static inline void ath10k_htt_free_txbuff(struct ath10k_htt *htt)
htt->tx_ops->htt_free_txbuff(htt);
}
+static inline int ath10k_htt_h2t_aggr_cfg_msg(struct ath10k_htt *htt,
+ u8 max_subfrms_ampdu,
+ u8 max_subfrms_amsdu)
+
+{
+ if (!htt->tx_ops->htt_h2t_aggr_cfg_msg)
+ return -EOPNOTSUPP;
+
+ return htt->tx_ops->htt_h2t_aggr_cfg_msg(htt,
+ max_subfrms_ampdu,
+ max_subfrms_amsdu);
+}
+
struct ath10k_htt_rx_ops {
size_t (*htt_get_rx_ring_size)(struct ath10k_htt *htt);
void (*htt_config_paddrs_ring)(struct ath10k_htt *htt, void *vaddr);
@@ -2055,6 +2106,9 @@ struct ath10k_htt_rx_ops {
int idx);
void* (*htt_get_vaddr_ring)(struct ath10k_htt *htt);
void (*htt_reset_paddrs_ring)(struct ath10k_htt *htt, int idx);
+ bool (*htt_rx_proc_rx_frag_ind)(struct ath10k_htt *htt,
+ struct htt_rx_fragment_indication *rx,
+ struct sk_buff *skb);
};
static inline size_t ath10k_htt_get_rx_ring_size(struct ath10k_htt *htt)
@@ -2094,6 +2148,16 @@ static inline void ath10k_htt_reset_paddrs_ring(struct ath10k_htt *htt, int idx)
htt->rx_ops->htt_reset_paddrs_ring(htt, idx);
}
+static inline bool ath10k_htt_rx_proc_rx_frag_ind(struct ath10k_htt *htt,
+ struct htt_rx_fragment_indication *rx,
+ struct sk_buff *skb)
+{
+ if (!htt->rx_ops->htt_rx_proc_rx_frag_ind)
+ return true;
+
+ return htt->rx_ops->htt_rx_proc_rx_frag_ind(htt, rx, skb);
+}
+
#define RX_HTT_HDR_STATUS_LEN 64
/* This structure layout is programmed via rx ring setup
@@ -2128,10 +2192,8 @@ struct htt_rx_desc {
#define HTT_RX_DESC_HL_INFO_ENCRYPTED_LSB 12
#define HTT_RX_DESC_HL_INFO_CHAN_INFO_PRESENT_MASK 0x00002000
#define HTT_RX_DESC_HL_INFO_CHAN_INFO_PRESENT_LSB 13
-#define HTT_RX_DESC_HL_INFO_MCAST_BCAST_MASK 0x00008000
-#define HTT_RX_DESC_HL_INFO_MCAST_BCAST_LSB 15
-#define HTT_RX_DESC_HL_INFO_FRAGMENT_MASK 0x00010000
-#define HTT_RX_DESC_HL_INFO_FRAGMENT_LSB 16
+#define HTT_RX_DESC_HL_INFO_MCAST_BCAST_MASK 0x00010000
+#define HTT_RX_DESC_HL_INFO_MCAST_BCAST_LSB 16
#define HTT_RX_DESC_HL_INFO_KEY_ID_OCT_MASK 0x01fe0000
#define HTT_RX_DESC_HL_INFO_KEY_ID_OCT_LSB 17
@@ -2195,10 +2257,8 @@ void ath10k_htt_htc_tx_complete(struct ath10k *ar, struct sk_buff *skb);
void ath10k_htt_htc_t2h_msg_handler(struct ath10k *ar, struct sk_buff *skb);
bool ath10k_htt_t2h_msg_handler(struct ath10k *ar, struct sk_buff *skb);
int ath10k_htt_h2t_ver_req_msg(struct ath10k_htt *htt);
-int ath10k_htt_h2t_stats_req(struct ath10k_htt *htt, u8 mask, u64 cookie);
-int ath10k_htt_h2t_aggr_cfg_msg(struct ath10k_htt *htt,
- u8 max_subfrms_ampdu,
- u8 max_subfrms_amsdu);
+int ath10k_htt_h2t_stats_req(struct ath10k_htt *htt, u32 mask, u32 reset_mask,
+ u64 cookie);
void ath10k_htt_hif_tx_complete(struct ath10k *ar, struct sk_buff *skb);
int ath10k_htt_tx_fetch_resp(struct ath10k *ar,
__le32 token,
diff --git a/drivers/net/wireless/ath/ath10k/htt_rx.c b/drivers/net/wireless/ath/ath10k/htt_rx.c
index 1acc622d2183..83a7fb68fd24 100644
--- a/drivers/net/wireless/ath/ath10k/htt_rx.c
+++ b/drivers/net/wireless/ath/ath10k/htt_rx.c
@@ -2061,9 +2061,91 @@ static int ath10k_htt_rx_handle_amsdu(struct ath10k_htt *htt)
return 0;
}
+static void ath10k_htt_rx_mpdu_desc_pn_hl(struct htt_hl_rx_desc *rx_desc,
+ union htt_rx_pn_t *pn,
+ int pn_len_bits)
+{
+ switch (pn_len_bits) {
+ case 48:
+ pn->pn48 = __le32_to_cpu(rx_desc->pn_31_0) +
+ ((u64)(__le32_to_cpu(rx_desc->u0.pn_63_32) & 0xFFFF) << 32);
+ break;
+ case 24:
+ pn->pn24 = __le32_to_cpu(rx_desc->pn_31_0);
+ break;
+ }
+}
+
+static bool ath10k_htt_rx_pn_cmp48(union htt_rx_pn_t *new_pn,
+ union htt_rx_pn_t *old_pn)
+{
+ return ((new_pn->pn48 & 0xffffffffffffULL) <=
+ (old_pn->pn48 & 0xffffffffffffULL));
+}
+
+static bool ath10k_htt_rx_pn_check_replay_hl(struct ath10k *ar,
+ struct ath10k_peer *peer,
+ struct htt_rx_indication_hl *rx)
+{
+ bool last_pn_valid, pn_invalid = false;
+ enum htt_txrx_sec_cast_type sec_index;
+ enum htt_security_types sec_type;
+ union htt_rx_pn_t new_pn = {0};
+ struct htt_hl_rx_desc *rx_desc;
+ union htt_rx_pn_t *last_pn;
+ u32 rx_desc_info, tid;
+ int num_mpdu_ranges;
+
+ lockdep_assert_held(&ar->data_lock);
+
+ if (!peer)
+ return false;
+
+ if (!(rx->fw_desc.flags & FW_RX_DESC_FLAGS_FIRST_MSDU))
+ return false;
+
+ num_mpdu_ranges = MS(__le32_to_cpu(rx->hdr.info1),
+ HTT_RX_INDICATION_INFO1_NUM_MPDU_RANGES);
+
+ rx_desc = (struct htt_hl_rx_desc *)&rx->mpdu_ranges[num_mpdu_ranges];
+ rx_desc_info = __le32_to_cpu(rx_desc->info);
+
+ if (!MS(rx_desc_info, HTT_RX_DESC_HL_INFO_ENCRYPTED))
+ return false;
+
+ tid = MS(rx->hdr.info0, HTT_RX_INDICATION_INFO0_EXT_TID);
+ last_pn_valid = peer->tids_last_pn_valid[tid];
+ last_pn = &peer->tids_last_pn[tid];
+
+ if (MS(rx_desc_info, HTT_RX_DESC_HL_INFO_MCAST_BCAST))
+ sec_index = HTT_TXRX_SEC_MCAST;
+ else
+ sec_index = HTT_TXRX_SEC_UCAST;
+
+ sec_type = peer->rx_pn[sec_index].sec_type;
+ ath10k_htt_rx_mpdu_desc_pn_hl(rx_desc, &new_pn, peer->rx_pn[sec_index].pn_len);
+
+ if (sec_type != HTT_SECURITY_AES_CCMP &&
+ sec_type != HTT_SECURITY_TKIP &&
+ sec_type != HTT_SECURITY_TKIP_NOMIC)
+ return false;
+
+ if (last_pn_valid)
+ pn_invalid = ath10k_htt_rx_pn_cmp48(&new_pn, last_pn);
+ else
+ peer->tids_last_pn_valid[tid] = 1;
+
+ if (!pn_invalid)
+ last_pn->pn48 = new_pn.pn48;
+
+ return pn_invalid;
+}
+
static bool ath10k_htt_rx_proc_rx_ind_hl(struct ath10k_htt *htt,
struct htt_rx_indication_hl *rx,
- struct sk_buff *skb)
+ struct sk_buff *skb,
+ enum htt_rx_pn_check_type check_pn_type,
+ enum htt_rx_tkip_demic_type tkip_mic_type)
{
struct ath10k *ar = htt->ar;
struct ath10k_peer *peer;
@@ -2076,13 +2158,14 @@ static bool ath10k_htt_rx_proc_rx_ind_hl(struct ath10k_htt *htt,
int num_mpdu_ranges;
size_t tot_hdr_len;
struct ieee80211_channel *ch;
+ bool pn_invalid;
peer_id = __le16_to_cpu(rx->hdr.peer_id);
spin_lock_bh(&ar->data_lock);
peer = ath10k_peer_find_by_id(ar, peer_id);
spin_unlock_bh(&ar->data_lock);
- if (!peer)
+ if (!peer && peer_id != HTT_INVALID_PEERID)
ath10k_warn(ar, "Got RX ind from invalid peer: %u\n", peer_id);
num_mpdu_ranges = MS(__le32_to_cpu(rx->hdr.info1),
@@ -2101,12 +2184,22 @@ static bool ath10k_htt_rx_proc_rx_ind_hl(struct ath10k_htt *htt,
num_mpdu_ranges);
if (mpdu_ranges->mpdu_range_status !=
- HTT_RX_IND_MPDU_STATUS_OK) {
+ HTT_RX_IND_MPDU_STATUS_OK &&
+ mpdu_ranges->mpdu_range_status !=
+ HTT_RX_IND_MPDU_STATUS_TKIP_MIC_ERR) {
ath10k_warn(ar, "MPDU range status: %d\n",
mpdu_ranges->mpdu_range_status);
goto err;
}
+ if (check_pn_type == HTT_RX_PN_CHECK) {
+ spin_lock_bh(&ar->data_lock);
+ pn_invalid = ath10k_htt_rx_pn_check_replay_hl(ar, peer, rx);
+ spin_unlock_bh(&ar->data_lock);
+ if (pn_invalid)
+ goto err;
+ }
+
/* Strip off all headers before the MAC header before delivery to
* mac80211
*/
@@ -2114,6 +2207,7 @@ static bool ath10k_htt_rx_proc_rx_ind_hl(struct ath10k_htt *htt,
sizeof(rx->ppdu) + sizeof(rx->prefix) +
sizeof(rx->fw_desc) +
sizeof(*mpdu_ranges) * num_mpdu_ranges + rx_desc_len;
+
skb_pull(skb, tot_hdr_len);
hdr = (struct ieee80211_hdr *)skb->data;
@@ -2162,6 +2256,13 @@ static bool ath10k_htt_rx_proc_rx_ind_hl(struct ath10k_htt *htt,
RX_FLAG_MMIC_STRIPPED;
}
+ if (tkip_mic_type == HTT_RX_TKIP_MIC)
+ rx_status->flag &= ~RX_FLAG_IV_STRIPPED &
+ ~RX_FLAG_MMIC_STRIPPED;
+
+ if (mpdu_ranges->mpdu_range_status == HTT_RX_IND_MPDU_STATUS_TKIP_MIC_ERR)
+ rx_status->flag |= RX_FLAG_MMIC_ERROR;
+
ieee80211_rx_ni(ar->hw, skb);
/* We have delivered the skb to the upper layers (mac80211) so we
@@ -2175,6 +2276,231 @@ err:
return true;
}
+static int ath10k_htt_rx_frag_tkip_decap_nomic(struct sk_buff *skb,
+ u16 head_len,
+ u16 hdr_len)
+{
+ u8 *ivp, *orig_hdr;
+
+ orig_hdr = skb->data;
+ ivp = orig_hdr + hdr_len + head_len;
+
+ /* the ExtIV bit is always set to 1 for TKIP */
+ if (!(ivp[IEEE80211_WEP_IV_LEN - 1] & ATH10K_IEEE80211_EXTIV))
+ return -EINVAL;
+
+ memmove(orig_hdr + IEEE80211_TKIP_IV_LEN, orig_hdr, head_len + hdr_len);
+ skb_pull(skb, IEEE80211_TKIP_IV_LEN);
+ skb_trim(skb, skb->len - ATH10K_IEEE80211_TKIP_MICLEN);
+ return 0;
+}
+
+static int ath10k_htt_rx_frag_tkip_decap_withmic(struct sk_buff *skb,
+ u16 head_len,
+ u16 hdr_len)
+{
+ u8 *ivp, *orig_hdr;
+
+ orig_hdr = skb->data;
+ ivp = orig_hdr + hdr_len + head_len;
+
+ /* the ExtIV bit is always set to 1 for TKIP */
+ if (!(ivp[IEEE80211_WEP_IV_LEN - 1] & ATH10K_IEEE80211_EXTIV))
+ return -EINVAL;
+
+ memmove(orig_hdr + IEEE80211_TKIP_IV_LEN, orig_hdr, head_len + hdr_len);
+ skb_pull(skb, IEEE80211_TKIP_IV_LEN);
+ skb_trim(skb, skb->len - IEEE80211_TKIP_ICV_LEN);
+ return 0;
+}
+
+static int ath10k_htt_rx_frag_ccmp_decap(struct sk_buff *skb,
+ u16 head_len,
+ u16 hdr_len)
+{
+ u8 *ivp, *orig_hdr;
+
+ orig_hdr = skb->data;
+ ivp = orig_hdr + hdr_len + head_len;
+
+ /* the ExtIV bit is always set to 1 for CCMP */
+ if (!(ivp[IEEE80211_WEP_IV_LEN - 1] & ATH10K_IEEE80211_EXTIV))
+ return -EINVAL;
+
+ skb_trim(skb, skb->len - IEEE80211_CCMP_MIC_LEN);
+ memmove(orig_hdr + IEEE80211_CCMP_HDR_LEN, orig_hdr, head_len + hdr_len);
+ skb_pull(skb, IEEE80211_CCMP_HDR_LEN);
+ return 0;
+}
+
+static int ath10k_htt_rx_frag_wep_decap(struct sk_buff *skb,
+ u16 head_len,
+ u16 hdr_len)
+{
+ u8 *orig_hdr;
+
+ orig_hdr = skb->data;
+
+ memmove(orig_hdr + IEEE80211_WEP_IV_LEN,
+ orig_hdr, head_len + hdr_len);
+ skb_pull(skb, IEEE80211_WEP_IV_LEN);
+ skb_trim(skb, skb->len - IEEE80211_WEP_ICV_LEN);
+ return 0;
+}
+
+static bool ath10k_htt_rx_proc_rx_frag_ind_hl(struct ath10k_htt *htt,
+ struct htt_rx_fragment_indication *rx,
+ struct sk_buff *skb)
+{
+ struct ath10k *ar = htt->ar;
+ enum htt_rx_tkip_demic_type tkip_mic = HTT_RX_NON_TKIP_MIC;
+ enum htt_txrx_sec_cast_type sec_index;
+ struct htt_rx_indication_hl *rx_hl;
+ enum htt_security_types sec_type;
+ u32 tid, frag, seq, rx_desc_info;
+ union htt_rx_pn_t new_pn = {0};
+ struct htt_hl_rx_desc *rx_desc;
+ u16 peer_id, sc, hdr_space;
+ union htt_rx_pn_t *last_pn;
+ struct ieee80211_hdr *hdr;
+ int ret, num_mpdu_ranges;
+ struct ath10k_peer *peer;
+ struct htt_resp *resp;
+ size_t tot_hdr_len;
+
+ resp = (struct htt_resp *)(skb->data + HTT_RX_FRAG_IND_INFO0_HEADER_LEN);
+ skb_pull(skb, HTT_RX_FRAG_IND_INFO0_HEADER_LEN);
+ skb_trim(skb, skb->len - FCS_LEN);
+
+ peer_id = __le16_to_cpu(rx->peer_id);
+ rx_hl = (struct htt_rx_indication_hl *)(&resp->rx_ind_hl);
+
+ spin_lock_bh(&ar->data_lock);
+ peer = ath10k_peer_find_by_id(ar, peer_id);
+ if (!peer) {
+ ath10k_dbg(ar, ATH10K_DBG_HTT, "invalid peer: %u\n", peer_id);
+ goto err;
+ }
+
+ num_mpdu_ranges = MS(__le32_to_cpu(rx_hl->hdr.info1),
+ HTT_RX_INDICATION_INFO1_NUM_MPDU_RANGES);
+
+ tot_hdr_len = sizeof(struct htt_resp_hdr) +
+ sizeof(rx_hl->hdr) +
+ sizeof(rx_hl->ppdu) +
+ sizeof(rx_hl->prefix) +
+ sizeof(rx_hl->fw_desc) +
+ sizeof(struct htt_rx_indication_mpdu_range) * num_mpdu_ranges;
+
+ tid = MS(rx_hl->hdr.info0, HTT_RX_INDICATION_INFO0_EXT_TID);
+ rx_desc = (struct htt_hl_rx_desc *)(skb->data + tot_hdr_len);
+ rx_desc_info = __le32_to_cpu(rx_desc->info);
+
+ if (!MS(rx_desc_info, HTT_RX_DESC_HL_INFO_ENCRYPTED)) {
+ spin_unlock_bh(&ar->data_lock);
+ return ath10k_htt_rx_proc_rx_ind_hl(htt, &resp->rx_ind_hl, skb,
+ HTT_RX_NON_PN_CHECK,
+ HTT_RX_NON_TKIP_MIC);
+ }
+
+ hdr = (struct ieee80211_hdr *)((u8 *)rx_desc + rx_hl->fw_desc.len);
+
+ if (ieee80211_has_retry(hdr->frame_control))
+ goto err;
+
+ hdr_space = ieee80211_hdrlen(hdr->frame_control);
+ sc = __le16_to_cpu(hdr->seq_ctrl);
+ seq = (sc & IEEE80211_SCTL_SEQ) >> 4;
+ frag = sc & IEEE80211_SCTL_FRAG;
+
+ sec_index = MS(rx_desc_info, HTT_RX_DESC_HL_INFO_MCAST_BCAST) ?
+ HTT_TXRX_SEC_MCAST : HTT_TXRX_SEC_UCAST;
+ sec_type = peer->rx_pn[sec_index].sec_type;
+ ath10k_htt_rx_mpdu_desc_pn_hl(rx_desc, &new_pn, peer->rx_pn[sec_index].pn_len);
+
+ switch (sec_type) {
+ case HTT_SECURITY_TKIP:
+ tkip_mic = HTT_RX_TKIP_MIC;
+ ret = ath10k_htt_rx_frag_tkip_decap_withmic(skb,
+ tot_hdr_len +
+ rx_hl->fw_desc.len,
+ hdr_space);
+ if (ret)
+ goto err;
+ break;
+ case HTT_SECURITY_TKIP_NOMIC:
+ ret = ath10k_htt_rx_frag_tkip_decap_nomic(skb,
+ tot_hdr_len +
+ rx_hl->fw_desc.len,
+ hdr_space);
+ if (ret)
+ goto err;
+ break;
+ case HTT_SECURITY_AES_CCMP:
+ ret = ath10k_htt_rx_frag_ccmp_decap(skb,
+ tot_hdr_len + rx_hl->fw_desc.len,
+ hdr_space);
+ if (ret)
+ goto err;
+ break;
+ case HTT_SECURITY_WEP128:
+ case HTT_SECURITY_WEP104:
+ case HTT_SECURITY_WEP40:
+ ret = ath10k_htt_rx_frag_wep_decap(skb,
+ tot_hdr_len + rx_hl->fw_desc.len,
+ hdr_space);
+ if (ret)
+ goto err;
+ break;
+ default:
+ break;
+ }
+
+ resp = (struct htt_resp *)(skb->data);
+
+ if (sec_type != HTT_SECURITY_AES_CCMP &&
+ sec_type != HTT_SECURITY_TKIP &&
+ sec_type != HTT_SECURITY_TKIP_NOMIC) {
+ spin_unlock_bh(&ar->data_lock);
+ return ath10k_htt_rx_proc_rx_ind_hl(htt, &resp->rx_ind_hl, skb,
+ HTT_RX_NON_PN_CHECK,
+ HTT_RX_NON_TKIP_MIC);
+ }
+
+ last_pn = &peer->frag_tids_last_pn[tid];
+
+ if (frag == 0) {
+ if (ath10k_htt_rx_pn_check_replay_hl(ar, peer, &resp->rx_ind_hl))
+ goto err;
+
+ last_pn->pn48 = new_pn.pn48;
+ peer->frag_tids_seq[tid] = seq;
+ } else if (sec_type == HTT_SECURITY_AES_CCMP) {
+ if (seq != peer->frag_tids_seq[tid])
+ goto err;
+
+ if (new_pn.pn48 != last_pn->pn48 + 1)
+ goto err;
+
+ last_pn->pn48 = new_pn.pn48;
+ last_pn = &peer->tids_last_pn[tid];
+ last_pn->pn48 = new_pn.pn48;
+ }
+
+ spin_unlock_bh(&ar->data_lock);
+
+ return ath10k_htt_rx_proc_rx_ind_hl(htt, &resp->rx_ind_hl, skb,
+ HTT_RX_NON_PN_CHECK, tkip_mic);
+
+err:
+ spin_unlock_bh(&ar->data_lock);
+
+ /* Tell the caller that it must free the skb since we have not
+ * consumed it
+ */
+ return true;
+}
+
static void ath10k_htt_rx_proc_rx_ind_ll(struct ath10k_htt *htt,
struct htt_rx_indication *rx)
{
@@ -2193,9 +2519,7 @@ static void ath10k_htt_rx_proc_rx_ind_ll(struct ath10k_htt *htt,
mpdu_ranges = htt_rx_ind_get_mpdu_ranges(rx);
ath10k_dbg_dump(ar, ATH10K_DBG_HTT_DUMP, NULL, "htt rx ind: ",
- rx, sizeof(*rx) +
- (sizeof(struct htt_rx_indication_mpdu_range) *
- num_mpdu_ranges));
+ rx, struct_size(rx, mpdu_ranges, num_mpdu_ranges));
for (i = 0; i < num_mpdu_ranges; i++)
mpdu_count += mpdu_ranges[i].mpdu_count;
@@ -2277,7 +2601,9 @@ static void ath10k_htt_rx_tx_compl_ind(struct ath10k *ar,
* Note that with only one concurrent reader and one concurrent
* writer, you don't need extra locking to use these macro.
*/
- if (!kfifo_put(&htt->txdone_fifo, tx_done)) {
+ if (ar->bus_param.dev_type == ATH10K_DEV_TYPE_HL) {
+ ath10k_txrx_tx_unref(htt, &tx_done);
+ } else if (!kfifo_put(&htt->txdone_fifo, tx_done)) {
ath10k_warn(ar, "txdone fifo overrun, msdu_id %d status %d\n",
tx_done.msdu_id, tx_done.status);
ath10k_txrx_tx_unref(htt, &tx_done);
@@ -2938,14 +3264,14 @@ ath10k_accumulate_per_peer_tx_stats(struct ath10k *ar,
#define STATS_OP_FMT(name) tx_stats->stats[ATH10K_STATS_TYPE_##name]
- if (txrate->flags == RATE_INFO_FLAGS_VHT_MCS) {
+ if (txrate->flags & RATE_INFO_FLAGS_VHT_MCS) {
STATS_OP_FMT(SUCC).vht[0][mcs] += pstats->succ_bytes;
STATS_OP_FMT(SUCC).vht[1][mcs] += pstats->succ_pkts;
STATS_OP_FMT(FAIL).vht[0][mcs] += pstats->failed_bytes;
STATS_OP_FMT(FAIL).vht[1][mcs] += pstats->failed_pkts;
STATS_OP_FMT(RETRY).vht[0][mcs] += pstats->retry_bytes;
STATS_OP_FMT(RETRY).vht[1][mcs] += pstats->retry_pkts;
- } else if (txrate->flags == RATE_INFO_FLAGS_MCS) {
+ } else if (txrate->flags & RATE_INFO_FLAGS_MCS) {
STATS_OP_FMT(SUCC).ht[0][ht_idx] += pstats->succ_bytes;
STATS_OP_FMT(SUCC).ht[1][ht_idx] += pstats->succ_pkts;
STATS_OP_FMT(FAIL).ht[0][ht_idx] += pstats->failed_bytes;
@@ -2966,7 +3292,7 @@ ath10k_accumulate_per_peer_tx_stats(struct ath10k *ar,
if (ATH10K_HW_AMPDU(pstats->flags)) {
tx_stats->ba_fails += ATH10K_HW_BA_FAIL(pstats->flags);
- if (txrate->flags == RATE_INFO_FLAGS_MCS) {
+ if (txrate->flags & RATE_INFO_FLAGS_MCS) {
STATS_OP_FMT(AMPDU).ht[0][ht_idx] +=
pstats->succ_bytes + pstats->retry_bytes;
STATS_OP_FMT(AMPDU).ht[1][ht_idx] +=
@@ -3265,6 +3591,51 @@ out:
rcu_read_unlock();
}
+static int ath10k_htt_rx_pn_len(enum htt_security_types sec_type)
+{
+ switch (sec_type) {
+ case HTT_SECURITY_TKIP:
+ case HTT_SECURITY_TKIP_NOMIC:
+ case HTT_SECURITY_AES_CCMP:
+ return 48;
+ default:
+ return 0;
+ }
+}
+
+static void ath10k_htt_rx_sec_ind_handler(struct ath10k *ar,
+ struct htt_security_indication *ev)
+{
+ enum htt_txrx_sec_cast_type sec_index;
+ enum htt_security_types sec_type;
+ struct ath10k_peer *peer;
+
+ spin_lock_bh(&ar->data_lock);
+
+ peer = ath10k_peer_find_by_id(ar, __le16_to_cpu(ev->peer_id));
+ if (!peer) {
+ ath10k_warn(ar, "failed to find peer id %d for security indication",
+ __le16_to_cpu(ev->peer_id));
+ goto out;
+ }
+
+ sec_type = MS(ev->flags, HTT_SECURITY_TYPE);
+
+ if (ev->flags & HTT_SECURITY_IS_UNICAST)
+ sec_index = HTT_TXRX_SEC_UCAST;
+ else
+ sec_index = HTT_TXRX_SEC_MCAST;
+
+ peer->rx_pn[sec_index].sec_type = sec_type;
+ peer->rx_pn[sec_index].pn_len = ath10k_htt_rx_pn_len(sec_type);
+
+ memset(peer->tids_last_pn_valid, 0, sizeof(peer->tids_last_pn_valid));
+ memset(peer->tids_last_pn, 0, sizeof(peer->tids_last_pn));
+
+out:
+ spin_unlock_bh(&ar->data_lock);
+}
+
bool ath10k_htt_t2h_msg_handler(struct ath10k *ar, struct sk_buff *skb)
{
struct ath10k_htt *htt = &ar->htt;
@@ -3296,7 +3667,9 @@ bool ath10k_htt_t2h_msg_handler(struct ath10k *ar, struct sk_buff *skb)
if (ar->bus_param.dev_type == ATH10K_DEV_TYPE_HL)
return ath10k_htt_rx_proc_rx_ind_hl(htt,
&resp->rx_ind_hl,
- skb);
+ skb,
+ HTT_RX_PN_CHECK,
+ HTT_RX_NON_TKIP_MIC);
else
ath10k_htt_rx_proc_rx_ind_ll(htt, &resp->rx_ind);
break;
@@ -3358,6 +3731,7 @@ bool ath10k_htt_t2h_msg_handler(struct ath10k *ar, struct sk_buff *skb)
struct ath10k *ar = htt->ar;
struct htt_security_indication *ev = &resp->security_indication;
+ ath10k_htt_rx_sec_ind_handler(ar, ev);
ath10k_dbg(ar, ATH10K_DBG_HTT,
"sec ind peer_id %d unicast %d type %d\n",
__le16_to_cpu(ev->peer_id),
@@ -3370,6 +3744,10 @@ bool ath10k_htt_t2h_msg_handler(struct ath10k *ar, struct sk_buff *skb)
ath10k_dbg_dump(ar, ATH10K_DBG_HTT_DUMP, NULL, "htt event: ",
skb->data, skb->len);
atomic_inc(&htt->num_mpdus_ready);
+
+ return ath10k_htt_rx_proc_rx_frag_ind(htt,
+ &resp->rx_frag_ind,
+ skb);
break;
}
case HTT_T2H_MSG_TYPE_TEST:
@@ -3583,6 +3961,7 @@ static const struct ath10k_htt_rx_ops htt_rx_ops_64 = {
};
static const struct ath10k_htt_rx_ops htt_rx_ops_hl = {
+ .htt_rx_proc_rx_frag_ind = ath10k_htt_rx_proc_rx_frag_ind_hl,
};
void ath10k_htt_set_rx_ops(struct ath10k_htt *htt)
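The high-latency rx path added above reconstructs a 48-bit packet number from the rx descriptor (ath10k_htt_rx_mpdu_desc_pn_hl) and drops any frame whose PN does not strictly increase for that TID (ath10k_htt_rx_pn_cmp48). A standalone sketch of that check using the same 48-bit mask; helper names here are illustrative only:

#include <stdint.h>
#include <stdio.h>

#define PN48_MASK 0xffffffffffffULL

/* Assemble a 48-bit PN: the low 32 bits plus the low 16 bits of the
 * next descriptor word, as the driver does for CCMP/TKIP.
 */
static uint64_t pn48(uint32_t pn_31_0, uint32_t pn_63_32)
{
	return (uint64_t)pn_31_0 | ((uint64_t)(pn_63_32 & 0xffff) << 32);
}

/* Replay check: invalid unless the new PN is strictly greater. */
static int pn_is_replay(uint64_t new_pn, uint64_t old_pn)
{
	return (new_pn & PN48_MASK) <= (old_pn & PN48_MASK);
}

int main(void)
{
	uint64_t last = pn48(0xfffffffe, 0x0001);
	uint64_t next = pn48(0xffffffff, 0x0001);

	printf("forward:  %s\n", pn_is_replay(next, last) ? "drop" : "accept");
	printf("replayed: %s\n", pn_is_replay(last, next) ? "drop" : "accept");
	return 0;
}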
diff --git a/drivers/net/wireless/ath/ath10k/htt_tx.c b/drivers/net/wireless/ath/ath10k/htt_tx.c
index d8e9cc0bb772..2ef717f18795 100644
--- a/drivers/net/wireless/ath/ath10k/htt_tx.c
+++ b/drivers/net/wireless/ath/ath10k/htt_tx.c
@@ -580,7 +580,8 @@ int ath10k_htt_h2t_ver_req_msg(struct ath10k_htt *htt)
return 0;
}
-int ath10k_htt_h2t_stats_req(struct ath10k_htt *htt, u8 mask, u64 cookie)
+int ath10k_htt_h2t_stats_req(struct ath10k_htt *htt, u32 mask, u32 reset_mask,
+ u64 cookie)
{
struct ath10k *ar = htt->ar;
struct htt_stats_req *req;
@@ -603,11 +604,11 @@ int ath10k_htt_h2t_stats_req(struct ath10k_htt *htt, u8 mask, u64 cookie)
memset(req, 0, sizeof(*req));
- /* currently we support only max 8 bit masks so no need to worry
+ /* currently we support only max 24 bit masks so no need to worry
* about endian support
*/
- req->upload_types[0] = mask;
- req->reset_types[0] = mask;
+ memcpy(req->upload_types, &mask, 3);
+ memcpy(req->reset_types, &reset_mask, 3);
req->stat_type = HTT_STATS_REQ_CFG_STAT_TYPE_INVALID;
req->cookie_lsb = cpu_to_le32(cookie & 0xffffffff);
req->cookie_msb = cpu_to_le32((cookie & 0xffffffff00000000ULL) >> 32);
@@ -977,9 +978,9 @@ static int ath10k_htt_send_rx_ring_cfg_hl(struct ath10k_htt *htt)
return 0;
}
-int ath10k_htt_h2t_aggr_cfg_msg(struct ath10k_htt *htt,
- u8 max_subfrms_ampdu,
- u8 max_subfrms_amsdu)
+static int ath10k_htt_h2t_aggr_cfg_msg_32(struct ath10k_htt *htt,
+ u8 max_subfrms_ampdu,
+ u8 max_subfrms_amsdu)
{
struct ath10k *ar = htt->ar;
struct htt_aggr_conf *aggr_conf;
@@ -1244,6 +1245,7 @@ static int ath10k_htt_tx_hl(struct ath10k_htt *htt, enum ath10k_hw_txrx_mode txm
u8 tid = ath10k_htt_tx_get_tid(msdu, is_eth);
u8 flags0 = 0;
u16 flags1 = 0;
+ u16 msdu_id = 0;
data_len = msdu->len;
@@ -1291,6 +1293,23 @@ static int ath10k_htt_tx_hl(struct ath10k_htt *htt, enum ath10k_hw_txrx_mode txm
}
}
+ if (ar->bus_param.hl_msdu_ids) {
+ flags1 |= HTT_DATA_TX_DESC_FLAGS1_POSTPONED;
+ res = ath10k_htt_tx_alloc_msdu_id(htt, msdu);
+ if (res < 0) {
+ ath10k_err(ar, "msdu_id allocation failed %d\n", res);
+ goto out;
+ }
+ msdu_id = res;
+ }
+
+ /* As msdu is freed by mac80211 (in ieee80211_tx_status()) and by
+ * ath10k (in ath10k_htt_htc_tx_complete()) we have to increase
+ * reference by one to avoid a use-after-free case and a double
+ * free.
+ */
+ skb_get(msdu);
+
skb_push(msdu, sizeof(*cmd_hdr));
skb_push(msdu, sizeof(*tx_desc));
cmd_hdr = (struct htt_cmd_hdr *)msdu->data;
@@ -1300,7 +1319,7 @@ static int ath10k_htt_tx_hl(struct ath10k_htt *htt, enum ath10k_hw_txrx_mode txm
tx_desc->flags0 = flags0;
tx_desc->flags1 = __cpu_to_le16(flags1);
tx_desc->len = __cpu_to_le16(data_len);
- tx_desc->id = 0;
+ tx_desc->id = __cpu_to_le16(msdu_id);
tx_desc->frags_paddr = 0; /* always zero */
/* Initialize peer_id to INVALID_PEER because this is NOT
* Reinjection path
@@ -1728,7 +1747,7 @@ static const struct ath10k_htt_tx_ops htt_tx_ops_32 = {
.htt_tx = ath10k_htt_tx_32,
.htt_alloc_txbuff = ath10k_htt_tx_alloc_cont_txbuf_32,
.htt_free_txbuff = ath10k_htt_tx_free_cont_txbuf_32,
- .htt_h2t_aggr_cfg_msg = ath10k_htt_h2t_aggr_cfg_msg,
+ .htt_h2t_aggr_cfg_msg = ath10k_htt_h2t_aggr_cfg_msg_32,
};
static const struct ath10k_htt_tx_ops htt_tx_ops_64 = {
@@ -1746,6 +1765,7 @@ static const struct ath10k_htt_tx_ops htt_tx_ops_hl = {
.htt_send_rx_ring_cfg = ath10k_htt_send_rx_ring_cfg_hl,
.htt_send_frag_desc_bank_cfg = ath10k_htt_send_frag_desc_bank_cfg_32,
.htt_tx = ath10k_htt_tx_hl,
+ .htt_h2t_aggr_cfg_msg = ath10k_htt_h2t_aggr_cfg_msg_32,
};
void ath10k_htt_set_tx_ops(struct ath10k_htt *htt)
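ath10k_htt_h2t_stats_req() above now copies the low three bytes of the 32-bit upload and reset masks into the request's byte arrays instead of a single byte each. A standalone sketch of that byte-wise copy (struct layout illustrative only); note that taking the first three bytes of a u32 yields its low 24 bits only on a little-endian host, which is the assumption the in-driver comment relies on:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct stats_req {
	uint8_t upload_types[3];
	uint8_t reset_types[3];
};

int main(void)
{
	struct stats_req req;
	uint32_t mask = 0x1ffff;	/* 17 bits currently used by debugfs */
	uint32_t reset_mask = 0x00ff00;

	memset(&req, 0, sizeof(req));

	/* On a little-endian host the first three bytes of a u32 are its
	 * low 24 bits; a big-endian host would need a byte swap first.
	 */
	memcpy(req.upload_types, &mask, 3);
	memcpy(req.reset_types, &reset_mask, 3);

	printf("upload: %02x %02x %02x\n",
	       req.upload_types[0], req.upload_types[1], req.upload_types[2]);
	printf("reset:  %02x %02x %02x\n",
	       req.reset_types[0], req.reset_types[1], req.reset_types[2]);
	return 0;
}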
diff --git a/drivers/net/wireless/ath/ath10k/hw.c b/drivers/net/wireless/ath/ath10k/hw.c
index ad082b7d7643..c415e971735b 100644
--- a/drivers/net/wireless/ath/ath10k/hw.c
+++ b/drivers/net/wireless/ath/ath10k/hw.c
@@ -158,7 +158,7 @@ const struct ath10k_hw_values qca6174_values = {
};
const struct ath10k_hw_values qca99x0_values = {
- .rtc_state_val_on = 5,
+ .rtc_state_val_on = 7,
.ce_count = 12,
.msi_assign_ce_max = 12,
.num_target_ce_config_wlan = 10,
@@ -1153,6 +1153,10 @@ const struct ath10k_hw_ops qca6174_ops = {
.is_rssi_enable = ath10k_htt_tx_rssi_enable,
};
+const struct ath10k_hw_ops qca6174_sdio_ops = {
+ .enable_pll_clk = ath10k_hw_qca6174_enable_pll_clock,
+};
+
const struct ath10k_hw_ops wcn3990_ops = {
.tx_data_rssi_pad_bytes = ath10k_get_htt_tx_data_rssi_pad,
.is_rssi_enable = ath10k_htt_tx_rssi_enable_wcn3990,
diff --git a/drivers/net/wireless/ath/ath10k/hw.h b/drivers/net/wireless/ath/ath10k/hw.h
index 71314999aa24..2ae57c1de7b5 100644
--- a/drivers/net/wireless/ath/ath10k/hw.h
+++ b/drivers/net/wireless/ath/ath10k/hw.h
@@ -24,6 +24,7 @@ enum ath10k_bus {
#define QCA988X_2_0_DEVICE_ID (0x003c)
#define QCA6164_2_1_DEVICE_ID (0x0041)
#define QCA6174_2_1_DEVICE_ID (0x003e)
+#define QCA6174_3_2_DEVICE_ID (0x0042)
#define QCA99X0_2_0_DEVICE_ID (0x0040)
#define QCA9888_2_0_DEVICE_ID (0x0056)
#define QCA9984_1_0_DEVICE_ID (0x0046)
@@ -151,6 +152,8 @@ enum qca9377_chip_id_rev {
#define ATH10K_FW_UTF_FILE "utf.bin"
#define ATH10K_FW_UTF_API2_FILE "utf-2.bin"
+#define ATH10K_FW_UTF_FILE_BASE "utf"
+
/* includes also the null byte */
#define ATH10K_FIRMWARE_MAGIC "QCA-ATH10K"
#define ATH10K_BOARD_MAGIC "QCA-ATH10K-BOARD"
@@ -606,6 +609,14 @@ struct ath10k_hw_params {
/* target supporting fw download via diag ce */
bool fw_diag_ce_download;
+
+ /* need to set uart pin if disable uart print, workaround for a
+ * firmware bug
+ */
+ bool uart_pin_workaround;
+
+ /* tx stats support over pktlog */
+ bool tx_stats_over_pktlog;
};
struct htt_rx_desc;
@@ -625,6 +636,7 @@ struct ath10k_hw_ops {
extern const struct ath10k_hw_ops qca988x_ops;
extern const struct ath10k_hw_ops qca99x0_ops;
extern const struct ath10k_hw_ops qca6174_ops;
+extern const struct ath10k_hw_ops qca6174_sdio_ops;
extern const struct ath10k_hw_ops wcn3990_ops;
extern const struct ath10k_hw_clk_params qca6174_clk[];
@@ -1095,6 +1107,7 @@ ath10k_is_rssi_enable(struct ath10k_hw_params *hw,
#define MBOX_CPU_INT_STATUS_ENABLE_ADDRESS 0x00000819
#define MBOX_CPU_INT_STATUS_ENABLE_BIT_LSB 0
#define MBOX_CPU_INT_STATUS_ENABLE_BIT_MASK 0x000000ff
+#define MBOX_CPU_STATUS_ENABLE_ASSERT_MASK 0x00000001
#define MBOX_ERROR_STATUS_ENABLE_ADDRESS 0x0000081a
#define MBOX_ERROR_STATUS_ENABLE_RX_UNDERFLOW_LSB 1
#define MBOX_ERROR_STATUS_ENABLE_RX_UNDERFLOW_MASK 0x00000002
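Many of the register and descriptor fields touched in this patch are expressed as MASK/LSB pairs and read with the driver's MS()-style mask-then-shift helper. A standalone sketch using the HTT_RX_DESC_HL_INFO_MCAST_BCAST and KEY_ID_OCT values visible in the htt.h hunk earlier in this patch (the MS() macro below is written from its usual shape rather than copied from the kernel):

#include <stdint.h>
#include <stdio.h>

#define HTT_RX_DESC_HL_INFO_MCAST_BCAST_MASK	0x00010000
#define HTT_RX_DESC_HL_INFO_MCAST_BCAST_LSB	16
#define HTT_RX_DESC_HL_INFO_KEY_ID_OCT_MASK	0x01fe0000
#define HTT_RX_DESC_HL_INFO_KEY_ID_OCT_LSB	17

/* Mask out the field, then shift it down to bit 0. */
#define MS(_v, _f) (((_v) & _f##_MASK) >> _f##_LSB)

int main(void)
{
	uint32_t info = (3u << 17) | (1u << 16); /* key id 3, multicast */

	printf("mcast:  %u\n", MS(info, HTT_RX_DESC_HL_INFO_MCAST_BCAST));
	printf("key id: %u\n", MS(info, HTT_RX_DESC_HL_INFO_KEY_ID_OCT));
	return 0;
}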
diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
index 9c703d287333..e43a566eef77 100644
--- a/drivers/net/wireless/ath/ath10k/mac.c
+++ b/drivers/net/wireless/ath/ath10k/mac.c
@@ -693,6 +693,26 @@ ath10k_mac_get_any_chandef_iter(struct ieee80211_hw *hw,
*def = &conf->def;
}
+static void ath10k_wait_for_peer_delete_done(struct ath10k *ar, u32 vdev_id,
+ const u8 *addr)
+{
+ unsigned long time_left;
+ int ret;
+
+ if (test_bit(WMI_SERVICE_SYNC_DELETE_CMDS, ar->wmi.svc_map)) {
+ ret = ath10k_wait_for_peer_deleted(ar, vdev_id, addr);
+ if (ret) {
+ ath10k_warn(ar, "failed wait for peer deleted");
+ return;
+ }
+
+ time_left = wait_for_completion_timeout(&ar->peer_delete_done,
+ 5 * HZ);
+ if (!time_left)
+ ath10k_warn(ar, "Timeout in receiving peer delete response\n");
+ }
+}
+
static int ath10k_peer_create(struct ath10k *ar,
struct ieee80211_vif *vif,
struct ieee80211_sta *sta,
@@ -737,7 +757,7 @@ static int ath10k_peer_create(struct ath10k *ar,
spin_unlock_bh(&ar->data_lock);
ath10k_warn(ar, "failed to find peer %pM on vdev %i after creation\n",
addr, vdev_id);
- ath10k_wmi_peer_delete(ar, vdev_id, addr);
+ ath10k_wait_for_peer_delete_done(ar, vdev_id, addr);
return -ENOENT;
}
@@ -819,6 +839,18 @@ static int ath10k_peer_delete(struct ath10k *ar, u32 vdev_id, const u8 *addr)
if (ret)
return ret;
+ if (test_bit(WMI_SERVICE_SYNC_DELETE_CMDS, ar->wmi.svc_map)) {
+ unsigned long time_left;
+
+ time_left = wait_for_completion_timeout
+ (&ar->peer_delete_done, 5 * HZ);
+
+ if (!time_left) {
+ ath10k_warn(ar, "Timeout in receiving peer delete response\n");
+ return -ETIMEDOUT;
+ }
+ }
+
ar->num_peers--;
return 0;
@@ -1011,6 +1043,7 @@ static int ath10k_monitor_vdev_start(struct ath10k *ar, int vdev_id)
arg.channel.max_antenna_gain = channel->max_antenna_gain * 2;
reinit_completion(&ar->vdev_setup_done);
+ reinit_completion(&ar->vdev_delete_done);
ret = ath10k_wmi_vdev_start(ar, &arg);
if (ret) {
@@ -1060,6 +1093,7 @@ static int ath10k_monitor_vdev_stop(struct ath10k *ar)
ar->monitor_vdev_id, ret);
reinit_completion(&ar->vdev_setup_done);
+ reinit_completion(&ar->vdev_delete_done);
ret = ath10k_wmi_vdev_stop(ar, ar->monitor_vdev_id);
if (ret)
@@ -1401,6 +1435,7 @@ static int ath10k_vdev_stop(struct ath10k_vif *arvif)
lockdep_assert_held(&ar->conf_mutex);
reinit_completion(&ar->vdev_setup_done);
+ reinit_completion(&ar->vdev_delete_done);
ret = ath10k_wmi_vdev_stop(ar, arvif->vdev_id);
if (ret) {
@@ -1437,6 +1472,7 @@ static int ath10k_vdev_start_restart(struct ath10k_vif *arvif,
lockdep_assert_held(&ar->conf_mutex);
reinit_completion(&ar->vdev_setup_done);
+ reinit_completion(&ar->vdev_delete_done);
arg.vdev_id = arvif->vdev_id;
arg.dtim_period = arvif->dtim_period;
@@ -1630,6 +1666,10 @@ static int ath10k_mac_setup_prb_tmpl(struct ath10k_vif *arvif)
if (arvif->vdev_type != WMI_VDEV_TYPE_AP)
return 0;
+ /* For mesh, probe response and beacon share the same template */
+ if (ieee80211_vif_is_mesh(vif))
+ return 0;
+
prb = ieee80211_proberesp_get(hw, vif);
if (!prb) {
ath10k_warn(ar, "failed to get probe resp template from mac80211\n");
@@ -5415,8 +5455,11 @@ static int ath10k_add_interface(struct ieee80211_hw *hw,
err_peer_delete:
if (arvif->vdev_type == WMI_VDEV_TYPE_AP ||
- arvif->vdev_type == WMI_VDEV_TYPE_IBSS)
+ arvif->vdev_type == WMI_VDEV_TYPE_IBSS) {
ath10k_wmi_peer_delete(ar, arvif->vdev_id, vif->addr);
+ ath10k_wait_for_peer_delete_done(ar, arvif->vdev_id,
+ vif->addr);
+ }
err_vdev_delete:
ath10k_wmi_vdev_delete(ar, arvif->vdev_id);
@@ -5451,6 +5494,7 @@ static void ath10k_remove_interface(struct ieee80211_hw *hw,
struct ath10k *ar = hw->priv;
struct ath10k_vif *arvif = (void *)vif->drv_priv;
struct ath10k_peer *peer;
+ unsigned long time_left;
int ret;
int i;
@@ -5481,6 +5525,8 @@ static void ath10k_remove_interface(struct ieee80211_hw *hw,
ath10k_warn(ar, "failed to submit AP/IBSS self-peer removal on vdev %i: %d\n",
arvif->vdev_id, ret);
+ ath10k_wait_for_peer_delete_done(ar, arvif->vdev_id,
+ vif->addr);
kfree(arvif->u.ap.noa_data);
}
@@ -5492,6 +5538,15 @@ static void ath10k_remove_interface(struct ieee80211_hw *hw,
ath10k_warn(ar, "failed to delete WMI vdev %i: %d\n",
arvif->vdev_id, ret);
+ if (test_bit(WMI_SERVICE_SYNC_DELETE_CMDS, ar->wmi.svc_map)) {
+ time_left = wait_for_completion_timeout(&ar->vdev_delete_done,
+ ATH10K_VDEV_DELETE_TIMEOUT_HZ);
+ if (time_left == 0) {
+ ath10k_warn(ar, "Timeout in receiving vdev delete response\n");
+ goto out;
+ }
+ }
+
/* Some firmware revisions don't notify host about self-peer removal
* until after associated vdev is deleted.
*/
@@ -5542,6 +5597,7 @@ static void ath10k_remove_interface(struct ieee80211_hw *hw,
ath10k_mac_txq_unref(ar, vif->txq);
+out:
mutex_unlock(&ar->conf_mutex);
}
@@ -5588,8 +5644,8 @@ static void ath10k_bss_info_changed(struct ieee80211_hw *hw,
struct cfg80211_chan_def def;
u32 vdev_param, pdev_param, slottime, preamble;
u16 bitrate, hw_value;
- u8 rate, basic_rate_idx;
- int rateidx, ret = 0, hw_rate_code;
+ u8 rate, basic_rate_idx, rateidx;
+ int ret = 0, hw_rate_code, mcast_rate;
enum nl80211_band band;
const struct ieee80211_supported_band *sband;
@@ -5776,7 +5832,11 @@ static void ath10k_bss_info_changed(struct ieee80211_hw *hw,
if (changed & BSS_CHANGED_MCAST_RATE &&
!ath10k_mac_vif_chan(arvif->vif, &def)) {
band = def.chan->band;
- rateidx = vif->bss_conf.mcast_rate[band] - 1;
+ mcast_rate = vif->bss_conf.mcast_rate[band];
+ if (mcast_rate > 0)
+ rateidx = mcast_rate - 1;
+ else
+ rateidx = ffs(vif->bss_conf.basic_rates) - 1;
if (ar->phy_capability & WHAL_WLAN_11A_CAPABILITY)
rateidx += ATH10K_MAC_FIRST_OFDM_RATE_IDX;
@@ -6350,6 +6410,41 @@ static void ath10k_mac_dec_num_stations(struct ath10k_vif *arvif,
ar->num_stations--;
}
+static int ath10k_sta_set_txpwr(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta)
+{
+ struct ath10k *ar = hw->priv;
+ struct ath10k_vif *arvif = (void *)vif->drv_priv;
+ int ret = 0;
+ s16 txpwr;
+
+ if (sta->txpwr.type == NL80211_TX_POWER_AUTOMATIC) {
+ txpwr = 0;
+ } else {
+ txpwr = sta->txpwr.power;
+ if (!txpwr)
+ return -EINVAL;
+ }
+
+ if (txpwr > ATH10K_TX_POWER_MAX_VAL || txpwr < ATH10K_TX_POWER_MIN_VAL)
+ return -EINVAL;
+
+ mutex_lock(&ar->conf_mutex);
+
+ ret = ath10k_wmi_peer_set_param(ar, arvif->vdev_id, sta->addr,
+ WMI_PEER_USE_FIXED_PWR, txpwr);
+ if (ret) {
+ ath10k_warn(ar, "failed to set tx power for station ret: %d\n",
+ ret);
+ goto out;
+ }
+
+out:
+ mutex_unlock(&ar->conf_mutex);
+ return ret;
+}
+
static int ath10k_sta_state(struct ieee80211_hw *hw,
struct ieee80211_vif *vif,
struct ieee80211_sta *sta,
@@ -7099,18 +7194,23 @@ exit:
static bool
ath10k_mac_bitrate_mask_has_single_rate(struct ath10k *ar,
enum nl80211_band band,
- const struct cfg80211_bitrate_mask *mask)
+ const struct cfg80211_bitrate_mask *mask,
+ int *vht_num_rates)
{
int num_rates = 0;
- int i;
+ int i, tmp;
num_rates += hweight32(mask->control[band].legacy);
for (i = 0; i < ARRAY_SIZE(mask->control[band].ht_mcs); i++)
num_rates += hweight8(mask->control[band].ht_mcs[i]);
- for (i = 0; i < ARRAY_SIZE(mask->control[band].vht_mcs); i++)
- num_rates += hweight16(mask->control[band].vht_mcs[i]);
+ *vht_num_rates = 0;
+ for (i = 0; i < ARRAY_SIZE(mask->control[band].vht_mcs); i++) {
+ tmp = hweight16(mask->control[band].vht_mcs[i]);
+ num_rates += tmp;
+ *vht_num_rates += tmp;
+ }
return num_rates == 1;
}
@@ -7168,7 +7268,7 @@ static int
ath10k_mac_bitrate_mask_get_single_rate(struct ath10k *ar,
enum nl80211_band band,
const struct cfg80211_bitrate_mask *mask,
- u8 *rate, u8 *nss)
+ u8 *rate, u8 *nss, bool vht_only)
{
int rate_idx;
int i;
@@ -7176,6 +7276,9 @@ ath10k_mac_bitrate_mask_get_single_rate(struct ath10k *ar,
u8 preamble;
u8 hw_rate;
+ if (vht_only)
+ goto next;
+
if (hweight32(mask->control[band].legacy) == 1) {
rate_idx = ffs(mask->control[band].legacy) - 1;
@@ -7209,6 +7312,7 @@ ath10k_mac_bitrate_mask_get_single_rate(struct ath10k *ar,
}
}
+next:
for (i = 0; i < ARRAY_SIZE(mask->control[band].vht_mcs); i++) {
if (hweight16(mask->control[band].vht_mcs[i]) == 1) {
*nss = i + 1;
@@ -7270,7 +7374,8 @@ static int ath10k_mac_set_fixed_rate_params(struct ath10k_vif *arvif,
static bool
ath10k_mac_can_set_bitrate_mask(struct ath10k *ar,
enum nl80211_band band,
- const struct cfg80211_bitrate_mask *mask)
+ const struct cfg80211_bitrate_mask *mask,
+ bool allow_pfr)
{
int i;
u16 vht_mcs;
@@ -7289,7 +7394,8 @@ ath10k_mac_can_set_bitrate_mask(struct ath10k *ar,
case BIT(10) - 1:
break;
default:
- ath10k_warn(ar, "refusing bitrate mask with missing 0-7 VHT MCS rates\n");
+ if (!allow_pfr)
+ ath10k_warn(ar, "refusing bitrate mask with missing 0-7 VHT MCS rates\n");
return false;
}
}
@@ -7297,6 +7403,26 @@ ath10k_mac_can_set_bitrate_mask(struct ath10k *ar,
return true;
}
+static bool ath10k_mac_set_vht_bitrate_mask_fixup(struct ath10k *ar,
+ struct ath10k_vif *arvif,
+ struct ieee80211_sta *sta)
+{
+ int err;
+ u8 rate = arvif->vht_pfr;
+
+ /* skip non vht and multiple rate peers */
+ if (!sta->vht_cap.vht_supported || arvif->vht_num_rates != 1)
+ return false;
+
+ err = ath10k_wmi_peer_set_param(ar, arvif->vdev_id, sta->addr,
+ WMI_PEER_PARAM_FIXED_RATE, rate);
+ if (err)
+ ath10k_warn(ar, "failed to eanble STA %pM peer fixed rate: %d\n",
+ sta->addr, err);
+
+ return true;
+}
+
static void ath10k_mac_set_bitrate_mask_iter(void *data,
struct ieee80211_sta *sta)
{
@@ -7307,6 +7433,9 @@ static void ath10k_mac_set_bitrate_mask_iter(void *data,
if (arsta->arvif != arvif)
return;
+ if (ath10k_mac_set_vht_bitrate_mask_fixup(ar, arvif, sta))
+ return;
+
spin_lock_bh(&ar->data_lock);
arsta->changed |= IEEE80211_RC_SUPP_RATES_CHANGED;
spin_unlock_bh(&ar->data_lock);
@@ -7314,6 +7443,26 @@ static void ath10k_mac_set_bitrate_mask_iter(void *data,
ieee80211_queue_work(ar->hw, &arsta->update_wk);
}
+static void ath10k_mac_clr_bitrate_mask_iter(void *data,
+ struct ieee80211_sta *sta)
+{
+ struct ath10k_vif *arvif = data;
+ struct ath10k_sta *arsta = (struct ath10k_sta *)sta->drv_priv;
+ struct ath10k *ar = arvif->ar;
+ int err;
+
+ /* clear vht peers only */
+ if (arsta->arvif != arvif || !sta->vht_cap.vht_supported)
+ return;
+
+ err = ath10k_wmi_peer_set_param(ar, arvif->vdev_id, sta->addr,
+ WMI_PEER_PARAM_FIXED_RATE,
+ WMI_FIXED_RATE_NONE);
+ if (err)
+ ath10k_warn(ar, "failed to clear STA %pM peer fixed rate: %d\n",
+ sta->addr, err);
+}
+
static int ath10k_mac_op_set_bitrate_mask(struct ieee80211_hw *hw,
struct ieee80211_vif *vif,
const struct cfg80211_bitrate_mask *mask)
@@ -7330,6 +7479,9 @@ static int ath10k_mac_op_set_bitrate_mask(struct ieee80211_hw *hw,
u8 ldpc;
int single_nss;
int ret;
+ int vht_num_rates, allow_pfr;
+ u8 vht_pfr = 0;
+ bool update_bitrate_mask = true;
if (ath10k_mac_vif_chan(vif, &def))
return -EPERM;
@@ -7343,9 +7495,21 @@ static int ath10k_mac_op_set_bitrate_mask(struct ieee80211_hw *hw,
if (sgi == NL80211_TXRATE_FORCE_LGI)
return -EINVAL;
- if (ath10k_mac_bitrate_mask_has_single_rate(ar, band, mask)) {
+ allow_pfr = test_bit(ATH10K_FW_FEATURE_PEER_FIXED_RATE,
+ ar->normal_mode_fw.fw_file.fw_features);
+ if (allow_pfr) {
+ mutex_lock(&ar->conf_mutex);
+ ieee80211_iterate_stations_atomic(ar->hw,
+ ath10k_mac_clr_bitrate_mask_iter,
+ arvif);
+ mutex_unlock(&ar->conf_mutex);
+ }
+
+ if (ath10k_mac_bitrate_mask_has_single_rate(ar, band, mask,
+ &vht_num_rates)) {
ret = ath10k_mac_bitrate_mask_get_single_rate(ar, band, mask,
- &rate, &nss);
+ &rate, &nss,
+ false);
if (ret) {
ath10k_warn(ar, "failed to get single rate for vdev %i: %d\n",
arvif->vdev_id, ret);
@@ -7361,12 +7525,30 @@ static int ath10k_mac_op_set_bitrate_mask(struct ieee80211_hw *hw,
max(ath10k_mac_max_ht_nss(ht_mcs_mask),
ath10k_mac_max_vht_nss(vht_mcs_mask)));
- if (!ath10k_mac_can_set_bitrate_mask(ar, band, mask))
- return -EINVAL;
+ if (!ath10k_mac_can_set_bitrate_mask(ar, band, mask,
+ allow_pfr)) {
+ u8 vht_nss;
+
+ if (!allow_pfr || vht_num_rates != 1)
+ return -EINVAL;
+
+ /* Reaching here means the firmware supports peer fixed rate and a
+ * single VHT rate was requested; don't update the vif bitrate_mask,
+ * as the rate applies only to the specific peer.
+ */
+ ath10k_mac_bitrate_mask_get_single_rate(ar, band, mask,
+ &vht_pfr,
+ &vht_nss,
+ true);
+ update_bitrate_mask = false;
+ }
mutex_lock(&ar->conf_mutex);
- arvif->bitrate_mask = *mask;
+ if (update_bitrate_mask)
+ arvif->bitrate_mask = *mask;
+ arvif->vht_num_rates = vht_num_rates;
+ arvif->vht_pfr = vht_pfr;
ieee80211_iterate_stations_atomic(ar->hw,
ath10k_mac_set_bitrate_mask_iter,
arvif);
@@ -7869,7 +8051,8 @@ ath10k_mac_op_assign_vif_chanctx(struct ieee80211_hw *hw,
arvif->vdev_id, ret);
}
- if (ath10k_peer_stats_enabled(ar)) {
+ if (ath10k_peer_stats_enabled(ar) &&
+ ar->hw_params.tx_stats_over_pktlog) {
ar->pktlog_filter |= ATH10K_PKTLOG_PEER_STATS;
ret = ath10k_wmi_pdev_pktlog_enable(ar,
ar->pktlog_filter);
@@ -8007,6 +8190,7 @@ static const struct ieee80211_ops ath10k_ops = {
.set_key = ath10k_set_key,
.set_default_unicast_key = ath10k_set_default_unicast_key,
.sta_state = ath10k_sta_state,
+ .sta_set_txpwr = ath10k_sta_set_txpwr,
.conf_tx = ath10k_conf_tx,
.remain_on_channel = ath10k_remain_on_channel,
.cancel_remain_on_channel = ath10k_cancel_remain_on_channel,
@@ -8695,6 +8879,9 @@ int ath10k_mac_register(struct ath10k *ar)
wiphy_ext_feature_set(ar->hw->wiphy,
NL80211_EXT_FEATURE_ENABLE_FTM_RESPONDER);
+ if (test_bit(WMI_SERVICE_TX_PWR_PER_PEER, ar->wmi.svc_map))
+ wiphy_ext_feature_set(ar->hw->wiphy,
+ NL80211_EXT_FEATURE_STA_TX_PWR);
/*
* on LL hardware queues are managed entirely by the FW
* so we only advertise to mac we can do the queues thing
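The BSS_CHANGED_MCAST_RATE hunk above now falls back to the lowest basic rate when no multicast rate is configured, using ffs() on the basic-rates bitmap instead of underflowing to index -1. A standalone sketch of that selection (function name hypothetical):

#include <stdint.h>
#include <stdio.h>
#include <strings.h>	/* ffs() */

static int mcast_rateidx(int mcast_rate, uint32_t basic_rates)
{
	if (mcast_rate > 0)
		return mcast_rate - 1;

	/* No mcast rate configured: pick the lowest basic rate. */
	return ffs(basic_rates) - 1;
}

int main(void)
{
	printf("explicit: %d\n", mcast_rateidx(4, 0x150));	/* 3 */
	printf("fallback: %d\n", mcast_rateidx(0, 0x150));	/* 4, lowest set bit */
	return 0;
}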
diff --git a/drivers/net/wireless/ath/ath10k/pci.c b/drivers/net/wireless/ath/ath10k/pci.c
index 2c27f407a851..a0b4d265c6eb 100644
--- a/drivers/net/wireless/ath/ath10k/pci.c
+++ b/drivers/net/wireless/ath/ath10k/pci.c
@@ -909,7 +909,7 @@ static int ath10k_pci_diag_read_mem(struct ath10k *ar, u32 address, void *data,
/* Host buffer address in CE space */
u32 ce_data;
dma_addr_t ce_data_base = 0;
- void *data_buf = NULL;
+ void *data_buf;
int i;
mutex_lock(&ar_pci->ce_diag_mutex);
@@ -923,10 +923,8 @@ static int ath10k_pci_diag_read_mem(struct ath10k *ar, u32 address, void *data,
*/
alloc_nbytes = min_t(unsigned int, nbytes, DIAG_TRANSFER_LIMIT);
- data_buf = (unsigned char *)dma_alloc_coherent(ar->dev, alloc_nbytes,
- &ce_data_base,
- GFP_ATOMIC);
-
+ data_buf = dma_alloc_coherent(ar->dev, alloc_nbytes, &ce_data_base,
+ GFP_ATOMIC);
if (!data_buf) {
ret = -ENOMEM;
goto done;
@@ -1054,7 +1052,7 @@ int ath10k_pci_diag_write_mem(struct ath10k *ar, u32 address,
u32 *buf;
unsigned int completed_nbytes, alloc_nbytes, remaining_bytes;
struct ath10k_ce_pipe *ce_diag;
- void *data_buf = NULL;
+ void *data_buf;
dma_addr_t ce_data_base = 0;
int i;
@@ -1069,10 +1067,8 @@ int ath10k_pci_diag_write_mem(struct ath10k *ar, u32 address,
*/
alloc_nbytes = min_t(unsigned int, nbytes, DIAG_TRANSFER_LIMIT);
- data_buf = (unsigned char *)dma_alloc_coherent(ar->dev,
- alloc_nbytes,
- &ce_data_base,
- GFP_ATOMIC);
+ data_buf = dma_alloc_coherent(ar->dev, alloc_nbytes, &ce_data_base,
+ GFP_ATOMIC);
if (!data_buf) {
ret = -ENOMEM;
goto done;
@@ -2059,6 +2055,11 @@ static void ath10k_pci_hif_stop(struct ath10k *ar)
ath10k_dbg(ar, ATH10K_DBG_BOOT, "boot hif stop\n");
+ ath10k_pci_irq_disable(ar);
+ ath10k_pci_irq_sync(ar);
+ napi_synchronize(&ar->napi);
+ napi_disable(&ar->napi);
+
/* Most likely the device has HTT Rx ring configured. The only way to
* prevent the device from accessing (and possible corrupting) host
* memory is to reset the chip now.
@@ -2072,10 +2073,6 @@ static void ath10k_pci_hif_stop(struct ath10k *ar)
*/
ath10k_pci_safe_chip_reset(ar);
- ath10k_pci_irq_disable(ar);
- ath10k_pci_irq_sync(ar);
- napi_synchronize(&ar->napi);
- napi_disable(&ar->napi);
ath10k_pci_flush(ar);
spin_lock_irqsave(&ar_pci->ps_lock, flags);
@@ -3492,7 +3489,7 @@ static int ath10k_pci_probe(struct pci_dev *pdev,
struct ath10k *ar;
struct ath10k_pci *ar_pci;
enum ath10k_hw_rev hw_rev;
- struct ath10k_bus_params bus_params;
+ struct ath10k_bus_params bus_params = {};
bool pci_ps;
int (*pci_soft_reset)(struct ath10k *ar);
int (*pci_hard_reset)(struct ath10k *ar);
diff --git a/drivers/net/wireless/ath/ath10k/qmi.c b/drivers/net/wireless/ath/ath10k/qmi.c
index a7bc2c70d076..3b63b6257c43 100644
--- a/drivers/net/wireless/ath/ath10k/qmi.c
+++ b/drivers/net/wireless/ath/ath10k/qmi.c
@@ -506,6 +506,7 @@ static int ath10k_qmi_cap_send_sync_msg(struct ath10k_qmi *qmi)
struct wlfw_cap_resp_msg_v01 *resp;
struct wlfw_cap_req_msg_v01 req = {};
struct ath10k *ar = qmi->ar;
+ struct ath10k_snoc *ar_snoc = ath10k_snoc_priv(ar);
struct qmi_txn txn;
int ret;
@@ -560,13 +561,13 @@ static int ath10k_qmi_cap_send_sync_msg(struct ath10k_qmi *qmi)
strlcpy(qmi->fw_build_id, resp->fw_build_id,
MAX_BUILD_ID_LEN + 1);
- ath10k_dbg(ar, ATH10K_DBG_QMI,
- "qmi chip_id 0x%x chip_family 0x%x board_id 0x%x soc_id 0x%x",
- qmi->chip_info.chip_id, qmi->chip_info.chip_family,
- qmi->board_info.board_id, qmi->soc_info.soc_id);
- ath10k_dbg(ar, ATH10K_DBG_QMI,
- "qmi fw_version 0x%x fw_build_timestamp %s fw_build_id %s",
- qmi->fw_version, qmi->fw_build_timestamp, qmi->fw_build_id);
+ if (!test_bit(ATH10K_SNOC_FLAG_REGISTERED, &ar_snoc->flags)) {
+ ath10k_info(ar, "qmi chip_id 0x%x chip_family 0x%x board_id 0x%x soc_id 0x%x",
+ qmi->chip_info.chip_id, qmi->chip_info.chip_family,
+ qmi->board_info.board_id, qmi->soc_info.soc_id);
+ ath10k_info(ar, "qmi fw_version 0x%x fw_build_timestamp %s fw_build_id %s",
+ qmi->fw_version, qmi->fw_build_timestamp, qmi->fw_build_id);
+ }
kfree(resp);
return 0;
@@ -619,6 +620,51 @@ out:
return ret;
}
+int ath10k_qmi_set_fw_log_mode(struct ath10k *ar, u8 fw_log_mode)
+{
+ struct ath10k_snoc *ar_snoc = ath10k_snoc_priv(ar);
+ struct wlfw_ini_resp_msg_v01 resp = {};
+ struct ath10k_qmi *qmi = ar_snoc->qmi;
+ struct wlfw_ini_req_msg_v01 req = {};
+ struct qmi_txn txn;
+ int ret;
+
+ req.enablefwlog_valid = 1;
+ req.enablefwlog = fw_log_mode;
+
+ ret = qmi_txn_init(&qmi->qmi_hdl, &txn, wlfw_ini_resp_msg_v01_ei,
+ &resp);
+ if (ret < 0)
+ goto out;
+
+ ret = qmi_send_request(&qmi->qmi_hdl, NULL, &txn,
+ QMI_WLFW_INI_REQ_V01,
+ WLFW_INI_REQ_MSG_V01_MAX_MSG_LEN,
+ wlfw_ini_req_msg_v01_ei, &req);
+ if (ret < 0) {
+ qmi_txn_cancel(&txn);
+ ath10k_err(ar, "fail to send fw log reqest: %d\n", ret);
+ goto out;
+ }
+
+ ret = qmi_txn_wait(&txn, ATH10K_QMI_TIMEOUT * HZ);
+ if (ret < 0)
+ goto out;
+
+ if (resp.resp.result != QMI_RESULT_SUCCESS_V01) {
+ ath10k_err(ar, "fw log request rejectedr: %d\n",
+ resp.resp.error);
+ ret = -EINVAL;
+ goto out;
+ }
+ ath10k_dbg(ar, ATH10K_DBG_QMI, "qmi fw log request completed, mode: %d\n",
+ fw_log_mode);
+ return 0;
+
+out:
+ return ret;
+}
+
static int
ath10k_qmi_ind_register_send_sync_msg(struct ath10k_qmi *qmi)
{
@@ -1002,6 +1048,7 @@ int ath10k_qmi_deinit(struct ath10k *ar)
qmi_handle_release(&qmi->qmi_hdl);
cancel_work_sync(&qmi->event_work);
destroy_workqueue(qmi->event_wq);
+ kfree(qmi);
ar_snoc->qmi = NULL;
return 0;
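
For readers unfamiliar with the kernel QMI client API, the new ath10k_qmi_set_fw_log_mode() above follows the standard request/response lifecycle; the lines below only restate the steps already visible in the hunk (no additional driver code is implied):

/* 1. qmi_txn_init(&qmi->qmi_hdl, &txn, wlfw_ini_resp_msg_v01_ei, &resp);
 * 2. qmi_send_request(&qmi->qmi_hdl, NULL, &txn, QMI_WLFW_INI_REQ_V01,
 *                     WLFW_INI_REQ_MSG_V01_MAX_MSG_LEN,
 *                     wlfw_ini_req_msg_v01_ei, &req);
 *      - on failure the pending transaction is cancelled with qmi_txn_cancel();
 * 3. qmi_txn_wait(&txn, ATH10K_QMI_TIMEOUT * HZ);
 * 4. check resp.resp.result against QMI_RESULT_SUCCESS_V01 before trusting
 *    the response payload.
 */
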
diff --git a/drivers/net/wireless/ath/ath10k/qmi.h b/drivers/net/wireless/ath/ath10k/qmi.h
index e4aa20445666..40aafb875ed0 100644
--- a/drivers/net/wireless/ath/ath10k/qmi.h
+++ b/drivers/net/wireless/ath/ath10k/qmi.h
@@ -114,5 +114,6 @@ int ath10k_qmi_wlan_disable(struct ath10k *ar);
int ath10k_qmi_register_service_notifier(struct notifier_block *nb);
int ath10k_qmi_init(struct ath10k *ar, u32 msa_size);
int ath10k_qmi_deinit(struct ath10k *ar);
+int ath10k_qmi_set_fw_log_mode(struct ath10k *ar, u8 fw_log_mode);
#endif /* ATH10K_QMI_H */
diff --git a/drivers/net/wireless/ath/ath10k/sdio.c b/drivers/net/wireless/ath/ath10k/sdio.c
index fae56c67766f..8ed4fbd8d6c3 100644
--- a/drivers/net/wireless/ath/ath10k/sdio.c
+++ b/drivers/net/wireless/ath/ath10k/sdio.c
@@ -584,6 +584,11 @@ static int ath10k_sdio_mbox_rx_alloc(struct ath10k *ar,
act_len,
&bndl_cnt);
+ if (ret) {
+ ath10k_warn(ar, "alloc_bundle error %d\n", ret);
+ goto err;
+ }
+
n_lookaheads += bndl_cnt;
i += bndl_cnt;
/*Next buffer will be the last in the bundle */
@@ -602,6 +607,10 @@ static int ath10k_sdio_mbox_rx_alloc(struct ath10k *ar,
full_len,
last_in_bundle,
last_in_bundle);
+ if (ret) {
+ ath10k_warn(ar, "alloc_rx_pkt error %d\n", ret);
+ goto err;
+ }
}
ar_sdio->n_rx_pkts = i;
@@ -850,6 +859,10 @@ static int ath10k_sdio_mbox_proc_cpu_intr(struct ath10k *ar)
out:
mutex_unlock(&irq_data->mtx);
+ if (cpu_int_status & MBOX_CPU_STATUS_ENABLE_ASSERT_MASK) {
+ ath10k_err(ar, "firmware crashed!\n");
+ queue_work(ar->workqueue, &ar->restart_work);
+ }
return ret;
}
@@ -1495,8 +1508,10 @@ static int ath10k_sdio_hif_enable_intrs(struct ath10k *ar)
regs->int_status_en |=
FIELD_PREP(MBOX_INT_STATUS_ENABLE_MBOX_DATA_MASK, 1);
- /* Set up the CPU Interrupt status Register */
- regs->cpu_int_status_en = 0;
+ /* Set up the CPU Interrupt Status Register; enable CPU sourced interrupt #0,
+ * which the target uses to report firmware assertions
+ */
+ regs->cpu_int_status_en = FIELD_PREP(MBOX_CPU_STATUS_ENABLE_ASSERT_MASK, 1);
/* Set up the Error Interrupt status Register */
regs->err_int_status_en =
@@ -1637,7 +1652,12 @@ static int ath10k_sdio_hif_swap_mailbox(struct ath10k *ar)
ath10k_dbg(ar, ATH10K_DBG_SDIO,
"sdio mailbox swap service enabled\n");
ar_sdio->swap_mbox = true;
+ } else {
+ ath10k_dbg(ar, ATH10K_DBG_SDIO,
+ "sdio mailbox swap service disabled\n");
+ ar_sdio->swap_mbox = false;
}
+
return 0;
}
@@ -1954,7 +1974,7 @@ static int ath10k_sdio_probe(struct sdio_func *func,
struct ath10k *ar;
enum ath10k_hw_rev hw_rev;
u32 dev_id_base;
- struct ath10k_bus_params bus_params;
+ struct ath10k_bus_params bus_params = {};
int ret, i;
/* Assumption: All SDIO based chipsets (so far) are QCA6174 based.
@@ -2045,6 +2065,8 @@ static int ath10k_sdio_probe(struct sdio_func *func,
bus_params.dev_type = ATH10K_DEV_TYPE_HL;
/* TODO: don't know yet how to get chip_id with SDIO */
bus_params.chip_id = 0;
+ bus_params.hl_msdu_ids = true;
+
ret = ath10k_core_register(ar, &bus_params);
if (ret) {
ath10k_err(ar, "failed to register driver core: %d\n", ret);
@@ -2052,7 +2074,7 @@ static int ath10k_sdio_probe(struct sdio_func *func,
}
/* TODO: remove this once SDIO support is fully implemented */
- ath10k_warn(ar, "WARNING: ath10k SDIO support is incomplete, don't expect anything to work!\n");
+ ath10k_warn(ar, "WARNING: ath10k SDIO support is work-in-progress, problems may arise!\n");
return 0;
@@ -2073,10 +2095,11 @@ static void ath10k_sdio_remove(struct sdio_func *func)
"sdio removed func %d vendor 0x%x device 0x%x\n",
func->num, func->vendor, func->device);
- (void)ath10k_sdio_hif_disable_intrs(ar);
- cancel_work_sync(&ar_sdio->wr_async_work);
ath10k_core_unregister(ar);
ath10k_core_destroy(ar);
+
+ flush_workqueue(ar_sdio->workqueue);
+ destroy_workqueue(ar_sdio->workqueue);
}
static const struct sdio_device_id ath10k_sdio_devices[] = {
diff --git a/drivers/net/wireless/ath/ath10k/snoc.c b/drivers/net/wireless/ath/ath10k/snoc.c
index 873cb4ce419b..b491361e6ed4 100644
--- a/drivers/net/wireless/ath/ath10k/snoc.c
+++ b/drivers/net/wireless/ath/ath10k/snoc.c
@@ -165,7 +165,7 @@ static struct ce_attr host_ce_config_wlan[] = {
/* CE4: host->target HTT */
{
.flags = CE_ATTR_FLAGS | CE_ATTR_DIS_INTR,
- .src_nentries = 256,
+ .src_nentries = 2048,
.src_sz_max = 256,
.dest_nentries = 0,
.send_cb = ath10k_snoc_htt_tx_cb,
@@ -1050,6 +1050,19 @@ err_wlan_enable:
return ret;
}
+static int ath10k_snoc_hif_set_target_log_mode(struct ath10k *ar,
+ u8 fw_log_mode)
+{
+ u8 fw_dbg_mode;
+
+ if (fw_log_mode)
+ fw_dbg_mode = ATH10K_ENABLE_FW_LOG_CE;
+ else
+ fw_dbg_mode = ATH10K_ENABLE_FW_LOG_DIAG;
+
+ return ath10k_qmi_set_fw_log_mode(ar, fw_dbg_mode);
+}
+
#ifdef CONFIG_PM
static int ath10k_snoc_hif_suspend(struct ath10k *ar)
{
@@ -1103,6 +1116,8 @@ static const struct ath10k_hif_ops ath10k_snoc_hif_ops = {
.send_complete_check = ath10k_snoc_hif_send_complete_check,
.get_free_queue_number = ath10k_snoc_hif_get_free_queue_number,
.get_target_info = ath10k_snoc_hif_get_target_info,
+ .set_target_log_mode = ath10k_snoc_hif_set_target_log_mode,
+
#ifdef CONFIG_PM
.suspend = ath10k_snoc_hif_suspend,
.resume = ath10k_snoc_hif_resume,
@@ -1249,7 +1264,7 @@ out:
int ath10k_snoc_fw_indication(struct ath10k *ar, u64 type)
{
struct ath10k_snoc *ar_snoc = ath10k_snoc_priv(ar);
- struct ath10k_bus_params bus_params;
+ struct ath10k_bus_params bus_params = {};
int ret;
if (test_bit(ATH10K_SNOC_FLAG_UNREGISTERING, &ar_snoc->flags))
diff --git a/drivers/net/wireless/ath/ath10k/swap.c b/drivers/net/wireless/ath/ath10k/swap.c
index 4dddeee684b4..7198a386f2fb 100644
--- a/drivers/net/wireless/ath/ath10k/swap.c
+++ b/drivers/net/wireless/ath/ath10k/swap.c
@@ -106,10 +106,8 @@ ath10k_swap_code_seg_alloc(struct ath10k *ar, size_t swap_bin_len)
virt_addr = dma_alloc_coherent(ar->dev, swap_bin_len, &paddr,
GFP_KERNEL);
- if (!virt_addr) {
- ath10k_err(ar, "failed to allocate dma coherent memory\n");
+ if (!virt_addr)
return NULL;
- }
seg_info->seg_hw_info.bus_addr[0] = __cpu_to_le32(paddr);
seg_info->seg_hw_info.size = __cpu_to_le32(swap_bin_len);
diff --git a/drivers/net/wireless/ath/ath10k/testmode.c b/drivers/net/wireless/ath/ath10k/testmode.c
index a29cfb9c72c2..1bffe3fbea3f 100644
--- a/drivers/net/wireless/ath/ath10k/testmode.c
+++ b/drivers/net/wireless/ath/ath10k/testmode.c
@@ -174,8 +174,23 @@ static int ath10k_tm_fetch_firmware(struct ath10k *ar)
{
struct ath10k_fw_components *utf_mode_fw;
int ret;
+ char fw_name[100];
+ int fw_api2 = 2;
+
+ switch (ar->hif.bus) {
+ case ATH10K_BUS_SDIO:
+ case ATH10K_BUS_USB:
+ scnprintf(fw_name, sizeof(fw_name), "%s-%s-%d.bin",
+ ATH10K_FW_UTF_FILE_BASE, ath10k_bus_str(ar->hif.bus),
+ fw_api2);
+ break;
+ default:
+ scnprintf(fw_name, sizeof(fw_name), "%s-%d.bin",
+ ATH10K_FW_UTF_FILE_BASE, fw_api2);
+ break;
+ }
- ret = ath10k_core_fetch_firmware_api_n(ar, ATH10K_FW_UTF_API2_FILE,
+ ret = ath10k_core_fetch_firmware_api_n(ar, fw_name,
&ar->testmode.utf_mode_fw.fw_file);
if (ret == 0) {
ath10k_dbg(ar, ATH10K_DBG_TESTMODE, "testmode using fw utf api 2");
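
For reference, the firmware names the new switch produces look roughly as follows, assuming ATH10K_FW_UTF_FILE_BASE expands to "utf" and ath10k_bus_str() yields "sdio"/"usb" for those buses (both are assumptions, not quoted from this diff):

/* ATH10K_BUS_SDIO: "utf-sdio-2.bin"
 * ATH10K_BUS_USB:  "utf-usb-2.bin"
 * other buses:     "utf-2.bin"  -- presumably the same name the fixed
 *                  ATH10K_FW_UTF_API2_FILE constant used before
 */
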
diff --git a/drivers/net/wireless/ath/ath10k/trace.c b/drivers/net/wireless/ath/ath10k/trace.c
index 3ecdff17f64e..c7d4c97e6079 100644
--- a/drivers/net/wireless/ath/ath10k/trace.c
+++ b/drivers/net/wireless/ath/ath10k/trace.c
@@ -7,3 +7,4 @@
#define CREATE_TRACE_POINTS
#include "trace.h"
+EXPORT_SYMBOL(__tracepoint_ath10k_log_dbg);
diff --git a/drivers/net/wireless/ath/ath10k/trace.h b/drivers/net/wireless/ath/ath10k/trace.h
index ba977bbe6291..ab916459d237 100644
--- a/drivers/net/wireless/ath/ath10k/trace.h
+++ b/drivers/net/wireless/ath/ath10k/trace.h
@@ -29,7 +29,11 @@ static inline u32 ath10k_frm_hdr_len(const void *buf, size_t len)
#if !defined(CONFIG_ATH10K_TRACING)
#undef TRACE_EVENT
#define TRACE_EVENT(name, proto, ...) \
-static inline void trace_ ## name(proto) {}
+static inline void trace_ ## name(proto) {} \
+static inline bool trace_##name##_enabled(void) \
+{ \
+ return false; \
+}
#undef DECLARE_EVENT_CLASS
#define DECLARE_EVENT_CLASS(...)
#undef DEFINE_EVENT
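
The extra trace_##name##_enabled() stub matters because call sites may guard expensive formatting on it; a minimal sketch of such a guard, assuming ath10k_dbg() is the user (illustrative only, the tracepoint's actual arguments are those declared elsewhere in this header):

/* Example guard at a call site:
 *
 *	if (trace_ath10k_log_dbg_enabled())
 *		trace_ath10k_log_dbg(ar, level, &vaf);
 *
 * With CONFIG_ATH10K_TRACING=n the new stub returns false, so the branch is
 * compiled away instead of failing to build. The EXPORT_SYMBOL() added in
 * trace.c presumably exists so ath10k modules built separately from the one
 * defining the tracepoint can still reference its static key through the
 * _enabled() helper.
 */
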
diff --git a/drivers/net/wireless/ath/ath10k/txrx.c b/drivers/net/wireless/ath/ath10k/txrx.c
index c5818d28f55a..4102df016931 100644
--- a/drivers/net/wireless/ath/ath10k/txrx.c
+++ b/drivers/net/wireless/ath/ath10k/txrx.c
@@ -150,6 +150,9 @@ struct ath10k_peer *ath10k_peer_find_by_id(struct ath10k *ar, int peer_id)
{
struct ath10k_peer *peer;
+ if (peer_id >= BITS_PER_TYPE(peer->peer_ids))
+ return NULL;
+
lockdep_assert_held(&ar->data_lock);
list_for_each_entry(peer, &ar->peers, list)
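
The new bounds check leans on BITS_PER_TYPE(); a short sketch of what that buys here (the helper's definition comes from <linux/bits.h>, the shape of peer_ids is an assumption):

/* BITS_PER_TYPE(t) == sizeof(t) * BITS_PER_BYTE, so if peer->peer_ids is a
 * bitmap sized for the maximum number of peer IDs, the expression equals that
 * maximum and an out-of-range peer_id is rejected before the peer list walk
 * dereferences anything.
 */
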
diff --git a/drivers/net/wireless/ath/ath10k/usb.c b/drivers/net/wireless/ath/ath10k/usb.c
index 970cf69ac35f..e1420f67f776 100644
--- a/drivers/net/wireless/ath/ath10k/usb.c
+++ b/drivers/net/wireless/ath/ath10k/usb.c
@@ -973,7 +973,7 @@ static int ath10k_usb_probe(struct usb_interface *interface,
struct usb_device *dev = interface_to_usbdev(interface);
int ret, vendor_id, product_id;
enum ath10k_hw_rev hw_rev;
- struct ath10k_bus_params bus_params;
+ struct ath10k_bus_params bus_params = {};
/* Assumption: All USB based chipsets (so far) are QCA9377 based.
* If there will be newer chipsets that does not use the hw reg
@@ -1016,7 +1016,7 @@ static int ath10k_usb_probe(struct usb_interface *interface,
}
/* TODO: remove this once USB support is fully implemented */
- ath10k_warn(ar, "WARNING: ath10k USB support is incomplete, don't expect anything to work!\n");
+ ath10k_warn(ar, "Warning: ath10k USB support is incomplete, don't expect anything to work!\n");
return 0;
diff --git a/drivers/net/wireless/ath/ath10k/wmi-tlv.c b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
index 582fb11f648a..2985bb17decd 100644
--- a/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+++ b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
@@ -2,7 +2,7 @@
/*
* Copyright (c) 2005-2011 Atheros Communications Inc.
* Copyright (c) 2011-2017 Qualcomm Atheros, Inc.
- * Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.
*/
#include "core.h"
#include "debug.h"
@@ -212,6 +212,13 @@ static int ath10k_wmi_tlv_event_bcn_tx_status(struct ath10k *ar,
return 0;
}
+static void ath10k_wmi_tlv_event_vdev_delete_resp(struct ath10k *ar,
+ struct sk_buff *skb)
+{
+ ath10k_dbg(ar, ATH10K_DBG_WMI, "WMI_VDEV_DELETE_RESP_EVENTID\n");
+ complete(&ar->vdev_delete_done);
+}
+
static int ath10k_wmi_tlv_event_diag_data(struct ath10k *ar,
struct sk_buff *skb)
{
@@ -458,6 +465,24 @@ static void ath10k_wmi_event_tdls_peer(struct ath10k *ar, struct sk_buff *skb)
kfree(tb);
}
+static int ath10k_wmi_tlv_event_peer_delete_resp(struct ath10k *ar,
+ struct sk_buff *skb)
+{
+ struct wmi_peer_delete_resp_ev_arg *arg;
+ struct wmi_tlv *tlv_hdr;
+
+ tlv_hdr = (struct wmi_tlv *)skb->data;
+ arg = (struct wmi_peer_delete_resp_ev_arg *)tlv_hdr->value;
+
+ ath10k_dbg(ar, ATH10K_DBG_WMI, "vdev id %d", arg->vdev_id);
+ ath10k_dbg(ar, ATH10K_DBG_WMI, "peer mac addr %pM", &arg->peer_addr);
+ ath10k_dbg(ar, ATH10K_DBG_WMI, "wmi tlv peer delete response\n");
+
+ complete(&ar->peer_delete_done);
+
+ return 0;
+}
+
/***********/
/* TLV ops */
/***********/
@@ -514,6 +539,9 @@ static void ath10k_wmi_tlv_op_rx(struct ath10k *ar, struct sk_buff *skb)
case WMI_TLV_VDEV_STOPPED_EVENTID:
ath10k_wmi_event_vdev_stopped(ar, skb);
break;
+ case WMI_TLV_VDEV_DELETE_RESP_EVENTID:
+ ath10k_wmi_tlv_event_vdev_delete_resp(ar, skb);
+ break;
case WMI_TLV_PEER_STA_KICKOUT_EVENTID:
ath10k_wmi_event_peer_sta_kickout(ar, skb);
break;
@@ -607,6 +635,9 @@ static void ath10k_wmi_tlv_op_rx(struct ath10k *ar, struct sk_buff *skb)
case WMI_TLV_TDLS_PEER_EVENTID:
ath10k_wmi_event_tdls_peer(ar, skb);
break;
+ case WMI_TLV_PEER_DELETE_RESP_EVENTID:
+ ath10k_wmi_tlv_event_peer_delete_resp(ar, skb);
+ break;
case WMI_TLV_MGMT_TX_COMPLETION_EVENTID:
ath10k_wmi_event_mgmt_tx_compl(ar, skb);
break;
@@ -1905,6 +1936,28 @@ ath10k_wmi_tlv_op_gen_stop_scan(struct ath10k *ar,
return skb;
}
+static int ath10k_wmi_tlv_op_get_vdev_subtype(struct ath10k *ar,
+ enum wmi_vdev_subtype subtype)
+{
+ switch (subtype) {
+ case WMI_VDEV_SUBTYPE_NONE:
+ return WMI_TLV_VDEV_SUBTYPE_NONE;
+ case WMI_VDEV_SUBTYPE_P2P_DEVICE:
+ return WMI_TLV_VDEV_SUBTYPE_P2P_DEV;
+ case WMI_VDEV_SUBTYPE_P2P_CLIENT:
+ return WMI_TLV_VDEV_SUBTYPE_P2P_CLI;
+ case WMI_VDEV_SUBTYPE_P2P_GO:
+ return WMI_TLV_VDEV_SUBTYPE_P2P_GO;
+ case WMI_VDEV_SUBTYPE_PROXY_STA:
+ return WMI_TLV_VDEV_SUBTYPE_PROXY_STA;
+ case WMI_VDEV_SUBTYPE_MESH_11S:
+ return WMI_TLV_VDEV_SUBTYPE_MESH_11S;
+ case WMI_VDEV_SUBTYPE_MESH_NON_11S:
+ return -ENOTSUPP;
+ }
+ return -ENOTSUPP;
+}
+
static struct sk_buff *
ath10k_wmi_tlv_op_gen_vdev_create(struct ath10k *ar,
u32 vdev_id,
@@ -2840,8 +2893,10 @@ ath10k_wmi_tlv_op_gen_mgmt_tx_send(struct ath10k *ar, struct sk_buff *msdu,
if ((ieee80211_is_action(hdr->frame_control) ||
ieee80211_is_deauth(hdr->frame_control) ||
ieee80211_is_disassoc(hdr->frame_control)) &&
- ieee80211_has_protected(hdr->frame_control))
+ ieee80211_has_protected(hdr->frame_control)) {
+ skb_put(msdu, IEEE80211_CCMP_MIC_LEN);
buf_len += IEEE80211_CCMP_MIC_LEN;
+ }
buf_len = min_t(u32, buf_len, WMI_TLV_MGMT_TX_FRAME_MAX_LEN);
buf_len = round_up(buf_len, 4);
@@ -4305,7 +4360,7 @@ static const struct wmi_ops wmi_tlv_ops = {
.gen_tdls_peer_update = ath10k_wmi_tlv_op_gen_tdls_peer_update,
.gen_adaptive_qcs = ath10k_wmi_tlv_op_gen_adaptive_qcs,
.fw_stats_fill = ath10k_wmi_main_op_fw_stats_fill,
- .get_vdev_subtype = ath10k_wmi_op_get_vdev_subtype,
+ .get_vdev_subtype = ath10k_wmi_tlv_op_get_vdev_subtype,
.gen_echo = ath10k_wmi_tlv_op_gen_echo,
.gen_vdev_spectral_conf = ath10k_wmi_tlv_op_gen_vdev_spectral_conf,
.gen_vdev_spectral_enable = ath10k_wmi_tlv_op_gen_vdev_spectral_enable,
diff --git a/drivers/net/wireless/ath/ath10k/wmi-tlv.h b/drivers/net/wireless/ath/ath10k/wmi-tlv.h
index 65e6aa520b06..d691f06e58f2 100644
--- a/drivers/net/wireless/ath/ath10k/wmi-tlv.h
+++ b/drivers/net/wireless/ath/ath10k/wmi-tlv.h
@@ -2,7 +2,7 @@
/*
* Copyright (c) 2005-2011 Atheros Communications Inc.
* Copyright (c) 2011-2017 Qualcomm Atheros, Inc.
- * Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.
*/
#ifndef _WMI_TLV_H
#define _WMI_TLV_H
@@ -301,11 +301,15 @@ enum wmi_tlv_event_id {
WMI_TLV_VDEV_STOPPED_EVENTID,
WMI_TLV_VDEV_INSTALL_KEY_COMPLETE_EVENTID,
WMI_TLV_VDEV_MCC_BCN_INTERVAL_CHANGE_REQ_EVENTID,
+ WMI_TLV_VDEV_TSF_REPORT_EVENTID,
+ WMI_TLV_VDEV_DELETE_RESP_EVENTID,
WMI_TLV_PEER_STA_KICKOUT_EVENTID = WMI_TLV_EV(WMI_TLV_GRP_PEER),
WMI_TLV_PEER_INFO_EVENTID,
WMI_TLV_PEER_TX_FAIL_CNT_THR_EVENTID,
WMI_TLV_PEER_ESTIMATED_LINKSPEED_EVENTID,
WMI_TLV_PEER_STATE_EVENTID,
+ WMI_TLV_PEER_ASSOC_CONF_EVENTID,
+ WMI_TLV_PEER_DELETE_RESP_EVENTID,
WMI_TLV_MGMT_RX_EVENTID = WMI_TLV_EV(WMI_TLV_GRP_MGMT),
WMI_TLV_HOST_SWBA_EVENTID,
WMI_TLV_TBTTOFFSET_UPDATE_EVENTID,
@@ -1567,6 +1571,10 @@ wmi_tlv_svc_map(const __le32 *in, unsigned long *out, size_t len)
WMI_SERVICE_SAP_AUTH_OFFLOAD, len);
SVCMAP(WMI_TLV_SERVICE_MGMT_TX_WMI,
WMI_SERVICE_MGMT_TX_WMI, len);
+ SVCMAP(WMI_TLV_SERVICE_MESH_11S,
+ WMI_SERVICE_MESH_11S, len);
+ SVCMAP(WMI_TLV_SERVICE_SYNC_DELETE_CMDS,
+ WMI_SERVICE_SYNC_DELETE_CMDS, len);
}
static inline void
@@ -1775,6 +1783,16 @@ struct wmi_tlv_start_scan_cmd {
struct wmi_mac_addr mac_mask;
} __packed;
+enum wmi_tlv_vdev_subtype {
+ WMI_TLV_VDEV_SUBTYPE_NONE = 0,
+ WMI_TLV_VDEV_SUBTYPE_P2P_DEV = 1,
+ WMI_TLV_VDEV_SUBTYPE_P2P_CLI = 2,
+ WMI_TLV_VDEV_SUBTYPE_P2P_GO = 3,
+ WMI_TLV_VDEV_SUBTYPE_PROXY_STA = 4,
+ WMI_TLV_VDEV_SUBTYPE_MESH = 5,
+ WMI_TLV_VDEV_SUBTYPE_MESH_11S = 6,
+};
+
struct wmi_tlv_vdev_start_cmd {
__le32 vdev_id;
__le32 requestor_id;
diff --git a/drivers/net/wireless/ath/ath10k/wmi.c b/drivers/net/wireless/ath/ath10k/wmi.c
index 98a90e49d666..4f707c6394bb 100644
--- a/drivers/net/wireless/ath/ath10k/wmi.c
+++ b/drivers/net/wireless/ath/ath10k/wmi.c
@@ -8309,7 +8309,7 @@ ath10k_wmi_fw_vdev_stats_fill(const struct ath10k_fw_stats_vdev *vdev,
static void
ath10k_wmi_fw_peer_stats_fill(const struct ath10k_fw_stats_peer *peer,
- char *buf, u32 *length)
+ char *buf, u32 *length, bool extended_peer)
{
u32 len = *length;
u32 buf_len = ATH10K_FW_STATS_BUF_SIZE;
@@ -8322,13 +8322,27 @@ ath10k_wmi_fw_peer_stats_fill(const struct ath10k_fw_stats_peer *peer,
"Peer TX rate", peer->peer_tx_rate);
len += scnprintf(buf + len, buf_len - len, "%30s %u\n",
"Peer RX rate", peer->peer_rx_rate);
- len += scnprintf(buf + len, buf_len - len, "%30s %llu\n",
- "Peer RX duration", peer->rx_duration);
+ if (!extended_peer)
+ len += scnprintf(buf + len, buf_len - len, "%30s %llu\n",
+ "Peer RX duration", peer->rx_duration);
len += scnprintf(buf + len, buf_len - len, "\n");
*length = len;
}
+static void
+ath10k_wmi_fw_extd_peer_stats_fill(const struct ath10k_fw_extd_stats_peer *peer,
+ char *buf, u32 *length)
+{
+ u32 len = *length;
+ u32 buf_len = ATH10K_FW_STATS_BUF_SIZE;
+
+ len += scnprintf(buf + len, buf_len - len, "%30s %pM\n",
+ "Peer MAC address", peer->peer_macaddr);
+ len += scnprintf(buf + len, buf_len - len, "%30s %llu\n",
+ "Peer RX duration", peer->rx_duration);
+}
+
void ath10k_wmi_main_op_fw_stats_fill(struct ath10k *ar,
struct ath10k_fw_stats *fw_stats,
char *buf)
@@ -8374,7 +8388,8 @@ void ath10k_wmi_main_op_fw_stats_fill(struct ath10k *ar,
"=================");
list_for_each_entry(peer, &fw_stats->peers, list) {
- ath10k_wmi_fw_peer_stats_fill(peer, buf, &len);
+ ath10k_wmi_fw_peer_stats_fill(peer, buf, &len,
+ fw_stats->extended);
}
unlock:
@@ -8432,7 +8447,8 @@ void ath10k_wmi_10x_op_fw_stats_fill(struct ath10k *ar,
"=================");
list_for_each_entry(peer, &fw_stats->peers, list) {
- ath10k_wmi_fw_peer_stats_fill(peer, buf, &len);
+ ath10k_wmi_fw_peer_stats_fill(peer, buf, &len,
+ fw_stats->extended);
}
unlock:
@@ -8541,6 +8557,7 @@ void ath10k_wmi_10_4_op_fw_stats_fill(struct ath10k *ar,
const struct ath10k_fw_stats_pdev *pdev;
const struct ath10k_fw_stats_vdev_extd *vdev;
const struct ath10k_fw_stats_peer *peer;
+ const struct ath10k_fw_extd_stats_peer *extd_peer;
size_t num_peers;
size_t num_vdevs;
@@ -8603,7 +8620,15 @@ void ath10k_wmi_10_4_op_fw_stats_fill(struct ath10k *ar,
"=================");
list_for_each_entry(peer, &fw_stats->peers, list) {
- ath10k_wmi_fw_peer_stats_fill(peer, buf, &len);
+ ath10k_wmi_fw_peer_stats_fill(peer, buf, &len,
+ fw_stats->extended);
+ }
+
+ if (fw_stats->extended) {
+ list_for_each_entry(extd_peer, &fw_stats->peers_extd, list) {
+ ath10k_wmi_fw_extd_peer_stats_fill(extd_peer, buf,
+ &len);
+ }
}
unlock:
diff --git a/drivers/net/wireless/ath/ath10k/wmi.h b/drivers/net/wireless/ath/ath10k/wmi.h
index e1c40bb69932..838768c98adc 100644
--- a/drivers/net/wireless/ath/ath10k/wmi.h
+++ b/drivers/net/wireless/ath/ath10k/wmi.h
@@ -2,7 +2,7 @@
/*
* Copyright (c) 2005-2011 Atheros Communications Inc.
* Copyright (c) 2011-2017 Qualcomm Atheros, Inc.
- * Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.
*/
#ifndef _WMI_H_
@@ -200,6 +200,8 @@ enum wmi_service {
WMI_SERVICE_RTT_RESPONDER_ROLE,
WMI_SERVICE_PER_PACKET_SW_ENCRYPT,
WMI_SERVICE_REPORT_AIRTIME,
+ WMI_SERVICE_SYNC_DELETE_CMDS,
+ WMI_SERVICE_TX_PWR_PER_PEER,
/* Remember to add the new value to wmi_service_name()! */
@@ -367,6 +369,7 @@ enum wmi_10_4_service {
WMI_10_4_SERVICE_RTT_RESPONDER_ROLE,
WMI_10_4_SERVICE_EXT_PEER_TID_CONFIGS_SUPPORT,
WMI_10_4_SERVICE_REPORT_AIRTIME,
+ WMI_10_4_SERVICE_TX_PWR_PER_PEER,
};
static inline char *wmi_service_name(enum wmi_service service_id)
@@ -491,6 +494,8 @@ static inline char *wmi_service_name(enum wmi_service service_id)
SVCSTR(WMI_SERVICE_RTT_RESPONDER_ROLE);
SVCSTR(WMI_SERVICE_PER_PACKET_SW_ENCRYPT);
SVCSTR(WMI_SERVICE_REPORT_AIRTIME);
+ SVCSTR(WMI_SERVICE_SYNC_DELETE_CMDS);
+ SVCSTR(WMI_SERVICE_TX_PWR_PER_PEER);
case WMI_SERVICE_MAX:
return NULL;
@@ -818,6 +823,8 @@ static inline void wmi_10_4_svc_map(const __le32 *in, unsigned long *out,
WMI_SERVICE_PER_PACKET_SW_ENCRYPT, len);
SVCMAP(WMI_10_4_SERVICE_REPORT_AIRTIME,
WMI_SERVICE_REPORT_AIRTIME, len);
+ SVCMAP(WMI_10_4_SERVICE_TX_PWR_PER_PEER,
+ WMI_SERVICE_TX_PWR_PER_PEER, len);
}
#undef SVCMAP
@@ -4535,9 +4542,10 @@ enum wmi_10_4_stats_id {
};
enum wmi_tlv_stats_id {
- WMI_TLV_STAT_PDEV = BIT(0),
- WMI_TLV_STAT_VDEV = BIT(1),
- WMI_TLV_STAT_PEER = BIT(2),
+ WMI_TLV_STAT_PEER = BIT(0),
+ WMI_TLV_STAT_AP = BIT(1),
+ WMI_TLV_STAT_PDEV = BIT(2),
+ WMI_TLV_STAT_VDEV = BIT(3),
WMI_TLV_STAT_PEER_EXTD = BIT(10),
};
@@ -6259,6 +6267,8 @@ enum wmi_peer_param {
WMI_PEER_CHAN_WIDTH = 0x4,
WMI_PEER_NSS = 0x5,
WMI_PEER_USE_4ADDR = 0x6,
+ WMI_PEER_USE_FIXED_PWR = 0x8,
+ WMI_PEER_PARAM_FIXED_RATE = 0x9,
WMI_PEER_DEBUG = 0xa,
WMI_PEER_PHYMODE = 0xd,
WMI_PEER_DUMMY_VAR = 0xff, /* dummy parameter for STA PS workaround */
@@ -6756,6 +6766,11 @@ struct wmi_tlv_mgmt_tx_bundle_compl_ev_arg {
const __le32 *ack_rssi;
};
+struct wmi_peer_delete_resp_ev_arg {
+ __le32 vdev_id;
+ struct wmi_mac_addr peer_addr;
+};
+
struct wmi_mgmt_rx_ev_arg {
__le32 channel;
__le32 snr;
diff --git a/drivers/net/wireless/ath/ath5k/Kconfig b/drivers/net/wireless/ath/ath5k/Kconfig
index c587146795f6..802f8f87773a 100644
--- a/drivers/net/wireless/ath/ath5k/Kconfig
+++ b/drivers/net/wireless/ath/ath5k/Kconfig
@@ -1,4 +1,4 @@
-# SPDX-License-Identifier: GPL-2.0-only
+# SPDX-License-Identifier: ISC
config ATH5K
tristate "Atheros 5xxx wireless cards support"
depends on (PCI || ATH25) && MAC80211
diff --git a/drivers/net/wireless/ath/ath5k/Makefile b/drivers/net/wireless/ath/ath5k/Makefile
index a8724eee21f8..78f318d49af5 100644
--- a/drivers/net/wireless/ath/ath5k/Makefile
+++ b/drivers/net/wireless/ath/ath5k/Makefile
@@ -1,4 +1,4 @@
-# SPDX-License-Identifier: GPL-2.0
+# SPDX-License-Identifier: ISC
ath5k-y += caps.o
ath5k-y += initvals.o
ath5k-y += eeprom.o
diff --git a/drivers/net/wireless/ath/ath6kl/Kconfig b/drivers/net/wireless/ath/ath6kl/Kconfig
index 2b27a87e74f5..dcf8ca0dcc52 100644
--- a/drivers/net/wireless/ath/ath6kl/Kconfig
+++ b/drivers/net/wireless/ath/ath6kl/Kconfig
@@ -1,4 +1,4 @@
-# SPDX-License-Identifier: GPL-2.0-only
+# SPDX-License-Identifier: ISC
config ATH6KL
tristate "Atheros mobile chipsets support"
depends on CFG80211
diff --git a/drivers/net/wireless/ath/ath6kl/cfg80211.c b/drivers/net/wireless/ath/ath6kl/cfg80211.c
index 5477a014e1fb..37cf602d8adf 100644
--- a/drivers/net/wireless/ath/ath6kl/cfg80211.c
+++ b/drivers/net/wireless/ath/ath6kl/cfg80211.c
@@ -2194,13 +2194,13 @@ static int ath6kl_wow_suspend_vif(struct ath6kl_vif *vif,
if (!in_dev)
return 0;
- ifa = in_dev->ifa_list;
+ ifa = rtnl_dereference(in_dev->ifa_list);
memset(&ips, 0, sizeof(ips));
/* Configure IP addr only if IP address count < MAX_IP_ADDRS */
while (index < MAX_IP_ADDRS && ifa) {
ips[index] = ifa->ifa_local;
- ifa = ifa->ifa_next;
+ ifa = rtnl_dereference(ifa->ifa_next);
index++;
}
diff --git a/drivers/net/wireless/ath/ath6kl/debug.c b/drivers/net/wireless/ath/ath6kl/debug.c
index 4e94b22eaada..54337d60f288 100644
--- a/drivers/net/wireless/ath/ath6kl/debug.c
+++ b/drivers/net/wireless/ath/ath6kl/debug.c
@@ -1132,8 +1132,7 @@ int ath6kl_debug_roam_tbl_event(struct ath6kl *ar, const void *buf,
tbl = (const struct wmi_target_roam_tbl *) buf;
num_entries = le16_to_cpu(tbl->num_entries);
- if (sizeof(*tbl) + num_entries * sizeof(struct wmi_bss_roam_info) >
- len)
+ if (struct_size(tbl, info, num_entries) > len)
return -EINVAL;
if (ar->debug.roam_tbl == NULL ||
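
Both ath6kl length checks in this diff switch to struct_size(); roughly, the helper from <linux/overflow.h> computes the same size as the old open-coded expression but saturates on overflow (a sketch of the semantics, not the kernel's exact macro):

/* struct_size(tbl, info, num_entries)
 *   ~= sizeof(*tbl) + num_entries * sizeof(tbl->info[0])
 * except that an overflowing multiplication saturates to SIZE_MAX, so the
 * "> len" comparison still rejects absurd num_entries values instead of
 * wrapping around and passing.
 */
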
diff --git a/drivers/net/wireless/ath/ath6kl/htc_pipe.c b/drivers/net/wireless/ath/ath6kl/htc_pipe.c
index 434b66829646..c68848819a52 100644
--- a/drivers/net/wireless/ath/ath6kl/htc_pipe.c
+++ b/drivers/net/wireless/ath/ath6kl/htc_pipe.c
@@ -898,9 +898,6 @@ static int htc_process_trailer(struct htc_target *target, u8 *buffer,
break;
}
- if (status != 0)
- break;
-
/* advance buffer past this record for next time around */
buffer += record->len;
len -= record->len;
diff --git a/drivers/net/wireless/ath/ath6kl/trace.h b/drivers/net/wireless/ath/ath6kl/trace.h
index 91e735cfdef7..a3d3740419eb 100644
--- a/drivers/net/wireless/ath/ath6kl/trace.h
+++ b/drivers/net/wireless/ath/ath6kl/trace.h
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: GPL-2.0 */
+/* SPDX-License-Identifier: ISC */
#if !defined(_ATH6KL_TRACE_H) || defined(TRACE_HEADER_MULTI_READ)
#include <net/cfg80211.h>
diff --git a/drivers/net/wireless/ath/ath6kl/wmi.c b/drivers/net/wireless/ath/ath6kl/wmi.c
index 68854c45d0a4..2382c6c46851 100644
--- a/drivers/net/wireless/ath/ath6kl/wmi.c
+++ b/drivers/net/wireless/ath/ath6kl/wmi.c
@@ -1176,6 +1176,10 @@ static int ath6kl_wmi_pstream_timeout_event_rx(struct wmi *wmi, u8 *datap,
return -EINVAL;
ev = (struct wmi_pstream_timeout_event *) datap;
+ if (ev->traffic_class >= WMM_NUM_AC) {
+ ath6kl_err("invalid traffic class: %d\n", ev->traffic_class);
+ return -EINVAL;
+ }
/*
* When the pstream (fat pipe == AC) timesout, it means there were
@@ -1295,8 +1299,7 @@ static int ath6kl_wmi_neighbor_report_event_rx(struct wmi *wmi, u8 *datap,
if (len < sizeof(*ev))
return -EINVAL;
ev = (struct wmi_neighbor_report_event *) datap;
- if (sizeof(*ev) + ev->num_neighbors * sizeof(struct wmi_neighbor_info)
- > len) {
+ if (struct_size(ev, neighbor, ev->num_neighbors) > len) {
ath6kl_dbg(ATH6KL_DBG_WMI,
"truncated neighbor event (num=%d len=%d)\n",
ev->num_neighbors, len);
@@ -1517,6 +1520,10 @@ static int ath6kl_wmi_cac_event_rx(struct wmi *wmi, u8 *datap, int len,
return -EINVAL;
reply = (struct wmi_cac_event *) datap;
+ if (reply->ac >= WMM_NUM_AC) {
+ ath6kl_err("invalid AC: %d\n", reply->ac);
+ return -EINVAL;
+ }
if ((reply->cac_indication == CAC_INDICATION_ADMISSION_RESP) &&
(reply->status_code != IEEE80211_TSPEC_STATUS_ADMISS_ACCEPTED)) {
@@ -2633,7 +2640,7 @@ int ath6kl_wmi_delete_pstream_cmd(struct wmi *wmi, u8 if_idx, u8 traffic_class,
u16 active_tsids = 0;
int ret;
- if (traffic_class > 3) {
+ if (traffic_class >= WMM_NUM_AC) {
ath6kl_err("invalid traffic class: %d\n", traffic_class);
return -EINVAL;
}
diff --git a/drivers/net/wireless/ath/ath9k/Kconfig b/drivers/net/wireless/ath/ath9k/Kconfig
index a1ef8769983a..5601cfd6a293 100644
--- a/drivers/net/wireless/ath/ath9k/Kconfig
+++ b/drivers/net/wireless/ath/ath9k/Kconfig
@@ -1,4 +1,4 @@
-# SPDX-License-Identifier: GPL-2.0-only
+# SPDX-License-Identifier: ISC
config ATH9K_HW
tristate
config ATH9K_COMMON
diff --git a/drivers/net/wireless/ath/ath9k/Makefile b/drivers/net/wireless/ath/ath9k/Makefile
index f71b2ad8275c..15af0a836925 100644
--- a/drivers/net/wireless/ath/ath9k/Makefile
+++ b/drivers/net/wireless/ath/ath9k/Makefile
@@ -1,4 +1,4 @@
-# SPDX-License-Identifier: GPL-2.0
+# SPDX-License-Identifier: ISC
ath9k-y += beacon.o \
gpio.o \
init.o \
diff --git a/drivers/net/wireless/ath/ath9k/ar9003_phy.c b/drivers/net/wireless/ath/ath9k/ar9003_phy.c
index 98c5f524a360..daf30f9946b4 100644
--- a/drivers/net/wireless/ath/ath9k/ar9003_phy.c
+++ b/drivers/net/wireless/ath/ath9k/ar9003_phy.c
@@ -157,7 +157,9 @@ static int ar9003_hw_set_channel(struct ath_hw *ah, struct ath9k_channel *chan)
freq = centers.synth_center;
if (freq < 4800) { /* 2 GHz, fractional mode */
- if (AR_SREV_9330(ah)) {
+ if (AR_SREV_9330(ah) || AR_SREV_9485(ah) ||
+ AR_SREV_9531(ah) || AR_SREV_9550(ah) ||
+ AR_SREV_9561(ah) || AR_SREV_9565(ah)) {
if (ah->is_clk_25mhz)
div = 75;
else
@@ -166,16 +168,6 @@ static int ar9003_hw_set_channel(struct ath_hw *ah, struct ath9k_channel *chan)
channelSel = (freq * 4) / div;
chan_frac = (((freq * 4) % div) * 0x20000) / div;
channelSel = (channelSel << 17) | chan_frac;
- } else if (AR_SREV_9485(ah) || AR_SREV_9565(ah)) {
- /*
- * freq_ref = 40 / (refdiva >> amoderefsel);
- * where refdiva=1 and amoderefsel=0
- * ndiv = ((chan_mhz * 4) / 3) / freq_ref;
- * chansel = int(ndiv), chanfrac = (ndiv - chansel) * 0x20000
- */
- channelSel = (freq * 4) / 120;
- chan_frac = (((freq * 4) % 120) * 0x20000) / 120;
- channelSel = (channelSel << 17) | chan_frac;
} else if (AR_SREV_9340(ah)) {
if (ah->is_clk_25mhz) {
channelSel = (freq * 2) / 75;
@@ -184,16 +176,6 @@ static int ar9003_hw_set_channel(struct ath_hw *ah, struct ath9k_channel *chan)
} else {
channelSel = CHANSEL_2G(freq) >> 1;
}
- } else if (AR_SREV_9550(ah) || AR_SREV_9531(ah) ||
- AR_SREV_9561(ah)) {
- if (ah->is_clk_25mhz)
- div = 75;
- else
- div = 120;
-
- channelSel = (freq * 4) / div;
- chan_frac = (((freq * 4) % div) * 0x20000) / div;
- channelSel = (channelSel << 17) | chan_frac;
} else {
channelSel = CHANSEL_2G(freq);
}
diff --git a/drivers/net/wireless/ath/ath9k/eeprom.c b/drivers/net/wireless/ath/ath9k/eeprom.c
index 6fbd5559c0c0..c22d457dbc54 100644
--- a/drivers/net/wireless/ath/ath9k/eeprom.c
+++ b/drivers/net/wireless/ath/ath9k/eeprom.c
@@ -428,7 +428,7 @@ u16 ath9k_hw_get_scaled_power(struct ath_hw *ah, u16 power_limit,
else
power_limit = 0;
- return power_limit;
+ return min_t(u16, power_limit, MAX_RATE_POWER);
}
void ath9k_hw_update_regulatory_maxpower(struct ath_hw *ah)
diff --git a/drivers/net/wireless/ath/ath9k/eeprom_4k.c b/drivers/net/wireless/ath/ath9k/eeprom_4k.c
index b8c0a08066a0..e8c2cc03be0c 100644
--- a/drivers/net/wireless/ath/ath9k/eeprom_4k.c
+++ b/drivers/net/wireless/ath/ath9k/eeprom_4k.c
@@ -424,6 +424,7 @@ static void ath9k_hw_set_4k_power_per_rate_table(struct ath_hw *ah,
ath9k_hw_get_channel_centers(ah, chan, &centers);
scaledPower = powerLimit - antenna_reduction;
+ scaledPower = min_t(u16, scaledPower, MAX_RATE_POWER);
numCtlModes = ARRAY_SIZE(ctlModesFor11g) - SUB_NUM_CTL_MODES_AT_2G_40;
pCtlMode = ctlModesFor11g;
diff --git a/drivers/net/wireless/ath/ath9k/hw.c b/drivers/net/wireless/ath/ath9k/hw.c
index 8581d917635a..052deffb4c9d 100644
--- a/drivers/net/wireless/ath/ath9k/hw.c
+++ b/drivers/net/wireless/ath/ath9k/hw.c
@@ -252,8 +252,9 @@ void ath9k_hw_get_channel_centers(struct ath_hw *ah,
/* Chip Revisions */
/******************/
-static void ath9k_hw_read_revisions(struct ath_hw *ah)
+static bool ath9k_hw_read_revisions(struct ath_hw *ah)
{
+ u32 srev;
u32 val;
if (ah->get_mac_revision)
@@ -269,25 +270,33 @@ static void ath9k_hw_read_revisions(struct ath_hw *ah)
val = REG_READ(ah, AR_SREV);
ah->hw_version.macRev = MS(val, AR_SREV_REVISION2);
}
- return;
+ return true;
case AR9300_DEVID_AR9340:
ah->hw_version.macVersion = AR_SREV_VERSION_9340;
- return;
+ return true;
case AR9300_DEVID_QCA955X:
ah->hw_version.macVersion = AR_SREV_VERSION_9550;
- return;
+ return true;
case AR9300_DEVID_AR953X:
ah->hw_version.macVersion = AR_SREV_VERSION_9531;
- return;
+ return true;
case AR9300_DEVID_QCA956X:
ah->hw_version.macVersion = AR_SREV_VERSION_9561;
- return;
+ return true;
}
- val = REG_READ(ah, AR_SREV) & AR_SREV_ID;
+ srev = REG_READ(ah, AR_SREV);
+
+ if (srev == -EIO) {
+ ath_err(ath9k_hw_common(ah),
+ "Failed to read SREV register");
+ return false;
+ }
+
+ val = srev & AR_SREV_ID;
if (val == 0xFF) {
- val = REG_READ(ah, AR_SREV);
+ val = srev;
ah->hw_version.macVersion =
(val & AR_SREV_VERSION2) >> AR_SREV_TYPE2_S;
ah->hw_version.macRev = MS(val, AR_SREV_REVISION2);
@@ -306,6 +315,8 @@ static void ath9k_hw_read_revisions(struct ath_hw *ah)
if (ah->hw_version.macVersion == AR_SREV_VERSION_5416_PCIE)
ah->is_pciexpress = true;
}
+
+ return true;
}
/************************************/
@@ -446,7 +457,7 @@ static void ath9k_hw_init_defaults(struct ath_hw *ah)
struct ath_regulatory *regulatory = ath9k_hw_regulatory(ah);
regulatory->country_code = CTRY_DEFAULT;
- regulatory->power_limit = MAX_RATE_POWER;
+ regulatory->power_limit = MAX_COMBINED_POWER;
ah->hw_version.magic = AR5416_MAGIC;
ah->hw_version.subvendorid = 0;
@@ -559,7 +570,10 @@ static int __ath9k_hw_init(struct ath_hw *ah)
struct ath_common *common = ath9k_hw_common(ah);
int r = 0;
- ath9k_hw_read_revisions(ah);
+ if (!ath9k_hw_read_revisions(ah)) {
+ ath_err(common, "Could not read hardware revisions");
+ return -EOPNOTSUPP;
+ }
switch (ah->hw_version.macVersion) {
case AR_SREV_VERSION_5416_PCI:
@@ -2952,7 +2966,7 @@ void ath9k_hw_apply_txpower(struct ath_hw *ah, struct ath9k_channel *chan,
ctl = ath9k_regd_get_ctl(reg, chan);
channel = chan->chan;
- chan_pwr = min_t(int, channel->max_power * 2, MAX_RATE_POWER);
+ chan_pwr = min_t(int, channel->max_power * 2, MAX_COMBINED_POWER);
new_pwr = min_t(int, chan_pwr, reg->power_limit);
ah->eep_ops->set_txpower(ah, chan, ctl,
@@ -2965,9 +2979,9 @@ void ath9k_hw_set_txpowerlimit(struct ath_hw *ah, u32 limit, bool test)
struct ath9k_channel *chan = ah->curchan;
struct ieee80211_channel *channel = chan->chan;
- reg->power_limit = min_t(u32, limit, MAX_RATE_POWER);
+ reg->power_limit = min_t(u32, limit, MAX_COMBINED_POWER);
if (test)
- channel->max_power = MAX_RATE_POWER / 2;
+ channel->max_power = MAX_COMBINED_POWER / 2;
ath9k_hw_apply_txpower(ah, chan, test);
diff --git a/drivers/net/wireless/ath/ath9k/hw.h b/drivers/net/wireless/ath/ath9k/hw.h
index 68956cdc8c9a..2e4489700a85 100644
--- a/drivers/net/wireless/ath/ath9k/hw.h
+++ b/drivers/net/wireless/ath/ath9k/hw.h
@@ -173,6 +173,7 @@
#define ATH9K_NUM_QUEUES 10
#define MAX_RATE_POWER 63
+#define MAX_COMBINED_POWER 254 /* 128 dBm, chosen to fit in u8 */
#define AH_WAIT_TIMEOUT 100000 /* (us) */
#define AH_TSF_WRITE_TIMEOUT 100 /* (us) */
#define AH_TIME_QUANTUM 10
diff --git a/drivers/net/wireless/ath/ath9k/init.c b/drivers/net/wireless/ath/ath9k/init.c
index a04d8616fe09..17c318902cb8 100644
--- a/drivers/net/wireless/ath/ath9k/init.c
+++ b/drivers/net/wireless/ath/ath9k/init.c
@@ -805,7 +805,7 @@ static void ath9k_init_band_txpower(struct ath_softc *sc, int band)
ah->curchan = &ah->channels[chan->hw_value];
cfg80211_chandef_create(&chandef, chan, NL80211_CHAN_HT20);
ath9k_cmn_get_channel(sc->hw, ah, &chandef);
- ath9k_hw_set_txpowerlimit(ah, MAX_RATE_POWER, true);
+ ath9k_hw_set_txpowerlimit(ah, MAX_COMBINED_POWER, true);
}
}
diff --git a/drivers/net/wireless/ath/ath9k/recv.c b/drivers/net/wireless/ath/ath9k/recv.c
index 4e97f7f3b2a3..06e660858766 100644
--- a/drivers/net/wireless/ath/ath9k/recv.c
+++ b/drivers/net/wireless/ath/ath9k/recv.c
@@ -815,6 +815,7 @@ static int ath9k_rx_skb_preprocess(struct ath_softc *sc,
struct ath_common *common = ath9k_hw_common(ah);
struct ieee80211_hdr *hdr;
bool discard_current = sc->rx.discard_next;
+ bool is_phyerr;
/*
* Discard corrupt descriptors which are marked in
@@ -827,8 +828,11 @@ static int ath9k_rx_skb_preprocess(struct ath_softc *sc,
/*
* Discard zero-length packets and packets smaller than an ACK
+ * which are not PHY_ERROR (short radar pulses have a length of 3)
*/
- if (rx_stats->rs_datalen < 10) {
+ is_phyerr = rx_stats->rs_status & ATH9K_RXERR_PHY;
+ if (!rx_stats->rs_datalen ||
+ (rx_stats->rs_datalen < 10 && !is_phyerr)) {
RX_STAT_INC(sc, rx_len_err);
goto corrupt;
}
diff --git a/drivers/net/wireless/ath/ath9k/xmit.c b/drivers/net/wireless/ath/ath9k/xmit.c
index b17e1ca40995..31e7b108279c 100644
--- a/drivers/net/wireless/ath/ath9k/xmit.c
+++ b/drivers/net/wireless/ath/ath9k/xmit.c
@@ -410,7 +410,6 @@ static void ath_tx_count_frames(struct ath_softc *sc, struct ath_buf *bf,
struct ath_tx_status *ts, int txok,
int *nframes, int *nbad)
{
- struct ath_frame_info *fi;
u16 seq_st = 0;
u32 ba[WME_BA_BMP_SIZE >> 5];
int ba_index;
@@ -426,7 +425,6 @@ static void ath_tx_count_frames(struct ath_softc *sc, struct ath_buf *bf,
}
while (bf) {
- fi = get_frame_info(bf->bf_mpdu);
ba_index = ATH_BA_INDEX(seq_st, bf->bf_state.seqno);
(*nframes)++;
@@ -446,7 +444,6 @@ static void ath_tx_complete_aggr(struct ath_softc *sc, struct ath_txq *txq,
{
struct ath_node *an = NULL;
struct sk_buff *skb;
- struct ieee80211_hdr *hdr;
struct ieee80211_tx_info *tx_info;
struct ath_buf *bf_next, *bf_last = bf->bf_lastbf;
struct list_head bf_head;
@@ -463,8 +460,6 @@ static void ath_tx_complete_aggr(struct ath_softc *sc, struct ath_txq *txq,
int bar_index = -1;
skb = bf->bf_mpdu;
- hdr = (struct ieee80211_hdr *)skb->data;
-
tx_info = IEEE80211_SKB_CB(skb);
memcpy(rates, bf->rates, sizeof(rates));
@@ -668,7 +663,8 @@ static bool bf_is_ampdu_not_probing(struct ath_buf *bf)
static void ath_tx_count_airtime(struct ath_softc *sc,
struct ieee80211_sta *sta,
struct ath_buf *bf,
- struct ath_tx_status *ts)
+ struct ath_tx_status *ts,
+ u8 tid)
{
u32 airtime = 0;
int i;
@@ -679,7 +675,7 @@ static void ath_tx_count_airtime(struct ath_softc *sc,
airtime += rate_dur * bf->rates[i].count;
}
- ieee80211_sta_register_airtime(sta, ts->tid, airtime, 0);
+ ieee80211_sta_register_airtime(sta, tid, airtime, 0);
}
static void ath_tx_process_buffer(struct ath_softc *sc, struct ath_txq *txq,
@@ -709,7 +705,7 @@ static void ath_tx_process_buffer(struct ath_softc *sc, struct ath_txq *txq,
if (sta) {
struct ath_node *an = (struct ath_node *)sta->drv_priv;
tid = ath_get_skb_tid(sc, an, bf->bf_mpdu);
- ath_tx_count_airtime(sc, sta, bf, ts);
+ ath_tx_count_airtime(sc, sta, bf, ts, tid->tidno);
if (ts->ts_status & (ATH9K_TXERR_FILT | ATH9K_TXERR_XRETRY))
tid->clear_ps_filter = true;
}
@@ -2269,12 +2265,10 @@ static int ath_tx_prepare(struct ieee80211_hw *hw, struct sk_buff *skb,
int ath_tx_start(struct ieee80211_hw *hw, struct sk_buff *skb,
struct ath_tx_control *txctl)
{
- struct ieee80211_hdr *hdr;
struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
struct ieee80211_sta *sta = txctl->sta;
struct ieee80211_vif *vif = info->control.vif;
struct ath_frame_info *fi = get_frame_info(skb);
- struct ath_vif *avp = NULL;
struct ath_softc *sc = hw->priv;
struct ath_txq *txq = txctl->txq;
struct ath_atx_tid *tid = NULL;
@@ -2283,16 +2277,12 @@ int ath_tx_start(struct ieee80211_hw *hw, struct sk_buff *skb,
bool ps_resp;
int q, ret;
- if (vif)
- avp = (void *)vif->drv_priv;
-
ps_resp = !!(info->control.flags & IEEE80211_TX_CTRL_PS_RESPONSE);
ret = ath_tx_prepare(hw, skb, txctl);
if (ret)
return ret;
- hdr = (struct ieee80211_hdr *) skb->data;
/*
* At this point, the vif, hw_key and sta pointers in the tx control
* info are no longer valid (overwritten by the ath_frame_info data.
diff --git a/drivers/net/wireless/ath/carl9170/mac.c b/drivers/net/wireless/ath/carl9170/mac.c
index 7d4a72dc98db..b2eeb9fd68d2 100644
--- a/drivers/net/wireless/ath/carl9170/mac.c
+++ b/drivers/net/wireless/ath/carl9170/mac.c
@@ -519,7 +519,7 @@ int carl9170_set_mac_tpc(struct ar9170 *ar, struct ieee80211_channel *channel)
power = ar->power_5G_leg[0] & 0x3f;
break;
default:
- BUG_ON(1);
+ BUG();
}
power = min_t(unsigned int, power, ar->hw->conf.power_level * 2);
diff --git a/drivers/net/wireless/ath/carl9170/main.c b/drivers/net/wireless/ath/carl9170/main.c
index 7f1bdea742b8..40a8054f8aa6 100644
--- a/drivers/net/wireless/ath/carl9170/main.c
+++ b/drivers/net/wireless/ath/carl9170/main.c
@@ -1387,13 +1387,8 @@ static int carl9170_op_conf_tx(struct ieee80211_hw *hw,
int ret;
mutex_lock(&ar->mutex);
- if (queue < ar->hw->queues) {
- memcpy(&ar->edcf[ar9170_qmap[queue]], param, sizeof(*param));
- ret = carl9170_set_qos(ar);
- } else {
- ret = -EINVAL;
- }
-
+ memcpy(&ar->edcf[ar9170_qmap[queue]], param, sizeof(*param));
+ ret = carl9170_set_qos(ar);
mutex_unlock(&ar->mutex);
return ret;
}
diff --git a/drivers/net/wireless/ath/carl9170/rx.c b/drivers/net/wireless/ath/carl9170/rx.c
index 8e154f6364a3..23ab8a80c18c 100644
--- a/drivers/net/wireless/ath/carl9170/rx.c
+++ b/drivers/net/wireless/ath/carl9170/rx.c
@@ -795,7 +795,7 @@ static void carl9170_rx_untie_data(struct ar9170 *ar, u8 *buf, int len)
break;
default:
- BUG_ON(1);
+ BUG();
break;
}
diff --git a/drivers/net/wireless/ath/carl9170/usb.c b/drivers/net/wireless/ath/carl9170/usb.c
index e7c3f3b8457d..99f1897a775d 100644
--- a/drivers/net/wireless/ath/carl9170/usb.c
+++ b/drivers/net/wireless/ath/carl9170/usb.c
@@ -128,6 +128,8 @@ static const struct usb_device_id carl9170_usb_ids[] = {
};
MODULE_DEVICE_TABLE(usb, carl9170_usb_ids);
+static struct usb_driver carl9170_driver;
+
static void carl9170_usb_submit_data_urb(struct ar9170 *ar)
{
struct urb *urb;
@@ -966,32 +968,28 @@ err_out:
static void carl9170_usb_firmware_failed(struct ar9170 *ar)
{
- struct device *parent = ar->udev->dev.parent;
- struct usb_device *udev;
-
- /*
- * Store a copy of the usb_device pointer locally.
- * This is because device_release_driver initiates
- * carl9170_usb_disconnect, which in turn frees our
- * driver context (ar).
+ /* Store copies of the usb_interface and usb_device pointers locally.
+ * This is because release_driver initiates carl9170_usb_disconnect,
+ * which in turn frees our driver context (ar).
*/
- udev = ar->udev;
+ struct usb_interface *intf = ar->intf;
+ struct usb_device *udev = ar->udev;
complete(&ar->fw_load_wait);
+ /* at this point 'ar' could be already freed. Don't use it anymore */
+ ar = NULL;
/* unbind anything failed */
- if (parent)
- device_lock(parent);
-
- device_release_driver(&udev->dev);
- if (parent)
- device_unlock(parent);
+ usb_lock_device(udev);
+ usb_driver_release_interface(&carl9170_driver, intf);
+ usb_unlock_device(udev);
- usb_put_dev(udev);
+ usb_put_intf(intf);
}
static void carl9170_usb_firmware_finish(struct ar9170 *ar)
{
+ struct usb_interface *intf = ar->intf;
int err;
err = carl9170_parse_firmware(ar);
@@ -1009,7 +1007,7 @@ static void carl9170_usb_firmware_finish(struct ar9170 *ar)
goto err_unrx;
complete(&ar->fw_load_wait);
- usb_put_dev(ar->udev);
+ usb_put_intf(intf);
return;
err_unrx:
@@ -1052,7 +1050,6 @@ static int carl9170_usb_probe(struct usb_interface *intf,
return PTR_ERR(ar);
udev = interface_to_usbdev(intf);
- usb_get_dev(udev);
ar->udev = udev;
ar->intf = intf;
ar->features = id->driver_info;
@@ -1094,15 +1091,14 @@ static int carl9170_usb_probe(struct usb_interface *intf,
atomic_set(&ar->rx_anch_urbs, 0);
atomic_set(&ar->rx_pool_urbs, 0);
- usb_get_dev(ar->udev);
+ usb_get_intf(intf);
carl9170_set_state(ar, CARL9170_STOPPED);
err = request_firmware_nowait(THIS_MODULE, 1, CARL9170FW_NAME,
&ar->udev->dev, GFP_KERNEL, ar, carl9170_usb_firmware_step2);
if (err) {
- usb_put_dev(udev);
- usb_put_dev(udev);
+ usb_put_intf(intf);
carl9170_free(ar);
}
return err;
@@ -1131,7 +1127,6 @@ static void carl9170_usb_disconnect(struct usb_interface *intf)
carl9170_release_firmware(ar);
carl9170_free(ar);
- usb_put_dev(udev);
}
#ifdef CONFIG_PM
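
The reference-counting change above is easier to follow laid out end to end; the lines below only restate what the hunks already show (no additional driver code is implied):

/* probe():            usb_get_intf(intf);      pin the interface for the
 *                                              async firmware request
 * firmware_finish():  usb_put_intf(intf);      success path drops it
 * firmware_failed():  usb_driver_release_interface(&carl9170_driver, intf);
 *                     usb_put_intf(intf);      failure path unbinds, then drops
 * disconnect():       the old usb_put_dev(udev) is gone, since the extra
 *                     device reference is no longer taken in probe()
 */
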
diff --git a/drivers/net/wireless/ath/dfs_pattern_detector.c b/drivers/net/wireless/ath/dfs_pattern_detector.c
index d52b31b45df7..a274eb0d1968 100644
--- a/drivers/net/wireless/ath/dfs_pattern_detector.c
+++ b/drivers/net/wireless/ath/dfs_pattern_detector.c
@@ -111,7 +111,7 @@ static const struct radar_detector_specs jp_radar_ref_types[] = {
JP_PATTERN(0, 0, 1, 1428, 1428, 1, 18, 29, false),
JP_PATTERN(1, 2, 3, 3846, 3846, 1, 18, 29, false),
JP_PATTERN(2, 0, 1, 1388, 1388, 1, 18, 50, false),
- JP_PATTERN(3, 1, 2, 4000, 4000, 1, 18, 50, false),
+ JP_PATTERN(3, 0, 4, 4000, 4000, 1, 18, 50, false),
JP_PATTERN(4, 0, 5, 150, 230, 1, 23, 50, false),
JP_PATTERN(5, 6, 10, 200, 500, 1, 16, 50, false),
JP_PATTERN(6, 11, 20, 200, 500, 1, 12, 50, false),
diff --git a/drivers/net/wireless/ath/regd.h b/drivers/net/wireless/ath/regd.h
index 75ddaefdd049..8d5a16b558e6 100644
--- a/drivers/net/wireless/ath/regd.h
+++ b/drivers/net/wireless/ath/regd.h
@@ -28,7 +28,6 @@ enum ctl_group {
CTL_ETSI = 0x30,
};
-#define NO_CTL 0xff
#define SD_NO_CTL 0xE0
#define NO_CTL 0xff
#define CTL_11A 0
diff --git a/drivers/net/wireless/ath/wcn36xx/Kconfig b/drivers/net/wireless/ath/wcn36xx/Kconfig
index 4ab2d59ff2ca..a4b153470a2c 100644
--- a/drivers/net/wireless/ath/wcn36xx/Kconfig
+++ b/drivers/net/wireless/ath/wcn36xx/Kconfig
@@ -1,4 +1,4 @@
-# SPDX-License-Identifier: GPL-2.0-only
+# SPDX-License-Identifier: ISC
config WCN36XX
tristate "Qualcomm Atheros WCN3660/3680 support"
depends on MAC80211 && HAS_DMA
diff --git a/drivers/net/wireless/ath/wcn36xx/Makefile b/drivers/net/wireless/ath/wcn36xx/Makefile
index 582049f65735..27413703ad69 100644
--- a/drivers/net/wireless/ath/wcn36xx/Makefile
+++ b/drivers/net/wireless/ath/wcn36xx/Makefile
@@ -1,4 +1,4 @@
-# SPDX-License-Identifier: GPL-2.0
+# SPDX-License-Identifier: ISC
obj-$(CONFIG_WCN36XX) := wcn36xx.o
wcn36xx-y += main.o \
dxe.o \
diff --git a/drivers/net/wireless/ath/wil6210/Kconfig b/drivers/net/wireless/ath/wil6210/Kconfig
index b1a339859feb..0d1a8dab30ed 100644
--- a/drivers/net/wireless/ath/wil6210/Kconfig
+++ b/drivers/net/wireless/ath/wil6210/Kconfig
@@ -1,4 +1,4 @@
-# SPDX-License-Identifier: GPL-2.0-only
+# SPDX-License-Identifier: ISC
config WIL6210
tristate "Wilocity 60g WiFi card wil6210 support"
select WANT_DEV_COREDUMP
diff --git a/drivers/net/wireless/ath/wil6210/Makefile b/drivers/net/wireless/ath/wil6210/Makefile
index d3d61ae459e2..53a0d995ddb0 100644
--- a/drivers/net/wireless/ath/wil6210/Makefile
+++ b/drivers/net/wireless/ath/wil6210/Makefile
@@ -1,4 +1,4 @@
-# SPDX-License-Identifier: GPL-2.0
+# SPDX-License-Identifier: ISC
obj-$(CONFIG_WIL6210) += wil6210.o
wil6210-y := main.o
diff --git a/drivers/net/wireless/ath/wil6210/cfg80211.c b/drivers/net/wireless/ath/wil6210/cfg80211.c
index 804955d24b30..d436cc51dfd1 100644
--- a/drivers/net/wireless/ath/wil6210/cfg80211.c
+++ b/drivers/net/wireless/ath/wil6210/cfg80211.c
@@ -314,7 +314,8 @@ int wil_cid_fill_sinfo(struct wil6210_vif *vif, int cid,
memset(&reply, 0, sizeof(reply));
rc = wmi_call(wil, WMI_NOTIFY_REQ_CMDID, vif->mid, &cmd, sizeof(cmd),
- WMI_NOTIFY_REQ_DONE_EVENTID, &reply, sizeof(reply), 20);
+ WMI_NOTIFY_REQ_DONE_EVENTID, &reply, sizeof(reply),
+ WIL_WMI_CALL_GENERAL_TO_MS);
if (rc)
return rc;
@@ -380,8 +381,8 @@ static int wil_cfg80211_get_station(struct wiphy *wiphy,
wil_dbg_misc(wil, "get_station: %pM CID %d MID %d\n", mac, cid,
vif->mid);
- if (cid < 0)
- return cid;
+ if (!wil_cid_valid(wil, cid))
+ return -ENOENT;
rc = wil_cid_fill_sinfo(vif, cid, sinfo);
@@ -395,7 +396,7 @@ static int wil_find_cid_by_idx(struct wil6210_priv *wil, u8 mid, int idx)
{
int i;
- for (i = 0; i < max_assoc_sta; i++) {
+ for (i = 0; i < wil->max_assoc_sta; i++) {
if (wil->sta[i].status == wil_sta_unused)
continue;
if (wil->sta[i].mid != mid)
@@ -417,7 +418,7 @@ static int wil_cfg80211_dump_station(struct wiphy *wiphy,
int rc;
int cid = wil_find_cid_by_idx(wil, vif->mid, idx);
- if (cid < 0)
+ if (!wil_cid_valid(wil, cid))
return -ENOENT;
ether_addr_copy(mac, wil->sta[cid].addr);
@@ -643,6 +644,16 @@ out:
return rc;
}
+static bool wil_is_safe_switch(enum nl80211_iftype from,
+ enum nl80211_iftype to)
+{
+ if (from == NL80211_IFTYPE_STATION &&
+ to == NL80211_IFTYPE_P2P_CLIENT)
+ return true;
+
+ return false;
+}
+
static int wil_cfg80211_change_iface(struct wiphy *wiphy,
struct net_device *ndev,
enum nl80211_iftype type,
@@ -668,7 +679,8 @@ static int wil_cfg80211_change_iface(struct wiphy *wiphy,
* because it can cause significant disruption
*/
if (!wil_has_other_active_ifaces(wil, ndev, true, false) &&
- netif_running(ndev) && !wil_is_recovery_blocked(wil)) {
+ netif_running(ndev) && !wil_is_recovery_blocked(wil) &&
+ !wil_is_safe_switch(wdev->iftype, type)) {
wil_dbg_misc(wil, "interface is up. resetting...\n");
mutex_lock(&wil->mutex);
__wil_down(wil);
@@ -3022,7 +3034,7 @@ static int wil_rf_sector_set_selected(struct wiphy *wiphy,
wil, vif->mid, WMI_INVALID_RF_SECTOR_INDEX,
sector_type, WIL_CID_ALL);
if (rc == -EINVAL) {
- for (i = 0; i < max_assoc_sta; i++) {
+ for (i = 0; i < wil->max_assoc_sta; i++) {
if (wil->sta[i].mid != vif->mid)
continue;
rc = wil_rf_sector_wmi_set_selected(
diff --git a/drivers/net/wireless/ath/wil6210/debugfs.c b/drivers/net/wireless/ath/wil6210/debugfs.c
index df2adff6c33a..74834131cf7c 100644
--- a/drivers/net/wireless/ath/wil6210/debugfs.c
+++ b/drivers/net/wireless/ath/wil6210/debugfs.c
@@ -63,7 +63,9 @@ static void wil_print_desc_edma(struct seq_file *s, struct wil6210_priv *wil,
&ring->va[idx].rx.enhanced;
u16 buff_id = le16_to_cpu(rx_d->mac.buff_id);
- has_skb = wil->rx_buff_mgmt.buff_arr[buff_id].skb;
+ if (wil->rx_buff_mgmt.buff_arr &&
+ wil_val_in_range(buff_id, 0, wil->rx_buff_mgmt.size))
+ has_skb = wil->rx_buff_mgmt.buff_arr[buff_id].skb;
seq_printf(s, "%c", (has_skb) ? _h : _s);
} else {
struct wil_tx_enhanced_desc *d =
@@ -71,9 +73,9 @@ static void wil_print_desc_edma(struct seq_file *s, struct wil6210_priv *wil,
&ring->va[idx].tx.enhanced;
num_of_descs = (u8)d->mac.d[2];
- has_skb = ring->ctx[idx].skb;
+ has_skb = ring->ctx && ring->ctx[idx].skb;
if (num_of_descs >= 1)
- seq_printf(s, "%c", ring->ctx[idx].skb ? _h : _s);
+ seq_printf(s, "%c", has_skb ? _h : _s);
else
/* num_of_descs == 0, it's a frag in a list of descs */
seq_printf(s, "%c", has_skb ? 'h' : _s);
@@ -84,7 +86,7 @@ static void wil_print_ring(struct seq_file *s, struct wil6210_priv *wil,
const char *name, struct wil_ring *ring,
char _s, char _h)
{
- void __iomem *x = wmi_addr(wil, ring->hwtail);
+ void __iomem *x;
u32 v;
seq_printf(s, "RING %s = {\n", name);
@@ -96,7 +98,21 @@ static void wil_print_ring(struct seq_file *s, struct wil6210_priv *wil,
else
seq_printf(s, " swtail = %d\n", ring->swtail);
seq_printf(s, " swhead = %d\n", ring->swhead);
+ if (wil->use_enhanced_dma_hw) {
+ int ring_id = ring->is_rx ?
+ WIL_RX_DESC_RING_ID : ring - wil->ring_tx;
+ /* SUBQ_CONS is a table of 32 entries, one for each Q pair.
+ * lower 16bits are for even ring_id and upper 16bits are for
+ * odd ring_id
+ */
+ x = wmi_addr(wil, RGF_DMA_SCM_SUBQ_CONS + 4 * (ring_id / 2));
+ v = readl_relaxed(x);
+
+ v = (ring_id % 2 ? (v >> 16) : (v & 0xffff));
+ seq_printf(s, " hwhead = %u\n", v);
+ }
seq_printf(s, " hwtail = [0x%08x] -> ", ring->hwtail);
+ x = wmi_addr(wil, ring->hwtail);
if (x) {
v = readl(x);
seq_printf(s, "0x%08x = %d\n", v, v);
@@ -162,7 +178,7 @@ static int ring_show(struct seq_file *s, void *data)
snprintf(name, sizeof(name), "tx_%2d", i);
- if (cid < max_assoc_sta)
+ if (cid < wil->max_assoc_sta)
seq_printf(s,
"\n%pM CID %d TID %d 1x%s BACK([%u] %u TU A%s) [%3d|%3d] idle %s\n",
wil->sta[cid].addr, cid, tid,
@@ -188,7 +204,7 @@ DEFINE_SHOW_ATTRIBUTE(ring);
static void wil_print_sring(struct seq_file *s, struct wil6210_priv *wil,
struct wil_status_ring *sring)
{
- void __iomem *x = wmi_addr(wil, sring->hwtail);
+ void __iomem *x;
int sring_idx = sring - wil->srings;
u32 v;
@@ -199,7 +215,19 @@ static void wil_print_sring(struct seq_file *s, struct wil6210_priv *wil,
seq_printf(s, " size = %d\n", sring->size);
seq_printf(s, " elem_size = %zu\n", sring->elem_size);
seq_printf(s, " swhead = %d\n", sring->swhead);
+ if (wil->use_enhanced_dma_hw) {
+ /* COMPQ_PROD is a table of 32 entries, one for each Q pair.
+ * lower 16bits are for even ring_id and upper 16bits are for
+ * odd ring_id
+ */
+ x = wmi_addr(wil, RGF_DMA_SCM_COMPQ_PROD + 4 * (sring_idx / 2));
+ v = readl_relaxed(x);
+
+ v = (sring_idx % 2 ? (v >> 16) : (v & 0xffff));
+ seq_printf(s, " hwhead = %u\n", v);
+ }
seq_printf(s, " hwtail = [0x%08x] -> ", sring->hwtail);
+ x = wmi_addr(wil, sring->hwtail);
if (x) {
v = readl_relaxed(x);
seq_printf(s, "0x%08x = %d\n", v, v);
@@ -394,25 +422,18 @@ static int wil_debugfs_iomem_x32_get(void *data, u64 *val)
DEFINE_DEBUGFS_ATTRIBUTE(fops_iomem_x32, wil_debugfs_iomem_x32_get,
wil_debugfs_iomem_x32_set, "0x%08llx\n");
-static struct dentry *wil_debugfs_create_iomem_x32(const char *name,
- umode_t mode,
- struct dentry *parent,
- void *value,
- struct wil6210_priv *wil)
+static void wil_debugfs_create_iomem_x32(const char *name, umode_t mode,
+ struct dentry *parent, void *value,
+ struct wil6210_priv *wil)
{
- struct dentry *file;
struct wil_debugfs_iomem_data *data = &wil->dbg_data.data_arr[
wil->dbg_data.iomem_data_count];
data->wil = wil;
data->offset = value;
- file = debugfs_create_file_unsafe(name, mode, parent, data,
- &fops_iomem_x32);
- if (!IS_ERR_OR_NULL(file))
- wil->dbg_data.iomem_data_count++;
-
- return file;
+ debugfs_create_file_unsafe(name, mode, parent, data, &fops_iomem_x32);
+ wil->dbg_data.iomem_data_count++;
}
static int wil_debugfs_ulong_set(void *data, u64 val)
@@ -430,14 +451,6 @@ static int wil_debugfs_ulong_get(void *data, u64 *val)
DEFINE_DEBUGFS_ATTRIBUTE(wil_fops_ulong, wil_debugfs_ulong_get,
wil_debugfs_ulong_set, "0x%llx\n");
-static struct dentry *wil_debugfs_create_ulong(const char *name, umode_t mode,
- struct dentry *parent,
- ulong *value)
-{
- return debugfs_create_file_unsafe(name, mode, parent, value,
- &wil_fops_ulong);
-}
-
/**
* wil6210_debugfs_init_offset - create set of debugfs files
* @wil - driver's context, used for printing
@@ -454,37 +467,30 @@ static void wil6210_debugfs_init_offset(struct wil6210_priv *wil,
int i;
for (i = 0; tbl[i].name; i++) {
- struct dentry *f;
-
switch (tbl[i].type) {
case doff_u32:
- f = debugfs_create_u32(tbl[i].name, tbl[i].mode, dbg,
- base + tbl[i].off);
+ debugfs_create_u32(tbl[i].name, tbl[i].mode, dbg,
+ base + tbl[i].off);
break;
case doff_x32:
- f = debugfs_create_x32(tbl[i].name, tbl[i].mode, dbg,
- base + tbl[i].off);
+ debugfs_create_x32(tbl[i].name, tbl[i].mode, dbg,
+ base + tbl[i].off);
break;
case doff_ulong:
- f = wil_debugfs_create_ulong(tbl[i].name, tbl[i].mode,
- dbg, base + tbl[i].off);
+ debugfs_create_file_unsafe(tbl[i].name, tbl[i].mode,
+ dbg, base + tbl[i].off,
+ &wil_fops_ulong);
break;
case doff_io32:
- f = wil_debugfs_create_iomem_x32(tbl[i].name,
- tbl[i].mode, dbg,
- base + tbl[i].off,
- wil);
+ wil_debugfs_create_iomem_x32(tbl[i].name, tbl[i].mode,
+ dbg, base + tbl[i].off,
+ wil);
break;
case doff_u8:
- f = debugfs_create_u8(tbl[i].name, tbl[i].mode, dbg,
- base + tbl[i].off);
+ debugfs_create_u8(tbl[i].name, tbl[i].mode, dbg,
+ base + tbl[i].off);
break;
- default:
- f = ERR_PTR(-EINVAL);
}
- if (IS_ERR_OR_NULL(f))
- wil_err(wil, "Create file \"%s\": err %ld\n",
- tbl[i].name, PTR_ERR(f));
}
}
@@ -499,19 +505,14 @@ static const struct dbg_off isr_off[] = {
{},
};
-static int wil6210_debugfs_create_ISR(struct wil6210_priv *wil,
- const char *name,
- struct dentry *parent, u32 off)
+static void wil6210_debugfs_create_ISR(struct wil6210_priv *wil,
+ const char *name, struct dentry *parent,
+ u32 off)
{
struct dentry *d = debugfs_create_dir(name, parent);
- if (IS_ERR_OR_NULL(d))
- return -ENODEV;
-
wil6210_debugfs_init_offset(wil, d, (void * __force)wil->csr + off,
isr_off);
-
- return 0;
}
static const struct dbg_off pseudo_isr_off[] = {
@@ -521,18 +522,13 @@ static const struct dbg_off pseudo_isr_off[] = {
{},
};
-static int wil6210_debugfs_create_pseudo_ISR(struct wil6210_priv *wil,
- struct dentry *parent)
+static void wil6210_debugfs_create_pseudo_ISR(struct wil6210_priv *wil,
+ struct dentry *parent)
{
struct dentry *d = debugfs_create_dir("PSEUDO_ISR", parent);
- if (IS_ERR_OR_NULL(d))
- return -ENODEV;
-
wil6210_debugfs_init_offset(wil, d, (void * __force)wil->csr,
pseudo_isr_off);
-
- return 0;
}
static const struct dbg_off lgc_itr_cnt_off[] = {
@@ -580,13 +576,9 @@ static int wil6210_debugfs_create_ITR_CNT(struct wil6210_priv *wil,
struct dentry *d, *dtx, *drx;
d = debugfs_create_dir("ITR_CNT", parent);
- if (IS_ERR_OR_NULL(d))
- return -ENODEV;
dtx = debugfs_create_dir("TX", d);
drx = debugfs_create_dir("RX", d);
- if (IS_ERR_OR_NULL(dtx) || IS_ERR_OR_NULL(drx))
- return -ENODEV;
wil6210_debugfs_init_offset(wil, d, (void * __force)wil->csr,
lgc_itr_cnt_off);
@@ -749,6 +741,44 @@ static const struct file_operations fops_rxon = {
.open = simple_open,
};
+static ssize_t wil_write_file_rbufcap(struct file *file,
+ const char __user *buf,
+ size_t count, loff_t *ppos)
+{
+ struct wil6210_priv *wil = file->private_data;
+ int val;
+ int rc;
+
+ rc = kstrtoint_from_user(buf, count, 0, &val);
+ if (rc) {
+ wil_err(wil, "Invalid argument\n");
+ return rc;
+ }
+ /* input value: negative to disable, 0 to use system default,
+ * 1..ring size to set descriptor threshold
+ */
+ wil_info(wil, "%s RBUFCAP, descriptors threshold - %d\n",
+ val < 0 ? "Disabling" : "Enabling", val);
+
+ if (!wil->ring_rx.va || val > wil->ring_rx.size) {
+ wil_err(wil, "Invalid descriptors threshold, %d\n", val);
+ return -EINVAL;
+ }
+
+ rc = wmi_rbufcap_cfg(wil, val < 0 ? 0 : 1, val < 0 ? 0 : val);
+ if (rc) {
+ wil_err(wil, "RBUFCAP config failed: %d\n", rc);
+ return rc;
+ }
+
+ return count;
+}
+
+static const struct file_operations fops_rbufcap = {
+ .write = wil_write_file_rbufcap,
+ .open = simple_open,
+};
+
/* block ack control, write:
* - "add <ringid> <agg_size> <timeout>" to trigger ADDBA
* - "del_tx <ringid> <reason>" to trigger DELBA for Tx side
@@ -811,7 +841,7 @@ static ssize_t wil_write_back(struct file *file, const char __user *buf,
"BACK: del_rx require at least 2 params\n");
return -EINVAL;
}
- if (p1 < 0 || p1 >= max_assoc_sta) {
+ if (p1 < 0 || p1 >= wil->max_assoc_sta) {
wil_err(wil, "BACK: invalid CID %d\n", p1);
return -EINVAL;
}
@@ -910,9 +940,8 @@ static ssize_t wil_read_pmccfg(struct file *file, char __user *user_buf,
" - \"alloc <num descriptors> <descriptor_size>\" to allocate pmc\n"
" - \"free\" to free memory allocated for pmc\n";
- sprintf(text, "Last command status: %d\n\n%s",
- wil_pmc_last_cmd_status(wil),
- help);
+ snprintf(text, sizeof(text), "Last command status: %d\n\n%s",
+ wil_pmc_last_cmd_status(wil), help);
return simple_read_from_buffer(user_buf, count, ppos, text,
strlen(text) + 1);
@@ -1091,19 +1120,18 @@ static int txdesc_show(struct seq_file *s, void *data)
if (wil->use_enhanced_dma_hw) {
if (tx) {
- skb = ring->ctx[txdesc_idx].skb;
- } else {
+ skb = ring->ctx ? ring->ctx[txdesc_idx].skb : NULL;
+ } else if (wil->rx_buff_mgmt.buff_arr) {
struct wil_rx_enhanced_desc *rx_d =
(struct wil_rx_enhanced_desc *)
&ring->va[txdesc_idx].rx.enhanced;
u16 buff_id = le16_to_cpu(rx_d->mac.buff_id);
if (!wil_val_in_range(buff_id, 0,
- wil->rx_buff_mgmt.size)) {
+ wil->rx_buff_mgmt.size))
seq_printf(s, "invalid buff_id %d\n", buff_id);
- return 0;
- }
- skb = wil->rx_buff_mgmt.buff_arr[buff_id].skb;
+ else
+ skb = wil->rx_buff_mgmt.buff_arr[buff_id].skb;
}
} else {
skb = ring->ctx[txdesc_idx].skb;
@@ -1136,7 +1164,7 @@ static int status_msg_show(struct seq_file *s, void *data)
struct wil6210_priv *wil = s->private;
int sring_idx = dbg_sring_index;
struct wil_status_ring *sring;
- bool tx = sring_idx == wil->tx_sring_idx ? 1 : 0;
+ bool tx;
u32 status_msg_idx = dbg_status_msg_index;
u32 *u;
@@ -1146,6 +1174,7 @@ static int status_msg_show(struct seq_file *s, void *data)
}
sring = &wil->srings[sring_idx];
+ tx = !sring->is_rx;
if (!sring->va) {
seq_printf(s, "No %cX status ring\n", tx ? 'T' : 'R');
@@ -1262,14 +1291,14 @@ static int bf_show(struct seq_file *s, void *data)
memset(&reply, 0, sizeof(reply));
- for (i = 0; i < max_assoc_sta; i++) {
+ for (i = 0; i < wil->max_assoc_sta; i++) {
u32 status;
cmd.cid = i;
rc = wmi_call(wil, WMI_NOTIFY_REQ_CMDID, vif->mid,
&cmd, sizeof(cmd),
WMI_NOTIFY_REQ_DONE_EVENTID, &reply,
- sizeof(reply), 20);
+ sizeof(reply), WIL_WMI_CALL_GENERAL_TO_MS);
/* if reply is all-0, ignore this CID */
if (rc || is_all_zeros(&reply.evt, sizeof(reply.evt)))
continue;
@@ -1307,7 +1336,7 @@ static void print_temp(struct seq_file *s, const char *prefix, s32 t)
{
switch (t) {
case 0:
- case ~(u32)0:
+ case WMI_INVALID_TEMPERATURE:
seq_printf(s, "%s N/A\n", prefix);
break;
default:
@@ -1320,17 +1349,41 @@ static void print_temp(struct seq_file *s, const char *prefix, s32 t)
static int temp_show(struct seq_file *s, void *data)
{
struct wil6210_priv *wil = s->private;
- s32 t_m, t_r;
- int rc = wmi_get_temperature(wil, &t_m, &t_r);
+ int rc, i;
- if (rc) {
- seq_puts(s, "Failed\n");
- return 0;
- }
+ if (test_bit(WMI_FW_CAPABILITY_TEMPERATURE_ALL_RF,
+ wil->fw_capabilities)) {
+ struct wmi_temp_sense_all_done_event sense_all_evt;
- print_temp(s, "T_mac =", t_m);
- print_temp(s, "T_radio =", t_r);
+ wil_dbg_misc(wil,
+ "WMI_FW_CAPABILITY_TEMPERATURE_ALL_RF is supported");
+ rc = wmi_get_all_temperatures(wil, &sense_all_evt);
+ if (rc) {
+ seq_puts(s, "Failed\n");
+ return 0;
+ }
+ print_temp(s, "T_mac =",
+ le32_to_cpu(sense_all_evt.baseband_t1000));
+ seq_printf(s, "Connected RFs [0x%08x]\n",
+ sense_all_evt.rf_bitmap);
+ for (i = 0; i < WMI_MAX_XIF_PORTS_NUM; i++) {
+ seq_printf(s, "RF[%d] = ", i);
+ print_temp(s, "",
+ le32_to_cpu(sense_all_evt.rf_t1000[i]));
+ }
+ } else {
+ s32 t_m, t_r;
+ wil_dbg_misc(wil,
+ "WMI_FW_CAPABILITY_TEMPERATURE_ALL_RF is not supported");
+ rc = wmi_get_temperature(wil, &t_m, &t_r);
+ if (rc) {
+ seq_puts(s, "Failed\n");
+ return 0;
+ }
+ print_temp(s, "T_mac =", t_m);
+ print_temp(s, "T_radio =", t_r);
+ }
return 0;
}
DEFINE_SHOW_ATTRIBUTE(temp);
@@ -1359,7 +1412,7 @@ static int link_show(struct seq_file *s, void *data)
if (!sinfo)
return -ENOMEM;
- for (i = 0; i < max_assoc_sta; i++) {
+ for (i = 0; i < wil->max_assoc_sta; i++) {
struct wil_sta_info *p = &wil->sta[i];
char *status = "unknown";
struct wil6210_vif *vif;
@@ -1561,7 +1614,7 @@ __acquires(&p->tid_rx_lock) __releases(&p->tid_rx_lock)
struct wil6210_priv *wil = s->private;
int i, tid, mcs;
- for (i = 0; i < max_assoc_sta; i++) {
+ for (i = 0; i < wil->max_assoc_sta; i++) {
struct wil_sta_info *p = &wil->sta[i];
char *status = "unknown";
u8 aid = 0;
@@ -1670,7 +1723,7 @@ __acquires(&p->tid_rx_lock) __releases(&p->tid_rx_lock)
struct wil6210_priv *wil = s->private;
int i, bin;
- for (i = 0; i < max_assoc_sta; i++) {
+ for (i = 0; i < wil->max_assoc_sta; i++) {
struct wil_sta_info *p = &wil->sta[i];
char *status = "unknown";
u8 aid = 0;
@@ -1759,7 +1812,7 @@ static ssize_t wil_tx_latency_write(struct file *file, const char __user *buf,
size_t sz = sizeof(u64) * WIL_NUM_LATENCY_BINS;
wil->tx_latency_res = val;
- for (i = 0; i < max_assoc_sta; i++) {
+ for (i = 0; i < wil->max_assoc_sta; i++) {
struct wil_sta_info *sta = &wil->sta[i];
kfree(sta->tx_latency_bins);
@@ -1844,7 +1897,7 @@ static void wil_link_stats_debugfs_show_vif(struct wil6210_vif *vif,
}
seq_printf(s, "TSF %lld\n", vif->fw_stats_tsf);
- for (i = 0; i < max_assoc_sta; i++) {
+ for (i = 0; i < wil->max_assoc_sta; i++) {
if (wil->sta[i].status == wil_sta_unused)
continue;
if (wil->sta[i].mid != vif->mid)
@@ -2336,6 +2389,7 @@ static const struct {
{"tx_latency", 0644, &fops_tx_latency},
{"link_stats", 0644, &fops_link_stats},
{"link_stats_global", 0644, &fops_link_stats_global},
+ {"rbufcap", 0244, &fops_rbufcap},
};
static void wil6210_debugfs_init_files(struct wil6210_priv *wil,
@@ -2460,7 +2514,7 @@ void wil6210_debugfs_remove(struct wil6210_priv *wil)
wil->debug = NULL;
kfree(wil->dbg_data.data_arr);
- for (i = 0; i < max_assoc_sta; i++)
+ for (i = 0; i < wil->max_assoc_sta; i++)
kfree(wil->sta[i].tx_latency_bins);
/* free pmc memory without sending command to fw, as it will
diff --git a/drivers/net/wireless/ath/wil6210/fw.h b/drivers/net/wireless/ath/wil6210/fw.h
index 3e7a28045cab..fa3164765b20 100644
--- a/drivers/net/wireless/ath/wil6210/fw.h
+++ b/drivers/net/wireless/ath/wil6210/fw.h
@@ -1,6 +1,6 @@
/*
* Copyright (c) 2014,2016 Qualcomm Atheros, Inc.
- * Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.
*
* Permission to use, copy, modify, and/or distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
@@ -109,12 +109,17 @@ struct wil_fw_record_concurrency { /* type == wil_fw_type_comment */
/* brd file info encoded inside a comment record */
#define WIL_BRD_FILE_MAGIC (0xabcddcbb)
+
+struct brd_info {
+ __le32 base_addr;
+ __le32 max_size_bytes;
+} __packed;
+
struct wil_fw_record_brd_file { /* type == wil_fw_type_comment */
/* identifies brd file record */
struct wil_fw_record_comment_hdr hdr;
__le32 version;
- __le32 base_addr;
- __le32 max_size_bytes;
+ struct brd_info brd_info[0];
} __packed;
/* perform action
diff --git a/drivers/net/wireless/ath/wil6210/fw_inc.c b/drivers/net/wireless/ath/wil6210/fw_inc.c
index 3ec0f2fab9b7..94ebfa338e3f 100644
--- a/drivers/net/wireless/ath/wil6210/fw_inc.c
+++ b/drivers/net/wireless/ath/wil6210/fw_inc.c
@@ -156,17 +156,52 @@ fw_handle_brd_file(struct wil6210_priv *wil, const void *data,
size_t size)
{
const struct wil_fw_record_brd_file *rec = data;
+ u32 max_num_ent, i, ent_size;
- if (size < sizeof(*rec)) {
- wil_err_fw(wil, "brd_file record too short: %zu\n", size);
- return 0;
+ if (size <= offsetof(struct wil_fw_record_brd_file, brd_info)) {
+ wil_err(wil, "board record too short, size %zu\n", size);
+ return -EINVAL;
+ }
+
+ ent_size = size - offsetof(struct wil_fw_record_brd_file, brd_info);
+ max_num_ent = ent_size / sizeof(struct brd_info);
+
+ if (!max_num_ent) {
+ wil_err(wil, "brd info entries are missing\n");
+ return -EINVAL;
}
- wil->brd_file_addr = le32_to_cpu(rec->base_addr);
- wil->brd_file_max_size = le32_to_cpu(rec->max_size_bytes);
+ wil->brd_info = kcalloc(max_num_ent, sizeof(struct wil_brd_info),
+ GFP_KERNEL);
+ if (!wil->brd_info)
+ return -ENOMEM;
- wil_dbg_fw(wil, "brd_file_addr 0x%x, brd_file_max_size %d\n",
- wil->brd_file_addr, wil->brd_file_max_size);
+ for (i = 0; i < max_num_ent; i++) {
+ wil->brd_info[i].file_addr =
+ le32_to_cpu(rec->brd_info[i].base_addr);
+ wil->brd_info[i].file_max_size =
+ le32_to_cpu(rec->brd_info[i].max_size_bytes);
+
+ if (!wil->brd_info[i].file_addr)
+ break;
+
+ wil_dbg_fw(wil,
+ "brd info %d: file_addr 0x%x, file_max_size %d\n",
+ i, wil->brd_info[i].file_addr,
+ wil->brd_info[i].file_max_size);
+ }
+
+ wil->num_of_brd_entries = i;
+ if (wil->num_of_brd_entries == 0) {
+ kfree(wil->brd_info);
+ wil->brd_info = NULL;
+ wil_dbg_fw(wil,
+ "no valid brd info entries, using brd file addr\n");
+
+ } else {
+ wil_dbg_fw(wil, "num of brd info entries %d\n",
+ wil->num_of_brd_entries);
+ }
return 0;
}
@@ -634,6 +669,11 @@ int wil_request_firmware(struct wil6210_priv *wil, const char *name,
}
wil_dbg_fw(wil, "Loading <%s>, %zu bytes\n", name, fw->size);
+ /* re-initialize board info params */
+ wil->num_of_brd_entries = 0;
+ kfree(wil->brd_info);
+ wil->brd_info = NULL;
+
for (sz = fw->size, d = fw->data; sz; sz -= rc1, d += rc1) {
rc1 = wil_fw_verify(wil, d, sz);
if (rc1 < 0) {
@@ -662,11 +702,13 @@ static int wil_brd_process(struct wil6210_priv *wil, const void *data,
{
int rc = 0;
const struct wil_fw_record_head *hdr = data;
- size_t s, hdr_sz;
+ size_t s, hdr_sz = 0;
u16 type;
+ int i = 0;
- /* Assuming the board file includes only one header record and one data
- * record. Each record starts with wil_fw_record_head.
+ /* Assuming the board file includes only one file header
+ * and one or several data records.
+ * Each record starts with wil_fw_record_head.
*/
if (size < sizeof(*hdr))
return -EINVAL;
@@ -674,40 +716,67 @@ static int wil_brd_process(struct wil6210_priv *wil, const void *data,
if (s > size)
return -EINVAL;
- /* Skip the header record and handle the data record */
- hdr = (const void *)hdr + s;
+ /* Skip the header record and handle the data records */
size -= s;
- if (size < sizeof(*hdr))
- return -EINVAL;
- hdr_sz = le32_to_cpu(hdr->size);
- if (wil->brd_file_max_size && hdr_sz > wil->brd_file_max_size)
- return -EINVAL;
- if (sizeof(*hdr) + hdr_sz > size)
- return -EINVAL;
- if (hdr_sz % 4) {
- wil_err_fw(wil, "unaligned record size: %zu\n",
- hdr_sz);
- return -EINVAL;
- }
- type = le16_to_cpu(hdr->type);
- if (type != wil_fw_type_data) {
- wil_err_fw(wil, "invalid record type for board file: %d\n",
- type);
- return -EINVAL;
+ for (hdr = data + s;; hdr = (const void *)hdr + s, size -= s, i++) {
+ if (size < sizeof(*hdr))
+ break;
+
+ if (i >= wil->num_of_brd_entries) {
+ wil_err_fw(wil,
+ "Too many brd records: %d, num of expected entries %d\n",
+ i, wil->num_of_brd_entries);
+ break;
+ }
+
+ hdr_sz = le32_to_cpu(hdr->size);
+ s = sizeof(*hdr) + hdr_sz;
+ if (wil->brd_info[i].file_max_size &&
+ hdr_sz > wil->brd_info[i].file_max_size)
+ return -EINVAL;
+ if (sizeof(*hdr) + hdr_sz > size)
+ return -EINVAL;
+ if (hdr_sz % 4) {
+ wil_err_fw(wil, "unaligned record size: %zu\n",
+ hdr_sz);
+ return -EINVAL;
+ }
+ type = le16_to_cpu(hdr->type);
+ if (type != wil_fw_type_data) {
+ wil_err_fw(wil,
+ "invalid record type for board file: %d\n",
+ type);
+ return -EINVAL;
+ }
+ if (hdr_sz < sizeof(struct wil_fw_record_data)) {
+ wil_err_fw(wil, "data record too short: %zu\n", hdr_sz);
+ return -EINVAL;
+ }
+
+ wil_dbg_fw(wil,
+ "using info from fw file for record %d: addr[0x%08x], max size %d\n",
+ i, wil->brd_info[i].file_addr,
+ wil->brd_info[i].file_max_size);
+
+ rc = __fw_handle_data(wil, &hdr[1], hdr_sz,
+ cpu_to_le32(wil->brd_info[i].file_addr));
+ if (rc)
+ return rc;
}
- if (hdr_sz < sizeof(struct wil_fw_record_data)) {
- wil_err_fw(wil, "data record too short: %zu\n", hdr_sz);
+
+ if (size) {
+ wil_err_fw(wil, "unprocessed bytes: %zu\n", size);
+ if (size >= sizeof(*hdr)) {
+ wil_err_fw(wil,
+ "Stop at offset %ld record type %d [%zd bytes]\n",
+ (long)((const void *)hdr - data),
+ le16_to_cpu(hdr->type), hdr_sz);
+ }
return -EINVAL;
}
- wil_dbg_fw(wil, "using addr from fw file: [0x%08x]\n",
- wil->brd_file_addr);
-
- rc = __fw_handle_data(wil, &hdr[1], hdr_sz,
- cpu_to_le32(wil->brd_file_addr));
-
- return rc;
+ return 0;
}
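For orientation, a hedged sketch of the record walk that wil_brd_process() now performs: one file header followed by back-to-back data records, each carrying its own payload size. The struct below is a simplified stand-in, not the driver's wil_fw_record_head, and endianness handling is omitted:

#include <linux/types.h>

struct example_rec_head {	/* simplified stand-in for a record header */
	u16 type;
	u32 size;		/* payload bytes that follow the header */
} __packed;

/* Illustrative only: count well-formed, back-to-back records in a blob */
static unsigned int example_count_records(const u8 *buf, size_t len)
{
	unsigned int n = 0;

	while (len >= sizeof(struct example_rec_head)) {
		const struct example_rec_head *r = (const void *)buf;
		size_t step = sizeof(*r) + r->size;

		if (step > len)
			break;		/* truncated or oversized record */
		buf += step;
		len -= step;
		n++;
	}
	return n;
}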
/**
@@ -738,7 +807,8 @@ int wil_request_board(struct wil6210_priv *wil, const char *name)
rc = dlen;
goto out;
}
- /* Process the data record */
+
+ /* Process the data records */
rc = wil_brd_process(wil, brd->data, dlen);
out:
diff --git a/drivers/net/wireless/ath/wil6210/interrupt.c b/drivers/net/wireless/ath/wil6210/interrupt.c
index 3f5bd177d55f..b00a13d6d530 100644
--- a/drivers/net/wireless/ath/wil6210/interrupt.c
+++ b/drivers/net/wireless/ath/wil6210/interrupt.c
@@ -296,21 +296,24 @@ void wil_configure_interrupt_moderation(struct wil6210_priv *wil)
static irqreturn_t wil6210_irq_rx(int irq, void *cookie)
{
struct wil6210_priv *wil = cookie;
- u32 isr = wil_ioread32_and_clear(wil->csr +
- HOSTADDR(RGF_DMA_EP_RX_ICR) +
- offsetof(struct RGF_ICR, ICR));
+ u32 isr;
bool need_unmask = true;
+ wil6210_mask_irq_rx(wil);
+
+ isr = wil_ioread32_and_clear(wil->csr +
+ HOSTADDR(RGF_DMA_EP_RX_ICR) +
+ offsetof(struct RGF_ICR, ICR));
+
trace_wil6210_irq_rx(isr);
wil_dbg_irq(wil, "ISR RX 0x%08x\n", isr);
if (unlikely(!isr)) {
wil_err_ratelimited(wil, "spurious IRQ: RX\n");
+ wil6210_unmask_irq_rx(wil);
return IRQ_NONE;
}
- wil6210_mask_irq_rx(wil);
-
/* RX_DONE and RX_HTRSH interrupts are the same if interrupt
* moderation is not used. Interrupt moderation may cause RX
* buffer overflow while RX_DONE is delayed. The required
@@ -355,21 +358,24 @@ static irqreturn_t wil6210_irq_rx(int irq, void *cookie)
static irqreturn_t wil6210_irq_rx_edma(int irq, void *cookie)
{
struct wil6210_priv *wil = cookie;
- u32 isr = wil_ioread32_and_clear(wil->csr +
- HOSTADDR(RGF_INT_GEN_RX_ICR) +
- offsetof(struct RGF_ICR, ICR));
+ u32 isr;
bool need_unmask = true;
+ wil6210_mask_irq_rx_edma(wil);
+
+ isr = wil_ioread32_and_clear(wil->csr +
+ HOSTADDR(RGF_INT_GEN_RX_ICR) +
+ offsetof(struct RGF_ICR, ICR));
+
trace_wil6210_irq_rx(isr);
wil_dbg_irq(wil, "ISR RX 0x%08x\n", isr);
if (unlikely(!isr)) {
wil_err(wil, "spurious IRQ: RX\n");
+ wil6210_unmask_irq_rx_edma(wil);
return IRQ_NONE;
}
- wil6210_mask_irq_rx_edma(wil);
-
if (likely(isr & BIT_RX_STATUS_IRQ)) {
wil_dbg_irq(wil, "RX status ring\n");
isr &= ~BIT_RX_STATUS_IRQ;
@@ -403,21 +409,24 @@ static irqreturn_t wil6210_irq_rx_edma(int irq, void *cookie)
static irqreturn_t wil6210_irq_tx_edma(int irq, void *cookie)
{
struct wil6210_priv *wil = cookie;
- u32 isr = wil_ioread32_and_clear(wil->csr +
- HOSTADDR(RGF_INT_GEN_TX_ICR) +
- offsetof(struct RGF_ICR, ICR));
+ u32 isr;
bool need_unmask = true;
+ wil6210_mask_irq_tx_edma(wil);
+
+ isr = wil_ioread32_and_clear(wil->csr +
+ HOSTADDR(RGF_INT_GEN_TX_ICR) +
+ offsetof(struct RGF_ICR, ICR));
+
trace_wil6210_irq_tx(isr);
wil_dbg_irq(wil, "ISR TX 0x%08x\n", isr);
if (unlikely(!isr)) {
wil_err(wil, "spurious IRQ: TX\n");
+ wil6210_unmask_irq_tx_edma(wil);
return IRQ_NONE;
}
- wil6210_mask_irq_tx_edma(wil);
-
if (likely(isr & BIT_TX_STATUS_IRQ)) {
wil_dbg_irq(wil, "TX status ring\n");
isr &= ~BIT_TX_STATUS_IRQ;
@@ -446,21 +455,24 @@ static irqreturn_t wil6210_irq_tx_edma(int irq, void *cookie)
static irqreturn_t wil6210_irq_tx(int irq, void *cookie)
{
struct wil6210_priv *wil = cookie;
- u32 isr = wil_ioread32_and_clear(wil->csr +
- HOSTADDR(RGF_DMA_EP_TX_ICR) +
- offsetof(struct RGF_ICR, ICR));
+ u32 isr;
bool need_unmask = true;
+ wil6210_mask_irq_tx(wil);
+
+ isr = wil_ioread32_and_clear(wil->csr +
+ HOSTADDR(RGF_DMA_EP_TX_ICR) +
+ offsetof(struct RGF_ICR, ICR));
+
trace_wil6210_irq_tx(isr);
wil_dbg_irq(wil, "ISR TX 0x%08x\n", isr);
if (unlikely(!isr)) {
wil_err_ratelimited(wil, "spurious IRQ: TX\n");
+ wil6210_unmask_irq_tx(wil);
return IRQ_NONE;
}
- wil6210_mask_irq_tx(wil);
-
if (likely(isr & BIT_DMA_EP_TX_ICR_TX_DONE)) {
wil_dbg_irq(wil, "TX done\n");
isr &= ~BIT_DMA_EP_TX_ICR_TX_DONE;
@@ -532,20 +544,23 @@ static bool wil_validate_mbox_regs(struct wil6210_priv *wil)
static irqreturn_t wil6210_irq_misc(int irq, void *cookie)
{
struct wil6210_priv *wil = cookie;
- u32 isr = wil_ioread32_and_clear(wil->csr +
- HOSTADDR(RGF_DMA_EP_MISC_ICR) +
- offsetof(struct RGF_ICR, ICR));
+ u32 isr;
+
+ wil6210_mask_irq_misc(wil, false);
+
+ isr = wil_ioread32_and_clear(wil->csr +
+ HOSTADDR(RGF_DMA_EP_MISC_ICR) +
+ offsetof(struct RGF_ICR, ICR));
trace_wil6210_irq_misc(isr);
wil_dbg_irq(wil, "ISR MISC 0x%08x\n", isr);
if (!isr) {
wil_err(wil, "spurious IRQ: MISC\n");
+ wil6210_unmask_irq_misc(wil, false);
return IRQ_NONE;
}
- wil6210_mask_irq_misc(wil, false);
-
if (isr & ISR_MISC_FW_ERROR) {
u32 fw_assert_code = wil_r(wil, wil->rgf_fw_assert_code_addr);
u32 ucode_assert_code =
@@ -580,7 +595,7 @@ static irqreturn_t wil6210_irq_misc(int irq, void *cookie)
/* no need to handle HALP ICRs until next vote */
wil->halp.handle_icr = false;
wil_dbg_irq(wil, "irq_misc: HALP IRQ invoked\n");
- wil6210_mask_halp(wil);
+ wil6210_mask_irq_misc(wil, true);
complete(&wil->halp.comp);
}
}
diff --git a/drivers/net/wireless/ath/wil6210/main.c b/drivers/net/wireless/ath/wil6210/main.c
index 9b9c9ec01536..173561fe593d 100644
--- a/drivers/net/wireless/ath/wil6210/main.c
+++ b/drivers/net/wireless/ath/wil6210/main.c
@@ -241,7 +241,7 @@ static bool wil_vif_is_connected(struct wil6210_priv *wil, u8 mid)
{
int i;
- for (i = 0; i < max_assoc_sta; i++) {
+ for (i = 0; i < wil->max_assoc_sta; i++) {
if (wil->sta[i].mid == mid &&
wil->sta[i].status == wil_sta_connected)
return true;
@@ -340,11 +340,11 @@ static void _wil6210_disconnect_complete(struct wil6210_vif *vif,
wil_dbg_misc(wil,
"Disconnect complete %pM, CID=%d, reason=%d\n",
bssid, cid, reason_code);
- if (cid >= 0) /* disconnect 1 peer */
+ if (wil_cid_valid(wil, cid)) /* disconnect 1 peer */
wil_disconnect_cid_complete(vif, cid, reason_code);
} else { /* all */
wil_dbg_misc(wil, "Disconnect complete all\n");
- for (cid = 0; cid < max_assoc_sta; cid++)
+ for (cid = 0; cid < wil->max_assoc_sta; cid++)
wil_disconnect_cid_complete(vif, cid, reason_code);
}
@@ -452,11 +452,11 @@ static void _wil6210_disconnect(struct wil6210_vif *vif, const u8 *bssid,
cid = wil_find_cid(wil, vif->mid, bssid);
wil_dbg_misc(wil, "Disconnect %pM, CID=%d, reason=%d\n",
bssid, cid, reason_code);
- if (cid >= 0) /* disconnect 1 peer */
+ if (wil_cid_valid(wil, cid)) /* disconnect 1 peer */
wil_disconnect_cid(vif, cid, reason_code);
} else { /* all */
wil_dbg_misc(wil, "Disconnect all\n");
- for (cid = 0; cid < max_assoc_sta; cid++)
+ for (cid = 0; cid < wil->max_assoc_sta; cid++)
wil_disconnect_cid(vif, cid, reason_code);
}
@@ -753,6 +753,7 @@ int wil_priv_init(struct wil6210_priv *wil)
wil->reply_mid = U8_MAX;
wil->max_vifs = 1;
+ wil->max_assoc_sta = max_assoc_sta;
/* edma configuration can be updated via debugfs before allocation */
wil->num_rx_status_rings = WIL_DEFAULT_NUM_RX_STATUS_RINGS;
@@ -838,6 +839,7 @@ void wil_priv_deinit(struct wil6210_priv *wil)
wmi_event_flush(wil);
destroy_workqueue(wil->wq_service);
destroy_workqueue(wil->wmi_wq);
+ kfree(wil->brd_info);
}
static void wil_shutdown_bl(struct wil6210_priv *wil)
@@ -1520,6 +1522,7 @@ int wil_ps_update(struct wil6210_priv *wil, enum wmi_ps_profile_type ps_profile)
static void wil_pre_fw_config(struct wil6210_priv *wil)
{
+ wil_clear_fw_log_addr(wil);
/* Mark FW as loaded from host */
wil_s(wil, RGF_USER_USAGE_6, 1);
@@ -1577,6 +1580,20 @@ static int wil_restore_vifs(struct wil6210_priv *wil)
}
/*
+ * Clear FW and ucode log start addr to indicate FW log is not ready. The host
+ * driver clears the addresses before the FW starts, and the FW initializes
+ * the addresses when it is ready to send logs.
+ */
+void wil_clear_fw_log_addr(struct wil6210_priv *wil)
+{
+ /* FW log addr */
+ wil_w(wil, RGF_USER_USAGE_1, 0);
+ /* ucode log addr */
+ wil_w(wil, RGF_USER_USAGE_2, 0);
+ wil_dbg_misc(wil, "Cleared FW and ucode log address");
+}
+
+/*
* We reset all the structures, and we reset the UMAC.
* After calling this routine, you're expected to reload
* the firmware.
@@ -1709,7 +1726,7 @@ int wil_reset(struct wil6210_priv *wil, bool load_fw)
rc = wil_request_firmware(wil, wil->wil_fw_name, true);
if (rc)
goto out;
- if (wil->brd_file_addr)
+ if (wil->num_of_brd_entries)
rc = wil_request_board(wil, board_file);
else
rc = wil_request_firmware(wil, board_file, true);
@@ -1921,7 +1938,7 @@ int wil_find_cid(struct wil6210_priv *wil, u8 mid, const u8 *mac)
int i;
int rc = -ENOENT;
- for (i = 0; i < max_assoc_sta; i++) {
+ for (i = 0; i < wil->max_assoc_sta; i++) {
if (wil->sta[i].mid == mid &&
wil->sta[i].status != wil_sta_unused &&
ether_addr_equal(wil->sta[i].addr, mac)) {
@@ -1938,6 +1955,9 @@ void wil_halp_vote(struct wil6210_priv *wil)
unsigned long rc;
unsigned long to_jiffies = msecs_to_jiffies(WAIT_FOR_HALP_VOTE_MS);
+ if (wil->hw_version >= HW_VER_TALYN_MB)
+ return;
+
mutex_lock(&wil->halp.lock);
wil_dbg_irq(wil, "halp_vote: start, HALP ref_cnt (%d)\n",
@@ -1969,6 +1989,9 @@ void wil_halp_vote(struct wil6210_priv *wil)
void wil_halp_unvote(struct wil6210_priv *wil)
{
+ if (wil->hw_version >= HW_VER_TALYN_MB)
+ return;
+
WARN_ON(wil->halp.ref_cnt == 0);
mutex_lock(&wil->halp.lock);
diff --git a/drivers/net/wireless/ath/wil6210/pcie_bus.c b/drivers/net/wireless/ath/wil6210/pcie_bus.c
index 3b82d6cfc218..9f5a914abc18 100644
--- a/drivers/net/wireless/ath/wil6210/pcie_bus.c
+++ b/drivers/net/wireless/ath/wil6210/pcie_bus.c
@@ -142,6 +142,8 @@ int wil_set_capabilities(struct wil6210_priv *wil)
min(sizeof(wil->platform_capa), sizeof(platform_capa)));
}
+ wil_info(wil, "platform_capa 0x%lx\n", *wil->platform_capa);
+
/* extract FW capabilities from file without loading the FW */
wil_request_firmware(wil, wil->wil_fw_name, false);
wil_refresh_fw_capabilities(wil);
@@ -418,6 +420,7 @@ static int wil_pcie_probe(struct pci_dev *pdev, const struct pci_device_id *id)
}
/* rollback to bus_disable */
+ wil_clear_fw_log_addr(wil);
rc = wil_if_add(wil);
if (rc) {
wil_err(wil, "wil_if_add failed: %d\n", rc);
diff --git a/drivers/net/wireless/ath/wil6210/rx_reorder.c b/drivers/net/wireless/ath/wil6210/rx_reorder.c
index 32b14fc33a59..784239bcb3a6 100644
--- a/drivers/net/wireless/ath/wil6210/rx_reorder.c
+++ b/drivers/net/wireless/ath/wil6210/rx_reorder.c
@@ -316,7 +316,7 @@ __acquires(&sta->tid_rx_lock) __releases(&sta->tid_rx_lock)
u16 agg_timeout = le16_to_cpu(ba_timeout);
u16 seq_ctrl = le16_to_cpu(ba_seq_ctrl);
struct wil_sta_info *sta;
- u16 agg_wsize = 0;
+ u16 agg_wsize;
/* bit 0: A-MSDU supported
* bit 1: policy (should be 0 for us)
* bits 2..5: TID
@@ -328,7 +328,6 @@ __acquires(&sta->tid_rx_lock) __releases(&sta->tid_rx_lock)
test_bit(WMI_FW_CAPABILITY_AMSDU, wil->fw_capabilities) &&
wil->amsdu_en && (param_set & BIT(0));
int ba_policy = param_set & BIT(1);
- u16 status = WLAN_STATUS_SUCCESS;
u16 ssn = seq_ctrl >> 4;
struct wil_tid_ampdu_rx *r;
int rc = 0;
@@ -336,7 +335,7 @@ __acquires(&sta->tid_rx_lock) __releases(&sta->tid_rx_lock)
might_sleep();
/* sanity checks */
- if (cid >= max_assoc_sta) {
+ if (cid >= wil->max_assoc_sta) {
wil_err(wil, "BACK: invalid CID %d\n", cid);
rc = -EINVAL;
goto out;
@@ -355,27 +354,19 @@ __acquires(&sta->tid_rx_lock) __releases(&sta->tid_rx_lock)
agg_amsdu ? "+" : "-", !!ba_policy, dialog_token, ssn);
/* apply policies */
- if (ba_policy) {
- wil_err(wil, "BACK requested unsupported ba_policy == 1\n");
- status = WLAN_STATUS_INVALID_QOS_PARAM;
- }
- if (status == WLAN_STATUS_SUCCESS) {
- if (req_agg_wsize == 0) {
- wil_dbg_misc(wil, "Suggest BACK wsize %d\n",
- wil->max_agg_wsize);
- agg_wsize = wil->max_agg_wsize;
- } else {
- agg_wsize = min_t(u16,
- wil->max_agg_wsize, req_agg_wsize);
- }
+ if (req_agg_wsize == 0) {
+ wil_dbg_misc(wil, "Suggest BACK wsize %d\n",
+ wil->max_agg_wsize);
+ agg_wsize = wil->max_agg_wsize;
+ } else {
+ agg_wsize = min_t(u16, wil->max_agg_wsize, req_agg_wsize);
}
rc = wil->txrx_ops.wmi_addba_rx_resp(wil, mid, cid, tid, dialog_token,
- status, agg_amsdu, agg_wsize,
- agg_timeout);
- if (rc || (status != WLAN_STATUS_SUCCESS)) {
- wil_err(wil, "do not apply ba, rc(%d), status(%d)\n", rc,
- status);
+ WLAN_STATUS_SUCCESS, agg_amsdu,
+ agg_wsize, agg_timeout);
+ if (rc) {
+ wil_err(wil, "do not apply ba, rc(%d)\n", rc);
goto out;
}
diff --git a/drivers/net/wireless/ath/wil6210/txrx.c b/drivers/net/wireless/ath/wil6210/txrx.c
index 4ccfd1404458..eae00aafaa88 100644
--- a/drivers/net/wireless/ath/wil6210/txrx.c
+++ b/drivers/net/wireless/ath/wil6210/txrx.c
@@ -411,7 +411,7 @@ static int wil_rx_get_cid_by_skb(struct wil6210_priv *wil, struct sk_buff *skb)
ta = hdr->addr2;
}
- if (max_assoc_sta <= WIL6210_RX_DESC_MAX_CID)
+ if (wil->max_assoc_sta <= WIL6210_RX_DESC_MAX_CID)
return cid;
/* assuming no concurrency between AP interfaces and STA interfaces.
@@ -426,14 +426,14 @@ static int wil_rx_get_cid_by_skb(struct wil6210_priv *wil, struct sk_buff *skb)
* to find the real cid, compare transmitter address with the stored
* stations mac address in the driver sta array
*/
- for (i = cid; i < max_assoc_sta; i += WIL6210_RX_DESC_MAX_CID) {
+ for (i = cid; i < wil->max_assoc_sta; i += WIL6210_RX_DESC_MAX_CID) {
if (wil->sta[i].status != wil_sta_unused &&
ether_addr_equal(wil->sta[i].addr, ta)) {
cid = i;
break;
}
}
- if (i >= max_assoc_sta) {
+ if (i >= wil->max_assoc_sta) {
wil_err_ratelimited(wil, "Could not find cid for frame with transmit addr = %pM, iftype = %d, frametype = %d, len = %d\n",
ta, vif->wdev.iftype, ftype, skb->len);
cid = -ENOENT;
@@ -750,6 +750,7 @@ void wil_netif_rx_any(struct sk_buff *skb, struct net_device *ndev)
[GRO_HELD] = "GRO_HELD",
[GRO_NORMAL] = "GRO_NORMAL",
[GRO_DROP] = "GRO_DROP",
+ [GRO_CONSUMED] = "GRO_CONSUMED",
};
wil->txrx_ops.get_netif_rx_params(skb, &cid, &security);
@@ -1036,7 +1037,8 @@ static int wil_vring_init_tx(struct wil6210_vif *vif, int id, int size,
if (!vif->privacy)
txdata->dot1x_open = true;
rc = wmi_call(wil, WMI_VRING_CFG_CMDID, vif->mid, &cmd, sizeof(cmd),
- WMI_VRING_CFG_DONE_EVENTID, &reply, sizeof(reply), 100);
+ WMI_VRING_CFG_DONE_EVENTID, &reply, sizeof(reply),
+ WIL_WMI_CALL_GENERAL_TO_MS);
if (rc)
goto out_free;
@@ -1063,7 +1065,7 @@ static int wil_vring_init_tx(struct wil6210_vif *vif, int id, int size,
txdata->enabled = 0;
spin_unlock_bh(&txdata->lock);
wil_vring_free(wil, vring);
- wil->ring2cid_tid[id][0] = max_assoc_sta;
+ wil->ring2cid_tid[id][0] = wil->max_assoc_sta;
wil->ring2cid_tid[id][1] = 0;
out:
@@ -1124,7 +1126,8 @@ static int wil_tx_vring_modify(struct wil6210_vif *vif, int ring_id, int cid,
cmd.vring_cfg.tx_sw_ring.ring_mem_base = cpu_to_le64(vring->pa);
rc = wmi_call(wil, WMI_VRING_CFG_CMDID, vif->mid, &cmd, sizeof(cmd),
- WMI_VRING_CFG_DONE_EVENTID, &reply, sizeof(reply), 100);
+ WMI_VRING_CFG_DONE_EVENTID, &reply, sizeof(reply),
+ WIL_WMI_CALL_GENERAL_TO_MS);
if (rc)
goto fail;
@@ -1148,7 +1151,7 @@ fail:
txdata->dot1x_open = false;
txdata->enabled = 0;
spin_unlock_bh(&txdata->lock);
- wil->ring2cid_tid[ring_id][0] = max_assoc_sta;
+ wil->ring2cid_tid[ring_id][0] = wil->max_assoc_sta;
wil->ring2cid_tid[ring_id][1] = 0;
return rc;
}
@@ -1195,7 +1198,7 @@ int wil_vring_init_bcast(struct wil6210_vif *vif, int id, int size)
if (rc)
goto out;
- wil->ring2cid_tid[id][0] = max_assoc_sta; /* CID */
+ wil->ring2cid_tid[id][0] = wil->max_assoc_sta; /* CID */
wil->ring2cid_tid[id][1] = 0; /* TID */
cmd.vring_cfg.tx_sw_ring.ring_mem_base = cpu_to_le64(vring->pa);
@@ -1204,7 +1207,8 @@ int wil_vring_init_bcast(struct wil6210_vif *vif, int id, int size)
txdata->dot1x_open = true;
rc = wmi_call(wil, WMI_BCAST_VRING_CFG_CMDID, vif->mid,
&cmd, sizeof(cmd),
- WMI_VRING_CFG_DONE_EVENTID, &reply, sizeof(reply), 100);
+ WMI_VRING_CFG_DONE_EVENTID, &reply, sizeof(reply),
+ WIL_WMI_CALL_GENERAL_TO_MS);
if (rc)
goto out_free;
@@ -1243,7 +1247,7 @@ static struct wil_ring *wil_find_tx_ucast(struct wil6210_priv *wil,
cid = wil_find_cid(wil, vif->mid, da);
- if (cid < 0 || cid >= max_assoc_sta)
+ if (cid < 0 || cid >= wil->max_assoc_sta)
return NULL;
/* TODO: fix for multiple TID */
@@ -1295,7 +1299,7 @@ static struct wil_ring *wil_find_tx_ring_sta(struct wil6210_priv *wil,
continue;
cid = wil->ring2cid_tid[i][0];
- if (cid >= max_assoc_sta) /* skip BCAST */
+ if (cid >= wil->max_assoc_sta) /* skip BCAST */
continue;
if (!wil->ring_tx_data[i].dot1x_open &&
@@ -1373,7 +1377,7 @@ static struct wil_ring *wil_find_tx_bcast_2(struct wil6210_priv *wil,
continue;
cid = wil->ring2cid_tid[i][0];
- if (cid >= max_assoc_sta) /* skip BCAST */
+ if (cid >= wil->max_assoc_sta) /* skip BCAST */
continue;
if (!wil->ring_tx_data[i].dot1x_open &&
skb->protocol != cpu_to_be16(ETH_P_PAE))
@@ -1401,7 +1405,7 @@ found:
if (!v2->va || txdata2->mid != vif->mid)
continue;
cid = wil->ring2cid_tid[i][0];
- if (cid >= max_assoc_sta) /* skip BCAST */
+ if (cid >= wil->max_assoc_sta) /* skip BCAST */
continue;
if (!wil->ring_tx_data[i].dot1x_open &&
skb->protocol != cpu_to_be16(ETH_P_PAE))
@@ -1760,6 +1764,9 @@ static int __wil_tx_vring_tso(struct wil6210_priv *wil, struct wil6210_vif *vif,
}
}
+ if (!_desc)
+ goto mem_error;
+
/* first descriptor may also be the last.
* in this case d pointer is invalid
*/
@@ -2254,7 +2261,7 @@ int wil_tx_complete(struct wil6210_vif *vif, int ringid)
used_before_complete = wil_ring_used_tx(vring);
- if (cid < max_assoc_sta)
+ if (cid < wil->max_assoc_sta)
stats = &wil->sta[cid].stats;
while (!wil_ring_is_empty(vring)) {
diff --git a/drivers/net/wireless/ath/wil6210/txrx_edma.c b/drivers/net/wireless/ath/wil6210/txrx_edma.c
index f6fce6ff73d9..dc040cd4ab06 100644
--- a/drivers/net/wireless/ath/wil6210/txrx_edma.c
+++ b/drivers/net/wireless/ath/wil6210/txrx_edma.c
@@ -26,6 +26,10 @@
#include "txrx.h"
#include "trace.h"
+/* Max number of completed packets to process before updating the hwtail of
+ * the tx status ring. Should be a power of 2.
+ */
+#define WIL_EDMA_TX_SRING_UPDATE_HW_TAIL 128
#define WIL_EDMA_MAX_DATA_OFFSET (2)
/* RX buffer size must be aligned to 4 bytes */
#define WIL_EDMA_RX_BUF_LEN_DEFAULT (2048)
@@ -269,6 +273,9 @@ static void wil_move_all_rx_buff_to_free_list(struct wil6210_priv *wil,
struct list_head *active = &wil->rx_buff_mgmt.active;
dma_addr_t pa;
+ if (!wil->rx_buff_mgmt.buff_arr)
+ return;
+
while (!list_empty(active)) {
struct wil_rx_buff *rx_buff =
list_first_entry(active, struct wil_rx_buff, list);
@@ -734,7 +741,7 @@ static int wil_ring_init_tx_edma(struct wil6210_vif *vif, int ring_id,
txdata->enabled = 0;
spin_unlock_bh(&txdata->lock);
wil_ring_free_edma(wil, ring);
- wil->ring2cid_tid[ring_id][0] = max_assoc_sta;
+ wil->ring2cid_tid[ring_id][0] = wil->max_assoc_sta;
wil->ring2cid_tid[ring_id][1] = 0;
out:
@@ -944,7 +951,7 @@ again:
eop = wil_rx_status_get_eop(msg);
cid = wil_rx_status_get_cid(msg);
- if (unlikely(!wil_val_in_range(cid, 0, max_assoc_sta))) {
+ if (unlikely(!wil_val_in_range(cid, 0, wil->max_assoc_sta))) {
wil_err(wil, "Corrupt cid=%d, sring->swhead=%d\n",
cid, sring->swhead);
rxdata->skipping = true;
@@ -1152,7 +1159,7 @@ int wil_tx_sring_handler(struct wil6210_priv *wil,
struct wil_net_stats *stats;
struct wil_tx_enhanced_desc *_d;
unsigned int ring_id;
- unsigned int num_descs;
+ unsigned int num_descs, num_statuses = 0;
int i;
u8 dr_bit; /* Descriptor Ready bit */
struct wil_ring_tx_status msg;
@@ -1199,7 +1206,8 @@ int wil_tx_sring_handler(struct wil6210_priv *wil,
ndev = vif_to_ndev(vif);
cid = wil->ring2cid_tid[ring_id][0];
- stats = (cid < max_assoc_sta ? &wil->sta[cid].stats : NULL);
+ stats = (cid < wil->max_assoc_sta) ? &wil->sta[cid].stats :
+ NULL;
wil_dbg_txrx(wil,
"tx_status: completed desc_ring (%d), num_descs (%d)\n",
@@ -1272,6 +1280,11 @@ int wil_tx_sring_handler(struct wil6210_priv *wil,
}
again:
+ num_statuses++;
+ if (num_statuses % WIL_EDMA_TX_SRING_UPDATE_HW_TAIL == 0)
+ /* update HW tail to allow HW to push new statuses */
+ wil_w(wil, sring->hwtail, sring->swhead);
+
wil_sring_advance_swhead(sring);
wil_get_next_tx_status_msg(sring, &msg);
@@ -1282,8 +1295,9 @@ again:
if (desc_cnt)
wil_update_net_queues(wil, vif, NULL, false);
- /* Update the HW tail ptr (RD ptr) */
- wil_w(wil, sring->hwtail, (sring->swhead - 1) % sring->size);
+ if (num_statuses % WIL_EDMA_TX_SRING_UPDATE_HW_TAIL != 0)
+ /* Update the HW tail ptr (RD ptr) */
+ wil_w(wil, sring->hwtail, (sring->swhead - 1) % sring->size);
return desc_cnt;
}
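A small aside on the WIL_EDMA_TX_SRING_UPDATE_HW_TAIL constant used above: because it is a power of two, the "update hwtail every N completions" test reduces to a mask. A minimal sketch with invented names:

#include <linux/types.h>

#define EXAMPLE_UPDATE_PERIOD 128u	/* must stay a power of two */

/* Illustrative only: true on every EXAMPLE_UPDATE_PERIOD-th completion,
 * equivalent to (num_statuses % EXAMPLE_UPDATE_PERIOD == 0).
 */
static inline bool example_should_update_hw_tail(unsigned int num_statuses)
{
	return (num_statuses & (EXAMPLE_UPDATE_PERIOD - 1)) == 0;
}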
diff --git a/drivers/net/wireless/ath/wil6210/txrx_edma.h b/drivers/net/wireless/ath/wil6210/txrx_edma.h
index bb4ff28b73e5..e9e6ea9b16b9 100644
--- a/drivers/net/wireless/ath/wil6210/txrx_edma.h
+++ b/drivers/net/wireless/ath/wil6210/txrx_edma.h
@@ -24,7 +24,7 @@
#define WIL_SRING_SIZE_ORDER_MAX (WIL_RING_SIZE_ORDER_MAX)
/* RX sring order should be bigger than RX ring order */
#define WIL_RX_SRING_SIZE_ORDER_DEFAULT (12)
-#define WIL_TX_SRING_SIZE_ORDER_DEFAULT (12)
+#define WIL_TX_SRING_SIZE_ORDER_DEFAULT (14)
#define WIL_RX_BUFF_ARR_SIZE_DEFAULT (2600)
#define WIL_DEFAULT_RX_STATUS_RING_ID 0
diff --git a/drivers/net/wireless/ath/wil6210/wil6210.h b/drivers/net/wireless/ath/wil6210/wil6210.h
index 8724d9975606..6f456b311a39 100644
--- a/drivers/net/wireless/ath/wil6210/wil6210.h
+++ b/drivers/net/wireless/ath/wil6210/wil6210.h
@@ -99,6 +99,7 @@ static inline u32 WIL_GET_BITS(u32 x, int b0, int b1)
#define WIL_MAX_AMPDU_SIZE_128 (128 * 1024) /* FW/HW limit */
#define WIL_MAX_AGG_WSIZE_64 (64) /* FW/HW limit */
#define WIL6210_MAX_STATUS_RINGS (8)
+#define WIL_WMI_CALL_GENERAL_TO_MS 100
/* Hardware offload block adds the following:
* 26 bytes - 3-address QoS data header
@@ -335,6 +336,11 @@ struct RGF_ICR {
#define BIT_BOOT_FROM_ROM BIT(31)
/* eDMA */
+#define RGF_SCM_PTRS_SUBQ_RD_PTR (0x8b4000)
+#define RGF_SCM_PTRS_COMPQ_RD_PTR (0x8b4100)
+#define RGF_DMA_SCM_SUBQ_CONS (0x8b60ec)
+#define RGF_DMA_SCM_COMPQ_PROD (0x8b616c)
+
#define RGF_INT_COUNT_ON_SPECIAL_EVT (0x8b62d8)
#define RGF_INT_CTRL_INT_GEN_CFG_0 (0x8bc000)
@@ -456,15 +462,6 @@ static inline void parse_cidxtid(u8 cidxtid, u8 *cid, u8 *tid)
*tid = (cidxtid >> 4) & 0xf;
}
-/**
- * wil_cid_valid - check cid is valid
- * @cid: CID value
- */
-static inline bool wil_cid_valid(u8 cid)
-{
- return (cid >= 0 && cid < max_assoc_sta);
-}
-
struct wil6210_mbox_ring {
u32 base;
u16 entry_size; /* max. size of mbox entry, incl. all headers */
@@ -913,6 +910,11 @@ struct wil_fw_stats_global {
struct wmi_link_stats_global stats;
};
+struct wil_brd_info {
+ u32 file_addr;
+ u32 file_max_size;
+};
+
struct wil6210_priv {
struct pci_dev *pdev;
u32 bar_size;
@@ -927,8 +929,8 @@ struct wil6210_priv {
const char *hw_name;
const char *wil_fw_name;
char *board_file;
- u32 brd_file_addr;
- u32 brd_file_max_size;
+ u32 num_of_brd_entries;
+ struct wil_brd_info *brd_info;
DECLARE_BITMAP(hw_capa, hw_capa_last);
DECLARE_BITMAP(fw_capabilities, WMI_FW_CAPABILITY_MAX);
DECLARE_BITMAP(platform_capa, WIL_PLATFORM_CAPA_MAX);
@@ -940,6 +942,8 @@ struct wil6210_priv {
struct wil6210_vif *vifs[WIL_MAX_VIFS];
struct mutex vif_mutex; /* protects access to VIF entries */
atomic_t connected_vifs;
+ u32 max_assoc_sta; /* max sta's supported by the driver and the FW */
+
/* profile */
struct cfg80211_chan_def monitor_chandef;
u32 monitor_flags;
@@ -1137,6 +1141,14 @@ static inline void wil_c(struct wil6210_priv *wil, u32 reg, u32 val)
wil_w(wil, reg, wil_r(wil, reg) & ~val);
}
+/**
+ * wil_cid_valid - check cid is valid
+ */
+static inline bool wil_cid_valid(struct wil6210_priv *wil, u8 cid)
+{
+ return (cid >= 0 && cid < wil->max_assoc_sta);
+}
+
void wil_get_board_file(struct wil6210_priv *wil, char *buf, size_t len);
#if defined(CONFIG_DYNAMIC_DEBUG)
@@ -1241,6 +1253,9 @@ int wmi_rx_chain_add(struct wil6210_priv *wil, struct wil_ring *vring);
int wmi_update_ft_ies(struct wil6210_vif *vif, u16 ie_len, const void *ie);
int wmi_rxon(struct wil6210_priv *wil, bool on);
int wmi_get_temperature(struct wil6210_priv *wil, u32 *t_m, u32 *t_r);
+int wmi_get_all_temperatures(struct wil6210_priv *wil,
+ struct wmi_temp_sense_all_done_event
+ *sense_all_evt);
int wmi_disconnect_sta(struct wil6210_vif *vif, const u8 *mac, u16 reason,
bool del_sta);
int wmi_addba(struct wil6210_priv *wil, u8 mid,
@@ -1395,6 +1410,7 @@ int wmi_stop_sched_scan(struct wil6210_priv *wil);
int wmi_mgmt_tx(struct wil6210_vif *vif, const u8 *buf, size_t len);
int wmi_mgmt_tx_ext(struct wil6210_vif *vif, const u8 *buf, size_t len,
u8 channel, u16 duration_ms);
+int wmi_rbufcap_cfg(struct wil6210_priv *wil, bool enable, u16 threshold);
int reverse_memcmp(const void *cs, const void *ct, size_t count);
@@ -1413,4 +1429,5 @@ int wmi_addba_rx_resp_edma(struct wil6210_priv *wil, u8 mid, u8 cid,
void update_supported_bands(struct wil6210_priv *wil);
+void wil_clear_fw_log_addr(struct wil6210_priv *wil);
#endif /* __WIL6210_H__ */
diff --git a/drivers/net/wireless/ath/wil6210/wmi.c b/drivers/net/wireless/ath/wil6210/wmi.c
index d89cd41e78ac..475b1a233cc9 100644
--- a/drivers/net/wireless/ath/wil6210/wmi.c
+++ b/drivers/net/wireless/ath/wil6210/wmi.c
@@ -40,7 +40,6 @@ MODULE_PARM_DESC(led_id,
" 60G device led enablement. Set the led ID (0-2) to enable");
#define WIL_WAIT_FOR_SUSPEND_RESUME_COMP 200
-#define WIL_WMI_CALL_GENERAL_TO_MS 100
#define WIL_WMI_PCP_STOP_TO_MS 5000
/**
@@ -484,6 +483,10 @@ static const char *cmdid2name(u16 cmdid)
return "WMI_FT_REASSOC_CMD";
case WMI_UPDATE_FT_IES_CMDID:
return "WMI_UPDATE_FT_IES_CMD";
+ case WMI_RBUFCAP_CFG_CMDID:
+ return "WMI_RBUFCAP_CFG_CMD";
+ case WMI_TEMP_SENSE_ALL_CMDID:
+ return "WMI_TEMP_SENSE_ALL_CMDID";
default:
return "Untracked CMD";
}
@@ -628,6 +631,10 @@ static const char *eventid2name(u16 eventid)
return "WMI_FT_AUTH_STATUS_EVENT";
case WMI_FT_REASSOC_STATUS_EVENTID:
return "WMI_FT_REASSOC_STATUS_EVENT";
+ case WMI_RBUFCAP_CFG_EVENTID:
+ return "WMI_RBUFCAP_CFG_EVENT";
+ case WMI_TEMP_SENSE_ALL_DONE_EVENTID:
+ return "WMI_TEMP_SENSE_ALL_DONE_EVENTID";
default:
return "Untracked EVENT";
}
@@ -806,8 +813,8 @@ static void wmi_evt_ready(struct wil6210_vif *vif, int id, void *d, int len)
}
}
- max_assoc_sta = min_t(uint, max_assoc_sta, fw_max_assoc_sta);
- wil_dbg_wmi(wil, "setting max assoc sta to %d\n", max_assoc_sta);
+ wil->max_assoc_sta = min_t(uint, max_assoc_sta, fw_max_assoc_sta);
+ wil_dbg_wmi(wil, "setting max assoc sta to %d\n", wil->max_assoc_sta);
wil_set_recovery_state(wil, fw_recovery_idle);
set_bit(wil_status_fwready, wil->status);
@@ -974,7 +981,7 @@ static void wmi_evt_connect(struct wil6210_vif *vif, int id, void *d, int len)
evt->assoc_req_len, evt->assoc_resp_len);
return;
}
- if (evt->cid >= max_assoc_sta) {
+ if (evt->cid >= wil->max_assoc_sta) {
wil_err(wil, "Connect CID invalid : %d\n", evt->cid);
return;
}
@@ -1236,7 +1243,7 @@ static void wmi_evt_ring_en(struct wil6210_vif *vif, int id, void *d, int len)
return;
cid = wil->ring2cid_tid[vri][0];
- if (!wil_cid_valid(cid)) {
+ if (!wil_cid_valid(wil, cid)) {
wil_err(wil, "invalid cid %d for vring %d\n", cid, vri);
return;
}
@@ -1439,7 +1446,7 @@ static void wil_link_stats_store_basic(struct wil6210_vif *vif,
u8 cid = basic->cid;
struct wil_sta_info *sta;
- if (cid < 0 || cid >= max_assoc_sta) {
+ if (cid < 0 || cid >= wil->max_assoc_sta) {
wil_err(wil, "invalid cid %d\n", cid);
return;
}
@@ -1589,7 +1596,7 @@ static int wil_find_cid_ringid_sta(struct wil6210_priv *wil,
continue;
lcid = wil->ring2cid_tid[i][0];
- if (lcid >= max_assoc_sta) /* skip BCAST */
+ if (lcid >= wil->max_assoc_sta) /* skip BCAST */
continue;
wil_dbg_wmi(wil, "find sta -> ringid %d cid %d\n", i, lcid);
@@ -2051,7 +2058,8 @@ int wmi_echo(struct wil6210_priv *wil)
};
return wmi_call(wil, WMI_ECHO_CMDID, vif->mid, &cmd, sizeof(cmd),
- WMI_ECHO_RSP_EVENTID, NULL, 0, 50);
+ WMI_ECHO_RSP_EVENTID, NULL, 0,
+ WIL_WMI_CALL_GENERAL_TO_MS);
}
int wmi_set_mac_address(struct wil6210_priv *wil, void *addr)
@@ -2110,7 +2118,7 @@ int wmi_led_cfg(struct wil6210_priv *wil, bool enable)
rc = wmi_call(wil, WMI_LED_CFG_CMDID, vif->mid, &cmd, sizeof(cmd),
WMI_LED_CFG_DONE_EVENTID, &reply, sizeof(reply),
- 100);
+ WIL_WMI_CALL_GENERAL_TO_MS);
if (rc)
goto out;
@@ -2124,6 +2132,37 @@ out:
return rc;
}
+int wmi_rbufcap_cfg(struct wil6210_priv *wil, bool enable, u16 threshold)
+{
+ struct wil6210_vif *vif = ndev_to_vif(wil->main_ndev);
+ int rc;
+
+ struct wmi_rbufcap_cfg_cmd cmd = {
+ .enable = enable,
+ .rx_desc_threshold = cpu_to_le16(threshold),
+ };
+ struct {
+ struct wmi_cmd_hdr wmi;
+ struct wmi_rbufcap_cfg_event evt;
+ } __packed reply = {
+ .evt = {.status = WMI_FW_STATUS_FAILURE},
+ };
+
+ rc = wmi_call(wil, WMI_RBUFCAP_CFG_CMDID, vif->mid, &cmd, sizeof(cmd),
+ WMI_RBUFCAP_CFG_EVENTID, &reply, sizeof(reply),
+ WIL_WMI_CALL_GENERAL_TO_MS);
+ if (rc)
+ return rc;
+
+ if (reply.evt.status != WMI_FW_STATUS_SUCCESS) {
+ wil_err(wil, "RBUFCAP_CFG failed. status %d\n",
+ reply.evt.status);
+ rc = -EINVAL;
+ }
+
+ return rc;
+}
+
int wmi_pcp_start(struct wil6210_vif *vif,
int bi, u8 wmi_nettype, u8 chan, u8 hidden_ssid, u8 is_go)
{
@@ -2135,7 +2174,7 @@ int wmi_pcp_start(struct wil6210_vif *vif,
.network_type = wmi_nettype,
.disable_sec_offload = 1,
.channel = chan - 1,
- .pcp_max_assoc_sta = max_assoc_sta,
+ .pcp_max_assoc_sta = wil->max_assoc_sta,
.hidden_ssid = hidden_ssid,
.is_go = is_go,
.ap_sme_offload_mode = disable_ap_sme ?
@@ -2228,7 +2267,8 @@ int wmi_get_ssid(struct wil6210_vif *vif, u8 *ssid_len, void *ssid)
memset(&reply, 0, sizeof(reply));
rc = wmi_call(wil, WMI_GET_SSID_CMDID, vif->mid, NULL, 0,
- WMI_GET_SSID_EVENTID, &reply, sizeof(reply), 20);
+ WMI_GET_SSID_EVENTID, &reply, sizeof(reply),
+ WIL_WMI_CALL_GENERAL_TO_MS);
if (rc)
return rc;
@@ -2265,7 +2305,8 @@ int wmi_get_channel(struct wil6210_priv *wil, int *channel)
memset(&reply, 0, sizeof(reply));
rc = wmi_call(wil, WMI_GET_PCP_CHANNEL_CMDID, vif->mid, NULL, 0,
- WMI_GET_PCP_CHANNEL_EVENTID, &reply, sizeof(reply), 20);
+ WMI_GET_PCP_CHANNEL_EVENTID, &reply, sizeof(reply),
+ WIL_WMI_CALL_GENERAL_TO_MS);
if (rc)
return rc;
@@ -2361,7 +2402,8 @@ int wmi_stop_discovery(struct wil6210_vif *vif)
wil_dbg_wmi(wil, "sending WMI_DISCOVERY_STOP_CMDID\n");
rc = wmi_call(wil, WMI_DISCOVERY_STOP_CMDID, vif->mid, NULL, 0,
- WMI_DISCOVERY_STOPPED_EVENTID, NULL, 0, 100);
+ WMI_DISCOVERY_STOPPED_EVENTID, NULL, 0,
+ WIL_WMI_CALL_GENERAL_TO_MS);
if (rc)
wil_err(wil, "Failed to stop discovery\n");
@@ -2507,12 +2549,14 @@ int wmi_rxon(struct wil6210_priv *wil, bool on)
if (on) {
rc = wmi_call(wil, WMI_START_LISTEN_CMDID, vif->mid, NULL, 0,
WMI_LISTEN_STARTED_EVENTID,
- &reply, sizeof(reply), 100);
+ &reply, sizeof(reply),
+ WIL_WMI_CALL_GENERAL_TO_MS);
if ((rc == 0) && (reply.evt.status != WMI_FW_STATUS_SUCCESS))
rc = -EINVAL;
} else {
rc = wmi_call(wil, WMI_DISCOVERY_STOP_CMDID, vif->mid, NULL, 0,
- WMI_DISCOVERY_STOPPED_EVENTID, NULL, 0, 20);
+ WMI_DISCOVERY_STOPPED_EVENTID, NULL, 0,
+ WIL_WMI_CALL_GENERAL_TO_MS);
}
return rc;
@@ -2601,7 +2645,8 @@ int wmi_get_temperature(struct wil6210_priv *wil, u32 *t_bb, u32 *t_rf)
memset(&reply, 0, sizeof(reply));
rc = wmi_call(wil, WMI_TEMP_SENSE_CMDID, vif->mid, &cmd, sizeof(cmd),
- WMI_TEMP_SENSE_DONE_EVENTID, &reply, sizeof(reply), 100);
+ WMI_TEMP_SENSE_DONE_EVENTID, &reply, sizeof(reply),
+ WIL_WMI_CALL_GENERAL_TO_MS);
if (rc)
return rc;
@@ -2613,6 +2658,44 @@ int wmi_get_temperature(struct wil6210_priv *wil, u32 *t_bb, u32 *t_rf)
return 0;
}
+int wmi_get_all_temperatures(struct wil6210_priv *wil,
+ struct wmi_temp_sense_all_done_event
+ *sense_all_evt)
+{
+ struct wil6210_vif *vif = ndev_to_vif(wil->main_ndev);
+ int rc;
+ struct wmi_temp_sense_all_cmd cmd = {
+ .measure_baseband_en = true,
+ .measure_rf_en = true,
+ .measure_mode = TEMPERATURE_MEASURE_NOW,
+ };
+ struct {
+ struct wmi_cmd_hdr wmi;
+ struct wmi_temp_sense_all_done_event evt;
+ } __packed reply;
+
+ if (!sense_all_evt) {
+ wil_err(wil, "Invalid sense_all_evt value\n");
+ return -EINVAL;
+ }
+
+ memset(&reply, 0, sizeof(reply));
+ reply.evt.status = WMI_FW_STATUS_FAILURE;
+ rc = wmi_call(wil, WMI_TEMP_SENSE_ALL_CMDID, vif->mid, &cmd,
+ sizeof(cmd), WMI_TEMP_SENSE_ALL_DONE_EVENTID,
+ &reply, sizeof(reply), WIL_WMI_CALL_GENERAL_TO_MS);
+ if (rc)
+ return rc;
+
+ if (reply.evt.status == WMI_FW_STATUS_FAILURE) {
+ wil_err(wil, "Failed geting TEMP_SENSE_ALL\n");
+ return -EINVAL;
+ }
+
+ memcpy(sense_all_evt, &reply.evt, sizeof(reply.evt));
+ return 0;
+}
+
int wmi_disconnect_sta(struct wil6210_vif *vif, const u8 *mac, u16 reason,
bool del_sta)
{
@@ -2715,7 +2798,7 @@ int wmi_addba_rx_resp(struct wil6210_priv *wil,
.dialog_token = token,
.status_code = cpu_to_le16(status),
/* bit 0: A-MSDU supported
- * bit 1: policy (should be 0 for us)
+ * bit 1: policy (controlled by FW)
* bits 2..5: TID
* bits 6..15: buffer size
*/
@@ -2745,7 +2828,7 @@ int wmi_addba_rx_resp(struct wil6210_priv *wil,
rc = wmi_call(wil, WMI_RCP_ADDBA_RESP_CMDID, mid, &cmd, sizeof(cmd),
WMI_RCP_ADDBA_RESP_SENT_EVENTID, &reply, sizeof(reply),
- 100);
+ WIL_WMI_CALL_GENERAL_TO_MS);
if (rc)
return rc;
@@ -2769,7 +2852,7 @@ int wmi_addba_rx_resp_edma(struct wil6210_priv *wil, u8 mid, u8 cid, u8 tid,
.dialog_token = token,
.status_code = cpu_to_le16(status),
/* bit 0: A-MSDU supported
- * bit 1: policy (should be 0 for us)
+ * bit 1: policy (controlled by FW)
* bits 2..5: TID
* bits 6..15: buffer size
*/
@@ -2827,7 +2910,7 @@ int wmi_ps_dev_profile_cfg(struct wil6210_priv *wil,
rc = wmi_call(wil, WMI_PS_DEV_PROFILE_CFG_CMDID, vif->mid,
&cmd, sizeof(cmd),
WMI_PS_DEV_PROFILE_CFG_EVENTID, &reply, sizeof(reply),
- 100);
+ WIL_WMI_CALL_GENERAL_TO_MS);
if (rc)
return rc;
@@ -2864,7 +2947,7 @@ int wmi_set_mgmt_retry(struct wil6210_priv *wil, u8 retry_short)
rc = wmi_call(wil, WMI_SET_MGMT_RETRY_LIMIT_CMDID, vif->mid,
&cmd, sizeof(cmd),
WMI_SET_MGMT_RETRY_LIMIT_EVENTID, &reply, sizeof(reply),
- 100);
+ WIL_WMI_CALL_GENERAL_TO_MS);
if (rc)
return rc;
@@ -2894,7 +2977,7 @@ int wmi_get_mgmt_retry(struct wil6210_priv *wil, u8 *retry_short)
memset(&reply, 0, sizeof(reply));
rc = wmi_call(wil, WMI_GET_MGMT_RETRY_LIMIT_CMDID, vif->mid, NULL, 0,
WMI_GET_MGMT_RETRY_LIMIT_EVENTID, &reply, sizeof(reply),
- 100);
+ WIL_WMI_CALL_GENERAL_TO_MS);
if (rc)
return rc;
@@ -3220,7 +3303,18 @@ static void wmi_event_handle(struct wil6210_priv *wil,
/* check if someone waits for this event */
if (wil->reply_id && wil->reply_id == id &&
wil->reply_mid == mid) {
- WARN_ON(wil->reply_buf);
+ if (wil->reply_buf) {
+ /* An event was received while wmi_call is waiting
+ * with a buffer. Such an event should be handled in
+ * the wmi_recv_cmd function; getting it here means a
+ * previous wmi_call timed out. Drop the event and do
+ * not handle it.
+ */
+ wil_err(wil,
+ "Old event (%d, %s) while wmi_call is waiting. Drop it and Continue waiting\n",
+ id, eventid2name(id));
+ return;
+ }
wmi_evt_call_handler(vif, id, evt_data,
len - sizeof(*wmi));
@@ -3800,6 +3894,7 @@ int wil_wmi_bcast_desc_ring_add(struct wil6210_vif *vif, int ring_id)
.ring_size = cpu_to_le16(ring->size),
.ring_id = ring_id,
},
+ .max_msdu_size = cpu_to_le16(wil_mtu2macbuf(mtu_max)),
.status_ring_id = wil->tx_sring_idx,
.encap_trans_type = WMI_VRING_ENC_TYPE_802_3,
};
diff --git a/drivers/net/wireless/ath/wil6210/wmi.h b/drivers/net/wireless/ath/wil6210/wmi.h
index da46fc8d39cf..3e37229b36b5 100644
--- a/drivers/net/wireless/ath/wil6210/wmi.h
+++ b/drivers/net/wireless/ath/wil6210/wmi.h
@@ -35,6 +35,7 @@
#define WMI_PROX_RANGE_NUM (3)
#define WMI_MAX_LOSS_DMG_BEACONS (20)
#define MAX_NUM_OF_SECTORS (128)
+#define WMI_INVALID_TEMPERATURE (0xFFFFFFFF)
#define WMI_SCHED_MAX_ALLOCS_PER_CMD (4)
#define WMI_RF_DTYPE_LENGTH (3)
#define WMI_RF_ETYPE_LENGTH (3)
@@ -64,6 +65,7 @@
#define WMI_QOS_MAX_WEIGHT 50
#define WMI_QOS_SET_VIF_PRIORITY (0xFF)
#define WMI_QOS_DEFAULT_PRIORITY (WMI_QOS_NUM_OF_PRIORITY)
+#define WMI_MAX_XIF_PORTS_NUM (8)
/* Mailbox interface
* used for commands and events
@@ -105,6 +107,7 @@ enum wmi_fw_capability {
WMI_FW_CAPABILITY_TX_REQ_EXT = 25,
WMI_FW_CAPABILITY_CHANNEL_4 = 26,
WMI_FW_CAPABILITY_IPA = 27,
+ WMI_FW_CAPABILITY_TEMPERATURE_ALL_RF = 30,
WMI_FW_CAPABILITY_MAX,
};
@@ -296,6 +299,7 @@ enum wmi_command_id {
WMI_SET_VRING_PRIORITY_WEIGHT_CMDID = 0xA10,
WMI_SET_VRING_PRIORITY_CMDID = 0xA11,
WMI_RBUFCAP_CFG_CMDID = 0xA12,
+ WMI_TEMP_SENSE_ALL_CMDID = 0xA13,
WMI_SET_MAC_ADDRESS_CMDID = 0xF003,
WMI_ABORT_SCAN_CMDID = 0xF007,
WMI_SET_PROMISCUOUS_MODE_CMDID = 0xF041,
@@ -1411,12 +1415,7 @@ struct wmi_rf_xpm_write_cmd {
u8 data_bytes[0];
} __packed;
-/* WMI_TEMP_SENSE_CMDID
- *
- * Measure MAC and radio temperatures
- *
- * Possible modes for temperature measurement
- */
+/* Possible modes for temperature measurement */
enum wmi_temperature_measure_mode {
TEMPERATURE_USE_OLD_VALUE = 0x01,
TEMPERATURE_MEASURE_NOW = 0x02,
@@ -1942,6 +1941,14 @@ struct wmi_set_ap_slot_size_cmd {
__le32 slot_size;
} __packed;
+/* WMI_TEMP_SENSE_ALL_CMDID */
+struct wmi_temp_sense_all_cmd {
+ u8 measure_baseband_en;
+ u8 measure_rf_en;
+ u8 measure_mode;
+ u8 reserved;
+} __packed;
+
/* WMI Events
* List of Events (target to host)
*/
@@ -2101,6 +2108,7 @@ enum wmi_event_id {
WMI_SET_VRING_PRIORITY_WEIGHT_EVENTID = 0x1A10,
WMI_SET_VRING_PRIORITY_EVENTID = 0x1A11,
WMI_RBUFCAP_CFG_EVENTID = 0x1A12,
+ WMI_TEMP_SENSE_ALL_DONE_EVENTID = 0x1A13,
WMI_SET_CHANNEL_EVENTID = 0x9000,
WMI_ASSOC_REQ_EVENTID = 0x9001,
WMI_EAPOL_RX_EVENTID = 0x9002,
@@ -2784,11 +2792,13 @@ struct wmi_fixed_scheduling_ul_config_event {
*/
struct wmi_temp_sense_done_event {
/* Temperature times 1000 (actual temperature will be achieved by
- * dividing the value by 1000)
+ * dividing the value by 1000). When the temperature cannot be read
+ * from the device, WMI_INVALID_TEMPERATURE is returned
*/
__le32 baseband_t1000;
/* Temperature times 1000 (actual temperature will be achieved by
- * dividing the value by 1000)
+ * dividing the value by 1000). When the temperature cannot be read
+ * from the device, WMI_INVALID_TEMPERATURE is returned
*/
__le32 rf_t1000;
} __packed;
@@ -4140,4 +4150,25 @@ struct wmi_rbufcap_cfg_event {
u8 reserved[3];
} __packed;
+/* WMI_TEMP_SENSE_ALL_DONE_EVENTID
+ * Measure MAC and all radio temperatures
+ */
+struct wmi_temp_sense_all_done_event {
+ /* enum wmi_fw_status */
+ u8 status;
+ /* Bitmap of connected RFs */
+ u8 rf_bitmap;
+ u8 reserved[2];
+ /* Temperature times 1000 (actual temperature will be achieved by
+ * dividing the value by 1000). When the temperature cannot be read
+ * from the device, WMI_INVALID_TEMPERATURE is returned
+ */
+ __le32 rf_t1000[WMI_MAX_XIF_PORTS_NUM];
+ /* Temperature times 1000 (actual temperature will be achieved by
+ * dividing the value by 1000). When the temperature cannot be read
+ * from the device, WMI_INVALID_TEMPERATURE is returned
+ */
+ __le32 baseband_t1000;
+} __packed;
+
#endif /* __WILOCITY_WMI_H__ */
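A hedged usage sketch for the temperature fields defined above (the helper name is invented): callers convert the little-endian t1000 values and treat 0 and WMI_INVALID_TEMPERATURE as "not available", mirroring print_temp() in the debugfs changes earlier in this series:

#include <linux/types.h>

/* Illustrative only: decode a t1000 value (already le32_to_cpu-converted)
 * into whole degrees Celsius; returns false when the reading is N/A.
 */
static inline bool example_t1000_to_celsius(u32 t1000, s32 *celsius)
{
	if (t1000 == 0 || t1000 == 0xFFFFFFFF /* WMI_INVALID_TEMPERATURE */)
		return false;

	*celsius = (s32)t1000 / 1000;
	return true;
}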
diff --git a/drivers/net/wireless/broadcom/b43/dma.c b/drivers/net/wireless/broadcom/b43/dma.c
index 806406aab43d..31bf71a80c26 100644
--- a/drivers/net/wireless/broadcom/b43/dma.c
+++ b/drivers/net/wireless/broadcom/b43/dma.c
@@ -797,7 +797,7 @@ static void free_all_descbuffers(struct b43_dmaring *ring)
}
}
-static u64 supported_dma_mask(struct b43_wldev *dev)
+static enum b43_dmatype b43_engine_type(struct b43_wldev *dev)
{
u32 tmp;
u16 mmio_base;
@@ -807,14 +807,14 @@ static u64 supported_dma_mask(struct b43_wldev *dev)
case B43_BUS_BCMA:
tmp = bcma_aread32(dev->dev->bdev, BCMA_IOST);
if (tmp & BCMA_IOST_DMA64)
- return DMA_BIT_MASK(64);
+ return B43_DMA_64BIT;
break;
#endif
#ifdef CONFIG_B43_SSB
case B43_BUS_SSB:
tmp = ssb_read32(dev->dev->sdev, SSB_TMSHIGH);
if (tmp & SSB_TMSHIGH_DMA64)
- return DMA_BIT_MASK(64);
+ return B43_DMA_64BIT;
break;
#endif
}
@@ -823,20 +823,7 @@ static u64 supported_dma_mask(struct b43_wldev *dev)
b43_write32(dev, mmio_base + B43_DMA32_TXCTL, B43_DMA32_TXADDREXT_MASK);
tmp = b43_read32(dev, mmio_base + B43_DMA32_TXCTL);
if (tmp & B43_DMA32_TXADDREXT_MASK)
- return DMA_BIT_MASK(32);
-
- return DMA_BIT_MASK(30);
-}
-
-static enum b43_dmatype dma_mask_to_engine_type(u64 dmamask)
-{
- if (dmamask == DMA_BIT_MASK(30))
- return B43_DMA_30BIT;
- if (dmamask == DMA_BIT_MASK(32))
return B43_DMA_32BIT;
- if (dmamask == DMA_BIT_MASK(64))
- return B43_DMA_64BIT;
- B43_WARN_ON(1);
return B43_DMA_30BIT;
}
@@ -1043,42 +1030,6 @@ void b43_dma_free(struct b43_wldev *dev)
destroy_ring(dma, tx_ring_mcast);
}
-static int b43_dma_set_mask(struct b43_wldev *dev, u64 mask)
-{
- u64 orig_mask = mask;
- bool fallback = false;
- int err;
-
- /* Try to set the DMA mask. If it fails, try falling back to a
- * lower mask, as we can always also support a lower one. */
- while (1) {
- err = dma_set_mask_and_coherent(dev->dev->dma_dev, mask);
- if (!err)
- break;
- if (mask == DMA_BIT_MASK(64)) {
- mask = DMA_BIT_MASK(32);
- fallback = true;
- continue;
- }
- if (mask == DMA_BIT_MASK(32)) {
- mask = DMA_BIT_MASK(30);
- fallback = true;
- continue;
- }
- b43err(dev->wl, "The machine/kernel does not support "
- "the required %u-bit DMA mask\n",
- (unsigned int)dma_mask_to_engine_type(orig_mask));
- return -EOPNOTSUPP;
- }
- if (fallback) {
- b43info(dev->wl, "DMA mask fallback from %u-bit to %u-bit\n",
- (unsigned int)dma_mask_to_engine_type(orig_mask),
- (unsigned int)dma_mask_to_engine_type(mask));
- }
-
- return 0;
-}
-
/* Some hardware with 64-bit DMA seems to be bugged and looks for translation
* bit in low address word instead of high one.
*/
@@ -1101,15 +1052,15 @@ static bool b43_dma_translation_in_low_word(struct b43_wldev *dev,
int b43_dma_init(struct b43_wldev *dev)
{
struct b43_dma *dma = &dev->dma;
+ enum b43_dmatype type = b43_engine_type(dev);
int err;
- u64 dmamask;
- enum b43_dmatype type;
- dmamask = supported_dma_mask(dev);
- type = dma_mask_to_engine_type(dmamask);
- err = b43_dma_set_mask(dev, dmamask);
- if (err)
+ err = dma_set_mask_and_coherent(dev->dev->dma_dev, DMA_BIT_MASK(type));
+ if (err) {
+ b43err(dev->wl, "The machine/kernel does not support "
+ "the required %u-bit DMA mask\n", type);
return err;
+ }
switch (dev->dev->bus_type) {
#ifdef CONFIG_B43_BCMA
@@ -1813,7 +1764,7 @@ void b43_dma_direct_fifo_rx(struct b43_wldev *dev,
enum b43_dmatype type;
u16 mmio_base;
- type = dma_mask_to_engine_type(supported_dma_mask(dev));
+ type = b43_engine_type(dev);
mmio_base = b43_dmacontroller_base(type, engine_index);
direct_fifo_rx(dev, type, mmio_base, enable);
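One note on the simplification above: it appears to rely on the b43_dmatype enumerators being the literal bus widths (the error message prints the type as a bit count), so DMA_BIT_MASK(type) produces the matching mask directly. A minimal sketch of that idea with invented names:

#include <linux/dma-mapping.h>

/* Illustrative only: when the enumerators are the widths themselves,
 * they can be passed straight to DMA_BIT_MASK().
 */
enum example_dmatype {
	EXAMPLE_DMA_30BIT = 30,
	EXAMPLE_DMA_32BIT = 32,
	EXAMPLE_DMA_64BIT = 64,
};

/* DMA_BIT_MASK(EXAMPLE_DMA_32BIT) == DMA_BIT_MASK(32) == 0xffffffffULL */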
diff --git a/drivers/net/wireless/broadcom/b43/main.c b/drivers/net/wireless/broadcom/b43/main.c
index 20815a71680b..b85603e91c7a 100644
--- a/drivers/net/wireless/broadcom/b43/main.c
+++ b/drivers/net/wireless/broadcom/b43/main.c
@@ -2590,18 +2590,13 @@ start_ieee80211:
err = ieee80211_register_hw(wl->hw);
if (err)
- goto err_one_core_detach;
+ goto out;
wl->hw_registered = true;
b43_leds_register(wl->current_dev);
/* Register HW RNG driver */
b43_rng_init(wl);
- goto out;
-
-err_one_core_detach:
- b43_one_core_detach(dev->dev);
-
out:
kfree(ctx);
}
diff --git a/drivers/net/wireless/broadcom/b43legacy/dma.c b/drivers/net/wireless/broadcom/b43legacy/dma.c
index 1cc25f44dd9a..f7594e2a896e 100644
--- a/drivers/net/wireless/broadcom/b43legacy/dma.c
+++ b/drivers/net/wireless/broadcom/b43legacy/dma.c
@@ -603,7 +603,7 @@ static void free_all_descbuffers(struct b43legacy_dmaring *ring)
}
}
-static u64 supported_dma_mask(struct b43legacy_wldev *dev)
+static enum b43legacy_dmatype b43legacy_engine_type(struct b43legacy_wldev *dev)
{
u32 tmp;
u16 mmio_base;
@@ -615,18 +615,7 @@ static u64 supported_dma_mask(struct b43legacy_wldev *dev)
tmp = b43legacy_read32(dev, mmio_base +
B43legacy_DMA32_TXCTL);
if (tmp & B43legacy_DMA32_TXADDREXT_MASK)
- return DMA_BIT_MASK(32);
-
- return DMA_BIT_MASK(30);
-}
-
-static enum b43legacy_dmatype dma_mask_to_engine_type(u64 dmamask)
-{
- if (dmamask == DMA_BIT_MASK(30))
- return B43legacy_DMA_30BIT;
- if (dmamask == DMA_BIT_MASK(32))
return B43legacy_DMA_32BIT;
- B43legacy_WARN_ON(1);
return B43legacy_DMA_30BIT;
}
@@ -784,54 +773,14 @@ void b43legacy_dma_free(struct b43legacy_wldev *dev)
dma->tx_ring0 = NULL;
}
-static int b43legacy_dma_set_mask(struct b43legacy_wldev *dev, u64 mask)
-{
- u64 orig_mask = mask;
- bool fallback = false;
- int err;
-
- /* Try to set the DMA mask. If it fails, try falling back to a
- * lower mask, as we can always also support a lower one. */
- while (1) {
- err = dma_set_mask_and_coherent(dev->dev->dma_dev, mask);
- if (!err)
- break;
- if (mask == DMA_BIT_MASK(64)) {
- mask = DMA_BIT_MASK(32);
- fallback = true;
- continue;
- }
- if (mask == DMA_BIT_MASK(32)) {
- mask = DMA_BIT_MASK(30);
- fallback = true;
- continue;
- }
- b43legacyerr(dev->wl, "The machine/kernel does not support "
- "the required %u-bit DMA mask\n",
- (unsigned int)dma_mask_to_engine_type(orig_mask));
- return -EOPNOTSUPP;
- }
- if (fallback) {
- b43legacyinfo(dev->wl, "DMA mask fallback from %u-bit to %u-"
- "bit\n",
- (unsigned int)dma_mask_to_engine_type(orig_mask),
- (unsigned int)dma_mask_to_engine_type(mask));
- }
-
- return 0;
-}
-
int b43legacy_dma_init(struct b43legacy_wldev *dev)
{
struct b43legacy_dma *dma = &dev->dma;
struct b43legacy_dmaring *ring;
+ enum b43legacy_dmatype type = b43legacy_engine_type(dev);
int err;
- u64 dmamask;
- enum b43legacy_dmatype type;
- dmamask = supported_dma_mask(dev);
- type = dma_mask_to_engine_type(dmamask);
- err = b43legacy_dma_set_mask(dev, dmamask);
+ err = dma_set_mask_and_coherent(dev->dev->dma_dev, DMA_BIT_MASK(type));
if (err) {
#ifdef CONFIG_B43LEGACY_PIO
b43legacywarn(dev->wl, "DMA for this device not supported. "
diff --git a/drivers/net/wireless/broadcom/brcm80211/Kconfig b/drivers/net/wireless/broadcom/brcm80211/Kconfig
index 1df56d1f5e00..a5bf16c4f495 100644
--- a/drivers/net/wireless/broadcom/brcm80211/Kconfig
+++ b/drivers/net/wireless/broadcom/brcm80211/Kconfig
@@ -18,55 +18,7 @@ config BRCMSMAC
be available if you select BCMA_DRIVER_GPIO. If you choose to build a
module, the driver will be called brcmsmac.ko.
-config BRCMFMAC
- tristate "Broadcom FullMAC WLAN driver"
- depends on CFG80211
- select BRCMUTIL
- ---help---
- This module adds support for wireless adapters based on Broadcom
- FullMAC chipsets. It has to work with at least one of the bus
- interface support. If you choose to build a module, it'll be called
- brcmfmac.ko.
-
-config BRCMFMAC_PROTO_BCDC
- bool
-
-config BRCMFMAC_PROTO_MSGBUF
- bool
-
-config BRCMFMAC_SDIO
- bool "SDIO bus interface support for FullMAC driver"
- depends on (MMC = y || MMC = BRCMFMAC)
- depends on BRCMFMAC
- select BRCMFMAC_PROTO_BCDC
- select FW_LOADER
- default y
- ---help---
- This option enables the SDIO bus interface support for Broadcom
- IEEE802.11n embedded FullMAC WLAN driver. Say Y if you want to
- use the driver for a SDIO wireless card.
-
-config BRCMFMAC_USB
- bool "USB bus interface support for FullMAC driver"
- depends on (USB = y || USB = BRCMFMAC)
- depends on BRCMFMAC
- select BRCMFMAC_PROTO_BCDC
- select FW_LOADER
- ---help---
- This option enables the USB bus interface support for Broadcom
- IEEE802.11n embedded FullMAC WLAN driver. Say Y if you want to
- use the driver for an USB wireless card.
-
-config BRCMFMAC_PCIE
- bool "PCIE bus interface support for FullMAC driver"
- depends on BRCMFMAC
- depends on PCI
- select BRCMFMAC_PROTO_MSGBUF
- select FW_LOADER
- ---help---
- This option enables the PCIE bus interface support for Broadcom
- IEEE802.11ac embedded FullMAC WLAN driver. Say Y if you want to
- use the driver for an PCIE wireless card.
+source "drivers/net/wireless/broadcom/brcm80211/brcmfmac/Kconfig"
config BRCM_TRACING
bool "Broadcom device tracing"
@@ -82,6 +34,6 @@ config BRCM_TRACING
config BRCMDBG
bool "Broadcom driver debug functions"
depends on BRCMSMAC || BRCMFMAC
- select WANT_DEV_COREDUMP
+ select WANT_DEV_COREDUMP if BRCMFMAC
---help---
Selecting this enables additional code for debug purposes.
diff --git a/drivers/net/wireless/broadcom/brcm80211/Makefile b/drivers/net/wireless/broadcom/brcm80211/Makefile
index b987920e982e..88115d072624 100644
--- a/drivers/net/wireless/broadcom/brcm80211/Makefile
+++ b/drivers/net/wireless/broadcom/brcm80211/Makefile
@@ -1,19 +1,9 @@
+# SPDX-License-Identifier: ISC
#
-# Makefile fragment for Broadcom 802.11n Networking Device Driver
+# Makefile fragment for Broadcom 802.11 Networking Device Driver
#
# Copyright (c) 2010 Broadcom Corporation
#
-# Permission to use, copy, modify, and/or distribute this software for any
-# purpose with or without fee is hereby granted, provided that the above
-# copyright notice and this permission notice appear in all copies.
-#
-# THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
-# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
-# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
-# SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
-# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
-# OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
-# CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
# common flags
subdir-ccflags-$(CONFIG_BRCMDBG) += -DDEBUG
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/Kconfig b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/Kconfig
new file mode 100644
index 000000000000..32794c1eca23
--- /dev/null
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/Kconfig
@@ -0,0 +1,50 @@
+config BRCMFMAC
+ tristate "Broadcom FullMAC WLAN driver"
+ depends on CFG80211
+ select BRCMUTIL
+ help

+ This module adds support for wireless adapters based on Broadcom
+ FullMAC chipsets. It has to work with at least one of the supported
+ bus interfaces. If you choose to build a module, it'll be called
+ brcmfmac.ko.
+
+config BRCMFMAC_PROTO_BCDC
+ bool
+
+config BRCMFMAC_PROTO_MSGBUF
+ bool
+
+config BRCMFMAC_SDIO
+ bool "SDIO bus interface support for FullMAC driver"
+ depends on (MMC = y || MMC = BRCMFMAC)
+ depends on BRCMFMAC
+ select BRCMFMAC_PROTO_BCDC
+ select FW_LOADER
+ default y
+ help
+ This option enables the SDIO bus interface support for the Broadcom
+ IEEE802.11n embedded FullMAC WLAN driver. Say Y if you want to
+ use the driver for an SDIO wireless card.
+
+config BRCMFMAC_USB
+ bool "USB bus interface support for FullMAC driver"
+ depends on (USB = y || USB = BRCMFMAC)
+ depends on BRCMFMAC
+ select BRCMFMAC_PROTO_BCDC
+ select FW_LOADER
+ help
+ This option enables the USB bus interface support for the Broadcom
+ IEEE802.11n embedded FullMAC WLAN driver. Say Y if you want to
+ use the driver for a USB wireless card.
+
+config BRCMFMAC_PCIE
+ bool "PCIE bus interface support for FullMAC driver"
+ depends on BRCMFMAC
+ depends on PCI
+ select BRCMFMAC_PROTO_MSGBUF
+ select FW_LOADER
+ help
+ This option enables the PCIE bus interface support for the Broadcom
+ IEEE802.11ac embedded FullMAC WLAN driver. Say Y if you want to
+ use the driver for a PCIE wireless card.
+
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/Makefile b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/Makefile
index f7cf3e5f4849..9b15bc3f6054 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/Makefile
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/Makefile
@@ -1,19 +1,9 @@
+# SPDX-License-Identifier: ISC
#
-# Makefile fragment for Broadcom 802.11n Networking Device Driver
+# Makefile fragment for Broadcom 802.11 Networking Device Driver
#
# Copyright (c) 2010 Broadcom Corporation
#
-# Permission to use, copy, modify, and/or distribute this software for any
-# purpose with or without fee is hereby granted, provided that the above
-# copyright notice and this permission notice appear in all copies.
-#
-# THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
-# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
-# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
-# SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
-# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
-# OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
-# CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
ccflags-y += \
-I $(srctree)/$(src) \
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcdc.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcdc.c
index 98b168736df0..322e913ca7aa 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcdc.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcdc.c
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2010 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
/*******************************************************************************
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcdc.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcdc.h
index 4bc52240ccea..102e6938905c 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcdc.h
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcdc.h
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2013 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#ifndef BRCMFMAC_BCDC_H
#define BRCMFMAC_BCDC_H
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c
index 60aede5abb4d..fc12598b2dd3 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2010 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
/* ****************** SDIO CARD Interface Functions **************************/
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/btcoex.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/btcoex.c
index 372363a6e752..ec2bec0999d1 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/btcoex.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/btcoex.c
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2013 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#include <linux/slab.h>
#include <linux/netdevice.h>
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/btcoex.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/btcoex.h
index 19647c68aa9e..418b9424a179 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/btcoex.h
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/btcoex.h
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2013 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#ifndef WL_BTCOEX_H_
#define WL_BTCOEX_H_
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bus.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bus.h
index 2fe167eae22c..0988a166a785 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bus.h
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bus.h
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2010 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#ifndef BRCMFMAC_BUS_H
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
index 8ee8af4e7ec4..b6d0df354b36 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2010 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
/* Toplevel file. Relies on dhd_linux.c to send commands to the dongle. */
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.h
index 9a6287f084a9..b7b50b07f776 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.h
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.h
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2010 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#ifndef BRCMFMAC_CFG80211_H
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/chip.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/chip.c
index 22534bf2a90c..1ec48c4f4d4a 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/chip.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/chip.c
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2014 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#include <linux/kernel.h>
#include <linux/delay.h>
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/chip.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/chip.h
index 0ae3b33bab62..206d7695d57a 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/chip.h
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/chip.h
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2014 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#ifndef BRCMF_CHIP_H
#define BRCMF_CHIP_H
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c
index 96b8d5b3aeed..aa89d620ee5d 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2010 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#include <linux/kernel.h>
@@ -269,7 +258,7 @@ int brcmf_c_preinit_dcmds(struct brcmf_if *ifp)
/* query for 'ver' to get version info from firmware */
memset(buf, 0, sizeof(buf));
- strcpy(buf, "ver");
+ strlcpy(buf, "ver", sizeof(buf));
err = brcmf_fil_iovar_data_get(ifp, "ver", buf, sizeof(buf));
if (err < 0) {
bphy_err(drvr, "Retrieving version information failed, %d\n",
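
The strcpy() to strlcpy() switch above primes the "ver" query buffer with a size-bounded, always NUL-terminated copy. A small standalone sketch of that bounded-copy behaviour, using a local helper with the assumed strlcpy semantics (copy at most size-1 bytes, always terminate, return the source length so truncation is detectable) rather than the kernel's own symbol:

#include <stdio.h>
#include <string.h>

/* Local stand-in with strlcpy-like semantics. */
static size_t bounded_copy(char *dst, const char *src, size_t size)
{
	size_t len = strlen(src);

	if (size) {
		size_t n = (len >= size) ? size - 1 : len;

		memcpy(dst, src, n);
		dst[n] = '\0';
	}
	return len;
}

int main(void)
{
	char buf[8];

	memset(buf, 0, sizeof(buf));
	bounded_copy(buf, "ver", sizeof(buf));	/* fits: buf == "ver" */
	printf("%s\n", buf);

	if (bounded_copy(buf, "much-too-long", sizeof(buf)) >= sizeof(buf))
		printf("truncated to \"%s\"\n", buf);	/* "much-to" */
	return 0;
}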
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.h
index 4ce56be90b74..144cf4570bc3 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.h
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.h
@@ -1,16 +1,6 @@
-/* Copyright (c) 2014 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+// SPDX-License-Identifier: ISC
+/*
+ * Copyright (c) 2014 Broadcom Corporation
*/
#ifndef BRCMFMAC_COMMON_H
#define BRCMFMAC_COMMON_H
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/commonring.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/commonring.c
index 7b0e52195a85..49db54d23e03 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/commonring.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/commonring.c
@@ -1,16 +1,6 @@
-/* Copyright (c) 2014 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+// SPDX-License-Identifier: ISC
+/*
+ * Copyright (c) 2014 Broadcom Corporation
*/
#include <linux/types.h>
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/commonring.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/commonring.h
index b85033611c8d..7fb11f4823e4 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/commonring.h
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/commonring.h
@@ -1,16 +1,6 @@
-/* Copyright (c) 2014 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+// SPDX-License-Identifier: ISC
+/*
+ * Copyright (c) 2014 Broadcom Corporation
*/
#ifndef BRCMFMAC_COMMONRING_H
#define BRCMFMAC_COMMONRING_H
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
index 7d6a08779693..bf18491a33a5 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2010 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#include <linux/kernel.h>
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.h
index 9f09aa31eeda..86517a3d74b1 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.h
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.h
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2010 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
/****************
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/debug.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/debug.c
index 489b5dfdf5b9..120515fe8250 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/debug.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/debug.c
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2012 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#include <linux/debugfs.h>
#include <linux/netdevice.h>
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/debug.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/debug.h
index 2998726b62c3..ea6e8e839cae 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/debug.h
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/debug.h
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2010 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#ifndef BRCMFMAC_DEBUG_H
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/dmi.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/dmi.c
index 9f1417e00073..4aa2561934d7 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/dmi.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/dmi.c
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright 2018 Hans de Goede <hdegoede@redhat.com>
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#include <linux/dmi.h>
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/feature.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/feature.c
index acca719b3907..73aff4e4039d 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/feature.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/feature.c
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2014 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#include <linux/netdevice.h>
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/feature.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/feature.h
index 5e88a7f16ad2..f127eb2030a6 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/feature.h
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/feature.h
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2014 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#ifndef _BRCMF_FEATURE_H
#define _BRCMF_FEATURE_H
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c
index 6a333dd80b2d..3aed4c4b887a 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2013 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#include <linux/efi.h>
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.h
index a0834be8864e..3347439543bb 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.h
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.h
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2013 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#ifndef BRCMFMAC_FIRMWARE_H
#define BRCMFMAC_FIRMWARE_H
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/flowring.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/flowring.c
index d0d8b32af7d0..8e9d067bdfed 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/flowring.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/flowring.c
@@ -1,16 +1,6 @@
-/* Copyright (c) 2014 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+// SPDX-License-Identifier: ISC
+/*
+ * Copyright (c) 2014 Broadcom Corporation
*/
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/flowring.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/flowring.h
index 068e68d94999..818882b0fd01 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/flowring.h
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/flowring.h
@@ -1,16 +1,6 @@
-/* Copyright (c) 2014 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+// SPDX-License-Identifier: ISC
+/*
+ * Copyright (c) 2014 Broadcom Corporation
*/
#ifndef BRCMFMAC_FLOWRING_H
#define BRCMFMAC_FLOWRING_H
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fweh.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fweh.c
index 63e98fd583ab..adedd4fac10b 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fweh.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fweh.c
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2012 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#include <linux/netdevice.h>
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fweh.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fweh.h
index 7027243db17e..a82f51bc1e69 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fweh.h
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fweh.h
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2012 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil.c
index 8ea27489734e..9ed85420f3ca 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil.c
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2012 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
/* FWIL is the Firmware Interface Layer. In this module the support functions
@@ -314,7 +303,7 @@ brcmf_create_bsscfg(s32 bsscfgidx, char *name, char *data, u32 datalen,
return brcmf_create_iovar(name, data, datalen, buf, buflen);
prefixlen = strlen(prefix);
- namelen = strlen(name) + 1; /* lengh of iovar name + null */
+ namelen = strlen(name) + 1; /* length of iovar name + null */
iolen = prefixlen + namelen + sizeof(bsscfgidx_le) + datalen;
if (buflen < iolen) {
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil.h
index b6b183b18413..0ff6f5212a94 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil.h
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil.h
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2012 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#ifndef _fwil_h_
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil_types.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil_types.h
index 39ac1bbb6cc0..37c512036e0e 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil_types.h
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil_types.h
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2012 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.c
index c22c49ae552e..b8452cb46297 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.c
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2010 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#include <linux/types.h>
#include <linux/module.h>
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.h
index 749c06dcdc17..10184eeaad94 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.h
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.h
@@ -1,20 +1,8 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2012 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
-
#ifndef FWSIGNAL_H_
#define FWSIGNAL_H_
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.c
index 9d1f9ff25bfa..241747bd5cb2 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.c
@@ -1,16 +1,6 @@
-/* Copyright (c) 2014 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+// SPDX-License-Identifier: ISC
+/*
+ * Copyright (c) 2014 Broadcom Corporation
*/
/*******************************************************************************
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.h
index 692235d25277..2e322edbb907 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.h
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.h
@@ -1,16 +1,6 @@
-/* Copyright (c) 2014 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+// SPDX-License-Identifier: ISC
+/*
+ * Copyright (c) 2014 Broadcom Corporation
*/
#ifndef BRCMFMAC_MSGBUF_H
#define BRCMFMAC_MSGBUF_H
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.c
index 84e3373289eb..b886b56a5e5a 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.c
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2014 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#include <linux/init.h>
#include <linux/of.h>
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.h
index 95b7032d54b1..10bf52253337 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.h
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.h
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2014 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#ifdef CONFIG_OF
void brcmf_of_probe(struct device *dev, enum brcmf_bus_type bus_type,
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.c
index 73a0e550f2b2..7ba9f6a68645 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.c
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2012 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#include <linux/slab.h>
#include <linux/netdevice.h>
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.h
index 39f0d0218088..64ab9b6a677d 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.h
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.h
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2012 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#ifndef WL_CFGP2P_H_
#define WL_CFGP2P_H_
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
index 83e4938527f4..4ea5401c4d6b 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
@@ -1,16 +1,6 @@
-/* Copyright (c) 2014 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+// SPDX-License-Identifier: ISC
+/*
+ * Copyright (c) 2014 Broadcom Corporation
*/
#include <linux/kernel.h>
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.h
index 6edaaf8ef5ce..d026401d2001 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.h
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.h
@@ -1,16 +1,6 @@
-/* Copyright (c) 2014 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+// SPDX-License-Identifier: ISC
+/*
+ * Copyright (c) 2014 Broadcom Corporation
*/
#ifndef BRCMFMAC_PCIE_H
#define BRCMFMAC_PCIE_H
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pno.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pno.c
index 0fb97f7dd5a2..14e530601ef3 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pno.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pno.c
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2016 Broadcom
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#include <linux/netdevice.h>
#include <linux/gcd.h>
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pno.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pno.h
index cd9e35ae3b21..25d406019ac3 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pno.h
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pno.h
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2016 Broadcom
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#ifndef _BRCMF_PNO_H
#define _BRCMF_PNO_H
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/proto.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/proto.c
index c7964ccdda69..e3d1b075044b 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/proto.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/proto.c
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2013 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/proto.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/proto.h
index 72355aea9028..8d55fad531d0 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/proto.h
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/proto.h
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2013 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#ifndef BRCMFMAC_PROTO_H
#define BRCMFMAC_PROTO_H
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
index 9a51f1ba87c3..629140b6d7e2 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2010 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#include <linux/types.h>
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.h
index 34b031154da9..0bd47c119dae 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.h
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.h
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2010 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#ifndef BRCMFMAC_SDIO_H
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/tracepoint.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/tracepoint.c
index a5c271bff446..814fcc7538d5 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/tracepoint.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/tracepoint.c
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2012 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#include <linux/device.h>
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/tracepoint.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/tracepoint.h
index 4d7d51f95716..338c66d0c5f8 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/tracepoint.h
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/tracepoint.h
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2013 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#if !defined(BRCMF_TRACEPOINT_H_) || defined(TRACE_HEADER_MULTI_READ)
#define BRCMF_TRACEPOINT_H_
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.c
index 75fcd6752edc..d33628b79a3a 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.c
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2011 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#include <linux/kernel.h>
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.h
index f483a8c9945b..ee273e3bb101 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.h
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.h
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2011 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#ifndef BRCMFMAC_USB_H
#define BRCMFMAC_USB_H
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/vendor.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/vendor.c
index d493021f6031..f6500899fc14 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/vendor.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/vendor.c
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2014 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#include <linux/vmalloc.h>
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/vendor.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/vendor.h
index 061b7bfa2e1c..418f33ea6fd3 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/vendor.h
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/vendor.h
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2014 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#ifndef _vendor_h_
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_cmn.c b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_cmn.c
index 35e3b101e5cf..2441714169de 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_cmn.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_cmn.c
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2010 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#include <linux/kernel.h>
#include <linux/delay.h>
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_hal.h b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_hal.h
index 4d3734f48d9c..2e6a3d454ee8 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_hal.h
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_hal.h
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2010 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
/*
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_int.h b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_int.h
index e9e8337f386c..8668fa5558a2 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_int.h
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_int.h
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2010 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#ifndef _BRCM_PHY_INT_H_
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.c b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.c
index c6e107f41948..7ef36234a25d 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.c
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2010 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#include <linux/kernel.h>
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.h b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.h
index f4a8ab09da43..ae0e8d5df339 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.h
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.h
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2010 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#ifndef _BRCM_PHY_LCN_H_
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_n.c b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_n.c
index f4f5e9044152..07f61d6155ea 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_n.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_n.c
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2010 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_qmath.c b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_qmath.c
index b24bc57ca91b..45dcd277a89f 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_qmath.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_qmath.c
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2010 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#include "phy_qmath.h"
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_qmath.h b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_qmath.h
index 20e3783f921b..5d0083a87fd0 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_qmath.h
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_qmath.h
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2010 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#ifndef _BRCM_QMATH_H_
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_radio.h b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_radio.h
index c3a675455ff5..706ab03c8346 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_radio.h
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_radio.h
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2010 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#ifndef _BRCM_PHY_RADIO_H_
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phyreg_n.h b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phyreg_n.h
index a97c3a799479..f49a10c452e9 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phyreg_n.h
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phyreg_n.h
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2010 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#define NPHY_TBL_ID_GAIN1 0
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phytbl_lcn.c b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phytbl_lcn.c
index d7fa312214f3..be703be34616 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phytbl_lcn.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phytbl_lcn.c
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2010 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#include <types.h>
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phytbl_lcn.h b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phytbl_lcn.h
index 489422a36085..b49580c654fb 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phytbl_lcn.h
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phytbl_lcn.h
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2010 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#include <types.h>
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phytbl_n.c b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phytbl_n.c
index 533bd4b0277e..7607e67d20c7 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phytbl_n.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phytbl_n.c
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2010 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#include <linux/kernel.h>
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phytbl_n.h b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phytbl_n.h
index dc8a84e85117..28208aba4af2 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phytbl_n.h
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phytbl_n.h
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2010 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#define ANT_SWCTRL_TBL_REV3_IDX (0)
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmutil/Makefile b/drivers/net/wireless/broadcom/brcm80211/brcmutil/Makefile
index bb02c6220a88..7a82d615ba2a 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmutil/Makefile
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmutil/Makefile
@@ -1,20 +1,9 @@
+# SPDX-License-Identifier: ISC
#
# Makefile fragment for Broadcom 802.11n Networking Device Driver Utilities
#
# Copyright (c) 2011 Broadcom Corporation
#
-# Permission to use, copy, modify, and/or distribute this software for any
-# purpose with or without fee is hereby granted, provided that the above
-# copyright notice and this permission notice appear in all copies.
-#
-# THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
-# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
-# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
-# SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
-# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
-# OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
-# CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
-
ccflags-y := -I $(srctree)/$(src)/../include
obj-$(CONFIG_BRCMUTIL) += brcmutil.o
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c b/drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c
index 8ac34821f1c1..1e2b1e487eb7 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2013 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
/*********************channel spec common functions*********************/
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmutil/utils.c b/drivers/net/wireless/broadcom/brcm80211/brcmutil/utils.c
index 0543607002fd..4c84c3001c3f 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmutil/utils.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmutil/utils.c
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2010 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
diff --git a/drivers/net/wireless/broadcom/brcm80211/include/brcm_hw_ids.h b/drivers/net/wireless/broadcom/brcm80211/include/brcm_hw_ids.h
index 839980da9643..d1037b6ef2d6 100644
--- a/drivers/net/wireless/broadcom/brcm80211/include/brcm_hw_ids.h
+++ b/drivers/net/wireless/broadcom/brcm80211/include/brcm_hw_ids.h
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2010 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#ifndef _BRCM_HW_IDS_H_
diff --git a/drivers/net/wireless/broadcom/brcm80211/include/brcmu_d11.h b/drivers/net/wireless/broadcom/brcm80211/include/brcmu_d11.h
index 8b8b2ecb3199..f6344023855c 100644
--- a/drivers/net/wireless/broadcom/brcm80211/include/brcmu_d11.h
+++ b/drivers/net/wireless/broadcom/brcm80211/include/brcmu_d11.h
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2010 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#ifndef _BRCMU_D11_H_
diff --git a/drivers/net/wireless/broadcom/brcm80211/include/brcmu_utils.h b/drivers/net/wireless/broadcom/brcm80211/include/brcmu_utils.h
index 41969527b459..946532328667 100644
--- a/drivers/net/wireless/broadcom/brcm80211/include/brcmu_utils.h
+++ b/drivers/net/wireless/broadcom/brcm80211/include/brcmu_utils.h
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2010 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#ifndef _BRCMU_UTILS_H_
diff --git a/drivers/net/wireless/broadcom/brcm80211/include/brcmu_wifi.h b/drivers/net/wireless/broadcom/brcm80211/include/brcmu_wifi.h
index dddebaa60352..7b31c212694d 100644
--- a/drivers/net/wireless/broadcom/brcm80211/include/brcmu_wifi.h
+++ b/drivers/net/wireless/broadcom/brcm80211/include/brcmu_wifi.h
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2010 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#ifndef _BRCMU_WIFI_H_
diff --git a/drivers/net/wireless/broadcom/brcm80211/include/chipcommon.h b/drivers/net/wireless/broadcom/brcm80211/include/chipcommon.h
index de8225e6248b..0340bba96868 100644
--- a/drivers/net/wireless/broadcom/brcm80211/include/chipcommon.h
+++ b/drivers/net/wireless/broadcom/brcm80211/include/chipcommon.h
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2010 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#ifndef _SBCHIPC_H
diff --git a/drivers/net/wireless/broadcom/brcm80211/include/defs.h b/drivers/net/wireless/broadcom/brcm80211/include/defs.h
index 8d1e85e0ed51..9e7e6116eb74 100644
--- a/drivers/net/wireless/broadcom/brcm80211/include/defs.h
+++ b/drivers/net/wireless/broadcom/brcm80211/include/defs.h
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2010 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#ifndef _BRCM_DEFS_H_
diff --git a/drivers/net/wireless/broadcom/brcm80211/include/soc.h b/drivers/net/wireless/broadcom/brcm80211/include/soc.h
index 123cfa854a0d..92d942b44f2c 100644
--- a/drivers/net/wireless/broadcom/brcm80211/include/soc.h
+++ b/drivers/net/wireless/broadcom/brcm80211/include/soc.h
@@ -1,17 +1,6 @@
+// SPDX-License-Identifier: ISC
/*
* Copyright (c) 2010 Broadcom Corporation
- *
- * Permission to use, copy, modify, and/or distribute this software for any
- * purpose with or without fee is hereby granted, provided that the above
- * copyright notice and this permission notice appear in all copies.
- *
- * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
- * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
- * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
- * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
- * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
- * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#ifndef _BRCM_SOC_H
diff --git a/drivers/net/wireless/cisco/Kconfig b/drivers/net/wireless/cisco/Kconfig
index 7329830ed7cc..01e173ede894 100644
--- a/drivers/net/wireless/cisco/Kconfig
+++ b/drivers/net/wireless/cisco/Kconfig
@@ -17,6 +17,7 @@ config AIRO
depends on CFG80211 && ISA_DMA_API && (PCI || BROKEN)
select WIRELESS_EXT
select CRYPTO
+ select CRYPTO_BLKCIPHER
select WEXT_SPY
select WEXT_PRIV
---help---
@@ -40,6 +41,7 @@ config AIRO_CS
select WEXT_PRIV
select CRYPTO
select CRYPTO_AES
+ select CRYPTO_CTR
---help---
This is the standard Linux driver to support Cisco/Aironet PCMCIA
802.11 wireless cards. This driver is the same as the Aironet
diff --git a/drivers/net/wireless/cisco/airo.c b/drivers/net/wireless/cisco/airo.c
index 3f5a14112c6b..9342ffbe1e81 100644
--- a/drivers/net/wireless/cisco/airo.c
+++ b/drivers/net/wireless/cisco/airo.c
@@ -49,6 +49,9 @@
#include <linux/kthread.h>
#include <linux/freezer.h>
+#include <crypto/aes.h>
+#include <crypto/skcipher.h>
+
#include <net/cfg80211.h>
#include <net/iw_handler.h>
@@ -951,7 +954,7 @@ typedef struct {
} mic_statistics;
typedef struct {
- u32 coeff[((EMMH32_MSGLEN_MAX)+3)>>2];
+ __be32 coeff[((EMMH32_MSGLEN_MAX)+3)>>2];
u64 accum; // accumulated mic, reduced to u32 in final()
int position; // current position (byte offset) in message
union {
@@ -1216,7 +1219,7 @@ struct airo_info {
struct iw_spy_data spy_data;
struct iw_public_data wireless_data;
/* MIC stuff */
- struct crypto_cipher *tfm;
+ struct crypto_sync_skcipher *tfm;
mic_module mod[2];
mic_statistics micstats;
HostRxDesc rxfids[MPI_MAX_FIDS]; // rx/tx/config MPI350 descriptors
@@ -1291,14 +1294,14 @@ static int flashrestart(struct airo_info *ai,struct net_device *dev);
static int RxSeqValid (struct airo_info *ai,miccntx *context,int mcast,u32 micSeq);
static void MoveWindow(miccntx *context, u32 micSeq);
static void emmh32_setseed(emmh32_context *context, u8 *pkey, int keylen,
- struct crypto_cipher *tfm);
+ struct crypto_sync_skcipher *tfm);
static void emmh32_init(emmh32_context *context);
static void emmh32_update(emmh32_context *context, u8 *pOctets, int len);
static void emmh32_final(emmh32_context *context, u8 digest[4]);
static int flashpchar(struct airo_info *ai,int byte,int dwelltime);
static void age_mic_context(miccntx *cur, miccntx *old, u8 *key, int key_len,
- struct crypto_cipher *tfm)
+ struct crypto_sync_skcipher *tfm)
{
/* If the current MIC context is valid and its key is the same as
* the MIC register, there's nothing to do.
@@ -1359,7 +1362,7 @@ static int micsetup(struct airo_info *ai) {
int i;
if (ai->tfm == NULL)
- ai->tfm = crypto_alloc_cipher("aes", 0, 0);
+ ai->tfm = crypto_alloc_sync_skcipher("ctr(aes)", 0, 0);
if (IS_ERR(ai->tfm)) {
airo_print_err(ai->dev->name, "failed to load transform for AES");
@@ -1624,37 +1627,31 @@ static void MoveWindow(miccntx *context, u32 micSeq)
/* mic accumulate */
#define MIC_ACCUM(val) \
- context->accum += (u64)(val) * context->coeff[coeff_position++];
-
-static unsigned char aes_counter[16];
+ context->accum += (u64)(val) * be32_to_cpu(context->coeff[coeff_position++]);
/* expand the key to fill the MMH coefficient array */
static void emmh32_setseed(emmh32_context *context, u8 *pkey, int keylen,
- struct crypto_cipher *tfm)
+ struct crypto_sync_skcipher *tfm)
{
/* take the keying material, expand if necessary, truncate at 16-bytes */
/* run through AES counter mode to generate context->coeff[] */
- int i,j;
- u32 counter;
- u8 *cipher, plain[16];
-
- crypto_cipher_setkey(tfm, pkey, 16);
- counter = 0;
- for (i = 0; i < ARRAY_SIZE(context->coeff); ) {
- aes_counter[15] = (u8)(counter >> 0);
- aes_counter[14] = (u8)(counter >> 8);
- aes_counter[13] = (u8)(counter >> 16);
- aes_counter[12] = (u8)(counter >> 24);
- counter++;
- memcpy (plain, aes_counter, 16);
- crypto_cipher_encrypt_one(tfm, plain, plain);
- cipher = plain;
- for (j = 0; (j < 16) && (i < ARRAY_SIZE(context->coeff)); ) {
- context->coeff[i++] = ntohl(*(__be32 *)&cipher[j]);
- j += 4;
- }
- }
+ SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm);
+ struct scatterlist sg;
+ u8 iv[AES_BLOCK_SIZE] = {};
+ int ret;
+
+ crypto_sync_skcipher_setkey(tfm, pkey, 16);
+
+ memset(context->coeff, 0, sizeof(context->coeff));
+ sg_init_one(&sg, context->coeff, sizeof(context->coeff));
+
+ skcipher_request_set_sync_tfm(req, tfm);
+ skcipher_request_set_callback(req, 0, NULL, NULL);
+ skcipher_request_set_crypt(req, &sg, &sg, sizeof(context->coeff), iv);
+
+ ret = crypto_skcipher_encrypt(req);
+ WARN_ON_ONCE(ret);
}
/* prepare for calculation of a new mic */
@@ -2415,7 +2412,7 @@ void stop_airo_card( struct net_device *dev, int freeres )
ai->shared, ai->shared_dma);
}
}
- crypto_free_cipher(ai->tfm);
+ crypto_free_sync_skcipher(ai->tfm);
del_airo_dev(ai);
free_netdev( dev );
}
diff --git a/drivers/net/wireless/intel/iwlegacy/3945-rs.c b/drivers/net/wireless/intel/iwlegacy/3945-rs.c
index 5bd8a9ee8b1e..6209f85a71dd 100644
--- a/drivers/net/wireless/intel/iwlegacy/3945-rs.c
+++ b/drivers/net/wireless/intel/iwlegacy/3945-rs.c
@@ -631,9 +631,6 @@ il3945_rs_get_rate(void *il_r, struct ieee80211_sta *sta, void *il_sta,
il_sta = NULL;
}
- if (rate_control_send_low(sta, il_sta, txrc))
- return;
-
rate_mask = sta->supp_rates[sband->band];
/* get user max rate if set */
@@ -846,17 +843,8 @@ il3945_add_debugfs(void *il, void *il_sta, struct dentry *dir)
{
struct il3945_rs_sta *lq_sta = il_sta;
- lq_sta->rs_sta_dbgfs_stats_table_file =
- debugfs_create_file("rate_stats_table", 0600, dir, lq_sta,
- &rs_sta_dbgfs_stats_table_ops);
-
-}
-
-static void
-il3945_remove_debugfs(void *il, void *il_sta)
-{
- struct il3945_rs_sta *lq_sta = il_sta;
- debugfs_remove(lq_sta->rs_sta_dbgfs_stats_table_file);
+ debugfs_create_file("rate_stats_table", 0600, dir, lq_sta,
+ &rs_sta_dbgfs_stats_table_ops);
}
#endif
@@ -883,7 +871,6 @@ static const struct rate_control_ops rs_ops = {
.free_sta = il3945_rs_free_sta,
#ifdef CONFIG_MAC80211_DEBUGFS
.add_sta_debugfs = il3945_add_debugfs,
- .remove_sta_debugfs = il3945_remove_debugfs,
#endif
};
diff --git a/drivers/net/wireless/intel/iwlegacy/3945.h b/drivers/net/wireless/intel/iwlegacy/3945.h
index 8e97e95fcbc4..82e4a4878bc2 100644
--- a/drivers/net/wireless/intel/iwlegacy/3945.h
+++ b/drivers/net/wireless/intel/iwlegacy/3945.h
@@ -72,9 +72,6 @@ struct il3945_rs_sta {
u8 start_rate;
struct timer_list rate_scale_flush;
struct il3945_rate_scale_data win[RATE_COUNT_3945];
-#ifdef CONFIG_MAC80211_DEBUGFS
- struct dentry *rs_sta_dbgfs_stats_table_file;
-#endif
/* used to be in sta_info */
int last_txrate_idx;
diff --git a/drivers/net/wireless/intel/iwlegacy/4965-rs.c b/drivers/net/wireless/intel/iwlegacy/4965-rs.c
index a824a10a43b6..7c6e2c863497 100644
--- a/drivers/net/wireless/intel/iwlegacy/4965-rs.c
+++ b/drivers/net/wireless/intel/iwlegacy/4965-rs.c
@@ -2209,10 +2209,6 @@ il4965_rs_get_rate(void *il_r, struct ieee80211_sta *sta, void *il_sta,
il_sta = NULL;
}
- /* Send management frames and NO_ACK data using lowest rate. */
- if (rate_control_send_low(sta, il_sta, txrc))
- return;
-
if (!lq_sta)
return;
@@ -2752,29 +2748,15 @@ static void
il4965_rs_add_debugfs(void *il, void *il_sta, struct dentry *dir)
{
struct il_lq_sta *lq_sta = il_sta;
- lq_sta->rs_sta_dbgfs_scale_table_file =
- debugfs_create_file("rate_scale_table", 0600, dir,
- lq_sta, &rs_sta_dbgfs_scale_table_ops);
- lq_sta->rs_sta_dbgfs_stats_table_file =
- debugfs_create_file("rate_stats_table", 0400, dir, lq_sta,
- &rs_sta_dbgfs_stats_table_ops);
- lq_sta->rs_sta_dbgfs_rate_scale_data_file =
- debugfs_create_file("rate_scale_data", 0400, dir, lq_sta,
- &rs_sta_dbgfs_rate_scale_data_ops);
- lq_sta->rs_sta_dbgfs_tx_agg_tid_en_file =
- debugfs_create_u8("tx_agg_tid_enable", 0600, dir,
- &lq_sta->tx_agg_tid_en);
-
-}
-static void
-il4965_rs_remove_debugfs(void *il, void *il_sta)
-{
- struct il_lq_sta *lq_sta = il_sta;
- debugfs_remove(lq_sta->rs_sta_dbgfs_scale_table_file);
- debugfs_remove(lq_sta->rs_sta_dbgfs_stats_table_file);
- debugfs_remove(lq_sta->rs_sta_dbgfs_rate_scale_data_file);
- debugfs_remove(lq_sta->rs_sta_dbgfs_tx_agg_tid_en_file);
+ debugfs_create_file("rate_scale_table", 0600, dir, lq_sta,
+ &rs_sta_dbgfs_scale_table_ops);
+ debugfs_create_file("rate_stats_table", 0400, dir, lq_sta,
+ &rs_sta_dbgfs_stats_table_ops);
+ debugfs_create_file("rate_scale_data", 0400, dir, lq_sta,
+ &rs_sta_dbgfs_rate_scale_data_ops);
+ debugfs_create_u8("tx_agg_tid_enable", 0600, dir,
+ &lq_sta->tx_agg_tid_en);
}
#endif
@@ -2801,7 +2783,6 @@ static const struct rate_control_ops rs_4965_ops = {
.free_sta = il4965_rs_free_sta,
#ifdef CONFIG_MAC80211_DEBUGFS
.add_sta_debugfs = il4965_rs_add_debugfs,
- .remove_sta_debugfs = il4965_rs_remove_debugfs,
#endif
};
diff --git a/drivers/net/wireless/intel/iwlegacy/common.h b/drivers/net/wireless/intel/iwlegacy/common.h
index 6685b9a7e7d1..e7fb8e6bb9e7 100644
--- a/drivers/net/wireless/intel/iwlegacy/common.h
+++ b/drivers/net/wireless/intel/iwlegacy/common.h
@@ -2807,10 +2807,6 @@ struct il_lq_sta {
struct il_traffic_load load[TID_MAX_LOAD_COUNT];
u8 tx_agg_tid_en;
#ifdef CONFIG_MAC80211_DEBUGFS
- struct dentry *rs_sta_dbgfs_scale_table_file;
- struct dentry *rs_sta_dbgfs_stats_table_file;
- struct dentry *rs_sta_dbgfs_rate_scale_data_file;
- struct dentry *rs_sta_dbgfs_tx_agg_tid_en_file;
u32 dbg_fixed_rate;
#endif
struct il_priv *drv;
diff --git a/drivers/net/wireless/intel/iwlwifi/cfg/22000.c b/drivers/net/wireless/intel/iwlwifi/cfg/22000.c
index a9c846c59289..93526dfaf791 100644
--- a/drivers/net/wireless/intel/iwlwifi/cfg/22000.c
+++ b/drivers/net/wireless/intel/iwlwifi/cfg/22000.c
@@ -82,6 +82,7 @@
#define IWL_22000_HR_A0_FW_PRE "iwlwifi-QuQnj-a0-hr-a0-"
#define IWL_QU_B_JF_B_FW_PRE "iwlwifi-Qu-b0-jf-b0-"
#define IWL_QUZ_A_HR_B_FW_PRE "iwlwifi-QuZ-a0-hr-b0-"
+#define IWL_QUZ_A_JF_B_FW_PRE "iwlwifi-QuZ-a0-jf-b0-"
#define IWL_QNJ_B_JF_B_FW_PRE "iwlwifi-QuQnj-b0-jf-b0-"
#define IWL_CC_A_FW_PRE "iwlwifi-cc-a0-"
#define IWL_22000_SO_A_JF_B_FW_PRE "iwlwifi-so-a0-jf-b0-"
@@ -106,6 +107,8 @@
IWL_22000_HR_A0_FW_PRE __stringify(api) ".ucode"
#define IWL_QUZ_A_HR_B_MODULE_FIRMWARE(api) \
IWL_QUZ_A_HR_B_FW_PRE __stringify(api) ".ucode"
+#define IWL_QUZ_A_JF_B_MODULE_FIRMWARE(api) \
+ IWL_QUZ_A_JF_B_FW_PRE __stringify(api) ".ucode"
#define IWL_QU_B_JF_B_MODULE_FIRMWARE(api) \
IWL_QU_B_JF_B_FW_PRE __stringify(api) ".ucode"
#define IWL_QNJ_B_JF_B_MODULE_FIRMWARE(api) \
@@ -241,6 +244,18 @@ const struct iwl_cfg iwl_ax101_cfg_qu_hr = {
.max_tx_agg_size = IEEE80211_MAX_AMPDU_BUF_HT,
};
+const struct iwl_cfg iwl_ax201_cfg_qu_hr = {
+ .name = "Intel(R) Wi-Fi 6 AX201 160MHz",
+ .fw_name_pre = IWL_22000_QU_B_HR_B_FW_PRE,
+ IWL_DEVICE_22500,
+ /*
+ * This device doesn't support receiving BlockAck with a large bitmap
+ * so we need to restrict the size of transmitted aggregation to the
+ * HT size; mac80211 would otherwise pick the HE max (256) by default.
+ */
+ .max_tx_agg_size = IEEE80211_MAX_AMPDU_BUF_HT,
+};
+
const struct iwl_cfg iwl_ax101_cfg_quz_hr = {
.name = "Intel(R) Wi-Fi 6 AX101",
.fw_name_pre = IWL_QUZ_A_HR_B_FW_PRE,
@@ -253,6 +268,42 @@ const struct iwl_cfg iwl_ax101_cfg_quz_hr = {
.max_tx_agg_size = IEEE80211_MAX_AMPDU_BUF_HT,
};
+const struct iwl_cfg iwl_ax201_cfg_quz_hr = {
+ .name = "Intel(R) Wi-Fi 6 AX201 160MHz",
+ .fw_name_pre = IWL_QUZ_A_HR_B_FW_PRE,
+ IWL_DEVICE_22500,
+ /*
+ * This device doesn't support receiving BlockAck with a large bitmap
+ * so we need to restrict the size of transmitted aggregation to the
+ * HT size; mac80211 would otherwise pick the HE max (256) by default.
+ */
+ .max_tx_agg_size = IEEE80211_MAX_AMPDU_BUF_HT,
+};
+
+const struct iwl_cfg iwl_ax1650s_cfg_quz_hr = {
+ .name = "Killer(R) Wi-Fi 6 AX1650s 160MHz Wireless Network Adapter (201D2W)",
+ .fw_name_pre = IWL_QUZ_A_HR_B_FW_PRE,
+ IWL_DEVICE_22500,
+ /*
+ * This device doesn't support receiving BlockAck with a large bitmap
+ * so we need to restrict the size of transmitted aggregation to the
+ * HT size; mac80211 would otherwise pick the HE max (256) by default.
+ */
+ .max_tx_agg_size = IEEE80211_MAX_AMPDU_BUF_HT,
+};
+
+const struct iwl_cfg iwl_ax1650i_cfg_quz_hr = {
+ .name = "Killer(R) Wi-Fi 6 AX1650i 160MHz Wireless Network Adapter (201NGW)",
+ .fw_name_pre = IWL_QUZ_A_HR_B_FW_PRE,
+ IWL_DEVICE_22500,
+ /*
+ * This device doesn't support receiving BlockAck with a large bitmap
+ * so we need to restrict the size of transmitted aggregation to the
+ * HT size; mac80211 would otherwise pick the HE max (256) by default.
+ */
+ .max_tx_agg_size = IEEE80211_MAX_AMPDU_BUF_HT,
+};
+
const struct iwl_cfg iwl_ax200_cfg_cc = {
.name = "Intel(R) Wi-Fi 6 AX200 160MHz",
.fw_name_pre = IWL_CC_A_FW_PRE,
@@ -333,6 +384,90 @@ const struct iwl_cfg iwl9560_2ac_cfg_qnj_jf_b0 = {
.max_tx_agg_size = IEEE80211_MAX_AMPDU_BUF_HT,
};
+const struct iwl_cfg iwl9560_2ac_cfg_quz_a0_jf_b0_soc = {
+ .name = "Intel(R) Wireless-AC 9560 160MHz",
+ .fw_name_pre = IWL_QUZ_A_JF_B_FW_PRE,
+ IWL_DEVICE_22500,
+ /*
+ * This device doesn't support receiving BlockAck with a large bitmap
+ * so we need to restrict the size of transmitted aggregation to the
+ * HT size; mac80211 would otherwise pick the HE max (256) by default.
+ */
+ .max_tx_agg_size = IEEE80211_MAX_AMPDU_BUF_HT,
+ .integrated = true,
+ .soc_latency = 5000,
+};
+
+const struct iwl_cfg iwl9560_2ac_160_cfg_quz_a0_jf_b0_soc = {
+ .name = "Intel(R) Wireless-AC 9560 160MHz",
+ .fw_name_pre = IWL_QUZ_A_JF_B_FW_PRE,
+ IWL_DEVICE_22500,
+ /*
+ * This device doesn't support receiving BlockAck with a large bitmap
+ * so we need to restrict the size of transmitted aggregation to the
+ * HT size; mac80211 would otherwise pick the HE max (256) by default.
+ */
+ .max_tx_agg_size = IEEE80211_MAX_AMPDU_BUF_HT,
+ .integrated = true,
+ .soc_latency = 5000,
+};
+
+const struct iwl_cfg iwl9461_2ac_cfg_quz_a0_jf_b0_soc = {
+ .name = "Intel(R) Dual Band Wireless AC 9461",
+ .fw_name_pre = IWL_QUZ_A_JF_B_FW_PRE,
+ IWL_DEVICE_22500,
+ /*
+ * This device doesn't support receiving BlockAck with a large bitmap
+ * so we need to restrict the size of transmitted aggregation to the
+ * HT size; mac80211 would otherwise pick the HE max (256) by default.
+ */
+ .max_tx_agg_size = IEEE80211_MAX_AMPDU_BUF_HT,
+ .integrated = true,
+ .soc_latency = 5000,
+};
+
+const struct iwl_cfg iwl9462_2ac_cfg_quz_a0_jf_b0_soc = {
+ .name = "Intel(R) Dual Band Wireless AC 9462",
+ .fw_name_pre = IWL_QUZ_A_JF_B_FW_PRE,
+ IWL_DEVICE_22500,
+ /*
+ * This device doesn't support receiving BlockAck with a large bitmap
+ * so we need to restrict the size of transmitted aggregation to the
+ * HT size; mac80211 would otherwise pick the HE max (256) by default.
+ */
+ .max_tx_agg_size = IEEE80211_MAX_AMPDU_BUF_HT,
+ .integrated = true,
+ .soc_latency = 5000,
+};
+
+const struct iwl_cfg iwl9560_killer_s_2ac_cfg_quz_a0_jf_b0_soc = {
+ .name = "Killer (R) Wireless-AC 1550s Wireless Network Adapter (9560NGW)",
+ .fw_name_pre = IWL_QUZ_A_JF_B_FW_PRE,
+ IWL_DEVICE_22500,
+ /*
+ * This device doesn't support receiving BlockAck with a large bitmap
+ * so we need to restrict the size of transmitted aggregation to the
+ * HT size; mac80211 would otherwise pick the HE max (256) by default.
+ */
+ .max_tx_agg_size = IEEE80211_MAX_AMPDU_BUF_HT,
+ .integrated = true,
+ .soc_latency = 5000,
+};
+
+const struct iwl_cfg iwl9560_killer_i_2ac_cfg_quz_a0_jf_b0_soc = {
+ .name = "Killer (R) Wireless-AC 1550i Wireless Network Adapter (9560NGW)",
+ .fw_name_pre = IWL_QUZ_A_JF_B_FW_PRE,
+ IWL_DEVICE_22500,
+ /*
+ * This device doesn't support receiving BlockAck with a large bitmap
+ * so we need to restrict the size of transmitted aggregation to the
+ * HT size; mac80211 would otherwise pick the HE max (256) by default.
+ */
+ .max_tx_agg_size = IEEE80211_MAX_AMPDU_BUF_HT,
+ .integrated = true,
+ .soc_latency = 5000,
+};
+
const struct iwl_cfg killer1550i_2ac_cfg_qu_b0_jf_b0 = {
.name = "Killer (R) Wireless-AC 1550i Wireless Network Adapter (9560NGW)",
.fw_name_pre = IWL_QU_B_JF_B_FW_PRE,
@@ -424,12 +559,12 @@ const struct iwl_cfg iwlax210_2ax_cfg_so_jf_a0 = {
};
const struct iwl_cfg iwlax210_2ax_cfg_so_hr_a0 = {
- .name = "Intel(R) Wi-Fi 6 AX201 160MHz",
+ .name = "Intel(R) Wi-Fi 7 AX210 160MHz",
.fw_name_pre = IWL_22000_SO_A_HR_B_FW_PRE,
IWL_DEVICE_AX210,
};
-const struct iwl_cfg iwlax210_2ax_cfg_so_gf_a0 = {
+const struct iwl_cfg iwlax211_2ax_cfg_so_gf_a0 = {
.name = "Intel(R) Wi-Fi 7 AX211 160MHz",
.fw_name_pre = IWL_22000_SO_A_GF_A_FW_PRE,
.uhb_supported = true,
@@ -443,8 +578,8 @@ const struct iwl_cfg iwlax210_2ax_cfg_ty_gf_a0 = {
IWL_DEVICE_AX210,
};
-const struct iwl_cfg iwlax210_2ax_cfg_so_gf4_a0 = {
- .name = "Intel(R) Wi-Fi 7 AX210 160MHz",
+const struct iwl_cfg iwlax411_2ax_cfg_so_gf4_a0 = {
+ .name = "Intel(R) Wi-Fi 7 AX411 160MHz",
.fw_name_pre = IWL_22000_SO_A_GF4_A_FW_PRE,
IWL_DEVICE_AX210,
};
@@ -457,6 +592,7 @@ MODULE_FIRMWARE(IWL_22000_HR_B_QNJ_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
MODULE_FIRMWARE(IWL_22000_HR_A0_QNJ_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
MODULE_FIRMWARE(IWL_QU_B_JF_B_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
MODULE_FIRMWARE(IWL_QUZ_A_HR_B_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
+MODULE_FIRMWARE(IWL_QUZ_A_JF_B_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
MODULE_FIRMWARE(IWL_QNJ_B_JF_B_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
MODULE_FIRMWARE(IWL_CC_A_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
MODULE_FIRMWARE(IWL_22000_SO_A_JF_B_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
diff --git a/drivers/net/wireless/intel/iwlwifi/dvm/lib.c b/drivers/net/wireless/intel/iwlwifi/dvm/lib.c
index 1fd6bf578474..eab94d2f46b1 100644
--- a/drivers/net/wireless/intel/iwlwifi/dvm/lib.c
+++ b/drivers/net/wireless/intel/iwlwifi/dvm/lib.c
@@ -1009,8 +1009,7 @@ int iwlagn_send_patterns(struct iwl_priv *priv,
if (!wowlan->n_patterns)
return 0;
- cmd.len[0] = sizeof(*pattern_cmd) +
- wowlan->n_patterns * sizeof(struct iwlagn_wowlan_pattern);
+ cmd.len[0] = struct_size(pattern_cmd, patterns, wowlan->n_patterns);
pattern_cmd = kmalloc(cmd.len[0], GFP_KERNEL);
if (!pattern_cmd)
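For context on the hunk above: struct_size() (from <linux/overflow.h>) returns the size of a structure that ends in a flexible array member, i.e. sizeof(*ptr) plus count times the size of one array element, and it saturates instead of wrapping if the arithmetic would overflow. A minimal sketch of the pattern under that assumption — demo_cmd and demo_alloc are hypothetical names for illustration, not part of this patch:

#include <linux/overflow.h>
#include <linux/slab.h>
#include <linux/types.h>

struct demo_cmd {
	u32 n_entries;
	u32 entries[];			/* flexible array member */
};

static struct demo_cmd *demo_alloc(u32 n)
{
	/* open-coded equivalent: sizeof(*cmd) + n * sizeof(u32),
	 * but without the overflow protection
	 */
	struct demo_cmd *cmd = kmalloc(struct_size(cmd, entries, n),
				       GFP_KERNEL);

	if (cmd)
		cmd->n_entries = n;
	return cmd;
}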
diff --git a/drivers/net/wireless/intel/iwlwifi/dvm/rs.c b/drivers/net/wireless/intel/iwlwifi/dvm/rs.c
index b500c9279a32..b1e5d64ca60d 100644
--- a/drivers/net/wireless/intel/iwlwifi/dvm/rs.c
+++ b/drivers/net/wireless/intel/iwlwifi/dvm/rs.c
@@ -2720,10 +2720,6 @@ static void rs_get_rate(void *priv_r, struct ieee80211_sta *sta, void *priv_sta,
priv_sta = NULL;
}
- /* Send management frames and NO_ACK data using lowest rate. */
- if (rate_control_send_low(sta, priv_sta, txrc))
- return;
-
rate_idx = lq_sta->last_txrate_idx;
if (lq_sta->last_rate_n_flags & RATE_MCS_HT_MSK) {
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
index 405038ce98d6..7573af2d88ce 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+++ b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
@@ -97,7 +97,7 @@ IWL_EXPORT_SYMBOL(iwl_acpi_get_object);
union acpi_object *iwl_acpi_get_wifi_pkg(struct device *dev,
union acpi_object *data,
- int data_size)
+ int data_size, int *tbl_rev)
{
int i;
union acpi_object *wifi_pkg;
@@ -113,16 +113,19 @@ union acpi_object *iwl_acpi_get_wifi_pkg(struct device *dev,
/*
* We need at least two packages, one for the revision and one
* for the data itself. Also check that the revision is valid
- * (i.e. it is an integer set to 0).
+ * (i.e. it is an integer smaller than 2, as we currently support only
+ * 2 revisions).
*/
if (data->type != ACPI_TYPE_PACKAGE ||
data->package.count < 2 ||
data->package.elements[0].type != ACPI_TYPE_INTEGER ||
- data->package.elements[0].integer.value != 0) {
+ data->package.elements[0].integer.value > 1) {
IWL_DEBUG_DEV_RADIO(dev, "Unsupported packages structure\n");
return ERR_PTR(-EINVAL);
}
+ *tbl_rev = data->package.elements[0].integer.value;
+
/* loop through all the packages to find the one for WiFi */
for (i = 1; i < data->package.count; i++) {
union acpi_object *domain;
@@ -151,14 +154,15 @@ int iwl_acpi_get_mcc(struct device *dev, char *mcc)
{
union acpi_object *wifi_pkg, *data;
u32 mcc_val;
- int ret;
+ int ret, tbl_rev;
data = iwl_acpi_get_object(dev, ACPI_WRDD_METHOD);
if (IS_ERR(data))
return PTR_ERR(data);
- wifi_pkg = iwl_acpi_get_wifi_pkg(dev, data, ACPI_WRDD_WIFI_DATA_SIZE);
- if (IS_ERR(wifi_pkg)) {
+ wifi_pkg = iwl_acpi_get_wifi_pkg(dev, data, ACPI_WRDD_WIFI_DATA_SIZE,
+ &tbl_rev);
+ if (IS_ERR(wifi_pkg) || tbl_rev != 0) {
ret = PTR_ERR(wifi_pkg);
goto out_free;
}
@@ -185,6 +189,7 @@ u64 iwl_acpi_get_pwr_limit(struct device *dev)
{
union acpi_object *data, *wifi_pkg;
u64 dflt_pwr_limit;
+ int tbl_rev;
data = iwl_acpi_get_object(dev, ACPI_SPLC_METHOD);
if (IS_ERR(data)) {
@@ -193,8 +198,8 @@ u64 iwl_acpi_get_pwr_limit(struct device *dev)
}
wifi_pkg = iwl_acpi_get_wifi_pkg(dev, data,
- ACPI_SPLC_WIFI_DATA_SIZE);
- if (IS_ERR(wifi_pkg) ||
+ ACPI_SPLC_WIFI_DATA_SIZE, &tbl_rev);
+ if (IS_ERR(wifi_pkg) || tbl_rev != 0 ||
wifi_pkg->package.elements[1].integer.value != ACPI_TYPE_INTEGER) {
dflt_pwr_limit = 0;
goto out_free;
@@ -211,14 +216,15 @@ IWL_EXPORT_SYMBOL(iwl_acpi_get_pwr_limit);
int iwl_acpi_get_eckv(struct device *dev, u32 *extl_clk)
{
union acpi_object *wifi_pkg, *data;
- int ret;
+ int ret, tbl_rev;
data = iwl_acpi_get_object(dev, ACPI_ECKV_METHOD);
if (IS_ERR(data))
return PTR_ERR(data);
- wifi_pkg = iwl_acpi_get_wifi_pkg(dev, data, ACPI_ECKV_WIFI_DATA_SIZE);
- if (IS_ERR(wifi_pkg)) {
+ wifi_pkg = iwl_acpi_get_wifi_pkg(dev, data, ACPI_ECKV_WIFI_DATA_SIZE,
+ &tbl_rev);
+ if (IS_ERR(wifi_pkg) || tbl_rev != 0) {
ret = PTR_ERR(wifi_pkg);
goto out_free;
}
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/acpi.h b/drivers/net/wireless/intel/iwlwifi/fw/acpi.h
index f5704e16643f..991a23450999 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/acpi.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/acpi.h
@@ -97,7 +97,7 @@
void *iwl_acpi_get_object(struct device *dev, acpi_string method);
union acpi_object *iwl_acpi_get_wifi_pkg(struct device *dev,
union acpi_object *data,
- int data_size);
+ int data_size, int *tbl_rev);
/**
* iwl_acpi_get_mcc - read MCC from ACPI, if available
@@ -131,7 +131,8 @@ static inline void *iwl_acpi_get_object(struct device *dev, acpi_string method)
static inline union acpi_object *iwl_acpi_get_wifi_pkg(struct device *dev,
union acpi_object *data,
- int data_size)
+ int data_size,
+ int *tbl_rev)
{
return ERR_PTR(-ENOENT);
}
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/dbg-tlv.h b/drivers/net/wireless/intel/iwlwifi/fw/api/dbg-tlv.h
index f4202bc231a6..aaf3974a9a20 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/api/dbg-tlv.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/api/dbg-tlv.h
@@ -291,6 +291,28 @@ struct iwl_fw_ini_trigger_tlv {
struct iwl_fw_ini_trigger trigger_config[];
} __packed; /* FW_TLV_DEBUG_TRIGGERS_API_S_VER_1 */
+#define IWL_FW_INI_MAX_IMG_NAME_LEN 32
+#define IWL_FW_INI_MAX_DBG_CFG_NAME_LEN 64
+
+/**
+ * struct iwl_fw_ini_debug_info_tlv - (IWL_UCODE_TLV_TYPE_DEBUG_INFO)
+ *
+ * holds image name and debug configuration name
+ *
+ * @header: header
+ * @img_name_len: length of the image name string
+ * @img_name: image name string
+ * @dbg_cfg_name_len: length of the debug configuration name string
+ * @dbg_cfg_name: debug configuration name string
+ */
+struct iwl_fw_ini_debug_info_tlv {
+ struct iwl_fw_ini_header header;
+ __le32 img_name_len;
+ u8 img_name[IWL_FW_INI_MAX_IMG_NAME_LEN];
+ __le32 dbg_cfg_name_len;
+ u8 dbg_cfg_name[IWL_FW_INI_MAX_DBG_CFG_NAME_LEN];
+} __packed; /* FW_DEBUG_TLV_INFO_API_S_VER_1 */
+
/**
* enum iwl_fw_ini_trigger_id
*
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/location.h b/drivers/net/wireless/intel/iwlwifi/fw/api/location.h
index 8d78b0e671c0..ec864c7b497f 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/api/location.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/api/location.h
@@ -937,8 +937,13 @@ struct iwl_ftm_responder_stats {
__le16 reserved;
} __packed; /* TOF_RESPONDER_STATISTICS_NTFY_S_VER_2 */
-#define IWL_CSI_CHUNK_CTL_NUM_MASK 0x3
-#define IWL_CSI_CHUNK_CTL_IDX_MASK 0xc
+#define IWL_CSI_MAX_EXPECTED_CHUNKS 16
+
+#define IWL_CSI_CHUNK_CTL_NUM_MASK_VER_1 0x0003
+#define IWL_CSI_CHUNK_CTL_IDX_MASK_VER_1 0x000c
+
+#define IWL_CSI_CHUNK_CTL_NUM_MASK_VER_2 0x00ff
+#define IWL_CSI_CHUNK_CTL_IDX_MASK_VER_2 0xff00
struct iwl_csi_chunk_notification {
__le32 token;
@@ -946,6 +951,6 @@ struct iwl_csi_chunk_notification {
__le16 ctl;
__le32 size;
u8 data[];
-} __packed; /* CSI_CHUNKS_HDR_NTFY_API_S_VER_1 */
+} __packed; /* CSI_CHUNKS_HDR_NTFY_API_S_VER_1/VER_2 */
#endif /* __iwl_fw_api_location_h__ */
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/power.h b/drivers/net/wireless/intel/iwlwifi/fw/api/power.h
index 01f003c6cff9..f195db398bed 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/api/power.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/api/power.h
@@ -420,13 +420,25 @@ struct iwl_per_chain_offset_group {
} __packed; /* PER_CHAIN_LIMIT_OFFSET_GROUP_S_VER_1 */
/**
+ * struct iwl_geo_tx_power_profiles_cmd_v1 - struct for GEO_TX_POWER_LIMIT cmd.
+ * @ops: operations, value from &enum iwl_geo_per_chain_offset_operation
+ * @table: offset profile per band.
+ */
+struct iwl_geo_tx_power_profiles_cmd_v1 {
+ __le32 ops;
+ struct iwl_per_chain_offset_group table[IWL_NUM_GEO_PROFILES];
+} __packed; /* GEO_TX_POWER_LIMIT_VER_1 */
+
+/**
* struct iwl_geo_tx_power_profile_cmd - struct for GEO_TX_POWER_LIMIT cmd.
* @ops: operations, value from &enum iwl_geo_per_chain_offset_operation
* @table: offset profile per band.
+ * @table_revision: BIOS table revision.
*/
struct iwl_geo_tx_power_profiles_cmd {
__le32 ops;
struct iwl_per_chain_offset_group table[IWL_NUM_GEO_PROFILES];
+ __le32 table_revision;
} __packed; /* GEO_TX_POWER_LIMIT */
/**
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/scan.h b/drivers/net/wireless/intel/iwlwifi/fw/api/scan.h
index 1a67a2a439ab..c4960f045415 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/api/scan.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/api/scan.h
@@ -750,6 +750,21 @@ struct iwl_scan_req_umac {
struct iwl_scan_umac_chan_param channel;
u8 data[];
} v8; /* SCAN_REQUEST_CMD_UMAC_API_S_VER_8 */
+ struct {
+ u8 active_dwell[SCAN_TWO_LMACS];
+ u8 adwell_default_hb_n_aps;
+ u8 adwell_default_lb_n_aps;
+ u8 adwell_default_n_aps_social;
+ u8 general_flags2;
+ __le16 adwell_max_budget;
+ __le32 max_out_time[SCAN_TWO_LMACS];
+ __le32 suspend_time[SCAN_TWO_LMACS];
+ __le32 scan_priority;
+ u8 passive_dwell[SCAN_TWO_LMACS];
+ u8 num_of_fragments[SCAN_TWO_LMACS];
+ struct iwl_scan_umac_chan_param channel;
+ u8 data[];
+ } v9; /* SCAN_REQUEST_CMD_UMAC_API_S_VER_9 */
};
} __packed;
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
index 33d7bc5500db..e411ac98290d 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
+++ b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
@@ -1059,7 +1059,7 @@ static int iwl_dump_ini_prph_iter(struct iwl_fw_runtime *fwrt,
u32 addr = le32_to_cpu(reg->start_addr[idx]) + le32_to_cpu(reg->offset);
int i;
- range->start_addr = cpu_to_le64(addr);
+ range->internal_base_addr = cpu_to_le32(addr);
range->range_data_size = reg->internal.range_data_size;
for (i = 0; i < le32_to_cpu(reg->internal.range_data_size); i += 4) {
prph_val = iwl_read_prph(fwrt->trans, addr + i);
@@ -1080,7 +1080,7 @@ static int iwl_dump_ini_csr_iter(struct iwl_fw_runtime *fwrt,
u32 addr = le32_to_cpu(reg->start_addr[idx]) + le32_to_cpu(reg->offset);
int i;
- range->start_addr = cpu_to_le64(addr);
+ range->internal_base_addr = cpu_to_le32(addr);
range->range_data_size = reg->internal.range_data_size;
for (i = 0; i < le32_to_cpu(reg->internal.range_data_size); i += 4)
*val++ = cpu_to_le32(iwl_trans_read32(fwrt->trans, addr + i));
@@ -1095,7 +1095,7 @@ static int iwl_dump_ini_dev_mem_iter(struct iwl_fw_runtime *fwrt,
struct iwl_fw_ini_error_dump_range *range = range_ptr;
u32 addr = le32_to_cpu(reg->start_addr[idx]) + le32_to_cpu(reg->offset);
- range->start_addr = cpu_to_le64(addr);
+ range->internal_base_addr = cpu_to_le32(addr);
range->range_data_size = reg->internal.range_data_size;
iwl_trans_read_mem_bytes(fwrt->trans, addr, range->data,
le32_to_cpu(reg->internal.range_data_size));
@@ -1111,7 +1111,7 @@ iwl_dump_ini_paging_gen2_iter(struct iwl_fw_runtime *fwrt,
struct iwl_fw_ini_error_dump_range *range = range_ptr;
u32 page_size = fwrt->trans->init_dram.paging[idx].size;
- range->start_addr = cpu_to_le64(idx);
+ range->page_num = cpu_to_le32(idx);
range->range_data_size = cpu_to_le32(page_size);
memcpy(range->data, fwrt->trans->init_dram.paging[idx].block,
page_size);
@@ -1131,7 +1131,7 @@ static int iwl_dump_ini_paging_iter(struct iwl_fw_runtime *fwrt,
dma_addr_t addr = fwrt->fw_paging_db[idx].fw_paging_phys;
u32 page_size = fwrt->fw_paging_db[idx].fw_paging_size;
- range->start_addr = cpu_to_le64(idx);
+ range->page_num = cpu_to_le32(idx);
range->range_data_size = cpu_to_le32(page_size);
dma_sync_single_for_cpu(fwrt->trans->dev, addr, page_size,
DMA_BIDIRECTIONAL);
@@ -1154,11 +1154,11 @@ iwl_dump_ini_mon_dram_iter(struct iwl_fw_runtime *fwrt,
if (start_addr == 0x5a5a5a5a)
return -EBUSY;
- range->start_addr = cpu_to_le64(start_addr);
- range->range_data_size = cpu_to_le32(fwrt->trans->fw_mon[idx].size);
+ range->dram_base_addr = cpu_to_le64(start_addr);
+ range->range_data_size = cpu_to_le32(fwrt->trans->dbg.fw_mon[idx].size);
- memcpy(range->data, fwrt->trans->fw_mon[idx].block,
- fwrt->trans->fw_mon[idx].size);
+ memcpy(range->data, fwrt->trans->dbg.fw_mon[idx].block,
+ fwrt->trans->dbg.fw_mon[idx].size);
return sizeof(*range) + le32_to_cpu(range->range_data_size);
}
@@ -1228,7 +1228,7 @@ static int iwl_dump_ini_txf_iter(struct iwl_fw_runtime *fwrt,
struct iwl_fw_ini_region_cfg *reg,
void *range_ptr, int idx)
{
- struct iwl_fw_ini_fifo_error_dump_range *range = range_ptr;
+ struct iwl_fw_ini_error_dump_range *range = range_ptr;
struct iwl_ini_txf_iter_data *iter;
struct iwl_fw_ini_error_dump_register *reg_dump = (void *)range->data;
u32 offs = le32_to_cpu(reg->offset), addr;
@@ -1246,8 +1246,8 @@ static int iwl_dump_ini_txf_iter(struct iwl_fw_runtime *fwrt,
iter = fwrt->dump.fifo_iter;
- range->fifo_num = cpu_to_le32(iter->fifo);
- range->num_of_registers = reg->fifos.num_of_registers;
+ range->fifo_hdr.fifo_num = cpu_to_le32(iter->fifo);
+ range->fifo_hdr.num_of_registers = reg->fifos.num_of_registers;
range->range_data_size = cpu_to_le32(iter->fifo_size + registers_size);
iwl_write_prph_no_grab(fwrt->trans, TXF_LARC_NUM + offs, iter->fifo);
@@ -1336,7 +1336,7 @@ static int iwl_dump_ini_rxf_iter(struct iwl_fw_runtime *fwrt,
struct iwl_fw_ini_region_cfg *reg,
void *range_ptr, int idx)
{
- struct iwl_fw_ini_fifo_error_dump_range *range = range_ptr;
+ struct iwl_fw_ini_error_dump_range *range = range_ptr;
struct iwl_ini_rxf_data rxf_data;
struct iwl_fw_ini_error_dump_register *reg_dump = (void *)range->data;
u32 offs = le32_to_cpu(reg->offset), addr;
@@ -1353,8 +1353,8 @@ static int iwl_dump_ini_rxf_iter(struct iwl_fw_runtime *fwrt,
if (!iwl_trans_grab_nic_access(fwrt->trans, &flags))
return -EBUSY;
- range->fifo_num = cpu_to_le32(rxf_data.fifo_num);
- range->num_of_registers = reg->fifos.num_of_registers;
+ range->fifo_hdr.fifo_num = cpu_to_le32(rxf_data.fifo_num);
+ range->fifo_hdr.num_of_registers = reg->fifos.num_of_registers;
range->range_data_size = cpu_to_le32(rxf_data.size + registers_size);
/*
@@ -1408,7 +1408,7 @@ static void *iwl_dump_ini_mem_fill_header(struct iwl_fw_runtime *fwrt,
{
struct iwl_fw_ini_error_dump *dump = data;
- dump->header.version = cpu_to_le32(IWL_INI_DUMP_MEM_VER);
+ dump->header.version = cpu_to_le32(IWL_INI_DUMP_VER);
return dump->ranges;
}
@@ -1433,7 +1433,7 @@ static void
iwl_trans_release_nic_access(fwrt->trans, &flags);
- data->header.version = cpu_to_le32(IWL_INI_DUMP_MONITOR_VER);
+ data->header.version = cpu_to_le32(IWL_INI_DUMP_VER);
data->write_ptr = cpu_to_le32(write_ptr & write_ptr_msk);
data->cycle_cnt = cpu_to_le32(cycle_cnt & cycle_cnt_msk);
@@ -1490,17 +1490,6 @@ static void
}
-static void *iwl_dump_ini_fifo_fill_header(struct iwl_fw_runtime *fwrt,
- struct iwl_fw_ini_region_cfg *reg,
- void *data)
-{
- struct iwl_fw_ini_fifo_error_dump *dump = data;
-
- dump->header.version = cpu_to_le32(IWL_INI_DUMP_FIFO_VER);
-
- return dump->ranges;
-}
-
static u32 iwl_dump_ini_mem_ranges(struct iwl_fw_runtime *fwrt,
struct iwl_fw_ini_region_cfg *reg)
{
@@ -1592,8 +1581,8 @@ static u32 iwl_dump_ini_mon_dram_get_size(struct iwl_fw_runtime *fwrt,
u32 size = sizeof(struct iwl_fw_ini_monitor_dump) +
sizeof(struct iwl_fw_ini_error_dump_range);
- if (fwrt->trans->num_blocks)
- size += fwrt->trans->fw_mon[0].size;
+ if (fwrt->trans->dbg.num_blocks)
+ size += fwrt->trans->dbg.fw_mon[0].size;
return size;
}
@@ -1613,8 +1602,9 @@ static u32 iwl_dump_ini_txf_get_size(struct iwl_fw_runtime *fwrt,
struct iwl_ini_txf_iter_data iter = { .init = true };
void *fifo_iter = fwrt->dump.fifo_iter;
u32 size = 0;
- u32 fifo_hdr = sizeof(struct iwl_fw_ini_fifo_error_dump_range) +
- le32_to_cpu(reg->fifos.num_of_registers) * sizeof(__le32) * 2;
+ u32 fifo_hdr = sizeof(struct iwl_fw_ini_error_dump_range) +
+ le32_to_cpu(reg->fifos.num_of_registers) *
+ sizeof(struct iwl_fw_ini_error_dump_register);
fwrt->dump.fifo_iter = &iter;
while (iwl_ini_txf_iter(fwrt, reg)) {
@@ -1624,7 +1614,7 @@ static u32 iwl_dump_ini_txf_get_size(struct iwl_fw_runtime *fwrt,
}
if (size)
- size += sizeof(struct iwl_fw_ini_fifo_error_dump);
+ size += sizeof(struct iwl_fw_ini_error_dump);
fwrt->dump.fifo_iter = fifo_iter;
@@ -1635,9 +1625,10 @@ static u32 iwl_dump_ini_rxf_get_size(struct iwl_fw_runtime *fwrt,
struct iwl_fw_ini_region_cfg *reg)
{
struct iwl_ini_rxf_data rx_data;
- u32 size = sizeof(struct iwl_fw_ini_fifo_error_dump) +
- sizeof(struct iwl_fw_ini_fifo_error_dump_range) +
- le32_to_cpu(reg->fifos.num_of_registers) * sizeof(__le32) * 2;
+ u32 size = sizeof(struct iwl_fw_ini_error_dump) +
+ sizeof(struct iwl_fw_ini_error_dump_range) +
+ le32_to_cpu(reg->fifos.num_of_registers) *
+ sizeof(struct iwl_fw_ini_error_dump_register);
if (reg->fifos.header_only)
return size;
@@ -1683,20 +1674,24 @@ iwl_dump_ini_mem(struct iwl_fw_runtime *fwrt,
struct iwl_dump_ini_mem_ops *ops)
{
struct iwl_fw_ini_error_dump_header *header = (void *)(*data)->data;
- u32 num_of_ranges, i, type = le32_to_cpu(reg->region_type);
+ u32 num_of_ranges, i, type = le32_to_cpu(reg->region_type), size;
void *range;
if (WARN_ON(!ops || !ops->get_num_of_ranges || !ops->get_size ||
!ops->fill_mem_hdr || !ops->fill_range))
return;
+ size = ops->get_size(fwrt, reg);
+ if (!size)
+ return;
+
IWL_DEBUG_FW(fwrt, "WRT: collecting region: id=%d, type=%d\n",
le32_to_cpu(reg->region_id), type);
num_of_ranges = ops->get_num_of_ranges(fwrt, reg);
- (*data)->type = cpu_to_le32(type | INI_DUMP_BIT);
- (*data)->len = cpu_to_le32(ops->get_size(fwrt, reg));
+ (*data)->type = cpu_to_le32(type);
+ (*data)->len = cpu_to_le32(size);
header->region_id = reg->region_id;
header->num_of_ranges = cpu_to_le32(num_of_ranges);
@@ -1709,7 +1704,7 @@ iwl_dump_ini_mem(struct iwl_fw_runtime *fwrt,
IWL_ERR(fwrt,
"WRT: failed to fill region header: id=%d, type=%d\n",
le32_to_cpu(reg->region_id), type);
- memset(*data, 0, le32_to_cpu((*data)->len));
+ memset(*data, 0, size);
return;
}
@@ -1720,7 +1715,7 @@ iwl_dump_ini_mem(struct iwl_fw_runtime *fwrt,
IWL_ERR(fwrt,
"WRT: failed to dump region: id=%d, type=%d\n",
le32_to_cpu(reg->region_id), type);
- memset(*data, 0, le32_to_cpu((*data)->len));
+ memset(*data, 0, size);
return;
}
range = range + range_size;
@@ -1728,10 +1723,71 @@ iwl_dump_ini_mem(struct iwl_fw_runtime *fwrt,
*data = iwl_fw_error_next_data(*data);
}
+static void iwl_dump_ini_info(struct iwl_fw_runtime *fwrt,
+ struct iwl_fw_ini_trigger *trigger,
+ struct iwl_fw_error_dump_data **data)
+{
+ struct iwl_fw_ini_dump_info *dump = (void *)(*data)->data;
+ u32 reg_ids_size = le32_to_cpu(trigger->num_regions) * sizeof(__le32);
+
+ (*data)->type = cpu_to_le32(IWL_INI_DUMP_INFO_TYPE);
+ (*data)->len = cpu_to_le32(sizeof(*dump) + reg_ids_size);
+
+ dump->version = cpu_to_le32(IWL_INI_DUMP_VER);
+ dump->trigger_id = trigger->trigger_id;
+ dump->is_external_cfg =
+ cpu_to_le32(fwrt->trans->dbg.external_ini_loaded);
+
+ dump->ver_type = cpu_to_le32(fwrt->dump.fw_ver.type);
+ dump->ver_subtype = cpu_to_le32(fwrt->dump.fw_ver.subtype);
+
+ dump->hw_step = cpu_to_le32(CSR_HW_REV_STEP(fwrt->trans->hw_rev));
+ dump->hw_type = cpu_to_le32(CSR_HW_REV_TYPE(fwrt->trans->hw_rev));
+
+ dump->rf_id_flavor =
+ cpu_to_le32(CSR_HW_RFID_FLAVOR(fwrt->trans->hw_rf_id));
+ dump->rf_id_dash = cpu_to_le32(CSR_HW_RFID_DASH(fwrt->trans->hw_rf_id));
+ dump->rf_id_step = cpu_to_le32(CSR_HW_RFID_STEP(fwrt->trans->hw_rf_id));
+ dump->rf_id_type = cpu_to_le32(CSR_HW_RFID_TYPE(fwrt->trans->hw_rf_id));
+
+ dump->lmac_major = cpu_to_le32(fwrt->dump.fw_ver.lmac_major);
+ dump->lmac_minor = cpu_to_le32(fwrt->dump.fw_ver.lmac_minor);
+ dump->umac_major = cpu_to_le32(fwrt->dump.fw_ver.umac_major);
+ dump->umac_minor = cpu_to_le32(fwrt->dump.fw_ver.umac_minor);
+
+ dump->build_tag_len = cpu_to_le32(sizeof(dump->build_tag));
+ memcpy(dump->build_tag, fwrt->fw->human_readable,
+ sizeof(dump->build_tag));
+
+ dump->img_name_len = cpu_to_le32(sizeof(dump->img_name));
+ memcpy(dump->img_name, fwrt->dump.img_name, sizeof(dump->img_name));
+
+ dump->internal_dbg_cfg_name_len =
+ cpu_to_le32(sizeof(dump->internal_dbg_cfg_name));
+ memcpy(dump->internal_dbg_cfg_name, fwrt->dump.internal_dbg_cfg_name,
+ sizeof(dump->internal_dbg_cfg_name));
+
+ dump->external_dbg_cfg_name_len =
+ cpu_to_le32(sizeof(dump->external_dbg_cfg_name));
+
+ /* dump info size is allocated in iwl_fw_ini_get_trigger_len.
+ * The driver allocates (sizeof(*dump) + reg_ids_size) so it is safe to
+ * use reg_ids_size
+ */
+ memcpy(dump->external_dbg_cfg_name, fwrt->dump.external_dbg_cfg_name,
+ sizeof(dump->external_dbg_cfg_name));
+
+ dump->regions_num = trigger->num_regions;
+ memcpy(dump->region_ids, trigger->data, reg_ids_size);
+
+ *data = iwl_fw_error_next_data(*data);
+}
+
static int iwl_fw_ini_get_trigger_len(struct iwl_fw_runtime *fwrt,
struct iwl_fw_ini_trigger *trigger)
{
- int i, size = 0, hdr_len = sizeof(struct iwl_fw_error_dump_data);
+ int i, ret_size = 0, hdr_len = sizeof(struct iwl_fw_error_dump_data);
+ u32 size;
if (!trigger || !trigger->num_regions)
return 0;
@@ -1763,32 +1819,40 @@ static int iwl_fw_ini_get_trigger_len(struct iwl_fw_runtime *fwrt,
case IWL_FW_INI_REGION_CSR:
case IWL_FW_INI_REGION_LMAC_ERROR_TABLE:
case IWL_FW_INI_REGION_UMAC_ERROR_TABLE:
- size += hdr_len + iwl_dump_ini_mem_get_size(fwrt, reg);
+ size = iwl_dump_ini_mem_get_size(fwrt, reg);
+ if (size)
+ ret_size += hdr_len + size;
break;
case IWL_FW_INI_REGION_TXF:
- size += hdr_len + iwl_dump_ini_txf_get_size(fwrt, reg);
+ size = iwl_dump_ini_txf_get_size(fwrt, reg);
+ if (size)
+ ret_size += hdr_len + size;
break;
case IWL_FW_INI_REGION_RXF:
- size += hdr_len + iwl_dump_ini_rxf_get_size(fwrt, reg);
+ size = iwl_dump_ini_rxf_get_size(fwrt, reg);
+ if (size)
+ ret_size += hdr_len + size;
break;
case IWL_FW_INI_REGION_PAGING:
- size += hdr_len;
- if (iwl_fw_dbg_is_paging_enabled(fwrt)) {
- size += iwl_dump_ini_paging_get_size(fwrt, reg);
- } else {
- size += iwl_dump_ini_paging_gen2_get_size(fwrt,
- reg);
- }
+ if (iwl_fw_dbg_is_paging_enabled(fwrt))
+ size = iwl_dump_ini_paging_get_size(fwrt, reg);
+ else
+ size = iwl_dump_ini_paging_gen2_get_size(fwrt,
+ reg);
+ if (size)
+ ret_size += hdr_len + size;
break;
case IWL_FW_INI_REGION_DRAM_BUFFER:
- if (!fwrt->trans->num_blocks)
+ if (!fwrt->trans->dbg.num_blocks)
break;
- size += hdr_len +
- iwl_dump_ini_mon_dram_get_size(fwrt, reg);
+ size = iwl_dump_ini_mon_dram_get_size(fwrt, reg);
+ if (size)
+ ret_size += hdr_len + size;
break;
case IWL_FW_INI_REGION_INTERNAL_BUFFER:
- size += hdr_len +
- iwl_dump_ini_mon_smem_get_size(fwrt, reg);
+ size = iwl_dump_ini_mon_smem_get_size(fwrt, reg);
+ if (size)
+ ret_size += hdr_len + size;
break;
case IWL_FW_INI_REGION_DRAM_IMR:
/* Undefined yet */
@@ -1796,7 +1860,13 @@ static int iwl_fw_ini_get_trigger_len(struct iwl_fw_runtime *fwrt,
break;
}
}
- return size;
+
+ /* add dump info size */
+ if (ret_size)
+ ret_size += hdr_len + sizeof(struct iwl_fw_ini_dump_info) +
+ (le32_to_cpu(trigger->num_regions) * sizeof(__le32));
+
+ return ret_size;
}
static void iwl_fw_ini_dump_trigger(struct iwl_fw_runtime *fwrt,
@@ -1805,6 +1875,8 @@ static void iwl_fw_ini_dump_trigger(struct iwl_fw_runtime *fwrt,
{
int i, num = le32_to_cpu(trigger->num_regions);
+ iwl_dump_ini_info(fwrt, trigger, data);
+
for (i = 0; i < num; i++) {
u32 reg_id = le32_to_cpu(trigger->data[i]);
struct iwl_fw_ini_region_cfg *reg;
@@ -1879,7 +1951,7 @@ static void iwl_fw_ini_dump_trigger(struct iwl_fw_runtime *fwrt,
fwrt->dump.fifo_iter = &iter;
ops.get_num_of_ranges = iwl_dump_ini_txf_ranges;
ops.get_size = iwl_dump_ini_txf_get_size;
- ops.fill_mem_hdr = iwl_dump_ini_fifo_fill_header;
+ ops.fill_mem_hdr = iwl_dump_ini_mem_fill_header;
ops.fill_range = iwl_dump_ini_txf_iter;
iwl_dump_ini_mem(fwrt, data, reg, &ops);
fwrt->dump.fifo_iter = fifo_iter;
@@ -1888,7 +1960,7 @@ static void iwl_fw_ini_dump_trigger(struct iwl_fw_runtime *fwrt,
case IWL_FW_INI_REGION_RXF:
ops.get_num_of_ranges = iwl_dump_ini_rxf_ranges;
ops.get_size = iwl_dump_ini_rxf_get_size;
- ops.fill_mem_hdr = iwl_dump_ini_fifo_fill_header;
+ ops.fill_mem_hdr = iwl_dump_ini_mem_fill_header;
ops.fill_range = iwl_dump_ini_rxf_iter;
iwl_dump_ini_mem(fwrt, data, reg, &ops);
break;
@@ -1908,18 +1980,18 @@ static void iwl_fw_ini_dump_trigger(struct iwl_fw_runtime *fwrt,
}
static struct iwl_fw_error_dump_file *
-iwl_fw_error_ini_dump_file(struct iwl_fw_runtime *fwrt)
+iwl_fw_error_ini_dump_file(struct iwl_fw_runtime *fwrt,
+ enum iwl_fw_ini_trigger_id trig_id)
{
int size;
struct iwl_fw_error_dump_data *dump_data;
struct iwl_fw_error_dump_file *dump_file;
struct iwl_fw_ini_trigger *trigger;
- enum iwl_fw_ini_trigger_id id = fwrt->dump.ini_trig_id;
- if (!iwl_fw_ini_trigger_on(fwrt, id))
+ if (!iwl_fw_ini_trigger_on(fwrt, trig_id))
return NULL;
- trigger = fwrt->dump.active_trigs[id].trig;
+ trigger = fwrt->dump.active_trigs[trig_id].trig;
size = iwl_fw_ini_get_trigger_len(fwrt, trigger);
if (!size)
@@ -1931,7 +2003,7 @@ iwl_fw_error_ini_dump_file(struct iwl_fw_runtime *fwrt)
if (!dump_file)
return NULL;
- dump_file->barker = cpu_to_le32(IWL_FW_ERROR_DUMP_BARKER);
+ dump_file->barker = cpu_to_le32(IWL_FW_INI_ERROR_DUMP_BARKER);
dump_data = (void *)dump_file->data;
dump_file->file_len = cpu_to_le32(size);
@@ -1952,7 +2024,7 @@ static void iwl_fw_error_dump(struct iwl_fw_runtime *fwrt)
if (!dump_file)
goto out;
- if (!fwrt->trans->ini_valid && fwrt->dump.monitor_only)
+ if (fwrt->dump.monitor_only)
dump_mask &= IWL_FW_ERROR_DUMP_FW_MONITOR;
fw_error_dump.trans_ptr = iwl_trans_dump_data(fwrt->trans, dump_mask);
@@ -1984,16 +2056,16 @@ static void iwl_fw_error_dump(struct iwl_fw_runtime *fwrt)
out:
iwl_fw_free_dump_desc(fwrt);
- clear_bit(IWL_FWRT_STATUS_DUMPING, &fwrt->status);
}
-static void iwl_fw_error_ini_dump(struct iwl_fw_runtime *fwrt)
+static void iwl_fw_error_ini_dump(struct iwl_fw_runtime *fwrt, u8 wk_idx)
{
+ enum iwl_fw_ini_trigger_id trig_id = fwrt->dump.wks[wk_idx].ini_trig_id;
struct iwl_fw_error_dump_file *dump_file;
struct scatterlist *sg_dump_data;
u32 file_len;
- dump_file = iwl_fw_error_ini_dump_file(fwrt);
+ dump_file = iwl_fw_error_ini_dump_file(fwrt, trig_id);
if (!dump_file)
goto out;
@@ -2008,8 +2080,7 @@ static void iwl_fw_error_ini_dump(struct iwl_fw_runtime *fwrt)
}
vfree(dump_file);
out:
- fwrt->dump.ini_trig_id = IWL_FW_TRIGGER_ID_INVALID;
- clear_bit(IWL_FWRT_STATUS_DUMPING, &fwrt->status);
+ fwrt->dump.wks[wk_idx].ini_trig_id = IWL_FW_TRIGGER_ID_INVALID;
}
const struct iwl_fw_dump_desc iwl_dump_desc_assert = {
@@ -2027,7 +2098,7 @@ int iwl_fw_dbg_collect_desc(struct iwl_fw_runtime *fwrt,
u32 trig_type = le32_to_cpu(desc->trig_desc.type);
int ret;
- if (fwrt->trans->ini_valid) {
+ if (fwrt->trans->dbg.ini_valid) {
ret = iwl_fw_dbg_ini_collect(fwrt, trig_type);
if (!ret)
iwl_fw_free_dump_desc(fwrt);
@@ -2035,7 +2106,10 @@ int iwl_fw_dbg_collect_desc(struct iwl_fw_runtime *fwrt,
return ret;
}
- if (test_and_set_bit(IWL_FWRT_STATUS_DUMPING, &fwrt->status))
+ /* use wks[0] since dump flow prior to ini does not need to support
+ * consecutive triggers collection
+ */
+ if (test_and_set_bit(fwrt->dump.wks[0].idx, &fwrt->dump.active_wks))
return -EBUSY;
if (WARN_ON(fwrt->dump.desc))
@@ -2047,7 +2121,7 @@ int iwl_fw_dbg_collect_desc(struct iwl_fw_runtime *fwrt,
fwrt->dump.desc = desc;
fwrt->dump.monitor_only = monitor_only;
- schedule_delayed_work(&fwrt->dump.wk, usecs_to_jiffies(delay));
+ schedule_delayed_work(&fwrt->dump.wks[0].wk, usecs_to_jiffies(delay));
return 0;
}
@@ -2057,9 +2131,12 @@ int iwl_fw_dbg_error_collect(struct iwl_fw_runtime *fwrt,
enum iwl_fw_dbg_trigger trig_type)
{
int ret;
- struct iwl_fw_dump_desc *iwl_dump_error_desc =
- kmalloc(sizeof(*iwl_dump_error_desc), GFP_KERNEL);
+ struct iwl_fw_dump_desc *iwl_dump_error_desc;
+
+ if (!test_bit(STATUS_DEVICE_ENABLED, &fwrt->trans->status))
+ return -EIO;
+ iwl_dump_error_desc = kmalloc(sizeof(*iwl_dump_error_desc), GFP_KERNEL);
if (!iwl_dump_error_desc)
return -ENOMEM;
@@ -2123,13 +2200,11 @@ int _iwl_fw_dbg_ini_collect(struct iwl_fw_runtime *fwrt,
{
struct iwl_fw_ini_active_triggers *active;
u32 occur, delay;
+ unsigned long idx;
if (WARN_ON(!iwl_fw_ini_trigger_on(fwrt, id)))
return -EINVAL;
- if (test_and_set_bit(IWL_FWRT_STATUS_DUMPING, &fwrt->status))
- return -EBUSY;
-
if (!iwl_fw_ini_trigger_on(fwrt, id)) {
IWL_WARN(fwrt, "WRT: Trigger %d is not active, aborting dump\n",
id);
@@ -2150,14 +2225,24 @@ int _iwl_fw_dbg_ini_collect(struct iwl_fw_runtime *fwrt,
return 0;
}
- if (test_and_set_bit(IWL_FWRT_STATUS_DUMPING, &fwrt->status))
+ /* Check there is an available worker.
+ * ffz return value is undefined if no zero exists,
+ * so check against ~0UL first.
+ */
+ if (fwrt->dump.active_wks == ~0UL)
+ return -EBUSY;
+
+ idx = ffz(fwrt->dump.active_wks);
+
+ if (idx >= IWL_FW_RUNTIME_DUMP_WK_NUM ||
+ test_and_set_bit(fwrt->dump.wks[idx].idx, &fwrt->dump.active_wks))
return -EBUSY;
- fwrt->dump.ini_trig_id = id;
+ fwrt->dump.wks[idx].ini_trig_id = id;
IWL_WARN(fwrt, "WRT: collecting data: ini trigger %d fired.\n", id);
- schedule_delayed_work(&fwrt->dump.wk, usecs_to_jiffies(delay));
+ schedule_delayed_work(&fwrt->dump.wks[idx].wk, usecs_to_jiffies(delay));
return 0;
}
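The hunk above turns the single dumping flag into a bitmap of dump workers: ffz() picks the lowest clear bit and test_and_set_bit() claims it atomically, returning -EBUSY when every slot is taken (ffz() is undefined when no zero bit exists, hence the ~0UL check first). A compact sketch of that slot-claiming pattern — claim_dump_slot is a hypothetical helper name, not part of the patch:

#include <linux/bitops.h>
#include <linux/errno.h>

/* Hypothetical helper mirroring the logic above: claim one free slot
 * out of num_slots tracked in *active, or return -EBUSY.
 */
static int claim_dump_slot(unsigned long *active, unsigned int num_slots)
{
	unsigned long idx;

	if (*active == ~0UL)		/* every bit set: nothing to claim */
		return -EBUSY;

	idx = ffz(*active);		/* lowest clear bit */
	if (idx >= num_slots || test_and_set_bit(idx, active))
		return -EBUSY;

	return idx;
}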
@@ -2191,9 +2276,6 @@ int iwl_fw_dbg_collect_trig(struct iwl_fw_runtime *fwrt,
int ret, len = 0;
char buf[64];
- if (fwrt->trans->ini_valid)
- return 0;
-
if (fmt) {
va_list ap;
@@ -2270,56 +2352,57 @@ IWL_EXPORT_SYMBOL(iwl_fw_start_dbg_conf);
/* this function assumes dump_start was called beforehand and dump_end will be
* called afterwards
*/
-void iwl_fw_dbg_collect_sync(struct iwl_fw_runtime *fwrt)
+static void iwl_fw_dbg_collect_sync(struct iwl_fw_runtime *fwrt, u8 wk_idx)
{
struct iwl_fw_dbg_params params = {0};
- if (!test_bit(IWL_FWRT_STATUS_DUMPING, &fwrt->status))
+ if (!test_bit(wk_idx, &fwrt->dump.active_wks))
return;
if (fwrt->ops && fwrt->ops->fw_running &&
!fwrt->ops->fw_running(fwrt->ops_ctx)) {
IWL_ERR(fwrt, "Firmware not running - cannot dump error\n");
iwl_fw_free_dump_desc(fwrt);
- clear_bit(IWL_FWRT_STATUS_DUMPING, &fwrt->status);
- return;
+ goto out;
}
/* there's no point in fw dump if the bus is dead */
if (test_bit(STATUS_TRANS_DEAD, &fwrt->trans->status)) {
IWL_ERR(fwrt, "Skip fw error dump since bus is dead\n");
- return;
+ goto out;
}
- iwl_fw_dbg_stop_recording(fwrt, &params);
+ iwl_fw_dbg_stop_recording(fwrt->trans, &params);
IWL_DEBUG_FW_INFO(fwrt, "WRT: data collection start\n");
- if (fwrt->trans->ini_valid)
- iwl_fw_error_ini_dump(fwrt);
+ if (fwrt->trans->dbg.ini_valid)
+ iwl_fw_error_ini_dump(fwrt, wk_idx);
else
iwl_fw_error_dump(fwrt);
IWL_DEBUG_FW_INFO(fwrt, "WRT: data collection done\n");
- /* start recording again if the firmware is not crashed */
- if (!test_bit(STATUS_FW_ERROR, &fwrt->trans->status) &&
- fwrt->fw->dbg.dest_tlv) {
- /* wait before we collect the data till the DBGC stop */
- udelay(500);
- iwl_fw_dbg_restart_recording(fwrt, &params);
- }
+ iwl_fw_dbg_restart_recording(fwrt, &params);
+
+out:
+ clear_bit(wk_idx, &fwrt->dump.active_wks);
}
-IWL_EXPORT_SYMBOL(iwl_fw_dbg_collect_sync);
void iwl_fw_error_dump_wk(struct work_struct *work)
{
- struct iwl_fw_runtime *fwrt =
- container_of(work, struct iwl_fw_runtime, dump.wk.work);
+ struct iwl_fw_runtime *fwrt;
+ typeof(fwrt->dump.wks[0]) *wks;
+
+ wks = container_of(work, typeof(fwrt->dump.wks[0]), wk.work);
+ fwrt = container_of(wks, struct iwl_fw_runtime, dump.wks[wks->idx]);
+ /* assumes the op mode mutex is locked in dump_start since
+ * iwl_fw_dbg_collect_sync can't run in parallel
+ */
if (fwrt->ops && fwrt->ops->dump_start &&
fwrt->ops->dump_start(fwrt->ops_ctx))
return;
- iwl_fw_dbg_collect_sync(fwrt);
+ iwl_fw_dbg_collect_sync(fwrt, wks->idx);
if (fwrt->ops && fwrt->ops->dump_end)
fwrt->ops->dump_end(fwrt->ops_ctx);
@@ -2349,6 +2432,38 @@ void iwl_fw_dbg_read_d3_debug_data(struct iwl_fw_runtime *fwrt)
}
IWL_EXPORT_SYMBOL(iwl_fw_dbg_read_d3_debug_data);
+static void iwl_fw_dbg_info_apply(struct iwl_fw_runtime *fwrt,
+ struct iwl_fw_ini_debug_info_tlv *dbg_info,
+ bool ext, enum iwl_fw_ini_apply_point pnt)
+{
+ u32 img_name_len = le32_to_cpu(dbg_info->img_name_len);
+ u32 dbg_cfg_name_len = le32_to_cpu(dbg_info->dbg_cfg_name_len);
+ const char err_str[] =
+ "WRT: ext=%d. Invalid %s name length %d, expected %d\n";
+
+ if (img_name_len != IWL_FW_INI_MAX_IMG_NAME_LEN) {
+ IWL_WARN(fwrt, err_str, ext, "image", img_name_len,
+ IWL_FW_INI_MAX_IMG_NAME_LEN);
+ return;
+ }
+
+ if (dbg_cfg_name_len != IWL_FW_INI_MAX_DBG_CFG_NAME_LEN) {
+ IWL_WARN(fwrt, err_str, ext, "debug cfg", dbg_cfg_name_len,
+ IWL_FW_INI_MAX_DBG_CFG_NAME_LEN);
+ return;
+ }
+
+ if (ext) {
+ memcpy(fwrt->dump.external_dbg_cfg_name, dbg_info->dbg_cfg_name,
+ sizeof(fwrt->dump.external_dbg_cfg_name));
+ } else {
+ memcpy(fwrt->dump.img_name, dbg_info->img_name,
+ sizeof(fwrt->dump.img_name));
+ memcpy(fwrt->dump.internal_dbg_cfg_name, dbg_info->dbg_cfg_name,
+ sizeof(fwrt->dump.internal_dbg_cfg_name));
+ }
+}
+
static void
iwl_fw_dbg_buffer_allocation(struct iwl_fw_runtime *fwrt, u32 size)
{
@@ -2356,7 +2471,8 @@ iwl_fw_dbg_buffer_allocation(struct iwl_fw_runtime *fwrt, u32 size)
void *virtual_addr = NULL;
dma_addr_t phys_addr;
- if (WARN_ON_ONCE(trans->num_blocks == ARRAY_SIZE(trans->fw_mon)))
+ if (WARN_ON_ONCE(trans->dbg.num_blocks ==
+ ARRAY_SIZE(trans->dbg.fw_mon)))
return;
virtual_addr =
@@ -2370,12 +2486,12 @@ iwl_fw_dbg_buffer_allocation(struct iwl_fw_runtime *fwrt, u32 size)
IWL_DEBUG_FW(trans,
"Allocated DRAM buffer[%d], size=0x%x\n",
- trans->num_blocks, size);
+ trans->dbg.num_blocks, size);
- trans->fw_mon[trans->num_blocks].block = virtual_addr;
- trans->fw_mon[trans->num_blocks].physical = phys_addr;
- trans->fw_mon[trans->num_blocks].size = size;
- trans->num_blocks++;
+ trans->dbg.fw_mon[trans->dbg.num_blocks].block = virtual_addr;
+ trans->dbg.fw_mon[trans->dbg.num_blocks].physical = phys_addr;
+ trans->dbg.fw_mon[trans->dbg.num_blocks].size = size;
+ trans->dbg.num_blocks++;
}
static void iwl_fw_dbg_buffer_apply(struct iwl_fw_runtime *fwrt,
@@ -2393,20 +2509,26 @@ static void iwl_fw_dbg_buffer_apply(struct iwl_fw_runtime *fwrt,
.data[0] = &ldbg_cmd,
.len[0] = sizeof(ldbg_cmd),
};
- int block_idx = trans->num_blocks;
+ int block_idx = trans->dbg.num_blocks;
u32 buf_location = le32_to_cpu(alloc->tlv.buffer_location);
+ if (fwrt->trans->dbg.ini_dest == IWL_FW_INI_LOCATION_INVALID)
+ fwrt->trans->dbg.ini_dest = buf_location;
+
+ if (buf_location != fwrt->trans->dbg.ini_dest) {
+ WARN(fwrt,
+ "WRT: attempt to override buffer location on apply point %d\n",
+ pnt);
+
+ return;
+ }
+
if (buf_location == IWL_FW_INI_LOCATION_SRAM_PATH) {
- if (!WARN(pnt != IWL_FW_INI_APPLY_EARLY,
- "WRT: Invalid apply point %d for SMEM buffer allocation, aborting\n",
- pnt)) {
- IWL_DEBUG_FW(trans,
- "WRT: applying SMEM buffer destination\n");
-
- /* set sram monitor by enabling bit 7 */
- iwl_set_bit(fwrt->trans, CSR_HW_IF_CONFIG_REG,
- CSR_HW_IF_CONFIG_REG_BIT_MONITOR_SRAM);
- }
+ IWL_DEBUG_FW(trans, "WRT: applying SMEM buffer destination\n");
+ /* set sram monitor by enabling bit 7 */
+ iwl_set_bit(fwrt->trans, CSR_HW_IF_CONFIG_REG,
+ CSR_HW_IF_CONFIG_REG_BIT_MONITOR_SRAM);
+
return;
}
@@ -2416,13 +2538,13 @@ static void iwl_fw_dbg_buffer_apply(struct iwl_fw_runtime *fwrt,
if (!alloc->is_alloc) {
iwl_fw_dbg_buffer_allocation(fwrt,
le32_to_cpu(alloc->tlv.size));
- if (block_idx == trans->num_blocks)
+ if (block_idx == trans->dbg.num_blocks)
return;
alloc->is_alloc = 1;
}
/* First block is assigned via registers / context info */
- if (trans->num_blocks == 1)
+ if (trans->dbg.num_blocks == 1)
return;
IWL_DEBUG_FW(trans,
@@ -2430,7 +2552,7 @@ static void iwl_fw_dbg_buffer_apply(struct iwl_fw_runtime *fwrt,
cmd->num_frags = cpu_to_le32(1);
cmd->fragments[0].address =
- cpu_to_le64(trans->fw_mon[block_idx].physical);
+ cpu_to_le64(trans->dbg.fw_mon[block_idx].physical);
cmd->fragments[0].size = alloc->tlv.size;
cmd->allocation_id = alloc->tlv.allocation_id;
cmd->buffer_location = alloc->tlv.buffer_location;
@@ -2653,20 +2775,30 @@ static void _iwl_fw_dbg_apply_point(struct iwl_fw_runtime *fwrt,
struct iwl_ucode_tlv *tlv = iter;
void *ini_tlv = (void *)tlv->data;
u32 type = le32_to_cpu(tlv->type);
+ const char invalid_ap_str[] =
+ "WRT: ext=%d. Invalid apply point %d for %s\n";
switch (type) {
+ case IWL_UCODE_TLV_TYPE_DEBUG_INFO:
+ iwl_fw_dbg_info_apply(fwrt, ini_tlv, ext, pnt);
+ break;
case IWL_UCODE_TLV_TYPE_BUFFER_ALLOCATION: {
struct iwl_fw_ini_allocation_data *buf_alloc = ini_tlv;
+ if (pnt != IWL_FW_INI_APPLY_EARLY) {
+ IWL_ERR(fwrt, invalid_ap_str, ext, pnt,
+ "buffer allocation");
+ goto next;
+ }
+
iwl_fw_dbg_buffer_apply(fwrt, ini_tlv, pnt);
iter += sizeof(buf_alloc->is_alloc);
break;
}
case IWL_UCODE_TLV_TYPE_HCMD:
if (pnt < IWL_FW_INI_APPLY_AFTER_ALIVE) {
- IWL_ERR(fwrt,
- "WRT: ext=%d. Invalid apply point %d for host command\n",
- ext, pnt);
+ IWL_ERR(fwrt, invalid_ap_str, ext, pnt,
+ "host command");
goto next;
}
iwl_fw_dbg_send_hcmd(fwrt, tlv, ext);
@@ -2690,34 +2822,51 @@ next:
}
}
+static void iwl_fw_dbg_ini_reset_cfg(struct iwl_fw_runtime *fwrt)
+{
+ int i;
+
+ for (i = 0; i < IWL_FW_INI_MAX_REGION_ID; i++)
+ fwrt->dump.active_regs[i] = NULL;
+
+ /* disable the triggers, used in recovery flow */
+ for (i = 0; i < IWL_FW_TRIGGER_ID_NUM; i++)
+ fwrt->dump.active_trigs[i].active = false;
+
+ memset(fwrt->dump.img_name, 0,
+ sizeof(fwrt->dump.img_name));
+ memset(fwrt->dump.internal_dbg_cfg_name, 0,
+ sizeof(fwrt->dump.internal_dbg_cfg_name));
+ memset(fwrt->dump.external_dbg_cfg_name, 0,
+ sizeof(fwrt->dump.external_dbg_cfg_name));
+
+ fwrt->trans->dbg.ini_dest = IWL_FW_INI_LOCATION_INVALID;
+}
+
void iwl_fw_dbg_apply_point(struct iwl_fw_runtime *fwrt,
enum iwl_fw_ini_apply_point apply_point)
{
- void *data = &fwrt->trans->apply_points[apply_point];
- int i;
+ void *data = &fwrt->trans->dbg.apply_points[apply_point];
IWL_DEBUG_FW(fwrt, "WRT: enabling apply point %d\n", apply_point);
- if (apply_point == IWL_FW_INI_APPLY_EARLY) {
- for (i = 0; i < IWL_FW_INI_MAX_REGION_ID; i++)
- fwrt->dump.active_regs[i] = NULL;
-
- /* disable the triggers, used in recovery flow */
- for (i = 0; i < IWL_FW_TRIGGER_ID_NUM; i++)
- fwrt->dump.active_trigs[i].active = false;
- }
+ if (apply_point == IWL_FW_INI_APPLY_EARLY)
+ iwl_fw_dbg_ini_reset_cfg(fwrt);
_iwl_fw_dbg_apply_point(fwrt, data, apply_point, false);
- data = &fwrt->trans->apply_points_ext[apply_point];
+ data = &fwrt->trans->dbg.apply_points_ext[apply_point];
_iwl_fw_dbg_apply_point(fwrt, data, apply_point, true);
}
IWL_EXPORT_SYMBOL(iwl_fw_dbg_apply_point);
void iwl_fwrt_stop_device(struct iwl_fw_runtime *fwrt)
{
+ int i;
+
del_timer(&fwrt->dump.periodic_trig);
- iwl_fw_dbg_collect_sync(fwrt);
+ for (i = 0; i < IWL_FW_RUNTIME_DUMP_WK_NUM; i++)
+ iwl_fw_dbg_collect_sync(fwrt, i);
iwl_trans_stop_device(fwrt->trans);
}
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/dbg.h b/drivers/net/wireless/intel/iwlwifi/fw/dbg.h
index fd0ad220e961..a8459ac71b2c 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/dbg.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/dbg.h
@@ -73,6 +73,7 @@
#include "error-dump.h"
#include "api/commands.h"
#include "api/dbg-tlv.h"
+#include "api/alive.h"
/**
* struct iwl_fw_dump_desc - describes the dump
@@ -201,7 +202,7 @@ _iwl_fw_dbg_trigger_on(struct iwl_fw_runtime *fwrt,
{
struct iwl_fw_dbg_trigger_tlv *trig;
- if (fwrt->trans->ini_valid)
+ if (fwrt->trans->dbg.ini_valid)
return NULL;
if (!iwl_fw_dbg_trigger_enabled(fwrt->fw, id))
@@ -228,7 +229,7 @@ iwl_fw_ini_trigger_on(struct iwl_fw_runtime *fwrt,
struct iwl_fw_ini_trigger *trig;
u32 usec;
- if (!fwrt->trans->ini_valid || id == IWL_FW_TRIGGER_ID_INVALID ||
+ if (!fwrt->trans->dbg.ini_valid || id == IWL_FW_TRIGGER_ID_INVALID ||
id >= IWL_FW_TRIGGER_ID_NUM || !fwrt->dump.active_trigs[id].active)
return false;
@@ -262,23 +263,6 @@ _iwl_fw_dbg_trigger_simple_stop(struct iwl_fw_runtime *fwrt,
iwl_fw_dbg_get_trigger((fwrt)->fw,\
(trig)))
-static inline int
-iwl_fw_dbg_start_stop_hcmd(struct iwl_fw_runtime *fwrt, bool start)
-{
- struct iwl_ldbg_config_cmd cmd = {
- .type = start ? cpu_to_le32(START_DEBUG_RECORDING) :
- cpu_to_le32(STOP_DEBUG_RECORDING),
- };
- struct iwl_host_cmd hcmd = {
- .id = LDBG_CONFIG_CMD,
- .flags = CMD_ASYNC,
- .data[0] = &cmd,
- .len[0] = sizeof(cmd),
- };
-
- return iwl_trans_send_cmd(fwrt->trans, &hcmd);
-}
-
static inline void
_iwl_fw_dbg_stop_recording(struct iwl_trans *trans,
struct iwl_fw_dbg_params *params)
@@ -294,21 +278,35 @@ _iwl_fw_dbg_stop_recording(struct iwl_trans *trans,
}
iwl_write_umac_prph(trans, DBGC_IN_SAMPLE, 0);
- udelay(100);
+ /* wait for the DBGC to finish writing the internal buffer to DRAM to
+ * avoid halting the HW while writing
+ */
+ usleep_range(700, 1000);
iwl_write_umac_prph(trans, DBGC_OUT_CTRL, 0);
#ifdef CONFIG_IWLWIFI_DEBUGFS
- trans->dbg_rec_on = false;
+ trans->dbg.rec_on = false;
#endif
}
static inline void
-iwl_fw_dbg_stop_recording(struct iwl_fw_runtime *fwrt,
+iwl_fw_dbg_stop_recording(struct iwl_trans *trans,
struct iwl_fw_dbg_params *params)
{
- if (fwrt->trans->cfg->device_family < IWL_DEVICE_FAMILY_22560)
- _iwl_fw_dbg_stop_recording(fwrt->trans, params);
- else
- iwl_fw_dbg_start_stop_hcmd(fwrt, false);
+ /* if the FW crashed or no debug monitor cfg was given, there is
+ * no point in stopping
+ */
+ if (test_bit(STATUS_FW_ERROR, &trans->status) ||
+ (!trans->dbg.dest_tlv &&
+ trans->dbg.ini_dest == IWL_FW_INI_LOCATION_INVALID))
+ return;
+
+ if (trans->cfg->device_family >= IWL_DEVICE_FAMILY_22560) {
+ IWL_ERR(trans,
+ "WRT: unsupported device family %d for debug stop recording\n",
+ trans->cfg->device_family);
+ return;
+ }
+ _iwl_fw_dbg_stop_recording(trans, params);
}
static inline void
@@ -324,7 +322,6 @@ _iwl_fw_dbg_restart_recording(struct iwl_trans *trans,
iwl_set_bits_prph(trans, MON_BUFF_SAMPLE_CTL, 0x1);
} else {
iwl_write_umac_prph(trans, DBGC_IN_SAMPLE, params->in_sample);
- udelay(100);
iwl_write_umac_prph(trans, DBGC_OUT_CTRL, params->out_ctrl);
}
}
@@ -332,8 +329,10 @@ _iwl_fw_dbg_restart_recording(struct iwl_trans *trans,
#ifdef CONFIG_IWLWIFI_DEBUGFS
static inline void iwl_fw_set_dbg_rec_on(struct iwl_fw_runtime *fwrt)
{
- if (fwrt->fw->dbg.dest_tlv && fwrt->cur_fw_img == IWL_UCODE_REGULAR)
- fwrt->trans->dbg_rec_on = true;
+ if (fwrt->cur_fw_img == IWL_UCODE_REGULAR &&
+ (fwrt->fw->dbg.dest_tlv ||
+ fwrt->trans->dbg.ini_dest != IWL_FW_INI_LOCATION_INVALID))
+ fwrt->trans->dbg.rec_on = true;
}
#endif
@@ -341,10 +340,21 @@ static inline void
iwl_fw_dbg_restart_recording(struct iwl_fw_runtime *fwrt,
struct iwl_fw_dbg_params *params)
{
- if (fwrt->trans->cfg->device_family < IWL_DEVICE_FAMILY_22560)
- _iwl_fw_dbg_restart_recording(fwrt->trans, params);
- else
- iwl_fw_dbg_start_stop_hcmd(fwrt, true);
+ /* if the FW crashed or no debug monitor cfg was given, there is
+ * no point in restarting
+ */
+ if (test_bit(STATUS_FW_ERROR, &fwrt->trans->status) ||
+ (!fwrt->trans->dbg.dest_tlv &&
+ fwrt->trans->dbg.ini_dest == IWL_FW_INI_LOCATION_INVALID))
+ return;
+
+ if (fwrt->trans->cfg->device_family >= IWL_DEVICE_FAMILY_22560) {
+ IWL_ERR(fwrt,
+ "WRT: unsupported device family %d for debug restart recording\n",
+ fwrt->trans->cfg->device_family);
+ return;
+ }
+ _iwl_fw_dbg_restart_recording(fwrt->trans, params);
#ifdef CONFIG_IWLWIFI_DEBUGFS
iwl_fw_set_dbg_rec_on(fwrt);
#endif
@@ -359,7 +369,7 @@ void iwl_fw_error_dump_wk(struct work_struct *work);
static inline bool iwl_fw_dbg_type_on(struct iwl_fw_runtime *fwrt, u32 type)
{
- return (fwrt->fw->dbg.dump_mask & BIT(type) || fwrt->trans->ini_valid);
+ return (fwrt->fw->dbg.dump_mask & BIT(type));
}
static inline bool iwl_fw_dbg_is_d3_debug_enabled(struct iwl_fw_runtime *fwrt)
@@ -383,16 +393,26 @@ static inline bool iwl_fw_dbg_is_paging_enabled(struct iwl_fw_runtime *fwrt)
void iwl_fw_dbg_read_d3_debug_data(struct iwl_fw_runtime *fwrt);
-static inline void iwl_fw_flush_dump(struct iwl_fw_runtime *fwrt)
+static inline void iwl_fw_flush_dumps(struct iwl_fw_runtime *fwrt)
{
+ int i;
+
del_timer(&fwrt->dump.periodic_trig);
- flush_delayed_work(&fwrt->dump.wk);
+ for (i = 0; i < IWL_FW_RUNTIME_DUMP_WK_NUM; i++) {
+ flush_delayed_work(&fwrt->dump.wks[i].wk);
+ fwrt->dump.wks[i].ini_trig_id = IWL_FW_TRIGGER_ID_INVALID;
+ }
}
-static inline void iwl_fw_cancel_dump(struct iwl_fw_runtime *fwrt)
+static inline void iwl_fw_cancel_dumps(struct iwl_fw_runtime *fwrt)
{
+ int i;
+
del_timer(&fwrt->dump.periodic_trig);
- cancel_delayed_work_sync(&fwrt->dump.wk);
+ for (i = 0; i < IWL_FW_RUNTIME_DUMP_WK_NUM; i++) {
+ cancel_delayed_work_sync(&fwrt->dump.wks[i].wk);
+ fwrt->dump.wks[i].ini_trig_id = IWL_FW_TRIGGER_ID_INVALID;
+ }
}
#ifdef CONFIG_IWLWIFI_DEBUGFS
@@ -431,7 +451,6 @@ static inline void iwl_fw_resume_timestamp(struct iwl_fw_runtime *fwrt) {}
#endif /* CONFIG_IWLWIFI_DEBUGFS */
-void iwl_fw_dbg_collect_sync(struct iwl_fw_runtime *fwrt);
void iwl_fw_dbg_apply_point(struct iwl_fw_runtime *fwrt,
enum iwl_fw_ini_apply_point apply_point);
@@ -440,31 +459,28 @@ void iwl_fwrt_stop_device(struct iwl_fw_runtime *fwrt);
static inline void iwl_fw_lmac1_set_alive_err_table(struct iwl_trans *trans,
u32 lmac_error_event_table)
{
- if (!(trans->error_event_table_tlv_status &
+ if (!(trans->dbg.error_event_table_tlv_status &
IWL_ERROR_EVENT_TABLE_LMAC1) ||
- WARN_ON(trans->lmac_error_event_table[0] !=
+ WARN_ON(trans->dbg.lmac_error_event_table[0] !=
lmac_error_event_table))
- trans->lmac_error_event_table[0] = lmac_error_event_table;
+ trans->dbg.lmac_error_event_table[0] = lmac_error_event_table;
}
static inline void iwl_fw_umac_set_alive_err_table(struct iwl_trans *trans,
u32 umac_error_event_table)
{
- if (!(trans->error_event_table_tlv_status &
+ if (!(trans->dbg.error_event_table_tlv_status &
IWL_ERROR_EVENT_TABLE_UMAC) ||
- WARN_ON(trans->umac_error_event_table !=
+ WARN_ON(trans->dbg.umac_error_event_table !=
umac_error_event_table))
- trans->umac_error_event_table = umac_error_event_table;
+ trans->dbg.umac_error_event_table = umac_error_event_table;
}
-/* This bit is used to differentiate the legacy dump from the ini dump */
-#define INI_DUMP_BIT BIT(31)
-
static inline void iwl_fw_error_collect(struct iwl_fw_runtime *fwrt)
{
- if (fwrt->trans->ini_valid && fwrt->trans->hw_error) {
+ if (fwrt->trans->dbg.ini_valid && fwrt->trans->dbg.hw_error) {
_iwl_fw_dbg_ini_collect(fwrt, IWL_FW_TRIGGER_ID_FW_HW_ERROR);
- fwrt->trans->hw_error = false;
+ fwrt->trans->dbg.hw_error = false;
} else {
iwl_fw_dbg_collect_desc(fwrt, &iwl_dump_desc_assert, false, 0);
}
@@ -473,4 +489,21 @@ static inline void iwl_fw_error_collect(struct iwl_fw_runtime *fwrt)
void iwl_fw_dbg_periodic_trig_handler(struct timer_list *t);
void iwl_fw_error_print_fseq_regs(struct iwl_fw_runtime *fwrt);
+
+static inline void iwl_fwrt_update_fw_versions(struct iwl_fw_runtime *fwrt,
+ struct iwl_lmac_alive *lmac,
+ struct iwl_umac_alive *umac)
+{
+ if (lmac) {
+ fwrt->dump.fw_ver.type = lmac->ver_type;
+ fwrt->dump.fw_ver.subtype = lmac->ver_subtype;
+ fwrt->dump.fw_ver.lmac_major = le32_to_cpu(lmac->ucode_major);
+ fwrt->dump.fw_ver.lmac_minor = le32_to_cpu(lmac->ucode_minor);
+ }
+
+ if (umac) {
+ fwrt->dump.fw_ver.umac_major = le32_to_cpu(umac->umac_major);
+ fwrt->dump.fw_ver.umac_minor = le32_to_cpu(umac->umac_minor);
+ }
+}
#endif /* __iwl_fw_dbg_h__ */
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/error-dump.h b/drivers/net/wireless/intel/iwlwifi/fw/error-dump.h
index 0feff4c33e39..00a45ea85b69 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/error-dump.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/error-dump.h
@@ -67,6 +67,7 @@
#include <linux/types.h>
#define IWL_FW_ERROR_DUMP_BARKER 0x14789632
+#define IWL_FW_INI_ERROR_DUMP_BARKER 0x14789633
/**
* enum iwl_fw_error_dump_type - types of data in the dump file
@@ -278,19 +279,42 @@ struct iwl_fw_error_dump_mem {
u8 data[];
};
-#define IWL_INI_DUMP_MEM_VER 1
-#define IWL_INI_DUMP_MONITOR_VER 1
-#define IWL_INI_DUMP_FIFO_VER 1
+/* Dump version, used by the dump parser to differentiate between
+ * different dump formats
+ */
+#define IWL_INI_DUMP_VER 1
+
+/* Use bit 31 as dump info type to avoid colliding with region types */
+#define IWL_INI_DUMP_INFO_TYPE BIT(31)
+
+/**
+ * struct iwl_fw_ini_fifo_hdr - fifo range header
+ * @fifo_num: the fifo number. In case of umac rx fifo, set BIT(31) to
+ * distinguish between lmac and umac rx fifos
+ * @num_of_registers: num of registers to dump, dword size each
+ */
+struct iwl_fw_ini_fifo_hdr {
+ __le32 fifo_num;
+ __le32 num_of_registers;
+} __packed;
/**
* struct iwl_fw_ini_error_dump_range - range of memory
* @range_data_size: the size of this range, in bytes
- * @start_addr: the start address of this range
+ * @internal_base_addr: base address of internal memory range
+ * @dram_base_addr: base address of dram monitor range
+ * @page_num: page number of memory range
+ * @fifo_hdr: fifo header of memory range
* @data: the actual memory
*/
struct iwl_fw_ini_error_dump_range {
__le32 range_data_size;
- __le64 start_addr;
+ union {
+ __le32 internal_base_addr;
+ __le64 dram_base_addr;
+ __le32 page_num;
+ struct iwl_fw_ini_fifo_hdr fifo_hdr;
+ };
__le32 data[];
} __packed;
@@ -333,30 +357,63 @@ struct iwl_fw_ini_error_dump_register {
__le32 data;
} __packed;
-/**
- * struct iwl_fw_ini_fifo_error_dump_range - ini fifo range dump
- * @fifo_num: the fifo num. In case of rxf and umac rxf, set BIT(31) to
- * distinguish between lmac and umac
- * @num_of_registers: num of registers to dump, dword size each
- * @range_data_size: the size of the data
- * @data: consist of
- * num_of_registers * (register address + register value) + fifo data
+/* struct iwl_fw_ini_dump_info - ini dump information
+ * @version: dump version
+ * @trigger_id: trigger id that caused the dump collection
+ * @trigger_reason: not supported yet
+ * @is_external_cfg: 1 if an external debug configuration was loaded
+ * and 0 otherwise
+ * @ver_type: FW version type
+ * @ver_subtype: FW version subtype
+ * @hw_step: HW step
+ * @hw_type: HW type
+ * @rf_id_flavor: HW RF id flavor
+ * @rf_id_dash: HW RF id dash
+ * @rf_id_step: HW RF id step
+ * @rf_id_type: HW RF id type
+ * @lmac_major: lmac major version
+ * @lmac_minor: lmac minor version
+ * @umac_major: umac major version
+ * @umac_minor: umac minor version
+ * @build_tag_len: length of the build tag
+ * @build_tag: build tag string
+ * @img_name_len: length of the FW image name
+ * @img_name: FW image name
+ * @internal_dbg_cfg_name_len: length of the internal debug configuration name
+ * @internal_dbg_cfg_name: internal debug configuration name
+ * @external_dbg_cfg_name_len: length of the external debug configuration name
+ * @external_dbg_cfg_name: external debug configuration name
+ * @regions_num: number of region ids
+ * @region_ids: region ids the trigger configured to collect
*/
-struct iwl_fw_ini_fifo_error_dump_range {
- __le32 fifo_num;
- __le32 num_of_registers;
- __le32 range_data_size;
- __le32 data[];
-} __packed;
+struct iwl_fw_ini_dump_info {
+ __le32 version;
+ __le32 trigger_id;
+ __le32 trigger_reason;
+ __le32 is_external_cfg;
+ __le32 ver_type;
+ __le32 ver_subtype;
+ __le32 hw_step;
+ __le32 hw_type;
+ __le32 rf_id_flavor;
+ __le32 rf_id_dash;
+ __le32 rf_id_step;
+ __le32 rf_id_type;
+ __le32 lmac_major;
+ __le32 lmac_minor;
+ __le32 umac_major;
+ __le32 umac_minor;
+ __le32 build_tag_len;
+ u8 build_tag[FW_VER_HUMAN_READABLE_SZ];
+ __le32 img_name_len;
+ u8 img_name[IWL_FW_INI_MAX_IMG_NAME_LEN];
+ __le32 internal_dbg_cfg_name_len;
+ u8 internal_dbg_cfg_name[IWL_FW_INI_MAX_DBG_CFG_NAME_LEN];
+ __le32 external_dbg_cfg_name_len;
+ u8 external_dbg_cfg_name[IWL_FW_INI_MAX_DBG_CFG_NAME_LEN];
+ __le32 regions_num;
+ __le32 region_ids[];
-/**
- * struct iwl_fw_ini_fifo_error_dump - ini fifo region dump
- * @header: the header of this region
- * @ranges: the memory ranges of this region
- */
-struct iwl_fw_ini_fifo_error_dump {
- struct iwl_fw_ini_error_dump_header header;
- struct iwl_fw_ini_fifo_error_dump_range ranges[];
} __packed;
/**
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/file.h b/drivers/net/wireless/intel/iwlwifi/fw/file.h
index de9243d30135..0c38e7392b61 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/file.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/file.h
@@ -151,12 +151,13 @@ enum iwl_ucode_tlv_type {
IWL_UCODE_TLV_FW_RECOVERY_INFO = 57,
IWL_UCODE_TLV_FW_FSEQ_VERSION = 60,
- IWL_UCODE_TLV_TYPE_BUFFER_ALLOCATION = IWL_UCODE_INI_TLV_GROUP + 0x1,
- IWL_UCODE_TLV_DEBUG_BASE = IWL_UCODE_TLV_TYPE_BUFFER_ALLOCATION,
- IWL_UCODE_TLV_TYPE_HCMD = IWL_UCODE_INI_TLV_GROUP + 0x2,
- IWL_UCODE_TLV_TYPE_REGIONS = IWL_UCODE_INI_TLV_GROUP + 0x3,
- IWL_UCODE_TLV_TYPE_TRIGGERS = IWL_UCODE_INI_TLV_GROUP + 0x4,
- IWL_UCODE_TLV_TYPE_DEBUG_FLOW = IWL_UCODE_INI_TLV_GROUP + 0x5,
+ IWL_UCODE_TLV_DEBUG_BASE = IWL_UCODE_INI_TLV_GROUP,
+ IWL_UCODE_TLV_TYPE_DEBUG_INFO = IWL_UCODE_TLV_DEBUG_BASE + 0,
+ IWL_UCODE_TLV_TYPE_BUFFER_ALLOCATION = IWL_UCODE_TLV_DEBUG_BASE + 1,
+ IWL_UCODE_TLV_TYPE_HCMD = IWL_UCODE_TLV_DEBUG_BASE + 2,
+ IWL_UCODE_TLV_TYPE_REGIONS = IWL_UCODE_TLV_DEBUG_BASE + 3,
+ IWL_UCODE_TLV_TYPE_TRIGGERS = IWL_UCODE_TLV_DEBUG_BASE + 4,
+ IWL_UCODE_TLV_TYPE_DEBUG_FLOW = IWL_UCODE_TLV_DEBUG_BASE + 5,
IWL_UCODE_TLV_DEBUG_MAX = IWL_UCODE_TLV_TYPE_DEBUG_FLOW,
/* TLVs 0x1000-0x2000 are for internal driver usage */
@@ -286,6 +287,8 @@ typedef unsigned int __bitwise iwl_ucode_tlv_api_t;
* SCAN_OFFLOAD_PROFILES_QUERY_RSP_S.
* @IWL_UCODE_TLV_API_MBSSID_HE: This ucode supports v2 of
* STA_CONTEXT_DOT11AX_API_S
+ * @IWL_UCODE_TLV_API_SAR_TABLE_VER: This ucode supports different SAR
+ * version tables.
*
* @NUM_IWL_UCODE_TLV_API: number of bits used
*/
@@ -318,6 +321,8 @@ enum iwl_ucode_tlv_api {
IWL_UCODE_TLV_API_MBSSID_HE = (__force iwl_ucode_tlv_api_t)52,
IWL_UCODE_TLV_API_WOWLAN_TCP_SYN_WAKE = (__force iwl_ucode_tlv_api_t)53,
IWL_UCODE_TLV_API_FTM_RTT_ACCURACY = (__force iwl_ucode_tlv_api_t)54,
+ IWL_UCODE_TLV_API_SAR_TABLE_VER = (__force iwl_ucode_tlv_api_t)55,
+ IWL_UCODE_TLV_API_ADWELL_HB_DEF_N_AP = (__force iwl_ucode_tlv_api_t)57,
NUM_IWL_UCODE_TLV_API
#ifdef __CHECKER__
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/init.c b/drivers/net/wireless/intel/iwlwifi/fw/init.c
index 4435c0ce3013..c16d6e126e3c 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/init.c
+++ b/drivers/net/wireless/intel/iwlwifi/fw/init.c
@@ -67,6 +67,8 @@ void iwl_fw_runtime_init(struct iwl_fw_runtime *fwrt, struct iwl_trans *trans,
const struct iwl_fw_runtime_ops *ops, void *ops_ctx,
struct dentry *dbgfs_dir)
{
+ int i;
+
memset(fwrt, 0, sizeof(*fwrt));
fwrt->trans = trans;
fwrt->fw = fw;
@@ -74,7 +76,10 @@ void iwl_fw_runtime_init(struct iwl_fw_runtime *fwrt, struct iwl_trans *trans,
fwrt->dump.conf = FW_DBG_INVALID;
fwrt->ops = ops;
fwrt->ops_ctx = ops_ctx;
- INIT_DELAYED_WORK(&fwrt->dump.wk, iwl_fw_error_dump_wk);
+ for (i = 0; i < IWL_FW_RUNTIME_DUMP_WK_NUM; i++) {
+ fwrt->dump.wks[i].idx = i;
+ INIT_DELAYED_WORK(&fwrt->dump.wks[i].wk, iwl_fw_error_dump_wk);
+ }
iwl_fwrt_dbgfs_register(fwrt, dbgfs_dir);
timer_setup(&fwrt->dump.periodic_trig,
iwl_fw_dbg_periodic_trig_handler, 0);
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/runtime.h b/drivers/net/wireless/intel/iwlwifi/fw/runtime.h
index a6402a0b3854..406ef73992c1 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/runtime.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/runtime.h
@@ -89,9 +89,7 @@ struct iwl_fwrt_shared_mem_cfg {
u32 internal_txfifo_size[TX_FIFO_INTERNAL_MAX_NUM];
};
-enum iwl_fw_runtime_status {
- IWL_FWRT_STATUS_DUMPING = 0,
-};
+#define IWL_FW_RUNTIME_DUMP_WK_NUM 5
/**
* struct iwl_fw_runtime - runtime data for firmware
@@ -100,7 +98,6 @@ enum iwl_fw_runtime_status {
* @dev: device pointer
* @ops: user ops
* @ops_ctx: user ops context
- * @status: status flags
* @fw_paging_db: paging database
* @num_of_paging_blk: number of paging blocks
* @num_of_pages_in_last_blk: number of pages in the last block
@@ -117,8 +114,6 @@ struct iwl_fw_runtime {
const struct iwl_fw_runtime_ops *ops;
void *ops_ctx;
- unsigned long status;
-
/* Paging */
struct iwl_fw_paging fw_paging_db[NUM_OF_FW_PAGING_BLOCKS];
u16 num_of_paging_blk;
@@ -133,7 +128,12 @@ struct iwl_fw_runtime {
struct {
const struct iwl_fw_dump_desc *desc;
bool monitor_only;
- struct delayed_work wk;
+ struct {
+ u8 idx;
+ enum iwl_fw_ini_trigger_id ini_trig_id;
+ struct delayed_work wk;
+ } wks[IWL_FW_RUNTIME_DUMP_WK_NUM];
+ unsigned long active_wks;
u8 conf;
@@ -145,8 +145,20 @@ struct iwl_fw_runtime {
u32 lmac_err_id[MAX_NUM_LMAC];
u32 umac_err_id;
void *fifo_iter;
- enum iwl_fw_ini_trigger_id ini_trig_id;
struct timer_list periodic_trig;
+
+ u8 img_name[IWL_FW_INI_MAX_IMG_NAME_LEN];
+ u8 internal_dbg_cfg_name[IWL_FW_INI_MAX_DBG_CFG_NAME_LEN];
+ u8 external_dbg_cfg_name[IWL_FW_INI_MAX_DBG_CFG_NAME_LEN];
+
+ struct {
+ u8 type;
+ u8 subtype;
+ u32 lmac_major;
+ u32 lmac_minor;
+ u32 umac_major;
+ u32 umac_minor;
+ } fw_ver;
} dump;
#ifdef CONFIG_IWLWIFI_DEBUGFS
struct {
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/smem.c b/drivers/net/wireless/intel/iwlwifi/fw/smem.c
index ff85d69c2a8c..557ee47bffd8 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/smem.c
+++ b/drivers/net/wireless/intel/iwlwifi/fw/smem.c
@@ -8,7 +8,7 @@
* Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
* Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
* Copyright(c) 2016 - 2017 Intel Deutschland GmbH
- * Copyright(c) 2018 Intel Corporation
+ * Copyright(c) 2018 - 2019 Intel Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of version 2 of the GNU General Public License as
@@ -31,7 +31,7 @@
* Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
* Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
* Copyright(c) 2016 - 2017 Intel Deutschland GmbH
- * Copyright(c) 2018 Intel Corporation
+ * Copyright(c) 2018 - 2019 Intel Corporation
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@@ -134,6 +134,7 @@ void iwl_get_shared_mem_conf(struct iwl_fw_runtime *fwrt)
.len = { 0, },
};
struct iwl_rx_packet *pkt;
+ int ret;
if (fw_has_capa(&fwrt->fw->ucode_capa,
IWL_UCODE_TLV_CAPA_EXTEND_SHARED_MEM_CFG))
@@ -141,8 +142,13 @@ void iwl_get_shared_mem_conf(struct iwl_fw_runtime *fwrt)
else
cmd.id = SHARED_MEM_CFG;
- if (WARN_ON(iwl_trans_send_cmd(fwrt->trans, &cmd)))
+ ret = iwl_trans_send_cmd(fwrt->trans, &cmd);
+
+ if (ret) {
+ WARN(ret != -ERFKILL,
+ "Could not send the SMEM command: %d\n", ret);
return;
+ }
pkt = cmd.resp_pkt;
if (fwrt->trans->cfg->device_family >= IWL_DEVICE_FAMILY_22000)
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-config.h b/drivers/net/wireless/intel/iwlwifi/iwl-config.h
index f3e69edf8907..bc267bd2c3b0 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-config.h
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-config.h
@@ -540,14 +540,20 @@ extern const struct iwl_cfg iwl9260_killer_2ac_cfg;
extern const struct iwl_cfg iwl9270_2ac_cfg;
extern const struct iwl_cfg iwl9460_2ac_cfg;
extern const struct iwl_cfg iwl9560_2ac_cfg;
+extern const struct iwl_cfg iwl9560_2ac_cfg_quz_a0_jf_b0_soc;
extern const struct iwl_cfg iwl9560_2ac_160_cfg;
+extern const struct iwl_cfg iwl9560_2ac_160_cfg_quz_a0_jf_b0_soc;
extern const struct iwl_cfg iwl9460_2ac_cfg_soc;
extern const struct iwl_cfg iwl9461_2ac_cfg_soc;
+extern const struct iwl_cfg iwl9461_2ac_cfg_quz_a0_jf_b0_soc;
extern const struct iwl_cfg iwl9462_2ac_cfg_soc;
+extern const struct iwl_cfg iwl9462_2ac_cfg_quz_a0_jf_b0_soc;
extern const struct iwl_cfg iwl9560_2ac_cfg_soc;
extern const struct iwl_cfg iwl9560_2ac_160_cfg_soc;
extern const struct iwl_cfg iwl9560_killer_2ac_cfg_soc;
extern const struct iwl_cfg iwl9560_killer_s_2ac_cfg_soc;
+extern const struct iwl_cfg iwl9560_killer_i_2ac_cfg_quz_a0_jf_b0_soc;
+extern const struct iwl_cfg iwl9560_killer_s_2ac_cfg_quz_a0_jf_b0_soc;
extern const struct iwl_cfg iwl9460_2ac_cfg_shared_clk;
extern const struct iwl_cfg iwl9461_2ac_cfg_shared_clk;
extern const struct iwl_cfg iwl9462_2ac_cfg_shared_clk;
@@ -562,6 +568,10 @@ extern const struct iwl_cfg iwl_ax101_cfg_qu_hr;
extern const struct iwl_cfg iwl_ax101_cfg_quz_hr;
extern const struct iwl_cfg iwl22000_2ax_cfg_hr;
extern const struct iwl_cfg iwl_ax200_cfg_cc;
+extern const struct iwl_cfg iwl_ax201_cfg_qu_hr;
+extern const struct iwl_cfg iwl_ax201_cfg_quz_hr;
+extern const struct iwl_cfg iwl_ax1650i_cfg_quz_hr;
+extern const struct iwl_cfg iwl_ax1650s_cfg_quz_hr;
extern const struct iwl_cfg killer1650s_2ax_cfg_qu_b0_hr_b0;
extern const struct iwl_cfg killer1650i_2ax_cfg_qu_b0_hr_b0;
extern const struct iwl_cfg killer1650x_2ax_cfg;
@@ -580,9 +590,9 @@ extern const struct iwl_cfg iwl9560_2ac_cfg_qnj_jf_b0;
extern const struct iwl_cfg iwl22000_2ax_cfg_qnj_hr_a0;
extern const struct iwl_cfg iwlax210_2ax_cfg_so_jf_a0;
extern const struct iwl_cfg iwlax210_2ax_cfg_so_hr_a0;
-extern const struct iwl_cfg iwlax210_2ax_cfg_so_gf_a0;
+extern const struct iwl_cfg iwlax211_2ax_cfg_so_gf_a0;
extern const struct iwl_cfg iwlax210_2ax_cfg_ty_gf_a0;
-extern const struct iwl_cfg iwlax210_2ax_cfg_so_gf4_a0;
+extern const struct iwl_cfg iwlax411_2ax_cfg_so_gf4_a0;
#endif /* CPTCFG_IWLMVM || CPTCFG_IWLFMAC */
#endif /* __IWL_CONFIG_H__ */
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-csr.h b/drivers/net/wireless/intel/iwlwifi/iwl-csr.h
index 553554846009..93da96a7247c 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-csr.h
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-csr.h
@@ -336,6 +336,7 @@ enum {
/* RF_ID value */
#define CSR_HW_RF_ID_TYPE_JF (0x00105100)
#define CSR_HW_RF_ID_TYPE_HR (0x0010A000)
+#define CSR_HW_RF_ID_TYPE_HR1 (0x0010c100)
#define CSR_HW_RF_ID_TYPE_HRCDB (0x00109F00)
#define CSR_HW_RF_ID_TYPE_GF (0x0010D000)
#define CSR_HW_RF_ID_TYPE_GF4 (0x0010E000)
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
index ba66f7fba064..fcaec410b3be 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
@@ -81,9 +81,9 @@ void iwl_fw_dbg_copy_tlv(struct iwl_trans *trans, struct iwl_ucode_tlv *tlv,
return;
if (ext)
- data = &trans->apply_points_ext[apply_point];
+ data = &trans->dbg.apply_points_ext[apply_point];
else
- data = &trans->apply_points[apply_point];
+ data = &trans->dbg.apply_points[apply_point];
/* add room for is_alloc field in &iwl_fw_ini_allocation_data struct */
if (le32_to_cpu(tlv->type) == IWL_UCODE_TLV_TYPE_BUFFER_ALLOCATION) {
@@ -172,14 +172,14 @@ void iwl_alloc_dbg_tlv(struct iwl_trans *trans, size_t len, const u8 *data,
}
if (ext) {
- trans->apply_points_ext[i].data = mem;
- trans->apply_points_ext[i].size = size[i];
+ trans->dbg.apply_points_ext[i].data = mem;
+ trans->dbg.apply_points_ext[i].size = size[i];
} else {
- trans->apply_points[i].data = mem;
- trans->apply_points[i].size = size[i];
+ trans->dbg.apply_points[i].data = mem;
+ trans->dbg.apply_points[i].size = size[i];
}
- trans->ini_valid = true;
+ trans->dbg.ini_valid = true;
}
}
@@ -187,14 +187,14 @@ void iwl_fw_dbg_free(struct iwl_trans *trans)
{
int i;
- for (i = 0; i < ARRAY_SIZE(trans->apply_points); i++) {
- kfree(trans->apply_points[i].data);
- trans->apply_points[i].size = 0;
- trans->apply_points[i].offset = 0;
+ for (i = 0; i < ARRAY_SIZE(trans->dbg.apply_points); i++) {
+ kfree(trans->dbg.apply_points[i].data);
+ trans->dbg.apply_points[i].size = 0;
+ trans->dbg.apply_points[i].offset = 0;
- kfree(trans->apply_points_ext[i].data);
- trans->apply_points_ext[i].size = 0;
- trans->apply_points_ext[i].offset = 0;
+ kfree(trans->dbg.apply_points_ext[i].data);
+ trans->dbg.apply_points_ext[i].size = 0;
+ trans->dbg.apply_points_ext[i].offset = 0;
}
}
@@ -221,6 +221,7 @@ static int iwl_parse_fw_dbg_tlv(struct iwl_trans *trans, const u8 *data,
data += sizeof(*tlv) + ALIGN(tlv_len, 4);
switch (tlv_type) {
+ case IWL_UCODE_TLV_TYPE_DEBUG_INFO:
case IWL_UCODE_TLV_TYPE_BUFFER_ALLOCATION:
case IWL_UCODE_TLV_TYPE_HCMD:
case IWL_UCODE_TLV_TYPE_REGIONS:
@@ -242,7 +243,7 @@ void iwl_load_fw_dbg_tlv(struct device *dev, struct iwl_trans *trans)
const struct firmware *fw;
int res;
- if (trans->external_ini_loaded || !iwlwifi_mod_params.enable_ini)
+ if (trans->dbg.external_ini_loaded || !iwlwifi_mod_params.enable_ini)
return;
res = request_firmware(&fw, "iwl-dbg-tlv.ini", dev);
@@ -252,6 +253,6 @@ void iwl_load_fw_dbg_tlv(struct device *dev, struct iwl_trans *trans)
iwl_alloc_dbg_tlv(trans, fw->size, fw->data, true);
iwl_parse_fw_dbg_tlv(trans, fw->data, fw->size);
- trans->external_ini_loaded = true;
+ trans->dbg.external_ini_loaded = true;
release_firmware(fw);
}
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
index fba242284507..57d09049e615 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
@@ -1105,6 +1105,18 @@ static int iwl_parse_tlv_firmware(struct iwl_drv *drv,
le32_to_cpu(recov_info->buf_size);
}
break;
+ case IWL_UCODE_TLV_FW_FSEQ_VERSION: {
+ struct {
+ u8 version[32];
+ u8 sha1[20];
+ } *fseq_ver = (void *)tlv_data;
+
+ if (tlv_len != sizeof(*fseq_ver))
+ goto invalid_tlv_len;
+ IWL_INFO(drv, "TLV_FW_FSEQ_VERSION: %s\n",
+ fseq_ver->version);
+ }
+ break;
case IWL_UCODE_TLV_UMAC_DEBUG_ADDRS: {
struct iwl_umac_debug_addrs *dbg_ptrs =
(void *)tlv_data;
@@ -1114,10 +1126,10 @@ static int iwl_parse_tlv_firmware(struct iwl_drv *drv,
if (drv->trans->cfg->device_family <
IWL_DEVICE_FAMILY_22000)
break;
- drv->trans->umac_error_event_table =
+ drv->trans->dbg.umac_error_event_table =
le32_to_cpu(dbg_ptrs->error_info_addr) &
~FW_ADDR_CACHE_CONTROL;
- drv->trans->error_event_table_tlv_status |=
+ drv->trans->dbg.error_event_table_tlv_status |=
IWL_ERROR_EVENT_TABLE_UMAC;
break;
}
@@ -1130,13 +1142,14 @@ static int iwl_parse_tlv_firmware(struct iwl_drv *drv,
if (drv->trans->cfg->device_family <
IWL_DEVICE_FAMILY_22000)
break;
- drv->trans->lmac_error_event_table[0] =
+ drv->trans->dbg.lmac_error_event_table[0] =
le32_to_cpu(dbg_ptrs->error_event_table_ptr) &
~FW_ADDR_CACHE_CONTROL;
- drv->trans->error_event_table_tlv_status |=
+ drv->trans->dbg.error_event_table_tlv_status |=
IWL_ERROR_EVENT_TABLE_LMAC1;
break;
}
+ case IWL_UCODE_TLV_TYPE_DEBUG_INFO:
case IWL_UCODE_TLV_TYPE_BUFFER_ALLOCATION:
case IWL_UCODE_TLV_TYPE_HCMD:
case IWL_UCODE_TLV_TYPE_REGIONS:
@@ -1744,7 +1757,7 @@ IWL_EXPORT_SYMBOL(iwl_opmode_deregister);
static int __init iwl_drv_init(void)
{
- int i;
+ int i, err;
mutex_init(&iwlwifi_opmode_table_mtx);
@@ -1759,7 +1772,17 @@ static int __init iwl_drv_init(void)
iwl_dbgfs_root = debugfs_create_dir(DRV_NAME, NULL);
#endif
- return iwl_pci_register_driver();
+ err = iwl_pci_register_driver();
+ if (err)
+ goto cleanup_debugfs;
+
+ return 0;
+
+cleanup_debugfs:
+#ifdef CONFIG_IWLWIFI_DEBUGFS
+ debugfs_remove_recursive(iwl_dbgfs_root);
+#endif
+ return err;
}
module_init(iwl_drv_init);
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
index 1e4c9ef548cc..0f8aeb111b0e 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
@@ -722,6 +722,50 @@ struct iwl_self_init_dram {
};
/**
+ * struct iwl_trans_debug - transport debug related data
+ *
+ * @n_dest_reg: num of reg_ops in %dest_tlv
+ * @rec_on: true iff there is a fw debug recording currently active
+ * @dest_tlv: points to the destination TLV for debug
+ * @conf_tlv: array of pointers to configuration TLVs for debug
+ * @trigger_tlv: array of pointers to triggers TLVs for debug
+ * @lmac_error_event_table: addrs of lmacs error tables
+ * @umac_error_event_table: addr of umac error table
+ * @error_event_table_tlv_status: bitmap that indicates what error table
+ * pointers were received via TLV. Uses enum &iwl_error_event_table_status
+ * @external_ini_loaded: indicates if an external ini cfg was given
+ * @ini_valid: indicates if debug ini mode is on
+ * @num_blocks: number of blocks in fw_mon
+ * @fw_mon: address of the buffers for firmware monitor
+ * @hw_error: equals true if hw error interrupt was received from the FW
+ * @ini_dest: debug monitor destination, uses &enum iwl_fw_ini_buffer_location
+ */
+struct iwl_trans_debug {
+ u8 n_dest_reg;
+ bool rec_on;
+
+ const struct iwl_fw_dbg_dest_tlv_v1 *dest_tlv;
+ const struct iwl_fw_dbg_conf_tlv *conf_tlv[FW_DBG_CONF_MAX];
+ struct iwl_fw_dbg_trigger_tlv * const *trigger_tlv;
+
+ u32 lmac_error_event_table[2];
+ u32 umac_error_event_table;
+ unsigned int error_event_table_tlv_status;
+
+ bool external_ini_loaded;
+ bool ini_valid;
+
+ struct iwl_apply_point_data apply_points[IWL_FW_INI_APPLY_NUM];
+ struct iwl_apply_point_data apply_points_ext[IWL_FW_INI_APPLY_NUM];
+
+ int num_blocks;
+ struct iwl_dram_data fw_mon[IWL_FW_INI_APPLY_NUM];
+
+ bool hw_error;
+ enum iwl_fw_ini_buffer_location ini_dest;
+};
+
+/**
* struct iwl_trans - transport common data
*
* @ops - pointer to iwl_trans_ops
@@ -750,24 +794,12 @@ struct iwl_self_init_dram {
* @rx_mpdu_cmd_hdr_size: used for tracing, amount of data before the
* start of the 802.11 header in the @rx_mpdu_cmd
* @dflt_pwr_limit: default power limit fetched from the platform (ACPI)
- * @dbg_dest_tlv: points to the destination TLV for debug
- * @dbg_conf_tlv: array of pointers to configuration TLVs for debug
- * @dbg_trigger_tlv: array of pointers to triggers TLVs for debug
- * @dbg_n_dest_reg: num of reg_ops in %dbg_dest_tlv
- * @num_blocks: number of blocks in fw_mon
- * @fw_mon: address of the buffers for firmware monitor
* @system_pm_mode: the system-wide power management mode in use.
* This mode is set dynamically, depending on the WoWLAN values
* configured from the userspace at runtime.
* @runtime_pm_mode: the runtime power management mode in use. This
* mode is set during the initialization phase and is not
* supposed to change during runtime.
- * @dbg_rec_on: true iff there is a fw debug recording currently active
- * @lmac_error_event_table: addrs of lmacs error tables
- * @umac_error_event_table: addr of umac error table
- * @error_event_table_tlv_status: bitmap that indicates what error table
- * pointers was recevied via TLV. use enum &iwl_error_event_table_status
- * @hw_error: equals true if hw error interrupt was received from the FW
*/
struct iwl_trans {
const struct iwl_trans_ops *ops;
@@ -808,29 +840,12 @@ struct iwl_trans {
struct lockdep_map sync_cmd_lockdep_map;
#endif
- struct iwl_apply_point_data apply_points[IWL_FW_INI_APPLY_NUM];
- struct iwl_apply_point_data apply_points_ext[IWL_FW_INI_APPLY_NUM];
-
- bool external_ini_loaded;
- bool ini_valid;
-
- const struct iwl_fw_dbg_dest_tlv_v1 *dbg_dest_tlv;
- const struct iwl_fw_dbg_conf_tlv *dbg_conf_tlv[FW_DBG_CONF_MAX];
- struct iwl_fw_dbg_trigger_tlv * const *dbg_trigger_tlv;
- u8 dbg_n_dest_reg;
- int num_blocks;
- struct iwl_dram_data fw_mon[IWL_FW_INI_APPLY_NUM];
+ struct iwl_trans_debug dbg;
struct iwl_self_init_dram init_dram;
enum iwl_plat_pm_mode system_pm_mode;
enum iwl_plat_pm_mode runtime_pm_mode;
bool suspending;
- bool dbg_rec_on;
-
- u32 lmac_error_event_table[2];
- u32 umac_error_event_table;
- unsigned int error_event_table_tlv_status;
- bool hw_error;
/* pointer to trans specific struct */
/*Ensure that this pointer will always be aligned to sizeof pointer */
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/constants.h b/drivers/net/wireless/intel/iwlwifi/mvm/constants.h
index dff14f1ec55f..915b172da57a 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/constants.h
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/constants.h
@@ -152,5 +152,6 @@
#define IWL_MVM_FTM_INITIATOR_ALGO IWL_TOF_ALGO_TYPE_MAX_LIKE
#define IWL_MVM_FTM_INITIATOR_DYNACK true
#define IWL_MVM_D3_DEBUG false
+#define IWL_MVM_USE_TWT false
#endif /* __MVM_CONSTANTS_H */
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
index e7e68fb2bd29..cec40855a641 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
@@ -398,8 +398,7 @@ static int iwl_mvm_send_patterns_v1(struct iwl_mvm *mvm,
if (!wowlan->n_patterns)
return 0;
- cmd.len[0] = sizeof(*pattern_cmd) +
- wowlan->n_patterns * sizeof(struct iwl_wowlan_pattern_v1);
+ cmd.len[0] = struct_size(pattern_cmd, patterns, wowlan->n_patterns);
pattern_cmd = kmalloc(cmd.len[0], GFP_KERNEL);
if (!pattern_cmd)
@@ -1079,11 +1078,12 @@ static int __iwl_mvm_suspend(struct ieee80211_hw *hw,
#endif
/*
- * TODO: this is needed because the firmware is not stopping
- * the recording automatically before entering D3. This can
- * be removed once the FW starts doing that.
+ * Prior to the 9000 device family, the driver needs to stop the dbg
+ * recording before entering D3. In later devices the FW stops the
+ * recording automatically.
*/
- _iwl_fw_dbg_stop_recording(mvm->fwrt.trans, NULL);
+ if (mvm->trans->cfg->device_family < IWL_DEVICE_FAMILY_9000)
+ iwl_fw_dbg_stop_recording(mvm->trans, NULL);
/* must be last -- this switches firmware state */
ret = iwl_mvm_send_cmd(mvm, &d3_cfg_cmd);
@@ -1986,7 +1986,7 @@ static void iwl_mvm_d3_disconnect_iter(void *data, u8 *mac,
static int iwl_mvm_check_rt_status(struct iwl_mvm *mvm,
struct ieee80211_vif *vif)
{
- u32 base = mvm->trans->lmac_error_event_table[0];
+ u32 base = mvm->trans->dbg.lmac_error_event_table[0];
struct error_table_start {
/* cf. struct iwl_error_event_table */
u32 valid;
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
index 5b1bb76c5d28..0c188a82cfc1 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
@@ -467,6 +467,46 @@ static ssize_t iwl_dbgfs_rs_data_read(struct file *file, char __user *user_buf,
return ret;
}
+static ssize_t iwl_dbgfs_amsdu_len_write(struct ieee80211_sta *sta,
+ char *buf, size_t count,
+ loff_t *ppos)
+{
+ struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
+ int i;
+ u16 amsdu_len;
+
+ if (kstrtou16(buf, 0, &amsdu_len))
+ return -EINVAL;
+
+ if (amsdu_len) {
+ mvmsta->orig_amsdu_len = sta->max_amsdu_len;
+ sta->max_amsdu_len = amsdu_len;
+ for (i = 0; i < ARRAY_SIZE(sta->max_tid_amsdu_len); i++)
+ sta->max_tid_amsdu_len[i] = amsdu_len;
+ } else {
+ sta->max_amsdu_len = mvmsta->orig_amsdu_len;
+ mvmsta->orig_amsdu_len = 0;
+ }
+ return count;
+}
+
+static ssize_t iwl_dbgfs_amsdu_len_read(struct file *file,
+ char __user *user_buf,
+ size_t count, loff_t *ppos)
+{
+ struct ieee80211_sta *sta = file->private_data;
+ struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
+
+ char buf[32];
+ int pos;
+
+ pos = scnprintf(buf, sizeof(buf), "current %d ", sta->max_amsdu_len);
+ pos += scnprintf(buf + pos, sizeof(buf) - pos, "stored %d\n",
+ mvmsta->orig_amsdu_len);
+
+ return simple_read_from_buffer(user_buf, count, ppos, buf, pos);
+}
+
static ssize_t iwl_dbgfs_disable_power_off_read(struct file *file,
char __user *user_buf,
size_t count, loff_t *ppos)
@@ -1356,24 +1396,6 @@ static ssize_t iwl_dbgfs_fw_dbg_collect_write(struct iwl_mvm *mvm,
return count;
}
-static ssize_t iwl_dbgfs_max_amsdu_len_write(struct iwl_mvm *mvm,
- char *buf, size_t count,
- loff_t *ppos)
-{
- unsigned int max_amsdu_len;
- int ret;
-
- ret = kstrtouint(buf, 0, &max_amsdu_len);
- if (ret)
- return ret;
-
- if (max_amsdu_len > IEEE80211_MAX_MPDU_LEN_VHT_11454)
- return -EINVAL;
- mvm->max_amsdu_len = max_amsdu_len;
-
- return count;
-}
-
#define ADD_TEXT(...) pos += scnprintf(buf + pos, bufsz - pos, __VA_ARGS__)
#ifdef CONFIG_IWLWIFI_BCAST_FILTERING
static ssize_t iwl_dbgfs_bcast_filters_read(struct file *file,
@@ -1873,7 +1895,6 @@ MVM_DEBUGFS_READ_WRITE_FILE_OPS(scan_ant_rxchain, 8);
MVM_DEBUGFS_READ_WRITE_FILE_OPS(d0i3_refs, 8);
MVM_DEBUGFS_READ_WRITE_FILE_OPS(fw_dbg_conf, 8);
MVM_DEBUGFS_WRITE_FILE_OPS(fw_dbg_collect, 64);
-MVM_DEBUGFS_WRITE_FILE_OPS(max_amsdu_len, 8);
MVM_DEBUGFS_WRITE_FILE_OPS(indirection_tbl,
(IWL_RSS_INDIRECTION_TABLE_SIZE * 2));
MVM_DEBUGFS_WRITE_FILE_OPS(inject_packet, 512);
@@ -1891,6 +1912,8 @@ MVM_DEBUGFS_READ_WRITE_FILE_OPS(bcast_filters_macs, 256);
MVM_DEBUGFS_READ_FILE_OPS(sar_geo_profile);
#endif
+MVM_DEBUGFS_READ_WRITE_STA_FILE_OPS(amsdu_len, 16);
+
MVM_DEBUGFS_READ_WRITE_FILE_OPS(he_sniffer_params, 32);
static ssize_t iwl_dbgfs_mem_read(struct file *file, char __user *user_buf,
@@ -2032,8 +2055,10 @@ void iwl_mvm_sta_add_debugfs(struct ieee80211_hw *hw,
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
- if (iwl_mvm_has_tlc_offload(mvm))
+ if (iwl_mvm_has_tlc_offload(mvm)) {
MVM_DEBUGFS_ADD_STA_FILE(rs_data, dir, 0400);
+ }
+ MVM_DEBUGFS_ADD_STA_FILE(amsdu_len, dir, 0600);
}
void iwl_mvm_dbgfs_register(struct iwl_mvm *mvm, struct dentry *dbgfs_dir)
@@ -2069,7 +2094,6 @@ void iwl_mvm_dbgfs_register(struct iwl_mvm *mvm, struct dentry *dbgfs_dir)
MVM_DEBUGFS_ADD_FILE(d0i3_refs, mvm->debugfs_dir, 0600);
MVM_DEBUGFS_ADD_FILE(fw_dbg_conf, mvm->debugfs_dir, 0600);
MVM_DEBUGFS_ADD_FILE(fw_dbg_collect, mvm->debugfs_dir, 0200);
- MVM_DEBUGFS_ADD_FILE(max_amsdu_len, mvm->debugfs_dir, 0200);
MVM_DEBUGFS_ADD_FILE(send_echo_cmd, mvm->debugfs_dir, 0200);
MVM_DEBUGFS_ADD_FILE(indirection_tbl, mvm->debugfs_dir, 0200);
MVM_DEBUGFS_ADD_FILE(inject_packet, mvm->debugfs_dir, 0200);
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
index 153717587aeb..1d608e9e9101 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
@@ -238,7 +238,7 @@ static bool iwl_alive_fn(struct iwl_notif_wait_data *notif_wait,
iwl_fw_lmac1_set_alive_err_table(mvm->trans, lmac_error_event_table);
if (lmac2)
- mvm->trans->lmac_error_event_table[1] =
+ mvm->trans->dbg.lmac_error_event_table[1] =
le32_to_cpu(lmac2->dbg_ptrs.error_event_table_ptr);
umac_error_event_table = le32_to_cpu(umac->dbg_ptrs.error_info_addr);
@@ -276,6 +276,8 @@ static bool iwl_alive_fn(struct iwl_notif_wait_data *notif_wait,
le32_to_cpu(umac->umac_major),
le32_to_cpu(umac->umac_minor));
+ iwl_fwrt_update_fw_versions(&mvm->fwrt, lmac1, umac);
+
return true;
}
@@ -419,6 +421,8 @@ static int iwl_run_unified_mvm_ucode(struct iwl_mvm *mvm, bool read_nvm)
lockdep_assert_held(&mvm->mutex);
+ mvm->rfkill_safe_init_done = false;
+
iwl_init_notification_wait(&mvm->notif_wait,
&init_wait,
init_complete,
@@ -537,8 +541,7 @@ int iwl_run_init_mvm_ucode(struct iwl_mvm *mvm, bool read_nvm)
lockdep_assert_held(&mvm->mutex);
- if (WARN_ON_ONCE(mvm->rfkill_safe_init_done))
- return 0;
+ mvm->rfkill_safe_init_done = false;
iwl_init_notification_wait(&mvm->notif_wait,
&calib_wait,
@@ -681,15 +684,15 @@ static int iwl_mvm_sar_get_wrds_table(struct iwl_mvm *mvm)
{
union acpi_object *wifi_pkg, *table, *data;
bool enabled;
- int ret;
+ int ret, tbl_rev;
data = iwl_acpi_get_object(mvm->dev, ACPI_WRDS_METHOD);
if (IS_ERR(data))
return PTR_ERR(data);
wifi_pkg = iwl_acpi_get_wifi_pkg(mvm->dev, data,
- ACPI_WRDS_WIFI_DATA_SIZE);
- if (IS_ERR(wifi_pkg)) {
+ ACPI_WRDS_WIFI_DATA_SIZE, &tbl_rev);
+ if (IS_ERR(wifi_pkg) || tbl_rev != 0) {
ret = PTR_ERR(wifi_pkg);
goto out_free;
}
@@ -718,15 +721,15 @@ static int iwl_mvm_sar_get_ewrd_table(struct iwl_mvm *mvm)
{
union acpi_object *wifi_pkg, *data;
bool enabled;
- int i, n_profiles, ret;
+ int i, n_profiles, ret, tbl_rev;
data = iwl_acpi_get_object(mvm->dev, ACPI_EWRD_METHOD);
if (IS_ERR(data))
return PTR_ERR(data);
wifi_pkg = iwl_acpi_get_wifi_pkg(mvm->dev, data,
- ACPI_EWRD_WIFI_DATA_SIZE);
- if (IS_ERR(wifi_pkg)) {
+ ACPI_EWRD_WIFI_DATA_SIZE, &tbl_rev);
+ if (IS_ERR(wifi_pkg) || tbl_rev != 0) {
ret = PTR_ERR(wifi_pkg);
goto out_free;
}
@@ -777,7 +780,7 @@ out_free:
static int iwl_mvm_sar_get_wgds_table(struct iwl_mvm *mvm)
{
union acpi_object *wifi_pkg, *data;
- int i, j, ret;
+ int i, j, ret, tbl_rev;
int idx = 1;
data = iwl_acpi_get_object(mvm->dev, ACPI_WGDS_METHOD);
@@ -785,12 +788,13 @@ static int iwl_mvm_sar_get_wgds_table(struct iwl_mvm *mvm)
return PTR_ERR(data);
wifi_pkg = iwl_acpi_get_wifi_pkg(mvm->dev, data,
- ACPI_WGDS_WIFI_DATA_SIZE);
- if (IS_ERR(wifi_pkg)) {
+ ACPI_WGDS_WIFI_DATA_SIZE, &tbl_rev);
+ if (IS_ERR(wifi_pkg) || tbl_rev > 1) {
ret = PTR_ERR(wifi_pkg);
goto out_free;
}
+ mvm->geo_rev = tbl_rev;
for (i = 0; i < ACPI_NUM_GEO_PROFILES; i++) {
for (j = 0; j < ACPI_GEO_TABLE_SIZE; j++) {
union acpi_object *entry;
@@ -858,6 +862,9 @@ int iwl_mvm_sar_select_profile(struct iwl_mvm *mvm, int prof_a, int prof_b)
return -ENOENT;
}
+ IWL_DEBUG_INFO(mvm,
+ "SAR EWRD: chain %d profile index %d\n",
+ i, profs[i]);
IWL_DEBUG_RADIO(mvm, " Chain[%d]:\n", i);
for (j = 0; j < ACPI_SAR_NUM_SUB_BANDS; j++) {
idx = (i * ACPI_SAR_NUM_SUB_BANDS) + j;
@@ -877,15 +884,29 @@ int iwl_mvm_get_sar_geo_profile(struct iwl_mvm *mvm)
{
struct iwl_geo_tx_power_profiles_resp *resp;
int ret;
+ u16 len;
+ void *data;
+ struct iwl_geo_tx_power_profiles_cmd geo_cmd;
+ struct iwl_geo_tx_power_profiles_cmd_v1 geo_cmd_v1;
+ struct iwl_host_cmd cmd;
+
+ if (fw_has_api(&mvm->fw->ucode_capa, IWL_UCODE_TLV_API_SAR_TABLE_VER)) {
+ geo_cmd.ops =
+ cpu_to_le32(IWL_PER_CHAIN_OFFSET_GET_CURRENT_TABLE);
+ len = sizeof(geo_cmd);
+ data = &geo_cmd;
+ } else {
+ geo_cmd_v1.ops =
+ cpu_to_le32(IWL_PER_CHAIN_OFFSET_GET_CURRENT_TABLE);
+ len = sizeof(geo_cmd_v1);
+ data = &geo_cmd_v1;
+ }
- struct iwl_geo_tx_power_profiles_cmd geo_cmd = {
- .ops = cpu_to_le32(IWL_PER_CHAIN_OFFSET_GET_CURRENT_TABLE),
- };
- struct iwl_host_cmd cmd = {
+ cmd = (struct iwl_host_cmd){
.id = WIDE_ID(PHY_OPS_GROUP, GEO_TX_POWER_LIMIT),
- .len = { sizeof(geo_cmd), },
+ .len = { len, },
.flags = CMD_WANT_SKB,
- .data = { &geo_cmd },
+ .data = { data },
};
ret = iwl_mvm_send_cmd(mvm, &cmd);
@@ -955,6 +976,16 @@ static int iwl_mvm_sar_geo_init(struct iwl_mvm *mvm)
i, j, value[1], value[2], value[0]);
}
}
+
+ cmd.table_revision = cpu_to_le32(mvm->geo_rev);
+
+ if (!fw_has_api(&mvm->fw->ucode_capa,
+ IWL_UCODE_TLV_API_SAR_TABLE_VER)) {
+ return iwl_mvm_send_cmd_pdu(mvm, cmd_wide_id, 0,
+ sizeof(struct iwl_geo_tx_power_profiles_cmd_v1),
+ &cmd);
+ }
+
return iwl_mvm_send_cmd_pdu(mvm, cmd_wide_id, 0, sizeof(cmd), &cmd);
}
@@ -1108,10 +1139,13 @@ static int iwl_mvm_load_rt_fw(struct iwl_mvm *mvm)
iwl_fw_dbg_apply_point(&mvm->fwrt, IWL_FW_INI_APPLY_EARLY);
+ mvm->rfkill_safe_init_done = false;
ret = iwl_mvm_load_ucode_wait_alive(mvm, IWL_UCODE_REGULAR);
if (ret)
return ret;
+ mvm->rfkill_safe_init_done = true;
+
iwl_fw_dbg_apply_point(&mvm->fwrt, IWL_FW_INI_APPLY_AFTER_ALIVE);
return iwl_init_paging(&mvm->fwrt, mvm->fwrt.cur_fw_img);
@@ -1144,7 +1178,7 @@ int iwl_mvm_up(struct iwl_mvm *mvm)
if (ret)
IWL_ERR(mvm, "Failed to initialize Smart Fifo\n");
- if (!mvm->trans->ini_valid) {
+ if (!mvm->trans->dbg.ini_valid) {
mvm->fwrt.dump.conf = FW_DBG_INVALID;
/* if we have a destination, assume EARLY START */
if (mvm->fw->dbg.dest_tlv)
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
index 53c217af13c8..cb22d447fcb8 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
@@ -558,15 +558,16 @@ static void iwl_mvm_mac_ctxt_cmd_common(struct iwl_mvm *mvm,
for (i = 0; i < IEEE80211_NUM_ACS; i++) {
u8 txf = iwl_mvm_mac_ac_to_tx_fifo(mvm, i);
+ u8 ucode_ac = iwl_mvm_mac80211_ac_to_ucode_ac(i);
- cmd->ac[txf].cw_min =
+ cmd->ac[ucode_ac].cw_min =
cpu_to_le16(mvmvif->queue_params[i].cw_min);
- cmd->ac[txf].cw_max =
+ cmd->ac[ucode_ac].cw_max =
cpu_to_le16(mvmvif->queue_params[i].cw_max);
- cmd->ac[txf].edca_txop =
+ cmd->ac[ucode_ac].edca_txop =
cpu_to_le16(mvmvif->queue_params[i].txop * 32);
- cmd->ac[txf].aifsn = mvmvif->queue_params[i].aifs;
- cmd->ac[txf].fifos_mask = BIT(txf);
+ cmd->ac[ucode_ac].aifsn = mvmvif->queue_params[i].aifs;
+ cmd->ac[ucode_ac].fifos_mask = BIT(txf);
}
if (vif->bss_conf.qos)
@@ -678,7 +679,7 @@ static int iwl_mvm_mac_ctxt_cmd_sta(struct iwl_mvm *mvm,
if (vif->bss_conf.he_support && !iwlwifi_mod_params.disable_11ax) {
cmd.filter_flags |= cpu_to_le32(MAC_FILTER_IN_11AX);
- if (vif->bss_conf.twt_requester)
+ if (vif->bss_conf.twt_requester && IWL_MVM_USE_TWT)
ctxt_sta->data_policy |= cpu_to_le32(TWT_SUPPORTED);
}
@@ -1081,9 +1082,6 @@ static void iwl_mvm_mac_ctxt_cmd_fill_ap(struct iwl_mvm *mvm,
IWL_DEBUG_HC(mvm, "No need to receive beacons\n");
}
- if (vif->bss_conf.he_support && !iwlwifi_mod_params.disable_11ax)
- cmd->filter_flags |= cpu_to_le32(MAC_FILTER_IN_11AX);
-
ctxt_ap->bi = cpu_to_le32(vif->bss_conf.beacon_int);
ctxt_ap->dtim_interval = cpu_to_le32(vif->bss_conf.beacon_int *
vif->bss_conf.dtim_period);
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
index fdbabca0280e..55cd49ccbf0b 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
@@ -207,6 +207,12 @@ static const struct cfg80211_pmsr_capabilities iwl_mvm_pmsr_capa = {
},
};
+static int iwl_mvm_mac_set_key(struct ieee80211_hw *hw,
+ enum set_key_cmd cmd,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ struct ieee80211_key_conf *key);
+
void iwl_mvm_ref(struct iwl_mvm *mvm, enum iwl_mvm_ref_type ref_type)
{
if (!iwl_mvm_is_d0i3_supported(mvm))
@@ -1439,7 +1445,7 @@ static void iwl_mvm_mac_stop(struct ieee80211_hw *hw)
*/
clear_bit(IWL_MVM_STATUS_FIRMWARE_RUNNING, &mvm->status);
- iwl_fw_cancel_dump(&mvm->fwrt);
+ iwl_fw_cancel_dumps(&mvm->fwrt);
cancel_delayed_work_sync(&mvm->cs_tx_unblock_dwork);
cancel_delayed_work_sync(&mvm->scan_timeout_dwork);
iwl_fw_free_dump_desc(&mvm->fwrt);
@@ -2365,22 +2371,23 @@ static void iwl_mvm_cfg_he_sta(struct iwl_mvm *mvm,
/* Mark MU EDCA as enabled, unless none detected on some AC */
flags |= STA_CTXT_HE_MU_EDCA_CW;
- for (i = 0; i < AC_NUM; i++) {
+ for (i = 0; i < IEEE80211_NUM_ACS; i++) {
struct ieee80211_he_mu_edca_param_ac_rec *mu_edca =
&mvmvif->queue_params[i].mu_edca_param_rec;
+ u8 ac = iwl_mvm_mac80211_ac_to_ucode_ac(i);
if (!mvmvif->queue_params[i].mu_edca) {
flags &= ~STA_CTXT_HE_MU_EDCA_CW;
break;
}
- sta_ctxt_cmd.trig_based_txf[i].cwmin =
+ sta_ctxt_cmd.trig_based_txf[ac].cwmin =
cpu_to_le16(mu_edca->ecw_min_max & 0xf);
- sta_ctxt_cmd.trig_based_txf[i].cwmax =
+ sta_ctxt_cmd.trig_based_txf[ac].cwmax =
cpu_to_le16((mu_edca->ecw_min_max & 0xf0) >> 4);
- sta_ctxt_cmd.trig_based_txf[i].aifsn =
+ sta_ctxt_cmd.trig_based_txf[ac].aifsn =
cpu_to_le16(mu_edca->aifsn);
- sta_ctxt_cmd.trig_based_txf[i].mu_time =
+ sta_ctxt_cmd.trig_based_txf[ac].mu_time =
cpu_to_le16(mu_edca->mu_edca_timer);
}
@@ -2636,7 +2643,7 @@ static int iwl_mvm_start_ap_ibss(struct ieee80211_hw *hw,
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
- int ret;
+ int ret, i;
/*
* iwl_mvm_mac_ctxt_add() might read directly from the device
@@ -2710,6 +2717,20 @@ static int iwl_mvm_start_ap_ibss(struct ieee80211_hw *hw,
/* must be set before quota calculations */
mvmvif->ap_ibss_active = true;
+ /* send all the early keys to the device now */
+ for (i = 0; i < ARRAY_SIZE(mvmvif->ap_early_keys); i++) {
+ struct ieee80211_key_conf *key = mvmvif->ap_early_keys[i];
+
+ if (!key)
+ continue;
+
+ mvmvif->ap_early_keys[i] = NULL;
+
+ ret = iwl_mvm_mac_set_key(hw, SET_KEY, vif, NULL, key);
+ if (ret)
+ goto out_quota_failed;
+ }
+
if (vif->type == NL80211_IFTYPE_AP && !vif->p2p) {
iwl_mvm_vif_set_low_latency(mvmvif, true,
LOW_LATENCY_VIF_TYPE);
@@ -3479,11 +3500,12 @@ static int iwl_mvm_mac_set_key(struct ieee80211_hw *hw,
struct ieee80211_sta *sta,
struct ieee80211_key_conf *key)
{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
struct iwl_mvm_sta *mvmsta;
struct iwl_mvm_key_pn *ptk_pn;
int keyidx = key->keyidx;
- int ret;
+ int ret, i;
u8 key_offset;
if (iwlwifi_mod_params.swcrypto) {
@@ -3556,6 +3578,22 @@ static int iwl_mvm_mac_set_key(struct ieee80211_hw *hw,
key->hw_key_idx = STA_KEY_IDX_INVALID;
break;
}
+
+ if (!mvmvif->ap_ibss_active) {
+ for (i = 0;
+ i < ARRAY_SIZE(mvmvif->ap_early_keys);
+ i++) {
+ if (!mvmvif->ap_early_keys[i]) {
+ mvmvif->ap_early_keys[i] = key;
+ break;
+ }
+ }
+
+ if (i >= ARRAY_SIZE(mvmvif->ap_early_keys))
+ ret = -ENOSPC;
+
+ break;
+ }
}
/* During FW restart, in order to restore the state as it was,
@@ -3624,6 +3662,18 @@ static int iwl_mvm_mac_set_key(struct ieee80211_hw *hw,
break;
case DISABLE_KEY:
+ ret = -ENOENT;
+ for (i = 0; i < ARRAY_SIZE(mvmvif->ap_early_keys); i++) {
+ if (mvmvif->ap_early_keys[i] == key) {
+ mvmvif->ap_early_keys[i] = NULL;
+ ret = 0;
+ }
+ }
+
+ /* found in pending list - don't do anything else */
+ if (ret == 0)
+ break;
+
if (key->hw_key_idx == STA_KEY_IDX_INVALID) {
ret = 0;
break;
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
index 02efcf2189c4..48c77af54e99 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
@@ -501,6 +501,9 @@ struct iwl_mvm_vif {
netdev_features_t features;
struct iwl_probe_resp_data __rcu *probe_resp_data;
+
+ /* we can only have 2 GTK + 2 IGTK active at a time */
+ struct ieee80211_key_conf *ap_early_keys[4];
};
static inline struct iwl_mvm_vif *
@@ -1107,7 +1110,6 @@ struct iwl_mvm {
u8 ps_disabled; /* u8 instead of bool to ease debugfs_create_* usage */
/* Indicate if 32Khz external clock is valid */
u32 ext_clock_valid;
- unsigned int max_amsdu_len; /* used for debugfs only */
struct ieee80211_vif __rcu *csa_vif;
struct ieee80211_vif __rcu *csa_tx_blocked_vif;
@@ -1181,6 +1183,7 @@ struct iwl_mvm {
#ifdef CONFIG_ACPI
struct iwl_mvm_sar_profile sar_profiles[ACPI_SAR_PROFILE_NUM];
struct iwl_mvm_geo_profile geo_profiles[ACPI_NUM_GEO_PROFILES];
+ u32 geo_rev;
#endif
};
@@ -1307,6 +1310,12 @@ static inline bool iwl_mvm_is_adaptive_dwell_v2_supported(struct iwl_mvm *mvm)
IWL_UCODE_TLV_API_ADAPTIVE_DWELL_V2);
}
+static inline bool iwl_mvm_is_adwell_hb_ap_num_supported(struct iwl_mvm *mvm)
+{
+ return fw_has_api(&mvm->fw->ucode_capa,
+ IWL_UCODE_TLV_API_ADWELL_HB_DEF_N_AP);
+}
+
static inline bool iwl_mvm_is_oce_supported(struct iwl_mvm *mvm)
{
/* OCE should never be enabled for LMAC scan FWs */
@@ -1532,6 +1541,7 @@ void iwl_mvm_hwrate_to_tx_rate(u32 rate_n_flags,
enum nl80211_band band,
struct ieee80211_tx_rate *r);
u8 iwl_mvm_mac80211_idx_to_hwrate(int rate_idx);
+u8 iwl_mvm_mac80211_ac_to_ucode_ac(enum ieee80211_ac_numbers ac);
void iwl_mvm_dump_nic_error_log(struct iwl_mvm *mvm);
u8 first_antenna(u8 mask);
u8 iwl_mvm_next_antenna(struct iwl_mvm *mvm, u8 valid, u8 last_idx);
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/nvm.c b/drivers/net/wireless/intel/iwlwifi/mvm/nvm.c
index 7bdbd010ae6b..719f793b3487 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/nvm.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/nvm.c
@@ -620,6 +620,7 @@ void iwl_mvm_rx_chub_update_mcc(struct iwl_mvm *mvm,
enum iwl_mcc_source src;
char mcc[3];
struct ieee80211_regdomain *regd;
+ int wgds_tbl_idx;
lockdep_assert_held(&mvm->mutex);
@@ -643,6 +644,14 @@ void iwl_mvm_rx_chub_update_mcc(struct iwl_mvm *mvm,
if (IS_ERR_OR_NULL(regd))
return;
+ wgds_tbl_idx = iwl_mvm_get_sar_geo_profile(mvm);
+ if (wgds_tbl_idx < 0)
+ IWL_DEBUG_INFO(mvm, "SAR WGDS is disabled (%d)\n",
+ wgds_tbl_idx);
+ else
+ IWL_DEBUG_INFO(mvm, "SAR WGDS: geo profile %d is configured\n",
+ wgds_tbl_idx);
+
regulatory_set_wiphy_regd(mvm->hw->wiphy, regd);
kfree(regd);
}
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
index fad3bf563712..d7d6f3398f86 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
@@ -564,24 +564,24 @@ unlock:
static int iwl_mvm_fwrt_dump_start(void *ctx)
{
struct iwl_mvm *mvm = ctx;
- int ret;
+ int ret = 0;
+
+ mutex_lock(&mvm->mutex);
ret = iwl_mvm_ref_sync(mvm, IWL_MVM_REF_FW_DBG_COLLECT);
if (ret)
- return ret;
-
- mutex_lock(&mvm->mutex);
+ mutex_unlock(&mvm->mutex);
- return 0;
+ return ret;
}
static void iwl_mvm_fwrt_dump_end(void *ctx)
{
struct iwl_mvm *mvm = ctx;
- mutex_unlock(&mvm->mutex);
-
iwl_mvm_unref(mvm, IWL_MVM_REF_FW_DBG_COLLECT);
+
+ mutex_unlock(&mvm->mutex);
}
static bool iwl_mvm_fwrt_fw_running(void *ctx)
@@ -799,11 +799,11 @@ iwl_op_mode_mvm_start(struct iwl_trans *trans, const struct iwl_cfg *cfg,
iwl_trans_configure(mvm->trans, &trans_cfg);
trans->rx_mpdu_cmd = REPLY_RX_MPDU_CMD;
- trans->dbg_dest_tlv = mvm->fw->dbg.dest_tlv;
- trans->dbg_n_dest_reg = mvm->fw->dbg.n_dest_reg;
- memcpy(trans->dbg_conf_tlv, mvm->fw->dbg.conf_tlv,
- sizeof(trans->dbg_conf_tlv));
- trans->dbg_trigger_tlv = mvm->fw->dbg.trigger_tlv;
+ trans->dbg.dest_tlv = mvm->fw->dbg.dest_tlv;
+ trans->dbg.n_dest_reg = mvm->fw->dbg.n_dest_reg;
+ memcpy(trans->dbg.conf_tlv, mvm->fw->dbg.conf_tlv,
+ sizeof(trans->dbg.conf_tlv));
+ trans->dbg.trigger_tlv = mvm->fw->dbg.trigger_tlv;
trans->iml = mvm->fw->iml;
trans->iml_len = mvm->fw->iml_len;
@@ -880,7 +880,7 @@ iwl_op_mode_mvm_start(struct iwl_trans *trans, const struct iwl_cfg *cfg,
return op_mode;
out_free:
- iwl_fw_flush_dump(&mvm->fwrt);
+ iwl_fw_flush_dumps(&mvm->fwrt);
iwl_fw_runtime_free(&mvm->fwrt);
if (iwlmvm_mod_params.init_dbg)
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c
index be62f499c595..08b67812e94e 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c
@@ -101,7 +101,7 @@ static u8 rs_fw_sgi_cw_support(struct ieee80211_sta *sta)
struct ieee80211_sta_he_cap *he_cap = &sta->he_cap;
u8 supp = 0;
- if (he_cap && he_cap->has_he)
+ if (he_cap->has_he)
return 0;
if (ht_cap->cap & IEEE80211_HT_CAP_SGI_20)
@@ -123,12 +123,12 @@ static u16 rs_fw_get_config_flags(struct iwl_mvm *mvm,
struct ieee80211_sta_ht_cap *ht_cap = &sta->ht_cap;
struct ieee80211_sta_vht_cap *vht_cap = &sta->vht_cap;
struct ieee80211_sta_he_cap *he_cap = &sta->he_cap;
- bool vht_ena = vht_cap && vht_cap->vht_supported;
+ bool vht_ena = vht_cap->vht_supported;
u16 flags = 0;
if (mvm->cfg->ht_params->stbc &&
(num_of_ant(iwl_mvm_get_valid_tx_ant(mvm)) > 1)) {
- if (he_cap && he_cap->has_he) {
+ if (he_cap->has_he) {
if (he_cap->he_cap_elem.phy_cap_info[2] &
IEEE80211_HE_PHY_CAP2_STBC_RX_UNDER_80MHZ)
flags |= IWL_TLC_MNG_CFG_FLAGS_STBC_MSK;
@@ -136,15 +136,14 @@ static u16 rs_fw_get_config_flags(struct iwl_mvm *mvm,
if (he_cap->he_cap_elem.phy_cap_info[7] &
IEEE80211_HE_PHY_CAP7_STBC_RX_ABOVE_80MHZ)
flags |= IWL_TLC_MNG_CFG_FLAGS_HE_STBC_160MHZ_MSK;
- } else if ((ht_cap &&
- (ht_cap->cap & IEEE80211_HT_CAP_RX_STBC)) ||
+ } else if ((ht_cap->cap & IEEE80211_HT_CAP_RX_STBC) ||
(vht_ena &&
(vht_cap->cap & IEEE80211_VHT_CAP_RXSTBC_MASK)))
flags |= IWL_TLC_MNG_CFG_FLAGS_STBC_MSK;
}
if (mvm->cfg->ht_params->ldpc &&
- ((ht_cap && (ht_cap->cap & IEEE80211_HT_CAP_LDPC_CODING)) ||
+ ((ht_cap->cap & IEEE80211_HT_CAP_LDPC_CODING) ||
(vht_ena && (vht_cap->cap & IEEE80211_VHT_CAP_RXLDPC))))
flags |= IWL_TLC_MNG_CFG_FLAGS_LDPC_MSK;
@@ -154,7 +153,7 @@ static u16 rs_fw_get_config_flags(struct iwl_mvm *mvm,
IEEE80211_HE_PHY_CAP1_LDPC_CODING_IN_PAYLOAD))
flags &= ~IWL_TLC_MNG_CFG_FLAGS_LDPC_MSK;
- if (he_cap && he_cap->has_he &&
+ if (he_cap->has_he &&
(he_cap->he_cap_elem.phy_cap_info[3] &
IEEE80211_HE_PHY_CAP3_DCM_MAX_CONST_RX_MASK))
flags |= IWL_TLC_MNG_CFG_FLAGS_HE_DCM_NSS_1_MSK;
@@ -293,13 +292,13 @@ static void rs_fw_set_supp_rates(struct ieee80211_sta *sta,
cmd->mode = IWL_TLC_MNG_MODE_NON_HT;
/* HT/VHT rates */
- if (he_cap && he_cap->has_he) {
+ if (he_cap->has_he) {
cmd->mode = IWL_TLC_MNG_MODE_HE;
rs_fw_he_set_enabled_rates(sta, sband, cmd);
- } else if (vht_cap && vht_cap->vht_supported) {
+ } else if (vht_cap->vht_supported) {
cmd->mode = IWL_TLC_MNG_MODE_VHT;
rs_fw_vht_set_enabled_rates(sta, vht_cap, cmd);
- } else if (ht_cap && ht_cap->ht_supported) {
+ } else if (ht_cap->ht_supported) {
cmd->mode = IWL_TLC_MNG_MODE_HT;
cmd->ht_rates[0][0] = cpu_to_le16(ht_cap->mcs.rx_mask[0]);
cmd->ht_rates[1][0] = cpu_to_le16(ht_cap->mcs.rx_mask[1]);
@@ -344,7 +343,7 @@ void iwl_mvm_tlc_update_notif(struct iwl_mvm *mvm,
lq_sta->last_rate_n_flags);
}
- if (flags & IWL_TLC_NOTIF_FLAG_AMSDU) {
+ if (flags & IWL_TLC_NOTIF_FLAG_AMSDU && !mvmsta->orig_amsdu_len) {
u16 size = le32_to_cpu(notif->amsdu_size);
int i;
@@ -381,7 +380,7 @@ static u16 rs_fw_get_max_amsdu_len(struct ieee80211_sta *sta)
const struct ieee80211_sta_vht_cap *vht_cap = &sta->vht_cap;
const struct ieee80211_sta_ht_cap *ht_cap = &sta->ht_cap;
- if (vht_cap && vht_cap->vht_supported) {
+ if (vht_cap->vht_supported) {
switch (vht_cap->cap & IEEE80211_VHT_CAP_MAX_MPDU_MASK) {
case IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_11454:
return IEEE80211_MAX_MPDU_LEN_VHT_11454;
@@ -391,7 +390,7 @@ static u16 rs_fw_get_max_amsdu_len(struct ieee80211_sta *sta)
return IEEE80211_MAX_MPDU_LEN_VHT_3895;
}
- } else if (ht_cap && ht_cap->ht_supported) {
+ } else if (ht_cap->ht_supported) {
if (ht_cap->cap & IEEE80211_HT_CAP_MAX_AMSDU)
/*
* agg is offloaded so we need to assume that agg
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rs.c b/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
index 63fdb4e68e9d..8c9069f28a58 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
@@ -2949,10 +2949,6 @@ static void rs_drv_get_rate(void *mvm_r, struct ieee80211_sta *sta,
mvm_sta = NULL;
}
- /* Send management frames and NO_ACK data using lowest rate. */
- if (rate_control_send_low(sta, mvm_sta, txrc))
- return;
-
if (!mvm_sta)
return;
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
index d9ddf9ff6428..c284e6975b1b 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
@@ -83,8 +83,10 @@
#define IWL_SCAN_ADWELL_MAX_BUDGET_FULL_SCAN 300
/* adaptive dwell max budget time [TU] for directed scan */
#define IWL_SCAN_ADWELL_MAX_BUDGET_DIRECTED_SCAN 100
-/* adaptive dwell default APs number */
-#define IWL_SCAN_ADWELL_DEFAULT_N_APS 2
+/* adaptive dwell default high band APs number */
+#define IWL_SCAN_ADWELL_DEFAULT_HB_N_APS 8
+/* adaptive dwell default low band APs number */
+#define IWL_SCAN_ADWELL_DEFAULT_LB_N_APS 2
/* adaptive dwell default APs number in social channels (1, 6, 11) */
#define IWL_SCAN_ADWELL_DEFAULT_N_APS_SOCIAL 10
@@ -1288,7 +1290,11 @@ static void iwl_mvm_scan_umac_dwell(struct iwl_mvm *mvm,
cmd->v7.adwell_default_n_aps_social =
IWL_SCAN_ADWELL_DEFAULT_N_APS_SOCIAL;
cmd->v7.adwell_default_n_aps =
- IWL_SCAN_ADWELL_DEFAULT_N_APS;
+ IWL_SCAN_ADWELL_DEFAULT_LB_N_APS;
+
+ if (iwl_mvm_is_adwell_hb_ap_num_supported(mvm))
+ cmd->v9.adwell_default_hb_n_aps =
+ IWL_SCAN_ADWELL_DEFAULT_HB_N_APS;
/* if custom max budget was configured with debugfs */
if (IWL_MVM_ADWELL_MAX_BUDGET)
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.h b/drivers/net/wireless/intel/iwlwifi/mvm/sta.h
index b4d4071b865d..4487cc3e07c1 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.h
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.h
@@ -386,6 +386,9 @@ struct iwl_mvm_rxq_dup_data {
* @amsdu_enabled: bitmap of TX AMSDU allowed TIDs.
* In case TLC offload is not active it is either 0xFFFF or 0.
* @max_amsdu_len: max AMSDU length
+ * @orig_amsdu_len: used to save the original amsdu_len when it is changed via
+ * debugfs. If it's set to 0, it means that it is not set via
+ * debugfs.
* @agg_tids: bitmap of tids whose status is operational aggregated (IWL_AGG_ON)
* @sleep_tx_count: the number of frames that we told the firmware to let out
* even when that station is asleep. This is useful in case the queue
@@ -434,6 +437,7 @@ struct iwl_mvm_sta {
bool disable_tx;
u16 amsdu_enabled;
u16 max_amsdu_len;
+ u16 orig_amsdu_len;
bool sleeping;
u8 agg_tids;
u8 sleep_tx_count;
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
index 0c2aabc842f9..a3e5d88f1c07 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
@@ -726,6 +726,9 @@ int iwl_mvm_tx_skb_non_sta(struct iwl_mvm *mvm, struct sk_buff *skb)
memcpy(&info, skb->cb, sizeof(info));
+ if (WARN_ON_ONCE(skb->len > IEEE80211_MAX_DATA_LEN + hdrlen))
+ return -1;
+
if (WARN_ON_ONCE(info.flags & IEEE80211_TX_CTL_AMPDU))
return -1;
@@ -893,18 +896,15 @@ static int iwl_mvm_tx_tso(struct iwl_mvm *mvm, struct sk_buff *skb,
unsigned int mss = skb_shinfo(skb)->gso_size;
unsigned int num_subframes, tcp_payload_len, subf_len, max_amsdu_len;
u16 snap_ip_tcp, pad;
- unsigned int dbg_max_amsdu_len;
netdev_features_t netdev_flags = NETIF_F_CSUM_MASK | NETIF_F_SG;
u8 tid;
snap_ip_tcp = 8 + skb_transport_header(skb) - skb_network_header(skb) +
tcp_hdrlen(skb);
- dbg_max_amsdu_len = READ_ONCE(mvm->max_amsdu_len);
-
if (!mvmsta->max_amsdu_len ||
!ieee80211_is_data_qos(hdr->frame_control) ||
- (!mvmsta->amsdu_enabled && !dbg_max_amsdu_len))
+ !mvmsta->amsdu_enabled)
return iwl_mvm_tx_tso_segment(skb, 1, netdev_flags, mpdus_skb);
/*
@@ -936,10 +936,6 @@ static int iwl_mvm_tx_tso(struct iwl_mvm *mvm, struct sk_buff *skb,
max_amsdu_len = iwl_mvm_max_amsdu_size(mvm, sta, tid);
- if (unlikely(dbg_max_amsdu_len))
- max_amsdu_len = min_t(unsigned int, max_amsdu_len,
- dbg_max_amsdu_len);
-
/*
* Limit A-MSDU in A-MPDU to 4095 bytes when VHT is not
* supported. This is a spec requirement (IEEE 802.11-2015
@@ -1063,7 +1059,9 @@ static int iwl_mvm_tx_pkt_queued(struct iwl_mvm *mvm,
}
/*
- * Sets the fields in the Tx cmd that are crypto related
+ * Sets the fields in the Tx cmd that are crypto related.
+ *
+ * This function must be called with BHs disabled.
*/
static int iwl_mvm_tx_mpdu(struct iwl_mvm *mvm, struct sk_buff *skb,
struct ieee80211_tx_info *info,
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/utils.c b/drivers/net/wireless/intel/iwlwifi/mvm/utils.c
index 72cd5b3f2d8d..9ecd5f09615a 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/utils.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/utils.c
@@ -238,6 +238,18 @@ u8 iwl_mvm_mac80211_idx_to_hwrate(int rate_idx)
return fw_rate_idx_to_plcp[rate_idx];
}
+u8 iwl_mvm_mac80211_ac_to_ucode_ac(enum ieee80211_ac_numbers ac)
+{
+ static const u8 mac80211_ac_to_ucode_ac[] = {
+ AC_VO,
+ AC_VI,
+ AC_BE,
+ AC_BK
+ };
+
+ return mac80211_ac_to_ucode_ac[ac];
+}
+
void iwl_mvm_rx_fw_error(struct iwl_mvm *mvm, struct iwl_rx_cmd_buffer *rxb)
{
struct iwl_rx_packet *pkt = rxb_addr(rxb);
@@ -457,10 +469,10 @@ static void iwl_mvm_dump_umac_error_log(struct iwl_mvm *mvm)
{
struct iwl_trans *trans = mvm->trans;
struct iwl_umac_error_event_table table;
- u32 base = mvm->trans->umac_error_event_table;
+ u32 base = mvm->trans->dbg.umac_error_event_table;
if (!mvm->support_umac_log &&
- !(mvm->trans->error_event_table_tlv_status &
+ !(mvm->trans->dbg.error_event_table_tlv_status &
IWL_ERROR_EVENT_TABLE_UMAC))
return;
@@ -496,7 +508,7 @@ static void iwl_mvm_dump_lmac_error_log(struct iwl_mvm *mvm, u8 lmac_num)
{
struct iwl_trans *trans = mvm->trans;
struct iwl_error_event_table table;
- u32 val, base = mvm->trans->lmac_error_event_table[lmac_num];
+ u32 val, base = mvm->trans->dbg.lmac_error_event_table[lmac_num];
if (mvm->fwrt.cur_fw_img == IWL_UCODE_INIT) {
if (!base)
@@ -592,7 +604,7 @@ void iwl_mvm_dump_nic_error_log(struct iwl_mvm *mvm)
iwl_mvm_dump_lmac_error_log(mvm, 0);
- if (mvm->trans->lmac_error_event_table[1])
+ if (mvm->trans->dbg.lmac_error_event_table[1])
iwl_mvm_dump_lmac_error_log(mvm, 1);
iwl_mvm_dump_umac_error_log(mvm);
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
index f496d1bcb643..5e86783d616b 100644
--- a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
@@ -96,13 +96,13 @@ int iwl_pcie_ctxt_info_gen3_init(struct iwl_trans *trans,
cpu_to_le64(trans_pcie->rxq->bd_dma);
/* Configure debug, for integration */
- if (!trans->ini_valid)
+ if (!trans->dbg.ini_valid)
iwl_pcie_alloc_fw_monitor(trans, 0);
- if (trans->num_blocks) {
+ if (trans->dbg.num_blocks) {
prph_sc_ctrl->hwm_cfg.hwm_base_addr =
- cpu_to_le64(trans->fw_mon[0].physical);
+ cpu_to_le64(trans->dbg.fw_mon[0].physical);
prph_sc_ctrl->hwm_cfg.hwm_size =
- cpu_to_le32(trans->fw_mon[0].size);
+ cpu_to_le32(trans->dbg.fw_mon[0].size);
}
/* allocate ucode sections in dram and set addresses */
@@ -169,7 +169,7 @@ int iwl_pcie_ctxt_info_gen3_init(struct iwl_trans *trans,
memcpy(iml_img, trans->iml, trans->iml_len);
- iwl_enable_interrupts(trans);
+ iwl_enable_fw_load_int_ctx_info(trans);
/* kick FW self load */
iwl_write64(trans, CSR_CTXT_INFO_ADDR,
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c
index 8969b47bacf2..d38cefbb779e 100644
--- a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c
@@ -222,7 +222,7 @@ int iwl_pcie_ctxt_info_init(struct iwl_trans *trans,
trans_pcie->ctxt_info = ctxt_info;
- iwl_enable_interrupts(trans);
+ iwl_enable_fw_load_int_ctx_info(trans);
/* Configure debug, if exists */
if (iwl_pcie_dbg_on(trans))
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
index cd035061cdd5..ccc83fd74649 100644
--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
@@ -513,62 +513,56 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
{IWL_PCI_DEVICE(0x24FD, 0x9074, iwl8265_2ac_cfg)},
/* 9000 Series */
- {IWL_PCI_DEVICE(0x02F0, 0x0030, iwl9560_2ac_160_cfg_soc)},
- {IWL_PCI_DEVICE(0x02F0, 0x0034, iwl9560_2ac_cfg_soc)},
- {IWL_PCI_DEVICE(0x02F0, 0x0038, iwl9560_2ac_160_cfg_soc)},
- {IWL_PCI_DEVICE(0x02F0, 0x003C, iwl9560_2ac_160_cfg_soc)},
- {IWL_PCI_DEVICE(0x02F0, 0x0040, iwl_ax101_cfg_qu_hr)},
- {IWL_PCI_DEVICE(0x02F0, 0x0044, iwl_ax101_cfg_qu_hr)},
- {IWL_PCI_DEVICE(0x02F0, 0x0060, iwl9461_2ac_cfg_soc)},
- {IWL_PCI_DEVICE(0x02F0, 0x0064, iwl9461_2ac_cfg_soc)},
- {IWL_PCI_DEVICE(0x02F0, 0x00A0, iwl9462_2ac_cfg_soc)},
- {IWL_PCI_DEVICE(0x02F0, 0x00A4, iwl9462_2ac_cfg_soc)},
- {IWL_PCI_DEVICE(0x02F0, 0x0230, iwl9560_2ac_cfg_soc)},
- {IWL_PCI_DEVICE(0x02F0, 0x0234, iwl9560_2ac_cfg_soc)},
- {IWL_PCI_DEVICE(0x02F0, 0x0238, iwl9560_2ac_cfg_soc)},
- {IWL_PCI_DEVICE(0x02F0, 0x023C, iwl9560_2ac_cfg_soc)},
- {IWL_PCI_DEVICE(0x02F0, 0x0244, iwl_ax101_cfg_qu_hr)},
- {IWL_PCI_DEVICE(0x02F0, 0x0260, iwl9461_2ac_cfg_soc)},
- {IWL_PCI_DEVICE(0x02F0, 0x0264, iwl9461_2ac_cfg_soc)},
- {IWL_PCI_DEVICE(0x02F0, 0x02A0, iwl9462_2ac_cfg_soc)},
- {IWL_PCI_DEVICE(0x02F0, 0x02A4, iwl9462_2ac_cfg_soc)},
- {IWL_PCI_DEVICE(0x02F0, 0x1551, iwl9560_killer_s_2ac_cfg_soc)},
- {IWL_PCI_DEVICE(0x02F0, 0x1552, iwl9560_killer_2ac_cfg_soc)},
- {IWL_PCI_DEVICE(0x02F0, 0x2030, iwl9560_2ac_160_cfg_soc)},
- {IWL_PCI_DEVICE(0x02F0, 0x2034, iwl9560_2ac_160_cfg_soc)},
- {IWL_PCI_DEVICE(0x02F0, 0x4030, iwl9560_2ac_160_cfg_soc)},
- {IWL_PCI_DEVICE(0x02F0, 0x4034, iwl9560_2ac_160_cfg_soc)},
- {IWL_PCI_DEVICE(0x02F0, 0x40A4, iwl9462_2ac_cfg_soc)},
- {IWL_PCI_DEVICE(0x02F0, 0x4234, iwl9560_2ac_cfg_soc)},
- {IWL_PCI_DEVICE(0x02F0, 0x42A4, iwl9462_2ac_cfg_soc)},
- {IWL_PCI_DEVICE(0x06F0, 0x0030, iwl9560_2ac_160_cfg_soc)},
- {IWL_PCI_DEVICE(0x06F0, 0x0034, iwl9560_2ac_cfg_soc)},
- {IWL_PCI_DEVICE(0x06F0, 0x0038, iwl9560_2ac_160_cfg_soc)},
- {IWL_PCI_DEVICE(0x06F0, 0x003C, iwl9560_2ac_160_cfg_soc)},
- {IWL_PCI_DEVICE(0x06F0, 0x0040, iwl_ax101_cfg_qu_hr)},
- {IWL_PCI_DEVICE(0x06F0, 0x0044, iwl_ax101_cfg_qu_hr)},
- {IWL_PCI_DEVICE(0x06F0, 0x0060, iwl9461_2ac_cfg_soc)},
- {IWL_PCI_DEVICE(0x06F0, 0x0064, iwl9461_2ac_cfg_soc)},
- {IWL_PCI_DEVICE(0x06F0, 0x00A0, iwl9462_2ac_cfg_soc)},
- {IWL_PCI_DEVICE(0x06F0, 0x00A4, iwl9462_2ac_cfg_soc)},
- {IWL_PCI_DEVICE(0x06F0, 0x0230, iwl9560_2ac_cfg_soc)},
- {IWL_PCI_DEVICE(0x06F0, 0x0234, iwl9560_2ac_cfg_soc)},
- {IWL_PCI_DEVICE(0x06F0, 0x0238, iwl9560_2ac_cfg_soc)},
- {IWL_PCI_DEVICE(0x06F0, 0x023C, iwl9560_2ac_cfg_soc)},
- {IWL_PCI_DEVICE(0x06F0, 0x0244, iwl_ax101_cfg_qu_hr)},
- {IWL_PCI_DEVICE(0x06F0, 0x0260, iwl9461_2ac_cfg_soc)},
- {IWL_PCI_DEVICE(0x06F0, 0x0264, iwl9461_2ac_cfg_soc)},
- {IWL_PCI_DEVICE(0x06F0, 0x02A0, iwl9462_2ac_cfg_soc)},
- {IWL_PCI_DEVICE(0x06F0, 0x02A4, iwl9462_2ac_cfg_soc)},
- {IWL_PCI_DEVICE(0x06F0, 0x1551, iwl9560_killer_s_2ac_cfg_soc)},
- {IWL_PCI_DEVICE(0x06F0, 0x1552, iwl9560_killer_2ac_cfg_soc)},
- {IWL_PCI_DEVICE(0x06F0, 0x2030, iwl9560_2ac_160_cfg_soc)},
- {IWL_PCI_DEVICE(0x06F0, 0x2034, iwl9560_2ac_160_cfg_soc)},
- {IWL_PCI_DEVICE(0x06F0, 0x4030, iwl9560_2ac_160_cfg_soc)},
- {IWL_PCI_DEVICE(0x06F0, 0x4034, iwl9560_2ac_160_cfg_soc)},
- {IWL_PCI_DEVICE(0x06F0, 0x40A4, iwl9462_2ac_cfg_soc)},
- {IWL_PCI_DEVICE(0x06F0, 0x4234, iwl9560_2ac_cfg_soc)},
- {IWL_PCI_DEVICE(0x06F0, 0x42A4, iwl9462_2ac_cfg_soc)},
+ {IWL_PCI_DEVICE(0x02F0, 0x0030, iwl9560_2ac_160_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x02F0, 0x0034, iwl9560_2ac_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x02F0, 0x0038, iwl9560_2ac_160_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x02F0, 0x003C, iwl9560_2ac_160_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x02F0, 0x0060, iwl9461_2ac_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x02F0, 0x0064, iwl9461_2ac_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x02F0, 0x00A0, iwl9462_2ac_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x02F0, 0x00A4, iwl9462_2ac_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x02F0, 0x0230, iwl9560_2ac_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x02F0, 0x0234, iwl9560_2ac_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x02F0, 0x0238, iwl9560_2ac_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x02F0, 0x023C, iwl9560_2ac_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x02F0, 0x0260, iwl9461_2ac_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x02F0, 0x0264, iwl9461_2ac_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x02F0, 0x02A0, iwl9462_2ac_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x02F0, 0x02A4, iwl9462_2ac_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x02F0, 0x1551, iwl9560_killer_s_2ac_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x02F0, 0x1552, iwl9560_killer_i_2ac_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x02F0, 0x2030, iwl9560_2ac_160_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x02F0, 0x2034, iwl9560_2ac_160_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x02F0, 0x4030, iwl9560_2ac_160_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x02F0, 0x4034, iwl9560_2ac_160_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x02F0, 0x40A4, iwl9462_2ac_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x02F0, 0x4234, iwl9560_2ac_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x02F0, 0x42A4, iwl9462_2ac_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x06F0, 0x0030, iwl9560_2ac_160_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x06F0, 0x0034, iwl9560_2ac_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x06F0, 0x0038, iwl9560_2ac_160_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x06F0, 0x003C, iwl9560_2ac_160_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x06F0, 0x0060, iwl9461_2ac_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x06F0, 0x0064, iwl9461_2ac_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x06F0, 0x00A0, iwl9462_2ac_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x06F0, 0x00A4, iwl9462_2ac_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x06F0, 0x0230, iwl9560_2ac_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x06F0, 0x0234, iwl9560_2ac_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x06F0, 0x0238, iwl9560_2ac_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x06F0, 0x023C, iwl9560_2ac_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x06F0, 0x0260, iwl9461_2ac_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x06F0, 0x0264, iwl9461_2ac_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x06F0, 0x02A0, iwl9462_2ac_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x06F0, 0x02A4, iwl9462_2ac_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x06F0, 0x1551, iwl9560_killer_s_2ac_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x06F0, 0x1552, iwl9560_killer_i_2ac_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x06F0, 0x2030, iwl9560_2ac_160_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x06F0, 0x2034, iwl9560_2ac_160_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x06F0, 0x4030, iwl9560_2ac_160_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x06F0, 0x4034, iwl9560_2ac_160_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x06F0, 0x40A4, iwl9462_2ac_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x06F0, 0x4234, iwl9560_2ac_cfg_quz_a0_jf_b0_soc)},
+ {IWL_PCI_DEVICE(0x06F0, 0x42A4, iwl9462_2ac_cfg_quz_a0_jf_b0_soc)},
{IWL_PCI_DEVICE(0x2526, 0x0010, iwl9260_2ac_160_cfg)},
{IWL_PCI_DEVICE(0x2526, 0x0014, iwl9260_2ac_160_cfg)},
{IWL_PCI_DEVICE(0x2526, 0x0018, iwl9260_2ac_160_cfg)},
@@ -621,7 +615,6 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
{IWL_PCI_DEVICE(0x2720, 0x0034, iwl9560_2ac_160_cfg)},
{IWL_PCI_DEVICE(0x2720, 0x0038, iwl9560_2ac_160_cfg)},
{IWL_PCI_DEVICE(0x2720, 0x003C, iwl9560_2ac_160_cfg)},
- {IWL_PCI_DEVICE(0x2720, 0x0044, iwl_ax101_cfg_qu_hr)},
{IWL_PCI_DEVICE(0x2720, 0x0060, iwl9461_2ac_cfg_soc)},
{IWL_PCI_DEVICE(0x2720, 0x0064, iwl9461_2ac_cfg_soc)},
{IWL_PCI_DEVICE(0x2720, 0x00A0, iwl9462_2ac_cfg_soc)},
@@ -630,7 +623,6 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
{IWL_PCI_DEVICE(0x2720, 0x0234, iwl9560_2ac_cfg)},
{IWL_PCI_DEVICE(0x2720, 0x0238, iwl9560_2ac_cfg)},
{IWL_PCI_DEVICE(0x2720, 0x023C, iwl9560_2ac_cfg)},
- {IWL_PCI_DEVICE(0x2720, 0x0244, iwl_ax101_cfg_qu_hr)},
{IWL_PCI_DEVICE(0x2720, 0x0260, iwl9461_2ac_cfg_soc)},
{IWL_PCI_DEVICE(0x2720, 0x0264, iwl9461_2ac_cfg_soc)},
{IWL_PCI_DEVICE(0x2720, 0x02A0, iwl9462_2ac_cfg_soc)},
@@ -708,7 +700,6 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
{IWL_PCI_DEVICE(0x34F0, 0x0034, iwl9560_2ac_cfg_qu_b0_jf_b0)},
{IWL_PCI_DEVICE(0x34F0, 0x0038, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
{IWL_PCI_DEVICE(0x34F0, 0x003C, iwl9560_2ac_160_cfg_qu_b0_jf_b0)},
- {IWL_PCI_DEVICE(0x34F0, 0x0044, iwl_ax101_cfg_qu_hr)},
{IWL_PCI_DEVICE(0x34F0, 0x0060, iwl9461_2ac_cfg_qu_b0_jf_b0)},
{IWL_PCI_DEVICE(0x34F0, 0x0064, iwl9461_2ac_cfg_qu_b0_jf_b0)},
{IWL_PCI_DEVICE(0x34F0, 0x00A0, iwl9462_2ac_cfg_qu_b0_jf_b0)},
@@ -717,7 +708,6 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
{IWL_PCI_DEVICE(0x34F0, 0x0234, iwl9560_2ac_cfg_qu_b0_jf_b0)},
{IWL_PCI_DEVICE(0x34F0, 0x0238, iwl9560_2ac_cfg_qu_b0_jf_b0)},
{IWL_PCI_DEVICE(0x34F0, 0x023C, iwl9560_2ac_cfg_qu_b0_jf_b0)},
- {IWL_PCI_DEVICE(0x34F0, 0x0244, iwl_ax101_cfg_qu_hr)},
{IWL_PCI_DEVICE(0x34F0, 0x0260, iwl9461_2ac_cfg_qu_b0_jf_b0)},
{IWL_PCI_DEVICE(0x34F0, 0x0264, iwl9461_2ac_cfg_qu_b0_jf_b0)},
{IWL_PCI_DEVICE(0x34F0, 0x02A0, iwl9462_2ac_cfg_qu_b0_jf_b0)},
@@ -764,7 +754,6 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
{IWL_PCI_DEVICE(0x43F0, 0x0034, iwl9560_2ac_cfg_soc)},
{IWL_PCI_DEVICE(0x43F0, 0x0038, iwl9560_2ac_160_cfg_soc)},
{IWL_PCI_DEVICE(0x43F0, 0x003C, iwl9560_2ac_160_cfg_soc)},
- {IWL_PCI_DEVICE(0x43F0, 0x0044, iwl_ax101_cfg_qu_hr)},
{IWL_PCI_DEVICE(0x43F0, 0x0060, iwl9461_2ac_cfg_soc)},
{IWL_PCI_DEVICE(0x43F0, 0x0064, iwl9461_2ac_cfg_soc)},
{IWL_PCI_DEVICE(0x43F0, 0x00A0, iwl9462_2ac_cfg_soc)},
@@ -773,7 +762,6 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
{IWL_PCI_DEVICE(0x43F0, 0x0234, iwl9560_2ac_cfg_soc)},
{IWL_PCI_DEVICE(0x43F0, 0x0238, iwl9560_2ac_cfg_soc)},
{IWL_PCI_DEVICE(0x43F0, 0x023C, iwl9560_2ac_cfg_soc)},
- {IWL_PCI_DEVICE(0x43F0, 0x0244, iwl_ax101_cfg_qu_hr)},
{IWL_PCI_DEVICE(0x43F0, 0x0260, iwl9461_2ac_cfg_soc)},
{IWL_PCI_DEVICE(0x43F0, 0x0264, iwl9461_2ac_cfg_soc)},
{IWL_PCI_DEVICE(0x43F0, 0x02A0, iwl9462_2ac_cfg_soc)},
@@ -833,7 +821,6 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
{IWL_PCI_DEVICE(0xA0F0, 0x0034, iwl9560_2ac_cfg_soc)},
{IWL_PCI_DEVICE(0xA0F0, 0x0038, iwl9560_2ac_160_cfg_soc)},
{IWL_PCI_DEVICE(0xA0F0, 0x003C, iwl9560_2ac_160_cfg_soc)},
- {IWL_PCI_DEVICE(0xA0F0, 0x0044, iwl_ax101_cfg_qu_hr)},
{IWL_PCI_DEVICE(0xA0F0, 0x0060, iwl9461_2ac_cfg_soc)},
{IWL_PCI_DEVICE(0xA0F0, 0x0064, iwl9461_2ac_cfg_soc)},
{IWL_PCI_DEVICE(0xA0F0, 0x00A0, iwl9462_2ac_cfg_soc)},
@@ -842,7 +829,6 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
{IWL_PCI_DEVICE(0xA0F0, 0x0234, iwl9560_2ac_cfg_soc)},
{IWL_PCI_DEVICE(0xA0F0, 0x0238, iwl9560_2ac_cfg_soc)},
{IWL_PCI_DEVICE(0xA0F0, 0x023C, iwl9560_2ac_cfg_soc)},
- {IWL_PCI_DEVICE(0xA0F0, 0x0244, iwl_ax101_cfg_qu_hr)},
{IWL_PCI_DEVICE(0xA0F0, 0x0260, iwl9461_2ac_cfg_soc)},
{IWL_PCI_DEVICE(0xA0F0, 0x0264, iwl9461_2ac_cfg_soc)},
{IWL_PCI_DEVICE(0xA0F0, 0x02A0, iwl9462_2ac_cfg_soc)},
@@ -890,63 +876,80 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
{IWL_PCI_DEVICE(0x2720, 0x0030, iwl9560_2ac_cfg_qnj_jf_b0)},
/* 22000 Series */
- {IWL_PCI_DEVICE(0x02F0, 0x0070, iwl_ax101_cfg_qu_hr)},
- {IWL_PCI_DEVICE(0x02F0, 0x0074, iwl_ax101_cfg_qu_hr)},
- {IWL_PCI_DEVICE(0x02F0, 0x0078, iwl_ax101_cfg_qu_hr)},
- {IWL_PCI_DEVICE(0x02F0, 0x007C, iwl_ax101_cfg_qu_hr)},
- {IWL_PCI_DEVICE(0x02F0, 0x0310, iwl_ax101_cfg_qu_hr)},
- {IWL_PCI_DEVICE(0x02F0, 0x1651, killer1650s_2ax_cfg_qu_b0_hr_b0)},
- {IWL_PCI_DEVICE(0x02F0, 0x1652, killer1650i_2ax_cfg_qu_b0_hr_b0)},
- {IWL_PCI_DEVICE(0x02F0, 0x4070, iwl_ax101_cfg_qu_hr)},
- {IWL_PCI_DEVICE(0x06F0, 0x0070, iwl_ax101_cfg_qu_hr)},
- {IWL_PCI_DEVICE(0x06F0, 0x0074, iwl_ax101_cfg_qu_hr)},
- {IWL_PCI_DEVICE(0x06F0, 0x0078, iwl_ax101_cfg_qu_hr)},
- {IWL_PCI_DEVICE(0x06F0, 0x007C, iwl_ax101_cfg_qu_hr)},
- {IWL_PCI_DEVICE(0x06F0, 0x0310, iwl_ax101_cfg_qu_hr)},
- {IWL_PCI_DEVICE(0x06F0, 0x1651, killer1650s_2ax_cfg_qu_b0_hr_b0)},
- {IWL_PCI_DEVICE(0x06F0, 0x1652, killer1650i_2ax_cfg_qu_b0_hr_b0)},
- {IWL_PCI_DEVICE(0x06F0, 0x4070, iwl_ax101_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0x02F0, 0x0070, iwl_ax201_cfg_quz_hr)},
+ {IWL_PCI_DEVICE(0x02F0, 0x0074, iwl_ax201_cfg_quz_hr)},
+ {IWL_PCI_DEVICE(0x02F0, 0x0078, iwl_ax201_cfg_quz_hr)},
+ {IWL_PCI_DEVICE(0x02F0, 0x007C, iwl_ax201_cfg_quz_hr)},
+ {IWL_PCI_DEVICE(0x02F0, 0x0244, iwl_ax101_cfg_quz_hr)},
+ {IWL_PCI_DEVICE(0x02F0, 0x0310, iwl_ax201_cfg_quz_hr)},
+ {IWL_PCI_DEVICE(0x02F0, 0x1651, iwl_ax1650s_cfg_quz_hr)},
+ {IWL_PCI_DEVICE(0x02F0, 0x1652, iwl_ax1650i_cfg_quz_hr)},
+ {IWL_PCI_DEVICE(0x02F0, 0x2074, iwl_ax201_cfg_quz_hr)},
+ {IWL_PCI_DEVICE(0x02F0, 0x4070, iwl_ax201_cfg_quz_hr)},
+ {IWL_PCI_DEVICE(0x02F0, 0x4244, iwl_ax101_cfg_quz_hr)},
+ {IWL_PCI_DEVICE(0x06F0, 0x0070, iwl_ax201_cfg_quz_hr)},
+ {IWL_PCI_DEVICE(0x06F0, 0x0074, iwl_ax201_cfg_quz_hr)},
+ {IWL_PCI_DEVICE(0x06F0, 0x0078, iwl_ax201_cfg_quz_hr)},
+ {IWL_PCI_DEVICE(0x06F0, 0x007C, iwl_ax201_cfg_quz_hr)},
+ {IWL_PCI_DEVICE(0x06F0, 0x0244, iwl_ax101_cfg_quz_hr)},
+ {IWL_PCI_DEVICE(0x06F0, 0x0310, iwl_ax201_cfg_quz_hr)},
+ {IWL_PCI_DEVICE(0x06F0, 0x1651, iwl_ax1650s_cfg_quz_hr)},
+ {IWL_PCI_DEVICE(0x06F0, 0x1652, iwl_ax1650i_cfg_quz_hr)},
+ {IWL_PCI_DEVICE(0x06F0, 0x2074, iwl_ax201_cfg_quz_hr)},
+ {IWL_PCI_DEVICE(0x06F0, 0x4070, iwl_ax201_cfg_quz_hr)},
+ {IWL_PCI_DEVICE(0x06F0, 0x4244, iwl_ax101_cfg_quz_hr)},
{IWL_PCI_DEVICE(0x2720, 0x0000, iwl_ax101_cfg_qu_hr)},
{IWL_PCI_DEVICE(0x2720, 0x0040, iwl_ax101_cfg_qu_hr)},
- {IWL_PCI_DEVICE(0x2720, 0x0070, iwl22000_2ac_cfg_hr_cdb)},
- {IWL_PCI_DEVICE(0x2720, 0x0074, iwl_ax101_cfg_qu_hr)},
- {IWL_PCI_DEVICE(0x2720, 0x0078, iwl_ax101_cfg_qu_hr)},
- {IWL_PCI_DEVICE(0x2720, 0x007C, iwl_ax101_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0x2720, 0x0044, iwl_ax101_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0x2720, 0x0070, iwl_ax201_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0x2720, 0x0074, iwl_ax201_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0x2720, 0x0078, iwl_ax201_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0x2720, 0x007C, iwl_ax201_cfg_qu_hr)},
{IWL_PCI_DEVICE(0x2720, 0x0090, iwl22000_2ac_cfg_hr_cdb)},
- {IWL_PCI_DEVICE(0x2720, 0x0310, iwl22000_2ac_cfg_hr_cdb)},
- {IWL_PCI_DEVICE(0x2720, 0x0A10, iwl22000_2ac_cfg_hr_cdb)},
+ {IWL_PCI_DEVICE(0x2720, 0x0244, iwl_ax101_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0x2720, 0x0310, iwl_ax201_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0x2720, 0x0A10, iwl_ax201_cfg_qu_hr)},
{IWL_PCI_DEVICE(0x2720, 0x1080, iwl_ax101_cfg_qu_hr)},
{IWL_PCI_DEVICE(0x2720, 0x1651, killer1650s_2ax_cfg_qu_b0_hr_b0)},
{IWL_PCI_DEVICE(0x2720, 0x1652, killer1650i_2ax_cfg_qu_b0_hr_b0)},
- {IWL_PCI_DEVICE(0x2720, 0x4070, iwl_ax101_cfg_qu_hr)},
- {IWL_PCI_DEVICE(0x34F0, 0x0040, iwl_ax101_cfg_qu_hr)},
- {IWL_PCI_DEVICE(0x34F0, 0x0070, iwl_ax101_cfg_qu_hr)},
- {IWL_PCI_DEVICE(0x34F0, 0x0074, iwl_ax101_cfg_qu_hr)},
- {IWL_PCI_DEVICE(0x34F0, 0x0078, iwl_ax101_cfg_qu_hr)},
- {IWL_PCI_DEVICE(0x34F0, 0x007C, iwl_ax101_cfg_qu_hr)},
- {IWL_PCI_DEVICE(0x34F0, 0x0310, iwl_ax101_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0x2720, 0x2074, iwl_ax201_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0x2720, 0x4070, iwl_ax201_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0x2720, 0x4244, iwl_ax101_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0x34F0, 0x0044, iwl_ax101_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0x34F0, 0x0070, iwl_ax201_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0x34F0, 0x0074, iwl_ax201_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0x34F0, 0x0078, iwl_ax201_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0x34F0, 0x007C, iwl_ax201_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0x34F0, 0x0244, iwl_ax101_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0x34F0, 0x0310, iwl_ax201_cfg_qu_hr)},
{IWL_PCI_DEVICE(0x34F0, 0x1651, killer1650s_2ax_cfg_qu_b0_hr_b0)},
{IWL_PCI_DEVICE(0x34F0, 0x1652, killer1650i_2ax_cfg_qu_b0_hr_b0)},
- {IWL_PCI_DEVICE(0x34F0, 0x4070, iwl_ax101_cfg_qu_hr)},
- {IWL_PCI_DEVICE(0x43F0, 0x0040, iwl_ax101_cfg_qu_hr)},
- {IWL_PCI_DEVICE(0x43F0, 0x0070, iwl_ax101_cfg_qu_hr)},
- {IWL_PCI_DEVICE(0x43F0, 0x0074, iwl_ax101_cfg_qu_hr)},
- {IWL_PCI_DEVICE(0x43F0, 0x0078, iwl_ax101_cfg_qu_hr)},
- {IWL_PCI_DEVICE(0x43F0, 0x007C, iwl_ax101_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0x34F0, 0x2074, iwl_ax201_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0x34F0, 0x4070, iwl_ax201_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0x34F0, 0x4244, iwl_ax101_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0x43F0, 0x0044, iwl_ax101_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0x43F0, 0x0070, iwl_ax201_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0x43F0, 0x0074, iwl_ax201_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0x43F0, 0x0078, iwl_ax201_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0x43F0, 0x007C, iwl_ax201_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0x43F0, 0x0244, iwl_ax101_cfg_qu_hr)},
{IWL_PCI_DEVICE(0x43F0, 0x1651, killer1650s_2ax_cfg_qu_b0_hr_b0)},
{IWL_PCI_DEVICE(0x43F0, 0x1652, killer1650i_2ax_cfg_qu_b0_hr_b0)},
- {IWL_PCI_DEVICE(0x43F0, 0x4070, iwl_ax101_cfg_qu_hr)},
- {IWL_PCI_DEVICE(0xA0F0, 0x0000, iwl_ax101_cfg_qu_hr)},
- {IWL_PCI_DEVICE(0xA0F0, 0x0040, iwl_ax101_cfg_qu_hr)},
- {IWL_PCI_DEVICE(0xA0F0, 0x0070, iwl_ax101_cfg_qu_hr)},
- {IWL_PCI_DEVICE(0xA0F0, 0x0074, iwl_ax101_cfg_qu_hr)},
- {IWL_PCI_DEVICE(0xA0F0, 0x0078, iwl_ax101_cfg_qu_hr)},
- {IWL_PCI_DEVICE(0xA0F0, 0x007C, iwl_ax101_cfg_qu_hr)},
- {IWL_PCI_DEVICE(0xA0F0, 0x00B0, iwl_ax101_cfg_qu_hr)},
- {IWL_PCI_DEVICE(0xA0F0, 0x0A10, iwl_ax101_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0x43F0, 0x2074, iwl_ax201_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0x43F0, 0x4070, iwl_ax201_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0x43F0, 0x4244, iwl_ax101_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0xA0F0, 0x0044, iwl_ax101_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0xA0F0, 0x0070, iwl_ax201_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0xA0F0, 0x0074, iwl_ax201_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0xA0F0, 0x0078, iwl_ax201_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0xA0F0, 0x007C, iwl_ax201_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0xA0F0, 0x0244, iwl_ax101_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0xA0F0, 0x0A10, iwl_ax201_cfg_qu_hr)},
{IWL_PCI_DEVICE(0xA0F0, 0x1651, killer1650s_2ax_cfg_qu_b0_hr_b0)},
{IWL_PCI_DEVICE(0xA0F0, 0x1652, killer1650i_2ax_cfg_qu_b0_hr_b0)},
- {IWL_PCI_DEVICE(0xA0F0, 0x4070, iwl_ax101_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0xA0F0, 0x2074, iwl_ax201_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0xA0F0, 0x4070, iwl_ax201_cfg_qu_hr)},
+ {IWL_PCI_DEVICE(0xA0F0, 0x4244, iwl_ax101_cfg_qu_hr)},
{IWL_PCI_DEVICE(0x2723, 0x0080, iwl_ax200_cfg_cc)},
{IWL_PCI_DEVICE(0x2723, 0x0084, iwl_ax200_cfg_cc)},
@@ -958,13 +961,19 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
{IWL_PCI_DEVICE(0x2723, 0x4080, iwl_ax200_cfg_cc)},
{IWL_PCI_DEVICE(0x2723, 0x4088, iwl_ax200_cfg_cc)},
- {IWL_PCI_DEVICE(0x2725, 0x0090, iwlax210_2ax_cfg_so_hr_a0)},
- {IWL_PCI_DEVICE(0x7A70, 0x0090, iwlax210_2ax_cfg_so_hr_a0)},
- {IWL_PCI_DEVICE(0x7A70, 0x0310, iwlax210_2ax_cfg_so_hr_a0)},
- {IWL_PCI_DEVICE(0x2725, 0x0020, iwlax210_2ax_cfg_so_hr_a0)},
- {IWL_PCI_DEVICE(0x2725, 0x0310, iwlax210_2ax_cfg_so_hr_a0)},
- {IWL_PCI_DEVICE(0x2725, 0x0A10, iwlax210_2ax_cfg_so_hr_a0)},
- {IWL_PCI_DEVICE(0x2725, 0x00B0, iwlax210_2ax_cfg_so_hr_a0)},
+ {IWL_PCI_DEVICE(0x2725, 0x0090, iwlax211_2ax_cfg_so_gf_a0)},
+ {IWL_PCI_DEVICE(0x2725, 0x0020, iwlax210_2ax_cfg_ty_gf_a0)},
+ {IWL_PCI_DEVICE(0x2725, 0x0310, iwlax210_2ax_cfg_ty_gf_a0)},
+ {IWL_PCI_DEVICE(0x2725, 0x0510, iwlax210_2ax_cfg_ty_gf_a0)},
+ {IWL_PCI_DEVICE(0x2725, 0x0A10, iwlax210_2ax_cfg_ty_gf_a0)},
+ {IWL_PCI_DEVICE(0x2725, 0x00B0, iwlax411_2ax_cfg_so_gf4_a0)},
+ {IWL_PCI_DEVICE(0x7A70, 0x0090, iwlax211_2ax_cfg_so_gf_a0)},
+ {IWL_PCI_DEVICE(0x7A70, 0x0310, iwlax211_2ax_cfg_so_gf_a0)},
+ {IWL_PCI_DEVICE(0x7A70, 0x0510, iwlax211_2ax_cfg_so_gf_a0)},
+ {IWL_PCI_DEVICE(0x7A70, 0x0A10, iwlax211_2ax_cfg_so_gf_a0)},
+ {IWL_PCI_DEVICE(0x7AF0, 0x0310, iwlax211_2ax_cfg_so_gf_a0)},
+ {IWL_PCI_DEVICE(0x7AF0, 0x0510, iwlax211_2ax_cfg_so_gf_a0)},
+ {IWL_PCI_DEVICE(0x7AF0, 0x0A10, iwlax211_2ax_cfg_so_gf_a0)},
#endif /* CONFIG_IWLMVM */
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
index 85973dd57234..9f5d0fc839fe 100644
--- a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
@@ -874,6 +874,33 @@ static inline void iwl_enable_fw_load_int(struct iwl_trans *trans)
}
}
+static inline void iwl_enable_fw_load_int_ctx_info(struct iwl_trans *trans)
+{
+ struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
+
+ IWL_DEBUG_ISR(trans, "Enabling ALIVE interrupt only\n");
+
+ if (!trans_pcie->msix_enabled) {
+ /*
+ * When we'll receive the ALIVE interrupt, the ISR will call
+ * iwl_enable_fw_load_int_ctx_info again to set the ALIVE
+ * interrupt (which is not really needed anymore) but also the
+ * RX interrupt which will allow us to receive the ALIVE
+ * notification (which is Rx) and continue the flow.
+ */
+ trans_pcie->inta_mask = CSR_INT_BIT_ALIVE | CSR_INT_BIT_FH_RX;
+ iwl_write32(trans, CSR_INT_MASK, trans_pcie->inta_mask);
+ } else {
+ iwl_enable_hw_int_msk_msix(trans,
+ MSIX_HW_INT_CAUSES_REG_ALIVE);
+ /*
+ * Leave all the FH causes enabled to get the ALIVE
+ * notification.
+ */
+ iwl_enable_fh_int_msk_msix(trans, trans_pcie->fh_init_mask);
+ }
+}
+
static inline u16 iwl_pcie_get_cmd_index(const struct iwl_txq *q, u32 index)
{
return index & (q->n_window - 1);
@@ -1018,7 +1045,7 @@ static inline void __iwl_trans_pcie_set_bit(struct iwl_trans *trans,
static inline bool iwl_pcie_dbg_on(struct iwl_trans *trans)
{
- return (trans->dbg_dest_tlv || trans->ini_valid);
+ return (trans->dbg.dest_tlv || trans->dbg.ini_valid);
}
void iwl_trans_pcie_rf_kill(struct iwl_trans *trans, bool state);
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
index 31b3591f71d1..a2d709642b2a 100644
--- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
@@ -1827,26 +1827,26 @@ irqreturn_t iwl_pcie_irq_handler(int irq, void *dev_id)
goto out;
}
- if (iwl_have_debug_level(IWL_DL_ISR)) {
- /* NIC fires this, but we don't use it, redundant with WAKEUP */
- if (inta & CSR_INT_BIT_SCD) {
- IWL_DEBUG_ISR(trans,
- "Scheduler finished to transmit the frame/frames.\n");
- isr_stats->sch++;
- }
+ /* NIC fires this, but we don't use it, redundant with WAKEUP */
+ if (inta & CSR_INT_BIT_SCD) {
+ IWL_DEBUG_ISR(trans,
+ "Scheduler finished to transmit the frame/frames.\n");
+ isr_stats->sch++;
+ }
- /* Alive notification via Rx interrupt will do the real work */
- if (inta & CSR_INT_BIT_ALIVE) {
- IWL_DEBUG_ISR(trans, "Alive interrupt\n");
- isr_stats->alive++;
- if (trans->cfg->gen2) {
- /*
- * We can restock, since firmware configured
- * the RFH
- */
- iwl_pcie_rxmq_restock(trans, trans_pcie->rxq);
- }
+ /* Alive notification via Rx interrupt will do the real work */
+ if (inta & CSR_INT_BIT_ALIVE) {
+ IWL_DEBUG_ISR(trans, "Alive interrupt\n");
+ isr_stats->alive++;
+ if (trans->cfg->gen2) {
+ /*
+ * We can restock, since firmware configured
+ * the RFH
+ */
+ iwl_pcie_rxmq_restock(trans, trans_pcie->rxq);
}
+
+ handled |= CSR_INT_BIT_ALIVE;
}
/* Safely ignore these bits for debug checks below */
@@ -1965,6 +1965,9 @@ irqreturn_t iwl_pcie_irq_handler(int irq, void *dev_id)
/* Re-enable RF_KILL if it occurred */
else if (handled & CSR_INT_BIT_RF_KILL)
iwl_enable_rfkill_int(trans);
+ /* Re-enable the ALIVE / Rx interrupt if it occurred */
+ else if (handled & (CSR_INT_BIT_ALIVE | CSR_INT_BIT_FH_RX))
+ iwl_enable_fw_load_int_ctx_info(trans);
spin_unlock(&trans_pcie->irq_lock);
out:
@@ -2108,10 +2111,18 @@ irqreturn_t iwl_pcie_irq_msix_handler(int irq, void *dev_id)
return IRQ_NONE;
}
- if (iwl_have_debug_level(IWL_DL_ISR))
- IWL_DEBUG_ISR(trans, "ISR inta_fh 0x%08x, enabled 0x%08x\n",
- inta_fh,
+ if (iwl_have_debug_level(IWL_DL_ISR)) {
+ IWL_DEBUG_ISR(trans,
+ "ISR inta_fh 0x%08x, enabled (sw) 0x%08x (hw) 0x%08x\n",
+ inta_fh, trans_pcie->fh_mask,
iwl_read32(trans, CSR_MSIX_FH_INT_MASK_AD));
+ if (inta_fh & ~trans_pcie->fh_mask)
+ IWL_DEBUG_ISR(trans,
+ "We got a masked interrupt (0x%08x)\n",
+ inta_fh & ~trans_pcie->fh_mask);
+ }
+
+ inta_fh &= trans_pcie->fh_mask;
if ((trans_pcie->shared_vec_mask & IWL_SHARED_IRQ_NON_RX) &&
inta_fh & MSIX_FH_INT_CAUSES_Q0) {
@@ -2151,11 +2162,18 @@ irqreturn_t iwl_pcie_irq_msix_handler(int irq, void *dev_id)
}
/* After checking FH register check HW register */
- if (iwl_have_debug_level(IWL_DL_ISR))
+ if (iwl_have_debug_level(IWL_DL_ISR)) {
IWL_DEBUG_ISR(trans,
- "ISR inta_hw 0x%08x, enabled 0x%08x\n",
- inta_hw,
+ "ISR inta_hw 0x%08x, enabled (sw) 0x%08x (hw) 0x%08x\n",
+ inta_hw, trans_pcie->hw_mask,
iwl_read32(trans, CSR_MSIX_HW_INT_MASK_AD));
+ if (inta_hw & ~trans_pcie->hw_mask)
+ IWL_DEBUG_ISR(trans,
+ "We got a masked interrupt 0x%08x\n",
+ inta_hw & ~trans_pcie->hw_mask);
+ }
+
+ inta_hw &= trans_pcie->hw_mask;
/* Alive notification via Rx interrupt will do the real work */
if (inta_hw & MSIX_HW_INT_CAUSES_REG_ALIVE) {
@@ -2212,7 +2230,7 @@ irqreturn_t iwl_pcie_irq_msix_handler(int irq, void *dev_id)
"Hardware error detected. Restarting.\n");
isr_stats->hw++;
- trans->hw_error = true;
+ trans->dbg.hw_error = true;
iwl_pcie_irq_handle_error(trans);
}
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
index 8507a7bdcfdd..8d17e68577fd 100644
--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
@@ -148,7 +148,7 @@ void _iwl_trans_pcie_gen2_stop_device(struct iwl_trans *trans, bool low_power)
trans_pcie->is_down = true;
/* Stop dbgc before stopping device */
- _iwl_fw_dbg_stop_recording(trans, NULL);
+ iwl_fw_dbg_stop_recording(trans, NULL);
/* tell the device to stop sending interrupts */
iwl_disable_interrupts(trans);
@@ -273,6 +273,15 @@ void iwl_trans_pcie_gen2_fw_alive(struct iwl_trans *trans, u32 scd_addr)
* paging memory cannot be freed included since FW will still use it
*/
iwl_pcie_ctxt_info_free(trans);
+
+ /*
+ * Re-enable all the interrupts, including the RF-Kill one, now that
+ * the firmware is alive.
+ */
+ iwl_enable_interrupts(trans);
+ mutex_lock(&trans_pcie->mutex);
+ iwl_pcie_check_hw_rf_kill(trans);
+ mutex_unlock(&trans_pcie->mutex);
}
int iwl_trans_pcie_gen2_start_fw(struct iwl_trans *trans,
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
index dfa1bed124aa..f5df5b370d78 100644
--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
@@ -90,8 +90,10 @@
void iwl_trans_pcie_dump_regs(struct iwl_trans *trans)
{
-#define PCI_DUMP_SIZE 64
-#define PREFIX_LEN 32
+#define PCI_DUMP_SIZE 352
+#define PCI_MEM_DUMP_SIZE 64
+#define PCI_PARENT_DUMP_SIZE 524
+#define PREFIX_LEN 32
struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
struct pci_dev *pdev = trans_pcie->pci_dev;
u32 i, pos, alloc_size, *ptr, *buf;
@@ -102,11 +104,15 @@ void iwl_trans_pcie_dump_regs(struct iwl_trans *trans)
/* Should be a multiple of 4 */
BUILD_BUG_ON(PCI_DUMP_SIZE > 4096 || PCI_DUMP_SIZE & 0x3);
+ BUILD_BUG_ON(PCI_MEM_DUMP_SIZE > 4096 || PCI_MEM_DUMP_SIZE & 0x3);
+ BUILD_BUG_ON(PCI_PARENT_DUMP_SIZE > 4096 || PCI_PARENT_DUMP_SIZE & 0x3);
+
/* Alloc a max size buffer */
- if (PCI_ERR_ROOT_ERR_SRC + 4 > PCI_DUMP_SIZE)
- alloc_size = PCI_ERR_ROOT_ERR_SRC + 4 + PREFIX_LEN;
- else
- alloc_size = PCI_DUMP_SIZE + PREFIX_LEN;
+ alloc_size = PCI_ERR_ROOT_ERR_SRC + 4 + PREFIX_LEN;
+ alloc_size = max_t(u32, alloc_size, PCI_DUMP_SIZE + PREFIX_LEN);
+ alloc_size = max_t(u32, alloc_size, PCI_MEM_DUMP_SIZE + PREFIX_LEN);
+ alloc_size = max_t(u32, alloc_size, PCI_PARENT_DUMP_SIZE + PREFIX_LEN);
+
buf = kmalloc(alloc_size, GFP_ATOMIC);
if (!buf)
return;
@@ -123,7 +129,7 @@ void iwl_trans_pcie_dump_regs(struct iwl_trans *trans)
print_hex_dump(KERN_ERR, prefix, DUMP_PREFIX_OFFSET, 32, 4, buf, i, 0);
IWL_ERR(trans, "iwlwifi device memory mapped registers:\n");
- for (i = 0, ptr = buf; i < PCI_DUMP_SIZE; i += 4, ptr++)
+ for (i = 0, ptr = buf; i < PCI_MEM_DUMP_SIZE; i += 4, ptr++)
*ptr = iwl_read32(trans, i);
print_hex_dump(KERN_ERR, prefix, DUMP_PREFIX_OFFSET, 32, 4, buf, i, 0);
@@ -146,7 +152,7 @@ void iwl_trans_pcie_dump_regs(struct iwl_trans *trans)
IWL_ERR(trans, "iwlwifi parent port (%s) config registers:\n",
pci_name(pdev));
- for (i = 0, ptr = buf; i < PCI_DUMP_SIZE; i += 4, ptr++)
+ for (i = 0, ptr = buf; i < PCI_PARENT_DUMP_SIZE; i += 4, ptr++)
if (pci_read_config_dword(pdev, i, ptr))
goto err_read;
print_hex_dump(KERN_ERR, prefix, DUMP_PREFIX_OFFSET, 32, 4, buf, i, 0);
@@ -188,14 +194,14 @@ static void iwl_pcie_free_fw_monitor(struct iwl_trans *trans)
{
int i;
- for (i = 0; i < trans->num_blocks; i++) {
- dma_free_coherent(trans->dev, trans->fw_mon[i].size,
- trans->fw_mon[i].block,
- trans->fw_mon[i].physical);
- trans->fw_mon[i].block = NULL;
- trans->fw_mon[i].physical = 0;
- trans->fw_mon[i].size = 0;
- trans->num_blocks--;
+ for (i = 0; i < trans->dbg.num_blocks; i++) {
+ dma_free_coherent(trans->dev, trans->dbg.fw_mon[i].size,
+ trans->dbg.fw_mon[i].block,
+ trans->dbg.fw_mon[i].physical);
+ trans->dbg.fw_mon[i].block = NULL;
+ trans->dbg.fw_mon[i].physical = 0;
+ trans->dbg.fw_mon[i].size = 0;
+ trans->dbg.num_blocks--;
}
}
@@ -230,10 +236,10 @@ static void iwl_pcie_alloc_fw_monitor_block(struct iwl_trans *trans,
(unsigned long)BIT(power - 10),
(unsigned long)BIT(max_power - 10));
- trans->fw_mon[trans->num_blocks].block = cpu_addr;
- trans->fw_mon[trans->num_blocks].physical = phys;
- trans->fw_mon[trans->num_blocks].size = size;
- trans->num_blocks++;
+ trans->dbg.fw_mon[trans->dbg.num_blocks].block = cpu_addr;
+ trans->dbg.fw_mon[trans->dbg.num_blocks].physical = phys;
+ trans->dbg.fw_mon[trans->dbg.num_blocks].size = size;
+ trans->dbg.num_blocks++;
}
void iwl_pcie_alloc_fw_monitor(struct iwl_trans *trans, u8 max_power)
@@ -254,7 +260,7 @@ void iwl_pcie_alloc_fw_monitor(struct iwl_trans *trans, u8 max_power)
* This function allocats the default fw monitor.
* The optional additional ones will be allocated in runtime
*/
- if (trans->num_blocks)
+ if (trans->dbg.num_blocks)
return;
iwl_pcie_alloc_fw_monitor_block(trans, max_power, 11);
@@ -889,21 +895,21 @@ static int iwl_pcie_load_cpu_sections(struct iwl_trans *trans,
void iwl_pcie_apply_destination(struct iwl_trans *trans)
{
- const struct iwl_fw_dbg_dest_tlv_v1 *dest = trans->dbg_dest_tlv;
+ const struct iwl_fw_dbg_dest_tlv_v1 *dest = trans->dbg.dest_tlv;
int i;
- if (trans->ini_valid) {
- if (!trans->num_blocks)
+ if (trans->dbg.ini_valid) {
+ if (!trans->dbg.num_blocks)
return;
IWL_DEBUG_FW(trans,
"WRT: applying DRAM buffer[0] destination\n");
iwl_write_umac_prph(trans, MON_BUFF_BASE_ADDR_VER2,
- trans->fw_mon[0].physical >>
+ trans->dbg.fw_mon[0].physical >>
MON_BUFF_SHIFT_VER2);
iwl_write_umac_prph(trans, MON_BUFF_END_ADDR_VER2,
- (trans->fw_mon[0].physical +
- trans->fw_mon[0].size - 256) >>
+ (trans->dbg.fw_mon[0].physical +
+ trans->dbg.fw_mon[0].size - 256) >>
MON_BUFF_SHIFT_VER2);
return;
}
@@ -916,7 +922,7 @@ void iwl_pcie_apply_destination(struct iwl_trans *trans)
else
IWL_WARN(trans, "PCI should have external buffer debug\n");
- for (i = 0; i < trans->dbg_n_dest_reg; i++) {
+ for (i = 0; i < trans->dbg.n_dest_reg; i++) {
u32 addr = le32_to_cpu(dest->reg_ops[i].addr);
u32 val = le32_to_cpu(dest->reg_ops[i].val);
@@ -955,18 +961,19 @@ void iwl_pcie_apply_destination(struct iwl_trans *trans)
}
monitor:
- if (dest->monitor_mode == EXTERNAL_MODE && trans->fw_mon[0].size) {
+ if (dest->monitor_mode == EXTERNAL_MODE && trans->dbg.fw_mon[0].size) {
iwl_write_prph(trans, le32_to_cpu(dest->base_reg),
- trans->fw_mon[0].physical >> dest->base_shift);
+ trans->dbg.fw_mon[0].physical >>
+ dest->base_shift);
if (trans->cfg->device_family >= IWL_DEVICE_FAMILY_8000)
iwl_write_prph(trans, le32_to_cpu(dest->end_reg),
- (trans->fw_mon[0].physical +
- trans->fw_mon[0].size - 256) >>
+ (trans->dbg.fw_mon[0].physical +
+ trans->dbg.fw_mon[0].size - 256) >>
dest->end_shift);
else
iwl_write_prph(trans, le32_to_cpu(dest->end_reg),
- (trans->fw_mon[0].physical +
- trans->fw_mon[0].size) >>
+ (trans->dbg.fw_mon[0].physical +
+ trans->dbg.fw_mon[0].size) >>
dest->end_shift);
}
}
@@ -1003,12 +1010,12 @@ static int iwl_pcie_load_given_ucode(struct iwl_trans *trans,
trans->cfg->device_family == IWL_DEVICE_FAMILY_7000) {
iwl_pcie_alloc_fw_monitor(trans, 0);
- if (trans->fw_mon[0].size) {
+ if (trans->dbg.fw_mon[0].size) {
iwl_write_prph(trans, MON_BUFF_BASE_ADDR,
- trans->fw_mon[0].physical >> 4);
+ trans->dbg.fw_mon[0].physical >> 4);
iwl_write_prph(trans, MON_BUFF_END_ADDR,
- (trans->fw_mon[0].physical +
- trans->fw_mon[0].size) >> 4);
+ (trans->dbg.fw_mon[0].physical +
+ trans->dbg.fw_mon[0].size) >> 4);
}
} else if (iwl_pcie_dbg_on(trans)) {
iwl_pcie_apply_destination(trans);
@@ -1236,7 +1243,7 @@ static void _iwl_trans_pcie_stop_device(struct iwl_trans *trans, bool low_power)
trans_pcie->is_down = true;
/* Stop dbgc before stopping device */
- _iwl_fw_dbg_stop_recording(trans, NULL);
+ iwl_fw_dbg_stop_recording(trans, NULL);
/* tell the device to stop sending interrupts */
iwl_disable_interrupts(trans);
@@ -2729,8 +2736,8 @@ static int iwl_dbgfs_monitor_data_open(struct inode *inode,
struct iwl_trans *trans = inode->i_private;
struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
- if (!trans->dbg_dest_tlv ||
- trans->dbg_dest_tlv->monitor_mode != EXTERNAL_MODE) {
+ if (!trans->dbg.dest_tlv ||
+ trans->dbg.dest_tlv->monitor_mode != EXTERNAL_MODE) {
IWL_ERR(trans, "Debug destination is not set to DRAM\n");
return -ENOENT;
}
@@ -2777,22 +2784,22 @@ static ssize_t iwl_dbgfs_monitor_data_read(struct file *file,
{
struct iwl_trans *trans = file->private_data;
struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
- void *cpu_addr = (void *)trans->fw_mon[0].block, *curr_buf;
+ void *cpu_addr = (void *)trans->dbg.fw_mon[0].block, *curr_buf;
struct cont_rec *data = &trans_pcie->fw_mon_data;
u32 write_ptr_addr, wrap_cnt_addr, write_ptr, wrap_cnt;
ssize_t size, bytes_copied = 0;
bool b_full;
- if (trans->dbg_dest_tlv) {
+ if (trans->dbg.dest_tlv) {
write_ptr_addr =
- le32_to_cpu(trans->dbg_dest_tlv->write_ptr_reg);
- wrap_cnt_addr = le32_to_cpu(trans->dbg_dest_tlv->wrap_count);
+ le32_to_cpu(trans->dbg.dest_tlv->write_ptr_reg);
+ wrap_cnt_addr = le32_to_cpu(trans->dbg.dest_tlv->wrap_count);
} else {
write_ptr_addr = MON_BUFF_WRPTR;
wrap_cnt_addr = MON_BUFF_CYCLE_CNT;
}
- if (unlikely(!trans->dbg_rec_on))
+ if (unlikely(!trans->dbg.rec_on))
return 0;
mutex_lock(&data->mutex);
@@ -2816,7 +2823,7 @@ static ssize_t iwl_dbgfs_monitor_data_read(struct file *file,
} else if (data->prev_wrap_cnt == wrap_cnt - 1 &&
write_ptr < data->prev_wr_ptr) {
- size = trans->fw_mon[0].size - data->prev_wr_ptr;
+ size = trans->dbg.fw_mon[0].size - data->prev_wr_ptr;
curr_buf = cpu_addr + data->prev_wr_ptr;
b_full = iwl_write_to_user_buf(user_buf, count,
curr_buf, &size,
@@ -3035,14 +3042,10 @@ iwl_trans_pcie_dump_pointers(struct iwl_trans *trans,
base_high = DBGC_CUR_DBGBUF_BASE_ADDR_MSB;
write_ptr = DBGC_CUR_DBGBUF_STATUS;
wrap_cnt = DBGC_DBGBUF_WRAP_AROUND;
- } else if (trans->ini_valid) {
- base = iwl_umac_prph(trans, MON_BUFF_BASE_ADDR_VER2);
- write_ptr = iwl_umac_prph(trans, MON_BUFF_WRPTR_VER2);
- wrap_cnt = iwl_umac_prph(trans, MON_BUFF_CYCLE_CNT_VER2);
- } else if (trans->dbg_dest_tlv) {
- write_ptr = le32_to_cpu(trans->dbg_dest_tlv->write_ptr_reg);
- wrap_cnt = le32_to_cpu(trans->dbg_dest_tlv->wrap_count);
- base = le32_to_cpu(trans->dbg_dest_tlv->base_reg);
+ } else if (trans->dbg.dest_tlv) {
+ write_ptr = le32_to_cpu(trans->dbg.dest_tlv->write_ptr_reg);
+ wrap_cnt = le32_to_cpu(trans->dbg.dest_tlv->wrap_count);
+ base = le32_to_cpu(trans->dbg.dest_tlv->base_reg);
} else {
base = MON_BUFF_BASE_ADDR;
write_ptr = MON_BUFF_WRPTR;
@@ -3069,11 +3072,10 @@ iwl_trans_pcie_dump_monitor(struct iwl_trans *trans,
{
u32 len = 0;
- if ((trans->num_blocks &&
+ if (trans->dbg.dest_tlv ||
+ (trans->dbg.num_blocks &&
(trans->cfg->device_family == IWL_DEVICE_FAMILY_7000 ||
- trans->cfg->device_family >= IWL_DEVICE_FAMILY_AX210 ||
- trans->ini_valid)) ||
- (trans->dbg_dest_tlv && !trans->ini_valid)) {
+ trans->cfg->device_family >= IWL_DEVICE_FAMILY_AX210))) {
struct iwl_fw_error_dump_fw_mon *fw_mon_data;
(*data)->type = cpu_to_le32(IWL_FW_ERROR_DUMP_FW_MONITOR);
@@ -3082,32 +3084,32 @@ iwl_trans_pcie_dump_monitor(struct iwl_trans *trans,
iwl_trans_pcie_dump_pointers(trans, fw_mon_data);
len += sizeof(**data) + sizeof(*fw_mon_data);
- if (trans->num_blocks) {
+ if (trans->dbg.num_blocks) {
memcpy(fw_mon_data->data,
- trans->fw_mon[0].block,
- trans->fw_mon[0].size);
+ trans->dbg.fw_mon[0].block,
+ trans->dbg.fw_mon[0].size);
- monitor_len = trans->fw_mon[0].size;
- } else if (trans->dbg_dest_tlv->monitor_mode == SMEM_MODE) {
+ monitor_len = trans->dbg.fw_mon[0].size;
+ } else if (trans->dbg.dest_tlv->monitor_mode == SMEM_MODE) {
u32 base = le32_to_cpu(fw_mon_data->fw_mon_base_ptr);
/*
* Update pointers to reflect actual values after
* shifting
*/
- if (trans->dbg_dest_tlv->version) {
+ if (trans->dbg.dest_tlv->version) {
base = (iwl_read_prph(trans, base) &
IWL_LDBG_M2S_BUF_BA_MSK) <<
- trans->dbg_dest_tlv->base_shift;
+ trans->dbg.dest_tlv->base_shift;
base *= IWL_M2S_UNIT_SIZE;
base += trans->cfg->smem_offset;
} else {
base = iwl_read_prph(trans, base) <<
- trans->dbg_dest_tlv->base_shift;
+ trans->dbg.dest_tlv->base_shift;
}
iwl_trans_read_mem(trans, base, fw_mon_data->data,
monitor_len / sizeof(u32));
- } else if (trans->dbg_dest_tlv->monitor_mode == MARBH_MODE) {
+ } else if (trans->dbg.dest_tlv->monitor_mode == MARBH_MODE) {
monitor_len =
iwl_trans_pci_dump_marbh_monitor(trans,
fw_mon_data,
@@ -3126,40 +3128,40 @@ iwl_trans_pcie_dump_monitor(struct iwl_trans *trans,
static int iwl_trans_get_fw_monitor_len(struct iwl_trans *trans, u32 *len)
{
- if (trans->num_blocks) {
+ if (trans->dbg.num_blocks) {
*len += sizeof(struct iwl_fw_error_dump_data) +
sizeof(struct iwl_fw_error_dump_fw_mon) +
- trans->fw_mon[0].size;
- return trans->fw_mon[0].size;
- } else if (trans->dbg_dest_tlv) {
+ trans->dbg.fw_mon[0].size;
+ return trans->dbg.fw_mon[0].size;
+ } else if (trans->dbg.dest_tlv) {
u32 base, end, cfg_reg, monitor_len;
- if (trans->dbg_dest_tlv->version == 1) {
- cfg_reg = le32_to_cpu(trans->dbg_dest_tlv->base_reg);
+ if (trans->dbg.dest_tlv->version == 1) {
+ cfg_reg = le32_to_cpu(trans->dbg.dest_tlv->base_reg);
cfg_reg = iwl_read_prph(trans, cfg_reg);
base = (cfg_reg & IWL_LDBG_M2S_BUF_BA_MSK) <<
- trans->dbg_dest_tlv->base_shift;
+ trans->dbg.dest_tlv->base_shift;
base *= IWL_M2S_UNIT_SIZE;
base += trans->cfg->smem_offset;
monitor_len =
(cfg_reg & IWL_LDBG_M2S_BUF_SIZE_MSK) >>
- trans->dbg_dest_tlv->end_shift;
+ trans->dbg.dest_tlv->end_shift;
monitor_len *= IWL_M2S_UNIT_SIZE;
} else {
- base = le32_to_cpu(trans->dbg_dest_tlv->base_reg);
- end = le32_to_cpu(trans->dbg_dest_tlv->end_reg);
+ base = le32_to_cpu(trans->dbg.dest_tlv->base_reg);
+ end = le32_to_cpu(trans->dbg.dest_tlv->end_reg);
base = iwl_read_prph(trans, base) <<
- trans->dbg_dest_tlv->base_shift;
+ trans->dbg.dest_tlv->base_shift;
end = iwl_read_prph(trans, end) <<
- trans->dbg_dest_tlv->end_shift;
+ trans->dbg.dest_tlv->end_shift;
/* Make "end" point to the actual end */
if (trans->cfg->device_family >=
IWL_DEVICE_FAMILY_8000 ||
- trans->dbg_dest_tlv->monitor_mode == MARBH_MODE)
- end += (1 << trans->dbg_dest_tlv->end_shift);
+ trans->dbg.dest_tlv->monitor_mode == MARBH_MODE)
+ end += (1 << trans->dbg.dest_tlv->end_shift);
monitor_len = end - base;
}
*len += sizeof(struct iwl_fw_error_dump_data) +
@@ -3192,7 +3194,7 @@ static struct iwl_trans_dump_data
len = sizeof(*dump_data);
/* host commands */
- if (dump_mask & BIT(IWL_FW_ERROR_DUMP_TXCMD))
+ if (dump_mask & BIT(IWL_FW_ERROR_DUMP_TXCMD) && cmdq)
len += sizeof(*data) +
cmdq->n_window * (sizeof(*txcmd) +
TFD_MAX_PAYLOAD_SIZE);
@@ -3244,7 +3246,7 @@ static struct iwl_trans_dump_data
len = 0;
data = (void *)dump_data->data;
- if (dump_mask & BIT(IWL_FW_ERROR_DUMP_TXCMD)) {
+ if (dump_mask & BIT(IWL_FW_ERROR_DUMP_TXCMD) && cmdq) {
u16 tfd_size = trans_pcie->tfd_size;
data->type = cpu_to_le32(IWL_FW_ERROR_DUMP_TXCMD);
@@ -3569,15 +3571,17 @@ struct iwl_trans *iwl_trans_pcie_alloc(struct pci_dev *pdev,
trans->cfg = &iwlax210_2ax_cfg_so_jf_a0;
} else if (CSR_HW_RF_ID_TYPE_CHIP_ID(trans->hw_rf_id) ==
CSR_HW_RF_ID_TYPE_CHIP_ID(CSR_HW_RF_ID_TYPE_GF)) {
- trans->cfg = &iwlax210_2ax_cfg_so_gf_a0;
+ trans->cfg = &iwlax211_2ax_cfg_so_gf_a0;
} else if (CSR_HW_RF_ID_TYPE_CHIP_ID(trans->hw_rf_id) ==
CSR_HW_RF_ID_TYPE_CHIP_ID(CSR_HW_RF_ID_TYPE_GF4)) {
- trans->cfg = &iwlax210_2ax_cfg_so_gf4_a0;
+ trans->cfg = &iwlax411_2ax_cfg_so_gf4_a0;
}
} else if (cfg == &iwl_ax101_cfg_qu_hr) {
- if (CSR_HW_RF_ID_TYPE_CHIP_ID(trans->hw_rf_id) ==
- CSR_HW_RF_ID_TYPE_CHIP_ID(CSR_HW_RF_ID_TYPE_HR) &&
- trans->hw_rev == CSR_HW_REV_TYPE_QNJ_B0) {
+ if ((CSR_HW_RF_ID_TYPE_CHIP_ID(trans->hw_rf_id) ==
+ CSR_HW_RF_ID_TYPE_CHIP_ID(CSR_HW_RF_ID_TYPE_HR) &&
+ trans->hw_rev == CSR_HW_REV_TYPE_QNJ_B0) ||
+ (CSR_HW_RF_ID_TYPE_CHIP_ID(trans->hw_rf_id) ==
+ CSR_HW_RF_ID_TYPE_CHIP_ID(CSR_HW_RF_ID_TYPE_HR1))) {
trans->cfg = &iwl22000_2ax_cfg_qnj_hr_b0;
} else if (CSR_HW_RF_ID_TYPE_CHIP_ID(trans->hw_rf_id) ==
CSR_HW_RF_ID_TYPE_CHIP_ID(CSR_HW_RF_ID_TYPE_HR)) {
@@ -3599,8 +3603,9 @@ struct iwl_trans *iwl_trans_pcie_alloc(struct pci_dev *pdev,
} else if (CSR_HW_RF_ID_TYPE_CHIP_ID(trans->hw_rf_id) ==
CSR_HW_RF_ID_TYPE_CHIP_ID(CSR_HW_RF_ID_TYPE_HR) &&
((trans->cfg != &iwl_ax200_cfg_cc &&
- trans->cfg != &killer1650x_2ax_cfg &&
- trans->cfg != &killer1650w_2ax_cfg) ||
+ trans->cfg != &killer1650x_2ax_cfg &&
+ trans->cfg != &killer1650w_2ax_cfg &&
+ trans->cfg != &iwl_ax201_cfg_quz_hr) ||
trans->hw_rev == CSR_HW_REV_TYPE_QNJ_B0)) {
u32 hw_status;
@@ -3681,6 +3686,7 @@ void iwl_trans_pcie_sync_nmi(struct iwl_trans *trans)
{
struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
unsigned long timeout = jiffies + IWL_TRANS_NMI_TIMEOUT;
+ bool interrupts_enabled = test_bit(STATUS_INT_ENABLED, &trans->status);
u32 inta_addr, sw_err_bit;
if (trans_pcie->msix_enabled) {
@@ -3691,7 +3697,12 @@ void iwl_trans_pcie_sync_nmi(struct iwl_trans *trans)
sw_err_bit = CSR_INT_BIT_SW_ERR;
}
- iwl_disable_interrupts(trans);
+ /* if the interrupts were already disabled, there is no point in
+ * calling iwl_disable_interrupts
+ */
+ if (interrupts_enabled)
+ iwl_disable_interrupts(trans);
+
iwl_force_nmi(trans);
while (time_after(timeout, jiffies)) {
u32 inta_hw = iwl_read32(trans, inta_addr);
@@ -3705,6 +3716,13 @@ void iwl_trans_pcie_sync_nmi(struct iwl_trans *trans)
mdelay(1);
}
- iwl_enable_interrupts(trans);
+
+ /* enable interrupts only if there were already enabled before this
+ * function to avoid a case were the driver enable interrupts before
+ * proper configurations were made
+ */
+ if (interrupts_enabled)
+ iwl_enable_interrupts(trans);
+
iwl_trans_fw_error(trans);
}
diff --git a/drivers/net/wireless/intersil/p54/main.c b/drivers/net/wireless/intersil/p54/main.c
index ca2676f79bbb..a3ca6620dc0c 100644
--- a/drivers/net/wireless/intersil/p54/main.c
+++ b/drivers/net/wireless/intersil/p54/main.c
@@ -411,12 +411,9 @@ static int p54_conf_tx(struct ieee80211_hw *dev,
int ret;
mutex_lock(&priv->conf_mutex);
- if (queue < dev->queues) {
- P54_SET_QUEUE(priv->qos_params[queue], params->aifs,
- params->cw_min, params->cw_max, params->txop);
- ret = p54_set_edcf(priv);
- } else
- ret = -EINVAL;
+ P54_SET_QUEUE(priv->qos_params[queue], params->aifs,
+ params->cw_min, params->cw_max, params->txop);
+ ret = p54_set_edcf(priv);
mutex_unlock(&priv->conf_mutex);
return ret;
}
diff --git a/drivers/net/wireless/intersil/p54/p54usb.c b/drivers/net/wireless/intersil/p54/p54usb.c
index f937815f0f2c..b94764c88750 100644
--- a/drivers/net/wireless/intersil/p54/p54usb.c
+++ b/drivers/net/wireless/intersil/p54/p54usb.c
@@ -30,6 +30,8 @@ MODULE_ALIAS("prism54usb");
MODULE_FIRMWARE("isl3886usb");
MODULE_FIRMWARE("isl3887usb");
+static struct usb_driver p54u_driver;
+
/*
* Note:
*
@@ -918,9 +920,9 @@ static void p54u_load_firmware_cb(const struct firmware *firmware,
{
struct p54u_priv *priv = context;
struct usb_device *udev = priv->udev;
+ struct usb_interface *intf = priv->intf;
int err;
- complete(&priv->fw_wait_load);
if (firmware) {
priv->fw = firmware;
err = p54u_start_ops(priv);
@@ -929,26 +931,22 @@ static void p54u_load_firmware_cb(const struct firmware *firmware,
dev_err(&udev->dev, "Firmware not found.\n");
}
- if (err) {
- struct device *parent = priv->udev->dev.parent;
-
- dev_err(&udev->dev, "failed to initialize device (%d)\n", err);
-
- if (parent)
- device_lock(parent);
+ complete(&priv->fw_wait_load);
+ /*
+ * At this point p54u_disconnect may have already freed
+ * the "priv" context. Do not use it anymore!
+ */
+ priv = NULL;
- device_release_driver(&udev->dev);
- /*
- * At this point p54u_disconnect has already freed
- * the "priv" context. Do not use it anymore!
- */
- priv = NULL;
+ if (err) {
+ dev_err(&intf->dev, "failed to initialize device (%d)\n", err);
- if (parent)
- device_unlock(parent);
+ usb_lock_device(udev);
+ usb_driver_release_interface(&p54u_driver, intf);
+ usb_unlock_device(udev);
}
- usb_put_dev(udev);
+ usb_put_intf(intf);
}
static int p54u_load_firmware(struct ieee80211_hw *dev,
@@ -969,14 +967,14 @@ static int p54u_load_firmware(struct ieee80211_hw *dev,
dev_info(&priv->udev->dev, "Loading firmware file %s\n",
p54u_fwlist[i].fw);
- usb_get_dev(udev);
+ usb_get_intf(intf);
err = request_firmware_nowait(THIS_MODULE, 1, p54u_fwlist[i].fw,
device, GFP_KERNEL, priv,
p54u_load_firmware_cb);
if (err) {
dev_err(&priv->udev->dev, "(p54usb) cannot load firmware %s "
"(%d)!\n", p54u_fwlist[i].fw, err);
- usb_put_dev(udev);
+ usb_put_intf(intf);
}
return err;
@@ -1008,8 +1006,6 @@ static int p54u_probe(struct usb_interface *intf,
skb_queue_head_init(&priv->rx_queue);
init_usb_anchor(&priv->submitted);
- usb_get_dev(udev);
-
/* really lazy and simple way of figuring out if we're a 3887 */
/* TODO: should just stick the identification in the device table */
i = intf->altsetting->desc.bNumEndpoints;
@@ -1050,10 +1046,8 @@ static int p54u_probe(struct usb_interface *intf,
priv->upload_fw = p54u_upload_firmware_net2280;
}
err = p54u_load_firmware(dev, intf);
- if (err) {
- usb_put_dev(udev);
+ if (err)
p54_free_common(dev);
- }
return err;
}
@@ -1069,7 +1063,6 @@ static void p54u_disconnect(struct usb_interface *intf)
wait_for_completion(&priv->fw_wait_load);
p54_unregister_common(dev);
- usb_put_dev(interface_to_usbdev(intf));
release_firmware(priv->fw);
p54_free_common(dev);
}
diff --git a/drivers/net/wireless/intersil/p54/txrx.c b/drivers/net/wireless/intersil/p54/txrx.c
index ff9acd1563f4..873fea59894f 100644
--- a/drivers/net/wireless/intersil/p54/txrx.c
+++ b/drivers/net/wireless/intersil/p54/txrx.c
@@ -139,7 +139,10 @@ static int p54_assign_address(struct p54_common *priv, struct sk_buff *skb)
unlikely(GET_HW_QUEUE(skb) == P54_QUEUE_BEACON))
priv->beacon_req_id = data->req_id;
- __skb_queue_after(&priv->tx_queue, target_skb, skb);
+ if (target_skb)
+ __skb_queue_after(&priv->tx_queue, target_skb, skb);
+ else
+ __skb_queue_head(&priv->tx_queue, skb);
spin_unlock_irqrestore(&priv->tx_queue.lock, flags);
return 0;
}
@@ -328,6 +331,7 @@ static int p54_rx_data(struct p54_common *priv, struct sk_buff *skb)
u16 freq = le16_to_cpu(hdr->freq);
size_t header_len = sizeof(*hdr);
u32 tsf32;
+ __le16 fc;
u8 rate = hdr->rate & 0xf;
/*
@@ -376,6 +380,11 @@ static int p54_rx_data(struct p54_common *priv, struct sk_buff *skb)
skb_pull(skb, header_len);
skb_trim(skb, le16_to_cpu(hdr->len));
+
+ fc = ((struct ieee80211_hdr *)skb->data)->frame_control;
+ if (ieee80211_is_probe_resp(fc) || ieee80211_is_beacon(fc))
+ rx_status->boottime_ns = ktime_get_boottime_ns();
+
if (unlikely(priv->hw->conf.flags & IEEE80211_CONF_PS))
p54_pspoll_workaround(priv, skb);
diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c
index a7bf6519d7aa..519b4ee88c5c 100644
--- a/drivers/net/wireless/mac80211_hwsim.c
+++ b/drivers/net/wireless/mac80211_hwsim.c
@@ -454,6 +454,8 @@ static struct wiphy_vendor_command mac80211_hwsim_vendor_commands[] = {
.subcmd = QCA_NL80211_SUBCMD_TEST },
.flags = WIPHY_VENDOR_CMD_NEED_NETDEV,
.doit = mac80211_hwsim_vendor_cmd_test,
+ .policy = hwsim_vendor_test_policy,
+ .maxattr = QCA_WLAN_VENDOR_ATTR_MAX,
}
};
diff --git a/drivers/net/wireless/marvell/libertas/if_usb.c b/drivers/net/wireless/marvell/libertas/if_usb.c
index f1622f0ff8c9..afac2481909b 100644
--- a/drivers/net/wireless/marvell/libertas/if_usb.c
+++ b/drivers/net/wireless/marvell/libertas/if_usb.c
@@ -368,7 +368,7 @@ static int if_usb_send_fw_pkt(struct if_usb_card *cardp)
cardp->fwseqnum, cardp->totalbytes);
} else if (fwdata->hdr.dnldcmd == cpu_to_le32(FW_HAS_LAST_BLOCK)) {
lbs_deb_usb2(&cardp->udev->dev, "Host has finished FW downloading\n");
- lbs_deb_usb2(&cardp->udev->dev, "Donwloading FW JUMP BLOCK\n");
+ lbs_deb_usb2(&cardp->udev->dev, "Downloading FW JUMP BLOCK\n");
cardp->fwfinalblk = 1;
}
diff --git a/drivers/net/wireless/marvell/libertas_tf/if_usb.c b/drivers/net/wireless/marvell/libertas_tf/if_usb.c
index 28a8bd3cf10c..25ac9db35dbf 100644
--- a/drivers/net/wireless/marvell/libertas_tf/if_usb.c
+++ b/drivers/net/wireless/marvell/libertas_tf/if_usb.c
@@ -315,7 +315,7 @@ static int if_usb_send_fw_pkt(struct if_usb_card *cardp)
} else if (fwdata->hdr.dnldcmd == cpu_to_le32(FW_HAS_LAST_BLOCK)) {
lbtf_deb_usb2(&cardp->udev->dev,
"Host has finished FW downloading\n");
- lbtf_deb_usb2(&cardp->udev->dev, "Donwloading FW JUMP BLOCK\n");
+ lbtf_deb_usb2(&cardp->udev->dev, "Downloading FW JUMP BLOCK\n");
/* Host has finished FW downloading
* Donwloading FW JUMP BLOCK
diff --git a/drivers/net/wireless/marvell/mwifiex/11n.c b/drivers/net/wireless/marvell/mwifiex/11n.c
index 5d75c971004b..e435f801bc91 100644
--- a/drivers/net/wireless/marvell/mwifiex/11n.c
+++ b/drivers/net/wireless/marvell/mwifiex/11n.c
@@ -84,17 +84,15 @@ mwifiex_get_ba_status(struct mwifiex_private *priv,
enum mwifiex_ba_status ba_status)
{
struct mwifiex_tx_ba_stream_tbl *tx_ba_tsr_tbl;
- unsigned long flags;
- spin_lock_irqsave(&priv->tx_ba_stream_tbl_lock, flags);
+ spin_lock_bh(&priv->tx_ba_stream_tbl_lock);
list_for_each_entry(tx_ba_tsr_tbl, &priv->tx_ba_stream_tbl_ptr, list) {
if (tx_ba_tsr_tbl->ba_status == ba_status) {
- spin_unlock_irqrestore(&priv->tx_ba_stream_tbl_lock,
- flags);
+ spin_unlock_bh(&priv->tx_ba_stream_tbl_lock);
return tx_ba_tsr_tbl;
}
}
- spin_unlock_irqrestore(&priv->tx_ba_stream_tbl_lock, flags);
+ spin_unlock_bh(&priv->tx_ba_stream_tbl_lock);
return NULL;
}
@@ -516,13 +514,12 @@ void mwifiex_11n_delete_all_tx_ba_stream_tbl(struct mwifiex_private *priv)
{
int i;
struct mwifiex_tx_ba_stream_tbl *del_tbl_ptr, *tmp_node;
- unsigned long flags;
- spin_lock_irqsave(&priv->tx_ba_stream_tbl_lock, flags);
+ spin_lock_bh(&priv->tx_ba_stream_tbl_lock);
list_for_each_entry_safe(del_tbl_ptr, tmp_node,
&priv->tx_ba_stream_tbl_ptr, list)
mwifiex_11n_delete_tx_ba_stream_tbl_entry(priv, del_tbl_ptr);
- spin_unlock_irqrestore(&priv->tx_ba_stream_tbl_lock, flags);
+ spin_unlock_bh(&priv->tx_ba_stream_tbl_lock);
INIT_LIST_HEAD(&priv->tx_ba_stream_tbl_ptr);
@@ -539,18 +536,16 @@ struct mwifiex_tx_ba_stream_tbl *
mwifiex_get_ba_tbl(struct mwifiex_private *priv, int tid, u8 *ra)
{
struct mwifiex_tx_ba_stream_tbl *tx_ba_tsr_tbl;
- unsigned long flags;
- spin_lock_irqsave(&priv->tx_ba_stream_tbl_lock, flags);
+ spin_lock_bh(&priv->tx_ba_stream_tbl_lock);
list_for_each_entry(tx_ba_tsr_tbl, &priv->tx_ba_stream_tbl_ptr, list) {
if (ether_addr_equal_unaligned(tx_ba_tsr_tbl->ra, ra) &&
tx_ba_tsr_tbl->tid == tid) {
- spin_unlock_irqrestore(&priv->tx_ba_stream_tbl_lock,
- flags);
+ spin_unlock_bh(&priv->tx_ba_stream_tbl_lock);
return tx_ba_tsr_tbl;
}
}
- spin_unlock_irqrestore(&priv->tx_ba_stream_tbl_lock, flags);
+ spin_unlock_bh(&priv->tx_ba_stream_tbl_lock);
return NULL;
}
@@ -563,7 +558,6 @@ void mwifiex_create_ba_tbl(struct mwifiex_private *priv, u8 *ra, int tid,
{
struct mwifiex_tx_ba_stream_tbl *new_node;
struct mwifiex_ra_list_tbl *ra_list;
- unsigned long flags;
int tid_down;
if (!mwifiex_get_ba_tbl(priv, tid, ra)) {
@@ -584,9 +578,9 @@ void mwifiex_create_ba_tbl(struct mwifiex_private *priv, u8 *ra, int tid,
new_node->ba_status = ba_status;
memcpy(new_node->ra, ra, ETH_ALEN);
- spin_lock_irqsave(&priv->tx_ba_stream_tbl_lock, flags);
+ spin_lock_bh(&priv->tx_ba_stream_tbl_lock);
list_add_tail(&new_node->list, &priv->tx_ba_stream_tbl_ptr);
- spin_unlock_irqrestore(&priv->tx_ba_stream_tbl_lock, flags);
+ spin_unlock_bh(&priv->tx_ba_stream_tbl_lock);
}
}
@@ -599,7 +593,6 @@ int mwifiex_send_addba(struct mwifiex_private *priv, int tid, u8 *peer_mac)
u32 tx_win_size = priv->add_ba_param.tx_win_size;
static u8 dialog_tok;
int ret;
- unsigned long flags;
u16 block_ack_param_set;
mwifiex_dbg(priv->adapter, CMD, "cmd: %s: tid %d\n", __func__, tid);
@@ -612,10 +605,10 @@ int mwifiex_send_addba(struct mwifiex_private *priv, int tid, u8 *peer_mac)
memcmp(priv->cfg_bssid, peer_mac, ETH_ALEN)) {
struct mwifiex_sta_node *sta_ptr;
- spin_lock_irqsave(&priv->sta_list_spinlock, flags);
+ spin_lock_bh(&priv->sta_list_spinlock);
sta_ptr = mwifiex_get_sta_entry(priv, peer_mac);
if (!sta_ptr) {
- spin_unlock_irqrestore(&priv->sta_list_spinlock, flags);
+ spin_unlock_bh(&priv->sta_list_spinlock);
mwifiex_dbg(priv->adapter, ERROR,
"BA setup with unknown TDLS peer %pM!\n",
peer_mac);
@@ -623,7 +616,7 @@ int mwifiex_send_addba(struct mwifiex_private *priv, int tid, u8 *peer_mac)
}
if (sta_ptr->is_11ac_enabled)
tx_win_size = MWIFIEX_11AC_STA_AMPDU_DEF_TXWINSIZE;
- spin_unlock_irqrestore(&priv->sta_list_spinlock, flags);
+ spin_unlock_bh(&priv->sta_list_spinlock);
}
block_ack_param_set = (u16)((tid << BLOCKACKPARAM_TID_POS) |
@@ -687,9 +680,8 @@ int mwifiex_send_delba(struct mwifiex_private *priv, int tid, u8 *peer_mac,
void mwifiex_11n_delba(struct mwifiex_private *priv, int tid)
{
struct mwifiex_rx_reorder_tbl *rx_reor_tbl_ptr;
- unsigned long flags;
- spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags);
+ spin_lock_bh(&priv->rx_reorder_tbl_lock);
list_for_each_entry(rx_reor_tbl_ptr, &priv->rx_reorder_tbl_ptr, list) {
if (rx_reor_tbl_ptr->tid == tid) {
dev_dbg(priv->adapter->dev,
@@ -700,7 +692,7 @@ void mwifiex_11n_delba(struct mwifiex_private *priv, int tid)
}
}
exit:
- spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags);
+ spin_unlock_bh(&priv->rx_reorder_tbl_lock);
}
/*
@@ -729,9 +721,8 @@ int mwifiex_get_rx_reorder_tbl(struct mwifiex_private *priv,
struct mwifiex_ds_rx_reorder_tbl *rx_reo_tbl = buf;
struct mwifiex_rx_reorder_tbl *rx_reorder_tbl_ptr;
int count = 0;
- unsigned long flags;
- spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags);
+ spin_lock_bh(&priv->rx_reorder_tbl_lock);
list_for_each_entry(rx_reorder_tbl_ptr, &priv->rx_reorder_tbl_ptr,
list) {
rx_reo_tbl->tid = (u16) rx_reorder_tbl_ptr->tid;
@@ -750,7 +741,7 @@ int mwifiex_get_rx_reorder_tbl(struct mwifiex_private *priv,
if (count >= MWIFIEX_MAX_RX_BASTREAM_SUPPORTED)
break;
}
- spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags);
+ spin_unlock_bh(&priv->rx_reorder_tbl_lock);
return count;
}
@@ -764,9 +755,8 @@ int mwifiex_get_tx_ba_stream_tbl(struct mwifiex_private *priv,
struct mwifiex_tx_ba_stream_tbl *tx_ba_tsr_tbl;
struct mwifiex_ds_tx_ba_stream_tbl *rx_reo_tbl = buf;
int count = 0;
- unsigned long flags;
- spin_lock_irqsave(&priv->tx_ba_stream_tbl_lock, flags);
+ spin_lock_bh(&priv->tx_ba_stream_tbl_lock);
list_for_each_entry(tx_ba_tsr_tbl, &priv->tx_ba_stream_tbl_ptr, list) {
rx_reo_tbl->tid = (u16) tx_ba_tsr_tbl->tid;
mwifiex_dbg(priv->adapter, DATA, "data: %s tid=%d\n",
@@ -778,7 +768,7 @@ int mwifiex_get_tx_ba_stream_tbl(struct mwifiex_private *priv,
if (count >= MWIFIEX_MAX_TX_BASTREAM_SUPPORTED)
break;
}
- spin_unlock_irqrestore(&priv->tx_ba_stream_tbl_lock, flags);
+ spin_unlock_bh(&priv->tx_ba_stream_tbl_lock);
return count;
}
@@ -790,16 +780,15 @@ int mwifiex_get_tx_ba_stream_tbl(struct mwifiex_private *priv,
void mwifiex_del_tx_ba_stream_tbl_by_ra(struct mwifiex_private *priv, u8 *ra)
{
struct mwifiex_tx_ba_stream_tbl *tbl, *tmp;
- unsigned long flags;
if (!ra)
return;
- spin_lock_irqsave(&priv->tx_ba_stream_tbl_lock, flags);
+ spin_lock_bh(&priv->tx_ba_stream_tbl_lock);
list_for_each_entry_safe(tbl, tmp, &priv->tx_ba_stream_tbl_ptr, list)
if (!memcmp(tbl->ra, ra, ETH_ALEN))
mwifiex_11n_delete_tx_ba_stream_tbl_entry(priv, tbl);
- spin_unlock_irqrestore(&priv->tx_ba_stream_tbl_lock, flags);
+ spin_unlock_bh(&priv->tx_ba_stream_tbl_lock);
return;
}
diff --git a/drivers/net/wireless/marvell/mwifiex/11n.h b/drivers/net/wireless/marvell/mwifiex/11n.h
index ea0fa68b9913..33268ce2cd82 100644
--- a/drivers/net/wireless/marvell/mwifiex/11n.h
+++ b/drivers/net/wireless/marvell/mwifiex/11n.h
@@ -147,11 +147,10 @@ mwifiex_find_stream_to_delete(struct mwifiex_private *priv, int ptr_tid,
int tid;
u8 ret = false;
struct mwifiex_tx_ba_stream_tbl *tx_tbl;
- unsigned long flags;
tid = priv->aggr_prio_tbl[ptr_tid].ampdu_user;
- spin_lock_irqsave(&priv->tx_ba_stream_tbl_lock, flags);
+ spin_lock_bh(&priv->tx_ba_stream_tbl_lock);
list_for_each_entry(tx_tbl, &priv->tx_ba_stream_tbl_ptr, list) {
if (tid > priv->aggr_prio_tbl[tx_tbl->tid].ampdu_user) {
tid = priv->aggr_prio_tbl[tx_tbl->tid].ampdu_user;
@@ -160,7 +159,7 @@ mwifiex_find_stream_to_delete(struct mwifiex_private *priv, int ptr_tid,
ret = true;
}
}
- spin_unlock_irqrestore(&priv->tx_ba_stream_tbl_lock, flags);
+ spin_unlock_bh(&priv->tx_ba_stream_tbl_lock);
return ret;
}
diff --git a/drivers/net/wireless/marvell/mwifiex/11n_aggr.c b/drivers/net/wireless/marvell/mwifiex/11n_aggr.c
index 042a1d07f686..088612438530 100644
--- a/drivers/net/wireless/marvell/mwifiex/11n_aggr.c
+++ b/drivers/net/wireless/marvell/mwifiex/11n_aggr.c
@@ -155,7 +155,7 @@ mwifiex_11n_form_amsdu_txpd(struct mwifiex_private *priv,
int
mwifiex_11n_aggregate_pkt(struct mwifiex_private *priv,
struct mwifiex_ra_list_tbl *pra_list,
- int ptrindex, unsigned long ra_list_flags)
+ int ptrindex)
__releases(&priv->wmm.ra_list_spinlock)
{
struct mwifiex_adapter *adapter = priv->adapter;
@@ -168,8 +168,7 @@ mwifiex_11n_aggregate_pkt(struct mwifiex_private *priv,
skb_src = skb_peek(&pra_list->skb_head);
if (!skb_src) {
- spin_unlock_irqrestore(&priv->wmm.ra_list_spinlock,
- ra_list_flags);
+ spin_unlock_bh(&priv->wmm.ra_list_spinlock);
return 0;
}
@@ -177,8 +176,7 @@ mwifiex_11n_aggregate_pkt(struct mwifiex_private *priv,
skb_aggr = mwifiex_alloc_dma_align_buf(adapter->tx_buf_size,
GFP_ATOMIC);
if (!skb_aggr) {
- spin_unlock_irqrestore(&priv->wmm.ra_list_spinlock,
- ra_list_flags);
+ spin_unlock_bh(&priv->wmm.ra_list_spinlock);
return -1;
}
@@ -208,17 +206,15 @@ mwifiex_11n_aggregate_pkt(struct mwifiex_private *priv,
pra_list->total_pkt_count--;
atomic_dec(&priv->wmm.tx_pkts_queued);
aggr_num++;
- spin_unlock_irqrestore(&priv->wmm.ra_list_spinlock,
- ra_list_flags);
+ spin_unlock_bh(&priv->wmm.ra_list_spinlock);
mwifiex_11n_form_amsdu_pkt(skb_aggr, skb_src, &pad);
mwifiex_write_data_complete(adapter, skb_src, 0, 0);
- spin_lock_irqsave(&priv->wmm.ra_list_spinlock, ra_list_flags);
+ spin_lock_bh(&priv->wmm.ra_list_spinlock);
if (!mwifiex_is_ralist_valid(priv, pra_list, ptrindex)) {
- spin_unlock_irqrestore(&priv->wmm.ra_list_spinlock,
- ra_list_flags);
+ spin_unlock_bh(&priv->wmm.ra_list_spinlock);
return -1;
}
@@ -232,7 +228,7 @@ mwifiex_11n_aggregate_pkt(struct mwifiex_private *priv,
} while (skb_src);
- spin_unlock_irqrestore(&priv->wmm.ra_list_spinlock, ra_list_flags);
+ spin_unlock_bh(&priv->wmm.ra_list_spinlock);
/* Last AMSDU packet does not need padding */
skb_trim(skb_aggr, skb_aggr->len - pad);
@@ -265,10 +261,9 @@ mwifiex_11n_aggregate_pkt(struct mwifiex_private *priv,
}
switch (ret) {
case -EBUSY:
- spin_lock_irqsave(&priv->wmm.ra_list_spinlock, ra_list_flags);
+ spin_lock_bh(&priv->wmm.ra_list_spinlock);
if (!mwifiex_is_ralist_valid(priv, pra_list, ptrindex)) {
- spin_unlock_irqrestore(&priv->wmm.ra_list_spinlock,
- ra_list_flags);
+ spin_unlock_bh(&priv->wmm.ra_list_spinlock);
mwifiex_write_data_complete(adapter, skb_aggr, 1, -1);
return -1;
}
@@ -286,8 +281,7 @@ mwifiex_11n_aggregate_pkt(struct mwifiex_private *priv,
atomic_inc(&priv->wmm.tx_pkts_queued);
tx_info_aggr->flags |= MWIFIEX_BUF_FLAG_REQUEUED_PKT;
- spin_unlock_irqrestore(&priv->wmm.ra_list_spinlock,
- ra_list_flags);
+ spin_unlock_bh(&priv->wmm.ra_list_spinlock);
mwifiex_dbg(adapter, ERROR, "data: -EBUSY is returned\n");
break;
case -1:
diff --git a/drivers/net/wireless/marvell/mwifiex/11n_aggr.h b/drivers/net/wireless/marvell/mwifiex/11n_aggr.h
index 0cd2a3eb6c17..8279b159da7c 100644
--- a/drivers/net/wireless/marvell/mwifiex/11n_aggr.h
+++ b/drivers/net/wireless/marvell/mwifiex/11n_aggr.h
@@ -27,7 +27,7 @@ int mwifiex_11n_deaggregate_pkt(struct mwifiex_private *priv,
struct sk_buff *skb);
int mwifiex_11n_aggregate_pkt(struct mwifiex_private *priv,
struct mwifiex_ra_list_tbl *ptr,
- int ptr_index, unsigned long flags)
+ int ptr_index)
__releases(&priv->wmm.ra_list_spinlock);
#endif /* !_MWIFIEX_11N_AGGR_H_ */
diff --git a/drivers/net/wireless/marvell/mwifiex/11n_rxreorder.c b/drivers/net/wireless/marvell/mwifiex/11n_rxreorder.c
index 5380fba652cc..05a3c61ac603 100644
--- a/drivers/net/wireless/marvell/mwifiex/11n_rxreorder.c
+++ b/drivers/net/wireless/marvell/mwifiex/11n_rxreorder.c
@@ -76,7 +76,8 @@ static int mwifiex_11n_dispatch_amsdu_pkt(struct mwifiex_private *priv,
/* This function will process the rx packet and forward it to kernel/upper
* layer.
*/
-static int mwifiex_11n_dispatch_pkt(struct mwifiex_private *priv, void *payload)
+static int mwifiex_11n_dispatch_pkt(struct mwifiex_private *priv,
+ struct sk_buff *payload)
{
int ret;
@@ -109,27 +110,25 @@ mwifiex_11n_dispatch_pkt_until_start_win(struct mwifiex_private *priv,
struct mwifiex_rx_reorder_tbl *tbl,
int start_win)
{
+ struct sk_buff_head list;
+ struct sk_buff *skb;
int pkt_to_send, i;
- void *rx_tmp_ptr;
- unsigned long flags;
+
+ __skb_queue_head_init(&list);
+ spin_lock_bh(&priv->rx_reorder_tbl_lock);
pkt_to_send = (start_win > tbl->start_win) ?
min((start_win - tbl->start_win), tbl->win_size) :
tbl->win_size;
for (i = 0; i < pkt_to_send; ++i) {
- spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags);
- rx_tmp_ptr = NULL;
if (tbl->rx_reorder_ptr[i]) {
- rx_tmp_ptr = tbl->rx_reorder_ptr[i];
+ skb = tbl->rx_reorder_ptr[i];
+ __skb_queue_tail(&list, skb);
tbl->rx_reorder_ptr[i] = NULL;
}
- spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags);
- if (rx_tmp_ptr)
- mwifiex_11n_dispatch_pkt(priv, rx_tmp_ptr);
}
- spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags);
/*
* We don't have a circular buffer, hence use rotation to simulate
* circular buffer
@@ -140,7 +139,10 @@ mwifiex_11n_dispatch_pkt_until_start_win(struct mwifiex_private *priv,
}
tbl->start_win = start_win;
- spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags);
+ spin_unlock_bh(&priv->rx_reorder_tbl_lock);
+
+ while ((skb = __skb_dequeue(&list)))
+ mwifiex_11n_dispatch_pkt(priv, skb);
}
/*
@@ -155,24 +157,21 @@ static void
mwifiex_11n_scan_and_dispatch(struct mwifiex_private *priv,
struct mwifiex_rx_reorder_tbl *tbl)
{
+ struct sk_buff_head list;
+ struct sk_buff *skb;
int i, j, xchg;
- void *rx_tmp_ptr;
- unsigned long flags;
+
+ __skb_queue_head_init(&list);
+ spin_lock_bh(&priv->rx_reorder_tbl_lock);
for (i = 0; i < tbl->win_size; ++i) {
- spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags);
- if (!tbl->rx_reorder_ptr[i]) {
- spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock,
- flags);
+ if (!tbl->rx_reorder_ptr[i])
break;
- }
- rx_tmp_ptr = tbl->rx_reorder_ptr[i];
+ skb = tbl->rx_reorder_ptr[i];
+ __skb_queue_tail(&list, skb);
tbl->rx_reorder_ptr[i] = NULL;
- spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags);
- mwifiex_11n_dispatch_pkt(priv, rx_tmp_ptr);
}
- spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags);
/*
* We don't have a circular buffer, hence use rotation to simulate
* circular buffer
@@ -185,7 +184,11 @@ mwifiex_11n_scan_and_dispatch(struct mwifiex_private *priv,
}
}
tbl->start_win = (tbl->start_win + i) & (MAX_TID_VALUE - 1);
- spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags);
+
+ spin_unlock_bh(&priv->rx_reorder_tbl_lock);
+
+ while ((skb = __skb_dequeue(&list)))
+ mwifiex_11n_dispatch_pkt(priv, skb);
}
/*
@@ -198,19 +201,18 @@ static void
mwifiex_del_rx_reorder_entry(struct mwifiex_private *priv,
struct mwifiex_rx_reorder_tbl *tbl)
{
- unsigned long flags;
int start_win;
if (!tbl)
return;
- spin_lock_irqsave(&priv->adapter->rx_proc_lock, flags);
+ spin_lock_bh(&priv->adapter->rx_proc_lock);
priv->adapter->rx_locked = true;
if (priv->adapter->rx_processing) {
- spin_unlock_irqrestore(&priv->adapter->rx_proc_lock, flags);
+ spin_unlock_bh(&priv->adapter->rx_proc_lock);
flush_workqueue(priv->adapter->rx_workqueue);
} else {
- spin_unlock_irqrestore(&priv->adapter->rx_proc_lock, flags);
+ spin_unlock_bh(&priv->adapter->rx_proc_lock);
}
start_win = (tbl->start_win + tbl->win_size) & (MAX_TID_VALUE - 1);
@@ -219,16 +221,16 @@ mwifiex_del_rx_reorder_entry(struct mwifiex_private *priv,
del_timer_sync(&tbl->timer_context.timer);
tbl->timer_context.timer_is_set = false;
- spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags);
+ spin_lock_bh(&priv->rx_reorder_tbl_lock);
list_del(&tbl->list);
- spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags);
+ spin_unlock_bh(&priv->rx_reorder_tbl_lock);
kfree(tbl->rx_reorder_ptr);
kfree(tbl);
- spin_lock_irqsave(&priv->adapter->rx_proc_lock, flags);
+ spin_lock_bh(&priv->adapter->rx_proc_lock);
priv->adapter->rx_locked = false;
- spin_unlock_irqrestore(&priv->adapter->rx_proc_lock, flags);
+ spin_unlock_bh(&priv->adapter->rx_proc_lock);
}
@@ -240,17 +242,15 @@ struct mwifiex_rx_reorder_tbl *
mwifiex_11n_get_rx_reorder_tbl(struct mwifiex_private *priv, int tid, u8 *ta)
{
struct mwifiex_rx_reorder_tbl *tbl;
- unsigned long flags;
- spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags);
+ spin_lock_bh(&priv->rx_reorder_tbl_lock);
list_for_each_entry(tbl, &priv->rx_reorder_tbl_ptr, list) {
if (!memcmp(tbl->ta, ta, ETH_ALEN) && tbl->tid == tid) {
- spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock,
- flags);
+ spin_unlock_bh(&priv->rx_reorder_tbl_lock);
return tbl;
}
}
- spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags);
+ spin_unlock_bh(&priv->rx_reorder_tbl_lock);
return NULL;
}
@@ -261,21 +261,19 @@ mwifiex_11n_get_rx_reorder_tbl(struct mwifiex_private *priv, int tid, u8 *ta)
void mwifiex_11n_del_rx_reorder_tbl_by_ta(struct mwifiex_private *priv, u8 *ta)
{
struct mwifiex_rx_reorder_tbl *tbl, *tmp;
- unsigned long flags;
if (!ta)
return;
- spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags);
+ spin_lock_bh(&priv->rx_reorder_tbl_lock);
list_for_each_entry_safe(tbl, tmp, &priv->rx_reorder_tbl_ptr, list) {
if (!memcmp(tbl->ta, ta, ETH_ALEN)) {
- spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock,
- flags);
+ spin_unlock_bh(&priv->rx_reorder_tbl_lock);
mwifiex_del_rx_reorder_entry(priv, tbl);
- spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags);
+ spin_lock_bh(&priv->rx_reorder_tbl_lock);
}
}
- spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags);
+ spin_unlock_bh(&priv->rx_reorder_tbl_lock);
return;
}
@@ -289,18 +287,16 @@ mwifiex_11n_find_last_seq_num(struct reorder_tmr_cnxt *ctx)
{
struct mwifiex_rx_reorder_tbl *rx_reorder_tbl_ptr = ctx->ptr;
struct mwifiex_private *priv = ctx->priv;
- unsigned long flags;
int i;
- spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags);
+ spin_lock_bh(&priv->rx_reorder_tbl_lock);
for (i = rx_reorder_tbl_ptr->win_size - 1; i >= 0; --i) {
if (rx_reorder_tbl_ptr->rx_reorder_ptr[i]) {
- spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock,
- flags);
+ spin_unlock_bh(&priv->rx_reorder_tbl_lock);
return i;
}
}
- spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags);
+ spin_unlock_bh(&priv->rx_reorder_tbl_lock);
return -1;
}
@@ -348,7 +344,6 @@ mwifiex_11n_create_rx_reorder_tbl(struct mwifiex_private *priv, u8 *ta,
int i;
struct mwifiex_rx_reorder_tbl *tbl, *new_node;
u16 last_seq = 0;
- unsigned long flags;
struct mwifiex_sta_node *node;
/*
@@ -372,7 +367,7 @@ mwifiex_11n_create_rx_reorder_tbl(struct mwifiex_private *priv, u8 *ta,
new_node->init_win = seq_num;
new_node->flags = 0;
- spin_lock_irqsave(&priv->sta_list_spinlock, flags);
+ spin_lock_bh(&priv->sta_list_spinlock);
if (mwifiex_queuing_ra_based(priv)) {
if (priv->bss_role == MWIFIEX_BSS_ROLE_UAP) {
node = mwifiex_get_sta_entry(priv, ta);
@@ -386,7 +381,7 @@ mwifiex_11n_create_rx_reorder_tbl(struct mwifiex_private *priv, u8 *ta,
else
last_seq = priv->rx_seq[tid];
}
- spin_unlock_irqrestore(&priv->sta_list_spinlock, flags);
+ spin_unlock_bh(&priv->sta_list_spinlock);
mwifiex_dbg(priv->adapter, INFO,
"info: last_seq=%d start_win=%d\n",
@@ -418,9 +413,9 @@ mwifiex_11n_create_rx_reorder_tbl(struct mwifiex_private *priv, u8 *ta,
for (i = 0; i < win_size; ++i)
new_node->rx_reorder_ptr[i] = NULL;
- spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags);
+ spin_lock_bh(&priv->rx_reorder_tbl_lock);
list_add_tail(&new_node->list, &priv->rx_reorder_tbl_ptr);
- spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags);
+ spin_unlock_bh(&priv->rx_reorder_tbl_lock);
}
static void
@@ -476,18 +471,17 @@ int mwifiex_cmd_11n_addba_rsp_gen(struct mwifiex_private *priv,
u32 rx_win_size = priv->add_ba_param.rx_win_size;
u8 tid;
int win_size;
- unsigned long flags;
uint16_t block_ack_param_set;
if ((GET_BSS_ROLE(priv) == MWIFIEX_BSS_ROLE_STA) &&
ISSUPP_TDLS_ENABLED(priv->adapter->fw_cap_info) &&
priv->adapter->is_hw_11ac_capable &&
memcmp(priv->cfg_bssid, cmd_addba_req->peer_mac_addr, ETH_ALEN)) {
- spin_lock_irqsave(&priv->sta_list_spinlock, flags);
+ spin_lock_bh(&priv->sta_list_spinlock);
sta_ptr = mwifiex_get_sta_entry(priv,
cmd_addba_req->peer_mac_addr);
if (!sta_ptr) {
- spin_unlock_irqrestore(&priv->sta_list_spinlock, flags);
+ spin_unlock_bh(&priv->sta_list_spinlock);
mwifiex_dbg(priv->adapter, ERROR,
"BA setup with unknown TDLS peer %pM!\n",
cmd_addba_req->peer_mac_addr);
@@ -495,7 +489,7 @@ int mwifiex_cmd_11n_addba_rsp_gen(struct mwifiex_private *priv,
}
if (sta_ptr->is_11ac_enabled)
rx_win_size = MWIFIEX_11AC_STA_AMPDU_DEF_RXWINSIZE;
- spin_unlock_irqrestore(&priv->sta_list_spinlock, flags);
+ spin_unlock_bh(&priv->sta_list_spinlock);
}
cmd->command = cpu_to_le16(HostCmd_CMD_11N_ADDBA_RSP);
@@ -682,7 +676,6 @@ mwifiex_del_ba_tbl(struct mwifiex_private *priv, int tid, u8 *peer_mac,
struct mwifiex_tx_ba_stream_tbl *ptx_tbl;
struct mwifiex_ra_list_tbl *ra_list;
u8 cleanup_rx_reorder_tbl;
- unsigned long flags;
int tid_down;
if (type == TYPE_DELBA_RECEIVE)
@@ -716,9 +709,9 @@ mwifiex_del_ba_tbl(struct mwifiex_private *priv, int tid, u8 *peer_mac,
ra_list->amsdu_in_ampdu = false;
ra_list->ba_status = BA_SETUP_NONE;
}
- spin_lock_irqsave(&priv->tx_ba_stream_tbl_lock, flags);
+ spin_lock_bh(&priv->tx_ba_stream_tbl_lock);
mwifiex_11n_delete_tx_ba_stream_tbl_entry(priv, ptx_tbl);
- spin_unlock_irqrestore(&priv->tx_ba_stream_tbl_lock, flags);
+ spin_unlock_bh(&priv->tx_ba_stream_tbl_lock);
}
}
@@ -804,17 +797,16 @@ void mwifiex_11n_ba_stream_timeout(struct mwifiex_private *priv,
void mwifiex_11n_cleanup_reorder_tbl(struct mwifiex_private *priv)
{
struct mwifiex_rx_reorder_tbl *del_tbl_ptr, *tmp_node;
- unsigned long flags;
- spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags);
+ spin_lock_bh(&priv->rx_reorder_tbl_lock);
list_for_each_entry_safe(del_tbl_ptr, tmp_node,
&priv->rx_reorder_tbl_ptr, list) {
- spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags);
+ spin_unlock_bh(&priv->rx_reorder_tbl_lock);
mwifiex_del_rx_reorder_entry(priv, del_tbl_ptr);
- spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags);
+ spin_lock_bh(&priv->rx_reorder_tbl_lock);
}
INIT_LIST_HEAD(&priv->rx_reorder_tbl_ptr);
- spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags);
+ spin_unlock_bh(&priv->rx_reorder_tbl_lock);
mwifiex_reset_11n_rx_seq_num(priv);
}
@@ -826,7 +818,6 @@ void mwifiex_update_rxreor_flags(struct mwifiex_adapter *adapter, u8 flags)
{
struct mwifiex_private *priv;
struct mwifiex_rx_reorder_tbl *tbl;
- unsigned long lock_flags;
int i;
for (i = 0; i < adapter->priv_num; i++) {
@@ -834,10 +825,10 @@ void mwifiex_update_rxreor_flags(struct mwifiex_adapter *adapter, u8 flags)
if (!priv)
continue;
- spin_lock_irqsave(&priv->rx_reorder_tbl_lock, lock_flags);
+ spin_lock_bh(&priv->rx_reorder_tbl_lock);
list_for_each_entry(tbl, &priv->rx_reorder_tbl_ptr, list)
tbl->flags = flags;
- spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, lock_flags);
+ spin_unlock_bh(&priv->rx_reorder_tbl_lock);
}
return;
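The rx-reorder hunks above also shorten the time the table lock is held: ready skbs are first collected on a private sk_buff_head under the lock and only dispatched once it is dropped. A reduced sketch of that pattern, with placeholder slot-array and deliver() names:

#include <linux/skbuff.h>
#include <linux/spinlock.h>

/* Illustrative only: placeholder slot array and deliver() callback. */
static void flush_ready_slots(spinlock_t *lock, struct sk_buff **slots,
			      int n, void (*deliver)(struct sk_buff *skb))
{
	struct sk_buff_head list;
	struct sk_buff *skb;
	int i;

	__skb_queue_head_init(&list);

	spin_lock_bh(lock);
	for (i = 0; i < n; i++) {
		if (!slots[i])
			continue;
		/* Move the skb onto the private, unlocked list. */
		__skb_queue_tail(&list, slots[i]);
		slots[i] = NULL;
	}
	spin_unlock_bh(lock);

	/* Deliver with the lock dropped, so deliver() may take it again. */
	while ((skb = __skb_dequeue(&list)))
		deliver(skb);
}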
diff --git a/drivers/net/wireless/marvell/mwifiex/cfg80211.c b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
index e11a4bb67172..d89684168500 100644
--- a/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+++ b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
@@ -876,13 +876,13 @@ static int mwifiex_deinit_priv_params(struct mwifiex_private *priv)
spin_unlock_irqrestore(&adapter->main_proc_lock, flags);
}
- spin_lock_irqsave(&adapter->rx_proc_lock, flags);
+ spin_lock_bh(&adapter->rx_proc_lock);
adapter->rx_locked = true;
if (adapter->rx_processing) {
- spin_unlock_irqrestore(&adapter->rx_proc_lock, flags);
+ spin_unlock_bh(&adapter->rx_proc_lock);
flush_workqueue(adapter->rx_workqueue);
} else {
- spin_unlock_irqrestore(&adapter->rx_proc_lock, flags);
+ spin_unlock_bh(&adapter->rx_proc_lock);
}
mwifiex_free_priv(priv);
@@ -934,9 +934,9 @@ mwifiex_init_new_priv_params(struct mwifiex_private *priv,
adapter->main_locked = false;
spin_unlock_irqrestore(&adapter->main_proc_lock, flags);
- spin_lock_irqsave(&adapter->rx_proc_lock, flags);
+ spin_lock_bh(&adapter->rx_proc_lock);
adapter->rx_locked = false;
- spin_unlock_irqrestore(&adapter->rx_proc_lock, flags);
+ spin_unlock_bh(&adapter->rx_proc_lock);
mwifiex_set_mac_address(priv, dev, false, NULL);
@@ -1827,7 +1827,6 @@ mwifiex_cfg80211_del_station(struct wiphy *wiphy, struct net_device *dev,
struct mwifiex_private *priv = mwifiex_netdev_get_priv(dev);
struct mwifiex_sta_node *sta_node;
u8 deauth_mac[ETH_ALEN];
- unsigned long flags;
if (!priv->bss_started && priv->wdev.cac_started) {
mwifiex_dbg(priv->adapter, INFO, "%s: abort CAC!\n", __func__);
@@ -1845,11 +1844,11 @@ mwifiex_cfg80211_del_station(struct wiphy *wiphy, struct net_device *dev,
eth_zero_addr(deauth_mac);
- spin_lock_irqsave(&priv->sta_list_spinlock, flags);
+ spin_lock_bh(&priv->sta_list_spinlock);
sta_node = mwifiex_get_sta_entry(priv, params->mac);
if (sta_node)
ether_addr_copy(deauth_mac, params->mac);
- spin_unlock_irqrestore(&priv->sta_list_spinlock, flags);
+ spin_unlock_bh(&priv->sta_list_spinlock);
if (is_valid_ether_addr(deauth_mac)) {
if (mwifiex_send_cmd(priv, HostCmd_CMD_UAP_STA_DEAUTH,
@@ -3268,7 +3267,7 @@ static void mwifiex_set_auto_arp_mef_entry(struct mwifiex_private *priv,
in_dev = __in_dev_get_rtnl(adapter->priv[i]->netdev);
if (!in_dev)
continue;
- ifa = in_dev->ifa_list;
+ ifa = rtnl_dereference(in_dev->ifa_list);
if (!ifa || !ifa->ifa_local)
continue;
ips[i] = ifa->ifa_local;
@@ -3852,15 +3851,14 @@ mwifiex_cfg80211_tdls_chan_switch(struct wiphy *wiphy, struct net_device *dev,
struct cfg80211_chan_def *chandef)
{
struct mwifiex_sta_node *sta_ptr;
- unsigned long flags;
u16 chan;
u8 second_chan_offset, band;
struct mwifiex_private *priv = mwifiex_netdev_get_priv(dev);
- spin_lock_irqsave(&priv->sta_list_spinlock, flags);
+ spin_lock_bh(&priv->sta_list_spinlock);
sta_ptr = mwifiex_get_sta_entry(priv, addr);
if (!sta_ptr) {
- spin_unlock_irqrestore(&priv->sta_list_spinlock, flags);
+ spin_unlock_bh(&priv->sta_list_spinlock);
wiphy_err(wiphy, "%s: Invalid TDLS peer %pM\n",
__func__, addr);
return -ENOENT;
@@ -3868,18 +3866,18 @@ mwifiex_cfg80211_tdls_chan_switch(struct wiphy *wiphy, struct net_device *dev,
if (!(sta_ptr->tdls_cap.extcap.ext_capab[3] &
WLAN_EXT_CAPA4_TDLS_CHAN_SWITCH)) {
- spin_unlock_irqrestore(&priv->sta_list_spinlock, flags);
+ spin_unlock_bh(&priv->sta_list_spinlock);
wiphy_err(wiphy, "%pM do not support tdls cs\n", addr);
return -ENOENT;
}
if (sta_ptr->tdls_status == TDLS_CHAN_SWITCHING ||
sta_ptr->tdls_status == TDLS_IN_OFF_CHAN) {
- spin_unlock_irqrestore(&priv->sta_list_spinlock, flags);
+ spin_unlock_bh(&priv->sta_list_spinlock);
wiphy_err(wiphy, "channel switch is running, abort request\n");
return -EALREADY;
}
- spin_unlock_irqrestore(&priv->sta_list_spinlock, flags);
+ spin_unlock_bh(&priv->sta_list_spinlock);
chan = chandef->chan->hw_value;
second_chan_offset = mwifiex_get_sec_chan_offset(chan);
@@ -3895,23 +3893,22 @@ mwifiex_cfg80211_tdls_cancel_chan_switch(struct wiphy *wiphy,
const u8 *addr)
{
struct mwifiex_sta_node *sta_ptr;
- unsigned long flags;
struct mwifiex_private *priv = mwifiex_netdev_get_priv(dev);
- spin_lock_irqsave(&priv->sta_list_spinlock, flags);
+ spin_lock_bh(&priv->sta_list_spinlock);
sta_ptr = mwifiex_get_sta_entry(priv, addr);
if (!sta_ptr) {
- spin_unlock_irqrestore(&priv->sta_list_spinlock, flags);
+ spin_unlock_bh(&priv->sta_list_spinlock);
wiphy_err(wiphy, "%s: Invalid TDLS peer %pM\n",
__func__, addr);
} else if (!(sta_ptr->tdls_status == TDLS_CHAN_SWITCHING ||
sta_ptr->tdls_status == TDLS_IN_BASE_CHAN ||
sta_ptr->tdls_status == TDLS_IN_OFF_CHAN)) {
- spin_unlock_irqrestore(&priv->sta_list_spinlock, flags);
+ spin_unlock_bh(&priv->sta_list_spinlock);
wiphy_err(wiphy, "tdls chan switch not initialize by %pM\n",
addr);
} else {
- spin_unlock_irqrestore(&priv->sta_list_spinlock, flags);
+ spin_unlock_bh(&priv->sta_list_spinlock);
mwifiex_stop_tdls_cs(priv, addr);
}
}
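Besides the lock conversion, one cfg80211.c hunk switches the plain read of in_dev->ifa_list to rtnl_dereference(), matching the list's RCU/RTNL annotation. A small sketch of that access pattern, assuming the caller holds RTNL and using a placeholder helper name:

#include <linux/inetdevice.h>
#include <linux/rtnetlink.h>

/* Illustrative only: first IPv4 address of a device, read under RTNL. */
static __be32 first_ipv4_addr(struct net_device *dev)
{
	struct in_device *in_dev;
	struct in_ifaddr *ifa;

	ASSERT_RTNL();		/* the caller is assumed to hold the RTNL lock */

	in_dev = __in_dev_get_rtnl(dev);
	if (!in_dev)
		return 0;

	ifa = rtnl_dereference(in_dev->ifa_list);
	if (!ifa)
		return 0;

	return ifa->ifa_local;
}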
diff --git a/drivers/net/wireless/marvell/mwifiex/cmdevt.c b/drivers/net/wireless/marvell/mwifiex/cmdevt.c
index 8c35441fd9b7..e8788c35a453 100644
--- a/drivers/net/wireless/marvell/mwifiex/cmdevt.c
+++ b/drivers/net/wireless/marvell/mwifiex/cmdevt.c
@@ -39,10 +39,11 @@ static void mwifiex_cancel_pending_ioctl(struct mwifiex_adapter *adapter);
static void
mwifiex_init_cmd_node(struct mwifiex_private *priv,
struct cmd_ctrl_node *cmd_node,
- u32 cmd_oid, void *data_buf, bool sync)
+ u32 cmd_no, void *data_buf, bool sync)
{
cmd_node->priv = priv;
- cmd_node->cmd_oid = cmd_oid;
+ cmd_node->cmd_no = cmd_no;
+
if (sync) {
cmd_node->wait_q_enabled = true;
cmd_node->cmd_wait_q_woken = false;
@@ -60,19 +61,18 @@ static struct cmd_ctrl_node *
mwifiex_get_cmd_node(struct mwifiex_adapter *adapter)
{
struct cmd_ctrl_node *cmd_node;
- unsigned long flags;
- spin_lock_irqsave(&adapter->cmd_free_q_lock, flags);
+ spin_lock_bh(&adapter->cmd_free_q_lock);
if (list_empty(&adapter->cmd_free_q)) {
mwifiex_dbg(adapter, ERROR,
"GET_CMD_NODE: cmd node not available\n");
- spin_unlock_irqrestore(&adapter->cmd_free_q_lock, flags);
+ spin_unlock_bh(&adapter->cmd_free_q_lock);
return NULL;
}
cmd_node = list_first_entry(&adapter->cmd_free_q,
struct cmd_ctrl_node, list);
list_del(&cmd_node->list);
- spin_unlock_irqrestore(&adapter->cmd_free_q_lock, flags);
+ spin_unlock_bh(&adapter->cmd_free_q_lock);
return cmd_node;
}
@@ -92,7 +92,7 @@ static void
mwifiex_clean_cmd_node(struct mwifiex_adapter *adapter,
struct cmd_ctrl_node *cmd_node)
{
- cmd_node->cmd_oid = 0;
+ cmd_node->cmd_no = 0;
cmd_node->cmd_flag = 0;
cmd_node->data_buf = NULL;
cmd_node->wait_q_enabled = false;
@@ -116,8 +116,6 @@ static void
mwifiex_insert_cmd_to_free_q(struct mwifiex_adapter *adapter,
struct cmd_ctrl_node *cmd_node)
{
- unsigned long flags;
-
if (!cmd_node)
return;
@@ -127,9 +125,9 @@ mwifiex_insert_cmd_to_free_q(struct mwifiex_adapter *adapter,
mwifiex_clean_cmd_node(adapter, cmd_node);
/* Insert node into cmd_free_q */
- spin_lock_irqsave(&adapter->cmd_free_q_lock, flags);
+ spin_lock_bh(&adapter->cmd_free_q_lock);
list_add_tail(&cmd_node->list, &adapter->cmd_free_q);
- spin_unlock_irqrestore(&adapter->cmd_free_q_lock, flags);
+ spin_unlock_bh(&adapter->cmd_free_q_lock);
}
/* This function reuses a command node. */
@@ -182,7 +180,6 @@ static int mwifiex_dnld_cmd_to_fw(struct mwifiex_private *priv,
struct host_cmd_ds_command *host_cmd;
uint16_t cmd_code;
uint16_t cmd_size;
- unsigned long flags;
if (!adapter || !cmd_node)
return -1;
@@ -201,6 +198,7 @@ static int mwifiex_dnld_cmd_to_fw(struct mwifiex_private *priv,
}
cmd_code = le16_to_cpu(host_cmd->command);
+ cmd_node->cmd_no = cmd_code;
cmd_size = le16_to_cpu(host_cmd->size);
if (adapter->hw_status == MWIFIEX_HW_STATUS_RESET &&
@@ -221,9 +219,9 @@ static int mwifiex_dnld_cmd_to_fw(struct mwifiex_private *priv,
cmd_node->priv->bss_num,
cmd_node->priv->bss_type));
- spin_lock_irqsave(&adapter->mwifiex_cmd_lock, flags);
+ spin_lock_bh(&adapter->mwifiex_cmd_lock);
adapter->curr_cmd = cmd_node;
- spin_unlock_irqrestore(&adapter->mwifiex_cmd_lock, flags);
+ spin_unlock_bh(&adapter->mwifiex_cmd_lock);
/* Adjust skb length */
if (cmd_node->cmd_skb->len > cmd_size)
@@ -274,9 +272,9 @@ static int mwifiex_dnld_cmd_to_fw(struct mwifiex_private *priv,
adapter->cmd_wait_q.status = -1;
mwifiex_recycle_cmd_node(adapter, adapter->curr_cmd);
- spin_lock_irqsave(&adapter->mwifiex_cmd_lock, flags);
+ spin_lock_bh(&adapter->mwifiex_cmd_lock);
adapter->curr_cmd = NULL;
- spin_unlock_irqrestore(&adapter->mwifiex_cmd_lock, flags);
+ spin_unlock_bh(&adapter->mwifiex_cmd_lock);
adapter->dbg.num_cmd_host_to_card_failure++;
return -1;
@@ -621,7 +619,7 @@ int mwifiex_send_cmd(struct mwifiex_private *priv, u16 cmd_no,
}
/* Initialize the command node */
- mwifiex_init_cmd_node(priv, cmd_node, cmd_oid, data_buf, sync);
+ mwifiex_init_cmd_node(priv, cmd_node, cmd_no, data_buf, sync);
if (!cmd_node->cmd_skb) {
mwifiex_dbg(adapter, ERROR,
@@ -695,7 +693,6 @@ mwifiex_insert_cmd_to_pending_q(struct mwifiex_adapter *adapter,
{
struct host_cmd_ds_command *host_cmd = NULL;
u16 command;
- unsigned long flags;
bool add_tail = true;
host_cmd = (struct host_cmd_ds_command *) (cmd_node->cmd_skb->data);
@@ -717,12 +714,12 @@ mwifiex_insert_cmd_to_pending_q(struct mwifiex_adapter *adapter,
}
}
- spin_lock_irqsave(&adapter->cmd_pending_q_lock, flags);
+ spin_lock_bh(&adapter->cmd_pending_q_lock);
if (add_tail)
list_add_tail(&cmd_node->list, &adapter->cmd_pending_q);
else
list_add(&cmd_node->list, &adapter->cmd_pending_q);
- spin_unlock_irqrestore(&adapter->cmd_pending_q_lock, flags);
+ spin_unlock_bh(&adapter->cmd_pending_q_lock);
atomic_inc(&adapter->cmd_pending);
mwifiex_dbg(adapter, CMD,
@@ -747,8 +744,6 @@ int mwifiex_exec_next_cmd(struct mwifiex_adapter *adapter)
struct cmd_ctrl_node *cmd_node;
int ret = 0;
struct host_cmd_ds_command *host_cmd;
- unsigned long cmd_flags;
- unsigned long cmd_pending_q_flags;
/* Check if already in processing */
if (adapter->curr_cmd) {
@@ -757,13 +752,12 @@ int mwifiex_exec_next_cmd(struct mwifiex_adapter *adapter)
return -1;
}
- spin_lock_irqsave(&adapter->mwifiex_cmd_lock, cmd_flags);
+ spin_lock_bh(&adapter->mwifiex_cmd_lock);
/* Check if any command is pending */
- spin_lock_irqsave(&adapter->cmd_pending_q_lock, cmd_pending_q_flags);
+ spin_lock_bh(&adapter->cmd_pending_q_lock);
if (list_empty(&adapter->cmd_pending_q)) {
- spin_unlock_irqrestore(&adapter->cmd_pending_q_lock,
- cmd_pending_q_flags);
- spin_unlock_irqrestore(&adapter->mwifiex_cmd_lock, cmd_flags);
+ spin_unlock_bh(&adapter->cmd_pending_q_lock);
+ spin_unlock_bh(&adapter->mwifiex_cmd_lock);
return 0;
}
cmd_node = list_first_entry(&adapter->cmd_pending_q,
@@ -776,17 +770,15 @@ int mwifiex_exec_next_cmd(struct mwifiex_adapter *adapter)
mwifiex_dbg(adapter, ERROR,
"%s: cannot send cmd in sleep state,\t"
"this should not happen\n", __func__);
- spin_unlock_irqrestore(&adapter->cmd_pending_q_lock,
- cmd_pending_q_flags);
- spin_unlock_irqrestore(&adapter->mwifiex_cmd_lock, cmd_flags);
+ spin_unlock_bh(&adapter->cmd_pending_q_lock);
+ spin_unlock_bh(&adapter->mwifiex_cmd_lock);
return ret;
}
list_del(&cmd_node->list);
- spin_unlock_irqrestore(&adapter->cmd_pending_q_lock,
- cmd_pending_q_flags);
+ spin_unlock_bh(&adapter->cmd_pending_q_lock);
- spin_unlock_irqrestore(&adapter->mwifiex_cmd_lock, cmd_flags);
+ spin_unlock_bh(&adapter->mwifiex_cmd_lock);
ret = mwifiex_dnld_cmd_to_fw(priv, cmd_node);
priv = mwifiex_get_priv(adapter, MWIFIEX_BSS_ROLE_ANY);
/* Any command sent to the firmware when host is in sleep
@@ -820,10 +812,6 @@ int mwifiex_process_cmdresp(struct mwifiex_adapter *adapter)
uint16_t orig_cmdresp_no;
uint16_t cmdresp_no;
uint16_t cmdresp_result;
- unsigned long flags;
-
- /* Now we got response from FW, cancel the command timer */
- del_timer_sync(&adapter->cmd_timer);
if (!adapter->curr_cmd || !adapter->curr_cmd->resp_skb) {
resp = (struct host_cmd_ds_command *) adapter->upld_buf;
@@ -833,9 +821,20 @@ int mwifiex_process_cmdresp(struct mwifiex_adapter *adapter)
return -1;
}
+ resp = (struct host_cmd_ds_command *)adapter->curr_cmd->resp_skb->data;
+ orig_cmdresp_no = le16_to_cpu(resp->command);
+ cmdresp_no = (orig_cmdresp_no & HostCmd_CMD_ID_MASK);
+
+ if (adapter->curr_cmd->cmd_no != cmdresp_no) {
+ mwifiex_dbg(adapter, ERROR,
+ "cmdresp error: cmd=0x%x cmd_resp=0x%x\n",
+ adapter->curr_cmd->cmd_no, cmdresp_no);
+ return -1;
+ }
+ /* Now we got response from FW, cancel the command timer */
+ del_timer_sync(&adapter->cmd_timer);
clear_bit(MWIFIEX_IS_CMD_TIMEDOUT, &adapter->work_flags);
- resp = (struct host_cmd_ds_command *) adapter->curr_cmd->resp_skb->data;
if (adapter->curr_cmd->cmd_flag & CMD_F_HOSTCMD) {
/* Copy original response back to response buffer */
struct mwifiex_ds_misc_cmd *hostcmd;
@@ -849,7 +848,6 @@ int mwifiex_process_cmdresp(struct mwifiex_adapter *adapter)
memcpy(hostcmd->cmd, resp, size);
}
}
- orig_cmdresp_no = le16_to_cpu(resp->command);
/* Get BSS number and corresponding priv */
priv = mwifiex_get_priv_by_id(adapter,
@@ -882,9 +880,9 @@ int mwifiex_process_cmdresp(struct mwifiex_adapter *adapter)
adapter->cmd_wait_q.status = -1;
mwifiex_recycle_cmd_node(adapter, adapter->curr_cmd);
- spin_lock_irqsave(&adapter->mwifiex_cmd_lock, flags);
+ spin_lock_bh(&adapter->mwifiex_cmd_lock);
adapter->curr_cmd = NULL;
- spin_unlock_irqrestore(&adapter->mwifiex_cmd_lock, flags);
+ spin_unlock_bh(&adapter->mwifiex_cmd_lock);
return -1;
}
@@ -916,9 +914,9 @@ int mwifiex_process_cmdresp(struct mwifiex_adapter *adapter)
mwifiex_recycle_cmd_node(adapter, adapter->curr_cmd);
- spin_lock_irqsave(&adapter->mwifiex_cmd_lock, flags);
+ spin_lock_bh(&adapter->mwifiex_cmd_lock);
adapter->curr_cmd = NULL;
- spin_unlock_irqrestore(&adapter->mwifiex_cmd_lock, flags);
+ spin_unlock_bh(&adapter->mwifiex_cmd_lock);
}
return ret;
@@ -1024,17 +1022,16 @@ void
mwifiex_cancel_pending_scan_cmd(struct mwifiex_adapter *adapter)
{
struct cmd_ctrl_node *cmd_node = NULL, *tmp_node;
- unsigned long flags;
/* Cancel all pending scan command */
- spin_lock_irqsave(&adapter->scan_pending_q_lock, flags);
+ spin_lock_bh(&adapter->scan_pending_q_lock);
list_for_each_entry_safe(cmd_node, tmp_node,
&adapter->scan_pending_q, list) {
list_del(&cmd_node->list);
cmd_node->wait_q_enabled = false;
mwifiex_insert_cmd_to_free_q(adapter, cmd_node);
}
- spin_unlock_irqrestore(&adapter->scan_pending_q_lock, flags);
+ spin_unlock_bh(&adapter->scan_pending_q_lock);
}
/*
@@ -1048,9 +1045,8 @@ void
mwifiex_cancel_all_pending_cmd(struct mwifiex_adapter *adapter)
{
struct cmd_ctrl_node *cmd_node = NULL, *tmp_node;
- unsigned long flags, cmd_flags;
- spin_lock_irqsave(&adapter->mwifiex_cmd_lock, cmd_flags);
+ spin_lock_bh(&adapter->mwifiex_cmd_lock);
/* Cancel current cmd */
if ((adapter->curr_cmd) && (adapter->curr_cmd->wait_q_enabled)) {
adapter->cmd_wait_q.status = -1;
@@ -1059,7 +1055,7 @@ mwifiex_cancel_all_pending_cmd(struct mwifiex_adapter *adapter)
/* no recycle probably wait for response */
}
/* Cancel all pending command */
- spin_lock_irqsave(&adapter->cmd_pending_q_lock, flags);
+ spin_lock_bh(&adapter->cmd_pending_q_lock);
list_for_each_entry_safe(cmd_node, tmp_node,
&adapter->cmd_pending_q, list) {
list_del(&cmd_node->list);
@@ -1068,8 +1064,8 @@ mwifiex_cancel_all_pending_cmd(struct mwifiex_adapter *adapter)
adapter->cmd_wait_q.status = -1;
mwifiex_recycle_cmd_node(adapter, cmd_node);
}
- spin_unlock_irqrestore(&adapter->cmd_pending_q_lock, flags);
- spin_unlock_irqrestore(&adapter->mwifiex_cmd_lock, cmd_flags);
+ spin_unlock_bh(&adapter->cmd_pending_q_lock);
+ spin_unlock_bh(&adapter->mwifiex_cmd_lock);
mwifiex_cancel_scan(adapter);
}
@@ -1088,11 +1084,10 @@ static void
mwifiex_cancel_pending_ioctl(struct mwifiex_adapter *adapter)
{
struct cmd_ctrl_node *cmd_node = NULL;
- unsigned long cmd_flags;
if ((adapter->curr_cmd) &&
(adapter->curr_cmd->wait_q_enabled)) {
- spin_lock_irqsave(&adapter->mwifiex_cmd_lock, cmd_flags);
+ spin_lock_bh(&adapter->mwifiex_cmd_lock);
cmd_node = adapter->curr_cmd;
/* setting curr_cmd to NULL is quite dangerous, because
* mwifiex_process_cmdresp checks curr_cmd to be != NULL
@@ -1103,7 +1098,7 @@ mwifiex_cancel_pending_ioctl(struct mwifiex_adapter *adapter)
* at that point
*/
adapter->curr_cmd = NULL;
- spin_unlock_irqrestore(&adapter->mwifiex_cmd_lock, cmd_flags);
+ spin_unlock_bh(&adapter->mwifiex_cmd_lock);
mwifiex_recycle_cmd_node(adapter, cmd_node);
}
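The cmdevt.c hunks above rename cmd_oid to cmd_no, record the real command code at download time, and reject a firmware response whose ID does not match before the command timer is cancelled. A reduced sketch of that response check, using hypothetical stand-in structures rather than the driver's own:

#include <linux/types.h>
#include <asm/byteorder.h>

/* Illustrative only: reduced stand-ins for the structures in the diff. */
struct example_cmd_node {
	u16 cmd_no;		/* recorded when the command is downloaded */
};

struct example_fw_resp {
	__le16 command;		/* command ID echoed back by the firmware */
};

static int example_check_resp(const struct example_cmd_node *cur,
			      const struct example_fw_resp *resp, u16 id_mask)
{
	u16 resp_no = le16_to_cpu(resp->command) & id_mask;

	/* A response that does not match the outstanding command is dropped. */
	if (cur->cmd_no != resp_no)
		return -1;

	return 0;
}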
diff --git a/drivers/net/wireless/marvell/mwifiex/fw.h b/drivers/net/wireless/marvell/mwifiex/fw.h
index b73f99dc5a72..1fb76d2f5d3f 100644
--- a/drivers/net/wireless/marvell/mwifiex/fw.h
+++ b/drivers/net/wireless/marvell/mwifiex/fw.h
@@ -1759,9 +1759,10 @@ struct mwifiex_ie_types_wmm_queue_status {
struct ieee_types_vendor_header {
u8 element_id;
u8 len;
- u8 oui[4]; /* 0~2: oui, 3: oui_type */
- u8 oui_subtype;
- u8 version;
+ struct {
+ u8 oui[3];
+ u8 oui_type;
+ } __packed oui;
} __packed;
struct ieee_types_wmm_parameter {
@@ -1775,6 +1776,9 @@ struct ieee_types_wmm_parameter {
* Version [1]
*/
struct ieee_types_vendor_header vend_hdr;
+ u8 oui_subtype;
+ u8 version;
+
u8 qos_info_bitmap;
u8 reserved;
struct ieee_types_wmm_ac_parameters ac_params[IEEE80211_NUM_ACS];
@@ -1792,6 +1796,8 @@ struct ieee_types_wmm_info {
* Version [1]
*/
struct ieee_types_vendor_header vend_hdr;
+ u8 oui_subtype;
+ u8 version;
u8 qos_info_bitmap;
} __packed;
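The fw.h hunk splits the old 4-byte oui[] field into an explicit 3-byte OUI plus an oui_type byte and moves oui_subtype/version out of the common header, so the later scan.c hunk can length-check and compare exactly the bytes it means. A reduced sketch of that layout and check, with placeholder names:

#include <linux/types.h>
#include <linux/string.h>

/* Illustrative only: reduced copy of the reworked vendor IE header layout. */
struct example_vendor_header {
	u8 element_id;
	u8 len;
	struct {
		u8 oui[3];	/* 802.11 mandates at least the 3-byte OUI */
		u8 oui_type;
	} __packed oui;
} __packed;

static bool example_is_wpa_ie(const struct example_vendor_header *hdr,
			      size_t element_len)
{
	/* Microsoft OUI 00:50:f2 with OUI type 1 identifies the WPA IE. */
	static const u8 wpa_oui[] = { 0x00, 0x50, 0xf2, 0x01 };

	/* Too short to even carry an OUI: malformed element. */
	if (element_len < sizeof(hdr->oui.oui))
		return false;

	/* Long enough for an OUI but not for OUI + type: simply not WPA. */
	if (element_len < sizeof(wpa_oui))
		return false;

	return !memcmp(&hdr->oui, wpa_oui, sizeof(wpa_oui));
}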
diff --git a/drivers/net/wireless/marvell/mwifiex/init.c b/drivers/net/wireless/marvell/mwifiex/init.c
index 673e89dff0b5..6c0e52eb8794 100644
--- a/drivers/net/wireless/marvell/mwifiex/init.c
+++ b/drivers/net/wireless/marvell/mwifiex/init.c
@@ -36,7 +36,6 @@ static int mwifiex_add_bss_prio_tbl(struct mwifiex_private *priv)
struct mwifiex_adapter *adapter = priv->adapter;
struct mwifiex_bss_prio_node *bss_prio;
struct mwifiex_bss_prio_tbl *tbl = adapter->bss_prio_tbl;
- unsigned long flags;
bss_prio = kzalloc(sizeof(struct mwifiex_bss_prio_node), GFP_KERNEL);
if (!bss_prio)
@@ -45,9 +44,9 @@ static int mwifiex_add_bss_prio_tbl(struct mwifiex_private *priv)
bss_prio->priv = priv;
INIT_LIST_HEAD(&bss_prio->list);
- spin_lock_irqsave(&tbl[priv->bss_priority].bss_prio_lock, flags);
+ spin_lock_bh(&tbl[priv->bss_priority].bss_prio_lock);
list_add_tail(&bss_prio->list, &tbl[priv->bss_priority].bss_prio_head);
- spin_unlock_irqrestore(&tbl[priv->bss_priority].bss_prio_lock, flags);
+ spin_unlock_bh(&tbl[priv->bss_priority].bss_prio_lock);
return 0;
}
@@ -344,11 +343,9 @@ void mwifiex_set_trans_start(struct net_device *dev)
void mwifiex_wake_up_net_dev_queue(struct net_device *netdev,
struct mwifiex_adapter *adapter)
{
- unsigned long dev_queue_flags;
-
- spin_lock_irqsave(&adapter->queue_lock, dev_queue_flags);
+ spin_lock_bh(&adapter->queue_lock);
netif_tx_wake_all_queues(netdev);
- spin_unlock_irqrestore(&adapter->queue_lock, dev_queue_flags);
+ spin_unlock_bh(&adapter->queue_lock);
}
/*
@@ -357,11 +354,9 @@ void mwifiex_wake_up_net_dev_queue(struct net_device *netdev,
void mwifiex_stop_net_dev_queue(struct net_device *netdev,
struct mwifiex_adapter *adapter)
{
- unsigned long dev_queue_flags;
-
- spin_lock_irqsave(&adapter->queue_lock, dev_queue_flags);
+ spin_lock_bh(&adapter->queue_lock);
netif_tx_stop_all_queues(netdev);
- spin_unlock_irqrestore(&adapter->queue_lock, dev_queue_flags);
+ spin_unlock_bh(&adapter->queue_lock);
}
/*
@@ -506,7 +501,6 @@ int mwifiex_init_fw(struct mwifiex_adapter *adapter)
struct mwifiex_private *priv;
u8 i, first_sta = true;
int is_cmd_pend_q_empty;
- unsigned long flags;
adapter->hw_status = MWIFIEX_HW_STATUS_INITIALIZING;
@@ -547,9 +541,9 @@ int mwifiex_init_fw(struct mwifiex_adapter *adapter)
}
}
- spin_lock_irqsave(&adapter->cmd_pending_q_lock, flags);
+ spin_lock_bh(&adapter->cmd_pending_q_lock);
is_cmd_pend_q_empty = list_empty(&adapter->cmd_pending_q);
- spin_unlock_irqrestore(&adapter->cmd_pending_q_lock, flags);
+ spin_unlock_bh(&adapter->cmd_pending_q_lock);
if (!is_cmd_pend_q_empty) {
/* Send the first command in queue and return */
if (mwifiex_main_process(adapter) != -1)
@@ -574,7 +568,6 @@ static void mwifiex_delete_bss_prio_tbl(struct mwifiex_private *priv)
struct mwifiex_bss_prio_node *bssprio_node, *tmp_node;
struct list_head *head;
spinlock_t *lock; /* bss priority lock */
- unsigned long flags;
for (i = 0; i < adapter->priv_num; ++i) {
head = &adapter->bss_prio_tbl[i].bss_prio_head;
@@ -586,7 +579,7 @@ static void mwifiex_delete_bss_prio_tbl(struct mwifiex_private *priv)
priv->bss_type, priv->bss_num, i, head);
{
- spin_lock_irqsave(lock, flags);
+ spin_lock_bh(lock);
list_for_each_entry_safe(bssprio_node, tmp_node, head,
list) {
if (bssprio_node->priv == priv) {
@@ -598,7 +591,7 @@ static void mwifiex_delete_bss_prio_tbl(struct mwifiex_private *priv)
kfree(bssprio_node);
}
}
- spin_unlock_irqrestore(lock, flags);
+ spin_unlock_bh(lock);
}
}
}
@@ -630,7 +623,6 @@ mwifiex_shutdown_drv(struct mwifiex_adapter *adapter)
{
struct mwifiex_private *priv;
s32 i;
- unsigned long flags;
struct sk_buff *skb;
/* mwifiex already shutdown */
@@ -665,7 +657,7 @@ mwifiex_shutdown_drv(struct mwifiex_adapter *adapter)
while ((skb = skb_dequeue(&adapter->tx_data_q)))
mwifiex_write_data_complete(adapter, skb, 0, 0);
- spin_lock_irqsave(&adapter->rx_proc_lock, flags);
+ spin_lock_bh(&adapter->rx_proc_lock);
while ((skb = skb_dequeue(&adapter->rx_data_q))) {
struct mwifiex_rxinfo *rx_info = MWIFIEX_SKB_RXCB(skb);
@@ -678,7 +670,7 @@ mwifiex_shutdown_drv(struct mwifiex_adapter *adapter)
dev_kfree_skb_any(skb);
}
- spin_unlock_irqrestore(&adapter->rx_proc_lock, flags);
+ spin_unlock_bh(&adapter->rx_proc_lock);
mwifiex_adapter_cleanup(adapter);
diff --git a/drivers/net/wireless/marvell/mwifiex/main.c b/drivers/net/wireless/marvell/mwifiex/main.c
index f6da8edab7f1..a9657ae6d782 100644
--- a/drivers/net/wireless/marvell/mwifiex/main.c
+++ b/drivers/net/wireless/marvell/mwifiex/main.c
@@ -173,30 +173,27 @@ EXPORT_SYMBOL_GPL(mwifiex_queue_main_work);
static void mwifiex_queue_rx_work(struct mwifiex_adapter *adapter)
{
- unsigned long flags;
-
- spin_lock_irqsave(&adapter->rx_proc_lock, flags);
+ spin_lock_bh(&adapter->rx_proc_lock);
if (adapter->rx_processing) {
- spin_unlock_irqrestore(&adapter->rx_proc_lock, flags);
+ spin_unlock_bh(&adapter->rx_proc_lock);
} else {
- spin_unlock_irqrestore(&adapter->rx_proc_lock, flags);
+ spin_unlock_bh(&adapter->rx_proc_lock);
queue_work(adapter->rx_workqueue, &adapter->rx_work);
}
}
static int mwifiex_process_rx(struct mwifiex_adapter *adapter)
{
- unsigned long flags;
struct sk_buff *skb;
struct mwifiex_rxinfo *rx_info;
- spin_lock_irqsave(&adapter->rx_proc_lock, flags);
+ spin_lock_bh(&adapter->rx_proc_lock);
if (adapter->rx_processing || adapter->rx_locked) {
- spin_unlock_irqrestore(&adapter->rx_proc_lock, flags);
+ spin_unlock_bh(&adapter->rx_proc_lock);
goto exit_rx_proc;
} else {
adapter->rx_processing = true;
- spin_unlock_irqrestore(&adapter->rx_proc_lock, flags);
+ spin_unlock_bh(&adapter->rx_proc_lock);
}
/* Check for Rx data */
@@ -219,9 +216,9 @@ static int mwifiex_process_rx(struct mwifiex_adapter *adapter)
mwifiex_handle_rx_packet(adapter, skb);
}
}
- spin_lock_irqsave(&adapter->rx_proc_lock, flags);
+ spin_lock_bh(&adapter->rx_proc_lock);
adapter->rx_processing = false;
- spin_unlock_irqrestore(&adapter->rx_proc_lock, flags);
+ spin_unlock_bh(&adapter->rx_proc_lock);
exit_rx_proc:
return 0;
@@ -825,13 +822,12 @@ mwifiex_clone_skb_for_tx_status(struct mwifiex_private *priv,
skb = skb_clone(skb, GFP_ATOMIC);
if (skb) {
- unsigned long flags;
int id;
- spin_lock_irqsave(&priv->ack_status_lock, flags);
+ spin_lock_bh(&priv->ack_status_lock);
id = idr_alloc(&priv->ack_status_frames, orig_skb,
1, 0x10, GFP_ATOMIC);
- spin_unlock_irqrestore(&priv->ack_status_lock, flags);
+ spin_unlock_bh(&priv->ack_status_lock);
if (id >= 0) {
tx_info = MWIFIEX_SKB_TXCB(skb);
@@ -960,10 +956,10 @@ int mwifiex_set_mac_address(struct mwifiex_private *priv,
mac_addr = old_mac_addr;
- if (priv->bss_type == MWIFIEX_BSS_TYPE_P2P)
+ if (priv->bss_type == MWIFIEX_BSS_TYPE_P2P) {
mac_addr |= BIT_ULL(MWIFIEX_MAC_LOCAL_ADMIN_BIT);
-
- if (mwifiex_get_intf_num(priv->adapter, priv->bss_type) > 1) {
+ mac_addr += priv->bss_num;
+ } else if (priv->adapter->priv[0] != priv) {
/* Set mac address based on bss_type/bss_num */
mac_addr ^= BIT_ULL(priv->bss_type + 8);
mac_addr += priv->bss_num;
@@ -1354,12 +1350,11 @@ void mwifiex_init_priv_params(struct mwifiex_private *priv,
*/
int is_command_pending(struct mwifiex_adapter *adapter)
{
- unsigned long flags;
int is_cmd_pend_q_empty;
- spin_lock_irqsave(&adapter->cmd_pending_q_lock, flags);
+ spin_lock_bh(&adapter->cmd_pending_q_lock);
is_cmd_pend_q_empty = list_empty(&adapter->cmd_pending_q);
- spin_unlock_irqrestore(&adapter->cmd_pending_q_lock, flags);
+ spin_unlock_bh(&adapter->cmd_pending_q_lock);
return !is_cmd_pend_q_empty;
}
diff --git a/drivers/net/wireless/marvell/mwifiex/main.h b/drivers/net/wireless/marvell/mwifiex/main.h
index b025ba164412..3e442c7f7882 100644
--- a/drivers/net/wireless/marvell/mwifiex/main.h
+++ b/drivers/net/wireless/marvell/mwifiex/main.h
@@ -747,7 +747,7 @@ struct mwifiex_bss_prio_tbl {
struct cmd_ctrl_node {
struct list_head list;
struct mwifiex_private *priv;
- u32 cmd_oid;
+ u32 cmd_no;
u32 cmd_flag;
struct sk_buff *cmd_skb;
struct sk_buff *resp_skb;
diff --git a/drivers/net/wireless/marvell/mwifiex/pcie.c b/drivers/net/wireless/marvell/mwifiex/pcie.c
index 3fe81b2a929a..b54f73e3d508 100644
--- a/drivers/net/wireless/marvell/mwifiex/pcie.c
+++ b/drivers/net/wireless/marvell/mwifiex/pcie.c
@@ -2924,10 +2924,9 @@ static int mwifiex_init_pcie(struct mwifiex_adapter *adapter)
pci_set_master(pdev);
- pr_notice("try set_consistent_dma_mask(32)\n");
ret = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
if (ret) {
- pr_err("set_dma_mask(32) failed\n");
+ pr_err("set_dma_mask(32) failed: %d\n", ret);
goto err_set_dma_mask;
}
@@ -2960,7 +2959,7 @@ static int mwifiex_init_pcie(struct mwifiex_adapter *adapter)
goto err_iomap2;
}
- pr_notice("PCI memory map Virt0: %p PCI memory map Virt2: %p\n",
+ pr_notice("PCI memory map Virt0: %pK PCI memory map Virt2: %pK\n",
card->pci_mmap, card->pci_mmap1);
ret = mwifiex_pcie_alloc_buffers(adapter);
diff --git a/drivers/net/wireless/marvell/mwifiex/scan.c b/drivers/net/wireless/marvell/mwifiex/scan.c
index c269a0de9413..0d6d41727037 100644
--- a/drivers/net/wireless/marvell/mwifiex/scan.c
+++ b/drivers/net/wireless/marvell/mwifiex/scan.c
@@ -1361,21 +1361,25 @@ int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter,
break;
case WLAN_EID_VENDOR_SPECIFIC:
- if (element_len + 2 < sizeof(vendor_ie->vend_hdr))
- return -EINVAL;
-
vendor_ie = (struct ieee_types_vendor_specific *)
current_ptr;
- if (!memcmp
- (vendor_ie->vend_hdr.oui, wpa_oui,
- sizeof(wpa_oui))) {
+ /* 802.11 requires at least 3-byte OUI. */
+ if (element_len < sizeof(vendor_ie->vend_hdr.oui.oui))
+ return -EINVAL;
+
+ /* Not long enough for a match? Skip it. */
+ if (element_len < sizeof(wpa_oui))
+ break;
+
+ if (!memcmp(&vendor_ie->vend_hdr.oui, wpa_oui,
+ sizeof(wpa_oui))) {
bss_entry->bcn_wpa_ie =
(struct ieee_types_vendor_specific *)
current_ptr;
bss_entry->wpa_offset = (u16)
(current_ptr - bss_entry->beacon_buf);
- } else if (!memcmp(vendor_ie->vend_hdr.oui, wmm_oui,
+ } else if (!memcmp(&vendor_ie->vend_hdr.oui, wmm_oui,
sizeof(wmm_oui))) {
if (total_ie_len ==
sizeof(struct ieee_types_wmm_parameter) ||
@@ -1500,7 +1504,6 @@ int mwifiex_scan_networks(struct mwifiex_private *priv,
u8 filtered_scan;
u8 scan_current_chan_only;
u8 max_chan_per_scan;
- unsigned long flags;
if (adapter->scan_processing) {
mwifiex_dbg(adapter, WARN,
@@ -1521,9 +1524,9 @@ int mwifiex_scan_networks(struct mwifiex_private *priv,
return -EFAULT;
}
- spin_lock_irqsave(&adapter->mwifiex_cmd_lock, flags);
+ spin_lock_bh(&adapter->mwifiex_cmd_lock);
adapter->scan_processing = true;
- spin_unlock_irqrestore(&adapter->mwifiex_cmd_lock, flags);
+ spin_unlock_bh(&adapter->mwifiex_cmd_lock);
scan_cfg_out = kzalloc(sizeof(union mwifiex_scan_cmd_config_tlv),
GFP_KERNEL);
@@ -1551,13 +1554,12 @@ int mwifiex_scan_networks(struct mwifiex_private *priv,
/* Get scan command from scan_pending_q and put to cmd_pending_q */
if (!ret) {
- spin_lock_irqsave(&adapter->scan_pending_q_lock, flags);
+ spin_lock_bh(&adapter->scan_pending_q_lock);
if (!list_empty(&adapter->scan_pending_q)) {
cmd_node = list_first_entry(&adapter->scan_pending_q,
struct cmd_ctrl_node, list);
list_del(&cmd_node->list);
- spin_unlock_irqrestore(&adapter->scan_pending_q_lock,
- flags);
+ spin_unlock_bh(&adapter->scan_pending_q_lock);
mwifiex_insert_cmd_to_pending_q(adapter, cmd_node);
queue_work(adapter->workqueue, &adapter->main_work);
@@ -1568,8 +1570,7 @@ int mwifiex_scan_networks(struct mwifiex_private *priv,
mwifiex_wait_queue_complete(adapter, cmd_node);
}
} else {
- spin_unlock_irqrestore(&adapter->scan_pending_q_lock,
- flags);
+ spin_unlock_bh(&adapter->scan_pending_q_lock);
}
}
@@ -1577,9 +1578,9 @@ int mwifiex_scan_networks(struct mwifiex_private *priv,
kfree(scan_chan_list);
done:
if (ret) {
- spin_lock_irqsave(&adapter->mwifiex_cmd_lock, flags);
+ spin_lock_bh(&adapter->mwifiex_cmd_lock);
adapter->scan_processing = false;
- spin_unlock_irqrestore(&adapter->mwifiex_cmd_lock, flags);
+ spin_unlock_bh(&adapter->mwifiex_cmd_lock);
}
return ret;
}
@@ -1715,7 +1716,6 @@ static int mwifiex_update_curr_bss_params(struct mwifiex_private *priv,
{
struct mwifiex_bssdescriptor *bss_desc;
int ret;
- unsigned long flags;
/* Allocate and fill new bss descriptor */
bss_desc = kzalloc(sizeof(struct mwifiex_bssdescriptor), GFP_KERNEL);
@@ -1730,7 +1730,7 @@ static int mwifiex_update_curr_bss_params(struct mwifiex_private *priv,
if (ret)
goto done;
- spin_lock_irqsave(&priv->curr_bcn_buf_lock, flags);
+ spin_lock_bh(&priv->curr_bcn_buf_lock);
/* Make a copy of current BSSID descriptor */
memcpy(&priv->curr_bss_params.bss_descriptor, bss_desc,
sizeof(priv->curr_bss_params.bss_descriptor));
@@ -1739,7 +1739,7 @@ static int mwifiex_update_curr_bss_params(struct mwifiex_private *priv,
* in mwifiex_save_curr_bcn()
*/
mwifiex_save_curr_bcn(priv);
- spin_unlock_irqrestore(&priv->curr_bcn_buf_lock, flags);
+ spin_unlock_bh(&priv->curr_bcn_buf_lock);
done:
/* beacon_ie buffer was allocated in function
@@ -1993,15 +1993,14 @@ static void mwifiex_check_next_scan_command(struct mwifiex_private *priv)
{
struct mwifiex_adapter *adapter = priv->adapter;
struct cmd_ctrl_node *cmd_node;
- unsigned long flags;
- spin_lock_irqsave(&adapter->scan_pending_q_lock, flags);
+ spin_lock_bh(&adapter->scan_pending_q_lock);
if (list_empty(&adapter->scan_pending_q)) {
- spin_unlock_irqrestore(&adapter->scan_pending_q_lock, flags);
+ spin_unlock_bh(&adapter->scan_pending_q_lock);
- spin_lock_irqsave(&adapter->mwifiex_cmd_lock, flags);
+ spin_lock_bh(&adapter->mwifiex_cmd_lock);
adapter->scan_processing = false;
- spin_unlock_irqrestore(&adapter->mwifiex_cmd_lock, flags);
+ spin_unlock_bh(&adapter->mwifiex_cmd_lock);
mwifiex_active_scan_req_for_passive_chan(priv);
@@ -2025,13 +2024,13 @@ static void mwifiex_check_next_scan_command(struct mwifiex_private *priv)
}
} else if ((priv->scan_aborting && !priv->scan_request) ||
priv->scan_block) {
- spin_unlock_irqrestore(&adapter->scan_pending_q_lock, flags);
+ spin_unlock_bh(&adapter->scan_pending_q_lock);
mwifiex_cancel_pending_scan_cmd(adapter);
- spin_lock_irqsave(&adapter->mwifiex_cmd_lock, flags);
+ spin_lock_bh(&adapter->mwifiex_cmd_lock);
adapter->scan_processing = false;
- spin_unlock_irqrestore(&adapter->mwifiex_cmd_lock, flags);
+ spin_unlock_bh(&adapter->mwifiex_cmd_lock);
if (!adapter->active_scan_triggered) {
if (priv->scan_request) {
@@ -2057,7 +2056,7 @@ static void mwifiex_check_next_scan_command(struct mwifiex_private *priv)
cmd_node = list_first_entry(&adapter->scan_pending_q,
struct cmd_ctrl_node, list);
list_del(&cmd_node->list);
- spin_unlock_irqrestore(&adapter->scan_pending_q_lock, flags);
+ spin_unlock_bh(&adapter->scan_pending_q_lock);
mwifiex_insert_cmd_to_pending_q(adapter, cmd_node);
}
@@ -2067,15 +2066,14 @@ static void mwifiex_check_next_scan_command(struct mwifiex_private *priv)
void mwifiex_cancel_scan(struct mwifiex_adapter *adapter)
{
struct mwifiex_private *priv;
- unsigned long cmd_flags;
int i;
mwifiex_cancel_pending_scan_cmd(adapter);
if (adapter->scan_processing) {
- spin_lock_irqsave(&adapter->mwifiex_cmd_lock, cmd_flags);
+ spin_lock_bh(&adapter->mwifiex_cmd_lock);
adapter->scan_processing = false;
- spin_unlock_irqrestore(&adapter->mwifiex_cmd_lock, cmd_flags);
+ spin_unlock_bh(&adapter->mwifiex_cmd_lock);
for (i = 0; i < adapter->priv_num; i++) {
priv = adapter->priv[i];
if (!priv)
@@ -2557,7 +2555,6 @@ int mwifiex_ret_802_11_scan_ext(struct mwifiex_private *priv,
struct host_cmd_ds_command *cmd_ptr;
struct cmd_ctrl_node *cmd_node;
- unsigned long cmd_flags, scan_flags;
bool complete_scan = false;
mwifiex_dbg(adapter, INFO, "info: EXT scan returns successfully\n");
@@ -2592,8 +2589,8 @@ int mwifiex_ret_802_11_scan_ext(struct mwifiex_private *priv,
sizeof(struct mwifiex_ie_types_header));
}
- spin_lock_irqsave(&adapter->cmd_pending_q_lock, cmd_flags);
- spin_lock_irqsave(&adapter->scan_pending_q_lock, scan_flags);
+ spin_lock_bh(&adapter->cmd_pending_q_lock);
+ spin_lock_bh(&adapter->scan_pending_q_lock);
if (list_empty(&adapter->scan_pending_q)) {
complete_scan = true;
list_for_each_entry(cmd_node, &adapter->cmd_pending_q, list) {
@@ -2607,8 +2604,8 @@ int mwifiex_ret_802_11_scan_ext(struct mwifiex_private *priv,
}
}
}
- spin_unlock_irqrestore(&adapter->scan_pending_q_lock, scan_flags);
- spin_unlock_irqrestore(&adapter->cmd_pending_q_lock, cmd_flags);
+ spin_unlock_bh(&adapter->scan_pending_q_lock);
+ spin_unlock_bh(&adapter->cmd_pending_q_lock);
if (complete_scan)
mwifiex_complete_scan(priv);
@@ -2780,13 +2777,12 @@ mwifiex_queue_scan_cmd(struct mwifiex_private *priv,
struct cmd_ctrl_node *cmd_node)
{
struct mwifiex_adapter *adapter = priv->adapter;
- unsigned long flags;
cmd_node->wait_q_enabled = true;
cmd_node->condition = &adapter->scan_wait_q_woken;
- spin_lock_irqsave(&adapter->scan_pending_q_lock, flags);
+ spin_lock_bh(&adapter->scan_pending_q_lock);
list_add_tail(&cmd_node->list, &adapter->scan_pending_q);
- spin_unlock_irqrestore(&adapter->scan_pending_q_lock, flags);
+ spin_unlock_bh(&adapter->scan_pending_q_lock);
}
/*
diff --git a/drivers/net/wireless/marvell/mwifiex/sta_cmdresp.c b/drivers/net/wireless/marvell/mwifiex/sta_cmdresp.c
index 24b33e20e7a9..20c206da0631 100644
--- a/drivers/net/wireless/marvell/mwifiex/sta_cmdresp.c
+++ b/drivers/net/wireless/marvell/mwifiex/sta_cmdresp.c
@@ -46,7 +46,6 @@ mwifiex_process_cmdresp_error(struct mwifiex_private *priv,
{
struct mwifiex_adapter *adapter = priv->adapter;
struct host_cmd_ds_802_11_ps_mode_enh *pm;
- unsigned long flags;
mwifiex_dbg(adapter, ERROR,
"CMD_RESP: cmd %#x error, result=%#x\n",
@@ -87,9 +86,9 @@ mwifiex_process_cmdresp_error(struct mwifiex_private *priv,
/* Handling errors here */
mwifiex_recycle_cmd_node(adapter, adapter->curr_cmd);
- spin_lock_irqsave(&adapter->mwifiex_cmd_lock, flags);
+ spin_lock_bh(&adapter->mwifiex_cmd_lock);
adapter->curr_cmd = NULL;
- spin_unlock_irqrestore(&adapter->mwifiex_cmd_lock, flags);
+ spin_unlock_bh(&adapter->mwifiex_cmd_lock);
}
/*
diff --git a/drivers/net/wireless/marvell/mwifiex/sta_event.c b/drivers/net/wireless/marvell/mwifiex/sta_event.c
index 8b3123cb84c8..5fdffb114913 100644
--- a/drivers/net/wireless/marvell/mwifiex/sta_event.c
+++ b/drivers/net/wireless/marvell/mwifiex/sta_event.c
@@ -345,7 +345,6 @@ static void mwifiex_process_uap_tx_pause(struct mwifiex_private *priv,
{
struct mwifiex_tx_pause_tlv *tp;
struct mwifiex_sta_node *sta_ptr;
- unsigned long flags;
tp = (void *)tlv;
mwifiex_dbg(priv->adapter, EVENT,
@@ -361,14 +360,14 @@ static void mwifiex_process_uap_tx_pause(struct mwifiex_private *priv,
} else if (is_multicast_ether_addr(tp->peermac)) {
mwifiex_update_ralist_tx_pause(priv, tp->peermac, tp->tx_pause);
} else {
- spin_lock_irqsave(&priv->sta_list_spinlock, flags);
+ spin_lock_bh(&priv->sta_list_spinlock);
sta_ptr = mwifiex_get_sta_entry(priv, tp->peermac);
if (sta_ptr && sta_ptr->tx_pause != tp->tx_pause) {
sta_ptr->tx_pause = tp->tx_pause;
mwifiex_update_ralist_tx_pause(priv, tp->peermac,
tp->tx_pause);
}
- spin_unlock_irqrestore(&priv->sta_list_spinlock, flags);
+ spin_unlock_bh(&priv->sta_list_spinlock);
}
}
@@ -378,7 +377,6 @@ static void mwifiex_process_sta_tx_pause(struct mwifiex_private *priv,
struct mwifiex_tx_pause_tlv *tp;
struct mwifiex_sta_node *sta_ptr;
int status;
- unsigned long flags;
tp = (void *)tlv;
mwifiex_dbg(priv->adapter, EVENT,
@@ -397,7 +395,7 @@ static void mwifiex_process_sta_tx_pause(struct mwifiex_private *priv,
status = mwifiex_get_tdls_link_status(priv, tp->peermac);
if (mwifiex_is_tdls_link_setup(status)) {
- spin_lock_irqsave(&priv->sta_list_spinlock, flags);
+ spin_lock_bh(&priv->sta_list_spinlock);
sta_ptr = mwifiex_get_sta_entry(priv, tp->peermac);
if (sta_ptr && sta_ptr->tx_pause != tp->tx_pause) {
sta_ptr->tx_pause = tp->tx_pause;
@@ -405,7 +403,7 @@ static void mwifiex_process_sta_tx_pause(struct mwifiex_private *priv,
tp->peermac,
tp->tx_pause);
}
- spin_unlock_irqrestore(&priv->sta_list_spinlock, flags);
+ spin_unlock_bh(&priv->sta_list_spinlock);
}
}
}
diff --git a/drivers/net/wireless/marvell/mwifiex/sta_ioctl.c b/drivers/net/wireless/marvell/mwifiex/sta_ioctl.c
index ebc0e41e5d3b..74e50566db1f 100644
--- a/drivers/net/wireless/marvell/mwifiex/sta_ioctl.c
+++ b/drivers/net/wireless/marvell/mwifiex/sta_ioctl.c
@@ -1351,7 +1351,7 @@ mwifiex_set_gen_ie_helper(struct mwifiex_private *priv, u8 *ie_data_ptr,
/* Test to see if it is a WPA IE, if not, then
* it is a gen IE
*/
- if (!memcmp(pvendor_ie->oui, wpa_oui,
+ if (!memcmp(&pvendor_ie->oui, wpa_oui,
sizeof(wpa_oui))) {
/* IE is a WPA/WPA2 IE so call set_wpa function
*/
@@ -1361,7 +1361,7 @@ mwifiex_set_gen_ie_helper(struct mwifiex_private *priv, u8 *ie_data_ptr,
goto next_ie;
}
- if (!memcmp(pvendor_ie->oui, wps_oui,
+ if (!memcmp(&pvendor_ie->oui, wps_oui,
sizeof(wps_oui))) {
/* Test to see if it is a WPS IE,
* if so, enable wps session flag
diff --git a/drivers/net/wireless/marvell/mwifiex/tdls.c b/drivers/net/wireless/marvell/mwifiex/tdls.c
index 27779d7317fd..18e654dc34c6 100644
--- a/drivers/net/wireless/marvell/mwifiex/tdls.c
+++ b/drivers/net/wireless/marvell/mwifiex/tdls.c
@@ -33,12 +33,11 @@ static void mwifiex_restore_tdls_packets(struct mwifiex_private *priv,
struct list_head *tid_list;
struct sk_buff *skb, *tmp;
struct mwifiex_txinfo *tx_info;
- unsigned long flags;
u32 tid;
u8 tid_down;
mwifiex_dbg(priv->adapter, DATA, "%s: %pM\n", __func__, mac);
- spin_lock_irqsave(&priv->wmm.ra_list_spinlock, flags);
+ spin_lock_bh(&priv->wmm.ra_list_spinlock);
skb_queue_walk_safe(&priv->tdls_txq, skb, tmp) {
if (!ether_addr_equal(mac, skb->data))
@@ -78,7 +77,7 @@ static void mwifiex_restore_tdls_packets(struct mwifiex_private *priv,
atomic_inc(&priv->wmm.tx_pkts_queued);
}
- spin_unlock_irqrestore(&priv->wmm.ra_list_spinlock, flags);
+ spin_unlock_bh(&priv->wmm.ra_list_spinlock);
return;
}
@@ -88,11 +87,10 @@ static void mwifiex_hold_tdls_packets(struct mwifiex_private *priv,
struct mwifiex_ra_list_tbl *ra_list;
struct list_head *ra_list_head;
struct sk_buff *skb, *tmp;
- unsigned long flags;
int i;
mwifiex_dbg(priv->adapter, DATA, "%s: %pM\n", __func__, mac);
- spin_lock_irqsave(&priv->wmm.ra_list_spinlock, flags);
+ spin_lock_bh(&priv->wmm.ra_list_spinlock);
for (i = 0; i < MAX_NUM_TID; i++) {
if (!list_empty(&priv->wmm.tid_tbl_ptr[i].ra_list)) {
@@ -111,7 +109,7 @@ static void mwifiex_hold_tdls_packets(struct mwifiex_private *priv,
}
}
- spin_unlock_irqrestore(&priv->wmm.ra_list_spinlock, flags);
+ spin_unlock_bh(&priv->wmm.ra_list_spinlock);
return;
}
@@ -1070,7 +1068,6 @@ mwifiex_tdls_process_disable_link(struct mwifiex_private *priv, const u8 *peer)
{
struct mwifiex_sta_node *sta_ptr;
struct mwifiex_ds_tdls_oper tdls_oper;
- unsigned long flags;
memset(&tdls_oper, 0, sizeof(struct mwifiex_ds_tdls_oper));
sta_ptr = mwifiex_get_sta_entry(priv, peer);
@@ -1078,11 +1075,9 @@ mwifiex_tdls_process_disable_link(struct mwifiex_private *priv, const u8 *peer)
if (sta_ptr) {
if (sta_ptr->is_11n_enabled) {
mwifiex_11n_cleanup_reorder_tbl(priv);
- spin_lock_irqsave(&priv->wmm.ra_list_spinlock,
- flags);
+ spin_lock_bh(&priv->wmm.ra_list_spinlock);
mwifiex_11n_delete_all_tx_ba_stream_tbl(priv);
- spin_unlock_irqrestore(&priv->wmm.ra_list_spinlock,
- flags);
+ spin_unlock_bh(&priv->wmm.ra_list_spinlock);
}
mwifiex_del_sta_entry(priv, peer);
}
@@ -1100,7 +1095,6 @@ mwifiex_tdls_process_enable_link(struct mwifiex_private *priv, const u8 *peer)
{
struct mwifiex_sta_node *sta_ptr;
struct ieee80211_mcs_info mcs;
- unsigned long flags;
int i;
sta_ptr = mwifiex_get_sta_entry(priv, peer);
@@ -1145,11 +1139,9 @@ mwifiex_tdls_process_enable_link(struct mwifiex_private *priv, const u8 *peer)
"tdls: enable link %pM failed\n", peer);
if (sta_ptr) {
mwifiex_11n_cleanup_reorder_tbl(priv);
- spin_lock_irqsave(&priv->wmm.ra_list_spinlock,
- flags);
+ spin_lock_bh(&priv->wmm.ra_list_spinlock);
mwifiex_11n_delete_all_tx_ba_stream_tbl(priv);
- spin_unlock_irqrestore(&priv->wmm.ra_list_spinlock,
- flags);
+ spin_unlock_bh(&priv->wmm.ra_list_spinlock);
mwifiex_del_sta_entry(priv, peer);
}
mwifiex_restore_tdls_packets(priv, peer, TDLS_LINK_TEARDOWN);
@@ -1194,7 +1186,6 @@ int mwifiex_get_tdls_list(struct mwifiex_private *priv,
struct mwifiex_sta_node *sta_ptr;
struct tdls_peer_info *peer = buf;
int count = 0;
- unsigned long flags;
if (!ISSUPP_TDLS_ENABLED(priv->adapter->fw_cap_info))
return 0;
@@ -1203,7 +1194,7 @@ int mwifiex_get_tdls_list(struct mwifiex_private *priv,
if (!(priv->bss_type == MWIFIEX_BSS_TYPE_STA && priv->media_connected))
return 0;
- spin_lock_irqsave(&priv->sta_list_spinlock, flags);
+ spin_lock_bh(&priv->sta_list_spinlock);
list_for_each_entry(sta_ptr, &priv->sta_list, list) {
if (mwifiex_is_tdls_link_setup(sta_ptr->tdls_status)) {
ether_addr_copy(peer->peer_addr, sta_ptr->mac_addr);
@@ -1213,7 +1204,7 @@ int mwifiex_get_tdls_list(struct mwifiex_private *priv,
break;
}
}
- spin_unlock_irqrestore(&priv->sta_list_spinlock, flags);
+ spin_unlock_bh(&priv->sta_list_spinlock);
return count;
}
@@ -1222,7 +1213,6 @@ void mwifiex_disable_all_tdls_links(struct mwifiex_private *priv)
{
struct mwifiex_sta_node *sta_ptr;
struct mwifiex_ds_tdls_oper tdls_oper;
- unsigned long flags;
if (list_empty(&priv->sta_list))
return;
@@ -1232,11 +1222,9 @@ void mwifiex_disable_all_tdls_links(struct mwifiex_private *priv)
if (sta_ptr->is_11n_enabled) {
mwifiex_11n_cleanup_reorder_tbl(priv);
- spin_lock_irqsave(&priv->wmm.ra_list_spinlock,
- flags);
+ spin_lock_bh(&priv->wmm.ra_list_spinlock);
mwifiex_11n_delete_all_tx_ba_stream_tbl(priv);
- spin_unlock_irqrestore(&priv->wmm.ra_list_spinlock,
- flags);
+ spin_unlock_bh(&priv->wmm.ra_list_spinlock);
}
mwifiex_restore_tdls_packets(priv, sta_ptr->mac_addr,
@@ -1256,12 +1244,11 @@ void mwifiex_disable_all_tdls_links(struct mwifiex_private *priv)
int mwifiex_tdls_check_tx(struct mwifiex_private *priv, struct sk_buff *skb)
{
struct mwifiex_auto_tdls_peer *peer;
- unsigned long flags;
u8 mac[ETH_ALEN];
ether_addr_copy(mac, skb->data);
- spin_lock_irqsave(&priv->auto_tdls_lock, flags);
+ spin_lock_bh(&priv->auto_tdls_lock);
list_for_each_entry(peer, &priv->auto_tdls_list, list) {
if (!memcmp(mac, peer->mac_addr, ETH_ALEN)) {
if (peer->rssi <= MWIFIEX_TDLS_RSSI_HIGH &&
@@ -1290,7 +1277,7 @@ int mwifiex_tdls_check_tx(struct mwifiex_private *priv, struct sk_buff *skb)
}
}
}
- spin_unlock_irqrestore(&priv->auto_tdls_lock, flags);
+ spin_unlock_bh(&priv->auto_tdls_lock);
return 0;
}
@@ -1298,33 +1285,31 @@ int mwifiex_tdls_check_tx(struct mwifiex_private *priv, struct sk_buff *skb)
void mwifiex_flush_auto_tdls_list(struct mwifiex_private *priv)
{
struct mwifiex_auto_tdls_peer *peer, *tmp_node;
- unsigned long flags;
- spin_lock_irqsave(&priv->auto_tdls_lock, flags);
+ spin_lock_bh(&priv->auto_tdls_lock);
list_for_each_entry_safe(peer, tmp_node, &priv->auto_tdls_list, list) {
list_del(&peer->list);
kfree(peer);
}
INIT_LIST_HEAD(&priv->auto_tdls_list);
- spin_unlock_irqrestore(&priv->auto_tdls_lock, flags);
+ spin_unlock_bh(&priv->auto_tdls_lock);
priv->check_tdls_tx = false;
}
void mwifiex_add_auto_tdls_peer(struct mwifiex_private *priv, const u8 *mac)
{
struct mwifiex_auto_tdls_peer *tdls_peer;
- unsigned long flags;
if (!priv->adapter->auto_tdls)
return;
- spin_lock_irqsave(&priv->auto_tdls_lock, flags);
+ spin_lock_bh(&priv->auto_tdls_lock);
list_for_each_entry(tdls_peer, &priv->auto_tdls_list, list) {
if (!memcmp(tdls_peer->mac_addr, mac, ETH_ALEN)) {
tdls_peer->tdls_status = TDLS_SETUP_INPROGRESS;
tdls_peer->rssi_jiffies = jiffies;
- spin_unlock_irqrestore(&priv->auto_tdls_lock, flags);
+ spin_unlock_bh(&priv->auto_tdls_lock);
return;
}
}
@@ -1341,19 +1326,18 @@ void mwifiex_add_auto_tdls_peer(struct mwifiex_private *priv, const u8 *mac)
"Add auto TDLS peer= %pM to list\n", mac);
}
- spin_unlock_irqrestore(&priv->auto_tdls_lock, flags);
+ spin_unlock_bh(&priv->auto_tdls_lock);
}
void mwifiex_auto_tdls_update_peer_status(struct mwifiex_private *priv,
const u8 *mac, u8 link_status)
{
struct mwifiex_auto_tdls_peer *peer;
- unsigned long flags;
if (!priv->adapter->auto_tdls)
return;
- spin_lock_irqsave(&priv->auto_tdls_lock, flags);
+ spin_lock_bh(&priv->auto_tdls_lock);
list_for_each_entry(peer, &priv->auto_tdls_list, list) {
if (!memcmp(peer->mac_addr, mac, ETH_ALEN)) {
if ((link_status == TDLS_NOT_SETUP) &&
@@ -1366,19 +1350,18 @@ void mwifiex_auto_tdls_update_peer_status(struct mwifiex_private *priv,
break;
}
}
- spin_unlock_irqrestore(&priv->auto_tdls_lock, flags);
+ spin_unlock_bh(&priv->auto_tdls_lock);
}
void mwifiex_auto_tdls_update_peer_signal(struct mwifiex_private *priv,
u8 *mac, s8 snr, s8 nflr)
{
struct mwifiex_auto_tdls_peer *peer;
- unsigned long flags;
if (!priv->adapter->auto_tdls)
return;
- spin_lock_irqsave(&priv->auto_tdls_lock, flags);
+ spin_lock_bh(&priv->auto_tdls_lock);
list_for_each_entry(peer, &priv->auto_tdls_list, list) {
if (!memcmp(peer->mac_addr, mac, ETH_ALEN)) {
peer->rssi = nflr - snr;
@@ -1386,14 +1369,13 @@ void mwifiex_auto_tdls_update_peer_signal(struct mwifiex_private *priv,
break;
}
}
- spin_unlock_irqrestore(&priv->auto_tdls_lock, flags);
+ spin_unlock_bh(&priv->auto_tdls_lock);
}
void mwifiex_check_auto_tdls(struct timer_list *t)
{
struct mwifiex_private *priv = from_timer(priv, t, auto_tdls_timer);
struct mwifiex_auto_tdls_peer *tdls_peer;
- unsigned long flags;
u16 reason = WLAN_REASON_TDLS_TEARDOWN_UNSPECIFIED;
if (WARN_ON_ONCE(!priv || !priv->adapter)) {
@@ -1413,7 +1395,7 @@ void mwifiex_check_auto_tdls(struct timer_list *t)
priv->check_tdls_tx = false;
- spin_lock_irqsave(&priv->auto_tdls_lock, flags);
+ spin_lock_bh(&priv->auto_tdls_lock);
list_for_each_entry(tdls_peer, &priv->auto_tdls_list, list) {
if ((jiffies - tdls_peer->rssi_jiffies) >
(MWIFIEX_AUTO_TDLS_IDLE_TIME * HZ)) {
@@ -1448,7 +1430,7 @@ void mwifiex_check_auto_tdls(struct timer_list *t)
tdls_peer->rssi);
}
}
- spin_unlock_irqrestore(&priv->auto_tdls_lock, flags);
+ spin_unlock_bh(&priv->auto_tdls_lock);
mod_timer(&priv->auto_tdls_timer,
jiffies + msecs_to_jiffies(MWIFIEX_TIMER_10S));
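The mwifiex hunks above (and the ones that follow for txrx.c, uap_txrx.c, usb.c, util.c and wmm.c) all apply the same conversion: locks that are not taken from hard-IRQ context can use the _bh spinlock variants, which only disable softirqs and need no on-stack flags word. A minimal sketch of the before/after pattern, using invented example_* names rather than any mwifiex symbol:

#include <linux/list.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(example_lock);
static LIST_HEAD(example_list);

static void example_add_entry(struct list_head *entry)
{
	/* before: unsigned long flags; spin_lock_irqsave(&example_lock, flags); */
	spin_lock_bh(&example_lock);		/* softirqs off; no flags needed */
	list_add_tail(entry, &example_list);
	spin_unlock_bh(&example_lock);
	/* before: spin_unlock_irqrestore(&example_lock, flags); */
}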
diff --git a/drivers/net/wireless/marvell/mwifiex/txrx.c b/drivers/net/wireless/marvell/mwifiex/txrx.c
index d848933466d9..e3c1446dd847 100644
--- a/drivers/net/wireless/marvell/mwifiex/txrx.c
+++ b/drivers/net/wireless/marvell/mwifiex/txrx.c
@@ -334,15 +334,14 @@ void mwifiex_parse_tx_status_event(struct mwifiex_private *priv,
{
struct tx_status_event *tx_status = (void *)priv->adapter->event_body;
struct sk_buff *ack_skb;
- unsigned long flags;
struct mwifiex_txinfo *tx_info;
if (!tx_status->tx_token_id)
return;
- spin_lock_irqsave(&priv->ack_status_lock, flags);
+ spin_lock_bh(&priv->ack_status_lock);
ack_skb = idr_remove(&priv->ack_status_frames, tx_status->tx_token_id);
- spin_unlock_irqrestore(&priv->ack_status_lock, flags);
+ spin_unlock_bh(&priv->ack_status_lock);
if (ack_skb) {
tx_info = MWIFIEX_SKB_TXCB(ack_skb);
diff --git a/drivers/net/wireless/marvell/mwifiex/uap_txrx.c b/drivers/net/wireless/marvell/mwifiex/uap_txrx.c
index 5ce85d5727e4..354b09c5e8dc 100644
--- a/drivers/net/wireless/marvell/mwifiex/uap_txrx.c
+++ b/drivers/net/wireless/marvell/mwifiex/uap_txrx.c
@@ -71,11 +71,10 @@ mwifiex_uap_del_tx_pkts_in_ralist(struct mwifiex_private *priv,
*/
static void mwifiex_uap_cleanup_tx_queues(struct mwifiex_private *priv)
{
- unsigned long flags;
struct list_head *ra_list;
int i;
- spin_lock_irqsave(&priv->wmm.ra_list_spinlock, flags);
+ spin_lock_bh(&priv->wmm.ra_list_spinlock);
for (i = 0; i < MAX_NUM_TID; i++, priv->del_list_idx++) {
if (priv->del_list_idx == MAX_NUM_TID)
@@ -87,7 +86,7 @@ static void mwifiex_uap_cleanup_tx_queues(struct mwifiex_private *priv)
}
}
- spin_unlock_irqrestore(&priv->wmm.ra_list_spinlock, flags);
+ spin_unlock_bh(&priv->wmm.ra_list_spinlock);
}
@@ -378,7 +377,6 @@ int mwifiex_process_uap_rx_packet(struct mwifiex_private *priv,
struct rx_packet_hdr *rx_pkt_hdr;
u16 rx_pkt_type;
u8 ta[ETH_ALEN], pkt_type;
- unsigned long flags;
struct mwifiex_sta_node *node;
uap_rx_pd = (struct uap_rxpd *)(skb->data);
@@ -413,12 +411,12 @@ int mwifiex_process_uap_rx_packet(struct mwifiex_private *priv,
if (rx_pkt_type != PKT_TYPE_BAR && uap_rx_pd->priority < MAX_NUM_TID) {
- spin_lock_irqsave(&priv->sta_list_spinlock, flags);
+ spin_lock_bh(&priv->sta_list_spinlock);
node = mwifiex_get_sta_entry(priv, ta);
if (node)
node->rx_seq[uap_rx_pd->priority] =
le16_to_cpu(uap_rx_pd->seq_num);
- spin_unlock_irqrestore(&priv->sta_list_spinlock, flags);
+ spin_unlock_bh(&priv->sta_list_spinlock);
}
if (!priv->ap_11n_enabled ||
diff --git a/drivers/net/wireless/marvell/mwifiex/usb.c b/drivers/net/wireless/marvell/mwifiex/usb.c
index d445acc4786b..c2365eeb7016 100644
--- a/drivers/net/wireless/marvell/mwifiex/usb.c
+++ b/drivers/net/wireless/marvell/mwifiex/usb.c
@@ -1128,10 +1128,9 @@ static void mwifiex_usb_tx_aggr_tmo(struct timer_list *t)
from_timer(timer_context, t, hold_timer);
struct mwifiex_adapter *adapter = timer_context->adapter;
struct usb_tx_data_port *port = timer_context->port;
- unsigned long flags;
int err = 0;
- spin_lock_irqsave(&port->tx_aggr_lock, flags);
+ spin_lock_bh(&port->tx_aggr_lock);
err = mwifiex_usb_prepare_tx_aggr_skb(adapter, port, &skb_send);
if (err) {
mwifiex_dbg(adapter, ERROR,
@@ -1158,7 +1157,7 @@ done:
if (err == -1)
mwifiex_write_data_complete(adapter, skb_send, 0, -1);
unlock:
- spin_unlock_irqrestore(&port->tx_aggr_lock, flags);
+ spin_unlock_bh(&port->tx_aggr_lock);
}
/* This function writes a command/data packet to the card. */
@@ -1169,7 +1168,6 @@ static int mwifiex_usb_host_to_card(struct mwifiex_adapter *adapter, u8 ep,
struct usb_card_rec *card = adapter->card;
struct urb_context *context = NULL;
struct usb_tx_data_port *port = NULL;
- unsigned long flags;
int idx, ret;
if (test_bit(MWIFIEX_IS_SUSPENDED, &adapter->work_flags)) {
@@ -1211,10 +1209,10 @@ static int mwifiex_usb_host_to_card(struct mwifiex_adapter *adapter, u8 ep,
}
if (adapter->bus_aggr.enable) {
- spin_lock_irqsave(&port->tx_aggr_lock, flags);
+ spin_lock_bh(&port->tx_aggr_lock);
ret = mwifiex_usb_aggr_tx_data(adapter, ep, skb,
tx_param, port);
- spin_unlock_irqrestore(&port->tx_aggr_lock, flags);
+ spin_unlock_bh(&port->tx_aggr_lock);
return ret;
}
diff --git a/drivers/net/wireless/marvell/mwifiex/util.c b/drivers/net/wireless/marvell/mwifiex/util.c
index f9b71539d33e..3b0d31827681 100644
--- a/drivers/net/wireless/marvell/mwifiex/util.c
+++ b/drivers/net/wireless/marvell/mwifiex/util.c
@@ -607,12 +607,11 @@ struct mwifiex_sta_node *
mwifiex_add_sta_entry(struct mwifiex_private *priv, const u8 *mac)
{
struct mwifiex_sta_node *node;
- unsigned long flags;
if (!mac)
return NULL;
- spin_lock_irqsave(&priv->sta_list_spinlock, flags);
+ spin_lock_bh(&priv->sta_list_spinlock);
node = mwifiex_get_sta_entry(priv, mac);
if (node)
goto done;
@@ -625,7 +624,7 @@ mwifiex_add_sta_entry(struct mwifiex_private *priv, const u8 *mac)
list_add_tail(&node->list, &priv->sta_list);
done:
- spin_unlock_irqrestore(&priv->sta_list_spinlock, flags);
+ spin_unlock_bh(&priv->sta_list_spinlock);
return node;
}
@@ -662,9 +661,8 @@ mwifiex_set_sta_ht_cap(struct mwifiex_private *priv, const u8 *ies,
void mwifiex_del_sta_entry(struct mwifiex_private *priv, const u8 *mac)
{
struct mwifiex_sta_node *node;
- unsigned long flags;
- spin_lock_irqsave(&priv->sta_list_spinlock, flags);
+ spin_lock_bh(&priv->sta_list_spinlock);
node = mwifiex_get_sta_entry(priv, mac);
if (node) {
@@ -672,7 +670,7 @@ void mwifiex_del_sta_entry(struct mwifiex_private *priv, const u8 *mac)
kfree(node);
}
- spin_unlock_irqrestore(&priv->sta_list_spinlock, flags);
+ spin_unlock_bh(&priv->sta_list_spinlock);
return;
}
@@ -680,9 +678,8 @@ void mwifiex_del_sta_entry(struct mwifiex_private *priv, const u8 *mac)
void mwifiex_del_all_sta_list(struct mwifiex_private *priv)
{
struct mwifiex_sta_node *node, *tmp;
- unsigned long flags;
- spin_lock_irqsave(&priv->sta_list_spinlock, flags);
+ spin_lock_bh(&priv->sta_list_spinlock);
list_for_each_entry_safe(node, tmp, &priv->sta_list, list) {
list_del(&node->list);
@@ -690,7 +687,7 @@ void mwifiex_del_all_sta_list(struct mwifiex_private *priv)
}
INIT_LIST_HEAD(&priv->sta_list);
- spin_unlock_irqrestore(&priv->sta_list_spinlock, flags);
+ spin_unlock_bh(&priv->sta_list_spinlock);
return;
}
diff --git a/drivers/net/wireless/marvell/mwifiex/wmm.c b/drivers/net/wireless/marvell/mwifiex/wmm.c
index 407b9932ca4d..41f0231376c0 100644
--- a/drivers/net/wireless/marvell/mwifiex/wmm.c
+++ b/drivers/net/wireless/marvell/mwifiex/wmm.c
@@ -138,7 +138,6 @@ void mwifiex_ralist_add(struct mwifiex_private *priv, const u8 *ra)
struct mwifiex_ra_list_tbl *ra_list;
struct mwifiex_adapter *adapter = priv->adapter;
struct mwifiex_sta_node *node;
- unsigned long flags;
for (i = 0; i < MAX_NUM_TID; ++i) {
@@ -163,7 +162,7 @@ void mwifiex_ralist_add(struct mwifiex_private *priv, const u8 *ra)
ra_list->is_11n_enabled = IS_11N_ENABLED(priv);
}
} else {
- spin_lock_irqsave(&priv->sta_list_spinlock, flags);
+ spin_lock_bh(&priv->sta_list_spinlock);
node = mwifiex_get_sta_entry(priv, ra);
if (node)
ra_list->tx_paused = node->tx_pause;
@@ -171,7 +170,7 @@ void mwifiex_ralist_add(struct mwifiex_private *priv, const u8 *ra)
mwifiex_is_sta_11n_enabled(priv, node);
if (ra_list->is_11n_enabled)
ra_list->max_amsdu = node->max_amsdu;
- spin_unlock_irqrestore(&priv->sta_list_spinlock, flags);
+ spin_unlock_bh(&priv->sta_list_spinlock);
}
mwifiex_dbg(adapter, DATA, "data: ralist %p: is_11n_enabled=%d\n",
@@ -240,7 +239,7 @@ mwifiex_wmm_setup_queue_priorities(struct mwifiex_private *priv,
mwifiex_dbg(priv->adapter, INFO,
"info: WMM Parameter IE: version=%d,\t"
"qos_info Parameter Set Count=%d, Reserved=%#x\n",
- wmm_ie->vend_hdr.version, wmm_ie->qos_info_bitmap &
+ wmm_ie->version, wmm_ie->qos_info_bitmap &
IEEE80211_WMM_IE_AP_QOSINFO_PARAM_SET_CNT_MASK,
wmm_ie->reserved);
@@ -583,11 +582,10 @@ static int mwifiex_free_ack_frame(int id, void *p, void *data)
void
mwifiex_clean_txrx(struct mwifiex_private *priv)
{
- unsigned long flags;
struct sk_buff *skb, *tmp;
mwifiex_11n_cleanup_reorder_tbl(priv);
- spin_lock_irqsave(&priv->wmm.ra_list_spinlock, flags);
+ spin_lock_bh(&priv->wmm.ra_list_spinlock);
mwifiex_wmm_cleanup_queues(priv);
mwifiex_11n_delete_all_tx_ba_stream_tbl(priv);
@@ -601,7 +599,7 @@ mwifiex_clean_txrx(struct mwifiex_private *priv)
if (priv->adapter->if_ops.clean_pcie_ring &&
!test_bit(MWIFIEX_SURPRISE_REMOVED, &priv->adapter->work_flags))
priv->adapter->if_ops.clean_pcie_ring(priv->adapter);
- spin_unlock_irqrestore(&priv->wmm.ra_list_spinlock, flags);
+ spin_unlock_bh(&priv->wmm.ra_list_spinlock);
skb_queue_walk_safe(&priv->tdls_txq, skb, tmp) {
skb_unlink(skb, &priv->tdls_txq);
@@ -642,10 +640,9 @@ void mwifiex_update_ralist_tx_pause(struct mwifiex_private *priv, u8 *mac,
{
struct mwifiex_ra_list_tbl *ra_list;
u32 pkt_cnt = 0, tx_pkts_queued;
- unsigned long flags;
int i;
- spin_lock_irqsave(&priv->wmm.ra_list_spinlock, flags);
+ spin_lock_bh(&priv->wmm.ra_list_spinlock);
for (i = 0; i < MAX_NUM_TID; ++i) {
ra_list = mwifiex_wmm_get_ralist_node(priv, i, mac);
@@ -671,7 +668,7 @@ void mwifiex_update_ralist_tx_pause(struct mwifiex_private *priv, u8 *mac,
atomic_set(&priv->wmm.tx_pkts_queued, tx_pkts_queued);
atomic_set(&priv->wmm.highest_queued_prio, HIGH_PRIO_TID);
}
- spin_unlock_irqrestore(&priv->wmm.ra_list_spinlock, flags);
+ spin_unlock_bh(&priv->wmm.ra_list_spinlock);
}
/* This function updates non-tdls peer ralist tx_pause while
@@ -682,10 +679,9 @@ void mwifiex_update_ralist_tx_pause_in_tdls_cs(struct mwifiex_private *priv,
{
struct mwifiex_ra_list_tbl *ra_list;
u32 pkt_cnt = 0, tx_pkts_queued;
- unsigned long flags;
int i;
- spin_lock_irqsave(&priv->wmm.ra_list_spinlock, flags);
+ spin_lock_bh(&priv->wmm.ra_list_spinlock);
for (i = 0; i < MAX_NUM_TID; ++i) {
list_for_each_entry(ra_list, &priv->wmm.tid_tbl_ptr[i].ra_list,
@@ -716,7 +712,7 @@ void mwifiex_update_ralist_tx_pause_in_tdls_cs(struct mwifiex_private *priv,
atomic_set(&priv->wmm.tx_pkts_queued, tx_pkts_queued);
atomic_set(&priv->wmm.highest_queued_prio, HIGH_PRIO_TID);
}
- spin_unlock_irqrestore(&priv->wmm.ra_list_spinlock, flags);
+ spin_unlock_bh(&priv->wmm.ra_list_spinlock);
}
/*
@@ -748,10 +744,9 @@ void
mwifiex_wmm_del_peer_ra_list(struct mwifiex_private *priv, const u8 *ra_addr)
{
struct mwifiex_ra_list_tbl *ra_list;
- unsigned long flags;
int i;
- spin_lock_irqsave(&priv->wmm.ra_list_spinlock, flags);
+ spin_lock_bh(&priv->wmm.ra_list_spinlock);
for (i = 0; i < MAX_NUM_TID; ++i) {
ra_list = mwifiex_wmm_get_ralist_node(priv, i, ra_addr);
@@ -767,7 +762,7 @@ mwifiex_wmm_del_peer_ra_list(struct mwifiex_private *priv, const u8 *ra_addr)
list_del(&ra_list->list);
kfree(ra_list);
}
- spin_unlock_irqrestore(&priv->wmm.ra_list_spinlock, flags);
+ spin_unlock_bh(&priv->wmm.ra_list_spinlock);
}
/*
@@ -818,7 +813,6 @@ mwifiex_wmm_add_buf_txqueue(struct mwifiex_private *priv,
u32 tid;
struct mwifiex_ra_list_tbl *ra_list;
u8 ra[ETH_ALEN], tid_down;
- unsigned long flags;
struct list_head list_head;
int tdls_status = TDLS_NOT_SETUP;
struct ethhdr *eth_hdr = (struct ethhdr *)skb->data;
@@ -844,7 +838,7 @@ mwifiex_wmm_add_buf_txqueue(struct mwifiex_private *priv,
tid = skb->priority;
- spin_lock_irqsave(&priv->wmm.ra_list_spinlock, flags);
+ spin_lock_bh(&priv->wmm.ra_list_spinlock);
tid_down = mwifiex_wmm_downgrade_tid(priv, tid);
@@ -864,8 +858,7 @@ mwifiex_wmm_add_buf_txqueue(struct mwifiex_private *priv,
break;
case TDLS_SETUP_INPROGRESS:
skb_queue_tail(&priv->tdls_txq, skb);
- spin_unlock_irqrestore(&priv->wmm.ra_list_spinlock,
- flags);
+ spin_unlock_bh(&priv->wmm.ra_list_spinlock);
return;
default:
list_head = priv->wmm.tid_tbl_ptr[tid_down].ra_list;
@@ -881,7 +874,7 @@ mwifiex_wmm_add_buf_txqueue(struct mwifiex_private *priv,
}
if (!ra_list) {
- spin_unlock_irqrestore(&priv->wmm.ra_list_spinlock, flags);
+ spin_unlock_bh(&priv->wmm.ra_list_spinlock);
mwifiex_write_data_complete(adapter, skb, 0, -1);
return;
}
@@ -901,7 +894,7 @@ mwifiex_wmm_add_buf_txqueue(struct mwifiex_private *priv,
else
atomic_inc(&priv->wmm.tx_pkts_queued);
- spin_unlock_irqrestore(&priv->wmm.ra_list_spinlock, flags);
+ spin_unlock_bh(&priv->wmm.ra_list_spinlock);
}
/*
@@ -1092,7 +1085,6 @@ mwifiex_wmm_get_highest_priolist_ptr(struct mwifiex_adapter *adapter,
struct mwifiex_ra_list_tbl *ptr;
struct mwifiex_tid_tbl *tid_ptr;
atomic_t *hqp;
- unsigned long flags_ra;
int i, j;
/* check the BSS with highest priority first */
@@ -1118,8 +1110,7 @@ try_again:
hqp = &priv_tmp->wmm.highest_queued_prio;
for (i = atomic_read(hqp); i >= LOW_PRIO_TID; --i) {
- spin_lock_irqsave(&priv_tmp->wmm.
- ra_list_spinlock, flags_ra);
+ spin_lock_bh(&priv_tmp->wmm.ra_list_spinlock);
tid_ptr = &(priv_tmp)->wmm.
tid_tbl_ptr[tos_to_tid[i]];
@@ -1134,9 +1125,7 @@ try_again:
goto found;
}
- spin_unlock_irqrestore(&priv_tmp->wmm.
- ra_list_spinlock,
- flags_ra);
+ spin_unlock_bh(&priv_tmp->wmm.ra_list_spinlock);
}
if (atomic_read(&priv_tmp->wmm.tx_pkts_queued) != 0) {
@@ -1158,7 +1147,7 @@ found:
/* holds ra_list_spinlock */
if (atomic_read(hqp) > i)
atomic_set(hqp, i);
- spin_unlock_irqrestore(&priv_tmp->wmm.ra_list_spinlock, flags_ra);
+ spin_unlock_bh(&priv_tmp->wmm.ra_list_spinlock);
*priv = priv_tmp;
*tid = tos_to_tid[i];
@@ -1182,24 +1171,23 @@ void mwifiex_rotate_priolists(struct mwifiex_private *priv,
struct mwifiex_adapter *adapter = priv->adapter;
struct mwifiex_bss_prio_tbl *tbl = adapter->bss_prio_tbl;
struct mwifiex_tid_tbl *tid_ptr = &priv->wmm.tid_tbl_ptr[tid];
- unsigned long flags;
- spin_lock_irqsave(&tbl[priv->bss_priority].bss_prio_lock, flags);
+ spin_lock_bh(&tbl[priv->bss_priority].bss_prio_lock);
/*
* dirty trick: we remove 'head' temporarily and reinsert it after
* curr bss node. imagine list to stay fixed while head is moved
*/
list_move(&tbl[priv->bss_priority].bss_prio_head,
&tbl[priv->bss_priority].bss_prio_cur->list);
- spin_unlock_irqrestore(&tbl[priv->bss_priority].bss_prio_lock, flags);
+ spin_unlock_bh(&tbl[priv->bss_priority].bss_prio_lock);
- spin_lock_irqsave(&priv->wmm.ra_list_spinlock, flags);
+ spin_lock_bh(&priv->wmm.ra_list_spinlock);
if (mwifiex_is_ralist_valid(priv, ra, tid)) {
priv->wmm.packets_out[tid]++;
/* same as above */
list_move(&tid_ptr->ra_list, &ra->list);
}
- spin_unlock_irqrestore(&priv->wmm.ra_list_spinlock, flags);
+ spin_unlock_bh(&priv->wmm.ra_list_spinlock);
}
/*
@@ -1236,8 +1224,7 @@ mwifiex_is_11n_aggragation_possible(struct mwifiex_private *priv,
*/
static void
mwifiex_send_single_packet(struct mwifiex_private *priv,
- struct mwifiex_ra_list_tbl *ptr, int ptr_index,
- unsigned long ra_list_flags)
+ struct mwifiex_ra_list_tbl *ptr, int ptr_index)
__releases(&priv->wmm.ra_list_spinlock)
{
struct sk_buff *skb, *skb_next;
@@ -1246,8 +1233,7 @@ mwifiex_send_single_packet(struct mwifiex_private *priv,
struct mwifiex_txinfo *tx_info;
if (skb_queue_empty(&ptr->skb_head)) {
- spin_unlock_irqrestore(&priv->wmm.ra_list_spinlock,
- ra_list_flags);
+ spin_unlock_bh(&priv->wmm.ra_list_spinlock);
mwifiex_dbg(adapter, DATA, "data: nothing to send\n");
return;
}
@@ -1265,18 +1251,17 @@ mwifiex_send_single_packet(struct mwifiex_private *priv,
else
skb_next = NULL;
- spin_unlock_irqrestore(&priv->wmm.ra_list_spinlock, ra_list_flags);
+ spin_unlock_bh(&priv->wmm.ra_list_spinlock);
tx_param.next_pkt_len = ((skb_next) ? skb_next->len +
sizeof(struct txpd) : 0);
if (mwifiex_process_tx(priv, skb, &tx_param) == -EBUSY) {
/* Queue the packet back at the head */
- spin_lock_irqsave(&priv->wmm.ra_list_spinlock, ra_list_flags);
+ spin_lock_bh(&priv->wmm.ra_list_spinlock);
if (!mwifiex_is_ralist_valid(priv, ptr, ptr_index)) {
- spin_unlock_irqrestore(&priv->wmm.ra_list_spinlock,
- ra_list_flags);
+ spin_unlock_bh(&priv->wmm.ra_list_spinlock);
mwifiex_write_data_complete(adapter, skb, 0, -1);
return;
}
@@ -1286,8 +1271,7 @@ mwifiex_send_single_packet(struct mwifiex_private *priv,
ptr->total_pkt_count++;
ptr->ba_pkt_count++;
tx_info->flags |= MWIFIEX_BUF_FLAG_REQUEUED_PKT;
- spin_unlock_irqrestore(&priv->wmm.ra_list_spinlock,
- ra_list_flags);
+ spin_unlock_bh(&priv->wmm.ra_list_spinlock);
} else {
mwifiex_rotate_priolists(priv, ptr, ptr_index);
atomic_dec(&priv->wmm.tx_pkts_queued);
@@ -1323,8 +1307,7 @@ mwifiex_is_ptr_processed(struct mwifiex_private *priv,
*/
static void
mwifiex_send_processed_packet(struct mwifiex_private *priv,
- struct mwifiex_ra_list_tbl *ptr, int ptr_index,
- unsigned long ra_list_flags)
+ struct mwifiex_ra_list_tbl *ptr, int ptr_index)
__releases(&priv->wmm.ra_list_spinlock)
{
struct mwifiex_tx_param tx_param;
@@ -1334,8 +1317,7 @@ mwifiex_send_processed_packet(struct mwifiex_private *priv,
struct mwifiex_txinfo *tx_info;
if (skb_queue_empty(&ptr->skb_head)) {
- spin_unlock_irqrestore(&priv->wmm.ra_list_spinlock,
- ra_list_flags);
+ spin_unlock_bh(&priv->wmm.ra_list_spinlock);
return;
}
@@ -1343,8 +1325,7 @@ mwifiex_send_processed_packet(struct mwifiex_private *priv,
if (adapter->data_sent || adapter->tx_lock_flag) {
ptr->total_pkt_count--;
- spin_unlock_irqrestore(&priv->wmm.ra_list_spinlock,
- ra_list_flags);
+ spin_unlock_bh(&priv->wmm.ra_list_spinlock);
skb_queue_tail(&adapter->tx_data_q, skb);
atomic_dec(&priv->wmm.tx_pkts_queued);
atomic_inc(&adapter->tx_queued);
@@ -1358,7 +1339,7 @@ mwifiex_send_processed_packet(struct mwifiex_private *priv,
tx_info = MWIFIEX_SKB_TXCB(skb);
- spin_unlock_irqrestore(&priv->wmm.ra_list_spinlock, ra_list_flags);
+ spin_unlock_bh(&priv->wmm.ra_list_spinlock);
tx_param.next_pkt_len =
((skb_next) ? skb_next->len +
@@ -1374,11 +1355,10 @@ mwifiex_send_processed_packet(struct mwifiex_private *priv,
switch (ret) {
case -EBUSY:
mwifiex_dbg(adapter, ERROR, "data: -EBUSY is returned\n");
- spin_lock_irqsave(&priv->wmm.ra_list_spinlock, ra_list_flags);
+ spin_lock_bh(&priv->wmm.ra_list_spinlock);
if (!mwifiex_is_ralist_valid(priv, ptr, ptr_index)) {
- spin_unlock_irqrestore(&priv->wmm.ra_list_spinlock,
- ra_list_flags);
+ spin_unlock_bh(&priv->wmm.ra_list_spinlock);
mwifiex_write_data_complete(adapter, skb, 0, -1);
return;
}
@@ -1386,8 +1366,7 @@ mwifiex_send_processed_packet(struct mwifiex_private *priv,
skb_queue_tail(&ptr->skb_head, skb);
tx_info->flags |= MWIFIEX_BUF_FLAG_REQUEUED_PKT;
- spin_unlock_irqrestore(&priv->wmm.ra_list_spinlock,
- ra_list_flags);
+ spin_unlock_bh(&priv->wmm.ra_list_spinlock);
break;
case -1:
mwifiex_dbg(adapter, ERROR, "host_to_card failed: %#x\n", ret);
@@ -1404,10 +1383,9 @@ mwifiex_send_processed_packet(struct mwifiex_private *priv,
if (ret != -EBUSY) {
mwifiex_rotate_priolists(priv, ptr, ptr_index);
atomic_dec(&priv->wmm.tx_pkts_queued);
- spin_lock_irqsave(&priv->wmm.ra_list_spinlock, ra_list_flags);
+ spin_lock_bh(&priv->wmm.ra_list_spinlock);
ptr->total_pkt_count--;
- spin_unlock_irqrestore(&priv->wmm.ra_list_spinlock,
- ra_list_flags);
+ spin_unlock_bh(&priv->wmm.ra_list_spinlock);
}
}
@@ -1423,7 +1401,6 @@ mwifiex_dequeue_tx_packet(struct mwifiex_adapter *adapter)
int ptr_index = 0;
u8 ra[ETH_ALEN];
int tid_del = 0, tid = 0;
- unsigned long flags;
ptr = mwifiex_wmm_get_highest_priolist_ptr(adapter, &priv, &ptr_index);
if (!ptr)
@@ -1433,14 +1410,14 @@ mwifiex_dequeue_tx_packet(struct mwifiex_adapter *adapter)
mwifiex_dbg(adapter, DATA, "data: tid=%d\n", tid);
- spin_lock_irqsave(&priv->wmm.ra_list_spinlock, flags);
+ spin_lock_bh(&priv->wmm.ra_list_spinlock);
if (!mwifiex_is_ralist_valid(priv, ptr, ptr_index)) {
- spin_unlock_irqrestore(&priv->wmm.ra_list_spinlock, flags);
+ spin_unlock_bh(&priv->wmm.ra_list_spinlock);
return -1;
}
if (mwifiex_is_ptr_processed(priv, ptr)) {
- mwifiex_send_processed_packet(priv, ptr, ptr_index, flags);
+ mwifiex_send_processed_packet(priv, ptr, ptr_index);
/* ra_list_spinlock has been freed in
mwifiex_send_processed_packet() */
return 0;
@@ -1455,12 +1432,12 @@ mwifiex_dequeue_tx_packet(struct mwifiex_adapter *adapter)
mwifiex_is_amsdu_allowed(priv, tid) &&
mwifiex_is_11n_aggragation_possible(priv, ptr,
adapter->tx_buf_size))
- mwifiex_11n_aggregate_pkt(priv, ptr, ptr_index, flags);
+ mwifiex_11n_aggregate_pkt(priv, ptr, ptr_index);
/* ra_list_spinlock has been freed in
* mwifiex_11n_aggregate_pkt()
*/
else
- mwifiex_send_single_packet(priv, ptr, ptr_index, flags);
+ mwifiex_send_single_packet(priv, ptr, ptr_index);
/* ra_list_spinlock has been freed in
* mwifiex_send_single_packet()
*/
@@ -1481,11 +1458,11 @@ mwifiex_dequeue_tx_packet(struct mwifiex_adapter *adapter)
if (mwifiex_is_amsdu_allowed(priv, tid) &&
mwifiex_is_11n_aggragation_possible(priv, ptr,
adapter->tx_buf_size))
- mwifiex_11n_aggregate_pkt(priv, ptr, ptr_index, flags);
+ mwifiex_11n_aggregate_pkt(priv, ptr, ptr_index);
/* ra_list_spinlock has been freed in
mwifiex_11n_aggregate_pkt() */
else
- mwifiex_send_single_packet(priv, ptr, ptr_index, flags);
+ mwifiex_send_single_packet(priv, ptr, ptr_index);
/* ra_list_spinlock has been freed in
mwifiex_send_single_packet() */
}
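With the flags parameter gone, mwifiex_send_single_packet() and mwifiex_send_processed_packet() above keep their unusual locking contract: they are entered with wmm.ra_list_spinlock held and drop it on every exit path, which the __releases() sparse annotation documents. A condensed illustration of that hand-off, again with invented example_* names:

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(example_ra_lock);

/* Called with example_ra_lock held; drops it on every path. */
static void example_send_locked(void)
	__releases(&example_ra_lock)
{
	/* ... pick an skb, then hand it to the bus driver unlocked ... */
	spin_unlock_bh(&example_ra_lock);
}

static void example_dequeue(void)
{
	spin_lock_bh(&example_ra_lock);
	example_send_locked();		/* returns with the lock released */
}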
diff --git a/drivers/net/wireless/mediatek/mt76/dma.c b/drivers/net/wireless/mediatek/mt76/dma.c
index 4381155375e1..d8f61e540bfd 100644
--- a/drivers/net/wireless/mediatek/mt76/dma.c
+++ b/drivers/net/wireless/mediatek/mt76/dma.c
@@ -588,6 +588,7 @@ void mt76_dma_cleanup(struct mt76_dev *dev)
{
int i;
+ netif_napi_del(&dev->tx_napi);
for (i = 0; i < ARRAY_SIZE(dev->q_tx); i++)
mt76_dma_tx_cleanup(dev, i, true);
diff --git a/drivers/net/wireless/mediatek/mt76/mac80211.c b/drivers/net/wireless/mediatek/mt76/mac80211.c
index 5b6a81ee457e..ec9efb79985f 100644
--- a/drivers/net/wireless/mediatek/mt76/mac80211.c
+++ b/drivers/net/wireless/mediatek/mt76/mac80211.c
@@ -766,10 +766,21 @@ int mt76_get_txpower(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
*dbm = DIV_ROUND_UP(dev->txpower_cur, 2);
/* convert from per-chain power to combined
- * output on 2x2 devices
+ * output power
*/
- if (n_chains > 1)
+ switch (n_chains) {
+ case 4:
+ *dbm += 6;
+ break;
+ case 3:
+ *dbm += 4;
+ break;
+ case 2:
*dbm += 3;
+ break;
+ default:
+ break;
+ }
return 0;
}
@@ -820,3 +831,50 @@ mt76_set_tim(struct ieee80211_hw *hw, struct ieee80211_sta *sta, bool set)
return 0;
}
EXPORT_SYMBOL_GPL(mt76_set_tim);
+
+void mt76_insert_ccmp_hdr(struct sk_buff *skb, u8 key_id)
+{
+ struct mt76_rx_status *status = (struct mt76_rx_status *)skb->cb;
+ int hdr_len = ieee80211_get_hdrlen_from_skb(skb);
+ u8 *hdr, *pn = status->iv;
+
+ __skb_push(skb, 8);
+ memmove(skb->data, skb->data + 8, hdr_len);
+ hdr = skb->data + hdr_len;
+
+ hdr[0] = pn[5];
+ hdr[1] = pn[4];
+ hdr[2] = 0;
+ hdr[3] = 0x20 | (key_id << 6);
+ hdr[4] = pn[3];
+ hdr[5] = pn[2];
+ hdr[6] = pn[1];
+ hdr[7] = pn[0];
+
+ status->flag &= ~RX_FLAG_IV_STRIPPED;
+}
+EXPORT_SYMBOL_GPL(mt76_insert_ccmp_hdr);
+
+int mt76_get_rate(struct mt76_dev *dev,
+ struct ieee80211_supported_band *sband,
+ int idx, bool cck)
+{
+ int i, offset = 0, len = sband->n_bitrates;
+
+ if (cck) {
+ if (sband == &dev->sband_5g.sband)
+ return 0;
+
+ idx &= ~BIT(2); /* short preamble */
+ } else if (sband == &dev->sband_2g.sband) {
+ offset = 4;
+ }
+
+ for (i = offset; i < len; i++) {
+ if ((sband->bitrates[i].hw_value & GENMASK(7, 0)) == idx)
+ return i;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(mt76_get_rate);
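The mt76_get_txpower() hunk above replaces the 2x2-only "+3 dB" fix-up with a per-chain-count adjustment: combining N equal transmit chains raises the total output by roughly 10*log10(N) dB, rounded here to +3/+4/+6 dB for 2/3/4 chains. A stand-alone sketch of that rule (the function name is invented):

static int example_combined_dbm(int per_chain_dbm, int n_chains)
{
	switch (n_chains) {
	case 4:
		return per_chain_dbm + 6;	/* 10*log10(4) ~= 6.0 dB */
	case 3:
		return per_chain_dbm + 4;	/* 10*log10(3) ~= 4.8 dB */
	case 2:
		return per_chain_dbm + 3;	/* 10*log10(2) ~= 3.0 dB */
	default:
		return per_chain_dbm;
	}
}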
diff --git a/drivers/net/wireless/mediatek/mt76/mt76.h b/drivers/net/wireless/mediatek/mt76/mt76.h
index 8ecbf81a906f..989386ecb5e4 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76.h
+++ b/drivers/net/wireless/mediatek/mt76/mt76.h
@@ -30,6 +30,7 @@
#define MT_TX_RING_SIZE 256
#define MT_MCU_RING_SIZE 32
#define MT_RX_BUF_SIZE 2048
+#define MT_SKB_HEAD_LEN 128
struct mt76_dev;
struct mt76_wcid;
@@ -258,10 +259,11 @@ struct mt76_rx_tid {
#define MT_TX_CB_TXS_DONE BIT(1)
#define MT_TX_CB_TXS_FAILED BIT(2)
-#define MT_PACKET_ID_MASK GENMASK(7, 0)
+#define MT_PACKET_ID_MASK GENMASK(6, 0)
#define MT_PACKET_ID_NO_ACK 0
#define MT_PACKET_ID_NO_SKB 1
#define MT_PACKET_ID_FIRST 2
+#define MT_PACKET_ID_HAS_RATE BIT(7)
#define MT_TX_STATUS_SKB_TIMEOUT HZ
@@ -381,7 +383,8 @@ enum mt76u_out_ep {
__MT_EP_OUT_MAX,
};
-#define MT_SG_MAX_SIZE 8
+#define MT_TX_SG_MAX_SIZE 8
+#define MT_RX_SG_MAX_SIZE 1
#define MT_NUM_TX_ENTRIES 256
#define MT_NUM_RX_ENTRIES 128
#define MCU_RESP_URB_SIZE 1024
@@ -393,9 +396,7 @@ struct mt76_usb {
struct delayed_work stat_work;
u8 out_ep[__MT_EP_OUT_MAX];
- u16 out_max_packet;
u8 in_ep[__MT_EP_IN_MAX];
- u16 in_max_packet;
bool sg_en;
struct mt76u_mcu {
@@ -452,6 +453,7 @@ struct mt76_dev {
int tx_dma_idx[4];
struct tasklet_struct tx_tasklet;
+ struct napi_struct tx_napi;
struct delayed_work mac_work;
wait_queue_head_t tx_wait;
@@ -483,6 +485,8 @@ struct mt76_dev {
int txpower_conf;
int txpower_cur;
+ enum nl80211_dfs_regions region;
+
u32 debugfs_reg;
struct led_classdev led_cdev;
@@ -688,6 +692,14 @@ static inline void mt76_insert_hdr_pad(struct sk_buff *skb)
skb->data[len + 1] = 0;
}
+static inline bool mt76_is_skb_pktid(u8 pktid)
+{
+ if (pktid & MT_PACKET_ID_HAS_RATE)
+ return false;
+
+ return pktid >= MT_PACKET_ID_FIRST;
+}
+
void mt76_rx(struct mt76_dev *dev, enum mt76_rxq_id q, struct sk_buff *skb);
void mt76_tx(struct mt76_dev *dev, struct ieee80211_sta *sta,
struct mt76_wcid *wcid, struct sk_buff *skb);
@@ -749,6 +761,10 @@ void mt76_csa_check(struct mt76_dev *dev);
void mt76_csa_finish(struct mt76_dev *dev);
int mt76_set_tim(struct ieee80211_hw *hw, struct ieee80211_sta *sta, bool set);
+void mt76_insert_ccmp_hdr(struct sk_buff *skb, u8 key_id);
+int mt76_get_rate(struct mt76_dev *dev,
+ struct ieee80211_supported_band *sband,
+ int idx, bool cck);
/* internal */
void mt76_tx_free(struct mt76_dev *dev);
diff --git a/drivers/net/wireless/mediatek/mt76/mt7603/core.c b/drivers/net/wireless/mediatek/mt76/mt7603/core.c
index 37e5644b45ef..e7ee58e3379c 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7603/core.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7603/core.c
@@ -35,7 +35,7 @@ irqreturn_t mt7603_irq_handler(int irq, void *dev_instance)
if (intr & MT_INT_TX_DONE_ALL) {
mt7603_irq_disable(dev, MT_INT_TX_DONE_ALL);
- tasklet_schedule(&dev->mt76.tx_tasklet);
+ napi_schedule(&dev->mt76.tx_napi);
}
if (intr & MT_INT_RX_DONE(0)) {
diff --git a/drivers/net/wireless/mediatek/mt76/mt7603/debugfs.c b/drivers/net/wireless/mediatek/mt76/mt7603/debugfs.c
index f8b3b6ab6297..a1bc3103cbe9 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7603/debugfs.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7603/debugfs.c
@@ -40,6 +40,35 @@ mt7603_radio_read(struct seq_file *s, void *data)
return 0;
}
+static int
+mt7603_edcca_set(void *data, u64 val)
+{
+ struct mt7603_dev *dev = data;
+
+ mutex_lock(&dev->mt76.mutex);
+
+ dev->ed_monitor_enabled = !!val;
+ dev->ed_monitor = dev->ed_monitor_enabled &&
+ dev->mt76.region == NL80211_DFS_ETSI;
+ mt7603_init_edcca(dev);
+
+ mutex_unlock(&dev->mt76.mutex);
+
+ return 0;
+}
+
+static int
+mt7603_edcca_get(void *data, u64 *val)
+{
+ struct mt7603_dev *dev = data;
+
+ *val = dev->ed_monitor_enabled;
+ return 0;
+}
+
+DEFINE_DEBUGFS_ATTRIBUTE(fops_edcca, mt7603_edcca_get,
+ mt7603_edcca_set, "%lld\n");
+
void mt7603_init_debugfs(struct mt7603_dev *dev)
{
struct dentry *dir;
@@ -48,6 +77,7 @@ void mt7603_init_debugfs(struct mt7603_dev *dev)
if (!dir)
return;
+ debugfs_create_file("edcca", 0600, dir, dev, &fops_edcca);
debugfs_create_u32("reset_test", 0600, dir, &dev->reset_test);
debugfs_create_devm_seqfile(dev->mt76.dev, "reset", dir,
mt7603_reset_read);
diff --git a/drivers/net/wireless/mediatek/mt76/mt7603/dma.c b/drivers/net/wireless/mediatek/mt76/mt7603/dma.c
index 27e2d9f90553..58dc511f93c5 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7603/dma.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7603/dma.c
@@ -139,15 +139,30 @@ static void
mt7603_tx_tasklet(unsigned long data)
{
struct mt7603_dev *dev = (struct mt7603_dev *)data;
+
+ mt76_txq_schedule_all(&dev->mt76);
+}
+
+static int mt7603_poll_tx(struct napi_struct *napi, int budget)
+{
+ struct mt7603_dev *dev;
int i;
+ dev = container_of(napi, struct mt7603_dev, mt76.tx_napi);
dev->tx_dma_check = 0;
+
for (i = MT_TXQ_MCU; i >= 0; i--)
mt76_queue_tx_cleanup(dev, i, false);
- mt76_txq_schedule_all(&dev->mt76);
+ if (napi_complete_done(napi, 0))
+ mt7603_irq_enable(dev, MT_INT_TX_DONE_ALL);
- mt7603_irq_enable(dev, MT_INT_TX_DONE_ALL);
+ for (i = MT_TXQ_MCU; i >= 0; i--)
+ mt76_queue_tx_cleanup(dev, i, false);
+
+ tasklet_schedule(&dev->mt76.tx_tasklet);
+
+ return 0;
}
int mt7603_dma_init(struct mt7603_dev *dev)
@@ -216,7 +231,15 @@ int mt7603_dma_init(struct mt7603_dev *dev)
return ret;
mt76_wr(dev, MT_DELAY_INT_CFG, 0);
- return mt76_init_queues(dev);
+ ret = mt76_init_queues(dev);
+ if (ret)
+ return ret;
+
+ netif_tx_napi_add(&dev->mt76.napi_dev, &dev->mt76.tx_napi,
+ mt7603_poll_tx, NAPI_POLL_WEIGHT);
+ napi_enable(&dev->mt76.tx_napi);
+
+ return 0;
}
void mt7603_dma_cleanup(struct mt7603_dev *dev)
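mt7603 moves TX completion from the tx tasklet into a dedicated NAPI poll (mt7615 gets the same treatment further below): reap the finished descriptors, complete NAPI and re-arm the TX-done interrupt, then reap once more so completions that raced with the re-arm are not lost, and finally kick the scheduling tasklet. A condensed sketch of that shape, with hypothetical mydrv_* names standing in for the driver specifics:

static int mydrv_poll_tx(struct napi_struct *napi, int budget)
{
	struct mydrv_dev *dev = container_of(napi, struct mydrv_dev, tx_napi);

	mydrv_tx_cleanup(dev);			/* first pass over the TX rings */

	if (napi_complete_done(napi, 0))	/* true unless we were re-scheduled */
		mydrv_irq_enable(dev, MYDRV_INT_TX_DONE);

	mydrv_tx_cleanup(dev);			/* close the race with the re-arm */
	tasklet_schedule(&dev->tx_tasklet);	/* let the TX scheduler refill */

	return 0;
}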
diff --git a/drivers/net/wireless/mediatek/mt76/mt7603/eeprom.h b/drivers/net/wireless/mediatek/mt76/mt7603/eeprom.h
index f27b99b7e359..b893facfba48 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7603/eeprom.h
+++ b/drivers/net/wireless/mediatek/mt76/mt7603/eeprom.h
@@ -69,6 +69,8 @@ enum mt7603_eeprom_field {
MT_EE_CP_FT_VERSION = 0x0f0,
+ MT_EE_TX_POWER_TSSI_OFF = 0x0f2,
+
MT_EE_XTAL_FREQ_OFFSET = 0x0f4,
MT_EE_XTAL_TRIM_2_COMP = 0x0f5,
MT_EE_XTAL_TRIM_3_COMP = 0x0f6,
diff --git a/drivers/net/wireless/mediatek/mt76/mt7603/init.c b/drivers/net/wireless/mediatek/mt76/mt7603/init.c
index 78cdbb70e178..38834c7d0891 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7603/init.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7603/init.c
@@ -227,11 +227,19 @@ mt7603_mac_init(struct mt7603_dev *dev)
mt76_rmw_field(dev, MT_LPON_BTEIR, MT_LPON_BTEIR_MBSS_MODE, 2);
mt76_rmw_field(dev, MT_WF_RMACDR, MT_WF_RMACDR_MBSSID_MASK, 2);
- mt76_wr(dev, MT_AGG_ARUCR, FIELD_PREP(MT_AGG_ARxCR_LIMIT(0), 7));
+ mt76_wr(dev, MT_AGG_ARUCR,
+ FIELD_PREP(MT_AGG_ARxCR_LIMIT(0), 7) |
+ FIELD_PREP(MT_AGG_ARxCR_LIMIT(1), 2) |
+ FIELD_PREP(MT_AGG_ARxCR_LIMIT(2), 2) |
+ FIELD_PREP(MT_AGG_ARxCR_LIMIT(3), 2) |
+ FIELD_PREP(MT_AGG_ARxCR_LIMIT(4), 1) |
+ FIELD_PREP(MT_AGG_ARxCR_LIMIT(5), 1) |
+ FIELD_PREP(MT_AGG_ARxCR_LIMIT(6), 1) |
+ FIELD_PREP(MT_AGG_ARxCR_LIMIT(7), 1));
+
mt76_wr(dev, MT_AGG_ARDCR,
- FIELD_PREP(MT_AGG_ARxCR_LIMIT(0), 0) |
- FIELD_PREP(MT_AGG_ARxCR_LIMIT(1),
- max_t(int, 0, MT7603_RATE_RETRY - 2)) |
+ FIELD_PREP(MT_AGG_ARxCR_LIMIT(0), MT7603_RATE_RETRY - 1) |
+ FIELD_PREP(MT_AGG_ARxCR_LIMIT(1), MT7603_RATE_RETRY - 1) |
FIELD_PREP(MT_AGG_ARxCR_LIMIT(2), MT7603_RATE_RETRY - 1) |
FIELD_PREP(MT_AGG_ARxCR_LIMIT(3), MT7603_RATE_RETRY - 1) |
FIELD_PREP(MT_AGG_ARxCR_LIMIT(4), MT7603_RATE_RETRY - 1) |
@@ -437,7 +445,9 @@ mt7603_regd_notifier(struct wiphy *wiphy,
struct ieee80211_hw *hw = wiphy_to_ieee80211_hw(wiphy);
struct mt7603_dev *dev = hw->priv;
- dev->ed_monitor = request->dfs_region == NL80211_DFS_ETSI;
+ dev->mt76.region = request->dfs_region;
+ dev->ed_monitor = dev->ed_monitor_enabled &&
+ dev->mt76.region == NL80211_DFS_ETSI;
}
static int
@@ -463,9 +473,13 @@ mt7603_init_txpower(struct mt7603_dev *dev,
u8 *eeprom = (u8 *)dev->mt76.eeprom.data;
int target_power = eeprom[MT_EE_TX_POWER_0_START_2G + 2] & ~BIT(7);
u8 *rate_power = &eeprom[MT_EE_TX_POWER_CCK];
+ bool ext_pa = eeprom[MT_EE_NIC_CONF_0 + 1] & BIT(1);
int max_offset, cur_offset;
int i;
+ if (ext_pa && is_mt7603(dev))
+ target_power = eeprom[MT_EE_TX_POWER_TSSI_OFF] & ~BIT(7);
+
if (target_power & BIT(6))
target_power = -(target_power & GENMASK(5, 0));
@@ -488,7 +502,7 @@ mt7603_init_txpower(struct mt7603_dev *dev,
for (i = 0; i < sband->n_channels; i++) {
chan = &sband->channels[i];
- chan->max_power = target_power;
+ chan->max_power = min_t(int, chan->max_reg_power, target_power);
chan->orig_mpwr = target_power;
}
}
diff --git a/drivers/net/wireless/mediatek/mt76/mt7603/mac.c b/drivers/net/wireless/mediatek/mt76/mt7603/mac.c
index 6d506e34c3ee..40db1cbc832d 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7603/mac.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7603/mac.c
@@ -370,31 +370,6 @@ void mt7603_mac_tx_ba_reset(struct mt7603_dev *dev, int wcid, int tid,
mt76_rmw(dev, addr + (15 * 4), tid_mask, tid_val);
}
-static int
-mt7603_get_rate(struct mt7603_dev *dev, struct ieee80211_supported_band *sband,
- int idx, bool cck)
-{
- int offset = 0;
- int len = sband->n_bitrates;
- int i;
-
- if (cck) {
- if (sband == &dev->mt76.sband_5g.sband)
- return 0;
-
- idx &= ~BIT(2); /* short preamble */
- } else if (sband == &dev->mt76.sband_2g.sband) {
- offset = 4;
- }
-
- for (i = offset; i < len; i++) {
- if ((sband->bitrates[i].hw_value & GENMASK(7, 0)) == idx)
- return i;
- }
-
- return 0;
-}
-
static struct mt76_wcid *
mt7603_rx_get_wcid(struct mt7603_dev *dev, u8 idx, bool unicast)
{
@@ -418,30 +393,6 @@ mt7603_rx_get_wcid(struct mt7603_dev *dev, u8 idx, bool unicast)
return &sta->vif->sta.wcid;
}
-static void
-mt7603_insert_ccmp_hdr(struct sk_buff *skb, u8 key_id)
-{
- struct mt76_rx_status *status = (struct mt76_rx_status *)skb->cb;
- int hdr_len = ieee80211_get_hdrlen_from_skb(skb);
- u8 *pn = status->iv;
- u8 *hdr;
-
- __skb_push(skb, 8);
- memmove(skb->data, skb->data + 8, hdr_len);
- hdr = skb->data + hdr_len;
-
- hdr[0] = pn[5];
- hdr[1] = pn[4];
- hdr[2] = 0;
- hdr[3] = 0x20 | (key_id << 6);
- hdr[4] = pn[3];
- hdr[5] = pn[2];
- hdr[6] = pn[1];
- hdr[7] = pn[0];
-
- status->flag &= ~RX_FLAG_IV_STRIPPED;
-}
-
int
mt7603_mac_fill_rx(struct mt7603_dev *dev, struct sk_buff *skb)
{
@@ -532,7 +483,7 @@ mt7603_mac_fill_rx(struct mt7603_dev *dev, struct sk_buff *skb)
cck = true;
/* fall through */
case MT_PHY_TYPE_OFDM:
- i = mt7603_get_rate(dev, sband, i, cck);
+ i = mt76_get_rate(&dev->mt76, sband, i, cck);
break;
case MT_PHY_TYPE_HT_GF:
case MT_PHY_TYPE_HT:
@@ -580,7 +531,7 @@ mt7603_mac_fill_rx(struct mt7603_dev *dev, struct sk_buff *skb)
if (insert_ccmp_hdr) {
u8 key_id = FIELD_GET(MT_RXD1_NORMAL_KEY_ID, rxd1);
- mt7603_insert_ccmp_hdr(skb, key_id);
+ mt76_insert_ccmp_hdr(skb, key_id);
}
hdr = (struct ieee80211_hdr *)skb->data;
@@ -640,6 +591,7 @@ void mt7603_wtbl_set_rates(struct mt7603_dev *dev, struct mt7603_sta *sta,
struct ieee80211_tx_rate *probe_rate,
struct ieee80211_tx_rate *rates)
{
+ struct ieee80211_tx_rate *ref;
int wcid = sta->wcid.idx;
u32 addr = mt7603_wtbl2_addr(wcid);
bool stbc = false;
@@ -648,7 +600,8 @@ void mt7603_wtbl_set_rates(struct mt7603_dev *dev, struct mt7603_sta *sta,
u16 val[4];
u16 probe_val;
u32 w9 = mt76_rr(dev, addr + 9 * 4);
- int i;
+ bool rateset;
+ int i, k;
if (!mt76_poll(dev, MT_WTBL_UPDATE, MT_WTBL_UPDATE_BUSY, 0, 5000))
return;
@@ -656,6 +609,41 @@ void mt7603_wtbl_set_rates(struct mt7603_dev *dev, struct mt7603_sta *sta,
for (i = n_rates; i < 4; i++)
rates[i] = rates[n_rates - 1];
+ rateset = !(sta->rate_set_tsf & BIT(0));
+ memcpy(sta->rateset[rateset].rates, rates,
+ sizeof(sta->rateset[rateset].rates));
+ if (probe_rate) {
+ sta->rateset[rateset].probe_rate = *probe_rate;
+ ref = &sta->rateset[rateset].probe_rate;
+ } else {
+ sta->rateset[rateset].probe_rate.idx = -1;
+ ref = &sta->rateset[rateset].rates[0];
+ }
+
+ rates = sta->rateset[rateset].rates;
+ for (i = 0; i < ARRAY_SIZE(sta->rateset[rateset].rates); i++) {
+ /*
+ * We don't support switching between short and long GI
+ * within the rate set. For accurate tx status reporting, we
+ * need to make sure that flags match.
+ * For improved performance, avoid duplicate entries by
+ * decrementing the MCS index if necessary
+ */
+ if ((ref->flags ^ rates[i].flags) & IEEE80211_TX_RC_SHORT_GI)
+ rates[i].flags ^= IEEE80211_TX_RC_SHORT_GI;
+
+ for (k = 0; k < i; k++) {
+ if (rates[i].idx != rates[k].idx)
+ continue;
+ if ((rates[i].flags ^ rates[k].flags) &
+ IEEE80211_TX_RC_40_MHZ_WIDTH)
+ continue;
+
+ rates[i].idx--;
+ }
+
+ }
+
w9 &= MT_WTBL2_W9_SHORT_GI_20 | MT_WTBL2_W9_SHORT_GI_40 |
MT_WTBL2_W9_SHORT_GI_80;
@@ -699,19 +687,22 @@ void mt7603_wtbl_set_rates(struct mt7603_dev *dev, struct mt7603_sta *sta,
mt76_wr(dev, MT_WTBL_RIUCR1,
FIELD_PREP(MT_WTBL_RIUCR1_RATE0, probe_val) |
FIELD_PREP(MT_WTBL_RIUCR1_RATE1, val[0]) |
- FIELD_PREP(MT_WTBL_RIUCR1_RATE2_LO, val[0]));
+ FIELD_PREP(MT_WTBL_RIUCR1_RATE2_LO, val[1]));
mt76_wr(dev, MT_WTBL_RIUCR2,
- FIELD_PREP(MT_WTBL_RIUCR2_RATE2_HI, val[0] >> 8) |
+ FIELD_PREP(MT_WTBL_RIUCR2_RATE2_HI, val[1] >> 8) |
FIELD_PREP(MT_WTBL_RIUCR2_RATE3, val[1]) |
- FIELD_PREP(MT_WTBL_RIUCR2_RATE4, val[1]) |
+ FIELD_PREP(MT_WTBL_RIUCR2_RATE4, val[2]) |
FIELD_PREP(MT_WTBL_RIUCR2_RATE5_LO, val[2]));
mt76_wr(dev, MT_WTBL_RIUCR3,
FIELD_PREP(MT_WTBL_RIUCR3_RATE5_HI, val[2] >> 4) |
- FIELD_PREP(MT_WTBL_RIUCR3_RATE6, val[2]) |
+ FIELD_PREP(MT_WTBL_RIUCR3_RATE6, val[3]) |
FIELD_PREP(MT_WTBL_RIUCR3_RATE7, val[3]));
+ mt76_set(dev, MT_LPON_T0CR, MT_LPON_T0CR_MODE); /* TSF read */
+ sta->rate_set_tsf = (mt76_rr(dev, MT_LPON_UTTR0) & ~BIT(0)) | rateset;
+
mt76_wr(dev, MT_WTBL_UPDATE,
FIELD_PREP(MT_WTBL_UPDATE_WLAN_IDX, wcid) |
MT_WTBL_UPDATE_RATE_UPDATE |
@@ -938,9 +929,9 @@ int mt7603_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
if (info->flags & IEEE80211_TX_CTL_RATE_CTRL_PROBE) {
spin_lock_bh(&dev->mt76.lock);
- msta->rate_probe = true;
mt7603_wtbl_set_rates(dev, msta, &info->control.rates[0],
msta->rates);
+ msta->rate_probe = true;
spin_unlock_bh(&dev->mt76.lock);
}
@@ -955,10 +946,12 @@ mt7603_fill_txs(struct mt7603_dev *dev, struct mt7603_sta *sta,
struct ieee80211_tx_info *info, __le32 *txs_data)
{
struct ieee80211_supported_band *sband;
- int final_idx = 0;
+ struct mt7603_rate_set *rs;
+ int first_idx = 0, last_idx;
+ u32 rate_set_tsf;
u32 final_rate;
u32 final_rate_flags;
- bool final_mpdu;
+ bool rs_idx;
bool ack_timeout;
bool fixed_rate;
bool probe;
@@ -966,7 +959,6 @@ mt7603_fill_txs(struct mt7603_dev *dev, struct mt7603_sta *sta,
bool cck = false;
int count;
u32 txs;
- u8 pid;
int idx;
int i;
@@ -974,10 +966,9 @@ mt7603_fill_txs(struct mt7603_dev *dev, struct mt7603_sta *sta,
probe = !!(info->flags & IEEE80211_TX_CTL_RATE_CTRL_PROBE);
txs = le32_to_cpu(txs_data[4]);
- final_mpdu = txs & MT_TXS4_ACKED_MPDU;
ampdu = !fixed_rate && (txs & MT_TXS4_AMPDU);
- pid = FIELD_GET(MT_TXS4_PID, txs);
count = FIELD_GET(MT_TXS4_TX_COUNT, txs);
+ last_idx = FIELD_GET(MT_TXS4_LAST_TX_RATE, txs);
txs = le32_to_cpu(txs_data[0]);
final_rate = FIELD_GET(MT_TXS0_TX_RATE, txs);
@@ -999,38 +990,57 @@ mt7603_fill_txs(struct mt7603_dev *dev, struct mt7603_sta *sta,
if (ampdu || (info->flags & IEEE80211_TX_CTL_AMPDU))
info->flags |= IEEE80211_TX_STAT_AMPDU | IEEE80211_TX_CTL_AMPDU;
+ first_idx = max_t(int, 0, last_idx - (count + 1) / MT7603_RATE_RETRY);
+
if (fixed_rate && !probe) {
info->status.rates[0].count = count;
+ i = 0;
goto out;
}
- for (i = 0, idx = 0; i < ARRAY_SIZE(info->status.rates); i++) {
- int cur_count = min_t(int, count, 2 * MT7603_RATE_RETRY);
+ rate_set_tsf = READ_ONCE(sta->rate_set_tsf);
+ rs_idx = !((u32)(FIELD_GET(MT_TXS1_F0_TIMESTAMP, le32_to_cpu(txs_data[1])) -
+ rate_set_tsf) < 1000000);
+ rs_idx ^= rate_set_tsf & BIT(0);
+ rs = &sta->rateset[rs_idx];
- if (!i && probe) {
- cur_count = 1;
- } else {
- info->status.rates[i] = sta->rates[idx];
- idx++;
- }
+ if (!first_idx && rs->probe_rate.idx >= 0) {
+ info->status.rates[0] = rs->probe_rate;
- if (i && info->status.rates[i].idx < 0) {
- info->status.rates[i - 1].count += count;
- break;
+ spin_lock_bh(&dev->mt76.lock);
+ if (sta->rate_probe) {
+ mt7603_wtbl_set_rates(dev, sta, NULL,
+ sta->rates);
+ sta->rate_probe = false;
}
+ spin_unlock_bh(&dev->mt76.lock);
+ } else
+ info->status.rates[0] = rs->rates[first_idx / 2];
+ info->status.rates[0].count = 0;
- if (!count) {
- info->status.rates[i].idx = -1;
- break;
- }
+ for (i = 0, idx = first_idx; count && idx <= last_idx; idx++) {
+ struct ieee80211_tx_rate *cur_rate;
+ int cur_count;
- info->status.rates[i].count = cur_count;
- final_idx = i;
+ cur_rate = &rs->rates[idx / 2];
+ cur_count = min_t(int, MT7603_RATE_RETRY, count);
count -= cur_count;
+
+ if (idx && (cur_rate->idx != info->status.rates[i].idx ||
+ cur_rate->flags != info->status.rates[i].flags)) {
+ i++;
+ if (i == ARRAY_SIZE(info->status.rates))
+ break;
+
+ info->status.rates[i] = *cur_rate;
+ info->status.rates[i].count = 0;
+ }
+
+ info->status.rates[i].count += cur_count;
}
out:
- final_rate_flags = info->status.rates[final_idx].flags;
+ final_rate_flags = info->status.rates[i].flags;
switch (FIELD_GET(MT_TX_RATE_MODE, final_rate)) {
case MT_PHY_TYPE_CCK:
@@ -1042,7 +1052,8 @@ out:
else
sband = &dev->mt76.sband_2g.sband;
final_rate &= GENMASK(5, 0);
- final_rate = mt7603_get_rate(dev, sband, final_rate, cck);
+ final_rate = mt76_get_rate(&dev->mt76, sband, final_rate,
+ cck);
final_rate_flags = 0;
break;
case MT_PHY_TYPE_HT_GF:
@@ -1056,8 +1067,8 @@ out:
return false;
}
- info->status.rates[final_idx].idx = final_rate;
- info->status.rates[final_idx].flags = final_rate_flags;
+ info->status.rates[i].idx = final_rate;
+ info->status.rates[i].flags = final_rate_flags;
return true;
}
@@ -1078,16 +1089,6 @@ mt7603_mac_add_txs_skb(struct mt7603_dev *dev, struct mt7603_sta *sta, int pid,
if (skb) {
struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
- if (info->flags & IEEE80211_TX_CTL_RATE_CTRL_PROBE) {
- spin_lock_bh(&dev->mt76.lock);
- if (sta->rate_probe) {
- mt7603_wtbl_set_rates(dev, sta, NULL,
- sta->rates);
- sta->rate_probe = false;
- }
- spin_unlock_bh(&dev->mt76.lock);
- }
-
if (!mt7603_fill_txs(dev, sta, info, txs_data)) {
ieee80211_tx_info_clear_status(info);
info->status.rates[0].idx = -1;
@@ -1282,6 +1283,7 @@ static void mt7603_mac_watchdog_reset(struct mt7603_dev *dev)
tasklet_disable(&dev->mt76.pre_tbtt_tasklet);
napi_disable(&dev->mt76.napi[0]);
napi_disable(&dev->mt76.napi[1]);
+ napi_disable(&dev->mt76.tx_napi);
mutex_lock(&dev->mt76.mutex);
@@ -1326,7 +1328,8 @@ skip_dma_reset:
mutex_unlock(&dev->mt76.mutex);
tasklet_enable(&dev->mt76.tx_tasklet);
- tasklet_schedule(&dev->mt76.tx_tasklet);
+ napi_enable(&dev->mt76.tx_napi);
+ napi_schedule(&dev->mt76.tx_napi);
tasklet_enable(&dev->mt76.pre_tbtt_tasklet);
mt7603_beacon_set_timer(dev, -1, beacon_int);
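The rateset[2]/rate_set_tsf bookkeeping above lets a TX status report be charged to the rate table that was actually programmed when the frame was queued: bit 0 of rate_set_tsf records which of the two sets was written last and the remaining bits hold the TSF read back at update time, so a status whose (wrapping) timestamp lands within the 1000000-tick window after that update maps to the new set, anything else to the previous one. A sketch of just that selection step, mirroring mt7603_fill_txs() (the function name is invented):

static bool example_pick_rateset(struct mt7603_sta *sta, __le32 *txs_data)
{
	u32 rate_set_tsf = READ_ONCE(sta->rate_set_tsf);
	u32 status_ts = FIELD_GET(MT_TXS1_F0_TIMESTAMP,
				  le32_to_cpu(txs_data[1]));
	bool recent = (u32)(status_ts - rate_set_tsf) < 1000000;

	/* bit 0 of rate_set_tsf indexes the most recently written set */
	return (!recent) ^ (rate_set_tsf & BIT(0));
}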
diff --git a/drivers/net/wireless/mediatek/mt76/mt7603/main.c b/drivers/net/wireless/mediatek/mt76/mt7603/main.c
index 0a0334dc40d5..e5d4cb6381a8 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7603/main.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7603/main.c
@@ -103,8 +103,7 @@ mt7603_remove_interface(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
mutex_unlock(&dev->mt76.mutex);
}
-static void
-mt7603_init_edcca(struct mt7603_dev *dev)
+void mt7603_init_edcca(struct mt7603_dev *dev)
{
/* Set lower signal level to -65dBm */
mt76_rmw_field(dev, MT_RXTD(8), MT_RXTD_8_LOWER_SIGNAL, 0x23);
@@ -207,8 +206,11 @@ mt7603_config(struct ieee80211_hw *hw, u32 changed)
int ret = 0;
if (changed & (IEEE80211_CONF_CHANGE_CHANNEL |
- IEEE80211_CONF_CHANGE_POWER))
+ IEEE80211_CONF_CHANGE_POWER)) {
+ ieee80211_stop_queues(hw);
ret = mt7603_set_channel(dev, &hw->conf.chandef);
+ ieee80211_wake_queues(hw);
+ }
if (changed & IEEE80211_CONF_CHANGE_MONITOR) {
mutex_lock(&dev->mt76.mutex);
diff --git a/drivers/net/wireless/mediatek/mt76/mt7603/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7603/mcu.c
index 6357b5658a32..343ddc5543c2 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7603/mcu.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7603/mcu.c
@@ -346,7 +346,7 @@ int mt7603_mcu_set_eeprom(struct mt7603_dev *dev)
};
struct req_data {
- u16 addr;
+ __le16 addr;
u8 val;
u8 pad;
} __packed;
diff --git a/drivers/net/wireless/mediatek/mt76/mt7603/mt7603.h b/drivers/net/wireless/mediatek/mt76/mt7603/mt7603.h
index fa64bbaab0d2..2c6f7b4cf0e9 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7603/mt7603.h
+++ b/drivers/net/wireless/mediatek/mt76/mt7603/mt7603.h
@@ -51,6 +51,11 @@ enum mt7603_bw {
MT_BW_80,
};
+struct mt7603_rate_set {
+ struct ieee80211_tx_rate probe_rate;
+ struct ieee80211_tx_rate rates[4];
+};
+
struct mt7603_sta {
struct mt76_wcid wcid; /* must be first */
@@ -58,7 +63,11 @@ struct mt7603_sta {
struct sk_buff_head psq;
- struct ieee80211_tx_rate rates[8];
+ struct ieee80211_tx_rate rates[4];
+
+ struct mt7603_rate_set rateset[2];
+ u32 rate_set_tsf;
+
u8 rate_count;
u8 n_rates;
@@ -117,8 +126,9 @@ struct mt7603_dev {
u8 mac_work_count;
u8 mcu_running;
- u8 ed_monitor;
+ u8 ed_monitor_enabled;
+ u8 ed_monitor;
s8 ed_trigger;
u8 ed_strict_mode;
u8 ed_strong_signal;
@@ -241,4 +251,5 @@ void mt7603_update_channel(struct mt76_dev *mdev);
void mt7603_edcca_set_strict(struct mt7603_dev *dev, bool val);
void mt7603_cca_stats_reset(struct mt7603_dev *dev);
+void mt7603_init_edcca(struct mt7603_dev *dev);
#endif
diff --git a/drivers/net/wireless/mediatek/mt76/mt7603/regs.h b/drivers/net/wireless/mediatek/mt76/mt7603/regs.h
index 9d257d5c309d..eb9eefe8e125 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7603/regs.h
+++ b/drivers/net/wireless/mediatek/mt76/mt7603/regs.h
@@ -480,6 +480,12 @@ enum {
#define MT_LPON_BASE 0x24000
#define MT_LPON(n) (MT_LPON_BASE + (n))
+#define MT_LPON_T0CR MT_LPON(0x010)
+#define MT_LPON_T0CR_MODE GENMASK(1, 0)
+
+#define MT_LPON_UTTR0 MT_LPON(0x018)
+#define MT_LPON_UTTR1 MT_LPON(0x01c)
+
#define MT_LPON_BTEIR MT_LPON(0x020)
#define MT_LPON_BTEIR_MBSS_MODE GENMASK(31, 29)
diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/dma.c b/drivers/net/wireless/mediatek/mt76/mt7615/dma.c
index 3ec6582afd8f..6a70273d4a69 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7615/dma.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7615/dma.c
@@ -93,18 +93,33 @@ void mt7615_queue_rx_skb(struct mt76_dev *mdev, enum mt76_rxq_id q,
static void mt7615_tx_tasklet(unsigned long data)
{
struct mt7615_dev *dev = (struct mt7615_dev *)data;
+
+ mt76_txq_schedule_all(&dev->mt76);
+}
+
+static int mt7615_poll_tx(struct napi_struct *napi, int budget)
+{
static const u8 queue_map[] = {
MT_TXQ_MCU,
MT_TXQ_BE
};
+ struct mt7615_dev *dev;
int i;
+ dev = container_of(napi, struct mt7615_dev, mt76.tx_napi);
+
for (i = 0; i < ARRAY_SIZE(queue_map); i++)
mt76_queue_tx_cleanup(dev, queue_map[i], false);
- mt76_txq_schedule_all(&dev->mt76);
+ if (napi_complete_done(napi, 0))
+ mt7615_irq_enable(dev, MT_INT_TX_DONE_ALL);
- mt7615_irq_enable(dev, MT_INT_TX_DONE_ALL);
+ for (i = 0; i < ARRAY_SIZE(queue_map); i++)
+ mt76_queue_tx_cleanup(dev, queue_map[i], false);
+
+ tasklet_schedule(&dev->mt76.tx_tasklet);
+
+ return 0;
}
int mt7615_dma_init(struct mt7615_dev *dev)
@@ -178,6 +193,10 @@ int mt7615_dma_init(struct mt7615_dev *dev)
if (ret < 0)
return ret;
+ netif_tx_napi_add(&dev->mt76.napi_dev, &dev->mt76.tx_napi,
+ mt7615_poll_tx, NAPI_POLL_WEIGHT);
+ napi_enable(&dev->mt76.tx_napi);
+
mt76_poll(dev, MT_WPDMA_GLO_CFG,
MT_WPDMA_GLO_CFG_TX_DMA_BUSY |
MT_WPDMA_GLO_CFG_RX_DMA_BUSY, 0, 1000);
diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/eeprom.c b/drivers/net/wireless/mediatek/mt76/mt7615/eeprom.c
index dd5ab46a4f66..dc94f52e6e8b 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7615/eeprom.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7615/eeprom.c
@@ -42,13 +42,13 @@ static int mt7615_efuse_read(struct mt7615_dev *dev, u32 base,
static int mt7615_efuse_init(struct mt7615_dev *dev)
{
- u32 base = mt7615_reg_map(dev, MT_EFUSE_BASE);
- int len = MT7615_EEPROM_SIZE;
- int ret, i;
+ u32 val, base = mt7615_reg_map(dev, MT_EFUSE_BASE);
+ int i, len = MT7615_EEPROM_SIZE;
void *buf;
- if (mt76_rr(dev, base + MT_EFUSE_BASE_CTRL) & MT_EFUSE_BASE_CTRL_EMPTY)
- return -EINVAL;
+ val = mt76_rr(dev, base + MT_EFUSE_BASE_CTRL);
+ if (val & MT_EFUSE_BASE_CTRL_EMPTY)
+ return 0;
dev->mt76.otp.data = devm_kzalloc(dev->mt76.dev, len, GFP_KERNEL);
dev->mt76.otp.size = len;
@@ -57,6 +57,8 @@ static int mt7615_efuse_init(struct mt7615_dev *dev)
buf = dev->mt76.otp.data;
for (i = 0; i + 16 <= len; i += 16) {
+ int ret;
+
ret = mt7615_efuse_read(dev, base, i, buf + i);
if (ret)
return ret;
@@ -76,6 +78,82 @@ static int mt7615_eeprom_load(struct mt7615_dev *dev)
return mt7615_efuse_init(dev);
}
+static int mt7615_check_eeprom(struct mt76_dev *dev)
+{
+ u16 val = get_unaligned_le16(dev->eeprom.data);
+
+ switch (val) {
+ case 0x7615:
+ return 0;
+ default:
+ return -EINVAL;
+ }
+}
+
+static void mt7615_eeprom_parse_hw_cap(struct mt7615_dev *dev)
+{
+ u8 val, *eeprom = dev->mt76.eeprom.data;
+
+ val = FIELD_GET(MT_EE_NIC_WIFI_CONF_BAND_SEL,
+ eeprom[MT_EE_WIFI_CONF]);
+ switch (val) {
+ case MT_EE_5GHZ:
+ dev->mt76.cap.has_5ghz = true;
+ break;
+ case MT_EE_2GHZ:
+ dev->mt76.cap.has_2ghz = true;
+ break;
+ default:
+ dev->mt76.cap.has_2ghz = true;
+ dev->mt76.cap.has_5ghz = true;
+ break;
+ }
+}
+
+int mt7615_eeprom_get_power_index(struct mt7615_dev *dev,
+ struct ieee80211_channel *chan,
+ u8 chain_idx)
+{
+ int index;
+
+ if (chain_idx > 3)
+ return -EINVAL;
+
+ /* TSSI disabled */
+ if (mt7615_ext_pa_enabled(dev, chan->band)) {
+ if (chan->band == NL80211_BAND_2GHZ)
+ return MT_EE_EXT_PA_2G_TARGET_POWER;
+ else
+ return MT_EE_EXT_PA_5G_TARGET_POWER;
+ }
+
+ /* TSSI enabled */
+ if (chan->band == NL80211_BAND_2GHZ) {
+ index = MT_EE_TX0_2G_TARGET_POWER + chain_idx * 6;
+ } else {
+ int group = mt7615_get_channel_group(chan->hw_value);
+
+ switch (chain_idx) {
+ case 1:
+ index = MT_EE_TX1_5G_G0_TARGET_POWER;
+ break;
+ case 2:
+ index = MT_EE_TX2_5G_G0_TARGET_POWER;
+ break;
+ case 3:
+ index = MT_EE_TX3_5G_G0_TARGET_POWER;
+ break;
+ case 0:
+ default:
+ index = MT_EE_TX0_5G_G0_TARGET_POWER;
+ break;
+ }
+ index += 5 * group;
+ }
+
+ return index;
+}
+
int mt7615_eeprom_init(struct mt7615_dev *dev)
{
int ret;
@@ -84,11 +162,12 @@ int mt7615_eeprom_init(struct mt7615_dev *dev)
if (ret < 0)
return ret;
- memcpy(dev->mt76.eeprom.data, dev->mt76.otp.data, MT7615_EEPROM_SIZE);
-
- dev->mt76.cap.has_2ghz = true;
- dev->mt76.cap.has_5ghz = true;
+ ret = mt7615_check_eeprom(&dev->mt76);
+ if (ret && dev->mt76.otp.data)
+ memcpy(dev->mt76.eeprom.data, dev->mt76.otp.data,
+ MT7615_EEPROM_SIZE);
+ mt7615_eeprom_parse_hw_cap(dev);
memcpy(dev->mt76.macaddr, dev->mt76.eeprom.data + MT_EE_MAC_ADDR,
ETH_ALEN);
diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/eeprom.h b/drivers/net/wireless/mediatek/mt76/mt7615/eeprom.h
index a4cf16688171..f4a4280768d2 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7615/eeprom.h
+++ b/drivers/net/wireless/mediatek/mt76/mt7615/eeprom.h
@@ -11,8 +11,69 @@ enum mt7615_eeprom_field {
MT_EE_VERSION = 0x002,
MT_EE_MAC_ADDR = 0x004,
MT_EE_NIC_CONF_0 = 0x034,
+ MT_EE_NIC_CONF_1 = 0x036,
+ MT_EE_WIFI_CONF = 0x03e,
+ MT_EE_TX0_2G_TARGET_POWER = 0x058,
+ MT_EE_TX0_5G_G0_TARGET_POWER = 0x070,
+ MT_EE_TX1_5G_G0_TARGET_POWER = 0x098,
+ MT_EE_EXT_PA_2G_TARGET_POWER = 0x0f2,
+ MT_EE_EXT_PA_5G_TARGET_POWER = 0x0f3,
+ MT_EE_TX2_5G_G0_TARGET_POWER = 0x142,
+ MT_EE_TX3_5G_G0_TARGET_POWER = 0x16a,
__MT_EE_MAX = 0x3bf
};
+#define MT_EE_NIC_CONF_TSSI_2G BIT(5)
+#define MT_EE_NIC_CONF_TSSI_5G BIT(6)
+
+#define MT_EE_NIC_WIFI_CONF_BAND_SEL GENMASK(5, 4)
+enum mt7615_eeprom_band {
+ MT_EE_DUAL_BAND,
+ MT_EE_5GHZ,
+ MT_EE_2GHZ,
+ MT_EE_DBDC,
+};
+
+enum mt7615_channel_group {
+ MT_CH_5G_JAPAN,
+ MT_CH_5G_UNII_1,
+ MT_CH_5G_UNII_2A,
+ MT_CH_5G_UNII_2B,
+ MT_CH_5G_UNII_2E_1,
+ MT_CH_5G_UNII_2E_2,
+ MT_CH_5G_UNII_2E_3,
+ MT_CH_5G_UNII_3,
+ __MT_CH_MAX
+};
+
+static inline enum mt7615_channel_group
+mt7615_get_channel_group(int channel)
+{
+ if (channel >= 184 && channel <= 196)
+ return MT_CH_5G_JAPAN;
+ if (channel <= 48)
+ return MT_CH_5G_UNII_1;
+ if (channel <= 64)
+ return MT_CH_5G_UNII_2A;
+ if (channel <= 114)
+ return MT_CH_5G_UNII_2E_1;
+ if (channel <= 144)
+ return MT_CH_5G_UNII_2E_2;
+ if (channel <= 161)
+ return MT_CH_5G_UNII_2E_3;
+ return MT_CH_5G_UNII_3;
+}
+
+static inline bool
+mt7615_ext_pa_enabled(struct mt7615_dev *dev, enum nl80211_band band)
+{
+ u8 *eep = dev->mt76.eeprom.data;
+
+ if (band == NL80211_BAND_5GHZ)
+ return !(eep[MT_EE_NIC_CONF_1 + 1] & MT_EE_NIC_CONF_TSSI_5G);
+ else
+ return !(eep[MT_EE_NIC_CONF_1 + 1] & MT_EE_NIC_CONF_TSSI_2G);
+}
+
#endif
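The mt7615 eeprom helpers above select the target-power byte per chain: boards with an external PA use the single MT_EE_EXT_PA_*_TARGET_POWER byte, while TSSI boards start from the per-chain 5 GHz base offset and advance it in 5-byte steps per channel group. A small usage sketch (the function name is invented; dev and chan come from the caller):

static u8 example_target_power(struct mt7615_dev *dev,
			       struct ieee80211_channel *chan, u8 chain)
{
	u8 *eep = dev->mt76.eeprom.data;
	int index = mt7615_eeprom_get_power_index(dev, chan, chain);

	if (index < 0)		/* chain index out of range */
		return 0;

	/* e.g. 5 GHz channel 36 is MT_CH_5G_UNII_1, so chain 0 with TSSI
	 * enabled reads MT_EE_TX0_5G_G0_TARGET_POWER + 5 * 1
	 */
	return eep[index];
}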
diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/init.c b/drivers/net/wireless/mediatek/mt76/mt7615/init.c
index 3ab3ff553ef2..859de2454ec6 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7615/init.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7615/init.c
@@ -9,6 +9,7 @@
#include <linux/etherdevice.h>
#include "mt7615.h"
#include "mac.h"
+#include "eeprom.h"
static void mt7615_phy_init(struct mt7615_dev *dev)
{
@@ -62,16 +63,11 @@ static void mt7615_mac_init(struct mt7615_dev *dev)
MT_AGG_ARCR_RATE_DOWN_RATIO_EN |
FIELD_PREP(MT_AGG_ARCR_RATE_DOWN_RATIO, 1) |
FIELD_PREP(MT_AGG_ARCR_RATE_UP_EXTRA_TH, 4)));
-
- dev->mt76.global_wcid.idx = MT7615_WTBL_RESERVED;
- dev->mt76.global_wcid.hw_key_idx = -1;
- rcu_assign_pointer(dev->mt76.wcid[MT7615_WTBL_RESERVED],
- &dev->mt76.global_wcid);
}
static int mt7615_init_hardware(struct mt7615_dev *dev)
{
- int ret;
+ int ret, idx;
mt76_wr(dev, MT_INT_SOURCE_CSR, ~0);
@@ -98,6 +94,15 @@ static int mt7615_init_hardware(struct mt7615_dev *dev)
mt7615_mcu_ctrl_pm_state(dev, 0);
mt7615_mcu_del_wtbl_all(dev);
+ /* Beacon and mgmt frames should occupy wcid 0 */
+ idx = mt76_wcid_alloc(dev->mt76.wcid_mask, MT7615_WTBL_STA - 1);
+ if (idx)
+ return -ENOSPC;
+
+ dev->mt76.global_wcid.idx = idx;
+ dev->mt76.global_wcid.hw_key_idx = -1;
+ rcu_assign_pointer(dev->mt76.wcid[idx], &dev->mt76.global_wcid);
+
return 0;
}
@@ -133,6 +138,9 @@ static const struct ieee80211_iface_limit if_limits[] = {
{
.max = MT7615_MAX_INTERFACES,
.types = BIT(NL80211_IFTYPE_AP) |
+#ifdef CONFIG_MAC80211_MESH
+ BIT(NL80211_IFTYPE_MESH_POINT) |
+#endif
BIT(NL80211_IFTYPE_STATION)
}
};
@@ -158,6 +166,48 @@ static int mt7615_init_debugfs(struct mt7615_dev *dev)
return 0;
}
+static void
+mt7615_init_txpower(struct mt7615_dev *dev,
+ struct ieee80211_supported_band *sband)
+{
+ int i, n_chains = hweight8(dev->mt76.antenna_mask), target_chains;
+ u8 *eep = (u8 *)dev->mt76.eeprom.data;
+ enum nl80211_band band = sband->band;
+
+ target_chains = mt7615_ext_pa_enabled(dev, band) ? 1 : n_chains;
+ for (i = 0; i < sband->n_channels; i++) {
+ struct ieee80211_channel *chan = &sband->channels[i];
+ u8 target_power = 0;
+ int j;
+
+ for (j = 0; j < target_chains; j++) {
+ int index;
+
+ index = mt7615_eeprom_get_power_index(dev, chan, j);
+ target_power = max(target_power, eep[index]);
+ }
+
+ target_power = DIV_ROUND_UP(target_power, 2);
+ switch (n_chains) {
+ case 4:
+ target_power += 6;
+ break;
+ case 3:
+ target_power += 4;
+ break;
+ case 2:
+ target_power += 3;
+ break;
+ default:
+ break;
+ }
+
+ chan->max_power = min_t(int, chan->max_reg_power,
+ target_power);
+ chan->orig_mpwr = target_power;
+ }
+}
+
int mt7615_register_device(struct mt7615_dev *dev)
{
struct ieee80211_hw *hw = mt76_hw(dev);
@@ -195,6 +245,9 @@ int mt7615_register_device(struct mt7615_dev *dev)
dev->mt76.antenna_mask = 0xf;
wiphy->interface_modes = BIT(NL80211_IFTYPE_STATION) |
+#ifdef CONFIG_MAC80211_MESH
+ BIT(NL80211_IFTYPE_MESH_POINT) |
+#endif
BIT(NL80211_IFTYPE_AP);
ret = mt76_register_device(&dev->mt76, true, mt7615_rates,
@@ -202,6 +255,9 @@ int mt7615_register_device(struct mt7615_dev *dev)
if (ret)
return ret;
+ mt7615_init_txpower(dev, &dev->mt76.sband_2g.sband);
+ mt7615_init_txpower(dev, &dev->mt76.sband_5g.sband);
+
hw->max_tx_fragments = MT_TXP_MAX_BUF_NUM;
return mt7615_init_debugfs(dev);
@@ -212,6 +268,10 @@ void mt7615_unregister_device(struct mt7615_dev *dev)
struct mt76_txwi_cache *txwi;
int id;
+ mt76_unregister_device(&dev->mt76);
+ mt7615_mcu_exit(dev);
+ mt7615_dma_cleanup(dev);
+
spin_lock_bh(&dev->token_lock);
idr_for_each_entry(&dev->token, txwi, id) {
mt7615_txp_skb_unmap(&dev->mt76, txwi);
@@ -221,9 +281,6 @@ void mt7615_unregister_device(struct mt7615_dev *dev)
}
spin_unlock_bh(&dev->token_lock);
idr_destroy(&dev->token);
- mt76_unregister_device(&dev->mt76);
- mt7615_mcu_exit(dev);
- mt7615_dma_cleanup(dev);
- ieee80211_free_hw(mt76_hw(dev));
+ mt76_free_device(&dev->mt76);
}
diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
index b8f48d10f27a..1eb0e9c9970c 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
@@ -13,6 +13,11 @@
#include "../dma.h"
#include "mac.h"
+static inline s8 to_rssi(u32 field, u32 rxv)
+{
+ return (FIELD_GET(field, rxv) - 220) / 2;
+}
+
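The to_rssi() helper above decodes the per-chain RCPI bytes out of RXV word 4: each field seems to be an unsigned value in 0.5 dB steps with a fixed offset of 220, so an RCPI of 160 corresponds to (160 - 220) / 2 = -30 dBm. The dBm interpretation is an assumption based on the formula; the field layout follows the MT_RXV4_RCPIn masks added in mac.h. A standalone sketch of the same decode, with a made-up RXV word:

#include <stdint.h>
#include <stdio.h>

/* Extract the n-th RCPI byte (bits 8n..8n+7) and apply the same
 * (rcpi - 220) / 2 conversion as to_rssi().
 */
static int8_t rcpi_to_dbm(uint32_t rxdg3, int chain)
{
	uint8_t rcpi = (rxdg3 >> (8 * chain)) & 0xff;

	return ((int)rcpi - 220) / 2;
}

int main(void)
{
	uint32_t rxdg3 = 0x8a9096a0;	/* hypothetical RXV word 4 */
	int i;

	for (i = 0; i < 4; i++)
		printf("chain %d: %d dBm\n", i, rcpi_to_dbm(rxdg3, i));
	return 0;
}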
static struct mt76_wcid *mt7615_rx_get_wcid(struct mt7615_dev *dev,
u8 idx, bool unicast)
{
@@ -36,54 +41,6 @@ static struct mt76_wcid *mt7615_rx_get_wcid(struct mt7615_dev *dev,
return &sta->vif->sta.wcid;
}
-static int mt7615_get_rate(struct mt7615_dev *dev,
- struct ieee80211_supported_band *sband,
- int idx, bool cck)
-{
- int offset = 0;
- int len = sband->n_bitrates;
- int i;
-
- if (cck) {
- if (sband == &dev->mt76.sband_5g.sband)
- return 0;
-
- idx &= ~BIT(2); /* short preamble */
- } else if (sband == &dev->mt76.sband_2g.sband) {
- offset = 4;
- }
-
- for (i = offset; i < len; i++) {
- if ((sband->bitrates[i].hw_value & GENMASK(7, 0)) == idx)
- return i;
- }
-
- return 0;
-}
-
-static void mt7615_insert_ccmp_hdr(struct sk_buff *skb, u8 key_id)
-{
- struct mt76_rx_status *status = (struct mt76_rx_status *)skb->cb;
- int hdr_len = ieee80211_get_hdrlen_from_skb(skb);
- u8 *pn = status->iv;
- u8 *hdr;
-
- __skb_push(skb, 8);
- memmove(skb->data, skb->data + 8, hdr_len);
- hdr = skb->data + hdr_len;
-
- hdr[0] = pn[5];
- hdr[1] = pn[4];
- hdr[2] = 0;
- hdr[3] = 0x20 | (key_id << 6);
- hdr[4] = pn[3];
- hdr[5] = pn[2];
- hdr[6] = pn[1];
- hdr[7] = pn[0];
-
- status->flag &= ~RX_FLAG_IV_STRIPPED;
-}
-
int mt7615_mac_fill_rx(struct mt7615_dev *dev, struct sk_buff *skb)
{
struct mt76_rx_status *status = (struct mt76_rx_status *)skb->cb;
@@ -96,6 +53,9 @@ int mt7615_mac_fill_rx(struct mt7615_dev *dev, struct sk_buff *skb)
bool unicast, remove_pad, insert_ccmp_hdr = false;
int i, idx;
+ if (!test_bit(MT76_STATE_RUNNING, &dev->mt76.state))
+ return -EINVAL;
+
memset(status, 0, sizeof(*status));
unicast = (rxd1 & MT_RXD1_NORMAL_ADDR_TYPE) == MT_RXD1_NORMAL_U2M;
@@ -165,6 +125,7 @@ int mt7615_mac_fill_rx(struct mt7615_dev *dev, struct sk_buff *skb)
if (rxd0 & MT_RXD0_NORMAL_GROUP_3) {
u32 rxdg0 = le32_to_cpu(rxd[0]);
u32 rxdg1 = le32_to_cpu(rxd[1]);
+ u32 rxdg3 = le32_to_cpu(rxd[3]);
u8 stbc = FIELD_GET(MT_RXV1_HT_STBC, rxdg0);
bool cck = false;
@@ -174,7 +135,7 @@ int mt7615_mac_fill_rx(struct mt7615_dev *dev, struct sk_buff *skb)
cck = true;
/* fall through */
case MT_PHY_TYPE_OFDM:
- i = mt7615_get_rate(dev, sband, i, cck);
+ i = mt76_get_rate(&dev->mt76, sband, i, cck);
break;
case MT_PHY_TYPE_HT_GF:
case MT_PHY_TYPE_HT:
@@ -214,7 +175,21 @@ int mt7615_mac_fill_rx(struct mt7615_dev *dev, struct sk_buff *skb)
status->enc_flags |= RX_ENC_FLAG_STBC_MASK * stbc;
- /* TODO: RSSI */
+ status->chains = dev->mt76.antenna_mask;
+ status->chain_signal[0] = to_rssi(MT_RXV4_RCPI0, rxdg3);
+ status->chain_signal[1] = to_rssi(MT_RXV4_RCPI1, rxdg3);
+ status->chain_signal[2] = to_rssi(MT_RXV4_RCPI2, rxdg3);
+ status->chain_signal[3] = to_rssi(MT_RXV4_RCPI3, rxdg3);
+ status->signal = status->chain_signal[0];
+
+ for (i = 1; i < hweight8(dev->mt76.antenna_mask); i++) {
+ if (!(status->chains & BIT(i)))
+ continue;
+
+ status->signal = max(status->signal,
+ status->chain_signal[i]);
+ }
+
rxd += 6;
if ((u8 *)rxd - skb->data >= skb->len)
return -EINVAL;
@@ -225,7 +200,7 @@ int mt7615_mac_fill_rx(struct mt7615_dev *dev, struct sk_buff *skb)
if (insert_ccmp_hdr) {
u8 key_id = FIELD_GET(MT_RXD1_NORMAL_KEY_ID, rxd1);
- mt7615_insert_ccmp_hdr(skb, key_id);
+ mt76_insert_ccmp_hdr(skb, key_id);
}
hdr = (struct ieee80211_hdr *)skb->data;
@@ -549,23 +524,20 @@ static bool mt7615_fill_txs(struct mt7615_dev *dev, struct mt7615_sta *sta,
{
struct ieee80211_supported_band *sband;
int i, idx, count, final_idx = 0;
- bool fixed_rate, final_mpdu, ack_timeout;
+ bool fixed_rate, ack_timeout;
bool probe, ampdu, cck = false;
u32 final_rate, final_rate_flags, final_nss, txs;
- u8 pid;
fixed_rate = info->status.rates[0].count;
probe = !!(info->flags & IEEE80211_TX_CTL_RATE_CTRL_PROBE);
txs = le32_to_cpu(txs_data[1]);
- final_mpdu = txs & MT_TXS1_ACKED_MPDU;
ampdu = !fixed_rate && (txs & MT_TXS1_AMPDU);
txs = le32_to_cpu(txs_data[3]);
count = FIELD_GET(MT_TXS3_TX_COUNT, txs);
txs = le32_to_cpu(txs_data[0]);
- pid = FIELD_GET(MT_TXS0_PID, txs);
final_rate = FIELD_GET(MT_TXS0_TX_RATE, txs);
ack_timeout = txs & MT_TXS0_ACK_TIMEOUT;
@@ -628,7 +600,8 @@ out:
else
sband = &dev->mt76.sband_2g.sband;
final_rate &= MT_TX_RATE_IDX;
- final_rate = mt7615_get_rate(dev, sband, final_rate, cck);
+ final_rate = mt76_get_rate(&dev->mt76, sband, final_rate,
+ cck);
final_rate_flags = 0;
break;
case MT_PHY_TYPE_HT_GF:
diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mac.h b/drivers/net/wireless/mediatek/mt76/mt7615/mac.h
index 18ad4b8a3807..b00ce8db58e9 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7615/mac.h
+++ b/drivers/net/wireless/mediatek/mt76/mt7615/mac.h
@@ -98,6 +98,11 @@ enum rx_pkt_type {
#define MT_RXV2_GROUP_ID GENMASK(26, 21)
#define MT_RXV2_LENGTH GENMASK(20, 0)
+#define MT_RXV4_RCPI3 GENMASK(31, 24)
+#define MT_RXV4_RCPI2 GENMASK(23, 16)
+#define MT_RXV4_RCPI1 GENMASK(15, 8)
+#define MT_RXV4_RCPI0 GENMASK(7, 0)
+
enum tx_header_format {
MT_HDR_FORMAT_802_3,
MT_HDR_FORMAT_CMD,
diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/main.c b/drivers/net/wireless/mediatek/mt76/mt7615/main.c
index 80e6b211f60b..b4d6af812c54 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7615/main.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7615/main.c
@@ -37,6 +37,7 @@ static int get_omac_idx(enum nl80211_iftype type, u32 mask)
switch (type) {
case NL80211_IFTYPE_AP:
+ case NL80211_IFTYPE_MESH_POINT:
/* ap use hw bssid 0 and ext bssid */
if (~mask & BIT(HW_BSSID_0))
return HW_BSSID_0;
@@ -77,11 +78,12 @@ static int mt7615_add_interface(struct ieee80211_hw *hw,
goto out;
}
- mvif->omac_idx = get_omac_idx(vif->type, dev->omac_mask);
- if (mvif->omac_idx < 0) {
+ idx = get_omac_idx(vif->type, dev->omac_mask);
+ if (idx < 0) {
ret = -ENOSPC;
goto out;
}
+ mvif->omac_idx = idx;
/* TODO: DBDC support. Use band 0 and wmm 0 for now */
mvif->band_idx = 0;
@@ -93,7 +95,7 @@ static int mt7615_add_interface(struct ieee80211_hw *hw,
dev->vif_mask |= BIT(mvif->idx);
dev->omac_mask |= BIT(mvif->omac_idx);
- idx = MT7615_WTBL_RESERVED - 1 - mvif->idx;
+ idx = MT7615_WTBL_RESERVED - mvif->idx;
mvif->sta.wcid.idx = idx;
mvif->sta.wcid.hw_key_idx = -1;
@@ -128,8 +130,7 @@ static void mt7615_remove_interface(struct ieee80211_hw *hw,
mutex_unlock(&dev->mt76.mutex);
}
-static int mt7615_set_channel(struct mt7615_dev *dev,
- struct cfg80211_chan_def *def)
+static int mt7615_set_channel(struct mt7615_dev *dev)
{
int ret;
@@ -190,28 +191,28 @@ static int mt7615_config(struct ieee80211_hw *hw, u32 changed)
struct mt7615_dev *dev = hw->priv;
int ret = 0;
- if (changed & IEEE80211_CONF_CHANGE_CHANNEL) {
- mutex_lock(&dev->mt76.mutex);
+ mutex_lock(&dev->mt76.mutex);
+ if (changed & IEEE80211_CONF_CHANGE_CHANNEL) {
ieee80211_stop_queues(hw);
- ret = mt7615_set_channel(dev, &hw->conf.chandef);
+ ret = mt7615_set_channel(dev);
ieee80211_wake_queues(hw);
-
- mutex_unlock(&dev->mt76.mutex);
}
- if (changed & IEEE80211_CONF_CHANGE_MONITOR) {
- mutex_lock(&dev->mt76.mutex);
+ if (changed & IEEE80211_CONF_CHANGE_POWER)
+ ret = mt7615_mcu_set_tx_power(dev);
+ if (changed & IEEE80211_CONF_CHANGE_MONITOR) {
if (!(hw->conf.flags & IEEE80211_CONF_MONITOR))
dev->mt76.rxfilter |= MT_WF_RFCR_DROP_OTHER_UC;
else
dev->mt76.rxfilter &= ~MT_WF_RFCR_DROP_OTHER_UC;
mt76_wr(dev, MT_WF_RFCR, dev->mt76.rxfilter);
-
- mutex_unlock(&dev->mt76.mutex);
}
+
+ mutex_unlock(&dev->mt76.mutex);
+
return ret;
}
@@ -281,26 +282,18 @@ static void mt7615_bss_info_changed(struct ieee80211_hw *hw,
mutex_lock(&dev->mt76.mutex);
- /* TODO: sta mode connect/disconnect
- * BSS_CHANGED_ASSOC | BSS_CHANGED_BSSID
- */
+ if (changed & BSS_CHANGED_ASSOC)
+ mt7615_mcu_set_bss_info(dev, vif, info->assoc);
/* TODO: update beacon content
* BSS_CHANGED_BEACON
*/
if (changed & BSS_CHANGED_BEACON_ENABLED) {
- if (info->enable_beacon) {
- mt7615_mcu_set_bss_info(dev, vif, 1);
- mt7615_mcu_add_wtbl_bmc(dev, vif);
- mt7615_mcu_set_sta_rec_bmc(dev, vif, 1);
- mt7615_mcu_set_bcn(dev, vif, 1);
- } else {
- mt7615_mcu_set_sta_rec_bmc(dev, vif, 0);
- mt7615_mcu_del_wtbl_bmc(dev, vif);
- mt7615_mcu_set_bss_info(dev, vif, 0);
- mt7615_mcu_set_bcn(dev, vif, 0);
- }
+ mt7615_mcu_set_bss_info(dev, vif, info->enable_beacon);
+ mt7615_mcu_wtbl_bmc(dev, vif, info->enable_beacon);
+ mt7615_mcu_set_sta_rec_bmc(dev, vif, info->enable_beacon);
+ mt7615_mcu_set_bcn(dev, vif, info->enable_beacon);
}
mutex_unlock(&dev->mt76.mutex);
@@ -343,7 +336,7 @@ void mt7615_sta_remove(struct mt76_dev *mdev, struct ieee80211_vif *vif,
struct mt7615_dev *dev = container_of(mdev, struct mt7615_dev, mt76);
mt7615_mcu_set_sta_rec(dev, vif, sta, 0);
- mt7615_mcu_del_wtbl(dev, vif, sta);
+ mt7615_mcu_del_wtbl(dev, sta);
}
static void mt7615_sta_rate_tbl_update(struct ieee80211_hw *hw,
@@ -496,4 +489,5 @@ const struct ieee80211_ops mt7615_ops = {
.sw_scan_start = mt7615_sw_scan,
.sw_scan_complete = mt7615_sw_scan_complete,
.release_buffered_frames = mt76_release_buffered_frames,
+ .get_txpower = mt76_get_txpower,
};
diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
index ea67c6022fe6..cdad2c8dc297 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
@@ -49,7 +49,7 @@ struct mt7615_fw_trailer {
#define FW_START_WORKING_PDA_CR4 BIT(2)
static int __mt7615_mcu_msg_send(struct mt7615_dev *dev, struct sk_buff *skb,
- int cmd, int query, int dest, int *wait_seq)
+ int cmd, int *wait_seq)
{
struct mt7615_mcu_txd *mcu_txd;
u8 seq, q_idx, pkt_fmt;
@@ -57,9 +57,6 @@ static int __mt7615_mcu_msg_send(struct mt7615_dev *dev, struct sk_buff *skb,
u32 val;
__le32 *txd;
- if (!skb)
- return -EINVAL;
-
seq = ++dev->mt76.mmio.mcu.msg_seq & 0xf;
if (!seq)
seq = ++dev->mt76.mmio.mcu.msg_seq & 0xf;
@@ -94,16 +91,15 @@ static int __mt7615_mcu_msg_send(struct mt7615_dev *dev, struct sk_buff *skb,
mcu_txd->seq = seq;
if (cmd < 0) {
+ mcu_txd->set_query = MCU_Q_NA;
mcu_txd->cid = -cmd;
} else {
mcu_txd->cid = MCU_CMD_EXT_CID;
+ mcu_txd->set_query = MCU_Q_SET;
mcu_txd->ext_cid = cmd;
- if (query != MCU_Q_NA)
- mcu_txd->ext_cid_ack = 1;
+ mcu_txd->ext_cid_ack = 1;
}
-
- mcu_txd->set_query = query;
- mcu_txd->s2d_index = dest;
+ mcu_txd->s2d_index = MCU_S2D_H2N;
if (wait_seq)
*wait_seq = seq;
@@ -116,24 +112,30 @@ static int __mt7615_mcu_msg_send(struct mt7615_dev *dev, struct sk_buff *skb,
return mt76_tx_queue_skb_raw(dev, qid, skb, 0);
}
-static int mt7615_mcu_msg_send(struct mt7615_dev *dev, struct sk_buff *skb,
- int cmd, int query, int dest,
- struct sk_buff **skb_ret)
+static int
+mt7615_mcu_msg_send(struct mt76_dev *mdev, int cmd, const void *data,
+ int len, bool wait_resp)
{
+ struct mt7615_dev *dev = container_of(mdev, struct mt7615_dev, mt76);
unsigned long expires = jiffies + 10 * HZ;
struct mt7615_mcu_rxd *rxd;
+ struct sk_buff *skb;
int ret, seq;
- mutex_lock(&dev->mt76.mmio.mcu.mutex);
+ skb = mt7615_mcu_msg_alloc(data, len);
+ if (!skb)
+ return -ENOMEM;
- ret = __mt7615_mcu_msg_send(dev, skb, cmd, query, dest, &seq);
+ mutex_lock(&mdev->mmio.mcu.mutex);
+
+ ret = __mt7615_mcu_msg_send(dev, skb, cmd, &seq);
if (ret)
goto out;
- while (1) {
- skb = mt76_mcu_get_response(&dev->mt76, expires);
+ while (wait_resp) {
+ skb = mt76_mcu_get_response(mdev, expires);
if (!skb) {
- dev_err(dev->mt76.dev, "Message %d (seq %d) timeout\n",
+ dev_err(mdev->dev, "Message %d (seq %d) timeout\n",
cmd, seq);
ret = -ETIMEDOUT;
break;
@@ -143,23 +145,16 @@ static int mt7615_mcu_msg_send(struct mt7615_dev *dev, struct sk_buff *skb,
if (seq != rxd->seq)
continue;
- if (skb_ret) {
- int hdr_len = sizeof(*rxd);
-
- if (!test_bit(MT76_STATE_MCU_RUNNING,
- &dev->mt76.state))
- hdr_len -= 4;
- skb_pull(skb, hdr_len);
- *skb_ret = skb;
- } else {
- dev_kfree_skb(skb);
+ if (cmd == -MCU_CMD_PATCH_SEM_CONTROL) {
+ skb_pull(skb, sizeof(*rxd) - 4);
+ ret = *skb->data;
}
-
+ dev_kfree_skb(skb);
break;
}
out:
- mutex_unlock(&dev->mt76.mmio.mcu.mutex);
+ mutex_unlock(&mdev->mmio.mcu.mutex);
return ret;
}
@@ -176,28 +171,22 @@ static int mt7615_mcu_init_download(struct mt7615_dev *dev, u32 addr,
.len = cpu_to_le32(len),
.mode = cpu_to_le32(mode),
};
- struct sk_buff *skb = mt7615_mcu_msg_alloc(&req, sizeof(req));
- return mt7615_mcu_msg_send(dev, skb, -MCU_CMD_TARGET_ADDRESS_LEN_REQ,
- MCU_Q_NA, MCU_S2D_H2N, NULL);
+ return __mt76_mcu_send_msg(&dev->mt76, -MCU_CMD_TARGET_ADDRESS_LEN_REQ,
+ &req, sizeof(req), true);
}
static int mt7615_mcu_send_firmware(struct mt7615_dev *dev, const void *data,
int len)
{
- struct sk_buff *skb;
- int ret = 0;
+ int ret = 0, cur_len;
while (len > 0) {
- int cur_len = min_t(int, 4096 - sizeof(struct mt7615_mcu_txd),
- len);
-
- skb = mt7615_mcu_msg_alloc(data, cur_len);
- if (!skb)
- return -ENOMEM;
+ cur_len = min_t(int, 4096 - sizeof(struct mt7615_mcu_txd),
+ len);
- ret = __mt7615_mcu_msg_send(dev, skb, -MCU_CMD_FW_SCATTER,
- MCU_Q_NA, MCU_S2D_H2N, NULL);
+ ret = __mt76_mcu_send_msg(&dev->mt76, -MCU_CMD_FW_SCATTER,
+ data, cur_len, false);
if (ret)
break;
@@ -218,47 +207,27 @@ static int mt7615_mcu_start_firmware(struct mt7615_dev *dev, u32 addr,
.option = cpu_to_le32(option),
.addr = cpu_to_le32(addr),
};
- struct sk_buff *skb = mt7615_mcu_msg_alloc(&req, sizeof(req));
- return mt7615_mcu_msg_send(dev, skb, -MCU_CMD_FW_START_REQ,
- MCU_Q_NA, MCU_S2D_H2N, NULL);
+ return __mt76_mcu_send_msg(&dev->mt76, -MCU_CMD_FW_START_REQ,
+ &req, sizeof(req), true);
}
-static int mt7615_mcu_restart(struct mt7615_dev *dev)
+static int mt7615_mcu_restart(struct mt76_dev *dev)
{
- struct sk_buff *skb = mt7615_mcu_msg_alloc(NULL, 0);
-
- return mt7615_mcu_msg_send(dev, skb, -MCU_CMD_RESTART_DL_REQ,
- MCU_Q_NA, MCU_S2D_H2N, NULL);
+ return __mt76_mcu_send_msg(dev, -MCU_CMD_RESTART_DL_REQ, NULL,
+ 0, true);
}
static int mt7615_mcu_patch_sem_ctrl(struct mt7615_dev *dev, bool get)
{
struct {
- __le32 operation;
+ __le32 op;
} req = {
- .operation = cpu_to_le32(get ? PATCH_SEM_GET :
- PATCH_SEM_RELEASE),
+ .op = cpu_to_le32(get ? PATCH_SEM_GET : PATCH_SEM_RELEASE),
};
- struct event {
- u8 status;
- u8 reserved[3];
- } *resp;
- struct sk_buff *skb = mt7615_mcu_msg_alloc(&req, sizeof(req));
- struct sk_buff *skb_ret;
- int ret;
- ret = mt7615_mcu_msg_send(dev, skb, -MCU_CMD_PATCH_SEM_CONTROL,
- MCU_Q_NA, MCU_S2D_H2N, &skb_ret);
- if (ret)
- goto out;
-
- resp = (struct event *)(skb_ret->data);
- ret = resp->status;
- dev_kfree_skb(skb_ret);
-
-out:
- return ret;
+ return __mt76_mcu_send_msg(&dev->mt76, -MCU_CMD_PATCH_SEM_CONTROL,
+ &req, sizeof(req), true);
}
static int mt7615_mcu_start_patch(struct mt7615_dev *dev)
@@ -269,10 +238,9 @@ static int mt7615_mcu_start_patch(struct mt7615_dev *dev)
} req = {
.check_crc = 0,
};
- struct sk_buff *skb = mt7615_mcu_msg_alloc(&req, sizeof(req));
- return mt7615_mcu_msg_send(dev, skb, -MCU_CMD_PATCH_FINISH_REQ,
- MCU_Q_NA, MCU_S2D_H2N, NULL);
+ return __mt76_mcu_send_msg(&dev->mt76, -MCU_CMD_PATCH_FINISH_REQ,
+ &req, sizeof(req), true);
}
static int mt7615_driver_own(struct mt7615_dev *dev)
@@ -508,8 +476,14 @@ static int mt7615_load_firmware(struct mt7615_dev *dev)
int mt7615_mcu_init(struct mt7615_dev *dev)
{
+ static const struct mt76_mcu_ops mt7615_mcu_ops = {
+ .mcu_send_msg = mt7615_mcu_msg_send,
+ .mcu_restart = mt7615_mcu_restart,
+ };
int ret;
+ dev->mt76.mcu_ops = &mt7615_mcu_ops;
+
ret = mt7615_driver_own(dev);
if (ret)
return ret;
@@ -525,16 +499,13 @@ int mt7615_mcu_init(struct mt7615_dev *dev)
void mt7615_mcu_exit(struct mt7615_dev *dev)
{
- mt7615_mcu_restart(dev);
+ __mt76_mcu_restart(&dev->mt76);
mt76_wr(dev, MT_CFG_LPCR_HOST, MT_CFG_LPCR_HOST_FW_OWN);
skb_queue_purge(&dev->mt76.mmio.mcu.res_q);
}
int mt7615_mcu_set_eeprom(struct mt7615_dev *dev)
{
- struct req_data {
- u8 val;
- } __packed;
struct {
u8 buffer_mode;
u8 pad;
@@ -543,23 +514,22 @@ int mt7615_mcu_set_eeprom(struct mt7615_dev *dev)
.buffer_mode = 1,
.len = __MT_EE_MAX - MT_EE_NIC_CONF_0,
};
- struct sk_buff *skb;
- struct req_data *data;
- const int size = (__MT_EE_MAX - MT_EE_NIC_CONF_0) *
- sizeof(struct req_data);
- u8 *eep = (u8 *)dev->mt76.eeprom.data;
- u16 off;
-
- skb = mt7615_mcu_msg_alloc(NULL, size + sizeof(req_hdr));
- memcpy(skb_put(skb, sizeof(req_hdr)), &req_hdr, sizeof(req_hdr));
- data = (struct req_data *)skb_put(skb, size);
- memset(data, 0, size);
-
- for (off = MT_EE_NIC_CONF_0; off < __MT_EE_MAX; off++)
- data[off - MT_EE_NIC_CONF_0].val = eep[off];
-
- return mt7615_mcu_msg_send(dev, skb, MCU_EXT_CMD_EFUSE_BUFFER_MODE,
- MCU_Q_SET, MCU_S2D_H2N, NULL);
+ int ret, len = sizeof(req_hdr) + __MT_EE_MAX - MT_EE_NIC_CONF_0;
+ u8 *req, *eep = (u8 *)dev->mt76.eeprom.data;
+
+ req = kzalloc(len, GFP_KERNEL);
+ if (!req)
+ return -ENOMEM;
+
+ memcpy(req, &req_hdr, sizeof(req_hdr));
+ memcpy(req + sizeof(req_hdr), eep + MT_EE_NIC_CONF_0,
+ __MT_EE_MAX - MT_EE_NIC_CONF_0);
+
+ ret = __mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_EFUSE_BUFFER_MODE,
+ req, len, true);
+ kfree(req);
+
+ return ret;
}
int mt7615_mcu_init_mac(struct mt7615_dev *dev)
@@ -572,10 +542,9 @@ int mt7615_mcu_init_mac(struct mt7615_dev *dev)
.enable = 1,
.band = 0,
};
- struct sk_buff *skb = mt7615_mcu_msg_alloc(&req, sizeof(req));
- return mt7615_mcu_msg_send(dev, skb, MCU_EXT_CMD_MAC_INIT_CTRL,
- MCU_Q_SET, MCU_S2D_H2N, NULL);
+ return __mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_MAC_INIT_CTRL,
+ &req, sizeof(req), true);
}
int mt7615_mcu_set_rts_thresh(struct mt7615_dev *dev, u32 val)
@@ -592,10 +561,9 @@ int mt7615_mcu_set_rts_thresh(struct mt7615_dev *dev, u32 val)
.len_thresh = cpu_to_le32(val),
.pkt_thresh = cpu_to_le32(0x2),
};
- struct sk_buff *skb = mt7615_mcu_msg_alloc(&req, sizeof(req));
- return mt7615_mcu_msg_send(dev, skb, MCU_EXT_CMD_PROTECT_CTRL,
- MCU_Q_SET, MCU_S2D_H2N, NULL);
+ return __mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_PROTECT_CTRL,
+ &req, sizeof(req), true);
}
int mt7615_mcu_set_wmm(struct mt7615_dev *dev, u8 queue,
@@ -621,7 +589,6 @@ int mt7615_mcu_set_wmm(struct mt7615_dev *dev, u8 queue,
.aifs = params->aifs,
.txop = cpu_to_le16(params->txop),
};
- struct sk_buff *skb;
if (params->cw_min) {
req.valid |= WMM_CW_MIN_SET;
@@ -632,9 +599,8 @@ int mt7615_mcu_set_wmm(struct mt7615_dev *dev, u8 queue,
req.cw_max = cpu_to_le16(params->cw_max);
}
- skb = mt7615_mcu_msg_alloc(&req, sizeof(req));
- return mt7615_mcu_msg_send(dev, skb, MCU_EXT_CMD_EDCA_UPDATE,
- MCU_Q_SET, MCU_S2D_H2N, NULL);
+ return __mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_EDCA_UPDATE,
+ &req, sizeof(req), true);
}
int mt7615_mcu_ctrl_pm_state(struct mt7615_dev *dev, int enter)
@@ -662,300 +628,200 @@ int mt7615_mcu_ctrl_pm_state(struct mt7615_dev *dev, int enter)
.pm_state = (enter) ? ENTER_PM_STATE : EXIT_PM_STATE,
.band_idx = 0,
};
- struct sk_buff *skb = mt7615_mcu_msg_alloc(&req, sizeof(req));
-
- return mt7615_mcu_msg_send(dev, skb, MCU_EXT_CMD_PM_STATE_CTRL,
- MCU_Q_SET, MCU_S2D_H2N, NULL);
-}
-
-static int __mt7615_mcu_set_dev_info(struct mt7615_dev *dev,
- struct dev_info *dev_info)
-{
- struct req_hdr {
- u8 omac_idx;
- u8 band_idx;
- __le16 tlv_num;
- u8 is_tlv_append;
- u8 rsv[3];
- } __packed req_hdr = {0};
- struct req_tlv {
- __le16 tag;
- __le16 len;
- u8 active;
- u8 band_idx;
- u8 omac_addr[ETH_ALEN];
- } __packed;
- struct sk_buff *skb;
- u16 tlv_num = 0;
-
- skb = mt7615_mcu_msg_alloc(NULL, sizeof(req_hdr) +
- sizeof(struct req_tlv));
- skb_reserve(skb, sizeof(req_hdr));
-
- if (dev_info->feature & BIT(DEV_INFO_ACTIVE)) {
- struct req_tlv req_tlv = {
- .tag = cpu_to_le16(DEV_INFO_ACTIVE),
- .len = cpu_to_le16(sizeof(req_tlv)),
- .active = dev_info->enable,
- .band_idx = dev_info->band_idx,
- };
- memcpy(req_tlv.omac_addr, dev_info->omac_addr, ETH_ALEN);
- memcpy(skb_put(skb, sizeof(req_tlv)), &req_tlv,
- sizeof(req_tlv));
- tlv_num++;
- }
-
- req_hdr.omac_idx = dev_info->omac_idx;
- req_hdr.band_idx = dev_info->band_idx;
- req_hdr.tlv_num = cpu_to_le16(tlv_num);
- req_hdr.is_tlv_append = tlv_num ? 1 : 0;
- memcpy(skb_push(skb, sizeof(req_hdr)), &req_hdr, sizeof(req_hdr));
-
- return mt7615_mcu_msg_send(dev, skb, MCU_EXT_CMD_DEV_INFO_UPDATE,
- MCU_Q_SET, MCU_S2D_H2N, NULL);
+ return __mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_PM_STATE_CTRL,
+ &req, sizeof(req), true);
}
-int mt7615_mcu_set_dev_info(struct mt7615_dev *dev, struct ieee80211_vif *vif,
- int en)
+int mt7615_mcu_set_dev_info(struct mt7615_dev *dev,
+ struct ieee80211_vif *vif, bool enable)
{
struct mt7615_vif *mvif = (struct mt7615_vif *)vif->drv_priv;
- struct dev_info dev_info = {0};
-
- dev_info.omac_idx = mvif->omac_idx;
- memcpy(dev_info.omac_addr, vif->addr, ETH_ALEN);
- dev_info.band_idx = mvif->band_idx;
- dev_info.enable = en;
- dev_info.feature = BIT(DEV_INFO_ACTIVE);
+ struct {
+ struct req_hdr {
+ u8 omac_idx;
+ u8 band_idx;
+ __le16 tlv_num;
+ u8 is_tlv_append;
+ u8 rsv[3];
+ } __packed hdr;
+ struct req_tlv {
+ __le16 tag;
+ __le16 len;
+ u8 active;
+ u8 band_idx;
+ u8 omac_addr[ETH_ALEN];
+ } __packed tlv;
+ } data = {
+ .hdr = {
+ .omac_idx = mvif->omac_idx,
+ .band_idx = mvif->band_idx,
+ .tlv_num = cpu_to_le16(1),
+ .is_tlv_append = 1,
+ },
+ .tlv = {
+ .tag = cpu_to_le16(DEV_INFO_ACTIVE),
+ .len = cpu_to_le16(sizeof(struct req_tlv)),
+ .active = enable,
+ .band_idx = mvif->band_idx,
+ },
+ };
- return __mt7615_mcu_set_dev_info(dev, &dev_info);
+ memcpy(data.tlv.omac_addr, vif->addr, ETH_ALEN);
+ return __mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_DEV_INFO_UPDATE,
+ &data, sizeof(data), true);
}
-static void bss_info_omac_handler (struct mt7615_dev *dev,
- struct bss_info *bss_info,
- struct sk_buff *skb)
+static void
+mt7615_mcu_bss_info_omac_header(struct mt7615_vif *mvif, u8 *data,
+ u32 conn_type)
{
- struct bss_info_omac tlv = {0};
-
- tlv.tag = cpu_to_le16(BSS_INFO_OMAC);
- tlv.len = cpu_to_le16(sizeof(tlv));
- tlv.hw_bss_idx = (bss_info->omac_idx > EXT_BSSID_START) ?
- HW_BSSID_0 : bss_info->omac_idx;
- tlv.omac_idx = bss_info->omac_idx;
- tlv.band_idx = bss_info->band_idx;
- tlv.conn_type = cpu_to_le32(bss_info->conn_type);
-
- memcpy(skb_put(skb, sizeof(tlv)), &tlv, sizeof(tlv));
+ struct bss_info_omac *hdr = (struct bss_info_omac *)data;
+ u8 idx;
+
+ idx = mvif->omac_idx > EXT_BSSID_START ? HW_BSSID_0 : mvif->omac_idx;
+ hdr->tag = cpu_to_le16(BSS_INFO_OMAC);
+ hdr->len = cpu_to_le16(sizeof(struct bss_info_omac));
+ hdr->hw_bss_idx = idx;
+ hdr->omac_idx = mvif->omac_idx;
+ hdr->band_idx = mvif->band_idx;
+ hdr->conn_type = cpu_to_le32(conn_type);
}
-static void bss_info_basic_handler (struct mt7615_dev *dev,
- struct bss_info *bss_info,
- struct sk_buff *skb)
+static void
+mt7615_mcu_bss_info_basic_header(struct ieee80211_vif *vif, u8 *data,
+ u32 net_type, u8 tx_wlan_idx,
+ bool enable)
{
- struct bss_info_basic tlv = {0};
-
- tlv.tag = cpu_to_le16(BSS_INFO_BASIC);
- tlv.len = cpu_to_le16(sizeof(tlv));
- tlv.network_type = cpu_to_le32(bss_info->network_type);
- tlv.active = bss_info->enable;
- tlv.bcn_interval = cpu_to_le16(bss_info->bcn_interval);
- memcpy(tlv.bssid, bss_info->bssid, ETH_ALEN);
- tlv.wmm_idx = bss_info->wmm_idx;
- tlv.dtim_period = bss_info->dtim_period;
- tlv.bmc_tx_wlan_idx = bss_info->bmc_tx_wlan_idx;
-
- memcpy(skb_put(skb, sizeof(tlv)), &tlv, sizeof(tlv));
+ struct mt7615_vif *mvif = (struct mt7615_vif *)vif->drv_priv;
+ struct bss_info_basic *hdr = (struct bss_info_basic *)data;
+
+ hdr->tag = cpu_to_le16(BSS_INFO_BASIC);
+ hdr->len = cpu_to_le16(sizeof(struct bss_info_basic));
+ hdr->network_type = cpu_to_le32(net_type);
+ hdr->active = enable;
+ hdr->bcn_interval = cpu_to_le16(vif->bss_conf.beacon_int);
+ memcpy(hdr->bssid, vif->bss_conf.bssid, ETH_ALEN);
+ hdr->wmm_idx = mvif->wmm_idx;
+ hdr->dtim_period = vif->bss_conf.dtim_period;
+ hdr->bmc_tx_wlan_idx = tx_wlan_idx;
}
-static void bss_info_ext_bss_handler (struct mt7615_dev *dev,
- struct bss_info *bss_info,
- struct sk_buff *skb)
+static void
+mt7615_mcu_bss_info_ext_header(struct mt7615_vif *mvif, u8 *data)
{
/* SIFS 20us + 512 byte beacon transmitted at 1Mbps (3906us) */
#define BCN_TX_ESTIMATE_TIME (4096 + 20)
- struct bss_info_ext_bss tlv = {0};
- int ext_bss_idx;
-
- ext_bss_idx = bss_info->omac_idx - EXT_BSSID_START;
+ struct bss_info_ext_bss *hdr = (struct bss_info_ext_bss *)data;
+ int ext_bss_idx, tsf_offset;
+ ext_bss_idx = mvif->omac_idx - EXT_BSSID_START;
if (ext_bss_idx < 0)
return;
- tlv.tag = cpu_to_le16(BSS_INFO_EXT_BSS);
- tlv.len = cpu_to_le16(sizeof(tlv));
- tlv.mbss_tsf_offset = ext_bss_idx * BCN_TX_ESTIMATE_TIME;
-
- memcpy(skb_put(skb, sizeof(tlv)), &tlv, sizeof(tlv));
+ hdr->tag = cpu_to_le16(BSS_INFO_EXT_BSS);
+ hdr->len = cpu_to_le16(sizeof(struct bss_info_ext_bss));
+ tsf_offset = ext_bss_idx * BCN_TX_ESTIMATE_TIME;
+ hdr->mbss_tsf_offset = cpu_to_le32(tsf_offset);
}
-static struct bss_info_tag_handler bss_info_tag_handler[] = {
- {BSS_INFO_OMAC, sizeof(struct bss_info_omac), bss_info_omac_handler},
- {BSS_INFO_BASIC, sizeof(struct bss_info_basic), bss_info_basic_handler},
- {BSS_INFO_RF_CH, sizeof(struct bss_info_rf_ch), NULL},
- {BSS_INFO_PM, 0, NULL},
- {BSS_INFO_UAPSD, 0, NULL},
- {BSS_INFO_ROAM_DETECTION, 0, NULL},
- {BSS_INFO_LQ_RM, 0, NULL},
- {BSS_INFO_EXT_BSS, sizeof(struct bss_info_ext_bss), bss_info_ext_bss_handler},
- {BSS_INFO_BMC_INFO, 0, NULL},
- {BSS_INFO_SYNC_MODE, 0, NULL},
- {BSS_INFO_RA, 0, NULL},
- {BSS_INFO_MAX_NUM, 0, NULL},
-};
-
-static int __mt7615_mcu_set_bss_info(struct mt7615_dev *dev,
- struct bss_info *bss_info)
+int mt7615_mcu_set_bss_info(struct mt7615_dev *dev,
+ struct ieee80211_vif *vif, int en)
{
+ struct mt7615_vif *mvif = (struct mt7615_vif *)vif->drv_priv;
struct req_hdr {
u8 bss_idx;
u8 rsv0;
__le16 tlv_num;
u8 is_tlv_append;
u8 rsv1[3];
- } __packed req_hdr = {0};
- struct sk_buff *skb;
- u16 tlv_num = 0;
- u32 size = 0;
- int i;
+ } __packed;
+ int len = sizeof(struct req_hdr) + sizeof(struct bss_info_basic);
+ int ret, i, features = BIT(BSS_INFO_BASIC), ntlv = 1;
+ u32 conn_type = 0, net_type = NETWORK_INFRA;
+ u8 *buf, *data, tx_wlan_idx = 0;
+ struct req_hdr *hdr;
- for (i = 0; i < BSS_INFO_MAX_NUM; i++)
- if ((BIT(bss_info_tag_handler[i].tag) & bss_info->feature) &&
- bss_info_tag_handler[i].handler) {
- tlv_num++;
- size += bss_info_tag_handler[i].len;
+ if (en) {
+ len += sizeof(struct bss_info_omac);
+ features |= BIT(BSS_INFO_OMAC);
+ if (mvif->omac_idx > EXT_BSSID_START) {
+ len += sizeof(struct bss_info_ext_bss);
+ features |= BIT(BSS_INFO_EXT_BSS);
+ ntlv++;
}
+ ntlv++;
+ }
- skb = mt7615_mcu_msg_alloc(NULL, sizeof(req_hdr) + size);
-
- req_hdr.bss_idx = bss_info->bss_idx;
- req_hdr.tlv_num = cpu_to_le16(tlv_num);
- req_hdr.is_tlv_append = tlv_num ? 1 : 0;
-
- memcpy(skb_put(skb, sizeof(req_hdr)), &req_hdr, sizeof(req_hdr));
-
- for (i = 0; i < BSS_INFO_MAX_NUM; i++)
- if ((BIT(bss_info_tag_handler[i].tag) & bss_info->feature) &&
- bss_info_tag_handler[i].handler)
- bss_info_tag_handler[i].handler(dev, bss_info, skb);
-
- return mt7615_mcu_msg_send(dev, skb, MCU_EXT_CMD_BSS_INFO_UPDATE,
- MCU_Q_SET, MCU_S2D_H2N, NULL);
-}
-
-static void bss_info_convert_vif_type(enum nl80211_iftype type,
- u32 *network_type, u32 *conn_type)
-{
- switch (type) {
+ switch (vif->type) {
case NL80211_IFTYPE_AP:
- if (network_type)
- *network_type = NETWORK_INFRA;
- if (conn_type)
- *conn_type = CONNECTION_INFRA_AP;
+ case NL80211_IFTYPE_MESH_POINT:
+ tx_wlan_idx = mvif->sta.wcid.idx;
+ conn_type = CONNECTION_INFRA_AP;
break;
- case NL80211_IFTYPE_STATION:
- if (network_type)
- *network_type = NETWORK_INFRA;
- if (conn_type)
- *conn_type = CONNECTION_INFRA_STA;
+ case NL80211_IFTYPE_STATION: {
+ /* TODO: enable BSS_INFO_UAPSD & BSS_INFO_PM */
+ if (en) {
+ struct ieee80211_sta *sta;
+ struct mt7615_sta *msta;
+
+ rcu_read_lock();
+ sta = ieee80211_find_sta(vif, vif->bss_conf.bssid);
+ if (!sta) {
+ rcu_read_unlock();
+ return -EINVAL;
+ }
+
+ msta = (struct mt7615_sta *)sta->drv_priv;
+ tx_wlan_idx = msta->wcid.idx;
+ rcu_read_unlock();
+ }
+ conn_type = CONNECTION_INFRA_STA;
break;
+ }
default:
WARN_ON(1);
break;
- };
-}
-
-int mt7615_mcu_set_bss_info(struct mt7615_dev *dev, struct ieee80211_vif *vif,
- int en)
-{
- struct mt7615_vif *mvif = (struct mt7615_vif *)vif->drv_priv;
- struct bss_info bss_info = {0};
- u8 bmc_tx_wlan_idx = 0;
- u32 network_type = 0, conn_type = 0;
-
- if (vif->type == NL80211_IFTYPE_AP) {
- bmc_tx_wlan_idx = mvif->sta.wcid.idx;
- } else if (vif->type == NL80211_IFTYPE_STATION) {
- /* find the unicast entry for sta mode bmc tx */
- struct ieee80211_sta *ap_sta;
- struct mt7615_sta *msta;
-
- rcu_read_lock();
-
- ap_sta = ieee80211_find_sta(vif, vif->bss_conf.bssid);
- if (!ap_sta) {
- rcu_read_unlock();
- return -EINVAL;
- }
-
- msta = (struct mt7615_sta *)ap_sta->drv_priv;
- bmc_tx_wlan_idx = msta->wcid.idx;
-
- rcu_read_unlock();
- } else {
- WARN_ON(1);
}
- bss_info_convert_vif_type(vif->type, &network_type, &conn_type);
-
- bss_info.bss_idx = mvif->idx;
- memcpy(bss_info.bssid, vif->bss_conf.bssid, ETH_ALEN);
- bss_info.omac_idx = mvif->omac_idx;
- bss_info.band_idx = mvif->band_idx;
- bss_info.bmc_tx_wlan_idx = bmc_tx_wlan_idx;
- bss_info.wmm_idx = mvif->wmm_idx;
- bss_info.network_type = network_type;
- bss_info.conn_type = conn_type;
- bss_info.bcn_interval = vif->bss_conf.beacon_int;
- bss_info.dtim_period = vif->bss_conf.dtim_period;
- bss_info.enable = en;
- bss_info.feature = BIT(BSS_INFO_BASIC);
- if (en) {
- bss_info.feature |= BIT(BSS_INFO_OMAC);
- if (mvif->omac_idx > EXT_BSSID_START)
- bss_info.feature |= BIT(BSS_INFO_EXT_BSS);
- }
-
- return __mt7615_mcu_set_bss_info(dev, &bss_info);
-}
+ buf = kzalloc(len, GFP_KERNEL);
+ if (!buf)
+ return -ENOMEM;
-static int __mt7615_mcu_set_wtbl(struct mt7615_dev *dev, int wlan_idx,
- int operation, void *buf, int buf_len)
-{
- struct req_hdr {
- u8 wlan_idx;
- u8 operation;
- __le16 tlv_num;
- u8 rsv[4];
- } __packed req_hdr = {0};
- struct tlv {
- __le16 tag;
- __le16 len;
- u8 buf[0];
- } __packed;
- struct sk_buff *skb;
- u16 tlv_num = 0;
- int offset = 0;
+ hdr = (struct req_hdr *)buf;
+ hdr->bss_idx = mvif->idx;
+ hdr->tlv_num = cpu_to_le16(ntlv);
+ hdr->is_tlv_append = 1;
- while (offset < buf_len) {
- struct tlv *tlv = (struct tlv *)((u8 *)buf + offset);
+ data = buf + sizeof(*hdr);
+ for (i = 0; i < BSS_INFO_MAX_NUM; i++) {
+ int tag = ffs(features & BIT(i)) - 1;
- tlv_num++;
- offset += tlv->len;
+ switch (tag) {
+ case BSS_INFO_OMAC:
+ mt7615_mcu_bss_info_omac_header(mvif, data,
+ conn_type);
+ data += sizeof(struct bss_info_omac);
+ break;
+ case BSS_INFO_BASIC:
+ mt7615_mcu_bss_info_basic_header(vif, data, net_type,
+ tx_wlan_idx, en);
+ data += sizeof(struct bss_info_basic);
+ break;
+ case BSS_INFO_EXT_BSS:
+ mt7615_mcu_bss_info_ext_header(mvif, data);
+ data += sizeof(struct bss_info_ext_bss);
+ break;
+ default:
+ break;
+ }
}
- skb = mt7615_mcu_msg_alloc(NULL, sizeof(req_hdr) + buf_len);
-
- req_hdr.wlan_idx = wlan_idx;
- req_hdr.operation = operation;
- req_hdr.tlv_num = cpu_to_le16(tlv_num);
-
- memcpy(skb_put(skb, sizeof(req_hdr)), &req_hdr, sizeof(req_hdr));
-
- if (buf && buf_len)
- memcpy(skb_put(skb, buf_len), buf, buf_len);
+ ret = __mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_BSS_INFO_UPDATE,
+ buf, len, true);
+ kfree(buf);
- return mt7615_mcu_msg_send(dev, skb, MCU_EXT_CMD_WTBL_UPDATE,
- MCU_Q_SET, MCU_S2D_H2N, NULL);
+ return ret;
}
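
The rewritten mt7615_mcu_set_bss_info() builds the MCU request as a fixed header (bss index, TLV count) followed by a variable set of TLVs, where each TLV starts with a little-endian tag and a length that covers the whole TLV structure. The sketch below illustrates only that generic packing pattern; the tag values and payload bytes are made up, and the real driver uses the bss_info_* structures from mcu.h rather than hand-packed buffers.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Append one tag/len TLV to buf; len covers the 4-byte header plus payload,
 * mirroring how the driver sets .len = cpu_to_le16(sizeof(struct ...)).
 * Little-endian packing is done by hand here for clarity.
 */
static size_t put_tlv(uint8_t *buf, uint16_t tag, const void *data,
		      uint16_t data_len)
{
	uint16_t len = 4 + data_len;

	buf[0] = tag & 0xff;
	buf[1] = tag >> 8;
	buf[2] = len & 0xff;
	buf[3] = len >> 8;
	memcpy(buf + 4, data, data_len);
	return len;
}

int main(void)
{
	uint8_t buf[64], payload[8] = { 0 };
	size_t off = 0;

	off += put_tlv(buf + off, 1 /* OMAC-like tag */, payload, sizeof(payload));
	off += put_tlv(buf + off, 2 /* BASIC-like tag */, payload, sizeof(payload));
	printf("packed %zu bytes, 2 TLVs\n", off);
	return 0;
}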
static enum mt7615_cipher_type
@@ -995,70 +861,90 @@ int mt7615_mcu_set_wtbl_key(struct mt7615_dev *dev, int wcid,
struct ieee80211_key_conf *key,
enum set_key_cmd cmd)
{
- struct wtbl_sec_key wtbl_sec_key = {0};
- int buf_len = sizeof(struct wtbl_sec_key);
- u8 cipher;
-
- wtbl_sec_key.tag = cpu_to_le16(WTBL_SEC_KEY);
- wtbl_sec_key.len = cpu_to_le16(buf_len);
- wtbl_sec_key.add = cmd;
+ struct {
+ struct wtbl_req_hdr hdr;
+ struct wtbl_sec_key key;
+ } req = {
+ .hdr = {
+ .wlan_idx = wcid,
+ .operation = WTBL_SET,
+ .tlv_num = cpu_to_le16(1),
+ },
+ .key = {
+ .tag = cpu_to_le16(WTBL_SEC_KEY),
+ .len = cpu_to_le16(sizeof(struct wtbl_sec_key)),
+ .add = cmd,
+ },
+ };
if (cmd == SET_KEY) {
- cipher = mt7615_get_key_info(key, wtbl_sec_key.key_material);
- if (cipher == MT_CIPHER_NONE && key)
+ u8 cipher;
+
+ cipher = mt7615_get_key_info(key, req.key.key_material);
+ if (cipher == MT_CIPHER_NONE)
return -EOPNOTSUPP;
- wtbl_sec_key.cipher_id = cipher;
- wtbl_sec_key.key_id = key->keyidx;
- wtbl_sec_key.key_len = key->keylen;
+ req.key.rkv = 1;
+ req.key.cipher_id = cipher;
+ req.key.key_id = key->keyidx;
+ req.key.key_len = key->keylen;
} else {
- wtbl_sec_key.key_len = sizeof(wtbl_sec_key.key_material);
+ req.key.key_len = sizeof(req.key.key_material);
}
- return __mt7615_mcu_set_wtbl(dev, wcid, WTBL_SET, &wtbl_sec_key,
- buf_len);
+ return __mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_WTBL_UPDATE,
+ &req, sizeof(req), true);
}
-int mt7615_mcu_add_wtbl_bmc(struct mt7615_dev *dev, struct ieee80211_vif *vif)
+static int
+mt7615_mcu_add_wtbl_bmc(struct mt7615_dev *dev,
+ struct mt7615_vif *mvif)
{
- struct mt7615_vif *mvif = (struct mt7615_vif *)vif->drv_priv;
- struct wtbl_generic *wtbl_generic;
- struct wtbl_rx *wtbl_rx;
- int buf_len, ret;
- u8 *buf;
-
- buf = kzalloc(MT7615_WTBL_UPDATE_MAX_SIZE, GFP_KERNEL);
- if (!buf)
- return -ENOMEM;
-
- wtbl_generic = (struct wtbl_generic *)buf;
- buf_len = sizeof(*wtbl_generic);
- wtbl_generic->tag = cpu_to_le16(WTBL_GENERIC);
- wtbl_generic->len = cpu_to_le16(buf_len);
- eth_broadcast_addr(wtbl_generic->peer_addr);
- wtbl_generic->muar_idx = 0xe;
-
- wtbl_rx = (struct wtbl_rx *)(buf + buf_len);
- buf_len += sizeof(*wtbl_rx);
- wtbl_rx->tag = cpu_to_le16(WTBL_RX);
- wtbl_rx->len = cpu_to_le16(sizeof(*wtbl_rx));
- wtbl_rx->rca1 = 1;
- wtbl_rx->rca2 = 1;
- wtbl_rx->rv = 1;
-
- ret = __mt7615_mcu_set_wtbl(dev, mvif->sta.wcid.idx,
- WTBL_RESET_AND_SET, buf, buf_len);
+ struct {
+ struct wtbl_req_hdr hdr;
+ struct wtbl_generic g_wtbl;
+ struct wtbl_rx rx_wtbl;
+ } req = {
+ .hdr = {
+ .wlan_idx = mvif->sta.wcid.idx,
+ .operation = WTBL_RESET_AND_SET,
+ .tlv_num = cpu_to_le16(2),
+ },
+ .g_wtbl = {
+ .tag = cpu_to_le16(WTBL_GENERIC),
+ .len = cpu_to_le16(sizeof(struct wtbl_generic)),
+ .muar_idx = 0xe,
+ },
+ .rx_wtbl = {
+ .tag = cpu_to_le16(WTBL_RX),
+ .len = cpu_to_le16(sizeof(struct wtbl_rx)),
+ .rca1 = 1,
+ .rca2 = 1,
+ .rv = 1,
+ },
+ };
+ eth_broadcast_addr(req.g_wtbl.peer_addr);
- kfree(buf);
- return ret;
+ return __mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_WTBL_UPDATE,
+ &req, sizeof(req), true);
}
-int mt7615_mcu_del_wtbl_bmc(struct mt7615_dev *dev, struct ieee80211_vif *vif)
+int mt7615_mcu_wtbl_bmc(struct mt7615_dev *dev,
+ struct ieee80211_vif *vif, bool enable)
{
struct mt7615_vif *mvif = (struct mt7615_vif *)vif->drv_priv;
- return __mt7615_mcu_set_wtbl(dev, mvif->sta.wcid.idx,
- WTBL_RESET_AND_SET, NULL, 0);
+ if (!enable) {
+ struct wtbl_req_hdr req = {
+ .wlan_idx = mvif->sta.wcid.idx,
+ .operation = WTBL_RESET_AND_SET,
+ };
+
+ return __mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_WTBL_UPDATE,
+ &req, sizeof(req), true);
+ }
+
+ return mt7615_mcu_add_wtbl_bmc(dev, mvif);
}
int mt7615_mcu_add_wtbl(struct mt7615_dev *dev, struct ieee80211_vif *vif,
@@ -1066,175 +952,153 @@ int mt7615_mcu_add_wtbl(struct mt7615_dev *dev, struct ieee80211_vif *vif,
{
struct mt7615_vif *mvif = (struct mt7615_vif *)vif->drv_priv;
struct mt7615_sta *msta = (struct mt7615_sta *)sta->drv_priv;
- struct wtbl_generic *wtbl_generic;
- struct wtbl_rx *wtbl_rx;
- int buf_len, ret;
- u8 *buf;
-
- buf = kzalloc(MT7615_WTBL_UPDATE_MAX_SIZE, GFP_KERNEL);
- if (!buf)
- return -ENOMEM;
-
- wtbl_generic = (struct wtbl_generic *)buf;
- buf_len = sizeof(*wtbl_generic);
- wtbl_generic->tag = cpu_to_le16(WTBL_GENERIC);
- wtbl_generic->len = cpu_to_le16(buf_len);
- memcpy(wtbl_generic->peer_addr, sta->addr, ETH_ALEN);
- wtbl_generic->muar_idx = mvif->omac_idx;
- wtbl_generic->qos = sta->wme;
- wtbl_generic->partial_aid = cpu_to_le16(sta->aid);
-
- wtbl_rx = (struct wtbl_rx *)(buf + buf_len);
- buf_len += sizeof(*wtbl_rx);
- wtbl_rx->tag = cpu_to_le16(WTBL_RX);
- wtbl_rx->len = cpu_to_le16(sizeof(*wtbl_rx));
- wtbl_rx->rca1 = (vif->type == NL80211_IFTYPE_AP) ? 0 : 1;
- wtbl_rx->rca2 = 1;
- wtbl_rx->rv = 1;
-
- ret = __mt7615_mcu_set_wtbl(dev, msta->wcid.idx,
- WTBL_RESET_AND_SET, buf, buf_len);
+ struct {
+ struct wtbl_req_hdr hdr;
+ struct wtbl_generic g_wtbl;
+ struct wtbl_rx rx_wtbl;
+ } req = {
+ .hdr = {
+ .wlan_idx = msta->wcid.idx,
+ .operation = WTBL_RESET_AND_SET,
+ .tlv_num = cpu_to_le16(2),
+ },
+ .g_wtbl = {
+ .tag = cpu_to_le16(WTBL_GENERIC),
+ .len = cpu_to_le16(sizeof(struct wtbl_generic)),
+ .muar_idx = mvif->omac_idx,
+ .qos = sta->wme,
+ .partial_aid = cpu_to_le16(sta->aid),
+ },
+ .rx_wtbl = {
+ .tag = cpu_to_le16(WTBL_RX),
+ .len = cpu_to_le16(sizeof(struct wtbl_rx)),
+ .rca1 = vif->type != NL80211_IFTYPE_AP,
+ .rca2 = 1,
+ .rv = 1,
+ },
+ };
+ memcpy(req.g_wtbl.peer_addr, sta->addr, ETH_ALEN);
- kfree(buf);
- return ret;
+ return __mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_WTBL_UPDATE,
+ &req, sizeof(req), true);
}
-int mt7615_mcu_del_wtbl(struct mt7615_dev *dev, struct ieee80211_vif *vif,
+int mt7615_mcu_del_wtbl(struct mt7615_dev *dev,
struct ieee80211_sta *sta)
{
struct mt7615_sta *msta = (struct mt7615_sta *)sta->drv_priv;
+ struct wtbl_req_hdr req = {
+ .wlan_idx = msta->wcid.idx,
+ .operation = WTBL_RESET_AND_SET,
+ };
- return __mt7615_mcu_set_wtbl(dev, msta->wcid.idx,
- WTBL_RESET_AND_SET, NULL, 0);
+ return __mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_WTBL_UPDATE,
+ &req, sizeof(req), true);
}
int mt7615_mcu_del_wtbl_all(struct mt7615_dev *dev)
{
- return __mt7615_mcu_set_wtbl(dev, 0, WTBL_RESET_ALL, NULL, 0);
-}
-
-static int __mt7615_mcu_set_sta_rec(struct mt7615_dev *dev, int bss_idx,
- int wlan_idx, int muar_idx, void *buf,
- int buf_len)
-{
- struct req_hdr {
- u8 bss_idx;
- u8 wlan_idx;
- __le16 tlv_num;
- u8 is_tlv_append;
- u8 muar_idx;
- u8 rsv[2];
- } __packed req_hdr = {0};
- struct tlv {
- __le16 tag;
- __le16 len;
- u8 buf[0];
- } __packed;
- struct sk_buff *skb;
- u16 tlv_num = 0;
- int offset = 0;
-
- while (offset < buf_len) {
- struct tlv *tlv = (struct tlv *)((u8 *)buf + offset);
-
- tlv_num++;
- offset += tlv->len;
- }
-
- skb = mt7615_mcu_msg_alloc(NULL, sizeof(req_hdr) + buf_len);
-
- req_hdr.bss_idx = bss_idx;
- req_hdr.wlan_idx = wlan_idx;
- req_hdr.tlv_num = cpu_to_le16(tlv_num);
- req_hdr.is_tlv_append = tlv_num ? 1 : 0;
- req_hdr.muar_idx = muar_idx;
-
- memcpy(skb_put(skb, sizeof(req_hdr)), &req_hdr, sizeof(req_hdr));
-
- if (buf && buf_len)
- memcpy(skb_put(skb, buf_len), buf, buf_len);
+ struct wtbl_req_hdr req = {
+ .operation = WTBL_RESET_ALL,
+ };
- return mt7615_mcu_msg_send(dev, skb, MCU_EXT_CMD_STA_REC_UPDATE,
- MCU_Q_SET, MCU_S2D_H2N, NULL);
+ return __mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_WTBL_UPDATE,
+ &req, sizeof(req), true);
}
int mt7615_mcu_set_sta_rec_bmc(struct mt7615_dev *dev,
struct ieee80211_vif *vif, bool en)
{
struct mt7615_vif *mvif = (struct mt7615_vif *)vif->drv_priv;
- struct sta_rec_basic sta_rec_basic = {0};
- int buf_len = sizeof(struct sta_rec_basic);
+ struct {
+ struct sta_req_hdr hdr;
+ struct sta_rec_basic basic;
+ } req = {
+ .hdr = {
+ .bss_idx = mvif->idx,
+ .wlan_idx = mvif->sta.wcid.idx,
+ .tlv_num = cpu_to_le16(1),
+ .is_tlv_append = 1,
+ .muar_idx = mvif->omac_idx,
+ },
+ .basic = {
+ .tag = cpu_to_le16(STA_REC_BASIC),
+ .len = cpu_to_le16(sizeof(struct sta_rec_basic)),
+ .conn_type = cpu_to_le32(CONNECTION_INFRA_BC),
+ },
+ };
+ eth_broadcast_addr(req.basic.peer_addr);
- sta_rec_basic.tag = cpu_to_le16(STA_REC_BASIC);
- sta_rec_basic.len = cpu_to_le16(buf_len);
- sta_rec_basic.conn_type = cpu_to_le32(CONNECTION_INFRA_BC);
- eth_broadcast_addr(sta_rec_basic.peer_addr);
if (en) {
- sta_rec_basic.conn_state = CONN_STATE_PORT_SECURE;
- sta_rec_basic.extra_info =
- cpu_to_le16(EXTRA_INFO_VER | EXTRA_INFO_NEW);
+ req.basic.conn_state = CONN_STATE_PORT_SECURE;
+ req.basic.extra_info = cpu_to_le16(EXTRA_INFO_VER |
+ EXTRA_INFO_NEW);
} else {
- sta_rec_basic.conn_state = CONN_STATE_DISCONNECT;
- sta_rec_basic.extra_info = cpu_to_le16(EXTRA_INFO_VER);
+ req.basic.conn_state = CONN_STATE_DISCONNECT;
+ req.basic.extra_info = cpu_to_le16(EXTRA_INFO_VER);
}
- return __mt7615_mcu_set_sta_rec(dev, mvif->idx, mvif->sta.wcid.idx,
- mvif->omac_idx, &sta_rec_basic,
- buf_len);
+ return __mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_STA_REC_UPDATE,
+ &req, sizeof(req), true);
}
-static void sta_rec_convert_vif_type(enum nl80211_iftype type, u32 *conn_type)
+int mt7615_mcu_set_sta_rec(struct mt7615_dev *dev, struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta, bool en)
{
- switch (type) {
+ struct mt7615_vif *mvif = (struct mt7615_vif *)vif->drv_priv;
+ struct mt7615_sta *msta = (struct mt7615_sta *)sta->drv_priv;
+
+ struct {
+ struct sta_req_hdr hdr;
+ struct sta_rec_basic basic;
+ } req = {
+ .hdr = {
+ .bss_idx = mvif->idx,
+ .wlan_idx = msta->wcid.idx,
+ .tlv_num = cpu_to_le16(1),
+ .is_tlv_append = 1,
+ .muar_idx = mvif->omac_idx,
+ },
+ .basic = {
+ .tag = cpu_to_le16(STA_REC_BASIC),
+ .len = cpu_to_le16(sizeof(struct sta_rec_basic)),
+ .qos = sta->wme,
+ .aid = cpu_to_le16(sta->aid),
+ },
+ };
+ memcpy(req.basic.peer_addr, sta->addr, ETH_ALEN);
+
+ switch (vif->type) {
case NL80211_IFTYPE_AP:
- if (conn_type)
- *conn_type = CONNECTION_INFRA_STA;
+ case NL80211_IFTYPE_MESH_POINT:
+ req.basic.conn_type = cpu_to_le32(CONNECTION_INFRA_STA);
break;
case NL80211_IFTYPE_STATION:
- if (conn_type)
- *conn_type = CONNECTION_INFRA_AP;
+ req.basic.conn_type = cpu_to_le32(CONNECTION_INFRA_AP);
break;
default:
WARN_ON(1);
break;
};
-}
-
-int mt7615_mcu_set_sta_rec(struct mt7615_dev *dev, struct ieee80211_vif *vif,
- struct ieee80211_sta *sta, bool en)
-{
- struct mt7615_vif *mvif = (struct mt7615_vif *)vif->drv_priv;
- struct mt7615_sta *msta = (struct mt7615_sta *)sta->drv_priv;
- struct sta_rec_basic sta_rec_basic = {0};
- int buf_len = sizeof(struct sta_rec_basic);
- u32 conn_type = 0;
-
- sta_rec_convert_vif_type(vif->type, &conn_type);
-
- sta_rec_basic.tag = cpu_to_le16(STA_REC_BASIC);
- sta_rec_basic.len = cpu_to_le16(buf_len);
- sta_rec_basic.conn_type = cpu_to_le32(conn_type);
- sta_rec_basic.qos = sta->wme;
- sta_rec_basic.aid = cpu_to_le16(sta->aid);
- memcpy(sta_rec_basic.peer_addr, sta->addr, ETH_ALEN);
if (en) {
- sta_rec_basic.conn_state = CONN_STATE_PORT_SECURE;
- sta_rec_basic.extra_info =
- cpu_to_le16(EXTRA_INFO_VER | EXTRA_INFO_NEW);
+ req.basic.conn_state = CONN_STATE_PORT_SECURE;
+ req.basic.extra_info = cpu_to_le16(EXTRA_INFO_VER |
+ EXTRA_INFO_NEW);
} else {
- sta_rec_basic.conn_state = CONN_STATE_DISCONNECT;
- sta_rec_basic.extra_info = cpu_to_le16(EXTRA_INFO_VER);
+ req.basic.conn_state = CONN_STATE_DISCONNECT;
+ req.basic.extra_info = cpu_to_le16(EXTRA_INFO_VER);
}
- return __mt7615_mcu_set_sta_rec(dev, mvif->idx, msta->wcid.idx,
- mvif->omac_idx, &sta_rec_basic,
- buf_len);
+ return __mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_STA_REC_UPDATE,
+ &req, sizeof(req), true);
}
int mt7615_mcu_set_bcn(struct mt7615_dev *dev, struct ieee80211_vif *vif,
int en)
{
+ struct mt7615_vif *mvif = (struct mt7615_vif *)vif->drv_priv;
+ struct mt76_wcid *wcid = &dev->mt76.global_wcid;
struct req {
u8 omac_idx;
u8 enable;
@@ -1250,14 +1114,18 @@ int mt7615_mcu_set_bcn(struct mt7615_dev *dev, struct ieee80211_vif *vif,
/* bss color change */
u8 bcc_cnt;
__le16 bcc_ie_pos;
- } __packed req = {0};
- struct mt7615_vif *mvif = (struct mt7615_vif *)vif->drv_priv;
- struct mt76_wcid *wcid = &dev->mt76.global_wcid;
+ } __packed req = {
+ .omac_idx = mvif->omac_idx,
+ .enable = en,
+ .wlan_idx = wcid->idx,
+ .band_idx = mvif->band_idx,
+ /* pkt_type: 0 for bcn, 1 for tim */
+ .pkt_type = 0,
+ };
struct sk_buff *skb;
- u16 tim_off, tim_len;
-
- skb = ieee80211_beacon_get_tim(mt76_hw(dev), vif, &tim_off, &tim_len);
+ u16 tim_off;
+ skb = ieee80211_beacon_get_tim(mt76_hw(dev), vif, &tim_off, NULL);
if (!skb)
return -EINVAL;
@@ -1270,21 +1138,79 @@ int mt7615_mcu_set_bcn(struct mt7615_dev *dev, struct ieee80211_vif *vif,
mt7615_mac_write_txwi(dev, (__le32 *)(req.pkt), skb, wcid, NULL,
0, NULL);
memcpy(req.pkt + MT_TXD_SIZE, skb->data, skb->len);
- dev_kfree_skb(skb);
-
- req.omac_idx = mvif->omac_idx;
- req.enable = en;
- req.wlan_idx = wcid->idx;
- req.band_idx = mvif->band_idx;
- /* pky_type: 0 for bcn, 1 for tim */
- req.pkt_type = 0;
req.pkt_len = cpu_to_le16(MT_TXD_SIZE + skb->len);
req.tim_ie_pos = cpu_to_le16(MT_TXD_SIZE + tim_off);
- skb = mt7615_mcu_msg_alloc(&req, sizeof(req));
+ dev_kfree_skb(skb);
+
+ return __mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_BCN_OFFLOAD,
+ &req, sizeof(req), true);
+}
+
+int mt7615_mcu_set_tx_power(struct mt7615_dev *dev)
+{
+ int i, ret, n_chains = hweight8(dev->mt76.antenna_mask);
+ struct cfg80211_chan_def *chandef = &dev->mt76.chandef;
+ int freq = chandef->center_freq1, len, target_chains;
+ u8 *req, *data, *eep = (u8 *)dev->mt76.eeprom.data;
+ enum nl80211_band band = chandef->chan->band;
+ struct ieee80211_hw *hw = mt76_hw(dev);
+ struct {
+ u8 center_chan;
+ u8 dbdc_idx;
+ u8 band;
+ u8 rsv;
+ } __packed req_hdr = {
+ .center_chan = ieee80211_frequency_to_channel(freq),
+ .band = band,
+ };
+ s8 tx_power;
+
+ len = sizeof(req_hdr) + __MT_EE_MAX - MT_EE_NIC_CONF_0;
+ req = kzalloc(len, GFP_KERNEL);
+ if (!req)
+ return -ENOMEM;
- return mt7615_mcu_msg_send(dev, skb, MCU_EXT_CMD_BCN_OFFLOAD,
- MCU_Q_SET, MCU_S2D_H2N, NULL);
+ memcpy(req, &req_hdr, sizeof(req_hdr));
+ data = req + sizeof(req_hdr);
+ memcpy(data, eep + MT_EE_NIC_CONF_0,
+ __MT_EE_MAX - MT_EE_NIC_CONF_0);
+
+ tx_power = hw->conf.power_level * 2;
+ switch (n_chains) {
+ case 4:
+ tx_power -= 12;
+ break;
+ case 3:
+ tx_power -= 8;
+ break;
+ case 2:
+ tx_power -= 6;
+ break;
+ default:
+ break;
+ }
+ tx_power = max_t(s8, tx_power, 0);
+ dev->mt76.txpower_cur = tx_power;
+
+ target_chains = mt7615_ext_pa_enabled(dev, band) ? 1 : n_chains;
+ for (i = 0; i < target_chains; i++) {
+ int index = -MT_EE_NIC_CONF_0;
+
+ ret = mt7615_eeprom_get_power_index(dev, chandef->chan, i);
+ if (ret < 0)
+ goto out;
+
+ index += ret;
+ data[index] = min_t(u8, data[index], tx_power);
+ }
+
+ ret = __mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_SET_TX_POWER_CTRL,
+ req, len, true);
+out:
+ kfree(req);
+
+ return ret;
}
int mt7615_mcu_set_channel(struct mt7615_dev *dev)
@@ -1309,7 +1235,6 @@ int mt7615_mcu_set_channel(struct mt7615_dev *dev)
u8 txpower_sku[53];
u8 rsv2[3];
} req = {0};
- struct sk_buff *skb;
int ret;
req.control_chan = chdef->chan->hw_value;
@@ -1345,18 +1270,15 @@ int mt7615_mcu_set_channel(struct mt7615_dev *dev)
default:
req.bw = CMD_CBW_20MHZ;
}
-
memset(req.txpower_sku, 0x3f, 49);
- skb = mt7615_mcu_msg_alloc(&req, sizeof(req));
- ret = mt7615_mcu_msg_send(dev, skb, MCU_EXT_CMD_CHANNEL_SWITCH,
- MCU_Q_SET, MCU_S2D_H2N, NULL);
+ ret = __mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_CHANNEL_SWITCH,
+ &req, sizeof(req), true);
if (ret)
return ret;
- skb = mt7615_mcu_msg_alloc(&req, sizeof(req));
- return mt7615_mcu_msg_send(dev, skb, MCU_EXT_CMD_SET_RX_PATH,
- MCU_Q_SET, MCU_S2D_H2N, NULL);
+ return __mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_SET_RX_PATH,
+ &req, sizeof(req), true);
}
int mt7615_mcu_set_ht_cap(struct mt7615_dev *dev, struct ieee80211_vif *vif,
@@ -1364,10 +1286,12 @@ int mt7615_mcu_set_ht_cap(struct mt7615_dev *dev, struct ieee80211_vif *vif,
{
struct mt7615_sta *msta = (struct mt7615_sta *)sta->drv_priv;
struct mt7615_vif *mvif = (struct mt7615_vif *)vif->drv_priv;
- struct wtbl_ht *wtbl_ht;
+ struct wtbl_req_hdr *wtbl_hdr;
+ struct sta_req_hdr *sta_hdr;
struct wtbl_raw *wtbl_raw;
- struct sta_rec_ht *sta_rec_ht;
- int buf_len, ret;
+ struct sta_rec_ht *sta_ht;
+ struct wtbl_ht *wtbl_ht;
+ int buf_len, ret, ntlv = 2;
u32 msk, val = 0;
u8 *buf;
@@ -1375,15 +1299,20 @@ int mt7615_mcu_set_ht_cap(struct mt7615_dev *dev, struct ieee80211_vif *vif,
if (!buf)
return -ENOMEM;
+ wtbl_hdr = (struct wtbl_req_hdr *)buf;
+ wtbl_hdr->wlan_idx = msta->wcid.idx;
+ wtbl_hdr->operation = WTBL_SET;
+ buf_len = sizeof(*wtbl_hdr);
+
/* ht basic */
- buf_len = sizeof(*wtbl_ht);
- wtbl_ht = (struct wtbl_ht *)buf;
+ wtbl_ht = (struct wtbl_ht *)(buf + buf_len);
wtbl_ht->tag = cpu_to_le16(WTBL_HT);
wtbl_ht->len = cpu_to_le16(sizeof(*wtbl_ht));
wtbl_ht->ht = 1;
wtbl_ht->ldpc = sta->ht_cap.cap & IEEE80211_HT_CAP_LDPC_CODING;
wtbl_ht->af = sta->ht_cap.ampdu_factor;
wtbl_ht->mm = sta->ht_cap.ampdu_density;
+ buf_len += sizeof(*wtbl_ht);
if (sta->ht_cap.cap & IEEE80211_HT_CAP_SGI_20)
val |= MT_WTBL_W5_SHORT_GI_20;
@@ -1400,6 +1329,7 @@ int mt7615_mcu_set_ht_cap(struct mt7615_dev *dev, struct ieee80211_vif *vif,
wtbl_vht->len = cpu_to_le16(sizeof(*wtbl_vht));
wtbl_vht->ldpc = sta->vht_cap.cap & IEEE80211_VHT_CAP_RXLDPC;
wtbl_vht->vht = 1;
+ ntlv++;
if (sta->vht_cap.cap & IEEE80211_VHT_CAP_SHORT_GI_80)
val |= MT_WTBL_W5_SHORT_GI_80;
@@ -1416,6 +1346,7 @@ int mt7615_mcu_set_ht_cap(struct mt7615_dev *dev, struct ieee80211_vif *vif,
wtbl_smps->tag = cpu_to_le16(WTBL_SMPS);
wtbl_smps->len = cpu_to_le16(sizeof(*wtbl_smps));
wtbl_smps->smps = 1;
+ ntlv++;
}
/* sgi */
@@ -1431,38 +1362,46 @@ int mt7615_mcu_set_ht_cap(struct mt7615_dev *dev, struct ieee80211_vif *vif,
wtbl_raw->msk = cpu_to_le32(~msk);
wtbl_raw->val = cpu_to_le32(val);
- ret = __mt7615_mcu_set_wtbl(dev, msta->wcid.idx, WTBL_SET, buf,
- buf_len);
- if (ret) {
- kfree(buf);
- return ret;
- }
+ wtbl_hdr->tlv_num = cpu_to_le16(ntlv);
+ ret = __mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_WTBL_UPDATE,
+ buf, buf_len, true);
+ if (ret)
+ goto out;
memset(buf, 0, MT7615_WTBL_UPDATE_MAX_SIZE);
- buf_len = sizeof(*sta_rec_ht);
- sta_rec_ht = (struct sta_rec_ht *)buf;
- sta_rec_ht->tag = cpu_to_le16(STA_REC_HT);
- sta_rec_ht->len = cpu_to_le16(sizeof(*sta_rec_ht));
- sta_rec_ht->ht_cap = cpu_to_le16(sta->ht_cap.cap);
+ sta_hdr = (struct sta_req_hdr *)buf;
+ sta_hdr->bss_idx = mvif->idx;
+ sta_hdr->wlan_idx = msta->wcid.idx;
+ sta_hdr->is_tlv_append = 1;
+ ntlv = sta->vht_cap.vht_supported ? 2 : 1;
+ sta_hdr->tlv_num = cpu_to_le16(ntlv);
+ sta_hdr->muar_idx = mvif->omac_idx;
+ buf_len = sizeof(*sta_hdr);
+
+ sta_ht = (struct sta_rec_ht *)(buf + buf_len);
+ sta_ht->tag = cpu_to_le16(STA_REC_HT);
+ sta_ht->len = cpu_to_le16(sizeof(*sta_ht));
+ sta_ht->ht_cap = cpu_to_le16(sta->ht_cap.cap);
+ buf_len += sizeof(*sta_ht);
if (sta->vht_cap.vht_supported) {
- struct sta_rec_vht *sta_rec_vht;
-
- sta_rec_vht = (struct sta_rec_vht *)(buf + buf_len);
- buf_len += sizeof(*sta_rec_vht);
- sta_rec_vht->tag = cpu_to_le16(STA_REC_VHT);
- sta_rec_vht->len = cpu_to_le16(sizeof(*sta_rec_vht));
- sta_rec_vht->vht_cap = cpu_to_le32(sta->vht_cap.cap);
- sta_rec_vht->vht_rx_mcs_map =
- cpu_to_le16(sta->vht_cap.vht_mcs.rx_mcs_map);
- sta_rec_vht->vht_tx_mcs_map =
- cpu_to_le16(sta->vht_cap.vht_mcs.tx_mcs_map);
+ struct sta_rec_vht *sta_vht;
+
+ sta_vht = (struct sta_rec_vht *)(buf + buf_len);
+ buf_len += sizeof(*sta_vht);
+ sta_vht->tag = cpu_to_le16(STA_REC_VHT);
+ sta_vht->len = cpu_to_le16(sizeof(*sta_vht));
+ sta_vht->vht_cap = cpu_to_le32(sta->vht_cap.cap);
+ sta_vht->vht_rx_mcs_map = sta->vht_cap.vht_mcs.rx_mcs_map;
+ sta_vht->vht_tx_mcs_map = sta->vht_cap.vht_mcs.tx_mcs_map;
}
- ret = __mt7615_mcu_set_sta_rec(dev, mvif->idx, msta->wcid.idx,
- mvif->omac_idx, buf, buf_len);
+ ret = __mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_STA_REC_UPDATE,
+ buf, buf_len, true);
+out:
kfree(buf);
+
return ret;
}
@@ -1470,98 +1409,128 @@ int mt7615_mcu_set_tx_ba(struct mt7615_dev *dev,
struct ieee80211_ampdu_params *params,
bool add)
{
- struct ieee80211_sta *sta = params->sta;
- struct mt7615_sta *msta = (struct mt7615_sta *)sta->drv_priv;
+ struct mt7615_sta *msta = (struct mt7615_sta *)params->sta->drv_priv;
struct mt7615_vif *mvif = msta->vif;
- u8 ba_range[8] = {4, 8, 12, 24, 36, 48, 54, 64};
- u16 tid = params->tid;
- u16 ba_size = params->buf_size;
- u16 ssn = params->ssn;
- struct wtbl_ba wtbl_ba = {0};
- struct sta_rec_ba sta_rec_ba = {0};
- int ret, buf_len;
-
- buf_len = sizeof(struct wtbl_ba);
-
- wtbl_ba.tag = cpu_to_le16(WTBL_BA);
- wtbl_ba.len = cpu_to_le16(buf_len);
- wtbl_ba.tid = tid;
- wtbl_ba.ba_type = MT_BA_TYPE_ORIGINATOR;
+ struct {
+ struct wtbl_req_hdr hdr;
+ struct wtbl_ba ba;
+ } wtbl_req = {
+ .hdr = {
+ .wlan_idx = msta->wcid.idx,
+ .operation = WTBL_SET,
+ .tlv_num = cpu_to_le16(1),
+ },
+ .ba = {
+ .tag = cpu_to_le16(WTBL_BA),
+ .len = cpu_to_le16(sizeof(struct wtbl_ba)),
+ .tid = params->tid,
+ .ba_type = MT_BA_TYPE_ORIGINATOR,
+ .sn = add ? cpu_to_le16(params->ssn) : 0,
+ .ba_en = add,
+ },
+ };
+ struct {
+ struct sta_req_hdr hdr;
+ struct sta_rec_ba ba;
+ } sta_req = {
+ .hdr = {
+ .bss_idx = mvif->idx,
+ .wlan_idx = msta->wcid.idx,
+ .tlv_num = cpu_to_le16(1),
+ .is_tlv_append = 1,
+ .muar_idx = mvif->omac_idx,
+ },
+ .ba = {
+ .tag = cpu_to_le16(STA_REC_BA),
+ .len = cpu_to_le16(sizeof(struct sta_rec_ba)),
+ .tid = params->tid,
+ .ba_type = MT_BA_TYPE_ORIGINATOR,
+ .amsdu = params->amsdu,
+ .ba_en = add << params->tid,
+ .ssn = cpu_to_le16(params->ssn),
+ .winsize = cpu_to_le16(params->buf_size),
+ },
+ };
+ int ret;
if (add) {
- u8 idx;
+ u8 idx, ba_range[] = { 4, 8, 12, 24, 36, 48, 54, 64 };
for (idx = 7; idx > 0; idx--) {
- if (ba_size >= ba_range[idx])
+ if (params->buf_size >= ba_range[idx])
break;
}
- wtbl_ba.sn = cpu_to_le16(ssn);
- wtbl_ba.ba_en = 1;
- wtbl_ba.ba_winsize_idx = idx;
+ wtbl_req.ba.ba_winsize_idx = idx;
}
- ret = __mt7615_mcu_set_wtbl(dev, msta->wcid.idx, WTBL_SET, &wtbl_ba,
- buf_len);
+ ret = __mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_WTBL_UPDATE,
+ &wtbl_req, sizeof(wtbl_req), true);
if (ret)
return ret;
- buf_len = sizeof(struct sta_rec_ba);
-
- sta_rec_ba.tag = cpu_to_le16(STA_REC_BA);
- sta_rec_ba.len = cpu_to_le16(buf_len);
- sta_rec_ba.tid = tid;
- sta_rec_ba.ba_type = MT_BA_TYPE_ORIGINATOR;
- sta_rec_ba.amsdu = params->amsdu;
- sta_rec_ba.ba_en = add << tid;
- sta_rec_ba.ssn = cpu_to_le16(ssn);
- sta_rec_ba.winsize = cpu_to_le16(ba_size);
-
- return __mt7615_mcu_set_sta_rec(dev, mvif->idx, msta->wcid.idx,
- mvif->omac_idx, &sta_rec_ba, buf_len);
+ return __mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_STA_REC_UPDATE,
+ &sta_req, sizeof(sta_req), true);
}
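
The descending loop in mt7615_mcu_set_tx_ba() maps the requested aggregation buffer size onto the hardware's fixed window-size table: it picks the largest table entry that does not exceed params->buf_size, and falls back to index 0 (window 4) for anything smaller than 8. A compact standalone restatement of that lookup:

#include <stdio.h>

static int ba_winsize_idx(int buf_size)
{
	static const int ba_range[8] = { 4, 8, 12, 24, 36, 48, 54, 64 };
	int idx;

	for (idx = 7; idx > 0; idx--) {
		if (buf_size >= ba_range[idx])
			break;
	}
	return idx;	/* idx 0 covers everything below 8 */
}

int main(void)
{
	int sizes[] = { 2, 8, 40, 64, 256 };
	unsigned int i;

	for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
		printf("buf_size %3d -> idx %d\n", sizes[i],
		       ba_winsize_idx(sizes[i]));
	return 0;
}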
int mt7615_mcu_set_rx_ba(struct mt7615_dev *dev,
struct ieee80211_ampdu_params *params,
bool add)
{
- struct ieee80211_sta *sta = params->sta;
- struct mt7615_sta *msta = (struct mt7615_sta *)sta->drv_priv;
+ struct mt7615_sta *msta = (struct mt7615_sta *)params->sta->drv_priv;
struct mt7615_vif *mvif = msta->vif;
- u16 tid = params->tid;
- struct wtbl_ba wtbl_ba = {0};
- struct sta_rec_ba sta_rec_ba = {0};
- int ret, buf_len;
-
- buf_len = sizeof(struct sta_rec_ba);
-
- sta_rec_ba.tag = cpu_to_le16(STA_REC_BA);
- sta_rec_ba.len = cpu_to_le16(buf_len);
- sta_rec_ba.tid = tid;
- sta_rec_ba.ba_type = MT_BA_TYPE_RECIPIENT;
- sta_rec_ba.amsdu = params->amsdu;
- sta_rec_ba.ba_en = add << tid;
- sta_rec_ba.ssn = cpu_to_le16(params->ssn);
- sta_rec_ba.winsize = cpu_to_le16(params->buf_size);
-
- ret = __mt7615_mcu_set_sta_rec(dev, mvif->idx, msta->wcid.idx,
- mvif->omac_idx, &sta_rec_ba, buf_len);
- if (ret || !add)
- return ret;
+ struct {
+ struct wtbl_req_hdr hdr;
+ struct wtbl_ba ba;
+ } wtbl_req = {
+ .hdr = {
+ .wlan_idx = msta->wcid.idx,
+ .operation = WTBL_SET,
+ .tlv_num = cpu_to_le16(1),
+ },
+ .ba = {
+ .tag = cpu_to_le16(WTBL_BA),
+ .len = cpu_to_le16(sizeof(struct wtbl_ba)),
+ .tid = params->tid,
+ .ba_type = MT_BA_TYPE_RECIPIENT,
+ .rst_ba_tid = params->tid,
+ .rst_ba_sel = RST_BA_MAC_TID_MATCH,
+ .rst_ba_sb = 1,
+ },
+ };
+ struct {
+ struct sta_req_hdr hdr;
+ struct sta_rec_ba ba;
+ } sta_req = {
+ .hdr = {
+ .bss_idx = mvif->idx,
+ .wlan_idx = msta->wcid.idx,
+ .tlv_num = cpu_to_le16(1),
+ .is_tlv_append = 1,
+ .muar_idx = mvif->omac_idx,
+ },
+ .ba = {
+ .tag = cpu_to_le16(STA_REC_BA),
+ .len = cpu_to_le16(sizeof(struct sta_rec_ba)),
+ .tid = params->tid,
+ .ba_type = MT_BA_TYPE_RECIPIENT,
+ .amsdu = params->amsdu,
+ .ba_en = add << params->tid,
+ .ssn = cpu_to_le16(params->ssn),
+ .winsize = cpu_to_le16(params->buf_size),
+ },
+ };
+ int ret;
- buf_len = sizeof(struct wtbl_ba);
+ memcpy(wtbl_req.ba.peer_addr, params->sta->addr, ETH_ALEN);
- wtbl_ba.tag = cpu_to_le16(WTBL_BA);
- wtbl_ba.len = cpu_to_le16(buf_len);
- wtbl_ba.tid = tid;
- wtbl_ba.ba_type = MT_BA_TYPE_RECIPIENT;
- memcpy(wtbl_ba.peer_addr, sta->addr, ETH_ALEN);
- wtbl_ba.rst_ba_tid = tid;
- wtbl_ba.rst_ba_sel = RST_BA_MAC_TID_MATCH;
- wtbl_ba.rst_ba_sb = 1;
+ ret = __mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_STA_REC_UPDATE,
+ &sta_req, sizeof(sta_req), true);
+ if (ret || !add)
+ return ret;
- return __mt7615_mcu_set_wtbl(dev, msta->wcid.idx, WTBL_SET,
- &wtbl_ba, buf_len);
+ return __mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_WTBL_UPDATE,
+ &wtbl_req, sizeof(wtbl_req), true);
}
void mt7615_mcu_set_rates(struct mt7615_dev *dev, struct mt7615_sta *sta,
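For context: the rewritten BA paths above now build the whole firmware command as one structure (a request header immediately followed by a single TLV), fill it with designated initializers, and hand the contiguous buffer to __mt76_mcu_send_msg(). A minimal standalone sketch of that layout follows; struct names, field names, and tag values here are simplified placeholders, not the driver's exact definitions.

/* Sketch of the header + TLV firmware message pattern; placeholder types. */
#include <stdint.h>
#include <stddef.h>

struct req_hdr {
	uint8_t  wlan_idx;
	uint8_t  operation;
	uint16_t tlv_num;	/* little-endian on the wire */
	uint8_t  rsv[4];
} __attribute__((packed));

struct tlv_ba {
	uint16_t tag;
	uint16_t len;
	uint8_t  tid;
	uint8_t  ba_type;
	uint16_t winsize;
} __attribute__((packed));

struct ba_msg {
	struct req_hdr hdr;
	struct tlv_ba  ba;
} __attribute__((packed));

/* Build the message on the stack with designated initializers and send it
 * as one contiguous buffer through a caller-supplied transport hook.
 */
static int send_ba_update(int (*send)(const void *buf, size_t len),
			  uint8_t wcid, uint8_t tid, uint16_t winsize)
{
	struct ba_msg msg = {
		.hdr = {
			.wlan_idx = wcid,
			.operation = 1,		/* e.g. "set" */
			.tlv_num = 1,
		},
		.ba = {
			.tag = 0x0b,		/* placeholder tag value */
			.len = sizeof(msg.ba),
			.tid = tid,
			.winsize = winsize,
		},
	};

	return send(&msg, sizeof(msg));
}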
diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.h b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.h
index 9455f8fa475d..f8b51ad25220 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.h
+++ b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.h
@@ -70,6 +70,7 @@ enum {
enum {
MCU_EXT_CMD_PM_STATE_CTRL = 0x07,
MCU_EXT_CMD_CHANNEL_SWITCH = 0x08,
+ MCU_EXT_CMD_SET_TX_POWER_CTRL = 0x11,
MCU_EXT_CMD_EFUSE_BUFFER_MODE = 0x21,
MCU_EXT_CMD_STA_REC_UPDATE = 0x25,
MCU_EXT_CMD_BSS_INFO_UPDATE = 0x26,
@@ -105,25 +106,19 @@ enum {
#define STA_TYPE_STA BIT(0)
#define STA_TYPE_AP BIT(1)
#define STA_TYPE_ADHOC BIT(2)
-#define STA_TYPE_TDLS BIT(3)
#define STA_TYPE_WDS BIT(4)
#define STA_TYPE_BC BIT(5)
#define NETWORK_INFRA BIT(16)
#define NETWORK_P2P BIT(17)
#define NETWORK_IBSS BIT(18)
-#define NETWORK_MESH BIT(19)
-#define NETWORK_BOW BIT(20)
#define NETWORK_WDS BIT(21)
#define CONNECTION_INFRA_STA (STA_TYPE_STA | NETWORK_INFRA)
#define CONNECTION_INFRA_AP (STA_TYPE_AP | NETWORK_INFRA)
#define CONNECTION_P2P_GC (STA_TYPE_STA | NETWORK_P2P)
#define CONNECTION_P2P_GO (STA_TYPE_AP | NETWORK_P2P)
-#define CONNECTION_MESH_STA (STA_TYPE_STA | NETWORK_MESH)
-#define CONNECTION_MESH_AP (STA_TYPE_AP | NETWORK_MESH)
#define CONNECTION_IBSS_ADHOC (STA_TYPE_ADHOC | NETWORK_IBSS)
-#define CONNECTION_TDLS (STA_TYPE_STA | NETWORK_INFRA | STA_TYPE_TDLS)
#define CONNECTION_WDS (STA_TYPE_WDS | NETWORK_WDS)
#define CONNECTION_INFRA_BC (STA_TYPE_BC | NETWORK_INFRA)
@@ -131,41 +126,11 @@ enum {
#define CONN_STATE_CONNECT 1
#define CONN_STATE_PORT_SECURE 2
-struct dev_info {
- u8 omac_idx;
- u8 omac_addr[ETH_ALEN];
- u8 band_idx;
- u8 enable;
- u32 feature;
-};
-
enum {
DEV_INFO_ACTIVE,
DEV_INFO_MAX_NUM
};
-struct bss_info {
- u8 bss_idx;
- u8 bssid[ETH_ALEN];
- u8 omac_idx;
- u8 band_idx;
- u8 bmc_tx_wlan_idx; /* for bmc tx (sta mode use uc entry) */
- u8 wmm_idx;
- u32 network_type;
- u32 conn_type;
- u16 bcn_interval;
- u8 dtim_period;
- u8 enable;
- u32 feature;
-};
-
-struct bss_info_tag_handler {
- u32 tag;
- u32 len;
- void (*handler)(struct mt7615_dev *dev,
- struct bss_info *bss_info, struct sk_buff *skb);
-};
-
struct bss_info_omac {
__le16 tag;
__le16 len;
@@ -231,6 +196,13 @@ enum {
WTBL_RESET_ALL
};
+struct wtbl_req_hdr {
+ u8 wlan_idx;
+ u8 operation;
+ __le16 tlv_num;
+ u8 rsv[4];
+} __packed;
+
struct wtbl_generic {
__le16 tag;
__le16 len;
@@ -396,7 +368,8 @@ struct wtbl_raw {
__le32 val;
} __packed;
-#define MT7615_WTBL_UPDATE_MAX_SIZE (sizeof(struct wtbl_generic) + \
+#define MT7615_WTBL_UPDATE_MAX_SIZE (sizeof(struct wtbl_req_hdr) + \
+ sizeof(struct wtbl_generic) + \
sizeof(struct wtbl_rx) + \
sizeof(struct wtbl_ht) + \
sizeof(struct wtbl_vht) + \
@@ -430,6 +403,15 @@ enum {
WTBL_MAX_NUM
};
+struct sta_req_hdr {
+ u8 bss_idx;
+ u8 wlan_idx;
+ __le16 tlv_num;
+ u8 is_tlv_append;
+ u8 muar_idx;
+ u8 rsv[2];
+} __packed;
+
struct sta_rec_basic {
__le16 tag;
__le16 len;
diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h b/drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h
index 895c2904d7eb..f02ffcffe637 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h
+++ b/drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h
@@ -105,11 +105,14 @@ u32 mt7615_reg_map(struct mt7615_dev *dev, u32 addr);
int mt7615_register_device(struct mt7615_dev *dev);
void mt7615_unregister_device(struct mt7615_dev *dev);
int mt7615_eeprom_init(struct mt7615_dev *dev);
+int mt7615_eeprom_get_power_index(struct mt7615_dev *dev,
+ struct ieee80211_channel *chan,
+ u8 chain_idx);
int mt7615_dma_init(struct mt7615_dev *dev);
void mt7615_dma_cleanup(struct mt7615_dev *dev);
int mt7615_mcu_init(struct mt7615_dev *dev);
-int mt7615_mcu_set_dev_info(struct mt7615_dev *dev, struct ieee80211_vif *vif,
- int en);
+int mt7615_mcu_set_dev_info(struct mt7615_dev *dev,
+ struct ieee80211_vif *vif, bool enable);
int mt7615_mcu_set_bss_info(struct mt7615_dev *dev, struct ieee80211_vif *vif,
int en);
int mt7615_mcu_set_wtbl_key(struct mt7615_dev *dev, int wcid,
@@ -118,12 +121,11 @@ int mt7615_mcu_set_wtbl_key(struct mt7615_dev *dev, int wcid,
void mt7615_mcu_set_rates(struct mt7615_dev *dev, struct mt7615_sta *sta,
struct ieee80211_tx_rate *probe_rate,
struct ieee80211_tx_rate *rates);
-int mt7615_mcu_add_wtbl_bmc(struct mt7615_dev *dev, struct ieee80211_vif *vif);
-int mt7615_mcu_del_wtbl_bmc(struct mt7615_dev *dev, struct ieee80211_vif *vif);
+int mt7615_mcu_wtbl_bmc(struct mt7615_dev *dev, struct ieee80211_vif *vif,
+ bool enable);
int mt7615_mcu_add_wtbl(struct mt7615_dev *dev, struct ieee80211_vif *vif,
struct ieee80211_sta *sta);
-int mt7615_mcu_del_wtbl(struct mt7615_dev *dev, struct ieee80211_vif *vif,
- struct ieee80211_sta *sta);
+int mt7615_mcu_del_wtbl(struct mt7615_dev *dev, struct ieee80211_sta *sta);
int mt7615_mcu_del_wtbl_all(struct mt7615_dev *dev);
int mt7615_mcu_set_sta_rec_bmc(struct mt7615_dev *dev,
struct ieee80211_vif *vif, bool en);
@@ -168,6 +170,7 @@ int mt7615_mcu_set_eeprom(struct mt7615_dev *dev);
int mt7615_mcu_init_mac(struct mt7615_dev *dev);
int mt7615_mcu_set_rts_thresh(struct mt7615_dev *dev, u32 val);
int mt7615_mcu_ctrl_pm_state(struct mt7615_dev *dev, int enter);
+int mt7615_mcu_set_tx_power(struct mt7615_dev *dev);
void mt7615_mcu_exit(struct mt7615_dev *dev);
int mt7615_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
@@ -180,7 +183,6 @@ void mt7615_tx_complete_skb(struct mt76_dev *mdev, enum mt76_txq_id qid,
void mt7615_queue_rx_skb(struct mt76_dev *mdev, enum mt76_rxq_id q,
struct sk_buff *skb);
-void mt7615_rx_poll_complete(struct mt76_dev *mdev, enum mt76_rxq_id q);
void mt7615_sta_ps(struct mt76_dev *mdev, struct ieee80211_sta *sta, bool ps);
int mt7615_sta_add(struct mt76_dev *mdev, struct ieee80211_vif *vif,
struct ieee80211_sta *sta);
diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/pci.c b/drivers/net/wireless/mediatek/mt76/mt7615/pci.c
index 11122bd2d727..9e82cb53fd60 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7615/pci.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7615/pci.c
@@ -27,14 +27,15 @@ u32 mt7615_reg_map(struct mt7615_dev *dev, u32 addr)
return MT_PCIE_REMAP_BASE_2 + offset;
}
-void mt7615_rx_poll_complete(struct mt76_dev *mdev, enum mt76_rxq_id q)
+static void
+mt7615_rx_poll_complete(struct mt76_dev *mdev, enum mt76_rxq_id q)
{
struct mt7615_dev *dev = container_of(mdev, struct mt7615_dev, mt76);
mt7615_irq_enable(dev, MT_INT_RX_DONE(q));
}
-irqreturn_t mt7615_irq_handler(int irq, void *dev_instance)
+static irqreturn_t mt7615_irq_handler(int irq, void *dev_instance)
{
struct mt7615_dev *dev = dev_instance;
u32 intr;
@@ -49,7 +50,7 @@ irqreturn_t mt7615_irq_handler(int irq, void *dev_instance)
if (intr & MT_INT_TX_DONE_ALL) {
mt7615_irq_disable(dev, MT_INT_TX_DONE_ALL);
- tasklet_schedule(&dev->mt76.tx_tasklet);
+ napi_schedule(&dev->mt76.tx_napi);
}
if (intr & MT_INT_RX_DONE(0)) {
diff --git a/drivers/net/wireless/mediatek/mt76/mt76x0/init.c b/drivers/net/wireless/mediatek/mt76/mt76x0/init.c
index 71237d5cdf7f..cf7fc307322b 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76x0/init.c
+++ b/drivers/net/wireless/mediatek/mt76/mt76x0/init.c
@@ -271,8 +271,9 @@ mt76x0_init_txpower(struct mt76x02_dev *dev,
mt76x0_get_tx_power_per_rate(dev, chan, &t);
mt76x0_get_power_info(dev, chan, &tp);
- chan->max_power = (mt76x02_get_max_rate_power(&t) + tp) / 2;
- chan->orig_mpwr = chan->max_power;
+ chan->orig_mpwr = (mt76x02_get_max_rate_power(&t) + tp) / 2;
+ chan->max_power = min_t(int, chan->max_reg_power,
+ chan->orig_mpwr);
}
}
diff --git a/drivers/net/wireless/mediatek/mt76/mt76x0/main.c b/drivers/net/wireless/mediatek/mt76/mt76x0/main.c
index a7f335d6e8f8..d7bf7bc15e52 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76x0/main.c
+++ b/drivers/net/wireless/mediatek/mt76/mt76x0/main.c
@@ -25,7 +25,7 @@ mt76x0_set_channel(struct mt76x02_dev *dev, struct cfg80211_chan_def *chandef)
mt76_rr(dev, MT_CH_IDLE);
mt76_rr(dev, MT_CH_BUSY);
- mt76x02_edcca_init(dev, true);
+ mt76x02_edcca_init(dev);
if (mt76_is_mmio(dev)) {
mt76x02_dfs_init_params(dev);
diff --git a/drivers/net/wireless/mediatek/mt76/mt76x0/phy.c b/drivers/net/wireless/mediatek/mt76/mt76x0/phy.c
index e11da6900222..1ecfc334ae79 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76x0/phy.c
+++ b/drivers/net/wireless/mediatek/mt76/mt76x0/phy.c
@@ -422,15 +422,15 @@ mt76x0_phy_set_chan_bbp_params(struct mt76x02_dev *dev, u16 rf_bw_band)
static void mt76x0_phy_ant_select(struct mt76x02_dev *dev)
{
u16 ee_ant = mt76x02_eeprom_get(dev, MT_EE_ANTENNA);
+ u16 ee_cfg1 = mt76x02_eeprom_get(dev, MT_EE_CFG1_INIT);
u16 nic_conf2 = mt76x02_eeprom_get(dev, MT_EE_NIC_CONF_2);
- u32 wlan, coex3, cmb;
+ u32 wlan, coex3;
bool ant_div;
wlan = mt76_rr(dev, MT_WLAN_FUN_CTRL);
- cmb = mt76_rr(dev, MT_CMB_CTRL);
coex3 = mt76_rr(dev, MT_COEXCFG3);
- cmb &= ~(BIT(14) | BIT(12));
+ ee_ant &= ~(BIT(14) | BIT(12));
wlan &= ~(BIT(6) | BIT(5));
coex3 &= ~GENMASK(5, 2);
@@ -439,7 +439,7 @@ static void mt76x0_phy_ant_select(struct mt76x02_dev *dev)
ant_div = !(nic_conf2 & MT_EE_NIC_CONF_2_ANT_OPT) &&
(nic_conf2 & MT_EE_NIC_CONF_2_ANT_DIV);
if (ant_div)
- cmb |= BIT(12);
+ ee_ant |= BIT(12);
else
coex3 |= BIT(4);
coex3 |= BIT(3);
@@ -456,10 +456,11 @@ static void mt76x0_phy_ant_select(struct mt76x02_dev *dev)
}
if (is_mt7630(dev))
- cmb |= BIT(14) | BIT(11);
+ ee_ant |= BIT(14) | BIT(11);
mt76_wr(dev, MT_WLAN_FUN_CTRL, wlan);
- mt76_wr(dev, MT_CMB_CTRL, cmb);
+ mt76_rmw(dev, MT_CMB_CTRL, GENMASK(15, 0), ee_ant);
+ mt76_rmw(dev, MT_CSR_EE_CFG1, GENMASK(15, 0), ee_cfg1);
mt76_clear(dev, MT_COEXCFG0, BIT(2));
mt76_wr(dev, MT_COEXCFG3, coex3);
}
diff --git a/drivers/net/wireless/mediatek/mt76/mt76x0/usb.c b/drivers/net/wireless/mediatek/mt76/mt76x0/usb.c
index 2dc67e68c6a2..627ed1fc7b15 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76x0/usb.c
+++ b/drivers/net/wireless/mediatek/mt76/mt76x0/usb.c
@@ -183,7 +183,7 @@ static int mt76x0u_register_device(struct mt76x02_dev *dev)
/* check hw sg support in order to enable AMSDU */
if (dev->mt76.usb.sg_en)
- hw->max_tx_fragments = MT_SG_MAX_SIZE;
+ hw->max_tx_fragments = MT_TX_SG_MAX_SIZE;
else
hw->max_tx_fragments = 1;
diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02.h b/drivers/net/wireless/mediatek/mt76/mt76x02.h
index 687bd14b2d77..f7fd53a1738a 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76x02.h
+++ b/drivers/net/wireless/mediatek/mt76/mt76x02.h
@@ -90,7 +90,6 @@ struct mt76x02_dev {
struct sk_buff *rx_head;
- struct napi_struct tx_napi;
struct delayed_work cal_work;
struct delayed_work wdt_work;
diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_beacon.c b/drivers/net/wireless/mediatek/mt76/mt76x02_beacon.c
index e196b9c0a686..d61c686e08de 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76x02_beacon.c
+++ b/drivers/net/wireless/mediatek/mt76/mt76x02_beacon.c
@@ -189,10 +189,8 @@ mt76x02_resync_beacon_timer(struct mt76x02_dev *dev)
mt76_rmw_field(dev, MT_BEACON_TIME_CFG,
MT_BEACON_TIME_CFG_INTVAL, timer_val);
- if (dev->tbtt_count >= 64) {
+ if (dev->tbtt_count >= 64)
dev->tbtt_count = 0;
- return;
- }
}
EXPORT_SYMBOL_GPL(mt76x02_resync_beacon_timer);
diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_debugfs.c b/drivers/net/wireless/mediatek/mt76/mt76x02_debugfs.c
index b1d6fd4861e3..1b1e424ccbb2 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76x02_debugfs.c
+++ b/drivers/net/wireless/mediatek/mt76/mt76x02_debugfs.c
@@ -120,12 +120,16 @@ static int
mt76_edcca_set(void *data, u64 val)
{
struct mt76x02_dev *dev = data;
- enum nl80211_dfs_regions region = dev->dfs_pd.region;
+ enum nl80211_dfs_regions region = dev->mt76.region;
+
+ mutex_lock(&dev->mt76.mutex);
dev->ed_monitor_enabled = !!val;
dev->ed_monitor = dev->ed_monitor_enabled &&
region == NL80211_DFS_ETSI;
- mt76x02_edcca_init(dev, true);
+ mt76x02_edcca_init(dev);
+
+ mutex_unlock(&dev->mt76.mutex);
return 0;
}
@@ -153,7 +157,7 @@ void mt76x02_init_debugfs(struct mt76x02_dev *dev)
debugfs_create_u8("temperature", 0400, dir, &dev->cal.temp);
debugfs_create_bool("tpc", 0600, dir, &dev->enable_tpc);
- debugfs_create_file("edcca", 0400, dir, dev, &fops_edcca);
+ debugfs_create_file("edcca", 0600, dir, dev, &fops_edcca);
debugfs_create_file("ampdu_stat", 0400, dir, dev, &fops_ampdu_stat);
debugfs_create_file("dfs_stats", 0400, dir, dev, &fops_dfs_stat);
debugfs_create_devm_seqfile(dev->mt76.dev, "txpower", dir,
diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_dfs.c b/drivers/net/wireless/mediatek/mt76/mt76x02_dfs.c
index 17d12d212d1b..50e9b310e496 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76x02_dfs.c
+++ b/drivers/net/wireless/mediatek/mt76/mt76x02_dfs.c
@@ -283,7 +283,7 @@ static bool mt76x02_dfs_check_hw_pulse(struct mt76x02_dev *dev,
if (!pulse->period || !pulse->w1)
return false;
- switch (dev->dfs_pd.region) {
+ switch (dev->mt76.region) {
case NL80211_DFS_FCC:
if (pulse->engine > 3)
break;
@@ -457,7 +457,7 @@ static int mt76x02_dfs_create_sequence(struct mt76x02_dev *dev,
with_sum = event->width + cur_event->width;
sw_params = &dfs_pd->sw_dpd_params;
- switch (dev->dfs_pd.region) {
+ switch (dev->mt76.region) {
case NL80211_DFS_FCC:
case NL80211_DFS_JP:
if (with_sum < 600)
@@ -685,7 +685,7 @@ static void mt76x02_dfs_init_sw_detector(struct mt76x02_dev *dev)
{
struct mt76x02_dfs_pattern_detector *dfs_pd = &dev->dfs_pd;
- switch (dev->dfs_pd.region) {
+ switch (dev->mt76.region) {
case NL80211_DFS_FCC:
dfs_pd->sw_dpd_params.max_pri = MT_DFS_FCC_MAX_PRI;
dfs_pd->sw_dpd_params.min_pri = MT_DFS_FCC_MIN_PRI;
@@ -725,7 +725,7 @@ static void mt76x02_dfs_set_bbp_params(struct mt76x02_dev *dev)
break;
}
- switch (dev->dfs_pd.region) {
+ switch (dev->mt76.region) {
case NL80211_DFS_FCC:
radar_specs = &fcc_radar_specs[shift];
break;
@@ -836,7 +836,7 @@ void mt76x02_dfs_init_params(struct mt76x02_dev *dev)
struct cfg80211_chan_def *chandef = &dev->mt76.chandef;
if ((chandef->chan->flags & IEEE80211_CHAN_RADAR) &&
- dev->dfs_pd.region != NL80211_DFS_UNSET) {
+ dev->mt76.region != NL80211_DFS_UNSET) {
mt76x02_dfs_init_sw_detector(dev);
mt76x02_dfs_set_bbp_params(dev);
/* enable debug mode */
@@ -869,7 +869,7 @@ void mt76x02_dfs_init_detector(struct mt76x02_dev *dev)
INIT_LIST_HEAD(&dfs_pd->sequences);
INIT_LIST_HEAD(&dfs_pd->seq_pool);
- dfs_pd->region = NL80211_DFS_UNSET;
+ dev->mt76.region = NL80211_DFS_UNSET;
dfs_pd->last_sw_check = jiffies;
tasklet_init(&dfs_pd->dfs_tasklet, mt76x02_dfs_tasklet,
(unsigned long)dev);
@@ -882,14 +882,14 @@ mt76x02_dfs_set_domain(struct mt76x02_dev *dev,
struct mt76x02_dfs_pattern_detector *dfs_pd = &dev->dfs_pd;
mutex_lock(&dev->mt76.mutex);
- if (dfs_pd->region != region) {
+ if (dev->mt76.region != region) {
tasklet_disable(&dfs_pd->dfs_tasklet);
dev->ed_monitor = dev->ed_monitor_enabled &&
region == NL80211_DFS_ETSI;
- mt76x02_edcca_init(dev, true);
+ mt76x02_edcca_init(dev);
- dfs_pd->region = region;
+ dev->mt76.region = region;
mt76x02_dfs_init_params(dev);
tasklet_enable(&dfs_pd->dfs_tasklet);
}
diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_dfs.h b/drivers/net/wireless/mediatek/mt76/mt76x02_dfs.h
index 70b394e17340..0408613b45a4 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76x02_dfs.h
+++ b/drivers/net/wireless/mediatek/mt76/mt76x02_dfs.h
@@ -118,8 +118,6 @@ struct mt76x02_dfs_seq_stats {
};
struct mt76x02_dfs_pattern_detector {
- enum nl80211_dfs_regions region;
-
u8 chirp_pulse_cnt;
u32 chirp_pulse_ts;
diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_eeprom.h b/drivers/net/wireless/mediatek/mt76/mt76x02_eeprom.h
index e3442bc4e0a4..0ba536de3d6e 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76x02_eeprom.h
+++ b/drivers/net/wireless/mediatek/mt76/mt76x02_eeprom.h
@@ -26,6 +26,7 @@ enum mt76x02_eeprom_field {
MT_EE_MAC_ADDR = 0x004,
MT_EE_PCI_ID = 0x00A,
MT_EE_ANTENNA = 0x022,
+ MT_EE_CFG1_INIT = 0x024,
MT_EE_NIC_CONF_0 = 0x034,
MT_EE_NIC_CONF_1 = 0x036,
MT_EE_COUNTRY_REGION_5GHZ = 0x038,
diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c b/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c
index 56510a1a843a..82bafb5ac326 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c
+++ b/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c
@@ -420,30 +420,92 @@ void mt76x02_mac_write_txwi(struct mt76x02_dev *dev, struct mt76x02_txwi *txwi,
EXPORT_SYMBOL_GPL(mt76x02_mac_write_txwi);
static void
-mt76x02_mac_fill_tx_status(struct mt76x02_dev *dev,
+mt76x02_tx_rate_fallback(struct ieee80211_tx_rate *rates, int idx, int phy)
+{
+ u8 mcs, nss;
+
+ if (!idx)
+ return;
+
+ rates += idx - 1;
+ rates[1] = rates[0];
+ switch (phy) {
+ case MT_PHY_TYPE_VHT:
+ mcs = ieee80211_rate_get_vht_mcs(rates);
+ nss = ieee80211_rate_get_vht_nss(rates);
+
+ if (mcs == 0)
+ nss = max_t(int, nss - 1, 1);
+ else
+ mcs--;
+
+ ieee80211_rate_set_vht(rates + 1, mcs, nss);
+ break;
+ case MT_PHY_TYPE_HT_GF:
+ case MT_PHY_TYPE_HT:
+ /* MCS 8 falls back to MCS 0 */
+ if (rates[0].idx == 8) {
+ rates[1].idx = 0;
+ break;
+ }
+ /* fall through */
+ default:
+ rates[1].idx = max_t(int, rates[0].idx - 1, 0);
+ break;
+ }
+}
+
+static void
+mt76x02_mac_fill_tx_status(struct mt76x02_dev *dev, struct mt76x02_sta *msta,
struct ieee80211_tx_info *info,
struct mt76x02_tx_status *st, int n_frames)
{
struct ieee80211_tx_rate *rate = info->status.rates;
- int cur_idx, last_rate;
+ struct ieee80211_tx_rate last_rate;
+ u16 first_rate;
+ int retry = st->retry;
+ int phy;
int i;
if (!n_frames)
return;
- last_rate = min_t(int, st->retry, IEEE80211_TX_MAX_RATES - 1);
- mt76x02_mac_process_tx_rate(&rate[last_rate], st->rate,
+ phy = FIELD_GET(MT_RXWI_RATE_PHY, st->rate);
+
+ if (st->pktid & MT_PACKET_ID_HAS_RATE) {
+ first_rate = st->rate & ~MT_RXWI_RATE_INDEX;
+ first_rate |= st->pktid & MT_RXWI_RATE_INDEX;
+
+ mt76x02_mac_process_tx_rate(&rate[0], first_rate,
+ dev->mt76.chandef.chan->band);
+ } else if (rate[0].idx < 0) {
+ if (!msta)
+ return;
+
+ mt76x02_mac_process_tx_rate(&rate[0], msta->wcid.tx_info,
+ dev->mt76.chandef.chan->band);
+ }
+
+ mt76x02_mac_process_tx_rate(&last_rate, st->rate,
dev->mt76.chandef.chan->band);
- if (last_rate < IEEE80211_TX_MAX_RATES - 1)
- rate[last_rate + 1].idx = -1;
-
- cur_idx = rate[last_rate].idx + last_rate;
- for (i = 0; i <= last_rate; i++) {
- rate[i].flags = rate[last_rate].flags;
- rate[i].idx = max_t(int, 0, cur_idx - i);
- rate[i].count = 1;
+
+ for (i = 0; i < ARRAY_SIZE(info->status.rates); i++) {
+ retry--;
+ if (i + 1 == ARRAY_SIZE(info->status.rates)) {
+ info->status.rates[i] = last_rate;
+ info->status.rates[i].count = max_t(int, retry, 1);
+ break;
+ }
+
+ mt76x02_tx_rate_fallback(info->status.rates, i, phy);
+ if (info->status.rates[i].idx == last_rate.idx)
+ break;
+ }
+
+ if (i + 1 < ARRAY_SIZE(info->status.rates)) {
+ info->status.rates[i + 1].idx = -1;
+ info->status.rates[i + 1].count = 0;
}
- rate[last_rate].count = st->retry + 1 - last_rate;
info->status.ampdu_len = n_frames;
info->status.ampdu_ack_len = st->success ? n_frames : 0;
@@ -489,13 +551,19 @@ void mt76x02_send_tx_status(struct mt76x02_dev *dev,
mt76_tx_status_lock(mdev, &list);
if (wcid) {
- if (stat->pktid >= MT_PACKET_ID_FIRST)
+ if (mt76_is_skb_pktid(stat->pktid))
status.skb = mt76_tx_status_skb_get(mdev, wcid,
stat->pktid, &list);
if (status.skb)
status.info = IEEE80211_SKB_CB(status.skb);
}
+ if (!status.skb && !(stat->pktid & MT_PACKET_ID_HAS_RATE)) {
+ mt76_tx_status_unlock(mdev, &list);
+ rcu_read_unlock();
+ return;
+ }
+
if (msta && stat->aggr && !status.skb) {
u32 stat_val, stat_cache;
@@ -512,14 +580,14 @@ void mt76x02_send_tx_status(struct mt76x02_dev *dev,
return;
}
- mt76x02_mac_fill_tx_status(dev, status.info, &msta->status,
- msta->n_frames);
+ mt76x02_mac_fill_tx_status(dev, msta, status.info,
+ &msta->status, msta->n_frames);
msta->status = *stat;
msta->n_frames = 1;
*update = 0;
} else {
- mt76x02_mac_fill_tx_status(dev, status.info, stat, 1);
+ mt76x02_mac_fill_tx_status(dev, msta, status.info, stat, 1);
*update = 1;
}
@@ -945,12 +1013,12 @@ mt76x02_edcca_tx_enable(struct mt76x02_dev *dev, bool enable)
dev->ed_tx_blocked = !enable;
}
-void mt76x02_edcca_init(struct mt76x02_dev *dev, bool enable)
+void mt76x02_edcca_init(struct mt76x02_dev *dev)
{
dev->ed_trigger = 0;
dev->ed_silent = 0;
- if (dev->ed_monitor && enable) {
+ if (dev->ed_monitor) {
struct ieee80211_channel *chan = dev->mt76.chandef.chan;
u8 ed_th = chan->band == NL80211_BAND_5GHZ ? 0x0e : 0x20;
diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_mac.h b/drivers/net/wireless/mediatek/mt76/mt76x02_mac.h
index e4a9e0d0924b..cb39da79527a 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76x02_mac.h
+++ b/drivers/net/wireless/mediatek/mt76/mt76x02_mac.h
@@ -209,5 +209,5 @@ int mt76x02_mac_set_beacon(struct mt76x02_dev *dev, u8 vif_idx,
void mt76x02_mac_set_beacon_enable(struct mt76x02_dev *dev,
struct ieee80211_vif *vif, bool val);
-void mt76x02_edcca_init(struct mt76x02_dev *dev, bool enable);
+void mt76x02_edcca_init(struct mt76x02_dev *dev);
#endif
diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_mmio.c b/drivers/net/wireless/mediatek/mt76/mt76x02_mmio.c
index 7b7163bc3b62..467b28379870 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76x02_mmio.c
+++ b/drivers/net/wireless/mediatek/mt76/mt76x02_mmio.c
@@ -166,7 +166,8 @@ static void mt76x02_tx_tasklet(unsigned long data)
static int mt76x02_poll_tx(struct napi_struct *napi, int budget)
{
- struct mt76x02_dev *dev = container_of(napi, struct mt76x02_dev, tx_napi);
+ struct mt76x02_dev *dev = container_of(napi, struct mt76x02_dev,
+ mt76.tx_napi);
int i;
mt76x02_mac_poll_tx_status(dev, false);
@@ -245,9 +246,9 @@ int mt76x02_dma_init(struct mt76x02_dev *dev)
if (ret)
return ret;
- netif_tx_napi_add(&dev->mt76.napi_dev, &dev->tx_napi, mt76x02_poll_tx,
- NAPI_POLL_WEIGHT);
- napi_enable(&dev->tx_napi);
+ netif_tx_napi_add(&dev->mt76.napi_dev, &dev->mt76.tx_napi,
+ mt76x02_poll_tx, NAPI_POLL_WEIGHT);
+ napi_enable(&dev->mt76.tx_napi);
return 0;
}
@@ -303,7 +304,7 @@ irqreturn_t mt76x02_irq_handler(int irq, void *dev_instance)
if (intr & (MT_INT_TX_STAT | MT_INT_TX_DONE_ALL)) {
mt76x02_irq_disable(dev, MT_INT_TX_DONE_ALL);
- napi_schedule(&dev->tx_napi);
+ napi_schedule(&dev->mt76.tx_napi);
}
if (intr & MT_INT_GPTIMER) {
@@ -334,7 +335,6 @@ static void mt76x02_dma_enable(struct mt76x02_dev *dev)
void mt76x02_dma_cleanup(struct mt76x02_dev *dev)
{
tasklet_kill(&dev->mt76.tx_tasklet);
- netif_napi_del(&dev->tx_napi);
mt76_dma_cleanup(&dev->mt76);
}
EXPORT_SYMBOL_GPL(mt76x02_dma_cleanup);
@@ -454,7 +454,7 @@ static void mt76x02_watchdog_reset(struct mt76x02_dev *dev)
tasklet_disable(&dev->mt76.pre_tbtt_tasklet);
tasklet_disable(&dev->mt76.tx_tasklet);
- napi_disable(&dev->tx_napi);
+ napi_disable(&dev->mt76.tx_napi);
for (i = 0; i < ARRAY_SIZE(dev->mt76.napi); i++)
napi_disable(&dev->mt76.napi[i]);
@@ -508,8 +508,8 @@ static void mt76x02_watchdog_reset(struct mt76x02_dev *dev)
clear_bit(MT76_RESET, &dev->mt76.state);
tasklet_enable(&dev->mt76.tx_tasklet);
- napi_enable(&dev->tx_napi);
- napi_schedule(&dev->tx_napi);
+ napi_enable(&dev->mt76.tx_napi);
+ napi_schedule(&dev->mt76.tx_napi);
tasklet_enable(&dev->mt76.pre_tbtt_tasklet);
diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_regs.h b/drivers/net/wireless/mediatek/mt76/mt76x02_regs.h
index 2ce05b543dff..ea7833964ec0 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76x02_regs.h
+++ b/drivers/net/wireless/mediatek/mt76/mt76x02_regs.h
@@ -66,6 +66,9 @@
#define MT_WLAN_FUN_CTRL_GPIO_OUT GENMASK(23, 16) /* MT76x0 */
#define MT_WLAN_FUN_CTRL_GPIO_OUT_EN GENMASK(31, 24) /* MT76x0 */
+/* MT76x0 */
+#define MT_CSR_EE_CFG1 0x0104
+
#define MT_XO_CTRL0 0x0100
#define MT_XO_CTRL1 0x0104
#define MT_XO_CTRL2 0x0108
diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_txrx.c b/drivers/net/wireless/mediatek/mt76/mt76x02_txrx.c
index cf7abd9b7d2e..04118f08debc 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76x02_txrx.c
+++ b/drivers/net/wireless/mediatek/mt76/mt76x02_txrx.c
@@ -154,6 +154,7 @@ int mt76x02_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
struct mt76x02_dev *dev = container_of(mdev, struct mt76x02_dev, mt76);
struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)tx_info->skb->data;
struct mt76x02_txwi *txwi = txwi_ptr;
+ bool ampdu = IEEE80211_SKB_CB(tx_info->skb)->flags & IEEE80211_TX_CTL_AMPDU;
int hdrlen, len, pid, qsel = MT_QSEL_EDCA;
if (qid == MT_TXQ_PSD && wcid && wcid->idx < 128)
@@ -164,9 +165,15 @@ int mt76x02_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
mt76x02_mac_write_txwi(dev, txwi, tx_info->skb, wcid, sta, len);
pid = mt76_tx_status_skb_add(mdev, wcid, tx_info->skb);
+
+ /* encode packet rate for no-skb packet id to fix up status reporting */
+ if (pid == MT_PACKET_ID_NO_SKB)
+ pid = MT_PACKET_ID_HAS_RATE |
+ (le16_to_cpu(txwi->rate) & MT_RXWI_RATE_INDEX);
+
txwi->pktid = pid;
- if (pid >= MT_PACKET_ID_FIRST)
+ if (mt76_is_skb_pktid(pid) && ampdu)
qsel = MT_QSEL_MGMT;
tx_info->info = FIELD_PREP(MT_TXD_INFO_QSEL, qsel) |
diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_usb_core.c b/drivers/net/wireless/mediatek/mt76/mt76x02_usb_core.c
index 6b89f7eab26c..5e4f3a8c5784 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76x02_usb_core.c
+++ b/drivers/net/wireless/mediatek/mt76/mt76x02_usb_core.c
@@ -14,7 +14,7 @@
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
-#include "mt76x02.h"
+#include "mt76x02_usb.h"
static void mt76x02u_remove_dma_hdr(struct sk_buff *skb)
{
@@ -79,6 +79,7 @@ int mt76x02u_tx_prepare_skb(struct mt76_dev *mdev, void *data,
struct mt76x02_dev *dev = container_of(mdev, struct mt76x02_dev, mt76);
int pid, len = tx_info->skb->len, ep = q2ep(mdev->q_tx[qid].q->hw_idx);
struct mt76x02_txwi *txwi;
+ bool ampdu = IEEE80211_SKB_CB(tx_info->skb)->flags & IEEE80211_TX_CTL_AMPDU;
enum mt76_qsel qsel;
u32 flags;
@@ -89,9 +90,15 @@ int mt76x02u_tx_prepare_skb(struct mt76_dev *mdev, void *data,
skb_push(tx_info->skb, sizeof(*txwi));
pid = mt76_tx_status_skb_add(mdev, wcid, tx_info->skb);
+
+ /* encode packet rate for no-skb packet id to fix up status reporting */
+ if (pid == MT_PACKET_ID_NO_SKB)
+ pid = MT_PACKET_ID_HAS_RATE |
+ (le16_to_cpu(txwi->rate) & MT_RXWI_RATE_INDEX);
+
txwi->pktid = pid;
- if (pid >= MT_PACKET_ID_FIRST || ep == MT_EP_OUT_HCCA)
+ if ((mt76_is_skb_pktid(pid) && ampdu) || ep == MT_EP_OUT_HCCA)
qsel = MT_QSEL_MGMT;
else
qsel = MT_QSEL_EDCA;
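The "encode packet rate for no-skb packet id" comment in both tx-prepare paths above refers to the same trick: when no status skb is tracked for a frame, the low bits of the TXWI rate index are stashed in the packet id together with a flag bit, and the status handler later recombines them with the PHY bits of the reported rate. A hedged sketch of that bit packing, with illustrative masks rather than the driver's MT_PACKET_ID_HAS_RATE / MT_RXWI_RATE_INDEX values:

/* Illustrative bit layout only; not the driver's real macro values. */
#include <stdint.h>

#define PKTID_HAS_RATE	0x80	/* flag bit: pktid carries a rate index */
#define RATE_INDEX_MASK	0x3f	/* low bits of the rate field */

static inline uint8_t encode_pktid(uint16_t txwi_rate)
{
	return PKTID_HAS_RATE | (txwi_rate & RATE_INDEX_MASK);
}

static inline int pktid_has_rate(uint8_t pktid)
{
	return pktid & PKTID_HAS_RATE;
}

/* On the status side, rebuild the first rate from the reported rate's PHY
 * bits plus the index recovered from the packet id.
 */
static inline uint16_t decode_first_rate(uint16_t status_rate, uint8_t pktid)
{
	return (status_rate & ~RATE_INDEX_MASK) | (pktid & RATE_INDEX_MASK);
}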
diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/init.c b/drivers/net/wireless/mediatek/mt76/mt76x2/init.c
index c6078e90ca43..97c3543eed8a 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76x2/init.c
+++ b/drivers/net/wireless/mediatek/mt76/mt76x2/init.c
@@ -173,13 +173,14 @@ void mt76x2_init_txpower(struct mt76x02_dev *dev,
mt76x2_get_power_info(dev, &txp, chan);
mt76x2_get_rate_power(dev, &t, chan);
- chan->max_power = mt76x02_get_max_rate_power(&t) +
+ chan->orig_mpwr = mt76x02_get_max_rate_power(&t) +
txp.target_power;
- chan->max_power = DIV_ROUND_UP(chan->max_power, 2);
+ chan->orig_mpwr = DIV_ROUND_UP(chan->orig_mpwr, 2);
/* convert to combined output power on 2x2 devices */
- chan->max_power += 3;
- chan->orig_mpwr = chan->max_power;
+ chan->orig_mpwr += 3;
+ chan->max_power = min_t(int, chan->max_reg_power,
+ chan->orig_mpwr);
}
}
EXPORT_SYMBOL_GPL(mt76x2_init_txpower);
diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/pci_main.c b/drivers/net/wireless/mediatek/mt76/mt76x2/pci_main.c
index e416eee6a306..3a1467326f4d 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76x2/pci_main.c
+++ b/drivers/net/wireless/mediatek/mt76/mt76x2/pci_main.c
@@ -54,14 +54,14 @@ mt76x2_set_channel(struct mt76x02_dev *dev, struct cfg80211_chan_def *chandef)
int ret;
cancel_delayed_work_sync(&dev->cal_work);
+ tasklet_disable(&dev->mt76.pre_tbtt_tasklet);
+ tasklet_disable(&dev->dfs_pd.dfs_tasklet);
+ mutex_lock(&dev->mt76.mutex);
set_bit(MT76_RESET, &dev->mt76.state);
mt76_set_channel(&dev->mt76);
- tasklet_disable(&dev->mt76.pre_tbtt_tasklet);
- tasklet_disable(&dev->dfs_pd.dfs_tasklet);
-
mt76x2_mac_stop(dev, true);
ret = mt76x2_phy_set_channel(dev, chandef);
@@ -72,10 +72,12 @@ mt76x2_set_channel(struct mt76x02_dev *dev, struct cfg80211_chan_def *chandef)
mt76x02_dfs_init_params(dev);
mt76x2_mac_resume(dev);
- tasklet_enable(&dev->dfs_pd.dfs_tasklet);
- tasklet_enable(&dev->mt76.pre_tbtt_tasklet);
clear_bit(MT76_RESET, &dev->mt76.state);
+ mutex_unlock(&dev->mt76.mutex);
+
+ tasklet_enable(&dev->dfs_pd.dfs_tasklet);
+ tasklet_enable(&dev->mt76.pre_tbtt_tasklet);
mt76_txq_schedule_all(&dev->mt76);
@@ -111,14 +113,14 @@ mt76x2_config(struct ieee80211_hw *hw, u32 changed)
}
}
+ mutex_unlock(&dev->mt76.mutex);
+
if (changed & IEEE80211_CONF_CHANGE_CHANNEL) {
ieee80211_stop_queues(hw);
ret = mt76x2_set_channel(dev, &hw->conf.chandef);
ieee80211_wake_queues(hw);
}
- mutex_unlock(&dev->mt76.mutex);
-
return ret;
}
diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/pci_phy.c b/drivers/net/wireless/mediatek/mt76/mt76x2/pci_phy.c
index cc1aebcb0696..2edf1bd0c18c 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76x2/pci_phy.c
+++ b/drivers/net/wireless/mediatek/mt76/mt76x2/pci_phy.c
@@ -74,7 +74,7 @@ mt76x2_phy_channel_calibrate(struct mt76x02_dev *dev, bool mac_stopped)
mt76x2_mac_resume(dev);
mt76x2_apply_gain_adj(dev);
- mt76x02_edcca_init(dev, true);
+ mt76x02_edcca_init(dev);
dev->cal.channel_cal_done = true;
}
@@ -294,10 +294,16 @@ void mt76x2_phy_calibrate(struct work_struct *work)
struct mt76x02_dev *dev;
dev = container_of(work, struct mt76x02_dev, cal_work.work);
+
+ mutex_lock(&dev->mt76.mutex);
+
mt76x2_phy_channel_calibrate(dev, false);
mt76x2_phy_tssi_compensate(dev);
mt76x2_phy_temp_compensate(dev);
mt76x2_phy_update_channel_gain(dev);
+
+ mutex_unlock(&dev->mt76.mutex);
+
ieee80211_queue_delayed_work(mt76_hw(dev), &dev->cal_work,
MT_CALIBRATE_INTERVAL);
}
diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/usb_init.c b/drivers/net/wireless/mediatek/mt76/mt76x2/usb_init.c
index f2c57d5b87f9..94f52f98019b 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76x2/usb_init.c
+++ b/drivers/net/wireless/mediatek/mt76/mt76x2/usb_init.c
@@ -225,7 +225,7 @@ int mt76x2u_register_device(struct mt76x02_dev *dev)
/* check hw sg support in order to enable AMSDU */
if (dev->mt76.usb.sg_en)
- hw->max_tx_fragments = MT_SG_MAX_SIZE;
+ hw->max_tx_fragments = MT_TX_SG_MAX_SIZE;
else
hw->max_tx_fragments = 1;
diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/usb_main.c b/drivers/net/wireless/mediatek/mt76/mt76x2/usb_main.c
index 97bcf6494ec1..e4dfc3bea3c5 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76x2/usb_main.c
+++ b/drivers/net/wireless/mediatek/mt76/mt76x2/usb_main.c
@@ -48,22 +48,23 @@ mt76x2u_set_channel(struct mt76x02_dev *dev,
int err;
cancel_delayed_work_sync(&dev->cal_work);
+ dev->beacon_ops->pre_tbtt_enable(dev, false);
+
+ mutex_lock(&dev->mt76.mutex);
set_bit(MT76_RESET, &dev->mt76.state);
mt76_set_channel(&dev->mt76);
- dev->beacon_ops->pre_tbtt_enable(dev, false);
-
mt76x2_mac_stop(dev, false);
err = mt76x2u_phy_set_channel(dev, chandef);
mt76x2_mac_resume(dev);
- mt76x02_edcca_init(dev, true);
-
- dev->beacon_ops->pre_tbtt_enable(dev, true);
clear_bit(MT76_RESET, &dev->mt76.state);
+ mutex_unlock(&dev->mt76.mutex);
+
+ dev->beacon_ops->pre_tbtt_enable(dev, true);
mt76_txq_schedule_all(&dev->mt76);
return err;
@@ -85,12 +86,6 @@ mt76x2u_config(struct ieee80211_hw *hw, u32 changed)
mt76_wr(dev, MT_RX_FILTR_CFG, dev->mt76.rxfilter);
}
- if (changed & IEEE80211_CONF_CHANGE_CHANNEL) {
- ieee80211_stop_queues(hw);
- err = mt76x2u_set_channel(dev, &hw->conf.chandef);
- ieee80211_wake_queues(hw);
- }
-
if (changed & IEEE80211_CONF_CHANGE_POWER) {
dev->mt76.txpower_conf = hw->conf.power_level * 2;
@@ -103,6 +98,12 @@ mt76x2u_config(struct ieee80211_hw *hw, u32 changed)
mutex_unlock(&dev->mt76.mutex);
+ if (changed & IEEE80211_CONF_CHANGE_CHANNEL) {
+ ieee80211_stop_queues(hw);
+ err = mt76x2u_set_channel(dev, &hw->conf.chandef);
+ ieee80211_wake_queues(hw);
+ }
+
return err;
}
diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/usb_phy.c b/drivers/net/wireless/mediatek/mt76/mt76x2/usb_phy.c
index 07f67cb6854c..dfd54f9b0e97 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76x2/usb_phy.c
+++ b/drivers/net/wireless/mediatek/mt76/mt76x2/usb_phy.c
@@ -45,7 +45,7 @@ mt76x2u_phy_channel_calibrate(struct mt76x02_dev *dev, bool mac_stopped)
if (!mac_stopped)
mt76x2_mac_resume(dev);
mt76x2_apply_gain_adj(dev);
- mt76x02_edcca_init(dev, true);
+ mt76x02_edcca_init(dev);
dev->cal.channel_cal_done = true;
}
@@ -55,10 +55,15 @@ void mt76x2u_phy_calibrate(struct work_struct *work)
struct mt76x02_dev *dev;
dev = container_of(work, struct mt76x02_dev, cal_work.work);
+
+ mutex_lock(&dev->mt76.mutex);
+
mt76x2u_phy_channel_calibrate(dev, false);
mt76x2_phy_tssi_compensate(dev);
mt76x2_phy_update_channel_gain(dev);
+ mutex_unlock(&dev->mt76.mutex);
+
ieee80211_queue_delayed_work(mt76_hw(dev), &dev->cal_work,
MT_CALIBRATE_INTERVAL);
}
diff --git a/drivers/net/wireless/mediatek/mt76/usb.c b/drivers/net/wireless/mediatek/mt76/usb.c
index bbaa1365bbda..fb87ce7fbdf6 100644
--- a/drivers/net/wireless/mediatek/mt76/usb.c
+++ b/drivers/net/wireless/mediatek/mt76/usb.c
@@ -267,12 +267,10 @@ mt76u_set_endpoints(struct usb_interface *intf,
if (usb_endpoint_is_bulk_in(ep_desc) &&
in_ep < __MT_EP_IN_MAX) {
usb->in_ep[in_ep] = usb_endpoint_num(ep_desc);
- usb->in_max_packet = usb_endpoint_maxp(ep_desc);
in_ep++;
} else if (usb_endpoint_is_bulk_out(ep_desc) &&
out_ep < __MT_EP_OUT_MAX) {
usb->out_ep[out_ep] = usb_endpoint_num(ep_desc);
- usb->out_max_packet = usb_endpoint_maxp(ep_desc);
out_ep++;
}
}
@@ -333,12 +331,13 @@ mt76u_refill_rx(struct mt76_dev *dev, struct urb *urb, int nsgs, gfp_t gfp)
}
static int
-mt76u_urb_alloc(struct mt76_dev *dev, struct mt76_queue_entry *e)
+mt76u_urb_alloc(struct mt76_dev *dev, struct mt76_queue_entry *e,
+ int sg_max_size)
{
unsigned int size = sizeof(struct urb);
if (dev->usb.sg_en)
- size += MT_SG_MAX_SIZE * sizeof(struct scatterlist);
+ size += sg_max_size * sizeof(struct scatterlist);
e->urb = kzalloc(size, GFP_KERNEL);
if (!e->urb)
@@ -357,11 +356,12 @@ mt76u_rx_urb_alloc(struct mt76_dev *dev, struct mt76_queue_entry *e)
{
int err;
- err = mt76u_urb_alloc(dev, e);
+ err = mt76u_urb_alloc(dev, e, MT_RX_SG_MAX_SIZE);
if (err)
return err;
- return mt76u_refill_rx(dev, e->urb, MT_SG_MAX_SIZE, GFP_KERNEL);
+ return mt76u_refill_rx(dev, e->urb, MT_RX_SG_MAX_SIZE,
+ GFP_KERNEL);
}
static void mt76u_urb_free(struct urb *urb)
@@ -429,6 +429,42 @@ static int mt76u_get_rx_entry_len(u8 *data, u32 data_len)
return dma_len;
}
+static struct sk_buff *
+mt76u_build_rx_skb(void *data, int len, int buf_size)
+{
+ struct sk_buff *skb;
+
+ if (SKB_WITH_OVERHEAD(buf_size) < MT_DMA_HDR_LEN + len) {
+ struct page *page;
+
+ /* slow path, not enough space for data and
+ * skb_shared_info
+ */
+ skb = alloc_skb(MT_SKB_HEAD_LEN, GFP_ATOMIC);
+ if (!skb)
+ return NULL;
+
+ skb_put_data(skb, data + MT_DMA_HDR_LEN, MT_SKB_HEAD_LEN);
+ data += (MT_DMA_HDR_LEN + MT_SKB_HEAD_LEN);
+ page = virt_to_head_page(data);
+ skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
+ page, data - page_address(page),
+ len - MT_SKB_HEAD_LEN, buf_size);
+
+ return skb;
+ }
+
+ /* fast path */
+ skb = build_skb(data, buf_size);
+ if (!skb)
+ return NULL;
+
+ skb_reserve(skb, MT_DMA_HDR_LEN);
+ __skb_put(skb, len);
+
+ return skb;
+}
+
static int
mt76u_process_rx_entry(struct mt76_dev *dev, struct urb *urb)
{
@@ -446,19 +482,11 @@ mt76u_process_rx_entry(struct mt76_dev *dev, struct urb *urb)
return 0;
data_len = min_t(int, len, data_len - MT_DMA_HDR_LEN);
- if (MT_DMA_HDR_LEN + data_len > SKB_WITH_OVERHEAD(q->buf_size)) {
- dev_err_ratelimited(dev->dev, "rx data too big %d\n", data_len);
- return 0;
- }
-
- skb = build_skb(data, q->buf_size);
+ skb = mt76u_build_rx_skb(data, data_len, q->buf_size);
if (!skb)
return 0;
- skb_reserve(skb, MT_DMA_HDR_LEN);
- __skb_put(skb, data_len);
len -= data_len;
-
while (len > 0 && nsgs < urb->num_sgs) {
data_len = min_t(int, len, urb->sg[nsgs].length);
skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
@@ -577,8 +605,9 @@ static int mt76u_alloc_rx(struct mt76_dev *dev)
if (!q->entry)
return -ENOMEM;
- q->buf_size = dev->usb.sg_en ? MT_RX_BUF_SIZE : PAGE_SIZE;
q->ndesc = MT_NUM_RX_ENTRIES;
+ q->buf_size = PAGE_SIZE;
+
for (i = 0; i < q->ndesc; i++) {
err = mt76u_rx_urb_alloc(dev, &q->entry[i]);
if (err < 0)
@@ -735,7 +764,7 @@ mt76u_tx_setup_buffers(struct mt76_dev *dev, struct sk_buff *skb,
urb->transfer_buffer = skb->data;
return 0;
} else {
- sg_init_table(urb->sg, MT_SG_MAX_SIZE);
+ sg_init_table(urb->sg, MT_TX_SG_MAX_SIZE);
urb->num_sgs = skb_to_sgvec(skb, urb->sg, 0, skb->len);
if (urb->num_sgs == 0)
return -ENOMEM;
@@ -829,7 +858,8 @@ static int mt76u_alloc_tx(struct mt76_dev *dev)
q->ndesc = MT_NUM_TX_ENTRIES;
for (j = 0; j < q->ndesc; j++) {
- err = mt76u_urb_alloc(dev, &q->entry[j]);
+ err = mt76u_urb_alloc(dev, &q->entry[j],
+ MT_TX_SG_MAX_SIZE);
if (err < 0)
return err;
}
diff --git a/drivers/net/wireless/mediatek/mt7601u/dma.c b/drivers/net/wireless/mediatek/mt7601u/dma.c
index 66d60283e456..f6a0454abe04 100644
--- a/drivers/net/wireless/mediatek/mt7601u/dma.c
+++ b/drivers/net/wireless/mediatek/mt7601u/dma.c
@@ -185,10 +185,23 @@ static void mt7601u_complete_rx(struct urb *urb)
struct mt7601u_rx_queue *q = &dev->rx_q;
unsigned long flags;
- spin_lock_irqsave(&dev->rx_lock, flags);
+ /* do not schedule the rx tasklet if the urb has been unlinked
+ * or the device has been removed
+ */
+ switch (urb->status) {
+ case -ECONNRESET:
+ case -ESHUTDOWN:
+ case -ENOENT:
+ return;
+ default:
+ dev_err_ratelimited(dev->dev, "rx urb failed: %d\n",
+ urb->status);
+ /* fall through */
+ case 0:
+ break;
+ }
- if (mt7601u_urb_has_error(urb))
- dev_err(dev->dev, "Error: RX urb failed:%d\n", urb->status);
+ spin_lock_irqsave(&dev->rx_lock, flags);
if (WARN_ONCE(q->e[q->end].urb != urb, "RX urb mismatch"))
goto out;
@@ -220,14 +233,25 @@ static void mt7601u_complete_tx(struct urb *urb)
struct sk_buff *skb;
unsigned long flags;
- spin_lock_irqsave(&dev->tx_lock, flags);
+ switch (urb->status) {
+ case -ECONNRESET:
+ case -ESHUTDOWN:
+ case -ENOENT:
+ return;
+ default:
+ dev_err_ratelimited(dev->dev, "tx urb failed: %d\n",
+ urb->status);
+ /* fall through */
+ case 0:
+ break;
+ }
- if (mt7601u_urb_has_error(urb))
- dev_err(dev->dev, "Error: TX urb failed:%d\n", urb->status);
+ spin_lock_irqsave(&dev->tx_lock, flags);
if (WARN_ONCE(q->e[q->start].urb != urb, "TX urb mismatch"))
goto out;
skb = q->e[q->start].skb;
+ q->e[q->start].skb = NULL;
trace_mt_tx_dma_done(dev, skb);
__skb_queue_tail(&dev->tx_skb_done, skb);
@@ -355,19 +379,9 @@ int mt7601u_dma_enqueue_tx(struct mt7601u_dev *dev, struct sk_buff *skb,
static void mt7601u_kill_rx(struct mt7601u_dev *dev)
{
int i;
- unsigned long flags;
-
- spin_lock_irqsave(&dev->rx_lock, flags);
-
- for (i = 0; i < dev->rx_q.entries; i++) {
- int next = dev->rx_q.end;
- spin_unlock_irqrestore(&dev->rx_lock, flags);
- usb_poison_urb(dev->rx_q.e[next].urb);
- spin_lock_irqsave(&dev->rx_lock, flags);
- }
-
- spin_unlock_irqrestore(&dev->rx_lock, flags);
+ for (i = 0; i < dev->rx_q.entries; i++)
+ usb_poison_urb(dev->rx_q.e[i].urb);
}
static int mt7601u_submit_rx_buf(struct mt7601u_dev *dev,
@@ -437,10 +451,10 @@ static void mt7601u_free_tx_queue(struct mt7601u_tx_queue *q)
{
int i;
- WARN_ON(q->used);
-
for (i = 0; i < q->entries; i++) {
usb_poison_urb(q->e[i].urb);
+ if (q->e[i].skb)
+ mt7601u_tx_status(q->dev, q->e[i].skb);
usb_free_urb(q->e[i].urb);
}
}
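The reworked mt7601u completion handlers above follow a common USB pattern: return immediately for the statuses that mean the urb was unlinked or the device is gone, rate-limit messages for genuine errors, and fall through for status 0 before taking any locks. A simplified standalone illustration; "struct example_dev" is hypothetical.

/* Simplified urb completion handler; per-device struct is hypothetical. */
#include <linux/usb.h>
#include <linux/device.h>

struct example_dev {
	struct device *dev;
};

static void example_usb_complete(struct urb *urb)
{
	struct example_dev *edev = urb->context;

	switch (urb->status) {
	case -ECONNRESET:	/* urb was unlinked */
	case -ESHUTDOWN:	/* endpoint disabled / device removed */
	case -ENOENT:		/* urb was killed */
		return;		/* do not log or reschedule */
	default:
		dev_err_ratelimited(edev->dev, "urb failed: %d\n",
				    urb->status);
		/* fall through */
	case 0:
		break;
	}

	/* normal completion work (locking, requeueing) continues here */
}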
diff --git a/drivers/net/wireless/mediatek/mt7601u/tx.c b/drivers/net/wireless/mediatek/mt7601u/tx.c
index 906e19c5f628..f3dff8319a4c 100644
--- a/drivers/net/wireless/mediatek/mt7601u/tx.c
+++ b/drivers/net/wireless/mediatek/mt7601u/tx.c
@@ -109,9 +109,9 @@ void mt7601u_tx_status(struct mt7601u_dev *dev, struct sk_buff *skb)
info->status.rates[0].idx = -1;
info->flags |= IEEE80211_TX_STAT_ACK;
- spin_lock(&dev->mac_lock);
+ spin_lock_bh(&dev->mac_lock);
ieee80211_tx_status(dev->hw, skb);
- spin_unlock(&dev->mac_lock);
+ spin_unlock_bh(&dev->mac_lock);
}
static int mt7601u_skb_rooms(struct mt7601u_dev *dev, struct sk_buff *skb)
diff --git a/drivers/net/wireless/quantenna/qtnfmac/commands.c b/drivers/net/wireless/quantenna/qtnfmac/commands.c
index 459f6b81d2eb..dc0c7244b60e 100644
--- a/drivers/net/wireless/quantenna/qtnfmac/commands.c
+++ b/drivers/net/wireless/quantenna/qtnfmac/commands.c
@@ -1011,9 +1011,8 @@ qtnf_parse_variable_mac_info(struct qtnf_wmac *mac,
if (WARN_ON(resp->n_reg_rules > NL80211_MAX_SUPP_REG_RULES))
return -E2BIG;
- mac->rd = kzalloc(sizeof(*mac->rd) +
- sizeof(struct ieee80211_reg_rule) *
- resp->n_reg_rules, GFP_KERNEL);
+ mac->rd = kzalloc(struct_size(mac->rd, reg_rules, resp->n_reg_rules),
+ GFP_KERNEL);
if (!mac->rd)
return -ENOMEM;
diff --git a/drivers/net/wireless/ralink/rt2x00/rt2800lib.c b/drivers/net/wireless/ralink/rt2x00/rt2800lib.c
index 621cd4ce69e2..c9b957ac5733 100644
--- a/drivers/net/wireless/ralink/rt2x00/rt2800lib.c
+++ b/drivers/net/wireless/ralink/rt2x00/rt2800lib.c
@@ -30,6 +30,10 @@
#include "rt2800lib.h"
#include "rt2800.h"
+static bool modparam_watchdog;
+module_param_named(watchdog, modparam_watchdog, bool, S_IRUGO);
+MODULE_PARM_DESC(watchdog, "Enable watchdog to detect tx/rx hangs and reset hardware if detected");
+
/*
* Register access.
* All access to the CSR registers will go through the methods
@@ -1212,6 +1216,63 @@ void rt2800_txdone_nostatus(struct rt2x00_dev *rt2x00dev)
}
EXPORT_SYMBOL_GPL(rt2800_txdone_nostatus);
+static int rt2800_check_hung(struct data_queue *queue)
+{
+ unsigned int cur_idx = rt2800_drv_get_dma_done(queue);
+
+ if (queue->wd_idx != cur_idx)
+ queue->wd_count = 0;
+ else
+ queue->wd_count++;
+
+ return queue->wd_count > 16;
+}
+
+void rt2800_watchdog(struct rt2x00_dev *rt2x00dev)
+{
+ struct data_queue *queue;
+ bool hung_tx = false;
+ bool hung_rx = false;
+
+ if (test_bit(DEVICE_STATE_SCANNING, &rt2x00dev->flags))
+ return;
+
+ queue_for_each(rt2x00dev, queue) {
+ switch (queue->qid) {
+ case QID_AC_VO:
+ case QID_AC_VI:
+ case QID_AC_BE:
+ case QID_AC_BK:
+ case QID_MGMT:
+ if (rt2x00queue_empty(queue))
+ continue;
+ hung_tx = rt2800_check_hung(queue);
+ break;
+ case QID_RX:
+ /* For station mode we should receive at least
+ * beacons. TODO: find a good way to detect
+ * RX hangs for AP mode.
+ */
+ if (rt2x00dev->intf_sta_count == 0)
+ continue;
+ hung_rx = rt2800_check_hung(queue);
+ break;
+ default:
+ break;
+ }
+ }
+
+ if (hung_tx)
+ rt2x00_warn(rt2x00dev, "Watchdog TX hung detected\n");
+
+ if (hung_rx)
+ rt2x00_warn(rt2x00dev, "Watchdog RX hung detected\n");
+
+ if (hung_tx || hung_rx)
+ ieee80211_restart_hw(rt2x00dev->hw);
+}
+EXPORT_SYMBOL_GPL(rt2800_watchdog);
+
static unsigned int rt2800_hw_beacon_base(struct rt2x00_dev *rt2x00dev,
unsigned int index)
{
@@ -1593,14 +1654,15 @@ static void rt2800_config_wcid_attr_cipher(struct rt2x00_dev *rt2x00dev,
offset = MAC_IVEIV_ENTRY(key->hw_key_idx);
- memset(&iveiv_entry, 0, sizeof(iveiv_entry));
+ rt2800_register_multiread(rt2x00dev, offset,
+ &iveiv_entry, sizeof(iveiv_entry));
if ((crypto->cipher == CIPHER_TKIP) ||
(crypto->cipher == CIPHER_TKIP_NO_MIC) ||
(crypto->cipher == CIPHER_AES))
iveiv_entry.iv[3] |= 0x20;
iveiv_entry.iv[3] |= key->keyidx << 6;
rt2800_register_multiwrite(rt2x00dev, offset,
- &iveiv_entry, sizeof(iveiv_entry));
+ &iveiv_entry, sizeof(iveiv_entry));
}
int rt2800_config_shared_key(struct rt2x00_dev *rt2x00dev,
@@ -1789,6 +1851,25 @@ int rt2800_sta_remove(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
}
EXPORT_SYMBOL_GPL(rt2800_sta_remove);
+void rt2800_pre_reset_hw(struct rt2x00_dev *rt2x00dev)
+{
+ struct rt2800_drv_data *drv_data = rt2x00dev->drv_data;
+ struct data_queue *queue = rt2x00dev->bcn;
+ struct queue_entry *entry;
+ int i, wcid;
+
+ for (wcid = WCID_START; wcid < WCID_END; wcid++) {
+ drv_data->wcid_to_sta[wcid - WCID_START] = NULL;
+ __clear_bit(wcid - WCID_START, drv_data->sta_ids);
+ }
+
+ for (i = 0; i < queue->limit; i++) {
+ entry = &queue->entries[i];
+ clear_bit(ENTRY_BCN_ASSIGNED, &entry->flags);
+ }
+}
+EXPORT_SYMBOL_GPL(rt2800_pre_reset_hw);
+
void rt2800_config_filter(struct rt2x00_dev *rt2x00dev,
const unsigned int filter_flags)
{
@@ -6006,13 +6087,11 @@ static int rt2800_init_registers(struct rt2x00_dev *rt2x00dev)
* ASIC will keep garbage value after boot, clear encryption keys.
*/
for (i = 0; i < 4; i++)
- rt2800_register_write(rt2x00dev,
- SHARED_KEY_MODE_ENTRY(i), 0);
+ rt2800_register_write(rt2x00dev, SHARED_KEY_MODE_ENTRY(i), 0);
for (i = 0; i < 256; i++) {
rt2800_config_wcid(rt2x00dev, NULL, i);
rt2800_delete_wcid_attr(rt2x00dev, i);
- rt2800_register_write(rt2x00dev, MAC_IVEIV_ENTRY(i), 0);
}
/*
@@ -10211,6 +10290,13 @@ int rt2800_probe_hw(struct rt2x00_dev *rt2x00dev)
__set_bit(REQUIRE_TASKLET_CONTEXT, &rt2x00dev->cap_flags);
}
+ if (modparam_watchdog) {
+ __set_bit(CAPABILITY_RESTART_HW, &rt2x00dev->cap_flags);
+ rt2x00dev->link.watchdog_interval = msecs_to_jiffies(100);
+ } else {
+ rt2x00dev->link.watchdog_disabled = true;
+ }
+
/*
* Set the rssi offset.
*/
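The watchdog added above boils down to a per-queue progress check: read the DMA-done index on every watchdog tick, reset a counter whenever it moves, and declare a hang once it has stayed put for too many consecutive ticks. A generic sketch of that idea; the threshold and names are illustrative, not the driver's.

/* Generic "no DMA progress" detector; illustrative only. */
#include <stdbool.h>

struct wd_state {
	unsigned int last_idx;
	unsigned int stuck_count;
};

static bool queue_looks_hung(struct wd_state *wd, unsigned int cur_idx,
			     bool queue_has_work)
{
	if (!queue_has_work || wd->last_idx != cur_idx) {
		/* index moved (or nothing pending): record it and reset */
		wd->last_idx = cur_idx;
		wd->stuck_count = 0;
		return false;
	}

	/* same index as last tick: count it, flag a hang past a threshold */
	return ++wd->stuck_count > 16;
}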
diff --git a/drivers/net/wireless/ralink/rt2x00/rt2800lib.h b/drivers/net/wireless/ralink/rt2x00/rt2800lib.h
index 48adc6cc3233..1139405c0ebb 100644
--- a/drivers/net/wireless/ralink/rt2x00/rt2800lib.h
+++ b/drivers/net/wireless/ralink/rt2x00/rt2800lib.h
@@ -65,6 +65,7 @@ struct rt2800_ops {
const u8 *data, const size_t len);
int (*drv_init_registers)(struct rt2x00_dev *rt2x00dev);
__le32 *(*drv_get_txwi)(struct queue_entry *entry);
+ unsigned int (*drv_get_dma_done)(struct data_queue *queue);
};
static inline u32 rt2800_register_read(struct rt2x00_dev *rt2x00dev,
@@ -166,6 +167,13 @@ static inline __le32 *rt2800_drv_get_txwi(struct queue_entry *entry)
return rt2800ops->drv_get_txwi(entry);
}
+static inline unsigned int rt2800_drv_get_dma_done(struct data_queue *queue)
+{
+ const struct rt2800_ops *rt2800ops = queue->rt2x00dev->ops->drv;
+
+ return rt2800ops->drv_get_dma_done(queue);
+}
+
void rt2800_mcu_request(struct rt2x00_dev *rt2x00dev,
const u8 command, const u8 token,
const u8 arg0, const u8 arg1);
@@ -189,6 +197,8 @@ void rt2800_txdone_nostatus(struct rt2x00_dev *rt2x00dev);
bool rt2800_txstatus_timeout(struct rt2x00_dev *rt2x00dev);
bool rt2800_txstatus_pending(struct rt2x00_dev *rt2x00dev);
+void rt2800_watchdog(struct rt2x00_dev *rt2x00dev);
+
void rt2800_write_beacon(struct queue_entry *entry, struct txentry_desc *txdesc);
void rt2800_clear_beacon(struct queue_entry *entry);
@@ -247,5 +257,6 @@ void rt2800_disable_wpdma(struct rt2x00_dev *rt2x00dev);
void rt2800_get_txwi_rxwi_size(struct rt2x00_dev *rt2x00dev,
unsigned short *txwi_size,
unsigned short *rxwi_size);
+void rt2800_pre_reset_hw(struct rt2x00_dev *rt2x00dev);
#endif /* RT2800LIB_H */
diff --git a/drivers/net/wireless/ralink/rt2x00/rt2800mmio.c b/drivers/net/wireless/ralink/rt2x00/rt2800mmio.c
index d1de8e2ff690..110bb391c372 100644
--- a/drivers/net/wireless/ralink/rt2x00/rt2800mmio.c
+++ b/drivers/net/wireless/ralink/rt2x00/rt2800mmio.c
@@ -24,6 +24,37 @@
#include "rt2800lib.h"
#include "rt2800mmio.h"
+unsigned int rt2800mmio_get_dma_done(struct data_queue *queue)
+{
+ struct rt2x00_dev *rt2x00dev = queue->rt2x00dev;
+ struct queue_entry *entry;
+ int idx, qid;
+
+ switch (queue->qid) {
+ case QID_AC_VO:
+ case QID_AC_VI:
+ case QID_AC_BE:
+ case QID_AC_BK:
+ qid = queue->qid;
+ idx = rt2x00mmio_register_read(rt2x00dev, TX_DTX_IDX(qid));
+ break;
+ case QID_MGMT:
+ idx = rt2x00mmio_register_read(rt2x00dev, TX_DTX_IDX(5));
+ break;
+ case QID_RX:
+ entry = rt2x00queue_get_entry(queue, Q_INDEX_DMA_DONE);
+ idx = entry->entry_idx;
+ break;
+ default:
+ WARN_ON_ONCE(1);
+ idx = 0;
+ break;
+ }
+
+ return idx;
+}
+EXPORT_SYMBOL_GPL(rt2800mmio_get_dma_done);
+
/*
* TX descriptor initialization
*/
diff --git a/drivers/net/wireless/ralink/rt2x00/rt2800mmio.h b/drivers/net/wireless/ralink/rt2x00/rt2800mmio.h
index 29b5cfd2856f..adcd9d54ac1c 100644
--- a/drivers/net/wireless/ralink/rt2x00/rt2800mmio.h
+++ b/drivers/net/wireless/ralink/rt2x00/rt2800mmio.h
@@ -114,6 +114,8 @@
#define RXD_W3_PLCP_SIGNAL FIELD32(0x00020000)
#define RXD_W3_PLCP_RSSI FIELD32(0x00040000)
+unsigned int rt2800mmio_get_dma_done(struct data_queue *queue);
+
/* TX descriptor initialization */
__le32 *rt2800mmio_get_txwi(struct queue_entry *entry);
void rt2800mmio_write_tx_desc(struct queue_entry *entry,
diff --git a/drivers/net/wireless/ralink/rt2x00/rt2800pci.c b/drivers/net/wireless/ralink/rt2x00/rt2800pci.c
index ead8bd3e9236..a23c26574002 100644
--- a/drivers/net/wireless/ralink/rt2x00/rt2800pci.c
+++ b/drivers/net/wireless/ralink/rt2x00/rt2800pci.c
@@ -326,6 +326,7 @@ static const struct rt2800_ops rt2800pci_rt2800_ops = {
.drv_write_firmware = rt2800pci_write_firmware,
.drv_init_registers = rt2800mmio_init_registers,
.drv_get_txwi = rt2800mmio_get_txwi,
+ .drv_get_dma_done = rt2800mmio_get_dma_done,
};
static const struct rt2x00lib_ops rt2800pci_rt2x00_ops = {
@@ -350,6 +351,7 @@ static const struct rt2x00lib_ops rt2800pci_rt2x00_ops = {
.link_tuner = rt2800_link_tuner,
.gain_calibration = rt2800_gain_calibration,
.vco_calibration = rt2800_vco_calibration,
+ .watchdog = rt2800_watchdog,
.start_queue = rt2800mmio_start_queue,
.kick_queue = rt2800mmio_kick_queue,
.stop_queue = rt2800mmio_stop_queue,
@@ -366,6 +368,7 @@ static const struct rt2x00lib_ops rt2800pci_rt2x00_ops = {
.config_erp = rt2800_config_erp,
.config_ant = rt2800_config_ant,
.config = rt2800_config,
+ .pre_reset_hw = rt2800_pre_reset_hw,
};
static const struct rt2x00_ops rt2800pci_ops = {
diff --git a/drivers/net/wireless/ralink/rt2x00/rt2800soc.c b/drivers/net/wireless/ralink/rt2x00/rt2800soc.c
index 230557d36c52..7b931bb96a9e 100644
--- a/drivers/net/wireless/ralink/rt2x00/rt2800soc.c
+++ b/drivers/net/wireless/ralink/rt2x00/rt2800soc.c
@@ -171,6 +171,7 @@ static const struct rt2800_ops rt2800soc_rt2800_ops = {
.drv_write_firmware = rt2800soc_write_firmware,
.drv_init_registers = rt2800mmio_init_registers,
.drv_get_txwi = rt2800mmio_get_txwi,
+ .drv_get_dma_done = rt2800mmio_get_dma_done,
};
static const struct rt2x00lib_ops rt2800soc_rt2x00_ops = {
@@ -195,6 +196,7 @@ static const struct rt2x00lib_ops rt2800soc_rt2x00_ops = {
.link_tuner = rt2800_link_tuner,
.gain_calibration = rt2800_gain_calibration,
.vco_calibration = rt2800_vco_calibration,
+ .watchdog = rt2800_watchdog,
.start_queue = rt2800mmio_start_queue,
.kick_queue = rt2800mmio_kick_queue,
.stop_queue = rt2800mmio_stop_queue,
@@ -211,6 +213,7 @@ static const struct rt2x00lib_ops rt2800soc_rt2x00_ops = {
.config_erp = rt2800_config_erp,
.config_ant = rt2800_config_ant,
.config = rt2800_config,
+ .pre_reset_hw = rt2800_pre_reset_hw,
};
static const struct rt2x00_ops rt2800soc_ops = {
diff --git a/drivers/net/wireless/ralink/rt2x00/rt2800usb.c b/drivers/net/wireless/ralink/rt2x00/rt2800usb.c
index 551427b83775..fdf0504b5f1d 100644
--- a/drivers/net/wireless/ralink/rt2x00/rt2800usb.c
+++ b/drivers/net/wireless/ralink/rt2x00/rt2800usb.c
@@ -379,6 +379,14 @@ static int rt2800usb_set_device_state(struct rt2x00_dev *rt2x00dev,
return retval;
}
+static unsigned int rt2800usb_get_dma_done(struct data_queue *queue)
+{
+ struct queue_entry *entry;
+
+ entry = rt2x00queue_get_entry(queue, Q_INDEX_DMA_DONE);
+ return entry->entry_idx;
+}
+
/*
* TX descriptor initialization
*/
@@ -661,6 +669,7 @@ static const struct rt2800_ops rt2800usb_rt2800_ops = {
.drv_write_firmware = rt2800usb_write_firmware,
.drv_init_registers = rt2800usb_init_registers,
.drv_get_txwi = rt2800usb_get_txwi,
+ .drv_get_dma_done = rt2800usb_get_dma_done,
};
static const struct rt2x00lib_ops rt2800usb_rt2x00_ops = {
@@ -678,6 +687,7 @@ static const struct rt2x00lib_ops rt2800usb_rt2x00_ops = {
.link_tuner = rt2800_link_tuner,
.gain_calibration = rt2800_gain_calibration,
.vco_calibration = rt2800_vco_calibration,
+ .watchdog = rt2800_watchdog,
.start_queue = rt2800usb_start_queue,
.kick_queue = rt2x00usb_kick_queue,
.stop_queue = rt2800usb_stop_queue,
@@ -696,6 +706,7 @@ static const struct rt2x00lib_ops rt2800usb_rt2x00_ops = {
.config_erp = rt2800_config_erp,
.config_ant = rt2800_config_ant,
.config = rt2800_config,
+ .pre_reset_hw = rt2800_pre_reset_hw,
};
static void rt2800usb_queue_init(struct data_queue *queue)
diff --git a/drivers/net/wireless/ralink/rt2x00/rt2x00.h b/drivers/net/wireless/ralink/rt2x00/rt2x00.h
index 64a792a8fb2c..7e43690a861c 100644
--- a/drivers/net/wireless/ralink/rt2x00/rt2x00.h
+++ b/drivers/net/wireless/ralink/rt2x00/rt2x00.h
@@ -325,6 +325,8 @@ struct link {
* to bring the device/driver back into the desired state.
*/
struct delayed_work watchdog_work;
+ unsigned int watchdog_interval;
+ bool watchdog_disabled;
/*
* Work structure for scheduling periodic AGC adjustments.
@@ -615,6 +617,7 @@ struct rt2x00lib_ops {
void (*config) (struct rt2x00_dev *rt2x00dev,
struct rt2x00lib_conf *libconf,
const unsigned int changed_flags);
+ void (*pre_reset_hw) (struct rt2x00_dev *rt2x00dev);
int (*sta_add) (struct rt2x00_dev *rt2x00dev,
struct ieee80211_vif *vif,
struct ieee80211_sta *sta);
@@ -710,6 +713,7 @@ enum rt2x00_capability_flags {
CAPABILITY_VCO_RECALIBRATION,
CAPABILITY_EXTERNAL_PA_TX0,
CAPABILITY_EXTERNAL_PA_TX1,
+ CAPABILITY_RESTART_HW,
};
/*
@@ -1266,6 +1270,12 @@ rt2x00_has_cap_vco_recalibration(struct rt2x00_dev *rt2x00dev)
return rt2x00_has_cap_flag(rt2x00dev, CAPABILITY_VCO_RECALIBRATION);
}
+static inline bool
+rt2x00_has_cap_restart_hw(struct rt2x00_dev *rt2x00dev)
+{
+ return rt2x00_has_cap_flag(rt2x00dev, CAPABILITY_RESTART_HW);
+}
+
/**
* rt2x00queue_map_txskb - Map a skb into DMA for TX purposes.
* @entry: Pointer to &struct queue_entry
diff --git a/drivers/net/wireless/ralink/rt2x00/rt2x00debug.c b/drivers/net/wireless/ralink/rt2x00/rt2x00debug.c
index aac3aae7afaa..ef5f51512212 100644
--- a/drivers/net/wireless/ralink/rt2x00/rt2x00debug.c
+++ b/drivers/net/wireless/ralink/rt2x00/rt2x00debug.c
@@ -52,6 +52,7 @@ struct rt2x00debug_intf {
* - chipset file
* - device state flags file
* - device capability flags file
+ * - hardware restart file
* - register folder
* - csr offset/value files
* - eeprom offset/value files
@@ -68,6 +69,7 @@ struct rt2x00debug_intf {
struct dentry *chipset_entry;
struct dentry *dev_flags;
struct dentry *cap_flags;
+ struct dentry *restart_hw;
struct dentry *register_folder;
struct dentry *csr_off_entry;
struct dentry *csr_val_entry;
@@ -566,6 +568,34 @@ static const struct file_operations rt2x00debug_fop_cap_flags = {
.llseek = default_llseek,
};
+static ssize_t rt2x00debug_write_restart_hw(struct file *file,
+ const char __user *buf,
+ size_t length,
+ loff_t *offset)
+{
+ struct rt2x00debug_intf *intf = file->private_data;
+ struct rt2x00_dev *rt2x00dev = intf->rt2x00dev;
+ static unsigned long last_reset;
+
+ if (!rt2x00_has_cap_restart_hw(rt2x00dev))
+ return -EOPNOTSUPP;
+
+ if (time_before(jiffies, last_reset + msecs_to_jiffies(2000)))
+ return -EBUSY;
+
+ last_reset = jiffies;
+
+ ieee80211_restart_hw(rt2x00dev->hw);
+ return length;
+}
+
+static const struct file_operations rt2x00debug_restart_hw = {
+ .owner = THIS_MODULE,
+ .write = rt2x00debug_write_restart_hw,
+ .open = simple_open,
+ .llseek = generic_file_llseek,
+};
+
static struct dentry *rt2x00debug_create_file_driver(const char *name,
struct rt2x00debug_intf
*intf,
@@ -661,6 +691,10 @@ void rt2x00debug_register(struct rt2x00_dev *rt2x00dev)
intf->driver_folder, intf,
&rt2x00debug_fop_cap_flags);
+ intf->restart_hw = debugfs_create_file("restart_hw", 0200,
+ intf->driver_folder, intf,
+ &rt2x00debug_restart_hw);
+
intf->register_folder =
debugfs_create_dir("register", intf->driver_folder);
@@ -742,6 +776,7 @@ void rt2x00debug_deregister(struct rt2x00_dev *rt2x00dev)
debugfs_remove(intf->csr_off_entry);
debugfs_remove(intf->register_folder);
debugfs_remove(intf->dev_flags);
+ debugfs_remove(intf->restart_hw);
debugfs_remove(intf->cap_flags);
debugfs_remove(intf->chipset_entry);
debugfs_remove(intf->driver_entry);
diff --git a/drivers/net/wireless/ralink/rt2x00/rt2x00dev.c b/drivers/net/wireless/ralink/rt2x00/rt2x00dev.c
index a6c374c483c2..35414f97a978 100644
--- a/drivers/net/wireless/ralink/rt2x00/rt2x00dev.c
+++ b/drivers/net/wireless/ralink/rt2x00/rt2x00dev.c
@@ -1258,8 +1258,14 @@ int rt2x00lib_start(struct rt2x00_dev *rt2x00dev)
{
int retval;
- if (test_bit(DEVICE_STATE_STARTED, &rt2x00dev->flags))
- return 0;
+ if (test_bit(DEVICE_STATE_STARTED, &rt2x00dev->flags)) {
+ /*
+ * This is a special case for ieee80211_restart_hw(); otherwise
+ * mac80211 never calls start() twice in a row without stop().
+ */
+ rt2x00dev->ops->lib->pre_reset_hw(rt2x00dev);
+ rt2x00lib_stop(rt2x00dev);
+ }
/*
* If this is the first interface which is added,
diff --git a/drivers/net/wireless/ralink/rt2x00/rt2x00link.c b/drivers/net/wireless/ralink/rt2x00/rt2x00link.c
index 939cfa5141c6..b052c96347d6 100644
--- a/drivers/net/wireless/ralink/rt2x00/rt2x00link.c
+++ b/drivers/net/wireless/ralink/rt2x00/rt2x00link.c
@@ -384,10 +384,10 @@ void rt2x00link_start_watchdog(struct rt2x00_dev *rt2x00dev)
struct link *link = &rt2x00dev->link;
if (test_bit(DEVICE_STATE_PRESENT, &rt2x00dev->flags) &&
- rt2x00dev->ops->lib->watchdog)
+ rt2x00dev->ops->lib->watchdog && !link->watchdog_disabled)
ieee80211_queue_delayed_work(rt2x00dev->hw,
&link->watchdog_work,
- WATCHDOG_INTERVAL);
+ link->watchdog_interval);
}
void rt2x00link_stop_watchdog(struct rt2x00_dev *rt2x00dev)
@@ -413,11 +413,16 @@ static void rt2x00link_watchdog(struct work_struct *work)
if (test_bit(DEVICE_STATE_PRESENT, &rt2x00dev->flags))
ieee80211_queue_delayed_work(rt2x00dev->hw,
&link->watchdog_work,
- WATCHDOG_INTERVAL);
+ link->watchdog_interval);
}
void rt2x00link_register(struct rt2x00_dev *rt2x00dev)
{
- INIT_DELAYED_WORK(&rt2x00dev->link.watchdog_work, rt2x00link_watchdog);
- INIT_DELAYED_WORK(&rt2x00dev->link.work, rt2x00link_tuner);
+ struct link *link = &rt2x00dev->link;
+
+ INIT_DELAYED_WORK(&link->work, rt2x00link_tuner);
+ INIT_DELAYED_WORK(&link->watchdog_work, rt2x00link_watchdog);
+
+ if (link->watchdog_interval == 0)
+ link->watchdog_interval = WATCHDOG_INTERVAL;
}
diff --git a/drivers/net/wireless/ralink/rt2x00/rt2x00queue.h b/drivers/net/wireless/ralink/rt2x00/rt2x00queue.h
index 099e747f70e7..23739dd0bc9b 100644
--- a/drivers/net/wireless/ralink/rt2x00/rt2x00queue.h
+++ b/drivers/net/wireless/ralink/rt2x00/rt2x00queue.h
@@ -435,6 +435,9 @@ enum data_queue_flags {
* @length: Number of frames in queue.
* @index: Index pointers to entry positions in the queue,
* use &enum queue_index to get a specific index field.
+ * @wd_count: watchdog counter; counts how often the entry changes
+ * in the queue
+ * @wd_idx: index of queue entry saved by watchdog
* @txop: maximum burst time.
* @aifs: The aifs value for outgoing frames (field ignored in RX queue).
* @cw_min: The cw min value for outgoing frames (field ignored in RX queue).
@@ -462,6 +465,9 @@ struct data_queue {
unsigned short length;
unsigned short index[Q_INDEX_MAX];
+ unsigned short wd_count;
+ unsigned int wd_idx;
+
unsigned short txop;
unsigned short aifs;
unsigned short cw_min;
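
The wd_count/wd_idx fields added above give a per-queue watchdog somewhere to record progress between runs; the code that consumes them lies outside this excerpt. A plausible sketch of how such fields are typically used, with every name and the stall threshold being assumptions rather than part of the patch:

#include <linux/types.h>

#define MY_WD_STALL_LIMIT	4	/* assumed threshold, not from the patch */

struct my_queue {
	unsigned short index;		/* current hardware completion index */
	unsigned short wd_count;	/* consecutive checks without progress */
	unsigned int   wd_idx;		/* index seen by the previous check */
};

/* Returns true once the queue has made no progress for too many checks. */
static bool my_queue_stalled(struct my_queue *q, bool queue_busy)
{
	if (!queue_busy || q->index != q->wd_idx) {
		/* idle or progressing: remember position, reset counter */
		q->wd_idx = q->index;
		q->wd_count = 0;
		return false;
	}

	return ++q->wd_count > MY_WD_STALL_LIMIT;
}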
diff --git a/drivers/net/wireless/realtek/rtlwifi/btcoexist/halbtcoutsrc.c b/drivers/net/wireless/realtek/rtlwifi/btcoexist/halbtcoutsrc.c
index 2ac0481b29ef..152242ac0aa5 100644
--- a/drivers/net/wireless/realtek/rtlwifi/btcoexist/halbtcoutsrc.c
+++ b/drivers/net/wireless/realtek/rtlwifi/btcoexist/halbtcoutsrc.c
@@ -1578,7 +1578,7 @@ void exhalbtc_scan_notify_wifi_only(struct wifi_only_cfg *wifionly_cfg,
void exhalbtc_connect_notify(struct btc_coexist *btcoexist, u8 action)
{
- u8 asso_type, asso_type_v2;
+ u8 asso_type;
bool wifi_under_5g;
if (!halbtc_is_bt_coexist_available(btcoexist))
@@ -1589,15 +1589,10 @@ void exhalbtc_connect_notify(struct btc_coexist *btcoexist, u8 action)
btcoexist->btc_get(btcoexist, BTC_GET_BL_WIFI_UNDER_5G, &wifi_under_5g);
- if (action) {
+ if (action)
asso_type = BTC_ASSOCIATE_START;
- asso_type_v2 = wifi_under_5g ? BTC_ASSOCIATE_5G_START :
- BTC_ASSOCIATE_START;
- } else {
+ else
asso_type = BTC_ASSOCIATE_FINISH;
- asso_type_v2 = wifi_under_5g ? BTC_ASSOCIATE_5G_FINISH :
- BTC_ASSOCIATE_FINISH;
- }
halbtc_leave_low_power(btcoexist);
@@ -1746,30 +1741,6 @@ void exhalbtc_rf_status_notify(struct btc_coexist *btcoexist, u8 type)
}
}
-void exhalbtc_stack_operation_notify(struct btc_coexist *btcoexist, u8 type)
-{
- u8 stack_op_type;
-
- if (!halbtc_is_bt_coexist_available(btcoexist))
- return;
- btcoexist->statistics.cnt_stack_operation_notify++;
- if (btcoexist->manual_control)
- return;
-
- if ((type == HCI_BT_OP_INQUIRY_START) ||
- (type == HCI_BT_OP_PAGING_START) ||
- (type == HCI_BT_OP_PAIRING_START)) {
- stack_op_type = BTC_STACK_OP_INQ_PAGE_PAIR_START;
- } else if ((type == HCI_BT_OP_INQUIRY_FINISH) ||
- (type == HCI_BT_OP_PAGING_SUCCESS) ||
- (type == HCI_BT_OP_PAGING_UNSUCCESS) ||
- (type == HCI_BT_OP_PAIRING_FINISH)) {
- stack_op_type = BTC_STACK_OP_INQ_PAGE_PAIR_FINISH;
- } else {
- stack_op_type = BTC_STACK_OP_NONE;
- }
-}
-
void exhalbtc_halt_notify(struct btc_coexist *btcoexist)
{
if (!halbtc_is_bt_coexist_available(btcoexist))
diff --git a/drivers/net/wireless/realtek/rtlwifi/btcoexist/halbtcoutsrc.h b/drivers/net/wireless/realtek/rtlwifi/btcoexist/halbtcoutsrc.h
index ee9aeddf1ebc..8c0a7fdbf200 100644
--- a/drivers/net/wireless/realtek/rtlwifi/btcoexist/halbtcoutsrc.h
+++ b/drivers/net/wireless/realtek/rtlwifi/btcoexist/halbtcoutsrc.h
@@ -764,7 +764,6 @@ void exhalbtc_special_packet_notify(struct btc_coexist *btcoexist, u8 pkt_type);
void exhalbtc_bt_info_notify(struct btc_coexist *btcoexist, u8 *tmp_buf,
u8 length);
void exhalbtc_rf_status_notify(struct btc_coexist *btcoexist, u8 type);
-void exhalbtc_stack_operation_notify(struct btc_coexist *btcoexist, u8 type);
void exhalbtc_halt_notify(struct btc_coexist *btcoexist);
void exhalbtc_pnp_notify(struct btc_coexist *btcoexist, u8 pnp_state);
void exhalbtc_coex_dm_switch(struct btc_coexist *btcoexist);
diff --git a/drivers/net/wireless/realtek/rtlwifi/btcoexist/rtl_btc.c b/drivers/net/wireless/realtek/rtlwifi/btcoexist/rtl_btc.c
index 0e509c33e9e6..b8c4536af6c0 100644
--- a/drivers/net/wireless/realtek/rtlwifi/btcoexist/rtl_btc.c
+++ b/drivers/net/wireless/realtek/rtlwifi/btcoexist/rtl_btc.c
@@ -316,7 +316,7 @@ void rtl_btc_btinfo_notify(struct rtl_priv *rtlpriv, u8 *tmp_buf, u8 length)
void rtl_btc_btmpinfo_notify(struct rtl_priv *rtlpriv, u8 *tmp_buf, u8 length)
{
struct btc_coexist *btcoexist = rtl_btc_coexist(rtlpriv);
- u8 extid, seq, len;
+ u8 extid, seq;
u16 bt_real_fw_ver;
u8 bt_fw_ver;
u8 *data;
@@ -332,7 +332,6 @@ void rtl_btc_btmpinfo_notify(struct rtl_priv *rtlpriv, u8 *tmp_buf, u8 length)
if (extid != 1) /* C2H_TRIG_BY_BT_FW = 1 */
return;
- len = tmp_buf[1] >> 4;
seq = tmp_buf[2] >> 4;
data = &tmp_buf[3];
diff --git a/drivers/net/wireless/realtek/rtlwifi/efuse.c b/drivers/net/wireless/realtek/rtlwifi/efuse.c
index e68340dfd980..ea4fc53764de 100644
--- a/drivers/net/wireless/realtek/rtlwifi/efuse.c
+++ b/drivers/net/wireless/realtek/rtlwifi/efuse.c
@@ -117,10 +117,8 @@ u8 efuse_read_1byte(struct ieee80211_hw *hw, u16 address)
rtlpriv->cfg->
maps[EFUSE_CTRL] + 3);
k++;
- if (k == 1000) {
- k = 0;
+ if (k == 1000)
break;
- }
}
data = rtl_read_byte(rtlpriv, rtlpriv->cfg->maps[EFUSE_CTRL]);
return data;
@@ -986,7 +984,6 @@ static int efuse_pg_packet_write(struct ieee80211_hw *hw,
} else if (write_state == PG_STATE_DATA) {
RTPRINT(rtlpriv, FEEPROM, EFUSE_PG,
"efuse PG_STATE_DATA\n");
- badworden = 0x0f;
badworden =
enable_efuse_data_write(hw, efuse_addr + 1,
target_pkt.word_en,
diff --git a/drivers/net/wireless/realtek/rtlwifi/rc.c b/drivers/net/wireless/realtek/rtlwifi/rc.c
index cf8e42a01015..0c7d74902d33 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rc.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rc.c
@@ -173,9 +173,6 @@ static void rtl_get_rate(void *ppriv, struct ieee80211_sta *sta,
u8 try_per_rate, i, rix;
bool not_data = !ieee80211_is_data(fc);
- if (rate_control_send_low(sta, priv_sta, txrc))
- return;
-
rix = _rtl_rc_get_highest_rix(rtlpriv, sta, skb, not_data);
try_per_rate = 1;
_rtl_rc_rate_set_series(rtlpriv, sta, &rates[0], txrc,
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/hw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/hw.c
index 454bab38b165..f92e95f5494f 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/hw.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/hw.c
@@ -1039,7 +1039,7 @@ int rtl88ee_hw_init(struct ieee80211_hw *hw)
struct rtl_ps_ctl *ppsc = rtl_psc(rtl_priv(hw));
struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
struct rtl_efuse *rtlefuse = rtl_efuse(rtl_priv(hw));
- bool rtstatus = true;
+ bool rtstatus;
int err = 0;
u8 tmp_u1b, u1byte;
unsigned long flags;
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/dm.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/dm.c
index 7cc86bb387a1..71f3b6b5d7bd 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/dm.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/dm.c
@@ -680,6 +680,7 @@ static void rtl92d_bandtype_2_4G(struct ieee80211_hw *hw, long *temp_cckg,
int i;
unsigned long flag = 0;
long temp_cck;
+ const u8 *cckswing;
/* Query CCK default setting From 0xa24 */
rtl92d_acquire_cckandrw_pagea_ctl(hw, &flag);
@@ -687,28 +688,19 @@ static void rtl92d_bandtype_2_4G(struct ieee80211_hw *hw, long *temp_cckg,
MASKDWORD) & MASKCCK;
rtl92d_release_cckandrw_pagea_ctl(hw, &flag);
for (i = 0; i < CCK_TABLE_LENGTH; i++) {
- if (rtlpriv->dm.cck_inch14) {
- if (!memcmp((void *)&temp_cck,
- (void *)&cckswing_table_ch14[i][2], 4)) {
- *cck_index_old = (u8) i;
- RT_TRACE(rtlpriv, COMP_POWER_TRACKING, DBG_LOUD,
- "Initial reg0x%x = 0x%lx, cck_index=0x%x, ch 14 %d\n",
- RCCK0_TXFILTER2, temp_cck,
- *cck_index_old,
- rtlpriv->dm.cck_inch14);
- break;
- }
- } else {
- if (!memcmp((void *) &temp_cck,
- &cckswing_table_ch1ch13[i][2], 4)) {
- *cck_index_old = (u8) i;
- RT_TRACE(rtlpriv, COMP_POWER_TRACKING, DBG_LOUD,
- "Initial reg0x%x = 0x%lx, cck_index = 0x%x, ch14 %d\n",
- RCCK0_TXFILTER2, temp_cck,
- *cck_index_old,
- rtlpriv->dm.cck_inch14);
- break;
- }
+ if (rtlpriv->dm.cck_inch14)
+ cckswing = &cckswing_table_ch14[i][2];
+ else
+ cckswing = &cckswing_table_ch1ch13[i][2];
+
+ if (temp_cck == le32_to_cpu(*((__le32 *)cckswing))) {
+ *cck_index_old = (u8)i;
+ RT_TRACE(rtlpriv, COMP_POWER_TRACKING, DBG_LOUD,
+ "Initial reg0x%x = 0x%lx, cck_index = 0x%x, ch14 %d\n",
+ RCCK0_TXFILTER2, temp_cck,
+ *cck_index_old,
+ rtlpriv->dm.cck_inch14);
+ break;
}
}
*temp_cckg = temp_cck;
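
The loop above now compares the register value against the table through an explicit little-endian conversion instead of memcmp() on raw bytes, which keeps the match byte-order independent. A minimal sketch of the same idea, here using get_unaligned_le32() (a close relative of the le32_to_cpu() cast in the hunk that also tolerates the odd table offset); the table contents below are made up:

#include <linux/kernel.h>
#include <asm/unaligned.h>

/* example table of byte patterns; the values are made up */
static const u8 my_swing_table[3][8] = {
	{ 0x36, 0x35, 0x2e, 0x25, 0x1c, 0x12, 0x09, 0x04 },
	{ 0x33, 0x32, 0x2b, 0x23, 0x1a, 0x11, 0x08, 0x04 },
	{ 0x30, 0x2f, 0x29, 0x21, 0x19, 0x10, 0x08, 0x03 },
};

/* Find the row whose bytes [2..5] match a value read in CPU byte order. */
static int my_find_swing_index(u32 reg_val)
{
	int i;

	for (i = 0; i < ARRAY_SIZE(my_swing_table); i++)
		if (reg_val == get_unaligned_le32(&my_swing_table[i][2]))
			return i;

	return -1;	/* no match */
}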
@@ -718,8 +710,8 @@ static void rtl92d_bandtype_5G(struct rtl_hal *rtlhal, u8 *ofdm_index,
bool *internal_pa, u8 thermalvalue, u8 delta,
u8 rf, struct rtl_efuse *rtlefuse,
struct rtl_priv *rtlpriv, struct rtl_phy *rtlphy,
- u8 index_mapping[5][INDEX_MAPPING_NUM],
- u8 index_mapping_pa[8][INDEX_MAPPING_NUM])
+ const u8 index_mapping[5][INDEX_MAPPING_NUM],
+ const u8 index_mapping_pa[8][INDEX_MAPPING_NUM])
{
int i;
u8 index;
@@ -787,9 +779,9 @@ static void rtl92d_dm_txpower_tracking_callback_thermalmeter(
bool internal_pa = false;
long ele_a = 0, ele_d, temp_cck, val_x, value32;
long val_y, ele_c = 0;
- u8 ofdm_index[3];
+ u8 ofdm_index[2];
s8 cck_index = 0;
- u8 ofdm_index_old[3] = {0, 0, 0};
+ u8 ofdm_index_old[2] = {0, 0};
s8 cck_index_old = 0;
u8 index;
int i;
@@ -797,7 +789,7 @@ static void rtl92d_dm_txpower_tracking_callback_thermalmeter(
u8 ofdm_min_index = 6, ofdm_min_index_internal_pa = 3, rf;
u8 indexforchannel =
rtl92d_get_rightchnlplace_for_iqk(rtlphy->current_channel);
- u8 index_mapping[5][INDEX_MAPPING_NUM] = {
+ static const u8 index_mapping[5][INDEX_MAPPING_NUM] = {
/* 5G, path A/MAC 0, decrease power */
{0, 1, 3, 6, 8, 9, 11, 13, 14, 16, 17, 18, 18},
/* 5G, path A/MAC 0, increase power */
@@ -809,7 +801,7 @@ static void rtl92d_dm_txpower_tracking_callback_thermalmeter(
/* 2.4G, for decrease power */
{0, 1, 2, 3, 4, 5, 6, 7, 7, 8, 9, 10, 10},
};
- u8 index_mapping_internal_pa[8][INDEX_MAPPING_NUM] = {
+ static const u8 index_mapping_internal_pa[8][INDEX_MAPPING_NUM] = {
/* 5G, path A/MAC 0, ch36-64, decrease power */
{0, 1, 2, 4, 6, 7, 9, 11, 12, 14, 15, 16, 16},
/* 5G, path A/MAC 0, ch36-64, increase power */
@@ -837,365 +829,338 @@ static void rtl92d_dm_txpower_tracking_callback_thermalmeter(
rtlpriv->dm.thermalvalue, rtlefuse->eeprom_thermalmeter);
rtl92d_phy_ap_calibrate(hw, (thermalvalue -
rtlefuse->eeprom_thermalmeter));
+
+ if (!thermalvalue)
+ goto exit;
+
if (is2t)
rf = 2;
else
rf = 1;
- if (thermalvalue) {
- ele_d = rtl_get_bbreg(hw, ROFDM0_XATXIQIMBALANCE,
+
+ if (rtlpriv->dm.thermalvalue && !rtlhal->reloadtxpowerindex)
+ goto old_index_done;
+
+ ele_d = rtl_get_bbreg(hw, ROFDM0_XATXIQIMBALANCE, MASKDWORD) & MASKOFDM_D;
+ for (i = 0; i < OFDM_TABLE_SIZE_92D; i++) {
+ if (ele_d == (ofdmswing_table[i] & MASKOFDM_D)) {
+ ofdm_index_old[0] = (u8)i;
+
+ RT_TRACE(rtlpriv, COMP_POWER_TRACKING, DBG_LOUD,
+ "Initial pathA ele_d reg0x%x = 0x%lx, ofdm_index=0x%x\n",
+ ROFDM0_XATXIQIMBALANCE,
+ ele_d, ofdm_index_old[0]);
+ break;
+ }
+ }
+ if (is2t) {
+ ele_d = rtl_get_bbreg(hw, ROFDM0_XBTXIQIMBALANCE,
MASKDWORD) & MASKOFDM_D;
for (i = 0; i < OFDM_TABLE_SIZE_92D; i++) {
- if (ele_d == (ofdmswing_table[i] & MASKOFDM_D)) {
- ofdm_index_old[0] = (u8) i;
-
- RT_TRACE(rtlpriv, COMP_POWER_TRACKING, DBG_LOUD,
- "Initial pathA ele_d reg0x%x = 0x%lx, ofdm_index=0x%x\n",
- ROFDM0_XATXIQIMBALANCE,
- ele_d, ofdm_index_old[0]);
+ if (ele_d ==
+ (ofdmswing_table[i] & MASKOFDM_D)) {
+ ofdm_index_old[1] = (u8)i;
+ RT_TRACE(rtlpriv, COMP_POWER_TRACKING,
+ DBG_LOUD,
+ "Initial pathB ele_d reg 0x%x = 0x%lx, ofdm_index = 0x%x\n",
+ ROFDM0_XBTXIQIMBALANCE, ele_d,
+ ofdm_index_old[1]);
break;
}
}
- if (is2t) {
- ele_d = rtl_get_bbreg(hw, ROFDM0_XBTXIQIMBALANCE,
- MASKDWORD) & MASKOFDM_D;
- for (i = 0; i < OFDM_TABLE_SIZE_92D; i++) {
- if (ele_d ==
- (ofdmswing_table[i] & MASKOFDM_D)) {
- ofdm_index_old[1] = (u8) i;
- RT_TRACE(rtlpriv, COMP_POWER_TRACKING,
- DBG_LOUD,
- "Initial pathB ele_d reg 0x%x = 0x%lx, ofdm_index = 0x%x\n",
- ROFDM0_XBTXIQIMBALANCE, ele_d,
- ofdm_index_old[1]);
- break;
- }
- }
- }
- if (rtlhal->current_bandtype == BAND_ON_2_4G) {
- rtl92d_bandtype_2_4G(hw, &temp_cck, &cck_index_old);
- } else {
- temp_cck = 0x090e1317;
- cck_index_old = 12;
- }
+ }
+ if (rtlhal->current_bandtype == BAND_ON_2_4G) {
+ rtl92d_bandtype_2_4G(hw, &temp_cck, &cck_index_old);
+ } else {
+ temp_cck = 0x090e1317;
+ cck_index_old = 12;
+ }
- if (!rtlpriv->dm.thermalvalue) {
- rtlpriv->dm.thermalvalue =
- rtlefuse->eeprom_thermalmeter;
- rtlpriv->dm.thermalvalue_lck = thermalvalue;
- rtlpriv->dm.thermalvalue_iqk = thermalvalue;
- rtlpriv->dm.thermalvalue_rxgain =
- rtlefuse->eeprom_thermalmeter;
- for (i = 0; i < rf; i++)
- rtlpriv->dm.ofdm_index[i] = ofdm_index_old[i];
- rtlpriv->dm.cck_index = cck_index_old;
+ if (!rtlpriv->dm.thermalvalue) {
+ rtlpriv->dm.thermalvalue = rtlefuse->eeprom_thermalmeter;
+ rtlpriv->dm.thermalvalue_lck = thermalvalue;
+ rtlpriv->dm.thermalvalue_iqk = thermalvalue;
+ rtlpriv->dm.thermalvalue_rxgain = rtlefuse->eeprom_thermalmeter;
+ for (i = 0; i < rf; i++)
+ rtlpriv->dm.ofdm_index[i] = ofdm_index_old[i];
+ rtlpriv->dm.cck_index = cck_index_old;
+ }
+ if (rtlhal->reloadtxpowerindex) {
+ for (i = 0; i < rf; i++)
+ rtlpriv->dm.ofdm_index[i] = ofdm_index_old[i];
+ rtlpriv->dm.cck_index = cck_index_old;
+ RT_TRACE(rtlpriv, COMP_POWER_TRACKING, DBG_LOUD,
+ "reload ofdm index for band switch\n");
+ }
+old_index_done:
+ for (i = 0; i < rf; i++)
+ ofdm_index[i] = rtlpriv->dm.ofdm_index[i];
+
+ rtlpriv->dm.thermalvalue_avg
+ [rtlpriv->dm.thermalvalue_avg_index] = thermalvalue;
+ rtlpriv->dm.thermalvalue_avg_index++;
+ if (rtlpriv->dm.thermalvalue_avg_index == AVG_THERMAL_NUM)
+ rtlpriv->dm.thermalvalue_avg_index = 0;
+ for (i = 0; i < AVG_THERMAL_NUM; i++) {
+ if (rtlpriv->dm.thermalvalue_avg[i]) {
+ thermalvalue_avg += rtlpriv->dm.thermalvalue_avg[i];
+ thermalvalue_avg_count++;
}
- if (rtlhal->reloadtxpowerindex) {
+ }
+ if (thermalvalue_avg_count)
+ thermalvalue = (u8)(thermalvalue_avg / thermalvalue_avg_count);
+ if (rtlhal->reloadtxpowerindex) {
+ delta = (thermalvalue > rtlefuse->eeprom_thermalmeter) ?
+ (thermalvalue - rtlefuse->eeprom_thermalmeter) :
+ (rtlefuse->eeprom_thermalmeter - thermalvalue);
+ rtlhal->reloadtxpowerindex = false;
+ rtlpriv->dm.done_txpower = false;
+ } else if (rtlpriv->dm.done_txpower) {
+ delta = (thermalvalue > rtlpriv->dm.thermalvalue) ?
+ (thermalvalue - rtlpriv->dm.thermalvalue) :
+ (rtlpriv->dm.thermalvalue - thermalvalue);
+ } else {
+ delta = (thermalvalue > rtlefuse->eeprom_thermalmeter) ?
+ (thermalvalue - rtlefuse->eeprom_thermalmeter) :
+ (rtlefuse->eeprom_thermalmeter - thermalvalue);
+ }
+ delta_lck = (thermalvalue > rtlpriv->dm.thermalvalue_lck) ?
+ (thermalvalue - rtlpriv->dm.thermalvalue_lck) :
+ (rtlpriv->dm.thermalvalue_lck - thermalvalue);
+ delta_iqk = (thermalvalue > rtlpriv->dm.thermalvalue_iqk) ?
+ (thermalvalue - rtlpriv->dm.thermalvalue_iqk) :
+ (rtlpriv->dm.thermalvalue_iqk - thermalvalue);
+ delta_rxgain =
+ (thermalvalue > rtlpriv->dm.thermalvalue_rxgain) ?
+ (thermalvalue - rtlpriv->dm.thermalvalue_rxgain) :
+ (rtlpriv->dm.thermalvalue_rxgain - thermalvalue);
+ RT_TRACE(rtlpriv, COMP_POWER_TRACKING, DBG_LOUD,
+ "Readback Thermal Meter = 0x%x pre thermal meter 0x%x eeprom_thermalmeter 0x%x delta 0x%x delta_lck 0x%x delta_iqk 0x%x\n",
+ thermalvalue, rtlpriv->dm.thermalvalue,
+ rtlefuse->eeprom_thermalmeter, delta, delta_lck,
+ delta_iqk);
+ if (delta_lck > rtlefuse->delta_lck && rtlefuse->delta_lck != 0) {
+ rtlpriv->dm.thermalvalue_lck = thermalvalue;
+ rtl92d_phy_lc_calibrate(hw);
+ }
+
+ if (delta == 0 || !rtlpriv->dm.txpower_track_control)
+ goto check_delta;
+
+ rtlpriv->dm.done_txpower = true;
+ delta = (thermalvalue > rtlefuse->eeprom_thermalmeter) ?
+ (thermalvalue - rtlefuse->eeprom_thermalmeter) :
+ (rtlefuse->eeprom_thermalmeter - thermalvalue);
+ if (rtlhal->current_bandtype == BAND_ON_2_4G) {
+ offset = 4;
+ if (delta > INDEX_MAPPING_NUM - 1)
+ index = index_mapping[offset][INDEX_MAPPING_NUM - 1];
+ else
+ index = index_mapping[offset][delta];
+ if (thermalvalue > rtlpriv->dm.thermalvalue) {
for (i = 0; i < rf; i++)
- rtlpriv->dm.ofdm_index[i] = ofdm_index_old[i];
- rtlpriv->dm.cck_index = cck_index_old;
- RT_TRACE(rtlpriv, COMP_POWER_TRACKING, DBG_LOUD,
- "reload ofdm index for band switch\n");
- }
- rtlpriv->dm.thermalvalue_avg
- [rtlpriv->dm.thermalvalue_avg_index] = thermalvalue;
- rtlpriv->dm.thermalvalue_avg_index++;
- if (rtlpriv->dm.thermalvalue_avg_index == AVG_THERMAL_NUM)
- rtlpriv->dm.thermalvalue_avg_index = 0;
- for (i = 0; i < AVG_THERMAL_NUM; i++) {
- if (rtlpriv->dm.thermalvalue_avg[i]) {
- thermalvalue_avg +=
- rtlpriv->dm.thermalvalue_avg[i];
- thermalvalue_avg_count++;
- }
- }
- if (thermalvalue_avg_count)
- thermalvalue = (u8) (thermalvalue_avg /
- thermalvalue_avg_count);
- if (rtlhal->reloadtxpowerindex) {
- delta = (thermalvalue > rtlefuse->eeprom_thermalmeter) ?
- (thermalvalue - rtlefuse->eeprom_thermalmeter) :
- (rtlefuse->eeprom_thermalmeter - thermalvalue);
- rtlhal->reloadtxpowerindex = false;
- rtlpriv->dm.done_txpower = false;
- } else if (rtlpriv->dm.done_txpower) {
- delta = (thermalvalue > rtlpriv->dm.thermalvalue) ?
- (thermalvalue - rtlpriv->dm.thermalvalue) :
- (rtlpriv->dm.thermalvalue - thermalvalue);
+ ofdm_index[i] -= delta;
+ cck_index -= delta;
} else {
- delta = (thermalvalue > rtlefuse->eeprom_thermalmeter) ?
- (thermalvalue - rtlefuse->eeprom_thermalmeter) :
- (rtlefuse->eeprom_thermalmeter - thermalvalue);
+ for (i = 0; i < rf; i++)
+ ofdm_index[i] += index;
+ cck_index += index;
}
- delta_lck = (thermalvalue > rtlpriv->dm.thermalvalue_lck) ?
- (thermalvalue - rtlpriv->dm.thermalvalue_lck) :
- (rtlpriv->dm.thermalvalue_lck - thermalvalue);
- delta_iqk = (thermalvalue > rtlpriv->dm.thermalvalue_iqk) ?
- (thermalvalue - rtlpriv->dm.thermalvalue_iqk) :
- (rtlpriv->dm.thermalvalue_iqk - thermalvalue);
- delta_rxgain =
- (thermalvalue > rtlpriv->dm.thermalvalue_rxgain) ?
- (thermalvalue - rtlpriv->dm.thermalvalue_rxgain) :
- (rtlpriv->dm.thermalvalue_rxgain - thermalvalue);
+ } else if (rtlhal->current_bandtype == BAND_ON_5G) {
+ rtl92d_bandtype_5G(rtlhal, ofdm_index,
+ &internal_pa, thermalvalue,
+ delta, rf, rtlefuse, rtlpriv,
+ rtlphy, index_mapping,
+ index_mapping_internal_pa);
+ }
+ if (is2t) {
RT_TRACE(rtlpriv, COMP_POWER_TRACKING, DBG_LOUD,
- "Readback Thermal Meter = 0x%x pre thermal meter 0x%x eeprom_thermalmeter 0x%x delta 0x%x delta_lck 0x%x delta_iqk 0x%x\n",
- thermalvalue, rtlpriv->dm.thermalvalue,
- rtlefuse->eeprom_thermalmeter, delta, delta_lck,
- delta_iqk);
- if ((delta_lck > rtlefuse->delta_lck) &&
- (rtlefuse->delta_lck != 0)) {
- rtlpriv->dm.thermalvalue_lck = thermalvalue;
- rtl92d_phy_lc_calibrate(hw);
+ "temp OFDM_A_index=0x%x, OFDM_B_index = 0x%x,cck_index=0x%x\n",
+ rtlpriv->dm.ofdm_index[0],
+ rtlpriv->dm.ofdm_index[1],
+ rtlpriv->dm.cck_index);
+ } else {
+ RT_TRACE(rtlpriv, COMP_POWER_TRACKING, DBG_LOUD,
+ "temp OFDM_A_index=0x%x,cck_index = 0x%x\n",
+ rtlpriv->dm.ofdm_index[0],
+ rtlpriv->dm.cck_index);
+ }
+ for (i = 0; i < rf; i++) {
+ if (ofdm_index[i] > OFDM_TABLE_SIZE_92D - 1)
+ ofdm_index[i] = OFDM_TABLE_SIZE_92D - 1;
+ else if (ofdm_index[i] < ofdm_min_index)
+ ofdm_index[i] = ofdm_min_index;
+ }
+ if (rtlhal->current_bandtype == BAND_ON_2_4G) {
+ if (cck_index > CCK_TABLE_SIZE - 1) {
+ cck_index = CCK_TABLE_SIZE - 1;
+ } else if (internal_pa ||
+ rtlhal->current_bandtype == BAND_ON_2_4G) {
+ if (ofdm_index[i] < ofdm_min_index_internal_pa)
+ ofdm_index[i] = ofdm_min_index_internal_pa;
+ } else if (cck_index < 0) {
+ cck_index = 0;
}
- if (delta > 0 && rtlpriv->dm.txpower_track_control) {
- rtlpriv->dm.done_txpower = true;
- delta = (thermalvalue > rtlefuse->eeprom_thermalmeter) ?
- (thermalvalue - rtlefuse->eeprom_thermalmeter) :
- (rtlefuse->eeprom_thermalmeter - thermalvalue);
- if (rtlhal->current_bandtype == BAND_ON_2_4G) {
- offset = 4;
- if (delta > INDEX_MAPPING_NUM - 1)
- index = index_mapping[offset]
- [INDEX_MAPPING_NUM - 1];
- else
- index = index_mapping[offset][delta];
- if (thermalvalue > rtlpriv->dm.thermalvalue) {
- for (i = 0; i < rf; i++)
- ofdm_index[i] -= delta;
- cck_index -= delta;
- } else {
- for (i = 0; i < rf; i++)
- ofdm_index[i] += index;
- cck_index += index;
- }
- } else if (rtlhal->current_bandtype == BAND_ON_5G) {
- rtl92d_bandtype_5G(rtlhal, ofdm_index,
- &internal_pa, thermalvalue,
- delta, rf, rtlefuse, rtlpriv,
- rtlphy, index_mapping,
- index_mapping_internal_pa);
- }
- if (is2t) {
- RT_TRACE(rtlpriv, COMP_POWER_TRACKING, DBG_LOUD,
- "temp OFDM_A_index=0x%x, OFDM_B_index = 0x%x,cck_index=0x%x\n",
- rtlpriv->dm.ofdm_index[0],
- rtlpriv->dm.ofdm_index[1],
- rtlpriv->dm.cck_index);
- } else {
- RT_TRACE(rtlpriv, COMP_POWER_TRACKING, DBG_LOUD,
- "temp OFDM_A_index=0x%x,cck_index = 0x%x\n",
- rtlpriv->dm.ofdm_index[0],
- rtlpriv->dm.cck_index);
- }
- for (i = 0; i < rf; i++) {
- if (ofdm_index[i] > OFDM_TABLE_SIZE_92D - 1)
- ofdm_index[i] = OFDM_TABLE_SIZE_92D - 1;
- else if (ofdm_index[i] < ofdm_min_index)
- ofdm_index[i] = ofdm_min_index;
- }
- if (rtlhal->current_bandtype == BAND_ON_2_4G) {
- if (cck_index > CCK_TABLE_SIZE - 1) {
- cck_index = CCK_TABLE_SIZE - 1;
- } else if (internal_pa ||
- rtlhal->current_bandtype ==
- BAND_ON_2_4G) {
- if (ofdm_index[i] <
- ofdm_min_index_internal_pa)
- ofdm_index[i] =
- ofdm_min_index_internal_pa;
- } else if (cck_index < 0) {
- cck_index = 0;
- }
- }
- if (is2t) {
- RT_TRACE(rtlpriv, COMP_POWER_TRACKING, DBG_LOUD,
- "new OFDM_A_index=0x%x, OFDM_B_index = 0x%x, cck_index=0x%x\n",
- ofdm_index[0], ofdm_index[1],
- cck_index);
- } else {
- RT_TRACE(rtlpriv, COMP_POWER_TRACKING, DBG_LOUD,
- "new OFDM_A_index=0x%x,cck_index = 0x%x\n",
- ofdm_index[0], cck_index);
- }
- ele_d = (ofdmswing_table[(u8) ofdm_index[0]] &
- 0xFFC00000) >> 22;
- val_x = rtlphy->iqk_matrix
- [indexforchannel].value[0][0];
- val_y = rtlphy->iqk_matrix
- [indexforchannel].value[0][1];
- if (val_x != 0) {
- if ((val_x & 0x00000200) != 0)
- val_x = val_x | 0xFFFFFC00;
- ele_a =
- ((val_x * ele_d) >> 8) & 0x000003FF;
-
- /* new element C = element D x Y */
- if ((val_y & 0x00000200) != 0)
- val_y = val_y | 0xFFFFFC00;
- ele_c = ((val_y * ele_d) >> 8) & 0x000003FF;
-
- /* wirte new elements A, C, D to regC80 and
- * regC94, element B is always 0 */
- value32 = (ele_d << 22) | ((ele_c & 0x3F) <<
- 16) | ele_a;
- rtl_set_bbreg(hw, ROFDM0_XATXIQIMBALANCE,
- MASKDWORD, value32);
-
- value32 = (ele_c & 0x000003C0) >> 6;
- rtl_set_bbreg(hw, ROFDM0_XCTXAFE, MASKH4BITS,
- value32);
-
- value32 = ((val_x * ele_d) >> 7) & 0x01;
- rtl_set_bbreg(hw, ROFDM0_ECCATHRESHOLD, BIT(24),
- value32);
+ }
+ if (is2t) {
+ RT_TRACE(rtlpriv, COMP_POWER_TRACKING, DBG_LOUD,
+ "new OFDM_A_index=0x%x, OFDM_B_index = 0x%x, cck_index=0x%x\n",
+ ofdm_index[0], ofdm_index[1],
+ cck_index);
+ } else {
+ RT_TRACE(rtlpriv, COMP_POWER_TRACKING, DBG_LOUD,
+ "new OFDM_A_index=0x%x,cck_index = 0x%x\n",
+ ofdm_index[0], cck_index);
+ }
+ ele_d = (ofdmswing_table[ofdm_index[0]] & 0xFFC00000) >> 22;
+ val_x = rtlphy->iqk_matrix[indexforchannel].value[0][0];
+ val_y = rtlphy->iqk_matrix[indexforchannel].value[0][1];
+ if (val_x != 0) {
+ if ((val_x & 0x00000200) != 0)
+ val_x = val_x | 0xFFFFFC00;
+ ele_a = ((val_x * ele_d) >> 8) & 0x000003FF;
+
+ /* new element C = element D x Y */
+ if ((val_y & 0x00000200) != 0)
+ val_y = val_y | 0xFFFFFC00;
+ ele_c = ((val_y * ele_d) >> 8) & 0x000003FF;
+
+ /* write new elements A, C, D to regC80 and
+ * regC94, element B is always 0
+ */
+ value32 = (ele_d << 22) | ((ele_c & 0x3F) << 16) | ele_a;
+ rtl_set_bbreg(hw, ROFDM0_XATXIQIMBALANCE,
+ MASKDWORD, value32);
+
+ value32 = (ele_c & 0x000003C0) >> 6;
+ rtl_set_bbreg(hw, ROFDM0_XCTXAFE, MASKH4BITS,
+ value32);
+
+ value32 = ((val_x * ele_d) >> 7) & 0x01;
+ rtl_set_bbreg(hw, ROFDM0_ECCATHRESHOLD, BIT(24),
+ value32);
- } else {
- rtl_set_bbreg(hw, ROFDM0_XATXIQIMBALANCE,
- MASKDWORD,
- ofdmswing_table
- [(u8)ofdm_index[0]]);
- rtl_set_bbreg(hw, ROFDM0_XCTXAFE, MASKH4BITS,
- 0x00);
- rtl_set_bbreg(hw, ROFDM0_ECCATHRESHOLD,
- BIT(24), 0x00);
- }
+ } else {
+ rtl_set_bbreg(hw, ROFDM0_XATXIQIMBALANCE,
+ MASKDWORD,
+ ofdmswing_table[(u8)ofdm_index[0]]);
+ rtl_set_bbreg(hw, ROFDM0_XCTXAFE, MASKH4BITS,
+ 0x00);
+ rtl_set_bbreg(hw, ROFDM0_ECCATHRESHOLD,
+ BIT(24), 0x00);
+ }
- RT_TRACE(rtlpriv, COMP_POWER_TRACKING, DBG_LOUD,
- "TxPwrTracking for interface %d path A: X = 0x%lx, Y = 0x%lx ele_A = 0x%lx ele_C = 0x%lx ele_D = 0x%lx 0xe94 = 0x%lx 0xe9c = 0x%lx\n",
- rtlhal->interfaceindex,
- val_x, val_y, ele_a, ele_c, ele_d,
- val_x, val_y);
-
- if (cck_index >= CCK_TABLE_SIZE)
- cck_index = CCK_TABLE_SIZE - 1;
- if (cck_index < 0)
- cck_index = 0;
- if (rtlhal->current_bandtype == BAND_ON_2_4G) {
- /* Adjust CCK according to IQK result */
- if (!rtlpriv->dm.cck_inch14) {
- rtl_write_byte(rtlpriv, 0xa22,
- cckswing_table_ch1ch13
- [(u8)cck_index][0]);
- rtl_write_byte(rtlpriv, 0xa23,
- cckswing_table_ch1ch13
- [(u8)cck_index][1]);
- rtl_write_byte(rtlpriv, 0xa24,
- cckswing_table_ch1ch13
- [(u8)cck_index][2]);
- rtl_write_byte(rtlpriv, 0xa25,
- cckswing_table_ch1ch13
- [(u8)cck_index][3]);
- rtl_write_byte(rtlpriv, 0xa26,
- cckswing_table_ch1ch13
- [(u8)cck_index][4]);
- rtl_write_byte(rtlpriv, 0xa27,
- cckswing_table_ch1ch13
- [(u8)cck_index][5]);
- rtl_write_byte(rtlpriv, 0xa28,
- cckswing_table_ch1ch13
- [(u8)cck_index][6]);
- rtl_write_byte(rtlpriv, 0xa29,
- cckswing_table_ch1ch13
- [(u8)cck_index][7]);
- } else {
- rtl_write_byte(rtlpriv, 0xa22,
- cckswing_table_ch14
- [(u8)cck_index][0]);
- rtl_write_byte(rtlpriv, 0xa23,
- cckswing_table_ch14
- [(u8)cck_index][1]);
- rtl_write_byte(rtlpriv, 0xa24,
- cckswing_table_ch14
- [(u8)cck_index][2]);
- rtl_write_byte(rtlpriv, 0xa25,
- cckswing_table_ch14
- [(u8)cck_index][3]);
- rtl_write_byte(rtlpriv, 0xa26,
- cckswing_table_ch14
- [(u8)cck_index][4]);
- rtl_write_byte(rtlpriv, 0xa27,
- cckswing_table_ch14
- [(u8)cck_index][5]);
- rtl_write_byte(rtlpriv, 0xa28,
- cckswing_table_ch14
- [(u8)cck_index][6]);
- rtl_write_byte(rtlpriv, 0xa29,
- cckswing_table_ch14
- [(u8)cck_index][7]);
- }
- }
- if (is2t) {
- ele_d = (ofdmswing_table[(u8) ofdm_index[1]] &
- 0xFFC00000) >> 22;
- val_x = rtlphy->iqk_matrix
- [indexforchannel].value[0][4];
- val_y = rtlphy->iqk_matrix
- [indexforchannel].value[0][5];
- if (val_x != 0) {
- if ((val_x & 0x00000200) != 0)
- /* consider minus */
- val_x = val_x | 0xFFFFFC00;
- ele_a = ((val_x * ele_d) >> 8) &
- 0x000003FF;
- /* new element C = element D x Y */
- if ((val_y & 0x00000200) != 0)
- val_y =
- val_y | 0xFFFFFC00;
- ele_c =
- ((val_y *
- ele_d) >> 8) & 0x00003FF;
- /* write new elements A, C, D to regC88
- * and regC9C, element B is always 0
- */
- value32 = (ele_d << 22) |
- ((ele_c & 0x3F) << 16) |
- ele_a;
- rtl_set_bbreg(hw,
- ROFDM0_XBTXIQIMBALANCE,
- MASKDWORD, value32);
- value32 = (ele_c & 0x000003C0) >> 6;
- rtl_set_bbreg(hw, ROFDM0_XDTXAFE,
- MASKH4BITS, value32);
- value32 = ((val_x * ele_d) >> 7) & 0x01;
- rtl_set_bbreg(hw, ROFDM0_ECCATHRESHOLD,
- BIT(28), value32);
- } else {
- rtl_set_bbreg(hw,
- ROFDM0_XBTXIQIMBALANCE,
- MASKDWORD,
- ofdmswing_table
- [(u8) ofdm_index[1]]);
- rtl_set_bbreg(hw, ROFDM0_XDTXAFE,
- MASKH4BITS, 0x00);
- rtl_set_bbreg(hw, ROFDM0_ECCATHRESHOLD,
- BIT(28), 0x00);
- }
- RT_TRACE(rtlpriv, COMP_POWER_TRACKING, DBG_LOUD,
- "TxPwrTracking path B: X = 0x%lx, Y = 0x%lx ele_A = 0x%lx ele_C = 0x%lx ele_D = 0x%lx 0xeb4 = 0x%lx 0xebc = 0x%lx\n",
- val_x, val_y, ele_a, ele_c,
- ele_d, val_x, val_y);
- }
- RT_TRACE(rtlpriv, COMP_POWER_TRACKING, DBG_LOUD,
- "TxPwrTracking 0xc80 = 0x%x, 0xc94 = 0x%x RF 0x24 = 0x%x\n",
- rtl_get_bbreg(hw, 0xc80, MASKDWORD),
- rtl_get_bbreg(hw, 0xc94, MASKDWORD),
- rtl_get_rfreg(hw, RF90_PATH_A, 0x24,
- RFREG_OFFSET_MASK));
- }
- if ((delta_iqk > rtlefuse->delta_iqk) &&
- (rtlefuse->delta_iqk != 0)) {
- rtl92d_phy_reset_iqk_result(hw);
- rtlpriv->dm.thermalvalue_iqk = thermalvalue;
- rtl92d_phy_iq_calibrate(hw);
+ RT_TRACE(rtlpriv, COMP_POWER_TRACKING, DBG_LOUD,
+ "TxPwrTracking for interface %d path A: X = 0x%lx, Y = 0x%lx ele_A = 0x%lx ele_C = 0x%lx ele_D = 0x%lx 0xe94 = 0x%lx 0xe9c = 0x%lx\n",
+ rtlhal->interfaceindex,
+ val_x, val_y, ele_a, ele_c, ele_d,
+ val_x, val_y);
+
+ if (cck_index >= CCK_TABLE_SIZE)
+ cck_index = CCK_TABLE_SIZE - 1;
+ if (cck_index < 0)
+ cck_index = 0;
+ if (rtlhal->current_bandtype == BAND_ON_2_4G) {
+ /* Adjust CCK according to IQK result */
+ if (!rtlpriv->dm.cck_inch14) {
+ rtl_write_byte(rtlpriv, 0xa22,
+ cckswing_table_ch1ch13[cck_index][0]);
+ rtl_write_byte(rtlpriv, 0xa23,
+ cckswing_table_ch1ch13[cck_index][1]);
+ rtl_write_byte(rtlpriv, 0xa24,
+ cckswing_table_ch1ch13[cck_index][2]);
+ rtl_write_byte(rtlpriv, 0xa25,
+ cckswing_table_ch1ch13[cck_index][3]);
+ rtl_write_byte(rtlpriv, 0xa26,
+ cckswing_table_ch1ch13[cck_index][4]);
+ rtl_write_byte(rtlpriv, 0xa27,
+ cckswing_table_ch1ch13[cck_index][5]);
+ rtl_write_byte(rtlpriv, 0xa28,
+ cckswing_table_ch1ch13[cck_index][6]);
+ rtl_write_byte(rtlpriv, 0xa29,
+ cckswing_table_ch1ch13[cck_index][7]);
+ } else {
+ rtl_write_byte(rtlpriv, 0xa22,
+ cckswing_table_ch14[cck_index][0]);
+ rtl_write_byte(rtlpriv, 0xa23,
+ cckswing_table_ch14[cck_index][1]);
+ rtl_write_byte(rtlpriv, 0xa24,
+ cckswing_table_ch14[cck_index][2]);
+ rtl_write_byte(rtlpriv, 0xa25,
+ cckswing_table_ch14[cck_index][3]);
+ rtl_write_byte(rtlpriv, 0xa26,
+ cckswing_table_ch14[cck_index][4]);
+ rtl_write_byte(rtlpriv, 0xa27,
+ cckswing_table_ch14[cck_index][5]);
+ rtl_write_byte(rtlpriv, 0xa28,
+ cckswing_table_ch14[cck_index][6]);
+ rtl_write_byte(rtlpriv, 0xa29,
+ cckswing_table_ch14[cck_index][7]);
}
- if (delta_rxgain > 0 && rtlhal->current_bandtype == BAND_ON_5G
- && thermalvalue <= rtlefuse->eeprom_thermalmeter) {
- rtlpriv->dm.thermalvalue_rxgain = thermalvalue;
- rtl92d_dm_rxgain_tracking_thermalmeter(hw);
+ }
+ if (is2t) {
+ ele_d = (ofdmswing_table[ofdm_index[1]] & 0xFFC00000) >> 22;
+ val_x = rtlphy->iqk_matrix[indexforchannel].value[0][4];
+ val_y = rtlphy->iqk_matrix[indexforchannel].value[0][5];
+ if (val_x != 0) {
+ if ((val_x & 0x00000200) != 0)
+ /* consider minus */
+ val_x = val_x | 0xFFFFFC00;
+ ele_a = ((val_x * ele_d) >> 8) & 0x000003FF;
+ /* new element C = element D x Y */
+ if ((val_y & 0x00000200) != 0)
+ val_y = val_y | 0xFFFFFC00;
+ ele_c = ((val_y * ele_d) >> 8) & 0x00003FF;
+ /* write new elements A, C, D to regC88
+ * and regC9C, element B is always 0
+ */
+ value32 = (ele_d << 22) | ((ele_c & 0x3F) << 16) | ele_a;
+ rtl_set_bbreg(hw,
+ ROFDM0_XBTXIQIMBALANCE,
+ MASKDWORD, value32);
+ value32 = (ele_c & 0x000003C0) >> 6;
+ rtl_set_bbreg(hw, ROFDM0_XDTXAFE,
+ MASKH4BITS, value32);
+ value32 = ((val_x * ele_d) >> 7) & 0x01;
+ rtl_set_bbreg(hw, ROFDM0_ECCATHRESHOLD,
+ BIT(28), value32);
+ } else {
+ rtl_set_bbreg(hw,
+ ROFDM0_XBTXIQIMBALANCE,
+ MASKDWORD,
+ ofdmswing_table[ofdm_index[1]]);
+ rtl_set_bbreg(hw, ROFDM0_XDTXAFE,
+ MASKH4BITS, 0x00);
+ rtl_set_bbreg(hw, ROFDM0_ECCATHRESHOLD,
+ BIT(28), 0x00);
}
- if (rtlpriv->dm.txpower_track_control)
- rtlpriv->dm.thermalvalue = thermalvalue;
+ RT_TRACE(rtlpriv, COMP_POWER_TRACKING, DBG_LOUD,
+ "TxPwrTracking path B: X = 0x%lx, Y = 0x%lx ele_A = 0x%lx ele_C = 0x%lx ele_D = 0x%lx 0xeb4 = 0x%lx 0xebc = 0x%lx\n",
+ val_x, val_y, ele_a, ele_c,
+ ele_d, val_x, val_y);
+ }
+ RT_TRACE(rtlpriv, COMP_POWER_TRACKING, DBG_LOUD,
+ "TxPwrTracking 0xc80 = 0x%x, 0xc94 = 0x%x RF 0x24 = 0x%x\n",
+ rtl_get_bbreg(hw, 0xc80, MASKDWORD),
+ rtl_get_bbreg(hw, 0xc94, MASKDWORD),
+ rtl_get_rfreg(hw, RF90_PATH_A, 0x24,
+ RFREG_OFFSET_MASK));
+
+check_delta:
+ if (delta_iqk > rtlefuse->delta_iqk && rtlefuse->delta_iqk != 0) {
+ rtl92d_phy_reset_iqk_result(hw);
+ rtlpriv->dm.thermalvalue_iqk = thermalvalue;
+ rtl92d_phy_iq_calibrate(hw);
}
+ if (delta_rxgain > 0 && rtlhal->current_bandtype == BAND_ON_5G &&
+ thermalvalue <= rtlefuse->eeprom_thermalmeter) {
+ rtlpriv->dm.thermalvalue_rxgain = thermalvalue;
+ rtl92d_dm_rxgain_tracking_thermalmeter(hw);
+ }
+ if (rtlpriv->dm.txpower_track_control)
+ rtlpriv->dm.thermalvalue = thermalvalue;
+exit:
RT_TRACE(rtlpriv, COMP_POWER_TRACKING, DBG_LOUD, "<===\n");
}
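
Most of the large dm.c hunk above is mechanical de-indentation: the body formerly nested inside if (thermalvalue) { ... } now runs at the top level, with goto exit, goto old_index_done and goto check_delta replacing the nested branches. In miniature, with purely illustrative names, the transformation looks like this:

#include <linux/types.h>

/* before: the whole body nested under the validity check */
static int track_before(int value, bool cached, int *state)
{
	if (value) {
		if (!cached)
			*state = value;		/* expensive (re)initialization */
		*state += value;		/* long common processing */
	}
	return *state;
}

/* after: early exits keep the common path at one indent level */
static int track_after(int value, bool cached, int *state)
{
	if (!value)
		goto exit;

	if (cached)
		goto init_done;

	*state = value;				/* expensive (re)initialization */

init_done:
	*state += value;			/* long common processing */
exit:
	return *state;
}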
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/dm.c b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/dm.c
index 49d05b631ba1..b54230433a6b 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/dm.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/dm.c
@@ -655,10 +655,9 @@ static void rtl8821ae_dm_check_rssi_monitor(struct ieee80211_hw *hw)
u8 h2c_parameter[4] = { 0 };
long tmp_entry_max_pwdb = 0, tmp_entry_min_pwdb = 0xff;
u8 stbc_tx = 0;
- u64 cur_txokcnt = 0, cur_rxokcnt = 0;
+ u64 cur_rxokcnt = 0;
static u64 last_txokcnt = 0, last_rxokcnt;
- cur_txokcnt = rtlpriv->stats.txbytesunicast - last_txokcnt;
cur_rxokcnt = rtlpriv->stats.rxbytesunicast - last_rxokcnt;
last_txokcnt = rtlpriv->stats.txbytesunicast;
last_rxokcnt = rtlpriv->stats.rxbytesunicast;
@@ -2654,7 +2653,6 @@ static void rtl8821ae_dm_check_edca_turbo(struct ieee80211_hw *hw)
u32 edca_be = 0x5ea42b;
u8 iot_peer = 0;
bool *pb_is_cur_rdl_state = NULL;
- bool b_last_is_cur_rdl_state = false;
bool b_bias_on_rx = false;
bool b_edca_turbo_on = false;
@@ -2672,7 +2670,6 @@ static void rtl8821ae_dm_check_edca_turbo(struct ieee80211_hw *hw)
* list parameter for different platforms
*===============================
*/
- b_last_is_cur_rdl_state = rtlpriv->dm.is_cur_rdlstate;
pb_is_cur_rdl_state = &rtlpriv->dm.is_cur_rdlstate;
cur_tx_ok_cnt = rtlpriv->stats.txbytesunicast - rtldm->last_tx_ok_cnt;
@@ -2958,10 +2955,11 @@ void rtl8821ae_dm_set_tx_ant_by_tx_info(struct ieee80211_hw *hw,
struct rtl_hal *rtlhal = rtl_hal(rtl_priv(hw));
struct rtl_dm *rtldm = rtl_dm(rtl_priv(hw));
struct fast_ant_training *pfat_table = &rtldm->fat_table;
+ __le32 *pdesc32 = (__le32 *)pdesc;
if (rtlhal->hw_type != HARDWARE_TYPE_RTL8812AE)
return;
if (rtlefuse->antenna_div_type == CG_TRX_HW_ANTDIV)
- SET_TX_DESC_TX_ANT(pdesc, pfat_table->antsel_a[mac_id]);
+ set_tx_desc_tx_ant(pdesc32, pfat_table->antsel_a[mac_id]);
}
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/trx.c b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/trx.c
index 7b6faf38e09c..cd809c992245 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/trx.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/trx.c
@@ -56,7 +56,7 @@ static u8 _rtl8821ae_evm_dbm_jaguar(s8 value)
}
static void query_rxphystatus(struct ieee80211_hw *hw,
- struct rtl_stats *pstatus, u8 *pdesc,
+ struct rtl_stats *pstatus, __le32 *pdesc,
struct rx_fwinfo_8821ae *p_drvinfo,
bool bpacket_match_bssid,
bool bpacket_toself, bool packet_beacon)
@@ -274,7 +274,7 @@ static void query_rxphystatus(struct ieee80211_hw *hw,
static void translate_rx_signal_stuff(struct ieee80211_hw *hw,
struct sk_buff *skb,
- struct rtl_stats *pstatus, u8 *pdesc,
+ struct rtl_stats *pstatus, __le32 *pdesc,
struct rx_fwinfo_8821ae *p_drvinfo)
{
struct rtl_mac *mac = rtl_mac(rtl_priv(hw));
@@ -332,14 +332,14 @@ static void translate_rx_signal_stuff(struct ieee80211_hw *hw,
rtl_process_phyinfo(hw, tmp_buf, pstatus);
}
-static void _rtl8821ae_insert_emcontent(struct rtl_tcb_desc *ptcb_desc,
- u8 *virtualaddress)
+static void rtl8821ae_insert_emcontent(struct rtl_tcb_desc *ptcb_desc,
+ __le32 *virtualaddress)
{
u32 dwtmp = 0;
memset(virtualaddress, 0, 8);
- SET_EARLYMODE_PKTNUM(virtualaddress, ptcb_desc->empkt_num);
+ set_earlymode_pktnum(virtualaddress, ptcb_desc->empkt_num);
if (ptcb_desc->empkt_num == 1) {
dwtmp = ptcb_desc->empkt_len[0];
} else {
@@ -347,7 +347,7 @@ static void _rtl8821ae_insert_emcontent(struct rtl_tcb_desc *ptcb_desc,
dwtmp += ((dwtmp % 4) ? (4 - dwtmp % 4) : 0)+4;
dwtmp += ptcb_desc->empkt_len[1];
}
- SET_EARLYMODE_LEN0(virtualaddress, dwtmp);
+ set_earlymode_len0(virtualaddress, dwtmp);
if (ptcb_desc->empkt_num <= 3) {
dwtmp = ptcb_desc->empkt_len[2];
@@ -356,7 +356,7 @@ static void _rtl8821ae_insert_emcontent(struct rtl_tcb_desc *ptcb_desc,
dwtmp += ((dwtmp % 4) ? (4 - dwtmp % 4) : 0)+4;
dwtmp += ptcb_desc->empkt_len[3];
}
- SET_EARLYMODE_LEN1(virtualaddress, dwtmp);
+ set_earlymode_len1(virtualaddress, dwtmp);
if (ptcb_desc->empkt_num <= 5) {
dwtmp = ptcb_desc->empkt_len[4];
} else {
@@ -364,8 +364,8 @@ static void _rtl8821ae_insert_emcontent(struct rtl_tcb_desc *ptcb_desc,
dwtmp += ((dwtmp % 4) ? (4 - dwtmp % 4) : 0)+4;
dwtmp += ptcb_desc->empkt_len[5];
}
- SET_EARLYMODE_LEN2_1(virtualaddress, dwtmp & 0xF);
- SET_EARLYMODE_LEN2_2(virtualaddress, dwtmp >> 4);
+ set_earlymode_len2_1(virtualaddress, dwtmp & 0xF);
+ set_earlymode_len2_2(virtualaddress, dwtmp >> 4);
if (ptcb_desc->empkt_num <= 7) {
dwtmp = ptcb_desc->empkt_len[6];
} else {
@@ -373,7 +373,7 @@ static void _rtl8821ae_insert_emcontent(struct rtl_tcb_desc *ptcb_desc,
dwtmp += ((dwtmp % 4) ? (4 - dwtmp % 4) : 0)+4;
dwtmp += ptcb_desc->empkt_len[7];
}
- SET_EARLYMODE_LEN3(virtualaddress, dwtmp);
+ set_earlymode_len3(virtualaddress, dwtmp);
if (ptcb_desc->empkt_num <= 9) {
dwtmp = ptcb_desc->empkt_len[8];
} else {
@@ -381,15 +381,15 @@ static void _rtl8821ae_insert_emcontent(struct rtl_tcb_desc *ptcb_desc,
dwtmp += ((dwtmp % 4) ? (4 - dwtmp % 4) : 0)+4;
dwtmp += ptcb_desc->empkt_len[9];
}
- SET_EARLYMODE_LEN4(virtualaddress, dwtmp);
+ set_earlymode_len4(virtualaddress, dwtmp);
}
-static bool rtl8821ae_get_rxdesc_is_ht(struct ieee80211_hw *hw, u8 *pdesc)
+static bool rtl8821ae_get_rxdesc_is_ht(struct ieee80211_hw *hw, __le32 *pdesc)
{
struct rtl_priv *rtlpriv = rtl_priv(hw);
u8 rx_rate = 0;
- rx_rate = GET_RX_DESC_RXMCS(pdesc);
+ rx_rate = get_rx_desc_rxmcs(pdesc);
RT_TRACE(rtlpriv, COMP_RXDESC, DBG_LOUD, "rx_rate=0x%02x.\n", rx_rate);
@@ -398,12 +398,12 @@ static bool rtl8821ae_get_rxdesc_is_ht(struct ieee80211_hw *hw, u8 *pdesc)
return false;
}
-static bool rtl8821ae_get_rxdesc_is_vht(struct ieee80211_hw *hw, u8 *pdesc)
+static bool rtl8821ae_get_rxdesc_is_vht(struct ieee80211_hw *hw, __le32 *pdesc)
{
struct rtl_priv *rtlpriv = rtl_priv(hw);
u8 rx_rate = 0;
- rx_rate = GET_RX_DESC_RXMCS(pdesc);
+ rx_rate = get_rx_desc_rxmcs(pdesc);
RT_TRACE(rtlpriv, COMP_RXDESC, DBG_LOUD, "rx_rate=0x%02x.\n", rx_rate);
@@ -412,12 +412,12 @@ static bool rtl8821ae_get_rxdesc_is_vht(struct ieee80211_hw *hw, u8 *pdesc)
return false;
}
-static u8 rtl8821ae_get_rx_vht_nss(struct ieee80211_hw *hw, u8 *pdesc)
+static u8 rtl8821ae_get_rx_vht_nss(struct ieee80211_hw *hw, __le32 *pdesc)
{
u8 rx_rate = 0;
u8 vht_nss = 0;
- rx_rate = GET_RX_DESC_RXMCS(pdesc);
+ rx_rate = get_rx_desc_rxmcs(pdesc);
if ((rx_rate >= DESC_RATEVHT1SS_MCS0) &&
(rx_rate <= DESC_RATEVHT1SS_MCS9))
vht_nss = 1;
@@ -431,30 +431,31 @@ static u8 rtl8821ae_get_rx_vht_nss(struct ieee80211_hw *hw, u8 *pdesc)
bool rtl8821ae_rx_query_desc(struct ieee80211_hw *hw,
struct rtl_stats *status,
struct ieee80211_rx_status *rx_status,
- u8 *pdesc, struct sk_buff *skb)
+ u8 *pdesc8, struct sk_buff *skb)
{
struct rtl_priv *rtlpriv = rtl_priv(hw);
struct rx_fwinfo_8821ae *p_drvinfo;
struct ieee80211_hdr *hdr;
u8 wake_match;
- u32 phystatus = GET_RX_DESC_PHYST(pdesc);
+ __le32 *pdesc = (__le32 *)pdesc8;
+ u32 phystatus = get_rx_desc_physt(pdesc);
- status->length = (u16)GET_RX_DESC_PKT_LEN(pdesc);
- status->rx_drvinfo_size = (u8)GET_RX_DESC_DRV_INFO_SIZE(pdesc) *
+ status->length = (u16)get_rx_desc_pkt_len(pdesc);
+ status->rx_drvinfo_size = (u8)get_rx_desc_drv_info_size(pdesc) *
RX_DRV_INFO_SIZE_UNIT;
- status->rx_bufshift = (u8)(GET_RX_DESC_SHIFT(pdesc) & 0x03);
- status->icv = (u16)GET_RX_DESC_ICV(pdesc);
- status->crc = (u16)GET_RX_DESC_CRC32(pdesc);
+ status->rx_bufshift = (u8)(get_rx_desc_shift(pdesc) & 0x03);
+ status->icv = (u16)get_rx_desc_icv(pdesc);
+ status->crc = (u16)get_rx_desc_crc32(pdesc);
status->hwerror = (status->crc | status->icv);
- status->decrypted = !GET_RX_DESC_SWDEC(pdesc);
- status->rate = (u8)GET_RX_DESC_RXMCS(pdesc);
- status->shortpreamble = (u16)GET_RX_DESC_SPLCP(pdesc);
- status->isampdu = (bool)(GET_RX_DESC_PAGGR(pdesc) == 1);
- status->isfirst_ampdu = (bool)(GET_RX_DESC_PAGGR(pdesc) == 1);
- status->timestamp_low = GET_RX_DESC_TSFL(pdesc);
- status->rx_packet_bw = GET_RX_DESC_BW(pdesc);
- status->macid = GET_RX_DESC_MACID(pdesc);
- status->is_short_gi = !(bool)GET_RX_DESC_SPLCP(pdesc);
+ status->decrypted = !get_rx_desc_swdec(pdesc);
+ status->rate = (u8)get_rx_desc_rxmcs(pdesc);
+ status->shortpreamble = (u16)get_rx_desc_splcp(pdesc);
+ status->isampdu = (bool)(get_rx_desc_paggr(pdesc) == 1);
+ status->isfirst_ampdu = (bool)(get_rx_desc_paggr(pdesc) == 1);
+ status->timestamp_low = get_rx_desc_tsfl(pdesc);
+ status->rx_packet_bw = get_rx_desc_bw(pdesc);
+ status->macid = get_rx_desc_macid(pdesc);
+ status->is_short_gi = !(bool)get_rx_desc_splcp(pdesc);
status->is_ht = rtl8821ae_get_rxdesc_is_ht(hw, pdesc);
status->is_vht = rtl8821ae_get_rxdesc_is_vht(hw, pdesc);
status->vht_nss = rtl8821ae_get_rx_vht_nss(hw, pdesc);
@@ -467,16 +468,16 @@ bool rtl8821ae_rx_query_desc(struct ieee80211_hw *hw,
status->is_ht, status->is_vht, status->vht_nss,
status->is_short_gi);
- if (GET_RX_STATUS_DESC_RPT_SEL(pdesc))
+ if (get_rx_status_desc_rpt_sel(pdesc))
status->packet_report_type = C2H_PACKET;
else
status->packet_report_type = NORMAL_RX;
- if (GET_RX_STATUS_DESC_PATTERN_MATCH(pdesc))
+ if (get_rx_status_desc_pattern_match(pdesc))
wake_match = BIT(2);
- else if (GET_RX_STATUS_DESC_MAGIC_MATCH(pdesc))
+ else if (get_rx_status_desc_magic_match(pdesc))
wake_match = BIT(1);
- else if (GET_RX_STATUS_DESC_UNICAST_MATCH(pdesc))
+ else if (get_rx_status_desc_unicast_match(pdesc))
wake_match = BIT(0);
else
wake_match = 0;
@@ -543,9 +544,9 @@ bool rtl8821ae_rx_query_desc(struct ieee80211_hw *hw,
rx_status->signal = status->recvsignalpower + 10;
if (status->packet_report_type == TX_REPORT2) {
status->macid_valid_entry[0] =
- GET_RX_RPT2_DESC_MACID_VALID_1(pdesc);
+ get_rx_rpt2_desc_macid_valid_1(pdesc);
status->macid_valid_entry[1] =
- GET_RX_RPT2_DESC_MACID_VALID_2(pdesc);
+ get_rx_rpt2_desc_macid_valid_2(pdesc);
}
return true;
}
@@ -656,7 +657,7 @@ static u8 rtl8821ae_sc_mapping(struct ieee80211_hw *hw,
}
void rtl8821ae_tx_fill_desc(struct ieee80211_hw *hw,
- struct ieee80211_hdr *hdr, u8 *pdesc_tx, u8 *txbd,
+ struct ieee80211_hdr *hdr, u8 *pdesc8, u8 *txbd,
struct ieee80211_tx_info *info,
struct ieee80211_sta *sta,
struct sk_buff *skb,
@@ -667,7 +668,6 @@ void rtl8821ae_tx_fill_desc(struct ieee80211_hw *hw,
struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
struct rtl_hal *rtlhal = rtl_hal(rtlpriv);
struct rtlwifi_tx_info *tx_info = rtl_tx_skb_cb_info(skb);
- u8 *pdesc = (u8 *)pdesc_tx;
u16 seq_number;
__le16 fc = hdr->frame_control;
unsigned int buf_len = 0;
@@ -679,6 +679,8 @@ void rtl8821ae_tx_fill_desc(struct ieee80211_hw *hw,
cpu_to_le16(IEEE80211_FCTL_MOREFRAGS)) == 0);
dma_addr_t mapping;
u8 short_gi = 0;
+ bool tmp_bool;
+ __le32 *pdesc = (__le32 *)pdesc8;
seq_number = (le16_to_cpu(hdr->seq_ctrl) & IEEE80211_SCTL_SEQ) >> 4;
rtl_get_tcb_desc(hw, info, sta, skb, ptcb_desc);
@@ -695,69 +697,70 @@ void rtl8821ae_tx_fill_desc(struct ieee80211_hw *hw,
"DMA mapping error\n");
return;
}
- CLEAR_PCI_TX_DESC_CONTENT(pdesc, sizeof(struct tx_desc_8821ae));
+ clear_pci_tx_desc_content(pdesc, sizeof(struct tx_desc_8821ae));
if (ieee80211_is_nullfunc(fc) || ieee80211_is_ctl(fc)) {
firstseg = true;
lastseg = true;
}
if (firstseg) {
if (rtlhal->earlymode_enable) {
- SET_TX_DESC_PKT_OFFSET(pdesc, 1);
- SET_TX_DESC_OFFSET(pdesc, USB_HWDESC_HEADER_LEN +
+ set_tx_desc_pkt_offset(pdesc, 1);
+ set_tx_desc_offset(pdesc, USB_HWDESC_HEADER_LEN +
EM_HDR_LEN);
if (ptcb_desc->empkt_num) {
RT_TRACE(rtlpriv, COMP_SEND, DBG_TRACE,
"Insert 8 byte.pTcb->EMPktNum:%d\n",
ptcb_desc->empkt_num);
- _rtl8821ae_insert_emcontent(ptcb_desc,
- (u8 *)(skb->data));
+ rtl8821ae_insert_emcontent(ptcb_desc,
+ (__le32 *)skb->data);
}
} else {
- SET_TX_DESC_OFFSET(pdesc, USB_HWDESC_HEADER_LEN);
+ set_tx_desc_offset(pdesc, USB_HWDESC_HEADER_LEN);
}
/* ptcb_desc->use_driver_rate = true; */
- SET_TX_DESC_TX_RATE(pdesc, ptcb_desc->hw_rate);
+ set_tx_desc_tx_rate(pdesc, ptcb_desc->hw_rate);
if (ptcb_desc->hw_rate > DESC_RATEMCS0)
short_gi = (ptcb_desc->use_shortgi) ? 1 : 0;
else
short_gi = (ptcb_desc->use_shortpreamble) ? 1 : 0;
- SET_TX_DESC_DATA_SHORTGI(pdesc, short_gi);
+ set_tx_desc_data_shortgi(pdesc, short_gi);
if (info->flags & IEEE80211_TX_CTL_AMPDU) {
- SET_TX_DESC_AGG_ENABLE(pdesc, 1);
- SET_TX_DESC_MAX_AGG_NUM(pdesc, 0x1f);
+ set_tx_desc_agg_enable(pdesc, 1);
+ set_tx_desc_max_agg_num(pdesc, 0x1f);
}
- SET_TX_DESC_SEQ(pdesc, seq_number);
- SET_TX_DESC_RTS_ENABLE(pdesc, ((ptcb_desc->rts_enable &&
+ set_tx_desc_seq(pdesc, seq_number);
+ set_tx_desc_rts_enable(pdesc,
+ ((ptcb_desc->rts_enable &&
!ptcb_desc->cts_enable) ? 1 : 0));
- SET_TX_DESC_HW_RTS_ENABLE(pdesc, 0);
- SET_TX_DESC_CTS2SELF(pdesc, ((ptcb_desc->cts_enable) ? 1 : 0));
+ set_tx_desc_hw_rts_enable(pdesc, 0);
+ set_tx_desc_cts2self(pdesc, ((ptcb_desc->cts_enable) ? 1 : 0));
- SET_TX_DESC_RTS_RATE(pdesc, ptcb_desc->rts_rate);
- SET_TX_DESC_RTS_SC(pdesc, ptcb_desc->rts_sc);
- SET_TX_DESC_RTS_SHORT(pdesc,
- ((ptcb_desc->rts_rate <= DESC_RATE54M) ?
- (ptcb_desc->rts_use_shortpreamble ? 1 : 0) :
- (ptcb_desc->rts_use_shortgi ? 1 : 0)));
+ set_tx_desc_rts_rate(pdesc, ptcb_desc->rts_rate);
+ set_tx_desc_rts_sc(pdesc, ptcb_desc->rts_sc);
+ tmp_bool = ((ptcb_desc->rts_rate <= DESC_RATE54M) ?
+ (ptcb_desc->rts_use_shortpreamble ? 1 : 0) :
+ (ptcb_desc->rts_use_shortgi ? 1 : 0));
+ set_tx_desc_rts_short(pdesc, tmp_bool);
if (ptcb_desc->tx_enable_sw_calc_duration)
- SET_TX_DESC_NAV_USE_HDR(pdesc, 1);
+ set_tx_desc_nav_use_hdr(pdesc, 1);
- SET_TX_DESC_DATA_BW(pdesc,
- rtl8821ae_bw_mapping(hw, ptcb_desc));
+ set_tx_desc_data_bw(pdesc,
+ rtl8821ae_bw_mapping(hw, ptcb_desc));
- SET_TX_DESC_TX_SUB_CARRIER(pdesc,
- rtl8821ae_sc_mapping(hw, ptcb_desc));
+ set_tx_desc_tx_sub_carrier(pdesc,
+ rtl8821ae_sc_mapping(hw, ptcb_desc));
- SET_TX_DESC_LINIP(pdesc, 0);
- SET_TX_DESC_PKT_SIZE(pdesc, (u16)skb_len);
+ set_tx_desc_linip(pdesc, 0);
+ set_tx_desc_pkt_size(pdesc, (u16)skb_len);
if (sta) {
u8 ampdu_density = sta->ht_cap.ampdu_density;
- SET_TX_DESC_AMPDU_DENSITY(pdesc, ampdu_density);
+ set_tx_desc_ampdu_density(pdesc, ampdu_density);
}
if (info->control.hw_key) {
struct ieee80211_key_conf *keyconf =
@@ -766,69 +769,70 @@ void rtl8821ae_tx_fill_desc(struct ieee80211_hw *hw,
case WLAN_CIPHER_SUITE_WEP40:
case WLAN_CIPHER_SUITE_WEP104:
case WLAN_CIPHER_SUITE_TKIP:
- SET_TX_DESC_SEC_TYPE(pdesc, 0x1);
+ set_tx_desc_sec_type(pdesc, 0x1);
break;
case WLAN_CIPHER_SUITE_CCMP:
- SET_TX_DESC_SEC_TYPE(pdesc, 0x3);
+ set_tx_desc_sec_type(pdesc, 0x3);
break;
default:
- SET_TX_DESC_SEC_TYPE(pdesc, 0x0);
+ set_tx_desc_sec_type(pdesc, 0x0);
break;
}
}
- SET_TX_DESC_QUEUE_SEL(pdesc, fw_qsel);
- SET_TX_DESC_DATA_RATE_FB_LIMIT(pdesc, 0x1F);
- SET_TX_DESC_RTS_RATE_FB_LIMIT(pdesc, 0xF);
- SET_TX_DESC_DISABLE_FB(pdesc, ptcb_desc->disable_ratefallback ?
+ set_tx_desc_queue_sel(pdesc, fw_qsel);
+ set_tx_desc_data_rate_fb_limit(pdesc, 0x1F);
+ set_tx_desc_rts_rate_fb_limit(pdesc, 0xF);
+ set_tx_desc_disable_fb(pdesc, ptcb_desc->disable_ratefallback ?
1 : 0);
- SET_TX_DESC_USE_RATE(pdesc, ptcb_desc->use_driver_rate ? 1 : 0);
+ set_tx_desc_use_rate(pdesc, ptcb_desc->use_driver_rate ? 1 : 0);
if (ieee80211_is_data_qos(fc)) {
if (mac->rdg_en) {
RT_TRACE(rtlpriv, COMP_SEND, DBG_TRACE,
"Enable RDG function.\n");
- SET_TX_DESC_RDG_ENABLE(pdesc, 1);
- SET_TX_DESC_HTC(pdesc, 1);
+ set_tx_desc_rdg_enable(pdesc, 1);
+ set_tx_desc_htc(pdesc, 1);
}
}
/* tx report */
- rtl_set_tx_report(ptcb_desc, pdesc, hw, tx_info);
+ rtl_set_tx_report(ptcb_desc, pdesc8, hw, tx_info);
}
- SET_TX_DESC_FIRST_SEG(pdesc, (firstseg ? 1 : 0));
- SET_TX_DESC_LAST_SEG(pdesc, (lastseg ? 1 : 0));
- SET_TX_DESC_TX_BUFFER_SIZE(pdesc, (u16)buf_len);
- SET_TX_DESC_TX_BUFFER_ADDRESS(pdesc, mapping);
+ set_tx_desc_first_seg(pdesc, (firstseg ? 1 : 0));
+ set_tx_desc_last_seg(pdesc, (lastseg ? 1 : 0));
+ set_tx_desc_tx_buffer_size(pdesc, buf_len);
+ set_tx_desc_tx_buffer_address(pdesc, mapping);
/* if (rtlpriv->dm.useramask) { */
if (1) {
- SET_TX_DESC_RATE_ID(pdesc, ptcb_desc->ratr_index);
- SET_TX_DESC_MACID(pdesc, ptcb_desc->mac_id);
+ set_tx_desc_rate_id(pdesc, ptcb_desc->ratr_index);
+ set_tx_desc_macid(pdesc, ptcb_desc->mac_id);
} else {
- SET_TX_DESC_RATE_ID(pdesc, 0xC + ptcb_desc->ratr_index);
- SET_TX_DESC_MACID(pdesc, ptcb_desc->mac_id);
+ set_tx_desc_rate_id(pdesc, 0xC + ptcb_desc->ratr_index);
+ set_tx_desc_macid(pdesc, ptcb_desc->mac_id);
}
if (!ieee80211_is_data_qos(fc)) {
- SET_TX_DESC_HWSEQ_EN(pdesc, 1);
- SET_TX_DESC_HWSEQ_SEL(pdesc, 0);
+ set_tx_desc_hwseq_en(pdesc, 1);
+ set_tx_desc_hwseq_sel(pdesc, 0);
}
- SET_TX_DESC_MORE_FRAG(pdesc, (lastseg ? 0 : 1));
+ set_tx_desc_more_frag(pdesc, (lastseg ? 0 : 1));
if (is_multicast_ether_addr(ieee80211_get_DA(hdr)) ||
is_broadcast_ether_addr(ieee80211_get_DA(hdr))) {
- SET_TX_DESC_BMC(pdesc, 1);
+ set_tx_desc_bmc(pdesc, 1);
}
- rtl8821ae_dm_set_tx_ant_by_tx_info(hw, pdesc, ptcb_desc->mac_id);
+ rtl8821ae_dm_set_tx_ant_by_tx_info(hw, pdesc8, ptcb_desc->mac_id);
RT_TRACE(rtlpriv, COMP_SEND, DBG_TRACE, "\n");
}
void rtl8821ae_tx_fill_cmddesc(struct ieee80211_hw *hw,
- u8 *pdesc, bool firstseg,
+ u8 *pdesc8, bool firstseg,
bool lastseg, struct sk_buff *skb)
{
struct rtl_priv *rtlpriv = rtl_priv(hw);
struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
u8 fw_queue = QSLT_BEACON;
+ __le32 *pdesc = (__le32 *)pdesc8;
dma_addr_t mapping = pci_map_single(rtlpci->pdev,
skb->data, skb->len,
@@ -839,48 +843,50 @@ void rtl8821ae_tx_fill_cmddesc(struct ieee80211_hw *hw,
"DMA mapping error\n");
return;
}
- CLEAR_PCI_TX_DESC_CONTENT(pdesc, TX_DESC_SIZE);
+ clear_pci_tx_desc_content(pdesc, TX_DESC_SIZE);
- SET_TX_DESC_FIRST_SEG(pdesc, 1);
- SET_TX_DESC_LAST_SEG(pdesc, 1);
+ set_tx_desc_first_seg(pdesc, 1);
+ set_tx_desc_last_seg(pdesc, 1);
- SET_TX_DESC_PKT_SIZE((u8 *)pdesc, (u16)(skb->len));
+ set_tx_desc_pkt_size(pdesc, (u16)(skb->len));
- SET_TX_DESC_OFFSET(pdesc, USB_HWDESC_HEADER_LEN);
+ set_tx_desc_offset(pdesc, USB_HWDESC_HEADER_LEN);
- SET_TX_DESC_USE_RATE(pdesc, 1);
- SET_TX_DESC_TX_RATE(pdesc, DESC_RATE1M);
- SET_TX_DESC_DISABLE_FB(pdesc, 1);
+ set_tx_desc_use_rate(pdesc, 1);
+ set_tx_desc_tx_rate(pdesc, DESC_RATE1M);
+ set_tx_desc_disable_fb(pdesc, 1);
- SET_TX_DESC_DATA_BW(pdesc, 0);
+ set_tx_desc_data_bw(pdesc, 0);
- SET_TX_DESC_HWSEQ_EN(pdesc, 1);
+ set_tx_desc_hwseq_en(pdesc, 1);
- SET_TX_DESC_QUEUE_SEL(pdesc, fw_queue);
+ set_tx_desc_queue_sel(pdesc, fw_queue);
- SET_TX_DESC_TX_BUFFER_SIZE(pdesc, (u16)(skb->len));
+ set_tx_desc_tx_buffer_size(pdesc, skb->len);
- SET_TX_DESC_TX_BUFFER_ADDRESS(pdesc, mapping);
+ set_tx_desc_tx_buffer_address(pdesc, mapping);
- SET_TX_DESC_MACID(pdesc, 0);
+ set_tx_desc_macid(pdesc, 0);
- SET_TX_DESC_OWN(pdesc, 1);
+ set_tx_desc_own(pdesc, 1);
RT_PRINT_DATA(rtlpriv, COMP_CMD, DBG_LOUD,
"H2C Tx Cmd Content\n",
- pdesc, TX_DESC_SIZE);
+ pdesc8, TX_DESC_SIZE);
}
-void rtl8821ae_set_desc(struct ieee80211_hw *hw, u8 *pdesc,
+void rtl8821ae_set_desc(struct ieee80211_hw *hw, u8 *pdesc8,
bool istx, u8 desc_name, u8 *val)
{
+ __le32 *pdesc = (__le32 *)pdesc8;
+
if (istx) {
switch (desc_name) {
case HW_DESC_OWN:
- SET_TX_DESC_OWN(pdesc, 1);
+ set_tx_desc_own(pdesc, 1);
break;
case HW_DESC_TX_NEXTDESC_ADDR:
- SET_TX_DESC_NEXT_DESC_ADDRESS(pdesc, *(u32 *)val);
+ set_tx_desc_next_desc_address(pdesc, *(u32 *)val);
break;
default:
WARN_ONCE(true,
@@ -891,16 +897,16 @@ void rtl8821ae_set_desc(struct ieee80211_hw *hw, u8 *pdesc,
} else {
switch (desc_name) {
case HW_DESC_RXOWN:
- SET_RX_DESC_OWN(pdesc, 1);
+ set_rx_desc_own(pdesc, 1);
break;
case HW_DESC_RXBUFF_ADDR:
- SET_RX_DESC_BUFF_ADDR(pdesc, *(u32 *)val);
+ set_rx_desc_buff_addr(pdesc, *(u32 *)val);
break;
case HW_DESC_RXPKT_LEN:
- SET_RX_DESC_PKT_LEN(pdesc, *(u32 *)val);
+ set_rx_desc_pkt_len(pdesc, *(u32 *)val);
break;
case HW_DESC_RXERO:
- SET_RX_DESC_EOR(pdesc, 1);
+ set_rx_desc_eor(pdesc, 1);
break;
default:
WARN_ONCE(true,
@@ -912,17 +918,18 @@ void rtl8821ae_set_desc(struct ieee80211_hw *hw, u8 *pdesc,
}
u64 rtl8821ae_get_desc(struct ieee80211_hw *hw,
- u8 *pdesc, bool istx, u8 desc_name)
+ u8 *pdesc8, bool istx, u8 desc_name)
{
u32 ret = 0;
+ __le32 *pdesc = (__le32 *)pdesc8;
if (istx) {
switch (desc_name) {
case HW_DESC_OWN:
- ret = GET_TX_DESC_OWN(pdesc);
+ ret = get_tx_desc_own(pdesc);
break;
case HW_DESC_TXBUFF_ADDR:
- ret = GET_TX_DESC_TX_BUFFER_ADDRESS(pdesc);
+ ret = get_tx_desc_tx_buffer_address(pdesc);
break;
default:
WARN_ONCE(true,
@@ -933,13 +940,13 @@ u64 rtl8821ae_get_desc(struct ieee80211_hw *hw,
} else {
switch (desc_name) {
case HW_DESC_OWN:
- ret = GET_RX_DESC_OWN(pdesc);
+ ret = get_rx_desc_own(pdesc);
break;
case HW_DESC_RXPKT_LEN:
- ret = GET_RX_DESC_PKT_LEN(pdesc);
+ ret = get_rx_desc_pkt_len(pdesc);
break;
case HW_DESC_RXBUFF_ADDR:
- ret = GET_RX_DESC_BUFF_ADDR(pdesc);
+ ret = get_rx_desc_buff_addr(pdesc);
break;
default:
WARN_ONCE(true,
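
The trx.c changes above replace the upper-case byte-pointer SET_*/GET_* macros with lower-case helpers that take __le32 *, so the endianness of descriptor accesses becomes visible to the type checker (the matching trx.h rework is only partially shown below). One plausible shape for such a helper pair, assuming the generic helpers from <linux/bitfield.h>; the field layout here is illustrative, not the 8821AE descriptor format:

#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/types.h>

/* example field: bits 0..15 of descriptor dword 0 (layout is illustrative) */
#define MY_TXDESC_PKT_SIZE	GENMASK(15, 0)

static inline void my_set_tx_desc_pkt_size(__le32 *desc, u32 size)
{
	/* read-modify-write the field in the little-endian dword */
	le32p_replace_bits(desc, size, MY_TXDESC_PKT_SIZE);
}

static inline u32 my_get_tx_desc_pkt_size(const __le32 *desc)
{
	/* extract the field, converting to CPU byte order */
	return le32_get_bits(*desc, MY_TXDESC_PKT_SIZE);
}

Compared with open-coded SET_BITS_TO_LE_4BYTE()-style macros, the typed helpers let sparse flag a plain u8 * or host-order value passed where a little-endian descriptor is expected.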
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/trx.h b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/trx.h
index a3feecad645d..81951f0c80b6 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/trx.h
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/trx.h
@@ -14,341 +14,385 @@
#define USB_HWDESC_HEADER_LEN 40
#define CRCLENGTH 4
-#define SET_TX_DESC_PKT_SIZE(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc, 0, 16, __val)
-#define SET_TX_DESC_OFFSET(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc, 16, 8, __val)
-#define SET_TX_DESC_BMC(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc, 24, 1, __val)
-#define SET_TX_DESC_HTC(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc, 25, 1, __val)
-#define SET_TX_DESC_LAST_SEG(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc, 26, 1, __val)
-#define SET_TX_DESC_FIRST_SEG(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc, 27, 1, __val)
-#define SET_TX_DESC_LINIP(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc, 28, 1, __val)
-#define SET_TX_DESC_NO_ACM(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc, 29, 1, __val)
-#define SET_TX_DESC_GF(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc, 30, 1, __val)
-#define SET_TX_DESC_OWN(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc, 31, 1, __val)
-
-#define GET_TX_DESC_PKT_SIZE(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc, 0, 16)
-#define GET_TX_DESC_OFFSET(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc, 16, 8)
-#define GET_TX_DESC_BMC(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc, 24, 1)
-#define GET_TX_DESC_HTC(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc, 25, 1)
-#define GET_TX_DESC_LAST_SEG(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc, 26, 1)
-#define GET_TX_DESC_FIRST_SEG(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc, 27, 1)
-#define GET_TX_DESC_LINIP(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc, 28, 1)
-#define GET_TX_DESC_NO_ACM(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc, 29, 1)
-#define GET_TX_DESC_GF(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc, 30, 1)
-#define GET_TX_DESC_OWN(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc, 31, 1)
-
-#define SET_TX_DESC_MACID(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+4, 0, 7, __val)
-#define SET_TX_DESC_QUEUE_SEL(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+4, 8, 5, __val)
-#define SET_TX_DESC_RDG_NAV_EXT(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+4, 13, 1, __val)
-#define SET_TX_DESC_LSIG_TXOP_EN(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+4, 14, 1, __val)
-#define SET_TX_DESC_PIFS(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+4, 15, 1, __val)
-#define SET_TX_DESC_RATE_ID(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+4, 16, 5, __val)
-#define SET_TX_DESC_EN_DESC_ID(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+4, 21, 1, __val)
-#define SET_TX_DESC_SEC_TYPE(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+4, 22, 2, __val)
-#define SET_TX_DESC_PKT_OFFSET(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+4, 24, 5, __val)
-
-#define SET_TX_DESC_PAID(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+8, 0, 9, __val)
-#define SET_TX_DESC_CCA_RTS(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+8, 10, 2, __val)
-#define SET_TX_DESC_AGG_ENABLE(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+8, 12, 1, __val)
-#define SET_TX_DESC_RDG_ENABLE(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+8, 13, 1, __val)
-#define SET_TX_DESC_BAR_RTY_TH(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+8, 14, 2, __val)
-#define SET_TX_DESC_AGG_BREAK(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+8, 16, 1, __val)
-#define SET_TX_DESC_MORE_FRAG(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+8, 17, 1, __val)
-#define SET_TX_DESC_RAW(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+8, 18, 1, __val)
-#define SET_TX_DESC_SPE_RPT(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE((__pdesc) + 8, 19, 1, __val)
-#define SET_TX_DESC_AMPDU_DENSITY(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+8, 20, 3, __val)
-#define SET_TX_DESC_BT_INT(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+8, 23, 1, __val)
-#define SET_TX_DESC_GID(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+8, 24, 6, __val)
-
-#define SET_TX_DESC_WHEADER_LEN(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+12, 0, 4, __val)
-#define SET_TX_DESC_CHK_EN(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+12, 4, 1, __val)
-#define SET_TX_DESC_EARLY_MODE(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+12, 5, 1, __val)
-#define SET_TX_DESC_HWSEQ_SEL(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+12, 6, 2, __val)
-#define SET_TX_DESC_USE_RATE(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+12, 8, 1, __val)
-#define SET_TX_DESC_DISABLE_RTS_FB(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+12, 9, 1, __val)
-#define SET_TX_DESC_DISABLE_FB(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+12, 10, 1, __val)
-#define SET_TX_DESC_CTS2SELF(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+12, 11, 1, __val)
-#define SET_TX_DESC_RTS_ENABLE(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+12, 12, 1, __val)
-#define SET_TX_DESC_HW_RTS_ENABLE(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+12, 13, 1, __val)
-#define SET_TX_DESC_NAV_USE_HDR(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+12, 15, 1, __val)
-#define SET_TX_DESC_USE_MAX_LEN(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+12, 16, 1, __val)
-#define SET_TX_DESC_MAX_AGG_NUM(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+12, 17, 5, __val)
-#define SET_TX_DESC_NDPA(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+12, 22, 2, __val)
-#define SET_TX_DESC_AMPDU_MAX_TIME(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+12, 24, 8, __val)
-#define SET_TX_DESC_TX_ANT(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+20, 24, 4, __val)
-
-#define SET_TX_DESC_TX_RATE(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+16, 0, 7, __val)
-#define SET_TX_DESC_DATA_RATE_FB_LIMIT(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+16, 8, 5, __val)
-#define SET_TX_DESC_RTS_RATE_FB_LIMIT(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+16, 13, 4, __val)
-#define SET_TX_DESC_RETRY_LIMIT_ENABLE(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+16, 17, 1, __val)
-#define SET_TX_DESC_DATA_RETRY_LIMIT(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+16, 18, 6, __val)
-#define SET_TX_DESC_RTS_RATE(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+16, 24, 5, __val)
-
-#define SET_TX_DESC_TX_SUB_CARRIER(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+20, 0, 4, __val)
-#define SET_TX_DESC_DATA_SHORTGI(__pdesc, __val) \
- SET_BITS_TO_LE_1BYTE(__pdesc+20, 4, 1, __val)
-#define SET_TX_DESC_DATA_BW(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+20, 5, 2, __val)
-#define SET_TX_DESC_DATA_LDPC(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+20, 7, 1, __val)
-#define SET_TX_DESC_DATA_STBC(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+20, 8, 2, __val)
-#define SET_TX_DESC_CTROL_STBC(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+20, 10, 2, __val)
-#define SET_TX_DESC_RTS_SHORT(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+20, 12, 1, __val)
-#define SET_TX_DESC_RTS_SC(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+20, 13, 4, __val)
-
-#define SET_TX_DESC_SW_DEFINE(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE((__pdesc) + 24, 0, 12, __val)
-#define SET_TX_DESC_ANTSEL_A(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE((__pdesc) + 24, 16, 3, __val)
-#define SET_TX_DESC_ANTSEL_B(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE((__pdesc) + 24, 19, 3, __val)
-#define SET_TX_DESC_ANTSEL_C(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE((__pdesc) + 24, 22, 3, __val)
-#define SET_TX_DESC_ANTSEL_D(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE((__pdesc) + 24, 25, 3, __val)
-#define SET_TX_DESC_MBSSID(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(i(__pdesc) + 24, 12, 4, __val)
-
-#define SET_TX_DESC_TX_BUFFER_SIZE(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE((__pdesc) + 28, 0, 16, __val)
-
-#define GET_TX_DESC_TX_BUFFER_SIZE(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc+28, 0, 16)
-
-#define SET_TX_DESC_HWSEQ_EN(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+32, 15, 1, __val)
-
-#define SET_TX_DESC_SEQ(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+36, 12, 12, __val)
-
-#define SET_TX_DESC_TX_BUFFER_ADDRESS(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+40, 0, 32, __val)
-
-#define GET_TX_DESC_TX_BUFFER_ADDRESS(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc+40, 0, 32)
-
-#define SET_TX_DESC_NEXT_DESC_ADDRESS(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+48, 0, 32, __val)
-
-#define GET_TX_DESC_NEXT_DESC_ADDRESS(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc+48, 0, 32)
-
-#define GET_RX_DESC_PKT_LEN(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc, 0, 14)
-#define GET_RX_DESC_CRC32(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc, 14, 1)
-#define GET_RX_DESC_ICV(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc, 15, 1)
-#define GET_RX_DESC_DRV_INFO_SIZE(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc, 16, 4)
-#define GET_RX_DESC_SECURITY(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc, 20, 3)
-#define GET_RX_DESC_QOS(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc, 23, 1)
-#define GET_RX_DESC_SHIFT(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc, 24, 2)
-#define GET_RX_DESC_PHYST(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc, 26, 1)
-#define GET_RX_DESC_SWDEC(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc, 27, 1)
-#define GET_RX_DESC_LS(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc, 28, 1)
-#define GET_RX_DESC_FS(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc, 29, 1)
-#define GET_RX_DESC_EOR(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc, 30, 1)
-#define GET_RX_DESC_OWN(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc, 31, 1)
-
-#define SET_RX_DESC_PKT_LEN(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc, 0, 14, __val)
-#define SET_RX_DESC_EOR(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc, 30, 1, __val)
-#define SET_RX_DESC_OWN(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc, 31, 1, __val)
-
-#define GET_RX_DESC_MACID(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc+4, 0, 7)
-#define GET_RX_DESC_TID(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc+4, 8, 4)
-#define GET_RX_DESC_AMSDU(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc+4, 13, 1)
-#define GET_RX_STATUS_DESC_RXID_MATCH(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc+4, 14, 1)
-#define GET_RX_DESC_PAGGR(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc+4, 15, 1)
-#define GET_RX_DESC_A1_FIT(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc+4, 16, 4)
-#define GET_RX_DESC_CHKERR(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc+4, 20, 1)
-#define GET_RX_DESC_IPVER(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc+4, 21, 1)
-#define GET_RX_STATUS_DESC_IS_TCPUDP(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc+4, 22, 1)
-#define GET_RX_STATUS_DESC_CHK_VLD(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc+4, 23, 1)
-#define GET_RX_DESC_PAM(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc+4, 24, 1)
-#define GET_RX_DESC_PWR(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc+4, 25, 1)
-#define GET_RX_DESC_MD(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc+4, 26, 1)
-#define GET_RX_DESC_MF(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc+4, 27, 1)
-#define GET_RX_DESC_TYPE(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc+4, 28, 2)
-#define GET_RX_DESC_MC(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc+4, 30, 1)
-#define GET_RX_DESC_BC(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc+4, 31, 1)
-
-#define GET_RX_DESC_SEQ(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc+8, 0, 12)
-#define GET_RX_DESC_FRAG(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc+8, 12, 4)
-#define GET_RX_STATUS_DESC_RX_IS_QOS(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc+8, 16, 1)
-#define GET_RX_STATUS_DESC_WLANHD_IV_LEN(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc+8, 18, 6)
-#define GET_RX_STATUS_DESC_RPT_SEL(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc+8, 28, 1)
-
-#define GET_RX_DESC_RXMCS(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc+12, 0, 7)
-#define GET_RX_DESC_HTC(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc+12, 10, 1)
-#define GET_RX_STATUS_DESC_EOSP(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc+12, 11, 1)
-#define GET_RX_STATUS_DESC_BSSID_FIT(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc+12, 12, 2)
-
-#define GET_RX_STATUS_DESC_PATTERN_MATCH(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc+12, 29, 1)
-#define GET_RX_STATUS_DESC_UNICAST_MATCH(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc+12, 30, 1)
-#define GET_RX_STATUS_DESC_MAGIC_MATCH(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc+12, 31, 1)
-
-#define GET_RX_DESC_SPLCP(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc+16, 0, 1)
-#define GET_RX_STATUS_DESC_LDPC(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc+16, 1, 1)
-#define GET_RX_STATUS_DESC_STBC(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc+16, 2, 1)
-#define GET_RX_DESC_BW(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc+16, 4, 2)
-
-#define GET_RX_DESC_TSFL(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc+20, 0, 32)
-
-#define GET_RX_DESC_BUFF_ADDR(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc+24, 0, 32)
-#define GET_RX_DESC_BUFF_ADDR64(__pdesc) \
- LE_BITS_TO_4BYTE(__pdesc+28, 0, 32)
-
-#define SET_RX_DESC_BUFF_ADDR(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+24, 0, 32, __val)
-#define SET_RX_DESC_BUFF_ADDR64(__pdesc, __val) \
- SET_BITS_TO_LE_4BYTE(__pdesc+28, 0, 32, __val)
+static inline void set_tx_desc_pkt_size(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc, __val, GENMASK(15, 0));
+}
+
+static inline void set_tx_desc_offset(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc, __val, GENMASK(23, 16));
+}
+
+static inline void set_tx_desc_bmc(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc, __val, BIT(24));
+}
+
+static inline void set_tx_desc_htc(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc, __val, BIT(25));
+}
+
+static inline void set_tx_desc_last_seg(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc, __val, BIT(26));
+}
+
+static inline void set_tx_desc_first_seg(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc, __val, BIT(27));
+}
+
+static inline void set_tx_desc_linip(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc, __val, BIT(28));
+}
+
+static inline void set_tx_desc_own(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc, __val, BIT(31));
+}
+
+static inline int get_tx_desc_own(__le32 *__pdesc)
+{
+ return le32_get_bits(*(__pdesc), BIT(31));
+}
+
+static inline void set_tx_desc_macid(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc + 1, __val, GENMASK(6, 0));
+}
+
+static inline void set_tx_desc_queue_sel(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc + 1, __val, GENMASK(12, 8));
+}
+
+static inline void set_tx_desc_rate_id(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc + 1, __val, GENMASK(20, 16));
+}
+
+static inline void set_tx_desc_sec_type(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc + 1, __val, GENMASK(23, 22));
+}
+
+static inline void set_tx_desc_pkt_offset(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc + 1, __val, GENMASK(28, 24));
+}
+
+static inline void set_tx_desc_agg_enable(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc + 2, __val, BIT(12));
+}
+
+static inline void set_tx_desc_rdg_enable(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc + 2, __val, BIT(13));
+}
+
+static inline void set_tx_desc_more_frag(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc + 2, __val, BIT(17));
+}
+
+static inline void set_tx_desc_ampdu_density(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc + 2, __val, GENMASK(22, 20));
+}
+
+static inline void set_tx_desc_hwseq_sel(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc + 3, __val, GENMASK(7, 6));
+}
+
+static inline void set_tx_desc_use_rate(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc + 3, __val, BIT(8));
+}
+
+static inline void set_tx_desc_disable_fb(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc + 3, __val, BIT(10));
+}
+
+static inline void set_tx_desc_cts2self(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc + 3, __val, BIT(11));
+}
+
+static inline void set_tx_desc_rts_enable(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc + 3, __val, BIT(12));
+}
+
+static inline void set_tx_desc_hw_rts_enable(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc + 3, __val, BIT(13));
+}
+
+static inline void set_tx_desc_nav_use_hdr(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc + 3, __val, BIT(15));
+}
+
+static inline void set_tx_desc_max_agg_num(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc + 3, __val, GENMASK(21, 17));
+}
+
+static inline void set_tx_desc_tx_ant(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc + 5, __val, GENMASK(27, 24));
+}
+
+static inline void set_tx_desc_tx_rate(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc + 4, __val, GENMASK(6, 0));
+}
+
+static inline void set_tx_desc_data_rate_fb_limit(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc + 4, __val, GENMASK(12, 8));
+}
+
+static inline void set_tx_desc_rts_rate_fb_limit(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc + 4, __val, GENMASK(16, 13));
+}
+
+static inline void set_tx_desc_rts_rate(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc + 4, __val, GENMASK(28, 24));
+}
+
+static inline void set_tx_desc_tx_sub_carrier(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc + 5, __val, GENMASK(3, 0));
+}
+
+static inline void set_tx_desc_data_shortgi(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc + 5, __val, BIT(4));
+}
+
+static inline void set_tx_desc_data_bw(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc + 5, __val, GENMASK(6, 5));
+}
+
+static inline void set_tx_desc_rts_short(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc + 5, __val, BIT(12));
+}
+
+static inline void set_tx_desc_rts_sc(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc + 5, __val, GENMASK(16, 13));
+}
+
+static inline void set_tx_desc_tx_buffer_size(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc + 7, __val, GENMASK(15, 0));
+}
+
+static inline void set_tx_desc_hwseq_en(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc + 8, __val, BIT(15));
+}
+
+static inline void set_tx_desc_seq(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc + 9, __val, GENMASK(23, 12));
+}
+
+static inline void set_tx_desc_tx_buffer_address(__le32 *__pdesc, u32 __val)
+{
+ *(__pdesc + 10) = cpu_to_le32(__val);
+}
+
+static inline int get_tx_desc_tx_buffer_address(__le32 *__pdesc)
+{
+ return le32_to_cpu(*(__pdesc + 10));
+}
+
+static inline void set_tx_desc_next_desc_address(__le32 *__pdesc, u32 __val)
+{
+ *(__pdesc + 12) = cpu_to_le32(__val);
+}
+
+static inline int get_rx_desc_pkt_len(__le32 *__pdesc)
+{
+ return le32_get_bits(*(__pdesc), GENMASK(13, 0));
+}
+
+static inline int get_rx_desc_crc32(__le32 *__pdesc)
+{
+ return le32_get_bits(*(__pdesc), BIT(14));
+}
+
+static inline int get_rx_desc_icv(__le32 *__pdesc)
+{
+ return le32_get_bits(*(__pdesc), BIT(15));
+}
+
+static inline int get_rx_desc_drv_info_size(__le32 *__pdesc)
+{
+ return le32_get_bits(*(__pdesc), GENMASK(19, 16));
+}
+
+static inline int get_rx_desc_shift(__le32 *__pdesc)
+{
+ return le32_get_bits(*(__pdesc), GENMASK(25, 24));
+}
+
+static inline int get_rx_desc_physt(__le32 *__pdesc)
+{
+ return le32_get_bits(*(__pdesc), BIT(26));
+}
+
+static inline int get_rx_desc_swdec(__le32 *__pdesc)
+{
+ return le32_get_bits(*(__pdesc), BIT(27));
+}
+
+static inline int get_rx_desc_own(__le32 *__pdesc)
+{
+ return le32_get_bits(*(__pdesc), BIT(31));
+}
+
+static inline void set_rx_desc_pkt_len(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc, __val, GENMASK(13, 0));
+}
+
+static inline void set_rx_desc_eor(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc, __val, BIT(30));
+}
+
+static inline void set_rx_desc_own(__le32 *__pdesc, u32 __val)
+{
+ le32p_replace_bits(__pdesc, __val, BIT(31));
+}
+
+static inline int get_rx_desc_macid(__le32 *__pdesc)
+{
+ return le32_get_bits(*(__pdesc + 1), GENMASK(6, 0));
+}
+
+static inline int get_rx_desc_paggr(__le32 *__pdesc)
+{
+ return le32_get_bits(*(__pdesc + 1), BIT(15));
+}
+
+static inline int get_rx_status_desc_rpt_sel(__le32 *__pdesc)
+{
+ return le32_get_bits(*(__pdesc + 1), BIT(28));
+}
+
+static inline int get_rx_desc_rxmcs(__le32 *__pdesc)
+{
+ return le32_get_bits(*(__pdesc + 3), GENMASK(6, 0));
+}
+
+static inline int get_rx_status_desc_pattern_match(__le32 *__pdesc)
+{
+ return le32_get_bits(*(__pdesc + 3), BIT(29));
+}
+
+static inline int get_rx_status_desc_unicast_match(__le32 *__pdesc)
+{
+ return le32_get_bits(*(__pdesc + 3), BIT(30));
+}
+
+static inline int get_rx_status_desc_magic_match(__le32 *__pdesc)
+{
+ return le32_get_bits(*(__pdesc + 3), BIT(31));
+}
+
+static inline int get_rx_desc_splcp(__le32 *__pdesc)
+{
+ return le32_get_bits(*(__pdesc + 4), BIT(0));
+}
+
+static inline int get_rx_desc_bw(__le32 *__pdesc)
+{
+ return le32_get_bits(*(__pdesc + 4), GENMASK(5, 4));
+}
+
+static inline int get_rx_desc_tsfl(__le32 *__pdesc)
+{
+ return le32_to_cpu(*(__pdesc + 5));
+}
+
+static inline int get_rx_desc_buff_addr(__le32 *__pdesc)
+{
+ return le32_to_cpu(*(__pdesc + 6));
+}
+
+static inline void set_rx_desc_buff_addr(__le32 *__pdesc, u32 __val)
+{
+ *(__pdesc + 6) = cpu_to_le32(__val);
+}
/* TX report 2 format in Rx desc*/
-#define GET_RX_RPT2_DESC_PKT_LEN(__status) \
- LE_BITS_TO_4BYTE(__status, 0, 9)
-#define GET_RX_RPT2_DESC_MACID_VALID_1(__status) \
- LE_BITS_TO_4BYTE(__status+16, 0, 32)
-#define GET_RX_RPT2_DESC_MACID_VALID_2(__status) \
- LE_BITS_TO_4BYTE(__status+20, 0, 32)
-
-#define SET_EARLYMODE_PKTNUM(__paddr, __value) \
- SET_BITS_TO_LE_4BYTE(__paddr, 0, 4, __value)
-#define SET_EARLYMODE_LEN0(__paddr, __value) \
- SET_BITS_TO_LE_4BYTE(__paddr, 4, 12, __value)
-#define SET_EARLYMODE_LEN1(__paddr, __value) \
- SET_BITS_TO_LE_4BYTE(__paddr, 16, 12, __value)
-#define SET_EARLYMODE_LEN2_1(__paddr, __value) \
- SET_BITS_TO_LE_4BYTE(__paddr, 28, 4, __value)
-#define SET_EARLYMODE_LEN2_2(__paddr, __value) \
- SET_BITS_TO_LE_4BYTE(__paddr+4, 0, 8, __value)
-#define SET_EARLYMODE_LEN3(__paddr, __value) \
- SET_BITS_TO_LE_4BYTE(__paddr+4, 8, 12, __value)
-#define SET_EARLYMODE_LEN4(__paddr, __value) \
- SET_BITS_TO_LE_4BYTE(__paddr+4, 20, 12, __value)
-
-#define CLEAR_PCI_TX_DESC_CONTENT(__pdesc, _size) \
-do { \
- if (_size > TX_DESC_NEXT_DESC_OFFSET) \
- memset(__pdesc, 0, TX_DESC_NEXT_DESC_OFFSET); \
- else \
- memset(__pdesc, 0, _size); \
-} while (0)
+static inline int get_rx_rpt2_desc_macid_valid_1(__le32 *__status)
+{
+ return le32_to_cpu(*(__status + 4));
+}
+
+static inline int get_rx_rpt2_desc_macid_valid_2(__le32 *__status)
+{
+ return le32_to_cpu(*(__status + 5));
+}
+
+static inline void set_earlymode_pktnum(__le32 *__paddr, u32 __value)
+{
+ le32p_replace_bits(__paddr, __value, GENMASK(3, 0));
+}
+
+static inline void set_earlymode_len0(__le32 *__paddr, u32 __value)
+{
+ le32p_replace_bits(__paddr, __value, GENMASK(15, 4));
+}
+
+static inline void set_earlymode_len1(__le32 *__paddr, u32 __value)
+{
+ le32p_replace_bits(__paddr, __value, GENMASK(27, 16));
+}
+
+static inline void set_earlymode_len2_1(__le32 *__paddr, u32 __value)
+{
+ le32p_replace_bits(__paddr, __value, GENMASK(31, 28));
+}
+
+static inline void set_earlymode_len2_2(__le32 *__paddr, u32 __value)
+{
+ le32p_replace_bits(__paddr, __value, GENMASK(7, 0));
+}
+
+static inline void set_earlymode_len3(__le32 *__paddr, u32 __value)
+{
+ le32p_replace_bits((__paddr + 1), __value, GENMASK(19, 8));
+}
+
+static inline void set_earlymode_len4(__le32 *__paddr, u32 __value)
+{
+ le32p_replace_bits((__paddr + 1), __value, GENMASK(31, 20));
+}
+
+static inline void clear_pci_tx_desc_content(__le32 *__pdesc, int _size)
+{
+ if (_size > TX_DESC_NEXT_DESC_OFFSET)
+ memset(__pdesc, 0, TX_DESC_NEXT_DESC_OFFSET);
+ else
+ memset(__pdesc, 0, _size);
+}
#define RTL8821AE_RX_HAL_IS_CCK_RATE(rxmcs)\
(rxmcs == DESC_RATE1M ||\
diff --git a/drivers/net/wireless/realtek/rtlwifi/usb.c b/drivers/net/wireless/realtek/rtlwifi/usb.c
index e24fda5e9087..34d68dbf4b4c 100644
--- a/drivers/net/wireless/realtek/rtlwifi/usb.c
+++ b/drivers/net/wireless/realtek/rtlwifi/usb.c
@@ -1064,13 +1064,13 @@ int rtl_usb_probe(struct usb_interface *intf,
rtlpriv->cfg->ops->read_eeprom_info(hw);
err = _rtl_usb_init(hw);
if (err)
- goto error_out;
+ goto error_out2;
rtl_usb_init_sw(hw);
/* Init mac80211 sw */
err = rtl_init_core(hw);
if (err) {
pr_err("Can't allocate sw for mac80211\n");
- goto error_out;
+ goto error_out2;
}
if (rtlpriv->cfg->ops->init_sw_vars(hw)) {
pr_err("Can't init_sw_vars\n");
@@ -1091,6 +1091,7 @@ int rtl_usb_probe(struct usb_interface *intf,
error_out:
rtl_deinit_core(hw);
+error_out2:
_rtl_usb_io_handler_release(hw);
usb_put_dev(udev);
complete(&rtlpriv->firmware_loading_complete);
diff --git a/drivers/net/wireless/realtek/rtlwifi/wifi.h b/drivers/net/wireless/realtek/rtlwifi/wifi.h
index 518aaa875361..81caa3782ec0 100644
--- a/drivers/net/wireless/realtek/rtlwifi/wifi.h
+++ b/drivers/net/wireless/realtek/rtlwifi/wifi.h
@@ -13,6 +13,7 @@
#include <linux/usb.h>
#include <net/mac80211.h>
#include <linux/completion.h>
+#include <linux/bitfield.h>
#include "debug.h"
#define MASKBYTE0 0xff
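For reference, the <linux/bitfield.h> include added above provides the le32 helpers used by the new trx.h accessors earlier in this series. A minimal sketch follows (illustrative only, not part of the patch; the function names are made up): each descriptor dword is addressed by pointer offset and each field by a GENMASK()/BIT() constant, replacing the old SET_BITS_TO_LE_4BYTE(byte offset, bit offset, width) triples.

#include <linux/types.h>
#include <linux/bits.h>
#include <linux/bitfield.h>

/* Illustrative only: fill a few fields the way the converted accessors do. */
static void example_fill_tx_desc(__le32 *txdesc)
{
	/* dword 0, bits 15:0 - packet size (was SET_BITS_TO_LE_4BYTE(pdesc, 0, 16, val)) */
	le32p_replace_bits(txdesc, 64, GENMASK(15, 0));
	/* dword 1, bits 12:8 - queue selection (was byte offset +4, bit 8, width 5) */
	le32p_replace_bits(txdesc + 1, 3, GENMASK(12, 8));
	/* dword 0, bit 31 - OWN bit, hand the descriptor over to the hardware */
	le32p_replace_bits(txdesc, 1, BIT(31));
}

/* Reading works the same way, e.g. get_tx_desc_own() in trx.h: */
static int example_desc_owned_by_hw(__le32 *txdesc)
{
	return le32_get_bits(*txdesc, BIT(31));
}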
diff --git a/drivers/net/wireless/realtek/rtw88/hci.h b/drivers/net/wireless/realtek/rtw88/hci.h
index 2676582a85a0..aba329c9d0cf 100644
--- a/drivers/net/wireless/realtek/rtw88/hci.h
+++ b/drivers/net/wireless/realtek/rtw88/hci.h
@@ -97,7 +97,7 @@ static inline void rtw_write8_set(struct rtw_dev *rtwdev, u32 addr, u8 bit)
rtw_write8(rtwdev, addr, val | bit);
}
-static inline void rtw_writ16_set(struct rtw_dev *rtwdev, u32 addr, u16 bit)
+static inline void rtw_write16_set(struct rtw_dev *rtwdev, u32 addr, u16 bit)
{
u16 val;
diff --git a/drivers/net/wireless/realtek/rtw88/mac.c b/drivers/net/wireless/realtek/rtw88/mac.c
index 25a923bc6366..fc14b37d927d 100644
--- a/drivers/net/wireless/realtek/rtw88/mac.c
+++ b/drivers/net/wireless/realtek/rtw88/mac.c
@@ -285,8 +285,14 @@ int rtw_mac_power_on(struct rtw_dev *rtwdev)
goto err;
ret = rtw_mac_power_switch(rtwdev, true);
- if (ret)
+ if (ret == -EALREADY) {
+ rtw_mac_power_switch(rtwdev, false);
+ ret = rtw_mac_power_switch(rtwdev, true);
+ if (ret)
+ goto err;
+ } else if (ret) {
goto err;
+ }
ret = rtw_mac_init_system_cfg(rtwdev);
if (ret)
diff --git a/drivers/net/wireless/realtek/rtw88/mac80211.c b/drivers/net/wireless/realtek/rtw88/mac80211.c
index abded63f138d..abe6a148673b 100644
--- a/drivers/net/wireless/realtek/rtw88/mac80211.c
+++ b/drivers/net/wireless/realtek/rtw88/mac80211.c
@@ -85,30 +85,35 @@ static const struct rtw_vif_port rtw_vif_port[] = {
.bssid = {.addr = 0x0618},
.net_type = {.addr = 0x0100, .mask = 0x30000},
.aid = {.addr = 0x06a8, .mask = 0x7ff},
+ .bcn_ctrl = {.addr = 0x0550, .mask = 0xff},
},
[1] = {
.mac_addr = {.addr = 0x0700},
.bssid = {.addr = 0x0708},
.net_type = {.addr = 0x0100, .mask = 0xc0000},
.aid = {.addr = 0x0710, .mask = 0x7ff},
+ .bcn_ctrl = {.addr = 0x0551, .mask = 0xff},
},
[2] = {
.mac_addr = {.addr = 0x1620},
.bssid = {.addr = 0x1628},
.net_type = {.addr = 0x1100, .mask = 0x3},
.aid = {.addr = 0x1600, .mask = 0x7ff},
+ .bcn_ctrl = {.addr = 0x0578, .mask = 0xff},
},
[3] = {
.mac_addr = {.addr = 0x1630},
.bssid = {.addr = 0x1638},
.net_type = {.addr = 0x1100, .mask = 0xc},
.aid = {.addr = 0x1604, .mask = 0x7ff},
+ .bcn_ctrl = {.addr = 0x0579, .mask = 0xff},
},
[4] = {
.mac_addr = {.addr = 0x1640},
.bssid = {.addr = 0x1648},
.net_type = {.addr = 0x1100, .mask = 0x30},
.aid = {.addr = 0x1608, .mask = 0x7ff},
+ .bcn_ctrl = {.addr = 0x057a, .mask = 0xff},
},
};
@@ -120,6 +125,7 @@ static int rtw_ops_add_interface(struct ieee80211_hw *hw,
enum rtw_net_type net_type;
u32 config = 0;
u8 port = 0;
+ u8 bcn_ctrl = 0;
rtwvif->port = port;
rtwvif->vif = vif;
@@ -136,13 +142,16 @@ static int rtw_ops_add_interface(struct ieee80211_hw *hw,
case NL80211_IFTYPE_AP:
case NL80211_IFTYPE_MESH_POINT:
net_type = RTW_NET_AP_MODE;
+ bcn_ctrl = BIT_EN_BCN_FUNCTION | BIT_DIS_TSF_UDT;
break;
case NL80211_IFTYPE_ADHOC:
net_type = RTW_NET_AD_HOC;
+ bcn_ctrl = BIT_EN_BCN_FUNCTION | BIT_DIS_TSF_UDT;
break;
case NL80211_IFTYPE_STATION:
default:
net_type = RTW_NET_NO_LINK;
+ bcn_ctrl = BIT_EN_BCN_FUNCTION;
break;
}
@@ -150,6 +159,8 @@ static int rtw_ops_add_interface(struct ieee80211_hw *hw,
config |= PORT_SET_MAC_ADDR;
rtwvif->net_type = net_type;
config |= PORT_SET_NET_TYPE;
+ rtwvif->bcn_ctrl = bcn_ctrl;
+ config |= PORT_SET_BCN_CTRL;
rtw_vif_port_config(rtwdev, rtwvif, config);
mutex_unlock(&rtwdev->mutex);
@@ -173,6 +184,8 @@ static void rtw_ops_remove_interface(struct ieee80211_hw *hw,
config |= PORT_SET_MAC_ADDR;
rtwvif->net_type = RTW_NET_NO_LINK;
config |= PORT_SET_NET_TYPE;
+ rtwvif->bcn_ctrl = 0;
+ config |= PORT_SET_BCN_CTRL;
rtw_vif_port_config(rtwdev, rtwvif, config);
mutex_unlock(&rtwdev->mutex);
@@ -446,20 +459,39 @@ static void rtw_ops_sw_scan_start(struct ieee80211_hw *hw,
{
struct rtw_dev *rtwdev = hw->priv;
struct rtw_vif *rtwvif = (struct rtw_vif *)vif->drv_priv;
+ u32 config = 0;
rtw_leave_lps(rtwdev, rtwvif);
+ mutex_lock(&rtwdev->mutex);
+
+ ether_addr_copy(rtwvif->mac_addr, mac_addr);
+ config |= PORT_SET_MAC_ADDR;
+ rtw_vif_port_config(rtwdev, rtwvif, config);
+
rtw_flag_set(rtwdev, RTW_FLAG_DIG_DISABLE);
rtw_flag_set(rtwdev, RTW_FLAG_SCANNING);
+
+ mutex_unlock(&rtwdev->mutex);
}
static void rtw_ops_sw_scan_complete(struct ieee80211_hw *hw,
struct ieee80211_vif *vif)
{
struct rtw_dev *rtwdev = hw->priv;
+ struct rtw_vif *rtwvif = (struct rtw_vif *)vif->drv_priv;
+ u32 config = 0;
+
+ mutex_lock(&rtwdev->mutex);
rtw_flag_clear(rtwdev, RTW_FLAG_SCANNING);
rtw_flag_clear(rtwdev, RTW_FLAG_DIG_DISABLE);
+
+ ether_addr_copy(rtwvif->mac_addr, vif->addr);
+ config |= PORT_SET_MAC_ADDR;
+ rtw_vif_port_config(rtwdev, rtwvif, config);
+
+ mutex_unlock(&rtwdev->mutex);
}
const struct ieee80211_ops rtw_ops = {
diff --git a/drivers/net/wireless/realtek/rtw88/main.c b/drivers/net/wireless/realtek/rtw88/main.c
index b2dac4609138..5a2c06267d07 100644
--- a/drivers/net/wireless/realtek/rtw88/main.c
+++ b/drivers/net/wireless/realtek/rtw88/main.c
@@ -20,7 +20,7 @@ EXPORT_SYMBOL(rtw_debug_mask);
module_param_named(support_lps, rtw_fw_support_lps, bool, 0644);
module_param_named(debug_mask, rtw_debug_mask, uint, 0644);
-MODULE_PARM_DESC(support_lps, "Set Y to enable LPS support");
+MODULE_PARM_DESC(support_lps, "Set Y to enable Leisure Power Save support, to turn radio off between beacons");
MODULE_PARM_DESC(debug_mask, "Debugging mask");
static struct ieee80211_channel rtw_channeltable_2g[] = {
@@ -198,15 +198,20 @@ void rtw_get_channel_params(struct cfg80211_chan_def *chandef,
{
struct ieee80211_channel *channel = chandef->chan;
enum nl80211_chan_width width = chandef->width;
+ u8 *cch_by_bw = chan_params->cch_by_bw;
u32 primary_freq, center_freq;
u8 center_chan;
u8 bandwidth = RTW_CHANNEL_WIDTH_20;
u8 primary_chan_idx = 0;
+ u8 i;
center_chan = channel->hw_value;
primary_freq = channel->center_freq;
center_freq = chandef->center_freq1;
+ /* assign the center channel used while 20M bw is selected */
+ cch_by_bw[RTW_CHANNEL_WIDTH_20] = channel->hw_value;
+
switch (width) {
case NL80211_CHAN_WIDTH_20_NOHT:
case NL80211_CHAN_WIDTH_20:
@@ -233,6 +238,10 @@ void rtw_get_channel_params(struct cfg80211_chan_def *chandef,
primary_chan_idx = 3;
center_chan -= 6;
}
+ /* assign the center channel used
+ * while 40M bw is selected
+ */
+ cch_by_bw[RTW_CHANNEL_WIDTH_40] = center_chan + 4;
} else {
if (center_freq - primary_freq == 10) {
primary_chan_idx = 2;
@@ -241,6 +250,10 @@ void rtw_get_channel_params(struct cfg80211_chan_def *chandef,
primary_chan_idx = 4;
center_chan += 6;
}
+ /* assign the center channel used
+ * while 40M bw is selected
+ */
+ cch_by_bw[RTW_CHANNEL_WIDTH_40] = center_chan - 4;
}
break;
default:
@@ -251,6 +264,12 @@ void rtw_get_channel_params(struct cfg80211_chan_def *chandef,
chan_params->center_chan = center_chan;
chan_params->bandwidth = bandwidth;
chan_params->primary_chan_idx = primary_chan_idx;
+
+ /* assign the center channel used while current bw is selected */
+ cch_by_bw[bandwidth] = center_chan;
+
+ for (i = bandwidth + 1; i <= RTW_MAX_CHANNEL_WIDTH; i++)
+ cch_by_bw[i] = 0;
}
void rtw_set_channel(struct rtw_dev *rtwdev)
@@ -260,6 +279,7 @@ void rtw_set_channel(struct rtw_dev *rtwdev)
struct rtw_chip_info *chip = rtwdev->chip;
struct rtw_channel_params ch_param;
u8 center_chan, bandwidth, primary_chan_idx;
+ u8 i;
rtw_get_channel_params(&hw->conf.chandef, &ch_param);
if (WARN(ch_param.center_chan == 0, "Invalid channel\n"))
@@ -272,6 +292,10 @@ void rtw_set_channel(struct rtw_dev *rtwdev)
hal->current_band_width = bandwidth;
hal->current_channel = center_chan;
hal->current_band_type = center_chan > 14 ? RTW_BAND_5G : RTW_BAND_2G;
+
+ for (i = RTW_CHANNEL_WIDTH_20; i <= RTW_MAX_CHANNEL_WIDTH; i++)
+ hal->cch_by_bw[i] = ch_param.cch_by_bw[i];
+
chip->ops->set_channel(rtwdev, center_chan, bandwidth, primary_chan_idx);
rtw_phy_set_tx_power_level(rtwdev, center_chan);
@@ -309,6 +333,11 @@ void rtw_vif_port_config(struct rtw_dev *rtwdev,
mask = rtwvif->conf->aid.mask;
rtw_write32_mask(rtwdev, addr, mask, rtwvif->aid);
}
+ if (config & PORT_SET_BCN_CTRL) {
+ addr = rtwvif->conf->bcn_ctrl.addr;
+ mask = rtwvif->conf->bcn_ctrl.mask;
+ rtw_write8_mask(rtwdev, addr, mask, rtwvif->bcn_ctrl);
+ }
}
static u8 hw_bw_cap_to_bitamp(u8 bw_cap)
@@ -1042,7 +1071,7 @@ static int rtw_chip_board_info_setup(struct rtw_dev *rtwdev)
rtw_phy_setup_phy_cond(rtwdev, 0);
- rtw_hw_init_tx_power(hal);
+ rtw_phy_init_tx_power(rtwdev);
rtw_load_table(rtwdev, rfe_def->phy_pg_tbl);
rtw_load_table(rtwdev, rfe_def->txpwr_lmt_tbl);
rtw_phy_tx_power_by_rate_config(hal);
@@ -1169,6 +1198,7 @@ int rtw_register_hw(struct rtw_dev *rtwdev, struct ieee80211_hw *hw)
ieee80211_hw_set(hw, REPORTS_TX_ACK_STATUS);
ieee80211_hw_set(hw, SUPPORTS_PS);
ieee80211_hw_set(hw, SUPPORTS_DYNAMIC_PS);
+ ieee80211_hw_set(hw, SUPPORT_FAST_XMIT);
hw->wiphy->interface_modes = BIT(NL80211_IFTYPE_STATION) |
BIT(NL80211_IFTYPE_AP) |
@@ -1178,6 +1208,8 @@ int rtw_register_hw(struct rtw_dev *rtwdev, struct ieee80211_hw *hw)
hw->wiphy->flags |= WIPHY_FLAG_SUPPORTS_TDLS |
WIPHY_FLAG_TDLS_EXTERNAL_SETUP;
+ hw->wiphy->features |= NL80211_FEATURE_SCAN_RANDOM_MAC_ADDR;
+
rtw_set_supported_band(hw, rtwdev->chip);
SET_IEEE80211_PERM_ADDR(hw, rtwdev->efuse.addr);
diff --git a/drivers/net/wireless/realtek/rtw88/main.h b/drivers/net/wireless/realtek/rtw88/main.h
index 00fc77fb9b54..8fa05751836b 100644
--- a/drivers/net/wireless/realtek/rtw88/main.h
+++ b/drivers/net/wireless/realtek/rtw88/main.h
@@ -62,6 +62,9 @@ enum rtw_supported_band {
RTW_BAND_MAX,
};
+/* now, support up to 80M bw */
+#define RTW_MAX_CHANNEL_WIDTH RTW_CHANNEL_WIDTH_80
+
enum rtw_bandwidth {
RTW_CHANNEL_WIDTH_20 = 0,
RTW_CHANNEL_WIDTH_40 = 1,
@@ -286,10 +289,16 @@ enum rtw_trx_desc_rate {
};
enum rtw_regulatory_domains {
- RTW_REGD_FCC = 0,
- RTW_REGD_MKK = 1,
- RTW_REGD_ETSI = 2,
- RTW_REGD_WW = 3,
+ RTW_REGD_FCC = 0,
+ RTW_REGD_MKK = 1,
+ RTW_REGD_ETSI = 2,
+ RTW_REGD_IC = 3,
+ RTW_REGD_KCC = 4,
+ RTW_REGD_ACMA = 5,
+ RTW_REGD_CHILE = 6,
+ RTW_REGD_UKRAINE = 7,
+ RTW_REGD_MEXICO = 8,
+ RTW_REGD_WW,
RTW_REGD_MAX
};
@@ -413,6 +422,10 @@ struct rtw_channel_params {
u8 center_chan;
u8 bandwidth;
u8 primary_chan_idx;
+ /* center channel for each available bandwidth;
+ * entries for (bw > current bandwidth) are invalid
+ */
+ u8 cch_by_bw[RTW_MAX_CHANNEL_WIDTH + 1];
};
struct rtw_hw_reg {
@@ -431,6 +444,7 @@ enum rtw_vif_port_set {
PORT_SET_BSSID = BIT(1),
PORT_SET_NET_TYPE = BIT(2),
PORT_SET_AID = BIT(3),
+ PORT_SET_BCN_CTRL = BIT(4),
};
struct rtw_vif_port {
@@ -438,6 +452,7 @@ struct rtw_vif_port {
struct rtw_hw_reg bssid;
struct rtw_hw_reg net_type;
struct rtw_hw_reg aid;
+ struct rtw_hw_reg bcn_ctrl;
};
struct rtw_tx_pkt_info {
@@ -591,6 +606,7 @@ struct rtw_vif {
u8 mac_addr[ETH_ALEN];
u8 bssid[ETH_ALEN];
u8 port;
+ u8 bcn_ctrl;
const struct rtw_vif_port *conf;
struct rtw_traffic_stats stats;
@@ -838,6 +854,9 @@ struct rtw_chip_info {
u32 rfe_defs_size;
};
+#define DACK_MSBK_BACKUP_NUM 0xf
+#define DACK_DCK_BACKUP_NUM 0x2
+
struct rtw_dm_info {
u32 cck_fa_cnt;
u32 ofdm_fa_cnt;
@@ -853,6 +872,11 @@ struct rtw_dm_info {
u8 cck_gi_u_bnd;
u8 cck_gi_l_bnd;
+
+ /* backup dack results for each path and I/Q */
+ u32 dack_adck[RTW_RF_PATH_MAX];
+ u16 dack_msbk[RTW_RF_PATH_MAX][2][DACK_MSBK_BACKUP_NUM];
+ u8 dack_dck[RTW_RF_PATH_MAX][2][DACK_DCK_BACKUP_NUM];
};
struct rtw_efuse {
@@ -973,6 +997,12 @@ struct rtw_hal {
u8 current_channel;
u8 current_band_width;
u8 current_band_type;
+
+ /* center channel for each available bandwidth;
+ * entries for (bw > current_band_width) are invalid
+ */
+ u8 cch_by_bw[RTW_MAX_CHANNEL_WIDTH + 1];
+
u8 sec_ch_offset;
u8 rf_type;
u8 rf_path_num;
diff --git a/drivers/net/wireless/realtek/rtw88/pci.c b/drivers/net/wireless/realtek/rtw88/pci.c
index cfe05ba7280d..353871c27779 100644
--- a/drivers/net/wireless/realtek/rtw88/pci.c
+++ b/drivers/net/wireless/realtek/rtw88/pci.c
@@ -487,10 +487,10 @@ static void rtw_pci_stop(struct rtw_dev *rtwdev)
}
static u8 ac_to_hwq[] = {
- [0] = RTW_TX_QUEUE_VO,
- [1] = RTW_TX_QUEUE_VI,
- [2] = RTW_TX_QUEUE_BE,
- [3] = RTW_TX_QUEUE_BK,
+ [IEEE80211_AC_VO] = RTW_TX_QUEUE_VO,
+ [IEEE80211_AC_VI] = RTW_TX_QUEUE_VI,
+ [IEEE80211_AC_BE] = RTW_TX_QUEUE_BE,
+ [IEEE80211_AC_BK] = RTW_TX_QUEUE_BK,
};
static u8 rtw_hw_queue_mapping(struct sk_buff *skb)
@@ -504,6 +504,8 @@ static u8 rtw_hw_queue_mapping(struct sk_buff *skb)
queue = RTW_TX_QUEUE_BCN;
else if (unlikely(ieee80211_is_mgmt(fc) || ieee80211_is_ctl(fc)))
queue = RTW_TX_QUEUE_MGMT;
+ else if (WARN_ON_ONCE(q_mapping >= ARRAY_SIZE(ac_to_hwq)))
+ queue = ac_to_hwq[IEEE80211_AC_BE];
else
queue = ac_to_hwq[q_mapping];
diff --git a/drivers/net/wireless/realtek/rtw88/phy.c b/drivers/net/wireless/realtek/rtw88/phy.c
index 404d89432c96..4ec8dcf17361 100644
--- a/drivers/net/wireless/realtek/rtw88/phy.c
+++ b/drivers/net/wireless/realtek/rtw88/phy.c
@@ -65,6 +65,56 @@ static const u32 db_invert_table[12][8] = {
1995262315, 2511886432U, 3162277660U, 3981071706U}
};
+u8 rtw_cck_rates[] = { DESC_RATE1M, DESC_RATE2M, DESC_RATE5_5M, DESC_RATE11M };
+u8 rtw_ofdm_rates[] = {
+ DESC_RATE6M, DESC_RATE9M, DESC_RATE12M,
+ DESC_RATE18M, DESC_RATE24M, DESC_RATE36M,
+ DESC_RATE48M, DESC_RATE54M
+};
+u8 rtw_ht_1s_rates[] = {
+ DESC_RATEMCS0, DESC_RATEMCS1, DESC_RATEMCS2,
+ DESC_RATEMCS3, DESC_RATEMCS4, DESC_RATEMCS5,
+ DESC_RATEMCS6, DESC_RATEMCS7
+};
+u8 rtw_ht_2s_rates[] = {
+ DESC_RATEMCS8, DESC_RATEMCS9, DESC_RATEMCS10,
+ DESC_RATEMCS11, DESC_RATEMCS12, DESC_RATEMCS13,
+ DESC_RATEMCS14, DESC_RATEMCS15
+};
+u8 rtw_vht_1s_rates[] = {
+ DESC_RATEVHT1SS_MCS0, DESC_RATEVHT1SS_MCS1,
+ DESC_RATEVHT1SS_MCS2, DESC_RATEVHT1SS_MCS3,
+ DESC_RATEVHT1SS_MCS4, DESC_RATEVHT1SS_MCS5,
+ DESC_RATEVHT1SS_MCS6, DESC_RATEVHT1SS_MCS7,
+ DESC_RATEVHT1SS_MCS8, DESC_RATEVHT1SS_MCS9
+};
+u8 rtw_vht_2s_rates[] = {
+ DESC_RATEVHT2SS_MCS0, DESC_RATEVHT2SS_MCS1,
+ DESC_RATEVHT2SS_MCS2, DESC_RATEVHT2SS_MCS3,
+ DESC_RATEVHT2SS_MCS4, DESC_RATEVHT2SS_MCS5,
+ DESC_RATEVHT2SS_MCS6, DESC_RATEVHT2SS_MCS7,
+ DESC_RATEVHT2SS_MCS8, DESC_RATEVHT2SS_MCS9
+};
+u8 *rtw_rate_section[RTW_RATE_SECTION_MAX] = {
+ rtw_cck_rates, rtw_ofdm_rates,
+ rtw_ht_1s_rates, rtw_ht_2s_rates,
+ rtw_vht_1s_rates, rtw_vht_2s_rates
+};
+u8 rtw_rate_size[RTW_RATE_SECTION_MAX] = {
+ ARRAY_SIZE(rtw_cck_rates),
+ ARRAY_SIZE(rtw_ofdm_rates),
+ ARRAY_SIZE(rtw_ht_1s_rates),
+ ARRAY_SIZE(rtw_ht_2s_rates),
+ ARRAY_SIZE(rtw_vht_1s_rates),
+ ARRAY_SIZE(rtw_vht_2s_rates)
+};
+static const u8 rtw_cck_size = ARRAY_SIZE(rtw_cck_rates);
+static const u8 rtw_ofdm_size = ARRAY_SIZE(rtw_ofdm_rates);
+static const u8 rtw_ht_1s_size = ARRAY_SIZE(rtw_ht_1s_rates);
+static const u8 rtw_ht_2s_size = ARRAY_SIZE(rtw_ht_2s_rates);
+static const u8 rtw_vht_1s_size = ARRAY_SIZE(rtw_vht_1s_rates);
+static const u8 rtw_vht_2s_size = ARRAY_SIZE(rtw_vht_2s_rates);
+
enum rtw_phy_band_type {
PHY_BAND_2G = 0,
PHY_BAND_5G = 1,
@@ -601,14 +651,19 @@ bool rtw_phy_write_rf_reg(struct rtw_dev *rtwdev, enum rtw_rf_path rf_path,
direct_addr = base_addr[rf_path] + (addr << 2);
mask &= RFREG_MASK;
- rtw_write32_mask(rtwdev, REG_RSV_CTRL, BITS_RFC_DIRECT, DISABLE_PI);
- rtw_write32_mask(rtwdev, REG_WLRF1, BITS_RFC_DIRECT, DISABLE_PI);
+ if (addr == RF_CFGCH) {
+ rtw_write32_mask(rtwdev, REG_RSV_CTRL, BITS_RFC_DIRECT, DISABLE_PI);
+ rtw_write32_mask(rtwdev, REG_WLRF1, BITS_RFC_DIRECT, DISABLE_PI);
+ }
+
rtw_write32_mask(rtwdev, direct_addr, mask, data);
udelay(1);
- rtw_write32_mask(rtwdev, REG_RSV_CTRL, BITS_RFC_DIRECT, ENABLE_PI);
- rtw_write32_mask(rtwdev, REG_WLRF1, BITS_RFC_DIRECT, ENABLE_PI);
+ if (addr == RF_CFGCH) {
+ rtw_write32_mask(rtwdev, REG_RSV_CTRL, BITS_RFC_DIRECT, ENABLE_PI);
+ rtw_write32_mask(rtwdev, REG_WLRF1, BITS_RFC_DIRECT, ENABLE_PI);
+ }
return true;
}
@@ -714,6 +769,353 @@ void rtw_parse_tbl_phy_cond(struct rtw_dev *rtwdev, const struct rtw_table *tbl)
}
}
+#define bcd_to_dec_pwr_by_rate(val, i) bcd2bin(val >> (i * 8))
+
+static u8 tbl_to_dec_pwr_by_rate(struct rtw_dev *rtwdev, u32 hex, u8 i)
+{
+ if (rtwdev->chip->is_pwr_by_rate_dec)
+ return bcd_to_dec_pwr_by_rate(hex, i);
+
+ return (hex >> (i * 8)) & 0xFF;
+}
+
+static void
+rtw_phy_get_rate_values_of_txpwr_by_rate(struct rtw_dev *rtwdev,
+ u32 addr, u32 mask, u32 val, u8 *rate,
+ u8 *pwr_by_rate, u8 *rate_num)
+{
+ int i;
+
+ switch (addr) {
+ case 0xE00:
+ case 0x830:
+ rate[0] = DESC_RATE6M;
+ rate[1] = DESC_RATE9M;
+ rate[2] = DESC_RATE12M;
+ rate[3] = DESC_RATE18M;
+ for (i = 0; i < 4; ++i)
+ pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
+ *rate_num = 4;
+ break;
+ case 0xE04:
+ case 0x834:
+ rate[0] = DESC_RATE24M;
+ rate[1] = DESC_RATE36M;
+ rate[2] = DESC_RATE48M;
+ rate[3] = DESC_RATE54M;
+ for (i = 0; i < 4; ++i)
+ pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
+ *rate_num = 4;
+ break;
+ case 0xE08:
+ rate[0] = DESC_RATE1M;
+ pwr_by_rate[0] = bcd_to_dec_pwr_by_rate(val, 1);
+ *rate_num = 1;
+ break;
+ case 0x86C:
+ if (mask == 0xffffff00) {
+ rate[0] = DESC_RATE2M;
+ rate[1] = DESC_RATE5_5M;
+ rate[2] = DESC_RATE11M;
+ for (i = 1; i < 4; ++i)
+ pwr_by_rate[i - 1] =
+ tbl_to_dec_pwr_by_rate(rtwdev, val, i);
+ *rate_num = 3;
+ } else if (mask == 0x000000ff) {
+ rate[0] = DESC_RATE11M;
+ pwr_by_rate[0] = bcd_to_dec_pwr_by_rate(val, 0);
+ *rate_num = 1;
+ }
+ break;
+ case 0xE10:
+ case 0x83C:
+ rate[0] = DESC_RATEMCS0;
+ rate[1] = DESC_RATEMCS1;
+ rate[2] = DESC_RATEMCS2;
+ rate[3] = DESC_RATEMCS3;
+ for (i = 0; i < 4; ++i)
+ pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
+ *rate_num = 4;
+ break;
+ case 0xE14:
+ case 0x848:
+ rate[0] = DESC_RATEMCS4;
+ rate[1] = DESC_RATEMCS5;
+ rate[2] = DESC_RATEMCS6;
+ rate[3] = DESC_RATEMCS7;
+ for (i = 0; i < 4; ++i)
+ pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
+ *rate_num = 4;
+ break;
+ case 0xE18:
+ case 0x84C:
+ rate[0] = DESC_RATEMCS8;
+ rate[1] = DESC_RATEMCS9;
+ rate[2] = DESC_RATEMCS10;
+ rate[3] = DESC_RATEMCS11;
+ for (i = 0; i < 4; ++i)
+ pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
+ *rate_num = 4;
+ break;
+ case 0xE1C:
+ case 0x868:
+ rate[0] = DESC_RATEMCS12;
+ rate[1] = DESC_RATEMCS13;
+ rate[2] = DESC_RATEMCS14;
+ rate[3] = DESC_RATEMCS15;
+ for (i = 0; i < 4; ++i)
+ pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
+ *rate_num = 4;
+ break;
+ case 0x838:
+ rate[0] = DESC_RATE1M;
+ rate[1] = DESC_RATE2M;
+ rate[2] = DESC_RATE5_5M;
+ for (i = 1; i < 4; ++i)
+ pwr_by_rate[i - 1] = tbl_to_dec_pwr_by_rate(rtwdev,
+ val, i);
+ *rate_num = 3;
+ break;
+ case 0xC20:
+ case 0xE20:
+ case 0x1820:
+ case 0x1A20:
+ rate[0] = DESC_RATE1M;
+ rate[1] = DESC_RATE2M;
+ rate[2] = DESC_RATE5_5M;
+ rate[3] = DESC_RATE11M;
+ for (i = 0; i < 4; ++i)
+ pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
+ *rate_num = 4;
+ break;
+ case 0xC24:
+ case 0xE24:
+ case 0x1824:
+ case 0x1A24:
+ rate[0] = DESC_RATE6M;
+ rate[1] = DESC_RATE9M;
+ rate[2] = DESC_RATE12M;
+ rate[3] = DESC_RATE18M;
+ for (i = 0; i < 4; ++i)
+ pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
+ *rate_num = 4;
+ break;
+ case 0xC28:
+ case 0xE28:
+ case 0x1828:
+ case 0x1A28:
+ rate[0] = DESC_RATE24M;
+ rate[1] = DESC_RATE36M;
+ rate[2] = DESC_RATE48M;
+ rate[3] = DESC_RATE54M;
+ for (i = 0; i < 4; ++i)
+ pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
+ *rate_num = 4;
+ break;
+ case 0xC2C:
+ case 0xE2C:
+ case 0x182C:
+ case 0x1A2C:
+ rate[0] = DESC_RATEMCS0;
+ rate[1] = DESC_RATEMCS1;
+ rate[2] = DESC_RATEMCS2;
+ rate[3] = DESC_RATEMCS3;
+ for (i = 0; i < 4; ++i)
+ pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
+ *rate_num = 4;
+ break;
+ case 0xC30:
+ case 0xE30:
+ case 0x1830:
+ case 0x1A30:
+ rate[0] = DESC_RATEMCS4;
+ rate[1] = DESC_RATEMCS5;
+ rate[2] = DESC_RATEMCS6;
+ rate[3] = DESC_RATEMCS7;
+ for (i = 0; i < 4; ++i)
+ pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
+ *rate_num = 4;
+ break;
+ case 0xC34:
+ case 0xE34:
+ case 0x1834:
+ case 0x1A34:
+ rate[0] = DESC_RATEMCS8;
+ rate[1] = DESC_RATEMCS9;
+ rate[2] = DESC_RATEMCS10;
+ rate[3] = DESC_RATEMCS11;
+ for (i = 0; i < 4; ++i)
+ pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
+ *rate_num = 4;
+ break;
+ case 0xC38:
+ case 0xE38:
+ case 0x1838:
+ case 0x1A38:
+ rate[0] = DESC_RATEMCS12;
+ rate[1] = DESC_RATEMCS13;
+ rate[2] = DESC_RATEMCS14;
+ rate[3] = DESC_RATEMCS15;
+ for (i = 0; i < 4; ++i)
+ pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
+ *rate_num = 4;
+ break;
+ case 0xC3C:
+ case 0xE3C:
+ case 0x183C:
+ case 0x1A3C:
+ rate[0] = DESC_RATEVHT1SS_MCS0;
+ rate[1] = DESC_RATEVHT1SS_MCS1;
+ rate[2] = DESC_RATEVHT1SS_MCS2;
+ rate[3] = DESC_RATEVHT1SS_MCS3;
+ for (i = 0; i < 4; ++i)
+ pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
+ *rate_num = 4;
+ break;
+ case 0xC40:
+ case 0xE40:
+ case 0x1840:
+ case 0x1A40:
+ rate[0] = DESC_RATEVHT1SS_MCS4;
+ rate[1] = DESC_RATEVHT1SS_MCS5;
+ rate[2] = DESC_RATEVHT1SS_MCS6;
+ rate[3] = DESC_RATEVHT1SS_MCS7;
+ for (i = 0; i < 4; ++i)
+ pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
+ *rate_num = 4;
+ break;
+ case 0xC44:
+ case 0xE44:
+ case 0x1844:
+ case 0x1A44:
+ rate[0] = DESC_RATEVHT1SS_MCS8;
+ rate[1] = DESC_RATEVHT1SS_MCS9;
+ rate[2] = DESC_RATEVHT2SS_MCS0;
+ rate[3] = DESC_RATEVHT2SS_MCS1;
+ for (i = 0; i < 4; ++i)
+ pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
+ *rate_num = 4;
+ break;
+ case 0xC48:
+ case 0xE48:
+ case 0x1848:
+ case 0x1A48:
+ rate[0] = DESC_RATEVHT2SS_MCS2;
+ rate[1] = DESC_RATEVHT2SS_MCS3;
+ rate[2] = DESC_RATEVHT2SS_MCS4;
+ rate[3] = DESC_RATEVHT2SS_MCS5;
+ for (i = 0; i < 4; ++i)
+ pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
+ *rate_num = 4;
+ break;
+ case 0xC4C:
+ case 0xE4C:
+ case 0x184C:
+ case 0x1A4C:
+ rate[0] = DESC_RATEVHT2SS_MCS6;
+ rate[1] = DESC_RATEVHT2SS_MCS7;
+ rate[2] = DESC_RATEVHT2SS_MCS8;
+ rate[3] = DESC_RATEVHT2SS_MCS9;
+ for (i = 0; i < 4; ++i)
+ pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
+ *rate_num = 4;
+ break;
+ case 0xCD8:
+ case 0xED8:
+ case 0x18D8:
+ case 0x1AD8:
+ rate[0] = DESC_RATEMCS16;
+ rate[1] = DESC_RATEMCS17;
+ rate[2] = DESC_RATEMCS18;
+ rate[3] = DESC_RATEMCS19;
+ for (i = 0; i < 4; ++i)
+ pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
+ *rate_num = 4;
+ break;
+ case 0xCDC:
+ case 0xEDC:
+ case 0x18DC:
+ case 0x1ADC:
+ rate[0] = DESC_RATEMCS20;
+ rate[1] = DESC_RATEMCS21;
+ rate[2] = DESC_RATEMCS22;
+ rate[3] = DESC_RATEMCS23;
+ for (i = 0; i < 4; ++i)
+ pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
+ *rate_num = 4;
+ break;
+ case 0xCE0:
+ case 0xEE0:
+ case 0x18E0:
+ case 0x1AE0:
+ rate[0] = DESC_RATEVHT3SS_MCS0;
+ rate[1] = DESC_RATEVHT3SS_MCS1;
+ rate[2] = DESC_RATEVHT3SS_MCS2;
+ rate[3] = DESC_RATEVHT3SS_MCS3;
+ for (i = 0; i < 4; ++i)
+ pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
+ *rate_num = 4;
+ break;
+ case 0xCE4:
+ case 0xEE4:
+ case 0x18E4:
+ case 0x1AE4:
+ rate[0] = DESC_RATEVHT3SS_MCS4;
+ rate[1] = DESC_RATEVHT3SS_MCS5;
+ rate[2] = DESC_RATEVHT3SS_MCS6;
+ rate[3] = DESC_RATEVHT3SS_MCS7;
+ for (i = 0; i < 4; ++i)
+ pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
+ *rate_num = 4;
+ break;
+ case 0xCE8:
+ case 0xEE8:
+ case 0x18E8:
+ case 0x1AE8:
+ rate[0] = DESC_RATEVHT3SS_MCS8;
+ rate[1] = DESC_RATEVHT3SS_MCS9;
+ for (i = 0; i < 2; ++i)
+ pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
+ *rate_num = 2;
+ break;
+ default:
+ rtw_warn(rtwdev, "invalid tx power index addr 0x%08x\n", addr);
+ break;
+ }
+}
+
+static void rtw_phy_store_tx_power_by_rate(struct rtw_dev *rtwdev,
+ u32 band, u32 rfpath, u32 txnum,
+ u32 regaddr, u32 bitmask, u32 data)
+{
+ struct rtw_hal *hal = &rtwdev->hal;
+ u8 rate_num = 0;
+ u8 rate;
+ u8 rates[RTW_RF_PATH_MAX] = {0};
+ s8 offset;
+ s8 pwr_by_rate[RTW_RF_PATH_MAX] = {0};
+ int i;
+
+ rtw_phy_get_rate_values_of_txpwr_by_rate(rtwdev, regaddr, bitmask, data,
+ rates, pwr_by_rate, &rate_num);
+
+ if (WARN_ON(rfpath >= RTW_RF_PATH_MAX ||
+ (band != PHY_BAND_2G && band != PHY_BAND_5G) ||
+ rate_num > RTW_RF_PATH_MAX))
+ return;
+
+ for (i = 0; i < rate_num; i++) {
+ offset = pwr_by_rate[i];
+ rate = rates[i];
+ if (band == PHY_BAND_2G)
+ hal->tx_pwr_by_rate_offset_2g[rfpath][rate] = offset;
+ else if (band == PHY_BAND_5G)
+ hal->tx_pwr_by_rate_offset_5g[rfpath][rate] = offset;
+ else
+ continue;
+ }
+}
+
void rtw_parse_tbl_bb_pg(struct rtw_dev *rtwdev, const struct rtw_table *tbl)
{
const struct phy_pg_cfg_pair *p = tbl->data;
@@ -726,12 +1128,142 @@ void rtw_parse_tbl_bb_pg(struct rtw_dev *rtwdev, const struct rtw_table *tbl)
msleep(50);
continue;
}
- phy_store_tx_power_by_rate(rtwdev, p->band, p->rf_path,
- p->tx_num, p->addr, p->bitmask,
- p->data);
+ rtw_phy_store_tx_power_by_rate(rtwdev, p->band, p->rf_path,
+ p->tx_num, p->addr, p->bitmask,
+ p->data);
}
}
+static const u8 rtw_channel_idx_5g[RTW_MAX_CHANNEL_NUM_5G] = {
+ 36, 38, 40, 42, 44, 46, 48, /* Band 1 */
+ 52, 54, 56, 58, 60, 62, 64, /* Band 2 */
+ 100, 102, 104, 106, 108, 110, 112, /* Band 3 */
+ 116, 118, 120, 122, 124, 126, 128, /* Band 3 */
+ 132, 134, 136, 138, 140, 142, 144, /* Band 3 */
+ 149, 151, 153, 155, 157, 159, 161, /* Band 4 */
+ 165, 167, 169, 171, 173, 175, 177}; /* Band 4 */
+
+static int rtw_channel_to_idx(u8 band, u8 channel)
+{
+ int ch_idx;
+ u8 n_channel;
+
+ if (band == PHY_BAND_2G) {
+ ch_idx = channel - 1;
+ n_channel = RTW_MAX_CHANNEL_NUM_2G;
+ } else if (band == PHY_BAND_5G) {
+ n_channel = RTW_MAX_CHANNEL_NUM_5G;
+ for (ch_idx = 0; ch_idx < n_channel; ch_idx++)
+ if (rtw_channel_idx_5g[ch_idx] == channel)
+ break;
+ } else {
+ return -1;
+ }
+
+ if (ch_idx >= n_channel)
+ return -1;
+
+ return ch_idx;
+}
+
+static void rtw_phy_set_tx_power_limit(struct rtw_dev *rtwdev, u8 regd, u8 band,
+ u8 bw, u8 rs, u8 ch, s8 pwr_limit)
+{
+ struct rtw_hal *hal = &rtwdev->hal;
+ u8 max_power_index = rtwdev->chip->max_power_index;
+ s8 ww;
+ int ch_idx;
+
+ pwr_limit = clamp_t(s8, pwr_limit,
+ -max_power_index, max_power_index);
+ ch_idx = rtw_channel_to_idx(band, ch);
+
+ if (regd >= RTW_REGD_MAX || bw >= RTW_CHANNEL_WIDTH_MAX ||
+ rs >= RTW_RATE_SECTION_MAX || ch_idx < 0) {
+ WARN(1,
+ "wrong txpwr_lmt regd=%u, band=%u bw=%u, rs=%u, ch_idx=%u, pwr_limit=%d\n",
+ regd, band, bw, rs, ch_idx, pwr_limit);
+ return;
+ }
+
+ if (band == PHY_BAND_2G) {
+ hal->tx_pwr_limit_2g[regd][bw][rs][ch_idx] = pwr_limit;
+ ww = hal->tx_pwr_limit_2g[RTW_REGD_WW][bw][rs][ch_idx];
+ ww = min_t(s8, ww, pwr_limit);
+ hal->tx_pwr_limit_2g[RTW_REGD_WW][bw][rs][ch_idx] = ww;
+ } else if (band == PHY_BAND_5G) {
+ hal->tx_pwr_limit_5g[regd][bw][rs][ch_idx] = pwr_limit;
+ ww = hal->tx_pwr_limit_5g[RTW_REGD_WW][bw][rs][ch_idx];
+ ww = min_t(s8, ww, pwr_limit);
+ hal->tx_pwr_limit_5g[RTW_REGD_WW][bw][rs][ch_idx] = ww;
+ }
+}
+
+/* cross-reference 5G power limits if values are not assigned */
+static void
+rtw_xref_5g_txpwr_lmt(struct rtw_dev *rtwdev, u8 regd,
+ u8 bw, u8 ch_idx, u8 rs_ht, u8 rs_vht)
+{
+ struct rtw_hal *hal = &rtwdev->hal;
+ u8 max_power_index = rtwdev->chip->max_power_index;
+ s8 lmt_ht = hal->tx_pwr_limit_5g[regd][bw][rs_ht][ch_idx];
+ s8 lmt_vht = hal->tx_pwr_limit_5g[regd][bw][rs_vht][ch_idx];
+
+ if (lmt_ht == lmt_vht)
+ return;
+
+ if (lmt_ht == max_power_index)
+ hal->tx_pwr_limit_5g[regd][bw][rs_ht][ch_idx] = lmt_vht;
+
+ else if (lmt_vht == max_power_index)
+ hal->tx_pwr_limit_5g[regd][bw][rs_vht][ch_idx] = lmt_ht;
+}
+
+/* cross-reference power limits for ht and vht */
+static void
+rtw_xref_txpwr_lmt_by_rs(struct rtw_dev *rtwdev, u8 regd, u8 bw, u8 ch_idx)
+{
+ u8 rs_idx, rs_ht, rs_vht;
+ u8 rs_cmp[2][2] = {{RTW_RATE_SECTION_HT_1S, RTW_RATE_SECTION_VHT_1S},
+ {RTW_RATE_SECTION_HT_2S, RTW_RATE_SECTION_VHT_2S} };
+
+ for (rs_idx = 0; rs_idx < 2; rs_idx++) {
+ rs_ht = rs_cmp[rs_idx][0];
+ rs_vht = rs_cmp[rs_idx][1];
+
+ rtw_xref_5g_txpwr_lmt(rtwdev, regd, bw, ch_idx, rs_ht, rs_vht);
+ }
+}
+
+/* cross-reference power limits for 5G channels */
+static void
+rtw_xref_5g_txpwr_lmt_by_ch(struct rtw_dev *rtwdev, u8 regd, u8 bw)
+{
+ u8 ch_idx;
+
+ for (ch_idx = 0; ch_idx < RTW_MAX_CHANNEL_NUM_5G; ch_idx++)
+ rtw_xref_txpwr_lmt_by_rs(rtwdev, regd, bw, ch_idx);
+}
+
+/* cross-reference power limits for 20/40M bandwidth */
+static void
+rtw_xref_txpwr_lmt_by_bw(struct rtw_dev *rtwdev, u8 regd)
+{
+ u8 bw;
+
+ for (bw = RTW_CHANNEL_WIDTH_20; bw <= RTW_CHANNEL_WIDTH_40; bw++)
+ rtw_xref_5g_txpwr_lmt_by_ch(rtwdev, regd, bw);
+}
+
+/* cross-reference power limits */
+static void rtw_xref_txpwr_lmt(struct rtw_dev *rtwdev)
+{
+ u8 regd;
+
+ for (regd = 0; regd < RTW_REGD_MAX; regd++)
+ rtw_xref_txpwr_lmt_by_bw(rtwdev, regd);
+}
+
void rtw_parse_tbl_txpwr_lmt(struct rtw_dev *rtwdev,
const struct rtw_table *tbl)
{
@@ -741,10 +1273,11 @@ void rtw_parse_tbl_txpwr_lmt(struct rtw_dev *rtwdev,
BUILD_BUG_ON(sizeof(struct txpwr_lmt_cfg_pair) != sizeof(u8) * 6);
for (; p < end; p++) {
- phy_set_tx_power_limit(rtwdev, p->regd, p->band,
- p->bw, p->rs,
- p->ch, p->txpwr_lmt);
+ rtw_phy_set_tx_power_limit(rtwdev, p->regd, p->band,
+ p->bw, p->rs, p->ch, p->txpwr_lmt);
}
+
+ rtw_xref_txpwr_lmt(rtwdev);
}
void rtw_phy_cfg_mac(struct rtw_dev *rtwdev, const struct rtw_table *tbl,
@@ -819,93 +1352,6 @@ void rtw_phy_load_tables(struct rtw_dev *rtwdev)
}
}
-#define bcd_to_dec_pwr_by_rate(val, i) bcd2bin(val >> (i * 8))
-
-#define RTW_MAX_POWER_INDEX 0x3F
-
-u8 rtw_cck_rates[] = { DESC_RATE1M, DESC_RATE2M, DESC_RATE5_5M, DESC_RATE11M };
-u8 rtw_ofdm_rates[] = {
- DESC_RATE6M, DESC_RATE9M, DESC_RATE12M,
- DESC_RATE18M, DESC_RATE24M, DESC_RATE36M,
- DESC_RATE48M, DESC_RATE54M
-};
-u8 rtw_ht_1s_rates[] = {
- DESC_RATEMCS0, DESC_RATEMCS1, DESC_RATEMCS2,
- DESC_RATEMCS3, DESC_RATEMCS4, DESC_RATEMCS5,
- DESC_RATEMCS6, DESC_RATEMCS7
-};
-u8 rtw_ht_2s_rates[] = {
- DESC_RATEMCS8, DESC_RATEMCS9, DESC_RATEMCS10,
- DESC_RATEMCS11, DESC_RATEMCS12, DESC_RATEMCS13,
- DESC_RATEMCS14, DESC_RATEMCS15
-};
-u8 rtw_vht_1s_rates[] = {
- DESC_RATEVHT1SS_MCS0, DESC_RATEVHT1SS_MCS1,
- DESC_RATEVHT1SS_MCS2, DESC_RATEVHT1SS_MCS3,
- DESC_RATEVHT1SS_MCS4, DESC_RATEVHT1SS_MCS5,
- DESC_RATEVHT1SS_MCS6, DESC_RATEVHT1SS_MCS7,
- DESC_RATEVHT1SS_MCS8, DESC_RATEVHT1SS_MCS9
-};
-u8 rtw_vht_2s_rates[] = {
- DESC_RATEVHT2SS_MCS0, DESC_RATEVHT2SS_MCS1,
- DESC_RATEVHT2SS_MCS2, DESC_RATEVHT2SS_MCS3,
- DESC_RATEVHT2SS_MCS4, DESC_RATEVHT2SS_MCS5,
- DESC_RATEVHT2SS_MCS6, DESC_RATEVHT2SS_MCS7,
- DESC_RATEVHT2SS_MCS8, DESC_RATEVHT2SS_MCS9
-};
-
-static u8 rtw_cck_size = ARRAY_SIZE(rtw_cck_rates);
-static u8 rtw_ofdm_size = ARRAY_SIZE(rtw_ofdm_rates);
-static u8 rtw_ht_1s_size = ARRAY_SIZE(rtw_ht_1s_rates);
-static u8 rtw_ht_2s_size = ARRAY_SIZE(rtw_ht_2s_rates);
-static u8 rtw_vht_1s_size = ARRAY_SIZE(rtw_vht_1s_rates);
-static u8 rtw_vht_2s_size = ARRAY_SIZE(rtw_vht_2s_rates);
-u8 *rtw_rate_section[RTW_RATE_SECTION_MAX] = {
- rtw_cck_rates, rtw_ofdm_rates,
- rtw_ht_1s_rates, rtw_ht_2s_rates,
- rtw_vht_1s_rates, rtw_vht_2s_rates
-};
-u8 rtw_rate_size[RTW_RATE_SECTION_MAX] = {
- ARRAY_SIZE(rtw_cck_rates),
- ARRAY_SIZE(rtw_ofdm_rates),
- ARRAY_SIZE(rtw_ht_1s_rates),
- ARRAY_SIZE(rtw_ht_2s_rates),
- ARRAY_SIZE(rtw_vht_1s_rates),
- ARRAY_SIZE(rtw_vht_2s_rates)
-};
-
-static const u8 rtw_channel_idx_5g[RTW_MAX_CHANNEL_NUM_5G] = {
- 36, 38, 40, 42, 44, 46, 48, /* Band 1 */
- 52, 54, 56, 58, 60, 62, 64, /* Band 2 */
- 100, 102, 104, 106, 108, 110, 112, /* Band 3 */
- 116, 118, 120, 122, 124, 126, 128, /* Band 3 */
- 132, 134, 136, 138, 140, 142, 144, /* Band 3 */
- 149, 151, 153, 155, 157, 159, 161, /* Band 4 */
- 165, 167, 169, 171, 173, 175, 177}; /* Band 4 */
-
-static int rtw_channel_to_idx(u8 band, u8 channel)
-{
- int ch_idx;
- u8 n_channel;
-
- if (band == PHY_BAND_2G) {
- ch_idx = channel - 1;
- n_channel = RTW_MAX_CHANNEL_NUM_2G;
- } else if (band == PHY_BAND_5G) {
- n_channel = RTW_MAX_CHANNEL_NUM_5G;
- for (ch_idx = 0; ch_idx < n_channel; ch_idx++)
- if (rtw_channel_idx_5g[ch_idx] == channel)
- break;
- } else {
- return -1;
- }
-
- if (ch_idx >= n_channel)
- return -1;
-
- return ch_idx;
-}
-
static u8 rtw_get_channel_group(u8 channel)
{
switch (channel) {
@@ -995,10 +1441,10 @@ static u8 rtw_get_channel_group(u8 channel)
}
}
-static u8 phy_get_2g_tx_power_index(struct rtw_dev *rtwdev,
- struct rtw_2g_txpwr_idx *pwr_idx_2g,
- enum rtw_bandwidth bandwidth,
- u8 rate, u8 group)
+static u8 rtw_phy_get_2g_tx_power_index(struct rtw_dev *rtwdev,
+ struct rtw_2g_txpwr_idx *pwr_idx_2g,
+ enum rtw_bandwidth bandwidth,
+ u8 rate, u8 group)
{
struct rtw_chip_info *chip = rtwdev->chip;
u8 tx_power;
@@ -1042,10 +1488,10 @@ static u8 phy_get_2g_tx_power_index(struct rtw_dev *rtwdev,
return tx_power;
}
-static u8 phy_get_5g_tx_power_index(struct rtw_dev *rtwdev,
- struct rtw_5g_txpwr_idx *pwr_idx_5g,
- enum rtw_bandwidth bandwidth,
- u8 rate, u8 group)
+static u8 rtw_phy_get_5g_tx_power_index(struct rtw_dev *rtwdev,
+ struct rtw_5g_txpwr_idx *pwr_idx_5g,
+ enum rtw_bandwidth bandwidth,
+ u8 rate, u8 group)
{
struct rtw_chip_info *chip = rtwdev->chip;
u8 tx_power;
@@ -1096,81 +1542,112 @@ static u8 phy_get_5g_tx_power_index(struct rtw_dev *rtwdev,
return tx_power;
}
-/* set tx power level by path for each rates, note that the order of the rates
- * are *very* important, bacause 8822B/8821C combines every four bytes of tx
- * power index into a four-byte power index register, and calls set_tx_agc to
- * write these values into hardware
- */
-static
-void phy_set_tx_power_level_by_path(struct rtw_dev *rtwdev, u8 ch, u8 path)
+static s8 rtw_phy_get_tx_power_limit(struct rtw_dev *rtwdev, u8 band,
+ enum rtw_bandwidth bw, u8 rf_path,
+ u8 rate, u8 channel, u8 regd)
{
struct rtw_hal *hal = &rtwdev->hal;
+ u8 *cch_by_bw = hal->cch_by_bw;
+ s8 power_limit = (s8)rtwdev->chip->max_power_index;
u8 rs;
+ int ch_idx;
+ u8 cur_bw, cur_ch;
+ s8 cur_lmt;
- /* do not need cck rates if we are not in 2.4G */
- if (hal->current_band_type == RTW_BAND_2G)
+ if (regd > RTW_REGD_WW)
+ return power_limit;
+
+ if (rate >= DESC_RATE1M && rate <= DESC_RATE11M)
rs = RTW_RATE_SECTION_CCK;
- else
+ else if (rate >= DESC_RATE6M && rate <= DESC_RATE54M)
rs = RTW_RATE_SECTION_OFDM;
+ else if (rate >= DESC_RATEMCS0 && rate <= DESC_RATEMCS7)
+ rs = RTW_RATE_SECTION_HT_1S;
+ else if (rate >= DESC_RATEMCS8 && rate <= DESC_RATEMCS15)
+ rs = RTW_RATE_SECTION_HT_2S;
+ else if (rate >= DESC_RATEVHT1SS_MCS0 && rate <= DESC_RATEVHT1SS_MCS9)
+ rs = RTW_RATE_SECTION_VHT_1S;
+ else if (rate >= DESC_RATEVHT2SS_MCS0 && rate <= DESC_RATEVHT2SS_MCS9)
+ rs = RTW_RATE_SECTION_VHT_2S;
+ else
+ goto err;
- for (; rs < RTW_RATE_SECTION_MAX; rs++)
- phy_set_tx_power_index_by_rs(rtwdev, ch, path, rs);
-}
+ /* only 20M BW with cck and ofdm */
+ if (rs == RTW_RATE_SECTION_CCK || rs == RTW_RATE_SECTION_OFDM)
+ bw = RTW_CHANNEL_WIDTH_20;
-void rtw_phy_set_tx_power_level(struct rtw_dev *rtwdev, u8 channel)
-{
- struct rtw_chip_info *chip = rtwdev->chip;
- struct rtw_hal *hal = &rtwdev->hal;
- u8 path;
+ /* only 20/40M BW with ht */
+ if (rs == RTW_RATE_SECTION_HT_1S || rs == RTW_RATE_SECTION_HT_2S)
+ bw = min_t(u8, bw, RTW_CHANNEL_WIDTH_40);
- mutex_lock(&hal->tx_power_mutex);
+ /* select min power limit among [20M BW ~ current BW] */
+ for (cur_bw = RTW_CHANNEL_WIDTH_20; cur_bw <= bw; cur_bw++) {
+ cur_ch = cch_by_bw[cur_bw];
- for (path = 0; path < hal->rf_path_num; path++)
- phy_set_tx_power_level_by_path(rtwdev, channel, path);
+ ch_idx = rtw_channel_to_idx(band, cur_ch);
+ if (ch_idx < 0)
+ goto err;
- chip->ops->set_tx_power_index(rtwdev);
- mutex_unlock(&hal->tx_power_mutex);
-}
+ cur_lmt = cur_ch <= RTW_MAX_CHANNEL_NUM_2G ?
+ hal->tx_pwr_limit_2g[regd][cur_bw][rs][ch_idx] :
+ hal->tx_pwr_limit_5g[regd][cur_bw][rs][ch_idx];
+
+ power_limit = min_t(s8, cur_lmt, power_limit);
+ }
+
+ return power_limit;
-s8 phy_get_tx_power_limit(struct rtw_dev *rtwdev, u8 band,
- enum rtw_bandwidth bandwidth, u8 rf_path,
- u8 rate, u8 channel, u8 regd);
+err:
+ WARN(1, "invalid arguments, band=%d, bw=%d, path=%d, rate=%d, ch=%d\n",
+ band, bw, rf_path, rate, channel);
+ return (s8)rtwdev->chip->max_power_index;
+}
-static
-u8 phy_get_tx_power_index(void *adapter, u8 rf_path, u8 rate,
- enum rtw_bandwidth bandwidth, u8 channel, u8 regd)
+void rtw_get_tx_power_params(struct rtw_dev *rtwdev, u8 path, u8 rate, u8 bw,
+ u8 ch, u8 regd, struct rtw_power_params *pwr_param)
{
- struct rtw_dev *rtwdev = adapter;
struct rtw_hal *hal = &rtwdev->hal;
struct rtw_txpwr_idx *pwr_idx;
- u8 tx_power;
- u8 group;
- u8 band;
- s8 offset, limit;
+ u8 group, band;
+ u8 *base = &pwr_param->pwr_base;
+ s8 *offset = &pwr_param->pwr_offset;
+ s8 *limit = &pwr_param->pwr_limit;
- pwr_idx = &rtwdev->efuse.txpwr_idx_table[rf_path];
- group = rtw_get_channel_group(channel);
+ pwr_idx = &rtwdev->efuse.txpwr_idx_table[path];
+ group = rtw_get_channel_group(ch);
/* base power index for 2.4G/5G */
- if (channel <= 14) {
+ if (ch <= 14) {
band = PHY_BAND_2G;
- tx_power = phy_get_2g_tx_power_index(rtwdev,
- &pwr_idx->pwr_idx_2g,
- bandwidth, rate, group);
- offset = hal->tx_pwr_by_rate_offset_2g[rf_path][rate];
+ *base = rtw_phy_get_2g_tx_power_index(rtwdev,
+ &pwr_idx->pwr_idx_2g,
+ bw, rate, group);
+ *offset = hal->tx_pwr_by_rate_offset_2g[path][rate];
} else {
band = PHY_BAND_5G;
- tx_power = phy_get_5g_tx_power_index(rtwdev,
- &pwr_idx->pwr_idx_5g,
- bandwidth, rate, group);
- offset = hal->tx_pwr_by_rate_offset_5g[rf_path][rate];
+ *base = rtw_phy_get_5g_tx_power_index(rtwdev,
+ &pwr_idx->pwr_idx_5g,
+ bw, rate, group);
+ *offset = hal->tx_pwr_by_rate_offset_5g[path][rate];
}
- limit = phy_get_tx_power_limit(rtwdev, band, bandwidth, rf_path,
- rate, channel, regd);
+ *limit = rtw_phy_get_tx_power_limit(rtwdev, band, bw, path,
+ rate, ch, regd);
+}
+
+u8
+rtw_phy_get_tx_power_index(struct rtw_dev *rtwdev, u8 rf_path, u8 rate,
+ enum rtw_bandwidth bandwidth, u8 channel, u8 regd)
+{
+ struct rtw_power_params pwr_param = {0};
+ u8 tx_power;
+ s8 offset;
+
+ rtw_get_tx_power_params(rtwdev, rf_path, rate, bandwidth,
+ channel, regd, &pwr_param);
- if (offset > limit)
- offset = limit;
+ tx_power = pwr_param.pwr_base;
+ offset = min_t(s8, pwr_param.pwr_offset, pwr_param.pwr_limit);
tx_power += offset;
@@ -1180,9 +1657,9 @@ u8 phy_get_tx_power_index(void *adapter, u8 rf_path, u8 rate,
return tx_power;
}
-void phy_set_tx_power_index_by_rs(void *adapter, u8 ch, u8 path, u8 rs)
+static void rtw_phy_set_tx_power_index_by_rs(struct rtw_dev *rtwdev,
+ u8 ch, u8 path, u8 rs)
{
- struct rtw_dev *rtwdev = adapter;
struct rtw_hal *hal = &rtwdev->hal;
u8 regd = rtwdev->regd.txpwr_regd;
u8 *rates;
@@ -1200,361 +1677,51 @@ void phy_set_tx_power_index_by_rs(void *adapter, u8 ch, u8 path, u8 rs)
bw = hal->current_band_width;
for (i = 0; i < size; i++) {
rate = rates[i];
- pwr_idx = phy_get_tx_power_index(adapter, path, rate, bw, ch,
- regd);
+ pwr_idx = rtw_phy_get_tx_power_index(rtwdev, path, rate,
+ bw, ch, regd);
hal->tx_pwr_tbl[path][rate] = pwr_idx;
}
}
-static u8 tbl_to_dec_pwr_by_rate(struct rtw_dev *rtwdev, u32 hex, u8 i)
-{
- if (rtwdev->chip->is_pwr_by_rate_dec)
- return bcd_to_dec_pwr_by_rate(hex, i);
- else
- return (hex >> (i * 8)) & 0xFF;
-}
-
-static void phy_get_rate_values_of_txpwr_by_rate(struct rtw_dev *rtwdev,
- u32 addr, u32 mask,
- u32 val, u8 *rate,
- u8 *pwr_by_rate, u8 *rate_num)
+/* set tx power level by path for each rates, note that the order of the rates
+ * are *very* important, bacause 8822B/8821C combines every four bytes of tx
+ * power index into a four-byte power index register, and calls set_tx_agc to
+ * write these values into hardware
+ */
+static void rtw_phy_set_tx_power_level_by_path(struct rtw_dev *rtwdev,
+ u8 ch, u8 path)
{
- int i;
+ struct rtw_hal *hal = &rtwdev->hal;
+ u8 rs;
- switch (addr) {
- case 0xE00:
- case 0x830:
- rate[0] = DESC_RATE6M;
- rate[1] = DESC_RATE9M;
- rate[2] = DESC_RATE12M;
- rate[3] = DESC_RATE18M;
- for (i = 0; i < 4; ++i)
- pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
- *rate_num = 4;
- break;
- case 0xE04:
- case 0x834:
- rate[0] = DESC_RATE24M;
- rate[1] = DESC_RATE36M;
- rate[2] = DESC_RATE48M;
- rate[3] = DESC_RATE54M;
- for (i = 0; i < 4; ++i)
- pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
- *rate_num = 4;
- break;
- case 0xE08:
- rate[0] = DESC_RATE1M;
- pwr_by_rate[0] = bcd_to_dec_pwr_by_rate(val, 1);
- *rate_num = 1;
- break;
- case 0x86C:
- if (mask == 0xffffff00) {
- rate[0] = DESC_RATE2M;
- rate[1] = DESC_RATE5_5M;
- rate[2] = DESC_RATE11M;
- for (i = 1; i < 4; ++i)
- pwr_by_rate[i - 1] =
- tbl_to_dec_pwr_by_rate(rtwdev, val, i);
- *rate_num = 3;
- } else if (mask == 0x000000ff) {
- rate[0] = DESC_RATE11M;
- pwr_by_rate[0] = bcd_to_dec_pwr_by_rate(val, 0);
- *rate_num = 1;
- }
- break;
- case 0xE10:
- case 0x83C:
- rate[0] = DESC_RATEMCS0;
- rate[1] = DESC_RATEMCS1;
- rate[2] = DESC_RATEMCS2;
- rate[3] = DESC_RATEMCS3;
- for (i = 0; i < 4; ++i)
- pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
- *rate_num = 4;
- break;
- case 0xE14:
- case 0x848:
- rate[0] = DESC_RATEMCS4;
- rate[1] = DESC_RATEMCS5;
- rate[2] = DESC_RATEMCS6;
- rate[3] = DESC_RATEMCS7;
- for (i = 0; i < 4; ++i)
- pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
- *rate_num = 4;
- break;
- case 0xE18:
- case 0x84C:
- rate[0] = DESC_RATEMCS8;
- rate[1] = DESC_RATEMCS9;
- rate[2] = DESC_RATEMCS10;
- rate[3] = DESC_RATEMCS11;
- for (i = 0; i < 4; ++i)
- pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
- *rate_num = 4;
- break;
- case 0xE1C:
- case 0x868:
- rate[0] = DESC_RATEMCS12;
- rate[1] = DESC_RATEMCS13;
- rate[2] = DESC_RATEMCS14;
- rate[3] = DESC_RATEMCS15;
- for (i = 0; i < 4; ++i)
- pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
- *rate_num = 4;
+ /* do not need cck rates if we are not in 2.4G */
+ if (hal->current_band_type == RTW_BAND_2G)
+ rs = RTW_RATE_SECTION_CCK;
+ else
+ rs = RTW_RATE_SECTION_OFDM;
- break;
- case 0x838:
- rate[0] = DESC_RATE1M;
- rate[1] = DESC_RATE2M;
- rate[2] = DESC_RATE5_5M;
- for (i = 1; i < 4; ++i)
- pwr_by_rate[i - 1] = tbl_to_dec_pwr_by_rate(rtwdev,
- val, i);
- *rate_num = 3;
- break;
- case 0xC20:
- case 0xE20:
- case 0x1820:
- case 0x1A20:
- rate[0] = DESC_RATE1M;
- rate[1] = DESC_RATE2M;
- rate[2] = DESC_RATE5_5M;
- rate[3] = DESC_RATE11M;
- for (i = 0; i < 4; ++i)
- pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
- *rate_num = 4;
- break;
- case 0xC24:
- case 0xE24:
- case 0x1824:
- case 0x1A24:
- rate[0] = DESC_RATE6M;
- rate[1] = DESC_RATE9M;
- rate[2] = DESC_RATE12M;
- rate[3] = DESC_RATE18M;
- for (i = 0; i < 4; ++i)
- pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
- *rate_num = 4;
- break;
- case 0xC28:
- case 0xE28:
- case 0x1828:
- case 0x1A28:
- rate[0] = DESC_RATE24M;
- rate[1] = DESC_RATE36M;
- rate[2] = DESC_RATE48M;
- rate[3] = DESC_RATE54M;
- for (i = 0; i < 4; ++i)
- pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
- *rate_num = 4;
- break;
- case 0xC2C:
- case 0xE2C:
- case 0x182C:
- case 0x1A2C:
- rate[0] = DESC_RATEMCS0;
- rate[1] = DESC_RATEMCS1;
- rate[2] = DESC_RATEMCS2;
- rate[3] = DESC_RATEMCS3;
- for (i = 0; i < 4; ++i)
- pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
- *rate_num = 4;
- break;
- case 0xC30:
- case 0xE30:
- case 0x1830:
- case 0x1A30:
- rate[0] = DESC_RATEMCS4;
- rate[1] = DESC_RATEMCS5;
- rate[2] = DESC_RATEMCS6;
- rate[3] = DESC_RATEMCS7;
- for (i = 0; i < 4; ++i)
- pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
- *rate_num = 4;
- break;
- case 0xC34:
- case 0xE34:
- case 0x1834:
- case 0x1A34:
- rate[0] = DESC_RATEMCS8;
- rate[1] = DESC_RATEMCS9;
- rate[2] = DESC_RATEMCS10;
- rate[3] = DESC_RATEMCS11;
- for (i = 0; i < 4; ++i)
- pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
- *rate_num = 4;
- break;
- case 0xC38:
- case 0xE38:
- case 0x1838:
- case 0x1A38:
- rate[0] = DESC_RATEMCS12;
- rate[1] = DESC_RATEMCS13;
- rate[2] = DESC_RATEMCS14;
- rate[3] = DESC_RATEMCS15;
- for (i = 0; i < 4; ++i)
- pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
- *rate_num = 4;
- break;
- case 0xC3C:
- case 0xE3C:
- case 0x183C:
- case 0x1A3C:
- rate[0] = DESC_RATEVHT1SS_MCS0;
- rate[1] = DESC_RATEVHT1SS_MCS1;
- rate[2] = DESC_RATEVHT1SS_MCS2;
- rate[3] = DESC_RATEVHT1SS_MCS3;
- for (i = 0; i < 4; ++i)
- pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
- *rate_num = 4;
- break;
- case 0xC40:
- case 0xE40:
- case 0x1840:
- case 0x1A40:
- rate[0] = DESC_RATEVHT1SS_MCS4;
- rate[1] = DESC_RATEVHT1SS_MCS5;
- rate[2] = DESC_RATEVHT1SS_MCS6;
- rate[3] = DESC_RATEVHT1SS_MCS7;
- for (i = 0; i < 4; ++i)
- pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
- *rate_num = 4;
- break;
- case 0xC44:
- case 0xE44:
- case 0x1844:
- case 0x1A44:
- rate[0] = DESC_RATEVHT1SS_MCS8;
- rate[1] = DESC_RATEVHT1SS_MCS9;
- rate[2] = DESC_RATEVHT2SS_MCS0;
- rate[3] = DESC_RATEVHT2SS_MCS1;
- for (i = 0; i < 4; ++i)
- pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
- *rate_num = 4;
- break;
- case 0xC48:
- case 0xE48:
- case 0x1848:
- case 0x1A48:
- rate[0] = DESC_RATEVHT2SS_MCS2;
- rate[1] = DESC_RATEVHT2SS_MCS3;
- rate[2] = DESC_RATEVHT2SS_MCS4;
- rate[3] = DESC_RATEVHT2SS_MCS5;
- for (i = 0; i < 4; ++i)
- pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
- *rate_num = 4;
- break;
- case 0xC4C:
- case 0xE4C:
- case 0x184C:
- case 0x1A4C:
- rate[0] = DESC_RATEVHT2SS_MCS6;
- rate[1] = DESC_RATEVHT2SS_MCS7;
- rate[2] = DESC_RATEVHT2SS_MCS8;
- rate[3] = DESC_RATEVHT2SS_MCS9;
- for (i = 0; i < 4; ++i)
- pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
- *rate_num = 4;
- break;
- case 0xCD8:
- case 0xED8:
- case 0x18D8:
- case 0x1AD8:
- rate[0] = DESC_RATEMCS16;
- rate[1] = DESC_RATEMCS17;
- rate[2] = DESC_RATEMCS18;
- rate[3] = DESC_RATEMCS19;
- for (i = 0; i < 4; ++i)
- pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
- *rate_num = 4;
- break;
- case 0xCDC:
- case 0xEDC:
- case 0x18DC:
- case 0x1ADC:
- rate[0] = DESC_RATEMCS20;
- rate[1] = DESC_RATEMCS21;
- rate[2] = DESC_RATEMCS22;
- rate[3] = DESC_RATEMCS23;
- for (i = 0; i < 4; ++i)
- pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
- *rate_num = 4;
- break;
- case 0xCE0:
- case 0xEE0:
- case 0x18E0:
- case 0x1AE0:
- rate[0] = DESC_RATEVHT3SS_MCS0;
- rate[1] = DESC_RATEVHT3SS_MCS1;
- rate[2] = DESC_RATEVHT3SS_MCS2;
- rate[3] = DESC_RATEVHT3SS_MCS3;
- for (i = 0; i < 4; ++i)
- pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
- *rate_num = 4;
- break;
- case 0xCE4:
- case 0xEE4:
- case 0x18E4:
- case 0x1AE4:
- rate[0] = DESC_RATEVHT3SS_MCS4;
- rate[1] = DESC_RATEVHT3SS_MCS5;
- rate[2] = DESC_RATEVHT3SS_MCS6;
- rate[3] = DESC_RATEVHT3SS_MCS7;
- for (i = 0; i < 4; ++i)
- pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
- *rate_num = 4;
- break;
- case 0xCE8:
- case 0xEE8:
- case 0x18E8:
- case 0x1AE8:
- rate[0] = DESC_RATEVHT3SS_MCS8;
- rate[1] = DESC_RATEVHT3SS_MCS9;
- for (i = 0; i < 2; ++i)
- pwr_by_rate[i] = tbl_to_dec_pwr_by_rate(rtwdev, val, i);
- *rate_num = 2;
- break;
- default:
- rtw_warn(rtwdev, "invalid tx power index addr 0x%08x\n", addr);
- break;
- }
+ for (; rs < RTW_RATE_SECTION_MAX; rs++)
+ rtw_phy_set_tx_power_index_by_rs(rtwdev, ch, path, rs);
}
-void phy_store_tx_power_by_rate(void *adapter, u32 band, u32 rfpath, u32 txnum,
- u32 regaddr, u32 bitmask, u32 data)
+void rtw_phy_set_tx_power_level(struct rtw_dev *rtwdev, u8 channel)
{
- struct rtw_dev *rtwdev = adapter;
+ struct rtw_chip_info *chip = rtwdev->chip;
struct rtw_hal *hal = &rtwdev->hal;
- u8 rate_num = 0;
- u8 rate;
- u8 rates[RTW_RF_PATH_MAX] = {0};
- s8 offset;
- s8 pwr_by_rate[RTW_RF_PATH_MAX] = {0};
- int i;
+ u8 path;
- phy_get_rate_values_of_txpwr_by_rate(rtwdev, regaddr, bitmask, data,
- rates, pwr_by_rate, &rate_num);
+ mutex_lock(&hal->tx_power_mutex);
- if (WARN_ON(rfpath >= RTW_RF_PATH_MAX ||
- (band != PHY_BAND_2G && band != PHY_BAND_5G) ||
- rate_num > RTW_RF_PATH_MAX))
- return;
+ for (path = 0; path < hal->rf_path_num; path++)
+ rtw_phy_set_tx_power_level_by_path(rtwdev, channel, path);
- for (i = 0; i < rate_num; i++) {
- offset = pwr_by_rate[i];
- rate = rates[i];
- if (band == PHY_BAND_2G)
- hal->tx_pwr_by_rate_offset_2g[rfpath][rate] = offset;
- else if (band == PHY_BAND_5G)
- hal->tx_pwr_by_rate_offset_5g[rfpath][rate] = offset;
- else
- continue;
- }
+ chip->ops->set_tx_power_index(rtwdev);
+ mutex_unlock(&hal->tx_power_mutex);
}
-static
-void phy_tx_power_by_rate_config_by_path(struct rtw_hal *hal, u8 path,
- u8 rs, u8 size, u8 *rates)
+static void
+rtw_phy_tx_power_by_rate_config_by_path(struct rtw_hal *hal, u8 path,
+ u8 rs, u8 size, u8 *rates)
{
u8 rate;
u8 base_idx, rate_idx;
@@ -1580,36 +1747,35 @@ void rtw_phy_tx_power_by_rate_config(struct rtw_hal *hal)
u8 path;
for (path = 0; path < RTW_RF_PATH_MAX; path++) {
- phy_tx_power_by_rate_config_by_path(hal, path,
+ rtw_phy_tx_power_by_rate_config_by_path(hal, path,
RTW_RATE_SECTION_CCK,
rtw_cck_size, rtw_cck_rates);
- phy_tx_power_by_rate_config_by_path(hal, path,
+ rtw_phy_tx_power_by_rate_config_by_path(hal, path,
RTW_RATE_SECTION_OFDM,
rtw_ofdm_size, rtw_ofdm_rates);
- phy_tx_power_by_rate_config_by_path(hal, path,
+ rtw_phy_tx_power_by_rate_config_by_path(hal, path,
RTW_RATE_SECTION_HT_1S,
rtw_ht_1s_size, rtw_ht_1s_rates);
- phy_tx_power_by_rate_config_by_path(hal, path,
+ rtw_phy_tx_power_by_rate_config_by_path(hal, path,
RTW_RATE_SECTION_HT_2S,
rtw_ht_2s_size, rtw_ht_2s_rates);
- phy_tx_power_by_rate_config_by_path(hal, path,
+ rtw_phy_tx_power_by_rate_config_by_path(hal, path,
RTW_RATE_SECTION_VHT_1S,
rtw_vht_1s_size, rtw_vht_1s_rates);
- phy_tx_power_by_rate_config_by_path(hal, path,
+ rtw_phy_tx_power_by_rate_config_by_path(hal, path,
RTW_RATE_SECTION_VHT_2S,
rtw_vht_2s_size, rtw_vht_2s_rates);
}
}
static void
-phy_tx_power_limit_config(struct rtw_hal *hal, u8 regd, u8 bw, u8 rs)
+__rtw_phy_tx_power_limit_config(struct rtw_hal *hal, u8 regd, u8 bw, u8 rs)
{
- s8 base, orig;
+ s8 base;
u8 ch;
for (ch = 0; ch < RTW_MAX_CHANNEL_NUM_2G; ch++) {
base = hal->tx_pwr_by_rate_base_2g[0][rs];
- orig = hal->tx_pwr_limit_2g[regd][bw][rs][ch];
hal->tx_pwr_limit_2g[regd][bw][rs][ch] -= base;
}
@@ -1623,98 +1789,34 @@ void rtw_phy_tx_power_limit_config(struct rtw_hal *hal)
{
u8 regd, bw, rs;
+ /* default at channel 1 */
+ hal->cch_by_bw[RTW_CHANNEL_WIDTH_20] = 1;
+
for (regd = 0; regd < RTW_REGD_MAX; regd++)
for (bw = 0; bw < RTW_CHANNEL_WIDTH_MAX; bw++)
for (rs = 0; rs < RTW_RATE_SECTION_MAX; rs++)
- phy_tx_power_limit_config(hal, regd, bw, rs);
-}
-
-static s8 get_tx_power_limit(struct rtw_hal *hal, u8 bw, u8 rs, u8 ch, u8 regd)
-{
- if (regd > RTW_REGD_WW)
- return RTW_MAX_POWER_INDEX;
-
- return hal->tx_pwr_limit_2g[regd][bw][rs][ch];
-}
-
-s8 phy_get_tx_power_limit(struct rtw_dev *rtwdev, u8 band,
- enum rtw_bandwidth bw, u8 rf_path,
- u8 rate, u8 channel, u8 regd)
-{
- struct rtw_hal *hal = &rtwdev->hal;
- s8 power_limit;
- u8 rs;
- int ch_idx;
-
- if (rate >= DESC_RATE1M && rate <= DESC_RATE11M)
- rs = RTW_RATE_SECTION_CCK;
- else if (rate >= DESC_RATE6M && rate <= DESC_RATE54M)
- rs = RTW_RATE_SECTION_OFDM;
- else if (rate >= DESC_RATEMCS0 && rate <= DESC_RATEMCS7)
- rs = RTW_RATE_SECTION_HT_1S;
- else if (rate >= DESC_RATEMCS8 && rate <= DESC_RATEMCS15)
- rs = RTW_RATE_SECTION_HT_2S;
- else if (rate >= DESC_RATEVHT1SS_MCS0 && rate <= DESC_RATEVHT1SS_MCS9)
- rs = RTW_RATE_SECTION_VHT_1S;
- else if (rate >= DESC_RATEVHT2SS_MCS0 && rate <= DESC_RATEVHT2SS_MCS9)
- rs = RTW_RATE_SECTION_VHT_2S;
- else
- goto err;
-
- ch_idx = rtw_channel_to_idx(band, channel);
- if (ch_idx < 0)
- goto err;
-
- power_limit = get_tx_power_limit(hal, bw, rs, ch_idx, regd);
-
- return power_limit;
-
-err:
- WARN(1, "invalid arguments, band=%d, bw=%d, path=%d, rate=%d, ch=%d\n",
- band, bw, rf_path, rate, channel);
- return RTW_MAX_POWER_INDEX;
+ __rtw_phy_tx_power_limit_config(hal, regd, bw, rs);
}
-void phy_set_tx_power_limit(struct rtw_dev *rtwdev, u8 regd, u8 band,
- u8 bw, u8 rs, u8 ch, s8 pwr_limit)
+static void rtw_phy_init_tx_power_limit(struct rtw_dev *rtwdev,
+ u8 regd, u8 bw, u8 rs)
{
struct rtw_hal *hal = &rtwdev->hal;
- int ch_idx;
-
- pwr_limit = clamp_t(s8, pwr_limit,
- -RTW_MAX_POWER_INDEX, RTW_MAX_POWER_INDEX);
- ch_idx = rtw_channel_to_idx(band, ch);
-
- if (regd >= RTW_REGD_MAX || bw >= RTW_CHANNEL_WIDTH_MAX ||
- rs >= RTW_RATE_SECTION_MAX || ch_idx < 0) {
- WARN(1,
- "wrong txpwr_lmt regd=%u, band=%u bw=%u, rs=%u, ch_idx=%u, pwr_limit=%d\n",
- regd, band, bw, rs, ch_idx, pwr_limit);
- return;
- }
-
- if (band == PHY_BAND_2G)
- hal->tx_pwr_limit_2g[regd][bw][rs][ch_idx] = pwr_limit;
- else if (band == PHY_BAND_5G)
- hal->tx_pwr_limit_5g[regd][bw][rs][ch_idx] = pwr_limit;
-}
-
-static
-void rtw_hw_tx_power_limit_init(struct rtw_hal *hal, u8 regd, u8 bw, u8 rs)
-{
+ s8 max_power_index = (s8)rtwdev->chip->max_power_index;
u8 ch;
/* 2.4G channels */
for (ch = 0; ch < RTW_MAX_CHANNEL_NUM_2G; ch++)
- hal->tx_pwr_limit_2g[regd][bw][rs][ch] = RTW_MAX_POWER_INDEX;
+ hal->tx_pwr_limit_2g[regd][bw][rs][ch] = max_power_index;
/* 5G channels */
for (ch = 0; ch < RTW_MAX_CHANNEL_NUM_5G; ch++)
- hal->tx_pwr_limit_5g[regd][bw][rs][ch] = RTW_MAX_POWER_INDEX;
+ hal->tx_pwr_limit_5g[regd][bw][rs][ch] = max_power_index;
}
-void rtw_hw_init_tx_power(struct rtw_hal *hal)
+void rtw_phy_init_tx_power(struct rtw_dev *rtwdev)
{
+ struct rtw_hal *hal = &rtwdev->hal;
u8 regd, path, rate, rs, bw;
/* init tx power by rate offset */
@@ -1729,5 +1831,6 @@ void rtw_hw_init_tx_power(struct rtw_hal *hal)
for (regd = 0; regd < RTW_REGD_MAX; regd++)
for (bw = 0; bw < RTW_CHANNEL_WIDTH_MAX; bw++)
for (rs = 0; rs < RTW_RATE_SECTION_MAX; rs++)
- rtw_hw_tx_power_limit_init(hal, regd, bw, rs);
+ rtw_phy_init_tx_power_limit(rtwdev, regd, bw,
+ rs);
}
diff --git a/drivers/net/wireless/realtek/rtw88/phy.h b/drivers/net/wireless/realtek/rtw88/phy.h
index ec03a2051e52..7c8eb732b13c 100644
--- a/drivers/net/wireless/realtek/rtw88/phy.h
+++ b/drivers/net/wireless/realtek/rtw88/phy.h
@@ -27,11 +27,6 @@ bool rtw_phy_write_rf_reg(struct rtw_dev *rtwdev, enum rtw_rf_path rf_path,
u32 addr, u32 mask, u32 data);
bool rtw_phy_write_rf_reg_mix(struct rtw_dev *rtwdev, enum rtw_rf_path rf_path,
u32 addr, u32 mask, u32 data);
-void phy_store_tx_power_by_rate(void *adapter, u32 band, u32 rfpath, u32 txnum,
- u32 regaddr, u32 bitmask, u32 data);
-void phy_set_tx_power_limit(struct rtw_dev *rtwdev, u8 regd, u8 band,
- u8 bw, u8 rs, u8 ch, s8 pwr_limit);
-void phy_set_tx_power_index_by_rs(void *adapter, u8 ch, u8 path, u8 rs);
void rtw_phy_setup_phy_cond(struct rtw_dev *rtwdev, u32 pkg);
void rtw_parse_tbl_phy_cond(struct rtw_dev *rtwdev, const struct rtw_table *tbl);
void rtw_parse_tbl_bb_pg(struct rtw_dev *rtwdev, const struct rtw_table *tbl);
@@ -44,7 +39,7 @@ void rtw_phy_cfg_bb(struct rtw_dev *rtwdev, const struct rtw_table *tbl,
u32 addr, u32 data);
void rtw_phy_cfg_rf(struct rtw_dev *rtwdev, const struct rtw_table *tbl,
u32 addr, u32 data);
-void rtw_hw_init_tx_power(struct rtw_hal *hal);
+void rtw_phy_init_tx_power(struct rtw_dev *rtwdev);
void rtw_phy_load_tables(struct rtw_dev *rtwdev);
void rtw_phy_set_tx_power_level(struct rtw_dev *rtwdev, u8 channel);
void rtw_phy_tx_power_by_rate_config(struct rtw_hal *hal);
@@ -110,6 +105,17 @@ static inline int rtw_check_supported_rfe(struct rtw_dev *rtwdev)
void rtw_phy_dig_write(struct rtw_dev *rtwdev, u8 igi);
+struct rtw_power_params {
+ u8 pwr_base;
+ s8 pwr_offset;
+ s8 pwr_limit;
+};
+
+void
+rtw_get_tx_power_params(struct rtw_dev *rtwdev, u8 path,
+ u8 rate, u8 bw, u8 ch, u8 regd,
+ struct rtw_power_params *pwr_param);
+
#define MASKBYTE0 0xff
#define MASKBYTE1 0xff00
#define MASKBYTE2 0xff0000
diff --git a/drivers/net/wireless/realtek/rtw88/regd.c b/drivers/net/wireless/realtek/rtw88/regd.c
index e7750a833a8e..69744dd65968 100644
--- a/drivers/net/wireless/realtek/rtw88/regd.c
+++ b/drivers/net/wireless/realtek/rtw88/regd.c
@@ -21,19 +21,19 @@ static const struct rtw_regulatory rtw_defined_chplan =
static const struct rtw_regulatory all_chplan_map[] = {
COUNTRY_CHPLAN_ENT("AD", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
- COUNTRY_CHPLAN_ENT("AE", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
+ COUNTRY_CHPLAN_ENT("AE", RTW_CHPLAN_WORLD_ETSI2, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("AF", RTW_CHPLAN_ETSI1_ETSI4, RTW_REGD_ETSI),
- COUNTRY_CHPLAN_ENT("AG", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
+ COUNTRY_CHPLAN_ENT("AG", RTW_CHPLAN_FCC2_FCC11, RTW_REGD_FCC),
COUNTRY_CHPLAN_ENT("AI", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("AL", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("AM", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
- COUNTRY_CHPLAN_ENT("AN", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
+ COUNTRY_CHPLAN_ENT("AN", RTW_CHPLAN_FCC2_FCC11, RTW_REGD_FCC),
COUNTRY_CHPLAN_ENT("AO", RTW_CHPLAN_WORLD_ETSI6, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("AQ", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("AR", RTW_CHPLAN_FCC2_FCC7, RTW_REGD_FCC),
COUNTRY_CHPLAN_ENT("AS", RTW_CHPLAN_FCC2_FCC11, RTW_REGD_FCC),
COUNTRY_CHPLAN_ENT("AT", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
- COUNTRY_CHPLAN_ENT("AU", RTW_CHPLAN_WORLD_ACMA1, RTW_REGD_ETSI),
+ COUNTRY_CHPLAN_ENT("AU", RTW_CHPLAN_WORLD_ACMA1, RTW_REGD_ACMA),
COUNTRY_CHPLAN_ENT("AW", RTW_CHPLAN_FCC2_FCC11, RTW_REGD_FCC),
COUNTRY_CHPLAN_ENT("AZ", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("BA", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
@@ -42,31 +42,34 @@ static const struct rtw_regulatory all_chplan_map[] = {
COUNTRY_CHPLAN_ENT("BE", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("BF", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("BG", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
- COUNTRY_CHPLAN_ENT("BH", RTW_CHPLAN_WORLD_ETSI6, RTW_REGD_ETSI),
+ COUNTRY_CHPLAN_ENT("BH", RTW_CHPLAN_WORLD_ETSI7, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("BI", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("BJ", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
+ COUNTRY_CHPLAN_ENT("BM", RTW_CHPLAN_FCC2_FCC11, RTW_REGD_FCC),
COUNTRY_CHPLAN_ENT("BN", RTW_CHPLAN_WORLD_ETSI6, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("BO", RTW_CHPLAN_WORLD_FCC7, RTW_REGD_FCC),
COUNTRY_CHPLAN_ENT("BR", RTW_CHPLAN_FCC2_FCC1, RTW_REGD_FCC),
COUNTRY_CHPLAN_ENT("BS", RTW_CHPLAN_FCC2_FCC11, RTW_REGD_FCC),
- COUNTRY_CHPLAN_ENT("BW", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
+ COUNTRY_CHPLAN_ENT("BT", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
+ COUNTRY_CHPLAN_ENT("BV", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
+ COUNTRY_CHPLAN_ENT("BW", RTW_CHPLAN_WORLD_ETSI2, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("BY", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("BZ", RTW_CHPLAN_FCC2_FCC11, RTW_REGD_FCC),
- COUNTRY_CHPLAN_ENT("CA", RTW_CHPLAN_IC1_IC2, RTW_REGD_FCC),
+ COUNTRY_CHPLAN_ENT("CA", RTW_CHPLAN_IC1_IC2, RTW_REGD_IC),
COUNTRY_CHPLAN_ENT("CC", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("CD", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("CF", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("CG", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("CH", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
- COUNTRY_CHPLAN_ENT("CI", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
+ COUNTRY_CHPLAN_ENT("CI", RTW_CHPLAN_ETSI1_ETSI4, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("CK", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
- COUNTRY_CHPLAN_ENT("CL", RTW_CHPLAN_WORLD_CHILE1, RTW_REGD_FCC),
+ COUNTRY_CHPLAN_ENT("CL", RTW_CHPLAN_WORLD_CHILE1, RTW_REGD_CHILE),
COUNTRY_CHPLAN_ENT("CM", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("CN", RTW_CHPLAN_WORLD_ETSI7, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("CO", RTW_CHPLAN_FCC2_FCC11, RTW_REGD_FCC),
COUNTRY_CHPLAN_ENT("CR", RTW_CHPLAN_FCC2_FCC11, RTW_REGD_FCC),
COUNTRY_CHPLAN_ENT("CV", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
- COUNTRY_CHPLAN_ENT("CX", RTW_CHPLAN_WORLD_ACMA1, RTW_REGD_ETSI),
+ COUNTRY_CHPLAN_ENT("CX", RTW_CHPLAN_WORLD_ACMA1, RTW_REGD_ACMA),
COUNTRY_CHPLAN_ENT("CY", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("CZ", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("DE", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
@@ -90,7 +93,7 @@ static const struct rtw_regulatory all_chplan_map[] = {
COUNTRY_CHPLAN_ENT("FR", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("GA", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("GB", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
- COUNTRY_CHPLAN_ENT("GD", RTW_CHPLAN_FCC1_FCC7, RTW_REGD_FCC),
+ COUNTRY_CHPLAN_ENT("GD", RTW_CHPLAN_FCC2_FCC11, RTW_REGD_FCC),
COUNTRY_CHPLAN_ENT("GE", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("GF", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("GG", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
@@ -107,8 +110,8 @@ static const struct rtw_regulatory all_chplan_map[] = {
COUNTRY_CHPLAN_ENT("GU", RTW_CHPLAN_FCC2_FCC11, RTW_REGD_FCC),
COUNTRY_CHPLAN_ENT("GW", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("GY", RTW_CHPLAN_FCC1_NCC3, RTW_REGD_FCC),
- COUNTRY_CHPLAN_ENT("HK", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
- COUNTRY_CHPLAN_ENT("HM", RTW_CHPLAN_WORLD_ACMA1, RTW_REGD_ETSI),
+ COUNTRY_CHPLAN_ENT("HK", RTW_CHPLAN_WORLD_ETSI2, RTW_REGD_ETSI),
+ COUNTRY_CHPLAN_ENT("HM", RTW_CHPLAN_WORLD_ACMA1, RTW_REGD_ACMA),
COUNTRY_CHPLAN_ENT("HN", RTW_CHPLAN_WORLD_FCC5, RTW_REGD_FCC),
COUNTRY_CHPLAN_ENT("HR", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("HT", RTW_CHPLAN_FCC2_FCC11, RTW_REGD_FCC),
@@ -118,20 +121,22 @@ static const struct rtw_regulatory all_chplan_map[] = {
COUNTRY_CHPLAN_ENT("IL", RTW_CHPLAN_WORLD_ETSI6, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("IM", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("IN", RTW_CHPLAN_WORLD_ETSI7, RTW_REGD_ETSI),
+ COUNTRY_CHPLAN_ENT("IO", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("IQ", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("IR", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("IS", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("IT", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("JE", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
- COUNTRY_CHPLAN_ENT("JM", RTW_CHPLAN_WORLD_ETSI10, RTW_REGD_ETSI),
+ COUNTRY_CHPLAN_ENT("JM", RTW_CHPLAN_WORLD_FCC5, RTW_REGD_FCC),
COUNTRY_CHPLAN_ENT("JO", RTW_CHPLAN_WORLD_ETSI8, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("JP", RTW_CHPLAN_MKK1_MKK1, RTW_REGD_MKK),
COUNTRY_CHPLAN_ENT("KE", RTW_CHPLAN_WORLD_ETSI6, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("KG", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("KH", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("KI", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
+ COUNTRY_CHPLAN_ENT("KM", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("KN", RTW_CHPLAN_FCC2_FCC11, RTW_REGD_FCC),
- COUNTRY_CHPLAN_ENT("KR", RTW_CHPLAN_KCC1_KCC2, RTW_REGD_ETSI),
+ COUNTRY_CHPLAN_ENT("KR", RTW_CHPLAN_KCC1_KCC3, RTW_REGD_KCC),
COUNTRY_CHPLAN_ENT("KW", RTW_CHPLAN_WORLD_ETSI6, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("KY", RTW_CHPLAN_FCC2_FCC11, RTW_REGD_FCC),
COUNTRY_CHPLAN_ENT("KZ", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
@@ -157,7 +162,7 @@ static const struct rtw_regulatory all_chplan_map[] = {
COUNTRY_CHPLAN_ENT("ML", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("MM", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("MN", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
- COUNTRY_CHPLAN_ENT("MO", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
+ COUNTRY_CHPLAN_ENT("MO", RTW_CHPLAN_WORLD_ETSI2, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("MP", RTW_CHPLAN_FCC2_FCC11, RTW_REGD_FCC),
COUNTRY_CHPLAN_ENT("MQ", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("MR", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
@@ -167,26 +172,26 @@ static const struct rtw_regulatory all_chplan_map[] = {
COUNTRY_CHPLAN_ENT("MV", RTW_CHPLAN_WORLD_ETSI6, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("MW", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("MX", RTW_CHPLAN_FCC2_FCC7, RTW_REGD_FCC),
- COUNTRY_CHPLAN_ENT("MY", RTW_CHPLAN_WORLD_ETSI20, RTW_REGD_ETSI),
+ COUNTRY_CHPLAN_ENT("MY", RTW_CHPLAN_WORLD_ETSI15, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("MZ", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("NA", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("NC", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("NE", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
- COUNTRY_CHPLAN_ENT("NF", RTW_CHPLAN_WORLD_ACMA1, RTW_REGD_ETSI),
+ COUNTRY_CHPLAN_ENT("NF", RTW_CHPLAN_WORLD_ACMA1, RTW_REGD_ACMA),
COUNTRY_CHPLAN_ENT("NG", RTW_CHPLAN_WORLD_ETSI20, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("NI", RTW_CHPLAN_FCC2_FCC11, RTW_REGD_FCC),
COUNTRY_CHPLAN_ENT("NL", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("NO", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
- COUNTRY_CHPLAN_ENT("NP", RTW_CHPLAN_WORLD_ETSI6, RTW_REGD_ETSI),
+ COUNTRY_CHPLAN_ENT("NP", RTW_CHPLAN_WORLD_ETSI7, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("NR", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
- COUNTRY_CHPLAN_ENT("NU", RTW_CHPLAN_WORLD_ACMA1, RTW_REGD_ETSI),
- COUNTRY_CHPLAN_ENT("NZ", RTW_CHPLAN_WORLD_ACMA1, RTW_REGD_ETSI),
+ COUNTRY_CHPLAN_ENT("NU", RTW_CHPLAN_WORLD_ACMA1, RTW_REGD_ACMA),
+ COUNTRY_CHPLAN_ENT("NZ", RTW_CHPLAN_WORLD_ACMA1, RTW_REGD_ACMA),
COUNTRY_CHPLAN_ENT("OM", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("PA", RTW_CHPLAN_FCC2_FCC11, RTW_REGD_FCC),
COUNTRY_CHPLAN_ENT("PE", RTW_CHPLAN_FCC2_FCC11, RTW_REGD_FCC),
COUNTRY_CHPLAN_ENT("PF", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
- COUNTRY_CHPLAN_ENT("PG", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
- COUNTRY_CHPLAN_ENT("PH", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
+ COUNTRY_CHPLAN_ENT("PG", RTW_CHPLAN_WORLD_ETSI2, RTW_REGD_ETSI),
+ COUNTRY_CHPLAN_ENT("PH", RTW_CHPLAN_WORLD_ETSI2, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("PK", RTW_CHPLAN_WORLD_ETSI10, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("PL", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("PM", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
@@ -194,17 +199,17 @@ static const struct rtw_regulatory all_chplan_map[] = {
COUNTRY_CHPLAN_ENT("PT", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("PW", RTW_CHPLAN_FCC2_FCC11, RTW_REGD_FCC),
COUNTRY_CHPLAN_ENT("PY", RTW_CHPLAN_FCC2_FCC11, RTW_REGD_FCC),
- COUNTRY_CHPLAN_ENT("QA", RTW_CHPLAN_WORLD_ETSI10, RTW_REGD_ETSI),
+ COUNTRY_CHPLAN_ENT("QA", RTW_CHPLAN_WORLD_ETSI2, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("RE", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("RO", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("RS", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("RU", RTW_CHPLAN_WORLD_ETSI14, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("RW", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
- COUNTRY_CHPLAN_ENT("SA", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
+ COUNTRY_CHPLAN_ENT("SA", RTW_CHPLAN_WORLD_ETSI2, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("SB", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("SC", RTW_CHPLAN_FCC2_FCC11, RTW_REGD_FCC),
COUNTRY_CHPLAN_ENT("SE", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
- COUNTRY_CHPLAN_ENT("SG", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
+ COUNTRY_CHPLAN_ENT("SG", RTW_CHPLAN_WORLD_ETSI2, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("SH", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("SI", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("SJ", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
@@ -222,14 +227,15 @@ static const struct rtw_regulatory all_chplan_map[] = {
COUNTRY_CHPLAN_ENT("TD", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("TF", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("TG", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
- COUNTRY_CHPLAN_ENT("TH", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
+ COUNTRY_CHPLAN_ENT("TH", RTW_CHPLAN_WORLD_ETSI2, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("TJ", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
- COUNTRY_CHPLAN_ENT("TK", RTW_CHPLAN_WORLD_ACMA1, RTW_REGD_ETSI),
+ COUNTRY_CHPLAN_ENT("TK", RTW_CHPLAN_WORLD_ACMA1, RTW_REGD_ACMA),
COUNTRY_CHPLAN_ENT("TM", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("TN", RTW_CHPLAN_WORLD_ETSI6, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("TO", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("TR", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
- COUNTRY_CHPLAN_ENT("TT", RTW_CHPLAN_ETSI1_ETSI4, RTW_REGD_ETSI),
+ COUNTRY_CHPLAN_ENT("TT", RTW_CHPLAN_FCC2_FCC11, RTW_REGD_FCC),
+ COUNTRY_CHPLAN_ENT("TV", RTW_CHPLAN_ETSI1_NULL, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("TW", RTW_CHPLAN_FCC2_FCC11, RTW_REGD_FCC),
COUNTRY_CHPLAN_ENT("TZ", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("UA", RTW_CHPLAN_WORLD_ETSI3, RTW_REGD_ETSI),
@@ -240,14 +246,15 @@ static const struct rtw_regulatory all_chplan_map[] = {
COUNTRY_CHPLAN_ENT("VA", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("VC", RTW_CHPLAN_FCC2_FCC11, RTW_REGD_FCC),
COUNTRY_CHPLAN_ENT("VE", RTW_CHPLAN_WORLD_FCC3, RTW_REGD_FCC),
+ COUNTRY_CHPLAN_ENT("VG", RTW_CHPLAN_FCC2_FCC11, RTW_REGD_FCC),
COUNTRY_CHPLAN_ENT("VI", RTW_CHPLAN_FCC2_FCC11, RTW_REGD_FCC),
- COUNTRY_CHPLAN_ENT("VN", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
+ COUNTRY_CHPLAN_ENT("VN", RTW_CHPLAN_WORLD_ETSI2, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("VU", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("WF", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("WS", RTW_CHPLAN_FCC2_FCC11, RTW_REGD_FCC),
COUNTRY_CHPLAN_ENT("YE", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("YT", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
- COUNTRY_CHPLAN_ENT("ZA", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
+ COUNTRY_CHPLAN_ENT("ZA", RTW_CHPLAN_WORLD_ETSI2, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("ZM", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
COUNTRY_CHPLAN_ENT("ZW", RTW_CHPLAN_WORLD_ETSI1, RTW_REGD_ETSI),
};
diff --git a/drivers/net/wireless/realtek/rtw88/regd.h b/drivers/net/wireless/realtek/rtw88/regd.h
index 7784bb6d3ba7..5d4578331788 100644
--- a/drivers/net/wireless/realtek/rtw88/regd.h
+++ b/drivers/net/wireless/realtek/rtw88/regd.h
@@ -8,6 +8,7 @@
#define IEEE80211_CHAN_NO_IBSS IEEE80211_CHAN_NO_IR
#define IEEE80211_CHAN_PASSIVE_SCAN IEEE80211_CHAN_NO_IR
enum rtw_chplan_id {
+ RTW_CHPLAN_ETSI1_NULL = 0x21,
RTW_CHPLAN_WORLD_ETSI1 = 0x26,
RTW_CHPLAN_MKK1_MKK1 = 0x27,
RTW_CHPLAN_IC1_IC2 = 0x2B,
@@ -15,6 +16,7 @@ enum rtw_chplan_id {
RTW_CHPLAN_WORLD_FCC3 = 0x30,
RTW_CHPLAN_WORLD_FCC5 = 0x32,
RTW_CHPLAN_FCC1_FCC7 = 0x34,
+ RTW_CHPLAN_WORLD_ETSI2 = 0x35,
RTW_CHPLAN_WORLD_ETSI3 = 0x36,
RTW_CHPLAN_ETSI1_ETSI12 = 0x3D,
RTW_CHPLAN_KCC1_KCC2 = 0x3E,
@@ -24,10 +26,12 @@ enum rtw_chplan_id {
RTW_CHPLAN_WORLD_ETSI6 = 0x47,
RTW_CHPLAN_WORLD_ETSI7 = 0x48,
RTW_CHPLAN_WORLD_ETSI8 = 0x49,
+ RTW_CHPLAN_KCC1_KCC3 = 0x4B,
RTW_CHPLAN_WORLD_ETSI10 = 0x51,
RTW_CHPLAN_WORLD_ETSI14 = 0x59,
RTW_CHPLAN_FCC2_FCC7 = 0x61,
RTW_CHPLAN_FCC2_FCC1 = 0x62,
+ RTW_CHPLAN_WORLD_ETSI15 = 0x63,
RTW_CHPLAN_WORLD_FCC7 = 0x73,
RTW_CHPLAN_FCC2_FCC17 = 0x74,
RTW_CHPLAN_WORLD_ETSI20 = 0x75,
diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822c.c b/drivers/net/wireless/realtek/rtw88/rtw8822c.c
index b4f7242e5aa3..f6214ff20337 100644
--- a/drivers/net/wireless/realtek/rtw88/rtw8822c.c
+++ b/drivers/net/wireless/realtek/rtw88/rtw8822c.c
@@ -203,7 +203,7 @@ static void rtw8822c_dac_iq_offset(struct rtw_dev *rtwdev, u32 *vec, u32 *val)
*val = t;
}
-static u32 rtw8822c_get_path_base_addr(u8 path)
+static u32 rtw8822c_get_path_write_addr(u8 path)
{
u32 base_addr;
@@ -222,6 +222,25 @@ static u32 rtw8822c_get_path_base_addr(u8 path)
return base_addr;
}
+static u32 rtw8822c_get_path_read_addr(u8 path)
+{
+ u32 base_addr;
+
+ switch (path) {
+ case RF_PATH_A:
+ base_addr = 0x2800;
+ break;
+ case RF_PATH_B:
+ base_addr = 0x4500;
+ break;
+ default:
+ WARN_ON(1);
+ return -1;
+ }
+
+ return base_addr;
+}
+
static bool rtw8822c_dac_iq_check(struct rtw_dev *rtwdev, u32 value)
{
bool ret = true;
@@ -316,8 +335,6 @@ static void rtw8822c_dac_cal_rf_mode(struct rtw_dev *rtwdev,
u32 iv[DACK_SN_8822C], qv[DACK_SN_8822C];
u32 rf_a, rf_b;
- mdelay(10);
-
rf_a = rtw_read_rf(rtwdev, RF_PATH_A, 0x0, RFREG_MASK);
rf_b = rtw_read_rf(rtwdev, RF_PATH_B, 0x0, RFREG_MASK);
@@ -347,6 +364,7 @@ static void rtw8822c_dac_bb_setting(struct rtw_dev *rtwdev)
static void rtw8822c_dac_cal_adc(struct rtw_dev *rtwdev,
u8 path, u32 *adc_ic, u32 *adc_qc)
{
+ struct rtw_dm_info *dm_info = &rtwdev->dm_info;
u32 ic = 0, qc = 0, temp = 0;
u32 base_addr;
u32 path_sel;
@@ -354,7 +372,7 @@ static void rtw8822c_dac_cal_adc(struct rtw_dev *rtwdev,
rtw_dbg(rtwdev, RTW_DBG_RFK, "[DACK] ADCK path(%d)\n", path);
- base_addr = rtw8822c_get_path_base_addr(path);
+ base_addr = rtw8822c_get_path_write_addr(path);
switch (path) {
case RF_PATH_A:
path_sel = 0xa0000;
@@ -396,6 +414,7 @@ static void rtw8822c_dac_cal_adc(struct rtw_dev *rtwdev,
}
temp = (ic & 0x3ff) | ((qc & 0x3ff) << 10);
rtw_write32(rtwdev, base_addr + 0x68, temp);
+ dm_info->dack_adck[path] = temp;
rtw_dbg(rtwdev, RTW_DBG_RFK, "[DACK] ADCK 0x%08x=0x08%x\n",
base_addr + 0x68, temp);
/* check ADC DC offset */
@@ -422,10 +441,14 @@ static void rtw8822c_dac_cal_adc(struct rtw_dev *rtwdev,
static void rtw8822c_dac_cal_step1(struct rtw_dev *rtwdev, u8 path)
{
+ struct rtw_dm_info *dm_info = &rtwdev->dm_info;
u32 base_addr;
+ u32 read_addr;
- base_addr = rtw8822c_get_path_base_addr(path);
+ base_addr = rtw8822c_get_path_write_addr(path);
+ read_addr = rtw8822c_get_path_read_addr(path);
+ rtw_write32(rtwdev, base_addr + 0x68, dm_info->dack_adck[path]);
rtw_write32(rtwdev, base_addr + 0x0c, 0xdff00220);
if (path == RF_PATH_A) {
rtw_write32(rtwdev, base_addr + 0x60, 0xf0040ff0);
@@ -447,11 +470,13 @@ static void rtw8822c_dac_cal_step1(struct rtw_dev *rtwdev, u8 path)
rtw_write32(rtwdev, base_addr + 0xcc, 0x0a11fb89);
mdelay(1);
rtw_write32(rtwdev, base_addr + 0xb8, 0x62000000);
- mdelay(20);
rtw_write32(rtwdev, base_addr + 0xd4, 0x62000000);
mdelay(20);
+ if (!check_hw_ready(rtwdev, read_addr + 0x08, 0x7fff80, 0xffff) ||
+ !check_hw_ready(rtwdev, read_addr + 0x34, 0x7fff80, 0xffff))
+ rtw_err(rtwdev, "failed to wait for dack ready\n");
rtw_write32(rtwdev, base_addr + 0xb8, 0x02000000);
- mdelay(20);
+ mdelay(1);
rtw_write32(rtwdev, base_addr + 0xbc, 0x0008ff87);
rtw_write32(rtwdev, 0x9b4, 0xdb6db600);
rtw_write32(rtwdev, base_addr + 0x10, 0x02d508c5);
@@ -465,7 +490,7 @@ static void rtw8822c_dac_cal_step2(struct rtw_dev *rtwdev,
u32 base_addr;
u32 ic, qc, ic_in, qc_in;
- base_addr = rtw8822c_get_path_base_addr(path);
+ base_addr = rtw8822c_get_path_write_addr(path);
rtw_write32_mask(rtwdev, base_addr + 0xbc, 0xf0000000, 0x0);
rtw_write32_mask(rtwdev, base_addr + 0xc0, 0xf, 0x8);
rtw_write32_mask(rtwdev, base_addr + 0xd8, 0xf0000000, 0x0);
@@ -514,10 +539,12 @@ static void rtw8822c_dac_cal_step3(struct rtw_dev *rtwdev, u8 path,
u32 *i_out, u32 *q_out)
{
u32 base_addr;
+ u32 read_addr;
u32 ic, qc;
u32 temp;
- base_addr = rtw8822c_get_path_base_addr(path);
+ base_addr = rtw8822c_get_path_write_addr(path);
+ read_addr = rtw8822c_get_path_read_addr(path);
ic = *ic_in;
qc = *qc_in;
@@ -542,11 +569,13 @@ static void rtw8822c_dac_cal_step3(struct rtw_dev *rtwdev, u8 path,
rtw_write32(rtwdev, base_addr + 0xcc, 0x0a11fb89);
mdelay(1);
rtw_write32(rtwdev, base_addr + 0xb8, 0x62000000);
- mdelay(20);
rtw_write32(rtwdev, base_addr + 0xd4, 0x62000000);
mdelay(20);
+ if (!check_hw_ready(rtwdev, read_addr + 0x24, 0x07f80000, ic) ||
+ !check_hw_ready(rtwdev, read_addr + 0x50, 0x07f80000, qc))
+ rtw_err(rtwdev, "failed to write IQ vector to hardware\n");
rtw_write32(rtwdev, base_addr + 0xb8, 0x02000000);
- mdelay(20);
+ mdelay(1);
rtw_write32_mask(rtwdev, base_addr + 0xbc, 0xe, 0x3);
rtw_write32(rtwdev, 0x9b4, 0xdb6db600);
@@ -583,7 +612,7 @@ static void rtw8822c_dac_cal_step3(struct rtw_dev *rtwdev, u8 path,
static void rtw8822c_dac_cal_step4(struct rtw_dev *rtwdev, u8 path)
{
- u32 base_addr = rtw8822c_get_path_base_addr(path);
+ u32 base_addr = rtw8822c_get_path_write_addr(path);
rtw_write32(rtwdev, base_addr + 0x68, 0x0);
rtw_write32(rtwdev, base_addr + 0x10, 0x02d508c4);
@@ -591,6 +620,296 @@ static void rtw8822c_dac_cal_step4(struct rtw_dev *rtwdev, u8 path)
rtw_write32_mask(rtwdev, base_addr + 0x30, BIT(30), 0x1);
}
+static void rtw8822c_dac_cal_backup_vec(struct rtw_dev *rtwdev,
+ u8 path, u8 vec, u32 w_addr, u32 r_addr)
+{
+ struct rtw_dm_info *dm_info = &rtwdev->dm_info;
+ u16 val;
+ u32 i;
+
+ if (WARN_ON(vec >= 2))
+ return;
+
+ for (i = 0; i < DACK_MSBK_BACKUP_NUM; i++) {
+ rtw_write32_mask(rtwdev, w_addr, 0xf0000000, i);
+ val = (u16)rtw_read32_mask(rtwdev, r_addr, 0x7fc0000);
+ dm_info->dack_msbk[path][vec][i] = val;
+ }
+}
+
+static void rtw8822c_dac_cal_backup_path(struct rtw_dev *rtwdev, u8 path)
+{
+ u32 w_off = 0x1c;
+ u32 r_off = 0x2c;
+ u32 w_addr, r_addr;
+
+ if (WARN_ON(path >= 2))
+ return;
+
+ /* backup I vector */
+ w_addr = rtw8822c_get_path_write_addr(path) + 0xb0;
+ r_addr = rtw8822c_get_path_read_addr(path) + 0x10;
+ rtw8822c_dac_cal_backup_vec(rtwdev, path, 0, w_addr, r_addr);
+
+ /* backup Q vector */
+ w_addr = rtw8822c_get_path_write_addr(path) + 0xb0 + w_off;
+ r_addr = rtw8822c_get_path_read_addr(path) + 0x10 + r_off;
+ rtw8822c_dac_cal_backup_vec(rtwdev, path, 1, w_addr, r_addr);
+}
+
+static void rtw8822c_dac_cal_backup_dck(struct rtw_dev *rtwdev)
+{
+ struct rtw_dm_info *dm_info = &rtwdev->dm_info;
+ u8 val;
+
+ val = (u8)rtw_read32_mask(rtwdev, REG_DCKA_I_0, 0xf0000000);
+ dm_info->dack_dck[RF_PATH_A][0][0] = val;
+ val = (u8)rtw_read32_mask(rtwdev, REG_DCKA_I_1, 0xf);
+ dm_info->dack_dck[RF_PATH_A][0][1] = val;
+ val = (u8)rtw_read32_mask(rtwdev, REG_DCKA_Q_0, 0xf0000000);
+ dm_info->dack_dck[RF_PATH_A][1][0] = val;
+ val = (u8)rtw_read32_mask(rtwdev, REG_DCKA_Q_1, 0xf);
+ dm_info->dack_dck[RF_PATH_A][1][1] = val;
+
+ val = (u8)rtw_read32_mask(rtwdev, REG_DCKB_I_0, 0xf0000000);
+ dm_info->dack_dck[RF_PATH_B][0][0] = val;
+ val = (u8)rtw_read32_mask(rtwdev, REG_DCKB_I_1, 0xf);
+ dm_info->dack_dck[RF_PATH_B][1][0] = val;
+ val = (u8)rtw_read32_mask(rtwdev, REG_DCKB_Q_0, 0xf0000000);
+ dm_info->dack_dck[RF_PATH_B][0][1] = val;
+ val = (u8)rtw_read32_mask(rtwdev, REG_DCKB_Q_1, 0xf);
+ dm_info->dack_dck[RF_PATH_B][1][1] = val;
+}
+
+static void rtw8822c_dac_cal_backup(struct rtw_dev *rtwdev)
+{
+ u32 temp[3];
+
+ temp[0] = rtw_read32(rtwdev, 0x1860);
+ temp[1] = rtw_read32(rtwdev, 0x4160);
+ temp[2] = rtw_read32(rtwdev, 0x9b4);
+
+ /* set clock */
+ rtw_write32(rtwdev, 0x9b4, 0xdb66db00);
+
+ /* backup path-A I/Q */
+ rtw_write32_clr(rtwdev, 0x1830, BIT(30));
+ rtw_write32_mask(rtwdev, 0x1860, 0xfc000000, 0x3c);
+ rtw8822c_dac_cal_backup_path(rtwdev, RF_PATH_A);
+
+ /* backup path-B I/Q */
+ rtw_write32_clr(rtwdev, 0x4130, BIT(30));
+ rtw_write32_mask(rtwdev, 0x4160, 0xfc000000, 0x3c);
+ rtw8822c_dac_cal_backup_path(rtwdev, RF_PATH_B);
+
+ rtw8822c_dac_cal_backup_dck(rtwdev);
+ rtw_write32_set(rtwdev, 0x1830, BIT(30));
+ rtw_write32_set(rtwdev, 0x4130, BIT(30));
+
+ rtw_write32(rtwdev, 0x1860, temp[0]);
+ rtw_write32(rtwdev, 0x4160, temp[1]);
+ rtw_write32(rtwdev, 0x9b4, temp[2]);
+}
+
+static void rtw8822c_dac_cal_restore_dck(struct rtw_dev *rtwdev)
+{
+ struct rtw_dm_info *dm_info = &rtwdev->dm_info;
+ u8 val;
+
+ rtw_write32_set(rtwdev, REG_DCKA_I_0, BIT(19));
+ val = dm_info->dack_dck[RF_PATH_A][0][0];
+ rtw_write32_mask(rtwdev, REG_DCKA_I_0, 0xf0000000, val);
+ val = dm_info->dack_dck[RF_PATH_A][0][1];
+ rtw_write32_mask(rtwdev, REG_DCKA_I_1, 0xf, val);
+
+ rtw_write32_set(rtwdev, REG_DCKA_Q_0, BIT(19));
+ val = dm_info->dack_dck[RF_PATH_A][1][0];
+ rtw_write32_mask(rtwdev, REG_DCKA_Q_0, 0xf0000000, val);
+ val = dm_info->dack_dck[RF_PATH_A][1][1];
+ rtw_write32_mask(rtwdev, REG_DCKA_Q_1, 0xf, val);
+
+ rtw_write32_set(rtwdev, REG_DCKB_I_0, BIT(19));
+ val = dm_info->dack_dck[RF_PATH_B][0][0];
+ rtw_write32_mask(rtwdev, REG_DCKB_I_0, 0xf0000000, val);
+ val = dm_info->dack_dck[RF_PATH_B][0][1];
+ rtw_write32_mask(rtwdev, REG_DCKB_I_1, 0xf, val);
+
+ rtw_write32_set(rtwdev, REG_DCKB_Q_0, BIT(19));
+ val = dm_info->dack_dck[RF_PATH_B][1][0];
+ rtw_write32_mask(rtwdev, REG_DCKB_Q_0, 0xf0000000, val);
+ val = dm_info->dack_dck[RF_PATH_B][1][1];
+ rtw_write32_mask(rtwdev, REG_DCKB_Q_1, 0xf, val);
+}
+
+static void rtw8822c_dac_cal_restore_prepare(struct rtw_dev *rtwdev)
+{
+ rtw_write32(rtwdev, 0x9b4, 0xdb66db00);
+
+ rtw_write32_mask(rtwdev, 0x18b0, BIT(27), 0x0);
+ rtw_write32_mask(rtwdev, 0x18cc, BIT(27), 0x0);
+ rtw_write32_mask(rtwdev, 0x41b0, BIT(27), 0x0);
+ rtw_write32_mask(rtwdev, 0x41cc, BIT(27), 0x0);
+
+ rtw_write32_mask(rtwdev, 0x1830, BIT(30), 0x0);
+ rtw_write32_mask(rtwdev, 0x1860, 0xfc000000, 0x3c);
+ rtw_write32_mask(rtwdev, 0x18b4, BIT(0), 0x1);
+ rtw_write32_mask(rtwdev, 0x18d0, BIT(0), 0x1);
+
+ rtw_write32_mask(rtwdev, 0x4130, BIT(30), 0x0);
+ rtw_write32_mask(rtwdev, 0x4160, 0xfc000000, 0x3c);
+ rtw_write32_mask(rtwdev, 0x41b4, BIT(0), 0x1);
+ rtw_write32_mask(rtwdev, 0x41d0, BIT(0), 0x1);
+
+ rtw_write32_mask(rtwdev, 0x18b0, 0xf00, 0x0);
+ rtw_write32_mask(rtwdev, 0x18c0, BIT(14), 0x0);
+ rtw_write32_mask(rtwdev, 0x18cc, 0xf00, 0x0);
+ rtw_write32_mask(rtwdev, 0x18dc, BIT(14), 0x0);
+
+ rtw_write32_mask(rtwdev, 0x18b0, BIT(0), 0x0);
+ rtw_write32_mask(rtwdev, 0x18cc, BIT(0), 0x0);
+ rtw_write32_mask(rtwdev, 0x18b0, BIT(0), 0x1);
+ rtw_write32_mask(rtwdev, 0x18cc, BIT(0), 0x1);
+
+ rtw8822c_dac_cal_restore_dck(rtwdev);
+
+ rtw_write32_mask(rtwdev, 0x18c0, 0x38000, 0x7);
+ rtw_write32_mask(rtwdev, 0x18dc, 0x38000, 0x7);
+ rtw_write32_mask(rtwdev, 0x41c0, 0x38000, 0x7);
+ rtw_write32_mask(rtwdev, 0x41dc, 0x38000, 0x7);
+
+ rtw_write32_mask(rtwdev, 0x18b8, BIT(26) | BIT(25), 0x1);
+ rtw_write32_mask(rtwdev, 0x18d4, BIT(26) | BIT(25), 0x1);
+
+ rtw_write32_mask(rtwdev, 0x41b0, 0xf00, 0x0);
+ rtw_write32_mask(rtwdev, 0x41c0, BIT(14), 0x0);
+ rtw_write32_mask(rtwdev, 0x41cc, 0xf00, 0x0);
+ rtw_write32_mask(rtwdev, 0x41dc, BIT(14), 0x0);
+
+ rtw_write32_mask(rtwdev, 0x41b0, BIT(0), 0x0);
+ rtw_write32_mask(rtwdev, 0x41cc, BIT(0), 0x0);
+ rtw_write32_mask(rtwdev, 0x41b0, BIT(0), 0x1);
+ rtw_write32_mask(rtwdev, 0x41cc, BIT(0), 0x1);
+
+ rtw_write32_mask(rtwdev, 0x41b8, BIT(26) | BIT(25), 0x1);
+ rtw_write32_mask(rtwdev, 0x41d4, BIT(26) | BIT(25), 0x1);
+}
+
+static bool rtw8822c_dac_cal_restore_wait(struct rtw_dev *rtwdev,
+ u32 target_addr, u32 toggle_addr)
+{
+ u32 cnt = 0;
+
+ do {
+ rtw_write32_mask(rtwdev, toggle_addr, BIT(26) | BIT(25), 0x0);
+ rtw_write32_mask(rtwdev, toggle_addr, BIT(26) | BIT(25), 0x2);
+
+ if (rtw_read32_mask(rtwdev, target_addr, 0xf) == 0x6)
+ return true;
+
+ } while (cnt++ < 100);
+
+ return false;
+}
+
+static bool rtw8822c_dac_cal_restore_path(struct rtw_dev *rtwdev, u8 path)
+{
+ struct rtw_dm_info *dm_info = &rtwdev->dm_info;
+ u32 w_off = 0x1c;
+ u32 r_off = 0x2c;
+ u32 w_i, r_i, w_q, r_q;
+ u32 value;
+ u32 i;
+
+ w_i = rtw8822c_get_path_write_addr(path) + 0xb0;
+ r_i = rtw8822c_get_path_read_addr(path) + 0x08;
+ w_q = rtw8822c_get_path_write_addr(path) + 0xb0 + w_off;
+ r_q = rtw8822c_get_path_read_addr(path) + 0x08 + r_off;
+
+ if (!rtw8822c_dac_cal_restore_wait(rtwdev, r_i, w_i + 0x8))
+ return false;
+
+ for (i = 0; i < DACK_MSBK_BACKUP_NUM; i++) {
+ rtw_write32_mask(rtwdev, w_i + 0x4, BIT(2), 0x0);
+ value = dm_info->dack_msbk[path][0][i];
+ rtw_write32_mask(rtwdev, w_i + 0x4, 0xff8, value);
+ rtw_write32_mask(rtwdev, w_i, 0xf0000000, i);
+ rtw_write32_mask(rtwdev, w_i + 0x4, BIT(2), 0x1);
+ }
+
+ rtw_write32_mask(rtwdev, w_i + 0x4, BIT(2), 0x0);
+
+ if (!rtw8822c_dac_cal_restore_wait(rtwdev, r_q, w_q + 0x8))
+ return false;
+
+ for (i = 0; i < DACK_MSBK_BACKUP_NUM; i++) {
+ rtw_write32_mask(rtwdev, w_q + 0x4, BIT(2), 0x0);
+ value = dm_info->dack_msbk[path][1][i];
+ rtw_write32_mask(rtwdev, w_q + 0x4, 0xff8, value);
+ rtw_write32_mask(rtwdev, w_q, 0xf0000000, i);
+ rtw_write32_mask(rtwdev, w_q + 0x4, BIT(2), 0x1);
+ }
+ rtw_write32_mask(rtwdev, w_q + 0x4, BIT(2), 0x0);
+
+ rtw_write32_mask(rtwdev, w_i + 0x8, BIT(26) | BIT(25), 0x0);
+ rtw_write32_mask(rtwdev, w_q + 0x8, BIT(26) | BIT(25), 0x0);
+ rtw_write32_mask(rtwdev, w_i + 0x4, BIT(0), 0x0);
+ rtw_write32_mask(rtwdev, w_q + 0x4, BIT(0), 0x0);
+
+ return true;
+}
+
+static bool __rtw8822c_dac_cal_restore(struct rtw_dev *rtwdev)
+{
+ if (!rtw8822c_dac_cal_restore_path(rtwdev, RF_PATH_A))
+ return false;
+
+ if (!rtw8822c_dac_cal_restore_path(rtwdev, RF_PATH_B))
+ return false;
+
+ return true;
+}
+
+static bool rtw8822c_dac_cal_restore(struct rtw_dev *rtwdev)
+{
+ struct rtw_dm_info *dm_info = &rtwdev->dm_info;
+ u32 temp[3];
+
+ /* sample the first element of both paths' I/Q vectors */
+ if (dm_info->dack_msbk[RF_PATH_A][0][0] == 0 &&
+ dm_info->dack_msbk[RF_PATH_A][1][0] == 0 &&
+ dm_info->dack_msbk[RF_PATH_B][0][0] == 0 &&
+ dm_info->dack_msbk[RF_PATH_B][1][0] == 0)
+ return false;
+
+ temp[0] = rtw_read32(rtwdev, 0x1860);
+ temp[1] = rtw_read32(rtwdev, 0x4160);
+ temp[2] = rtw_read32(rtwdev, 0x9b4);
+
+ rtw8822c_dac_cal_restore_prepare(rtwdev);
+ if (!check_hw_ready(rtwdev, 0x2808, 0x7fff80, 0xffff) ||
+ !check_hw_ready(rtwdev, 0x2834, 0x7fff80, 0xffff) ||
+ !check_hw_ready(rtwdev, 0x4508, 0x7fff80, 0xffff) ||
+ !check_hw_ready(rtwdev, 0x4534, 0x7fff80, 0xffff))
+ return false;
+
+ if (!__rtw8822c_dac_cal_restore(rtwdev)) {
+ rtw_err(rtwdev, "failed to restore dack vectors\n");
+ return false;
+ }
+
+ rtw_write32_mask(rtwdev, 0x1830, BIT(30), 0x1);
+ rtw_write32_mask(rtwdev, 0x4130, BIT(30), 0x1);
+ rtw_write32(rtwdev, 0x1860, temp[0]);
+ rtw_write32(rtwdev, 0x4160, temp[1]);
+ rtw_write32_mask(rtwdev, 0x18b0, BIT(27), 0x1);
+ rtw_write32_mask(rtwdev, 0x18cc, BIT(27), 0x1);
+ rtw_write32_mask(rtwdev, 0x41b0, BIT(27), 0x1);
+ rtw_write32_mask(rtwdev, 0x41cc, BIT(27), 0x1);
+ rtw_write32(rtwdev, 0x9b4, temp[2]);
+
+ return true;
+}
+
static void rtw8822c_rf_dac_cal(struct rtw_dev *rtwdev)
{
struct rtw_backup_info backup_rf[DACK_RF_8822C * DACK_PATH_8822C];
@@ -600,6 +919,11 @@ static void rtw8822c_rf_dac_cal(struct rtw_dev *rtwdev)
u32 ic_a = 0x0, qc_a = 0x0, ic_b = 0x0, qc_b = 0x0;
u32 adc_ic_a = 0x0, adc_qc_a = 0x0, adc_ic_b = 0x0, adc_qc_b = 0x0;
+ if (rtw8822c_dac_cal_restore(rtwdev))
+ return;
+
+ /* not able to restore, run the full calibration */
+
rtw8822c_dac_backup_reg(rtwdev, backup, backup_rf);
rtw8822c_dac_bb_setting(rtwdev);
@@ -644,6 +968,9 @@ static void rtw8822c_rf_dac_cal(struct rtw_dev *rtwdev)
rtw8822c_dac_restore_reg(rtwdev, backup, backup_rf);
+ /* back up the results for a later restore, saving a lot of time */
+ rtw8822c_dac_cal_backup(rtwdev);
+
rtw_dbg(rtwdev, RTW_DBG_RFK, "[DACK] path A: ic=0x%x, qc=0x%x\n", ic_a, qc_a);
rtw_dbg(rtwdev, RTW_DBG_RFK, "[DACK] path B: ic=0x%x, qc=0x%x\n", ic_b, qc_b);
rtw_dbg(rtwdev, RTW_DBG_RFK, "[DACK] path A: i=0x%x, q=0x%x\n", i_a, q_a);
@@ -1015,8 +1342,28 @@ static void rtw8822c_set_channel_bb(struct rtw_dev *rtwdev, u8 channel, u8 bw,
rtw_write32_clr(rtwdev, REG_CCKTXONLY, BIT_BB_CCK_CHECK_EN);
rtw_write32_mask(rtwdev, REG_CCAMSK, 0x3F000000, 0xF);
- rtw_write32_mask(rtwdev, REG_RXAGCCTL0, 0x1f0, 0x0);
- rtw_write32_mask(rtwdev, REG_RXAGCCTL, 0x1f0, 0x0);
+ switch (bw) {
+ case RTW_CHANNEL_WIDTH_20:
+ rtw_write32_mask(rtwdev, REG_RXAGCCTL0, BITS_RXAGC_CCK,
+ 0x5);
+ rtw_write32_mask(rtwdev, REG_RXAGCCTL, BITS_RXAGC_CCK,
+ 0x5);
+ rtw_write32_mask(rtwdev, REG_RXAGCCTL0, BITS_RXAGC_OFDM,
+ 0x6);
+ rtw_write32_mask(rtwdev, REG_RXAGCCTL, BITS_RXAGC_OFDM,
+ 0x6);
+ break;
+ case RTW_CHANNEL_WIDTH_40:
+ rtw_write32_mask(rtwdev, REG_RXAGCCTL0, BITS_RXAGC_CCK,
+ 0x4);
+ rtw_write32_mask(rtwdev, REG_RXAGCCTL, BITS_RXAGC_CCK,
+ 0x4);
+ rtw_write32_mask(rtwdev, REG_RXAGCCTL0, BITS_RXAGC_OFDM,
+ 0x0);
+ rtw_write32_mask(rtwdev, REG_RXAGCCTL, BITS_RXAGC_OFDM,
+ 0x0);
+ break;
+ }
if (channel == 13 || channel == 14)
rtw_write32_mask(rtwdev, REG_SCOTRK, 0xfff, 0x969);
else if (channel == 11 || channel == 12)
@@ -1061,14 +1408,20 @@ static void rtw8822c_set_channel_bb(struct rtw_dev *rtwdev, u8 channel, u8 bw,
rtw_write32_mask(rtwdev, REG_CCAMSK, 0x3F000000, 0x22);
rtw_write32_mask(rtwdev, REG_TXDFIR0, 0x70, 0x3);
if (channel >= 36 && channel <= 64) {
- rtw_write32_mask(rtwdev, REG_RXAGCCTL0, 0x1f0, 0x1);
- rtw_write32_mask(rtwdev, REG_RXAGCCTL, 0x1f0, 0x1);
+ rtw_write32_mask(rtwdev, REG_RXAGCCTL0, BITS_RXAGC_OFDM,
+ 0x1);
+ rtw_write32_mask(rtwdev, REG_RXAGCCTL, BITS_RXAGC_OFDM,
+ 0x1);
} else if (channel >= 100 && channel <= 144) {
- rtw_write32_mask(rtwdev, REG_RXAGCCTL0, 0x1f0, 0x2);
- rtw_write32_mask(rtwdev, REG_RXAGCCTL, 0x1f0, 0x2);
+ rtw_write32_mask(rtwdev, REG_RXAGCCTL0, BITS_RXAGC_OFDM,
+ 0x2);
+ rtw_write32_mask(rtwdev, REG_RXAGCCTL, BITS_RXAGC_OFDM,
+ 0x2);
} else if (channel >= 149) {
- rtw_write32_mask(rtwdev, REG_RXAGCCTL0, 0x1f0, 0x3);
- rtw_write32_mask(rtwdev, REG_RXAGCCTL, 0x1f0, 0x3);
+ rtw_write32_mask(rtwdev, REG_RXAGCCTL0, BITS_RXAGC_OFDM,
+ 0x3);
+ rtw_write32_mask(rtwdev, REG_RXAGCCTL, BITS_RXAGC_OFDM,
+ 0x3);
}
if (channel >= 36 && channel <= 51)
@@ -1092,6 +1445,9 @@ static void rtw8822c_set_channel_bb(struct rtw_dev *rtwdev, u8 channel, u8 bw,
rtw_write32_mask(rtwdev, REG_TXBWCTL, 0xffc0, 0x0);
rtw_write32_mask(rtwdev, REG_TXCLK, 0x700, 0x7);
rtw_write32_mask(rtwdev, REG_TXCLK, 0x700000, 0x6);
+ rtw_write32_mask(rtwdev, REG_CCK_SOURCE, BIT_NBI_EN, 0x0);
+ rtw_write32_mask(rtwdev, REG_SBD, BITS_SUBTUNE, 0x1);
+ rtw_write32_mask(rtwdev, REG_PT_CHSMO, BIT_PT_OPT, 0x0);
break;
case RTW_CHANNEL_WIDTH_40:
rtw_write32_mask(rtwdev, REG_CCKSB, BIT(4),
@@ -1100,12 +1456,17 @@ static void rtw8822c_set_channel_bb(struct rtw_dev *rtwdev, u8 channel, u8 bw,
rtw_write32_mask(rtwdev, REG_TXBWCTL, 0xc0, 0x0);
rtw_write32_mask(rtwdev, REG_TXBWCTL, 0xff00,
(primary_ch_idx | (primary_ch_idx << 4)));
+ rtw_write32_mask(rtwdev, REG_CCK_SOURCE, BIT_NBI_EN, 0x1);
+ rtw_write32_mask(rtwdev, REG_SBD, BITS_SUBTUNE, 0x1);
+ rtw_write32_mask(rtwdev, REG_PT_CHSMO, BIT_PT_OPT, 0x1);
break;
case RTW_CHANNEL_WIDTH_80:
rtw_write32_mask(rtwdev, REG_TXBWCTL, 0xf, 0xa);
rtw_write32_mask(rtwdev, REG_TXBWCTL, 0xc0, 0x0);
rtw_write32_mask(rtwdev, REG_TXBWCTL, 0xff00,
(primary_ch_idx | (primary_ch_idx << 4)));
+ rtw_write32_mask(rtwdev, REG_SBD, BITS_SUBTUNE, 0x6);
+ rtw_write32_mask(rtwdev, REG_PT_CHSMO, BIT_PT_OPT, 0x1);
break;
case RTW_CHANNEL_WIDTH_5:
rtw_write32_mask(rtwdev, REG_DFIRBW, 0x3FF0, 0x2AB);
@@ -1113,6 +1474,9 @@ static void rtw8822c_set_channel_bb(struct rtw_dev *rtwdev, u8 channel, u8 bw,
rtw_write32_mask(rtwdev, REG_TXBWCTL, 0xffc0, 0x1);
rtw_write32_mask(rtwdev, REG_TXCLK, 0x700, 0x4);
rtw_write32_mask(rtwdev, REG_TXCLK, 0x700000, 0x4);
+ rtw_write32_mask(rtwdev, REG_CCK_SOURCE, BIT_NBI_EN, 0x0);
+ rtw_write32_mask(rtwdev, REG_SBD, BITS_SUBTUNE, 0x1);
+ rtw_write32_mask(rtwdev, REG_PT_CHSMO, BIT_PT_OPT, 0x0);
break;
case RTW_CHANNEL_WIDTH_10:
rtw_write32_mask(rtwdev, REG_DFIRBW, 0x3FF0, 0x2AB);
@@ -1120,6 +1484,9 @@ static void rtw8822c_set_channel_bb(struct rtw_dev *rtwdev, u8 channel, u8 bw,
rtw_write32_mask(rtwdev, REG_TXBWCTL, 0xffc0, 0x2);
rtw_write32_mask(rtwdev, REG_TXCLK, 0x700, 0x6);
rtw_write32_mask(rtwdev, REG_TXCLK, 0x700000, 0x5);
+ rtw_write32_mask(rtwdev, REG_CCK_SOURCE, BIT_NBI_EN, 0x0);
+ rtw_write32_mask(rtwdev, REG_SBD, BITS_SUBTUNE, 0x1);
+ rtw_write32_mask(rtwdev, REG_PT_CHSMO, BIT_PT_OPT, 0x0);
break;
}
}
@@ -1451,13 +1818,30 @@ static void rtw8822c_false_alarm_statistics(struct rtw_dev *rtwdev)
u32 cck_enable;
u32 cck_fa_cnt;
u32 ofdm_fa_cnt;
- u32 ofdm_tx_counter;
+ u32 ofdm_fa_cnt1, ofdm_fa_cnt2, ofdm_fa_cnt3, ofdm_fa_cnt4, ofdm_fa_cnt5;
+ u16 parity_fail, rate_illegal, crc8_fail, mcs_fail, sb_search_fail,
+ fast_fsync, crc8_fail_vhta, mcs_fail_vht;
cck_enable = rtw_read32(rtwdev, REG_ENCCK) & BIT_CCK_BLK_EN;
cck_fa_cnt = rtw_read16(rtwdev, REG_CCK_FACNT);
- ofdm_fa_cnt = rtw_read16(rtwdev, REG_OFDM_FACNT);
- ofdm_tx_counter = rtw_read16(rtwdev, REG_OFDM_TXCNT);
- ofdm_fa_cnt -= ofdm_tx_counter;
+
+ ofdm_fa_cnt1 = rtw_read32(rtwdev, REG_OFDM_FACNT1);
+ ofdm_fa_cnt2 = rtw_read32(rtwdev, REG_OFDM_FACNT2);
+ ofdm_fa_cnt3 = rtw_read32(rtwdev, REG_OFDM_FACNT3);
+ ofdm_fa_cnt4 = rtw_read32(rtwdev, REG_OFDM_FACNT4);
+ ofdm_fa_cnt5 = rtw_read32(rtwdev, REG_OFDM_FACNT5);
+
+ parity_fail = FIELD_GET(GENMASK(31, 16), ofdm_fa_cnt1);
+ rate_illegal = FIELD_GET(GENMASK(15, 0), ofdm_fa_cnt2);
+ crc8_fail = FIELD_GET(GENMASK(31, 16), ofdm_fa_cnt2);
+ crc8_fail_vhta = FIELD_GET(GENMASK(15, 0), ofdm_fa_cnt3);
+ mcs_fail = FIELD_GET(GENMASK(15, 0), ofdm_fa_cnt4);
+ mcs_fail_vht = FIELD_GET(GENMASK(31, 16), ofdm_fa_cnt4);
+ fast_fsync = FIELD_GET(GENMASK(15, 0), ofdm_fa_cnt5);
+ sb_search_fail = FIELD_GET(GENMASK(31, 16), ofdm_fa_cnt5);
+
+ ofdm_fa_cnt = parity_fail + rate_illegal + crc8_fail + crc8_fail_vhta +
+ mcs_fail + mcs_fail_vht + fast_fsync + sb_search_fail;
dm_info->cck_fa_cnt = cck_fa_cnt;
dm_info->ofdm_fa_cnt = ofdm_fa_cnt;
@@ -1468,8 +1852,12 @@ static void rtw8822c_false_alarm_statistics(struct rtw_dev *rtwdev)
rtw_write32_mask(rtwdev, REG_CCANRX, BIT_CCK_FA_RST, 2);
rtw_write32_mask(rtwdev, REG_CCANRX, BIT_OFDM_FA_RST, 0);
rtw_write32_mask(rtwdev, REG_CCANRX, BIT_OFDM_FA_RST, 2);
+
+ /* disable rx clk gating to reset counters */
+ rtw_write32_clr(rtwdev, REG_RX_BREAK, BIT_COM_RX_GCK_EN);
rtw_write32_set(rtwdev, REG_CNT_CTRL, BIT_ALL_CNT_RST);
rtw_write32_clr(rtwdev, REG_CNT_CTRL, BIT_ALL_CNT_RST);
+ rtw_write32_set(rtwdev, REG_RX_BREAK, BIT_COM_RX_GCK_EN);
}
static void rtw8822c_do_iqk(struct rtw_dev *rtwdev)
diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822c.h b/drivers/net/wireless/realtek/rtw88/rtw8822c.h
index d3bd9850baa0..5ee1de41504d 100644
--- a/drivers/net/wireless/realtek/rtw88/rtw8822c.h
+++ b/drivers/net/wireless/realtek/rtw88/rtw8822c.h
@@ -133,6 +133,8 @@ struct rtw8822c_efuse {
#define REG_DYMPRITH 0x86c
#define REG_DYMENTH0 0x870
#define REG_DYMENTH 0x874
+#define REG_SBD 0x88c
+#define BITS_SUBTUNE GENMASK(15, 12)
#define REG_DYMTHMIN 0x8a4
#define REG_TXBWCTL 0x9b0
#define REG_TXCLK 0x9b4
@@ -140,12 +142,20 @@ struct rtw8822c_efuse {
#define REG_MRCM 0xc38
#define REG_AGCSWSH 0xc44
#define REG_ANTWTPD 0xc54
+#define REG_PT_CHSMO 0xcbc
+#define BIT_PT_OPT BIT(21)
#define REG_ORITXCODE 0x1800
#define REG_3WIRE 0x180c
#define BIT_3WIRE_TX_EN BIT(0)
#define BIT_3WIRE_RX_EN BIT(1)
#define BIT_3WIRE_PI_ON BIT(28)
#define REG_RXAGCCTL0 0x18ac
+#define BITS_RXAGC_CCK GENMASK(15, 12)
+#define BITS_RXAGC_OFDM GENMASK(8, 4)
+#define REG_DCKA_I_0 0x18bc
+#define REG_DCKA_I_1 0x18c0
+#define REG_DCKA_Q_0 0x18d8
+#define REG_DCKA_Q_1 0x18dc
#define REG_CCKSB 0x1a00
#define REG_RXCCKSEL 0x1a04
#define REG_BGCTRL 0x1a14
@@ -164,11 +174,15 @@ struct rtw8822c_efuse {
#define REG_TXF5 0x1aa0
#define REG_TXF6 0x1aac
#define REG_TXF7 0x1ab0
+#define REG_CCK_SOURCE 0x1abc
+#define BIT_NBI_EN BIT(30)
#define REG_TXANT 0x1c28
#define REG_ENCCK 0x1c3c
#define BIT_CCK_BLK_EN BIT(1)
#define BIT_CCK_OFDM_BLK_EN (BIT(0) | BIT(1))
#define REG_CCAMSK 0x1c80
+#define REG_RX_BREAK 0x1d2c
+#define BIT_COM_RX_GCK_EN BIT(31)
#define REG_RXFNCTL 0x1d30
#define REG_RXIGI 0x1d70
#define REG_ENFN 0x1e24
@@ -178,9 +192,18 @@ struct rtw8822c_efuse {
#define REG_CNT_CTRL 0x1eb4
#define BIT_ALL_CNT_RST BIT(25)
#define REG_OFDM_FACNT 0x2d00
+#define REG_OFDM_FACNT1 0x2d04
+#define REG_OFDM_FACNT2 0x2d08
+#define REG_OFDM_FACNT3 0x2d0c
+#define REG_OFDM_FACNT4 0x2d10
+#define REG_OFDM_FACNT5 0x2d20
#define REG_OFDM_TXCNT 0x2de0
#define REG_ORITXCODE2 0x4100
#define REG_3WIRE2 0x410c
#define REG_RXAGCCTL 0x41ac
+#define REG_DCKB_I_0 0x41bc
+#define REG_DCKB_I_1 0x41c0
+#define REG_DCKB_Q_0 0x41d8
+#define REG_DCKB_Q_1 0x41dc
#endif
diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822c_table.c b/drivers/net/wireless/realtek/rtw88/rtw8822c_table.c
index 49044f510c6c..18e609a69829 100644
--- a/drivers/net/wireless/realtek/rtw88/rtw8822c_table.c
+++ b/drivers/net/wireless/realtek/rtw88/rtw8822c_table.c
@@ -9489,55 +9489,55 @@ static const u8 rtw8822c_txpwr_lmt_type0[] = {
0, 0, 1, 3, 13, 127, 2, 0, 1, 3, 13, 127,
0, 0, 1, 3, 14, 127, 2, 0, 1, 3, 14, 127,
0, 1, 0, 1, 36, 74, 2, 1, 0, 1, 36, 62,
- 0, 1, 0, 1, 40, 80, 2, 1, 0, 1, 40, 62,
- 0, 1, 0, 1, 44, 80, 2, 1, 0, 1, 44, 62,
- 0, 1, 0, 1, 48, 80, 2, 1, 0, 1, 48, 62,
- 0, 1, 0, 1, 52, 80, 2, 1, 0, 1, 52, 62,
- 0, 1, 0, 1, 56, 80, 2, 1, 0, 1, 56, 62,
- 0, 1, 0, 1, 60, 80, 2, 1, 0, 1, 60, 62,
+ 0, 1, 0, 1, 40, 76, 2, 1, 0, 1, 40, 62,
+ 0, 1, 0, 1, 44, 76, 2, 1, 0, 1, 44, 62,
+ 0, 1, 0, 1, 48, 76, 2, 1, 0, 1, 48, 62,
+ 0, 1, 0, 1, 52, 76, 2, 1, 0, 1, 52, 62,
+ 0, 1, 0, 1, 56, 76, 2, 1, 0, 1, 56, 62,
+ 0, 1, 0, 1, 60, 76, 2, 1, 0, 1, 60, 62,
0, 1, 0, 1, 64, 74, 2, 1, 0, 1, 64, 62,
0, 1, 0, 1, 100, 72, 2, 1, 0, 1, 100, 62,
- 0, 1, 0, 1, 104, 80, 2, 1, 0, 1, 104, 62,
- 0, 1, 0, 1, 108, 80, 2, 1, 0, 1, 108, 62,
- 0, 1, 0, 1, 112, 80, 2, 1, 0, 1, 112, 62,
- 0, 1, 0, 1, 116, 80, 2, 1, 0, 1, 116, 62,
- 0, 1, 0, 1, 120, 80, 2, 1, 0, 1, 120, 62,
- 0, 1, 0, 1, 124, 80, 2, 1, 0, 1, 124, 62,
- 0, 1, 0, 1, 128, 80, 2, 1, 0, 1, 128, 62,
- 0, 1, 0, 1, 132, 80, 2, 1, 0, 1, 132, 62,
- 0, 1, 0, 1, 136, 80, 2, 1, 0, 1, 136, 62,
+ 0, 1, 0, 1, 104, 76, 2, 1, 0, 1, 104, 62,
+ 0, 1, 0, 1, 108, 76, 2, 1, 0, 1, 108, 62,
+ 0, 1, 0, 1, 112, 76, 2, 1, 0, 1, 112, 62,
+ 0, 1, 0, 1, 116, 76, 2, 1, 0, 1, 116, 62,
+ 0, 1, 0, 1, 120, 76, 2, 1, 0, 1, 120, 62,
+ 0, 1, 0, 1, 124, 76, 2, 1, 0, 1, 124, 62,
+ 0, 1, 0, 1, 128, 76, 2, 1, 0, 1, 128, 62,
+ 0, 1, 0, 1, 132, 76, 2, 1, 0, 1, 132, 62,
+ 0, 1, 0, 1, 136, 76, 2, 1, 0, 1, 136, 62,
0, 1, 0, 1, 140, 72, 2, 1, 0, 1, 140, 62,
- 0, 1, 0, 1, 144, 80, 2, 1, 0, 1, 144, 127,
- 0, 1, 0, 1, 149, 80, 2, 1, 0, 1, 149, 127,
- 0, 1, 0, 1, 153, 80, 2, 1, 0, 1, 153, 127,
- 0, 1, 0, 1, 157, 80, 2, 1, 0, 1, 157, 127,
- 0, 1, 0, 1, 161, 80, 2, 1, 0, 1, 161, 127,
- 0, 1, 0, 1, 165, 80, 2, 1, 0, 1, 165, 127,
+ 0, 1, 0, 1, 144, 76, 2, 1, 0, 1, 144, 127,
+ 0, 1, 0, 1, 149, 76, 2, 1, 0, 1, 149, -128,
+ 0, 1, 0, 1, 153, 76, 2, 1, 0, 1, 153, -128,
+ 0, 1, 0, 1, 157, 76, 2, 1, 0, 1, 157, -128,
+ 0, 1, 0, 1, 161, 76, 2, 1, 0, 1, 161, -128,
+ 0, 1, 0, 1, 165, 76, 2, 1, 0, 1, 165, -128,
0, 1, 0, 2, 36, 72, 2, 1, 0, 2, 36, 62,
- 0, 1, 0, 2, 40, 80, 2, 1, 0, 2, 40, 62,
- 0, 1, 0, 2, 44, 80, 2, 1, 0, 2, 44, 62,
- 0, 1, 0, 2, 48, 80, 2, 1, 0, 2, 48, 62,
- 0, 1, 0, 2, 52, 80, 2, 1, 0, 2, 52, 62,
- 0, 1, 0, 2, 56, 80, 2, 1, 0, 2, 56, 62,
- 0, 1, 0, 2, 60, 80, 2, 1, 0, 2, 60, 62,
+ 0, 1, 0, 2, 40, 76, 2, 1, 0, 2, 40, 62,
+ 0, 1, 0, 2, 44, 76, 2, 1, 0, 2, 44, 62,
+ 0, 1, 0, 2, 48, 76, 2, 1, 0, 2, 48, 62,
+ 0, 1, 0, 2, 52, 76, 2, 1, 0, 2, 52, 62,
+ 0, 1, 0, 2, 56, 76, 2, 1, 0, 2, 56, 62,
+ 0, 1, 0, 2, 60, 76, 2, 1, 0, 2, 60, 62,
0, 1, 0, 2, 64, 74, 2, 1, 0, 2, 64, 62,
0, 1, 0, 2, 100, 70, 2, 1, 0, 2, 100, 62,
- 0, 1, 0, 2, 104, 80, 2, 1, 0, 2, 104, 62,
- 0, 1, 0, 2, 108, 80, 2, 1, 0, 2, 108, 62,
- 0, 1, 0, 2, 112, 80, 2, 1, 0, 2, 112, 62,
- 0, 1, 0, 2, 116, 80, 2, 1, 0, 2, 116, 62,
- 0, 1, 0, 2, 120, 80, 2, 1, 0, 2, 120, 62,
- 0, 1, 0, 2, 124, 80, 2, 1, 0, 2, 124, 62,
- 0, 1, 0, 2, 128, 80, 2, 1, 0, 2, 128, 62,
- 0, 1, 0, 2, 132, 80, 2, 1, 0, 2, 132, 62,
- 0, 1, 0, 2, 136, 80, 2, 1, 0, 2, 136, 62,
+ 0, 1, 0, 2, 104, 76, 2, 1, 0, 2, 104, 62,
+ 0, 1, 0, 2, 108, 76, 2, 1, 0, 2, 108, 62,
+ 0, 1, 0, 2, 112, 76, 2, 1, 0, 2, 112, 62,
+ 0, 1, 0, 2, 116, 76, 2, 1, 0, 2, 116, 62,
+ 0, 1, 0, 2, 120, 76, 2, 1, 0, 2, 120, 62,
+ 0, 1, 0, 2, 124, 76, 2, 1, 0, 2, 124, 62,
+ 0, 1, 0, 2, 128, 76, 2, 1, 0, 2, 128, 62,
+ 0, 1, 0, 2, 132, 76, 2, 1, 0, 2, 132, 62,
+ 0, 1, 0, 2, 136, 76, 2, 1, 0, 2, 136, 62,
0, 1, 0, 2, 140, 70, 2, 1, 0, 2, 140, 62,
- 0, 1, 0, 2, 144, 80, 2, 1, 0, 2, 144, 127,
- 0, 1, 0, 2, 149, 80, 2, 1, 0, 2, 149, 127,
- 0, 1, 0, 2, 153, 80, 2, 1, 0, 2, 153, 127,
- 0, 1, 0, 2, 157, 80, 2, 1, 0, 2, 157, 127,
- 0, 1, 0, 2, 161, 80, 2, 1, 0, 2, 161, 127,
- 0, 1, 0, 2, 165, 80, 2, 1, 0, 2, 165, 127,
+ 0, 1, 0, 2, 144, 76, 2, 1, 0, 2, 144, 127,
+ 0, 1, 0, 2, 149, 76, 2, 1, 0, 2, 149, -128,
+ 0, 1, 0, 2, 153, 76, 2, 1, 0, 2, 153, -128,
+ 0, 1, 0, 2, 157, 76, 2, 1, 0, 2, 157, -128,
+ 0, 1, 0, 2, 161, 76, 2, 1, 0, 2, 161, -128,
+ 0, 1, 0, 2, 165, 76, 2, 1, 0, 2, 165, -128,
0, 1, 0, 3, 36, 68, 2, 1, 0, 3, 36, 38,
0, 1, 0, 3, 40, 68, 2, 1, 0, 3, 40, 38,
0, 1, 0, 3, 44, 68, 2, 1, 0, 3, 44, 38,
@@ -9558,23 +9558,23 @@ static const u8 rtw8822c_txpwr_lmt_type0[] = {
0, 1, 0, 3, 136, 68, 2, 1, 0, 3, 136, 38,
0, 1, 0, 3, 140, 60, 2, 1, 0, 3, 140, 38,
0, 1, 0, 3, 144, 68, 2, 1, 0, 3, 144, 127,
- 0, 1, 0, 3, 149, 80, 2, 1, 0, 3, 149, 127,
- 0, 1, 0, 3, 153, 80, 2, 1, 0, 3, 153, 127,
- 0, 1, 0, 3, 157, 80, 2, 1, 0, 3, 157, 127,
- 0, 1, 0, 3, 161, 80, 2, 1, 0, 3, 161, 127,
- 0, 1, 0, 3, 165, 80, 2, 1, 0, 3, 165, 127,
+ 0, 1, 0, 3, 149, 76, 2, 1, 0, 3, 149, -128,
+ 0, 1, 0, 3, 153, 76, 2, 1, 0, 3, 153, -128,
+ 0, 1, 0, 3, 157, 76, 2, 1, 0, 3, 157, -128,
+ 0, 1, 0, 3, 161, 76, 2, 1, 0, 3, 161, -128,
+ 0, 1, 0, 3, 165, 76, 2, 1, 0, 3, 165, -128,
0, 1, 1, 2, 38, 66, 2, 1, 1, 2, 38, 64,
0, 1, 1, 2, 46, 72, 2, 1, 1, 2, 46, 64,
0, 1, 1, 2, 54, 72, 2, 1, 1, 2, 54, 64,
0, 1, 1, 2, 62, 64, 2, 1, 1, 2, 62, 64,
0, 1, 1, 2, 102, 58, 2, 1, 1, 2, 102, 64,
- 0, 1, 1, 2, 110, 74, 2, 1, 1, 2, 110, 64,
- 0, 1, 1, 2, 118, 74, 2, 1, 1, 2, 118, 64,
- 0, 1, 1, 2, 126, 74, 2, 1, 1, 2, 126, 64,
- 0, 1, 1, 2, 134, 74, 2, 1, 1, 2, 134, 64,
- 0, 1, 1, 2, 142, 74, 2, 1, 1, 2, 142, 127,
- 0, 1, 1, 2, 151, 74, 2, 1, 1, 2, 151, 127,
- 0, 1, 1, 2, 159, 74, 2, 1, 1, 2, 159, 127,
+ 0, 1, 1, 2, 110, 72, 2, 1, 1, 2, 110, 64,
+ 0, 1, 1, 2, 118, 72, 2, 1, 1, 2, 118, 64,
+ 0, 1, 1, 2, 126, 72, 2, 1, 1, 2, 126, 64,
+ 0, 1, 1, 2, 134, 72, 2, 1, 1, 2, 134, 64,
+ 0, 1, 1, 2, 142, 72, 2, 1, 1, 2, 142, 127,
+ 0, 1, 1, 2, 151, 72, 2, 1, 1, 2, 151, -128,
+ 0, 1, 1, 2, 159, 72, 2, 1, 1, 2, 159, -128,
0, 1, 1, 3, 38, 60, 2, 1, 1, 3, 38, 40,
0, 1, 1, 3, 46, 68, 2, 1, 1, 3, 46, 40,
0, 1, 1, 3, 54, 68, 2, 1, 1, 3, 54, 40,
@@ -9585,20 +9585,703 @@ static const u8 rtw8822c_txpwr_lmt_type0[] = {
0, 1, 1, 3, 126, 68, 2, 1, 1, 3, 126, 40,
0, 1, 1, 3, 134, 68, 2, 1, 1, 3, 134, 40,
0, 1, 1, 3, 142, 68, 2, 1, 1, 3, 142, 127,
- 0, 1, 1, 3, 151, 74, 2, 1, 1, 3, 151, 127,
- 0, 1, 1, 3, 159, 74, 2, 1, 1, 3, 159, 127,
+ 0, 1, 1, 3, 151, 72, 2, 1, 1, 3, 151, -128,
+ 0, 1, 1, 3, 159, 72, 2, 1, 1, 3, 159, -128,
0, 1, 2, 4, 42, 64, 2, 1, 2, 4, 42, 64,
0, 1, 2, 4, 58, 62, 2, 1, 2, 4, 58, 64,
0, 1, 2, 4, 106, 58, 2, 1, 2, 4, 106, 64,
0, 1, 2, 4, 122, 72, 2, 1, 2, 4, 122, 64,
0, 1, 2, 4, 138, 72, 2, 1, 2, 4, 138, 127,
- 0, 1, 2, 4, 155, 72, 2, 1, 2, 4, 155, 127,
+ 0, 1, 2, 4, 155, 72, 2, 1, 2, 4, 155, -128,
0, 1, 2, 5, 42, 54, 2, 1, 2, 5, 42, 40,
0, 1, 2, 5, 58, 52, 2, 1, 2, 5, 58, 40,
0, 1, 2, 5, 106, 50, 2, 1, 2, 5, 106, 40,
0, 1, 2, 5, 122, 66, 2, 1, 2, 5, 122, 40,
0, 1, 2, 5, 138, 66, 2, 1, 2, 5, 138, 127,
- 0, 1, 2, 5, 155, 62, 2, 1, 2, 5, 155, 127
+ 0, 1, 2, 5, 155, 62, 2, 1, 2, 5, 155, -128,
+ 1, 0, 0, 0, 1, 68, 3, 0, 0, 0, 1, 72,
+ 4, 0, 0, 0, 1, 76, 5, 0, 0, 0, 1, 60,
+ 6, 0, 0, 0, 1, 72, 7, 0, 0, 0, 1, 60,
+ 8, 0, 0, 0, 1, 72, 1, 0, 0, 0, 2, 68,
+ 3, 0, 0, 0, 2, 72, 4, 0, 0, 0, 2, 76,
+ 5, 0, 0, 0, 2, 60, 6, 0, 0, 0, 2, 72,
+ 7, 0, 0, 0, 2, 60, 8, 0, 0, 0, 2, 72,
+ 1, 0, 0, 0, 3, 68, 3, 0, 0, 0, 3, 76,
+ 4, 0, 0, 0, 3, 76, 5, 0, 0, 0, 3, 60,
+ 6, 0, 0, 0, 3, 76, 7, 0, 0, 0, 3, 60,
+ 8, 0, 0, 0, 3, 76, 1, 0, 0, 0, 4, 68,
+ 3, 0, 0, 0, 4, 76, 4, 0, 0, 0, 4, 76,
+ 5, 0, 0, 0, 4, 60, 6, 0, 0, 0, 4, 76,
+ 7, 0, 0, 0, 4, 60, 8, 0, 0, 0, 4, 76,
+ 1, 0, 0, 0, 5, 68, 3, 0, 0, 0, 5, 76,
+ 4, 0, 0, 0, 5, 76, 5, 0, 0, 0, 5, 60,
+ 6, 0, 0, 0, 5, 76, 7, 0, 0, 0, 5, 60,
+ 8, 0, 0, 0, 5, 76, 1, 0, 0, 0, 6, 68,
+ 3, 0, 0, 0, 6, 76, 4, 0, 0, 0, 6, 76,
+ 5, 0, 0, 0, 6, 60, 6, 0, 0, 0, 6, 76,
+ 7, 0, 0, 0, 6, 60, 8, 0, 0, 0, 6, 76,
+ 1, 0, 0, 0, 7, 68, 3, 0, 0, 0, 7, 76,
+ 4, 0, 0, 0, 7, 76, 5, 0, 0, 0, 7, 60,
+ 6, 0, 0, 0, 7, 76, 7, 0, 0, 0, 7, 60,
+ 8, 0, 0, 0, 7, 76, 1, 0, 0, 0, 8, 68,
+ 3, 0, 0, 0, 8, 76, 4, 0, 0, 0, 8, 76,
+ 5, 0, 0, 0, 8, 60, 6, 0, 0, 0, 8, 76,
+ 7, 0, 0, 0, 8, 60, 8, 0, 0, 0, 8, 76,
+ 1, 0, 0, 0, 9, 68, 3, 0, 0, 0, 9, 76,
+ 4, 0, 0, 0, 9, 76, 5, 0, 0, 0, 9, 60,
+ 6, 0, 0, 0, 9, 76, 7, 0, 0, 0, 9, 60,
+ 8, 0, 0, 0, 9, 76, 1, 0, 0, 0, 10, 68,
+ 3, 0, 0, 0, 10, 72, 4, 0, 0, 0, 10, 76,
+ 5, 0, 0, 0, 10, 60, 6, 0, 0, 0, 10, 72,
+ 7, 0, 0, 0, 10, 60, 8, 0, 0, 0, 10, 72,
+ 1, 0, 0, 0, 11, 68, 3, 0, 0, 0, 11, 72,
+ 4, 0, 0, 0, 11, 76, 5, 0, 0, 0, 11, 60,
+ 6, 0, 0, 0, 11, 72, 7, 0, 0, 0, 11, 60,
+ 8, 0, 0, 0, 11, 72, 1, 0, 0, 0, 12, 68,
+ 3, 0, 0, 0, 12, 52, 4, 0, 0, 0, 12, 76,
+ 5, 0, 0, 0, 12, 60, 6, 0, 0, 0, 12, 52,
+ 7, 0, 0, 0, 12, 60, 8, 0, 0, 0, 12, 52,
+ 1, 0, 0, 0, 13, 68, 3, 0, 0, 0, 13, 48,
+ 4, 0, 0, 0, 13, 76, 5, 0, 0, 0, 13, 60,
+ 6, 0, 0, 0, 13, 48, 7, 0, 0, 0, 13, 60,
+ 8, 0, 0, 0, 13, 48, 1, 0, 0, 0, 14, 68,
+ 3, 0, 0, 0, 14, 127, 4, 0, 0, 0, 14, 127,
+ 5, 0, 0, 0, 14, 127, 6, 0, 0, 0, 14, 127,
+ 7, 0, 0, 0, 14, 127, 8, 0, 0, 0, 14, 127,
+ 1, 0, 0, 1, 1, 76, 3, 0, 0, 1, 1, 52,
+ 4, 0, 0, 1, 1, 76, 5, 0, 0, 1, 1, 60,
+ 6, 0, 0, 1, 1, 52, 7, 0, 0, 1, 1, 60,
+ 8, 0, 0, 1, 1, 52, 1, 0, 0, 1, 2, 76,
+ 3, 0, 0, 1, 2, 60, 4, 0, 0, 1, 2, 76,
+ 5, 0, 0, 1, 2, 60, 6, 0, 0, 1, 2, 60,
+ 7, 0, 0, 1, 2, 60, 8, 0, 0, 1, 2, 60,
+ 1, 0, 0, 1, 3, 76, 3, 0, 0, 1, 3, 64,
+ 4, 0, 0, 1, 3, 76, 5, 0, 0, 1, 3, 60,
+ 6, 0, 0, 1, 3, 64, 7, 0, 0, 1, 3, 60,
+ 8, 0, 0, 1, 3, 64, 1, 0, 0, 1, 4, 76,
+ 3, 0, 0, 1, 4, 68, 4, 0, 0, 1, 4, 76,
+ 5, 0, 0, 1, 4, 60, 6, 0, 0, 1, 4, 68,
+ 7, 0, 0, 1, 4, 60, 8, 0, 0, 1, 4, 68,
+ 1, 0, 0, 1, 5, 76, 3, 0, 0, 1, 5, 76,
+ 4, 0, 0, 1, 5, 76, 5, 0, 0, 1, 5, 60,
+ 6, 0, 0, 1, 5, 76, 7, 0, 0, 1, 5, 60,
+ 8, 0, 0, 1, 5, 76, 1, 0, 0, 1, 6, 76,
+ 3, 0, 0, 1, 6, 76, 4, 0, 0, 1, 6, 76,
+ 5, 0, 0, 1, 6, 60, 6, 0, 0, 1, 6, 76,
+ 7, 0, 0, 1, 6, 60, 8, 0, 0, 1, 6, 76,
+ 1, 0, 0, 1, 7, 76, 3, 0, 0, 1, 7, 76,
+ 4, 0, 0, 1, 7, 76, 5, 0, 0, 1, 7, 60,
+ 6, 0, 0, 1, 7, 76, 7, 0, 0, 1, 7, 60,
+ 8, 0, 0, 1, 7, 76, 1, 0, 0, 1, 8, 76,
+ 3, 0, 0, 1, 8, 68, 4, 0, 0, 1, 8, 76,
+ 5, 0, 0, 1, 8, 60, 6, 0, 0, 1, 8, 68,
+ 7, 0, 0, 1, 8, 60, 8, 0, 0, 1, 8, 68,
+ 1, 0, 0, 1, 9, 76, 3, 0, 0, 1, 9, 64,
+ 4, 0, 0, 1, 9, 76, 5, 0, 0, 1, 9, 60,
+ 6, 0, 0, 1, 9, 64, 7, 0, 0, 1, 9, 60,
+ 8, 0, 0, 1, 9, 64, 1, 0, 0, 1, 10, 76,
+ 3, 0, 0, 1, 10, 60, 4, 0, 0, 1, 10, 76,
+ 5, 0, 0, 1, 10, 60, 6, 0, 0, 1, 10, 60,
+ 7, 0, 0, 1, 10, 60, 8, 0, 0, 1, 10, 60,
+ 1, 0, 0, 1, 11, 76, 3, 0, 0, 1, 11, 52,
+ 4, 0, 0, 1, 11, 76, 5, 0, 0, 1, 11, 60,
+ 6, 0, 0, 1, 11, 52, 7, 0, 0, 1, 11, 60,
+ 8, 0, 0, 1, 11, 52, 1, 0, 0, 1, 12, 76,
+ 3, 0, 0, 1, 12, 40, 4, 0, 0, 1, 12, 76,
+ 5, 0, 0, 1, 12, 60, 6, 0, 0, 1, 12, 40,
+ 7, 0, 0, 1, 12, 60, 8, 0, 0, 1, 12, 40,
+ 1, 0, 0, 1, 13, 76, 3, 0, 0, 1, 13, 28,
+ 4, 0, 0, 1, 13, 70, 5, 0, 0, 1, 13, 60,
+ 6, 0, 0, 1, 13, 28, 7, 0, 0, 1, 13, 60,
+ 8, 0, 0, 1, 13, 28, 1, 0, 0, 1, 14, 127,
+ 3, 0, 0, 1, 14, 127, 4, 0, 0, 1, 14, 127,
+ 5, 0, 0, 1, 14, 127, 6, 0, 0, 1, 14, 127,
+ 7, 0, 0, 1, 14, 127, 8, 0, 0, 1, 14, 127,
+ 1, 0, 0, 2, 1, 76, 3, 0, 0, 2, 1, 52,
+ 4, 0, 0, 2, 1, 76, 5, 0, 0, 2, 1, 60,
+ 6, 0, 0, 2, 1, 52, 7, 0, 0, 2, 1, 60,
+ 8, 0, 0, 2, 1, 52, 1, 0, 0, 2, 2, 76,
+ 3, 0, 0, 2, 2, 60, 4, 0, 0, 2, 2, 76,
+ 5, 0, 0, 2, 2, 60, 6, 0, 0, 2, 2, 60,
+ 7, 0, 0, 2, 2, 60, 8, 0, 0, 2, 2, 60,
+ 1, 0, 0, 2, 3, 76, 3, 0, 0, 2, 3, 64,
+ 4, 0, 0, 2, 3, 76, 5, 0, 0, 2, 3, 60,
+ 6, 0, 0, 2, 3, 64, 7, 0, 0, 2, 3, 60,
+ 8, 0, 0, 2, 3, 64, 1, 0, 0, 2, 4, 76,
+ 3, 0, 0, 2, 4, 68, 4, 0, 0, 2, 4, 76,
+ 5, 0, 0, 2, 4, 60, 6, 0, 0, 2, 4, 68,
+ 7, 0, 0, 2, 4, 60, 8, 0, 0, 2, 4, 68,
+ 1, 0, 0, 2, 5, 76, 3, 0, 0, 2, 5, 76,
+ 4, 0, 0, 2, 5, 76, 5, 0, 0, 2, 5, 60,
+ 6, 0, 0, 2, 5, 76, 7, 0, 0, 2, 5, 60,
+ 8, 0, 0, 2, 5, 76, 1, 0, 0, 2, 6, 76,
+ 3, 0, 0, 2, 6, 76, 4, 0, 0, 2, 6, 76,
+ 5, 0, 0, 2, 6, 60, 6, 0, 0, 2, 6, 76,
+ 7, 0, 0, 2, 6, 60, 8, 0, 0, 2, 6, 76,
+ 1, 0, 0, 2, 7, 76, 3, 0, 0, 2, 7, 76,
+ 4, 0, 0, 2, 7, 76, 5, 0, 0, 2, 7, 60,
+ 6, 0, 0, 2, 7, 76, 7, 0, 0, 2, 7, 60,
+ 8, 0, 0, 2, 7, 76, 1, 0, 0, 2, 8, 76,
+ 3, 0, 0, 2, 8, 68, 4, 0, 0, 2, 8, 76,
+ 5, 0, 0, 2, 8, 60, 6, 0, 0, 2, 8, 68,
+ 7, 0, 0, 2, 8, 60, 8, 0, 0, 2, 8, 68,
+ 1, 0, 0, 2, 9, 76, 3, 0, 0, 2, 9, 64,
+ 4, 0, 0, 2, 9, 76, 5, 0, 0, 2, 9, 60,
+ 6, 0, 0, 2, 9, 64, 7, 0, 0, 2, 9, 60,
+ 8, 0, 0, 2, 9, 64, 1, 0, 0, 2, 10, 76,
+ 3, 0, 0, 2, 10, 60, 4, 0, 0, 2, 10, 76,
+ 5, 0, 0, 2, 10, 60, 6, 0, 0, 2, 10, 60,
+ 7, 0, 0, 2, 10, 60, 8, 0, 0, 2, 10, 60,
+ 1, 0, 0, 2, 11, 76, 3, 0, 0, 2, 11, 52,
+ 4, 0, 0, 2, 11, 76, 5, 0, 0, 2, 11, 60,
+ 6, 0, 0, 2, 11, 52, 7, 0, 0, 2, 11, 60,
+ 8, 0, 0, 2, 11, 52, 1, 0, 0, 2, 12, 76,
+ 3, 0, 0, 2, 12, 40, 4, 0, 0, 2, 12, 76,
+ 5, 0, 0, 2, 12, 60, 6, 0, 0, 2, 12, 40,
+ 7, 0, 0, 2, 12, 60, 8, 0, 0, 2, 12, 40,
+ 1, 0, 0, 2, 13, 76, 3, 0, 0, 2, 13, 28,
+ 4, 0, 0, 2, 13, 72, 5, 0, 0, 2, 13, 60,
+ 6, 0, 0, 2, 13, 28, 7, 0, 0, 2, 13, 60,
+ 8, 0, 0, 2, 13, 28, 1, 0, 0, 2, 14, 127,
+ 3, 0, 0, 2, 14, 127, 4, 0, 0, 2, 14, 127,
+ 5, 0, 0, 2, 14, 127, 6, 0, 0, 2, 14, 127,
+ 7, 0, 0, 2, 14, 127, 8, 0, 0, 2, 14, 127,
+ 1, 0, 0, 3, 1, 66, 3, 0, 0, 3, 1, 52,
+ 4, 0, 0, 3, 1, 68, 5, 0, 0, 3, 1, 36,
+ 6, 0, 0, 3, 1, 52, 7, 0, 0, 3, 1, 36,
+ 8, 0, 0, 3, 1, 52, 1, 0, 0, 3, 2, 66,
+ 3, 0, 0, 3, 2, 60, 4, 0, 0, 3, 2, 70,
+ 5, 0, 0, 3, 2, 36, 6, 0, 0, 3, 2, 60,
+ 7, 0, 0, 3, 2, 36, 8, 0, 0, 3, 2, 60,
+ 1, 0, 0, 3, 3, 66, 3, 0, 0, 3, 3, 64,
+ 4, 0, 0, 3, 3, 70, 5, 0, 0, 3, 3, 36,
+ 6, 0, 0, 3, 3, 64, 7, 0, 0, 3, 3, 36,
+ 8, 0, 0, 3, 3, 64, 1, 0, 0, 3, 4, 66,
+ 3, 0, 0, 3, 4, 68, 4, 0, 0, 3, 4, 70,
+ 5, 0, 0, 3, 4, 36, 6, 0, 0, 3, 4, 68,
+ 7, 0, 0, 3, 4, 36, 8, 0, 0, 3, 4, 68,
+ 1, 0, 0, 3, 5, 66, 3, 0, 0, 3, 5, 76,
+ 4, 0, 0, 3, 5, 70, 5, 0, 0, 3, 5, 36,
+ 6, 0, 0, 3, 5, 76, 7, 0, 0, 3, 5, 36,
+ 8, 0, 0, 3, 5, 76, 1, 0, 0, 3, 6, 66,
+ 3, 0, 0, 3, 6, 76, 4, 0, 0, 3, 6, 70,
+ 5, 0, 0, 3, 6, 36, 6, 0, 0, 3, 6, 76,
+ 7, 0, 0, 3, 6, 36, 8, 0, 0, 3, 6, 76,
+ 1, 0, 0, 3, 7, 66, 3, 0, 0, 3, 7, 76,
+ 4, 0, 0, 3, 7, 70, 5, 0, 0, 3, 7, 36,
+ 6, 0, 0, 3, 7, 76, 7, 0, 0, 3, 7, 36,
+ 8, 0, 0, 3, 7, 76, 1, 0, 0, 3, 8, 66,
+ 3, 0, 0, 3, 8, 68, 4, 0, 0, 3, 8, 70,
+ 5, 0, 0, 3, 8, 36, 6, 0, 0, 3, 8, 68,
+ 7, 0, 0, 3, 8, 36, 8, 0, 0, 3, 8, 68,
+ 1, 0, 0, 3, 9, 66, 3, 0, 0, 3, 9, 64,
+ 4, 0, 0, 3, 9, 70, 5, 0, 0, 3, 9, 36,
+ 6, 0, 0, 3, 9, 64, 7, 0, 0, 3, 9, 36,
+ 8, 0, 0, 3, 9, 64, 1, 0, 0, 3, 10, 66,
+ 3, 0, 0, 3, 10, 60, 4, 0, 0, 3, 10, 70,
+ 5, 0, 0, 3, 10, 36, 6, 0, 0, 3, 10, 60,
+ 7, 0, 0, 3, 10, 36, 8, 0, 0, 3, 10, 60,
+ 1, 0, 0, 3, 11, 66, 3, 0, 0, 3, 11, 52,
+ 4, 0, 0, 3, 11, 70, 5, 0, 0, 3, 11, 36,
+ 6, 0, 0, 3, 11, 52, 7, 0, 0, 3, 11, 36,
+ 8, 0, 0, 3, 11, 52, 1, 0, 0, 3, 12, 66,
+ 3, 0, 0, 3, 12, 40, 4, 0, 0, 3, 12, 70,
+ 5, 0, 0, 3, 12, 36, 6, 0, 0, 3, 12, 40,
+ 7, 0, 0, 3, 12, 36, 8, 0, 0, 3, 12, 40,
+ 1, 0, 0, 3, 13, 66, 3, 0, 0, 3, 13, 28,
+ 4, 0, 0, 3, 13, 62, 5, 0, 0, 3, 13, 36,
+ 6, 0, 0, 3, 13, 28, 7, 0, 0, 3, 13, 36,
+ 8, 0, 0, 3, 13, 28, 1, 0, 0, 3, 14, 127,
+ 3, 0, 0, 3, 14, 127, 4, 0, 0, 3, 14, 127,
+ 5, 0, 0, 3, 14, 127, 6, 0, 0, 3, 14, 127,
+ 7, 0, 0, 3, 14, 127, 8, 0, 0, 3, 14, 127,
+ 1, 0, 1, 2, 1, 127, 3, 0, 1, 2, 1, 127,
+ 4, 0, 1, 2, 1, 127, 5, 0, 1, 2, 1, 127,
+ 6, 0, 1, 2, 1, 127, 7, 0, 1, 2, 1, 127,
+ 8, 0, 1, 2, 1, 127, 1, 0, 1, 2, 2, 127,
+ 3, 0, 1, 2, 2, 127, 4, 0, 1, 2, 2, 127,
+ 5, 0, 1, 2, 2, 127, 6, 0, 1, 2, 2, 127,
+ 7, 0, 1, 2, 2, 127, 8, 0, 1, 2, 2, 127,
+ 1, 0, 1, 2, 3, 72, 3, 0, 1, 2, 3, 52,
+ 4, 0, 1, 2, 3, 72, 5, 0, 1, 2, 3, 60,
+ 6, 0, 1, 2, 3, 52, 7, 0, 1, 2, 3, 60,
+ 8, 0, 1, 2, 3, 52, 1, 0, 1, 2, 4, 72,
+ 3, 0, 1, 2, 4, 52, 4, 0, 1, 2, 4, 72,
+ 5, 0, 1, 2, 4, 60, 6, 0, 1, 2, 4, 52,
+ 7, 0, 1, 2, 4, 60, 8, 0, 1, 2, 4, 52,
+ 1, 0, 1, 2, 5, 72, 3, 0, 1, 2, 5, 60,
+ 4, 0, 1, 2, 5, 72, 5, 0, 1, 2, 5, 60,
+ 6, 0, 1, 2, 5, 60, 7, 0, 1, 2, 5, 60,
+ 8, 0, 1, 2, 5, 60, 1, 0, 1, 2, 6, 72,
+ 3, 0, 1, 2, 6, 64, 4, 0, 1, 2, 6, 72,
+ 5, 0, 1, 2, 6, 60, 6, 0, 1, 2, 6, 64,
+ 7, 0, 1, 2, 6, 60, 8, 0, 1, 2, 6, 64,
+ 1, 0, 1, 2, 7, 72, 3, 0, 1, 2, 7, 60,
+ 4, 0, 1, 2, 7, 72, 5, 0, 1, 2, 7, 60,
+ 6, 0, 1, 2, 7, 60, 7, 0, 1, 2, 7, 60,
+ 8, 0, 1, 2, 7, 60, 1, 0, 1, 2, 8, 72,
+ 3, 0, 1, 2, 8, 52, 4, 0, 1, 2, 8, 72,
+ 5, 0, 1, 2, 8, 60, 6, 0, 1, 2, 8, 52,
+ 7, 0, 1, 2, 8, 60, 8, 0, 1, 2, 8, 52,
+ 1, 0, 1, 2, 9, 72, 3, 0, 1, 2, 9, 52,
+ 4, 0, 1, 2, 9, 72, 5, 0, 1, 2, 9, 60,
+ 6, 0, 1, 2, 9, 52, 7, 0, 1, 2, 9, 60,
+ 8, 0, 1, 2, 9, 52, 1, 0, 1, 2, 10, 72,
+ 3, 0, 1, 2, 10, 40, 4, 0, 1, 2, 10, 72,
+ 5, 0, 1, 2, 10, 60, 6, 0, 1, 2, 10, 40,
+ 7, 0, 1, 2, 10, 60, 8, 0, 1, 2, 10, 40,
+ 1, 0, 1, 2, 11, 72, 3, 0, 1, 2, 11, 28,
+ 4, 0, 1, 2, 11, 70, 5, 0, 1, 2, 11, 60,
+ 6, 0, 1, 2, 11, 28, 7, 0, 1, 2, 11, 60,
+ 8, 0, 1, 2, 11, 28, 1, 0, 1, 2, 12, 127,
+ 3, 0, 1, 2, 12, 127, 4, 0, 1, 2, 12, 127,
+ 5, 0, 1, 2, 12, 127, 6, 0, 1, 2, 12, 127,
+ 7, 0, 1, 2, 12, 127, 8, 0, 1, 2, 12, 127,
+ 1, 0, 1, 2, 13, 127, 3, 0, 1, 2, 13, 127,
+ 4, 0, 1, 2, 13, 127, 5, 0, 1, 2, 13, 127,
+ 6, 0, 1, 2, 13, 127, 7, 0, 1, 2, 13, 127,
+ 8, 0, 1, 2, 13, 127, 1, 0, 1, 2, 14, 127,
+ 3, 0, 1, 2, 14, 127, 4, 0, 1, 2, 14, 127,
+ 5, 0, 1, 2, 14, 127, 6, 0, 1, 2, 14, 127,
+ 7, 0, 1, 2, 14, 127, 8, 0, 1, 2, 14, 127,
+ 1, 0, 1, 3, 1, 127, 3, 0, 1, 3, 1, 127,
+ 4, 0, 1, 3, 1, 127, 5, 0, 1, 3, 1, 127,
+ 6, 0, 1, 3, 1, 127, 7, 0, 1, 3, 1, 127,
+ 8, 0, 1, 3, 1, 127, 1, 0, 1, 3, 2, 127,
+ 3, 0, 1, 3, 2, 127, 4, 0, 1, 3, 2, 127,
+ 5, 0, 1, 3, 2, 127, 6, 0, 1, 3, 2, 127,
+ 7, 0, 1, 3, 2, 127, 8, 0, 1, 3, 2, 127,
+ 1, 0, 1, 3, 3, 66, 3, 0, 1, 3, 3, 48,
+ 4, 0, 1, 3, 3, 66, 5, 0, 1, 3, 3, 36,
+ 6, 0, 1, 3, 3, 48, 7, 0, 1, 3, 3, 36,
+ 8, 0, 1, 3, 3, 48, 1, 0, 1, 3, 4, 66,
+ 3, 0, 1, 3, 4, 48, 4, 0, 1, 3, 4, 70,
+ 5, 0, 1, 3, 4, 36, 6, 0, 1, 3, 4, 48,
+ 7, 0, 1, 3, 4, 36, 8, 0, 1, 3, 4, 48,
+ 1, 0, 1, 3, 5, 66, 3, 0, 1, 3, 5, 60,
+ 4, 0, 1, 3, 5, 70, 5, 0, 1, 3, 5, 36,
+ 6, 0, 1, 3, 5, 60, 7, 0, 1, 3, 5, 36,
+ 8, 0, 1, 3, 5, 60, 1, 0, 1, 3, 6, 66,
+ 3, 0, 1, 3, 6, 64, 4, 0, 1, 3, 6, 70,
+ 5, 0, 1, 3, 6, 36, 6, 0, 1, 3, 6, 64,
+ 7, 0, 1, 3, 6, 36, 8, 0, 1, 3, 6, 64,
+ 1, 0, 1, 3, 7, 66, 3, 0, 1, 3, 7, 60,
+ 4, 0, 1, 3, 7, 70, 5, 0, 1, 3, 7, 36,
+ 6, 0, 1, 3, 7, 60, 7, 0, 1, 3, 7, 36,
+ 8, 0, 1, 3, 7, 60, 1, 0, 1, 3, 8, 66,
+ 3, 0, 1, 3, 8, 52, 4, 0, 1, 3, 8, 70,
+ 5, 0, 1, 3, 8, 36, 6, 0, 1, 3, 8, 52,
+ 7, 0, 1, 3, 8, 36, 8, 0, 1, 3, 8, 52,
+ 1, 0, 1, 3, 9, 66, 3, 0, 1, 3, 9, 52,
+ 4, 0, 1, 3, 9, 70, 5, 0, 1, 3, 9, 36,
+ 6, 0, 1, 3, 9, 52, 7, 0, 1, 3, 9, 36,
+ 8, 0, 1, 3, 9, 52, 1, 0, 1, 3, 10, 66,
+ 3, 0, 1, 3, 10, 40, 4, 0, 1, 3, 10, 70,
+ 5, 0, 1, 3, 10, 36, 6, 0, 1, 3, 10, 40,
+ 7, 0, 1, 3, 10, 36, 8, 0, 1, 3, 10, 40,
+ 1, 0, 1, 3, 11, 66, 3, 0, 1, 3, 11, 26,
+ 4, 0, 1, 3, 11, 66, 5, 0, 1, 3, 11, 36,
+ 6, 0, 1, 3, 11, 26, 7, 0, 1, 3, 11, 36,
+ 8, 0, 1, 3, 11, 26, 1, 0, 1, 3, 12, 127,
+ 3, 0, 1, 3, 12, 127, 4, 0, 1, 3, 12, 127,
+ 5, 0, 1, 3, 12, 127, 6, 0, 1, 3, 12, 127,
+ 7, 0, 1, 3, 12, 127, 8, 0, 1, 3, 12, 127,
+ 1, 0, 1, 3, 13, 127, 3, 0, 1, 3, 13, 127,
+ 4, 0, 1, 3, 13, 127, 5, 0, 1, 3, 13, 127,
+ 6, 0, 1, 3, 13, 127, 7, 0, 1, 3, 13, 127,
+ 8, 0, 1, 3, 13, 127, 1, 0, 1, 3, 14, 127,
+ 3, 0, 1, 3, 14, 127, 4, 0, 1, 3, 14, 127,
+ 5, 0, 1, 3, 14, 127, 6, 0, 1, 3, 14, 127,
+ 7, 0, 1, 3, 14, 127, 8, 0, 1, 3, 14, 127,
+ 1, 1, 0, 1, 36, 60, 3, 1, 0, 1, 36, 62,
+ 4, 1, 0, 1, 36, 76, 5, 1, 0, 1, 36, 62,
+ 6, 1, 0, 1, 36, 64, 7, 1, 0, 1, 36, 54,
+ 8, 1, 0, 1, 36, 62, 1, 1, 0, 1, 40, 62,
+ 3, 1, 0, 1, 40, 62, 4, 1, 0, 1, 40, 76,
+ 5, 1, 0, 1, 40, 62, 6, 1, 0, 1, 40, 64,
+ 7, 1, 0, 1, 40, 54, 8, 1, 0, 1, 40, 62,
+ 1, 1, 0, 1, 44, 62, 3, 1, 0, 1, 44, 62,
+ 4, 1, 0, 1, 44, 76, 5, 1, 0, 1, 44, 62,
+ 6, 1, 0, 1, 44, 64, 7, 1, 0, 1, 44, 54,
+ 8, 1, 0, 1, 44, 62, 1, 1, 0, 1, 48, 62,
+ 3, 1, 0, 1, 48, 62, 4, 1, 0, 1, 48, 76,
+ 5, 1, 0, 1, 48, 62, 6, 1, 0, 1, 48, 64,
+ 7, 1, 0, 1, 48, 54, 8, 1, 0, 1, 48, 62,
+ 1, 1, 0, 1, 52, 62, 3, 1, 0, 1, 52, 64,
+ 4, 1, 0, 1, 52, 76, 5, 1, 0, 1, 52, 62,
+ 6, 1, 0, 1, 52, 76, 7, 1, 0, 1, 52, 54,
+ 8, 1, 0, 1, 52, 76, 1, 1, 0, 1, 56, 62,
+ 3, 1, 0, 1, 56, 64, 4, 1, 0, 1, 56, 76,
+ 5, 1, 0, 1, 56, 62, 6, 1, 0, 1, 56, 76,
+ 7, 1, 0, 1, 56, 54, 8, 1, 0, 1, 56, 76,
+ 1, 1, 0, 1, 60, 62, 3, 1, 0, 1, 60, 64,
+ 4, 1, 0, 1, 60, 76, 5, 1, 0, 1, 60, 62,
+ 6, 1, 0, 1, 60, 76, 7, 1, 0, 1, 60, 54,
+ 8, 1, 0, 1, 60, 76, 1, 1, 0, 1, 64, 60,
+ 3, 1, 0, 1, 64, 64, 4, 1, 0, 1, 64, 76,
+ 5, 1, 0, 1, 64, 62, 6, 1, 0, 1, 64, 74,
+ 7, 1, 0, 1, 64, 54, 8, 1, 0, 1, 64, 74,
+ 1, 1, 0, 1, 100, 76, 3, 1, 0, 1, 100, 72,
+ 4, 1, 0, 1, 100, 76, 5, 1, 0, 1, 100, 62,
+ 6, 1, 0, 1, 100, 72, 7, 1, 0, 1, 100, 54,
+ 8, 1, 0, 1, 100, 72, 1, 1, 0, 1, 104, 76,
+ 3, 1, 0, 1, 104, 76, 4, 1, 0, 1, 104, 76,
+ 5, 1, 0, 1, 104, 62, 6, 1, 0, 1, 104, 76,
+ 7, 1, 0, 1, 104, 54, 8, 1, 0, 1, 104, 76,
+ 1, 1, 0, 1, 108, 76, 3, 1, 0, 1, 108, 76,
+ 4, 1, 0, 1, 108, 76, 5, 1, 0, 1, 108, 62,
+ 6, 1, 0, 1, 108, 76, 7, 1, 0, 1, 108, 54,
+ 8, 1, 0, 1, 108, 76, 1, 1, 0, 1, 112, 76,
+ 3, 1, 0, 1, 112, 76, 4, 1, 0, 1, 112, 76,
+ 5, 1, 0, 1, 112, 62, 6, 1, 0, 1, 112, 76,
+ 7, 1, 0, 1, 112, 54, 8, 1, 0, 1, 112, 76,
+ 1, 1, 0, 1, 116, 76, 3, 1, 0, 1, 116, 76,
+ 4, 1, 0, 1, 116, 76, 5, 1, 0, 1, 116, 62,
+ 6, 1, 0, 1, 116, 76, 7, 1, 0, 1, 116, 54,
+ 8, 1, 0, 1, 116, 76, 1, 1, 0, 1, 120, 76,
+ 3, 1, 0, 1, 120, 127, 4, 1, 0, 1, 120, 76,
+ 5, 1, 0, 1, 120, 127, 6, 1, 0, 1, 120, 76,
+ 7, 1, 0, 1, 120, 54, 8, 1, 0, 1, 120, 76,
+ 1, 1, 0, 1, 124, 76, 3, 1, 0, 1, 124, 127,
+ 4, 1, 0, 1, 124, 76, 5, 1, 0, 1, 124, 127,
+ 6, 1, 0, 1, 124, 76, 7, 1, 0, 1, 124, 54,
+ 8, 1, 0, 1, 124, 76, 1, 1, 0, 1, 128, 76,
+ 3, 1, 0, 1, 128, 127, 4, 1, 0, 1, 128, 76,
+ 5, 1, 0, 1, 128, 127, 6, 1, 0, 1, 128, 76,
+ 7, 1, 0, 1, 128, 54, 8, 1, 0, 1, 128, 76,
+ 1, 1, 0, 1, 132, 76, 3, 1, 0, 1, 132, 76,
+ 4, 1, 0, 1, 132, 76, 5, 1, 0, 1, 132, 62,
+ 6, 1, 0, 1, 132, 76, 7, 1, 0, 1, 132, 54,
+ 8, 1, 0, 1, 132, 76, 1, 1, 0, 1, 136, 76,
+ 3, 1, 0, 1, 136, 76, 4, 1, 0, 1, 136, 76,
+ 5, 1, 0, 1, 136, 62, 6, 1, 0, 1, 136, 76,
+ 7, 1, 0, 1, 136, 127, 8, 1, 0, 1, 136, 76,
+ 1, 1, 0, 1, 140, 76, 3, 1, 0, 1, 140, 72,
+ 4, 1, 0, 1, 140, 76, 5, 1, 0, 1, 140, 62,
+ 6, 1, 0, 1, 140, 72, 7, 1, 0, 1, 140, 127,
+ 8, 1, 0, 1, 140, 72, 1, 1, 0, 1, 144, 127,
+ 3, 1, 0, 1, 144, 76, 4, 1, 0, 1, 144, 76,
+ 5, 1, 0, 1, 144, 127, 6, 1, 0, 1, 144, 76,
+ 7, 1, 0, 1, 144, 127, 8, 1, 0, 1, 144, 76,
+ 1, 1, 0, 1, 149, 127, 3, 1, 0, 1, 149, 76,
+ 4, 1, 0, 1, 149, 74, 5, 1, 0, 1, 149, 76,
+ 6, 1, 0, 1, 149, 76, 7, 1, 0, 1, 149, 54,
+ 8, 1, 0, 1, 149, 76, 1, 1, 0, 1, 153, 127,
+ 3, 1, 0, 1, 153, 76, 4, 1, 0, 1, 153, 74,
+ 5, 1, 0, 1, 153, 76, 6, 1, 0, 1, 153, 76,
+ 7, 1, 0, 1, 153, 54, 8, 1, 0, 1, 153, 76,
+ 1, 1, 0, 1, 157, 127, 3, 1, 0, 1, 157, 76,
+ 4, 1, 0, 1, 157, 74, 5, 1, 0, 1, 157, 76,
+ 6, 1, 0, 1, 157, 76, 7, 1, 0, 1, 157, 54,
+ 8, 1, 0, 1, 157, 76, 1, 1, 0, 1, 161, 127,
+ 3, 1, 0, 1, 161, 76, 4, 1, 0, 1, 161, 74,
+ 5, 1, 0, 1, 161, 76, 6, 1, 0, 1, 161, 76,
+ 7, 1, 0, 1, 161, 54, 8, 1, 0, 1, 161, 76,
+ 1, 1, 0, 1, 165, 127, 3, 1, 0, 1, 165, 76,
+ 4, 1, 0, 1, 165, 74, 5, 1, 0, 1, 165, 76,
+ 6, 1, 0, 1, 165, 76, 7, 1, 0, 1, 165, 54,
+ 8, 1, 0, 1, 165, 76, 1, 1, 0, 2, 36, 62,
+ 3, 1, 0, 2, 36, 62, 4, 1, 0, 2, 36, 76,
+ 5, 1, 0, 2, 36, 62, 6, 1, 0, 2, 36, 64,
+ 7, 1, 0, 2, 36, 54, 8, 1, 0, 2, 36, 62,
+ 1, 1, 0, 2, 40, 62, 3, 1, 0, 2, 40, 62,
+ 4, 1, 0, 2, 40, 76, 5, 1, 0, 2, 40, 62,
+ 6, 1, 0, 2, 40, 64, 7, 1, 0, 2, 40, 54,
+ 8, 1, 0, 2, 40, 62, 1, 1, 0, 2, 44, 62,
+ 3, 1, 0, 2, 44, 62, 4, 1, 0, 2, 44, 76,
+ 5, 1, 0, 2, 44, 62, 6, 1, 0, 2, 44, 64,
+ 7, 1, 0, 2, 44, 54, 8, 1, 0, 2, 44, 62,
+ 1, 1, 0, 2, 48, 62, 3, 1, 0, 2, 48, 62,
+ 4, 1, 0, 2, 48, 76, 5, 1, 0, 2, 48, 62,
+ 6, 1, 0, 2, 48, 64, 7, 1, 0, 2, 48, 54,
+ 8, 1, 0, 2, 48, 62, 1, 1, 0, 2, 52, 62,
+ 3, 1, 0, 2, 52, 64, 4, 1, 0, 2, 52, 76,
+ 5, 1, 0, 2, 52, 62, 6, 1, 0, 2, 52, 76,
+ 7, 1, 0, 2, 52, 54, 8, 1, 0, 2, 52, 76,
+ 1, 1, 0, 2, 56, 62, 3, 1, 0, 2, 56, 64,
+ 4, 1, 0, 2, 56, 76, 5, 1, 0, 2, 56, 62,
+ 6, 1, 0, 2, 56, 76, 7, 1, 0, 2, 56, 54,
+ 8, 1, 0, 2, 56, 76, 1, 1, 0, 2, 60, 62,
+ 3, 1, 0, 2, 60, 64, 4, 1, 0, 2, 60, 76,
+ 5, 1, 0, 2, 60, 62, 6, 1, 0, 2, 60, 76,
+ 7, 1, 0, 2, 60, 54, 8, 1, 0, 2, 60, 76,
+ 1, 1, 0, 2, 64, 60, 3, 1, 0, 2, 64, 64,
+ 4, 1, 0, 2, 64, 74, 5, 1, 0, 2, 64, 62,
+ 6, 1, 0, 2, 64, 74, 7, 1, 0, 2, 64, 54,
+ 8, 1, 0, 2, 64, 74, 1, 1, 0, 2, 100, 76,
+ 3, 1, 0, 2, 100, 70, 4, 1, 0, 2, 100, 76,
+ 5, 1, 0, 2, 100, 62, 6, 1, 0, 2, 100, 70,
+ 7, 1, 0, 2, 100, 54, 8, 1, 0, 2, 100, 70,
+ 1, 1, 0, 2, 104, 76, 3, 1, 0, 2, 104, 76,
+ 4, 1, 0, 2, 104, 76, 5, 1, 0, 2, 104, 62,
+ 6, 1, 0, 2, 104, 76, 7, 1, 0, 2, 104, 54,
+ 8, 1, 0, 2, 104, 76, 1, 1, 0, 2, 108, 76,
+ 3, 1, 0, 2, 108, 76, 4, 1, 0, 2, 108, 76,
+ 5, 1, 0, 2, 108, 62, 6, 1, 0, 2, 108, 76,
+ 7, 1, 0, 2, 108, 54, 8, 1, 0, 2, 108, 76,
+ 1, 1, 0, 2, 112, 76, 3, 1, 0, 2, 112, 76,
+ 4, 1, 0, 2, 112, 76, 5, 1, 0, 2, 112, 62,
+ 6, 1, 0, 2, 112, 76, 7, 1, 0, 2, 112, 54,
+ 8, 1, 0, 2, 112, 76, 1, 1, 0, 2, 116, 76,
+ 3, 1, 0, 2, 116, 76, 4, 1, 0, 2, 116, 76,
+ 5, 1, 0, 2, 116, 62, 6, 1, 0, 2, 116, 76,
+ 7, 1, 0, 2, 116, 54, 8, 1, 0, 2, 116, 76,
+ 1, 1, 0, 2, 120, 76, 3, 1, 0, 2, 120, 127,
+ 4, 1, 0, 2, 120, 76, 5, 1, 0, 2, 120, 127,
+ 6, 1, 0, 2, 120, 76, 7, 1, 0, 2, 120, 54,
+ 8, 1, 0, 2, 120, 76, 1, 1, 0, 2, 124, 76,
+ 3, 1, 0, 2, 124, 127, 4, 1, 0, 2, 124, 76,
+ 5, 1, 0, 2, 124, 127, 6, 1, 0, 2, 124, 76,
+ 7, 1, 0, 2, 124, 54, 8, 1, 0, 2, 124, 76,
+ 1, 1, 0, 2, 128, 76, 3, 1, 0, 2, 128, 127,
+ 4, 1, 0, 2, 128, 76, 5, 1, 0, 2, 128, 127,
+ 6, 1, 0, 2, 128, 76, 7, 1, 0, 2, 128, 54,
+ 8, 1, 0, 2, 128, 76, 1, 1, 0, 2, 132, 76,
+ 3, 1, 0, 2, 132, 76, 4, 1, 0, 2, 132, 76,
+ 5, 1, 0, 2, 132, 62, 6, 1, 0, 2, 132, 76,
+ 7, 1, 0, 2, 132, 54, 8, 1, 0, 2, 132, 76,
+ 1, 1, 0, 2, 136, 76, 3, 1, 0, 2, 136, 76,
+ 4, 1, 0, 2, 136, 76, 5, 1, 0, 2, 136, 62,
+ 6, 1, 0, 2, 136, 76, 7, 1, 0, 2, 136, 127,
+ 8, 1, 0, 2, 136, 76, 1, 1, 0, 2, 140, 76,
+ 3, 1, 0, 2, 140, 70, 4, 1, 0, 2, 140, 76,
+ 5, 1, 0, 2, 140, 62, 6, 1, 0, 2, 140, 70,
+ 7, 1, 0, 2, 140, 127, 8, 1, 0, 2, 140, 70,
+ 1, 1, 0, 2, 144, 127, 3, 1, 0, 2, 144, 76,
+ 4, 1, 0, 2, 144, 76, 5, 1, 0, 2, 144, 127,
+ 6, 1, 0, 2, 144, 76, 7, 1, 0, 2, 144, 127,
+ 8, 1, 0, 2, 144, 76, 1, 1, 0, 2, 149, 127,
+ 3, 1, 0, 2, 149, 76, 4, 1, 0, 2, 149, 74,
+ 5, 1, 0, 2, 149, 76, 6, 1, 0, 2, 149, 76,
+ 7, 1, 0, 2, 149, 54, 8, 1, 0, 2, 149, 76,
+ 1, 1, 0, 2, 153, 127, 3, 1, 0, 2, 153, 76,
+ 4, 1, 0, 2, 153, 74, 5, 1, 0, 2, 153, 76,
+ 6, 1, 0, 2, 153, 76, 7, 1, 0, 2, 153, 54,
+ 8, 1, 0, 2, 153, 76, 1, 1, 0, 2, 157, 127,
+ 3, 1, 0, 2, 157, 76, 4, 1, 0, 2, 157, 74,
+ 5, 1, 0, 2, 157, 76, 6, 1, 0, 2, 157, 76,
+ 7, 1, 0, 2, 157, 54, 8, 1, 0, 2, 157, 76,
+ 1, 1, 0, 2, 161, 127, 3, 1, 0, 2, 161, 76,
+ 4, 1, 0, 2, 161, 74, 5, 1, 0, 2, 161, 76,
+ 6, 1, 0, 2, 161, 76, 7, 1, 0, 2, 161, 54,
+ 8, 1, 0, 2, 161, 76, 1, 1, 0, 2, 165, 127,
+ 3, 1, 0, 2, 165, 76, 4, 1, 0, 2, 165, 74,
+ 5, 1, 0, 2, 165, 76, 6, 1, 0, 2, 165, 76,
+ 7, 1, 0, 2, 165, 54, 8, 1, 0, 2, 165, 76,
+ 1, 1, 0, 3, 36, 50, 3, 1, 0, 3, 36, 38,
+ 4, 1, 0, 3, 36, 66, 5, 1, 0, 3, 36, 38,
+ 6, 1, 0, 3, 36, 52, 7, 1, 0, 3, 36, 30,
+ 8, 1, 0, 3, 36, 50, 1, 1, 0, 3, 40, 50,
+ 3, 1, 0, 3, 40, 38, 4, 1, 0, 3, 40, 66,
+ 5, 1, 0, 3, 40, 38, 6, 1, 0, 3, 40, 52,
+ 7, 1, 0, 3, 40, 30, 8, 1, 0, 3, 40, 50,
+ 1, 1, 0, 3, 44, 50, 3, 1, 0, 3, 44, 38,
+ 4, 1, 0, 3, 44, 66, 5, 1, 0, 3, 44, 38,
+ 6, 1, 0, 3, 44, 52, 7, 1, 0, 3, 44, 30,
+ 8, 1, 0, 3, 44, 50, 1, 1, 0, 3, 48, 50,
+ 3, 1, 0, 3, 48, 38, 4, 1, 0, 3, 48, 66,
+ 5, 1, 0, 3, 48, 38, 6, 1, 0, 3, 48, 52,
+ 7, 1, 0, 3, 48, 30, 8, 1, 0, 3, 48, 50,
+ 1, 1, 0, 3, 52, 50, 3, 1, 0, 3, 52, 40,
+ 4, 1, 0, 3, 52, 66, 5, 1, 0, 3, 52, 38,
+ 6, 1, 0, 3, 52, 68, 7, 1, 0, 3, 52, 30,
+ 8, 1, 0, 3, 52, 68, 1, 1, 0, 3, 56, 50,
+ 3, 1, 0, 3, 56, 40, 4, 1, 0, 3, 56, 66,
+ 5, 1, 0, 3, 56, 38, 6, 1, 0, 3, 56, 68,
+ 7, 1, 0, 3, 56, 30, 8, 1, 0, 3, 56, 68,
+ 1, 1, 0, 3, 60, 50, 3, 1, 0, 3, 60, 40,
+ 4, 1, 0, 3, 60, 66, 5, 1, 0, 3, 60, 38,
+ 6, 1, 0, 3, 60, 66, 7, 1, 0, 3, 60, 30,
+ 8, 1, 0, 3, 60, 66, 1, 1, 0, 3, 64, 50,
+ 3, 1, 0, 3, 64, 40, 4, 1, 0, 3, 64, 66,
+ 5, 1, 0, 3, 64, 38, 6, 1, 0, 3, 64, 68,
+ 7, 1, 0, 3, 64, 30, 8, 1, 0, 3, 64, 68,
+ 1, 1, 0, 3, 100, 70, 3, 1, 0, 3, 100, 60,
+ 4, 1, 0, 3, 100, 64, 5, 1, 0, 3, 100, 38,
+ 6, 1, 0, 3, 100, 60, 7, 1, 0, 3, 100, 30,
+ 8, 1, 0, 3, 100, 60, 1, 1, 0, 3, 104, 70,
+ 3, 1, 0, 3, 104, 68, 4, 1, 0, 3, 104, 64,
+ 5, 1, 0, 3, 104, 38, 6, 1, 0, 3, 104, 68,
+ 7, 1, 0, 3, 104, 30, 8, 1, 0, 3, 104, 68,
+ 1, 1, 0, 3, 108, 70, 3, 1, 0, 3, 108, 68,
+ 4, 1, 0, 3, 108, 64, 5, 1, 0, 3, 108, 38,
+ 6, 1, 0, 3, 108, 68, 7, 1, 0, 3, 108, 30,
+ 8, 1, 0, 3, 108, 68, 1, 1, 0, 3, 112, 70,
+ 3, 1, 0, 3, 112, 68, 4, 1, 0, 3, 112, 64,
+ 5, 1, 0, 3, 112, 38, 6, 1, 0, 3, 112, 68,
+ 7, 1, 0, 3, 112, 30, 8, 1, 0, 3, 112, 68,
+ 1, 1, 0, 3, 116, 70, 3, 1, 0, 3, 116, 68,
+ 4, 1, 0, 3, 116, 64, 5, 1, 0, 3, 116, 38,
+ 6, 1, 0, 3, 116, 68, 7, 1, 0, 3, 116, 30,
+ 8, 1, 0, 3, 116, 68, 1, 1, 0, 3, 120, 70,
+ 3, 1, 0, 3, 120, 127, 4, 1, 0, 3, 120, 64,
+ 5, 1, 0, 3, 120, 127, 6, 1, 0, 3, 120, 68,
+ 7, 1, 0, 3, 120, 30, 8, 1, 0, 3, 120, 68,
+ 1, 1, 0, 3, 124, 70, 3, 1, 0, 3, 124, 127,
+ 4, 1, 0, 3, 124, 64, 5, 1, 0, 3, 124, 127,
+ 6, 1, 0, 3, 124, 68, 7, 1, 0, 3, 124, 30,
+ 8, 1, 0, 3, 124, 68, 1, 1, 0, 3, 128, 70,
+ 3, 1, 0, 3, 128, 127, 4, 1, 0, 3, 128, 64,
+ 5, 1, 0, 3, 128, 127, 6, 1, 0, 3, 128, 68,
+ 7, 1, 0, 3, 128, 30, 8, 1, 0, 3, 128, 68,
+ 1, 1, 0, 3, 132, 70, 3, 1, 0, 3, 132, 68,
+ 4, 1, 0, 3, 132, 64, 5, 1, 0, 3, 132, 38,
+ 6, 1, 0, 3, 132, 68, 7, 1, 0, 3, 132, 30,
+ 8, 1, 0, 3, 132, 68, 1, 1, 0, 3, 136, 70,
+ 3, 1, 0, 3, 136, 68, 4, 1, 0, 3, 136, 64,
+ 5, 1, 0, 3, 136, 38, 6, 1, 0, 3, 136, 68,
+ 7, 1, 0, 3, 136, 127, 8, 1, 0, 3, 136, 68,
+ 1, 1, 0, 3, 140, 70, 3, 1, 0, 3, 140, 60,
+ 4, 1, 0, 3, 140, 64, 5, 1, 0, 3, 140, 38,
+ 6, 1, 0, 3, 140, 60, 7, 1, 0, 3, 140, 127,
+ 8, 1, 0, 3, 140, 60, 1, 1, 0, 3, 144, 127,
+ 3, 1, 0, 3, 144, 68, 4, 1, 0, 3, 144, 64,
+ 5, 1, 0, 3, 144, 127, 6, 1, 0, 3, 144, 68,
+ 7, 1, 0, 3, 144, 127, 8, 1, 0, 3, 144, 68,
+ 1, 1, 0, 3, 149, 127, 3, 1, 0, 3, 149, 76,
+ 4, 1, 0, 3, 149, 60, 5, 1, 0, 3, 149, 76,
+ 6, 1, 0, 3, 149, 76, 7, 1, 0, 3, 149, 30,
+ 8, 1, 0, 3, 149, 72, 1, 1, 0, 3, 153, 127,
+ 3, 1, 0, 3, 153, 76, 4, 1, 0, 3, 153, 60,
+ 5, 1, 0, 3, 153, 76, 6, 1, 0, 3, 153, 76,
+ 7, 1, 0, 3, 153, 30, 8, 1, 0, 3, 153, 76,
+ 1, 1, 0, 3, 157, 127, 3, 1, 0, 3, 157, 76,
+ 4, 1, 0, 3, 157, 60, 5, 1, 0, 3, 157, 76,
+ 6, 1, 0, 3, 157, 76, 7, 1, 0, 3, 157, 30,
+ 8, 1, 0, 3, 157, 76, 1, 1, 0, 3, 161, 127,
+ 3, 1, 0, 3, 161, 76, 4, 1, 0, 3, 161, 60,
+ 5, 1, 0, 3, 161, 76, 6, 1, 0, 3, 161, 76,
+ 7, 1, 0, 3, 161, 30, 8, 1, 0, 3, 161, 76,
+ 1, 1, 0, 3, 165, 127, 3, 1, 0, 3, 165, 76,
+ 4, 1, 0, 3, 165, 60, 5, 1, 0, 3, 165, 76,
+ 6, 1, 0, 3, 165, 76, 7, 1, 0, 3, 165, 30,
+ 8, 1, 0, 3, 165, 76, 1, 1, 1, 2, 38, 62,
+ 3, 1, 1, 2, 38, 64, 4, 1, 1, 2, 38, 72,
+ 5, 1, 1, 2, 38, 64, 6, 1, 1, 2, 38, 64,
+ 7, 1, 1, 2, 38, 54, 8, 1, 1, 2, 38, 62,
+ 1, 1, 1, 2, 46, 62, 3, 1, 1, 2, 46, 64,
+ 4, 1, 1, 2, 46, 72, 5, 1, 1, 2, 46, 64,
+ 6, 1, 1, 2, 46, 64, 7, 1, 1, 2, 46, 54,
+ 8, 1, 1, 2, 46, 62, 1, 1, 1, 2, 54, 62,
+ 3, 1, 1, 2, 54, 64, 4, 1, 1, 2, 54, 72,
+ 5, 1, 1, 2, 54, 64, 6, 1, 1, 2, 54, 72,
+ 7, 1, 1, 2, 54, 54, 8, 1, 1, 2, 54, 72,
+ 1, 1, 1, 2, 62, 62, 3, 1, 1, 2, 62, 64,
+ 4, 1, 1, 2, 62, 70, 5, 1, 1, 2, 62, 64,
+ 6, 1, 1, 2, 62, 64, 7, 1, 1, 2, 62, 54,
+ 8, 1, 1, 2, 62, 64, 1, 1, 1, 2, 102, 72,
+ 3, 1, 1, 2, 102, 58, 4, 1, 1, 2, 102, 72,
+ 5, 1, 1, 2, 102, 64, 6, 1, 1, 2, 102, 58,
+ 7, 1, 1, 2, 102, 54, 8, 1, 1, 2, 102, 58,
+ 1, 1, 1, 2, 110, 72, 3, 1, 1, 2, 110, 72,
+ 4, 1, 1, 2, 110, 72, 5, 1, 1, 2, 110, 64,
+ 6, 1, 1, 2, 110, 72, 7, 1, 1, 2, 110, 54,
+ 8, 1, 1, 2, 110, 72, 1, 1, 1, 2, 118, 72,
+ 3, 1, 1, 2, 118, 127, 4, 1, 1, 2, 118, 72,
+ 5, 1, 1, 2, 118, 127, 6, 1, 1, 2, 118, 72,
+ 7, 1, 1, 2, 118, 54, 8, 1, 1, 2, 118, 72,
+ 1, 1, 1, 2, 126, 72, 3, 1, 1, 2, 126, 127,
+ 4, 1, 1, 2, 126, 72, 5, 1, 1, 2, 126, 127,
+ 6, 1, 1, 2, 126, 72, 7, 1, 1, 2, 126, 54,
+ 8, 1, 1, 2, 126, 72, 1, 1, 1, 2, 134, 72,
+ 3, 1, 1, 2, 134, 72, 4, 1, 1, 2, 134, 72,
+ 5, 1, 1, 2, 134, 64, 6, 1, 1, 2, 134, 72,
+ 7, 1, 1, 2, 134, 127, 8, 1, 1, 2, 134, 72,
+ 1, 1, 1, 2, 142, 127, 3, 1, 1, 2, 142, 72,
+ 4, 1, 1, 2, 142, 72, 5, 1, 1, 2, 142, 127,
+ 6, 1, 1, 2, 142, 72, 7, 1, 1, 2, 142, 127,
+ 8, 1, 1, 2, 142, 72, 1, 1, 1, 2, 151, 127,
+ 3, 1, 1, 2, 151, 72, 4, 1, 1, 2, 151, 72,
+ 5, 1, 1, 2, 151, 72, 6, 1, 1, 2, 151, 72,
+ 7, 1, 1, 2, 151, 54, 8, 1, 1, 2, 151, 72,
+ 1, 1, 1, 2, 159, 127, 3, 1, 1, 2, 159, 72,
+ 4, 1, 1, 2, 159, 72, 5, 1, 1, 2, 159, 72,
+ 6, 1, 1, 2, 159, 72, 7, 1, 1, 2, 159, 54,
+ 8, 1, 1, 2, 159, 72, 1, 1, 1, 3, 38, 50,
+ 3, 1, 1, 3, 38, 40, 4, 1, 1, 3, 38, 62,
+ 5, 1, 1, 3, 38, 40, 6, 1, 1, 3, 38, 52,
+ 7, 1, 1, 3, 38, 30, 8, 1, 1, 3, 38, 50,
+ 1, 1, 1, 3, 46, 50, 3, 1, 1, 3, 46, 40,
+ 4, 1, 1, 3, 46, 62, 5, 1, 1, 3, 46, 40,
+ 6, 1, 1, 3, 46, 52, 7, 1, 1, 3, 46, 30,
+ 8, 1, 1, 3, 46, 50, 1, 1, 1, 3, 54, 50,
+ 3, 1, 1, 3, 54, 40, 4, 1, 1, 3, 54, 62,
+ 5, 1, 1, 3, 54, 40, 6, 1, 1, 3, 54, 68,
+ 7, 1, 1, 3, 54, 30, 8, 1, 1, 3, 54, 68,
+ 1, 1, 1, 3, 62, 48, 3, 1, 1, 3, 62, 40,
+ 4, 1, 1, 3, 62, 58, 5, 1, 1, 3, 62, 40,
+ 6, 1, 1, 3, 62, 58, 7, 1, 1, 3, 62, 30,
+ 8, 1, 1, 3, 62, 58, 1, 1, 1, 3, 102, 70,
+ 3, 1, 1, 3, 102, 54, 4, 1, 1, 3, 102, 64,
+ 5, 1, 1, 3, 102, 40, 6, 1, 1, 3, 102, 54,
+ 7, 1, 1, 3, 102, 30, 8, 1, 1, 3, 102, 54,
+ 1, 1, 1, 3, 110, 70, 3, 1, 1, 3, 110, 68,
+ 4, 1, 1, 3, 110, 64, 5, 1, 1, 3, 110, 40,
+ 6, 1, 1, 3, 110, 68, 7, 1, 1, 3, 110, 30,
+ 8, 1, 1, 3, 110, 68, 1, 1, 1, 3, 118, 70,
+ 3, 1, 1, 3, 118, 127, 4, 1, 1, 3, 118, 64,
+ 5, 1, 1, 3, 118, 127, 6, 1, 1, 3, 118, 68,
+ 7, 1, 1, 3, 118, 30, 8, 1, 1, 3, 118, 68,
+ 1, 1, 1, 3, 126, 70, 3, 1, 1, 3, 126, 127,
+ 4, 1, 1, 3, 126, 64, 5, 1, 1, 3, 126, 127,
+ 6, 1, 1, 3, 126, 68, 7, 1, 1, 3, 126, 30,
+ 8, 1, 1, 3, 126, 68, 1, 1, 1, 3, 134, 70,
+ 3, 1, 1, 3, 134, 68, 4, 1, 1, 3, 134, 64,
+ 5, 1, 1, 3, 134, 40, 6, 1, 1, 3, 134, 68,
+ 7, 1, 1, 3, 134, 127, 8, 1, 1, 3, 134, 68,
+ 1, 1, 1, 3, 142, 127, 3, 1, 1, 3, 142, 68,
+ 4, 1, 1, 3, 142, 64, 5, 1, 1, 3, 142, 127,
+ 6, 1, 1, 3, 142, 68, 7, 1, 1, 3, 142, 127,
+ 8, 1, 1, 3, 142, 68, 1, 1, 1, 3, 151, 127,
+ 3, 1, 1, 3, 151, 72, 4, 1, 1, 3, 151, 66,
+ 5, 1, 1, 3, 151, 72, 6, 1, 1, 3, 151, 72,
+ 7, 1, 1, 3, 151, 30, 8, 1, 1, 3, 151, 68,
+ 1, 1, 1, 3, 159, 127, 3, 1, 1, 3, 159, 72,
+ 4, 1, 1, 3, 159, 66, 5, 1, 1, 3, 159, 72,
+ 6, 1, 1, 3, 159, 72, 7, 1, 1, 3, 159, 30,
+ 8, 1, 1, 3, 159, 72, 1, 1, 2, 4, 42, 64,
+ 3, 1, 2, 4, 42, 64, 4, 1, 2, 4, 42, 68,
+ 5, 1, 2, 4, 42, 64, 6, 1, 2, 4, 42, 64,
+ 7, 1, 2, 4, 42, 54, 8, 1, 2, 4, 42, 62,
+ 1, 1, 2, 4, 58, 64, 3, 1, 2, 4, 58, 62,
+ 4, 1, 2, 4, 58, 64, 5, 1, 2, 4, 58, 64,
+ 6, 1, 2, 4, 58, 62, 7, 1, 2, 4, 58, 54,
+ 8, 1, 2, 4, 58, 62, 1, 1, 2, 4, 106, 72,
+ 3, 1, 2, 4, 106, 58, 4, 1, 2, 4, 106, 66,
+ 5, 1, 2, 4, 106, 64, 6, 1, 2, 4, 106, 58,
+ 7, 1, 2, 4, 106, 54, 8, 1, 2, 4, 106, 58,
+ 1, 1, 2, 4, 122, 72, 3, 1, 2, 4, 122, 127,
+ 4, 1, 2, 4, 122, 68, 5, 1, 2, 4, 122, 127,
+ 6, 1, 2, 4, 122, 72, 7, 1, 2, 4, 122, 54,
+ 8, 1, 2, 4, 122, 72, 1, 1, 2, 4, 138, 127,
+ 3, 1, 2, 4, 138, 72, 4, 1, 2, 4, 138, 68,
+ 5, 1, 2, 4, 138, 127, 6, 1, 2, 4, 138, 72,
+ 7, 1, 2, 4, 138, 127, 8, 1, 2, 4, 138, 72,
+ 1, 1, 2, 4, 155, 127, 3, 1, 2, 4, 155, 72,
+ 4, 1, 2, 4, 155, 68, 5, 1, 2, 4, 155, 72,
+ 6, 1, 2, 4, 155, 72, 7, 1, 2, 4, 155, 54,
+ 8, 1, 2, 4, 155, 68, 1, 1, 2, 5, 42, 50,
+ 3, 1, 2, 5, 42, 40, 4, 1, 2, 5, 42, 58,
+ 5, 1, 2, 5, 42, 40, 6, 1, 2, 5, 42, 52,
+ 7, 1, 2, 5, 42, 30, 8, 1, 2, 5, 42, 50,
+ 1, 1, 2, 5, 58, 50, 3, 1, 2, 5, 58, 40,
+ 4, 1, 2, 5, 58, 56, 5, 1, 2, 5, 58, 40,
+ 6, 1, 2, 5, 58, 52, 7, 1, 2, 5, 58, 30,
+ 8, 1, 2, 5, 58, 52, 1, 1, 2, 5, 106, 72,
+ 3, 1, 2, 5, 106, 50, 4, 1, 2, 5, 106, 56,
+ 5, 1, 2, 5, 106, 40, 6, 1, 2, 5, 106, 50,
+ 7, 1, 2, 5, 106, 30, 8, 1, 2, 5, 106, 50,
+ 1, 1, 2, 5, 122, 72, 3, 1, 2, 5, 122, 127,
+ 4, 1, 2, 5, 122, 56, 5, 1, 2, 5, 122, 127,
+ 6, 1, 2, 5, 122, 66, 7, 1, 2, 5, 122, 30,
+ 8, 1, 2, 5, 122, 66, 1, 1, 2, 5, 138, 127,
+ 3, 1, 2, 5, 138, 66, 4, 1, 2, 5, 138, 58,
+ 5, 1, 2, 5, 138, 127, 6, 1, 2, 5, 138, 66,
+ 7, 1, 2, 5, 138, 127, 8, 1, 2, 5, 138, 66,
+ 1, 1, 2, 5, 155, 127, 3, 1, 2, 5, 155, 62,
+ 4, 1, 2, 5, 155, 58, 5, 1, 2, 5, 155, 72,
+ 6, 1, 2, 5, 155, 62, 7, 1, 2, 5, 155, 30,
+ 8, 1, 2, 5, 155, 62
};
RTW_DECL_TABLE_TXPWR_LMT(rtw8822c_txpwr_lmt_type0);
diff --git a/drivers/net/wireless/realtek/rtw88/tx.c b/drivers/net/wireless/realtek/rtw88/tx.c
index e32faf8bead9..8eaa9809ca44 100644
--- a/drivers/net/wireless/realtek/rtw88/tx.c
+++ b/drivers/net/wireless/realtek/rtw88/tx.c
@@ -362,6 +362,6 @@ void rtw_rsvd_page_pkt_info_update(struct rtw_dev *rtwdev,
pkt_info->bmc = bmc;
pkt_info->tx_pkt_size = skb->len;
pkt_info->offset = chip->tx_pkt_desc_sz;
- pkt_info->qsel = skb->priority;
+ pkt_info->qsel = TX_DESC_QSEL_MGMT;
pkt_info->ls = true;
}
diff --git a/drivers/net/wireless/ti/wl18xx/main.c b/drivers/net/wireless/ti/wl18xx/main.c
index a5e0604d3009..0b3cf8477c6c 100644
--- a/drivers/net/wireless/ti/wl18xx/main.c
+++ b/drivers/net/wireless/ti/wl18xx/main.c
@@ -1847,44 +1847,6 @@ static const struct ieee80211_iface_limit wl18xx_iface_ap_limits[] = {
},
};
-static const struct ieee80211_iface_limit wl18xx_iface_ap_cl_limits[] = {
- {
- .max = 1,
- .types = BIT(NL80211_IFTYPE_STATION),
- },
- {
- .max = 1,
- .types = BIT(NL80211_IFTYPE_AP),
- },
- {
- .max = 1,
- .types = BIT(NL80211_IFTYPE_P2P_CLIENT),
- },
- {
- .max = 1,
- .types = BIT(NL80211_IFTYPE_P2P_DEVICE),
- },
-};
-
-static const struct ieee80211_iface_limit wl18xx_iface_ap_go_limits[] = {
- {
- .max = 1,
- .types = BIT(NL80211_IFTYPE_STATION),
- },
- {
- .max = 1,
- .types = BIT(NL80211_IFTYPE_AP),
- },
- {
- .max = 1,
- .types = BIT(NL80211_IFTYPE_P2P_GO),
- },
- {
- .max = 1,
- .types = BIT(NL80211_IFTYPE_P2P_DEVICE),
- },
-};
-
static const struct ieee80211_iface_combination
wl18xx_iface_combinations[] = {
{
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index 783198844dd7..240f762b3749 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -633,7 +633,7 @@ int xenvif_connect_data(struct xenvif_queue *queue,
unsigned int rx_evtchn)
{
struct task_struct *task;
- int err = -ENOMEM;
+ int err;
BUG_ON(queue->tx_irq);
BUG_ON(queue->task);
diff --git a/drivers/nfc/st-nci/i2c.c b/drivers/nfc/st-nci/i2c.c
index 67a685adfd44..55d600cd3861 100644
--- a/drivers/nfc/st-nci/i2c.c
+++ b/drivers/nfc/st-nci/i2c.c
@@ -72,7 +72,7 @@ static void st_nci_i2c_disable(void *phy_id)
*/
static int st_nci_i2c_write(void *phy_id, struct sk_buff *skb)
{
- int r = -1;
+ int r;
struct st_nci_i2c_phy *phy = phy_id;
struct i2c_client *client = phy->i2c_dev;
diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
index fd4cb75088f9..e44af7f4d37f 100644
--- a/drivers/pci/pcie/aspm.c
+++ b/drivers/pci/pcie/aspm.c
@@ -1062,18 +1062,18 @@ void pcie_aspm_powersave_config_link(struct pci_dev *pdev)
up_read(&pci_bus_sem);
}
-static void __pci_disable_link_state(struct pci_dev *pdev, int state, bool sem)
+static int __pci_disable_link_state(struct pci_dev *pdev, int state, bool sem)
{
struct pci_dev *parent = pdev->bus->self;
struct pcie_link_state *link;
if (!pci_is_pcie(pdev))
- return;
+ return 0;
if (pdev->has_secondary_link)
parent = pdev;
if (!parent || !parent->link_state)
- return;
+ return -EINVAL;
/*
* A driver requested that ASPM be disabled on this device, but
@@ -1085,7 +1085,7 @@ static void __pci_disable_link_state(struct pci_dev *pdev, int state, bool sem)
*/
if (aspm_disabled) {
pci_warn(pdev, "can't disable ASPM; OS doesn't have ASPM control\n");
- return;
+ return -EPERM;
}
if (sem)
@@ -1105,11 +1105,13 @@ static void __pci_disable_link_state(struct pci_dev *pdev, int state, bool sem)
mutex_unlock(&aspm_lock);
if (sem)
up_read(&pci_bus_sem);
+
+ return 0;
}
-void pci_disable_link_state_locked(struct pci_dev *pdev, int state)
+int pci_disable_link_state_locked(struct pci_dev *pdev, int state)
{
- __pci_disable_link_state(pdev, state, false);
+ return __pci_disable_link_state(pdev, state, false);
}
EXPORT_SYMBOL(pci_disable_link_state_locked);
@@ -1117,14 +1119,14 @@ EXPORT_SYMBOL(pci_disable_link_state_locked);
* pci_disable_link_state - Disable device's link state, so the link will
* never enter specific states. Note that if the BIOS didn't grant ASPM
* control to the OS, this does nothing because we can't touch the LNKCTL
- * register.
+ * register. Returns 0 or a negative errno.
*
* @pdev: PCI device
* @state: ASPM link state to disable
*/
-void pci_disable_link_state(struct pci_dev *pdev, int state)
+int pci_disable_link_state(struct pci_dev *pdev, int state)
{
- __pci_disable_link_state(pdev, state, true);
+ return __pci_disable_link_state(pdev, state, true);
}
EXPORT_SYMBOL(pci_disable_link_state);
diff --git a/drivers/ptp/Kconfig b/drivers/ptp/Kconfig
index 9b8fee5178e8..960961fb0d7c 100644
--- a/drivers/ptp/Kconfig
+++ b/drivers/ptp/Kconfig
@@ -44,7 +44,7 @@ config PTP_1588_CLOCK_DTE
config PTP_1588_CLOCK_QORIQ
tristate "Freescale QorIQ 1588 timer as PTP clock"
- depends on GIANFAR || FSL_DPAA_ETH || FSL_ENETC || FSL_ENETC_VF
+ depends on GIANFAR || FSL_DPAA_ETH || FSL_DPAA2_ETH || FSL_ENETC || FSL_ENETC_VF || COMPILE_TEST
depends on PTP_1588_CLOCK
default y
help
diff --git a/drivers/ptp/ptp_clock.c b/drivers/ptp/ptp_clock.c
index e189fa1be21e..e60eab7f8a61 100644
--- a/drivers/ptp/ptp_clock.c
+++ b/drivers/ptp/ptp_clock.c
@@ -63,7 +63,7 @@ static void enqueue_external_timestamp(struct timestamp_event_queue *queue,
spin_unlock_irqrestore(&queue->lock, flags);
}
-static s32 scaled_ppm_to_ppb(long ppm)
+s32 scaled_ppm_to_ppb(long ppm)
{
/*
* The 'freq' field in the 'struct timex' is in parts per
@@ -82,6 +82,7 @@ static s32 scaled_ppm_to_ppb(long ppm)
ppb >>= 13;
return (s32) ppb;
}
+EXPORT_SYMBOL(scaled_ppm_to_ppb);
/* posix clock implementation */
diff --git a/drivers/s390/net/qeth_core.h b/drivers/s390/net/qeth_core.h
index 784a2e76a1b0..c7ee07ce3615 100644
--- a/drivers/s390/net/qeth_core.h
+++ b/drivers/s390/net/qeth_core.h
@@ -25,6 +25,8 @@
#include <linux/wait.h>
#include <linux/workqueue.h>
+#include <net/dst.h>
+#include <net/ip6_fib.h>
#include <net/ipv6.h>
#include <net/if_inet6.h>
#include <net/addrconf.h>
@@ -60,7 +62,7 @@ struct qeth_dbf_info {
debug_info_t *id;
};
-#define QETH_DBF_CTRL_LEN 256
+#define QETH_DBF_CTRL_LEN 256U
#define QETH_DBF_TEXT(name, level, text) \
debug_text_event(qeth_dbf[QETH_DBF_##name].id, level, text)
@@ -525,11 +527,6 @@ struct qeth_qdio_info {
};
/**
- * buffer stuff for read channel
- */
-#define QETH_CMD_BUFFER_NO 8
-
-/**
* channel state machine
*/
enum qeth_channel_states {
@@ -537,8 +534,6 @@ enum qeth_channel_states {
CH_STATE_DOWN,
CH_STATE_HALTED,
CH_STATE_STOPPED,
- CH_STATE_RCD,
- CH_STATE_RCD_DONE,
};
/**
* card state machine
@@ -553,15 +548,11 @@ enum qeth_card_states {
* Protocol versions
*/
enum qeth_prot_versions {
+ QETH_PROT_NONE = 0x0000,
QETH_PROT_IPV4 = 0x0004,
QETH_PROT_IPV6 = 0x0006,
};
-enum qeth_cmd_buffer_state {
- BUF_STATE_FREE,
- BUF_STATE_LOCKED,
-};
-
enum qeth_cq {
QETH_CQ_DISABLED = 0,
QETH_CQ_ENABLED = 1,
@@ -575,39 +566,37 @@ struct qeth_ipato {
struct list_head entries;
};
-struct qeth_channel;
+struct qeth_channel {
+ struct ccw_device *ccwdev;
+ enum qeth_channel_states state;
+ atomic_t irq_pending;
+};
struct qeth_cmd_buffer {
- enum qeth_cmd_buffer_state state;
+ unsigned int length;
+ refcount_t ref_count;
struct qeth_channel *channel;
struct qeth_reply *reply;
long timeout;
unsigned char *data;
- void (*finalize)(struct qeth_card *card, struct qeth_cmd_buffer *iob,
- unsigned int length);
- void (*callback)(struct qeth_card *card, struct qeth_channel *channel,
- struct qeth_cmd_buffer *iob);
+ void (*finalize)(struct qeth_card *card, struct qeth_cmd_buffer *iob);
+ void (*callback)(struct qeth_card *card, struct qeth_cmd_buffer *iob);
};
+static inline void qeth_get_cmd(struct qeth_cmd_buffer *iob)
+{
+ refcount_inc(&iob->ref_count);
+}
+
static inline struct qeth_ipa_cmd *__ipa_cmd(struct qeth_cmd_buffer *iob)
{
return (struct qeth_ipa_cmd *)(iob->data + IPA_PDU_HEADER_SIZE);
}
-/**
- * definition of a qeth channel, used for read and write
- */
-struct qeth_channel {
- enum qeth_channel_states state;
- struct ccw1 *ccw;
- spinlock_t iob_lock;
- wait_queue_head_t wait_q;
- struct ccw_device *ccwdev;
-/*command buffer for control data*/
- struct qeth_cmd_buffer iob[QETH_CMD_BUFFER_NO];
- atomic_t irq_pending;
- int io_buf_no;
-};
+static inline struct ccw1 *__ccw_from_cmd(struct qeth_cmd_buffer *iob)
+{
+ return (struct ccw1 *)(iob->data + ALIGN(iob->length, 8));
+}
static inline bool qeth_trylock_channel(struct qeth_channel *channel)
{
@@ -665,6 +654,7 @@ struct qeth_card_info {
__u16 func_level;
char mcl_level[QETH_MCL_LENGTH + 1];
u8 open_when_online:1;
+ u8 use_v1_blkt:1;
u8 is_vm_nic:1;
int mac_bits;
enum qeth_card_types type;
@@ -725,9 +715,6 @@ struct qeth_discipline {
void (*remove) (struct ccwgroup_device *);
int (*set_online) (struct ccwgroup_device *);
int (*set_offline) (struct ccwgroup_device *);
- int (*freeze)(struct ccwgroup_device *);
- int (*thaw) (struct ccwgroup_device *);
- int (*restore)(struct ccwgroup_device *);
int (*do_ioctl)(struct net_device *dev, struct ifreq *rq, int cmd);
int (*control_event_handler)(struct qeth_card *card,
struct qeth_ipa_cmd *cmd);
@@ -764,6 +751,7 @@ struct qeth_card {
enum qeth_card_states state;
spinlock_t lock;
struct ccwgroup_device *gdev;
+ struct qeth_cmd_buffer *read_cmd;
struct qeth_channel read;
struct qeth_channel write;
struct qeth_channel data;
@@ -891,6 +879,17 @@ static inline int qeth_get_ether_cast_type(struct sk_buff *skb)
return RTN_UNICAST;
}
+static inline struct dst_entry *qeth_dst_check_rcu(struct sk_buff *skb, int ipv)
+{
+ struct dst_entry *dst = skb_dst(skb);
+ struct rt6_info *rt;
+
+ rt = (struct rt6_info *) dst;
+ if (dst)
+ dst = dst_check(dst, (ipv == 6) ? rt6_get_cookie(rt) : 0);
+ return dst;
+}
+
static inline void qeth_rx_csum(struct qeth_card *card, struct sk_buff *skb,
u8 flags)
{
@@ -925,12 +924,12 @@ static inline int qeth_is_diagass_supported(struct qeth_card *card,
int qeth_send_simple_setassparms_prot(struct qeth_card *card,
enum qeth_ipa_funcs ipa_func,
- u16 cmd_code, long data,
+ u16 cmd_code, u32 *data,
enum qeth_prot_versions prot);
/* IPv4 variant */
static inline int qeth_send_simple_setassparms(struct qeth_card *card,
enum qeth_ipa_funcs ipa_func,
- u16 cmd_code, long data)
+ u16 cmd_code, u32 *data)
{
return qeth_send_simple_setassparms_prot(card, ipa_func, cmd_code,
data, QETH_PROT_IPV4);
@@ -938,7 +937,7 @@ static inline int qeth_send_simple_setassparms(struct qeth_card *card,
static inline int qeth_send_simple_setassparms_v6(struct qeth_card *card,
enum qeth_ipa_funcs ipa_func,
- u16 cmd_code, long data)
+ u16 cmd_code, u32 *data)
{
return qeth_send_simple_setassparms_prot(card, ipa_func, cmd_code,
data, QETH_PROT_IPV6);
@@ -979,8 +978,23 @@ int qeth_send_ipa_cmd(struct qeth_card *, struct qeth_cmd_buffer *,
int (*reply_cb)
(struct qeth_card *, struct qeth_reply *, unsigned long),
void *);
-struct qeth_cmd_buffer *qeth_get_ipacmd_buffer(struct qeth_card *,
- enum qeth_ipa_cmds, enum qeth_prot_versions);
+struct qeth_cmd_buffer *qeth_ipa_alloc_cmd(struct qeth_card *card,
+ enum qeth_ipa_cmds cmd_code,
+ enum qeth_prot_versions prot,
+ unsigned int data_length);
+struct qeth_cmd_buffer *qeth_alloc_cmd(struct qeth_channel *channel,
+ unsigned int length, unsigned int ccws,
+ long timeout);
+struct qeth_cmd_buffer *qeth_get_setassparms_cmd(struct qeth_card *card,
+ enum qeth_ipa_funcs ipa_func,
+ u16 cmd_code,
+ unsigned int data_length,
+ enum qeth_prot_versions prot);
+struct qeth_cmd_buffer *qeth_get_diag_cmd(struct qeth_card *card,
+ enum qeth_diags_cmds sub_cmd,
+ unsigned int data_length);
+void qeth_put_cmd(struct qeth_cmd_buffer *iob);
+
struct sk_buff *qeth_core_get_next_skb(struct qeth_card *,
struct qeth_qdio_buffer *, struct qdio_buffer_element **, int *,
struct qeth_hdr **);
@@ -989,15 +1003,13 @@ int qeth_poll(struct napi_struct *napi, int budget);
void qeth_clear_ipacmd_list(struct qeth_card *);
int qeth_qdio_clear_card(struct qeth_card *, int);
void qeth_clear_working_pool_list(struct qeth_card *);
-void qeth_clear_cmd_buffers(struct qeth_channel *);
void qeth_drain_output_queues(struct qeth_card *card);
void qeth_setadp_promisc_mode(struct qeth_card *);
int qeth_setadpparms_change_macaddr(struct qeth_card *);
void qeth_tx_timeout(struct net_device *);
-void qeth_release_buffer(struct qeth_channel *, struct qeth_cmd_buffer *);
+void qeth_notify_reply(struct qeth_reply *reply, int reason);
void qeth_prepare_ipa_cmd(struct qeth_card *card, struct qeth_cmd_buffer *iob,
u16 cmd_length);
-struct qeth_cmd_buffer *qeth_wait_for_buffer(struct qeth_channel *);
int qeth_query_switch_attributes(struct qeth_card *card,
struct qeth_switch_info *sw_info);
int qeth_query_card_info(struct qeth_card *card,
@@ -1014,10 +1026,6 @@ int qeth_configure_cq(struct qeth_card *, enum qeth_cq);
int qeth_hw_trap(struct qeth_card *, enum qeth_diags_trap_action);
void qeth_trace_features(struct qeth_card *);
int qeth_setassparms_cb(struct qeth_card *, struct qeth_reply *, unsigned long);
-struct qeth_cmd_buffer *qeth_get_setassparms_cmd(struct qeth_card *,
- enum qeth_ipa_funcs,
- __u16, __u16,
- enum qeth_prot_versions);
int qeth_set_features(struct net_device *, netdev_features_t);
void qeth_enable_hw_features(struct net_device *dev);
netdev_features_t qeth_fix_features(struct net_device *, netdev_features_t);
@@ -1032,11 +1040,10 @@ int qeth_stop(struct net_device *dev);
int qeth_vm_request_mac(struct qeth_card *card);
int qeth_xmit(struct qeth_card *card, struct sk_buff *skb,
- struct qeth_qdio_out_q *queue, int ipv, int cast_type,
+ struct qeth_qdio_out_q *queue, int ipv,
void (*fill_header)(struct qeth_qdio_out_q *queue,
struct qeth_hdr *hdr, struct sk_buff *skb,
- int ipv, int cast_type,
- unsigned int data_len));
+ int ipv, unsigned int data_len));
/* exports for OSN */
int qeth_osn_assist(struct net_device *, void *, int);
diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
index b1823d75dd35..4d0caeebc802 100644
--- a/drivers/s390/net/qeth_core_main.c
+++ b/drivers/s390/net/qeth_core_main.c
@@ -20,6 +20,7 @@
#include <linux/ip.h>
#include <linux/tcp.h>
#include <linux/mii.h>
+#include <linux/mm.h>
#include <linux/kthread.h>
#include <linux/slab.h>
#include <linux/if_vlan.h>
@@ -62,9 +63,7 @@ static struct device *qeth_core_root_dev;
static struct lock_class_key qdio_out_skb_queue_key;
static void qeth_issue_next_read_cb(struct qeth_card *card,
- struct qeth_channel *channel,
struct qeth_cmd_buffer *iob);
-static struct qeth_cmd_buffer *qeth_get_buffer(struct qeth_channel *);
static void qeth_free_buffer_pool(struct qeth_card *);
static int qeth_qdio_establish(struct qeth_card *);
static void qeth_free_qdio_queues(struct qeth_card *card);
@@ -292,7 +291,7 @@ static int qeth_cq_init(struct qeth_card *card)
int rc;
if (card->options.cq == QETH_CQ_ENABLED) {
- QETH_DBF_TEXT(SETUP, 2, "cqinit");
+ QETH_CARD_TEXT(card, 2, "cqinit");
qdio_reset_buffers(card->qdio.c_q->qdio_bufs,
QDIO_MAX_BUFFERS_PER_Q);
card->qdio.c_q->next_buf_to_init = 127;
@@ -300,7 +299,7 @@ static int qeth_cq_init(struct qeth_card *card)
card->qdio.no_in_queues - 1, 0,
127);
if (rc) {
- QETH_DBF_TEXT_(SETUP, 2, "1err%d", rc);
+ QETH_CARD_TEXT_(card, 2, "1err%d", rc);
goto out;
}
}
@@ -317,7 +316,7 @@ static int qeth_alloc_cq(struct qeth_card *card)
int i;
struct qdio_outbuf_state *outbuf_states;
- QETH_DBF_TEXT(SETUP, 2, "cqon");
+ QETH_CARD_TEXT(card, 2, "cqon");
card->qdio.c_q = qeth_alloc_qdio_queue();
if (!card->qdio.c_q) {
rc = -1;
@@ -339,11 +338,11 @@ static int qeth_alloc_cq(struct qeth_card *card)
outbuf_states += QDIO_MAX_BUFFERS_PER_Q;
}
} else {
- QETH_DBF_TEXT(SETUP, 2, "nocq");
+ QETH_CARD_TEXT(card, 2, "nocq");
card->qdio.c_q = NULL;
card->qdio.no_in_queues = 1;
}
- QETH_DBF_TEXT_(SETUP, 2, "iqc%d", card->qdio.no_in_queues);
+ QETH_CARD_TEXT_(card, 2, "iqc%d", card->qdio.no_in_queues);
rc = 0;
out:
return rc;
@@ -486,42 +485,39 @@ static inline int qeth_is_cq(struct qeth_card *card, unsigned int queue)
queue == card->qdio.no_in_queues - 1;
}
-static void qeth_setup_ccw(struct ccw1 *ccw, u8 cmd_code, u32 len, void *data)
+static void qeth_setup_ccw(struct ccw1 *ccw, u8 cmd_code, u8 flags, u32 len,
+ void *data)
{
ccw->cmd_code = cmd_code;
- ccw->flags = CCW_FLAG_SLI;
+ ccw->flags = flags | CCW_FLAG_SLI;
ccw->count = len;
ccw->cda = (__u32) __pa(data);
}
static int __qeth_issue_next_read(struct qeth_card *card)
{
- struct qeth_channel *channel = &card->read;
- struct qeth_cmd_buffer *iob;
+ struct qeth_cmd_buffer *iob = card->read_cmd;
+ struct qeth_channel *channel = iob->channel;
+ struct ccw1 *ccw = __ccw_from_cmd(iob);
int rc;
QETH_CARD_TEXT(card, 5, "issnxrd");
if (channel->state != CH_STATE_UP)
return -EIO;
- iob = qeth_get_buffer(channel);
- if (!iob) {
- dev_warn(&card->gdev->dev, "The qeth device driver "
- "failed to recover an error on the device\n");
- QETH_DBF_MESSAGE(2, "issue_next_read on device %x failed: no iob available\n",
- CARD_DEVID(card));
- return -ENOMEM;
- }
- qeth_setup_ccw(channel->ccw, CCW_CMD_READ, QETH_BUFSIZE, iob->data);
+ memset(iob->data, 0, iob->length);
+ qeth_setup_ccw(ccw, CCW_CMD_READ, 0, iob->length, iob->data);
iob->callback = qeth_issue_next_read_cb;
+ /* keep the cmd alive after completion: */
+ qeth_get_cmd(iob);
+
QETH_CARD_TEXT(card, 6, "noirqpnd");
- rc = ccw_device_start(channel->ccwdev, channel->ccw,
- (addr_t) iob, 0, 0);
+ rc = ccw_device_start(channel->ccwdev, ccw, (addr_t) iob, 0, 0);
if (rc) {
QETH_DBF_MESSAGE(2, "error %i on device %x when starting next read ccw!\n",
rc, CARD_DEVID(card));
atomic_set(&channel->irq_pending, 0);
- qeth_release_buffer(channel, iob);
+ qeth_put_cmd(iob);
card->read_or_write_problem = 1;
qeth_schedule_recovery(card);
wake_up(&card->wait_q);
@@ -577,11 +573,12 @@ static void qeth_dequeue_reply(struct qeth_card *card, struct qeth_reply *reply)
spin_unlock_irq(&card->lock);
}
-static void qeth_notify_reply(struct qeth_reply *reply, int reason)
+void qeth_notify_reply(struct qeth_reply *reply, int reason)
{
reply->rc = reason;
complete(&reply->received);
}
+EXPORT_SYMBOL_GPL(qeth_notify_reply);
static void qeth_issue_ipa_msg(struct qeth_ipa_cmd *cmd, int rc,
struct qeth_card *card)
@@ -692,48 +689,21 @@ static int qeth_check_idx_response(struct qeth_card *card,
return 0;
}
-static struct qeth_cmd_buffer *__qeth_get_buffer(struct qeth_channel *channel)
-{
- __u8 index;
-
- index = channel->io_buf_no;
- do {
- if (channel->iob[index].state == BUF_STATE_FREE) {
- channel->iob[index].state = BUF_STATE_LOCKED;
- channel->iob[index].timeout = QETH_TIMEOUT;
- channel->io_buf_no = (channel->io_buf_no + 1) %
- QETH_CMD_BUFFER_NO;
- memset(channel->iob[index].data, 0, QETH_BUFSIZE);
- return channel->iob + index;
- }
- index = (index + 1) % QETH_CMD_BUFFER_NO;
- } while (index != channel->io_buf_no);
-
- return NULL;
-}
-
-void qeth_release_buffer(struct qeth_channel *channel,
- struct qeth_cmd_buffer *iob)
+void qeth_put_cmd(struct qeth_cmd_buffer *iob)
{
- unsigned long flags;
-
- spin_lock_irqsave(&channel->iob_lock, flags);
- iob->state = BUF_STATE_FREE;
- iob->callback = NULL;
- if (iob->reply) {
- qeth_put_reply(iob->reply);
- iob->reply = NULL;
+ if (refcount_dec_and_test(&iob->ref_count)) {
+ if (iob->reply)
+ qeth_put_reply(iob->reply);
+ kfree(iob->data);
+ kfree(iob);
}
- spin_unlock_irqrestore(&channel->iob_lock, flags);
- wake_up(&channel->wait_q);
}
-EXPORT_SYMBOL_GPL(qeth_release_buffer);
+EXPORT_SYMBOL_GPL(qeth_put_cmd);
static void qeth_release_buffer_cb(struct qeth_card *card,
- struct qeth_channel *channel,
struct qeth_cmd_buffer *iob)
{
- qeth_release_buffer(channel, iob);
+ qeth_put_cmd(iob);
}
static void qeth_cancel_cmd(struct qeth_cmd_buffer *iob, int rc)
@@ -742,41 +712,38 @@ static void qeth_cancel_cmd(struct qeth_cmd_buffer *iob, int rc)
if (reply)
qeth_notify_reply(reply, rc);
- qeth_release_buffer(iob->channel, iob);
+ qeth_put_cmd(iob);
}
-static struct qeth_cmd_buffer *qeth_get_buffer(struct qeth_channel *channel)
+struct qeth_cmd_buffer *qeth_alloc_cmd(struct qeth_channel *channel,
+ unsigned int length, unsigned int ccws,
+ long timeout)
{
- struct qeth_cmd_buffer *buffer = NULL;
- unsigned long flags;
+ struct qeth_cmd_buffer *iob;
- spin_lock_irqsave(&channel->iob_lock, flags);
- buffer = __qeth_get_buffer(channel);
- spin_unlock_irqrestore(&channel->iob_lock, flags);
- return buffer;
-}
+ if (length > QETH_BUFSIZE)
+ return NULL;
-struct qeth_cmd_buffer *qeth_wait_for_buffer(struct qeth_channel *channel)
-{
- struct qeth_cmd_buffer *buffer;
- wait_event(channel->wait_q,
- ((buffer = qeth_get_buffer(channel)) != NULL));
- return buffer;
-}
-EXPORT_SYMBOL_GPL(qeth_wait_for_buffer);
+ iob = kzalloc(sizeof(*iob), GFP_KERNEL);
+ if (!iob)
+ return NULL;
-void qeth_clear_cmd_buffers(struct qeth_channel *channel)
-{
- int cnt;
+ iob->data = kzalloc(ALIGN(length, 8) + ccws * sizeof(struct ccw1),
+ GFP_KERNEL | GFP_DMA);
+ if (!iob->data) {
+ kfree(iob);
+ return NULL;
+ }
- for (cnt = 0; cnt < QETH_CMD_BUFFER_NO; cnt++)
- qeth_release_buffer(channel, &channel->iob[cnt]);
- channel->io_buf_no = 0;
+ refcount_set(&iob->ref_count, 1);
+ iob->channel = channel;
+ iob->timeout = timeout;
+ iob->length = length;
+ return iob;
}
-EXPORT_SYMBOL_GPL(qeth_clear_cmd_buffers);
+EXPORT_SYMBOL_GPL(qeth_alloc_cmd);
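/*
 * Illustrative sketch, not from this patch: command buffers are now allocated
 * individually and reference counted instead of being claimed from a fixed
 * per-channel pool.  A caller that needs the buffer to outlive the I/O
 * completion takes an extra reference, and every reference is dropped with
 * qeth_put_cmd().  The function name below is hypothetical.
 */
static int example_cmd_lifecycle(struct qeth_card *card)
{
        struct qeth_cmd_buffer *iob;

        iob = qeth_alloc_cmd(&card->write, QETH_BUFSIZE, 1, QETH_TIMEOUT);
        if (!iob)
                return -ENOMEM;

        /* keep the cmd alive after its completion callback ran: */
        qeth_get_cmd(iob);

        /* ... start the I/O; the completion path drops its own reference ... */

        qeth_put_cmd(iob);
        return 0;
}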
static void qeth_issue_next_read_cb(struct qeth_card *card,
- struct qeth_channel *channel,
struct qeth_cmd_buffer *iob)
{
struct qeth_ipa_cmd *cmd = NULL;
@@ -849,7 +816,8 @@ out:
memcpy(&card->seqno.pdu_hdr_ack,
QETH_PDU_HEADER_SEQ_NO(iob->data),
QETH_SEQ_NO_LENGTH);
- qeth_release_buffer(channel, iob);
+ qeth_put_cmd(iob);
+ __qeth_issue_next_read(card);
}
static int qeth_set_thread_start_bit(struct qeth_card *card,
@@ -976,7 +944,7 @@ static int qeth_get_problem(struct qeth_card *card, struct ccw_device *cdev,
}
static int qeth_check_irb_error(struct qeth_card *card, struct ccw_device *cdev,
- unsigned long intparm, struct irb *irb)
+ struct irb *irb)
{
if (!IS_ERR(irb))
return 0;
@@ -993,12 +961,6 @@ static int qeth_check_irb_error(struct qeth_card *card, struct ccw_device *cdev,
" on the device\n");
QETH_CARD_TEXT(card, 2, "ckirberr");
QETH_CARD_TEXT_(card, 2, " rc%d", -ETIMEDOUT);
- if (intparm == QETH_RCD_PARM) {
- if (card->data.ccwdev == cdev) {
- card->data.state = CH_STATE_DOWN;
- wake_up(&card->wait_q);
- }
- }
return -ETIMEDOUT;
default:
QETH_DBF_MESSAGE(2, "unknown error %ld on channel %x\n",
@@ -1041,7 +1003,7 @@ static void qeth_irq(struct ccw_device *cdev, unsigned long intparm,
if (qeth_intparm_is_iob(intparm))
iob = (struct qeth_cmd_buffer *) __va((addr_t)intparm);
- rc = qeth_check_irb_error(card, cdev, intparm, irb);
+ rc = qeth_check_irb_error(card, cdev, irb);
if (rc) {
/* IO was terminated, free its resources. */
if (iob)
@@ -1059,11 +1021,6 @@ static void qeth_irq(struct ccw_device *cdev, unsigned long intparm,
if (irb->scsw.cmd.fctl & (SCSW_FCTL_HALT_FUNC))
channel->state = CH_STATE_HALTED;
- /*let's wake up immediately on data channel*/
- if ((channel == &card->data) && (intparm != 0) &&
- (intparm != QETH_RCD_PARM))
- goto out;
-
if (intparm == QETH_CLEAR_CHANNEL_PARM) {
QETH_CARD_TEXT(card, 6, "clrchpar");
/* we don't have to handle this further */
@@ -1093,10 +1050,7 @@ static void qeth_irq(struct ccw_device *cdev, unsigned long intparm,
print_hex_dump(KERN_WARNING, "qeth: sense data ",
DUMP_PREFIX_OFFSET, 16, 1, irb->ecw, 32, 1);
}
- if (intparm == QETH_RCD_PARM) {
- channel->state = CH_STATE_DOWN;
- goto out;
- }
+
rc = qeth_get_problem(card, cdev, irb);
if (rc) {
card->read_or_write_problem = 1;
@@ -1108,18 +1062,8 @@ static void qeth_irq(struct ccw_device *cdev, unsigned long intparm,
}
}
- if (intparm == QETH_RCD_PARM) {
- channel->state = CH_STATE_RCD_DONE;
- goto out;
- }
- if (channel == &card->data)
- return;
- if (channel == &card->read &&
- channel->state == CH_STATE_UP)
- __qeth_issue_next_read(card);
-
if (iob && iob->callback)
- iob->callback(card, iob->channel, iob);
+ iob->callback(card, iob);
out:
wake_up(&card->wait_q);
@@ -1222,56 +1166,26 @@ static void qeth_free_buffer_pool(struct qeth_card *card)
static void qeth_clean_channel(struct qeth_channel *channel)
{
struct ccw_device *cdev = channel->ccwdev;
- int cnt;
QETH_DBF_TEXT(SETUP, 2, "freech");
spin_lock_irq(get_ccwdev_lock(cdev));
cdev->handler = NULL;
spin_unlock_irq(get_ccwdev_lock(cdev));
-
- for (cnt = 0; cnt < QETH_CMD_BUFFER_NO; cnt++)
- kfree(channel->iob[cnt].data);
- kfree(channel->ccw);
}
-static int qeth_setup_channel(struct qeth_channel *channel, bool alloc_buffers)
+static void qeth_setup_channel(struct qeth_channel *channel)
{
struct ccw_device *cdev = channel->ccwdev;
- int cnt;
QETH_DBF_TEXT(SETUP, 2, "setupch");
- channel->ccw = kmalloc(sizeof(struct ccw1), GFP_KERNEL | GFP_DMA);
- if (!channel->ccw)
- return -ENOMEM;
channel->state = CH_STATE_DOWN;
atomic_set(&channel->irq_pending, 0);
- init_waitqueue_head(&channel->wait_q);
spin_lock_irq(get_ccwdev_lock(cdev));
cdev->handler = qeth_irq;
spin_unlock_irq(get_ccwdev_lock(cdev));
-
- if (!alloc_buffers)
- return 0;
-
- for (cnt = 0; cnt < QETH_CMD_BUFFER_NO; cnt++) {
- channel->iob[cnt].data = kmalloc(QETH_BUFSIZE,
- GFP_KERNEL | GFP_DMA);
- if (channel->iob[cnt].data == NULL)
- break;
- channel->iob[cnt].state = BUF_STATE_FREE;
- channel->iob[cnt].channel = channel;
- }
- if (cnt < QETH_CMD_BUFFER_NO) {
- qeth_clean_channel(channel);
- return -ENOMEM;
- }
- channel->io_buf_no = 0;
- spin_lock_init(&channel->iob_lock);
-
- return 0;
}
static int qeth_osa_set_output_queues(struct qeth_card *card, bool single)
@@ -1306,7 +1220,7 @@ static int qeth_update_from_chp_desc(struct qeth_card *card)
struct channel_path_desc_fmt0 *chp_dsc;
int rc = 0;
- QETH_DBF_TEXT(SETUP, 2, "chp_desc");
+ QETH_CARD_TEXT(card, 2, "chp_desc");
ccwdev = card->data.ccwdev;
chp_dsc = ccw_device_get_chp_desc(ccwdev, 0);
@@ -1320,14 +1234,14 @@ static int qeth_update_from_chp_desc(struct qeth_card *card)
rc = qeth_osa_set_output_queues(card, chp_dsc->chpp & 0x02);
kfree(chp_dsc);
- QETH_DBF_TEXT_(SETUP, 2, "nr:%x", card->qdio.no_out_queues);
- QETH_DBF_TEXT_(SETUP, 2, "lvl:%02x", card->info.func_level);
+ QETH_CARD_TEXT_(card, 2, "nr:%x", card->qdio.no_out_queues);
+ QETH_CARD_TEXT_(card, 2, "lvl:%02x", card->info.func_level);
return rc;
}
static void qeth_init_qdio_info(struct qeth_card *card)
{
- QETH_DBF_TEXT(SETUP, 4, "intqdinf");
+ QETH_CARD_TEXT(card, 4, "intqdinf");
atomic_set(&card->qdio.state, QETH_QDIO_UNINITIALIZED);
card->qdio.do_prio_queueing = QETH_PRIOQ_DEFAULT;
card->qdio.default_out_queue = QETH_DEFAULT_QUEUE;
@@ -1393,8 +1307,7 @@ static void qeth_start_kernel_thread(struct work_struct *work)
static void qeth_buffer_reclaim_work(struct work_struct *);
static void qeth_setup_card(struct qeth_card *card)
{
- QETH_DBF_TEXT(SETUP, 2, "setupcrd");
- QETH_DBF_HEX(SETUP, 2, &card, sizeof(void *));
+ QETH_CARD_TEXT(card, 2, "setupcrd");
card->info.type = CARD_RDEV(card)->id.driver_info;
card->state = CARD_STATE_DOWN;
@@ -1442,21 +1355,19 @@ static struct qeth_card *qeth_alloc_card(struct ccwgroup_device *gdev)
dev_name(&gdev->dev));
if (!card->event_wq)
goto out_wq;
- if (qeth_setup_channel(&card->read, true))
- goto out_ip;
- if (qeth_setup_channel(&card->write, true))
- goto out_channel;
- if (qeth_setup_channel(&card->data, false))
- goto out_data;
+
+ card->read_cmd = qeth_alloc_cmd(&card->read, QETH_BUFSIZE, 1, 0);
+ if (!card->read_cmd)
+ goto out_read_cmd;
+
+ qeth_setup_channel(&card->read);
+ qeth_setup_channel(&card->write);
+ qeth_setup_channel(&card->data);
card->qeth_service_level.seq_print = qeth_core_sl_print;
register_service_level(&card->qeth_service_level);
return card;
-out_data:
- qeth_clean_channel(&card->write);
-out_channel:
- qeth_clean_channel(&card->read);
-out_ip:
+out_read_cmd:
destroy_workqueue(card->event_wq);
out_wq:
dev_set_drvdata(&gdev->dev, NULL);
@@ -1582,60 +1493,6 @@ int qeth_qdio_clear_card(struct qeth_card *card, int use_halt)
}
EXPORT_SYMBOL_GPL(qeth_qdio_clear_card);
-static int qeth_read_conf_data(struct qeth_card *card, void **buffer,
- int *length)
-{
- struct ciw *ciw;
- char *rcd_buf;
- int ret;
- struct qeth_channel *channel = &card->data;
-
- /*
- * scan for RCD command in extended SenseID data
- */
- ciw = ccw_device_get_ciw(channel->ccwdev, CIW_TYPE_RCD);
- if (!ciw || ciw->cmd == 0)
- return -EOPNOTSUPP;
- rcd_buf = kzalloc(ciw->count, GFP_KERNEL | GFP_DMA);
- if (!rcd_buf)
- return -ENOMEM;
-
- qeth_setup_ccw(channel->ccw, ciw->cmd, ciw->count, rcd_buf);
- channel->state = CH_STATE_RCD;
- spin_lock_irq(get_ccwdev_lock(channel->ccwdev));
- ret = ccw_device_start_timeout(channel->ccwdev, channel->ccw,
- QETH_RCD_PARM, LPM_ANYPATH, 0,
- QETH_RCD_TIMEOUT);
- spin_unlock_irq(get_ccwdev_lock(channel->ccwdev));
- if (!ret)
- wait_event(card->wait_q,
- (channel->state == CH_STATE_RCD_DONE ||
- channel->state == CH_STATE_DOWN));
- if (channel->state == CH_STATE_DOWN)
- ret = -EIO;
- else
- channel->state = CH_STATE_DOWN;
- if (ret) {
- kfree(rcd_buf);
- *buffer = NULL;
- *length = 0;
- } else {
- *length = ciw->count;
- *buffer = rcd_buf;
- }
- return ret;
-}
-
-static void qeth_configure_unitaddr(struct qeth_card *card, char *prcd)
-{
- QETH_DBF_TEXT(SETUP, 2, "cfgunit");
- card->info.chpid = prcd[30];
- card->info.unit_addr2 = prcd[31];
- card->info.cula = prcd[63];
- card->info.is_vm_nic = ((prcd[0x10] == _ascebc['V']) &&
- (prcd[0x11] == _ascebc['M']));
-}
-
static enum qeth_discipline_id qeth_vm_detect_layer(struct qeth_card *card)
{
enum qeth_discipline_id disc = QETH_DISCIPLINE_UNDETERMINED;
@@ -1645,7 +1502,7 @@ static enum qeth_discipline_id qeth_vm_detect_layer(struct qeth_card *card)
char userid[80];
int rc = 0;
- QETH_DBF_TEXT(SETUP, 2, "vmlayer");
+ QETH_CARD_TEXT(card, 2, "vmlayer");
cpcmd("QUERY USERID", userid, sizeof(userid), &rc);
if (rc)
@@ -1688,7 +1545,7 @@ out:
kfree(response);
kfree(request);
if (rc)
- QETH_DBF_TEXT_(SETUP, 2, "err%x", rc);
+ QETH_CARD_TEXT_(card, 2, "err%x", rc);
return disc;
}
@@ -1705,24 +1562,23 @@ static enum qeth_discipline_id qeth_enforce_discipline(struct qeth_card *card)
switch (disc) {
case QETH_DISCIPLINE_LAYER2:
- QETH_DBF_TEXT(SETUP, 3, "force l2");
+ QETH_CARD_TEXT(card, 3, "force l2");
break;
case QETH_DISCIPLINE_LAYER3:
- QETH_DBF_TEXT(SETUP, 3, "force l3");
+ QETH_CARD_TEXT(card, 3, "force l3");
break;
default:
- QETH_DBF_TEXT(SETUP, 3, "force no");
+ QETH_CARD_TEXT(card, 3, "force no");
}
return disc;
}
-static void qeth_configure_blkt_default(struct qeth_card *card, char *prcd)
+static void qeth_set_blkt_defaults(struct qeth_card *card)
{
- QETH_DBF_TEXT(SETUP, 2, "cfgblkt");
+ QETH_CARD_TEXT(card, 2, "cfgblkt");
- if (prcd[74] == 0xF0 && prcd[75] == 0xF0 &&
- prcd[76] >= 0xF1 && prcd[76] <= 0xF4) {
+ if (card->info.use_v1_blkt) {
card->info.blkt.time_total = 0;
card->info.blkt.inter_packet = 0;
card->info.blkt.inter_packet_jumbo = 0;
@@ -1758,11 +1614,8 @@ static void qeth_init_func_level(struct qeth_card *card)
}
static void qeth_idx_finalize_cmd(struct qeth_card *card,
- struct qeth_cmd_buffer *iob,
- unsigned int length)
+ struct qeth_cmd_buffer *iob)
{
- qeth_setup_ccw(iob->channel->ccw, CCW_CMD_WRITE, length, iob->data);
-
memcpy(QETH_TRANSPORT_HEADER_SEQ_NO(iob->data), &card->seqno.trans_hdr,
QETH_SEQ_NO_LENGTH);
if (iob->channel == &card->write)
@@ -1779,10 +1632,9 @@ static int qeth_peer_func_level(int level)
}
static void qeth_mpc_finalize_cmd(struct qeth_card *card,
- struct qeth_cmd_buffer *iob,
- unsigned int length)
+ struct qeth_cmd_buffer *iob)
{
- qeth_idx_finalize_cmd(card, iob, length);
+ qeth_idx_finalize_cmd(card, iob);
memcpy(QETH_PDU_HEADER_SEQ_NO(iob->data),
&card->seqno.pdu_hdr, QETH_SEQ_NO_LENGTH);
@@ -1794,10 +1646,26 @@ static void qeth_mpc_finalize_cmd(struct qeth_card *card,
iob->callback = qeth_release_buffer_cb;
}
+static struct qeth_cmd_buffer *qeth_mpc_alloc_cmd(struct qeth_card *card,
+ void *data,
+ unsigned int data_length)
+{
+ struct qeth_cmd_buffer *iob;
+
+ iob = qeth_alloc_cmd(&card->write, data_length, 1, QETH_TIMEOUT);
+ if (!iob)
+ return NULL;
+
+ memcpy(iob->data, data, data_length);
+ qeth_setup_ccw(__ccw_from_cmd(iob), CCW_CMD_WRITE, 0, data_length,
+ iob->data);
+ iob->finalize = qeth_mpc_finalize_cmd;
+ return iob;
+}
+
/**
* qeth_send_control_data() - send control command to the card
* @card: qeth_card structure pointer
- * @len: size of the command buffer
* @iob: qeth_cmd_buffer pointer
* @reply_cb: callback function pointer
* @cb_card: pointer to the qeth_card structure
@@ -1817,7 +1685,7 @@ static void qeth_mpc_finalize_cmd(struct qeth_card *card,
* field 'param' of the structure qeth_reply.
*/
-static int qeth_send_control_data(struct qeth_card *card, int len,
+static int qeth_send_control_data(struct qeth_card *card,
struct qeth_cmd_buffer *iob,
int (*reply_cb)(struct qeth_card *cb_card,
struct qeth_reply *cb_reply,
@@ -1833,13 +1701,13 @@ static int qeth_send_control_data(struct qeth_card *card, int len,
reply = qeth_alloc_reply(card);
if (!reply) {
- qeth_release_buffer(channel, iob);
+ qeth_put_cmd(iob);
return -ENOMEM;
}
reply->callback = reply_cb;
reply->param = reply_param;
- /* pairs with qeth_release_buffer(): */
+ /* pairs with qeth_put_cmd(): */
qeth_get_reply(reply);
iob->reply = reply;
@@ -1848,18 +1716,19 @@ static int qeth_send_control_data(struct qeth_card *card, int len,
timeout);
if (timeout <= 0) {
qeth_put_reply(reply);
- qeth_release_buffer(channel, iob);
+ qeth_put_cmd(iob);
return (timeout == -ERESTARTSYS) ? -EINTR : -ETIME;
}
- iob->finalize(card, iob, len);
- QETH_DBF_HEX(CTRL, 2, iob->data, min(len, QETH_DBF_CTRL_LEN));
+ if (iob->finalize)
+ iob->finalize(card, iob);
+ QETH_DBF_HEX(CTRL, 2, iob->data, min(iob->length, QETH_DBF_CTRL_LEN));
qeth_enqueue_reply(card, reply);
QETH_CARD_TEXT(card, 6, "noirqpnd");
spin_lock_irq(get_ccwdev_lock(channel->ccwdev));
- rc = ccw_device_start_timeout(channel->ccwdev, channel->ccw,
+ rc = ccw_device_start_timeout(channel->ccwdev, __ccw_from_cmd(iob),
(addr_t) iob, 0, 0, timeout);
spin_unlock_irq(get_ccwdev_lock(channel->ccwdev));
if (rc) {
@@ -1868,7 +1737,7 @@ static int qeth_send_control_data(struct qeth_card *card, int len,
QETH_CARD_TEXT_(card, 2, " err%d", rc);
qeth_dequeue_reply(card, reply);
qeth_put_reply(reply);
- qeth_release_buffer(channel, iob);
+ qeth_put_cmd(iob);
atomic_set(&channel->irq_pending, 0);
wake_up(&card->wait_q);
return rc;
@@ -1886,6 +1755,46 @@ static int qeth_send_control_data(struct qeth_card *card, int len,
return rc;
}
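/*
 * Illustrative sketch, not from this patch: with the length now carried in
 * iob->length, a caller only passes the iob plus an optional reply callback
 * and parameter.  The data cookie handed to the callback is the iob itself
 * (see qeth_cm_enable_cb() below); the names here are hypothetical.
 */
static int example_reply_cb(struct qeth_card *card, struct qeth_reply *reply,
                            unsigned long data)
{
        struct qeth_cmd_buffer *iob = (struct qeth_cmd_buffer *)data;

        /* parse iob->data, stash results via reply->param as needed */
        return 0;
}

static int example_send(struct qeth_card *card, struct qeth_cmd_buffer *iob)
{
        return qeth_send_control_data(card, iob, example_reply_cb, NULL);
}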
+static void qeth_read_conf_data_cb(struct qeth_card *card,
+ struct qeth_cmd_buffer *iob)
+{
+ unsigned char *prcd = iob->data;
+
+ QETH_CARD_TEXT(card, 2, "cfgunit");
+ card->info.chpid = prcd[30];
+ card->info.unit_addr2 = prcd[31];
+ card->info.cula = prcd[63];
+ card->info.is_vm_nic = ((prcd[0x10] == _ascebc['V']) &&
+ (prcd[0x11] == _ascebc['M']));
+ card->info.use_v1_blkt = prcd[74] == 0xF0 && prcd[75] == 0xF0 &&
+ prcd[76] >= 0xF1 && prcd[76] <= 0xF4;
+
+ qeth_notify_reply(iob->reply, 0);
+ qeth_put_cmd(iob);
+}
+
+static int qeth_read_conf_data(struct qeth_card *card)
+{
+ struct qeth_channel *channel = &card->data;
+ struct qeth_cmd_buffer *iob;
+ struct ciw *ciw;
+
+ /* scan for RCD command in extended SenseID data */
+ ciw = ccw_device_get_ciw(channel->ccwdev, CIW_TYPE_RCD);
+ if (!ciw || ciw->cmd == 0)
+ return -EOPNOTSUPP;
+
+ iob = qeth_alloc_cmd(channel, ciw->count, 1, QETH_RCD_TIMEOUT);
+ if (!iob)
+ return -ENOMEM;
+
+ iob->callback = qeth_read_conf_data_cb;
+ qeth_setup_ccw(__ccw_from_cmd(iob), ciw->cmd, 0, iob->length,
+ iob->data);
+
+ return qeth_send_control_data(card, iob, NULL, NULL);
+}
+
static int qeth_idx_check_activate_response(struct qeth_card *card,
struct qeth_channel *channel,
struct qeth_cmd_buffer *iob)
@@ -1900,8 +1809,8 @@ static int qeth_idx_check_activate_response(struct qeth_card *card,
return 0;
/* negative reply: */
- QETH_DBF_TEXT_(SETUP, 2, "idxneg%c",
- QETH_IDX_ACT_CAUSE_CODE(iob->data));
+ QETH_CARD_TEXT_(card, 2, "idxneg%c",
+ QETH_IDX_ACT_CAUSE_CODE(iob->data));
switch (QETH_IDX_ACT_CAUSE_CODE(iob->data)) {
case QETH_IDX_ACT_ERR_EXCL:
@@ -1920,14 +1829,14 @@ static int qeth_idx_check_activate_response(struct qeth_card *card,
}
}
-static void qeth_idx_query_read_cb(struct qeth_card *card,
- struct qeth_channel *channel,
- struct qeth_cmd_buffer *iob)
+static void qeth_idx_activate_read_channel_cb(struct qeth_card *card,
+ struct qeth_cmd_buffer *iob)
{
+ struct qeth_channel *channel = iob->channel;
u16 peer_level;
int rc;
- QETH_DBF_TEXT(SETUP, 2, "idxrdcb");
+ QETH_CARD_TEXT(card, 2, "idxrdcb");
rc = qeth_idx_check_activate_response(card, channel, iob);
if (rc)
@@ -1950,17 +1859,17 @@ static void qeth_idx_query_read_cb(struct qeth_card *card,
out:
qeth_notify_reply(iob->reply, rc);
- qeth_release_buffer(channel, iob);
+ qeth_put_cmd(iob);
}
-static void qeth_idx_query_write_cb(struct qeth_card *card,
- struct qeth_channel *channel,
- struct qeth_cmd_buffer *iob)
+static void qeth_idx_activate_write_channel_cb(struct qeth_card *card,
+ struct qeth_cmd_buffer *iob)
{
+ struct qeth_channel *channel = iob->channel;
u16 peer_level;
int rc;
- QETH_DBF_TEXT(SETUP, 2, "idxwrcb");
+ QETH_CARD_TEXT(card, 2, "idxwrcb");
rc = qeth_idx_check_activate_response(card, channel, iob);
if (rc)
@@ -1977,22 +1886,7 @@ static void qeth_idx_query_write_cb(struct qeth_card *card,
out:
qeth_notify_reply(iob->reply, rc);
- qeth_release_buffer(channel, iob);
-}
-
-static void qeth_idx_finalize_query_cmd(struct qeth_card *card,
- struct qeth_cmd_buffer *iob,
- unsigned int length)
-{
- qeth_setup_ccw(iob->channel->ccw, CCW_CMD_READ, length, iob->data);
-}
-
-static void qeth_idx_activate_cb(struct qeth_card *card,
- struct qeth_channel *channel,
- struct qeth_cmd_buffer *iob)
-{
- qeth_notify_reply(iob->reply, 0);
- qeth_release_buffer(channel, iob);
+ qeth_put_cmd(iob);
}
static void qeth_idx_setup_activate_cmd(struct qeth_card *card,
@@ -2000,11 +1894,14 @@ static void qeth_idx_setup_activate_cmd(struct qeth_card *card,
{
u16 addr = (card->info.cula << 8) + card->info.unit_addr2;
u8 port = ((u8)card->dev->dev_port) | 0x80;
+ struct ccw1 *ccw = __ccw_from_cmd(iob);
struct ccw_dev_id dev_id;
+ qeth_setup_ccw(&ccw[0], CCW_CMD_WRITE, CCW_FLAG_CC, IDX_ACTIVATE_SIZE,
+ iob->data);
+ qeth_setup_ccw(&ccw[1], CCW_CMD_READ, 0, iob->length, iob->data);
ccw_device_get_id(CARD_DDEV(card), &dev_id);
iob->finalize = qeth_idx_finalize_cmd;
- iob->callback = qeth_idx_activate_cb;
memcpy(QETH_IDX_ACT_PNO(iob->data), &port, 1);
memcpy(QETH_IDX_ACT_ISSUER_RM_TOKEN(iob->data),
@@ -2021,26 +1918,17 @@ static int qeth_idx_activate_read_channel(struct qeth_card *card)
struct qeth_cmd_buffer *iob;
int rc;
- QETH_DBF_TEXT(SETUP, 2, "idxread");
+ QETH_CARD_TEXT(card, 2, "idxread");
- iob = qeth_get_buffer(channel);
+ iob = qeth_alloc_cmd(channel, QETH_BUFSIZE, 2, QETH_TIMEOUT);
if (!iob)
return -ENOMEM;
memcpy(iob->data, IDX_ACTIVATE_READ, IDX_ACTIVATE_SIZE);
qeth_idx_setup_activate_cmd(card, iob);
+ iob->callback = qeth_idx_activate_read_channel_cb;
- rc = qeth_send_control_data(card, IDX_ACTIVATE_SIZE, iob, NULL, NULL);
- if (rc)
- return rc;
-
- iob = qeth_get_buffer(channel);
- if (!iob)
- return -ENOMEM;
-
- iob->finalize = qeth_idx_finalize_query_cmd;
- iob->callback = qeth_idx_query_read_cb;
- rc = qeth_send_control_data(card, QETH_BUFSIZE, iob, NULL, NULL);
+ rc = qeth_send_control_data(card, iob, NULL, NULL);
if (rc)
return rc;
@@ -2054,26 +1942,17 @@ static int qeth_idx_activate_write_channel(struct qeth_card *card)
struct qeth_cmd_buffer *iob;
int rc;
- QETH_DBF_TEXT(SETUP, 2, "idxwrite");
+ QETH_CARD_TEXT(card, 2, "idxwrite");
- iob = qeth_get_buffer(channel);
+ iob = qeth_alloc_cmd(channel, QETH_BUFSIZE, 2, QETH_TIMEOUT);
if (!iob)
return -ENOMEM;
memcpy(iob->data, IDX_ACTIVATE_WRITE, IDX_ACTIVATE_SIZE);
qeth_idx_setup_activate_cmd(card, iob);
+ iob->callback = qeth_idx_activate_write_channel_cb;
- rc = qeth_send_control_data(card, IDX_ACTIVATE_SIZE, iob, NULL, NULL);
- if (rc)
- return rc;
-
- iob = qeth_get_buffer(channel);
- if (!iob)
- return -ENOMEM;
-
- iob->finalize = qeth_idx_finalize_query_cmd;
- iob->callback = qeth_idx_query_write_cb;
- rc = qeth_send_control_data(card, QETH_BUFSIZE, iob, NULL, NULL);
+ rc = qeth_send_control_data(card, iob, NULL, NULL);
if (rc)
return rc;
@@ -2086,7 +1965,7 @@ static int qeth_cm_enable_cb(struct qeth_card *card, struct qeth_reply *reply,
{
struct qeth_cmd_buffer *iob;
- QETH_DBF_TEXT(SETUP, 2, "cmenblcb");
+ QETH_CARD_TEXT(card, 2, "cmenblcb");
iob = (struct qeth_cmd_buffer *) data;
memcpy(&card->token.cm_filter_r,
@@ -2097,23 +1976,20 @@ static int qeth_cm_enable_cb(struct qeth_card *card, struct qeth_reply *reply,
static int qeth_cm_enable(struct qeth_card *card)
{
- int rc;
struct qeth_cmd_buffer *iob;
- QETH_DBF_TEXT(SETUP, 2, "cmenable");
+ QETH_CARD_TEXT(card, 2, "cmenable");
- iob = qeth_wait_for_buffer(&card->write);
- iob->finalize = qeth_mpc_finalize_cmd;
- memcpy(iob->data, CM_ENABLE, CM_ENABLE_SIZE);
+ iob = qeth_mpc_alloc_cmd(card, CM_ENABLE, CM_ENABLE_SIZE);
+ if (!iob)
+ return -ENOMEM;
memcpy(QETH_CM_ENABLE_ISSUER_RM_TOKEN(iob->data),
&card->token.issuer_rm_r, QETH_MPC_TOKEN_LENGTH);
memcpy(QETH_CM_ENABLE_FILTER_TOKEN(iob->data),
&card->token.cm_filter_w, QETH_MPC_TOKEN_LENGTH);
- rc = qeth_send_control_data(card, CM_ENABLE_SIZE, iob,
- qeth_cm_enable_cb, NULL);
- return rc;
+ return qeth_send_control_data(card, iob, qeth_cm_enable_cb, NULL);
}
static int qeth_cm_setup_cb(struct qeth_card *card, struct qeth_reply *reply,
@@ -2121,7 +1997,7 @@ static int qeth_cm_setup_cb(struct qeth_card *card, struct qeth_reply *reply,
{
struct qeth_cmd_buffer *iob;
- QETH_DBF_TEXT(SETUP, 2, "cmsetpcb");
+ QETH_CARD_TEXT(card, 2, "cmsetpcb");
iob = (struct qeth_cmd_buffer *) data;
memcpy(&card->token.cm_connection_r,
@@ -2132,14 +2008,13 @@ static int qeth_cm_setup_cb(struct qeth_card *card, struct qeth_reply *reply,
static int qeth_cm_setup(struct qeth_card *card)
{
- int rc;
struct qeth_cmd_buffer *iob;
- QETH_DBF_TEXT(SETUP, 2, "cmsetup");
+ QETH_CARD_TEXT(card, 2, "cmsetup");
- iob = qeth_wait_for_buffer(&card->write);
- iob->finalize = qeth_mpc_finalize_cmd;
- memcpy(iob->data, CM_SETUP, CM_SETUP_SIZE);
+ iob = qeth_mpc_alloc_cmd(card, CM_SETUP, CM_SETUP_SIZE);
+ if (!iob)
+ return -ENOMEM;
memcpy(QETH_CM_SETUP_DEST_ADDR(iob->data),
&card->token.issuer_rm_r, QETH_MPC_TOKEN_LENGTH);
@@ -2147,9 +2022,7 @@ static int qeth_cm_setup(struct qeth_card *card)
&card->token.cm_connection_w, QETH_MPC_TOKEN_LENGTH);
memcpy(QETH_CM_SETUP_FILTER_TOKEN(iob->data),
&card->token.cm_filter_r, QETH_MPC_TOKEN_LENGTH);
- rc = qeth_send_control_data(card, CM_SETUP_SIZE, iob,
- qeth_cm_setup_cb, NULL);
- return rc;
+ return qeth_send_control_data(card, iob, qeth_cm_setup_cb, NULL);
}
static int qeth_update_max_mtu(struct qeth_card *card, unsigned int max_mtu)
@@ -2214,7 +2087,7 @@ static int qeth_ulp_enable_cb(struct qeth_card *card, struct qeth_reply *reply,
__u8 link_type;
struct qeth_cmd_buffer *iob;
- QETH_DBF_TEXT(SETUP, 2, "ulpenacb");
+ QETH_CARD_TEXT(card, 2, "ulpenacb");
iob = (struct qeth_cmd_buffer *) data;
memcpy(&card->token.ulp_filter_r,
@@ -2235,7 +2108,7 @@ static int qeth_ulp_enable_cb(struct qeth_card *card, struct qeth_reply *reply,
card->info.link_type = link_type;
} else
card->info.link_type = 0;
- QETH_DBF_TEXT_(SETUP, 2, "link%d", card->info.link_type);
+ QETH_CARD_TEXT_(card, 2, "link%d", card->info.link_type);
return 0;
}
@@ -2253,12 +2126,11 @@ static int qeth_ulp_enable(struct qeth_card *card)
u16 max_mtu;
int rc;
- /*FIXME: trace view callbacks*/
- QETH_DBF_TEXT(SETUP, 2, "ulpenabl");
+ QETH_CARD_TEXT(card, 2, "ulpenabl");
- iob = qeth_wait_for_buffer(&card->write);
- iob->finalize = qeth_mpc_finalize_cmd;
- memcpy(iob->data, ULP_ENABLE, ULP_ENABLE_SIZE);
+ iob = qeth_mpc_alloc_cmd(card, ULP_ENABLE, ULP_ENABLE_SIZE);
+ if (!iob)
+ return -ENOMEM;
*(QETH_ULP_ENABLE_LINKNUM(iob->data)) = (u8) card->dev->dev_port;
memcpy(QETH_ULP_ENABLE_PROT_TYPE(iob->data), &prot_type, 1);
@@ -2266,8 +2138,7 @@ static int qeth_ulp_enable(struct qeth_card *card)
&card->token.cm_connection_r, QETH_MPC_TOKEN_LENGTH);
memcpy(QETH_ULP_ENABLE_FILTER_TOKEN(iob->data),
&card->token.ulp_filter_w, QETH_MPC_TOKEN_LENGTH);
- rc = qeth_send_control_data(card, ULP_ENABLE_SIZE, iob,
- qeth_ulp_enable_cb, &max_mtu);
+ rc = qeth_send_control_data(card, iob, qeth_ulp_enable_cb, &max_mtu);
if (rc)
return rc;
return qeth_update_max_mtu(card, max_mtu);
@@ -2278,7 +2149,7 @@ static int qeth_ulp_setup_cb(struct qeth_card *card, struct qeth_reply *reply,
{
struct qeth_cmd_buffer *iob;
- QETH_DBF_TEXT(SETUP, 2, "ulpstpcb");
+ QETH_CARD_TEXT(card, 2, "ulpstpcb");
iob = (struct qeth_cmd_buffer *) data;
memcpy(&card->token.ulp_connection_r,
@@ -2286,7 +2157,7 @@ static int qeth_ulp_setup_cb(struct qeth_card *card, struct qeth_reply *reply,
QETH_MPC_TOKEN_LENGTH);
if (!strncmp("00S", QETH_ULP_SETUP_RESP_CONNECTION_TOKEN(iob->data),
3)) {
- QETH_DBF_TEXT(SETUP, 2, "olmlimit");
+ QETH_CARD_TEXT(card, 2, "olmlimit");
dev_err(&card->gdev->dev, "A connection could not be "
"established because of an OLM limit\n");
return -EMLINK;
@@ -2296,16 +2167,15 @@ static int qeth_ulp_setup_cb(struct qeth_card *card, struct qeth_reply *reply,
static int qeth_ulp_setup(struct qeth_card *card)
{
- int rc;
__u16 temp;
struct qeth_cmd_buffer *iob;
struct ccw_dev_id dev_id;
- QETH_DBF_TEXT(SETUP, 2, "ulpsetup");
+ QETH_CARD_TEXT(card, 2, "ulpsetup");
- iob = qeth_wait_for_buffer(&card->write);
- iob->finalize = qeth_mpc_finalize_cmd;
- memcpy(iob->data, ULP_SETUP, ULP_SETUP_SIZE);
+ iob = qeth_mpc_alloc_cmd(card, ULP_SETUP, ULP_SETUP_SIZE);
+ if (!iob)
+ return -ENOMEM;
memcpy(QETH_ULP_SETUP_DEST_ADDR(iob->data),
&card->token.cm_connection_r, QETH_MPC_TOKEN_LENGTH);
@@ -2318,9 +2188,7 @@ static int qeth_ulp_setup(struct qeth_card *card)
memcpy(QETH_ULP_SETUP_CUA(iob->data), &dev_id.devno, 2);
temp = (card->info.cula << 8) + card->info.unit_addr2;
memcpy(QETH_ULP_SETUP_REAL_DEVADDR(iob->data), &temp, 2);
- rc = qeth_send_control_data(card, ULP_SETUP_SIZE, iob,
- qeth_ulp_setup_cb, NULL);
- return rc;
+ return qeth_send_control_data(card, iob, qeth_ulp_setup_cb, NULL);
}
static int qeth_init_qdio_out_buf(struct qeth_qdio_out_q *q, int bidx)
@@ -2369,13 +2237,13 @@ static int qeth_alloc_qdio_queues(struct qeth_card *card)
{
int i, j;
- QETH_DBF_TEXT(SETUP, 2, "allcqdbf");
+ QETH_CARD_TEXT(card, 2, "allcqdbf");
if (atomic_cmpxchg(&card->qdio.state, QETH_QDIO_UNINITIALIZED,
QETH_QDIO_ALLOCATED) != QETH_QDIO_UNINITIALIZED)
return 0;
- QETH_DBF_TEXT(SETUP, 2, "inq");
+ QETH_CARD_TEXT(card, 2, "inq");
card->qdio.in_q = qeth_alloc_qdio_queue();
if (!card->qdio.in_q)
goto out_nomem;
@@ -2389,8 +2257,8 @@ static int qeth_alloc_qdio_queues(struct qeth_card *card)
card->qdio.out_qs[i] = qeth_alloc_output_queue();
if (!card->qdio.out_qs[i])
goto out_freeoutq;
- QETH_DBF_TEXT_(SETUP, 2, "outq %i", i);
- QETH_DBF_HEX(SETUP, 2, &card->qdio.out_qs[i], sizeof(void *));
+ QETH_CARD_TEXT_(card, 2, "outq %i", i);
+ QETH_CARD_HEX(card, 2, &card->qdio.out_qs[i], sizeof(void *));
card->qdio.out_qs[i]->card = card;
card->qdio.out_qs[i]->queue_no = i;
/* give outbound qeth_qdio_buffers their qdio_buffers */
@@ -2481,79 +2349,77 @@ static void qeth_create_qib_param_field_blkt(struct qeth_card *card,
static int qeth_qdio_activate(struct qeth_card *card)
{
- QETH_DBF_TEXT(SETUP, 3, "qdioact");
+ QETH_CARD_TEXT(card, 3, "qdioact");
return qdio_activate(CARD_DDEV(card));
}
static int qeth_dm_act(struct qeth_card *card)
{
- int rc;
struct qeth_cmd_buffer *iob;
- QETH_DBF_TEXT(SETUP, 2, "dmact");
+ QETH_CARD_TEXT(card, 2, "dmact");
- iob = qeth_wait_for_buffer(&card->write);
- iob->finalize = qeth_mpc_finalize_cmd;
- memcpy(iob->data, DM_ACT, DM_ACT_SIZE);
+ iob = qeth_mpc_alloc_cmd(card, DM_ACT, DM_ACT_SIZE);
+ if (!iob)
+ return -ENOMEM;
memcpy(QETH_DM_ACT_DEST_ADDR(iob->data),
&card->token.cm_connection_r, QETH_MPC_TOKEN_LENGTH);
memcpy(QETH_DM_ACT_CONNECTION_TOKEN(iob->data),
&card->token.ulp_connection_r, QETH_MPC_TOKEN_LENGTH);
- rc = qeth_send_control_data(card, DM_ACT_SIZE, iob, NULL, NULL);
- return rc;
+ return qeth_send_control_data(card, iob, NULL, NULL);
}
static int qeth_mpc_initialize(struct qeth_card *card)
{
int rc;
- QETH_DBF_TEXT(SETUP, 2, "mpcinit");
+ QETH_CARD_TEXT(card, 2, "mpcinit");
rc = qeth_issue_next_read(card);
if (rc) {
- QETH_DBF_TEXT_(SETUP, 2, "1err%d", rc);
+ QETH_CARD_TEXT_(card, 2, "1err%d", rc);
return rc;
}
rc = qeth_cm_enable(card);
if (rc) {
- QETH_DBF_TEXT_(SETUP, 2, "2err%d", rc);
+ QETH_CARD_TEXT_(card, 2, "2err%d", rc);
goto out_qdio;
}
rc = qeth_cm_setup(card);
if (rc) {
- QETH_DBF_TEXT_(SETUP, 2, "3err%d", rc);
+ QETH_CARD_TEXT_(card, 2, "3err%d", rc);
goto out_qdio;
}
rc = qeth_ulp_enable(card);
if (rc) {
- QETH_DBF_TEXT_(SETUP, 2, "4err%d", rc);
+ QETH_CARD_TEXT_(card, 2, "4err%d", rc);
goto out_qdio;
}
rc = qeth_ulp_setup(card);
if (rc) {
- QETH_DBF_TEXT_(SETUP, 2, "5err%d", rc);
+ QETH_CARD_TEXT_(card, 2, "5err%d", rc);
goto out_qdio;
}
rc = qeth_alloc_qdio_queues(card);
if (rc) {
- QETH_DBF_TEXT_(SETUP, 2, "5err%d", rc);
+ QETH_CARD_TEXT_(card, 2, "5err%d", rc);
goto out_qdio;
}
rc = qeth_qdio_establish(card);
if (rc) {
- QETH_DBF_TEXT_(SETUP, 2, "6err%d", rc);
+ QETH_CARD_TEXT_(card, 2, "6err%d", rc);
qeth_free_qdio_queues(card);
goto out_qdio;
}
rc = qeth_qdio_activate(card);
if (rc) {
- QETH_DBF_TEXT_(SETUP, 2, "7err%d", rc);
+ QETH_CARD_TEXT_(card, 2, "7err%d", rc);
goto out_qdio;
}
rc = qeth_dm_act(card);
if (rc) {
- QETH_DBF_TEXT_(SETUP, 2, "8err%d", rc);
+ QETH_CARD_TEXT_(card, 2, "8err%d", rc);
goto out_qdio;
}
@@ -2706,7 +2572,7 @@ int qeth_init_qdio_queues(struct qeth_card *card)
unsigned int i;
int rc;
- QETH_DBF_TEXT(SETUP, 2, "initqdqs");
+ QETH_CARD_TEXT(card, 2, "initqdqs");
/* inbound queue */
qdio_reset_buffers(card->qdio.in_q->qdio_bufs, QDIO_MAX_BUFFERS_PER_Q);
@@ -2720,7 +2586,7 @@ int qeth_init_qdio_queues(struct qeth_card *card)
rc = do_QDIO(CARD_DDEV(card), QDIO_FLAG_SYNC_INPUT, 0, 0,
card->qdio.in_buf_pool.buf_count - 1);
if (rc) {
- QETH_DBF_TEXT_(SETUP, 2, "1err%d", rc);
+ QETH_CARD_TEXT_(card, 2, "1err%d", rc);
return rc;
}
@@ -2746,36 +2612,10 @@ int qeth_init_qdio_queues(struct qeth_card *card)
}
EXPORT_SYMBOL_GPL(qeth_init_qdio_queues);
-static __u8 qeth_get_ipa_adp_type(enum qeth_link_types link_type)
-{
- switch (link_type) {
- case QETH_LINK_TYPE_HSTR:
- return 2;
- default:
- return 1;
- }
-}
-
-static void qeth_fill_ipacmd_header(struct qeth_card *card,
- struct qeth_ipa_cmd *cmd,
- enum qeth_ipa_cmds command,
- enum qeth_prot_versions prot)
-{
- cmd->hdr.command = command;
- cmd->hdr.initiator = IPA_CMD_INITIATOR_HOST;
- /* cmd->hdr.seqno is set by qeth_send_control_data() */
- cmd->hdr.adapter_type = qeth_get_ipa_adp_type(card->info.link_type);
- cmd->hdr.rel_adapter_no = (u8) card->dev->dev_port;
- cmd->hdr.prim_version_no = IS_LAYER2(card) ? 2 : 1;
- cmd->hdr.param_count = 1;
- cmd->hdr.prot_version = prot;
-}
-
static void qeth_ipa_finalize_cmd(struct qeth_card *card,
- struct qeth_cmd_buffer *iob,
- unsigned int length)
+ struct qeth_cmd_buffer *iob)
{
- qeth_mpc_finalize_cmd(card, iob, length);
+ qeth_mpc_finalize_cmd(card, iob);
/* override with IPA-specific values: */
__ipa_cmd(iob)->hdr.seqno = card->seqno.ipa;
@@ -2785,11 +2625,12 @@ static void qeth_ipa_finalize_cmd(struct qeth_card *card,
void qeth_prepare_ipa_cmd(struct qeth_card *card, struct qeth_cmd_buffer *iob,
u16 cmd_length)
{
- u16 total_length = IPA_PDU_HEADER_SIZE + cmd_length;
u8 prot_type = qeth_mpc_select_prot_type(card);
+ u16 total_length = iob->length;
+ qeth_setup_ccw(__ccw_from_cmd(iob), CCW_CMD_WRITE, 0, total_length,
+ iob->data);
iob->finalize = qeth_ipa_finalize_cmd;
- iob->timeout = QETH_IPA_TIMEOUT;
memcpy(iob->data, IPA_PDU_HEADER, IPA_PDU_HEADER_SIZE);
memcpy(QETH_IPA_PDU_LEN_TOTAL(iob->data), &total_length, 2);
@@ -2802,25 +2643,35 @@ void qeth_prepare_ipa_cmd(struct qeth_card *card, struct qeth_cmd_buffer *iob,
}
EXPORT_SYMBOL_GPL(qeth_prepare_ipa_cmd);
-struct qeth_cmd_buffer *qeth_get_ipacmd_buffer(struct qeth_card *card,
- enum qeth_ipa_cmds ipacmd, enum qeth_prot_versions prot)
+struct qeth_cmd_buffer *qeth_ipa_alloc_cmd(struct qeth_card *card,
+ enum qeth_ipa_cmds cmd_code,
+ enum qeth_prot_versions prot,
+ unsigned int data_length)
{
+ enum qeth_link_types link_type = card->info.link_type;
struct qeth_cmd_buffer *iob;
+ struct qeth_ipacmd_hdr *hdr;
- iob = qeth_get_buffer(&card->write);
- if (iob) {
- qeth_prepare_ipa_cmd(card, iob, sizeof(struct qeth_ipa_cmd));
- qeth_fill_ipacmd_header(card, __ipa_cmd(iob), ipacmd, prot);
- } else {
- dev_warn(&card->gdev->dev,
- "The qeth driver ran out of channel command buffers\n");
- QETH_DBF_MESSAGE(1, "device %x ran out of channel command buffers",
- CARD_DEVID(card));
- }
+ data_length += offsetof(struct qeth_ipa_cmd, data);
+ iob = qeth_alloc_cmd(&card->write, IPA_PDU_HEADER_SIZE + data_length, 1,
+ QETH_IPA_TIMEOUT);
+ if (!iob)
+ return NULL;
+ qeth_prepare_ipa_cmd(card, iob, data_length);
+
+ hdr = &__ipa_cmd(iob)->hdr;
+ hdr->command = cmd_code;
+ hdr->initiator = IPA_CMD_INITIATOR_HOST;
+ /* hdr->seqno is set by qeth_send_control_data() */
+ hdr->adapter_type = (link_type == QETH_LINK_TYPE_HSTR) ? 2 : 1;
+ hdr->rel_adapter_no = (u8) card->dev->dev_port;
+ hdr->prim_version_no = IS_LAYER2(card) ? 2 : 1;
+ hdr->param_count = 1;
+ hdr->prot_version = prot;
return iob;
}
-EXPORT_SYMBOL_GPL(qeth_get_ipacmd_buffer);
+EXPORT_SYMBOL_GPL(qeth_ipa_alloc_cmd);
static int qeth_send_ipa_cmd_cb(struct qeth_card *card,
struct qeth_reply *reply, unsigned long data)
@@ -2841,20 +2692,18 @@ int qeth_send_ipa_cmd(struct qeth_card *card, struct qeth_cmd_buffer *iob,
unsigned long),
void *reply_param)
{
- u16 length;
int rc;
QETH_CARD_TEXT(card, 4, "sendipa");
if (card->read_or_write_problem) {
- qeth_release_buffer(iob->channel, iob);
+ qeth_put_cmd(iob);
return -EIO;
}
if (reply_cb == NULL)
reply_cb = qeth_send_ipa_cmd_cb;
- memcpy(&length, QETH_IPA_PDU_LEN_TOTAL(iob->data), 2);
- rc = qeth_send_control_data(card, length, iob, reply_cb, reply_param);
+ rc = qeth_send_control_data(card, iob, reply_cb, reply_param);
if (rc == -ETIME) {
qeth_clear_ipacmd_list(card);
qeth_schedule_recovery(card);
@@ -2878,9 +2727,9 @@ static int qeth_send_startlan(struct qeth_card *card)
{
struct qeth_cmd_buffer *iob;
- QETH_DBF_TEXT(SETUP, 2, "strtlan");
+ QETH_CARD_TEXT(card, 2, "strtlan");
- iob = qeth_get_ipacmd_buffer(card, IPA_CMD_STARTLAN, 0);
+ iob = qeth_ipa_alloc_cmd(card, IPA_CMD_STARTLAN, QETH_PROT_NONE, 0);
if (!iob)
return -ENOMEM;
return qeth_send_ipa_cmd(card, iob, qeth_send_startlan_cb, NULL);
@@ -2906,7 +2755,7 @@ static int qeth_query_setadapterparms_cb(struct qeth_card *card,
if (cmd->data.setadapterparms.data.query_cmds_supp.lan_type & 0x7f) {
card->info.link_type =
cmd->data.setadapterparms.data.query_cmds_supp.lan_type;
- QETH_DBF_TEXT_(SETUP, 2, "lnk %d", card->info.link_type);
+ QETH_CARD_TEXT_(card, 2, "lnk %d", card->info.link_type);
}
card->options.adp.supported_funcs =
cmd->data.setadapterparms.data.query_cmds_supp.supported_cmds;
@@ -2914,21 +2763,24 @@ static int qeth_query_setadapterparms_cb(struct qeth_card *card,
}
static struct qeth_cmd_buffer *qeth_get_adapter_cmd(struct qeth_card *card,
- __u32 command, __u32 cmdlen)
+ enum qeth_ipa_setadp_cmd adp_cmd,
+ unsigned int data_length)
{
+ struct qeth_ipacmd_setadpparms_hdr *hdr;
struct qeth_cmd_buffer *iob;
- struct qeth_ipa_cmd *cmd;
- iob = qeth_get_ipacmd_buffer(card, IPA_CMD_SETADAPTERPARMS,
- QETH_PROT_IPV4);
- if (iob) {
- cmd = __ipa_cmd(iob);
- cmd->data.setadapterparms.hdr.cmdlength = cmdlen;
- cmd->data.setadapterparms.hdr.command_code = command;
- cmd->data.setadapterparms.hdr.used_total = 1;
- cmd->data.setadapterparms.hdr.seq_no = 1;
- }
+ iob = qeth_ipa_alloc_cmd(card, IPA_CMD_SETADAPTERPARMS, QETH_PROT_IPV4,
+ data_length +
+ offsetof(struct qeth_ipacmd_setadpparms,
+ data));
+ if (!iob)
+ return NULL;
+ hdr = &__ipa_cmd(iob)->data.setadapterparms.hdr;
+ hdr->cmdlength = sizeof(*hdr) + data_length;
+ hdr->command_code = adp_cmd;
+ hdr->used_total = 1;
+ hdr->seq_no = 1;
return iob;
}
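/*
 * Illustrative sketch, not from this patch: the SETADP_DATA_SIZEOF() users
 * below pass only the size of their payload member.  The macro is defined in
 * a header outside this diff; presumably it reduces to something like
 *
 *      #define SETADP_DATA_SIZEOF(field) \
 *              sizeof_field(struct qeth_ipacmd_setadpparms, data.field)
 *
 * so that qeth_get_adapter_cmd() can derive hdr->cmdlength as header size
 * plus payload size without each caller open-coding the header length.
 */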
@@ -2939,7 +2791,7 @@ static int qeth_query_setadapterparms(struct qeth_card *card)
QETH_CARD_TEXT(card, 3, "queryadp");
iob = qeth_get_adapter_cmd(card, IPA_SETADP_QUERY_COMMANDS_SUPPORTED,
- sizeof(struct qeth_ipacmd_setadpparms));
+ SETADP_DATA_SIZEOF(query_cmds_supp));
if (!iob)
return -ENOMEM;
rc = qeth_send_ipa_cmd(card, iob, qeth_query_setadapterparms_cb, NULL);
@@ -2951,7 +2803,7 @@ static int qeth_query_ipassists_cb(struct qeth_card *card,
{
struct qeth_ipa_cmd *cmd;
- QETH_DBF_TEXT(SETUP, 2, "qipasscb");
+ QETH_CARD_TEXT(card, 2, "qipasscb");
cmd = (struct qeth_ipa_cmd *) data;
@@ -2960,7 +2812,7 @@ static int qeth_query_ipassists_cb(struct qeth_card *card,
break;
case IPA_RC_NOTSUPP:
case IPA_RC_L2_UNSUPPORTED_CMD:
- QETH_DBF_TEXT(SETUP, 2, "ipaunsup");
+ QETH_CARD_TEXT(card, 2, "ipaunsup");
card->options.ipa4.supported_funcs |= IPA_SETADAPTERPARMS;
card->options.ipa6.supported_funcs |= IPA_SETADAPTERPARMS;
return -EOPNOTSUPP;
@@ -2988,8 +2840,8 @@ static int qeth_query_ipassists(struct qeth_card *card,
int rc;
struct qeth_cmd_buffer *iob;
- QETH_DBF_TEXT_(SETUP, 2, "qipassi%i", prot);
- iob = qeth_get_ipacmd_buffer(card, IPA_CMD_QIPASSIST, prot);
+ QETH_CARD_TEXT_(card, 2, "qipassi%i", prot);
+ iob = qeth_ipa_alloc_cmd(card, IPA_CMD_QIPASSIST, prot, 0);
if (!iob)
return -ENOMEM;
rc = qeth_send_ipa_cmd(card, iob, qeth_query_ipassists_cb, NULL);
@@ -3026,14 +2878,32 @@ int qeth_query_switch_attributes(struct qeth_card *card,
return -EOPNOTSUPP;
if (!netif_carrier_ok(card->dev))
return -ENOMEDIUM;
- iob = qeth_get_adapter_cmd(card, IPA_SETADP_QUERY_SWITCH_ATTRIBUTES,
- sizeof(struct qeth_ipacmd_setadpparms_hdr));
+ iob = qeth_get_adapter_cmd(card, IPA_SETADP_QUERY_SWITCH_ATTRIBUTES, 0);
if (!iob)
return -ENOMEM;
return qeth_send_ipa_cmd(card, iob,
qeth_query_switch_attributes_cb, sw_info);
}
+struct qeth_cmd_buffer *qeth_get_diag_cmd(struct qeth_card *card,
+ enum qeth_diags_cmds sub_cmd,
+ unsigned int data_length)
+{
+ struct qeth_ipacmd_diagass *cmd;
+ struct qeth_cmd_buffer *iob;
+
+ iob = qeth_ipa_alloc_cmd(card, IPA_CMD_SET_DIAG_ASS, QETH_PROT_NONE,
+ DIAG_HDR_LEN + data_length);
+ if (!iob)
+ return NULL;
+
+ cmd = &__ipa_cmd(iob)->data.diagass;
+ cmd->subcmd_len = DIAG_SUB_HDR_LEN + data_length;
+ cmd->subcmd = sub_cmd;
+ return iob;
+}
+EXPORT_SYMBOL_GPL(qeth_get_diag_cmd);
+
static int qeth_query_setdiagass_cb(struct qeth_card *card,
struct qeth_reply *reply, unsigned long data)
{
@@ -3052,15 +2922,11 @@ static int qeth_query_setdiagass_cb(struct qeth_card *card,
static int qeth_query_setdiagass(struct qeth_card *card)
{
struct qeth_cmd_buffer *iob;
- struct qeth_ipa_cmd *cmd;
- QETH_DBF_TEXT(SETUP, 2, "qdiagass");
- iob = qeth_get_ipacmd_buffer(card, IPA_CMD_SET_DIAG_ASS, 0);
+ QETH_CARD_TEXT(card, 2, "qdiagass");
+ iob = qeth_get_diag_cmd(card, QETH_DIAGS_CMD_QUERY, 0);
if (!iob)
return -ENOMEM;
- cmd = __ipa_cmd(iob);
- cmd->data.diagass.subcmd_len = 16;
- cmd->data.diagass.subcmd = QETH_DIAGS_CMD_QUERY;
return qeth_send_ipa_cmd(card, iob, qeth_query_setdiagass_cb, NULL);
}
@@ -3107,13 +2973,11 @@ int qeth_hw_trap(struct qeth_card *card, enum qeth_diags_trap_action action)
struct qeth_cmd_buffer *iob;
struct qeth_ipa_cmd *cmd;
- QETH_DBF_TEXT(SETUP, 2, "diagtrap");
- iob = qeth_get_ipacmd_buffer(card, IPA_CMD_SET_DIAG_ASS, 0);
+ QETH_CARD_TEXT(card, 2, "diagtrap");
+ iob = qeth_get_diag_cmd(card, QETH_DIAGS_CMD_TRAP, 64);
if (!iob)
return -ENOMEM;
cmd = __ipa_cmd(iob);
- cmd->data.diagass.subcmd_len = 80;
- cmd->data.diagass.subcmd = QETH_DIAGS_CMD_TRAP;
cmd->data.diagass.type = 1;
cmd->data.diagass.action = action;
switch (action) {
@@ -3236,13 +3100,6 @@ static void qeth_handle_send_error(struct qeth_card *card,
int sbalf15 = buffer->buffer->element[15].sflags;
QETH_CARD_TEXT(card, 6, "hdsnderr");
- if (IS_IQD(card)) {
- if (sbalf15 == 0) {
- qdio_err = 0;
- } else {
- qdio_err = 1;
- }
- }
qeth_check_qdio_errors(card, buffer->buffer, qdio_err, "qouterr");
if (!qdio_err)
@@ -3730,8 +3587,8 @@ check_layout:
__elements = 1 + qeth_count_elements(skb, proto_len);
else
__elements = qeth_count_elements(skb, 0);
- } else if (!proto_len && qeth_get_elements_for_range(start, end) == 1) {
- /* Push HW header into a new page. */
+ } else if (!proto_len && PAGE_ALIGNED(skb->data)) {
+ /* Push HW header into preceding page, flush with skb->data. */
push_ok = true;
__elements = 1 + qeth_count_elements(skb, 0);
} else {
@@ -3785,18 +3642,16 @@ static void __qeth_fill_buffer(struct sk_buff *skb,
int element = buf->next_element_to_fill;
int length = skb_headlen(skb) - offset;
char *data = skb->data + offset;
- int length_here, cnt;
+ unsigned int elem_length, cnt;
/* map linear part into buffer element(s) */
while (length > 0) {
- /* length_here is the remaining amount of data in this page */
- length_here = PAGE_SIZE - ((unsigned long) data % PAGE_SIZE);
- if (length < length_here)
- length_here = length;
+ elem_length = min_t(unsigned int, length,
+ PAGE_SIZE - offset_in_page(data));
buffer->element[element].addr = data;
- buffer->element[element].length = length_here;
- length -= length_here;
+ buffer->element[element].length = elem_length;
+ length -= elem_length;
if (is_first_elem) {
is_first_elem = false;
if (length || skb_is_nonlinear(skb))
@@ -3809,7 +3664,8 @@ static void __qeth_fill_buffer(struct sk_buff *skb,
buffer->element[element].eflags =
SBAL_EFLAGS_MIDDLE_FRAG;
}
- data += length_here;
+
+ data += elem_length;
element++;
}
@@ -3820,17 +3676,16 @@ static void __qeth_fill_buffer(struct sk_buff *skb,
data = skb_frag_address(frag);
length = skb_frag_size(frag);
while (length > 0) {
- length_here = PAGE_SIZE -
- ((unsigned long) data % PAGE_SIZE);
- if (length < length_here)
- length_here = length;
+ elem_length = min_t(unsigned int, length,
+ PAGE_SIZE - offset_in_page(data));
buffer->element[element].addr = data;
- buffer->element[element].length = length_here;
+ buffer->element[element].length = elem_length;
buffer->element[element].eflags =
SBAL_EFLAGS_MIDDLE_FRAG;
- length -= length_here;
- data += length_here;
+
+ length -= elem_length;
+ data += elem_length;
element++;
}
}
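/*
 * Illustrative sketch, not from this patch: each buffer element must not
 * cross a page boundary, so a chunk is clamped to what remains in the
 * current page.  With PAGE_SIZE = 4096, data at page offset 4000 and 500
 * bytes left, the first element gets 96 bytes and the next element 404.
 * The helper name is hypothetical.
 */
static unsigned int example_elem_length(void *data, unsigned int length)
{
        return min_t(unsigned int, length, PAGE_SIZE - offset_in_page(data));
}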
@@ -4053,11 +3908,10 @@ static void qeth_fill_tso_ext(struct qeth_hdr_tso *hdr,
}
int qeth_xmit(struct qeth_card *card, struct sk_buff *skb,
- struct qeth_qdio_out_q *queue, int ipv, int cast_type,
+ struct qeth_qdio_out_q *queue, int ipv,
void (*fill_header)(struct qeth_qdio_out_q *queue,
struct qeth_hdr *hdr, struct sk_buff *skb,
- int ipv, int cast_type,
- unsigned int data_len))
+ int ipv, unsigned int data_len))
{
unsigned int proto_len, hw_hdr_len;
unsigned int frame_len = skb->len;
@@ -4091,7 +3945,7 @@ int qeth_xmit(struct qeth_card *card, struct sk_buff *skb,
data_offset = push_len + proto_len;
}
memset(hdr, 0, hw_hdr_len);
- fill_header(queue, hdr, skb, ipv, cast_type, frame_len);
+ fill_header(queue, hdr, skb, ipv, frame_len);
if (is_tso)
qeth_fill_tso_ext((struct qeth_hdr_tso *) hdr,
frame_len - proto_len, skb, proto_len);
@@ -4160,7 +4014,7 @@ void qeth_setadp_promisc_mode(struct qeth_card *card)
QETH_CARD_TEXT_(card, 4, "mode:%x", mode);
iob = qeth_get_adapter_cmd(card, IPA_SETADP_SET_PROMISC_MODE,
- sizeof(struct qeth_ipacmd_setadpparms_hdr) + 8);
+ SETADP_DATA_SIZEOF(mode));
if (!iob)
return;
cmd = __ipa_cmd(iob);
@@ -4200,8 +4054,7 @@ int qeth_setadpparms_change_macaddr(struct qeth_card *card)
QETH_CARD_TEXT(card, 4, "chgmac");
iob = qeth_get_adapter_cmd(card, IPA_SETADP_ALTER_MAC_ADDRESS,
- sizeof(struct qeth_ipacmd_setadpparms_hdr) +
- sizeof(struct qeth_change_addr));
+ SETADP_DATA_SIZEOF(change_addr));
if (!iob)
return -ENOMEM;
cmd = __ipa_cmd(iob);
@@ -4228,10 +4081,8 @@ static int qeth_setadpparms_set_access_ctrl_cb(struct qeth_card *card,
qeth_setadpparms_inspect_rc(cmd);
access_ctrl_req = &cmd->data.setadapterparms.data.set_access_ctrl;
- QETH_DBF_TEXT_(SETUP, 2, "setaccb");
- QETH_DBF_TEXT_(SETUP, 2, "%s", card->gdev->dev.kobj.name);
- QETH_DBF_TEXT_(SETUP, 2, "rc=%d",
- cmd->data.setadapterparms.hdr.return_code);
+ QETH_CARD_TEXT_(card, 2, "rc=%d",
+ cmd->data.setadapterparms.hdr.return_code);
if (cmd->data.setadapterparms.hdr.return_code !=
SET_ACCESS_CTRL_RC_SUCCESS)
QETH_DBF_MESSAGE(3, "ERR:SET_ACCESS_CTRL(%#x) on device %x: %#x\n",
@@ -4311,12 +4162,8 @@ static int qeth_setadpparms_set_access_ctrl(struct qeth_card *card,
QETH_CARD_TEXT(card, 4, "setacctl");
- QETH_DBF_TEXT_(SETUP, 2, "setacctl");
- QETH_DBF_TEXT_(SETUP, 2, "%s", card->gdev->dev.kobj.name);
-
iob = qeth_get_adapter_cmd(card, IPA_SETADP_SET_ACCESS_CONTROL,
- sizeof(struct qeth_ipacmd_setadpparms_hdr) +
- sizeof(struct qeth_set_access_ctrl));
+ SETADP_DATA_SIZEOF(set_access_ctrl));
if (!iob)
return -ENOMEM;
cmd = __ipa_cmd(iob);
@@ -4325,7 +4172,7 @@ static int qeth_setadpparms_set_access_ctrl(struct qeth_card *card,
rc = qeth_send_ipa_cmd(card, iob, qeth_setadpparms_set_access_ctrl_cb,
&fallback);
- QETH_DBF_TEXT_(SETUP, 2, "rc=%d", rc);
+ QETH_CARD_TEXT_(card, 2, "rc=%d", rc);
return rc;
}
@@ -4472,18 +4319,13 @@ static int qeth_snmp_command_cb(struct qeth_card *card,
return -ENOSPC;
}
QETH_CARD_TEXT_(card, 4, "snore%i",
- cmd->data.setadapterparms.hdr.used_total);
+ cmd->data.setadapterparms.hdr.used_total);
QETH_CARD_TEXT_(card, 4, "sseqn%i",
- cmd->data.setadapterparms.hdr.seq_no);
+ cmd->data.setadapterparms.hdr.seq_no);
/*copy entries to user buffer*/
memcpy(qinfo->udata + qinfo->udata_offset, snmp_data, data_len);
qinfo->udata_offset += data_len;
- /* check if all replies received ... */
- QETH_CARD_TEXT_(card, 4, "srtot%i",
- cmd->data.setadapterparms.hdr.used_total);
- QETH_CARD_TEXT_(card, 4, "srseq%i",
- cmd->data.setadapterparms.hdr.seq_no);
if (cmd->data.setadapterparms.hdr.seq_no <
cmd->data.setadapterparms.hdr.used_total)
return 1;
@@ -4492,9 +4334,8 @@ static int qeth_snmp_command_cb(struct qeth_card *card,
static int qeth_snmp_command(struct qeth_card *card, char __user *udata)
{
+ struct qeth_snmp_ureq __user *ureq;
struct qeth_cmd_buffer *iob;
- struct qeth_ipa_cmd *cmd;
- struct qeth_snmp_ureq *ureq;
unsigned int req_len;
struct qeth_arp_query_info qinfo = {0, };
int rc = 0;
@@ -4508,38 +4349,28 @@ static int qeth_snmp_command(struct qeth_card *card, char __user *udata)
IS_LAYER3(card))
return -EOPNOTSUPP;
- /* skip 4 bytes (data_len struct member) to get req_len */
- if (copy_from_user(&req_len, udata + sizeof(int), sizeof(int)))
+ ureq = (struct qeth_snmp_ureq __user *) udata;
+ if (get_user(qinfo.udata_len, &ureq->hdr.data_len) ||
+ get_user(req_len, &ureq->hdr.req_len))
+ return -EFAULT;
+
+ iob = qeth_get_adapter_cmd(card, IPA_SETADP_SET_SNMP_CONTROL, req_len);
+ if (!iob)
+ return -ENOMEM;
+
+ if (copy_from_user(&__ipa_cmd(iob)->data.setadapterparms.data.snmp,
+ &ureq->cmd, req_len)) {
+ qeth_put_cmd(iob);
return -EFAULT;
- if (req_len > (QETH_BUFSIZE - IPA_PDU_HEADER_SIZE -
- sizeof(struct qeth_ipacmd_hdr) -
- sizeof(struct qeth_ipacmd_setadpparms_hdr)))
- return -EINVAL;
- ureq = memdup_user(udata, req_len + sizeof(struct qeth_snmp_ureq_hdr));
- if (IS_ERR(ureq)) {
- QETH_CARD_TEXT(card, 2, "snmpnome");
- return PTR_ERR(ureq);
}
- qinfo.udata_len = ureq->hdr.data_len;
+
qinfo.udata = kzalloc(qinfo.udata_len, GFP_KERNEL);
if (!qinfo.udata) {
- kfree(ureq);
+ qeth_put_cmd(iob);
return -ENOMEM;
}
qinfo.udata_offset = sizeof(struct qeth_snmp_ureq_hdr);
- iob = qeth_get_adapter_cmd(card, IPA_SETADP_SET_SNMP_CONTROL,
- QETH_SNMP_SETADP_CMDLENGTH + req_len);
- if (!iob) {
- rc = -ENOMEM;
- goto out;
- }
-
- /* for large requests, fix-up the length fields: */
- qeth_prepare_ipa_cmd(card, iob, QETH_SETADP_BASE_LEN + req_len);
-
- cmd = __ipa_cmd(iob);
- memcpy(&cmd->data.setadapterparms.data.snmp, &ureq->cmd, req_len);
rc = qeth_send_ipa_cmd(card, iob, qeth_snmp_command_cb, &qinfo);
if (rc)
QETH_DBF_MESSAGE(2, "SNMP command failed on device %x: (%#x)\n",
@@ -4548,8 +4379,7 @@ static int qeth_snmp_command(struct qeth_card *card, char __user *udata)
if (copy_to_user(udata, qinfo.udata, qinfo.udata_len))
rc = -EFAULT;
}
-out:
- kfree(ureq);
+
kfree(qinfo.udata);
return rc;
}
@@ -4615,8 +4445,7 @@ static int qeth_query_oat_command(struct qeth_card *card, char __user *udata)
}
iob = qeth_get_adapter_cmd(card, IPA_SETADP_QUERY_OAT,
- sizeof(struct qeth_ipacmd_setadpparms_hdr) +
- sizeof(struct qeth_query_oat));
+ SETADP_DATA_SIZEOF(query_oat));
if (!iob) {
rc = -ENOMEM;
goto out_free;
@@ -4678,8 +4507,7 @@ int qeth_query_card_info(struct qeth_card *card,
QETH_CARD_TEXT(card, 2, "qcrdinfo");
if (!qeth_adp_supported(card, IPA_SETADP_QUERY_CARD_INFO))
return -EOPNOTSUPP;
- iob = qeth_get_adapter_cmd(card, IPA_SETADP_QUERY_CARD_INFO,
- sizeof(struct qeth_ipacmd_setadpparms_hdr));
+ iob = qeth_get_adapter_cmd(card, IPA_SETADP_QUERY_CARD_INFO, 0);
if (!iob)
return -ENOMEM;
return qeth_send_ipa_cmd(card, iob, qeth_query_card_info_cb,
@@ -4701,7 +4529,7 @@ int qeth_vm_request_mac(struct qeth_card *card)
struct ccw_dev_id id;
int rc;
- QETH_DBF_TEXT(SETUP, 2, "vmreqmac");
+ QETH_CARD_TEXT(card, 2, "vmreqmac");
request = kzalloc(sizeof(*request), GFP_KERNEL | GFP_DMA);
response = kzalloc(sizeof(*response), GFP_KERNEL | GFP_DMA);
@@ -4726,13 +4554,13 @@ int qeth_vm_request_mac(struct qeth_card *card)
if (request->resp_buf_len < sizeof(*response) ||
response->version != request->resp_version) {
rc = -EIO;
- QETH_DBF_TEXT(SETUP, 2, "badresp");
- QETH_DBF_HEX(SETUP, 2, &request->resp_buf_len,
- sizeof(request->resp_buf_len));
+ QETH_CARD_TEXT(card, 2, "badresp");
+ QETH_CARD_HEX(card, 2, &request->resp_buf_len,
+ sizeof(request->resp_buf_len));
} else if (!is_valid_ether_addr(response->mac)) {
rc = -EINVAL;
- QETH_DBF_TEXT(SETUP, 2, "badmac");
- QETH_DBF_HEX(SETUP, 2, response->mac, ETH_ALEN);
+ QETH_CARD_TEXT(card, 2, "badmac");
+ QETH_CARD_HEX(card, 2, response->mac, ETH_ALEN);
} else {
ether_addr_copy(card->dev->dev_addr, response->mac);
}
@@ -4747,43 +4575,37 @@ EXPORT_SYMBOL_GPL(qeth_vm_request_mac);
static void qeth_determine_capabilities(struct qeth_card *card)
{
int rc;
- int length;
- char *prcd;
struct ccw_device *ddev;
int ddev_offline = 0;
- QETH_DBF_TEXT(SETUP, 2, "detcapab");
+ QETH_CARD_TEXT(card, 2, "detcapab");
ddev = CARD_DDEV(card);
if (!ddev->online) {
ddev_offline = 1;
rc = ccw_device_set_online(ddev);
if (rc) {
- QETH_DBF_TEXT_(SETUP, 2, "3err%d", rc);
+ QETH_CARD_TEXT_(card, 2, "3err%d", rc);
goto out;
}
}
- rc = qeth_read_conf_data(card, (void **) &prcd, &length);
+ rc = qeth_read_conf_data(card);
if (rc) {
QETH_DBF_MESSAGE(2, "qeth_read_conf_data on device %x returned %i\n",
CARD_DEVID(card), rc);
- QETH_DBF_TEXT_(SETUP, 2, "5err%d", rc);
+ QETH_CARD_TEXT_(card, 2, "5err%d", rc);
goto out_offline;
}
- qeth_configure_unitaddr(card, prcd);
- if (ddev_offline)
- qeth_configure_blkt_default(card, prcd);
- kfree(prcd);
rc = qdio_get_ssqd_desc(ddev, &card->ssqd);
if (rc)
- QETH_DBF_TEXT_(SETUP, 2, "6err%d", rc);
+ QETH_CARD_TEXT_(card, 2, "6err%d", rc);
- QETH_DBF_TEXT_(SETUP, 2, "qfmt%d", card->ssqd.qfmt);
- QETH_DBF_TEXT_(SETUP, 2, "ac1:%02x", card->ssqd.qdioac1);
- QETH_DBF_TEXT_(SETUP, 2, "ac2:%04x", card->ssqd.qdioac2);
- QETH_DBF_TEXT_(SETUP, 2, "ac3:%04x", card->ssqd.qdioac3);
- QETH_DBF_TEXT_(SETUP, 2, "icnt%d", card->ssqd.icnt);
+ QETH_CARD_TEXT_(card, 2, "qfmt%d", card->ssqd.qfmt);
+ QETH_CARD_TEXT_(card, 2, "ac1:%02x", card->ssqd.qdioac1);
+ QETH_CARD_TEXT_(card, 2, "ac2:%04x", card->ssqd.qdioac2);
+ QETH_CARD_TEXT_(card, 2, "ac3:%04x", card->ssqd.qdioac3);
+ QETH_CARD_TEXT_(card, 2, "icnt%d", card->ssqd.icnt);
if (!((card->ssqd.qfmt != QDIO_IQDIO_QFMT) ||
((card->ssqd.qdioac1 & CHSC_AC1_INITIATE_INPUTQ) == 0) ||
((card->ssqd.qdioac3 & CHSC_AC3_FORMAT2_CQ_AVAILABLE) == 0))) {
@@ -4831,7 +4653,7 @@ static int qeth_qdio_establish(struct qeth_card *card)
int i, j, k;
int rc = 0;
- QETH_DBF_TEXT(SETUP, 2, "qdioest");
+ QETH_CARD_TEXT(card, 2, "qdioest");
qib_param_field = kzalloc(QDIO_MAX_BUFFERS_PER_Q,
GFP_KERNEL);
@@ -4935,11 +4757,11 @@ out_free_nothing:
static void qeth_core_free_card(struct qeth_card *card)
{
- QETH_DBF_TEXT(SETUP, 2, "freecrd");
- QETH_DBF_HEX(SETUP, 2, &card, sizeof(void *));
+ QETH_CARD_TEXT(card, 2, "freecrd");
qeth_clean_channel(&card->read);
qeth_clean_channel(&card->write);
qeth_clean_channel(&card->data);
+ qeth_put_cmd(card->read_cmd);
destroy_workqueue(card->event_wq);
qeth_free_qdio_queues(card);
unregister_service_level(&card->qeth_service_level);
@@ -4988,7 +4810,7 @@ int qeth_core_hardsetup_card(struct qeth_card *card, bool *carrier_ok)
int retries = 3;
int rc;
- QETH_DBF_TEXT(SETUP, 2, "hrdsetup");
+ QETH_CARD_TEXT(card, 2, "hrdsetup");
atomic_set(&card->force_alloc_skb, 0);
rc = qeth_update_from_chp_desc(card);
if (rc)
@@ -5013,10 +4835,10 @@ retry:
goto retriable;
retriable:
if (rc == -ERESTARTSYS) {
- QETH_DBF_TEXT(SETUP, 2, "break1");
+ QETH_CARD_TEXT(card, 2, "break1");
return rc;
} else if (rc) {
- QETH_DBF_TEXT_(SETUP, 2, "1err%d", rc);
+ QETH_CARD_TEXT_(card, 2, "1err%d", rc);
if (--retries < 0)
goto out;
else
@@ -5028,10 +4850,10 @@ retriable:
rc = qeth_idx_activate_read_channel(card);
if (rc == -EINTR) {
- QETH_DBF_TEXT(SETUP, 2, "break2");
+ QETH_CARD_TEXT(card, 2, "break2");
return rc;
} else if (rc) {
- QETH_DBF_TEXT_(SETUP, 2, "3err%d", rc);
+ QETH_CARD_TEXT_(card, 2, "3err%d", rc);
if (--retries < 0)
goto out;
else
@@ -5040,10 +4862,10 @@ retriable:
rc = qeth_idx_activate_write_channel(card);
if (rc == -EINTR) {
- QETH_DBF_TEXT(SETUP, 2, "break3");
+ QETH_CARD_TEXT(card, 2, "break3");
return rc;
} else if (rc) {
- QETH_DBF_TEXT_(SETUP, 2, "4err%d", rc);
+ QETH_CARD_TEXT_(card, 2, "4err%d", rc);
if (--retries < 0)
goto out;
else
@@ -5052,13 +4874,13 @@ retriable:
card->read_or_write_problem = 0;
rc = qeth_mpc_initialize(card);
if (rc) {
- QETH_DBF_TEXT_(SETUP, 2, "5err%d", rc);
+ QETH_CARD_TEXT_(card, 2, "5err%d", rc);
goto out;
}
rc = qeth_send_startlan(card);
if (rc) {
- QETH_DBF_TEXT_(SETUP, 2, "6err%d", rc);
+ QETH_CARD_TEXT_(card, 2, "6err%d", rc);
if (rc == -ENETDOWN) {
dev_warn(&card->gdev->dev, "The LAN is offline\n");
*carrier_ok = false;
@@ -5085,14 +4907,14 @@ retriable:
if (qeth_is_supported(card, IPA_SETADAPTERPARMS)) {
rc = qeth_query_setadapterparms(card);
if (rc < 0) {
- QETH_DBF_TEXT_(SETUP, 2, "7err%d", rc);
+ QETH_CARD_TEXT_(card, 2, "7err%d", rc);
goto out;
}
}
if (qeth_adp_supported(card, IPA_SETADP_SET_DIAG_ASSIST)) {
rc = qeth_query_setdiagass(card);
if (rc < 0) {
- QETH_DBF_TEXT_(SETUP, 2, "8err%d", rc);
+ QETH_CARD_TEXT_(card, 2, "8err%d", rc);
goto out;
}
}
@@ -5352,42 +5174,47 @@ EXPORT_SYMBOL_GPL(qeth_setassparms_cb);
struct qeth_cmd_buffer *qeth_get_setassparms_cmd(struct qeth_card *card,
enum qeth_ipa_funcs ipa_func,
- __u16 cmd_code, __u16 len,
+ u16 cmd_code,
+ unsigned int data_length,
enum qeth_prot_versions prot)
{
+ struct qeth_ipacmd_setassparms *setassparms;
+ struct qeth_ipacmd_setassparms_hdr *hdr;
struct qeth_cmd_buffer *iob;
- struct qeth_ipa_cmd *cmd;
QETH_CARD_TEXT(card, 4, "getasscm");
- iob = qeth_get_ipacmd_buffer(card, IPA_CMD_SETASSPARMS, prot);
+ iob = qeth_ipa_alloc_cmd(card, IPA_CMD_SETASSPARMS, prot,
+ data_length +
+ offsetof(struct qeth_ipacmd_setassparms,
+ data));
+ if (!iob)
+ return NULL;
- if (iob) {
- cmd = __ipa_cmd(iob);
- cmd->data.setassparms.hdr.assist_no = ipa_func;
- cmd->data.setassparms.hdr.length = 8 + len;
- cmd->data.setassparms.hdr.command_code = cmd_code;
- }
+ setassparms = &__ipa_cmd(iob)->data.setassparms;
+ setassparms->assist_no = ipa_func;
+ hdr = &setassparms->hdr;
+ hdr->length = sizeof(*hdr) + data_length;
+ hdr->command_code = cmd_code;
return iob;
}
EXPORT_SYMBOL_GPL(qeth_get_setassparms_cmd);
int qeth_send_simple_setassparms_prot(struct qeth_card *card,
enum qeth_ipa_funcs ipa_func,
- u16 cmd_code, long data,
+ u16 cmd_code, u32 *data,
enum qeth_prot_versions prot)
{
- int length = 0;
+ unsigned int length = data ? SETASS_DATA_SIZEOF(flags_32bit) : 0;
struct qeth_cmd_buffer *iob;
QETH_CARD_TEXT_(card, 4, "simassp%i", prot);
- if (data)
- length = sizeof(__u32);
iob = qeth_get_setassparms_cmd(card, ipa_func, cmd_code, length, prot);
if (!iob)
return -ENOMEM;
- __ipa_cmd(iob)->data.setassparms.data.flags_32bit = (__u32) data;
+ if (data)
+ __ipa_cmd(iob)->data.setassparms.data.flags_32bit = *data;
return qeth_send_ipa_cmd(card, iob, qeth_setassparms_cb, NULL);
}
EXPORT_SYMBOL_GPL(qeth_send_simple_setassparms_prot);
@@ -5670,6 +5497,8 @@ static int qeth_core_probe_device(struct ccwgroup_device *gdev)
if (rc)
goto err_chp_desc;
qeth_determine_capabilities(card);
+ qeth_set_blkt_defaults(card);
+
enforced_disc = qeth_enforce_discipline(card);
switch (enforced_disc) {
case QETH_DISCIPLINE_UNDETERMINED:
@@ -5707,7 +5536,7 @@ static void qeth_core_remove_device(struct ccwgroup_device *gdev)
{
struct qeth_card *card = dev_get_drvdata(&gdev->dev);
- QETH_DBF_TEXT(SETUP, 2, "removedv");
+ QETH_CARD_TEXT(card, 2, "removedv");
if (card->discipline) {
card->discipline->remove(gdev);
@@ -5759,28 +5588,30 @@ static void qeth_core_shutdown(struct ccwgroup_device *gdev)
qdio_free(CARD_DDEV(card));
}
-static int qeth_core_freeze(struct ccwgroup_device *gdev)
+static int qeth_suspend(struct ccwgroup_device *gdev)
{
struct qeth_card *card = dev_get_drvdata(&gdev->dev);
- if (card->discipline && card->discipline->freeze)
- return card->discipline->freeze(gdev);
- return 0;
-}
-static int qeth_core_thaw(struct ccwgroup_device *gdev)
-{
- struct qeth_card *card = dev_get_drvdata(&gdev->dev);
- if (card->discipline && card->discipline->thaw)
- return card->discipline->thaw(gdev);
+ qeth_set_allowed_threads(card, 0, 1);
+ wait_event(card->wait_q, qeth_threads_running(card, 0xffffffff) == 0);
+ if (gdev->state == CCWGROUP_OFFLINE)
+ return 0;
+
+ card->discipline->set_offline(gdev);
return 0;
}
-static int qeth_core_restore(struct ccwgroup_device *gdev)
+static int qeth_resume(struct ccwgroup_device *gdev)
{
struct qeth_card *card = dev_get_drvdata(&gdev->dev);
- if (card->discipline && card->discipline->restore)
- return card->discipline->restore(gdev);
- return 0;
+ int rc;
+
+ rc = card->discipline->set_online(gdev);
+
+ qeth_set_allowed_threads(card, 0xffffffff, 0);
+ if (rc)
+ dev_warn(&card->gdev->dev, "The qeth device driver failed to recover an error on the device\n");
+ return rc;
}
static ssize_t group_store(struct device_driver *ddrv, const char *buf,
@@ -5821,9 +5652,9 @@ static struct ccwgroup_driver qeth_core_ccwgroup_driver = {
.shutdown = qeth_core_shutdown,
.prepare = NULL,
.complete = NULL,
- .freeze = qeth_core_freeze,
- .thaw = qeth_core_thaw,
- .restore = qeth_core_restore,
+ .freeze = qeth_suspend,
+ .thaw = qeth_resume,
+ .restore = qeth_resume,
};
struct qeth_card *qeth_get_card_by_busid(char *bus_id)
@@ -5902,8 +5733,8 @@ static int qeth_start_csum_cb(struct qeth_card *card, struct qeth_reply *reply,
static int qeth_set_csum_off(struct qeth_card *card, enum qeth_ipa_funcs cstype,
enum qeth_prot_versions prot)
{
- return qeth_send_simple_setassparms_prot(card, cstype,
- IPA_CMD_ASS_STOP, 0, prot);
+ return qeth_send_simple_setassparms_prot(card, cstype, IPA_CMD_ASS_STOP,
+ NULL, prot);
}
static int qeth_set_csum_on(struct qeth_card *card, enum qeth_ipa_funcs cstype,
@@ -5934,7 +5765,8 @@ static int qeth_set_csum_on(struct qeth_card *card, enum qeth_ipa_funcs cstype,
return -EOPNOTSUPP;
}
- iob = qeth_get_setassparms_cmd(card, cstype, IPA_CMD_ASS_ENABLE, 4,
+ iob = qeth_get_setassparms_cmd(card, cstype, IPA_CMD_ASS_ENABLE,
+ SETASS_DATA_SIZEOF(flags_32bit),
prot);
if (!iob) {
qeth_set_csum_off(card, cstype, prot);
@@ -5991,7 +5823,7 @@ static int qeth_set_tso_off(struct qeth_card *card,
enum qeth_prot_versions prot)
{
return qeth_send_simple_setassparms_prot(card, IPA_OUTBOUND_TSO,
- IPA_CMD_ASS_STOP, 0, prot);
+ IPA_CMD_ASS_STOP, NULL, prot);
}
static int qeth_set_tso_on(struct qeth_card *card,
@@ -6017,7 +5849,8 @@ static int qeth_set_tso_on(struct qeth_card *card,
}
iob = qeth_get_setassparms_cmd(card, IPA_OUTBOUND_TSO,
- IPA_CMD_ASS_ENABLE, sizeof(caps), prot);
+ IPA_CMD_ASS_ENABLE,
+ SETASS_DATA_SIZEOF(caps), prot);
if (!iob) {
qeth_set_tso_off(card, prot);
return -ENOMEM;
@@ -6104,8 +5937,8 @@ int qeth_set_features(struct net_device *dev, netdev_features_t features)
netdev_features_t changed = dev->features ^ features;
int rc = 0;
- QETH_DBF_TEXT(SETUP, 2, "setfeat");
- QETH_DBF_HEX(SETUP, 2, &features, sizeof(features));
+ QETH_CARD_TEXT(card, 2, "setfeat");
+ QETH_CARD_HEX(card, 2, &features, sizeof(features));
if ((changed & NETIF_F_IP_CSUM)) {
rc = qeth_set_ipa_csum(card, features & NETIF_F_IP_CSUM,
@@ -6151,7 +5984,7 @@ netdev_features_t qeth_fix_features(struct net_device *dev,
{
struct qeth_card *card = dev->ml_priv;
- QETH_DBF_TEXT(SETUP, 2, "fixfeat");
+ QETH_CARD_TEXT(card, 2, "fixfeat");
if (!qeth_is_supported(card, IPA_OUTBOUND_CHECKSUM))
features &= ~NETIF_F_IP_CSUM;
if (!qeth_is_supported6(card, IPA_OUTBOUND_CHECKSUM_V6))
@@ -6164,7 +5997,7 @@ netdev_features_t qeth_fix_features(struct net_device *dev,
if (!qeth_is_supported6(card, IPA_OUTBOUND_TSO))
features &= ~NETIF_F_TSO6;
- QETH_DBF_HEX(SETUP, 2, &features, sizeof(features));
+ QETH_CARD_HEX(card, 2, &features, sizeof(features));
return features;
}
EXPORT_SYMBOL_GPL(qeth_fix_features);
diff --git a/drivers/s390/net/qeth_core_mpc.h b/drivers/s390/net/qeth_core_mpc.h
index f5237b7c14c4..75b5834ed28d 100644
--- a/drivers/s390/net/qeth_core_mpc.h
+++ b/drivers/s390/net/qeth_core_mpc.h
@@ -31,14 +31,12 @@ extern unsigned char IPA_PDU_HEADER[];
#define QETH_CLEAR_CHANNEL_PARM -10
#define QETH_HALT_CHANNEL_PARM -11
-#define QETH_RCD_PARM -12
static inline bool qeth_intparm_is_iob(unsigned long intparm)
{
switch (intparm) {
case QETH_CLEAR_CHANNEL_PARM:
case QETH_HALT_CHANNEL_PARM:
- case QETH_RCD_PARM:
case 0:
return false;
}
@@ -381,9 +379,7 @@ struct qeth_ipacmd_layer2setdelvlan {
__u16 vlan_id;
} __attribute__ ((packed));
-
struct qeth_ipacmd_setassparms_hdr {
- __u32 assist_no;
__u16 length;
__u16 command_code;
__u16 return_code;
@@ -428,6 +424,7 @@ struct qeth_tso_start_data {
/* SETASSPARMS IPA Command: */
struct qeth_ipacmd_setassparms {
+ u32 assist_no;
struct qeth_ipacmd_setassparms_hdr hdr;
union {
__u32 flags_32bit;
@@ -439,6 +436,8 @@ struct qeth_ipacmd_setassparms {
} data;
} __attribute__ ((packed));
+#define SETASS_DATA_SIZEOF(field) FIELD_SIZEOF(struct qeth_ipacmd_setassparms,\
+ data.field)
/* SETRTG IPA Command: ****************************************************/
struct qeth_set_routing {
@@ -526,8 +525,6 @@ struct qeth_query_switch_attributes {
#define QETH_SETADP_FLAGS_VIRTUAL_MAC 0x80 /* for CHANGE_ADDR_READ_MAC */
struct qeth_ipacmd_setadpparms_hdr {
- u32 supp_hw_cmds;
- u32 reserved1;
u16 cmdlength;
u16 reserved2;
u32 command_code;
@@ -539,6 +536,7 @@ struct qeth_ipacmd_setadpparms_hdr {
};
struct qeth_ipacmd_setadpparms {
+ struct qeth_ipa_caps hw_cmds;
struct qeth_ipacmd_setadpparms_hdr hdr;
union {
struct qeth_query_cmds_supp query_cmds_supp;
@@ -552,6 +550,9 @@ struct qeth_ipacmd_setadpparms {
} data;
} __attribute__ ((packed));
+#define SETADP_DATA_SIZEOF(field) FIELD_SIZEOF(struct qeth_ipacmd_setadpparms,\
+ data.field)
+
/* CREATE_ADDR IPA Command: ***********************************************/
struct qeth_create_destroy_address {
__u8 unique_id[8];
@@ -598,6 +599,11 @@ struct qeth_ipacmd_diagass {
__u8 cdata[64];
} __attribute__ ((packed));
+#define DIAG_HDR_LEN offsetofend(struct qeth_ipacmd_diagass, ext)
+#define DIAG_SUB_HDR_LEN (offsetofend(struct qeth_ipacmd_diagass, ext) -\
+ offsetof(struct qeth_ipacmd_diagass, \
+ subcmd_len))
+
/* VNIC Characteristics IPA Command: *****************************************/
/* IPA commands/sub commands for VNICC */
#define IPA_VNICC_QUERY_CHARS 0x00000000L
@@ -624,12 +630,6 @@ struct qeth_ipacmd_diagass {
/* VNICC header */
struct qeth_ipacmd_vnicc_hdr {
- u32 sup;
- u32 cur;
-};
-
-/* VNICC sub command header */
-struct qeth_vnicc_sub_hdr {
u16 data_length;
u16 reserved;
u32 sub_command;
@@ -654,15 +654,18 @@ struct qeth_vnicc_getset_timeout {
/* complete VNICC IPA command message */
struct qeth_ipacmd_vnicc {
+ struct qeth_ipa_caps vnicc_cmds;
struct qeth_ipacmd_vnicc_hdr hdr;
- struct qeth_vnicc_sub_hdr sub_hdr;
union {
struct qeth_vnicc_query_cmds query_cmds;
struct qeth_vnicc_set_char set_char;
struct qeth_vnicc_getset_timeout getset_timeout;
- };
+ } data;
};
+#define VNICC_DATA_SIZEOF(field) FIELD_SIZEOF(struct qeth_ipacmd_vnicc,\
+ data.field)
+
/* SETBRIDGEPORT IPA Command: *********************************************/
enum qeth_ipa_sbp_cmd {
IPA_SBP_QUERY_COMMANDS_SUPPORTED = 0x00000000L,
@@ -688,8 +691,6 @@ struct mac_addr_lnid {
} __packed;
struct qeth_ipacmd_sbp_hdr {
- __u32 supported_sbp_cmds;
- __u32 enabled_sbp_cmds;
__u16 cmdlength;
__u16 reserved1;
__u32 command_code;
@@ -704,16 +705,10 @@ struct qeth_sbp_query_cmds_supp {
__u32 reserved;
} __packed;
-struct qeth_sbp_reset_role {
-} __packed;
-
struct qeth_sbp_set_primary {
struct net_if_token token;
} __packed;
-struct qeth_sbp_set_secondary {
-} __packed;
-
struct qeth_sbp_port_entry {
__u8 role;
__u8 state;
@@ -739,17 +734,19 @@ struct qeth_sbp_state_change {
} __packed;
struct qeth_ipacmd_setbridgeport {
+ struct qeth_ipa_caps sbp_cmds;
struct qeth_ipacmd_sbp_hdr hdr;
union {
struct qeth_sbp_query_cmds_supp query_cmds_supp;
- struct qeth_sbp_reset_role reset_role;
struct qeth_sbp_set_primary set_primary;
- struct qeth_sbp_set_secondary set_secondary;
struct qeth_sbp_query_ports query_ports;
struct qeth_sbp_state_change state_change;
} data;
} __packed;
+#define SBP_DATA_SIZEOF(field) FIELD_SIZEOF(struct qeth_ipacmd_setbridgeport,\
+ data.field)
+
/* ADDRESS_CHANGE_NOTIFICATION adapter-initiated "command" *******************/
/* Bitmask for entry->change_code. Both bits may be raised. */
enum qeth_ipa_addr_change_code {
@@ -808,6 +805,8 @@ struct qeth_ipa_cmd {
} data;
} __attribute__ ((packed));
+#define IPA_DATA_SIZEOF(field) FIELD_SIZEOF(struct qeth_ipa_cmd, data.field)
+
/*
* special command for ARP processing.
* this is not included in setassparms command before, because we get
@@ -825,10 +824,6 @@ enum qeth_ipa_arp_return_codes {
extern const char *qeth_get_ipa_msg(enum qeth_ipa_return_codes rc);
extern const char *qeth_get_ipa_cmd_name(enum qeth_ipa_cmds cmd);
-#define QETH_SETADP_BASE_LEN (sizeof(struct qeth_ipacmd_hdr) + \
- sizeof(struct qeth_ipacmd_setadpparms_hdr))
-#define QETH_SNMP_SETADP_CMDLENGTH 16
-
/* Helper functions */
#define IS_IPA_REPLY(cmd) ((cmd->hdr.initiator == IPA_CMD_INITIATOR_HOST) || \
(cmd->hdr.initiator == IPA_CMD_INITIATOR_OSA_REPLY))
diff --git a/drivers/s390/net/qeth_l2_main.c b/drivers/s390/net/qeth_l2_main.c
index ff8a6cd790b1..fd64bc3f4062 100644
--- a/drivers/s390/net/qeth_l2_main.c
+++ b/drivers/s390/net/qeth_l2_main.c
@@ -85,7 +85,8 @@ static int qeth_l2_send_setdelmac(struct qeth_card *card, __u8 *mac,
struct qeth_cmd_buffer *iob;
QETH_CARD_TEXT(card, 2, "L2sdmac");
- iob = qeth_get_ipacmd_buffer(card, ipacmd, QETH_PROT_IPV4);
+ iob = qeth_ipa_alloc_cmd(card, ipacmd, QETH_PROT_IPV4,
+ IPA_DATA_SIZEOF(setdelmac));
if (!iob)
return -ENOMEM;
cmd = __ipa_cmd(iob);
@@ -163,8 +164,9 @@ static void qeth_l2_drain_rx_mode_cache(struct qeth_card *card)
static void qeth_l2_fill_header(struct qeth_qdio_out_q *queue,
struct qeth_hdr *hdr, struct sk_buff *skb,
- int ipv, int cast_type, unsigned int data_len)
+ int ipv, unsigned int data_len)
{
+ int cast_type = qeth_get_ether_cast_type(skb);
struct vlan_ethhdr *veth = vlan_eth_hdr(skb);
hdr->hdr.l2.pkt_length = data_len;
@@ -240,7 +242,8 @@ static int qeth_l2_send_setdelvlan(struct qeth_card *card, __u16 i,
struct qeth_cmd_buffer *iob;
QETH_CARD_TEXT_(card, 4, "L2sdv%x", ipacmd);
- iob = qeth_get_ipacmd_buffer(card, ipacmd, QETH_PROT_IPV4);
+ iob = qeth_ipa_alloc_cmd(card, ipacmd, QETH_PROT_IPV4,
+ IPA_DATA_SIZEOF(setdelvlan));
if (!iob)
return -ENOMEM;
cmd = __ipa_cmd(iob);
@@ -274,8 +277,7 @@ static int qeth_l2_vlan_rx_kill_vid(struct net_device *dev,
static void qeth_l2_stop_card(struct qeth_card *card)
{
- QETH_DBF_TEXT(SETUP , 2, "stopcard");
- QETH_DBF_HEX(SETUP, 2, &card, sizeof(void *));
+ QETH_CARD_TEXT(card, 2, "stopcard");
qeth_set_allowed_threads(card, 0, 1);
@@ -292,10 +294,6 @@ static void qeth_l2_stop_card(struct qeth_card *card)
qeth_clear_working_pool_list(card);
card->state = CARD_STATE_DOWN;
}
- if (card->state == CARD_STATE_DOWN) {
- qeth_clear_cmd_buffers(&card->read);
- qeth_clear_cmd_buffers(&card->write);
- }
flush_workqueue(card->event_wq);
card->info.mac_bits &= ~QETH_LAYER2_MAC_REGISTERED;
@@ -354,8 +352,7 @@ static int qeth_l2_request_initial_mac(struct qeth_card *card)
{
int rc = 0;
- QETH_DBF_TEXT(SETUP, 2, "l2reqmac");
- QETH_DBF_TEXT_(SETUP, 2, "doL2%s", CARD_BUS_ID(card));
+ QETH_CARD_TEXT(card, 2, "l2reqmac");
if (MACHINE_IS_VM) {
rc = qeth_vm_request_mac(card);
@@ -363,7 +360,7 @@ static int qeth_l2_request_initial_mac(struct qeth_card *card)
goto out;
QETH_DBF_MESSAGE(2, "z/VM MAC Service failed on device %x: %#x\n",
CARD_DEVID(card), rc);
- QETH_DBF_TEXT_(SETUP, 2, "err%04x", rc);
+ QETH_CARD_TEXT_(card, 2, "err%04x", rc);
/* fall back to alternative mechanism: */
}
@@ -373,7 +370,7 @@ static int qeth_l2_request_initial_mac(struct qeth_card *card)
goto out;
QETH_DBF_MESSAGE(2, "READ_MAC Assist failed on device %x: %#x\n",
CARD_DEVID(card), rc);
- QETH_DBF_TEXT_(SETUP, 2, "1err%04x", rc);
+ QETH_CARD_TEXT_(card, 2, "1err%04x", rc);
/* fall back once more: */
}
@@ -383,7 +380,7 @@ static int qeth_l2_request_initial_mac(struct qeth_card *card)
eth_hw_addr_random(card->dev);
out:
- QETH_DBF_HEX(SETUP, 2, card->dev->dev_addr, card->dev->addr_len);
+ QETH_CARD_HEX(card, 2, card->dev->dev_addr, card->dev->addr_len);
return 0;
}
@@ -467,7 +464,7 @@ static void qeth_promisc_to_bridge(struct qeth_card *card)
role = QETH_SBP_ROLE_NONE;
rc = qeth_bridgeport_setrole(card, role);
- QETH_DBF_TEXT_(SETUP, 2, "bpm%c%04x",
+ QETH_CARD_TEXT_(card, 2, "bpm%c%04x",
(promisc_mode == SET_PROMISC_MODE_ON) ? '+' : '-', rc);
if (!rc) {
card->options.sbp.role = role;
@@ -602,7 +599,6 @@ static netdev_tx_t qeth_l2_hard_start_xmit(struct sk_buff *skb,
rc = qeth_l2_xmit_osn(card, skb, queue);
else
rc = qeth_xmit(card, skb, queue, qeth_get_ip_version(skb),
- qeth_get_ether_cast_type(skb),
qeth_l2_fill_header);
if (!rc) {
@@ -796,12 +792,11 @@ static int qeth_l2_set_online(struct ccwgroup_device *gdev)
mutex_lock(&card->discipline_mutex);
mutex_lock(&card->conf_mutex);
- QETH_DBF_TEXT(SETUP, 2, "setonlin");
- QETH_DBF_HEX(SETUP, 2, &card, sizeof(void *));
+ QETH_CARD_TEXT(card, 2, "setonlin");
rc = qeth_core_hardsetup_card(card, &carrier_ok);
if (rc) {
- QETH_DBF_TEXT_(SETUP, 2, "2err%04x", rc);
+ QETH_CARD_TEXT_(card, 2, "2err%04x", rc);
rc = -ENODEV;
goto out_remove;
}
@@ -832,7 +827,7 @@ static int qeth_l2_set_online(struct ccwgroup_device *gdev)
qeth_print_status_message(card);
/* softsetup */
- QETH_DBF_TEXT(SETUP, 2, "softsetp");
+ QETH_CARD_TEXT(card, 2, "softsetp");
if (IS_OSD(card) || IS_OSX(card)) {
rc = qeth_l2_start_ipassists(card);
@@ -842,7 +837,7 @@ static int qeth_l2_set_online(struct ccwgroup_device *gdev)
rc = qeth_init_qdio_queues(card);
if (rc) {
- QETH_DBF_TEXT_(SETUP, 2, "6err%d", rc);
+ QETH_CARD_TEXT_(card, 2, "6err%d", rc);
rc = -ENODEV;
goto out_remove;
}
@@ -882,7 +877,6 @@ out_remove:
ccw_device_set_offline(CARD_WDEV(card));
ccw_device_set_offline(CARD_RDEV(card));
qdio_free(CARD_DDEV(card));
- card->state = CARD_STATE_DOWN;
mutex_unlock(&card->conf_mutex);
mutex_unlock(&card->discipline_mutex);
@@ -897,8 +891,7 @@ static int __qeth_l2_set_offline(struct ccwgroup_device *cgdev,
mutex_lock(&card->discipline_mutex);
mutex_lock(&card->conf_mutex);
- QETH_DBF_TEXT(SETUP, 3, "setoffl");
- QETH_DBF_HEX(SETUP, 3, &card, sizeof(void *));
+ QETH_CARD_TEXT(card, 3, "setoffl");
if ((!recovery_mode && card->info.hwtrap) || card->info.hwtrap == 2) {
qeth_hw_trap(card, QETH_DIAGS_TRAP_DISARM);
@@ -919,7 +912,7 @@ static int __qeth_l2_set_offline(struct ccwgroup_device *cgdev,
if (!rc)
rc = (rc2) ? rc2 : rc3;
if (rc)
- QETH_DBF_TEXT_(SETUP, 2, "1err%d", rc);
+ QETH_CARD_TEXT_(card, 2, "1err%d", rc);
qdio_free(CARD_DDEV(card));
/* let user_space know that device is offline */
@@ -972,33 +965,6 @@ static void __exit qeth_l2_exit(void)
pr_info("unregister layer 2 discipline\n");
}
-static int qeth_l2_pm_suspend(struct ccwgroup_device *gdev)
-{
- struct qeth_card *card = dev_get_drvdata(&gdev->dev);
-
- qeth_set_allowed_threads(card, 0, 1);
- wait_event(card->wait_q, qeth_threads_running(card, 0xffffffff) == 0);
- if (gdev->state == CCWGROUP_OFFLINE)
- return 0;
-
- qeth_l2_set_offline(gdev);
- return 0;
-}
-
-static int qeth_l2_pm_resume(struct ccwgroup_device *gdev)
-{
- struct qeth_card *card = dev_get_drvdata(&gdev->dev);
- int rc;
-
- rc = qeth_l2_set_online(gdev);
-
- qeth_set_allowed_threads(card, 0xffffffff, 0);
- if (rc)
- dev_warn(&card->gdev->dev, "The qeth device driver "
- "failed to recover an error on the device\n");
- return rc;
-}
-
/* Returns zero if the command is successfully "consumed" */
static int qeth_l2_control_event(struct qeth_card *card,
struct qeth_ipa_cmd *cmd)
@@ -1028,50 +994,16 @@ struct qeth_discipline qeth_l2_discipline = {
.remove = qeth_l2_remove_device,
.set_online = qeth_l2_set_online,
.set_offline = qeth_l2_set_offline,
- .freeze = qeth_l2_pm_suspend,
- .thaw = qeth_l2_pm_resume,
- .restore = qeth_l2_pm_resume,
.do_ioctl = NULL,
.control_event_handler = qeth_l2_control_event,
};
EXPORT_SYMBOL_GPL(qeth_l2_discipline);
-static int qeth_osn_send_control_data(struct qeth_card *card, int len,
- struct qeth_cmd_buffer *iob)
+static void qeth_osn_assist_cb(struct qeth_card *card,
+ struct qeth_cmd_buffer *iob)
{
- struct qeth_channel *channel = iob->channel;
- int rc = 0;
-
- QETH_CARD_TEXT(card, 5, "osndctrd");
-
- wait_event(card->wait_q, qeth_trylock_channel(channel));
- iob->finalize(card, iob, len);
- QETH_DBF_HEX(CTRL, 2, iob->data, min(len, QETH_DBF_CTRL_LEN));
- QETH_CARD_TEXT(card, 6, "osnoirqp");
- spin_lock_irq(get_ccwdev_lock(channel->ccwdev));
- rc = ccw_device_start_timeout(channel->ccwdev, channel->ccw,
- (addr_t) iob, 0, 0, iob->timeout);
- spin_unlock_irq(get_ccwdev_lock(channel->ccwdev));
- if (rc) {
- QETH_DBF_MESSAGE(2, "qeth_osn_send_control_data: "
- "ccw_device_start rc = %i\n", rc);
- QETH_CARD_TEXT_(card, 2, " err%d", rc);
- qeth_release_buffer(channel, iob);
- atomic_set(&channel->irq_pending, 0);
- wake_up(&card->wait_q);
- }
- return rc;
-}
-
-static int qeth_osn_send_ipa_cmd(struct qeth_card *card,
- struct qeth_cmd_buffer *iob)
-{
- u16 length;
-
- QETH_CARD_TEXT(card, 4, "osndipa");
-
- memcpy(&length, QETH_IPA_PDU_LEN_TOTAL(iob->data), 2);
- return qeth_osn_send_control_data(card, length, iob);
+ qeth_notify_reply(iob->reply, 0);
+ qeth_put_cmd(iob);
}
int qeth_osn_assist(struct net_device *dev, void *data, int data_len)
@@ -1079,6 +1011,8 @@ int qeth_osn_assist(struct net_device *dev, void *data, int data_len)
struct qeth_cmd_buffer *iob;
struct qeth_card *card;
+ if (data_len < 0)
+ return -EINVAL;
if (!dev)
return -ENODEV;
card = dev->ml_priv;
@@ -1087,10 +1021,16 @@ int qeth_osn_assist(struct net_device *dev, void *data, int data_len)
QETH_CARD_TEXT(card, 2, "osnsdmc");
if (!qeth_card_hw_is_reachable(card))
return -ENODEV;
- iob = qeth_wait_for_buffer(&card->write);
+
+ iob = qeth_alloc_cmd(&card->write, IPA_PDU_HEADER_SIZE + data_len, 1,
+ QETH_IPA_TIMEOUT);
+ if (!iob)
+ return -ENOMEM;
+
qeth_prepare_ipa_cmd(card, iob, (u16) data_len);
memcpy(__ipa_cmd(iob), data, data_len);
- return qeth_osn_send_ipa_cmd(card, iob);
+ iob->callback = qeth_osn_assist_cb;
+ return qeth_send_ipa_cmd(card, iob, NULL, NULL);
}
EXPORT_SYMBOL(qeth_osn_assist);
@@ -1456,22 +1396,25 @@ static int qeth_bridgeport_makerc(struct qeth_card *card,
static struct qeth_cmd_buffer *qeth_sbp_build_cmd(struct qeth_card *card,
enum qeth_ipa_sbp_cmd sbp_cmd,
- unsigned int cmd_length)
+ unsigned int data_length)
{
enum qeth_ipa_cmds ipa_cmd = IS_IQD(card) ? IPA_CMD_SETBRIDGEPORT_IQD :
IPA_CMD_SETBRIDGEPORT_OSA;
+ struct qeth_ipacmd_sbp_hdr *hdr;
struct qeth_cmd_buffer *iob;
- struct qeth_ipa_cmd *cmd;
- iob = qeth_get_ipacmd_buffer(card, ipa_cmd, 0);
+ iob = qeth_ipa_alloc_cmd(card, ipa_cmd, QETH_PROT_NONE,
+ data_length +
+ offsetof(struct qeth_ipacmd_setbridgeport,
+ data));
if (!iob)
return iob;
- cmd = __ipa_cmd(iob);
- cmd->data.sbp.hdr.cmdlength = sizeof(struct qeth_ipacmd_sbp_hdr) +
- cmd_length;
- cmd->data.sbp.hdr.command_code = sbp_cmd;
- cmd->data.sbp.hdr.used_total = 1;
- cmd->data.sbp.hdr.seq_no = 1;
+
+ hdr = &__ipa_cmd(iob)->data.sbp.hdr;
+ hdr->cmdlength = sizeof(*hdr) + data_length;
+ hdr->command_code = sbp_cmd;
+ hdr->used_total = 1;
+ hdr->seq_no = 1;
return iob;
}
@@ -1506,7 +1449,7 @@ static void qeth_bridgeport_query_support(struct qeth_card *card)
QETH_CARD_TEXT(card, 2, "brqsuppo");
iob = qeth_sbp_build_cmd(card, IPA_SBP_QUERY_COMMANDS_SUPPORTED,
- sizeof(struct qeth_sbp_query_cmds_supp));
+ SBP_DATA_SIZEOF(query_cmds_supp));
if (!iob)
return;
@@ -1598,23 +1541,21 @@ static int qeth_bridgeport_set_cb(struct qeth_card *card,
*/
int qeth_bridgeport_setrole(struct qeth_card *card, enum qeth_sbp_roles role)
{
- int cmdlength;
struct qeth_cmd_buffer *iob;
enum qeth_ipa_sbp_cmd setcmd;
+ unsigned int cmdlength = 0;
QETH_CARD_TEXT(card, 2, "brsetrol");
switch (role) {
case QETH_SBP_ROLE_NONE:
setcmd = IPA_SBP_RESET_BRIDGE_PORT_ROLE;
- cmdlength = sizeof(struct qeth_sbp_reset_role);
break;
case QETH_SBP_ROLE_PRIMARY:
setcmd = IPA_SBP_SET_PRIMARY_BRIDGE_PORT;
- cmdlength = sizeof(struct qeth_sbp_set_primary);
+ cmdlength = SBP_DATA_SIZEOF(set_primary);
break;
case QETH_SBP_ROLE_SECONDARY:
setcmd = IPA_SBP_SET_SECONDARY_BRIDGE_PORT;
- cmdlength = sizeof(struct qeth_sbp_set_secondary);
break;
default:
return -EINVAL;
@@ -1764,10 +1705,6 @@ static int qeth_l2_vnicc_makerc(struct qeth_card *card, u16 ipa_rc)
struct _qeth_l2_vnicc_request_cbctl {
u32 sub_cmd;
struct {
- u32 vnic_char;
- u32 timeout;
- } param;
- struct {
union{
u32 *sup_cmds;
u32 *timeout;
@@ -1789,80 +1726,52 @@ static int qeth_l2_vnicc_request_cb(struct qeth_card *card,
if (cmd->hdr.return_code)
return qeth_l2_vnicc_makerc(card, cmd->hdr.return_code);
/* return results to caller */
- card->options.vnicc.sup_chars = rep->hdr.sup;
- card->options.vnicc.cur_chars = rep->hdr.cur;
+ card->options.vnicc.sup_chars = rep->vnicc_cmds.supported;
+ card->options.vnicc.cur_chars = rep->vnicc_cmds.enabled;
if (cbctl->sub_cmd == IPA_VNICC_QUERY_CMDS)
- *cbctl->result.sup_cmds = rep->query_cmds.sup_cmds;
+ *cbctl->result.sup_cmds = rep->data.query_cmds.sup_cmds;
if (cbctl->sub_cmd == IPA_VNICC_GET_TIMEOUT)
- *cbctl->result.timeout = rep->getset_timeout.timeout;
+ *cbctl->result.timeout = rep->data.getset_timeout.timeout;
return 0;
}
-/* generic VNICC request */
-static int qeth_l2_vnicc_request(struct qeth_card *card,
- struct _qeth_l2_vnicc_request_cbctl *cbctl)
+static struct qeth_cmd_buffer *qeth_l2_vnicc_build_cmd(struct qeth_card *card,
+ u32 vnicc_cmd,
+ unsigned int data_length)
{
- struct qeth_ipacmd_vnicc *req;
+ struct qeth_ipacmd_vnicc_hdr *hdr;
struct qeth_cmd_buffer *iob;
- struct qeth_ipa_cmd *cmd;
-
- QETH_CARD_TEXT(card, 2, "vniccreq");
- /* get new buffer for request */
- iob = qeth_get_ipacmd_buffer(card, IPA_CMD_VNICC, 0);
+ iob = qeth_ipa_alloc_cmd(card, IPA_CMD_VNICC, QETH_PROT_NONE,
+ data_length +
+ offsetof(struct qeth_ipacmd_vnicc, data));
if (!iob)
- return -ENOMEM;
-
- /* create header for request */
- cmd = __ipa_cmd(iob);
- req = &cmd->data.vnicc;
-
- /* create sub command header for request */
- req->sub_hdr.data_length = sizeof(req->sub_hdr);
- req->sub_hdr.sub_command = cbctl->sub_cmd;
-
- /* create sub command specific request fields */
- switch (cbctl->sub_cmd) {
- case IPA_VNICC_QUERY_CHARS:
- break;
- case IPA_VNICC_QUERY_CMDS:
- req->sub_hdr.data_length += sizeof(req->query_cmds);
- req->query_cmds.vnic_char = cbctl->param.vnic_char;
- break;
- case IPA_VNICC_ENABLE:
- case IPA_VNICC_DISABLE:
- req->sub_hdr.data_length += sizeof(req->set_char);
- req->set_char.vnic_char = cbctl->param.vnic_char;
- break;
- case IPA_VNICC_SET_TIMEOUT:
- req->getset_timeout.timeout = cbctl->param.timeout;
- /* fallthrough */
- case IPA_VNICC_GET_TIMEOUT:
- req->sub_hdr.data_length += sizeof(req->getset_timeout);
- req->getset_timeout.vnic_char = cbctl->param.vnic_char;
- break;
- default:
- qeth_release_buffer(iob->channel, iob);
- return -EOPNOTSUPP;
- }
+ return NULL;
- /* send request */
- return qeth_send_ipa_cmd(card, iob, qeth_l2_vnicc_request_cb, cbctl);
+ hdr = &__ipa_cmd(iob)->data.vnicc.hdr;
+ hdr->data_length = sizeof(*hdr) + data_length;
+ hdr->sub_command = vnicc_cmd;
+ return iob;
}
/* VNICC query VNIC characteristics request */
static int qeth_l2_vnicc_query_chars(struct qeth_card *card)
{
struct _qeth_l2_vnicc_request_cbctl cbctl;
+ struct qeth_cmd_buffer *iob;
+
+ QETH_CARD_TEXT(card, 2, "vniccqch");
+ iob = qeth_l2_vnicc_build_cmd(card, IPA_VNICC_QUERY_CHARS, 0);
+ if (!iob)
+ return -ENOMEM;
/* prepare callback control */
cbctl.sub_cmd = IPA_VNICC_QUERY_CHARS;
- QETH_CARD_TEXT(card, 2, "vniccqch");
- return qeth_l2_vnicc_request(card, &cbctl);
+ return qeth_send_ipa_cmd(card, iob, qeth_l2_vnicc_request_cb, &cbctl);
}
/* VNICC query sub commands request */
@@ -1870,14 +1779,21 @@ static int qeth_l2_vnicc_query_cmds(struct qeth_card *card, u32 vnic_char,
u32 *sup_cmds)
{
struct _qeth_l2_vnicc_request_cbctl cbctl;
+ struct qeth_cmd_buffer *iob;
+
+ QETH_CARD_TEXT(card, 2, "vniccqcm");
+ iob = qeth_l2_vnicc_build_cmd(card, IPA_VNICC_QUERY_CMDS,
+ VNICC_DATA_SIZEOF(query_cmds));
+ if (!iob)
+ return -ENOMEM;
+
+ __ipa_cmd(iob)->data.vnicc.data.query_cmds.vnic_char = vnic_char;
/* prepare callback control */
cbctl.sub_cmd = IPA_VNICC_QUERY_CMDS;
- cbctl.param.vnic_char = vnic_char;
cbctl.result.sup_cmds = sup_cmds;
- QETH_CARD_TEXT(card, 2, "vniccqcm");
- return qeth_l2_vnicc_request(card, &cbctl);
+ return qeth_send_ipa_cmd(card, iob, qeth_l2_vnicc_request_cb, &cbctl);
}
/* VNICC enable/disable characteristic request */
@@ -1885,31 +1801,47 @@ static int qeth_l2_vnicc_set_char(struct qeth_card *card, u32 vnic_char,
u32 cmd)
{
struct _qeth_l2_vnicc_request_cbctl cbctl;
+ struct qeth_cmd_buffer *iob;
+
+ QETH_CARD_TEXT(card, 2, "vniccedc");
+ iob = qeth_l2_vnicc_build_cmd(card, cmd, VNICC_DATA_SIZEOF(set_char));
+ if (!iob)
+ return -ENOMEM;
+
+ __ipa_cmd(iob)->data.vnicc.data.set_char.vnic_char = vnic_char;
/* prepare callback control */
cbctl.sub_cmd = cmd;
- cbctl.param.vnic_char = vnic_char;
- QETH_CARD_TEXT(card, 2, "vniccedc");
- return qeth_l2_vnicc_request(card, &cbctl);
+ return qeth_send_ipa_cmd(card, iob, qeth_l2_vnicc_request_cb, &cbctl);
}
/* VNICC get/set timeout for characteristic request */
static int qeth_l2_vnicc_getset_timeout(struct qeth_card *card, u32 vnicc,
u32 cmd, u32 *timeout)
{
+ struct qeth_vnicc_getset_timeout *getset_timeout;
struct _qeth_l2_vnicc_request_cbctl cbctl;
+ struct qeth_cmd_buffer *iob;
+
+ QETH_CARD_TEXT(card, 2, "vniccgst");
+ iob = qeth_l2_vnicc_build_cmd(card, cmd,
+ VNICC_DATA_SIZEOF(getset_timeout));
+ if (!iob)
+ return -ENOMEM;
+
+ getset_timeout = &__ipa_cmd(iob)->data.vnicc.data.getset_timeout;
+ getset_timeout->vnic_char = vnicc;
+
+ if (cmd == IPA_VNICC_SET_TIMEOUT)
+ getset_timeout->timeout = *timeout;
/* prepare callback control */
cbctl.sub_cmd = cmd;
- cbctl.param.vnic_char = vnicc;
- if (cmd == IPA_VNICC_SET_TIMEOUT)
- cbctl.param.timeout = *timeout;
if (cmd == IPA_VNICC_GET_TIMEOUT)
cbctl.result.timeout = timeout;
- QETH_CARD_TEXT(card, 2, "vniccgst");
- return qeth_l2_vnicc_request(card, &cbctl);
+ return qeth_send_ipa_cmd(card, iob, qeth_l2_vnicc_request_cb, &cbctl);
}
/* set current VNICC flag state; called from sysfs store function */
diff --git a/drivers/s390/net/qeth_l3_main.c b/drivers/s390/net/qeth_l3_main.c
index 13bf3e2e9cea..2dd99f103671 100644
--- a/drivers/s390/net/qeth_l3_main.c
+++ b/drivers/s390/net/qeth_l3_main.c
@@ -32,7 +32,6 @@
#include <net/route.h>
#include <net/ipv6.h>
#include <net/ip6_route.h>
-#include <net/ip6_fib.h>
#include <net/iucv/af_iucv.h>
#include <linux/hashtable.h>
@@ -377,7 +376,8 @@ static int qeth_l3_send_setdelmc(struct qeth_card *card,
QETH_CARD_TEXT(card, 4, "setdelmc");
- iob = qeth_get_ipacmd_buffer(card, ipacmd, addr->proto);
+ iob = qeth_ipa_alloc_cmd(card, ipacmd, addr->proto,
+ IPA_DATA_SIZEOF(setdelipm));
if (!iob)
return -ENOMEM;
cmd = __ipa_cmd(iob);
@@ -429,7 +429,8 @@ static int qeth_l3_send_setdelip(struct qeth_card *card,
QETH_CARD_TEXT(card, 4, "setdelip");
- iob = qeth_get_ipacmd_buffer(card, ipacmd, addr->proto);
+ iob = qeth_ipa_alloc_cmd(card, ipacmd, addr->proto,
+ IPA_DATA_SIZEOF(setdelip6));
if (!iob)
return -ENOMEM;
cmd = __ipa_cmd(iob);
@@ -461,7 +462,8 @@ static int qeth_l3_send_setrouting(struct qeth_card *card,
struct qeth_cmd_buffer *iob;
QETH_CARD_TEXT(card, 4, "setroutg");
- iob = qeth_get_ipacmd_buffer(card, IPA_CMD_SETRTG, prot);
+ iob = qeth_ipa_alloc_cmd(card, IPA_CMD_SETRTG, prot,
+ IPA_DATA_SIZEOF(setrtg));
if (!iob)
return -ENOMEM;
cmd = __ipa_cmd(iob);
@@ -742,7 +744,7 @@ static int qeth_l3_setadapter_parms(struct qeth_card *card)
{
int rc = 0;
- QETH_DBF_TEXT(SETUP, 2, "setadprm");
+ QETH_CARD_TEXT(card, 2, "setadprm");
if (qeth_adp_supported(card, IPA_SETADP_ALTER_MAC_ADDRESS)) {
rc = qeth_setadpparms_change_macaddr(card);
@@ -767,7 +769,7 @@ static int qeth_l3_start_ipa_arp_processing(struct qeth_card *card)
return 0;
}
rc = qeth_send_simple_setassparms(card, IPA_ARP_PROCESSING,
- IPA_CMD_ASS_START, 0);
+ IPA_CMD_ASS_START, NULL);
if (rc) {
dev_warn(&card->gdev->dev,
"Starting ARP processing support for %s failed\n",
@@ -790,7 +792,7 @@ static int qeth_l3_start_ipa_source_mac(struct qeth_card *card)
}
rc = qeth_send_simple_setassparms(card, IPA_SOURCE_MAC,
- IPA_CMD_ASS_START, 0);
+ IPA_CMD_ASS_START, NULL);
if (rc)
dev_warn(&card->gdev->dev,
"Starting source MAC-address support for %s failed\n",
@@ -811,7 +813,7 @@ static int qeth_l3_start_ipa_vlan(struct qeth_card *card)
}
rc = qeth_send_simple_setassparms(card, IPA_VLAN_PRIO,
- IPA_CMD_ASS_START, 0);
+ IPA_CMD_ASS_START, NULL);
if (rc) {
dev_warn(&card->gdev->dev,
"Starting VLAN support for %s failed\n",
@@ -836,7 +838,7 @@ static int qeth_l3_start_ipa_multicast(struct qeth_card *card)
}
rc = qeth_send_simple_setassparms(card, IPA_MULTICASTING,
- IPA_CMD_ASS_START, 0);
+ IPA_CMD_ASS_START, NULL);
if (rc) {
dev_warn(&card->gdev->dev,
"Starting multicast support for %s failed\n",
@@ -850,6 +852,7 @@ static int qeth_l3_start_ipa_multicast(struct qeth_card *card)
static int qeth_l3_softsetup_ipv6(struct qeth_card *card)
{
+ u32 ipv6_data = 3;
int rc;
QETH_CARD_TEXT(card, 3, "softipv6");
@@ -857,16 +860,16 @@ static int qeth_l3_softsetup_ipv6(struct qeth_card *card)
if (IS_IQD(card))
goto out;
- rc = qeth_send_simple_setassparms(card, IPA_IPV6,
- IPA_CMD_ASS_START, 3);
+ rc = qeth_send_simple_setassparms(card, IPA_IPV6, IPA_CMD_ASS_START,
+ &ipv6_data);
if (rc) {
dev_err(&card->gdev->dev,
"Activating IPv6 support for %s failed\n",
QETH_CARD_IFNAME(card));
return rc;
}
- rc = qeth_send_simple_setassparms_v6(card, IPA_IPV6,
- IPA_CMD_ASS_START, 0);
+ rc = qeth_send_simple_setassparms_v6(card, IPA_IPV6, IPA_CMD_ASS_START,
+ NULL);
if (rc) {
dev_err(&card->gdev->dev,
"Activating IPv6 support for %s failed\n",
@@ -874,7 +877,7 @@ static int qeth_l3_softsetup_ipv6(struct qeth_card *card)
return rc;
}
rc = qeth_send_simple_setassparms_v6(card, IPA_PASSTHRU,
- IPA_CMD_ASS_START, 0);
+ IPA_CMD_ASS_START, NULL);
if (rc) {
dev_warn(&card->gdev->dev,
"Enabling the passthrough mode for %s failed\n",
@@ -900,6 +903,7 @@ static int qeth_l3_start_ipa_ipv6(struct qeth_card *card)
static int qeth_l3_start_ipa_broadcast(struct qeth_card *card)
{
+ u32 filter_data = 1;
int rc;
QETH_CARD_TEXT(card, 3, "stbrdcst");
@@ -912,7 +916,7 @@ static int qeth_l3_start_ipa_broadcast(struct qeth_card *card)
goto out;
}
rc = qeth_send_simple_setassparms(card, IPA_FILTERING,
- IPA_CMD_ASS_START, 0);
+ IPA_CMD_ASS_START, NULL);
if (rc) {
dev_warn(&card->gdev->dev, "Enabling broadcast filtering for "
"%s failed\n", QETH_CARD_IFNAME(card));
@@ -920,7 +924,7 @@ static int qeth_l3_start_ipa_broadcast(struct qeth_card *card)
}
rc = qeth_send_simple_setassparms(card, IPA_FILTERING,
- IPA_CMD_ASS_CONFIGURE, 1);
+ IPA_CMD_ASS_CONFIGURE, &filter_data);
if (rc) {
dev_warn(&card->gdev->dev,
"Setting up broadcast filtering for %s failed\n",
@@ -930,7 +934,7 @@ static int qeth_l3_start_ipa_broadcast(struct qeth_card *card)
card->info.broadcast_capable = QETH_BROADCAST_WITH_ECHO;
dev_info(&card->gdev->dev, "Broadcast enabled\n");
rc = qeth_send_simple_setassparms(card, IPA_FILTERING,
- IPA_CMD_ASS_ENABLE, 1);
+ IPA_CMD_ASS_ENABLE, &filter_data);
if (rc) {
dev_warn(&card->gdev->dev, "Setting up broadcast echo "
"filtering for %s failed\n", QETH_CARD_IFNAME(card));
@@ -979,10 +983,10 @@ static int qeth_l3_iqd_read_initial_mac(struct qeth_card *card)
struct qeth_cmd_buffer *iob;
struct qeth_ipa_cmd *cmd;
- QETH_DBF_TEXT(SETUP, 2, "hsrmac");
+ QETH_CARD_TEXT(card, 2, "hsrmac");
- iob = qeth_get_ipacmd_buffer(card, IPA_CMD_CREATE_ADDR,
- QETH_PROT_IPV6);
+ iob = qeth_ipa_alloc_cmd(card, IPA_CMD_CREATE_ADDR, QETH_PROT_IPV6,
+ IPA_DATA_SIZEOF(create_destroy_addr));
if (!iob)
return -ENOMEM;
cmd = __ipa_cmd(iob);
@@ -1017,7 +1021,7 @@ static int qeth_l3_get_unique_id(struct qeth_card *card)
struct qeth_cmd_buffer *iob;
struct qeth_ipa_cmd *cmd;
- QETH_DBF_TEXT(SETUP, 2, "guniqeid");
+ QETH_CARD_TEXT(card, 2, "guniqeid");
if (!qeth_is_supported(card, IPA_IPV6)) {
card->info.unique_id = UNIQUE_ID_IF_CREATE_ADDR_FAILED |
@@ -1025,8 +1029,8 @@ static int qeth_l3_get_unique_id(struct qeth_card *card)
return 0;
}
- iob = qeth_get_ipacmd_buffer(card, IPA_CMD_CREATE_ADDR,
- QETH_PROT_IPV6);
+ iob = qeth_ipa_alloc_cmd(card, IPA_CMD_CREATE_ADDR, QETH_PROT_IPV6,
+ IPA_DATA_SIZEOF(create_destroy_addr));
if (!iob)
return -ENOMEM;
cmd = __ipa_cmd(iob);
@@ -1044,7 +1048,7 @@ qeth_diags_trace_cb(struct qeth_card *card, struct qeth_reply *reply,
struct qeth_ipa_cmd *cmd;
__u16 rc;
- QETH_DBF_TEXT(SETUP, 2, "diastrcb");
+ QETH_CARD_TEXT(card, 2, "diastrcb");
cmd = (struct qeth_ipa_cmd *)data;
rc = cmd->hdr.return_code;
@@ -1100,14 +1104,12 @@ qeth_diags_trace(struct qeth_card *card, enum qeth_diags_trace_cmds diags_cmd)
struct qeth_cmd_buffer *iob;
struct qeth_ipa_cmd *cmd;
- QETH_DBF_TEXT(SETUP, 2, "diagtrac");
+ QETH_CARD_TEXT(card, 2, "diagtrac");
- iob = qeth_get_ipacmd_buffer(card, IPA_CMD_SET_DIAG_ASS, 0);
+ iob = qeth_get_diag_cmd(card, QETH_DIAGS_CMD_TRACE, 0);
if (!iob)
return -ENOMEM;
cmd = __ipa_cmd(iob);
- cmd->data.diagass.subcmd_len = 16;
- cmd->data.diagass.subcmd = QETH_DIAGS_CMD_TRACE;
cmd->data.diagass.type = QETH_DIAGS_TYPE_HIPERSOCKET;
cmd->data.diagass.action = diags_cmd;
return qeth_send_ipa_cmd(card, iob, qeth_diags_trace_cb, NULL);
@@ -1309,6 +1311,15 @@ static int qeth_l3_vlan_rx_kill_vid(struct net_device *dev,
static void qeth_l3_rebuild_skb(struct qeth_card *card, struct sk_buff *skb,
struct qeth_hdr *hdr)
{
+ struct af_iucv_trans_hdr *iucv = (struct af_iucv_trans_hdr *) skb->data;
+ struct net_device *dev = skb->dev;
+
+ if (IS_IQD(card) && iucv->magic == ETH_P_AF_IUCV) {
+ dev_hard_header(skb, dev, ETH_P_AF_IUCV, dev->dev_addr,
+ "FAKELL", skb->len);
+ return;
+ }
+
if (!(hdr->hdr.l3.flags & QETH_HDR_PASSTHRU)) {
u16 prot = (hdr->hdr.l3.flags & QETH_HDR_IPV6) ? ETH_P_IPV6 :
ETH_P_IP;
@@ -1342,8 +1353,6 @@ static void qeth_l3_rebuild_skb(struct qeth_card *card, struct sk_buff *skb,
tg_addr, "FAKELL", skb->len);
}
- skb->protocol = eth_type_trans(skb, card->dev);
-
/* copy VLAN tag from hdr into skb */
if (!card->options.sniffer &&
(hdr->hdr.l3.ext_flags & (QETH_HDR_EXT_VLAN_FRAME |
@@ -1360,12 +1369,10 @@ static void qeth_l3_rebuild_skb(struct qeth_card *card, struct sk_buff *skb,
static int qeth_l3_process_inbound_buffer(struct qeth_card *card,
int budget, int *done)
{
- struct net_device *dev = card->dev;
int work_done = 0;
struct sk_buff *skb;
struct qeth_hdr *hdr;
unsigned int len;
- __u16 magic;
*done = 0;
WARN_ON_ONCE(!budget);
@@ -1379,23 +1386,12 @@ static int qeth_l3_process_inbound_buffer(struct qeth_card *card,
}
switch (hdr->hdr.l3.id) {
case QETH_HEADER_TYPE_LAYER3:
- magic = *(__u16 *)skb->data;
- if (IS_IQD(card) && magic == ETH_P_AF_IUCV) {
- len = skb->len;
- dev_hard_header(skb, dev, ETH_P_AF_IUCV,
- dev->dev_addr, "FAKELL", len);
- skb->protocol = eth_type_trans(skb, dev);
- netif_receive_skb(skb);
- } else {
- qeth_l3_rebuild_skb(card, skb, hdr);
- len = skb->len;
- napi_gro_receive(&card->napi, skb);
- }
- break;
+ qeth_l3_rebuild_skb(card, skb, hdr);
+ /* fall through */
case QETH_HEADER_TYPE_LAYER2: /* for HiperSockets sniffer */
skb->protocol = eth_type_trans(skb, skb->dev);
len = skb->len;
- netif_receive_skb(skb);
+ napi_gro_receive(&card->napi, skb);
break;
default:
dev_kfree_skb_any(skb);
@@ -1413,8 +1409,7 @@ static int qeth_l3_process_inbound_buffer(struct qeth_card *card,
static void qeth_l3_stop_card(struct qeth_card *card)
{
- QETH_DBF_TEXT(SETUP, 2, "stopcard");
- QETH_DBF_HEX(SETUP, 2, &card, sizeof(void *));
+ QETH_CARD_TEXT(card, 2, "stopcard");
qeth_set_allowed_threads(card, 0, 1);
@@ -1436,10 +1431,6 @@ static void qeth_l3_stop_card(struct qeth_card *card)
qeth_clear_working_pool_list(card);
card->state = CARD_STATE_DOWN;
}
- if (card->state == CARD_STATE_DOWN) {
- qeth_clear_cmd_buffers(&card->read);
- qeth_clear_cmd_buffers(&card->write);
- }
flush_workqueue(card->event_wq);
}
@@ -1563,7 +1554,8 @@ static int qeth_l3_arp_set_no_entries(struct qeth_card *card, int no_entries)
}
iob = qeth_get_setassparms_cmd(card, IPA_ARP_PROCESSING,
- IPA_CMD_ASS_ARP_SET_NO_ENTRIES, 4,
+ IPA_CMD_ASS_ARP_SET_NO_ENTRIES,
+ SETASS_DATA_SIZEOF(flags_32bit),
QETH_PROT_IPV4);
if (!iob)
return -ENOMEM;
@@ -1709,9 +1701,7 @@ static int qeth_l3_query_arp_cache_info(struct qeth_card *card,
iob = qeth_get_setassparms_cmd(card, IPA_ARP_PROCESSING,
IPA_CMD_ASS_ARP_QUERY_INFO,
- sizeof(struct qeth_arp_query_data)
- - sizeof(char),
- prot);
+ SETASS_DATA_SIZEOF(query_arp), prot);
if (!iob)
return -ENOMEM;
cmd = __ipa_cmd(iob);
@@ -1795,7 +1785,8 @@ static int qeth_l3_arp_modify_entry(struct qeth_card *card,
}
iob = qeth_get_setassparms_cmd(card, IPA_ARP_PROCESSING, arp_cmd,
- sizeof(*cmd_entry), QETH_PROT_IPV4);
+ SETASS_DATA_SIZEOF(arp_entry),
+ QETH_PROT_IPV4);
if (!iob)
return -ENOMEM;
@@ -1886,26 +1877,17 @@ static int qeth_l3_do_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
return rc;
}
-static int qeth_l3_get_cast_type(struct sk_buff *skb)
+static int qeth_l3_get_cast_type_rcu(struct sk_buff *skb, struct dst_entry *dst,
+ int ipv)
{
- int ipv = qeth_get_ip_version(skb);
struct neighbour *n = NULL;
- struct dst_entry *dst;
-
- rcu_read_lock();
- dst = skb_dst(skb);
- if (dst) {
- struct rt6_info *rt = (struct rt6_info *) dst;
- dst = dst_check(dst, (ipv == 6) ? rt6_get_cookie(rt) : 0);
- if (dst)
- n = dst_neigh_lookup_skb(dst, skb);
- }
+ if (dst)
+ n = dst_neigh_lookup_skb(dst, skb);
if (n) {
int cast_type = n->type;
- rcu_read_unlock();
neigh_release(n);
if ((cast_type == RTN_BROADCAST) ||
(cast_type == RTN_MULTICAST) ||
@@ -1913,7 +1895,6 @@ static int qeth_l3_get_cast_type(struct sk_buff *skb)
return cast_type;
return RTN_UNICAST;
}
- rcu_read_unlock();
/* no neighbour (eg AF_PACKET), fall back to target's IP address ... */
switch (ipv) {
@@ -1931,6 +1912,20 @@ static int qeth_l3_get_cast_type(struct sk_buff *skb)
}
}
+static int qeth_l3_get_cast_type(struct sk_buff *skb)
+{
+ int ipv = qeth_get_ip_version(skb);
+ struct dst_entry *dst;
+ int cast_type;
+
+ rcu_read_lock();
+ dst = qeth_dst_check_rcu(skb, ipv);
+ cast_type = qeth_l3_get_cast_type_rcu(skb, dst, ipv);
+ rcu_read_unlock();
+
+ return cast_type;
+}
+
static u8 qeth_l3_cast_type_to_flag(int cast_type)
{
if (cast_type == RTN_MULTICAST)
@@ -1944,12 +1939,13 @@ static u8 qeth_l3_cast_type_to_flag(int cast_type)
static void qeth_l3_fill_header(struct qeth_qdio_out_q *queue,
struct qeth_hdr *hdr, struct sk_buff *skb,
- int ipv, int cast_type, unsigned int data_len)
+ int ipv, unsigned int data_len)
{
struct qeth_hdr_layer3 *l3_hdr = &hdr->hdr.l3;
struct vlan_ethhdr *veth = vlan_eth_hdr(skb);
struct qeth_card *card = queue->card;
struct dst_entry *dst;
+ int cast_type;
hdr->hdr.l3.length = data_len;
@@ -1986,36 +1982,23 @@ static void qeth_l3_fill_header(struct qeth_qdio_out_q *queue,
hdr->hdr.l3.vlan_id = ntohs(veth->h_vlan_TCI);
}
- l3_hdr->flags = qeth_l3_cast_type_to_flag(cast_type);
-
- /* OSA only: */
- if (!ipv) {
- l3_hdr->flags |= QETH_HDR_PASSTHRU;
- return;
- }
-
rcu_read_lock();
- dst = skb_dst(skb);
+ dst = qeth_dst_check_rcu(skb, ipv);
- if (ipv == 4) {
- struct rtable *rt;
+ if (IS_IQD(card) && skb_get_queue_mapping(skb) != QETH_IQD_MCAST_TXQ)
+ cast_type = RTN_UNICAST;
+ else
+ cast_type = qeth_l3_get_cast_type_rcu(skb, dst, ipv);
+ l3_hdr->flags |= qeth_l3_cast_type_to_flag(cast_type);
- if (dst)
- dst = dst_check(dst, 0);
- rt = (struct rtable *) dst;
+ if (ipv == 4) {
+ struct rtable *rt = (struct rtable *) dst;
*((__be32 *) &hdr->hdr.l3.next_hop.ipv4.addr) = (rt) ?
rt_nexthop(rt, ip_hdr(skb)->daddr) :
ip_hdr(skb)->daddr;
- } else {
- /* IPv6 */
- struct rt6_info *rt;
-
- if (dst) {
- rt = (struct rt6_info *) dst;
- dst = dst_check(dst, rt6_get_cookie(rt));
- }
- rt = (struct rt6_info *) dst;
+ } else if (ipv == 6) {
+ struct rt6_info *rt = (struct rt6_info *) dst;
if (rt && !ipv6_addr_any(&rt->rt6i_gateway))
l3_hdr->next_hop.ipv6_addr = rt->rt6i_gateway;
@@ -2025,6 +2008,9 @@ static void qeth_l3_fill_header(struct qeth_qdio_out_q *queue,
hdr->hdr.l3.flags |= QETH_HDR_IPV6;
if (!IS_IQD(card))
hdr->hdr.l3.flags |= QETH_HDR_PASSTHRU;
+ } else {
+ /* OSA only: */
+ l3_hdr->flags |= QETH_HDR_PASSTHRU;
}
rcu_read_unlock();
}
@@ -2044,7 +2030,7 @@ static void qeth_l3_fixup_headers(struct sk_buff *skb)
}
static int qeth_l3_xmit(struct qeth_card *card, struct sk_buff *skb,
- struct qeth_qdio_out_q *queue, int ipv, int cast_type)
+ struct qeth_qdio_out_q *queue, int ipv)
{
unsigned int hw_hdr_len;
int rc;
@@ -2058,7 +2044,7 @@ static int qeth_l3_xmit(struct qeth_card *card, struct sk_buff *skb,
skb_pull(skb, ETH_HLEN);
qeth_l3_fixup_headers(skb);
- return qeth_xmit(card, skb, queue, ipv, cast_type, qeth_l3_fill_header);
+ return qeth_xmit(card, skb, queue, ipv, qeth_l3_fill_header);
}
static netdev_tx_t qeth_l3_hard_start_xmit(struct sk_buff *skb,
@@ -2069,7 +2055,7 @@ static netdev_tx_t qeth_l3_hard_start_xmit(struct sk_buff *skb,
int ipv = qeth_get_ip_version(skb);
struct qeth_qdio_out_q *queue;
int tx_bytes = skb->len;
- int cast_type, rc;
+ int rc;
if (IS_IQD(card)) {
queue = card->qdio.out_qs[qeth_iqd_translate_txq(dev, txq)];
@@ -2080,24 +2066,18 @@ static netdev_tx_t qeth_l3_hard_start_xmit(struct sk_buff *skb,
(card->options.cq == QETH_CQ_ENABLED &&
skb->protocol != htons(ETH_P_AF_IUCV)))
goto tx_drop;
-
- if (txq == QETH_IQD_MCAST_TXQ)
- cast_type = qeth_l3_get_cast_type(skb);
- else
- cast_type = RTN_UNICAST;
} else {
queue = card->qdio.out_qs[txq];
- cast_type = qeth_l3_get_cast_type(skb);
}
- if (cast_type == RTN_BROADCAST && !card->info.broadcast_capable)
+ if (!(dev->flags & IFF_BROADCAST) &&
+ qeth_l3_get_cast_type(skb) == RTN_BROADCAST)
goto tx_drop;
if (ipv == 4 || IS_IQD(card))
- rc = qeth_l3_xmit(card, skb, queue, ipv, cast_type);
+ rc = qeth_l3_xmit(card, skb, queue, ipv);
else
- rc = qeth_xmit(card, skb, queue, ipv, cast_type,
- qeth_l3_fill_header);
+ rc = qeth_xmit(card, skb, queue, ipv, qeth_l3_fill_header);
if (!rc) {
QETH_TXQ_STAT_INC(queue, tx_packets);
@@ -2337,12 +2317,11 @@ static int qeth_l3_set_online(struct ccwgroup_device *gdev)
mutex_lock(&card->discipline_mutex);
mutex_lock(&card->conf_mutex);
- QETH_DBF_TEXT(SETUP, 2, "setonlin");
- QETH_DBF_HEX(SETUP, 2, &card, sizeof(void *));
+ QETH_CARD_TEXT(card, 2, "setonlin");
rc = qeth_core_hardsetup_card(card, &carrier_ok);
if (rc) {
- QETH_DBF_TEXT_(SETUP, 2, "2err%04x", rc);
+ QETH_CARD_TEXT_(card, 2, "2err%04x", rc);
rc = -ENODEV;
goto out_remove;
}
@@ -2358,28 +2337,28 @@ static int qeth_l3_set_online(struct ccwgroup_device *gdev)
qeth_print_status_message(card);
/* softsetup */
- QETH_DBF_TEXT(SETUP, 2, "softsetp");
+ QETH_CARD_TEXT(card, 2, "softsetp");
rc = qeth_l3_setadapter_parms(card);
if (rc)
- QETH_DBF_TEXT_(SETUP, 2, "2err%04x", rc);
+ QETH_CARD_TEXT_(card, 2, "2err%04x", rc);
if (!card->options.sniffer) {
rc = qeth_l3_start_ipassists(card);
if (rc) {
- QETH_DBF_TEXT_(SETUP, 2, "3err%d", rc);
+ QETH_CARD_TEXT_(card, 2, "3err%d", rc);
goto out_remove;
}
rc = qeth_l3_setrouting_v4(card);
if (rc)
- QETH_DBF_TEXT_(SETUP, 2, "4err%04x", rc);
+ QETH_CARD_TEXT_(card, 2, "4err%04x", rc);
rc = qeth_l3_setrouting_v6(card);
if (rc)
- QETH_DBF_TEXT_(SETUP, 2, "5err%04x", rc);
+ QETH_CARD_TEXT_(card, 2, "5err%04x", rc);
}
rc = qeth_init_qdio_queues(card);
if (rc) {
- QETH_DBF_TEXT_(SETUP, 2, "6err%d", rc);
+ QETH_CARD_TEXT_(card, 2, "6err%d", rc);
rc = -ENODEV;
goto out_remove;
}
@@ -2420,7 +2399,6 @@ out_remove:
ccw_device_set_offline(CARD_WDEV(card));
ccw_device_set_offline(CARD_RDEV(card));
qdio_free(CARD_DDEV(card));
- card->state = CARD_STATE_DOWN;
mutex_unlock(&card->conf_mutex);
mutex_unlock(&card->discipline_mutex);
@@ -2435,8 +2413,7 @@ static int __qeth_l3_set_offline(struct ccwgroup_device *cgdev,
mutex_lock(&card->discipline_mutex);
mutex_lock(&card->conf_mutex);
- QETH_DBF_TEXT(SETUP, 3, "setoffl");
- QETH_DBF_HEX(SETUP, 3, &card, sizeof(void *));
+ QETH_CARD_TEXT(card, 3, "setoffl");
if ((!recovery_mode && card->info.hwtrap) || card->info.hwtrap == 2) {
qeth_hw_trap(card, QETH_DIAGS_TRAP_DISARM);
@@ -2462,7 +2439,7 @@ static int __qeth_l3_set_offline(struct ccwgroup_device *cgdev,
if (!rc)
rc = (rc2) ? rc2 : rc3;
if (rc)
- QETH_DBF_TEXT_(SETUP, 2, "1err%d", rc);
+ QETH_CARD_TEXT_(card, 2, "1err%d", rc);
qdio_free(CARD_DDEV(card));
/* let user_space know that device is offline */
@@ -2505,33 +2482,6 @@ static int qeth_l3_recover(void *ptr)
return 0;
}
-static int qeth_l3_pm_suspend(struct ccwgroup_device *gdev)
-{
- struct qeth_card *card = dev_get_drvdata(&gdev->dev);
-
- qeth_set_allowed_threads(card, 0, 1);
- wait_event(card->wait_q, qeth_threads_running(card, 0xffffffff) == 0);
- if (gdev->state == CCWGROUP_OFFLINE)
- return 0;
-
- qeth_l3_set_offline(gdev);
- return 0;
-}
-
-static int qeth_l3_pm_resume(struct ccwgroup_device *gdev)
-{
- struct qeth_card *card = dev_get_drvdata(&gdev->dev);
- int rc;
-
- rc = qeth_l3_set_online(gdev);
-
- qeth_set_allowed_threads(card, 0xffffffff, 0);
- if (rc)
- dev_warn(&card->gdev->dev, "The qeth device driver "
- "failed to recover an error on the device\n");
- return rc;
-}
-
/* Returns zero if the command is successfully "consumed" */
static int qeth_l3_control_event(struct qeth_card *card,
struct qeth_ipa_cmd *cmd)
@@ -2547,9 +2497,6 @@ struct qeth_discipline qeth_l3_discipline = {
.remove = qeth_l3_remove_device,
.set_online = qeth_l3_set_online,
.set_offline = qeth_l3_set_offline,
- .freeze = qeth_l3_pm_suspend,
- .thaw = qeth_l3_pm_resume,
- .restore = qeth_l3_pm_resume,
.do_ioctl = qeth_l3_do_ioctl,
.control_event_handler = qeth_l3_control_event,
};
diff --git a/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c b/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c
index b8dd9e648dd0..524cdbcd29aa 100644
--- a/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c
+++ b/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c
@@ -1243,8 +1243,12 @@ static int cxgb3i_ddp_init(struct cxgbi_device *cdev)
tformat.pgsz_order[i] = uinfo.pgsz_factor[i];
cxgbi_tagmask_check(tagmask, &tformat);
- cxgbi_ddp_ppm_setup(&tdev->ulp_iscsi, cdev, &tformat, ppmax,
- uinfo.llimit, uinfo.llimit, 0);
+ err = cxgbi_ddp_ppm_setup(&tdev->ulp_iscsi, cdev, &tformat,
+ (uinfo.ulimit - uinfo.llimit + 1),
+ uinfo.llimit, uinfo.llimit, 0, 0, 0);
+ if (err)
+ return err;
+
if (!(cdev->flags & CXGBI_FLAG_DDP_OFF)) {
uinfo.tagmask = tagmask;
uinfo.ulimit = uinfo.llimit + (ppmax << PPOD_SIZE_SHIFT);
@@ -1318,7 +1322,7 @@ static void cxgb3i_dev_open(struct t3cdev *t3dev)
err = cxgb3i_ddp_init(cdev);
if (err) {
- pr_info("0x%p ddp init failed\n", cdev);
+ pr_info("0x%p ddp init failed %d\n", cdev, err);
goto err_out;
}
diff --git a/drivers/scsi/cxgbi/cxgb4i/cxgb4i.c b/drivers/scsi/cxgbi/cxgb4i/cxgb4i.c
index 124f3345420f..66d6e1f4b3c3 100644
--- a/drivers/scsi/cxgbi/cxgb4i/cxgb4i.c
+++ b/drivers/scsi/cxgbi/cxgb4i/cxgb4i.c
@@ -2070,7 +2070,7 @@ static int cxgb4i_ddp_init(struct cxgbi_device *cdev)
struct net_device *ndev = cdev->ports[0];
struct cxgbi_tag_format tformat;
unsigned int ppmax;
- int i;
+ int i, err;
if (!lldi->vr->iscsi.size) {
pr_warn("%s, iscsi NOT enabled, check config!\n", ndev->name);
@@ -2086,8 +2086,17 @@ static int cxgb4i_ddp_init(struct cxgbi_device *cdev)
& 0xF;
cxgbi_tagmask_check(lldi->iscsi_tagmask, &tformat);
- cxgbi_ddp_ppm_setup(lldi->iscsi_ppm, cdev, &tformat, ppmax,
- lldi->iscsi_llimit, lldi->vr->iscsi.start, 2);
+ pr_info("iscsi_edram.start 0x%x iscsi_edram.size 0x%x",
+ lldi->vr->ppod_edram.start, lldi->vr->ppod_edram.size);
+
+ err = cxgbi_ddp_ppm_setup(lldi->iscsi_ppm, cdev, &tformat,
+ lldi->vr->iscsi.size, lldi->iscsi_llimit,
+ lldi->vr->iscsi.start, 2,
+ lldi->vr->ppod_edram.start,
+ lldi->vr->ppod_edram.size);
+
+ if (err < 0)
+ return err;
cdev->csk_ddp_setup_digest = ddp_setup_conn_digest;
cdev->csk_ddp_setup_pgidx = ddp_setup_conn_pgidx;
@@ -2141,7 +2150,7 @@ static void *t4_uld_add(const struct cxgb4_lld_info *lldi)
rc = cxgb4i_ddp_init(cdev);
if (rc) {
- pr_info("t4 0x%p ddp init failed.\n", cdev);
+ pr_info("t4 0x%p ddp init failed %d.\n", cdev, rc);
goto err_out;
}
rc = cxgb4i_ofld_init(cdev);
diff --git a/drivers/scsi/cxgbi/libcxgbi.c b/drivers/scsi/cxgbi/libcxgbi.c
index 7d43e014bd21..3e17af8aedeb 100644
--- a/drivers/scsi/cxgbi/libcxgbi.c
+++ b/drivers/scsi/cxgbi/libcxgbi.c
@@ -1285,14 +1285,15 @@ EXPORT_SYMBOL_GPL(cxgbi_ddp_set_one_ppod);
static unsigned char padding[4];
-void cxgbi_ddp_ppm_setup(void **ppm_pp, struct cxgbi_device *cdev,
- struct cxgbi_tag_format *tformat, unsigned int ppmax,
- unsigned int llimit, unsigned int start,
- unsigned int rsvd_factor)
+int cxgbi_ddp_ppm_setup(void **ppm_pp, struct cxgbi_device *cdev,
+ struct cxgbi_tag_format *tformat,
+ unsigned int iscsi_size, unsigned int llimit,
+ unsigned int start, unsigned int rsvd_factor,
+ unsigned int edram_start, unsigned int edram_size)
{
int err = cxgbi_ppm_init(ppm_pp, cdev->ports[0], cdev->pdev,
- cdev->lldev, tformat, ppmax, llimit, start,
- rsvd_factor);
+ cdev->lldev, tformat, iscsi_size, llimit, start,
+ rsvd_factor, edram_start, edram_size);
if (err >= 0) {
struct cxgbi_ppm *ppm = (struct cxgbi_ppm *)(*ppm_pp);
@@ -1304,6 +1305,8 @@ void cxgbi_ddp_ppm_setup(void **ppm_pp, struct cxgbi_device *cdev,
} else {
cdev->flags |= CXGBI_FLAG_DDP_OFF;
}
+
+ return err;
}
EXPORT_SYMBOL_GPL(cxgbi_ddp_ppm_setup);
diff --git a/drivers/scsi/cxgbi/libcxgbi.h b/drivers/scsi/cxgbi/libcxgbi.h
index 1917ff57651d..84b96af52655 100644
--- a/drivers/scsi/cxgbi/libcxgbi.h
+++ b/drivers/scsi/cxgbi/libcxgbi.h
@@ -617,8 +617,9 @@ void cxgbi_ddp_page_size_factor(int *);
void cxgbi_ddp_set_one_ppod(struct cxgbi_pagepod *,
struct cxgbi_task_tag_info *,
struct scatterlist **sg_pp, unsigned int *sg_off);
-void cxgbi_ddp_ppm_setup(void **ppm_pp, struct cxgbi_device *,
- struct cxgbi_tag_format *, unsigned int ppmax,
- unsigned int llimit, unsigned int start,
- unsigned int rsvd_factor);
+int cxgbi_ddp_ppm_setup(void **ppm_pp, struct cxgbi_device *cdev,
+ struct cxgbi_tag_format *tformat,
+ unsigned int iscsi_size, unsigned int llimit,
+ unsigned int start, unsigned int rsvd_factor,
+ unsigned int edram_start, unsigned int edram_size);
#endif /*__LIBCXGBI_H__*/
diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
index 6ef0f741bf89..a42babde036d 100644
--- a/drivers/scsi/qedf/qedf_main.c
+++ b/drivers/scsi/qedf/qedf_main.c
@@ -2215,16 +2215,21 @@ static void qedf_simd_int_handler(void *cookie)
static void qedf_sync_free_irqs(struct qedf_ctx *qedf)
{
int i;
+ u16 vector_idx = 0;
+ u32 vector;
if (qedf->int_info.msix_cnt) {
for (i = 0; i < qedf->int_info.used_cnt; i++) {
- synchronize_irq(qedf->int_info.msix[i].vector);
- irq_set_affinity_hint(qedf->int_info.msix[i].vector,
- NULL);
- irq_set_affinity_notifier(qedf->int_info.msix[i].vector,
- NULL);
- free_irq(qedf->int_info.msix[i].vector,
- &qedf->fp_array[i]);
+ vector_idx = i * qedf->dev_info.common.num_hwfns +
+ qed_ops->common->get_affin_hwfn_idx(qedf->cdev);
+ QEDF_INFO(&qedf->dbg_ctx, QEDF_LOG_DISC,
+ "Freeing IRQ #%d vector_idx=%d.\n",
+ i, vector_idx);
+ vector = qedf->int_info.msix[vector_idx].vector;
+ synchronize_irq(vector);
+ irq_set_affinity_hint(vector, NULL);
+ irq_set_affinity_notifier(vector, NULL);
+ free_irq(vector, &qedf->fp_array[i]);
}
} else
qed_ops->common->simd_handler_clean(qedf->cdev,
@@ -2237,11 +2242,19 @@ static void qedf_sync_free_irqs(struct qedf_ctx *qedf)
static int qedf_request_msix_irq(struct qedf_ctx *qedf)
{
int i, rc, cpu;
+ u16 vector_idx = 0;
+ u32 vector;
cpu = cpumask_first(cpu_online_mask);
for (i = 0; i < qedf->num_queues; i++) {
- rc = request_irq(qedf->int_info.msix[i].vector,
- qedf_msix_handler, 0, "qedf", &qedf->fp_array[i]);
+ vector_idx = i * qedf->dev_info.common.num_hwfns +
+ qed_ops->common->get_affin_hwfn_idx(qedf->cdev);
+ QEDF_INFO(&qedf->dbg_ctx, QEDF_LOG_DISC,
+ "Requesting IRQ #%d vector_idx=%d.\n",
+ i, vector_idx);
+ vector = qedf->int_info.msix[vector_idx].vector;
+ rc = request_irq(vector, qedf_msix_handler, 0, "qedf",
+ &qedf->fp_array[i]);
if (rc) {
QEDF_WARN(&(qedf->dbg_ctx), "request_irq failed.\n");
@@ -2250,8 +2263,7 @@ static int qedf_request_msix_irq(struct qedf_ctx *qedf)
}
qedf->int_info.used_cnt++;
- rc = irq_set_affinity_hint(qedf->int_info.msix[i].vector,
- get_cpu_mask(cpu));
+ rc = irq_set_affinity_hint(vector, get_cpu_mask(cpu));
cpu = cpumask_next(cpu, cpu_online_mask);
}
@@ -3208,6 +3220,11 @@ static int __qedf_probe(struct pci_dev *pdev, int mode)
goto err1;
}
+ QEDF_INFO(&qedf->dbg_ctx, QEDF_LOG_DISC,
+ "dev_info: num_hwfns=%d affin_hwfn_idx=%d.\n",
+ qedf->dev_info.common.num_hwfns,
+ qed_ops->common->get_affin_hwfn_idx(qedf->cdev));
+
/* queue allocation code should come here
* order should be
* slowpath_start
diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
index f210a3e0c9b1..acb930b8c6a6 100644
--- a/drivers/scsi/qedi/qedi_main.c
+++ b/drivers/scsi/qedi/qedi_main.c
@@ -1313,13 +1313,20 @@ static void qedi_simd_int_handler(void *cookie)
static void qedi_sync_free_irqs(struct qedi_ctx *qedi)
{
int i;
+ u16 idx;
if (qedi->int_info.msix_cnt) {
for (i = 0; i < qedi->int_info.used_cnt; i++) {
- synchronize_irq(qedi->int_info.msix[i].vector);
- irq_set_affinity_hint(qedi->int_info.msix[i].vector,
+ idx = i * qedi->dev_info.common.num_hwfns +
+ qedi_ops->common->get_affin_hwfn_idx(qedi->cdev);
+
+ QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+ "Freeing IRQ #%d vector_idx=%d.\n", i, idx);
+
+ synchronize_irq(qedi->int_info.msix[idx].vector);
+ irq_set_affinity_hint(qedi->int_info.msix[idx].vector,
NULL);
- free_irq(qedi->int_info.msix[i].vector,
+ free_irq(qedi->int_info.msix[idx].vector,
&qedi->fp_array[i]);
}
} else {
@@ -1334,20 +1341,28 @@ static void qedi_sync_free_irqs(struct qedi_ctx *qedi)
static int qedi_request_msix_irq(struct qedi_ctx *qedi)
{
int i, rc, cpu;
+ u16 idx;
cpu = cpumask_first(cpu_online_mask);
- for (i = 0; i < qedi->int_info.msix_cnt; i++) {
- rc = request_irq(qedi->int_info.msix[i].vector,
+ for (i = 0; i < MIN_NUM_CPUS_MSIX(qedi); i++) {
+ idx = i * qedi->dev_info.common.num_hwfns +
+ qedi_ops->common->get_affin_hwfn_idx(qedi->cdev);
+
+ QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+ "dev_info: num_hwfns=%d affin_hwfn_idx=%d.\n",
+ qedi->dev_info.common.num_hwfns,
+ qedi_ops->common->get_affin_hwfn_idx(qedi->cdev));
+
+ rc = request_irq(qedi->int_info.msix[idx].vector,
qedi_msix_handler, 0, "qedi",
&qedi->fp_array[i]);
-
if (rc) {
QEDI_WARN(&qedi->dbg_ctx, "request_irq failed.\n");
qedi_sync_free_irqs(qedi);
return rc;
}
qedi->int_info.used_cnt++;
- rc = irq_set_affinity_hint(qedi->int_info.msix[i].vector,
+ rc = irq_set_affinity_hint(qedi->int_info.msix[idx].vector,
get_cpu_mask(cpu));
cpu = cpumask_next(cpu, cpu_online_mask);
}
@@ -2415,6 +2430,11 @@ static int __qedi_probe(struct pci_dev *pdev, int mode)
if (rc)
goto free_host;
+ QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+ "dev_info: num_hwfns=%d affin_hwfn_idx=%d.\n",
+ qedi->dev_info.common.num_hwfns,
+ qedi_ops->common->get_affin_hwfn_idx(qedi->cdev));
+
if (mode != QEDI_MODE_RECOVERY) {
rc = qedi_set_iscsi_pf_param(qedi);
if (rc) {
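
qedi applies the same vector-slot arithmetic, and both drivers now release vectors in the same order: quiesce the line, clear the affinity hint, then free the handler. Below is a minimal kernel-style sketch of that teardown ordering, assuming the vector number has already been read from the msix table; the function name and cookie are illustrative, not driver code.

    #include <linux/interrupt.h>

    /* Sketch of the ordering used in the hunks above. */
    static void example_put_vector(unsigned int vector, void *cookie)
    {
        synchronize_irq(vector);             /* let in-flight handlers finish */
        irq_set_affinity_hint(vector, NULL); /* undo the hint set at request time */
        free_irq(vector, cookie);            /* release the line for this cookie */
    }

qedf additionally clears the affinity notifier before free_irq(); the relative order of the three calls above is the part both drivers share.
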
diff --git a/drivers/ssb/driver_gpio.c b/drivers/ssb/driver_gpio.c
index e809dae4c470..66a76fd83248 100644
--- a/drivers/ssb/driver_gpio.c
+++ b/drivers/ssb/driver_gpio.c
@@ -460,9 +460,6 @@ int ssb_gpio_init(struct ssb_bus *bus)
return ssb_gpio_chipco_init(bus);
else if (ssb_extif_available(&bus->extif))
return ssb_gpio_extif_init(bus);
- else
- WARN_ON(1);
-
return -1;
}
@@ -472,9 +469,6 @@ int ssb_gpio_unregister(struct ssb_bus *bus)
ssb_extif_available(&bus->extif)) {
gpiochip_remove(&bus->gpio);
return 0;
- } else {
- WARN_ON(1);
}
-
return -1;
}
diff --git a/drivers/staging/Kconfig b/drivers/staging/Kconfig
index d5f771fafc21..7c96a01eef6c 100644
--- a/drivers/staging/Kconfig
+++ b/drivers/staging/Kconfig
@@ -118,4 +118,6 @@ source "drivers/staging/fieldbus/Kconfig"
source "drivers/staging/kpc2000/Kconfig"
+source "drivers/staging/isdn/Kconfig"
+
endif # STAGING
diff --git a/drivers/staging/Makefile b/drivers/staging/Makefile
index 0da0d3f0b5e4..fcaac9693b83 100644
--- a/drivers/staging/Makefile
+++ b/drivers/staging/Makefile
@@ -49,3 +49,4 @@ obj-$(CONFIG_XIL_AXIS_FIFO) += axis-fifo/
obj-$(CONFIG_EROFS_FS) += erofs/
obj-$(CONFIG_FIELDBUS_DEV) += fieldbus/
obj-$(CONFIG_KPC2000) += kpc2000/
+obj-$(CONFIG_ISDN_CAPI) += isdn/
diff --git a/drivers/staging/isdn/Kconfig b/drivers/staging/isdn/Kconfig
new file mode 100644
index 000000000000..faaf63887094
--- /dev/null
+++ b/drivers/staging/isdn/Kconfig
@@ -0,0 +1,12 @@
+# SPDX-License-Identifier: GPL-2.0-only
+menu "ISDN CAPI drivers"
+ depends on ISDN_CAPI
+
+source "drivers/staging/isdn/avm/Kconfig"
+
+source "drivers/staging/isdn/gigaset/Kconfig"
+
+source "drivers/staging/isdn/hysdn/Kconfig"
+
+endmenu
+
diff --git a/drivers/staging/isdn/Makefile b/drivers/staging/isdn/Makefile
new file mode 100644
index 000000000000..025504bae5df
--- /dev/null
+++ b/drivers/staging/isdn/Makefile
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: GPL-2.0
+# Makefile for the kernel ISDN subsystem and device drivers.
+
+# Object files in subdirectories
+
+obj-$(CONFIG_CAPI_AVM) += avm/
+obj-$(CONFIG_HYSDN) += hysdn/
+obj-$(CONFIG_ISDN_DRV_GIGASET) += gigaset/
diff --git a/drivers/staging/isdn/TODO b/drivers/staging/isdn/TODO
new file mode 100644
index 000000000000..9210d11eb68b
--- /dev/null
+++ b/drivers/staging/isdn/TODO
@@ -0,0 +1,22 @@
+TODO: Remove in late 2019 unless there are users
+
+
+I tried to find any indication of whether the capi drivers are
+still in use, and have found no recent indication that they are.
+
+With public ISDN networks almost completely shut down over the past 12
+months, there is very little you can actually do with this hardware. The
+main remaining use case would be to connect ISDN voice phones to an
+in-house installation with Asterisk or LCR, but anyone trying this
+seems to be using either the mISDN driver stack or out-of-tree
+drivers from the hardware vendors.
+
+I may of course have missed something, so I would suggest moving
+these into drivers/staging/ just in case someone still uses one
+of the three remaining in-kernel drivers (avm, hysdn, gigaset).
+
+If nobody complains, we can remove them entirely in six months,
+or otherwise move the core code and any drivers that are still
+needed back into drivers/isdn.
+
+ Arnd Bergmann <arnd@arndb.de>
diff --git a/drivers/isdn/hardware/avm/Kconfig b/drivers/staging/isdn/avm/Kconfig
index 81483db067bb..81483db067bb 100644
--- a/drivers/isdn/hardware/avm/Kconfig
+++ b/drivers/staging/isdn/avm/Kconfig
diff --git a/drivers/isdn/hardware/avm/Makefile b/drivers/staging/isdn/avm/Makefile
index 3830a0573fcc..3830a0573fcc 100644
--- a/drivers/isdn/hardware/avm/Makefile
+++ b/drivers/staging/isdn/avm/Makefile
diff --git a/drivers/isdn/hardware/avm/avm_cs.c b/drivers/staging/isdn/avm/avm_cs.c
index 62b8030ee331..62b8030ee331 100644
--- a/drivers/isdn/hardware/avm/avm_cs.c
+++ b/drivers/staging/isdn/avm/avm_cs.c
diff --git a/drivers/isdn/hardware/avm/avmcard.h b/drivers/staging/isdn/avm/avmcard.h
index cdfa89c71997..cdfa89c71997 100644
--- a/drivers/isdn/hardware/avm/avmcard.h
+++ b/drivers/staging/isdn/avm/avmcard.h
diff --git a/drivers/isdn/hardware/avm/b1.c b/drivers/staging/isdn/avm/b1.c
index 40ca1e8fa09f..40ca1e8fa09f 100644
--- a/drivers/isdn/hardware/avm/b1.c
+++ b/drivers/staging/isdn/avm/b1.c
diff --git a/drivers/isdn/hardware/avm/b1dma.c b/drivers/staging/isdn/avm/b1dma.c
index 6a3dc9937ce5..6a3dc9937ce5 100644
--- a/drivers/isdn/hardware/avm/b1dma.c
+++ b/drivers/staging/isdn/avm/b1dma.c
diff --git a/drivers/isdn/hardware/avm/b1isa.c b/drivers/staging/isdn/avm/b1isa.c
index cdfea72e0ef6..cdfea72e0ef6 100644
--- a/drivers/isdn/hardware/avm/b1isa.c
+++ b/drivers/staging/isdn/avm/b1isa.c
diff --git a/drivers/isdn/hardware/avm/b1pci.c b/drivers/staging/isdn/avm/b1pci.c
index b76b57a82c02..b76b57a82c02 100644
--- a/drivers/isdn/hardware/avm/b1pci.c
+++ b/drivers/staging/isdn/avm/b1pci.c
diff --git a/drivers/isdn/hardware/avm/b1pcmcia.c b/drivers/staging/isdn/avm/b1pcmcia.c
index 3aca16e62902..3aca16e62902 100644
--- a/drivers/isdn/hardware/avm/b1pcmcia.c
+++ b/drivers/staging/isdn/avm/b1pcmcia.c
diff --git a/drivers/isdn/hardware/avm/c4.c b/drivers/staging/isdn/avm/c4.c
index ac72cd204c4d..ac72cd204c4d 100644
--- a/drivers/isdn/hardware/avm/c4.c
+++ b/drivers/staging/isdn/avm/c4.c
diff --git a/drivers/isdn/hardware/avm/t1isa.c b/drivers/staging/isdn/avm/t1isa.c
index 2153619c5b31..2153619c5b31 100644
--- a/drivers/isdn/hardware/avm/t1isa.c
+++ b/drivers/staging/isdn/avm/t1isa.c
diff --git a/drivers/isdn/hardware/avm/t1pci.c b/drivers/staging/isdn/avm/t1pci.c
index f5ed1d5004c9..f5ed1d5004c9 100644
--- a/drivers/isdn/hardware/avm/t1pci.c
+++ b/drivers/staging/isdn/avm/t1pci.c
diff --git a/drivers/isdn/gigaset/Kconfig b/drivers/staging/isdn/gigaset/Kconfig
index fe41e9cfb672..c593105b3600 100644
--- a/drivers/isdn/gigaset/Kconfig
+++ b/drivers/staging/isdn/gigaset/Kconfig
@@ -30,15 +30,6 @@ config GIGASET_CAPI
Say N to build the old native ISDN4Linux variant.
If unsure, say Y.
-config GIGASET_I4L
- bool
- depends on ISDN_I4L='y'||(ISDN_I4L='m'&&ISDN_DRV_GIGASET='m')
- default !GIGASET_CAPI
-
-config GIGASET_DUMMYLL
- bool
- default !GIGASET_CAPI&&!GIGASET_I4L
-
config GIGASET_BASE
tristate "Gigaset base station support"
depends on USB
diff --git a/drivers/isdn/gigaset/Makefile b/drivers/staging/isdn/gigaset/Makefile
index ac45a2739f56..9c010891dcd7 100644
--- a/drivers/isdn/gigaset/Makefile
+++ b/drivers/staging/isdn/gigaset/Makefile
@@ -1,8 +1,12 @@
# SPDX-License-Identifier: GPL-2.0
gigaset-y := common.o interface.o proc.o ev-layer.o asyncdata.o
-gigaset-$(CONFIG_GIGASET_CAPI) += capi.o
-gigaset-$(CONFIG_GIGASET_I4L) += i4l.o
-gigaset-$(CONFIG_GIGASET_DUMMYLL) += dummyll.o
+
+ifdef CONFIG_GIGASET_CAPI
+gigaset-y += capi.o
+else
+gigaset-y += dummyll.o
+endif
+
usb_gigaset-y := usb-gigaset.o
ser_gigaset-y := ser-gigaset.o
bas_gigaset-y := bas-gigaset.o isocdata.o
diff --git a/drivers/isdn/gigaset/asyncdata.c b/drivers/staging/isdn/gigaset/asyncdata.c
index a34b3c9d8a71..a34b3c9d8a71 100644
--- a/drivers/isdn/gigaset/asyncdata.c
+++ b/drivers/staging/isdn/gigaset/asyncdata.c
diff --git a/drivers/isdn/gigaset/bas-gigaset.c b/drivers/staging/isdn/gigaset/bas-gigaset.c
index c334525a5f63..c334525a5f63 100644
--- a/drivers/isdn/gigaset/bas-gigaset.c
+++ b/drivers/staging/isdn/gigaset/bas-gigaset.c
diff --git a/drivers/isdn/gigaset/capi.c b/drivers/staging/isdn/gigaset/capi.c
index 83d7dd48c61d..83d7dd48c61d 100644
--- a/drivers/isdn/gigaset/capi.c
+++ b/drivers/staging/isdn/gigaset/capi.c
diff --git a/drivers/isdn/gigaset/common.c b/drivers/staging/isdn/gigaset/common.c
index 3bb8092858ab..3bb8092858ab 100644
--- a/drivers/isdn/gigaset/common.c
+++ b/drivers/staging/isdn/gigaset/common.c
diff --git a/drivers/isdn/gigaset/dummyll.c b/drivers/staging/isdn/gigaset/dummyll.c
index 4b9637e5da6e..4b9637e5da6e 100644
--- a/drivers/isdn/gigaset/dummyll.c
+++ b/drivers/staging/isdn/gigaset/dummyll.c
diff --git a/drivers/isdn/gigaset/ev-layer.c b/drivers/staging/isdn/gigaset/ev-layer.c
index f8bb1869c600..f8bb1869c600 100644
--- a/drivers/isdn/gigaset/ev-layer.c
+++ b/drivers/staging/isdn/gigaset/ev-layer.c
diff --git a/drivers/isdn/gigaset/gigaset.h b/drivers/staging/isdn/gigaset/gigaset.h
index 0ecc2b5ea553..0ecc2b5ea553 100644
--- a/drivers/isdn/gigaset/gigaset.h
+++ b/drivers/staging/isdn/gigaset/gigaset.h
diff --git a/drivers/isdn/gigaset/interface.c b/drivers/staging/isdn/gigaset/interface.c
index 17fa615a8c68..17fa615a8c68 100644
--- a/drivers/isdn/gigaset/interface.c
+++ b/drivers/staging/isdn/gigaset/interface.c
diff --git a/drivers/isdn/gigaset/isocdata.c b/drivers/staging/isdn/gigaset/isocdata.c
index 3ecf6e33ed15..3ecf6e33ed15 100644
--- a/drivers/isdn/gigaset/isocdata.c
+++ b/drivers/staging/isdn/gigaset/isocdata.c
diff --git a/drivers/isdn/gigaset/proc.c b/drivers/staging/isdn/gigaset/proc.c
index 8914439a4237..8914439a4237 100644
--- a/drivers/isdn/gigaset/proc.c
+++ b/drivers/staging/isdn/gigaset/proc.c
diff --git a/drivers/isdn/gigaset/ser-gigaset.c b/drivers/staging/isdn/gigaset/ser-gigaset.c
index 5587e9e7fc73..5587e9e7fc73 100644
--- a/drivers/isdn/gigaset/ser-gigaset.c
+++ b/drivers/staging/isdn/gigaset/ser-gigaset.c
diff --git a/drivers/isdn/gigaset/usb-gigaset.c b/drivers/staging/isdn/gigaset/usb-gigaset.c
index 1b9b43659bdf..1b9b43659bdf 100644
--- a/drivers/isdn/gigaset/usb-gigaset.c
+++ b/drivers/staging/isdn/gigaset/usb-gigaset.c
diff --git a/drivers/isdn/hysdn/Kconfig b/drivers/staging/isdn/hysdn/Kconfig
index 1971ef850c9a..1971ef850c9a 100644
--- a/drivers/isdn/hysdn/Kconfig
+++ b/drivers/staging/isdn/hysdn/Kconfig
diff --git a/drivers/isdn/hysdn/Makefile b/drivers/staging/isdn/hysdn/Makefile
index e01f17f22ebb..e01f17f22ebb 100644
--- a/drivers/isdn/hysdn/Makefile
+++ b/drivers/staging/isdn/hysdn/Makefile
diff --git a/drivers/isdn/hysdn/boardergo.c b/drivers/staging/isdn/hysdn/boardergo.c
index 2aa2a0e08247..2aa2a0e08247 100644
--- a/drivers/isdn/hysdn/boardergo.c
+++ b/drivers/staging/isdn/hysdn/boardergo.c
diff --git a/drivers/isdn/hysdn/boardergo.h b/drivers/staging/isdn/hysdn/boardergo.h
index e99bd81c4034..e99bd81c4034 100644
--- a/drivers/isdn/hysdn/boardergo.h
+++ b/drivers/staging/isdn/hysdn/boardergo.h
diff --git a/drivers/isdn/hysdn/hycapi.c b/drivers/staging/isdn/hysdn/hycapi.c
index a2c15cd7bf67..a2c15cd7bf67 100644
--- a/drivers/isdn/hysdn/hycapi.c
+++ b/drivers/staging/isdn/hysdn/hycapi.c
diff --git a/drivers/isdn/hysdn/hysdn_boot.c b/drivers/staging/isdn/hysdn/hysdn_boot.c
index ba177c3a621b..ba177c3a621b 100644
--- a/drivers/isdn/hysdn/hysdn_boot.c
+++ b/drivers/staging/isdn/hysdn/hysdn_boot.c
diff --git a/drivers/isdn/hysdn/hysdn_defs.h b/drivers/staging/isdn/hysdn/hysdn_defs.h
index cdac46a21692..cdac46a21692 100644
--- a/drivers/isdn/hysdn/hysdn_defs.h
+++ b/drivers/staging/isdn/hysdn/hysdn_defs.h
diff --git a/drivers/isdn/hysdn/hysdn_init.c b/drivers/staging/isdn/hysdn/hysdn_init.c
index 0db2f7506250..0db2f7506250 100644
--- a/drivers/isdn/hysdn/hysdn_init.c
+++ b/drivers/staging/isdn/hysdn/hysdn_init.c
diff --git a/drivers/isdn/hysdn/hysdn_net.c b/drivers/staging/isdn/hysdn/hysdn_net.c
index 8e9c34f33d86..bea37ae30ebb 100644
--- a/drivers/isdn/hysdn/hysdn_net.c
+++ b/drivers/staging/isdn/hysdn/hysdn_net.c
@@ -70,9 +70,13 @@ net_open(struct net_device *dev)
for (i = 0; i < ETH_ALEN; i++)
dev->dev_addr[i] = 0xfc;
if ((in_dev = dev->ip_ptr) != NULL) {
- struct in_ifaddr *ifa = in_dev->ifa_list;
+ const struct in_ifaddr *ifa;
+
+ rcu_read_lock();
+ ifa = rcu_dereference(in_dev->ifa_list);
if (ifa != NULL)
memcpy(dev->dev_addr + (ETH_ALEN - sizeof(ifa->ifa_local)), &ifa->ifa_local, sizeof(ifa->ifa_local));
+ rcu_read_unlock();
}
} else
memcpy(dev->dev_addr, card->mac_addr, ETH_ALEN);
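
The hysdn hunk above switches the first-address lookup to the RCU pattern: in_dev->ifa_list is read under rcu_read_lock() via rcu_dereference() rather than walked as a plain pointer. A minimal kernel-style sketch of that read side, assuming an IPv4-configured net_device (the helper name is illustrative):

    #include <linux/inetdevice.h>
    #include <linux/rcupdate.h>

    /* Sketch only: fetch the first IPv4 address of a device under RCU. */
    static __be32 example_first_ifa_local(struct net_device *dev)
    {
        const struct in_ifaddr *ifa;
        struct in_device *in_dev;
        __be32 addr = 0;

        rcu_read_lock();
        in_dev = __in_dev_get_rcu(dev);              /* RCU-protected in_device */
        if (in_dev) {
            ifa = rcu_dereference(in_dev->ifa_list); /* list may change after unlock */
            if (ifa)
                addr = ifa->ifa_local;               /* copy out while protected */
        }
        rcu_read_unlock();

        return addr;
    }

The driver keeps reading dev->ip_ptr directly, as before; the sketch uses __in_dev_get_rcu() for the same lookup in its conventional RCU form.
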
diff --git a/drivers/isdn/hysdn/hysdn_pof.h b/drivers/staging/isdn/hysdn/hysdn_pof.h
index f63f5fa59d7e..f63f5fa59d7e 100644
--- a/drivers/isdn/hysdn/hysdn_pof.h
+++ b/drivers/staging/isdn/hysdn/hysdn_pof.h
diff --git a/drivers/isdn/hysdn/hysdn_procconf.c b/drivers/staging/isdn/hysdn/hysdn_procconf.c
index 73079213ec94..73079213ec94 100644
--- a/drivers/isdn/hysdn/hysdn_procconf.c
+++ b/drivers/staging/isdn/hysdn/hysdn_procconf.c
diff --git a/drivers/isdn/hysdn/hysdn_proclog.c b/drivers/staging/isdn/hysdn/hysdn_proclog.c
index 6e898b90e86e..6e898b90e86e 100644
--- a/drivers/isdn/hysdn/hysdn_proclog.c
+++ b/drivers/staging/isdn/hysdn/hysdn_proclog.c
diff --git a/drivers/isdn/hysdn/hysdn_sched.c b/drivers/staging/isdn/hysdn/hysdn_sched.c
index 31d7c1415543..31d7c1415543 100644
--- a/drivers/isdn/hysdn/hysdn_sched.c
+++ b/drivers/staging/isdn/hysdn/hysdn_sched.c
diff --git a/drivers/isdn/hysdn/ince1pc.h b/drivers/staging/isdn/hysdn/ince1pc.h
index cab68361de65..cab68361de65 100644
--- a/drivers/isdn/hysdn/ince1pc.h
+++ b/drivers/staging/isdn/hysdn/ince1pc.h
diff --git a/drivers/target/iscsi/cxgbit/cxgbit_ddp.c b/drivers/target/iscsi/cxgbit/cxgbit_ddp.c
index c859afa4308e..54bb1ebd8eb5 100644
--- a/drivers/target/iscsi/cxgbit/cxgbit_ddp.c
+++ b/drivers/target/iscsi/cxgbit/cxgbit_ddp.c
@@ -315,8 +315,10 @@ int cxgbit_ddp_init(struct cxgbit_device *cdev)
ret = cxgbi_ppm_init(lldi->iscsi_ppm, cdev->lldi.ports[0],
cdev->lldi.pdev, &cdev->lldi, &tformat,
- ppmax, lldi->iscsi_llimit,
- lldi->vr->iscsi.start, 2);
+ lldi->vr->iscsi.size, lldi->iscsi_llimit,
+ lldi->vr->iscsi.start, 2,
+ lldi->vr->ppod_edram.start,
+ lldi->vr->ppod_edram.size);
if (ret >= 0) {
struct cxgbi_ppm *ppm = (struct cxgbi_ppm *)(*lldi->iscsi_ppm);
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index d57ebdd616d9..247e5585af5d 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -35,7 +35,7 @@
#include "vhost.h"
-static int experimental_zcopytx = 1;
+static int experimental_zcopytx = 0;
module_param(experimental_zcopytx, int, 0444);
MODULE_PARM_DESC(experimental_zcopytx, "Enable Zero Copy TX;"
" 1 -Enable; 0 - Disable");