When the liquidio driver's open routine fails, it does not release the
resources it has already acquired. Compile tested only.
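A minimal sketch of the intended unwind pattern, with hypothetical helper
names (not taken from the actual patch):

        static int lio_open(struct net_device *netdev)
        {
                int ret;

                ret = lio_setup_irqs(netdev);           /* hypothetical helper */
                if (ret)
                        return ret;

                ret = lio_start_queues(netdev);         /* hypothetical helper */
                if (ret)
                        goto err_free_irqs;             /* undo what already succeeded */

                return 0;

        err_free_irqs:
                lio_free_irqs(netdev);                  /* hypothetical helper */
                return ret;
        }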
Fixes: 5b07aee11227 ("liquidio: MSIX support for CN23XX")
Fixes: dbc97bfd3918 ("net: liquidio: Add missing null pointer checks")
Signed-off-by: Zhengchao Shao <shaozhengchao@huawei.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
gdma_obj_handle_t is no more than a redefinition of the basic u64 type.
Remove this obfuscation.
Link: https://lore.kernel.org/r/3c1e821279e6a165d058655d2343722d6650e776.1668160486.git.leonro@nvidia.com
Acked-by: Long Li <longli@microsoft.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
|
|
Implement ethtool_op get_link_ext_stats for PHY down events
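A minimal sketch of such a callback, assuming the ethtool_link_ext_stats
structure with its link_down_events counter; the helper below is
hypothetical:

        static void get_link_ext_stats(struct net_device *dev,
                                       struct ethtool_link_ext_stats *stats)
        {
                /* number of times the device observed the PHY/link going down */
                stats->link_down_events = query_phy_down_events(dev); /* hypothetical */
        }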
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Reviewed-by: Gal Pressman <gal@nvidia.com>
|
|
The pre_ct table implements the act_ct cache logic in hardware, bypassing
the CT table if the ct state was already set by a previous ct lookup.
As such, the pre_ct table will always miss for chain 0 filters.
Optimize the pre_ct table lookup for rules installed on chain 0.
Signed-off-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
A single async context object is sufficient to wait for the completions
of many callbacks. Switch to using one instance per bulk of commands.
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Gal Pressman <gal@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
Waiting on a completion object for each callback before cleaning up their
async contexts is not necessary, as this is already implied in the
mlx5_cmd_cleanup_async_ctx() API.
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Gal Pressman <gal@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
The work field in struct mlx5e_async_ctx is not used. Remove it.
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Gal Pressman <gal@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
When a packet is not offloaded and needs to be restored to the slow path
but the expected tunnel information cannot be found, do not dump a call
trace to the user; a debug print already covers this case.
Signed-off-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Maor Dickman <maord@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
Up until now, the return value of update_rx was ignored, so the flow
continued even when it failed. Add an error flow for update_rx failures in
mlx5e_open_locked, mlx5i_open and mlx5i_pkey_open.
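An abbreviated sketch of the added check (not the literal patch; the unwind
label is hypothetical):

        err = priv->profile->update_rx(priv);   /* previously ignored */
        if (err)
                goto err_close_channels;        /* hypothetical unwind label */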
Signed-off-by: Guy Truzman <gtruzman@nvidia.com>
Reviewed-by: Aya Levin <ayal@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
The params info print was meant to happen once, on load. Over time, new
calls to mlx5e_init_rq_type_params and mlx5e_build_rq_params were added,
mistakenly printing the params once again.
Move the print to where it belongs, in mlx5e_probe.
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Gal Pressman <gal@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
CQE compression feature improves performance by reducing PCI bandwidth
bottleneck on CQEs write.
Enhanced CQE compression, introduced in ConnectX-6, aims to reduce the CPU
utilization of SW-side packet decompression: the ownership bit, whose
rewrite is likely to cost a cache miss, is replaced by a validity byte
handled solely by HW.
Another advantage of the enhanced feature is that session packets are
available to SW as soon as a single CQE slot is filled, instead of waiting
for the session to close; this improves packet latency from NIC to host.
Performance:
The following are the tested scenarios and results, comparing basic and
enhanced CQE compression.
setup: IXIA 100GbE connected directly to port 0 and port 1 of
ConnectX-6 Dx 100GbE dual port.
Case #1 RX only, single flow goes to single queue:
IRQ rate reduced by ~ 30%, CPU utilization improved by 2%.
Case #2 IP forwarding from port 1 to port 0 single flow goes to
single queue:
Avg latency improved from 60us to 21us, frame loss improved from 0.5% to 0.0%.
Case #3 IP forwarding from port 1 to port 0 Max Throughput IXIA sends
100%, 8192 UDP flows, goes to 24 queues:
Enhanced is equal or slightly better than basic.
Testing the basic compression feature with this patch shows there is
no performance degradation of the basic compression feature.
Signed-off-by: Ofer Levi <oferle@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
Replace the min/max operations with a single clamp.
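For reference, the general pattern (clamp_t() is from include/linux/minmax.h):

        /* before: nested min/max */
        val = min_t(u32, max_t(u32, val, lo), hi);

        /* after: a single clamp */
        val = clamp_t(u32, val, lo, hi);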
Signed-off-by: Gal Pressman <gal@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
This is never used, and probably something that was intended to be used
before per-protocol hash tables were chosen instead.
Signed-off-by: Anisse Astier <anisse@astier.eu>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
vhca_id is an identifier of an mlx5_core instance within the hardware.
This identifier may be required for troubleshooting.
Expose it to debugfs.
Example:
$ cat /sys/kernel/debug/mlx5/mlx5_core.sf.2/vhca_id
0x12
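A minimal sketch of exposing such a value as a read-only hex file (the
parent dentry and value variable below are placeholders, not the patch's
actual names):

        /* dbg_root: the device's debugfs directory; vhca_id: a u32 copy of the id */
        debugfs_create_x32("vhca_id", 0400, dbg_root, &vhca_id);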
Signed-off-by: Eli Cohen <elic@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
Before this patch, devlink traps are registered only on full driver probe
and unregistered on driver removal. As devlink traps are not usable once
driver functionality is unloaded, they should also be unregistered on flows
that unload the driver and registered again when it is loaded back, e.g.
the devlink reload flow.
Signed-off-by: Moshe Shemesh <moshe@nvidia.com>
Reviewed-by: Aya Levin <ayal@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
There is a spelling mistake in an error message. Fix it.
Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
There is no need for the warn if the entry was already removed.
Use a debug print, as in the update flow.
Also update the messages so the user can identify whether they come from
the update flow or the remove flow.
Signed-off-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Vlad Buslov <vladbu@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
When stmmac_xdp_release() is called, the TX function may still be running
on other queues, which leads to TX queue timeouts and an adapter reset.
Ensure that the TX function is no longer running XDP before the release
flow continues.
Fixes: ac746c8520d9 ("net: stmmac: enhance XDP ZC driver level switching performance")
Signed-off-by: Song Yoong Siang <yoong.siang.song@intel.com>
Signed-off-by: Mohd Faizal Abdul Rahim <faizal.abdul.rahim@intel.com>
Signed-off-by: Noor Azura Ahmad Tarmizi <noor.azura.ahmad.tarmizi@intel.com>
Link: https://lore.kernel.org/r/20221110064552.22504-1-noor.azura.ahmad.tarmizi@linux.intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
hinic fails to create its debugfs directory, with the following log:
[ 931.419023] debugfs: Directory 'hinic' with parent '/' already present!
The reason is that hinic_module_init() returns the result of
pci_register_driver() directly without checking it; if
pci_register_driver() fails, the function returns without destroying the
newly created debugfs directory, so the hinic debugfs directory can never
be created again later.
hinic_module_init()
  hinic_dbg_register_debugfs()    # create debugfs directory
  pci_register_driver()
    driver_register()
      bus_add_driver()
        priv = kzalloc(...)       # OOM happened
  # return without destroying the debugfs directory
Fix this by removing the debugfs directory when pci_register_driver()
returns an error.
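A minimal sketch of the fix, assuming the driver provides a matching
debugfs cleanup helper (names may differ from the actual patch):

        static int __init hinic_module_init(void)
        {
                int ret;

                hinic_dbg_register_debugfs(HINIC_DRV_NAME);

                ret = pci_register_driver(&hinic_driver);
                if (ret)
                        hinic_dbg_unregister_debugfs(); /* drop the directory on failure */

                return ret;
        }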
Fixes: 253ac3a97921 ("hinic: add support to query sq info")
Signed-off-by: Yuan Can <yuancan@huawei.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Link: https://lore.kernel.org/r/20221110021642.80378-1-yuancan@huawei.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
netdev is allocated in bgmac_alloc() with devm_alloc_etherdev() and will be
released automatically on ->remove and in the ->probe failure path. Calling
free_netdev() in bgmac_enet_remove() therefore leads to a double free.
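For illustration, a devm-allocated netdev must not be freed by hand (a
sketch, not the literal driver code):

        /* probe: netdev lifetime is tied to the struct device */
        net_dev = devm_alloc_etherdev(dev, sizeof(*bgmac));
        if (!net_dev)
                return -ENOMEM;

        /* remove: unregister only; devres releases the netdev automatically,
         * so calling free_netdev() here would be a double free
         */
        unregister_netdev(bgmac->net_dev);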
Fixes: 34a5102c3235 ("net: bgmac: allocate struct bgmac just once & don't copy it")
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Link: https://lore.kernel.org/r/20221109150136.2991171-1-weiyongjun@huaweicloud.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
skb->vlan_present seems redundant.
We can instead derive it from this boolean expression:
vlan_present = skb->vlan_proto != 0 || skb->vlan_tci != 0
Add a new union, to access both fields in a single load/store
when possible.
union {
        u32             vlan_all;
        struct {
                __be16  vlan_proto;
                __u16   vlan_tci;
        };
};
This allows following patch to remove a conditional test in GRO stack.
Note:
We move remcsum_offload to keep TC_AT_INGRESS_MASK
and SKB_MONO_DELIVERY_TIME_MASK unchanged.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Yonghong Song <yhs@fb.com>
Acked-by: Martin KaFai Lau <martin.lau@kernel.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Convert all remaining drivers that still use .adjfreq to the newer .adjfine
implementation. These drivers are not straightforward, as they use
non-standard methods of programming their hardware. They are all converted
to use scaled_ppm_to_ppb to get the parts per billion value that their
logic depends on.
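The common shape of such a conversion, sketched for a driver that still
wants a plain ppb value (scaled_ppm_to_ppb() is the generic PTP helper from
linux/ptp_clock_kernel.h; the hardware function is hypothetical):

        static int foo_ptp_adjfine(struct ptp_clock_info *ptp, long scaled_ppm)
        {
                /* convert the 16.16 fixed-point scaled ppm into parts per billion */
                long ppb = scaled_ppm_to_ppb(scaled_ppm);

                return foo_hw_adjust_freq(ptp, ppb);    /* hypothetical */
        }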
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Cc: Ariel Elior <aelior@marvell.com>
Cc: Sudarsana Kalluru <skalluru@marvell.com>
Cc: Manish Chopra <manishc@marvell.com>
Cc: Derek Chickles <dchickles@marvell.com>
Cc: Satanand Burla <sburla@marvell.com>
Cc: Felix Manlunas <fmanlunas@marvell.com>
Cc: Raju Rangoju <rajur@chelsio.com>
Cc: Joakim Zhang <qiangqing.zhang@nxp.com>
Cc: Edward Cree <ecree.xilinx@gmail.com>
Cc: Martin Habets <habetsm.xilinx@gmail.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
When the BNXT_FW_CAP_PTP_RTC flag is not set, the bnxt driver implements
.adjfreq on a cyclecounter in terms of the straightforward "base * ppb / 1
billion" calculation. When BNXT_FW_CAP_PTP_RTC is set, the driver forwards
the ppb value to firmware for configuration.
Convert the driver to the newer .adjfine interface, updating the
cyclecounter calculation to use adjust_by_scaled_ppm to perform the
calculation. Use scaled_ppm_to_ppb when forwarding the correction to
firmware.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Cc: Michael Chan <michael.chan@broadcom.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Reviewed-by: Pavan Chebbi <pavan.chebbi@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The cpts implementation of .adjfreq is implemented in terms of a
straightforward "base * ppb / 1 billion" calculation.
Convert this to the newer .adjfine, using the recently added
adjust_by_scaled_ppm helper function.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The stmmac implementation of .adjfreq is implemented in terms of a
straightforward "base * ppb / 1 billion" calculation.
Convert this to the newer .adjfine, using the recently added
adjust_by_scaled_ppm helper function.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Cc: Alexandre Torgue <alexandre.torgue@foss.st.com>
Cc: Jose Abreu <joabreu@synopsys.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The hclge implementation of .adjfreq is implemented in terms of a
straightforward "base * ppb / 1 billion" calculation.
Convert this to the newer .adjfine, using the recently added
adjust_by_scaled_ppm helper function.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Cc: Yisen Zhuang <yisen.zhuang@huawei.com>
Cc: Salil Mehta <salil.mehta@huawei.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The tg3 implementation of .adjfreq is implemented in terms of a
straightforward "base * ppb / 1 billion" calculation.
Convert this to the newer .adjfine, using the recently added
diff_by_scaled_ppm helper function to calculate the difference and
direction of the adjustment.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Cc: Siva Reddy Kallam <siva.kallam@broadcom.com>
Cc: Prashant Sreedharan <prashant@broadcom.com>
Cc: Michael Chan <mchan@broadcom.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Reviewed-by: Pavan Chebbi <pavan.chebbi@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The ptp_ixp46x implementation of .adjfreq is implemented in terms of a
straightforward "base * ppb / 1 billion" calculation.
Convert this to the newer .adjfine, using the recently added
adjust_by_scaled_ppm helper function.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Cc: Paolo Abeni <pabeni@redhat.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Bump the MIN version to reflect support for a new platform (AC5X family devices).
Signed-off-by: Oleksandr Mazur <oleksandr.mazur@plvision.eu>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add support for the following AC5x Marvell Prestera PP family devices:
98DX7312M (12x25G / 8x25G + 1x100G);
98DX3500 (24x1G + 6x25G);
98DX3501 (16x1G + 6x10G);
98DX3510 (48x1G + 6x25G);
98DX3520 (24x2.5G + 6x25G);
Known issues:
- FW reload doesn't work (rmmod/modprobe sequence).
Co-developed-by: Vadym Kochan <vadym.kochan@plvision.eu>
Signed-off-by: Vadym Kochan <vadym.kochan@plvision.eu>
Signed-off-by: Maksym Glubokiy <maksym.glubokiy@plvision.eu>
Signed-off-by: Oleksandr Mazur <oleksandr.mazur@plvision.eu>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Use defines with proper device names instead of raw device IDs in the PCI
devices listing.
Signed-off-by: Oleksandr Mazur <oleksandr.mazur@plvision.eu>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Use the page_pool API for allocation, freeing and DMA handling instead
of dev_alloc_pages, __free_pages and dma_map_page.
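A minimal sketch of the page_pool pattern (parameters and the device
pointer are illustrative, not the driver's actual values):

        struct page_pool_params pp_params = {
                .flags          = PP_FLAG_DMA_MAP,      /* pool handles DMA mapping */
                .order          = 0,
                .pool_size      = 512,                  /* illustrative ring size */
                .nid            = NUMA_NO_NODE,
                .dev            = priv->dev,            /* placeholder device pointer */
                .dma_dir        = DMA_FROM_DEVICE,
        };
        struct page_pool *pool = page_pool_create(&pp_params);

        if (IS_ERR(pool))
                return PTR_ERR(pool);

        /* RX refill: replaces dev_alloc_pages() + dma_map_page() */
        struct page *page = page_pool_dev_alloc_pages(pool);

        /* release: replaces dma_unmap_page() + __free_pages() */
        page_pool_put_full_page(pool, page, true);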
Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Introduce basic XDP support to lan966x driver. Currently the driver
supports only the actions XDP_PASS, XDP_DROP and XDP_ABORTED.
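The usual verdict handling for such a minimal implementation looks roughly
like this (a sketch; prog, page, pool, rxq and the length/offset variables
are placeholders, not the driver's actual names):

        struct xdp_buff xdp;
        u32 act;

        xdp_init_buff(&xdp, PAGE_SIZE, &rxq->xdp_rxq);
        xdp_prepare_buff(&xdp, page_address(page), offset, len, false);

        act = bpf_prog_run_xdp(prog, &xdp);
        switch (act) {
        case XDP_PASS:
                break;          /* hand the frame to the network stack */
        case XDP_DROP:
                page_pool_recycle_direct(pool, page);
                break;
        default:
                bpf_warn_invalid_xdp_action(netdev, prog, act);
                fallthrough;
        case XDP_ABORTED:
                trace_xdp_exception(netdev, prog, act);
                page_pool_recycle_direct(pool, page);
                break;
        }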
Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The function lan966x_fdma_rx_get_frame was unmapping the frame from the
device and also checking whether the frame was received on a valid port,
and only after that did it try to generate the skb.
Move this check into a different function, in preparation for XDP support,
so that XDP handling can be added there and lan966x_fdma_rx_get_frame is
used only when giving the skb to the upper layers.
Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The total length of the IFH (inter-frame header) in bytes is calculated as
IFH_LEN * sizeof(u32), because IFH_LEN describes the length in words and
not in bytes. As the IFH length in bytes is used quite often, add a define
for it. This is just to simplify things.
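For illustration only (the actual name of the define in the driver may
differ):

        /* IFH_LEN is in 32-bit words; expose the length in bytes once */
        #define IFH_LEN_BYTES   (IFH_LEN * sizeof(u32))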
Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Extend the size of the QSFP EEPROM for types SFF-8436 and SFF-8636
from 256 to 640 bytes in order to expose all the EEPROM pages through
ethtool.
For SFF-8636 and SFF-8436 specifications, the driver exposes
256 bytes of EEPROM data for ethtool's get_module_eeprom()
callback, resulting in "netlink error: Invalid argument" when
an EEPROM read with an offset larger than 256 bytes is attempted.
Changing the length enumerators to the _MAX_LEN variants exposes all 640
bytes of the EEPROM, allowing upper pages 1, 2 and 3 to be read.
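For reference, the relevant length constants in include/uapi/linux/ethtool.h:

        #define ETH_MODULE_SFF_8436_LEN         256
        #define ETH_MODULE_SFF_8436_MAX_LEN     640
        #define ETH_MODULE_SFF_8636_LEN         256
        #define ETH_MODULE_SFF_8636_MAX_LEN     640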
Fixes: 96d971e307cc ("ethtool: Add fallback to get_module_eeprom from netlink command")
Signed-off-by: Jaco Coetzee <jaco.coetzee@corigine.com>
Reviewed-by: Louis Peens <louis.peens@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Set IRQ affinity to CPUs that belong to the same NUMA node as the NIC
device first.
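A minimal sketch of the approach, using the generic cpumask_local_spread()
helper (the surrounding variables are placeholders, not the driver's
actual names):

        /* pick the i-th CPU, preferring CPUs on the NIC's NUMA node */
        unsigned int cpu = cpumask_local_spread(i, dev_to_node(&pdev->dev));

        irq_set_affinity_hint(irq, cpumask_of(cpu));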
Signed-off-by: Yinjun Zhang <yinjun.zhang@corigine.com>
Reviewed-by: Louis Peens <louis.peens@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This tests that the available keyfield and actionfield add methods are
doing the expected work: adding the value (and mask) to the
keyfield/actionfield list item in the rule.
The test also covers the functionality that matches a rule to a keyset.
Signed-off-by: Steen Hegelund <steen.hegelund@microchip.com>
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Use a tc matchall rule with a goto action to the VCAP specific chain to
enable the VCAP lookups.
If the matchall rule is removed, the VCAP lookups will be disabled again,
using the rule's cookie to find the VCAP instance.
To enable the Sparx5 IS2 VCAP on eth0 you would use this command:
tc filter add dev eth0 ingress prio 5 handle 5 matchall \
skip_sw action goto chain 8000000
as the first lookup in IS2 has chain id 8000000.
Signed-off-by: Steen Hegelund <steen.hegelund@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add support for validating keyfields and actionfields when they are added
to a VCAP rule.
We need to ensure that the field is not already present and that the field
is in the key- or actionset, if the client has added a key- or actionset to
the rule at this point.
Signed-off-by: Steen Hegelund <steen.hegelund@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This tries to match the keys in a rule with the keysets supported by the
VCAP instance and generates a list of keysets.
This list is then validated against the list of keysets that is currently
selected for the lookups (per port) in the VCAP configuration.
The Sparx5 IS2 only has one actionset, so there is no actionset matching
performed for now.
Signed-off-by: Steen Hegelund <steen.hegelund@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add support for a goto action and ensure that a HW offloaded TC flower
filter has a valid goto action and that pass and trap actions are not both
used in the same filter.
Signed-off-by: Steen Hegelund <steen.hegelund@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add a helper function that finds the lookup index in a VCAP instance from
the chain id.
Signed-off-by: Steen Hegelund <steen.hegelund@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This adds the following TC flower filter keys to Sparx5 for IS2:
- ipv4_addr (sip and dip)
- ipv6_addr (sip and dip)
- control (IPv4 fragments)
- portnum (tcp and udp port numbers)
- basic (L3 and L4 protocol)
- vlan (outer vlan tag info)
- tcp (tcp flags)
- ip (tos field)
as well as a 128-bit keyfield interface on the VCAP API to set the IPv6
addresses.
IS2 supports the classified VLAN information which amounts to the outer
VLAN info in case of multiple tags.
Here are some examples of the tc flower filter operations that are now
supported for the IS2 VCAP:
- IPv4 Addresses
tc filter add dev eth12 ingress chain 8000000 prio 12 handle 12 \
protocol ip flower skip_sw dst_ip 1.0.1.1 src_ip 2.0.2.2 \
action trap action goto chain 81000000
- IPv6 Addresses
tc filter add dev eth12 ingress chain 8000000 prio 13 handle 13 \
protocol ipv6 flower skip_sw dst_ip 1::1:1 src_ip 2::2:2 \
action trap action goto chain 81000000
- IPv4 fragments
tc filter add dev eth12 ingress chain 8000000 prio 14 handle 14 \
protocol ip flower skip_sw dst_ip 3.0.3.3 src_ip 2.0.2.2 \
ip_flags frag/nofirstfrag action trap action goto chain 81000000
- TCP and UDP portnumbers
tc filter add dev eth12 ingress chain 8000000 prio 21 handle 21 \
protocol ip flower skip_sw dst_ip 8.8.8.8 src_ip 2.0.2.2 \
ip_proto tcp dst_port 100 src_port 12000 action trap action goto
chain 81000000
tc filter add dev eth12 ingress chain 8000000 prio 23 handle 23 \
protocol ipv6 flower skip_sw dst_ip 5::5:5 src_ip 2::2:2 \
ip_proto tcp dst_port 300 src_port 13000 action trap action goto
chain 81000000
- Layer 3 and Layer 4 protocol info
tc filter add dev eth12 ingress chain 8000000 prio 28 handle 28 \
protocol ipv4 flower skip_sw dst_ip 9.0.9.9 src_ip 2.0.2.2 \
ip_proto icmp action trap action goto chain 81000000
- VLAN tag info (outer tag)
tc filter add dev eth12 ingress chain 8000000 prio 29 handle 29 \
protocol 802.1q flower skip_sw vlan_id 600 vlan_prio 6 \
vlan_ethtype ipv4 action trap action goto chain 81000000
tc filter add dev eth12 ingress chain 8000000 prio 31 handle 31 \
protocol 802.1q flower skip_sw vlan_id 600 vlan_prio 5 \
vlan_ethtype ipv6 action trap action goto chain 81000000
- TCP flags
tc filter add dev eth12 ingress chain 8000000 prio 15 handle 15 \
protocol ip flower skip_sw dst_ip 4.0.4.4 src_ip 2.0.2.2 \
ip_proto tcp tcp_flags 0x2a/0x3f action trap action goto chain
81000000
- IP info (IPv4 TOS field)
tc filter add dev eth12 ingress chain 8000000 prio 16 handle 16 \
protocol ip flower skip_sw ip_tos 0x35 dst_ip 5.0.5.5 \
src_ip 2.0.2.2 action trap action goto chain 81000000
Notes:
- The "protocol all" selection is not supported yet.
- The MAC address rule now needs to use non-ip and non "protocol all". Here
is an example:
tc filter add dev eth12 ingress chain 8000000 prio 10 handle 10 \
protocol 0xbeef flower skip_sw \
dst_mac 0a:0b:0c:0d:0e:0f \
src_mac 2:0:0:0:0:1 \
action trap action goto chain 81000000
- The VLAN rules use classified VLAN information, and to get the
classification information into the frame metadata, the ingress port needs
to be added to a bridge with the VID and VLAN filtering enabled, like
this (using VID 600 and four ports eth12, eth13, eth14 and eth15):
ip link add name br5 type bridge
ip link set dev br5 up
ip link set eth12 master br5
ip link set eth13 master br5
ip link set eth14 master br5
ip link set eth15 master br5
sysctl -w net.ipv6.conf.eth12.disable_ipv6=1
sysctl -w net.ipv6.conf.eth13.disable_ipv6=1
sysctl -w net.ipv6.conf.eth14.disable_ipv6=1
sysctl -w net.ipv6.conf.eth15.disable_ipv6=1
sysctl -w net.ipv6.conf.br5.disable_ipv6=1
ip link set dev br5 type bridge vlan_filtering 1
bridge vlan add dev eth12 vid 600
bridge vlan add dev eth13 vid 600
bridge vlan add dev eth14 vid 600
bridge vlan add dev eth15 vid 600
bridge vlan add dev br5 vid 600 self
Signed-off-by: Steen Hegelund <steen.hegelund@microchip.com>
Tested-by: Casper Andersson <casper.casan@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This changes the port keyset configuration for Sparx5 IS2 so that
- IPv4 generates an IP4_TCP_UDP keyset for IPv4 TCP/UDP frames and an
IP4_OTHER keyset for other IPv4 frames (both UC and MC)
- IPv6 generates an IP_7TUPLE keyset (both UC and MC)
ARP and non-IP traffic continues to generate the MAC_ETYPE keyset
Signed-off-by: Steen Hegelund <steen.hegelund@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Introduce WED RX MIB counters support, available on the MT7986a SoC.
Tested-by: Daniel Golle <daniel@makrotopia.org>
Co-developed-by: Sujuan Chen <sujuan.chen@mediatek.com>
Signed-off-by: Sujuan Chen <sujuan.chen@mediatek.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Enable RX Wireless Ethernet Dispatch, available on the MT7986 SoC.
Tested-by: Daniel Golle <daniel@makrotopia.org>
Co-developed-by: Sujuan Chen <sujuan.chen@mediatek.com>
Signed-off-by: Sujuan Chen <sujuan.chen@mediatek.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Rename the tx_wdma queue array to rx_wdma, since this is the RX side of
the WDMA SoC. Moreover, rename the mtk_wed_wdma_ring_setup routine to
mtk_wed_wdma_rx_ring_setup().
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Introduce WO chip support to the mtk wed driver. MTK WED WO is used to
implement RX Wireless Ethernet Dispatch and to offload traffic received by
the WLAN NIC to the wired interface.
Tested-by: Daniel Golle <daniel@makrotopia.org>
Co-developed-by: Sujuan Chen <sujuan.chen@mediatek.com>
Signed-off-by: Sujuan Chen <sujuan.chen@mediatek.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Introduce WED MCU support, used to configure the WED WO chip.
This is a preliminary patch for adding RX Wireless Ethernet Dispatch,
available on the MT7986 SoC.
Tested-by: Daniel Golle <daniel@makrotopia.org>
Co-developed-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: Sujuan Chen <sujuan.chen@mediatek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|