|
Populate the various operation structures meant for IPv6 with logic
unique to that protocol suite.
Signed-off-by: Yuval Mintz <yuvalm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
spectrum_router and spectrum_mr have several APIs that are used to
manipulate configurations originating from ipmr fib notifications.
Following the previous patches, all the protocol specifics necessary
for the configuration are hidden within spectrum_mr. This allows us to
clean up the API and make sure that, other than choosing the mr_table
based on the fib notification family, spectrum_router doesn't care
about the source of the notification when passing it onward to
spectrum_mr.
This would later allow us to leverage the same code for fib
notifications originating from ip6mr.
Signed-off-by: Yuval Mintz <yuvalm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Current multicast routing logic in the driver assumes it's always meant
to deal with IPv4 multicast routes, leaving several placeholders for
later IPv6 support [currently usually WARN()].
This patch changes the driver's internal multicast route struct to hold
the common mr_mfc instead of the IPv4 mfc_cache.
The various placeholders are grouped into two:
- Functions that require only the common bits; these remain, and the
IPv4-only restriction is lifted.
- Functions that require IPv4 specifics; for handling these we add sets
of operations that encapsulate the protocol differences, as sketched
below.
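For illustration, a minimal, self-contained sketch of that ops-struct
pattern (names and callbacks here are illustrative, not the driver's
actual symbols):
#include <stdbool.h>
#include <stdio.h>

struct mr_mfc { int dummy; };   /* stand-in for the common route part */

struct mr_proto_ops {
        const char *name;
        bool (*is_route_starg)(const struct mr_mfc *mfc);
};

static bool v4_is_route_starg(const struct mr_mfc *mfc) { (void)mfc; return false; }
static bool v6_is_route_starg(const struct mr_mfc *mfc) { (void)mfc; return true; }

static const struct mr_proto_ops mr_ops_v4 = { "ipv4", v4_is_route_starg };
static const struct mr_proto_ops mr_ops_v6 = { "ipv6", v6_is_route_starg };

/* Common code picks the ops once, based on the notification family,
 * and never branches on protocol specifics again. */
static void mr_route_write(const struct mr_proto_ops *ops,
                           const struct mr_mfc *mfc)
{
        printf("%s: starg=%d\n", ops->name, ops->is_route_starg(mfc));
}

int main(void)
{
        struct mr_mfc mfc = { 0 };

        mr_route_write(&mr_ops_v4, &mfc);
        mr_route_write(&mr_ops_v6, &mfc);
        return 0;
}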
Signed-off-by: Yuval Mintz <yuvalm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
A step toward offloading IPv6 routing, this adds an additional
multicast routing table meant for IPv6 [with its underlying TCAM
region] and populates the default rule for IPv6 multicast packets.
Following this, ingress IPv6 multicast packets would be trapped and
delivered to the host CPU.
Signed-off-by: Yuval Mintz <yuvalm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Since commit c011ec1bbfd6 ("mlxsw: spectrum: Add the multicast routing
offloading logic") spectrum_mr has not populated the protocol portion
of the catchall_route_params; the mr-tcam logic still worked correctly
for IPv4 since the enum value of MLXSW_SP_L3_PROTO_IPV4 is '0'.
Explicitly fill the protocol, as we'll soon need to differentiate
between IPv4 and IPv6.
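To see why the omission was harmless for IPv4, consider this
self-contained sketch (the enum mirrors mlxsw's; the struct here is
illustrative):
#include <stdio.h>

enum mlxsw_sp_l3proto {
        MLXSW_SP_L3_PROTO_IPV4,   /* = 0, the implicit default */
        MLXSW_SP_L3_PROTO_IPV6,
};

struct catchall_route_params {
        enum mlxsw_sp_l3proto proto;
};

int main(void)
{
        struct catchall_route_params params = { 0 };  /* proto never set... */

        /* ...yet it still reads as IPv4, because IPV4 happens to be 0 */
        printf("is_ipv4=%d\n", params.proto == MLXSW_SP_L3_PROTO_IPV4);
        return 0;
}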
Signed-off-by: Yuval Mintz <yuvalm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add new fields for the rmft register necessary for setting the IPv6
multicast FIB table. Add a matching wrapper function for filling
the register in the IPv6 scenario.
Signed-off-by: Yuval Mintz <yuvalm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Similarly to what was done in commit 4af5964e5888 ("mlxsw: reg:
Configure RIF to forward IPv4 multicast packets by default") we now set
two additional bits to allow IPv6 multicast forwarding.
Signed-off-by: Yuval Mintz <yuvalm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Since ipmr and ip6mr are using the same mr_mfc struct at their core, we
can now refactor the ipmr_cache_{hold,put} logic and apply refcounting
to both ipmr and ip6mr.
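A minimal, self-contained sketch of the shared hold/put pattern (the
kernel uses refcount_t; names here are illustrative):
#include <stdatomic.h>
#include <stdlib.h>

struct mr_mfc {
        atomic_int refcount;
};

static void mr_cache_hold(struct mr_mfc *c)
{
        atomic_fetch_add(&c->refcount, 1);
}

static void mr_cache_put(struct mr_mfc *c)
{
        /* free only when the last user (ipmr or ip6mr path) drops its ref */
        if (atomic_fetch_sub(&c->refcount, 1) == 1)
                free(c);
}

int main(void)
{
        struct mr_mfc *c = malloc(sizeof(*c));

        atomic_init(&c->refcount, 0);
        mr_cache_hold(c);   /* initial user */
        mr_cache_hold(c);   /* second user, e.g. a notifier consumer */
        mr_cache_put(c);
        mr_cache_put(c);    /* last put frees */
        return 0;
}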
Signed-off-by: Yuval Mintz <yuvalm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add the ability to discern whether a given FIB rule notification relates
to the default rule inserted when registering ip6mr or a different one.
This would later be used by drivers wishing to offload IPv6 multicast
routes but unable to offload rules other than the default one.
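A hedged sketch of how an offloading driver might consume this API
(everything except ip6mr_rule_default() and RTNL_FAMILY_IP6MR is
illustrative):
static int example_fib_rule_event(struct fib_rule *rule, int family)
{
        /* offload only the default ip6mr rule; anything else aborts */
        if (family == RTNL_FAMILY_IP6MR && !ip6mr_rule_default(rule))
                return -EOPNOTSUPP;
        return 0;
}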
Signed-off-by: Yuval Mintz <yuvalm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
In similar fashion to ipmr, support fib notifications for ip6mr mfc and
vif related events. This would later allow drivers to react to said
notifications and offload the IPv6 mroutes.
Signed-off-by: Yuval Mintz <yuvalm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Since all the primitive elements used for the notification done by ipmr
are now common [mr_table, mr_mfc, vif_device] we can refactor the logic
for dumping them to a common file.
Signed-off-by: Yuval Mintz <yuvalm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Like the vif notifications, move the notifier struct for MFC as well as
its helpers into a common file; currently they're only used by ipmr.
Signed-off-by: Yuval Mintz <yuvalm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The fib notifiers are tightly coupled with the vif_device, which is
already common. Move the notifier struct definition and helpers to the
common file; currently they're only used by ipmr.
Signed-off-by: Yuval Mintz <yuvalm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
A modules.alias entry must be created via the MODULE_DEVICE_TABLE
helper for the OF platform driver; otherwise, module autoloading cannot
work.
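A minimal sketch of the pattern, assuming an illustrative compatible
string:
#include <linux/module.h>
#include <linux/of.h>

static const struct of_device_id example_of_match[] = {
        { .compatible = "mediatek,example-eth" },  /* illustrative */
        { /* sentinel */ }
};
/* exports the table into modules.alias so modprobe can autoload the
 * module from the DT compatible string */
MODULE_DEVICE_TABLE(of, example_of_match);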
Signed-off-by: Sean Wang <sean.wang@mediatek.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
MODULE_ALIAS exports information to allow the module to be auto-loaded
at boot for drivers registered using legacy platform registration.
However, since the driver is only ever used on DT-only platforms, the
MODULE_ALIAS is redundant and should be removed.
Signed-off-by: Sean Wang <sean.wang@mediatek.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Kirill Tkhai says:
====================
Converting pernet_operations (part #7.1)
This is a resend of the 4 patches from part #7.
Anna kindly reviewed them and suggested taking the patches through the
net tree, since pernet_operations::async only exists in net-next.git.
Anna's acks are on every header; the rest of the patches have no
changes.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
These pernet_operations create and destroy per-net pipe
and dentry, and they seem safe to be marked as async.
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Acked-by: Anna Schumaker <Anna.Schumaker@netapp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
These pernet_operations look similar to rpcsec_gss_net_ops; they just
create and destroy another cache, along with a directory. So they also
look safe to be async.
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Acked-by: Anna Schumaker <Anna.Schumaker@netapp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
These pernet_operations look similar to rpcsec_gss_net_ops; they just
create and destroy other caches. So they can also be async.
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Acked-by: Anna Schumaker <Anna.Schumaker@netapp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
These pernet_operations initialize and destroy the per-net items
referred to by sunrpc_net_id. The only global list used is cache_list,
and accesses to it are already serialized.
sunrpc_destroy_cache_detail() checks list_empty() without taking
cache_list_lock, but when it's called from unregister_pernet_subsys()
there can't be callers in parallel, so we won't miss list_empty() in
this case.
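A hedged sketch of the one-line change this series applies per
subsystem (the callbacks and id are illustrative; .async is the
net-next-only field these patches set):
static struct pernet_operations example_net_ops = {
        .init  = example_net_init,
        .exit  = example_net_exit,
        .id    = &example_net_id,
        .async = true,  /* safe to run setup/teardown without net_mutex */
};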
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Acked-by: Anna Schumaker <Anna.Schumaker@netapp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Pieter Jansen van Vuuren says:
====================
nfp: flower: add ip fragmentation offloading support
This set allows offloading IP fragmentation classification. It
implements IP fragmentation match offloading for both IPv4 and IPv6,
covering frag, nofrag, firstfrag and nofirstfrag classification.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Implement IP fragmentation match offloading for both IPv4 and IPv6.
This allows offloading frag, nofrag, firstfrag and nofirstfrag
classification, as in the tc sketch below.
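For reference, matching on these flags from user space would look
roughly like this (device and actions are illustrative):
tc qdisc add dev eth0 ingress
tc filter add dev eth0 ingress protocol ip flower ip_flags frag action drop
tc filter add dev eth0 ingress protocol ip flower ip_flags nofirstfrag action drop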
Signed-off-by: Pieter Jansen van Vuuren <pieter.jansenvanvuuren@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Refactor the shared IP header code for IPv4 and IPv6 in match
offloading.
Signed-off-by: Pieter Jansen van Vuuren <pieter.jansenvanvuuren@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch implements multiple pieces of the initialization flow, as
follows (a rough sketch appears after the list):
1) A reset is issued to ensure a clean device state, followed
by initialization of the admin queue interface.
2) Once the admin queue interface is up, clear the PF config
and transition the device to non-PXE mode.
3) Get the NVM configuration stored in the device's non-volatile
memory (NVM) using ice_init_nvm.
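A rough sketch of that flow (the ice_* names follow the description
above, but the signatures and return types here are simplified):
static int example_ice_init_hw(struct ice_hw *hw)
{
        int err;

        err = ice_reset(hw, ICE_RESET_PFR);   /* 1) clean device state */
        if (err)
                return err;

        err = ice_init_all_ctrlq(hw);         /* 1) admin queue up */
        if (err)
                return err;

        err = ice_clear_pf_cfg(hw);           /* 2) clear PF config */
        if (err)
                goto err_ctrlq;

        ice_clear_pxe_mode(hw);               /* 2) move to non-PXE mode */

        err = ice_init_nvm(hw);               /* 3) read NVM configuration */
        if (err)
                goto err_ctrlq;

        return 0;

err_ctrlq:
        ice_shutdown_all_ctrlq(hw);
        return err;
}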
CC: Shannon Nelson <shannon.nelson@oracle.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Acked-by: Shannon Nelson <shannon.nelson@oracle.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
We try to hold the TX virtqueue mutex in vhost_net_rx_peek_head_len()
after the RX virtqueue mutex is already held in handle_rx(). This
requires an appropriate lock nesting annotation to calm down the
deadlock detector.
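A hedged sketch of the annotation pattern (the subclass value is
illustrative); a plain mutex_lock() on the inner mutex would trip
lockdep even though the ordering is safe:
mutex_lock(&rx_vq->mutex);             /* outer lock (handle_rx) */
mutex_lock_nested(&tx_vq->mutex, 1);   /* inner lock, distinct subclass */
/* ... peek TX head len ... */
mutex_unlock(&tx_vq->mutex);
mutex_unlock(&rx_vq->mutex);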
Fixes: 0308813724606 ("vhost_net: basic polling support")
Reported-by: syzbot+7f073540b1384a614e09@syzkaller.appspotmail.com
Signed-off-by: Jason Wang <jasowang@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This is needed to support the modem found in HP EliteBook 820 G3.
Signed-off-by: Torsten Hilbrich <torsten.hilbrich@secunet.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The same fix as in 'bonding: move dev_mc_sync after
master_upper_dev_link in bond_enslave' is needed for the team driver.
The panic can be reproduced easily:
ip link add team1 type team
ip link set team1 up
ip link add link team1 vlan1 type vlan id 80
ip link set vlan1 master team1
Fixes: cb41c997d444 ("team: team should sync the port's uc/mc addrs when add a port")
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Xin Long says:
====================
bonding: a bunch of fixes for dev hwaddr sync in bond_enslave
This patchset mainly fixes a crash when adding a vlan as a slave of a
bond that is also the vlan's parent link (patch 2/3), and also fixes
some error path problems in bond_enslave (patches 1/3 and 3/3).
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
When dev_set_promiscuity(1) succeeds but dev_set_allmulti(1) fails,
dev_set_promiscuity(-1) should be done before going to the error path;
otherwise, dev->promiscuity will leak.
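A minimal sketch of the corrected unwind ordering (the label is
illustrative):
err = dev_set_promiscuity(slave_dev, 1);
if (err)
        goto err_out;

err = dev_set_allmulti(slave_dev, 1);
if (err) {
        /* the fix: drop the promiscuity ref before bailing out */
        dev_set_promiscuity(slave_dev, -1);
        goto err_out;
}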
Fixes: 7e1a1ac1fbaa ("bonding: Check return of dev_set_promiscuity/allmulti")
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Acked-by: Andy Gospodarek <andy@greyhouse.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Beniamino found a crash when adding vlan as slave of bond which is also
the parent link:
ip link add bond1 type bond
ip link set bond1 up
ip link add link bond1 vlan1 type vlan id 80
ip link set vlan1 master bond1
The call trace is as below:
[<ffffffffa850842a>] queued_spin_lock_slowpath+0xb/0xf
[<ffffffffa8515680>] _raw_spin_lock+0x20/0x30
[<ffffffffa83f6f07>] dev_mc_sync+0x37/0x80
[<ffffffffc08687dc>] vlan_dev_set_rx_mode+0x1c/0x30 [8021q]
[<ffffffffa83efd2a>] __dev_set_rx_mode+0x5a/0xa0
[<ffffffffa83f7138>] dev_mc_sync_multiple+0x78/0x80
[<ffffffffc084127c>] bond_enslave+0x67c/0x1190 [bonding]
[<ffffffffa8401909>] do_setlink+0x9c9/0xe50
[<ffffffffa8403bf2>] rtnl_newlink+0x522/0x880
[<ffffffffa8403ff7>] rtnetlink_rcv_msg+0xa7/0x260
[<ffffffffa8424ecb>] netlink_rcv_skb+0xab/0xc0
[<ffffffffa83fe498>] rtnetlink_rcv+0x28/0x30
[<ffffffffa8424850>] netlink_unicast+0x170/0x210
[<ffffffffa8424bf8>] netlink_sendmsg+0x308/0x420
[<ffffffffa83cc396>] sock_sendmsg+0xb6/0xf0
This is actually a deadlock caused by syncing the slave's hwaddr from
the master when the master is also the slave's 'slave'. The device-loop
check that prevents this is actually done by
netdev_master_upper_dev_link(). However, commit 1f718f0f4f97 ("bonding:
populate neighbour's private on enslave") moved it after dev_mc_sync.
This patch fixes it by moving dev_mc_sync after master_upper_dev_link,
so that the loop check runs before dev_mc_sync. It also moves the
if (mode == BOND_MODE_8023AD) block into the if (!bond_uses_primary)
clause as an improvement.
Note the team driver also has this issue; I will fix it in another
patch.
Fixes: 1f718f0f4f97 ("bonding: populate neighbour's private on enslave")
Reported-by: Beniamino Galvani <bgalvani@redhat.com>
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Acked-by: Andy Gospodarek <andy@greyhouse.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
vlan_vids_add_by_dev() is called right after the dev hwaddr sync, so on
the error path it should unsync the dev hwaddr. Otherwise, the slave
dev's hwaddr will never be unsynced when this error happens.
Fixes: 1ff412ad7714 ("bonding: change the bond's vlan syncing functions with the standard ones")
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Reviewed-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Acked-by: Andy Gospodarek <andy@greyhouse.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Sinan Kaya says:
====================
netdev: Eliminate duplicate barriers on weakly-ordered archs
Code includes wmb() followed by writel() in multiple places. writel()
already has a barrier on some architectures like arm64.
This ends up CPU observing two barriers back to back before executing the
register write.
Since code already has an explicit barrier call, changing writel() to
writel_relaxed().
I did a regex search for wmb() followed by writel() in each drivers
directory.
I scrubbed the ones I care about in this series.
I considered "ease of change", "popular usage" and "performance critical
path" as the determining criteria for my filtering.
We used relaxed API heavily on ARM for a long time but
it did not exist on other architectures. For this reason, relaxed
architectures have been paying double penalty in order to use the common
drivers.
Now that relaxed API is present on all architectures, we can go and scrub
all drivers to see what needs to change and what can remain.
We start with mostly used ones and hope to increase the coverage over time.
It will take a while to cover all drivers.
Feel free to apply patches individually.
Changes since v6:
- bring back amazon ena and add mmiowb, remove
ena_com_write_sq_doorbell_rel().
- remove extra mmiowb in bnx2x
- correct spelling mistake in bnx2x: Replace doorbell barrier() with wmb()
====================
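A hedged sketch of the transformation applied across this series
(doorbell/val names are illustrative):
/* before: wmb() orders the descriptor writes, then writel() issues a
 * second, redundant barrier on arm64 and friends */
wmb();
writel(val, doorbell);

/* after: keep the explicit wmb(), make the MMIO write relaxed, and
 * add mmiowb() so the write stays ordered w.r.t. a following
 * spin_unlock() on architectures that need it */
wmb();
writel_relaxed(val, doorbell);
mmiowb();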
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Code includes barrier() followed by writel(). writel() already has a
barrier on some architectures like arm64.
This ends up with the CPU observing two barriers back to back before
executing the register write.
Create a new wrapper function with a relaxed write operator, and use
the new wrapper when a write follows a barrier().
Since the code already has an explicit barrier call, change writel() to
writel_relaxed() and add mmiowb() for ordering protection.
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Code includes wmb() followed by writel(). writel() already has a
barrier on some architectures like arm64.
This ends up with the CPU observing two barriers back to back before
executing the register write.
Create a new wrapper function with a relaxed write operator, and use
the new wrapper when a write follows a wmb().
Since the code already has an explicit barrier call, change writel() to
writel_relaxed().
Also add mmiowb() so that the write cannot move outside the locked
scope.
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Acked-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Code includes wmb() followed by writel(). writel() already has a
barrier on some architectures like arm64.
This ends up with the CPU observing two barriers back to back before
executing the register write.
Create a new wrapper function with a relaxed write operator, and use
the new wrapper when a write follows a wmb().
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Code includes wmb() followed by writel(). writel() already has a
barrier on some architectures like arm64.
This ends up with the CPU observing two barriers back to back before
executing the register write.
Since the code already has an explicit barrier call, change writel() to
writel_relaxed().
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
barrier() doesn't guarantee that memory writes will be observed by the
hardware on all architectures; it only tells the compiler not to
reorder this code with respect to other reads/writes.
If a memory write needs to be observed by the HW, wmb() is the right
choice.
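A minimal sketch of the distinction (ring and descriptor names are
illustrative):
/* insufficient on weakly-ordered CPUs: barrier() is compiler-only,
 * so the descriptor stores may still reach memory after the doorbell */
barrier();
writel(tail, ring->doorbell);

/* correct: wmb() orders the stores in the CPU/memory system, so the
 * device sees a complete descriptor when the doorbell arrives */
wmb();
writel(tail, ring->doorbell);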
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Code includes wmb() followed by writel(). writel() already has a
barrier on some architectures like arm64.
This ends up with the CPU observing two barriers back to back before
executing the register write.
Since the code already has an explicit barrier call, change writel() to
writel_relaxed().
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Acked-by: Manish Chopra <manish.chopra@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Code includes wmb() followed by writel(). writel() already has a
barrier on some architectures like arm64.
This ends up with the CPU observing two barriers back to back before
executing the register write.
Since the code already has an explicit barrier call, change the code to
wmb()
writel_relaxed()
mmiowb()
for multi-arch support.
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
A control queue is a hardware interface which is used by the driver
to interact with other subsystems (like firmware, PHY, etc.). It is
implemented as a producer-consumer ring. More specifically, an
"admin queue" is a type of control queue used to interact with the
firmware.
This patch introduces data structures and functions to initialize
and teardown control/admin queues. Once the admin queue is initialized,
the driver uses it to get the firmware version.
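A hedged, much-simplified sketch of such a ring (field names are
illustrative, not the driver's actual layout):
#include <linux/types.h>

struct ctlq_desc {
        __le16 opcode;  /* e.g. "get firmware version" */
        __le16 flags;
        __le64 param;
};

struct ctlq_ring {
        struct ctlq_desc *desc; /* DMA-able descriptor array */
        u16 count;              /* ring size */
        u16 next_to_use;        /* producer index (driver posts) */
        u16 next_to_clean;      /* consumer index (driver reaps) */
};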
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
After the qdisc lock was dropped in pfifo_fast we allow multiple
enqueue threads and dequeue threads to run in parallel. On the
enqueue side the skb bit ooo_okay is used to ensure all related
skbs are enqueued in-order. On the dequeue side though there is
no similar logic. What we observe is that with fewer queues than CPUs
it is possible to re-order packets when two instances of
__qdisc_run() are running in parallel. Each thread will dequeue
an skb, and whichever thread calls the ndo op first will have its
skb sent on the wire first. This doesn't typically happen because
qdisc_run() is usually triggered by the same core that did the
enqueue. However, drivers will trigger __netif_schedule()
when queues are transitioning from stopped to awake using the
netif_tx_wake_* APIs. When this happens netif_schedule() calls
qdisc_run() on the same CPU that did the netif_tx_wake_* which
is usually done in the interrupt completion context. This CPU
is selected with the irq affinity which is unrelated to the
enqueue operations.
To resolve this we add a RUNNING bit to the qdisc to ensure
only a single dequeue per qdisc is running. Enqueue and dequeue
operations can still run in parallel and also on multi queue
NICs we can still have a dequeue in-flight per qdisc, which
is typically per CPU.
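A hedged sketch of the single-dequeuer guard (the bit name and helpers
are illustrative, not the exact upstream symbols):
#define EXAMPLE_QDISC_RUNNING 0 /* bit index in the qdisc state word */

static inline bool example_run_begin(unsigned long *state)
{
        /* atomically claim the dequeue slot; at most one CPU wins */
        return !test_and_set_bit(EXAMPLE_QDISC_RUNNING, state);
}

static inline void example_run_end(unsigned long *state)
{
        clear_bit(EXAMPLE_QDISC_RUNNING, state);
}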
Fixes: c5ad119fb6c0 ("net: sched: pfifo_fast use skb_array")
Reported-by: Jakob Unterwurzacher <jakob.unterwurzacher@theobroma-systems.com>
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The BroadMobi BM806U is a Qualcomm MDM9225 based 3G/4G modem.
The tested hardware is a BM806U mounted on a D-Link DWR-921-C3 router.
The USB ID is added to qmi_wwan.c to allow QMI communication with
the BM806U.
Tested on 4.14 kernel and OpenWRT.
Signed-off-by: Pawel Dembicki <paweldembicki@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Sergei Shtylyov says:
====================
sh_eth: unify the SoC feature checks
Here's a set of 5 patches against DaveM's 'net-next.git' repo.
The Ether driver sometimes uses the bit fields in 'struct sh_eth_cpu_data'
to check which Ether registers exist in a certain SoC and sometimes it uses
sh_eth_is_{gether|rz_fast_ether}() which basically compares 2 pointers (1 of
them being constant) -- the latter is definitely not a strongest feature of
the RISC CPUs (be it SH or ARM), so I decided to get rid of this type of
the feature checks in favour of the bit fields (I've also made use of a
32-bit value and method pointer where appropriate)...
[1/5] sh_eth: add sh_eth_cpu_data::soft_reset() method
[2/5] sh_eth: add sh_eth_cpu_data::edtrr_trns value
[3/5] sh_eth: add sh_eth_cpu_data::xdfar_rw flag
[4/5] sh_eth: add sh_eth_cpu_data::no_tx_cntrs flag
[5/5] sh_eth: add sh_eth_cpu_data::cexcr flag
====================
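A hedged sketch of the conversion, with surrounding code simplified
(the xdfar_rw bit is from the cover letter; the sh_eth_write()
arguments are abbreviated):
/* before: identify the controller by comparing cpu_data pointers */
if (sh_eth_is_gether(mdp) || sh_eth_is_rz_fast_ether(mdp))
        sh_eth_write(ndev, addr, RDFAR);

/* after: test a dedicated capability bit in sh_eth_cpu_data */
if (mdp->cd->xdfar_rw)
        sh_eth_write(ndev, addr, RDFAR);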
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
GEther controllers have CERCR/CEECR instead of CNDCR on the others.
Currently we are calling sh_eth_is_gether() in order to check for this,
however it would be simpler to check the new 'cexcr' bitfield in the
'struct sh_eth_cpu_data'; then we'd be able to remove sh_eth_is_gether()
as there would be no callers left...
Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
RZ/A1H (R7S72100) Ether controller doesn't seem to have the TX counter
registers like TROCR/CDCR/LCCR (or at least they are still undocumented
like some TSU registers), so we bail out of sh_eth_get_stats() early in
this case. Currently we are calling sh_eth_is_rz_fast_ether() in order
to check for this, but it would be simpler to check the new 'no_tx_cntrs'
bitfield in the 'struct sh_eth_cpu_data'; then we'd be able to remove
sh_eth_is_rz_fast_ether() as there would be no callers left...
Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The GEther-like controllers have writeable RDFAR/TDFAR, on the others
they are read-only or just absent (on R-Car). Currently we are calling
sh_eth_is_{gether|rz_fast_ether}() in order to check if these registers
can be written to, however it would be simpler to check the new 'xdfar_rw'
bitfield in the 'struct sh_eth_cpu_data'...
Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
sh_eth_get_edtrr_trns() returns the value to be written to EDTRR in order
to start TX DMA -- this value is different between the GEther-like and
the other controllers. We can replace this function (and thus get rid
of its calls to sh_eth_is_{gether|rz_fast_ether}()) with a new field
'edtrr_trns' in the 'struct sh_eth_cpu_data'.
Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
sh_eth_reset() performs a software reset which is implemented in a
completely different way for the GEther-like controllers vs the other
controllers due to a different layout of EDMR (and other factors) --
it therefore makes sense to convert this function to a mandatory
sh_eth_cpu_data::soft_reset() method and thus get rid of the runtime
controller type check via sh_eth_is_{gether|rz_fast_ether}().
Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
and switch to https where possible.
All links have been eyeballed to verify that the domains have
not changed, etc.
Signed-off-by: Sanjeev Gupta <ghane0@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Ever since commit 3a06c7ac24f9 ("posix-clocks: Remove interval timer
facility and mmap/fasync callbacks") the possibility of PHC based
posix timers has been removed. In addition it will probably never
make sense to implement this functionality.
This patch removes the misleading text which seems to suggest that
posix timers for PHC devices will ever be a thing.
Signed-off-by: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|