Age | Commit message | Author |
|
This looks to be what NVIDIA do pretty much everywhere, since forever.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
|
|
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
|
|
This way we can catch it with debugging enabled for the PMC subdev.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
|
|
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
|
|
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
|
|
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
|
|
This is safe because ptherm hasn't been configured yet; it will be, a
little further down the initialization path. Ptherm should be safe with
regard to runtime reconfiguration.
v2:
- do not limit this patch to nv84-a3 and make it nv84+
v3:
- move the ack to fini()
- disable IRQs on fini()
- silently ignore un-requested IRQs
Signed-off-by: Martin Peres <martin.peres@labri.fr>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
|
|
NV31 has different config bits than NV40+ do. Also fix the DMA_IMAGE
VRAM-only setting to check the right bits.
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
|
|
This makes nv31+ able to actually perform the nv_call, since previously
the inst was not available.
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
|
|
Since nv40 only covers pre-nv44 now, it can use the nv31-provided
functions.
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
|
|
The nv31/nv40 impls are actually fairly nv44-specific, since they assume
the presence of the instance register/context switching. Create a copy
before nv31/nv40 get fixed.
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
|
|
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
|
|
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
|
|
NV1A is numerically higher than NV17 but generationally lower. Use the
new card type to help disambiguate.
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
|
|
NV11/17/1F/18 come after NV10/15/16/1A. In order to facilitate using
numerical comparisons, split up the two sets into different card types.
This change should be a no-op except that the relevant cards will see
NV11 printed instead of NV10 for the family.
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
|
|
This code was originally moved to using nv_mask by d31e078d84. This
should not have any actual effect since the mask isn't applied to the
value.
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
|
|
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
|
|
NVIDIA's name for what rnndb calls PVCOMP.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
|
|
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
|
|
This fixes a reported locking inversion when interacting with the DRM
core's vblank routines.
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
|
|
This is a necessary step towards being able to work with the insane locking
requirements of the DRM core's vblank routines, and a nice cleanup as a
side-effect.
This is similar in spirit to the interfaces that Peter Hurley arrived at
with his nouveau_event rcu conversion series.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
|
|
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
|
|
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
|
|
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
|
|
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
|
|
Most nouveau event handlers have storage in 'static' containers
(structures with lifetimes nearly equivalent to the drm_device),
but are dangerously reused via nouveau_event_get/_put. For
example, if nouveau_event_get is called more than once for a
given handler, the event handler list will be corrupted.
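A generic illustration of the failure mode and a guard against it (hypothetical types, not the actual nouveau code):
    #include <linux/list.h>
    #include <linux/spinlock.h>

    struct ev_handler {
            struct list_head head;
            bool enabled;                   /* assumed guard state */
    };

    static LIST_HEAD(ev_list);
    static DEFINE_SPINLOCK(ev_lock);

    /* Calling this twice for the same handler without the guard would
     * list_add() the same node twice and corrupt the list. */
    static void ev_handler_get(struct ev_handler *h)
    {
            unsigned long flags;

            spin_lock_irqsave(&ev_lock, flags);
            if (!h->enabled) {
                    h->enabled = true;
                    list_add(&h->head, &ev_list);
            }
            spin_unlock_irqrestore(&ev_lock, flags);
    }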
Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
|
|
The index_nr field is constant for the lifetime of the event, so
serialized access is unnecessary.
Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
|
|
Provide a private field for event handlers' exclusive use.
Convert nouveau_fence_wait_uevent() and
nouveau_fence_wait_uevent_handler(); drop struct nouveau_fence_uevent.
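A rough sketch of the idea with illustrative names (not the actual nouveau structures or signatures):
    #include <linux/completion.h>

    struct ev_handler_sketch {
            int (*func)(void *priv, int index);     /* callback */
            void *priv;                     /* new: handler-private data */
    };

    /* The fence wait path can now hand its completion to the handler
     * directly, instead of wrapping it in a dedicated struct. */
    static int fence_wait_handler(void *priv, int index)
    {
            complete(priv);
            return 0;
    }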
Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
|
|
The test here should be ">= ARRAY_SIZE()" instead of "> ARRAY_SIZE()".
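A minimal sketch of the pattern being corrected (hypothetical array and function):
    #include <linux/kernel.h>       /* ARRAY_SIZE() */
    #include <linux/errno.h>

    static int lut[16];

    static int lut_lookup(unsigned int index)
    {
            if (index >= ARRAY_SIZE(lut))   /* ">" would let index 16 through */
                    return -EINVAL;
            return lut[index];
    }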
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
|
|
Self-assignment of a variable doesn't make a lot of sense.
Signed-off-by: Dave Jones <davej@fedoraproject.org>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
|
|
Many of the uses of get_random_bytes() do not actually need
cryptographically secure random numbers. Replace those uses with a
call to prandom_u32(), which is faster and which doesn't consume
entropy from the /dev/random driver.
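A hedged example of the kind of conversion involved (hypothetical call site):
    #include <linux/types.h>
    #include <linux/random.h>

    /* Before (cryptographically strong, consumes entropy):
     *         get_random_bytes(&jitter, sizeof(jitter));
     * After: ordinary pseudo-random jitter is enough here. */
    static u32 pick_jitter(void)
    {
            return prandom_u32() % 1000;
    }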
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
|
|
If we fail to init & add the kobject in fill_super, the stats info and
proc object of f2fs will not be released.
We should free them before we finish fill_super.
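A generic sketch of the goto-based cleanup the fix adds (helper and label names are illustrative, not the exact f2fs symbols):
    err = build_stats(sbi);                 /* illustrative */
    if (err)
            goto free_options;

    err = register_proc_and_kobject(sbi);   /* illustrative */
    if (err)
            goto free_stats;                /* new: also undo stats/proc */

    return 0;

    free_stats:
            destroy_stats(sbi);
    free_options:
            kfree(options);
            return err;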
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
|
|
Use the general method supported by the kernel.
o changes from v1
If any waiter exists at end io, wake it up.
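A hedged sketch of the standard wait-queue pattern being adopted (generic names, not the f2fs code):
    #include <linux/wait.h>
    #include <linux/atomic.h>

    static DECLARE_WAIT_QUEUE_HEAD(io_wait);
    static atomic_t inflight = ATOMIC_INIT(0);

    static void wait_for_io(void)
    {
            wait_event(io_wait, atomic_read(&inflight) == 0);
    }

    static void end_io(void)
    {
            /* wake any waiter at end io */
            if (atomic_dec_and_test(&inflight) && waitqueue_active(&io_wait))
                    wake_up(&io_wait);
    }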
Signed-off-by: Changman Lee <cm224.lee@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
|
|
Commit ec22ba8e ("ext4: disable merging of uninitialized extents")
ensured that if either extent under consideration is uninit, we
decline to merge, and ext4_can_extents_be_merged() returns false.
So there is no need for the caller to then test whether the
extent under consideration is uninitialized; if it were, we
wouldn't have gotten that far.
The comments were also inaccurate; ext4_can_extents_be_merged()
no longer XORs the states, it fails if *either* is uninit.
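A condensed sketch of the resulting simplification (surrounding code elided; variables are illustrative):
    /* Before: */
    if (ext4_can_extents_be_merged(inode, ex, ex + 1) &&
        !ext4_ext_is_uninitialized(ex))     /* redundant re-check */
            merged = 1;

    /* After: the helper already refuses to merge uninit extents. */
    if (ext4_can_extents_be_merged(inode, ex, ex + 1))
            merged = 1;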
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Reviewed-by: Zheng Liu <wenqing.lz@taobao.com>
|
|
Mathias Krause says:
====================
move pskb_put (was: IPsec improvements)
This series moves pskb_put() to the core code, making the code
duplication in caif obsolete (patches 1 and 2).
Patch 3 fixes a few kernel-doc issues.
v2 of this series does no longer contain the skb_cow_data() patch and
therefore no performance improvements for IPsec. The change is still
under discussion, but otherwise independent from the above changes.
Please apply!
v2:
- kernel-doc fixes for pskb_put, as noticed by Ben
- dropped skb_cow_data patch as it's still discussed
- added a kernel-doc fixes patch (patch 3)
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Use "@" to refer to parameters in the kernel-doc description. According
to Documentation/kernel-doc-nano-HOWTO.txt "&" shall be used to refer to
structures only.
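For reference, an illustrative kernel-doc block showing the convention (hypothetical function):
    /**
     * foo_frobnicate - illustrative kernel-doc example
     * @dev: device to operate on ("@" refers to a parameter)
     * @len: number of bytes to process
     *
     * Operates on a &struct net_device ("&" refers to a structure).
     */
    int foo_frobnicate(struct net_device *dev, int len);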
Signed-off-by: Mathias Krause <mathias.krause@secunet.com>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Also remove the warning for fragmented packets -- skb_cow_data() will
linearize the buffer, removing all fragments.
Signed-off-by: Mathias Krause <mathias.krause@secunet.com>
Cc: Dmitry Tarnyagin <dmitry.tarnyagin@lockless.no>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This function has uses besides IPsec, so move it to the core skbuff code.
While doing so, give it some documentation and change its return type to
'unsigned char *' to be in line with skb_put().
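The moved helper then looks roughly like this (a sketch based on the description above, not the exact prototype):
    #include <linux/skbuff.h>

    /* Like skb_put(), but appends to the tail fragment of a possibly
     * nonlinear buffer (e.g. one prepared by skb_cow_data()), and now
     * returns unsigned char * to match skb_put(). */
    unsigned char *pskb_put(struct sk_buff *skb, struct sk_buff *tail,
                            int len);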
Signed-off-by: Mathias Krause <mathias.krause@secunet.com>
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Changing MTU size of an xgmac network interface while it is active can
cause a panic like
skbuff: skb_over_panic: text:c03bc62c len:1090 put:1090 head:edfb6900 data:edfb6942 tail:0xedfb6d84 end:0xedfb6bc0 dev:eth0
------------[ cut here ]------------
kernel BUG at net/core/skbuff.c:126!
Internal error: Oops - BUG: 0 [#1] SMP ARM
Modules linked in:
CPU: 0 PID: 762 Comm: python Tainted: G W 3.10.0-00015-g3e33cd7 #309
task: edcfe000 ti: ed67e000 task.ti: ed67e000
PC is at skb_panic+0x64/0x70
LR is at wake_up_klogd+0x5c/0x68
This happens because xgmac_change_mtu modifies dev->mtu before the
network interface is quiesced. And thus there still might be buffers
in use which have a buffer size based on the old MTU.
To fix this, I moved the change of dev->mtu to after the call to
xgmac_stop.
Another modification is required (in xgmac_stop) to ensure that
xgmac_xmit is really not called anymore (xgmac_tx_complete might wake
up the queue again).
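A condensed sketch of the reordering (assumed shape, not the exact driver hunk):
    /* in xgmac_change_mtu(), roughly: */
    if (!netif_running(dev)) {
            dev->mtu = new_mtu;     /* nothing in flight, safe to change */
            return 0;
    }

    xgmac_stop(dev);                /* quiesce first (netif_tx_disable()) */
    dev->mtu = new_mtu;             /* only then change the MTU */
    return xgmac_open(dev);         /* reallocate buffers for the new size */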
I've tested the fix by switching MTU size every second between 600 and
1500 while network traffic was going on. The test box survived a test
of several hours (until I stopped it), whereas without this fix the
above panic occurs after several minutes (at most).
Change since v1:
- remove call to netif_stop_queue at beginning of xgmac_stop
- use netif_tx_disable instead of locking+netif_stop_queue
Signed-off-by: Andreas Herrmann <andreas.herrmann@calxeda.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Amir Vadai says:
====================
net/mlx4: Mellanox driver update 07-11-2013
This patchset contains some enhancements and bug fixes for the mlx4_* drivers.
The patchset was applied and tested against commit: "9bb8ca8 virtio-net: switch to
use XPS to choose txq"
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
For each RX/TX ring and its CQ, allocation is done on a NUMA node that
corresponds to the core that the data structure should operate on.
The assumption is that the core number is reflected by the ring index.
The affected allocations are the ring/CQ data structures,
the TX/RX info and the shared HW/SW buffer.
For TX rings, each core has rings of all UPs.
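A hedged sketch of the allocation pattern (generic helper, not the mlx4 code itself; it assumes ring index i is serviced by core i, as stated above):
    #include <linux/slab.h>
    #include <linux/topology.h>

    static void *alloc_ring_ctx(int ring_idx, size_t size)
    {
            int node = cpu_to_node(ring_idx);   /* core <-> ring assumption */

            return kzalloc_node(size, GFP_KERNEL, node);
    }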
Signed-off-by: Yevgeny Petrilin <yevgenyp@mellanox.com>
Signed-off-by: Eugenia Emantayev <eugenia@mellanox.com>
Reviewed-by: Hadar Hen Zion <hadarh@mellanox.com>
Signed-off-by: Amir Vadai <amirv@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This is done to optimize FW/HW access to host memory.
Signed-off-by: Yevgeny Petrilin <yevgenyp@mellanox.com>
Signed-off-by: Eugenia Emantayev <eugenia@mellanox.com>
Reviewed-by: Hadar Hen Zion <hadarh@mellanox.com>
Signed-off-by: Amir Vadai <amirv@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Currently all TX/RX rings and completion queues are part of the
netdev priv structure and are allocated statically. This patch
will change the priv to hold only arrays of pointers and therefore
all TX/RX rings and completion queues will be allocated
dynamically. This is in preparation for NUMA aware allocations.
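A rough before/after sketch of the layout change (struct and field names are illustrative):
    struct en_priv_sketch {
            /* before: fixed-size embedded arrays
             *   struct tx_ring ring[MAX_RINGS];
             *   struct cq      cq[MAX_RINGS];
             */

            /* after: arrays of pointers, each entry allocated dynamically
             * (and, in the next patch, on the right NUMA node) */
            struct tx_ring **ring;
            struct cq      **cq;
    };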
Signed-off-by: Yevgeny Petrilin <yevgenyp@mellanox.com>
Signed-off-by: Eugenia Emantayev <eugenia@mellanox.com>
Reviewed-by: Hadar Hen Zion <hadarh@mellanox.com>
Signed-off-by: Amir Vadai <amirv@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Allow immediate activation of VGT->VST and VST->VGT transitions, without
the need for rebinding in mlx4_master_immediate_activate_vlan_qos().
Also, in struct res_qp, add QP parameters (vlan_index,fvl,vlan_cntrol..)
to the saved set, in order to restore them when moving back to VGT.
- Clear at mlx4_RST2INIT_QP_wrapper()
- Save at mlx4_INIT2RTR_QP_wrapper()
- Restore at mlx4_vf_immed_vlan_work_handler()
Update mlx4_vf_immed_vlan_work_handler() to support VGT.
Signed-off-by: Rony Efraim <ronye@mellanox.com>
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Reviewed-by: Hadar Hen Zion <hadarh@mellanox.com>
Signed-off-by: Amir Vadai <amirv@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
To guarantee that all unused fields in all FW commands for both inboxes
and outboxes are zeroed out, initialize the mailbox buffer to all zeroes.
This is especially important for SRIOV comm-channel virtual commands
(such as QUERY_FUNC_CAP), where if new fields are added to support new
features, the driver can depend on older kernels passing zeroes in these
fields.
In addition to zeroing out the mailbox buffer at allocation time, all
(now unnecessary) calls to memset by the callers of
mlx4_alloc_cmd_mailbox() are removed.
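A condensed sketch of the allocation-time zeroing (approximate; "cmd_pool" stands in for the driver's DMA pool):
    mailbox->buf = pci_pool_alloc(cmd_pool, GFP_KERNEL, &mailbox->dma);
    if (!mailbox->buf) {
            kfree(mailbox);
            return ERR_PTR(-ENOMEM);
    }
    memset(mailbox->buf, 0, MLX4_MAILBOX_SIZE);     /* new: zero once here */
    /* ... so callers can drop their own memset(mailbox->buf, 0, ...) */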
Signed-off-by: Majd Dibbiny <majd@mellanox.com>
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Amir Vadai <amirv@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Modify RFS code to support applying filters for incoming UDP streams.
Signed-off-by: Eyal Perry <eyalpe@mellanox.com>
Signed-off-by: Amir Vadai <amirv@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
John Fastabend says:
====================
l2 hardware accelerated macvlans
This patch adds support to offload macvlan net_devices to the
hardware. With these patches packets are pushed to the macvlan
net_device directly and do not pass through the lower dev.
The patches here have made it through multiple iterations
each with a slightly different focus. First I tried to
push these as a new link type called "VMDQ". The patches
are shown here:
http://comments.gmane.org/gmane.linux.network/237617
Following this implementation I renamed the link type
"VSI" and addressed various comments. Finally Neil
Horman picked up the patches and integrated the offload
into the macvlan code, shown here:
http://permalink.gmane.org/gmane.linux.network/285658
The attached series is a clean-up of his patches, with a
few fixes.
If folks find this series acceptable, there are a few
items we can work on next. First, broadcast and multicast
will use the hardware even for local traffic with this
series. It would be best (I think) to use the software
path for macvlan-to-macvlan traffic and save the PCIe
bus. This depends on how much you value CPU time vs.
PCIe bandwidth. This will need another patch series
to flesh out.
Also this series only allows for layer 2 mac forwarding
where some hardware supports more interesting forwarding
capabilities. Integrating with OVS may be useful here.
As always any comments/feedback welcome.
My basic I/O test is here, but I've also done some link
testing, SRIOV/DCB with macvlans, and others.
Changelog:
v2: two fixes to ixgbe when the DCB, FCoE, and SR-IOV features are all
enabled with macvlans. A VMDQ_P() reference should have been
accel->pool, and the offset of the ring index should not be set from
the dfwd add call. The offset is used by SR-IOV, so clearing it can
cause SR-IOV queue indexes to go sideways. With these fixes, testing
macvlans with SR-IOV enabled was successful.
v3: addressed Neil's comments in ixgbe
fixed error path on dfwd_add_station() in ixgbe
fixed ixgbe to allow SRIOV and accelerated macvlans to
coexist.
v4: Dave caught some strange indentation, fixed it here
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Now that l2 acceleration ops are in place from the prior patch,
enable ixgbe to take advantage of these operations. Allow it to
allocate queues for a macvlan so that when we transmit a frame,
we can do the switching in hardware inside the ixgbe card, rather
than in software.
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
CC: Andy Gospodarek <andy@greyhouse.net>
CC: "David S. Miller" <davem@davemloft.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add an operations structure that allows a network interface to export
the fact that it supports packet forwarding in hardware between
physical interfaces and other mac-layer devices assigned to it (such
as macvlans). This operations structure can be used by virtual mac
devices to bypass software switching so that forwarding can be done
in hardware more efficiently.
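A hedged sketch of what such hooks could look like (the names below are illustrative, not necessarily the ones the patch adds):
    #include <linux/netdevice.h>

    /* A lower device exposes add/del hooks so an upper mac-layer device
     * (e.g. a macvlan) can claim a hardware forwarding slot. */
    struct l2fwd_ops_sketch {
            void *(*add_station)(struct net_device *lowerdev,
                                 struct net_device *upperdev);
            void  (*del_station)(struct net_device *lowerdev,
                                 void *accel_priv);
    };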
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
CC: Andy Gospodarek <andy@greyhouse.net>
CC: "David S. Miller" <davem@davemloft.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
timecounter_init() was called only after the first potential
timecounter_read().
Move mlx4_en_init_timestamp() before mlx4_en_init_netdev().
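A condensed sketch of the corrected ordering (arguments abbreviated):
    /* Before, mlx4_en_init_netdev() could reach timecounter_read() on a
     * timecounter that timecounter_init() had not set up yet. */
    mlx4_en_init_timestamp(mdev);           /* does timecounter_init() */
    mlx4_en_init_netdev(mdev, port, prof);  /* may read the timecounter */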
Signed-off-by: Amir Vadai <amirv@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|