|
Commit 293b2da1b611 ("ARM: pxa: move platform_data definitions")
moved the file to the current location but forgot to remove the pointer
to its previous location. Clean it up.
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
|
|
Commit 1ef21f6343ff ("ARM: msm: move platform_data definitions")
moved the file to the current location but forgot to remove the pointer
to its previous location. Clean it up.
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
|
|
Commit a3b2924547a7 ("ARM: ep93xx: move platform_data definitions")
moved the file to the current location but forgot to remove the pointer
to its previous location. Clean it up. While at it, also update the header
guard macros to match the new location.
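A minimal illustration of such a guard rename (the macro names here are
illustrative, not necessarily the exact ones touched by the commit):

    -#ifndef __VIDEO_EP93XX_H
    -#define __VIDEO_EP93XX_H
    +#ifndef __PLATFORM_DATA_VIDEO_EP93XX_H
    +#define __PLATFORM_DATA_VIDEO_EP93XX_H
     ...
    -#endif /* __VIDEO_EP93XX_H */
    +#endif /* __PLATFORM_DATA_VIDEO_EP93XX_H */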
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
|
|
Currently the driver re-implements the code found in of_get_videomode(),
except that the latter honors the 'native-mode' property to select a
specific video timing from the list of possible timings.
The driver builds up a list of all video timings, but uses only the
last mode from the list anyway. While building the list it incorrectly
ORs the 'pixelclk-active' and 'de-active' flags of all modes into
single flags, possibly leading to a wrong pixel clock or data-enable
polarity setting.
Fix this by using of_get_videomode() directly with the
OF_USE_NATIVE_MODE flag.
Since all current dts files only have one entry in their
display-timings node, this bug was not apparent and the fix does not
change the driver's behaviour for the current users.
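A rough sketch of the intended call pattern (the surrounding probe code and
variable names are illustrative, not taken from this commit):

    #include <video/of_videomode.h>
    #include <video/videomode.h>

    struct videomode vm;
    int ret;

    /* Let the helper pick the timing referenced by 'native-mode'. */
    ret = of_get_videomode(dev->of_node, &vm, OF_USE_NATIVE_MODE);
    if (ret < 0)
            return ret;

    /* vm.flags now reflects the polarity flags of that one mode only. */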
Signed-off-by: Lothar Waßmann <LW@KARO-electronics.de>
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
|
|
Make the messages that are printed in case of fatal errors actually
visible to the user without having to recompile the driver with
debugging enabled.
Signed-off-by: Lothar Waßmann <LW@KARO-electronics.de>
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
|
|
The driver core clears the driver data to NULL after device_release
or on probe failure. Thus, there is no need to manually clear the
driver data to NULL.
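As an illustration, removals of this kind typically look like the following
(hypothetical driver; not the exact diff of this commit):

     static int foo_remove(struct platform_device *pdev)
     {
             struct foo_priv *priv = platform_get_drvdata(pdev);

             foo_hw_shutdown(priv);
    -        platform_set_drvdata(pdev, NULL); /* redundant: the core clears it */
             return 0;
     }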
Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
|
|
The driver core clears the driver data to NULL after device_release
or on probe failure. Thus, there is no need to manually clear the
driver data to NULL.
Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
|
|
The driver core clears the driver data to NULL after device_release
or on probe failure. Thus, there is no need to manually clear the
driver data to NULL.
Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
|
|
The driver core clears the driver data to NULL after device_release
or on probe failure. Thus, there is no need to manually clear the
driver data to NULL.
Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
|
|
This fixes a sparse warning.
Cc: Jean-Christophe Plagniol-Villard <plagnioj@jcrosoft.com>
Cc: Tomi Valkeinen <tomi.valkeinen@ti.com>
Cc: linux-fbdev@vger.kernel.org
Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
|
|
Fixes error messages in vmware.log
Signed-off-by: Jakob Bornecrantz <jakob@vmware.com>
Reviewed-by: Michael Banack <banackm@vmware.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
|
|
Michael Dalton says:
====================
virtio-net: mergeable rx buffer size auto-tuning
The virtio-net device currently uses aligned MTU-sized mergeable receive
packet buffers. Network throughput for workloads with large average
packet size can be improved by posting larger receive packet buffers.
However, due to SKB truesize effects, posting large (e.g., PAGE_SIZE)
buffers reduces the throughput of workloads that do not benefit from GRO
and have no large inbound packets.
This patchset introduces virtio-net mergeable buffer size auto-tuning,
with buffer sizes ranging from aligned MTU-size to PAGE_SIZE. Packet
buffer size is chosen based on a per-receive queue EWMA of incoming
packet size.
To unify mergeable receive buffer memory allocation and improve
SKB frag coalescing, all mergeable buffer memory allocation is
migrated to per-receive queue page frag allocators.
The per-receive queue mergeable packet buffer size is exported via
sysfs, and the network device sysfs layer has been extended to add
support for device-specific per-receive queue sysfs attribute groups.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add initial support for per-rx queue sysfs attributes to virtio-net. If
mergeable packet buffers are enabled, add a read-only mergeable packet
buffer size sysfs attribute for each RX queue.
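A condensed sketch of how the attribute group can be wired up on the
virtio-net side (the show() signature follows the rx_queue_attribute API of
that era; helper names such as get_mergeable_buf_len and the mrg_avg_pkt_len
field are driver internals shown only for illustration):

    static ssize_t mergeable_rx_buffer_size_show(struct netdev_rx_queue *queue,
                                                 struct rx_queue_attribute *attr,
                                                 char *buf)
    {
            struct virtnet_info *vi = netdev_priv(queue->dev);
            unsigned int i = get_netdev_rx_queue_index(queue);

            return sprintf(buf, "%u\n",
                           get_mergeable_buf_len(&vi->rq[i].mrg_avg_pkt_len));
    }

    static struct rx_queue_attribute mergeable_rx_buffer_size_attribute =
            __ATTR_RO(mergeable_rx_buffer_size);

    static struct attribute *virtio_net_mrg_rx_attrs[] = {
            &mergeable_rx_buffer_size_attribute.attr,
            NULL
    };

    static const struct attribute_group virtio_net_mrg_rx_group = {
            .name = "virtio_net",
            .attrs = virtio_net_mrg_rx_attrs
    };

The group is then attached before register_netdev(), e.g. via
dev->sysfs_rx_queue_group = &virtio_net_mrg_rx_group;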
Suggested-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael Dalton <mwdalton@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
To ensure that ewma_read() without a lock returns a valid but possibly
out-of-date average, modify ewma_add() to use ACCESS_ONCE so that no
intermediate, inconsistent values are written to avg->internal.
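A sketch of the idea (close to, but not necessarily identical to, the
lib/average.c code of that time): read the current value once, compute the
new average locally, then publish it with a single store so a lockless
ewma_read() never observes a half-updated value.

    void ewma_add(struct ewma *avg, unsigned long val)
    {
            unsigned long internal = ACCESS_ONCE(avg->internal);

            ACCESS_ONCE(avg->internal) = internal ?
                    (((internal << avg->weight) - internal) +
                     (val << avg->factor)) >> avg->weight :
                    (val << avg->factor);
    }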
Suggested-by: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Michael Dalton <mwdalton@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Extend existing support for netdevice receive queue sysfs attributes to
permit a device-specific attribute group. Initial use case for this
support will be to allow the virtio-net device to export per-receive
queue mergeable receive buffer size.
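A sketch of the core-side hookup (condensed from the net-sysfs rx queue
kobject setup; unrelated details omitted):

    /* net-sysfs.c: attach the driver-supplied group, if any, when the
     * per-queue kobject is created. */
    static int rx_queue_add_kobject(struct net_device *dev, int index)
    {
            struct netdev_rx_queue *queue = dev->_rx + index;
            struct kobject *kobj = &queue->kobj;
            int error;

            error = kobject_init_and_add(kobj, &rx_queue_ktype, NULL,
                                         "rx-%u", index);
            if (error)
                    return error;

            if (dev->sysfs_rx_queue_group) {
                    error = sysfs_create_group(kobj, dev->sysfs_rx_queue_group);
                    if (error) {
                            kobject_put(kobj);
                            return error;
                    }
            }

            kobject_uevent(kobj, KOBJ_ADD);
            return 0;
    }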
Signed-off-by: Michael Dalton <mwdalton@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Commit 2613af0ed18a ("virtio_net: migrate mergeable rx buffers to page frag
allocators") changed the mergeable receive buffer size from PAGE_SIZE to
MTU-size, introducing a single-stream regression for benchmarks with large
average packet size. There is no single optimal buffer size for all
workloads. For workloads with packet size <= MTU bytes, MTU + virtio-net
header-sized buffers are preferred as larger buffers reduce the TCP window
due to SKB truesize. However, single-stream workloads with large average
packet sizes have higher throughput if larger (e.g., PAGE_SIZE) buffers
are used.
This commit auto-tunes the mergeable receive buffer packet size by
choosing the packet buffer size based on an EWMA of the recent packet
sizes for the receive queue. Packet buffer sizes range from MTU size +
virtio-net header length to PAGE_SIZE. This improves throughput for
large packet workloads, as any workload with average packet size >=
PAGE_SIZE will use PAGE_SIZE buffers.
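A simplified sketch of how such an EWMA-driven size choice can look
(GOOD_PACKET_LEN and the helper name are illustrative driver internals;
ewma_read(), clamp_t() and ALIGN() are the stock kernel helpers):

    /* Pick the buffer size to post next, based on recent packet sizes. */
    static unsigned int get_mergeable_buf_len(struct ewma *avg_pkt_len)
    {
            const size_t hdr_len = sizeof(struct virtio_net_hdr_mrg_rxbuf);
            unsigned int len;

            len = hdr_len + clamp_t(unsigned int, ewma_read(avg_pkt_len),
                                    GOOD_PACKET_LEN, PAGE_SIZE - hdr_len);
            return ALIGN(len, L1_CACHE_BYTES);
    }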
These optimizations interact positively with recent commit
ba275241030c ("virtio-net: coalesce rx frags when possible during rx"),
which coalesces adjacent RX SKB fragments in virtio_net. The coalescing
optimizations benefit buffers of any size.
Benchmarks taken from an average of 5 netperf 30-second TCP_STREAM runs
between two QEMU VMs on a single physical machine. Each VM has two VCPUs
with all offloads & vhost enabled. All VMs and vhost threads run in a
single 4 CPU cgroup cpuset, using cgroups to ensure that other processes
in the system will not be scheduled on the benchmark CPUs. Trunk includes
SKB rx frag coalescing.
net-next w/ virtio_net before 2613af0ed18a (PAGE_SIZE bufs): 14642.85Gb/s
net-next (MTU-size bufs): 13170.01Gb/s
net-next + auto-tune: 14555.94Gb/s
Jason Wang also reported a throughput increase on mlx4 from 22Gb/s
using MTU-sized buffers to about 26Gb/s using auto-tuning.
Signed-off-by: Michael Dalton <mwdalton@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The virtio-net driver currently uses netdev_alloc_frag() for GFP_ATOMIC
mergeable rx buffer allocations. This commit migrates virtio-net to use
per-receive queue page frags for GFP_ATOMIC allocation. This change unifies
mergeable rx buffer memory allocation, which now will use
skb_page_frag_refill() for both atomic and __GFP_WAIT buffer allocations.
To address fragmentation concerns, if after buffer allocation there
is too little space left in the page frag to allocate a subsequent
buffer, the remaining space is added to the current allocated buffer
so that the remaining space can be used to store packet data.
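A simplified sketch of the leftover-space trick described above (the
alloc_frag field and the MERGE_BUFFER_LEN constant are illustrative driver
internals; skb_page_frag_refill() is the stock helper):

    struct page_frag *alloc_frag = &rq->alloc_frag;
    unsigned int len = MERGE_BUFFER_LEN;   /* MTU-size + header at this point */
    char *buf;

    if (unlikely(!skb_page_frag_refill(len, alloc_frag, gfp)))
            return -ENOMEM;

    buf = (char *)page_address(alloc_frag->page) + alloc_frag->offset;
    get_page(alloc_frag->page);
    alloc_frag->offset += len;

    /* If what is left cannot hold another buffer, give it to this one
     * so the tail of the page frag is still usable for packet data. */
    if (alloc_frag->size - alloc_frag->offset < len) {
            len += alloc_frag->size - alloc_frag->offset;
            alloc_frag->offset = alloc_frag->size;
    }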
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael Dalton <mwdalton@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
skb_page_frag_refill currently permits only order-0 page allocs
unless __GFP_WAIT is used. Change skb_page_frag_refill to attempt
higher-order page allocations whether or not __GFP_WAIT is used. If
memory cannot be allocated, the allocator will fall back to
successively smaller page allocs (down to order-0 page allocs).
This change brings skb_page_frag_refill in line with the existing
page allocation strategy employed by netdev_alloc_frag, which attempts
higher-order page allocations whether or not __GFP_WAIT is set, falling
back to successively lower-order page allocations on failure. This is
part of the migration of virtio-net to per-receive queue page frag
allocators.
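A sketch of the fallback strategy (a condensed version of such an allocator
loop; SKB_FRAG_PAGE_ORDER stands for the highest order attempted, and the
page-reuse path of the real function is omitted):

    bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t gfp)
    {
            int order = SKB_FRAG_PAGE_ORDER;

            /* Try progressively smaller compound pages, ending at order 0. */
            for (;;) {
                    gfp_t gfp_mask = gfp;

                    if (order)
                            gfp_mask |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY;
                    pfrag->page = alloc_pages(gfp_mask, order);
                    if (likely(pfrag->page)) {
                            pfrag->offset = 0;
                            pfrag->size = PAGE_SIZE << order;
                            return true;
                    }
                    if (--order < 0)
                            break;
            }
            return false;
    }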
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Michael Dalton <mwdalton@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The device and the kernel module disagree about the command length of
some commands. More pack attributes might be needed.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
|
|
Adds the relevant commands to the device interface header and
implements 64-bit binding for 64-bit VMs.
v2: Uppercase command IDs; also correctly use 64-bit page tables.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jakob Bornecrantz <jakob@vmware.com>
|
|
With guest-backed surfaces, surface->sizes == NULL, causing a kernel oops.
Use the base_size member instead.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jakob Bornecrantz <jakob@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
|
|
Update otable definitions and modify the otable setup code
accordingly.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
|
|
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
|
|
Combine it with vmw_dummy_query_bo_create, and also make sure
we use tryreserve when reserving the bo to avoid any lockdep warnings.
We are sure the tryreserve will always succeed since we are
the only users at that point.
In addition, allow the vmw_bo_pin function to pin/unpin system memory.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jakob Bornecrantz <jakob@vmware.com>
|
|
Only scrub context bindings when a bound resource is destroyed, or when
the MOB backing the context is unbound.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Zack Rusin <zackr@vmware.com>
|
|
The device is no longer capable of scrubbing context bindings of resources
that are bound when destroyed.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jakob Bornecrantz <jakob@vmware.com>
|
|
It's been deprecated.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jakob Bornecrantz <jakob@vmware.com>
|
|
Also bump the minor version to signal a GB-aware kernel module.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Zack Rusin <zackr@vmware.com>
|
|
This ioctl enables inter-process synchronization of buffer objects,
which is needed for mesa Guest-Backed objects.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
|
|
By default the SVGA device creates nonmaskable multisampling surfaces, in
which case a multisampleCount of 1 means: the first quality setting
of a nonmaskable multisampling surface. Let's change it to make sure
that the backends know that multisampling is really off.
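In other words, something along these lines (illustrative only, not the
exact vmwgfx diff):

    /* A requested sample count of 1 really means "no multisampling";
     * pass 0 so the device does not silently pick the first nonmaskable
     * multisampling quality level instead. */
    if (multisample_count == 1)
            multisample_count = 0;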
Signed-off-by: Zack Rusin <zackr@vmware.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
|
|
Make sure we disallow commands if the device doesn't support them.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Zack Rusin <zackr@vmware.com>
|
|
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Zack Rusin <zackr@vmware.com>
|
|
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Zack Rusin <zackr@vmware.com>
|
|
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Zack Rusin <zackr@vmware.com>
|
|
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Zack Rusin <zackr@vmware.com>
|
|
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Zack Rusin <zackr@vmware.com>
|
|
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Zack Rusin <zackr@vmware.com>
|
|
Contexts are managed by the kernel only, so disable access to GB
context commands from user-space.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Zack Rusin <zackr@vmware.com>
|
|
When the backing store buffer is evicted, issue a readback from the
resources and notify them that they are no longer bound to a valid
backing store.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Zack Rusin <zackr@vmware.com>
|
|
Perform a translation of legacy query commands should they occur
in the command stream.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Zack Rusin <zackr@vmware.com>
|
|
Also do basic consistency checking.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Zack Rusin <zackr@vmware.com>
|
|
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Zack Rusin <zackr@vmware.com>
|
|
To bind a buffer object as a MOB, just validate it as a MOB
memory type. We are reusing the GMRID manager, although we create a new
instance of it to manage MOB ids and to make sure we don't exceed
the maximum amount of MOB pages.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
|
|
Implement MOB setup, binding and unbinding, but don't hook up to
TTM yet.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Zack Rusin <zackr@vmware.com>
|
|
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Zack Rusin <zackr@vmware.com>
Conflicts:
drivers/gpu/drm/vmwgfx/vmwgfx_ioctl.c
|
|
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Zack Rusin <zackr@vmware.com>
|
|
In the future, scanout buffers need not be backed by VRAM, and the two
definitions will differ.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Zack Rusin <zackr@vmware.com>
|
|
Not hooked up yet. This is only the definition.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Zack Rusin <zackr@vmware.com>
Conflicts:
include/uapi/drm/vmwgfx_drm.h
|
|
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Zack Rusin <zackr@vmware.com>
|
|
With DMA compliance / IOMMU support added to the driver in kernel 3.13,
DMA addresses can exceed 44 bits, which is the limit of what we support
in 32-bit mode and with GMR1.
So in 32-bit mode, and optionally in 64-bit mode, restrict DMA
addresses to 44 bits, and strip the old GMR1 code.
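One common way to express such a restriction (a hedged sketch; dev here
stands for the device's struct device, and the actual commit may enforce
the limit elsewhere, e.g. in the GMR page-table setup):

    /* Cap the addresses handed to the device at 44 bits. */
    ret = dma_set_mask(dev, DMA_BIT_MASK(44));
    if (ret)
            return ret;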
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jakob Bornecrantz <jakob@vmware.com>
Cc: stable@vger.kernel.org
|