|
In netvsc_start_xmit() we can only handle packets scattered across at most
MAX_PAGE_BUFFER_COUNT-2 pages. It is, however, easy to create a packet which
is not big in size but occupies more pages (e.g. if its frags straddle
compound page boundaries). When we drop such a packet, the sender tries to
resend it, but in most cases it resends the same packet, which gets dropped
again; this causes the particular connection to stall. To solve the issue we
can try linearizing the skb.
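A minimal sketch of the resulting fallback, assuming a hypothetical recount
helper; the real transmit path has more bookkeeping:

  if (page_count > MAX_PAGE_BUFFER_COUNT - 2) {
          if (skb_linearize(skb))
                  goto drop;                      /* out of memory: drop as before */
          page_count = count_skb_pages(skb);      /* hypothetical recount helper */
  }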
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
... which validly uses dev_kfree_skb_any() instead of dev_kfree_skb().
Setting ret to -EFAULT or -ENOMEM has no real meaning here (we only need to
set it to anything other than -EAGAIN), as we drop the packet and return
NETDEV_TX_OK anyway.
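For reference, the shared drop path referred to above looks roughly like this
(a sketch, not the exact driver code):

  drop:
          dev_kfree_skb_any(skb);         /* safe in both hard-irq and process context */
          return NETDEV_TX_OK;            /* packet consumed; the stack must not retry */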
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Some M-Audio devices require a bootup command to be received just after
powering on, but the code in the BeBoB driver doesn't work properly on
big-endian machines because the command must be laid out in little-endian
byte order.
This commit fixes the bug. The fix should go to stable kernels.
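The essence of the fix is to build the command in explicit little-endian
byte order instead of host order; a minimal sketch (the command words are
placeholders, not the device's real values):

  __le32 cmd[2];

  cmd[0] = cpu_to_le32(BOOTUP_CMD_HI);    /* placeholder command word */
  cmd[1] = cpu_to_le32(BOOTUP_CMD_LO);    /* placeholder command word */
  /* The byte layout is now identical on little- and big-endian hosts. */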
Cc: Takayuki Shiroma <t.shiroma.oki@gmail.com>
Signed-off-by: Takashi Sakamoto <o-takashi@sakamocchi.jp>
Cc: <stable@vger.kernel.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
|
|
Shradha Shah says:
====================
sfc: Nic specific sriov functions, netdev_ops and sriov_configure
These are the first two patches in a series adding SRIOV support on EF10.
The first patch declares NIC-specific SRIOV functions in NIC-specific
headers, creates only one instance of netdev_ops and removes SRIOV
functionality from the Falcon code.
The second patch adds support for sriov_configure.
The Virtual Functions can be enabled, but they do not bind to the SFC
driver just yet.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch adds support for the use of sriov_configure on EF10
to enable Virtual Functions while the driver is loaded.
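sriov_configure is the pci_driver hook invoked when sriov_numvfs is written
in sysfs. A hedged sketch of how such a hook is typically wired up (function
name and error handling are illustrative, not the sfc implementation):

  static int efx_pci_sriov_configure(struct pci_dev *dev, int num_vfs)
  {
          int rc;

          if (num_vfs == 0) {
                  pci_disable_sriov(dev);
                  return 0;
          }

          rc = pci_enable_sriov(dev, num_vfs);
          return rc ? rc : num_vfs;       /* on success, report the enabled VF count */
  }

  static struct pci_driver efx_pci_driver = {
          /* ... */
          .sriov_configure = efx_pci_sriov_configure,
  };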
Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
netdev_ops and sriov removed from Falcon code
By putting all the efx_{siena,ef10}_sriov_* declarations in
{siena,ef10}_sriov.h, we ensure they cannot be called from nic-generic code.
This also fixes up an instance of this, where mcdi.c was calling
efx_siena_sriov_flr().
The single instance of netdev_ops should call general high level
functions that can then call something adapter specific in efx_nic_type.
We should only do adapter specialisation via efx_nic_type.
Removal of sriov functionality from the Falcon code means that tests
are needed for the presence of some callbacks.
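A minimal sketch of that specialisation pattern, with an assumed efx_nic_type
member name (Falcon would simply leave the callback NULL):

  static int efx_set_vf_mac(struct net_device *net_dev, int vf, u8 *mac)
  {
          struct efx_nic *efx = netdev_priv(net_dev);

          if (!efx->type->sriov_set_vf_mac)       /* e.g. Falcon: no SRIOV support */
                  return -EOPNOTSUPP;

          return efx->type->sriov_set_vf_mac(efx, vf, mac);
  }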
Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Alexander Duyck says:
====================
Replace wmb()/rmb() with dma_wmb()/dma_rmb() where appropriate
This is the start of a side project cleaning up the drivers that can make use
of the dma_wmb and dma_rmb calls. The general idea is to remove the
unnecessary wmb/rmb calls from a number of drivers and to use the
lighter-weight dma_wmb/dma_rmb calls instead. This should allow an overall
improvement in performance, since each full barrier can cost a significant
number of cycles and, on architectures such as x86, is unnecessary for
ordering accesses to coherent DMA memory.
These changes are what I would consider low-hanging fruit. The likelihood of
the changes introducing an error should be low, since the use of the barriers
in these cases is fairly obvious.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This change replaces calls to rmb with dma_rmb in the case where we want to
order all follow-on descriptor reads after the check for the descriptor
status bit.
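The pattern being converted looks roughly like this (descriptor field and
flag names are illustrative):

  if (!(le32_to_cpu(rx_desc->status) & RX_DESC_DONE))    /* illustrative flag */
          break;

  /* Order all following reads of this descriptor after the DONE check.
   * Only coherent DMA memory is involved, no MMIO, so the lighter
   * dma_rmb() is sufficient where rmb() was used before. */
  dma_rmb();

  len = le32_to_cpu(rx_desc->length);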
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This change updates several spots where a wmb was being used to instead use
a dma_wmb to flush out writes before updating the control portion of the
descriptor.
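The corresponding transmit-side pattern, again with illustrative names:

  tx_desc->addr = cpu_to_le64(dma_addr);
  tx_desc->len  = cpu_to_le32(skb->len);

  /* Make the buffer and descriptor writes above visible to the device
   * before handing over ownership; dma_wmb() replaces wmb() here. A full
   * wmb() (or writel()) is still required before any doorbell MMIO write. */
  dma_wmb();
  tx_desc->cmd = cpu_to_le32(TX_DESC_OWN | TX_DESC_EOP); /* illustrative flags */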
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch goes through and replaces wmb/rmb with dma_wmb/dma_rmb in cases
where the barrier is being used to order writes or reads to just memory and
doesn't involve any programmed I/O.
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
bond_3ad_bind_slave() calls ad_initialize_port() and then immediately
assigns correct values making some of that initialization unnecessary.
Signed-off-by: Mahesh Bandewar <maheshb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch breaks the rich assignments into their own statements and
removes some duplicate code where the admin-key and oper-key are
updated.
Signed-off-by: Mahesh Bandewar <maheshb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Fixes: 79b16aadea32cce ("udp_tunnel: Pass UDP socket down through udp_tunnel{, 6}_xmit_skb().")
Reported-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The socket parameter may legitimately be NULL, so calling sock_net() on it
sometimes causes a NULL pointer dereference. Using the net_device pointer in
the dst_entry is more reliable.
Fixes: b6a7719aedd7e5c ("ipv4: hash net ptr into fragmentation bucket selection")
Reported-by: Rick Jones <rick.jones2@hp.com>
Cc: Rick Jones <rick.jones2@hp.com>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This reverts commit d63e2e1f3df904bf6bd150bdafb42ddbb3257ea8.
David Ahern reported that d63e2e1f3df9 breaks booting on an 8-socket T5
sparc system. He also verified that the system boots with d63e2e1f3df9
reverted. Yinghai has some fixes, but they need a little more polishing
than we can do before v4.0.
Link: http://lkml.kernel.org/r/5514391F.2030300@oracle.com # report
Link: http://lkml.kernel.org/r/1427857069-6789-1-git-send-email-yinghai@kernel.org # patches
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
CC: stable@vger.kernel.org # v3.19+
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core
Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:
User visible changes:
- Teach 'perf record' about perf_event_attr.clockid (Peter Zijlstra)
- Improve 'perf sched replay' on high CPU core count machines (Yunlong Song)
- Consider PERF_RECORD_ events with cpumode == 0 in 'perf top', removing one
cause of long term memory usage buildup, i.e. not processing PERF_RECORD_EXIT
events (Arnaldo Carvalho de Melo)
- Add 'I' event modifier for perf_event_attr.exclude_idle bit (Jiri Olsa)
- Respect -i option in 'perf kmem' (Jiri Olsa)
Infrastructure changes:
- Honor operator priority in libtraceevent (Namhyung Kim)
- Merge all perf_event_attr print functions (Peter Zijlstra)
- Check kmaps access to make code more robust (Wang Nan)
- Fix inverted logic in perf_mmap__empty() (He Kuang)
- Fix ARM 32 'perf probe' building error (Wang Nan)
- Fix perf_event_attr tests (Jiri Olsa)
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
Add compatible string definitions and supported pin functions.
Signed-off-by: Ivan T. Ivanov <ivan.ivanov@linaro.org>
Acked-by: Bjorn Andersson <bjorn.andersson@sonymobile.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
|
|
Booting a v3.18 or newer Xen domU kernel with PCI devices passed through
results in an oops (this is a 32-bit 3.13.11 dom0 with a 64-bit 4.4.0
hypervisor and 32-bit domU):
BUG: unable to handle kernel paging request at 0030303e
IP: [<c06ed0e6>] acpi_ns_validate_handle+0x12/0x1a
Call Trace:
[<c06eda4d>] ? acpi_evaluate_object+0x31/0x1fc
[<c06b78e1>] ? pci_get_hp_params+0x111/0x4e0
[<c0407bc7>] ? xen_force_evtchn_callback+0x17/0x30
[<c04085fb>] ? xen_restore_fl_direct_reloc+0x4/0x4
[<c0699d34>] ? pci_device_add+0x24/0x450
Don't look for ACPI configuration information if ACPI has been disabled.
I don't think this is the best fix, because we can boot plain Linux (no
Xen) with "acpi=off", and we don't need this check in pci_get_hp_params().
There should be a better fix that would make Xen domU work the same way.
The domU kernel has ACPI support but it has no AML. There should be a way
to initialize the ACPI data structures so things fail gracefully rather
than oopsing. This is an interim fix to address the regression.
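The interim fix amounts to an early bail-out along these lines (a sketch,
assuming the guard sits at the top of pci_get_hp_params()):

  int pci_get_hp_params(struct pci_dev *dev, struct hotplug_params *hpp)
  {
          if (acpi_disabled)
                  return -ENODEV;         /* no ACPI namespace to consult */
          /* ... existing _HPP/_HPX evaluation ... */
  }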
Fixes: 6cd33649fa83 ("PCI: Add pci_configure_device() during enumeration")
Link: https://bugzilla.kernel.org/show_bug.cgi?id=96301
Reported-by: Michael D Labriola <mlabriol@gdeb.com>
Tested-by: Michael D Labriola <mlabriol@gdeb.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
CC: stable@vger.kernel.org # v3.18+
|
|
Add an enum_map file in the tracing directory to see what enums have been
saved to convert in the print fmt files.
As this requires the enum mapping to be persistent in memory, it is only
created if the new config option CONFIG_TRACE_ENUM_MAP_FILE is enabled.
This is for debugging and will increase the persistent memory footprint
of the kernel.
Link: http://lkml.kernel.org/r/20150403013802.220157513@goodmis.org
Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Tested-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
The enums used in tracepoints for __print_symbolic() do not have their
values shown in the tracepoint format files and this makes it difficult
for user space tools to convert the binary values to the strings they
are to represent.
Add TRACE_DEFINE_ENUM(x) macros to map the enum names to their values
and make the tracing output seen by user space tools more robust.
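Usage is as in this sketch; the enum and event are made up for illustration:

  enum foo_state { FOO_IDLE, FOO_BUSY };          /* illustrative enum */

  TRACE_DEFINE_ENUM(FOO_IDLE);                    /* export name -> value */
  TRACE_DEFINE_ENUM(FOO_BUSY);

  /* ...so that __print_symbolic() in TP_printk() resolves for user space: */
          TP_printk("state=%s",
                    __print_symbolic(__entry->state,
                                     { FOO_IDLE, "idle" },
                                     { FOO_BUSY, "busy" }))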
Link: http://lkml.kernel.org/r/20150403013802.220157513@goodmis.org
Cc: Dave Chinner <dchinner@redhat.com>
Cc: Jens Axboe <axboe@kernel.dk>
Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Tested-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
Add a userdata set extension and allow the user to attach arbitrary
data to set elements. This is intended to hold TLV encoded data like
comments or DNS annotations that have no meaning to the kernel.
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
Add a new "dynset" expression for dynamic set updates.
A new set op ->update() is added which, for non-existent elements,
invokes an initialization callback and inserts the new element.
For both new and existing elements the extension pointer is returned
to the caller to optionally perform timer updates or other actions.
Element removal is not supported so far; however, that seems to be a
rather exotic need and can be added later on.
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
Currently a set binding is assumed to be related to a lookup and, in
case of maps, a data load.
In order to use bindings for set updates, the loop detection checks
must be restricted to map operations only. Add a flags member to the
binding struct to hold the set "action" flags such as NFT_SET_MAP,
and perform loop detection based on these.
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
Use atomic operations for the element count to avoid races with async
updates.
To properly handle the transactional semantics during netlink updates,
deleted but not yet committed elements are accounted for separately and
are treated as being already removed. This means for the duration of
a netlink transaction, the limit might be exceeded by the amount of
elements deleted. Set implementations must be prepared to handle this.
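A sketch of the resulting insertion-time accounting (member and variable
names are illustrative):

  /* The count may legitimately exceed the limit by the number of elements
   * deleted in the same, not yet committed transaction. */
  if (set->size &&
      atomic_inc_return(&set->nelems) > set->size + deleted_in_transaction) {
          atomic_dec(&set->nelems);
          return -ENFILE;
  }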
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
The NFT_SET_TIMEOUT flag is ignored in nft_select_set_ops, which may
lead to selection of a set implementation that doesn't actually
support timeouts.
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
Enums used by tracepoints for __print_symbolic() are shown in the
tracepoint format files with just their names and not their values.
This makes it difficult for user space tools to know how to convert the
binary data into their string representations.
By adding the use of TRACE_DEFINE_ENUM(), the enum names will be mapped
to their values and shown in the tracing file system to let tools
convert the data as necessary.
Link: http://lkml.kernel.org/r/20150403013802.220157513@goodmis.org
Acked-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Tested-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
Replace occurrences of the PCI API with the appropriate calls to the DMA API.
A simplified version of the semantic patch that finds this problem is as
follows: (http://coccinelle.lip6.fr)
@deprecated@
idexpression id;
position p;
@@
(
pci_dma_supported@p ( id, ...)
|
pci_alloc_consistent@p ( id, ...)
)
@bad1@
idexpression id;
position deprecated.p;
@@
...when != &id->dev
when != pci_get_drvdata ( id )
when != pci_enable_device ( id )
(
pci_dma_supported@p ( id, ...)
|
pci_alloc_consistent@p ( id, ...)
)
@depends on !bad1@
idexpression id;
expression direction;
position deprecated.p;
@@
(
- pci_dma_supported@p ( id,
+ dma_supported ( &id->dev,
...
+ , GFP_ATOMIC
)
|
- pci_alloc_consistent@p ( id,
+ dma_alloc_coherent ( &id->dev,
...
+ , GFP_ATOMIC
)
)
Signed-off-by: Quentin Lambert <lambert.quentin@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
|
|
nf_bridge_info->mask is used for several things, for example to
remember if skb->pkt_type was set to OTHER_HOST.
For a bridge, OTHER_HOST is the expected case. For IP forwarding it's a
non-starter though -- routing expects PACKET_HOST.
Bridge netfilter thus changes OTHER_HOST to PACKET_HOST before hook
invocation and then undoes it after hook traversal.
This information is irrelevant outside of br_netfilter.
After this change, ->mask only contains flags that need to be
known outside of br_netfilter in the fast path.
A future patch changes the mask into a 2-bit state field in sk_buff, so that
we can remove the skb->nf_bridge pointer for good and consider all remaining
places that access nf_bridge info content as not-so-fast-path.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
->mask is a bit info field that mixes various use cases.
In particular, we have flags that are mutually exclusive, and flags that
are only used within br_netfilter while others need to be exposed to
other parts of the kernel.
Remove the BRNF_8021Q/PPPoE flags. They're mutually exclusive and only
needed within the br_netfilter context.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
Don't access skb->nf_bridge directly, this pointer will be removed soon.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
Avoid skb->nf_bridge accesses where possible.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
Right now we store this in the nf_bridge_info struct, accessible
via skb->nf_bridge. This patch prepares removal of this pointer from skb:
Instead of using skb->nf_bridge->x, we use helpers to obtain the in/out
device (or ifindexes).
Followup patches to netfilter will then allow nf_bridge_info to be
obtained by a call into the br_netfilter core, rather than keeping a
pointer to it in sk_buff.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
br_netfilter maintains an extra state, nf_bridge_info, which is attached
to skb via skb->nf_bridge pointer.
Amongst other things we use skb->nf_bridge->data to store the original
mac header for every processed skb.
This is required for ip refragmentation when using conntrack
on top of bridge, because ip_fragment doesn't copy it from original skb.
However, there is no longer any need to do this unconditionally.
Move this to the one place where it's needed -- when br_netfilter calls
ip_fragment().
Also switch to percpu storage for this, so we can handle fragmenting
without accessing nf_bridge metadata.
The only user left is neigh resolution when DNAT is detected, to hold
the original source mac address (neigh resolution builds a new mac header
using the bridge mac), so rename ->data and reduce its size to what's needed.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
Currently in xt_socket, we take advantage of early demuxed sockets
since commit 00028aa37098 ("netfilter: xt_socket: use IP early demux")
in order to avoid a second socket lookup in the fast path, but we
only make partial use of this:
We still unnecessarily parse headers, extract proto, {s,d}addr and
{s,d}ports from the skb data, access possible conntrack information,
etc., even though we were never calling into the socket lookup via
xt_socket_get_sock_{v4,v6}() on a skb->sk hit, meaning those cycles
can be spared.
After this patch, we only take the slower, manual lookup path when we
have a skb->sk miss, so the time to a match verdict for early demuxed
sockets will improve further, which might be interesting e.g. for use
cases such as the one mentioned in 681f130f39e1 ("netfilter: xt_socket:
add XT_SOCKET_NOWILDCARD flag").
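In other words, the fast path becomes a simple early check (a sketch; the
slow-path helper name is illustrative):

  struct sock *sk = skb->sk;

  if (!sk)
          sk = xt_socket_lookup_slow_v4(skb, par->in);    /* illustrative helper */
  /* header parsing, conntrack access etc. now only happen in the slow path */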
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
Currently, the driver uses handle_simple_irq for all IRQ types and hard-codes
the acknowledgement for the different IRQ types into the handler. It is
better to use the IRQ core as intended and let it handle the differences
between the various types of IRQ. For example the current system does
not work for threaded level triggered IRQs as these need to be masked
until the threaded handler has run.
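The direction of the change is to install the proper flow handler per
trigger type and let the core handle masking and acknowledgement, roughly
(a sketch, not the driver's exact code):

  if (type & IRQ_TYPE_LEVEL_MASK)
          irq_set_handler(irq, handle_level_irq);  /* core keeps it masked as needed */
  else
          irq_set_handler(irq, handle_edge_irq);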
Signed-off-by: Charles Keepax <ckeepax@opensource.wolfsonmicro.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
|
|
quotad periodically syncs in-memory quotas to the ondisk quota file
and sets the QDF_REFRESH flag so that a subsequent read of a synced
quota is re-read from disk.
gfs2_quota_lock() checks for this flag and sets a 'force' bit to
force a re-read from disk if requested. However, there is a race
condition here. It is possible for gfs2_quota_lock() to find the
QDF_REFRESH flag unset (i.e. force=0) and for quotad to come in
immediately afterwards, sync the relevant quota and set the QDF_REFRESH
flag. gfs2_quota_lock() resumes with force=0 and uses the stale
in-memory quota usage values, which results in miscalculations.
This patch fixes the race by moving the QDF_REFRESH flag check further
out in the gfs2_quota_lock() process, i.e. into do_glock(), under the
protection of the quota glock.
Signed-off-by: Abhi Das <adas@redhat.com>
Signed-off-by: Bob Peterson <rpeterso@redhat.com>
Acked-by: Steven Whitehouse <swhiteho@redhat.com>
|
|
The AES implementation still assumes that hw_desc[0] has a valid
key as long as no new key needs to be set; consequently it always
sets the AES key header for the first descriptor and puts data into
the second one (hw_desc[1]).
Change this to only update the key in the hardware when a new key is
to be set, and use the first descriptor for data otherwise.
Signed-off-by: Steffen Trumtrar <s.trumtrar@pengutronix.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
With commit 7e77bdebff5cb1e9876c561f69710b9ab8fa1f7e ("crypto: af_alg - fix
backlog handling") in place, the backlog works under all circumstances where
it previously failed, at least for the sahara driver. Use it.
Signed-off-by: Steffen Trumtrar <s.trumtrar@pengutronix.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The function crypto_alg_match returns an algorithm without taking
any references on it. This means that the algorithm can be freed
at any time, therefore all users of crypto_alg_match are buggy.
This patch fixes this by taking a reference count on the algorithm
to prevent such races.
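A sketch of the fixed lookup, assuming crypto_mod_get() returns the
algorithm with a reference held (or NULL if its module is going away); the
match() predicate stands in for the real comparison:

  struct crypto_alg *alg = NULL, *q;

  down_read(&crypto_alg_sem);
  list_for_each_entry(q, &crypto_alg_list, cra_list) {
          if (!match(q, p))
                  continue;
          alg = crypto_mod_get(q);
          break;
  }
  up_read(&crypto_alg_sem);

  return alg;     /* caller must drop the reference when done */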
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The output buffer is used for CPU access, so the API to use is
dma_sync_single_for_cpu, which invalidates the cache lines so that
the value is re-read from memory.
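That is, before the CPU reads the device-written result (variable names are
illustrative):

  /* The device just wrote the result via DMA: drop stale cache lines first. */
  dma_sync_single_for_cpu(dd->dev, dd->dma_addr_out, length, DMA_FROM_DEVICE);
  memcpy(req_result, dd->buf_out, length);        /* now reads fresh data */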
Signed-off-by: Leilei Zhao <leilei.zhao@atmel.com>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The input and output buffers are mapped for DMA transfer in the
Atmel AES driver. But they are also used by the CPU when the
requested crypt length is not bigger than the threshold value of 16.
The buffers will then be cached in cache lines when the CPU accesses
them. When DMA uses the buffers again, memory can happen to be
flushed from the cache while the DMA transfer is in progress.
So use the dma_sync_single_for_device and dma_sync_single_for_cpu APIs
around the DMA transfers to ensure DMA coherence, so that the CPU always
accesses the correct values. This fixes the issue that the encrypted result
periodically goes wrong when doing performance tests with OpenSSH.
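In principle the pairing looks like this (illustrative names again):

  /* CPU filled the input buffer: make it visible to the device. */
  dma_sync_single_for_device(dd->dev, dd->dma_addr_in, length, DMA_TO_DEVICE);

  /* ... start the DMA transfer and wait for completion ... */

  /* Device wrote the output buffer: hand it back to the CPU. */
  dma_sync_single_for_cpu(dd->dev, dd->dma_addr_out, length, DMA_FROM_DEVICE);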
Signed-off-by: Leilei Zhao <leilei.zhao@atmel.com>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The kernel will report "BUG: spinlock lockup suspected on CPU#0"
when CONFIG_DEBUG_SPINLOCK is enabled in the kernel config and the
spinlock is used for the first time. It's caused by an uninitialized
spinlock, so just initialize it in probe.
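The fix is the usual one-liner in probe (the lock member name is
illustrative):

  spin_lock_init(&dd->lock);      /* must run before the lock is first taken */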
Signed-off-by: Leilei Zhao <leilei.zhao@atmel.com>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The kernel will report "BUG: spinlock lockup suspected on CPU#0"
when CONFIG_DEBUG_SPINLOCK is enabled in the kernel config and the
spinlock is used for the first time. It's caused by an uninitialized
spinlock, so just initialize it in probe.
Signed-off-by: Leilei Zhao <leilei.zhao@atmel.com>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The maximum source and destination burst size is 16
according to the datasheet of the Atmel DMA controller, and the value
is also checked in the at_xdmac_csize() function of the Atmel
DMA driver. With this restriction, values beyond the maximum
are not processed by the DMA driver, so SHA384 and
SHA512 do not work and the program waits forever.
So change the max burst size in all cases to 16
in order to make SHA384 and SHA512 work and to stay consistent
with the DMA driver and the datasheet.
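With the dmaengine API this amounts to capping the slave configuration,
e.g. (a sketch; the real driver sets further fields):

  struct dma_slave_config cfg = {
          .src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
          .dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
          .src_maxburst   = 16,   /* datasheet maximum for the Atmel DMA controller */
          .dst_maxburst   = 16,
  };

  dmaengine_slave_config(chan, &cfg);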
Signed-off-by: Leilei Zhao <leilei.zhao@atmel.com>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The kernel will report "BUG: spinlock lockup suspected on CPU#0"
when CONFIG_DEBUG_SPINLOCK is enabled in the kernel config and the
spinlock is used for the first time. It's caused by an uninitialized
spinlock, so just initialize it in probe.
Signed-off-by: Leilei Zhao <leilei.zhao@atmel.com>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Having a zero-length sg doesn't mean it is the end of the sg list. This
case happens when calculating the HMAC of an IPsec packet.
Signed-off-by: Leilei Zhao <leilei.zhao@atmel.com>
Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
When a hash is requested on data bigger than the buffer allocated by the
SHA driver, the way DMA transfers are performed is quite strange:
the buffer is filled on each update request. When full, a DMA transfer
is done. On the next update request, another DMA transfer is done. Then we
wait for a full buffer (or the end of the data) to perform the next DMA
transfer. Such a situation sometimes leads, on SAMA5D4, to a case where the
DMA transfer is finished but the data-ready irq never comes. Moreover, the
hash was incorrect in this case.
With this patch, DMA transfers are only performed when the buffer is
full or when there is no more data. So it removes the transfer whose size
is equal to the update size after the full buffer transmission.
Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Signed-off-by: Leilei Zhao <leilei.zhao@atmel.com>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add new version of atmel-sha available with SAMA5D4 devices.
Signed-off-by: Leilei Zhao <leilei.zhao@atmel.com>
Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add new version of atmel-aes available with SAMA5D4 devices.
Signed-off-by: Leilei Zhao <leilei.zhao@atmel.com>
Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Reported by 0-day test infrastructure.
Fixes: ecaa1f6a0154 ("vfio-pci: Add VGA arbiter client")
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
|