Age | Commit message | Author |
|
Now we are ready to enable the RDMA FLUSH capability for RXE.
It can support the Global Visibility and Persistence placement types.
Link: https://lore.kernel.org/r/20221206130201.30986-11-lizhijian@fujitsu.com
Reviewed-by: Zhu Yanjun <zyjzyj2000@gmail.com>
Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
|
|
Similar to the RDMA and Atomic qp attributes enabled by default in CM, enable
the FLUSH attribute for supported devices. That gives applications built with
the rdma_create_ep and rdma_accept APIs the FLUSH qp attribute natively, so
that users can request FLUSH operations more easily.
Note that a FLUSH operation requires FLUSH support from the device (HCA), the
memory region (MR) and the QP at the same time, so it is safe to enable the
FLUSH qp attribute by default here.
The FLUSH attribute can be disabled via the modify_qp() interface.
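A minimal kernel-side sketch of turning the attribute off again through
modify_qp(); the helper name is hypothetical and error handling is omitted:

  #include <rdma/ib_verbs.h>

  /* Illustrative only: drop the FLUSH access flags that CM enabled by
   * default while keeping the other remote access rights. */
  static int drop_flush_access(struct ib_qp *qp)
  {
          struct ib_qp_attr attr = {
                  .qp_access_flags = IB_ACCESS_REMOTE_READ |
                                     IB_ACCESS_REMOTE_WRITE,
          };

          return ib_modify_qp(qp, &attr, IB_QP_ACCESS_FLAGS);
  }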
Link: https://lore.kernel.org/r/20221206130201.30986-10-lizhijian@fujitsu.com
Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
|
|
Per the IBA spec, a FLUSH is acknowledged with an RDMA READ response of
zero length. Use the IB_WC_FLUSH (aka IB_UVERBS_WC_FLUSH) code to tell
userspace about a FLUSH completion.
Link: https://lore.kernel.org/r/20221206130201.30986-9-lizhijian@fujitsu.com
Reviewed-by: Zhu Yanjun <zyjzyj2000@gmail.com>
Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
|
|
Only requested placement types that are also registered in the destination
memory region are acceptable; otherwise, the responder replies with a NAK
"Remote Access Error" when it finds a placement type violation.
Data is persisted via arch_wb_cache_pmem(), which may be architecture
specific.
This commit also adds 2 helpers to update qp.resp from the incoming packet.
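As a rough sketch of the placement-type check; the FETH accessor, the
IB_FLUSH_* placement-type flags and the error state are written from memory
and may not match the patch exactly:

  /* Reject a FLUSH whose requested placement type was not granted when
   * the destination MR was registered. */
  u32 plt = feth_plt(pkt);                /* assumed FETH accessor */

  if (((plt & IB_FLUSH_PERSISTENT) &&
       !(mr->access & IB_ACCESS_FLUSH_PERSISTENT)) ||
      ((plt & IB_FLUSH_GLOBAL) &&
       !(mr->access & IB_ACCESS_FLUSH_GLOBAL)))
          return RESPST_ERR_RKEY_VIOLATION;  /* -> NAK Remote Access Error */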
Link: https://lore.kernel.org/r/20221206130201.30986-8-lizhijian@fujitsu.com
Reviewed-by: Zhu Yanjun <zyjzyj2000@gmail.com>
Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
|
|
Implement FLUSH request operation in the requester.
Link: https://lore.kernel.org/r/20221206130201.30986-7-lizhijian@fujitsu.com
Reviewed-by: Zhu Yanjun <zyjzyj2000@gmail.com>
Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
|
|
Extend the rxe opcode tables, headers, helpers and constants to support
flush operations.
Refer to IBA A19.4.1 for more details on the FETH definition.
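For orientation, the FETH accessors amount to something like the following;
the exact mask values are an assumption and should be checked against
rxe_hdr.h:

  /* FETH: 32-bit FLUSH Extended Transport Header (IBA A19.4.1).
   * The low bits carry the placement type (PLT), the next field the
   * selectivity level (SEL); everything else is reserved. */
  #define FETH_PLT_MASK   0x0000000fU     /* assumed layout */
  #define FETH_SEL_MASK   0x00000030U
  #define FETH_SEL_SHIFT  4

  static inline u32 feth_plt(u32 feth)
  {
          return feth & FETH_PLT_MASK;
  }

  static inline u32 feth_sel(u32 feth)
  {
          return (feth & FETH_SEL_MASK) >> FETH_SEL_SHIFT;
  }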
Link: https://lore.kernel.org/r/20221206130201.30986-6-lizhijian@fujitsu.com
Reviewed-by: Zhu Yanjun <zyjzyj2000@gmail.com>
Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
|
|
A memory region can support at most 2 flush access flags:
IB_ACCESS_FLUSH_PERSISTENT and IB_ACCESS_FLUSH_GLOBAL.
However, we only allow users to register the persistent flush flag on a pmem
MR, which is able to persist data across power cycles.
Registering a persistent access flag on a non-pmem MR is therefore rejected.
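A minimal sketch of the registration-time check; the pmem detection helper
is hypothetical:

  /* In the MR registration path: persistent flush access is only
   * meaningful for persistent memory. */
  if ((access & IB_ACCESS_FLUSH_PERSISTENT) && !rxe_mr_is_pmem(mr))
          return -EINVAL;         /* rxe_mr_is_pmem() is illustrative */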
Link: https://lore.kernel.org/r/20221206130201.30986-5-lizhijian@fujitsu.com
CC: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
|
|
The code in check_length() in rxe_resp.c is incorrect: it compares
pkt->opcode, an 8 bit value, against various mask bits which are all higher
than 256, so nothing is ever reported.
This patch rewrites the test to compare against pkt->mask, which is
correct. However, this now exposes another error: for UD send packets the
value of the pmtu cannot be determined from qp->mtu. All that is required
in that case is to later check whether the payload fits into the posted
receive buffer.
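In short (illustrative, using RXE_START_MASK as an example of the property
bits):

  /* Broken: pkt->opcode is an 8-bit opcode; RXE_START_MASK and friends
   * live above bit 8, so this test can never be true:
   *
   *        if (pkt->opcode & RXE_START_MASK)
   *
   * Fixed: the per-opcode property bits are carried in pkt->mask:
   *
   *        if (pkt->mask & RXE_START_MASK)
   */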
Fixes: 837a55847ead ("RDMA/rxe: Implement packet length validation on responder")
Link: https://lore.kernel.org/r/20221208210945.28607-1-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Reviewed-by: Daisuke Matsuda <matsuda-daisuke@fujitsu.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
|
|
Some special polaris 10 chips overlap with the polaris11
DID range. Handle this properly in the driver.
v2: use local flags for other function calls.
Acked-by: Luben Tuikov <luben.tuikov@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
|
|
Only apply the static threshold for Stoney and Carrizo.
This hardware has certain requirements that don't allow
mixing of GTT and VRAM. Newer asics do not have these
requirements so we should be able to be more flexible
with where buffers end up.
Bug: https://gitlab.freedesktop.org/drm/amd/-/issues/2270
Bug: https://gitlab.freedesktop.org/drm/amd/-/issues/2291
Bug: https://gitlab.freedesktop.org/drm/amd/-/issues/2255
Acked-by: Luben Tuikov <luben.tuikov@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
|
|
There is a spelling mistake in the struct field dram_clk_chanage. Fix it.
Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
Signed-off-by: Hamza Mahfooz <hamza.mahfooz@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
|
|
An earlier commit introduced a mechanism to parse the context image to
find the OA context control offset. This resulted in a NULL pointer
dereference (NPD) on Haswell when a gem_context was passed in the
i915_perf_open_ioctl params. Haswell does not support logical ring
contexts, so ensure that the context image is parsed only for platforms
with logical ring contexts, and also validate lrc_reg_state.
v2: Fix build failure
v3: Fix checkpatch error
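Conceptually the guard amounts to the following (illustrative, not the
exact diff; the offset helper name is hypothetical):

  /* Only parse the context image on platforms that actually have
   * logical ring contexts, and only once the register state exists. */
  if (HAS_LOGICAL_RING_CONTEXTS(i915) && ce->lrc_reg_state)
          offset = find_oa_ctx_ctrl_offset(ce);   /* hypothetical helper */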
Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/7432
Fixes: a5c3a3cbf029 ("drm/i915/perf: Determine gen12 oa ctx offset at runtime")
Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20221123235342.713068-1-umesh.nerlige.ramappa@intel.com
(cherry picked from commit 95c713d722017b26e301303713d638e0b95b1f68)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
|
|
intel_uncore_forcewake_put__locked() is used to release a reference.
Fixes: a6111f7b6604 ("drm/i915: Reduce locking in execlist command submission")
Signed-off-by: Miaoqian Lin <linmq006@gmail.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20221207112909.2655251-1-linmq006@gmail.com
(cherry picked from commit 955f4d7176eb154db587ae162ec2b392dc8d5f27)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
|
|
The kerneldoc function name was not updated when this function was
converted to a non-fw form.
Fixes: 41f425adbce9 ("drm/i915/gt: Manage uncore->lock while waiting on MCR register")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Balasubramani Vivekanandan <balasubramani.vivekanandan@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20221128233014.4000136-2-matthew.d.roper@intel.com
(cherry picked from commit 03b713d029bd17a1ed426590609af79843db95e2)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
|
|
The commit 686d348476ee ("RDMA/rxe: Remove unnecessary mr testing") causes
a kernel crash. If the responder gets a zero-byte RDMA Read request,
qp->resp.mr is not set in check_rkey() (see IBA C9-88). The mr is NULL in
this case, and a NULL pointer dereference occurs as shown below.
BUG: kernel NULL pointer dereference, address: 0000000000000010
#PF: supervisor write access in kernel mode
#PF: error_code(0x0002) - not-present page
PGD 0 P4D 0
Oops: 0002 [#1] PREEMPT SMP PTI
CPU: 2 PID: 3622 Comm: python3 Kdump: loaded Not tainted 6.1.0-rc3+ #34
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1.1 04/01/2014
RIP: 0010:__rxe_put+0xc/0x60 [rdma_rxe]
Code: cc cc cc 31 f6 e8 64 36 1b d3 41 b8 01 00 00 00 44 89 c0 c3 cc cc cc cc 41 89 c0 eb c1 90 0f 1f 44 00 00 41 54 b8 ff ff ff ff <f0> 0f c1 47 10 83 f8 01 74 11 45 31 e4 85 c0 7e 20 44 89 e0 41 5c
RSP: 0018:ffffb27bc012ce78 EFLAGS: 00010246
RAX: 00000000ffffffff RBX: ffff9790857b0580 RCX: 0000000000000000
RDX: ffff979080fe145a RSI: 000055560e3e0000 RDI: 0000000000000000
RBP: ffff97909c7dd800 R08: 0000000000000001 R09: e7ce43d97f7bed0f
R10: ffff97908b29c300 R11: 0000000000000000 R12: 0000000000000000
R13: 0000000000000000 R14: ffff97908b29c300 R15: 0000000000000000
FS: 00007f276f7bd740(0000) GS:ffff9792b5c80000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000010 CR3: 0000000114230002 CR4: 0000000000060ee0
Call Trace:
<IRQ>
read_reply+0xda/0x310 [rdma_rxe]
rxe_responder+0x82d/0xe50 [rdma_rxe]
do_task+0x84/0x170 [rdma_rxe]
tasklet_action_common.constprop.0+0xa7/0x120
__do_softirq+0xcb/0x2ac
do_softirq+0x63/0x90
</IRQ>
Support a NULL mr during read_reply().
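A hedged sketch of what tolerating the NULL mr means in read_reply(); shown
only for shape, the real fix may differ:

  /* Zero-byte RDMA Read: check_rkey() never set qp->resp.mr, so there
   * is nothing to copy from and no MR reference to drop. */
  if (mr)
          err = rxe_mr_copy(mr, iova, payload_addr(&ack_pkt),
                            payload, RXE_FROM_MR_OBJ);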
Fixes: 686d348476ee ("RDMA/rxe: Remove unnecessary mr testing")
Fixes: b5f9a01fae42 ("RDMA/rxe: Fix mr leak in RESPST_ERR_RNR")
Link: https://lore.kernel.org/r/20221209045926.531689-1-matsuda-daisuke@fujitsu.com
Link: https://lore.kernel.org/r/20221202145713.13152-1-lizhijian@fujitsu.com
Signed-off-by: Daisuke Matsuda <matsuda-daisuke@fujitsu.com>
Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
|
|
For dependencies in the following patches.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
|
|
Eric points out this is wrong for the rare case of someone using
allow_unsafe_interrupts on ARM. We always have to set up the MSI window in
the domain if the iommu driver asks for it.
Move the iommu_get_msi_cookie() setup to the top of the function and
always do it, regardless of the security mode. Add checks to
iommufd_device_setup_msi() to ensure the driver is not doing something
incomprehensible. No current driver will set both a HW and SW MSI window,
or have more than one SW MSI window.
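Roughly, the reordering means doing the cookie setup unconditionally first,
along these lines (field names illustrative):

  /* Always install the SW MSI cookie when the iommu driver asked for
   * one, before any of the security-mode decisions. */
  if (sw_msi_start != PHYS_ADDR_MAX && !hwpt->msi_cookie) {
          rc = iommu_get_msi_cookie(hwpt->domain, sw_msi_start);
          if (rc)
                  return rc;
          hwpt->msi_cookie = true;
  }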
Fixes: e8d57210035b ("iommufd: Add kAPI toward external drivers for physical devices")
Link: https://lore.kernel.org/r/3-v1-0362a1a1c034+98-iommufd_fixes1_jgg@nvidia.com
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reported-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
|
|
Correct a few items noticed late in review:
- We should assert that the math in batch_clear_carry() doesn't underflow
- user->locked should be -1, not 0, since we just did mmput()
- npages should not have been recalculated, it already has that value
No functional change.
Fixes: 8d160cd4d506 ("iommufd: Algorithms for PFN storage")
Link: https://lore.kernel.org/r/2-v1-0362a1a1c034+98-iommufd_fixes1_jgg@nvidia.com
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reported-by: Binbin Wu <binbin.wu@linux.intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
|
|
Repair some typos in comments that were noticed late in the review
cycle.
Fixes: f394576eb11d ("iommufd: PFN handling for iopt_pages")
Link: https://lore.kernel.org/r/1-v1-0362a1a1c034+98-iommufd_fixes1_jgg@nvidia.com
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reported-by: Binbin Wu <binbin.wu@linux.intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media
Pull media fix from Mauro Carvalho Chehab:
"A v4l-core fix related to validating DV timings related to video
blanking values"
* tag 'media/v6.1-4' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media:
media: v4l2-dv-timings.c: fix too strict blanking sanity checks
|
|
Use the correct struct name for the kernel-doc notation to prevent
a kernel-doc warning:
clk-nomadik.c:148: warning: expecting prototype for struct clk_pll1. Prototype was for struct clk_pll instead
Fixes: ef6eb322ce57 ("clk: nomadik: implement the Nomadik clocks properly")
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Reported-by: kernel test robot <lkp@intel.com>
Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
Cc: Linus Walleij <linus.walleij@linaro.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: Michael Turquette <mturquette@baylibre.com>
Cc: Stephen Boyd <sboyd@kernel.org>
Cc: linux-clk@vger.kernel.org
Link: https://lore.kernel.org/r/20221209002016.14776-1-rdunlap@infradead.org
Signed-off-by: Stephen Boyd <sboyd@kernel.org>
|
|
Provide a public callback handle_mask_sync() that drivers can use when
they have more complex IRQ masking logic. The default implementation is
regmap_irq_handle_mask_sync(), used if the chip doesn't provide its own
callback.
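A sketch of how a chip driver hooks this in; the callback prototype below
is taken from memory of this change and should be checked against
include/linux/regmap.h:

  /* Hypothetical driver with IRQ masking too complex for the generic
   * mask/unmask register writes. */
  static int foo_handle_mask_sync(struct regmap *map, int index,
                                  unsigned int mask_buf_def,
                                  unsigned int mask_buf,
                                  void *irq_drv_data)
  {
          /* push mask_buf to the device using whatever sequence the
           * hardware needs, then return 0 on success */
          return 0;
  }

  static const struct regmap_irq_chip foo_irq_chip = {
          .name             = "foo",
          .handle_mask_sync = foo_handle_mask_sync,
          /* ...usual regmap_irq_chip fields... */
  };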
Cc: Mark Brown <broonie@kernel.org>
Signed-off-by: William Breathitt Gray <william.gray@linaro.org>
Link: https://lore.kernel.org/r/e083474b3d467a86e6cb53da8072de4515bd6276.1669100542.git.william.gray@linaro.org
Signed-off-by: Mark Brown <broonie@kernel.org>
|
|
If a vp allocation fails in qlcnic_sriov_init(), all previously allocated
vps need to be freed.
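The shape of the fix is the usual partial-allocation unwind (sketch; field
names are illustrative rather than the exact qlcnic ones):

  for (i = 0; i < num_vfs; i++) {
          vp = kzalloc(sizeof(*vp), GFP_KERNEL);
          if (!vp)
                  goto err_free_vps;
          sriov->vf_info[i].vp = vp;
  }
  ...
  err_free_vps:
          while (i--)
                  kfree(sriov->vf_info[i].vp);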
Fixes: f197a7aa6288 ("qlcnic: VF-PF communication channel implementation")
Signed-off-by: Yuan Can <yuancan@huawei.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
bitmap_free() should be called to free priv->af_xdp_zc_qps when
create_singlethread_workqueue() fails, otherwise there will be a memory
leak; add the error path error_wq_init to fix it.
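i.e. the probe error path now unwinds the bitmap as well (sketch, not the
exact driver code):

  priv->af_xdp_zc_qps = bitmap_zalloc(MTL_MAX_TX_QUEUES, GFP_KERNEL);
  if (!priv->af_xdp_zc_qps)
          return -ENOMEM;

  priv->wq = create_singlethread_workqueue("stmmac_wq");
  if (!priv->wq) {
          ret = -ENOMEM;
          goto error_wq_init;
  }
  ...
  error_wq_init:
          bitmap_free(priv->af_xdp_zc_qps);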
Fixes: bba2556efad6 ("net: stmmac: Enable RX via AF_XDP zero-copy")
Signed-off-by: Gaosheng Cui <cuigaosheng1@huawei.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The skb allocated by stmmac_test_get_arp_skb() hasn't been released in
some error handling cases, which will lead to a memory leak. Fix this up
by adding kfree_skb() to release the skb.
Compile tested only.
Fixes: 5e3fb0a6e2b3 ("net: stmmac: selftests: Implement the ARP Offload test")
Signed-off-by: Zhang Changzhong <zhangchangzhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
It turns out we can just modify the newer STM32 CRYP driver
to be used with the Ux500, and now that we have done that, delete
the old and sparsely maintained Ux500 CRYP driver.
Cc: Lionel Debieve <lionel.debieve@foss.st.com>
Cc: Maxime Coquelin <mcoquelin.stm32@gmail.com>
Cc: Alexandre Torgue <alexandre.torgue@foss.st.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This adds a few small quirks to handle the differences between
the STM32 and Ux500 cryp blocks. The following differences
are handled with special bool switch bits in the capabilities:
- The main difference is that some registers are removed, so we
add register offsets for all registers in the
per-variant data. Then we assign the right offsets for Ux500
vs the STM32 variants.
- The Ux500 does not support the aeads algorithms; gcm(aes)
and ccm(aes). Avoid registering them when running on Ux500.
- The Ux500 has a special "linear" key format and does some
elaborate bit swizzling of the key bits before writing them
into the key registers. This is written as an "application
note" inside the DB8500 design specification, and seems to
be the result of some mishap when assigning the data lines
to register bits. (STM32 has clearly fixed this.)
- The Ux500 does not have the KP "key prepare" bit in the
CR register. Instead, we need to set the KSE bit ("key schedule
encryption"), which does the same thing but sits at bit 11 rather
than being a special "algorithm type" as on STM32. The algorithm
must however be specified
as AES ECB while doing this.
- The Ux500 cannot just read out the IV registers; we need to
set the KEYRDEN "key read enable" bit, as this protects
not just the key but also the IV from being read out.
Enable this bit before reading out the IV and disable it
afterwards.
Cc: Maxime Coquelin <mcoquelin.stm32@gmail.com>
Cc: Alexandre Torgue <alexandre.torgue@foss.st.com>
Acked-by: Lionel Debieve <lionel.debieve@foss.st.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The Ux500 cryp and hash drivers drive older versions of the
hardware managed by the stm32 driver.
Instead of trying to improve the Ux500 cryp and hash drivers,
start to switch over to the modern and more well-maintained
STM32 drivers.
Cc: Maxime Coquelin <mcoquelin.stm32@gmail.com>
Cc: Alexandre Torgue <alexandre.torgue@foss.st.com>
Acked-by: Lionel Debieve <lionel.debieve@foss.st.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
for_each_pci_dev() is implemented by pci_get_device(). The comment of
pci_get_device() says that it will increase the reference count for the
returned pci_dev and also decrease the reference count for the input
pci_dev @from if it is not NULL.
If we break out of the for_each_pci_dev() loop with pdev not NULL, we need
to call pci_dev_put() to decrease the reference count. Add a new struct
'amd_geode_priv' to record the pointer to the pci_dev and the membase, and
then add the missing pci_dev_put() for the normal and error paths.
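The underlying pattern, independent of this driver:

  struct pci_dev *pdev = NULL;

  for_each_pci_dev(pdev) {
          if (is_wanted_device(pdev))     /* hypothetical predicate */
                  break;                  /* pdev keeps its reference */
  }
  if (!pdev)
          return -ENODEV;

  /* ...use pdev...; on both error and normal teardown: */
  pci_dev_put(pdev);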
Fixes: ef5d862734b8 ("[PATCH] Add Geode HW RNG driver")
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
for_each_pci_dev() is implemented by pci_get_device(). The comment of
pci_get_device() says that it will increase the reference count for the
returned pci_dev and also decrease the reference count for the input
pci_dev @from if it is not NULL.
If we break out of the for_each_pci_dev() loop with pdev not NULL, we need
to call pci_dev_put() to decrease the reference count. Add the missing
pci_dev_put() for the normal and error paths.
Fixes: 96d63c0297cc ("[PATCH] Add AMD HW RNG driver")
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This driver has been implicitly relying on kmalloc alignment
to be sufficient for DMA. This may no longer be the case with
upcoming arm64 changes.
This patch changes it to explicitly request DMA alignment from
the Crypto API.
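The pattern, in rough form; the DMA-alignment helpers named here
(crypto_dma_padding(), crypto_tfm_ctx_dma()) are an assumption about the
API this series builds on, so treat it as a sketch:

  /* Pad the tfm context so a DMA-aligned region can be carved out of
   * it, then use the DMA-aware accessor instead of crypto_tfm_ctx(). */
  alg->cra_ctxsize = sizeof(struct drv_ctx) + crypto_dma_padding();

  struct drv_ctx *ctx = crypto_tfm_ctx_dma(tfm);  /* in init/setkey */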
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This driver has been implicitly relying on kmalloc alignment
to be sufficient for DMA. This may no longer be the case with
upcoming arm64 changes.
This patch changes it to explicitly request DMA alignment from
the Crypto API.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This driver has been implicitly relying on kmalloc alignment
to be sufficient for DMA. This may no longer be the case with
upcoming arm64 changes.
This patch changes it to explicitly request DMA alignment from
the Crypto API.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This driver has been implicitly relying on kmalloc alignment
to be sufficient for DMA. This may no longer be the case with
upcoming arm64 changes.
This patch changes it to explicitly request DMA alignment from
the Crypto API.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This driver has been implicitly relying on kmalloc alignment
to be sufficient for DMA. This may no longer be the case with
upcoming arm64 changes.
This patch changes it to explicitly request DMA alignment from
the Crypto API.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This driver has been implicitly relying on kmalloc alignment
to be sufficient for DMA. This may no longer be the case with
upcoming arm64 changes.
This patch changes it to explicitly request DMA alignment from
the Crypto API.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This driver has been implicitly relying on kmalloc alignment
to be sufficient for DMA. This may no longer be the case with
upcoming arm64 changes.
This patch changes it to explicitly request DMA alignment from
the Crypto API.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This driver has been implicitly relying on kmalloc alignment
to be sufficient for DMA. This may no longer be the case with
upcoming arm64 changes.
This patch changes it to explicitly request DMA alignment from
the Crypto API.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This driver has been implicitly relying on kmalloc alignment
to be sufficient for DMA. This may no longer be the case with
upcoming arm64 changes.
This patch changes it to explicitly request DMA alignment from
the Crypto API.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This driver has been implicitly relying on kmalloc alignment
to be sufficient for DMA. This may no longer be the case with
upcoming arm64 changes.
This patch changes it to explicitly request DMA alignment from
the Crypto API.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Smatch reports the following warning:
drivers/crypto/img-hash.c:366 img_hash_dma_task() warn: variable
dereferenced before check 'hdev->req'
The variable should only be dereferenced after the 'hdev->req' check;
fix it.
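i.e. test before use, roughly:

  if (!hdev->req)
          return;

  /* only now is it safe to look at hdev->req (e.g. hdev->req->nbytes) */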
Fixes: d358f1abbf71 ("crypto: img-hash - Add Imagination Technologies hw hash accelerator")
Fixes: 10badea259fa ("crypto: img-hash - Fix null pointer exception")
Signed-off-by: Gaosheng Cui <cuigaosheng1@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch fixes the sparse warning about arrays of flexible
structures by removing an unnecessary use of them in struct
__crypto_ctx.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The acomp API allows sending requests with a NULL destination buffer. In
this case, the algorithm implementation needs to allocate the
destination scatter list, perform the operation and return the buffer to
the user. For decompression, data is likely to expand and be bigger than
the allocated buffer.
This implements a re-submission mechanism for decompression requests
that is triggered if the destination buffer, allocated by the driver,
is not sufficiently big to store the output from decompression.
If an overflow is detected when processing the callback for a
decompression request with a NULL destination buffer, a workqueue is
scheduled. This allocates a new scatter list of size CRYPTO_ACOMP_DST_MAX,
now 128KB, creates a new firmware scatter list and resubmits the job to
the hardware accelerator.
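Schematically, the completion path now does something like this (names are
illustrative, not the actual QAT symbols):

  /* Decompression request submitted with a NULL dst: if the buffer the
   * driver allocated overflowed, requeue the job with a bigger
   * destination instead of failing it. */
  if (status == -EOVERFLOW && !areq->dst) {
          INIT_WORK(&req->resubmit, qat_comp_resubmit);   /* hypothetical */
          queue_work(system_long_wq, &req->resubmit);
          return;
  }
  acomp_request_complete(areq, status);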
Suggested-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Enable deflate for QAT GEN4 devices.
This adds
(1) logic to create configuration entries at probe time for the
compression instances for QAT GEN4 devices;
(2) the implementation of QAT GEN4 specific compression operations,
required since the creation of the compression request template is
different between GEN2 and GEN4; and
(3) updates to the firmware API related to compression for GEN4.
The implementation configures the device to produce data compressed
dynamically, optimized for throughput over compression ratio.
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Wojciech Ziemba <wojciech.ziemba@intel.com>
Reviewed-by: Adam Guerin <adam.guerin@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add infrastructure for implementing the acomp APIs in the QAT driver and
expose the deflate algorithm for QAT GEN2 devices.
This adds
(1) the compression service which includes logic to create, allocate
and handle compression instances;
(2) logic to create configuration entries at probe time for the
compression instances;
(3) updates to the firmware API for allowing the compression service;
and
(4) a back-end for deflate that implements the acomp api for QAT GEN2
devices.
The implementation configures the device to produce data compressed
statically, optimized for throughput over compression ratio.
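For reference, the general shape of a deflate acomp backend registration
(a generic sketch, not the QAT code itself; callback names are
hypothetical):

  static struct acomp_alg drv_deflate = {
          .compress   = drv_comp_compress,
          .decompress = drv_comp_decompress,
          .base = {
                  .cra_name        = "deflate",
                  .cra_driver_name = "qat_deflate",
                  .cra_priority    = 4001,
                  .cra_module      = THIS_MODULE,
          },
  };

  ret = crypto_register_acomp(&drv_deflate);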
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Wojciech Ziemba <wojciech.ziemba@intel.com>
Reviewed-by: Adam Guerin <adam.guerin@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Rename qat_crypto_dev_config() to adf_gen2_dev_config() and relocate it
to the newly created file adf_gen2_config.c.
This function is specific to QAT GEN2 devices and will be used also to
configure the compression service.
In addition, change the drivers to use the dev_config() in the hardware
data structure (which for GEN2 devices now points to
adf_gen2_dev_config()), for consistency.
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Wojciech Ziemba <wojciech.ziemba@intel.com>
Reviewed-by: Adam Guerin <adam.guerin@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Move qat_algs_alloc_flags() from qat_crypto.h to qat_bl.h as this will
be used also by the compression logic.
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Wojciech Ziemba <wojciech.ziemba@intel.com>
Reviewed-by: Adam Guerin <adam.guerin@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Move the structures qat_instance_backlog and qat_alg_req from
qat_crypto.h to qat_algs_send.h since they are not unique to crypto.
Both structures will be used by the compression service to support
requests with the CRYPTO_TFM_REQ_MAY_BACKLOG flag set.
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Wojciech Ziemba <wojciech.ziemba@intel.com>
Reviewed-by: Adam Guerin <adam.guerin@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The compression service requires an additional pre-allocated buffer for
each destination scatter list.
Extend the function qat_alg_sgl_to_bufl() to take an additional
structure that contains the dma address and the size of the extra
buffer, which will be appended to the destination FW SGL.
The logic that unmaps buffers in qat_alg_free_bufl() has been changed to
start unmapping from buffer 0, instead of skipping the initial
num_buff - num_mapped_bufs buffers, as that functionality was not used in
the code.
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Wojciech Ziemba <wojciech.ziemba@intel.com>
Reviewed-by: Adam Guerin <adam.guerin@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The structure qat_crypto_request_buffs, which contains the source and
destination buffer lists and the corresponding sizes and DMA addresses, is
also required by the compression service.
Rename it as qat_request_buffs and move it to qat_bl.h.
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Wojciech Ziemba <wojciech.ziemba@intel.com>
Reviewed-by: Adam Guerin <adam.guerin@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|