|
Commit f6aa5beb45be ("serial: 8250: Fix clearing FIFOs in RS485 mode
again") makes a change to FIFO clearing code which its commit message
suggests was intended to be specific to use with RS485 mode, however:
1) The change made does not just affect __do_stop_tx_rs485(), it also
affects other uses of serial8250_clear_fifos() including paths for
starting up, shutting down or auto-configuring a port regardless of
whether it's an RS485 port or not.
2) It makes the assumption that resetting the FIFOs is a no-op when
FIFOs are disabled, and as such it checks for this case & explicitly
avoids setting the FIFO reset bits when the FIFO enable bit is
clear. A reading of the PC16550D manual would suggest that this is
OK since the FIFO should automatically be reset if it is later
enabled, but we support many 16550-compatible devices and have never
required this auto-reset behaviour for at least the whole git era.
Starting to rely on it now seems risky, offers no benefit, and
indeed breaks at least the Ingenic JZ4780's UARTs, which read
garbage when the RX FIFO is enabled if we don't explicitly reset it.
3) By only resetting the FIFOs if they're enabled, the behaviour of
serial8250_do_startup() during boot now depends on what the value of
FCR is before the 8250 driver is probed. This in itself seems
questionable and leaves us with FCR=0 & no FIFO reset if the UART
was used by 8250_early, otherwise it depends upon what the
bootloader left behind.
4) Although the naming of serial8250_clear_fifos() may be unclear, it
is clear that callers of it expect that it will disable FIFOs. Both
serial8250_do_startup() & serial8250_do_shutdown() contain comments
to that effect, and other callers explicitly re-enable the FIFOs
after calling serial8250_clear_fifos(). The premise of that patch,
that disabling the FIFOs is incorrect, therefore seems wrong.
For these reasons, this reverts commit f6aa5beb45be ("serial: 8250: Fix
clearing FIFOs in RS485 mode again").
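For reference, the behaviour restored by this revert is the classic
clear-FIFOs sequence (shown here as a sketch of the long-standing
implementation): enable the FIFOs, reset both of them, then disable
them again, leaving callers to re-enable FIFOs explicitly.
static void serial8250_clear_fifos(struct uart_8250_port *p)
{
	if (p->capabilities & UART_CAP_FIFO) {
		/* Enable the FIFO, reset both FIFOs, then disable again. */
		serial_out(p, UART_FCR, UART_FCR_ENABLE_FIFO);
		serial_out(p, UART_FCR, UART_FCR_ENABLE_FIFO |
			   UART_FCR_CLEAR_RCVR | UART_FCR_CLEAR_XMIT);
		serial_out(p, UART_FCR, 0);
	}
}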
Signed-off-by: Paul Burton <paul.burton@mips.com>
Fixes: f6aa5beb45be ("serial: 8250: Fix clearing FIFOs in RS485 mode again")
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Daniel Jedrychowski <avistel@gmail.com>
Cc: Marek Vasut <marex@denx.de>
Cc: linux-mips@vger.kernel.org
Cc: linux-serial@vger.kernel.org
Cc: stable <stable@vger.kernel.org> # 4.10+
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
As commented in the struct's definition, there shouldn't be anything
underneath its 'priv[0]' member, as that would break some macros.
The patch converts broken_suspend into a bit-field and relocates it
next to the rest of the bit-fields.
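A rough sketch of the resulting layout (all other members elided):
struct xhci_hcd {
	/* ... */
	/* grouped with the other bit-fields */
	unsigned		broken_suspend:1;
	/* ... */
	/* platform-specific data -- must come last */
	unsigned long		priv[0] __aligned(sizeof(s64));
};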
Fixes: a7d57abcc8a5 ("xhci: workaround CSS timeout on AMD SNPS 3.0 xHC")
Reported-by: Oliver Neukum <oneukum@suse.com>
Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Acked-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Old firmware versions don't support this command. Sending it
to any firmware before -41.ucode will crash the firmware.
This fixes https://bugzilla.kernel.org/show_bug.cgi?id=201975
Fixes: 66e839030fd6 ("iwlwifi: fix wrong WGDS_WIFI_DATA_SIZE")
CC: <stable@vger.kernel.org> #4.19+
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
|
|
The opcodes used by the controller when doing batched page prog should
be written to NFC_REG_WCMD_SET, not NFC_REG_RCMD_SET. Luckily, the
default NFC_REG_WCMD_SET value matches the one we set in the driver,
which explains why we didn't notice the problem.
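A sketch of the resulting write in the DMA page-program path (assuming
the batched-prog opcodes are the usual RNDIN/PAGEPROG pair used by the
driver):
	/* Program opcodes belong in the write command set register. */
	writel((NAND_CMD_RNDIN << 8) | NAND_CMD_PAGEPROG,
	       nfc->regs + NFC_REG_WCMD_SET);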
Fixes: 614049a8d904 ("mtd: nand: sunxi: add support for DMA assisted operations")
Signed-off-by: Boris Brezillon <boris.brezillon@bootlin.com>
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
|
|
Fix the 'defined but not used' compiler warning for the act8945a_suspend()
function when CONFIG_PM_SLEEP is not defined.
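One conventional way to silence this class of warning, shown as a sketch
rather than the exact hunk, is to mark the handler __maybe_unused so the
compiler drops it quietly when CONFIG_PM_SLEEP is not set:
static int __maybe_unused act8945a_suspend(struct device *dev)
{
	/* ... existing suspend logic, unchanged ... */
	return 0;
}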
Fixes: b5ebba46e694 ("regulator: act8945a-regulator: add shutdown function")
Signed-off-by: Claudiu Beznea <claudiu.beznea@microchip.com>
Reported-by: Andrei Stefanescu <andrei.stefanescu@microchip.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
|
|
The initial commit of the set_ramp_delay feature was missing an assignment
which should have populated the slew_rate table for the dcdc2 regulator.
Add it.
Fixes: d29f54df8b16 ("regulator: axp20x: add support for set_ramp_delay for AXP209")
Signed-off-by: Priit Laes <plaes@plaes.org>
Signed-off-by: Mark Brown <broonie@kernel.org>
|
|
On several arches, virt_to_phys() is declared in io.h.
The build fails without it:
CC lib/test_debug_virtual.o
lib/test_debug_virtual.c: In function 'test_debug_virtual_init':
lib/test_debug_virtual.c:26:7: error: implicit declaration of function 'virt_to_phys' [-Werror=implicit-function-declaration]
pa = virt_to_phys(va);
^
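A minimal fix is to include the io.h header explicitly (a sketch; whether
the patch uses linux/io.h or asm/io.h is an implementation detail):
#include <linux/io.h>	/* virt_to_phys() */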
Fixes: e4dace361552 ("lib: add test module for CONFIG_DEBUG_VIRTUAL")
CC: stable@vger.kernel.org
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
The code:
ifdef CONFIG_6xx
KBUILD_CFLAGS += -mcpu=powerpc
endif
was added in 2006 in commit f48b8296b315 ("[PATCH] powerpc32: Set cpu
explicitly in kernel compiles"). This change was acceptable since the
TARGET_CPU logic was 64-bit only.
Since commit 0e00a8c9fd92 ("powerpc: Allow CPU selection
also on PPC32"), the TARGET_CPU logic applies on PPC32 as well, so this
is no longer acceptable: it currently appends -mcpu=powerpc at the end
of the command line, after any TARGET_CPU-specific option:
gcc -Wp,-MD,init/.do_mounts.o.d ...
-mcpu=powerpc -mbig-endian -m32 ...
-mcpu=e300c2 ...
-mcpu=powerpc ...
../init/do_mounts.c
Fixes: 0e00a8c9fd92 ("powerpc: Allow CPU selection also on PPC32")
Signed-off-by: Mathieu Malaterre <malat@debian.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
ipic_set_priority() has been unused since 2006 when the last usage was
removed in commit b9f0f1bb2bca ("[POWERPC] Adapt ipic driver to new
host_ops interface, add set_irq_type to set IRQ sense").
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
Merge our fixes branch again, this has a couple of build fixes and also
a change to do_syscall_trace_enter() that will conflict with a patch we
want to apply in next.
|
|
Some eMMCs from Micron have been reported to need ~800 ms timeout, while
enabling the CACHE ctrl after running sudden power failure tests. The
needed timeout is greater than what the card specifies as its generic CMD6
timeout, through the EXT_CSD register, hence the problem.
Normally we would introduce a card quirk to extend the timeout for these
specific Micron cards. However, due to the rather complicated debug process
needed to find out the error, let's simply use a minimum timeout of 1600ms,
double what has been reported, for all cards when enabling CACHE
ctrl.
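A sketch of the idea in the cache-enable path of mmc_init_card(), with
MIN_CACHE_EN_TIMEOUT_MS assumed to be the name of the new 1600ms floor:
	unsigned int timeout_ms = MIN_CACHE_EN_TIMEOUT_MS;	/* 1600 ms */

	/* Enforce the floor on top of the card's generic CMD6 timeout. */
	timeout_ms = max(card->ext_csd.generic_cmd6_time, timeout_ms);
	err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL,
			 EXT_CSD_CACHE_CTRL, 1, timeout_ms);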
Reported-by: Sjoerd Simons <sjoerd.simons@collabora.co.uk>
Reported-by: Andreas Dannenberg <dannenberg@ti.com>
Reported-by: Faiz Abbas <faiz_abbas@ti.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
In commit 5320226a0512 ("mmc: core: Disable HPI for certain Hynix eMMC
cards"), then intent was to prevent HPI from being used for some eMMC
cards, which didn't properly support it. However, that went too far, as
even BKOPS and CACHE ctrl became prevented. Let's restore those parts and
allow BKOPS and CACHE ctrl even if HPI isn't supported.
Fixes: 5320226a0512 ("mmc: core: Disable HPI for certain Hynix eMMC cards")
Cc: Pratibhasagar V <pratibha@codeaurora.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
During a re-initialization of the eMMC card, we may fail to re-enable HPI.
In these cases, that isn't properly reflected in the card->ext_csd.hpi_en
bit, as it keeps being set. This may cause subsequent attempts to use HPI,
even if it's not enabled. Let's fix this!
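A sketch of the fix in the re-initialization path: reflect a failed HPI
enable in the cached EXT_CSD state instead of leaving hpi_en set.
	err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, EXT_CSD_HPI_MGMT, 1,
			 card->ext_csd.generic_cmd6_time);
	if (err) {
		pr_warn("%s: Enabling HPI failed\n",
			mmc_hostname(card->host));
		card->ext_csd.hpi_en = 0;	/* don't pretend HPI is on */
		err = 0;
	} else {
		card->ext_csd.hpi_en = 1;
	}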
Fixes: eb0d8f135b67 ("mmc: core: support HPI send command")
Cc: <stable@vger.kernel.org>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
While booting with rootfs on MMC, the following warning is encountered
on OMAP4430:
omap-dma-engine 4a056000.dma-controller: DMA-API: mapping sg segment longer than device claims to support [len=69632] [max=65536]
This is because the DMA engine has a default maximum segment size of 64K
but HSMMC sets:
mmc->max_blk_size = 512; /* Block Length at max can be 1024 */
mmc->max_blk_count = 0xFFFF; /* No. of Blocks is 16 bits */
mmc->max_req_size = mmc->max_blk_size * mmc->max_blk_count;
mmc->max_seg_size = mmc->max_req_size;
which ends up telling the block layer that we support a maximum segment
size of 65535*512, which exceeds the advertised DMA engine capabilities.
Fix this by clamping the maximum segment size to the lesser of the
maximum request size and the maximum segment size supported by the DMA
engine device used for either DMA channel.
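A sketch of the clamp, using the DMA engine's advertised limit for both
channels:
	mmc->max_seg_size = min3(mmc->max_req_size,
			dma_get_max_seg_size(host->rx_chan->device->dev),
			dma_get_max_seg_size(host->tx_chan->device->dev));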
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Cc: <stable@vger.kernel.org>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
Some of the SDMMC pads auto calibration values parsed from
devicetree are assigned incorrectly. This patch fixes it.
Signed-off-by: Sowjanya Komatineni <skomatineni@nvidia.com>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Fixes: 51b77c8ea784 ("mmc: tegra: Program pad autocal offsets from dt")
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
V4_MODE is bit 15 of the SDHCI_HOST_CONTROL2 register, so word (16-bit)
accesses need to be used for this register.
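A sketch of the enable path using 16-bit accessors:
	u16 ctrl2;

	ctrl2 = sdhci_readw(host, SDHCI_HOST_CONTROL2);
	if (!(ctrl2 & SDHCI_CTRL_V4_MODE)) {
		ctrl2 |= SDHCI_CTRL_V4_MODE;
		sdhci_writew(host, ctrl2, SDHCI_HOST_CONTROL2);
	}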
Signed-off-by: Sowjanya Komatineni <skomatineni@nvidia.com>
Fixes: b3f80b434f72 ("mmc: sdhci: Add sd host v4 mode")
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
According to the documentation in include/uapi/asm-generic/ioctl.h,
_IOW means userspace is writing and kernel is reading, and
_IOR means userspace is reading and kernel is writing.
In case of these two ioctls, kernel is writing and userspace is reading,
so they have to be _IOR instead of _IOW.
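The corrected definitions look roughly like this (command numbers as in
the uapi header; the direction encodes the kernel writing a __u32 back to
userspace):
#define BLKGETZONESZ	_IOR(0x12, 132, __u32)	/* zone size in 512B sectors */
#define BLKGETNRZONES	_IOR(0x12, 133, __u32)	/* number of zones */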
Fixes: 72cd87576d1d8 ("block: Introduce BLKGETZONESZ ioctl")
Fixes: 65e4e3eee83d7 ("block: Introduce BLKGETNRZONES ioctl")
Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com>
Signed-off-by: Dmitry V. Levin <ldv@altlinux.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
Previously when a device was being emulated by an L1 guest for an L2
guest, that device couldn't then be passed through to an L3 guest. This
was because the L1 guest had no method for accessing L3 memory.
The hcall H_COPY_TOFROM_GUEST provides this access. Thus this setup for
passthrough can now be allowed.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
|
|
quadrants 1 & 2
A guest cannot access quadrants 1 or 2 as this would result in an
exception. Thus introduce the hcall H_COPY_TOFROM_GUEST to be used by a
guest when it wants to perform an access to quadrants 1 or 2, for
example when it wants to access memory for one of its nested guests.
Also provide an implementation for the kvm-hv module.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
|
|
Allow for a device which is being emulated at L0 (the host) for an L1
guest to be passed through to a nested (L2) guest.
The existing kvmppc_hv_emulate_mmio function can be used here. The main
challenge is that for a load the result must be stored into the L2 gpr,
not an L1 gpr as would normally be the case after going out to qemu to
complete the operation. This is difficult because at that point the
L2 gpr state has already been written back into L1 memory.
To work around this we store the address in L1 memory of the L2 gpr
where the result of the load is to be stored and use the new io_gpr
value KVM_MMIO_REG_NESTED_GPR to indicate that this is a nested load for
which completion must be done when returning back into the kernel. Then
in kvmppc_complete_mmio_load() the resultant value is written into L1
memory at the location of the indicated L2 gpr.
Note that we don't currently let an L1 guest emulate a device for an L2
guest which is then passed through to an L3 guest.
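A sketch of the completion side in kvmppc_complete_mmio_load(), assuming
the saved L1 address lives in vcpu->arch.nested_io_gpr:
	case KVM_MMIO_REG_NESTED_GPR:
		if (kvmppc_need_byteswap(vcpu))
			gpr = swab64(gpr);
		/* Write the result into the L2 gpr image kept in L1 memory. */
		kvm_vcpu_write_guest(vcpu, vcpu->arch.nested_io_gpr, &gpr,
				     sizeof(gpr));
		break;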
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
|
|
The functions kvmppc_st and kvmppc_ld are used to access guest memory
from the host using a guest effective address. They do so by translating
through the process table to obtain a guest real address and then using
kvm_read_guest or kvm_write_guest to make the access with the guest real
address.
This method of access however only works for L1 guests and will give the
incorrect results for a nested guest.
We can however use the store_to_eaddr and load_from_eaddr kvmppc_ops to
perform the access for a nested guest (and an L1 guest). So attempt this
method first and fall back to the old method if this fails and we aren't
running a nested guest.
At this stage there is no fall back method to perform the access for a
nested guest and this is left as a future improvement. For now we will
return to the nested guest and rely on the fact that a translation
should be faulted in before retrying the access.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
|
|
The kvmppc_ops struct is used to store function pointers to kvm
implementation specific functions.
Introduce two new functions load_from_eaddr and store_to_eaddr to be
used to load from and store to a guest effective address respectively.
Also implement these for the kvm-hv module. If we are using the radix
mmu then we can call the functions to access quadrant 1 and 2.
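A sketch of the new ops (signatures shown here as assumed):
struct kvmppc_ops {
	/* ... */
	int (*load_from_eaddr)(struct kvm_vcpu *vcpu, ulong *eaddr,
			       void *ptr, int size);
	int (*store_to_eaddr)(struct kvm_vcpu *vcpu, ulong *eaddr,
			      void *ptr, int size);
};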
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
|
|
The POWER9 radix mmu has the concept of quadrants. The quadrant number
is the two high bits of the effective address and determines the fully
qualified address to be used for the translation. The fully qualified
address consists of the effective lpid, the effective pid and the
effective address. This then gives 4 possible quadrants: 0, 1, 2, and 3.
When accessing these quadrants the fully qualified address is obtained
as follows:
Quadrant | Hypervisor | Guest
--------------------------------------------------------------------------
| EA[0:1] = 0b00 | EA[0:1] = 0b00
0 | effLPID = 0 | effLPID = LPIDR
| effPID = PIDR | effPID = PIDR
--------------------------------------------------------------------------
| EA[0:1] = 0b01 |
1 | effLPID = LPIDR | Invalid Access
| effPID = PIDR |
--------------------------------------------------------------------------
| EA[0:1] = 0b10 |
2 | effLPID = LPIDR | Invalid Access
| effPID = 0 |
--------------------------------------------------------------------------
| EA[0:1] = 0b11 | EA[0:1] = 0b11
3 | effLPID = 0 | effLPID = LPIDR
| effPID = 0 | effPID = 0
--------------------------------------------------------------------------
In the Guest:
Quadrant 3 is normally used to address the operating system since this
uses effPID=0 and effLPID=LPIDR, meaning the PID register doesn't need to
be switched.
Quadrant 0 is normally used to address user space since the effLPID and
effPID are taken from the corresponding registers.
In the Host:
Quadrant 0 and 3 are used as above, however the effLPID is always 0 to
address the host.
Quadrants 1 and 2 can be used by the host to address guest memory using
a guest effective address. Since the effLPID comes from the LPID register,
the host loads the LPID of the guest it would like to access (and the
PID of the process) and can perform accesses to a guest effective
address.
This means quadrant 1 can be used to address the guest user space and
quadrant 2 can be used to address the guest operating system from the
hypervisor, using a guest effective address.
Access to the quadrants can cause a Hypervisor Data Storage Interrupt
(HDSI) due to being unable to perform partition scoped translation.
Previously this could only be generated from a guest and so the code
path expects us to take the KVM trampoline in the interrupt handler.
This is no longer the case so we modify the handler to call
bad_page_fault() to check if we were expecting this fault so we can
handle it gracefully and just return with an error code. In the hash mmu
case we still raise an unknown exception since quadrants aren't defined
for the hash mmu.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
|
|
There exists a function kvm_is_radix() which is used to determine if a
kvm instance is using the radix mmu. However this only applies to the
first level (L1) guest. Add a function kvmhv_vcpu_is_radix() which can
be used to determine if the current execution context of the vcpu is
radix, accounting for if the vcpu is running a nested guest.
Currently all nested guests must be radix but this may change in the
future.
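A sketch of the helper, assuming the nested state hangs off
vcpu->arch.nested:
static inline bool kvmhv_vcpu_is_radix(struct kvm_vcpu *vcpu)
{
	bool radix;

	if (vcpu->arch.nested)
		radix = vcpu->arch.nested->radix;
	else
		radix = kvm_is_radix(vcpu->kvm);

	return radix;
}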
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
|
|
The kvm capability KVM_CAP_SPAPR_TCE_VFIO is used to indicate the
availability of in-kernel TCE acceleration for VFIO. However, this is
currently only available on a powernv machine, not on a pseries machine.
Thus make this capability dependent on having the cpu feature
CPU_FTR_HVMODE.
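A sketch of the capability check:
	case KVM_CAP_SPAPR_TCE_VFIO:
		r = !!cpu_has_feature(CPU_FTR_HVMODE);
		break;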
[paulus@ozlabs.org - fixed compilation for Book E.]
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
|
|
This adds code to flush the partition-scoped page tables for a radix
guest when dirty tracking is turned on or off for a memslot. Only the
guest real addresses covered by the memslot are flushed. The reason
for this is to get rid of any 2M PTEs in the partition-scoped page
tables that correspond to host transparent huge pages, so that page
dirtiness is tracked at a system page (4k or 64k) granularity rather
than a 2M granularity. The page tables are also flushed when turning
dirty tracking off so that the memslot's address space can be
repopulated with THPs if possible.
To do this, we add a new function kvmppc_radix_flush_memslot(). Since
this does what's needed for kvmppc_core_flush_memslot_hv() on a radix
guest, we now make kvmppc_core_flush_memslot_hv() call the new
kvmppc_radix_flush_memslot() rather than calling kvm_unmap_radix()
for each page in the memslot. This has the effect of fixing a bug in
that kvmppc_core_flush_memslot_hv() was previously calling
kvm_unmap_radix() without holding the kvm->mmu_lock spinlock, which
is required to be held.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Reviewed-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
|
|
This adds 'const' to the declarations for the struct kvm_memory_slot
pointer parameters of some functions, which will make it possible to
call those functions from kvmppc_core_commit_memory_region_hv()
in the next patch.
This also fixes some comments about locking.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Reviewed-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
|
|
For radix guests, this makes KVM map guest memory as individual pages
when dirty page logging is enabled for the memslot corresponding to the
guest real address. Having a separate partition-scoped PTE for each
system page mapped to the guest means that we have a separate dirty
bit for each page, thus making the reported dirty bitmap more accurate.
Without this, if part of guest memory is backed by transparent huge
pages, the dirty status is reported at a 2MB granularity rather than
a 64kB (or 4kB) granularity for that part, causing userspace to have
to transmit more data when migrating the guest.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Reviewed-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
|
|
Currently, kvm_arch_commit_memory_region() gets called with a
parameter indicating what type of change is being made to the memslot,
but it doesn't pass it down to the platform-specific memslot commit
functions. This adds the `change' parameter to the lower-level
functions so that they can use it in future.
[paulus@ozlabs.org - fix book E also.]
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Reviewed-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
|
|
|
|
Ido Schimmel says:
====================
mlxsw: spectrum_acl: Add Bloom filter support
Nir says:
Spectrum-2 uses Bloom filter to reduce the number of lookups in the
algorithmic TCAM (A-TCAM). HW performs multiple exact match lookups in a
given region using a key composed of { packet & mask, mask ID, region ID }.
The masks which are used in a region are called rule patterns or RP.
When such multiple masks are used, the A-TCAM region uses an eRP
(extended RP) table that describes which rule patterns are in use and
defines the order of the lookup. When eRP table is used in a region, one
way to reduce the number of the lookups is to consult a Bloom filter
before doing the lookup.
A Bloom filter is a space-efficient probabilistic data structure, on
which a query returns either "possibly in set" or "definitely not in
set". HW can skip a lookup if a query on the Bloom filter results a
"definitely not set" response. The mlxsw driver implements a "counting
filter" and when either a new entry is marked or the last entry is
removed it will update the HW. Update of this counting filter occurs
when rule is configured or deleted from a region.
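As an illustration of the counting-filter idea (not the mlxsw code itself;
the names below are made up for the sketch), the HW Bloom filter bit only
needs to change on the 0->1 and 1->0 transitions of a bucket's reference
count:
struct bf_bucket {
	unsigned int refcount;
};

/* Returns true when the corresponding HW bit must be set. */
static bool bf_mark(struct bf_bucket *b)
{
	return ++b->refcount == 1;	/* first rule hashed to this index */
}

/* Returns true when the corresponding HW bit must be cleared. */
static bool bf_unmark(struct bf_bucket *b)
{
	return --b->refcount == 0;	/* last rule hashed to this index */
}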
Patch #1 adds PEABFE register which is used for setting Bloom filter
entries.
Patch #2 adds Bloom filter resources.
Patch #3 and patch #4 provide Bloom filter handling within mlxsw, by
adding initialization and logic for updating the Bloom bit vector in HW.
Patch #5 and patch #6 add required calls for Bloom filter update as part
of rule configuration flow.
Patch #7 handles transitions to and from eRP table. It uses a list to
keep A-TCAM rules in order to update rules in Bloom filter, in cases of
transitions from master mask based A-TCAM region to an eRP table based
region and vice versa.
Patch #8 removes a trick done on master RP index to a remaining RP,
since Bloom filter is updated on eRP transitions.
Finally, patch #9 activates the Bloom filter mechanism in HW by
cancelling the bypass that was configured before, and the remaining
three patches are selftests that exercise the new code.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The eRP table is active when there is more than a single rule
pattern. It may be that the patterns are close enough to use the delta
mechanism. Bloom filter index computation is based on the values of
{rule & mask, mask ID, region ID} where the rule delta bits must be
cleared.
Add a test that exercises Bloom filter with delta mechanism.
Configure rules within delta range and pass a packet which is
supposed to hit the correct rule.
Signed-off-by: Nir Dotan <nird@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Bloom filter index computation is based on the values of
{rule & mask, mask ID, region ID} and the computation also varies
according to the region key size.
Add a test that exercises the possible combinations by creating
multiple chains using different key sizes and then pass a frame that
is supposed to produce a hit on all of the regions.
Signed-off-by: Nir Dotan <nird@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add a test that exercises Bloom filter code.
Activate the eRP table in the region by adding multiple rule patterns
which, with very high probability, use different entries in the Bloom
filter.
Then send packets in order to check lookup hits on all relevant rules.
Signed-off-by: Nir Dotan <nird@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Now that mlxsw driver handles all aspects of updating
the Bloom filter mechanism, set bf_bypass value to false
and allow HW to use Bloom filter.
Signed-off-by: Nir Dotan <nird@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Bloom filter is updated on transitions from a single rule pattern,
also called master RP, to eRP table and vice versa. Since rules are
being written to or deleted from the Bloom filter on such transitions,
it is not required to keep the same eRP bank ID for the master RP.
Change the master RP index assignment so that it is assigned zero.
This is consistent with the assignment of the first available spot
that is used for allocating eRP's indices.
Signed-off-by: Nir Dotan <nird@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Bloom filter update is required only for rules which reside on an
eRP. When the region has only a single rule pattern, the eRP table
is not used. However, insertion of another pattern would trigger a
move to an active eRP table, so it is imperative to update the Bloom
filter with all previously configured rules.
Add a method that updates Bloom filter entries for all rules
currently configured in the region, on the event of a transition
from master mask to eRP, or vice versa. For that purpose, maintain
a list of all A-TCAM rules within mlxsw_sp_acl_atcam_region.
Signed-off-by: Nir Dotan <nird@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add calls to eRP module for updating Bloom filter when a rule is
added or removed from the A-TCAM. eRP module will update the Bloom
filter only for cases in which the region has an active eRP table.
Signed-off-by: Nir Dotan <nird@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add Bloom filter update for rule insertion and rule removal scenarios.
This is done within the eRP module in order to ensure that Bloom filter
updates are done only for rules which are part of an eRP, as HW does not
consult Bloom filter for entries when there is a single (master) mask in
the region.
Signed-off-by: Nir Dotan <nird@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Spectrum-2 HW uses Bloom filter in order to skip lookups on specific
eRPs. It uses crc-16-Msbit-first calculation over a specific layout
of a rule's key fields combined with eRP ID as well as region ID.
Per potential lookup, iff the Bloom filter entry of the calculated
index is empty, then the lookup can be skipped. Hence, the mlxsw
driver should update the Bloom filter entry per each rule insertion
or deletion when rules are part of an eRP.
Add functions for adding and deleting entries in the Bloom filter.
In order to do so also add crc-16 computation based on the specific
Spectrum-2 polynomial and a function for encoding the crc-16 input
in the manner dictated by HW implementation.
Signed-off-by: Nir Dotan <nird@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Lay the foundations for Bloom filter handling. Introduce a new file for
Bloom filter actions.
Add struct mlxsw_sp_acl_bf to struct mlxsw_sp_acl_erp_core and initialize
the Bloom filter data structure. Also take care of proper destruction when
terminating.
Signed-off-by: Nir Dotan <nird@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add the maximum Bloom filter logarithmic size per eRP table bank.
Signed-off-by: Nir Dotan <nird@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The Bloom filter is a small bit vector on which the HW can perform a
fast lookup, which may reduce the number of lookups in the A-TCAM
memory. The PEABFE register allows setting values to the bits of
the bit vector mentioned above.
Add the register to be later used in A-TCAM optimizations.
Signed-off-by: Nir Dotan <nird@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Roopa Prabhu says:
====================
rtnl fdb get
This series adds support for rtnl fdb get similar to
route get.
v2: add nda_policy, fixes to extack msgs, strict nlmsg parsing
v3: remove unnecessary attribute length checks + simplify code
as pointed out by david
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
tests the below three cases of bridge fdb get:
[bridge, mac, vlan]
[bridge_port, mac, vlan, flags=[NTF_MASTER]]
[vxlandev, mac, flags=NTF_SELF]
depends on iproute2 support for bridge fdb get.
Signed-off-by: Roopa Prabhu <roopa@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch implements ndo_fdb_get for a vxlan device.
Signed-off-by: Roopa Prabhu <roopa@cumulusnetworks.com>
Reviewed-by: David Ahern <dsa@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch implements ndo_fdb_get for the bridge
fdb.
Signed-off-by: Roopa Prabhu <roopa@cumulusnetworks.com>
Acked-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Reviewed-by: David Ahern <dsa@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch adds support for fdb get similar to
route get. Arguments can be any of the following (similar to fdb add/del/dump):
[bridge, mac, vlan] or
[bridge_port, mac, vlan, flags=[NTF_MASTER]] or
[dev, mac, [vni|vlan], flags=[NTF_SELF]]
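Drivers hook into this via a new ndo, sketched here with the assumed
signature:
	int (*ndo_fdb_get)(struct sk_buff *skb,
			   struct nlattr *tb[],
			   struct net_device *dev,
			   const unsigned char *addr,
			   u16 vid, u32 portid, u32 seq,
			   struct netlink_ext_ack *extack);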
Signed-off-by: Roopa Prabhu <roopa@cumulusnetworks.com>
Reviewed-by: David Ahern <dsa@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Marek Vasut says:
====================
net: dsa: ksz: Clean up the tag code in prep for more switches
Clean up the KSZ DSA tag code in preparation for adding more switches.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
In case the destination address is link local, add override bit into the
switch tag to let such a packet through the switch even if the port is
blocked.
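A sketch of the xmit-side change, assuming the override bit is named
KSZ9477_TAIL_TAG_OVERRIDE in the tagger:
	/* hdr: Ethernet header of the frame; tag: tail tag being built. */
	if (is_link_local_ether_addr(hdr->h_dest))
		*tag |= KSZ9477_TAIL_TAG_OVERRIDE;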
Signed-off-by: Marek Vasut <marex@denx.de>
Cc: Tristram Ha <Tristram.Ha@microchip.com>
Cc: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Cc: Woojung Huh <woojung.huh@microchip.com>
Cc: David S. Miller <davem@davemloft.net>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|