Introduce the multi-packet WQE (RX Work Queue Element) feature,
also known as MPWQE or Striding RQ, in which WQEs are larger and
each serves multiple packets.
Every WQE consists of many strides of the same size; every received
packet is aligned to the beginning of a stride and is written to
consecutive strides within a WQE.
In the regular approach, each WQE must be big enough to serve one
received packet of any size up to the MTU, or up to 64K in case
device LRO is enabled, which is very wasteful when dealing with
small packets or when device LRO is enabled.
Thanks to its flexibility, MPWQE allows better memory utilization
(implying improvements in CPU utilization and packet rate), as
packets consume strides according to their size, leaving the rest
of the WQE available for other packets.
MPWQE default configuration:
- Num of WQEs = 16
- Strides per WQE = 2048
- Stride size = 64 bytes
The default WQE memory footprint went from 1024*MTU (~1.5MB) to
16 * 2048 * 64 = 2MB per ring.
However, HW LRO can now be supported at no additional cost in memory
footprint, and hence we turn it on by default and get an even better
performance.
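As a rough sketch of the stride accounting with the defaults above
(a hedged illustration; the constant and helper names are made up,
not the driver's):

    #define NUM_WQES        16
    #define STRIDES_PER_WQE 2048
    #define STRIDE_SIZE     64      /* bytes */

    /* Strides consumed by one received packet of byte_len bytes. */
    static inline unsigned int strides_for_packet(unsigned int byte_len)
    {
            return DIV_ROUND_UP(byte_len, STRIDE_SIZE);
    }

    /* Ring footprint: 16 * 2048 * 64 = 2MB. */
    #define RING_BYTES (NUM_WQES * STRIDES_PER_WQE * STRIDE_SIZE)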
Performance tested on ConnectX4-Lx 50G.
To isolate the feature under test, the numbers below were measured with
HW LRO turned off. We verified that the performance just improves when
LRO is turned back on.
* Netperf single TCP stream:
- BW raised by 10-15% for representative packet sizes:
default, 64B, 1024B, 1478B, 65536B.
* Netperf multi TCP stream:
- No degradation, line rate reached.
* Pktgen: packet rate raised by 2-10% for traffic of different message
sizes: 64B, 128B, 256B, 1024B, and 1500B.
* Pktgen: packet loss in bursts of small messages (64 bytes),
  single stream:
  - num packets | packet loss before | packet loss after
    2K          | ~1K                | 0
    8K          | ~6K                | 0
    16K         | ~13K               | 0
    32K         | ~28K               | 0
    64K         | ~57K               | ~24K
This is expected, as the driver can now receive as many small
packets (<=64B) as the total number of strides in the ring
(default = 2048 * 16), vs. 1024 (the default ring size, regardless
of packet size) before this feature.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Achiad Shochat <achiad@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
A queue counter can collect several statistics for one or more
hardware queues (QPs, RQs, etc.) that the counter is attached to.
For Ethernet it provides an "out of buffer" counter, which counts
all packets dropped due to a lack of software buffers.
Here we add device commands to alloc/query/dealloc queue counters.
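A hedged usage sketch of the new commands (the function names follow
the mlx5 core convention described here; exact names and signatures
are assumptions):

    u16 counter_id;
    u32 out[64] = {};   /* assumed large enough for the query output */
    int err;

    err = mlx5_core_alloc_q_counter(dev, &counter_id);
    if (err)
            return err;

    /* the counter is attached to queues at their creation; query it: */
    err = mlx5_core_query_q_counter(dev, counter_id, 0, out, sizeof(out));

    mlx5_core_dealloc_q_counter(dev, counter_id);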
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Rana Shahout <ranas@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Maintain the PCI status and provide wrappers for enabling and
disabling the PCI device. Performing either action more than once
without its opposite in between results in warning logs.
This occurred when EEH hot-plugged the device, causing a warning
for disabling an already disabled device.
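A minimal sketch of the status-guarded wrapper idea (names are
hypothetical, not necessarily those used by the patch):

    enum mlx4_pci_status {
            MLX4_PCI_STATUS_DISABLED,
            MLX4_PCI_STATUS_ENABLED,
    };

    struct mlx4_dev_persistent {
            struct pci_dev *pdev;
            struct mutex pci_status_mutex;
            enum mlx4_pci_status pci_status;
    };

    static void mlx4_pci_disable_device(struct mlx4_dev_persistent *persist)
    {
            mutex_lock(&persist->pci_status_mutex);
            if (persist->pci_status != MLX4_PCI_STATUS_DISABLED) {
                    pci_disable_device(persist->pdev);
                    persist->pci_status = MLX4_PCI_STATUS_DISABLED;
            }
            mutex_unlock(&persist->pci_status_mutex);
    }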
Fixes: 2ba5fbd62b25 ('net/mlx4_core: Handle AER flow properly')
Signed-off-by: Daniel Jurgens <danielj@mellanox.com>
Signed-off-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch folds NETIF_F_ALL_TSO into the bitmask for NETIF_F_GSO_SOFTWARE.
The idea is to avoid duplication of defines since the only difference
between the two was the GSO_UDP bit.
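A sketch of the resulting defines (flag sets inferred from the
description above, hedged; check netdev_features.h for the
authoritative version):

    #define NETIF_F_ALL_TSO      (NETIF_F_TSO | NETIF_F_TSO6 | \
                                  NETIF_F_TSO_ECN | NETIF_F_TSO_MANGLEID)

    /* GSO_SOFTWARE is now ALL_TSO plus the GSO_UDP (UFO) bit. */
    #define NETIF_F_GSO_SOFTWARE (NETIF_F_ALL_TSO | NETIF_F_UFO)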
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Move trace_call_bpf() into a helper function to minimize the size
of the perf_trace_*() tracepoint handlers.

    text        data       bss       dec        hex      filename
    10541679    5526646    2945024   19013349   1221ee5  vmlinux_before
    10509422    5526646    2945024   18981092   121a0e4  vmlinux_after
It may seem that perf_fetch_caller_regs() can also be moved,
but that is incorrect, since ip/sp will be wrong.
bpf+tracepoint performance is not affected, since
perf_swevent_put_recursion_context() is now inlined.
The corresponding EXPORT_SYMBOL_GPL can also be dropped.
No measurable change in normal perf tracepoints.
Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Commit 17e8351a7739 consistently uses int for temperatures;
however, it missed a few places in the trip temperature handling
and in thermal_core.
In the current code, trip->temperature uses "unsigned long" while
zone->temperature uses "int"; if the temperature is a negative
value, comparing the zone temperature with the trip temperature
gives a wrong result.
This patch fixes it.
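The pitfall in isolation (plain C, not the driver code): comparing a
negative int against an unsigned long promotes the int to a huge
unsigned value, so the check goes the wrong way.

    unsigned long trip_temp = 5000;     /* 5C, in millicelsius */
    int zone_temp = -10000;             /* -10C */

    if (zone_temp >= trip_temp) {
            /* taken! -10000 is converted to a huge unsigned long */
    }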
Signed-off-by: Wei Ni <wni@nvidia.com>
Signed-off-by: Eduardo Valentin <edubezval@gmail.com>
|
|
This LSM enforces that kernel-loaded files (modules, firmware, etc)
must all come from the same filesystem, with the expectation that
such a filesystem is backed by a read-only device such as dm-verity
or CDROM. This allows systems that have a verified and/or unchangeable
filesystem to enforce module and firmware loading restrictions without
needing to sign the files individually.
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Serge Hallyn <serge.hallyn@canonical.com>
Signed-off-by: James Morris <james.l.morris@oracle.com>
|
|
A string representation of the kernel_read_file_id enumeration is
needed for displaying messages (e.g. pr_info, auditing) that can be
used by multiple LSMs and the integrity subsystem. To simplify
keeping the list of strings up to date with the enumeration, this
patch defines two new preprocessing macros named __fid_enumify and
__fid_stringify to create the enumeration and an array of strings.
kernel_read_file_id_str() returns a string based on the enumeration.
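A hedged sketch of the pattern (the kernel's actual id list is longer;
the entries here are illustrative):

    #define __kernel_read_file_id(id) \
            id(UNKNOWN, unknown)      \
            id(FIRMWARE, firmware)    \
            id(MODULE, kernel-module) \
            id(MAX_ID, )

    #define __fid_enumify(ENUM, dummy) READING_ ## ENUM,
    #define __fid_stringify(dummy, str) #str,

    enum kernel_read_file_id {
            __kernel_read_file_id(__fid_enumify)
    };

    static const char * const kernel_read_file_str[] = {
            __kernel_read_file_id(__fid_stringify)
    };

    static inline const char *kernel_read_file_id_str(enum kernel_read_file_id id)
    {
            if ((unsigned int)id >= READING_MAX_ID)
                    return kernel_read_file_str[READING_UNKNOWN];
            return kernel_read_file_str[id];
    }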
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
[kees: removed removal of my old version, constified pointer values]
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: James Morris <james.l.morris@oracle.com>
|
|
Allocate a NULL-terminated file path with special characters escaped,
safe for logging.
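A hedged usage sketch (assuming the helper added here is
kstrdup_quotable_file(); the caller context is made up):

    char *path = kstrdup_quotable_file(file, GFP_KERNEL);

    if (path) {
            pr_info("loading file from %s\n", path);
            kfree(path);
    }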
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: James Morris <james.l.morris@oracle.com>
|
|
Provide an escaped (but readable: no inter-argument NULLs) commandline
safe for logging.
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: James Morris <james.l.morris@oracle.com>
|
|
Handle allocating and escaping a string safe for logging.
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: James Morris <james.l.morris@oracle.com>
|
|
Add device managed APIs devm_pinctrl_register() and
devm_pinctrl_unregister() for the APIs pinctrl_register()
and pinctrl_unregister().
This helps reduce code in the error path and sometimes allows
removing the .remove callback used for driver unbind.
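A hedged probe-time sketch (the driver name and pinctrl_desc are
placeholders):

    static int foo_pinctrl_probe(struct platform_device *pdev)
    {
            struct pinctrl_dev *pctl;

            pctl = devm_pinctrl_register(&pdev->dev, &foo_desc, NULL);
            if (IS_ERR(pctl))
                    return PTR_ERR(pctl);

            /* no pinctrl_unregister() needed on error or in .remove */
            return 0;
    }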
Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Acked-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
|
|
IOVA allocation has two problems that impede high-throughput I/O.
First, it can do a linear search over the allocated IOVA ranges.
Second, the rbtree spinlock that serializes IOVA allocations becomes
contended.
Address these problems by creating an API for caching allocated IOVA
ranges, so that the IOVA allocator isn't accessed frequently. This
patch adds a per-CPU cache, from which CPUs can alloc/free IOVAs
without taking the rbtree spinlock. The per-CPU caches are backed by
a global cache, to avoid invoking the (linear-time) IOVA allocator
without needing to make the per-CPU cache size excessive. This design
is based on magazines, as described in "Magazines and Vmem: Extending
the Slab Allocator to Many CPUs and Arbitrary Resources" (currently
available at https://www.usenix.org/legacy/event/usenix01/bonwick.html).
Adding caching on top of the existing rbtree allocator maintains the
property that IOVAs are densely packed in the IO virtual address space,
which is important for keeping IOMMU page table usage low.
To keep the cache size reasonable, we bound the IOVA space a CPU can
cache by 32 MiB (we cache a bounded number of IOVA ranges, and only
ranges of size <= 128 KiB). The shared global cache is bounded at
4 MiB of IOVA space.
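A conceptual sketch of the magazine scheme (field layout hedged and
sizes illustrative; see drivers/iommu/iova.c for the real structures):

    #define IOVA_MAG_SIZE 128       /* IOVA ranges per magazine */

    struct iova_magazine {
            unsigned long size;     /* number of cached pfns */
            unsigned long pfns[IOVA_MAG_SIZE];
    };

    /* Per-CPU state: alloc/free swap magazines with the global
     * depot only when the loaded one is empty/full, so the rbtree
     * spinlock is taken rarely. */
    struct iova_cpu_rcache {
            spinlock_t lock;
            struct iova_magazine *loaded;
            struct iova_magazine *prev;
    };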
Signed-off-by: Omer Peleg <omer@cs.technion.ac.il>
[mad@cs.technion.ac.il: rebased, cleaned up and reworded the commit message]
Signed-off-by: Adam Morrison <mad@cs.technion.ac.il>
Reviewed-by: Shaohua Li <shli@fb.com>
Reviewed-by: Ben Serebrin <serebrin@google.com>
[dwmw2: split out VT-d part into a separate patch]
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
|
Merge tag 'clk-renesas-for-v4.7-tag2' of git://git.kernel.org/pub/scm/linux/kernel/git/geert/renesas-drivers into clk-next
clk: renesas: R-Car SYSC PM Domain Preparation
- Export the CPG/MSSR and CPG/MSTP attach/detach_dev callbacks, so
they can be called by the R-Car SYSC PM Domain driver.
* tag 'clk-renesas-for-v4.7-tag2' of git://git.kernel.org/pub/scm/linux/kernel/git/geert/renesas-drivers:
clk: renesas: cpg-mssr: Export cpg_mssr_{at,de}tach_dev()
clk: renesas: mstp: Provide dummy attach/detach_dev callbacks
clk: renesas: Provide Kconfig symbols for CPG/MSSR and CPG/MSTP support
|
|
Using a bitfield enables the compiler to lay out the structure more
efficiently when we have other boolean flags since multiple values can
be included in a single byte.
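For illustration, adjacent one-bit bitfields share a single storage
unit (field names are illustrative):

    struct flags_packed {
            unsigned int cache_only:1;
            unsigned int cache_dirty:1; /* shares a byte with cache_only */
    };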
Signed-off-by: Mark Brown <broonie@kernel.org>
|
|
kvm_make_request and kvm_check_request imply a producer-consumer
relationship; add implicit memory barriers to them. There was indeed
already a place that was adding an explicit smp_mb() to order between
kvm_check_request and the processing of the request. That memory
barrier can be removed (as an added benefit, kvm_check_request can use
smp_mb__after_atomic which is free on x86).
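A hedged sketch of the pairing (simplified from the description; the
producer-side smp_wmb() is an assumption):

    static inline void kvm_make_request(int req, struct kvm_vcpu *vcpu)
    {
            /* publish the request payload before the request bit */
            smp_wmb();
            set_bit(req, &vcpu->requests);
    }

    static inline bool kvm_check_request(int req, struct kvm_vcpu *vcpu)
    {
            if (test_bit(req, &vcpu->requests)) {
                    clear_bit(req, &vcpu->requests);
                    /* pairs with the producer barrier; free on x86 */
                    smp_mb__after_atomic();
                    return true;
            }
            return false;
    }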
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
The CCP can provide DMA services to the kernel using the device's
pass-through mode. Register these services as general purpose DMA
channels.
Changes since v2:
- Add a Signed-off-by
Changes since v1:
- Allocate memory for a string in ccp_dmaengine_register
- Ensure register/unregister calls are properly ordered
- Verified all changed files are listed in the diffstat
- Undo some superfluous changes
- Added a cc:
Signed-off-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The R-Car SYSC PM Domain driver has to power manage devices in power
areas using clocks. To reuse code and to share knowledge of clocks
suitable for power management, this is ideally done through the existing
cpg_mssr_attach_dev() and cpg_mssr_detach_dev() callbacks.
Hence these callbacks can no longer rely on their "domain" parameter
pointing to the CPG/MSSR Clock Domain. To handle this, keep a pointer to
the clock domain in a static variable. cpg_mssr_attach_dev() has to
support probe deferral, as the R-Car SYSC PM Domain may be initialized,
and devices may be added to it, before the CPG/MSSR Clock Domain is
initialized.
Dummy callbacks are provided for the case where CPG/MSTP support is not
included, so the rcar-sysc driver won't have to care about this.
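A hedged sketch of the deferral logic (the static variable and its
type name are made up):

    static struct cpg_mssr_priv *cpg_mssr_priv; /* the clock domain */

    int cpg_mssr_attach_dev(struct generic_pm_domain *unused,
                            struct device *dev)
    {
            struct cpg_mssr_priv *priv = cpg_mssr_priv;

            /* The R-Car SYSC PM Domain may add devices before the
             * CPG/MSSR Clock Domain is initialized. */
            if (!priv)
                    return -EPROBE_DEFER;

            /* ... attach clocks suitable for power management ... */
            return 0;
    }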
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Provide dummy cpg_mstp_{at,de}tach_dev() PM Domain callbacks if CPG/MSTP
support is not included, so the rcar-sysc driver won't have to care
about this.
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
By passing the smd channel reference to the callback, rather than the
smd device, we can open additional smd channels from sub-devices of smd
devices.
Also update the two smd clients currently found in mainline.
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Signed-off-by: Andy Gross <andy.gross@linaro.org>
|
|
This patch adds a new helper for cls/act programs that can push events
to user space applications. For networking, this can be f.e. for sampling,
debugging, logging purposes or pushing of arbitrary wake-up events. The
idea is similar to a43eec304259 ("bpf: introduce bpf_perf_event_output()
helper") and 39111695b1b8 ("samples: bpf: add bpf_perf_event_output example").
The eBPF program utilizes a perf event array map that user space populates
with fds from perf_event_open(), the eBPF program calls into the helper
f.e. as skb_event_output(skb, &my_map, BPF_F_CURRENT_CPU, raw, sizeof(raw))
so that the raw data is pushed into the fd f.e. at the map index of the
current CPU.
User space can poll/mmap/etc on this and has a data channel for receiving
events that can be post-processed. The nice thing is that since the eBPF
program and user space application making use of it are tightly coupled,
they can define their own arbitrary raw data format and what/when they
want to push.
While f.e. packet headers could be one part of the metadata that is
being pushed, this is not a substitute for things like packet sockets,
as the whole packet is not pushed and pushing is done in a single
direction only. The intention is more of a generically usable,
efficient event pipe to applications.
Workflow is that tc can pin the map and applications can attach themselves
e.g. after cls/act setup to one or multiple map slots, demuxing is done by
the eBPF program.
Adding this facility requires minimal effort: it reuses the helper
introduced in a43eec304259 ("bpf: introduce bpf_perf_event_output() helper")
and we get its functionality for free by overloading its BPF_FUNC_
identifier for cls/act programs. ctx is currently unused, but will be
made use of in the future. An example will be added to iproute2's BPF
example files.
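A hedged sketch in restricted BPF C, building on the call shown above
(the map must be a BPF_MAP_TYPE_PERF_EVENT_ARRAY that user space
populates with perf_event_open() fds; its definition syntax is
loader-specific and elided here):

    struct bpf_elf_map event_map;   /* perf event array; definition
                                       depends on the loader */

    __section("classifier")
    int cls_sample(struct __sk_buff *skb)
    {
            __u32 raw[2] = { skb->len, skb->protocol };

            /* push raw data to the perf fd at the current CPU's slot */
            skb_event_output(skb, &event_map, BPF_F_CURRENT_CPU,
                             raw, sizeof(raw));
            return TC_ACT_OK;
    }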
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Struct ctl_table_header holds a pointer to the sysctl table, which
can be used for freeing it after unregistration. IPv4 sysctls
already use that.
Remove the redundant NULL assignment: ndev is allocated using kzalloc.
This also saves some bytes: the sysctl table can be shorter than
DEVCONF_MAX+1 if some options are disabled in the config.
Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add registration APIs in the clk fixed-rate code to return struct
clk_hw pointers instead of struct clk pointers. This way we hide
the struct clk pointer from providers unless they need to use
consumer facing APIs.
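A usage sketch, assuming the new API is clk_hw_register_fixed_rate()
mirroring clk_register_fixed_rate() (surrounding code is illustrative):

    struct clk_hw *hw;

    hw = clk_hw_register_fixed_rate(dev, "osc", NULL, 0, 24000000);
    if (IS_ERR(hw))
            return PTR_ERR(hw);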
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
|
|
Add registration APIs in the clk gpio code to return struct
clk_hw pointers instead of struct clk pointers. This way we hide
the struct clk pointer from providers unless they need to use
consumer facing APIs.
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
|
|
Add registration APIs in the clk composite code to return struct
clk_hw pointers instead of struct clk pointers. This way we hide
the struct clk pointer from providers unless they need to use
consumer facing APIs.
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
|
|
Add registration APIs in the clk fractional divider code to
return struct clk_hw pointers instead of struct clk pointers.
This way we hide the struct clk pointer from providers unless
they need to use consumer facing APIs.
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
|
|
Add registration APIs in the clk fixed-factor code to return
struct clk_hw pointers instead of struct clk pointers. This way
we hide the struct clk pointer from providers unless they need to
use consumer facing APIs.
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
|
|
Add registration APIs in the clk mux code to return struct clk_hw
pointers instead of struct clk pointers. This way we hide the
struct clk pointer from providers unless they need to use
consumer facing APIs.
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
|
|
Add registration APIs in the clk gate code to return struct
clk_hw pointers instead of struct clk pointers. This way we hide
the struct clk pointer from providers unless they need to use
consumer facing APIs.
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
|
|
Add registration APIs in the clk divider code to return struct
clk_hw pointers instead of struct clk pointers. This way we hide
the struct clk pointer from providers unless they need to use
consumer facing APIs.
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
|
|
Now that we have a clk registration API that doesn't return
struct clks, we need to have some way to hand out struct clks via
the clk_get() APIs that doesn't involve associating struct clk
pointers with a struct clk_lookup. Luckily, clkdev already
operates on struct clk_hw pointers, except for the registration
facing APIs where it converts struct clk pointers into struct
clk_hw pointers almost immediately.
Let's add clk_hw based registration APIs so that we can skip the
conversion step and provide a way for clk provider drivers to
operate exclusively on clk_hw structs. This way we clearly
split the API between consumers and providers.
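A hedged sketch, assuming the new API is clk_hw_register_clkdev()
(the lookup ids are illustrative):

    ret = clk_hw_register_clkdev(hw, "uart_clk", "serial0.0");
    if (ret)
            return ret;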
Cc: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
|
|
Now that we have a clk registration API that doesn't return
struct clks, we need to have some way to hand out struct clks via
the clk_get() APIs that doesn't involve associating struct clk
pointers with an OF node. Currently we ask the OF provider to
give us a struct clk pointer for some clkspec, turn that struct
clk into a struct clk_hw and then allocate a new struct clk to
return to the caller.
Let's add a clk_hw based OF provider hook that returns a struct
clk_hw directly, so that we skip the intermediate step of
converting from struct clk to struct clk_hw. Eventually when
we've converted all OF clk providers to struct clk_hw based APIs
we can remove the struct clk based ones.
It should also be noted that we change the onecell provider to
have a flex array instead of a pointer for the array of clk_hw
pointers. This allows providers to allocate one structure of the
correct length in one step instead of two.
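A hedged provider sketch using the flex-array onecell data (assuming
the hook names of_clk_add_hw_provider()/of_clk_hw_onecell_get();
NUM_CLKS and my_clk are illustrative):

    struct clk_hw_onecell_data *hw_data;

    hw_data = kzalloc(sizeof(*hw_data) +
                      NUM_CLKS * sizeof(hw_data->hws[0]), GFP_KERNEL);
    if (!hw_data)
            return -ENOMEM;

    hw_data->num = NUM_CLKS;
    hw_data->hws[0] = &my_clk.hw;   /* one allocation, flex array */
    of_clk_add_hw_provider(np, of_clk_hw_onecell_get, hw_data);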
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
|
|
We've largely split the clk consumer and provider APIs along
struct clk and struct clk_hw, but clk_register() still returns a
struct clk pointer for each struct clk_hw that's registered.
Eventually we'd like to only allocate struct clks when there's a
user, because struct clk is per-user now, so clk_register() needs
to change.
Let's add new APIs to register struct clk_hws, but this time
we'll hide the struct clk from the caller by returning an int
error code. Also add an unregistration API that takes the clk_hw
structure that was passed to the registration API. This way
provider drivers never have to deal with a struct clk pointer
unless they're using the clk consumer APIs.
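A hedged sketch of the new pair, assuming the names clk_hw_register()
and clk_hw_unregister() (the clk_hw setup is elided):

    int ret;

    ret = clk_hw_register(dev, &my_clk.hw); /* returns int, no struct clk */
    if (ret)
            return ret;

    /* later, on teardown: */
    clk_hw_unregister(&my_clk.hw);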
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
|
|
Now that we've converted the only caller over to another clkdev
API, remove this one.
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
|
|
Merge the ptmx internal interface cleanup branch.
This doesn't change semantics, but it should be a sane basis for
eventually getting the multi-instance devpts code into some sane shape
where we can get rid of the kernel config option, which we can
hopefully get done next merge window.
* ptmx-cleanup:
devpts: clean up interface to pty drivers
|
|
The original thought was that if a device implemented ACS, then surely
we want to use that... well, it turns out that devices can make an ACS
capability so broken that we still need to fall back to quirks.
Reverse the order of ACS enabling to give quirks first shot at it.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
|
|
Similar to enable_event/disable_event triggers, these triggers enable
and disable the aggregation of events into maps rather than enabling
and disabling their writing into the trace buffer.
They can be used to automatically start and stop hist triggers based
on a matching filter condition.
If there's a paused hist trigger on system:event, the following would
start it when the filter condition was hit:
# echo enable_hist:system:event [ if filter] > event/trigger
And the following would disable a running system:event hist trigger:
# echo disable_hist:system:event [ if filter] > event/trigger
See Documentation/trace/events.txt for real examples.
Link: http://lkml.kernel.org/r/f812f086e52c8b7c8ad5443487375e03c96a601f.1457029949.git.tom.zanussi@linux.intel.com
Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Tested-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Reviewed-by: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
This helper function can be used to copy the arguments of a
phandle to an array.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Rob Herring <robh@kernel.org>
|
|
With this macro any user can easily iterate over a list of
phandles. The patch also converts __of_parse_phandle_with_args()
to make use of the macro.
The of_count_phandle_with_args() function is not converted,
because the macro hides the return value of of_phandle_iterator_init(),
which is needed in there.
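A hedged usage sketch, assuming the macro is of_for_each_phandle()
(property names are just an example):

    struct of_phandle_iterator it;
    int err;

    of_for_each_phandle(&it, err, np, "clocks", "#clock-cells", 0) {
            /* it.node points to the node the current phandle targets */
            pr_info("phandle target: %s\n", it.node->name);
    }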
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Rob Herring <robh@kernel.org>
|
|
Move the code to walk over the phandles out of the loop in
__of_parse_phandle_with_args() to a separate function that
just works with the iterator handle: of_phandle_iterator_next().
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Rob Herring <robh@kernel.org>
|
|
This struct carries all necessary information to iterate over
a list of phandles and extract the arguments. Add an
init function for the iterator and make use of it in
__of_parse_phandle_with_args().
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Rob Herring <robh@kernel.org>
|
|
Replace the default nand_ecclayout definitions for large and small page
devices with the equivalent mtd_ooblayout_ops.
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
|
|
ECC layout definitions are currently exposed using the nand_ecclayout
struct which embeds oobfree and eccpos arrays with predefined size.
This approach was acceptable when NAND chips were providing relatively
small OOB regions, but MLC and TLC chips now provide OOB regions of
several hundred bytes, which implies a non-negligible overhead for
everybody, even those who only need to support legacy NANDs.
Create an mtd_ooblayout_ops interface providing the same functionality
(expose the ECC and oobfree layout) without the need for this huge
structure.
The mtd->ecclayout is now deprecated and should be replaced by the
equivalent mtd_ooblayout_ops. In the meantime we provide a wrapper around
the ->ecclayout field to ease migration to this new model.
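A hedged sketch of a driver-side implementation (the section layout
values and the foo_ prefix are illustrative):

    static int foo_ooblayout_ecc(struct mtd_info *mtd, int section,
                                 struct mtd_oob_region *oobregion)
    {
            if (section)
                    return -ERANGE;

            oobregion->offset = 2;  /* ECC after the bad-block marker */
            oobregion->length = 4;
            return 0;
    }

    static int foo_ooblayout_free(struct mtd_info *mtd, int section,
                                  struct mtd_oob_region *oobregion)
    {
            if (section)
                    return -ERANGE;

            oobregion->offset = 8;
            oobregion->length = mtd->oobsize - 8;
            return 0;
    }

    static const struct mtd_ooblayout_ops foo_ooblayout_ops = {
            .ecc  = foo_ooblayout_ecc,
            .free = foo_ooblayout_free,
    };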
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
|
|
Add an mtd_set_ecclayout() helper function to avoid direct accesses to the
mtd->ecclayout field. This will ease future reworks of ECC layout
definition.
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
|
|
In order to make the ecclayout definition completely dynamic we need to
rework the way OOB layouts are defined and iterated.
Create a few mtd_ooblayout_xxx() helpers to ease OOB bytes manipulation
and hide ecclayout internals to their users.
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
|
|
Export the default read/write oob functions (for the standard and syndrome
scheme), so that drivers can use them for their raw implementation and
implement their own functions for the normal oob operation.
This is required if your ECC engine is capable of fixing some of the OOB
data. In this case you have to overload the ->read_oob() and ->write_oob(),
but if you don't specify the ->read/write_oob_raw() functions they are
assigned to the ->read/write_oob() implementation, which is not what you
want.
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
|
|
The new IFC controller version 2.0 has a different memory map page.
Up to IFC 1.4 the page size is 4 KB, and from IFC 2.0 the page size
is 64 KB. This patch segregates the IFC global and runtime registers
into the appropriate page sizes.
Signed-off-by: Jaiprakash Singh <b44839@freescale.com>
Signed-off-by: Raghav Dogra <raghav@freescale.com>
Acked-by: Li Yang <leoyang.li@nxp.com>
Signed-off-by: Raghav Dogra <raghav.dogra@nxp.com>
Acked-by: Scott Wood <oss@buserror.net>
Acked-by: Brian Norris <computersforpeace@gmail.com>
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
|
|
The NAND subsystem is being slightly reworked to store ECC details
in separate fields. In the future we'll want to add support for more
DT properties, as specifying every possible setup with a single
"nand-ecc-mode" is a pretty bad idea.
To allow this, let's add a helper that will support something like
"nand-ecc-algo" in the future. Right now we use it for keeping
backward compatibility.
Signed-off-by: Rafał Miłecki <zajec5@gmail.com>
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
|
|
Our nand_ecc_modes_t is already a bit abused by the value NAND_ECC_SOFT_BCH.
This enum should store ECC mode only and putting algorithm details there
is a bad idea. It would result in too many values impossible to support
in a sane way.
To solve this problem let's add a new enum. We'll have to modify all
drivers to set it properly but once it's done it'll be possible to drop
NAND_ECC_SOFT_BCH. That will result in a cleaner design and more
possibilities like setting ECC algorithm for hardware ECC mode.
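A hedged sketch of the new enum (assuming it is named nand_ecc_algo,
matching the "nand-ecc-algo" DT property above):

    enum nand_ecc_algo {
            NAND_ECC_UNKNOWN,
            NAND_ECC_HAMMING,
            NAND_ECC_BCH,
    };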
Signed-off-by: Rafał Miłecki <zajec5@gmail.com>
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
|
|
Merge branch 'mtd-nand-trigger' of git://git.kernel.org/pub/scm/linux/kernel/git/j.anaszewski/linux-leds into nand/next
Pull leds-trigger changes from Jacek Anaszewski.
Create a generic mtd led-trigger to replace the existing nand
led-trigger implementation.
* 'mtd-nand-trigger' of git://git.kernel.org/pub/scm/linux/kernel/git/j.anaszewski/linux-leds:
mtd: Hook I/O activity to the MTD LED trigger
mtd: nand: Remove the "nand-disk" LED trigger
leds: trigger: Introduce a MTD (NAND/NOR) trigger
mtd: Uninline mtd_write_oob and move it to mtdcore.c
leds: trigger: Introduce a kernel panic LED trigger
|