|
Pull 4.15 updates to take over the previous urgent fixes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
|
|
You cannot select KBUILD_DEFCONFIG depending on any CONFIG option
because include/config/auto.conf is not included when building config
targets. So, CONFIG_SUPERH32 is never set during configuration, and
cayman_defconfig is always chosen.
This commit provides a sensible way to choose shx3/cayman_defconfig.
arch/sh/Kconfig sets either SUPERH32 or SUPERH64 depending on the ARCH
environment variable, as follows:
config SUPERH32
	def_bool ARCH = "sh"
...
config SUPERH64
	def_bool ARCH = "sh64"
It makes sense to choose the default defconfig based on ARCH, as
arch/sparc/Makefile does.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
|
|
I was not seeing my linker flags getting added when using ld-option when
cross compiling with Clang. Upon investigation, this seems to be due to
a difference in how GCC vs Clang handle cross compilation.
GCC is configured at build time to support one backend, which is implicit
when compiling. Clang is explicit via the use of `-target <triple>` and
ships with all supported backends by default.
GNU Make feature test macros that compile then link will always fail
when cross compiling with Clang unless Clang's triple is passed along to
the compiler. For example:
$ clang -x c /dev/null -c -o temp.o
$ aarch64-linux-android/bin/ld -E temp.o
aarch64-linux-android/bin/ld:
unknown architecture of input file `temp.o' is incompatible with
aarch64 output
aarch64-linux-android/bin/ld:
warning: cannot find entry symbol _start; defaulting to
0000000000400078
$ echo $?
1
$ clang -target aarch64-linux-android- -x c /dev/null -c -o temp.o
$ aarch64-linux-android/bin/ld -E temp.o
aarch64-linux-android/bin/ld:
warning: cannot find entry symbol _start; defaulting to 00000000004002e4
$ echo $?
0
This causes conditional checks that invoke $(CC) without the target
triple, then $(LD) on the result, to always fail.
Suggested-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Matthias Kaehlcke <mka@chromium.org>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
|
|
The cache files are only cleaned away by "make clean". If you continue
incremental builds, the cache files will grow little by little.
It is not a big deal in general use cases because compiler flags do not
change very often.
However, if you do build-test for various architectures, compilers, and
kernel configurations, you will end up with huge cache files soon.
When the cache file exceeds 1000 lines, shrink it down to 500 with "tail".
The least recently added lines are cut (not the least recently used).
I hope it will work well enough.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
|
|
Some $(call cc-option,...) are invoked very early, even before
KBUILD_CFLAGS, etc. are initialized.
The returned string from $(call cc-option,...) depends on
KBUILD_CPPFLAGS, KBUILD_CFLAGS, and GCC_PLUGINS_CFLAGS.
Since they are exported, they are not empty when the top Makefile
is recursively invoked.
The recursion occurs in several places. For example, the top
Makefile invokes itself for silentoldconfig; "make tinyconfig" and
"make rpm-pkg" are such cases, too.
In those cases, the second call of cc-option from the same line
runs a different shell command due to non-pristine KBUILD_CFLAGS.
To get the same result all the time, KBUILD_* and GCC_PLUGINS_CFLAGS
must be initialized before any call of cc-option. This avoids
garbage data in the .cache.mk file.
Move all calls of cc-option below the config targets because target
compiler flags are unnecessary for Kconfig.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
|
|
These are a few stragglers that I left out of the original patch to
cache calls to the C compiler ("kbuild: Add a cache for generated
variables") because they bleed out into the main Makefile and thus
uglify things a little bit. The idea is the same here, though.
Signed-off-by: Douglas Anderson <dianders@chromium.org>
Tested-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
|
|
While timing a "no-op" build of the kernel (incrementally building the
kernel even though nothing changed) in the Chrome OS build system I
found that it was much slower than I expected.
Digging into things a bit, I found that quite a bit of the time was
spent invoking the C compiler even though we weren't actually building
anything. Currently in the Chrome OS build system the C compiler is
called through a number of wrappers (one of which is written in
python!) and can take upwards of 100 ms to invoke even if we're not
doing anything difficult, so these invocations of the compiler were
taking a lot of time. Worse, the invocations couldn't seem to take
advantage of the multiple cores on my system.
Certainly it seems like we could make the compiler invocations in the
Chrome OS build system faster, but only to a point. Inherently
invoking a program as big as a C compiler is a fairly heavy
operation. Thus even if we could speed up the compiler calls, it made sense
to track down what was happening.
It turned out that all the compiler invocations were coming from
usages like this in the kernel's Makefile:
KBUILD_CFLAGS += $(call cc-option,-fno-delete-null-pointer-checks,)
Due to the way cc-option and similar statements work, the above
contains an implicit call to the C compiler. ...and due to the fact
that we're storing the result in KBUILD_CFLAGS, a simply expanded
variable, the call will happen every time the Makefile is parsed, even
if there are no users of KBUILD_CFLAGS.
Rather than redoing this computation every time, it makes a lot of
sense to cache the result of all of the Makefile's compiler calls just
like we do when we compile a ".c" file to a ".o" file. Conceptually
this is quite a simple idea. ...and since the calls to invoke the
compiler and similar tools are centrally located in the Kbuild.include
file this doesn't even need to be super invasive.
Implementing the cache in a simple-to-use and efficient way is not
quite as simple as it first sounds, though. To get maximum speed we
really want the cache in a format that make can natively understand
and make doesn't really have an ability to load/parse files. ...but
make _can_ import other Makefiles, so the solution is to store the
cache in Makefile format. This requires coming up with a valid/unique
Makefile variable name for each value to be cached, but that's
solvable with some cleverness.
After this change, we'll automatically create a ".cache.mk" file that
will contain our cached variables. We'll load this on each invocation
of make and will avoid recomputing anything that's already in our
cache. The cache is stored in a format such that it shouldn't need any
invalidation, since anything that might change should affect the "key"
and any old cached value won't be used.
Signed-off-by: Douglas Anderson <dianders@chromium.org>
Tested-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
|
|
$(kbuild-file) and Kbuild.include are included before the default
target "all".
We will add a target to Kbuild.include. In preparation, add a forward
declaration of the default target.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
|
|
There are a few defines that manually shift a bit. Change these to use
the BIT() macro.
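As an illustration only (the register and bit below are made up, not
taken from the patch; in kernel code BIT() comes from <linux/bits.h>),
the conversion is of this form:

  /* before: manually shifted bit */
  #define MT7620_FOO_ENABLE	(1 << 7)

  /* after: same value, expressed with the BIT() macro */
  #define MT7620_FOO_ENABLE	BIT(7)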
Signed-off-by: John Crispin <john@phrozen.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/15322/
Signed-off-by: James Hogan <jhogan@kernel.org>
|
|
Switch the printk() call to the preferred pr_warn() API.
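As an illustration (the message text is made up, not quoted from the
driver), the change is of this form; pr_warn() implies the KERN_WARNING
log level and picks up any pr_fmt() prefix:

  /* before */
  printk("pcie request timed out\n");

  /* after */
  pr_warn("pcie request timed out\n");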
Fixes: 7e5873d3755c ("MIPS: pci: Add MT7620a PCIE driver")
Signed-off-by: John Crispin <john@phrozen.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 4.5+
Patchwork: https://patchwork.linux-mips.org/patch/15321/
Signed-off-by: James Hogan <jhogan@kernel.org>
|
|
An invalid, duplicate define has gone unnoticed for some time; let's
remove it. The correct define is 3 lines below.
Fixes: 7e5873d3755c ("MIPS: pci: Add MT7620a PCIE driver")
Signed-off-by: John Crispin <john@phrozen.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/15320/
Signed-off-by: James Hogan <jhogan@kernel.org>
|
|
Heikki discovered a runtime issue with this patch. Considering that we
have no time to test any fix right now, revert
commit 43aaf4f03f063b12bcba2f8b800fdec85e2acc75.
Reported-by: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
|
|
Radix keeps no meaningful state in addr_limit, so remove it from radix
code and rename to slb_addr_limit to make it clear it applies to hash
only.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
Radix VA space allocations test addresses against mm->task_size, which
is 512TB, even in cases where the intention is to limit allocation to
below 128TB.
This results in mmap with a hint address below 128TB but address +
length above 128TB succeeding when it should fail (as hash does after
the previous patch).
Set the high address limit to be considered up front, and base
subsequent allocation checks on that consistently.
Fixes: f4ea6dcb08ea ("powerpc/mm: Enable mappings above 128TB")
Cc: stable@vger.kernel.org # v4.12+
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
While mapping hints whose range would cross 128TB are disallowed,
MAP_FIXED allocations that cross 128TB are allowed. These are failing
on hash (on radix they succeed). Add an additional case for fixed
mappings to expand the addr_limit when crossing 128TB.
Fixes: f4ea6dcb08ea ("powerpc/mm: Enable mappings above 128TB")
Cc: stable@vger.kernel.org # v4.12+
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
Hash unconditionally resets the addr_limit to default (128TB) when the
mm context is initialised. If a process has > 128TB mappings when it
forks, the child will not get the 512TB addr_limit, so accesses to
valid > 128TB mappings will fail in the child.
Fix this by only resetting the addr_limit to the default if it was 0.
A non-zero value indicates it was duplicated from the parent (0 means exec()).
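A rough sketch of the shape of the fix (field and constant names are
recalled from the powerpc code of that era, not copied from the patch):

  /* On exec() the context starts out zeroed, so pick the 128TB default;
   * on fork() the value duplicated from the parent is kept as-is. */
  if (mm->context.addr_limit == 0)
          mm->context.addr_limit = DEFAULT_MAP_WINDOW_USER64;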
Fixes: f4ea6dcb08ea ("powerpc/mm: Enable mappings above 128TB")
Cc: stable@vger.kernel.org # v4.12+
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
When allocating VA space with a hint that crosses 128TB, the SLB
addr_limit variable is not expanded if addr is not > 128TB, but the
slice allocation looks at task_size, which is 512TB. This results in
slice_check_fit() incorrectly succeeding because the slice_count
truncates off bit 128 of the requested mask, so the comparison to the
available mask succeeds.
Fix this by using mm->context.addr_limit instead of mm->task_size for
testing allocation limits. This causes such allocations to fail.
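Sketch of the idea (simplified; the variable names are illustrative,
not the actual slice.c code):

  unsigned long high_limit = mm->context.addr_limit;  /* 128TB, or 512TB if expanded */

  /* bound the request by the current address-space limit rather than
   * mm->task_size, so a hint below 128TB cannot spill past 128TB */
  if (len > high_limit || addr > high_limit - len)
          return -ENOMEM;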
Fixes: f4ea6dcb08ea ("powerpc/mm: Enable mappings above 128TB")
Cc: stable@vger.kernel.org # v4.12+
Reported-by: Florian Weimer <fweimer@redhat.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
Currently userspace is able to request mmap() search between 128T-512T
by specifying a hint address that is greater than 128T. But that means
a hint of 128T exactly will return an address below 128T, which is
confusing and wrong.
So fix the logic to check the hint is greater than *or equal* to 128T.
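In other words (a simplified sketch, not the literal hunk;
DEFAULT_MAP_WINDOW is the 128TB boundary constant used by that code,
and high_limit is illustrative):

  /* before: a hint of exactly 128TB kept the 128TB limit */
  if (addr > DEFAULT_MAP_WINDOW)
          high_limit = mm->context.addr_limit;

  /* after: a hint of exactly 128TB also selects the full address space */
  if (addr >= DEFAULT_MAP_WINDOW)
          high_limit = mm->context.addr_limit;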
Fixes: f4ea6dcb08ea ("powerpc/mm: Enable mappings above 128TB")
Cc: stable@vger.kernel.org # v4.12+
Suggested-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Suggested-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Split out of Nick's bigger patch]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
There is a typo inside the pinmux setup code. The function is called
refclk and not reclk.
Fixes: 53263a1c6852 ("MIPS: ralink: add mt7628an support")
Signed-off-by: Mathias Kresin <dev@kresin.me>
Acked-by: John Crispin <john@phrozen.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 3.19+
Patchwork: https://patchwork.linux-mips.org/patch/16047/
Signed-off-by: James Hogan <jhogan@kernel.org>
|
|
According to the datasheet the REFCLK pin is shared with GPIO#37 and
the PERST pin is shared with GPIO#36.
Fixes: 53263a1c6852 ("MIPS: ralink: add mt7628an support")
Signed-off-by: Mathias Kresin <dev@kresin.me>
Acked-by: John Crispin <john@phrozen.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 3.19+
Patchwork: https://patchwork.linux-mips.org/patch/16046/
Signed-off-by: James Hogan <jhogan@kernel.org>
|
|
Commit 398a719d34a1 ("powerpc/mm: Update bits used to skip hash_page")
mistakenly dropped the DSISR_DABRMATCH bit from the mask of bits tested
to skip trying to hash a page.
As a result, the DABR matches would no longer be detected.
This adds it back. We open code it in the 2 places where it matters
rather than fold it into DSISR_BAD_FAULT_32S/64S because this isn't
technically a bad fault and while we would never hit it with the
current code, I prefer if page_fault_is_bad() didn't trigger on these.
Fixes: 398a719d34a1 ("powerpc/mm: Update bits used to skip hash_page")
Cc: stable@vger.kernel.org # v4.14
Tested-by: Pedro Miraglia Franco de Carvalho <pedromfc@br.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
Function platform_get_irq_byname() returns a negative error code on
failure, and a zero or positive number on success. However, in function
cpcap_usb_init_irq(), positive IRQ numbers are also taken as error
cases. Use "if (irq < 0)" instead of "if (!irq)" to validate the return
value of platform_get_irq_byname().
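The corrected pattern looks roughly like this (the resource name is
illustrative, not necessarily the one used by the driver):

  irq = platform_get_irq_byname(pdev, "vbus");
  if (irq < 0)            /* check for a negative error code, not just 0 */
          return irq;     /* propagate the error */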
Signed-off-by: Pan Bian <bianpan2016@163.com>
Signed-off-by: Sebastian Reichel <sebastian.reichel@collabora.co.uk>
|
|
This is no longer required after commit 959bc7b22bd2
("gpio: Automatically add lockdep keys")
Signed-off-by: Axel Lin <axel.lin@ingics.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
|
|
Commit 6184fc0b8dd7 ("quota: Propagate error from ->acquire_dquot()")
missed handling the error from dquot_initialize() in dquot_file_open(); fix it.
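After the fix, the open path looks roughly like this (paraphrased, not
the exact diff):

  int dquot_file_open(struct inode *inode, struct file *file)
  {
          int error;

          error = generic_file_open(inode, file);
          if (!error && (file->f_mode & FMODE_WRITE))
                  error = dquot_initialize(inode);    /* error now propagated */
          return error;
  }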
Signed-off-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Jan Kara <jack@suse.cz>
|
|
The CPU hotplug notifiers are history. Remove the last reminders.
Reported-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
The jprobes APIs are deprecated - but are still in occasional use for code that
few people seem to care about, so stop generating deprecation warnings.
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
The (alleged) users of the module addresses are the same: kernel
profiling.
So just expose the same helper and format macros, and unify the logic.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
On the Samsung Chromebook Plus I get this error with 4.14-rc3:
BUG: scheduling while atomic: kworker/3:1/50/0x00000002
Modules linked in:
CPU: 3 PID: 50 Comm: kworker/3:1 Not tainted 4.14.0-0.rc3-kevin #2
Hardware name: Google Kevin (DT)
Workqueue: events analogix_dp_psr_work
Call trace:
[<ffffff80080873b0>] dump_backtrace+0x0/0x320
[<ffffff80080876e4>] show_stack+0x14/0x20
[<ffffff8008606d38>] dump_stack+0x9c/0xbc
[<ffffff80080c6b5c>] __schedule_bug+0x4c/0x70
[<ffffff80086188c0>] __schedule+0x3f0/0x458
[<ffffff8008618960>] schedule+0x38/0xa0
[<ffffff800861c20c>] schedule_hrtimeout_range_clock+0x84/0xe8
[<ffffff800861c2a0>] schedule_hrtimeout_range+0x10/0x18
[<ffffff800861bcec>] usleep_range+0x64/0x78
[<ffffff8008415a6c>] analogix_dp_transfer+0x16c/0x340
[<ffffff8008412550>] analogix_dpaux_transfer+0x10/0x18
[<ffffff80083ceb14>] drm_dp_dpcd_access+0x4c/0xf0
[<ffffff80083cf614>] drm_dp_dpcd_write+0x1c/0x28
[<ffffff8008413b98>] analogix_dp_disable_psr+0x60/0xa8
[<ffffff800840da3c>] analogix_dp_psr_work+0x4c/0x90
[<ffffff80080bb09c>] process_one_work+0x1d4/0x348
[<ffffff80080bb258>] worker_thread+0x48/0x478
[<ffffff80080c11fc>] kthread+0x12c/0x130
[<ffffff8008084290>] ret_from_fork+0x10/0x18
Changing rockchip_dp_device::psr_lock to a mutex rather
than a spinlock seems to fix the issue.
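The problem pattern and its fix, in simplified form (paraphrased from
the call trace above, not quoted from the driver):

  /* before: spinlock held across a DP AUX transfer that sleeps
   * (analogix_dp_transfer() ends up in usleep_range()) */
  spin_lock(&dp->psr_lock);
  analogix_dp_disable_psr(dp);
  spin_unlock(&dp->psr_lock);

  /* after: a mutex may be held across sleeping calls */
  mutex_lock(&dp->psr_lock);
  analogix_dp_disable_psr(dp);
  mutex_unlock(&dp->psr_lock);

The lock's declaration and initialisation change accordingly
(spinlock_t/spin_lock_init() become struct mutex/mutex_init()).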
Signed-off-by: Emil Renner Berthing <kernel@esmil.dk>
Tested-by: Enric Balletbo i Serra <enric.balletbo@collabora.com>
Signed-off-by: Mark Yao <mark.yao@rock-chips.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171004175346.11956-1-kernel@esmil.dk
|
|
Xin Long says:
====================
net: improve the process of redirect and toobig for ipv6 tunnels
There are 3 kinds of ICMP packets for tunnels to process: toobig
(needfrag), redirect, and others. They should be handled as follows:
- toobig (needfrag)
update the lower dst's pmtu via the route cache, and also update the sk
dst's pmtu if possible; otherwise it is fine for the sk dst pmtu to be
updated on the tx path.
- redirect
update the lower dst's gw via the route cache and return; there is no
need to send this redirect packet to the user sk.
- others
send the packet to the user's sk; it is also fine to use err_count to
count it and report a failed link on the tx path.
All ipv4 tunnels basically follow this, while some ipv6 tunnels do it
differently: ip6gre and ip6_tunnel update the tunnel dev's mtu instead
of the lower dst pmtu, and have no redirect handling in their
err_handlers, which doesn't make sense and even causes performance
problems.
This patchset improves the handling of redirect and toobig for ip6gre,
ip4ip6 and ip6ip6 tunnels, as is done for ipv4 tunnels.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch removes some useless redirect code and fixes some
indentation in ip4ip6's and ip6ip6's err_handlers.
Note that the redirect ICMP packet is already processed in ip6_tnl_err;
the old redirect code in ip4ip6_err never actually worked, even before
this patch. Besides, there is no need to send a redirect to the user's
sk, since it is meant for the lower dst, so just remove it in this patch.
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The same improvement as in "ip6_gre: process toobig in a better way"
is needed by ip4ip6 and ip6ip6 as well.
Note that ip4ip6 and ip6ip6 will also update the sk dst pmtu in their
err_handlers. As said before, gre6 could not do this, as its inner
proto is not certain. But for all of them, the sk dst pmtu will be
updated on the tx path if needed.
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The same process for redirect in "ip6_gre: add the process for redirect
in ip6gre_err" is needed by ip4ip6 and ip6ip6 as well.
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Now ip6gre processes a toobig ICMP packet by setting the gre dev's mtu
in ip6gre_err, which causes a few problems:
- It cannot set the mtu with dev_set_mtu since it is not in user
context, so the route cache and idev->cnf.mtu6 do not get updated.
- It has to update the sk dst pmtu in the tx path according to
gredev->mtu for ip6gre, while the pmtu is updated again according to
the lower dst pmtu in ip6_tnl_xmit.
- Changing dev->mtu in response to a toobig ICMP packet is not a good
idea; it should only affect the pmtu.
This patch processes toobig by updating the lower dst's pmtu; the sk
dst pmtu will later be updated in ip6_tnl_xmit, the same way as in
ip4gre.
Note that the gre dev's mtu will no longer be updated; it doesn't make
any sense to change the dev's mtu after receiving a toobig packet.
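Sketched, the err handler ends up doing something along these lines for
toobig (helper signature recalled from the 4.15-era IPv6 code, so treat
it as an assumption rather than the literal diff):

  case ICMPV6_PKT_TOOBIG:
          /* update the lower dst's pmtu instead of the gre device's mtu;
           * the sk dst pmtu is refreshed later in ip6_tnl_xmit() */
          ip6_update_pmtu(skb, dev_net(skb->dev), info, 0, 0,
                          sock_net_uid(dev_net(skb->dev), NULL));
          break;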
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch adds redirect ICMP packet processing for ip6gre by calling
ip6_redirect() in ip6gre_err(), as vti6_err does.
Prior to this patch, no route cache entry was even generated after
receiving a redirect.
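Sketched, the new case looks roughly like this (helper signature
recalled from the 4.15-era code, so treat it as an assumption):

  case NDISC_REDIRECT:
          /* teach the route cache the new gateway; nothing is passed
           * on to the user's sk */
          ip6_redirect(skb, dev_net(skb->dev), skb->dev->ifindex, 0,
                       sock_net_uid(dev_net(skb->dev), NULL));
          break;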
Reported-by: Jianlin Shi <jishi@redhat.com>
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
In the xmit path, these variables are set many times, while setting
them once is enough.
After long-term testing, the throughput performance is better than
before.
CC: Srinivas Eeda <srinivas.eeda@oracle.com>
CC: Joe Jin <joe.jin@oracle.com>
CC: Junxiao Bi <junxiao.bi@oracle.com>
Signed-off-by: Zhu Yanjun <yanjun.zhu@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/sameo/nfc-next
Samuel Ortiz says:
====================
NFC 4.15 pull request
This is the NFC pull request for 4.15. We have:
- A new netlink command for explicitly deactivating NFC targets
- i2c constification for all NFC drivers
- One NFC device allocation error path fix
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Andy Zhou says:
====================
Openvswitch meter action
This patch series is the first attempt to add openvswitch
meter support. We have previously experimented with adding
metering support in nftables. However 1) It was not clear
how to expose a named nftables object cleanly, and 2)
the logic that implements metering is quite small, < 100 lines
of code.
With those two observations, it seems cleaner to add meter
support in the openvswitch module directly.
---
v1 (RFC) -> v2: remove unused code, improve locking,
and address other review comments
v2 -> v3: rebase
v3 -> v4: fix undefined "__udivdi3" references on 32-bit builds;
use div_u64() instead.
v4 -> v5: rebase
====================
Acked-by: Pravin B Shelar <pshelar@ovn.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Implements OVS kernel meter action support.
Signed-off-by: Andy Zhou <azhou@ovn.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The OVS kernel datapath so far does not support the OpenFlow meter
action. This is the first stab at adding kernel datapath meter support.
This implementation supports only the drop band type.
Signed-off-by: Andy Zhou <azhou@ovn.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Later patches will invoke get_dp() outside of datapath.c. Export it.
Signed-off-by: Andy Zhou <azhou@ovn.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Meter has its own netlink family. Define netlink messages and attributes
for communicating with the user space programs.
Signed-off-by: Andy Zhou <azhou@ovn.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Florian Fainelli says:
====================
net: dsa: b53: Support prepended Broadcom tags
This patch series adds support for a prepended version of the 4-byte
Broadcom tag that we already support. This type of tag will typically
be used when interfacing to an SoC like BCM58xx (Northstar Plus), which
supports a Flow Accelerator (WIP).
In that case, we need to support a slightly different tagging format.
The first patch does a bit of refactoring and passes a port index to
the get_tag_protocol() function, since at least two different drivers
(mt7530, b53) need that information to decide whether to support
tagging.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
On BCM58xx devices (Northstar Plus), there is an accelerator attached to
port 8, which only works if we use prepended Broadcom tags. Resolve
that difference in our get_tag_protocol() function by setting the
appropriate tagging protocol in that case. We need to change
b53_brcm_hdr_setup() a little bit now since we can deal with two types
of Broadcom tags.
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add a new type, DSA_TAG_PROTO_PREPEND, which allows us to support the
4-byte Broadcom tag that we already support, but in a format where it
is prepended to the packet instead of located between the MAC SA and
the EtherType (DSA_TAG_PROTO_BRCM).
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
In preparation for supporting the same Broadcom tag format, but
prepended to the Ethernet frame instead of inserted between the MAC SA
and the EtherType, restructure the code a little bit to make that
possible and take an offset parameter.
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
A number of drivers want to check whether the configured CPU port is a
possible configuration for enabling tagging; pass down the CPU port
number so they can verify that.
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
gcc-4.4.4 (at least) has issues with initializers and anonymous unions:
net/sched/sch_red.c: In function 'red_dump_offload':
net/sched/sch_red.c:282: error: unknown field 'stats' specified in initializer
net/sched/sch_red.c:282: warning: initialization makes integer from pointer without a cast
net/sched/sch_red.c:283: error: unknown field 'stats' specified in initializer
net/sched/sch_red.c:283: warning: initialization makes integer from pointer without a cast
net/sched/sch_red.c: In function 'red_dump_stats':
net/sched/sch_red.c:352: error: unknown field 'xstats' specified in initializer
net/sched/sch_red.c:352: warning: initialization makes integer from pointer without a cast
Work around this.
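The workaround pattern, reduced to a self-contained sketch (type and
field names are made up for illustration): keep the designated
initializer to named members and assign the anonymous-union member
separately.

  struct red_stats { int drops; };

  struct red_offload {
          int command;
          union {                         /* anonymous union */
                  void *set;
                  struct red_stats *stats;
          };
  };

  static void report_stats(struct red_stats *st)
  {
          /* gcc-4.4.4 cannot handle ".stats = st" in the initializer,
           * so only name ordinary members here ... */
          struct red_offload opt = { .command = 1 };

          /* ... and fill in the anonymous-union member afterwards */
          opt.stats = st;
          (void)opt;                      /* silence unused warning in this sketch */
  }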
Fixes: 602f3baf2218 ("net_sch: red: Add offload ability to RED qdisc")
Cc: Nogah Frankel <nogahf@mellanox.com>
Cc: Jiri Pirko <jiri@mellanox.com>
Cc: Simon Horman <simon.horman@netronome.com>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Since Mellanox's focus is on newer adapters, we would like to have the
ability to disable support for old gen2 adapters.
This can be done by turning off the MLX4_CORE_GEN2 Kconfig flag.
We keep it turned on by default.
Signed-off-by: Slava Shwartsman <slavash@mellanox.com>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The way people generally use netlink_dump is that they fill in the skb
as much as possible, breaking when nla_put returns an error. Then, they
get called again and start filling out the next skb, and again, and so
forth. The mechanism at work here is the ability for the iterative
dumping function to detect when the skb is filled up and not fill it
past the brim, waiting for a fresh skb for the rest of the data.
However, if the attributes are small and nicely packed, it is possible
that a dump callback function successfully fills in attributes until the
skb is of size 4080 (libmnl's default page-sized receive buffer size).
The dump function completes, satisfied, and then, if it happens to be
that this is actually the last skb, and no further ones are to be sent,
then netlink_dump will add on the NLMSG_DONE part:
nlh = nlmsg_put_answer(skb, cb, NLMSG_DONE, sizeof(len), NLM_F_MULTI);
It is very important that netlink_dump does this, of course. However, in
this example, that call to nlmsg_put_answer will fail, because the
previous filling by the dump function did not leave it enough room. And
how could it possibly have done so? All of the nla_put variety of
functions simply check to see if the skb has enough tailroom,
independent of the context it is in.
In order to keep the important assumptions of all netlink dump users, it
is therefore important to give them an skb that has this end part of the
tail already reserved, so that the call to nlmsg_put_answer does not
fail. Otherwise, library authors are forced to find some bizarre sized
receive buffer that has a large modulo relative to the common sizes of
messages received, which is ugly and buggy.
This patch thus saves the NLMSG_DONE for an additional message, for the
case that things are dangerously close to the brim. This requires
keeping track of the errno from ->dump() across calls.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Dave Taht says:
====================
netem: add nsec scheduling and slot feature
This patch series converts netem away from the old "ticks" interface and
userspace API, and adds support for a new "slot" feature intended to
emulate bursty macs such as WiFi and LTE better.
Changes since v2:
Use u64 for packet_len_sched_time()
Use simpler max(time_to_send,q->slot.slot_next)
Changes since v1:
Always pass new nanosecond APIs to userspace
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|