Age | Commit message | Author |
|
The command ./scripts/kernel-doc -none include/linux/hid.h reports:
include/linux/hid.h:818: warning: cannot understand function prototype: 'struct hid_ll_driver '
include/linux/hid.h:1135: warning: expecting prototype for hid_may_wakeup(). Prototype was for hid_hw_may_wakeup() instead
Address those kernel-doc warnings.
Signed-off-by: Lukas Bulwahn <lukas.bulwahn@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
|
|
This reworks the handling of hci_inquiry_result_with_rssi_evt to not use
a union to represent the different inquiry responses.
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
Tested-by: Soenke Huster <soenke.huster@eknoes.de>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
|
|
Eric Dumazet suggested allowing users to modify the max GRO packet size.
We have seen GRO being disabled by users of appliances (such as
wifi access points) because of claimed bufferbloat issues,
or worked around in sch_cake by splitting GRO/GSO packets.
Instead of disabling GRO completely, one can choose to limit
the maximum packet size of GRO packets, depending on their
latency constraints.
This patch adds a per-device gro_max_size attribute
that can be changed with the ip link command:
ip link set dev eth0 gro_max_size 16000
Suggested-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Coco Li <lixiaoyan@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
When multiple sockets using the SOF_TIMESTAMPING_BIND_PHC flag received
a packet with a hardware timestamp (e.g. multiple PTP instances in
different PTP domains using the UDPv4/v6 multicast or L2 transport),
the timestamps received on some sockets were corrupted due to repeated
conversion of the same timestamp (by the same or different vclocks).
Fix ptp_convert_timestamp() to not modify the shared skb timestamp
and return the converted timestamp as a ktime_t instead. If the
conversion fails, return 0 to not confuse the application with
timestamps corresponding to an unexpected PHC.
Fixes: d7c088265588 ("net: socket: support hardware timestamp conversion to PHC bound")
Signed-off-by: Miroslav Lichvar <mlichvar@redhat.com>
Cc: Yangbo Lu <yangbo.lu@nxp.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
As discussed during review here:
https://patchwork.kernel.org/project/netdevbpf/patch/20220105132141.2648876-3-vladimir.oltean@nxp.com/
we should inform developers about pitfalls of concurrent access to the
boolean properties of dsa_switch and dsa_port, now that they've been
converted to bit fields. No other measure than a comment needs to be
taken, since the code paths that update these bit fields are not
concurrent with each other.
Suggested-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This is a cosmetic incremental fixup to commits
7787ff776398 ("net: dsa: merge all bools of struct dsa_switch into a single u32")
bde82f389af1 ("net: dsa: merge all bools of struct dsa_port into a single u8")
The desire to make this change was enunciated after posting these
patches here:
https://patchwork.kernel.org/project/netdevbpf/cover/20220105132141.2648876-1-vladimir.oltean@nxp.com/
but due to a slight timing overlap (message posted at 2:28 p.m. UTC,
merge commit is at 2:46 p.m. UTC), that comment was missed and the
changes were applied as-is.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec
Steffen Klassert says:
====================
pull request (net): ipsec 2022-01-06
1) Fix xfrm policy lookups for ipv6 gre packets by initializing
fl6_gre_key properly. From Ghalem Boudour.
2) Fix the dflt policy check on forwarding when there is no
policy configured. The check was done for the wrong direction.
From Nicolas Dichtel.
3) Use the correct 'struct xfrm_user_offload' when calculating
netlink message lengths in xfrm_sa_len(). From Eric Dumazet.
4) Treat inserting xfrm interface id 0 as an error.
From Antony Antony.
5) Fail if xfrm state or policy is inserted with XFRMA_IF_ID 0,
xfrm interfaces with id 0 are not allowed.
From Antony Antony.
6) Fix inner_ipproto setting in the sec_path for tunnel mode.
From Raed Salem.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec-next
Steffen Klassert says:
====================
pull request (net-next): ipsec-next 2022-01-06
1) Fix some clang_analyzer warnings about variables that are never read.
From luo penghao.
2) Check for pols[0] only once in xfrm_expand_policies().
From Jean Sacren.
3) The SA curlft.use_time was updated only at SA creation time.
Update it whenever the SA is used. From Antony Antony.
4) Add support for SM3 secure hash.
From Xu Jia.
5) Add support for SM4 symmetric cipher algorithm.
From Xu Jia.
6) Add a rate limit for SA mapping change messages.
From Antony Antony.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
page->freelist is for the use of slab. page->index occupies the same
set of bits as page->freelist, and by using an integer instead of a
pointer, we can avoid casts.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: <x86@kernel.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
|
|
KASAN accesses some slab related struct page fields so we need to
convert it to struct slab. Some places are a bit simplified thanks to
kasan_addr_to_slab() encapsulating the PageSlab flag check through
virt_to_slab(). When resolving object address to either a real slab or
a large kmalloc, use struct folio as the intermediate type for testing
the slab flag to avoid unnecessary implicit compound_head().
[ vbabka@suse.cz: use struct folio, adjust to differences in previous
patches ]
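As a rough illustration of the intermediate-folio pattern described above (not
the literal patch; the addr_to_cache() helper name is made up for this sketch):
/* Hedged sketch: resolve an address to either a real slab or a large
 * kmalloc folio, testing the slab flag on the folio so no implicit
 * compound_head() call is needed. */
static struct kmem_cache *addr_to_cache(const void *addr)
{
	struct folio *folio = virt_to_folio(addr);

	if (!folio_test_slab(folio))
		return NULL;	/* large kmalloc or other non-slab memory */

	return folio_slab(folio)->slab_cache;
}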
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
Reviewed-by: Roman Gushchin <guro@fb.com>
Tested-by: Hyeongogn Yoo <42.hyeyoo@gmail.com>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Konovalov <andreyknvl@gmail.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: <kasan-dev@googlegroups.com>
|
|
page->memcg_data is used with MEMCG_DATA_OBJCGS flag only for slab pages
so convert all the related infrastructure to struct slab. Also use
struct folio instead of struct page when resolving object pointers.
This is not just a mechanical change of types and names. Now in
mem_cgroup_from_obj() we use folio_test_slab() to decide if we interpret
the folio as a real slab instead of a large kmalloc, instead of relying
on MEMCG_DATA_OBJCGS bit that used to be checked in page_objcgs_check().
Similarly in memcg_slab_free_hook() where we can encounter
kmalloc_large() pages (here the folio slab flag check is implied by
virt_to_slab()). As a result, page_objcgs_check() can be dropped instead
of converted.
To avoid include cycles, move the inline definition of slab_objcgs()
from memcontrol.h to mm/slab.h.
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Roman Gushchin <guro@fb.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: <cgroups@vger.kernel.org>
|
|
KASAN, KFENCE and memcg interact with SLAB or SLUB internals through
functions nearest_obj(), obj_to_index() and objs_per_slab() that use
struct page as parameter. This patch converts it to struct slab
including all callers, through a coccinelle semantic patch.
// Options: --include-headers --no-includes --smpl-spacing include/linux/slab_def.h include/linux/slub_def.h mm/slab.h mm/kasan/*.c mm/kfence/kfence_test.c mm/memcontrol.c mm/slab.c mm/slub.c
// Note: needs coccinelle 1.1.1 to avoid breaking whitespace
@@
@@
-objs_per_slab_page(
+objs_per_slab(
...
)
{ ... }
@@
@@
-objs_per_slab_page(
+objs_per_slab(
...
)
@@
identifier fn =~ "obj_to_index|objs_per_slab";
@@
fn(...,
- const struct page *page
+ const struct slab *slab
,...)
{
<...
(
- page_address(page)
+ slab_address(slab)
|
- page
+ slab
)
...>
}
@@
identifier fn =~ "nearest_obj";
@@
fn(...,
- struct page *page
+ const struct slab *slab
,...)
{
<...
(
- page_address(page)
+ slab_address(slab)
|
- page
+ slab
)
...>
}
@@
identifier fn =~ "nearest_obj|obj_to_index|objs_per_slab";
expression E;
@@
fn(...,
(
- slab_page(E)
+ E
|
- virt_to_page(E)
+ virt_to_slab(E)
|
- virt_to_head_page(E)
+ virt_to_slab(E)
|
- page
+ page_slab(page)
)
,...)
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
Reviewed-by: Roman Gushchin <guro@fb.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Julia Lawall <julia.lawall@inria.fr>
Cc: Luis Chamberlain <mcgrof@kernel.org>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Konovalov <andreyknvl@gmail.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Marco Elver <elver@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: <kasan-dev@googlegroups.com>
Cc: <cgroups@vger.kernel.org>
|
|
Update comments mentioning pages to mention slabs where appropriate.
Also update some goto labels accordingly.
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Roman Gushchin <guro@fb.com>
|
|
The majority of conversion from struct page to struct slab in SLUB
internals can be delegated to a coccinelle semantic patch. This includes
renaming of variables with 'page' in name to 'slab', and similar.
Big thanks to Julia Lawall and Luis Chamberlain for help with
coccinelle.
// Options: --include-headers --no-includes --smpl-spacing include/linux/slub_def.h mm/slub.c
// Note: needs coccinelle 1.1.1 to avoid breaking whitespace, and ocaml for the
// embedded script
// build list of functions to exclude from applying the next rule
@initialize:ocaml@
@@
let ok_function p =
not (List.mem (List.hd p).current_element ["nearest_obj";"obj_to_index";"objs_per_slab_page";"__slab_lock";"__slab_unlock";"free_nonslab_page";"kmalloc_large_node"])
// convert the type from struct page to struct slab in all functions except the
// list from previous rule
// this also affects struct kmem_cache_cpu, but that's ok
@@
position p : script:ocaml() { ok_function p };
@@
- struct page@p
+ struct slab
// in struct kmem_cache_cpu, change the name from page to slab
// the type was already converted by the previous rule
@@
@@
struct kmem_cache_cpu {
...
-struct slab *page;
+struct slab *slab;
...
}
// there are many places that use c->page which is now c->slab after the
// previous rule
@@
struct kmem_cache_cpu *c;
@@
-c->page
+c->slab
@@
@@
struct kmem_cache {
...
- unsigned int cpu_partial_pages;
+ unsigned int cpu_partial_slabs;
...
}
@@
struct kmem_cache *s;
@@
- s->cpu_partial_pages
+ s->cpu_partial_slabs
@@
@@
static void
- setup_page_debug(
+ setup_slab_debug(
...)
{...}
@@
@@
- setup_page_debug(
+ setup_slab_debug(
...);
// for all functions (with exceptions), change any "struct slab *page"
// parameter to "struct slab *slab" in the signature, and generally all
// occurrences of "page" to "slab" in the body - with some special cases.
@@
identifier fn !~ "free_nonslab_page|obj_to_index|objs_per_slab_page|nearest_obj";
@@
fn(...,
- struct slab *page
+ struct slab *slab
,...)
{
<...
- page
+ slab
...>
}
// similar to previous but the param is called partial_page
@@
identifier fn;
@@
fn(...,
- struct slab *partial_page
+ struct slab *partial_slab
,...)
{
<...
- partial_page
+ partial_slab
...>
}
// similar to previous but for functions that take pointer to struct page ptr
@@
identifier fn;
@@
fn(...,
- struct slab **ret_page
+ struct slab **ret_slab
,...)
{
<...
- ret_page
+ ret_slab
...>
}
// functions converted by previous rules that were temporarily called using
// slab_page(E) so we want to remove the wrapper now that they accept struct
// slab ptr directly
@@
identifier fn =~ "slab_free|do_slab_free";
expression E;
@@
fn(...,
- slab_page(E)
+ E
,...)
// similar to previous but for another pattern
@@
identifier fn =~ "slab_pad_check|check_object";
@@
fn(...,
- folio_page(folio, 0)
+ slab
,...)
// functions that were returning struct page ptr and now will return struct
// slab ptr, including slab_page() wrapper removal
@@
identifier fn =~ "allocate_slab|new_slab";
expression E;
@@
static
-struct slab *
+struct slab *
fn(...)
{
<...
- slab_page(E)
+ E
...>
}
// rename any former struct page * declarations
@@
@@
struct slab *
(
- page
+ slab
|
- partial_page
+ partial_slab
|
- oldpage
+ oldslab
)
;
// this has to be separate from previous rule as page and page2 appear at the
// same line
@@
@@
struct slab *
-page2
+slab2
;
// similar but with initial assignment
@@
expression E;
@@
struct slab *
(
- page
+ slab
|
- flush_page
+ flush_slab
|
- discard_page
+ slab_to_discard
|
- page_to_unfreeze
+ slab_to_unfreeze
)
= E;
// convert most of struct page to struct slab usage inside functions (with
// exceptions), including specific variable renames
@@
identifier fn !~ "nearest_obj|obj_to_index|objs_per_slab_page|__slab_(un)*lock|__free_slab|free_nonslab_page|kmalloc_large_node";
expression E;
@@
fn(...)
{
<...
(
- int pages;
+ int slabs;
|
- int pages = E;
+ int slabs = E;
|
- page
+ slab
|
- flush_page
+ flush_slab
|
- partial_page
+ partial_slab
|
- oldpage->pages
+ oldslab->slabs
|
- oldpage
+ oldslab
|
- unsigned int nr_pages;
+ unsigned int nr_slabs;
|
- nr_pages
+ nr_slabs
|
- unsigned int partial_pages = E;
+ unsigned int partial_slabs = E;
|
- partial_pages
+ partial_slabs
)
...>
}
// this has to be split out from the previous rule so that lines containing
// multiple matching changes will be fully converted
@@
identifier fn !~ "nearest_obj|obj_to_index|objs_per_slab_page|__slab_(un)*lock|__free_slab|free_nonslab_page|kmalloc_large_node";
@@
fn(...)
{
<...
(
- slab->pages
+ slab->slabs
|
- pages
+ slabs
|
- page2
+ slab2
|
- discard_page
+ slab_to_discard
|
- page_to_unfreeze
+ slab_to_unfreeze
)
...>
}
// after we simply changed all occurrences of page to slab, some usages need
// adjustment for slab-specific functions, or use slab_page() wrapper
@@
identifier fn !~ "nearest_obj|obj_to_index|objs_per_slab_page|__slab_(un)*lock|__free_slab|free_nonslab_page|kmalloc_large_node";
@@
fn(...)
{
<...
(
- page_slab(slab)
+ slab
|
- kasan_poison_slab(slab)
+ kasan_poison_slab(slab_page(slab))
|
- page_address(slab)
+ slab_address(slab)
|
- page_size(slab)
+ slab_size(slab)
|
- PageSlab(slab)
+ folio_test_slab(slab_folio(slab))
|
- page_to_nid(slab)
+ slab_nid(slab)
|
- compound_order(slab)
+ slab_order(slab)
)
...>
}
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Roman Gushchin <guro@fb.com>
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: Julia Lawall <julia.lawall@inria.fr>
Cc: Luis Chamberlain <mcgrof@kernel.org>
|
|
Ensure that we're not seeing a tail page inside __check_heap_object() by
converting to a slab instead of a page. Take the opportunity to mark
the slab as const since we're not modifying it. Also move the
declaration of __check_heap_object() to mm/slab.h so it's not available
to the wider kernel.
[ vbabka@suse.cz: in check_heap_object() only convert to struct slab for
actual PageSlab pages; use folio as intermediate step instead of page ]
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Roman Gushchin <guro@fb.com>
|
|
Make struct slab independent of struct page. It still uses the
underlying memory in struct page for storing slab-specific data, but
slab and slub can now be weaned off using struct page directly. Some of
the wrapper functions (slab_address() and slab_order()) still need to
cast to struct folio, but this is a significant disentanglement.
[ vbabka@suse.cz: Rebase on folios, use folio instead of page where
possible.
Do not duplicate flags field in struct slab, instead make the related
accessors go through slab_folio(). For testing pfmemalloc use the
folio_*_active flag accessors directly so the PageSlabPfmemalloc
wrappers can be removed later.
Make folio_slab() expect only folio_test_slab() == true folios and
virt_to_slab() return NULL when folio_test_slab() == false.
Move struct slab to mm/slab.h.
Don't represent with struct slab pages that are not true slab pages,
but just a compound page obtained directly from the page allocator (as with
large kmalloc() for SLUB and SLOB). ]
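A hedged sketch of the accessor pattern described above (not the literal
mm/slab.h code; the exact definitions may differ):
/* Sketch: slab-specific accessors that reach the underlying memory by
 * going through slab_folio(), as the message above describes. */
static inline void *slab_address(const struct slab *slab)
{
	return folio_address(slab_folio(slab));
}

static inline int slab_nid(const struct slab *slab)
{
	return folio_nid(slab_folio(slab));
}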
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Roman Gushchin <guro@fb.com>
|
|
There are no callers outside of mm/slub.c anymore.
Move freelist_corrupted() that calls object_err() to avoid a need for
forward declaration.
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Roman Gushchin <guro@fb.com>
|
|
The number of GPA bits supported for a RISC-V Guest/VM is based on the
MMU mode used by the G-stage translation. KVM RISC-V will detect and
use the best possible MMU mode for the G-stage in kvm_arch_init().
We add a generic VM capability KVM_CAP_VM_GPA_BITS which can be used by
the KVM userspace to get the number of GPA (guest physical address) bits
supported for a Guest/VM.
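A hedged userspace sketch (not part of the patch) of how the new capability
could be queried; it assumes KVM_CAP_VM_GPA_BITS is visible in the installed
kernel headers and that KVM_CHECK_EXTENSION returns the number of GPA bits:
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
	int kvm = open("/dev/kvm", O_RDWR);
	int vm = ioctl(kvm, KVM_CREATE_VM, 0);
	/* Assumption: the extension check returns the GPA bit count. */
	int gpa_bits = ioctl(vm, KVM_CHECK_EXTENSION, KVM_CAP_VM_GPA_BITS);

	printf("guest physical address bits: %d\n", gpa_bits);
	return 0;
}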
Signed-off-by: Anup Patel <anup.patel@wdc.com>
Reviewed-and-tested-by: Atish Patra <atishp@rivosinc.com>
|
|
The main reason for this change is that the unpopulated-alloc
code cannot be used in its current form on Arm, but there
is a desire to reuse it to avoid wasting real RAM pages
for the grant/foreign mappings.
The problem is that the system "iomem_resource" is used for
the address space allocation, but the truly unallocated
space can't be figured out precisely by the domain on Arm
without hypervisor involvement. For example, not all device
I/O regions are known by the time the domain starts creating
grant/foreign mappings, and following the advice from
"iomem_resource" we might end up reusing these regions by
mistake. So the hypervisor, which maintains the P2M for
the domain, is in the best position to provide unused regions
of guest physical address space which can be safely used
to create grant/foreign mappings.
Introduce a new helper, arch_xen_unpopulated_init(), whose purpose
is to create a specific Xen resource based on the memory regions
provided by the hypervisor, to be used as unused space for Xen
scratch pages. If the arch doesn't define arch_xen_unpopulated_init(),
the default "iomem_resource" will be used.
Update the argument list of allocate_resource() in fill_list()
to always allocate a region from the hotpluggable range
(the maximum possible addressable physical memory range for which
the linear mapping could be created). If the arch doesn't define
arch_get_mappable_range(), the default range (0, -1) will be used.
The behaviour on x86 isn't changed by the current patch, as neither
arch_xen_unpopulated_init() nor arch_get_mappable_range()
is implemented for it.
Also fall back to allocating xenballooned pages (ballooning out RAM
pages) if we do not have any suitable resource to work with
(target_resource is invalid), in which case we won't be able
to provide unpopulated pages on request.
Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Link: https://lore.kernel.org/r/1639080336-26573-5-git-send-email-olekstysh@gmail.com
Signed-off-by: Juergen Gross <jgross@suse.com>
|
|
This patch rolls back some of the changes introduced by commit
121f2faca2c0a "xen/balloon: rename alloc/free_xenballooned_pages"
in order to make it possible to still allocate xenballooned pages
if CONFIG_XEN_UNPOPULATED_ALLOC is enabled.
On Arm the unpopulated pages will be allocated on top of extended
regions provided by Xen via device-tree (the subsequent patches
will add required bits to support unpopulated-alloc feature on Arm).
The problem is that the extended regions feature was introduced
into Xen quite recently (during the 4.16 release cycle). This
effectively means that Linux must only use unpopulated-alloc on Arm
if it is running on "new Xen" which advertises these regions.
But that will only be known after parsing the "hypervisor" node
at boot time, so before doing that we cannot assume anything.
In order to keep working if CONFIG_XEN_UNPOPULATED_ALLOC is enabled
and the extended regions are not advertised (Linux is running on
"old Xen", etc.) we need the fallback to alloc_xenballooned_pages().
This way we wouldn't reduce the amount of usable memory (wasting
RAM pages) for any of the external mappings anymore (and would eliminate
XSA-300) with "new Xen", but would still be functional, ballooning
out RAM pages, with "old Xen".
Also rename alloc(free)_xenballooned_pages to xen_alloc(free)_ballooned_pages
and make xen_alloc(free)_unpopulated_pages static inline in xen.h
if CONFIG_XEN_UNPOPULATED_ALLOC is disabled.
Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Link: https://lore.kernel.org/r/1639080336-26573-4-git-send-email-olekstysh@gmail.com
Signed-off-by: Juergen Gross <jgross@suse.com>
|
|
The hypervisor has been supplying this information for a couple of major
releases. Make use of it. The need to set a flag in the capabilities
field also points out that the prior setting of that field from the
hypervisor interface's gbl_caps one was wrong, so that code gets deleted
(there's also no equivalent of this in native boot code).
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Link: https://lore.kernel.org/r/a3df8bf3-d044-b7bb-3383-cd5239d6d4af@suse.com
Signed-off-by: Juergen Gross <jgross@suse.com>
|
|
Add an xdp_do_redirect_frame() variant which supports pre-computed
xdp_frame structures. This will be used in bpf_prog_run() to avoid having
to write to the xdp_frame structure when the XDP program doesn't modify the
frame boundaries.
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220103150812.87914-6-toke@redhat.com
|
|
All map redirect functions except XSK maps convert xdp_buff to xdp_frame
before enqueueing it. So move this conversion out of the map functions
and into xdp_do_redirect(). This removes a bit of duplicated code, but more
importantly it makes it possible to support caller-allocated xdp_frame
structures, which will be added in a subsequent commit.
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220103150812.87914-5-toke@redhat.com
|
|
Store the XDP mem ID inside the page_pool struct so it can be retrieved
later for use in bpf_prog_run().
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Link: https://lore.kernel.org/bpf/20220103150812.87914-4-toke@redhat.com
|
|
Add a new callback function to page_pool that, if set, will be called every
time a new page is allocated. This will be used from bpf_test_run() to
initialise the page data with the data provided by userspace when running
XDP programs with redirect turned on.
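A hedged sketch of how a caller might wire up such a callback; the callback
signature and the init_callback/init_arg field names are assumptions of this
sketch based on the description above, and create_prefilled_pool() is a
made-up helper:
/* Sketch: copy a caller-supplied, PAGE_SIZE-sized pattern into every page
 * the pool allocates. */
static void init_page_data(struct page *page, void *arg)
{
	memcpy(page_address(page), arg, PAGE_SIZE);
}

static struct page_pool *create_prefilled_pool(void *pattern)
{
	struct page_pool_params pp_params = {
		.order		= 0,
		.pool_size	= 64,
		.nid		= NUMA_NO_NODE,
		.init_callback	= init_page_data,	/* assumed field name */
		.init_arg	= pattern,		/* assumed field name */
	};

	return page_pool_create(&pp_params);
}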
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Link: https://lore.kernel.org/bpf/20220103150812.87914-3-toke@redhat.com
|
|
The functions that register an XDP memory model take a struct xdp_rxq as
parameter, but the RXQ is not actually used for anything other than pulling
out the struct xdp_mem_info that it embeds. So refactor the register
functions and export variants that just take a pointer to the xdp_mem_info.
This is in preparation for enabling XDP_REDIRECT in bpf_prog_run(), using a
page_pool instance that is not connected to any network device.
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220103150812.87914-2-toke@redhat.com
|
|
Add device tree bindings for SMU (System Management Unit) controller of
Toshiba Visconti TMPV770x SoC series.
Signed-off-by: Nobuhiro Iwamatsu <nobuhiro1.iwamatsu@toshiba.co.jp>
Reviewed-by: Rob Herring <robh@kernel.org>
Link: https://lore.kernel.org/r/20211025031038.4180686-3-nobuhiro1.iwamatsu@toshiba.co.jp
Signed-off-by: Stephen Boyd <sboyd@kernel.org>
|
|
No conflicts.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
Pull networking fixes from Jakub Kicinski"
"Networking fixes, including fixes from bpf, and WiFi. One last pull
request, turns out some of the recent fixes did more harm than good.
Current release - regressions:
- Revert "xsk: Do not sleep in poll() when need_wakeup set", made the
problem worse
- Revert "net: phy: fixed_phy: Fix NULL vs IS_ERR() checking in
__fixed_phy_register", broke EPROBE_DEFER handling
- Revert "net: usb: r8152: Add MAC pass-through support for more
Lenovo Docks", broke setups without a Lenovo dock
Current release - new code bugs:
- selftests: set amt.sh executable
Previous releases - regressions:
- batman-adv: mcast: don't send link-local multicast to mcast routers
Previous releases - always broken:
- ipv4/ipv6: check attribute length for RTA_FLOW / RTA_GATEWAY
- sctp: hold endpoint before calling cb in
sctp_transport_lookup_process
- mac80211: mesh: embed mesh_paths and mpp_paths into
ieee80211_if_mesh to avoid complicated handling of sub-object
allocation failures
- seg6: fix traceroute in the presence of SRv6
- tipc: fix a kernel-infoleak in __tipc_sendmsg()"
* tag 'net-5.16-final' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (36 commits)
selftests: set amt.sh executable
Revert "net: usb: r8152: Add MAC passthrough support for more Lenovo Docks"
sfc: The RX page_ring is optional
iavf: Fix limit of total number of queues to active queues of VF
i40e: Fix incorrect netdev's real number of RX/TX queues
i40e: Fix for displaying message regarding NVM version
i40e: fix use-after-free in i40e_sync_filters_subtask()
i40e: Fix to not show opcode msg on unsuccessful VF MAC change
ieee802154: atusb: fix uninit value in atusb_set_extended_addr
mac80211: mesh: embedd mesh_paths and mpp_paths into ieee80211_if_mesh
mac80211: initialize variable have_higher_than_11mbit
sch_qfq: prevent shift-out-of-bounds in qfq_init_qdisc
netrom: fix copying in user data in nr_setsockopt
udp6: Use Segment Routing Header for dest address if present
icmp: ICMPV6: Examine invoking packet for Segment Route Headers.
seg6: export get_srh() for ICMP handling
Revert "net: phy: fixed_phy: Fix NULL vs IS_ERR() checking in __fixed_phy_register"
ipv6: Do cleanup if attribute validation fails in multipath route
ipv6: Continue processing multipath route even if gateway attribute is invalid
net/fsl: Remove leftover definition in xgmac_mdio
...
|
|
When handling a MAD port query, report NDR speed when NDR is supported in the
port capability mask.
Link: https://lore.kernel.org/r/a2ab630d2a634547db9b581faa9d65da2edb9d05.1639554831.git.leonro@nvidia.com
Signed-off-by: Maher Sanalla <msanalla@nvidia.com>
Reviewed-by: Michael Guralnik <michaelgur@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
|
|
When iterating a list, a particular request may need to be moved for
special handling. Provide a helper function to achieve that so drivers
don't need to reimplement rqlist manipulation.
Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20220105170518.3181469-4-kbusch@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
While iterating a list, a particular request may need to be removed for
special handling. Provide an iterator that can safely handle that.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Link: https://lore.kernel.org/r/20220105170518.3181469-3-kbusch@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
Move the request list macros to the header file that defines the struct
they operate on.
Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20220105170518.3181469-2-kbusch@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
Several drivers used the same function to initialize a query MAD,
so move that function to a global header file.
Link: https://lore.kernel.org/r/af6f35c590ff5ef56d0137351b8b295af0f7c13c.1641369858.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Reviewed-by: Håkon Bugge <haakon.bugge@oracle.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
|
|
Merge branches 'for-next/misc', 'for-next/cache-ops-dzp', 'for-next/stacktrace', 'for-next/xor-neon', 'for-next/kasan', 'for-next/armv8_7-fp', 'for-next/atomics', 'for-next/bti', 'for-next/sve', 'for-next/kselftest' and 'for-next/kcsan', remote-tracking branch 'arm64/for-next/perf' into for-next/core
* arm64/for-next/perf: (32 commits)
arm64: perf: Don't register user access sysctl handler multiple times
drivers: perf: marvell_cn10k: fix an IS_ERR() vs NULL check
perf/smmuv3: Fix unused variable warning when CONFIG_OF=n
arm64: perf: Support new DT compatibles
arm64: perf: Simplify registration boilerplate
arm64: perf: Support Denver and Carmel PMUs
drivers/perf: hisi: Add driver for HiSilicon PCIe PMU
docs: perf: Add description for HiSilicon PCIe PMU driver
dt-bindings: perf: Add YAML schemas for Marvell CN10K LLC-TAD pmu bindings
drivers: perf: Add LLC-TAD perf counter support
perf/smmuv3: Synthesize IIDR from CoreSight ID registers
perf/smmuv3: Add devicetree support
dt-bindings: Add Arm SMMUv3 PMCG binding
perf/arm-cmn: Add debugfs topology info
perf/arm-cmn: Add CI-700 Support
dt-bindings: perf: arm-cmn: Add CI-700
perf/arm-cmn: Support new IP features
perf/arm-cmn: Demarcate CMN-600 specifics
perf/arm-cmn: Move group validation data off-stack
perf/arm-cmn: Optimise DTC counter accesses
...
* for-next/misc:
: Miscellaneous patches
arm64: Use correct method to calculate nomap region boundaries
arm64: Drop outdated links in comments
arm64: errata: Fix exec handling in erratum 1418040 workaround
arm64: Unhash early pointer print plus improve comment
asm-generic: introduce io_stop_wc() and add implementation for ARM64
arm64: remove __dma_*_area() aliases
docs/arm64: delete a space from tagged-address-abi
arm64/fp: Add comments documenting the usage of state restore functions
arm64: mm: Use asid feature macro for cheanup
arm64: mm: Rename asid2idx() to ctxid2asid()
arm64: kexec: reduce calls to page_address()
arm64: extable: remove unused ex_handler_t definition
arm64: entry: Use SDEI event constants
arm64: Simplify checking for populated DT
arm64/kvm: Fix bitrotted comment for SVE handling in handle_exit.c
* for-next/cache-ops-dzp:
: Avoid DC instructions when DCZID_EL0.DZP == 1
arm64: mte: DC {GVA,GZVA} shouldn't be used when DCZID_EL0.DZP == 1
arm64: clear_page() shouldn't use DC ZVA when DCZID_EL0.DZP == 1
* for-next/stacktrace:
: Unify the arm64 unwind code
arm64: Make some stacktrace functions private
arm64: Make dump_backtrace() use arch_stack_walk()
arm64: Make profile_pc() use arch_stack_walk()
arm64: Make return_address() use arch_stack_walk()
arm64: Make __get_wchan() use arch_stack_walk()
arm64: Make perf_callchain_kernel() use arch_stack_walk()
arm64: Mark __switch_to() as __sched
arm64: Add comment for stack_info::kr_cur
arch: Make ARCH_STACKWALK independent of STACKTRACE
* for-next/xor-neon:
: Use SHA3 instructions to speed up XOR
arm64/xor: use EOR3 instructions when available
* for-next/kasan:
: Log potential KASAN shadow aliases
arm64: mm: log potential KASAN shadow alias
arm64: mm: use die_kernel_fault() in do_mem_abort()
* for-next/armv8_7-fp:
: Add HWCAPS for ARMv8.7 FEAT_AFP and FEAT_RPRES
arm64: cpufeature: add HWCAP for FEAT_RPRES
arm64: add ID_AA64ISAR2_EL1 sys register
arm64: cpufeature: add HWCAP for FEAT_AFP
* for-next/atomics:
: arm64 atomics clean-ups and codegen improvements
arm64: atomics: lse: define RETURN ops in terms of FETCH ops
arm64: atomics: lse: improve constraints for simple ops
arm64: atomics: lse: define ANDs in terms of ANDNOTs
arm64: atomics lse: define SUBs in terms of ADDs
arm64: atomics: format whitespace consistently
* for-next/bti:
: BTI clean-ups
arm64: Ensure that the 'bti' macro is defined where linkage.h is included
arm64: Use BTI C directly and unconditionally
arm64: Unconditionally override SYM_FUNC macros
arm64: Add macro version of the BTI instruction
arm64: ftrace: add missing BTIs
arm64: kexec: use __pa_symbol(empty_zero_page)
arm64: update PAC description for kernel
* for-next/sve:
: SVE code clean-ups and refactoring in preparation for the Scalable Matrix Extension
arm64/sve: Minor clarification of ABI documentation
arm64/sve: Generalise vector length configuration prctl() for SME
arm64/sve: Make sysctl interface for SVE reusable by SME
* for-next/kselftest:
: arm64 kselftest additions
kselftest/arm64: Add pidbench for floating point syscall cases
kselftest/arm64: Add a test program to exercise the syscall ABI
kselftest/arm64: Allow signal tests to trigger from a function
kselftest/arm64: Parameterise ptrace vector length information
* for-next/kcsan:
: Enable KCSAN for arm64
arm64: Enable KCSAN
|
|
<linux/device.h>
The <linux/usb/ch9.h> header is used over 1,400 times in a typical distro
build, but few of its users actually need the full <linux/device.h> header.
--------------------------------------------------------------------
| Combined, preprocessed C code size of header, without line markers,
| with comments stripped:
-------------------------
before: | #include <linux/usb/ch9.h> | LOC: 7,078 | headers: 172
after: | #include <linux/usb/ch9.h> | LOC: 812 | headers: 38
Remove the <linux/device.h> include and add it to the places that need it.
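A minimal illustration of the kind of follow-up change implied (hypothetical
file, not from the patch): a translation unit that relied on the indirect
include now states the dependency explicitly.
/* Before: struct device was only visible via the <linux/usb/ch9.h> chain. */
#include <linux/usb/ch9.h>

/* After: the dependency is spelled out where it is actually needed. */
#include <linux/usb/ch9.h>
#include <linux/device.h>	/* no longer pulled in by ch9.h */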
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next
Marc Kleine-Budde says:
====================
pull-request: can-next 2022-01-05
this is a pull request of 15 patches for net-next/master.
The first patch is by me and removes an unused variable from the
usb_8dev driver.
Andy Shevchenko contributes a patch for the mcp251x driver, which
removes an unneeded assignment.
Jimmy Assarsson's patch for the kvaser_usb driver makes use of units.h
in the assignment of frequencies.
Lad Prabhakar provides 2 patches, converting the ti_hecc and
sja1000 drivers to make use of platform_get_irq().
The 10 remaining patches are by Vincent Mailhol. First, the etas_es58x
driver populates net_device::dev_port. The next 5 patches clean up
the handling of CAN error and CAN RTR messages in all drivers. The
remaining 4 patches enhance the CAN controller mode flag handling and
export it via netlink to user space.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
There is a 7 byte hole after dst->setup and a 4 byte hole after
dst->default_proto. Combining them, we have a single hole of just 3
bytes on 64 bit machines.
Before:
pahole -C dsa_switch_tree net/dsa/slave.o
struct dsa_switch_tree {
struct list_head list; /* 0 16 */
struct list_head ports; /* 16 16 */
struct raw_notifier_head nh; /* 32 8 */
unsigned int index; /* 40 4 */
struct kref refcount; /* 44 4 */
struct net_device * * lags; /* 48 8 */
bool setup; /* 56 1 */
/* XXX 7 bytes hole, try to pack */
/* --- cacheline 1 boundary (64 bytes) --- */
const struct dsa_device_ops * tag_ops; /* 64 8 */
enum dsa_tag_protocol default_proto; /* 72 4 */
/* XXX 4 bytes hole, try to pack */
struct dsa_platform_data * pd; /* 80 8 */
struct list_head rtable; /* 88 16 */
unsigned int lags_len; /* 104 4 */
unsigned int last_switch; /* 108 4 */
/* size: 112, cachelines: 2, members: 13 */
/* sum members: 101, holes: 2, sum holes: 11 */
/* last cacheline: 48 bytes */
};
After:
pahole -C dsa_switch_tree net/dsa/slave.o
struct dsa_switch_tree {
struct list_head list; /* 0 16 */
struct list_head ports; /* 16 16 */
struct raw_notifier_head nh; /* 32 8 */
unsigned int index; /* 40 4 */
struct kref refcount; /* 44 4 */
struct net_device * * lags; /* 48 8 */
const struct dsa_device_ops * tag_ops; /* 56 8 */
/* --- cacheline 1 boundary (64 bytes) --- */
enum dsa_tag_protocol default_proto; /* 64 4 */
bool setup; /* 68 1 */
/* XXX 3 bytes hole, try to pack */
struct dsa_platform_data * pd; /* 72 8 */
struct list_head rtable; /* 80 16 */
unsigned int lags_len; /* 96 4 */
unsigned int last_switch; /* 100 4 */
/* size: 104, cachelines: 2, members: 13 */
/* sum members: 101, holes: 1, sum holes: 3 */
/* last cacheline: 40 bytes */
};
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
dst->ports is accessed most notably by dsa_master_find_slave(), which is
invoked in the RX path.
dst->lags is accessed by dsa_lag_dev(), which is invoked in the RX path
of tag_dsa.c.
dst->tag_ops, dst->default_proto and dst->pd don't need to be in the
first cache line, so they are moved out by this change.
Before:
pahole -C dsa_switch_tree net/dsa/slave.o
struct dsa_switch_tree {
struct list_head list; /* 0 16 */
struct raw_notifier_head nh; /* 16 8 */
unsigned int index; /* 24 4 */
struct kref refcount; /* 28 4 */
bool setup; /* 32 1 */
/* XXX 7 bytes hole, try to pack */
const struct dsa_device_ops * tag_ops; /* 40 8 */
enum dsa_tag_protocol default_proto; /* 48 4 */
/* XXX 4 bytes hole, try to pack */
struct dsa_platform_data * pd; /* 56 8 */
/* --- cacheline 1 boundary (64 bytes) --- */
struct list_head ports; /* 64 16 */
struct list_head rtable; /* 80 16 */
struct net_device * * lags; /* 96 8 */
unsigned int lags_len; /* 104 4 */
unsigned int last_switch; /* 108 4 */
/* size: 112, cachelines: 2, members: 13 */
/* sum members: 101, holes: 2, sum holes: 11 */
/* last cacheline: 48 bytes */
};
After:
pahole -C dsa_switch_tree net/dsa/slave.o
struct dsa_switch_tree {
struct list_head list; /* 0 16 */
struct list_head ports; /* 16 16 */
struct raw_notifier_head nh; /* 32 8 */
unsigned int index; /* 40 4 */
struct kref refcount; /* 44 4 */
struct net_device * * lags; /* 48 8 */
bool setup; /* 56 1 */
/* XXX 7 bytes hole, try to pack */
/* --- cacheline 1 boundary (64 bytes) --- */
const struct dsa_device_ops * tag_ops; /* 64 8 */
enum dsa_tag_protocol default_proto; /* 72 4 */
/* XXX 4 bytes hole, try to pack */
struct dsa_platform_data * pd; /* 80 8 */
struct list_head rtable; /* 88 16 */
unsigned int lags_len; /* 104 4 */
unsigned int last_switch; /* 108 4 */
/* size: 112, cachelines: 2, members: 13 */
/* sum members: 101, holes: 2, sum holes: 11 */
/* last cacheline: 48 bytes */
};
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Currently, num_ports is declared as size_t, which is defined as
__kernel_ulong_t, therefore it occupies 8 bytes of memory.
Even switches with port numbers in the range of tens are exotic, so
there is no need for this amount of storage.
Additionally, because the max_num_bridges member right above it is also
4 bytes, it means the compiler needs to add padding between the last 2
fields. By reducing the size, we don't need that padding and can reduce
the struct size.
Before:
pahole -C dsa_switch net/dsa/slave.o
struct dsa_switch {
struct device * dev; /* 0 8 */
struct dsa_switch_tree * dst; /* 8 8 */
unsigned int index; /* 16 4 */
u32 setup:1; /* 20: 0 4 */
u32 vlan_filtering_is_global:1; /* 20: 1 4 */
u32 needs_standalone_vlan_filtering:1; /* 20: 2 4 */
u32 configure_vlan_while_not_filtering:1; /* 20: 3 4 */
u32 untag_bridge_pvid:1; /* 20: 4 4 */
u32 assisted_learning_on_cpu_port:1; /* 20: 5 4 */
u32 vlan_filtering:1; /* 20: 6 4 */
u32 pcs_poll:1; /* 20: 7 4 */
u32 mtu_enforcement_ingress:1; /* 20: 8 4 */
/* XXX 23 bits hole, try to pack */
struct notifier_block nb; /* 24 24 */
/* XXX last struct has 4 bytes of padding */
void * priv; /* 48 8 */
void * tagger_data; /* 56 8 */
/* --- cacheline 1 boundary (64 bytes) --- */
struct dsa_chip_data * cd; /* 64 8 */
const struct dsa_switch_ops * ops; /* 72 8 */
u32 phys_mii_mask; /* 80 4 */
/* XXX 4 bytes hole, try to pack */
struct mii_bus * slave_mii_bus; /* 88 8 */
unsigned int ageing_time_min; /* 96 4 */
unsigned int ageing_time_max; /* 100 4 */
struct dsa_8021q_context * tag_8021q_ctx; /* 104 8 */
struct devlink * devlink; /* 112 8 */
unsigned int num_tx_queues; /* 120 4 */
unsigned int num_lag_ids; /* 124 4 */
/* --- cacheline 2 boundary (128 bytes) --- */
unsigned int max_num_bridges; /* 128 4 */
/* XXX 4 bytes hole, try to pack */
size_t num_ports; /* 136 8 */
/* size: 144, cachelines: 3, members: 27 */
/* sum members: 132, holes: 2, sum holes: 8 */
/* sum bitfield members: 9 bits, bit holes: 1, sum bit holes: 23 bits */
/* paddings: 1, sum paddings: 4 */
/* last cacheline: 16 bytes */
};
After:
pahole -C dsa_switch net/dsa/slave.o
struct dsa_switch {
struct device * dev; /* 0 8 */
struct dsa_switch_tree * dst; /* 8 8 */
unsigned int index; /* 16 4 */
u32 setup:1; /* 20: 0 4 */
u32 vlan_filtering_is_global:1; /* 20: 1 4 */
u32 needs_standalone_vlan_filtering:1; /* 20: 2 4 */
u32 configure_vlan_while_not_filtering:1; /* 20: 3 4 */
u32 untag_bridge_pvid:1; /* 20: 4 4 */
u32 assisted_learning_on_cpu_port:1; /* 20: 5 4 */
u32 vlan_filtering:1; /* 20: 6 4 */
u32 pcs_poll:1; /* 20: 7 4 */
u32 mtu_enforcement_ingress:1; /* 20: 8 4 */
/* XXX 23 bits hole, try to pack */
struct notifier_block nb; /* 24 24 */
/* XXX last struct has 4 bytes of padding */
void * priv; /* 48 8 */
void * tagger_data; /* 56 8 */
/* --- cacheline 1 boundary (64 bytes) --- */
struct dsa_chip_data * cd; /* 64 8 */
const struct dsa_switch_ops * ops; /* 72 8 */
u32 phys_mii_mask; /* 80 4 */
/* XXX 4 bytes hole, try to pack */
struct mii_bus * slave_mii_bus; /* 88 8 */
unsigned int ageing_time_min; /* 96 4 */
unsigned int ageing_time_max; /* 100 4 */
struct dsa_8021q_context * tag_8021q_ctx; /* 104 8 */
struct devlink * devlink; /* 112 8 */
unsigned int num_tx_queues; /* 120 4 */
unsigned int num_lag_ids; /* 124 4 */
/* --- cacheline 2 boundary (128 bytes) --- */
unsigned int max_num_bridges; /* 128 4 */
unsigned int num_ports; /* 132 4 */
/* size: 136, cachelines: 3, members: 27 */
/* sum members: 128, holes: 1, sum holes: 4 */
/* sum bitfield members: 9 bits, bit holes: 1, sum bit holes: 23 bits */
/* paddings: 1, sum paddings: 4 */
/* last cacheline: 8 bytes */
};
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
struct dsa_switch has 9 boolean properties, many of which are in fact
set by drivers for custom behavior (vlan_filtering_is_global,
needs_standalone_vlan_filtering, etc.). The binary layout of the
structure could be improved. For example, the "bool setup" at the
beginning introduces a gratuitous 7 byte hole in the first cache line.
The change merges all boolean properties into bitfields of a u32, and
places that u32 in the first cache line of the structure, since many
bools are accessed from the data path (untag_bridge_pvid, vlan_filtering,
vlan_filtering_is_global).
We place this u32 after the existing ds->index, which is also 4 bytes in
size. As a positive side effect, ds->tagger_data now fits into the first
cache line too, because 4 bytes are saved.
Before:
pahole -C dsa_switch net/dsa/slave.o
struct dsa_switch {
bool setup; /* 0 1 */
/* XXX 7 bytes hole, try to pack */
struct device * dev; /* 8 8 */
struct dsa_switch_tree * dst; /* 16 8 */
unsigned int index; /* 24 4 */
/* XXX 4 bytes hole, try to pack */
struct notifier_block nb; /* 32 24 */
/* XXX last struct has 4 bytes of padding */
void * priv; /* 56 8 */
/* --- cacheline 1 boundary (64 bytes) --- */
void * tagger_data; /* 64 8 */
struct dsa_chip_data * cd; /* 72 8 */
const struct dsa_switch_ops * ops; /* 80 8 */
u32 phys_mii_mask; /* 88 4 */
/* XXX 4 bytes hole, try to pack */
struct mii_bus * slave_mii_bus; /* 96 8 */
unsigned int ageing_time_min; /* 104 4 */
unsigned int ageing_time_max; /* 108 4 */
struct dsa_8021q_context * tag_8021q_ctx; /* 112 8 */
struct devlink * devlink; /* 120 8 */
/* --- cacheline 2 boundary (128 bytes) --- */
unsigned int num_tx_queues; /* 128 4 */
bool vlan_filtering_is_global; /* 132 1 */
bool needs_standalone_vlan_filtering; /* 133 1 */
bool configure_vlan_while_not_filtering; /* 134 1 */
bool untag_bridge_pvid; /* 135 1 */
bool assisted_learning_on_cpu_port; /* 136 1 */
bool vlan_filtering; /* 137 1 */
bool pcs_poll; /* 138 1 */
bool mtu_enforcement_ingress; /* 139 1 */
unsigned int num_lag_ids; /* 140 4 */
unsigned int max_num_bridges; /* 144 4 */
/* XXX 4 bytes hole, try to pack */
size_t num_ports; /* 152 8 */
/* size: 160, cachelines: 3, members: 27 */
/* sum members: 141, holes: 4, sum holes: 19 */
/* paddings: 1, sum paddings: 4 */
/* last cacheline: 32 bytes */
};
After:
pahole -C dsa_switch net/dsa/slave.o
struct dsa_switch {
struct device * dev; /* 0 8 */
struct dsa_switch_tree * dst; /* 8 8 */
unsigned int index; /* 16 4 */
u32 setup:1; /* 20: 0 4 */
u32 vlan_filtering_is_global:1; /* 20: 1 4 */
u32 needs_standalone_vlan_filtering:1; /* 20: 2 4 */
u32 configure_vlan_while_not_filtering:1; /* 20: 3 4 */
u32 untag_bridge_pvid:1; /* 20: 4 4 */
u32 assisted_learning_on_cpu_port:1; /* 20: 5 4 */
u32 vlan_filtering:1; /* 20: 6 4 */
u32 pcs_poll:1; /* 20: 7 4 */
u32 mtu_enforcement_ingress:1; /* 20: 8 4 */
/* XXX 23 bits hole, try to pack */
struct notifier_block nb; /* 24 24 */
/* XXX last struct has 4 bytes of padding */
void * priv; /* 48 8 */
void * tagger_data; /* 56 8 */
/* --- cacheline 1 boundary (64 bytes) --- */
struct dsa_chip_data * cd; /* 64 8 */
const struct dsa_switch_ops * ops; /* 72 8 */
u32 phys_mii_mask; /* 80 4 */
/* XXX 4 bytes hole, try to pack */
struct mii_bus * slave_mii_bus; /* 88 8 */
unsigned int ageing_time_min; /* 96 4 */
unsigned int ageing_time_max; /* 100 4 */
struct dsa_8021q_context * tag_8021q_ctx; /* 104 8 */
struct devlink * devlink; /* 112 8 */
unsigned int num_tx_queues; /* 120 4 */
unsigned int num_lag_ids; /* 124 4 */
/* --- cacheline 2 boundary (128 bytes) --- */
unsigned int max_num_bridges; /* 128 4 */
/* XXX 4 bytes hole, try to pack */
size_t num_ports; /* 136 8 */
/* size: 144, cachelines: 3, members: 27 */
/* sum members: 132, holes: 2, sum holes: 8 */
/* sum bitfield members: 9 bits, bit holes: 1, sum bit holes: 23 bits */
/* paddings: 1, sum paddings: 4 */
/* last cacheline: 16 bytes */
};
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Both dsa_port :: type and dsa_port :: index introduce a 4 octet hole
after them, so we can group them together and the holes would be
eliminated, turning 16 octets of storage into just 8. This makes the
cpu_dp pointer fit in the first cache line, which is good, because
dsa_slave_to_master(), called by dsa_enqueue_skb(), uses it.
Before:
pahole -C dsa_port net/dsa/slave.o
struct dsa_port {
union {
struct net_device * master; /* 0 8 */
struct net_device * slave; /* 0 8 */
}; /* 0 8 */
const struct dsa_device_ops * tag_ops; /* 8 8 */
struct dsa_switch_tree * dst; /* 16 8 */
struct sk_buff * (*rcv)(struct sk_buff *, struct net_device *); /* 24 8 */
enum {
DSA_PORT_TYPE_UNUSED = 0,
DSA_PORT_TYPE_CPU = 1,
DSA_PORT_TYPE_DSA = 2,
DSA_PORT_TYPE_USER = 3,
} type; /* 32 4 */
/* XXX 4 bytes hole, try to pack */
struct dsa_switch * ds; /* 40 8 */
unsigned int index; /* 48 4 */
/* XXX 4 bytes hole, try to pack */
const char * name; /* 56 8 */
/* --- cacheline 1 boundary (64 bytes) --- */
struct dsa_port * cpu_dp; /* 64 8 */
u8 mac[6]; /* 72 6 */
u8 stp_state; /* 78 1 */
u8 vlan_filtering:1; /* 79: 0 1 */
u8 learning:1; /* 79: 1 1 */
u8 lag_tx_enabled:1; /* 79: 2 1 */
u8 devlink_port_setup:1; /* 79: 3 1 */
u8 setup:1; /* 79: 4 1 */
/* XXX 3 bits hole, try to pack */
struct device_node * dn; /* 80 8 */
unsigned int ageing_time; /* 88 4 */
/* XXX 4 bytes hole, try to pack */
struct dsa_bridge * bridge; /* 96 8 */
struct devlink_port devlink_port; /* 104 288 */
/* --- cacheline 6 boundary (384 bytes) was 8 bytes ago --- */
struct phylink * pl; /* 392 8 */
struct phylink_config pl_config; /* 400 40 */
struct net_device * lag_dev; /* 440 8 */
/* --- cacheline 7 boundary (448 bytes) --- */
struct net_device * hsr_dev; /* 448 8 */
struct list_head list; /* 456 16 */
const struct ethtool_ops * orig_ethtool_ops; /* 472 8 */
const struct dsa_netdevice_ops * netdev_ops; /* 480 8 */
struct mutex addr_lists_lock; /* 488 32 */
/* --- cacheline 8 boundary (512 bytes) was 8 bytes ago --- */
struct list_head fdbs; /* 520 16 */
struct list_head mdbs; /* 536 16 */
/* size: 552, cachelines: 9, members: 30 */
/* sum members: 539, holes: 3, sum holes: 12 */
/* sum bitfield members: 5 bits, bit holes: 1, sum bit holes: 3 bits */
/* last cacheline: 40 bytes */
};
After:
pahole -C dsa_port net/dsa/slave.o
struct dsa_port {
union {
struct net_device * master; /* 0 8 */
struct net_device * slave; /* 0 8 */
}; /* 0 8 */
const struct dsa_device_ops * tag_ops; /* 8 8 */
struct dsa_switch_tree * dst; /* 16 8 */
struct sk_buff * (*rcv)(struct sk_buff *, struct net_device *); /* 24 8 */
struct dsa_switch * ds; /* 32 8 */
unsigned int index; /* 40 4 */
enum {
DSA_PORT_TYPE_UNUSED = 0,
DSA_PORT_TYPE_CPU = 1,
DSA_PORT_TYPE_DSA = 2,
DSA_PORT_TYPE_USER = 3,
} type; /* 44 4 */
const char * name; /* 48 8 */
struct dsa_port * cpu_dp; /* 56 8 */
/* --- cacheline 1 boundary (64 bytes) --- */
u8 mac[6]; /* 64 6 */
u8 stp_state; /* 70 1 */
u8 vlan_filtering:1; /* 71: 0 1 */
u8 learning:1; /* 71: 1 1 */
u8 lag_tx_enabled:1; /* 71: 2 1 */
u8 devlink_port_setup:1; /* 71: 3 1 */
u8 setup:1; /* 71: 4 1 */
/* XXX 3 bits hole, try to pack */
struct device_node * dn; /* 72 8 */
unsigned int ageing_time; /* 80 4 */
/* XXX 4 bytes hole, try to pack */
struct dsa_bridge * bridge; /* 88 8 */
struct devlink_port devlink_port; /* 96 288 */
/* --- cacheline 6 boundary (384 bytes) --- */
struct phylink * pl; /* 384 8 */
struct phylink_config pl_config; /* 392 40 */
struct net_device * lag_dev; /* 432 8 */
struct net_device * hsr_dev; /* 440 8 */
/* --- cacheline 7 boundary (448 bytes) --- */
struct list_head list; /* 448 16 */
const struct ethtool_ops * orig_ethtool_ops; /* 464 8 */
const struct dsa_netdevice_ops * netdev_ops; /* 472 8 */
struct mutex addr_lists_lock; /* 480 32 */
/* --- cacheline 8 boundary (512 bytes) --- */
struct list_head fdbs; /* 512 16 */
struct list_head mdbs; /* 528 16 */
/* size: 544, cachelines: 9, members: 30 */
/* sum members: 539, holes: 1, sum holes: 4 */
/* sum bitfield members: 5 bits, bit holes: 1, sum bit holes: 3 bits */
/* last cacheline: 32 bytes */
};
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
struct dsa_port has 5 bool members which create quite a number of 7 byte
holes in the structure layout. By merging them all into bitfields of a
u8, and placing that u8 in the 1-byte hole after dp->mac and dp->stp_state,
we can reduce the structure size from 576 bytes to 552 bytes on arm64.
Before:
pahole -C dsa_port net/dsa/slave.o
struct dsa_port {
union {
struct net_device * master; /* 0 8 */
struct net_device * slave; /* 0 8 */
}; /* 0 8 */
const struct dsa_device_ops * tag_ops; /* 8 8 */
struct dsa_switch_tree * dst; /* 16 8 */
struct sk_buff * (*rcv)(struct sk_buff *, struct net_device *); /* 24 8 */
enum {
DSA_PORT_TYPE_UNUSED = 0,
DSA_PORT_TYPE_CPU = 1,
DSA_PORT_TYPE_DSA = 2,
DSA_PORT_TYPE_USER = 3,
} type; /* 32 4 */
/* XXX 4 bytes hole, try to pack */
struct dsa_switch * ds; /* 40 8 */
unsigned int index; /* 48 4 */
/* XXX 4 bytes hole, try to pack */
const char * name; /* 56 8 */
/* --- cacheline 1 boundary (64 bytes) --- */
struct dsa_port * cpu_dp; /* 64 8 */
u8 mac[6]; /* 72 6 */
u8 stp_state; /* 78 1 */
/* XXX 1 byte hole, try to pack */
struct device_node * dn; /* 80 8 */
unsigned int ageing_time; /* 88 4 */
bool vlan_filtering; /* 92 1 */
bool learning; /* 93 1 */
/* XXX 2 bytes hole, try to pack */
struct dsa_bridge * bridge; /* 96 8 */
struct devlink_port devlink_port; /* 104 288 */
/* --- cacheline 6 boundary (384 bytes) was 8 bytes ago --- */
bool devlink_port_setup; /* 392 1 */
/* XXX 7 bytes hole, try to pack */
struct phylink * pl; /* 400 8 */
struct phylink_config pl_config; /* 408 40 */
/* --- cacheline 7 boundary (448 bytes) --- */
struct net_device * lag_dev; /* 448 8 */
bool lag_tx_enabled; /* 456 1 */
/* XXX 7 bytes hole, try to pack */
struct net_device * hsr_dev; /* 464 8 */
struct list_head list; /* 472 16 */
const struct ethtool_ops * orig_ethtool_ops; /* 488 8 */
const struct dsa_netdevice_ops * netdev_ops; /* 496 8 */
struct mutex addr_lists_lock; /* 504 32 */
/* --- cacheline 8 boundary (512 bytes) was 24 bytes ago --- */
struct list_head fdbs; /* 536 16 */
struct list_head mdbs; /* 552 16 */
bool setup; /* 568 1 */
/* size: 576, cachelines: 9, members: 30 */
/* sum members: 544, holes: 6, sum holes: 25 */
/* padding: 7 */
};
After:
pahole -C dsa_port net/dsa/slave.o
struct dsa_port {
union {
struct net_device * master; /* 0 8 */
struct net_device * slave; /* 0 8 */
}; /* 0 8 */
const struct dsa_device_ops * tag_ops; /* 8 8 */
struct dsa_switch_tree * dst; /* 16 8 */
struct sk_buff * (*rcv)(struct sk_buff *, struct net_device *); /* 24 8 */
enum {
DSA_PORT_TYPE_UNUSED = 0,
DSA_PORT_TYPE_CPU = 1,
DSA_PORT_TYPE_DSA = 2,
DSA_PORT_TYPE_USER = 3,
} type; /* 32 4 */
/* XXX 4 bytes hole, try to pack */
struct dsa_switch * ds; /* 40 8 */
unsigned int index; /* 48 4 */
/* XXX 4 bytes hole, try to pack */
const char * name; /* 56 8 */
/* --- cacheline 1 boundary (64 bytes) --- */
struct dsa_port * cpu_dp; /* 64 8 */
u8 mac[6]; /* 72 6 */
u8 stp_state; /* 78 1 */
u8 vlan_filtering:1; /* 79: 0 1 */
u8 learning:1; /* 79: 1 1 */
u8 lag_tx_enabled:1; /* 79: 2 1 */
u8 devlink_port_setup:1; /* 79: 3 1 */
u8 setup:1; /* 79: 4 1 */
/* XXX 3 bits hole, try to pack */
struct device_node * dn; /* 80 8 */
unsigned int ageing_time; /* 88 4 */
/* XXX 4 bytes hole, try to pack */
struct dsa_bridge * bridge; /* 96 8 */
struct devlink_port devlink_port; /* 104 288 */
/* --- cacheline 6 boundary (384 bytes) was 8 bytes ago --- */
struct phylink * pl; /* 392 8 */
struct phylink_config pl_config; /* 400 40 */
struct net_device * lag_dev; /* 440 8 */
/* --- cacheline 7 boundary (448 bytes) --- */
struct net_device * hsr_dev; /* 448 8 */
struct list_head list; /* 456 16 */
const struct ethtool_ops * orig_ethtool_ops; /* 472 8 */
const struct dsa_netdevice_ops * netdev_ops; /* 480 8 */
struct mutex addr_lists_lock; /* 488 32 */
/* --- cacheline 8 boundary (512 bytes) was 8 bytes ago --- */
struct list_head fdbs; /* 520 16 */
struct list_head mdbs; /* 536 16 */
/* size: 552, cachelines: 9, members: 30 */
/* sum members: 539, holes: 3, sum holes: 12 */
/* sum bitfield members: 5 bits, bit holes: 1, sum bit holes: 3 bits */
/* last cacheline: 40 bytes */
};
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The MAC address of a port is 6 octets in size, and this creates a 2
octet hole after it. There are some other u8 members of struct dsa_port
that we can put in that hole. One such member is the stp_state.
Before:
pahole -C dsa_port net/dsa/slave.o
struct dsa_port {
union {
struct net_device * master; /* 0 8 */
struct net_device * slave; /* 0 8 */
}; /* 0 8 */
const struct dsa_device_ops * tag_ops; /* 8 8 */
struct dsa_switch_tree * dst; /* 16 8 */
struct sk_buff * (*rcv)(struct sk_buff *, struct net_device *); /* 24 8 */
enum {
DSA_PORT_TYPE_UNUSED = 0,
DSA_PORT_TYPE_CPU = 1,
DSA_PORT_TYPE_DSA = 2,
DSA_PORT_TYPE_USER = 3,
} type; /* 32 4 */
/* XXX 4 bytes hole, try to pack */
struct dsa_switch * ds; /* 40 8 */
unsigned int index; /* 48 4 */
/* XXX 4 bytes hole, try to pack */
const char * name; /* 56 8 */
/* --- cacheline 1 boundary (64 bytes) --- */
struct dsa_port * cpu_dp; /* 64 8 */
u8 mac[6]; /* 72 6 */
/* XXX 2 bytes hole, try to pack */
struct device_node * dn; /* 80 8 */
unsigned int ageing_time; /* 88 4 */
bool vlan_filtering; /* 92 1 */
bool learning; /* 93 1 */
u8 stp_state; /* 94 1 */
/* XXX 1 byte hole, try to pack */
struct dsa_bridge * bridge; /* 96 8 */
struct devlink_port devlink_port; /* 104 288 */
/* --- cacheline 6 boundary (384 bytes) was 8 bytes ago --- */
bool devlink_port_setup; /* 392 1 */
/* XXX 7 bytes hole, try to pack */
struct phylink * pl; /* 400 8 */
struct phylink_config pl_config; /* 408 40 */
/* --- cacheline 7 boundary (448 bytes) --- */
struct net_device * lag_dev; /* 448 8 */
bool lag_tx_enabled; /* 456 1 */
/* XXX 7 bytes hole, try to pack */
struct net_device * hsr_dev; /* 464 8 */
struct list_head list; /* 472 16 */
const struct ethtool_ops * orig_ethtool_ops; /* 488 8 */
const struct dsa_netdevice_ops * netdev_ops; /* 496 8 */
struct mutex addr_lists_lock; /* 504 32 */
/* --- cacheline 8 boundary (512 bytes) was 24 bytes ago --- */
struct list_head fdbs; /* 536 16 */
struct list_head mdbs; /* 552 16 */
bool setup; /* 568 1 */
/* size: 576, cachelines: 9, members: 30 */
/* sum members: 544, holes: 6, sum holes: 25 */
/* padding: 7 */
};
After:
pahole -C dsa_port net/dsa/slave.o
struct dsa_port {
union {
struct net_device * master; /* 0 8 */
struct net_device * slave; /* 0 8 */
}; /* 0 8 */
const struct dsa_device_ops * tag_ops; /* 8 8 */
struct dsa_switch_tree * dst; /* 16 8 */
struct sk_buff * (*rcv)(struct sk_buff *, struct net_device *); /* 24 8 */
enum {
DSA_PORT_TYPE_UNUSED = 0,
DSA_PORT_TYPE_CPU = 1,
DSA_PORT_TYPE_DSA = 2,
DSA_PORT_TYPE_USER = 3,
} type; /* 32 4 */
/* XXX 4 bytes hole, try to pack */
struct dsa_switch * ds; /* 40 8 */
unsigned int index; /* 48 4 */
/* XXX 4 bytes hole, try to pack */
const char * name; /* 56 8 */
/* --- cacheline 1 boundary (64 bytes) --- */
struct dsa_port * cpu_dp; /* 64 8 */
u8 mac[6]; /* 72 6 */
u8 stp_state; /* 78 1 */
/* XXX 1 byte hole, try to pack */
struct device_node * dn; /* 80 8 */
unsigned int ageing_time; /* 88 4 */
bool vlan_filtering; /* 92 1 */
bool learning; /* 93 1 */
/* XXX 2 bytes hole, try to pack */
struct dsa_bridge * bridge; /* 96 8 */
struct devlink_port devlink_port; /* 104 288 */
/* --- cacheline 6 boundary (384 bytes) was 8 bytes ago --- */
bool devlink_port_setup; /* 392 1 */
/* XXX 7 bytes hole, try to pack */
struct phylink * pl; /* 400 8 */
struct phylink_config pl_config; /* 408 40 */
/* --- cacheline 7 boundary (448 bytes) --- */
struct net_device * lag_dev; /* 448 8 */
bool lag_tx_enabled; /* 456 1 */
/* XXX 7 bytes hole, try to pack */
struct net_device * hsr_dev; /* 464 8 */
struct list_head list; /* 472 16 */
const struct ethtool_ops * orig_ethtool_ops; /* 488 8 */
const struct dsa_netdevice_ops * netdev_ops; /* 496 8 */
struct mutex addr_lists_lock; /* 504 32 */
/* --- cacheline 8 boundary (512 bytes) was 24 bytes ago --- */
struct list_head fdbs; /* 536 16 */
struct list_head mdbs; /* 552 16 */
bool setup; /* 568 1 */
/* size: 576, cachelines: 9, members: 30 */
/* sum members: 544, holes: 6, sum holes: 25 */
/* padding: 7 */
};
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound into for-linus
ASoC: Updates for v5.17
Not much going on in the framework this release, but a big update for
drivers, especially the Intel and SOF ones.
- Refinements and cleanups around the delay() APIs.
- Wider use of dev_err_probe().
- Continuing cleanups and improvements to the SOF code.
- Support for pin switches in simple-card derived cards.
- Support for AMD Renoir ACP, Asahi Kasei Microdevices AKM4375, Intel
  systems using NAU8825 and MAX98390, Mediatek MT8195, nVidia Tegra20
S/PDIF, Qualcomm systems using ALC5682I-VS and Texas Instruments
TLV320ADC3xxx.
|
|
Pull 5.17 materials.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
|
|
Add a couple of helpers and definitions to extract the clause 45 regad
and devad fields from the regnum passed into MDIO drivers.
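The exact names and masks added by the patch are not reproduced here,
but a minimal sketch of such helpers could look like the following,
assuming the conventional clause 45 encoding with the device address in
bits 20:16 and the register address in bits 15:0 of regnum (the macro
and function names below are illustrative only):

#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/types.h>

#define MDIO_C45_DEVAD_MASK     GENMASK(20, 16) /* clause 45 device address */
#define MDIO_C45_REGAD_MASK     GENMASK(15, 0)  /* clause 45 register address */

static inline u16 mdio_c45_devad(u32 regnum)
{
        return FIELD_GET(MDIO_C45_DEVAD_MASK, regnum);
}

static inline u16 mdio_c45_regad(u32 regnum)
{
        return FIELD_GET(MDIO_C45_REGAD_MASK, regnum);
}

An MDIO driver's read/write callbacks would then use such helpers
instead of open-coded shifts and masks when handling a clause 45 regnum.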
Tested-by: Daniel Golle <daniel@makrotopia.org>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Daniel Golle <daniel@makrotopia.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Currently, the CAN netlink interface provides no easy way to check
the capabilities of a given controller. The only method from the
command line is to try each CAN_CTRLMODE_* flag individually and check
whether the netlink interface returns -EOPNOTSUPP or not
(alternatively, one may find it easier to check the source code of
the driver directly instead...).
This patch introduces a method for the user to check both the
supported and the static capabilities. The proposed method introduces
a new IFLA nest, IFLA_CAN_CTRLMODE_EXT, which extends the current
IFLA_CAN_CTRLMODE. This is done to guarantee full forward and
backward compatibility between the kernel and userland applications.
The IFLA_CAN_CTRLMODE_EXT nest contains a single entry:
IFLA_CAN_CTRLMODE_SUPPORTED. Because this entry is only used in one
direction (kernel to userland), no new struct nla_policy is
introduced.
The table below explains how IFLA_CAN_CTRLMODE_SUPPORTED (hereafter:
"supported") and can_ctrlmode::flags (hereafter: "flags") allow us to
identify both the supported and the static capabilities when masked
with any of the CAN_CTRLMODE_* bit flags; a small userland decoding
sketch follows the table:
supported &       flags &           Controller capabilities
CAN_CTRLMODE_*    CAN_CTRLMODE_*
-----------------------------------------------------------------------
false             false             Feature not supported (always disabled)
false             true              Static feature (always enabled)
true              false             Feature supported but disabled
true              true              Feature supported and enabled
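As a rough userland illustration of that decoding (the masks are
hard-coded here; a real application would obtain them from the
IFLA_CAN_CTRLMODE and IFLA_CAN_CTRLMODE_SUPPORTED netlink attributes,
and the helper name is made up for this sketch):

#include <stdio.h>
#include <linux/types.h>
#include <linux/can/netlink.h>  /* CAN_CTRLMODE_* flags */

/* Classify one CAN_CTRLMODE_* bit according to the table above. */
static const char *ctrlmode_capability(__u32 supported, __u32 flags, __u32 bit)
{
        if (!(supported & bit))
                return (flags & bit) ? "static (always enabled)"
                                     : "not supported (always disabled)";

        return (flags & bit) ? "supported and enabled"
                             : "supported but disabled";
}

int main(void)
{
        /* Example masks as they could be reported for some controller. */
        __u32 supported = CAN_CTRLMODE_LOOPBACK | CAN_CTRLMODE_LISTENONLY;
        __u32 flags = CAN_CTRLMODE_FD | CAN_CTRLMODE_LOOPBACK;

        printf("FD:       %s\n",
               ctrlmode_capability(supported, flags, CAN_CTRLMODE_FD));
        printf("LOOPBACK: %s\n",
               ctrlmode_capability(supported, flags, CAN_CTRLMODE_LOOPBACK));
        return 0;
}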
Link: https://lore.kernel.org/all/20211213160226.56219-5-mailhol.vincent@wanadoo.fr
Signed-off-by: Vincent Mailhol <mailhol.vincent@wanadoo.fr>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
|
|
Save eight bytes of holes on x86-64 architectures by reordering the
members of struct can_priv.
Before:
| $ pahole -C can_priv drivers/net/can/dev/dev.o
| struct can_priv {
| struct net_device * dev; /* 0 8 */
| struct can_device_stats can_stats; /* 8 24 */
| const struct can_bittiming_const * bittiming_const; /* 32 8 */
| const struct can_bittiming_const * data_bittiming_const; /* 40 8 */
| struct can_bittiming bittiming; /* 48 32 */
| /* --- cacheline 1 boundary (64 bytes) was 16 bytes ago --- */
| struct can_bittiming data_bittiming; /* 80 32 */
| const struct can_tdc_const * tdc_const; /* 112 8 */
| struct can_tdc tdc; /* 120 12 */
| /* --- cacheline 2 boundary (128 bytes) was 4 bytes ago --- */
| unsigned int bitrate_const_cnt; /* 132 4 */
| const u32 * bitrate_const; /* 136 8 */
| const u32 * data_bitrate_const; /* 144 8 */
| unsigned int data_bitrate_const_cnt; /* 152 4 */
| u32 bitrate_max; /* 156 4 */
| struct can_clock clock; /* 160 4 */
| unsigned int termination_const_cnt; /* 164 4 */
| const u16 * termination_const; /* 168 8 */
| u16 termination; /* 176 2 */
|
| /* XXX 6 bytes hole, try to pack */
|
| struct gpio_desc * termination_gpio; /* 184 8 */
| /* --- cacheline 3 boundary (192 bytes) --- */
| u16 termination_gpio_ohms[2]; /* 192 4 */
| enum can_state state; /* 196 4 */
| u32 ctrlmode; /* 200 4 */
| u32 ctrlmode_supported; /* 204 4 */
| int restart_ms; /* 208 4 */
|
| /* XXX 4 bytes hole, try to pack */
|
| struct delayed_work restart_work; /* 216 88 */
|
| /* XXX last struct has 4 bytes of padding */
|
| /* --- cacheline 4 boundary (256 bytes) was 48 bytes ago --- */
| int (*do_set_bittiming)(struct net_device *); /* 304 8 */
| int (*do_set_data_bittiming)(struct net_device *); /* 312 8 */
| /* --- cacheline 5 boundary (320 bytes) --- */
| int (*do_set_mode)(struct net_device *, enum can_mode); /* 320 8 */
| int (*do_set_termination)(struct net_device *, u16); /* 328 8 */
| int (*do_get_state)(const struct net_device *, enum can_state *); /* 336 8 */
| int (*do_get_berr_counter)(const struct net_device *, struct can_berr_counter *); /* 344 8 */
| unsigned int echo_skb_max; /* 352 4 */
|
| /* XXX 4 bytes hole, try to pack */
|
| struct sk_buff * * echo_skb; /* 360 8 */
|
| /* size: 368, cachelines: 6, members: 32 */
| /* sum members: 354, holes: 3, sum holes: 14 */
| /* paddings: 1, sum paddings: 4 */
| /* last cacheline: 48 bytes */
| };
After:
| $ pahole -C can_priv drivers/net/can/dev/dev.o
| struct can_priv {
| struct net_device * dev; /* 0 8 */
| struct can_device_stats can_stats; /* 8 24 */
| const struct can_bittiming_const * bittiming_const; /* 32 8 */
| const struct can_bittiming_const * data_bittiming_const; /* 40 8 */
| struct can_bittiming bittiming; /* 48 32 */
| /* --- cacheline 1 boundary (64 bytes) was 16 bytes ago --- */
| struct can_bittiming data_bittiming; /* 80 32 */
| const struct can_tdc_const * tdc_const; /* 112 8 */
| struct can_tdc tdc; /* 120 12 */
| /* --- cacheline 2 boundary (128 bytes) was 4 bytes ago --- */
| unsigned int bitrate_const_cnt; /* 132 4 */
| const u32 * bitrate_const; /* 136 8 */
| const u32 * data_bitrate_const; /* 144 8 */
| unsigned int data_bitrate_const_cnt; /* 152 4 */
| u32 bitrate_max; /* 156 4 */
| struct can_clock clock; /* 160 4 */
| unsigned int termination_const_cnt; /* 164 4 */
| const u16 * termination_const; /* 168 8 */
| u16 termination; /* 176 2 */
|
| /* XXX 6 bytes hole, try to pack */
|
| struct gpio_desc * termination_gpio; /* 184 8 */
| /* --- cacheline 3 boundary (192 bytes) --- */
| u16 termination_gpio_ohms[2]; /* 192 4 */
| unsigned int echo_skb_max; /* 196 4 */
| struct sk_buff * * echo_skb; /* 200 8 */
| enum can_state state; /* 208 4 */
| u32 ctrlmode; /* 212 4 */
| u32 ctrlmode_supported; /* 216 4 */
| int restart_ms; /* 220 4 */
| struct delayed_work restart_work; /* 224 88 */
|
| /* XXX last struct has 4 bytes of padding */
|
| /* --- cacheline 4 boundary (256 bytes) was 56 bytes ago --- */
| int (*do_set_bittiming)(struct net_device *); /* 312 8 */
| /* --- cacheline 5 boundary (320 bytes) --- */
| int (*do_set_data_bittiming)(struct net_device *); /* 320 8 */
| int (*do_set_mode)(struct net_device *, enum can_mode); /* 328 8 */
| int (*do_set_termination)(struct net_device *, u16); /* 336 8 */
| int (*do_get_state)(const struct net_device *, enum can_state *); /* 344 8 */
| int (*do_get_berr_counter)(const struct net_device *, struct can_berr_counter *); /* 352 8 */
|
| /* size: 360, cachelines: 6, members: 32 */
| /* sum members: 354, holes: 1, sum holes: 6 */
| /* paddings: 1, sum paddings: 4 */
| /* last cacheline: 40 bytes */
| };
Link: https://lore.kernel.org/all/20211213160226.56219-4-mailhol.vincent@wanadoo.fr
Signed-off-by: Vincent Mailhol <mailhol.vincent@wanadoo.fr>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
|
|
The previous patch removed can_priv::ctrlmode_static and replaced it
with can_get_static_ctrlmode().
A condition sine qua non for this to work is that the controller's
static modes must never be set in can_priv::ctrlmode_supported
(cf. the comment on can_priv::ctrlmode_supported, which states that it
is for "options that can be *modified* by netlink"). This condition
is already correctly fulfilled by all existing drivers that rely on
the ctrlmode_static feature.
Nonetheless, we added an extra safeguard in can_set_static_ctrlmode()
that returns an error value and warns the developer who would be
adventurous enough to mark a feature as static when it is already
marked as supported.
The drivers that rely on a static controller mode are then updated to
check the return value of can_set_static_ctrlmode().
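A simplified sketch of the shape such a safeguard could take in the
can_set_static_ctrlmode() helper (details of the real implementation in
include/linux/can/dev.h may differ):

static inline int can_set_static_ctrlmode(struct net_device *dev,
                                          u32 static_mode)
{
        struct can_priv *priv = netdev_priv(dev);

        /* A mode cannot be both user-switchable and statically enabled. */
        if (priv->ctrlmode_supported & static_mode) {
                netdev_warn(dev,
                            "both static and supported ctrlmode requested\n");
                return -EINVAL;
        }

        priv->ctrlmode = static_mode;

        return 0;
}

Drivers declaring a static mode then propagate this return value from
their probe/setup path instead of ignoring it.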
Link: https://lore.kernel.org/all/20211213160226.56219-3-mailhol.vincent@wanadoo.fr
Signed-off-by: Vincent Mailhol <mailhol.vincent@wanadoo.fr>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
|