author | Alexei Starovoitov <ast@kernel.org> | 2024-10-24 10:26:00 -0700
---|---|---
committer | Alexei Starovoitov <ast@kernel.org> | 2024-10-24 10:26:00 -0700
commit | c6fb8030b4baa01c850f99fc6da051b1017edc46 |
tree | 160ed2ab568f0fc513ccda0dfa19b08dfd00f4a1 /include/linux/bpf.h |
parent | 39b8ab1519687054769bc07feb97821fc40f56e2 |
parent | bd5879a6fe4be407bf36c212cd91ed1e4485a6f9 |
Merge branch 'share-user-memory-to-bpf-program-through-task-storage-map'
Martin KaFai Lau says:
====================
Share user memory to BPF program through task storage map
From: Martin KaFai Lau <martin.lau@kernel.org>
This is v6 of the series. Starting from v5, it is a continuation
of the RFC v4.
Changes in v6:
1. In patch 1, reject t->size == 0 in btf_check_and_fixup_fields.
Reject a uptr pointing to an empty struct.
A test is added to patch 12 to test this case.
2. In patch 6, when checking whether the uptr struct spans across
pages, there was an off-by-one error in calculating the "end", such
that a uptr would be wrongly rejected if the object is located
exactly at the end of a page.
This is fixed by computing "end" as "start" + t->size - 1 (see the
sketch after this list).
A test is added to patch 9 to test this case.
3. In patch 6, check for PageHighMem(page) and return -EOPNOTSUPP.
The 32-bit arch JITs are missing other crucial bpf features
(e.g. kfunc) anyway.
Patch 6's commit message has been updated to include this change.
4. The selftests are cleaned up such that a global "struct user_data
*dummy_data" pointer is used instead of a whole "struct user_data
dummy_data" object. This is still a hack to avoid generating a fwd
btf type for the uptr struct, but it is somewhat lighter than a
full blown global object.
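To make the off-by-one in item 2 concrete, here is a small user-space
sketch of the page-span check (PAGE_SIZE handling and the function name
are illustrative, not the kernel code). The last byte an object
occupies is "start" + t->size - 1; using "start" + t->size as the "end"
would wrongly place an object that ends exactly at the end of a page on
the next page.

#include <stdio.h>

#define PAGE_SIZE 4096UL
#define PAGE_MASK (~(PAGE_SIZE - 1))

/* Reject a uptr object that crosses a page boundary.  "end" is the
 * last byte the object occupies; computing it without the "- 1" would
 * mis-classify an object ending exactly at the page end.
 */
static int uptr_spans_pages(unsigned long start, unsigned long size)
{
	unsigned long end = start + size - 1;

	return (start & PAGE_MASK) != (end & PAGE_MASK);
}

int main(void)
{
	/* a 16-byte object whose last byte is the last byte of a page */
	printf("%d\n", uptr_spans_pages(PAGE_SIZE - 16, 16)); /* 0: fits */
	printf("%d\n", uptr_spans_pages(PAGE_SIZE - 16, 17)); /* 1: spans */
	return 0;
}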
Changes in v5:
1. The original patch 1 and patch 2 are combined.
2. Patch 3, 4, and 5 are new. They get the bpf_local_storage
ready to handle the __uptr in the map_value.
3. Patch 6 is mostly new, so I reset the sob.
4. There are also some changes in the carried-over patches 1 and 2.
They are mentioned in the individual patches.
5. More tests are added.
The following is the original cover letter and the earlier change log.
The bpf prog example has been removed; please find a similar
example in the selftest task_ls_uptr.c.
~~~~~~~~
Some BPF schedulers (sched_ext) need hints from user programs to do
a better job. For example, a scheduler can handle a task in a
different way if it knows the task is doing GC. So, we need an
efficient way to share information between user programs and BPF
programs. Sharing memory between user programs and BPF programs is
what this patchset does.
== REQUIREMENT ==
This patchset enables every task in every process to share a small
chunk of its own memory with a BPF scheduler, so the hints can be
updated without the expensive overhead of syscalls. It also ensures
that each task sees only the data/memory belonging to that task or
the task's process.
== DESIGN ==
This patchset enables BPF programs to embed uptrs (__uptr-tagged
pointers) in the values of task storage maps. A uptr field can only
be set by a user program by updating the map element value through
a syscall. A uptr points to a block of memory allocated by the user
program updating the element value. The memory is pinned to ensure
it stays resident and to avoid page faults when the BPF program
accesses it. Please see the selftest task_ls_uptr.c for an example,
or the sketch below.
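Below is a minimal sketch of that shape, modeled loosely on the
selftest. The struct names, the hint field, and the attach point are
illustrative assumptions, and __uptr is assumed to come from
bpf_helpers.h via the libbpf patch in this series.

/* BPF side: a task storage map whose value embeds a uptr. */
#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

struct user_data {
	__u64 gc_running;		/* hint written by the user program */
};

struct value_type {
	struct user_data __uptr *udata;	/* points to pinned user memory */
};

struct {
	__uint(type, BPF_MAP_TYPE_TASK_STORAGE);
	__uint(map_flags, BPF_F_NO_PREALLOC);
	__type(key, int);
	__type(value, struct value_type);
} datamap SEC(".maps");

SEC("tp_btf/sched_switch")
int BPF_PROG(on_switch, bool preempt, struct task_struct *prev,
	     struct task_struct *next)
{
	struct value_type *v;

	v = bpf_task_storage_get(&datamap, next, 0, 0);
	if (v && v->udata && v->udata->gc_running) {
		/* e.g. handle a GC-ing task differently */
	}
	return 0;
}

char LICENSE[] SEC("license") = "GPL";

/* User side (separate program): allocate the user memory, point the
 * uptr at it, and update the element through the map-update syscall;
 * the kernel pins the page on update.  The key of a task storage map
 * updated from userspace is a pidfd:
 *
 *	struct value_type value = { .udata = aligned_alloc(8, sizeof(struct user_data)) };
 *	int pidfd = syscall(SYS_pidfd_open, getpid(), 0);
 *
 *	bpf_map_update_elem(map_fd, &pidfd, &value, 0);
 */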
== MEMORY ==
In order to use memory efficiently, we don't want to pin a large
number of pages. To achieve that, user programs should collect the
memory blocks pointed to by uptrs together so they can share memory
pages where possible. This avoids pinning one page for each thread
in a process; instead, several threads can point their uptrs to the
same page, just at different offsets (see the sketch below).
Although it is not necessary, keeping the memory pointed to by a
uptr from crossing a page boundary avoids an additional mapping in
the kernel address space.
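A sketch of that packing scheme, reusing the illustrative struct
user_data from the sketch above (the pool layout and names are
assumptions, not the selftest code):

#include <stdlib.h>
#include <unistd.h>

struct user_data {
	unsigned long long gc_running;
};

static struct user_data *slots;	/* one shared, page-aligned page */

/* One page serves the whole process, so the kernel pins a single
 * page instead of one page per thread.
 */
static int uptr_pool_init(void)
{
	long page_size = sysconf(_SC_PAGESIZE);

	slots = aligned_alloc(page_size, page_size);
	return slots ? 0 : -1;
}

/* Each thread's uptr lands in the same page at a different offset.
 * Since sizeof(struct user_data) divides the page size, no slot
 * crosses a page boundary (see RESTRICT below).  The caller keeps
 * thread_idx below page_size / sizeof(struct user_data).
 */
static struct user_data *uptr_slot(unsigned int thread_idx)
{
	return &slots[thread_idx];
}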
== RESTRICT ==
The memory pointed to by a uptr must reside within a single memory
page. Crossing multiple pages is not supported at the moment.
Only task storage maps are supported at the moment.
The values of uptrs can only be updated by user programs through
syscalls.
bpf_map_lookup_elem() from userspace returns zeroed values for uptrs
to avoid leaking kernel information (see the sketch below).
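A sketch of that last point from the user side, with the map fd and
pidfd obtained as in the earlier sketch (the struct definitions are
the illustrative ones from above):

#include <assert.h>
#include <stddef.h>
#include <bpf/bpf.h>

struct user_data { unsigned long long gc_running; };
struct value_type { struct user_data *udata; };

/* A userspace lookup gets the value back with every uptr field
 * zeroed, so whatever the kernel tracks for the uptr is not exposed.
 */
static void lookup_zeroes_uptrs(int map_fd, int pidfd)
{
	struct value_type out = { .udata = (void *)-1 };

	if (!bpf_map_lookup_elem(map_fd, &pidfd, &out))
		assert(out.udata == NULL);
}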
---
Changes from v3:
- Merge parts 4 and 5 into the new part 4 in order to silence the
unused-function warnings from CI.
Changes from v1:
- Rename BPF_KPTR_USER to BPF_UPTR.
- Restrict uptr to one page.
- Mark uptr with PTR_TO_MEM | PTR_MAYBE_NULL and with the size of
the target type.
- Move uptr away from bpf_obj_memcpy() by introducing
bpf_obj_uptrcpy() and copy_map_uptr_locked().
- Remove the BPF_FROM_USER flag.
- Align the memory pointed to by an uptr in the test case. Remove the
uptr to mmapped memory.
Kui-Feng Lee (4):
bpf: Support __uptr type tag in BTF
bpf: Handle BPF_UPTR in verifier
libbpf: define __uptr.
selftests/bpf: Some basic __uptr tests
====================
Link: https://lore.kernel.org/r/20241023234759.860539-1-martin.lau@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Diffstat (limited to 'include/linux/bpf.h')
-rw-r--r-- | include/linux/bpf.h | 25 |
1 file changed, 25 insertions, 0 deletions
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 0c216e71cec7..8888689aa917 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -203,6 +203,7 @@ enum btf_field_type {
 	BPF_GRAPH_ROOT = BPF_RB_ROOT | BPF_LIST_HEAD,
 	BPF_REFCOUNT = (1 << 9),
 	BPF_WORKQUEUE = (1 << 10),
+	BPF_UPTR = (1 << 11),
 };
 
 typedef void (*btf_dtor_kfunc_t)(void *);
@@ -322,6 +323,8 @@ static inline const char *btf_field_type_name(enum btf_field_type type)
 		return "kptr";
 	case BPF_KPTR_PERCPU:
 		return "percpu_kptr";
+	case BPF_UPTR:
+		return "uptr";
 	case BPF_LIST_HEAD:
 		return "bpf_list_head";
 	case BPF_LIST_NODE:
@@ -350,6 +353,7 @@ static inline u32 btf_field_type_size(enum btf_field_type type)
 	case BPF_KPTR_UNREF:
 	case BPF_KPTR_REF:
 	case BPF_KPTR_PERCPU:
+	case BPF_UPTR:
 		return sizeof(u64);
 	case BPF_LIST_HEAD:
 		return sizeof(struct bpf_list_head);
@@ -379,6 +383,7 @@ static inline u32 btf_field_type_align(enum btf_field_type type)
 	case BPF_KPTR_UNREF:
 	case BPF_KPTR_REF:
 	case BPF_KPTR_PERCPU:
+	case BPF_UPTR:
 		return __alignof__(u64);
 	case BPF_LIST_HEAD:
 		return __alignof__(struct bpf_list_head);
@@ -419,6 +424,7 @@ static inline void bpf_obj_init_field(const struct btf_field *field, void *addr)
 	case BPF_KPTR_UNREF:
 	case BPF_KPTR_REF:
 	case BPF_KPTR_PERCPU:
+	case BPF_UPTR:
 		break;
 	default:
 		WARN_ON_ONCE(1);
@@ -507,6 +513,25 @@ static inline void copy_map_value_long(struct bpf_map *map, void *dst, void *src
 	bpf_obj_memcpy(map->record, dst, src, map->value_size, true);
 }
 
+static inline void bpf_obj_swap_uptrs(const struct btf_record *rec, void *dst, void *src)
+{
+	unsigned long *src_uptr, *dst_uptr;
+	const struct btf_field *field;
+	int i;
+
+	if (!btf_record_has_field(rec, BPF_UPTR))
+		return;
+
+	for (i = 0, field = rec->fields; i < rec->cnt; i++, field++) {
+		if (field->type != BPF_UPTR)
+			continue;
+
+		src_uptr = src + field->offset;
+		dst_uptr = dst + field->offset;
+		swap(*src_uptr, *dst_uptr);
+	}
+}
+
 static inline void bpf_obj_memzero(struct btf_record *rec, void *dst, u32 size)
 {
 	u32 curr_off = 0;