Commit 85d33df3 authored by Martin KaFai Lau, committed by Alexei Starovoitov

bpf: Introduce BPF_MAP_TYPE_STRUCT_OPS

The patch introduces BPF_MAP_TYPE_STRUCT_OPS.  The map value
is a kernel struct whose func ptrs are implemented in bpf progs.
This new map is the interface to register/unregister/introspect
a bpf-implemented kernel struct.

The kernel struct is actually embedded inside another new struct
(also called the "value" struct in the code).  For example,
"struct tcp_congestion_ops" is embedded in:
struct bpf_struct_ops_tcp_congestion_ops {
	refcount_t refcnt;
	enum bpf_struct_ops_state state;
	struct tcp_congestion_ops data;  /* <-- kernel subsystem struct here */
};
The map value is "struct bpf_struct_ops_tcp_congestion_ops".
The "bpftool map dump" will then be able to show the
state ("inuse"/"tobefree") and the number of subsystem's refcnt (e.g.
number of tcp_sock in the tcp_congestion_ops case).  This "value" struct
is created automatically by a macro.  Having a separate "value" struct
will also make extending "struct bpf_struct_ops_XYZ" easier (e.g. adding
"void (*init)(void)" to "struct bpf_struct_ops_XYZ" to do some
initialization works before registering the struct_ops to the kernel
subsystem).  The libbpf will take care of finding and populating the
"struct bpf_struct_ops_XYZ" from "struct XYZ".

Register a struct_ops to a kernel subsystem:
1. Load all needed BPF_PROG_TYPE_STRUCT_OPS prog(s)
2. Create a BPF_MAP_TYPE_STRUCT_OPS with attr->btf_vmlinux_value_type_id
   set to the btf id of "struct bpf_struct_ops_tcp_congestion_ops" in
   the running kernel.
   Instead of reusing attr->btf_value_type_id, a separate
   btf_vmlinux_value_type_id is added such that attr->btf_fd can still
   be used as the "user" btf, which could store other useful
   sysadmin/debug info that may be introduced in the future,
   e.g. creation-date/compiler-details/map-creator...etc.
3. Create a "struct bpf_struct_ops_tcp_congestion_ops" object as described
   in the running kernel btf.  Populate the value of this object.
   The function ptr should be populated with the prog fds.
4. Call BPF_MAP_UPDATE with the object created in (3) as
   the map value.  The key is always "0".  (A userspace sketch of
   steps 2 and 4 follows this list.)
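
A minimal sketch of steps 2 and 4 using the raw bpf(2) syscall
(prog loading, BTF id lookup and most error handling are elided;
"btf_fd", "value_type_id", "val" and "val_size" are placeholders
prepared by the caller):

#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/bpf.h>

static int register_struct_ops(int btf_fd, __u32 value_type_id,
			       void *val, __u32 val_size)
{
	union bpf_attr attr;
	__u32 zero = 0;
	int map_fd;

	/* step 2: create the struct_ops map */
	memset(&attr, 0, sizeof(attr));
	attr.map_type = BPF_MAP_TYPE_STRUCT_OPS;
	attr.key_size = sizeof(__u32);
	attr.value_size = val_size;
	attr.max_entries = 1;
	attr.btf_fd = btf_fd;	/* the "user" btf */
	attr.btf_vmlinux_value_type_id = value_type_id;
	map_fd = syscall(__NR_bpf, BPF_MAP_CREATE, &attr, sizeof(attr));
	if (map_fd < 0)
		return -1;

	/* step 4: register by updating key 0 with the populated
	 * "struct bpf_struct_ops_XYZ" object from step 3
	 */
	memset(&attr, 0, sizeof(attr));
	attr.map_fd = map_fd;
	attr.key = (__u64)(unsigned long)&zero;
	attr.value = (__u64)(unsigned long)val;
	if (syscall(__NR_bpf, BPF_MAP_UPDATE_ELEM, &attr, sizeof(attr)))
		return -1;

	return map_fd;
}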

During BPF_MAP_UPDATE, the code that saves the kernel-func-ptr's
args as an array of u64 is generated.  BPF_MAP_UPDATE also allows
the specific struct_ops to do some final checks in "st_ops->init_member()"
(e.g. ensure all mandatory func ptrs are implemented).
If everything looks good, it will register this kernel struct
to the kernel subsystem.  The map will not allow further update
from this point.
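
For instance, a tcp_congestion_ops "init_member" could look roughly
like this (purely an illustrative sketch; the tcp-cc subsystem side is
not part of this patch, and the convention assumed here is: return >0
if the member was handled, 0 to fall through to the default func-ptr
handling, <0 on error):

static int bpf_tcp_ca_init_member(const struct btf_type *t,
				  const struct btf_member *member,
				  void *kdata, const void *udata)
{
	const struct tcp_congestion_ops *utcp_ca = udata;
	struct tcp_congestion_ops *tcp_ca = kdata;
	u32 moff;

	moff = btf_member_bit_offset(t, member) / 8;
	/* e.g. copy a non-func-ptr member from the user-provided
	 * object into the kernel object
	 */
	if (moff == offsetof(struct tcp_congestion_ops, flags)) {
		tcp_ca->flags = utcp_ca->flags;
		return 1;
	}

	/* func ptr members are handled by the struct_ops map itself */
	return 0;
}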

Unregister a struct_ops from the kernel subsystem:
BPF_MAP_DELETE with key "0".

Introspect a struct_ops:
BPF_MAP_LOOKUP_ELEM with key "0".  The map value returned will
have the prog _id_ populated as the func ptr.
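
Both operations reduce to the usual libbpf map wrappers (a sketch;
"map_fd" refers to the struct_ops map created above):

	__u32 zero = 0;
	struct bpf_struct_ops_tcp_congestion_ops value;

	/* introspect: the func ptr fields come back as prog ids */
	bpf_map_lookup_elem(map_fd, &zero, &value);

	/* unregister: transitions the map value to "tobefree" */
	bpf_map_delete_elem(map_fd, &zero);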

The map value state (enum bpf_struct_ops_state) transitions as:
INIT (map created) =>
INUSE (map updated, i.e. reg) =>
TOBEFREE (map value deleted, i.e. unreg)
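
The three states correspond to an enum along these lines (the exact
declaration site is not shown in this diff; the ordering matches the
"state": 1, i.e. INUSE, seen in the bpftool dump below):

enum bpf_struct_ops_state {
	BPF_STRUCT_OPS_STATE_INIT,
	BPF_STRUCT_OPS_STATE_INUSE,
	BPF_STRUCT_OPS_STATE_TOBEFREE,
};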

The kernel subsystem needs to call bpf_struct_ops_get() and
bpf_struct_ops_put() to manage the "refcnt" in the
"struct bpf_struct_ops_XYZ".  This patch uses a separate refcnt
for the purpose of tracking the subsystem usage.  Another approach
would be to reuse map->refcnt and then "show" (i.e. during map_lookup)
the subsystem's usage by computing map->refcnt - map->usercnt to filter
out the map-fd/pinned-map usage.  However, that would also tie down the
future semantics of map->refcnt and map->usercnt.

The very first subsystem refcnt (taken during reg()) holds one
count on map->refcnt.  When the very last subsystem refcnt
is gone, it also releases the map->refcnt.  All bpf_progs are
freed when map->refcnt reaches 0 (i.e. during map_free()).
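
A sketch of the get/put pair on top of a common
"struct bpf_struct_ops_value" header (the container struct name
"bpf_struct_ops_map" and the exact release path are assumptions of
this sketch):

bool bpf_struct_ops_get(const void *kdata)
{
	struct bpf_struct_ops_value *kvalue;

	kvalue = container_of(kdata, struct bpf_struct_ops_value, data);
	return refcount_inc_not_zero(&kvalue->refcnt);
}

void bpf_struct_ops_put(const void *kdata)
{
	struct bpf_struct_ops_value *kvalue;

	kvalue = container_of(kdata, struct bpf_struct_ops_value, data);
	if (refcount_dec_and_test(&kvalue->refcnt)) {
		struct bpf_struct_ops_map *st_map;

		/* last subsystem ref gone: drop the count it held on
		 * map->refcnt, which frees the progs in map_free()
		 */
		st_map = container_of(kvalue, struct bpf_struct_ops_map,
				      kvalue);
		bpf_map_put(&st_map->map);
	}
}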

Here is what the bpftool map commands look like:
[root@arch-fb-vm1 bpf]# bpftool map show
6: struct_ops  name dctcp  flags 0x0
	key 4B  value 256B  max_entries 1  memlock 4096B
	btf_id 6
[root@arch-fb-vm1 bpf]# bpftool map dump id 6
[{
        "value": {
            "refcnt": {
                "refs": {
                    "counter": 1
                }
            },
            "state": 1,
            "data": {
                "list": {
                    "next": 0,
                    "prev": 0
                },
                "key": 0,
                "flags": 2,
                "init": 24,
                "release": 0,
                "ssthresh": 25,
                "cong_avoid": 30,
                "set_state": 27,
                "cwnd_event": 28,
                "in_ack_event": 26,
                "undo_cwnd": 29,
                "pkts_acked": 0,
                "min_tso_segs": 0,
                "sndbuf_expand": 0,
                "cong_control": 0,
                "get_info": 0,
                "name": [98,112,102,95,100,99,116,99,112,0,0,0,0,0,0,0
                ],
                "owner": 0
            }
        }
    }
]

Misc Notes:
* bpf_struct_ops_map_sys_lookup_elem() is added for syscall lookup.
  It does an in-place update on "*value" instead of returning a pointer
  to syscall.c.  Otherwise, a separate copy of the "zero" value would be
  needed for BPF_STRUCT_OPS_STATE_INIT to avoid races.

* The bpf_struct_ops_map_delete_elem() is also called without
  preempt_disable() from map_delete_elem().  This is because
  "->unreg()" may require a sleepable context, e.g.
  "tcp_unregister_congestion_control()".

* "const" is added to some of the existing "struct btf_func_model *"
  function arg to avoid a compiler warning caused by this patch.
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20200109003505.3855919-1-kafai@fb.com
parent 27ae7997
@@ -1328,7 +1328,7 @@ xadd:			if (is_imm8(insn->off))
 	return proglen;
 }

-static void save_regs(struct btf_func_model *m, u8 **prog, int nr_args,
+static void save_regs(const struct btf_func_model *m, u8 **prog, int nr_args,
 		      int stack_size)
 {
 	int i;
@@ -1344,7 +1344,7 @@ static void save_regs(struct btf_func_model *m, u8 **prog, int nr_args,
 			 -(stack_size - i * 8));
 }

-static void restore_regs(struct btf_func_model *m, u8 **prog, int nr_args,
+static void restore_regs(const struct btf_func_model *m, u8 **prog, int nr_args,
 			 int stack_size)
 {
 	int i;
@@ -1361,7 +1361,7 @@ static void restore_regs(struct btf_func_model *m, u8 **prog, int nr_args,
 			 -(stack_size - i * 8));
 }

-static int invoke_bpf(struct btf_func_model *m, u8 **pprog,
+static int invoke_bpf(const struct btf_func_model *m, u8 **pprog,
 		      struct bpf_prog **progs, int prog_cnt, int stack_size)
 {
 	u8 *prog = *pprog;
@@ -1456,7 +1456,8 @@ static int invoke_bpf(struct btf_func_model *m, u8 **pprog,
 * add rsp, 8	// skip eth_type_trans's frame
 * ret		// return to its caller
 */
-int arch_prepare_bpf_trampoline(void *image, struct btf_func_model *m, u32 flags,
+int arch_prepare_bpf_trampoline(void *image, void *image_end,
+				const struct btf_func_model *m, u32 flags,
 				struct bpf_prog **fentry_progs, int fentry_cnt,
 				struct bpf_prog **fexit_progs, int fexit_cnt,
 				void *orig_call)
@@ -1523,13 +1524,10 @@ int arch_prepare_bpf_trampoline(void *image, struct btf_func_model *m, u32 flags
 	/* skip our return address and return to parent */
 	EMIT4(0x48, 0x83, 0xC4, 8); /* add rsp, 8 */
 	EMIT1(0xC3); /* ret */
-	/* One half of the page has active running trampoline.
-	 * Another half is an area for next trampoline.
-	 * Make sure the trampoline generation logic doesn't overflow.
-	 */
-	if (WARN_ON_ONCE(prog - (u8 *)image > PAGE_SIZE / 2 - BPF_INSN_SAFETY))
+	/* Make sure the trampoline generation logic doesn't overflow */
+	if (WARN_ON_ONCE(prog > (u8 *)image_end - BPF_INSN_SAFETY))
 		return -EFAULT;
-	return 0;
+	return prog - (u8 *)image;
 }

 static int emit_cond_near_jump(u8 **pprog, void *func, void *ip, u8 jmp_cond)
...
@@ -17,6 +17,7 @@
 #include <linux/u64_stats_sync.h>
 #include <linux/refcount.h>
 #include <linux/mutex.h>
+#include <linux/module.h>

 struct bpf_verifier_env;
 struct bpf_verifier_log;
@@ -106,6 +107,7 @@ struct bpf_map {
 	struct btf *btf;
 	struct bpf_map_memory memory;
 	char name[BPF_OBJ_NAME_LEN];
+	u32 btf_vmlinux_value_type_id;
 	bool unpriv_array;
 	bool frozen; /* write-once; write-protected by freeze_mutex */
 	/* 22 bytes hole */
@@ -183,7 +185,8 @@ static inline bool bpf_map_offload_neutral(const struct bpf_map *map)

 static inline bool bpf_map_support_seq_show(const struct bpf_map *map)
 {
-	return map->btf && map->ops->map_seq_show_elem;
+	return (map->btf_value_type_id || map->btf_vmlinux_value_type_id) &&
+		map->ops->map_seq_show_elem;
 }

 int map_check_no_btf(const struct bpf_map *map,
@@ -441,7 +444,8 @@ struct btf_func_model {
 * fentry = a set of program to run before calling original function
 * fexit = a set of program to run after original function
 */
-int arch_prepare_bpf_trampoline(void *image, struct btf_func_model *m, u32 flags,
+int arch_prepare_bpf_trampoline(void *image, void *image_end,
+				const struct btf_func_model *m, u32 flags,
 				struct bpf_prog **fentry_progs, int fentry_cnt,
 				struct bpf_prog **fexit_progs, int fexit_cnt,
 				void *orig_call);
@@ -672,6 +676,7 @@ struct bpf_array_aux {
 	struct work_struct work;
 };

+struct bpf_struct_ops_value;
 struct btf_type;
 struct btf_member;
@@ -681,21 +686,61 @@ struct bpf_struct_ops {
 	int (*init)(struct btf *btf);
 	int (*check_member)(const struct btf_type *t,
 			    const struct btf_member *member);
+	int (*init_member)(const struct btf_type *t,
+			   const struct btf_member *member,
+			   void *kdata, const void *udata);
+	int (*reg)(void *kdata);
+	void (*unreg)(void *kdata);
 	const struct btf_type *type;
+	const struct btf_type *value_type;
 	const char *name;
 	struct btf_func_model func_models[BPF_STRUCT_OPS_MAX_NR_MEMBERS];
 	u32 type_id;
+	u32 value_id;
 };

 #if defined(CONFIG_BPF_JIT) && defined(CONFIG_BPF_SYSCALL)
+#define BPF_MODULE_OWNER ((void *)((0xeB9FUL << 2) + POISON_POINTER_DELTA))
 const struct bpf_struct_ops *bpf_struct_ops_find(u32 type_id);
 void bpf_struct_ops_init(struct btf *btf);
+bool bpf_struct_ops_get(const void *kdata);
+void bpf_struct_ops_put(const void *kdata);
+int bpf_struct_ops_map_sys_lookup_elem(struct bpf_map *map, void *key,
+				       void *value);
+static inline bool bpf_try_module_get(const void *data, struct module *owner)
+{
+	if (owner == BPF_MODULE_OWNER)
+		return bpf_struct_ops_get(data);
+	else
+		return try_module_get(owner);
+}
+static inline void bpf_module_put(const void *data, struct module *owner)
+{
+	if (owner == BPF_MODULE_OWNER)
+		bpf_struct_ops_put(data);
+	else
+		module_put(owner);
+}
 #else
 static inline const struct bpf_struct_ops *bpf_struct_ops_find(u32 type_id)
 {
 	return NULL;
 }
 static inline void bpf_struct_ops_init(struct btf *btf) { }
+static inline bool bpf_try_module_get(const void *data, struct module *owner)
+{
+	return try_module_get(owner);
+}
+static inline void bpf_module_put(const void *data, struct module *owner)
+{
+	module_put(owner);
+}
+static inline int bpf_struct_ops_map_sys_lookup_elem(struct bpf_map *map,
+						     void *key,
+						     void *value)
+{
+	return -EINVAL;
+}
 #endif

 struct bpf_array {
...
@@ -109,3 +109,6 @@ BPF_MAP_TYPE(BPF_MAP_TYPE_REUSEPORT_SOCKARRAY, reuseport_array_ops)
 #endif
 BPF_MAP_TYPE(BPF_MAP_TYPE_QUEUE, queue_map_ops)
 BPF_MAP_TYPE(BPF_MAP_TYPE_STACK, stack_map_ops)
+#if defined(CONFIG_BPF_JIT)
+BPF_MAP_TYPE(BPF_MAP_TYPE_STRUCT_OPS, bpf_struct_ops_map_ops)
+#endif
...
@@ -7,6 +7,8 @@
 #include <linux/types.h>
 #include <uapi/linux/btf.h>

+#define BTF_TYPE_EMIT(type) ((void)(type *)0)
+
 struct btf;
 struct btf_member;
 struct btf_type;
@@ -60,6 +62,10 @@ const struct btf_type *btf_type_resolve_ptr(const struct btf *btf,
 					    u32 id, u32 *res_id);
 const struct btf_type *btf_type_resolve_func_ptr(const struct btf *btf,
 						 u32 id, u32 *res_id);
+const struct btf_type *
+btf_resolve_size(const struct btf *btf, const struct btf_type *type,
+		 u32 *type_size, const struct btf_type **elem_type,
+		 u32 *total_nelems);

 #define for_each_member(i, struct_type, member)			\
 	for (i = 0, member = btf_type_member(struct_type);		\
@@ -106,6 +112,13 @@ static inline bool btf_type_kflag(const struct btf_type *t)
 	return BTF_INFO_KFLAG(t->info);
 }

+static inline u32 btf_member_bit_offset(const struct btf_type *struct_type,
+					const struct btf_member *member)
+{
+	return btf_type_kflag(struct_type) ? BTF_MEMBER_BIT_OFFSET(member->offset)
+					   : member->offset;
+}
+
 static inline u32 btf_member_bitfield_size(const struct btf_type *struct_type,
 					   const struct btf_member *member)
 {
...
@@ -136,6 +136,7 @@ enum bpf_map_type {
 	BPF_MAP_TYPE_STACK,
 	BPF_MAP_TYPE_SK_STORAGE,
 	BPF_MAP_TYPE_DEVMAP_HASH,
+	BPF_MAP_TYPE_STRUCT_OPS,
 };

@@ -398,6 +399,10 @@ union bpf_attr {
 		__u32	btf_fd;		/* fd pointing to a BTF type data */
 		__u32	btf_key_type_id;	/* BTF type_id of the key */
 		__u32	btf_value_type_id;	/* BTF type_id of the value */
+		__u32	btf_vmlinux_value_type_id;/* BTF type_id of a kernel-
+						   * struct stored as the
+						   * map value
+						   */
 	};

 	struct { /* anonymous struct used by BPF_MAP_*_ELEM commands */
@@ -3350,7 +3355,7 @@ struct bpf_map_info {
 	__u32 map_flags;
 	char  name[BPF_OBJ_NAME_LEN];
 	__u32 ifindex;
-	__u32 :32;
+	__u32 btf_vmlinux_value_type_id;
 	__u64 netns_dev;
 	__u64 netns_ino;
 	__u32 btf_id;
...
[One file's diff is collapsed in this view and not shown.]
@@ -500,13 +500,6 @@ static const char *btf_int_encoding_str(u8 encoding)
 	return "UNKN";
 }

-static u32 btf_member_bit_offset(const struct btf_type *struct_type,
-				 const struct btf_member *member)
-{
-	return btf_type_kflag(struct_type) ? BTF_MEMBER_BIT_OFFSET(member->offset)
-					   : member->offset;
-}
-
 static u32 btf_type_int(const struct btf_type *t)
 {
 	return *(u32 *)(t + 1);
@@ -1089,7 +1082,7 @@ static const struct resolve_vertex *env_stack_peak(struct btf_verifier_env *env)
 *	*elem_type: same as return type ("struct X")
 *	*total_nelems: 1
 */
-static const struct btf_type *
+const struct btf_type *
 btf_resolve_size(const struct btf *btf, const struct btf_type *type,
 		 u32 *type_size, const struct btf_type **elem_type,
 		 u32 *total_nelems)
@@ -1143,8 +1136,10 @@ btf_resolve_size(const struct btf *btf, const struct btf_type *type,
 		return ERR_PTR(-EINVAL);

 	*type_size = nelems * size;
-	*total_nelems = nelems;
-	*elem_type = type;
+	if (total_nelems)
+		*total_nelems = nelems;
+	if (elem_type)
+		*elem_type = type;

 	return array_type ? : type;
 }
@@ -1858,7 +1853,10 @@ static void btf_modifier_seq_show(const struct btf *btf,
 				  u32 type_id, void *data,
 				  u8 bits_offset, struct seq_file *m)
 {
-	t = btf_type_id_resolve(btf, &type_id);
+	if (btf->resolved_ids)
+		t = btf_type_id_resolve(btf, &type_id);
+	else
+		t = btf_type_skip_modifiers(btf, type_id, NULL);

 	btf_type_ops(t)->seq_show(btf, t, type_id, data, bits_offset, m);
 }
...
@@ -22,7 +22,8 @@ struct bpf_map *bpf_map_meta_alloc(int inner_map_ufd)
 	 */
 	if (inner_map->map_type == BPF_MAP_TYPE_PROG_ARRAY ||
 	    inner_map->map_type == BPF_MAP_TYPE_CGROUP_STORAGE ||
-	    inner_map->map_type == BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE) {
+	    inner_map->map_type == BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE ||
+	    inner_map->map_type == BPF_MAP_TYPE_STRUCT_OPS) {
 		fdput(f);
 		return ERR_PTR(-ENOTSUPP);
 	}
...
@@ -628,7 +628,7 @@ static int map_check_btf(struct bpf_map *map, const struct btf *btf,
 	return ret;
 }

-#define BPF_MAP_CREATE_LAST_FIELD btf_value_type_id
+#define BPF_MAP_CREATE_LAST_FIELD btf_vmlinux_value_type_id
 /* called via syscall */
 static int map_create(union bpf_attr *attr)
 {
@@ -642,6 +642,14 @@ static int map_create(union bpf_attr *attr)
 	if (err)
 		return -EINVAL;

+	if (attr->btf_vmlinux_value_type_id) {
+		if (attr->map_type != BPF_MAP_TYPE_STRUCT_OPS ||
+		    attr->btf_key_type_id || attr->btf_value_type_id)
+			return -EINVAL;
+	} else if (attr->btf_key_type_id && !attr->btf_value_type_id) {
+		return -EINVAL;
+	}
+
 	f_flags = bpf_get_file_flag(attr->map_flags);
 	if (f_flags < 0)
 		return f_flags;
@@ -664,32 +672,35 @@ static int map_create(union bpf_attr *attr)
 	atomic64_set(&map->usercnt, 1);
 	mutex_init(&map->freeze_mutex);

-	if (attr->btf_key_type_id || attr->btf_value_type_id) {
+	map->spin_lock_off = -EINVAL;
+	if (attr->btf_key_type_id || attr->btf_value_type_id ||
+	    /* Even the map's value is a kernel's struct,
+	     * the bpf_prog.o must have BTF to begin with
+	     * to figure out the corresponding kernel's
+	     * counter part.  Thus, attr->btf_fd has
+	     * to be valid also.
+	     */
+	    attr->btf_vmlinux_value_type_id) {
 		struct btf *btf;

-		if (!attr->btf_value_type_id) {
-			err = -EINVAL;
-			goto free_map;
-		}
-
 		btf = btf_get_by_fd(attr->btf_fd);
 		if (IS_ERR(btf)) {
 			err = PTR_ERR(btf);
 			goto free_map;
 		}
+		map->btf = btf;

-		err = map_check_btf(map, btf, attr->btf_key_type_id,
-				    attr->btf_value_type_id);
-		if (err) {
-			btf_put(btf);
-			goto free_map;
+		if (attr->btf_value_type_id) {
+			err = map_check_btf(map, btf, attr->btf_key_type_id,
+					    attr->btf_value_type_id);
+			if (err)
+				goto free_map;
 		}

-		map->btf = btf;
 		map->btf_key_type_id = attr->btf_key_type_id;
 		map->btf_value_type_id = attr->btf_value_type_id;
-	} else {
-		map->spin_lock_off = -EINVAL;
+		map->btf_vmlinux_value_type_id =
+			attr->btf_vmlinux_value_type_id;
 	}

 	err = security_bpf_map_alloc(map);
@@ -888,6 +899,9 @@ static int map_lookup_elem(union bpf_attr *attr)
 	} else if (map->map_type == BPF_MAP_TYPE_QUEUE ||
 		   map->map_type == BPF_MAP_TYPE_STACK) {
 		err = map->ops->map_peek_elem(map, value);
+	} else if (map->map_type == BPF_MAP_TYPE_STRUCT_OPS) {
+		/* struct_ops map requires directly updating "value" */
+		err = bpf_struct_ops_map_sys_lookup_elem(map, key, value);
 	} else {
 		rcu_read_lock();
 		if (map->ops->map_lookup_elem_sys_only)
@@ -1003,7 +1017,8 @@ static int map_update_elem(union bpf_attr *attr)
 		goto out;
 	} else if (map->map_type == BPF_MAP_TYPE_CPUMAP ||
 		   map->map_type == BPF_MAP_TYPE_SOCKHASH ||
-		   map->map_type == BPF_MAP_TYPE_SOCKMAP) {
+		   map->map_type == BPF_MAP_TYPE_SOCKMAP ||
+		   map->map_type == BPF_MAP_TYPE_STRUCT_OPS) {
 		err = map->ops->map_update_elem(map, key, value, attr->flags);
 		goto out;
 	} else if (IS_FD_PROG_ARRAY(map)) {
@@ -1092,7 +1107,9 @@ static int map_delete_elem(union bpf_attr *attr)
 	if (bpf_map_is_dev_bound(map)) {
 		err = bpf_map_offload_delete_elem(map, key);
 		goto out;
-	} else if (IS_FD_PROG_ARRAY(map)) {
+	} else if (IS_FD_PROG_ARRAY(map) ||
+		   map->map_type == BPF_MAP_TYPE_STRUCT_OPS) {
+		/* These maps require sleepable context */
 		err = map->ops->map_delete_elem(map, key);
 		goto out;
 	}
@@ -2822,6 +2839,7 @@ static int bpf_map_get_info_by_fd(struct bpf_map *map,
 		info.btf_key_type_id = map->btf_key_type_id;
 		info.btf_value_type_id = map->btf_value_type_id;
 	}
+	info.btf_vmlinux_value_type_id = map->btf_vmlinux_value_type_id;

 	if (bpf_map_is_dev_bound(map)) {
 		err = bpf_map_offload_info_fill(&info, map);
...
@@ -160,11 +160,12 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr)
 	if (fexit_cnt)
 		flags = BPF_TRAMP_F_CALL_ORIG | BPF_TRAMP_F_SKIP_FRAME;

-	err = arch_prepare_bpf_trampoline(new_image, &tr->func.model, flags,
+	err = arch_prepare_bpf_trampoline(new_image, new_image + PAGE_SIZE / 2,
+					  &tr->func.model, flags,
 					  fentry, fentry_cnt,
 					  fexit, fexit_cnt,
 					  tr->func.addr);
-	if (err)
+	if (err < 0)
 		goto out;

 	if (tr->selector)
@@ -296,7 +297,8 @@ void notrace __bpf_prog_exit(struct bpf_prog *prog, u64 start)
 }

 int __weak
-arch_prepare_bpf_trampoline(void *image, struct btf_func_model *m, u32 flags,
+arch_prepare_bpf_trampoline(void *image, void *image_end,
+			    const struct btf_func_model *m, u32 flags,
 			    struct bpf_prog **fentry_progs, int fentry_cnt,
 			    struct bpf_prog **fexit_progs, int fexit_cnt,
 			    void *orig_call)
...
@@ -8155,6 +8155,11 @@ static int check_map_prog_compatibility(struct bpf_verifier_env *env,
 		return -EINVAL;
 	}

+	if (map->map_type == BPF_MAP_TYPE_STRUCT_OPS) {
+		verbose(env, "bpf_struct_ops map cannot be used in prog\n");
+		return -EINVAL;
+	}
+
 	return 0;
 }
...