Commit b40b414e authored by Alexei Starovoitov

Merge branch 'bpf_loop inlining'

Eduard Zingerman says:

====================

Hi Everyone,

This is the next iteration of the patch set. It includes changes suggested
by Song, Joanne and Alexei. Please find the updated intro message and
change log below.

This patch set implements inlining of calls to the bpf_loop helper
function when bpf_loop's callback is statically known. E.g., the rewrite
performs the following transformation during BPF program processing:

  bpf_loop(10, foo, NULL, 0);

 ->

  for (int i = 0; i < 10; ++i)
    foo(i, NULL);
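
Only calls where the callback is a compile-time constant and the flags
argument is a known zero fit the inlining criteria. For example (a sketch
in BPF C; the callback names are illustrative, not part of the patches):

  static int foo_cb(int i, void *ctx) { return 0; }
  static int bar_cb(int i, void *ctx) { return 0; }

  /* eligible: callback known statically, flags == 0 */
  bpf_loop(10, foo_cb, NULL, 0);

  /* not eligible: the callback pointer depends on run-time data */
  int (*cb)(int, void *) = bpf_get_prandom_u32() & 1 ? foo_cb : bar_cb;
  bpf_loop(10, cb, NULL, 0);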

The transformation leads to a measurable latency reduction for simple
loops. Measurements using `benchs/run_bench_bpf_loop.sh` inside QEMU /
KVM on an i7-4710HQ CPU show a drop in latency from 14 ns/op to 2 ns/op.

The change is split into five parts:

* Update to test_verifier.c to allow specifying expected and unexpected
  instruction sequences per test case. This makes it possible to check
  BPF program rewrites applied by e.g. the do_misc_fixups function.

* Update to test_verifier.c to allow specifying BTF function infos and
  types per test case. This is necessary for tests that load sub-program
  addresses into a variable, because of the checks applied by the
  check_ld_imm function.

* The update to verifier.c that tracks the state of the parameters for
  each bpf_loop call in a program and decides whether the call can be
  replaced by an inlined loop.

* A set of test cases for `test_verifier` that use the capabilities added
  by the first two patches to verify the instructions produced by the
  inlining logic.

* Two test cases for `test_progs` to check that possible corner cases
  behave as expected.

Additional details are available in commit messages for each patch.

Changes since v7:
 - Call to `mark_chain_precision` is added in `loop_flag_is_zero` to
   avoid potential issues with state pruning and precision tracking.
 - `flags non-zero` test_verifier test case is updated to have two
   execution paths reaching the `bpf_loop` call, one with flags = 0,
   another with flags = 1. Ideally this test case would demonstrate that
   the call to `mark_chain_precision` in `loop_flag_is_zero` is
   necessary, but it does not do so at the moment. Please refer to the
   discussion for [PATCH bpf-next v7 3/5] for additional details.
 - `stack_depth_extra` computation is updated to guarantee that the R6,
   R7 and R8 offsets are always aligned on an 8-byte boundary (see the
   worked example after this change list).
 - `stack locations for loop vars` test_verifier test case is updated to
   show that the R6, R7, R8 offsets are indeed aligned when the function
   stack depth is not a multiple of 8.
 - I removed Song Liu's ACK from the commit message for [PATCH bpf-next
   v8 4/5] because I updated the patch. (Please let me know if I should
   have kept the ACK tag.)
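
A worked example of the offset computation mentioned above (a sketch; the
numbers match the `stack locations for loop vars` test case, assuming the
main program's stack depth is 12 bytes before inlining):

  stack_depth         = 12
  stack_depth_roundup = round_up(12, 8) - 12                    /* =   4 */
  stack_depth_extra   = 3 * BPF_REG_SIZE + stack_depth_roundup  /* =  28 */
  stack_base          = -(stack_depth + stack_depth_extra)      /* = -40 */
  /* R6 spilled at -40, R7 at -32, R8 at -24: all 8-byte aligned */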

Changes since v6:
 - Return value of the `optimize_bpf_loop` function is no longer
   ignored. This is necessary to properly propagate -ENOMEM error.

Changes since v5:
 - Added function `loop_flag_is_zero` to skip a few checks in
   `update_loop_inline_state` when the loop instruction is not fit for
   inlining.

Changes since v4:
 - Added missing `static` modifier for `update_loop_inline_state` and
   `inline_bpf_loop` functions.
 - `update_loop_inline_state` updated for better readability.
 - Fields `initialized` and `fit_for_inline` of `struct
   bpf_loop_inline_state` are changed back from `bool` to bitfields.
 - Acks from Song Liu added to comments for patches 1/5, 2/5, 4/5,
   5/5.

Changes since v3:
 - Function `adjust_stack_depth_for_loop_inlining` is replaced by
   function `optimize_bpf_loop`. Function `optimize_bpf_loop` is
   responsible for both stack depth adjustment and call instruction
   replacement.
 - Changes in `do_misc_fixups` are reverted.
 - Changes in `adjust_subprog_starts_after_remove` are reverted and
   function `adjust_loop_inline_subprogno` is removed. This is possible
   because the call to `optimize_bpf_loop` is placed before dead code
   removal in `opt_remove_dead_code` (in contrast to the position of
   `do_misc_fixups`, where inlining was done in v3).
 - Field `bpf_insn_aux_data.loop_inline_state` is now a part of
   anonymous union at the start of the `bpf_insn_aux_data`.
 - Data structure `bpf_loop_inline_state` is simplified to use single
   flag field `fit_for_inline` instead of separate fields
   `flags_is_zero` & `callback_is_constant`.
 - Macro definition `BPF_MAX_LOOPS` is moved from
   `include/linux/bpf_verifier.h` to `include/linux/bpf.h` to avoid
   including `include/linux/bpf_verifier.h` in `bpf_iter.c`.
 - `inline_bpf_loop` changed back to use array initialization and hard
   coded offsets as in v2.
 - Style / formatting updates.

Changes since v2:
 - fix for `stack_check` test case in `test_progs-no_alu32`, all tests
   are passing now;
 - v2 3/3 patch is split in three parts:
   - kernel changes
   - test_verifier changes
   - test_prog changes
 - updated `inline_bpf_loop` in `verifier.c` to calculate each offset
   used in instructions to avoid "magic" numbers;
 - removed newline handling logic in `fail_log` branch of
   `do_single_test` in `test_verifier.c` to simplify the patch set;
 - styling fixes suggested in review for v2 of this patch set.

Changes since v1:
 - allow the use of SKIP_INSNS in instruction pattern specifications in
   test_verifier tests;
 - fix for a bug in spill offset assignment for loop vars when bpf_loop
   is located in a non-main function.
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
parents aca80dd9 0e1bf9ed
@@ -1286,6 +1286,9 @@ struct bpf_array {
#define BPF_COMPLEXITY_LIMIT_INSNS 1000000 /* yes. 1M insns */
#define MAX_TAIL_CALL_CNT 33
/* Maximum number of loops for bpf_loop */
#define BPF_MAX_LOOPS BIT(23)
#define BPF_F_ACCESS_MASK (BPF_F_RDONLY | \
BPF_F_RDONLY_PROG | \
BPF_F_WRONLY | \
@@ -344,6 +344,14 @@ struct bpf_verifier_state_list {
int miss_cnt, hit_cnt;
};
struct bpf_loop_inline_state {
int initialized:1; /* set to true upon first entry */
int fit_for_inline:1; /* true if callback function is the same
* at each call and flags are always zero
*/
u32 callback_subprogno; /* valid when fit_for_inline is true */
};
/* Possible states for alu_state member. */
#define BPF_ALU_SANITIZE_SRC (1U << 0)
#define BPF_ALU_SANITIZE_DST (1U << 1)
@@ -373,6 +381,10 @@ struct bpf_insn_aux_data {
u32 mem_size; /* mem_size for non-struct typed var */
};
} btf_var;
/* if instruction is a call to bpf_loop this field tracks
* the state of the relevant registers to make decision about inlining
*/
struct bpf_loop_inline_state loop_inline_state;
};
u64 map_key_state; /* constant (32 bit) key tracking for maps */
int ctx_field_size; /* the ctx field size for load insn, maybe 0 */
@@ -723,9 +723,6 @@ const struct bpf_func_proto bpf_for_each_map_elem_proto = {
.arg4_type = ARG_ANYTHING,
};
/* maximum number of loops */
#define MAX_LOOPS BIT(23)
BPF_CALL_4(bpf_loop, u32, nr_loops, void *, callback_fn, void *, callback_ctx,
u64, flags)
{
@@ -733,9 +730,13 @@ BPF_CALL_4(bpf_loop, u32, nr_loops, void *, callback_fn, void *, callback_ctx,
u64 ret;
u32 i;
/* Note: these safety checks are also verified when bpf_loop
* is inlined, be careful to modify this code in sync. See
* function verifier.c:inline_bpf_loop.
*/
if (flags)
return -EINVAL;
if (nr_loops > BPF_MAX_LOOPS)
return -E2BIG;
for (i = 0; i < nr_loops; i++) {
@@ -7124,6 +7124,41 @@ static int check_get_func_ip(struct bpf_verifier_env *env)
return -ENOTSUPP;
}
static struct bpf_insn_aux_data *cur_aux(struct bpf_verifier_env *env)
{
return &env->insn_aux_data[env->insn_idx];
}
static bool loop_flag_is_zero(struct bpf_verifier_env *env)
{
struct bpf_reg_state *regs = cur_regs(env);
struct bpf_reg_state *reg = &regs[BPF_REG_4];
bool reg_is_null = register_is_null(reg);
if (reg_is_null)
mark_chain_precision(env, BPF_REG_4);
return reg_is_null;
}
static void update_loop_inline_state(struct bpf_verifier_env *env, u32 subprogno)
{
struct bpf_loop_inline_state *state = &cur_aux(env)->loop_inline_state;
if (!state->initialized) {
state->initialized = 1;
state->fit_for_inline = loop_flag_is_zero(env);
state->callback_subprogno = subprogno;
return;
}
if (!state->fit_for_inline)
return;
state->fit_for_inline = (loop_flag_is_zero(env) &&
state->callback_subprogno == subprogno);
}
static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
int *insn_idx_p)
{
@@ -7276,6 +7311,7 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
err = check_bpf_snprintf_call(env, regs);
break;
case BPF_FUNC_loop:
update_loop_inline_state(env, meta.subprogno);
err = __check_func_call(env, insn, insn_idx_p, meta.subprogno,
set_loop_callback_state);
break;
@@ -7682,11 +7718,6 @@ static bool check_reg_sane_offset(struct bpf_verifier_env *env,
return true;
}
static struct bpf_insn_aux_data *cur_aux(struct bpf_verifier_env *env)
{
return &env->insn_aux_data[env->insn_idx];
}
enum {
REASON_BOUNDS = -1,
REASON_TYPE = -2,
@@ -14315,6 +14346,142 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
return 0;
}
static struct bpf_prog *inline_bpf_loop(struct bpf_verifier_env *env,
int position,
s32 stack_base,
u32 callback_subprogno,
u32 *cnt)
{
s32 r6_offset = stack_base + 0 * BPF_REG_SIZE;
s32 r7_offset = stack_base + 1 * BPF_REG_SIZE;
s32 r8_offset = stack_base + 2 * BPF_REG_SIZE;
int reg_loop_max = BPF_REG_6;
int reg_loop_cnt = BPF_REG_7;
int reg_loop_ctx = BPF_REG_8;
struct bpf_prog *new_prog;
u32 callback_start;
u32 call_insn_offset;
s32 callback_offset;
/* This represents an inlined version of bpf_iter.c:bpf_loop,
* be careful to modify this code in sync.
*/
struct bpf_insn insn_buf[] = {
/* Return error and jump to the end of the patch if
* expected number of iterations is too big.
*/
BPF_JMP_IMM(BPF_JLE, BPF_REG_1, BPF_MAX_LOOPS, 2),
BPF_MOV32_IMM(BPF_REG_0, -E2BIG),
BPF_JMP_IMM(BPF_JA, 0, 0, 16),
/* spill R6, R7, R8 to use these as loop vars */
BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_6, r6_offset),
BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_7, r7_offset),
BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_8, r8_offset),
/* initialize loop vars */
BPF_MOV64_REG(reg_loop_max, BPF_REG_1),
BPF_MOV32_IMM(reg_loop_cnt, 0),
BPF_MOV64_REG(reg_loop_ctx, BPF_REG_3),
/* loop header,
* if reg_loop_cnt >= reg_loop_max skip the loop body
*/
BPF_JMP_REG(BPF_JGE, reg_loop_cnt, reg_loop_max, 5),
/* callback call,
* correct callback offset would be set after patching
*/
BPF_MOV64_REG(BPF_REG_1, reg_loop_cnt),
BPF_MOV64_REG(BPF_REG_2, reg_loop_ctx),
BPF_CALL_REL(0),
/* increment loop counter */
BPF_ALU64_IMM(BPF_ADD, reg_loop_cnt, 1),
/* jump to loop header if callback returned 0 */
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, -6),
/* return value of bpf_loop,
* set R0 to the number of iterations
*/
BPF_MOV64_REG(BPF_REG_0, reg_loop_cnt),
/* restore original values of R6, R7, R8 */
BPF_LDX_MEM(BPF_DW, BPF_REG_6, BPF_REG_10, r6_offset),
BPF_LDX_MEM(BPF_DW, BPF_REG_7, BPF_REG_10, r7_offset),
BPF_LDX_MEM(BPF_DW, BPF_REG_8, BPF_REG_10, r8_offset),
};
*cnt = ARRAY_SIZE(insn_buf);
new_prog = bpf_patch_insn_data(env, position, insn_buf, *cnt);
if (!new_prog)
return new_prog;
/* callback start is known only after patching */
callback_start = env->subprog_info[callback_subprogno].start;
/* Note: insn_buf[12] is an offset of BPF_CALL_REL instruction */
call_insn_offset = position + 12;
callback_offset = callback_start - call_insn_offset - 1;
env->prog->insnsi[call_insn_offset].imm = callback_offset;
return new_prog;
}
static bool is_bpf_loop_call(struct bpf_insn *insn)
{
return insn->code == (BPF_JMP | BPF_CALL) &&
insn->src_reg == 0 &&
insn->imm == BPF_FUNC_loop;
}
/* For all sub-programs in the program (including main) check
* insn_aux_data to see if there are bpf_loop calls that require
* inlining. If such calls are found the calls are replaced with a
* sequence of instructions produced by `inline_bpf_loop` function and
* subprog stack_depth is increased by the size of 3 registers.
* This stack space is used to spill values of the R6, R7, R8. These
* registers are used to store the loop bound, counter and context
* variables.
*/
static int optimize_bpf_loop(struct bpf_verifier_env *env)
{
struct bpf_subprog_info *subprogs = env->subprog_info;
int i, cur_subprog = 0, cnt, delta = 0;
struct bpf_insn *insn = env->prog->insnsi;
int insn_cnt = env->prog->len;
u16 stack_depth = subprogs[cur_subprog].stack_depth;
u16 stack_depth_roundup = round_up(stack_depth, 8) - stack_depth;
u16 stack_depth_extra = 0;
for (i = 0; i < insn_cnt; i++, insn++) {
struct bpf_loop_inline_state *inline_state =
&env->insn_aux_data[i + delta].loop_inline_state;
if (is_bpf_loop_call(insn) && inline_state->fit_for_inline) {
struct bpf_prog *new_prog;
stack_depth_extra = BPF_REG_SIZE * 3 + stack_depth_roundup;
new_prog = inline_bpf_loop(env,
i + delta,
-(stack_depth + stack_depth_extra),
inline_state->callback_subprogno,
&cnt);
if (!new_prog)
return -ENOMEM;
delta += cnt - 1;
env->prog = new_prog;
insn = new_prog->insnsi + i + delta;
}
if (subprogs[cur_subprog + 1].start == i + delta + 1) {
subprogs[cur_subprog].stack_depth += stack_depth_extra;
cur_subprog++;
stack_depth = subprogs[cur_subprog].stack_depth;
stack_depth_roundup = round_up(stack_depth, 8) - stack_depth;
stack_depth_extra = 0;
}
}
env->prog->aux->stack_depth = env->subprog_info[0].stack_depth;
return 0;
}
static void free_states(struct bpf_verifier_env *env)
{
struct bpf_verifier_state_list *sl, *sln;
@@ -15052,6 +15219,9 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr)
ret = check_max_stack_depth(env);
/* instruction rewrites happen after this point */
if (ret == 0)
ret = optimize_bpf_loop(env);
if (is_priv) {
if (ret == 0)
opt_hard_wire_dead_code_branches(env);
@@ -120,6 +120,64 @@ static void check_nested_calls(struct bpf_loop *skel)
bpf_link__destroy(link);
}
static void check_non_constant_callback(struct bpf_loop *skel)
{
struct bpf_link *link =
bpf_program__attach(skel->progs.prog_non_constant_callback);
if (!ASSERT_OK_PTR(link, "link"))
return;
skel->bss->callback_selector = 0x0F;
usleep(1);
ASSERT_EQ(skel->bss->g_output, 0x0F, "g_output #1");
skel->bss->callback_selector = 0xF0;
usleep(1);
ASSERT_EQ(skel->bss->g_output, 0xF0, "g_output #2");
bpf_link__destroy(link);
}
static void check_stack(struct bpf_loop *skel)
{
struct bpf_link *link = bpf_program__attach(skel->progs.stack_check);
const int max_key = 12;
int key;
int map_fd;
if (!ASSERT_OK_PTR(link, "link"))
return;
map_fd = bpf_map__fd(skel->maps.map1);
if (!ASSERT_GE(map_fd, 0, "bpf_map__fd"))
goto out;
for (key = 1; key <= max_key; ++key) {
int val = key;
int err = bpf_map_update_elem(map_fd, &key, &val, BPF_NOEXIST);
if (!ASSERT_OK(err, "bpf_map_update_elem"))
goto out;
}
usleep(1);
for (key = 1; key <= max_key; ++key) {
int val;
int err = bpf_map_lookup_elem(map_fd, &key, &val);
if (!ASSERT_OK(err, "bpf_map_lookup_elem"))
goto out;
if (!ASSERT_EQ(val, key + 1, "bad value in the map"))
goto out;
}
out:
bpf_link__destroy(link);
}
void test_bpf_loop(void)
{
struct bpf_loop *skel;
@@ -140,6 +198,10 @@ void test_bpf_loop(void)
check_invalid_flags(skel);
if (test__start_subtest("check_nested_calls"))
check_nested_calls(skel);
if (test__start_subtest("check_non_constant_callback"))
check_non_constant_callback(skel);
if (test__start_subtest("check_stack"))
check_stack(skel);
bpf_loop__destroy(skel);
}
@@ -34,7 +34,6 @@ static bool always_log;
#undef CHECK
#define CHECK(condition, format...) _CHECK(condition, "check", duration, format)
#define BTF_END_RAW 0xdeadbeef
#define NAME_TBD 0xdeadb33f
#define NAME_NTH(N) (0xfffe0000 | N)
@@ -11,11 +11,19 @@ struct callback_ctx {
int output;
};
struct {
__uint(type, BPF_MAP_TYPE_HASH);
__uint(max_entries, 32);
__type(key, int);
__type(value, int);
} map1 SEC(".maps");
/* These should be set by the user program */
u32 nested_callback_nr_loops;
u32 stop_index = -1;
u32 nr_loops;
int pid;
int callback_selector;
/* Making these global variables so that the userspace program
* can verify the output through the skeleton
@@ -111,3 +119,109 @@ int prog_nested_calls(void *ctx)
return 0;
}
static int callback_set_f0(int i, void *ctx)
{
g_output = 0xF0;
return 0;
}
static int callback_set_0f(int i, void *ctx)
{
g_output = 0x0F;
return 0;
}
/*
* non-constant callback is a corner case for bpf_loop inline logic
*/
SEC("fentry/" SYS_PREFIX "sys_nanosleep")
int prog_non_constant_callback(void *ctx)
{
struct callback_ctx data = {};
if (bpf_get_current_pid_tgid() >> 32 != pid)
return 0;
int (*callback)(int i, void *ctx);
g_output = 0;
if (callback_selector == 0x0F)
callback = callback_set_0f;
else
callback = callback_set_f0;
bpf_loop(1, callback, NULL, 0);
return 0;
}
static int stack_check_inner_callback(void *ctx)
{
return 0;
}
static int map1_lookup_elem(int key)
{
int *val = bpf_map_lookup_elem(&map1, &key);
return val ? *val : -1;
}
static void map1_update_elem(int key, int val)
{
bpf_map_update_elem(&map1, &key, &val, BPF_ANY);
}
static int stack_check_outer_callback(void *ctx)
{
int a = map1_lookup_elem(1);
int b = map1_lookup_elem(2);
int c = map1_lookup_elem(3);
int d = map1_lookup_elem(4);
int e = map1_lookup_elem(5);
int f = map1_lookup_elem(6);
bpf_loop(1, stack_check_inner_callback, NULL, 0);
map1_update_elem(1, a + 1);
map1_update_elem(2, b + 1);
map1_update_elem(3, c + 1);
map1_update_elem(4, d + 1);
map1_update_elem(5, e + 1);
map1_update_elem(6, f + 1);
return 0;
}
/* Some of the local variables in stack_check and
* stack_check_outer_callback would be allocated on stack by
* compiler. This test should verify that stack content for these
* variables is preserved between calls to bpf_loop (might be an issue
* if loop inlining allocates stack slots incorrectly).
*/
SEC("fentry/" SYS_PREFIX "sys_nanosleep")
int stack_check(void *ctx)
{
if (bpf_get_current_pid_tgid() >> 32 != pid)
return 0;
int a = map1_lookup_elem(7);
int b = map1_lookup_elem(8);
int c = map1_lookup_elem(9);
int d = map1_lookup_elem(10);
int e = map1_lookup_elem(11);
int f = map1_lookup_elem(12);
bpf_loop(1, stack_check_outer_callback, NULL, 0);
map1_update_elem(7, a + 1);
map1_update_elem(8, b + 1);
map1_update_elem(9, c + 1);
map1_update_elem(10, d + 1);
map1_update_elem(11, e + 1);
map1_update_elem(12, f + 1);
return 0;
}
@@ -4,6 +4,8 @@
#ifndef _TEST_BTF_H
#define _TEST_BTF_H
#define BTF_END_RAW 0xdeadbeef
#define BTF_INFO_ENC(kind, kind_flag, vlen) \
((!!(kind_flag) << 31) | ((kind) << 24) | ((vlen) & BTF_MAX_VLEN))
#define BTF_TYPES \
.btf_strings = "\0int\0i\0ctx\0callback\0main\0", \
.btf_types = { \
/* 1: int */ BTF_TYPE_INT_ENC(1, BTF_INT_SIGNED, 0, 32, 4), \
/* 2: int* */ BTF_PTR_ENC(1), \
/* 3: void* */ BTF_PTR_ENC(0), \
/* 4: int __(void*) */ BTF_FUNC_PROTO_ENC(1, 1), \
BTF_FUNC_PROTO_ARG_ENC(7, 3), \
/* 5: int __(int, int*) */ BTF_FUNC_PROTO_ENC(1, 2), \
BTF_FUNC_PROTO_ARG_ENC(5, 1), \
BTF_FUNC_PROTO_ARG_ENC(7, 2), \
/* 6: main */ BTF_FUNC_ENC(20, 4), \
/* 7: callback */ BTF_FUNC_ENC(11, 5), \
BTF_END_RAW \
}
#define MAIN_TYPE 6
#define CALLBACK_TYPE 7
/* can't use BPF_CALL_REL, jit_subprogs adjusts IMM & OFF
* fields for pseudo calls
*/
#define PSEUDO_CALL_INSN() \
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_CALL, \
INSN_OFF_MASK, INSN_IMM_MASK)
/* can't use BPF_FUNC_loop constant,
* do_misc_fixups adjusts the IMM field
*/
#define HELPER_CALL_INSN() \
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, INSN_OFF_MASK, INSN_IMM_MASK)
{
"inline simple bpf_loop call",
.insns = {
/* main */
/* force verifier state branching to verify logic on first and
* subsequent bpf_loop insn processing steps
*/
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_jiffies64),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 777, 2),
BPF_ALU64_IMM(BPF_MOV, BPF_REG_1, 1),
BPF_JMP_IMM(BPF_JA, 0, 0, 1),
BPF_ALU64_IMM(BPF_MOV, BPF_REG_1, 2),
BPF_RAW_INSN(BPF_LD | BPF_IMM | BPF_DW, BPF_REG_2, BPF_PSEUDO_FUNC, 0, 6),
BPF_RAW_INSN(0, 0, 0, 0, 0),
BPF_ALU64_IMM(BPF_MOV, BPF_REG_3, 0),
BPF_ALU64_IMM(BPF_MOV, BPF_REG_4, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_loop),
BPF_ALU64_IMM(BPF_MOV, BPF_REG_0, 0),
BPF_EXIT_INSN(),
/* callback */
BPF_ALU64_IMM(BPF_MOV, BPF_REG_0, 1),
BPF_EXIT_INSN(),
},
.expected_insns = { PSEUDO_CALL_INSN() },
.unexpected_insns = { HELPER_CALL_INSN() },
.prog_type = BPF_PROG_TYPE_TRACEPOINT,
.result = ACCEPT,
.runs = 0,
.func_info = { { 0, MAIN_TYPE }, { 12, CALLBACK_TYPE } },
.func_info_cnt = 2,
BTF_TYPES
},
{
"don't inline bpf_loop call, flags non-zero",
.insns = {
/* main */
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_jiffies64),
BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_jiffies64),
BPF_ALU64_REG(BPF_MOV, BPF_REG_7, BPF_REG_0),
BPF_JMP_IMM(BPF_JNE, BPF_REG_6, 0, 9),
BPF_ALU64_IMM(BPF_MOV, BPF_REG_4, 0),
BPF_JMP_IMM(BPF_JNE, BPF_REG_7, 0, 0),
BPF_ALU64_IMM(BPF_MOV, BPF_REG_1, 1),
BPF_RAW_INSN(BPF_LD | BPF_IMM | BPF_DW, BPF_REG_2, BPF_PSEUDO_FUNC, 0, 7),
BPF_RAW_INSN(0, 0, 0, 0, 0),
BPF_ALU64_IMM(BPF_MOV, BPF_REG_3, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_loop),
BPF_ALU64_IMM(BPF_MOV, BPF_REG_0, 0),
BPF_EXIT_INSN(),
BPF_ALU64_IMM(BPF_MOV, BPF_REG_4, 1),
BPF_JMP_IMM(BPF_JA, 0, 0, -10),
/* callback */
BPF_ALU64_IMM(BPF_MOV, BPF_REG_0, 1),
BPF_EXIT_INSN(),
},
.expected_insns = { HELPER_CALL_INSN() },
.unexpected_insns = { PSEUDO_CALL_INSN() },
.prog_type = BPF_PROG_TYPE_TRACEPOINT,
.result = ACCEPT,
.runs = 0,
.func_info = { { 0, MAIN_TYPE }, { 16, CALLBACK_TYPE } },
.func_info_cnt = 2,
BTF_TYPES
},
{
"don't inline bpf_loop call, callback non-constant",
.insns = {
/* main */
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_jiffies64),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 777, 4), /* pick a random callback */
BPF_ALU64_IMM(BPF_MOV, BPF_REG_1, 1),
BPF_RAW_INSN(BPF_LD | BPF_IMM | BPF_DW, BPF_REG_2, BPF_PSEUDO_FUNC, 0, 10),
BPF_RAW_INSN(0, 0, 0, 0, 0),
BPF_JMP_IMM(BPF_JA, 0, 0, 3),
BPF_ALU64_IMM(BPF_MOV, BPF_REG_1, 1),
BPF_RAW_INSN(BPF_LD | BPF_IMM | BPF_DW, BPF_REG_2, BPF_PSEUDO_FUNC, 0, 8),
BPF_RAW_INSN(0, 0, 0, 0, 0),
BPF_ALU64_IMM(BPF_MOV, BPF_REG_3, 0),
BPF_ALU64_IMM(BPF_MOV, BPF_REG_4, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_loop),
BPF_ALU64_IMM(BPF_MOV, BPF_REG_0, 0),
BPF_EXIT_INSN(),
/* callback */
BPF_ALU64_IMM(BPF_MOV, BPF_REG_0, 1),
BPF_EXIT_INSN(),
/* callback #2 */
BPF_ALU64_IMM(BPF_MOV, BPF_REG_0, 1),
BPF_EXIT_INSN(),
},
.expected_insns = { HELPER_CALL_INSN() },
.unexpected_insns = { PSEUDO_CALL_INSN() },
.prog_type = BPF_PROG_TYPE_TRACEPOINT,
.result = ACCEPT,
.runs = 0,
.func_info = {
{ 0, MAIN_TYPE },
{ 14, CALLBACK_TYPE },
{ 16, CALLBACK_TYPE }
},
.func_info_cnt = 3,
BTF_TYPES
},
{
"bpf_loop_inline and a dead func",
.insns = {
/* main */
/* A reference to callback #1 to make verifier count it as a func.
* This reference is overwritten below and callback #1 is dead.
*/
BPF_RAW_INSN(BPF_LD | BPF_IMM | BPF_DW, BPF_REG_2, BPF_PSEUDO_FUNC, 0, 9),
BPF_RAW_INSN(0, 0, 0, 0, 0),
BPF_ALU64_IMM(BPF_MOV, BPF_REG_1, 1),
BPF_RAW_INSN(BPF_LD | BPF_IMM | BPF_DW, BPF_REG_2, BPF_PSEUDO_FUNC, 0, 8),
BPF_RAW_INSN(0, 0, 0, 0, 0),
BPF_ALU64_IMM(BPF_MOV, BPF_REG_3, 0),
BPF_ALU64_IMM(BPF_MOV, BPF_REG_4, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_loop),
BPF_ALU64_IMM(BPF_MOV, BPF_REG_0, 0),
BPF_EXIT_INSN(),
/* callback */
BPF_ALU64_IMM(BPF_MOV, BPF_REG_0, 1),
BPF_EXIT_INSN(),
/* callback #2 */
BPF_ALU64_IMM(BPF_MOV, BPF_REG_0, 1),
BPF_EXIT_INSN(),
},
.expected_insns = { PSEUDO_CALL_INSN() },
.unexpected_insns = { HELPER_CALL_INSN() },
.prog_type = BPF_PROG_TYPE_TRACEPOINT,
.result = ACCEPT,
.runs = 0,
.func_info = {
{ 0, MAIN_TYPE },
{ 10, CALLBACK_TYPE },
{ 12, CALLBACK_TYPE }
},
.func_info_cnt = 3,
BTF_TYPES
},
{
"bpf_loop_inline stack locations for loop vars",
.insns = {
/* main */
BPF_ST_MEM(BPF_W, BPF_REG_10, -12, 0x77),
/* bpf_loop call #1 */
BPF_ALU64_IMM(BPF_MOV, BPF_REG_1, 1),
BPF_RAW_INSN(BPF_LD | BPF_IMM | BPF_DW, BPF_REG_2, BPF_PSEUDO_FUNC, 0, 22),
BPF_RAW_INSN(0, 0, 0, 0, 0),
BPF_ALU64_IMM(BPF_MOV, BPF_REG_3, 0),
BPF_ALU64_IMM(BPF_MOV, BPF_REG_4, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_loop),
/* bpf_loop call #2 */
BPF_ALU64_IMM(BPF_MOV, BPF_REG_1, 2),
BPF_RAW_INSN(BPF_LD | BPF_IMM | BPF_DW, BPF_REG_2, BPF_PSEUDO_FUNC, 0, 16),
BPF_RAW_INSN(0, 0, 0, 0, 0),
BPF_ALU64_IMM(BPF_MOV, BPF_REG_3, 0),
BPF_ALU64_IMM(BPF_MOV, BPF_REG_4, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_loop),
/* call func and exit */
BPF_CALL_REL(2),
BPF_ALU64_IMM(BPF_MOV, BPF_REG_0, 0),
BPF_EXIT_INSN(),
/* func */
BPF_ST_MEM(BPF_DW, BPF_REG_10, -32, 0x55),
BPF_ALU64_IMM(BPF_MOV, BPF_REG_1, 2),
BPF_RAW_INSN(BPF_LD | BPF_IMM | BPF_DW, BPF_REG_2, BPF_PSEUDO_FUNC, 0, 6),
BPF_RAW_INSN(0, 0, 0, 0, 0),
BPF_ALU64_IMM(BPF_MOV, BPF_REG_3, 0),
BPF_ALU64_IMM(BPF_MOV, BPF_REG_4, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_loop),
BPF_ALU64_IMM(BPF_MOV, BPF_REG_0, 0),
BPF_EXIT_INSN(),
/* callback */
BPF_ALU64_IMM(BPF_MOV, BPF_REG_0, 1),
BPF_EXIT_INSN(),
},
.expected_insns = {
BPF_ST_MEM(BPF_W, BPF_REG_10, -12, 0x77),
SKIP_INSNS(),
BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_6, -40),
BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_7, -32),
BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_8, -24),
SKIP_INSNS(),
/* offsets are the same as in the first call */
BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_6, -40),
BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_7, -32),
BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_8, -24),
SKIP_INSNS(),
BPF_ST_MEM(BPF_DW, BPF_REG_10, -32, 0x55),
SKIP_INSNS(),
/* offsets differ from main because of different offset
* in BPF_ST_MEM instruction
*/
BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_6, -56),
BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_7, -48),
BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_8, -40),
},
.unexpected_insns = { HELPER_CALL_INSN() },
.prog_type = BPF_PROG_TYPE_TRACEPOINT,
.result = ACCEPT,
.func_info = {
{ 0, MAIN_TYPE },
{ 16, MAIN_TYPE },
{ 25, CALLBACK_TYPE },
},
.func_info_cnt = 3,
BTF_TYPES
},
#undef HELPER_CALL_INSN
#undef PSEUDO_CALL_INSN
#undef CALLBACK_TYPE
#undef MAIN_TYPE
#undef BTF_TYPES