21 Jun, 2022 (7 commits)
      Merge branch 'bpf_loop inlining' · b40b414e
      Alexei Starovoitov authored
      Eduard Zingerman says:
      
      ====================
      
      Hi Everyone,
      
      This is the next iteration of the patch set. It includes changes
      suggested by Song, Joanne and Alexei. Please find the updated intro
      message and change log below.
      
      This patch set implements inlining of calls to the bpf_loop helper
      function when bpf_loop's callback is statically known, e.g. the
      rewrite applies the following transformation during BPF program
      processing:
      
        bpf_loop(10, foo, NULL, 0);
      
       ->
      
        for (int i = 0; i < 10; ++i)
          foo(i, NULL);
      
      The transformation yields a measurable latency reduction for simple
      loops. Measurements using `benchs/run_bench_bpf_loop.sh` inside QEMU /
      KVM on i7-4710HQ CPU show a drop in latency from 14 ns/op to 2 ns/op.
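
      For context, a typical use of the helper in a BPF program looks
      roughly like the sketch below (a minimal example for illustration;
      the section name and program logic are made up, not taken from the
      patch set):

        #include <linux/bpf.h>
        #include <bpf/bpf_helpers.h>

        /* Per the helper documentation the callback returns 0 to
         * continue iterating and 1 to stop early.
         */
        static long sum_cb(__u32 index, void *ctx)
        {
        	*(__u64 *)ctx += index;
        	return 0;
        }

        SEC("tracepoint/syscalls/sys_enter_getpid")
        int sum_first_ten(void *unused)
        {
        	__u64 sum = 0;

        	/* constant callback, flags == 0: a candidate for inlining */
        	bpf_loop(10, sum_cb, &sum, 0);
        	return 0;
        }

        char _license[] SEC("license") = "GPL";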
      
      The change is split into five parts:
      
      * Update to test_verifier.c to specify expected and unexpected
        instruction sequences. This allows checking BPF program rewrites
        applied by e.g. the do_misc_fixups function.
      
      * Update to test_verifier.c to specify BTF function infos and types
        per test case. This is necessary for tests that load sub-program
        addresses into a variable, because of the checks applied by the
        check_ld_imm function.
      
      * An update to verifier.c that tracks the state of the parameters
        for each bpf_loop call in a program and decides whether the call
        can be replaced by a loop (see the state sketch after this list).
      
      * A set of test cases for `test_verifier` that use the capabilities
        added by the first two patches to verify the instructions produced
        by the inlining logic.
      
      * Two test cases for `test_prog` to check that possible corner cases
        behave as expected.
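
      For orientation, the per-call state tracked by the verifier looks
      roughly like the struct below (a sketch assembled from this cover
      letter and the change log; field names follow the change log, the
      comments are mine):

        struct bpf_loop_inline_state {
        	unsigned int initialized:1;    /* set once the state is recorded */
        	unsigned int fit_for_inline:1; /* callback is the same constant on
        					* every verified path and flags are
        					* always zero
        					*/
        	u32 callback_subprogno;        /* valid when fit_for_inline is set */
        };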
      
      Additional details are available in commit messages for each patch.
      
      Changes since v7:
       - Call to `mark_chain_precision` is added in `loop_flag_is_zero` to
         avoid potential issues with state pruning and precision tracking.
       - `flags non-zero` test_verifier test case is updated to have two
         execution paths reaching the `bpf_loop` call, one with flags = 0,
         another with flags = 1. Ideally this test case would demonstrate
         that the call to `mark_chain_precision` in `loop_flag_is_zero` is
         necessary, but it does not do so at the moment. Please refer to
         the discussion for [PATCH bpf-next v7 3/5] for additional details.
       - `stack_depth_extra` computation is updated to guarantee that R6, R7
         and R8 offsets are always aligned on an 8-byte boundary.
       - `stack locations for loop vars` test_verifier test case is updated
         to show that R6, R7, R8 offsets are indeed aligned when the
         function stack depth is not a multiple of 8.
       - I removed Song Liu's ACK from the commit message for [PATCH
         bpf-next v8 4/5] because I updated the patch. (Please let me know
         if I should have kept the ACK tag.)
      
      Changes since v6:
       - The return value of the `optimize_bpf_loop` function is no longer
         ignored. This is necessary to properly propagate the -ENOMEM error.
      
      Changes since v5:
       - Added function `loop_flag_is_zero` to skip a few checks in
         `update_loop_inline_state` when the loop instruction is not fit
         for inlining.
      
      Changes since v4:
       - Added missing `static` modifier for `update_loop_inline_state` and
         `inline_bpf_loop` functions.
       - `update_loop_inline_state` updated for better readability.
       - Fields `initialized` and `fit_for_inline` of `struct
         bpf_loop_inline_state` are changed back from `bool` to bitfields.
       - Acks from Song Liu added to comments for patches 1/5, 2/5, 4/5,
         5/5.
      
      Changes since v3:
       - Function `adjust_stack_depth_for_loop_inlining` is replaced by
         function `optimize_bpf_loop`. Function `optimize_bpf_loop` is
         responsible for both stack depth adjustment and call instruction
         replacement.
       - Changes in `do_misc_fixups` are reverted.
       - Changes in `adjust_subprog_starts_after_remove` are reverted and
         function `adjust_loop_inline_subprogno` is removed. This is
         possible because call to `optimize_bpf_loop` is placed before the
         dead code removal in `opt_remove_dead_code` (in contrast to the
         position of `do_misc_fixups` where inlining was done in v3).
       - Field `bpf_insn_aux_data.loop_inline_state` is now a part of
         anonymous union at the start of the `bpf_insn_aux_data`.
       - Data structure `bpf_loop_inline_state` is simplified to use single
         flag field `fit_for_inline` instead of separate fields
         `flags_is_zero` & `callback_is_constant`.
       - Macro definition `BPF_MAX_LOOPS` is moved from
         `include/linux/bpf_verifier.h` to `include/linux/bpf.h` to avoid
         including `include/linux/bpf_verifier.h` in `bpf_iter.c`.
       - `inline_bpf_loop` changed back to use array initialization and hard
         coded offsets as in v2.
       - Style / formatting updates.
      
      Changes since v2:
       - fix for `stack_check` test case in `test_progs-no_alu32`, all tests
         are passing now;
       - v2 3/3 patch is split in three parts:
         - kernel changes
         - test_verifier changes
         - test_prog changes
       - updated `inline_bpf_loop` in `verifier.c` to calculate each offset
         used in instructions to avoid "magic" numbers;
       - removed newline handling logic in `fail_log` branch of
         `do_single_test` in `test_verifier.c` to simplify the patch set;
       - styling fixes suggested in review for v2 of this patch set.
      
      Changes since v1:
       - allow the use of SKIP_INSNS in instruction pattern specifications
         in test_verifier tests;
       - fix for a bug in spill offset assignment for loop vars when
         bpf_loop is located in a non-main function.
      ====================
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      selftests/bpf: BPF test_prog selftests for bpf_loop inlining · 0e1bf9ed
      Eduard Zingerman authored
      Two new test BPF programs for test_prog selftests checking bpf_loop
      behavior. Both are corner cases for the bpf_loop inlining transformation:
       - check that bpf_loop behaves correctly when callback function is not
         a compile time constant
       - check that local function variables are not affected by allocating
         additional stack storage for registers spilled by loop inlining
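
      The first corner case looks roughly like the sketch below (an
      illustrative example only; function and section names are made up
      and the details are simplified, this is not the verbatim selftest):

        #include <linux/bpf.h>
        #include <bpf/bpf_helpers.h>

        static long cb_inc(__u32 i, void *ctx) { (*(long *)ctx)++; return 0; }
        static long cb_dec(__u32 i, void *ctx) { (*(long *)ctx)--; return 0; }

        SEC("fentry/bpf_fentry_test1")
        int non_constant_callback(void *unused)
        {
        	long acc = 0;
        	long (*cb)(__u32, void *);

        	/* cb depends on run-time data, so the verifier cannot treat
        	 * it as a compile-time constant and must keep the helper call
        	 */
        	cb = (bpf_get_prandom_u32() & 1) ? cb_inc : cb_dec;
        	bpf_loop(8, cb, &acc, 0);
        	return 0;
        }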
      Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
      Acked-by: Song Liu <songliubraving@fb.com>
      Link: https://lore.kernel.org/r/20220620235344.569325-6-eddyz87@gmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      selftests/bpf: BPF test_verifier selftests for bpf_loop inlining · f8acfdd0
      Eduard Zingerman authored
      A number of test cases for the BPF test_verifier selftests checking
      how the bpf_loop inlining transformation rewrites the BPF program.
      The following
      cases are covered:
       - happy path
       - no-rewrite when flags is non-zero
       - no-rewrite when callback is non-constant
       - subprogno in insn_aux is updated correctly when dead sub-programs
         are removed
       - check that correct stack offsets are assigned for spilling of R6-R8
         registers
      Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
      Acked-by: Song Liu <songliubraving@fb.com>
      Link: https://lore.kernel.org/r/20220620235344.569325-5-eddyz87@gmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      bpf: Inline calls to bpf_loop when callback is known · 1ade2371
      Eduard Zingerman authored
      Calls to `bpf_loop` are replaced with direct loops to avoid
      indirection. E.g. the following:
      
        bpf_loop(10, foo, NULL, 0);
      
      is replaced by the equivalent of the following:
      
        for (int i = 0; i < 10; ++i)
          foo(i, NULL);
      
      This transformation can be applied when:
      - the callback is known and does not change during program execution;
      - the flags passed to `bpf_loop` are always zero.
      
      The inlining logic works as follows:
      
      - During execution simulation function `update_loop_inline_state`
        tracks the following information for each `bpf_loop` call
        instruction:
        - is callback known and constant?
        - are flags constant and zero?
      - Function `optimize_bpf_loop` increases the stack depth for
        functions where `bpf_loop` calls can be inlined and invokes
        `inline_bpf_loop` to apply the inlining. The additional stack space
        is used to spill registers R6, R7 and R8. These registers are used
        as the loop counter, the loop upper bound and the callback context
        parameter.
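
      For reference, the helper being replaced behaves roughly as in the
      kernel-side C sketch below, and the inlined instruction sequence
      preserves these semantics (a simplified sketch of the `bpf_loop`
      implementation in `kernel/bpf/bpf_iter.c`, not the verbatim source):

        static long bpf_loop_semantics(u32 nr_loops, bpf_callback_t callback,
        			       void *ctx, u64 flags)
        {
        	u32 i;

        	if (flags)                    /* inlining requires flags == 0 */
        		return -EINVAL;
        	if (nr_loops > BPF_MAX_LOOPS) /* BPF_MAX_LOOPS == 8 * 1024 * 1024 */
        		return -E2BIG;

        	for (i = 0; i < nr_loops; i++) {
        		/* a non-zero return value stops the iteration early */
        		if (callback((u64)i, (u64)(long)ctx, 0, 0, 0))
        			return i + 1;
        	}
        	return i;                     /* number of iterations performed */
        }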
      
      Measurements using `benchs/run_bench_bpf_loop.sh` inside QEMU / KVM on
      i7-4710HQ CPU show a drop in latency from 14 ns/op to 2 ns/op.
      Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
      Acked-by: Song Liu <songliubraving@fb.com>
      Link: https://lore.kernel.org/r/20220620235344.569325-4-eddyz87@gmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      selftests/bpf: allow BTF specs and func infos in test_verifier tests · 7a42008c
      Eduard Zingerman authored
      The BTF and func_info specification for test_verifier tests follows
      the same notation as in prog_tests/btf.c tests. E.g.:
      
        ...
        .func_info = { { 0, 6 }, { 8, 7 } },
        .func_info_cnt = 2,
        .btf_strings = "\0int\0",
        .btf_types = {
          BTF_TYPE_INT_ENC(1, BTF_INT_SIGNED, 0, 32, 4),
          BTF_PTR_ENC(1),
        },
        ...
      
      The BTF data is loaded only when the test case provides a specification.
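
      Each `func_info` entry is an (instruction offset, BTF type id) pair
      matching `struct bpf_func_info` from `include/uapi/linux/bpf.h`
      (comments added here for illustration):

        struct bpf_func_info {
        	__u32	insn_off;	/* index of the function's first instruction */
        	__u32	type_id;	/* BTF type id describing the function */
        };

      So in the example above, the function starting at instruction 0 has
      BTF type id 6 and the function starting at instruction 8 has type id 7.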
      Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
      Acked-by: Song Liu <songliubraving@fb.com>
      Link: https://lore.kernel.org/r/20220620235344.569325-3-eddyz87@gmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      selftests/bpf: specify expected instructions in test_verifier tests · 933ff531
      Eduard Zingerman authored
      Allows specifying expected and unexpected instruction sequences in
      test_verifier test cases. The instructions are requested from the
      kernel after BPF program loading, making it possible to check some
      of the transformations applied by the BPF verifier.
      
      - `expected_insn` field specifies a sequence of instructions expected
        to be found in the program;
      - `unexpected_insn` field specifies a sequence of instructions that
        are not expected to be found in the program;
      - `INSN_OFF_MASK` and `INSN_IMM_MASK` values can be used to mask the
        `off` and `imm` fields;
      - `SKIP_INSNS` can be used to specify that some instructions in the
        (un)expected pattern are not important (behavior similar to the
        usage of `\t` in the `errstr` field).
      
      The intended usage is as follows:
      
        {
        	"inline simple bpf_loop call",
        	.insns = {
        	/* main */
        	BPF_ALU64_IMM(BPF_MOV, BPF_REG_1, 1),
        	BPF_RAW_INSN(BPF_LD | BPF_IMM | BPF_DW, BPF_REG_2,
        		     BPF_PSEUDO_FUNC, 0, 6),
        	...
        	BPF_EXIT_INSN(),
        	/* callback */
        	BPF_ALU64_IMM(BPF_MOV, BPF_REG_0, 1),
        	BPF_EXIT_INSN(),
        	},
        	.expected_insns = {
        	BPF_ALU64_IMM(BPF_MOV, BPF_REG_1, 1),
        	SKIP_INSNS(),
        	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_CALL, 8, 1)
        	},
        	.unexpected_insns = {
        	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0,
        		     INSN_OFF_MASK, INSN_IMM_MASK),
        	},
        	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
        	.result = ACCEPT,
        	.runs = 0,
        },
      
      Here it is expected that the move of 1 to register 1 remains in
      place and that the helper function call instruction is replaced by a
      relative call instruction.
      Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
      Acked-by: Song Liu <songliubraving@fb.com>
      Link: https://lore.kernel.org/r/20220620235344.569325-2-eddyz87@gmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      uprobe: gate bpf call behind BPF_EVENTS · aca80dd9
      Delyan Kratunov authored
      The call into bpf from uprobes needs to be gated now that it no
      longer uses the trace_events.h helpers.
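
      The fix is essentially a Kconfig gate around the BPF dispatch in the
      uprobe perf path, roughly of the shape sketched below (not the
      verbatim diff; the function and helper names are assumed from the
      surrounding sleepable-uprobe series):

        /* kernel/trace/trace_uprobe.c, __uprobe_perf_func() (sketch) */
        #ifdef CONFIG_BPF_EVENTS
        	if (bpf_prog_array_valid(call)) {
        		u32 ret;

        		ret = bpf_prog_run_array_sleepable(call->prog_array, regs,
        						   bpf_prog_run);
        		if (!ret)
        			return;
        	}
        #endif /* CONFIG_BPF_EVENTS */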
      
      Randy found this as a randconfig build failure on linux-next [1].
      
        [1]: https://lore.kernel.org/linux-next/2de99180-7d55-2fdf-134d-33198c27cc58@infradead.org/
      Reported-by: Randy Dunlap <rdunlap@infradead.org>
      Signed-off-by: Delyan Kratunov <delyank@fb.com>
      Tested-by: Randy Dunlap <rdunlap@infradead.org>
      Acked-by: Randy Dunlap <rdunlap@infradead.org>
      Link: https://lore.kernel.org/r/cb8bfbbcde87ed5d811227a393ef4925f2aadb7b.camel@fb.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>