1. 30 Jan, 2023 7 commits
    • selftests/bpf: Fix s390x vmlinux path · af320fb7
      Ilya Leoshkevich authored
      After commit edd4a866 ("s390/boot: get rid of startup archive")
      there is no longer a compressed/ subdirectory.
      Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
      Link: https://lore.kernel.org/r/20230129190501.1624747-8-iii@linux.ibm.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • s390/bpf: Implement bpf_jit_supports_kfunc_call() · 63d7b53a
      Ilya Leoshkevich authored
      Implement calling kernel functions from eBPF. In general, the eBPF ABI
      is fairly close to that of s390x, with one important difference: on
      s390x callers must sign-extend signed arguments. Handle that by using
      the information returned by bpf_jit_find_kfunc_model().
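
      As a hedged sketch (not the patch itself), the JIT could consume that
      model roughly as follows; emit_sign_extend() is a hypothetical
      placeholder for the actual instruction emitters (lgbr/lghr/lgfr), and
      BTF_FMODEL_SIGNED_ARG is assumed from the same patch series:

          /* Sketch: sign-extend each kfunc argument the model marks
           * as signed, using its size to pick the right instruction. */
          static void sign_extend_signed_args(struct bpf_jit *jit,
                                              struct bpf_prog *prog,
                                              struct bpf_insn *insn)
          {
                  const struct btf_func_model *m;
                  int i;

                  m = bpf_jit_find_kfunc_model(prog, insn);
                  if (!m)
                          return;
                  for (i = 0; i < m->nr_args; i++) {
                          if (m->arg_flags[i] & BTF_FMODEL_SIGNED_ARG)
                                  emit_sign_extend(jit, i, m->arg_size[i]);
                  }
          }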
      
      Here is an example of how sign extension works. Suppose we need to
      call the following function from BPF:
      
          ; long noinline bpf_kfunc_call_test4(signed char a, short b, int c,
      long d)
          0000000000936a78 <bpf_kfunc_call_test4>:
          936a78:       c0 04 00 00 00 00       jgnop bpf_kfunc_call_test4
          ;     return (long)a + (long)b + (long)c + d;
          936a7e:       b9 08 00 45             agr     %r4,%r5
          936a82:       b9 08 00 43             agr     %r4,%r3
          936a86:       b9 08 00 24             agr     %r2,%r4
          936a8a:       c0 f4 00 1e 3b 27       jg      <__s390_indirect_jump_r14>
      
      As per the s390x ABI, bpf_kfunc_call_test4() has the right to assume
      that a, b and c are sign-extended by the caller, which results in using
      64-bit additions (agr) without any additional conversions. Without sign
      extension we would have the following on the JITed code side:
      
          ; tmp = bpf_kfunc_call_test4(-3, -30, -200, -1000);
          ;        5:       b4 10 00 00 ff ff ff fd w1 = -3
          0x3ff7fdcdad4:       llilf   %r2,0xfffffffd
          ;        6:       b4 20 00 00 ff ff ff e2 w2 = -30
          0x3ff7fdcdada:       llilf   %r3,0xffffffe2
          ;        7:       b4 30 00 00 ff ff ff 38 w3 = -200
          0x3ff7fdcdae0:       llilf   %r4,0xffffff38
          ;        8:       b7 40 00 00 ff ff fc 18 r4 = -1000
          0x3ff7fdcdae6:       lgfi    %r5,-1000
          0x3ff7fdcdaec:       mvc     64(4,%r15),160(%r15)
          0x3ff7fdcdaf2:       lgrl    %r1,bpf_kfunc_call_test4@GOT
          0x3ff7fdcdaf8:       brasl   %r14,__s390_indirect_jump_r1
      
      The first 3 llilfs are 32-bit loads that need to be sign-extended
      to 64 bits.
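
      To see the difference outside the JIT, here is a small, self-contained
      C program (an illustration, not part of the patch) that contrasts the
      zero extension performed by llilf with the sign extension the s390x
      ABI expects:

          #include <stdio.h>
          #include <stdint.h>

          int main(void)
          {
                  /* What llilf does: load a 32-bit value, zero the upper bits. */
                  uint64_t zero_extended = (uint32_t)-3; /* 0x00000000fffffffd */
                  /* What the s390x ABI expects for a signed int argument. */
                  int64_t sign_extended = (int32_t)-3;   /* 0xfffffffffffffffd */

                  printf("zero-extended: 0x%016llx\n",
                         (unsigned long long)zero_extended);
                  printf("sign-extended: 0x%016llx\n",
                         (unsigned long long)sign_extended);
                  return 0;
          }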
      
      Note: at the moment bpf_jit_find_kfunc_model() does not seem to play
      nicely with XDP metadata functions: add_kfunc_call() adds an "abstract"
      bpf_*() version to kfunc_btf_tab, but then fixup_kfunc_call() puts the
      concrete version into insn->imm, which bpf_jit_find_kfunc_model() cannot
      find. But this seems to be a problem in the common code rather than
      in this JIT.
      Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
      Link: https://lore.kernel.org/r/20230129190501.1624747-7-iii@linux.ibm.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • s390/bpf: Implement bpf_jit_supports_subprog_tailcalls() · dd691e84
      Ilya Leoshkevich authored
      Allow mixing subprogs and tail calls by passing the current tail
      call count to subprogs.
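
      As a hedged illustration (not taken from the patch), a minimal BPF-side
      C fragment that mixes the two features; the map and program names are
      invented for the example:

          #include <linux/bpf.h>
          #include <bpf/bpf_helpers.h>

          struct {
                  __uint(type, BPF_MAP_TYPE_PROG_ARRAY);
                  __uint(max_entries, 1);
                  __uint(key_size, sizeof(__u32));
                  __uint(value_size, sizeof(__u32));
          } jmp_table SEC(".maps");

          /* __noinline keeps this a real subprog rather than inlined code. */
          static __noinline int subprog(struct __sk_buff *skb)
          {
                  /* Tail call from inside a subprog: the JIT must pass the
                   * current tail call count here for the limit to hold. */
                  bpf_tail_call(skb, &jmp_table, 0);
                  return 0;
          }

          SEC("tc")
          int entry(struct __sk_buff *skb)
          {
                  return subprog(skb);
          }

          char _license[] SEC("license") = "GPL";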
      Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
      Link: https://lore.kernel.org/r/20230129190501.1624747-6-iii@linux.ibm.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • s390/bpf: Implement arch_prepare_bpf_trampoline() · 528eb2cb
      Ilya Leoshkevich authored
      arch_prepare_bpf_trampoline() is used for direct attachment of eBPF
      programs to various places, bypassing kprobes. It's responsible for
      calling a number of eBPF programs before, instead of, and/or after
      whatever they are attached to.
      
      Add an s390x implementation, paying attention to the following:
      
      - Reuse the existing JIT infrastructure, where possible.
      - Like the existing JIT, prefer making multiple passes instead of
        backpatching. Currently 2 passes are enough. If a literal pool is
        introduced, this needs to be raised to 3. However, at the moment
        adding a literal pool only makes the code larger. If branch
        shortening is introduced, the number of passes needs to be
        increased even further.
      - Support both regular and ftrace calling conventions, depending on
        the trampoline flags.
      - Use expolines for indirect calls.
      - Handle the mismatch between the eBPF and the s390x ABIs.
      - Sign-extend fmod_ret return values.
      
      invoke_bpf_prog() produces about 120 bytes; it might be possible to
      slightly optimize this, but reaching 50 bytes, like on x86_64, looks
      unrealistic: just loading the cookie, __bpf_prog_enter, bpf_func, insnsi
      and __bpf_prog_exit as literals already takes at least 5 * 12 = 60
      bytes, and we can't use relative addressing for most of them.
      Therefore, lower BPF_MAX_TRAMP_LINKS on s390x.
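
      A minimal sketch of the kind of arch-specific override this implies
      (both values below are assumptions for illustration, not necessarily
      the ones in the patch):

          /* include/linux/bpf.h, sketch: s390x trampolines are larger,
           * so fewer links fit into the trampoline size budget. */
          #if defined(__s390x__)
          #define BPF_MAX_TRAMP_LINKS 27
          #else
          #define BPF_MAX_TRAMP_LINKS 38
          #endif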
      Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
      Link: https://lore.kernel.org/r/20230129190501.1624747-5-iii@linux.ibm.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • s390/bpf: Implement bpf_arch_text_poke() · f1d5df84
      Ilya Leoshkevich authored
      bpf_arch_text_poke() is used to hotpatch eBPF programs and trampolines.
      s390x has a very strict hotpatching restriction: the only thing that
      may be hotpatched is the mask of a conditional branch.
      
      Take the same approach as commit de5012b4 ("s390/ftrace: implement
      hotpatching"): create a conditional jump to a "plt", which loads the
      target address from memory and jumps to it. To change the target,
      first patch this address and then the mask.
      
      Trampolines (introduced in the next patch) respect the ftrace calling
      convention: the return address is in %r0, and %r1 is clobbered. With
      that in mind, bpf_arch_text_poke() does not differentiate between jumps
      and calls.
      
      However, there is a simple optimization for jumps (for the epilogue_ip
      case): if a jump already points to the destination, then there is no
      "plt" and we can just flip the mask.
      
      For simplicity, the "plt" template is defined in assembly, and its size
      is used to define C arrays. There doesn't seem to be a way to convey
      this size to C as a constant, so it is hardcoded and double-checked
      at runtime.
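
      A hedged sketch of what such an assembly template and its runtime size
      check could look like (symbol names and the exact instruction sequence
      are assumptions for illustration):

          /* "plt": load the return address and the jump target from
           * adjacent literals, then branch indirectly via %r1, matching
           * the ftrace convention described above. */
          asm(
                  ".pushsection .rodata\n"
                  "       .balign 8\n"
                  "bpf_plt:\n"
                  "       lgrl    %r0,bpf_plt_ret\n"
                  "       lgrl    %r1,bpf_plt_target\n"
                  "       br      %r1\n"
                  "       .balign 8\n"
                  "bpf_plt_ret:    .quad 0\n"
                  "bpf_plt_target: .quad 0\n"
                  "bpf_plt_end:\n"
                  ".popsection\n"
          );

          extern char bpf_plt[], bpf_plt_end[];
          #define BPF_PLT_SIZE 32 /* hardcoded size of the template above */

          /* Double check at runtime, e.g. on the JIT setup path, that the
           * C arrays sized with BPF_PLT_SIZE match the assembly. */
          static int check_bpf_plt_size(void)
          {
                  if (bpf_plt_end - bpf_plt != BPF_PLT_SIZE)
                          return -EINVAL;
                  return 0;
          }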
      Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
      Link: https://lore.kernel.org/r/20230129190501.1624747-4-iii@linux.ibm.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • s390/bpf: Add expoline to tail calls · bb4ef8fc
      Ilya Leoshkevich authored
      All the indirect jumps in the eBPF JIT already use expolines, except
      for the one used for tail calls.
      
      Fixes: de5cb6eb ("s390: use expoline thunks in the BPF JIT")
      Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
      Link: https://lore.kernel.org/r/20230129190501.1624747-3-iii@linux.ibm.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • selftests/bpf: Fix sk_assign on s390x · 7ce878ca
      Ilya Leoshkevich authored
      sk_assign is failing on an s390x machine running Debian "bookworm" for
      two reasons: a legacy server_map definition and an uninitialized
      addrlen in the recvfrom() call.
      
      Fix by adding a new-style server_map definition and dropping addrlen
      (recvfrom() allows NULL values for src_addr and addrlen).
      
      Since the test should support tc built without libbpf, build the
      program twice: once with the old-style definition and once with the
      new-style one, then select the right version at runtime. This could be
      done at compile time too, but that would not be cross-compilation
      friendly.
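
      For reference, a hedged sketch of both fixes (the map type and field
      types are assumptions for illustration, not copied from the patch):

          /* BPF side: new-style, BTF-defined map replacing a legacy
           * struct bpf_map_def. */
          struct {
                  __uint(type, BPF_MAP_TYPE_SOCKMAP);
                  __uint(max_entries, 1);
                  __type(key, __s32);
                  __type(value, __u64);
          } server_map SEC(".maps");

      and, on the userspace side, a recvfrom() call that passes NULL for
      src_addr and addrlen, so no uninitialized length can be handed to the
      kernel:

          n = recvfrom(fd, buf, sizeof(buf), 0, NULL, NULL);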
      Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
      Link: https://lore.kernel.org/r/20230129190501.1624747-2-iii@linux.ibm.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
  2. 28 Jan, 2023 33 commits