23 Jul, 2019 (3 commits)
    • selftests/bpf: add another gso_segs access · be69483b
      Eric Dumazet authored
      Use BPF_REG_1 as both source and destination of the gso_segs read,
      to exercise the "bpf: fix access to skb_shared_info->gso_segs" fix.
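
      The added case is essentially a copy of the existing gso_segs read
      test with BPF_REG_1 used as both destination and source of the load.
      A sketch in the usual tools/testing/selftests/bpf verifier test-case
      layout (the test name and surrounding fields here are illustrative):

      {
      	"read gso_segs from CGROUP_SKB, dst_reg == src_reg",
      	.insns = {
      	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_1,
      		    offsetof(struct __sk_buff, gso_segs)),
      	BPF_MOV64_IMM(BPF_REG_0, 0),
      	BPF_EXIT_INSN(),
      	},
      	.result = ACCEPT,
      	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
      },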
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Suggested-by: Stanislav Fomichev <sdf@google.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: fix access to skb_shared_info->gso_segs · 06a22d89
      Eric Dumazet authored
      It is possible we reach bpf_convert_ctx_access() with
      si->dst_reg == si->src_reg
      
      Therefore, we need to load BPF_REG_AX before eventually
      mangling si->src_reg.
      
      syzbot generated this x86 code:
         3:   55                      push   %rbp
         4:   48 89 e5                mov    %rsp,%rbp
         7:   48 81 ec 00 00 00 00    sub    $0x0,%rsp // Might be avoided ?
         e:   53                      push   %rbx
         f:   41 55                   push   %r13
        11:   41 56                   push   %r14
        13:   41 57                   push   %r15
        15:   6a 00                   pushq  $0x0
        17:   31 c0                   xor    %eax,%eax
        19:   48 8b bf c0 00 00 00    mov    0xc0(%rdi),%rdi
        20:   44 8b 97 bc 00 00 00    mov    0xbc(%rdi),%r10d
        27:   4c 01 d7                add    %r10,%rdi
        2a:   48 0f b7 7f 06          movzwq 0x6(%rdi),%rdi // Crash
        2f:   5b                      pop    %rbx
        30:   41 5f                   pop    %r15
        32:   41 5e                   pop    %r14
        34:   41 5d                   pop    %r13
        36:   5b                      pop    %rbx
        37:   c9                      leaveq
        38:   c3                      retq
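
      In the dump above, the mov at 0x19 overwrites %rdi (the skb pointer)
      before the mov at 0x20 tries to read skb->end through it, which is
      exactly the dst_reg == src_reg aliasing described above. In
      bpf_convert_ctx_access() terms, the fix is to read skb->end into the
      scratch register BPF_REG_AX before the first write to si->dst_reg.
      A sketch of the fixed ordering, assuming the
      NET_SKBUFF_DATA_USES_OFFSET layout (illustrative, not the verbatim
      kernel code):

      case offsetof(struct __sk_buff, gso_segs):
      	/* Load skb->end into BPF_REG_AX first: the next load clobbers
      	 * si->dst_reg, which may be the same register as si->src_reg.
      	 */
      	*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct sk_buff, end),
      			      BPF_REG_AX, si->src_reg,
      			      offsetof(struct sk_buff, end));
      	*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct sk_buff, head),
      			      si->dst_reg, si->src_reg,
      			      offsetof(struct sk_buff, head));
      	/* si->dst_reg = skb->head + skb->end = skb_shinfo(skb) */
      	*insn++ = BPF_ALU64_REG(BPF_ADD, si->dst_reg, BPF_REG_AX);
      	*insn++ = BPF_LDX_MEM(BPF_H, si->dst_reg, si->dst_reg,
      			      offsetof(struct skb_shared_info, gso_segs));
      	break;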
      
      Fixes: d9ff286a ("bpf: allow BPF programs access skb_shared_info->gso_segs field")
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Reported-by: syzbot <syzkaller@googlegroups.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: fix narrower loads on s390 · d9b8aada
      Ilya Leoshkevich authored
      The very first check in test_pkt_md_access is failing on s390, which
      happens because loading a part of a struct __sk_buff field produces
      an incorrect result.
      
      The preprocessed code of the check is:
      
      {
      	__u8 tmp = *((volatile __u8 *)&skb->len +
      		((sizeof(skb->len) - sizeof(__u8)) / sizeof(__u8)));
      	if (tmp != ((*(volatile __u32 *)&skb->len) & 0xFF)) return 2;
      };
      
      clang generates the following code for it:
      
            0:	71 21 00 03 00 00 00 00	r2 = *(u8 *)(r1 + 3)
            1:	61 31 00 00 00 00 00 00	r3 = *(u32 *)(r1 + 0)
            2:	57 30 00 00 00 00 00 ff	r3 &= 255
            3:	5d 23 00 1d 00 00 00 00	if r2 != r3 goto +29 <LBB0_10>
      
      Finally, the verifier transforms it to:
      
        0: (61) r2 = *(u32 *)(r1 +104)
        1: (bc) w2 = w2
        2: (74) w2 >>= 24
        3: (bc) w2 = w2
        4: (54) w2 &= 255
        5: (bc) w2 = w2
      
      The problem is that when the verifier emits the code to replace a
      partial load of a struct __sk_buff field (*(u8 *)(r1 + 3)) with a
      full load of the underlying struct sk_buff field (*(u32 *)(r1 + 104)),
      an optional shift and a bitwise AND, it assumes that the machine is
      little endian and therefore picks the wrong shift count on big-endian
      machines.
      
      Adjust shift count calculation to account for endianness.
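
      A sketch of the adjusted calculation (the helper name and its exact
      placement in kernel/bpf/verifier.c are illustrative):

      /* Shift (in bits) applied after widening a narrow context load:
       * 'off' is the narrow load offset, 'size' its width in bytes and
       * 'size_default' the width of the underlying full-size field.
       */
      static u8 narrow_load_shift(u32 off, u32 size, u32 size_default)
      {
      	u8 load_off = off & (size_default - 1); /* offset within the field */

      #ifdef __LITTLE_ENDIAN
      	/* LE: the byte at offset N holds bits [8N, 8N+8). */
      	return load_off * 8;
      #else
      	/* BE: the byte at offset N is the (size_default - 1 - N)-th least
      	 * significant one, so count the shift from the other end.
      	 */
      	return (size_default - (load_off + size)) * 8;
      #endif
      }

      For the failing check above (off = 3, size = 1, size_default = 4) this
      yields a shift of 24 on little endian but 0 on big endian, so on s390
      the narrow load now returns the byte the program actually asked for.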
      
      Fixes: 31fd8581 ("bpf: permits narrower load from bpf program context fields")
      Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>