Commit ddaa935d authored by Jakub Kicinski

Merge tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf

Daniel Borkmann says:

====================
pull-request: bpf 2023-08-31

We've added 15 non-merge commits during the last 3 day(s) which contain
a total of 17 files changed, 468 insertions(+), 97 deletions(-).

The main changes are:

1) BPF selftest fixes: one flake and one related to clang18 testing,
   from Yonghong Song.

2) Fix a d_path BPF selftest failure after fast-forward from Linus'
   tree, from Jiri Olsa.

3) Fix a preempt_rt splat in sockmap when using raw_spin_lock_t,
   from John Fastabend.

4) Fix an xsk_diag_fill use-after-free race during socket cleanup,
   from Magnus Karlsson.

5) Fix xsk_build_skb to address a buggy dereference of an ERR_PTR(),
   from Tirthendu Sarkar.

6) Fix a bpftool build warning when compiled with -Wtype-limits,
   from Yafang Shao.

7) Several misc fixes and cleanups in standardization docs,
   from David Vernet.

8) Fix BPF selftest install to consider no_alu32/cpuv4/bpf-gcc flavors,
   from Björn Töpel.

9) Annotate a data race in bpf_long_memcpy for KCSAN, from Daniel Borkmann.

10) Extend documentation with a description for CO-RE relocations,
    from Eduard Zingerman.

11) Fix several invalid escape sequence warnings in bpf_doc.py script,
    from Vishal Chourasia.

12) Fix the instruction set doc wrt offset of BPF-to-BPF call,
    from Will Hawkins.

* tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf:
  selftests/bpf: Include build flavors for install target
  bpf: Annotate bpf_long_memcpy with data_race
  selftests/bpf: Fix d_path test
  bpf, docs: Fix invalid escape sequence warnings in bpf_doc.py
  xsk: Fix xsk_diag use-after-free error during socket cleanup
  bpf, docs: s/eBPF/BPF in standards documents
  bpf, docs: Add abi.rst document to standardization subdirectory
  bpf, docs: Move linux-notes.rst to root bpf docs tree
  bpf, sockmap: Fix preempt_rt splat when using raw_spin_lock_t
  docs/bpf: Add description for CO-RE relocations
  bpf, docs: Correct source of offset for program-local call
  selftests/bpf: Fix flaky cgroup_iter_sleepable subtest
  xsk: Fix xsk_build_skb() error: 'skb' dereferencing possible ERR_PTR()
  bpftool: Fix build warnings with -Wtype-limits
  bpf: Prevent inlining of bpf_fentry_test7()
====================

Link: https://lore.kernel.org/r/20230831210019.14417-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
parents 8aae7625 be8e754c
......@@ -726,8 +726,8 @@ same as the one described in :ref:`BTF_Type_String`.
4.2 .BTF.ext section
--------------------
The .BTF.ext section encodes func_info and line_info which needs loader
manipulation before loading into the kernel.
The .BTF.ext section encodes func_info, line_info and CO-RE relocations
which need loader manipulation before loading into the kernel.
The specification for .BTF.ext section is defined at ``tools/lib/bpf/btf.h``
and ``tools/lib/bpf/btf.c``.
......@@ -745,15 +745,20 @@ The current header of .BTF.ext section::
__u32 func_info_len;
__u32 line_info_off;
__u32 line_info_len;
/* optional part of .BTF.ext header */
__u32 core_relo_off;
__u32 core_relo_len;
};
It is very similar to the .BTF section. Instead of the type/string sections, it
contains func_info and line_info section. See :ref:`BPF_Prog_Load` for details
about func_info and line_info record format.
contains func_info, line_info and core_relo sub-sections.
See :ref:`BPF_Prog_Load` for details about the func_info and line_info
record formats.
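Since the core_relo fields are an optional part of the header, a loader
is expected to check ``hdr_len`` before accessing them. A minimal sketch
(the helper name is illustrative; ``offsetofend()`` as defined in the
kernel's ``include/linux/stddef.h``)::

    static bool btf_ext_has_core_relo(const struct btf_ext_header *hdr)
    {
            /* The optional fields exist only if hdr_len covers them. */
            return hdr->hdr_len >= offsetofend(struct btf_ext_header,
                                               core_relo_len);
    }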
The func_info is organized as below.::
func_info_rec_size
func_info_rec_size /* __u32 value */
btf_ext_info_sec for section #1 /* func_info for section #1 */
btf_ext_info_sec for section #2 /* func_info for section #2 */
...
......@@ -773,7 +778,7 @@ Here, num_info must be greater than 0.
The line_info is organized as below.::
line_info_rec_size
line_info_rec_size /* __u32 value */
btf_ext_info_sec for section #1 /* line_info for section #1 */
btf_ext_info_sec for section #2 /* line_info for section #2 */
...
......@@ -787,6 +792,20 @@ kernel API, the ``insn_off`` is the instruction offset in the unit of ``struct
bpf_insn``. For ELF API, the ``insn_off`` is the byte offset from the
beginning of section (``btf_ext_info_sec->sec_name_off``).
The core_relo is organized as below.::
core_relo_rec_size /* __u32 value */
btf_ext_info_sec for section #1 /* core_relo for section #1 */
btf_ext_info_sec for section #2 /* core_relo for section #2 */
``core_relo_rec_size`` specifies the size of the ``bpf_core_relo``
structure when .BTF.ext is generated. All ``bpf_core_relo`` structures
within a single ``btf_ext_info_sec`` describe relocations applied to
the section named by ``btf_ext_info_sec->sec_name_off``.
See :ref:`Documentation/bpf/llvm_reloc <btf-co-re-relocations>`
for more information on CO-RE relocations.
4.2 .BTF_ids section
--------------------
......
......@@ -29,6 +29,7 @@ that goes into great technical depth about the BPF Architecture.
bpf_licensing
test_debug
clang-notes
linux-notes
other
redirect
......
......@@ -240,3 +240,307 @@ The .BTF/.BTF.ext sections have R_BPF_64_NODYLD32 relocations::
Offset Info Type Symbol's Value Symbol's Name
000000000000002c 0000000200000004 R_BPF_64_NODYLD32 0000000000000000 .text
0000000000000040 0000000200000004 R_BPF_64_NODYLD32 0000000000000000 .text
.. _btf-co-re-relocations:
=================
CO-RE Relocations
=================
From the object file point of view, the CO-RE mechanism is implemented
as a set of CO-RE-specific relocation records. These records are not
related to ELF relocations and are encoded in the .BTF.ext section.
See :ref:`Documentation/bpf/btf <BTF_Ext_Section>` for more
information on the .BTF.ext structure.
CO-RE relocations are applied to BPF instructions at load time to update
the immediate or offset fields of the instruction with information
relevant to the target kernel.
The field to patch is selected based on the instruction class:
* For BPF_ALU, BPF_ALU64 and BPF_LD, the `immediate` field is patched;
* For BPF_LDX, BPF_STX and BPF_ST, the `offset` field is patched;
* BPF_JMP and BPF_JMP32 instructions **should not** be patched.
Relocation kinds
================
There are several kinds of CO-RE relocations, which can be split into
three groups:
* Field-based - patch the instruction with field-related information, e.g.
  change the offset field of a BPF_LDX instruction to reflect the offset
  of a specific structure field in the target kernel.
* Type-based - patch the instruction with type-related information, e.g.
  change the immediate field of a BPF_ALU move instruction to 0 or 1 to
  reflect whether a specific type is present in the target kernel.
* Enum-based - patch the instruction with enum-related information, e.g.
  change the immediate field of a BPF_LD_IMM64 instruction to reflect the
  value of a specific enum literal in the target kernel.
The complete list of relocation kinds is represented by the following enum:
.. code-block:: c
enum bpf_core_relo_kind {
BPF_CORE_FIELD_BYTE_OFFSET = 0, /* field byte offset */
BPF_CORE_FIELD_BYTE_SIZE = 1, /* field size in bytes */
BPF_CORE_FIELD_EXISTS = 2, /* field existence in target kernel */
BPF_CORE_FIELD_SIGNED = 3, /* field signedness (0 - unsigned, 1 - signed) */
BPF_CORE_FIELD_LSHIFT_U64 = 4, /* bitfield-specific left bitshift */
BPF_CORE_FIELD_RSHIFT_U64 = 5, /* bitfield-specific right bitshift */
BPF_CORE_TYPE_ID_LOCAL = 6, /* type ID in local BPF object */
BPF_CORE_TYPE_ID_TARGET = 7, /* type ID in target kernel */
BPF_CORE_TYPE_EXISTS = 8, /* type existence in target kernel */
BPF_CORE_TYPE_SIZE = 9, /* type size in bytes */
BPF_CORE_ENUMVAL_EXISTS = 10, /* enum value existence in target kernel */
BPF_CORE_ENUMVAL_VALUE = 11, /* enum value integer value */
BPF_CORE_TYPE_MATCHES = 12, /* type match in target kernel */
};
Notes:
* ``BPF_CORE_FIELD_LSHIFT_U64`` and ``BPF_CORE_FIELD_RSHIFT_U64`` are
  meant to be used together to read bitfield values, using the following
  algorithm (a usage sketch follows these notes):
.. code-block:: c
// To read bitfield ``f`` from ``struct s``
is_signed = relo(s->f, BPF_CORE_FIELD_SIGNED)
off = relo(s->f, BPF_CORE_FIELD_BYTE_OFFSET)
sz = relo(s->f, BPF_CORE_FIELD_BYTE_SIZE)
l = relo(s->f, BPF_CORE_FIELD_LSHIFT_U64)
r = relo(s->f, BPF_CORE_FIELD_RSHIFT_U64)
// define ``v`` as signed or unsigned integer of size ``sz``
v = *({s|u}<sz> *)((void *)s + off)
v <<= l
v >>= r
* ``BPF_CORE_TYPE_MATCHES`` queries the matching relation, defined as
  follows:
  * for integers: types match if size and signedness match;
  * for arrays & pointers: target types are recursively matched;
  * for structs & unions:
    * local members need to exist in the target with the same name;
    * for each member we recursively check the match unless it is already
      behind a pointer, in which case we only check matching names and
      compatible kind;
  * for enums:
    * local variants have to have a match in the target by symbolic name
      (but not numeric value);
    * size has to match (but an enum may match an enum64 and vice versa);
  * for function pointers:
    * the number and position of arguments in the local type have to match
      the target;
    * for each argument and the return value we recursively check the match.
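As a usage sketch for the bitfield algorithm above: libbpf wraps this
relocation sequence in the ``BPF_CORE_READ_BITFIELD()`` macro from
``bpf_core_read.h`` (``struct s`` and field ``f`` are hypothetical):

.. code-block:: c

    #include <bpf/bpf_core_read.h>

    /* Read bitfield f via direct memory access; the macro emits the
     * byte_offset/byte_size/lshift_u64/rshift_u64 relocations described
     * above.
     */
    static __u64 read_f(struct s *obj)
    {
            return BPF_CORE_READ_BITFIELD(obj, f);
    }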
CO-RE Relocation Record
=======================
A relocation record is encoded as the following structure:
.. code-block:: c
struct bpf_core_relo {
__u32 insn_off;
__u32 type_id;
__u32 access_str_off;
enum bpf_core_relo_kind kind;
};
* ``insn_off`` - instruction offset (in bytes) within a code section
associated with this relocation;
* ``type_id`` - BTF type ID of the "root" (containing) entity of a
relocatable type or field;
* ``access_str_off`` - offset into the corresponding .BTF string section.
String interpretation depends on the specific relocation kind:
* for field-based relocations, the string encodes an accessed field using
a sequence of field and array indices, separated by colon (:). It is
conceptually very close to LLVM's `getelementptr <GEP_>`_ instruction's
arguments for identifying the offset to a field. For example, consider
the following C code:
.. code-block:: c
struct sample {
int a;
int b;
struct { int c[10]; };
} __attribute__((preserve_access_index));
struct sample *s;
* Access to ``s[0].a`` would be encoded as ``0:0``:
* ``0``: first element of ``s`` (as if ``s`` is an array);
* ``0``: index of field ``a`` in ``struct sample``.
* Access to ``s->a`` would be encoded as ``0:0`` as well.
* Access to ``s->b`` would be encoded as ``0:1``:
* ``0``: first element of ``s``;
* ``1``: index of field ``b`` in ``struct sample``.
* Access to ``s[1].c[5]`` would be encoded as ``1:2:0:5``:
* ``1``: second element of ``s``;
* ``2``: index of anonymous structure field in ``struct sample``;
* ``0``: index of field ``c`` in anonymous structure;
* ``5``: access to array element #5.
* for type-based relocations, the string is expected to be just "0";
* for enum value-based relocations, the string contains the index of the
enum value within its enum type;
* ``kind`` - one of ``enum bpf_core_relo_kind``.
.. _GEP: https://llvm.org/docs/LangRef.html#getelementptr-instruction
.. _btf_co_re_relocation_examples:
CO-RE Relocation Examples
=========================
For the following C code:
.. code-block:: c
struct foo {
int a;
int b;
unsigned c:15;
} __attribute__((preserve_access_index));
enum bar { U, V };
With the following BTF definitions:
.. code-block::
...
[2] STRUCT 'foo' size=8 vlen=2
'a' type_id=3 bits_offset=0
'b' type_id=3 bits_offset=32
'c' type_id=4 bits_offset=64 bitfield_size=15
[3] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED
[4] INT 'unsigned int' size=4 bits_offset=0 nr_bits=32 encoding=(none)
...
[16] ENUM 'bar' encoding=UNSIGNED size=4 vlen=2
'U' val=0
'V' val=1
Field offset relocations are generated automatically when
``__attribute__((preserve_access_index))`` is used, for example:
.. code-block:: c
void alpha(struct foo *s, volatile unsigned long *g) {
*g = s->a;
s->a = 1;
}
00 <alpha>:
0: r3 = *(s32 *)(r1 + 0x0)
00: CO-RE <byte_off> [2] struct foo::a (0:0)
1: *(u64 *)(r2 + 0x0) = r3
2: *(u32 *)(r1 + 0x0) = 0x1
10: CO-RE <byte_off> [2] struct foo::a (0:0)
3: exit
All relocation kinds can be requested explicitly via compiler built-in
functions. E.g. field-based relocations:
.. code-block:: c
void bravo(struct foo *s, volatile unsigned long *g) {
*g = __builtin_preserve_field_info(s->b, 0 /* field byte offset */);
*g = __builtin_preserve_field_info(s->b, 1 /* field byte size */);
*g = __builtin_preserve_field_info(s->b, 2 /* field existence */);
*g = __builtin_preserve_field_info(s->b, 3 /* field signedness */);
*g = __builtin_preserve_field_info(s->c, 4 /* bitfield left shift */);
*g = __builtin_preserve_field_info(s->c, 5 /* bitfield right shift */);
}
20 <bravo>:
4: r1 = 0x4
20: CO-RE <byte_off> [2] struct foo::b (0:1)
5: *(u64 *)(r2 + 0x0) = r1
6: r1 = 0x4
30: CO-RE <byte_sz> [2] struct foo::b (0:1)
7: *(u64 *)(r2 + 0x0) = r1
8: r1 = 0x1
40: CO-RE <field_exists> [2] struct foo::b (0:1)
9: *(u64 *)(r2 + 0x0) = r1
10: r1 = 0x1
50: CO-RE <signed> [2] struct foo::b (0:1)
11: *(u64 *)(r2 + 0x0) = r1
12: r1 = 0x31
60: CO-RE <lshift_u64> [2] struct foo::c (0:2)
13: *(u64 *)(r2 + 0x0) = r1
14: r1 = 0x31
70: CO-RE <rshift_u64> [2] struct foo::c (0:2)
15: *(u64 *)(r2 + 0x0) = r1
16: exit
Type-based relocations:
.. code-block:: c
void charlie(struct foo *s, volatile unsigned long *g) {
*g = __builtin_preserve_type_info(*s, 0 /* type existence */);
*g = __builtin_preserve_type_info(*s, 1 /* type size */);
*g = __builtin_preserve_type_info(*s, 2 /* type matches */);
*g = __builtin_btf_type_id(*s, 0 /* type id in this object file */);
*g = __builtin_btf_type_id(*s, 1 /* type id in target kernel */);
}
88 <charlie>:
17: r1 = 0x1
88: CO-RE <type_exists> [2] struct foo
18: *(u64 *)(r2 + 0x0) = r1
19: r1 = 0xc
98: CO-RE <type_size> [2] struct foo
20: *(u64 *)(r2 + 0x0) = r1
21: r1 = 0x1
a8: CO-RE <type_matches> [2] struct foo
22: *(u64 *)(r2 + 0x0) = r1
23: r1 = 0x2 ll
b8: CO-RE <local_type_id> [2] struct foo
25: *(u64 *)(r2 + 0x0) = r1
26: r1 = 0x2 ll
d0: CO-RE <target_type_id> [2] struct foo
28: *(u64 *)(r2 + 0x0) = r1
29: exit
Enum-based relocations:
.. code-block:: c
void delta(struct foo *s, volatile unsigned long *g) {
*g = __builtin_preserve_enum_value(*(enum bar *)U, 0 /* enum literal existence */);
*g = __builtin_preserve_enum_value(*(enum bar *)V, 1 /* enum literal value */);
}
f0 <delta>:
30: r1 = 0x1 ll
f0: CO-RE <enumval_exists> [16] enum bar::U = 0
32: *(u64 *)(r2 + 0x0) = r1
33: r1 = 0x1 ll
108: CO-RE <enumval_value> [16] enum bar::V = 1
35: *(u64 *)(r2 + 0x0) = r1
36: exit
.. contents::
.. sectnum::
===================================================
BPF ABI Recommended Conventions and Guidelines v1.0
===================================================
This is version 1.0 of an informational document containing recommended
conventions and guidelines for producing portable BPF program binaries.
Registers and calling convention
================================
BPF has 10 general purpose registers and a read-only frame pointer register,
all of which are 64-bits wide.
The BPF calling convention is defined as:
* R0: return value from function calls, and exit value for BPF programs
* R1 - R5: arguments for function calls
* R6 - R9: callee saved registers that function calls will preserve
* R10: read-only frame pointer to access stack
R0 - R5 are scratch registers and BPF programs need to spill/fill them if
necessary across calls.
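For example, a value needed after a call must be spilled to the stack
and filled back afterwards; an illustrative sketch, in the pseudo-assembly
used elsewhere in these documents (the helper number is arbitrary)::

    *(u64 *)(r10 - 8) = r1    // spill the scratch register r1
    call 1                    // the call may clobber R0 - R5
    r1 = *(u64 *)(r10 - 8)    // fill r1 back from the stack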
......@@ -12,7 +12,7 @@ for the working group charter, documents, and more.
:maxdepth: 1
instruction-set
linux-notes
abi
.. Links:
.. _IETF BPF Working Group: https://datatracker.ietf.org/wg/bpf/about/
.. contents::
.. sectnum::
========================================
eBPF Instruction Set Specification, v1.0
========================================
=======================================
BPF Instruction Set Specification, v1.0
=======================================
This document specifies version 1.0 of the eBPF instruction set.
This document specifies version 1.0 of the BPF instruction set.
Documentation conventions
=========================
......@@ -97,26 +97,10 @@ Definitions
A: 10000110
B: 11111111 10000110
Registers and calling convention
================================
eBPF has 10 general purpose registers and a read-only frame pointer register,
all of which are 64-bits wide.
The eBPF calling convention is defined as:
* R0: return value from function calls, and exit value for eBPF programs
* R1 - R5: arguments for function calls
* R6 - R9: callee saved registers that function calls will preserve
* R10: read-only frame pointer to access stack
R0 - R5 are scratch registers and eBPF programs needs to spill/fill them if
necessary across calls.
Instruction encoding
====================
eBPF has two instruction encodings:
BPF has two instruction encodings:
* the basic instruction encoding, which uses 64 bits to encode an instruction
* the wide instruction encoding, which appends a second 64-bit immediate (i.e.,
......@@ -260,7 +244,7 @@ BPF_END 0xd0 0 byte swap operations (see `Byte swap instructions`_ b
========= ===== ======= ==========================================================
Underflow and overflow are allowed during arithmetic operations, meaning
the 64-bit or 32-bit value will wrap. If eBPF program execution would
the 64-bit or 32-bit value will wrap. If BPF program execution would
result in division by zero, the destination register is instead set to zero.
If execution would result in modulo by zero, for ``BPF_ALU64`` the value of
the destination register is unchanged whereas for ``BPF_ALU`` the upper
......@@ -373,7 +357,7 @@ BPF_JNE 0x5 any PC += offset if dst != src
BPF_JSGT 0x6 any PC += offset if dst > src signed
BPF_JSGE 0x7 any PC += offset if dst >= src signed
BPF_CALL 0x8 0x0 call helper function by address see `Helper functions`_
BPF_CALL 0x8 0x1 call PC += offset see `Program-local functions`_
BPF_CALL 0x8 0x1 call PC += imm see `Program-local functions`_
BPF_CALL 0x8 0x2 call helper function by BTF ID see `Helper functions`_
BPF_EXIT 0x9 0x0 return BPF_JMP only
BPF_JLT 0xa any PC += offset if dst < src unsigned
......@@ -382,7 +366,7 @@ BPF_JSLT 0xc any PC += offset if dst < src signed
BPF_JSLE 0xd any PC += offset if dst <= src signed
======== ===== === =========================================== =========================================
The eBPF program needs to store the return value into register R0 before doing a
The BPF program needs to store the return value into register R0 before doing a
``BPF_EXIT``.
Example:
......@@ -424,8 +408,8 @@ Program-local functions
~~~~~~~~~~~~~~~~~~~~~~~
Program-local functions are functions exposed by the same BPF program as the
caller, and are referenced by offset from the call instruction, similar to
``BPF_JA``. A ``BPF_EXIT`` within the program-local function will return to
the caller.
``BPF_JA``. The offset is encoded in the imm field of the call instruction.
A ``BPF_EXIT`` within the program-local function will return to the caller.
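An illustrative sketch (not normative): with the offset in the imm field
counted from the instruction following the call, the program below calls
a program-local function that sets the return value to 0::

    0: call 1                 // enter the function at instruction 2
    1: exit                   // return R0 to the BPF program's caller
    2: r0 = 0                 // program-local function body
    3: exit                   // return to instruction 1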
Load and store instructions
===========================
......@@ -502,9 +486,9 @@ Atomic operations
Atomic operations are operations that operate on memory and cannot be
interrupted or corrupted by other access to the same memory region
by other eBPF programs or means outside of this specification.
by other BPF programs or means outside of this specification.
All atomic operations supported by eBPF are encoded as store operations
All atomic operations supported by BPF are encoded as store operations
that use the ``BPF_ATOMIC`` mode modifier as follows:
* ``BPF_ATOMIC | BPF_W | BPF_STX`` for 32-bit operations
......@@ -594,7 +578,7 @@ where
Maps
~~~~
Maps are shared memory regions accessible by eBPF programs on some platforms.
Maps are shared memory regions accessible by BPF programs on some platforms.
A map can have various semantics as defined in a separate document, and may or
may not have a single contiguous memory region, but the 'map_val(map)' is
currently only defined for maps that do have a single contiguous memory region.
......@@ -616,6 +600,6 @@ identified by the given id.
Legacy BPF Packet access instructions
-------------------------------------
eBPF previously introduced special instructions for access to packet data that were
BPF previously introduced special instructions for access to packet data that were
carried over from classic BPF. However, these instructions are
deprecated and should no longer be used.
......@@ -438,7 +438,7 @@ static inline void bpf_long_memcpy(void *dst, const void *src, u32 size)
size /= sizeof(long);
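	/* Concurrent lockless updates of the copied words are expected
	 * here; data_race() marks them as intentional for KCSAN.
	 */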
while (size--)
*ldst++ = *lsrc++;
data_race(*ldst++ = *lsrc++);
}
/* copy everything but bpf_spin_lock, bpf_timer, and kptrs. There could be one of each. */
......
......@@ -543,6 +543,7 @@ struct bpf_fentry_test_t {
int noinline bpf_fentry_test7(struct bpf_fentry_test_t *arg)
{
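	/* The empty asm acts as a barrier so the compiler cannot inline
	 * or fold this function away; the fentry test needs it to remain
	 * a distinct, attachable symbol.
	 */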
asm volatile ("");
return (long)arg;
}
......
......@@ -18,7 +18,7 @@ struct bpf_stab {
struct bpf_map map;
struct sock **sks;
struct sk_psock_progs progs;
raw_spinlock_t lock;
spinlock_t lock;
};
#define SOCK_CREATE_FLAG_MASK \
......@@ -44,7 +44,7 @@ static struct bpf_map *sock_map_alloc(union bpf_attr *attr)
return ERR_PTR(-ENOMEM);
bpf_map_init_from_attr(&stab->map, attr);
raw_spin_lock_init(&stab->lock);
spin_lock_init(&stab->lock);
stab->sks = bpf_map_area_alloc((u64) stab->map.max_entries *
sizeof(struct sock *),
......@@ -411,7 +411,7 @@ static int __sock_map_delete(struct bpf_stab *stab, struct sock *sk_test,
struct sock *sk;
int err = 0;
raw_spin_lock_bh(&stab->lock);
spin_lock_bh(&stab->lock);
sk = *psk;
if (!sk_test || sk_test == sk)
sk = xchg(psk, NULL);
......@@ -421,7 +421,7 @@ static int __sock_map_delete(struct bpf_stab *stab, struct sock *sk_test,
else
err = -EINVAL;
raw_spin_unlock_bh(&stab->lock);
spin_unlock_bh(&stab->lock);
return err;
}
......@@ -487,7 +487,7 @@ static int sock_map_update_common(struct bpf_map *map, u32 idx,
psock = sk_psock(sk);
WARN_ON_ONCE(!psock);
raw_spin_lock_bh(&stab->lock);
spin_lock_bh(&stab->lock);
osk = stab->sks[idx];
if (osk && flags == BPF_NOEXIST) {
ret = -EEXIST;
......@@ -501,10 +501,10 @@ static int sock_map_update_common(struct bpf_map *map, u32 idx,
stab->sks[idx] = sk;
if (osk)
sock_map_unref(osk, &stab->sks[idx]);
raw_spin_unlock_bh(&stab->lock);
spin_unlock_bh(&stab->lock);
return 0;
out_unlock:
raw_spin_unlock_bh(&stab->lock);
spin_unlock_bh(&stab->lock);
if (psock)
sk_psock_put(sk, psock);
out_free:
......@@ -835,7 +835,7 @@ struct bpf_shtab_elem {
struct bpf_shtab_bucket {
struct hlist_head head;
raw_spinlock_t lock;
spinlock_t lock;
};
struct bpf_shtab {
......@@ -910,7 +910,7 @@ static void sock_hash_delete_from_link(struct bpf_map *map, struct sock *sk,
* is okay since it's going away only after RCU grace period.
* However, we need to check whether it's still present.
*/
raw_spin_lock_bh(&bucket->lock);
spin_lock_bh(&bucket->lock);
elem_probe = sock_hash_lookup_elem_raw(&bucket->head, elem->hash,
elem->key, map->key_size);
if (elem_probe && elem_probe == elem) {
......@@ -918,7 +918,7 @@ static void sock_hash_delete_from_link(struct bpf_map *map, struct sock *sk,
sock_map_unref(elem->sk, elem);
sock_hash_free_elem(htab, elem);
}
raw_spin_unlock_bh(&bucket->lock);
spin_unlock_bh(&bucket->lock);
}
static long sock_hash_delete_elem(struct bpf_map *map, void *key)
......@@ -932,7 +932,7 @@ static long sock_hash_delete_elem(struct bpf_map *map, void *key)
hash = sock_hash_bucket_hash(key, key_size);
bucket = sock_hash_select_bucket(htab, hash);
raw_spin_lock_bh(&bucket->lock);
spin_lock_bh(&bucket->lock);
elem = sock_hash_lookup_elem_raw(&bucket->head, hash, key, key_size);
if (elem) {
hlist_del_rcu(&elem->node);
......@@ -940,7 +940,7 @@ static long sock_hash_delete_elem(struct bpf_map *map, void *key)
sock_hash_free_elem(htab, elem);
ret = 0;
}
raw_spin_unlock_bh(&bucket->lock);
spin_unlock_bh(&bucket->lock);
return ret;
}
......@@ -1000,7 +1000,7 @@ static int sock_hash_update_common(struct bpf_map *map, void *key,
hash = sock_hash_bucket_hash(key, key_size);
bucket = sock_hash_select_bucket(htab, hash);
raw_spin_lock_bh(&bucket->lock);
spin_lock_bh(&bucket->lock);
elem = sock_hash_lookup_elem_raw(&bucket->head, hash, key, key_size);
if (elem && flags == BPF_NOEXIST) {
ret = -EEXIST;
......@@ -1026,10 +1026,10 @@ static int sock_hash_update_common(struct bpf_map *map, void *key,
sock_map_unref(elem->sk, elem);
sock_hash_free_elem(htab, elem);
}
raw_spin_unlock_bh(&bucket->lock);
spin_unlock_bh(&bucket->lock);
return 0;
out_unlock:
raw_spin_unlock_bh(&bucket->lock);
spin_unlock_bh(&bucket->lock);
sk_psock_put(sk, psock);
out_free:
sk_psock_free_link(link);
......@@ -1115,7 +1115,7 @@ static struct bpf_map *sock_hash_alloc(union bpf_attr *attr)
for (i = 0; i < htab->buckets_num; i++) {
INIT_HLIST_HEAD(&htab->buckets[i].head);
raw_spin_lock_init(&htab->buckets[i].lock);
spin_lock_init(&htab->buckets[i].lock);
}
return &htab->map;
......@@ -1147,11 +1147,11 @@ static void sock_hash_free(struct bpf_map *map)
* exists, psock exists and holds a ref to socket. That
* lets us grab a socket ref too.
*/
raw_spin_lock_bh(&bucket->lock);
spin_lock_bh(&bucket->lock);
hlist_for_each_entry(elem, &bucket->head, node)
sock_hold(elem->sk);
hlist_move_list(&bucket->head, &unlink_list);
raw_spin_unlock_bh(&bucket->lock);
spin_unlock_bh(&bucket->lock);
/* Process removed entries out of atomic context to
* block for socket lock before deleting the psock's
......
......@@ -602,7 +602,7 @@ static struct sk_buff *xsk_build_skb_zerocopy(struct xdp_sock *xs,
for (copied = 0, i = skb_shinfo(skb)->nr_frags; copied < len; i++) {
if (unlikely(i >= MAX_SKB_FRAGS))
return ERR_PTR(-EFAULT);
return ERR_PTR(-EOVERFLOW);
page = pool->umem->pgs[addr >> PAGE_SHIFT];
get_page(page);
......@@ -655,15 +655,17 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
skb_put(skb, len);
err = skb_store_bits(skb, 0, buffer, len);
if (unlikely(err))
if (unlikely(err)) {
kfree_skb(skb);
goto free_err;
}
} else {
int nr_frags = skb_shinfo(skb)->nr_frags;
struct page *page;
u8 *vaddr;
if (unlikely(nr_frags == (MAX_SKB_FRAGS - 1) && xp_mb_desc(desc))) {
err = -EFAULT;
err = -EOVERFLOW;
goto free_err;
}
......@@ -690,12 +692,14 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
return skb;
free_err:
if (err == -EAGAIN) {
xsk_cq_cancel_locked(xs, 1);
} else {
xsk_set_destructor_arg(skb);
xsk_drop_skb(skb);
if (err == -EOVERFLOW) {
/* Drop the packet */
xsk_set_destructor_arg(xs->skb);
xsk_drop_skb(xs->skb);
xskq_cons_release(xs->tx);
} else {
/* Let application retry */
xsk_cq_cancel_locked(xs, 1);
}
return ERR_PTR(err);
......@@ -738,7 +742,7 @@ static int __xsk_generic_xmit(struct sock *sk)
skb = xsk_build_skb(xs, &desc);
if (IS_ERR(skb)) {
err = PTR_ERR(skb);
if (err == -EAGAIN)
if (err != -EOVERFLOW)
goto out;
err = 0;
continue;
......
......@@ -111,6 +111,9 @@ static int xsk_diag_fill(struct sock *sk, struct sk_buff *nlskb,
sock_diag_save_cookie(sk, msg->xdiag_cookie);
mutex_lock(&xs->mutex);
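	/* A socket still in XSK_UNBOUND state may be mid-cleanup; do not
	 * dump its state in that case.
	 */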
if (READ_ONCE(xs->state) == XSK_UNBOUND)
goto out_nlmsg_trim;
if ((req->xdiag_show & XDP_SHOW_INFO) && xsk_diag_put_info(xs, nlskb))
goto out_nlmsg_trim;
......
......@@ -59,9 +59,9 @@ class Helper(APIElement):
Break down helper function protocol into smaller chunks: return type,
name, distinct arguments.
"""
arg_re = re.compile('((\w+ )*?(\w+|...))( (\**)(\w+))?$')
arg_re = re.compile(r'((\w+ )*?(\w+|...))( (\**)(\w+))?$')
res = {}
proto_re = re.compile('(.+) (\**)(\w+)\(((([^,]+)(, )?){1,5})\)$')
proto_re = re.compile(r'(.+) (\**)(\w+)\(((([^,]+)(, )?){1,5})\)$')
capture = proto_re.match(self.proto)
res['ret_type'] = capture.group(1)
......@@ -114,11 +114,11 @@ class HeaderParser(object):
return Helper(proto=proto, desc=desc, ret=ret)
def parse_symbol(self):
p = re.compile(' \* ?(BPF\w+)$')
p = re.compile(r' \* ?(BPF\w+)$')
capture = p.match(self.line)
if not capture:
raise NoSyscallCommandFound
end_re = re.compile(' \* ?NOTES$')
end_re = re.compile(r' \* ?NOTES$')
end = end_re.match(self.line)
if end:
raise NoSyscallCommandFound
......@@ -133,7 +133,7 @@ class HeaderParser(object):
# - Same as above, with "const" and/or "struct" in front of type
# - "..." (undefined number of arguments, for bpf_trace_printk())
# There is at least one term ("void"), and at most five arguments.
p = re.compile(' \* ?((.+) \**\w+\((((const )?(struct )?(\w+|\.\.\.)( \**\w+)?)(, )?){1,5}\))$')
p = re.compile(r' \* ?((.+) \**\w+\((((const )?(struct )?(\w+|\.\.\.)( \**\w+)?)(, )?){1,5}\))$')
capture = p.match(self.line)
if not capture:
raise NoHelperFound
......@@ -141,7 +141,7 @@ class HeaderParser(object):
return capture.group(1)
def parse_desc(self, proto):
p = re.compile(' \* ?(?:\t| {5,8})Description$')
p = re.compile(r' \* ?(?:\t| {5,8})Description$')
capture = p.match(self.line)
if not capture:
raise Exception("No description section found for " + proto)
......@@ -154,7 +154,7 @@ class HeaderParser(object):
if self.line == ' *\n':
desc += '\n'
else:
p = re.compile(' \* ?(?:\t| {5,8})(?:\t| {8})(.*)')
p = re.compile(r' \* ?(?:\t| {5,8})(?:\t| {8})(.*)')
capture = p.match(self.line)
if capture:
desc_present = True
......@@ -167,7 +167,7 @@ class HeaderParser(object):
return desc
def parse_ret(self, proto):
p = re.compile(' \* ?(?:\t| {5,8})Return$')
p = re.compile(r' \* ?(?:\t| {5,8})Return$')
capture = p.match(self.line)
if not capture:
raise Exception("No return section found for " + proto)
......@@ -180,7 +180,7 @@ class HeaderParser(object):
if self.line == ' *\n':
ret += '\n'
else:
p = re.compile(' \* ?(?:\t| {5,8})(?:\t| {8})(.*)')
p = re.compile(r' \* ?(?:\t| {5,8})(?:\t| {8})(.*)')
capture = p.match(self.line)
if capture:
ret_present = True
......@@ -219,12 +219,12 @@ class HeaderParser(object):
self.seek_to('enum bpf_cmd {',
'Could not find start of bpf_cmd enum', 0)
# Searches for either one or more BPF\w+ enums
bpf_p = re.compile('\s*(BPF\w+)+')
bpf_p = re.compile(r'\s*(BPF\w+)+')
# Searches for an enum entry assigned to another entry,
# for e.g. BPF_PROG_RUN = BPF_PROG_TEST_RUN, which is
# not documented hence should be skipped in check to
# determine if the right number of syscalls are documented
assign_p = re.compile('\s*(BPF\w+)\s*=\s*(BPF\w+)')
assign_p = re.compile(r'\s*(BPF\w+)\s*=\s*(BPF\w+)')
bpf_cmd_str = ''
while True:
capture = assign_p.match(self.line)
......@@ -239,7 +239,7 @@ class HeaderParser(object):
break
self.line = self.reader.readline()
# Find the number of occurrences of BPF\w+
self.enum_syscalls = re.findall('(BPF\w+)+', bpf_cmd_str)
self.enum_syscalls = re.findall(r'(BPF\w+)+', bpf_cmd_str)
def parse_desc_helpers(self):
self.seek_to(helpersDocStart,
......@@ -263,7 +263,7 @@ class HeaderParser(object):
self.seek_to('#define ___BPF_FUNC_MAPPER(FN, ctx...)',
'Could not find start of eBPF helper definition list')
# Searches for one FN(\w+) define or a backslash for newline
p = re.compile('\s*FN\((\w+), (\d+), ##ctx\)|\\\\')
p = re.compile(r'\s*FN\((\w+), (\d+), ##ctx\)|\\\\')
fn_defines_str = ''
i = 0
while True:
......@@ -278,7 +278,7 @@ class HeaderParser(object):
break
self.line = self.reader.readline()
# Find the number of occurrences of FN(\w+)
self.define_unique_helpers = re.findall('FN\(\w+, \d+, ##ctx\)', fn_defines_str)
self.define_unique_helpers = re.findall(r'FN\(\w+, \d+, ##ctx\)', fn_defines_str)
def validate_helpers(self):
last_helper = ''
......@@ -425,7 +425,7 @@ class PrinterRST(Printer):
try:
cmd = ['git', 'log', '-1', '--pretty=format:%cs', '--no-patch',
'-L',
'/{}/,/\*\//:include/uapi/linux/bpf.h'.format(delimiter)]
'/{}/,/\\*\\//:include/uapi/linux/bpf.h'.format(delimiter)]
date = subprocess.run(cmd, cwd=linuxRoot,
capture_output=True, check=True)
return date.stdout.decode().rstrip()
......@@ -516,7 +516,7 @@ as "Dual BSD/GPL", may be used). Some helper functions are only accessible to
programs that are compatible with the GNU Privacy License (GPL).
In order to use such helpers, the eBPF program must be loaded with the correct
license string passed (via **attr**) to the **bpf**\ () system call, and this
license string passed (via **attr**) to the **bpf**\\ () system call, and this
generally translates into the C source code of the program containing a line
similar to the following:
......@@ -550,7 +550,7 @@ may be interested in:
* The bpftool utility can be used to probe the availability of helper functions
on the system (as well as supported program and map types, and a number of
other parameters). To do so, run **bpftool feature probe** (see
**bpftool-feature**\ (8) for details). Add the **unprivileged** keyword to
**bpftool-feature**\\ (8) for details). Add the **unprivileged** keyword to
list features available to unprivileged users.
Compatibility between helper functions and program types can generally be found
......@@ -562,23 +562,23 @@ other functions, themselves allowing access to additional helpers. The
requirement for GPL license is also in those **struct bpf_func_proto**.
Compatibility between helper functions and map types can be found in the
**check_map_func_compatibility**\ () function in file *kernel/bpf/verifier.c*.
**check_map_func_compatibility**\\ () function in file *kernel/bpf/verifier.c*.
Helper functions that invalidate the checks on **data** and **data_end**
pointers for network processing are listed in function
**bpf_helper_changes_pkt_data**\ () in file *net/core/filter.c*.
**bpf_helper_changes_pkt_data**\\ () in file *net/core/filter.c*.
SEE ALSO
========
**bpf**\ (2),
**bpftool**\ (8),
**cgroups**\ (7),
**ip**\ (8),
**perf_event_open**\ (2),
**sendmsg**\ (2),
**socket**\ (7),
**tc-bpf**\ (8)'''
**bpf**\\ (2),
**bpftool**\\ (8),
**cgroups**\\ (7),
**ip**\\ (8),
**perf_event_open**\\ (2),
**sendmsg**\\ (2),
**socket**\\ (7),
**tc-bpf**\\ (8)'''
print(footer)
def print_proto(self, helper):
......@@ -598,7 +598,7 @@ SEE ALSO
one_arg = '{}{}'.format(comma, a['type'])
if a['name']:
if a['star']:
one_arg += ' {}**\ '.format(a['star'].replace('*', '\\*'))
one_arg += ' {}**\\ '.format(a['star'].replace('*', '\\*'))
else:
one_arg += '** '
one_arg += '*{}*\\ **'.format(a['name'])
......
......@@ -83,7 +83,7 @@ const char *evsel__hw_cache_result[PERF_COUNT_HW_CACHE_RESULT_MAX] = {
#define perf_event_name(array, id) ({ \
const char *event_str = NULL; \
\
if ((id) >= 0 && (id) < ARRAY_SIZE(array)) \
if ((id) < ARRAY_SIZE(array)) \
event_str = array[id]; \
event_str; \
})
......
......@@ -50,14 +50,17 @@ TEST_GEN_PROGS = test_verifier test_tag test_maps test_lru_map test_lpm_map test
test_cgroup_storage \
test_tcpnotify_user test_sysctl \
test_progs-no_alu32
TEST_INST_SUBDIRS := no_alu32
# Also test bpf-gcc, if present
ifneq ($(BPF_GCC),)
TEST_GEN_PROGS += test_progs-bpf_gcc
TEST_INST_SUBDIRS += bpf_gcc
endif
ifneq ($(CLANG_CPUV4),)
TEST_GEN_PROGS += test_progs-cpuv4
TEST_INST_SUBDIRS += cpuv4
endif
TEST_GEN_FILES = test_lwt_ip_encap.bpf.o test_tc_edt.bpf.o
......@@ -714,3 +717,12 @@ EXTRA_CLEAN := $(TEST_CUSTOM_PROGS) $(SCRATCH_DIR) $(HOST_SCRATCH_DIR) \
# Delete partially updated (corrupted) files on error
.DELETE_ON_ERROR:
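# Wrap the default install rule so the per-flavor BPF objects
# (no_alu32, bpf_gcc, cpuv4) are installed as well.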
DEFAULT_INSTALL_RULE := $(INSTALL_RULE)
override define INSTALL_RULE
$(DEFAULT_INSTALL_RULE)
@for DIR in $(TEST_INST_SUBDIRS); do \
mkdir -p $(INSTALL_PATH)/$$DIR; \
rsync -a $(OUTPUT)/$$DIR/*.bpf.o $(INSTALL_PATH)/$$DIR;\
done
endef
......@@ -8,6 +8,7 @@
#include <linux/unistd.h>
#include <linux/mount.h>
#include <sys/syscall.h>
#include "bpf/libbpf_internal.h"
static inline int sys_fsopen(const char *fsname, unsigned flags)
{
......@@ -155,7 +156,7 @@ static void validate_pin(int map_fd, const char *map_name, int src_value,
ASSERT_OK(err, "obj_pin");
/* cleanup */
if (pin_opts.path_fd >= 0)
if (path_kind == PATH_FD_REL && pin_opts.path_fd >= 0)
close(pin_opts.path_fd);
if (old_cwd[0])
ASSERT_OK(chdir(old_cwd), "restore_cwd");
......@@ -220,7 +221,7 @@ static void validate_get(int map_fd, const char *map_name, int src_value,
goto cleanup;
/* cleanup */
if (get_opts.path_fd >= 0)
if (path_kind == PATH_FD_REL && get_opts.path_fd >= 0)
close(get_opts.path_fd);
if (old_cwd[0])
ASSERT_OK(chdir(old_cwd), "restore_cwd");
......
......@@ -12,6 +12,17 @@
#include "test_d_path_check_rdonly_mem.skel.h"
#include "test_d_path_check_types.skel.h"
/* sys_close_range has not been around for a long time, so let's
* make sure we can call it on systems with older glibc
*/
#ifndef __NR_close_range
#ifdef __alpha__
#define __NR_close_range 546
#else
#define __NR_close_range 436
#endif
#endif
static int duration;
static struct {
......@@ -90,7 +101,11 @@ static int trigger_fstat_events(pid_t pid)
fstat(indicatorfd, &fileStat);
out_close:
/* triggers filp_close */
/* sys_close no longer triggers filp_close, but we can
* call sys_close_range instead which still does
*/
#define close(fd) syscall(__NR_close_range, fd, fd, 0)
close(pipefd[0]);
close(pipefd[1]);
close(sockfd);
......@@ -98,6 +113,8 @@ static int trigger_fstat_events(pid_t pid)
close(devfd);
close(localfd);
close(indicatorfd);
#undef close
return ret;
}
......