Commit e57f52b4 authored by Alexei Starovoitov

Merge branch 'bpf: implement variadic printk helper'

Dave Marchevsky says:

====================

This series introduces a new helper, bpf_trace_vprintk, which functions
like bpf_trace_printk but supports > 3 arguments via a pseudo-vararg u64
array. The bpf_printk libbpf convenience macro is modified to use
bpf_trace_vprintk when > 3 varargs are passed, otherwise the previous
behavior - using bpf_trace_printk - is retained.
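
As a usage illustration (not part of this series; the program name, section
and format strings below are made up for the example), a BPF program built
against the updated bpf_helpers.h and regenerated helper definitions could
do something like:

// SPDX-License-Identifier: GPL-2.0
/* Hypothetical sketch only: assumes bpf_trace_vprintk is available in the
 * kernel and declared in the generated helper definitions.
 */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

char _license[] SEC("license") = "GPL";

SEC("fentry/__x64_sys_nanosleep")
int print_many_args(void *ctx)
{
    static const char fmt[] = "x=%d y=%d z=%d w=%d\n";
    __u64 args[] = { 10, 20, 30, 40 };

    /* With more than 3 format args, bpf_printk now dispatches to
     * __bpf_vprintk, which packs the values into a u64 array and
     * calls bpf_trace_vprintk.
     */
    bpf_printk("a=%d b=%d c=%d d=%d e=%d", 1, 2, 3, 4, 5);

    /* The helper can also be called directly with a pseudo-vararg
     * u64 array; data_len must be a multiple of 8.
     */
    bpf_trace_vprintk(fmt, sizeof(fmt), args, sizeof(args));

    return 0;
}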

Helper functions and macros added during the implementation of
bpf_seq_printf and bpf_snprintf do most of the heavy lifting for
bpf_trace_vprintk. There's no novel format string wrangling here.

The use case here is straightforward: giving BPF program writers a more
powerful printk will ease development of BPF programs, particularly
during debugging and testing, where printk tends to be used.

This feature was proposed by Andrii in libbpf mirror's issue tracker
[1].

[1] https://github.com/libbpf/libbpf/issues/315

v5 -> v6: Rebase to pick up newly-added helper

v4 -> v5:

* patch 8: added test for "%pS" format string w/ NULL fmt arg [Daniel]
* patch 8: dmesg -> /sys/kernel/debug/tracing/trace_pipe in commit message [Andrii]
* patch 9: squash into patch 8, remove the added test in favor of just bpf_printk'ing in patch 8's test [Andrii]
    * migrate comment to /* */
* header comments improved
    * uapi/linux/bpf.h: u64 -> long return type [Daniel]
    * uapi/linux/bpf.h: function description explains benefit of bpf_trace_vprintk over bpf_trace_printk [Daniel]
    * uapi/linux/bpf.h: added patch explaining that data_len should be a multiple of 8 in bpf_seq_printf, bpf_snprintf descriptions [Daniel]
    * tools/lib/bpf/bpf_helpers.h: move comment to new bpf_printk [Andrii]
* rebase

v3 -> v4:
* Add patch 2, which migrates reference_tracking prog_test away from
  bpf_program__load. Could be placed a bit later in the series, but
  wanted to keep the actual vprintk-related patches contiguous
* Add patch 9, which adds a program w/ 0 fmt arg bpf_printk to vprintk
  test
* bpf_printk convenience macro isn't multiline anymore, so simplify [Andrii]
* Add some comments to ___bpf_pick_printk to make it more obvious when
  implementation switches from printk to vprintk [Andrii]
* BPF_PRINTK_FMT_TYPE -> BPF_PRINTK_FMT_MOD for 'static const' fmt string
  in printk wrapper macro [Andrii]
    * checkpatch.pl doesn't like this, says "Macros with complex values
      should be enclosed in parentheses". Strange that it didn't have similar
      complaints about v3's BPF_PRINTK_FMT_TYPE. Regardless, IMO the complaint
      is not highlighting a real issue in the case of this macro.
* Fix alignment of __bpf_vprintk and __bpf_pick_printk [Andrii]
* rebase

v2 -> v3:
* Clean up patch 3's commit message [Alexei]
* Add patch 4, which modifies __bpf_printk to use 'static const char' to
  store fmt string with fallback for older kernels [Andrii]
* rebase

v1 -> v2:

* Naming conversation seems to have gone in favor of keeping
  bpf_trace_vprintk, so the names are unchanged

* Patch 3 now modifies bpf_printk convenience macro to choose between
  __bpf_printk and __bpf_vprintk 'implementation' macros based on arg
  count. __bpf_vprintk is a renaming of bpf_vprintk convenience macro
  from v1, __bpf_printk is the existing bpf_printk implementation.

  This patch could use some scrutiny, as I think the current
  implementation may regress developer experience in a specific case,
  turning a compile-time error into a load-time error. It's unclear to me
  how common that case is, or whether the macro magic I chose is ideal.

* char ___fmt[] to static const char ___fmt[] change was not done,
  wanted to leave __bpf_printk 'implementation' macro unchanged for v2
  to ease discussion of above point

* Removed __always_inline from __set_printk_clr_event [Andrii]
* Simplified bpf_trace_printk docstring to refer to other functions
  instead of copy/pasting and avoid specifying 12 vararg limit [Andrii]
* Migrated trace_printk selftest to use ASSERT_ instead of CHECK
  * Adds new patch 5, previous patch 5 is now 6
* Migrated trace_vprintk selftest to use ASSERT_ instead of CHECK,
  open_and_load instead of separate open, load [Andrii]
* Patch 2's commit message now correctly mentions trace_pipe instead of
  dmesg [Andrii]
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
parents af54faab a42effb0
@@ -1088,6 +1088,7 @@ bool bpf_prog_array_compatible(struct bpf_array *array, const struct bpf_prog *f
 int bpf_prog_calc_tag(struct bpf_prog *fp);
 
 const struct bpf_func_proto *bpf_get_trace_printk_proto(void);
+const struct bpf_func_proto *bpf_get_trace_vprintk_proto(void);
 
 typedef unsigned long (*bpf_ctx_copy_t)(void *dst, const void *src,
                                         unsigned long off, unsigned long len);
@@ -2216,6 +2217,8 @@ int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
 struct btf_id_set;
 bool btf_id_set_contains(const struct btf_id_set *set, u32 id);
 
+#define MAX_BPRINTF_VARARGS    12
+
 int bpf_bprintf_prepare(char *fmt, u32 fmt_size, const u64 *raw_args,
                         u32 **bin_buf, u32 num_args);
 void bpf_bprintf_cleanup(void);
...
@@ -4046,7 +4046,7 @@ union bpf_attr {
  *		arguments. The *data* are a **u64** array and corresponding format string
  *		values are stored in the array. For strings and pointers where pointees
  *		are accessed, only the pointer values are stored in the *data* array.
- *		The *data_len* is the size of *data* in bytes.
+ *		The *data_len* is the size of *data* in bytes - must be a multiple of 8.
  *
  *		Formats **%s**, **%p{i,I}{4,6}** requires to read kernel memory.
  *		Reading kernel memory may fail due to either invalid address or
@@ -4751,7 +4751,8 @@ union bpf_attr {
  *		Each format specifier in **fmt** corresponds to one u64 element
  *		in the **data** array. For strings and pointers where pointees
  *		are accessed, only the pointer values are stored in the *data*
- *		array. The *data_len* is the size of *data* in bytes.
+ *		array. The *data_len* is the size of *data* in bytes - must be
+ *		a multiple of 8.
  *
  *		Formats **%s** and **%p{i,I}{4,6}** require to read kernel
  *		memory. Reading kernel memory may fail due to either invalid
@@ -4898,6 +4899,16 @@ union bpf_attr {
  *		**-EINVAL** if *flags* is not zero.
  *
  *		**-ENOENT** if architecture does not support branch records.
+ *
+ * long bpf_trace_vprintk(const char *fmt, u32 fmt_size, const void *data, u32 data_len)
+ *	Description
+ *		Behaves like **bpf_trace_printk**\ () helper, but takes an array of u64
+ *		to format and can handle more format args as a result.
+ *
+ *		Arguments are to be used as in **bpf_seq_printf**\ () helper.
+ *	Return
+ *		The number of bytes written to the buffer, or a negative error
+ *		in case of failure.
  */
 #define __BPF_FUNC_MAPPER(FN) \
     FN(unspec), \
@@ -5077,6 +5088,7 @@ union bpf_attr {
     FN(get_attach_cookie), \
     FN(task_pt_regs), \
     FN(get_branch_snapshot), \
+    FN(trace_vprintk), \
     /* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
...
@@ -2357,6 +2357,11 @@ const struct bpf_func_proto * __weak bpf_get_trace_printk_proto(void)
     return NULL;
 }
 
+const struct bpf_func_proto * __weak bpf_get_trace_vprintk_proto(void)
+{
+    return NULL;
+}
+
 u64 __weak
 bpf_event_output(struct bpf_map *map, u64 flags, void *meta, u64 meta_size,
                  void *ctx, u64 ctx_size, bpf_ctx_copy_t ctx_copy)
...
@@ -979,15 +979,13 @@ int bpf_bprintf_prepare(char *fmt, u32 fmt_size, const u64 *raw_args,
     return err;
 }
 
-#define MAX_SNPRINTF_VARARGS    12
-
 BPF_CALL_5(bpf_snprintf, char *, str, u32, str_size, char *, fmt,
            const void *, data, u32, data_len)
 {
     int err, num_args;
     u32 *bin_args;
 
-    if (data_len % 8 || data_len > MAX_SNPRINTF_VARARGS * 8 ||
+    if (data_len % 8 || data_len > MAX_BPRINTF_VARARGS * 8 ||
         (data_len && !data))
         return -EINVAL;
     num_args = data_len / 8;
@@ -1437,6 +1435,8 @@ bpf_base_func_proto(enum bpf_func_id func_id)
         return &bpf_snprintf_proto;
     case BPF_FUNC_task_pt_regs:
         return &bpf_task_pt_regs_proto;
+    case BPF_FUNC_trace_vprintk:
+        return bpf_get_trace_vprintk_proto();
     default:
         return NULL;
     }
...
@@ -398,7 +398,7 @@ static const struct bpf_func_proto bpf_trace_printk_proto = {
     .arg2_type = ARG_CONST_SIZE,
 };
 
-const struct bpf_func_proto *bpf_get_trace_printk_proto(void)
+static void __set_printk_clr_event(void)
 {
     /*
      * This program might be calling bpf_trace_printk,
@@ -410,11 +410,57 @@ const struct bpf_func_proto *bpf_get_trace_printk_proto(void)
      */
     if (trace_set_clr_event("bpf_trace", "bpf_trace_printk", 1))
         pr_warn_ratelimited("could not enable bpf_trace_printk events");
+}
+
+const struct bpf_func_proto *bpf_get_trace_printk_proto(void)
+{
+    __set_printk_clr_event();
     return &bpf_trace_printk_proto;
 }
 
-#define MAX_SEQ_PRINTF_VARARGS    12
+BPF_CALL_4(bpf_trace_vprintk, char *, fmt, u32, fmt_size, const void *, data,
+           u32, data_len)
+{
+    static char buf[BPF_TRACE_PRINTK_SIZE];
+    unsigned long flags;
+    int ret, num_args;
+    u32 *bin_args;
+
+    if (data_len & 7 || data_len > MAX_BPRINTF_VARARGS * 8 ||
+        (data_len && !data))
+        return -EINVAL;
+    num_args = data_len / 8;
+
+    ret = bpf_bprintf_prepare(fmt, fmt_size, data, &bin_args, num_args);
+    if (ret < 0)
+        return ret;
+
+    raw_spin_lock_irqsave(&trace_printk_lock, flags);
+    ret = bstr_printf(buf, sizeof(buf), fmt, bin_args);
+
+    trace_bpf_trace_printk(buf);
+    raw_spin_unlock_irqrestore(&trace_printk_lock, flags);
+
+    bpf_bprintf_cleanup();
+
+    return ret;
+}
+
+static const struct bpf_func_proto bpf_trace_vprintk_proto = {
+    .func       = bpf_trace_vprintk,
+    .gpl_only   = true,
+    .ret_type   = RET_INTEGER,
+    .arg1_type  = ARG_PTR_TO_MEM,
+    .arg2_type  = ARG_CONST_SIZE,
+    .arg3_type  = ARG_PTR_TO_MEM_OR_NULL,
+    .arg4_type  = ARG_CONST_SIZE_OR_ZERO,
+};
+
+const struct bpf_func_proto *bpf_get_trace_vprintk_proto(void)
+{
+    __set_printk_clr_event();
+    return &bpf_trace_vprintk_proto;
+}
 
 BPF_CALL_5(bpf_seq_printf, struct seq_file *, m, char *, fmt, u32, fmt_size,
            const void *, data, u32, data_len)
@@ -422,7 +468,7 @@ BPF_CALL_5(bpf_seq_printf, struct seq_file *, m, char *, fmt, u32, fmt_size,
     int err, num_args;
     u32 *bin_args;
 
-    if (data_len & 7 || data_len > MAX_SEQ_PRINTF_VARARGS * 8 ||
+    if (data_len & 7 || data_len > MAX_BPRINTF_VARARGS * 8 ||
         (data_len && !data))
         return -EINVAL;
     num_args = data_len / 8;
@@ -1162,6 +1208,8 @@ bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
         return &bpf_get_func_ip_proto_tracing;
     case BPF_FUNC_get_branch_snapshot:
         return &bpf_get_branch_snapshot_proto;
+    case BPF_FUNC_trace_vprintk:
+        return bpf_get_trace_vprintk_proto();
     default:
         return bpf_base_func_proto(func_id);
     }
...
@@ -624,6 +624,7 @@ probe_helpers_for_progtype(enum bpf_prog_type prog_type, bool supported_type,
      */
     switch (id) {
     case BPF_FUNC_trace_printk:
+    case BPF_FUNC_trace_vprintk:
     case BPF_FUNC_probe_write_user:
         if (!full_mode)
             continue;
...
@@ -4046,7 +4046,7 @@ union bpf_attr {
  *		arguments. The *data* are a **u64** array and corresponding format string
  *		values are stored in the array. For strings and pointers where pointees
  *		are accessed, only the pointer values are stored in the *data* array.
- *		The *data_len* is the size of *data* in bytes.
+ *		The *data_len* is the size of *data* in bytes - must be a multiple of 8.
  *
  *		Formats **%s**, **%p{i,I}{4,6}** requires to read kernel memory.
  *		Reading kernel memory may fail due to either invalid address or
@@ -4751,7 +4751,8 @@ union bpf_attr {
  *		Each format specifier in **fmt** corresponds to one u64 element
  *		in the **data** array. For strings and pointers where pointees
  *		are accessed, only the pointer values are stored in the *data*
- *		array. The *data_len* is the size of *data* in bytes.
+ *		array. The *data_len* is the size of *data* in bytes - must be
+ *		a multiple of 8.
  *
  *		Formats **%s** and **%p{i,I}{4,6}** require to read kernel
  *		memory. Reading kernel memory may fail due to either invalid
@@ -4898,6 +4899,16 @@ union bpf_attr {
  *		**-EINVAL** if *flags* is not zero.
  *
  *		**-ENOENT** if architecture does not support branch records.
+ *
+ * long bpf_trace_vprintk(const char *fmt, u32 fmt_size, const void *data, u32 data_len)
+ *	Description
+ *		Behaves like **bpf_trace_printk**\ () helper, but takes an array of u64
+ *		to format and can handle more format args as a result.
+ *
+ *		Arguments are to be used as in **bpf_seq_printf**\ () helper.
+ *	Return
+ *		The number of bytes written to the buffer, or a negative error
+ *		in case of failure.
  */
 #define __BPF_FUNC_MAPPER(FN) \
     FN(unspec), \
@@ -5077,6 +5088,7 @@ union bpf_attr {
     FN(get_attach_cookie), \
     FN(task_pt_regs), \
     FN(get_branch_snapshot), \
+    FN(trace_vprintk), \
     /* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
...
@@ -14,14 +14,6 @@
 #define __type(name, val) typeof(val) *name
 #define __array(name, val) typeof(val) *name[]
 
-/* Helper macro to print out debug messages */
-#define bpf_printk(fmt, ...)                            \
-({                                                      \
-    char ____fmt[] = fmt;                               \
-    bpf_trace_printk(____fmt, sizeof(____fmt),          \
-                     ##__VA_ARGS__);                    \
-})
-
 /*
  * Helper macro to place programs, maps, license in
  * different sections in elf_bpf file. Section names
@@ -224,4 +216,47 @@ enum libbpf_tristate {
                      ___param, sizeof(___param));       \
 })
 
+#ifdef BPF_NO_GLOBAL_DATA
+#define BPF_PRINTK_FMT_MOD
+#else
+#define BPF_PRINTK_FMT_MOD static const
+#endif
+
+#define __bpf_printk(fmt, ...)                          \
+({                                                      \
+    BPF_PRINTK_FMT_MOD char ____fmt[] = fmt;            \
+    bpf_trace_printk(____fmt, sizeof(____fmt),          \
+                     ##__VA_ARGS__);                    \
+})
+
+/*
+ * __bpf_vprintk wraps the bpf_trace_vprintk helper with variadic arguments
+ * instead of an array of u64.
+ */
+#define __bpf_vprintk(fmt, args...)                             \
+({                                                              \
+    static const char ___fmt[] = fmt;                           \
+    unsigned long long ___param[___bpf_narg(args)];             \
+                                                                \
+    _Pragma("GCC diagnostic push")                              \
+    _Pragma("GCC diagnostic ignored \"-Wint-conversion\"")      \
+    ___bpf_fill(___param, args);                                \
+    _Pragma("GCC diagnostic pop")                               \
+                                                                \
+    bpf_trace_vprintk(___fmt, sizeof(___fmt),                   \
+                      ___param, sizeof(___param));              \
+})
+
+/* Use __bpf_printk when bpf_printk call has 3 or fewer fmt args
+ * Otherwise use __bpf_vprintk
+ */
+#define ___bpf_pick_printk(...) \
+    ___bpf_nth(_, ##__VA_ARGS__, __bpf_vprintk, __bpf_vprintk, __bpf_vprintk,   \
+               __bpf_vprintk, __bpf_vprintk, __bpf_vprintk, __bpf_vprintk,      \
+               __bpf_vprintk, __bpf_vprintk, __bpf_printk /*3*/, __bpf_printk /*2*/,\
+               __bpf_printk /*1*/, __bpf_printk /*0*/)
+
+/* Helper macro to print out debug messages */
+#define bpf_printk(fmt, args...) ___bpf_pick_printk(args)(fmt, ##args)
+
 #endif
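
To make the dispatch above concrete, here is a rough hand expansion
(illustrative only; pragmas and exact whitespace omitted) of what a
bpf_printk call with four format args resolves to via __bpf_vprintk:

/* bpf_printk("%d %d %d %d", a, b, c, d) picks __bpf_vprintk because the
 * call has more than 3 args; ___bpf_fill() then packs the args in order.
 */
({
    static const char ___fmt[] = "%d %d %d %d";
    unsigned long long ___param[4];

    ___param[0] = a;
    ___param[1] = b;
    ___param[2] = c;
    ___param[3] = d;

    bpf_trace_vprintk(___fmt, sizeof(___fmt), ___param, sizeof(___param));
})

Calls with 3 or fewer format args keep expanding to __bpf_printk, and thus
to the existing bpf_trace_printk helper, so programs that do not use the
extra-args form continue to load on older kernels.
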
@@ -315,7 +315,8 @@ LINKED_SKELS := test_static_linked.skel.h linked_funcs.skel.h \
         linked_vars.skel.h linked_maps.skel.h
 
 LSKELS := kfunc_call_test.c fentry_test.c fexit_test.c fexit_sleep.c \
-    test_ksyms_module.c test_ringbuf.c atomics.c trace_printk.c
+    test_ksyms_module.c test_ringbuf.c atomics.c trace_printk.c \
+    trace_vprintk.c
 SKEL_BLACKLIST += $$(LSKELS)
 
 test_static_linked.skel.h-deps := test_static_linked1.o test_static_linked2.o
...
 // SPDX-License-Identifier: GPL-2.0
 #include <test_progs.h>
 
+static void toggle_object_autoload_progs(const struct bpf_object *obj,
+                                         const char *title_load)
+{
+    struct bpf_program *prog;
+
+    bpf_object__for_each_program(prog, obj) {
+        const char *title = bpf_program__section_name(prog);
+
+        if (!strcmp(title_load, title))
+            bpf_program__set_autoload(prog, true);
+        else
+            bpf_program__set_autoload(prog, false);
+    }
+}
+
 void test_reference_tracking(void)
 {
     const char *file = "test_sk_lookup_kern.o";
@@ -9,21 +24,21 @@ void test_reference_tracking(void)
         .object_name = obj_name,
         .relaxed_maps = true,
     );
-    struct bpf_object *obj;
+    struct bpf_object *obj_iter, *obj = NULL;
     struct bpf_program *prog;
     __u32 duration = 0;
     int err = 0;
 
-    obj = bpf_object__open_file(file, &open_opts);
-    if (!ASSERT_OK_PTR(obj, "obj_open_file"))
+    obj_iter = bpf_object__open_file(file, &open_opts);
+    if (!ASSERT_OK_PTR(obj_iter, "obj_iter_open_file"))
         return;
 
-    if (CHECK(strcmp(bpf_object__name(obj), obj_name), "obj_name",
+    if (CHECK(strcmp(bpf_object__name(obj_iter), obj_name), "obj_name",
               "wrong obj name '%s', expected '%s'\n",
-              bpf_object__name(obj), obj_name))
+              bpf_object__name(obj_iter), obj_name))
         goto cleanup;
 
-    bpf_object__for_each_program(prog, obj) {
+    bpf_object__for_each_program(prog, obj_iter) {
         const char *title;
 
         /* Ignore .text sections */
@@ -34,19 +49,27 @@ void test_reference_tracking(void)
         if (!test__start_subtest(title))
             continue;
 
+        obj = bpf_object__open_file(file, &open_opts);
+        if (!ASSERT_OK_PTR(obj, "obj_open_file"))
+            goto cleanup;
+
+        toggle_object_autoload_progs(obj, title);
+
         /* Expect verifier failure if test name has 'err' */
         if (strstr(title, "err_") != NULL) {
             libbpf_print_fn_t old_print_fn;
 
             old_print_fn = libbpf_set_print(NULL);
-            err = !bpf_program__load(prog, "GPL", 0);
+            err = !bpf_object__load(obj);
             libbpf_set_print(old_print_fn);
         } else {
-            err = bpf_program__load(prog, "GPL", 0);
+            err = bpf_object__load(obj);
         }
         CHECK(err, title, "\n");
+
+        bpf_object__close(obj);
+        obj = NULL;
     }
 
 cleanup:
     bpf_object__close(obj);
+    bpf_object__close(obj_iter);
 }
@@ -10,7 +10,7 @@
 
 void test_trace_printk(void)
 {
-    int err, iter = 0, duration = 0, found = 0;
+    int err = 0, iter = 0, found = 0;
     struct trace_printk__bss *bss;
     struct trace_printk *skel;
     char *buf = NULL;
@@ -18,25 +18,24 @@ void test_trace_printk(void)
     size_t buflen;
 
     skel = trace_printk__open();
-    if (CHECK(!skel, "skel_open", "failed to open skeleton\n"))
+    if (!ASSERT_OK_PTR(skel, "trace_printk__open"))
         return;
 
-    ASSERT_EQ(skel->rodata->fmt[0], 'T', "invalid printk fmt string");
+    ASSERT_EQ(skel->rodata->fmt[0], 'T', "skel->rodata->fmt[0]");
     skel->rodata->fmt[0] = 't';
 
     err = trace_printk__load(skel);
-    if (CHECK(err, "skel_load", "failed to load skeleton: %d\n", err))
+    if (!ASSERT_OK(err, "trace_printk__load"))
         goto cleanup;
 
     bss = skel->bss;
 
     err = trace_printk__attach(skel);
-    if (CHECK(err, "skel_attach", "skeleton attach failed: %d\n", err))
+    if (!ASSERT_OK(err, "trace_printk__attach"))
         goto cleanup;
 
     fp = fopen(TRACEBUF, "r");
-    if (CHECK(fp == NULL, "could not open trace buffer",
-              "error %d opening %s", errno, TRACEBUF))
+    if (!ASSERT_OK_PTR(fp, "fopen(TRACEBUF)"))
         goto cleanup;
 
     /* We do not want to wait forever if this test fails... */
@@ -46,14 +45,10 @@ void test_trace_printk(void)
     usleep(1);
     trace_printk__detach(skel);
 
-    if (CHECK(bss->trace_printk_ran == 0,
-              "bpf_trace_printk never ran",
-              "ran == %d", bss->trace_printk_ran))
+    if (!ASSERT_GT(bss->trace_printk_ran, 0, "bss->trace_printk_ran"))
         goto cleanup;
 
-    if (CHECK(bss->trace_printk_ret <= 0,
-              "bpf_trace_printk returned <= 0 value",
-              "got %d", bss->trace_printk_ret))
+    if (!ASSERT_GT(bss->trace_printk_ret, 0, "bss->trace_printk_ret"))
         goto cleanup;
 
     /* verify our search string is in the trace buffer */
@@ -66,8 +61,7 @@ void test_trace_printk(void)
             break;
     }
 
-    if (CHECK(!found, "message from bpf_trace_printk not found",
-              "no instance of %s in %s", SEARCHMSG, TRACEBUF))
+    if (!ASSERT_EQ(found, bss->trace_printk_ran, "found"))
         goto cleanup;
 
 cleanup:
...
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2021 Facebook */

#include <test_progs.h>

#include "trace_vprintk.lskel.h"

#define TRACEBUF	"/sys/kernel/debug/tracing/trace_pipe"
#define SEARCHMSG	"1,2,3,4,5,6,7,8,9,10"

void test_trace_vprintk(void)
{
    int err = 0, iter = 0, found = 0;
    struct trace_vprintk__bss *bss;
    struct trace_vprintk *skel;
    char *buf = NULL;
    FILE *fp = NULL;
    size_t buflen;

    skel = trace_vprintk__open_and_load();
    if (!ASSERT_OK_PTR(skel, "trace_vprintk__open_and_load"))
        goto cleanup;

    bss = skel->bss;

    err = trace_vprintk__attach(skel);
    if (!ASSERT_OK(err, "trace_vprintk__attach"))
        goto cleanup;

    fp = fopen(TRACEBUF, "r");
    if (!ASSERT_OK_PTR(fp, "fopen(TRACEBUF)"))
        goto cleanup;

    /* We do not want to wait forever if this test fails... */
    fcntl(fileno(fp), F_SETFL, O_NONBLOCK);

    /* wait for tracepoint to trigger */
    usleep(1);
    trace_vprintk__detach(skel);

    if (!ASSERT_GT(bss->trace_vprintk_ran, 0, "bss->trace_vprintk_ran"))
        goto cleanup;

    if (!ASSERT_GT(bss->trace_vprintk_ret, 0, "bss->trace_vprintk_ret"))
        goto cleanup;

    /* verify our search string is in the trace buffer */
    while (getline(&buf, &buflen, fp) >= 0 || errno == EAGAIN) {
        if (strstr(buf, SEARCHMSG) != NULL)
            found++;
        if (found == bss->trace_vprintk_ran)
            break;
        if (++iter > 1000)
            break;
    }

    if (!ASSERT_EQ(found, bss->trace_vprintk_ran, "found"))
        goto cleanup;

    if (!ASSERT_LT(bss->null_data_vprintk_ret, 0, "bss->null_data_vprintk_ret"))
        goto cleanup;

cleanup:
    trace_vprintk__destroy(skel);
    free(buf);
    if (fp)
        fclose(fp);
}
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2021 Facebook */

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

int null_data_vprintk_ret = 0;
int trace_vprintk_ret = 0;
int trace_vprintk_ran = 0;

SEC("fentry/__x64_sys_nanosleep")
int sys_enter(void *ctx)
{
    static const char one[] = "1";
    static const char three[] = "3";
    static const char five[] = "5";
    static const char seven[] = "7";
    static const char nine[] = "9";
    static const char f[] = "%pS\n";

    /* runner doesn't search for \t, just ensure it compiles */
    bpf_printk("\t");

    trace_vprintk_ret = __bpf_vprintk("%s,%d,%s,%d,%s,%d,%s,%d,%s,%d %d\n",
                                      one, 2, three, 4, five, 6, seven, 8,
                                      nine, 10, ++trace_vprintk_ran);

    /* non-NULL fmt w/ NULL data should result in error */
    null_data_vprintk_ret = bpf_trace_vprintk(f, sizeof(f), NULL, 0);

    return 0;
}
@@ -57,6 +57,11 @@ def default_iface(f):
         return f(*args, iface, **kwargs)
     return wrapper
 
+DMESG_EMITTING_HELPERS = [
+    "bpf_probe_write_user",
+    "bpf_trace_printk",
+    "bpf_trace_vprintk",
+]
 
 class TestBpftool(unittest.TestCase):
     @classmethod
@@ -67,10 +72,7 @@ class TestBpftool(unittest.TestCase):
     @default_iface
     def test_feature_dev_json(self, iface):
-        unexpected_helpers = [
-            "bpf_probe_write_user",
-            "bpf_trace_printk",
-        ]
+        unexpected_helpers = DMESG_EMITTING_HELPERS
         expected_keys = [
             "syscall_config",
             "program_types",
@@ -94,10 +96,7 @@ class TestBpftool(unittest.TestCase):
             bpftool_json(["feature", "probe"]),
             bpftool_json(["feature"]),
         ]
-        unexpected_helpers = [
-            "bpf_probe_write_user",
-            "bpf_trace_printk",
-        ]
+        unexpected_helpers = DMESG_EMITTING_HELPERS
         expected_keys = [
             "syscall_config",
             "system_config",
@@ -121,10 +120,7 @@ class TestBpftool(unittest.TestCase):
             bpftool_json(["feature", "probe", "kernel", "full"]),
             bpftool_json(["feature", "probe", "full"]),
         ]
-        expected_helpers = [
-            "bpf_probe_write_user",
-            "bpf_trace_printk",
-        ]
+        expected_helpers = DMESG_EMITTING_HELPERS
 
         for tc in test_cases:
             # Check if expected helpers are included at least once in any
@@ -157,7 +153,7 @@ class TestBpftool(unittest.TestCase):
                 not_full_set.add(helper)
 
         self.assertCountEqual(full_set - not_full_set,
-                              {"bpf_probe_write_user", "bpf_trace_printk"})
+                              set(DMESG_EMITTING_HELPERS))
         self.assertCountEqual(not_full_set - full_set, set())
 
     def test_feature_macros(self):
...