- 09 Mar, 2023 3 commits
-
-
Andrii Nakryiko authored
Teach the verifier about the concept of open-coded (or inline) iterators.

This patch adds generic iterator loop verification logic, a new STACK_ITER stack slot type to contain iterator state, and the necessary kfunc plumbing for an iterator's constructor, destructor and next methods. The next patch implements the first specific iterator (a numbers iterator for implementing for() loop logic). Such a split allows for more focused commits for the verifier logic and a separate commit that we can point to later to demonstrate what it takes to add a new kind of iterator.

Each kind of iterator has its own associated struct bpf_iter_<type>, where <type> denotes a specific type of iterator. struct bpf_iter_<type> state is supposed to live on the BPF program stack, so there will be no way to change its size later on without breaking backwards compatibility, so choose wisely! But given this struct is specific to a given <type> of iterator, this allows a lot of flexibility: simple iterators could be fine with just one stack slot (8 bytes), like the numbers iterator in the next patch, while other more complicated iterators might need way more to keep their iterator state. Either way, such a design avoids runtime memory allocations, which would otherwise be necessary if we fixed the on-the-stack size and it turned out to be too small for a given iterator implementation.

The way the BPF verifier logic is implemented, there are no artificial restrictions on the number of active iterators; it should work correctly with multiple active iterators at the same time. This also means you can have multiple nested iteration loops. A struct bpf_iter_<type> reference can be safely passed to subprograms as well.

The general flow is easiest to demonstrate with a simple example using the number iterator implemented in the next patch. Here's the simplest possible loop:

    struct bpf_iter_num it;
    int *v;

    bpf_iter_num_new(&it, 2, 5);
    while ((v = bpf_iter_num_next(&it))) {
        bpf_printk("X = %d", *v);
    }
    bpf_iter_num_destroy(&it);

The above snippet should output "X = 2", "X = 3", "X = 4". Note that 5 is exclusive and is not returned. This matches similar APIs (e.g., slices in Go or Rust) that implement a range of elements, where the end index is non-inclusive.

In the above example, we see a trio of functions:

  - constructor, bpf_iter_num_new(), which initializes iterator state (struct bpf_iter_num it) on the stack. If any of the input arguments are invalid, the constructor should make sure to still initialize it such that subsequent bpf_iter_num_next() calls will return NULL. I.e., on error, return an error and construct an empty iterator.

  - next method, bpf_iter_num_next(), which accepts a pointer to iterator state and produces an element. The next method should always return a pointer. The contract with the BPF verifier is that the next method will always eventually return NULL when elements are exhausted. Once NULL is returned, subsequent next calls should keep returning NULL. In the case of the numbers iterator, bpf_iter_num_next() returns a pointer to an int (storage for this integer is inside the iterator state itself), which can be dereferenced after the corresponding NULL check.

  - once done with the iterator, the user must clean up its state with a call to the destructor, bpf_iter_num_destroy() in this case. The destructor frees up any resources and marks the stack space used by struct bpf_iter_num as usable for something else.

Any other iterator implementation will have to implement at least these three methods.
It is enforced that for any given type of iterator only the applicable constructor/destructor/next methods are callable. I.e., the verifier ensures you can't pass a number iterator state into, say, a cgroup iterator's next method. This is enforced at kfunc registration time by one of the previous patches in this series.

It is important to keep the naming pattern consistent to be able to create generic macros to help with BPF iter usability. E.g., one of the follow-up patches adds a generic bpf_for_each() macro to bpf_misc.h in selftests, which allows you to utilize the iterator "trio" nicely without having to code the above somewhat tedious loop explicitly every time (see the sketch below).

At the implementation level, iterator state tracking for verification purposes is very similar to dynptr. We add a STACK_ITER stack slot type, reserve the necessary number of slots, depending on sizeof(struct bpf_iter_<type>), and keep track of the necessary extra state in the "main" slot, which is marked with a non-zero ref_obj_id. Other slots are also marked as STACK_ITER, but have a zero ref_obj_id. This is simpler than having a separate "is_first_slot" flag.

Another big distinction is that STACK_ITER is *always refcounted*, which simplifies the implementation without sacrificing usability. So there is no need for an extra "iter_id", no need to anticipate reuse of STACK_ITER slots for new constructors, etc. Keeping it simple here.

As far as the verification logic goes, there are two extensive comments, in process_iter_next_call() and iter_active_depths_differ(), explaining some important and sometimes subtle aspects. Please refer to them for details. But from a 10,000-foot point of view, next methods are the points of forking a verification state, which is conceptually similar to what the verifier does when validating a conditional jump. We branch out at a `call bpf_iter_<type>_next` instruction and simulate two outcomes: NULL (iteration is done) and non-NULL (a new element is returned). NULL is simulated first and is supposed to reach the exit without looping. After that the non-NULL case is validated and it either reaches the exit (for trivial examples with no real loop), or reaches another `call bpf_iter_<type>_next` instruction with a state equivalent to an already (partially) validated one. State equivalence at that point means we technically are going to be looping forever without breaking out of the established "state envelope" (i.e., subsequent iterations don't add any new knowledge or constraints to the verifier state, so running 1, 2, 10, or a million of them doesn't matter). But taking into account the contract stating that the iterator next method *has to* return NULL eventually, we can conclude that the loop body is safe and will eventually terminate. Given we validated the logic outside of the loop (the NULL case) and concluded that the loop body is safe (though potentially looping many times), the verifier can claim safety of the overall program logic.

The rest of the patch is the necessary plumbing for state tracking, marking, and validation, plus further kfunc plumbing to allow implementing iterator constructors, destructors, and next methods.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20230308184121.1165081-4-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
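To illustrate the ergonomics this naming enables, usage of such a macro could look roughly like this (a hedged sketch of the intended usage, not the exact macro added in the follow-up selftests patch):

    int *v;

    bpf_for_each(num, v, 2, 5) {
        bpf_printk("X = %d", *v);
    }

Internally the macro can expand to the bpf_iter_num_new()/bpf_iter_num_next()/bpf_iter_num_destroy() trio precisely because the naming convention makes all three entry points derivable from the <type> token.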
-
Andrii Nakryiko authored
Add the ability to register kfuncs that implement the BPF open-coded iterator contract and enforce the naming and function proto conventions. Enforcement happens at the time of kfunc registration and significantly simplifies the rest of the iterator logic in the verifier.

More details follow in subsequent patches, but we enforce the following conditions.

All kfuncs (constructor, next, destructor) have to be named consistently as bpf_iter_<type>_{new,next,destroy}(), respectively. <type> represents the iterator type, and the iterator state should be represented as a matching `struct bpf_iter_<type>` state type. Also, all iter kfuncs should have a pointer to this `struct bpf_iter_<type>` as the very first argument. Additionally:

  - The constructor, i.e., bpf_iter_<type>_new(), can have an arbitrary number of extra arguments. Its return type is not enforced either.
  - The next method, i.e., bpf_iter_<type>_next(), has to return a pointer type and should have exactly one argument: `struct bpf_iter_<type> *` (const/volatile/restrict and typedefs are ignored).
  - The destructor, i.e., bpf_iter_<type>_destroy(), should return void and should have exactly one argument, similar to the next method.
  - struct bpf_iter_<type> size is enforced to be positive and a multiple of 8 bytes (to fit stack slots correctly).

Such strictness and consistency allow building generic helpers abstracting important, but boilerplate, details to be able to use open-coded iterators effectively and ergonomically (see bpf_for_each() in subsequent patches). It also simplifies the verifier logic in some places. At the same time, this doesn't hurt the generality of possible iterator implementations. Win-win.

The constructor kfunc is marked with a new KF_ITER_NEW flag, the next method is marked with KF_ITER_NEXT (and should also have KF_RET_NULL, of course), while the destructor kfunc is marked as KF_ITER_DESTROY. Additionally, we add a trivial kfunc name validation: it should be a valid non-NULL and non-empty string.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20230308184121.1165081-3-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
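As a concrete illustration, the numbers iterator added later in the series follows this contract roughly like so (a sketch based on the conventions above, not a verbatim copy of the kernel sources):

    /* 8-byte, 8-byte-aligned opaque state that lives on the BPF program stack */
    struct bpf_iter_num {
        __u64 __opaque[1];
    } __attribute__((aligned(8)));

    __bpf_kfunc int bpf_iter_num_new(struct bpf_iter_num *it, int start, int end);
    __bpf_kfunc int *bpf_iter_num_next(struct bpf_iter_num *it);
    __bpf_kfunc void bpf_iter_num_destroy(struct bpf_iter_num *it);

    /* registration ties each kfunc to its role via the new flags */
    BTF_ID_FLAGS(func, bpf_iter_num_new, KF_ITER_NEW)
    BTF_ID_FLAGS(func, bpf_iter_num_next, KF_ITER_NEXT | KF_RET_NULL)
    BTF_ID_FLAGS(func, bpf_iter_num_destroy, KF_ITER_DESTROY)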
-
Andrii Nakryiko authored
Factor out the logic to fetch basic kfunc metadata based on struct bpf_insn. This is not exactly short or trivial code to just copy/paste, and this information is sometimes necessary in other parts of the verifier logic. Subsequent patches will rely on this to determine if an instruction is a kfunc call to an iterator next method.

No functional changes intended, including the verbose() warning behavior when a kfunc is not allowed for a particular program type.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20230308184121.1165081-2-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
- 08 Mar, 2023 24 commits
-
-
https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
Jakub Kicinski authored
Andrii Nakryiko says:

====================
pull-request: bpf-next 2023-03-08

We've added 23 non-merge commits during the last 2 day(s) which contain a total of 28 files changed, 414 insertions(+), 104 deletions(-).

The main changes are:

1) Add more precise memory usage reporting for all BPF map types, from Yafang Shao.

2) Add ARM32 USDT support to libbpf, from Puranjay Mohan.

3) Fix BTF_ID_LIST size causing problems in !CONFIG_DEBUG_INFO_BTF, from Nathan Chancellor.

4) IMA selftests fix, from Roberto Sassu.

5) libbpf fix in APK support code, from Daniel Müller.

* https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (23 commits)
  selftests/bpf: Fix IMA test
  libbpf: USDT arm arg parsing support
  libbpf: Refactor parse_usdt_arg() to re-use code
  libbpf: Fix theoretical u32 underflow in find_cd() function
  bpf: enforce all maps having memory usage callback
  bpf: offload map memory usage
  bpf, net: xskmap memory usage
  bpf, net: sock_map memory usage
  bpf, net: bpf_local_storage memory usage
  bpf: local_storage memory usage
  bpf: bpf_struct_ops memory usage
  bpf: queue_stack_maps memory usage
  bpf: devmap memory usage
  bpf: cpumap memory usage
  bpf: bloom_filter memory usage
  bpf: ringbuf memory usage
  bpf: reuseport_array memory usage
  bpf: stackmap memory usage
  bpf: arraymap memory usage
  bpf: hashtab memory usage
  ...
====================

Link: https://lore.kernel.org/r/20230308193533.1671597-1-andrii@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Roberto Sassu authored
Commit 62622dab ("ima: return IMA digest value only when IMA_COLLECTED flag is set") caused bpf_ima_inode_hash() to refuse to give out non-fresh digests. IMA test #3 assumed the old behavior, where bpf_ima_inode_hash() also returned non-fresh digests.

Correct the test by accepting both cases. If one sample is returned, assume that the commit above is applied and that the returned digest is fresh. If two samples are returned, assume that the commit above is not applied, and check both the non-fresh and the fresh digest.

Fixes: 62622dab ("ima: return IMA digest value only when IMA_COLLECTED flag is set")
Reported-by: David Vernet <void@manifault.com>
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Reviewed-by: Matt Bobrowski <mattbobrowski@google.com>
Link: https://lore.kernel.org/bpf/20230308103713.1681200-1-roberto.sassu@huaweicloud.com
-
Eric Dumazet authored
Commit 0091bfc8 ("io_uring/af_unix: defer registered files gc to io_uring release") added one bit to struct sk_buff. This structure is critical for networking, and we try very hard not to add bloat to it unless absolutely required.

For instance, we can use a specific destructor as a wrapper around unix_destruct_scm(), to identify skbs that unix_gc() has to special case.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Pavel Begunkov <asml.silence@gmail.com>
Cc: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
Cc: Jens Axboe <axboe@kernel.dk>
Reviewed-by: Jens Axboe <axboe@kernel.dk>
Reviewed-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
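A minimal sketch of the idea (the wrapper's name here is hypothetical; the actual patch may name and place it differently):

    /* wrapper exists only to give these skbs a distinctive destructor */
    static void io_uring_destruct_scm(struct sk_buff *skb)
    {
        unix_destruct_scm(skb);
    }

    /* unix_gc() can then identify them without a dedicated sk_buff bit: */
    if (skb->destructor == io_uring_destruct_scm) {
        /* special-case handling */
    }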
-
David S. Miller authored
Steen Hegelund says:

====================
Add support for TC flower templates in Sparx5

This adds support for the TC template mechanism in the Sparx5 flower filter implementation.

Templates are as such handled by the TC framework, but when a template is created (using a chain id) there are by definition no filters on this chain (an error will be returned if there are any).

If the template's chain id is one that is represented by a VCAP lookup, then when the template is created, we know that it is safe to use the keys provided in the template to change the keyset configuration for the (port, lookup) combination, if this is needed to improve the match on the template.

The original port keyset configuration is captured in the template state information, which is kept per port, so that when the template is deleted the port keyset configuration can be restored to its previous setting.

The template also provides the protocol parameter, which is the basic information that is used to find out which port keyset configuration needs to be changed.

The VCAPs and lookups are slightly different when it comes to which keys, keysets and protocols are supported and used for selection, so in some cases a bit of tweaking is needed to find a useful match. This is done by e.g. removing a key that prevents the best matching keyset from being selected.

The debugfs output that is provided for a port allows inspection of the currently used keyset in each of the VCAP lookups. So when a template has been created the debugfs output allows you to verify if the keyset configuration has been changed successfully.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Steen Hegelund authored
This adds support for using the "template add" and "template destroy" functionality to change the port keyset configuration. If the VCAP lookup already contains rules, the port keyset is left unchanged, as a change would make these rules unusable. When the template is destroyed the port keyset configuration is restored. The filters using the template chain will automatically be deleted by the TC framework. Signed-off-by: Steen Hegelund <steen.hegelund@microchip.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Steen Hegelund authored
With this it is now possible for clients (like TC) to change the port keyset configuration in the Sparx5 VCAPs. This is typically done per traffic class, which is guided by the L3 protocol information. Before the change, the current keyset configuration is collected in a list that is handed back to the client.

Signed-off-by: Steen Hegelund <steen.hegelund@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Steen Hegelund authored
This adds a list that is used to collect the templates that are active on a port. This allows the template creation to change the port configuration and the template destruction to change it back. Signed-off-by: Steen Hegelund <steen.hegelund@microchip.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Steen Hegelund authored
This provides these 3 functions in the VCAP API:

- Count the number of rules in a VCAP lookup (chain)
- Remove a key from a VCAP rule
- Find the keyset that gives the smallest rule list from a list of keysets

Signed-off-by: Steen Hegelund <steen.hegelund@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
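A hedged sketch of what such services could look like (the names and signatures below are hypothetical illustrations, not the driver's actual API):

    /* count the rules currently installed in a lookup (chain) */
    int vcap_rule_count(struct vcap_control *vctrl, int chain_id);

    /* drop a single key from an existing rule, e.g. to allow a smaller
     * keyset to match */
    int vcap_rule_rem_key(struct vcap_rule *rule, enum vcap_key_field key);

    /* pick, from the candidates, the keyset yielding the smallest rule */
    int vcap_find_smallest_keyset(const struct vcap_keyset_list *candidates,
                                  struct vcap_keyset_list *result);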
-
Steen Hegelund authored
Correct the name used in the debugfs output. Signed-off-by: Steen Hegelund <steen.hegelund@microchip.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Arınç ÜNAL authored
The line endings must be preserved in the descriptions of the gpio-controller, io-supply, and reset-gpios properties to render properly when the YAML file is parsed; currently each description is interpreted as a single line. Change the style of the description of these properties to literal style to preserve the line endings.

Signed-off-by: Arınç ÜNAL <arinc.unal@arinc9.com>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiapeng Chong authored
No functional modification involved.

drivers/net/ethernet/emulex/benet/be_cmds.c:1120 be_cmd_pmac_add() warn: inconsistent indenting.

Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Link: https://bugzilla.openanolis.cn/show_bug.cgi?id=4396
Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Gustavo A. R. Silva authored
Zero-length arrays as fake flexible arrays are deprecated and we are moving towards adopting C99 flexible-array members instead.

Transform zero-length array into flexible-array member in struct mlx4_en_rx_desc.

Address the following warnings found with GCC-13 and -fstrict-flex-arrays=3 enabled:

drivers/net/ethernet/mellanox/mlx4/en_rx.c:88:30: warning: array subscript i is outside array bounds of ‘struct mlx4_wqe_data_seg[0]’ [-Warray-bounds=]
drivers/net/ethernet/mellanox/mlx4/en_rx.c:149:30: warning: array subscript 0 is outside array bounds of ‘struct mlx4_wqe_data_seg[0]’ [-Warray-bounds=]
drivers/net/ethernet/mellanox/mlx4/en_rx.c:127:30: warning: array subscript i is outside array bounds of ‘struct mlx4_wqe_data_seg[0]’ [-Warray-bounds=]
drivers/net/ethernet/mellanox/mlx4/en_rx.c:128:30: warning: array subscript i is outside array bounds of ‘struct mlx4_wqe_data_seg[0]’ [-Warray-bounds=]
drivers/net/ethernet/mellanox/mlx4/en_rx.c:129:30: warning: array subscript i is outside array bounds of ‘struct mlx4_wqe_data_seg[0]’ [-Warray-bounds=]
drivers/net/ethernet/mellanox/mlx4/en_rx.c:117:30: warning: array subscript i is outside array bounds of ‘struct mlx4_wqe_data_seg[0]’ [-Warray-bounds=]
drivers/net/ethernet/mellanox/mlx4/en_rx.c:119:30: warning: array subscript i is outside array bounds of ‘struct mlx4_wqe_data_seg[0]’ [-Warray-bounds=]

This helps with the ongoing efforts to tighten the FORTIFY_SOURCE routines on memcpy() and help us make progress towards globally enabling -fstrict-flex-arrays=3 [1].

Link: https://github.com/KSPP/linux/issues/21
Link: https://github.com/KSPP/linux/issues/264
Link: https://gcc.gnu.org/pipermail/gcc-patches/2022-October/602902.html [1]
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
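The transformation itself is mechanical; a sketch of the before/after, assuming data is the struct's trailing member:

    /* before: zero-length array, a deprecated "fake" flexible array */
    struct mlx4_en_rx_desc {
        struct mlx4_wqe_data_seg data[0];
    };

    /* after: C99 flexible-array member; -Warray-bounds and the
     * FORTIFY_SOURCE memcpy() checks now see the real intent */
    struct mlx4_en_rx_desc {
        struct mlx4_wqe_data_seg data[];
    };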
-
David S. Miller authored
Heiner Kallweit says: ==================== r8169: disable ASPM during NAPI poll This is a rework of ideas from Kai-Heng on how to avoid the known ASPM issues whilst still allowing for a maximum of ASPM-related power savings. As a prerequisite some locking is added first. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Heiner Kallweit authored
Now that ASPM is disabled during NAPI poll, we can remove all ASPM restrictions. This allows for higher power savings if the network isn't fully loaded. Reviewed-by: Simon Horman <simon.horman@corigine.com> Tested-by: Kai-Heng Feng <kai.heng.feng@canonical.com> Tested-by: Holger Hoffstätte <holger@applied-asynchrony.com> Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Heiner Kallweit authored
Several chip versions have problems with ASPM, which may result in rx_missed errors or tx timeouts. The root cause isn't known, but experience shows that disabling ASPM during NAPI poll can avoid these problems.

Suggested-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Tested-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
Tested-by: Holger Hoffstätte <holger@applied-asynchrony.com>
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Heiner Kallweit authored
Bail out if the function is used with chip versions that don't support ASPM configuration. In addition, remove the delay; it turned out that it's not needed, and the vendor driver r8125 doesn't have it either.

Suggested-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Tested-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
Tested-by: Holger Hoffstätte <holger@applied-asynchrony.com>
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Heiner Kallweit authored
For disabling ASPM during NAPI poll we'll have to unlock access to the config registers in atomic context. Other code parts that run with config register access unlocked are partially longer and can sleep. Add a usage counter to enable parallel execution of code parts requiring unlocked config registers.

Reviewed-by: Simon Horman <simon.horman@corigine.com>
Tested-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
Tested-by: Holger Hoffstätte <holger@applied-asynchrony.com>
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
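The usage-counter pattern presumably looks roughly like this (a sketch: Cfg9346 and its lock/unlock values are existing r8169 register names, while the lock and counter field names are assumptions):

    static void rtl_unlock_config_regs(struct rtl8169_private *tp)
    {
        unsigned long flags;

        spin_lock_irqsave(&tp->config_lock, flags);   /* assumed lock name */
        if (!tp->cfg_unlock_cnt++)                    /* assumed counter */
            RTL_W8(tp, Cfg9346, Cfg9346_Unlock);
        spin_unlock_irqrestore(&tp->config_lock, flags);
    }

    static void rtl_lock_config_regs(struct rtl8169_private *tp)
    {
        unsigned long flags;

        spin_lock_irqsave(&tp->config_lock, flags);
        if (!--tp->cfg_unlock_cnt)                    /* last user re-locks */
            RTL_W8(tp, Cfg9346, Cfg9346_Lock);
        spin_unlock_irqrestore(&tp->config_lock, flags);
    }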
-
Heiner Kallweit authored
For disabling ASPM during NAPI poll we'll have to access both registers in atomic context. Use a spinlock to protect access. Reviewed-by: Simon Horman <simon.horman@corigine.com> Tested-by: Kai-Heng Feng <kai.heng.feng@canonical.com> Tested-by: Holger Hoffstätte <holger@applied-asynchrony.com> Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Heiner Kallweit authored
For disabling ASPM during NAPI poll we'll have to access mac ocp registers in atomic context. This could result in races because a mac ocp read consists of a write to register OCPDR, followed by a read from the same register. Therefore add a spinlock to protect access to mac ocp registers. Reviewed-by: Simon Horman <simon.horman@corigine.com> Tested-by: Kai-Heng Feng <kai.heng.feng@canonical.com> Tested-by: Holger Hoffstätte <holger@applied-asynchrony.com> Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
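Schematically, the locked read could look like this (a sketch: OCPDR and the RTL_W32/RTL_R32 accessors are existing r8169 names, while the mac_ocp_lock field name is an assumption):

    static u16 r8168_mac_ocp_read(struct rtl8169_private *tp, u32 reg)
    {
        unsigned long flags;
        u16 val;

        /* keep the OCPDR write/read pair atomic w.r.t. other contexts */
        spin_lock_irqsave(&tp->mac_ocp_lock, flags);
        RTL_W32(tp, OCPDR, reg << 15);
        val = RTL_R32(tp, OCPDR);
        spin_unlock_irqrestore(&tp->mac_ocp_lock, flags);

        return val;
    }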
-
Vadim Fedorenko authored
When the feature was added it was enabled for SW timestamps only, but with current hardware the same out-of-order timestamps can be seen. Let's extend the feature to all types of timestamps.

Signed-off-by: Vadim Fedorenko <vadfed@meta.com>
Reviewed-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Gustavo A. R. Silva authored
Zero-length arrays as fake flexible arrays are deprecated and we are moving towards adopting C99 flexible-array members instead.

Transform zero-length array into flexible-array member in struct nx_cardrsp_rx_ctx_t.

Address the following warnings found with GCC-13 and -fstrict-flex-arrays=3 enabled:

drivers/net/ethernet/qlogic/netxen/netxen_nic_ctx.c:361:26: warning: array subscript <unknown> is outside array bounds of ‘char[0]’ [-Warray-bounds=]
drivers/net/ethernet/qlogic/netxen/netxen_nic_ctx.c:372:25: warning: array subscript <unknown> is outside array bounds of ‘char[0]’ [-Warray-bounds=]

This helps with the ongoing efforts to tighten the FORTIFY_SOURCE routines on memcpy() and help us make progress towards globally enabling -fstrict-flex-arrays=3 [1].

Link: https://github.com/KSPP/linux/issues/21
Link: https://github.com/KSPP/linux/issues/265
Link: https://gcc.gnu.org/pipermail/gcc-patches/2022-October/602902.html [1]
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Link: https://lore.kernel.org/r/ZAZ57I6WdQEwWh7v@work
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Heiner Kallweit authored
lan95xx_config_aneg_ext() can be simplified by using phy_set_bits().

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Link: https://lore.kernel.org/r/3da785c7-3ef8-b5d3-89a0-340f550be3c2@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
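phy_set_bits() collapses an open-coded read-modify-write into one call; schematically (the register and bit names below are placeholders, not the actual smsc driver constants):

    /* before: open-coded read-modify-write */
    val = phy_read(phydev, MII_SOME_REG);
    if (val < 0)
        return val;
    phy_write(phydev, MII_SOME_REG, val | SOME_BIT);

    /* after: one helper call with the same effect */
    phy_set_bits(phydev, MII_SOME_REG, SOME_BIT);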
-
Eric Dumazet authored
enum skb_drop_reason is more generic; we can adopt it instead.

Provide dev_kfree_skb_irq_reason() and dev_kfree_skb_any_reason(). This means drivers can use more precise drop reasons if they want to.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Reviewed-by: Yunsheng Lin <linyunsheng@huawei.com>
Link: https://lore.kernel.org/r/20230306204313.10492-1-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
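Based on the description, the new entry points presumably look like this, with the old helper kept as a thin wrapper (a sketch; the NOT_SPECIFIED default is an assumption):

    void dev_kfree_skb_irq_reason(struct sk_buff *skb, enum skb_drop_reason reason);
    void dev_kfree_skb_any_reason(struct sk_buff *skb, enum skb_drop_reason reason);

    /* old API preserved for existing callers */
    static inline void dev_kfree_skb_irq(struct sk_buff *skb)
    {
        dev_kfree_skb_irq_reason(skb, SKB_DROP_REASON_NOT_SPECIFIED);
    }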
-
Heiner Kallweit authored
cond is sometimes (val & MASK), which may result in a false positive if val is a negative errno. We shouldn't evaluate cond if val < 0. This has no functional impact here, but it's not nice. Therefore switch the order of the checks.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Link: https://lore.kernel.org/r/6d8274ac-4344-23b4-d9a3-cad4c39517d4@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
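The described ordering issue in a hedged, generic sketch (an illustrative poll loop, not the exact helper changed by the patch):

    for (;;) {
        val = phy_read(phydev, regnum);
        if (val < 0)        /* bail out on a read error first... */
            return val;
        if (val & MASK)     /* ...and only then evaluate cond, so a
                             * negative errno can't fake a match */
            break;
        usleep_range(1000, 2000);
    }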
-
- 07 Mar, 2023 13 commits
-
-
Andrii Nakryiko authored
Puranjay Mohan says:

====================
This series adds support for the ARM architecture to libbpf USDT. This involves implementing the parse_usdt_arg() function for ARM.

It was seen that the last part of parse_usdt_arg() is repeated for all architectures, so the first patch in this series refactors these functions and moves the post-processing to parse_usdt_spec().

Changes in V2[1] to V3:
- Use a tabular approach to find register offsets.
- Add the patch for refactoring parse_usdt_arg()
====================

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
-
Puranjay Mohan authored
Parsing of USDT arguments is architecture-specific; on arm it is relatively easy since the registers used are r[0-10], fp, ip, sp, lr, pc. The format is slightly different compared to aarch64; the forms are:

- "size @ [ reg, #offset ]" for dereferences, for example "-8 @ [ sp, #76 ]" and "-4 @ [ sp ]"
- "size @ reg" for register values, for example "-4@r0"
- "size @ #value" for raw values, for example "-8@#1"

Add support for parsing USDT arguments for the ARM architecture.

To test the above changes, QEMU's virt[1] board with a cortex-a15 CPU was used. libbpf-bootstrap's usdt example[2] was modified to attach to a test program with DTRACE_PROBE1/2/3/4... probes to test different combinations.

[1] https://www.qemu.org/docs/master/system/arm/virt.html
[2] https://github.com/libbpf/libbpf-bootstrap/blob/master/examples/c/usdt.bpf.c

Signed-off-by: Puranjay Mohan <puranjay12@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20230307120440.25941-3-puranjay12@gmail.com
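The three forms map naturally onto sscanf() patterns; a self-contained sketch of the classification logic (not libbpf's actual implementation, which folds this into its usdt_arg_spec handling):

    #include <stdio.h>

    /* Classify one ARM USDT argument spec string; reg must point to a
     * buffer of at least 16 bytes. Returns 0 on success, -1 otherwise. */
    static int parse_arm_usdt_arg(const char *s, int *arg_sz, char *reg, int *off)
    {
        if (sscanf(s, " %d @ [ %15[a-z0-9] , #%d ]", arg_sz, reg, off) == 3)
            return 0;       /* "-8 @ [ sp, #76 ]": deref of reg + offset */
        if (sscanf(s, " %d @ [ %15[a-z0-9] ]", arg_sz, reg) == 2) {
            *off = 0;       /* "-4 @ [ sp ]": deref with zero offset */
            return 0;
        }
        if (sscanf(s, " %d @ #%d", arg_sz, off) == 2) {
            reg[0] = '\0';  /* "-8@#1": raw constant value */
            return 0;
        }
        if (sscanf(s, " %d @ %15[a-z0-9]", arg_sz, reg) == 2) {
            *off = 0;       /* "-4@r0": value straight from a register */
            return 0;
        }
        return -1;
    }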
-
Puranjay Mohan authored
The parse_usdt_arg() function is defined differently for each architecture, but the last part of the function is repeated verbatim for all of them. Refactor parse_usdt_arg() to fill in arg_sz and then do the repeated post-processing in parse_usdt_spec().

Signed-off-by: Puranjay Mohan <puranjay12@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20230307120440.25941-2-puranjay12@gmail.com
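The shared post-processing derives signedness and a bit-shift from the parsed size; roughly (a sketch of the common tail that moves into parse_usdt_spec(); the field names follow libbpf's usdt_arg_spec, but details are reconstructed):

    /* arg_sz is negative for signed arguments, e.g. "-4@r0" */
    arg->arg_signed = arg_sz < 0;
    if (arg_sz < 0)
        arg_sz = -arg_sz;

    switch (arg_sz) {
    case 1: case 2: case 4: case 8:
        /* shift used at runtime to sign- or zero-extend to 64 bits */
        arg->arg_bitshift = 64 - arg_sz * 8;
        break;
    default:
        return -EINVAL;
    }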
-
Daniel Müller authored
Coverity reported a potential underflow of the offset variable used in the find_cd() function. Switch to using a signed 64-bit integer for the representation of offset to make sure we can never underflow.

Fixes: 1eebcb60 ("libbpf: Implement basic zip archive parsing support")
Signed-off-by: Daniel Müller <deso@posteo.net>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20230307215504.837321-1-deso@posteo.net
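A hedged sketch of the shape of the fix (names are illustrative): scanning backwards from the end of the archive with an unsigned offset can wrap below zero instead of terminating.

    /* before: offset was a u32, so "offset >= 0" was always true and
     * decrementing past zero wrapped around */
    int64_t offset;  /* after: signed 64-bit, termination check is sound */

    for (offset = size - END_OF_CD_RECORD_MIN_SIZE; offset >= 0; offset--) {
        /* try to match the end-of-central-directory signature here */
    }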
-
Alexei Starovoitov authored
Yafang Shao says:

====================
Currently we can't get bpf memory usage reliably either from memcg or from bpftool.

In memcg, there's no 'bpf' item in memory.stat, but only 'kernel', 'sock', 'vmalloc' and 'percpu', which may be related to bpf memory. With these items we still can't get the bpf memory usage, because bpf memory usage may be far less than the kmem in a memcg; for example, the dentry may consume lots of kmem.

bpftool now shows the bpf memory footprint, which differs from the bpf memory usage. The difference can be quite large in some cases, for example:

- non-preallocated bpf map
  The non-preallocated bpf map memory usage is dynamically changed. The allocated elements count can be from 0 to the max entries. But the memory footprint in bpftool only shows a fixed number.

- bpf metadata consumes more memory than bpf element
  In some corner cases, the bpf metadata can consume a lot more memory than the bpf elements. For example, it can happen when the element size is quite small.

- some maps don't have key, value or max_entries
  For example, the key_size and value_size of ringbuf are 0, so its memlock is always 0.

We need a way to show the bpf memory usage, especially as there will be more and more bpf programs running in production environments and thus the bpf memory usage is not trivial.

This patchset introduces a new map ops ->map_mem_usage to calculate the memory usage. Note that we don't intend to make the memory usage 100% accurate; our goal is to make sure there is only a small difference between what bpftool reports and the real memory. That small difference can be ignored compared to the total usage. That is enough to monitor the bpf memory usage. For example, the user can rely on this value to monitor the trend of bpf memory usage, compare the difference in bpf memory usage between different bpf program versions, figure out which maps consume large memory, etc.

This patchset implements the bpf memory usage for all maps, and yet there's still work to do. We don't want to introduce runtime overhead in the element update and delete path, but we have to do it for some non-preallocated maps:

- devmap, xskmap
  When we update or delete an element, it will allocate or free memory. In order to track this dynamic memory, we have to track the count in the element update and delete path.

- cpumap
  The element size of each cpumap element is not determined. If we want to track the usage, we have to count the size of all elements in the element update and delete path. So I just put it aside currently.

- local_storage, bpf_local_storage
  When we attach or detach a cgroup, it will allocate or free memory. If we want to track the dynamic memory, we also need to do something in the update and delete path. So I just put it aside currently.

- offload map
  The element update and delete of an offload map goes via the netdev dev_ops, in which it may dynamically allocate or free memory, but this dynamic memory isn't counted in offload map memory usage currently.

The result for each map can be found in the individual patch.

We may also need to track per-container bpf memory usage; that will be addressed by a different patchset.

Changes:
v3->v4:
- code improvement on ringbuf (Andrii)
- use READ_ONCE() to read lpm_trie (Tao)
- explain why we can't get bpf memory usage from memcg
v2->v3:
- check callback at map creation time and avoid warning (Alexei)
- fix build error under CONFIG_BPF=n (lkp@intel.com)
v1->v2:
- calculate the memory usage within bpf (Alexei)

- [v1] bpf, mm: bpf memory usage
  https://lwn.net/Articles/921991/
- [RFC PATCH v2] mm, bpf: Add BPF into /proc/meminfo
  https://lwn.net/Articles/919848/
- [RFC PATCH v1] mm, bpf: Add BPF into /proc/meminfo
  https://lwn.net/Articles/917647/
- [RFC PATCH] bpf, mm: Add a new item bpf into memory.stat
  https://lore.kernel.org/bpf/20220921170002.29557-1-laoar.shao@gmail.com/
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
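Conceptually, the new hook sits on bpf_map_ops alongside the other per-map-type callbacks (a sketch consistent with the cover letter, not a verbatim kernel excerpt):

    struct bpf_map_ops {
        /* ... existing per-map-type callbacks ... */

        /* return the (approximate) memory usage of this map, in bytes */
        u64 (*map_mem_usage)(const struct bpf_map *map);
    };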
-
Yafang Shao authored
We have implemented the memory usage callback for all maps, and we enforce that any newly added map has such a callback as well. We check this callback at map creation time; if a map doesn't have the callback, we will return EINVAL.

Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Link: https://lore.kernel.org/r/20230305124615.12358-19-laoar.shao@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
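Presumably the creation-time check amounts to something like this in the map creation path (a sketch):

    /* reject any map type that hasn't implemented the callback */
    if (!ops->map_mem_usage)
        return ERR_PTR(-EINVAL);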
-
Yafang Shao authored
A new helper is introduced to calculate offload map memory usage. But currently the memory dynamically allocated in netdev dev_ops, like nsim_map_update_elem, is not counted. Let's just put it aside now.

Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Link: https://lore.kernel.org/r/20230305124615.12358-18-laoar.shao@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Yafang Shao authored
A new helper is introduced to calculate xskmap memory usage. The xskmap memory usage can be dynamically changed when we add or remove a xsk_map_node. Hence we need to track the count of xsk_map_node to get its memory usage.

The result is as follows:

- before
10: xskmap  name count_map  flags 0x0
        key 4B  value 4B  max_entries 65536  memlock 524288B

- after
10: xskmap  name count_map  flags 0x0        <<< no elements case
        key 4B  value 4B  max_entries 65536  memlock 524608B

Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Link: https://lore.kernel.org/r/20230305124615.12358-17-laoar.shao@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Yafang Shao authored
sockmap and sockhash don't have anything in common in allocation, so let's introduce different helpers to calculate their memory usage.

The result is as follows:

- before
28: sockmap  name count_map  flags 0x0
        key 4B  value 4B  max_entries 65536  memlock 524288B
29: sockhash  name count_map  flags 0x0
        key 4B  value 4B  max_entries 65536  memlock 524288B

- after
28: sockmap  name count_map  flags 0x0
        key 4B  value 4B  max_entries 65536  memlock 524608B
29: sockhash  name count_map  flags 0x0        <<<< no updated elements
        key 4B  value 4B  max_entries 65536  memlock 1048896B

Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Link: https://lore.kernel.org/r/20230305124615.12358-16-laoar.shao@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Yafang Shao authored
A new helper is introduced into the bpf_local_storage map to calculate the memory usage. This helper is also used by other maps like bpf_cgrp_storage, bpf_inode_storage, bpf_task_storage, etc.

Note that currently the dynamically allocated storage elements are not counted in the usage, since it would take extra runtime overhead in the element update or delete path. So let's put it aside now, and implement it in the future when someone really needs it.

Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Link: https://lore.kernel.org/r/20230305124615.12358-15-laoar.shao@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Yafang Shao authored
A new helper is introduced to calculate local_storage map memory usage. Currently the dynamically allocated elements are not counted, since it would take runtime overhead in the element update or delete path. So let's put it aside currently, and implement it in the future if the user really needs it.

Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Link: https://lore.kernel.org/r/20230305124615.12358-14-laoar.shao@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Yafang Shao authored
A new helper is introduced to calculate bpf_struct_ops memory usage.

The result is as follows:

- before
1: struct_ops  name count_map  flags 0x0
        key 4B  value 256B  max_entries 1  memlock 4096B
        btf_id 73

- after
1: struct_ops  name count_map  flags 0x0
        key 4B  value 256B  max_entries 1  memlock 5016B
        btf_id 73

Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Link: https://lore.kernel.org/r/20230305124615.12358-13-laoar.shao@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Yafang Shao authored
A new helper is introduced to calculate queue_stack_maps memory usage.

The result is as follows:

- before
20: queue  name count_map  flags 0x0
        key 0B  value 4B  max_entries 65536  memlock 266240B
21: stack  name count_map  flags 0x0
        key 0B  value 4B  max_entries 65536  memlock 266240B

- after
20: queue  name count_map  flags 0x0
        key 0B  value 4B  max_entries 65536  memlock 524288B
21: stack  name count_map  flags 0x0
        key 0B  value 4B  max_entries 65536  memlock 524288B

Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Link: https://lore.kernel.org/r/20230305124615.12358-12-laoar.shao@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-