Commit 31ad3923 authored by Daniel Borkmann

Merge branch 'bpf-ipv6-seg6-bpf-action'

Mathieu Xhonneux says:

====================
As of Linux 4.14, it is possible to define advanced local processing for
IPv6 packets with a Segment Routing Header through the seg6local LWT
infrastructure. This LWT implements the network programming principles
defined in the IETF "SRv6 Network Programming" draft.

The implemented operations are generic, and it would be very interesting to
be able to implement user-specific seg6local actions without having to
modify the kernel directly. To do so, this patchset adds an End.BPF action
to seg6local, powered by specific Segment Routing-related helpers which
provide SR functionalities that can be applied to the packet. This BPF hook
then allows implementing custom actions at native kernel speed, such as OAM
features, advanced SR SDN policies, or SRv6 actions like Segment Routing
Header (SRH) encapsulation depending on the content of the packet.

This patchset is divided into 6 patches, whose main features are:

- A new seg6local action End.BPF with the corresponding new BPF program
  type BPF_PROG_TYPE_LWT_SEG6LOCAL. Such an attached BPF program can be
  passed to the seg6local LWT through netlink, the same way the LWT BPF
  hook operates.
- 3 new BPF helpers for the seg6local BPF hook, allowing a program to edit,
  grow or shrink an SRH and to apply some of the generic SRv6 actions to
  the packet.
- 1 new BPF helper for the LWT BPF IN hook, allowing a program to add an SRH
  through encapsulation (via full IPv6 encapsulation, or inlining if the
  packet already contains an IPv6 header).

As this patchset adds a new LWT BPF hook, I took into account the outcome
of the discussions held when the LWT BPF infrastructure was merged. Hence,
the seg6local BPF hook doesn't allow write access to skb->data directly;
only the SRH can be modified, through specific helpers, which ensures that
the integrity of the packet is maintained. More details are available in
the messages of the related patches.

The performance of this BPF hook has been assessed with the BPF JIT
enabled on an Intel Xeon X3440 processor with 4 cores and 8 threads
clocked at 2.53 GHz. No throughput loss is observed with the seg6local
BPF hook when the BPF program does nothing (440 kpps). Adding an 8-byte
TLV (1 call each to bpf_lwt_seg6_adjust_srh and bpf_lwt_seg6_store_bytes)
drops the throughput to 410 kpps, and inlining an SRH via bpf_lwt_seg6_action
drops the throughput to 420 kpps. All throughputs are stable.
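
To make the measured scenario concrete, here is a minimal sketch (for
illustration only, not one of the selftests) of an End.BPF program adding
such an 8-byte TLV. It assumes the SRH immediately follows the IPv6 header
and carries a single segment with no pre-existing TLVs, so the offset below
is an assumption tied to that layout:

#include <linux/bpf.h>
#include "bpf_helpers.h"

/* Assumed layout: IPv6 header (40 bytes), then an SRH with an 8-byte fixed
 * part and one 16-byte segment, so the TLV area starts at offset 64.
 */
#define TLV_OFF (40 + 8 + 16)

SEC("add_8b_tlv")
int __add_8b_tlv(struct __sk_buff *skb)
{
	/* 8-byte Padding TLV: type 4 (SR6_TLV_PADDING), len 6, six zero bytes */
	char tlv[8] = { 4, 6, 0, 0, 0, 0, 0, 0 };

	/* grow the TLV area of the outermost SRH by 8 bytes ... */
	if (bpf_lwt_seg6_adjust_srh(skb, TLV_OFF, 8))
		return BPF_DROP;

	/* ... and write the TLV into the freshly allocated space */
	if (bpf_lwt_seg6_store_bytes(skb, TLV_OFF, tlv, sizeof(tlv)))
		return BPF_DROP;

	/* BPF_OK: the kernel re-validates the SRH and routes the packet */
	return BPF_OK;
}

char __license[] SEC("license") = "GPL";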

Changelog:

v2: move the SRH integrity state from skb->cb to a per-cpu buffer
v3: - document helpers in man-page style
    - fix kbuild bugs
    - un-break BPF LWT out hook
    - bpf_push_seg6_encap is now static
    - preempt_enable is now called when the packet is dropped in
      input_action_end_bpf
v4: fix kbuild bugs when CONFIG_IPV6=m
v5: fix kbuild sparse warnings when CONFIG_IPV6=m
v6: fix skb pointers-related bugs in helpers
v7: - fix memory leak in error path of End.BPF setup
    - add freeing of BPF data in seg6_local_destroy_state
    - new SEG6_LOCAL_BPF_* enums instead of re-using the LWT BPF ones for
      the nested netlink bpf attributes
    - SEG6_LOCAL_BPF_PROG attr now contains prog->aux->id when dumping
      state
====================
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
parents 30cfe3b4 c99a84ea
......@@ -9,9 +9,10 @@ BPF_PROG_TYPE(BPF_PROG_TYPE_XDP, xdp)
BPF_PROG_TYPE(BPF_PROG_TYPE_CGROUP_SKB, cg_skb)
BPF_PROG_TYPE(BPF_PROG_TYPE_CGROUP_SOCK, cg_sock)
BPF_PROG_TYPE(BPF_PROG_TYPE_CGROUP_SOCK_ADDR, cg_sock_addr)
BPF_PROG_TYPE(BPF_PROG_TYPE_LWT_IN, lwt_inout)
BPF_PROG_TYPE(BPF_PROG_TYPE_LWT_OUT, lwt_inout)
BPF_PROG_TYPE(BPF_PROG_TYPE_LWT_IN, lwt_in)
BPF_PROG_TYPE(BPF_PROG_TYPE_LWT_OUT, lwt_out)
BPF_PROG_TYPE(BPF_PROG_TYPE_LWT_XMIT, lwt_xmit)
BPF_PROG_TYPE(BPF_PROG_TYPE_LWT_SEG6LOCAL, lwt_seg6local)
BPF_PROG_TYPE(BPF_PROG_TYPE_SOCK_OPS, sock_ops)
BPF_PROG_TYPE(BPF_PROG_TYPE_SK_SKB, sk_skb)
BPF_PROG_TYPE(BPF_PROG_TYPE_SK_MSG, sk_msg)
......
......@@ -49,7 +49,11 @@ struct seg6_pernet_data {
static inline struct seg6_pernet_data *seg6_pernet(struct net *net)
{
#if IS_ENABLED(CONFIG_IPV6)
return net->ipv6.seg6_data;
#else
return NULL;
#endif
}
extern int seg6_init(void);
......@@ -63,5 +67,6 @@ extern bool seg6_validate_srh(struct ipv6_sr_hdr *srh, int len);
extern int seg6_do_srh_encap(struct sk_buff *skb, struct ipv6_sr_hdr *osrh,
int proto);
extern int seg6_do_srh_inline(struct sk_buff *skb, struct ipv6_sr_hdr *osrh);
extern int seg6_lookup_nexthop(struct sk_buff *skb, struct in6_addr *nhaddr,
u32 tbl_id);
#endif
/*
* SR-IPv6 implementation
*
* Authors:
* David Lebrun <david.lebrun@uclouvain.be>
* eBPF support: Mathieu Xhonneux <m.xhonneux@gmail.com>
*
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#ifndef _NET_SEG6_LOCAL_H
#define _NET_SEG6_LOCAL_H
#include <linux/percpu.h>
#include <linux/net.h>
#include <linux/ipv6.h>
extern int seg6_lookup_nexthop(struct sk_buff *skb, struct in6_addr *nhaddr,
u32 tbl_id);
struct seg6_bpf_srh_state {
bool valid;
u16 hdrlen;
};
DECLARE_PER_CPU(struct seg6_bpf_srh_state, seg6_bpf_srh_states);
#endif
......@@ -141,6 +141,7 @@ enum bpf_prog_type {
BPF_PROG_TYPE_SK_MSG,
BPF_PROG_TYPE_RAW_TRACEPOINT,
BPF_PROG_TYPE_CGROUP_SOCK_ADDR,
BPF_PROG_TYPE_LWT_SEG6LOCAL,
};
enum bpf_attach_type {
......@@ -1902,6 +1903,90 @@ union bpf_attr {
* egress otherwise). This is the only flag supported for now.
* Return
* **SK_PASS** on success, or **SK_DROP** on error.
*
* int bpf_lwt_push_encap(struct sk_buff *skb, u32 type, void *hdr, u32 len)
* Description
* Encapsulate the packet associated to *skb* within a Layer 3
* protocol header. This header is provided in the buffer at
* address *hdr*, with *len* its size in bytes. *type* indicates
* the protocol of the header and can be one of:
*
* **BPF_LWT_ENCAP_SEG6**
* IPv6 encapsulation with Segment Routing Header
* (**struct ipv6_sr_hdr**). *hdr* only contains the SRH,
* the IPv6 header is computed by the kernel.
* **BPF_LWT_ENCAP_SEG6_INLINE**
* Only works if *skb* contains an IPv6 packet. Insert a
* Segment Routing Header (**struct ipv6_sr_hdr**) inside
* the IPv6 header.
*
* A call to this helper is susceptible to change the underlying
* packet buffer. Therefore, at load time, all checks on pointers
* previously done by the verifier are invalidated and must be
* performed again, if the helper is used in combination with
* direct packet access.
* Return
* 0 on success, or a negative error in case of failure.
*
* int bpf_lwt_seg6_store_bytes(struct sk_buff *skb, u32 offset, const void *from, u32 len)
* Description
* Store *len* bytes from address *from* into the packet
* associated to *skb*, at *offset*. Only the flags, tag and TLVs
* inside the outermost IPv6 Segment Routing Header can be
* modified through this helper.
*
* A call to this helper is susceptible to change the underlying
* packet buffer. Therefore, at load time, all checks on pointers
* previously done by the verifier are invalidated and must be
* performed again, if the helper is used in combination with
* direct packet access.
* Return
* 0 on success, or a negative error in case of failure.
*
* int bpf_lwt_seg6_adjust_srh(struct sk_buff *skb, u32 offset, s32 delta)
* Description
* Adjust the size allocated to TLVs in the outermost IPv6
* Segment Routing Header contained in the packet associated to
* *skb*, at position *offset* by *delta* bytes. Only offsets
* after the segments are accepted. *delta* can be positive
* (growing) as well as negative (shrinking).
*
* A call to this helper is susceptible to change the underlying
* packet buffer. Therefore, at load time, all checks on pointers
* previously done by the verifier are invalidated and must be
* performed again, if the helper is used in combination with
* direct packet access.
* Return
* 0 on success, or a negative error in case of failure.
*
* int bpf_lwt_seg6_action(struct sk_buff *skb, u32 action, void *param, u32 param_len)
* Description
* Apply an IPv6 Segment Routing action of type *action* to the
* packet associated to *skb*. Each action takes a parameter
* contained at address *param*, and of length *param_len* bytes.
* *action* can be one of:
*
* **SEG6_LOCAL_ACTION_END_X**
* End.X action: Endpoint with Layer-3 cross-connect.
* Type of *param*: **struct in6_addr**.
* **SEG6_LOCAL_ACTION_END_T**
* End.T action: Endpoint with specific IPv6 table lookup.
* Type of *param*: **int**.
* **SEG6_LOCAL_ACTION_END_B6**
* End.B6 action: Endpoint bound to an SRv6 policy.
* Type of param: **struct ipv6_sr_hdr**.
* **SEG6_LOCAL_ACTION_END_B6_ENCAP**
* End.B6.Encap action: Endpoint bound to an SRv6
* encapsulation policy.
* Type of param: **struct ipv6_sr_hdr**.
*
* A call to this helper is susceptible to change the underlying
* packet buffer. Therefore, at load time, all checks on pointers
* previously done by the verifier are invalidated and must be
* performed again, if the helper is used in combination with
* direct packet access.
* Return
* 0 on success, or a negative error in case of failure.
*/
#define __BPF_FUNC_MAPPER(FN) \
FN(unspec), \
......@@ -1976,7 +2061,11 @@ union bpf_attr {
FN(fib_lookup), \
FN(sock_hash_update), \
FN(msg_redirect_hash), \
FN(sk_redirect_hash),
FN(sk_redirect_hash), \
FN(lwt_push_encap), \
FN(lwt_seg6_store_bytes), \
FN(lwt_seg6_adjust_srh), \
FN(lwt_seg6_action),
/* integer value in 'imm' field of BPF_CALL instruction selects which helper
* function eBPF program intends to call
......@@ -2043,6 +2132,12 @@ enum bpf_hdr_start_off {
BPF_HDR_START_NET,
};
/* Encapsulation type for BPF_FUNC_lwt_push_encap helper. */
enum bpf_lwt_encap_mode {
BPF_LWT_ENCAP_SEG6,
BPF_LWT_ENCAP_SEG6_INLINE
};
/* user accessible mirror of in-kernel sk_buff.
* new fields can only be added to the end of this structure
*/
......
......@@ -25,6 +25,7 @@ enum {
SEG6_LOCAL_NH6,
SEG6_LOCAL_IIF,
SEG6_LOCAL_OIF,
SEG6_LOCAL_BPF,
__SEG6_LOCAL_MAX,
};
#define SEG6_LOCAL_MAX (__SEG6_LOCAL_MAX - 1)
......@@ -59,10 +60,21 @@ enum {
SEG6_LOCAL_ACTION_END_AS = 13,
/* forward to SR-unaware VNF with masquerading */
SEG6_LOCAL_ACTION_END_AM = 14,
/* custom BPF action */
SEG6_LOCAL_ACTION_END_BPF = 15,
__SEG6_LOCAL_ACTION_MAX,
};
#define SEG6_LOCAL_ACTION_MAX (__SEG6_LOCAL_ACTION_MAX - 1)
enum {
SEG6_LOCAL_BPF_PROG_UNSPEC,
SEG6_LOCAL_BPF_PROG,
SEG6_LOCAL_BPF_PROG_NAME,
__SEG6_LOCAL_BPF_PROG_MAX,
};
#define SEG6_LOCAL_BPF_PROG_MAX (__SEG6_LOCAL_BPF_PROG_MAX - 1)
#endif
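
For context, the new nested SEG6_LOCAL_BPF attribute carries the program fd
(SEG6_LOCAL_BPF_PROG) on configuration (the program id when the state is
dumped) plus a name (SEG6_LOCAL_BPF_PROG_NAME); iproute2 fills them when
parsing "encap seg6local action End.BPF obj ... sec ...". A rough userspace
sketch of obtaining such an fd, assuming the 2018-era libbpf bpf_prog_load()
API (the exact include path may vary by tree):

#include <stdio.h>
#include <linux/bpf.h>
#include <bpf/libbpf.h>

int main(void)
{
	struct bpf_object *obj;
	int prog_fd;

	/* Load the object's first program as an End.BPF program */
	if (bpf_prog_load("test_lwt_seg6local.o", BPF_PROG_TYPE_LWT_SEG6LOCAL,
			  &obj, &prog_fd)) {
		fprintf(stderr, "failed to load the End.BPF program\n");
		return 1;
	}

	/* prog_fd is what would end up in the SEG6_LOCAL_BPF_PROG attribute
	 * of the nested SEG6_LOCAL_BPF netlink attribute built by iproute2.
	 */
	printf("loaded, prog fd = %d\n", prog_fd);
	return 0;
}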
......@@ -1262,6 +1262,7 @@ static bool may_access_direct_pkt_data(struct bpf_verifier_env *env,
switch (env->prog->type) {
case BPF_PROG_TYPE_LWT_IN:
case BPF_PROG_TYPE_LWT_OUT:
case BPF_PROG_TYPE_LWT_SEG6LOCAL:
/* dst_input() and dst_output() can't write for now */
if (t == BPF_WRITE)
return false;
......
......@@ -64,6 +64,10 @@
#include <net/ip_fib.h>
#include <net/flow.h>
#include <net/arp.h>
#include <net/ipv6.h>
#include <linux/seg6_local.h>
#include <net/seg6.h>
#include <net/seg6_local.h>
/**
* sk_filter_trim_cap - run a packet through a socket filter
......@@ -3363,28 +3367,6 @@ static const struct bpf_func_proto bpf_xdp_redirect_map_proto = {
.arg3_type = ARG_ANYTHING,
};
bool bpf_helper_changes_pkt_data(void *func)
{
if (func == bpf_skb_vlan_push ||
func == bpf_skb_vlan_pop ||
func == bpf_skb_store_bytes ||
func == bpf_skb_change_proto ||
func == bpf_skb_change_head ||
func == bpf_skb_change_tail ||
func == bpf_skb_adjust_room ||
func == bpf_skb_pull_data ||
func == bpf_clone_redirect ||
func == bpf_l3_csum_replace ||
func == bpf_l4_csum_replace ||
func == bpf_xdp_adjust_head ||
func == bpf_xdp_adjust_meta ||
func == bpf_msg_pull_data ||
func == bpf_xdp_adjust_tail)
return true;
return false;
}
static unsigned long bpf_skb_copy(void *dst_buff, const void *skb,
unsigned long off, unsigned long len)
{
......@@ -4360,6 +4342,264 @@ static const struct bpf_func_proto bpf_skb_fib_lookup_proto = {
.arg4_type = ARG_ANYTHING,
};
#if IS_ENABLED(CONFIG_IPV6_SEG6_BPF)
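/* Shared path for the BPF_LWT_ENCAP_SEG6* modes: validate the user-supplied
 * SRH, insert it inline or via full IPv6 encapsulation, fix up the IPv6
 * payload length and transport header, then do a nexthop lookup.
 */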
static int bpf_push_seg6_encap(struct sk_buff *skb, u32 type, void *hdr, u32 len)
{
int err;
struct ipv6_sr_hdr *srh = (struct ipv6_sr_hdr *)hdr;
if (!seg6_validate_srh(srh, len))
return -EINVAL;
switch (type) {
case BPF_LWT_ENCAP_SEG6_INLINE:
if (skb->protocol != htons(ETH_P_IPV6))
return -EBADMSG;
err = seg6_do_srh_inline(skb, srh);
break;
case BPF_LWT_ENCAP_SEG6:
skb_reset_inner_headers(skb);
skb->encapsulation = 1;
err = seg6_do_srh_encap(skb, srh, IPPROTO_IPV6);
break;
default:
return -EINVAL;
}
bpf_compute_data_pointers(skb);
if (err)
return err;
ipv6_hdr(skb)->payload_len = htons(skb->len - sizeof(struct ipv6hdr));
skb_set_transport_header(skb, sizeof(struct ipv6hdr));
return seg6_lookup_nexthop(skb, NULL, 0);
}
#endif /* CONFIG_IPV6_SEG6_BPF */
BPF_CALL_4(bpf_lwt_push_encap, struct sk_buff *, skb, u32, type, void *, hdr,
u32, len)
{
switch (type) {
#if IS_ENABLED(CONFIG_IPV6_SEG6_BPF)
case BPF_LWT_ENCAP_SEG6:
case BPF_LWT_ENCAP_SEG6_INLINE:
return bpf_push_seg6_encap(skb, type, hdr, len);
#endif
default:
return -EINVAL;
}
}
static const struct bpf_func_proto bpf_lwt_push_encap_proto = {
.func = bpf_lwt_push_encap,
.gpl_only = false,
.ret_type = RET_INTEGER,
.arg1_type = ARG_PTR_TO_CTX,
.arg2_type = ARG_ANYTHING,
.arg3_type = ARG_PTR_TO_MEM,
.arg4_type = ARG_CONST_SIZE
};
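/* Write len bytes at offset into the outermost SRH. Only the flags, tag and
 * TLV area may be written; a write into the TLV area marks the per-CPU SRH
 * state as needing re-validation.
 */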
BPF_CALL_4(bpf_lwt_seg6_store_bytes, struct sk_buff *, skb, u32, offset,
const void *, from, u32, len)
{
#if IS_ENABLED(CONFIG_IPV6_SEG6_BPF)
struct seg6_bpf_srh_state *srh_state =
this_cpu_ptr(&seg6_bpf_srh_states);
void *srh_tlvs, *srh_end, *ptr;
struct ipv6_sr_hdr *srh;
int srhoff = 0;
if (ipv6_find_hdr(skb, &srhoff, IPPROTO_ROUTING, NULL, NULL) < 0)
return -EINVAL;
srh = (struct ipv6_sr_hdr *)(skb->data + srhoff);
srh_tlvs = (void *)((char *)srh + ((srh->first_segment + 1) << 4));
srh_end = (void *)((char *)srh + sizeof(*srh) + srh_state->hdrlen);
ptr = skb->data + offset;
if (ptr >= srh_tlvs && ptr + len <= srh_end)
srh_state->valid = 0;
else if (ptr < (void *)&srh->flags ||
ptr + len > (void *)&srh->segments)
return -EFAULT;
if (unlikely(bpf_try_make_writable(skb, offset + len)))
return -EFAULT;
memcpy(skb->data + offset, from, len);
return 0;
#else /* CONFIG_IPV6_SEG6_BPF */
return -EOPNOTSUPP;
#endif
}
static const struct bpf_func_proto bpf_lwt_seg6_store_bytes_proto = {
.func = bpf_lwt_seg6_store_bytes,
.gpl_only = false,
.ret_type = RET_INTEGER,
.arg1_type = ARG_PTR_TO_CTX,
.arg2_type = ARG_ANYTHING,
.arg3_type = ARG_PTR_TO_MEM,
.arg4_type = ARG_CONST_SIZE
};
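/* Apply a generic SRv6 action (End.X, End.T, End.B6, End.B6.Encap) from the
 * BPF program. The SRH is re-validated first if a previous helper left the
 * per-CPU state marked as invalid.
 */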
BPF_CALL_4(bpf_lwt_seg6_action, struct sk_buff *, skb,
u32, action, void *, param, u32, param_len)
{
#if IS_ENABLED(CONFIG_IPV6_SEG6_BPF)
struct seg6_bpf_srh_state *srh_state =
this_cpu_ptr(&seg6_bpf_srh_states);
struct ipv6_sr_hdr *srh;
int srhoff = 0;
int err;
if (ipv6_find_hdr(skb, &srhoff, IPPROTO_ROUTING, NULL, NULL) < 0)
return -EINVAL;
srh = (struct ipv6_sr_hdr *)(skb->data + srhoff);
if (!srh_state->valid) {
if (unlikely((srh_state->hdrlen & 7) != 0))
return -EBADMSG;
srh->hdrlen = (u8)(srh_state->hdrlen >> 3);
if (unlikely(!seg6_validate_srh(srh, (srh->hdrlen + 1) << 3)))
return -EBADMSG;
srh_state->valid = 1;
}
switch (action) {
case SEG6_LOCAL_ACTION_END_X:
if (param_len != sizeof(struct in6_addr))
return -EINVAL;
return seg6_lookup_nexthop(skb, (struct in6_addr *)param, 0);
case SEG6_LOCAL_ACTION_END_T:
if (param_len != sizeof(int))
return -EINVAL;
return seg6_lookup_nexthop(skb, NULL, *(int *)param);
case SEG6_LOCAL_ACTION_END_B6:
err = bpf_push_seg6_encap(skb, BPF_LWT_ENCAP_SEG6_INLINE,
param, param_len);
if (!err)
srh_state->hdrlen =
((struct ipv6_sr_hdr *)param)->hdrlen << 3;
return err;
case SEG6_LOCAL_ACTION_END_B6_ENCAP:
err = bpf_push_seg6_encap(skb, BPF_LWT_ENCAP_SEG6,
param, param_len);
if (!err)
srh_state->hdrlen =
((struct ipv6_sr_hdr *)param)->hdrlen << 3;
return err;
default:
return -EINVAL;
}
#else /* CONFIG_IPV6_SEG6_BPF */
return -EOPNOTSUPP;
#endif
}
static const struct bpf_func_proto bpf_lwt_seg6_action_proto = {
.func = bpf_lwt_seg6_action,
.gpl_only = false,
.ret_type = RET_INTEGER,
.arg1_type = ARG_PTR_TO_CTX,
.arg2_type = ARG_ANYTHING,
.arg3_type = ARG_PTR_TO_MEM,
.arg4_type = ARG_CONST_SIZE
};
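/* Grow (len > 0) or shrink (len < 0) the TLV area of the outermost SRH at
 * the given offset, update the IPv6 payload length, and record the new SRH
 * length in the per-CPU state so it gets re-validated later.
 */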
BPF_CALL_3(bpf_lwt_seg6_adjust_srh, struct sk_buff *, skb, u32, offset,
s32, len)
{
#if IS_ENABLED(CONFIG_IPV6_SEG6_BPF)
struct seg6_bpf_srh_state *srh_state =
this_cpu_ptr(&seg6_bpf_srh_states);
void *srh_end, *srh_tlvs, *ptr;
struct ipv6_sr_hdr *srh;
struct ipv6hdr *hdr;
int srhoff = 0;
int ret;
if (ipv6_find_hdr(skb, &srhoff, IPPROTO_ROUTING, NULL, NULL) < 0)
return -EINVAL;
srh = (struct ipv6_sr_hdr *)(skb->data + srhoff);
srh_tlvs = (void *)((unsigned char *)srh + sizeof(*srh) +
((srh->first_segment + 1) << 4));
srh_end = (void *)((unsigned char *)srh + sizeof(*srh) +
srh_state->hdrlen);
ptr = skb->data + offset;
if (unlikely(ptr < srh_tlvs || ptr > srh_end))
return -EFAULT;
if (unlikely(len < 0 && (void *)((char *)ptr - len) > srh_end))
return -EFAULT;
if (len > 0) {
ret = skb_cow_head(skb, len);
if (unlikely(ret < 0))
return ret;
ret = bpf_skb_net_hdr_push(skb, offset, len);
} else {
ret = bpf_skb_net_hdr_pop(skb, offset, -1 * len);
}
bpf_compute_data_pointers(skb);
if (unlikely(ret < 0))
return ret;
hdr = (struct ipv6hdr *)skb->data;
hdr->payload_len = htons(skb->len - sizeof(struct ipv6hdr));
srh_state->hdrlen += len;
srh_state->valid = 0;
return 0;
#else /* CONFIG_IPV6_SEG6_BPF */
return -EOPNOTSUPP;
#endif
}
static const struct bpf_func_proto bpf_lwt_seg6_adjust_srh_proto = {
.func = bpf_lwt_seg6_adjust_srh,
.gpl_only = false,
.ret_type = RET_INTEGER,
.arg1_type = ARG_PTR_TO_CTX,
.arg2_type = ARG_ANYTHING,
.arg3_type = ARG_ANYTHING,
};
bool bpf_helper_changes_pkt_data(void *func)
{
if (func == bpf_skb_vlan_push ||
func == bpf_skb_vlan_pop ||
func == bpf_skb_store_bytes ||
func == bpf_skb_change_proto ||
func == bpf_skb_change_head ||
func == bpf_skb_change_tail ||
func == bpf_skb_adjust_room ||
func == bpf_skb_pull_data ||
func == bpf_clone_redirect ||
func == bpf_l3_csum_replace ||
func == bpf_l4_csum_replace ||
func == bpf_xdp_adjust_head ||
func == bpf_xdp_adjust_meta ||
func == bpf_msg_pull_data ||
func == bpf_xdp_adjust_tail ||
func == bpf_lwt_push_encap ||
func == bpf_lwt_seg6_store_bytes ||
func == bpf_lwt_seg6_adjust_srh ||
func == bpf_lwt_seg6_action
)
return true;
return false;
}
static const struct bpf_func_proto *
bpf_base_func_proto(enum bpf_func_id func_id)
{
......@@ -4543,33 +4783,6 @@ xdp_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
}
}
static const struct bpf_func_proto *
lwt_inout_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
{
switch (func_id) {
case BPF_FUNC_skb_load_bytes:
return &bpf_skb_load_bytes_proto;
case BPF_FUNC_skb_pull_data:
return &bpf_skb_pull_data_proto;
case BPF_FUNC_csum_diff:
return &bpf_csum_diff_proto;
case BPF_FUNC_get_cgroup_classid:
return &bpf_get_cgroup_classid_proto;
case BPF_FUNC_get_route_realm:
return &bpf_get_route_realm_proto;
case BPF_FUNC_get_hash_recalc:
return &bpf_get_hash_recalc_proto;
case BPF_FUNC_perf_event_output:
return &bpf_skb_event_output_proto;
case BPF_FUNC_get_smp_processor_id:
return &bpf_get_smp_processor_id_proto;
case BPF_FUNC_skb_under_cgroup:
return &bpf_skb_under_cgroup_proto;
default:
return bpf_base_func_proto(func_id);
}
}
static const struct bpf_func_proto *
sock_ops_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
{
......@@ -4635,6 +4848,44 @@ sk_skb_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
}
}
static const struct bpf_func_proto *
lwt_out_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
{
switch (func_id) {
case BPF_FUNC_skb_load_bytes:
return &bpf_skb_load_bytes_proto;
case BPF_FUNC_skb_pull_data:
return &bpf_skb_pull_data_proto;
case BPF_FUNC_csum_diff:
return &bpf_csum_diff_proto;
case BPF_FUNC_get_cgroup_classid:
return &bpf_get_cgroup_classid_proto;
case BPF_FUNC_get_route_realm:
return &bpf_get_route_realm_proto;
case BPF_FUNC_get_hash_recalc:
return &bpf_get_hash_recalc_proto;
case BPF_FUNC_perf_event_output:
return &bpf_skb_event_output_proto;
case BPF_FUNC_get_smp_processor_id:
return &bpf_get_smp_processor_id_proto;
case BPF_FUNC_skb_under_cgroup:
return &bpf_skb_under_cgroup_proto;
default:
return bpf_base_func_proto(func_id);
}
}
static const struct bpf_func_proto *
lwt_in_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
{
switch (func_id) {
case BPF_FUNC_lwt_push_encap:
return &bpf_lwt_push_encap_proto;
default:
return lwt_out_func_proto(func_id, prog);
}
}
static const struct bpf_func_proto *
lwt_xmit_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
{
......@@ -4666,7 +4917,22 @@ lwt_xmit_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
case BPF_FUNC_set_hash_invalid:
return &bpf_set_hash_invalid_proto;
default:
return lwt_inout_func_proto(func_id, prog);
return lwt_out_func_proto(func_id, prog);
}
}
static const struct bpf_func_proto *
lwt_seg6local_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
{
switch (func_id) {
case BPF_FUNC_lwt_seg6_store_bytes:
return &bpf_lwt_seg6_store_bytes_proto;
case BPF_FUNC_lwt_seg6_action:
return &bpf_lwt_seg6_action_proto;
case BPF_FUNC_lwt_seg6_adjust_srh:
return &bpf_lwt_seg6_adjust_srh_proto;
default:
return lwt_out_func_proto(func_id, prog);
}
}
......@@ -4774,7 +5040,6 @@ static bool lwt_is_valid_access(int off, int size,
return bpf_skb_is_valid_access(off, size, type, prog, info);
}
/* Attach type specific accesses */
static bool __sock_filter_check_attach_type(int off,
enum bpf_access_type access_type,
......@@ -6348,13 +6613,23 @@ const struct bpf_prog_ops cg_skb_prog_ops = {
.test_run = bpf_prog_test_run_skb,
};
const struct bpf_verifier_ops lwt_inout_verifier_ops = {
.get_func_proto = lwt_inout_func_proto,
const struct bpf_verifier_ops lwt_in_verifier_ops = {
.get_func_proto = lwt_in_func_proto,
.is_valid_access = lwt_is_valid_access,
.convert_ctx_access = bpf_convert_ctx_access,
};
const struct bpf_prog_ops lwt_inout_prog_ops = {
const struct bpf_prog_ops lwt_in_prog_ops = {
.test_run = bpf_prog_test_run_skb,
};
const struct bpf_verifier_ops lwt_out_verifier_ops = {
.get_func_proto = lwt_out_func_proto,
.is_valid_access = lwt_is_valid_access,
.convert_ctx_access = bpf_convert_ctx_access,
};
const struct bpf_prog_ops lwt_out_prog_ops = {
.test_run = bpf_prog_test_run_skb,
};
......@@ -6369,6 +6644,16 @@ const struct bpf_prog_ops lwt_xmit_prog_ops = {
.test_run = bpf_prog_test_run_skb,
};
const struct bpf_verifier_ops lwt_seg6local_verifier_ops = {
.get_func_proto = lwt_seg6local_func_proto,
.is_valid_access = lwt_is_valid_access,
.convert_ctx_access = bpf_convert_ctx_access,
};
const struct bpf_prog_ops lwt_seg6local_prog_ops = {
.test_run = bpf_prog_test_run_skb,
};
const struct bpf_verifier_ops cg_sock_verifier_ops = {
.get_func_proto = sock_filter_func_proto,
.is_valid_access = sock_filter_is_valid_access,
......
......@@ -329,4 +329,9 @@ config IPV6_SEG6_HMAC
If unsure, say N.
config IPV6_SEG6_BPF
def_bool y
depends on IPV6_SEG6_LWTUNNEL
depends on IPV6 = y
endif # IPV6
/*
* SR-IPv6 implementation
*
* Author:
* Authors:
* David Lebrun <david.lebrun@uclouvain.be>
* eBPF support: Mathieu Xhonneux <m.xhonneux@gmail.com>
*
*
* This program is free software; you can redistribute it and/or
......@@ -30,7 +31,9 @@
#ifdef CONFIG_IPV6_SEG6_HMAC
#include <net/seg6_hmac.h>
#endif
#include <net/seg6_local.h>
#include <linux/etherdevice.h>
#include <linux/bpf.h>
struct seg6_local_lwt;
......@@ -41,6 +44,11 @@ struct seg6_action_desc {
int static_headroom;
};
struct bpf_lwt_prog {
struct bpf_prog *prog;
char *name;
};
struct seg6_local_lwt {
int action;
struct ipv6_sr_hdr *srh;
......@@ -49,6 +57,7 @@ struct seg6_local_lwt {
struct in6_addr nh6;
int iif;
int oif;
struct bpf_lwt_prog bpf;
int headroom;
struct seg6_action_desc *desc;
......@@ -140,8 +149,8 @@ static void advance_nextseg(struct ipv6_sr_hdr *srh, struct in6_addr *daddr)
*daddr = *addr;
}
static void lookup_nexthop(struct sk_buff *skb, struct in6_addr *nhaddr,
u32 tbl_id)
int seg6_lookup_nexthop(struct sk_buff *skb, struct in6_addr *nhaddr,
u32 tbl_id)
{
struct net *net = dev_net(skb->dev);
struct ipv6hdr *hdr = ipv6_hdr(skb);
......@@ -187,6 +196,7 @@ static void lookup_nexthop(struct sk_buff *skb, struct in6_addr *nhaddr,
skb_dst_drop(skb);
skb_dst_set(skb, dst);
return dst->error;
}
/* regular endpoint function */
......@@ -200,7 +210,7 @@ static int input_action_end(struct sk_buff *skb, struct seg6_local_lwt *slwt)
advance_nextseg(srh, &ipv6_hdr(skb)->daddr);
lookup_nexthop(skb, NULL, 0);
seg6_lookup_nexthop(skb, NULL, 0);
return dst_input(skb);
......@@ -220,7 +230,7 @@ static int input_action_end_x(struct sk_buff *skb, struct seg6_local_lwt *slwt)
advance_nextseg(srh, &ipv6_hdr(skb)->daddr);
lookup_nexthop(skb, &slwt->nh6, 0);
seg6_lookup_nexthop(skb, &slwt->nh6, 0);
return dst_input(skb);
......@@ -239,7 +249,7 @@ static int input_action_end_t(struct sk_buff *skb, struct seg6_local_lwt *slwt)
advance_nextseg(srh, &ipv6_hdr(skb)->daddr);
lookup_nexthop(skb, NULL, slwt->table);
seg6_lookup_nexthop(skb, NULL, slwt->table);
return dst_input(skb);
......@@ -331,7 +341,7 @@ static int input_action_end_dx6(struct sk_buff *skb,
if (!ipv6_addr_any(&slwt->nh6))
nhaddr = &slwt->nh6;
lookup_nexthop(skb, nhaddr, 0);
seg6_lookup_nexthop(skb, nhaddr, 0);
return dst_input(skb);
drop:
......@@ -380,7 +390,7 @@ static int input_action_end_dt6(struct sk_buff *skb,
if (!pskb_may_pull(skb, sizeof(struct ipv6hdr)))
goto drop;
lookup_nexthop(skb, NULL, slwt->table);
seg6_lookup_nexthop(skb, NULL, slwt->table);
return dst_input(skb);
......@@ -406,7 +416,7 @@ static int input_action_end_b6(struct sk_buff *skb, struct seg6_local_lwt *slwt)
ipv6_hdr(skb)->payload_len = htons(skb->len - sizeof(struct ipv6hdr));
skb_set_transport_header(skb, sizeof(struct ipv6hdr));
lookup_nexthop(skb, NULL, 0);
seg6_lookup_nexthop(skb, NULL, 0);
return dst_input(skb);
......@@ -438,7 +448,7 @@ static int input_action_end_b6_encap(struct sk_buff *skb,
ipv6_hdr(skb)->payload_len = htons(skb->len - sizeof(struct ipv6hdr));
skb_set_transport_header(skb, sizeof(struct ipv6hdr));
lookup_nexthop(skb, NULL, 0);
seg6_lookup_nexthop(skb, NULL, 0);
return dst_input(skb);
......@@ -447,6 +457,71 @@ static int input_action_end_b6_encap(struct sk_buff *skb,
return err;
}
DEFINE_PER_CPU(struct seg6_bpf_srh_state, seg6_bpf_srh_states);
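/* End.BPF action: advance to the next segment, run the attached program with
 * the per-CPU SRH state initialised, then re-validate the possibly rewritten
 * SRH before routing the packet (unless the program returned BPF_REDIRECT).
 */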
static int input_action_end_bpf(struct sk_buff *skb,
struct seg6_local_lwt *slwt)
{
struct seg6_bpf_srh_state *srh_state =
this_cpu_ptr(&seg6_bpf_srh_states);
struct seg6_bpf_srh_state local_srh_state;
struct ipv6_sr_hdr *srh;
int srhoff = 0;
int ret;
srh = get_and_validate_srh(skb);
if (!srh)
goto drop;
advance_nextseg(srh, &ipv6_hdr(skb)->daddr);
/* preempt_disable is needed to protect the per-CPU buffer srh_state,
* which is also accessed by the bpf_lwt_seg6_* helpers
*/
preempt_disable();
srh_state->hdrlen = srh->hdrlen << 3;
srh_state->valid = 1;
rcu_read_lock();
bpf_compute_data_pointers(skb);
ret = bpf_prog_run_save_cb(slwt->bpf.prog, skb);
rcu_read_unlock();
local_srh_state = *srh_state;
preempt_enable();
switch (ret) {
case BPF_OK:
case BPF_REDIRECT:
break;
case BPF_DROP:
goto drop;
default:
pr_warn_once("bpf-seg6local: Illegal return value %u\n", ret);
goto drop;
}
if (unlikely((local_srh_state.hdrlen & 7) != 0))
goto drop;
if (ipv6_find_hdr(skb, &srhoff, IPPROTO_ROUTING, NULL, NULL) < 0)
goto drop;
srh = (struct ipv6_sr_hdr *)(skb->data + srhoff);
srh->hdrlen = (u8)(local_srh_state.hdrlen >> 3);
if (!local_srh_state.valid &&
unlikely(!seg6_validate_srh(srh, (srh->hdrlen + 1) << 3)))
goto drop;
if (ret != BPF_REDIRECT)
seg6_lookup_nexthop(skb, NULL, 0);
return dst_input(skb);
drop:
kfree_skb(skb);
return -EINVAL;
}
static struct seg6_action_desc seg6_action_table[] = {
{
.action = SEG6_LOCAL_ACTION_END,
......@@ -493,7 +568,13 @@ static struct seg6_action_desc seg6_action_table[] = {
.attrs = (1 << SEG6_LOCAL_SRH),
.input = input_action_end_b6_encap,
.static_headroom = sizeof(struct ipv6hdr),
}
},
{
.action = SEG6_LOCAL_ACTION_END_BPF,
.attrs = (1 << SEG6_LOCAL_BPF),
.input = input_action_end_bpf,
},
};
static struct seg6_action_desc *__get_action_desc(int action)
......@@ -538,6 +619,7 @@ static const struct nla_policy seg6_local_policy[SEG6_LOCAL_MAX + 1] = {
.len = sizeof(struct in6_addr) },
[SEG6_LOCAL_IIF] = { .type = NLA_U32 },
[SEG6_LOCAL_OIF] = { .type = NLA_U32 },
[SEG6_LOCAL_BPF] = { .type = NLA_NESTED },
};
static int parse_nla_srh(struct nlattr **attrs, struct seg6_local_lwt *slwt)
......@@ -715,6 +797,75 @@ static int cmp_nla_oif(struct seg6_local_lwt *a, struct seg6_local_lwt *b)
return 0;
}
#define MAX_PROG_NAME 256
static const struct nla_policy bpf_prog_policy[SEG6_LOCAL_BPF_PROG_MAX + 1] = {
[SEG6_LOCAL_BPF_PROG] = { .type = NLA_U32, },
[SEG6_LOCAL_BPF_PROG_NAME] = { .type = NLA_NUL_STRING,
.len = MAX_PROG_NAME },
};
static int parse_nla_bpf(struct nlattr **attrs, struct seg6_local_lwt *slwt)
{
struct nlattr *tb[SEG6_LOCAL_BPF_PROG_MAX + 1];
struct bpf_prog *p;
int ret;
u32 fd;
ret = nla_parse_nested(tb, SEG6_LOCAL_BPF_PROG_MAX,
attrs[SEG6_LOCAL_BPF], bpf_prog_policy, NULL);
if (ret < 0)
return ret;
if (!tb[SEG6_LOCAL_BPF_PROG] || !tb[SEG6_LOCAL_BPF_PROG_NAME])
return -EINVAL;
slwt->bpf.name = nla_memdup(tb[SEG6_LOCAL_BPF_PROG_NAME], GFP_KERNEL);
if (!slwt->bpf.name)
return -ENOMEM;
fd = nla_get_u32(tb[SEG6_LOCAL_BPF_PROG]);
p = bpf_prog_get_type(fd, BPF_PROG_TYPE_LWT_SEG6LOCAL);
if (IS_ERR(p)) {
kfree(slwt->bpf.name);
return PTR_ERR(p);
}
slwt->bpf.prog = p;
return 0;
}
static int put_nla_bpf(struct sk_buff *skb, struct seg6_local_lwt *slwt)
{
struct nlattr *nest;
if (!slwt->bpf.prog)
return 0;
nest = nla_nest_start(skb, SEG6_LOCAL_BPF);
if (!nest)
return -EMSGSIZE;
if (nla_put_u32(skb, SEG6_LOCAL_BPF_PROG, slwt->bpf.prog->aux->id))
return -EMSGSIZE;
if (slwt->bpf.name &&
nla_put_string(skb, SEG6_LOCAL_BPF_PROG_NAME, slwt->bpf.name))
return -EMSGSIZE;
return nla_nest_end(skb, nest);
}
static int cmp_nla_bpf(struct seg6_local_lwt *a, struct seg6_local_lwt *b)
{
if (!a->bpf.name && !b->bpf.name)
return 0;
if (!a->bpf.name || !b->bpf.name)
return 1;
return strcmp(a->bpf.name, b->bpf.name);
}
struct seg6_action_param {
int (*parse)(struct nlattr **attrs, struct seg6_local_lwt *slwt);
int (*put)(struct sk_buff *skb, struct seg6_local_lwt *slwt);
......@@ -745,6 +896,11 @@ static struct seg6_action_param seg6_action_params[SEG6_LOCAL_MAX + 1] = {
[SEG6_LOCAL_OIF] = { .parse = parse_nla_oif,
.put = put_nla_oif,
.cmp = cmp_nla_oif },
[SEG6_LOCAL_BPF] = { .parse = parse_nla_bpf,
.put = put_nla_bpf,
.cmp = cmp_nla_bpf },
};
static int parse_nla_action(struct nlattr **attrs, struct seg6_local_lwt *slwt)
......@@ -830,6 +986,13 @@ static void seg6_local_destroy_state(struct lwtunnel_state *lwt)
struct seg6_local_lwt *slwt = seg6_local_lwtunnel(lwt);
kfree(slwt->srh);
if (slwt->desc->attrs & (1 << SEG6_LOCAL_BPF)) {
kfree(slwt->bpf.name);
bpf_prog_put(slwt->bpf.prog);
}
return;
}
static int seg6_local_fill_encap(struct sk_buff *skb,
......@@ -882,6 +1045,11 @@ static int seg6_local_get_encap_size(struct lwtunnel_state *lwt)
if (attrs & (1 << SEG6_LOCAL_OIF))
nlsize += nla_total_size(4);
if (attrs & (1 << SEG6_LOCAL_BPF))
nlsize += nla_total_size(sizeof(struct nlattr)) +
nla_total_size(MAX_PROG_NAME) +
nla_total_size(4);
return nlsize;
}
......
......@@ -141,6 +141,7 @@ enum bpf_prog_type {
BPF_PROG_TYPE_SK_MSG,
BPF_PROG_TYPE_RAW_TRACEPOINT,
BPF_PROG_TYPE_CGROUP_SOCK_ADDR,
BPF_PROG_TYPE_LWT_SEG6LOCAL,
};
enum bpf_attach_type {
......@@ -1902,6 +1903,90 @@ union bpf_attr {
* egress otherwise). This is the only flag supported for now.
* Return
* **SK_PASS** on success, or **SK_DROP** on error.
*
* int bpf_lwt_push_encap(struct sk_buff *skb, u32 type, void *hdr, u32 len)
* Description
* Encapsulate the packet associated to *skb* within a Layer 3
* protocol header. This header is provided in the buffer at
* address *hdr*, with *len* its size in bytes. *type* indicates
* the protocol of the header and can be one of:
*
* **BPF_LWT_ENCAP_SEG6**
* IPv6 encapsulation with Segment Routing Header
* (**struct ipv6_sr_hdr**). *hdr* only contains the SRH,
* the IPv6 header is computed by the kernel.
* **BPF_LWT_ENCAP_SEG6_INLINE**
* Only works if *skb* contains an IPv6 packet. Insert a
* Segment Routing Header (**struct ipv6_sr_hdr**) inside
* the IPv6 header.
*
* A call to this helper is susceptible to change the underlying
* packet buffer. Therefore, at load time, all checks on pointers
* previously done by the verifier are invalidated and must be
* performed again, if the helper is used in combination with
* direct packet access.
* Return
* 0 on success, or a negative error in case of failure.
*
* int bpf_lwt_seg6_store_bytes(struct sk_buff *skb, u32 offset, const void *from, u32 len)
* Description
* Store *len* bytes from address *from* into the packet
* associated to *skb*, at *offset*. Only the flags, tag and TLVs
* inside the outermost IPv6 Segment Routing Header can be
* modified through this helper.
*
* A call to this helper is susceptible to change the underlying
* packet buffer. Therefore, at load time, all checks on pointers
* previously done by the verifier are invalidated and must be
* performed again, if the helper is used in combination with
* direct packet access.
* Return
* 0 on success, or a negative error in case of failure.
*
* int bpf_lwt_seg6_adjust_srh(struct sk_buff *skb, u32 offset, s32 delta)
* Description
* Adjust the size allocated to TLVs in the outermost IPv6
* Segment Routing Header contained in the packet associated to
* *skb*, at position *offset* by *delta* bytes. Only offsets
* after the segments are accepted. *delta* can be positive
* (growing) as well as negative (shrinking).
*
* A call to this helper is susceptible to change the underlying
* packet buffer. Therefore, at load time, all checks on pointers
* previously done by the verifier are invalidated and must be
* performed again, if the helper is used in combination with
* direct packet access.
* Return
* 0 on success, or a negative error in case of failure.
*
* int bpf_lwt_seg6_action(struct sk_buff *skb, u32 action, void *param, u32 param_len)
* Description
* Apply an IPv6 Segment Routing action of type *action* to the
* packet associated to *skb*. Each action takes a parameter
* contained at address *param*, and of length *param_len* bytes.
* *action* can be one of:
*
* **SEG6_LOCAL_ACTION_END_X**
* End.X action: Endpoint with Layer-3 cross-connect.
* Type of *param*: **struct in6_addr**.
* **SEG6_LOCAL_ACTION_END_T**
* End.T action: Endpoint with specific IPv6 table lookup.
* Type of *param*: **int**.
* **SEG6_LOCAL_ACTION_END_B6**
* End.B6 action: Endpoint bound to an SRv6 policy.
* Type of param: **struct ipv6_sr_hdr**.
* **SEG6_LOCAL_ACTION_END_B6_ENCAP**
* End.B6.Encap action: Endpoint bound to an SRv6
* encapsulation policy.
* Type of param: **struct ipv6_sr_hdr**.
*
* A call to this helper is susceptible to change the underlying
* packet buffer. Therefore, at load time, all checks on pointers
* previously done by the verifier are invalidated and must be
* performed again, if the helper is used in combination with
* direct packet access.
* Return
* 0 on success, or a negative error in case of failure.
*/
#define __BPF_FUNC_MAPPER(FN) \
FN(unspec), \
......@@ -1976,7 +2061,11 @@ union bpf_attr {
FN(fib_lookup), \
FN(sock_hash_update), \
FN(msg_redirect_hash), \
FN(sk_redirect_hash),
FN(sk_redirect_hash), \
FN(lwt_push_encap), \
FN(lwt_seg6_store_bytes), \
FN(lwt_seg6_adjust_srh), \
FN(lwt_seg6_action),
/* integer value in 'imm' field of BPF_CALL instruction selects which helper
* function eBPF program intends to call
......@@ -2043,6 +2132,12 @@ enum bpf_hdr_start_off {
BPF_HDR_START_NET,
};
/* Encapsulation type for BPF_FUNC_lwt_push_encap helper. */
enum bpf_lwt_encap_mode {
BPF_LWT_ENCAP_SEG6,
BPF_LWT_ENCAP_SEG6_INLINE
};
/* user accessible mirror of in-kernel sk_buff.
* new fields can only be added to the end of this structure
*/
......
......@@ -1456,6 +1456,7 @@ static bool bpf_prog_type__needs_kver(enum bpf_prog_type type)
case BPF_PROG_TYPE_LWT_IN:
case BPF_PROG_TYPE_LWT_OUT:
case BPF_PROG_TYPE_LWT_XMIT:
case BPF_PROG_TYPE_LWT_SEG6LOCAL:
case BPF_PROG_TYPE_SOCK_OPS:
case BPF_PROG_TYPE_SK_SKB:
case BPF_PROG_TYPE_CGROUP_DEVICE:
......
......@@ -33,7 +33,8 @@ TEST_GEN_FILES = test_pkt_access.o test_xdp.o test_l4lb.o test_tcp_estats.o test
sample_map_ret0.o test_tcpbpf_kern.o test_stacktrace_build_id.o \
sockmap_tcp_msg_prog.o connect4_prog.o connect6_prog.o test_adjust_tail.o \
test_btf_haskv.o test_btf_nokv.o test_sockmap_kern.o test_tunnel_kern.o \
test_get_stack_rawtp.o test_sockmap_kern.o test_sockhash_kern.o
test_get_stack_rawtp.o test_sockmap_kern.o test_sockhash_kern.o \
test_lwt_seg6local.o
# Order correspond to 'make run_tests' order
TEST_PROGS := test_kmod.sh \
......@@ -42,7 +43,8 @@ TEST_PROGS := test_kmod.sh \
test_xdp_meta.sh \
test_offload.py \
test_sock_addr.sh \
test_tunnel.sh
test_tunnel.sh \
test_lwt_seg6local.sh
# Compile but not part of 'make run_tests'
TEST_GEN_PROGS_EXTENDED = test_libbpf_open test_sock_addr
......
......@@ -114,6 +114,18 @@ static int (*bpf_get_stack)(void *ctx, void *buf, int size, int flags) =
static int (*bpf_fib_lookup)(void *ctx, struct bpf_fib_lookup *params,
int plen, __u32 flags) =
(void *) BPF_FUNC_fib_lookup;
static int (*bpf_lwt_push_encap)(void *ctx, unsigned int type, void *hdr,
unsigned int len) =
(void *) BPF_FUNC_lwt_push_encap;
static int (*bpf_lwt_seg6_store_bytes)(void *ctx, unsigned int offset,
void *from, unsigned int len) =
(void *) BPF_FUNC_lwt_seg6_store_bytes;
static int (*bpf_lwt_seg6_action)(void *ctx, unsigned int action, void *param,
unsigned int param_len) =
(void *) BPF_FUNC_lwt_seg6_action;
static int (*bpf_lwt_seg6_adjust_srh)(void *ctx, unsigned int offset,
unsigned int len) =
(void *) BPF_FUNC_lwt_seg6_adjust_srh;
/* llvm builtin functions that eBPF C program may use to
* emit BPF_LD_ABS and BPF_LD_IND instructions
......
#include <stddef.h>
#include <inttypes.h>
#include <errno.h>
#include <linux/seg6_local.h>
#include <linux/bpf.h>
#include "bpf_helpers.h"
#include "bpf_endian.h"
#define bpf_printk(fmt, ...) \
({ \
char ____fmt[] = fmt; \
bpf_trace_printk(____fmt, sizeof(____fmt), \
##__VA_ARGS__); \
})
/* Packet parsing state machine helpers. */
#define cursor_advance(_cursor, _len) \
({ void *_tmp = _cursor; _cursor += _len; _tmp; })
#define SR6_FLAG_ALERT (1 << 4)
#define htonll(x) ((bpf_htonl(1)) == 1 ? (x) : ((uint64_t)bpf_htonl((x) & \
0xFFFFFFFF) << 32) | bpf_htonl((x) >> 32))
#define ntohll(x) ((bpf_ntohl(1)) == 1 ? (x) : ((uint64_t)bpf_ntohl((x) & \
0xFFFFFFFF) << 32) | bpf_ntohl((x) >> 32))
#define BPF_PACKET_HEADER __attribute__((packed))
struct ip6_t {
unsigned int ver:4;
unsigned int priority:8;
unsigned int flow_label:20;
unsigned short payload_len;
unsigned char next_header;
unsigned char hop_limit;
unsigned long long src_hi;
unsigned long long src_lo;
unsigned long long dst_hi;
unsigned long long dst_lo;
} BPF_PACKET_HEADER;
struct ip6_addr_t {
unsigned long long hi;
unsigned long long lo;
} BPF_PACKET_HEADER;
struct ip6_srh_t {
unsigned char nexthdr;
unsigned char hdrlen;
unsigned char type;
unsigned char segments_left;
unsigned char first_segment;
unsigned char flags;
unsigned short tag;
struct ip6_addr_t segments[0];
} BPF_PACKET_HEADER;
struct sr6_tlv_t {
unsigned char type;
unsigned char len;
unsigned char value[0];
} BPF_PACKET_HEADER;
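/* Parse the packet via direct access and return a pointer to the outer SRH
 * (IPv6 next header 43, routing type 4), or NULL if any bounds check fails
 * or the packet does not carry an SRH.
 */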
__attribute__((always_inline)) struct ip6_srh_t *get_srh(struct __sk_buff *skb)
{
void *cursor, *data_end;
struct ip6_srh_t *srh;
struct ip6_t *ip;
uint8_t *ipver;
data_end = (void *)(long)skb->data_end;
cursor = (void *)(long)skb->data;
ipver = (uint8_t *)cursor;
if ((void *)ipver + sizeof(*ipver) > data_end)
return NULL;
if ((*ipver >> 4) != 6)
return NULL;
ip = cursor_advance(cursor, sizeof(*ip));
if ((void *)ip + sizeof(*ip) > data_end)
return NULL;
if (ip->next_header != 43)
return NULL;
srh = cursor_advance(cursor, sizeof(*srh));
if ((void *)srh + sizeof(*srh) > data_end)
return NULL;
if (srh->type != 4)
return NULL;
return srh;
}
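/* Resize the padding at pad_off from old_pad to new_pad bytes, then rewrite
 * a fresh Padding TLV (type/len plus zero bytes) when padding is still
 * needed, using the two SRH helpers.
 */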
__attribute__((always_inline))
int update_tlv_pad(struct __sk_buff *skb, uint32_t new_pad,
uint32_t old_pad, uint32_t pad_off)
{
int err;
if (new_pad != old_pad) {
err = bpf_lwt_seg6_adjust_srh(skb, pad_off,
(int) new_pad - (int) old_pad);
if (err)
return err;
}
if (new_pad > 0) {
char pad_tlv_buf[16] = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0};
struct sr6_tlv_t *pad_tlv = (struct sr6_tlv_t *) pad_tlv_buf;
pad_tlv->type = SR6_TLV_PADDING;
pad_tlv->len = new_pad - 2;
err = bpf_lwt_seg6_store_bytes(skb, pad_off,
(void *)pad_tlv_buf, new_pad);
if (err)
return err;
}
return 0;
}
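/* Walk the TLV area of the SRH with a bounded, fully unrolled loop: locate
 * the Padding/HMAC TLVs, report their offset and size, and check that
 * *tlv_off falls on a TLV boundary (or pick the end of the TLVs when -1).
 */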
__attribute__((always_inline))
int is_valid_tlv_boundary(struct __sk_buff *skb, struct ip6_srh_t *srh,
uint32_t *tlv_off, uint32_t *pad_size,
uint32_t *pad_off)
{
uint32_t srh_off, cur_off;
int offset_valid = 0;
int err;
srh_off = (char *)srh - (char *)(long)skb->data;
// cur_off = end of segments, start of possible TLVs
cur_off = srh_off + sizeof(*srh) +
sizeof(struct ip6_addr_t) * (srh->first_segment + 1);
*pad_off = 0;
// we can only go as far as ~10 TLVs due to the BPF max stack size
#pragma clang loop unroll(full)
for (int i = 0; i < 10; i++) {
struct sr6_tlv_t tlv;
if (cur_off == *tlv_off)
offset_valid = 1;
if (cur_off >= srh_off + ((srh->hdrlen + 1) << 3))
break;
err = bpf_skb_load_bytes(skb, cur_off, &tlv, sizeof(tlv));
if (err)
return err;
if (tlv.type == SR6_TLV_PADDING) {
*pad_size = tlv.len + sizeof(tlv);
*pad_off = cur_off;
if (*tlv_off == srh_off) {
*tlv_off = cur_off;
offset_valid = 1;
}
break;
} else if (tlv.type == SR6_TLV_HMAC) {
break;
}
cur_off += sizeof(tlv) + tlv.len;
} // we reached the padding or HMAC TLVs, or the end of the SRH
if (*pad_off == 0)
*pad_off = cur_off;
if (*tlv_off == -1)
*tlv_off = cur_off;
else if (!offset_valid)
return -EINVAL;
return 0;
}
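/* Insert the TLV itlv at tlv_off (relative to the SRH start, -1 to append):
 * grow the SRH, store the TLV, then recompute the trailing padding so the
 * SRH stays a multiple of 8 bytes.
 */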
__attribute__((always_inline))
int add_tlv(struct __sk_buff *skb, struct ip6_srh_t *srh, uint32_t tlv_off,
struct sr6_tlv_t *itlv, uint8_t tlv_size)
{
uint32_t srh_off = (char *)srh - (char *)(long)skb->data;
uint8_t len_remaining, new_pad;
uint32_t pad_off = 0;
uint32_t pad_size = 0;
uint32_t partial_srh_len;
int err;
if (tlv_off != -1)
tlv_off += srh_off;
if (itlv->type == SR6_TLV_PADDING || itlv->type == SR6_TLV_HMAC)
return -EINVAL;
err = is_valid_tlv_boundary(skb, srh, &tlv_off, &pad_size, &pad_off);
if (err)
return err;
err = bpf_lwt_seg6_adjust_srh(skb, tlv_off, sizeof(*itlv) + itlv->len);
if (err)
return err;
err = bpf_lwt_seg6_store_bytes(skb, tlv_off, (void *)itlv, tlv_size);
if (err)
return err;
// the following can't be moved inside update_tlv_pad because the
// bpf verifier has some issues with it
pad_off += sizeof(*itlv) + itlv->len;
partial_srh_len = pad_off - srh_off;
len_remaining = partial_srh_len % 8;
new_pad = 8 - len_remaining;
if (new_pad == 1) // cannot pad for 1 byte only
new_pad = 9;
else if (new_pad == 8)
new_pad = 0;
return update_tlv_pad(skb, new_pad, pad_size, pad_off);
}
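/* Remove the TLV at tlv_off (relative to the SRH start): shrink the SRH by
 * the TLV size and recompute the trailing padding.
 */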
__attribute__((always_inline))
int delete_tlv(struct __sk_buff *skb, struct ip6_srh_t *srh,
uint32_t tlv_off)
{
uint32_t srh_off = (char *)srh - (char *)(long)skb->data;
uint8_t len_remaining, new_pad;
uint32_t partial_srh_len;
uint32_t pad_off = 0;
uint32_t pad_size = 0;
struct sr6_tlv_t tlv;
int err;
tlv_off += srh_off;
err = is_valid_tlv_boundary(skb, srh, &tlv_off, &pad_size, &pad_off);
if (err)
return err;
err = bpf_skb_load_bytes(skb, tlv_off, &tlv, sizeof(tlv));
if (err)
return err;
err = bpf_lwt_seg6_adjust_srh(skb, tlv_off, -(sizeof(tlv) + tlv.len));
if (err)
return err;
pad_off -= sizeof(tlv) + tlv.len;
partial_srh_len = pad_off - srh_off;
len_remaining = partial_srh_len % 8;
new_pad = 8 - len_remaining;
if (new_pad == 1) // cannot pad for 1 byte only
new_pad = 9;
else if (new_pad == 8)
new_pad = 0;
return update_tlv_pad(skb, new_pad, pad_size, pad_off);
}
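/* Return 1 if the first TLV after the segment list is an Egress TLV of
 * length 18 whose address is fd00::4, 0 otherwise.
 */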
__attribute__((always_inline))
int has_egr_tlv(struct __sk_buff *skb, struct ip6_srh_t *srh)
{
int tlv_offset = sizeof(struct ip6_t) + sizeof(struct ip6_srh_t) +
((srh->first_segment + 1) << 4);
struct sr6_tlv_t tlv;
if (bpf_skb_load_bytes(skb, tlv_offset, &tlv, sizeof(struct sr6_tlv_t)))
return 0;
if (tlv.type == SR6_TLV_EGRESS && tlv.len == 18) {
struct ip6_addr_t egr_addr;
if (bpf_skb_load_bytes(skb, tlv_offset + 4, &egr_addr, 16))
return 0;
// check if egress TLV value is correct
if (ntohll(egr_addr.hi) == 0xfd00000000000000 &&
ntohll(egr_addr.lo) == 0x4)
return 1;
}
return 0;
}
// This function will push a SRH with segments fd00::1, fd00::2, fd00::3,
// fd00::4
SEC("encap_srh")
int __encap_srh(struct __sk_buff *skb)
{
unsigned long long hi = 0xfd00000000000000;
struct ip6_addr_t *seg;
struct ip6_srh_t *srh;
char srh_buf[72]; // room for 4 segments
int err;
srh = (struct ip6_srh_t *)srh_buf;
srh->nexthdr = 0;
srh->hdrlen = 8;
srh->type = 4;
srh->segments_left = 3;
srh->first_segment = 3;
srh->flags = 0;
srh->tag = 0;
seg = (struct ip6_addr_t *)((char *)srh + sizeof(*srh));
#pragma clang loop unroll(full)
for (unsigned long long lo = 0; lo < 4; lo++) {
seg->lo = htonll(4 - lo);
seg->hi = htonll(hi);
seg = (struct ip6_addr_t *)((char *)seg + sizeof(*seg));
}
err = bpf_lwt_push_encap(skb, 0, (void *)srh, sizeof(srh_buf));
if (err)
return BPF_DROP;
return BPF_REDIRECT;
}
// Add an Egress TLV with value fd00::4, set the Alert flag,
// and apply an End.X action to fc42::1
SEC("add_egr_x")
int __add_egr_x(struct __sk_buff *skb)
{
unsigned long long hi = 0xfc42000000000000;
unsigned long long lo = 0x1;
struct ip6_srh_t *srh = get_srh(skb);
uint8_t new_flags = SR6_FLAG_ALERT;
struct ip6_addr_t addr;
int err, offset;
if (srh == NULL)
return BPF_DROP;
uint8_t tlv[20] = {2, 18, 0, 0, 0xfd, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x4};
err = add_tlv(skb, srh, (srh->hdrlen+1) << 3,
(struct sr6_tlv_t *)&tlv, 20);
if (err)
return BPF_DROP;
offset = sizeof(struct ip6_t) + offsetof(struct ip6_srh_t, flags);
err = bpf_lwt_seg6_store_bytes(skb, offset,
(void *)&new_flags, sizeof(new_flags));
if (err)
return BPF_DROP;
addr.lo = htonll(lo);
addr.hi = htonll(hi);
err = bpf_lwt_seg6_action(skb, SEG6_LOCAL_ACTION_END_X,
(void *)&addr, sizeof(addr));
if (err)
return BPF_DROP;
return BPF_REDIRECT;
}
// Pop the Egress TLV, reset the flags, change the tag to 2442 and finally do
// a simple End action
SEC("pop_egr")
int __pop_egr(struct __sk_buff *skb)
{
struct ip6_srh_t *srh = get_srh(skb);
uint16_t new_tag = bpf_htons(2442);
uint8_t new_flags = 0;
int err, offset;
if (srh == NULL)
return BPF_DROP;
if (srh->flags != SR6_FLAG_ALERT)
return BPF_DROP;
if (srh->hdrlen != 11) // 4 segments + Egress TLV + Padding TLV
return BPF_DROP;
if (!has_egr_tlv(skb, srh))
return BPF_DROP;
err = delete_tlv(skb, srh, 8 + (srh->first_segment + 1) * 16);
if (err)
return BPF_DROP;
offset = sizeof(struct ip6_t) + offsetof(struct ip6_srh_t, flags);
if (bpf_lwt_seg6_store_bytes(skb, offset, (void *)&new_flags,
sizeof(new_flags)))
return BPF_DROP;
offset = sizeof(struct ip6_t) + offsetof(struct ip6_srh_t, tag);
if (bpf_lwt_seg6_store_bytes(skb, offset, (void *)&new_tag,
sizeof(new_tag)))
return BPF_DROP;
return BPF_OK;
}
// Check that the Egress TLV and flag have been removed and that the tag is
// correct, then apply an End.T action to reach the last segment
SEC("inspect_t")
int __inspect_t(struct __sk_buff *skb)
{
struct ip6_srh_t *srh = get_srh(skb);
int table = 117;
int err;
if (srh == NULL)
return BPF_DROP;
if (srh->flags != 0)
return BPF_DROP;
if (srh->tag != bpf_htons(2442))
return BPF_DROP;
if (srh->hdrlen != 8) // 4 segments
return BPF_DROP;
err = bpf_lwt_seg6_action(skb, SEG6_LOCAL_ACTION_END_T,
(void *)&table, sizeof(table));
if (err)
return BPF_DROP;
return BPF_REDIRECT;
}
char __license[] SEC("license") = "GPL";
#!/bin/bash
# Connects 6 network namespaces through veths.
# Each NS may have different IPv6 global scope addresses:
# NS1 ---- NS2 ---- NS3 ---- NS4 ---- NS5 ---- NS6
# fb00::1 fd00::1 fd00::2 fd00::3 fb00::6
# fc42::1 fd00::4
#
# All IPv6 packets going to fb00::/16 through NS2 will be encapsulated in an
# IPv6 header with a Segment Routing Header, with segments:
# fd00::1 -> fd00::2 -> fd00::3 -> fd00::4
#
# 3 fd00::/16 IPv6 addresses are bound to seg6local End.BPF actions:
# - fd00::1 : add a TLV, change the flags and apply a End.X action to fc42::1
# - fd00::2 : remove the TLV, change the flags, add a tag
# - fd00::3 : apply an End.T action to fd00::4, through routing table 117
#
# fd00::4 is a simple Segment Routing node decapsulating the inner IPv6 packet.
# Each End.BPF action will validate the operations applied on the SRH by the
# previous BPF program in the chain; otherwise the packet is dropped.
#
# A UDP datagram is sent from fb00::1 to fb00::6. The test succeeds if this
# datagram can be read on NS6 when binding to fb00::6.
TMP_FILE="/tmp/selftest_lwt_seg6local.txt"
cleanup()
{
if [ "$?" = "0" ]; then
echo "selftests: test_lwt_seg6local [PASS]";
else
echo "selftests: test_lwt_seg6local [FAILED]";
fi
set +e
ip netns del ns1 2> /dev/null
ip netns del ns2 2> /dev/null
ip netns del ns3 2> /dev/null
ip netns del ns4 2> /dev/null
ip netns del ns5 2> /dev/null
ip netns del ns6 2> /dev/null
rm -f $TMP_FILE
}
set -e
ip netns add ns1
ip netns add ns2
ip netns add ns3
ip netns add ns4
ip netns add ns5
ip netns add ns6
trap cleanup 0 2 3 6 9
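# Create the veth pairs chaining NS1 - NS2 - NS3 - NS4 - NS5 - NS6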
ip link add veth1 type veth peer name veth2
ip link add veth3 type veth peer name veth4
ip link add veth5 type veth peer name veth6
ip link add veth7 type veth peer name veth8
ip link add veth9 type veth peer name veth10
ip link set veth1 netns ns1
ip link set veth2 netns ns2
ip link set veth3 netns ns2
ip link set veth4 netns ns3
ip link set veth5 netns ns3
ip link set veth6 netns ns4
ip link set veth7 netns ns4
ip link set veth8 netns ns5
ip link set veth9 netns ns5
ip link set veth10 netns ns6
ip netns exec ns1 ip link set dev veth1 up
ip netns exec ns2 ip link set dev veth2 up
ip netns exec ns2 ip link set dev veth3 up
ip netns exec ns3 ip link set dev veth4 up
ip netns exec ns3 ip link set dev veth5 up
ip netns exec ns4 ip link set dev veth6 up
ip netns exec ns4 ip link set dev veth7 up
ip netns exec ns5 ip link set dev veth8 up
ip netns exec ns5 ip link set dev veth9 up
ip netns exec ns6 ip link set dev veth10 up
ip netns exec ns6 ip link set dev lo up
# All link scope addresses and routes required between veths
ip netns exec ns1 ip -6 addr add fb00::12/16 dev veth1 scope link
ip netns exec ns1 ip -6 route add fb00::21 dev veth1 scope link
ip netns exec ns2 ip -6 addr add fb00::21/16 dev veth2 scope link
ip netns exec ns2 ip -6 addr add fb00::34/16 dev veth3 scope link
ip netns exec ns2 ip -6 route add fb00::43 dev veth3 scope link
ip netns exec ns3 ip -6 route add fb00::65 dev veth5 scope link
ip netns exec ns3 ip -6 addr add fb00::43/16 dev veth4 scope link
ip netns exec ns3 ip -6 addr add fb00::56/16 dev veth5 scope link
ip netns exec ns4 ip -6 addr add fb00::65/16 dev veth6 scope link
ip netns exec ns4 ip -6 addr add fb00::78/16 dev veth7 scope link
ip netns exec ns4 ip -6 route add fb00::87 dev veth7 scope link
ip netns exec ns5 ip -6 addr add fb00::87/16 dev veth8 scope link
ip netns exec ns5 ip -6 addr add fb00::910/16 dev veth9 scope link
ip netns exec ns5 ip -6 route add fb00::109 dev veth9 scope link
ip netns exec ns5 ip -6 route add fb00::109 table 117 dev veth9 scope link
ip netns exec ns6 ip -6 addr add fb00::109/16 dev veth10 scope link
ip netns exec ns1 ip -6 addr add fb00::1/16 dev lo
ip netns exec ns1 ip -6 route add fb00::6 dev veth1 via fb00::21
ip netns exec ns2 ip -6 route add fb00::6 encap bpf in obj test_lwt_seg6local.o sec encap_srh dev veth2
ip netns exec ns2 ip -6 route add fd00::1 dev veth3 via fb00::43 scope link
ip netns exec ns3 ip -6 route add fc42::1 dev veth5 via fb00::65
ip netns exec ns3 ip -6 route add fd00::1 encap seg6local action End.BPF obj test_lwt_seg6local.o sec add_egr_x dev veth4
ip netns exec ns4 ip -6 route add fd00::2 encap seg6local action End.BPF obj test_lwt_seg6local.o sec pop_egr dev veth6
ip netns exec ns4 ip -6 addr add fc42::1 dev lo
ip netns exec ns4 ip -6 route add fd00::3 dev veth7 via fb00::87
ip netns exec ns5 ip -6 route add fd00::4 table 117 dev veth9 via fb00::109
ip netns exec ns5 ip -6 route add fd00::3 encap seg6local action End.BPF obj test_lwt_seg6local.o sec inspect_t dev veth8
ip netns exec ns6 ip -6 addr add fb00::6/16 dev lo
ip netns exec ns6 ip -6 addr add fd00::4/16 dev lo
ip netns exec ns1 sysctl net.ipv6.conf.all.forwarding=1 > /dev/null
ip netns exec ns2 sysctl net.ipv6.conf.all.forwarding=1 > /dev/null
ip netns exec ns3 sysctl net.ipv6.conf.all.forwarding=1 > /dev/null
ip netns exec ns4 sysctl net.ipv6.conf.all.forwarding=1 > /dev/null
ip netns exec ns5 sysctl net.ipv6.conf.all.forwarding=1 > /dev/null
ip netns exec ns6 sysctl net.ipv6.conf.all.seg6_enabled=1 > /dev/null
ip netns exec ns6 sysctl net.ipv6.conf.lo.seg6_enabled=1 > /dev/null
ip netns exec ns6 sysctl net.ipv6.conf.veth10.seg6_enabled=1 > /dev/null
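# Send a datagram from fb00::1 (NS1) to fb00::6 (NS6) and check that it
# traverses the whole SRv6 chain and is received on NS6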
ip netns exec ns6 nc -l -6 -u -d 7330 > $TMP_FILE &
ip netns exec ns1 bash -c "echo 'foobar' | nc -w0 -6 -u -p 2121 -s fb00::1 fb00::6 7330"
sleep 5 # wait long enough to ensure the UDP datagram has reached the last segment
kill -INT $!
if [[ $(< $TMP_FILE) != "foobar" ]]; then
exit 1
fi
exit 0