
Built an XDP L2 forwarder and packets kept vanishing into the void

January 10, 2026

I spent the better part of a day watching packets disappear into thin air. Not crash. Not get rejected. Just... vanish. No errors, no logs, nothing. If you've ever worked with XDP and veth pairs, you probably know exactly what I'm talking about.

Let me walk you through building a basic Layer 2 packet forwarder using XDP, and more importantly, the debugging journey that made it actually work.

I was building...

A simple XDP program that acts as a transparent bridge between two network namespaces. When a packet arrives on one interface, rewrite its MAC addresses and forward it to the other interface. Think of it as a software switch, but one that processes packets at the driver level before the kernel even sees them.

The setup looks like this:

netns1 (10.0.0.1) <--veth1-p---veth1--> HOST <--veth2---veth2-p--> netns2 (10.0.0.2)

The XDP programs attach to veth1 and veth2 on the host side. When a ping goes from netns1 to netns2, the program on veth1 should catch it, rewrite the destination MAC to point to netns2's interface, and redirect it to veth2.
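For anyone following along, here's roughly how that topology gets built. The names and addresses match the diagram above; the MAC addresses and ifindex values on your machine will come out different, so check them with ip link before hardcoding anything:

```shell
# Create the two namespaces
sudo ip netns add netns1
sudo ip netns add netns2

# Create the veth pairs: host side (veth1/veth2), namespace side (veth1-p/veth2-p)
sudo ip link add veth1 type veth peer name veth1-p
sudo ip link add veth2 type veth peer name veth2-p

# Move the peer ends into their namespaces
sudo ip link set veth1-p netns netns1
sudo ip link set veth2-p netns netns2

# Assign addresses and bring everything up
sudo ip netns exec netns1 ip addr add 10.0.0.1/24 dev veth1-p
sudo ip netns exec netns2 ip addr add 10.0.0.2/24 dev veth2-p
sudo ip link set veth1 up
sudo ip link set veth2 up
sudo ip netns exec netns1 ip link set veth1-p up
sudo ip netns exec netns2 ip link set veth2-p up
```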

Here's the core of the XDP forwarder program:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include <linux/if_ether.h>

// ifindex values for the host-side veths on my machine — check yours with `ip link`
#define VETH1_IFINDEX 6
#define VETH2_IFINDEX 8

// Namespace-side MACs (veth1-p and veth2-p) — these change whenever the veths are recreated
static unsigned char MAC_NS1[] = {0x32, 0xB0, 0xCB, 0xD0, 0xE8, 0x98};
static unsigned char MAC_NS2[] = {0x76, 0x7E, 0x0C, 0x2E, 0x0B, 0x6D};

SEC("xdp")
int xdp_forwarder(struct xdp_md *ctx) {
    void *data_end = (void *)(long)ctx->data_end;
    void *data = (void *)(long)ctx->data;
    
    // Bounds check - the verifier demands this
    if (data + sizeof(struct ethhdr) > data_end) {
        return XDP_DROP;
    }
    
    struct ethhdr *eth = data;
    int ingress_ifindex = ctx->ingress_ifindex;
    int target_ifindex = 0;
    unsigned char *target_mac = 0;
    
    if (ingress_ifindex == VETH1_IFINDEX) {
        target_ifindex = VETH2_IFINDEX;
        target_mac = MAC_NS2;
    } 
    else if (ingress_ifindex == VETH2_IFINDEX) {
        target_ifindex = VETH1_IFINDEX;
        target_mac = MAC_NS1;
    }
    else {
        return XDP_PASS;
    }
    
    // Only rewrite MACs for unicast traffic
    if (!(eth->h_dest[0] & 0x01)) {
        __builtin_memcpy(eth->h_dest, target_mac, ETH_ALEN);
    }
    
    return bpf_redirect(target_ifindex, 0);
}

char __license[] SEC("license") = "Dual MIT/GPL";

The logic is straightforward. Check which interface the packet came from, set the destination MAC to the appropriate namespace's MAC, and redirect to the other interface. Simple, right?

And then... nothing worked

I compiled it. I loaded it. I attached it to both veth interfaces. I ran the ping.

sudo ip netns exec netns1 ping -c3 10.0.0.2

Nothing. Destination Host Unreachable. I ran tcpdump on veth1, veth2, everywhere. Zero packets. The XDP program was clearly running (I added debug prints), it was receiving packets, it was calling bpf_redirect(), and the redirect was returning success.

But the packets just disappeared.

Debug by logging

My typical first step when debugging XDP is to sprinkle bpf_trace_printk() calls through the code. Here's what that looks like:

// __bpf_ntohs needs <bpf/bpf_endian.h>; the trailing \n keeps trace_pipe output readable
char fmt[] = "RX ifindex=%d proto=%x\n";
bpf_trace_printk(fmt, sizeof(fmt), ingress_ifindex, __bpf_ntohs(eth->h_proto));

You read the output with:

sudo cat /sys/kernel/debug/tracing/trace_pipe

The logs confirmed it. Packets were arriving on veth1 (ifindex 6), the program identified them as ARP requests (protocol 0x806), and it was calling redirect to veth2 (ifindex 8). Everything looked correct.

But still, nothing on the other side.

The actual problem

After digging through kernel documentation and Stack Overflow, I found the answer buried in a random forum post. When using XDP with veth pairs in driver mode, you need XDP programs attached to both ends of the veth pair.

Not just the host side. The namespace side too.

The veth driver requires an XDP program on the receiving end for redirects to actually work. It doesn't matter what the program does. It can literally just return XDP_PASS. But it has to be there.

Here's the dummy program I attached to the namespace-side interfaces:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int xdp_pass(struct xdp_md *ctx) {
    return XDP_PASS;
}

char __license[] SEC("license") = "Dual MIT/GPL";

Compiled it:

clang -O2 -g -target bpf -c dummy.c -o dummy.o

Attached it to both namespace interfaces:

sudo ip netns exec netns1 ip link set dev veth1-p xdp obj dummy.o sec xdp
sudo ip netns exec netns2 ip link set dev veth2-p xdp obj dummy.o sec xdp
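Before testing again, it's worth sanity-checking that all four attachments actually took. For interfaces with an XDP program attached, ip link prints an xdp section with a program id (assuming the same interface names as above):

```shell
# Host side
ip link show veth1
ip link show veth2

# Namespace side — each should show "xdp" and a prog id in the output
sudo ip netns exec netns1 ip link show veth1-p
sudo ip netns exec netns2 ip link show veth2-p
```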

And, brethren, finally:

sudo ip netns exec netns1 ping -c3 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.114 ms
64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=0.091 ms

It worked.

The Go loader

Loading the XDP program from Go is pretty clean with the cilium/ebpf library:

package main

import (
    "log"
    "net"
    "github.com/cilium/ebpf/link"
    "github.com/cilium/ebpf/rlimit"
)

//go:generate go run github.com/cilium/ebpf/cmd/bpf2go -target bpfel forwarder ./bpf/forwarder.c -- -I/usr/include/$(uname -m)-linux-gnu

func main() {
    if err := rlimit.RemoveMemlock(); err != nil {
        log.Fatalf("Failed to remove memlock limit: %v", err)
    }
    
    objs := forwarderObjects{}
    if err := loadForwarderObjects(&objs, nil); err != nil {
        log.Fatalf("Failed to load BPF objects: %v", err)
    }
    defer objs.Close()
    
    veth1, err := net.InterfaceByName("veth1")
    if err != nil {
        log.Fatalf("Failed to find veth1: %v", err)
    }
    l1, err := link.AttachXDP(link.XDPOptions{
        Program: objs.XdpForwarder,
        Interface: veth1.Index,
    })
    if err != nil {
        log.Fatalf("Failed to attach XDP to veth1: %v", err)
    }
    defer l1.Close()
    
    veth2, err := net.InterfaceByName("veth2")
    if err != nil {
        log.Fatalf("Failed to find veth2: %v", err)
    }
    l2, err := link.AttachXDP(link.XDPOptions{
        Program: objs.XdpForwarder,
        Interface: veth2.Index,
    })
    if err != nil {
        log.Fatalf("Failed to attach XDP to veth2: %v", err)
    }
    defer l2.Close()
    
    log.Println("XDP Forwarder running... Press Ctrl+C to exit.")
    select {}
}

The //go:generate directive compiles the C code into BPF bytecode and generates Go bindings automatically. Then you just load it and attach to the host-side interfaces.
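The build steps look something like this — I'm assuming the loader lives in its own module and calling the binary forwarder, but the name is whatever your module produces:

```shell
# Generate the Go bindings and the compiled BPF object from the C source
go generate ./...

# Build the loader, then run it with the privileges XDP attachment requires
go build -o forwarder .
sudo ./forwarder
```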

I ran tcpdump to get a view into the packets as they arrive on netns2's veth2-p:

sudo ip netns exec netns2 tcpdump -i veth2-p -e -n -l
...
19:12:58.784335 76:7e:0c:2e:0b:6d > 32:b0:cb:d0:e8:98, ethertype ARP (0x0806), length 42: Request who-has 10.0.0.1 tell 10.0.0.2, length 28

19:12:58.784494 32:b0:cb:d0:e8:98 > 76:7e:0c:2e:0b:6d, ethertype ARP (0x0806), length 42: Request who-has 10.0.0.2 tell 10.0.0.1, length 28

So, what is this, Yemi?

This forwarder works for a basic L2 bridge between two local namespaces. But what if those namespaces were on different physical machines across the internet? You can't use XDP_REDIRECT across the network. You'd need to encapsulate the L2 frame inside a new IP packet that can be routed.

That's VXLAN. And that's what I'm building next. Follow the progress on GitHub.
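As a teaser: the kernel can already do this kind of encapsulation natively. Something like the following (REMOTE_IP and eth0 are placeholders for the other machine's address and your underlay interface) creates a VXLAN device that wraps L2 frames in UDP/IP:

```shell
# 42 is an arbitrary VNI, 4789 is the IANA-assigned VXLAN port;
# REMOTE_IP and eth0 are placeholders for your environment
sudo ip link add vxlan0 type vxlan id 42 remote REMOTE_IP dstport 4789 dev eth0
```

The interesting part of the next project is doing that encapsulation in XDP instead of letting the vxlan driver do it.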

For now, though, I have a working packet forwarder that processes traffic in the kernel's fast path, before the network stack ever touches it. Not bad for a few hundred lines of C and Go.