Virtual Routing and Forwarding (VRF)

The VRF device combined with ip rules provides the ability to create virtual
routing and forwarding domains (aka VRFs, VRF-lite to be specific) in the
Linux network stack. One use case is the multi-tenancy problem, where each
tenant has its own unique routing tables and, at the very least, needs
different default gateways.

Processes can be “VRF aware” by binding a socket to the VRF device. Packets
through the socket then use the routing table associated with the VRF
device. An important feature of the VRF device implementation is that it
impacts only Layer 3 and above, so L2 tools (e.g., LLDP) are not affected
(i.e., they do not need to be run in each VRF). The design also allows
higher priority ip rules (Policy Based Routing, PBR) to take precedence
over the VRF device rules, directing specific traffic as desired.
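As a sketch of that precedence (table number and source address are
illustrative assumptions), a PBR rule installed with a preference below the
default l3mdev preference of 1000 is evaluated first:

```shell
# Steer traffic from one source address to table 100 before the
# l3mdev rule (default preference 1000) is consulted.
ip rule add pref 100 from 10.0.0.42 lookup 100
ip rule show
```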

In addition, VRF devices allow VRFs to be nested within namespaces. For
example, network namespaces provide separation of network interfaces at the
device layer, VLANs on the interfaces within a namespace provide L2
separation, and then VRF devices provide L3 separation.

Design

A VRF device is created with an associated route table. Network interfaces
are then enslaved to a VRF device:

     +-----------------------------+
     |           vrf-blue          |  ===> route table 10
     +-----------------------------+
        |        |            |
     +------+ +------+     +-------------+
     | eth1 | | eth2 | ... |    bond1    |
     +------+ +------+     +-------------+
                              |       |
                          +------+ +------+
                          | eth8 | | eth9 |
                          +------+ +------+

Packets received on an enslaved device are switched to the VRF device
in the IPv4 and IPv6 processing stacks, giving the impression that packets
flow through the VRF device. Similarly, on egress, routing rules are used to
send packets to the VRF device driver before they are sent out the actual
interface. This allows tcpdump on a VRF device to capture all packets into
and out of the VRF as a whole.[1] Similarly, netfilter[2] and tc rules can be
applied using the VRF device to specify rules that apply to the VRF domain
as a whole.

[1] Packets in the forwarded state do not flow through the device, so those
packets are not seen by tcpdump. Will revisit this limitation in a
future release.

[2] Iptables on ingress supports PREROUTING with skb->dev set to the real
ingress device and both INPUT and PREROUTING rules with skb->dev set to
the VRF device. For egress, POSTROUTING and OUTPUT rules can be written
using either the VRF device or the real egress device.
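The tcpdump behavior described above can be sketched as follows (the VRF
device name is illustrative):

```shell
# Capture all non-forwarded traffic entering or leaving the VRF domain
# by listening on the VRF device itself rather than each enslaved device.
tcpdump -i vrf-blue -n
```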

Setup

  1. VRF device is created with an association to a FIB table.
    ip link add vrf-blue type vrf table 10
    ip link set dev vrf-blue up

  2. An l3mdev FIB rule directs lookups to the table associated with the device.
    A single l3mdev rule is sufficient for all VRFs. The VRF device adds the
    l3mdev rule for IPv4 and IPv6 when the first device is created with a
    default preference of 1000. Users may delete the rule if desired and add
    with a different priority or install per-VRF rules.

    Prior to the v4.8 kernel iif and oif rules are needed for each VRF device:
    ip ru add oif vrf-blue table 10
    ip ru add iif vrf-blue table 10
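If a different preference is desired on a v4.8 or later kernel, the default
l3mdev rule can be replaced as a sketch like the following (the preference
value 2000 is an illustrative assumption):

```shell
# Remove the default l3mdev rule and re-install it at preference 2000.
ip -4 rule del pref 1000 l3mdev
ip -4 rule add pref 2000 l3mdev
ip -4 rule show
```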

  3. Set the default route for the table (and hence default route for the VRF).
    ip route add table 10 unreachable default metric 4278198272

    This high metric value ensures that the default unreachable route can
    be overridden by a routing protocol suite. FRRouting interprets
    kernel metrics as a combined admin distance (upper byte) and priority
    (lower 3 bytes). Thus the above metric translates to [255/8192].
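The encoding can be checked directly, since the admin distance occupies the
upper byte and the priority the lower three bytes:

```shell
# (255 << 24) | 8192 == 4278198272, i.e. admin distance 255, priority 8192.
printf '%d\n' $(( (255 << 24) | 8192 ))
```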

  4. Enslave L3 interfaces to a VRF device.
    ip link set dev eth1 master vrf-blue

    Local and connected routes for enslaved devices are automatically moved to
    the table associated with the VRF device. Any additional routes that depend
    on the enslaved device are dropped and will need to be reinserted into the
    VRF FIB table following the enslavement.

    The IPv6 sysctl option keep_addr_on_down can be enabled to keep IPv6 global
    addresses when VRF enslavement changes.
    sysctl -w net.ipv6.conf.all.keep_addr_on_down=1

  5. Additional VRF routes are added to associated table.
    ip route add table 10 …
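For example, a static route through a gateway reachable on an enslaved
interface might look like the following (the prefix, next hop, and interface
are illustrative assumptions):

```shell
# Illustrative: reach 10.20.0.0/24 via a gateway on enslaved eth1,
# installed directly into the VRF's table.
ip route add table 10 10.20.0.0/24 via 10.2.1.254 dev eth1
```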

Applications

Applications that are to work within a VRF need to bind their socket to the
VRF device:

setsockopt(sd, SOL_SOCKET, SO_BINDTODEVICE, dev, strlen(dev)+1);

Alternatively, the output device can be specified using cmsg and IP_PKTINFO.
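Unmodified applications can also be launched with their sockets bound to a
VRF using newer iproute2 (v4.10 and later, given kernel support for cgroup
BPF socket programs); the VRF name and target address below are illustrative:

```shell
# Run an unmodified program in the context of vrf-blue; every socket it
# creates is bound to the VRF device automatically.
ip vrf exec vrf-blue ssh 10.2.1.254
```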

By default the scope of the port bindings for unbound sockets is
limited to the default VRF. That is, it will not be matched by packets
arriving on interfaces enslaved to an l3mdev and processes may bind to
the same port if they bind to an l3mdev.

TCP & UDP services running in the default VRF context (ie., not bound
to any VRF device) can work across all VRF domains by enabling the
tcp_l3mdev_accept and udp_l3mdev_accept sysctl options:

sysctl -w net.ipv4.tcp_l3mdev_accept=1
sysctl -w net.ipv4.udp_l3mdev_accept=1

These options are disabled by default so that a socket in a VRF is only
selected for packets in that VRF. There is a similar option for raw
sockets, which is enabled by default for backwards compatibility. It allows
the output device to be specified with cmsg and IP_PKTINFO on a socket that
is not bound to the corresponding VRF, so that, for example, older ping
implementations can specify the device without being executed in the VRF.
The option can be disabled so that packets received in a VRF context are
only handled by a raw socket bound to the VRF, and packets in the
default VRF are only handled by a socket not bound to any VRF:

sysctl -w net.ipv4.raw_l3mdev_accept=0

netfilter rules on the VRF device can be used to limit access to services
running in the default VRF context as well.
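As an illustrative sketch of such a restriction (the VRF name and service
port are assumptions), new SSH connections arriving via a VRF can be dropped
while the default VRF remains reachable:

```shell
# Per footnote [2], INPUT rules can match skb->dev set to the VRF device:
# drop TCP port 22 traffic entering through vrf-blue.
iptables -A INPUT -i vrf-blue -p tcp --dport 22 -j DROP
```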

################################################################################

Using iproute2 for VRFs

iproute2 supports the vrf keyword as of v4.7. For backwards compatibility this
section lists both commands where appropriate – with the vrf keyword and the
older form without it.

  1. Create a VRF

    To instantiate a VRF device and associate it with a table:
    $ ip link add dev NAME type vrf table ID

    As of v4.8 the kernel supports the l3mdev FIB rule, where a single rule
    covers all VRFs. The l3mdev rule is created for IPv4 and IPv6 on first
    device creation.

  2. List VRFs

    To list VRFs that have been created:
    $ ip [-d] link show type vrf
    NOTE: The -d option is needed to show the table id

    For example:
    $ ip -d link show type vrf
    11: mgmt: <NOARP,MASTER,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
        link/ether 72:b3:ba:91:e2:24 brd ff:ff:ff:ff:ff:ff promiscuity 0
        vrf table 1 addrgenmode eui64
    12: red: <NOARP,MASTER,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
        link/ether b6:6f:6e:f6:da:73 brd ff:ff:ff:ff:ff:ff promiscuity 0
        vrf table 10 addrgenmode eui64
    13: blue: <NOARP,MASTER,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
        link/ether 36:62:e8:7d:bb:8c brd ff:ff:ff:ff:ff:ff promiscuity 0
        vrf table 66 addrgenmode eui64
    14: green: <NOARP,MASTER,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
        link/ether e6:28:b8:63:70:bb brd ff:ff:ff:ff:ff:ff promiscuity 0
        vrf table 81 addrgenmode eui64

    Or in brief output:

    $ ip -br link show type vrf
    mgmt UP 72:b3:ba:91:e2:24 <NOARP,MASTER,UP,LOWER_UP>
    red UP b6:6f:6e:f6:da:73 <NOARP,MASTER,UP,LOWER_UP>
    blue UP 36:62:e8:7d:bb:8c <NOARP,MASTER,UP,LOWER_UP>
    green UP e6:28:b8:63:70:bb <NOARP,MASTER,UP,LOWER_UP>

  3. Assign a Network Interface to a VRF

    Network interfaces are assigned to a VRF by enslaving the netdevice to a
    VRF device:
    $ ip link set dev DEV master NAME

    On enslavement connected and local routes are automatically moved to the
    table associated with the VRF device.

    For example:
    $ ip link set dev eth0 master mgmt

  4. Show Devices Assigned to a VRF

    To show devices that have been assigned to a specific VRF add the master
    option to the ip command:
    $ ip link show vrf NAME
    $ ip link show master NAME

    For example:
    $ ip link show vrf red
    3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master red state UP mode DEFAULT group default qlen 1000
        link/ether 02:00:00:00:02:02 brd ff:ff:ff:ff:ff:ff
    4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master red state UP mode DEFAULT group default qlen 1000
        link/ether 02:00:00:00:02:03 brd ff:ff:ff:ff:ff:ff
    7: eth5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop master red state DOWN mode DEFAULT group default qlen 1000
        link/ether 02:00:00:00:02:06 brd ff:ff:ff:ff:ff:ff

    Or using the brief output:
    $ ip -br link show vrf red
    eth1 UP 02:00:00:00:02:02 <BROADCAST,MULTICAST,UP,LOWER_UP>
    eth2 UP 02:00:00:00:02:03 <BROADCAST,MULTICAST,UP,LOWER_UP>
    eth5 DOWN 02:00:00:00:02:06 <BROADCAST,MULTICAST>

  5. Show Neighbor Entries for a VRF

    To list neighbor entries associated with devices enslaved to a VRF device
    add the master option to the ip command:
    $ ip [-6] neigh show vrf NAME
    $ ip [-6] neigh show master NAME

    For example:
    $ ip neigh show vrf red
    10.2.1.254 dev eth1 lladdr a6:d9:c7:4f:06:23 REACHABLE
    10.2.2.254 dev eth2 lladdr 5e:54:01:6a:ee:80 REACHABLE

    $ ip -6 neigh show vrf red
    2002:1::64 dev eth1 lladdr a6:d9:c7:4f:06:23 REACHABLE

  6. Show Addresses for a VRF

    To show addresses for interfaces associated with a VRF add the master
    option to the ip command:
    $ ip addr show vrf NAME
    $ ip addr show master NAME

    For example:
    $ ip addr show vrf red
    3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master red state UP group default qlen 1000
        link/ether 02:00:00:00:02:02 brd ff:ff:ff:ff:ff:ff
        inet 10.2.1.2/24 brd 10.2.1.255 scope global eth1
           valid_lft forever preferred_lft forever
        inet6 2002:1::2/120 scope global
           valid_lft forever preferred_lft forever
        inet6 fe80::ff:fe00:202/64 scope link
           valid_lft forever preferred_lft forever
    4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master red state UP group default qlen 1000
        link/ether 02:00:00:00:02:03 brd ff:ff:ff:ff:ff:ff
        inet 10.2.2.2/24 brd 10.2.2.255 scope global eth2
           valid_lft forever preferred_lft forever
        inet6 2002:2::2/120 scope global
           valid_lft forever preferred_lft forever
        inet6 fe80::ff:fe00:203/64 scope link
           valid_lft forever preferred_lft forever
    7: eth5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop master red state DOWN group default qlen 1000
        link/ether 02:00:00:00:02:06 brd ff:ff:ff:ff:ff:ff

    Or in brief format:
    $ ip -br addr show vrf red
    eth1 UP 10.2.1.2/24 2002:1::2/120 fe80::ff:fe00:202/64
    eth2 UP 10.2.2.2/24 2002:2::2/120 fe80::ff:fe00:203/64
    eth5 DOWN

  7. Show Routes for a VRF

    To show routes for a VRF use the ip command to display the table associated
    with the VRF device:
    $ ip [-6] route show vrf NAME
    $ ip [-6] route show table ID

    For example:
    $ ip route show vrf red
    unreachable default metric 4278198272
    broadcast 10.2.1.0 dev eth1 proto kernel scope link src 10.2.1.2
    10.2.1.0/24 dev eth1 proto kernel scope link src 10.2.1.2
    local 10.2.1.2 dev eth1 proto kernel scope host src 10.2.1.2
    broadcast 10.2.1.255 dev eth1 proto kernel scope link src 10.2.1.2
    broadcast 10.2.2.0 dev eth2 proto kernel scope link src 10.2.2.2
    10.2.2.0/24 dev eth2 proto kernel scope link src 10.2.2.2
    local 10.2.2.2 dev eth2 proto kernel scope host src 10.2.2.2
    broadcast 10.2.2.255 dev eth2 proto kernel scope link src 10.2.2.2

    $ ip -6 route show vrf red
    local 2002:1:: dev lo proto none metric 0 pref medium
    local 2002:1::2 dev lo proto none metric 0 pref medium
    2002:1::/120 dev eth1 proto kernel metric 256 pref medium
    local 2002:2:: dev lo proto none metric 0 pref medium
    local 2002:2::2 dev lo proto none metric 0 pref medium
    2002:2::/120 dev eth2 proto kernel metric 256 pref medium
    local fe80:: dev lo proto none metric 0 pref medium
    local fe80:: dev lo proto none metric 0 pref medium
    local fe80::ff:fe00:202 dev lo proto none metric 0 pref medium
    local fe80::ff:fe00:203 dev lo proto none metric 0 pref medium
    fe80::/64 dev eth1 proto kernel metric 256 pref medium
    fe80::/64 dev eth2 proto kernel metric 256 pref medium
    ff00::/8 dev red metric 256 pref medium
    ff00::/8 dev eth1 metric 256 pref medium
    ff00::/8 dev eth2 metric 256 pref medium
    unreachable default dev lo metric 4278198272 error -101 pref medium

  8. Route Lookup for a VRF

    A test route lookup can be done for a VRF:
    $ ip [-6] route get vrf NAME ADDRESS
    $ ip [-6] route get oif NAME ADDRESS

    For example:
    $ ip route get 10.2.1.40 vrf red
    10.2.1.40 dev eth1 table red src 10.2.1.2
    cache

    $ ip -6 route get 2002:1::32 vrf red
    2002:1::32 from :: dev eth1 table red proto kernel src 2002:1::2 metric 256 pref medium

  9. Removing Network Interface from a VRF

    Network interfaces are removed from a VRF by breaking the enslavement to
    the VRF device:
    $ ip link set dev NAME nomaster

    Connected routes are moved back to the default table and local entries are
    moved to the local table.

    For example:
    $ ip link set dev eth0 nomaster


Commands used in this example:

cat >> /etc/iproute2/rt_tables.d/vrf.conf <<EOF
1  mgmt
10 red
66 blue
81 green
EOF

function vrf_create
{
    VRF=$1
    TBID=$2

    # create VRF device
    ip link add ${VRF} type vrf table ${TBID}

    if [ "${VRF}" != "mgmt" ]; then
        ip route add table ${TBID} unreachable default metric 4278198272
    fi
    ip link set dev ${VRF} up
}

vrf_create mgmt 1
ip link set dev eth0 master mgmt

vrf_create red 10
ip link set dev eth1 master red
ip link set dev eth2 master red
ip link set dev eth5 master red

vrf_create blue 66
ip link set dev eth3 master blue

vrf_create green 81
ip link set dev eth4 master green


Interface addresses from /etc/network/interfaces:
auto eth0
iface eth0 inet static
      address 10.0.0.2
      netmask 255.255.255.0
      gateway 10.0.0.254

iface eth0 inet6 static
      address 2000:1::2
      netmask 120

auto eth1
iface eth1 inet static
      address 10.2.1.2
      netmask 255.255.255.0

iface eth1 inet6 static
      address 2002:1::2
      netmask 120

auto eth2
iface eth2 inet static
      address 10.2.2.2
      netmask 255.255.255.0

iface eth2 inet6 static
      address 2002:2::2
      netmask 120

auto eth3
iface eth3 inet static
      address 10.2.3.2
      netmask 255.255.255.0

iface eth3 inet6 static
      address 2002:3::2
      netmask 120

auto eth4
iface eth4 inet static
      address 10.2.4.2
      netmask 255.255.255.0

iface eth4 inet6 static
      address 2002:4::2
      netmask 120

Source: https://www.kernel.org/doc/Documentation/networking/vrf.txt
See also: https://github.com/Mellanox/mlxsw/wiki/Virtual-Routing-and-Forwarding-(VRF)
