Linux Network Namespaces
7 min read
The fundamental isolation technology supporting containers on Linux is the Linux namespace. Namespaces isolate global resources in a way that is transparent to the processes inside the namespace. There are currently 7 different namespaces supported: Cgroup, IPC, Network, Mount, PID, User, and UTS. Today we will be looking at the network namespace.
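The kernel exposes each process's namespaces under /proc, so you can see them directly. The exact set of entries depends on your kernel version; the output below is only an illustration:
# List the namespaces the current shell belongs to (entries vary by kernel)
$ ls /proc/$$/ns
cgroup  ipc  mnt  net  pid  pid_for_children  user  uts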
The network namespace can be thought of as a separate copy of the network stack. It provides isolation of network interfaces, routing tables, and firewall rules. In this article we will be using the tools from iproute2 to demonstrate.
First let's create a new network namespace to play with:
# Create a new net namespace ns0
$ ip netns add ns0
# Show all net namespaces
$ ip netns show
ns0
Here we have created a new network namespace called ns0. Let's go into the namespace and see what it looks like.
# Execute `ip link` in the ns0 namespace
$ ip netns exec ns0 ip link show
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
As this example shows, our newly created namespace ns0 has only a loopback interface. If you are familiar with network interfaces (those "labels" you see in the well-known ifconfig command), you can see that ns0 does not inherit any of the interfaces such as eth0 or wlan0 from your main namespace.
This is what we mean by isolation of network devices: whenever we create a new network namespace, the network resources in that namespace are separated from those in your main namespace. Processes running in this namespace (e.g. the ip link command above) do not have access to interfaces in other namespaces, and vice versa.
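The isolation covers routes as well, not just devices. As a quick check, compare the routing table in the main namespace with the still-empty one in ns0; the main-namespace routes below are purely illustrative and yours will look different:
# Routes in the main namespace (illustrative output, yours will differ)
$ ip route show
default via 192.168.1.1 dev eth0
# Routes in ns0 -- nothing is configured yet, so the output is empty
$ ip netns exec ns0 ip route show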
Linux namespaces allow processes on one computer to behave as if they were running on different computers. In a networking context, this means you can build a virtual network between processes, rather than between physical machines.
Let's get into the fun stuff and create a network of processes! There are a few ways to allow communication between namespaces; the most common is to use a device called a veth pair (virtual Ethernet pair). Veth devices are always created in pairs, and whatever we send into one end comes out of the other. Let's start with a simple example to demonstrate a veth pair.
# Let's create another namespace ns1
$ ip netns add ns1
$ ip netns show
ns1
ns0
Now we create a veth pair in the main namespace:
# Create a veth pair vth0 and vth1
$ ip link add vth0 type veth peer name vth1
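Right after creation, both ends of the pair live in the main namespace. You can list them before moving them; the interface indices and MAC addresses below are only examples and will differ on your machine:
# Both ends of the pair are still in the main namespace
$ ip link show type veth
5: vth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether e2:da:f7:04:9e:e3 brd ff:ff:ff:ff:ff:ff
6: vth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 9e:e4:e9:12:ad:42 brd ff:ff:ff:ff:ff:ff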
And then we move the two endpoints into namespaces ns0 and ns1. Remember to check that the move was successful using the ip link command.
# Move vth0 into ns0 and vth1 into ns1
$ ip link set vth0 netns ns0
$ ip link set vth1 netns ns1
# Verify that vth0 is now in namespace ns0
$ ip netns exec ns0 ip link show vth0
6: vth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 9e:e4:e9:12:ad:42 brd ff:ff:ff:ff:ff:ff
# Verify that vth1 is now in namespace ns1
$ ip netns exec ns1 ip link show vth1
5: vth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether e2:da:f7:04:9e:e3 brd ff:ff:ff:ff:ff:ff
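Moving an interface into another namespace also removes it from the namespace it came from. If you look for vth0 in the main namespace now, it is simply gone:
# vth0 is no longer visible in the main namespace
$ ip link show vth0
Device "vth0" does not exist.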
We also want to give these devices IP addresses on the same subnet.
# Assign a static address to each device
$ ip netns exec ns0 ip addr add "10.0.0.1/24" dev vth0
$ ip netns exec ns1 ip addr add "10.0.0.2/24" dev vth1
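If you want to double-check the assignment, ip addr inside the namespace should now list the address (the index and MAC address here are examples and will vary):
# Confirm the address assigned to vth0 inside ns0
$ ip netns exec ns0 ip addr show dev vth0
6: vth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 9e:e4:e9:12:ad:42 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.1/24 scope global vth0
valid_lft forever preferred_lft forever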
Let's bring the devices up and see if they can reach each other.
$ ip netns exec ns0 ip link set vth0 up
$ ip netns exec ns1 ip link set vth1 up
$ ip netns exec ns0 ping -c 2 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.052 ms
--- 10.0.0.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.047/0.049/0.052/0.007 ms
$ ip netns exec ns1 ping -c 2 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.055 ms
--- 10.0.0.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.043/0.049/0.055/0.006 ms
We have created a working virtual network between two network namespaces. Veth pairs are easy to set up, but what if you want to add more namespaces? Connecting every pair of namespaces directly would eventually give you a complete graph (n namespaces need n(n-1)/2 links), with cables all over the place. This is inefficient and hard to maintain. To solve this problem, we can use a bridge.
A bridge works like a virtual switch: it can connect multiple Ethernet segments together in a protocol-independent way. In our case, we are connecting multiple virtual Ethernet segments. Let's try attaching the veth devices we created earlier to a bridge.
# Create a bridge device in ns0
$ ip netns exec ns0 ip link add br0 type bridge
# Assign an IP to the bridge device
$ ip netns exec ns0 ip addr add "10.0.0.1/24" dev br0
# Bring the bridge up
$ ip netns exec ns0 ip link set br0 up
# Plug the veth endpoint into the bridge
$ ip netns exec ns0 ip link set vth0 master br0
Notice that we have given the bridge the same address that vth0 had. This is because vth0's own address no longer matters once it is attached to br0; br0 now serves as the entry point into the network for ns0.
We should now remove the address that was assigned to vth0 so that traffic is routed properly.
# Remove vth0's address
$ ip netns exec ns0 ip addr del "10.0.0.1/24" dev vth0
Let's test that it all works:
# Pinging 10.0.0.2 from ns0
$ ip netns exec ns0 ping -c 2 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.063 ms
--- 10.0.0.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.054/0.058/0.063/0.008 ms
# Pinging 10.0.0.1 from ns1
$ ip netns exec ns1 ping -c 2 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.066 ms
--- 10.0.0.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.052/0.059/0.066/0.007 ms
Now if we want to add another namespace to the network, we can simply do the following:
# Create a new namespace ns2 with a veth pair vth2 and vth3
$ ip netns add ns2
$ ip netns exec ns2 ip link add vth2 type veth peer name vth3
$ ip netns exec ns2 ip addr add "10.0.0.3/24" dev vth3
$ ip netns exec ns2 ip link set vth3 up
# Move vth2 into ns0
$ ip netns exec ns2 ip link set vth2 netns ns0
$ ip netns exec ns0 ip link set vth2 up
# Plug vth2 into the bridge
$ ip netns exec ns0 ip link set vth2 master br0
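To see which ports are now attached to the bridge, you can use the bridge utility that ships with iproute2 (the indices and flags in this output are only illustrative):
# List the ports attached to br0 inside ns0
$ ip netns exec ns0 bridge link show
6: vth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br0 state forwarding priority 32 cost 2
8: vth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br0 state forwarding priority 32 cost 2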
Then you can test the connectivity using ping. ns0, ns1 and ns2 should all be able to ping each other successfully.
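When you are done experimenting, you can clean everything up by deleting the namespaces; the veth and bridge devices inside them are destroyed along with them (removing one end of a veth pair removes its peer as well):
# Remove the namespaces created in this article
$ ip netns del ns0
$ ip netns del ns1
$ ip netns del ns2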
There are other ways to enable communication between network namespaces, such as VLAN, macvlan, ipvlan, and tun/tap devices. These will be left for a later article.