Large-scale virtual network testing platform

Introduction

NetMirage is a tool for testing IP-based networked applications. NetMirage emulates a large virtual network, allowing you to run and test unmodified applications in real-time. Applications run without any virtualization or API hooks, allowing you to achieve maximum performance. NetMirage constructs the virtual network using standard Linux components, so you have complete control over the operation of the network.

NetMirage is compatible with any IP-based Linux application with the capability to bind to a specific IP address. In particular, NetMirage is able to construct large-scale virtual Tor networks.

CAUTION

NetMirage is currently in open beta testing. It is likely to contain bugs and usability issues. It is not quite ready for production use yet.

This page is a placeholder, and contains only a basic tutorial. When the software is ready for widespread use, we will provide additional documentation on this website.

Requirements

Hardware

You need at least two machines to run NetMirage. You can use virtual machines, but we recommend using physical machines for best performance.

One machine acts as the core node. The core node is responsible for emulating the virtual network. You will need to dedicate a network interface on the core node for this purpose. Note that if you connect to the core node using SSH, then you must do so on a network interface other than the one dedicated for use with NetMirage.

One or more machines act as edge nodes. Each edge node can run multiple instances of the program that you want to test. Each program instance is assigned an IP address within the virtual network. All traffic between applications is automatically routed through the core node, where it experiences all of the delays, bandwidth restrictions, and packet drops associated with the virtual network topology.

Software

The core and edge nodes must run Linux with kernel version 3.3 or newer. Any modern Linux distribution should satisfy this requirement. We have tested NetMirage with the following distributions:

  • Debian stable (Stretch): requires kernel patch
  • Debian testing (Buster): working
  • Ubuntu 14.04 LTS (Trusty Tahr): requires Open vSwitch update from PPA, requires kernel patch
  • Ubuntu 16.04 LTS (Xenial Xerus): requires kernel patch
Other distributions may work, but we have not tested them.

Warning: Linux kernel versions older than 4.12 contain a bug that will produce incorrect network emulation results. The core node must run on kernel version 4.12 or later. Alternatively, you can try applying the netem bugfix patch manually.
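
To check which kernel version the core node is running, use uname:

uname -r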

The core node requires Open vSwitch 2.1.0 or newer. On Debian and Ubuntu systems, you can install Open vSwitch on the core node using the following command:

sudo apt-get install openvswitch-switch

By default, the Open vSwitch package configures a virtual switch daemon. NetMirage runs its own isolated instance of Open vSwitch, so if you do not need this daemon for other tasks, it is best to disable it:

# For systemd-based distributions:
sudo systemctl disable openvswitch-switch
# For other init systems:
sudo update-rc.d openvswitch-switch disable

Aside from several common C libraries, no additional software is required to run NetMirage. Compiling NetMirage from source requires some additional compilation tools, described below.

In order to configure the virtual network, you will need a GraphML network topology file describing the structure of the network. This file contains information about the network's routers, links, and their associated characteristics. NetMirage supports the topologies used by the Shadow discrete-event simulator. You can also use topologies from the Internet Topology Zoo with some modifications.

Getting NetMirage

Precompiled binaries for NetMirage are not yet available. To use the beta version, you will need to compile from source.

Compiling From Source

First, you must download a copy of the source. You can either download a release tarball, or clone the git repository.

The git repository is expected to be relatively unstable. We recommend that you download tarball releases unless you are a developer. After downloading a tarball, extract it to a new directory to begin the compilation.

Compiling NetMirage requires gcc 4 or newer, and the scons build environment. The code also depends on libxml2 and GLib. On Debian or Ubuntu, you can install these through their associated packages:

sudo apt-get install build-essential scons libglib2.0-dev libxml2-dev

To compile the program, simply enter the directory with the extracted files and run the following command:

scons

Compiled binaries are placed in the bin directory. Intermediate build files are placed in the build directory. If you would like to compile debugging versions of the programs, use the following command instead:

scons debug=1

Once you have finished compiling, copy netmirage-core to the core node, and netmirage-edge to each edge node. We recommend placing these programs in the system PATH for convenience.
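
For example, assuming the core and edge machines are reachable over SSH as core-node and edge1 (hypothetical hostnames), the binaries could be deployed like this:

scp bin/netmirage-core core-node:
scp bin/netmirage-edge edge1:
ssh core-node 'sudo install -m 755 netmirage-core /usr/local/bin/'
ssh edge1 'sudo install -m 755 netmirage-edge /usr/local/bin/'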

Running NetMirage

Running an experiment using NetMirage consists of the following high-level steps:

  1. Start a virtual network in the core node
  2. Configure routing on the edge nodes
  3. Start applications on the edge nodes

NetMirage includes everything necessary for the first two steps. For general IP-based applications, you will need to write your own mechanism for configuring and launching your application. For Tor, we provide the scripts necessary to launch an experiment (described later).

Configuring the Core Node

Warning: The core node requires a dedicated interface for communicating with the edge nodes. If you specify the interface that you are using for SSH access, then your connection will be dropped!

To begin, you need to start a virtual network in the core node. This task is performed by the netmirage-core program. Configuration for this program can be provided in two ways: through the command line, or through a "setup file" (discussed in a later section). To begin, let's look at an example of starting a network from the command line:

sudo netmirage-core --file topology-300.graphml \
--edge-node=192.168.66.11 --iface eth1
The arguments have the following meanings:
  • --file topology-300.graphml: This specifies that the network topology should be read from the topology-300.graphml file. You must specify a network topology file. We will return to these files later in this documentation.
  • --edge-node=192.168.66.11: This specifies that the machine at 192.168.66.11 is an edge node. You can include more than one of these arguments in order to use multiple edge nodes, but you must include at least one.
  • --iface eth1: This specifies that all edge nodes are located on the network attached to the eth1 interface.

Unfortunately, netmirage-core must currently run as the root user in order to manipulate network namespaces (a feature of Linux used to construct virtual networks). This requirement may be relaxed in the future.

When netmirage-core sets up a new network, it first destroys any existing virtual network. To shut down an experiment without starting a new one, run the program with the --destroy argument:

sudo netmirage-core --destroy

There are many additional arguments available. Run netmirage-core --help to see a complete list and documentation. The following arguments are particularly useful:

  • --verbosity=<level>: This adjusts the logging level. Valid levels are debug, info, warning, and error.
  • --log-file=<file>: This causes logs to be written to the specified file rather than stderr.
  • --vsubnet=<CIDR>: This argument changes the subnet of the virtual network. By default, all IP addresses in the network are assigned to the private 10.0.0.0/8 range. You can specify an alternate range, but note that using public addresses will prevent edge nodes from communicating with those addresses in the real Internet. Moreover, you must leave some addresses available in order to construct the virtual network (i.e., choosing 0.0.0.0/0 is not valid, but 0.0.0.0/1 will work for most topology files).
All of the arguments have short versions. The first example could also be written in the following way:
sudo netmirage-core -f topology-300.graphml -e 192.168.66.11 -i eth1
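
As a further illustration, these options can be combined in a single invocation; the log path and alternate subnet below are purely illustrative:

sudo netmirage-core -f topology-300.graphml -e 192.168.66.11 -i eth1 \
--verbosity=debug --log-file=/tmp/netmirage-core.log --vsubnet=172.16.0.0/12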

After the command completes, the virtual network is running. This command may produce output similar to the following:

netmirage-edge -c 300 <iface> <core-ip> 10.0.0.0/8 <applications>
This is the command that you should execute on the edge node, after filling in the missing details. We discuss this in the next section.

Configuring the Edge Node

Once the core node is emulating a virtual network, the next step is to configure the edge node to communicate with the core. Continuing with the previous example, you might do this by running the following command:

sudo netmirage-edge -c 300 eth2 192.168.66.10 10.0.0.0/8 900
The arguments have the following meanings:
  • -c 300 (long name --clients): This specifies the number of "client" nodes in the network topology that were assigned to this edge node by the core node. The value for this parameter should always be retrieved from the output of netmirage-core on the core node. The meaning of this argument is explained in a later section.
  • eth2: Indicates that the core node is located behind the network interface eth2.
  • 192.168.66.10: This is the IP address of the core node.
  • 10.0.0.0/8: This is the virtual address space assigned to this edge node by the core node.
  • 900: This is the maximum number of applications that you intend to run on this edge node. You can specify any number up to the size of the virtual subnet. If you specify max for this argument, then the entire subnet is used. Warning: as this value increases, the command takes longer to run. Start with a small number, and increase it as necessary!
Several additional options are available. Run netmirage-edge --help to see a complete list. One particularly common option is -e (long name --other-edges), which takes a comma-separated list of virtual subnets (in CIDR notation) belonging to other edge nodes. All traffic destined for these addresses will be routed to the core node. If you are running multiple edge nodes, then netmirage-core will include an appropriate -e option in its output that should be given to netmirage-edge.
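
For instance, if the core node's output indicated that another edge node owns the virtual subnet 192.168.200.0/24 (an illustrative value), the command from the previous example might become the following (as always, take the exact values from the netmirage-core output):

sudo netmirage-edge -c 300 eth2 192.168.66.10 10.0.0.0/8 900 \
-e 192.168.200.0/24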

Once this command completes, the edge node is ready to communicate with the core. All packets destined for any of the virtual subnets will be routed through the core node, where they will experience the configured virtual network characteristics. netmirage-edge allocates the specified number of IP addresses for use by client applications (900 in the above example). To see these addresses, run ip addr. Each application should bind to one of these addresses, including when establishing outbound connections.

It is also possible to have netmirage-edge write a list of allocated IP addresses to a text file. This may be more convenient for some experiments. To do so, use the --ip-file=FILE argument, where FILE specifies the file to write.
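
For example (the file path here is only illustrative):

sudo netmirage-edge -c 300 eth2 192.168.66.10 10.0.0.0/8 900 \
--ip-file=/tmp/edge-ips.txt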

Once you are finished with your experiments, you can restore the original configuration by using the --remove argument:

sudo netmirage-edge -c 300 \
eth2 192.168.66.10 10.0.0.0/8 900 --remove
Warning: When using --remove, you must pass exactly the same arguments that were originally used to configure the edge node. If you do not, then the configuration may not be fully reversed.

Running an Experiment

NetMirage is a generic network emulator that is designed to work with any IP-based application. As such, the details of configuring and launching the applications are left to the experimenter. As long as the applications bind to virtual IP addresses assigned by netmirage-edge, everything will work correctly. Connections between applications running on different edge nodes will also work properly.

As a simple example, we can test the latency and bandwidth between two virtual clients. Assuming that 10.0.0.1 and 10.128.0.0 were addresses assigned by netmirage-edge, we can test latency with the following command:

ping 10.0.0.1 -I 10.128.0.0
This will send ICMP pings from 10.128.0.0 to 10.0.0.1 and display packet losses and latency measurements. To test the bandwidth of the same virtual link, we can use the iperf utility included in many distributions:
(In shell 1) iperf -s -B 10.0.0.1
(In shell 2) iperf -c 10.0.0.1 -B 10.128.0.0
The first command launches a server listening on IP address 10.0.0.1. The second command connects from 10.128.0.0 to 10.0.0.1 and measures the bandwidth of the connection.

Running a Tor Experiment

We provide some helper scripts for researchers interested in using NetMirage to test virtual Tor networks. You will need to install Python, Tor, and Chutney. Chutney is a testing platform for Tor that allows you to run many Tor relays and clients on a single machine. Normally, these clients communicate with each other over ideal loopback connections. In this section, we will describe how to use Chutney to set up Tor experiments with traffic emulated by NetMirage.

Currently, there are no standardized tests of Tor networking characteristics. Chutney offers a basic test involving transferring a file between two Tor clients, but this is the extent of the framework. For this reason, any meaningful experiments will involve setting up custom Chutney experiments. This process is beyond the current scope of this documentation—see the Chutney documentation for details.

To begin, download the NetMirage Tor scripts for Python, and extract the archive. If you place the directory somewhere within the PYTHONPATH environment variable, then it will be easier to import into your scripts (in the remainder of the documentation, we assume that it is placed outside the path). We assume that you have already set up a NetMirage core and edge node as described above, and that you have used the --ip-file argument to netmirage-edge in order to save client addresses in a text file. For the purposes of this tutorial, let's assume that your Chutney experiment configuration file looks like this (a copy of the basic experiment included with Chutney):

# By default, Authorities are not configured as exits
Authority = Node(tag="a", authority=1, relay=1, torrc="authority.tmpl")
ExitRelay = Node(tag="r", relay=1, exit=1, torrc="relay.tmpl")
Client = Node(tag="c", torrc="client.tmpl")

# We need 8 authorities/relays/exits to ensure at least 2 get the guard flag
# in 0.2.6
NODES = Authority.getN(3) + ExitRelay.getN(5) + Client.getN(2)

ConfigureNodes(NODES)
The next step is to assign IP addresses within the NetMirage virtual address space to these nodes. This can be done by adding the following snippet:
# By default, Authorities are not configured as exits
Authority = Node(tag="a", authority=1, relay=1, torrc="authority.tmpl")
ExitRelay = Node(tag="r", relay=1, exit=1, torrc="relay.tmpl")
Client = Node(tag="c", torrc="client.tmpl")

# We need 8 authorities/relays/exits to ensure at least 2 get the guard flag
# in 0.2.6
NODES = Authority.getN(3) + ExitRelay.getN(5) + Client.getN(2)

# Omit if you extracted the scripts to the PYTHONPATH
import sys
sys.path.append('/path/to/netmirage_tor')

import netmirage_tor
netmirage_tor.Assign(NODES, '/path/to/ip_addresses.txt')

ConfigureNodes(NODES)

If you do not have a custom Chutney experiment, or if you want to set up a basic network with a set number of authorities, relays, and clients, then you can use the genconf.py script included with the NetMirage Tor package to generate a Chutney configuration file. The following command will create an experiment file with 10 authorities, 50 relays, and 100 clients:

netmirage_tor/genconf.py 10 50 100 \
/path/to/ip_addresses.txt /path/to/new_experiment
This will create a new Chutney experiment file in /path/to/new_experiment.

Warning: You must ensure that the Tor relays created by Chutney are configured to bind to the NetMirage addresses. This means that you must specify ${ip} as part of all *Port directives (e.g., OrPort), and you must also specify ${ip} for OutboundBindAddress. The default Chutney scripts do not configure these settings. If you are using the default templates, you must make the following modifications:

  • chutney/torrc_templates/common.i
    • Add OutboundBindAddress ${ip} anywhere in the file
  • chutney/torrc_templates/relay-non-exit.tmpl
    • Change to OrPort ${ip}:${orport}
    • Change to DirPort ${ip}:${dirport}
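
A minimal way to apply the common.i change from a shell, assuming a standard Chutney checkout in the current directory (the OrPort and DirPort lines in relay-non-exit.tmpl are easiest to edit by hand):

echo 'OutboundBindAddress ${ip}' >> chutney/torrc_templates/common.i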

To start a Chutney experiment, perform the following steps in the Chutney directory:

./chutney configure /path/to/experiment # Creates torrc files in net/nodes
./chutney start /path/to/experiment # Starts the Tor processes

# Perform your experiment here. For example:
./chutney verify /path/to/experiment # Transfer a file between clients

# Shut down the network when you are done
./chutney hup /path/to/experiment
./chutney stop /path/to/experiment
See the Chutney documentation for more information. After starting the experiment, you should make sure that the Tor instances have bound to the correct addresses:
sudo ss -antp | grep LISTEN | grep tor | grep 127.0.0.1
The only entries listed in this output should be SOCKS and control ports (typically in the 8000-9999 range). If any OR ports are listed (typically in the 5000-5999 range), then make sure that you made the necessary modifications to the Chutney templates (see above).

Note that the total number of Tor relays you set up must be less than or equal to the number of IP addresses that you reserved when configuring the NetMirage edge node!

Advanced Usage

Multiple Edge Nodes

Sometimes, you need to provide more information about edge nodes than just their IP address. The following command, executed on the core node, demonstrates a more advanced edge node configuration:

sudo netmirage-core --file topology-300.graphml \
--edge-node=192.168.66.11,iface=eth1,mac=00:01:02:03:04:05,vsubnet=10.0.0.0/8 \
--edge-node=192.168.66.12,iface=eth2,vsubnet=192.168.200.0/24
This command configures the network with two edge nodes. The first is located at 192.168.66.11 behind interface eth1, has MAC address 00:01:02:03:04:05, and applications running on it will be assigned addresses in the 10.0.0.0/8 range. If you do not specify a MAC address for an edge node, then it must be possible for the core node to ping the edge node using ICMP when configuring the network; explicitly specifying a MAC address avoids this requirement (e.g., the edge node may be offline when starting up the network). The second edge node in this example has IP address 192.168.66.12, is located behind interface eth2, and will be given virtual addresses in the 192.168.200.0/24 range. Subnets assigned to different edge nodes do not need to be contiguous, but they must not overlap.

Setup Files

If you find yourself running the same NetMirage commands multiple times, then it may be worthwhile to save your configuration. You can do this by creating a "setup file". Setup files are INI-like files that store NetMirage configurations. By default, NetMirage attempts to read configuration from setup.cfg in the current directory. You can specify a different setup file using the --setup-file (or -s) argument.

A setup file for netmirage-core might look like this:

[edge-1]
ip=192.168.66.11
iface=eth1
mac=00:01:02:03:04:05
vsubnet=10.0.0.0/8

[edge-2]
ip=192.168.66.12
iface=eth2
vsubnet=192.168.200.0/24

[emulator]
file=/path/to/topology.graphml
verbosity=info
Each edge-* section defines the configuration for an edge node in the same way as the --edge-node argument. The emulator section contains other configuration values. You can use any long name for a command-line argument in the emulator section, with the exception of edge-node and setup-file. If this file is saved as setup.cfg, then you can set up the network very easily:
sudo netmirage-core
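
If the setup file is stored elsewhere, point netmirage-core at it explicitly (the path below is only an example):

sudo netmirage-core -s /path/to/core-setup.cfg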

Setup files for netmirage-edge work in a similar way, except that there are no edge-* sections, and the section for defining configuration values is called edge rather than emulator. Edge node setup files can also specify the four non-option arguments. For example:

[edge]
clients=300
iface=eth2
core-ip=192.168.66.10
vsubnet=10.0.0.0/8
applications=900
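
Assuming netmirage-edge also reads setup.cfg from the current directory by default (as netmirage-core does), saving this file as setup.cfg lets you configure the edge node with just:

sudo netmirage-edge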

GraphML Files

GraphML is an XML-based file format for storing graphs (the mathematical concept of vertices connected by edges). A GraphML file stores a set of vertices, a set of edges between them, and attributes about the vertices and edges. For details about the format, see the GraphML Primer.

NetMirage can read GraphML files that store information about network topologies, such as those included with Shadow, the network simulator.

In the GraphML files, each vertex is either a client node or a non-client node. Client nodes are analogous to end-users on the real Internet. Applications running on the edge nodes are each implicitly assigned to a client node. All traffic going to or from an application passes through its associated client node. Non-client nodes represent Autonomous Systems (ASes) on the real Internet.

There are several vertex attributes that describe the connection between a client node and its associated applications. packetloss specifies the loss rate, between 0.0 and 1.0. bandwidthup and bandwidthdown set the upload and download bandwidth of the client. All applications connected to the same client share this bandwidth. The units of the bandwidth values are controlled with the --units argument. Connections between applications connected to the same client will experience the associated bandwidth limitations, but not the latencies.

Edges in the file define links between nodes (both clients and non-clients). Several edge attributes describe the link characteristics. latency and jitter are both specified in milliseconds. packetloss, like for clients, is a rate between 0.0 and 1.0. Finally, queue_len controls the size of the underlying netem queue. The bandwidth and other link properties are shared among all packets transiting the link, regardless of their origin/destination client or application.

Two additional parameters for netmirage-core control GraphML-related values. --weight specifies the name of the edge parameter to use for determining the shortest paths when setting up static routing (it may be set to one of the aforementioned keys, or any custom one). The static routes between client nodes will always take the shortest path in terms of the specified weights. --client-node specifies the value for the type vertex attribute that identifies client nodes (all others will be non-clients).

For performance reasons, NetMirage expects all of the <node> elements to appear before the <edge> elements in the GraphML file, even though this is not a requirement of the GraphML specification. If you are using files that do not satisfy this property, then you can use the --two-pass parameter to process them. Note that using this parameter requires the file to be read twice, so it suffers a performance penalty.
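
To make the expected structure concrete, the following shell snippet writes a tiny two-node topology. The attribute names follow the description above, but the key declarations, the client value for the type attribute, and all numeric values are illustrative assumptions rather than values taken from a real Shadow topology file:

cat > tiny-topology.graphml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<graphml xmlns="http://graphml.graphdrawing.org/xmlns">
  <key id="type" for="node" attr.name="type" attr.type="string"/>
  <key id="packetloss" for="node" attr.name="packetloss" attr.type="double"/>
  <key id="bandwidthup" for="node" attr.name="bandwidthup" attr.type="int"/>
  <key id="bandwidthdown" for="node" attr.name="bandwidthdown" attr.type="int"/>
  <key id="latency" for="edge" attr.name="latency" attr.type="double"/>
  <key id="jitter" for="edge" attr.name="jitter" attr.type="double"/>
  <key id="edgeloss" for="edge" attr.name="packetloss" attr.type="double"/>
  <key id="queue_len" for="edge" attr.name="queue_len" attr.type="int"/>
  <graph edgedefault="undirected">
    <!-- All node elements appear before the edge elements -->
    <node id="client0">
      <data key="type">client</data>
      <data key="packetloss">0.01</data>
      <data key="bandwidthup">10240</data>
      <data key="bandwidthdown">10240</data>
    </node>
    <node id="as0"/>
    <edge source="client0" target="as0">
      <data key="latency">20.0</data>
      <data key="jitter">2.0</data>
      <data key="edgeloss">0.001</data>
      <data key="queue_len">1000</data>
    </edge>
  </graph>
</graphml>
EOF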

Network Manipulation

The NetMirage core program constructs a virtual network using network namespaces—a lightweight virtualization technique in Linux that is a key component of containerization schemes. Each network namespace has a completely independent networking stack (e.g., interfaces, addresses, connection states, routing tables, ARP tables, policy-based routing rules, and more). Although NetMirage communicates directly with the kernel to configure the network (for performance reasons), it does so in a manner that is compatible with the iproute2 suite of system utilities. This means that you can use standard Linux tools to interact with the virtual network after it has been created.

NetMirage creates one network namespace for each node in the network, which we call node namespaces, and an additional namespace called the root namespace (unless the --root-ns parameter is used, as discussed in a later section). Namespaces are connected together using veth (virtual Ethernet) pairs, which are pairs of virtual network interfaces that allow communication between namespaces.

To see the list of namespaces on the system, use ip netns list. To run a command within a namespace, use ip netns exec namespace command. You can use this technique to launch a shell within a namespace:

ip netns exec namespace bash
All namespaces created by NetMirage are given a prefix specified by --netns-prefix; the default prefix is nm-. The ip commands operate in the network namespace of the process that runs them, so to use them in other namespaces, they must be executed through ip netns exec ....

To list all of the network interfaces in the current namespace, use ip link list. You can view the addresses associated with the interfaces using ip addr list. Interfaces may have more than one address assigned. To view the main routing table, use ip route list. To view the policy-based routing rules, use ip rule list. Policy-based routing rules may refer to routing tables other than the main one. You can view a named or numbered routing table using ip route list table table. NetMirage uses some advanced policy-based routing setups for client node namespaces and edge nodes. To manipulate traffic control settings applied to a network interface, use the tc tool. You can view the ARP table for the namespace using ip neigh list.

The primary purpose of node namespaces is to route packets between nodes in the network, as specified by the topology. Each node in the topology is given a number (visible in the debug output of netmirage-core). Using the default prefix, node 0 will be configured in the namespace nm-0. Veth interfaces are named after their destination node. For example, if node 0 is connected to node 1, then nm-0 will have an interface called node-1 connected to an interface called node-0 in nm-1.
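
For example, assuming the default nm- prefix and a topology in which node 0 is connected to node 1, the following commands inspect node 0's namespace:

sudo ip netns list
sudo ip netns exec nm-0 ip addr show node-1
sudo ip netns exec nm-0 ip route list
sudo ip netns exec nm-0 tc qdisc show dev node-1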

Non-client node namespaces accomplish their objectives through basic use of the main routing table. Client node namespaces also contain connections to the root namespace. Traffic going to the client from other clients is sent to the root interface, which is connected to the root namespace. Traffic coming from the client and going to other clients is received from the root interface. Traffic traveling between two applications within the same client comes from and returns to the self interface.

The root namespace is responsible for routing traffic between the virtual network and the edge nodes. All physical interfaces connected to the edge nodes are moved into this namespace in order to isolate them from normal use. NetMirage starts an Open vSwitch instance in the root namespace and configures it to route traffic appropriately. All management files associated with the Open vSwitch instance are stored in the directory specified by the --ovs-dir parameter; by default, the files are stored in /tmp/netmirage/. Before manipulating the Open vSwitch instance, you should export OVS_RUNDIR=/tmp/netmirage. You can then use ovs-ofctl to view and manipulate the static OpenFlow rules. See the man page for usage instructions.
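
A sketch of inspecting the switch, assuming the default --ovs-dir location; nm-bridge below is only a placeholder for whatever bridge name the NetMirage Open vSwitch instance actually uses (the management files in the run directory include the bridge name):

export OVS_RUNDIR=/tmp/netmirage
ls "$OVS_RUNDIR"
sudo -E ovs-ofctl dump-flows nm-bridge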

Advanced Interfaces

Sometimes, you may need to use NetMirage with network interfaces that cannot be moved into a network namespace (e.g., bonded interfaces). In these situations, NetMirage will need to set up the Open vSwitch instance and rules, and endpoints for all veth pairs associated with client nodes, in the init namespace. The init namespace is the one associated with the init process (i.e., it is the "default" network namespace).

To tell NetMirage to make the init namespace the root namespace, use the --root-ns init argument. Note that this will clutter the init namespace, and limits the isolation of the configuration. In other words, this increases the chances that another application or configuration will conflict with the NetMirage setup. This configuration should be avoided when possible, but it is sometimes required. Specifically, if the interfaces connected to the edge nodes cannot be moved between network namespaces (e.g., bonded interfaces and other mechanisms with the NETIF_F_NETNS_LOCAL flag set) then --root-ns init must be specified.
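
For example, if the edge nodes are reached through a bonded interface (bond0 here is only illustrative), the network could be started with:

sudo netmirage-core --file topology-300.graphml \
--edge-node=192.168.66.11 --iface bond0 --root-ns init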

Help!

NetMirage is not fully released and is still under active development. You will encounter bugs. Please submit bug reports and feature requests to the issue tracker.