
Multinet

The goal of Multinet is to provide a fast, controlled and resource-efficient way to boot large-scale SDN topologies. It builds on the Mininet project to emulate SDN networks via multiple isolated topologies, each launched on a separate machine, and all connected to the same controller.

Multinet has been verified with the Lithium release of the OpenDaylight controller, where we managed to boot and connect a topology of 3000+ OVS OF1.3 switches to a single controller instance in less than 10 minutes. The controller was running on a moderately-sized VM (8 vCPUs, 32GB memory) and the Multinet topology on 10 small-sized VMs (1 vCPU, 4GB memory each).

Why isolated topologies?

The main motivation behind Multinet was the ability to stress an SDN controller in terms of its switch scalability limits. In this context, Multinet confines itself to booting topologies that are isolated from each other, without interconnecting them: we believe this policy is simple and good enough to approximate the behavior of large-scale, realistic SDN networks and their interaction with the controller. If creating large-scale interconnected topologies is your primary concern, you might want to look at other efforts such as Maxinet or the Cluster Edition Prototype of Mininet. Multinet instead emphasizes creating scalable pressure on the controller, and provides options to control certain aspects that affect the switch-controller interaction, such as how switches are connected during start-up.

Why multiple VMs?

The cost of booting a large Mininet topology on a single machine grows exponentially with the number of switches. To amortize this cost, we opted to scale out and use multiple VMs to spawn multiple smaller topologies in parallel. Eventually, one of the key questions we try to answer through Multinet is: what is the best time in which a topology of S Mininet switches can be booted, with the least amount of resources?
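As a rough illustration of this scale-out idea (a sketch of our own, not part of Multinet's code), splitting S switches as evenly as possible across N worker machines keeps each Mininet instance small:

```python
def partition_switches(total_switches, num_workers):
    """Split `total_switches` as evenly as possible across `num_workers`
    machines (hypothetical helper, for illustration only)."""
    base, extra = divmod(total_switches, num_workers)
    # the first `extra` workers get one additional switch each
    return [base + 1 if i < extra else base for i in range(num_workers)]

# e.g. the experiment described above: 3000 switches over 10 worker VMs
print(partition_switches(3000, 10))  # each worker boots a 300-switch topology
```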

Features

  • Large-scale SDN networks emulation, using multiple isolated Mininet topologies distributed across multiple VMs
  • Controllable boot-up of switches in groups of configurable size and configurable intermediate delay. This enables studying different policies of connecting large-scale topologies to the controller.
  • Centralized and RESTful control of topologies via a master-worker architecture
  • Well-known topology types offered out-of-the-box (disconnected, linear, ring, mesh)
  • Smooth integration with custom topologies created via the high-level Mininet API, provided their build method is slightly modified
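The group-based boot-up policy from the feature list can be sketched in plain Python. This is an illustrative sketch only: the function and parameter names are ours, not Multinet's actual API.

```python
import time

def boot_in_groups(switches, group_size, delay_sec, boot_fn):
    """Boot `switches` in consecutive groups of `group_size`,
    sleeping `delay_sec` between groups (illustrative sketch)."""
    booted = []
    for start in range(0, len(switches), group_size):
        group = switches[start:start + group_size]
        for sw in group:
            boot_fn(sw)  # e.g. start the switch and connect it to the controller
        booted.append(group)
        if start + group_size < len(switches):
            time.sleep(delay_sec)  # configurable intermediate delay between groups
    return booted

# e.g. 7 switches booted in groups of 3 with no delay
groups = boot_in_groups(list(range(7)), 3, 0, lambda sw: None)
print(groups)  # [[0, 1, 2], [3, 4, 5], [6]]
```

Varying `group_size` and `delay_sec` is what lets you study different policies for connecting large-scale topologies to the controller.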

Multinet Architecture

Getting Started

Environment setup

To use Multinet you should have a distributed environment of machines configured as follows:

  • Software dependencies:
    • Python 2.7
    • bottle, requests and paramiko Python packages
    • a recent version of Mininet (we support 2.2.1rc)
    • Mausezahn, a tool for network traffic generation
  • Connectivity:
    • the machines should be able to communicate with each other
    • the machines should have SSH connectivity

The above software dependencies are installed inside a virtualenv (an isolated Python environment), which is created by the deploy/provision.sh script responsible for the environment setup. In the next sections we demonstrate how to prepare such an environment using Vagrant to provision and boot multiple VMs, and Docker to provision multiple containers.
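A quick way to sanity-check that the listed Python dependencies are visible inside the virtualenv (a small helper of our own, not part of provision.sh):

```python
import importlib.util

def missing_packages(names):
    """Return the subset of `names` that cannot be imported
    from the current Python environment."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# the Python packages the README lists as requirements
print(missing_packages(["bottle", "requests", "paramiko"]))  # [] when all are installed
```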

Environment setup using Vagrant

You can use Vagrant to quickly set up a testing environment. Using the provided Vagrantfile you can boot a configurable number of fully provisioned VMs on a private network and specify their IP scheme.

Under the deploy/vagrant directory we provide scripts and Vagrantfiles to automatically setup a distributed environment of VMs to run Multinet. The steps for this are:

  1. Provision the base box from which VMs will be instantiated:

    [user@machine multinet/]$ cd deploy/vagrant/base/
    

    If you sit behind a proxy, edit the http_proxy variable in the Vagrantfile. Then start provisioning:

    [user@machine multinet/deploy/vagrant/base]$ vagrant up
    

    When the above command finishes, package the base box that has been created:

    [user@machine multinet/deploy/vagrant/base]$ vagrant package --output mh-provisioned.box
    [user@machine multinet/deploy/vagrant/base]$ vagrant box add mh-provisioned mh-provisioned.box
    [user@machine multinet/deploy/vagrant/base]$ vagrant destroy
    

    For more info on Vagrant box packaging, take a look at this guide.

  2. Configure the VMs:

    [user@machine multinet/]$ cd deploy/vagrant/packaged_multi/
    

    Edit the Vagrantfile according to your preferences. For example:

    http_proxy = ''  # if you sit behind a corporate proxy, provide it here
    mh_vm_basebox = 'mh-provisioned' # the name of the Vagrant box we created in step 1
    mh_vm_ram_mini = '2048'  # RAM size per VM
    mh_vm_cpus_mini = '2'    # number of CPUs per VM
    num_multinet_vms = 10    # total number of VMs to boot
    mh_vm_private_network_ip_mini = '10.1.1.70'  # the first IP Address in the mininet VMs IP Address range
    

    Optional configuration: if you need port forwarding from the master guest VM to the host machine, edit these variables inside the Vagrantfile:

    forwarded_ports_master = [] # A list of the ports the guest VM needs
                                # to forward
    forwarded_ports_host = []   # The host ports where the guest ports will be
                                # forwarded to (1 - 1 correspondence)
    # Example:
    #   port 3300 from master VM will be forwarded to port 3300 of
    #   the host machine
    #   port 6634 from master VM will be forwarded to port 6635 of
    #   the host machine
    #   forwarded_ports_master = [3300, 6634]
    #   forwarded_ports_host = [3300, 6635]
    
  3. Boot the VMs:

    [user@machine multinet/deploy/vagrant/packaged_multi]$ vagrant up
    

You should now have a number of interconnected VMs with all the dependencies installed.
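Assuming the Vagrantfile assigns consecutive private-network addresses starting from mh_vm_private_network_ip_mini (an assumption based on the "first IP Address in the range" comment above; the Vagrantfile itself does this in Ruby), the resulting IP scheme can be sketched as:

```python
import ipaddress

def vm_addresses(first_ip, count):
    """Consecutive private-network IPs for `count` VMs, starting at `first_ip`."""
    start = ipaddress.IPv4Address(first_ip)
    return [str(start + i) for i in range(count)]

# e.g. the sample Vagrantfile values: first IP 10.1.1.70, 10 VMs
print(vm_addresses("10.1.1.70", 10))
# ['10.1.1.70', '10.1.1.71', ..., '10.1.1.79']
```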

Environment setup using Docker

In order to create a docker container we must first create an image, which will be used as the base for one or more containers. For building this image we provide a Dockerfile.

  1. Install docker: docker installation guides

  2. Creation of docker image (without proxy settings):

    [user@machine multinet/]$ cd deploy/docker/no_proxy/
    [user@machine multinet/deploy/docker/no_proxy/]$ sudo docker build -t multinet_image .
    

    After this step, when you run the command

    [user@machine multinet/deploy/docker/no_proxy/]$ sudo docker images
    

    you should see output similar to the following:

    REPOSITORY          TAG                   IMAGE ID            CREATED             SIZE
    <repo_name>         multinet_image        a75c906f03c7        1 minute ago        1.72 GB
    
  3. Create containers from the image: open two terminals and execute the following command in each, to create 2 docker containers:

    [user@machine]$ sudo docker run -it <repo_name>:multinet_image /bin/bash
    

    After running the above command in each terminal you should see the container's command prompt, similar to the following:

    root@cfb6dccfc41d:/#
    

    Inside a docker container, Multinet is located under /opt/multinet:

    root@cfb6dccfc41d:/#cd /opt/multinet
    

The 2 containers are interconnected; you can get their IP address information by running:

root@cfb6dccfc41d:/opt/multinet# ifconfig

The default docker network for the containers is 172.17.0.0/16. Use the IP addresses of the docker containers for the "master_ip" and "worker_ip_list" entries of the configuration file; see the Configuration section of Multinet later in this document. For more information about docker container networks, see the Docker documentation ("Understand Docker container networks").
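For example, with two containers on docker's default bridge network, the relevant configuration entries could look like this. A sketch only: the exact configuration file layout is covered in the Configuration section, and the addresses below are illustrative, not fixed.

```python
import json

# illustrative container addresses on docker's default 172.17.0.0/16 bridge;
# "master_ip" / "worker_ip_list" are the config keys mentioned above
config = {
    "master_ip": "172.17.0.2",
    "worker_ip_list": ["172.17.0.2", "172.17.0.3"],
}
print(json.dumps(config, indent=4))
```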

Configuration

To start using Multinet, the first step is to clone the Multinet repository.
