Project
DDoS attack detection using an SVM on SDN networks.
GAR-Project 2019-2020
This repository holds a project created by four students at the University of Alcalá for the fourth-year course Network Management and Administration.
Abstract
The purpose of this project is to develop an artificial intelligence capable of classifying possible DDoS attacks in an SDN network. We use Telegraf as the data collector, Mininet to emulate the SDN network, and InfluxDB and Grafana to store and visualize the data, respectively. For non-English speakers, part of the content of this guide is also available in Spanish:
- Network Scenario - Mininet Guide: Link
- DDoS using hping3 tool Guide: Link
- Mininet Internals (II) Guide: Link
Keywords: DDoS attacks; SDN network; Artificial Intelligence classification; Mininet
Index
- Installation methods :wrench:
- Vagrant
- Native
- Our scenario
- Running the scenario
- Is it working properly?
- Attack time! :boom:
- Time to limit the links
- Getting used to hping3
- Installing things... again! :weary:
- Usage
- Demo time! :tada:
- Traffic classification with a SVM (Support Vector Machine)
- First step: Getting the data collection to work :dizzy_face:
- Second step: Generating the training datasets
- Third step: Putting it all together: src/traffic_classifier.py
- Mininet CLI (Command Line Interface)
- Mininet Internals
- Network Namespaces
- Mininet Internals (II) <a name="mininet_internals_II"></a>
- Is Mininet using Network Namespaces?
- The Big Picture
- How would our Kernel-level scenario look then?
- Troubleshooting
- Appendix <a name="appendix"></a>
- The Vagrantfile
- File descriptors: `stdout` and friends
Notes
Throughout the document we will always be talking about two virtual machines (VMs) on which we implement the scenario under discussion. To keep things simple we have called one VM controller and the other one test. Even though the names may seem kind of random at the moment, we promise they're not. Just keep this in mind as you continue reading.
Installation methods :wrench:
We have created a Vagrantfile through which we provide each machine with the scripts needed to install and configure the scenario. By working in a virtualized environment we make sure we all share the exact same configuration, so tracing and fixing errors becomes much easier. If you do not want to use Vagrant as a provider, you can follow the native installation method we present below.
Vagrant
First of all, clone the repository from GitHub :octocat: and navigate into the new directory with:
```shell
git clone https://github.com/GAR-Project/project
cd project
```
We power up the virtual machine through Vagrant:
```shell
vagrant up
```
We then connect to both machines. Vagrant provides a wrapper for the SSH utility that makes it a breeze to get into each virtual machine. The syntax is just `vagrant ssh <machine_name>`, where `<machine_name>` is given in the Vagrantfile (see the appendix):
```shell
vagrant ssh test
vagrant ssh controller
```
We should now have both machines configured with all the tools needed to bring our network up: Mininet on the test VM and Ryu on the controller VM. This includes every python3 dependency as well as any other needed packages.
Troubleshooting problems regarding SSH
If you have problems connecting via SSH to the machine, check that the keys under `.vagrant/machines/test/virtualbox/` are owned by your user and have read-only permissions for the owner of the key.
```shell
cd .vagrant/machines/test/virtualbox/
chmod 400 private_key
# We could also use this instead of "chmod 400" (u,g,o -> user, group, others)
# chmod u=r,go= private_key
```
Instead of using Vagrant's SSH wrapper, we can do it manually ourselves by passing the path of the private key to SSH. For example:
```shell
ssh -i .vagrant/machines/test/virtualbox/private_key vagrant@10.0.123.2
```
Native
This method assumes you already have the VMs up and running with the correct configuration and dependencies installed. Ideally you should have two VMs: we will run Ryu (the SDN controller) in one of them and Mininet's emulated network in the other. Try to use Ubuntu 16.04 (a.k.a. Xenial) as the VMs' distribution to avoid issues we may not have encountered.
First of all clone the repository, just like how the Kaminoans :alien: do it and then navigate into it:
```shell
git clone https://github.com/GAR-Project/project
cd project
```
Manually launch the provisioning scripts in each machine:
```shell
# To install Mininet, Mininet's dependencies and Telegraf. Run it on the "mininet" VM
sudo ./util/install_mininet.sh
sudo ./util/install_telegraf.sh

# To install Ryu and the monitoring system (Grafana + InfluxDB). Run it on the "controller" VM
sudo ./util/install_ryu.sh
sudo ./util/install_grafana_influxdb.sh
```
Our scenario
Our network scenario is described in the following script: src/scenario_basic.py. Mininet makes use of a Python API to give users the ability to automate processes easily, or to develop certain modules at their convenience. For this and many other reasons, Mininet is a highly flexible and powerful tool for network emulation which is widely used by the scientific community.
- For more information about the API, see its manual.
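To give a feel for that Python API, here is a minimal sketch of what a scenario script could look like: a single switch, two hosts, and a remote controller. The topology, the controller IP, and the port below are illustrative assumptions, not the project's actual values; the real scenario lives in src/scenario_basic.py.

```python
#!/usr/bin/env python3
# Hypothetical minimal Mininet scenario (NOT the project's real one).
# The controller IP/port are placeholders for the controller VM's address.
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.cli import CLI


def run():
    net = Mininet(controller=None)
    # Point Mininet at the Ryu app running on the other VM
    net.addController('c0', controller=RemoteController,
                      ip='10.0.123.1', port=6633)
    s1 = net.addSwitch('s1')
    h1 = net.addHost('h1')
    h2 = net.addHost('h2')
    net.addLink(h1, s1)
    net.addLink(h2, s1)
    net.start()
    CLI(net)   # drop into the Mininet CLI; type "exit" to finish
    net.stop()


if __name__ == '__main__':
    run()
```

Note that Mininet scripts like this one need root privileges and a host with Open vSwitch installed, which is exactly what the provisioning scripts set up.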
The image above presents the logical scenario we will be working with. As in many other areas of networking, this logical picture doesn't correspond to the real implementation we are using. We have seen throughout the installation procedure that we are always talking about two VMs. If you read carefully you'll see that the VMs' names are controller and mininet. So it should come as no surprise that the controller and the network itself live in different machines!
The first question that may arise is how on Earth we can logically join these two together. When working with virtualized environments we generate a virtual LAN where each VM is able to communicate with the others. Once we stop thinking about programs and abstract the idea of a "process", we find that we can easily identify the controller, which is just a Ryu app (nothing more than a python3 app), by the controller VM's IP address and the port number where Ryu is listening. We shouldn't forget that any process running on any host on the entire Internet can be identified by the host's IP address and the process's port number. Isn't it amazing?
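The idea that an (IP address, port) pair pins down a process can be seen with a toy Python example. Everything here (the names, the message) is made up for illustration; it is not project code:

```python
import socket
import threading

# A stand-in "controller" process: it listens on a socket and answers once.
def fake_controller(server_sock):
    conn, _ = server_sock.accept()
    conn.sendall(b"hello from the controller")
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))       # port 0 lets the OS pick a free port
server.listen(1)
host, port = server.getsockname()   # this (IP, port) pair identifies the process

t = threading.Thread(target=fake_controller, args=(server,))
t.start()

# Any other process that knows (host, port) can reach it, just like
# Mininet reaches the Ryu app through the controller VM's IP and port.
client = socket.create_connection((host, port))
reply = client.recv(1024).decode()
client.close()
t.join()
server.close()
print(reply)  # -> hello from the controller
```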
Ok, the above sounds great but... why should we let the controller live in a separate machine when we could have everything in a single one and call it a day? We have our reasons:
- Facilitate teamwork, since the AI's logic goes directly into the controller's VM. This lets us increase both working groups' independence: one may work on the Mininet core and the data collection with Telegraf whilst the other looks into the DDoS attack detection logic and visualization using Grafana and InfluxDB.
- Facilitate the storage of data into InfluxDB from Telegraf, as due to the internal workings of Mininet there may be conflicts in the communication of said data. Mininet's basic low-level operation is detailed below.
- Having two different environments relying on distinct tools and implementing different functionalities lets us identify and debug problems much faster. We know right away which piece of software is causing problems!
Running the scenario
Running the scenario requires being logged into both VMs, either manually or through Vagrant's SSH wrapper. First of all we're going to power up the controller; to do so we run the following from the controller VM. It's an application that performs basic forwarding, which is just what we need:
```shell
ryu-manager ryu.app.simple_switch_13
```
You might prefer to run the controller in the background as it doesn't provide really meaningful information. In order to do so we'll run:
```shell
ryu-manager ryu.app.simple_switch_13 > /dev/null 2>&1 &
```
Let's break this big boy down:
- `> /dev/null` redirects the `stdout` file descriptor to the file `/dev/null`. This is a "special" file in Linux systems that behaves pretty much like a black hole: anything you write to it just "disappears" :open_mouth:. This way we get rid of all the bloat caused by the network startup.
- `2>&1` makes the `stderr` file descriptor point to wherever the `stdout` file descriptor is currently pointing (`/dev/null`). Terminal emulators usually have both `stdout` and `stderr` "going into" the terminal itself, so we need to redirect both to be sure we won't see any output.
- `&` makes the process run in the background, so you'll be given a new prompt as soon as you run the command.
If you want to move the controller app back into the foreground so that you can kill it with Ctrl+C, just run `fg`.
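The same discard-everything redirection can be reproduced from Python, which is handy if you ever launch the controller from a script. This is an illustration, not project code; `subprocess.DEVNULL` plays the role of the `/dev/null` black hole:

```python
import subprocess
import sys

# Spawn a child that writes to both stdout and stderr, and discard both
# streams, exactly like "> /dev/null 2>&1" does in the shell.
result = subprocess.run(
    [sys.executable, "-c",
     "import sys; print('out'); print('err', file=sys.stderr)"],
    stdout=subprocess.DEVNULL,   # like "> /dev/null"
    stderr=subprocess.DEVNULL,   # like "2>&1" once stdout points at /dev/null
)
print(result.returncode)  # -> 0; the child's output never reached our terminal
```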