# Teal: Traffic Engineering Accelerated by Learning

Codebase for Teal (SIGCOMM 2023).
Teal is a learning-accelerated traffic engineering (TE) algorithm for cloud wide-area networks (WANs), published at ACM SIGCOMM '23. By harnessing the parallel processing power of GPUs, Teal achieves unprecedented acceleration of TE control, surpassing production TE solvers by several orders of magnitude while retaining near-optimal flow allocations.
## Getting started

### Hardware requirements

- Linux OS (tested on Ubuntu 20.04, 22.04, and CentOS 7)
- A CPU instance with 16+ cores
- (Optional*) A GPU instance with 24+ GB of memory and CUDA installed

*The baseline TE schemes require only a CPU. Teal also runs on a CPU, but its runtime will be significantly longer than on a GPU.
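As a quick sanity check of these requirements, here is a small Python sketch. It only tests for the presence of the `nvidia-smi` binary; it does not verify GPU memory or the CUDA version.

```python
import os
import platform
import shutil

# Quick sanity checks for the hardware requirements above (a sketch).
print("OS:", platform.system())        # expect "Linux"
print("CPU cores:", os.cpu_count())    # expect 16+
# GPU presence can be approximated by checking for the NVIDIA driver tool:
print("nvidia-smi found:", shutil.which("nvidia-smi") is not None)
```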
### Cloning Teal with submodules

Clone the repository:

```bash
git clone https://github.com/harvard-cns/teal.git
cd teal
```

and update the git submodules with

```bash
git submodule update --init --recursive
```
### Dependencies

- Run `conda env create -f environment.yml` to create a Conda environment with the essential Python dependencies.
- Run `conda activate teal` to activate the Conda environment. All subsequent Python-related steps (e.g., `pip` and `python` commands) must be performed within this Conda environment to ensure correct dependencies.
- Run `pip install -r requirements.txt` to install additional Python dependencies.
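To confirm that subsequent commands actually run inside the `teal` environment, a small sketch (it relies on the `CONDA_DEFAULT_ENV` variable, which `conda activate` sets):

```python
import os
import sys

# Sketch: confirm the `teal` Conda environment is active before running
# any pip/python commands (CONDA_DEFAULT_ENV is set by `conda activate`).
env = os.environ.get("CONDA_DEFAULT_ENV")
print("Active conda env:", env)          # expect "teal"
print("Python executable:", sys.executable)
```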
#### Dependencies only required for baselines

- Install `make`, e.g., via `sudo apt install build-essential` on Ubuntu.
- Acquire a Gurobi license from Gurobi and activate it with `grbgetkey [gurobi-license]`. Run `gurobi_cl` to verify the activation.
#### Dependencies only required for Teal

- If on a GPU instance, run `nvcc --version` to identify the installed version of CUDA.
  - Note: when following the next steps to install `torch`, `torch-scatter`, and `torch-sparse`, it may be fine to select a build for a different CUDA version than the one reported by `nvcc`, provided that this CUDA version is supported by the GPU driver (as shown in `nvidia-smi`).
- Follow the official instructions to install PyTorch via pip, based on the execution environment (CPU, or GPU with a specific version of CUDA).
  - Example: install PyTorch 1.10.1 for CUDA 11.1 on a GPU instance with
    `pip install torch==1.10.1+cu111 torchvision==0.11.2+cu111 torchaudio==0.10.1 -f https://download.pytorch.org/whl/cu111/torch_stable.html`,
    then run `python -c "import torch; print(torch.cuda.is_available())"` to verify the installation.
  - Example: install PyTorch 1.10.1 on a CPU instance with
    `pip install torch==1.10.1+cpu torchvision==0.11.2+cpu torchaudio==0.10.1 -f https://download.pytorch.org/whl/cpu/torch_stable.html`,
    then run `python -c "import torch; print(torch.__version__)"` to verify the installation.
- Install the PyTorch extension libraries `torch-scatter` and `torch-sparse`:
  - First, identify the appropriate archive URL here based on the PyTorch and CUDA versions, e.g., copy the link of `torch-1.10.1+cu111` for PyTorch 1.10.1 and CUDA 11.1.
  - Run `pip install --no-index torch-scatter torch-sparse -f [archive URL]`, replacing `[archive URL]` with the copied archive URL.
    - Example: on a GPU instance with PyTorch 1.10.1 and CUDA 11.1, `pip install --no-index torch-scatter torch-sparse -f https://data.pyg.org/whl/torch-1.10.1%2Bcu111.html`
    - Example: on a CPU instance with PyTorch 1.10.1, `pip install --no-index torch-scatter torch-sparse -f https://data.pyg.org/whl/torch-1.10.1%2Bcpu.html`
  - Run `python -c "import torch_scatter; print(torch_scatter.__version__)"` and `python -c "import torch_sparse; print(torch_sparse.__version__)"` to verify the installation.
  - Troubleshooting: refer to the Installation from Source section.
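The scatter reduction that `torch-scatter` provides can be illustrated in pure Python. The snippet below is a sketch of the semantics only; the real `torch_scatter.scatter` operates on (possibly GPU-resident) tensors.

```python
# Pure-Python sketch of the scatter-sum operation that torch-scatter
# provides (torch_scatter.scatter with reduce="sum"); illustrative only.
def scatter_sum(src, index, dim_size):
    """Sum src[i] into out[index[i]] for each position i."""
    out = [0.0] * dim_size
    for value, idx in zip(src, index):
        out[idx] += value
    return out

# Each source value is routed to the output slot named by `index`.
print(scatter_sum([1.0, 2.0, 3.0, 4.0], [0, 1, 0, 2], dim_size=3))
# -> [4.0, 2.0, 4.0]
```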
## Code structure

```
.
├── lib                  # source code for Teal (details in lib/README.md)
├── pop-ncflow-lptop     # submodule for baselines
│   ├── benchmarks       # test code for baselines
│   ├── ext              # external code for baselines
│   └── lib              # source code for baselines
├── run                  # test code for Teal
├── topologies           # network topologies with link capacities (e.g., B4.json)
│   └── paths            # paths in topologies (auto-generated if not existent)
└── traffic-matrices     # TE traffic matrices
    ├── real             # real traffic matrices from abilene.txt in Yates
    │                    # (https://github.com/cornell-netlab/yates)
    │                    # (e.g., B4.json_real_0_1.0_traffic-matrix.pkl)
    └── toy              # toy traffic matrices (e.g., ASN2k.json_toy_0_1.0_traffic-matrix.pkl)
```
Note: As we are not allowed to share the proprietary traffic data from Microsoft WAN (or the Teal model trained on that data), we mapped the publicly accessible Yates traffic data to the B4 topology to facilitate code testing. For the other topologies (UsCarrier, Kdl, and ASN), we synthetically generated "toy" traffic matrices due to their larger sizes.
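The traffic-matrix `.pkl` files can be inspected with Python's `pickle` module. The snippet below is a hypothetical sketch: it assumes each file unpickles to an N x N demand matrix (entry `[i][j]` = demand from node i to node j), which is not verified here, and it demonstrates on a stand-in matrix rather than a repo file.

```python
import pickle

# Hypothetical sketch of inspecting a pickled traffic matrix such as
# B4.json_real_0_1.0_traffic-matrix.pkl. We assume (not verified here)
# that each file holds an N x N demand matrix.
def load_traffic_matrix(path):
    with open(path, "rb") as f:
        return pickle.load(f)

# Demo with a stand-in 3-node matrix instead of an actual repo file:
demo = [[0.0, 5.0, 1.0],
        [2.0, 0.0, 3.0],
        [4.0, 6.0, 0.0]]
with open("/tmp/demo-traffic-matrix.pkl", "wb") as f:
    pickle.dump(demo, f)

tm = load_traffic_matrix("/tmp/demo-traffic-matrix.pkl")
total_demand = sum(sum(row) for row in tm)
print(total_demand)  # -> 21.0
```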
## Evaluating Teal

To evaluate Teal on the B4 topology:

```
$ cd ./run
$ python teal.py --obj total_flow --topo B4.json --epochs 3 --admm-steps 2
Loading paths from pickle file ~/teal/topologies/paths/path-form/B4.json-4-paths_edge-disjoint-True_dist-metric-min-hop-dict.pkl
path_dict size: 132
Creating model teal-models/B4.json_flowGNN-6_std-False.pt
Training epoch 0/3: 100%|█████████████████████████████████| 1/1 [00:01<00:00, 1.63s/it]
Training epoch 1/3: 100%|█████████████████████████████████| 1/1 [00:00<00:00, 2.45it/s]
Training epoch 2/3: 100%|█████████████████████████████████| 1/1 [00:00<00:00, 2.61it/s]
Testing: 100%|████████████████| 8/8 [00:00<00:00, 38.06it/s, runtime=0.0133, obj=0.9537]
```
To show explanations of the input parameters:

```bash
$ python teal.py --help
```
Results will be saved in:

- `teal-total_flow-all.csv`: performance numbers
- `teal-logs`: directory with TE solution matrices
- `teal-models`: directory where trained models are saved when `--model-save True` is set
Realistic traffic matrices are only available for B4 (please refer to the note above). For the other topologies, i.e., UsCarrier (`UsCarrier.json`), Kdl (`Kdl.json`), and ASN (`ASN2k.json`), use the "toy" traffic matrices we generated (taking UsCarrier as an example):

```bash
$ python teal.py --obj total_flow --topo UsCarrier.json --tm-model toy --epochs 3 --admm-steps 2
```
## Evaluating baselines

Teal is compared with the following baselines:

- LP-all (`path_form.py`): solves the TE optimization problem for all demands using linear programming (implemented in Gurobi)
- LP-top (`top_form.py`): allocates the top α% of demands (α=10 by default) with an LP solver and assigns the remaining demands to their shortest paths
- NCFlow (`ncflow.py`): the NCFlow algorithm from the NSDI '21 paper "Contracting Wide-area Network Topologies to Solve Flow Problems Quickly"
- POP (`pop.py`): the POP algorithm from the SOSP '21 paper "Solving Large-Scale Granular Resource Allocation Problems Efficiently with POP"
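To illustrate the demand-splitting idea behind LP-top, here is a small hypothetical sketch (not the repo's implementation): rank demands by volume, hand the largest α% to the LP solver, and leave the rest for shortest-path routing. The `split_demands` helper and its dict-based demand representation are illustrative assumptions.

```python
# Hypothetical sketch of LP-top's demand split (not the repo's code):
# the largest alpha% of demands go to the LP solver; the rest are
# assigned to their shortest paths.
def split_demands(demands, alpha=10):
    """demands: dict mapping (src, dst) -> demand volume."""
    ranked = sorted(demands, key=demands.get, reverse=True)
    k = max(1, len(ranked) * alpha // 100)
    top = set(ranked[:k])    # solved exactly with the LP
    rest = set(ranked[k:])   # routed on shortest paths
    return top, rest

demands = {("a", "b"): 90, ("a", "c"): 5, ("b", "c"): 3,
           ("c", "a"): 2, ("b", "a"): 40, ("c", "b"): 1}
top, rest = split_demands(demands, alpha=10)  # 6 demands -> top 1
print(sorted(top))  # -> [('a', 'b')]
```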
To evaluate the baselines on B4, run the following commands from the project root:

```bash
$ cd ./pop-ncflow-lptop/benchmarks
$ python path_form.py --obj total_flow --topos B4.json
$ python top_form.py --obj total_flow --topos B4.json
$ python ncflow.py --obj total_flow --topos B4.json
$ python pop.py --obj total_flow --topos B4.json --algo-cls PathFormulation --split-fractions 0.25 --num-subproblems 4
```
Results will be saved in:

- `path-form-total_flow-all.csv`, `top-form-total_flow-all.csv`, `ncflow-total_flow-all.csv`, `pop-total_flow-all.csv`: performance numbers
- `path-form-logs`, `top-form-logs`, `ncflow-logs`, `pop-logs`: directories with TE solution matrices
To test on UsCarrier (`UsCarrier.json`), Kdl (`Kdl.json`), or ASN (`ASN2k.json`), specify the "toy" traffic matrices we generated (taking UsCarrier as an example):

```bash
$ python path_form.py --obj total_flow --tm-models toy --topos UsCarrier.json
$ python top_form.py --obj total_flow --tm-models toy --topos UsCarrier.json
$ python ncflow.py --obj total_flow --tm-models toy --topos UsCarrier.json
$ python pop.py --obj total_flow --tm-models toy --topos UsCarrier.json --algo-cls PathFormulation --split-fractions 0.25 --num-subproblems 4
```
## Extending Teal

To add another TE implementation to this repo:

- If the implementation is based on linear programming or Gurobi, add test code to `./pop-ncflow-lptop/benchmarks/` and source code to `./pop-ncflow-lptop/lib/algorithms`. Code in `./pop-ncflow-lptop/lib` (e.g., `lp_solver.py`, `traffic_matrix.py`) and `./pop-ncflow-lptop/benchmarks` (e.g., `benchmark_helpers.py`) is reusable.
- If the implementation is based on machine learning, add test code to `./run/` and source code to `./lib/`. Code in `./lib/` (e.g., `teal_env.py`, `utils.py`) and `./run/` (e.g., `teal_helpers.py`) is reusable.
## Citation

If you use our code in your research, please cite our paper:

```bibtex
@inproceedings{teal,
  title={Teal: Learning-Accelerated Optimization of WAN Traffic Engineering},
  author={Xu, Zhiying and Yan, Francis Y. and Singh, Rachee and Chiu, Justin T. and Rush, Alexander M. and Yu, Minlan},
  booktitle={Proceedings of the ACM SIGCOMM 2023 Conference},
  pages={378--393},
  month=sep,
  year={2023}
}
```