Mortise: Auto-tuning Congestion Control to Optimize QoE via Network-Aware Parameter Optimization (USENIX NSDI 2026)

A real-time, network-aware adaptation framework that dynamically and continuously tunes congestion control algorithm (CCA) parameters to maximize QoE under time-varying network conditions.

Note: Because we are not permitted to publicly release the application models, CCAs, and adjustment strategies used in our production environment, this repository is a prototype that demonstrates, via emulation, how Mortise operates and adjusts CCA parameters to optimize QoE. We provide a file download application that closely resembles the real-world services, along with the corresponding workload traces.

Table of Contents

  • Prerequisites
  • Project Structure
  • Environment Setup
  • Running Basic Experiments
  • Reproducing File Download Emulation Experiments

Prerequisites

  • Host System: Linux (Ubuntu 20.04+ recommended)
  • Hardware: At least 16GB RAM, 6 CPU cores, 50GB free disk space
  • Virtualization: KVM/QEMU support enabled
  • Network: Internet connection for downloading traces and dependencies

Project Structure

├── scripts/                     # Install and setup scripts (-> /home/vagrant/scripts)
├── mortise/                     # Main experiment directory (-> /home/vagrant/mortise)
│   ├── scripts/                 # Evaluation scripts
│   ├── src/                     # Source code for evaluation and framework
│   ├── workload/                # Workload trace directory
│   ├── result/                  # Log and raw output directory
│   └── traces/                  # Network trace directory
├── algorithm/                   # Congestion control algorithms (-> /home/vagrant/algorithm)
│   ├── kern-mod/                # Kernel module algorithms (mvfst, copa)
│   └── bpf-kern/                # BPF-based algorithms
├── Vagrantfile                  # VM configuration

# VM-only directories:
# /home/vagrant/tools/           # Installed tools and dependencies

Note: Directories marked with (-> path) are automatically synced to the specified VM location.

Environment Setup

1. Host Environment Setup

Run the setup script to install Vagrant, libvirt, and required plugins:

bash setup.sh

This script will:

  • Install Vagrant and libvirt
  • Install vagrant plugins: vagrant-rsync-back (for VM-to-host syncing)
  • Add your user to libvirt and kvm groups

Important: After setup completes, you must log out and log back in for the group changes to take effect. Also remember to allow traffic from the VM to your host machine through your firewall.
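If your host uses ufw, a minimal sketch of such a firewall rule looks like the following. Both the use of ufw and the interface name virbr0 (libvirt's default bridge) are assumptions here; verify the bridge name on your system:

```shell
# Assumed setup: ufw firewall, libvirt default bridge "virbr0"
sudo ufw allow in on virbr0   # permit inbound traffic arriving from the VM bridge
sudo ufw reload               # apply the new rule
```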

2. Virtual Machine Setup

Build and provision the VM:

vagrant up

Note: You can change the VM resources in the Vagrantfile if needed, such as CPU cores and memory. Currently, it is configured with 6 CPU cores and 16GB RAM.
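For reference, with the vagrant-libvirt provider the resource settings are typically expressed as below. This is a sketch following vagrant-libvirt conventions; the exact provider block in this repository's Vagrantfile may differ:

```ruby
# Vagrantfile excerpt (sketch): VM resources for the libvirt provider
Vagrant.configure("2") do |config|
  config.vm.provider :libvirt do |lv|
    lv.cpus   = 6       # virtual CPU cores
    lv.memory = 16384   # memory in MB (16 GB)
  end
end
```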

This automatically runs scripts/setup vm-new inside the VM during provisioning.

Access the VM:

vagrant ssh

3. BPF Environment Setup

Inside the VM, setup the BPF development environment:

scripts/setup bpf

4. Mortise Environment Setup

Inside the VM, setup the Mortise experiment environment:

scripts/setup mortise

NOTE: Please log out and log back in after all the setup steps to ensure all environment variables are set correctly and privileges take effect.

5. Baseline Algorithm Setup

Our framework supports any congestion control algorithm that can be configured through the TCP_CONGESTION interface without additional socket settings. The baselines evaluated in the paper fall into three main categories.

Category 1: Kernel Built-in Algorithms

These algorithms are included in the Linux kernel and can be enabled via sysctl (e.g., bbr, vegas):

# Enable BBR
sudo sysctl -w net.ipv4.tcp_congestion_control=bbr

# Enable Vegas
sudo sysctl -w net.ipv4.tcp_congestion_control=vegas

# Check available algorithms
sudo cat /proc/sys/net/ipv4/tcp_available_congestion_control
# Or
# sudo sysctl net.ipv4.tcp_available_congestion_control
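Settings applied with sysctl -w do not survive a reboot. To persist the choice, the standard mechanism is a drop-in file under /etc/sysctl.d (the file name below is arbitrary, not something this repo defines):

```shell
# Persist the congestion control choice across reboots (file name is arbitrary)
echo "net.ipv4.tcp_congestion_control = bbr" | sudo tee /etc/sysctl.d/90-tcp-cca.conf
sudo sysctl --system   # reload all sysctl configuration files
```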

Category 2: Custom Kernel Module Algorithms

These algorithms require compilation and installation as kernel modules (e.g., the Copa (MIT) and Copa (mvfst) variants):

# Navigate to kernel module directory
cd /home/vagrant/algorithm/kern-mod

# Compile and install copa and mvfst
make
sudo make install

Category 3: Machine Learning-based Algorithms

These algorithms use machine learning models and require specific setup procedures (e.g., Antelope, Orca). Currently, we refer readers to the installation and usage guidelines in their official code repositories (e.g., Antelope, Orca, DeepCC). After installation, they can be run in the same manner as the other two categories.


Running Basic Experiments

Environment Preparation

  1. Set up the Python virtual environment (inside the VM, under /home/vagrant):
sudo apt -qq install -y python3-pip python3-venv
python3 -m venv mortise-venv
source mortise-venv/bin/activate
pip install numpy structlog scipy pandas
  2. Build the project (inside the VM, under /home/vagrant/mortise):
cd /home/vagrant/mortise
cargo build --release --all

Executables will be generated in target/release/.

Basic Usage

Although we are not allowed to provide the full set of workloads used for the evaluation in our paper, we offer two workload traces as a demonstration: workload/demo.wk is used for testing functionality and debugging, while workload/app.wk is an anonymized workload record collected from real services in our production environment.

Workload Format: Each workload file contains two columns of numbers representing:

  • Column 1: Inter-request interval in milliseconds (time gap between consecutive requests)
  • Column 2: File size in bytes (download size for each request)
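As a concrete illustration of this format, the snippet below builds a tiny made-up workload file and summarizes it with awk (the values are invented for illustration, not taken from the shipped traces):

```shell
# Create a tiny hypothetical workload: three requests
cat > /tmp/tiny.wk <<'EOF'
0 1048576
200 524288
50 2097152
EOF

# Column 1: ms to wait before issuing the request; column 2: bytes to download.
awk '{wait += $1; bytes += $2} END {printf "total wait: %d ms, total size: %d bytes\n", wait, bytes}' /tmp/tiny.wk
# -> total wait: 250 ms, total size: 3670016 bytes
```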

Below are examples of how to test the basic functionality of the framework using the provided workload traces. You can skip directly to the next section to run the experiments with the encapsulated evaluation scripts.

Server-Client Mode to Test General Algorithms

For evaluations of general congestion control algorithms, you can run the server-client mode as follows:

  1. Start the server (in the background):
./target/release/server &
  2. Run the client with a specific congestion control algorithm:
./target/release/client --output result.csv --congestion bbr --workload workload/demo.wk

This command:

  • Uses BBR congestion control algorithm
  • Follows the workload specification in workload/demo.wk
  • Outputs results to result.csv

Mortise Mode to Test Mortise Framework

For Mortise-specific evaluations, two additional steps are required:

  1. Run the Python preprocessing script (remember to activate mortise-venv first):
python process-report.py
  2. Start the manager (requires privileges):
sudo ./target/release/manager
  3. Run the server and client as described above, changing the --congestion argument to mortise_copa to use the Mortise framework with the Copa algorithm, for example:
./target/release/server &
./target/release/client --output result.csv --congestion mortise_copa --workload workload/demo.wk

Reproducing File Download Emulation Experiments

1. Download Network Traces

Download cellular network traces from Cellular-Traces-NYC:

cd traces/cellular-nyc
bash download.sh

2. Test Mortise Performance

Run multi-threaded parallel tests across 23 traces with given workloads:

cd /home/vagrant/mortise
bash run-mortise.sh

Note: The run-mortise.sh script automatically handles the complete Mortise evaluation workflow:

  • Activates the Python virtual environment
  • Runs python process-report.py
  • Starts the manager process with appropriate privileges
  • Executes parallel measurements across all 23 cellular traces
  • Manages process coordination and cleanup
  • Saves results to the default directory: /home/vagrant/result

3. Test Baseline Algorithms

Note: You have to wait several minutes between consecutive runs of run-mortise.sh or run-baseline.sh for the server address to be released.

Test baseline algorithms (e.g., Cubic or BBR):

bash run-baseline.sh bbr

This script runs the same workloads using standard kernel congestion control algorithms for comparison.

4. Results Analysis

Generate a comprehensive analysis of all results:

python stat.py -i /path/to/your/result

(If -i is omitted, the result directory defaults to ./result.)

This script will read all result files from experiments and generate CSV files containing:

  • Completion time for each download per algorithm
  • Packet loss statistics per session
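To sanity-check such CSV output by hand, a quick awk pass can average completion times per algorithm. The file below is a fabricated example; the real column names produced by stat.py may differ:

```shell
# Fabricated example CSV (column names are assumptions, not stat.py's real schema)
cat > /tmp/completion.csv <<'EOF'
algorithm,completion_ms
bbr,812
bbr,790
mortise_copa,655
mortise_copa,671
EOF

# Mean completion time per algorithm (output order is unspecified)
awk -F, 'NR > 1 {sum[$1] += $2; n[$1]++}
         END {for (a in sum) printf "%s %.1f ms\n", a, sum[a]/n[a]}' /tmp/completion.csv
```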

5. Configuring Experiment Parameters

Quick Results (Default): The framework is configured for fast evaluation with iteration = 1 in exp.toml.

More Stable Results: To achieve more stable and accurate results, readers should conduct the evaluation multiple times as we did in our paper. To do this, modify exp.toml:

iteration = 10

Parallelization Configuration:

  • task in exp.toml: Controls the number of concurrent connections.
