# SPRIGHT
SPRIGHT is a lightweight, high-performance serverless framework that exploits shared memory processing to improve the scalability of the dataplane of serverless function chains by avoiding unnecessary networking overheads.
For more information, please refer to:
- SIGCOMM 2022: SPRIGHT: Extracting the Server from Serverless Computing! High-performance eBPF-based Event-driven, Shared-memory Processing
## Installation guide (on Cloudlab)
This guide is mainly for deploying SPRIGHT on NSF Cloudlab. We focus on a single-node deployment to demonstrate the shared-memory processing supported by SPRIGHT. SPRIGHT currently offers several deployment options: process on bare metal (POBM mode), Kubernetes pod (K8S mode), and Knative functions (Kn mode).
Follow the steps below to set up SPRIGHT:
- Creating a 2-node cluster on Cloudlab
- Upgrading kernel & Installing SPRIGHT dependencies
- Setting up Kubernetes & Knative
- Setting up SPRIGHT
## SIGCOMM artifact evaluation
To reproduce the experiments in our paper, please refer to commit 98434fd.
## Publication

```bibtex
@inproceedings{spright-sigcomm22,
  author = {Qi, Shixiong and Monis, Leslie and Zeng, Ziteng and Wang, Ian-chin and Ramakrishnan, K. K.},
  title = {SPRIGHT: Extracting the Server from Serverless Computing! High-Performance EBPF-Based Event-Driven, Shared-Memory Processing},
  year = {2022},
  isbn = {9781450394208},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3544216.3544259},
  doi = {10.1145/3544216.3544259},
  booktitle = {Proceedings of the ACM SIGCOMM 2022 Conference},
  pages = {780--794},
  numpages = {15},
  keywords = {event-driven, eBPF, function chain, serverless},
  location = {Amsterdam, Netherlands},
  series = {SIGCOMM '22}
}
```
<!-- ## Description
This artifact runs on c220g5 nodes on NSF Cloudlab.
## 1. Starting up a 2-node cluster on Cloudlab
1. The following steps require a **bash** environment. Please configure the default shell in your CloudLab account to be bash; for how to do this, please refer to the "Choose your shell" post: https://www.cloudlab.us/portal-news.php?idx=49
2. When starting a new experiment on Cloudlab, select the **small-lan** profile
3. In the profile parameterization page,
- Set **Number of Nodes** as **2**
- Set OS image as **Ubuntu 20.04**
- Set physical node type as **c220g5**
- Please check **Temp Filesystem Max Space**
- Keep **Temporary Filesystem Mount Point** as default (**/mydata**)
4. Wait for the cluster to be initialized (this may take 5 to 10 minutes)
5. Work under the extra filesystem mounted at **/mydata**, since Cloudlab allocates only 16 GB of root disk space.
On the master node (**node-0**) and the worker node (**node-1**), run
```bash
sudo chown -R $(id -u):$(id -g) /mydata
cd /mydata
git clone https://github.com/ucr-serverless/spright.git
export MYMOUNT=/mydata
```
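Since `export MYMOUNT=/mydata` only lasts for the current shell session and the node is rebooted in the next step, you may want to persist it. This is purely a convenience sketch; the later steps re-export the variable explicitly anyway:

```bash
# Persist the mount-point variable across logins and reboots (optional)
echo 'export MYMOUNT=/mydata' >> ~/.bashrc
# Confirm the line was appended
grep 'MYMOUNT' ~/.bashrc
```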
## 2. Install SPRIGHT on the master node (**node-0**)
### Update Kernel of the master node
```bash
$ cd /mydata/spright
spright$ ./sigcomm-experiment/env-setup/001-env_setup_master.sh
# The master node will be rebooted after the script is complete
# Rebooting usually takes 5 - 10 minutes
```
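After the node comes back up, it can help to sanity-check that the upgraded kernel is running. The exact version string depends on what `001-env_setup_master.sh` installs, so treat this as a quick check rather than an expected output:

```bash
# Print the running kernel release; it should reflect the upgrade
uname -r
```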
### Log back in to the master node (**node-0**) after it reboots
### Install libbpf, DPDK, and SPRIGHT
```bash
$ cd /mydata/spright
spright$ ./sigcomm-experiment/env-setup/002-env_setup_master.sh
```
## 3. Install Kubernetes control plane and Knative
### Setting up the Kubernetes master node (**node-0**).
```bash
$ cd /mydata/spright
spright$ export MYMOUNT=/mydata
spright$ ./sigcomm-experiment/env-setup/100-docker_install.sh
spright$ source ~/.bashrc
spright$ ./sigcomm-experiment/env-setup/200-k8s_install.sh master 10.10.1.1
## Once the Kubernetes control plane is installed,
## it prints a `kubeadm join ...` command containing a token.
## **PLEASE copy and save this command somewhere.**
## The worker node (**node-1**) needs it to join the Kubernetes control plane.
spright$ echo 'source <(kubectl completion bash)' >>~/.bashrc && source ~/.bashrc
```
### Setting up the Kubernetes worker node (**node-1**).
```bash
$ cd /mydata/spright
spright$ export MYMOUNT=/mydata
spright$ ./sigcomm-experiment/env-setup/100-docker_install.sh
spright$ source ~/.bashrc
spright$ ./sigcomm-experiment/env-setup/200-k8s_install.sh slave
# Join the Kubernetes control plane using the token returned by the master
# node (**node-0**): run the saved `kubeadm join ...` command with *sudo*
spright$ sudo kubeadm join <control-plane-token>
```
### Enable pod placement on master node (**node-0**) and taint worker node (**node-1**):
```bash
$ cd /mydata/spright
spright$ ./sigcomm-experiment/env-setup/201-taint_nodes.sh
```
### Setting up Knative.
1. On the master node (**node-0**), run
```bash
$ cd /mydata/spright
spright$ ./sigcomm-experiment/env-setup/300-knative_install.sh
```
## 4. Experiment Workflow
Note: For quick demonstration and testing, we run the SPRIGHT components directly as binaries. To run SPRIGHT as a function pod, please refer to
### 1 - Online boutique.
**Check-List**:
- Program: S-SPRIGHT, D-SPRIGHT, Knative function, gRPC (Kubernetes pod)
- Metrics: RPS, Latency and CPU usage
- Time needed to complete experiments: 2 hours
#### 1.0 Install dependencies of the Locust load generator on the worker node (**node-1**)
> **Worker node (node-1) operations**
Install the Locust load generator
```bash
$ cd /mydata/spright/sigcomm-experiment/env-setup/
env-setup$ ./400-install_locust.sh
```
#### 1.1 Run Online boutique using S-SPRIGHT (SKMSG)
> **Master node (node-0) operations**
```bash
$ cd /mydata/spright/sigcomm-experiment/expt-1-online-boutique/
expt-1-online-boutique$ ./run_spright.sh s-spright
# After the experiment is done (~3 minutes)
# Enter Ctrl+B then D to detach from tmux
```
---
> **Worker node (node-1) operations**
```bash
$ cd /mydata/spright/sigcomm-experiment/expt-1-online-boutique/
expt-1-online-boutique$ ./run_load_generators.sh spright 10.10.1.1 8080
# After the experiment is done (~3 minutes)
# Enter Ctrl+B then D to detach from tmux
```
---
> **Consolidate metric files generated by Locust workers on the worker node (node-1)**
```bash
# Make sure you are on the worker node
$ cd /mydata/spright/sigcomm-experiment/expt-1-online-boutique/
expt-1-online-boutique$ ./consolidate_locust_stats.sh s-spright
```
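For a quick look at the consolidated numbers without leaving the terminal, `awk` works on Locust-style CSV output. The file name and column layout below are hypothetical stand-ins; substitute whatever `consolidate_locust_stats.sh` actually produces:

```bash
# Hypothetical sample in a Locust-like stats CSV shape
printf 'Name,Requests,Failures\n/cart,100,0\n/checkout,50,1\n' > sample_stats.csv
# Sum the request counts (2nd field), skipping the header row
awk -F',' 'NR > 1 { total += $2 } END { print total }' sample_stats.csv   # prints 150
```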
#### 1.2 Run Online boutique using D-SPRIGHT (DPDK's RTE rings)
> **Master node (node-0) operations**
```bash
$ cd /mydata/spright/sigcomm-experiment/expt-1-online-boutique/
expt-1-online-boutique$ ./run_spright.sh d-spright
# After the experiment is done (~3 minutes)
# Enter Ctrl+B then D to detach from tmux
```
---
> **Worker node (node-1) operations**
```bash
$ cd /mydata/spright/sigcomm-experiment/expt-1-online-boutique/
expt-1-online-boutique$ ./run_load_generators.sh spright 10.10.1.1 8080
# After the experiment is done (~3 minutes)
# Enter Ctrl+B then D to detach from tmux
```
---
> **Consolidate metric files generated by Locust workers on the worker node (node-1)**
```bash
# Make sure you are on the worker node
$ cd /mydata/spright/sigcomm-experiment/expt-1-online-boutique/
expt-1-online-boutique$ ./consolidate_locust_stats.sh d-spright
```
#### 1.3 Run Online boutique using Knative
> **Master node (node-0) operations**
**Step-1**: start Knative functions
```bash
$ cd /mydata/spright/sigcomm-experiment/expt-1-online-boutique/
expt-1-online-boutique$ python3 hack/hack.py && cd /mydata/spright/
spright$ kubectl apply -f sigcomm-experiment/expt-1-online-boutique/manifests/knative
# Get the IP of Istio Ingress
spright$ kubectl get po -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].status.hostIP}'
# Get the Port of Istio Ingress
spright$ kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}'
# Record the IP of the parking proxy plus the IP and port of the Istio Ingress;
# the load generator on the worker node will use them
```
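Before switching to the worker node, it can help to stash the two recorded values in shell variables matching the ones the load-generator command expects. The values below are placeholders; use the IP and nodePort actually printed by the commands above:

```bash
ISTIO_INGRESS_GW_IP=10.10.1.1     # placeholder; use the IP printed above
ISTIO_INGRESS_GW_PORT=31380       # placeholder; use the nodePort printed above
# Base URL the load generator will target
echo "http://${ISTIO_INGRESS_GW_IP}:${ISTIO_INGRESS_GW_PORT}"
```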
**Step-2**: Start CPU usage collection
```bash
$ cd /mydata/spright/sigcomm-experiment/expt-1-online-boutique/
expt-1-online-boutique$ ./run_kn.sh
```
---
> **Worker node (node-1) operations**
```bash
$ cd /mydata/spright/sigcomm-experiment/expt-1-online-boutique/
# Please use the IP and Port obtained on master node (Step-1 of Knative online boutique)
expt-1-online-boutique$ ./run_load_generators.sh kn $ISTIO_INGRESS_GW_IP $ISTIO_INGRESS_GW_PORT
```
---
> **Consolidate metric files generated by Locust workers on the worker node (node-1)**
```bash
# Make sure you are on the worker node
$ cd /mydata/spright/sigcomm-experiment/expt-1-online-boutique/
expt-1-online-boutique$ ./consolidate_locust_stats.sh kn
```
#### 1.4 Run Online boutique using gRPC
> **Master node (node-0) operations**
**Step-1**: start gRPC functions
```bash
$ cd /mydata/spright
spright$ kubectl apply -f sigcomm-experiment/expt-1-online-boutique/manifests/kubernetes
# Get the IP of the frontend pod
spright$ kubectl get po -l app=frontend -o wide
# Record the frontend pod's IP; the load generator on the worker node will use it
```
**Step-2**: Start CPU usage collection
```bash
$ cd /mydata/spright/sigcomm-experiment/expt-1-online-boutique/
expt-1-online-boutique$ ./run_grpc.sh
```
---
> **Worker node (node-1) operations**
```bash
$ cd /mydata/spright/sigc