Octopoda
🐙 A lightweight multi-node scenario management platform with node monitoring, script execution, scenario deployment, version control, file distribution, etc.
🐙 Octopoda is a lightweight multi-node scenario management platform. It is not a lightweight K8s. It was originally designed for managing Lab101's ICN application scenarios (though it can do more than that), which require executing low-level commands on nodes, such as inserting a kernel driver module. Note that Octopoda is not safe enough to deploy in an unfamiliar network environment.
Features of Octopoda:
- Simple topology with NAT reverse path.
- Works out of the box.
- Robustness: automatic retry and automatic reboot.
- Node status monitoring.
- Customized, automated scenario deployment.
- Scenario/Application version control.
- Scenario/Application durability.
- Centralized file management and distribution.
- Centralized script execution.
- Log management.
- Fast SSH login.
- Golang/C/Python SDK.
Table of Contents
- Octopoda
- Table of Contents
- Concepts
- Quick Start
- Octl Command Manual
- Scenario Example
- Octl SDK
Concepts
Topology
+-----------------+
| NameServer |
+-----------------+ HTTP
| HTTPS |-------------+
SSH .---------->-----+--|--<-reverse----|-<-+------<--|--+------ ...
| | | | | | |
+---------+ CLI +--------+ HTTPS +-x-------+ TLS +----------+ +----------+
| TermUser| <===> | Octl | <--+---> | Brain | <-----> | Tentacle | | Tentacle | ...
+---------+ + ---- + | +---------+ +----------+ +----------+
+---------+ SDK | Go/C/Py| <--+ |HTTP | |
|Developer| <===> | Client | +---------+ +---------+ +---------+
+---------+ +--------+ | Pakma | | Pakma | | Pakma |
+---------+ +---------+ +---------+
\-----------/ \-----------------------------/
Master Node Controlled Networks
SAN Model
The SAN model is the working model of Octopoda: S stands for Scenario, A for Application, and N for Node. The model has the following features:
- An Octopoda network can manage multiple scenarios.
- Each scenario is made of multiple applications.
- Each application can be run on multiple nodes.
- Each application can be identified uniquely by (application name, scenario name).
- Each node can run multiple applications.
- Each application running on a specific node is called a NodeApp, and it can be identified uniquely by (node name, application name, scenario name).
- Version control operates at the granularity of scenarios and NodeApps. A version update of a NodeApp automatically triggers a version update of the scenario it belongs to.
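To make the identification rules above concrete, here is a toy sketch in shell. The scenario, application, and node names (`icn-demo`, `router`, `cache`, `pi0`, `pi1`) are all made up for illustration; only the (node, app, scenario) tuple structure comes from the model:

```shell
# Hypothetical SAN example: one scenario, two applications,
# each application mapped to a set of nodes.
scenario="icn-demo"
apps="router cache"
nodes_router="pi0 pi1"
nodes_cache="pi1"

san_nodeapps() {
  for app in $apps; do
    # look up the node set of this application
    eval nodes=\"\$nodes_$app\"
    for n in $nodes; do
      # each NodeApp is uniquely identified by (node name, app name, scenario name)
      echo "NodeApp: ($n, $app, $scenario)"
    done
  done
}
san_nodeapps
# prints:
# NodeApp: (pi0, router, icn-demo)
# NodeApp: (pi1, router, icn-demo)
# NodeApp: (pi1, cache, icn-demo)
```

Note that `pi1` appears twice: the same node running two applications yields two distinct NodeApps, which is exactly why the node name alone is not a sufficient identifier.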
Quick Start
# 1 Generate keys and certificates
cd httpNameServer
bash ./CertGen.sh ca helloCa # generate your ca key and cert ( keep your ca.key very safe! )
bash ./CertGen.sh server helloNs 1.1.1.1 # for your httpsNameServer
bash ./CertGen.sh client helloBrain # for your Brain
bash ./CertGen.sh client helloTentacle # for your Tentacle
cp ca.pem server.key server.pem /etc/octopoda/cert/ # copy to your httpsNameServer
cp ca.pem client.key client.pem /etc/octopoda/cert/ # copy to your Brain and Tentacle
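After generating the keys and certificates, it may be worth checking that the server and client certificates really chain to your CA before copying them around. The sketch below uses plain openssl (CertGen.sh's internals may differ) to build a throwaway CA and a signed server cert, then verifies the chain; the same `openssl verify` call works on the files CertGen.sh produced:

```shell
# Generic openssl equivalent of a CA + signed server cert, for illustration.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.pem \
    -subj "/CN=helloCa" -days 1
openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
    -subj "/CN=helloNs"
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca.key \
    -CAcreateserial -out server.pem -days 1
# check the chain: server.pem must be signed by ca.pem
openssl verify -CAfile ca.pem server.pem
# prints: server.pem: OK
```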
# 2 Install httpsNameServer on a machine
sudo systemctl start redis-server
tar -Jxvf httpns_v1.5.2_linux_amd64.tar.xz
cd httpns_v1.5.2_linux_amd64
# then modify httpns.yaml if you want
# run it foreground
sudo ./httpns -p
# or install it and start background
sudo bash setup.sh
# 3 Install Brain on a machine
sudo systemctl start redis-server
tar -Jxvf brain_v1.5.2_linux_amd64.tar.xz
cd brain_v1.5.2_linux_amd64
# then modify brain.yaml if you want (httpsNameServer, name, nic is important)
# run it foreground
sudo ./brain -p
# or install it and start background
sudo bash setup.sh
# 4 Set root workgroup password on Brain machine (for step 7)
redis-cli
127.0.0.1:6379> set info: yourpass
# 5 Install Tentacle on a machine
tar -Jxvf tentacle_v1.5.2_linux_amd64.tar.xz
cd tentacle_v1.5.2_linux_amd64
# then modify tentacle.yaml if you want (httpsNameServer, name, brain is important)
# run it foreground
sudo ./tentacle -p
# or install it and start background
sudo bash setup.sh
# 6 Install Pakma on your Brain or Tentacle machine (optional, only for online upgrade)
tar -Jxvf pakma_v1.5.2_linux_amd64.tar.xz
cd pakma_v1.5.2_linux_amd64
# make sure pakma is installed after Brain or Tentacle
# install it for your Brain
sudo bash setup.sh brain
# or install it for your Tentacle
sudo bash setup.sh tentacle
# 7 Install Octl
cd octl_v1.5.2_linux_amd64
# then modify octl.yaml. (workgroup.root="", workgroup.password="yourpass")
sudo cp octl.yaml /etc/octopoda/octl/
sudo cp octl /usr/local/bin/
# 8 Hello World
$ octl node get
# {
# "nodes": [
# {
# "name": "pi0",
# "addr": "192.168.1.4",
# "state": "online",
# "delay": "3ms",
# "online_time": "2m42.483697064s"
# }
# ],
# "total": 1,
# "active": 1,
# "offline": 0
# }
$ octl node get | grep name | awk '{print $2}' | sed 's/"//g' | sed -z 's/\n/ /g' | sed 's/,//g'
# you may get: pi02 pi05 pi06 pi08
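The grep/awk/sed chain above depends on the exact JSON layout, so it is worth trying offline first. The sketch below runs an equivalent extraction against a sample of the `octl node get` output shape from step 8 (node names and addresses here are illustrative); a real JSON parser such as jq would be more robust if it is available:

```shell
# Offline sanity check: extract node names from sample JSON,
# no running Octopoda network needed.
json='{
  "nodes": [
    { "name": "pi0", "addr": "192.168.1.4", "state": "online" },
    { "name": "pi1", "addr": "192.168.1.5", "state": "offline" }
  ],
  "total": 2
}'
# field 4 after splitting on double quotes is the value of "name"
echo "$json" | grep '"name"' | awk -F'"' '{print $4}' | xargs
# prints: pi0 pi1
```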
Octl Command Manual
A. Node Information
GET
usage: octl node get [-sf <statefilter>] [[ALL] | <node1> <@group1> ...]
With this subcmd we can get basic information about all nodes or the given nodes. With the optional flag -sf, we can specify a state filter such as online or offline to filter nodes.
PRUNE
usage: octl node prune [ALL | <node1> <@group1> ...]
With this subcmd we can prune dead nodes from the given nodes.
STATUS
usage: octl node status [[ALL] | <node1> <@group1> ...]
With this subcmd we can get the running status of all nodes or the given nodes, such as:
- CPU Load.
- Memory Used/Total.
- Disk Used/Total.
- Other Status.
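For reference, the kind of data behind these status fields can be read on any Linux node. This is only a generic sketch of where such numbers come from, not Tentacle's actual collector:

```shell
# CPU load averages over 1/5/15 minutes
cut -d' ' -f1-3 /proc/loadavg
# memory used / total
free -h | awk '/^Mem:/ {print $3 " / " $2}'
# disk used / total on the root filesystem
df -h / | awk 'NR==2 {print $3 " / " $2}'
```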
B. Workgroup
CONCEPT
Workgroup is an Octopoda mechanism that supports resource isolation at node granularity, hierarchical device authorization, and referencing multiple node names with a single group name. Workgroups are organized in a tree structure, with each workgroup having a unique path, a non-empty set of node names, and a collection of subworkgroups.
For example, a workgroup with path /room1/alice and node name set (pi1, pi2, pi3) will never be aware of pi4, even if pi4 is in the same Octopoda network. The group /room1 can list, add, and remove members of /room1/alice. And /room1/alice can create /room1/alice/g1 with node name set (pi1, pi2).
A relative group name can be used to reference its node name set. For example, if currentPath=/room1, then octl cmd run 'uname -a' @alice pi4 is equivalent to octl cmd run 'uname -a' pi1 pi2 pi3 pi4.
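The expansion described above can be sketched as a toy shell function. This is purely an illustration of the semantics (a group reference expands to its member list before the command runs), not octl's internals; the group `@alice` and its members are the example names from above:

```shell
# Assume /room1/alice has members pi1 pi2 pi3.
alice="pi1 pi2 pi3"

expand() {
  for arg in "$@"; do
    case "$arg" in
      @alice) printf '%s ' $alice ;;  # group reference -> member list
      *)      printf '%s ' "$arg" ;;  # plain node name passes through
    esac
  done
}
expand @alice pi4 | xargs
# prints: pi1 pi2 pi3 pi4
```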
The workgroup should be configured in octl.yaml:
workgroup:
root: "grouppath"
password: "password"
currentPathFile: "/etc/octopoda/octl/.curPath.yaml"
PATH COMMAND
usage: octl wg pwd
usage: octl ls [<grouppath>]
usage: octl cd [<grouppath>]
usage: octl wg grant <grouppath> <password>
MEMBERS COMMAND
usage: octl wg get [<grouppath>]
usage: octl wg add <grouppath> [<node1>] [@<group1>] ...
usage: octl wg rm <grouppath> [[<node1>] [@<group1>] ...]
C. Command Execution
RUN
usage: octl cmd run [-ta] [[-c] 'cmd' | -bg 'cmd' | -ss 'shellScript'] <node1> <@group1> ...
With this subcmd we can run a command or a script on given nodes. To run a foreground command, we can use the flag -c 'cmd' or just 'cmd'. A blocking command needs to run in the background, so we use the flag -bg 'cmd'. To run a script, we need to specify the complete filepath of the shell script with the flag -ss 'shellScript'.
