
Octopoda

🐙 A lightweight multi-node scenario management platform with node monitoring, script execution, scenario deployment, version control, file distribution, etc.


🐙 Octopoda is a lightweight multi-node scenario management platform. It is not a lightweight K8S. It was originally designed for managing Lab101's ICN application scenarios (though it can obviously do more than that), which require executing commands on nodes at a low level of the system, such as inserting a kernel driver module. Note that it is not safe enough to deploy Octopoda in an unfamiliar network environment.

Features of Octopoda:

  1. Simple topology with NAT reverse path.
  2. Out-of-the-box setup.
  3. Robustness, with auto retry and auto reboot.
  4. Node status monitoring.
  5. Customized, automated scenario deployment.
  6. Scenario/Application version control.
  7. Scenario/Application durability.
  8. Centralized file management and distribution.
  9. Centralized script execution.
  10. Log management.
  11. Fast SSH login.
  12. Golang/C/Python SDK.

Table of Contents

Concepts

Topology

                                            +-----------------+ 
                                            |    NameServer   | 
                                            +-----------------+   HTTP 
                                             | HTTPS         |-------------+  
                     SSH .---------->-----+--|--<-reverse----|-<-+------<--|--+------ ... 
                         |                |  |               |   |         |  |  
  +---------+  CLI  +--------+  HTTPS   +-x-------+   TLS   +----------+ +----------+  
  | TermUser| <===> |  Octl  | <--+---> |  Brain  | <-----> | Tentacle | | Tentacle | ...  
  +---------+       +  ----  +    |     +---------+         +----------+ +----------+  
  +---------+  SDK  | Go/C/Py| <--+          |HTTP               |             | 
  |Developer| <===> | Client |          +---------+         +---------+   +---------+ 
  +---------+       +--------+          |  Pakma  |         |  Pakma  |   |  Pakma  | 
                                        +---------+         +---------+   +---------+ 
                                       \-----------/       \-----------------------------/ 
                                         Master Node           Controlled Networks 

SAN Model

The SAN model is the working model of Octopoda: S stands for Scenario, A stands for Application, and N stands for Node. The current model has the following features:

  • An Octopoda network can manage multiple scenarios.
  • Each scenario is made of multiple applications.
  • Each application can be run on multiple nodes.
  • Each application can be identified uniquely by (application name, scenario name).
  • Each node can run multiple applications.
  • Each application running on a specific node is called a NodeApp, and it can be identified uniquely by (nodename, application name, scenario name).
  • Version control operates at two granularities: the scenario and the NodeApp. A version update of a NodeApp automatically triggers a version update of the scenario it belongs to.
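The identification rules above can be sketched as string keys. This is an illustration only: the names (demo, nginx, pi0) are hypothetical, and the key formats are not Octopoda's internal representation.

```shell
# Illustrative only: unique keys following the tuples described above.
scenario="demo"; app="nginx"; node="pi0"

app_key="${app}@${scenario}"             # (application name, scenario name)
nodeapp_key="${node}:${app}@${scenario}" # (nodename, application name, scenario name)

echo "$app_key"      # nginx@demo
echo "$nodeapp_key"  # pi0:nginx@demo
```

Two NodeApps of the same application on different nodes share the same app key but have distinct NodeApp keys, which is exactly why version control can operate per NodeApp.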

Quick Start

# 1 Generate keys and certificates
cd httpNameServer
bash ./CertGen.sh ca helloCa # generate your ca key and cert ( keep your ca.key very safe! )
bash ./CertGen.sh server helloNs 1.1.1.1 # for your httpsNameServer
bash ./CertGen.sh client helloBrain # for your Brain
bash ./CertGen.sh client helloTentacle # for your Tentacle 

cp ca.pem server.key server.pem /etc/octopoda/cert/ # copy to your httpsNameServer
cp ca.pem client.key client.pem /etc/octopoda/cert/ # copy to your Brain and Tentacle

# 2 Install httpsNameServer on a machine
sudo systemctl start redis-server
tar -Jxvf httpns_v1.5.2_linux_amd64.tar.xz
cd httpns_v1.5.2_linux_amd64
# then modify httpns.yaml if you want 
# run it foreground
sudo ./httpns -p
# or install it and start background
sudo bash setup.sh

# 3 Install Brain on a machine
sudo systemctl start redis-server
tar -Jxvf brain_v1.5.2_linux_amd64.tar.xz
cd brain_v1.5.2_linux_amd64
# then modify brain.yaml if you want (httpsNameServer, name, nic is important)
# run it foreground
sudo ./brain -p
# or install it and start background
sudo bash setup.sh

# 4 Set root workgroup password on Brain machine. (For step 7)
redis-cli
127.0.0.1:6379> set info: yourpass

# 5 Install Tentacle on a machine
tar -Jxvf tentacle_v1.5.2_linux_amd64.tar.xz
cd tentacle_v1.5.2_linux_amd64
# then modify tentacle.yaml if you want (httpsNameServer, name, brain is important)
# run it foreground
sudo ./tentacle -p
# or install it and start background
sudo bash setup.sh

# 6 Install Pakma on your Brain or Tentacle machine (optional, only for online upgrade)
tar -Jxvf pakma_v1.5.2_linux_amd64.tar.xz
cd pakma_v1.5.2_linux_amd64
# make sure pakma is installed after Brain or Tentacle
# install it for your Brain
sudo bash setup.sh brain
# or install it for your Tentacle
sudo bash setup.sh tentacle

# 7 Install Octl
cd octl_v1.5.2_linux_amd64
# then modify octl.yaml. (workgroup.root="", workgroup.password="yourpass")
sudo cp octl.yaml /etc/octopoda/octl/
sudo cp octl /usr/local/bin/

# 8 Hello World
$ octl node get
# {
#   "nodes": [
#     {
#       "name": "pi0",
#       "addr": "192.168.1.4",
#       "state": "online",
#       "delay": "3ms",
#       "online_time": "2m42.483697064s"
#     }
#   ],
#   "total": 1,
#   "active": 1,
#   "offline": 0
# }

$ octl node get | grep name | awk '{print $2}' | sed 's/"//g' | sed -z 's/\n/ /g' | sed 's/,//g'
# you may get: pi02 pi05 pi06 pi08
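The grep/awk/sed pipeline above depends on the pretty-printed layout of the response. A sketch that also tolerates compact JSON, run here against a hypothetical sample document rather than a live octl:

```shell
# Extract node names from JSON, tolerant of compact output.
# The sample document and node names here are hypothetical.
json='{"nodes":[{"name":"pi0","addr":"192.168.1.4"},{"name":"pi1","addr":"192.168.1.5"}],"total":2}'
names=$(printf '%s' "$json" | tr ',' '\n' | grep '"name"' \
  | sed 's/.*"name" *: *"\([^"]*\)".*/\1/' | tr '\n' ' ')
echo $names  # pi0 pi1
```

If jq is available on the Octl machine, `octl node get | jq -r '.nodes[].name'` is a sturdier choice.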

Octl Command Manual

A. Node Information

GET

usage: octl node get [-sf <statefilter>] [[ALL] | <node1> <@group1> ...]

With this subcmd we can get basic information about all nodes or the given nodes. With the optional flag -sf, we can define a state filter, such as online or offline, to filter the nodes.
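The effect of -sf can be sketched with plain text filtering. The node list below is hypothetical and octl itself is not invoked here:

```shell
# Sketch of state filtering, as done by `-sf online` (hypothetical data).
list='pi0 online
pi1 offline
pi2 online'
printf '%s\n' "$list" | grep ' online$'
# pi0 online
# pi2 online
```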


PRUNE

usage: octl node prune [ALL | <node1> <@group1> ...]

With this subcmd we can prune dead nodes from among the given nodes.

STATUS

usage: octl node status [[ALL] | <node1> <@group1> ...]

With this subcmd we can get the running status of all nodes or the given nodes, such as:

  • CPU Load.
  • Memory Used/Total.
  • Disk Used/Total.
  • Other Status.

B. Workgroup

CONCEPT

The workgroup is an Octopoda mechanism that supports resource isolation at node granularity, hierarchical device authorization, and referencing multiple node names with one group name. Workgroups are organized in a tree structure, with each workgroup having a unique path, a non-empty set of node names, and a collection of sub-workgroups.

For example, the workgroup with path /room1/alice and node name set (pi1,pi2,pi3) will never be aware of pi4, even if pi4 is in the same Octopoda network. The group /room1 can list, add, and remove members of /room1/alice. And /room1/alice can create /room1/alice/g1 with node name set (pi1,pi2).

The relative group name can be used to reference its node names set. For example, if currentPath=/room1, then octl cmd run 'uname -a' @alice pi4 is equivalent to octl cmd run 'uname -a' pi1 pi2 pi3 pi4.
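The expansion described above can be sketched in plain shell. The group membership is hardcoded for illustration; the names are hypothetical and this is not octl's actual resolution algorithm:

```shell
# Sketch of relative group-name expansion under currentPath=/room1.
# @alice resolves to the member set of /room1/alice (hardcoded here).
expand() {
  for a in "$@"; do
    case "$a" in
      @alice) printf 'pi1 pi2 pi3 ' ;;  # members of /room1/alice
      *)      printf '%s ' "$a" ;;      # plain node names pass through
    esac
  done
}
expand @alice pi4   # prints: pi1 pi2 pi3 pi4
echo
```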

The workgroup should be configured in octl.yaml:

workgroup:
  root: "grouppath"
  password: "password"
  currentPathFile: "/etc/octopoda/octl/.curPath.yaml"

PATH COMMAND

usage: octl wg pwd

usage: octl ls [<grouppath>]

usage: octl cd [<grouppath>]

usage: octl wg grant <grouppath> <password>

MEMBERS COMMAND

usage: octl wg get [<grouppath>]

usage: octl wg add <grouppath> [<node1>] [@<group1>] ...

usage: octl wg rm <grouppath> [[<node1>] [@<group1>] ...]

C. Command Execution

RUN

usage: octl cmd run [-ta] [[-c] 'cmd' | -bg 'cmd' | -ss 'shellScript'] <node1> <@group1> ...

With this subcmd we can run a command or a script on the given nodes. To run a foreground command, we can use the flag -c 'cmd' or just 'cmd'. A blocking command needs to run in the background, so we use the flag -bg 'cmd'. To run a script, we need to specify the complete filepath of the shell script with the flag -ss.
