# kinc - Kubernetes in Container
Single-node rootless Kubernetes cluster running in a Podman container.
## Features
- 🚀 Fast: Cluster ready in ~40 seconds (with cached images)
- 🔒 Rootless: Runs as regular user, no root required
- 📦 Self-contained: Everything in one container (systemd, CRI-O, kubeadm, kubectl)
- 🔧 Configurable: Baked-in or mounted configuration
- 🌐 Isolated networking: Sequential port allocation with subnet derivation
- 📊 Multi-cluster: Run multiple clusters concurrently
- 🔍 Observability: Optional Faro event capture for bootstrap analysis (enabled in CI by default)
- ✅ Production-grade: Uses official Kubernetes tools (kubeadm, kubectl, CRI-O)
## Quick Start

### Prerequisites
- Podman (rootless)
- IP forwarding enabled
- Sufficient inotify limits (for multiple clusters)
- Sufficient kernel keyring limits (for multiple clusters)
```bash
# Enable IP forwarding (one-time setup)
sudo sysctl -w net.ipv4.ip_forward=1

# Make permanent
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/99-kubernetes.conf
sudo sysctl -p /etc/sysctl.d/99-kubernetes.conf

# Increase inotify limits for multiple clusters
sudo sysctl -w fs.inotify.max_user_watches=524288
sudo sysctl -w fs.inotify.max_user_instances=2048

# Increase kernel keyring limits (critical for multi-cluster)
sudo sysctl -w kernel.keys.maxkeys=1000
sudo sysctl -w kernel.keys.maxbytes=25000

# Make all changes persistent
echo 'fs.inotify.max_user_watches = 524288' | sudo tee -a /etc/sysctl.d/99-kubernetes.conf
echo 'fs.inotify.max_user_instances = 2048' | sudo tee -a /etc/sysctl.d/99-kubernetes.conf
echo 'kernel.keys.maxkeys = 1000' | sudo tee -a /etc/sysctl.d/99-kubernetes.conf
echo 'kernel.keys.maxbytes = 25000' | sudo tee -a /etc/sysctl.d/99-kubernetes.conf
sudo sysctl -p /etc/sysctl.d/99-kubernetes.conf
```
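To confirm the settings took effect, you can read the same keys back from `/proc/sys` (the `check` helper below is illustrative, not part of the kinc tooling; thresholds mirror the values set above):

```shell
# Verify the prerequisite sysctls by reading /proc/sys directly.
check() {
  local key=$1 min=$2 val
  val=$(cat "/proc/sys/${key}")
  if [ "$val" -ge "$min" ]; then
    echo "${key} OK (${val})"
  else
    echo "${key} too low (${val} < ${min})"
  fi
}

check net/ipv4/ip_forward 1
check fs/inotify/max_user_watches 524288
check fs/inotify/max_user_instances 2048
check kernel/keys/maxkeys 1000
check kernel/keys/maxbytes 25000
```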
### Deploy a Cluster
```bash
# Build the image (one time)
./tools/build.sh

# Deploy with baked-in config (simplest)
USE_BAKED_IN_CONFIG=true ./tools/deploy.sh

# Extract kubeconfig
mkdir -p ~/.kube
podman cp kinc-default-control-plane:/etc/kubernetes/admin.conf ~/.kube/config
sed -i 's|server: https://.*:6443|server: https://127.0.0.1:6443|g' ~/.kube/config

# Use your cluster
kubectl get nodes
kubectl get pods -A
```
### Deploy Multiple Clusters
```bash
# Deploy with mounted config (supports multiple clusters)
CLUSTER_NAME=dev ./tools/deploy.sh
CLUSTER_NAME=staging ./tools/deploy.sh
CLUSTER_NAME=prod ./tools/deploy.sh

# Clusters get sequential ports and isolated networks:
#   dev:     127.0.0.1:6443, subnet 10.244.43.0/24
#   staging: 127.0.0.1:6444, subnet 10.244.44.0/24
#   prod:    127.0.0.1:6445, subnet 10.244.45.0/24
```
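Each cluster's kubeconfig can be extracted the same way as in the single-cluster case, pointing `kubectl` at that cluster's host port. A sketch, assuming the `kinc-<name>-control-plane` container naming shown above (the `retarget_kubeconfig` helper is illustrative):

```shell
# Rewrite a copied admin.conf so kubectl talks to the host-mapped port.
retarget_kubeconfig() {
  local file=$1 port=$2
  sed -i "s|server: https://.*:6443|server: https://127.0.0.1:${port}|g" "$file"
}

# For the staging cluster on port 6444:
#   podman cp kinc-staging-control-plane:/etc/kubernetes/admin.conf ~/.kube/config-staging
#   retarget_kubeconfig ~/.kube/config-staging 6444
#   KUBECONFIG=~/.kube/config-staging kubectl get nodes
```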
### Cleanup
```bash
# Remove a cluster
CLUSTER_NAME=default ./tools/cleanup.sh

# Or with baked-in config
USE_BAKED_IN_CONFIG=true CLUSTER_NAME=default ./tools/cleanup.sh
```
## Architecture

### Multi-Service Initialization

kinc uses a systemd-driven multi-service architecture for reliable initialization:
```text
Container Start
                  ↓
┌─────────────────────────────────────┐
│ kinc-preflight.service (oneshot)    │
│  - Config validation (yq)           │
│  - CRI-O readiness check            │
│  - kubeadm.conf templating          │
└─────────────────────────────────────┘
                  ↓
┌─────────────────────────────────────┐
│ kubeadm-init.service (oneshot)      │
│  - kubeadm init (isolated)          │
│  - No kubectl waits                 │
│  - Clean systemd logs               │
└─────────────────────────────────────┘
                  ↓
┌─────────────────────────────────────┐
│ kinc-postinit.service (oneshot)     │
│  - CNI installation (kindnet)       │
│  - Storage provisioner              │
│  - kubectl wait for readiness       │
└─────────────────────────────────────┘
                  ↓
Initialization Complete
Marker: /var/lib/kinc-initialized
```
### Port and Network Allocation

Ports are allocated sequentially, and network subnets are derived from the port's last two digits:
| Cluster | Host Port | Pod Subnet | Service Subnet |
|-----------|----------------|----------------|----------------|
| default | 127.0.0.1:6443 | 10.244.43.0/24 | 10.43.0.0/16 |
| cluster01 | 127.0.0.1:6444 | 10.244.44.0/24 | 10.44.0.0/16 |
| cluster02 | 127.0.0.1:6445 | 10.244.45.0/24 | 10.45.0.0/16 |
This ensures non-overlapping networks for concurrent clusters.
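The derivation can be sketched in a few lines of shell (the `derive_subnets` helper is illustrative, not part of the kinc tooling):

```shell
# The last two digits of the host port select both subnets.
derive_subnets() {
  local port=$1
  local octet=$((port % 100))   # e.g. 6444 -> 44
  echo "pod=10.244.${octet}.0/24 service=10.${octet}.0.0/16"
}

derive_subnets 6443   # pod=10.244.43.0/24 service=10.43.0.0/16
derive_subnets 6445   # pod=10.244.45.0/24 service=10.45.0.0/16
```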
## Configuration Modes

### Baked-In Config (Zero-Config)

Use the default configuration embedded in the image:
```bash
USE_BAKED_IN_CONFIG=true ./tools/deploy.sh
```
- No config volume mount
- Single cluster only (can't customize cluster name in kubeadm.conf)
- Fastest deployment
### Mounted Config (Multi-Cluster)

Mount custom configuration from `runtime/config/kubeadm.conf`:
```bash
CLUSTER_NAME=myapp ./tools/deploy.sh
```
- Config volume mounted to `/etc/kinc/config`
- Supports multiple clusters with different names
- Per-cluster network isolation
## Tools

### build.sh

Build the kinc container image.
```bash
./tools/build.sh

# Force package updates
CACHE_BUST=1 ./tools/build.sh
```
### deploy.sh

Deploy a single kinc cluster using Quadlet (systemd integration).
```bash
# Baked-in config
USE_BAKED_IN_CONFIG=true ./tools/deploy.sh

# Mounted config with custom name
CLUSTER_NAME=myapp ./tools/deploy.sh

# Force specific port
FORCE_PORT=6500 CLUSTER_NAME=special ./tools/deploy.sh

# Bypass sysctl checks (not recommended)
KINC_SKIP_SYSCTL_CHECKS=true CLUSTER_NAME=myapp ./tools/deploy.sh
```
Features:
- System prerequisites validation (IP forwarding, inotify limits, kernel keyring)
- Smart multi-cluster detection: requires proper sysctls when other clusters exist
- Automatic sequential port allocation
- Subnet derivation from port
- Systemd-driven initialization waits
- Multi-service architecture verification
Environment Variables:

- `CLUSTER_NAME`: Cluster identifier (default: `default`)
- `FORCE_PORT`: Override automatic port allocation
- `KINC_IMAGE`: Image to use (default: `localhost/kinc/node:v1.33.5`)
- `KINC_SKIP_SYSCTL_CHECKS`: Bypass inotify/keyring checks (default: `false`)
- `KINC_ENABLE_FARO`: Enable Faro event capture (default: `false`; CI: `true`)
### cleanup.sh

Remove a kinc cluster and clean up all resources.
```bash
CLUSTER_NAME=myapp ./tools/cleanup.sh
```
What it does:
- Stops systemd services
- Removes container
- Removes volumes
- Removes Quadlet files
- Reloads systemd
### run-validation.sh

Run the full validation suite (7 clusters):
```bash
./tools/run-validation.sh

# Skip cleanup for manual inspection
SKIP_CLEANUP=true ./tools/run-validation.sh
```
Tests:
- T1: Baked-in config (deploy.sh)
- T2: Mounted config - 5 concurrent clusters (deploy.sh)
- T3: Direct podman run (baked-in config)
- Multi-service architecture verification
- Complete cleanup
## Advanced Usage

### Direct Podman Run (No Quadlet)

For environments without systemd, or for quick testing:
```bash
# Create volume
podman volume create kinc-var-data

# Run cluster
podman run -d --name kinc-cluster \
  --hostname kinc-control-plane \
  --cgroups=split \
  --cap-add=SYS_ADMIN --cap-add=SYS_RESOURCE --cap-add=NET_ADMIN \
  --cap-add=SETPCAP --cap-add=NET_RAW --cap-add=SYS_PTRACE \
  --cap-add=DAC_OVERRIDE --cap-add=CHOWN --cap-add=FOWNER \
  --cap-add=FSETID --cap-add=KILL --cap-add=SETGID --cap-add=SETUID \
  --cap-add=NET_BIND_SERVICE --cap-add=SYS_CHROOT --cap-add=SETFCAP \
  --cap-add=DAC_READ_SEARCH --cap-add=AUDIT_WRITE \
  --device /dev/fuse \
  --tmpfs /tmp:rw,rprivate,nosuid,nodev,tmpcopyup \
  --tmpfs /run:rw,rprivate,nosuid,nodev,tmpcopyup \
  --tmpfs /run/lock:rw,rprivate,nosuid,nodev,tmpcopyup \
  --volume kinc-var-data:/var:rw \
  --volume $HOME/.local/share/containers/storage:/root/.local/share/containers/storage:rw \
  --sysctl net.ipv6.conf.all.disable_ipv6=0 \
  --sysctl net.ipv6.conf.all.keep_addr_on_down=1 \
  --sysctl net.netfilter.nf_conntrack_tcp_timeout_established=86400 \
  --sysctl net.netfilter.nf_conntrack_tcp_timeout_close_wait=3600 \
  -p 127.0.0.1:6443:6443/tcp \
  --env container=podman \
  ghcr.io/t0masd/kinc:latest

# Wait for cluster (~40 seconds)
timeout 300 bash -c 'until podman exec kinc-cluster test -f /var/lib/kinc-initialized 2>/dev/null; do sleep 2; done'

# Extract kubeconfig
mkdir -p ~/.kube
podman cp kinc-cluster:/etc/kubernetes/admin.conf ~/.kube/config
sed -i 's|server: https://.*:6443|server: https://127.0.0.1:6443|g' ~/.kube/config

# Verify
kubectl get nodes
```
### Custom kubeadm Configuration

Edit `runtime/config/kubeadm.conf` to customize:
- Kubernetes version
- Pod/Service subnets
- API server arguments
- Kubelet configuration
- Feature gates
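As a sketch, a minimal customization might look like the following. The field values are illustrative, and the exact document kinc's preflight templating expects may differ; the schema shown is kubeadm's `v1beta4` `ClusterConfiguration`:

```shell
# Write an illustrative ClusterConfiguration override (adjust to taste).
mkdir -p runtime/config
cat > runtime/config/kubeadm.conf <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: v1.33.5
networking:
  podSubnet: 10.244.43.0/24
  serviceSubnet: 10.43.0.0/16
apiServer:
  extraArgs:
    - name: enable-admission-plugins
      value: NodeRestriction
EOF
```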
Then deploy with mounted config:

```bash
CLUSTER_NAME=custom ./tools/deploy.sh
```
### Faro Event Capture (Optional)

Faro is a Kubernetes resource monitoring library that captures real-time events during cluster bootstrap. It's useful for:
- Debugging initialization issues
- Performance analysis
- CI/CD validation
- Cluster behavior comparison
Enable Faro:
```bash
# Single cluster with event capture
KINC_ENABLE_FARO=true CLUSTER_NAME=myapp ./tools/deploy.sh

# Multiple clusters with event capture
KINC_ENABLE_FARO=true CLUSTER_NAME=dev ./tools/deploy.sh
KINC_ENABLE_FARO=true CLUSTER_NAME=staging ./tools/deploy.sh
```
Default Behavior:
- Disabled in normal deployments (minimal overhead)
- Enabled automatically in CI/CD (for validation)
Configuration:
Faro configuration and deployment are embedded in the kinc image:
- **
