# Natural Language Shell (nlsh)

<img src="./resources/play.gif" width="500" />

A command-line tool that converts natural language instructions into shell commands using AI. Simply describe what you want to do in plain English, and nlsh will generate and execute the appropriate shell command.
## Features
- 🧠 Natural language to shell command conversion
- 🤖 Multiple AI backends: OpenAI GPT and Google Gemini
- 🛡️ Built-in safety checks for dangerous commands
- ⚙️ Configurable settings via `.nlshrc`
- 🎨 Colored output for better readability
- 📝 Command history and context awareness
- 🔄 Interactive and single command modes
- 🔒 Confirmation for potentially dangerous operations
## Prerequisites
- Go 1.24 or later
- OpenAI API key or Google Gemini API key
## Installation

### Option 1: Quick Install

Install directly using curl:

```sh
curl -fsSL https://raw.githubusercontent.com/abakermi/nlsh/master/install.sh | bash
```
### Option 2: Go Install

```sh
go install github.com/abakermi/nlsh@latest
```
### Option 3: Manual Installation

- Clone the repository:

  ```sh
  git clone https://github.com/abakermi/nlsh.git
  cd nlsh
  ```

- Set your API key as an environment variable:

  ```sh
  export OPENAI_API_KEY='your-api-key-here'
  # or
  export GEMINI_API_KEY='your-api-key-here'
  ```

- Run the installation script:

  ```sh
  ./install.sh
  ```

- Restart your terminal or source your shell configuration:

  ```sh
  source ~/.zshrc
  # or
  source ~/.bashrc
  ```
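After installation, a quick sanity check (plain shell, not an nlsh feature) confirms the binary is reachable on your `PATH`:

```shell
# Check whether the nlsh binary is on PATH; command -v prints the resolved path.
if command -v nlsh >/dev/null 2>&1; then
  echo "nlsh installed at $(command -v nlsh)"
else
  echo "nlsh not found; make sure your Go bin directory (e.g. ~/go/bin) is in PATH"
fi
```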
## Usage

Set your API key:

```sh
# For OpenAI
export OPENAI_API_KEY='your-api-key-here'

# For Gemini
export GEMINI_API_KEY='your-api-key-here'
```
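To see which key your current shell session actually exposes, a standalone sketch (the `detect_backend_key` helper is illustrative, not part of nlsh):

```shell
# Report which API key (if any) is visible in the current environment.
detect_backend_key() {
  if [ -n "${OPENAI_API_KEY:-}" ]; then
    echo "OpenAI key detected"
  elif [ -n "${GEMINI_API_KEY:-}" ]; then
    echo "Gemini key detected"
  else
    echo "No API key set"
  fi
}
detect_backend_key
```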
### Interactive Mode

```sh
nlsh
```

### Single Command Mode

```sh
nlsh "list all files in current directory"
```
## Examples

```sh
# List files
nlsh "show me all hidden files"

# Git operations
nlsh "commit all changes with message 'update readme'"

# Docker operations
nlsh "show all running containers"
```
## Configuration

You can customize nlsh's behavior by creating a `.nlshrc` file in your home directory. The configuration file uses TOML format.
### Switching Between Backends

Set the `backend` option to choose your AI provider:

```toml
# Backend to use: "openai" or "gemini"
backend = "gemini"
```
### Local / Self-Hosted Models (Ollama, vLLM, etc.)

You can use local models that expose an OpenAI-compatible API by configuring `base_url`:

```toml
[openai]
model = "llama3"                         # Replace with your local model name
base_url = "http://localhost:11434/v1"   # Example for Ollama
```

If `base_url` is set, `OPENAI_API_KEY` is not required.
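Before pointing nlsh at a local endpoint, you can check that it is reachable. A sketch assuming Ollama's default port and the standard OpenAI-compatible `/models` route (requires `curl`):

```shell
# Probe an OpenAI-compatible endpoint's /models route.
check_endpoint() {
  if curl -fsS "$1/models" >/dev/null 2>&1; then
    echo "reachable"
  else
    echo "not reachable"
  fi
}
check_endpoint "http://localhost:11434/v1"
```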
### Full Configuration Example

```toml
# Backend to use: "openai" or "gemini"
backend = "openai"

[openai]
model = "gpt-4-turbo-preview"
temperature = 0.7

[gemini]
model = "gemini-2.0-flash"
temperature = 0.7

[safety]
confirm_execution = true
allowed_commands = [
  "ls *",
  "touch *",
  "mkdir *",
  "echo *",
  "cat *",
  "cp *",
  "mv *",
  "git *",
  "docker *",
  "code *",
  "vim *",
  "nano *"
]
denied_commands = [
  "rm -rf /*",
  "rm -rf /",
  "dd if=/dev/*",
  "mkfs.*",
  "> /dev/*",
  "shutdown *",
  "reboot *",
  "halt *",
  "*--no-preserve-root*"
]
```
## Safety Features
- Command confirmation before execution
- Configurable allowed/denied commands
- Pattern-based command filtering
- Protection against dangerous operations
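The allowed/denied entries are shell-style glob patterns; their matching behavior can be sketched with a plain `case` statement. This is an illustrative sketch of the idea, not nlsh's actual implementation:

```shell
# Return success (0) if a command matches any denied glob pattern.
# Patterns mirror a few of the denied_commands examples from .nlshrc.
is_denied() {
  _cmd=$1
  for _pat in "rm -rf /*" "rm -rf /" "shutdown *" "*--no-preserve-root*"; do
    # An unquoted $_pat is treated as a glob pattern by `case`.
    case "$_cmd" in
      $_pat) return 0 ;;
    esac
  done
  return 1
}

is_denied "shutdown -h now" && echo "blocked: shutdown -h now"
is_denied "ls -la"          || echo "allowed: ls -la"
```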
## License

This project is open source and available under the MIT License.