
Neural Shell (nlsh)


nlsh (Neural Shell) is an AI-driven command-line assistant that generates shell commands and one-liners tailored to your system context.

Features

  • 🔄 Multi-Backend LLM Support
    Configure multiple OpenAI-compatible endpoints (e.g., local Ollama, DeepSeek API, Mistral API) and switch between them with the -0, -1, etc. flags.
  • 🧠 System-Aware Context
    Automatically gathers information about your environment to generate commands tailored to your system.
  • 🐚 Shell-Aware Generation
    Set your shell (bash/zsh/fish/powershell) via config/env to ensure syntax compatibility.
  • 🛡️ Safety First
    Never executes commands automatically; runs in an interactive confirmation mode.
  • ⚙️ Configurable
    YAML configuration for backends and shell preferences.

Installation

  1. Install the package
pip install neural-shell
  2. Create a configuration file
# Option 1: Use the built-in initialization command
nlsh --init
# or
nlgc --init

# Option 2: Manually create the directory and copy the example
mkdir -p ~/.nlsh
cp examples/config.yml ~/.nlsh/config.yml  # Edit this file with your API keys
  3. Set up your API keys
# Edit the config file to add your API keys
nano ~/.nlsh/config.yml

# Or set them as environment variables, referenced in the config file
export OPENAI_API_KEY=sk-...
export GROQ_KEY=gsk_...
export DEEPSEEK_API_KEY=...

See: https://pypi.org/project/neural-shell/.

PyPI project statistics:

  • https://pypistats.org/packages/neural-shell
  • https://pepy.tech/projects/neural-shell

If you want to install from source:

  1. Clone the repository
git clone https://github.com/eqld/nlsh.git
cd nlsh
  2. Install the package
# Option 1: Install in development mode with all dependencies
pip install -r requirements.txt
pip install -e .

# Option 2: Simple installation
pip install .

Usage

Command Generation Mode

Basic usage for generating shell commands:

nlsh find all pdfs modified in the last 2 days and compress them
# Example output:
# Suggested: find . -name "*.pdf" -mtime -2 -exec tar czvf archive.tar.gz {} +
# [Confirm] Run this command? (y/N/e/r/x) y 
# Executing:
# (command output appears here)

# Edit the suggested command before running:
nlsh list all files in the current directory
# Example output:
# Suggested: ls -la
# [Confirm] Run this command? (y/N/e/r/x) e
# (Opens your $EDITOR with 'ls -la')
# (Edit the command, e.g., to 'ls -l')
# (Save and close editor)
# Edited command: ls -l
# [Confirm] Run this command? (y/N/e/r/x) y
# Executing: ls -l
# (command output appears here)

Generate and display commands without executing them using the -p or --print flag:

# Generate command without execution
nlsh -p find all PDF files larger than 10MB
# Output: find . -name "*.pdf" -size +10M

With verbose mode for reasoning models:

nlsh -v -2 count lines of code in all javascript files
# Example output:
# Reasoning: To count lines of code in JavaScript files, I can use the 'find' command to locate all .js files,
# then pipe the results to 'xargs wc -l' to count the lines in each file.
# Suggested: find . -name "*.js" -type f | xargs wc -l
# [Confirm] Run this command? (y/N/e/r/x) y 
# Executing:
# (command output appears here)

Note on Command Execution: nlsh executes commands using non-blocking I/O with the select module to read from stdout/stderr. This approach ensures compatibility with a wide range of commands, including those with pipes (|) and redirections. The non-blocking implementation prevents deadlocks that can occur with piped commands where one process might be waiting for input before producing output. While this works well for most commands, highly interactive commands (like those with progress bars or TUI applications) might not render perfectly.
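As a rough illustration of the approach described above, here is a minimal sketch of select-based streaming from a child process. This is not nlsh's actual implementation, and the function name run_streaming is invented for this example:

```python
import select
import subprocess

def run_streaming(command: str) -> int:
    """Run a shell command, draining stdout/stderr with select to avoid deadlocks."""
    proc = subprocess.Popen(
        command, shell=True,
        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
    )
    streams = [proc.stdout, proc.stderr]
    while streams:
        # Block until at least one pipe has data, then read whatever is ready.
        readable, _, _ = select.select(streams, [], [])
        for stream in readable:
            chunk = stream.read1(4096)
            if not chunk:                 # EOF: the child closed this pipe
                streams.remove(stream)
            else:
                print(chunk.decode(errors="replace"), end="")
    return proc.wait()
```

Note that select on pipe file descriptors like this only works on Unix-like systems; on Windows, select supports sockets only, so a different mechanism (e.g., reader threads) would be needed.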

Command Explanation Mode

Get detailed explanations of shell commands using the -e or --explain flag:

# Explain complex commands
nlsh -e "find . -name '*.log' -mtime +30 -delete"
# Provides detailed breakdown of the find command with safety warnings

# Use with verbose mode for reasoning
nlsh -e -v "tar -czf backup.tar.gz /home/user/documents"
# Shows the AI's reasoning process before providing the explanation

STDIN Processing Mode

nlsh can also process input from STDIN and output results directly to STDOUT, making it perfect for use in pipelines:

# Summarize content from a file
cat document.md | nlsh summarize this in 3 bullet points > summary.txt

# Extract specific information from logs
cat server.log | nlsh find all error messages and list them with timestamps

# Process JSON data
curl -s https://api.example.com/data | nlsh extract all email addresses from this JSON

# Transform text content
echo "hello world" | nlsh convert to uppercase and add exclamation marks

# Analyze code files
cat script.py | nlsh explain what this Python script does and identify any potential issues

# Process CSV data
cat data.csv | nlsh find the top 5 entries by sales amount and format as a table

# Control output length with --max-tokens
cat large_document.txt | nlsh --max-tokens 500 summarize this document briefly

Image Processing Support

nlsh automatically detects and processes image input from STDIN when using vision-capable models:

# Analyze an image
cat image.jpg | nlsh describe what you see in this image

# Extract text from screenshots
cat screenshot.png | nlsh extract all text from this image and format it as markdown

# Analyze charts and graphs
cat chart.png | nlsh summarize the data trends shown in this chart

# Process multiple images in a pipeline
for img in *.jpg; do
    cat "$img" | nlsh identify the main subject of this image >> results.txt
done

# Use specific backend for image processing
cat diagram.png | nlsh -1 explain this technical diagram step by step

Supported Image Formats:

  • PNG (.png)
  • JPEG (.jpg, .jpeg)
  • GIF (.gif)
  • WebP (.webp)
  • BMP (.bmp)

Image Processing Features:

  • Automatic input type detection (text vs. image)
  • Configurable backend selection for vision processing
  • Support for base64-encoded images
  • Size validation (max 20MB by default)
  • Seamless integration with existing STDIN workflows

In STDIN processing mode:

  • No command confirmation is required
  • Output goes directly to STDOUT for easy piping
  • The LLM processes the input content according to your instructions
  • Perfect for automation and scripting workflows
  • Automatic backend selection based on input type (text vs. image)
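The input-type detection mentioned above can be done by inspecting magic bytes at the start of the STDIN payload. The following is a hypothetical sketch of that idea; nlsh's real detection logic may differ:

```python
# Magic-byte signatures for the image formats listed above.
IMAGE_SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "image/png",
    b"\xff\xd8\xff": "image/jpeg",
    b"GIF87a": "image/gif",
    b"GIF89a": "image/gif",
    b"BM": "image/bmp",
}

def detect_input_type(data: bytes) -> str:
    """Classify raw STDIN bytes as an image MIME type or plain text."""
    # WebP is a RIFF container: "RIFF" at offset 0, "WEBP" at offset 8.
    if data[:4] == b"RIFF" and data[8:12] == b"WEBP":
        return "image/webp"
    for signature, mime in IMAGE_SIGNATURES.items():
        if data.startswith(signature):
            return mime
    return "text/plain"
```

A dispatcher could then route image inputs to a vision-capable backend and everything else to the default text backend.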

Using nlgc for Commit Messages

The package also includes nlgc (Neural Git Commit) to generate commit messages based on your staged changes:

# Stage your changes first
git add .

# Generate a commit message (using default backend)
nlgc
# Example output:
# Suggested commit message:
# --------------------
# feat: Add nlgc command for AI-generated commit messages
# 
# Implements the nlgc command which analyzes staged git diffs
# and uses an LLM to generate conventional commit messages.
# Includes configuration options and CLI flags to control
# whether full file content is included in the prompt.
# --------------------
# [Confirm] Use this message? (y/N/e/r) y
# Executing: git commit -m "feat: Add nlgc command..."
# Commit successful.

# Generate using a specific backend and exclude full file content
nlgc -1 --no-full-files

# Generate commit message in Spanish
nlgc --language Spanish

# Generate commit message in French using short flag
nlgc -l French

# Edit the suggested message before committing
nlgc
# [Confirm] Use this message? (y/N/e/r) e 
# (Opens your $EDITOR with the message)
# (Save and close editor)
# Using edited message:
# ...
# Commit with this message? (y/N) y

nlgc analyzes the diff of staged files and, optionally, their full content to generate a conventional commit message. You can confirm, edit (e), or regenerate (r) the message.
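The diff-gathering step could look roughly like the sketch below. This is an illustration only, not nlgc's actual code, and the function name staged_diff is invented here:

```python
import subprocess

def staged_diff(include_full_files: bool = False) -> str:
    """Collect the staged diff (and optionally full file contents) for an LLM prompt."""
    diff = subprocess.run(
        ["git", "diff", "--cached"],
        capture_output=True, text=True, check=True,
    ).stdout
    if not include_full_files:
        return diff
    # List the staged paths and append each file's current content to the prompt.
    files = subprocess.run(
        ["git", "diff", "--cached", "--name-only"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    contents = []
    for path in files:
        try:
            with open(path, encoding="utf-8") as fh:
                contents.append(f"--- {path} ---\n{fh.read()}")
        except OSError:
            pass  # deleted or unreadable files are simply skipped in this sketch
    return diff + "\n" + "\n".join(contents)
```

The --no-full-files flag shown earlier would correspond to leaving include_full_files off, keeping the prompt small for large changes.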

Using nlt for Token Counting

The package also includes nlt (Neural Language Tokenizer) to count tokens in text and image inputs:

# Count tokens from STDIN
echo "Hello world" | nlt
# Output: 3

# Count tokens from files
nlt -f document.txt -f image.jpg
# Output: 2628

# Count tokens with breakdown
nlt -v -f document.txt -f image.jpg
# Output:
# document.txt: 1523
# image.jpg: 1105
# Total: 2628

# Count tokens from both STDIN and files
cat input.txt | nlt -f additional.txt

# Count tokens with custom encoding
cat input.txt | nlt --encoding gpt2

nlt uses tiktoken (the same tokenizer used by OpenAI models) to provide accurate token counts for both text and image inputs.


Configuration

Creating a Configuration File

You have two options to create a configuration file:

  1. Automatic initialization:

    nlsh --init
    # or
    nlgc --init
    

    This will prompt you to choose where to create the config file (if XDG_CONFIG_HOME is set) and create a default configuration file with placeholders for API keys.

  2. Manual creation: Create ~/.nlsh/config.yml manually:

shell: "zsh"  # Override with env $NLSH_SHELL
backends:
  # Text-only backend
  - name: "local-ollama"
    url: "http://localhost:11434/v1"
    api_key: "ollama"
    model: "llama3"
    supports_vision: false  # This model doesn't support image processing
  
  # Vision-capable backend
  - name: "openai-gpt4-vision"
    url: "htt
