
Athena Protocol

An AI tech lead in server form—this intelligent MCP agent validates your coding agent's strategy, analyzes impact, and catches critical issues before any code is written. Like having a senior engineer review every approach, ensuring thoughtful architecture and fewer regressions.

Install / Use

/learn @n0zer0d4y/Athena Protocol

README

Athena Protocol MCP Server: A Precursor to Context Orchestration Protocol


An intelligent MCP server that acts as an AI tech lead for coding agents—providing expert validation, impact analysis, and strategic guidance before code changes are made. Like a senior engineer reviewing your approach, Athena Protocol helps AI agents catch critical issues early, validate assumptions against the actual codebase, and optimize their problem-solving strategies. The result: higher quality code, fewer regressions, and more thoughtful architectural decisions.

Key Feature: Precision file analysis with analysisTargets - achieve 70-85% token reduction and 3-4× faster performance by analyzing only the files and line ranges that matter. See Enhanced File Analysis for details.
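
The precise analysisTargets schema is documented in Enhanced File Analysis rather than here; purely as an illustration, a precision-targeted request could look something like the sketch below, where the field names are hypothetical but the read modes (full, head, tail, range) are the ones listed under Key features later in this README:

{
  "analysisTargets": [
    { "file": "src/auth/session.ts", "mode": "range", "startLine": 40, "endLine": 95 },
    { "file": "src/auth/index.ts", "mode": "head", "lines": 50 }
  ]
}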

Context Orchestration Protocol: The Future of AI-Assisted Development

Imagine LLMs working with context so refined and targeted that they eliminate guesswork, reduce errors by 80%, and deliver code with the precision of seasoned architects—transforming how AI agents understand and enhance complex codebases.

Security

This server handles API keys for multiple LLM providers. Ensure your .env file is properly secured and never committed to version control. The server validates all API keys on startup and provides detailed error messages for configuration issues.
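
For example, on a Unix-like system you can keep the file out of version control and restrict its permissions:

# Never commit the .env file; ignore it and limit read access to the owner
echo ".env" >> .gitignore
chmod 600 .env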

Background

The Athena Protocol MCP Server provides systematic thinking validation for AI coding agents. It supports 14 LLM providers and offers various validation tools including thinking validation, impact analysis, assumption checking, dependency mapping, and thinking optimization.
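
Agents reach these tools through standard MCP tools/call requests. The envelope below is ordinary MCP JSON-RPC, but the tool name and argument fields are hypothetical placeholders - this README only confirms validate_configuration_comprehensive as a concrete tool name (see Configuration Tools):

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "validate_thinking",
    "arguments": {
      "approach": "Refactor session handling to use short-lived tokens",
      "assumptions": ["Sessions are currently stored server-side"]
    }
  }
}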

Key features:

  • Smart Client Mode with precision-targeted code analysis (70-85% token reduction)
  • Environment-driven configuration with no hardcoded defaults
  • Multi-provider LLM support (14 providers) with automatic fallback
  • Enhanced file reading with multiple modes (full, head, tail, range)
  • Concurrent file operations for 3-4× performance improvement (see the sketch after this list)
  • Session-based validation history and memory management
  • Comprehensive configuration validation and health monitoring
  • Dual-agent architecture for efficient validation workflows
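
The server's internal implementation isn't shown in this README, but the concurrency win comes from issuing independent file reads in parallel, so total latency approaches the slowest single read instead of the sum of all reads. A minimal TypeScript sketch of the idea:

import { readFile } from "node:fs/promises";

// Read many files concurrently; Promise.all resolves once every read
// completes, so wall-clock time is bounded by the slowest file.
async function readAllConcurrently(paths: string[]): Promise<string[]> {
  return Promise.all(paths.map((path) => readFile(path, "utf8")));
}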

Install

This module assumes a working knowledge of Node.js and npm.

npm install
npm run build

Prerequisites

  • Node.js >= 18
  • npm or yarn

Configuration

The Athena Protocol uses 100% environment-driven configuration - no hardcoded provider values or defaults. Configure everything through your .env file:

  1. Copy the example configuration:
   cp .env.example .env
  2. Edit .env and configure your provider:

    • Set DEFAULT_LLM_PROVIDER (e.g., openai, anthropic, google)
    • Add your API key for the chosen provider
    • Configure model and parameters (optional)
  3. Validate and test:

    npm install
    npm run build
    npm run validate-config  # Validates your .env configuration
    npm test
    

See .env.example for complete configuration options and all 14 supported providers.
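
As a starting point, a minimal OpenAI-based .env might look like the following; the variable names match the MCP client example later in this README, the values are placeholders, and the PROVIDER_SELECTION_PRIORITY syntax is illustrated rather than authoritative (see .env.example):

DEFAULT_LLM_PROVIDER=openai
PROVIDER_SELECTION_PRIORITY=openai
OPENAI_API_KEY=your-openai-api-key-here
OPENAI_MODEL_DEFAULT=gpt-5
LLM_TEMPERATURE_DEFAULT=0.7
LLM_MAX_TOKENS_DEFAULT=2000
LLM_TIMEOUT_DEFAULT=30000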

Critical Configuration Requirements

  • PROVIDER_SELECTION_PRIORITY is REQUIRED - list your providers in priority order (example after this list)
  • No hardcoded fallbacks exist - all configuration must be explicit in .env
  • Fail-fast validation - invalid configuration causes immediate startup failure
  • Complete provider config required - API key, model, and parameters for each provider
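
For example, a priority list like the one below (comma-separated form assumed; .env.example is authoritative) would make the server try openai first, then fall back to anthropic, then google:

PROVIDER_SELECTION_PRIORITY=openai,anthropic,google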

Supported Providers

The Athena Protocol supports 14 LLM providers. While OpenAI is commonly used, you can configure any of:

Major Cloud Providers:

  • OpenAI - GPT-5 (with thinking), GPT-4o, GPT-4-turbo
  • Anthropic - Claude Opus 4.1, Claude Sonnet 4.5, Claude Haiku 4.5
  • Google - Gemini 2.5 (Flash/Pro/Ultra)
  • Azure OpenAI - Enterprise-grade GPT models
  • AWS Bedrock - Claude, Llama, and more
  • Google Vertex AI - Gemini with enterprise features

Specialized Providers:

  • OpenRouter - Access to 400+ models
  • Groq - Ultra-fast inference
  • Mistral AI - Open-source models
  • Perplexity - Search-augmented models
  • XAI - Grok models
  • Qwen - Alibaba's high-performance LLMs
  • ZAI - GLM models

Local/Self-Hosted:

  • Ollama - Run models locally

Quick switch example:

# Edit .env file
ANTHROPIC_API_KEY=sk-ant-your-key-here
DEFAULT_LLM_PROVIDER=anthropic

# Restart server
npm run build && npm start

Provider Switching

See the detailed provider guide for complete setup instructions.

Usage

MCP Client Configuration

For detailed, tested MCP client configurations, see CLIENT_MCP_CONFIGURATION_EXAMPLES.md

For Local Installation (with .env file)

Local installation with .env file remains fully functional and unchanged. Simply clone the repository and run:

npm install
npm run build

Then configure your MCP client to point to the local installation:

{
  "mcpServers": {
    "athena-protocol": {
      "command": "node",
      "args": ["/absolute/path/to/athena-protocol/dist/index.js"],
      "type": "stdio",
      "timeout": 300
    }
  }
}

For NPM Installation (with MCP environment variables - RECOMMENDED)

For npm/npx usage, configure your MCP client with environment variables. Only the configurations in CLIENT_MCP_CONFIGURATION_EXAMPLES.md are tested and guaranteed to work.

Example for GPT-5:

{
  "mcpServers": {
    "athena-protocol": {
      "command": "npx",
      "args": ["@n0zer0d4y/athena-protocol"],
      "env": {
        "DEFAULT_LLM_PROVIDER": "openai",
        "OPENAI_API_KEY": "your-openai-api-key-here",
        "OPENAI_MODEL_DEFAULT": "gpt-5",
        "OPENAI_MAX_COMPLETION_TOKENS_DEFAULT": "8192",
        "OPENAI_VERBOSITY_DEFAULT": "medium",
        "OPENAI_REASONING_EFFORT_DEFAULT": "high",
        "LLM_TEMPERATURE_DEFAULT": "0.7",
        "LLM_MAX_TOKENS_DEFAULT": "2000",
        "LLM_TIMEOUT_DEFAULT": "30000"
      },
      "type": "stdio",
      "timeout": 300
    }
  }
}

See CLIENT_MCP_CONFIGURATION_EXAMPLES.md for complete working configurations.

Configuration Notes:

  • NPM Installation: Use npx @n0zer0d4y/athena-protocol with the env field for easiest setup
  • Local Installation: Local .env file execution remains fully functional and unchanged
  • Environment Priority: MCP env variables take precedence over .env file variables
  • GPT-5 Support: Includes specific parameters for GPT-5 models
  • Timeout Configuration: The default timeout of 300 seconds (5 minutes) is set for reasoning models like GPT-5. For faster LLMs (GPT-4, Claude, Gemini), you can reduce this to 60-120 seconds (see the snippet after this list)
  • GPT-5 Parameter Notes: The parameters LLM_TEMPERATURE_DEFAULT, LLM_MAX_TOKENS_DEFAULT, and LLM_TIMEOUT_DEFAULT are currently required for GPT-5 models but are not used by the model itself. This is a temporary limitation that will be addressed in a future refactoring
  • Security: Never commit API keys to version control - use MCP client environment variables instead
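
For instance, pairing a fast non-reasoning model with a shorter server timeout only requires changing the timeout field from the GPT-5 example above (other fields omitted here for brevity):

{
  "mcpServers": {
    "athena-protocol": {
      "command": "npx",
      "args": ["@n0zer0d4y/athena-protocol"],
      "type": "stdio",
      "timeout": 120
    }
  }
}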

Future Refactoring Plans

GPT-5 Parameter Optimization

Current Issue: GPT-5 models currently require the standard LLM parameters (LLM_TEMPERATURE_DEFAULT, LLM_MAX_TOKENS_DEFAULT, LLM_TIMEOUT_DEFAULT) even though these parameters are not used by the model.

Planned Solution:

  1. Modify the getTemperature() function to return undefined for GPT-5+ models instead of a hardcoded default (see the sketch after this list)
  2. Update AI provider interfaces to handle undefined temperature values
  3. Implement conditional parameter validation that skips standard parameters for GPT-5+ models
  4. Update OpenAI provider to omit unused parameters when communicating with GPT-5 API
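
A minimal TypeScript sketch of steps 1-3; the GPT-5 detection rule and the signature here are assumptions for illustration, not the repo's actual code:

// Illustrative only: return undefined for GPT-5+ models so providers can
// omit the temperature parameter from the API payload entirely.
function getTemperature(model: string, configured?: number): number | undefined {
  if (/^gpt-5/i.test(model)) return undefined; // assumed GPT-5+ detection
  return configured ?? 0.7;                    // current default for other models
}

// Provider interfaces then treat temperature as optional (step 2) and skip
// validating parameters that GPT-5+ models do not use (step 3).
interface ProviderRequest {
  model: string;
  temperature?: number; // undefined => key omitted when calling the API
}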

Benefits:

  • Cleaner configuration for GPT-5 users
  • More accurate representation of model capabilities
  • Better adherence to OpenAI's GPT-5 API specification

Timeline: Target implementation in v0.3.0

Server Modes

MCP Server Mode (for production use)

npm start                    # Start MCP server for client integration (requires .env or MCP env)
npm run dev                  # Development mode with auto-restart
npx @n0zer0d4y/athena-protocol  # Run published version via npx (requires MCP env)

Standalone Mode (for testing)

npm run start:standalone     # Test server without MCP client
npm run dev:standalone       # Development standalone mode

Configuration Tools

# Validate your complete configuration
npm run validate-config

# Or use the comprehensive MCP validation tool
node dist/index.js
# Then call: validate_configuration_comprehensive

Key Features

Multi-Provider LLM Support

Athena Protocol supports 14 providers including:

  • Cloud Providers: OpenAI, Anthropic, Google, Azure OpenAI, AWS Bedrock, Vertex AI
  • Specialized Providers: OpenRouter, Groq, Mistral AI, Perplexity, XAI, Qwen, ZAI
  • Local/Self-Hosted: Ollama
No findings