FrontAgent

npm version · License: MIT · Node.js version

Enterprise-grade AI Agent System - Constrained by SDD, Powered by MCP for Controlled Perception and Execution

Chinese Docs | Quick Start | Architecture | Design Doc

FrontAgent is an AI Agent system designed specifically for frontend engineering, addressing core challenges faced when deploying agents in real-world engineering scenarios:

  • Two-Stage Architecture - Separate planning and execution to avoid JSON parsing errors and enable dynamic code generation
  • Phase-Based Execution - Steps grouped by phases with error recovery within each phase
  • Self-Healing - Tool Error Feedback Loop automatically analyzes errors and generates fix steps
  • Facts Memory - Structured facts-based context system for precise project state tracking
  • Module Dependency Tracking - Automatic import/export parsing to detect path hallucinations
  • Hallucination Prevention - Multi-layer hallucination detection and interception
  • SDD Constraints - Specification Driven Development as hard constraints for agent behavior
  • MCP Protocol - Controlled tool invocation via Model Context Protocol
  • Minimal Changes - Patch-based code modifications with rollback support
  • Web Awareness - Understand page structure through browser MCP
  • Shell Integration - Terminal command execution (requires user approval)
  • Pre-Planning Scan - Scan project structure before planning to generate accurate file paths
  • Auto Port Detection - Automatically detect dev server ports from config files
  • Remote Hybrid RAG - Full-repository indexing with submodule exclusion, combining BM25 keyword search and embedding-based semantic search
  • LangGraph Engine (Optional) - Switchable graph-based execution engine with optional checkpoints
  • Planner Skills Layer - Reusable planning skills for task decomposition and phase injection
  • Skill Lab - Benchmark, improve, and promote content skills with local eval suites
  • Repository Management Phase - Auto git/gh workflow after acceptance (commit, push, PR)

TL;DR

# 1. Install globally via npm
npm install -g frontagent
# or using pnpm
pnpm add -g frontagent
# or using yarn
yarn global add frontagent

# 2. Configure LLM (supports OpenAI and Anthropic)
# OpenAI config
export PROVIDER="openai"
export BASE_URL="https://api.openai.com/v1"
export MODEL="gpt-4"
export API_KEY="sk-..."

# Or Anthropic config
export PROVIDER="anthropic"
export BASE_URL="https://api.anthropic.com"
export MODEL="claude-sonnet-4-20250514"
export API_KEY="sk-ant-..."

# 3. Navigate to your project directory and initialize SDD
cd your-project
frontagent init

# 4. Let AI help you complete tasks
frontagent run "Create a user login page"
frontagent run "Optimize homepage loading performance"
frontagent run "Add dark mode support"
# Use LangGraph engine + checkpoint (optional)
frontagent run "Add route guards and open a PR" --engine langgraph --langgraph-checkpoint

Remote RAG

FrontAgent now supports a full remote repository knowledge base flow for planning and code generation:

  • It syncs the remote repository into .frontagent/rag-cache/repo
  • It chunks and indexes the full repository, automatically excluding Git submodule paths
  • It runs BM25 keyword retrieval and embedding-based semantic retrieval in parallel
  • It applies metadata filters to each candidate list, then fuses the ranked results
  • Built indexes and embedding vectors are cached under .frontagent/rag-cache
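The fusion step above can be sketched as a weighted combination of normalized BM25 and semantic scores. This is an illustrative sketch, not FrontAgent's actual API: the `Candidate` shape and `fuseResults` helper are assumptions, and the default weights mirror the 0.45/0.55 CLI defaults shown below.

```typescript
// Hypothetical sketch of hybrid score fusion: normalize each candidate
// list to [0, 1], then combine with the configured keyword/semantic weights.
interface Candidate { id: string; score: number }

function normalize(list: Candidate[]): Map<string, number> {
  const max = list.reduce((m, c) => Math.max(m, c.score), 1e-9);
  return new Map(list.map(c => [c.id, c.score / max] as [string, number]));
}

function fuseResults(
  keyword: Candidate[],
  semantic: Candidate[],
  keywordWeight = 0.45,
  semanticWeight = 0.55,
): Candidate[] {
  const kw = normalize(keyword);
  const sem = normalize(semantic);
  const ids = new Set(Array.from(kw.keys()).concat(Array.from(sem.keys())));
  return Array.from(ids)
    .map(id => ({
      id,
      score:
        keywordWeight * (kw.get(id) ?? 0) +
        semanticWeight * (sem.get(id) ?? 0),
    }))
    .sort((a, b) => b.score - a.score);
}
```

A chunk ranked well by both retrievers outscores one that only matches keywords, which is the point of fusing the two candidate lists instead of concatenating them.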

Default knowledge source:

  • Repository: https://github.com/ceilf6/Lab.git

CLI options:

frontagent run "Explain React setState behavior" \
  --provider openai \
  --base-url https://yunwu.ai/v1 \
  --api-key YOUR_TOKEN \
  --rag-repo https://github.com/ceilf6/Lab.git \
  --rag-branch main \
  --rag-keyword-candidates 40 \
  --rag-semantic-candidates 40 \
  --rag-keyword-weight 0.45 \
  --rag-semantic-weight 0.55

# When provider=openai, RAG embeddings inherit the same base-url/api-key by default.
# Override them only if your embedding endpoint is different.
frontagent run "Explain React setState behavior" \
  --provider openai \
  --base-url https://yunwu.ai/v1 \
  --api-key YOUR_TOKEN \
  --rag-embedding-model text-embedding-3-small

# Use Weaviate as the semantic vector store (BM25 stays local)
frontagent run "Explain React setState behavior" \
  --provider openai \
  --base-url https://yunwu.ai/v1 \
  --api-key YOUR_TOKEN \
  --rag-embedding-model text-embedding-3-small \
  --rag-vector-store-provider weaviate \
  --rag-weaviate-url http://127.0.0.1:8080 \
  --rag-weaviate-collection-prefix FrontAgentRagChunk

# Disable LLM query rewrite before retrieval
frontagent run "How to build a custom selector" \
  --disable-rag-query-rewrite

# Cross-encoder reranking is enabled by default after BM25 + embedding candidate retrieval
frontagent run "Explain React setState behavior" \
  --provider openai \
  --base-url https://yunwu.ai/v1 \
  --api-key YOUR_TOKEN \
  --rag-embedding-model text-embedding-3-small \
  --rag-reranker-model jina-reranker-v2-base-multilingual \
  --rag-reranker-base-url https://your-reranker-endpoint/v1

# Disable reranking for a run
frontagent run "Explain React setState behavior" \
  --disable-rag-reranker

# Disable semantic retrieval and use BM25 only
frontagent run "Explain React setState behavior" \
  --disable-rag-semantic

# Disable remote RAG for a run
frontagent run "Create a page" --disable-rag

Skill Lab

FrontAgent now includes a local Skill Lab workflow for iterating on content skills under skills/.

# List visible content skills
frontagent skill list

# Scaffold a new content skill
frontagent skill scaffold pricing-audit

# Generate starter trigger evals for a skill
frontagent skill init-evals frontend-design

# Generate starter behavior evals (binary checks for output quality)
frontagent skill init-behavior-evals frontend-design

# Benchmark current trigger behavior
frontagent skill benchmark frontend-design

# Benchmark trigger + behavior together
frontagent skill benchmark frontend-design --behavior

# Generate a candidate revision and compare it against baseline
frontagent skill improve frontend-design

# Improve with both trigger and behavior eval suites
frontagent skill improve frontend-design --behavior

# Promote a candidate after review
frontagent skill promote frontend-design 20260331T120000Z

The current Skill Lab flow supports two eval tracks for content skills:

  • Trigger evals: whether the skill activates correctly.
  • Behavior evals: whether the final output quality passes binary checks.

You can run trigger-only (default) or trigger + behavior (--behavior) in benchmark/improve.
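A behavior eval suite can be pictured as a set of binary checks over the skill's final output. This is only a sketch of the idea; the `BehaviorEval` shape and `runBehaviorEvals` helper are assumptions, not Skill Lab's real schema.

```typescript
// Illustrative model of behavior evals: each check returns pass/fail,
// and the suite reports a simple pass rate over the output.
interface BehaviorEval {
  name: string;
  check: (output: string) => boolean;
}

function runBehaviorEvals(output: string, evals: BehaviorEval[]) {
  const results = evals.map(e => ({ name: e.name, pass: e.check(output) }));
  const passed = results.filter(r => r.pass).length;
  return { results, passRate: passed / Math.max(evals.length, 1) };
}
```

Binary checks keep the eval signal unambiguous: a candidate revision is promoted only if its pass rate beats the baseline's.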

Environment variables:

export FRONTAGENT_RAG_REPO="https://github.com/ceilf6/Lab.git"
export FRONTAGENT_RAG_BRANCH="main"
export FRONTAGENT_RAG_MAX_RESULTS="5"
export FRONTAGENT_RAG_KEYWORD_CANDIDATES="40"
export FRONTAGENT_RAG_SEMANTIC_CANDIDATES="40"
export FRONTAGENT_RAG_KEYWORD_WEIGHT="0.45"
export FRONTAGENT_RAG_SEMANTIC_WEIGHT="0.55"
export FRONTAGENT_RAG_QUERY_REWRITE_MAX_TOKENS="160"
export FRONTAGENT_RAG_QUERY_REWRITE_TEMPERATURE="0.1"
export FRONTAGENT_RAG_RERANKER_MODEL="jina-reranker-v2-base-multilingual"
export FRONTAGENT_RAG_RERANKER_BASE_URL="https://your-reranker-endpoint/v1"
export FRONTAGENT_RAG_RERANKER_API_KEY="sk-..."
export FRONTAGENT_RAG_RERANKER_CANDIDATE_COUNT="20"
export FRONTAGENT_RAG_RERANKER_MAX_DOCUMENT_CHARS="1800"
export FRONTAGENT_RAG_EMBEDDING_MODEL="text-embedding-3-small"
export FRONTAGENT_RAG_EMBEDDING_BASE_URL="https://api.openai.com/v1"
export FRONTAGENT_RAG_EMBEDDING_API_KEY="sk-..."
export FRONTAGENT_RAG_VECTOR_STORE_PROVIDER="weaviate"
export FRONTAGENT_RAG_WEAVIATE_URL="http://127.0.0.1:8080"
export FRONTAGENT_RAG_WEAVIATE_API_KEY=""
export FRONTAGENT_RAG_WEAVIATE_COLLECTION_PREFIX="FrontAgentRagChunk"

If provider=openai and FRONTAGENT_RAG_EMBEDDING_BASE_URL / FRONTAGENT_RAG_EMBEDDING_API_KEY are not set, FrontAgent automatically reuses the LLM base-url and api-key for embeddings.
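That fallback can be sketched roughly as follows. The `resolveEmbeddingConfig` helper and its field names are assumptions for illustration, not FrontAgent internals, and the non-openai default URL is a placeholder.

```typescript
// Sketch of the embedding-config fallback: explicit env vars win;
// otherwise an openai provider shares its base-url/api-key with RAG.
interface LlmConfig { provider: string; baseUrl: string; apiKey: string }
interface EmbeddingConfig { baseUrl: string; apiKey: string }

function resolveEmbeddingConfig(
  llm: LlmConfig,
  env: Record<string, string | undefined>,
): EmbeddingConfig {
  const inherit = llm.provider === 'openai';
  return {
    baseUrl:
      env.FRONTAGENT_RAG_EMBEDDING_BASE_URL ??
      // placeholder assumption for non-openai providers
      (inherit ? llm.baseUrl : 'https://api.openai.com/v1'),
    apiKey:
      env.FRONTAGENT_RAG_EMBEDDING_API_KEY ??
      (inherit ? llm.apiKey : ''),
  };
}
```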

Main LLM sampling controls:

frontagent run "Explain React createElement" \
  --temperature 0.2 \
  --top-p 0.9

  • --temperature is supported.
  • --top-p is supported through the AI SDK call settings.
  • --top-k is exposed, but only some providers/models support it. For example, Anthropic models can use it, while OpenAI-compatible chat models may ignore it as unsupported.
  • repetition_penalty is not exposed yet in FrontAgent because the current AI SDK/provider stack does not provide a stable cross-provider path for it.

Before retrieval, FrontAgent now sends the user's original request through a separate LLM rewrite step to generate a more retrieval-friendly frontend search query. This rewrite uses the same provider/base-url/model/api-key as the main agent, but the rewritten query is only used for RAG and does not replace the user's original task.
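The rewrite step can be sketched like this, assuming an injected LLM call. The function names and prompt wording are illustrative; the 160-token and 0.1-temperature defaults echo the env vars listed above.

```typescript
// Sketch of the retrieval-side query rewrite: the rewritten text is used
// only as the RAG query; the user's original task is always preserved.
type LlmCall = (
  prompt: string,
  opts: { maxTokens: number; temperature: number },
) => string;

function rewriteForRetrieval(task: string, llm: LlmCall) {
  const prompt =
    'Rewrite the following frontend task as a concise, ' +
    `retrieval-friendly search query:\n${task}`;
  const rewritten = llm(prompt, { maxTokens: 160, temperature: 0.1 });
  // Fall back to the original task if the rewrite comes back empty.
  return { task, ragQuery: rewritten.trim() || task };
}
```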

After BM25 + embedding recall, FrontAgent by default sends the top candidate chunks to a reranker endpoint (/rerank, Jina/Cohere-compatible) for cross-encoder-style final ordering whenever a reranker model/base-url/api-key is available. Use --disable-rag-reranker to turn it off for a run.
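A Jina/Cohere-compatible /rerank request body for that step might be assembled like this. This is a hedged sketch: `buildRerankRequest` is not FrontAgent's actual code, and its defaults simply mirror the reranker env vars above.

```typescript
// Sketch of a /rerank request body: documents are truncated to the
// configured max chars before being sent to the cross-encoder endpoint.
interface RerankRequest {
  model: string;
  query: string;
  documents: string[];
  top_n: number;
}

function buildRerankRequest(
  query: string,
  chunks: string[],
  model = 'jina-reranker-v2-base-multilingual',
  maxDocumentChars = 1800,
  topN = 5,
): RerankRequest {
  return {
    model,
    query,
    documents: chunks.map(c => c.slice(0, maxDocumentChars)),
    top_n: topN,
  };
}
```

Truncating documents bounds the cross-encoder's input cost while keeping enough of each chunk for the reranker to score relevance.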

When FRONTAGENT_RAG_VECTOR_STORE_PROVIDER=weaviate, FrontAgent keeps BM25 in the local index.json, but semantic vectors are written to and queried from Weaviate instead of embeddings.json.

Prebuilt cache bundle workflow:

  • Do not commit .frontagent/rag-cache into Git
  • Export a prebuilt cache bundle and upload it to GitHub Releases
