LayoutCopilot: LLM-Powered Multi-Agent EDA Layout Design Framework

License: MIT Python 3.9+ Docker

LayoutCopilot is a production-ready, multi-agent collaborative framework that bridges the gap between natural language design intent and executable EDA tool commands. It combines Large Language Models (LLMs) with rigorous validation and human-in-the-loop collaboration to transform analog layout design workflows.

🏗️ Architecture Overview

LayoutCopilot implements a dual-pipeline microservices architecture:

Core Pipelines

  • 🧠 ARP (Abstract Request Processor): Handles high-level design intents through knowledge-augmented reasoning
  • ⚡ CRP (Concrete Request Processor): Converts structured requests into validated, executable EDA commands

Multi-Agent System

User Request → Classifier → [Abstract Path] → Analyzer → Refiner → Adapter
                      ↘ [Concrete Path] → Decomposer → CodeGen → Validation
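The routing split at the head of the diagram can be sketched with a toy classifier. This is an illustrative keyword heuristic, not the repository's actual Classifier Agent (which is LLM-backed); the verb list is invented for the example:

```python
# Toy request router deciding between the abstract (ARP) and concrete (CRP)
# paths. The real Classifier Agent is LLM-backed; this keyword heuristic
# is only for illustration.

# Verbs that map directly onto executable EDA commands (hypothetical list)
CONCRETE_VERBS = {"move", "swap", "rotate", "mirror", "resize", "symadd"}

def route_request(text: str) -> str:
    """Return 'concrete' if the request names an executable action,
    else 'abstract' (a high-level intent needing analysis/refinement)."""
    words = text.lower().split()
    return "concrete" if any(w in CONCRETE_VERBS for w in words) else "abstract"

print(route_request("move M1 to (10, 20)"))           # takes the concrete path
print(route_request("improve matching of the pair"))  # takes the abstract path
```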

🚀 Quick Start

Prerequisites

  • Python 3.9+ (required for dependency compatibility)
  • Docker & Docker Compose
  • Redis (message bus)
  • MongoDB (data persistence)

Installation

  1. Clone and set up:
git clone &lt;repository-url&gt;
cd layoutCopilot
  2. Install dependencies:
pip install -r requirements.txt
  3. Configure the environment:
# Copy and edit configuration
cp config/settings.py.example config/settings.py

# Set your API keys
export OPENAI_API_KEY="your-key-here"
# or
export ANTHROPIC_API_KEY="your-key-here"
  4. Start all services:
docker-compose up -d

Usage Examples

1. Submit Natural Language Request

curl -X POST "http://localhost:8000/orchestrator/route" \
  -H "Content-Type: application/json" \
  -d '{
    "user_id": "designer1",
    "text": "add symmetry between M1 and M2 transistors",
    "session_id": "session123",
    "design_context": {"netlist": "differential_pair.sp"}
  }'
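The same request can be issued from Python with only the standard library. The sketch below mirrors the curl example above; the endpoint and field names are taken from it, and the network call is left commented out since it requires a running orchestrator:

```python
import json
from urllib import request

# Payload mirroring the curl example; field names follow the README's API.
payload = {
    "user_id": "designer1",
    "text": "add symmetry between M1 and M2 transistors",
    "session_id": "session123",
    "design_context": {"netlist": "differential_pair.sp"},
}

body = json.dumps(payload).encode("utf-8")
req = request.Request(
    "http://localhost:8000/orchestrator/route",
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Uncomment once the orchestrator service is up:
# with request.urlopen(req) as resp:
#     print(json.load(resp))
```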

2. Monitor Processing Status

curl "http://localhost:8000/orchestrator/status/{request_id}"

3. View Telemetry Data

curl "http://localhost:8001/telemetry/trace/{trace_id}"

🎯 Key Features

✅ Production-Ready Components

| Component | Description | Status |
|-----------|-------------|--------|
| Classifier Agent | Routes requests as 'abstract' or 'concrete' | ✅ Complete |
| Analyzer Agent | RAG-powered knowledge retrieval for abstract requests | ✅ Complete |
| Solution Refiner | Human-in-the-loop collaboration interface | ✅ Complete |
| Solution Adapter | Deterministic translation to concrete requests | ✅ Complete |
| Task Decomposer | Breaks down concrete requests into sub-tasks | ✅ Complete |
| Code Generator | LLM-powered command generation with constraints | ✅ Complete |

✅ Infrastructure & DevOps

  • Event-Driven Architecture with Redis message bus
  • Comprehensive Telemetry with distributed tracing
  • Docker Containerization with multi-service orchestration
  • Health Checks and monitoring endpoints
  • Automated Validation with self-correction loops
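The event-driven pattern can be illustrated with an in-memory stand-in for the Redis message bus. The topic name and message shape here are invented for the sketch; the real system publishes envelopes over Redis:

```python
from collections import defaultdict
from typing import Callable

class InMemoryBus:
    """Minimal pub/sub bus standing in for Redis in this sketch."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        # Deliver to every handler registered for this topic
        for handler in self._subscribers[topic]:
            handler(message)

bus = InMemoryBus()
received = []
# A downstream agent subscribes to a completion topic (topic name is hypothetical)
bus.subscribe("topic.codegen.complete", received.append)
bus.publish("topic.codegen.complete", {"request_id": "r1", "step": "codegen"})
```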

✅ Quality Assurance

  • Syntax Validation against EDA tool schemas
  • Logic Validation for command consistency
  • Evaluation Frameworks (Sanity + Functionality tests)
  • Human Approval Tracking and metrics
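Syntax validation against a command schema can be sketched as a dict-based check. The miniature schema below is invented for illustration; the repository's real schemas live under schemas/:

```python
# Miniature command schema (illustrative only; not the repository's EDA schema)
SCHEMA = {
    "move":   {"device": str, "x": (int, float), "y": (int, float)},
    "rotate": {"device": str, "degrees": (int, float)},
}

def validate_syntax(command: dict) -> list[str]:
    """Return a list of validation errors; an empty list means valid."""
    action = command.get("action")
    if action not in SCHEMA:
        return [f"unknown action: {action!r}"]
    errors = []
    for field, expected in SCHEMA[action].items():
        if field not in command:
            errors.append(f"missing field: {field}")
        elif not isinstance(command[field], expected):
            errors.append(f"bad type for {field}")
    return errors
```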

📊 Supported EDA Commands

The system generates validated commands for:

| Action | Description | Example |
|--------|-------------|---------|
| move | Relocate devices | Move transistor M1 to coordinates (10, 20) |
| swap | Exchange device positions | Swap positions of M1 and M2 |
| symAdd | Add symmetry constraints | Create Y-axis symmetry for [M1, M2] |
| rotate | Rotate devices | Rotate M1 by 90 degrees |
| mirror | Mirror devices | Mirror M2 across X-axis |
| resize | Adjust device dimensions | Resize M1 width to 2.5μm |
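A generated command from the table above might be carried as a small record before validation. The dataclass shape below is an assumption for illustration, not the repository's schemas.data_models:

```python
from dataclasses import dataclass, field

@dataclass
class EDACommand:
    """Illustrative command record; the real data models live in schemas/."""
    action: str                 # one of: move, swap, symAdd, rotate, mirror, resize
    devices: list[str]          # target device names, e.g. ["M1", "M2"]
    params: dict = field(default_factory=dict)  # action-specific arguments

# "add symmetry between M1 and M2" could lower to a command like:
cmd = EDACommand(action="symAdd", devices=["M1", "M2"], params={"axis": "Y"})
```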

🧪 Development & Testing

Running Evaluation Tests

Sanity Check (validates command syntax):

python -m eval.sanity_check --dataset eval/datasets/sanity_test.json

Functionality Check (compares against ground truth):

python -m eval.functionality_check --dataset eval/datasets/functionality_test.json
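The sanity check's core loop reduces to: take each dataset case, check that its generated command is well-formed, and report the pass rate. The dataset shape and the well-formedness test below are invented for the sketch; the real harness would invoke the CodeGen agent:

```python
def sanity_check(cases: list[dict]) -> float:
    """Return the fraction of cases whose command is well-formed.
    Each case is assumed to carry a pre-generated 'command' dict here;
    the actual eval module generates commands before checking them."""
    required = {"action", "devices"}  # hypothetical minimum fields
    if not cases:
        return 0.0
    passed = sum(1 for c in cases if required <= set(c.get("command", {})))
    return passed / len(cases)

dataset = [
    {"text": "move M1", "command": {"action": "move", "devices": ["M1"]}},
    {"text": "do stuff", "command": {"action": "?"}},  # missing 'devices'
]
rate = sanity_check(dataset)  # half the cases pass
```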

Project Structure

layoutCopilot/
├── agents/           # Multi-agent system components
│   ├── classifier/   # Request routing agent
│   ├── analyzer/     # RAG-powered analysis agent
│   ├── refiner/      # Human collaboration agent
│   ├── adapter/      # Solution translation agent
│   ├── decomposer/   # Task breakdown agent
│   └── codegen/      # Command generation agent
├── apps/             # Microservice applications
│   ├── orchestrator/ # Main API and request routing
│   ├── eda_adapter/  # EDA tool integration
│   └── telemetry/    # Monitoring and metrics
├── config/           # Configuration management
├── schemas/          # Data models and validation
├── infra/            # Deployment configurations
└── eval/             # Evaluation frameworks

🐳 Production Deployment

Docker Compose (Recommended)

# Production deployment
docker-compose -f docker-compose.prod.yml up -d

# Scale specific services
docker-compose up -d --scale classifier=3 --scale codegen=2

Environment Configuration

| Variable | Description | Default |
|----------|-------------|---------|
| OPENAI_API_KEY | OpenAI API key for LLM access | Required when LLM_PROVIDER=openai |
| ANTHROPIC_API_KEY | Anthropic Claude API key | Required when LLM_PROVIDER=anthropic |
| OLLAMA_BASE_URL | Local Ollama server URL | http://localhost:11434 |
| REDIS_URL | Redis message bus connection | redis://localhost:6379 |
| MONGODB_URL | MongoDB data store connection | mongodb://localhost:27017 |
| LLM_PROVIDER | LLM provider (openai/anthropic/ollama) | ollama |
| LLM_MODEL | Model name to use | gpt-oss:120b |
| DEBUG_MODE | Enable debug logging | false |

📈 Performance Metrics

The system tracks key performance indicators:

  • 🕐 End-to-End Latency: Average request processing time
  • ✅ Validation Success Rate: Commands passing syntax/logic validation
  • 👍 Human Approval Rate: Solutions approved by domain experts
  • ⚡ Command Execution Rate: Successfully executed EDA commands
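These KPIs reduce to simple aggregations over per-request telemetry records. The record fields below (latency_ms, validated, approved) are assumptions for the sketch, not the telemetry service's actual schema:

```python
from statistics import mean

def summarize(records: list[dict]) -> dict:
    """Aggregate per-request telemetry into the KPIs listed above.
    Field names are hypothetical; booleans average to a rate in [0, 1]."""
    return {
        "avg_latency_ms": mean(r["latency_ms"] for r in records),
        "validation_success_rate": mean(r["validated"] for r in records),
        "human_approval_rate": mean(r["approved"] for r in records),
    }

records = [
    {"latency_ms": 1200, "validated": True, "approved": True},
    {"latency_ms": 800,  "validated": True, "approved": False},
]
kpis = summarize(records)  # e.g. average latency 1000 ms, approval rate 0.5
```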

🛠️ Extending the System

Adding Custom Agents

  1. Create agent class:
from agents.base_agent import BaseAgent
from schemas.data_models import Envelope

class CustomAgent(BaseAgent):
    async def on_message(self, envelope: Envelope) -> None:
        # Implement your logic
        data = await self.load_data(envelope.input_ref)

        # Process data...
        result = {"custom_output": "processed"}

        # Persist and forward
        output_ref = await self.persist_data(
            envelope.request_id, "custom_step", result
        )

        new_envelope = envelope.copy_with(
            agent="CustomAgent",
            step="custom_complete",
            output_ref=output_ref
        )

        await self.publish_event("topic.custom.complete", new_envelope)
  2. Register in Docker Compose:
custom_agent:
  build:
    context: .
    dockerfile: infra/docker/Dockerfile
  environment:
    - SERVICE_NAME=custom_agent
    - PYTHONPATH=/app
  depends_on:
    - redis
    - mongodb

Custom EDA Tool Integration

Extend apps/eda_adapter/adapter.py to support new tools:

class CustomEDAAdapter:
    async def execute_command(self, command: Command):
        # Dispatch on the target tool; add one branch per supported tool
        if command.tool == "CustomTool":
            # Implement tool-specific execution here
            return await self._execute_custom_command(command)
        raise ValueError(f"Unsupported tool: {command.tool}")

🚦 Current Status & Roadmap

✅ Completed (Current State)

  • ✅ Multi-agent architecture with 6 specialized agents
  • ✅ Event-driven communication with Redis
  • ✅ Comprehensive data models and validation
  • ✅ Docker containerization and orchestration
  • ✅ Configuration management with environment variables
  • ✅ Health checks and basic monitoring
  • ✅ Evaluation frameworks for testing

🚧 In Progress

  • 🚧 Knowledge base integration and RAG system
  • 🚧 Full EDA tool adapter implementations
  • 🚧 Web UI for human-in-the-loop interactions
  • 🚧 Advanced telemetry and monitoring dashboard

📋 Planned Features

  • 📋 Kubernetes deployment manifests
  • 📋 Support for additional EDA tools (Mentor, Synopsys)
  • 📋 Advanced layout optimization algorithms
  • 📋 Integration with version control systems
  • 📋 Automated regression testing suite

🤝 Contributing

We welcome contributions! Please:

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/amazing-feature
  3. Make your changes and add tests
  4. Commit: git commit -m 'Add amazing feature'
  5. Push: git push origin feature/amazing-feature
  6. Open a Pull Request

Development Setup

# Install development dependencies
pip install -r requirements-dev.txt

# Run tests
pytest tests/

# Run linting
flake8 .
black .

📝 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgments

Based on research in LLM-powered EDA design automation and implements a production-ready multi-agent system for analog layout design. Special thanks to the EDA and AI research communities for foundational work in this domain.

📞 Support


LayoutCopilot - Transforming Analog Layout Design with AI 🚀
