
<div align="center"> <img src="docs/images/OmicsClaw_logo.jpeg" alt="OmicsClaw Logo" width="400"/> <h3>🧬 OmicsClaw</h3> <p><strong>Your Persistent AI Research Partner for Multi-Omics Analysis</strong></p> <p>Remembers your data • Learns your preferences • Resumes your workflows</p> <p><em>Conversational. Memory-enabled. Local-first. Cross-platform.</em></p> <p> <a href="README.md"><b>English</b></a> • <a href="README_zh-CN.md"><b>简体中文</b></a> </p> </div>

OmicsClaw

AI research assistant that remembers. OmicsClaw transforms multi-omics analysis from repetitive command execution into natural conversations with a persistent partner that tracks your datasets, learns your methods, and resumes interrupted workflows across sessions.

Python 3.11+ License Code style: black CI Website

[!NOTE] 🚀 Official v0.1.0 Release

After extensive development and rigorous testing, OmicsClaw v0.1.0 is officially released! This milestone version completes the core architecture, elevating the interactive natural-language analysis experience, introducing a native Memory Explorer dashboard, and providing robust execution of 72 built-in skills across 6 omics domains. Try it now and share feedback or suggestions via GitHub Issues.

<h3>⚡ Unified Control, Different Surfaces</h3> <table> <tr> <th width="75%"><p align="center">🖥️ CLI / TUI</p></th> <th width="25%"><p align="center">📱 Mobile (Feishu)</p></th> </tr> <tr> <td align="center"> <video src="https://github.com/user-attachments/assets/a24b16b8-dc72-439a-8fcd-d0c0623a4c8a" autoplay loop muted playsinline width="100%"> <a href="https://github.com/user-attachments/assets/a24b16b8-dc72-439a-8fcd-d0c0623a4c8a">View CLI demo</a> </video> </td> <td align="center"> <video src="https://github.com/user-attachments/assets/0ccb21f8-6aa9-45ec-b50d-44146566e64e" width="100%" autoplay loop muted playsinline> <a href="https://github.com/user-attachments/assets/0ccb21f8-6aa9-45ec-b50d-44146566e64e">View mobile demo</a> </video> </td> </tr> </table>

Why OmicsClaw?

Traditional tools make you repeat yourself. Every session starts from zero: re-upload data, re-explain context, re-run preprocessing. OmicsClaw remembers.

✨ Features

  • 🧠 Persistent Memory — Context, preferences, and analysis history survive across sessions.
  • 🛠️ Extensibility (MCP & Skill Builder) — Natively integrates Model Context Protocol (MCP) servers and features omics-skill-builder to automate custom analysis deployment.
  • 🌐 Multi-Provider — Anthropic, OpenAI, DeepSeek, or local LLMs — one config to switch.
  • 📱 Multi-Channel — CLI as the hub; Telegram, Feishu, and more — one agent session.
  • 🔄 Workflow Continuity — Resume interrupted analyses, track lineage, and avoid redundant computation.
  • 🔒 Privacy-First — All processing is local; memory stores metadata only (no raw data uploads).
  • 🎯 Smart Routing — Natural language routed to the appropriate analysis automatically.
  • 🧬 Multi-Omics Coverage — 72 predefined skills across spatial, single-cell, genomics, proteomics, metabolomics, bulk RNA-seq, literature, and orchestration.

What makes it different:

| Traditional Tools | OmicsClaw |
|-------------------|-----------|
| Re-upload data every session | Remembers file paths & metadata |
| Forget analysis history | Tracks full lineage (preprocess → cluster → DE) |
| Repeat parameters manually | Learns & applies your preferences |
| CLI-only, steep learning curve | Chat interface + CLI |
| Stateless execution | Persistent research partner |

📖 Deep dive: See docs/MEMORY_SYSTEM.md for detailed comparison of memory vs. stateless workflows.

📦 Installation

To prevent dependency conflicts, we strongly recommend installing OmicsClaw inside a virtual environment. You can use either the standard venv or the ultra-fast uv.

<details open> <summary> 🪛 Setup Virtual Environment (Highly Recommended)</summary>

Option A: Using standard venv

```bash
# 1. Create a virtual environment
python3 -m venv .venv

# 2. Activate it
source .venv/bin/activate
```

Option B: Using uv (Ultrafast)

```bash
# 1. Install uv (if you don't have it)
curl -LsSf https://astral.sh/uv/install.sh | sh

# 2. Create and activate a virtual environment
uv venv
source .venv/bin/activate
```
</details>

```bash
# Clone the repository
git clone https://github.com/TianGzlab/OmicsClaw.git
cd OmicsClaw

# Install core system operations
pip install -e .

# Optional: Install interactive TUI & bot capabilities
# Includes prompt-toolkit/Textual plus the LLM client stack used by interactive mode
pip install -e ".[tui]"
pip install -r bot/requirements.txt  # If you want messaging channels
```

Advanced installation tiers:

  • pip install -e . — Core system operations
  • pip install -e ".[<domain>]" — Where <domain> is spatial, singlecell, genomics, proteomics, metabolomics, or bulkrna
  • pip install -e ".[spatial-domains]" — Standalone Deep Learning Layer for SpaGCN and STAGATE
  • pip install -e ".[full]" — All domain extras and optional method backends across all domains

Check your installation status anytime with python omicsclaw.py env.

🔑 Configuration

The Easiest Way (Interactive Setup): OmicsClaw provides a built-in interactive wizard that walks through LLM setup, shared runtime settings, graph memory options, and messaging channel credentials in one flow.

```bash
omicsclaw onboard  # or use the short alias: oc onboard
```

The wizard writes the project-root .env used by CLI, TUI, routing, and bot entrypoints.

<div align="center"> <img src="docs/images/OmicsClaw_configure_fast.png" alt="OmicsClaw Interactive Setup Wizard" width="85%"/> </div> <details> <summary><b>Option B: Manual Configuration (.env)</b></summary>

OmicsClaw supports switching between multiple LLM engines with a single config change. It automatically loads the project-root .env file for CLI, TUI, routing, and bot entrypoints. If python-dotenv is not installed, it falls back to a built-in .env parser, so standard key/value configuration still works in lean installs.
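The dotenv fallback can be pictured with a minimal parser like the sketch below. This is illustrative only — `parse_env` and its exact rules are assumptions for this example, not OmicsClaw's actual implementation:

```python
def parse_env(text: str) -> dict:
    """Parse simple KEY=VALUE lines, skipping blanks and '#' comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # ignore blank lines and comments
        key, sep, value = line.partition("=")
        if not sep:
            continue  # not a KEY=VALUE line
        env[key.strip()] = value.strip().strip('"').strip("'")
    return env

# Example: parse a small .env-style snippet into a dict
config = parse_env("# LLM settings\nLLM_PROVIDER=ollama\nOMICSCLAW_MODEL=qwen2.5:7b\n")
```

Libraries like python-dotenv additionally handle quoting edge cases and variable interpolation, which is why it is preferred when installed.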

For hosted providers, you can configure either:

  • LLM_API_KEY
  • a provider-specific key such as DEEPSEEK_API_KEY, OPENAI_API_KEY, or ANTHROPIC_API_KEY

1. DeepSeek (Default):

```env
DEEPSEEK_API_KEY=sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```

2. Anthropic (Claude):

```env
ANTHROPIC_API_KEY=sk-ant-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
# Automatically detects the key and defaults to claude-3-5-sonnet
```

3. OpenAI (GPT-4o):

```env
OPENAI_API_KEY=sk-proj-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```

4. Local LLM (Ollama): If you have strict data compliance requirements, you can run models entirely locally via Ollama. No API key is needed:

```env
LLM_PROVIDER=ollama
OMICSCLAW_MODEL=qwen2.5:7b  # Replace with your pulled model
LLM_BASE_URL=http://localhost:11434/v1
```

5. Custom OpenAI-compatible endpoint:

```env
LLM_PROVIDER=custom
LLM_BASE_URL=https://your-endpoint.example.com/v1
OMICSCLAW_MODEL=your-model-name
LLM_API_KEY=sk-xxxxxxxxxxxxxxxx
```
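
Any OpenAI-compatible endpoint speaks the same `/v1/chat/completions` wire format, which is what makes this switch a pure config change. A minimal stdlib sketch of the request shape (the endpoint URL, model name, and key below are placeholders, not real credentials):

```python
import json
import urllib.request

BASE_URL = "https://your-endpoint.example.com/v1"  # placeholder endpoint
API_KEY = "sk-xxxxxxxxxxxxxxxx"                    # placeholder key

# Standard chat-completions payload understood by OpenAI-compatible servers
payload = {
    "model": "your-model-name",
    "messages": [{"role": "user", "content": "Summarize my last Visium run."}],
}
req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req) would send it; left out here to avoid a live
# network call in this sketch.
```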

📖 Full Provider List: See .env.example for instructions on configuring other engines like NVIDIA NIM, OpenRouter, DashScope, and custom endpoints.

📖 Bot / channel config: See bot/README.md and bot/CHANNELS_SETUP.md for messaging channel credentials, allowlists, and runtime controls.

</details>

⚡ Quick Start

1. Chat Interface (Recommended)


```bash
# Start the interactive terminal chat
omicsclaw interactive  # or: omicsclaw chat
omicsclaw tui          # or: oc tui

# OR start messaging channels as background frontends
python -m bot.run --channels feishu,telegram
```

📖 Bot Configuration Guide: See bot/README.md for detailed step-by-step instructions on configuring .env and channel-specific credentials.

Chat with your data:

```text
You: "Preprocess my Visium data"
Bot: ✅ [Runs QC, normalization, clustering]
     💾 [Remembers: visium_sample.h5ad, 5000 spots, normalized]

[Next day]
You: "Find spatial domains"
Bot: 🧠 "Using your Visium data from yesterday (5000 spots, normalized).
     Running domain detection..."
```
<details> <summary>In-session commands (Interactive CLI/TUI)</summary>

| Command | Description |
| ------- | ----------- |
| **Analysis & Orchestration** | |
| `/run <skill> [...]` | Run an analysis skill directly (e.g. `/run spatial-domains --demo`) |
| `/skills [domain]` | List all available analysis skills |
| `/research` | Launch the multi-agent autonomous research pipeline |
| `/install-skill` | Add new custom skills or extension packs from local or GitHub |
| **Workflow & Planning** | |
| `/plan` | Interactively inspect or create the session's action plan |
| `/tasks` | View the structured execution steps for the current pipeline |
| `/approve-plan` | Approve the autonomous pipeline to proceed |
| `/do-current-task` | Proceed with the next execution step in the pipeline |
| **Session & Context Memory** | |
| `/sessions` | List all recent saved conversational workflows |
| `/resume [id/tag]` | Resume

No findings