# BioClaw

AI-Powered Bioinformatics Research Assistant on WhatsApp. Built on OpenClaw.

Install / use:

```
/learn @Runchuan-BU/BioClawREADME
```
BioClaw brings the power of computational biology directly into WhatsApp group chats. Researchers can run BLAST searches, render protein structures, generate publication-quality plots, perform sequencing QC, and search the literature — all through natural language messages.
Built on the NanoClaw architecture with bioinformatics tools and skills from the STELLA project, powered by the Claude Agent SDK.
<p align="center">
New BioClaw-compatible skills can be developed either directly in BioClaw or in <a href="https://github.com/zongtingwei/Bioclaw_Skills_Hub">Bioclaw_Skills_Hub</a>, which serves as a staging space for early iteration and testing. Skills that prove practical and stable are then promoted into the main BioClaw repository. To get newly promoted skills and other updates, pull the latest version of this repository with <code>git pull</code>.
</p>

## Join WeChat Group
Welcome to join our WeChat group to discuss and exchange ideas! Scan the QR code below to join:
<p align="center">
<img src="wechat_group.jpg" width="50%" alt="WeChat Group QR Code"/>
<br/>
<em>Scan to join the BioClaw community</em>
</p>

## Contents
- Overview
- What's New
- Quick Start
- Messaging channels
- Demo Examples
- System Architecture
- Skills & Skills Hub
- Included Tools
- Project Structure
- Citation
- License
## Overview
The rapid growth of biomedical data, tools, and literature has created a fragmented research landscape that outpaces human expertise. Researchers frequently need to switch between command-line bioinformatics tools, visualization software, databases, and literature search engines — often across different machines and environments.
BioClaw addresses this by providing a conversational interface to a comprehensive bioinformatics toolkit. By messaging @Bioclaw in a WhatsApp group, researchers can:
- Sequence Analysis — Run BLAST searches against NCBI databases, align reads with BWA/minimap2, and call variants
- Quality Control — Generate FastQC reports on sequencing data with automated interpretation
- Structural Biology — Fetch and render 3D protein structures from PDB with PyMOL
- Data Visualization — Create volcano plots, heatmaps, and expression figures from CSV data
- Literature Search — Query PubMed for recent papers with structured summaries
- Image-based Wet-Lab Interpretation — Analyze gel/blot photos captured from camera or uploaded in chat (e.g., SDS-PAGE lane quality and target-band checks)
- Workspace Management — Triage files, recommend analysis steps, and manage shared group workspaces
Results — including images, plots, and structured reports — are delivered directly back to the chat.
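Each capability maps to a plain-language request. A few illustrative examples (hypothetical messages, not a fixed command syntax):

```
@Bioclaw blast this sequence against nt: ATGGCCCTG...
@Bioclaw run FastQC on sample_R1.fastq.gz and summarize the warnings
@Bioclaw fetch PDB 1TUP and render the structure
@Bioclaw make a volcano plot from deg_results.csv
@Bioclaw find recent papers on CRISPR base editing off-target effects
```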
## What's New
Recent updates make BioClaw feel much closer to a real multi-chat research workspace:
- Multiple web chats, each with its own memory — the local web UI now lets you open separate threads like ChatGPT, so one chat can stay on literature search while another focuses on QC or plotting.
- A built-in control layer in chat — you can now manage the current thread directly in chat with commands like `/status`, `/doctor`, `/threads`, `/new`, `/use`, `/rename`, `/archive`, `/workspace`, `/provider`, and `/model`.
- Per-thread working directory — `/dir` lets each thread remember its own default folder inside the workspace, so different chats can work in different subdirectories without stepping on each other.
- Reusable shortcuts for recurring workflows — `/commands` and `/alias` let you save common prompts as short commands, so repeated lab routines do not need to be typed from scratch every time.
- Skill visibility and preference control — `/skills` shows the installed BioClaw skill modules and lets you mark preferred ones for the current thread or agent.
- Better local web management — the browser UI now has a thread list, rename/archive controls, and a lightweight management panel for status and diagnostics.
- Quick OpenRouter health check — `npm run check:openrouter` sends a tiny test request using your current `.env` so you can tell whether the key works before debugging the full app.
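As an illustration of the shortcut layer, a recurring QC routine might be saved and reused roughly like this (the exact argument syntax is an assumption; check `/commands` in chat for the real form):

```
/alias qc run FastQC on the newest fastq.gz in this folder and summarize
/qc
```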
## Quick Start

### Prerequisites
- macOS / Linux / Windows (Windows requires PowerShell 5.1+)
- Node.js 20+
- Docker Desktop
- Anthropic API key or OpenRouter API key
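The Node.js 20+ requirement is the one most often missed. A minimal POSIX-shell version check could look like this (a sketch only; `scripts/setup.sh` performs its own prerequisite checks):

```shell
# node_ok VERSION_STRING: succeeds if the major version is >= 20.
# Sketch only, not part of the BioClaw setup scripts.
node_ok() {
  major=$(printf '%s' "$1" | sed 's/^v\{0,1\}\([0-9][0-9]*\).*/\1/')
  [ "${major:-0}" -ge 20 ]
}

# Usage: node_ok "$(node --version)" && echo "Node.js OK"
```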
### Installation

One-command setup (recommended for first-time users):

<details>
<summary><b>macOS / Linux</b></summary>

```bash
git clone https://github.com/Runchuan-BU/BioClaw.git
cd BioClaw
bash scripts/setup.sh
```

</details>
<details>
<summary><b>Windows (PowerShell)</b></summary>

```powershell
git clone https://github.com/Runchuan-BU/BioClaw.git
cd BioClaw
powershell -ExecutionPolicy Bypass -File scripts\setup.ps1
```

</details>
The setup script will check prerequisites, install dependencies, build the Docker image, and walk you through API key configuration interactively.
Manual setup:

```bash
git clone https://github.com/Runchuan-BU/BioClaw.git
cd BioClaw
npm install
cp .env.example .env    # edit with your API keys (see the model section below)
docker build --no-cache -t bioclaw-agent:latest container/   # if the build fails, try uncommenting the alternate image source in the Dockerfile
npm start
```
### Model Provider Configuration
BioClaw now supports two provider paths:

- Anthropic — the default; keeps the original Claude Agent SDK flow
- OpenRouter / OpenAI-compatible — an optional path for OpenRouter and similar `/chat/completions` providers

Create a `.env` file in the project root and choose one of the following setups.
#### Option A — Anthropic (default)

```
ANTHROPIC_API_KEY=your_anthropic_key
```
#### Option B — OpenRouter (Gemini, DeepSeek, Claude, GPT, and more)

```
MODEL_PROVIDER=openrouter
OPENROUTER_API_KEY=sk-or-v1-your-key
OPENROUTER_BASE_URL=https://openrouter.ai/api/v1
OPENROUTER_MODEL=deepseek/deepseek-chat-v3.1
```

Popular model IDs: `deepseek/deepseek-chat-v3.1`, `google/gemini-2.5-flash`, `anthropic/claude-3.5-sonnet`. Full list: openrouter.ai/models
Note: Use models that support tool calling (e.g. DeepSeek, Gemini, Claude). Session history is preserved within a container session; after idle timeout, a new container starts with a fresh context.
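For reference, the `/chat/completions` request body this path sends is minimal. A sketch of composing one by hand (the helper name and the curl invocation are illustrative; BioClaw and `npm run check:openrouter` do this for you):

```shell
# make_payload MODEL PROMPT: prints the minimal chat/completions JSON body.
# Illustrative helper, not part of BioClaw.
make_payload() {
  printf '{"model":"%s","messages":[{"role":"user","content":"%s"}]}' "$1" "$2"
}

# Send it manually (requires a valid key in OPENROUTER_API_KEY):
#   curl -s https://openrouter.ai/api/v1/chat/completions \
#     -H "Authorization: Bearer $OPENROUTER_API_KEY" \
#     -H "Content-Type: application/json" \
#     -d "$(make_payload deepseek/deepseek-chat-v3.1 ping)"
```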
#### Generic OpenAI-compatible setup

```
MODEL_PROVIDER=openai-compatible
OPENAI_COMPATIBLE_API_KEY=your_api_key
OPENAI_COMPATIBLE_BASE_URL=https://your-provider.example/v1
OPENAI_COMPATIBLE_MODEL=your-model-name
```
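Because the three setups are distinguished only by `MODEL_PROVIDER` (absent means Anthropic), a small helper can tell you which path a given `.env` will select before you start the app (a sketch using the variable names above; not part of BioClaw):

```shell
# check_provider ENV_FILE: prints the provider path a .env file selects.
# Sketch only; assumes the variable names shown in this README.
check_provider() {
  if grep -q '^MODEL_PROVIDER=openrouter' "$1" 2>/dev/null; then
    echo "openrouter"
  elif grep -q '^MODEL_PROVIDER=openai-compatible' "$1" 2>/dev/null; then
    echo "openai-compatible"
  else
    echo "anthropic"   # default path when MODEL_PROVIDER is unset
  fi
}
```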
After updating `.env`, restart BioClaw:

```
npm run dev
```

When a container starts, `docker logs <container-name>` will show which provider path is active.
### Usage

In any connected chat, simply message:

```
@Bioclaw <your request>
```
## Messaging channels

Supported platforms include WhatsApp (default), Feishu (Lark), WeCom, Discord, Slack (Socket Mode), WeChat (fully supported), QQ, and an optional local web (browser) chat. Full setup steps, environment variables, and instructions for disabling channels are in docs/CHANNELS.md (Simplified Chinese: docs/CHANNELS.zh-CN.md).
### WhatsApp Integration Example

<img src="ExampleTask/1.jpg" width="300" />

BioClaw supports WhatsApp group workflows for conversational task requests and in-chat delivery of analysis results.

### Feishu (Lark) Integration Example

<img src="docs/images/feishu/feishu-bioclaw.jpg" width="300" />

BioClaw also supports Feishu/Lark conversations for interactive task requests and result delivery in chat.

### WeCom Integration Example

<img src="docs/images/wecom/wecom-bioclaw.jpg" width="300" />

BioClaw also supports WeCom conversations for team collaboration and in-chat delivery of analysis results.

### Discord Integration Example

BioClaw supports Discord channel workflows. A screenshot example will be added in a future update.

### Slack (Socket Mode) Integration Example

BioClaw supports Slack (Socket Mode) workflows. A screenshot example will be added in a future update.

### WeChat Integration Example

<img src="docs/images/weixin/weixin-bioclaw.jpg" width="300" />

BioClaw supports one-click WeChat onboarding and in-chat file handoff workflows (send documents or images, then continue the analysis in the same thread).

### QQ Integration Example

<img src="docs/images/qq/qq-deepseek-1.jpg" width="300" />

BioClaw also supports QQ-based conversations for task requests and chat-native result delivery.

### Local Web UI (Dashboard) Example

<img src="docs/images/dashboard/UI-bioclaw.jpg" width="1000" />

The local web channel includes both chat and the built-in dashboard (Lab trace) for timeline observability.

Lab trace (SSE timeline, workspace tree) is built into the local web UI — no extra configuration needed. See docs/DASHBOARD.md.
## Second Quick Start

Just send the message to OpenClaw:

```
i
```