BiliStalkerMCP


Bilibili MCP Server for Specific User Analysis

BiliStalkerMCP is a Bilibili MCP server built on Model Context Protocol (MCP), designed for AI agents that need to analyze a specific Bilibili user or creator.

It is optimized for workflows that start from a target uid or username, then retrieve that user's profile, videos, dynamics, articles, subtitles, and followings with structured tools.

If you are searching for a Bilibili MCP server, a Bilibili Model Context Protocol server, or an MCP server for tracking and analyzing a specific Bilibili user, this repository is designed for that use case.

English | 中文说明

Installation

uvx bili-stalker-mcp
# or
pip install bili-stalker-mcp

Configuration (Claude Desktop, Recommended)

{
  "mcpServers": {
    "bilistalker": {
      "command": "uv",
      "args": ["run", "--directory", "/path/to/BiliStalkerMCP", "bili-stalker-mcp"],
      "env": {
        "SESSDATA": "required_sessdata",
        "BILI_JCT": "optional_jct",
        "BUVID3": "optional_buvid3"
      }
    }
  }
}

Prefer uv run --directory ... for faster local updates when PyPI release propagation is delayed. You can still use uvx bili-stalker-mcp for quick one-off usage.

Auth: Obtain SESSDATA from Browser DevTools (F12) > Application > Cookies > .bilibili.com.

Environment Variables

| Key | Required | Description |
|-----|:---:|-------------|
| SESSDATA | Yes | Bilibili session token. |
| BILI_JCT | No | CSRF protection token. |
| BUVID3 | No | Hardware fingerprint (reduces rate-limiting risk). |
| BILI_LOG_LEVEL | No | DEBUG, INFO (default), WARNING. |
| BILI_TIMEZONE | No | Output time zone for formatted timestamps (default: Asia/Shanghai). |
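As a sketch of how these variables fit together: the helper below validates and collects them with the defaults listed above. `load_settings` itself is illustrative, not a function from this package.

```python
import os

# Hypothetical sketch of reading the environment variables from the table
# above; only SESSDATA is mandatory, the rest fall back to defaults.
def load_settings(env=os.environ):
    sessdata = env.get("SESSDATA")
    if not sessdata:
        raise RuntimeError("SESSDATA is required (see Environment Variables)")
    return {
        "sessdata": sessdata,
        "bili_jct": env.get("BILI_JCT"),    # optional CSRF token
        "buvid3": env.get("BUVID3"),        # optional fingerprint
        "log_level": env.get("BILI_LOG_LEVEL", "INFO"),
        "timezone": env.get("BILI_TIMEZONE", "Asia/Shanghai"),
    }
```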

Available Tools

| Tool | Capability | Parameters |
|------|------------|------------|
| get_user_info | Profile & core statistics | user_id_or_username |
| get_user_videos | Lightweight video list | user_id_or_username, page, limit |
| search_user_videos | Keyword search in one user's video list | user_id_or_username, keyword, page, limit |
| get_video_detail | Full video detail + optional subtitles | bvid, fetch_subtitles (default: false), subtitle_mode (smart/full/minimal), subtitle_lang (default: auto), subtitle_max_chars |
| get_user_dynamics | Structured dynamics with cursor pagination | user_id_or_username, cursor, limit, dynamic_type |
| get_user_articles | Lightweight article list | user_id_or_username, page, limit |
| get_article_content | Full article markdown content | article_id |
| get_user_followings | Subscription list analysis | user_id_or_username, page, limit |
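For concreteness, an MCP client invokes one of these tools with a JSON-RPC `tools/call` request. The payload below follows the generic MCP request shape; the argument values are illustrative.

```python
import json

# Sketch of a JSON-RPC "tools/call" request for the get_user_videos tool.
# The argument names match the parameters column in the table above.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_user_videos",
        "arguments": {"user_id_or_username": "2", "page": 1, "limit": 10},
    },
}
payload = json.dumps(request)  # this string goes over the stdio transport
```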

Dynamic Filtering (dynamic_type)

  • ALL (default): Text, Draw, and Reposts.
  • ALL_RAW: Unfiltered (includes Videos & Articles).
  • VIDEO, ARTICLE, DRAW, TEXT: Specific category filtering.

Pagination: Responses include next_cursor. Pass it as the cursor parameter of the next request to continue paging.
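The cursor flow can be sketched as a simple loop. Here `fetch_dynamics` is a hypothetical stand-in for calling the get_user_dynamics tool through an MCP client; the `items`/`next_cursor` response keys are assumed from the description above.

```python
# Illustrative cursor-pagination loop over get_user_dynamics results.
def fetch_all_dynamics(fetch_dynamics, uid, limit=20):
    items, cursor = [], None
    while True:
        page = fetch_dynamics(user_id_or_username=uid, cursor=cursor, limit=limit)
        items.extend(page["items"])
        cursor = page.get("next_cursor")
        if not cursor:  # no next_cursor means the feed is exhausted
            break
    return items
```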

Subtitle Modes (get_video_detail)

  • smart (default when fetch_subtitles=true): fetch metadata for all pages, but download text for only the single best-matched subtitle track.
  • full: download text for all subtitle tracks (higher cost).
  • minimal: skip subtitle metadata and subtitle text fetching.

subtitle_lang can force a language (for example en-US); auto uses built-in priority fallback.
subtitle_max_chars caps returned subtitle text size to avoid token explosion.
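The effect of subtitle_max_chars can be illustrated as a simple cap. The function and truncation marker below are hypothetical; the server's actual cut point and marker may differ.

```python
# Hypothetical sketch of a subtitle_max_chars cap: keep at most max_chars
# characters of subtitle text and mark that a cut happened.
def cap_subtitle_text(text, max_chars):
    if max_chars is None or len(text) <= max_chars:
        return text
    return text[:max_chars] + " [truncated]"
```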

Bundled Skill

The repository ships a ready-to-use AI agent skill in skills/bili-content-analysis/:

skills/bili-content-analysis/
├── SKILL.md                        # Workflow & output contract
└── references/
    └── analysis-style.md           # Detailed writing style rules

What It Does

Guides compatible AI agents (Gemini, Claude, etc.) through a structured 6-step workflow for deep Bilibili content analysis:

  1. Clarify target and scope (uid / bvid / keyword).
  2. Collect evidence — lightweight lists first, heavy detail only for high-value items.
  3. Reconstruct source structure before interpreting (timeline, chapters, speakers).
  4. Analyze — facts, logic chain, assumptions, themes, and shifts.
  5. Retain anchors — uid, bvid, article_id, timestamps, key source snippets.
  6. Handle failures — state blockers explicitly, stop speculation.

Usage

Copy the bili-content-analysis folder into your project's skill directory:

<project>/.agent/skills/bili-content-analysis/

The agent will automatically activate the skill when user requests involve Bilibili creator tracking, transcript interpretation, timeline reconstruction, or content analysis.

Development

# Setup
git clone https://github.com/222wcnm/BiliStalkerMCP.git
cd BiliStalkerMCP
uv pip install -e ".[dev]"

# Test
uv run pytest -q

# Integration & Performance (Requires Auth)
uv run python scripts/integration_suite.py -u <UID>
uv run python scripts/perf_baseline.py -u <UID> --tools dynamics -n 3

Release (Maintainers)

Prerequisite: Ensure that a .pypirc file is configured in your user home directory to provide PyPI credentials.

# Build + test + twine check (no upload)
.\scripts\pypi_release.ps1

# Upload to TestPyPI
.\scripts\pypi_release.ps1 -TestPyPI -Upload

# Upload to PyPI
.\scripts\pypi_release.ps1 -Upload

Docker

Runs via stdio transport. No ports exposed.

docker build -t bilistalker-mcp .
docker run -i --rm -e SESSDATA=... bilistalker-mcp

Troubleshooting

  • 412 Precondition Failed: Bilibili anti-crawling system triggered. Refresh SESSDATA or provide BUVID3.
  • Cloud IPs: Highly susceptible to blocking; local execution is recommended.

License

MIT

Disclaimer: For personal research and learning only. Bulk profiling, harassment, or commercial surveillance is prohibited.


This project is built and maintained with the help of AI.
