NekoImageGallery


An online AI image search engine based on the CLIP model and the Qdrant vector database. Supports keyword search and similar-image search.

Chinese documentation (中文文档)

✨ Features

  • Uses the CLIP model to generate a 768-dimensional vector for each image as the basis for search. No manual annotation or classification is needed, and there is no limit on the number of categories.
  • Supports OCR text search: PaddleOCR extracts text from images, and BERT generates text vectors for search.
  • Uses the Qdrant vector database for efficient vector search.
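
The embedding-based search described above boils down to a nearest-neighbour lookup over vectors. Here is a toy sketch in pure Python, using 4-dimensional stand-ins for the real 768-dimensional CLIP embeddings; the names and data are illustrative only, not NekoImageGallery's actual API:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional stand-ins for the real 768-dimensional CLIP vectors.
query_vec = [0.9, 0.1, 0.0, 0.1]   # e.g. embedding of the text query "cat"
image_vecs = {
    "cat.jpg": [0.8, 0.2, 0.1, 0.0],
    "car.jpg": [0.0, 0.1, 0.9, 0.2],
}

# The best match is the image whose embedding is closest to the query's.
best = max(image_vecs, key=lambda k: cosine_similarity(query_vec, image_vecs[k]))
print(best)  # cat.jpg
```

Because text and image embeddings share the same vector space in CLIP, the same similarity computation serves both keyword search (text vector vs. image vectors) and similar-image search (image vector vs. image vectors).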

📷Screenshots

(Six screenshots omitted.)

The above screenshots may contain copyrighted works by different artists; please do not use them for other purposes.

✈️ Deployment

📦 Prerequisites

Hardware requirements

| Hardware | Minimum | Recommended |
|----------|---------|-------------|
| CPU | x86_64 or ARM64 CPU, 2 cores or more | 4 cores or more |
| RAM | 4 GB or more | 8 GB or more |
| Storage | 10 GB or more for libraries, models, and data | 50 GB or more; SSD recommended |
| GPU | Not required | CUDA-supported GPU with 4 GB of VRAM or more, for acceleration |

Software requirements

  • For local deployment: Python 3.10 ~ Python 3.12, with uv package manager installed.
  • For Docker deployment: Docker and Docker Compose (For CUDA users, nvidia-container-runtime is required) or equivalent container runtime.

🖥️ Local Deployment

Choose a metadata storage method

Qdrant Database (Recommended)

In most cases, we recommend using the Qdrant database to store metadata. The Qdrant database provides efficient retrieval performance, flexible scalability, and better data security.

Please deploy the Qdrant database according to the Qdrant documentation. It is recommended to use Docker for deployment.

If you don't want to deploy Qdrant yourself, you can use the online service provided by Qdrant.
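
If you deploy Qdrant with Docker, a minimal Compose file along the lines of Qdrant's documentation might look like the following (image name, ports, and storage path are Qdrant's defaults; adjust as needed):

```yaml
# docker-compose.yml — minimal Qdrant service
services:
  qdrant:
    image: qdrant/qdrant
    ports:
      - "6333:6333"   # REST API
      - "6334:6334"   # gRPC
    volumes:
      - ./qdrant_storage:/qdrant/storage   # persist collections across restarts
```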

Local File Storage

Local file storage directly stores image metadata (including feature vectors, etc.) in a local SQLite database. It is only recommended for small-scale deployments or development deployments.

Local file storage does not require an additional database deployment process, but has the following disadvantages:

  • Local storage does not index or otherwise optimize vectors, so every search is a linear scan with O(n) time complexity. Search and indexing performance therefore degrades as the dataset grows.
  • Local file storage makes NekoImageGallery stateful, so it loses horizontal scalability.
  • If you later migrate to the Qdrant database, the indexed metadata may be difficult to migrate directly.
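
To make the O(n) point concrete, here is a minimal sketch of the local-storage approach: vectors serialized into SQLite and scored with a full table scan on every query. This is illustrative only, not NekoImageGallery's actual schema:

```python
import json
import sqlite3

# Toy schema: each image's embedding stored as JSON text in SQLite.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE images (id TEXT PRIMARY KEY, vector TEXT)")
conn.executemany(
    "INSERT INTO images VALUES (?, ?)",
    [("a.jpg", json.dumps([1.0, 0.0])), ("b.jpg", json.dumps([0.0, 1.0]))],
)

def search(query: list[float]) -> str:
    """Return the best-matching image id via a full O(n) table scan."""
    def score(vec: list[float]) -> float:
        # Dot product; equivalent to cosine similarity for unit vectors.
        return sum(x * y for x, y in zip(query, vec))
    rows = conn.execute("SELECT id, vector FROM images").fetchall()  # scans all rows
    return max(rows, key=lambda row: score(json.loads(row[1])))[0]

print(search([0.9, 0.1]))  # a.jpg
```

A vector database such as Qdrant avoids this full scan by maintaining an approximate-nearest-neighbour index, which is why it is the recommended backend for larger datasets.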

Deploy NekoImageGallery

[!NOTE] This tutorial applies to NekoImageGallery v1.4.0 and later, in which we switched to uv as the package manager. If you are using an earlier version, please refer to the README in the corresponding version tag.

  1. Clone the project directory to your own PC or server, then check out a specific version tag (such as v1.4.0).
  2. Install the required dependencies:
    uv sync --no-dev --extra cpu    # For CPU-only deployment
    uv sync --no-dev --extra cu124  # For CUDA 12.4 deployment
    uv sync --no-dev --extra cu118  # For CUDA 11.8 deployment
    

[!NOTE]

  • You must specify the --extra option to install the correct dependencies; without it, PyTorch and its related dependencies will not be installed.
  • If you want CUDA-accelerated inference, be sure to select a CUDA-enabled extra variant in this step (we recommend cu124 unless your platform does not support CUDA 12+). After installation, you can run torch.cuda.is_available() to confirm that CUDA is available.
  • For development or testing, you can sync without the --no-dev switch to install the dependencies needed for development, testing, and code checking.
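
The CUDA check mentioned in the note can be wrapped in a small helper that degrades gracefully when PyTorch is not installed. This is a hypothetical convenience function, not part of NekoImageGallery:

```python
def cuda_status() -> str:
    """Report whether PyTorch sees a usable CUDA device."""
    try:
        import torch  # installed by `uv sync --extra cu124` / `--extra cu118` / `--extra cpu`
    except ImportError:
        return "PyTorch not installed; re-run uv sync with an --extra variant"
    return "CUDA available" if torch.cuda.is_available() else "CPU only"

print(cuda_status())
```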
  3. Modify the configuration files in the config directory as needed. You can edit default.env directly, but it is recommended to create a file named local.env that overrides the values in default.env.
  4. (Optional) Enable the built-in frontend: NekoImageGallery v1.5.0+ ships a built-in frontend application based on NekoImageGallery.App. To enable it, set APP_WITH_FRONTEND=True in your configuration file.

    [!WARNING] After enabling the built-in frontend, all APIs are automatically mounted under the /api sub-path; for example, the original /docs becomes /api/docs. This may affect an existing deployment, so proceed with caution.

  5. Run the application:
    uv run main.py
    
    You can specify the IP address to bind with --host (default 0.0.0.0) and the port with --port (default 8000). You can view all available commands and options with uv run main.py --help.
  6. (Optional) Deploy the frontend application: if you do not want to use the built-in frontend, or want to deploy the frontend independently, refer to the deployment documentation of NekoImageGallery.App.
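
As an example of the local.env override mechanism, a minimal file might contain just the frontend switch. Only APP_WITH_FRONTEND is documented above; consult default.env for the other available keys:

```
# config/local.env — values here override default.env
APP_WITH_FRONTEND=True
```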

🐋 Docker Deployment

About Docker Images

NekoImageGallery's Docker images are built and published on Docker Hub in several variants:

| Tags | Description |
|------|-------------|
| `edgeneko/neko-image-gallery:<version>`<br>`edgeneko/neko-image-gallery:<version>-cuda`<br>`edgeneko/neko-image-gallery:<version>-cuda12.4` | Supports GPU inference with CUDA 12.4 |
| `edgeneko/neko-image-gallery:<version>-cuda11.8` | Supports GPU inference with CUDA 11.8 |
| `edgeneko/neko-image-gallery:<version>-cpu` | Supports CPU inference |
| `edgeneko/neko-image-gallery:<version>-cpu-arm` | (Alpha) Supports CPU inference on ARM64 (aarch64) devices |

Where <version> is the version number or version alias of NekoImageGallery, as follows:

| Version | Description |
|---------|-------------|
| latest | The latest stable version of NekoImageGallery |
| v*.*.* / v*.* | A specific version number (corresponding to Git tags) |
| edge | The latest development version |
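
As a sketch, a Compose service for the CPU image could look like the following. The latest-cpu tag assumes the `<version>-cpu` scheme from the table above with latest as the version alias, and port 8000 is the app's default from the local-deployment section; volumes and other settings are omitted:

```yaml
# docker-compose.yml — CPU-only sketch, not a complete production setup
services:
  neko-image-gallery:
    image: edgeneko/neko-image-gallery:latest-cpu
    ports:
      - "8000:8000"   # app listens on 8000 by default
```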
