Panoptikon
AI-based media indexing, tagging, and semantic search engine for local files
State-of-the-Art, Local, Multimodal, Multimedia Search Engine
Panoptikon indexes your local files using state-of-the-art AI and machine learning models, making difficult-to-search media such as images and videos easily findable.
Combining OCR, Whisper Speech-to-Text, CLIP image embeddings, text embeddings, full-text search, automated tagging, and automated image captioning, Panoptikon is the Swiss Army knife of local media indexing.
Panoptikon aims to be the text-generation-webui or stable-diffusion-webui of local search. It is fully customizable, allowing you to easily configure custom models for any of the supported tasks. It comes with a wealth of capable models available out of the box, and adding another one or updating to a newer fine-tune is never more than a few TOML configuration lines away.
As long as a model is supported by any of the built-in implementation classes (supporting, among others, OpenCLIP, Sentence Transformers, Faster Whisper, and Florence 2 via HF Transformers), you can simply add it to the inference server configuration by specifying the Hugging Face repo, and it will immediately be available for use.
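As a sketch of what adding such a model can look like, a TOML entry along the lines of the following would register an OpenCLIP model from a Hugging Face repo. The group and key names here are illustrative, not the exact schema; check the configuration files shipped with Panoptikon for the real keys.

```toml
# Hypothetical example: registering an OpenCLIP model by Hugging Face repo.
# Group and key names are illustrative; see the bundled inference server
# configuration for the actual schema.
[models.my-new-clip]
implementation = "open_clip"  # one of the built-in implementation classes
model_repo = "laion/CLIP-ViT-H-14-laion2B-s32B-b79K"  # Hugging Face repo id
```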
Panoptikon is designed to keep index data produced by multiple different models (or different configurations of the same model) side by side, letting you choose which one(s) to use at search time. As such, Panoptikon is an excellent tool for comparing the real-world performance of different methods of data extraction or embedding models, and allows you to leverage their combined power instead of relying on the accuracy of only one.
For example, when searching with a given tag, you can pick multiple tagging models from a list and choose whether to match an item if at least one model has set the tag(s) you're searching for, or require that all of them have.
The intended use of Panoptikon is for power users and more technically minded enthusiasts to leverage more capable and/or custom-trained open-source models to index and search their files. Unlike tools such as Hydrus, Panoptikon will never copy, move, or otherwise touch your data. You only need to add your directories to the list of allowed paths and run the indexing jobs.
Panoptikon will build an index inside its own SQLite database, referencing the original source file paths. Files are kept track of by their hash, so there's no issue with renaming or moving them around after they've been indexed. You only need to make sure to re-run the file scan job after moving or renaming files to update the index with the new paths. It's also possible to configure Panoptikon to automatically re-scan directories at regular intervals through the cron job feature.
<a href="https://panoptikon.dev/search" target="_blank"> <img alt="Panoptikon Screenshot" src="https://raw.githubusercontent.com/reasv/panoptikon/refs/heads/master/static/screenshot_1.jpg"> </a>

⚠️ Warning
Panoptikon is designed to be used as a local service and is not intended to be exposed to the internet. It does not currently have any authentication features and exposes, among other things, an API that can be abused for remote code execution on your host machine. Panoptikon binds to localhost by default, and if you intend to expose it, you should add a reverse proxy with authentication such as HTTP Basic Auth or OAuth2 in front of it.
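As an illustration, a minimal Nginx reverse proxy with HTTP Basic Auth in front of Panoptikon could look like the sketch below. It assumes Panoptikon is on its default port 6342 and that you have created an htpasswd file yourself; adapt the server name and TLS settings to your setup.

```nginx
# Minimal sketch: HTTP Basic Auth in front of a local Panoptikon instance.
# Assumes Panoptikon listens on 127.0.0.1:6342 and that
# /etc/nginx/.htpasswd was created with `htpasswd -c`.
server {
    listen 443 ssl;
    server_name panoptikon.example.com;
    # ssl_certificate / ssl_certificate_key omitted for brevity

    location / {
        auth_basic           "Panoptikon";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_pass           http://127.0.0.1:6342;
        proxy_set_header     Host $host;
    }
}
```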
Public Instance (panoptikon.dev)
The only configuration that we endorse for a public Panoptikon instance is the provided docker-compose file, which exposes two separate services running on ports 6339 and 6340, respectively. The former is meant to be exposed publicly and blocks access to all dangerous APIs, while the second one is to be used as a private admin panel and has no restrictions on usage or API access. There is no authentication, although HTTP Basic Auth can easily be added to the Nginx configuration file if needed.
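The two-service split can be pictured roughly as follows. This is only an illustrative outline (service and image names are made up); use the docker-compose file shipped in the repository for a real deployment.

```yaml
# Illustrative outline only; the repository's docker-compose.yml is the
# authoritative version.
services:
  public:   # port 6339: meant to be exposed, dangerous APIs blocked
    image: panoptikon
    ports:
      - "6339:6339"
  admin:    # port 6340: private admin panel, unrestricted API access
    image: panoptikon
    ports:
      - "127.0.0.1:6340:6340"
```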
This exact docker-compose configuration is currently running at panoptikon.dev as a public demonstration instance for users to try Panoptikon before installing it locally. Certain features, such as the ability to open files and folders in the file manager, have been disabled in the public instance for security reasons.
Panoptikon is also not designed with high concurrency in mind, and the public instance may be slow or unresponsive at times if many users are accessing it simultaneously, especially when it comes to the inference server and related semantic search features. This is because requests to the inference server's prediction endpoint are not debounced, and the instant search box will make a request for every keystroke.
The public instance is meant for demonstration purposes only, to show the capabilities of Panoptikon to interested users. If you wanted to host a public Panoptikon instance for real-world use, it would be necessary to add authentication and rate limiting to the API, optimize the inference server for high concurrency, and possibly add a caching layer.
💡 Panoptikon's search API is not tightly coupled to the inference server. It is possible to implement a caching layer or a distributed queue system to handle inference requests more efficiently. Without modifying Panoptikon's source code, you could use a different inference server implementation that scales better, then simply pass the embeddings it outputs to Panoptikon's search API.
ℹ️ The public instance currently contains a small subset of images from the latentcat/animesfw dataset.
Although large parts of the API are disabled in the public instance, you can still consult the full API documentation at panoptikon.dev/docs.
Optional Companion: Panoptikon Relay (NEW)
In scenarios where Panoptikon is running on a remote server, inside a container, or in any environment where it cannot directly access your local file system to open files or reveal them in your file manager, Panoptikon Relay comes to the rescue.
If you can access the files indexed by Panoptikon directly on your client machine (e.g., via network shares like SMB/NFS), Panoptikon Relay bridges this gap. It's a lightweight tray icon application and local HTTP server that runs on your client machine.
How it works with Panoptikon:
- You run Panoptikon Relay on your client machine.
- In Panoptikon's web UI (under "File Details" -> "File Open Relay"), you configure Panoptikon to use the Relay by providing its address (e.g., http://127.0.0.1:17600) and an API key.
- When you click "Open File" or "Show in Folder" in Panoptikon, the request is sent to Panoptikon Relay.
- The Relay authenticates the request, translates the server-side path (as Panoptikon sees it) to a local client-side path using configurable mappings, and then executes local commands to open the file or show it in your file manager.
Key Features of Panoptikon Relay:
- Tray Icon: For easy access to API key, configuration, and logs.
- Secure API: Uses a Bearer Token (API Key) for authentication.
- Path Mapping: Flexible config.toml to map server paths to client paths.
- Customizable Commands: Define your own shell commands for opening/showing files.
- Platform-Aware Defaults: Sensible default commands for Windows, macOS, and Linux.
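Putting the pieces together, a client-side Relay configuration might look roughly like this. The key names are illustrative, not the documented schema; the Panoptikon Relay repository describes the actual config.toml format.

```toml
# Hypothetical sketch of a Panoptikon Relay config.toml.
# Key names are illustrative; see the Relay repository for the real schema.
api_key = "change-me"

# Map the path prefix Panoptikon reports to where the same share is
# mounted on this client machine.
[[path_mappings]]
server_prefix = "/mnt/media"
client_prefix = "Z:\\media"

# Optional: override the platform default for opening files.
[commands]
open_file = "explorer.exe {path}"
```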
For more details, installation instructions, and configuration options, please visit the Panoptikon Relay GitHub repository.
REST API
Panoptikon exposes a REST API that can be used to interact with the search and bookmarking functionality programmatically, as well as to retrieve the indexed data, the actual files, and their associated metadata. Additionally, inferio, the inference server, exposes an API under /api/inference that can be used to run batch inference using the available models.
The API is documented in the OpenAPI format. The interactive documentation generated by FastAPI can be accessed at /docs when running Panoptikon, for example at http://127.0.0.1:6342/docs by default. Alternatively, ReDoc can be accessed at /redoc, for example at http://127.0.0.1:6342/redoc by default.
API endpoints support specifying the name of the index and user_data databases to use, regardless of what databases are specified in environment variables (see below).
This is done through the index_db and user_data_db query parameters. If not specified, the databases specified in environment variables are used by default.
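For example, a request that overrides both databases could be built as below. The endpoint path and database names are placeholders, so consult /docs on your instance for the real routes.

```shell
# Build a search request that targets specific databases instead of the
# environment defaults. Endpoint path and database names are placeholders.
BASE="http://127.0.0.1:6342"
URL="$BASE/api/search?index_db=my_index&user_data_db=my_notes"
echo "$URL"
# Then query it with e.g.: curl -s "$URL"
```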
⏩ Installation (Automated)
Run the appropriate automated installation script for your platform. If this doesn't work, you can always install manually (see below).
For macOS / Linux (CPU only or Nvidia GPU):
./install.sh
For AMD GPU on Linux (experimental):
./install-amd.sh
For Windows (Nvidia GPU):
.\install-nvidia.bat
For Windows (CPU only):
.\install-cpu.bat
Afterwards, run start.sh (Linux/macOS) or start.bat (Windows) to start the server.
❗ You may have to re-run the installation script whenever Panoptikon is updated
🛠 Installation (Manual)
This project uses UV, a Python package manager that works with pyproject.toml, for dependency management.
✅ Prerequisites
Install UV:
macOS / Linux
curl -LsSf https://astral.sh/uv/install.sh | sh
Windows (PowerShell)
powershell -ExecutionPolicy ByPass -c "irm https:/
