Vane 🔍
Vane is a privacy-focused AI answering engine that runs entirely on your own hardware. It combines knowledge from the vast internet with support for local LLMs (Ollama) and cloud providers (OpenAI, Claude, Groq), delivering accurate answers with cited sources while keeping your searches completely private.

Want to know more about its architecture and how it works? You can read it here.
✨ Features
🤖 Support for all major AI providers - Use local LLMs through Ollama or connect to OpenAI, Anthropic Claude, Google Gemini, Groq, and more. Mix and match models based on your needs.
⚡ Smart search modes - Choose Speed Mode when you need quick answers, Balanced Mode for everyday searches, or Quality Mode for deep research.
🧭 Pick your sources - Search the web, discussions, or academic papers. More sources and integrations are in progress.
🧩 Widgets - Helpful UI cards that show up when relevant, like weather, calculations, stock prices, and other quick lookups.
🔍 Web search powered by SearxNG - Access multiple search engines while keeping your identity private. Support for Tavily and Exa coming soon for even better results.
📷 Image and video search - Find visual content alongside text results. Search isn't limited to just articles anymore.
📄 File uploads - Upload documents and ask questions about them. PDFs, text files, images - Vane understands them all.
🌐 Search specific domains - Limit your search to specific websites when you know where to look. Perfect for technical documentation or research papers.
💡 Smart suggestions - Get intelligent search suggestions as you type, helping you formulate better queries.
📚 Discover - Browse interesting articles and trending content throughout the day. Stay informed without even searching.
🕒 Search history - Every search is saved locally so you can revisit your discoveries anytime. Your research is never lost.
✨ More coming soon - We're actively developing new features based on community feedback. Join our Discord to help shape Vane's future!
Sponsors
Vane's development is powered by the generous support of our sponsors. Their contributions help keep this project free, open-source, and accessible to everyone.
<div align="center"> <a href="https://www.warp.dev/perplexica"> <img alt="Warp Terminal" src=".assets/sponsers/warp.png" width="100%"> </a>✨ Try Warp - The AI-Powered Terminal →
Warp is revolutionizing development workflows with AI-powered features, modern UX, and blazing-fast performance. Used by developers at top companies worldwide.
</div>

We'd also like to thank the following partners for their generous support:

<table> <tr> <td width="100" align="center"> <a href="https://dashboard.exa.ai" target="_blank"> <img src=".assets/sponsers/exa.png" alt="Exa" width="80" height="80" style="border-radius: .75rem;" /> </a> </td> <td> <a href="https://dashboard.exa.ai">Exa</a> • The Perfect Web Search API for LLMs - web search, crawling, deep research, and answer APIs </td> </tr> </table>

Installation
There are two main ways to install Vane: with Docker or without Docker. Using Docker is highly recommended.
Getting Started with Docker (Recommended)
Vane can be easily run using Docker. Simply run the following command:
docker run -d -p 3000:3000 -v vane-data:/home/vane/data --name vane itzcrazykns1337/vane:latest
This will pull and start the Vane container with the bundled SearxNG search engine. Once running, open your browser and navigate to http://localhost:3000. You can then configure your settings (API keys, models, etc.) directly in the setup screen.
Note: The image includes both Vane and SearxNG, so no additional setup is required. The -v flag creates a persistent volume for your data and uploaded files.
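If the page doesn't load, a few standard Docker commands can confirm the container came up correctly. This is a quick sanity-check sketch; the container name matches the run command above.

```shell
# Confirm the container is running and the port mapping is in place
docker ps --filter name=vane

# Tail the logs if something looks wrong
docker logs -f vane

# Check that the web UI is answering locally
curl -sSf -o /dev/null http://localhost:3000 && echo "Vane is up"
```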
Using Vane with Your Own SearxNG Instance
If you already have SearxNG running, you can use the slim version of Vane:
docker run -d -p 3000:3000 -e SEARXNG_API_URL=http://your-searxng-url:8080 -v vane-data:/home/vane/data --name vane itzcrazykns1337/vane:slim-latest
Important: Make sure your SearxNG instance has:
- JSON format enabled in the settings
- Wolfram Alpha search engine enabled
Replace http://your-searxng-url:8080 with your actual SearxNG URL. Then configure your AI provider settings in the setup screen at http://localhost:3000.
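You can sanity-check the JSON requirement before pointing Vane at your instance by querying SearxNG directly (substitute your actual URL). If the JSON format is not enabled in SearxNG's settings, this request typically returns a 403 error instead of a JSON body:

```shell
# Should return a JSON document with a "results" array when the
# "json" format is enabled in SearxNG's settings.yml
curl -s "http://your-searxng-url:8080/search?q=test&format=json" | head -c 200
```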
Advanced Setup (Building from Source)
If you prefer to build from source or need more control:
1. Ensure Docker is installed and running on your system.

2. Clone the Vane repository:

   git clone https://github.com/ItzCrazyKns/Vane.git

3. Navigate to the directory containing the project files.

4. Build and run using Docker:

   docker build -t vane .
   docker run -d -p 3000:3000 -v vane-data:/home/vane/data --name vane vane

5. Access Vane at http://localhost:3000 and configure your settings in the setup screen.
Note: After the image is built, you can start Vane directly from Docker without having to open a terminal.
Non-Docker Installation
1. Install SearXNG and enable the JSON format in the SearXNG settings. Make sure the Wolfram Alpha search engine is also enabled.

2. Clone the repository:

   git clone https://github.com/ItzCrazyKns/Vane.git
   cd Vane

3. Install dependencies:

   npm i

4. Build the application:

   npm run build

5. Start the application:

   npm run start

6. Open your browser and navigate to http://localhost:3000 to complete the setup and configure your settings (API keys, models, SearxNG URL, etc.) in the setup screen.
Note: Using Docker is recommended as it simplifies the setup process, especially for managing environment variables and dependencies.
See the installation documentation for more information, such as how to update.
Troubleshooting
Local OpenAI-API-Compliant Servers
If Vane tells you that you haven't configured any chat model providers, ensure that:
- Your server is running on 0.0.0.0 (not 127.0.0.1) and on the same port you put in the API URL.
- You have specified the correct model name loaded by your local LLM server.
- You have specified the correct API key, or if one is not defined, you have put something in the API key field and not left it empty.
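A quick way to check the first two points is to call the server's /v1/models endpoint yourself; most OpenAI-API-compliant servers implement it. The port and key below are placeholders for your own values:

```shell
# List the models the server actually exposes; the "id" values returned
# are what belongs in Vane's model-name field
curl -s http://0.0.0.0:8080/v1/models \
  -H "Authorization: Bearer sk-placeholder"
```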
Ollama Connection Errors
If you're encountering an Ollama connection error, it is likely due to the backend being unable to connect to Ollama's API. To fix this issue you can:
1. Check your Ollama API URL: Ensure that the API URL is correctly set in the settings menu.

2. Update the API URL based on your OS:

   - Windows: Use http://host.docker.internal:11434
   - Mac: Use http://host.docker.internal:11434
   - Linux: Use http://<private_ip_of_host>:11434

   Adjust the port number if you're using a different one.

3. Linux users - expose Ollama to the network:

   - Inside /etc/systemd/system/ollama.service, add Environment="OLLAMA_HOST=0.0.0.0:11434". (Change the port number if you are using a different one.) Then reload the systemd manager configuration with systemctl daemon-reload, and restart Ollama with systemctl restart ollama. For more information, see the Ollama docs.
   - Ensure that the port (default is 11434) is not blocked by your firewall.
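You can verify reachability from the machine running Vane before touching any settings. These commands assume Ollama's default port; if Ollama is up, its root endpoint replies with a short plain-text message and /api/tags lists your pulled models:

```shell
# From the host: Ollama answers "Ollama is running" on its root endpoint
curl -s http://localhost:11434

# From inside the Vane container (if curl is available in the image):
# confirm the URL you configured in the settings actually resolves
docker exec vane curl -s http://host.docker.internal:11434/api/tags
```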
Lemonade Connection Errors
If you're encountering a Lemonade connection error, it is likely due to the backend being unable to connect to Lemonade's API. To fix this issue you can:
1. Check your Lemonade API URL: Ensure that the API URL is correctly set in the settings menu.

2. Update the API URL based on your OS:

   - Windows: Use http://host.docker.internal:8000
   - Mac: Use http://host.docker.internal:8000
   - Linux: Use http://<private_ip_of_host>:8000

   Adjust the port number if you're using a different one.

3. Ensure the Lemonade server is running:

   - Make sure your Lemonade server is running and accessible on the configured port (default is 8000).
   - Verify that Lemonade is configured to accept connections from all interfaces (0.0.0.0), not just localhost (127.0.0.1).
   - Ensure that the port (default is 8000) is not blocked by your firewall.
Using as a Search Engine
If you wish to use Vane as an alternative to traditional search engines like Google or Bing, or if you want to add a shortcut for quick access from your browser's search bar, follow these steps:
- Open your browser's settings.
- Navigate to the 'Search Engines' section.
- Add a new site search with the following URL: `http://localho