CoexistAI

CoexistAI is a modular, developer-friendly research assistant framework. It enables you to build, search, summarize, and automate research workflows using LLMs, web search, Reddit, YouTube, git and mapping tools—all with simple API calls or Python functions.

<p align="center"> <img src="artifacts/logo.jpeg" alt="CoexistAI Logo" width="200"/> </p>

🎙️ New Features & Updates

  • 🔥 Docker Installation available (Thanks for all the feedback, hope this makes installations easy). For a containerized setup with Docker, follow the instructions in README.docker.md.
  • Text → Podcast: Instantly turn written content into engaging podcast episodes—ideal for on-the-go listening or repurposing articles/notes/blogs. Example: Converted this article to a podcast. Listen here
  • Text → Speech: Convert text to high-quality audio using advanced TTS. Check Notebook for examples.
  • Flexible Integration: Generate audio files via FastAPI or MCP—integrate with agents or use standalone.
  • Direct Location Search: Search for any place, not just routes.
  • Advanced Reddit Search: Custom phrases with BM25 ranking for sharper discovery.
  • YouTube Power-Up: Search/summarize videos or URLs with custom prompts.
  • File/Folder Exploration: Explore local folders/files with vision support for images (.png, .jpg, etc.).
  • Sharper Web Search: More focused, actionable results.
  • MCP Support Everywhere: Full integration with LM Studio and other MCP hosts. See Guide
  • GitHub & Local Repo Explorer: Ask questions about codebases (GitHub or local).

🚀 Features

  • Web Explorer: Query the web, summarize results, and extract context using LLMs.
  • Reddit Explorer: Fetch and summarize Reddit content via search phrases or subreddit-focused queries.
  • YouTube Transcript Explorer: Search YouTube with search phrases and summarize or QA any video.
  • Map Explorer: Generate maps and explore routes and locations, with points of interest (hotels, cafes, etc.) near given locations.
  • GitHub Explorer: Explore, summarize, explain, or QA any GitHub or local git codebase.
  • Pluggable LLMs and Embedders: Use any LLM (Google Gemini, OpenAI, Ollama, ...) with any embedder.
  • Async & Parallel: Fast, scalable, and robust asynchronous execution.
  • Notebook & API Ready: Use as a Python library or via a FastAPI server.
  • MCP Ready: Spins up the MCP server on the fly along with the FastAPI server.

🛠️ Installation

Prerequisite: Make sure Docker is installed and the Docker daemon is running.

Method 1: Docker (Recommended) New 🔥

For a containerized setup with Docker, follow the instructions in README.docker.md. This method uses Method A (helper script) to automate the process and provides an Admin UI for easy configuration.

Method 2: Local Setup

  1. Clone the repository:

    git clone https://github.com/SPThole/CoexistAI.git coexistai
    cd coexistai
    
  2. Configure your model and embedding settings:

    • [NEW] Edit config/model_config.json to set your preferred LLM and embedding model.
    • Edit the same file to set your preferred SearxNG host and port (if needed).
    • Add your LLM and embedder API keys (for the google mode, both are the same).
    • Example (for full local mode):
{
  "llm_model_name": "jan-nano",
  "llm_type": "local",  // based on baseurl dict given below
  "embed_mode": "infinity_emb",
  "embedding_model_name": "nomic-ai/nomic-embed-text-v1",
  "llm_kwargs": {
    "temperature": 0.1,
    "max_tokens": null,
    "timeout": null,
    "max_retries": 2
  },
  "embed_kwargs": {},
  "llm_api_key": "dummy",
  "HOST_APP": "localhost",
  "PORT_NUM_APP": 8000,
  "HOST_SEARXNG": "localhost",
  "PORT_NUM_SEARXNG": 8080,
  "openai_compatible": {
    "google": "https://generativelanguage.googleapis.com/v1beta/openai/",
    "local": "http://localhost:1234/v1",
    "groq": "https://api.groq.com/openai/v1",
    "openai": "https://api.openai.com/v1",
    "others": "https://openrouter.ai/api/v1"
  }
}
  • See the file for all available options and defaults.
  • If you are using the others llm type, check the openai_compatible URL dict for the others key; you can generally find the right value by searching for "YOUR provider name OpenAI-compatible API base URL".
  3. Run the setup script:

    • For macOS or Linux with zsh:
      zsh quick_setup.sh
      
    • For Linux with bash:
      bash quick_setup.sh
      

    The script will:

    • Pull the SearxNG Docker image
    • Create and activate a Python virtual environment
    • USER ACTION NEEDED: Set your GOOGLE_API_KEY (edit the script to use your real key). Obtain an API key (currently Gemini, OpenAI, and Ollama are supported) from your preferred LLM provider. (Only needed when the google mode is set; otherwise set the key in config/model_config.json.)
    • Start the SearxNG Docker container
    • Install Python dependencies
    • Start the FastAPI server
  4. That’s it!
    The FastAPI and MCP servers will start automatically and you’re ready to go.

Note:

  • Make sure Docker, Python 3, and pip are installed on your system.
  • Edit quick_setup.sh to set your real GOOGLE_API_KEY before running (needed if you are using Google models).
  • Windows users can use WSL or Git Bash to run the script, or follow manual setup steps.

Get Your API Key (optional; needed if you want to use Gemini LLMs or Google embedders)

Obtain your API key (currently Gemini, OpenAI, and Ollama are supported) from your preferred LLM provider. Once you have the key, update the app.py file or your environment variables as follows:

import os
os.environ['GOOGLE_API_KEY'] = "YOUR_API_KEY"

Alternatively, you can set the API key in your shell before starting the server:

export YOUR_LLM_API_KEY=your-api-key-here

Note: For optimal quality and speed, use Google models: embedding-001 embeddings with Gemini Flash models. Google provides free API keys.

Update the place (default: India) in utils/config.py for personalized results.
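As a sketch of how the llm_type field in config/model_config.json selects a base URL from the openai_compatible dict shown above (the resolve_base_url helper is hypothetical, not part of the repo):

```python
import json

# Subset of config/model_config.json shown above.
config = {
    "llm_type": "local",
    "openai_compatible": {
        "google": "https://generativelanguage.googleapis.com/v1beta/openai/",
        "local": "http://localhost:1234/v1",
        "groq": "https://api.groq.com/openai/v1",
        "openai": "https://api.openai.com/v1",
        "others": "https://openrouter.ai/api/v1",
    },
}

def resolve_base_url(cfg: dict) -> str:
    """Pick the OpenAI-compatible base URL that matches llm_type."""
    return cfg["openai_compatible"][cfg["llm_type"]]

print(resolve_base_url(config))  # http://localhost:1234/v1
```

Switching providers is then just a matter of changing llm_type (and the API key); the request format stays OpenAI-compatible.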

🔧 How to use the FastAPI tools

Note: remove the comments after // before pasting request bodies into Swagger UI (http://127.0.0.1:8000/docs, if you haven't changed the host and port).

1. Web Search

Search the web, summarize, and get actionable answers—automatically.

Endpoint:
POST /web-search

Request Example:

{
  "query": "Top news of today worldwide", // Query you want to ask; if you provide a URL and ask to summarise, it will summarize the full page.
  "rerank": true, // Set to true for better result ranking.
  "num_results": 2, // Number of top results per subquery to explore (higher values = more tokens, slower/more costly).
  "local_mode": false, // Set to true to explore local documents (currently, only PDF supported).
  "split": true, // Set to false if you want full pages as input to LLMs; false may cause slower/more costly response.
  "document_paths": [] // If local_mode is true, add a list of document paths, e.g., ["documents/1706.03762v7.pdf"]
}

Or, to QA/summarize local documents:

{
  "query": "Summarise this research paper",
  "rerank": true,
  "num_results": 3,
  "local_mode": true,
  "split": true,
  "document_paths": ["documents/1706.03762v7.pdf"] // Must be a list.
}
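The two request shapes above differ only in local_mode and document_paths, so a client can derive one from the other. Below is a minimal stdlib-only sketch (the helper names are hypothetical, and the actual POST is guarded since it assumes a server running on 127.0.0.1:8000):

```python
import json
import urllib.request

def web_search_payload(query: str, document_paths=None) -> dict:
    """Build a /web-search body; local_mode switches on when document paths are given."""
    paths = document_paths or []
    return {
        "query": query,
        "rerank": True,
        "num_results": 2,
        "local_mode": bool(paths),  # true only for local-document QA
        "split": True,
        "document_paths": paths,    # must be a list
    }

def post(endpoint: str, payload: dict) -> dict:
    """POST a JSON body to the CoexistAI FastAPI server and decode the response."""
    req = urllib.request.Request(
        f"http://127.0.0.1:8000{endpoint}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Requires a running CoexistAI server.
    print(post("/web-search", web_search_payload("Top news of today worldwide")))
```

The same post helper works for the other endpoints below; only the payload changes.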

2. Summarize Any Web Page

Summarize any article or research paper by URL.

Endpoint:
POST /web-summarize

Request Example:


{
  "query": "Write a short blog on the model", // Instruction or question for the fetched page content.
  "url": "https://huggingface.co/unsloth/Qwen3-8B-GGUF", // Webpage to fetch content from.
  "local_mode": false // Set to true if summarizing a local document.
}


3. YouTube Search

Search YouTube (supports prompts and batch).

Endpoint:
POST /youtube-search

Request Example:


{
  "query": "switzerland itinerary", // Query to search on YouTube; if a URL is provided, it fetches content from that URL. url should be in format: https://www.youtube.com/watch?v=videoID
  "prompt": "I want to plan my Switzerland trip", // Instruction or question for using the fetched content.
  "n": 2 // Number of top search results to summarize (only works if query is not a URL).
}
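Since n only takes effect when the query is a search phrase rather than a video URL, a client may want to distinguish the two cases up front. A small sketch, assuming the URL format stated in the comment above (helper names are hypothetical):

```python
YOUTUBE_WATCH = "https://www.youtube.com/watch?v="

def youtube_payload(query: str, prompt: str, n: int = 2) -> dict:
    """Build a /youtube-search body. Per the docs, n only applies
    when query is a search phrase, not a video URL."""
    return {"query": query, "prompt": prompt, "n": n}

def is_video_url(query: str) -> bool:
    # URLs must be in the https://www.youtube.com/watch?v=videoID format.
    return query.startswith(YOUTUBE_WATCH)
```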

4. Reddit Deep Dive

Custom Reddit search, sort, filter, and get top comments.

Endpoint:
POST /reddit-search

Request Example:


{
  "subreddit": "", // Subreddit to fetch content from (use if url_type is not 'search').
  "url_type": "search", // 'search' for phrase search; 'url' if you already have a specific URL; otherwise use 'hot', 'top', 'best', etc.
  "n": 3, // Number of posts to fetch.
  "k": 1, // Number of top comments per post.
  "custom_url": "", // Use if you already have a specific Reddit URL.
  "time_filter": "all", // Time range: 'all', 'today', 'week', 'month', 'year'.
  "search_query": "gemma 3n reviews", // Search phrase (useful if url_type is 'search').
  "sort_type": "relevance" // 'top', 'hot', 'new', 'relevance' — controls how results are sorted.
}
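The interplay between url_type, search_query, and subreddit is easiest to see by building bodies for the two common modes: a phrase search across Reddit versus browsing a subreddit feed. A sketch with hypothetical helper names:

```python
def reddit_phrase_search(phrase: str, n: int = 3, k: int = 1) -> dict:
    """Phrase search across Reddit: url_type='search' plus a search_query."""
    return {
        "subreddit": "",
        "url_type": "search",
        "n": n,
        "k": k,
        "custom_url": "",
        "time_filter": "all",
        "search_query": phrase,
        "sort_type": "relevance",
    }

def reddit_subreddit_feed(subreddit: str, feed: str = "hot", n: int = 3, k: int = 1) -> dict:
    """Browse a subreddit feed: url_type is 'hot', 'top', 'best', etc., and search_query stays empty."""
    return {
        "subreddit": subreddit,
        "url_type": feed,
        "n": n,
        "k": k,
        "custom_url": "",
        "time_filter": "week",
        "search_query": "",
        "sort_type": "top",
    }
```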


5. Map & Location/Route Search

Find places, routes, and nearby points of interest.

Endpoint:
POST /map-search

Request Example:


{
  "start_location": "MG Road, Bangalore", // Starting point.
  "end_location": "Lalbagh, Bangalore", // Destination.
  "pois_radius": 500, // Search radius in meters for amenities.
  "amenities": "restaurant|cafe|bar|hotel", // Amenities to search near the start or end location.
  "limit": 3, // Maximum number of results if the address is not found exactly.
  "task": "route_and_pois" // Use 'location_only' for a single-place search without a route.
}
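The same body shape serves both route planning and the direct location search mentioned in the features list; only the task value (and the relevance of end_location) changes. A sketch with hypothetical helper names:

```python
def map_route_query(start: str, end: str, radius_m: int = 500) -> dict:
    """Route between two places plus nearby points of interest."""
    return {
        "start_location": start,
        "end_location": end,
        "pois_radius": radius_m,
        "amenities": "restaurant|cafe|bar|hotel",
        "limit": 3,
        "task": "route_and_pois",
    }

def map_location_query(place: str, radius_m: int = 500) -> dict:
    """Single-place lookup: task='location_only', end_location left empty."""
    body = map_route_query(place, end="", radius_m=radius_m)
    body["task"] = "location_only"
    return body
```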
