AgenticSeek

Fully local Manus AI. No APIs, no $200 monthly bills. Enjoy an autonomous agent that thinks, browses the web, and writes code for the sole cost of electricity. 🔔 Official updates only via Twitter @Martin993886460 (beware of fake accounts).

Install / Use

/learn @Fosowl/AgenticSeek

AgenticSeek: Private, Local Manus Alternative.

<p align="center"> <img align="center" src="./media/agentic_seek_logo.png" width="300" height="300" alt="Agentic Seek Logo"> </p>

English | 中文 | 繁體中文 | Français | 日本語 | Português (Brasil) | Español | Türkçe

A 100% local alternative to Manus AI, this voice-enabled AI assistant autonomously browses the web, writes code, and plans tasks while keeping all data on your device. Tailored for local reasoning models, it runs entirely on your hardware, ensuring complete privacy and zero cloud dependency.

Visit AgenticSeek License Discord Twitter GitHub stars

Why AgenticSeek?

  • 🔒 Fully Local & Private - Everything runs on your machine — no cloud, no data sharing. Your files, conversations, and searches stay private.

  • 🌐 Smart Web Browsing - AgenticSeek can browse the internet by itself — search, read, extract info, fill web forms — all hands-free.

  • 💻 Autonomous Coding Assistant - Need code? It can write, debug, and run programs in Python, C, Go, Java, and more — all without supervision.

  • 🧠 Smart Agent Selection - You ask, it figures out the best agent for the job automatically. Like having a team of experts ready to help.

  • 📋 Plans & Executes Complex Tasks - From trip planning to complex projects — it can split big tasks into steps and get things done using multiple AI agents.

  • 🎙️ Voice-Enabled - Clean, fast, futuristic voice synthesis and speech-to-text, letting you talk to it like it's your personal AI from a sci-fi movie. (In progress)

Demo

Can you search for the agenticSeek project, learn what skills are required, then open the CV_candidates.zip and then tell me which match best the project

https://github.com/user-attachments/assets/b8ca60e9-7b3b-4533-840e-08f9ac426316

Disclaimer: This demo, including all the files that appear (e.g., CV_candidates.zip), is entirely fictional. We are not a corporation; we seek open-source contributors, not candidates.

🛠⚠️️ Active Work in Progress

🙏 This project started as a side project with zero roadmap and zero funding. It's grown far beyond what I expected, ending up on GitHub Trending. Contributions, feedback, and patience are deeply appreciated.

Prerequisites

Before you begin, ensure you have the following software installed:

  • Git: For cloning the repository. Download Git
  • Python 3.10.x: We strongly recommend using Python version 3.10.x. Using other versions might lead to dependency errors. Download Python 3.10 (pick a 3.10.x version).
  • Docker Engine & Docker Compose: For running bundled services like SearxNG.
    • Install Docker Desktop (which includes Docker Compose V2): Windows | Mac | Linux
    • Alternatively, install Docker Engine and Docker Compose separately on Linux: Docker Engine | Docker Compose (ensure you install Compose V2, e.g., sudo apt-get install docker-compose-plugin).
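Since the project strongly recommends Python 3.10.x, it can help to check the interpreter before installing dependencies. This is an illustrative sketch, not part of the project; the `python_version_ok` helper is hypothetical:

```python
import sys

# Sketch: fail fast when the interpreter is not the recommended 3.10.x
# series, since other versions may hit dependency errors.
def python_version_ok(version_info=None):
    """Return True when the interpreter is a 3.10.x release."""
    vi = tuple(version_info or sys.version_info)
    return vi[:2] == (3, 10)

if not python_version_ok():
    print(f"Warning: Python {sys.version.split()[0]} detected; 3.10.x is recommended.")
```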

1. Clone the repository and setup

git clone https://github.com/Fosowl/agenticSeek.git
cd agenticSeek
mv .env.example .env

2. Change the .env file content

SEARXNG_BASE_URL="http://searxng:8080" # http://127.0.0.1:8080 if running on host
REDIS_BASE_URL="redis://redis:6379/0"
WORK_DIR="/Users/mlg/Documents/workspace_for_ai"
OLLAMA_PORT="11434"
LM_STUDIO_PORT="1234"
CUSTOM_ADDITIONAL_LLM_PORT="11435"
OPENAI_API_KEY='optional'
DEEPSEEK_API_KEY='optional'
OPENROUTER_API_KEY='optional'
TOGETHER_API_KEY='optional'
GOOGLE_API_KEY='optional'
ANTHROPIC_API_KEY='optional'

Update the .env file with your own values as needed:

  • SEARXNG_BASE_URL: Leave unchanged unless running on the host in CLI mode.
  • REDIS_BASE_URL: Leave unchanged
  • WORK_DIR: Path to your working directory on your local machine. AgenticSeek will be able to read and interact with these files.
  • OLLAMA_PORT: Port number for the Ollama service.
  • LM_STUDIO_PORT: Port number for the LM Studio service.
  • CUSTOM_ADDITIONAL_LLM_PORT: Port for any additional custom LLM service.

API keys are entirely optional for users who choose to run LLMs locally, which is the primary purpose of this project. Leave them empty if you have sufficient hardware.
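The optional-key convention can be sketched in Python. The helper name and the treatment of the literal `optional` placeholder from the `.env` example are assumptions for illustration; the project's actual `.env` loading may differ:

```python
import os

# Illustrative sketch: API keys are optional when running models locally.
# Treat an unset variable or the "optional" placeholder as "no key".
def resolve_api_key(provider_env_var, environ=os.environ):
    """Return the key if set to a real value, else None (local-only mode)."""
    value = environ.get(provider_env_var, "").strip()
    if value in ("", "optional"):
        return None
    return value
```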

3. Start Docker

Make sure Docker is installed and running on your system. You can start Docker using the following commands:

  • On Linux/macOS:
    Open a terminal and run:

    sudo systemctl start docker
    

    Or launch Docker Desktop from your applications menu if installed.

  • On Windows:
    Start Docker Desktop from the Start menu.

You can verify Docker is running by executing:

docker info

If you see information about your Docker installation, it is running correctly.
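The `docker info` check can also be automated. A minimal sketch (the helper name and the injectable `runner` parameter are illustrative, not project code):

```python
import subprocess

# Sketch: `docker info` exits 0 when the daemon is reachable.
def docker_running(runner=subprocess.run):
    """Return True if the Docker daemon responds to `docker info`."""
    try:
        result = runner(["docker", "info"], capture_output=True, timeout=15)
    except (FileNotFoundError, subprocess.TimeoutExpired):
        # Docker not installed, or the daemon did not answer in time.
        return False
    return result.returncode == 0
```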

See the table of Local Providers below for a summary.

Next step: Run AgenticSeek locally

See the Troubleshooting section if you are having issues. If your hardware can't run LLMs locally, see Setup to run with an API. For detailed config.ini explanations, see Config Section.


Setup for running LLM locally on your machine

Hardware Requirements:

To run LLMs locally, you'll need sufficient hardware. At a minimum, a GPU capable of running Magistral, Qwen or Deepseek 14B is required. See the FAQ for detailed model/performance recommendations.

Setup your local provider

Start your local provider (for example with ollama):

Unless you wish to run AgenticSeek on the host (CLI mode), export or set the provider listen address:

export OLLAMA_HOST=0.0.0.0:11434

Then, start your provider:

ollama serve

See below for a list of supported local providers.
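The binding rule above (0.0.0.0 so Docker containers can reach the provider, 127.0.0.1 for host CLI mode) can be sketched as follows; `ollama_host` is a hypothetical helper, with the port name following the `.env` example:

```python
import os

# Sketch: inside Docker, the provider must listen on 0.0.0.0 so containers
# can reach it; on the host (CLI mode) the loopback binding is enough.
def ollama_host(run_in_docker=True, port=None):
    """Build the OLLAMA_HOST value for the chosen deployment mode."""
    port = port or os.environ.get("OLLAMA_PORT", "11434")
    bind = "0.0.0.0" if run_in_docker else "127.0.0.1"
    return f"{bind}:{port}"
```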

Update the config.ini

Change the config.ini file to set provider_name to a supported provider and provider_model to an LLM supported by your provider. We recommend a reasoning model such as Magistral or Deepseek.

See the FAQ at the end of the README for required hardware.

[MAIN]
is_local = True # Whether you are running locally or with a remote provider.
provider_name = ollama # or lm-studio, openai, etc.
provider_model = deepseek-r1:14b # choose a model that fits your hardware
provider_server_address = 127.0.0.1:11434
agent_name = Jarvis # name of your AI
recover_last_session = True # whether to recover the previous session
save_session = True # whether to remember the current session
speak = False # text to speech
listen = False # speech to text, only for CLI, experimental
jarvis_personality = False # whether to use a more "Jarvis"-like personality (experimental)
languages = en zh # list of languages; text to speech defaults to the first language in the list
[BROWSER]
headless_browser = True # leave unchanged unless using CLI on host.
stealth_mode = True # use undetected selenium to reduce browser detection
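Because the real config.ini must not contain comments (see the warning below), a commented example like the one above has to be cleaned before use. A rough sketch with a hypothetical helper; quoting edge cases are ignored:

```python
# Sketch: strip '#' comments from an INI snippet before writing config.ini,
# since the project's config.ini format does not accept comments.
def strip_inline_comments(text):
    """Drop inline and full-line '#' comments, keeping keys and sections."""
    lines = []
    for line in text.splitlines():
        code = line.split("#", 1)[0].rstrip()
        if code or not line.strip():
            # Keep real content and blank lines; drop pure comment lines.
            lines.append(code)
    return "\n".join(lines) + "\n"
```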

Warning:

  • The config.ini file format does not support comments. Do not copy and paste the example configuration directly, as comments will cause errors. Instead, manually modify the config.ini file with your desired settings, excluding any comments.

  • Do NOT set provider_name to openai if using LM-studio for running LLMs. Set it to lm-studio.

  • Some providers (e.g., lm-studio) require http:// in front of the IP. For example: http://127.0.0.1:1234
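The http:// prefix rule can be handled defensively when probing a provider address. A sketch, assuming the provider answers plain GET requests on its root path (Ollama does; others may not); both helpers are illustrative, not project code:

```python
from urllib.error import URLError
from urllib.request import urlopen

# Sketch: normalize the address from config.ini, since some providers
# (e.g. lm-studio) need an explicit http:// prefix.
def provider_url(server_address, scheme="http"):
    """Prefix the configured address with a scheme if it lacks one."""
    if server_address.startswith(("http://", "https://")):
        return server_address
    return f"{scheme}://{server_address}"

def provider_reachable(server_address, timeout=5):
    """Probe the provider's root path; True if it answers at all."""
    try:
        with urlopen(provider_url(server_address), timeout=timeout):
            return True
    except (URLError, OSError):
        return False
```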

List of local providers

| Provider  | Local? | Description                                                      |
|-----------|--------|------------------------------------------------------------------|
| ollama    | Yes    | Run LLMs locally with ease using ollama as an LLM provider       |
| lm-studio | Yes    | Run LLMs locally with LM Studio (set provider_name to lm-studio) |
| openai    | Yes    | Use an OpenAI-compatible API (e.g., a llama.cpp server)          |

Next step: Start services and run AgenticSeek

See the Troubleshooting section if you are having issues. If your hardware can't run LLMs locally, see Setup to run with an API. For detailed config.ini explanations, see Config Section.

Setup to run with an API

This setup uses external, cloud-based LLM providers. You'll need an API key from your chosen service.

1. Choose an API Provider and Get an API Key:

Refer to the List of API Providers below. Visit their websites to sign up and obtain an API key.

2. Set Your API Key as an Environment Variable:

  • Linux/macOS: Open your terminal and use the export command. It's best to add this to your shell's profile file (e.g., ~/.bashrc, ~/.zshrc) for persistence.
    export PROVIDER_API_KEY="your_api_key_here" 
    # Replace PROVIDER_API_KEY with the specific variable name, e.g., OPENAI_API_KEY, GOOGLE_API_KEY
    
    Example for TogetherAI:
    export TOGETHER_API_KEY="xxxxxxxxxxxxxxx"
    