Abbey

Abbey is a self-hosted configurable AI interface with workspaces, document chats, YouTube chats, and more. Find our hosted version at https://abbey.us.ai.

Install / Use

/learn @goodreasonai/Abbey
About this skill

Supported Platforms

Claude Code
Claude Desktop

README

Abbey 📚

Abbey is an AI interface with notebooks, basic chat, documents, YouTube videos, and more. It orchestrates a variety of AI models in a private self-hosted package. You can run Abbey as a server for multiple users using your own authentication provider, or you can run it for yourself on your own machine. Abbey is highly configurable, using your chosen LLMs, TTS models, OCR models, and search engines. You can find a hosted version of Abbey here, which is used by many students and professionals.

Having any issues? Please, please post an issue or reach out to the creator directly! Twitter DM @gkamer8, email gordon@us.ai, or otherwise ping him – he likes it.

If Abbey is not by default configurable to your liking, and you're comfortable writing code, please consider opening a PR with your improvements! Adding new integrations and even full interfaces is straightforward; see more details in the "Contributing" section below.

Screenshots

Document screenshot

Workspace screenshot

Setup and Install (New)

Upgrading From Previous Version

If you already have Abbey set up but are pulling a new version with git, please refer to the Upgrading section below to make sure any configuration changes you made still work.

Prerequisites

  • Installs: You must have Docker and Docker Compose installed. See details here.
  • 3rd-party credentials: If you're setting up an outside API to work with Abbey, have those credentials handy. You'll need to configure at least one language model and one embedding model.

If you have a previous version of Abbey and are doing the "new install" pattern with settings.yml for the first time: pull, create a new settings.yml and .env as described below, move your files from backend/app/static to file-storage, and rebuild with --build.
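
Those upgrade steps can be sketched as a small shell helper. The paths backend/app/static and file-storage are the ones named above; copying rather than moving is a cautious choice (an assumption, not the README's exact wording) so a failed rebuild can't lose your files:

```shell
# migrate_static: bring legacy uploads from the old static directory into
# the new file-storage directory, as the upgrade note above describes.
# Run from the repo root. Copies rather than moves so that a failed
# rebuild can't lose files; delete the old directory once Abbey works.
migrate_static() {
  src="${1:-backend/app/static}"
  dst="${2:-file-storage}"
  mkdir -p "$dst"
  cp -R "$src"/. "$dst"/
  echo "copied contents of $src into $dst"
}
```

After running it, rebuild as usual with docker compose ... up --build.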

Setup (3 easy steps)

Setup involves cloning/downloading this repo, creating .env and settings.yml files with your chosen AI integrations, and then running docker compose for either development (worse performance but easy to play around with) or production (better performance but slower to change settings). Here are the steps:

Step 1: Clone / download this repository and navigate inside it.

Step 2: Create a file called .env for secret keys and a file called settings.yml for configuration settings at the root of the repo (i.e., at the same level as the docker-compose.yml file). Then, enter into those files the keys / models you want to use. You can find details on how to configure each type of integration throughout this README.

The .env file holds any API keys or other secrets you need. You must also include a password for the MySQL database that Abbey uses. A .env file for someone using the official OpenAI API, an OpenAI Compatible API requiring a key, and the Anthropic API might look like:

MYSQL_ROOT_PASSWORD="my-password"
OPENAI_API_KEY="my-openai-key"
OPENAI_COMPATIBLE_KEY="my-api-key"
ANTHROPIC_API_KEY="my-anthropic-key"
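
As a quick sanity check before starting the containers, a sketch like this can confirm that each secret you expect is actually defined in .env. The variable names are the ones from the example above; pass whichever ones your chosen providers need:

```shell
# check_env: verify that each named variable is defined in an env file.
# Usage: check_env .env MYSQL_ROOT_PASSWORD OPENAI_API_KEY ...
check_env() {
  env_file="$1"
  shift
  missing=0
  for var in "$@"; do
    if ! grep -q "^${var}=" "$env_file"; then
      echo "missing: $var"
      missing=1
    fi
  done
  return "$missing"
}
```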

The settings.yml file configures Abbey to use the models and options you want. At minimum, you must use at least one language model and one embedding model. Put the best models first so that Abbey uses them by default. For example, here is a settings.yml file that uses models from the official OpenAI API, an OpenAI compatible API, Anthropic, and Ollama:

lms:
  models:
    - provider: anthropic
      model: "claude-3-5-sonnet-20241022"
      name: "Claude 3.5 Sonnet"  # optional, give a name for Abbey to use
      traits: "Coding"  # optional, let Abbey display what it's good for
      desc: "One of the best models ever!"  # optional, let Abbey show a description
      accepts_images: true  # optional, put true if the model is a vision model / accepts image input
      context_length: 200_000  # optional, defaults to 8192
    - provider: openai_compatible
      model: "gpt-4o"
      accepts_images: true
      context_length: 128_000  
    - provider: ollama
      model: "llama3.2"

openai_compatible:
  url: "http://host.docker.internal:1234"  # Use host.docker.internal for services running on localhost

ollama:
  url: "http://host.docker.internal:11434"  # Use host.docker.internal for services running on localhost

embeds:
  models:
    - provider: "openai"
      model: "text-embedding-ada-002"

And given that you've also put the relevant keys into .env, that would be a complete settings file. To configure different models, search engines, authentication services, text-to-speech models, etc.: please look for the appropriate documentation below!
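
Before booting, you can roughly verify that settings.yml declares the two required sections. This is just a grep sketch, not a YAML parser; the key names lms and embeds are the ones used in the example above:

```shell
# check_settings: rough sanity check that settings.yml declares the two
# sections Abbey requires at minimum: lms and embeds. Only matches
# top-level keys; it does not validate the models listed under them.
check_settings() {
  f="${1:-settings.yml}"
  for key in lms embeds; do
    if ! grep -q "^${key}:" "$f"; then
      echo "missing section: $key"
      return 1
    fi
  done
  echo "found lms and embeds sections"
}
```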

Step 3: If you're still playing around with your settings, you can run Abbey in dev mode by simply using:

docker compose up

In dev mode, when you change your settings / secrets, you just need to restart the containers to get your settings to apply, which can be done with:

docker compose restart backend frontend celery db_pooler

Once you're ready, you can run Abbey in production mode to give better performance:

docker compose -f docker-compose.prod.yml up --build

If you want to change your settings / secrets in prod mode, you need to rebuild the containers:

docker compose down
docker compose -f docker-compose.prod.yml up --build

Now Abbey should be running at http://localhost:3000! Just visit that URL in your browser to start using Abbey. In dev mode, it might take a second to load.

Note that the backend runs at http://localhost:5000 – if you go there, you should see a lyric from Gilbert and Sullivan's HMS Pinafore. If not, then the backend isn't running.
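
That check can be scripted too. The default URL is the one from this README; pass your own if you changed the backend's address:

```shell
# check_backend: hit the backend root URL and report whether it responds.
# A healthy Abbey backend answers with an HMS Pinafore lyric.
check_backend() {
  url="${1:-http://localhost:5000}"
  if body=$(curl -fsS --max-time 5 "$url" 2>/dev/null); then
    echo "backend is up: $body"
  else
    echo "backend not reachable at $url"
    return 1
  fi
}
```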

If something's not working right, please (please) file an issue or reach out to the creator directly – @gkamer8 on Twitter or gordon@us.ai by email.

Running Abbey at Different URLs / Ports

By default, Abbey runs on localhost at ports 3000 for the frontend and 5000 for the backend. If you want to alter these (since you're pretty tech savvy), you'll need to modify your docker compose file, and then add this to your settings.yml:

services:
  backend:
    public_url: http://localhost:5000  # Replace with your new user-accessible BACKEND URL
    internal_url: http://backend:5000  # This probably won't change - it's where the frontend calls the backend server side, within Docker
  frontend:
    public_url: http://localhost:3000  # Replace with your new user-accessible FRONTEND URL

If you change a port, be sure to update the port mapping in your docker compose file as well, e.g. changing the backend's mapping to 1234:5000. Make sure you edit the correct compose file (docker-compose.prod.yml for prod builds, docker-compose.yml for dev). Here's what that would look like for the backend:

backend:
    # ... some stuff
    ports:
      - "1234:5000"  # now the backend is at http://localhost:1234 in my browser
    # ... some stuff

Upgrading

If you're upgrading Abbey to this version from a previous one, here are some important notes:

  • The redis, celery, and db_pooler services have been subsumed into a single backend service. Please update any docker-compose files you have to match the updated, current docker-compose.yml in this repository. Note that changes to mounted volumes have also been made.
  • An experimental web crawling feature has been added, which uses a web scraping service based on Playwright. To enable it, add these lines to your settings.yml:

scraper:

templates:
  experimental: true
and make sure to run docker-compose.scraper.yml in addition to the regular docker-compose.yml, like:

docker compose -f docker-compose.yml -f docker-compose.scraper.yml up

In some deployed environments, you may also want to specify an API key for the scraper service; you should create a new .env inside the scraper folder, which would look like this:

SCRAPER_API_KEY=your-key

and be sure to add the same variable to the root .env.
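
A sketch of that step: generate one random key and append the same SCRAPER_API_KEY line to both files. The variable name and file locations come from this README; the hex-key format is just a reasonable default, not something Abbey requires:

```shell
# gen_scraper_key: generate a random hex key and append the same
# SCRAPER_API_KEY line to both env files, so the backend and the
# scraper service agree on the secret.
gen_scraper_key() {
  root_env="${1:-.env}"
  scraper_env="${2:-scraper/.env}"
  key=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
  printf 'SCRAPER_API_KEY=%s\n' "$key" >> "$scraper_env"
  printf 'SCRAPER_API_KEY=%s\n' "$key" >> "$root_env"
  echo "wrote SCRAPER_API_KEY to $scraper_env and $root_env"
}
```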

Because the crawler is an experimental feature, no security guarantees can be made at this time.

Troubleshooting

  1. General: make sure that all the docker containers are actually running with docker ps. You should see three: backend, frontend, and mysql. If one isn't running, try restarting it with docker compose restart backend (or frontend, or mysql, or what have you). If it keeps crashing, there's a good chance you've messed up your settings.yml or forgotten to put appropriate secrets into .env. Otherwise, look at the logs.

  2. docker config invalid: If Docker is telling you your compose file is invalid, you probably need to upgrade Docker Compose on your machine to version 2 or later. Abbey takes advantage of certain relatively new features like defaults for env variables and profiles. It's going to be easier just to upgrade in the long run - trust.

  3. Things look blank / don't load / requests to the backend don't seem to work quite right. First, navigate to the backend in the browser, like to http://localhost:5000 or whatever URL you put in originally (see the services heading in settings.yml described above). It should give you a message like "A British tar is a soaring soul..." If you see that, then the backend is up and running but your backend URL config is wrong or incomplete (were you playing around with it?). If your backend isn't running, check the logs in Docker for more information – please read what they say!

  4. Docker gets stuck downloading/installing/running an image. There is a possibility that you've run out of space on your machine. First, try running docker system prune to clean up any nasty stuff lying around in Docker that you've forgotten about. Then try clearing up space on your computer – perhaps enough for ~10 GB. Then restart Docker and try again. If you still get issues, please file an issue or reach out to the creator directly.
