
Ragbase

Completely local RAG. Chat with your PDF documents (with an open LLM) through a UI built with LangChain, Streamlit, Ollama (Llama 3.1), Qdrant, and advanced methods like reranking and semantic chunking.


RagBase - Private Chat with Your Documents

Completely local RAG with chat UI

<a href="https://www.mlexpert.io/bootcamp" target="_blank"> <img src="https://raw.githubusercontent.com/curiousily/ragbase/master/.github/ui.png"> </a>

Demo

Check out RagBase on Streamlit Cloud. The demo runs with the Groq API.

Installation

Clone the repo:

git clone git@github.com:curiousily/ragbase.git
cd ragbase

Install the dependencies (requires Poetry):

poetry install

Fetch your LLM (gemma2:9b by default):

ollama pull gemma2:9b

Run the Ollama server:

ollama serve

Start RagBase:

poetry run streamlit run app.py

Architecture

<a href="https://www.mlexpert.io/bootcamp" target="_blank"> <img src="https://raw.githubusercontent.com/curiousily/ragbase/master/.github/architecture.png"> </a>

Ingestor

Extracts text from PDF documents and creates chunks (using semantic and character splitters) that are stored in a vector database.
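The character-splitting half of that step can be sketched as a toy overlapping splitter. This is a simplified stand-in for the LangChain splitters the project actually uses; the chunk sizes and sample text are illustrative:

```python
def split_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Cut text into overlapping fixed-size chunks, so context near a
    chunk boundary also appears at the start of the next chunk.
    (Toy version; RagBase uses LangChain's semantic/character splitters.)"""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

# Stand-in for text extracted from a PDF page.
chunks = split_text("the quick brown fox " * 20, chunk_size=100, overlap=20)
```

The overlap keeps sentences that straddle a boundary retrievable from either neighboring chunk, at the cost of some storage redundancy.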

Retriever

Given a query, searches for similar documents, reranks the results, and applies an LLM chain filter before returning the response.
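That two-stage flow (cheap similarity search to build a shortlist, then a more expensive reranker over it) can be sketched as follows. The word-overlap score stands in for both the vector search and the reranker; none of these names come from the project's actual code:

```python
def word_overlap(a: str, b: str) -> float:
    """Jaccard overlap of word sets -- a cheap stand-in for vector similarity."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def retrieve_and_rerank(query, documents, rerank_score, k=10, top_n=3):
    """Two-stage retrieval: shortlist k documents with a cheap score,
    then reorder the shortlist with a costlier reranker and keep top_n."""
    shortlist = sorted(documents, key=lambda d: word_overlap(query, d),
                       reverse=True)[:k]
    return sorted(shortlist, key=lambda d: rerank_score(query, d),
                  reverse=True)[:top_n]

docs = ["cats like milk", "dogs like bones", "the stock market fell today"]
best = retrieve_and_rerank("what do cats like", docs,
                           rerank_score=word_overlap, k=2, top_n=1)
```

Running the expensive scorer only over the shortlist is what makes reranking affordable: the first stage is fast over the whole corpus, the second is accurate over a handful of candidates.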

QA Chain

Combines the LLM with the retriever to answer a given user question.
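In outline, such a chain stuffs the retrieved chunks into a prompt and passes it to the model. The sketch below uses stub `retriever` and `llm` callables, and the prompt wording is made up, not the project's actual template:

```python
def answer(question: str, retriever, llm) -> str:
    """Toy QA chain: fetch context chunks, build a prompt, call the LLM.
    `retriever` maps a question to a list of text chunks; `llm` maps a
    prompt string to a completion string."""
    context = "\n\n".join(retriever(question))
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return llm(prompt)

# Stub components so the chain runs without a model server.
fake_retriever = lambda q: ["RagBase stores chunks in Qdrant."]
fake_llm = lambda prompt: prompt  # "echo model" for inspection
reply = answer("Where are chunks stored?", fake_retriever, fake_llm)
```

Swapping the stubs for the real retriever and an Ollama- or Groq-backed model is what turns this outline into the full pipeline.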


Add Groq API Key (Optional)

You can also use the Groq API instead of the local LLM. For that, you'll need a .env file with your Groq API key:

GROQ_API_KEY=YOUR API KEY
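Assuming the app reads this file at startup, the mechanism can be sketched as a minimal .env loader. This is a simplified stand-in for a package like python-dotenv; only the KEY=VALUE format comes from the snippet above:

```python
import os

def load_env(path: str = ".env") -> None:
    """Minimal .env loader: put KEY=VALUE lines into os.environ,
    skipping blank/comment lines and never overwriting variables
    that are already set in the real environment."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            # Split on the first '=' only, so values may contain '='.
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

Keeping the key in .env (and out of version control) means the same code can fall back to the local Ollama model whenever `GROQ_API_KEY` is absent.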
