# Wardrowbe

Put your wardrobe in rows. Self-hosted AI-powered wardrobe management app.
Self-hosted wardrobe management with AI-powered outfit recommendations. Take photos of your clothes, let AI tag them, and get daily outfit suggestions based on weather and occasion.
## Features

- **Photo-based wardrobe** - Upload photos, AI extracts clothing details automatically
- **Smart recommendations** - Outfits matched to weather, occasion, and your preferences
- **Scheduled notifications** - Daily outfit suggestions via ntfy/Mattermost/email
- **Family support** - Manage wardrobes for household members
- **Wear tracking** - History, ratings, and outfit feedback
- **Analytics** - See what you wear, what you don't, and your color distribution
- **Fully self-hosted** - Your data stays on your hardware
- **Works with any AI** - OpenAI, Ollama, LocalAI, or any OpenAI-compatible API
## Screenshots

*(Screenshot images are not included here.)* The gallery covers: **Wardrobe & Item Details** (grid view; item details & AI analysis), **Wash Tracking & Outfit Suggestions**, **History & Analytics** (history calendar; analytics), and **Pairings** (pairing view; pairing modal).
## Quick Start

### Prerequisites

- Docker and Docker Compose installed
- At least 4GB of RAM available
- An AI service (Ollama recommended for free local AI, or an OpenAI API key)
### Setup

#### Step 1: Install Ollama (if using local AI)

**Option A: Using Ollama (Recommended - free, runs locally)**

```bash
# Install Ollama from https://ollama.ai
# Then pull the required model:
ollama pull gemma3   # multimodal LLM (image analysis and outfit recommendations)

# Verify it's running:
curl http://localhost:11434/api/tags
```
**Option B: Using OpenAI (paid API)**

Get your API key from https://platform.openai.com/api-keys.
#### Step 2: Clone and Configure

```bash
# Clone the repository
git clone https://github.com/yourusername/wardrowbe.git
cd wardrowbe

# Copy the environment template
cp .env.example .env

# IMPORTANT: edit .env and configure the AI settings.
#
# For Ollama (the default in .env.example):
#   AI_BASE_URL=http://host.docker.internal:11434/v1
#   AI_VISION_MODEL=gemma3:latest
#   AI_TEXT_MODEL=gemma3:latest
#
# For OpenAI, uncomment and set:
#   AI_BASE_URL=https://api.openai.com/v1
#   AI_API_KEY=sk-your-api-key-here
#   AI_VISION_MODEL=gpt-4o
#   AI_TEXT_MODEL=gpt-4o

# Optional: generate secure secrets for production
# SECRET_KEY=$(openssl rand -hex 32)
# NEXTAUTH_SECRET=$(openssl rand -hex 32)
```
#### Step 3: Start Services

```bash
# Start all containers
docker compose up -d

# Wait for services to become healthy (~30 seconds)
docker compose ps

# Run database migrations (REQUIRED)
docker compose exec backend alembic upgrade head

# Verify everything is working
curl http://localhost:8000/api/v1/health
# Should return: {"status":"healthy"}
```
#### Step 4: Access the App

- Frontend: http://localhost:3000
- API docs: http://localhost:8000/docs
- Login: click "Login" - dev credentials are used by default (no password needed)
### Development Mode

For hot reloading during development (auto-rebuilds on code changes):

```bash
# Start in dev mode
docker compose -f docker-compose.yml -f docker-compose.dev.yml up -d

# Run migrations (first time only)
docker compose exec backend alembic upgrade head

# View logs
docker compose logs -f frontend backend
```
## AI Configuration

Wardrowbe works with any OpenAI-compatible API. You need two types of models:

- **Vision model**: Analyzes clothing images to extract colors, patterns, and styles
- **Text model**: Generates outfit recommendations and descriptions
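Because the API is OpenAI-compatible, both models are called through the standard chat-completions request shape. As an illustrative sketch (the helper name and prompt text are placeholders, not Wardrowbe's actual code), a vision request embeds the photo as a base64 data URL in the message content:

```python
import base64

def build_vision_request(image_bytes: bytes, model: str = "gemma3:latest") -> dict:
    """Build an OpenAI-compatible chat-completions payload asking a vision
    model to describe a clothing photo. The prompt is illustrative only."""
    data_url = "data:image/jpeg;base64," + base64.b64encode(image_bytes).decode()
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this clothing item: type, colors, pattern, style."},
                {"type": "image_url", "image_url": {"url": data_url}},
            ],
        }],
    }

# POST this payload as JSON to {AI_BASE_URL}/chat/completions
payload = build_vision_request(b"\xff\xd8\xff\xe0")  # placeholder JPEG bytes
print(payload["model"])  # -> gemma3:latest
```

The same endpoint serves the text model; a recommendation request simply sends a text-only `content` string instead of the image part.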
### Using Ollama (Recommended for Self-Hosting)

Free, runs locally, needs no API key, and works offline.

1. Install Ollama.

2. Pull a model:

   ```bash
   ollama pull gemma3:latest   # multimodal LLM (3.4GB) - analyzes images and generates recommendations

   # Alternative text models you can use:
   # ollama pull llama3:latest    # good all-around model
   # ollama pull qwen2.5:latest   # fast and efficient
   # ollama pull mistral:latest   # great for creative text
   ```

3. Configure in `.env`:

   ```env
   AI_BASE_URL=http://host.docker.internal:11434/v1
   AI_API_KEY=not-needed
   AI_VISION_MODEL=gemma3:latest
   AI_TEXT_MODEL=gemma3:latest
   ```

Note: Use `host.docker.internal` instead of `localhost` so Docker containers can reach Ollama on your host.
### Using OpenAI

Paid API; requires an internet connection.

1. Get an API key from https://platform.openai.com/api-keys.

2. Configure in `.env`:

   ```env
   AI_BASE_URL=https://api.openai.com/v1
   AI_API_KEY=sk-your-api-key-here
   AI_VISION_MODEL=gpt-4o
   AI_TEXT_MODEL=gpt-4o
   ```
### Using LocalAI

Self-hosted OpenAI alternative.

```env
AI_BASE_URL=http://localai:8080/v1
AI_API_KEY=not-needed
AI_VISION_MODEL=gpt-4-vision-preview
AI_TEXT_MODEL=gpt-3.5-turbo
```
### Using Multimodal Models

Some models (e.g. qwen2-vl, llama3.2-vision) can handle both vision and text:

```env
AI_VISION_MODEL=llama3.2-vision:11b
AI_TEXT_MODEL=llama3.2-vision:11b   # same model for both tasks
```
## Architecture

```
┌─────────────────────────────────────────────────────────────┐
│                          Frontend                           │
│                   (Next.js + React Query)                   │
└─────────────────────────┬───────────────────────────────────┘
                          │
┌─────────────────────────▼───────────────────────────────────┐
│                          Backend                            │
│                   (FastAPI + SQLAlchemy)                    │
└──────────┬──────────────┬──────────────────┬────────────────┘
           │              │                  │
    ┌──────▼──────┐ ┌─────▼──────┐    ┌──────▼──────┐
    │ PostgreSQL  │ │   Redis    │    │ AI Service  │
    │ (Database)  │ │ (Job Queue)│    │ (OpenAI/etc)│
    └─────────────┘ └─────┬──────┘    └─────────────┘
                          │
               ┌──────────▼──────────┐
               │  Background Worker  │
               │   (arq - AI Jobs)   │
               └─────────────────────┘
```
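The Redis leg of the diagram works like this: the backend enqueues an AI job (for example, after a photo upload) and the arq worker consumes it asynchronously. A minimal stdlib sketch of that producer/consumer flow, with an in-memory queue standing in for Redis and illustrative job fields (not Wardrowbe's actual job code):

```python
import asyncio

async def analyze_item(job: dict) -> dict:
    """Stand-in for the AI call the worker would make."""
    await asyncio.sleep(0)  # simulate async I/O to the AI service
    return {"item_id": job["item_id"], "tags": ["shirt", "blue", "casual"]}

async def worker(queue: asyncio.Queue, results: list) -> None:
    """Consume jobs until a None sentinel arrives (arq handles this via Redis)."""
    while (job := await queue.get()) is not None:
        results.append(await analyze_item(job))

async def main() -> list:
    queue: asyncio.Queue = asyncio.Queue()  # stands in for the Redis job queue
    results: list = []
    task = asyncio.create_task(worker(queue, results))
    await queue.put({"item_id": 1})  # backend enqueues after a photo upload
    await queue.put(None)            # shutdown sentinel
    await task
    return results

print(asyncio.run(main()))  # one analysis result per enqueued job
```

Decoupling the AI work this way keeps API responses fast: the upload endpoint returns immediately, and tags appear once the worker finishes.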
## Tech Stack

| Layer | Technology |
|-------|------------|
| Frontend | Next.js 14, TypeScript, TanStack Query, Tailwind CSS, shadcn/ui |
| Backend | FastAPI, SQLAlchemy (async), Pydantic, Python 3.11+ |
| Database | PostgreSQL 15 |
| Cache/Queue | Redis 7 |
| Background Jobs | arq |
| Authentication | NextAuth.js (supports OIDC, dev credentials) |
| AI | Any OpenAI-compatible API |
## Deployment

### Docker Compose (Production)

See docker-compose.prod.yml for the production configuration.

```bash
docker compose -f docker-compose.prod.yml up -d
docker compose exec backend alembic upgrade head
```
### Kubernetes

See the k8s/ directory for Kubernetes manifests, including:
- PostgreSQL and Redis with persistent storage
- Backend API and worker deployments
- Next.js frontend
- Ingress with TLS
- Network policies
## Configuration

### Environment Variables
| Variable | Description | Required |
|----------|-------------|----------|
| DATABASE_URL | PostgreSQL connection string | Yes |
| SECRET_KEY | Backend secret for JWT | Yes |
| NEXTAUTH_SECRET | NextAuth session encryption | Yes |
| AI_BASE_URL | AI service URL | Yes |
| AI_API_KEY | AI API key (if required) | Depends |
| OIDC_ISSUER_URL | OIDC provider URL (enables SSO login) | No |
| OIDC_CLIENT_ID | OIDC client ID | If OIDC |
| OIDC_CLIENT_SECRET | OIDC client secret | If OIDC |
| OIDC_SKIP_SSL_VERIFY | Skip TLS verification for OIDC provider (self-signed certs) | No |
| LOCAL_DNS | Custom DNS server for container networking | No |
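Tying the table together, a minimal `.env` for an Ollama setup might look like the following. The connection-string user, password, and host names are placeholders (the `postgresql+asyncpg` scheme assumes the async SQLAlchemy driver) — match them to your docker-compose service names and generate real secrets:

```env
DATABASE_URL=postgresql+asyncpg://wardrowbe:change-me@postgres:5432/wardrowbe
SECRET_KEY=change-me            # generate with: openssl rand -hex 32
NEXTAUTH_SECRET=change-me       # generate with: openssl rand -hex 32
AI_BASE_URL=http://host.docker.internal:11434/v1
AI_API_KEY=not-needed           # Ollama ignores the key
AI_VISION_MODEL=gemma3:latest
AI_TEXT_MODEL=gemma3:latest
```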