Wardrowbe

Put your wardrobe in rows. Self-hosted AI-powered wardrobe management app.

Install / Use

/learn @Anyesh/Wardrowbe
README

<p align="center">
  <img src="./frontend/public/logo.svg" alt="wardrowbe" width="120" height="120">
</p>
<h1 align="center">wardrowbe</h1>
<p align="center">
  Put your wardrobe in rows. Snap. Organize. Wear.
</p>
<p align="center">
  <a href="https://claude.ai/code"><img src="https://img.shields.io/badge/Built%20with%20Claude%20Code-D97757?style=for-the-badge&logo=claude&logoColor=white" alt="Built with Claude Code"></a>
  <a href="https://nextjs.org/"><img src="https://img.shields.io/badge/Next.js-000000?style=for-the-badge&logo=nextdotjs&logoColor=white" alt="Next.js"></a>
  <a href="https://fastapi.tiangolo.com/"><img src="https://img.shields.io/badge/FastAPI-009688?style=for-the-badge&logo=fastapi&logoColor=white" alt="FastAPI"></a>
  <a href="https://www.typescriptlang.org/"><img src="https://img.shields.io/badge/TypeScript-3178C6?style=for-the-badge&logo=typescript&logoColor=white" alt="TypeScript"></a>
  <a href="https://www.python.org/"><img src="https://img.shields.io/badge/Python-3776AB?style=for-the-badge&logo=python&logoColor=white" alt="Python"></a>
</p>
<p align="center">
  <a href="#features">Features</a> •
  <a href="#quick-start">Quick Start</a> •
  <a href="#deployment">Deployment</a> •
  <a href="#architecture">Architecture</a> •
  <a href="#contributing">Contributing</a>
</p>
<p align="center">
  <a href="https://buymeacoffee.com/anyesh">
    <img src="https://img.shields.io/badge/Buy%20Me%20A%20Coffee-FFDD00?style=for-the-badge&logo=buymeacoffee&logoColor=black" alt="Buy Me A Coffee">
  </a>
</p>
<p align="center">
  <img src="https://img.shields.io/badge/Google%20Play-Coming%20Soon-34A853?style=for-the-badge&logo=googleplay&logoColor=white" alt="Google Play - Coming Soon">
  &nbsp;
  <a href="https://apps.apple.com/us/app/wardrowbe/id6759947671">
    <img src="https://img.shields.io/badge/App%20Store-0D96F6?style=for-the-badge&logo=appstore&logoColor=white" alt="App Store">
  </a>
</p>

Self-hosted wardrobe management with AI-powered outfit recommendations. Take photos of your clothes, let AI tag them, and get daily outfit suggestions based on weather and occasion.

Features

  • Photo-based wardrobe - Upload photos, AI extracts clothing details automatically
  • Smart recommendations - Outfits matched to weather, occasion, and your preferences
  • Scheduled notifications - Daily outfit suggestions via ntfy/Mattermost/email
  • Family support - Manage wardrobes for household members
  • Wear tracking - History, ratings, and outfit feedback
  • Analytics - See what you wear, what you don't, and your wardrobe's color distribution
  • Fully self-hosted - Your data stays on your hardware
  • Works with any AI - OpenAI, Ollama, LocalAI, or any OpenAI-compatible API

Screenshots

Wardrobe & Item Details

| Grid View | Item Details & AI Analysis |
|-----------|---------------------------|
| Wardrobe | Item Details |

Wash Tracking & Outfit Suggestions

| Wash Tracking | Suggestions |
|---------------|-------------|
| Wash Tracking | Suggest |

History & Analytics

| History Calendar | Analytics |
|------------------|-----------|
| History | Analytics |

Pairings

| Pairing View | Pairing Modal |
|--------------|---------------|
| Pairing | Pairing Modal |

Quick Start

Prerequisites

  • Docker and Docker Compose installed
  • At least 4GB of RAM available
  • An AI service (Ollama recommended for free local AI, or OpenAI API key)

Setup

Step 1: Install Ollama (if using local AI)

Option A: Using Ollama (Recommended - Free, runs locally)

# Install Ollama from https://ollama.ai
# Then pull required models:
ollama pull gemma3        # Multimodal LLM (for image analysis and outfit recommendations)

# Verify it's running:
curl http://localhost:11434/api/tags
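If you prefer scripting the check, the JSON returned by /api/tags can be inspected for the model. A minimal sketch (the `sample` response body below is illustrative of the shape, not captured from a live server):

```python
# Check an Ollama /api/tags response for the gemma3 model.
# Replace `sample` with the real body from:
#   curl http://localhost:11434/api/tags
import json

sample = '{"models": [{"name": "gemma3:latest"}, {"name": "llama3:latest"}]}'
names = [m["name"] for m in json.loads(sample)["models"]]
print("gemma3 pulled:", any(n.startswith("gemma3") for n in names))
```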

Option B: Using OpenAI (Paid API)

Get your API key from https://platform.openai.com/api-keys

Step 2: Clone and Configure

# Clone the repository
git clone https://github.com/yourusername/wardrowbe.git
cd wardrowbe

# Copy environment template
cp .env.example .env

# IMPORTANT: Edit .env and configure AI settings
# For Ollama (default in .env.example):
#   AI_BASE_URL=http://host.docker.internal:11434/v1
#   AI_VISION_MODEL=gemma3:latest
#   AI_TEXT_MODEL=gemma3:latest
#
# For OpenAI, uncomment and set:
#   AI_BASE_URL=https://api.openai.com/v1
#   AI_API_KEY=sk-your-api-key-here
#   AI_VISION_MODEL=gpt-4o
#   AI_TEXT_MODEL=gpt-4o

# Optional: Generate secure secrets for production
# SECRET_KEY=$(openssl rand -hex 32)
# NEXTAUTH_SECRET=$(openssl rand -hex 32)
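If openssl is not available, Python's standard library generates an equivalent value (32 random bytes, hex-encoded):

```python
# Generate a 64-character hex secret, equivalent to `openssl rand -hex 32`.
import secrets

secret_key = secrets.token_hex(32)
print(secret_key)  # paste into SECRET_KEY / NEXTAUTH_SECRET in .env
```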

Step 3: Start Services

# Start all containers
docker compose up -d

# Wait for services to be healthy (30 seconds)
docker compose ps

# Run database migrations (REQUIRED)
docker compose exec backend alembic upgrade head

# Verify everything is working
curl http://localhost:8000/api/v1/health
# Should return: {"status":"healthy"}
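Instead of eyeballing `docker compose ps`, the health endpoint can be polled until the backend reports healthy. A sketch (the injectable `probe` parameter is a hypothetical hook, included only so the loop can be exercised without a live backend):

```python
# Poll the backend health endpoint until it reports healthy or we time out.
import json
import time
import urllib.request

def wait_for_health(url="http://localhost:8000/api/v1/health", timeout=60, probe=None):
    def default_probe(u):
        # Real check: GET the endpoint and compare the "status" field.
        with urllib.request.urlopen(u, timeout=5) as resp:
            return json.load(resp).get("status") == "healthy"

    probe = probe or default_probe
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            if probe(url):
                return True
        except OSError:
            pass  # backend not up yet; retry
        time.sleep(2)
    return False
```

Run it after `docker compose up -d`, then apply migrations once it returns `True`.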

Step 4: Access the App

  • Frontend: http://localhost:3000
  • API Docs: http://localhost:8000/docs
  • Login: Click "Login" - uses dev credentials by default (no password needed)

Development Mode

For hot reloading during development (auto-rebuilds on code changes):

# Start in dev mode
docker compose -f docker-compose.yml -f docker-compose.dev.yml up -d

# Run migrations (first time only)
docker compose exec backend alembic upgrade head

# View logs
docker compose logs -f frontend backend

AI Configuration

Wardrowbe works with any OpenAI-compatible API. You need two types of models:

  • Vision model: Analyzes clothing images to extract colors, patterns, styles
  • Text model: Generates outfit recommendations and descriptions
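Because all of the backends below speak the OpenAI chat-completions format, the same request shape works against Ollama, OpenAI, or LocalAI. A sketch of a vision payload (field names follow the OpenAI API; the image bytes and prompt are placeholders, not taken from this codebase):

```python
# Build an OpenAI-compatible chat-completions payload that attaches an image
# as a base64 data URL -- the standard way vision models receive photos.
import base64
import json

image_bytes = b"\x89PNG placeholder"  # stand-in for a real photo
b64 = base64.b64encode(image_bytes).decode()

payload = {
    "model": "gemma3:latest",  # AI_VISION_MODEL from .env
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Describe this clothing item: color, pattern, style."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ],
    }],
}
print(json.dumps(payload)[:80])
```

Swapping providers then only changes the base URL, API key, and model name.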

Using Ollama (Recommended for Self-Hosting)

Free, runs locally, no API key needed, works offline

  1. Install Ollama

  2. Pull models:

    ollama pull gemma3:latest  # Multimodal LLM (3.4GB): analyzes images and generates recommendations
    
    # Alternative text models you can use:
    # ollama pull llama3:latest     # Good all-around model
    # ollama pull qwen2.5:latest    # Fast and efficient
    # ollama pull mistral:latest    # Great for creative text
    
  3. Configure in .env:

    AI_BASE_URL=http://host.docker.internal:11434/v1
    AI_API_KEY=not-needed
    AI_VISION_MODEL=gemma3:latest
    AI_TEXT_MODEL=gemma3:latest
    

Note: Use host.docker.internal instead of localhost so Docker containers can reach your host's Ollama.
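On Linux, `host.docker.internal` is not defined automatically. A common workaround is mapping it to the host gateway in the compose file (the `backend` service name here is an assumption; adjust to match this repo's compose setup):

```yaml
# docker-compose override sketch: make host.docker.internal resolve on Linux
services:
  backend:
    extra_hosts:
      - "host.docker.internal:host-gateway"
```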

Using OpenAI

Paid API, requires internet connection

  1. Get API key from https://platform.openai.com/api-keys
  2. Configure in .env:
    AI_BASE_URL=https://api.openai.com/v1
    AI_API_KEY=sk-your-api-key-here
    AI_VISION_MODEL=gpt-4o
    AI_TEXT_MODEL=gpt-4o
    

Using LocalAI

Self-hosted OpenAI alternative

AI_BASE_URL=http://localai:8080/v1
AI_API_KEY=not-needed
AI_VISION_MODEL=gpt-4-vision-preview
AI_TEXT_MODEL=gpt-3.5-turbo

Using Multimodal Models

Some models can handle both vision and text (like qwen2-vl, llama3.2-vision):

AI_VISION_MODEL=llama3.2-vision:11b
AI_TEXT_MODEL=llama3.2-vision:11b  # Same model for both tasks

Architecture

┌─────────────────────────────────────────────────────────────┐
│                        Frontend                              │
│                   (Next.js + React Query)                    │
└─────────────────────────┬───────────────────────────────────┘
                          │
┌─────────────────────────▼───────────────────────────────────┐
│                        Backend                               │
│                   (FastAPI + SQLAlchemy)                     │
└──────────┬──────────────┬──────────────────┬────────────────┘
           │              │                  │
    ┌──────▼──────┐ ┌─────▼─────┐    ┌──────▼──────┐
    │  PostgreSQL │ │   Redis   │    │  AI Service │
    │  (Database) │ │(Job Queue)│    │ (OpenAI/etc)│
    └─────────────┘ └─────┬─────┘    └─────────────┘
                          │
               ┌──────────▼──────────┐
               │   Background Worker │
               │    (arq - AI Jobs)  │
               └─────────────────────┘

Tech Stack

| Layer | Technology |
|-------|------------|
| Frontend | Next.js 14, TypeScript, TanStack Query, Tailwind CSS, shadcn/ui |
| Backend | FastAPI, SQLAlchemy (async), Pydantic, Python 3.11+ |
| Database | PostgreSQL 15 |
| Cache/Queue | Redis 7 |
| Background Jobs | arq |
| Authentication | NextAuth.js (supports OIDC, dev credentials) |
| AI | Any OpenAI-compatible API |

Deployment

Docker Compose (Production)

See docker-compose.prod.yml for production configuration.

docker compose -f docker-compose.prod.yml up -d
docker compose exec backend alembic upgrade head

Kubernetes

See the k8s/ directory for Kubernetes manifests including:

  • PostgreSQL and Redis with persistent storage
  • Backend API and worker deployments
  • Next.js frontend
  • Ingress with TLS
  • Network policies

Configuration

Environment Variables

| Variable | Description | Required |
|----------|-------------|----------|
| DATABASE_URL | PostgreSQL connection string | Yes |
| SECRET_KEY | Backend secret for JWT | Yes |
| NEXTAUTH_SECRET | NextAuth session encryption | Yes |
| AI_BASE_URL | AI service URL | Yes |
| AI_API_KEY | AI API key (if required) | Depends |
| OIDC_ISSUER_URL | OIDC provider URL (enables SSO login) | No |
| OIDC_CLIENT_ID | OIDC client ID | If OIDC |
| OIDC_CLIENT_SECRET | OIDC client secret | If OIDC |
| OIDC_SKIP_SSL_VERIFY | Skip TLS verification for OIDC provider (self-signed certs) | No |
| LOCAL_DNS | Custom DNS server for container networking | No |
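A fail-fast check at startup catches missing required variables before the app half-boots. A minimal sketch (the variable list mirrors the table above; this helper is illustrative, not part of the codebase):

```python
# Verify required environment variables before starting the backend.
import os

REQUIRED = ["DATABASE_URL", "SECRET_KEY", "NEXTAUTH_SECRET", "AI_BASE_URL"]

def missing_vars(env=os.environ):
    # Return the names that are unset or empty.
    return [name for name in REQUIRED if not env.get(name)]

problems = missing_vars({"DATABASE_URL": "postgresql://..."})
print("missing:", problems)
```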

View on GitHub

Stars: 88 · Forks: 13 · Category: Development

Languages

Python

Security Score

100/100

Audited on Apr 1, 2026. No findings.