# Alice
<img src="https://img.shields.io/github/license/pmbstyle/alice"> <img src="https://img.shields.io/github/v/release/pmbstyle/alice"> <img src="https://img.shields.io/github/downloads/pmbstyle/Alice/total">
Say "Hi" to Alice 👋, your open-source AI companion designed to live on your desktop.
Alice brings together voice interaction, intelligent context awareness, powerful tooling, and a friendly personality to assist you with everything from daily tasks to deeper creative work. Alice is more than a chatbot; she’s built to feel present, responsive, emotionally engaging, and deeply useful.
## Quick showcase
<p align="center"> <a href="https://www.youtube.com/watch?v=fDYUjh6UXqk"> <img width="817" height="504" alt="AliceVideo" src="https://github.com/user-attachments/assets/9e0ffee2-198a-43a0-9f9a-a003d221e31d" /> </a> </p>

## ✨ Key Features
### 💻 Local and Cloud use
Alice is designed to work with cloud (OpenAI/OpenRouter) and local (Ollama/LM Studio) LLMs, and ships with built-in speech-to-text, text-to-speech, and embedding services. While the OpenAI cloud API is preferred and provides the best user experience, Alice can also operate fully locally (experimental).
### 🗣️ Voice Interaction
- Fast, VAD-powered voice recognition (via `gpt-4o-transcribe`, `google-tts-voice`, or `whisper-large-v3`)
- Natural-sounding responses with OpenAI/Google TTS and optional support for local multilingual text-to-speech via Piper TTS
- Interruptible speech and streaming response cancellation for smoother flow
### 🧠 Memory & Context
- Thoughts: Short-term context stored in Hnswlib vector DB
- Memories: Structured long-term facts in local DB
- Summarization: Compact message history into context prompts
- Emotion awareness: Summaries include mood estimation for more human responses
- Local RAG: add local documents to the LLM context and chat with your docs
### 🎨 Vision & Visual Output
- Screenshot interpretation using Vision API
- Image generation using `gpt-image-1`
- Animated video states (standby/speaking/thinking)
### 🪄 Computer Use Tools
Alice can interact with your local system with user-approved permissions:
- 📂 File system browsing (e.g., listing folders)
- 💻 Shell command execution (`ls`, `mv`, `mkdir`, etc.)
- 🔐 Granular command approvals:
  - One-time
  - Session-based
  - Permanent (revocable)
- 🔧 The "Permissions" settings tab lets you review and manage all approved commands
### ⚙️ Function Calling
- Web search (including Searxng support)
- Google Calendar & Gmail integration
- Torrent search & download (via Jackett + qBittorrent)
- Time & date awareness
- Clipboard management
- Task scheduler (reminders and command execution)
- Open applications & URLs
- Image generation
- MCP server support
### 💬 Wake Word Support
With the local STT model, you can set a wake word (like "Hey Siri").
- Alice will always listen, but only process requests when the wake word is spoken.
- Default mode is auto language detection, but you can also select a specific language in settings.
### 💻 Dedicated Chrome Extension
- Ask Alice about your active Chrome tab
- Context menu actions for selected text on a web page:
  - Fact check this
  - Summarize this
  - Tell me more about it
### 🎛️ Flexible Settings
Fully customizable settings interface:
- LLM provider selection between OpenAI, OpenRouter, Ollama, LM Studio
- Cloud or local TTS, STT, Embeddings
- Model choice & parameters (temperature, top_p, history, etc.)
- Prompt and summarization tuning
- Audio/mic toggles & hotkeys
- Available tools & MCP configuration
- Google integrations
### 🔨 Custom Tools
Alice supports custom tools that are defined in JSON and backed by local scripts.
- Open Settings → Customization → Custom tools
- Upload or drop your script (writes to `custom-tool-scripts/`)
- Click Add Tool, fill in metadata, and paste the JSON schema. Saving updates `custom-tools.json`
- Toggle the tool on/off in the list. Only enabled and valid entries are offered to the model.
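A custom tool entry might look like the sketch below. The exact fields your version of Alice writes to `custom-tools.json` may differ; the tool name, script path, and parameter schema here are illustrative assumptions, not the authoritative format — check the file Alice generates after you save a tool through the UI.

```json
{
  "name": "word_count",
  "description": "Hypothetical example: count words in a text file",
  "script": "custom-tool-scripts/word_count.sh",
  "enabled": true,
  "parameters": {
    "type": "object",
    "properties": {
      "path": {
        "type": "string",
        "description": "Path of the file to count words in"
      }
    },
    "required": ["path"]
  }
}
```

The `parameters` block follows the JSON Schema style used for LLM function calling, so the model knows which arguments to supply when it invokes your script.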
### 🎭 Custom Avatars
Swap Alice's appearance with your own video loops:
- Create a folder under `user-customization/custom-avatars/<AvatarName>/`.
- Drop `speaking.mp4`, `thinking.mp4`, and `standby.mp4` into that folder (all required).
- Open Settings → Customization → Assistant Avatar, hit Refresh, and pick the new avatar.
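For example, an avatar named `MyAvatar` (a placeholder name chosen here for illustration) would be laid out as:

```
user-customization/
└── custom-avatars/
    └── MyAvatar/
        ├── standby.mp4
        ├── speaking.mp4
        └── thinking.mp4
```

All three video files must be present, or the avatar will not appear in the picker.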
## 🚀 Download
<!-- STABLE_DOWNLOADS -->
| Platform | Download |
|----------|----------|
| Windows | Alice-AI-App-Windows-1.3.0-Setup.exe |
| macOS | Alice-AI-App-Mac-1.3.0-Installer.dmg |
| Linux | Alice-AI-App-Linux-1.3.0.AppImage |
| Arch Linux (community build) | AUR Package |
<!-- STABLE_DOWNLOADS_END -->

Follow the Setup Instructions to configure your API keys and environment.
## 🛠️ Technologies Used
- Frontend: Vue.js, TailwindCSS
- Desktop Shell: Electron
- State Management: Pinia
- AI APIs: OpenAI, OpenRouter, Groq
- Backend: Go
- Vector search engine: hnswlib-node
- Local storage: better-sqlite3
- Voice activity detection: VAD (Web)
- Local STT & TTS: whisper.cpp & Piper
- Local Embeddings: all-MiniLM-L6-v2
- Animation: Kling Pro
Other tools:
- Jackett — Torrent aggregator
- qBittorrent — Torrent client
- Searxng - Self-hosted web search
## 🧑‍💻 Getting Started (Development)
```bash
# 1. Clone the repo
git clone https://github.com/pmbstyle/Alice.git
cd Alice

# 2. Install dependencies
npm install

# 3. Set up your .env file (see .env.example for reference)
#    Follow the setup instructions to obtain required API credentials.

# 4. Compile the backend
npm run build:go

# 5. Run the dev environment
npm run dev
```
## 📦 Production Build
Optionally, create an app-config.json file in the root directory for Google integration:
```json
{
  "VITE_GOOGLE_CLIENT_ID": "",
  "VITE_GOOGLE_CLIENT_SECRET": ""
}
```
```bash
# Build the app
npm run build
```

Install the output from the `release/` directory.
## 🤝 Contributing
Ideas, bug reports, feature requests - all welcome! Open an issue or PR, or drop by to share your thoughts. Your input helps shape Alice into something wonderful 💚
