Lira
A Voice-First AI Companion
Lira provides real-time conversations, context-aware responses, and on-demand image generation. It listens, understands, and responds naturally to assist users with daily tasks, emotional check-ins, and creative prompts.
Lira is a sleek Flutter mobile app (iOS/Android) that acts as your always-on voice buddy—like ChatGPT’s voice mode but cozier.
It uses a cloned "grandma" voice to offer empathetic advice on daily life, emotional check-ins, quick planning (e.g., "Remind me about that meeting"), support in underrepresented languages, or just venting sessions.
- Hands-free, real-time chat: Speak naturally (even with Ethiopian accents), it listens live, thinks via AI, and responds in a warm, storytelling tone.
- Privacy-first (mostly on-device), with optional Neuroviate integration for multicultural empathy.
- Monetization can be added later via premium voices or third-party integrations.
- Target audience: Busy individuals craving low-key wisdom, starting in Ethiopia/global diaspora.
🔥 Tech Stack & Free LLM Options
- Frontend: Flutter (iOS/Android)
- Backend: Python + FastAPI
- AI & LLMs (Free): OpenRouter, HuggingFace Inference (Mistral, LLaMA, Grok, Qwen)
- Speech Processing (Free): Whisper / Vosk (STT), Coqui TTS / flutter_tts (TTS)
📸 Screenshots
<p align="center"> <img src="https://raw.githubusercontent.com/Naomer/lira/567e552045b40a20d08d8f4bb99a7eb09be0e8e7/IMG_6680.PNG" alt="Lira Home Screen" width="300" /> <img src="https://raw.githubusercontent.com/Naomer/lira/3903e290a9d1f6220b24ad5193fe99a60056beb9/IMG_6681.PNG" alt="Lira Voice Screen" width="300" /> </p>

🏗️ UI Components
1. Home / Dashboard Screen (lib/screens/home_screen.dart)
- User greeting with profile picture
- "Good Morning" prompt
- Main "Talk to AI assistant" card with Start Talking button
- Voice and Image feature cards
- Topics section with pill-shaped buttons
- Information cards (Blood pressure, Sleep)
- Bottom navigation bar with AI sparkle button
2. Voice Analysis Screen (lib/screens/voice_analysis_screen.dart)
- "Listening..." indicator
- Animated 3D orb visualizer with gradient colors
- Live transcript display
- Bottom control bar with timer, microphone button, and cancel button
3. Smart Chat Screen (lib/screens/smart_chat_screen.dart)
- Chat interface with AI and user message bubbles
- Sparkle icons for AI messages
- Audio message bubbles with waveform visualization
- Text input field with mic and add buttons
- Pre-populated sample conversation
4. Shared Components
- Gradient Background (lib/utils/gradient_background.dart) — Purple/pink gradient
- Status Bar (lib/widgets/status_bar.dart) — Time, signal, WiFi, battery
- Orb Visualizer (lib/widgets/orb_visualizer.dart) — Animated 3D sphere with swirling patterns
🎨 Design Features
- Purple/pink gradient backgrounds matching app visuals
- Rounded corners on all UI elements
- Modern, clean aesthetic
- Smooth animations on the orb visualizer
- Consistent color scheme using `#9B7EDE` purple
🧠 Lira MVP Workflow
```mermaid
flowchart TD
    A[User speaks into Flutter app] --> B[Flutter captures audio]
    B --> C["Speech-to-Text (Whisper / Vosk / Coqui STT)"]
    C --> D[Text sent to Python FastAPI backend]
    D --> E["Backend queries free LLM (Mistral / LLaMA / OpenRouter)"]
    E --> F["AI generates agentic response (grandma voice style)"]
    F --> G[Text returned to Flutter app]
    G --> H["Text-to-Speech (Coqui TTS / flutter_tts)"]
    H --> I[Flutter plays AI voice response]
    I -->|User continues conversation| A
```
Workflow explanation:
- User speaks → Flutter captures audio
- Audio → text via STT
- Python backend receives text → queries free LLM
- LLM generates empathetic, agentic response
- Text-to-speech converts AI text → voice
- Flutter plays voice back to user
- Conversation continues naturally
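The loop above can be sketched end-to-end as a single backend-side function. This is a minimal sketch only: `stt_transcribe`, `query_llm`, and `tts_synthesize` are hypothetical placeholders standing in for the real Whisper/Vosk, LLM, and Coqui TTS calls.

```python
# Sketch of one full Lira turn: audio in -> transcript -> LLM reply -> audio out.
# The three helpers are placeholders for real Whisper / LLM / Coqui TTS calls.

GRANDMA_PROMPT = (
    "You are Lira, a warm grandmotherly companion. "
    "Answer with empathy and short, practical advice."
)

def stt_transcribe(audio: bytes) -> str:
    # Placeholder: the real app runs Whisper (faster-whisper) or Vosk here.
    return audio.decode("utf-8")

def query_llm(text: str) -> str:
    # Placeholder: the real app POSTs GRANDMA_PROMPT + the transcript to an
    # OpenAI-compatible endpoint (OpenRouter / HuggingFace Inference).
    return f"Oh dear, about '{text}' -- let's take it one step at a time."

def tts_synthesize(reply: str) -> bytes:
    # Placeholder: the real app calls Coqui TTS (or flutter_tts client-side).
    return reply.encode("utf-8")

def handle_turn(audio: bytes) -> bytes:
    """One conversational turn, mirroring the flowchart above."""
    transcript = stt_transcribe(audio)
    reply = query_llm(transcript)
    return tts_synthesize(reply)
```

In the actual backend each helper would live behind the `/stt`, `/chat`, and `/tts` routes rather than in one function, but the data flow is the same.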
🛠️ Free Backend Setup (Python + LLM)
- Python with FastAPI for REST API endpoints
- Free LLM options: OpenRouter, HuggingFace Inference (Mistral, LLaMA, Grok, Qwen)
- Speech-to-Text: Whisper (local) or Vosk
- Text-to-Speech: Coqui TTS or flutter_tts
- Conversation memory: store last 3–5 messages in RAM (privacy-first)
Fully free, no subscription required, and privacy-friendly MVP
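The RAM-only conversation memory described above can be kept as a bounded per-session history. A minimal sketch, assuming an OpenAI-style `messages` payload; names like `remember` and `build_messages` are illustrative, not the project's actual API:

```python
from collections import deque

MAX_TURNS = 5  # keep only the last 5 messages, per the privacy-first design

# session_id -> bounded message history; nothing is written to disk
_memory: dict[str, deque] = {}

def remember(session_id: str, role: str, content: str) -> None:
    """Append a message; the deque silently drops the oldest past MAX_TURNS."""
    history = _memory.setdefault(session_id, deque(maxlen=MAX_TURNS))
    history.append({"role": role, "content": content})

def build_messages(session_id: str, system_prompt: str) -> list[dict]:
    """Assemble the messages payload for an OpenAI-style chat endpoint."""
    history = _memory.get(session_id, deque())
    return [{"role": "system", "content": system_prompt}, *history]
```

Because `deque(maxlen=...)` evicts automatically, old turns vanish from memory without any cleanup code, and restarting the server wipes everything.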
📂 Project Structure
```
Lira/
├── lib/
│   ├── screens/
│   │   ├── home_screen.dart
│   │   ├── voice_analysis_screen.dart
│   │   └── smart_chat_screen.dart
│   ├── widgets/
│   │   ├── status_bar.dart
│   │   └── orb_visualizer.dart
│   └── utils/
│       └── gradient_background.dart
├── assets/
├── backend/
│   ├── app/
│   │   ├── main.py        # FastAPI factory
│   │   ├── routers/       # chat, stt, tts routes
│   │   ├── schemas.py     # Pydantic models
│   │   └── services/      # LLM provider abstractions
│   └── requirements.txt
└── README.md
```
▶️ Getting Started
1. Clone the repository

```shell
git clone https://github.com/your-username/lira.git
cd lira
```

2. Install Flutter dependencies

```shell
flutter pub get
```

3. Run the app

```shell
flutter run
```

4. Backend setup

```shell
cd backend
pip install -r requirements.txt
cp .env.example .env
# Edit .env and add your LLM API key
uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
```
5. Configure environment

Edit backend/.env:

```
LLM_API_BASE_URL=https://openrouter.ai/api/v1
LLM_API_KEY=sk-...
LLM_MODEL=mistralai/mistral-7b-instruct
```
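On the backend side, these variables can be read once at startup and failed fast if the key is missing. A minimal sketch using only the standard library; the `Settings` class and its defaults are illustrative assumptions, not the project's actual config code:

```python
import os

class Settings:
    """Reads the backend/.env variables (loaded into the environment)."""

    def __init__(self) -> None:
        # Defaults mirror the example .env above; override via environment.
        self.api_base_url = os.getenv(
            "LLM_API_BASE_URL", "https://openrouter.ai/api/v1"
        )
        self.api_key = os.getenv("LLM_API_KEY", "")
        self.model = os.getenv("LLM_MODEL", "mistralai/mistral-7b-instruct")
        if not self.api_key:
            # Fail at startup rather than on the first chat request.
            raise RuntimeError("LLM_API_KEY is not set; add it to backend/.env")
```

In practice FastAPI projects often use `python-dotenv` or `pydantic-settings` to load the `.env` file itself; the validation idea is the same.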
6. Configure Flutter backend URL
Edit lib/config/api_config.dart and set your backend URL:
- Local:
http://localhost:8000 - Android Emulator:
http://10.0.2.2:8000 - Physical device:
http://YOUR_COMPUTER_IP:8000
📖 For detailed setup instructions, see SETUP.md
📌 Roadmap for MVP → Full App
- Multi-language support (Amharic, English)
- Premium voices & AI personality options
- Push notifications & reminders
- Advanced conversation memory & reasoning
- Integrations with Neuroviate for multicultural empathy
- Polished UI animations and orb visualizer
🗣️ Voice + Agentic Integration Checklist
- Audio capture (Flutter): use `record` or `flutter_sound` to stream PCM via WebSocket to the `/stt` endpoint. Buffer 1–2 s chunks for responsiveness.
- Speech-to-Text (Python): replace the stub with Whisper (`faster-whisper`) or Vosk. Emit partial transcripts to the client so `voice_analysis_screen.dart` can display live text.
- Conversation hand-off: send the latest transcript plus the last 5–10 turns to `/chat`. The backend keeps persona prompts and temperature settings server-side.
- LLM provider config: switch models via the `LLM_MODEL` env var without touching Flutter. Supports OpenRouter, HuggingFace, or local inference once you point `LLM_API_BASE_URL` accordingly.
- Text-to-Speech: call `/tts` with the assistant reply. Implement Coqui TTS (offline) or `gTTS` for a quick cloud option; Flutter plays audio via `just_audio`.
- Memory + tools: use the `conversation` payload to pass lightweight memory now; later extend the backend to persist slots and emit `tool_calls` for reminders, journaling, etc.
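The 1–2 s buffering mentioned in the checklist can be sketched as a small server-side accumulator that turns an arbitrary WebSocket byte stream into fixed-size STT chunks. A minimal sketch: the 16 kHz mono 16-bit PCM format and the `ChunkBuffer` name are assumptions for illustration, not app constants.

```python
# Assumed audio format: 16 kHz mono, 16-bit PCM (2 bytes per sample).
SAMPLE_RATE = 16_000
BYTES_PER_SAMPLE = 2
CHUNK_SECONDS = 1.5
CHUNK_BYTES = int(SAMPLE_RATE * BYTES_PER_SAMPLE * CHUNK_SECONDS)  # 48000

class ChunkBuffer:
    """Accumulates streamed PCM bytes and yields fixed-size chunks for STT."""

    def __init__(self) -> None:
        self._buf = bytearray()

    def feed(self, data: bytes) -> list[bytes]:
        """Add incoming bytes; return every complete chunk now available."""
        self._buf.extend(data)
        chunks = []
        while len(self._buf) >= CHUNK_BYTES:
            chunks.append(bytes(self._buf[:CHUNK_BYTES]))
            del self._buf[:CHUNK_BYTES]
        return chunks
```

Each returned chunk would be handed to the Whisper/Vosk stub behind `/stt`; leftover bytes stay buffered until the next WebSocket frame arrives, which is what keeps partial transcripts responsive.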
🤝 Contributing
Pull requests welcome! Please open an issue for major changes.
📄 License
MIT License
