Persona
Persona transforms interview prep: a fully local AI coach giving personalized, private feedback verbally and visually so students, job seekers, and teams worldwide can practice with confidence.
A local, private AI-powered interview coach.
Contents
- Overview
- Why GPT-OSS:20B
- Project Structure
- Requirements
- Install dependencies
- Ollama Setup
- Usage
- Tech Stack
- License
- Author
Try it out → Windows Executable
Live walkthrough 👉 YouTube
Overview
Persona is a completely local AI interview coach that helps candidates practice for interviews across any domain.
Unlike online tools, Persona ensures 100% privacy by running fully on your machine. It adapts to your skills and the employer’s requirements, and conducts real-time verbal interviews powered by the GPT-OSS:20B model.
Persona also uses Mediapipe + OpenCV to analyze your posture and body language during the session, giving instant feedback and generating detailed reports with:
- 📊 Confidence scoring
- 🗣️ Answer quality
- 🌐 English proficiency
- 🧍 Posture tracking
- 📑 Recruiter insights
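The report metrics above can be sketched as a simple per-session summary. This is a hypothetical structure for illustration only (the `TurnScore` fields and 0–100 scale are assumptions; the real schema is whatever Persona writes to its JSON session logs):

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class TurnScore:
    """Scores for one question/answer turn, each on a 0-100 scale."""
    confidence: float
    answer_quality: float
    english_proficiency: float
    posture: float

def summarize(turns: list[TurnScore]) -> dict:
    """Average each metric across all turns of an interview."""
    return {
        "confidence": mean(t.confidence for t in turns),
        "answer_quality": mean(t.answer_quality for t in turns),
        "english_proficiency": mean(t.english_proficiency for t in turns),
        "posture": mean(t.posture for t in turns),
    }

turns = [TurnScore(70, 80, 90, 60), TurnScore(90, 60, 90, 80)]
print(summarize(turns))
```

Averages like these are what the end-of-session line graph plots over time.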
Built with Flet, Persona delivers a sleek, minimalist GUI optimized for performance so that system resources are prioritized for the model itself.
Why GPT-OSS:20B?
Persona is powered by GPT-OSS:20B, chosen specifically for its unique advantages:
- 📏 128K context window → handles long interviews seamlessly.
- ⚡ MXFP4 quantization → reduces memory + compute requirements while maintaining high accuracy.
- 🔀 Mixture-of-Experts architecture → enables blazing-fast inference speeds, even on CPU.
This combination makes Persona fast, private, and reliable — unlike most online interview tools.
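To see why the 128K window matters: the CV text plus the whole running transcript must fit in the prompt. A rough budget check, using the common ~4 characters-per-token heuristic (an approximation for illustration, not GPT-OSS:20B's real tokenizer):

```python
CONTEXT_WINDOW = 128_000  # GPT-OSS:20B context size, in tokens
CHARS_PER_TOKEN = 4       # rough heuristic; real tokenizers vary

def estimate_tokens(text: str) -> int:
    """Crude token estimate from character count."""
    return len(text) // CHARS_PER_TOKEN

def fits_in_context(transcript: str, reserve: int = 4_000) -> bool:
    """True if the transcript still leaves `reserve` tokens for the model's reply."""
    return estimate_tokens(transcript) + reserve <= CONTEXT_WINDOW

# A long interview transcript (~60,000 words, ~300,000 characters) still fits.
print(fits_in_context("word " * 60_000))
```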
Screenshots

Minimalist landing page with "Get Started" flow.
Live posture tracking and verbal interview interface.

Line graph showing confidence, answer quality, proficiency, and posture.
Project Structure
Persona/
├── 08-09, 19-34.json        # Session/experiment logs
├── 08-09, 19-44.json
├── 08-09, 20-48.json
├── 08-09, 22-34.json
│
├── LICENSE                  # License file
├── README.md                # Project documentation
├── requirements.txt         # Python dependencies
│
├── Sylphie_voice.py         # Voice synthesis and processing (Piper TTS)
├── eye_detection.py         # Eye detection / vision-based module
├── hugginface_inference.py  # Hugging Face inference integration
├── main.py                  # Main entry point of the project
├── ollama_gpt.py            # Ollama GPT integration module (GPT-OSS:20B)
├── python_pdf_docx.py       # PDF/DOCX processing module (CV parsing)
│
├── assets/
│   ├── fonts/               # Montserrat-Regular.ttf
│   └── images/              # App icons, graphics, screenshots
│
├── piper_models/            # Voice models (Piper TTS; default voice hfc_female)
└── .gitignore
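The session logs at the top of the tree are named by timestamp, apparently in `MM-DD, HH-MM` form (an assumption read off the filenames). A small sketch of listing them chronologically:

```python
from datetime import datetime
from pathlib import Path

def session_logs_sorted(folder: Path) -> list[Path]:
    """Return session JSON logs ordered by the timestamp in their filename."""
    def parse(p: Path) -> datetime:
        # "08-09, 19-34.json" -> month-day, hour-minute (no year is stored)
        return datetime.strptime(p.stem, "%m-%d, %H-%M")
    return sorted(folder.glob("*.json"), key=parse)
```

Lexicographic sorting would also work for these names, but parsing makes the naming convention explicit and fails loudly on a stray file.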
Requirements
- Python 3.12.7
- Ollama (for GPT-OSS:20B local inference)
- CUDA-compatible GPU (optional, for faster inference)
- Works on Linux / macOS / Windows
- At least 16 GB of system RAM
Install dependencies
git clone https://github.com/<your-username>/persona.git
cd persona
python3.12 -m venv venv
source venv/bin/activate # (or venv\Scripts\activate on Windows)
pip install -r requirements.txt
Ollama Setup
🔴 Note: Persona relies on GPT-OSS:20B running locally via Ollama. Make sure you have the correct Ollama version installed and running.
Install Ollama → Download here
Verify Ollama version
ollama --version
(Recommended: latest stable version)
Pull GPT-OSS:20B model
ollama pull gpt-oss:20b
Start Ollama service (must run in the background)
ollama run gpt-oss:20b
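With the model running, an application can talk to it through Ollama's local REST API (`POST /api/generate` on port 11434). A minimal sketch of how such a call could look; the endpoint and payload fields are standard Ollama, but the prompt wording and function names here are made up, not Persona's actual code:

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(question: str, cv_summary: str) -> dict:
    """Assemble a non-streaming generate request for GPT-OSS:20B."""
    return {
        "model": "gpt-oss:20b",
        "prompt": (f"Candidate CV summary: {cv_summary}\n"
                   f"Interview question: {question}"),
        "stream": False,
    }

def ask(question: str, cv_summary: str) -> str:
    """Send the request to the local Ollama server and return the model's reply."""
    data = json.dumps(build_payload(question, cv_summary)).encode()
    req = request.Request(OLLAMA_URL, data=data,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because everything goes to `localhost`, no interview data ever leaves the machine.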
Usage
Start the app
python main.py
On launch:
- Click Get Started.
- Upload your CV (PDF/DOCX) or skip for demo.
- Choose your field of work and role.
- Enter details about yourself and what the employer is looking for.
- Persona begins the mock interview:
  - Verbal Q/A in real time with GPT-OSS:20B.
  - Faster-Whisper handles speech-to-text.
  - Posture analysis runs with Mediapipe + OpenCV.
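Posture tracking ultimately reduces to geometry on pose landmarks. A library-free sketch of one such check, shoulder tilt computed from two landmark positions (Mediapipe actually emits normalized landmark coordinates per frame; the 10° threshold here is an arbitrary illustration, not Persona's tuned value):

```python
import math

def shoulder_tilt_deg(left: tuple[float, float],
                      right: tuple[float, float]) -> float:
    """Angle of the left-to-right shoulder line from horizontal, in degrees."""
    dx = right[0] - left[0]
    dy = right[1] - left[1]
    return abs(math.degrees(math.atan2(dy, dx)))

def posture_ok(left, right, max_tilt_deg: float = 10.0) -> bool:
    """Flag leaning/slouching when the shoulder line tilts past the threshold."""
    return shoulder_tilt_deg(left, right) <= max_tilt_deg

# Near-level shoulders in normalized image coordinates pass the check.
print(posture_ok((0.35, 0.50), (0.65, 0.52)))
```

Running a check like this every frame and averaging the pass rate gives a posture score for the final report.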
After interview:
Get detailed analytics with graphs and recruiter-style notes.
Get Persona
💾 Download the fully compiled Windows version of Persona here:
⚠️ Note: Make sure you have at least 16 GB of RAM, and that Ollama is installed with GPT-OSS:20B pulled and running.
Tech Stack
- LLM: GPT-OSS:20B (via Ollama)
- Runner: Ollama (local model hosting)
- STT: Faster-Whisper (small model, GPU-accelerated if available)
- TTS: Piper (default female voice hfc_female)
- GUI: Flet
- Vision: Mediapipe + OpenCV (posture, eye detection)
- Python: 3.12.7
License
Apache 2.0
Author
Developed by Anshul.