# Echo

Production-ready audio and video transcription app that can run on your laptop or in the cloud.
Use Echo to generate quick captions of video and audio content. Powered by OpenAI’s Whisper, Echo benefits from near-human speech recognition to transcribe spoken words into text.
## Running Echo

### Configuration
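The steps below are a sketch of a typical local launch, assuming the repository at `Lightning-Universe/Echo`, a standard `requirements.txt`, and the usual Lightning app entry point (`app.py`); check the project itself for the exact commands.

```shell
# Clone the repository and install dependencies
# (repo path and entry point are assumptions; verify against the project).
git clone https://github.com/Lightning-Universe/Echo.git
cd Echo
pip install -r requirements.txt

# Run locally:
lightning run app app.py

# Or run in the cloud:
lightning run app app.py --cloud
```

Environment variables from the configuration table below can be exported before launch to override the defaults.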
<details>
<summary>All configuration is done using environment variables, which are documented below with their default values.</summary>

| Name | Type | Default Value | Description |
| -------------------------------------------- | ----------------------------------------------------------------------------------------- | -------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| ECHO_MODE | development/production | production | Toggles monitoring and other production-specific features. |
| ECHO_MODEL_SIZE | See Whisper Docs | base | The Whisper model to use. |
| ECHO_ENABLE_MULTI_TENANCY | boolean | false | If enabled, users cannot see Echoes or data created by other users. If disabled, the app treats everyone as the same user, so all data is visible to all users. |
| ECHO_RECOGNIZER_MIN_REPLICAS | integer | 1 | Minimum number of speech recognizer Works to keep running at all times, even if they are idle. |
| ECHO_RECOGNIZER_MAX_IDLE_SECONDS_PER_WORK | integer | 120 | Autoscaler will shut down any spare recognizer Works that haven't processed anything after this duration. |
| ECHO_RECOGNIZER_MAX_PENDING_CALLS_PER_WORK | integer | 10 | Autoscaler will create a new recognizer Work if any existing recognizer Work has this many pending items to process. |
| ECHO_RECOGNIZER_AUTOSCALER_CRON_SCHEDULE | cron | */5 * * * * | How often the autoscaler checks whether recognizer Works need to be scaled up or down. |
| ECHO_RECOGNIZER_CLOUD_COMPUTE | Cloud Compute | gpu | The instance type each recognizer Work will use when running in the cloud. |
| ECHO_FILESERVER_CLOUD_COMPUTE | Cloud Compute | cpu-small | The instance type the fileserver Work will use when running in the cloud. |
| ECHO_FILESERVER_AUTH_TOKEN | string | None | Pre-shared key that prevents anyone other than the Flow from deleting files from the fileserver. |
| ECHO_YOUTUBER_MIN_REPLICAS | integer | 1 | Minimum number of downloader Works to keep running at all times, even if they are idle. |
| ECHO_YOUTUBER_MAX_IDLE_SECONDS_PER_WORK | integer | 120 | Autoscaler will shut down any spare downloader Works that haven't processed anything after this duration. |
| ECHO_YOUTUBER_MAX_PENDING_CALLS_PER_WORK | integer | 10 | Autoscaler will create a new downloader Work if any existing downloader Work has this many pending items to process. |
| ECHO_YOUTUBER_AUTOSCALER_CRON_SCHEDULE | cron | */5 * * * * | How often the autoscaler checks whether downloader Works need to be scaled up or down. |
| ECHO_YOUTUBER_CLOUD_COMPUTE | Cloud Compute | cpu | The instance type each downloader Work will use when running in the cloud. |
| ECHO_USER_ECHOES_LIMIT | integer | 100 | Maximum number of Echoes that each user can create. |
| ECHO_SOURCE_TYPE_FILE_ENABLED | boolean | true | Allows Echoes to be created from a local file upload (.mp3, .mp4, etc.). |
| ECHO_SOURCE_TYPE_RECORDING_ENABLED | boolean | true | Allows Echoes to be recorded with the device microphone using the UI. |
| ECHO_SOURCE_TYPE_YOUTUBE_ENABLED | boolean | true | Allows Echoes to be created by providing the URL to a public YouTube video. |
| ECHO_GARBAGE_COLLECTION_CRON_SCHEDULE | cron | | How often garbage collection runs. |

</details>
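As a minimal sketch of how an app might consume this kind of configuration, the helper below merges the process environment over the documented defaults and coerces the typed values. The `load_config` function and the subset of variables shown are illustrative, not Echo's actual internals.

```python
import os

# Documented defaults for a representative subset of the variables
# in the table above (all env values start life as strings).
DEFAULTS = {
    "ECHO_MODE": "production",
    "ECHO_MODEL_SIZE": "base",
    "ECHO_ENABLE_MULTI_TENANCY": "false",
    "ECHO_RECOGNIZER_MIN_REPLICAS": "1",
    "ECHO_USER_ECHOES_LIMIT": "100",
}


def load_config(env=None):
    """Merge the environment over the defaults and coerce typed values."""
    env = os.environ if env is None else env
    cfg = dict(DEFAULTS)
    cfg.update({k: v for k, v in env.items() if k in DEFAULTS})
    # Booleans arrive as the strings "true"/"false".
    cfg["ECHO_ENABLE_MULTI_TENANCY"] = (
        cfg["ECHO_ENABLE_MULTI_TENANCY"].lower() == "true"
    )
    # Integer-typed settings.
    cfg["ECHO_RECOGNIZER_MIN_REPLICAS"] = int(cfg["ECHO_RECOGNIZER_MIN_REPLICAS"])
    cfg["ECHO_USER_ECHOES_LIMIT"] = int(cfg["ECHO_USER_ECHOES_LIMIT"])
    return cfg
```

With no overrides, `load_config({})` yields the defaults from the table; setting e.g. `ECHO_ENABLE_MULTI_TENANCY=true` in the environment flips the parsed boolean.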