AutoShorts
Automatically generate viral-ready vertical short clips from long-form gameplay footage using AI-powered scene analysis, GPU-accelerated rendering, and optional AI voiceovers.
AutoShorts analyzes your gameplay videos to identify the most engaging moments—action sequences, funny fails, or highlight achievements—then automatically crops, renders, and adds subtitles or AI voiceovers to create ready-to-upload short-form content.
🎬 Example Output
Here are some shorts automatically generated from gameplay footage:
*(Four embedded sample videos — sample 1 through sample 4 — appear here in the original README; the video embeds were not captured in this text version.)*
🎥 Showcase: Multi-Language & Style Generation
AutoShorts automatically adapts its editing style, captions, and voiceover personality based on the content and target language. Here are some examples generated entirely by the pipeline:
| Content | Style | Language | Video |
| :--- | :--- | :--- | :--- |
| Fortnite | Story Roast | 🇺🇸 English | Watch Part 1 |
| Indiana Jones | GenZ Slang | 🇺🇸 English | Watch Part 1 |
| Battlefield 6 | Dramatic Story | 🇯🇵 Japanese | Watch Part 1 |
| Indiana Jones | Story News | 🇨🇳 Chinese | Watch Part 1 |
| Fortnite | Story Roast | 🇪🇸 Spanish | Watch Part 1 |
| Fortnite | Story Roast | 🇷🇺 Russian | Watch Part 1 |
| Indiana Jones | Auto Gameplay | 🇧🇷 Portuguese | Watch Part 1 |
✨ Features
🎯 AI-Powered Scene Analysis
- Multi-Provider Support: Choose between OpenAI (GPT-5-mini, GPT-4o) or Google Gemini for scene analysis, or run in `local` mode with heuristic scoring (no API needed)
- Gemini Deep Analysis Mode 🧠: Upload the full video to Gemini for context-aware scene detection — the AI sees the whole game, not just short clips
- 7 Semantic Types (all analyzed automatically):
  - `action` — Combat, kills, intense gameplay, close calls
  - `funny` — Fails, glitches, unexpected humor, comedic timing
  - `clutch` — 1vX situations, comebacks, last-second wins
  - `wtf` — Unexpected events, "wait what?" moments, random chaos
  - `epic_fail` — Embarrassing deaths, tragic blunders, game-losing mistakes
  - `hype` — Celebrations, "LET'S GO" energy, peak excitement
  - `skill` — Trick shots, IQ plays, advanced mechanics, impressive techniques
🎙️ Subtitle Generation
- Speech Mode: Uses OpenAI Whisper to transcribe voice/commentary
- AI Captions Mode: AI-generated contextual captions for gameplay without voice
- Caption Styles:
  - Classic: `gaming`, `dramatic`, `funny`, `minimal`
  - GenZ Mode ✨: `genz` — Slang-heavy reactions ("bruh 💀", "no cap", "finna")
  - Story Modes ✨: Narrative-style captions
    - `story_news` — Professional esports broadcaster
    - `story_roast` — Sarcastic roasting commentary
    - `story_creepypasta` — Horror/tension narratives
    - `story_dramatic` — Epic cinematic narration
  - `auto` — Auto-match style to detected semantic type
- PyCaps Integration: Multiple visual templates including `hype`, `retro-gaming`, `neo-minimal`
- AI Enhancement: Semantic tagging and emoji suggestions (e.g., "HEADSHOT! 💀🔥")
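The `auto` caption style can be thought of as a lookup from detected semantic type to a concrete style. The mapping below is a hypothetical sketch using the style and type names from this README; it is not AutoShorts' actual table.

```python
# Hypothetical sketch of "auto" caption styling: map each detected
# semantic type to one of the concrete caption styles listed above.
# The specific pairings here are illustrative assumptions.
AUTO_STYLE_MAP = {
    "action": "gaming",
    "funny": "funny",
    "clutch": "dramatic",
    "wtf": "genz",
    "epic_fail": "story_roast",
    "hype": "genz",
    "skill": "story_news",
}

def resolve_caption_style(requested: str, semantic_type: str) -> str:
    """Return the concrete caption style for a clip."""
    if requested == "auto":
        # Fall back to a neutral style for unrecognized types.
        return AUTO_STYLE_MAP.get(semantic_type, "minimal")
    return requested
```

An explicitly requested style is always respected; only `auto` consults the detected semantic type.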
🔊 AI Voiceover (Qwen3-TTS)
- Voice Design Engine: Powered by Qwen3-TTS 1.7B-VoiceDesign for creating unique voices from natural language descriptions
- Dynamic Voice Generation: AI automatically generates voice persona based on caption style + caption content
- Style-Adaptive Voices: Each caption style has a unique voice preset:
- GenZ → Casual energetic voice with modern slang
- Story News → Professional broadcaster
- Story Roast → Sarcastic playful narrator
- Story Creepypasta → Deep ominous voice with tension
- Story Dramatic → Epic movie-trailer narrator
- Natural Language Instructions: Define voice characteristics via text prompts without needing reference audio
- Ultra-Low Latency: Local inference with FlashAttention 2 optimization
- Multilingual Support: Native support for 10+ languages including English, Chinese, Japanese, Korean
- Smart Mixing: Automatic ducking of game audio when voiceover plays
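The "smart mixing" behavior can be sketched as a per-sample duck-and-sum: while the voiceover is audible, the game track is attenuated (the configurable `TTS_GAME_AUDIO_VOLUME`, default 0.3), otherwise it plays at full volume. This is an illustrative sketch over plain float sample lists, not AutoShorts' GPU implementation.

```python
# Illustrative sketch of voiceover ducking: attenuate game audio while
# the TTS track is active, then sum the two tracks. Assumes both inputs
# are lists of float samples at the same sample rate (an assumption).
def mix_with_ducking(game, voice, game_volume=0.3, voice_volume=1.0):
    mixed = []
    for i, g in enumerate(game):
        v = voice[i] if i < len(voice) else 0.0
        # Duck the game track only while the voiceover has signal.
        duck = game_volume if v != 0.0 else 1.0
        mixed.append(g * duck + v * voice_volume)
    return mixed
```

A production mixer would use a smoothed gain envelope rather than a hard per-sample switch, to avoid audible clicks at duck boundaries.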
⚡ GPU-Accelerated Pipeline
- Scene Detection: Custom implementation using `decord` + PyTorch on GPU
- Audio Analysis: `torchaudio` on GPU for fast RMS and spectral flux calculation
- Video Analysis: GPU streaming via `decord` for stable motion estimation
- Image Processing: `cupy` (CUDA-accelerated NumPy) for blur and transforms
- Rendering: PyTorch + NVENC hardware encoder for ultra-fast rendering
📐 Smart Video Processing
- Scenes ranked by combined action score (audio 0.6 + video 0.4 weights)
- Configurable aspect ratio (default 9:16 for TikTok/Shorts/Reels)
- Smart cropping with optional blurred background for non-vertical footage
- Retry logic during rendering to avoid spurious failures
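The ranking step uses the 0.6/0.4 audio/video weighting mentioned above. Here is a minimal sketch of that combination; the scene dictionaries and key names are illustrative assumptions, not AutoShorts' actual data model.

```python
# Sketch of scene ranking: combined action score = 0.6 * audio + 0.4 * video
# (weights from the README). Scene dicts and key names are assumptions.
AUDIO_WEIGHT, VIDEO_WEIGHT = 0.6, 0.4

def rank_scenes(scenes):
    """Sort candidate scenes by combined action score, best first."""
    def combined(scene):
        return (AUDIO_WEIGHT * scene["audio_score"]
                + VIDEO_WEIGHT * scene["video_score"])
    return sorted(scenes, key=combined, reverse=True)
```

Note that with these weights a loud-but-static scene (high audio, low motion) can outrank a visually busy but quiet one.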
🛡️ Robust Fallback System
AutoShorts is designed to work even when optimal components fail:
| Component | Primary | Fallback |
| :--- | :--- | :--- |
| Video Encoding | NVENC (GPU) | libx264 (CPU) |
| Subtitle Rendering | PyCaps (styled) | FFmpeg burn-in (basic) |
| AI Analysis | OpenAI/Gemini API | Heuristic scoring (local mode) |
| TTS Device | GPU (6GB+ VRAM) | CPU Fallback (slower) |
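The fallback chain in the table above amounts to "try the primary, catch the failure, try the next option." A minimal sketch for the video-encoding row, with a hypothetical `encode_fn` callback standing in for whatever actually invokes FFmpeg:

```python
# Hedged sketch of the encoder fallback: try NVENC (GPU) first, then
# fall back to libx264 (CPU). The encode_fn callback is a hypothetical
# stand-in for the code that actually shells out to FFmpeg.
def encode_with_fallback(encode_fn, codecs=("h264_nvenc", "libx264")):
    """Try each codec in order; return the name of the one that worked."""
    last_error = None
    for codec in codecs:
        try:
            encode_fn(codec)  # e.g. run ffmpeg with -c:v <codec>
            return codec
        except RuntimeError as err:
            last_error = err  # remember the failure, try the next encoder
    raise RuntimeError(f"all encoders failed: {last_error}")
```

The same shape applies to the other rows: PyCaps falling back to FFmpeg burn-in, and the AI providers falling back to local heuristic scoring.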
📋 Requirements
Hardware
- NVIDIA GPU with CUDA support (6GB+ VRAM recommended for Qwen3-TTS 1.7B)
- NVIDIA Drivers and System RAM (16GB+ recommended)
Software
- Python 3.10
- FFmpeg 4.4.2 (for Decord compatibility)
- CUDA Toolkit with `nvcc` (for building Decord from source)
- System libraries: `libgl1`, `libglib2.0-0`
🚀 Installation
Option 1: Makefile Installation (Recommended)
The Makefile handles everything automatically—environment creation, dependency installation, and building Decord with CUDA support.
```bash
git clone https://github.com/divyaprakash0426/autoshorts.git
cd autoshorts

# Run the installer (uses conda/micromamba automatically)
make install

# Setup environment variables
cp .env.example .env
# Edit .env and add your API keys (Gemini/OpenAI)

# Activate the environment
overlay use .venv/bin/activate.nu  # For Nushell
# OR
source .venv/bin/activate          # For Bash/Zsh
```
The Makefile will:
- Download micromamba if conda/mamba is not found
- Create a Python 3.10 environment with FFmpeg 4.4.2
- Install NV Codec Headers for NVENC support
- Build Decord from source with CUDA enabled
- Install all pip requirements
Option 2: Docker (GPU Required)
Prerequisite: NVIDIA Container Toolkit must be installed.
```bash
# Build the image
docker build -t autoshorts .

# Run with GPU access
docker run --rm \
  --gpus all \
  -v $(pwd)/gameplay:/app/gameplay \
  -v $(pwd)/generated:/app/generated \
  --env-file .env \
  autoshorts
```

Note: The `--gpus all` flag is essential for NVENC and CUDA acceleration.
⚙️ Configuration
Copy .env.example to .env and configure:
```bash
cp .env.example .env
```
Key Configuration Options
| Category | Variable | Description |
| :--- | :--- | :--- |
| AI Provider | AI_PROVIDER | openai, gemini, or local (heuristic-only, no API) |
| | AI_ANALYSIS_ENABLED | Enable/disable AI scene analysis |
| | GEMINI_DEEP_ANALYSIS | Gemini-only: upload full video for smarter scene detection (slower initial upload, better results) |
| | OPENAI_MODEL | Model for analysis (e.g., gpt-5-mini) |
| | AI_SCORE_WEIGHT | How much to weight AI vs heuristic (0.0-1.0) |
| Semantic Analysis | SEMANTIC_TYPES | All 7 types analyzed: action, funny, clutch, wtf, epic_fail, hype, skill |
| | CANDIDATE_CLIP_COUNT | Number of clips to analyze |
| Subtitles | ENABLE_SUBTITLES | Enable subtitle generation |
| | SUBTITLE_MODE | speech (Whisper), ai_captions, or none |
| | CAPTION_STYLE | gaming, dramatic, funny, minimal, genz, story_news, story_roast, story_creepypasta, story_dramatic, auto |
| | PYCAPS_TEMPLATE | Visual template for captions |
| TTS Voiceover | ENABLE_TTS | Enable Qwen3-TTS voiceover |
| | TTS_LANGUAGE | Language code (en, zh, ja, ko, de, fr, ru, pt, es, it) |
| | TTS_VOICE_DESCRIPTION | Natural language voice description (auto-generated if empty) |
| | TTS_GAME_AUDIO_VOLUME | Game audio volume when TTS plays (0.0-1.0, default 0.3) |
| | TTS_VOICEOVER_VOLUME | TTS voiceover volume (0.0-1.0, default 1.0) |
| Video Output | TARGET_RATIO_W/H | Aspect ratio (default 9:16) |
| | SCENE_LIMIT | Max clips per source video |
| | MIN/MAX_SHORT_LENGTH | Clip duration bounds (seconds) |
See .env.example for the complete list with detailed descriptions.
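To make the table concrete, here is an illustrative `.env` fragment built from the variable names above. The values shown are examples chosen for this sketch, not the project's defaults; consult `.env.example` for those.

```ini
# Example .env fragment (illustrative values, not defaults)
AI_PROVIDER=gemini
AI_ANALYSIS_ENABLED=true
GEMINI_DEEP_ANALYSIS=true
AI_SCORE_WEIGHT=0.7

ENABLE_SUBTITLES=true
SUBTITLE_MODE=ai_captions
CAPTION_STYLE=auto

ENABLE_TTS=true
TTS_LANGUAGE=en
TTS_GAME_AUDIO_VOLUME=0.3
TTS_VOICEOVER_VOLUME=1.0

TARGET_RATIO_W=9
TARGET_RATIO_H=16
SCENE_LIMIT=3
```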
📖 Usage
1. Place source videos in the `gameplay/` directory
2. Run the script: `python run.py`
3. Generated clips are saved to `generated/`
🧭 Dashboard (Streamlit UI)
Launch the l