Openjourney - MidJourney UI clone
A high-fidelity, open-source clone of the MidJourney web interface built with Next.js 15, featuring real AI image and video generation powered by Google's Gemini SDK. Use Imagen 4 to generate images and Veo 2 and 3 for image and text to video with audio.
Or extend it and bring your favorite models to the same experience.
✨ Features
🎨 AI Image Generation
- Imagen 4 integration for high-quality image generation
- 4-image grid layout matching MidJourney's design
- Real-time generation with loading animations
🎬 AI Video Generation
- Veo 3 text-to-video generation
- Veo 2 image-to-video conversion
- 2x2 video grid with autoplay on hover
🚀 Interactive features
- Download generated images and videos
- Image-to-video conversion with one click
- Hover animations and professional transitions
- Real-time loading states with skeleton animations
- Film strip navigation to easily flip through your generations
🛠️ Tech Stack
- Next.js 15 with App Router and Turbopack
- TypeScript for type safety
- Tailwind CSS v4 for styling
- Framer Motion for animations
- ShadCN UI components
- Google GenAI SDK for AI generation
- Radix UI for accessible components
📋 Prerequisites
- Node.js 18+ (20+ recommended)
- npm or yarn
- Google AI API Key (free tier available)
🚀 Quick Start
1. Clone & Install
git clone https://github.com/your-username/openjourney.git
cd openjourney/openjourney-app
npm install
2. Environment Setup
Create .env.local:
GOOGLE_AI_API_KEY=your_google_ai_api_key_here
Get your API key:
- Visit Google AI Studio
- Create a new project or select an existing one
- Generate an API key
- Copy and paste it into .env.local
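The server-side routes read this key from the environment. As a minimal sketch (the helper below is hypothetical, not part of the repo), you can fail fast with a clear message when the key is missing:

```typescript
// Hypothetical helper - fail fast if GOOGLE_AI_API_KEY is missing, instead
// of letting an SDK call fail mid-request with a less obvious error.
export function requireApiKey(
  env: Record<string, string | undefined>
): string {
  const key = env.GOOGLE_AI_API_KEY;
  if (!key) {
    throw new Error(
      "GOOGLE_AI_API_KEY is not set - add it to .env.local (see Environment Setup)"
    );
  }
  return key;
}
```

An API route would call `requireApiKey(process.env)` before constructing the SDK client.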
3. Development Server
npm run dev
Open http://localhost:3000 to view the app.
🎯 Usage Guide
Generating Images
- Type your prompt in the input bar
- Click "Image" button or press Enter
- Watch 4 high-quality images generate in real-time
- Hover to Download, Expand, or Animate with Veo 2
Generating Videos
- Type your video prompt
- Click "Video" button
- Watch videos generate (takes 30-60 seconds)
- Hover over video row to autoplay both videos
Image-to-Video Conversion
- Generate or use sample images
- Hover over any image
- Click "Animate with Veo 2"
- Watch as image becomes animated video
Viewing & Downloading
- Expand - Click to view in fullscreen lightbox
- Download - Save images/videos locally
- Navigation - Use arrow keys or scroll in fullscreen mode
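The actions above ultimately work with base64 image data returned by the app's own API routes. A hedged client-side sketch (helper names are ours, assuming the /api/generate-images route from the project structure returns `{ images: string[] }` of base64 PNGs):

```typescript
// Hypothetical client helper - call the app's own image-generation route.
// Assumes the route returns { images: string[] } of base64-encoded PNGs.
export async function generateImages(prompt: string): Promise<string[]> {
  const res = await fetch("/api/generate-images", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  if (!res.ok) throw new Error(`Generation failed: HTTP ${res.status}`);
  const { images } = (await res.json()) as { images: string[] };
  return images;
}

// Wrap base64 PNG data in a data URL so it can feed <img> or a download link.
export function toDataUrl(base64Png: string): string {
  return `data:image/png;base64,${base64Png}`;
}
```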
🏗️ Project Structure
openjourney-app/
├── src/
│ ├── app/
│ │ ├── api/ # API routes for AI generation
│ │ │ ├── generate-images/ # Imagen 4 integration
│ │ │ ├── generate-videos/ # Veo 3 text-to-video
│ │ │ └── image-to-video/ # Veo 2 image-to-video
│ │ ├── globals.css # Global styles
│ │ ├── layout.tsx # Root layout
│ │ └── page.tsx # Main page component
│ ├── components/
│ │ ├── ui/ # ShadCN UI components
│ │ ├── prompt-bar.tsx # Prompt input with logo
│ │ ├── content-grid.tsx # Generation management
│ │ ├── image-grid.tsx # 4-image display grid
│ │ ├── video-grid.tsx # 2x2 video display
│ │ ├── loading-grid.tsx # Skeleton loading states
│ │ └── lightbox-modal.tsx # Fullscreen viewer
│ └── lib/
│ └── utils.ts # Utility functions
├── public/
│ ├── openjourney-logo.svg # Brand logo
│ ├── sample-images/ # Demo images
│ └── sample-videos/ # Demo videos
├── next.config.js # Next.js configuration
├── tailwind.config.ts # Tailwind configuration
└── package.json # Dependencies
🎨 AI Models Used
Imagen 4 (Image Generation)
- Model: imagen-3.0-generate-001
- Output: 4 high-quality 1024x1024 images
- Format: Base64-encoded PNG
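As a hedged sketch, the request an API route would hand to the Google GenAI SDK's image call (`ai.models.generateImages(...)` in the `@google/genai` package; the helper name here is hypothetical) looks roughly like:

```typescript
// Sketch of the Imagen request shape (helper name is hypothetical).
// Model id and 4-image output mirror the list above; an API route would
// pass this object to ai.models.generateImages(...) from @google/genai.
export function imagenRequest(prompt: string) {
  return {
    model: "imagen-3.0-generate-001",
    prompt,
    config: { numberOfImages: 4 }, // fills the 4-image grid
  };
}
```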
Veo 3 (Text-to-Video)
- Model: veo-3
- Output: High-quality video generation with audio
- Duration: ~3-5 seconds, 720p
Veo 2 (Image-to-Video)
- Model: veo-2.0-generate-001 with image input
- Input: Base64 image + text prompt
- Output: Animated video from a static image
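Veo generation is a long-running operation (the usage guide above quotes 30-60 seconds), so the video routes have to poll until the operation completes. A generic sketch of that pattern, with the SDK-specific status call injected (all names here are assumptions, not the repo's code):

```typescript
// Generic long-running-operation poller (hypothetical helper). With the
// @google/genai SDK, getOperation would wrap something like
// ai.operations.getVideosOperation({ operation }).
export async function pollUntilDone<T extends { done?: boolean }>(
  getOperation: () => Promise<T>,
  intervalMs = 5000,
  maxAttempts = 24 // ~2 minutes at the default interval
): Promise<T> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const op = await getOperation();
    if (op.done) return op;
    // Wait before the next status check.
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("Video generation timed out");
}
```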
🔧 Configuration
Image Domains (next.config.js)
images: {
  remotePatterns: [
    {
      protocol: 'https',
      hostname: 'picsum.photos', // For placeholder images
    },
  ],
}
🚀 Deployment
Vercel (Recommended)
npm run build
npx vercel --prod
Docker
docker build -t openjourney .
docker run -p 3000:3000 openjourney
Environment Variables
Make sure to set GOOGLE_AI_API_KEY on your deployment platform, or enter your key directly in the app.
🤝 Contributing
- Fork the repository
- Create a feature branch: git checkout -b feature/amazing-feature
- Commit your changes: git commit -m 'Add amazing feature'
- Push to the branch: git push origin feature/amazing-feature
- Open a Pull Request
📝 License
This project is licensed under the MIT License - see the LICENSE file for details.
🙏 Acknowledgments
- MidJourney for the original interface inspiration
- Google Gemini API for the powerful generation models
- Vercel for Next.js and deployment platform
- ShadCN for the beautiful UI components
Follow for more!
- For more AI projects: @ammaar on X
