GPT4Free (g4f)
<p align="center"> <img src="https://github.com/user-attachments/assets/7f60c240-00fa-4c37-bf7f-ae5cc20906a1" alt="GPT4Free logo" height="200" /> </p>
<p align="center"> <span style="background: linear-gradient(45deg, #12c2e9, #c471ed, #f64f59); -webkit-background-clip: text; -webkit-text-fill-color: transparent;"> <strong>Created by <a href="https://github.com/xtekky">@xtekky</a>,<br> maintained by <a href="https://github.com/hlohaus">@hlohaus</a></strong> </span> </p>
<p align="center"> <span>Support the project on</span> <a href="https://github.com/sponsors/hlohaus" target="_blank" rel="noopener noreferrer"> GitHub Sponsors </a> ❤️ </p>
<p align="center"> Live demo & docs: https://g4f.dev | Documentation: https://g4f.dev/docs </p>

GPT4Free (g4f) is a community-driven project that aggregates multiple accessible providers and interfaces to make working with modern LLMs and media-generation models easier and more flexible. GPT4Free aims to offer multi-provider support, a local GUI, OpenAI-compatible REST APIs, and convenient Python and JavaScript clients — all under a community-first license.
This README is a consolidated, improved, and complete guide to installing, running, and contributing to GPT4Free.
Table of contents
- What’s included
- Quick links
- Requirements & compatibility
- Installation
- Running the app
- Using the Python client
- Using GPT4Free.js (browser JS client)
- Providers & models (overview)
- Local inference & media
- Configuration & customization
- Running on smartphone
- Interference API (OpenAI‑compatible)
- Examples & common patterns
- Contributing
- Security, privacy & takedown policy
- Credits, contributors & attribution
- Powered-by highlights
- Changelog & releases
- Manifesto / Project principles
- License
- Contact & sponsorship
- Appendix: Quick commands & examples
What’s included
- Python client library and async client.
- Optional local web GUI.
- FastAPI-based OpenAI-compatible API (Interference API).
- Official browser JS client (g4f.dev distribution).
- Docker images (full and slim).
- Multi-provider adapters (LLMs, media providers, local inference backends).
- Tooling for image/audio/video generation and media persistence.
Quick links
- Website & docs: https://g4f.dev | https://g4f.dev/docs
- PyPI: https://pypi.org/project/g4f
- Docker image: https://hub.docker.com/r/hlohaus789/g4f
- Releases: https://github.com/xtekky/gpt4free/releases
- Issues: https://github.com/xtekky/gpt4free/issues
- Community: Telegram (https://telegram.me/g4f_channel) · Discord News (https://discord.gg/5E39JUWUFa) · Discord Support (https://discord.gg/qXA4Wf4Fsm)
Requirements & compatibility
- Python 3.10+ recommended.
- Google Chrome/Chromium for providers using browser automation.
- Docker for containerized deployment.
- Works on x86_64 and arm64 (slim image supports both).
- Some provider adapters may require platform-specific tooling (Chrome/Chromium, etc.). Check provider docs for details.
Installation
Docker (recommended)
- Install Docker: https://docs.docker.com/get-docker/
- Create persistent directories (Linux/macOS):

```bash
mkdir -p ${PWD}/har_and_cookies ${PWD}/generated_media
sudo chown -R 1200:1201 ${PWD}/har_and_cookies ${PWD}/generated_media
```

- Pull the image:

```bash
docker pull hlohaus789/g4f
```

- Run the container:

```bash
docker run -p 8080:8080 -p 7900:7900 \
  --shm-size="2g" \
  -v ${PWD}/har_and_cookies:/app/har_and_cookies \
  -v ${PWD}/generated_media:/app/generated_media \
  hlohaus789/g4f:latest
```
Notes:
- Port 8080 serves GUI/API; 7900 can expose a VNC-like desktop for provider logins (optional).
- Increase --shm-size for heavier browser automation tasks.
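For scripted deployments, the flags shown above can be assembled programmatically. A minimal sketch (the image tag, ports, and volume paths mirror the example above; the helper itself is illustrative, not part of g4f):

```python
import shlex

def build_docker_run(image, ports, volumes, shm_size="2g"):
    """Assemble a `docker run` command from port and volume mappings."""
    cmd = ["docker", "run", "--shm-size", shm_size]
    for host_port, container_port in ports:
        cmd += ["-p", f"{host_port}:{container_port}"]
    for host_dir, container_dir in volumes:
        cmd += ["-v", f"{host_dir}:{container_dir}"]
    cmd.append(image)
    # shlex.join quotes any token that needs it, so the result is shell-safe
    return shlex.join(cmd)

print(build_docker_run(
    "hlohaus789/g4f:latest",
    ports=[(8080, 8080), (7900, 7900)],
    volumes=[("./har_and_cookies", "/app/har_and_cookies"),
             ("./generated_media", "/app/generated_media")],
))
```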
Slim Docker image (x64 & arm64)
```bash
mkdir -p ${PWD}/har_and_cookies ${PWD}/generated_media
chown -R 1000:1000 ${PWD}/har_and_cookies ${PWD}/generated_media
docker run \
  -p 1337:8080 -p 8080:8080 \
  -v ${PWD}/har_and_cookies:/app/har_and_cookies \
  -v ${PWD}/generated_media:/app/generated_media \
  hlohaus789/g4f:latest-slim
```
Notes:
- The slim image can update the g4f package on startup and installs additional dependencies as needed.
- In this example, the Interference API is mapped to 1337.
Windows Guide (.exe)
👉 Check out the Windows launcher for GPT4Free:
🔗 https://github.com/gpt4free/g4f.exe 🚀
- Download the release artifact `g4f.exe.zip` from: https://github.com/xtekky/gpt4free/releases/latest
- Unzip and run `g4f.exe`.
- Open the GUI at: http://localhost:8080/chat/
- If Windows Firewall blocks access, allow the application.
Python Installation (pip / from source / partial installs)
Prerequisites:
- Python 3.10+ (https://www.python.org/downloads/)
- Chrome/Chromium for some providers.
Install from PyPI (recommended):
```bash
pip install -U g4f[all]
```
Partial installs
- To install only specific functionality, use optional extras groups. See docs/requirements.md in the project docs.
Install from source:
```bash
git clone https://github.com/xtekky/gpt4free.git
cd gpt4free
pip install -r requirements.txt
pip install -e .
```
Notes:
- Some features require Chrome/Chromium or other tools; follow provider-specific docs.
Running the app
GUI (web client)
- Run via Python:

```python
from g4f.gui import run_gui
run_gui()
```

- Or via CLI:

```bash
python -m g4f.cli gui --port 8080 --debug
```
- Open: http://localhost:8080/chat/
FastAPI / Interference API
- Start the FastAPI server:

```bash
python -m g4f --port 8080 --debug
```

- If you used the slim Docker mapping above, the Interference API is available at http://localhost:1337/v1
- Swagger UI: http://localhost:1337/docs
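Because the endpoint is OpenAI-compatible, any OpenAI-style client or plain HTTP request works against it. A sketch of building and sending the raw request (the base URL and model name follow the examples in this README; adjust them to your own mapping):

```python
import json
import urllib.request

BASE_URL = "http://localhost:1337/v1"  # Interference API root (slim-image mapping)

def chat_request_body(model, user_content):
    """Build an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_content}],
    }

def send(body):
    """POST the payload to the chat-completions route (requires a running server)."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

body = chat_request_body("gpt-4o-mini", "Hello, how are you?")
print(json.dumps(body))
```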
CLI
- Start the GUI server:

```bash
python -m g4f.cli gui --port 8080 --debug
```
MCP Server
GPT4Free now includes a Model Context Protocol (MCP) server that allows AI assistants like Claude to access web search, scraping, and image generation capabilities.
Starting the MCP server (stdio mode):
```bash
# Using the g4f command
g4f mcp

# Or using the Python module
python -m g4f.mcp
```
Starting the MCP server (HTTP mode):
```bash
# Start the HTTP server on port 8765
g4f mcp --http --port 8765

# Custom host and port
g4f mcp --http --host 127.0.0.1 --port 3000
```
HTTP mode provides:
- `POST http://localhost:8765/mcp` - JSON-RPC endpoint
- `GET http://localhost:8765/health` - Health check
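In HTTP mode the `/mcp` endpoint speaks JSON-RPC 2.0, as defined by the Model Context Protocol. A sketch of a `tools/call` request for the `web_search` tool (the exact argument names accepted by the tool are an assumption here; check `g4f/mcp/README.md` for the real schema):

```python
import json

def mcp_tool_call(tool_name, arguments, request_id=1):
    """Build a JSON-RPC 2.0 `tools/call` request for an MCP server."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Example: search the web via the MCP server's web_search tool
payload = mcp_tool_call("web_search", {"query": "gpt4free"})
print(json.dumps(payload))
```

POST this body to `http://localhost:8765/mcp` with `Content-Type: application/json` to invoke the tool.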
Configuring with Claude Desktop:
Add to your `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "gpt4free": {
      "command": "python",
      "args": ["-m", "g4f.mcp"]
    }
  }
}
```
Available MCP Tools:
- `web_search` - Search the web using DuckDuckGo
- `web_scrape` - Extract text content from web pages
- `image_generation` - Generate images from text prompts
For detailed MCP documentation, see g4f/mcp/README.md
Optional provider login (desktop within container)
- Accessible at: http://localhost:7900/?autoconnect=1&resize=scale&password=secret
- Useful for logging into web-based providers to obtain cookies/HAR files.
Using the Python client
Install:
```bash
pip install -U g4f[all]
```
Synchronous text example:
```python
from g4f.client import Client

client = Client()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
    web_search=False
)
print(response.choices[0].message.content)
```
Expected:
Hello! How can I assist you today?
Image generation example:
```python
from g4f.client import Client

client = Client()
response = client.images.generate(
    model="flux",
    prompt="a white siamese cat",
    response_format="url"
)
print(f"Generated image URL: {response.data[0].url}")
```
Async client example:
```python
from g4f.client import AsyncClient
import asyncio

async def main():
    client = AsyncClient()
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello, how are you?"}],
    )
    print(response.choices[0].message.content)

asyncio.run(main())
```
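The async client pairs naturally with `asyncio.gather` for issuing several prompts concurrently. A structural sketch, with a stub coroutine standing in for the live `client.chat.completions.create` call so it runs without network access:

```python
import asyncio

async def fake_completion(prompt):
    """Stand-in for an awaitable chat-completion call (no network)."""
    await asyncio.sleep(0)  # yield control, as a real request would
    return f"echo: {prompt}"

async def main():
    prompts = ["Hello", "What is g4f?", "Bye"]
    # gather schedules all coroutines concurrently instead of one by one
    return await asyncio.gather(*(fake_completion(p) for p in prompts))

print(asyncio.run(main()))
```

With the real client, replace `fake_completion` by a coroutine that awaits `client.chat.completions.create(...)` and extracts `response.choices[0].message.content`.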