Gemini PHP for Laravel
⚡️ Gemini PHP for Laravel is a community-maintained PHP API client that allows you to interact with the Gemini AI API.
- Fatih AYDIN github.com/aydinfatih
- Vytautas Smilingis github.com/Plytas
For more information, take a look at the google-gemini-php/client repository.
Table of Contents
- Prerequisites
- Setup
- Usage
- Chat Resource
- Text-only Input
- Text-and-image Input
- Text-and-video Input
- Image Generation
- Multi-turn Conversations (Chat)
- Chat with Streaming
- Stream Generate Content
- Structured Output
- Function calling
- Code Execution
- Grounding with Google Search
- System Instructions
- Speech generation
- Thinking Mode
- Count tokens
- Configuration
- File Management
- Cached Content
- Embedding Resource
- Models
- Chat Resource
- Troubleshooting
- Testing
Prerequisites
To complete this quickstart, make sure that your development environment meets the following requirements:
- Requires PHP 8.1+
- Requires Laravel 9, 10, 11, or 12
Setup
Installation
First, install Gemini via the Composer package manager:
composer require google-gemini-php/laravel
Next, execute the install command:
php artisan gemini:install
This will create a config/gemini.php configuration file in your project, which you can modify to your needs using environment variables. A blank environment variable for the Gemini API key is appended to your .env file:
GEMINI_API_KEY=
You can also define the following optional environment variables:
GEMINI_BASE_URL=
GEMINI_REQUEST_TIMEOUT=
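The configuration file maps these environment variables to package options. A minimal sketch of what config/gemini.php may look like (illustrative only; the file published by the install command is authoritative and its key names may differ):

```php
<?php

// Illustrative sketch — see the config/gemini.php published by gemini:install.
return [
    // Your Gemini API key from Google AI Studio.
    'api_key' => env('GEMINI_API_KEY'),

    // Optional override of the API base URL, e.g. when using a proxy.
    'base_url' => env('GEMINI_BASE_URL'),

    // Optional HTTP request timeout in seconds.
    'request_timeout' => env('GEMINI_REQUEST_TIMEOUT', 30),
];
```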
Setup your API key
To use the Gemini API, you'll need an API key. If you don't already have one, create a key in Google AI Studio.
Upgrade to 2.0
Starting with the 2.0 release, this package works only with the Gemini v1beta API (see API versions).
To update, run this command:
composer require google-gemini-php/laravel:^2.0
This release introduces support for new features:
- Structured output
- System instructions
- File uploads
- Function calling
- Code execution
- Grounding with Google Search
- Cached content
- Thinking model configuration
- Speech model configuration
The \Gemini\Enums\ModelType enum has been deprecated and will be removed in the next major version. Along with this change, the Gemini::geminiPro() and Gemini::geminiFlash() methods have been removed.
We suggest using the Gemini::generativeModel() method and passing the model string directly. All methods that previously accepted the ModelType enum now accept any BackedEnum. We recommend implementing your own enum for convenience.
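For example, a project-level backed enum can stand in for the removed ModelType enum. This is a sketch; the model names below are examples, not an exhaustive or authoritative list:

```php
<?php

// A sketch of a project-level model enum; model names are examples only.
enum GeminiModel: string
{
    case Flash = 'gemini-2.0-flash';
    case FlashLite = 'gemini-2.0-flash-lite';
}

// Any method that previously took ModelType accepts this BackedEnum instead:
// Gemini::generativeModel(model: GeminiModel::Flash)->generateContent('Hello');
```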
There may be other breaking changes not listed here. If you encounter any issues, please submit an issue or a pull request.
Usage
Interact with Gemini's API:
use Gemini\Enums\ModelVariation;
use Gemini\GeminiHelper;
use Gemini\Laravel\Facades\Gemini;
$result = Gemini::generativeModel(model: 'gemini-2.0-flash')->generateContent('Hello');
$result->text(); // Hello! How can I assist you today?
// Helper method usage
$result = Gemini::generativeModel(
model: GeminiHelper::generateGeminiModel(
variation: ModelVariation::FLASH,
generation: 2.5,
version: "preview-04-17"
), // models/gemini-2.5-flash-preview-04-17
)->generateContent('Hello');
$result->text(); // Hello! How can I assist you today?
Chat Resource
For a complete list of supported input formats and methods in Gemini API v1, see the models documentation.
Text-only Input
Generate a response from the model given an input message.
use Gemini\Laravel\Facades\Gemini;
$result = Gemini::generativeModel(model: 'gemini-2.0-flash')->generateContent('Hello');
$result->text(); // Hello! How can I assist you today?
Text-and-image Input
Generate responses by providing both text prompts and images to the Gemini model.
use Gemini\Data\Blob;
use Gemini\Enums\MimeType;
use Gemini\Laravel\Facades\Gemini;
$result = Gemini::generativeModel(model: 'gemini-2.0-flash')
->generateContent([
'What is this picture?',
new Blob(
mimeType: MimeType::IMAGE_JPEG,
data: base64_encode(
file_get_contents('https://storage.googleapis.com/generativeai-downloads/images/scones.jpg')
)
)
]);
$result->text(); // The picture shows a table with a white tablecloth. On the table are two cups of coffee, a bowl of blueberries, a silver spoon, and some flowers. There are also some blueberry scones on the table.
Text-and-video Input
Process video content and get AI-generated descriptions using the Gemini API with an uploaded video file.
use Gemini\Data\UploadedFile;
use Gemini\Enums\MimeType;
use Gemini\Laravel\Facades\Gemini;
$result = Gemini::generativeModel(model: 'gemini-2.0-flash')
->generateContent([
'What is this video?',
new UploadedFile(
fileUri: '123-456', // accepts just the name or the full URI
mimeType: MimeType::VIDEO_MP4
)
]);
$result->text(); // The video shows...
Image Generation
Generate images from text prompts using the Imagen model.
use Gemini\Data\ImageConfig;
use Gemini\Data\GenerationConfig;
use Gemini\Laravel\Facades\Gemini;
$imageConfig = new ImageConfig(aspectRatio: '16:9');
$generationConfig = new GenerationConfig(imageConfig: $imageConfig);
$response = Gemini::generativeModel(model: 'gemini-2.5-flash-image')
->withGenerationConfig($generationConfig)
->generateContent('Draw a futuristic city');
// Save the image
file_put_contents('image.png', base64_decode($response->parts()[0]->inlineData->data));
Multi-turn Conversations (Chat)
Using Gemini, you can build freeform conversations across multiple turns.
use Gemini\Data\Content;
use Gemini\Enums\Role;
use Gemini\Laravel\Facades\Gemini;
$chat = Gemini::generativeModel(model: 'gemini-2.0-flash')
->startChat(history: [
Content::parse(part: 'The stories you write about what I have to say should be one line. Is that clear?'),
Content::parse(part: 'Yes, I understand. The stories I write about your input should be one line long.', role: Role::MODEL)
]);
$response = $chat->sendMessage('Create a story set in a quiet village in 1600s France');
echo $response->text(); // Amidst rolling hills and winding cobblestone streets, the tranquil village of Beausoleil whispered tales of love, intrigue, and the magic of everyday life in 17th century France.
$response = $chat->sendMessage('Rewrite the same story in 1600s England');
echo $response->text(); // In the heart of England's lush countryside, amidst emerald fields and thatched-roof cottages, the village of Willowbrook unfolded a tapestry of love, mystery, and the enchantment of ordinary days in the 17th century.
Chat with Streaming
You can also stream the response in a chat session. The history is automatically updated with the full response after the stream completes.
use Gemini\Laravel\Facades\Gemini;
$chat = Gemini::generativeModel(model: 'gemini-2.0-flash')->startChat();
$stream = $chat->streamSendMessage('Hello');
foreach ($stream as $response) {
echo $response->text();
}
Stream Generate Content
By default, the model returns a response after completing the entire generation process. You can achieve faster interactions by not waiting for the entire result, and instead use streaming to handle partial results.
use Gemini\Laravel\Facades\Gemini;
$stream = Gemini::generativeModel(model: 'gemini-2.0-flash')
->streamGenerateContent('Write a long story about a magic backpack.');
foreach ($stream as $response) {
echo $response->text();
}
Structured Output
Gemini generates unstructured text by default, but some applications require structured output. For these use cases, you can constrain Gemini to respond with JSON.