GenAIApp
The Google Apps Script binding for the Gemini and OpenAI generative AI APIs.
The GenAIApp library is a Google Apps Script library designed for creating, managing, and interacting with LLMs using Gemini and OpenAI's API. The library provides features like text-based conversation, browsing the web, image analysis, and more, allowing you to build versatile AI chat applications that can integrate with various functionalities and external data sources.
Table of Contents
- Features
- Prerequisites
- Installation
- Usage
- Setting API Keys
- Creating a New Chat
- Adding Messages
- Adding Callable Functions to the Chat
- Enable Web Browsing (Optional)
- Enable OpenAI server-side compaction (Optional)
- Give a Web Page as a Knowledge Base (Optional)
- Add Image (Optional)
- Add File to Chat (optional)
- Add MCP Connector (optional)
- Running the Chat
- FunctionObject Class
- VectorStoreObject Class
- Retrieving Knowledge from an OpenAI Vector Store
- Examples
- Example 1: Send a Prompt and Get Completion
- Example 2: Ask OpenAI to Create a Draft Reply for the Last Email in Gmail Inbox
- Example 3: Retrieve Structured Data Instead of Raw Text with onlyReturnArguments
- Example 4: Use Web Browsing
- Example 5: Describe an Image
- Example 6: Extend a Chat with an MCP Connector
- Example 7: Connect to a Custom MCP Server with setServerUrl()
- Example 8: Continue a Conversation with previous_response_id
- Contributing
- License
- Reference
Features
- Chat Creation: Create interactive chats that can send and receive messages using Gemini or OpenAI's API.
- Web Search Integration: Perform web searches to enhance chatbot responses.
- Image Analysis: Retrieve image descriptions using Gemini and OpenAI's vision models.
- Function Calling: Enable the chat to call predefined functions and utilize their results in conversations.
- Vector Store Search: Retrieve knowledge from OpenAI vector stores for a better contextual response.
- Document Analysis: Analyze documents from Google Drive with support for various formats.
- MCP Connectors: Attach Google Workspace or custom Model Context Protocol connectors to securely retrieve additional context during a conversation.
Prerequisites
The setup for GenAIApp varies depending on which models you plan to use:
- OpenAI models: you'll need an OpenAI API key.
- Google Gemini models: you'll need either a Gemini API key, or a Google Cloud Platform (GCP) project with Vertex AI enabled. If you go through Vertex AI, link your Google Apps Script project to that GCP project and include the following scopes in your manifest file:
"oauthScopes": [
"https://www.googleapis.com/auth/cloud-platform",
"https://www.googleapis.com/auth/script.external_request"
]
Installation
To start using the library, include the GenAIApp code in your Google Apps Script project environment.
Usage
Setting API Keys
You need to set your API keys before starting any chat:
// Set Gemini API Key
GenAIApp.setGeminiAPIKey('your-gemini-api-key');
// Set Gemini Auth if using Google Cloud
GenAIApp.setGeminiAuth('your-gcp-project-id','your-region');
// Set OpenAI API Key if using OpenAI
GenAIApp.setOpenAIAPIKey('your-openai-api-key');
// Set global metadata passed with each request (optional)
GenAIApp.setGlobalMetadata('app', 'demo');
// Use a custom OpenAI-compatible endpoint (optional)
GenAIApp.setPrivateInstanceBaseUrl('https://your-endpoint.example.com');
Creating a New Chat
To start a new chat, call the newChat() method. This creates a new Chat instance.
let chat = GenAIApp.newChat();
Adding Messages
You can add messages to your chat using the addMessage() method. Messages can be from the user or the system.
// Add a user message
chat.addMessage("Hello, how are you?");
// Add a system message (optional)
chat.addMessage("Answer to the user in a professional way.", true);
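The second argument marks a message as a system message. Conceptually, the chat accumulates a role-tagged message list; the sketch below illustrates that convention in plain JavaScript (it is not the library's internal representation):

```javascript
// Sketch of the role convention behind addMessage(text, isSystem):
// user messages get role "user", system messages get role "system".
function addMessage(history, text, isSystem) {
  history.push({ role: isSystem ? 'system' : 'user', content: text });
  return history;
}

const history = [];
addMessage(history, 'Hello, how are you?', false);
addMessage(history, 'Answer to the user in a professional way.', true);
```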
Adding Callable Functions to the Chat
You can create and add functions to the chat that the AI can call during the conversation:
The newFunction() method allows you to create a new Function instance. You can then add this function to your chat using the addFunction() method.
// Create a new function
let myFunction = GenAIApp.newFunction()
.setName("getWeather")
.setDescription("Retrieve the current weather for a given city.")
.addParameter("city", "string", "The name of the city.");
// Add the function to the chat
chat.addFunction(myFunction);
As soon as you add a function to the chat, the library uses the model's function-calling features. For more information:
- https://ai.google.dev/gemini-api/docs/function-calling
- https://platform.openai.com/docs/guides/gpt/function-calling
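For the model's function call to do anything, a function with the matching name must also exist in your script so it can be invoked with the arguments the model returns. A minimal sketch of such a handler (the city names and weather strings below are hard-coded placeholders, not a real weather API):

```javascript
// Handler for the "getWeather" function declared with newFunction().
// The parameter name (city) matches the one declared via addParameter().
function getWeather(city) {
  // Placeholder lookup; a real script would call a weather API here.
  const fakeWeather = {
    Paris: 'Cloudy, 12°C',
    Tokyo: 'Sunny, 21°C',
  };
  return fakeWeather[city] || 'No data for ' + city;
}
```

The string the handler returns is what gets fed back to the model as the function result.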
Enable web browsing (optional)
If you want to allow the chat to perform web searches and fetch web pages, enable browsing on your chat instance:
chat.enableBrowsing(true);
If you want to restrict browsing to a specific website, pass its URL as a second argument, as below.
chat.enableBrowsing(true, "https://support.google.com");
Enable OpenAI server-side compaction (optional)
Use Responses API native compaction to let OpenAI compact long conversations automatically.
const chat = GenAIApp.newChat()
.enableCompaction(true)
.setCompactionThreshold(120000); // minimum: 1000
If you only need default behavior, enabling compaction is enough (default threshold is 10000):
const chat = GenAIApp.newChat().enableCompaction(true);
Give a web page as a knowledge base (optional)
If you don't need to perform a web search and want to directly give a link to a web page you want the chat to read before performing any action, you can use the addKnowledgeLink(url) function.
chat.addKnowledgeLink("https://developers.google.com/apps-script/guides/libraries");
Add Image (optional)
To include an image in the conversation, use the addImage() method with a URL or a Blob.
chat.addImage('https://example.com/image.jpg');
Add File to Chat (optional)
You can include the contents of a Google Drive file or a Blob in your conversation using the addFile() method. This works with both Gemini and OpenAI multimodal models.
// Add a Google Drive file to the chat context using its Drive file ID
chat.addFile('your-google-drive-file-id');
Add MCP Connector (optional)
Use Model Context Protocol (MCP) connectors to let OpenAI Responses API models reach structured data sources such as Gmail, Calendar, Drive, or your own custom MCP server.
const chat = GenAIApp.newChat();
const gmailConnector = GenAIApp.newConnector()
.setConnectorId('gmail')
.setRequireApproval('domain');
chat.addMCP(gmailConnector);
const customConnector = GenAIApp.newConnector()
.setLabel('Salesforce CRM')
.setDescription('Query opportunity data from Salesforce via MCP proxy')
.setServerUrl('https://mcp.example.com/salesforce')
.setAuthorization('Bearer ' + SALESFORCE_MCP_TOKEN)
.setRequireApproval('always');
chat.addMCP(customConnector);
- Google Workspace connectors: Call .setConnectorId("gmail" | "calendar" | "drive") to use Google-managed connectors authenticated with your script's OAuth token by default.
- Custom MCP servers: Configure a connector with .setLabel(), .setDescription(), and .setServerUrl("https://..."), and optionally .setAuthorization() if the server expects a bearer token or API key.
- Approval workflows: .setRequireApproval('never' | 'domain' | 'always') lets you enforce end-user approval before the model calls the connector.
⚠️ MCP connectors are currently available only when you run the chat with OpenAI Responses API models (for example, o4-mini, o3, or gpt-5.4).
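For orientation, OpenAI's Responses API represents an MCP server as a tool entry with fields like server_label, server_url, and require_approval. The sketch below shows how a connector configured with the builder above could map onto that shape; it is an illustration of the wire format, not the library's actual conversion code:

```javascript
// Hypothetical mapping from a connector config to an OpenAI
// Responses API "mcp" tool entry (field names follow OpenAI's
// MCP tool spec; the exact object GenAIApp builds may differ).
function toMcpTool(connector) {
  const tool = {
    type: 'mcp',
    server_label: connector.label,
    server_url: connector.serverUrl,
    require_approval: connector.requireApproval || 'always',
  };
  if (connector.authorization) {
    // Forwarded to the MCP server on each call.
    tool.headers = { Authorization: connector.authorization };
  }
  return tool;
}

const tool = toMcpTool({
  label: 'Salesforce CRM',
  serverUrl: 'https://mcp.example.com/salesforce',
  authorization: 'Bearer TOKEN',
  requireApproval: 'always',
});
```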
Running the Chat
Once you've set up the chat and added the necessary components, you can start the conversation by calling the run() method.
let response = chat.run({
model: "gemini-2.5-flash", // Optional: set the model to use
temperature: 0.5, // Optional: set response creativity
function_call: "getWeather" // Optional: force the first API response to call a function
});
console.log(response);
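Under the hood, the run() options map onto the request body sent to the model's API. The sketch below shows roughly what an OpenAI-style function-calling payload looks like (field names follow OpenAI's chat completion conventions; the exact body GenAIApp assembles may differ, and the model name here is just an example):

```javascript
// Illustration of how run() options could translate into an
// OpenAI-style request body when functions are attached.
function buildRequestBody(messages, functions, options) {
  const body = {
    model: options.model,
    temperature: options.temperature,
    messages: messages,
  };
  if (functions.length > 0) {
    body.functions = functions;
    if (options.function_call) {
      // Force the model to call a specific function first.
      body.function_call = { name: options.function_call };
    }
  }
  return body;
}

const body = buildRequestBody(
  [{ role: 'user', content: 'What is the weather in Paris?' }],
  [{
    name: 'getWeather',
    description: 'Retrieve the current weather for a given city.',
    parameters: {
      type: 'object',
      properties: { city: { type: 'string' } },
    },
  }],
  { model: 'gpt-4.1', temperature: 0.5, function_call: 'getWeather' }
);
```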
The library supports the following models:
