OpenAi4J
OpenAi4J is an unofficial Java library for interacting with OpenAI's GPT models, including the newest additions such as GPT-4 Turbo with vision and Assistants v2. Originally forked from TheoKanning/openai-java, this library continues development to incorporate the latest API features after the original project's maintenance was discontinued.
Features
- Full support for all OpenAI API models including Completions, Chat, Edits, Embeddings, Audio, Files, Assistants-v2, Images, Moderations, Batch, and Fine-tuning.
- Easy-to-use client setup with Retrofit for immediate API interaction.
- Extensive examples and documentation to help you start quickly.
- Customizable setup with environment variable integration for API keys and base URLs.
- Supports synchronous and asynchronous API calls.
This library aims to provide Java developers with a robust tool to integrate OpenAI's powerful capabilities into their applications effortlessly.
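The library's calls shown in this README are synchronous; they can be wrapped for asynchronous use with standard Java concurrency. A minimal sketch of the pattern, where `blockingCall` is a hypothetical placeholder standing in for a real blocking call such as `service.createChatCompletion(request)`:

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Supplier;

public class AsyncSketch {
    // Placeholder for a blocking library call, e.g.
    // () -> service.createChatCompletion(request).getChoices().get(0).getMessage().getContent()
    static String blockingCall() {
        return "meow";
    }

    // Runs any blocking supplier on the common fork-join pool
    static CompletableFuture<String> callAsync(Supplier<String> blocking) {
        return CompletableFuture.supplyAsync(blocking);
    }

    public static void main(String[] args) {
        callAsync(AsyncSketch::blockingCall)
                .thenAccept(System.out::println) // prints "meow"
                .join();
    }
}
```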
Quick Start
Import
Gradle
implementation 'io.github.lambdua:<api|client|service>:0.22.92'
Maven
<dependency>
    <groupId>io.github.lambdua</groupId>
    <artifactId>service</artifactId>
    <version>0.22.92</version>
</dependency>
Chat with an OpenAI model
static void simpleChat() {
    //The API key is read from the OPENAI_API_KEY environment variable
    OpenAiService service = new OpenAiService(Duration.ofSeconds(30));
    List<ChatMessage> messages = new ArrayList<>();
    ChatMessage systemMessage = new SystemMessage("You are a cute cat and will speak as such.");
    messages.add(systemMessage);
    ChatCompletionRequest chatCompletionRequest = ChatCompletionRequest.builder()
            .model("gpt-4o-mini")
            .messages(messages)
            .n(1)
            .maxTokens(50)
            .build();
    ChatCompletionResult chatCompletion = service.createChatCompletion(chatCompletionRequest);
    System.out.println(chatCompletion.getChoices().get(0).getMessage().getContent());
}
Just Using the POJOs
If you wish to develop your own client, simply import the POJOs from the api module. Ensure your client uses snake_case naming for compatibility with the OpenAI API. To use the POJOs, import the api module:
<dependency>
    <groupId>io.github.lambdua</groupId>
    <artifactId>api</artifactId>
    <version>0.22.92</version>
</dependency>
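Snake_case naming means a Java field like `maxTokens` must serialize as `max_tokens` on the wire (with Jackson this is typically done via `PropertyNamingStrategies.SNAKE_CASE` on the `ObjectMapper`). A small self-contained illustration of the mapping, not part of the library:

```java
public class SnakeCase {
    // Converts a Java camelCase field name to the snake_case form
    // the OpenAI API expects, e.g. maxTokens -> max_tokens
    static String toSnakeCase(String camel) {
        return camel.replaceAll("([a-z0-9])([A-Z])", "$1_$2").toLowerCase();
    }

    public static void main(String[] args) {
        System.out.println(toSnakeCase("maxTokens")); // max_tokens
        System.out.println(toSnakeCase("topP"));      // top_p
    }
}
```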
Other examples
The sample code is all in the example package, which covers most of the library's functionality. Below are some commonly used examples.
<details>
<summary>Image recognition (GPT-4 Vision)</summary>
static void gptVision() {
    OpenAiService service = new OpenAiService(Duration.ofSeconds(20));
    final List<ChatMessage> messages = new ArrayList<>();
    final ChatMessage systemMessage = new SystemMessage("You are a helpful assistant.");
    //The imageMessage carries the image to be recognized
    final ChatMessage imageMessage = UserMessage.buildImageMessage("What's in this image?",
            "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg");
    messages.add(systemMessage);
    messages.add(imageMessage);
    ChatCompletionRequest chatCompletionRequest = ChatCompletionRequest.builder()
            .model("gpt-4-turbo")
            .messages(messages)
            .n(1)
            .maxTokens(200)
            .build();
    ChatCompletionChoice choice = service.createChatCompletion(chatCompletionRequest).getChoices().get(0);
    System.out.println(choice.getMessage().getContent());
}
</details>
<details>
<summary>Customizing OpenAiService</summary>
OpenAiService is versatile in its setup options, as demonstrated in the `example.ServiceCreateExample` within the example package.
//0. Default configuration: the OPENAI_API_KEY and OPENAI_API_BASE_URL environment variables
//   supply the API key and base URL. Loading the API key from an environment variable is encouraged.
OpenAiService openAiService0 = new OpenAiService();
//1. Default base URL with an explicit key. The base URL is read from the OPENAI_API_BASE_URL
//   environment variable; if unset, the default "https://api.openai.com/v1/" is used.
OpenAiService openAiService = new OpenAiService(API_KEY);
//2. Custom base URL with the default service configuration
OpenAiService openAiService1 = new OpenAiService(API_KEY, BASE_URL);
//3. Custom timeout
OpenAiService openAiService2 = new OpenAiService(API_KEY, Duration.ofSeconds(10));
//4. More flexible customization
//4.1 Customize the OkHttpClient
OkHttpClient client = new OkHttpClient.Builder()
        //connection pool
        .connectionPool(new ConnectionPool(Runtime.getRuntime().availableProcessors() * 2, 30, TimeUnit.SECONDS))
        //Custom interceptors, such as retry, logging, or load-balancing interceptors
        // .addInterceptor(new RetryInterceptor())
        // .addInterceptor(new LogInterceptor())
        // .addInterceptor(new LoadBalanceInterceptor())
        // .proxy(new Proxy(Proxy.Type.HTTP, new InetSocketAddress("proxyHost", 8080)))
        .connectTimeout(2, TimeUnit.SECONDS)
        .writeTimeout(3, TimeUnit.SECONDS)
        .readTimeout(10, TimeUnit.SECONDS)
        .protocols(Arrays.asList(Protocol.HTTP_2, Protocol.HTTP_1_1))
        .build();
//4.2 Customize the Retrofit configuration
Retrofit retrofit = OpenAiService.defaultRetrofit(client, OpenAiService.defaultObjectMapper(), BASE_URL);
OpenAiApi openAiApi = retrofit.create(OpenAiApi.class);
OpenAiService openAiService3 = new OpenAiService(openAiApi);
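The commented-out `RetryInterceptor` above is left to the user; the core logic such an interceptor would apply around `chain.proceed(request)` is a retry with exponential backoff. A self-contained sketch of that logic, independent of OkHttp (the class and method names are illustrative, not part of the library):

```java
import java.util.concurrent.Callable;

public class RetrySketch {
    // Generic retry with exponential backoff: 10ms, 20ms, 40ms, ...
    // An OkHttp interceptor would apply the same loop around chain.proceed(request).
    static <T> T withRetry(Callable<T> call, int maxAttempts, long baseDelayMs) throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e;
                Thread.sleep(baseDelayMs << attempt); // double the delay each attempt
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        int[] failures = {2}; // simulate two transient failures, then success
        String result = withRetry(() -> {
            if (failures[0]-- > 0) throw new RuntimeException("transient error");
            return "ok";
        }, 5, 10);
        System.out.println(result); // ok
    }
}
```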
</details>
<details>
<summary>stream chat</summary>
static void streamChat() {
    //The API key is read from the OPENAI_API_KEY environment variable
    OpenAiService service = new OpenAiService(Duration.ofSeconds(30));
    List<ChatMessage> messages = new ArrayList<>();
    ChatMessage systemMessage = new SystemMessage("You are a cute cat and will speak as such.");
    messages.add(systemMessage);
    ChatCompletionRequest chatCompletionRequest = ChatCompletionRequest.builder()
            .model("gpt-4o-mini")
            .messages(messages)
            .n(1)
            .maxTokens(50)
            .build();
    service.streamChatCompletion(chatCompletionRequest).blockingForEach(System.out::println);
}
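Each streamed chunk carries only a small delta of the reply; to display or store the full message you join the deltas in arrival order. A minimal accumulator sketch, using plain strings in place of the library's chunk objects:

```java
import java.util.List;
import java.util.stream.Collectors;

public class StreamAccumulator {
    // Each streamed chunk contributes a content delta; joining the
    // deltas in order reconstructs the complete assistant reply.
    static String accumulate(List<String> deltas) {
        return deltas.stream().collect(Collectors.joining());
    }

    public static void main(String[] args) {
        System.out.println(accumulate(List.of("Me", "ow", "!"))); // Meow!
    }
}
```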
</details>
<details>
<summary>Tools</summary>
This library supports both the deprecated function-call API and the current tool-based approach.
First, we define a function object. The definition is flexible: you can define it with a POJO (the JSON
schema is generated automatically) or with a Map and FunctionDefinition directly. See the code in the
example package for details. Here, we define a weather query function object:
public class Weather {
    @JsonPropertyDescription("City and state, for example: León, Guanajuato")
    public String location;
    @JsonPropertyDescription("The temperature unit, can be 'celsius' or 'fahrenheit'")
    @JsonProperty(required = true)
    public WeatherUnit unit;
}

public enum WeatherUnit {
    CELSIUS, FAHRENHEIT;
}

public static class WeatherResponse {
    public String location;
    public WeatherUnit unit;
    public int temperature;
    public String description;

    public WeatherResponse(String location, WeatherUnit unit, int temperature, String description) {
        this.location = location;
        this.unit = unit;
        this.temperature = temperature;
        this.description = description;
    }
}
Next, we declare the function and associate it with an executor, here simulating an API response:
//First, a function to fetch the weather
public static FunctionDefinition weatherFunction() {
    return FunctionDefinition.<Weather>builder()
            .name("get_weather")
            .description("Get the current weather in a given location")
            .parametersDefinitionByClass(Weather.class)
            //The executor is a lambda that accepts a Weather object and returns a WeatherResponse
            .executor(w -> new WeatherResponse(w.location, w.unit, 25, "sunny"))
            .build();
}
Then, the service is used for a chatCompletion request, incorporating the tool:
static void toolChat() {
    OpenAiService service = new OpenAiService(Duration.ofSeconds(30));
    final ChatTool tool = new ChatTool(ToolUtil.weatherFunction());
    final List<ChatMessage> messages = new ArrayList<>();
    final ChatMessage systemMessage = new SystemMessage("You are a helpful assistant.");
    final ChatMessage userMessage = new UserMessage("What is the weather in Beijing?");
    messages.add(systemMessage);
    messages.add(userMessage);
    ChatCompletionRequest chatCompletionRequest = ChatCompletionRequest.builder()
            .model("gpt-4o-mini")
            .messages(messages)
            //tools is a list; multiple tools can be included
            .tools(Collections.singletonList(tool))
            .toolChoice(ToolChoice.AUTO)
            .n(1)
            .maxTokens(100)
            .build();
    //Send the request
    ChatCompletionChoice choice = service.createChatCompletion(chatCompletionRequest).getChoices().get(0);
    AssistantMessage toolCallMsg = choice.getMessage();
    ChatToolCall toolCall = toolCallMsg.getToolCalls().get(0);
    System.out.println(toolCall.getFunction());
    messages.add(toolCallMsg);
    messages.add(new ToolMessage("the weather is fine today.", toolCall.getId()));
    //Submit the tool result
    ChatCompletionRequest toolCallRequest = ChatCompletionRequest.builder()
            .model("gpt-4o-mini")
            .messages(messages)
            .n(1)
            .maxTokens(100)
            .build();
    ChatCompletionChoice toolCallChoice = service.createChatCompletion(toolCallRequest).getChoices().get(0);
    System.out.println(toolCallChoice.getMessage().getContent());
}
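When the model returns a tool call, the application must route the call's function name to the matching executor before sending the result back. A minimal dispatch-registry sketch of that step, using plain strings in place of the library's argument objects (the names here are illustrative placeholders):

```java
import java.util.Map;
import java.util.function.Function;

public class ToolDispatch {
    // Maps a tool-call function name to its executor, mirroring the role
    // of FunctionDefinition's executor inside the library
    static final Map<String, Function<String, String>> TOOLS = Map.of(
            "get_weather", jsonArgs -> "the weather is fine today."
    );

    // Looks up and runs the executor for a tool call returned by the model
    static String dispatch(String name, String jsonArgs) {
        Function<String, String> executor = TOOLS.get(name);
        if (executor == null) throw new IllegalArgumentException("unknown tool: " + name);
        return executor.apply(jsonArgs);
    }

    public static void main(String[] args) {
        System.out.println(dispatch("get_weather", "{\"location\":\"Beijing\"}"));
    }
}
```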
</details>