# chatLLM <a href="https://knowusuboaky.github.io/chatLLM/"><img src="man/figures/openlogo.png" align="right" height="120" /></a>

> A Flexible Interface for 'LLM' API Interactions

<!-- badges: start -->
<!-- badges: end -->

## Overview
chatLLM is an R package providing a single, consistent interface to multiple “OpenAI‑compatible” chat APIs (OpenAI, Groq, Anthropic, DeepSeek, Alibaba DashScope, Gemini, Grok, GitHub Models, AWS Bedrock, Azure OpenAI, and Azure AI Foundry).
Key features:
- 🔄 Uniform API across providers
- 🗣 Multi‑message context (system/user/assistant roles)
- 🔁 Retries & backoff with clear timeout handling
- 🔈 Verbose control (`verbose = TRUE/FALSE`)
- ⚙️ Discover models via `list_models()`
- 🏗 Factory interface for repeated calls
- 🌐 Custom endpoint override and advanced tuning
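The retry behaviour can be tuned per call. A minimal sketch, using the `n_tries` and `backoff` arguments referenced under Troubleshooting (their exact defaults and semantics may differ; see `?call_llm`), and assuming an `OPENAI_API_KEY` is set in the environment:

```r
# Retry a flaky call: up to 5 attempts with a pause between them.
# `n_tries`/`backoff` as referenced in Troubleshooting; check ?call_llm
# for the exact semantics in your installed version.
res <- call_llm(
  prompt   = "Summarise what chatLLM does in one sentence.",
  provider = "openai",
  n_tries  = 5,    # attempts before giving up
  backoff  = 2,    # wait (in seconds) between attempts
  verbose  = TRUE  # log each retry as it happens
)
cat(res)
```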
## Installation

From CRAN:

```r
install.packages("chatLLM")
```

Development version:

```r
# install.packages("remotes") # if needed
remotes::install_github("knowusuboaky/chatLLM")
```
## Setup

Set your API keys or tokens once per session:

```r
Sys.setenv(
  OPENAI_API_KEY         = "your-openai-key",
  GROQ_API_KEY           = "your-groq-key",
  ANTHROPIC_API_KEY      = "your-anthropic-key",
  DEEPSEEK_API_KEY       = "your-deepseek-key",
  DASHSCOPE_API_KEY      = "your-dashscope-key",
  GH_MODELS_TOKEN        = "your-github-models-token",
  GEMINI_API_KEY         = "your-gemini-key",
  XAI_API_KEY            = "your-grok-key",
  AWS_ACCESS_KEY_ID      = "your-aws-access-key",
  AWS_SECRET_ACCESS_KEY  = "your-aws-secret-key",
  AWS_REGION             = "us-east-1",
  AZURE_OPENAI_KEY       = "your-azure-openai-key",
  AZURE_OPENAI_ENDPOINT  = "https://your-resource.openai.azure.com",
  AZURE_FOUNDRY_KEY      = "your-azure-foundry-key",
  AZURE_FOUNDRY_ENDPOINT = "https://your-foundry-endpoint"
)
```
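To confirm a key is visible to the current session, base R is enough (no chatLLM functions involved):

```r
# nzchar() is TRUE only for non-empty strings, so this reports whether
# the variable is actually set in this R session.
key_set <- nzchar(Sys.getenv("OPENAI_API_KEY"))
key_set  # TRUE once the key above has been set
```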
## Usage

### 1. Simple Prompt

```r
response <- call_llm(
  prompt     = "Who is Messi?",
  provider   = "openai",
  max_tokens = 300
)
cat(response)
```
### 2. Multi‑Message Conversation

```r
conv <- list(
  list(role = "system", content = "You are a helpful assistant."),
  list(role = "user",   content = "Explain recursion in R.")
)

response <- call_llm(
  messages          = conv,
  provider          = "openai",
  max_tokens        = 200,
  presence_penalty  = 0.2,
  frequency_penalty = 0.1,
  top_p             = 0.95
)
cat(response)
```
### 3. Verbose Off

Suppress informational messages:

```r
res <- call_llm(
  prompt   = "Tell me a joke",
  provider = "openai",
  verbose  = FALSE
)
cat(res)
```
### 4. Factory Interface

Create a reusable LLM function:

```r
# Build a “GitHub Models” engine with defaults baked in
GitHubLLM <- call_llm(
  provider   = "github",
  max_tokens = 60,
  verbose    = FALSE
)

# Invoke it like a function:
story <- GitHubLLM("Tell me a short story about libraries.")
cat(story)
```
### 5. Discover Available Models

```r
# All providers at once
all_models <- list_models("all")
names(all_models)

# Only OpenAI models
openai_models <- list_models("openai")
head(openai_models)
```
### 6. Call a Specific Model

Pick a model from the list and pass it to `call_llm()`:

```r
anthro_models <- list_models("anthropic")

cat(call_llm(
  prompt     = "Write a haiku about autumn.",
  provider   = "anthropic",
  model      = anthro_models[1],
  max_tokens = 60
))
```
## Troubleshooting

- **Timeouts**: increase `n_tries`/`backoff`, or supply a custom `.post_func` with a higher `timeout()`.
- **Model Not Found**: use `list_models("<provider>")` or consult the provider's docs.
- **Auth Errors**: verify your API key/token and environment variables.
- **Network Issues**: check VPN/proxy, firewall, or SSL certificates.
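The `.post_func` override for timeouts can be sketched as follows. This is a sketch under assumptions: the wrapper's signature (forwarding its arguments to `httr::POST`) is not documented here, so check `?call_llm` before relying on it.

```r
library(httr)

# Hypothetical POST wrapper with a generous 120-second timeout; the
# argument-forwarding shape is an assumption, not the documented API.
slow_post <- function(url, ...) {
  httr::POST(url, ..., httr::timeout(120))
}

res <- call_llm(
  prompt     = "Explain lazy evaluation in R.",
  provider   = "openai",
  .post_func = slow_post  # signature is an assumption; see ?call_llm
)
cat(res)
```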
## Contributing & Support

Issues and PRs welcome at <https://github.com/knowusuboaky/chatLLM>.
## License

MIT © Kwadwo Daddy Nyame Owusu - Boakye
## Acknowledgements

Inspired by `RAGFlowChainR`, powered by `httr` and the R community. Enjoy!