QLLM
QLLM: A powerful CLI for seamless interaction with multiple Large Language Models. Simplify AI workflows, streamline development, and unlock the potential of cutting-edge language models. ⭐ If you find QLLM useful, consider giving us a star on GitHub! It helps us reach more developers and improve the tool. ⭐
QLLM: Simplifying Language Model Interactions
Chapter 1: Introduction
1.1 Welcome to QLLM
Welcome to QLLM, your ultimate command-line tool for interacting with Large Language Models (LLMs).
Imagine having a powerful AI assistant at your fingertips, ready to help you tackle complex tasks, generate creative content, and analyze data—all from your terminal.
This README will guide you through everything you need to know to harness the full potential of QLLM and become a master of AI-powered productivity.
1.2 Show Your Support
If you find QLLM helpful and enjoyable to use, please consider giving us a star ✨ on GitHub! Your support not only motivates us to keep improving the project but also helps others discover QLLM. Thank you for being a part of our community!
Chapter 2: Benefits of QLLM
2.1 Why QLLM and QLLM-LIB?
Key Benefits:
- Unified Access: QLLM brings together multiple LLM providers under one roof. No more context-switching between different tools and APIs.
- Command-Line Power: As a developer, you live in the terminal. QLLM integrates seamlessly into your existing workflow.
- Flexibility and Customization: Tailor AI interactions to your specific needs with extensive configuration options and support for custom templates.
- Time-Saving Features: From quick queries to ongoing conversations, QLLM helps you get answers fast.
- Cross-Platform Compatibility: Works consistently across Windows, macOS, and Linux.
2.2 Anecdote: A Productivity Boost
Imagine you're a data analyst working on a tight deadline. You need to quickly analyze a large dataset and generate a report for your team. Instead of manually sifting through the data and writing the report, you turn to QLLM. With a few simple commands, you're able to:
- Summarize the key insights from the dataset.
- Generate visualizations to highlight important trends.
- Draft a concise, well-written report.
All of this without leaving your terminal. The time you save allows you to focus on higher-level analysis and deliver the report ahead of schedule. Your manager is impressed, and you've just demonstrated the power of QLLM to streamline your workflow.
Chapter 3: Packages
graph TD
A[qllm-cli] --> B[qllm-lib]
3.1 qllm-lib
A versatile TypeScript library for seamless LLM integration. It simplifies working with different AI models and provides features like templating, streaming, and conversation management.
Practical Example
import { createLLMProvider } from 'qllm-lib';

async function generateProductDescription() {
  // Create a provider for the OpenAI API (typically configured via the OPENAI_API_KEY environment variable)
  const provider = createLLMProvider({ name: 'openai' });
  const result = await provider.generateChatCompletion({
    messages: [
      {
        role: 'user',
        content: {
          type: 'text',
          text: 'Write a compelling product description for a new smartphone with a foldable screen, 5G capability, and 48-hour battery life.',
        },
      },
    ],
    options: { model: 'gpt-4', maxTokens: 200 },
  });
  console.log('Generated Product Description:', result.text);
}

generateProductDescription().catch(console.error);
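qllm-lib also supports streaming responses. The sketch below is a minimal example; the streamChatCompletion method and the chunk's text field are assumptions modeled on the generateChatCompletion call above, so check the library's typings for the exact shape:
import { createLLMProvider } from 'qllm-lib';

async function streamHaiku() {
  const provider = createLLMProvider({ name: 'openai' });
  // Assumed API: streamChatCompletion returns an async iterable of chunks
  const stream = await provider.streamChatCompletion({
    messages: [
      { role: 'user', content: { type: 'text', text: 'Write a haiku about the sea.' } },
    ],
    options: { model: 'gpt-4', maxTokens: 100 },
  });
  for await (const chunk of stream) {
    process.stdout.write(chunk.text ?? ''); // print text as it arrives
  }
}

streamHaiku().catch(console.error);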
3.2 qllm-cli
A command-line interface that leverages qllm-lib to provide easy access to LLM capabilities directly from your terminal.
Practical Example
# Generate a product description
qllm ask "Write a 50-word product description for a smart home security camera with night vision and two-way audio."
# Use a specific model for market analysis
qllm ask --model gpt-4o-mini --provider openai "Analyze the potential market impact of electric vehicles in the next 5 years. Provide 3 key points."
# Write a short blog post about the benefits of remote work
qllm ask --model gemma2:2b --provider ollama "Write a short blog post about the benefits of remote work."
# Analyze CSV data from stdin
cat sales_data.csv | qllm ask "Analyze this CSV data. Provide a summary of total sales, top-selling products, and any notable trends. Format your response as a bulleted list."
# Example using a question from stdin
echo "What is the weather in Tokyo?" | qllm --provider ollama --model gemma2:2b
Chapter 4: Getting Started
4.1 System Requirements
Before we dive into the exciting world of QLLM, let's make sure your system is ready:
- Node.js (version 16.5 or higher)
- npm (usually comes with Node.js)
- A terminal or command prompt
- An internet connection (QLLM needs to talk to the AI, after all!)
4.2 Step-by-Step Installation Guide
- Open your terminal or command prompt.
- Run the following command:
npm install -g qllm
This command tells npm to install QLLM globally on your system, making it available from any directory.
- Wait for the installation to complete. You might see a progress bar and some text scrolling by. Don't panic, that's normal!
- Once it's done, verify the installation by running:
qllm --version
You should see a version number (e.g., 1.8.0) displayed. If you do, congratulations! You've successfully installed QLLM.
💡 Pro Tip: If you encounter any permission errors during installation, you might need to use sudo on Unix-based systems or run your command prompt as an administrator on Windows.
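On Unix-based systems, that looks like:
sudo npm install -g qllm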
4.3 Configuration
Now that QLLM is installed, let's get it configured. Think of this as teaching QLLM your preferences and giving it the keys to the AI kingdom.
Configuring Default Settings
Run qllm configure to enter interactive configuration mode, where you can set your default preferences:
- Choose your default provider and model.
- Set default values for parameters like temperature and max tokens.
- Configure other settings like log level and custom prompt directory.
Here's an example of what this might look like:
$ qllm configure
? Default Provider: openai
? Default Model: gpt-4o-mini
? Temperature (0.0 to 1.0): 0.7
? Max Tokens: 150
? Log Level: info
AWS Configuration
To use AWS Bedrock with QLLM, you need to configure your AWS credentials. Ensure you have the following environment variables set:
- AWS_ACCESS_KEY_ID: Your AWS access key ID.
- AWS_SECRET_ACCESS_KEY: Your AWS secret access key.
- AWS_BEDROCK_REGION: The AWS region you want to use (optional, defaults to a predefined region).
- AWS_BEDROCK_PROFILE: If you prefer to use a named profile from your AWS credentials file, set this variable instead of the access key and secret.
You can set these variables in your terminal or include them in your environment configuration file (e.g., .env file) for convenience.
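For example, you can export them in your shell (the values below are placeholders):
export AWS_ACCESS_KEY_ID="your-access-key-id"
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"
export AWS_BEDROCK_REGION="us-east-1"   # optional; a default region is used if unset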
💡 Pro Tip: You can always change these settings later, either through the qllm configure command or directly in the configuration file located at ~/.qllmrc.
Providers Supported
- openai
- anthropic
- AWS Bedrock (Anthropic)
- ollama
- groq
- mistral
- claude
- openrouter
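Switching providers is just a flag change. For example, to send the same question to a hosted model and a local one (both models appear in the examples above):
qllm ask -p openai -m gpt-4o-mini "Explain recursion in one sentence."
qllm ask -p ollama -m gemma2:2b "Explain recursion in one sentence."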
4.4 Your First QLLM Command
Enough setup, let's see QLLM in action! We'll start with a simple query to test the waters.
Running a Simple Query
- In your terminal, type:
qllm ask "What is the meaning of life, the universe, and everything?"
- Press Enter and watch the magic happen!
Understanding the Output
QLLM will display the response from the AI. It might look something like this:
Assistant: The phrase "the meaning of life, the universe, and everything" is a reference to Douglas Adams' science fiction series "The Hitchhiker's Guide to the Galaxy." In the story, a supercomputer named Deep Thought is asked to calculate the answer to the "Ultimate Question of Life, the Universe, and Everything." After 7.5 million years of computation, it provides the answer: 42...
🧠 Pause and Reflect: What do you think about this response? How does it compare to what you might have gotten from a simple web search?
Chapter 5: Core Commands
5.1 The 'ask' Command
The ask command is your go-to for quick, one-off questions. It's like having a knowledgeable assistant always ready to help.
Syntax and Options
qllm ask "Your question here"
- -p, --provider: Specify the LLM provider (e.g., openai, anthropic)
- -m, --model: Choose a specific model
- -t, --max-tokens: Set maximum tokens for the response
- --temperature: Adjust output randomness (0.0 to 1.0)
Use Cases and Examples
- Quick fact-checking:
qllm ask "What year was the first Moon landing?"
- Code explanation:
qllm ask "Explain this Python code: print([x for x in range(10) if x % 2 == 0])"
- Language translation:
qllm ask "Translate 'Hello, world!' to French, Spanish, and Japanese"
5.2 The 'chat' Command
While ask is perfect for quick queries, chat is where QLLM really shines. It allows you to have multi-turn conversations, maintaining context throughout.
Starting and Managing Conversations
To start a chat session:
qllm chat
Once in a chat session, you can use various commands:
- /help: Display available commands
- /new: Start a new conversation
- /save: Save the current conversation
5.3 The 'run' Command
The run command allows you to execute predefined templates, streamlining complex or repetitive tasks.
Using Predefined Templates
To run a template:
qllm <template-url or path>
For example:
qllm https://raw.githubusercontent.com/quantalogic/qllm/main/prompts/chain_of_thought_leader.yaml
Creating Custom Templates
You can create your own templates as YAML files. Here's a minimal sketch (check the example templates in the qllm repository, such as the one linked above, for the full schema):
name: "Simple Greeting"
version: "1.0"
author: "Raphaël MANSUY"
description: "