Welcome to QLLM, your ultimate command-line tool for interacting with Large Language Models (LLMs).
Imagine having a powerful AI assistant at your fingertips, ready to help you tackle complex tasks, generate creative content, and analyze data—all from your terminal.
This README will guide you through everything you need to know to harness the full potential of QLLM and become a master of AI-powered productivity.
If you find QLLM helpful and enjoyable to use, please consider giving us a star ✨ on GitHub! Your support not only motivates us to keep improving the project but also helps others discover QLLM. Thank you for being a part of our community!
- Unified Access: QLLM brings together multiple LLM providers under one roof. No more context-switching between different tools and APIs.
- Command-Line Power: As a developer, you live in the terminal. QLLM integrates seamlessly into your existing workflow.
- Flexibility and Customization: Tailor AI interactions to your specific needs with extensive configuration options and support for custom templates.
- Time-Saving Features: From quick queries to ongoing conversations, QLLM helps you get answers fast.
- Cross-Platform Compatibility: Works consistently across Windows, macOS, and Linux.
Imagine you're a data analyst working on a tight deadline. You need to quickly analyze a large dataset and generate a report for your team. Instead of manually sifting through the data and writing the report, you turn to QLLM. With a few simple commands, you're able to:
- Summarize the key insights from the dataset.
- Generate visualizations to highlight important trends.
- Draft a concise, well-written report.
All of this without leaving your terminal. The time you save allows you to focus on higher-level analysis and deliver the report ahead of schedule. Your manager is impressed, and you've just demonstrated the power of QLLM to streamline your workflow.
```mermaid
graph TD
    A[qllm-cli] --> B[qllm-lib]
```
A versatile TypeScript library for seamless LLM integration. It simplifies working with different AI models and provides features like templating, streaming, and conversation management.
```typescript
import { createLLMProvider } from 'qllm-lib';

async function generateProductDescription() {
  // Create a provider for the LLM service you want to use.
  const provider = createLLMProvider({ name: 'openai' });

  const result = await provider.generateChatCompletion({
    messages: [
      {
        role: 'user',
        content: {
          type: 'text',
          text: 'Write a compelling product description for a new smartphone with a foldable screen, 5G capability, and 48-hour battery life.',
        },
      },
    ],
    options: { model: 'gpt-4', maxTokens: 200 },
  });

  console.log('Generated Product Description:', result.text);
}

generateProductDescription();
```
A command-line interface that leverages qllm-lib to provide easy access to LLM capabilities directly from your terminal.
```bash
# Generate a product description
qllm ask "Write a 50-word product description for a smart home security camera with night vision and two-way audio."

# Use a specific model for market analysis
qllm ask --model gpt-4o-mini --provider openai "Analyze the potential market impact of electric vehicles in the next 5 years. Provide 3 key points."

# Write a short blog post about the benefits of remote work
qllm ask --model gemma2:2b --provider ollama "Write a short blog post about the benefits of remote work."

# Analyze CSV data from stdin
cat sales_data.csv | qllm ask "Analyze this CSV data. Provide a summary of total sales, top-selling products, and any notable trends. Format your response as a bulleted list."

# Ask a question from stdin
echo "What is the weather in Tokyo?" | qllm --provider ollama --model gemma2:2b
```
Before we dive into the exciting world of QLLM, let's make sure your system is ready:
- Node.js (version 16.5 or higher)
- npm (usually comes with Node.js)
- A terminal or command prompt
- An internet connection (QLLM needs to talk to the AI, after all!)
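You can check the first two with:

```bash
node --version   # should print v16.5.0 or higher
npm --version
```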
- Open your terminal or command prompt.
- Run the following command:

  ```bash
  npm install -g qllm
  ```

  This tells npm to install QLLM globally on your system, making it available from any directory.

- Wait for the installation to complete. You might see a progress bar and some text scrolling by. Don't panic, that's normal!
- Once it's done, verify the installation by running:

  ```bash
  qllm --version
  ```

  You should see a version number (e.g., 1.8.0) displayed. If you do, congratulations! You've successfully installed QLLM.
💡 Pro Tip: If you encounter any permission errors during installation, you might need to use `sudo` on Unix-based systems, or run your command prompt as an administrator on Windows.
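For example, on macOS or Linux:

```bash
# Only needed if the plain install fails with a permissions error
sudo npm install -g qllm
```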
Now that QLLM is installed, let's get it configured. Think of this as teaching QLLM your preferences and giving it the keys to the AI kingdom. Run `qllm configure` to enter interactive configuration mode, where you can set your default preferences:
- Choose your default provider and model.
- Set default values for parameters like temperature and max tokens.
- Configure other settings like log level and custom prompt directory.
Here's an example of what this might look like:

```bash
$ qllm configure
? Default Provider: openai
? Default Model: gpt-4o-mini
? Temperature (0.0 to 1.0): 0.7
? Max Tokens: 150
? Log Level: info
```
To use AWS Bedrock with QLLM, you need to configure your AWS credentials. Ensure you have the following environment variables set:

- `AWS_ACCESS_KEY_ID`: Your AWS access key ID.
- `AWS_SECRET_ACCESS_KEY`: Your AWS secret access key.
- `AWS_BEDROCK_REGION`: The AWS region you want to use (optional; defaults to a predefined region).
- `AWS_BEDROCK_PROFILE`: Set this instead of the access key and secret if you prefer to use a named profile from your AWS credentials file.

You can set these variables in your terminal or include them in an environment configuration file (e.g., a `.env` file) for convenience.
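For example, on Unix-like systems (the values below are placeholders — substitute your own):

```bash
export AWS_ACCESS_KEY_ID="your-access-key-id"
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"
export AWS_BEDROCK_REGION="us-east-1"   # optional

# Or, instead of the key pair, point QLLM at a named AWS profile:
# export AWS_BEDROCK_PROFILE="my-profile"
```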
💡 Pro Tip: You can always change these settings later, either through the `qllm configure` command or directly in the configuration file located at `~/.qllmrc`.
Supported providers:
- openai
- anthropic
- AWS Bedrock (Anthropic)
- ollama
- groq
- mistral
- claude
- openrouter
Enough setup, let's see QLLM in action! We'll start with a simple query to test the waters.
- In your terminal, type:

  ```bash
  qllm ask "What is the meaning of life, the universe, and everything?"
  ```

- Press Enter and watch the magic happen!
QLLM will display the response from the AI. It might look something like this:
Assistant: The phrase "the meaning of life, the universe, and everything" is a reference to Douglas Adams' science fiction series "The Hitchhiker's Guide to the Galaxy." In the story, a supercomputer named Deep Thought is asked to calculate the answer to the "Ultimate Question of Life, the Universe, and Everything." After 7.5 million years of computation, it provides the answer: 42...
🧠 Pause and Reflect: What do you think about this response? How does it compare to what you might have gotten from a simple web search?
The `ask` command is your go-to for quick, one-off questions. It's like having a knowledgeable assistant always ready to help.

```bash
qllm ask "Your question here"
```
Key options:

- `-p, --provider`: Specify the LLM provider (e.g., openai, anthropic)
- `-m, --model`: Choose a specific model
- `-t, --max-tokens`: Set the maximum number of tokens for the response
- `--temperature`: Adjust output randomness (0.0 to 1.0)
- Quick fact-checking:

  ```bash
  qllm ask "What year was the first Moon landing?"
  ```

- Code explanation:

  ```bash
  qllm ask "Explain this Python code: print([x for x in range(10) if x % 2 == 0])"
  ```

- Language translation:

  ```bash
  qllm ask "Translate 'Hello, world!' to French, Spanish, and Japanese"
  ```
While `ask` is perfect for quick queries, `chat` is where QLLM really shines. It allows you to have multi-turn conversations, maintaining context throughout.

To start a chat session:

```bash
qllm chat
```
Once in a chat session, you can use various commands:
- `/help`: Display available commands
- `/new`: Start a new conversation
- `/save`: Save the current conversation
The `run` command allows you to execute predefined templates, streamlining complex or repetitive tasks.

To run a template:

```bash
qllm <template-url-or-path>
```

For example:

```bash
qllm https://raw.githubusercontent.com/quantalogic/qllm/main/prompts/chain_of_thought_leader.yaml
```
You can create your own templates as YAML files. Here's a simple example:

```yaml
name: "Simple Greeting"
version: "1.0"
author: "Raphaël MANSUY"
description: "A template that generates a greeting"
input_variables:
  name:
    type: "string"
    description: "The name of the person to greet"
content: >
  Generate a friendly greeting for {{name}}.
```

Save this as `greeting.yaml` and run it with:

```bash
qllm run greeting.yaml
```
🧠 Pause and Reflect: How could you use custom templates to streamline your workflow? Think about repetitive tasks in your daily work that could benefit from AI assistance.
Imagine you're a developer facing code reviews. Let's set up a code review template to streamline this process.
Save this as `code_review.yaml`:

```yaml
name: "Code Review"
description: "Analyzes code and provides improvement suggestions"
input_variables:
  code:
    type: "string"
    description: "The code to review"
  language:
    type: "string"
    description: "The programming language"
prompt: |
  You are an experienced software developer. Review the following {{language}} code
  and provide suggestions for improvement:

  {{code}}

  Please consider:
  1. Code efficiency
  2. Readability
  3. Best practices
  4. Potential bugs
```
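You can then run it like the greeting template:

```bash
qllm run code_review.yaml
```

How the `code` and `language` variables are supplied (interactively or otherwise) depends on your QLLM version, so check the CLI's help output.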
Let's look at how QLLM can assist in content creation, from ideation to drafting and editing.
Create a template for brainstorming ideas. Save this as `brainstorm_ideas.yaml`:

```yaml
name: "Content Brainstorming"
description: "Generates content ideas based on a topic and target audience"
input_variables:
  topic:
    type: "string"
    description: "The main topic or theme"
  audience:
    type: "string"
    description: "The target audience"
  content_type:
    type: "string"
    description: "The type of content (e.g., blog post, video script, social media)"
prompt: |
  As a creative content strategist, generate 5 unique content ideas for {{content_type}} about {{topic}} targeted at {{audience}}. For each idea, provide:
  1. A catchy title
  2. A brief description (2-3 sentences)
  3. Key points to cover
  4. Potential challenges or considerations
```
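Run it the same way as the earlier templates:

```bash
qllm run brainstorm_ideas.yaml
```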
Imagine you have a CSV file with sales data. You can use QLLM to help interpret this data:

```bash
cat sales_data.csv | qllm ask "Analyze this CSV data. Provide a summary of total sales, top-selling products, and any notable trends. Format your response as a bulleted list."
```
QLLM also supports image analysis, allowing you to describe and analyze images directly through the command line.
qllm ask "What do you see in this image?" -i path/to/image.jpg
This command sends the specified image to the AI for analysis and generates a description based on its contents.
You can capture and analyze screenshots directly from the CLI, making it easier to get insights from visual content.
qllm ask "Analyze this screenshot" --screenshot 0
This command captures the current screen and sends it to the AI for analysis, providing insights based on what is displayed.
Even the most powerful tools can sometimes hiccup. Here are some common issues you might encounter with QLLM and how to resolve them:
- Rate Limiting: If you hit provider rate limits, add a retry mechanism with exponential backoff (see the sketch after this list).
- Unexpected Output Format: Be more specific in your prompts; for example, ask explicitly for a bulleted list.
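Here's a minimal sketch of such a retry wrapper in plain shell (the command and retry limits are illustrative — adapt them to your workflow):

```bash
#!/bin/sh
# Retry a qllm call up to 5 times, doubling the wait between attempts.
attempt=1
delay=1
until qllm ask "What year was the first Moon landing?"; do
  if [ "$attempt" -ge 5 ]; then
    echo "Giving up after $attempt attempts" >&2
    exit 1
  fi
  echo "Request failed; retrying in ${delay}s..." >&2
  sleep "$delay"
  attempt=$((attempt + 1))
  delay=$((delay * 2))
done
```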
To get the most out of QLLM, keep these best practices in mind:
- Effective Prompt Engineering: Be specific and clear in your prompts (see the comparison after this list).
- Managing Conversation Context: Use `/new` to start fresh conversations when switching topics.
- Leveraging Templates for Consistency: Create templates for tasks you perform regularly.
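For example, compare a vague prompt with a scoped one (both prompts are illustrative):

```bash
# Vague — invites a generic, unfocused answer:
qllm ask "Tell me about remote work."

# Specific — states the audience, scope, and output format:
qllm ask "List 3 benefits of remote work for engineering managers. One sentence each, as a bulleted list."
```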
Congratulations! You've now mastered the essentials of QLLM and are well on your way to becoming a CLI AI wizard.
Within the next 24 hours, use QLLM to solve a real problem you're facing in your work or personal projects. It could be analyzing some data, drafting a document, or even helping debug a tricky piece of code. Share your experience with a colleague or in the QLLM community.
Thank you for joining me on this whirlwind tour of QLLM. Now go forth and command your AI assistant with confidence! 🚀
For detailed documentation on the packages used in QLLM, please refer to the following links:
We warmly welcome contributions to QLLM CLI! This project is licensed under the Apache License, Version 2.0. To contribute, please follow these steps:
- Fork the repository on GitHub.
- Clone your forked repository to your local machine.
- Create a new branch for your feature or bug fix.
- Make your changes, adhering to the existing code style and conventions.
- Write tests for your changes if applicable.
- Run the existing test suite to ensure your changes don't introduce regressions:

  ```bash
  pnpm test
  ```
- Commit your changes with a clear and descriptive commit message.
- Push your changes to your fork on GitHub.
- Create a pull request from your fork to the main QLLM CLI repository.
Please ensure your code adheres to our coding standards:
- Use TypeScript for type safety.
- Follow the existing code style (we use Prettier for formatting).
- Write unit tests for new features.
- Update documentation as necessary, including this README if you're adding or changing features.
We use GitHub Actions for CI/CD, so make sure your changes pass all automated checks.
This project is licensed under the Apache License, Version 2.0. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
We would like to extend our heartfelt thanks to the following individuals and organizations for their invaluable contributions to QLLM:
- OpenAI: For their groundbreaking work on large language models and the API that powers QLLM.
- Anthropic: For their innovative approach to AI and the Claude models that enhance QLLM's capabilities.
- AWS Bedrock: For their support in providing access to advanced AI models through AWS.
- Ollama: For their cutting-edge platform that powers QLLM's local model support.
- Groq: For their powerful and scalable LLM infrastructure.
- Mistral: For their innovative approach to AI, proudly representing France 🇫🇷.
A special thanks to the entire QLLM community for their feedback and support. Your insights and contributions are invaluable to us.
And of course, thank you to Quantalogic for funding the project.