A powerful GitHub Action that enables seamless integration with various AI providers, letting you add artificial-intelligence capabilities to your workflows. It supports multiple providers, including OpenAI, Anthropic, Groq, Mistral, and more.
Multi-Provider Support:
- OpenAI
- Anthropic (Claude)
- Groq
- Mistral
- Cohere
- DeepInfra
- Fireworks
- Together AI
- XAI
Flexible Configuration:
- Custom API endpoints
- Configurable model parameters
- Adjustable response settings
- Header customization
Easy Integration:
- Simple workflow setup
- Comprehensive output handling
- Support for both chat and completion modes
- Add the action to your workflow file (e.g., `.github/workflows/ai.yml`)
- Configure your AI provider credentials as repository secrets
- Customize the action parameters as needed
Create a workflow file in your repository:
```yaml
name: Basic AI Prompt
on:
  push:
    branches: [main]
jobs:
  generate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Generate AI Text
        uses: 0xjord4n/aixion@v1.2.1
        with:
          config: >
            {
              "provider": "openai",
              "provider_options": {
                "api_key": "${{ secrets.OPENAI_API_KEY }}"
              },
              "prompt": "Your prompt here",
              "model": "gpt-4"
            }
```
aixion supports both direct text input and file-based input for prompts and messages. You can reference files for your prompts and system messages:
```yaml
config: >
  {
    "provider": "openai",
    "provider_options": {
      "api_key": "${{ secrets.OPENAI_API_KEY }}"
    },
    "prompt_file": ".github/prompts/analysis.txt",
    "system_file": ".github/prompts/system.txt",
    "model": "gpt-4"
  }
```
For the messages array, you can mix file-based and direct content:
```yaml
config: >
  {
    "provider": "openai",
    "provider_options": {
      "api_key": "${{ secrets.OPENAI_API_KEY }}"
    },
    "messages": [
      {
        "role": "system",
        "content_file": ".github/prompts/system.txt"
      },
      {
        "role": "user",
        "content_file": ".github/prompts/user_query.txt"
      },
      {
        "role": "assistant",
        "content": "I understand your question. Let me help..."
      }
    ],
    "model": "gpt-4"
  }
```
Note: When using file-based inputs:
- Use `prompt_file` instead of `prompt` to read from a file
- Use `system_file` instead of `system` to read from a file
- In the messages array, use `content_file` instead of `content` to read from a file
- File paths are relative to your repository root
- Files must exist and be readable
- You can mix direct content and file-based content in the messages array
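As a sketch, the referenced prompt files could be created and committed alongside the workflow; the file names and contents below are assumptions matching the paths used in the examples above:

```shell
# Create the prompt directory referenced by the examples above
mkdir -p .github/prompts

# A system prompt read via "system_file" or a "content_file" entry
cat > .github/prompts/system.txt <<'EOF'
You are a helpful programming assistant.
EOF

# A task prompt read via "prompt_file"
cat > .github/prompts/analysis.txt <<'EOF'
Summarize the changes in this repository.
EOF
```

Commit these files so they are available at the repository root when the workflow checks out the code.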
aixion supports three different ways to interact with AI models:
Best for straightforward, single-turn interactions:
```yaml
config: >
  {
    "provider": "openai",
    "provider_options": {
      "api_key": "${{ secrets.OPENAI_API_KEY }}"
    },
    "prompt": "What is the capital of France?",
    "model": "gpt-4"
  }
```
Useful when you need to set specific behavior or context:
```yaml
config: >
  {
    "provider": "openai",
    "provider_options": {
      "api_key": "${{ secrets.OPENAI_API_KEY }}"
    },
    "system": "You are a helpful programming assistant.",
    "prompt": "How do I write a hello world in Python?",
    "model": "gpt-4"
  }
```
Perfect for multi-turn conversations or complex interactions:
```yaml
config: >
  {
    "provider": "openai",
    "provider_options": {
      "api_key": "${{ secrets.OPENAI_API_KEY }}"
    },
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful programming assistant."
      },
      {
        "role": "user",
        "content": "What is a variable?"
      },
      {
        "role": "assistant",
        "content": "A variable is a container for storing data values."
      },
      {
        "role": "user",
        "content": "Can you show an example?"
      }
    ],
    "model": "gpt-4"
  }
```
Note: You should use only one of these methods per request. The precedence order is:

1. `messages` (if present, others are ignored)
2. `system` + `prompt` (if no messages)
3. `prompt` alone (if no messages or system)
- `api_key`: Your provider's API key
- `base_url`: Custom API endpoint (optional)
- `headers`: Additional headers (optional)

- `temperature`: Controls randomness (0.0-2.0)
- `max_tokens`: Maximum response length
- `top_p`: Nucleus sampling parameter
- `frequency_penalty`: Repetition control
- `presence_penalty`: Topic diversity control
- `stop`: Array of sequences where the API will stop generating further tokens

- `save_path`: Path where the response will be saved (optional)
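As a sketch, several of these parameters can be combined in a single config; the values shown are illustrative assumptions, not recommendations:

```yaml
config: >
  {
    "provider": "openai",
    "provider_options": {
      "api_key": "${{ secrets.OPENAI_API_KEY }}"
    },
    "prompt": "Summarize this repository in one paragraph.",
    "model": "gpt-4",
    "temperature": 0.2,
    "max_tokens": 256,
    "save_path": "ai-response.txt"
  }
```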
The action provides the following outputs:
- `text`: Generated response
- `usage`: Token usage statistics
- `finishReason`: Completion status
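A minimal sketch of consuming these outputs in a later step; the `ai` step id is an assumption introduced for this example:

```yaml
- name: Generate AI Text
  id: ai
  uses: 0xjord4n/aixion@v1.2.1
  with:
    config: >
      {
        "provider": "openai",
        "provider_options": {
          "api_key": "${{ secrets.OPENAI_API_KEY }}"
        },
        "prompt": "Your prompt here",
        "model": "gpt-4"
      }

- name: Use the response
  run: |
    echo "Finish reason: ${{ steps.ai.outputs.finishReason }}"
    echo "${{ steps.ai.outputs.text }}"
```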
To contribute to this project:
- Clone the repository: `git clone https://github.com/0xjord4n/aixion.git`
- Install dependencies
- Make your changes
- Submit a pull request
For issues, feature requests, or questions:
- Open an issue
- Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.