Welcome to the official GitHub repository for LouminAI Labs' open-source contributions to Perplexity.ai! This repository hosts a collection of tools, scripts, and integrations developed by LouminAI Labs to enhance and extend the functionalities of Perplexity.ai, an advanced AI-driven search engine and chatbot.
Perplexity AI leverages large language models, including OpenAI's GPT technology, to deliver accurate, context-aware answers to user queries. With its natural language processing capabilities, Perplexity AI provides immediate answers along with relevant sources and citations, making it a powerful tool for information discovery and decision-making.
LouminAI Labs, a division of LouminAI.com, is dedicated to empowering humanity through generative AI. Our mission is to align technology with human needs to create transformative AI solutions that enhance individual and organizational capabilities. We are pioneers in the field, committed to open-source and collaborative initiatives that promote the adoption and understanding of AI technologies.
- Discover: Understand unique needs and goals.
- Strategize: Develop customized AI implementation plans.
- Implement: Integrate cutting-edge AI solutions into workflows.
- Optimize: Continuously refine AI applications to maximize benefits.
LouminAI Labs is led by David Youngblood, a visionary committed to integrating AI with human potential to achieve groundbreaking advancements.
Our flagship project, LEMA (Learning Model Adaptability), is designed to evolve with human progress, providing adaptable and intuitive AI support across various sectors.
This repository includes:
- Python Scripts: Tools and scripts for interacting dynamically with Perplexity.ai's API, offering robust error handling and customizable parameters.
- Integration Examples: Demonstrations of how to integrate Perplexity.ai into various applications and systems.
- Documentation and Guides: Detailed guides on using our tools and contributing to the repository.
To get started with our tools:
- Clone this repository.
- Install the required dependencies.
- Explore the examples to see how to integrate and extend Perplexity.ai's capabilities.
This repository houses dynamic Python scripts designed for interacting with Perplexity AI's API. Developed by LouminAI Labs, these scripts exemplify how to leverage AI to generate responsive, contextual text from user prompts. They facilitate seamless interaction with Perplexity AI, allowing users to switch between AI models dynamically and to control the response generation process comprehensively.
The script includes a dynamic function for sending prompts to Perplexity that allows the AI model used for generating responses to be specified at call time. Users can change models on the fly without altering the script's codebase, making it highly adaptable to varying needs and conditions.
Comprehensive error handling is embedded within the function to ensure that any exceptions occurring during the API request process are caught and reported accurately. This ensures that users can understand and rectify issues quickly, enhancing reliability.
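The dynamic model selection and error handling described above might be sketched as follows. This is a minimal illustration, not the repository's actual implementation: the function and parameter names mirror the usage examples later in this README, while the endpoint URL and response shape assume Perplexity's OpenAI-compatible chat-completions API, and the API key is assumed to live in a `PERPLEXITY_API_KEY` environment variable.

```python
import os
import requests

API_URL = "https://api.perplexity.ai/chat/completions"

def build_payload(prompt, model="llama-3-70b-instruct", **params):
    """Assemble the request body; optional generation parameters
    (temperature, top_p, top_k, stream, ...) pass through unchanged."""
    payload = {"model": model,
               "messages": [{"role": "user", "content": prompt}]}
    payload.update(params)
    return payload

def send_prompt_to_perplexity(prompt, model="llama-3-70b-instruct", **params):
    """Send a prompt and return the generated text, or an error message."""
    headers = {
        "Authorization": f"Bearer {os.environ.get('PERPLEXITY_API_KEY', '')}",
        "Content-Type": "application/json",
    }
    try:
        resp = requests.post(API_URL, headers=headers,
                             json=build_payload(prompt, model, **params),
                             timeout=30)
        resp.raise_for_status()  # surface HTTP 4xx/5xx responses as exceptions
        return resp.json()["choices"][0]["message"]["content"]
    except requests.exceptions.HTTPError as err:
        return f"HTTP error occurred: {err}"
    except requests.exceptions.RequestException as err:
        return f"Request failed: {err}"
```

Because the model is just a keyword argument, switching models is a one-line change at the call site rather than an edit to the script itself.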
The script offers two options for sending prompts to Perplexity AI:
- Option 1: Sends a prompt with default parameters.
- Option 2: Provides detailed control over the response generation process through additional parameters such as `temperature`, `top_p`, `top_k`, `stream`, `presence_penalty`, and `frequency_penalty`.
To use the script with the default settings:
```python
response = send_prompt_to_perplexity("Explain quantum physics.")
print(response)
```
For detailed control over the AI's response, you can specify additional parameters:
```python
response = send_prompt_to_perplexity("Describe the solar system.", temperature=0.5, top_p=0.9, stream=True)
print(response)
```
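Note that passing `stream=True` changes the response shape: tokens arrive incrementally rather than as a single JSON body. A minimal helper for decoding such a stream, assuming Perplexity follows the common OpenAI-style server-sent-events format (`data: {...}` lines terminated by `data: [DONE]`) — the helper names are illustrative, not part of the repository's scripts:

```python
import json

def parse_sse_line(line):
    """Decode one server-sent-events line into a dict, or return None
    for blank lines, comments, and the terminating [DONE] sentinel."""
    line = line.strip()
    if not line.startswith("data:"):
        return None
    body = line[len("data:"):].strip()
    if body == "[DONE]":
        return None
    return json.loads(body)

def collect_stream(lines):
    """Join the incremental content deltas from a streamed response."""
    parts = []
    for line in lines:
        event = parse_sse_line(line)
        if event:
            parts.append(event["choices"][0]["delta"].get("content", ""))
    return "".join(parts)
```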
- Clone the repository:

  ```shell
  git clone https://github.com/LouminAI/perplexity-ai-integration.git
  ```

- Install the required dependencies:

  ```shell
  pip install requests
  ```
Detailed model information and pricing can be found in Perplexity AI's pricing documentation. The script supports various models with different token pricing and request costs, suitable for diverse use cases and budget considerations.
The script allows interaction with several models from Perplexity AI, including but not limited to:
Detailed Model Information and Pricing (as of 24.04.22):
- `llama-3-70b-instruct`: $1.00 per 1M tokens, Context Length: 8192, Chat Completion (default)
- `llama-3-8b-instruct`: $0.20 per 1M tokens, Context Length: 8192, Chat Completion
- `codellama-70b-instruct`: $1.00 per 1M tokens, Context Length: 16384, Chat Completion
- `sonar-small-chat`: $0.20 per 1M tokens, Context Length: 16384, Chat Completion
- `sonar-medium-chat`: $0.60 per 1M tokens, Context Length: 16384, Chat Completion
- `sonar-small-online`: $5 per 1000 requests plus $0.20 per 1M tokens, Context Length: 12000, Chat Completion
- `sonar-medium-online`: $5 per 1000 requests plus $0.60 per 1M tokens, Context Length: 12000, Chat Completion
Other models are available but commented out within the script for ease of use and customization.
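The price table above can be used to estimate what a call will cost before sending it. A small illustrative helper (not part of the repository's scripts; the prices are hardcoded from the table, so they go stale whenever Perplexity updates its pricing):

```python
# Per-1M-token prices from the table above (as of 24.04.22); the two
# "online" models additionally charge a flat $5 per 1000 requests.
TOKEN_PRICE_PER_1M = {
    "llama-3-70b-instruct": 1.00,
    "llama-3-8b-instruct": 0.20,
    "codellama-70b-instruct": 1.00,
    "sonar-small-chat": 0.20,
    "sonar-medium-chat": 0.60,
    "sonar-small-online": 0.20,
    "sonar-medium-online": 0.60,
}
REQUEST_FEE = {"sonar-small-online": 5 / 1000, "sonar-medium-online": 5 / 1000}

def estimate_cost(model, total_tokens, requests_made=1):
    """Estimated USD cost for a given model, token count, and request count."""
    token_cost = TOKEN_PRICE_PER_1M[model] * total_tokens / 1_000_000
    return token_cost + REQUEST_FEE.get(model, 0.0) * requests_made
```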
Errors are managed within the script to provide clear and actionable feedback, particularly focusing on HTTP and validation errors that might occur during API interactions.
This script is a JavaScript client for the Perplexity AI API. It provides a flexible, dynamic function named sendPromptToPerplexity that lets developers send prompts to the API and retrieve generated responses, with parameters controlling the generation process: the prompt text, maximum tokens, AI model, temperature, top-p sampling, and more. Detailed error handling catches and reports exceptions that may occur during the API request, and two options for sending prompts offer different levels of control over generation. The script can be integrated into other applications or used as a standalone tool for experimenting with different prompts and models.
(Additional details may be added here over time.)
We encourage contributions from the community! Whether you're fixing bugs, adding features, or improving documentation, your help is welcome. Please see our CONTRIBUTING.md for guidelines on how to make contributions.
All contributions made to this repository are licensed under the MIT License. See LICENSE for more details.
Join our vibrant community of developers and AI enthusiasts! Connect with us through our website or participate in our forums to discuss AI development, share ideas, and collaborate on projects.
Together, let's drive the future of AI, making technology accessible and beneficial for everyone!