
Update provider ecosystem and enhance functionality #2246

Merged · 23 commits into xtekky:main · Sep 29, 2024

Conversation

@kqlio67 (Contributor) commented Sep 24, 2024

New providers added

  • g4f/Provider/DeepInfraChat.py
    • Model support for text generation: llama-3.1-405b, llama-3.1-70b, llama-3.1-8b, mixtral-8x22b, mixtral-8x7b, wizardlm-2-8x22b, wizardlm-2-7b, qwen-2-72b, phi-3-medium-4k, gemma-2b-27b, minicpm-llama-3-v2.5, mistral-7b, lzlv_70b, openchat-3.6-8b, phind-codellama-34b-v2, dolphin-2.9.1-llama-3-70b.
    • Model that supports vision: minicpm-llama-3-v2.5.


  • g4f/Provider/ChatHub.py
    • Model support for text generation: llama-3.1-8b, mixtral-8x7b, gemma-2, sonar-online



Removed providers



G4F fixes and enhancements

  • Fixed WebUI Scaling error (WebUI Scaling Issue #2228)
  • docs/providers-and-models.md: add providers and models documentation page

  • g4f/client/async_client.py
    • Simplified iter_response and create methods
    • Enhanced iter_image_response with logging
    • Added detailed logging in create_image and generate methods
    • Improved error messages and validation checks
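
A minimal sketch of what the added logging in the image-response path could look like (the logger name, message strings, and chunk shape are illustrative assumptions, not the exact code in g4f/client/async_client.py):

```python
import logging

# Hypothetical logger name; g4f's actual logger may differ.
logger = logging.getLogger("g4f.client")

def iter_image_response(chunks):
    """Scan response chunks for an image URL, logging each step."""
    for chunk in chunks:
        logger.debug("Processing image response chunk: %r", chunk)
        if isinstance(chunk, dict) and "url" in chunk:
            logger.debug("Image URL found: %s", chunk["url"])
            return chunk["url"]
    logger.error("No image found in response")
    return None
```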

  • etc/unittest/async_client.py
    Refactor async tests for chat completions
    • Updated the test cases to utilize the latest structure of AsyncClient and its methods.
    • Ensured proper handling of response types and improved assertions for better coverage.
    • Maintained consistency across tests for max tokens and stream handling.


Provider fixes and improvements

  • g4f/Provider/AI365VIP.py
    Update the AI365VIP provider to support new models and aliases
    • Add 'gpt-3.5-turbo-16k' to the list of supported models
    • Remove 'claude-3-haiku-20240307' from the list of supported models
    • Update model aliases to map 'gpt-3.5-turbo' to 'gpt-3.5-turbo-16k'

  • g4f/Provider/Airforce.py (Advertisement in answers  #2233)
    • Add new image models: 'flux-4o' and 'dall-e-3'
    • Update model aliases for consistency
    • Modify generate_text method to handle message history correctly
    • Add 'max_tokens' parameter support in generate_text method
    • Update request headers to include authorization, cache control, and other compatibility improvements
    • Enhance error handling in generate_text:
      • Check for message length exceeding limits
      • Wrap main logic in try-except block to catch and re-raise errors as ResponseStatusError
    • Improve error handling in generate_image:
      • Handle ClientResponseError and decode errors
      • Raise ResponseStatusError for non-successful HTTP responses and various error scenarios
    • Standardize error reporting using ResponseStatusError across both text and image generation methods
    • Explicitly check for image content type before processing the response in generate_image
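
The try/except standardization described above can be sketched as follows; `ResponseStatusError` here is a local stand-in for g4f's exception class, and the `send_request` callable is a placeholder for the actual HTTP call:

```python
class ResponseStatusError(Exception):
    """Local stand-in for g4f's ResponseStatusError."""

def generate_text(send_request):
    """Run a request callable, normalizing all failures to ResponseStatusError."""
    try:
        status, body = send_request()
        if status != 200:
            # Non-successful HTTP response
            raise ResponseStatusError(f"HTTP {status}: {body}")
        return body
    except ResponseStatusError:
        raise
    except Exception as e:
        # Re-raise decode/transport errors under the common error type
        raise ResponseStatusError(str(e)) from e
```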

  • g4f/Provider/Bixin123.py
    • Add gpt-3.5-turbo to the list of supported models
    • No changes to the functionality or behavior of the code

  • g4f/Provider/Blackbox.py
    • Add a new parameter webSearchMode with a default value of False to the data object sent in the API request
    • The new parameter is added to control the web search mode setting in the Blackbox AI API
    • No changes to the existing functionality or behavior of the code
    • New models added gpt-4o, claude-3.5-sonnet, gemini-pro ([Request] More models for Blackbox #2238)
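
The new parameter amounts to one extra field in the request body; a sketch (field names other than webSearchMode are illustrative placeholders):

```python
def build_blackbox_payload(messages, web_search_mode: bool = False) -> dict:
    """Build the data object sent to the Blackbox API.

    Only webSearchMode reflects the change described above; the
    remaining fields are illustrative placeholders.
    """
    return {
        "messages": messages,
        "webSearchMode": web_search_mode,
    }
```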

  • g4f/Provider/Chatgpt4o.py
    • Update default_model to gpt-4o-mini-2024-07-18
    • Add models list with gpt-4o-mini-2024-07-18 as the only available model
    • Introduce model_aliases dictionary for mapping model aliases to actual model names
    • Remove model_to_url and get_default_model methods

  • g4f/Provider/DDG.py
    • Remove base64 encoding for URLs and use plain strings
    • Simplify headers and user agent string
    • Improve error handling in get_vqd method
    • Refactor create_async_generator for better readability and efficiency
    • Remove Conversation class and use dict for conversation state
    • Update model handling and add get_model method
    • Add support for system messages and stream mode
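
The get_model pattern added here (and used by several providers in this PR) resolves a requested name through the models list and model_aliases, falling back to default_model. A sketch with hypothetical model names, mirroring the g4f provider convention:

```python
class Provider:
    # Hypothetical model names for illustration only
    default_model = "gpt-4o-mini"
    models = ["gpt-4o-mini", "claude-3-haiku", "llama-3.1-70b"]
    model_aliases = {"gpt-4o": "gpt-4o-mini"}

    @classmethod
    def get_model(cls, model: str) -> str:
        """Resolve a requested model name to a supported one."""
        if not model:
            return cls.default_model
        if model in cls.models:
            return model
        return cls.model_aliases.get(model, cls.default_model)
```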


  • g4f/Provider/Liaobots.py
    • Added new models for OpenAI's o1 and Grok series.
    • Updated default model to gpt-3.5-turbo.
    • Revised model aliases to include new models.
    • Adjusted context and token limits for several models.

  • g4f/Provider/LiteIcoding.py
    • Added support for multiple bearer tokens with round-robin selection to improve API reliability.
    • Introduced model aliases for better model handling.
    • Maintained improved error handling for client responses.
    • Optimized content decoding and response filtering for cleaner outputs.
    • Updated headers for enhanced compatibility with the API.
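
The round-robin token selection can be sketched as below (token values are placeholders; the provider keeps a list of bearer tokens and cycles through them per request):

```python
from itertools import cycle

class TokenPool:
    """Cycle through bearer tokens so requests spread across them."""

    def __init__(self, tokens):
        self._cycle = cycle(tokens)

    def next_auth_header(self) -> dict:
        # Each call advances to the next token in round-robin order
        return {"Authorization": f"Bearer {next(self._cycle)}"}
```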

  • g4f/Provider/MagickPen.py
    • Refactor API credential fetching and improve async generator.
    • Consolidate API endpoint definitions.
    • Add support for streaming and message history.
    • Enhance error handling for credential extraction.
    • Update payload structure for API requests.

  • g4f/Provider/ChatGpt.py
    • Refine model list and assignment logic.
    • Revert to explicit model checks in create_completion.

  • g4f/Provider/ChatGptEs.py
    • Change class inheritance to AsyncGeneratorProvider for async support
    • Implement model resolution with get_model method
    • Update headers and requests to use asynchronous context
    • Allow streaming responses and enhance model management with aliases

  • g4f/Provider/Upstage.py
    Change default model from upstage/solar-1-mini-chat to solar-pro and include solar-pro in the models list.

  • g4f/Provider/HuggingChat.py (Update provider ecosystem and enhance functionality #2246 (comment))
    • Updated models list with Qwen/Qwen2.5-72B-Instruct
    • Added alias for Qwen2.5-72B model
    • Add new models: Hermes-3-Llama-3.1-8B and Mistral-Nemo-Instruct-2407
    • Update existing model entries for Phi-3.5-mini-instruct
    • Adjust model aliases to reflect new models and naming conventions
    • Remove outdated models

  • g4f/Provider/Nexra.py
    • Add support for new models from NexraBing, NexraChatGPT, NexraChatGPT4o, NexraChatGPTWeb, NexraGeminiPro, NexraImageURL, NexraLlama, and NexraQwen
    • Introduce dynamic API endpoint retrieval based on model
    • Update model aliases and add sdxl-turbo as the default image model

This change improves extensibility and allows for easier addition of new AI models and their corresponding endpoints.
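
A sketch of dynamic endpoint retrieval based on the requested model (the URLs and mapping below are placeholders, not Nexra's real endpoints):

```python
# Hypothetical model-to-endpoint mapping for illustration
API_ENDPOINTS = {
    "gpt-4o": "https://nexra.example/api/chatgpt4o",
    "gemini-pro": "https://nexra.example/api/gemini",
    "sdxl-turbo": "https://nexra.example/api/image",
}
DEFAULT_ENDPOINT = "https://nexra.example/api/chat"

def get_endpoint(model: str) -> str:
    """Return the API endpoint for a model, with a generic fallback."""
    return API_ENDPOINTS.get(model, DEFAULT_ENDPOINT)
```

Adding a new model then only requires a new dictionary entry, which is what makes the approach easy to extend.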



Known provider issues

  • Cloudflare protection currently affects the following providers, which may impact their reliability and usability: g4f/Provider/AI365VIP.py, g4f/Provider/AiChatOnline.py, g4f/Provider/Chatgpt4o.py, g4f/Provider/Chatgpt4Online.py, g4f/Provider/ChatgptFree.py, g4f/Provider/FreeNetfly.py, g4f/Provider/Koala.py, and g4f/Provider/PerplexityLabs.py. Further investigation and potential workarounds are required.

  • Captcha challenges currently affect g4f/Provider/AiChats.py and g4f/Provider/Bing.py, with the same impact on reliability and usability.


@TheFirstNoob commented Sep 24, 2024

Hi! Thanks for your work!
Any info on why ChatGot was removed? It was working fully fine.

Tested on 24.09 (14:40 UTC+3)

    {
        "model": "gemini-pro",
        "provider": "ChatGot",
        "response": "Hello there! How can I assist you today?"
    },

Also, CodeNews is listed as removed, but below you posted updates for CodeNews. Typo?

@TheFirstNoob commented Sep 24, 2024

Okeeey, now I'll list some problems (or maybe they're only on my end).
Some providers on this list you have already fixed.

  1. Return CodeNews:
  • You removed it in the main list update, but the file was not restored. My mistake? Does it need to be removed or not? :)
  2. AsyncClient (g4f func) for images:
  • I started testing all the g4f docs and found some problems, such as the AsyncClient function for image generation.
    If we use g4f.client for image generation everything works fine, but if we try AsyncClient we get "No provider found" in the response, while the terminal shows `Use RetryProvider (Provider name) and model (model name)`.
  3. AsyncClient (g4f func) for vision:

My code for test:

import requests
import asyncio
from g4f.client import AsyncClient
from g4f.Provider import Replicate  # requires an api_key, don't forget to add it below!

async def main():
    client = AsyncClient(provider=Replicate, api_key="add_api_here")

    image_url = "https://kartinki.pics/pics/uploads/posts/2022-09/thumbs/1663711643_52-kartinkin-net-p-yaponskaya-yenotovidnaya-sobaka-tanuki-pin-54.jpg"
    image = requests.get(image_url, stream=True).raw

    response = await client.chat.completions.create(
        messages=[{"role": "user", "content": "What is on image?"}],
        model="yorickvp/llava-13b",
        image=image
    )

    print(response.choices[0].message.content)

if __name__ == "__main__":
    asyncio.run(main())
  4. Prodia provider
    I try to call models like model = model_name but always get "Unknown model". I think that's my problem and I don't understand how to use this provider correctly. Maybe the model calls need rework?
    I would be very grateful if you could provide a very simple snippet to test this provider.

Thanks!
If there are any questions about how and what I did to check, then please write to me. I will try to explain more precisely.

@Felitendo commented Sep 25, 2024

For bypassing Cloudflare Protection take a look at this: https://github.com/FlareSolverr/FlareSolverr
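
FlareSolverr runs as a local proxy with a small HTTP API; a request to it could be sketched like this (the default port 8191 and the request.get command follow its README, but verify against your installation before relying on them):

```python
import json

# Default local endpoint from the FlareSolverr README; adjust if you
# run it on another host/port.
FLARESOLVERR_URL = "http://localhost:8191/v1"

def build_flaresolverr_request(target_url: str, max_timeout_ms: int = 60000) -> str:
    """Build the JSON body for a FlareSolverr request.get command."""
    return json.dumps({
        "cmd": "request.get",
        "url": target_url,
        "maxTimeout": max_timeout_ms,
    })
```

The resulting JSON would be POSTed to FLARESOLVERR_URL, and the response contains the Cloudflare-cleared page plus cookies that a provider could reuse.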

@xtekky (Owner) commented Sep 25, 2024

@kqlio67 I saw that you deleted all providers in the "deprecated" folder. It would be better to keep them for future reference and in case they get un-patched...

Other than that, very good pull; will merge soon once you add them back.

Also I am thinking about implementing a cloudflare solving mechanism, that will be available to all providers if needed

@kqlio67 (Contributor, Author) commented Sep 25, 2024

Hello! @TheFirstNoob

Response to point 1. (Return CodeNews)
The CodeNews provider was removed because it does not work. If it starts working stably again, I will definitely bring it back, but at the moment it is non-functional, so it remains removed.


Response to point 2. (AsyncClient (g4f func) for images)
Here's an example code that demonstrates the successful use of AsyncClient for image generation:

import asyncio
from g4f.client import Client

async def main():
    client = Client()
    images = await client.async_images()
    response = await images.async_generate(
        model="flux",
        prompt="a white siamese cat",
    )
    image_url = response.data[0].url
    print(f"Generated image URL: {image_url}")

if __name__ == "__main__":
    asyncio.run(main())

Code execution result:

python main.py 
Generated image URL: https://api.airforce/imagine2?prompt=a+white+siamese+cat&size=1:1&model=flux

Judging from the result, AsyncClient successfully generates an image URL based on the provided prompt "a white siamese cat" using the "flux" model.

If you don't encounter any errors when using AsyncClient for image generation and you get the expected result, then the problem you reported earlier may be specific to a certain usage scenario or depend on other factors.

If you still encounter problems when using AsyncClient to generate images in other parts of your code, please provide more details about the specific scenario where the error occurs so that I can better understand the issue and try to help you.


Response to point 3. (AsyncClient (g4f func) for vision)
I've modified your code to ensure the image is passed in the correct format. I've tested this with DeepInfraChat and Blackbox providers.

Although I haven't tested it with Replicate, it should work:

import requests
import asyncio
import base64
from g4f.client import AsyncClient
from g4f.Provider import DeepInfraChat, Blackbox, Replicate

import g4f.debug
g4f.debug.logging = True
g4f.debug.verbose = True
g4f.debug.version_check = False

def to_data_uri(image_data):
    base64_image = base64.b64encode(image_data).decode('utf-8')
    return f"data:image/jpeg;base64,{base64_image}"

async def analyze_image(image_url, provider, model=None, api_key=None):
    client = AsyncClient(provider=provider, api_key=api_key)

    image_data = requests.get(image_url).content
    image_base64 = to_data_uri(image_data)

    if provider == Blackbox:
        messages = [
            {
                "role": "user",
                "content": "What is on this image? Describe it in detail.",
                "data": {
                    "imageBase64": image_base64,
                    "fileText": "image.jpg"
                }
            }
        ]
    elif provider == Replicate:
        image = requests.get(image_url, stream=True).raw
        messages = [{"role": "user", "content": "What is on this image? Describe it in detail."}]
    else:
        messages = [
            {
                "role": "user",
                "content": [
                    {
                        "type": "image_url",
                        "image_url": {"url": image_base64}
                    },
                    {
                        "type": "text",
                        "text": "What is on this image? Describe it in detail."
                    }
                ]
            }
        ]

    try:
        if provider == Replicate:
            response = await client.chat.completions.create(
                messages=messages,
                model=model,
                image=image
            )
        else:
            response = await client.chat.completions.create(
                messages=messages,
                model=model
            )
        if isinstance(response, str):
            return response
        return response.choices[0].message.content
    except Exception as e:
        return f"Error: {str(e)}"

async def main():
    image_url = "https://kartinki.pics/pics/uploads/posts/2022-09/thumbs/1663711643_52-kartinkin-net-p-yaponskaya-yenotovidnaya-sobaka-tanuki-pin-54.jpg"
    
    providers = [
        (DeepInfraChat, "openbmb/MiniCPM-Llama3-V-2_5", None),
        (Blackbox, "blackbox", None),
        (Replicate, "yorickvp/llava-13b", "your_replicate_api_key_here")
    ]

    for provider, model, api_key in providers:
        print(f"\nTrying with {provider.__name__} and model {model}")
        result = await analyze_image(image_url, provider, model, api_key)
        print(result)

if __name__ == "__main__":
    asyncio.run(main())

Here are the results:

python main.py 

Trying with DeepInfraChat and model openbmb/MiniCPM-Llama3-V-2_5
Using DeepInfraChat provider and openbmb/MiniCPM-Llama3-V-2_5 model
The image features a raccoon, which is a mammal belonging to the genus Procyon. These animals are known for their distinctive facial markings, which include a black "mask" around the eyes, as well as their dexterous hands and bushy tails. The raccoon in the image is standing on what appears to be a paved surface, surrounded by natural debris like leaves and plants, indicating that the location could be a park or a wooded area near human habitation. The raccoon's fur is predominantly gray and black, but with a distinctive lighter fur on its back, which is a characteristic feature of many raccoon species. This kind of fur pattern helps them to blend into their environment, providing camouflage. The animal's posture and the direction of its gaze suggest it is alert and aware of its surroundings.

Trying with Blackbox and model blackbox
Using Blackbox provider and blackbox model
Unfortunately, I'm a text-based AI assistant and do not have the capability to visually access or analyze images. However, I can try to guide you through a process to help you describe the image.

If you can provide more context or details about the image, I can try to help you identify what's on it. Alternatively, you can describe the image to me, and I can help you clarify or provide more information about the objects, scenes, or elements you see in the image.

Please provide more context or describe the image to me, and I'll do my best to assist you.

Trying with Replicate and model yorickvp/llava-13b
Using Replicate provider and yorickvp/llava-13b model
Error: Response 401: {"title":"Unauthenticated","detail":"You did not pass a valid authentication token","status":401}

Key modifications:
1. Extended functionality:

  • The new code supports multiple providers (DeepInfraChat, Blackbox, Replicate), not just Replicate.

2. Code structure:

  • Added an analyze_image function that handles requests for all providers.
  • The main function now includes a loop to iterate through all providers.

3. Image processing:

  • Added a to_data_uri function to convert images to base64 format.
  • Different providers use different image formats (base64, raw bytes).

4. Message formatting:

  • Each provider uses its specific message format.

5. API calls:

  • For Replicate, the original call format with the image parameter is preserved.
  • For other providers, a different format without image is used.

6. Error handling:

  • Added general exception handling for all providers.

7. Configuration:

  • Added debug settings (g4f.debug).
  • API keys and models are now passed as parameters.

8. Flexibility:

  • The new code allows for easy addition of new providers and models.

Try running this modified code and see if you can successfully pass the image to the Replicate provider and receive a response.

If you encounter any issues, please report the error or provide additional information so I can assist you further.


Response to point 4. (Prodia provider)
Thank you for reporting this issue. I appreciate your feedback. I've verified that the Prodia provider is working correctly in GUI-G4F, but there seems to be a problem when using it in CLI-G4F. I'll investigate this error and work on fixing it.

@kqlio67
Copy link
Contributor Author

kqlio67 commented Sep 25, 2024

@Felitendo Thanks for the useful advice! I'll definitely take a look at the FlareSolverr project for bypassing Cloudflare security. It can be very useful to improve the functionality of the project. I appreciate your input and the time you took to share this resource.

@kqlio67 (Contributor, Author) commented Sep 25, 2024

@xtekky Thank you for your feedback! I understand your point about keeping the deprecated providers and agree that it could be useful for future reference. I have already restored the deprecated folder with all the previous providers.

Regarding the Cloudflare solving mechanism - that sounds like a great idea that could significantly improve functionality for all providers. While I'm still in the learning process, I've been actively contributing useful fixes that have been accepted, and I'm quickly picking up new skills. I'm excited to see how this develops and potentially contribute where I can.

Please let me know if any additional changes or clarifications are needed for my pull request. I'm eager to continue learning and improving my contributions to the project. Thank you for considering my changes and for providing these opportunities to grow and contribute!

@TheFirstNoob

@kqlio67 Hello! Thanks for the reply!
For image generation you're still using g4f.Client, but we need g4f.AsyncClient. Maybe I don't understand the g4f docs correctly, but the AsyncClient API key features say image generation is supported.

import asyncio
from g4f.client import AsyncClient
from g4f.Provider import Airforce

async def main():
    client = AsyncClient(image_provider='Airforce')
    response = await client.images.generate(
        model="flux",
        prompt="Black Cat with Red eyes"
    )
    image_url = response.data[0].url
    print(image_url)

asyncio.run(main())

Result:
g4f.errors.RetryNoProviderError: No provider found

@TheFirstNoob commented Sep 25, 2024

HuggingChat now uses an updated list of new models:

  1. NousResearch/Hermes-3-Llama-3.1-8B
  2. mistralai/Mistral-Nemo-Instruct-2407
  3. microsoft/Phi-3.5-mini-instruct

Old models (remove?):

"mixtral-8x7b": "mistralai/Mixtral-8x7B-Instruct-v0.1",
"mixtral-8x7b-dpo": "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO",
"mistral-7b": "mistralai/Mistral-7B-Instruct-v0.3",
"phi-3-mini-4k": "microsoft/Phi-3-mini-4k-instruct"

UPDATED:
Found new vision model: https://huggingface.co/spaces/openbmb/MiniCPM-V-2_6
Direct access: (https://openbmb-minicpm-v-2-6.hf.space/)

Ovis1.6: https://huggingface.co/spaces/AIDC-AI/Ovis1.6-Gemma2-9B
Direct access: (https://aidc-ai-ovis1-6-gemma2-9b.hf.space/)

NEW GEMINI PROVIDER (to reverse-engineer)
https://aichatfree.info/

@Felitendo

@Felitendo Thanks for the useful advice! I'll definitely take a look at the FlareSolverr project for bypassing Cloudflare security. It can be very useful to improve the functionality of the project. I appreciate your input and the time you took to share this resource.

No problem, thanks for fixing #2228 btw :)

@kqlio67 (Contributor, Author) commented Sep 25, 2024

@kqlio67 Hello! Thanks for the reply! For image generation you're still using g4f.Client, but we need g4f.AsyncClient. Maybe I don't understand the g4f docs correctly, but the AsyncClient API key features say image generation is supported.

import asyncio
from g4f.client import AsyncClient
from g4f.Provider import Airforce

async def main():
    client = AsyncClient(image_provider='Airforce')
    response = await client.images.generate(
        model="flux",
        prompt="Black Cat with Red eyes"
    )
    image_url = response.data[0].url
    print(image_url)

asyncio.run(main())

Result: g4f.errors.RetryNoProviderError: No provider found

@kqlio67 kqlio67 closed this Sep 25, 2024
@kqlio67 kqlio67 reopened this Sep 25, 2024
@kqlio67 (Contributor, Author) commented Sep 26, 2024

@kqlio67 Hello! Thanks for the reply! For image generation you're still using g4f.Client, but we need g4f.AsyncClient. Maybe I don't understand the g4f docs correctly, but the AsyncClient API key features say image generation is supported.

import asyncio
from g4f.client import AsyncClient
from g4f.Provider import Airforce

async def main():
    client = AsyncClient(image_provider='Airforce')
    response = await client.images.generate(
        model="flux",
        prompt="Black Cat with Red eyes"
    )
    image_url = response.data[0].url
    print(image_url)

asyncio.run(main())

Result: g4f.errors.RetryNoProviderError: No provider found

@TheFirstNoob Thank you for bringing this to my attention! I've just tried running your code and it seems to be working now. Here's what I got:

python main.py 
https://api.airforce/imagine2?prompt=Black+Cat+with+Red+eyes&size=1:1&model=flux

It looks like the issue has been resolved. The image generation is now functioning correctly without the error you encountered earlier.

Regarding the documentation, it seems that after recent updates and contributions, it might not be fully up-to-date. I'm sure the project maintainers or other contributors will update it soon to reflect the current functionality.

Thanks again for your input. It's great to see the community helping each other out!

@TheFirstNoob

@kqlio67 Yooooo, this is a big update! Thank you very much for your work!
Although all I did on my part was "find" problems and write very little code, since 99.9% of the work was done by you)
Thank you very much!

I'll fully test the code after the merge.

@Felitendo

Remove the AIChatFree Provider please and add https://gprochat.com/. It's the original version and it's also more up to date (look at the copyright date on the bottom).

https://github.com/babaohuang/GeminiProChat

@kqlio67 (Contributor, Author) commented Sep 27, 2024

Remove the AIChatFree Provider please and add https://gprochat.com/. It's the original version and it's also more up to date (look at the copyright date on the bottom).

https://github.com/babaohuang/GeminiProChat

@Felitendo, thanks for the suggestion! I've gone ahead and added the new GPROChat provider from https://gprochat.com/, as you recommended. However, I decided to keep the AIChatFree provider for now, since it's still working. I don't see a reason to remove a provider if it's functioning properly. This way, users will have more options to choose from. Of course, if any issues come up with AIChatFree in the future, I'll consider removing it then. But for now, I think it's best to keep both providers available, as they're both working and giving users additional choices.

@Felitendo

Remove the AIChatFree Provider please and add https://gprochat.com/. It's the original version and it's also more up to date (look at the copyright date on the bottom).
https://github.com/babaohuang/GeminiProChat

@Felitendo, thanks for the suggestion! I've gone ahead and added the new GPROChat provider from https://gprochat.com/, as you recommended. However, I decided to keep the AIChatFree provider for now, since it's still working. I don't see a reason to remove a provider if it's functioning properly. This way, users will have more options to choose from. Of course, if any issues come up with AIChatFree in the future, I'll consider removing it then. But for now, I think it's best to keep both providers available, as they're both working and giving users additional choices.

Alright, that's fine. I just thought that we have too many providers already, so maybe you could move it to the deprecated folder to restore later if needed. But if you don't, that's fine too :)

@xtekky xtekky merged commit 0deb0f6 into xtekky:main Sep 29, 2024
1 check passed
@kqlio67 (Contributor, Author) commented Oct 2, 2024

4. Prodia provider
I try to call models like model = model_name but always get "Unknown model". I think that's my problem and I don't understand how to use this provider correctly. Maybe the model calls need rework?
I would be very grateful if you could provide a very simple snippet to test this provider.

Hi @TheFirstNoob,

You asked about how to use the Prodia provider for generating images. Below is an example of how you can obtain the URL of an image generated by the model using an asynchronous client:

import asyncio
from g4f.client import AsyncClient
from g4f.Provider import Prodia

async def main():
    # Create an AsyncClient with the specified image provider
    client = AsyncClient(image_provider=Prodia)

    # Generate an image based on the prompt
    response = await client.images.generate(
        model="absolutereality_v181.safetensors [3d9d4d2b]",
        prompt="a white siamese cat"
    )

    # Check if there are any images in the response
    if response.data:
        # Loop through and print the URL of each generated image
        for img in response.data:
            print(img.url)
    else:
        print("No images found.")

# Run the main function
asyncio.run(main())

Result: https://images.prodia.xyz/b31c5be8-7850-4b5b-b4ba-e8eef3aed957.png

This should help you get started. Even if you're already familiar, it might still be useful for you or others who come across it.

@TheFirstNoob

@kqlio67 Hi! Thanks a lot for providing this code and for your work on it :)
