
feat(weave): Implement VertexAI integration #2743

Draft · wants to merge 7 commits into master
Conversation

@soumik12345 (Contributor) commented Oct 21, 2024

Description

This PR adds the autopatch integration for the VertexAI Generative Models API.
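For context, "autopatch" here means the SDK's generation methods are wrapped at init time so that every call is traced without user code changes. A minimal, stdlib-only sketch of that pattern (the `FakeModel`, `patch`, and `calls` names are hypothetical illustrations, not weave's actual internals):

```python
import functools

# Records one entry per traced call; stands in for a trace backend.
calls = []

class FakeModel:
    """Hypothetical stand-in for a generative-model client."""
    def generate_content(self, prompt):
        return f"response to: {prompt}"

def patch(cls, name):
    """Replace cls.name with a wrapper that records the call, then delegates."""
    original = getattr(cls, name)

    @functools.wraps(original)
    def traced(self, *args, **kwargs):
        calls.append((name, args, kwargs))  # record the call (the "trace")
        return original(self, *args, **kwargs)

    setattr(cls, name, traced)

# Patching happens once, up front; afterwards, calls look unchanged to the user.
patch(FakeModel, "generate_content")
result = FakeModel().generate_content("hello")
```

After patching, `result` is still the normal return value, and `calls` holds one `("generate_content", ("hello",), {})` record.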

Supported cases

Sync Generation

Without Streaming

import vertexai
import weave
from vertexai.generative_models import GenerativeModel

weave.init(project_name="google_ai_studio-test")
vertexai.init(project="wandb-growth", location="us-central1")
model = GenerativeModel("gemini-1.5-flash-002")
response = model.generate_content(
    "What's a good name for a flower shop specialising in selling dried flower bouquets?"
)

Sample trace

With Streaming

import vertexai
import weave
from vertexai.generative_models import GenerativeModel

weave.init(project_name="google_ai_studio-test")
vertexai.init(project="wandb-growth", location="us-central1")
model = GenerativeModel("gemini-1.5-flash-002")
response = model.generate_content(
    "What's a good name for a flower shop specialising in selling dried flower bouquets?",
    stream=True,
)

Sample trace
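With `stream=True` the return value is an iterator of chunks rather than a single response, so the caller assembles the text by looping over it. A sketch of that consumption pattern with a stand-in iterator (`fake_stream` and its chunk texts are hypothetical):

```python
from types import SimpleNamespace

def fake_stream():
    """Stand-in for a streaming response: yields chunk objects with .text."""
    for part in ["Dried", " & ", "Divine"]:
        yield SimpleNamespace(text=part)

# Collect the streamed pieces into the full response text.
text = "".join(chunk.text for chunk in fake_stream())
```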

Async Generation

Without Streaming

import asyncio

import vertexai
import weave
from vertexai.generative_models import GenerativeModel

weave.init(project_name="google_ai_studio-test")
vertexai.init(project="wandb-growth", location="us-central1")
model = GenerativeModel("gemini-1.5-flash-002")

async def async_generate():
    response = await model.generate_content_async(
        "What's a good name for a flower shop that specializes in selling bouquets of dried flowers?"
    )
    return response

response = asyncio.run(async_generate())

Sample trace

With Streaming

import asyncio

import vertexai
import weave
from vertexai.generative_models import GenerativeModel

weave.init(project_name="google_ai_studio-test")
vertexai.init(project="wandb-growth", location="us-central1")
model = GenerativeModel("gemini-1.5-flash-002")

async def get_response():
    chunks = []
    async for chunk in await model.generate_content_async(
        "What's a good name for a flower shop that specializes in selling bouquets of dried flowers?",
        stream=True,
    ):
        if chunk.text:
            chunks.append(chunk.text)
    return chunks

response = asyncio.run(get_response())

Sample trace
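Note the double step in the async streaming example: the call is first awaited, and the awaited result is then iterated with `async for`. A stdlib-only sketch of that shape (`fake_generate_async` and the chunk texts are hypothetical stand-ins):

```python
import asyncio

async def fake_generate_async(prompt, stream=False):
    """Stand-in: awaiting the call yields an async iterator of chunks."""
    async def gen():
        for part in ["Petal", " ", "Archive"]:
            yield part
    return gen()

async def collect():
    chunks = []
    # Mirror the real call shape: await the coroutine, then async-iterate.
    async for part in await fake_generate_async("prompt", stream=True):
        chunks.append(part)
    return "".join(chunks)

result = asyncio.run(collect())
```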

@soumik12345 soumik12345 self-assigned this Oct 21, 2024
@soumik12345 soumik12345 requested a review from a team as a code owner October 21, 2024 18:58
@soumik12345 soumik12345 marked this pull request as draft October 21, 2024 19:16

socket-security bot commented Oct 21, 2024

New dependencies detected.

| Package | New capabilities | Transitives | Size | Publisher |
|---|---|---|---|---|
| pypi/vertexai@1.70.0 | None | 0 | 0 B | |


@soumik12345 soumik12345 changed the title feat(weave): Implement VertexAI implementation feat(weave): Implement VertexAI integration Oct 21, 2024