fix: conflict package versions in ts examples #678
Conversation
Walkthrough

The changes update import statements to source OpenAI-specific components from @llamaindex/openai instead of the main llamaindex package, and the examples now configure the LLM explicitly.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant ExampleWorkflow
    participant OpenAI as OpenAI (from @llamaindex/openai)
    User->>ExampleWorkflow: Trigger workflow operation
    ExampleWorkflow->>OpenAI: Send LLM request (using gpt-4o-mini)
    OpenAI-->>ExampleWorkflow: Return LLM response
    ExampleWorkflow-->>User: Return processed result
```
Actionable comments posted: 2
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (3)
- packages/server/examples/agentic-rag/index.ts (1 hunks)
- packages/server/examples/package.json (1 hunks)
- packages/server/examples/simple-workflow/calculator.ts (1 hunks)
🔇 Additional comments (5)
packages/server/examples/simple-workflow/calculator.ts (3)
1-1: LGTM: Import refactoring aligns with package restructuring. Moving the OpenAI import to @llamaindex/openai is consistent with the dependency updates and package restructuring.
4-4: LGTM: Clean separation of imports. Keeping Settings and tool in the main llamaindex package while moving OpenAI-specific imports to the dedicated package is a good separation.
7-9: LGTM: Explicit LLM configuration improves clarity. Explicitly setting the LLM model to "gpt-4o-mini" is a good practice that makes the configuration clear and removes ambiguity about which model is being used.
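A minimal sketch of how the top of calculator.ts presumably reads after this change (the import specifiers and model are taken from the review comments above; the calculator tool definitions and workflow code are omitted):

```ts
import { OpenAI } from "@llamaindex/openai";
import { Settings, tool } from "llamaindex";

// Pin the example to a specific model so its behavior is predictable.
Settings.llm = new OpenAI({
  model: "gpt-4o-mini",
});

// tool() is presumably used further down to define the calculator tools (omitted here).
```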
packages/server/examples/agentic-rag/index.ts (2)
1-1: LGTM: Consistent import refactoring for OpenAI components. Moving the OpenAI and OpenAIEmbedding imports to @llamaindex/openai is consistent with the package restructuring seen across the examples.
4-4: LGTM: Proper separation of core LlamaIndex imports. Keeping core LlamaIndex functionality (Document, Settings, VectorStoreIndex) in the main package while separating OpenAI-specific components is a clean architectural choice.
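A sketch of how these imports might be wired together in agentic-rag/index.ts (the Settings assignments and the index construction below are illustrative, not quoted from the diff):

```ts
import { OpenAI, OpenAIEmbedding } from "@llamaindex/openai";
import { Document, Settings, VectorStoreIndex } from "llamaindex";

// OpenAI-specific components come from the dedicated package;
// core abstractions stay in the main llamaindex package.
Settings.llm = new OpenAI({ model: "gpt-4o-mini" });
Settings.embedModel = new OpenAIEmbedding();

// Build a small index and query engine from an in-memory document.
const index = await VectorStoreIndex.fromDocuments([
  new Document({ text: "LlamaIndex is a framework for building LLM applications." }),
]);
const queryEngine = index.asQueryEngine();
```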
Actionable comments posted: 0
🧹 Nitpick comments (2)
packages/server/examples/custom-layout/index.ts (1)
7-10: Consider making the model configurable. The explicit LLM configuration is good for ensuring consistent behavior. However, the hardcoded model name could be made more flexible.
Consider using an environment variable for the model:
```diff
 Settings.llm = new OpenAI({
-  model: "gpt-4o-mini",
+  model: process.env.OPENAI_MODEL || "gpt-4o-mini",
 });
```

packages/server/examples/devmode/index.ts (1)
6-9: Explicit LLM configuration improves consistency. The explicit configuration ensures the example uses the intended model. The pattern is consistent with other example files in this PR.
For better maintainability, consider extracting the model configuration to a shared constant or environment variable, especially since this pattern is repeated across multiple example files.
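One way to do that, assuming a hypothetical shared helper module (the file path, OPENAI_MODEL variable, and createLLM name are illustrative and not part of this PR):

```ts
// packages/server/examples/shared/llm.ts (hypothetical helper module)
import { OpenAI } from "@llamaindex/openai";

// Allow overriding the model via the environment, defaulting to gpt-4o-mini.
export const DEFAULT_MODEL = process.env.OPENAI_MODEL ?? "gpt-4o-mini";

export function createLLM(): OpenAI {
  return new OpenAI({ model: DEFAULT_MODEL });
}
```

Each example could then set Settings.llm = createLLM(); instead of repeating the model name in every file.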
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- packages/server/examples/custom-layout/index.ts (1 hunks)
- packages/server/examples/devmode/index.ts (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (56)
- GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
- GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud, streaming)
- GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
- GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file, streaming)
- GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --no-files, llamaindexserver)
- GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --no-files, streaming)
- GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
- GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud, streaming)
- GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
- GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file, streaming)
- GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
- GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files, streaming)
- GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
- GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud, streaming)
- GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
- GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file, streaming)
- GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --no-files, llamaindexserver)
- GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --no-files, streaming)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud, streaming)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud, llamaindexserver)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file, streaming)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud, streaming)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files, llamaindexserver)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file, llamaindexserver)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files, streaming)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file, streaming)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --no-files, llamaindexserver)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud, streaming)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --no-files, streaming)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud, llamaindexserver)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file, streaming)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud, streaming)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file, llamaindexserver)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files, streaming)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file, streaming)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files, llamaindexserver)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud, streaming)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files, streaming)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
- GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud, llamaindexserver)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file, streaming)
- GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud, streaming)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files, llamaindexserver)
- GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file, llamaindexserver)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files, streaming)
- GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file, streaming)
- GitHub Check: Unit Tests (windows-latest, 3.9)
- GitHub Check: python (20, 3.11, macos-latest, fastapi, --no-files, llamaindexserver)
- GitHub Check: python (20, 3.11, macos-latest, fastapi, --no-files, streaming)
- GitHub Check: lint
🔇 Additional comments (4)
packages/server/examples/custom-layout/index.ts (2)
1-1: LGTM! Import restructuring aligns with dependency conflict resolution. Moving the OpenAI import from llamaindex to @llamaindex/openai helps resolve the dependency conflicts mentioned in the PR objectives.
4-4: Good addition of Settings import. Adding Settings to the imports enables explicit LLM configuration, which improves control over the model behavior.
packages/server/examples/devmode/index.ts (2)
1-1: LGTM! Consistent dependency restructuring. The import change aligns with the dependency conflict resolution strategy and maintains consistency across example files.
3-3: Good addition for explicit configuration. Adding the Settings import enables explicit LLM configuration, improving predictability of the example behavior.
Duplicate Dependency Resolution Issue:
The examples package explicitly listed @llamaindex/workflow@1.1.3 in its dependencies, while @llamaindex/server@workspace:* (linked to packages/server) also depended on @llamaindex/workflow@1.1.3. pnpm resolved these as two separate instances of @llama-flow/core@0.4.1, causing an instanceof mismatch.
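A minimal sketch of why two copies of the same package break instanceof checks (the classes below are stand-ins, not the real @llama-flow/core exports):

```ts
// Two copies of the "same" class, as if resolved from two node_modules locations.
class WorkflowEventFromExamplesCopy {}
class WorkflowEventFromServerCopy {}

const event: unknown = new WorkflowEventFromExamplesCopy();

// Identical source, different class identity: a check written against the
// server's copy fails for objects created from the examples' copy.
console.log(event instanceof WorkflowEventFromServerCopy); // false
```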
To detect the issue:

```
cd packages\server\examples && pnpm why @llama-flow/core
```

This PR removes @llamaindex/workflow from the examples package's dependencies and relies on @llamaindex/server to provide it. This ensures a single instance of @llama-flow/core@0.4.1 is used by both the examples and the server.
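An illustrative sketch of the resulting packages/server/examples/package.json change (the surrounding entries and version specifiers are assumptions, not quoted from the diff):

```diff
 "dependencies": {
   "@llamaindex/openai": "...",
   "@llamaindex/server": "workspace:*",
-  "@llamaindex/workflow": "1.1.3",
   "llamaindex": "..."
 }
```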