
[Bug]: Not able to import Llama index modules in tensor rt llms #564

Open
rvssridatta opened this issue Jun 25, 2024 · 1 comment

@rvssridatta

Bug Description
Even though I am following the latest documentation, I am still not able to import llama-index.

Issue 1
Nvidia Jetson container: https://github.com/dusty-nv/jetson-containers?tab=readme-ov-file

Versions:
Ubuntu: 22.04
CUDA: 12.2
Architecture: arm64
JetPack: 6.0

1 photo added

Please provide a standard solution for deploying TensorRT-LLM integrated with llama-index RAG modules.
Device used: Advantech Jetson Orin NX, 16 GB variant

Version
llama-index 0.10.50

Steps to Reproduce

Issue 1:

Followed the dusty-nv documentation step by step, using the provided commands.
Got the error shown in the relevant logs after running: $ jetson-containers run $(autotag tensorrt-llm)

Relevant Logs/Tracebacks
Error:

Looking in indexes: https://pypi.org/simple, https://pypi.nvidia.com
Collecting tensorrt_llm==0.8.0
Downloading tensorrt-llm-0.8.0.tar.gz (6.9 kB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error

× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [6 lines of output]
Traceback (most recent call last):
File "", line 2, in
File "", line 34, in
File "/tmp/pip-install-r7zpl9ve/tensorrt-llm_382951b6d5f34b8798d95f1967eb0620/setup.py", line 90, in
raise RuntimeError("Bad params")
RuntimeError: Bad params
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

@dusty-nv
Owner

@rvssridatta the TensorRT-LLM container is exploratory and not yet officially supported (hope soon). llama-index would be installed independently or with the llama-index container.
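A minimal sketch of that suggestion, assuming the jetson-containers CLI used earlier in this thread (the `llama-index` container name follows the maintainer's comment; exact container availability may vary by JetPack version):

```shell
# Run the llama-index container on its own, rather than relying on the
# exploratory tensorrt-llm container (per the maintainer's comment above).
jetson-containers run $(autotag llama-index)

# Inside the container, verify the import that was originally failing:
python3 -c "import llama_index; print(llama_index.__version__)"
```

If the import succeeds inside that container, the failure above is specific to the tensorrt-llm container build rather than to llama-index itself.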
