
Questions about using robotic_anything_offline.py #6

Closed
euminds opened this issue Jun 14, 2023 · 6 comments
@euminds

euminds commented Jun 14, 2023

Hi,
thanks for your excellent work.

  1. I tried to run robotic_anything_gpt_online.py. However, even with a proxy, I still got the error "openai.error.AuthenticationError". My network conditions cannot be improved at the moment.
  2. I tried to use the LLaMA-Adapter to implement Instruct2Act. I would like to know how to use the LLaMA-Adapter together with robotic_anything_offline.py. Will this part of the code be open-sourced later?

Any help would be much appreciated.
Bests
@SiyuanHuang95
Collaborator

Hi,

thanks for your interest.

  1. Have you updated the required API key? The problem seems to be related to your API key.
  2. The offline version currently only fakes the generation; since the robotics task is quite structured, it is essentially a text completion.
  3. Generation with the LLaMA-Adapter is straightforward, as it is an easy-to-use project. You just need to install the Adapter-related dependencies and change the line:
            response = openai.Completion.create()

to the Adapter equivalent. Everything else stays the same.

Hope that would help you.

Bests.
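The swap described in point 3 can be sketched as a small backend dispatcher. This is only a sketch: `llama_adapter_generate` is a hypothetical stand-in for the Adapter's real generation call, and the OpenAI call mirrors the legacy `openai.Completion.create` API the repo uses.

```python
def llama_adapter_generate(prompt, max_tokens=512):
    # Hypothetical stand-in for the LLaMA-Adapter generation call;
    # replace the body with the Adapter project's actual API.
    return "generated code for: " + prompt

def generate(prompt, backend="openai"):
    """Dispatch the prompt to the chosen LLM backend; the rest of the
    pipeline stays the same regardless of which backend produced the text."""
    if backend == "openai":
        import openai  # only needed on the online path
        response = openai.Completion.create(
            engine="text-davinci-003", prompt=prompt, max_tokens=512)
        return response["choices"][0]["text"]
    # offline / Adapter path
    return llama_adapter_generate(prompt)

print(generate("pick up the red block", backend="llama"))
```

The point is that only the one call site changes; everything downstream consumes plain text either way.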

@euminds
Author

euminds commented Jun 15, 2023


I have tested my API on Colab, and it was working fine. I tried using a different proxy, but I still encountered the same error: "openai.error.AuthenticationError: empty message".

@SiyuanHuang95
Collaborator

A few bugs? Can you point them out?

I will fix them a little later.

SiyuanHuang95 reopened this Jun 15, 2023
@euminds
Author

euminds commented Jun 16, 2023

The current instructions in the readme file state that the OpenAI API key should be modified in the visual_programming_prompt/prompt_generation.py file. However, the code file robotic_anything_gpt_online.py does not call the prompt_generation.py file but instead uses robotic_exec_generation.py. Consequently, the correct location to modify the OpenAI API key and proxy settings should be in the robotic_exec_generation.py file.

@euminds
Author

euminds commented Jun 16, 2023

More implementation steps:

1. Comment out the following lines in the environment.yaml file:

   - torch==1.12.1+cu113
   - torchaudio==0.12.1+cu113
   - torchvision==0.13.1+cu113
   - vima==0.1

2. Install PyTorch and related packages:

       conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch

   Note: this installs the specific versions of PyTorch, torchvision, and torchaudio that are compatible with the project.

3. Install the VIMA package:

       git clone https://github.com/vimalabs/VimaBench
       cd VimaBench
       pip install -e .

4. Install the SAM package:

       git clone https://github.com/facebookresearch/segment-anything.git
       cd segment-anything
       pip install -e .

5. Install the open_clip package:

       git clone https://github.com/mlfoundations/open_clip.git
       cd open_clip
       pip install -e .

6. Download the required ViT-H models from the Hugging Face model repository.

7. Install the additional dependencies cchardet and chardet:

       pip install cchardet
       pip install chardet

These packages are required for proper functionality.
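After running the steps above, a quick sanity check can confirm the environment is complete. A small sketch; the module names below are my assumption of each package's importable name and may differ in your setup:

```python
import importlib.util

def check(packages):
    """Map each package name to whether it can currently be imported."""
    return {p: importlib.util.find_spec(p) is not None for p in packages}

# Importable names for the packages installed in the steps above.
REQUIRED = ["torch", "torchvision", "torchaudio", "vima",
            "segment_anything", "open_clip", "cchardet", "chardet"]

for pkg, ok in check(REQUIRED).items():
    print(f"{pkg}: {'ok' if ok else 'MISSING'}")
```

Anything reported MISSING points at the step that still needs to be redone.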

@SiyuanHuang95
Collaborator

@euminds I have updated the readme. Thanks for the info!

Hope it works now. :-) Enjoy it.
