Auto-Vicuna

0x1 Prerequisite

Install Vicuna and its model weights (see the Vicuna model example).

Install Auto-GPT based on DGdev91's PR #2594 (2023/04/22). You can do this by:

 git clone https://github.com/DGdev91/Auto-GPT.git # clone DGdev91's fork
 cd Auto-GPT
 git checkout b349f2144f4692de7adb9458a8888839f48bd95c # checkout the PR change

0x2 Guide

  1. Start the FastChat API server
python -m fastchat.serve.controller

python -m fastchat.serve.model_worker --model-name 'gpt-3.5-turbo' --model-path /path/to/vicuna/weights

export FASTCHAT_CONTROLLER_URL=http://localhost:21001
python3 -m fastchat.serve.api --host localhost --port 8000

# or windows powershell
$env:FASTCHAT_CONTROLLER_URL="http://localhost:21001"
python -m fastchat.serve.api --host localhost --port 8000

--model-name must be "gpt-3.5-turbo" because Auto-GPT expects that model name :( I don't know how to change it on the Auto-GPT side.
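With the server from the commands above running, Auto-GPT's OpenAI calls go to the local FastChat endpoint instead of api.openai.com. As a rough sketch of the request involved (the host, port, and payload shape are assumptions based on the commands in this guide, following the OpenAI chat-completions format):

```python
# Sketch: the chat-completions request that would hit the local FastChat
# API server started above. Host and port match the commands in this guide.

def build_chat_request(messages, host="localhost", port=8000):
    """Build the URL and JSON body for a chat completion against local FastChat."""
    url = f"http://{host}:{port}/v1/chat/completions"
    body = {
        # Must match the --model-name passed to the model worker,
        # which in turn must be "gpt-3.5-turbo" for Auto-GPT.
        "model": "gpt-3.5-turbo",
        "messages": messages,
    }
    return url, body

url, body = build_chat_request([{"role": "user", "content": "Hello, Vicuna!"}])
```

Setting `OPENAI_API_BASE=http://localhost:8000/v1` (as DGdev91's PR allows) makes the openai client send exactly this kind of request.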

  2. Modify autogpt/llm_utils.py: delete lines 96 and 97 (the temperature and max_tokens arguments), changing
    else:
        response = openai.ChatCompletion.create(
            model=model,
            messages=messages,
            temperature=0.9,
            max_tokens=3094,
        )

to

    else:
        response = openai.ChatCompletion.create(
            model=model,
            messages=messages,
        )

I don't know why this is needed; FastChat may not support the temperature and max_tokens parameters.
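The edit above amounts to stripping the arguments FastChat may reject before calling the API. A minimal sketch of that idea (the `UNSUPPORTED` set and the helper name are assumptions, not part of Auto-GPT):

```python
# Sketch: drop keyword arguments that the FastChat backend may not accept,
# keeping only what the edited llm_utils.py call passes through.

UNSUPPORTED = {"temperature", "max_tokens"}  # assumed from the edit above

def filter_chat_kwargs(**kwargs):
    """Return only the kwargs safe to pass to openai.ChatCompletion.create."""
    return {k: v for k, v in kwargs.items() if k not in UNSUPPORTED}

clean = filter_chat_kwargs(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hi"}],
    temperature=0.9,
    max_tokens=3094,
)
```

With this, the call reduces to `openai.ChatCompletion.create(**clean)`, matching the two-argument form shown above.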

0x3 todo

Vicuna may not generate a command to execute; adding more prompt engineering might help.
