Merge pull request #63 from microsoft/vyokky/dev
Code refactor and control filtered
vyokky authored May 5, 2024
2 parents 4df0eb8 + 02ea662 commit 640bd0e
Showing 51 changed files with 5,105 additions and 1,523 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -10,6 +10,7 @@
__pycache__/
**/__pycache__/
*.pyc
+/.VSCodeCounter

# Ignore the config file
ufo/config/config.yaml
6 changes: 3 additions & 3 deletions README.md
@@ -9,7 +9,7 @@
![Python Version](https://img.shields.io/badge/Python-3776AB?&logo=python&logoColor=white-blue&label=3.10%20%7C%203.11) 
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) 
![Welcome](https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat) 
-[![X (formerly Twitter) Follow](https://img.shields.io/twitter/follow/UFO_Agent)](https://twitter.com/intent/follow?screen_name=UFO_Agent)
+<!-- [![X (formerly Twitter) Follow](https://img.shields.io/twitter/follow/UFO_Agent)](https://twitter.com/intent/follow?screen_name=UFO_Agent) -->

</div>

@@ -106,7 +106,7 @@ API_VERSION: "2024-02-15-preview", # "2024-02-15-preview" by default
API_MODEL: "gpt-4-vision-preview", # The only OpenAI model by now that accepts visual input
API_DEPLOYMENT_ID: "YOUR_AOAI_DEPLOYMENT", # The deployment id for the AOAI API
```
-You can also use a non-visual model (e.g., GPT-4) for each agent by setting `VISUAL_MODE: True` and a proper `API_MODEL` (OpenAI) and `API_DEPLOYMENT_ID` (AOAI). You can also optionally set a backup LLM engine in the `BACKUP_AGENT` field in case the above engines fail during inference.
+You can also use a non-visual model (e.g., GPT-4) for each agent by setting `VISUAL_MODE: False` and a proper `API_MODEL` (OpenAI) and `API_DEPLOYMENT_ID` (AOAI). You can also optionally set a backup LLM engine in the `BACKUP_AGENT` field in case the above engines fail during inference.


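The AOAI fields in the diff above all live in the agent configuration file. A minimal sketch of such a configuration is shown below — the values are placeholders, and the `API_TYPE`, `API_BASE`, and `API_KEY` key names are assumptions for illustration rather than the shipped template, so check the actual config file for the exact layout:

```yaml
# Sketch of an Azure OpenAI (AOAI) agent configuration -- values are placeholders.
VISUAL_MODE: True                      # set False to use a non-visual model such as GPT-4
API_TYPE: "aoai"                       # assumed key name; verify against the shipped template
API_BASE: "https://YOUR_ENDPOINT.openai.azure.com"  # assumed key name; your AOAI endpoint
API_KEY: "YOUR_KEY"                    # assumed key name; your AOAI key
API_VERSION: "2024-02-15-preview"      # "2024-02-15-preview" by default
API_MODEL: "gpt-4-vision-preview"      # per the README, the only OpenAI model accepting visual input
API_DEPLOYMENT_ID: "YOUR_AOAI_DEPLOYMENT"  # the deployment id for the AOAI API
BACKUP_AGENT: "YOUR_BACKUP_ENGINE"     # optional fallback LLM engine if the primary fails
```

With `VISUAL_MODE: False`, `API_MODEL` and `API_DEPLOYMENT_ID` would instead point at a non-visual deployment, as the surrounding text describes.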
#### Non-Visual Model Configuration
@@ -117,7 +117,7 @@ You can utilize non-visual models (e.g., GPT-4) for each agent by configuring th

Optionally, you can set a backup language model (LLM) engine in the `BACKUP_AGENT` field to handle cases where the primary engines fail during inference. Ensure you configure these settings accurately to leverage non-visual models effectively.

-####
+#### NOTE
💡 UFO also supports other LLMs and advanced configurations, such as customizing your own model; please check the [documents](./model_worker/readme.md) for more details. Because of the limitations of model input, a lite version of the prompt is provided to allow users to experience it, which is configured in `config_dev.yaml`.

### 📔 Step 3: Additional Setting for RAG (optional).
8 changes: 5 additions & 3 deletions requirements.txt
@@ -5,7 +5,7 @@ langchain_community==0.0.27
msal==1.25.0
openai==1.13.3
Pillow==10.2.0
-pywin32==304
+pywin32==306
pywinauto==0.6.8
PyYAML==6.0.1
Requests==2.31.0
@@ -14,5 +14,7 @@ lxml==5.1.0
psutil==5.9.8
beautifulsoup4==4.12.3
sentence-transformers==2.5.1
-#For Qwen
-#dashscope
+##For Qwen
+#dashscope==1.15.0
+##For removing stopwords
+#nltk==3.8.1
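The two commented-out blocks at the tail of requirements.txt are opt-in extras: `dashscope` for Qwen and `nltk` for stopword removal. One way to enable them — sketched below on a stand-in file, since the `reqs.txt` name and the `sed` pattern are illustrative rather than part of the repo — is to strip the single leading `#` from pin lines while leaving the `##` section headers intact:

```shell
# Recreate the tail of requirements.txt as a stand-in file for illustration.
printf '%s\n' \
  'sentence-transformers==2.5.1' \
  '##For Qwen' \
  '#dashscope==1.15.0' \
  '##For removing stopwords' \
  '#nltk==3.8.1' > reqs.txt

# Uncomment single-'#' pin lines; '##' comment headers keep both hashes
# because their second character is '#', not a letter.
sed -i 's/^#\([A-Za-z]\)/\1/' reqs.txt

cat reqs.txt
```

After this, `dashscope==1.15.0` and `nltk==3.8.1` are active pins and would be picked up by `pip install -r reqs.txt`.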
