- Deploy a base model:
  - Log in to Azure OpenAI Studio > **Deployments** > **Deploy model** > **Deploy base model**
  - Select `gpt-4o` > click **Confirm**
  - Deployment Name: `smart-factory`
  - Model version: `2024-08-06`
  - Deployment type: `Global Standard`
  - Tokens per Minute Rate Limit: `150K`
  - Content filter: `DefaultV2`
  - Click **Deploy**
  - Once the model is deployed, note the following information from the **Deployment info** panel:
    - Endpoint > **Target URI** (copy the name, for example: `https://aoai123456.openai.azure.com`)
-
- Rename the file `.env_template` to `.env`:

  ```shell
  mv .env_template .env
  ```
- Retrieve the following environment variables you defined in Part 1 - Provision resources (Edge and Cloud) ==> Note(2):
  - `$ASSISTANT_APP_ID`
  - `$ASSISTANT_APP_SECRET`
  - `$ASSISTANT_TENANT`
  - `$AZURE_OPENAI_KEY`
- Retrieve the Fabric endpoint from the Fabric homepage > **Settings** > **Manage connections and gateways** > **Connections** > copy the name of your Fabric endpoint (Connection type: `Azure Data Explorer (Kusto)`)
- Modify the environment variables in the `.env` file:

  ```shell
  AZURE_OPENAI_ENDPOINT = <YOUR_AZURE_OPENAI_ENDPOINT> # for example: https://aoai123456.openai.azure.com
  AZURE_OPENAI_API_KEY = <$AZURE_OPENAI_KEY>
  AZURE_OPENAI_DEPLOYMENT_NAME = "smart-factory"
  AZURE_OPENAI_MODEL_NAME = "gpt-4o"
  AZURE_OPENAI_DEPLOYMENT_VERSION = "2024-08-06"
  AZURE_AD_TENANT_ID = <$ASSISTANT_TENANT>
  KUSTO_CLUSTER = <YOUR_MICROSOFT_FABRIC_ENDPOINT>
  KUSTO_MANAGED_IDENTITY_APP_ID = <$ASSISTANT_APP_ID>
  KUSTO_MANAGED_IDENTITY_SECRET = <$ASSISTANT_APP_SECRET>
  KUSTO_DATABASE_NAME = <YOUR_DATABASE>
  KUSTO_TABLE_NAME = "aio_gold"
  ```
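Before launching the app, it can help to check that no placeholder values remain in `.env`. The frontend may load these variables with a library such as `python-dotenv`; the sketch below is a minimal stdlib-only validator (the `parse_env` and `missing_keys` helpers are hypothetical, not part of the repository).

```python
from pathlib import Path

# A subset of the keys listed in the .env template above
REQUIRED_KEYS = {
    "AZURE_OPENAI_ENDPOINT",
    "AZURE_OPENAI_API_KEY",
    "AZURE_OPENAI_DEPLOYMENT_NAME",
    "KUSTO_CLUSTER",
    "KUSTO_DATABASE_NAME",
    "KUSTO_TABLE_NAME",
}

def parse_env(path: Path) -> dict:
    """Parse KEY = value lines from a .env file, ignoring blanks and comments."""
    env = {}
    for line in path.read_text().splitlines():
        line = line.split("#", 1)[0].strip()  # drop inline comments
        if not line or "=" not in line:
            continue
        key, value = line.split("=", 1)
        env[key.strip()] = value.strip().strip('"')
    return env

def missing_keys(env: dict) -> set:
    """Return required keys that are absent or still placeholders like <...>."""
    return {k for k in REQUIRED_KEYS
            if k not in env or env[k].startswith("<")}

# Example: validate a sample .env fragment
sample = Path("sample.env")
sample.write_text(
    'AZURE_OPENAI_ENDPOINT = https://aoai123456.openai.azure.com\n'
    'AZURE_OPENAI_API_KEY = <$AZURE_OPENAI_KEY>\n'
    'AZURE_OPENAI_DEPLOYMENT_NAME = "smart-factory"\n'
)
env = parse_env(sample)
print(missing_keys(env))  # keys still unset or left as <placeholders>
```

If the printed set is non-empty, fill in the corresponding values before starting the application.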
- Option 1 (from command line)
  - Start a terminal from the directory
  - Execute the following commands:

    ```shell
    pip install -r requirements.txt
    streamlit run .\frontend.py
    ```
- Option 2 (Docker)
  - Start a terminal from the directory
  - Execute the following commands:

    ```shell
    docker build . -t factory-assistant:v1.0
    docker run -p 8501:8501 factory-assistant:v1.0
    ```
- Launch a browser with the following URL to access the application: `http://localhost:8501/`
- You can now query the database using natural language.
  - Note: no data from the database is transmitted to the Large Language Model in Azure OpenAI; only the prompt is sent, and the model returns the appropriate query to execute.
  - Some example queries are provided.
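The note above describes the privacy-preserving flow: the model only ever sees a prompt (the question plus the table schema) and answers with a query; the query is then executed locally against Kusto. A hypothetical sketch of how such a prompt might be assembled is below; the schema columns and the exact wording are assumptions, not the repository's actual implementation.

```python
# Hypothetical prompt construction for the natural-language-to-KQL flow.
# Only the schema and the user's question are sent -- no table rows.
TABLE_NAME = "aio_gold"  # from KUSTO_TABLE_NAME in .env
SCHEMA = "Timestamp: datetime, AssetName: string, Temperature: real"  # assumed columns

def build_messages(question: str) -> list[dict]:
    """Build chat messages asking the model to answer with a KQL query only."""
    system = (
        f"You translate questions into KQL queries for the table {TABLE_NAME} "
        f"with schema ({SCHEMA}). Reply with a single KQL query and nothing else."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

messages = build_messages("What was the maximum temperature in the last hour?")
print(messages[0]["content"])
```

The resulting messages would be sent to the `smart-factory` deployment via the Azure OpenAI chat completions API, and the returned KQL executed against the Fabric Kusto endpoint (for example with the `azure-kusto-data` client) using the `KUSTO_*` credentials from `.env`.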