diff --git a/README.md b/README.md
index 8acf10b663..f5b8ea3dee 100644
--- a/README.md
+++ b/README.md
@@ -313,8 +313,31 @@ _Analysis with serve mode_
```
curl -X GET "http://localhost:8080/analyze?namespace=k8sgpt&explain=false"
```
+
+
+## Additional AI providers
+
+### Azure OpenAI
+Prerequisites: an Azure OpenAI deployment is required. Please visit the official Microsoft [documentation](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource) to create your own.
+
+To authenticate with k8sgpt, you will need the Azure OpenAI endpoint of your tenant (`"https://your Azure OpenAI Endpoint"`), the API key to access your deployment, the deployment name of your model, and the model name itself.
+
+
+### Run k8sgpt
+To run k8sgpt, run `k8sgpt auth` with the `azureopenai` backend:
+```
+k8sgpt auth --backend azureopenai --baseurl https:// --engine --model
+```
+Lastly, enter your Azure API key when prompted.
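+For illustration only, a fully filled-in invocation might look like the following (the endpoint, deployment name, and model name below are hypothetical placeholders; substitute your own values):
+```
+k8sgpt auth --backend azureopenai --baseurl https://your-resource.openai.azure.com/ --engine your-deployment-name --model gpt-3.5-turbo
+```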
-## Running local models
+Now you are ready to analyze with the Azure OpenAI backend:
+```
+k8sgpt analyze --explain --backend azureopenai
+```
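+As elsewhere in this README, backend selection can be combined with other analyze flags; for instance, assuming you want to scope the analysis to the `k8sgpt` namespace (mirroring the serve-mode query above):
+```
+k8sgpt analyze --explain --backend azureopenai --namespace k8sgpt
+```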
+
+
+### Running local models
To run local models, it is possible to use OpenAI compatible APIs, for instance [LocalAI](https://github.com/go-skynet/LocalAI) which uses [llama.cpp](https://github.com/ggerganov/llama.cpp) and [ggml](https://github.com/ggerganov/ggml) to run inference on consumer-grade hardware. Models supported by LocalAI for instance are Vicuna, Alpaca, LLaMA, Cerebras, GPT4ALL, GPT4ALL-J and koala.
@@ -334,8 +357,6 @@ To run k8sgpt, run `k8sgpt auth` with the `localai` backend:
k8sgpt auth --backend localai --model --baseurl http://localhost:8080/v1
```
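+For example, assuming a hypothetical GPT4ALL-J model served by LocalAI under the name `ggml-gpt4all-j`:
+```
+k8sgpt auth --backend localai --model ggml-gpt4all-j --baseurl http://localhost:8080/v1
+```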
-When being asked for an API key, just press enter.
-
Now you can analyze with the `localai` backend:
```