- Clone or download this repository to your local machine
- Edit the front-end port in `.env` (optional)
- Change the IRIS password in `./src-iris/irispw.txt` (optional, default: SYS)
- Create and run the containers: `docker-compose up -d` (macOS: `docker-compose -f docker-compose-arm.yaml up -d`)
- Open the IRIS Management Portal: http://localhost:52773/csp/sys/UtilHome.csp
- Open SwaggerUI (APIs): http://localhost:52773/swagger-ui/index.html?url=http://localhost:52773/rag/_spec
- Open Streamlit (Chat interface): http://localhost:8051/
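Once the containers are up, the three endpoints above should answer. As a convenience, here is a small stdlib-only sketch that pings each one; the ports and paths are taken from this README, so adjust them if you changed the front-end port in `.env`.

```python
# Reachability check for the demo endpoints listed above.
from urllib.request import urlopen
from urllib.error import URLError

SERVICES = {
    "IRIS Management Portal": "http://localhost:52773/csp/sys/UtilHome.csp",
    "Swagger UI": "http://localhost:52773/swagger-ui/index.html",
    "Streamlit chat": "http://localhost:8051/",
}

def is_up(url: str, timeout: float = 3.0) -> bool:
    """Return True if the URL answers with a success/redirect response."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (URLError, OSError):
        # Connection refused, timeout, or an HTTP error status.
        return False

if __name__ == "__main__":
    for name, url in SERVICES.items():
        print(f"{name}: {'up' if is_up(url) else 'not reachable'}")
```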
RAG: Submit PDF or text content to vectorize and use as context:
- Open SwaggerUI
- Select /SubmitContent or /SubmitPDF
- Submit a file or type the new text content to vectorize
- In the `Demo.RecordEmbeddings` table you can inspect the vectors created for each text.
- Optionally, change the model, settings, or instructions used by the LLM in Production > LLM Operation.
- Open Streamlit and start asking about the submitted content.
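The same submission can be scripted against the REST API instead of going through Swagger UI. The sketch below is a guess at the request shape: the `/rag/SubmitContent` path matches the spec URL and endpoint name shown above, but the JSON field name (`content`) is an assumption, so check `http://localhost:52773/rag/_spec` for the exact contract.

```python
import json
from urllib.request import Request, urlopen

BASE = "http://localhost:52773/rag"  # port and base path from this README

def build_submit_request(text: str) -> Request:
    """Build a POST to /SubmitContent; the 'content' field name is assumed."""
    body = json.dumps({"content": text}).encode()
    return Request(f"{BASE}/SubmitContent", data=body,
                   headers={"Content-Type": "application/json"},
                   method="POST")

if __name__ == "__main__":
    req = build_submit_request("IRIS is a multi-model database platform.")
    try:
        with urlopen(req, timeout=10) as resp:
            print(resp.status, resp.read()[:200])
    except OSError as exc:  # containers not running yet
        print(f"request failed: {exc}")
```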
Alternatively, submit an audio file to vectorize and use as context:
- Copy the audio file to the `./volumes/Input` folder.
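The step above is just a file copy into the watched folder; the `./volumes/Input` path comes from this README. A minimal helper for scripting it:

```python
import shutil
from pathlib import Path

def submit_audio(src: str, input_dir: str = "./volumes/Input") -> Path:
    """Copy an audio file into the watched input folder; return its new path."""
    dest_dir = Path(input_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / Path(src).name
    shutil.copy2(src, dest)  # preserves timestamps, unlike shutil.copy
    return dest

if __name__ == "__main__":
    import os, tempfile
    # Demo with a throwaway file; in practice pass your real audio path.
    with tempfile.TemporaryDirectory() as tmp:
        fake = os.path.join(tmp, "note.wav")
        open(fake, "wb").close()
        print(submit_audio(fake, input_dir=os.path.join(tmp, "Input")))
```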
macOS
- Docker Desktop does not currently support GPU acceleration on Mac.
- For better performance, macOS users should therefore run Ollama as a standalone application outside of the Docker container.