Creating embeddings with ollama extremely slow #1787
I am in for the answer here. Ollama is very slow for me; I switched to llama and it is much faster. Something is broken with Ollama and ingestion.
I use the simple ingest mode, Ollama version 1.29.
I updated the settings-ollama.yaml file to what you linked and verified my Ollama version was 0.1.29, but I'm not seeing much of a speed improvement and my GPU doesn't seem to be getting tasked. Neither the available RAM nor the CPU seems to be driven much either. Three files totaling roughly 6.5 MB are taking close to 30 minutes (20 minutes @ 8 workers) to ingest, whereas llama completed the ingest in less than a minute. Am I doing something wrong?
I've switched over to LM Studio (0.2.17) with Mixtral Instruct 8x, Q4_K_M, and start the server in LM Studio. I installed privateGPT with the following installation command and a settings-vllm.yaml (llm, embedding, local, and openai sections), then set PGPT_PROFILES=vllm. And: it now ingests much faster! But I hope that you can amend privateGPT so that it also runs fast with Ollama!
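For anyone attempting the same workaround, a minimal sketch of such a profile is shown below, assuming privateGPT's openailike LLM mode and LM Studio's default server address; the exact keys and values are illustrative assumptions, not the omitted config from the comment above.

    # settings-vllm.yaml (illustrative sketch, not the exact config used above)
    llm:
      mode: openailike                       # assumed: privateGPT's OpenAI-compatible mode
    embedding:
      mode: local                            # assumed: embeddings stay on the local machine
    openai:
      api_base: http://localhost:1234/v1     # assumed: LM Studio's default server address
      api_key: lm-studio                     # placeholder; a local server typically ignores it

Launching with set PGPT_PROFILES=vllm, as in the comment above, makes privateGPT merge this profile over the default settings.yaml.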
@paul-asvb Index writing will always be a bottleneck.
It could also be the swapping from the LLM to the embedding model and back that makes it very slow.
Same here. There seems to be a "hard limit" somewhere setting the pace to 2.06 to 2.11 s/it on "Generating embeddings": when embedding multiple files in "pipeline" mode, all workers are crippled to that speed.
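One way to check whether that pace is imposed by Ollama itself, rather than by the ingest pipeline, is to time embedding requests against Ollama's /api/embeddings HTTP endpoint directly, first sequentially and then concurrently: if the concurrent batch takes roughly as long as the sequential one, the server is handling one request at a time. The sketch below assumes the default Ollama address and the nomic-embed-text model; adjust both to your setup.

    # embed_timing.py - rough check of whether Ollama serializes embedding requests
    import time
    import requests
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:11434/api/embeddings"   # default Ollama address (assumption)
    MODEL = "nomic-embed-text"                      # embedding model used in this thread

    def embed(text: str) -> float:
        """Send one embedding request and return how long it took."""
        start = time.perf_counter()
        resp = requests.post(URL, json={"model": MODEL, "prompt": text})
        resp.raise_for_status()
        return time.perf_counter() - start

    texts = [f"sample chunk number {i} " * 50 for i in range(8)]

    # Sequential baseline
    t0 = time.perf_counter()
    for t in texts:
        embed(t)
    print(f"sequential: {time.perf_counter() - t0:.2f}s")

    # Concurrent attempt: with the server at --parallel 1 this should take about as long
    t0 = time.perf_counter()
    with ThreadPoolExecutor(max_workers=8) as pool:
        list(pool.map(embed, texts))
    print(f"concurrent: {time.perf_counter() - t0:.2f}s")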
Anyone find a fix for this yet? I tried the pipeline settings in the yaml file and it only increased the speed a little bit. Still taking a long time to ingest a 46-page, 1.5 MB file.
I got similar results: 40 docs took all night, and only about an hour on version 0.2. Going to try the LM Studio idea presented in this thread as a workaround.
I kind of debugged this. There are multiple reasons.

On the privateGPT side, there's the fact that the embedding section in settings-ollama.yaml comes without two parameters:

ingest_mode: parallel
count_workers: <workers_count>

which are referenced here -> https://github.com/zylon-ai/private-gpt/blob/c7212ac7cc891f9e3c713cc206ae9807c5dfdeb6/private_gpt/components/ingest/ingest_component.py#L498 -- if you add those parameters, the files will be processed in parallel. But there will be bottlenecks in your vector store and in Ollama.

On the Ollama side, the problem is that ollama starts by default with --parallel 1. To increase parallelism you can't modify the --parallel parameter, because the model is started by the Ollama server, so before your server starts you need to set the variable called OLLAMA_NUM_PARALLEL. However, under Linux it seems that the Ollama server (i.e. the command ollama serve, which is run by systemd -> /etc/systemd/system/ollama.service) doesn't even sense the environment that is passed to it via Environment=... systemd directives or via subshell/export tricks.

On the vector store side, if you use Qdrant, the problem is that you can't rely on concurrent access if you configure it with path: local_data..., so you need to run the Qdrant server. A further problem arises when the server gives a subtle error because of max_optimization_threads: null, which is a default config parameter in Qdrant -- the value shouldn't be null.

Overall, a big performance improvement can always be achieved by using a memfs for both source files and datastore, regardless of your configuration.

Anyway, it needs a bit of work. Maybe I'll write a PR and make a curses dashboard for ingestion. It seems needed, especially for large datasets where an estimate of how long the process will take is needed.

I hope this helps.
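To make the privateGPT and Qdrant parts of that concrete, the relevant sections of settings-ollama.yaml could look roughly like the sketch below; the worker count and the Qdrant URL are placeholders, and the key names should be checked against the privateGPT version you have installed.

    # settings-ollama.yaml (sketch; placeholder values, verify keys against your version)
    embedding:
      mode: ollama
      ingest_mode: parallel        # parameter discussed above
      count_workers: 8             # example value; tune to your CPU and GPU
    vectorstore:
      database: qdrant
    qdrant:
      url: http://localhost:6333   # point at a running Qdrant server instead of path: local_data/...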
That is some great sleuthing. Thank you for taking a look at that in more depth. I kind of had to accept the massive I/O wait times and GPU underutilization in the meantime; I didn't know about the Ollama parallelism and assumed it was passed somehow via the API.
If you do a PR, I will help test it when I'm back home mid-July (unless Ollama works on AMD GPUs, then I can test next week).
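For reference, the usual way to hand an environment variable to a systemd-managed Ollama is a drop-in override, sketched below. The comment above reports that the variable did not seem to take effect, and whether OLLAMA_NUM_PARALLEL is honored at all depends on the Ollama version, so treat this as something to experiment with rather than a confirmed fix.

    # Open an editor for a drop-in override of the ollama service
    sudo systemctl edit ollama.service

    # Add these lines in the override file:
    [Service]
    Environment="OLLAMA_NUM_PARALLEL=4"

    # Reload systemd and restart the service so the new environment is applied
    sudo systemctl daemon-reload
    sudo systemctl restart ollama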
I have tried my best, but I cannot resolve the GPU underutilization.
This is a Windows setup, also using Ollama for Windows.
System:
Setup: poetry install --extras "ui vector-stores-qdrant llms-ollama embeddings-ollama"
Ollama: pull mixtral, then pull nomic-embed-text.
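For comparison, a settings-ollama.yaml for this kind of setup would typically point both the LLM and the embedding model at the local Ollama server, roughly as below; the key names follow privateGPT's ollama profile as I understand it, so double-check them against the installed version.

    # settings-ollama.yaml (sketch for the setup above; verify keys against your version)
    llm:
      mode: ollama
    embedding:
      mode: ollama
    ollama:
      llm_model: mixtral                  # the model pulled above
      embedding_model: nomic-embed-text   # the embedding model pulled above
      api_base: http://localhost:11434    # default Ollama address (assumption)

The profile is then selected with set PGPT_PROFILES=ollama, which matches the profiles=['default', 'ollama'] line in the log below.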
This is what the logging says (startup, and then loading a 1 KB txt file). It is taking a long time.
Did I do something wrong?
Using python3 (3.11.8)
13:21:55.666 [INFO ] private_gpt.settings.settings_loader - Starting application with profiles=['default', 'ollama']
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
tokenizer_config.json: 100%|██████████████████████████████████████████████████████████████| 1.46k/1.46k [00:00<?, ?B/s]
13:22:03.875 [WARNING ] py.warnings - C:\Users\jwbor\AppData\Local\pypoetry\Cache\virtualenvs\private-gpt-TFCUF6yI-py3.11\Lib\site-packages\huggingface_hub\file_download.py:147: UserWarning: huggingface_hub cache-system uses symlinks by default to efficiently store duplicated files but your machine does not support them in D:\privategpt\models\cache. Caching files will still work but in a degraded version that might require more space on your disk. This warning can be disabled by setting the HF_HUB_DISABLE_SYMLINKS_WARNING environment variable. For more details, see https://huggingface.co/docs/huggingface_hub/how-to-cache#limitations. To support symlinks on Windows, you either need to activate Developer Mode or to run Python as an administrator. In order to activate developer mode, see this article: https://docs.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development
warnings.warn(message)
tokenizer.model: 100%|██████████████████████████████████████████████████████████████| 493k/493k [00:00<00:00, 39.6MB/s]
tokenizer.json: 100%|█████████████████████████████████████████████████████████████| 1.80M/1.80M [00:00<00:00, 3.74MB/s]
special_tokens_map.json: 100%|███████████████████████████████████████████████████████| 72.0/72.0 [00:00<00:00, 144kB/s]
13:22:05.412 [INFO ] private_gpt.components.llm.llm_component - Initializing the LLM in mode=ollama
13:22:06.695 [INFO ] private_gpt.components.embedding.embedding_component - Initializing the embedding model in mode=ollama
13:22:06.706 [INFO ] llama_index.core.indices.loading - Loading all indices.
13:22:06.706 [INFO ] private_gpt.components.ingest.ingest_component - Creating a new vector store index
Parsing nodes: 0it [00:00, ?it/s]
Generating embeddings: 0it [00:00, ?it/s]
13:22:06.827 [INFO ] private_gpt.ui.ui - Mounting the gradio UI, at path=/
13:22:06.983 [INFO ] uvicorn.error - Started server process [1572]
13:22:06.983 [INFO ] uvicorn.error - Waiting for application startup.
13:22:06.983 [INFO ] uvicorn.error - Application startup complete.
13:22:06.983 [INFO ] uvicorn.error - Uvicorn running on http://0.0.0.0:8001 (Press CTRL+C to quit)
13:22:33.469 [INFO ] uvicorn.access - 127.0.0.1:57963 - "GET / HTTP/1.1" 200
13:22:33.559 [INFO ] uvicorn.access - 127.0.0.1:57963 - "GET /info HTTP/1.1" 200
13:22:33.563 [INFO ] uvicorn.access - 127.0.0.1:57963 - "GET /theme.css HTTP/1.1" 200
13:22:33.768 [INFO ] uvicorn.access - 127.0.0.1:57963 - "POST /run/predict HTTP/1.1" 200
13:22:33.774 [INFO ] uvicorn.access - 127.0.0.1:57963 - "POST /queue/join HTTP/1.1" 200
13:22:33.777 [INFO ] uvicorn.access - 127.0.0.1:57963 - "GET /queue/data?session_hash=94yqqpkh9p HTTP/1.1" 200
13:22:42.139 [INFO ] uvicorn.access - 127.0.0.1:57964 - "POST /upload HTTP/1.1" 200
13:22:42.144 [INFO ] uvicorn.access - 127.0.0.1:57964 - "POST /queue/join HTTP/1.1" 200
13:22:42.148 [INFO ] uvicorn.access - 127.0.0.1:57964 - "GET /queue/data?session_hash=94yqqpkh9p HTTP/1.1" 200
13:22:42.209 [INFO ] private_gpt.server.ingest.ingest_service - Ingesting file_names=['boericke_zizia.txt']
Parsing nodes: 100%|███████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1001.03it/s]
Generating embeddings: 100%|███████████████████████████████████████████████████████████| 18/18 [00:37<00:00, 2.10s/it]
Generating embeddings: 0it [00:00, ?it/s]
13:23:21.988 [INFO ] private_gpt.server.ingest.ingest_service - Finished ingestion file_name=['boericke_zizia.txt']
13:23:22.054 [INFO ] uvicorn.access - 127.0.0.1:57964 - "POST /queue/join HTTP/1.1" 200
13:23:22.057 [INFO ] uvicorn.access - 127.0.0.1:57964 - "GET /queue/data?session_hash=94yqqpkh9p HTTP/1.1" 200
13:23:22.167 [INFO ] uvicorn.access - 127.0.0.1:57964 - "POST /queue/join HTTP/1.1" 200
13:23:22.171 [INFO ] uvicorn.access - 127.0.0.1:57964 - "GET /queue/data?session_hash=94yqqpkh9p HTTP/1.1" 200