Failed to run performance case, reason = Performance case optimize timeout [PG Vector Scale] #369
@Sheharyar570 Could you please share any experiences or advice you have regarding the 10M Cohere run on PG Vector Scale?
Maybe we just need to give the user a timeout config?
BTW, I don't think it's a wise choice to run 10M or more vectors on pgvector; it is simply too slow.
A timeout config would be exactly what I need too!
@KendrickChou @xiaofan-luan I will consider adding a timeout config.
We have set different default timeouts for different case scales; see VectorDBBench/vectordb_bench/__init__.py, lines 40 to 56 at commit b364fe3.
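For illustration, a minimal sketch of that pattern with a user override added on top. The constant names, values, and the `OPTIMIZE_TIMEOUT` environment variables below are assumptions for this sketch, not the exact definitions in `vectordb_bench/__init__.py`:

```python
import os

# Hedged sketch: per-scale optimize timeouts with an environment override.
# Names and values here are illustrative assumptions; the real definitions
# live in vectordb_bench/__init__.py (lines 40-56 at b364fe3).
OPTIMIZE_TIMEOUT_DEFAULT = int(os.environ.get("OPTIMIZE_TIMEOUT", 5 * 3600))  # 18000 s, the value in the log below
OPTIMIZE_TIMEOUT_768D_10M = int(os.environ.get("OPTIMIZE_TIMEOUT_768D_10M", 10 * 3600))  # a larger case could default higher

def optimize_timeout_for(case_label: str) -> int:
    """Return the optimize timeout for a case, falling back to the default."""
    per_case = {"768D_10M": OPTIMIZE_TIMEOUT_768D_10M}
    return per_case.get(case_label, OPTIMIZE_TIMEOUT_DEFAULT)
```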
@agandra30 Well, I've not tried running 10M Cohere on PG Vector Scale, so I won't be able to suggest any specific configuration. Although I would suggest to make …. But you may still need to update the default timeout.
I ran multiple tests with PG Vector Scale using diskann, and one of the biggest problems is that VectorDBBench simply exits the run with an optimize timeout. This holds especially for PG Vector Scale and pgvecto.rs.

I know we can increase the timeout in the scripts, but has anyone observed or can anyone recommend configuration settings that complete the run within the default 5-hour timeout for the 10M Cohere 768-dimension dataset? We want a cross-comparison without editing the default timeouts for large datasets. Has anyone completed a run within that timeout? (Milvus, yes; but other DBs?)
Error message:

```
2024-09-17 22:14:02,230 | WARNING: VectorDB optimize timeout in 18000 (task_runner.py:249) (3816719)
2024-09-17 22:14:02,274 | WARNING: Failed to run performance case, reason = Performance case optimize timeout (task_runner.py:191) (3816719)
Traceback (most recent call last):
  File "/root/vectordbbench_runs/lib/python3.12/site-packages/vectordb_bench/backend/task_runner.py", line 247, in _optimize
    return future.result(timeout=self.ca.optimize_timeout)[1]
```
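The traceback shows the standard `concurrent.futures` timeout mechanism: the optimize step runs in a worker, and `future.result(timeout=...)` raises `TimeoutError` once `optimize_timeout` (18000 s here) elapses. A minimal, self-contained reproduction of that failure mode, where `optimize` is a stand-in for the real index build:

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError
import time

def optimize() -> None:
    # Stand-in for the real optimize step (the CREATE INDEX ... USING diskann
    # build), which can run for hours on a 10M x 768-dim dataset.
    time.sleep(10)

optimize_timeout = 2  # seconds; VectorDBBench used 18000 (5 hours) in the log above

with ThreadPoolExecutor(max_workers=1) as executor:
    future = executor.submit(optimize)
    try:
        future.result(timeout=optimize_timeout)
    except TimeoutError:
        # task_runner.py logs the same condition as
        # "WARNING: VectorDB optimize timeout in 18000" and aborts the case.
        print(f"VectorDB optimize timeout in {optimize_timeout}")
```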
Query:
```sql
CREATE INDEX IF NOT EXISTS "pgvectorscale_index" ON public."pg_vectorscale_collection"
USING "diskann" (embedding "vector_cosine_ops")
WITH ("storage_layout" = "memory_optimized", "num_neighbors" = "50",
      "search_list_size" = "100", "max_alpha" = "1.2",
      "num_bits_per_dimension" = "2");
-- logged at (pgvectorscale.py:200) (3935818)
```
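Whether this build finishes within 5 hours depends largely on server-side build settings. A hedged sketch of running the same index build with more maintenance memory and parallel workers: `maintenance_work_mem` and `max_parallel_maintenance_workers` are standard Postgres settings, but whether pgvectorscale's diskann build actually uses parallel maintenance workers in your extension version is an assumption to verify, and the DSN is hypothetical:

```python
import psycopg2

# Hypothetical DSN; adjust to your environment.
conn = psycopg2.connect("dbname=pgdiskann user=postgres")
conn.autocommit = True  # run CREATE INDEX outside an explicit transaction

with conn.cursor() as cur:
    # Standard Postgres knobs that commonly shorten large index builds;
    # their effect on the diskann access method is an assumption to verify.
    cur.execute("SET maintenance_work_mem = '32GB';")
    cur.execute("SET max_parallel_maintenance_workers = 16;")
    # Same index definition and options as the logged query above.
    cur.execute("""
        CREATE INDEX IF NOT EXISTS "pgvectorscale_index"
        ON public."pg_vectorscale_collection"
        USING diskann (embedding vector_cosine_ops)
        WITH (storage_layout = 'memory_optimized', num_neighbors = 50,
              search_list_size = 100, max_alpha = 1.2,
              num_bits_per_dimension = 2);
    """)
conn.close()
```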
My Postgres server infra configuration:

```
# free -mh
               total        used        free      shared  buff/cache   available
Mem:           1.0Ti        13Gi       986Gi       152Mi       7.3Gi       988Gi
Swap:             0B          0B          0B
```
Extensions used:
```
pgdiskann=# \dx;
                                List of installed extensions
    Name     | Version |   Schema   |                     Description
-------------+---------+------------+------------------------------------------------------
 plpgsql     | 1.0     | pg_catalog | PL/pgSQL procedural language
 vector      | 0.7.4   | public     | vector data type and ivfflat and hnsw access methods
 vectorscale | 0.3.0   | public     | pgvectorscale: Advanced indexing for vector data
(3 rows)
```
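If you want to assert the same extension versions from the benchmark environment before a long run, the `pg_extension` catalog holds what `\dx` prints (the DSN is again hypothetical):

```python
import psycopg2

conn = psycopg2.connect("dbname=pgdiskann user=postgres")  # hypothetical DSN
with conn, conn.cursor() as cur:
    # pg_extension is the system catalog behind psql's \dx output.
    cur.execute("SELECT extname, extversion FROM pg_extension ORDER BY extname;")
    for name, version in cur.fetchall():
        print(f"{name} {version}")  # expect vector 0.7.4 and vectorscale 0.3.0
conn.close()
```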