Add support for pgvector's hnsw (0.7.4) and generic support for Postgres (16) indexes #309
Conversation
if len(result_ids) < k:
    # Pad with -1 if we have less than k results. This is only needed if the
    # index-access method cannot guarantee returning k results.
    #
    # As of today, this is only possible with PostgresPgvectorHnsw when
    # ef_search < k.
    result_ids.extend([-1] * (k - len(result_ids)))
Is it ok to pad self.res like this?
Yes, should be fine. The recall computation code performs a set intersection between the ground-truth results and reported results, so any -1 results would be ignored as intended.
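To illustrate that point, here is a minimal sketch (the function name is hypothetical, not the repo's actual recall code): padding with -1 cannot inflate recall, because ground-truth ids are non-negative and the -1 entries never land in the intersection.

```python
def recall_at_k(ground_truth_ids, reported_ids):
    # Set intersection between ground truth and reported ids; -1 padding
    # never matches a real (non-negative) id, so it is ignored as intended.
    return len(set(ground_truth_ids) & set(reported_ids)) / len(ground_truth_ids)

k = 5
reported = [7, 3, 9]                    # e.g. ef_search < k returned only 3 rows
reported += [-1] * (k - len(reported))  # the padding from the diff above
```

With ground truth `[7, 3, 9, 11, 2]`, the padded result still scores 3/5 = 0.6.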
with psycopg.connect(PG_CONN_STR, autocommit=True) as conn:
    with conn.cursor() as cur:
        cursor_print_and_execute(cur, "TRUNCATE test_tbl")
Although the other algo implementations don't seem to be doing this, I thought that we have to reset all the data in the main table and the index here, i.e., before switching to a different query-args set. Does that make sense?
273c4dc to 5147f07
@@ -289,6 +289,7 @@ def run_docker(definition, dataset, count, runs, timeout, rebuild,
     },
     cpuset_cpus=cpu_limit,
     mem_limit=mem_limit,
+    privileged=True,  # so that we can run perf inside the container
Can be reverted, as mentioned in the PR description; only useful for FlameGraph purposes.
# install linux-tools-generic into docker so that devs can use perf if they want
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt install -y linux-tools-generic

# clone FlameGraph for the same purpose
RUN git clone https://github.com/brendangregg/FlameGraph
Can be reverted, as mentioned in the PR description; only useful for FlameGraph purposes.
What is the usage of this script? Not sure I see it referenced in the README.
Ah nevermind I see where it's used. Would it be possible to just create the file directly inside the Docker container instead of having it separately like this? Would be a bit more in line with the style of the other algorithms
Would it be possible to just create the file directly inside the Docker container instead of having it separately like this?
sure 👍
done
requirements_py3.10.txt (outdated)
Do these need to be top-level requirements or could they be installed in the Dockerfile?
they can be installed in the Dockerfile, let me do that
done
with psycopg.connect(PG_CONN_STR, autocommit=True) as conn:
    with conn.cursor() as cur:
        cursor_print_and_execute(cur, f"CREATE TABLE test_tbl (id bigint, vec_col vector({ndim}))")
        cursor_print_and_execute(cur, f"CREATE INDEX vec_col_idx ON test_tbl USING {self.pg_index_method} (vec_col {self.ind_op_class}) {index_build_params_clause}")
To stabilize the measurements, should we disable autovacuum when creating the table, since we explicitly vacuum the table at some point? @orhankislal
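For reference, autovacuum can be disabled per table via a storage parameter. A hedged sketch of what that could look like, reusing the table and column names from the diff above (the actual change is up to the author):

```python
# Hypothetical: build the CREATE TABLE statement with autovacuum disabled
# via the autovacuum_enabled storage parameter, so that only the explicit
# VACUUM issued by the benchmark affects the measurements.
ndim = 128  # example dimensionality
create_tbl_sql = (
    f"CREATE TABLE test_tbl (id bigint, vec_col vector({ndim})) "
    "WITH (autovacuum_enabled = off)"
)
```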
Closes #293.
See the explanation below; this is the meat of the PR.
Can revert this if requested.
This PR adds support for benchmarking pgvector's hnsw index access method with
the runbooks and the datasets supported by bigann benchmarks.
To do that, we added a base Docker image that will help us test other Postgres
index access methods in the future. And to make use of that Docker image, we had
to make some changes in install.py so that other Postgres-based indexes can
depend on a common Docker image that already has Postgres installed. Note
that install.py will build that base Docker image only if the algorithm name
starts with "postgres-". If you see that this PR is no longer a draft,
then I should've already documented this in the docs.
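The naming convention can be sketched as follows (hypothetical helper, not the actual install.py code):

```python
def needs_postgres_base_image(algo_name: str) -> bool:
    # install.py builds the shared Postgres base image only for algorithms
    # whose name starts with "postgres-", e.g. "postgres-pgvector-hnsw".
    return algo_name.startswith("postgres-")
```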
This PR also adds BaseStreamingANNPostgres, which can be used to easily add
support for other Postgres-based indexes in the future. One would simply need
to define a new Python wrapper which implements the required methods and
properly sets the following attributes in its
__init__
method before calling
super().__init__
:
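A minimal sketch of such a wrapper (the base class is stubbed here, and the attribute values beyond the names discussed in this PR are illustrative):

```python
class BaseStreamingANNPostgres:
    # Stub standing in for the repo's actual base class.
    def __init__(self, metric, index_params):
        self.metric = metric
        self.index_params = index_params

class PostgresMyIndex(BaseStreamingANNPostgres):
    def __init__(self, metric, index_params):
        # Attributes must be set before calling super().__init__().
        self.name = "postgres-my-index"   # hypothetical algorithm name
        self.pg_index_method = "myindex"  # USING clause of CREATE INDEX
        self.guc_prefix = "myindex"       # prefix for query-args GUCs
        super().__init__(metric, index_params)
```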
Given that pgvector's hnsw is the first Postgres-based index that benefits from
this infra (via this PR), neurips23/streaming/postgres-pgvector-hnsw/ can be seen
as an example implementation of how to make use of Dockerfile.BasePostgres and
BaseStreamingANNPostgres in general to add support for more Postgres-based indexes.
Differently from the other algorithms under streaming, the time it takes to
complete a runbook can be several times longer than it is for other algorithms.
This is not because Postgres-based indexes are bad, but because SQL is the only
interface to such indexes. So all those insert / delete / search operations
first have to build the SQL queries, and, specifically for inserts, transferring
the data to the Postgres server adds significant overhead. Unless we make some
huge changes in this repo to re-design "insert" in a way that it could benefit
from the server-side COPY functionality of Postgres, we cannot do much about it.
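For what it's worth, the row-to-COPY-text conversion such a redesign would need can be sketched as below. The helper name is hypothetical; `[x,y,z]` is pgvector's textual vector representation, and the output is the tab-separated text format accepted by `COPY ... FROM STDIN`:

```python
def rows_to_copy_text(rows):
    # Render (id, vector) rows for COPY test_tbl (id, vec_col) FROM STDIN;
    # pgvector parses the "[1,2,3]" literal form for vector columns.
    return "\n".join(
        f"{row_id}\t[{','.join(str(x) for x in vec)}]" for row_id, vec in rows
    )
```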
Other than that, please feel free to drop comments if you see any inefficiencies
that I can quickly fix in my code. Note that I'm not a Python expert, hence
I'm sincerely requesting this :)
And, to explain the build & query time params that have to be provided in
such a Postgres-based indexing algorithm's config.yaml file, let's take a look
at the following snippet from pgvector's hnsw's config.yaml file:
The presence of insert_conns & query_conns is enforced by BaseStreamingANNPostgres,
and any Postgres-based index implementation that we add to this repo in the future
must also provide values for them in its config.yaml file.
Similar to insert_threads in other algorithm implementations, this is used to
determine parallelism for inserts. In short, this determines the number of database
connections used during insert steps.
Similar to T in other algorithm implementations, this is used to determine
parallelism for SELECT queries. In short, this determines the number of database
connections used during search steps.
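Both knobs boil down to sharding work across a fixed pool of database connections; a hypothetical sketch of a round-robin split (not the repo's actual code):

```python
def split_across_conns(items, num_conns):
    # Round-robin the workload (e.g. rows to insert, or queries to run)
    # across num_conns database connections.
    return [items[i::num_conns] for i in range(num_conns)]
```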
Other than those two params, any other parameters that need to be specified when
building the index or when performing an index scan (read as "search" step) must
be provided via config.yaml too.
The parameters provided in "args" (except insert_conns) are directly passed into the
CREATE INDEX
statement that's used to create the index in the setup step. For example, for pgvector's
hnsw, the above config will result in the following CREATE INDEX statement creating
the index. Especially note the "WITH" clause:
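Following the CREATE INDEX template visible in the diff earlier in this thread, the clause construction can be sketched as follows (the helper name and the WITH-clause rendering are assumptions):

```python
def build_create_index_sql(pg_index_method, ind_op_class, build_params):
    # Renders e.g. WITH (m = 16, ef_construction = 64) from the config's "args",
    # then plugs it into the CREATE INDEX template used in the setup step.
    with_clause = ""
    if build_params:
        rendered = ", ".join(f"{k} = {v}" for k, v in build_params.items())
        with_clause = f" WITH ({rendered})"
    return (
        "CREATE INDEX vec_col_idx ON test_tbl "
        f"USING {pg_index_method} (vec_col {ind_op_class}){with_clause}"
    )
```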
The parameters provided in "query-args" (except query_conns) are directly used to
set the GUCs that
determine runtime parameters used during "search" steps by the algorithm, via
SET commands. Note that
BaseStreamingANNPostgres qualifies all those query-args with self.guc_prefix when
creating the SET commands that need to be run for all Postgres connections. For
example, for pgvector's hnsw, the above config will result in executing the following
SET statement for each Postgres connection. Note that if pgvector's hnsw had more
query-args, then we'd have multiple SET statements:
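The GUC qualification described above can be sketched as follows (hypothetical helper, not the repo's exact code):

```python
def build_set_commands(guc_prefix, query_args):
    # Qualify each query-arg with the index's GUC prefix, e.g.
    # ("hnsw", {"ef_search": 40}) -> ["SET hnsw.ef_search = 40"];
    # one SET command per query-arg, run on every connection.
    return [f"SET {guc_prefix}.{k} = {v}" for k, v in query_args.items()]
```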
We prefixed "ef_search" with "hnsw" since PostgresPgvectorHnsw sets self.guc_prefix
to "hnsw".
And while we're at it, let's take a closer look at what the Python wrapper should
look like when adding support for a Postgres-based index in the future. From the wrapper
added for pgvector's hnsw:
This probably sets the experiment name; mostly required by the grandparent class
BaseStreamingANN.
USING clause of the CREATE INDEX statement that's used when creating the index.
See the docs for CREATE INDEX mentioned earlier and the CREATE INDEX statement
shared above as an example of how we make use of it.
query-args, as described above.
relevant opclass
that the index needs to use when building the index and is passed to CREATE
INDEX statements again. See the docs for CREATE INDEX mentioned earlier.
For example, when the metric is "euclidean" for pgvector's hnsw, this function
returns "vector_l2_ops", and so it was used in the above CREATE INDEX statement.
operator that's used to match the index during search. For example, when the
metric is "euclidean" for pgvector's hnsw, this function returns "<->", and so
it will be used in the SELECT query that's executed in the search step, as in:
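A hedged sketch of such a SELECT, built as a Python string (table and column names taken from the diff above; ORDER BY ... LIMIT k is the standard pgvector k-NN pattern, though the repo's exact query may differ):

```python
def build_search_sql(dist_op, k):
    # ORDER BY vec_col <op> <query vector> LIMIT k is what lets Postgres
    # drive the search through the vector index; %s is the psycopg
    # placeholder for the query-vector parameter.
    return f"SELECT id FROM test_tbl ORDER BY vec_col {dist_op} %s LIMIT {k}"
```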