I am considering CEL as a replacement for a system that allows users to save queries and be notified when objects match their queries. These queries are usually written only once or twice and rarely updated. We currently use OpenSearch / ElasticSearch for this with percolate queries. However, we do not need the full range of queries that ElasticSearch provides: almost all of our queries could be replaced with simple boolean logic plus `equals` or `matches` calls. We already use Protobuf extensively in our environment; currently, the messages sent to ElasticSearch for matching are converted from Protobuf to JSON.
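For illustration, a saved query of the kind described (boolean logic plus equality and regex matching) might look like this in CEL; the field names are hypothetical, not from our actual schema:

```
// Hypothetical saved query: match orders for a given customer
// whose description mentions "urgent" (case-insensitive RE2 regex).
object.type == "order" &&
object.customer_id == "acme-123" &&
object.description.matches("(?i)urgent")
```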
In terms of scale, we have 500k to 1 million saved queries and about 130 to 150 million objects per day to run them against.
Are there any best practices or architecture considerations we should keep in mind? Is it reasonable to evaluate this many CEL expressions on one machine in less than 5 or 10 seconds? An architecture where each node holds all of the expressions and we scale throughput by adding nodes seems straightforward to build.
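To make the sizing question concrete, here is a back-of-envelope estimate using the figures above. The 100 ns per-evaluation cost is an assumed, hardware-dependent guess for a pre-compiled CEL program, not a measured number:

```python
# Back-of-envelope throughput estimate for the scale described above.
# Figures from the question: ~1M saved expressions, ~150M objects/day.
expressions = 1_000_000
objects_per_day = 150_000_000

seconds_per_day = 86_400
objects_per_second = objects_per_day / seconds_per_day  # ~1,736 obj/s

# If every expression is naively evaluated against every object:
evals_per_second = objects_per_second * expressions  # ~1.7e9 evals/s

# Assuming ~100 ns per evaluation of a compiled CEL program on one core
# (an assumption; measure on your own hardware), cores needed for
# fully naive all-expressions-against-all-objects matching:
ns_per_eval = 100
cores_needed = evals_per_second * ns_per_eval / 1e9  # ~174 cores

print(f"{objects_per_second:.0f} objects/s, "
      f"{evals_per_second:.1e} evals/s, ~{cores_needed:.0f} cores")
```

If the estimate is in the right ballpark, naive matching is feasible but expensive, which suggests pre-filtering (e.g. grouping expressions by the object fields they touch) before full CEL evaluation.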