Prepare SQL statements for optimized queries #1355
Conversation
Recently optimized queries (ListRegistrationEntries, FetchRegistrationEntry, GetNodeSelectors) are not being executed using prepared statements, causing significant, and unnecessary, database load.

This change introduces an on-demand statement cache to all but remove the preparation cost for these queries. While it would be possible to enumerate and prepare all the queries in advance, extra care would be required to make sure all of the variants were considered. Although the current approach isn't quite ideal, it is a step in the right direction and requires no additional maintenance if more optimized queries are added.

Signed-off-by: Andrew Harding <azdagron@gmail.com>
@@ -88,11 +88,50 @@ type sqlDB struct {
 	raw *sql.DB
 	*gorm.DB
+
+	stmtMu sync.RWMutex
+	stmts  map[string]*sql.Stmt
Should this use a sync.Map (with a typed wrapper) instead of a global lock over the map?
It seems like it fits this use-case, and may avoid contention on a single lock.
I suspect it likely doesn't matter, though.
I considered it, but I usually don't reach for sync.Map unless I have proven the codepath to be a hotspot, because lockless programming is fraught with peril. However, I did some quick benchmarking and the sync.Map does lead to an order of magnitude less contention in this situation (e.g. 5µs vs. 25µs to take the lock with 500 concurrent query executors). The only downside is that it results in more prepared statements when the cache is cold, since concurrent goroutines have to unconditionally prepare the statement between the failed Load and the hopeful LoadOrStore. That unnecessary load should only be a tiny spike right at the beginning, and is no worse than what we have today. The tradeoff is probably worth the reduced contention. I'll switch over.
The Load / LoadOrStore race seems fine to me; if it's ever a problem then I think we could pre-warm the cache in openDB.
Signed-off-by: Andrew Harding <azdagron@gmail.com>
There were a few things I'd wondered about this code, all of which were answered in the documentation:

> Stmt is a prepared statement. A Stmt is safe for concurrent use by multiple goroutines. If a Stmt is prepared on a DB, it will remain usable for the lifetime of the DB. When the Stmt needs to execute on a new underlying connection, it will prepare itself on the new connection automatically.
🚀