CBG-3271 allow in memory buckets to persist
- In-memory buckets now live for the lifecycle of the program until Bucket.CloseAndDelete is called. This allows a bucket to be closed without removing its data, which is a common pattern in Sync Gateway tests that use persistent config to update database configuration.
- When a bucket is first created, a copy is stored in the bucket registry, and that copy is not closed until:
  - in memory: CloseAndDelete is called
  - on disk: Close is called, bringing the bucket's refcount to 0
- Force the bucket name to be defined, and do not allow multiple copies of a persistent bucket to be opened from different paths. Note that this is a blunt instrument and is indiscriminate to path manipulation: paths are compared lexically, not as normalized paths. The intent is to be safer, since Sync Gateway is not architected to support multiple buckets with the same name that do not share the same backing data.

Implementation: the global state of a bucket consists of two things:

- the *sql.DB representing the underlying SQLite connection
- the dcpFeeds for each connection that exists

The sql.DB connection can be opened multiple times on the same path, and this was the original implementation of rosmar. However, an in-memory database can't be opened multiple times except via the cache=shared query parameter to sqlite3_open. That produced behavior I didn't understand, and it is not typically supported by SQLite, since multiple connections are only coordinated through a WAL when the database is on disk.

DCP feeds in rosmar work by pushing an event onto a queue for each CUD operation, which can then be read by any running feed. Instead of keeping separate feeds for each copy of Bucket and publishing to them via `bucketsAtUrl`, there is now a single canonical set of bucket feeds; this field moved from Collection to Bucket.

Whether a bucket is open or closed is controlled by Bucket._db(), which is called by every CRUD operation. A Collection holds a pointer to its parent Bucket. Each newly opened Bucket creates its Collections dynamically, but they share pointers to the cached in-memory versions. (See the sketches below.)
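A minimal Go sketch of the registry lifecycle described above; the names (bucketRegistry, getOrCreate, close, deleteBucket) are illustrative placeholders, not the actual rosmar identifiers:

```go
// Sketch of the bucket-registry refcounting: one canonical Bucket per name,
// in-memory buckets survive Close and are only removed by CloseAndDelete.
package main

import (
	"fmt"
	"sync"
)

type Bucket struct {
	name     string
	inMemory bool
	// the real Bucket also owns the *sql.DB and the dcpFeeds
}

type bucketRegistry struct {
	mu       sync.Mutex
	buckets  map[string]*Bucket // canonical copy per bucket name
	refCount map[string]int
}

var registry = &bucketRegistry{
	buckets:  map[string]*Bucket{},
	refCount: map[string]int{},
}

// getOrCreate returns the canonical Bucket for a name, creating and
// registering it on first open, and bumps its refcount.
func (r *bucketRegistry) getOrCreate(name string, inMemory bool) *Bucket {
	r.mu.Lock()
	defer r.mu.Unlock()
	b, ok := r.buckets[name]
	if !ok {
		b = &Bucket{name: name, inMemory: inMemory}
		r.buckets[name] = b
	}
	r.refCount[name]++
	return b
}

// close decrements the refcount. An on-disk bucket is dropped from the
// registry when its refcount reaches 0; an in-memory bucket is kept
// alive until deleteBucket (i.e. CloseAndDelete) is called.
func (r *bucketRegistry) close(name string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	b, ok := r.buckets[name]
	if !ok {
		return
	}
	if r.refCount[name] > 0 {
		r.refCount[name]--
	}
	if r.refCount[name] == 0 && !b.inMemory {
		delete(r.buckets, name)
		delete(r.refCount, name)
	}
}

// deleteBucket removes a bucket unconditionally (CloseAndDelete).
func (r *bucketRegistry) deleteBucket(name string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	delete(r.buckets, name)
	delete(r.refCount, name)
}

func main() {
	b1 := registry.getOrCreate("db", true) // in-memory bucket
	b2 := registry.getOrCreate("db", true) // same canonical copy
	fmt.Println(b1 == b2)                  // true
	registry.close("db")
	registry.close("db")
	_, stillThere := registry.buckets["db"] // in-memory: survives Close
	fmt.Println(stillThere)                 // true
	registry.deleteBucket("db")             // CloseAndDelete removes it
}
```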
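And a separate, self-contained sketch of the single canonical feed set plus the `_db()` gate; the types and methods here (event, feeds, Collection.Set, the placeholder sql.DB) are simplified assumptions, not the real rosmar API:

```go
// Sketch: the Bucket owns one canonical set of feeds and the _db() gate;
// Collections hold a pointer to their parent Bucket and route all CRUD
// through Bucket._db().
package main

import (
	"database/sql"
	"errors"
	"fmt"
	"sync"
)

type event struct {
	collection string
	key        string
}

type Bucket struct {
	mu    sync.Mutex
	sqlDB *sql.DB      // nil once the bucket is closed
	feeds []chan event // single canonical set of feeds, owned by the Bucket
}

// _db is called by every CRUD path; it is the single place that decides
// whether the bucket is still open.
func (b *Bucket) _db() (*sql.DB, error) {
	b.mu.Lock()
	defer b.mu.Unlock()
	if b.sqlDB == nil {
		return nil, errors.New("bucket is closed")
	}
	return b.sqlDB, nil
}

// publish pushes a CUD event onto every running feed.
func (b *Bucket) publish(e event) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for _, f := range b.feeds {
		select {
		case f <- e:
		default: // the real implementation buffers events
		}
	}
}

// Collection holds a pointer back to its parent Bucket.
type Collection struct {
	name   string
	bucket *Bucket
}

func (c *Collection) Set(key string) error {
	if _, err := c.bucket._db(); err != nil {
		return err
	}
	// ... write to SQLite here ...
	c.bucket.publish(event{collection: c.name, key: key})
	return nil
}

func main() {
	b := &Bucket{sqlDB: &sql.DB{}} // placeholder; a real bucket opens SQLite
	col := &Collection{name: "_default", bucket: b}
	fmt.Println(col.Set("doc1")) // <nil>

	b.sqlDB = nil                // simulate a closed bucket
	fmt.Println(col.Set("doc2")) // bucket is closed
}
```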