add SQLite3 backend option #6176
Comments
Any sort of ORM layer will mask lower-level SQL dialect/capability divergences. What they've done there is effectively rolling their own solution. Tools like
As always, the devil is in the details here. Switching overnight to "pure SQL" isn't realistic. First, there are matters of migrating every existing store. Nevertheless, I agree. There's also an alternative piece-wise path: with some of the refactoring we've done in
^ TL;DR: rather than attempting to rip everything out in one fell swoop, we should incrementally target the areas that have the potential for the most gain (heavily loaded areas like payments, invoices, and forwarding-related material)
I haven't looked into it much yet. Some ORMs can also be a bit too smart at times. But indeed, there are many ways to cover this. And perhaps it won't ever be necessary to use highly specific capabilities.
That's why I proposed doing a step-by-step migration from kv to pure SQL. I never wanted to suggest that all the stores should be converted to structured SQL in one go. I totally agree that a full switch-over from bbolt to key-value SQLite can't happen in a single release. Running hours are needed to gain confidence, and just adding SQLite as a backend first makes sense.
I think the downside of this is that there will be two implementations of a store to maintain for an extended period of time. Dev and maintenance could be done by another team, but overall it may be less efficient. Also, that custom implementation will need to have clear added value beyond maintainability of the code for it to be used. And there is the question of how problematic it is to have lnd state spread across a bbolt file and a SQL database.
That's a good point; on the code level we can use things like interface assertions to guard new functionality. Say someone has been working on an implementation using Redis instead of bolt to store invoices, and we add a new base feature to the store interface. The upside is that storage evolution is decoupled from mainline development.
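To make the interface-assertion idea concrete, here's a minimal Go sketch; the `InvoiceStore`/`HTLCQuerier` names and types are hypothetical, not lnd's actual interfaces:

```go
package store

import "context"

// Invoice and HTLC are placeholder types for this sketch.
type Invoice struct{ Memo string }
type HTLC struct{ AmountMsat uint64 }

// InvoiceStore is a hypothetical base interface that every backend
// (bbolt, Postgres, Redis, ...) implements.
type InvoiceStore interface {
	AddInvoice(ctx context.Context, inv Invoice) error
	LookupInvoice(ctx context.Context, hash [32]byte) (Invoice, error)
}

// HTLCQuerier is a hypothetical optional extension that only some
// backends support. New base features can live behind small interfaces
// like this one, so existing backends keep compiling unchanged.
type HTLCQuerier interface {
	QueryHTLCs(ctx context.Context, invoiceHash [32]byte) ([]HTLC, error)
}

// QueryHTLCsIfSupported guards the new functionality with an interface
// assertion: backends that don't implement HTLCQuerier simply report
// the feature as unavailable instead of breaking.
func QueryHTLCsIfSupported(ctx context.Context, s InvoiceStore,
	hash [32]byte) ([]HTLC, bool, error) {

	q, ok := s.(HTLCQuerier)
	if !ok {
		return nil, false, nil
	}
	htlcs, err := q.QueryHTLCs(ctx, hash)
	return htlcs, true, err
}
```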
Today, with all that said, I think abstracting out the invoice storage is a good initial target, as the current iteration is very tightly coupled to
Renamed the issue to just target a
The relevant branch where some work has been started, for those looking to pursue this issue: https://github.com/bottlepay/lnd/tree/sqlite
Hello! I'm interested in picking up this issue, and just want to make sure I understand the expected scope for the first iteration: given that Postgres support has landed, the next step we'd like to take is to introduce SQLite as another key-value backend.
All migration tooling and/or storing data in normalized SQL tables (as opposed to a single key-value table) would come in the future. Is there anything that I'm missing or misunderstanding? If not, I'll start working on this. Thanks in advance!
Correct! An initial target for starting to store data in normalized SQL tables would be areas that are heavily loaded and could use better queryability, like invoices #6288. I've also started to use the cgo-free version of SQLite along with sqlc in a new project. So far so good! I've also been using some new generic interface/middleware abstractions (the project uses Go 1.18 by default), which'll allow us to potentially use
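For anyone picking this up, a minimal sketch of what the cgo-free setup might look like, assuming the driver in question is modernc.org/sqlite (which registers itself with database/sql under the name "sqlite"); the kv table here is just a placeholder, and in practice sqlc would generate typed query wrappers from .sql files rather than us writing queries by hand:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "modernc.org/sqlite" // cgo-free SQLite driver, registers as "sqlite"
)

func main() {
	// Open (or create) a SQLite database file without cgo.
	db, err := sql.Open("sqlite", "file:lnd-test.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Placeholder schema; sqlc would normally generate typed Go code
	// from .sql files containing the schema and queries.
	_, err = db.Exec(`CREATE TABLE IF NOT EXISTS kv (
		key   BLOB PRIMARY KEY,
		value BLOB NOT NULL
	)`)
	if err != nil {
		log.Fatal(err)
	}

	var n int
	if err := db.QueryRow(`SELECT count(*) FROM kv`).Scan(&n); err != nil {
		log.Fatal(err)
	}
	fmt.Println("rows:", n)
}
```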
Some relevant documentation re which options we may want to enable by default: https://www.sqlite.org/pragma.html. Namely: foreign key checks should always be active, and we may want to enable WAL mode as that'll let us support multiple readers in a non-blocking manner (readers don't block writers and writers don't block readers). WAL mode would also enable backup/replication options such as litestream and rqlite (though the latter is more heavyweight), which aren't as safe as full-on synchronous replication, but may give hobbyist node operators who aren't ready to set up an RDBMS server a sweet spot.
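To illustrate, setting the two pragmas mentioned above could look roughly like this right after opening the connection (just a sketch against a database/sql handle, not how lnd would necessarily wire it up):

```go
package sqlitecfg

import (
	"database/sql"
	"fmt"
)

// enableSQLitePragmas turns on foreign key enforcement and switches the
// journal mode to WAL. PRAGMA journal_mode returns the resulting mode,
// so it's read back to confirm WAL actually took effect. Note that
// foreign_keys is a per-connection setting, so with database/sql
// connection pooling it may be better to set it via the driver's DSN
// options instead.
func enableSQLitePragmas(conn *sql.DB) error {
	if _, err := conn.Exec("PRAGMA foreign_keys = ON;"); err != nil {
		return err
	}

	var mode string
	if err := conn.QueryRow("PRAGMA journal_mode = WAL;").Scan(&mode); err != nil {
		return err
	}
	if mode != "wal" {
		return fmt.Errorf("expected journal_mode wal, got %q", mode)
	}
	return nil
}
```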
Does that sound like the right approach to you, or do you think I should give a
I like that approach: it means we can get
Another relevant tool that might come in handy once we start to add proper SQL tables into the mix: https://github.com/golang-migrate/migrate. It can be used as a CLI command to run migrations, or as a Go package. Under the hood, it handles locking tables if needed during a migration, and it creates an internal table that tracks which database version is active.
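A rough sketch of the Go-package usage, with a placeholder migrations directory and a placeholder Postgres DSN (lnd doesn't use any of this today):

```go
package main

import (
	"errors"
	"log"

	"github.com/golang-migrate/migrate/v4"
	_ "github.com/golang-migrate/migrate/v4/database/postgres" // database driver
	_ "github.com/golang-migrate/migrate/v4/source/file"       // read migrations from disk
)

func main() {
	// Placeholder URLs: a directory of NNN_name.up.sql / NNN_name.down.sql
	// files, and a Postgres connection string.
	m, err := migrate.New(
		"file://migrations",
		"postgres://lnd:lnd@localhost:5432/lnd?sslmode=disable",
	)
	if err != nil {
		log.Fatal(err)
	}

	// Up applies all pending migrations; ErrNoChange just means the
	// tracking table already records the latest version.
	if err := m.Up(); err != nil && !errors.Is(err, migrate.ErrNoChange) {
		log.Fatal(err)
	}
}
```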
In the meantime, I gained some experience with golang-migrate/migrate, and there is one annoying thing about it: if a migration doesn't complete, it sets a dirty flag that needs to be cleared manually. This even happens when the whole migration is executed in a db transaction and nothing is actually dirty. Many people complain about it. Furthermore, this library requires the migration SQL to live in separate files, which isn't always the best option when you want a self-contained executable. The files can be embedded with go:embed, but why not just define a few clean SQL strings? For these reasons, I switched to github.com/rubenv/sql-migrate, which does everything I need.
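For comparison, a sketch of the sql-migrate approach with the migration SQL living in plain Go strings; the table and migration names here are made up, and the cgo-free SQLite driver is only an assumption:

```go
package main

import (
	"database/sql"
	"log"

	migrate "github.com/rubenv/sql-migrate"
	_ "modernc.org/sqlite"
)

func main() {
	db, err := sql.Open("sqlite", "file:lnd-test.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Migrations live directly in the binary as SQL strings: no
	// separate files and no embedding step.
	migrations := &migrate.MemoryMigrationSource{
		Migrations: []*migrate.Migration{
			{
				Id:   "1_create_kv",
				Up:   []string{`CREATE TABLE kv (key BLOB PRIMARY KEY, value BLOB NOT NULL);`},
				Down: []string{`DROP TABLE kv;`},
			},
		},
	}

	n, err := migrate.Exec(db, "sqlite3", migrations, migrate.Up)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("applied %d migrations", n)
}
```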
What do you mean by this? The files can be embedded, and then it's as if everything can be read from a virtual file system in the Go runtime. Clean strings here means you'd rather have the migrations live in normal Go strings in some file? I don't see how that's any different from the embedded VFS approach, particularly given you can take those regular strings and wrap a fake file system around them to adhere to the interface. Either way, we don't have anything to migrate quite yet (so we don't need to commit to a migration library), since we're still in pure KV land for now.
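To make the "embedded files behave like a virtual file system" point concrete, a sketch using go:embed with golang-migrate's iofs source (the directory layout and function name are arbitrary):

```go
package migrations

import (
	"embed"

	"github.com/golang-migrate/migrate/v4"
	_ "github.com/golang-migrate/migrate/v4/database/postgres"
	"github.com/golang-migrate/migrate/v4/source/iofs"
)

//go:embed sql/*.sql
var migrationFS embed.FS

// Apply runs all embedded migrations against the given database URL.
// The .sql files are compiled into the binary, but golang-migrate reads
// them through the fs.FS interface exactly as if they were on disk.
func Apply(databaseURL string) error {
	src, err := iofs.New(migrationFS, "sql")
	if err != nil {
		return err
	}
	m, err := migrate.NewWithSourceInstance("iofs", src, databaseURL)
	if err != nil {
		return err
	}
	if err := m.Up(); err != nil && err != migrate.ErrNoChange {
		return err
	}
	return nil
}
```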
Re-opening as we still have #7252 |
LND 0.14 ships with support for Postgres as a database backend. Unfortunately the full potential of Postgres can't be reached because internally LND still uses key/value storage everywhere.
This means that currently with a Postgres backend, the data that is stored is mostly unreadable. It uses custom serialization/deserialization in several flavours. Any tool that wants to operate on this data needs to implement the exact same serialization methods.
More importantly, maintaining custom serialization code is time-consuming and error-prone. This becomes apparent in an extreme way for database migrations. For each migration, a frozen copy of the serialization code needs to be added to the code base, to ensure that the migration won't ever break.
Furthermore, a minimalistic kv store like bbolt doesn't offer functionality like indexes (other than on the key) and constraints, functionality that could be put to good use to guard database integrity in a simple way and make lnd more reliable and faster overall.
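As a purely illustrative example of what those features buy, here's a hypothetical invoice schema (not lnd's) expressed as a Go constant:

```go
// Purely illustrative schema: the UNIQUE and CHECK constraints and the
// foreign key are enforced by the database itself, and the index on
// settle_date supports range queries without a full scan. With a plain
// key/value store, all of this has to be re-implemented and kept
// consistent in application code.
const exampleInvoiceSchema = `
CREATE TABLE invoices (
    id           INTEGER PRIMARY KEY,
    payment_hash BLOB    NOT NULL UNIQUE,
    amount_msat  BIGINT  NOT NULL CHECK (amount_msat >= 0),
    settle_date  INTEGER
);

CREATE TABLE invoice_htlcs (
    id          INTEGER PRIMARY KEY,
    invoice_id  INTEGER NOT NULL REFERENCES invoices (id) ON DELETE CASCADE,
    amount_msat BIGINT  NOT NULL
);

CREATE INDEX invoices_settle_date_idx ON invoices (settle_date);
`
```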
These are all advantages that could be unlocked by a step-by-step migration from kv to structured sql. The problem with that migration is that there is still bbolt. If bbolt support needs to stay, a large amount of code needs to be added to have two implementations for every store in the system. This means more potential for bugs and increased maintenance costs.
One way to address this problem is to replace bbolt by SQLite. SQLite is probably one of the top five most deployed software modules in the world, so reliability shouldn't be an issue.
The initial replacement will stick to the kv model for data. But once fully on the sql platform, it becomes possible to migrate select sub-systems that need it the most to structured sql tables. This migration can happen at the same time for both SQLite and Postgres, so that there won't be duplicate implementations.
Ideas for a light abstraction to iron out the differences in SQL dialect between Postgres and SQLite can be taken from c-lightning.
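For a flavour of what such a light abstraction could look like, here's a sketch that only handles placeholder syntax; c-lightning's actual approach may differ, and a real version would have more dialect differences to cover:

```go
package sqldb

import (
	"fmt"
	"strings"
)

// rewritePlaceholders converts SQLite-style "?" placeholders into the
// "$1, $2, ..." form that Postgres expects, so each query only has to
// be written once. A real implementation would also need to skip "?"
// characters inside string literals; this sketch ignores that for
// brevity.
func rewritePlaceholders(query string, postgres bool) string {
	if !postgres {
		return query
	}

	var b strings.Builder
	arg := 0
	for _, r := range query {
		if r == '?' {
			arg++
			fmt.Fprintf(&b, "$%d", arg)
			continue
		}
		b.WriteRune(r)
	}
	return b.String()
}
```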
A first test with LND on SQLite has been carried out successfully.