multi: add sqlite backend option #7252

Merged
merged 7 commits into lightningnetwork:master from sqlite-pt2 on Jan 31, 2023

Conversation

ellemouton
Collaborator

In this PR, the changes from #7251 are put to use and a new sqlite db backend option is added.

Depends on #7251
Fixes #6176
Replaces #6570
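For orientation, here is a minimal sketch of how a node operator might opt into the new backend via lnd.conf. The option name below is an assumption based on this PR's sample-lnd.conf changes, not a confirmed reference:

    [db]
    ; select the new backend added in this PR (assumed option value)
    db.backend=sqlite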

@ellemouton ellemouton force-pushed the sqlite-pt2 branch 2 times, most recently from 5ea0d29 to cf419c5 on December 16, 2022 11:12
go.mod
@@ -166,6 +182,8 @@ replace github.com/ulikunitz/xz => github.com/ulikunitz/xz v0.5.8
// https://deps.dev/advisory/OSV/GO-2021-0053?from=%2Fgo%2Fgithub.com%252Fgogo%252Fprotobuf%2Fv1.3.1
replace github.com/gogo/protobuf => github.com/gogo/protobuf v1.3.2

replace github.com/lightningnetwork/lnd/kvdb => github.com/ellemouton/lnd/kvdb v0.0.0-20221216101740-c01520c57f3c
Collaborator Author


to be replaced with kvdb module tag before landing :)

@ellemouton ellemouton force-pushed the sqlite-pt2 branch 6 times, most recently from 629e146 to eca1a9e on December 19, 2022 15:51
@ellemouton ellemouton requested a review from Roasbeef December 19, 2022 16:21
@ellemouton ellemouton force-pushed the sqlite-pt2 branch 5 times, most recently from 96e9d7c to 6dc510c on December 21, 2022 16:54
@ellemouton
Collaborator Author

I used the async_payments_benchmark itest to benchmark some of the different flavours of the sqlite backend. Here are the results (note, I ran these on a mac M1 Pro). I ran the benchmark 4 times per flavour.

wallet.db and sphinx.db separate from main db:

All transactions starting with "BEGIN DEFERRED" (the default directive):

  • this causes SQLITE_BUSY errors.

All transactions starting with "BEGIN IMMEDIATE":

  1. Benchmark info: Elapsed time: 10.920178333s
    Benchmark info: TPS: 44.230046916029615
  2. Benchmark info: Elapsed time: 10.195665s
    Benchmark info: TPS: 47.373074733232215
  3. Benchmark info: Elapsed time: 9.145208208s
    Benchmark info: TPS: 52.81454385887942
  4. Benchmark info: Elapsed time: 10.644479083s
    Benchmark info: TPS: 45.37563522214871

Write txs use "BEGIN IMMEDIATE" and reads use "BEGIN DEFERRED":

  1. Benchmark info: Elapsed time: 12.37037875s
    Benchmark info: TPS: 39.04488373082352
  2. Benchmark info: Elapsed time: 9.54607925s
    Benchmark info: TPS: 50.596688687662
  3. Benchmark info: Elapsed time: 6.8426615s
    Benchmark info: TPS: 70.58656927571238
  4. Benchmark info: Elapsed time: 11.509999s
    Benchmark info: TPS: 41.963513637142796

wallet.db and sphinx.db in main db:

All transactions starting with "BEGIN DEFERRED" (the default directive):

  • this causes SQLITE_BUSY errors.

All transactions starting with "BEGIN IMMEDIATE":

  1. Benchmark info: Elapsed time: 9.512753542s
    Benchmark info: TPS: 50.77394235722542
  2. Benchmark info: Elapsed time: 8.107619333s
    Benchmark info: TPS: 59.57359123091429
  3. Benchmark info: Elapsed time: 11.372332292s
    Benchmark info: TPS: 42.47149903804446
  4. Benchmark info: Elapsed time: 10.099899334s
    Benchmark info: TPS: 47.82225881935706

Write txs use "BEGIN IMMEDIATE" and reads use "BEGIN DEFERRED":

  1. Benchmark info: Elapsed time: 5.733995958s
    Benchmark info: TPS: 84.23445072822635
  2. Benchmark info: Elapsed time: 8.6996415s
    Benchmark info: TPS: 55.51952916680532
  3. Benchmark info: Elapsed time: 9.790121833s
    Benchmark info: TPS: 49.33544323952439
  4. Benchmark info: Elapsed time: 10.441266958s
    Benchmark info: TPS: 46.258754032711515

The current PR state is equivalent to this last configuration: reads are deferred, writes are immediate and the macaroon & sphinx dbs are in the main db. Only the wallet.db is kept separate due to the deadlocks it would cause if kept in the same db.
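To make the directives being compared above concrete, here is a minimal standalone sketch of issuing an explicit BEGIN IMMEDIATE on a single pinned connection. This is not lnd's kvdb code; the driver import and DSN syntax are assumptions (a cznic/modernc-style sqlite driver):

package main

import (
	"context"
	"database/sql"
	"log"

	_ "modernc.org/sqlite" // assumed driver
)

func main() {
	ctx := context.Background()

	db, err := sql.Open("sqlite", "file:demo.db?_pragma=journal_mode(WAL)")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Pin one connection: an explicit BEGIN/COMMIT pair must run on the same
	// connection, which the database/sql pool does not otherwise guarantee.
	conn, err := db.Conn(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// BEGIN DEFERRED (the default) only takes the write lock when the first
	// write statement runs, so two in-flight writers can collide and one of
	// them sees SQLITE_BUSY. BEGIN IMMEDIATE takes the write lock up front,
	// so concurrent writers queue instead of failing mid-transaction.
	if _, err := conn.ExecContext(ctx, "BEGIN IMMEDIATE"); err != nil {
		log.Fatal(err)
	}
	if _, err := conn.ExecContext(ctx,
		"CREATE TABLE IF NOT EXISTS kv(k TEXT PRIMARY KEY, v BLOB)",
	); err != nil {
		log.Fatal(err)
	}
	if _, err := conn.ExecContext(ctx, "COMMIT"); err != nil {
		log.Fatal(err)
	}
}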

@lightninglabs-deploy

@Roasbeef: review reminder
@ellemouton, remember to re-request review from reviewers when ready

@saubyk saubyk requested a review from guggero January 3, 2023 18:12
Collaborator

@guggero guggero left a comment


Awesome PR, great work!
Did a first pass and it looks mostly ready. I want to do some manual testing first before giving my thumbs up though.

@@ -0,0 +1,26 @@
# SQLite support in LND
Collaborator


Carrying over my comments from #7251 for convenience:

  • Mention that Windows i386/ARM and Linux PPC/MIPS aren't supported for SQLite backends (and why)
  • Mention that just switching the db.backend to another type will NOT migrate any data, so for the time being the SQLite backend can only be used for new nodes.

@Roasbeef
Member

Roasbeef commented Jan 5, 2023

Some benchmark runs on my laptop (M1):

    test_harness.go:123: 	Benchmark info: Elapsed time:  3.611866542s
    test_harness.go:123: 	Benchmark info: TPS:  133.7258712035767

    test_harness.go:123: 	Benchmark info: Elapsed time:  5.225612958s
    test_harness.go:123: 	Benchmark info: TPS:  92.42934826632447

    test_harness.go:123: 	Benchmark info: Elapsed time:  4.930356916s
    test_harness.go:123: 	Benchmark info: TPS:  97.96451012148184

It's actually faster than bolt on this hardware/context for me (bolt run), but on a linux VM the opposite is true:

    test_harness.go:123: 	Benchmark info: Elapsed time:  40.274300833s
    test_harness.go:123: 	Benchmark info: TPS:  11.992759402647133

I then tried some of the options in this blog post:

diff --git a/kvdb/sqlite/db.go b/kvdb/sqlite/db.go
index ae5ff9295..c683e3425 100644
--- a/kvdb/sqlite/db.go
+++ b/kvdb/sqlite/db.go
@@ -51,6 +51,18 @@ func NewSqliteBackend(ctx context.Context, cfg *Config, fileName,
 			name:  "journal_mode",
 			value: "WAL",
 		},
+		{
+			name:  "synchronous",
+			value: "NORMAL",
+		},
+		{
+			name:  "temp_store",
+			value: "MEMORY",
+		},
+		{
+			name:  "mmap_size",
+			value: "1000000000",
+		},
 	}
 	sqliteOptions := make(url.Values)
 	for _, option := range pragmaOptions {
    test_harness.go:123: 	Benchmark info: Elapsed time:  4.328912959s
    test_harness.go:123: 	Benchmark info: TPS:  111.57535496199381

    test_harness.go:123: 	Benchmark info: Elapsed time:  5.30250675s
    test_harness.go:123: 	Benchmark info: TPS:  91.08899295602028

    test_harness.go:123: 	Benchmark info: Elapsed time:  4.930356916s
    test_harness.go:123: 	Benchmark info: TPS:  97.96451012148184

Linux VM run (4 vCPU, 8 GB RAM, bolt):

  test_harness.go:123:        Benchmark info: Elapsed time:  6.840655805s
  test_harness.go:123:        Benchmark info: TPS:  70.60726540969415

  test_harness.go:123:        Benchmark info: Elapsed time:  7.049314106s
  test_harness.go:123:        Benchmark info: TPS:  68.51730434155235

(sqlite, new options linked above)

    test_harness.go:123:        Benchmark info: Elapsed time:  9.897418056s
    test_harness.go:123:        Benchmark info: TPS:  48.80060610425528

    test_harness.go:123:        Benchmark info: Elapsed time:  11.133455894s
    test_harness.go:123:        Benchmark info: TPS:  43.382755956333064

    test_harness.go:123:        Benchmark info: Elapsed time:  13.009343366s
    test_harness.go:123:        Benchmark info: TPS:  37.127162102764046

default options:

    test_harness.go:123:        Benchmark info: Elapsed time:  20.616160447s
    test_harness.go:123:        Benchmark info: TPS:  23.428222788704804

    test_harness.go:123:        Benchmark info: Elapsed time:  19.756080022s
    test_harness.go:123:        Benchmark info: TPS:  24.448169852629686

    test_harness.go:123:        Benchmark info: Elapsed time:  13.554117076s
    test_harness.go:123:        Benchmark info: TPS:  35.63492902501471

So totally not scientific benchmarks, hard to really conclude the impact of the options. In the other PR I suggested allowing a raw pragma query string to be passed in as an opt, so ppl can tune these w/o recompiling.
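To illustrate that raw-pragma idea, here is a rough sketch of folding user-supplied pragma strings into the connection string so they can be tuned without recompiling. This is not the actual kvdb/sqlite code: the helper name is made up, and the "_pragma" query-parameter form is what the modernc/cznic driver family accepts.

import (
	"fmt"
	"net/url"
	"strings"
)

// buildDSN is a hypothetical helper: it combines the backend's default pragma
// options (mirroring the pragmaOptions slice in the diff above) with raw
// "name=value" strings supplied by the user on the CLI or in the config.
func buildDSN(dbPath string, userPragmas []string) string {
	opts := make(url.Values)

	// Defaults the backend always sets.
	opts.Add("_pragma", "journal_mode(WAL)")

	// User supplied extras, e.g. "synchronous=normal" or "temp_store=memory".
	for _, p := range userPragmas {
		parts := strings.SplitN(p, "=", 2)
		if len(parts) != 2 {
			continue // skip malformed entries in this sketch
		}
		opts.Add("_pragma", fmt.Sprintf("%s(%s)", parts[0], parts[1]))
	}

	return fmt.Sprintf("file:%s?%s", dbPath, opts.Encode())
}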

}
closeFuncs[NSMacaroonDB] = sqliteMacaroonBackend.Close

sqliteDecayedLogBackend, err := kvdb.Open(
Member


Realized that we aren't storing the neutrino data in here. The current entrypoint is in initNeutrinoBackend. It also still uses walletdb directly rather than kvdb.

Member


I used this diff to have the neutrino data be stored in sqlite as well:

diff --git a/config_builder.go b/config_builder.go
index 6f427ffd2..f0d425f1a 100644
--- a/config_builder.go
+++ b/config_builder.go
@@ -1190,13 +1190,28 @@ func initNeutrinoBackend(cfg *Config, chainDir string,
 		return nil, nil, err
 	}
 
-	dbName := filepath.Join(dbPath, "neutrino.db")
-	db, err := walletdb.Create(
-		"bdb", dbName, !cfg.SyncFreelist, cfg.DB.Bolt.DBTimeout,
+	var (
+		db  walletdb.DB
+		err error
 	)
-	if err != nil {
-		return nil, nil, fmt.Errorf("unable to create neutrino "+
-			"database: %v", err)
+	switch {
+	case cfg.DB.Backend == kvdb.SqliteBackendName:
+		db, err = kvdb.Open(
+			kvdb.SqliteBackendName, context.Background(), cfg.DB.Sqlite,
+			"neutrino.db", "neutrino",
+		)
+		if err != nil {
+			return nil, nil, err
+		}
+	default:
+		dbName := filepath.Join(dbPath, "neutrino.db")
+		db, err = walletdb.Create(
+			"bdb", dbName, !cfg.SyncFreelist, cfg.DB.Bolt.DBTimeout,
+		)
+		if err != nil {
+			return nil, nil, fmt.Errorf("unable to create neutrino "+
+				"database: %v", err)
+		}
 	}
 
 	headerStateAssertion, err := parseHeaderStateAssertion(

That places it into its own database (which seems like what we want to do here?).

Collaborator Author


great catch!! included your diff. I think we can just put it in lnd.db too if it doesn't cause any deadlocks, right? will see if itests pass like that

Collaborator Author


ok, with neutrino in the same file as the other DBs, running make itest dbbackend=sqlite backend=neutrino seems to behave ok. Just ran into some of the usual flakes, but it looks like the majority of tests pass. So perhaps it's fine to keep it in the same file?

Member


To same file, or not to same file....🤔

Only arg I can think of for not putting it in the same file is if ppl frequently just delete that db when they want to force a resync (tho they can do so already by deleting the on-disk headers).

Looking at the way the files are set up, if we put it all in the same file (or a different file) within the distinct directory, then the flat header files end up in a different directory, so things end up being more scattered vs retaining our existing, pretty well established file format.

Collaborator Author

@ellemouton ellemouton Jan 6, 2023


ok, so even with the latest update where we don't have the distinct directory?

In any case, the latest update keeps the neutrino db in a separate file and puts all sqlite db files in the main data/chain/bitcoin/<network>/ dir. This includes all dbs that would usually have been in the graph path... is that ok?

Lemme know what you think
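For reference, the layout being described would look roughly like this (file names pieced together from this thread, so treat it as an illustration rather than the final on-disk format):

    data/chain/bitcoin/<network>/
        lnd.db       # main sqlite db (channel state, graph, macaroons, sphinx, ...)
        wallet.db    # kept in its own file to avoid the deadlocks mentioned earlier
        neutrino.db  # kept separate per the discussion above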

@ellemouton ellemouton force-pushed the sqlite-pt2 branch 3 times, most recently from 226ec69 to 9a177f4 on January 5, 2023 11:15
@ellemouton ellemouton requested a review from Roasbeef January 5, 2023 20:09
@ellemouton
Collaborator Author

Thanks for all the performance tests @Roasbeef 🚀
For now, I have hard coded the synchronous=normal one and also added the config option so that users can add others. Let me know if you think we should also hard code any of the other pragma options.
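For anyone following along, the resulting configuration surface might look something like this in lnd.conf. The exact option name and value syntax here are assumptions; the sample-lnd.conf changes in this PR are authoritative:

    [sqlite]
    ; journal_mode=WAL and synchronous=normal are applied by default (per the
    ; discussion above); extra pragmas can be layered on top (assumed syntax).
    db.sqlite.pragmaoptions=temp_store=memory
    db.sqlite.pragmaoptions=mmap_size=1000000000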

@ellemouton ellemouton force-pushed the sqlite-pt2 branch 2 times, most recently from 00540cb to 7ebc4b9 on January 25, 2023 05:24
Collaborator

@guggero guggero left a comment


Code looks good to me, nice work!
So this can be merged once we come to a conclusion concerning what pragma flags to enable by default.

@@ -360,7 +360,7 @@ jobs:
fail-fast: false
matrix:
pinned_dep:
- google.golang.org/grpc v1.38.0
- google.golang.org/grpc v1.41.0
Collaborator


Phew, I think this might just work out without any issues... But the next version will give us problems I think: lightninglabs/aperture#62
So probably this is the last "painless" etcd upgrade 😬

@ellemouton
Collaborator Author

looks like the resolution_handoff itest is consistently failing for the sqlite backend 🙈

ellemouton and others added 7 commits January 25, 2023 14:03
  • Use kvdb package v1.4.1. This update also forced the protobuf version to be bumped, which required `make rpc` to be run to update the generated files. This also required a bump in the github pinned dependencies config for the grpc and protobuf libs.
  • In this diff, we expose the option to use sqlite as a db backend.
  • Warn a user if they attempt to initialise a new db type (sqlite or postgres) when an old bbolt db file is found.
@ellemouton
Collaborator Author

ellemouton commented Jan 25, 2023

works if I change maxDbConns to 2 (up from 1)

@joostjager
Contributor

joostjager commented Jan 25, 2023

In terms of the effect of SQLITE_OPEN_FULLMUTEX, I think it mainly guards certain critical blocks and doesn't block the process of creating new read/write transactions (fine grained, not some coarse mega lock).

I am not convinced of this. On the branch below, I set max connections to 1 and tried to detect parallelism in the async_payments_benchmark test. It didn't happen.

master...joostjager:lnd:parallel-canary

When setting connections to 2, it does detect parallelism (the itest crashes because the canary panics).

looks like the resolution_handoff itest is consistently failing for the sqlite backend 🙈
works if I change maxDbConns to 2 (up from 1)

I think it could be that this itest requires parallelism.

Still the question remains then whether a low number of max connections is ideal for performance given that readers do seem to block each other. And also how to avoid deadlocks caused by multiple readers using a small set of connections?
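A minimal sketch of the knob under discussion (assumed driver and DSN; this is not lnd's kvdb wiring): the database/sql pool cap is what bounds how many sqlite connections, and therefore how many concurrent transactions, can be in flight.

package main

import (
	"database/sql"
	"log"
	"sync"

	_ "modernc.org/sqlite" // assumed driver
)

func main() {
	db, err := sql.Open("sqlite", "file:demo.db?_pragma=journal_mode(WAL)")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// With a cap of 1, every transaction (read or write) queues behind the
	// single connection; with 2 or more, a reader and a writer can make
	// progress at the same time, which matches the maxDbConns observation
	// above.
	db.SetMaxOpenConns(2)

	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()

			// Each goroutine checks a connection out of the pool (up to the
			// cap); under WAL these reads can proceed while a writer holds
			// the write lock on another connection.
			var one int
			if err := db.QueryRow("SELECT 1").Scan(&one); err != nil {
				log.Println(err)
			}
		}()
	}
	wg.Wait()
}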

@Roasbeef
Member

Roasbeef commented Jan 26, 2023

Still the question remains then whether a low number of max connections is ideal for performance given that readers do seem to block each other.

See my tests on the linux machine above: if you have enough cores, then things can scale well. Otherwise, with a "normal" number of cores, performance suffers as you increase the number of max connections.

And also how to avoid deadlocks caused by multiple readers using a small set of connections?

IIUC, the prior approach of distinct read/write connections allows this to succeed, as with WAL mode readers don't block writers and vice versa. I think it's cool we were able to update the sqlite library so we can rely on the tx options, but it does seem like the two-connection route allows for a better level of parallelism. The "default" way of scaling out reads/writes for sqlite seems to be multi-connection/multi-process. Historically, it looks like the author added the thread safety stuff, but doesn't "believe" in threads and instead informs ppl that they should use multiple processes instead. In Go, this would be multiple connections, up to a limit based on the number of available cores.

@Roasbeef
Member

Roasbeef commented Jan 26, 2023

Dug up some more documentation/discussion re multi-threading and concurrency: https://sqlite.org/forum/forumpost/2a53b75034770302?raw

However, only one thread may be executing in a connection at a time. That means that if one thread executes a command on a connection, then for the duration of that thread executing "inside the library", any other thread attempting to enter the library "on a different thread" will have to wait. In other words the SQLite3 library is serially entrant per connection, not multiple-entrant per connection. This is a consequence of how it is built and cannot be changed. Note that a "statement" is a child of a "connection" and as such "statements" are not isolated from each other -- it is at the parent connection level that isolation occurs.

You do not have to keep opening/closing connections, but you probably want each thread to open and use its own connection so that the processing on each thread is not visible to (or have impact on) other threads/connections until that thread/connection commits its changes. If you do this SQLite3 will arbitrate each threads/connections access and view of the database individually, which is I think what you mean.

Which argues for the prior approach where we have a read connection that can parallelize reads using the WAL, and another write connection that serializes the writes with the exclusive lock.
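One way the prior approach could be expressed on top of database/sql, assuming it maps to two separate handles over the same file (a sketch with made-up names and an assumed driver, not the earlier kvdb implementation):

import (
	"database/sql"

	_ "modernc.org/sqlite" // assumed driver
)

// openReadWritePair is a hypothetical helper: one handle dedicated to writes,
// capped at a single connection so writes serialize cleanly, and one handle
// for reads, which WAL mode lets proceed while a write is in progress.
func openReadWritePair(path string) (readDB, writeDB *sql.DB, err error) {
	dsn := "file:" + path + "?_pragma=journal_mode(WAL)&_pragma=busy_timeout(5000)"

	writeDB, err = sql.Open("sqlite", dsn)
	if err != nil {
		return nil, nil, err
	}
	// All writes funnel through this single connection instead of racing for
	// the sqlite write lock and surfacing SQLITE_BUSY.
	writeDB.SetMaxOpenConns(1)

	readDB, err = sql.Open("sqlite", dsn)
	if err != nil {
		writeDB.Close()
		return nil, nil, err
	}
	// Readers each get their own connection so that, per the forum post
	// quoted above, more than one of them can be "inside the library" at
	// the same time.
	readDB.SetMaxOpenConns(4)

	return readDB, writeDB, nil
}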

@joostjager
Contributor

Which argues for the prior approach where we have a read connection that can parallelize reads using the WAL, and another write connection that serializes the writes with the exclusive lock.

Also on a dedicated read connection with SQLITE_OPEN_FULLMUTEX, I don't see parallelization of reads happening in my tests. I believe that the only way to get this parallel reading from the WAL is to use a separate connection for each thread.

@Roasbeef
Member

Roasbeef commented Jan 27, 2023

Also on a dedicated read connection with SQLITE_OPEN_FULLMUTEX, I don't see parallelization of reads happening in my tests. I believe that the only way to get this parallel reading from the WAL is to use a separate connection for each thread.

Here's the source for sqlite3_exec: https://gitlab.com/cznic/sqlite/-/blob/v1.20.3/lib/sqlite_linux_amd64.go#L80120. It grabs the mutex at that line, confirming that the lib by default only uses SQLITE_OPEN_FULLMUTEX and serializes things per connection.

As was shown above, there seems to be a sweet spot: too many active connections (managed by the database/sql connection pool) slow things down at a certain point. If I pull things down on mac and run w/ 10 conns, then the async test doesn't even pass. On my linux machine (which has a ton of cores), increasing that value beyond 2 sees a degradation in perf (~299 TPS with 2 conns, down to ~50 TPS with 50 conns). If I remove our call to SetMaxOpenConns, then I see a similar drop-off (~90 TPS).

The purpose of the pooled connections in the stdlib package is to get around the fact that libraries like sqlite only allow one "operation" on a connection at a time: https://go.dev/doc/database/manage-connections

At this point, all we have are conjectures w/o proper instrumentation, so we're just in bikeshedding territory here. From all my tests above, sqlite is faster than bbolt; if it's actually doing that without giving us access to concurrent reads, then that's pretty amazing. The maxconns setting (increase the number of conns in the pool for greater concurrency, at the tradeoff of more overhead) is available on the CLI, so we can further experiment w/ tweaking that to enable more concurrency once this lands. Once this lands, we can do further benchmarks for things like read-heavy workloads (a site that continually hits channelbalance every 2 seconds or w/e).

@joostjager
Contributor

The only question that is still on my mind is why exactly more connections make it so much slower. I also thought of removing SetMaxOpenConns for no maximum so that every goroutine can have its own non-blocked reads, but apparently that doesn't work.

With a limit of 1 leading to deadlocks and a limit of 10 making it incredibly slow, it feels like there is not much leeway in setting the right number of connections.

@Roasbeef
Member

I also thought of removing SetMaxOpenConns for no maximum so that every goroutine can have its own non-blocked reads, but apparently that doesn't work.

Yeah, I tried that above (leaving the library at its defaults), and that has similar degradation. I think the next route here once we land this PR is to run the same benchmarks connected to profiling tools, so we can get things like flame graphs to see why perf degrades so much.

It might be something in database/sql (like the MaxIdleConnections config), or something else deeper in the sqlite library. It also might just be the fact that we make a ton of db transactions (as you've documented in other issues) in the payment and path finding critical path that slows things down.

FWIW, as mentioned above, if you have a lot of cores and a fast SSD then things don't drop off so much. With my runs above, it at least appears as though the current defaults are as fast as, or even faster than, default bbolt.

@joostjager
Contributor

joostjager commented Jan 30, 2023

I think the next route here once we land this PR is to run the same benchmarks connected to profiling tools, so we can get things like flame graphs to see why perf degrades so much.

I tried to reproduce the slowdown with a simple (non-lnd/kvdb) read benchmark, but I see nothing bad happening at 500 simultaneous connections.
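Not the benchmark referenced here, but a stand-in sketch of its shape (helper, table, and row names are made up; imports needed are database/sql, sync, and time): many goroutines doing point reads through a pool with a very high connection cap, reporting reads per second.

// benchReads is a hypothetical helper measuring raw read throughput.
func benchReads(db *sql.DB, workers, readsPerWorker int) float64 {
	// Effectively "one connection per goroutine", as in the 500-connection
	// run described above.
	db.SetMaxOpenConns(500)

	start := time.Now()

	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < readsPerWorker; j++ {
				var v []byte
				// Placeholder table/row; errors ignored for brevity.
				_ = db.QueryRow("SELECT v FROM kv WHERE k = ?", "key").Scan(&v)
			}
		}()
	}
	wg.Wait()

	total := float64(workers * readsPerWorker)
	return total / time.Since(start).Seconds()
}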

One thing that I remember seeing in one of my tests is that the scheduling of sqlite transactions may not be optimal. In some cases I got one connection that got starved because it was never its turn to run. Maybe that has something to do with it too.

To me it seems that executing the benchmark that you propose isn't that much work, and this could hopefully complete the picture.

I tried, but ran into:

~/lightninglabs/lnd (sqlite-pt2 ✔) make itest icase=async_payments_benchmark dbbackend=sqlite temptest=true
 Building itest btcd and lnd.
CGO_ENABLED=0 go build -v -tags="rpctest" -o lntest/itest/btcd-itest -ldflags " -X github.com/lightningnetwork/lnd/build.Commit=kvdb/v1.4.1-13-gfe5254510" github.com/btcsuite/btcd
CGO_ENABLED=0 go build -v -tags="dev kvdb_sqlite autopilotrpc chainrpc invoicesrpc neutrinorpc peersrpc routerrpc signrpc verrpc walletrpc watchtowerrpc wtclientrpc rpctest btcd" -o lntest/itest/lnd-itest -ldflags " -X github.com/lightningnetwork/lnd/build.Commit=kvdb/v1.4.1-13-gfe5254510" github.com/lightningnetwork/lnd/cmd/lnd
github.com/lightningnetwork/lnd/kvdb/sqlbase
# github.com/lightningnetwork/lnd/kvdb/sqlbase
../../go/pkg/mod/github.com/lightningnetwork/lnd/kvdb@v1.4.1/sqlbase/db.go:12:2: could not import sync/atomic (open : no such file or directory)
make: *** [build-itest] Error 2

@Roasbeef
Member

Roasbeef commented Jan 31, 2023

I tried to reproduce the slow down with a simple (non lnd/kvdb) read benchmark, but I see nothing bad happening at 500 simultaneous connections.

The issue doesn't seem to be reads, but instead long-running or frequent writes that end up starving out reads. We also have instances where at times around a dozen db transactions are created in the process of fully settling a payment: #5186. I think at one point you counted something like 40 fsyncs for a single payment.

I tried, but ran into:

That's odd, which version of Go are you running? That looks like a GOPATH issue perhaps?

Also re CLN: confirmed that all db access for them is single threaded (one connection for the entire process).

My point above was simply that further benchmarking shouldn't block forward progress towards landing this PR, so users can start testing against it in master. The main open param now (number of active connections and idle connections) can be tuned via a command line flag. We may also end up changing the defaults given further observations.

@joostjager
Contributor

That's odd, which version of Go are you running? That looks like a GOPATH issue perhaps?

Not sure what it was, but go clean -modcache fixed it.

@Roasbeef Roasbeef merged commit 5d22d5e into lightningnetwork:master Jan 31, 2023
@ellemouton ellemouton deleted the sqlite-pt2 branch January 31, 2023 18:15
dstadulis added a commit to lightninglabs/pool that referenced this pull request Mar 7, 2023
lnd PR lightningnetwork/lnd#7252 removed support for 10 OS / architectures

As Pool requires lnd to run, this PR replicates the build list of lnd for Pool.