From 06345d6053d866c7305821099421f11e3ba55d63 Mon Sep 17 00:00:00 2001 From: Dave Collins Date: Fri, 17 Jul 2015 04:07:36 -0500 Subject: [PATCH] WIP: database: Major redesign of database package. This commit contains a complete redesign and rewrite of the database package that approaches things in a vastly different manner than the previous version. This is the first part of several stages that will be needed to ultimately make use of this new package. Some of the reasons for this were discussed in #255, however a quick summary is as follows: - The previous database could only contain blocks on the main chain and reorgs required deleting the blocks from the database. This made it impossible to store orphans and could make external RPC calls for information about blocks during the middle of a reorg fail. - The previous database interface forced a high level of bitcoin-specific intelligence such as spend tracking into each backend driver. - The aforementioned point led to making it difficult to implement new backend drivers due to the need to repeat a lot of non-trivial logic which is better handled at a higher layer, such as the blockchain package. - The old database stored all blocks in leveldb. This made it extremely inefficient to do things such as look up headers and individual transactions since the entire block had to be loaded from leveldb (which entails data copies) to get access. In order to address all of these concerns, and others not mentioned, the database interface has been redesigned as follows: - Two main categories of functionality are provided: block storage and metadata storage - All block storage and metadata storage are done via read-only and read-write MVCC transactions with both manual and managed modes - Support for multiple concurrent readers and a single writer - Readers use a snapshot and therefore are not blocked by the writer - Some key properties of the block storage and retrieval API: - It is generic and does NOT contain additional bitcoin logic such as spend tracking and block linking - Provides access to the raw serialized bytes so deserialization is not forced for callers that don't need it - Support for fetching headers via independent functions which allows implementations to provide significant optimizations - Ability to efficiently retrieve arbitrary regions of blocks (transactions, scripts, etc) - A rich metadata storage API is provided: - Key/value with arbitrary data - Support for buckets and nested buckets - Bucket iteration through a couple of different mechanisms - Cursors for efficient and direct key seeking - Supports registration of backend database implementations - Comprehensive test coverage - Provides strong documentation with example usage This commit also contains an implementation of the previously discussed interface named ffldb (flat file plus leveldb metadata backend).
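To make the shape of the new API concrete, here is a rough usage sketch (illustrative only: the authoritative signatures live in database2/interface.go, the database path is hypothetical, and Bucket.Get is assumed to return the raw value bytes):

    package main

    import (
        "fmt"

        database "github.com/btcsuite/btcd/database2"
        _ "github.com/btcsuite/btcd/database2/ffldb"
        "github.com/btcsuite/btcd/wire"
    )

    func main() {
        // Create a database with the ffldb driver. The driver arguments
        // mirror those used by the dbtool commands in this commit.
        db, err := database.Create("ffldb", "/tmp/exampledb", wire.MainNet)
        if err != nil {
            fmt.Println(err)
            return
        }
        defer db.Close()

        // Managed read-write transaction: store a key under the root
        // metadata bucket. Commit/rollback is handled automatically.
        err = db.Update(func(tx database.Tx) error {
            return tx.Metadata().Put([]byte("mykey"), []byte("myvalue"))
        })
        if err != nil {
            fmt.Println(err)
            return
        }

        // Managed read-only transaction: readers operate on a snapshot
        // and are not blocked by the writer.
        err = db.View(func(tx database.Tx) error {
            fmt.Printf("mykey=%s\n", tx.Metadata().Get([]byte("mykey")))
            return nil
        })
        if err != nil {
            fmt.Println(err)
        }
    }

The ffldb backend mentioned above provides the concrete implementation behind these calls.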
Here is a quick overview: - Highly optimized for read performance with consistent write performance regardless of database size - All blocks are stored in flat files on the file system - Bulk block region fetching is optimized to perform linear reads which improves performance on spindle disks - Anti-corruption mechanisms: - Flat files contain full block checksums to quickly and easily detect database corruption without needing to do expensive merkle root calculations - Metadata checksums - Open reconciliation - Extensive test coverage: - Comprehensive blackbox interface testing - Whitebox testing which uses intimate knowledge to exercise uncommon failure paths such as deleting files out from under the database - Corruption tests (replacing random data in the files) In addition, this commit also contains a new tool under the new database directory named dbtool which provides a few basic commands for testing the database. It is designed around commands, so it could be useful to expand on in the future. Finally, this commit addresses the following issues: - Adds support for and therefore closes #255 - Fixes #199 - Fixes #201 - Implements and closes #256 - Obsoletes and closes #257 - Closes #247 once the required chain and btcd modifications are in place to make use of this new code --- database2/README.md | 77 + database2/cmd/dbtool/fetchblock.go | 62 + database2/cmd/dbtool/fetchblockregion.go | 90 + database2/cmd/dbtool/globalconfig.go | 121 ++ database2/cmd/dbtool/insecureimport.go | 401 ++++ database2/cmd/dbtool/loadheaders.go | 101 + database2/cmd/dbtool/main.go | 116 ++ database2/cmd/dbtool/signal.go | 82 + database2/doc.go | 94 + database2/driver.go | 92 + database2/driver_test.go | 136 ++ database2/error.go | 197 ++ database2/error_test.go | 97 + database2/example_test.go | 177 ++ database2/export_test.go | 17 + database2/ffboltdb/README.md | 53 + database2/ffboltdb/bench_test.go | 103 + database2/ffboltdb/blockio.go | 749 +++++++ database2/ffboltdb/db.go | 1583 +++++++++++++++ database2/ffboltdb/doc.go | 30 + database2/ffboltdb/driver.go | 84 + database2/ffboltdb/driver_test.go | 288 +++ database2/ffboltdb/export_test.go | 26 + database2/ffboltdb/interface_test.go | 2311 +++++++++++++++++++++ database2/ffboltdb/mockfile_test.go | 163 ++ database2/ffboltdb/reconcile.go | 117 ++ database2/ffboltdb/whitebox_test.go | 810 ++++++++ database2/ffldb/README.md | 52 + database2/ffldb/bench_test.go | 103 + database2/ffldb/blockio.go | 749 +++++++ database2/ffldb/db.go | 2078 +++++++++++++++++++ database2/ffldb/doc.go | 29 + database2/ffldb/driver.go | 84 + database2/ffldb/driver_test.go | 288 +++ database2/ffldb/export_test.go | 26 + database2/ffldb/interface_test.go | 2314 ++++++++++++++++++++++ database2/ffldb/ldbtreapiter.go | 58 + database2/ffldb/mockfile_test.go | 163 ++ database2/ffldb/reconcile.go | 117 ++ database2/ffldb/whitebox_test.go | 721 +++++++ database2/interface.go | 455 +++++ database2/internal/treap/README.md | 36 + database2/internal/treap/doc.go | 12 + database2/internal/treap/treap.go | 335 ++++ database2/internal/treap/treap_test.go | 383 ++++ database2/internal/treap/treapiter.go | 322 +++ database2/log.go | 65 + database2/log_test.go | 67 + database2/testdata/blocks1-256.bz2 | Bin 0 -> 37555 bytes 49 files changed, 16634 insertions(+) create mode 100644 database2/README.md create mode 100644 database2/cmd/dbtool/fetchblock.go create mode 100644 database2/cmd/dbtool/fetchblockregion.go create mode 100644 database2/cmd/dbtool/globalconfig.go create mode 100644
database2/cmd/dbtool/insecureimport.go create mode 100644 database2/cmd/dbtool/loadheaders.go create mode 100644 database2/cmd/dbtool/main.go create mode 100644 database2/cmd/dbtool/signal.go create mode 100644 database2/doc.go create mode 100644 database2/driver.go create mode 100644 database2/driver_test.go create mode 100644 database2/error.go create mode 100644 database2/error_test.go create mode 100644 database2/example_test.go create mode 100644 database2/export_test.go create mode 100644 database2/ffboltdb/README.md create mode 100644 database2/ffboltdb/bench_test.go create mode 100644 database2/ffboltdb/blockio.go create mode 100644 database2/ffboltdb/db.go create mode 100644 database2/ffboltdb/doc.go create mode 100644 database2/ffboltdb/driver.go create mode 100644 database2/ffboltdb/driver_test.go create mode 100644 database2/ffboltdb/export_test.go create mode 100644 database2/ffboltdb/interface_test.go create mode 100644 database2/ffboltdb/mockfile_test.go create mode 100644 database2/ffboltdb/reconcile.go create mode 100644 database2/ffboltdb/whitebox_test.go create mode 100644 database2/ffldb/README.md create mode 100644 database2/ffldb/bench_test.go create mode 100644 database2/ffldb/blockio.go create mode 100644 database2/ffldb/db.go create mode 100644 database2/ffldb/doc.go create mode 100644 database2/ffldb/driver.go create mode 100644 database2/ffldb/driver_test.go create mode 100644 database2/ffldb/export_test.go create mode 100644 database2/ffldb/interface_test.go create mode 100644 database2/ffldb/ldbtreapiter.go create mode 100644 database2/ffldb/mockfile_test.go create mode 100644 database2/ffldb/reconcile.go create mode 100644 database2/ffldb/whitebox_test.go create mode 100644 database2/interface.go create mode 100644 database2/internal/treap/README.md create mode 100644 database2/internal/treap/doc.go create mode 100644 database2/internal/treap/treap.go create mode 100644 database2/internal/treap/treap_test.go create mode 100644 database2/internal/treap/treapiter.go create mode 100644 database2/log.go create mode 100644 database2/log_test.go create mode 100644 database2/testdata/blocks1-256.bz2 diff --git a/database2/README.md b/database2/README.md new file mode 100644 index 00000000000..014214a2e79 --- /dev/null +++ b/database2/README.md @@ -0,0 +1,77 @@ +database +======== + +[![Build Status](https://travis-ci.org/btcsuite/btcd.png?branch=master)] +(https://travis-ci.org/btcsuite/btcd) + +Package database provides a block and metadata storage database. + +Please note that this package is intended to enable btcd to support different +database backends and is not something that a client can directly access as only +one entity can have the database open at a time (for most database backends), +and that entity will be btcd. + +When a client wants programmatic access to the data provided by btcd, they'll +likely want to use the [btcrpcclient](https://github.com/btcsuite/btcrpcclient) +package which makes use of the [JSON-RPC API] +(https://github.com/btcsuite/btcd/tree/master/docs/json_rpc_api.md). + +However, this package could be extremely useful for any applications requiring +Bitcoin block storage capabilities. + +As of July 2015, there are over 365,000 blocks in the Bitcoin block chain and over 76 million transactions (which turns out to be over 35GB of data). +This package provides a database layer to store and retrieve this data in a +simple and efficient manner.
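+For example, a caller can store a block and read back its serialized bytes
+using managed transactions (a sketch only, assuming the ffldb driver and a
+hypothetical database path; complete runnable versions are linked in the
+Examples section below):
+
+```Go
+package main
+
+import (
+	"fmt"
+
+	"github.com/btcsuite/btcd/chaincfg"
+	database "github.com/btcsuite/btcd/database2"
+	_ "github.com/btcsuite/btcd/database2/ffldb"
+	"github.com/btcsuite/btcd/wire"
+	"github.com/btcsuite/btcutil"
+)
+
+func main() {
+	// Open an existing database (the path here is hypothetical).
+	db, err := database.Open("ffldb", "/path/to/db", wire.MainNet)
+	if err != nil {
+		fmt.Println(err)
+		return
+	}
+	defer db.Close()
+
+	// Store the mainnet genesis block with a managed read-write
+	// transaction.
+	genesis := btcutil.NewBlock(chaincfg.MainNetParams.GenesisBlock)
+	err = db.Update(func(tx database.Tx) error {
+		return tx.StoreBlock(genesis)
+	})
+	if err != nil {
+		fmt.Println(err)
+		return
+	}
+
+	// Fetch the raw serialized bytes back with a managed read-only
+	// transaction.
+	err = db.View(func(tx database.Tx) error {
+		blockBytes, err := tx.FetchBlock(genesis.Sha())
+		if err != nil {
+			return err
+		}
+		fmt.Printf("Serialized block is %d bytes\n", len(blockBytes))
+		return nil
+	})
+	if err != nil {
+		fmt.Println(err)
+	}
+}
+```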
+ +The default backend, ffldb, has a strong focus on speed, efficiency, and +robustness. It makes use of leveldb for the metadata, flat files for block +storage, and strict checksums in key areas to ensure data integrity. + +## Feature Overview + +- Key/value metadata store +- Bitcoin block storage +- Efficient retrieval of block headers and regions (transactions, scripts, etc) +- Read-only and read-write transactions with both manual and managed modes +- Nested buckets +- Iteration support including cursors with seek capability +- Supports registration of backend databases +- Comprehensive test coverage + +## Documentation + +[![GoDoc](https://godoc.org/github.com/btcsuite/btcd/database?status.png)] +(http://godoc.org/github.com/btcsuite/btcd/database) + +Full `go doc` style documentation for the project can be viewed online without +installing this package by using the GoDoc site here: +http://godoc.org/github.com/btcsuite/btcd/database + +You can also view the documentation locally once the package is installed with +the `godoc` tool by running `godoc -http=":6060"` and pointing your browser to +http://localhost:6060/pkg/github.com/btcsuite/btcd/database + +## Installation + +```bash +$ go get github.com/btcsuite/btcd/database +``` + +## Examples + +* [Basic Usage Example] + (http://godoc.org/github.com/btcsuite/btcd/database#example-package--BasicUsage) + Demonstrates creating a new database and using a managed read-write + transaction to store and retrieve metadata. + +* [Block Storage and Retrieval Example] + (http://godoc.org/github.com/btcsuite/btcd/database#example-package--BlockStorageAndRetrieval) + Demonstrates creating a new database, using a managed read-write transaction + to store a block, and then using a managed read-only transaction to fetch the + block. + +## License + +Package database is licensed under the [copyfree](http://copyfree.org) ISC +License. diff --git a/database2/cmd/dbtool/fetchblock.go b/database2/cmd/dbtool/fetchblock.go new file mode 100644 index 00000000000..79a77777b0c --- /dev/null +++ b/database2/cmd/dbtool/fetchblock.go @@ -0,0 +1,62 @@ +// Copyright (c) 2015 The btcsuite developers +// Use of this source code is governed by an ISC +// license that can be found in the LICENSE file. + +package main + +import ( + "encoding/hex" + "errors" + "time" + + database "github.com/btcsuite/btcd/database2" + "github.com/btcsuite/btcd/wire" +) + +// fetchBlockCmd defines the configuration options for the fetchblock command. +type fetchBlockCmd struct{} + +var ( + // fetchBlockCfg defines the configuration options for the command. + fetchBlockCfg = fetchBlockCmd{} +) + +// Execute is the main entry point for the command. It's invoked by the parser. +func (cmd *fetchBlockCmd) Execute(args []string) error { + // Setup the global config options and ensure they are valid. + if err := setupGlobalConfig(); err != nil { + return err + } + + if len(args) != 1 { + return errors.New("required block hash parameter not specified") + } + blockHash, err := wire.NewShaHashFromStr(args[0]) + if err != nil { + return err + } + + // Load the block database.
+ db, err := loadBlockDB() + if err != nil { + return err + } + defer db.Close() + + return db.View(func(tx database.Tx) error { + log.Infof("Fetching block %s", blockHash) + startTime := time.Now() + blockBytes, err := tx.FetchBlock(blockHash) + if err != nil { + return err + } + log.Infof("Loaded block in %v", time.Now().Sub(startTime)) + log.Infof("Block Hex: %s", hex.EncodeToString(blockBytes)) + return nil + }) +} + +// Usage overrides the usage display for the command. +func (cmd *fetchBlockCmd) Usage() string { + return "<block-hash>" +} diff --git a/database2/cmd/dbtool/fetchblockregion.go b/database2/cmd/dbtool/fetchblockregion.go new file mode 100644 index 00000000000..cdf628a5382 --- /dev/null +++ b/database2/cmd/dbtool/fetchblockregion.go @@ -0,0 +1,90 @@ +// Copyright (c) 2015 The btcsuite developers +// Use of this source code is governed by an ISC +// license that can be found in the LICENSE file. + +package main + +import ( + "encoding/hex" + "errors" + "strconv" + "time" + + database "github.com/btcsuite/btcd/database2" + "github.com/btcsuite/btcd/wire" +) + +// blockRegionCmd defines the configuration options for the fetchblockregion +// command. +type blockRegionCmd struct{} + +var ( + // blockRegionCfg defines the configuration options for the command. + blockRegionCfg = blockRegionCmd{} +) + +// Execute is the main entry point for the command. It's invoked by the parser. +func (cmd *blockRegionCmd) Execute(args []string) error { + // Setup the global config options and ensure they are valid. + if err := setupGlobalConfig(); err != nil { + return err + } + + // Ensure expected arguments. + if len(args) < 1 { + return errors.New("required block hash parameter not specified") + } + if len(args) < 2 { + return errors.New("required start offset parameter not " + + "specified") + } + if len(args) < 3 { + return errors.New("required region length parameter not " + + "specified") + } + + // Parse arguments. + blockHash, err := wire.NewShaHashFromStr(args[0]) + if err != nil { + return err + } + startOffset, err := strconv.ParseUint(args[1], 10, 32) + if err != nil { + return err + } + regionLen, err := strconv.ParseUint(args[2], 10, 32) + if err != nil { + return err + } + + // Load the block database. + db, err := loadBlockDB() + if err != nil { + return err + } + defer db.Close() + + return db.View(func(tx database.Tx) error { + log.Infof("Fetching block region %s<%d:%d>", blockHash, + startOffset, regionLen) + region := database.BlockRegion{ + Hash: blockHash, + Offset: uint32(startOffset), + Len: uint32(regionLen), + } + startTime := time.Now() + regionBytes, err := tx.FetchBlockRegion(&region) + if err != nil { + return err + } + log.Infof("Loaded block region in %v", time.Now().Sub(startTime)) + log.Infof("Double SHA256: %s", wire.DoubleSha256SH(regionBytes)) + log.Infof("Region Hex: %s", hex.EncodeToString(regionBytes)) + return nil + }) +} + +// Usage overrides the usage display for the command. +func (cmd *blockRegionCmd) Usage() string { + return "<block-hash> <start-offset> <region-len>" +} diff --git a/database2/cmd/dbtool/globalconfig.go b/database2/cmd/dbtool/globalconfig.go new file mode 100644 index 00000000000..df67fb78679 --- /dev/null +++ b/database2/cmd/dbtool/globalconfig.go @@ -0,0 +1,121 @@ +// Copyright (c) 2015 The btcsuite developers +// Use of this source code is governed by an ISC +// license that can be found in the LICENSE file.
+ +package main + +import ( + "errors" + "fmt" + "os" + "path/filepath" + + "github.com/btcsuite/btcd/chaincfg" + database "github.com/btcsuite/btcd/database2" + _ "github.com/btcsuite/btcd/database2/ffldb" + "github.com/btcsuite/btcd/wire" + "github.com/btcsuite/btcutil" +) + +var ( + btcdHomeDir = btcutil.AppDataDir("btcd", false) + knownDbTypes = database.SupportedDrivers() + activeNetParams = &chaincfg.MainNetParams + + // Default global config. + cfg = &config{ + DataDir: filepath.Join(btcdHomeDir, "data"), + DbType: "ffldb", + } +) + +// config defines the global configuration options. +type config struct { + DataDir string `short:"b" long:"datadir" description:"Location of the btcd data directory"` + DbType string `long:"dbtype" description:"Database backend to use for the Block Chain"` + TestNet3 bool `long:"testnet" description:"Use the test network"` + RegressionTest bool `long:"regtest" description:"Use the regression test network"` + SimNet bool `long:"simnet" description:"Use the simulation test network"` +} + +// fileExists reports whether the named file or directory exists. +func fileExists(name string) bool { + if _, err := os.Stat(name); err != nil { + if os.IsNotExist(err) { + return false + } + } + return true +} + +// validDbType returns whether or not dbType is a supported database type. +func validDbType(dbType string) bool { + for _, knownType := range knownDbTypes { + if dbType == knownType { + return true + } + } + + return false +} + +// netName returns the name used when referring to a bitcoin network. At the +// time of writing, btcd currently places blocks for testnet version 3 in the +// data and log directory "testnet", which does not match the Name field of the +// chaincfg parameters. This function can be used to override this directory name +// as "testnet" when the passed active network matches wire.TestNet3. +// +// A proper upgrade to move the data and log directories for this network to +// "testnet3" is planned for the future, at which point this function can be +// removed and the network parameter's name used instead. +func netName(chainParams *chaincfg.Params) string { + switch chainParams.Net { + case wire.TestNet3: + return "testnet" + default: + return chainParams.Name + } +} + +// setupGlobalConfig examines the global configuration options for any conditions +// which are invalid as well as performs any additional setup necessary after the +// initial parse. +func setupGlobalConfig() error { + // Multiple networks can't be selected simultaneously. + // Count number of network flags passed; assign active network params + // while we're at it + numNets := 0 + if cfg.TestNet3 { + numNets++ + activeNetParams = &chaincfg.TestNet3Params + } + if cfg.RegressionTest { + numNets++ + activeNetParams = &chaincfg.RegressionNetParams + } + if cfg.SimNet { + numNets++ + activeNetParams = &chaincfg.SimNetParams + } + if numNets > 1 { + return errors.New("The testnet, regtest, and simnet params " + + "can't be used together -- choose one of the three") + } + + // Validate database type. + if !validDbType(cfg.DbType) { + str := "The specified database type [%v] is invalid -- " + + "supported types %v" + return fmt.Errorf(str, cfg.DbType, knownDbTypes) + } + + // Append the network type to the data directory so it is "namespaced" + // per network. In addition to the block database, there are other + // pieces of data that are saved to disk such as address manager state.
+ // All data is specific to a network, so namespacing the data directory + // means each individual piece of serialized data does not have to + // worry about changing names per network and such. + cfg.DataDir = filepath.Join(cfg.DataDir, netName(activeNetParams)) + + return nil +} diff --git a/database2/cmd/dbtool/insecureimport.go b/database2/cmd/dbtool/insecureimport.go new file mode 100644 index 00000000000..cb5a706186c --- /dev/null +++ b/database2/cmd/dbtool/insecureimport.go @@ -0,0 +1,401 @@ +// Copyright (c) 2015 The btcsuite developers +// Use of this source code is governed by an ISC +// license that can be found in the LICENSE file. + +package main + +import ( + "encoding/binary" + "fmt" + "io" + "os" + "sync" + "time" + + database "github.com/btcsuite/btcd/database2" + "github.com/btcsuite/btcd/wire" + "github.com/btcsuite/btcutil" +) + +// importCmd defines the configuration options for the insecureimport command. +type importCmd struct { + InFile string `short:"i" long:"infile" description:"File containing the block(s)"` + Progress int `short:"p" long:"progress" description:"Show a progress message each time this number of seconds have passed -- Use 0 to disable progress announcements"` +} + +var ( + // importCfg defines the configuration options for the command. + importCfg = importCmd{ + InFile: "bootstrap.dat", + Progress: 10, + } + + // zeroHash is simply a hash with all zeros. It is defined here to + // avoid creating it multiple times. + zeroHash = wire.ShaHash{} +) + +// importResults houses the stats and result of an import operation. +type importResults struct { + blocksProcessed int64 + blocksImported int64 + err error +} + +// blockImporter houses information about an ongoing import from a block data +// file to the block database. +type blockImporter struct { + db database.DB + r io.ReadSeeker + processQueue chan []byte + doneChan chan bool + errChan chan error + quit chan struct{} + wg sync.WaitGroup + blocksProcessed int64 + blocksImported int64 + receivedLogBlocks int64 + receivedLogTx int64 + lastHeight int64 + lastBlockTime time.Time + lastLogTime time.Time +} + +// readBlock reads the next block from the input file. +func (bi *blockImporter) readBlock() ([]byte, error) { + // The block file format is: + // <network> <block length> <serialized block> + var net uint32 + err := binary.Read(bi.r, binary.LittleEndian, &net) + if err != nil { + if err != io.EOF { + return nil, err + } + + // No block and no error means there are no more blocks to read. + return nil, nil + } + if net != uint32(activeNetParams.Net) { + return nil, fmt.Errorf("network mismatch -- got %x, want %x", + net, uint32(activeNetParams.Net)) + } + + // Read the block length and ensure it is sane. + var blockLen uint32 + if err := binary.Read(bi.r, binary.LittleEndian, &blockLen); err != nil { + return nil, err + } + if blockLen > wire.MaxBlockPayload { + return nil, fmt.Errorf("block payload of %d bytes is larger "+ + "than the max allowed %d bytes", blockLen, + wire.MaxBlockPayload) + } + + serializedBlock := make([]byte, blockLen) + if _, err := io.ReadFull(bi.r, serializedBlock); err != nil { + return nil, err + } + + return serializedBlock, nil +} + +// processBlock potentially imports the block into the database. It first +// deserializes the raw block while checking for errors. Already known blocks +// are skipped and orphan blocks are considered errors. Returns whether the +// block was imported along with any potential errors. +// +// NOTE: This is not a safe import as it does not verify chain rules.
+func (bi *blockImporter) processBlock(serializedBlock []byte) (bool, error) { + // Deserialize the block which includes checks for malformed blocks. + block, err := btcutil.NewBlockFromBytes(serializedBlock) + if err != nil { + return false, err + } + + // update progress statistics + bi.lastBlockTime = block.MsgBlock().Header.Timestamp + bi.receivedLogTx += int64(len(block.MsgBlock().Transactions)) + + // Skip blocks that already exist. + var exists bool + err = bi.db.View(func(tx database.Tx) error { + exists, err = tx.HasBlock(block.Sha()) + if err != nil { + return err + } + return nil + }) + if err != nil { + return false, err + } + if exists { + return false, nil + } + + // Don't bother trying to process orphans. + prevHash := &block.MsgBlock().Header.PrevBlock + if !prevHash.IsEqual(&zeroHash) { + var exists bool + err := bi.db.View(func(tx database.Tx) error { + exists, err = tx.HasBlock(prevHash) + if err != nil { + return err + } + return nil + }) + if err != nil { + return false, err + } + if !exists { + return false, fmt.Errorf("import file contains block "+ + "%v which does not link to the available "+ + "block chain", prevHash) + } + } + + // Put the blocks into the database with no checking of chain rules. + err = bi.db.Update(func(tx database.Tx) error { + return tx.StoreBlock(block) + }) + if err != nil { + return false, err + } + + return true, nil +} + +// readHandler is the main handler for reading blocks from the import file. +// This allows block processing to take place in parallel with block reads. +// It must be run as a goroutine. +func (bi *blockImporter) readHandler() { +out: + for { + // Read the next block from the file and if anything goes wrong + // notify the status handler with the error and bail. + serializedBlock, err := bi.readBlock() + if err != nil { + bi.errChan <- fmt.Errorf("Error reading from input "+ + "file: %v", err.Error()) + break out + } + + // A nil block with no error means we're done. + if serializedBlock == nil { + break out + } + + // Send the block or quit if we've been signalled to exit by + // the status handler due to an error elsewhere. + select { + case bi.processQueue <- serializedBlock: + case <-bi.quit: + break out + } + } + + // Close the processing channel to signal no more blocks are coming. + close(bi.processQueue) + bi.wg.Done() +} + +// logProgress logs block progress as an information message. In order to +// prevent spam, it limits logging to one message every importCfg.Progress +// seconds with duration and totals included. +func (bi *blockImporter) logProgress() { + bi.receivedLogBlocks++ + + now := time.Now() + duration := now.Sub(bi.lastLogTime) + if duration < time.Second*time.Duration(importCfg.Progress) { + return + } + + // Truncate the duration to 10s of milliseconds. + durationMillis := int64(duration / time.Millisecond) + tDuration := 10 * time.Millisecond * time.Duration(durationMillis/10) + + // Log information about new block height. + blockStr := "blocks" + if bi.receivedLogBlocks == 1 { + blockStr = "block" + } + txStr := "transactions" + if bi.receivedLogTx == 1 { + txStr = "transaction" + } + log.Infof("Processed %d %s in the last %s (%d %s, height %d, %s)", + bi.receivedLogBlocks, blockStr, tDuration, bi.receivedLogTx, + txStr, bi.lastHeight, bi.lastBlockTime) + + bi.receivedLogBlocks = 0 + bi.receivedLogTx = 0 + bi.lastLogTime = now +} + +// processHandler is the main handler for processing blocks. This allows block +// processing to take place in parallel with block reads from the import file. 
+// It must be run as a goroutine. +func (bi *blockImporter) processHandler() { +out: + for { + select { + case serializedBlock, ok := <-bi.processQueue: + // We're done when the channel is closed. + if !ok { + break out + } + + bi.blocksProcessed++ + bi.lastHeight++ + imported, err := bi.processBlock(serializedBlock) + if err != nil { + bi.errChan <- err + break out + } + + if imported { + bi.blocksImported++ + } + + bi.logProgress() + + case <-bi.quit: + break out + } + } + bi.wg.Done() +} + +// statusHandler waits for updates from the import operation and notifies +// the passed resultsChan with the results of the import. It also causes all +// goroutines to exit if an error is reported from any of them. +func (bi *blockImporter) statusHandler(resultsChan chan *importResults) { + select { + // An error from either of the goroutines means we're done so signal + // caller with the error and signal all goroutines to quit. + case err := <-bi.errChan: + resultsChan <- &importResults{ + blocksProcessed: bi.blocksProcessed, + blocksImported: bi.blocksImported, + err: err, + } + close(bi.quit) + + // The import finished normally. + case <-bi.doneChan: + resultsChan <- &importResults{ + blocksProcessed: bi.blocksProcessed, + blocksImported: bi.blocksImported, + err: nil, + } + } +} + +// Import is the core function which handles importing the blocks from the file +// associated with the block importer to the database. It returns a channel +// on which the results will be returned when the operation has completed. +func (bi *blockImporter) Import() chan *importResults { + // Start up the read and process handling goroutines. This setup allows + // blocks to be read from disk in parallel while being processed. + bi.wg.Add(2) + go bi.readHandler() + go bi.processHandler() + + // Wait for the import to finish in a separate goroutine and signal + // the status handler when done. + go func() { + bi.wg.Wait() + bi.doneChan <- true + }() + + // Start the status handler and return the result channel that it will + // send the results on when the import is done. + resultChan := make(chan *importResults) + go bi.statusHandler(resultChan) + return resultChan +} + +// newBlockImporter returns a new importer for the provided file reader seeker +// and database. +func newBlockImporter(db database.DB, r io.ReadSeeker) *blockImporter { + return &blockImporter{ + db: db, + r: r, + processQueue: make(chan []byte, 2), + doneChan: make(chan bool), + errChan: make(chan error), + quit: make(chan struct{}), + lastLogTime: time.Now(), + } +} + +// Execute is the main entry point for the command. It's invoked by the parser. +func (cmd *importCmd) Execute(args []string) error { + // Setup the global config options and ensure they are valid. + if err := setupGlobalConfig(); err != nil { + return err + } + + // Ensure the specified block file exists. + if !fileExists(cmd.InFile) { + str := "The specified block file [%v] does not exist" + return fmt.Errorf(str, cmd.InFile) + } + + // Load the block database. + db, err := loadBlockDB() + if err != nil { + return err + } + defer db.Close() + + // Ensure the database is sync'd and closed on Ctrl+C. + addInterruptHandler(func() { + log.Infof("Gracefully shutting down the database...") + db.Close() + }) + + fi, err := os.Open(importCfg.InFile) + if err != nil { + return err + } + defer fi.Close() + + // Create a block importer for the database and input file and start it. + // The results channel returned from start will contain an error if + // anything went wrong.
+ importer := newBlockImporter(db, fi) + + // Perform the import asynchronously and signal the main goroutine when + // done. This allows blocks to be processed and read in parallel. The + // results channel returned from Import contains the statistics about + // the import including an error if something went wrong. This is done + // in a separate goroutine rather than waiting directly so the main + // goroutine can be signaled for shutdown by either completion, error, + // or from the main interrupt handler. This is necessary since the main + // goroutine must be kept running long enough for the interrupt handler + // goroutine to finish. + go func() { + log.Info("Starting import") + resultsChan := importer.Import() + results := <-resultsChan + if results.err != nil { + dbErr, ok := results.err.(database.Error) + if !ok || ok && dbErr.ErrorCode != database.ErrDbNotOpen { + shutdownChannel <- results.err + return + } + } + + log.Infof("Processed a total of %d blocks (%d imported, %d "+ + "already known)", results.blocksProcessed, + results.blocksImported, + results.blocksProcessed-results.blocksImported) + shutdownChannel <- nil + }() + + // Wait for shutdown signal from either a normal completion or from the + // interrupt handler. + err = <-shutdownChannel + return err +} diff --git a/database2/cmd/dbtool/loadheaders.go b/database2/cmd/dbtool/loadheaders.go new file mode 100644 index 00000000000..4ebc16f444a --- /dev/null +++ b/database2/cmd/dbtool/loadheaders.go @@ -0,0 +1,101 @@ +// Copyright (c) 2015 The btcsuite developers +// Use of this source code is governed by an ISC +// license that can be found in the LICENSE file. + +package main + +import ( + "time" + + database "github.com/btcsuite/btcd/database2" + "github.com/btcsuite/btcd/wire" +) + +// headersCmd defines the configuration options for the loadheaders command. +type headersCmd struct { + Bulk bool `long:"bulk" description:"Use bulk loading of headers instead of one at a time"` +} + +var ( + // headersCfg defines the configuration options for the command. + headersCfg = headersCmd{ + Bulk: false, + } +) + +// Execute is the main entry point for the command. It's invoked by the parser. +func (cmd *headersCmd) Execute(args []string) error { + // Setup the global config options and ensure they are valid. + if err := setupGlobalConfig(); err != nil { + return err + } + + // Load the block database. + db, err := loadBlockDB() + if err != nil { + return err + } + defer db.Close() + + // NOTE: This code will only work for ffldb. Ideally the package using + // the database would keep a metadata index of its own. + blockIdxName := []byte("ffldb-blockidx") + if !headersCfg.Bulk { + err = db.View(func(tx database.Tx) error { + totalHdrs := 0 + blockIdxBucket := tx.Metadata().Bucket(blockIdxName) + blockIdxBucket.ForEach(func(k, v []byte) error { + totalHdrs++ + return nil + }) + log.Infof("Loading headers for %d blocks...", totalHdrs) + numLoaded := 0 + startTime := time.Now() + blockIdxBucket.ForEach(func(k, v []byte) error { + var hash wire.ShaHash + copy(hash[:], k) + _, err := tx.FetchBlockHeader(&hash) + if err != nil { + return err + } + numLoaded++ + return nil + }) + log.Infof("Loaded %d headers in %v", numLoaded, + time.Now().Sub(startTime)) + return nil + }) + if err != nil { + return err + } + + return nil + } + + // Bulk load headers. 
+ err = db.View(func(tx database.Tx) error { + blockIdxBucket := tx.Metadata().Bucket(blockIdxName) + hashes := make([]wire.ShaHash, 0, 500000) + blockIdxBucket.ForEach(func(k, v []byte) error { + var hash wire.ShaHash + copy(hash[:], k) + hashes = append(hashes, hash) + return nil + }) + + log.Infof("Loading headers for %d blocks...", len(hashes)) + startTime := time.Now() + hdrs, err := tx.FetchBlockHeaders(hashes) + if err != nil { + return err + } + log.Infof("Loaded %d headers in %v", len(hdrs), + time.Now().Sub(startTime)) + return nil + }) + if err != nil { + return err + } + + return nil +} diff --git a/database2/cmd/dbtool/main.go b/database2/cmd/dbtool/main.go new file mode 100644 index 00000000000..db276bde2e9 --- /dev/null +++ b/database2/cmd/dbtool/main.go @@ -0,0 +1,116 @@ +// Copyright (c) 2015 The btcsuite developers +// Use of this source code is governed by an ISC +// license that can be found in the LICENSE file. + +package main + +import ( + "os" + "path/filepath" + "runtime" + "strings" + + database "github.com/btcsuite/btcd/database2" + "github.com/btcsuite/btclog" + flags "github.com/btcsuite/go-flags" +) + +const ( + // blockDbNamePrefix is the prefix for the btcd block database. + blockDbNamePrefix = "blocks" +) + +var ( + log btclog.Logger + shutdownChannel = make(chan error) +) + +// loadBlockDB opens the block database and returns a handle to it. +func loadBlockDB() (database.DB, error) { + // The database name is based on the database type. + dbName := blockDbNamePrefix + "_" + cfg.DbType + dbPath := filepath.Join(cfg.DataDir, dbName) + + log.Infof("Loading block database from '%s'", dbPath) + db, err := database.Open(cfg.DbType, dbPath, activeNetParams.Net) + if err != nil { + // Return the error if it's not because the database doesn't + // exist. + if dbErr, ok := err.(database.Error); !ok || dbErr.ErrorCode != + database.ErrDbDoesNotExist { + + return nil, err + } + + // Create the db if it does not exist. + err = os.MkdirAll(cfg.DataDir, 0700) + if err != nil { + return nil, err + } + db, err = database.Create(cfg.DbType, dbPath, activeNetParams.Net) + if err != nil { + return nil, err + } + } + + log.Info("Block database loaded") + return db, nil +} + +// realMain is the real main function for the utility. It is necessary to work +// around the fact that deferred functions do not run when os.Exit() is called. +func realMain() error { + // Setup logging. + backendLogger := btclog.NewDefaultBackendLogger() + defer backendLogger.Flush() + log = btclog.NewSubsystemLogger(backendLogger, "") + dbLog := btclog.NewSubsystemLogger(backendLogger, "BCDB: ") + dbLog.SetLevel(btclog.DebugLvl) + database.UseLogger(dbLog) + + // Setup the parser options and commands. + appName := filepath.Base(os.Args[0]) + appName = strings.TrimSuffix(appName, filepath.Ext(appName)) + parserFlags := flags.Options(flags.HelpFlag | flags.PassDoubleDash) + parser := flags.NewNamedParser(appName, parserFlags) + parser.AddGroup("Global Options", "", cfg) + parser.AddCommand("insecureimport", + "Insecurely import bulk block data from bootstrap.dat", + "Insecurely import bulk block data from bootstrap.dat. "+ + "WARNING: This is NOT secure because it does NOT "+ + "verify chain rules. 
It is only provided for testing "+ "purposes.", &importCfg) + parser.AddCommand("loadheaders", + "Time how long to load headers for all blocks in the database", + "", &headersCfg) + parser.AddCommand("fetchblock", + "Fetch the specified block from the database", "", + &fetchBlockCfg) + parser.AddCommand("fetchblockregion", + "Fetch the specified block region from the database", "", + &blockRegionCfg) + + // Parse command line and invoke the Execute function for the specified + // command. + if _, err := parser.Parse(); err != nil { + if e, ok := err.(*flags.Error); ok && e.Type == flags.ErrHelp { + parser.WriteHelp(os.Stderr) + } else { + log.Error(err) + } + + return err + } + + return nil +} + +func main() { + // Use all processor cores. + runtime.GOMAXPROCS(runtime.NumCPU()) + + // Work around defer not working after os.Exit() + if err := realMain(); err != nil { + os.Exit(1) + } +} diff --git a/database2/cmd/dbtool/signal.go b/database2/cmd/dbtool/signal.go new file mode 100644 index 00000000000..123fe6bc180 --- /dev/null +++ b/database2/cmd/dbtool/signal.go @@ -0,0 +1,82 @@ +// Copyright (c) 2013-2014 The btcsuite developers +// Use of this source code is governed by an ISC +// license that can be found in the LICENSE file. + +package main + +import ( + "os" + "os/signal" +) + +// interruptChannel is used to receive SIGINT (Ctrl+C) signals. +var interruptChannel chan os.Signal + +// addHandlerChannel is used to add an interrupt handler to the list of handlers +// to be invoked on SIGINT (Ctrl+C) signals. +var addHandlerChannel = make(chan func()) + +// mainInterruptHandler listens for SIGINT (Ctrl+C) signals on the +// interruptChannel and invokes the registered interruptCallbacks accordingly. +// It also listens for callback registration. It must be run as a goroutine. +func mainInterruptHandler() { + // interruptCallbacks is a list of callbacks to invoke when a + // SIGINT (Ctrl+C) is received. + var interruptCallbacks []func() + + // isShutdown is a flag which is used to indicate whether or not + // the shutdown signal has already been received and hence any future + // attempts to add a new interrupt handler should invoke them + // immediately. + var isShutdown bool + + for { + select { + case <-interruptChannel: + // Ignore more than one shutdown signal. + if isShutdown { + log.Infof("Received SIGINT (Ctrl+C). " + + "Already shutting down...") + continue + } + + isShutdown = true + log.Infof("Received SIGINT (Ctrl+C). Shutting down...") + + // Run handlers in LIFO order. + for i := range interruptCallbacks { + idx := len(interruptCallbacks) - 1 - i + callback := interruptCallbacks[idx] + callback() + } + + // Signal the main goroutine to shutdown. + go func() { + shutdownChannel <- nil + }() + + case handler := <-addHandlerChannel: + // The shutdown signal has already been received, so + // just invoke any new handlers immediately. + if isShutdown { + handler() + } + + interruptCallbacks = append(interruptCallbacks, handler) + } + } +} + +// addInterruptHandler adds a handler to call when a SIGINT (Ctrl+C) is +// received. +func addInterruptHandler(handler func()) { + // Create the channel and start the main interrupt handler which invokes + // all other callbacks and exits if not already done.
+ if interruptChannel == nil { + interruptChannel = make(chan os.Signal, 1) + signal.Notify(interruptChannel, os.Interrupt) + go mainInterruptHandler() + } + + addHandlerChannel <- handler +} diff --git a/database2/doc.go b/database2/doc.go new file mode 100644 index 00000000000..e8e0a3ce950 --- /dev/null +++ b/database2/doc.go @@ -0,0 +1,94 @@ +// Copyright (c) 2015 The btcsuite developers +// Use of this source code is governed by an ISC +// license that can be found in the LICENSE file. + +/* +Package database2 provides a block and metadata storage database. + +Overview + +As of July 2015, there are over 365,000 blocks in the Bitcoin block chain and over 76 million transactions (which turns out to be over 35GB of data). +This package provides a database layer to store and retrieve this data in a +simple and efficient manner. + +The default backend, ffldb, has a strong focus on speed, efficiency, and +robustness. It makes use of leveldb for the metadata, flat files for block +storage, and strict checksums in key areas to ensure data integrity. + +A quick overview of the features database provides is as follows: + + - Key/value metadata store + - Bitcoin block storage + - Efficient retrieval of block headers and regions (transactions, scripts, etc) + - Read-only and read-write transactions with both manual and managed modes + - Nested buckets + - Supports registration of backend databases + - Comprehensive test coverage + +Database + +The main entry point is the DB interface. It exposes functionality for +transactional access and storage of metadata and block data. It is +obtained via the Create and Open functions which take a database type string +that identifies the specific database driver (backend) to use as well as +arguments specific to the specified driver. + +Obtaining Transactions + +The DB interface provides facilities for obtaining +transactions (the Tx interface) that are the basis of all database reads and +writes. Unlike some database interfaces that support reading and writing +without transactions, this interface requires transactions even when only +reading or writing a single key. + +The Begin function provides an unmanaged transaction while the View and Update +functions provide a managed transaction. These are described in more detail +below. + +Transactions + +The Tx interface provides facilities for rolling back or committing changes that +took place while the transaction was active. It also provides the root metadata +bucket under which all keys, values, and nested buckets are stored. A +transaction can either be read-only or read-write and managed or unmanaged. + +Managed versus Unmanaged Transactions + +A managed transaction is one where the caller provides a function to execute +within the context of the transaction and the commit or rollback is handled +automatically depending on whether or not the provided function returns an +error. Attempting to manually call Rollback or Commit on the managed +transaction will result in a panic. + +An unmanaged transaction, on the other hand, requires the caller to manually +call Commit or Rollback when they are finished with it. Leaving transactions +open for long periods of time can have several adverse effects, so it is +recommended that managed transactions are used instead. + +Buckets + +The Bucket interface provides the ability to manipulate key/value pairs and +nested buckets as well as iterate through them.
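+For example, a managed read-write transaction might create a nested bucket
+under the root metadata bucket and store a key/value pair in it as follows
+(a sketch only; db is assumed to be an open DB instance and the bucket and
+key names are hypothetical):
+
+	err := db.Update(func(tx database.Tx) error {
+		bucket, err := tx.Metadata().CreateBucketIfNotExists([]byte("mybucket"))
+		if err != nil {
+			return err
+		}
+		return bucket.Put([]byte("mykey"), []byte("myvalue"))
+	})
+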
+ +The Get, Put, and Delete functions work with key/value pairs, while the Bucket, +CreateBucket, CreateBucketIfNotExists, and DeleteBucket functions work with +buckets. The ForEach function allows the caller to provide a function to be +called with each key/value pair and nested bucket in the current bucket. + +Metadata Bucket + +As discussed above, all of the functions which are used to manipulate key/value +pairs and nested buckets exist on the Bucket interface. The root metadata +bucket is the upper-most bucket in which data is stored and is created at the +same time as the database. Use the Metadata function on the Tx interface +to retrieve it. + +Nested Buckets + +The CreateBucket and CreateBucketIfNotExists functions on the Bucket interface +provide the ability to create an arbitrary number of nested buckets. It is +a good idea to avoid a lot of buckets with little data in them as it could lead +to poor page utilization depending on the specific driver in use. +*/ +package database2 diff --git a/database2/driver.go b/database2/driver.go new file mode 100644 index 00000000000..9bdd1898da3 --- /dev/null +++ b/database2/driver.go @@ -0,0 +1,92 @@ +// Copyright (c) 2015 The btcsuite developers +// Use of this source code is governed by an ISC +// license that can be found in the LICENSE file. + +// Parts of this interface were inspired heavily by the excellent boltdb project +// at https://github.com/boltdb/bolt by Ben B. Johnson. + +package database2 + +import ( + "fmt" + + "github.com/btcsuite/btclog" +) + +// Driver defines a structure for backend drivers to use when they register +// themselves as a backend which implements the DB interface. +type Driver struct { + // DbType is the identifier used to uniquely identify a specific + // database driver. There can be only one driver with the same name. + DbType string + + // Create is the function that will be invoked with all user-specified + // arguments to create the database. This function must return + // ErrDbExists if the database already exists. + Create func(args ...interface{}) (DB, error) + + // Open is the function that will be invoked with all user-specified + // arguments to open the database. This function must return + // ErrDbDoesNotExist if the database has not already been created. + Open func(args ...interface{}) (DB, error) + + // UseLogger uses a specified Logger to output package logging info. + UseLogger func(logger btclog.Logger) +} + +// drivers holds all of the registered database backends. +var drivers = make(map[string]*Driver) + +// RegisterDriver adds a backend database driver to available interfaces. +// ErrDbTypeRegistered will be returned if the database type for the driver has +// already been registered. +func RegisterDriver(driver Driver) error { + if _, exists := drivers[driver.DbType]; exists { + str := fmt.Sprintf("driver %q is already registered", + driver.DbType) + return makeError(ErrDbTypeRegistered, str, nil) + } + + drivers[driver.DbType] = &driver + return nil +} + +// SupportedDrivers returns a slice of strings that represent the database +// drivers that have been registered and are therefore supported. +func SupportedDrivers() []string { + supportedDBs := make([]string, 0, len(drivers)) + for _, drv := range drivers { + supportedDBs = append(supportedDBs, drv.DbType) + } + return supportedDBs +} + +// Create initializes and opens a database for the specified type. The arguments +// are specific to the database type driver.
See the documentation for the +// database driver for further details. +// +// ErrDbUnknownType will be returned if the database type is not registered. +func Create(dbType string, args ...interface{}) (DB, error) { + drv, exists := drivers[dbType] + if !exists { + str := fmt.Sprintf("driver %q is not registered", dbType) + return nil, makeError(ErrDbUnknownType, str, nil) + } + + return drv.Create(args...) +} + +// Open opens an existing database for the specified type. The arguments are +// specific to the database type driver. See the documentation for the database +// driver for further details. +// +// ErrDbUnknownType will be returned if the database type is not registered. +func Open(dbType string, args ...interface{}) (DB, error) { + drv, exists := drivers[dbType] + if !exists { + str := fmt.Sprintf("driver %q is not registered", dbType) + return nil, makeError(ErrDbUnknownType, str, nil) + } + + return drv.Open(args...) +} diff --git a/database2/driver_test.go b/database2/driver_test.go new file mode 100644 index 00000000000..22cea501316 --- /dev/null +++ b/database2/driver_test.go @@ -0,0 +1,136 @@ +// Copyright (c) 2015 The btcsuite developers +// Use of this source code is governed by an ISC +// license that can be found in the LICENSE file. + +package database2_test + +import ( + "fmt" + "testing" + + database "github.com/btcsuite/btcd/database2" + _ "github.com/btcsuite/btcd/database2/ffldb" +) + +var ( + // ignoreDbTypes are types which should be ignored when running tests + // that iterate all supported DB types. This allows some tests to add + // bogus drivers for testing purposes while still allowing other tests + // to easily iterate all supported drivers. + ignoreDbTypes = map[string]bool{"createopenfail": true} +) + +// checkDbError ensures the passed error is a database.Error with an error code +// that matches the passed error code. +func checkDbError(t *testing.T, testName string, gotErr error, wantErrCode database.ErrorCode) bool { + dbErr, ok := gotErr.(database.Error) + if !ok { + t.Errorf("%s: unexpected error type - got %T, want %T", + testName, gotErr, database.Error{}) + return false + } + if dbErr.ErrorCode != wantErrCode { + t.Errorf("%s: unexpected error code - got %s (%s), want %s", + testName, dbErr.ErrorCode, dbErr.Description, + wantErrCode) + return false + } + + return true +} + +// TestAddDuplicateDriver ensures that adding a duplicate driver does not +// overwrite an existing one. +func TestAddDuplicateDriver(t *testing.T) { + supportedDrivers := database.SupportedDrivers() + if len(supportedDrivers) == 0 { + t.Errorf("no backends to test") + return + } + dbType := supportedDrivers[0] + + // bogusCreateDB is a function which acts as a bogus create and open + // driver function and intentionally returns a failure that can be + // detected if the interface allows a duplicate driver to overwrite an + // existing one. + bogusCreateDB := func(args ...interface{}) (database.DB, error) { + return nil, fmt.Errorf("duplicate driver allowed for database "+ "type [%v]", dbType) + } + + // Create a driver that tries to replace an existing one. Set its + // create and open functions to a function that causes a test failure if + // they are invoked.
+ driver := database.Driver{ + DbType: dbType, + Create: bogusCreateDB, + Open: bogusCreateDB, + } + testName := "duplicate driver registration" + err := database.RegisterDriver(driver) + if !checkDbError(t, testName, err, database.ErrDbTypeRegistered) { + return + } +} + +// TestCreateOpenFail ensures that errors which occur while creating or opening +// a database are handled properly. +func TestCreateOpenFail(t *testing.T) { + // bogusCreateDB is a function which acts as a bogus create and open + // driver function that intentionally returns a failure which can be + // detected. + dbType := "createopenfail" + openError := fmt.Errorf("failed to create or open database for "+ "database type [%v]", dbType) + bogusCreateDB := func(args ...interface{}) (database.DB, error) { + return nil, openError + } + + // Create and add driver that intentionally fails when created or opened + // to ensure errors on database open and create are handled properly. + driver := database.Driver{ + DbType: dbType, + Create: bogusCreateDB, + Open: bogusCreateDB, + } + database.RegisterDriver(driver) + + // Ensure creating a database with the new type fails with the expected + // error. + _, err := database.Create(dbType) + if err != openError { + t.Errorf("expected error not received - got: %v, want %v", err, + openError) + return + } + + // Ensure opening a database with the new type fails with the expected + // error. + _, err = database.Open(dbType) + if err != openError { + t.Errorf("expected error not received - got: %v, want %v", err, + openError) + return + } +} + +// TestCreateOpenUnsupported ensures that attempting to create or open an +// unsupported database type is handled properly. +func TestCreateOpenUnsupported(t *testing.T) { + // Ensure creating a database with an unsupported type fails with the + // expected error. + testName := "create with unsupported database type" + dbType := "unsupported" + _, err := database.Create(dbType) + if !checkDbError(t, testName, err, database.ErrDbUnknownType) { + return + } + + // Ensure opening a database with an unsupported type fails with the + // expected error. + testName = "open with unsupported database type" + _, err = database.Open(dbType) + if !checkDbError(t, testName, err, database.ErrDbUnknownType) { + return + } +} diff --git a/database2/error.go b/database2/error.go new file mode 100644 index 00000000000..6ff3f7e7613 --- /dev/null +++ b/database2/error.go @@ -0,0 +1,197 @@ +// Copyright (c) 2015 The btcsuite developers +// Use of this source code is governed by an ISC +// license that can be found in the LICENSE file. + +package database2 + +import "fmt" + +// ErrorCode identifies a kind of error. +type ErrorCode int + +// These constants are used to identify a specific database Error. +const ( + // ************************************** + // Errors related to driver registration. + // ************************************** + + // ErrDbTypeRegistered indicates two different database drivers + // attempt to register with the same database type. + ErrDbTypeRegistered ErrorCode = iota + + // ************************************* + // Errors related to database functions. + // ************************************* + + // ErrDbUnknownType indicates there is no driver registered for + // the specified database type. + ErrDbUnknownType + + // ErrDbDoesNotExist indicates open is called for a database that + // does not exist. + ErrDbDoesNotExist + + // ErrDbExists indicates create is called for a database that + // already exists.
+ ErrDbExists + + // ErrDbNotOpen indicates a database instance is accessed before + // it is opened or after it is closed. + ErrDbNotOpen + + // ErrDbAlreadyOpen indicates open was called on a database that + // is already open. + ErrDbAlreadyOpen + + // ErrInvalid indicates the specified database is not valid. + ErrInvalid + + // ErrCorruption indicates a checksum failure occurred which invariably + // means the database is corrupt. + ErrCorruption + + // **************************************** + // Errors related to database transactions. + // **************************************** + + // ErrTxClosed indicates an attempt was made to commit or rollback a + // transaction that has already had one of those operations performed. + ErrTxClosed + + // ErrTxNotWritable indicates an operation that requires write access to + // the database was attempted against a read-only transaction. + ErrTxNotWritable + + // ************************************** + // Errors related to metadata operations. + // ************************************** + + // ErrBucketNotFound indicates an attempt to access a bucket that has + // not been created yet. + ErrBucketNotFound + + // ErrBucketExists indicates an attempt to create a bucket that already + // exists. + ErrBucketExists + + // ErrBucketNameRequired indicates an attempt to create a bucket with a + // blank name. + ErrBucketNameRequired + + // ErrKeyRequired indicates an attempt to insert a zero-length key. + ErrKeyRequired + + // ErrKeyTooLarge indicates an attempt to insert a key that is larger + // than the max allowed key size. The max key size depends on the + // specific backend driver being used. As a general rule, key sizes + // should be relatively small, so this should rarely be an issue. + ErrKeyTooLarge + + // ErrValueTooLarge indicates an attempt to insert a value that is larger + // than the max allowed value size. The max value size depends on the + // specific backend driver being used. + ErrValueTooLarge + + // ErrIncompatibleValue indicates the value in question is invalid for + // the specific requested operation. For example, trying to create or + // delete a bucket with an existing non-bucket key, attempting to create + // or delete a non-bucket key with an existing bucket key, or trying to + // delete a value via a cursor when it points to a nested bucket. + ErrIncompatibleValue + + // *************************************** + // Errors related to block I/O operations. + // *************************************** + + // ErrBlockNotFound indicates a block with the provided hash does not + // exist in the database. + ErrBlockNotFound + + // ErrBlockExists indicates a block with the provided hash already + // exists in the database. + ErrBlockExists + + // ErrBlockRegionInvalid indicates a region that exceeds the bounds of + // the specified block was requested. When the hash provided by the + // region does not correspond to an existing block, the error will be + // ErrBlockNotFound instead. + ErrBlockRegionInvalid + + // *********************************** + // Support for driver-specific errors. + // *********************************** + + // ErrDriverSpecific indicates the Err field is a driver-specific error. + // This provides a mechanism for drivers to plug-in their own custom + // errors for any situations which aren't already covered by the error + // codes provided by this package. + ErrDriverSpecific + + // numErrorCodes is the maximum error code number used in tests.
+ numErrorCodes +) + +// Map of ErrorCode values back to their constant names for pretty printing. +var errorCodeStrings = map[ErrorCode]string{ + ErrDbTypeRegistered: "ErrDbTypeRegistered", + ErrDbUnknownType: "ErrDbUnknownType", + ErrDbDoesNotExist: "ErrDbDoesNotExist", + ErrDbExists: "ErrDbExists", + ErrDbNotOpen: "ErrDbNotOpen", + ErrDbAlreadyOpen: "ErrDbAlreadyOpen", + ErrInvalid: "ErrInvalid", + ErrCorruption: "ErrCorruption", + ErrTxClosed: "ErrTxClosed", + ErrTxNotWritable: "ErrTxNotWritable", + ErrBucketNotFound: "ErrBucketNotFound", + ErrBucketExists: "ErrBucketExists", + ErrBucketNameRequired: "ErrBucketNameRequired", + ErrKeyRequired: "ErrKeyRequired", + ErrKeyTooLarge: "ErrKeyTooLarge", + ErrValueTooLarge: "ErrValueTooLarge", + ErrIncompatibleValue: "ErrIncompatibleValue", + ErrBlockNotFound: "ErrBlockNotFound", + ErrBlockExists: "ErrBlockExists", + ErrBlockRegionInvalid: "ErrBlockRegionInvalid", + ErrDriverSpecific: "ErrDriverSpecific", +} + +// String returns the ErrorCode as a human-readable name. +func (e ErrorCode) String() string { + if s := errorCodeStrings[e]; s != "" { + return s + } + return fmt.Sprintf("Unknown ErrorCode (%d)", int(e)) +} + +// Error provides a single type for errors that can happen during database +// operation. It is used to indicate several types of failures including errors +// with caller requests such as specifying invalid block regions or attempting +// to access data against closed database transactions, driver errors, errors +// retrieving data, and errors communicating with database servers. +// +// The caller can use type assertions to determine if an error is an Error and +// access the ErrorCode field to ascertain the specific reason for the failure. +// +// The ErrDriverSpecific error code will also have the Err field set with the +// underlying error. Depending on the backend driver, the Err field might be +// set to the underlying error for other error codes as well. +type Error struct { + ErrorCode ErrorCode // Describes the kind of error + Description string // Human readable description of the issue + Err error // Underlying error +} + +// Error satisfies the error interface and prints human-readable errors. +func (e Error) Error() string { + if e.Err != nil { + return e.Description + ": " + e.Err.Error() + } + return e.Description +} + +// makeError creates an Error given a set of arguments. The error code must +// be one of the error codes provided by this package. +func makeError(c ErrorCode, desc string, err error) Error { + return Error{ErrorCode: c, Description: desc, Err: err} +} diff --git a/database2/error_test.go b/database2/error_test.go new file mode 100644 index 00000000000..1ca625023b1 --- /dev/null +++ b/database2/error_test.go @@ -0,0 +1,97 @@ +// Copyright (c) 2015 The btcsuite developers +// Use of this source code is governed by an ISC +// license that can be found in the LICENSE file. + +package database2_test + +import ( + "errors" + "testing" + + database "github.com/btcsuite/btcd/database2" +) + +// TestErrorCodeStringer tests the stringized output for the ErrorCode type. 
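The Error type and ErrorCode values above are intended to let callers branch on the specific failure reason rather than matching error strings. A minimal sketch of that pattern, assuming an open database.DB named db and the same imports the examples elsewhere in this package use (the bucket name and helper function are hypothetical):

```Go
// createFeatureBucket is a hypothetical helper showing how a caller can type
// assert a returned error to database.Error and branch on its ErrorCode field.
func createFeatureBucket(db database.DB) error {
	return db.Update(func(tx database.Tx) error {
		_, err := tx.Metadata().CreateBucket([]byte("myfeature"))
		if dbErr, ok := err.(database.Error); ok {
			// An already-existing bucket is fine for this caller.
			if dbErr.ErrorCode == database.ErrBucketExists {
				return nil
			}
			// Per the docs above, ErrDriverSpecific carries the
			// underlying driver error in the Err field.
			if dbErr.ErrorCode == database.ErrDriverSpecific {
				return dbErr.Err
			}
		}
		return err
	})
}
```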
+func TestErrorCodeStringer(t *testing.T) { + tests := []struct { + in database.ErrorCode + want string + }{ + {database.ErrDbTypeRegistered, "ErrDbTypeRegistered"}, + {database.ErrDbUnknownType, "ErrDbUnknownType"}, + {database.ErrDbDoesNotExist, "ErrDbDoesNotExist"}, + {database.ErrDbExists, "ErrDbExists"}, + {database.ErrDbNotOpen, "ErrDbNotOpen"}, + {database.ErrDbAlreadyOpen, "ErrDbAlreadyOpen"}, + {database.ErrInvalid, "ErrInvalid"}, + {database.ErrCorruption, "ErrCorruption"}, + {database.ErrTxClosed, "ErrTxClosed"}, + {database.ErrTxNotWritable, "ErrTxNotWritable"}, + {database.ErrBucketNotFound, "ErrBucketNotFound"}, + {database.ErrBucketExists, "ErrBucketExists"}, + {database.ErrBucketNameRequired, "ErrBucketNameRequired"}, + {database.ErrKeyRequired, "ErrKeyRequired"}, + {database.ErrKeyTooLarge, "ErrKeyTooLarge"}, + {database.ErrValueTooLarge, "ErrValueTooLarge"}, + {database.ErrIncompatibleValue, "ErrIncompatibleValue"}, + {database.ErrBlockNotFound, "ErrBlockNotFound"}, + {database.ErrBlockExists, "ErrBlockExists"}, + {database.ErrBlockRegionInvalid, "ErrBlockRegionInvalid"}, + {database.ErrDriverSpecific, "ErrDriverSpecific"}, + + {0xffff, "Unknown ErrorCode (65535)"}, + } + + // Detect additional error codes that don't have the stringer added. + if len(tests)-1 != int(database.TstNumErrorCodes) { + t.Errorf("It appears an error code was added without adding " + + "an associated stringer test") + } + + t.Logf("Running %d tests", len(tests)) + for i, test := range tests { + result := test.in.String() + if result != test.want { + t.Errorf("String #%d\ngot: %s\nwant: %s", i, result, + test.want) + continue + } + } +} + +// TestError tests the error output for the Error type. +func TestError(t *testing.T) { + t.Parallel() + + tests := []struct { + in database.Error + want string + }{ + { + database.Error{Description: "some error"}, + "some error", + }, + { + database.Error{Description: "human-readable error"}, + "human-readable error", + }, + { + database.Error{ + ErrorCode: database.ErrDriverSpecific, + Description: "some error", + Err: errors.New("driver-specific error"), + }, + "some error: driver-specific error", + }, + } + + t.Logf("Running %d tests", len(tests)) + for i, test := range tests { + result := test.in.Error() + if result != test.want { + t.Errorf("Error #%d\n got: %s want: %s", i, result, + test.want) + continue + } + } +} diff --git a/database2/example_test.go b/database2/example_test.go new file mode 100644 index 00000000000..8dd68313152 --- /dev/null +++ b/database2/example_test.go @@ -0,0 +1,177 @@ +// Copyright (c) 2015 The btcsuite developers +// Use of this source code is governed by an ISC +// license that can be found in the LICENSE file. + +package database2_test + +import ( + "bytes" + "fmt" + "os" + "path/filepath" + + "github.com/btcsuite/btcd/chaincfg" + database "github.com/btcsuite/btcd/database2" + _ "github.com/btcsuite/btcd/database2/ffldb" + "github.com/btcsuite/btcd/wire" + "github.com/btcsuite/btcutil" +) + +// This example demonstrates creating a new database. +func ExampleCreate() { + // This example assumes the ffldb driver is imported. + // + // import ( + // "github.com/btcsuite/btcd/database" + // _ "github.com/btcsuite/btcd/database/ffldb" + // ) + + // Create a database and schedule it to be closed and removed on exit. + // Typically you wouldn't want to remove the database right away like + // this, nor put it in the temp directory, but it's done here to ensure + // the example cleans up after itself. 
+ dbPath := filepath.Join(os.TempDir(), "examplecreate") + db, err := database.Create("ffldb", dbPath, wire.MainNet) + if err != nil { + fmt.Println(err) + return + } + defer os.RemoveAll(dbPath) + defer db.Close() + + // Output: +} + +// This example demonstrates creating a new database and using a managed +// read-write transaction to store and retrieve metadata. +func Example_basicUsage() { + // This example assumes the ffldb driver is imported. + // + // import ( + // "github.com/btcsuite/btcd/database" + // _ "github.com/btcsuite/btcd/database/ffldb" + // ) + + // Create a database and schedule it to be closed and removed on exit. + // Typically you wouldn't want to remove the database right away like + // this, nor put it in the temp directory, but it's done here to ensure + // the example cleans up after itself. + dbPath := filepath.Join(os.TempDir(), "exampleusage") + db, err := database.Create("ffldb", dbPath, wire.MainNet) + if err != nil { + fmt.Println(err) + return + } + defer os.RemoveAll(dbPath) + defer db.Close() + + // Use the Update function of the database to perform a managed + // read-write transaction. The transaction will automatically be rolled + // back if the supplied inner function returns a non-nil error. + err = db.Update(func(tx database.Tx) error { + // Store a key/value pair directly in the metadata bucket. + // Typically a nested bucket would be used for a given feature, + // but this example is using the metadata bucket directly for + // simplicity. + key := []byte("mykey") + value := []byte("myvalue") + if err := tx.Metadata().Put(key, value); err != nil { + return err + } + + // Read the key back and ensure it matches. + if !bytes.Equal(tx.Metadata().Get(key), value) { + return fmt.Errorf("unexpected value for key '%s'", key) + } + + // Create a new nested bucket under the metadata bucket. + nestedBucketKey := []byte("mybucket") + nestedBucket, err := tx.Metadata().CreateBucket(nestedBucketKey) + if err != nil { + return err + } + + // The key from above that was set in the metadata bucket does + // not exist in this new nested bucket. + if nestedBucket.Get(key) != nil { + return fmt.Errorf("key '%s' is not expected nil", key) + } + + return nil + }) + if err != nil { + fmt.Println(err) + return + } + + // Output: +} + +// This example demonstrates creating a new database, using a managed read-write +// transaction to store a block, and using a managed read-only transaction to +// fetch the block. +func Example_blockStorageAndRetrieval() { + // This example assumes the ffldb driver is imported. + // + // import ( + // "github.com/btcsuite/btcd/database" + // _ "github.com/btcsuite/btcd/database/ffldb" + // ) + + // Create a database and schedule it to be closed and removed on exit. + // Typically you wouldn't want to remove the database right away like + // this, nor put it in the temp directory, but it's done here to ensure + // the example cleans up after itself. + dbPath := filepath.Join(os.TempDir(), "exampleblkstorage") + db, err := database.Create("ffldb", dbPath, wire.MainNet) + if err != nil { + fmt.Println(err) + return + } + defer os.RemoveAll(dbPath) + defer db.Close() + + // Use the Update function of the database to perform a managed + // read-write transaction and store a genesis block in the database as + // and example. 
+	err = db.Update(func(tx database.Tx) error {
+		genesisBlock := chaincfg.MainNetParams.GenesisBlock
+		return tx.StoreBlock(btcutil.NewBlock(genesisBlock))
+	})
+	if err != nil {
+		fmt.Println(err)
+		return
+	}
+
+	// Use the View function of the database to perform a managed read-only
+	// transaction and fetch the block stored above.
+	var loadedBlockBytes []byte
+	err = db.View(func(tx database.Tx) error {
+		genesisHash := chaincfg.MainNetParams.GenesisHash
+		blockBytes, err := tx.FetchBlock(genesisHash)
+		if err != nil {
+			return err
+		}
+
+		// As documented, all data fetched from the database is only
+		// valid during a database transaction in order to support
+		// zero-copy backends.  Thus, make a copy of the data so it
+		// can be used outside of the transaction.
+		loadedBlockBytes = make([]byte, len(blockBytes))
+		copy(loadedBlockBytes, blockBytes)
+		return nil
+	})
+	if err != nil {
+		fmt.Println(err)
+		return
+	}
+
+	// Typically at this point, the block could be deserialized via the
+	// wire.MsgBlock.Deserialize function or used in its serialized form
+	// depending on need.  However, for this example, just display the
+	// number of serialized bytes to show it was loaded as expected.
+	fmt.Printf("Serialized block size: %d bytes\n", len(loadedBlockBytes))
+
+	// Output:
+	// Serialized block size: 285 bytes
+}
diff --git a/database2/export_test.go b/database2/export_test.go
new file mode 100644
index 00000000000..4b055af0b38
--- /dev/null
+++ b/database2/export_test.go
@@ -0,0 +1,17 @@
+// Copyright (c) 2015 The btcsuite developers
+// Use of this source code is governed by an ISC
+// license that can be found in the LICENSE file.
+
+/*
+This test file is part of the database package rather than the database_test
+package so it can bridge access to the internals to properly test cases which
+are either not possible or can't reliably be tested via the public interface.
+The functions, constants, and variables are only exported while the tests are
+being run.
+*/
+
+package database2
+
+// TstNumErrorCodes makes the internal numErrorCodes parameter available to the
+// test package.
+const TstNumErrorCodes = numErrorCodes
diff --git a/database2/ffboltdb/README.md b/database2/ffboltdb/README.md
new file mode 100644
index 00000000000..ff4f1b56d24
--- /dev/null
+++ b/database2/ffboltdb/README.md
@@ -0,0 +1,53 @@
+ffboltdb
+========
+
+[![Build Status](https://travis-ci.org/btcsuite/btcd.png?branch=master)]
+(https://travis-ci.org/btcsuite/btcd)
+
+Package ffboltdb implements a driver for the database package that uses boltdb
+for the backing metadata and flat files for block storage.
+
+This driver is the recommended driver for use with btcd.  It has a strong focus
+on speed, efficiency, and robustness.  It makes use of zero-copy memory mapping
+for the metadata, flat files for block storage, and checksums in key areas to
+ensure data integrity.
+
+Package ffboltdb is licensed under the copyfree ISC license.
+
+## Usage
+
+This package is a driver to the database package and provides the database type
+of "ffboltdb".  The parameters the Open and Create functions take are the
+database path as a string and the block network.
+ +```Go +db, err := database.Open("ffboltdb", "path/to/database", wire.MainNet) +if err != nil { + // Handle error +} +``` + +```Go +db, err := database.Create("ffboltdb", "path/to/database", wire.MainNet) +if err != nil { + // Handle error +} +``` + +## Documentation + +[![GoDoc](https://godoc.org/github.com/btcsuite/btcd/database/ffboltdb?status.png)] +(http://godoc.org/github.com/btcsuite/btcd/database/ffboltdb) + +Full `go doc` style documentation for the project can be viewed online without +installing this package by using the GoDoc site here: +http://godoc.org/github.com/btcsuite/btcd/database/ffboltdb + +You can also view the documentation locally once the package is installed with +the `godoc` tool by running `godoc -http=":6060"` and pointing your browser to +http://localhost:6060/pkg/github.com/btcsuite/btcd/database/ffboltdb + +## License + +Package ffboltdb is licensed under the [copyfree](http://copyfree.org) ISC +License. diff --git a/database2/ffboltdb/bench_test.go b/database2/ffboltdb/bench_test.go new file mode 100644 index 00000000000..262862ed902 --- /dev/null +++ b/database2/ffboltdb/bench_test.go @@ -0,0 +1,103 @@ +// Copyright (c) 2015 The btcsuite developers +// Use of this source code is governed by an ISC +// license that can be found in the LICENSE file. + +package ffboltdb + +import ( + "os" + "path/filepath" + "testing" + + "github.com/btcsuite/btcd/chaincfg" + database "github.com/btcsuite/btcd/database2" + "github.com/btcsuite/btcutil" +) + +// BenchmarkBlockHeader benchmarks how long it takes to load the mainnet genesis +// block header. +func BenchmarkBlockHeader(b *testing.B) { + // Start by creating a new database and populating it with the mainnet + // genesis block. + dbPath := filepath.Join(os.TempDir(), "ffboltdb-benchblkhdr") + _ = os.RemoveAll(dbPath) + db, err := database.Create("ffboltdb", dbPath, blockDataNet) + if err != nil { + b.Fatal(err) + } + defer os.RemoveAll(dbPath) + defer db.Close() + err = db.Update(func(tx database.Tx) error { + block := btcutil.NewBlock(chaincfg.MainNetParams.GenesisBlock) + if err := tx.StoreBlock(block); err != nil { + return err + } + return nil + }) + if err != nil { + b.Fatal(err) + } + + b.ReportAllocs() + b.ResetTimer() + err = db.View(func(tx database.Tx) error { + blockHash := chaincfg.MainNetParams.GenesisHash + for i := 0; i < b.N; i++ { + _, err := tx.FetchBlockHeader(blockHash) + if err != nil { + return err + } + } + return nil + }) + if err != nil { + b.Fatal(err) + } + + // Don't benchmark teardown. + b.StopTimer() +} + +// BenchmarkBlockHeader benchmarks how long it takes to load the mainnet genesis +// block. +func BenchmarkBlock(b *testing.B) { + // Start by creating a new database and populating it with the mainnet + // genesis block. 
+ dbPath := filepath.Join(os.TempDir(), "ffboltdb-benchblk") + _ = os.RemoveAll(dbPath) + db, err := database.Create("ffboltdb", dbPath, blockDataNet) + if err != nil { + b.Fatal(err) + } + defer os.RemoveAll(dbPath) + defer db.Close() + err = db.Update(func(tx database.Tx) error { + block := btcutil.NewBlock(chaincfg.MainNetParams.GenesisBlock) + if err := tx.StoreBlock(block); err != nil { + return err + } + return nil + }) + if err != nil { + b.Fatal(err) + } + + b.ReportAllocs() + b.ResetTimer() + err = db.View(func(tx database.Tx) error { + blockHash := chaincfg.MainNetParams.GenesisHash + for i := 0; i < b.N; i++ { + _, err := tx.FetchBlock(blockHash) + if err != nil { + return err + } + } + return nil + }) + if err != nil { + b.Fatal(err) + } + + // Don't benchmark teardown. + b.StopTimer() +} diff --git a/database2/ffboltdb/blockio.go b/database2/ffboltdb/blockio.go new file mode 100644 index 00000000000..aced14d6bd3 --- /dev/null +++ b/database2/ffboltdb/blockio.go @@ -0,0 +1,749 @@ +// Copyright (c) 2015 The btcsuite developers +// Use of this source code is governed by an ISC +// license that can be found in the LICENSE file. + +// This file contains the implementation functions for reading, writing, and +// otherwise working with the flat files that house the actual blocks. + +package ffboltdb + +import ( + "container/list" + "encoding/binary" + "fmt" + "hash/crc32" + "io" + "os" + "path/filepath" + "sync" + + database "github.com/btcsuite/btcd/database2" + "github.com/btcsuite/btcd/wire" +) + +const ( + // The Bitcoin protocol encodes block height as int32, so max number of + // blocks is 2^31. Max block size per the protocol is 32MiB per block. + // So the theoretical max at the time this comment was written is 64PiB + // (pebibytes). With files @ 512MiB each, this would require a maximum + // of 134,217,728 files. Thus, choose 9 digits of precision for the + // filenames. An additional benefit is 9 digits provides 10^9 files @ + // 512MiB each for a total of ~476.84PiB (roughly 7.4 times the current + // theoretical max), so there is room for the max block size to grow in + // the future. + blockFilenameTemplate = "%09d.fdb" + + // maxOpenFiles is the max number of open files to maintain in the + // open blocks cache. Note that this does not include the current + // write file, so there will typically be one more than this value open. + maxOpenFiles = 25 + + // maxBlockFileSize is the maximum size for each file used to store + // blocks. + // + // NOTE: The current code uses uint32 for all offsets, so this value + // must be less than 2^32 (4 GiB). This is also why it's a typed + // constant. + maxBlockFileSize uint32 = 512 * 1024 * 1024 // 512 MiB + + // blockLocSize is the number of bytes the serialized block location + // data that is stored in the block index. + // + // The serialized block location format is: + // + // [0:4] Block file (4 bytes) + // [4:8] File offset (4 bytes) + // [8:12] Block length (4 bytes) + blockLocSize = 12 +) + +var ( + // castagnoli houses the Catagnoli polynomial used for CRC-32 checksums. + castagnoli = crc32.MakeTable(crc32.Castagnoli) +) + +// filer is an interface which acts very similar to a *os.File and is typically +// implemented by it. It exists so the test code can provide mock files for +// properly testing corruption and file system issues. 
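Both the write and read paths that follow lean on the Castagnoli table defined above for integrity, so a self-contained sketch of the checksum round trip may be useful. This uses only the standard library and illustrative data, not the driver's actual record layout:

```Go
package main

import (
	"fmt"
	"hash/crc32"
)

func main() {
	// The same table the block store uses for its record checksums.
	castagnoli := crc32.MakeTable(crc32.Castagnoli)

	// Checksum some serialized record bytes on write...
	record := []byte("serialized network, length, and block bytes")
	sum := crc32.Checksum(record, castagnoli)

	// ...and verify them on read.  A mismatch indicates corruption and
	// would surface as database.ErrCorruption in the driver.
	if crc32.Checksum(record, castagnoli) != sum {
		fmt.Println("checksum mismatch - corrupt record")
		return
	}
	fmt.Printf("record checksum: 0x%08x\n", sum)
}
```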
+type filer interface {
+	io.Closer
+	io.WriterAt
+	io.ReaderAt
+	Truncate(size int64) error
+	Sync() error
+}
+
+// lockableFile represents a block file on disk that has been opened for either
+// read or read/write access.  It also contains a read-write mutex to support
+// multiple concurrent readers.
+type lockableFile struct {
+	sync.RWMutex
+	file filer
+}
+
+// writeCursor represents the current file and offset of the block file on disk
+// for performing all writes.  It also contains a read-write mutex to support
+// multiple concurrent readers which can reuse the file handle.
+type writeCursor struct {
+	sync.RWMutex
+
+	// curFile is the current block file that will be appended to when
+	// writing new blocks.
+	curFile *lockableFile
+
+	// curFileNum is the current block file number and is used to allow
+	// readers to use the same open file handle.
+	curFileNum uint32
+
+	// curOffset is the offset in the current write block file where the
+	// next new block will be written.
+	curOffset uint32
+}
+
+// blockStore houses information used to handle reading and writing blocks (and
+// parts of blocks) into flat files with support for multiple concurrent
+// readers.
+type blockStore struct {
+	// network is the specific network to use in the flat files for each
+	// block.
+	network wire.BitcoinNet
+
+	// basePath is the base path used for the flat block files and metadata.
+	basePath string
+
+	// maxBlockFileSize is the maximum size for each file used to store
+	// blocks.  It is defined on the store so the whitebox tests can
+	// override the value.
+	maxBlockFileSize uint32
+
+	// The following fields are related to the flat files which hold the
+	// actual blocks.  The number of open files is limited by maxOpenFiles.
+	//
+	// obfMutex protects concurrent access to the openBlockFiles map.  It is
+	// a RWMutex so multiple readers can simultaneously access open files.
+	//
+	// openBlockFiles houses the open file handles for existing block files
+	// which have been opened read-only along with an individual RWMutex.
+	// This scheme allows multiple concurrent readers to the same file while
+	// preventing the file from being closed out from under them.
+	//
+	// lruMutex protects concurrent access to the least recently used list
+	// and lookup map.
+	//
+	// openBlocksLRU tracks how the open files are referenced by pushing the
+	// most recently used files to the front of the list thereby trickling
+	// the least recently used files to the end of the list.  When a file
+	// needs to be closed due to exceeding the max number of allowed open
+	// files, the one at the end of the list is closed.
+	//
+	// fileNumToLRUElem is a mapping between a specific block file number
+	// and the associated list element on the least recently used list.
+	//
+	// Thus, with the combination of these fields, the database supports
+	// concurrent non-blocking reads across multiple and individual files
+	// along with intelligently limiting the number of open file handles by
+	// closing the least recently used files as needed.
+	//
+	// NOTE: The locking order used throughout is well-defined and MUST be
+	// followed.  Failure to do so could lead to deadlocks.  In particular,
+	// the locking order is as follows:
+	//   1) obfMutex
+	//   2) lruMutex
+	//   3) writeCursor mutex
+	//   4) specific file mutexes
+	//
+	// None of the mutexes are required to be locked at the same time, and
+	// often aren't.  However, if they are to be locked simultaneously, they
+	// MUST be locked in the order previously specified.
+	//
+	// Due to the high performance and multi-read concurrency requirements,
+	// write locks should only be held for the minimum time necessary.
+	obfMutex         sync.RWMutex
+	lruMutex         sync.Mutex
+	openBlocksLRU    *list.List // Contains uint32 block file numbers.
+	fileNumToLRUElem map[uint32]*list.Element
+	openBlockFiles   map[uint32]*lockableFile
+
+	// writeCursor houses the state for the current file and location that
+	// new blocks are written to.
+	writeCursor *writeCursor
+
+	// These functions are set to openFile, openWriteFile, and deleteFile by
+	// default, but are exposed here to allow the whitebox tests to replace
+	// them when working with mock files.
+	openFileFunc      func(fileNum uint32) (*lockableFile, error)
+	openWriteFileFunc func(fileNum uint32) (filer, error)
+	deleteFileFunc    func(fileNum uint32) error
+}
+
+// blockLocation identifies a particular block file and location.
+type blockLocation struct {
+	blockFileNum uint32
+	fileOffset   uint32
+	blockLen     uint32
+}
+
+// deserializeBlockLoc deserializes the passed serialized block location
+// information.  This is data stored into the block index metadata for each
+// block.  The serialized data passed to this function MUST be at least
+// blockLocSize bytes or it will panic.  The error check is avoided here
+// because this information will always be coming from the block index which
+// includes a checksum to detect corruption.  Thus it is safe to use this
+// unchecked here.
+func deserializeBlockLoc(serializedLoc []byte) blockLocation {
+	// The serialized block location format is:
+	//
+	//  [0:4]  Block file (4 bytes)
+	//  [4:8]  File offset (4 bytes)
+	//  [8:12] Block length (4 bytes)
+	return blockLocation{
+		blockFileNum: byteOrder.Uint32(serializedLoc[0:4]),
+		fileOffset:   byteOrder.Uint32(serializedLoc[4:8]),
+		blockLen:     byteOrder.Uint32(serializedLoc[8:12]),
+	}
+}
+
+// serializeBlockLoc returns the serialization of the passed block location.
+// This is data to be stored into the block index metadata for each block.
+func serializeBlockLoc(loc blockLocation) []byte {
+	// The serialized block location format is:
+	//
+	//  [0:4]  Block file (4 bytes)
+	//  [4:8]  File offset (4 bytes)
+	//  [8:12] Block length (4 bytes)
+	var serializedData [12]byte
+	byteOrder.PutUint32(serializedData[0:4], loc.blockFileNum)
+	byteOrder.PutUint32(serializedData[4:8], loc.fileOffset)
+	byteOrder.PutUint32(serializedData[8:12], loc.blockLen)
+	return serializedData[:]
+}
+
+// blockFilePath returns the file path for the provided block file number.
+func blockFilePath(dbPath string, fileNum uint32) string {
+	fileName := fmt.Sprintf(blockFilenameTemplate, fileNum)
+	return filepath.Join(dbPath, fileName)
+}
+
+// openWriteFile returns a file handle for the passed flat file number in
+// read/write mode.  The file will be created if needed.  It is typically used
+// for the current file that will have all new data appended.  Unlike openFile,
+// this function does not keep track of the open file and it is not subject to
+// the maxOpenFiles limit.
+func (s *blockStore) openWriteFile(fileNum uint32) (filer, error) {
+	// The current block file needs to be read-write so it is possible to
+	// append to it.  Also, it shouldn't be part of the least recently used
+	// tracking.
+ filePath := blockFilePath(s.basePath, fileNum) + file, err := os.OpenFile(filePath, os.O_RDWR|os.O_CREATE, 0666) + if err != nil { + str := fmt.Sprintf("failed to open file %q: %v", filePath, err) + return nil, makeDbErr(database.ErrDriverSpecific, str, err) + } + + return file, nil +} + +// openFile returns a read-only file handle for the passed flat file number. +// The function also keeps track of the open files, performs least recently +// used tracking, and limits the number of open files to maxOpenFiles by closing +// the least recently used file as needed. +// +// This function MUST be called with the overall files mutex (s.obfMutex) locked +// for WRITES. +func (s *blockStore) openFile(fileNum uint32) (*lockableFile, error) { + // Open the appropriate file as read-only. + filePath := blockFilePath(s.basePath, fileNum) + file, err := os.Open(filePath) + if err != nil { + return nil, makeDbErr(database.ErrDriverSpecific, err.Error(), + err) + } + blockFile := &lockableFile{file: file} + + // Close the least recently used file if the file exceeds the max + // allowed open files. This is not done until after the file open in + // case the file fails to open, there is no need to close any files. + // + // A write lock is required on the LRU list here to protect against + // modifications happening as already open files are read from and + // shuffled to the front of the list. + // + // Also, add the file that was just opened to the front of the least + // recently used list to indicate it is the most recently used file and + // therefore should be closed last. + s.lruMutex.Lock() + lruList := s.openBlocksLRU + if lruList.Len() >= maxOpenFiles { + lruFileNum := lruList.Remove(lruList.Back()).(uint32) + oldBlockFile := s.openBlockFiles[lruFileNum] + + // Close the old file under the write lock for the file in case + // any readers are currently reading from it so it's not closed + // out from under them. + oldBlockFile.Lock() + _ = oldBlockFile.file.Close() + oldBlockFile.Unlock() + + delete(s.openBlockFiles, lruFileNum) + delete(s.fileNumToLRUElem, lruFileNum) + } + s.fileNumToLRUElem[fileNum] = lruList.PushFront(fileNum) + s.lruMutex.Unlock() + + // Store a reference to it the open block files map. + s.openBlockFiles[fileNum] = blockFile + + return blockFile, nil +} + +// deleteFile remove the block file for the passed flat file number. The file +// must already be closed and it is the responsibility of the caller to do any +// other state cleanup necessary. +func (s *blockStore) deleteFile(fileNum uint32) error { + filePath := blockFilePath(s.basePath, fileNum) + if err := os.Remove(filePath); err != nil { + return makeDbErr(database.ErrDriverSpecific, err.Error(), err) + } + + return nil +} + +// blockFile attempts to return an existing file handle for the passed flat file +// number if it is already open as well as marking it as most recently used. It +// will also open the file when it's not already open subject to the rules +// described in openFile. +// +// NOTE: The returned block file will already have the read lock acquired and +// the caller MUST call .RUnlock() to release it once it has finished all read +// operations. This is necessary because otherwise it would be possible for a +// separate goroutine to close the file after it is returned from here, but +// before the caller has acquired a read lock. +func (s *blockStore) blockFile(fileNum uint32) (*lockableFile, error) { + // When the requested block file is open for writes, return it. 
+ wc := s.writeCursor + wc.RLock() + if fileNum == wc.curFileNum && wc.curFile.file != nil { + obf := wc.curFile + obf.RLock() + wc.RUnlock() + return obf, nil + } + wc.RUnlock() + + // Try to return an open file under the overall files read lock. + s.obfMutex.RLock() + if obf, ok := s.openBlockFiles[fileNum]; ok { + s.lruMutex.Lock() + s.openBlocksLRU.MoveToFront(s.fileNumToLRUElem[fileNum]) + s.lruMutex.Unlock() + + obf.RLock() + s.obfMutex.RUnlock() + return obf, nil + } + s.obfMutex.RUnlock() + + // Since the file isn't open already, need to check the open block files + // map again under write lock in case multiple readers got here and a + // separate one is already opening the file. + s.obfMutex.Lock() + if obf, ok := s.openBlockFiles[fileNum]; ok { + obf.RLock() + s.obfMutex.Unlock() + return obf, nil + } + + // The file isn't open, so open it while closing the least recently used + // one. The called function grabs the overall files write lock and + // checks the opened block files map again in case multiple readers get + // here. + obf, err := s.openFileFunc(fileNum) + if err != nil { + s.obfMutex.Unlock() + return nil, err + } + obf.RLock() + s.obfMutex.Unlock() + return obf, nil +} + +// writeData is a helper function for writeBlock which writes the provided data +// at the current write offset and updates the write cursor accordingly. The +// field name parameter is only used when there is an error to provide a nicer +// error message. +// +// The write cursor will be advanced the number of bytes actually written in the +// event of failure. +// +// NOTE: This function MUST be called with the write cursor current file lock +// held and must only be called during a write transaction so it is effectively +// locked for writes. Also, the write cursor current file must NOT be nil. +func (s *blockStore) writeData(data []byte, fieldName string) error { + wc := s.writeCursor + n, err := wc.curFile.file.WriteAt(data, int64(wc.curOffset)) + wc.curOffset += uint32(n) + if err != nil { + str := fmt.Sprintf("failed to write %s to file %d at "+ + "offset %d: %v", fieldName, wc.curFileNum, + wc.curOffset-uint32(n), err) + return makeDbErr(database.ErrDriverSpecific, str, err) + } + + return nil +} + +// writeBlock appends the specified raw block bytes to the store's write cursor +// location and increments it accordingly. When the block would exceed the max +// file size for the current flat file, this function will close the current +// file, create the next file, update the write cursor, and write the block to +// the new file. +// +// The write cursor will also be advanced the number of bytes actually written +// in the event of failure. +// +// Format: +func (s *blockStore) writeBlock(rawBlock []byte) (blockLocation, error) { + // Compute how many bytes will be written. + // 4 bytes each for block network + 4 bytes for block length + + // length of raw block + 4 bytes for checksum. + blockLen := uint32(len(rawBlock)) + fullLen := blockLen + 12 + + // Move to the next block file if adding the new block would exceed the + // max allowed size for the current block file. Also detect overflow + // to be paranoid, even though it isn't possible currently, numbers + // might change in the future to make possible. + // + // NOTE: The writeCursor.offset field isn't protected by the mutex + // since it's only read/changed during this function which can only be + // called during a write transaction, of which there can be only one at + // a time. 
+ wc := s.writeCursor + finalOffset := wc.curOffset + fullLen + if finalOffset < wc.curOffset || finalOffset > s.maxBlockFileSize { + // This is done under the write cursor lock since the fileNum + // field is accessed elsewhere by readers. + // + // Close the current write file to force a read-only reopen + // with LRU tracking. The close is done under the write lock + // for the file to prevent it from being closed out from under + // any readers currently reading from it. + wc.Lock() + wc.curFile.Lock() + if wc.curFile.file != nil { + _ = wc.curFile.file.Close() + wc.curFile.file = nil + } + wc.curFile.Unlock() + + // Start writes into next file. + wc.curFileNum++ + wc.curOffset = 0 + wc.Unlock() + } + + // All writes are done under the write lock for the file to ensure any + // readers are finished and blocked first. + wc.curFile.Lock() + defer wc.curFile.Unlock() + + // Open the current file if needed. This will typically only be the + // case when moving to the next file to write to or on initial database + // load. However, it might also be the case if rollbacks happened after + // file writes started during a transaction commit. + if wc.curFile.file == nil { + file, err := s.openWriteFileFunc(wc.curFileNum) + if err != nil { + return blockLocation{}, err + } + wc.curFile.file = file + } + + // Bitcoin network. + origOffset := wc.curOffset + hasher := crc32.New(castagnoli) + var scratch [4]byte + byteOrder.PutUint32(scratch[:], uint32(s.network)) + if err := s.writeData(scratch[:], "network"); err != nil { + return blockLocation{}, err + } + _, _ = hasher.Write(scratch[:]) + + // Block length. + byteOrder.PutUint32(scratch[:], blockLen) + if err := s.writeData(scratch[:], "block length"); err != nil { + return blockLocation{}, err + } + _, _ = hasher.Write(scratch[:]) + + // Serialized block. + if err := s.writeData(rawBlock[:], "block"); err != nil { + return blockLocation{}, err + } + _, _ = hasher.Write(rawBlock) + + // Castagnoli CRC-32 as a checksum of all the previous. + if err := s.writeData(hasher.Sum(nil), "checksum"); err != nil { + return blockLocation{}, err + } + + // Sync the file to disk. + if err := wc.curFile.file.Sync(); err != nil { + str := fmt.Sprintf("failed to sync file %d: %v", wc.curFileNum, + err) + return blockLocation{}, makeDbErr(database.ErrDriverSpecific, + str, err) + } + + loc := blockLocation{ + blockFileNum: wc.curFileNum, + fileOffset: origOffset, + blockLen: fullLen, + } + return loc, nil +} + +// readBlock reads the specified block record and returns the serialized block. +// It ensures the integrity of the block data by checking that the serialized +// network matches the current network associated with the block store and +// comparing the calculated checksum against the one stored in the flat file. +// This function also automatically handles all file management such as opening +// and closing files as necessary to stay within the maximum allowed open files +// limit. +// +// Returns ErrDriverSpecific if the data fails to read for any reason and +// ErrCorruption if the checksum of the read data doesn't match the checksum +// read from the file. +// +// Format: +func (s *blockStore) readBlock(hash *wire.ShaHash, loc blockLocation) ([]byte, error) { + // Get the referenced block file handle opening the file as needed. The + // function also handles closing files as needed to avoid going over the + // max allowed open files. 
+ blockFile, err := s.blockFile(loc.blockFileNum) + if err != nil { + return nil, err + } + + serializedData := make([]byte, loc.blockLen) + n, err := blockFile.file.ReadAt(serializedData, int64(loc.fileOffset)) + blockFile.RUnlock() + if err != nil { + str := fmt.Sprintf("failed to read block %s from file %d, "+ + "offset %d: %v", hash, loc.blockFileNum, loc.fileOffset, + err) + return nil, makeDbErr(database.ErrDriverSpecific, str, err) + } + + // Calculate the checksum of the read data and ensure it matches the + // serialized checksum. This will detect any data corruption in the + // flat file without having to do much more expensive merkle root + // calculations on the loaded block. + serializedChecksum := binary.BigEndian.Uint32(serializedData[n-4:]) + calculatedChecksum := crc32.Checksum(serializedData[:n-4], castagnoli) + if serializedChecksum != calculatedChecksum { + str := fmt.Sprintf("block data for block %s checksum "+ + "does not match - got %x, want %x", hash, + calculatedChecksum, serializedChecksum) + return nil, makeDbErr(database.ErrCorruption, str, nil) + } + + // The network associated with the block must match the current active + // network, otherwise somebody probably put the block files for the + // wrong network in the directory. + serializedNet := byteOrder.Uint32(serializedData[:4]) + if serializedNet != uint32(s.network) { + str := fmt.Sprintf("block data for block %s is for the "+ + "wrong network - got %d, want %d", hash, serializedNet, + uint32(s.network)) + return nil, makeDbErr(database.ErrDriverSpecific, str, nil) + } + + // The raw block excludes the network, length of the block, and + // checksum. + return serializedData[8 : n-4], nil +} + +// readBlockRegion reads the specified amount of data at the provided offset for +// a given block location. The offset is relative to the start of the +// serialized block (as opposed to the beginning of the block record). This +// function automatically handles all file management such as opening and +// closing files as necessary to stay within the maximum allowed open files +// limit. +// +// Returns ErrDriverSpecific if the data fails to read for any reason. +func (s *blockStore) readBlockRegion(loc blockLocation, offset, numBytes uint32) ([]byte, error) { + // Get the referenced block file handle opening the file as needed. The + // function also handles closing files as needed to avoid going over the + // max allowed open files. + blockFile, err := s.blockFile(loc.blockFileNum) + if err != nil { + return nil, err + } + + // Regions are offsets into the actual block, however the serialized + // data for a block includes an initial 4 bytes for network + 4 bytes + // for block length. Thus, add 8 bytes to adjust. + readOffset := loc.fileOffset + 8 + offset + serializedData := make([]byte, numBytes) + _, err = blockFile.file.ReadAt(serializedData, int64(readOffset)) + blockFile.RUnlock() + if err != nil { + str := fmt.Sprintf("failed to read region from block file %d, "+ + "offset %d, len %d: %v", loc.blockFileNum, readOffset, + numBytes, err) + return nil, makeDbErr(database.ErrDriverSpecific, str, err) + } + + return serializedData, nil +} + +// handleRollback rolls the block files on disk back to the provided file number +// and offset. This involves potentially deleting and truncating the files that +// were partially written. 
+//
+// There are effectively two scenarios to consider here:
+//   1) Transient write failures from which recovery is possible
+//   2) More permanent failures such as hard disk death and/or removal
+//
+// In either case, the write cursor will be repositioned to the old block file
+// offset regardless of any other errors that occur while attempting to undo
+// writes.
+//
+// For the first scenario, this will lead to any data which failed to be undone
+// being overwritten and thus behaves as desired as the system continues to run.
+//
+// For the second scenario, the metadata which stores the current write cursor
+// position within the block files will not have been updated yet and thus if
+// the system eventually recovers (perhaps the hard drive is reconnected), it
+// will also lead to any data which failed to be undone being overwritten and
+// thus behaves as desired.
+//
+// Therefore, any errors are simply logged at a warning level rather than being
+// returned since there is nothing more that could be done about it anyway.
+func (s *blockStore) handleRollback(oldBlockFileNum, oldBlockOffset uint32) {
+	// Grab the write cursor mutex since it is modified throughout this
+	// function.
+	wc := s.writeCursor
+	wc.Lock()
+	defer wc.Unlock()
+
+	// Nothing to do if the rollback point is the same as the current write
+	// cursor.
+	if wc.curFileNum == oldBlockFileNum && wc.curOffset == oldBlockOffset {
+		return
+	}
+
+	// Regardless of any failures that happen below, reposition the write
+	// cursor to the old block file and offset.
+	defer func() {
+		wc.curFileNum = oldBlockFileNum
+		wc.curOffset = oldBlockOffset
+	}()
+
+	log.Debugf("ROLLBACK: Rolling back to file %d, offset %d",
+		oldBlockFileNum, oldBlockOffset)
+
+	// Close the current write file if it needs to be deleted.  Then delete
+	// all files that are newer than the provided rollback file while
+	// also moving the write cursor file backwards accordingly.
+	if wc.curFileNum > oldBlockFileNum {
+		wc.curFile.Lock()
+		if wc.curFile.file != nil {
+			_ = wc.curFile.file.Close()
+			wc.curFile.file = nil
+		}
+		wc.curFile.Unlock()
+	}
+	for ; wc.curFileNum > oldBlockFileNum; wc.curFileNum-- {
+		if err := s.deleteFileFunc(wc.curFileNum); err != nil {
+			_ = log.Warnf("ROLLBACK: Failed to delete block file "+
+				"number %d: %v", wc.curFileNum, err)
+			return
+		}
+	}
+
+	// Open the file for the current write cursor if needed.
+	wc.curFile.Lock()
+	if wc.curFile.file == nil {
+		obf, err := s.openWriteFileFunc(wc.curFileNum)
+		if err != nil {
+			wc.curFile.Unlock()
+			_ = log.Warnf("ROLLBACK: %v", err)
+			return
+		}
+		wc.curFile.file = obf
+	}
+
+	// Truncate the file to the provided rollback offset.
+	if err := wc.curFile.file.Truncate(int64(oldBlockOffset)); err != nil {
+		wc.curFile.Unlock()
+		_ = log.Warnf("ROLLBACK: Failed to truncate file %d: %v",
+			wc.curFileNum, err)
+		return
+	}
+
+	// Sync the file to disk.
+	err := wc.curFile.file.Sync()
+	wc.curFile.Unlock()
+	if err != nil {
+		_ = log.Warnf("ROLLBACK: Failed to sync file %d: %v",
+			wc.curFileNum, err)
+		return
+	}
+}
+
+// scanBlockFiles searches the database directory for all flat block files to
+// find the end of the most recent file.  This position is considered the
+// current write cursor which is also stored in the metadata.  Thus, it is used
+// to detect unexpected shutdowns in the middle of writes so the block files
+// can be reconciled.
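Since handleRollback's recovery story ultimately reduces to truncating the current write file back to a known-good offset and syncing it, a standalone sketch of that primitive against a plain os.File may clarify it; the file name and offset below are illustrative values only, not ones the driver uses:

```Go
package main

import (
	"fmt"
	"log"
	"os"
)

// rollbackTo truncates a file to a known-good offset and syncs it, the same
// primitive handleRollback applies to the current write file.
func rollbackTo(path string, offset int64) error {
	f, err := os.OpenFile(path, os.O_RDWR, 0666)
	if err != nil {
		return err
	}
	defer f.Close()

	if err := f.Truncate(offset); err != nil {
		return err
	}
	return f.Sync()
}

func main() {
	if err := rollbackTo("000000000.fdb", 0); err != nil {
		log.Fatal(err)
	}
	fmt.Println("rolled back")
}
```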
+func scanBlockFiles(dbPath string) (int, uint32) {
+	lastFile := -1
+	fileLen := uint32(0)
+	for i := 0; ; i++ {
+		filePath := blockFilePath(dbPath, uint32(i))
+		st, err := os.Stat(filePath)
+		if err != nil {
+			break
+		}
+		lastFile = i
+		fileLen = uint32(st.Size())
+	}
+
+	log.Tracef("Scan found latest block file #%d with length %d", lastFile,
+		fileLen)
+	return lastFile, fileLen
+}
+
+// newBlockStore returns a new block store with the current block file number
+// and offset set and all fields initialized.
+func newBlockStore(basePath string, network wire.BitcoinNet) *blockStore {
+	// Look for the end of the latest block file to determine what the
+	// write cursor position is from the viewpoint of the block files on
+	// disk.
+	fileNum, fileOff := scanBlockFiles(basePath)
+	if fileNum == -1 {
+		fileNum = 0
+		fileOff = 0
+	}
+
+	store := &blockStore{
+		network:          network,
+		basePath:         basePath,
+		maxBlockFileSize: maxBlockFileSize,
+		openBlockFiles:   make(map[uint32]*lockableFile),
+		openBlocksLRU:    list.New(),
+		fileNumToLRUElem: make(map[uint32]*list.Element),
+
+		writeCursor: &writeCursor{
+			curFile:    &lockableFile{},
+			curFileNum: uint32(fileNum),
+			curOffset:  uint32(fileOff),
+		},
+	}
+	store.openFileFunc = store.openFile
+	store.openWriteFileFunc = store.openWriteFile
+	store.deleteFileFunc = store.deleteFile
+	return store
+}
diff --git a/database2/ffboltdb/db.go b/database2/ffboltdb/db.go
new file mode 100644
index 00000000000..b38d161a51c
--- /dev/null
+++ b/database2/ffboltdb/db.go
@@ -0,0 +1,1583 @@
+// Copyright (c) 2015 The btcsuite developers
+// Use of this source code is governed by an ISC
+// license that can be found in the LICENSE file.
+
+package ffboltdb
+
+import (
+	"encoding/binary"
+	"fmt"
+	"hash/crc32"
+	"os"
+	"path/filepath"
+	"sort"
+	"sync"
+
+	"github.com/btcsuite/bolt"
+	database "github.com/btcsuite/btcd/database2"
+	"github.com/btcsuite/btcd/wire"
+	"github.com/btcsuite/btcutil"
+)
+
+const (
+	// metadataDbName is the name used for the metadata database.
+	metadataDbName = "metadata.db"
+
+	// blockHdrSize is the size of a block header.  This is simply the
+	// constant from wire and is only provided here for convenience since
+	// wire.MaxBlockHeaderPayload is quite long.
+	blockHdrSize = wire.MaxBlockHeaderPayload
+
+	// blockHdrOffset and checksumOffset define the offsets into a block
+	// index row for the block header and block row checksum, respectively.
+	//
+	// The serialized block index row format is:
+	//   <block location><block header><checksum>
+	blockHdrOffset = blockLocSize
+	checksumOffset = blockLocSize + blockHdrSize
+)
+
+var (
+	// byteOrder is the preferred byte order used throughout the database
+	// and block files.  Big endian will sometimes be used to allow ordered,
+	// byte-sortable integer values.
+	byteOrder = binary.LittleEndian
+
+	// metadataBucketName is the top-level bucket used for all metadata.
+	metadataBucketName = []byte("metadata")
+
+	// blockIdxBucketName is the bucket used internally to track block
+	// metadata.
+	blockIdxBucketName = []byte("ffboltdb-blockidx")
+
+	// writeLocKeyName is the key used to store the current write file
+	// location.
+	writeLocKeyName = []byte("ffboltdb-writeloc")
+)
+
+// Common error strings.
+const (
+	// errDbNotOpenStr is the text to use for the database.ErrDbNotOpen
+	// error code.
+	errDbNotOpenStr = "database is not open"
+
+	// errTxClosedStr is the text to use for the database.ErrTxClosed error
+	// code.
+	errTxClosedStr = "database tx is closed"
+)
+
+// bulkFetchData allows a block location to be specified along with the
+// index it was requested from.  This in turn allows the bulk data loading
+// functions to sort the data accesses based on the location to improve
+// performance while keeping track of which result the data is for.
+type bulkFetchData struct {
+	*blockLocation
+	replyIndex int
+}
+
+// bulkFetchDataSorter implements sort.Interface to allow a slice of
+// bulkFetchData to be sorted.  In particular it sorts by file and then
+// offset so that reads from files are grouped and linear.
+type bulkFetchDataSorter []bulkFetchData
+
+// Len returns the number of items in the slice.  It is part of the
+// sort.Interface implementation.
+func (s bulkFetchDataSorter) Len() int {
+	return len(s)
+}
+
+// Swap swaps the items at the passed indices.  It is part of the
+// sort.Interface implementation.
+func (s bulkFetchDataSorter) Swap(i, j int) {
+	s[i], s[j] = s[j], s[i]
+}
+
+// Less returns whether the item with index i should sort before the item with
+// index j.  It is part of the sort.Interface implementation.
+func (s bulkFetchDataSorter) Less(i, j int) bool {
+	if s[i].blockFileNum < s[j].blockFileNum {
+		return true
+	}
+	if s[i].blockFileNum > s[j].blockFileNum {
+		return false
+	}
+
+	return s[i].fileOffset < s[j].fileOffset
+}
+
+// makeDbErr creates a database.Error given a set of arguments.
+func makeDbErr(c database.ErrorCode, desc string, err error) database.Error {
+	return database.Error{ErrorCode: c, Description: desc, Err: err}
+}
+
+// convertErr converts the passed bolt error into a database error with an
+// equivalent error code and the passed description.  It also sets the passed
+// error as the underlying error.
+func convertErr(desc string, boltErr error) database.Error {
+	// Use the driver-specific error code by default.  The code below will
+	// update this with the converted bolt error if it's recognized.
+	var code = database.ErrDriverSpecific
+
+	switch boltErr {
+	// Database open/create errors.
+	case bolt.ErrDatabaseNotOpen:
+		code = database.ErrDbNotOpen
+	case bolt.ErrInvalid:
+		code = database.ErrInvalid
+
+	// Transaction errors.
+	case bolt.ErrTxNotWritable:
+		code = database.ErrTxNotWritable
+	case bolt.ErrTxClosed:
+		code = database.ErrTxClosed
+
+	// Value/bucket errors.
+	case bolt.ErrBucketNotFound:
+		code = database.ErrBucketNotFound
+	case bolt.ErrBucketExists:
+		code = database.ErrBucketExists
+	case bolt.ErrBucketNameRequired:
+		code = database.ErrBucketNameRequired
+	case bolt.ErrKeyRequired:
+		code = database.ErrKeyRequired
+	case bolt.ErrKeyTooLarge:
+		code = database.ErrKeyTooLarge
+	case bolt.ErrValueTooLarge:
+		code = database.ErrValueTooLarge
+	case bolt.ErrIncompatibleValue:
+		code = database.ErrIncompatibleValue
+	}
+
+	return database.Error{ErrorCode: code, Description: desc, Err: boltErr}
+}
+
+// cursor is an internal type used to represent a cursor over key/value pairs
+// and nested buckets of a bucket and implements the database.Cursor interface.
+type cursor struct {
+	bucket     *bucket
+	boltCursor *bolt.Cursor
+	key        []byte
+	value      []byte
+}
+
+// Enforce cursor implements the database.Cursor interface.
+var _ database.Cursor = (*cursor)(nil)
+
+// Bucket returns the bucket the cursor was created for.
+//
+// This function is part of the database.Cursor interface implementation.
+func (c *cursor) Bucket() database.Bucket {
+	// Ensure transaction state is valid.
+ if err := c.bucket.tx.checkClosed(); err != nil { + return nil + } + + return c.bucket +} + +// Delete removes the current key/value pair the cursor is at without +// invalidating the cursor. +// +// Returns the following errors as required by the interface contract: +// - ErrIncompatibleValue if attempted when the cursor points to a nested +// bucket +// - ErrTxNotWritable if attempted against a read-only transaction +// - ErrTxClosed if the transaction has already been closed +// +// This function is part of the database.Cursor interface implementation. +func (c *cursor) Delete() error { + // Ensure transaction state is valid. + if err := c.bucket.tx.checkClosed(); err != nil { + return err + } + + if err := c.boltCursor.Delete(); err != nil { + str := "failed to delete cursor key" + return convertErr(str, err) + } + + return nil +} + +// First positions the cursor at the first key/value pair and returns whether or +// not the pair exists. +// +// This function is part of the database.Cursor interface implementation. +func (c *cursor) First() bool { + // Ensure transaction state is valid. + if err := c.bucket.tx.checkClosed(); err != nil { + return false + } + + c.key, c.value = c.boltCursor.First() + return c.key != nil +} + +// Last positions the cursor at the last key/value pair and returns whether or +// not the pair exists. +// +// This function is part of the database.Cursor interface implementation. +func (c *cursor) Last() bool { + // Ensure transaction state is valid. + if err := c.bucket.tx.checkClosed(); err != nil { + return false + } + + c.key, c.value = c.boltCursor.Last() + return c.key != nil +} + +// Next moves the cursor one key/value pair forward and returns whether or not +// the pair exists. +// +// This function is part of the database.Cursor interface implementation. +func (c *cursor) Next() bool { + // Ensure transaction state is valid. + if err := c.bucket.tx.checkClosed(); err != nil { + return false + } + + c.key, c.value = c.boltCursor.Next() + return c.key != nil +} + +// Prev moves the cursor one key/value pair backward and returns whether or not +// the pair exists. +// +// This function is part of the database.Cursor interface implementation. +func (c *cursor) Prev() bool { + // Ensure transaction state is valid. + if err := c.bucket.tx.checkClosed(); err != nil { + return false + } + + c.key, c.value = c.boltCursor.Prev() + return c.key != nil +} + +// Seek positions the cursor at the first key/value pair that is greater than or +// equal to the passed seek key. Returns false if no suitable key was found. +// +// This function is part of the database.Cursor interface implementation. +func (c *cursor) Seek(seek []byte) bool { + // Ensure transaction state is valid. + if err := c.bucket.tx.checkClosed(); err != nil { + return false + } + + c.key, c.value = c.boltCursor.Seek(seek) + return c.key != nil +} + +// Key returns the current key the cursor is pointing to. +// +// This function is part of the database.Cursor interface implementation. +func (c *cursor) Key() []byte { + // Ensure transaction state is valid. + if err := c.bucket.tx.checkClosed(); err != nil { + return nil + } + + return c.key +} + +// Value returns the current value the cursor is pointing to. This will be nil +// for nested buckets. +// +// This function is part of the database.Cursor interface implementation. +func (c *cursor) Value() []byte { + // Ensure transaction state is valid. 
+ if err := c.bucket.tx.checkClosed(); err != nil { + return nil + } + + return c.value +} + +// bucket is an internal type used to represent a collection of key/value pairs +// and implements the database.Bucket interface. +type bucket struct { + tx *transaction + boltBucket *bolt.Bucket +} + +// Enforce bucket implements the database.Bucket interface. +var _ database.Bucket = (*bucket)(nil) + +// Bucket retrieves a nested bucket with the given key. Returns nil if +// the bucket does not exist. +// +// This function is part of the database.Bucket interface implementation. +func (b *bucket) Bucket(key []byte) database.Bucket { + // Ensure transaction state is valid. + if err := b.tx.checkClosed(); err != nil { + return nil + } + + // This nil check is intentional so the return value can be checked + // against nil directly. + boltBucket := b.boltBucket.Bucket(key) + if boltBucket == nil { + return nil + } + return &bucket{tx: b.tx, boltBucket: boltBucket} +} + +// CreateBucket creates and returns a new nested bucket with the given key. +// +// Returns the following errors as required by the interface contract: +// - ErrBucketExists if the bucket already exists +// - ErrBucketNameRequired if the key is empty +// - ErrIncompatibleValue if the key is otherwise invalid for the particular +// implementation +// - ErrTxNotWritable if attempted against a read-only transaction +// - ErrTxClosed if the transaction has already been closed +// +// This function is part of the database.Bucket interface implementation. +func (b *bucket) CreateBucket(key []byte) (database.Bucket, error) { + // Ensure transaction state is valid. + if err := b.tx.checkClosed(); err != nil { + return nil, err + } + + boltBucket, err := b.boltBucket.CreateBucket(key) + if err != nil { + str := fmt.Sprintf("failed to create bucket with key %q", key) + return nil, convertErr(str, err) + } + return &bucket{tx: b.tx, boltBucket: boltBucket}, nil +} + +// CreateBucketIfNotExists creates and returns a new nested bucket with the +// given key if it does not already exist. +// +// Returns the following errors as required by the interface contract: +// - ErrBucketNameRequired if the key is empty +// - ErrIncompatibleValue if the key is otherwise invalid for the particular +// implementation +// - ErrTxNotWritable if attempted against a read-only transaction +// - ErrTxClosed if the transaction has already been closed +// +// This function is part of the database.Bucket interface implementation. +func (b *bucket) CreateBucketIfNotExists(key []byte) (database.Bucket, error) { + // Ensure transaction state is valid. + if err := b.tx.checkClosed(); err != nil { + return nil, err + } + + boltBucket, err := b.boltBucket.CreateBucketIfNotExists(key) + if err != nil { + str := fmt.Sprintf("failed to create bucket with key %q", key) + return nil, convertErr(str, err) + } + return &bucket{tx: b.tx, boltBucket: boltBucket}, nil +} + +// DeleteBucket removes a nested bucket with the given key. +// +// Returns the following errors as required by the interface contract: +// - ErrTxNotWritable if attempted against a read-only transaction +// - ErrBucketNotFound if the specified bucket does not exist +// - ErrTxNotWritable if attempted against a read-only transaction +// - ErrTxClosed if the transaction has already been closed +// +// This function is part of the database.Bucket interface implementation. +func (b *bucket) DeleteBucket(key []byte) error { + // Ensure transaction state is valid. 
+	if err := b.tx.checkClosed(); err != nil {
+		return err
+	}
+
+	err := b.boltBucket.DeleteBucket(key)
+	if err != nil {
+		str := fmt.Sprintf("failed to delete bucket %q", key)
+		return convertErr(str, err)
+	}
+
+	return nil
+}
+
+// Cursor returns a new cursor, allowing for iteration over the bucket's
+// key/value pairs and nested buckets in forward or backward order.
+//
+// You must seek to a position using the First, Last, or Seek functions before
+// calling the Next, Prev, Key, or Value functions. Failure to do so will
+// result in the same return values as an exhausted cursor, which is false for
+// the Prev and Next functions and nil for the Key and Value functions.
+//
+// This function is part of the database.Bucket interface implementation.
+func (b *bucket) Cursor() database.Cursor {
+	return &cursor{bucket: b, boltCursor: b.boltBucket.Cursor()}
+}
+
+// ForEach invokes the passed function with every key/value pair in the bucket.
+// This does not include nested buckets or the key/value pairs within those
+// nested buckets.
+//
+// WARNING: It is not safe to mutate data while iterating with this method.
+// Doing so may cause the underlying cursor to be invalidated and return
+// unexpected keys and/or values.
+//
+// Returns the following errors as required by the interface contract:
+// - ErrTxClosed if the transaction has already been closed
+//
+// NOTE: The values returned by this function are only valid during a
+// transaction. Attempting to access them after a transaction has ended will
+// likely result in an access violation.
+//
+// This function is part of the database.Bucket interface implementation.
+func (b *bucket) ForEach(fn func(k, v []byte) error) error {
+	// Ensure transaction state is valid.
+	if err := b.tx.checkClosed(); err != nil {
+		return err
+	}
+
+	// Keep track of whether the caller returned an error so it can be
+	// differentiated from a bolt error which needs to be converted.
+	var callerErr error
+	err := b.boltBucket.ForEach(func(k, v []byte) error {
+		if v == nil {
+			return nil
+		}
+		callerErr = fn(k, v)
+		return callerErr
+	})
+	if callerErr != nil {
+		return callerErr
+	}
+	if err != nil {
+		str := "failed while iterating bucket"
+		return convertErr(str, err)
+	}
+
+	return nil
+}
+
+// ForEachBucket invokes the passed function with the key of every nested bucket
+// in the current bucket. This does not include any nested buckets within those
+// nested buckets.
+//
+// WARNING: It is not safe to mutate data while iterating with this method.
+// Doing so may cause the underlying cursor to be invalidated and return
+// unexpected keys and/or values.
+//
+// Returns the following errors as required by the interface contract:
+// - ErrTxClosed if the transaction has already been closed
+//
+// NOTE: The values returned by this function are only valid during a
+// transaction. Attempting to access them after a transaction has ended will
+// likely result in an access violation.
+//
+// This function is part of the database.Bucket interface implementation.
+func (b *bucket) ForEachBucket(fn func(k []byte) error) error {
+	// Ensure transaction state is valid.
+	if err := b.tx.checkClosed(); err != nil {
+		return err
+	}
+
+	// Keep track of whether the caller returned an error so it can be
+	// differentiated from a bolt error which needs to be converted.
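+	// Bolt reports nested buckets through ForEach with a nil value, so
+	// skipping entries with a non-nil value below is what limits the
+	// iteration to sub-bucket keys only.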
+	var callerErr error
+	err := b.boltBucket.ForEach(func(k, v []byte) error {
+		if v != nil {
+			return nil
+		}
+		callerErr = fn(k)
+		return callerErr
+	})
+	if callerErr != nil {
+		return callerErr
+	}
+	if err != nil {
+		str := "failed while iterating bucket"
+		return convertErr(str, err)
+	}
+
+	return nil
+}
+
+// Writable returns whether or not the bucket is writable.
+//
+// This function is part of the database.Bucket interface implementation.
+func (b *bucket) Writable() bool {
+	return b.tx.writable
+}
+
+// Put saves the specified key/value pair to the bucket. Keys that do not
+// already exist are added and keys that already exist are overwritten.
+//
+// Returns the following errors as required by the interface contract:
+// - ErrKeyRequired if the key is empty
+// - ErrIncompatibleValue if the key is the same as an existing bucket
+// - ErrTxNotWritable if attempted against a read-only transaction
+// - ErrTxClosed if the transaction has already been closed
+//
+// This function is part of the database.Bucket interface implementation.
+func (b *bucket) Put(key, value []byte) error {
+	// Ensure transaction state is valid.
+	if err := b.tx.checkClosed(); err != nil {
+		return err
+	}
+
+	err := b.boltBucket.Put(key, value)
+	if err != nil {
+		str := fmt.Sprintf("failed to put value for key %q", key)
+		return convertErr(str, err)
+	}
+
+	return nil
+}
+
+// Get returns the value for the given key. Returns nil if the key does
+// not exist in this bucket.
+//
+// NOTE: The value returned by this function is only valid during a
+// transaction. Attempting to access it after a transaction has ended
+// will likely result in an access violation.
+//
+// This function is part of the database.Bucket interface implementation.
+func (b *bucket) Get(key []byte) []byte {
+	// Ensure transaction state is valid.
+	if err := b.tx.checkClosed(); err != nil {
+		return nil
+	}
+
+	return b.boltBucket.Get(key)
+}
+
+// Delete removes the specified key from the bucket. Deleting a key that does
+// not exist does not return an error.
+//
+// Returns the following errors as required by the interface contract:
+// - ErrKeyRequired if the key is empty
+// - ErrIncompatibleValue if the key is the same as an existing bucket
+// - ErrTxNotWritable if attempted against a read-only transaction
+// - ErrTxClosed if the transaction has already been closed
+//
+// This function is part of the database.Bucket interface implementation.
+func (b *bucket) Delete(key []byte) error {
+	// Ensure transaction state is valid.
+	if err := b.tx.checkClosed(); err != nil {
+		return err
+	}
+
+	err := b.boltBucket.Delete(key)
+	if err != nil {
+		str := fmt.Sprintf("failed to delete key %q", key)
+		return convertErr(str, err)
+	}
+
+	return nil
+}
+
+// pendingBlock houses a block that will be written to disk when the database
+// transaction is committed.
+type pendingBlock struct {
+	hash  *wire.ShaHash
+	bytes []byte
+}
+
+// transaction represents a database transaction. It can either be read-only or
+// read-write and implements the database.Tx interface. The transaction
+// provides a root bucket against which all reads and writes occur.
+type transaction struct {
+	managed    bool     // Is the transaction managed?
+	closed     bool     // Is the transaction closed?
+	writable   bool     // Is the transaction writable?
+	db         *db      // DB instance the tx was created from.
+	boltTx     *bolt.Tx // Underlying bolt tx for metadata storage.
+	metaBucket *bucket  // The metadata bucket in underlying bolt DB.
+	blockIdxBucket *bucket // The block index bucket.
+
+	// Blocks that need to be stored on commit. The pendingBlocks map is
+	// kept to allow quick lookups of pending data by block hash.
+	pendingBlocks    map[wire.ShaHash]int
+	pendingBlockData []pendingBlock
+}
+
+// Enforce transaction implements the database.Tx interface.
+var _ database.Tx = (*transaction)(nil)
+
+// checkClosed returns an error if the database or transaction is closed.
+func (tx *transaction) checkClosed() error {
+	// The transaction is no longer valid if it has been closed.
+	if tx.closed {
+		return makeDbErr(database.ErrTxClosed, errTxClosedStr, nil)
+	}
+
+	return nil
+}
+
+// Metadata returns the top-most bucket for all metadata storage.
+//
+// This function is part of the database.Tx interface implementation.
+func (tx *transaction) Metadata() database.Bucket {
+	return tx.metaBucket
+}
+
+// hasBlock returns whether or not a block with the given hash exists.
+func (tx *transaction) hasBlock(hash *wire.ShaHash) bool {
+	// Return true if the block is pending to be written on commit since
+	// it exists from the viewpoint of this transaction.
+	if _, exists := tx.pendingBlocks[*hash]; exists {
+		return true
+	}
+
+	// Bolt is zero-copy so this doesn't incur additional overhead of
+	// loading the entry.
+	return tx.blockIdxBucket.Get(hash[:]) != nil
+}
+
+// StoreBlock stores the provided block into the database. There are no checks
+// to ensure the block connects to a previous block, contains double spends, or
+// any additional functionality such as transaction indexing. It simply stores
+// the block in the database.
+//
+// Returns the following errors as required by the interface contract:
+// - ErrBlockExists when the block hash already exists
+// - ErrTxNotWritable if attempted against a read-only transaction
+// - ErrTxClosed if the transaction has already been closed
+//
+// This function is part of the database.Tx interface implementation.
+func (tx *transaction) StoreBlock(block *btcutil.Block) error {
+	// Ensure transaction state is valid.
+	if err := tx.checkClosed(); err != nil {
+		return err
+	}
+
+	// Ensure the transaction is writable.
+	if !tx.writable {
+		str := "store block requires a writable database transaction"
+		return makeDbErr(database.ErrTxNotWritable, str, nil)
+	}
+
+	// Reject the block if it already exists.
+	blockHash := block.Sha()
+	if tx.hasBlock(blockHash) {
+		str := fmt.Sprintf("block %s already exists", blockHash)
+		return makeDbErr(database.ErrBlockExists, str, nil)
+	}
+
+	blockBytes, err := block.Bytes()
+	if err != nil {
+		str := fmt.Sprintf("failed to get serialized bytes for block %s",
+			blockHash)
+		return makeDbErr(database.ErrDriverSpecific, str, err)
+	}
+
+	// Add the block to be stored to the list of pending blocks to store
+	// when the transaction is committed. Also, add it to the pending
+	// blocks map so it is easy to determine the block is pending based on
+	// the block hash.
+	if tx.pendingBlocks == nil {
+		tx.pendingBlocks = make(map[wire.ShaHash]int)
+	}
+	tx.pendingBlocks[*blockHash] = len(tx.pendingBlockData)
+	tx.pendingBlockData = append(tx.pendingBlockData, pendingBlock{
+		hash:  blockHash,
+		bytes: blockBytes,
+	})
+	log.Tracef("Added block %s to pending blocks", blockHash)
+
+	return nil
+}
+
+// HasBlock returns whether or not a block with the given hash exists in the
+// database.
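+// This includes blocks that are only pending to be written when the current
+// transaction commits.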
+// +// Returns the following errors as required by the interface contract: +// - ErrTxClosed if the transaction has already been closed +// +// This function is part of the database.Tx interface implementation. +func (tx *transaction) HasBlock(hash *wire.ShaHash) (bool, error) { + // Ensure transaction state is valid. + if err := tx.checkClosed(); err != nil { + return false, err + } + + return tx.hasBlock(hash), nil +} + +// HasBlocks returns whether or not the blocks with the provided hashes +// exist in the database. +// +// Returns the following errors as required by the interface contract: +// - ErrTxClosed if the transaction has already been closed +// +// This function is part of the database.Tx interface implementation. +func (tx *transaction) HasBlocks(hashes []wire.ShaHash) ([]bool, error) { + // Ensure transaction state is valid. + if err := tx.checkClosed(); err != nil { + return nil, err + } + + results := make([]bool, len(hashes)) + for i := range hashes { + results[i] = tx.hasBlock(&hashes[i]) + } + + return results, nil +} + +// fetchBlockRow fetches the metadata stored in the block index for the provided +// hash. It will return ErrBlockNotFound if there is no entry and ErrCorruption +// if the checksum of the entry doesn't match. +func (tx *transaction) fetchBlockRow(hash *wire.ShaHash) ([]byte, error) { + blockRow := tx.blockIdxBucket.Get(hash[:]) + if blockRow == nil { + str := fmt.Sprintf("block %s does not exist", hash) + return nil, makeDbErr(database.ErrBlockNotFound, str, nil) + } + + // Ensure the block row checksum matches. The checksum is at the end. + gotChecksum := crc32.Checksum(blockRow[:checksumOffset], castagnoli) + wantChecksumBytes := blockRow[checksumOffset : checksumOffset+4] + wantChecksum := byteOrder.Uint32(wantChecksumBytes) + if gotChecksum != wantChecksum { + str := fmt.Sprintf("metadata for block %s does not match "+ + "the expected checksum - got %d, want %d", hash, + gotChecksum, wantChecksum) + return nil, makeDbErr(database.ErrCorruption, str, nil) + } + + return blockRow, nil +} + +// FetchBlockHeader returns the raw serialized bytes for the block header +// identified by the given hash. The raw bytes are in the format returned by +// Serialize on a wire.BlockHeader. +// +// Returns the following errors as required by the interface contract: +// - ErrBlockNotFound if the requested block hash does not exist +// - ErrTxClosed if the transaction has already been closed +// - ErrCorruption if the database has somehow become corrupted +// +// NOTE: The data returned by this function is only valid during a +// database transaction. Attempting to access it after a transaction +// has ended results in undefined behavior. This constraint prevents +// additional data copies and allows support for memory-mapped database +// implementations. +// +// This function is part of the database.Tx interface implementation. +func (tx *transaction) FetchBlockHeader(hash *wire.ShaHash) ([]byte, error) { + // Ensure transaction state is valid. + if err := tx.checkClosed(); err != nil { + return nil, err + } + + // When the block is pending to be written on commit return the bytes + // from there. + if idx, exists := tx.pendingBlocks[*hash]; exists { + blockBytes := tx.pendingBlockData[idx].bytes + return blockBytes[0:blockHdrSize:blockHdrSize], nil + } + + // Fetch the block index row and slice off the header. Notice the use + // of the cap on the subslice to prevent the caller from accidentally + // appending into the db data. 
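+	// In other words, the full slice expression below sets the capacity of
+	// the returned subslice equal to its length, so a subsequent append by
+	// the caller forces a copy instead of scribbling over the adjacent row
+	// data.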
+	blockRow, err := tx.fetchBlockRow(hash)
+	if err != nil {
+		return nil, err
+	}
+	endOffset := blockLocSize + blockHdrSize
+	return blockRow[blockLocSize:endOffset:endOffset], nil
+}
+
+// FetchBlockHeaders returns the raw serialized bytes for the block headers
+// identified by the given hashes. The raw bytes are in the format returned by
+// Serialize on a wire.BlockHeader.
+//
+// Returns the following errors as required by the interface contract:
+// - ErrBlockNotFound if any of the requested block hashes do not exist
+// - ErrTxClosed if the transaction has already been closed
+// - ErrCorruption if the database has somehow become corrupted
+//
+// NOTE: The data returned by this function is only valid during a database
+// transaction. Attempting to access it after a transaction has ended results
+// in undefined behavior. This constraint prevents additional data copies and
+// allows support for memory-mapped database implementations.
+//
+// This function is part of the database.Tx interface implementation.
+func (tx *transaction) FetchBlockHeaders(hashes []wire.ShaHash) ([][]byte, error) {
+	// Ensure transaction state is valid.
+	if err := tx.checkClosed(); err != nil {
+		return nil, err
+	}
+
+	// NOTE: This could check for the existence of all blocks before
+	// loading any of the headers, which would be faster in the failure
+	// case; however, callers will not typically be calling this function
+	// with invalid values, so optimize for the common case.
+
+	// Load the headers.
+	headers := make([][]byte, len(hashes))
+	for i := range hashes {
+		hash := &hashes[i]
+
+		// When the block is pending to be written on commit return the
+		// bytes from there.
+		if idx, exists := tx.pendingBlocks[*hash]; exists {
+			blkBytes := tx.pendingBlockData[idx].bytes
+			headers[i] = blkBytes[0:blockHdrSize:blockHdrSize]
+			continue
+		}
+
+		// Fetch the block index row and slice off the header. Notice
+		// the use of the cap on the subslice to prevent the caller
+		// from accidentally appending into the db data.
+		blockRow, err := tx.fetchBlockRow(hash)
+		if err != nil {
+			return nil, err
+		}
+		endOffset := blockLocSize + blockHdrSize
+		headers[i] = blockRow[blockLocSize:endOffset:endOffset]
+	}
+
+	return headers, nil
+}
+
+// FetchBlock returns the raw serialized bytes for the block identified by the
+// given hash. The raw bytes are in the format returned by Serialize on a
+// wire.MsgBlock.
+//
+// Returns the following errors as required by the interface contract:
+// - ErrBlockNotFound if the requested block hash does not exist
+// - ErrTxClosed if the transaction has already been closed
+// - ErrCorruption if the database has somehow become corrupted
+//
+// In addition, returns ErrDriverSpecific if any failures occur when reading the
+// block files.
+//
+// NOTE: The data returned by this function is only valid during a database
+// transaction. Attempting to access it after a transaction has ended results
+// in undefined behavior. This constraint prevents additional data copies and
+// allows support for memory-mapped database implementations.
+//
+// This function is part of the database.Tx interface implementation.
+func (tx *transaction) FetchBlock(hash *wire.ShaHash) ([]byte, error) {
+	// Ensure transaction state is valid.
+	if err := tx.checkClosed(); err != nil {
+		return nil, err
+	}
+
+	// When the block is pending to be written on commit return the bytes
+	// from there.
+	if idx, exists := tx.pendingBlocks[*hash]; exists {
+		return tx.pendingBlockData[idx].bytes, nil
+	}
+
+	// Lookup the location of the block in the files from the block index.
+	blockRow, err := tx.fetchBlockRow(hash)
+	if err != nil {
+		return nil, err
+	}
+	location := deserializeBlockLoc(blockRow)
+
+	// Read the block from the appropriate location. The function also
+	// performs a checksum over the data to detect data corruption.
+	blockBytes, err := tx.db.store.readBlock(hash, location)
+	if err != nil {
+		return nil, err
+	}
+
+	return blockBytes, nil
+}
+
+// FetchBlocks returns the raw serialized bytes for the blocks identified by the
+// given hashes. The raw bytes are in the format returned by Serialize on a
+// wire.MsgBlock.
+//
+// Returns the following errors as required by the interface contract:
+// - ErrBlockNotFound if any of the requested block hashes do not exist
+// - ErrTxClosed if the transaction has already been closed
+// - ErrCorruption if the database has somehow become corrupted
+//
+// In addition, returns ErrDriverSpecific if any failures occur when reading the
+// block files.
+//
+// NOTE: The data returned by this function is only valid during a database
+// transaction. Attempting to access it after a transaction has ended results
+// in undefined behavior. This constraint prevents additional data copies and
+// allows support for memory-mapped database implementations.
+//
+// This function is part of the database.Tx interface implementation.
+func (tx *transaction) FetchBlocks(hashes []wire.ShaHash) ([][]byte, error) {
+	// Ensure transaction state is valid.
+	if err := tx.checkClosed(); err != nil {
+		return nil, err
+	}
+
+	// NOTE: This could check for the existence of all blocks before
+	// loading any of them, which would be faster in the failure case;
+	// however, callers will not typically be calling this function with
+	// invalid values, so optimize for the common case.
+
+	// Load the blocks.
+	blocks := make([][]byte, len(hashes))
+	for i := range hashes {
+		var err error
+		blocks[i], err = tx.FetchBlock(&hashes[i])
+		if err != nil {
+			return nil, err
+		}
+	}
+
+	return blocks, nil
+}
+
+// fetchPendingRegion attempts to fetch the provided region from any blocks
+// which are pending to be written on commit. It will return nil for the byte
+// slice when the region references a block which is not pending. When the
+// region does reference a pending block, it is bounds checked and returns
+// ErrBlockRegionInvalid if invalid.
+func (tx *transaction) fetchPendingRegion(region *database.BlockRegion) ([]byte, error) {
+	// Nothing to do if the block is not pending to be written on commit.
+	idx, exists := tx.pendingBlocks[*region.Hash]
+	if !exists {
+		return nil, nil
+	}
+
+	// Ensure the region is within the bounds of the block.
+	blockBytes := tx.pendingBlockData[idx].bytes
+	blockLen := uint32(len(blockBytes))
+	endOffset := region.Offset + region.Len
+	if endOffset < region.Offset || endOffset > blockLen {
+		str := fmt.Sprintf("block %s region offset %d, length %d "+
+			"exceeds block length of %d", region.Hash,
+			region.Offset, region.Len, blockLen)
+		return nil, makeDbErr(database.ErrBlockRegionInvalid, str, nil)
+	}
+
+	// Return the bytes from the pending block.
+	return blockBytes[region.Offset:endOffset:endOffset], nil
+}
+
+// FetchBlockRegion returns the raw serialized bytes for the given block region.
+//
+// For example, it is possible to directly extract Bitcoin transactions and/or
+// scripts from a block with this function.
Depending on the backend
+// implementation, this can provide significant savings by avoiding the need to
+// load entire blocks.
+//
+// The raw bytes are in the format returned by Serialize on a wire.MsgBlock and
+// the Offset field in the provided BlockRegion is zero-based and relative to
+// the start of the block (byte 0).
+//
+// Returns the following errors as required by the interface contract:
+// - ErrBlockNotFound if the requested block hash does not exist
+// - ErrBlockRegionInvalid if the region exceeds the bounds of the associated
+//   block
+// - ErrTxClosed if the transaction has already been closed
+// - ErrCorruption if the database has somehow become corrupted
+//
+// In addition, returns ErrDriverSpecific if any failures occur when reading the
+// block files.
+//
+// NOTE: The data returned by this function is only valid during a database
+// transaction. Attempting to access it after a transaction has ended results
+// in undefined behavior. This constraint prevents additional data copies and
+// allows support for memory-mapped database implementations.
+//
+// This function is part of the database.Tx interface implementation.
+func (tx *transaction) FetchBlockRegion(region *database.BlockRegion) ([]byte, error) {
+	// Ensure transaction state is valid.
+	if err := tx.checkClosed(); err != nil {
+		return nil, err
+	}
+
+	// When the block is pending to be written on commit return the bytes
+	// from there.
+	if tx.pendingBlocks != nil {
+		regionBytes, err := tx.fetchPendingRegion(region)
+		if err != nil {
+			return nil, err
+		}
+		if regionBytes != nil {
+			return regionBytes, nil
+		}
+	}
+
+	// Lookup the location of the block in the files from the block index.
+	blockRow, err := tx.fetchBlockRow(region.Hash)
+	if err != nil {
+		return nil, err
+	}
+	location := deserializeBlockLoc(blockRow)
+
+	// Ensure the region is within the bounds of the block.
+	endOffset := region.Offset + region.Len
+	if endOffset < region.Offset || endOffset > location.blockLen {
+		str := fmt.Sprintf("block %s region offset %d, length %d "+
+			"exceeds block length of %d", region.Hash,
+			region.Offset, region.Len, location.blockLen)
+		return nil, makeDbErr(database.ErrBlockRegionInvalid, str, nil)
+	}
+
+	// Read the region from the appropriate disk block file.
+	regionBytes, err := tx.db.store.readBlockRegion(location, region.Offset,
+		region.Len)
+	if err != nil {
+		return nil, err
+	}
+
+	return regionBytes, nil
+}
+
+// FetchBlockRegions returns the raw serialized bytes for the given block
+// regions.
+//
+// For example, it is possible to directly extract Bitcoin transactions and/or
+// scripts from various blocks with this function. Depending on the backend
+// implementation, this can provide significant savings by avoiding the need to
+// load entire blocks.
+//
+// The raw bytes are in the format returned by Serialize on a wire.MsgBlock and
+// the Offset fields in the provided BlockRegions are zero-based and relative to
+// the start of the block (byte 0).
+//
+// Returns the following errors as required by the interface contract:
+// - ErrBlockNotFound if any of the requested block hashes do not exist
+// - ErrBlockRegionInvalid if one or more regions exceed the bounds of the
+//   associated block
+// - ErrTxClosed if the transaction has already been closed
+// - ErrCorruption if the database has somehow become corrupted
+//
+// In addition, returns ErrDriverSpecific if any failures occur when reading the
+// block files.
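+//
+// For example, the following is an illustrative sketch only (the hashes,
+// offsets, and lengths are assumed to come from an external index such as a
+// transaction index):
+//
+//	regions := []database.BlockRegion{
+//		{Hash: blockHashA, Offset: 81, Len: 134},
+//		{Hash: blockHashB, Offset: 81, Len: 215},
+//	}
+//	txnBytes, err := tx.FetchBlockRegions(regions)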
+//
+// NOTE: The data returned by this function is only valid during a database
+// transaction. Attempting to access it after a transaction has ended results
+// in undefined behavior. This constraint prevents additional data copies and
+// allows support for memory-mapped database implementations.
+//
+// This function is part of the database.Tx interface implementation.
+func (tx *transaction) FetchBlockRegions(regions []database.BlockRegion) ([][]byte, error) {
+	// Ensure transaction state is valid.
+	if err := tx.checkClosed(); err != nil {
+		return nil, err
+	}
+
+	// NOTE: This could check for the existence of all blocks before
+	// deserializing the locations and building up the fetch list which
+	// would be faster in the failure case, however callers will not
+	// typically be calling this function with invalid values, so optimize
+	// for the common case.
+
+	// NOTE: A potential optimization here would be to combine adjacent
+	// regions to reduce the number of reads.
+
+	// In order to improve efficiency of loading the bulk data, first grab
+	// the block location for all of the requested block hashes and sort
+	// the reads by filenum:offset so that all reads are grouped by file
+	// and linear within each file. This can result in quite a significant
+	// performance increase depending on how spread out the requested hashes
+	// are by reducing the number of file open/closes and random accesses
+	// needed. The fetchList is intentionally allocated with a cap because
+	// some of the regions might be fetched from the pending blocks and
+	// hence there is no need to fetch those from disk.
+	blockRegions := make([][]byte, len(regions))
+	fetchList := make([]bulkFetchData, 0, len(regions))
+	for i := range regions {
+		region := &regions[i]
+
+		// When the block is pending to be written on commit grab the
+		// bytes from there.
+		if tx.pendingBlocks != nil {
+			regionBytes, err := tx.fetchPendingRegion(region)
+			if err != nil {
+				return nil, err
+			}
+			if regionBytes != nil {
+				blockRegions[i] = regionBytes
+				continue
+			}
+		}
+
+		// Lookup the location of the block in the files from the block
+		// index.
+		blockRow, err := tx.fetchBlockRow(region.Hash)
+		if err != nil {
+			return nil, err
+		}
+		location := deserializeBlockLoc(blockRow)
+
+		// Ensure the region is within the bounds of the block.
+		endOffset := region.Offset + region.Len
+		if endOffset < region.Offset || endOffset > location.blockLen {
+			str := fmt.Sprintf("block %s region offset %d, length "+
+				"%d exceeds block length of %d", region.Hash,
+				region.Offset, region.Len, location.blockLen)
+			return nil, makeDbErr(database.ErrBlockRegionInvalid, str, nil)
+		}
+
+		fetchList = append(fetchList, bulkFetchData{&location, i})
+	}
+	sort.Sort(bulkFetchDataSorter(fetchList))
+
+	// Read all of the regions in the fetch list and set the results.
+	for i := range fetchList {
+		fetchData := &fetchList[i]
+		ri := fetchData.replyIndex
+		region := &regions[ri]
+		location := fetchData.blockLocation
+		regionBytes, err := tx.db.store.readBlockRegion(*location,
+			region.Offset, region.Len)
+		if err != nil {
+			return nil, err
+		}
+		blockRegions[ri] = regionBytes
+	}
+
+	return blockRegions, nil
+}
+
+// close marks the transaction closed, releases any pending data, and releases
+// the transaction read lock.
+func (tx *transaction) close() {
+	tx.closed = true
+
+	// Clear pending blocks that would have been written on commit.
+	tx.pendingBlocks = nil
+	tx.pendingBlockData = nil
+
+	tx.db.mtx.RUnlock()
+}
+
+// serializeBlockRow serializes a block row into a format suitable for storage
+// into the block index.
+func serializeBlockRow(blockLoc blockLocation, blockHdr []byte) []byte {
+	// The serialized block index row format is:
+	//
+	//   [0:blockLocSize]                          Block location
+	//   [blockLocSize:blockLocSize+blockHdrSize]  Block header
+	//   [checksumOffset:checksumOffset+4]         Castagnoli CRC-32 checksum
+	serializedRow := make([]byte, blockLocSize+blockHdrSize+4)
+	copy(serializedRow, serializeBlockLoc(blockLoc))
+	copy(serializedRow[blockHdrOffset:], blockHdr)
+	checksum := crc32.Checksum(serializedRow[:checksumOffset], castagnoli)
+	byteOrder.PutUint32(serializedRow[checksumOffset:], checksum)
+	return serializedRow
+}
+
+// writePendingAndCommit writes pending block data to the flat block files,
+// updates the metadata with their locations as well as the new current write
+// location, and commits the metadata to the underlying bolt database. It also
+// properly handles rollback in the case of failures.
+//
+// This function MUST only be called when there is pending data to be written.
+func (tx *transaction) writePendingAndCommit() error {
+	// Save the current block store write position for potential rollback.
+	// These variables are only updated here in this function and there can
+	// only be one write transaction active at a time, so it's safe to store
+	// them for potential rollback.
+	wc := tx.db.store.writeCursor
+	wc.RLock()
+	oldBlkFileNum := wc.curFileNum
+	oldBlkOffset := wc.curOffset
+	wc.RUnlock()
+
+	// rollback is a closure that is used to roll back all writes to the
+	// block files. It also optionally rolls back the underlying bolt
+	// transaction.
+	rollback := func(rollbackBolt bool) {
+		// Rollback any modification made to the block files and the
+		// underlying bolt transaction if needed.
+		tx.db.store.handleRollback(oldBlkFileNum, oldBlkOffset)
+		if rollbackBolt {
+			_ = tx.boltTx.Rollback()
+		}
+	}
+
+	// Loop through all of the pending blocks to store and write them.
+	for _, blockData := range tx.pendingBlockData {
+		log.Tracef("Storing block %s", blockData.hash)
+		location, err := tx.db.store.writeBlock(blockData.bytes)
+		if err != nil {
+			rollback(true)
+			return err
+		}
+
+		// Add a record in the block index for the block. The record
+		// includes the location information needed to locate the block
+		// on the filesystem as well as the block header since they are
+		// so commonly needed.
+		blockHdr := blockData.bytes[0:blockHdrSize]
+		blockRow := serializeBlockRow(location, blockHdr)
+		err = tx.blockIdxBucket.Put(blockData.hash[:], blockRow)
+		if err != nil {
+			rollback(true)
+			return err
+		}
+	}
+
+	// Update the metadata for the current write file and offset.
+	writeRow := serializeWriteRow(wc.curFileNum, wc.curOffset)
+	if err := tx.metaBucket.Put(writeLocKeyName, writeRow); err != nil {
+		rollback(true)
+		return convertErr("failed to store write cursor", err)
+	}
+
+	// Commit metadata updates.
+	if err := tx.boltTx.Commit(); err != nil {
+		rollback(false)
+		return convertErr("failed to commit transaction", err)
+	}
+
+	return nil
+}
+
+// rollback rolls back the underlying bolt database and closes the transaction.
+// It is separated mainly so the code that panics on attempts to commit or
+// rollback a managed transaction can roll back first.
+func (tx *transaction) rollback() error {
+	// Regardless of whether the rollback succeeds, the transaction is
+	// closed on return.
+	err := tx.boltTx.Rollback()
+	tx.close()
+	if err != nil {
+		return convertErr("failed to rollback underlying bolt tx", err)
+	}
+
+	return nil
+}
+
+// Commit commits all changes that have been made through the root bucket and
+// all of its sub-buckets to persistent storage.
+//
+// This function is part of the database.Tx interface implementation.
+func (tx *transaction) Commit() error {
+	// Prevent commits on managed transactions.
+	if tx.managed {
+		_ = tx.rollback()
+		panic("managed transaction commit not allowed")
+	}
+
+	// Ensure transaction state is valid.
+	if err := tx.checkClosed(); err != nil {
+		return err
+	}
+
+	// Regardless of whether the commit succeeds, the transaction is closed
+	// on return. This is done as a defer since some of the committing code
+	// requires the transaction to be open.
+	defer tx.close()
+
+	// Ensure the transaction is writable.
+	if !tx.writable {
+		str := "Commit requires a writable database transaction"
+		return makeDbErr(database.ErrTxNotWritable, str, nil)
+	}
+
+	// When there is no pending block data to be written, just commit the
+	// underlying bolt transaction and exit.
+	if len(tx.pendingBlockData) == 0 {
+		if err := tx.boltTx.Commit(); err != nil {
+			return convertErr("failed to commit transaction", err)
+		}
+
+		return nil
+	}
+
+	// Otherwise, there is pending block data to be written, so write it
+	// along with the necessary metadata. The function will rollback if
+	// any errors occur.
+	return tx.writePendingAndCommit()
+}
+
+// Rollback undoes all changes that have been made to the root bucket and all of
+// its sub-buckets.
+//
+// This function is part of the database.Tx interface implementation.
+func (tx *transaction) Rollback() error {
+	// Prevent rollbacks on managed transactions.
+	if tx.managed {
+		_ = tx.rollback()
+		panic("managed transaction rollback not allowed")
+	}
+
+	// Ensure transaction state is valid.
+	if err := tx.checkClosed(); err != nil {
+		return err
+	}
+
+	return tx.rollback()
+}
+
+// db represents a persistent block and metadata store and implements the
+// database.DB interface. All database access is performed through
+// transactions which are obtained through the DB instance.
+type db struct {
+	mtx    sync.RWMutex // Protect concurrent access.
+	closed bool         // Is the database closed?
+	boltDB *bolt.DB     // The underlying bolt DB for metadata.
+	store  *blockStore  // Handles read/writing blocks to flat files.
+}
+
+// Enforce db implements the database.DB interface.
+var _ database.DB = (*db)(nil)
+
+// Type returns the database driver type the current database instance was
+// created with.
+//
+// This function is part of the database.DB interface implementation.
+func (db *db) Type() string {
+	return dbType
+}
+
+// begin is the implementation function for the Begin database method. See its
+// documentation for more details.
+//
+// This function is only separate because it returns the internal transaction
+// which is used by the managed transaction code while the database method
+// returns the interface.
+func (db *db) begin(writable bool) (*transaction, error) {
+	// Whenever a new transaction is started, grab a read lock against the
+	// database to ensure Close will wait for the transaction to finish.
+	// This lock will not be released until the transaction is closed (via
+	// Rollback or Commit).
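+	// Note this is a read lock on the database mutex rather than the bolt
+	// lock, so multiple transactions, including the single writer, may
+	// hold it concurrently while Close blocks on the corresponding write
+	// lock.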
+	db.mtx.RLock()
+	if db.closed {
+		db.mtx.RUnlock()
+		return nil, makeDbErr(database.ErrDbNotOpen, errDbNotOpenStr,
+			nil)
+	}
+
+	// Bolt already handles allowing multiple concurrent read transactions
+	// while only allowing a single write transaction, so make use of
+	// that functionality.
+	boltTx, err := db.boltDB.Begin(writable)
+	if err != nil {
+		db.mtx.RUnlock()
+		str := "failed to open transaction"
+		return nil, convertErr(str, err)
+	}
+
+	metaBucket := boltTx.Bucket(metadataBucketName)
+	blockIdxBucket := metaBucket.Bucket(blockIdxBucketName)
+	tx := &transaction{
+		writable:       writable,
+		db:             db,
+		boltTx:         boltTx,
+		metaBucket:     &bucket{boltBucket: metaBucket},
+		blockIdxBucket: &bucket{boltBucket: blockIdxBucket},
+	}
+	tx.metaBucket.tx = tx
+	tx.blockIdxBucket.tx = tx
+	return tx, nil
+}
+
+// Begin starts a transaction which is either read-only or read-write depending
+// on the specified flag. Multiple read-only transactions can be started
+// simultaneously while only a single read-write transaction can be started at a
+// time. The call will block when starting a read-write transaction when one is
+// already open.
+//
+// NOTE: The transaction must be closed by calling Rollback or Commit on it when
+// it is no longer needed. Failure to do so will result in unclaimed memory.
+//
+// This function is part of the database.DB interface implementation.
+func (db *db) Begin(writable bool) (database.Tx, error) {
+	return db.begin(writable)
+}
+
+// rollbackOnPanic rolls the passed transaction back if the code in the calling
+// function panics. This is needed since the mutex on a transaction must be
+// released and a panic in called code would prevent that from happening.
+//
+// NOTE: This can only be handled manually for managed transactions since they
+// control the life-cycle of the transaction. As the documentation on Begin
+// calls out, callers opting to use manual transactions will have to ensure the
+// transaction is rolled back on panic if they desire that functionality as
+// well, or the database will fail to close since the read-lock will never be
+// released.
+func rollbackOnPanic(tx *transaction) {
+	if err := recover(); err != nil {
+		tx.managed = false
+		_ = tx.Rollback()
+		panic(err)
+	}
+}
+
+// View invokes the passed function in the context of a managed read-only
+// transaction with the root metadata bucket. Any errors returned from
+// the user-supplied function are returned from this function.
+//
+// This function is part of the database.DB interface implementation.
+func (db *db) View(fn func(database.Tx) error) error {
+	// Start a read-only transaction.
+	tx, err := db.begin(false)
+	if err != nil {
+		return err
+	}
+
+	// Since the user-provided function might panic, ensure the transaction
+	// releases all mutexes and resources. There is no guarantee the caller
+	// won't use recover and keep going. Thus, the database must still be
+	// in a usable state on panics due to user issues.
+	defer rollbackOnPanic(tx)
+
+	tx.managed = true
+	err = fn(tx)
+	tx.managed = false
+	if err != nil {
+		// The error is ignored here because nothing was written yet
+		// and regardless of a rollback failure, the tx is closed now
+		// anyways.
+		_ = tx.Rollback()
+		return err
+	}
+
+	return tx.Rollback()
+}
+
+// Update invokes the passed function in the context of a managed read-write
+// transaction with the root metadata bucket.
Any errors returned from
+// the user-supplied function will cause the transaction to be rolled back and
+// are returned from this function. Otherwise, the transaction is committed
+// when the user-supplied function returns a nil error.
+//
+// This function is part of the database.DB interface implementation.
+func (db *db) Update(fn func(database.Tx) error) error {
+	// Start a read-write transaction.
+	tx, err := db.begin(true)
+	if err != nil {
+		return err
+	}
+
+	// Since the user-provided function might panic, ensure the transaction
+	// releases all mutexes and resources. There is no guarantee the caller
+	// won't use recover and keep going. Thus, the database must still be
+	// in a usable state on panics due to user issues.
+	defer rollbackOnPanic(tx)
+
+	tx.managed = true
+	err = fn(tx)
+	tx.managed = false
+	if err != nil {
+		// The error is ignored here because nothing was written yet
+		// and regardless of a rollback failure, the tx is closed now
+		// anyways.
+		_ = tx.Rollback()
+		return err
+	}
+
+	return tx.Commit()
+}
+
+// Close cleanly shuts down the database and syncs all data. Any data in
+// database transactions which have not been committed will be lost, so it is
+// important to ensure all transactions are finalized prior to calling this
+// function if that data is intended to be stored.
+//
+// This function is part of the database.DB interface implementation.
+func (db *db) Close() error {
+	// Since all transactions have a read lock on this mutex, this will
+	// cause Close to wait for all readers to complete.
+	db.mtx.Lock()
+	defer db.mtx.Unlock()
+
+	if db.closed {
+		return makeDbErr(database.ErrDbNotOpen, errDbNotOpenStr, nil)
+	}
+	db.closed = true
+
+	// NOTE: Since the above lock waits for all transactions to finish and
+	// prevents any new ones from being started, it is safe to clear all
+	// state without the individual locks.
+
+	// Close any open flat files that house the blocks.
+	wc := db.store.writeCursor
+	if wc.curFile.file != nil {
+		_ = wc.curFile.file.Close()
+		wc.curFile.file = nil
+	}
+	for _, blockFile := range db.store.openBlockFiles {
+		_ = blockFile.file.Close()
+	}
+	db.store.openBlockFiles = nil
+	db.store.openBlocksLRU.Init()
+	db.store.fileNumToLRUElem = nil
+
+	if err := db.boltDB.Close(); err != nil {
+		str := "failed to close underlying bolt database"
+		return convertErr(str, err)
+	}
+
+	return nil
+}
+
+// fileExists reports whether the named file or directory exists.
+func fileExists(name string) bool {
+	if _, err := os.Stat(name); err != nil {
+		if os.IsNotExist(err) {
+			return false
+		}
+	}
+	return true
+}
+
+// initBoltDB creates the initial buckets and values used by the package. This
+// is mainly in a separate function for testing purposes.
+func initBoltDB(boltDB *bolt.DB) error {
+	err := boltDB.Update(func(tx *bolt.Tx) error {
+		// All metadata is housed in the metadata bucket at the
+		// root of the database.
+		metaBucket, err := tx.CreateBucket(metadataBucketName)
+		if err != nil {
+			return err
+		}
+
+		// Create the internal block index bucket.
+		_, err = metaBucket.CreateBucket(blockIdxBucketName)
+		if err != nil {
+			return err
+		}
+
+		// The starting block file write cursor location is file num 0,
+		// offset 0.
+		return metaBucket.Put(writeLocKeyName, serializeWriteRow(0, 0))
+	})
+	if err != nil {
+		str := fmt.Sprintf("failed to initialize metadata database: %v",
+			err)
+		return convertErr(str, err)
+	}
+
+	return nil
+}
+
+// openDB opens the database at the provided path.
database.ErrDbDoesNotExist +// is returned if the database doesn't exist and the create flag is not set. +func openDB(dbPath string, network wire.BitcoinNet, create bool) (database.DB, error) { + // Error if the database doesn't exist and the create flag is not set. + metadataDbPath := filepath.Join(dbPath, metadataDbName) + dbExists := fileExists(metadataDbPath) + if !create && !dbExists { + str := fmt.Sprintf("database %q does not exist", metadataDbPath) + return nil, makeDbErr(database.ErrDbDoesNotExist, str, nil) + } + + // Ensure the full path to the database exists. + if !dbExists { + // The error can be ignored here since the call to bolt.Open + // will fail if the directory couldn't be created. + _ = os.MkdirAll(dbPath, 0700) + } + + // Open the bolt metadata database (will create it if needed). + boltDB, err := bolt.Open(metadataDbPath, 0600, nil) + if err != nil { + return nil, convertErr(err.Error(), err) + } + + // Create the block store which includes scanning the existing flat + // block files to find what the current write cursor position is + // according to the data that is actually on disk. + store := newBlockStore(dbPath, network) + pdb := &db{boltDB: boltDB, store: store} + + // Perform any reconciliation needed between the block and metadata as + // well as bolt database initialization, if needed. + return reconcileDB(pdb, create) +} diff --git a/database2/ffboltdb/doc.go b/database2/ffboltdb/doc.go new file mode 100644 index 00000000000..1adb192605e --- /dev/null +++ b/database2/ffboltdb/doc.go @@ -0,0 +1,30 @@ +// Copyright (c) 2015 The btcsuite developers +// Use of this source code is governed by an ISC +// license that can be found in the LICENSE file. + +/* +Package ffboltdb implements a driver for the database package that uses boltdb +for the backing metadata and flat files for block storage. + +This driver is the recommended driver for use with btcd. It has a strong focus +on speed, efficiency, and robustness. It makes use of zero-copy memory mapping +for the metadata, flat files for block storage, and checksums in key areas to +ensure data integrity. + +Usage + +This package is a driver to the database package and provides the database type +of "ffboltdb". The parameters the Open and Create functions take are the +database path as a string and the block network: + + db, err := database.Open("ffboltdb", "path/to/database", wire.MainNet) + if err != nil { + // Handle error + } + + db, err := database.Create("ffboltdb", "path/to/database", wire.MainNet) + if err != nil { + // Handle error + } +*/ +package ffboltdb diff --git a/database2/ffboltdb/driver.go b/database2/ffboltdb/driver.go new file mode 100644 index 00000000000..4237213cf19 --- /dev/null +++ b/database2/ffboltdb/driver.go @@ -0,0 +1,84 @@ +// Copyright (c) 2015 The btcsuite developers +// Use of this source code is governed by an ISC +// license that can be found in the LICENSE file. + +package ffboltdb + +import ( + "fmt" + + database "github.com/btcsuite/btcd/database2" + "github.com/btcsuite/btcd/wire" + "github.com/btcsuite/btclog" +) + +var log = btclog.Disabled + +const ( + dbType = "ffboltdb" +) + +// parseArgs parses the arguments from the database Open/Create methods. 
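+//
+// For example, the variadic arguments from a call such as
+// database.Open("ffboltdb", dbPath, wire.MainNet) arrive here as
+// ("Open", dbPath, wire.MainNet).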
+func parseArgs(funcName string, args ...interface{}) (string, wire.BitcoinNet, error) {
+	if len(args) != 2 {
+		return "", 0, fmt.Errorf("invalid arguments to %s.%s -- "+
+			"expected database path and block network", dbType,
+			funcName)
+	}
+
+	dbPath, ok := args[0].(string)
+	if !ok {
+		return "", 0, fmt.Errorf("first argument to %s.%s is invalid -- "+
+			"expected database path string", dbType, funcName)
+	}
+
+	network, ok := args[1].(wire.BitcoinNet)
+	if !ok {
+		return "", 0, fmt.Errorf("second argument to %s.%s is invalid -- "+
+			"expected block network", dbType, funcName)
+	}
+
+	return dbPath, network, nil
+}
+
+// openDBDriver is the callback provided during driver registration that opens
+// an existing database for use.
+func openDBDriver(args ...interface{}) (database.DB, error) {
+	dbPath, network, err := parseArgs("Open", args...)
+	if err != nil {
+		return nil, err
+	}
+
+	return openDB(dbPath, network, false)
+}
+
+// createDBDriver is the callback provided during driver registration that
+// creates, initializes, and opens a database for use.
+func createDBDriver(args ...interface{}) (database.DB, error) {
+	dbPath, network, err := parseArgs("Create", args...)
+	if err != nil {
+		return nil, err
+	}
+
+	return openDB(dbPath, network, true)
+}
+
+// useLogger is the callback provided during driver registration that sets the
+// current logger to the provided one.
+func useLogger(logger btclog.Logger) {
+	log = logger
+}
+
+func init() {
+	// Register the driver.
+	driver := database.Driver{
+		DbType:    dbType,
+		Create:    createDBDriver,
+		Open:      openDBDriver,
+		UseLogger: useLogger,
+	}
+	if err := database.RegisterDriver(driver); err != nil {
+		panic(fmt.Sprintf("Failed to register database driver '%s': %v",
+			dbType, err))
+	}
+}
diff --git a/database2/ffboltdb/driver_test.go b/database2/ffboltdb/driver_test.go
new file mode 100644
index 00000000000..c571c6a8ff0
--- /dev/null
+++ b/database2/ffboltdb/driver_test.go
@@ -0,0 +1,288 @@
+// Copyright (c) 2015 The btcsuite developers
+// Use of this source code is governed by an ISC
+// license that can be found in the LICENSE file.
+
+package ffboltdb_test
+
+import (
+	"fmt"
+	"os"
+	"path/filepath"
+	"reflect"
+	"runtime"
+	"testing"
+
+	"github.com/btcsuite/btcd/chaincfg"
+	database "github.com/btcsuite/btcd/database2"
+	"github.com/btcsuite/btcd/database2/ffboltdb"
+	"github.com/btcsuite/btcutil"
+)
+
+// dbType is the database type name for this driver.
+const dbType = "ffboltdb"
+
+// TestCreateOpenFail ensures that errors related to creating and opening a
+// database are handled properly.
+func TestCreateOpenFail(t *testing.T) {
+	t.Parallel()
+
+	// Ensure that attempting to open a database that doesn't exist returns
+	// the expected error.
+	wantErrCode := database.ErrDbDoesNotExist
+	_, err := database.Open(dbType, "noexist", blockDataNet)
+	if !checkDbError(t, "Open", err, wantErrCode) {
+		return
+	}
+
+	// Ensure that attempting to open a database with the wrong number of
+	// parameters returns the expected error.
+	wantErr := fmt.Errorf("invalid arguments to %s.Open -- expected "+
+		"database path and block network", dbType)
+	_, err = database.Open(dbType, 1, 2, 3)
+	if err.Error() != wantErr.Error() {
+		t.Errorf("Open: did not receive expected error - got %v, "+
+			"want %v", err, wantErr)
+		return
+	}
+
+	// Ensure that attempting to open a database with an invalid type for
+	// the first parameter returns the expected error.
+ wantErr = fmt.Errorf("first argument to %s.Open is invalid -- "+ + "expected database path string", dbType) + _, err = database.Open(dbType, 1, blockDataNet) + if err.Error() != wantErr.Error() { + t.Errorf("Open: did not receive expected error - got %v, "+ + "want %v", err, wantErr) + return + } + + // Ensure that attempting to open a database with an invalid type for + // the second parameter returns the expected error. + wantErr = fmt.Errorf("second argument to %s.Open is invalid -- "+ + "expected block network", dbType) + _, err = database.Open(dbType, "noexist", "invalid") + if err.Error() != wantErr.Error() { + t.Errorf("Open: did not receive expected error - got %v, "+ + "want %v", err, wantErr) + return + } + + // Ensure that attempting to create a database with the wrong number of + // parameters returns the expected error. + wantErr = fmt.Errorf("invalid arguments to %s.Create -- expected "+ + "database path and block network", dbType) + _, err = database.Create(dbType, 1, 2, 3) + if err.Error() != wantErr.Error() { + t.Errorf("Create: did not receive expected error - got %v, "+ + "want %v", err, wantErr) + return + } + + // Ensure that attempting to create a database with an invalid type for + // the first parameter returns the expected error. + wantErr = fmt.Errorf("first argument to %s.Create is invalid -- "+ + "expected database path string", dbType) + _, err = database.Create(dbType, 1, blockDataNet) + if err.Error() != wantErr.Error() { + t.Errorf("Create: did not receive expected error - got %v, "+ + "want %v", err, wantErr) + return + } + + // Ensure that attempting to create a database with an invalid type for + // the second parameter returns the expected error. + wantErr = fmt.Errorf("second argument to %s.Create is invalid -- "+ + "expected block network", dbType) + _, err = database.Create(dbType, "noexist", "invalid") + if err.Error() != wantErr.Error() { + t.Errorf("Create: did not receive expected error - got %v, "+ + "want %v", err, wantErr) + return + } + + // Ensure operations against a closed database return the expected + // error. + dbPath := filepath.Join(os.TempDir(), "ffboltdb-createfail") + _ = os.RemoveAll(dbPath) + db, err := database.Create(dbType, dbPath, blockDataNet) + if err != nil { + t.Errorf("Create: unexpected error: %v", err) + return + } + defer os.RemoveAll(dbPath) + db.Close() + + wantErrCode = database.ErrDbNotOpen + err = db.View(func(tx database.Tx) error { + return nil + }) + if !checkDbError(t, "View", err, wantErrCode) { + return + } + + wantErrCode = database.ErrDbNotOpen + err = db.Update(func(tx database.Tx) error { + return nil + }) + if !checkDbError(t, "Update", err, wantErrCode) { + return + } + + wantErrCode = database.ErrDbNotOpen + _, err = db.Begin(false) + if !checkDbError(t, "Begin(false)", err, wantErrCode) { + return + } + + wantErrCode = database.ErrDbNotOpen + _, err = db.Begin(true) + if !checkDbError(t, "Begin(true)", err, wantErrCode) { + return + } + + wantErrCode = database.ErrDbNotOpen + err = db.Close() + if !checkDbError(t, "Close", err, wantErrCode) { + return + } +} + +// TestPersistence ensures that values stored are still valid after closing and +// reopening the database. +func TestPersistence(t *testing.T) { + t.Parallel() + + // Create a new database to run tests against. 
+	dbPath := filepath.Join(os.TempDir(), "ffboltdb-persistencetest")
+	_ = os.RemoveAll(dbPath)
+	db, err := database.Create(dbType, dbPath, blockDataNet)
+	if err != nil {
+		t.Errorf("Failed to create test database (%s) %v", dbType, err)
+		return
+	}
+	defer os.RemoveAll(dbPath)
+	defer db.Close()
+
+	// Create a bucket, put some values into it, and store a block so they
+	// can be tested for existence on re-open.
+	bucket1Key := []byte("bucket1")
+	storeValues := map[string]string{
+		"b1key1": "foo1",
+		"b1key2": "foo2",
+		"b1key3": "foo3",
+	}
+	genesisBlock := btcutil.NewBlock(chaincfg.MainNetParams.GenesisBlock)
+	genesisHash := chaincfg.MainNetParams.GenesisHash
+	err = db.Update(func(tx database.Tx) error {
+		metadataBucket := tx.Metadata()
+		if metadataBucket == nil {
+			return fmt.Errorf("Metadata: unexpected nil bucket")
+		}
+
+		bucket1, err := metadataBucket.CreateBucket(bucket1Key)
+		if err != nil {
+			return fmt.Errorf("CreateBucket: unexpected error: %v",
+				err)
+		}
+
+		for k, v := range storeValues {
+			err := bucket1.Put([]byte(k), []byte(v))
+			if err != nil {
+				return fmt.Errorf("Put: unexpected error: %v",
+					err)
+			}
+		}
+
+		if err := tx.StoreBlock(genesisBlock); err != nil {
+			return fmt.Errorf("StoreBlock: unexpected error: %v",
+				err)
+		}
+
+		return nil
+	})
+	if err != nil {
+		t.Errorf("Update: unexpected error: %v", err)
+		return
+	}
+
+	// Close and reopen the database to ensure the values persist.
+	db.Close()
+	db, err = database.Open(dbType, dbPath, blockDataNet)
+	if err != nil {
+		t.Errorf("Failed to open test database (%s) %v", dbType, err)
+		return
+	}
+	defer db.Close()
+
+	// Ensure the values previously stored in the bucket still exist and
+	// are correct.
+	err = db.View(func(tx database.Tx) error {
+		metadataBucket := tx.Metadata()
+		if metadataBucket == nil {
+			return fmt.Errorf("Metadata: unexpected nil bucket")
+		}
+
+		bucket1 := metadataBucket.Bucket(bucket1Key)
+		if bucket1 == nil {
+			return fmt.Errorf("Bucket1: unexpected nil bucket")
+		}
+
+		for k, v := range storeValues {
+			gotVal := bucket1.Get([]byte(k))
+			if !reflect.DeepEqual(gotVal, []byte(v)) {
+				return fmt.Errorf("Get: key '%s' does not "+
+					"match expected value - got %s, want %s",
+					k, gotVal, v)
+			}
+		}
+
+		genesisBlockBytes, _ := genesisBlock.Bytes()
+		gotBytes, err := tx.FetchBlock(genesisHash)
+		if err != nil {
+			return fmt.Errorf("FetchBlock: unexpected error: %v",
				err)
+		}
+		if !reflect.DeepEqual(gotBytes, genesisBlockBytes) {
+			return fmt.Errorf("FetchBlock: stored block mismatch")
+		}
+
+		return nil
+	})
+	if err != nil {
+		t.Errorf("View: unexpected error: %v", err)
+		return
+	}
+}
+
+// TestInterface performs all interface tests for this database driver.
+func TestInterface(t *testing.T) {
+	t.Parallel()
+
+	// Create a new database to run tests against.
+	dbPath := filepath.Join(os.TempDir(), "ffboltdb-interfacetest")
+	_ = os.RemoveAll(dbPath)
+	db, err := database.Create(dbType, dbPath, blockDataNet)
+	if err != nil {
+		t.Errorf("Failed to create test database (%s) %v", dbType, err)
+		return
+	}
+	defer os.RemoveAll(dbPath)
+	defer db.Close()
+
+	// Ensure the driver type is the expected value.
+	gotDbType := db.Type()
+	if gotDbType != dbType {
+		t.Errorf("Type: unexpected driver type - got %v, want %v",
+			gotDbType, dbType)
+		return
+	}
+
+	// Run all of the interface tests against the database.
+	runtime.GOMAXPROCS(runtime.NumCPU())
+
+	// Change the maximum file size to a small value to force multiple flat
+	// files with the test data set.
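+	// The 2048-byte limit used here is deliberately tiny compared to any
+	// realistic block file size, so the roughly 256 test blocks are
+	// guaranteed to span several files.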
+	ffboltdb.TstRunWithMaxBlockFileSize(db, 2048, func() {
+		testInterface(t, db)
+	})
+}
diff --git a/database2/ffboltdb/export_test.go b/database2/ffboltdb/export_test.go
new file mode 100644
index 00000000000..367f23edc72
--- /dev/null
+++ b/database2/ffboltdb/export_test.go
@@ -0,0 +1,26 @@
+// Copyright (c) 2015 The btcsuite developers
+// Use of this source code is governed by an ISC
+// license that can be found in the LICENSE file.
+
+/*
+This test file is part of the ffboltdb package rather than the ffboltdb_test
+package so it can bridge access to the internals to properly test cases which
+are either not possible or can't reliably be tested via the public interface.
+The functions are only exported while the tests are being run.
+*/
+
+package ffboltdb
+
+import database "github.com/btcsuite/btcd/database2"
+
+// TstRunWithMaxBlockFileSize runs the passed function with the maximum allowed
+// file size for the database set to the provided value. The value will be set
+// back to the original value upon completion.
+func TstRunWithMaxBlockFileSize(idb database.DB, size uint32, fn func()) {
+	ffboltdb := idb.(*db)
+	origSize := ffboltdb.store.maxBlockFileSize
+
+	ffboltdb.store.maxBlockFileSize = size
+	fn()
+	ffboltdb.store.maxBlockFileSize = origSize
+}
diff --git a/database2/ffboltdb/interface_test.go b/database2/ffboltdb/interface_test.go
new file mode 100644
index 00000000000..13fac51617a
--- /dev/null
+++ b/database2/ffboltdb/interface_test.go
@@ -0,0 +1,2311 @@
+// Copyright (c) 2015 The btcsuite developers
+// Use of this source code is governed by an ISC
+// license that can be found in the LICENSE file.
+
+// This file is intended to be copied into each backend driver directory. Each
+// driver should have its own driver_test.go file which creates a database and
+// invokes the testInterface function in this file to ensure the driver properly
+// implements the interface.
+//
+// NOTE: When copying this file into the backend driver folder, the package name
+// will need to be changed accordingly.
+
+package ffboltdb_test
+
+import (
+	"bytes"
+	"compress/bzip2"
+	"encoding/binary"
+	"fmt"
+	"io"
+	"os"
+	"path/filepath"
+	"reflect"
+	"sync/atomic"
+	"testing"
+	"time"
+
+	"github.com/btcsuite/btcd/chaincfg"
+	database "github.com/btcsuite/btcd/database2"
+	"github.com/btcsuite/btcd/wire"
+	"github.com/btcsuite/btcutil"
+)
+
+var (
+	// blockDataNet is the expected network in the test block data.
+	blockDataNet = wire.MainNet
+
+	// blockDataFile is the path to a file containing the first 256 blocks
+	// of the block chain.
+	blockDataFile = filepath.Join("..", "testdata", "blocks1-256.bz2")
+
+	// errSubTestFail is used to signal that a sub test returned false.
+	errSubTestFail = fmt.Errorf("sub test failure")
+)
+
+// loadBlocks loads the blocks contained in the testdata directory and returns
+// a slice of them.
+func loadBlocks(t *testing.T, dataFile string, network wire.BitcoinNet) ([]*btcutil.Block, error) {
+	// Open the file that contains the blocks for reading.
+	fi, err := os.Open(dataFile)
+	if err != nil {
+		t.Errorf("failed to open file %v, err %v", dataFile, err)
+		return nil, err
+	}
+	defer func() {
+		if err := fi.Close(); err != nil {
+			t.Errorf("failed to close file %v %v", dataFile,
+				err)
+		}
+	}()
+	dr := bzip2.NewReader(fi)
+
+	// Set the first block as the genesis block.
+	blocks := make([]*btcutil.Block, 0, 256)
+	genesis := btcutil.NewBlock(chaincfg.MainNetParams.GenesisBlock)
+	blocks = append(blocks, genesis)
+
+	// Load the remaining blocks.
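+	// Each entry in the test data file is a 4-byte little-endian network
+	// magic followed by a 4-byte little-endian block length and then the
+	// serialized block itself.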
+// loadBlocks loads the blocks contained in the testdata directory and returns
+// a slice of them.
+func loadBlocks(t *testing.T, dataFile string, network wire.BitcoinNet) ([]*btcutil.Block, error) {
+	// Open the file that contains the blocks for reading.
+	fi, err := os.Open(dataFile)
+	if err != nil {
+		t.Errorf("failed to open file %v, err %v", dataFile, err)
+		return nil, err
+	}
+	defer func() {
+		if err := fi.Close(); err != nil {
+			t.Errorf("failed to close file %v: %v", dataFile,
+				err)
+		}
+	}()
+	dr := bzip2.NewReader(fi)
+
+	// Set the first block as the genesis block.
+	blocks := make([]*btcutil.Block, 0, 256)
+	genesis := btcutil.NewBlock(chaincfg.MainNetParams.GenesisBlock)
+	blocks = append(blocks, genesis)
+
+	// Load the remaining blocks.
+	for height := 1; ; height++ {
+		var net uint32
+		err := binary.Read(dr, binary.LittleEndian, &net)
+		if err == io.EOF {
+			// Hit end of file at the expected offset. No error.
+			break
+		}
+		if err != nil {
+			t.Errorf("Failed to load network type for block %d: %v",
+				height, err)
+			return nil, err
+		}
+		if net != uint32(network) {
+			t.Errorf("Block doesn't match network: %v expects %v",
+				net, network)
+			return nil, fmt.Errorf("block network mismatch")
+		}
+
+		var blockLen uint32
+		err = binary.Read(dr, binary.LittleEndian, &blockLen)
+		if err != nil {
+			t.Errorf("Failed to load block size for block %d: %v",
+				height, err)
+			return nil, err
+		}
+
+		// Read the block.
+		blockBytes := make([]byte, blockLen)
+		_, err = io.ReadFull(dr, blockBytes)
+		if err != nil {
+			t.Errorf("Failed to load block %d: %v", height, err)
+			return nil, err
+		}
+
+		// Deserialize and store the block.
+		block, err := btcutil.NewBlockFromBytes(blockBytes)
+		if err != nil {
+			t.Errorf("Failed to parse block %v: %v", height, err)
+			return nil, err
+		}
+		blocks = append(blocks, block)
+	}
+
+	return blocks, nil
+}
+
+// checkDbError ensures the passed error is a database.Error with an error code
+// that matches the passed error code.
+func checkDbError(t *testing.T, testName string, gotErr error, wantErrCode database.ErrorCode) bool {
+	dbErr, ok := gotErr.(database.Error)
+	if !ok {
+		t.Errorf("%s: unexpected error type - got %T, want %T",
+			testName, gotErr, database.Error{})
+		return false
+	}
+	if dbErr.ErrorCode != wantErrCode {
+		t.Errorf("%s: unexpected error code - got %s (%s), want %s",
+			testName, dbErr.ErrorCode, dbErr.Description,
+			wantErrCode)
+		return false
+	}
+
+	return true
+}
+
+// testContext is used to store context information about a running test which
+// is passed into helper functions.
+type testContext struct {
+	t           *testing.T
+	db          database.DB
+	bucketDepth int
+	isWritable  bool
+	blocks      []*btcutil.Block
+}
+
+// keyPair houses a key/value pair. It is used over maps so ordering can be
+// maintained.
+type keyPair struct {
+	key   string
+	value string
+}
+
+// lookupKey is a convenience method to look up the requested key from the
+// provided keypair slice along with whether or not the key was found.
+func lookupKey(key string, values []keyPair) (string, bool) {
+	for _, item := range values {
+		if item.key == key {
+			return item.value, true
+		}
+	}
+
+	return "", false
+}
+
+// rollbackValues returns a copy of the provided keypairs with all values set to
+// an empty string. This is used to test that values are properly rolled back.
+func rollbackValues(values []keyPair) []keyPair {
+	ret := make([]keyPair, len(values))
+	copy(ret, values)
+	for i := range ret {
+		ret[i].value = ""
+	}
+	return ret
+}
+
+// testCursorKeyPair checks that the provided key and value match the expected
+// keypair at the provided index. It also ensures the index is in range for the
+// provided slice of expected keypairs.
+func testCursorKeyPair(tc *testContext, k, v []byte, index int, values []keyPair) bool {
+	if index >= len(values) || index < 0 {
+		tc.t.Errorf("Cursor: exceeded the expected range of values - "+
+			"index %d, num values %d", index, len(values))
+		return false
+	}
+
+	pair := &values[index]
+	kString := string(k)
+	if kString != pair.key {
+		tc.t.Errorf("Mismatched cursor key: index %d does not match "+
+			"the expected key - got %q, want %q", index, kString,
+			pair.key)
+		return false
+	}
+	vString := string(v)
+	if vString != pair.value {
+		tc.t.Errorf("Mismatched cursor value: index %d does not match "+
+			"the expected value - got %q, want %q", index,
+			vString, pair.value)
+		return false
+	}
+
+	return true
+}
+
+// testGetValues checks that all of the provided key/value pairs can be
+// retrieved from the database and the retrieved values match the provided
+// values.
+func testGetValues(tc *testContext, bucket database.Bucket, values []keyPair) bool {
+	for _, item := range values {
+		var vBytes []byte
+		if item.value != "" {
+			vBytes = []byte(item.value)
+		}
+
+		gotValue := bucket.Get([]byte(item.key))
+		if !reflect.DeepEqual(gotValue, vBytes) {
+			tc.t.Errorf("Get: unexpected value - got %s, want %s",
+				gotValue, vBytes)
+			return false
+		}
+	}
+
+	return true
+}
+
+// testPutValues stores all of the provided key/value pairs in the provided
+// bucket while checking for errors.
+func testPutValues(tc *testContext, bucket database.Bucket, values []keyPair) bool {
+	for _, item := range values {
+		var vBytes []byte
+		if item.value != "" {
+			vBytes = []byte(item.value)
+		}
+		if err := bucket.Put([]byte(item.key), vBytes); err != nil {
+			tc.t.Errorf("Put: unexpected error: %v", err)
+			return false
+		}
+	}
+
+	return true
+}
+
+// testDeleteValues removes all of the provided key/value pairs from the
+// provided bucket.
+func testDeleteValues(tc *testContext, bucket database.Bucket, values []keyPair) bool {
+	for _, item := range values {
+		if err := bucket.Delete([]byte(item.key)); err != nil {
+			tc.t.Errorf("Delete: unexpected error: %v", err)
+			return false
+		}
+	}
+
+	return true
+}
+
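testCursorInterface below drives the cursor through the same motions a real caller would. For orientation, a minimal consumer of the metadata cursor looks roughly like this (dumpMetadata is a hypothetical name; it assumes the same fmt and database2 imports already present in this test file):

	// dumpMetadata prints every key/value pair in the metadata bucket in
	// byte-sorted order using a managed read-only transaction.
	func dumpMetadata(db database.DB) error {
		return db.View(func(tx database.Tx) error {
			cursor := tx.Metadata().Cursor()
			for ok := cursor.First(); ok; ok = cursor.Next() {
				fmt.Printf("%s = %s\n", cursor.Key(), cursor.Value())
			}
			return nil
		})
	}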
+// testCursorInterface ensures the cursor interface is working properly by
+// exercising all of its functions on the passed bucket.
+func testCursorInterface(tc *testContext, bucket database.Bucket) bool {
+	// Ensure a cursor can be obtained for the bucket.
+	cursor := bucket.Cursor()
+	if cursor == nil {
+		tc.t.Error("Bucket.Cursor: unexpected nil cursor returned")
+		return false
+	}
+
+	// Ensure the cursor returns the same bucket it was created for.
+	if cursor.Bucket() != bucket {
+		tc.t.Error("Cursor.Bucket: does not match the bucket it was " +
+			"created for")
+		return false
+	}
+
+	if tc.isWritable {
+		unsortedValues := []keyPair{
+			{"cursor", "val1"},
+			{"abcd", "val1"},
+			{"bcd", "val1"},
+		}
+		sortedValues := []keyPair{
+			{"abcd", "val1"},
+			{"bcd", "val1"},
+			{"cursor", "val1"},
+		}
+
+		// Store the values to be used in the cursor tests in unsorted
+		// order and ensure they were actually stored.
+		if !testPutValues(tc, bucket, unsortedValues) {
+			return false
+		}
+		if !testGetValues(tc, bucket, unsortedValues) {
+			return false
+		}
+
+		// Ensure the cursor returns all items in byte-sorted order when
+		// iterating forward.
+		curIdx := 0
+		for ok := cursor.First(); ok; ok = cursor.Next() {
+			k, v := cursor.Key(), cursor.Value()
+			if !testCursorKeyPair(tc, k, v, curIdx, sortedValues) {
+				return false
+			}
+			curIdx++
+		}
+		if curIdx != len(unsortedValues) {
+			tc.t.Errorf("Cursor: expected to iterate %d values, "+
+				"but only iterated %d", len(unsortedValues),
+				curIdx)
+			return false
+		}
+
+		// Ensure the cursor returns all items in reverse byte-sorted
+		// order when iterating in reverse.
+		curIdx = len(sortedValues) - 1
+		for ok := cursor.Last(); ok; ok = cursor.Prev() {
+			k, v := cursor.Key(), cursor.Value()
+			if !testCursorKeyPair(tc, k, v, curIdx, sortedValues) {
+				return false
+			}
+			curIdx--
+		}
+		if curIdx > -1 {
+			tc.t.Errorf("Reverse cursor: expected to iterate %d "+
+				"values, but only iterated %d",
+				len(sortedValues), len(sortedValues)-(curIdx+1))
+			return false
+		}
+
+		// Ensure forward iteration works as expected after seeking.
+		middleIdx := (len(sortedValues) - 1) / 2
+		seekKey := []byte(sortedValues[middleIdx].key)
+		curIdx = middleIdx
+		for ok := cursor.Seek(seekKey); ok; ok = cursor.Next() {
+			k, v := cursor.Key(), cursor.Value()
+			if !testCursorKeyPair(tc, k, v, curIdx, sortedValues) {
+				return false
+			}
+			curIdx++
+		}
+		if curIdx != len(sortedValues) {
+			tc.t.Errorf("Cursor after seek: expected to iterate "+
+				"%d values, but only iterated %d",
+				len(sortedValues)-middleIdx, curIdx-middleIdx)
+			return false
+		}
+
+		// Ensure reverse iteration works as expected after seeking.
+		curIdx = middleIdx
+		for ok := cursor.Seek(seekKey); ok; ok = cursor.Prev() {
+			k, v := cursor.Key(), cursor.Value()
+			if !testCursorKeyPair(tc, k, v, curIdx, sortedValues) {
+				return false
+			}
+			curIdx--
+		}
+		if curIdx > -1 {
+			tc.t.Errorf("Reverse cursor after seek: expected to "+
+				"iterate %d values, but only iterated %d",
+				len(sortedValues)-middleIdx, middleIdx-curIdx)
+			return false
+		}
+
+		// Ensure the cursor deletes items properly.
+		cursor.First()
+		k := cursor.Key()
+		if err := cursor.Delete(); err != nil {
+			tc.t.Errorf("Cursor.Delete: unexpected error: %v", err)
+			return false
+		}
+		if val := bucket.Get(k); val != nil {
+			tc.t.Errorf("Cursor.Delete: value for key %q was not "+
+				"deleted", k)
+			return false
+		}
+	}
+
+	return true
+}
+
+// testNestedBucket reruns the testBucketInterface against a nested bucket along
+// with a depth counter so the tests only go a couple of levels deep.
+func testNestedBucket(tc *testContext, testBucket database.Bucket) bool {
+	// Don't go more than 2 nested levels deep.
+	if tc.bucketDepth > 1 {
+		return true
+	}
+
+	tc.bucketDepth++
+	defer func() {
+		tc.bucketDepth--
+	}()
+	if !testBucketInterface(tc, testBucket) {
+		return false
+	}
+
+	return true
+}
+
+// testBucketInterface ensures the bucket interface is working properly by
+// exercising all of its functions. This includes the cursor interface for the
+// cursor returned from the bucket.
+func testBucketInterface(tc *testContext, bucket database.Bucket) bool {
+	if bucket.Writable() != tc.isWritable {
+		tc.t.Errorf("Bucket writable state does not match.")
+		return false
+	}
+
+	if tc.isWritable {
+		// keyValues holds the keys and values to use when putting
+		// values into the bucket.
+		var keyValues = []keyPair{
+			{"bucketkey1", "foo1"},
+			{"bucketkey2", "foo2"},
+			{"bucketkey3", "foo3"},
+		}
+		if !testPutValues(tc, bucket, keyValues) {
+			return false
+		}
+
+		if !testGetValues(tc, bucket, keyValues) {
+			return false
+		}
+
+		// Ensure errors returned from the user-supplied ForEach
+		// function are returned.
+ forEachError := fmt.Errorf("example foreach error") + err := bucket.ForEach(func(k, v []byte) error { + return forEachError + }) + if err != forEachError { + tc.t.Errorf("ForEach: inner function error not "+ + "returned - got %v, want %v", err, forEachError) + return false + } + + // Iterate all of the keys using ForEach while making sure the + // stored values are the expected values. + keysFound := make(map[string]struct{}, len(keyValues)) + err = bucket.ForEach(func(k, v []byte) error { + kString := string(k) + wantV, found := lookupKey(kString, keyValues) + if !found { + return fmt.Errorf("ForEach: key '%s' should "+ + "exist", kString) + } + + if !reflect.DeepEqual(v, []byte(wantV)) { + return fmt.Errorf("ForEach: value for key '%s' "+ + "does not match - got %s, want %s", + kString, v, wantV) + } + + keysFound[kString] = struct{}{} + return nil + }) + if err != nil { + tc.t.Errorf("%v", err) + return false + } + + // Ensure all keys were iterated. + for _, item := range keyValues { + if _, ok := keysFound[item.key]; !ok { + tc.t.Errorf("ForEach: key '%s' was not iterated "+ + "when it should have been", item.key) + return false + } + } + + // Delete the keys and ensure they were deleted. + if !testDeleteValues(tc, bucket, keyValues) { + return false + } + if !testGetValues(tc, bucket, rollbackValues(keyValues)) { + return false + } + + // Ensure creating a new bucket works as expected. + testBucketName := []byte("testbucket") + testBucket, err := bucket.CreateBucket(testBucketName) + if err != nil { + tc.t.Errorf("CreateBucket: unexpected error: %v", err) + return false + } + if !testNestedBucket(tc, testBucket) { + return false + } + + // Ensure errors returned from the user-supplied ForEachBucket + // function are returned. + err = bucket.ForEachBucket(func(k []byte) error { + return forEachError + }) + if err != forEachError { + tc.t.Errorf("ForEachBucket: inner function error not "+ + "returned - got %v, want %v", err, forEachError) + return false + } + + // Ensure creating a bucket that already exists fails with the + // expected error. + wantErrCode := database.ErrBucketExists + _, err = bucket.CreateBucket(testBucketName) + if !checkDbError(tc.t, "CreateBucket", err, wantErrCode) { + return false + } + + // Ensure CreateBucketIfNotExists returns an existing bucket. + testBucket, err = bucket.CreateBucketIfNotExists(testBucketName) + if err != nil { + tc.t.Errorf("CreateBucketIfNotExists: unexpected "+ + "error: %v", err) + return false + } + if !testNestedBucket(tc, testBucket) { + return false + } + + // Ensure retrieving an existing bucket works as expected. + testBucket = bucket.Bucket(testBucketName) + if !testNestedBucket(tc, testBucket) { + return false + } + + // Ensure deleting a bucket works as intended. + if err := bucket.DeleteBucket(testBucketName); err != nil { + tc.t.Errorf("DeleteBucket: unexpected error: %v", err) + return false + } + if b := bucket.Bucket(testBucketName); b != nil { + tc.t.Errorf("DeleteBucket: bucket '%s' still exists", + testBucketName) + return false + } + + // Ensure deleting a bucket that doesn't exist returns the + // expected error. + wantErrCode = database.ErrBucketNotFound + err = bucket.DeleteBucket(testBucketName) + if !checkDbError(tc.t, "DeleteBucket", err, wantErrCode) { + return false + } + + // Ensure CreateBucketIfNotExists creates a new bucket when + // it doesn't already exist. 
+		testBucket, err = bucket.CreateBucketIfNotExists(testBucketName)
+		if err != nil {
+			tc.t.Errorf("CreateBucketIfNotExists: unexpected "+
+				"error: %v", err)
+			return false
+		}
+		if !testNestedBucket(tc, testBucket) {
+			return false
+		}
+
+		// Ensure the cursor interface works as expected.
+		if !testCursorInterface(tc, testBucket) {
+			return false
+		}
+
+		// Delete the test bucket to avoid leaving it around for future
+		// calls.
+		if err := bucket.DeleteBucket(testBucketName); err != nil {
+			tc.t.Errorf("DeleteBucket: unexpected error: %v", err)
+			return false
+		}
+		if b := bucket.Bucket(testBucketName); b != nil {
+			tc.t.Errorf("DeleteBucket: bucket '%s' still exists",
+				testBucketName)
+			return false
+		}
+	} else {
+		// Put should fail on a bucket that is not writable.
+		testName := "unwritable tx put"
+		wantErrCode := database.ErrTxNotWritable
+		failBytes := []byte("fail")
+		err := bucket.Put(failBytes, failBytes)
+		if !checkDbError(tc.t, testName, err, wantErrCode) {
+			return false
+		}
+
+		// Delete should fail on a bucket that is not writable.
+		testName = "unwritable tx delete"
+		err = bucket.Delete(failBytes)
+		if !checkDbError(tc.t, testName, err, wantErrCode) {
+			return false
+		}
+
+		// CreateBucket should fail on a bucket that is not writable.
+		testName = "unwritable tx create bucket"
+		_, err = bucket.CreateBucket(failBytes)
+		if !checkDbError(tc.t, testName, err, wantErrCode) {
+			return false
+		}
+
+		// CreateBucketIfNotExists should fail on a bucket that is not
+		// writable.
+		testName = "unwritable tx create bucket if not exists"
+		_, err = bucket.CreateBucketIfNotExists(failBytes)
+		if !checkDbError(tc.t, testName, err, wantErrCode) {
+			return false
+		}
+
+		// DeleteBucket should fail on a bucket that is not writable.
+		testName = "unwritable tx delete bucket"
+		err = bucket.DeleteBucket(failBytes)
+		if !checkDbError(tc.t, testName, err, wantErrCode) {
+			return false
+		}
+
+		// Ensure the cursor interface works as expected with read-only
+		// buckets.
+		if !testCursorInterface(tc, bucket) {
+			return false
+		}
+	}
+
+	return true
+}
+
+// rollbackOnPanic rolls the passed transaction back if the code in the calling
+// function panics. This is useful in case the tests unexpectedly panic which
+// would leave any manually created transactions with the database mutex locked
+// thereby leading to a deadlock and masking the real reason for the panic. It
+// also logs a test error and repanics so the original panic can be traced.
+func rollbackOnPanic(t *testing.T, tx database.Tx) {
+	if err := recover(); err != nil {
+		t.Errorf("Unexpected panic: %v", err)
+		_ = tx.Rollback()
+		panic(err)
+	}
+}
+
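The bucket tests above exercise creation, retrieval, and deletion of nested buckets. In application code the same calls compose naturally; a minimal sketch (ensureIndexBuckets and the bucket names are hypothetical examples, not part of this patch):

	// ensureIndexBuckets creates a parent metadata bucket with one nested
	// child bucket, creating each only when it does not already exist.
	func ensureIndexBuckets(db database.DB) error {
		return db.Update(func(tx database.Tx) error {
			parent, err := tx.Metadata().CreateBucketIfNotExists([]byte("index"))
			if err != nil {
				return err
			}
			_, err = parent.CreateBucketIfNotExists([]byte("byheight"))
			return err
		})
	}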
+ bucket1Name := []byte("bucket1") + populateValues := func(writable, rollback bool, putValues []keyPair) bool { + tx, err := tc.db.Begin(writable) + if err != nil { + tc.t.Errorf("Begin: unexpected error %v", err) + return false + } + defer rollbackOnPanic(tc.t, tx) + + metadataBucket := tx.Metadata() + if metadataBucket == nil { + tc.t.Errorf("Metadata: unexpected nil bucket") + _ = tx.Rollback() + return false + } + + bucket1 := metadataBucket.Bucket(bucket1Name) + if bucket1 == nil { + tc.t.Errorf("Bucket1: unexpected nil bucket") + return false + } + + tc.isWritable = writable + if !testBucketInterface(tc, bucket1) { + _ = tx.Rollback() + return false + } + + if !writable { + // The transaction is not writable, so it should fail + // the commit. + testName := "unwritable tx commit" + wantErrCode := database.ErrTxNotWritable + err := tx.Commit() + if !checkDbError(tc.t, testName, err, wantErrCode) { + _ = tx.Rollback() + return false + } + } else { + if !testPutValues(tc, bucket1, putValues) { + return false + } + + if rollback { + // Rollback the transaction. + if err := tx.Rollback(); err != nil { + tc.t.Errorf("Rollback: unexpected "+ + "error %v", err) + return false + } + } else { + // The commit should succeed. + if err := tx.Commit(); err != nil { + tc.t.Errorf("Commit: unexpected error "+ + "%v", err) + return false + } + } + } + + return true + } + + // checkValues starts a read-only transaction and checks that all of + // the key/value pairs specified in the expectedValues parameter match + // what's in the database. + checkValues := func(expectedValues []keyPair) bool { + tx, err := tc.db.Begin(false) + if err != nil { + tc.t.Errorf("Begin: unexpected error %v", err) + return false + } + defer rollbackOnPanic(tc.t, tx) + + metadataBucket := tx.Metadata() + if metadataBucket == nil { + tc.t.Errorf("Metadata: unexpected nil bucket") + _ = tx.Rollback() + return false + } + + bucket1 := metadataBucket.Bucket(bucket1Name) + if bucket1 == nil { + tc.t.Errorf("Bucket1: unexpected nil bucket") + return false + } + + if !testGetValues(tc, bucket1, expectedValues) { + _ = tx.Rollback() + return false + } + + // Rollback the read-only transaction. + if err := tx.Rollback(); err != nil { + tc.t.Errorf("Commit: unexpected error %v", err) + return false + } + + return true + } + + // deleteValues starts a read-write transaction and deletes the keys + // in the passed key/value pairs. + deleteValues := func(values []keyPair) bool { + tx, err := tc.db.Begin(true) + if err != nil { + + } + defer rollbackOnPanic(tc.t, tx) + + metadataBucket := tx.Metadata() + if metadataBucket == nil { + tc.t.Errorf("Metadata: unexpected nil bucket") + _ = tx.Rollback() + return false + } + + bucket1 := metadataBucket.Bucket(bucket1Name) + if bucket1 == nil { + tc.t.Errorf("Bucket1: unexpected nil bucket") + return false + } + + // Delete the keys and ensure they were deleted. + if !testDeleteValues(tc, bucket1, values) { + _ = tx.Rollback() + return false + } + if !testGetValues(tc, bucket1, rollbackValues(values)) { + _ = tx.Rollback() + return false + } + + // Commit the changes and ensure it was successful. + if err := tx.Commit(); err != nil { + tc.t.Errorf("Commit: unexpected error %v", err) + return false + } + + return true + } + + // keyValues holds the keys and values to use when putting values into a + // bucket. 
+	// keyValues holds the keys and values to use when putting values into a
+	// bucket.
+	var keyValues = []keyPair{
+		{"umtxkey1", "foo1"},
+		{"umtxkey2", "foo2"},
+		{"umtxkey3", "foo3"},
+	}
+
+	// Ensure that attempting to populate the values using a read-only
+	// transaction fails as expected.
+	if !populateValues(false, true, keyValues) {
+		return false
+	}
+	if !checkValues(rollbackValues(keyValues)) {
+		return false
+	}
+
+	// Ensure that attempting to populate the values using a read-write
+	// transaction and then rolling it back yields the expected values.
+	if !populateValues(true, true, keyValues) {
+		return false
+	}
+	if !checkValues(rollbackValues(keyValues)) {
+		return false
+	}
+
+	// Ensure that attempting to populate the values using a read-write
+	// transaction and then committing it stores the expected values.
+	if !populateValues(true, false, keyValues) {
+		return false
+	}
+	if !checkValues(keyValues) {
+		return false
+	}
+
+	// Clean up the keys.
+	if !deleteValues(keyValues) {
+		return false
+	}
+
+	return true
+}
+
+// testManagedTxPanics ensures calling Rollback or Commit inside a managed
+// transaction panics.
+func testManagedTxPanics(tc *testContext) bool {
+	testPanic := func(fn func()) (paniced bool) {
+		// Set up a defer to catch the expected panic and update the
+		// return variable.
+		defer func() {
+			if err := recover(); err != nil {
+				paniced = true
+			}
+		}()
+
+		fn()
+		return false
+	}
+
+	// Ensure calling Commit on a managed read-only transaction panics.
+	paniced := testPanic(func() {
+		tc.db.View(func(tx database.Tx) error {
+			tx.Commit()
+			return nil
+		})
+	})
+	if !paniced {
+		tc.t.Error("Commit called inside View did not panic")
+		return false
+	}
+
+	// Ensure calling Rollback on a managed read-only transaction panics.
+	paniced = testPanic(func() {
+		tc.db.View(func(tx database.Tx) error {
+			tx.Rollback()
+			return nil
+		})
+	})
+	if !paniced {
+		tc.t.Error("Rollback called inside View did not panic")
+		return false
+	}
+
+	// Ensure calling Commit on a managed read-write transaction panics.
+	paniced = testPanic(func() {
+		tc.db.Update(func(tx database.Tx) error {
+			tx.Commit()
+			return nil
+		})
+	})
+	if !paniced {
+		tc.t.Error("Commit called inside Update did not panic")
+		return false
+	}
+
+	// Ensure calling Rollback on a managed read-write transaction panics.
+	paniced = testPanic(func() {
+		tc.db.Update(func(tx database.Tx) error {
+			tx.Rollback()
+			return nil
+		})
+	})
+	if !paniced {
+		tc.t.Error("Rollback called inside Update did not panic")
+		return false
+	}
+
+	return true
+}
+
+// testMetadataTxInterface tests all facets of the managed read/write and
+// manual transaction metadata interfaces as well as the bucket interfaces under
+// them.
+func testMetadataTxInterface(tc *testContext) bool {
+	if !testManagedTxPanics(tc) {
+		return false
+	}
+
+	bucket1Name := []byte("bucket1")
+	err := tc.db.Update(func(tx database.Tx) error {
+		_, err := tx.Metadata().CreateBucket(bucket1Name)
+		return err
+	})
+	if err != nil {
+		tc.t.Errorf("Update: unexpected error creating bucket: %v", err)
+		return false
+	}
+
+	if !testMetadataManualTxInterface(tc) {
+		return false
+	}
+
+	// keyValues holds the keys and values to use when putting values
+	// into a bucket.
+	var keyValues = []keyPair{
+		{"mtxkey1", "foo1"},
+		{"mtxkey2", "foo2"},
+		{"mtxkey3", "foo3"},
+	}
+
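testManagedTxPanics above pins down the managed-transaction contract: View and Update own the commit/rollback decision, so calling either method on the managed Tx panics. The idiomatic managed pattern is to signal rollback by returning an error (getSetting and errNotFound are hypothetical; whether values remain valid after the closure returns is an assumption this excerpt does not spell out, hence the defensive copy):

	var errNotFound = errors.New("setting not found")

	// getSetting reads a key inside a managed read-only transaction. A
	// non-nil error returned from the closure rolls the transaction back.
	func getSetting(db database.DB, key []byte) ([]byte, error) {
		var value []byte
		err := db.View(func(tx database.Tx) error {
			v := tx.Metadata().Get(key)
			if v == nil {
				return errNotFound
			}
			// Copy defensively in case the value is only valid for
			// the life of the transaction (assumption).
			value = append([]byte(nil), v...)
			return nil
		})
		return value, err
	}

+	// Test the bucket interface via a managed read-only transaction.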
+ err = tc.db.View(func(tx database.Tx) error { + metadataBucket := tx.Metadata() + if metadataBucket == nil { + return fmt.Errorf("Metadata: unexpected nil bucket") + } + + bucket1 := metadataBucket.Bucket(bucket1Name) + if bucket1 == nil { + return fmt.Errorf("Bucket1: unexpected nil bucket") + } + + tc.isWritable = false + if !testBucketInterface(tc, bucket1) { + return errSubTestFail + } + + return nil + }) + if err != nil { + if err != errSubTestFail { + tc.t.Errorf("%v", err) + } + return false + } + + // Ensure errors returned from the user-supplied View function are + // returned. + viewError := fmt.Errorf("example view error") + err = tc.db.View(func(tx database.Tx) error { + return viewError + }) + if err != viewError { + tc.t.Errorf("View: inner function error not returned - got "+ + "%v, want %v", err, viewError) + return false + } + + // Test the bucket interface via a managed read-write transaction. + // Also, put a series of values and force a rollback so the following + // code can ensure the values were not stored. + forceRollbackError := fmt.Errorf("force rollback") + err = tc.db.Update(func(tx database.Tx) error { + metadataBucket := tx.Metadata() + if metadataBucket == nil { + return fmt.Errorf("Metadata: unexpected nil bucket") + } + + bucket1 := metadataBucket.Bucket(bucket1Name) + if bucket1 == nil { + return fmt.Errorf("Bucket1: unexpected nil bucket") + } + + tc.isWritable = true + if !testBucketInterface(tc, bucket1) { + return errSubTestFail + } + + if !testPutValues(tc, bucket1, keyValues) { + return errSubTestFail + } + + // Return an error to force a rollback. + return forceRollbackError + }) + if err != forceRollbackError { + if err == errSubTestFail { + return false + } + + tc.t.Errorf("Update: inner function error not returned - got "+ + "%v, want %v", err, forceRollbackError) + return false + } + + // Ensure the values that should not have been stored due to the forced + // rollback above were not actually stored. + err = tc.db.View(func(tx database.Tx) error { + metadataBucket := tx.Metadata() + if metadataBucket == nil { + return fmt.Errorf("Metadata: unexpected nil bucket") + } + + if !testGetValues(tc, metadataBucket, rollbackValues(keyValues)) { + return errSubTestFail + } + + return nil + }) + if err != nil { + if err != errSubTestFail { + tc.t.Errorf("%v", err) + } + return false + } + + // Store a series of values via a managed read-write transaction. + err = tc.db.Update(func(tx database.Tx) error { + metadataBucket := tx.Metadata() + if metadataBucket == nil { + return fmt.Errorf("Metadata: unexpected nil bucket") + } + + bucket1 := metadataBucket.Bucket(bucket1Name) + if bucket1 == nil { + return fmt.Errorf("Bucket1: unexpected nil bucket") + } + + if !testPutValues(tc, bucket1, keyValues) { + return errSubTestFail + } + + return nil + }) + if err != nil { + if err != errSubTestFail { + tc.t.Errorf("%v", err) + } + return false + } + + // Ensure the values stored above were committed as expected. 
+ err = tc.db.View(func(tx database.Tx) error { + metadataBucket := tx.Metadata() + if metadataBucket == nil { + return fmt.Errorf("Metadata: unexpected nil bucket") + } + + bucket1 := metadataBucket.Bucket(bucket1Name) + if bucket1 == nil { + return fmt.Errorf("Bucket1: unexpected nil bucket") + } + + if !testGetValues(tc, bucket1, keyValues) { + return errSubTestFail + } + + return nil + }) + if err != nil { + if err != errSubTestFail { + tc.t.Errorf("%v", err) + } + return false + } + + // Clean up the values stored above in a managed read-write transaction. + err = tc.db.Update(func(tx database.Tx) error { + metadataBucket := tx.Metadata() + if metadataBucket == nil { + return fmt.Errorf("Metadata: unexpected nil bucket") + } + + bucket1 := metadataBucket.Bucket(bucket1Name) + if bucket1 == nil { + return fmt.Errorf("Bucket1: unexpected nil bucket") + } + + if !testDeleteValues(tc, bucket1, keyValues) { + return errSubTestFail + } + + return nil + }) + if err != nil { + if err != errSubTestFail { + tc.t.Errorf("%v", err) + } + return false + } + + return true +} + +// testFetchBlockIOMissing ensures that all of the block retrieval API functions +// work as expected when requesting blocks that don't exist. +func testFetchBlockIOMissing(tc *testContext, tx database.Tx) bool { + wantErrCode := database.ErrBlockNotFound + + // --------------------- + // Non-bulk Block IO API + // --------------------- + + // Test the individual block APIs one block at a time to ensure they + // return the expected error. Also, build the data needed to test the + // bulk APIs below while looping. + allBlockHashes := make([]wire.ShaHash, len(tc.blocks)) + allBlockRegions := make([]database.BlockRegion, len(tc.blocks)) + for i, block := range tc.blocks { + blockHash := block.Sha() + allBlockHashes[i] = *blockHash + + txLocs, err := block.TxLoc() + if err != nil { + tc.t.Errorf("block.TxLoc(%d): unexpected error: %v", i, + err) + return false + } + + // Ensure FetchBlock returns expected error. + testName := fmt.Sprintf("FetchBlock #%d on missing block", i) + _, err = tx.FetchBlock(blockHash) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + // Ensure FetchBlockHeader returns expected error. + testName = fmt.Sprintf("FetchBlockHeader #%d on missing block", + i) + _, err = tx.FetchBlockHeader(blockHash) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + // Ensure the first transaction fetched as a block region from + // the database returns the expected error. + region := database.BlockRegion{ + Hash: blockHash, + Offset: uint32(txLocs[0].TxStart), + Len: uint32(txLocs[0].TxLen), + } + allBlockRegions[i] = region + _, err = tx.FetchBlockRegion(®ion) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + // Ensure HasBlock returns false. + hasBlock, err := tx.HasBlock(blockHash) + if err != nil { + tc.t.Errorf("HasBlock #%d: unexpected err: %v", i, err) + return false + } + if hasBlock { + tc.t.Errorf("HasBlock #%d: should not have block", i) + return false + } + } + + // ----------------- + // Bulk Block IO API + // ----------------- + + // Ensure FetchBlocks returns expected error. + testName := "FetchBlocks on missing blocks" + _, err := tx.FetchBlocks(allBlockHashes) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + // Ensure FetchBlockHeaders returns expected error. 
+ testName = "FetchBlockHeaders on missing blocks" + _, err = tx.FetchBlockHeaders(allBlockHashes) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + // Ensure FetchBlockRegions returns expected error. + testName = "FetchBlockRegions on missing blocks" + _, err = tx.FetchBlockRegions(allBlockRegions) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + // Ensure HasBlocks returns false for all blocks. + hasBlocks, err := tx.HasBlocks(allBlockHashes) + if err != nil { + tc.t.Errorf("HasBlocks: unexpected err: %v", err) + } + for i, hasBlock := range hasBlocks { + if hasBlock { + tc.t.Errorf("HasBlocks #%d: should not have block", i) + return false + } + } + + return true +} + +// testFetchBlockIO ensures all of the block retrieval API functions work as +// expected for the provide set of blocks. The blocks must already be stored in +// the database, or at least stored into the the passed transaction. It also +// tests several error conditions such as ensuring the expected errors are +// returned when fetching blocks, headers, and regions that don't exist. +func testFetchBlockIO(tc *testContext, tx database.Tx) bool { + // --------------------- + // Non-bulk Block IO API + // --------------------- + + // Test the individual block APIs one block at a time. Also, build the + // data needed to test the bulk APIs below while looping. + allBlockHashes := make([]wire.ShaHash, len(tc.blocks)) + allBlockBytes := make([][]byte, len(tc.blocks)) + allBlockTxLocs := make([][]wire.TxLoc, len(tc.blocks)) + allBlockRegions := make([]database.BlockRegion, len(tc.blocks)) + for i, block := range tc.blocks { + blockHash := block.Sha() + allBlockHashes[i] = *blockHash + + blockBytes, err := block.Bytes() + if err != nil { + tc.t.Errorf("block.Bytes(%d): unexpected error: %v", i, + err) + return false + } + allBlockBytes[i] = blockBytes + + txLocs, err := block.TxLoc() + if err != nil { + tc.t.Errorf("block.TxLoc(%d): unexpected error: %v", i, + err) + return false + } + allBlockTxLocs[i] = txLocs + + // Ensure the block data fetched from the database matches the + // expected bytes. + gotBlockBytes, err := tx.FetchBlock(blockHash) + if err != nil { + tc.t.Errorf("FetchBlock(%s): unexpected error: %v", + blockHash, err) + return false + } + if !bytes.Equal(gotBlockBytes, blockBytes) { + tc.t.Errorf("FetchBlock(%s): bytes mismatch: got %x, "+ + "want %x", blockHash, gotBlockBytes, blockBytes) + return false + } + + // Ensure the block header fetched from the database matches the + // expected bytes. + wantHeaderBytes := blockBytes[0:wire.MaxBlockHeaderPayload] + gotHeaderBytes, err := tx.FetchBlockHeader(blockHash) + if err != nil { + tc.t.Errorf("FetchBlockHeader(%s): unexpected error: %v", + blockHash, err) + return false + } + if !bytes.Equal(gotHeaderBytes, wantHeaderBytes) { + tc.t.Errorf("FetchBlockHeader(%s): bytes mismatch: "+ + "got %x, want %x", blockHash, gotHeaderBytes, + wantHeaderBytes) + return false + } + + // Ensure the first transaction fetched as a block region from + // the database matches the expected bytes. 
+		// Ensure the first transaction fetched as a block region from
+		// the database matches the expected bytes.
+		region := database.BlockRegion{
+			Hash:   blockHash,
+			Offset: uint32(txLocs[0].TxStart),
+			Len:    uint32(txLocs[0].TxLen),
+		}
+		allBlockRegions[i] = region
+		endRegionOffset := region.Offset + region.Len
+		wantRegionBytes := blockBytes[region.Offset:endRegionOffset]
+		gotRegionBytes, err := tx.FetchBlockRegion(&region)
+		if err != nil {
+			tc.t.Errorf("FetchBlockRegion(%s): unexpected error: %v",
+				blockHash, err)
+			return false
+		}
+		if !bytes.Equal(gotRegionBytes, wantRegionBytes) {
+			tc.t.Errorf("FetchBlockRegion(%s): bytes mismatch: "+
+				"got %x, want %x", blockHash, gotRegionBytes,
+				wantRegionBytes)
+			return false
+		}
+
+		// Ensure the database reports that it has the block.
+		hasBlock, err := tx.HasBlock(blockHash)
+		if err != nil {
+			tc.t.Errorf("HasBlock(%s): unexpected error: %v",
+				blockHash, err)
+			return false
+		}
+		if !hasBlock {
+			tc.t.Errorf("HasBlock(%s): database claims it doesn't "+
+				"have the block when it should", blockHash)
+			return false
+		}
+
+		// -----------------------
+		// Invalid blocks/regions.
+		// -----------------------
+
+		// Ensure fetching a block that doesn't exist returns the
+		// expected error.
+		badBlockHash := &wire.ShaHash{}
+		testName := fmt.Sprintf("FetchBlock(%s) invalid block",
+			badBlockHash)
+		wantErrCode := database.ErrBlockNotFound
+		_, err = tx.FetchBlock(badBlockHash)
+		if !checkDbError(tc.t, testName, err, wantErrCode) {
+			return false
+		}
+
+		// Ensure fetching a block header that doesn't exist returns
+		// the expected error.
+		testName = fmt.Sprintf("FetchBlockHeader(%s) invalid block",
+			badBlockHash)
+		_, err = tx.FetchBlockHeader(badBlockHash)
+		if !checkDbError(tc.t, testName, err, wantErrCode) {
+			return false
+		}
+
+		// Ensure fetching a block region in a block that doesn't exist
+		// returns the expected error.
+		testName = fmt.Sprintf("FetchBlockRegion(%s) invalid hash",
+			badBlockHash)
+		wantErrCode = database.ErrBlockNotFound
+		region.Hash = badBlockHash
+		region.Offset = ^uint32(0)
+		_, err = tx.FetchBlockRegion(&region)
+		if !checkDbError(tc.t, testName, err, wantErrCode) {
+			return false
+		}
+
+		// Ensure fetching a block region that is out of bounds returns
+		// the expected error.
+		testName = fmt.Sprintf("FetchBlockRegion(%s) invalid region",
+			blockHash)
+		wantErrCode = database.ErrBlockRegionInvalid
+		region.Hash = blockHash
+		region.Offset = ^uint32(0)
+		_, err = tx.FetchBlockRegion(&region)
+		if !checkDbError(tc.t, testName, err, wantErrCode) {
+			return false
+		}
+	}
+
+	// -----------------
+	// Bulk Block IO API
+	// -----------------
+
+	// Ensure the bulk block data fetched from the database matches the
+	// expected bytes.
+	blockData, err := tx.FetchBlocks(allBlockHashes)
+	if err != nil {
+		tc.t.Errorf("FetchBlocks: unexpected error: %v", err)
+		return false
+	}
+	if len(blockData) != len(allBlockBytes) {
+		tc.t.Errorf("FetchBlocks: unexpected number of results - got "+
+			"%d, want %d", len(blockData), len(allBlockBytes))
+		return false
+	}
+	for i := 0; i < len(blockData); i++ {
+		blockHash := allBlockHashes[i]
+		wantBlockBytes := allBlockBytes[i]
+		gotBlockBytes := blockData[i]
+		if !bytes.Equal(gotBlockBytes, wantBlockBytes) {
+			tc.t.Errorf("FetchBlocks(%s): bytes mismatch: got %x, "+
+				"want %x", blockHash, gotBlockBytes,
+				wantBlockBytes)
+			return false
+		}
+	}
+
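As the loop above illustrates for FetchBlocks, the bulk calls take one slice and return one parallel slice, failing as a whole when any requested block is missing. For example, many headers can be retrieved in a single call (fetchHeaders is a hypothetical wrapper):

	// fetchHeaders returns the serialized headers for the passed hashes
	// using a single bulk call inside a read-only transaction.
	func fetchHeaders(db database.DB, hashes []wire.ShaHash) ([][]byte, error) {
		var headers [][]byte
		err := db.View(func(tx database.Tx) error {
			var err error
			headers, err = tx.FetchBlockHeaders(hashes)
			return err
		})
		return headers, err
	}

+	// Ensure the bulk block headers fetched from the database match the
+	// expected bytes.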
+ blockHeaderData, err := tx.FetchBlockHeaders(allBlockHashes) + if err != nil { + tc.t.Errorf("FetchBlockHeaders: unexpected error: %v", err) + return false + } + if len(blockHeaderData) != len(allBlockBytes) { + tc.t.Errorf("FetchBlockHeaders: unexpected number of results "+ + "- got %d, want %d", len(blockHeaderData), + len(allBlockBytes)) + return false + } + for i := 0; i < len(blockHeaderData); i++ { + blockHash := allBlockHashes[i] + wantHeaderBytes := allBlockBytes[i][0:wire.MaxBlockHeaderPayload] + gotHeaderBytes := blockHeaderData[i] + if !bytes.Equal(gotHeaderBytes, wantHeaderBytes) { + tc.t.Errorf("FetchBlockHeaders(%s): bytes mismatch: "+ + "got %x, want %x", blockHash, gotHeaderBytes, + wantHeaderBytes) + return false + } + } + + // Ensure the first transaction of every block fetched in bulk block + // regions from the database matches the expected bytes. + allRegionBytes, err := tx.FetchBlockRegions(allBlockRegions) + if err != nil { + tc.t.Errorf("FetchBlockRegions: unexpected error: %v", err) + return false + + } + if len(allRegionBytes) != len(allBlockRegions) { + tc.t.Errorf("FetchBlockRegions: unexpected number of results "+ + "- got %d, want %d", len(allRegionBytes), + len(allBlockRegions)) + return false + } + for i, gotRegionBytes := range allRegionBytes { + region := &allBlockRegions[i] + endRegionOffset := region.Offset + region.Len + wantRegionBytes := blockData[i][region.Offset:endRegionOffset] + if !bytes.Equal(gotRegionBytes, wantRegionBytes) { + tc.t.Errorf("FetchBlockRegions(%d): bytes mismatch: "+ + "got %x, want %x", i, gotRegionBytes, + wantRegionBytes) + return false + } + } + + // Ensure the bulk determination of whether a set of block hashes are in + // the database returns true for all loaded blocks. + hasBlocks, err := tx.HasBlocks(allBlockHashes) + if err != nil { + tc.t.Errorf("HasBlocks: unexpected error: %v", err) + return false + } + for i, hasBlock := range hasBlocks { + if !hasBlock { + tc.t.Errorf("HasBlocks(%d): should have block", i) + return false + } + } + + // ----------------------- + // Invalid blocks/regions. + // ----------------------- + + // Ensure fetching blocks for which one doesn't exist returns the + // expected error. + testName := "FetchBlocks invalid hash" + badBlockHashes := make([]wire.ShaHash, len(allBlockHashes)+1) + copy(badBlockHashes, allBlockHashes) + badBlockHashes[len(badBlockHashes)-1] = wire.ShaHash{} + wantErrCode := database.ErrBlockNotFound + _, err = tx.FetchBlocks(badBlockHashes) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + // Ensure fetching block headers for which one doesn't exist returns the + // expected error. + testName = "FetchBlockHeaders invalid hash" + _, err = tx.FetchBlockHeaders(badBlockHashes) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + // Ensure fetching block regions for which one of blocks doesn't exist + // returns expected error. + testName = "FetchBlockRegions invalid hash" + badBlockRegions := make([]database.BlockRegion, len(allBlockRegions)+1) + copy(badBlockRegions, allBlockRegions) + badBlockRegions[len(badBlockRegions)-1].Hash = &wire.ShaHash{} + wantErrCode = database.ErrBlockNotFound + _, err = tx.FetchBlockRegions(badBlockRegions) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + // Ensure fetching block regions that are out of bounds returns the + // expected error. 
+ testName = "FetchBlockRegions invalid regions" + badBlockRegions = badBlockRegions[:len(badBlockRegions)-1] + for i := range badBlockRegions { + badBlockRegions[i].Offset = ^uint32(0) + } + wantErrCode = database.ErrBlockRegionInvalid + _, err = tx.FetchBlockRegions(badBlockRegions) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + return true +} + +// testBlockIOTxInterface ensures that the block IO interface works as expected +// for both managed read/write and manual transactions. This function leaves +// all of the stored blocks in the database. +func testBlockIOTxInterface(tc *testContext) bool { + // Ensure attempting to store a block with a read-only transaction fails + // with the expected error. + err := tc.db.View(func(tx database.Tx) error { + wantErrCode := database.ErrTxNotWritable + for i, block := range tc.blocks { + testName := fmt.Sprintf("StoreBlock(%d) on ro tx", i) + err := tx.StoreBlock(block) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return errSubTestFail + } + } + + return nil + }) + if err != nil { + if err != errSubTestFail { + tc.t.Errorf("%v", err) + } + return false + } + + // Populate the database with loaded blocks and ensure all of the data + // fetching APIs work properly on them within the transaction before a + // commit or rollback. Then, force a rollback so the code below can + // ensure none of the data actually gets stored. + forceRollbackError := fmt.Errorf("force rollback") + err = tc.db.Update(func(tx database.Tx) error { + // Store all blocks in the same transaction. + for i, block := range tc.blocks { + err := tx.StoreBlock(block) + if err != nil { + tc.t.Errorf("StoreBlock #%d: unexpected error: "+ + "%v", i, err) + return errSubTestFail + } + } + + // Ensure attempting to store the same block again, before the + // transaction has been committed, returns the expected error. + wantErrCode := database.ErrBlockExists + for i, block := range tc.blocks { + testName := fmt.Sprintf("duplicate block entry #%d "+ + "(before commit)", i) + err := tx.StoreBlock(block) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return errSubTestFail + } + } + + // Ensure that all data fetches from the stored blocks before + // the transaction has been committed work as expected. + if !testFetchBlockIO(tc, tx) { + return errSubTestFail + } + + return forceRollbackError + }) + if err != forceRollbackError { + if err == errSubTestFail { + return false + } + + tc.t.Errorf("Update: inner function error not returned - got "+ + "%v, want %v", err, forceRollbackError) + return false + } + + // Ensure rollback was successful + err = tc.db.View(func(tx database.Tx) error { + if !testFetchBlockIOMissing(tc, tx) { + return errSubTestFail + } + return nil + }) + if err != nil { + if err != errSubTestFail { + tc.t.Errorf("%v", err) + } + return false + } + + // Populate the database with loaded blocks and ensure all of the data + // fetching APIs work properly. + err = tc.db.Update(func(tx database.Tx) error { + // Store a bunch of blocks in the same transaction. + for i, block := range tc.blocks { + err := tx.StoreBlock(block) + if err != nil { + tc.t.Errorf("StoreBlock #%d: unexpected error: "+ + "%v", i, err) + return errSubTestFail + } + } + + // Ensure attempting to store the same block again while in the + // same transaction, but before it has been committed, returns + // the expected error. 
+ for i, block := range tc.blocks { + testName := fmt.Sprintf("duplicate block entry #%d "+ + "(before commit)", i) + wantErrCode := database.ErrBlockExists + err := tx.StoreBlock(block) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return errSubTestFail + } + } + + // Ensure that all data fetches from the stored blocks before + // the transaction has been committed work as expected. + if !testFetchBlockIO(tc, tx) { + return errSubTestFail + } + + return nil + }) + if err != nil { + if err != errSubTestFail { + tc.t.Errorf("%v", err) + } + return false + } + + // Ensure all data fetch tests work as expected using a managed + // read-only transaction after the data was successfully committed + // above. + err = tc.db.View(func(tx database.Tx) error { + if !testFetchBlockIO(tc, tx) { + return errSubTestFail + } + + return nil + }) + if err != nil { + if err != errSubTestFail { + tc.t.Errorf("%v", err) + } + return false + } + + // Ensure all data fetch tests work as expected using a managed + // read-write transaction after the data was successfully committed + // above. + err = tc.db.Update(func(tx database.Tx) error { + if !testFetchBlockIO(tc, tx) { + return errSubTestFail + } + + // Ensure attempting to store existing blocks again returns the + // expected error. Note that this is different from the + // previous version since this is a new transaction after the + // blocks have been committed. + wantErrCode := database.ErrBlockExists + for i, block := range tc.blocks { + testName := fmt.Sprintf("duplicate block entry #%d "+ + "(before commit)", i) + err := tx.StoreBlock(block) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return errSubTestFail + } + } + + return nil + }) + if err != nil { + if err != errSubTestFail { + tc.t.Errorf("%v", err) + } + return false + } + + return true +} + +// testClosedTxInterface ensures that both the metadata and block IO API +// functions behave as expected when attempted against a closed transaction. +func testClosedTxInterface(tc *testContext, tx database.Tx) bool { + wantErrCode := database.ErrTxClosed + bucket := tx.Metadata() + cursor := tx.Metadata().Cursor() + bucketName := []byte("closedtxbucket") + keyName := []byte("closedtxkey") + + // ------------ + // Metadata API + // ------------ + + // Ensure that attempting to get an existing bucket returns nil when the + // transaction is closed. + if b := bucket.Bucket(bucketName); b != nil { + tc.t.Errorf("Bucket: did not return nil on closed tx") + return false + } + + // Ensure CreateBucket returns expected error. + testName := "CreateBucket on closed tx" + _, err := bucket.CreateBucket(bucketName) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + // Ensure CreateBucketIfNotExists returns expected error. + testName = "CreateBucketIfNotExists on closed tx" + _, err = bucket.CreateBucketIfNotExists(bucketName) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + // Ensure Delete returns expected error. + testName = "Delete on closed tx" + err = bucket.Delete(keyName) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + // Ensure DeleteBucket returns expected error. + testName = "DeleteBucket on closed tx" + err = bucket.DeleteBucket(bucketName) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + // Ensure ForEach returns expected error. 
+ testName = "ForEach on closed tx" + err = bucket.ForEach(nil) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + // Ensure ForEachBucket returns expected error. + testName = "ForEachBucket on closed tx" + err = bucket.ForEachBucket(nil) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + // Ensure Get returns expected error. + testName = "Get on closed tx" + if k := bucket.Get(keyName); k != nil { + tc.t.Errorf("Get: did not return nil on closed tx") + return false + } + + // Ensure Put returns expected error. + testName = "Put on closed tx" + err = bucket.Put(keyName, []byte("test")) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + // ------------------- + // Metadata Cursor API + // ------------------- + + // Ensure attempting to get a bucket from a cursor on a closed tx gives + // back nil. + if b := cursor.Bucket(); b != nil { + tc.t.Error("Cursor.Bucket: returned non-nil on closed tx") + return false + } + + // Ensure Cursor.Delete returns expected error. + testName = "Cursor.Delete on closed tx" + err = cursor.Delete() + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + // Ensure Cursor.First on a closed tx returns false and nil key/value. + if cursor.First() { + tc.t.Error("Cursor.First: claims ok on closed tx") + return false + } + if cursor.Key() != nil || cursor.Value() != nil { + tc.t.Error("Cursor.First: key and/or value are not nil on " + + "closed tx") + return false + } + + // Ensure Cursor.Last on a closed tx returns false and nil key/value. + if cursor.Last() { + tc.t.Error("Cursor.Last: claims ok on closed tx") + return false + } + if cursor.Key() != nil || cursor.Value() != nil { + tc.t.Error("Cursor.Last: key and/or value are not nil on " + + "closed tx") + return false + } + + // Ensure Cursor.Next on a closed tx returns false and nil key/value. + if cursor.Next() { + tc.t.Error("Cursor.Next: claims ok on closed tx") + return false + } + if cursor.Key() != nil || cursor.Value() != nil { + tc.t.Error("Cursor.Next: key and/or value are not nil on " + + "closed tx") + return false + } + + // Ensure Cursor.Prev on a closed tx returns false and nil key/value. + if cursor.Prev() { + tc.t.Error("Cursor.Prev: claims ok on closed tx") + return false + } + if cursor.Key() != nil || cursor.Value() != nil { + tc.t.Error("Cursor.Prev: key and/or value are not nil on " + + "closed tx") + return false + } + + // Ensure Cursor.Seek on a closed tx returns false and nil key/value. + if cursor.Seek([]byte{}) { + tc.t.Error("Cursor.Seek: claims ok on closed tx") + return false + } + if cursor.Key() != nil || cursor.Value() != nil { + tc.t.Error("Cursor.Seek: key and/or value are not nil on " + + "closed tx") + return false + } + + // --------------------- + // Non-bulk Block IO API + // --------------------- + + // Test the individual block APIs one block at a time to ensure they + // return the expected error. Also, build the data needed to test the + // bulk APIs below while looping. + allBlockHashes := make([]wire.ShaHash, len(tc.blocks)) + allBlockRegions := make([]database.BlockRegion, len(tc.blocks)) + for i, block := range tc.blocks { + blockHash := block.Sha() + allBlockHashes[i] = *blockHash + + txLocs, err := block.TxLoc() + if err != nil { + tc.t.Errorf("block.TxLoc(%d): unexpected error: %v", i, + err) + return false + } + + // Ensure StoreBlock returns expected error. 
+ testName = "StoreBlock on closed tx" + err = tx.StoreBlock(block) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + // Ensure FetchBlock returns expected error. + testName = fmt.Sprintf("FetchBlock #%d on closed tx", i) + _, err = tx.FetchBlock(blockHash) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + // Ensure FetchBlockHeader returns expected error. + testName = fmt.Sprintf("FetchBlockHeader #%d on closed tx", i) + _, err = tx.FetchBlockHeader(blockHash) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + // Ensure the first transaction fetched as a block region from + // the database returns the expected error. + region := database.BlockRegion{ + Hash: blockHash, + Offset: uint32(txLocs[0].TxStart), + Len: uint32(txLocs[0].TxLen), + } + allBlockRegions[i] = region + _, err = tx.FetchBlockRegion(®ion) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + // Ensure HasBlock returns expected error. + testName = fmt.Sprintf("HasBlock #%d on closed tx", i) + _, err = tx.HasBlock(blockHash) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + } + + // ----------------- + // Bulk Block IO API + // ----------------- + + // Ensure FetchBlocks returns expected error. + testName = "FetchBlocks on closed tx" + _, err = tx.FetchBlocks(allBlockHashes) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + // Ensure FetchBlockHeaders returns expected error. + testName = "FetchBlockHeaders on closed tx" + _, err = tx.FetchBlockHeaders(allBlockHashes) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + // Ensure FetchBlockRegions returns expected error. + testName = "FetchBlockRegions on closed tx" + _, err = tx.FetchBlockRegions(allBlockRegions) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + // Ensure HasBlocks returns expected error. + testName = "HasBlocks on closed tx" + _, err = tx.HasBlocks(allBlockHashes) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + // --------------- + // Commit/Rollback + // --------------- + + // Ensure that attempting to rollback or commit a transaction that is + // already closed returns the expected error. + err = tx.Rollback() + if !checkDbError(tc.t, "closed tx rollback", err, wantErrCode) { + return false + } + err = tx.Commit() + if !checkDbError(tc.t, "closed tx commit", err, wantErrCode) { + return false + } + + return true +} + +// testTxClosed ensures that both the metadata and block IO API functions behave +// as expected when attempted against both read-only and read-write +// transactions. +func testTxClosed(tc *testContext) bool { + bucketName := []byte("closedtxbucket") + keyName := []byte("closedtxkey") + + // Start a transaction, create a bucket and key used for testing, and + // immediately perform a commit on it so it is closed. 
+	tx, err := tc.db.Begin(true)
+	if err != nil {
+		tc.t.Errorf("Begin(true): unexpected error: %v", err)
+		return false
+	}
+	defer rollbackOnPanic(tc.t, tx)
+	if _, err := tx.Metadata().CreateBucket(bucketName); err != nil {
+		tc.t.Errorf("CreateBucket: unexpected error: %v", err)
+		return false
+	}
+	if err := tx.Metadata().Put(keyName, []byte("test")); err != nil {
+		tc.t.Errorf("Put: unexpected error: %v", err)
+		return false
+	}
+	if err := tx.Commit(); err != nil {
+		tc.t.Errorf("Commit: unexpected error: %v", err)
+		return false
+	}
+
+	// Ensure invoking all of the functions on the closed read-write
+	// transaction behave as expected.
+	if !testClosedTxInterface(tc, tx) {
+		return false
+	}
+
+	// Repeat the tests with a rolled-back read-only transaction.
+	tx, err = tc.db.Begin(false)
+	if err != nil {
+		tc.t.Errorf("Begin(false): unexpected error: %v", err)
+		return false
+	}
+	defer rollbackOnPanic(tc.t, tx)
+	if err := tx.Rollback(); err != nil {
+		tc.t.Errorf("Rollback: unexpected error: %v", err)
+		return false
+	}
+
+	// Ensure invoking all of the functions on the closed read-only
+	// transaction behave as expected.
+	return testClosedTxInterface(tc, tx)
+}
+
+// testConcurrecy ensures the database properly supports concurrent readers and
+// only a single writer. It also ensures views act as snapshots at the time
+// they are acquired.
+func testConcurrecy(tc *testContext) bool {
+	// sleepTime is how long each of the concurrent readers should sleep to
+	// aid in detection of whether or not the data is actually being read
+	// concurrently. It starts with a sane lower bound.
+	var sleepTime = time.Millisecond * 250
+
+	// Determine approximately how long it takes for a single block read.
+	// When it's longer than the default minimum sleep time, adjust the
+	// sleep time to help prevent durations that are too short which would
+	// cause erroneous test failures on slower systems.
+	startTime := time.Now()
+	err := tc.db.View(func(tx database.Tx) error {
+		_, err := tx.FetchBlock(tc.blocks[0].Sha())
+		if err != nil {
+			return err
+		}
+		return nil
+	})
+	if err != nil {
+		tc.t.Errorf("Unexpected error in view: %v", err)
+		return false
+	}
+	elapsed := time.Now().Sub(startTime)
+	if sleepTime < elapsed {
+		sleepTime = elapsed
+	}
+	tc.t.Logf("Time to load block 0: %v, using sleep time: %v", elapsed,
+		sleepTime)
+
+	// reader takes a block number to load and a channel to return the
+	// result of the operation on. It is used below to launch multiple
+	// concurrent readers.
+	numReaders := len(tc.blocks)
+	resultChan := make(chan bool, numReaders)
+	reader := func(blockNum int) {
+		err := tc.db.View(func(tx database.Tx) error {
+			time.Sleep(sleepTime)
+			_, err := tx.FetchBlock(tc.blocks[blockNum].Sha())
+			if err != nil {
+				return err
+			}
+			return nil
+		})
+		if err != nil {
+			tc.t.Errorf("Unexpected error in concurrent view: %v",
+				err)
+			resultChan <- false
+			return
+		}
+		resultChan <- true
+	}
+
+	// Start up several concurrent readers for the same block and wait for
+	// the results.
+	startTime = time.Now()
+	for i := 0; i < numReaders; i++ {
+		go reader(0)
+	}
+	for i := 0; i < numReaders; i++ {
+		if result := <-resultChan; !result {
+			return false
+		}
+	}
+	elapsed = time.Now().Sub(startTime)
+	tc.t.Logf("%d concurrent reads of same block elapsed: %v", numReaders,
+		elapsed)
+
+	// Consider it a failure if it took longer than half the time it would
+	// take with no concurrency.
+	if elapsed > sleepTime*time.Duration(numReaders/2) {
+		tc.t.Errorf("Concurrent views for same block did not appear to "+
+			"run simultaneously: elapsed %v", elapsed)
+		return false
+	}
+
+	// Start up several concurrent readers for different blocks and wait for
+	// the results.
+	startTime = time.Now()
+	for i := 0; i < numReaders; i++ {
+		go reader(i)
+	}
+	for i := 0; i < numReaders; i++ {
+		if result := <-resultChan; !result {
+			return false
+		}
+	}
+	elapsed = time.Now().Sub(startTime)
+	tc.t.Logf("%d concurrent reads of different blocks elapsed: %v",
+		numReaders, elapsed)
+
+	// Consider it a failure if it took longer than half the time it would
+	// take with no concurrency.
+	if elapsed > sleepTime*time.Duration(numReaders/2) {
+		tc.t.Errorf("Concurrent views for different blocks did not "+
+			"appear to run simultaneously: elapsed %v", elapsed)
+		return false
+	}
+
+	// Start up a few readers and wait for them to acquire views. Each
+	// reader waits for a signal from the writer to be finished to ensure
+	// that the data written by the writer is not seen by the view since it
+	// was started before the data was set.
+	concurrentKey := []byte("notthere")
+	concurrentVal := []byte("someval")
+	started := make(chan struct{})
+	writeComplete := make(chan struct{})
+	reader = func(blockNum int) {
+		err := tc.db.View(func(tx database.Tx) error {
+			started <- struct{}{}
+
+			// Wait for the writer to complete.
+			<-writeComplete
+
+			// Since this reader was created before the write took
+			// place, the data the writer added should not be
+			// visible.
+			val := tx.Metadata().Get(concurrentKey)
+			if val != nil {
+				return fmt.Errorf("%s should not be visible",
+					concurrentKey)
+			}
+			return nil
+		})
+		if err != nil {
+			tc.t.Errorf("Unexpected error in concurrent view: %v",
+				err)
+			resultChan <- false
+			return
+		}
+		resultChan <- true
+	}
+	for i := 0; i < numReaders; i++ {
+		go reader(0)
+	}
+	for i := 0; i < numReaders; i++ {
+		<-started
+	}
+
+	// All readers are started and waiting for completion of the writer.
+	// Set some data the readers are expecting to not find and signal the
+	// readers the write is done by closing the writeComplete channel.
+	err = tc.db.Update(func(tx database.Tx) error {
+		err := tx.Metadata().Put(concurrentKey, concurrentVal)
+		if err != nil {
+			return err
+		}
+		return nil
+	})
+	if err != nil {
+		tc.t.Errorf("Unexpected error in update: %v", err)
+		return false
+	}
+	close(writeComplete)
+
+	// Wait for reader results.
+	for i := 0; i < numReaders; i++ {
+		if result := <-resultChan; !result {
+			return false
+		}
+	}
+
+	// Start a few writers and ensure the total time is at least the
+	// writeSleepTime * numWriters. This ensures only one write transaction
+	// can be active at a time.
+	writeSleepTime := time.Millisecond * 250
+	writer := func() {
+		err := tc.db.Update(func(tx database.Tx) error {
+			time.Sleep(writeSleepTime)
+			return nil
+		})
+		if err != nil {
+			tc.t.Errorf("Unexpected error in concurrent update: %v",
+				err)
+			resultChan <- false
+			return
+		}
+		resultChan <- true
+	}
+	numWriters := 3
+	startTime = time.Now()
+	for i := 0; i < numWriters; i++ {
+		go writer()
+	}
+	for i := 0; i < numWriters; i++ {
+		if result := <-resultChan; !result {
+			return false
+		}
+	}
+	elapsed = time.Now().Sub(startTime)
+	tc.t.Logf("%d concurrent writers elapsed using sleep time %v: %v",
+		numWriters, writeSleepTime, elapsed)
+
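The snapshot check above is the guarantee callers lean on: a View observes the database state as of the moment it began, even when an Update commits while the view is still open. Compressed into a standalone sketch (snapshotDemo is hypothetical; the channel handshake mirrors the started/writeComplete pattern used in the test):

	// snapshotDemo opens a read-only view, commits a write while the view
	// is held open, and verifies the view never sees the new key.
	func snapshotDemo(db database.DB, key, value []byte) error {
		started := make(chan struct{})
		written := make(chan struct{})
		done := make(chan error, 1)
		go func() {
			done <- db.View(func(tx database.Tx) error {
				close(started)
				<-written // wait for the write below to commit
				if tx.Metadata().Get(key) != nil {
					return fmt.Errorf("%s leaked into snapshot", key)
				}
				return nil
			})
		}()
		<-started
		err := db.Update(func(tx database.Tx) error {
			return tx.Metadata().Put(key, value)
		})
		close(written)
		if err != nil {
			return err
		}
		return <-done
	}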
+	if elapsed < writeSleepTime*time.Duration(numWriters) {
+		tc.t.Errorf("Concurrent writes appeared to run simultaneously: "+
+			"elapsed %v", elapsed)
+		return false
+	}
+
+	return true
+}
+
+// testConcurrentClose ensures that closing the database with open transactions
+// blocks until the transactions are finished.
+//
+// The database will be closed upon returning from this function.
+func testConcurrentClose(tc *testContext) bool {
+	// Start up a few readers and wait for them to acquire views.  Each
+	// reader waits for a signal to complete to ensure the transactions
+	// stay open until they are explicitly signalled to be closed.
+	var activeReaders int32
+	numReaders := 3
+	started := make(chan struct{})
+	finishReaders := make(chan struct{})
+	resultChan := make(chan bool, numReaders+1)
+	reader := func() {
+		err := tc.db.View(func(tx database.Tx) error {
+			atomic.AddInt32(&activeReaders, 1)
+			started <- struct{}{}
+			<-finishReaders
+			atomic.AddInt32(&activeReaders, -1)
+			return nil
+		})
+		if err != nil {
+			tc.t.Errorf("Unexpected error in concurrent view: %v",
+				err)
+			resultChan <- false
+			return
+		}
+		resultChan <- true
+	}
+	for i := 0; i < numReaders; i++ {
+		go reader()
+	}
+	for i := 0; i < numReaders; i++ {
+		<-started
+	}
+
+	// Close the database in a separate goroutine.  This should block until
+	// the transactions are finished.  Once the close has taken place, the
+	// dbClosed channel is closed to signal the main goroutine below.
+	dbClosed := make(chan struct{})
+	go func() {
+		started <- struct{}{}
+		err := tc.db.Close()
+		close(dbClosed)
+		if err != nil {
+			tc.t.Errorf("Unexpected error in concurrent close: %v",
+				err)
+			resultChan <- false
+			return
+		}
+		resultChan <- true
+	}()
+	<-started
+
+	// Wait a short period and then signal the reader transactions to
+	// finish.  When the db closed channel is received, ensure there are no
+	// active readers open.
+	time.AfterFunc(time.Millisecond*250, func() { close(finishReaders) })
+	<-dbClosed
+	if nr := atomic.LoadInt32(&activeReaders); nr != 0 {
+		tc.t.Errorf("Close did not appear to block with active "+
+			"readers: %d active", nr)
+		return false
+	}
+
+	// Wait for all results.
+	for i := 0; i < numReaders+1; i++ {
+		if result := <-resultChan; !result {
+			return false
+		}
+	}
+
+	return true
+}
+
+// testInterface performs tests for the various interfaces of the database
+// package which require state in the database for the given database type.
+func testInterface(t *testing.T, db database.DB) {
+	// Create a test context to pass around.
+	context := testContext{t: t, db: db}
+
+	// Load the test blocks and store in the test context for use
+	// throughout the tests.
+	blocks, err := loadBlocks(t, blockDataFile, blockDataNet)
+	if err != nil {
+		t.Errorf("loadBlocks: Unexpected error: %v", err)
+		return
+	}
+	context.blocks = blocks
+
+	// Test the transaction metadata interface including managed and manual
+	// transactions as well as buckets.
+	if !testMetadataTxInterface(&context) {
+		return
+	}
+
+	// Test the transaction block IO interface using managed and manual
+	// transactions.  This function leaves all of the stored blocks in the
+	// database since they're used later.
+	if !testBlockIOTxInterface(&context) {
+		return
+	}
+
+	// Ensure all of the transaction interface functions behave as expected
+	// when used against a closed transaction.
+	if !testTxClosed(&context) {
+		return
+	}
+
+	// Test the database properly supports concurrency.
+	if !testConcurrency(&context) {
+		return
+	}
+
+	// Test that closing the database with open transactions blocks until
+	// the transactions are finished.
+	//
+	// The database will be closed upon returning from this function, so it
+	// must be the last thing called.
+	testConcurrentClose(&context)
+}
diff --git a/database2/ffboltdb/mockfile_test.go b/database2/ffboltdb/mockfile_test.go
new file mode 100644
index 00000000000..a4d2d847127
--- /dev/null
+++ b/database2/ffboltdb/mockfile_test.go
@@ -0,0 +1,163 @@
+// Copyright (c) 2015 The btcsuite developers
+// Use of this source code is governed by an ISC
+// license that can be found in the LICENSE file.
+
+// This file is part of the ffboltdb package rather than the ffboltdb_test
+// package as it is part of the whitebox testing.
+
+package ffboltdb
+
+import (
+	"errors"
+	"io"
+	"sync"
+)
+
+// Errors used for the mock file.
+var (
+	// errMockFileClosed is used to indicate a mock file is closed.
+	errMockFileClosed = errors.New("file closed")
+
+	// errInvalidOffset is used to indicate an offset that is out of range
+	// for the file was provided.
+	errInvalidOffset = errors.New("invalid offset")
+
+	// errSyncFail is used to indicate a simulated sync failure.
+	errSyncFail = errors.New("simulated sync failure")
+)
+
+// mockFile implements the filer interface and is used in order to force
+// failures in the database code related to reading and writing from the flat
+// block files.  A maxSize of -1 is unlimited.
+type mockFile struct {
+	sync.RWMutex
+	maxSize      int64
+	data         []byte
+	forceSyncErr bool
+	closed       bool
+}
+
+// Close closes the mock file without releasing any data associated with it.
+// This allows it to be "reopened" without losing the data.
+//
+// This is part of the filer implementation.
+func (f *mockFile) Close() error {
+	f.Lock()
+	defer f.Unlock()
+
+	if f.closed {
+		return errMockFileClosed
+	}
+	f.closed = true
+	return nil
+}
+
+// ReadAt reads len(b) bytes from the mock file starting at byte offset off.
+// It returns the number of bytes read and the error, if any.  ReadAt always
+// returns a non-nil error when n < len(b).  At end of file, that error is
+// io.EOF.
+//
+// This is part of the filer implementation.
+func (f *mockFile) ReadAt(b []byte, off int64) (int, error) {
+	f.RLock()
+	defer f.RUnlock()
+
+	if f.closed {
+		return 0, errMockFileClosed
+	}
+	maxSize := int64(len(f.data))
+	if f.maxSize > -1 && maxSize > f.maxSize {
+		maxSize = f.maxSize
+	}
+	if off < 0 || off > maxSize {
+		return 0, errInvalidOffset
+	}
+
+	// Limit to the max size field, if set.
+	numToRead := int64(len(b))
+	endOffset := off + numToRead
+	if endOffset > maxSize {
+		numToRead = maxSize - off
+	}
+
+	copy(b, f.data[off:off+numToRead])
+	if numToRead < int64(len(b)) {
+		return int(numToRead), io.EOF
+	}
+	return int(numToRead), nil
+}
+
+// Truncate changes the size of the mock file.
+//
+// This is part of the filer implementation.
+func (f *mockFile) Truncate(size int64) error {
+	f.Lock()
+	defer f.Unlock()
+
+	if f.closed {
+		return errMockFileClosed
+	}
+	maxSize := int64(len(f.data))
+	if f.maxSize > -1 && maxSize > f.maxSize {
+		maxSize = f.maxSize
+	}
+	if size > maxSize {
+		return errInvalidOffset
+	}
+
+	f.data = f.data[:size]
+	return nil
+}
+
+// WriteAt writes len(b) bytes to the mock file starting at byte offset off.
+// It returns the number of bytes written and an error, if any.  WriteAt
+// returns a non-nil error any time n != len(b).
+//
+// This is part of the filer implementation.
+func (f *mockFile) WriteAt(b []byte, off int64) (int, error) {
+	f.Lock()
+	defer f.Unlock()
+
+	if f.closed {
+		return 0, errMockFileClosed
+	}
+	maxSize := f.maxSize
+	if maxSize < 0 {
+		maxSize = 100 * 1024 // 100KiB
+	}
+	if off < 0 || off > maxSize {
+		return 0, errInvalidOffset
+	}
+
+	// Limit to the max size field, if set, and grow the slice if needed.
+	numToWrite := int64(len(b))
+	if off+numToWrite > maxSize {
+		numToWrite = maxSize - off
+	}
+	if off+numToWrite > int64(len(f.data)) {
+		newData := make([]byte, off+numToWrite)
+		copy(newData, f.data)
+		f.data = newData
+	}
+
+	copy(f.data[off:], b[:numToWrite])
+	if numToWrite < int64(len(b)) {
+		return int(numToWrite), io.EOF
+	}
+	return int(numToWrite), nil
+}
+
+// Sync doesn't do anything for mock files.  However, it will return an error
+// if the mock file's forceSyncErr flag is set.
+//
+// This is part of the filer implementation.
+func (f *mockFile) Sync() error {
+	if f.forceSyncErr {
+		return errSyncFail
+	}
+
+	return nil
+}
+
+// Ensure the mockFile type implements the filer interface.
+var _ filer = (*mockFile)(nil)
diff --git a/database2/ffboltdb/reconcile.go b/database2/ffboltdb/reconcile.go
new file mode 100644
index 00000000000..2635842b9bb
--- /dev/null
+++ b/database2/ffboltdb/reconcile.go
@@ -0,0 +1,117 @@
+// Copyright (c) 2015 The btcsuite developers
+// Use of this source code is governed by an ISC
+// license that can be found in the LICENSE file.
+
+package ffboltdb
+
+import (
+	"fmt"
+	"hash/crc32"
+
+	database "github.com/btcsuite/btcd/database2"
+)
+
+// The serialized write cursor location format is:
+//
+//  [0:4]  Block file (4 bytes)
+//  [4:8]  File offset (4 bytes)
+//  [8:12] Castagnoli CRC-32 checksum (4 bytes)
+
+// serializeWriteRow serializes the current block file and offset where new
+// data will be written into a format suitable for storage into the metadata.
+func serializeWriteRow(curBlockFileNum, curFileOffset uint32) []byte {
+	var serializedRow [12]byte
+	byteOrder.PutUint32(serializedRow[0:4], curBlockFileNum)
+	byteOrder.PutUint32(serializedRow[4:8], curFileOffset)
+	checksum := crc32.Checksum(serializedRow[:8], castagnoli)
+	byteOrder.PutUint32(serializedRow[8:12], checksum)
+	return serializedRow[:]
+}
+
+// deserializeWriteRow deserializes the write cursor location stored in the
+// metadata.  Returns ErrCorruption if the checksum of the entry doesn't
+// match.
+func deserializeWriteRow(writeRow []byte) (uint32, uint32, error) {
+	// Ensure the checksum matches.  The checksum is at the end.
+	gotChecksum := crc32.Checksum(writeRow[:8], castagnoli)
+	wantChecksumBytes := writeRow[8:12]
+	wantChecksum := byteOrder.Uint32(wantChecksumBytes)
+	if gotChecksum != wantChecksum {
+		str := fmt.Sprintf("metadata for write cursor does not match "+
+			"the expected checksum - got %d, want %d", gotChecksum,
+			wantChecksum)
+		return 0, 0, makeDbErr(database.ErrCorruption, str, nil)
+	}
+
+	fileNum := byteOrder.Uint32(writeRow[0:4])
+	fileOffset := byteOrder.Uint32(writeRow[4:8])
+	return fileNum, fileOffset, nil
+}
+
+// reconcileDB reconciles the metadata with the flat block files on disk.  It
+// will also initialize the bolt database if the create flag is set.
+func reconcileDB(pdb *db, create bool) (database.DB, error) {
+	// Perform initial internal bucket and value creation during database
+	// creation.
+	if create {
+		if err := initBoltDB(pdb.boltDB); err != nil {
+			return nil, err
+		}
+	}
+
+	// Load the current write cursor position from the metadata.
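+	// Per the serialized write cursor format above, a valid row is exactly
+	// 12 bytes: the block file number and file offset in the package byte
+	// order followed by the Castagnoli CRC-32 of those first 8 bytes, so a
+	// row written by serializeWriteRow(2, 4096) decodes back to file 2,
+	// offset 4096 via deserializeWriteRow.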
+	var curFileNum, curOffset uint32
+	err := pdb.View(func(tx database.Tx) error {
+		writeRow := tx.Metadata().Get(writeLocKeyName)
+		if writeRow == nil {
+			str := "write cursor does not exist"
+			return makeDbErr(database.ErrCorruption, str, nil)
+		}
+
+		var err error
+		curFileNum, curOffset, err = deserializeWriteRow(writeRow)
+		return err
+	})
+	if err != nil {
+		return nil, err
+	}
+
+	// When the write cursor position found by scanning the block files on
+	// disk is AFTER the position the metadata believes to be true, truncate
+	// the files on disk to match the metadata.  This can be a fairly common
+	// occurrence in unclean shutdown scenarios while the block files are in
+	// the middle of being written.  Since the metadata isn't updated until
+	// after the block data is written, this is effectively just a rollback
+	// to the known good point before the unclean shutdown.
+	wc := pdb.store.writeCursor
+	if wc.curFileNum > curFileNum || (wc.curFileNum == curFileNum &&
+		wc.curOffset > curOffset) {
+
+		log.Info("Detected unclean shutdown - Repairing...")
+		log.Debugf("Metadata claims file %d, offset %d. Block data is "+
+			"at file %d, offset %d", curFileNum, curOffset,
+			wc.curFileNum, wc.curOffset)
+		pdb.store.handleRollback(curFileNum, curOffset)
+		log.Infof("Database sync complete")
+	}
+
+	// When the write cursor position found by scanning the block files on
+	// disk is BEFORE the position the metadata believes to be true, return
+	// a corruption error.  Since sync is called after each block is written
+	// and before the metadata is updated, this should only happen in the
+	// case of missing, deleted, or truncated block files, which generally
+	// is not an easily recoverable scenario.  In the future, it might be
+	// possible to rescan and rebuild the metadata from the block files,
+	// however, that would need to happen with coordination from a higher
+	// layer since it could invalidate other metadata.
+	if wc.curFileNum < curFileNum || (wc.curFileNum == curFileNum &&
+		wc.curOffset < curOffset) {
+
+		str := fmt.Sprintf("metadata claims file %d, offset %d, but "+
+			"block data is at file %d, offset %d", curFileNum,
+			curOffset, wc.curFileNum, wc.curOffset)
+		_ = log.Warnf("***Database corruption detected***: %v", str)
+		return nil, makeDbErr(database.ErrCorruption, str, nil)
+	}
+
+	return pdb, nil
+}
diff --git a/database2/ffboltdb/whitebox_test.go b/database2/ffboltdb/whitebox_test.go
new file mode 100644
index 00000000000..72a3bef618f
--- /dev/null
+++ b/database2/ffboltdb/whitebox_test.go
@@ -0,0 +1,810 @@
+// Copyright (c) 2015 The btcsuite developers
+// Use of this source code is governed by an ISC
+// license that can be found in the LICENSE file.
+
+// This file is part of the ffboltdb package rather than the ffboltdb_test
+// package as it provides whitebox testing.
+
+package ffboltdb
+
+import (
+	"compress/bzip2"
+	"encoding/binary"
+	"fmt"
+	"hash/crc32"
+	"io"
+	"os"
+	"path/filepath"
+	"testing"
+
+	"github.com/btcsuite/bolt"
+	"github.com/btcsuite/btcd/chaincfg"
+	database "github.com/btcsuite/btcd/database2"
+	"github.com/btcsuite/btcd/wire"
+	"github.com/btcsuite/btcutil"
+)
+
+var (
+	// blockDataNet is the expected network in the test block data.
+	blockDataNet = wire.MainNet
+
+	// blockDataFile is the path to a file containing the first 256 blocks
+	// of the block chain.
+	blockDataFile = filepath.Join("..", "testdata", "blocks1-256.bz2")
+
+	// errSubTestFail is used to signal that a sub test returned false.
+	errSubTestFail = fmt.Errorf("sub test failure")
+)
+
+// loadBlocks loads the blocks contained in the testdata directory and returns
+// a slice of them.
+func loadBlocks(t *testing.T, dataFile string, network wire.BitcoinNet) ([]*btcutil.Block, error) {
+	// Open the file that contains the blocks for reading.
+	fi, err := os.Open(dataFile)
+	if err != nil {
+		t.Errorf("failed to open file %v, err %v", dataFile, err)
+		return nil, err
+	}
+	defer func() {
+		if err := fi.Close(); err != nil {
+			t.Errorf("failed to close file %v: %v", dataFile, err)
+		}
+	}()
+	dr := bzip2.NewReader(fi)
+
+	// Set the first block as the genesis block.
+	blocks := make([]*btcutil.Block, 0, 256)
+	genesis := btcutil.NewBlock(chaincfg.MainNetParams.GenesisBlock)
+	blocks = append(blocks, genesis)
+
+	// Load the remaining blocks.
+	for height := 1; ; height++ {
+		var net uint32
+		err := binary.Read(dr, binary.LittleEndian, &net)
+		if err == io.EOF {
+			// Hit end of file at the expected offset.  No error.
+			break
+		}
+		if err != nil {
+			t.Errorf("Failed to load network type for block %d: %v",
+				height, err)
+			return nil, err
+		}
+		if net != uint32(network) {
+			t.Errorf("Block doesn't match network: %v expects %v",
+				net, network)
+			return nil, fmt.Errorf("block %d does not match "+
+				"network %v", height, network)
+		}
+
+		var blockLen uint32
+		err = binary.Read(dr, binary.LittleEndian, &blockLen)
+		if err != nil {
+			t.Errorf("Failed to load block size for block %d: %v",
+				height, err)
+			return nil, err
+		}
+
+		// Read the block.
+		blockBytes := make([]byte, blockLen)
+		_, err = io.ReadFull(dr, blockBytes)
+		if err != nil {
+			t.Errorf("Failed to load block %d: %v", height, err)
+			return nil, err
+		}
+
+		// Deserialize and store the block.
+		block, err := btcutil.NewBlockFromBytes(blockBytes)
+		if err != nil {
+			t.Errorf("Failed to parse block %v: %v", height, err)
+			return nil, err
+		}
+		blocks = append(blocks, block)
+	}
+
+	return blocks, nil
+}
+
+// checkDbError ensures the passed error is a database.Error with an error code
+// that matches the passed error code.
+func checkDbError(t *testing.T, testName string, gotErr error, wantErrCode database.ErrorCode) bool {
+	dbErr, ok := gotErr.(database.Error)
+	if !ok {
+		t.Errorf("%s: unexpected error type - got %T, want %T",
+			testName, gotErr, database.Error{})
+		return false
+	}
+	if dbErr.ErrorCode != wantErrCode {
+		t.Errorf("%s: unexpected error code - got %s (%s), want %s",
+			testName, dbErr.ErrorCode, dbErr.Description,
+			wantErrCode)
+		return false
+	}
+
+	return true
+}
+
+// testContext is used to store context information about a running test which
+// is passed into helper functions.
+type testContext struct {
+	t            *testing.T
+	db           database.DB
+	files        map[uint32]*lockableFile
+	maxFileSizes map[uint32]int64
+	blocks       []*btcutil.Block
+}
+
+// TestConvertErr ensures the bolt error to database error conversion works as
+// expected.
+func TestConvertErr(t *testing.T) {
+	t.Parallel()
+
+	tests := []struct {
+		boltErr     error
+		wantErrCode database.ErrorCode
+	}{
+		{bolt.ErrDatabaseNotOpen, database.ErrDbNotOpen},
+		{bolt.ErrInvalid, database.ErrInvalid},
+		{bolt.ErrTxNotWritable, database.ErrTxNotWritable},
+		{bolt.ErrTxClosed, database.ErrTxClosed},
+		{bolt.ErrBucketNotFound, database.ErrBucketNotFound},
+		{bolt.ErrBucketExists, database.ErrBucketExists},
+		{bolt.ErrBucketNameRequired, database.ErrBucketNameRequired},
+		{bolt.ErrKeyRequired, database.ErrKeyRequired},
+		{bolt.ErrKeyTooLarge, database.ErrKeyTooLarge},
+		{bolt.ErrValueTooLarge, database.ErrValueTooLarge},
+		{bolt.ErrIncompatibleValue, database.ErrIncompatibleValue},
+	}
+
+	for i, test := range tests {
+		gotErr := convertErr("test", test.boltErr)
+		if gotErr.ErrorCode != test.wantErrCode {
+			t.Errorf("convertErr #%d unexpected error - got %v, "+
+				"want %v", i, gotErr.ErrorCode, test.wantErrCode)
+			continue
+		}
+	}
+}
+
+// TestCornerCases ensures several corner cases which can happen when opening
+// a database and/or block files work as expected.
+func TestCornerCases(t *testing.T) {
+	t.Parallel()
+
+	// Create a file at the database path to force the open below to fail.
+	dbPath := filepath.Join(os.TempDir(), "ffboltdb-errors")
+	_ = os.RemoveAll(dbPath)
+	fi, err := os.Create(dbPath)
+	if err != nil {
+		t.Errorf("os.Create: unexpected error: %v", err)
+		return
+	}
+	fi.Close()
+
+	// Ensure creating a new database fails when a file exists where a
+	// directory is needed.
+	testName := "openDB: fail due to file at target location"
+	wantErrCode := database.ErrDriverSpecific
+	idb, err := openDB(dbPath, blockDataNet, true)
+	if !checkDbError(t, testName, err, wantErrCode) {
+		if err == nil {
+			idb.Close()
+		}
+		_ = os.RemoveAll(dbPath)
+		return
+	}
+
+	// Remove the file and create the database to run tests against.  It
+	// should be successful this time.
+	_ = os.RemoveAll(dbPath)
+	idb, err = openDB(dbPath, blockDataNet, true)
+	if err != nil {
+		t.Errorf("openDB: unexpected error: %v", err)
+		return
+	}
+	defer os.RemoveAll(dbPath)
+	defer idb.Close()
+
+	// Ensure attempting to write to a file that can't be created returns
+	// the expected error.
+	testName = "writeBlock: open file failure"
+	filePath := blockFilePath(dbPath, 0)
+	if err := os.Mkdir(filePath, 0755); err != nil {
+		t.Errorf("os.Mkdir: unexpected error: %v", err)
+		return
+	}
+	store := idb.(*db).store
+	_, err = store.writeBlock([]byte{0x00})
+	if !checkDbError(t, testName, err, database.ErrDriverSpecific) {
+		return
+	}
+	_ = os.RemoveAll(filePath)
+
+	// Ensure initialization errors in the underlying bolt database work as
+	// expected.
+	testName = "initBoltDB: reinitialization"
+	wantErrCode = database.ErrBucketExists
+	boltDB := idb.(*db).boltDB
+	err = initBoltDB(boltDB)
+	if !checkDbError(t, testName, err, wantErrCode) {
+		return
+	}
+
+	// Start a transaction and close the underlying bolt transaction out
+	// from under it.
+	dbTx, err := idb.Begin(true)
+	if err != nil {
+		t.Errorf("Begin: unexpected error: %v", err)
+		return
+	}
+	dbTx.(*transaction).boltTx.Rollback()
+
+	// Ensure errors in the underlying bolt database during a transaction
+	// commit are handled properly.
+	testName = "Commit: underlying bolt error"
+	wantErrCode = database.ErrTxClosed
+	err = dbTx.Commit()
+	if !checkDbError(t, testName, err, wantErrCode) {
+		return
+	}
+
+	// Reopen the transaction enough to force a rollback failure due to the
+	// underlying bolt tx being closed.
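+	// The database read lock is manually reacquired and the closed flag
+	// cleared so the transaction wrapper once again appears open while the
+	// underlying bolt transaction remains closed.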
+	dbTx.(*transaction).db.mtx.RLock()
+	dbTx.(*transaction).closed = false
+
+	// Ensure errors in the underlying bolt database during a transaction
+	// rollback are handled properly.
+	testName = "Rollback: underlying bolt error"
+	err = dbTx.Rollback()
+	if !checkDbError(t, testName, err, wantErrCode) {
+		return
+	}
+
+	// Ensure errors in ForEach due to the underlying bolt database are
+	// handled properly.
+	err = idb.Update(func(tx database.Tx) error {
+		// Close the underlying bolt transaction out from under the
+		// transaction instance.
+		tx.(*transaction).boltTx.Rollback()
+
+		testName = "ForEach: underlying bolt error"
+		wantErrCode = database.ErrTxClosed
+		err = tx.Metadata().ForEach(func(k, v []byte) error {
+			return nil
+		})
+		if !checkDbError(t, testName, err, wantErrCode) {
+			return errSubTestFail
+		}
+
+		// The Update is expected to fail since the underlying bolt
+		// transaction was closed.
+		return errSubTestFail
+	})
+	if err != nil {
+		if err != errSubTestFail {
+			t.Errorf("Update: unexpected error: %v", err)
+		}
+		return
+	}
+
+	// Close the underlying bolt database out from under the database
+	// instance.
+	boltDB.Close()
+
+	// Ensure the View handles errors in the underlying bolt database
+	// properly.
+	testName = "View: underlying bolt error"
+	wantErrCode = database.ErrDbNotOpen
+	err = idb.View(func(tx database.Tx) error {
+		return nil
+	})
+	if !checkDbError(t, testName, err, wantErrCode) {
+		return
+	}
+
+	// Ensure the Update handles errors in the underlying bolt database
+	// properly.
+	testName = "Update: underlying bolt error"
+	err = idb.Update(func(tx database.Tx) error {
+		return nil
+	})
+	if !checkDbError(t, testName, err, wantErrCode) {
+		return
+	}
+}
+
+// resetDatabase removes everything from the opened database associated with
+// the test context including all metadata and the mock files.
+func resetDatabase(tc *testContext) bool {
+	// Reset the metadata.
+	err := tc.db.Update(func(tx database.Tx) error {
+		// Remove all the keys using a cursor while also generating a
+		// list of buckets.  It's not safe to remove keys during ForEach
+		// iteration nor is it safe to remove buckets during cursor
+		// iteration, so this dual approach is needed.
+		var bucketNames [][]byte
+		cursor := tx.Metadata().Cursor()
+		for ok := cursor.First(); ok; ok = cursor.Next() {
+			if cursor.Value() != nil {
+				if err := cursor.Delete(); err != nil {
+					return err
+				}
+			} else {
+				bucketNames = append(bucketNames, cursor.Key())
+			}
+		}
+
+		// Remove the buckets.
+		for _, k := range bucketNames {
+			if err := tx.Metadata().DeleteBucket(k); err != nil {
+				return err
+			}
+		}
+
+		_, err := tx.Metadata().CreateBucket(blockIdxBucketName)
+		return err
+	})
+	if err != nil {
+		tc.t.Errorf("Update: unexpected error: %v", err)
+		return false
+	}
+
+	// Reset the mock files.
+	store := tc.db.(*db).store
+	wc := store.writeCursor
+	wc.curFile.Lock()
+	if wc.curFile.file != nil {
+		wc.curFile.file.Close()
+		wc.curFile.file = nil
+	}
+	wc.curFile.Unlock()
+	wc.Lock()
+	wc.curFileNum = 0
+	wc.curOffset = 0
+	wc.Unlock()
+	tc.files = make(map[uint32]*lockableFile)
+	tc.maxFileSizes = make(map[uint32]int64)
+	return true
+}
+
+// testWriteFailures tests various failure paths when writing to the block
+// files.
+func testWriteFailures(tc *testContext) bool {
+	if !resetDatabase(tc) {
+		return false
+	}
+
+	// Ensure file sync errors during writeBlock return the expected error.
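+	// This is simulated by temporarily swapping the write cursor's current
+	// file for a mock file whose Sync method is forced to fail.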
+	store := tc.db.(*db).store
+	testName := "writeBlock: file sync failure"
+	store.writeCursor.Lock()
+	oldFile := store.writeCursor.curFile
+	store.writeCursor.curFile = &lockableFile{
+		file: &mockFile{forceSyncErr: true, maxSize: -1},
+	}
+	store.writeCursor.Unlock()
+	_, err := store.writeBlock([]byte{0x00})
+	if !checkDbError(tc.t, testName, err, database.ErrDriverSpecific) {
+		return false
+	}
+	store.writeCursor.Lock()
+	store.writeCursor.curFile = oldFile
+	store.writeCursor.Unlock()
+
+	// Force errors in the various error paths when writing data by using
+	// mock files with a limited max size.
+	block0Bytes, _ := tc.blocks[0].Bytes()
+	tests := []struct {
+		fileNum uint32
+		maxSize int64
+	}{
+		// Force an error when writing the network bytes.
+		{fileNum: 0, maxSize: 2},
+
+		// Force an error when writing the block size.
+		{fileNum: 0, maxSize: 6},
+
+		// Force an error when writing the block.
+		{fileNum: 0, maxSize: 17},
+
+		// Force an error when writing the checksum.
+		{fileNum: 0, maxSize: int64(len(block0Bytes)) + 10},
+
+		// Force an error after writing enough blocks to force multiple
+		// files.
+		{fileNum: 15, maxSize: 1},
+	}
+
+	for i, test := range tests {
+		if !resetDatabase(tc) {
+			return false
+		}
+
+		// Ensure storing the specified number of blocks using a mock
+		// file that fails the write fails when the transaction is
+		// committed, not when the block is stored.
+		tc.maxFileSizes = map[uint32]int64{test.fileNum: test.maxSize}
+		err := tc.db.Update(func(tx database.Tx) error {
+			for i, block := range tc.blocks {
+				err := tx.StoreBlock(block)
+				if err != nil {
+					tc.t.Errorf("StoreBlock (%d): unexpected "+
+						"error: %v", i, err)
+					return errSubTestFail
+				}
+			}
+
+			return nil
+		})
+		testName := fmt.Sprintf("Force update commit failure - test "+
+			"%d, fileNum %d, maxsize %d", i, test.fileNum,
+			test.maxSize)
+		if !checkDbError(tc.t, testName, err, database.ErrDriverSpecific) {
+			tc.t.Errorf("%v", err)
+			return false
+		}
+
+		// Ensure the commit rollback removed all extra files and data.
+		if len(tc.files) != 1 {
+			tc.t.Errorf("Update rollback: new files not removed - "+
+				"want 1 file, got %d", len(tc.files))
+			return false
+		}
+		if _, ok := tc.files[0]; !ok {
+			tc.t.Error("Update rollback: file 0 does not exist")
+			return false
+		}
+		file := tc.files[0].file.(*mockFile)
+		if len(file.data) != 0 {
+			tc.t.Errorf("Update rollback: file did not truncate - "+
+				"want len 0, got len %d", len(file.data))
+			return false
+		}
+	}
+
+	return true
+}
+
+// testBlockFileErrors ensures the database returns expected errors with
+// various file-related issues such as closed and missing files.
+func testBlockFileErrors(tc *testContext) bool {
+	if !resetDatabase(tc) {
+		return false
+	}
+
+	// Ensure errors in blockFile and openFile when requesting invalid file
+	// numbers.
+	store := tc.db.(*db).store
+	testName := "blockFile invalid file open"
+	_, err := store.blockFile(^uint32(0))
+	if !checkDbError(tc.t, testName, err, database.ErrDriverSpecific) {
+		return false
+	}
+	testName = "openFile invalid file open"
+	_, err = store.openFile(^uint32(0))
+	if !checkDbError(tc.t, testName, err, database.ErrDriverSpecific) {
+		return false
+	}
+
+	// Insert the first block into the mock file.
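+	// The read failure tests below require at least one block to be
+	// available in the mock file to fetch.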
+	err = tc.db.Update(func(tx database.Tx) error {
+		err := tx.StoreBlock(tc.blocks[0])
+		if err != nil {
+			tc.t.Errorf("StoreBlock: unexpected error: %v", err)
+			return errSubTestFail
+		}
+
+		return nil
+	})
+	if err != nil {
+		if err != errSubTestFail {
+			tc.t.Errorf("Update: unexpected error: %v", err)
+		}
+		return false
+	}
+
+	// Ensure errors in readBlock and readBlockRegion when requesting a
+	// file number that doesn't exist.
+	block0Hash := tc.blocks[0].Sha()
+	testName = "readBlock invalid file number"
+	invalidLoc := blockLocation{
+		blockFileNum: ^uint32(0),
+		blockLen:     80,
+	}
+	_, err = store.readBlock(block0Hash, invalidLoc)
+	if !checkDbError(tc.t, testName, err, database.ErrDriverSpecific) {
+		return false
+	}
+	testName = "readBlockRegion invalid file number"
+	_, err = store.readBlockRegion(invalidLoc, 0, 80)
+	if !checkDbError(tc.t, testName, err, database.ErrDriverSpecific) {
+		return false
+	}
+
+	// Close the block file out from under the database.
+	store.writeCursor.curFile.Lock()
+	store.writeCursor.curFile.file.Close()
+	store.writeCursor.curFile.Unlock()
+
+	// Ensure failures in FetchBlock and FetchBlockRegion(s) since the
+	// underlying file they need to read from has been closed.
+	err = tc.db.View(func(tx database.Tx) error {
+		testName = "FetchBlock closed file"
+		wantErrCode := database.ErrDriverSpecific
+		_, err := tx.FetchBlock(block0Hash)
+		if !checkDbError(tc.t, testName, err, wantErrCode) {
+			return errSubTestFail
+		}
+
+		testName = "FetchBlockRegion closed file"
+		regions := []database.BlockRegion{
+			{
+				Hash:   block0Hash,
+				Len:    80,
+				Offset: 0,
+			},
+		}
+		_, err = tx.FetchBlockRegion(&regions[0])
+		if !checkDbError(tc.t, testName, err, wantErrCode) {
+			return errSubTestFail
+		}
+
+		testName = "FetchBlockRegions closed file"
+		_, err = tx.FetchBlockRegions(regions)
+		if !checkDbError(tc.t, testName, err, wantErrCode) {
+			return errSubTestFail
+		}
+
+		return nil
+	})
+	if err != nil {
+		if err != errSubTestFail {
+			tc.t.Errorf("View: unexpected error: %v", err)
+		}
+		return false
+	}
+
+	return true
+}
+
+// testCorruption ensures the database returns expected errors under various
+// corruption scenarios.
+func testCorruption(tc *testContext) bool {
+	if !resetDatabase(tc) {
+		return false
+	}
+
+	// Insert the first block into the mock file.
+	err := tc.db.Update(func(tx database.Tx) error {
+		err := tx.StoreBlock(tc.blocks[0])
+		if err != nil {
+			tc.t.Errorf("StoreBlock: unexpected error: %v", err)
+			return errSubTestFail
+		}
+
+		return nil
+	})
+	if err != nil {
+		if err != errSubTestFail {
+			tc.t.Errorf("Update: unexpected error: %v", err)
+		}
+		return false
+	}
+
+	// Ensure corruption is detected by intentionally modifying the bytes
+	// stored to the mock file and reading the block.
+	block0Bytes, _ := tc.blocks[0].Bytes()
+	block0Hash := tc.blocks[0].Sha()
+	tests := []struct {
+		offset      uint32
+		fixChecksum bool
+		wantErrCode database.ErrorCode
+	}{
+		// One of the network bytes.  The checksum needs to be fixed so
+		// the invalid network is detected.
+		{2, true, database.ErrDriverSpecific},
+
+		// The same network byte, but this time don't fix the checksum
+		// to ensure the corruption is detected.
+		{2, false, database.ErrCorruption},
+
+		// One of the block length bytes.
+		{6, false, database.ErrCorruption},
+
+		// Random header byte.
+		{17, false, database.ErrCorruption},
+
+		// Random transaction byte.
+		{90, false, database.ErrCorruption},
+
+		// Random checksum byte.
+		{uint32(len(block0Bytes)) + 10, false, database.ErrCorruption},
+	}
+	err = tc.db.View(func(tx database.Tx) error {
+		data := tc.files[0].file.(*mockFile).data
+		for i, test := range tests {
+			// Corrupt the byte at the offset by a single bit.
+			data[test.offset] ^= 0x10
+
+			// Fix the checksum if requested to force other errors.
+			fileLen := len(data)
+			var oldChecksumBytes [4]byte
+			copy(oldChecksumBytes[:], data[fileLen-4:])
+			if test.fixChecksum {
+				toSum := data[:fileLen-4]
+				cksum := crc32.Checksum(toSum, castagnoli)
+				binary.BigEndian.PutUint32(data[fileLen-4:], cksum)
+			}
+
+			testName := fmt.Sprintf("FetchBlock (test #%d): "+
+				"corruption", i)
+			_, err := tx.FetchBlock(block0Hash)
+			if !checkDbError(tc.t, testName, err, test.wantErrCode) {
+				return errSubTestFail
+			}
+
+			// Reset the corrupted data back to the original.
+			data[test.offset] ^= 0x10
+			if test.fixChecksum {
+				copy(data[fileLen-4:], oldChecksumBytes[:])
+			}
+		}
+
+		return nil
+	})
+	if err != nil {
+		if err != errSubTestFail {
+			tc.t.Errorf("View: unexpected error: %v", err)
+		}
+		return false
+	}
+
+	// Modify the checksum in the block row index and ensure the expected
+	// error is received when reading the block row.
+	err = tc.db.Update(func(tx database.Tx) error {
+		// Intentionally corrupt the block row entry.
+		blockIdxBucket := tx.Metadata().Bucket(blockIdxBucketName)
+		oldBlockRow := blockIdxBucket.Get(block0Hash[:])
+		blockRow := make([]byte, len(oldBlockRow))
+		copy(blockRow, oldBlockRow)
+		blockRow[3] ^= 0x20
+		err := blockIdxBucket.Put(block0Hash[:], blockRow)
+		if err != nil {
+			tc.t.Errorf("Put: Unexpected error: %v", err)
+			return errSubTestFail
+		}
+
+		// Ensure attempting to fetch block data for the block with the
+		// corrupted block row returns the expected error.
+		testName := "FetchBlock with corrupted block row"
+		wantErrCode := database.ErrCorruption
+		_, err = tx.FetchBlock(block0Hash)
+		if !checkDbError(tc.t, testName, err, wantErrCode) {
+			return errSubTestFail
+		}
+
+		// Put the uncorrupted block row entry back.
+		err = blockIdxBucket.Put(block0Hash[:], oldBlockRow)
+		if err != nil {
+			tc.t.Errorf("Put: Unexpected error: %v", err)
+			return errSubTestFail
+		}
+
+		return nil
+	})
+	if err != nil {
+		if err != errSubTestFail {
+			tc.t.Errorf("Update: unexpected error: %v", err)
+		}
+		return false
+	}
+
+	return true
+}
+
+// TestFailureScenarios ensures several failure scenarios such as database
+// corruption, block file write failures, and rollback failures are handled
+// correctly.
+func TestFailureScenarios(t *testing.T) {
+	// Create a new database to run tests against.
+	dbPath := filepath.Join(os.TempDir(), "ffboltdb-failurescenarios")
+	_ = os.RemoveAll(dbPath)
+	idb, err := database.Create(dbType, dbPath, blockDataNet)
+	if err != nil {
+		t.Errorf("Failed to create test database (%s) %v", dbType, err)
+		return
+	}
+	defer os.RemoveAll(dbPath)
+	defer idb.Close()
+
+	// Create a test context to pass around.
+	tc := &testContext{
+		t:            t,
+		db:           idb,
+		files:        make(map[uint32]*lockableFile),
+		maxFileSizes: make(map[uint32]int64),
+	}
+
+	// Change the maximum file size to a small value to force multiple flat
+	// files with the test data set and replace the file-related functions
+	// to make use of mock files in memory.  This allows injection of
+	// various file-related errors.
+	store := idb.(*db).store
+	store.maxBlockFileSize = 1024 // 1KiB
+	store.openWriteFileFunc = func(fileNum uint32) (filer, error) {
+		if file, ok := tc.files[fileNum]; ok {
+			// "Reopen" the file.
+			file.Lock()
+			mock := file.file.(*mockFile)
+			mock.Lock()
+			mock.closed = false
+			mock.Unlock()
+			file.Unlock()
+			return mock, nil
+		}
+
+		// Limit the max size of the mock file as specified in the test
+		// context.
+		maxSize := int64(-1)
+		if maxFileSize, ok := tc.maxFileSizes[fileNum]; ok {
+			maxSize = maxFileSize
+		}
+		file := &mockFile{maxSize: maxSize}
+		tc.files[fileNum] = &lockableFile{file: file}
+		return file, nil
+	}
+	store.openFileFunc = func(fileNum uint32) (*lockableFile, error) {
+		// Force error when trying to open max file num.
+		if fileNum == ^uint32(0) {
+			return nil, makeDbErr(database.ErrDriverSpecific,
+				"test", nil)
+		}
+		if file, ok := tc.files[fileNum]; ok {
+			// "Reopen" the file.
+			file.Lock()
+			mock := file.file.(*mockFile)
+			mock.Lock()
+			mock.closed = false
+			mock.Unlock()
+			file.Unlock()
+			return file, nil
+		}
+		file := &lockableFile{file: &mockFile{}}
+		tc.files[fileNum] = file
+		return file, nil
+	}
+	store.deleteFileFunc = func(fileNum uint32) error {
+		if file, ok := tc.files[fileNum]; ok {
+			file.Lock()
+			file.file.Close()
+			file.Unlock()
+			delete(tc.files, fileNum)
+			return nil
+		}
+
+		str := fmt.Sprintf("file %d does not exist", fileNum)
+		return makeDbErr(database.ErrDriverSpecific, str, nil)
+	}
+
+	// Load the test blocks and save in the test context for use throughout
+	// the tests.
+	blocks, err := loadBlocks(t, blockDataFile, blockDataNet)
+	if err != nil {
+		t.Errorf("loadBlocks: Unexpected error: %v", err)
+		return
+	}
+	tc.blocks = blocks
+
+	// Test various failure paths when writing to the block files.
+	if !testWriteFailures(tc) {
+		return
+	}
+
+	// Test various file-related issues such as closed and missing files.
+	if !testBlockFileErrors(tc) {
+		return
+	}
+
+	// Test various corruption scenarios.
+	testCorruption(tc)
+}
diff --git a/database2/ffldb/README.md b/database2/ffldb/README.md
new file mode 100644
index 00000000000..dcb8308c59e
--- /dev/null
+++ b/database2/ffldb/README.md
@@ -0,0 +1,52 @@
+ffldb
+=====
+
+[![Build Status](https://travis-ci.org/btcsuite/btcd.png?branch=master)]
+(https://travis-ci.org/btcsuite/btcd)
+
+Package ffldb implements a driver for the database package that uses leveldb
+for the backing metadata and flat files for block storage.
+
+This driver is the recommended driver for use with btcd.  It makes use of
+leveldb for the metadata, flat files for block storage, and checksums in key
+areas to ensure data integrity.
+
+Package ffldb is licensed under the copyfree ISC license.
+
+## Usage
+
+This package is a driver to the database package and provides the database
+type of "ffldb".  The parameters the Open and Create functions take are the
+database path as a string and the block network.
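+
+Once open, all interaction with the database happens through its managed
+read-write (Update) and read-only (View) transactions.  As a minimal sketch
+(assuming db was obtained via the Open or Create examples that follow), a
+read-only view looks like this:
+
+```Go
+err := db.View(func(tx database.Tx) error {
+	// All metadata and block reads for this snapshot happen through tx.
+	return nil
+})
+if err != nil {
+	// Handle error
+}
+```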
+
+```Go
+db, err := database.Open("ffldb", "path/to/database", wire.MainNet)
+if err != nil {
+	// Handle error
+}
+```
+
+```Go
+db, err := database.Create("ffldb", "path/to/database", wire.MainNet)
+if err != nil {
+	// Handle error
+}
+```
+
+## Documentation
+
+[![GoDoc](https://godoc.org/github.com/btcsuite/btcd/database/ffldb?status.png)]
+(http://godoc.org/github.com/btcsuite/btcd/database/ffldb)
+
+Full `go doc` style documentation for the project can be viewed online without
+installing this package by using the GoDoc site here:
+http://godoc.org/github.com/btcsuite/btcd/database/ffldb
+
+You can also view the documentation locally once the package is installed with
+the `godoc` tool by running `godoc -http=":6060"` and pointing your browser to
+http://localhost:6060/pkg/github.com/btcsuite/btcd/database/ffldb
+
+## License
+
+Package ffldb is licensed under the [copyfree](http://copyfree.org) ISC
+License.
diff --git a/database2/ffldb/bench_test.go b/database2/ffldb/bench_test.go
new file mode 100644
index 00000000000..94ef6cbae29
--- /dev/null
+++ b/database2/ffldb/bench_test.go
@@ -0,0 +1,103 @@
+// Copyright (c) 2015 The btcsuite developers
+// Use of this source code is governed by an ISC
+// license that can be found in the LICENSE file.
+
+package ffldb
+
+import (
+	"os"
+	"path/filepath"
+	"testing"
+
+	"github.com/btcsuite/btcd/chaincfg"
+	database "github.com/btcsuite/btcd/database2"
+	"github.com/btcsuite/btcutil"
+)
+
+// BenchmarkBlockHeader benchmarks how long it takes to load the mainnet
+// genesis block header.
+func BenchmarkBlockHeader(b *testing.B) {
+	// Start by creating a new database and populating it with the mainnet
+	// genesis block.
+	dbPath := filepath.Join(os.TempDir(), "ffldb-benchblkhdr")
+	_ = os.RemoveAll(dbPath)
+	db, err := database.Create("ffldb", dbPath, blockDataNet)
+	if err != nil {
+		b.Fatal(err)
+	}
+	defer os.RemoveAll(dbPath)
+	defer db.Close()
+	err = db.Update(func(tx database.Tx) error {
+		block := btcutil.NewBlock(chaincfg.MainNetParams.GenesisBlock)
+		return tx.StoreBlock(block)
+	})
+	if err != nil {
+		b.Fatal(err)
+	}
+
+	b.ReportAllocs()
+	b.ResetTimer()
+	err = db.View(func(tx database.Tx) error {
+		blockHash := chaincfg.MainNetParams.GenesisHash
+		for i := 0; i < b.N; i++ {
+			_, err := tx.FetchBlockHeader(blockHash)
+			if err != nil {
+				return err
+			}
+		}
+		return nil
+	})
+	if err != nil {
+		b.Fatal(err)
+	}
+
+	// Don't benchmark teardown.
+	b.StopTimer()
+}
+
+// BenchmarkBlock benchmarks how long it takes to load the mainnet genesis
+// block.
+func BenchmarkBlock(b *testing.B) {
+	// Start by creating a new database and populating it with the mainnet
+	// genesis block.
+	dbPath := filepath.Join(os.TempDir(), "ffldb-benchblk")
+	_ = os.RemoveAll(dbPath)
+	db, err := database.Create("ffldb", dbPath, blockDataNet)
+	if err != nil {
+		b.Fatal(err)
+	}
+	defer os.RemoveAll(dbPath)
+	defer db.Close()
+	err = db.Update(func(tx database.Tx) error {
+		block := btcutil.NewBlock(chaincfg.MainNetParams.GenesisBlock)
+		return tx.StoreBlock(block)
+	})
+	if err != nil {
+		b.Fatal(err)
+	}
+
+	b.ReportAllocs()
+	b.ResetTimer()
+	err = db.View(func(tx database.Tx) error {
+		blockHash := chaincfg.MainNetParams.GenesisHash
+		for i := 0; i < b.N; i++ {
+			_, err := tx.FetchBlock(blockHash)
+			if err != nil {
+				return err
+			}
+		}
+		return nil
+	})
+	if err != nil {
+		b.Fatal(err)
+	}
+
+	// Don't benchmark teardown.
+	b.StopTimer()
+}
diff --git a/database2/ffldb/blockio.go b/database2/ffldb/blockio.go
new file mode 100644
index 00000000000..284cb459e7e
--- /dev/null
+++ b/database2/ffldb/blockio.go
@@ -0,0 +1,749 @@
+// Copyright (c) 2015 The btcsuite developers
+// Use of this source code is governed by an ISC
+// license that can be found in the LICENSE file.
+
+// This file contains the implementation functions for reading, writing, and
+// otherwise working with the flat files that house the actual blocks.
+
+package ffldb
+
+import (
+	"container/list"
+	"encoding/binary"
+	"fmt"
+	"hash/crc32"
+	"io"
+	"os"
+	"path/filepath"
+	"sync"
+
+	database "github.com/btcsuite/btcd/database2"
+	"github.com/btcsuite/btcd/wire"
+)
+
+const (
+	// The Bitcoin protocol encodes block height as int32, so the max
+	// number of blocks is 2^31.  Max block size per the protocol is 32MiB.
+	// So the theoretical max at the time this comment was written is 64PiB
+	// (pebibytes).  With files @ 512MiB each, this would require a maximum
+	// of 134,217,728 files.  Thus, choose 9 digits of precision for the
+	// filenames.  An additional benefit is 9 digits provides 10^9 files @
+	// 512MiB each for a total of ~476.84PiB (roughly 7.4 times the current
+	// theoretical max), so there is room for the max block size to grow in
+	// the future.
+	blockFilenameTemplate = "%09d.fdb"
+
+	// maxOpenFiles is the max number of open files to maintain in the
+	// open blocks cache.  Note that this does not include the current
+	// write file, so there will typically be one more than this value
+	// open.
+	maxOpenFiles = 25
+
+	// maxBlockFileSize is the maximum size for each file used to store
+	// blocks.
+	//
+	// NOTE: The current code uses uint32 for all offsets, so this value
+	// must be less than 2^32 (4 GiB).  This is also why it's a typed
+	// constant.
+	maxBlockFileSize uint32 = 512 * 1024 * 1024 // 512 MiB
+
+	// blockLocSize is the size in bytes of the serialized block location
+	// data that is stored in the block index.
+	//
+	// The serialized block location format is:
+	//
+	//  [0:4]  Block file (4 bytes)
+	//  [4:8]  File offset (4 bytes)
+	//  [8:12] Block length (4 bytes)
+	blockLocSize = 12
+)
+
+var (
+	// castagnoli houses the Castagnoli polynomial used for CRC-32
+	// checksums.
+	castagnoli = crc32.MakeTable(crc32.Castagnoli)
+)
+
+// filer is an interface which acts very similarly to a *os.File and is
+// typically implemented by it.  It exists so the test code can provide mock
+// files for properly testing corruption and file system issues.
+type filer interface {
+	io.Closer
+	io.WriterAt
+	io.ReaderAt
+	Truncate(size int64) error
+	Sync() error
+}
+
+// lockableFile represents a block file on disk that has been opened for either
+// read or read/write access.  It also contains a read-write mutex to support
+// multiple concurrent readers.
+type lockableFile struct {
+	sync.RWMutex
+	file filer
+}
+
+// writeCursor represents the current file and offset of the block file on disk
+// for performing all writes.  It also contains a read-write mutex to support
+// multiple concurrent readers which can reuse the file handle.
+type writeCursor struct {
+	sync.RWMutex
+
+	// curFile is the current block file that will be appended to when
+	// writing new blocks.
+	curFile *lockableFile
+
+	// curFileNum is the current block file number and is used to allow
+	// readers to use the same open file handle.
+	curFileNum uint32
+
+	// curOffset is the offset in the current write block file where the
+	// next new block will be written.
+	curOffset uint32
+}
+
+// blockStore houses information used to handle reading and writing blocks
+// (and parts of blocks) into flat files with support for multiple concurrent
+// readers.
+type blockStore struct {
+	// network is the specific network to use in the flat files for each
+	// block.
+	network wire.BitcoinNet
+
+	// basePath is the base path used for the flat block files and
+	// metadata.
+	basePath string
+
+	// maxBlockFileSize is the maximum size for each file used to store
+	// blocks.  It is defined on the store so the whitebox tests can
+	// override the value.
+	maxBlockFileSize uint32
+
+	// The following fields are related to the flat files which hold the
+	// actual blocks.  The number of open files is limited by maxOpenFiles.
+	//
+	// obfMutex protects concurrent access to the openBlockFiles map.  It
+	// is a RWMutex so multiple readers can simultaneously access open
+	// files.
+	//
+	// openBlockFiles houses the open file handles for existing block files
+	// which have been opened read-only along with an individual RWMutex.
+	// This scheme allows multiple concurrent readers to the same file
+	// while preventing the file from being closed out from under them.
+	//
+	// lruMutex protects concurrent access to the least recently used list
+	// and lookup map.
+	//
+	// openBlocksLRU tracks how the open files are referenced by pushing
+	// the most recently used files to the front of the list thereby
+	// trickling the least recently used files to the end of the list.
+	// When a file needs to be closed due to exceeding the max number of
+	// allowed open files, the one at the end of the list is closed.
+	//
+	// fileNumToLRUElem is a mapping between a specific block file number
+	// and the associated list element on the least recently used list.
+	//
+	// Thus, with the combination of these fields, the database supports
+	// concurrent non-blocking reads across multiple and individual files
+	// along with intelligently limiting the number of open file handles by
+	// closing the least recently used files as needed.
+	//
+	// NOTE: The locking order used throughout is well-defined and MUST be
+	// followed.  Failure to do so could lead to deadlocks.  In particular,
+	// the locking order is as follows:
+	//   1) obfMutex
+	//   2) lruMutex
+	//   3) writeCursor mutex
+	//   4) specific file mutexes
+	//
+	// None of the mutexes are required to be locked at the same time, and
+	// often aren't.  However, if they are to be locked simultaneously,
+	// they MUST be locked in the order previously specified.
+	//
+	// Due to the high performance and multi-read concurrency requirements,
+	// write locks should only be held for the minimum time necessary.
+	obfMutex         sync.RWMutex
+	lruMutex         sync.Mutex
+	openBlocksLRU    *list.List // Contains uint32 block file numbers.
+	fileNumToLRUElem map[uint32]*list.Element
+	openBlockFiles   map[uint32]*lockableFile
+
+	// writeCursor houses the state for the current file and location that
+	// new blocks are written to.
+	writeCursor *writeCursor
+
+	// These functions are set to openFile, openWriteFile, and deleteFile
+	// by default, but are exposed here to allow the whitebox tests to
+	// replace them when working with mock files.
+	openFileFunc      func(fileNum uint32) (*lockableFile, error)
+	openWriteFileFunc func(fileNum uint32) (filer, error)
+	deleteFileFunc    func(fileNum uint32) error
+}
+
+// blockLocation identifies a particular block file and location.
+type blockLocation struct {
+	blockFileNum uint32
+	fileOffset   uint32
+	blockLen     uint32
+}
+
+// deserializeBlockLoc deserializes the passed serialized block location
+// information.  This is data stored into the block index metadata for each
+// block.  The serialized data passed to this function MUST be at least
+// blockLocSize bytes or it will panic.  The error check is avoided here
+// because this information will always be coming from the block index which
+// includes a checksum to detect corruption.  Thus it is safe to use this
+// unchecked here.
+func deserializeBlockLoc(serializedLoc []byte) blockLocation {
+	// The serialized block location format is:
+	//
+	//  [0:4]  Block file (4 bytes)
+	//  [4:8]  File offset (4 bytes)
+	//  [8:12] Block length (4 bytes)
+	return blockLocation{
+		blockFileNum: byteOrder.Uint32(serializedLoc[0:4]),
+		fileOffset:   byteOrder.Uint32(serializedLoc[4:8]),
+		blockLen:     byteOrder.Uint32(serializedLoc[8:12]),
+	}
+}
+
+// serializeBlockLoc returns the serialization of the passed block location.
+// This is data to be stored into the block index metadata for each block.
+func serializeBlockLoc(loc blockLocation) []byte {
+	// The serialized block location format is:
+	//
+	//  [0:4]  Block file (4 bytes)
+	//  [4:8]  File offset (4 bytes)
+	//  [8:12] Block length (4 bytes)
+	var serializedData [12]byte
+	byteOrder.PutUint32(serializedData[0:4], loc.blockFileNum)
+	byteOrder.PutUint32(serializedData[4:8], loc.fileOffset)
+	byteOrder.PutUint32(serializedData[8:12], loc.blockLen)
+	return serializedData[:]
+}
+
+// blockFilePath returns the file path for the provided block file number.
+func blockFilePath(dbPath string, fileNum uint32) string {
+	fileName := fmt.Sprintf(blockFilenameTemplate, fileNum)
+	return filepath.Join(dbPath, fileName)
+}
+
+// openWriteFile returns a file handle for the passed flat file number in
+// read/write mode.  The file will be created if needed.  It is typically used
+// for the current file that will have all new data appended.  Unlike openFile,
+// this function does not keep track of the open file and it is not subject to
+// the maxOpenFiles limit.
+func (s *blockStore) openWriteFile(fileNum uint32) (filer, error) {
+	// The current block file needs to be read-write so it is possible to
+	// append to it.  Also, it shouldn't be part of the least recently used
+	// file list.
+	filePath := blockFilePath(s.basePath, fileNum)
+	file, err := os.OpenFile(filePath, os.O_RDWR|os.O_CREATE, 0666)
+	if err != nil {
+		str := fmt.Sprintf("failed to open file %q: %v", filePath, err)
+		return nil, makeDbErr(database.ErrDriverSpecific, str, err)
+	}
+
+	return file, nil
+}
+
+// openFile returns a read-only file handle for the passed flat file number.
+// The function also keeps track of the open files, performs least recently
+// used tracking, and limits the number of open files to maxOpenFiles by
+// closing the least recently used file as needed.
+//
+// This function MUST be called with the overall files mutex (s.obfMutex)
+// locked for WRITES.
+func (s *blockStore) openFile(fileNum uint32) (*lockableFile, error) {
+	// Open the appropriate file as read-only.
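+	// Note this uses os.Open directly rather than the read-write open
+	// path since read-only handles are the ones that participate in the
+	// LRU tracking below.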
+	filePath := blockFilePath(s.basePath, fileNum)
+	file, err := os.Open(filePath)
+	if err != nil {
+		return nil, makeDbErr(database.ErrDriverSpecific, err.Error(),
+			err)
+	}
+	blockFile := &lockableFile{file: file}
+
+	// Close the least recently used file if the file exceeds the max
+	// allowed open files.  This is not done until after the file open
+	// succeeds so that, if the open fails, there is no need to close any
+	// files.
+	//
+	// A write lock is required on the LRU list here to protect against
+	// modifications happening as already open files are read from and
+	// shuffled to the front of the list.
+	//
+	// Also, add the file that was just opened to the front of the least
+	// recently used list to indicate it is the most recently used file and
+	// therefore should be closed last.
+	s.lruMutex.Lock()
+	lruList := s.openBlocksLRU
+	if lruList.Len() >= maxOpenFiles {
+		lruFileNum := lruList.Remove(lruList.Back()).(uint32)
+		oldBlockFile := s.openBlockFiles[lruFileNum]
+
+		// Close the old file under the write lock for the file in case
+		// any readers are currently reading from it so it's not closed
+		// out from under them.
+		oldBlockFile.Lock()
+		_ = oldBlockFile.file.Close()
+		oldBlockFile.Unlock()
+
+		delete(s.openBlockFiles, lruFileNum)
+		delete(s.fileNumToLRUElem, lruFileNum)
+	}
+	s.fileNumToLRUElem[fileNum] = lruList.PushFront(fileNum)
+	s.lruMutex.Unlock()
+
+	// Store a reference to it in the open block files map.
+	s.openBlockFiles[fileNum] = blockFile
+
+	return blockFile, nil
+}
+
+// deleteFile removes the block file for the passed flat file number.  The
+// file must already be closed and it is the responsibility of the caller to
+// do any other state cleanup necessary.
+func (s *blockStore) deleteFile(fileNum uint32) error {
+	filePath := blockFilePath(s.basePath, fileNum)
+	if err := os.Remove(filePath); err != nil {
+		return makeDbErr(database.ErrDriverSpecific, err.Error(), err)
+	}
+
+	return nil
+}
+
+// blockFile attempts to return an existing file handle for the passed flat
+// file number if it is already open as well as marking it as most recently
+// used.  It will also open the file when it's not already open subject to the
+// rules described in openFile.
+//
+// NOTE: The returned block file will already have the read lock acquired and
+// the caller MUST call .RUnlock() to release it once it has finished all read
+// operations.  This is necessary because otherwise it would be possible for a
+// separate goroutine to close the file after it is returned from here, but
+// before the caller has acquired a read lock.
+func (s *blockStore) blockFile(fileNum uint32) (*lockableFile, error) {
+	// When the requested block file is open for writes, return it.
+	wc := s.writeCursor
+	wc.RLock()
+	if fileNum == wc.curFileNum && wc.curFile.file != nil {
+		obf := wc.curFile
+		obf.RLock()
+		wc.RUnlock()
+		return obf, nil
+	}
+	wc.RUnlock()
+
+	// Try to return an open file under the overall files read lock.
+	s.obfMutex.RLock()
+	if obf, ok := s.openBlockFiles[fileNum]; ok {
+		s.lruMutex.Lock()
+		s.openBlocksLRU.MoveToFront(s.fileNumToLRUElem[fileNum])
+		s.lruMutex.Unlock()
+
+		obf.RLock()
+		s.obfMutex.RUnlock()
+		return obf, nil
+	}
+	s.obfMutex.RUnlock()
+
+	// Since the file isn't open already, need to check the open block
+	// files map again under write lock in case multiple readers got here
+	// and a separate one is already opening the file.
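+	// This is the classic double-checked locking pattern: the cheaper
+	// read-locked path above handles the common case, while the
+	// write-locked recheck here closes the race where multiple readers
+	// miss at the same time.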
+	s.obfMutex.Lock()
+	if obf, ok := s.openBlockFiles[fileNum]; ok {
+		obf.RLock()
+		s.obfMutex.Unlock()
+		return obf, nil
+	}
+
+	// The file isn't open, so open it while potentially closing the least
+	// recently used one as needed.  Note the overall files write lock is
+	// still held here, as openFile requires.
+	obf, err := s.openFileFunc(fileNum)
+	if err != nil {
+		s.obfMutex.Unlock()
+		return nil, err
+	}
+	obf.RLock()
+	s.obfMutex.Unlock()
+	return obf, nil
+}
+
+// writeData is a helper function for writeBlock which writes the provided
+// data at the current write offset and updates the write cursor accordingly.
+// The field name parameter is only used when there is an error to provide a
+// nicer error message.
+//
+// The write cursor will be advanced the number of bytes actually written in
+// the event of failure.
+//
+// NOTE: This function MUST be called with the write cursor current file lock
+// held and must only be called during a write transaction so it is
+// effectively locked for writes.  Also, the write cursor current file must
+// NOT be nil.
+func (s *blockStore) writeData(data []byte, fieldName string) error {
+	wc := s.writeCursor
+	n, err := wc.curFile.file.WriteAt(data, int64(wc.curOffset))
+	wc.curOffset += uint32(n)
+	if err != nil {
+		str := fmt.Sprintf("failed to write %s to file %d at "+
+			"offset %d: %v", fieldName, wc.curFileNum,
+			wc.curOffset-uint32(n), err)
+		return makeDbErr(database.ErrDriverSpecific, str, err)
+	}
+
+	return nil
+}
+
+// writeBlock appends the specified raw block bytes to the store's write
+// cursor location and increments it accordingly.  When the block would exceed
+// the max file size for the current flat file, this function will close the
+// current file, create the next file, update the write cursor, and write the
+// block to the new file.
+//
+// The write cursor will also be advanced the number of bytes actually written
+// in the event of failure.
+//
+// Format: <network><block length><serialized block><checksum>
+func (s *blockStore) writeBlock(rawBlock []byte) (blockLocation, error) {
+	// Compute how many bytes will be written.
+	// 4 bytes for block network + 4 bytes for block length +
+	// length of raw block + 4 bytes for checksum.
+	blockLen := uint32(len(rawBlock))
+	fullLen := blockLen + 12
+
+	// Move to the next block file if adding the new block would exceed the
+	// max allowed size for the current block file.  Also detect overflow
+	// to be paranoid, even though it isn't possible currently, numbers
+	// might change in the future to make it possible.
+	//
+	// NOTE: The writeCursor.offset field isn't protected by the mutex
+	// since it's only read/changed during this function which can only be
+	// called during a write transaction, of which there can be only one at
+	// a time.
+	wc := s.writeCursor
+	finalOffset := wc.curOffset + fullLen
+	if finalOffset < wc.curOffset || finalOffset > s.maxBlockFileSize {
+		// This is done under the write cursor lock since the fileNum
+		// field is accessed elsewhere by readers.
+		//
+		// Close the current write file to force a read-only reopen
+		// with LRU tracking.  The close is done under the write lock
+		// for the file to prevent it from being closed out from under
+		// any readers currently reading from it.
+		wc.Lock()
+		wc.curFile.Lock()
+		if wc.curFile.file != nil {
+			_ = wc.curFile.file.Close()
+			wc.curFile.file = nil
+		}
+		wc.curFile.Unlock()
+
+		// Start writes into next file.
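+		// The write cursor is simply repositioned here; the new file
+		// itself is created lazily by the open below when the current
+		// file handle is nil.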
+		wc.curFileNum++
+		wc.curOffset = 0
+		wc.Unlock()
+	}
+
+	// All writes are done under the write lock for the file to ensure any
+	// readers are finished and blocked first.
+	wc.curFile.Lock()
+	defer wc.curFile.Unlock()
+
+	// Open the current file if needed. This will typically only be the
+	// case when moving to the next file to write to or on initial database
+	// load. However, it might also be the case if rollbacks happened after
+	// file writes started during a transaction commit.
+	if wc.curFile.file == nil {
+		file, err := s.openWriteFileFunc(wc.curFileNum)
+		if err != nil {
+			return blockLocation{}, err
+		}
+		wc.curFile.file = file
+	}
+
+	// Bitcoin network.
+	origOffset := wc.curOffset
+	hasher := crc32.New(castagnoli)
+	var scratch [4]byte
+	byteOrder.PutUint32(scratch[:], uint32(s.network))
+	if err := s.writeData(scratch[:], "network"); err != nil {
+		return blockLocation{}, err
+	}
+	_, _ = hasher.Write(scratch[:])
+
+	// Block length.
+	byteOrder.PutUint32(scratch[:], blockLen)
+	if err := s.writeData(scratch[:], "block length"); err != nil {
+		return blockLocation{}, err
+	}
+	_, _ = hasher.Write(scratch[:])
+
+	// Serialized block.
+	if err := s.writeData(rawBlock[:], "block"); err != nil {
+		return blockLocation{}, err
+	}
+	_, _ = hasher.Write(rawBlock)
+
+	// Castagnoli CRC-32 as a checksum of all the previous.
+	if err := s.writeData(hasher.Sum(nil), "checksum"); err != nil {
+		return blockLocation{}, err
+	}
+
+	// Sync the file to disk.
+	if err := wc.curFile.file.Sync(); err != nil {
+		str := fmt.Sprintf("failed to sync file %d: %v", wc.curFileNum,
+			err)
+		return blockLocation{}, makeDbErr(database.ErrDriverSpecific,
+			str, err)
+	}
+
+	loc := blockLocation{
+		blockFileNum: wc.curFileNum,
+		fileOffset:   origOffset,
+		blockLen:     fullLen,
+	}
+	return loc, nil
+}
+
+// readBlock reads the specified block record and returns the serialized block.
+// It ensures the integrity of the block data by checking that the serialized
+// network matches the current network associated with the block store and
+// comparing the calculated checksum against the one stored in the flat file.
+// This function also automatically handles all file management such as opening
+// and closing files as necessary to stay within the maximum allowed open files
+// limit.
+//
+// Returns ErrDriverSpecific if the data fails to read for any reason and
+// ErrCorruption if the checksum of the read data doesn't match the checksum
+// read from the file.
+//
+// Format: <network><block length><serialized block><checksum>
+func (s *blockStore) readBlock(hash *wire.ShaHash, loc blockLocation) ([]byte, error) {
+	// Get the referenced block file handle, opening the file as needed.
+	// The function also handles closing files as needed to avoid going
+	// over the max allowed open files.
+	blockFile, err := s.blockFile(loc.blockFileNum)
+	if err != nil {
+		return nil, err
+	}
+
+	serializedData := make([]byte, loc.blockLen)
+	n, err := blockFile.file.ReadAt(serializedData, int64(loc.fileOffset))
+	blockFile.RUnlock()
+	if err != nil {
+		str := fmt.Sprintf("failed to read block %s from file %d, "+
+			"offset %d: %v", hash, loc.blockFileNum, loc.fileOffset,
+			err)
+		return nil, makeDbErr(database.ErrDriverSpecific, str, err)
+	}
+
+	// Calculate the checksum of the read data and ensure it matches the
+	// serialized checksum. This will detect any data corruption in the
+	// flat file without having to do much more expensive merkle root
+	// calculations on the loaded block.
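As an aside, the record layout and checksum scheme described above are easy to exercise in isolation. A sketch under stated assumptions (the encodeRecord/decodeRecord helpers are illustrative, not part of this patch; 0xd9b4bef9 is the mainnet network value), building a record the way writeBlock does and verifying it the way the code below does:

package main

import (
	"encoding/binary"
	"errors"
	"fmt"
	"hash/crc32"
)

// castagnoli mirrors the CRC-32 table the block store uses for checksums.
var castagnoli = crc32.MakeTable(crc32.Castagnoli)

// encodeRecord builds a flat-file record of the form
// <network><block length><serialized block><checksum>, where the checksum
// covers all of the preceding bytes.
func encodeRecord(network uint32, block []byte) []byte {
	rec := make([]byte, 0, len(block)+12)
	var scratch [4]byte
	binary.LittleEndian.PutUint32(scratch[:], network)
	rec = append(rec, scratch[:]...)
	binary.LittleEndian.PutUint32(scratch[:], uint32(len(block)))
	rec = append(rec, scratch[:]...)
	rec = append(rec, block...)
	sum := crc32.Checksum(rec, castagnoli)
	binary.BigEndian.PutUint32(scratch[:], sum) // crc32 sums serialize big endian
	return append(rec, scratch[:]...)
}

// decodeRecord verifies the trailing checksum and returns the block bytes.
func decodeRecord(rec []byte) ([]byte, error) {
	if len(rec) < 12 {
		return nil, errors.New("record too short")
	}
	want := binary.BigEndian.Uint32(rec[len(rec)-4:])
	got := crc32.Checksum(rec[:len(rec)-4], castagnoli)
	if got != want {
		return nil, fmt.Errorf("checksum mismatch: got %x, want %x", got, want)
	}
	// Strip the 8 leading metadata bytes and the 4 trailing checksum bytes.
	return rec[8 : len(rec)-4], nil
}

func main() {
	rec := encodeRecord(0xd9b4bef9, []byte("raw block bytes"))
	block, err := decodeRecord(rec)
	fmt.Printf("%s %v\n", block, err)
}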
+	serializedChecksum := binary.BigEndian.Uint32(serializedData[n-4:])
+	calculatedChecksum := crc32.Checksum(serializedData[:n-4], castagnoli)
+	if serializedChecksum != calculatedChecksum {
+		str := fmt.Sprintf("block data for block %s checksum "+
+			"does not match - got %x, want %x", hash,
+			calculatedChecksum, serializedChecksum)
+		return nil, makeDbErr(database.ErrCorruption, str, nil)
+	}
+
+	// The network associated with the block must match the current active
+	// network, otherwise somebody probably put the block files for the
+	// wrong network in the directory.
+	serializedNet := byteOrder.Uint32(serializedData[:4])
+	if serializedNet != uint32(s.network) {
+		str := fmt.Sprintf("block data for block %s is for the "+
+			"wrong network - got %d, want %d", hash, serializedNet,
+			uint32(s.network))
+		return nil, makeDbErr(database.ErrDriverSpecific, str, nil)
+	}
+
+	// The raw block excludes the network, length of the block, and
+	// checksum.
+	return serializedData[8 : n-4], nil
+}
+
+// readBlockRegion reads the specified amount of data at the provided offset for
+// a given block location. The offset is relative to the start of the
+// serialized block (as opposed to the beginning of the block record). This
+// function automatically handles all file management such as opening and
+// closing files as necessary to stay within the maximum allowed open files
+// limit.
+//
+// Returns ErrDriverSpecific if the data fails to read for any reason.
+func (s *blockStore) readBlockRegion(loc blockLocation, offset, numBytes uint32) ([]byte, error) {
+	// Get the referenced block file handle, opening the file as needed.
+	// The function also handles closing files as needed to avoid going
+	// over the max allowed open files.
+	blockFile, err := s.blockFile(loc.blockFileNum)
+	if err != nil {
+		return nil, err
+	}
+
+	// Regions are offsets into the actual block, however the serialized
+	// data for a block includes an initial 4 bytes for network + 4 bytes
+	// for block length. Thus, add 8 bytes to adjust.
+	readOffset := loc.fileOffset + 8 + offset
+	serializedData := make([]byte, numBytes)
+	_, err = blockFile.file.ReadAt(serializedData, int64(readOffset))
+	blockFile.RUnlock()
+	if err != nil {
+		str := fmt.Sprintf("failed to read region from block file %d, "+
+			"offset %d, len %d: %v", loc.blockFileNum, readOffset,
+			numBytes, err)
+		return nil, makeDbErr(database.ErrDriverSpecific, str, err)
+	}
+
+	return serializedData, nil
+}
+
+// handleRollback rolls the block files on disk back to the provided file number
+// and offset. This involves potentially deleting and truncating the files that
+// were partially written.
+//
+// There are effectively two scenarios to consider here:
+// 1) Transient write failures from which recovery is possible
+// 2) More permanent failures such as hard disk death and/or removal
+//
+// In either case, the write cursor will be repositioned to the old block file
+// offset regardless of any other errors that occur while attempting to undo
+// writes.
+//
+// For the first scenario, this will lead to any data which failed to be undone
+// being overwritten and thus behaves as desired as the system continues to run.
+//
+// For the second scenario, the metadata which stores the current write cursor
+// position within the block files will not have been updated yet and thus if
+// the system eventually recovers (perhaps the hard drive is reconnected), it
+// will also lead to any data which failed to be undone being overwritten and
+// thus behaves as desired.
+//
+// Therefore, any errors are simply logged at a warning level rather than being
+// returned since there is nothing more that could be done about it anyways.
+func (s *blockStore) handleRollback(oldBlockFileNum, oldBlockOffset uint32) {
+	// Grab the write cursor mutex since it is modified throughout this
+	// function.
+	wc := s.writeCursor
+	wc.Lock()
+	defer wc.Unlock()
+
+	// Nothing to do if the rollback point is the same as the current write
+	// cursor.
+	if wc.curFileNum == oldBlockFileNum && wc.curOffset == oldBlockOffset {
+		return
+	}
+
+	// Regardless of any failures that happen below, reposition the write
+	// cursor to the old block file and offset.
+	defer func() {
+		wc.curFileNum = oldBlockFileNum
+		wc.curOffset = oldBlockOffset
+	}()
+
+	log.Debugf("ROLLBACK: Rolling back to file %d, offset %d",
+		oldBlockFileNum, oldBlockOffset)
+
+	// Close the current write file if it needs to be deleted. Then delete
+	// all files that are newer than the provided rollback file while
+	// also moving the write cursor file backwards accordingly.
+	if wc.curFileNum > oldBlockFileNum {
+		wc.curFile.Lock()
+		if wc.curFile.file != nil {
+			_ = wc.curFile.file.Close()
+			wc.curFile.file = nil
+		}
+		wc.curFile.Unlock()
+	}
+	for ; wc.curFileNum > oldBlockFileNum; wc.curFileNum-- {
+		if err := s.deleteFileFunc(wc.curFileNum); err != nil {
+			_ = log.Warnf("ROLLBACK: Failed to delete block file "+
+				"number %d: %v", wc.curFileNum, err)
+			return
+		}
+	}
+
+	// Open the file for the current write cursor if needed.
+	wc.curFile.Lock()
+	if wc.curFile.file == nil {
+		obf, err := s.openWriteFileFunc(wc.curFileNum)
+		if err != nil {
+			wc.curFile.Unlock()
+			_ = log.Warnf("ROLLBACK: %v", err)
+			return
+		}
+		wc.curFile.file = obf
+	}
+
+	// Truncate to the provided rollback offset.
+	if err := wc.curFile.file.Truncate(int64(oldBlockOffset)); err != nil {
+		wc.curFile.Unlock()
+		_ = log.Warnf("ROLLBACK: Failed to truncate file %d: %v",
+			wc.curFileNum, err)
+		return
+	}
+
+	// Sync the file to disk.
+	err := wc.curFile.file.Sync()
+	wc.curFile.Unlock()
+	if err != nil {
+		_ = log.Warnf("ROLLBACK: Failed to sync file %d: %v",
+			wc.curFileNum, err)
+		return
+	}
+}
+
+// scanBlockFiles searches the database directory for all flat block files to
+// find the end of the most recent file. This position is considered the
+// current write cursor which is also stored in the metadata. Thus, it is used
+// to detect unexpected shutdowns in the middle of writes so the block files
+// can be reconciled.
+func scanBlockFiles(dbPath string) (int, uint32) {
+	lastFile := -1
+	fileLen := uint32(0)
+	for i := 0; ; i++ {
+		filePath := blockFilePath(dbPath, uint32(i))
+		st, err := os.Stat(filePath)
+		if err != nil {
+			break
+		}
+		lastFile = i
+		fileLen = uint32(st.Size())
+	}
+
+	log.Tracef("Scan found latest block file #%d with length %d", lastFile,
+		fileLen)
+	return lastFile, fileLen
+}
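Stripped of the write cursor and its locking, the rollback performed by handleRollback above reduces to deleting every file newer than the rollback point, then truncating and syncing the rollback file. A simplified sketch, assuming a hypothetical filePath naming helper (the real file naming is not shown in this hunk):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// filePath is a hypothetical stand-in for the store's block file naming.
func filePath(base string, num uint32) string {
	return filepath.Join(base, fmt.Sprintf("%09d.fdb", num))
}

// rollback removes files newer than oldFileNum and truncates the remaining
// file to oldOffset, mirroring the shape of handleRollback.
func rollback(base string, curFileNum, oldFileNum, oldOffset uint32) error {
	// Delete all files that are newer than the rollback point.
	for num := curFileNum; num > oldFileNum; num-- {
		if err := os.Remove(filePath(base, num)); err != nil {
			return err
		}
	}

	// Truncate the rollback file to the provided offset and force the
	// change to disk so later writes resume from a known good state.
	f, err := os.OpenFile(filePath(base, oldFileNum), os.O_RDWR, 0666)
	if err != nil {
		return err
	}
	defer f.Close()
	if err := f.Truncate(int64(oldOffset)); err != nil {
		return err
	}
	return f.Sync()
}

func main() {
	base, err := os.MkdirTemp("", "rollback")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer os.RemoveAll(base)

	// Create files 0..3 with some junk so the rollback has work to do.
	for num := uint32(0); num <= 3; num++ {
		_ = os.WriteFile(filePath(base, num), make([]byte, 8192), 0666)
	}
	fmt.Println(rollback(base, 3, 1, 4096)) // <nil>: files 2,3 gone, file 1 truncated
}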
+
+// newBlockStore returns a new block store with the current block file number
+// and offset set and all fields initialized.
+func newBlockStore(basePath string, network wire.BitcoinNet) *blockStore {
+	// Look for the end of the latest block file to determine what the
+	// write cursor position is from the viewpoint of the block files on
+	// disk.
+	fileNum, fileOff := scanBlockFiles(basePath)
+	if fileNum == -1 {
+		fileNum = 0
+		fileOff = 0
+	}
+
+	store := &blockStore{
+		network:          network,
+		basePath:         basePath,
+		maxBlockFileSize: maxBlockFileSize,
+		openBlockFiles:   make(map[uint32]*lockableFile),
+		openBlocksLRU:    list.New(),
+		fileNumToLRUElem: make(map[uint32]*list.Element),
+
+		writeCursor: &writeCursor{
+			curFile:    &lockableFile{},
+			curFileNum: uint32(fileNum),
+			curOffset:  uint32(fileOff),
+		},
+	}
+	store.openFileFunc = store.openFile
+	store.openWriteFileFunc = store.openWriteFile
+	store.deleteFileFunc = store.deleteFile
+	return store
+}
diff --git a/database2/ffldb/db.go b/database2/ffldb/db.go
new file mode 100644
index 00000000000..bc01ea55e74
--- /dev/null
+++ b/database2/ffldb/db.go
@@ -0,0 +1,2078 @@
+// Copyright (c) 2015 The btcsuite developers
+// Use of this source code is governed by an ISC
+// license that can be found in the LICENSE file.
+
+package ffldb
+
+import (
+	"bytes"
+	"encoding/binary"
+	"fmt"
+	"os"
+	"path/filepath"
+	"runtime"
+	"sort"
+	"sync"
+
+	database "github.com/btcsuite/btcd/database2"
+	"github.com/btcsuite/btcd/database2/internal/treap"
+	"github.com/btcsuite/btcd/wire"
+	"github.com/btcsuite/btcutil"
+	"github.com/btcsuite/goleveldb/leveldb"
+	"github.com/btcsuite/goleveldb/leveldb/comparer"
+	ldberrors "github.com/btcsuite/goleveldb/leveldb/errors"
+	"github.com/btcsuite/goleveldb/leveldb/filter"
+	"github.com/btcsuite/goleveldb/leveldb/iterator"
+	"github.com/btcsuite/goleveldb/leveldb/opt"
+	"github.com/btcsuite/goleveldb/leveldb/util"
+)
+
+const (
+	// metadataDbName is the name used for the metadata database.
+	metadataDbName = "metadata"
+
+	// blockHdrSize is the size of a block header. This is simply the
+	// constant from wire and is only provided here for convenience since
+	// wire.MaxBlockHeaderPayload is quite long.
+	blockHdrSize = wire.MaxBlockHeaderPayload
+
+	// blockHdrOffset defines the offsets into a block index row for the
+	// block header.
+	//
+	// The serialized block index row format is:
+	//   <blocklocation><blockheader>
+	blockHdrOffset = blockLocSize
+)
+
+var (
+	// byteOrder is the preferred byte order used through the database and
+	// block files. Big endian is sometimes used instead to allow ordered,
+	// byte-sortable integer values.
+	byteOrder = binary.LittleEndian
+
+	// bucketIndexPrefix is the prefix used for all entries in the bucket
+	// index.
+	bucketIndexPrefix = []byte("bidx")
+
+	// curBucketIDKeyName is the name of the key used to keep track of the
+	// current bucket ID counter.
+	curBucketIDKeyName = []byte("bidx-cbid")
+
+	// metadataBucketID is the ID of the top-level metadata bucket.
+	// It is the value 0 encoded as an unsigned big-endian uint32.
+	metadataBucketID = [4]byte{}
+
+	// blockIdxBucketID is the ID of the internal block metadata bucket.
+	// It is the value 1 encoded as an unsigned big-endian uint32.
+	blockIdxBucketID = [4]byte{0x00, 0x00, 0x00, 0x01}
+
+	// blockIdxBucketName is the bucket used internally to track block
+	// metadata.
+	blockIdxBucketName = []byte("ffldb-blockidx")
+
+	// writeLocKeyName is the key used to store the current write file
+	// location.
+	writeLocKeyName = []byte("ffldb-writeloc")
+)
+
+// Common error strings.
+const (
+	// errDbNotOpenStr is the text to use for the database.ErrDbNotOpen
+	// error code.
+	errDbNotOpenStr = "database is not open"
+
+	// errTxClosedStr is the text to use for the database.ErrTxClosed error
+	// code.
+	errTxClosedStr = "database tx is closed"
+)
+
+// bulkFetchData allows a block location to be specified along with the
+// index it was requested from. This in turn allows the bulk data loading
+// functions to sort the data accesses based on the location to improve
+// performance while keeping track of which result the data is for.
+type bulkFetchData struct {
+	*blockLocation
+	replyIndex int
+}
+
+// bulkFetchDataSorter implements sort.Interface to allow a slice of
+// bulkFetchData to be sorted. In particular it sorts by file and then
+// offset so that reads from files are grouped and linear.
+type bulkFetchDataSorter []bulkFetchData
+
+// Len returns the number of items in the slice. It is part of the
+// sort.Interface implementation.
+func (s bulkFetchDataSorter) Len() int {
+	return len(s)
+}
+
+// Swap swaps the items at the passed indices. It is part of the
+// sort.Interface implementation.
+func (s bulkFetchDataSorter) Swap(i, j int) {
+	s[i], s[j] = s[j], s[i]
+}
+
+// Less returns whether the item with index i should sort before the item with
+// index j. It is part of the sort.Interface implementation.
+func (s bulkFetchDataSorter) Less(i, j int) bool {
+	if s[i].blockFileNum < s[j].blockFileNum {
+		return true
+	}
+	if s[i].blockFileNum > s[j].blockFileNum {
+		return false
+	}
+
+	return s[i].fileOffset < s[j].fileOffset
+}
+
+// makeDbErr creates a database.Error given a set of arguments.
+func makeDbErr(c database.ErrorCode, desc string, err error) database.Error {
+	return database.Error{ErrorCode: c, Description: desc, Err: err}
+}
+
+// convertErr converts the passed leveldb error into a database error with an
+// equivalent error code and the passed description. It also sets the passed
+// error as the underlying error.
+func convertErr(desc string, ldbErr error) database.Error {
+	// Use the driver-specific error code by default. The code below will
+	// update this with the converted error if it's recognized.
+	var code = database.ErrDriverSpecific
+
+	switch {
+	// Database corruption errors.
+	case ldberrors.IsCorrupted(ldbErr):
+		code = database.ErrCorruption
+
+	// Database open/create errors.
+	case ldbErr == leveldb.ErrClosed:
+		code = database.ErrDbNotOpen
+
+	// Transaction errors.
+	case ldbErr == leveldb.ErrSnapshotReleased:
+		code = database.ErrTxClosed
+	case ldbErr == leveldb.ErrIterReleased:
+		code = database.ErrTxClosed
+	}
+
+	return database.Error{ErrorCode: code, Description: desc, Err: ldbErr}
+}
+
+// copySlice returns a copy of the passed slice. This is mostly used to copy
+// leveldb iterator keys and values since they are only valid until the iterator
+// is moved instead of during the entirety of the transaction.
+func copySlice(slice []byte) []byte {
+	ret := make([]byte, len(slice))
+	copy(ret, slice)
+	return ret
+}
+
+// cursor is an internal type used to represent a cursor over key/value pairs
+// and nested buckets of a bucket and implements the database.Cursor interface.
+type cursor struct {
+	bucket      *bucket
+	dbIter      iterator.Iterator
+	pendingIter iterator.Iterator
+	currentIter iterator.Iterator
+}
+
+// Enforce cursor implements the database.Cursor interface.
+var _ database.Cursor = (*cursor)(nil)
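A hypothetical illustration of why the sorter above exists: ordering bulk requests by file and then offset turns scattered reads into grouped, mostly linear ones, while the reply index keeps results in the caller's requested order. (fetchReq is a stand-in for bulkFetchData; the real code sorts with sort.Sort and the sorter type above.)

package main

import (
	"fmt"
	"sort"
)

// fetchReq is a stand-in for bulkFetchData: a file location plus the index
// of the result slot it belongs to.
type fetchReq struct {
	fileNum, offset uint32
	replyIndex      int
}

func main() {
	reqs := []fetchReq{
		{fileNum: 2, offset: 512, replyIndex: 0},
		{fileNum: 1, offset: 4096, replyIndex: 1},
		{fileNum: 1, offset: 128, replyIndex: 2},
	}

	// Sort by file, then offset, so reads within a file are sequential.
	sort.Slice(reqs, func(i, j int) bool {
		if reqs[i].fileNum != reqs[j].fileNum {
			return reqs[i].fileNum < reqs[j].fileNum
		}
		return reqs[i].offset < reqs[j].offset
	})

	// Results still land in the caller's requested order via replyIndex.
	results := make([]string, len(reqs))
	for _, r := range reqs {
		results[r.replyIndex] = fmt.Sprintf("file %d @ %d", r.fileNum, r.offset)
	}
	fmt.Println(results)
}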
+
+// Bucket returns the bucket the cursor was created for.
+//
+// This function is part of the database.Cursor interface implementation.
+func (c *cursor) Bucket() database.Bucket {
+	// Ensure transaction state is valid.
+	if err := c.bucket.tx.checkClosed(); err != nil {
+		return nil
+	}
+
+	return c.bucket
+}
+
+// Delete removes the current key/value pair the cursor is at without
+// invalidating the cursor.
+//
+// Returns the following errors as required by the interface contract:
+// - ErrIncompatibleValue if attempted when the cursor points to a nested
+//   bucket
+// - ErrTxNotWritable if attempted against a read-only transaction
+// - ErrTxClosed if the transaction has already been closed
+//
+// This function is part of the database.Cursor interface implementation.
+func (c *cursor) Delete() error {
+	// Ensure transaction state is valid.
+	if err := c.bucket.tx.checkClosed(); err != nil {
+		return err
+	}
+
+	// Error if the cursor is exhausted.
+	if c.currentIter == nil {
+		str := "cursor is exhausted"
+		return makeDbErr(database.ErrIncompatibleValue, str, nil)
+	}
+
+	// Do not allow buckets to be deleted via the cursor.
+	key := c.currentIter.Key()
+	if bytes.HasPrefix(key, bucketIndexPrefix) {
+		str := "buckets may not be deleted from a cursor"
+		return makeDbErr(database.ErrIncompatibleValue, str, nil)
+	}
+
+	c.bucket.tx.deleteKey(copySlice(key), true)
+	return nil
+}
+
+// skipPendingUpdates skips any keys at the current database iterator position
+// that are being updated by the transaction. The forwards flag indicates the
+// direction the cursor is moving.
+func (c *cursor) skipPendingUpdates(forwards bool) {
+	for c.dbIter.Valid() {
+		var skip bool
+		key := c.dbIter.Key()
+		if _, ok := c.bucket.tx.pendingRemove[string(key)]; ok {
+			skip = true
+		} else if c.bucket.tx.pendingKeys.Has(key) {
+			skip = true
+		}
+		if !skip {
+			break
+		}
+
+		if forwards {
+			c.dbIter.Next()
+		} else {
+			c.dbIter.Prev()
+		}
+	}
+}
+
+// chooseIterator first skips any entries in the database iterator that are
+// being updated by the transaction and sets the current iterator to the
+// appropriate iterator depending on their validity and the order they compare
+// in while taking into account the direction flag. When the cursor is being
+// moved forwards and both iterators are valid, the iterator with the smaller
+// key is chosen and vice versa when the cursor is being moved backwards.
+func (c *cursor) chooseIterator(forwards bool) bool {
+	// Skip any keys at the current database iterator position that are
+	// being updated by the transaction.
+	c.skipPendingUpdates(forwards)
+
+	// When both iterators are exhausted, the cursor is exhausted too.
+	if !c.dbIter.Valid() && !c.pendingIter.Valid() {
+		c.currentIter = nil
+		return false
+	}
+
+	// Choose the database iterator when the pending keys iterator is
+	// exhausted.
+	if !c.pendingIter.Valid() {
+		c.currentIter = c.dbIter
+		return true
+	}
+
+	// Choose the pending keys iterator when the database iterator is
+	// exhausted.
+	if !c.dbIter.Valid() {
+		c.currentIter = c.pendingIter
+		return true
+	}
+
+	// Both iterators are valid, so choose the iterator with either the
+	// smaller or larger key depending on the forwards flag.
+	compare := bytes.Compare(c.dbIter.Key(), c.pendingIter.Key())
+	if (forwards && compare > 0) || (!forwards && compare < 0) {
+		c.currentIter = c.pendingIter
+	} else {
+		c.currentIter = c.dbIter
+	}
+	return true
+}
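chooseIterator is one step of a two-way merge between the committed snapshot and the transaction's sorted pending writes. Here is the same merge over two sorted string slices, as a rough sketch (the real code also skips database keys shadowed by pending updates, which this omits):

package main

import "fmt"

// mergeForward walks two sorted key lists the way the cursor walks its
// database and pending iterators, always yielding the smaller current key.
func mergeForward(db, pending []string) []string {
	var out []string
	i, j := 0, 0
	for i < len(db) || j < len(pending) {
		switch {
		case i == len(db): // database side exhausted
			out = append(out, pending[j])
			j++
		case j == len(pending): // pending side exhausted
			out = append(out, db[i])
			i++
		case db[i] <= pending[j]: // both valid: choose the smaller key
			out = append(out, db[i])
			i++
		default:
			out = append(out, pending[j])
			j++
		}
	}
	return out
}

func main() {
	db := []string{"a", "c", "e"}
	pending := []string{"b", "c", "d"}
	fmt.Println(mergeForward(db, pending)) // [a b c c d e]
}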
+
+// First positions the cursor at the first key/value pair and returns whether or
+// not the pair exists.
+//
+// This function is part of the database.Cursor interface implementation.
+func (c *cursor) First() bool {
+	// Ensure transaction state is valid.
+	if err := c.bucket.tx.checkClosed(); err != nil {
+		return false
+	}
+
+	// Seek to the first key in both the database and pending iterators and
+	// choose the iterator that is both valid and has the smaller key.
+	c.dbIter.First()
+	c.pendingIter.First()
+	return c.chooseIterator(true)
+}
+
+// Last positions the cursor at the last key/value pair and returns whether or
+// not the pair exists.
+//
+// This function is part of the database.Cursor interface implementation.
+func (c *cursor) Last() bool {
+	// Ensure transaction state is valid.
+	if err := c.bucket.tx.checkClosed(); err != nil {
+		return false
+	}
+
+	// Seek to the last key in both the database and pending iterators and
+	// choose the iterator that is both valid and has the larger key.
+	c.dbIter.Last()
+	c.pendingIter.Last()
+	return c.chooseIterator(false)
+}
+
+// Next moves the cursor one key/value pair forward and returns whether or not
+// the pair exists.
+//
+// This function is part of the database.Cursor interface implementation.
+func (c *cursor) Next() bool {
+	// Ensure transaction state is valid.
+	if err := c.bucket.tx.checkClosed(); err != nil {
+		return false
+	}
+
+	// Nothing to return if cursor is exhausted.
+	if c.currentIter == nil {
+		return false
+	}
+
+	// Move the current iterator to the next entry and choose the iterator
+	// that is both valid and has the smaller key.
+	c.currentIter.Next()
+	return c.chooseIterator(true)
+}
+
+// Prev moves the cursor one key/value pair backward and returns whether or not
+// the pair exists.
+//
+// This function is part of the database.Cursor interface implementation.
+func (c *cursor) Prev() bool {
+	// Ensure transaction state is valid.
+	if err := c.bucket.tx.checkClosed(); err != nil {
+		return false
+	}
+
+	// Nothing to return if cursor is exhausted.
+	if c.currentIter == nil {
+		return false
+	}
+
+	// Move the current iterator to the previous entry and choose the
+	// iterator that is both valid and has the larger key.
+	c.currentIter.Prev()
+	return c.chooseIterator(false)
+}
+
+// Seek positions the cursor at the first key/value pair that is greater than or
+// equal to the passed seek key. Returns false if no suitable key was found.
+//
+// This function is part of the database.Cursor interface implementation.
+func (c *cursor) Seek(seek []byte) bool {
+	// Ensure transaction state is valid.
+	if err := c.bucket.tx.checkClosed(); err != nil {
+		return false
+	}
+
+	// Seek to the provided key in both the database and pending iterators
+	// then choose the iterator that is both valid and has the smaller key.
+	seekKey := bucketizedKey(c.bucket.id, seek)
+	c.dbIter.Seek(seekKey)
+	c.pendingIter.Seek(seekKey)
+	return c.chooseIterator(true)
+}
+
+// rawKey returns the current key the cursor is pointing to without stripping
+// the current bucket prefix or bucket index prefix.
+func (c *cursor) rawKey() []byte {
+	// Nothing to return if cursor is exhausted.
+	if c.currentIter == nil {
+		return nil
+	}
+
+	return copySlice(c.currentIter.Key())
+}
+
+// Key returns the current key the cursor is pointing to.
+//
+// This function is part of the database.Cursor interface implementation.
+func (c *cursor) Key() []byte {
+	// Ensure transaction state is valid.
+	if err := c.bucket.tx.checkClosed(); err != nil {
+		return nil
+	}
+
+	// Nothing to return if cursor is exhausted.
+	if c.currentIter == nil {
+		return nil
+	}
+
+	// Slice out the actual key name and make a copy since it is no longer
+	// valid after iterating to the next item.
+	//
+	// The key is after the bucket index prefix and parent ID when the
+	// cursor is pointing to a nested bucket.
+	key := c.currentIter.Key()
+	if bytes.HasPrefix(key, bucketIndexPrefix) {
+		key = key[len(bucketIndexPrefix)+4:]
+		return copySlice(key)
+	}
+
+	// The key is after the bucket ID when the cursor is pointing to a
+	// normal entry.
+	key = key[len(c.bucket.id):]
+	return copySlice(key)
+}
+
+// rawValue returns the current value the cursor is pointing to without
+// filtering bucket index values.
+func (c *cursor) rawValue() []byte {
+	// Nothing to return if cursor is exhausted.
+	if c.currentIter == nil {
+		return nil
+	}
+
+	return copySlice(c.currentIter.Value())
+}
+
+// Value returns the current value the cursor is pointing to. This will be nil
+// for nested buckets.
+//
+// This function is part of the database.Cursor interface implementation.
+func (c *cursor) Value() []byte {
+	// Ensure transaction state is valid.
+	if err := c.bucket.tx.checkClosed(); err != nil {
+		return nil
+	}
+
+	// Nothing to return if cursor is exhausted.
+	if c.currentIter == nil {
+		return nil
+	}
+
+	// Return nil for the value when the cursor is pointing to a nested
+	// bucket.
+	if bytes.HasPrefix(c.currentIter.Key(), bucketIndexPrefix) {
+		return nil
+	}
+
+	return copySlice(c.currentIter.Value())
+}
+
+// cursorType defines the type of cursor to create.
+type cursorType int
+
+// The following constants define the allowed cursor types.
+const (
+	// ctKeys iterates through all of the keys in a given bucket.
+	ctKeys cursorType = iota
+
+	// ctBuckets iterates through all directly nested buckets in a given
+	// bucket.
+	ctBuckets
+
+	// ctFull iterates through both the keys and the directly nested buckets
+	// in a given bucket.
+	ctFull
+)
+
+// cursorFinalizer is either invoked when a cursor is being garbage collected or
+// called manually to ensure the underlying cursor iterators are released.
+func cursorFinalizer(c *cursor) {
+	c.dbIter.Release()
+	c.pendingIter.Release()
+}
+
+// newCursor returns a new cursor for the given bucket, bucket ID, and cursor
+// type.
+//
+// NOTE: The caller is responsible for calling the cursorFinalizer function on
+// the returned cursor.
+func newCursor(b *bucket, bucketID []byte, cursorTyp cursorType) *cursor {
+	var dbIter, pendingIter iterator.Iterator
+	switch cursorTyp {
+	case ctKeys:
+		keyRange := util.BytesPrefix(bucketID)
+		dbIter = b.tx.snapshot.NewIterator(keyRange, nil)
+		pendingKeyIter := newLdbTreapIter(b.tx, keyRange)
+		pendingIter = pendingKeyIter
+
+	case ctBuckets:
+		// The serialized bucket index key format is:
+		//   <bucketindexprefix><parentbucketid><bucketname>
+		//
+		// Create an iterator for both the database and the pending
+		// keys which are prefixed by the bucket index identifier and
+		// the provided bucket ID.
+		prefix := make([]byte, len(bucketIndexPrefix)+4)
+		copy(prefix, bucketIndexPrefix)
+		copy(prefix[len(bucketIndexPrefix):], bucketID)
+		bucketRange := util.BytesPrefix(prefix)
+
+		dbIter = b.tx.snapshot.NewIterator(bucketRange, nil)
+		pendingBucketIter := newLdbTreapIter(b.tx, bucketRange)
+		pendingIter = pendingBucketIter
+
+	case ctFull:
+		fallthrough
+	default:
+		// The serialized bucket index key format is:
+		//   <bucketindexprefix><parentbucketid><bucketname>
+		prefix := make([]byte, len(bucketIndexPrefix)+4)
+		copy(prefix, bucketIndexPrefix)
+		copy(prefix[len(bucketIndexPrefix):], bucketID)
+		bucketRange := util.BytesPrefix(prefix)
+		keyRange := util.BytesPrefix(bucketID)
+
+		// Since both keys and buckets are needed from the database,
+		// create an individual iterator for each prefix and then create
+		// a merged iterator from them.
+		dbKeyIter := b.tx.snapshot.NewIterator(keyRange, nil)
+		dbBucketIter := b.tx.snapshot.NewIterator(bucketRange, nil)
+		iters := []iterator.Iterator{dbKeyIter, dbBucketIter}
+		dbIter = iterator.NewMergedIterator(iters,
+			comparer.DefaultComparer, true)
+
+		// Since both keys and buckets are needed from the pending keys,
+		// create an individual iterator for each prefix and then create
+		// a merged iterator from them.
+		pendingKeyIter := newLdbTreapIter(b.tx, keyRange)
+		pendingBucketIter := newLdbTreapIter(b.tx, bucketRange)
+		iters = []iterator.Iterator{pendingKeyIter, pendingBucketIter}
+		pendingIter = iterator.NewMergedIterator(iters,
+			comparer.DefaultComparer, true)
+	}
+
+	// Create the cursor using the iterators.
+	return &cursor{bucket: b, dbIter: dbIter, pendingIter: pendingIter}
+}
+
+// bucket is an internal type used to represent a collection of key/value pairs
+// and implements the database.Bucket interface.
+type bucket struct {
+	tx *transaction
+	id [4]byte
+}
+
+// Enforce bucket implements the database.Bucket interface.
+var _ database.Bucket = (*bucket)(nil)
+
+// bucketIndexKey returns the actual key to use for storing and retrieving a
+// child bucket in the bucket index. This is required because additional
+// information is needed to distinguish nested buckets with the same name.
+func bucketIndexKey(parentID [4]byte, key []byte) []byte {
+	// The serialized bucket index key format is:
+	//   <bucketindexprefix><parentbucketid><bucketname>
+	indexKey := make([]byte, len(bucketIndexPrefix)+4+len(key))
+	copy(indexKey, bucketIndexPrefix)
+	copy(indexKey[len(bucketIndexPrefix):], parentID[:])
+	copy(indexKey[len(bucketIndexPrefix)+4:], key)
+	return indexKey
+}
+
+// bucketizedKey returns the actual key to use for storing and retrieving a key
+// for the provided bucket ID. This is required because bucketizing is handled
+// through the use of a unique prefix per bucket.
+func bucketizedKey(bucketID [4]byte, key []byte) []byte {
+	// The serialized bucketized key format is:
+	//   <bucketid><key>
+	bKey := make([]byte, 4+len(key))
+	copy(bKey, bucketID[:])
+	copy(bKey[4:], key)
+	return bKey
+}
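The two layouts are easy to see concretely. A short, self-contained demonstration using the same construction as bucketIndexKey and bucketizedKey above (the IDs are chosen purely for illustration):

package main

import "fmt"

var bucketIndexPrefix = []byte("bidx")

// bucketIndexKey: <bucketindexprefix><parentbucketid><bucketname>
func bucketIndexKey(parentID [4]byte, key []byte) []byte {
	indexKey := make([]byte, len(bucketIndexPrefix)+4+len(key))
	copy(indexKey, bucketIndexPrefix)
	copy(indexKey[len(bucketIndexPrefix):], parentID[:])
	copy(indexKey[len(bucketIndexPrefix)+4:], key)
	return indexKey
}

// bucketizedKey: <bucketid><key>
func bucketizedKey(bucketID [4]byte, key []byte) []byte {
	bKey := make([]byte, 4+len(key))
	copy(bKey, bucketID[:])
	copy(bKey[4:], key)
	return bKey
}

func main() {
	metadataID := [4]byte{}                    // top-level metadata bucket is ID 0
	childID := [4]byte{0x00, 0x00, 0x00, 0x02} // some child bucket

	// Bucket index entry mapping ("mybucket" under metadata) -> child ID.
	fmt.Printf("% x\n", bucketIndexKey(metadataID, []byte("mybucket")))

	// A normal key namespaced under the child bucket.
	fmt.Printf("% x\n", bucketizedKey(childID, []byte("mykey")))
}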
+
+// Bucket retrieves a nested bucket with the given key. Returns nil if
+// the bucket does not exist.
+//
+// This function is part of the database.Bucket interface implementation.
+func (b *bucket) Bucket(key []byte) database.Bucket {
+	// Ensure transaction state is valid.
+	if err := b.tx.checkClosed(); err != nil {
+		return nil
+	}
+
+	// Attempt to fetch the ID for the child bucket. The bucket does not
+	// exist if the bucket index entry does not exist.
+	childID := b.tx.fetchKey(bucketIndexKey(b.id, key))
+	if childID == nil {
+		return nil
+	}
+
+	childBucket := &bucket{tx: b.tx}
+	copy(childBucket.id[:], childID)
+	return childBucket
+}
+
+// CreateBucket creates and returns a new nested bucket with the given key.
+//
+// Returns the following errors as required by the interface contract:
+// - ErrBucketExists if the bucket already exists
+// - ErrBucketNameRequired if the key is empty
+// - ErrIncompatibleValue if the key is otherwise invalid for the particular
+//   implementation
+// - ErrTxNotWritable if attempted against a read-only transaction
+// - ErrTxClosed if the transaction has already been closed
+//
+// This function is part of the database.Bucket interface implementation.
+func (b *bucket) CreateBucket(key []byte) (database.Bucket, error) {
+	// Ensure transaction state is valid.
+	if err := b.tx.checkClosed(); err != nil {
+		return nil, err
+	}
+
+	// Ensure the transaction is writable.
+	if !b.tx.writable {
+		str := "create bucket requires a writable database transaction"
+		return nil, makeDbErr(database.ErrTxNotWritable, str, nil)
+	}
+
+	// Ensure bucket does not already exist.
+	bidxKey := bucketIndexKey(b.id, key)
+	if b.tx.hasKey(bidxKey) {
+		str := "bucket already exists"
+		return nil, makeDbErr(database.ErrBucketExists, str, nil)
+	}
+
+	// Find the appropriate next bucket ID to use for the new bucket. In
+	// the case of the special internal block index, keep the fixed ID.
+	var childID [4]byte
+	if b.id == metadataBucketID && bytes.Equal(key, blockIdxBucketName) {
+		childID = blockIdxBucketID
+	} else {
+		var err error
+		childID, err = b.tx.nextBucketID()
+		if err != nil {
+			return nil, err
+		}
+	}
+
+	// Add the new bucket to the bucket index.
+	if err := b.tx.putKey(bidxKey, childID[:]); err != nil {
+		str := fmt.Sprintf("failed to create bucket with key %q", key)
+		return nil, convertErr(str, err)
+	}
+	return &bucket{tx: b.tx, id: childID}, nil
+}
+
+// CreateBucketIfNotExists creates and returns a new nested bucket with the
+// given key if it does not already exist.
+//
+// Returns the following errors as required by the interface contract:
+// - ErrBucketNameRequired if the key is empty
+// - ErrIncompatibleValue if the key is otherwise invalid for the particular
+//   implementation
+// - ErrTxNotWritable if attempted against a read-only transaction
+// - ErrTxClosed if the transaction has already been closed
+//
+// This function is part of the database.Bucket interface implementation.
+func (b *bucket) CreateBucketIfNotExists(key []byte) (database.Bucket, error) {
+	// Ensure transaction state is valid.
+	if err := b.tx.checkClosed(); err != nil {
+		return nil, err
+	}
+
+	// Ensure the transaction is writable.
+	if !b.tx.writable {
+		str := "create bucket requires a writable database transaction"
+		return nil, makeDbErr(database.ErrTxNotWritable, str, nil)
+	}
+
+	// Return existing bucket if it already exists, otherwise create it.
+	if bucket := b.Bucket(key); bucket != nil {
+		return bucket, nil
+	}
+	return b.CreateBucket(key)
+}
+
+// DeleteBucket removes a nested bucket with the given key.
+//
+// Returns the following errors as required by the interface contract:
+// - ErrBucketNotFound if the specified bucket does not exist
+// - ErrTxNotWritable if attempted against a read-only transaction
+// - ErrTxClosed if the transaction has already been closed
+//
+// This function is part of the database.Bucket interface implementation.
+func (b *bucket) DeleteBucket(key []byte) error {
+	// Ensure transaction state is valid.
+	if err := b.tx.checkClosed(); err != nil {
+		return err
+	}
+
+	// Ensure the transaction is writable.
+	if !b.tx.writable {
+		str := "delete bucket requires a writable database transaction"
+		return makeDbErr(database.ErrTxNotWritable, str, nil)
+	}
+
+	// Attempt to fetch the ID for the child bucket. The bucket does not
+	// exist if the bucket index entry does not exist.
+	bidxKey := bucketIndexKey(b.id, key)
+	childID := b.tx.fetchKey(bidxKey)
+	if childID == nil {
+		str := fmt.Sprintf("bucket %q does not exist", key)
+		return makeDbErr(database.ErrBucketNotFound, str, nil)
+	}
+
+	// Remove all nested buckets and their keys.
+	childIDs := [][]byte{childID}
+	for len(childIDs) > 0 {
+		childID = childIDs[len(childIDs)-1]
+		childIDs = childIDs[:len(childIDs)-1]
+
+		// Delete all keys in the nested bucket.
+		keyCursor := newCursor(b, childID, ctKeys)
+		for ok := keyCursor.First(); ok; ok = keyCursor.Next() {
+			b.tx.deleteKey(keyCursor.rawKey(), false)
+		}
+		cursorFinalizer(keyCursor)
+
+		// Iterate through all nested buckets.
+		bucketCursor := newCursor(b, childID, ctBuckets)
+		for ok := bucketCursor.First(); ok; ok = bucketCursor.Next() {
+			// Push the id of the nested bucket onto the stack for
+			// the next iteration.
+			childID := bucketCursor.rawValue()
+			childIDs = append(childIDs, childID)
+
+			// Remove the nested bucket from the bucket index.
+			b.tx.deleteKey(bucketCursor.rawKey(), false)
+		}
+		cursorFinalizer(bucketCursor)
+	}
+
+	// Remove the nested bucket from the bucket index. Any buckets nested
+	// under it were already removed above.
+	b.tx.deleteKey(bidxKey, true)
+	return nil
+}
+
+// Cursor returns a new cursor, allowing for iteration over the bucket's
+// key/value pairs and nested buckets in forward or backward order.
+//
+// You must seek to a position using the First, Last, or Seek functions before
+// calling the Next, Prev, Key, or Value functions. Failure to do so will
+// result in the same return values as an exhausted cursor, which is false for
+// the Prev and Next functions and nil for Key and Value functions.
+//
+// This function is part of the database.Bucket interface implementation.
+func (b *bucket) Cursor() database.Cursor {
+	// Ensure transaction state is valid.
+	if err := b.tx.checkClosed(); err != nil {
+		return &cursor{bucket: b}
+	}
+
+	// Create the cursor and setup a runtime finalizer to ensure the
+	// iterators are released when the cursor is garbage collected.
+	c := newCursor(b, b.id[:], ctFull)
+	runtime.SetFinalizer(c, cursorFinalizer)
+	return c
+}
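DeleteBucket above avoids recursion by keeping an explicit stack of bucket IDs: each pass deletes one bucket's keys and pushes its children for later passes. The same traversal on a plain map-based tree, as an illustrative sketch (the children map is hypothetical):

package main

import "fmt"

// children maps a bucket ID to the IDs of its directly nested buckets; a
// stand-in for what the bucket index cursor discovers in the real code.
var children = map[string][]string{
	"root": {"a", "b"},
	"a":    {"a1"},
}

// deleteTree deletes root and everything nested under it without recursion,
// using the same explicit-stack walk as DeleteBucket.
func deleteTree(root string) []string {
	var deleted []string
	stack := []string{root}
	for len(stack) > 0 {
		id := stack[len(stack)-1]
		stack = stack[:len(stack)-1]

		// Push nested buckets for later passes, then delete this one.
		stack = append(stack, children[id]...)
		delete(children, id)
		deleted = append(deleted, id)
	}
	return deleted
}

func main() {
	fmt.Println(deleteTree("root")) // e.g. [root b a a1]
}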
+
+// ForEach invokes the passed function with every key/value pair in the bucket.
+// This does not include nested buckets or the key/value pairs within those
+// nested buckets.
+//
+// WARNING: It is not safe to mutate data while iterating with this method.
+// Doing so may cause the underlying cursor to be invalidated and return
+// unexpected keys and/or values.
+//
+// Returns the following errors as required by the interface contract:
+// - ErrTxClosed if the transaction has already been closed
+//
+// NOTE: The values returned by this function are only valid during a
+// transaction. Attempting to access them after a transaction has ended will
+// likely result in an access violation.
+//
+// This function is part of the database.Bucket interface implementation.
+func (b *bucket) ForEach(fn func(k, v []byte) error) error {
+	// Ensure transaction state is valid.
+	if err := b.tx.checkClosed(); err != nil {
+		return err
+	}
+
+	// Invoke the callback for each cursor item. Return the error returned
+	// from the callback when it is non-nil.
+	c := newCursor(b, b.id[:], ctKeys)
+	defer cursorFinalizer(c)
+	for ok := c.First(); ok; ok = c.Next() {
+		err := fn(c.Key(), c.Value())
+		if err != nil {
+			return err
+		}
+	}
+
+	return nil
+}
+
+// ForEachBucket invokes the passed function with the key of every nested bucket
+// in the current bucket. This does not include any nested buckets within those
+// nested buckets.
+//
+// WARNING: It is not safe to mutate data while iterating with this method.
+// Doing so may cause the underlying cursor to be invalidated and return
+// unexpected keys.
+//
+// Returns the following errors as required by the interface contract:
+// - ErrTxClosed if the transaction has already been closed
+//
+// NOTE: The values returned by this function are only valid during a
+// transaction. Attempting to access them after a transaction has ended will
+// likely result in an access violation.
+//
+// This function is part of the database.Bucket interface implementation.
+func (b *bucket) ForEachBucket(fn func(k []byte) error) error {
+	// Ensure transaction state is valid.
+	if err := b.tx.checkClosed(); err != nil {
+		return err
+	}
+
+	// Invoke the callback for each cursor item. Return the error returned
+	// from the callback when it is non-nil.
+	c := newCursor(b, b.id[:], ctBuckets)
+	defer cursorFinalizer(c)
+	for ok := c.First(); ok; ok = c.Next() {
+		err := fn(c.Key())
+		if err != nil {
+			return err
+		}
+	}
+
+	return nil
+}
+
+// Writable returns whether or not the bucket is writable.
+//
+// This function is part of the database.Bucket interface implementation.
+func (b *bucket) Writable() bool {
+	return b.tx.writable
+}
+
+// Put saves the specified key/value pair to the bucket. Keys that do not
+// already exist are added and keys that already exist are overwritten.
+//
+// Returns the following errors as required by the interface contract:
+// - ErrKeyRequired if the key is empty
+// - ErrIncompatibleValue if the key is the same as an existing bucket
+// - ErrTxNotWritable if attempted against a read-only transaction
+// - ErrTxClosed if the transaction has already been closed
+//
+// This function is part of the database.Bucket interface implementation.
+func (b *bucket) Put(key, value []byte) error {
+	// Ensure transaction state is valid.
+	if err := b.tx.checkClosed(); err != nil {
+		return err
+	}
+
+	// Ensure the transaction is writable.
+	if !b.tx.writable {
+		str := "setting a key requires a writable database transaction"
+		return makeDbErr(database.ErrTxNotWritable, str, nil)
+	}
+
+	return b.tx.putKey(bucketizedKey(b.id, key), value)
+}
+
+// Get returns the value for the given key. Returns nil if the key does
+// not exist in this bucket.
+//
+// NOTE: The value returned by this function is only valid during a
+// transaction. Attempting to access it after a transaction has ended
+// will likely result in an access violation.
+//
+// This function is part of the database.Bucket interface implementation.
+func (b *bucket) Get(key []byte) []byte {
+	// Ensure transaction state is valid.
+	if err := b.tx.checkClosed(); err != nil {
+		return nil
+	}
+
+	// Nothing to return if there is no key.
+	if len(key) == 0 {
+		return nil
+	}
+
+	return b.tx.fetchKey(bucketizedKey(b.id, key))
+}
+
+// Delete removes the specified key from the bucket. Deleting a key that does
+// not exist does not return an error.
+//
+// Returns the following errors as required by the interface contract:
+// - ErrKeyRequired if the key is empty
+// - ErrIncompatibleValue if the key is the same as an existing bucket
+// - ErrTxNotWritable if attempted against a read-only transaction
+// - ErrTxClosed if the transaction has already been closed
+//
+// This function is part of the database.Bucket interface implementation.
+func (b *bucket) Delete(key []byte) error {
+	// Ensure transaction state is valid.
+	if err := b.tx.checkClosed(); err != nil {
+		return err
+	}
+
+	// Ensure the transaction is writable.
+	if !b.tx.writable {
+		str := "deleting a value requires a writable database transaction"
+		return makeDbErr(database.ErrTxNotWritable, str, nil)
+	}
+
+	b.tx.deleteKey(bucketizedKey(b.id, key), true)
+	return nil
+}
+
+// pendingBlock houses a block that will be written to disk when the database
+// transaction is committed.
+type pendingBlock struct {
+	hash  *wire.ShaHash
+	bytes []byte
+}
+
+// transaction represents a database transaction. It can either be read-only or
+// read-write and implements the database.Tx interface. The transaction
+// provides a root bucket against which all reads and writes occur.
+type transaction struct {
+	managed        bool              // Is the transaction managed?
+	closed         bool              // Is the transaction closed?
+	writable       bool              // Is the transaction writable?
+	db             *db               // DB instance the tx was created from.
+	snapshot       *leveldb.Snapshot // Underlying snapshot for txns.
+	metaBucket     *bucket           // The root metadata bucket.
+	blockIdxBucket *bucket           // The block index bucket.
+
+	// Blocks that need to be stored on commit. The pendingBlocks map is
+	// kept to allow quick lookups of pending data by block hash.
+	pendingBlocks    map[wire.ShaHash]int
+	pendingBlockData []pendingBlock
+
+	// Keys that need to be stored or deleted on commit.
+	pendingKeys   *treap.Treap
+	pendingRemove map[string]struct{}
+
+	// Active iterators that need to be notified when the pending keys have
+	// been updated so the cursors can properly handle updates to the
+	// transaction state.
+	activeIterLock sync.RWMutex
+	activeIters    []*treap.Iterator
+}
+
+// Enforce transaction implements the database.Tx interface.
+var _ database.Tx = (*transaction)(nil)
+
+// removeActiveIter removes the passed iterator from the list of active
+// iterators against the pending keys treap.
+func (tx *transaction) removeActiveIter(iter *treap.Iterator) {
+	// An indexing for loop is intentionally used over a range here as range
+	// does not reevaluate the slice on each iteration nor does it adjust
+	// the index for the modified slice.
+	tx.activeIterLock.Lock()
+	for i := 0; i < len(tx.activeIters); i++ {
+		if tx.activeIters[i] == iter {
+			copy(tx.activeIters[i:], tx.activeIters[i+1:])
+			tx.activeIters[len(tx.activeIters)-1] = nil
+			tx.activeIters = tx.activeIters[:len(tx.activeIters)-1]
+		}
+	}
+	tx.activeIterLock.Unlock()
+}
+
+// addActiveIter adds the passed iterator to the list of active iterators for
+// the pending keys treap.
+func (tx *transaction) addActiveIter(iter *treap.Iterator) {
+	tx.activeIterLock.Lock()
+	tx.activeIters = append(tx.activeIters, iter)
+	tx.activeIterLock.Unlock()
+}
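removeActiveIter above uses the standard copy-and-truncate idiom for in-place, order-preserving slice removal, nil-ing the vacated tail slot so the garbage collector can reclaim the element. The idiom in isolation (the remove helper is illustrative and deletes only the first occurrence):

package main

import "fmt"

// remove deletes the first occurrence of target from s in place, preserving
// order, the same way removeActiveIter prunes its iterator list.
func remove(s []*int, target *int) []*int {
	for i := 0; i < len(s); i++ {
		if s[i] == target {
			copy(s[i:], s[i+1:])
			s[len(s)-1] = nil // let the GC reclaim the element
			s = s[:len(s)-1]
			break
		}
	}
	return s
}

func main() {
	a, b, c := 1, 2, 3
	iters := []*int{&a, &b, &c}
	iters = remove(iters, &b)
	fmt.Println(len(iters), *iters[0], *iters[1]) // 2 1 3
}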
+
+// notifyActiveIters notifies all of the active iterators for the pending keys
+// treap that it has been updated.
+func (tx *transaction) notifyActiveIters() {
+	tx.activeIterLock.RLock()
+	for _, iter := range tx.activeIters {
+		iter.ForceReseek()
+	}
+	tx.activeIterLock.RUnlock()
+}
+
+// checkClosed returns an error if the database or transaction is closed.
+func (tx *transaction) checkClosed() error {
+	// The transaction is no longer valid if it has been closed.
+	if tx.closed {
+		return makeDbErr(database.ErrTxClosed, errTxClosedStr, nil)
+	}
+
+	return nil
+}
+
+// hasKey returns whether or not the provided key exists in the database while
+// taking into account the current transaction state.
+func (tx *transaction) hasKey(key []byte) bool {
+	// When the transaction is writable, check the pending transaction
+	// state first.
+	if tx.writable {
+		if _, ok := tx.pendingRemove[string(key)]; ok {
+			return false
+		}
+		if tx.pendingKeys.Has(key) {
+			return true
+		}
+	}
+
+	// Consult the database.
+	hasKey, _ := tx.snapshot.Has(key, nil)
+	return hasKey
+}
+
+// putKey adds the provided key to the list of keys to be updated in the
+// database when the transaction is committed.
+//
+// NOTE: This function must only be called on a writable transaction. Since it
+// is an internal helper function, it does not check.
+func (tx *transaction) putKey(key, value []byte) error {
+	// Prevent the key from being deleted if it was previously scheduled
+	// to be deleted on transaction commit.
+	delete(tx.pendingRemove, string(key))
+
+	// Add the key/value pair to the list to be written on transaction
+	// commit.
+	tx.pendingKeys.Put(key, value)
+	tx.notifyActiveIters()
+	return nil
+}
+
+// fetchKey attempts to fetch the provided key from the database while taking
+// into account the current transaction state. Returns nil if the key does not
+// exist.
+func (tx *transaction) fetchKey(key []byte) []byte {
+	// When the transaction is writable, check the pending transaction
+	// state first.
+	if tx.writable {
+		if _, ok := tx.pendingRemove[string(key)]; ok {
+			return nil
+		}
+		// TODO(davec): Avoid the double lookup. This will likely
+		// require returning an additional flag from Get since the value
+		// is allowed to be nil, it can't be used to check for
+		// existence.
+		if tx.pendingKeys.Has(key) {
+			return tx.pendingKeys.Get(key)
+		}
+	}
+
+	value, err := tx.snapshot.Get(key, nil)
+	if err != nil {
+		return nil
+	}
+	return value
+}
+
+// deleteKey adds the provided key to the list of keys to be deleted from the
+// database when the transaction is committed. The notify iterators flag is
+// useful to delay notifying iterators about the changes during bulk deletes.
+//
+// NOTE: This function must only be called on a writable transaction. Since it
+// is an internal helper function, it does not check.
+func (tx *transaction) deleteKey(key []byte, notifyIterators bool) {
+	// Remove the key from the list of pending keys to be written on
+	// transaction commit if needed.
+	tx.pendingKeys.Delete(key)
+
+	// Add the key to the list to be deleted on transaction commit.
+	if tx.pendingRemove == nil {
+		tx.pendingRemove = make(map[string]struct{})
+	}
+	tx.pendingRemove[string(key)] = struct{}{}
+
+	// Notify the active iterators about the change if the flag is set.
+	if notifyIterators {
+		tx.notifyActiveIters()
+	}
+}
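Together, hasKey and fetchKey give the transaction read-your-own-writes semantics: pending deletions shadow everything, pending writes shadow the snapshot, and only then is committed state consulted. The lookup order in miniature, with plain maps standing in for the treap and the leveldb snapshot:

package main

import "fmt"

// overlay mimics the transaction's view: pendingRemove and pendingKeys are
// consulted before the committed snapshot, in that order.
type overlay struct {
	pendingRemove map[string]struct{}
	pendingKeys   map[string][]byte
	snapshot      map[string][]byte
}

func (o *overlay) fetch(key string) []byte {
	if _, ok := o.pendingRemove[key]; ok {
		return nil // deleted in this transaction
	}
	if v, ok := o.pendingKeys[key]; ok {
		return v // written in this transaction
	}
	return o.snapshot[key] // committed state
}

func main() {
	o := &overlay{
		pendingRemove: map[string]struct{}{"gone": {}},
		pendingKeys:   map[string][]byte{"new": []byte("pending")},
		snapshot: map[string][]byte{
			"gone": []byte("old"),
			"kept": []byte("committed"),
		},
	}
	fmt.Printf("%q %q %q\n", o.fetch("new"), o.fetch("kept"), o.fetch("gone"))
}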
+
+// nextBucketID returns the next bucket ID to use for creating a new bucket.
+//
+// NOTE: This function must only be called on a writable transaction. Since it
+// is an internal helper function, it does not check.
+func (tx *transaction) nextBucketID() ([4]byte, error) {
+	// Load the currently highest used bucket ID.
+	curIDBytes := tx.fetchKey(curBucketIDKeyName)
+	curBucketNum := binary.BigEndian.Uint32(curIDBytes)
+
+	// Increment and update the current bucket ID and return it.
+	var nextBucketID [4]byte
+	binary.BigEndian.PutUint32(nextBucketID[:], curBucketNum+1)
+	if err := tx.putKey(curBucketIDKeyName, nextBucketID[:]); err != nil {
+		return [4]byte{}, err
+	}
+	return nextBucketID, nil
+}
+
+// Metadata returns the top-most bucket for all metadata storage.
+//
+// This function is part of the database.Tx interface implementation.
+func (tx *transaction) Metadata() database.Bucket {
+	return tx.metaBucket
+}
+
+// hasBlock returns whether or not a block with the given hash exists.
+func (tx *transaction) hasBlock(hash *wire.ShaHash) bool {
+	// Return true if the block is pending to be written on commit since
+	// it exists from the viewpoint of this transaction.
+	if _, exists := tx.pendingBlocks[*hash]; exists {
+		return true
+	}
+
+	return tx.hasKey(bucketizedKey(blockIdxBucketID, hash[:]))
+}
+
+// StoreBlock stores the provided block into the database. There are no checks
+// to ensure the block connects to a previous block, no checks for double
+// spends, and no additional functionality such as transaction indexing. It
+// simply stores the block in the database.
+//
+// Returns the following errors as required by the interface contract:
+// - ErrBlockExists when the block hash already exists
+// - ErrTxNotWritable if attempted against a read-only transaction
+// - ErrTxClosed if the transaction has already been closed
+//
+// This function is part of the database.Tx interface implementation.
+func (tx *transaction) StoreBlock(block *btcutil.Block) error {
+	// Ensure transaction state is valid.
+	if err := tx.checkClosed(); err != nil {
+		return err
+	}
+
+	// Ensure the transaction is writable.
+	if !tx.writable {
+		str := "store block requires a writable database transaction"
+		return makeDbErr(database.ErrTxNotWritable, str, nil)
+	}
+
+	// Reject the block if it already exists.
+	blockHash := block.Sha()
+	if tx.hasBlock(blockHash) {
+		str := fmt.Sprintf("block %s already exists", blockHash)
+		return makeDbErr(database.ErrBlockExists, str, nil)
+	}
+
+	blockBytes, err := block.Bytes()
+	if err != nil {
+		str := fmt.Sprintf("failed to get serialized bytes for block %s",
+			blockHash)
+		return makeDbErr(database.ErrDriverSpecific, str, err)
+	}
+
+	// Add the block to be stored to the list of pending blocks to store
+	// when the transaction is committed. Also, add it to the pending
+	// blocks map so it is easy to determine the block is pending based on
+	// the block hash.
+	if tx.pendingBlocks == nil {
+		tx.pendingBlocks = make(map[wire.ShaHash]int)
+	}
+	tx.pendingBlocks[*blockHash] = len(tx.pendingBlockData)
+	tx.pendingBlockData = append(tx.pendingBlockData, pendingBlock{
+		hash:  blockHash,
+		bytes: blockBytes,
+	})
+	log.Tracef("Added block %s to pending blocks", blockHash)
+
+	return nil
+}
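StoreBlock pairs an ordered slice of pending data with a map from hash to slice index, so commit-time writes preserve insertion order while duplicate detection and pending lookups stay O(1). The pattern in isolation (hash is a stand-in for wire.ShaHash):

package main

import "fmt"

type hash [4]byte // stand-in for wire.ShaHash

// pendingStore keeps insertion order in a slice and O(1) lookups in a map,
// mirroring pendingBlockData/pendingBlocks in the transaction above.
type pendingStore struct {
	index map[hash]int
	data  [][]byte
}

func (p *pendingStore) add(h hash, b []byte) bool {
	if p.index == nil {
		p.index = make(map[hash]int)
	}
	if _, exists := p.index[h]; exists {
		return false // reject duplicates, as StoreBlock does
	}
	p.index[h] = len(p.data)
	p.data = append(p.data, b)
	return true
}

func main() {
	var p pendingStore
	fmt.Println(p.add(hash{1}, []byte("block one"))) // true
	fmt.Println(p.add(hash{1}, []byte("again")))     // false: already pending
	fmt.Println(len(p.data))                         // 1
}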
+
+// HasBlock returns whether or not a block with the given hash exists in the
+// database.
+//
+// Returns the following errors as required by the interface contract:
+// - ErrTxClosed if the transaction has already been closed
+//
+// This function is part of the database.Tx interface implementation.
+func (tx *transaction) HasBlock(hash *wire.ShaHash) (bool, error) {
+	// Ensure transaction state is valid.
+	if err := tx.checkClosed(); err != nil {
+		return false, err
+	}
+
+	return tx.hasBlock(hash), nil
+}
+
+// HasBlocks returns whether or not the blocks with the provided hashes
+// exist in the database.
+//
+// Returns the following errors as required by the interface contract:
+// - ErrTxClosed if the transaction has already been closed
+//
+// This function is part of the database.Tx interface implementation.
+func (tx *transaction) HasBlocks(hashes []wire.ShaHash) ([]bool, error) {
+	// Ensure transaction state is valid.
+	if err := tx.checkClosed(); err != nil {
+		return nil, err
+	}
+
+	results := make([]bool, len(hashes))
+	for i := range hashes {
+		results[i] = tx.hasBlock(&hashes[i])
+	}
+
+	return results, nil
+}
+
+// fetchBlockRow fetches the metadata stored in the block index for the provided
+// hash. It will return ErrBlockNotFound if there is no entry.
+func (tx *transaction) fetchBlockRow(hash *wire.ShaHash) ([]byte, error) {
+	blockRow := tx.blockIdxBucket.Get(hash[:])
+	if blockRow == nil {
+		str := fmt.Sprintf("block %s does not exist", hash)
+		return nil, makeDbErr(database.ErrBlockNotFound, str, nil)
+	}
+
+	return blockRow, nil
+}
+
+// FetchBlockHeader returns the raw serialized bytes for the block header
+// identified by the given hash. The raw bytes are in the format returned by
+// Serialize on a wire.BlockHeader.
+//
+// Returns the following errors as required by the interface contract:
+// - ErrBlockNotFound if the requested block hash does not exist
+// - ErrTxClosed if the transaction has already been closed
+// - ErrCorruption if the database has somehow become corrupted
+//
+// NOTE: The data returned by this function is only valid during a
+// database transaction. Attempting to access it after a transaction
+// has ended results in undefined behavior. This constraint prevents
+// additional data copies and allows support for memory-mapped database
+// implementations.
+//
+// This function is part of the database.Tx interface implementation.
+func (tx *transaction) FetchBlockHeader(hash *wire.ShaHash) ([]byte, error) {
+	// Ensure transaction state is valid.
+	if err := tx.checkClosed(); err != nil {
+		return nil, err
+	}
+
+	// When the block is pending to be written on commit return the bytes
+	// from there.
+	if idx, exists := tx.pendingBlocks[*hash]; exists {
+		blockBytes := tx.pendingBlockData[idx].bytes
+		return blockBytes[0:blockHdrSize:blockHdrSize], nil
+	}
+
+	// Fetch the block index row and slice off the header. Notice the use
+	// of the cap on the subslice to prevent the caller from accidentally
+	// appending into the db data.
+	blockRow, err := tx.fetchBlockRow(hash)
+	if err != nil {
+		return nil, err
+	}
+	endOffset := blockLocSize + blockHdrSize
+	return blockRow[blockLocSize:endOffset:endOffset], nil
+}
+
+// FetchBlockHeaders returns the raw serialized bytes for the block headers
+// identified by the given hashes. The raw bytes are in the format returned by
+// Serialize on a wire.BlockHeader.
+//
+// Returns the following errors as required by the interface contract:
+// - ErrBlockNotFound if any of the requested block hashes do not exist
+// - ErrTxClosed if the transaction has already been closed
+// - ErrCorruption if the database has somehow become corrupted
+//
+// NOTE: The data returned by this function is only valid during a database
+// transaction. Attempting to access it after a transaction has ended results
+// in undefined behavior. This constraint prevents additional data copies and
+// allows support for memory-mapped database implementations.
+//
+// This function is part of the database.Tx interface implementation.
+func (tx *transaction) FetchBlockHeaders(hashes []wire.ShaHash) ([][]byte, error) {
+	// Ensure transaction state is valid.
+	if err := tx.checkClosed(); err != nil {
+		return nil, err
+	}
+
+	// NOTE: This could check for the existence of all blocks before loading
+	// any of the headers which would be faster in the failure case, however
+	// callers will not typically be calling this function with invalid
+	// values, so optimize for the common case.
+
+	// Load the headers.
+	headers := make([][]byte, len(hashes))
+	for i := range hashes {
+		hash := &hashes[i]
+
+		// When the block is pending to be written on commit return the
+		// bytes from there.
+		if idx, exists := tx.pendingBlocks[*hash]; exists {
+			blkBytes := tx.pendingBlockData[idx].bytes
+			headers[i] = blkBytes[0:blockHdrSize:blockHdrSize]
+			continue
+		}
+
+		// Fetch the block index row and slice off the header. Notice
+		// the use of the cap on the subslice to prevent the caller
+		// from accidentally appending into the db data.
+		blockRow, err := tx.fetchBlockRow(hash)
+		if err != nil {
+			return nil, err
+		}
+		endOffset := blockLocSize + blockHdrSize
+		headers[i] = blockRow[blockLocSize:endOffset:endOffset]
+	}
+
+	return headers, nil
+}
+
+// FetchBlock returns the raw serialized bytes for the block identified by the
+// given hash. The raw bytes are in the format returned by Serialize on a
+// wire.MsgBlock.
+//
+// Returns the following errors as required by the interface contract:
+// - ErrBlockNotFound if the requested block hash does not exist
+// - ErrTxClosed if the transaction has already been closed
+// - ErrCorruption if the database has somehow become corrupted
+//
+// In addition, returns ErrDriverSpecific if any failures occur when reading the
+// block files.
+//
+// NOTE: The data returned by this function is only valid during a database
+// transaction. Attempting to access it after a transaction has ended results
+// in undefined behavior. This constraint prevents additional data copies and
+// allows support for memory-mapped database implementations.
+//
+// This function is part of the database.Tx interface implementation.
+func (tx *transaction) FetchBlock(hash *wire.ShaHash) ([]byte, error) {
+	// Ensure transaction state is valid.
+	if err := tx.checkClosed(); err != nil {
+		return nil, err
+	}
+
+	// When the block is pending to be written on commit return the bytes
+	// from there.
+	if idx, exists := tx.pendingBlocks[*hash]; exists {
+		return tx.pendingBlockData[idx].bytes, nil
+	}
+
+	// Lookup the location of the block in the files from the block index.
+	blockRow, err := tx.fetchBlockRow(hash)
+	if err != nil {
+		return nil, err
+	}
+	location := deserializeBlockLoc(blockRow)
+
+	// Read the block from the appropriate location. The function also
+	// performs a checksum over the data to detect data corruption.
+	blockBytes, err := tx.db.store.readBlock(hash, location)
+	if err != nil {
+		return nil, err
+	}
+
+	return blockBytes, nil
+}
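FetchBlockHeader and FetchBlockHeaders above return subslices with the three-index form blockRow[a:b:b], pinning the capacity so that a caller's append reallocates instead of scribbling over adjacent index-row bytes. A compact demonstration of the difference:

package main

import "fmt"

func main() {
	// row simulates a block index row: location bytes then header bytes.
	row := []byte{1, 2, 3, 4, 5, 6, 7, 8}

	unsafeHdr := row[2:4] // cap extends to the end of row
	safeHdr := row[2:4:4] // full slice expression: cap == len

	// Appending through the unsafe slice overwrites row[4] in place.
	_ = append(unsafeHdr, 0xFF)
	fmt.Println(row[4]) // 255: neighboring data clobbered

	// Appending through the capped slice forces a copy; row is untouched.
	row[4] = 5
	_ = append(safeHdr, 0xEE)
	fmt.Println(row[4]) // 5: unchanged
}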
+//
+// Returns the following errors as required by the interface contract:
+// - ErrBlockNotFound if any of the requested block hashes do not exist
+// - ErrTxClosed if the transaction has already been closed
+// - ErrCorruption if the database has somehow become corrupted
+//
+// In addition, returns ErrDriverSpecific if any failures occur when reading the
+// block files.
+//
+// NOTE: The data returned by this function is only valid during a database
+// transaction. Attempting to access it after a transaction has ended results
+// in undefined behavior. This constraint prevents additional data copies and
+// allows support for memory-mapped database implementations.
+//
+// This function is part of the database.Tx interface implementation.
+func (tx *transaction) FetchBlocks(hashes []wire.ShaHash) ([][]byte, error) {
+	// Ensure transaction state is valid.
+	if err := tx.checkClosed(); err != nil {
+		return nil, err
+	}
+
+	// NOTE: This could check for the existence of all blocks before loading
+	// any of them which would be faster in the failure case, however
+	// callers will not typically be calling this function with invalid
+	// values, so optimize for the common case.
+
+	// Load the blocks.
+	blocks := make([][]byte, len(hashes))
+	for i := range hashes {
+		var err error
+		blocks[i], err = tx.FetchBlock(&hashes[i])
+		if err != nil {
+			return nil, err
+		}
+	}
+
+	return blocks, nil
+}
+
+// fetchPendingRegion attempts to fetch the provided region from any block
+// which is pending to be written on commit. It will return nil for the byte
+// slice when the region references a block which is not pending. When the
+// region does reference a pending block, it is bounds checked and returns
+// ErrBlockRegionInvalid if invalid.
+func (tx *transaction) fetchPendingRegion(region *database.BlockRegion) ([]byte, error) {
+	// Nothing to do if the block is not pending to be written on commit.
+	idx, exists := tx.pendingBlocks[*region.Hash]
+	if !exists {
+		return nil, nil
+	}
+
+	// Ensure the region is within the bounds of the block.
+	blockBytes := tx.pendingBlockData[idx].bytes
+	blockLen := uint32(len(blockBytes))
+	endOffset := region.Offset + region.Len
+	if endOffset < region.Offset || endOffset > blockLen {
+		str := fmt.Sprintf("block %s region offset %d, length %d "+
+			"exceeds block length of %d", region.Hash,
+			region.Offset, region.Len, blockLen)
+		return nil, makeDbErr(database.ErrBlockRegionInvalid, str, nil)
+	}
+
+	// Return the bytes from the pending block.
+	return blockBytes[region.Offset:endOffset:endOffset], nil
+}
+
+// FetchBlockRegion returns the raw serialized bytes for the given block region.
+//
+// For example, it is possible to directly extract Bitcoin transactions and/or
+// scripts from a block with this function. Depending on the backend
+// implementation, this can provide significant savings by avoiding the need to
+// load entire blocks.
+//
+// The raw bytes are in the format returned by Serialize on a wire.MsgBlock and
+// the Offset field in the provided BlockRegion is zero-based and relative to
+// the start of the block (byte 0).
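+//
+// As a hedged sketch of the call shape (blockHash, txOffset, and txLen are
+// hypothetical values a caller would have obtained elsewhere, such as from a
+// transaction index):
+//
+//	region := database.BlockRegion{
+//		Hash:   blockHash,
+//		Offset: txOffset,
+//		Len:    txLen,
+//	}
+//	txBytes, err := tx.FetchBlockRegion(&region)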
+//
+// Returns the following errors as required by the interface contract:
+// - ErrBlockNotFound if the requested block hash does not exist
+// - ErrBlockRegionInvalid if the region exceeds the bounds of the associated
+//   block
+// - ErrTxClosed if the transaction has already been closed
+// - ErrCorruption if the database has somehow become corrupted
+//
+// In addition, returns ErrDriverSpecific if any failures occur when reading the
+// block files.
+//
+// NOTE: The data returned by this function is only valid during a database
+// transaction. Attempting to access it after a transaction has ended results
+// in undefined behavior. This constraint prevents additional data copies and
+// allows support for memory-mapped database implementations.
+//
+// This function is part of the database.Tx interface implementation.
+func (tx *transaction) FetchBlockRegion(region *database.BlockRegion) ([]byte, error) {
+	// Ensure transaction state is valid.
+	if err := tx.checkClosed(); err != nil {
+		return nil, err
+	}
+
+	// When the block is pending to be written on commit return the bytes
+	// from there.
+	if tx.pendingBlocks != nil {
+		regionBytes, err := tx.fetchPendingRegion(region)
+		if err != nil {
+			return nil, err
+		}
+		if regionBytes != nil {
+			return regionBytes, nil
+		}
+	}
+
+	// Lookup the location of the block in the files from the block index.
+	blockRow, err := tx.fetchBlockRow(region.Hash)
+	if err != nil {
+		return nil, err
+	}
+	location := deserializeBlockLoc(blockRow)
+
+	// Ensure the region is within the bounds of the block.
+	endOffset := region.Offset + region.Len
+	if endOffset < region.Offset || endOffset > location.blockLen {
+		str := fmt.Sprintf("block %s region offset %d, length %d "+
+			"exceeds block length of %d", region.Hash,
+			region.Offset, region.Len, location.blockLen)
+		return nil, makeDbErr(database.ErrBlockRegionInvalid, str, nil)
+	}
+
+	// Read the region from the appropriate disk block file.
+	regionBytes, err := tx.db.store.readBlockRegion(location, region.Offset,
+		region.Len)
+	if err != nil {
+		return nil, err
+	}
+
+	return regionBytes, nil
+}
+
+// FetchBlockRegions returns the raw serialized bytes for the given block
+// regions.
+//
+// For example, it is possible to directly extract Bitcoin transactions and/or
+// scripts from various blocks with this function. Depending on the backend
+// implementation, this can provide significant savings by avoiding the need to
+// load entire blocks.
+//
+// The raw bytes are in the format returned by Serialize on a wire.MsgBlock and
+// the Offset fields in the provided BlockRegions are zero-based and relative to
+// the start of the block (byte 0).
+//
+// Returns the following errors as required by the interface contract:
+// - ErrBlockNotFound if any of the requested block hashes do not exist
+// - ErrBlockRegionInvalid if one or more regions exceed the bounds of the
+//   associated block
+// - ErrTxClosed if the transaction has already been closed
+// - ErrCorruption if the database has somehow become corrupted
+//
+// In addition, returns ErrDriverSpecific if any failures occur when reading the
+// block files.
+//
+// NOTE: The data returned by this function is only valid during a database
+// transaction. Attempting to access it after a transaction has ended results
+// in undefined behavior. This constraint prevents additional data copies and
+// allows support for memory-mapped database implementations.
+//
+// This function is part of the database.Tx interface implementation.
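+//
+// As an illustrative sketch (regions below stands for a caller-built slice of
+// database.BlockRegion values; the name is hypothetical), note that the
+// results are returned in the same order as the requested regions even though
+// the underlying reads are sorted by file location:
+//
+//	regionBytes, err := tx.FetchBlockRegions(regions)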
+func (tx *transaction) FetchBlockRegions(regions []database.BlockRegion) ([][]byte, error) {
+	// Ensure transaction state is valid.
+	if err := tx.checkClosed(); err != nil {
+		return nil, err
+	}
+
+	// NOTE: This could check for the existence of all blocks before
+	// deserializing the locations and building up the fetch list which
+	// would be faster in the failure case, however callers will not
+	// typically be calling this function with invalid values, so optimize
+	// for the common case.
+
+	// NOTE: A potential optimization here would be to combine adjacent
+	// regions to reduce the number of reads.
+
+	// In order to improve efficiency of loading the bulk data, first grab
+	// the block location for all of the requested block hashes and sort
+	// the reads by filenum:offset so that all reads are grouped by file
+	// and linear within each file. This can result in quite a significant
+	// performance increase depending on how spread out the requested hashes
+	// are by reducing the number of file open/closes and random accesses
+	// needed. The fetchList is intentionally allocated with a cap because
+	// some of the regions might be fetched from the pending blocks and
+	// hence there is no need to fetch those from disk.
+	blockRegions := make([][]byte, len(regions))
+	fetchList := make([]bulkFetchData, 0, len(regions))
+	for i := range regions {
+		region := &regions[i]
+
+		// When the block is pending to be written on commit grab the
+		// bytes from there.
+		if tx.pendingBlocks != nil {
+			regionBytes, err := tx.fetchPendingRegion(region)
+			if err != nil {
+				return nil, err
+			}
+			if regionBytes != nil {
+				blockRegions[i] = regionBytes
+				continue
+			}
+		}
+
+		// Lookup the location of the block in the files from the block
+		// index.
+		blockRow, err := tx.fetchBlockRow(region.Hash)
+		if err != nil {
+			return nil, err
+		}
+		location := deserializeBlockLoc(blockRow)
+
+		// Ensure the region is within the bounds of the block.
+		endOffset := region.Offset + region.Len
+		if endOffset < region.Offset || endOffset > location.blockLen {
+			str := fmt.Sprintf("block %s region offset %d, length "+
+				"%d exceeds block length of %d", region.Hash,
+				region.Offset, region.Len, location.blockLen)
+			return nil, makeDbErr(database.ErrBlockRegionInvalid, str, nil)
+		}
+
+		fetchList = append(fetchList, bulkFetchData{&location, i})
+	}
+	sort.Sort(bulkFetchDataSorter(fetchList))
+
+	// Read all of the regions in the fetch list and set the results.
+	for i := range fetchList {
+		fetchData := &fetchList[i]
+		ri := fetchData.replyIndex
+		region := &regions[ri]
+		location := fetchData.blockLocation
+		regionBytes, err := tx.db.store.readBlockRegion(*location,
+			region.Offset, region.Len)
+		if err != nil {
+			return nil, err
+		}
+		blockRegions[ri] = regionBytes
+	}
+
+	return blockRegions, nil
+}
+
+// close marks the transaction closed then releases any pending data, the
+// underlying snapshot, and the transaction read lock.
+func (tx *transaction) close() {
+	tx.closed = true
+
+	// Clear pending blocks that would have been written on commit.
+	tx.pendingBlocks = nil
+	tx.pendingBlockData = nil
+
+	// Clear pending keys that would have been written or deleted on commit.
+	tx.pendingKeys.Reset()
+	tx.pendingRemove = nil
+
+	// Release the snapshot.
+	if tx.snapshot != nil {
+		tx.snapshot.Release()
+		tx.snapshot = nil
+	}
+
+	tx.db.closeLock.RUnlock()
+
+	// Release the writer lock for writable transactions to unblock any
+	// other write transactions which are possibly waiting.
+	if tx.writable {
+		tx.db.writeLock.Unlock()
+	}
+}
+
+// serializeBlockRow serializes a block row into a format suitable for storage
+// into the block index.
+func serializeBlockRow(blockLoc blockLocation, blockHdr []byte) []byte {
+	// The serialized block index row format is:
+	//
+	//   [0:blockLocSize]                          Block location
+	//   [blockLocSize:blockLocSize+blockHdrSize]  Block header
+	serializedRow := make([]byte, blockLocSize+blockHdrSize)
+	copy(serializedRow, serializeBlockLoc(blockLoc))
+	copy(serializedRow[blockHdrOffset:], blockHdr)
+	return serializedRow
+}
+
+// writePendingAndCommit writes pending block data to the flat block files,
+// updates the metadata with their locations as well as the new current write
+// location, and commits the metadata to the underlying database. It also
+// properly handles rollback in the case of failures.
+//
+// This function MUST only be called when there is pending data to be written.
+func (tx *transaction) writePendingAndCommit() error {
+	// Save the current block store write position for potential rollback.
+	// These variables are only updated in this function and there can
+	// only be one write transaction active at a time, so it's safe to store
+	// them for potential rollback.
+	wc := tx.db.store.writeCursor
+	wc.RLock()
+	oldBlkFileNum := wc.curFileNum
+	oldBlkOffset := wc.curOffset
+	wc.RUnlock()
+
+	// rollback is a closure that is used to roll back all writes to the
+	// block files.
+	rollback := func() {
+		// Rollback any modifications made to the block files if needed.
+		tx.db.store.handleRollback(oldBlkFileNum, oldBlkOffset)
+	}
+
+	// Loop through all of the pending blocks to store and write them.
+	for _, blockData := range tx.pendingBlockData {
+		log.Tracef("Storing block %s", blockData.hash)
+		location, err := tx.db.store.writeBlock(blockData.bytes)
+		if err != nil {
+			rollback()
+			return err
+		}
+
+		// Add a record in the block index for the block. The record
+		// includes the location information needed to locate the block
+		// on the filesystem as well as the block header since they are
+		// so commonly needed.
+		blockHdr := blockData.bytes[0:blockHdrSize]
+		blockRow := serializeBlockRow(location, blockHdr)
+		err = tx.blockIdxBucket.Put(blockData.hash[:], blockRow)
+		if err != nil {
+			rollback()
+			return err
+		}
+	}
+
+	// Update the metadata for the current write file and offset.
+	writeRow := serializeWriteRow(wc.curFileNum, wc.curOffset)
+	if err := tx.metaBucket.Put(writeLocKeyName, writeRow); err != nil {
+		rollback()
+		return convertErr("failed to store write cursor", err)
+	}
+
+	// Perform all leveldb update operations using a batch for atomicity.
+	batch := new(leveldb.Batch)
+	iter := tx.pendingKeys.Iterator(nil, nil)
+	for ok := iter.First(); ok; ok = iter.Next() {
+		batch.Put(iter.Key(), iter.Value())
+	}
+	for k := range tx.pendingRemove {
+		batch.Delete([]byte(k))
+	}
+	if err := tx.db.ldb.Write(batch, nil); err != nil {
+		rollback()
+		return convertErr("failed to commit transaction", err)
+	}
+
+	return nil
+}
+
+// Commit commits all changes that have been made through the root bucket and
+// all of its sub-buckets to persistent storage.
+//
+// This function is part of the database.Tx interface implementation.
+func (tx *transaction) Commit() error {
+	// Prevent commits on managed transactions.
+	if tx.managed {
+		tx.close()
+		panic("managed transaction commit not allowed")
+	}
+
+	// Ensure transaction state is valid.
+ if err := tx.checkClosed(); err != nil { + return err + } + + // Regardless of whether the commit succeeds, the transaction is closed + // on return. + defer tx.close() + + // Ensure the transaction is writable. + if !tx.writable { + str := "Commit requires a writable database transaction" + return makeDbErr(database.ErrTxNotWritable, str, nil) + } + + // Write pending data. The function will rollback if any errors occur. + return tx.writePendingAndCommit() +} + +// Rollback undoes all changes that have been made to the root bucket and all of +// its sub-buckets. +// +// This function is part of the database.Tx interface implementation. +func (tx *transaction) Rollback() error { + // Prevent rollbacks on managed transactions. + if tx.managed { + tx.close() + panic("managed transaction rollback not allowed") + } + + // Ensure transaction state is valid. + if err := tx.checkClosed(); err != nil { + return err + } + + tx.close() + return nil +} + +// db represents a collection of namespaces which are persisted and implements +// the database.DB interface. All database access is performed through +// transactions which are obtained through the specific Namespace. +type db struct { + writeLock sync.Mutex // Limit to one write transaction at a time. + closeLock sync.RWMutex // Make database close block while txns active. + closed bool // Is the database closed? + ldb *leveldb.DB // The underlying leveldb DB for metadata. + store *blockStore // Handles read/writing blocks to flat files. +} + +// Enforce db implements the database.DB interface. +var _ database.DB = (*db)(nil) + +// Type returns the database driver type the current database instance was +// created with. +// +// This function is part of the database.DB interface implementation. +func (db *db) Type() string { + return dbType +} + +// begin is the implementation function for the Begin database method. See its +// documentation for more details. +// +// This function is only separate because it returns the internal transaction +// which is used by the managed transaction code while the database method +// returns the interface. +func (db *db) begin(writable bool) (*transaction, error) { + // Whenever a new writable transaction is started, grab the write lock + // to ensure only a single write transaction can be active at the same + // time. This lock will not be released until the transaction is + // closed (via Rollback or Commit). + if writable { + db.writeLock.Lock() + } + + // Whenever a new transaction is started, grab a read lock against the + // database to ensure Close will wait for the transaction to finish. + // This lock will not be released until the transaction is closed (via + // Rollback or Commit). + db.closeLock.RLock() + if db.closed { + db.closeLock.RUnlock() + if writable { + db.writeLock.Unlock() + } + return nil, makeDbErr(database.ErrDbNotOpen, errDbNotOpenStr, + nil) + } + + snapshot, err := db.ldb.GetSnapshot() + if err != nil { + db.closeLock.RUnlock() + if writable { + db.writeLock.Unlock() + } + + str := "failed to open transaction" + return nil, convertErr(str, err) + } + + // The metadata and block index buckets are internal-only buckets, so + // they have defined IDs. 
+	tx := &transaction{
+		writable:    writable,
+		db:          db,
+		snapshot:    snapshot,
+		pendingKeys: treap.New(),
+	}
+	tx.metaBucket = &bucket{tx: tx, id: metadataBucketID}
+	tx.blockIdxBucket = &bucket{tx: tx, id: blockIdxBucketID}
+	return tx, nil
+}
+
+// Begin starts a transaction which is either read-only or read-write depending
+// on the specified flag. Multiple read-only transactions can be started
+// simultaneously while only a single read-write transaction can be started at a
+// time. The call will block when starting a read-write transaction when one is
+// already open.
+//
+// NOTE: The transaction must be closed by calling Rollback or Commit on it when
+// it is no longer needed. Failure to do so will result in unclaimed memory.
+//
+// This function is part of the database.DB interface implementation.
+func (db *db) Begin(writable bool) (database.Tx, error) {
+	return db.begin(writable)
+}
+
+// rollbackOnPanic rolls the passed transaction back if the code in the calling
+// function panics. This is needed since the mutex on a transaction must be
+// released and a panic in called code would prevent that from happening.
+//
+// NOTE: This can only be handled manually for managed transactions since they
+// control the life-cycle of the transaction. As the documentation on Begin
+// calls out, callers opting to use manual transactions will have to ensure the
+// transaction is rolled back on panic if they desire that functionality as
+// well, or the database will fail to close since the read-lock will never be
+// released.
+func rollbackOnPanic(tx *transaction) {
+	if err := recover(); err != nil {
+		tx.managed = false
+		_ = tx.Rollback()
+		panic(err)
+	}
+}
+
+// View invokes the passed function in the context of a managed read-only
+// transaction with the root bucket for the namespace. Any errors returned from
+// the user-supplied function are returned from this function.
+//
+// This function is part of the database.DB interface implementation.
+func (db *db) View(fn func(database.Tx) error) error {
+	// Start a read-only transaction.
+	tx, err := db.begin(false)
+	if err != nil {
+		return err
+	}
+
+	// Since the user-provided function might panic, ensure the transaction
+	// releases all mutexes and resources. There is no guarantee the caller
+	// won't use recover and keep going. Thus, the database must still be
+	// in a usable state on panics due to user issues.
+	defer rollbackOnPanic(tx)
+
+	tx.managed = true
+	err = fn(tx)
+	tx.managed = false
+	if err != nil {
+		// The error is ignored here because nothing was written yet
+		// and regardless of a rollback failure, the tx is closed now
+		// anyways.
+		_ = tx.Rollback()
+		return err
+	}
+
+	return tx.Rollback()
+}
+
+// Update invokes the passed function in the context of a managed read-write
+// transaction with the root bucket for the namespace. Any errors returned from
+// the user-supplied function will cause the transaction to be rolled back and
+// are returned from this function. Otherwise, the transaction is committed
+// when the user-supplied function returns a nil error.
+//
+// This function is part of the database.DB interface implementation.
+func (db *db) Update(fn func(database.Tx) error) error {
+	// Start a read-write transaction.
+	tx, err := db.begin(true)
+	if err != nil {
+		return err
+	}
+
+	// Since the user-provided function might panic, ensure the transaction
+	// releases all mutexes and resources. There is no guarantee the caller
+	// won't use recover and keep going. Thus, the database must still be
+	// in a usable state on panics due to user issues.
+	defer rollbackOnPanic(tx)
+
+	tx.managed = true
+	err = fn(tx)
+	tx.managed = false
+	if err != nil {
+		// The error is ignored here because nothing was written yet
+		// and regardless of a rollback failure, the tx is closed now
+		// anyways.
+		_ = tx.Rollback()
+		return err
+	}
+
+	return tx.Commit()
+}
+
+// Close cleanly shuts down the database and syncs all data. Any data in
+// database transactions which have not been committed will be lost, so it is
+// important to ensure all transactions are finalized prior to calling this
+// function if that data is intended to be stored.
+//
+// This function is part of the database.DB interface implementation.
+func (db *db) Close() error {
+	// Since all transactions have a read lock on this mutex, this will
+	// cause Close to wait for all readers to complete.
+	db.closeLock.Lock()
+	defer db.closeLock.Unlock()
+
+	if db.closed {
+		return makeDbErr(database.ErrDbNotOpen, errDbNotOpenStr, nil)
+	}
+	db.closed = true
+
+	// NOTE: Since the above lock waits for all transactions to finish and
+	// prevents any new ones from being started, it is safe to clear all
+	// state without the individual locks.
+
+	// Close any open flat files that house the blocks.
+	wc := db.store.writeCursor
+	if wc.curFile.file != nil {
+		_ = wc.curFile.file.Close()
+		wc.curFile.file = nil
+	}
+	for _, blockFile := range db.store.openBlockFiles {
+		_ = blockFile.file.Close()
+	}
+	db.store.openBlockFiles = nil
+	db.store.openBlocksLRU.Init()
+	db.store.fileNumToLRUElem = nil
+
+	if err := db.ldb.Close(); err != nil {
+		str := "failed to close underlying leveldb database"
+		return convertErr(str, err)
+	}
+
+	return nil
+}
+
+// fileExists reports whether the named file or directory exists.
+func fileExists(name string) bool {
+	if _, err := os.Stat(name); err != nil {
+		if os.IsNotExist(err) {
+			return false
+		}
+	}
+	return true
+}
+
+// initDB creates the initial buckets and values used by the package. This is
+// mainly in a separate function for testing purposes.
+func initDB(ldb *leveldb.DB) error {
+	// The starting block file write cursor location is file num 0, offset
+	// 0.
+	batch := new(leveldb.Batch)
+	batch.Put(bucketizedKey(metadataBucketID, writeLocKeyName),
+		serializeWriteRow(0, 0))
+
+	// Create block index bucket and set the current bucket id.
+	//
+	// NOTE: Since buckets are virtualized through the use of prefixes,
+	// there is no need to store the bucket index data for the metadata
+	// bucket in the database. However, the first bucket ID to use does
+	// need to account for it to ensure there are no key collisions.
+	batch.Put(bucketIndexKey(metadataBucketID, blockIdxBucketName),
+		blockIdxBucketID[:])
+	batch.Put(curBucketIDKeyName, blockIdxBucketID[:])
+
+	// Write everything as a single batch.
+	if err := ldb.Write(batch, nil); err != nil {
+		str := fmt.Sprintf("failed to initialize metadata database: %v",
+			err)
+		return convertErr(str, err)
+	}
+
+	return nil
+}
+
+// openDB opens the database at the provided path. database.ErrDbDoesNotExist
+// is returned if the database doesn't exist and the create flag is not set.
+func openDB(dbPath string, network wire.BitcoinNet, create bool) (database.DB, error) {
+	// Error if the database doesn't exist and the create flag is not set.
+	metadataDbPath := filepath.Join(dbPath, metadataDbName)
+	dbExists := fileExists(metadataDbPath)
+	if !create && !dbExists {
+		str := fmt.Sprintf("database %q does not exist", metadataDbPath)
+		return nil, makeDbErr(database.ErrDbDoesNotExist, str, nil)
+	}
+
+	// Ensure the full path to the database exists.
+	if !dbExists {
+		// The error can be ignored here since the call to
+		// leveldb.OpenFile will fail if the directory couldn't be
+		// created.
+		_ = os.MkdirAll(dbPath, 0700)
+	}
+
+	// Open the metadata database (will create it if needed).
+	opts := opt.Options{
+		ErrorIfExist: create,
+		Strict:       opt.DefaultStrict,
+		Filter:       filter.NewBloomFilter(10),
+	}
+	ldb, err := leveldb.OpenFile(metadataDbPath, &opts)
+	if err != nil {
+		return nil, convertErr(err.Error(), err)
+	}
+
+	// Create the block store which includes scanning the existing flat
+	// block files to find what the current write cursor position is
+	// according to the data that is actually on disk.
+	store := newBlockStore(dbPath, network)
+	pdb := &db{ldb: ldb, store: store}
+
+	// Perform any reconciliation needed between the block and metadata as
+	// well as database initialization, if needed.
+	return reconcileDB(pdb, create)
+}
diff --git a/database2/ffldb/doc.go b/database2/ffldb/doc.go
new file mode 100644
index 00000000000..246ee247754
--- /dev/null
+++ b/database2/ffldb/doc.go
@@ -0,0 +1,29 @@
+// Copyright (c) 2015 The btcsuite developers
+// Use of this source code is governed by an ISC
+// license that can be found in the LICENSE file.
+
+/*
+Package ffldb implements a driver for the database package that uses leveldb
+for the backing metadata and flat files for block storage.
+
+This driver is the recommended driver for use with btcd. It makes use of
+leveldb for the metadata, flat files for block storage, and checksums in key
+areas to ensure data integrity.
+
+Usage
+
+This package is a driver to the database package and provides the database type
+of "ffldb". The parameters the Open and Create functions take are the
+database path as a string and the block network:
+
+	db, err := database.Open("ffldb", "path/to/database", wire.MainNet)
+	if err != nil {
+		// Handle error
+	}
+
+	db, err := database.Create("ffldb", "path/to/database", wire.MainNet)
+	if err != nil {
+		// Handle error
+	}
+*/
+package ffldb
diff --git a/database2/ffldb/driver.go b/database2/ffldb/driver.go
new file mode 100644
index 00000000000..38aed212dea
--- /dev/null
+++ b/database2/ffldb/driver.go
@@ -0,0 +1,84 @@
+// Copyright (c) 2015 The btcsuite developers
+// Use of this source code is governed by an ISC
+// license that can be found in the LICENSE file.
+
+package ffldb
+
+import (
+	"fmt"
+
+	database "github.com/btcsuite/btcd/database2"
+	"github.com/btcsuite/btcd/wire"
+	"github.com/btcsuite/btclog"
+)
+
+var log = btclog.Disabled
+
+const (
+	dbType = "ffldb"
+)
+
+// parseArgs parses the arguments from the database Open/Create methods.
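+// For example, a call of the (hypothetical) form parseArgs("Open",
+// "path/to/database", wire.MainNet) would yield the path and network with a
+// nil error, while any other argument count or types produce a descriptive
+// error.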
+func parseArgs(funcName string, args ...interface{}) (string, wire.BitcoinNet, error) {
+	if len(args) != 2 {
+		return "", 0, fmt.Errorf("invalid arguments to %s.%s -- "+
+			"expected database path and block network", dbType,
+			funcName)
+	}
+
+	dbPath, ok := args[0].(string)
+	if !ok {
+		return "", 0, fmt.Errorf("first argument to %s.%s is invalid -- "+
+			"expected database path string", dbType, funcName)
+	}
+
+	network, ok := args[1].(wire.BitcoinNet)
+	if !ok {
+		return "", 0, fmt.Errorf("second argument to %s.%s is invalid -- "+
+			"expected block network", dbType, funcName)
+	}
+
+	return dbPath, network, nil
+}
+
+// openDBDriver is the callback provided during driver registration that opens
+// an existing database for use.
+func openDBDriver(args ...interface{}) (database.DB, error) {
+	dbPath, network, err := parseArgs("Open", args...)
+	if err != nil {
+		return nil, err
+	}
+
+	return openDB(dbPath, network, false)
+}
+
+// createDBDriver is the callback provided during driver registration that
+// creates, initializes, and opens a database for use.
+func createDBDriver(args ...interface{}) (database.DB, error) {
+	dbPath, network, err := parseArgs("Create", args...)
+	if err != nil {
+		return nil, err
+	}
+
+	return openDB(dbPath, network, true)
+}
+
+// useLogger is the callback provided during driver registration that sets the
+// current logger to the provided one.
+func useLogger(logger btclog.Logger) {
+	log = logger
+}
+
+func init() {
+	// Register the driver.
+	driver := database.Driver{
+		DbType:    dbType,
+		Create:    createDBDriver,
+		Open:      openDBDriver,
+		UseLogger: useLogger,
+	}
+	if err := database.RegisterDriver(driver); err != nil {
+		panic(fmt.Sprintf("Failed to register database driver '%s': %v",
+			dbType, err))
+	}
+}
diff --git a/database2/ffldb/driver_test.go b/database2/ffldb/driver_test.go
new file mode 100644
index 00000000000..e76a1cb7618
--- /dev/null
+++ b/database2/ffldb/driver_test.go
@@ -0,0 +1,288 @@
+// Copyright (c) 2015 The btcsuite developers
+// Use of this source code is governed by an ISC
+// license that can be found in the LICENSE file.
+
+package ffldb_test
+
+import (
+	"fmt"
+	"os"
+	"path/filepath"
+	"reflect"
+	"runtime"
+	"testing"
+
+	"github.com/btcsuite/btcd/chaincfg"
+	database "github.com/btcsuite/btcd/database2"
+	"github.com/btcsuite/btcd/database2/ffldb"
+	"github.com/btcsuite/btcutil"
+)
+
+// dbType is the database type name for this driver.
+const dbType = "ffldb"
+
+// TestCreateOpenFail ensures that errors related to creating and opening a
+// database are handled properly.
+func TestCreateOpenFail(t *testing.T) {
+	t.Parallel()
+
+	// Ensure that attempting to open a database that doesn't exist returns
+	// the expected error.
+	wantErrCode := database.ErrDbDoesNotExist
+	_, err := database.Open(dbType, "noexist", blockDataNet)
+	if !checkDbError(t, "Open", err, wantErrCode) {
+		return
+	}
+
+	// Ensure that attempting to open a database with the wrong number of
+	// parameters returns the expected error.
+	wantErr := fmt.Errorf("invalid arguments to %s.Open -- expected "+
+		"database path and block network", dbType)
+	_, err = database.Open(dbType, 1, 2, 3)
+	if err.Error() != wantErr.Error() {
+		t.Errorf("Open: did not receive expected error - got %v, "+
+			"want %v", err, wantErr)
+		return
+	}
+
+	// Ensure that attempting to open a database with an invalid type for
+	// the first parameter returns the expected error.
+ wantErr = fmt.Errorf("first argument to %s.Open is invalid -- "+ + "expected database path string", dbType) + _, err = database.Open(dbType, 1, blockDataNet) + if err.Error() != wantErr.Error() { + t.Errorf("Open: did not receive expected error - got %v, "+ + "want %v", err, wantErr) + return + } + + // Ensure that attempting to open a database with an invalid type for + // the second parameter returns the expected error. + wantErr = fmt.Errorf("second argument to %s.Open is invalid -- "+ + "expected block network", dbType) + _, err = database.Open(dbType, "noexist", "invalid") + if err.Error() != wantErr.Error() { + t.Errorf("Open: did not receive expected error - got %v, "+ + "want %v", err, wantErr) + return + } + + // Ensure that attempting to create a database with the wrong number of + // parameters returns the expected error. + wantErr = fmt.Errorf("invalid arguments to %s.Create -- expected "+ + "database path and block network", dbType) + _, err = database.Create(dbType, 1, 2, 3) + if err.Error() != wantErr.Error() { + t.Errorf("Create: did not receive expected error - got %v, "+ + "want %v", err, wantErr) + return + } + + // Ensure that attempting to create a database with an invalid type for + // the first parameter returns the expected error. + wantErr = fmt.Errorf("first argument to %s.Create is invalid -- "+ + "expected database path string", dbType) + _, err = database.Create(dbType, 1, blockDataNet) + if err.Error() != wantErr.Error() { + t.Errorf("Create: did not receive expected error - got %v, "+ + "want %v", err, wantErr) + return + } + + // Ensure that attempting to create a database with an invalid type for + // the second parameter returns the expected error. + wantErr = fmt.Errorf("second argument to %s.Create is invalid -- "+ + "expected block network", dbType) + _, err = database.Create(dbType, "noexist", "invalid") + if err.Error() != wantErr.Error() { + t.Errorf("Create: did not receive expected error - got %v, "+ + "want %v", err, wantErr) + return + } + + // Ensure operations against a closed database return the expected + // error. + dbPath := filepath.Join(os.TempDir(), "ffldb-createfail") + _ = os.RemoveAll(dbPath) + db, err := database.Create(dbType, dbPath, blockDataNet) + if err != nil { + t.Errorf("Create: unexpected error: %v", err) + return + } + defer os.RemoveAll(dbPath) + db.Close() + + wantErrCode = database.ErrDbNotOpen + err = db.View(func(tx database.Tx) error { + return nil + }) + if !checkDbError(t, "View", err, wantErrCode) { + return + } + + wantErrCode = database.ErrDbNotOpen + err = db.Update(func(tx database.Tx) error { + return nil + }) + if !checkDbError(t, "Update", err, wantErrCode) { + return + } + + wantErrCode = database.ErrDbNotOpen + _, err = db.Begin(false) + if !checkDbError(t, "Begin(false)", err, wantErrCode) { + return + } + + wantErrCode = database.ErrDbNotOpen + _, err = db.Begin(true) + if !checkDbError(t, "Begin(true)", err, wantErrCode) { + return + } + + wantErrCode = database.ErrDbNotOpen + err = db.Close() + if !checkDbError(t, "Close", err, wantErrCode) { + return + } +} + +// TestPersistence ensures that values stored are still valid after closing and +// reopening the database. +func TestPersistence(t *testing.T) { + t.Parallel() + + // Create a new database to run tests against. 
+	dbPath := filepath.Join(os.TempDir(), "ffldb-persistencetest")
+	_ = os.RemoveAll(dbPath)
+	db, err := database.Create(dbType, dbPath, blockDataNet)
+	if err != nil {
+		t.Errorf("Failed to create test database (%s) %v", dbType, err)
+		return
+	}
+	defer os.RemoveAll(dbPath)
+	defer db.Close()
+
+	// Create a bucket, put some values into it, and store a block so they
+	// can be tested for existence on re-open.
+	bucket1Key := []byte("bucket1")
+	storeValues := map[string]string{
+		"b1key1": "foo1",
+		"b1key2": "foo2",
+		"b1key3": "foo3",
+	}
+	genesisBlock := btcutil.NewBlock(chaincfg.MainNetParams.GenesisBlock)
+	genesisHash := chaincfg.MainNetParams.GenesisHash
+	err = db.Update(func(tx database.Tx) error {
+		metadataBucket := tx.Metadata()
+		if metadataBucket == nil {
+			return fmt.Errorf("Metadata: unexpected nil bucket")
+		}
+
+		bucket1, err := metadataBucket.CreateBucket(bucket1Key)
+		if err != nil {
+			return fmt.Errorf("CreateBucket: unexpected error: %v",
+				err)
+		}
+
+		for k, v := range storeValues {
+			err := bucket1.Put([]byte(k), []byte(v))
+			if err != nil {
+				return fmt.Errorf("Put: unexpected error: %v",
+					err)
+			}
+		}
+
+		if err := tx.StoreBlock(genesisBlock); err != nil {
+			return fmt.Errorf("StoreBlock: unexpected error: %v",
+				err)
+		}
+
+		return nil
+	})
+	if err != nil {
+		t.Errorf("Update: unexpected error: %v", err)
+		return
+	}
+
+	// Close and reopen the database to ensure the values persist.
+	db.Close()
+	db, err = database.Open(dbType, dbPath, blockDataNet)
+	if err != nil {
+		t.Errorf("Failed to open test database (%s) %v", dbType, err)
+		return
+	}
+	defer db.Close()
+
+	// Ensure the values previously stored in the bucket still exist and
+	// are correct.
+	err = db.View(func(tx database.Tx) error {
+		metadataBucket := tx.Metadata()
+		if metadataBucket == nil {
+			return fmt.Errorf("Metadata: unexpected nil bucket")
+		}
+
+		bucket1 := metadataBucket.Bucket(bucket1Key)
+		if bucket1 == nil {
+			return fmt.Errorf("Bucket1: unexpected nil bucket")
+		}
+
+		for k, v := range storeValues {
+			gotVal := bucket1.Get([]byte(k))
+			if !reflect.DeepEqual(gotVal, []byte(v)) {
+				return fmt.Errorf("Get: key '%s' does not "+
+					"match expected value - got %s, want %s",
+					k, gotVal, v)
+			}
+		}
+
+		genesisBlockBytes, _ := genesisBlock.Bytes()
+		gotBytes, err := tx.FetchBlock(genesisHash)
+		if err != nil {
+			return fmt.Errorf("FetchBlock: unexpected error: %v",
+				err)
+		}
+		if !reflect.DeepEqual(gotBytes, genesisBlockBytes) {
+			return fmt.Errorf("FetchBlock: stored block mismatch")
+		}
+
+		return nil
+	})
+	if err != nil {
+		t.Errorf("View: unexpected error: %v", err)
+		return
+	}
+}
+
+// TestInterface performs all interface tests for this database driver.
+func TestInterface(t *testing.T) {
+	t.Parallel()
+
+	// Create a new database to run tests against.
+	dbPath := filepath.Join(os.TempDir(), "ffldb-interfacetest")
+	_ = os.RemoveAll(dbPath)
+	db, err := database.Create(dbType, dbPath, blockDataNet)
+	if err != nil {
+		t.Errorf("Failed to create test database (%s) %v", dbType, err)
+		return
+	}
+	defer os.RemoveAll(dbPath)
+	defer db.Close()
+
+	// Ensure the driver type is the expected value.
+	gotDbType := db.Type()
+	if gotDbType != dbType {
+		t.Errorf("Type: unexpected driver type - got %v, want %v",
			gotDbType, dbType)
+		return
+	}
+
+	// Run all of the interface tests against the database.
+	runtime.GOMAXPROCS(runtime.NumCPU())
+
+	// Change the maximum file size to a small value to force multiple flat
+	// files with the test data set.
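+	// The 2048-byte limit below is arbitrary; it only needs to be smaller
+	// than the test blocks so that writes roll over into new files and
+	// exercise the multi-file read and write paths.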
+	ffldb.TstRunWithMaxBlockFileSize(db, 2048, func() {
+		testInterface(t, db)
+	})
+}
diff --git a/database2/ffldb/export_test.go b/database2/ffldb/export_test.go
new file mode 100644
index 00000000000..fd66bc571b9
--- /dev/null
+++ b/database2/ffldb/export_test.go
@@ -0,0 +1,26 @@
+// Copyright (c) 2015 The btcsuite developers
+// Use of this source code is governed by an ISC
+// license that can be found in the LICENSE file.
+
+/*
+This test file is part of the ffldb package rather than the ffldb_test
+package so it can bridge access to the internals to properly test cases which
+are either not possible or can't reliably be tested via the public interface.
+The functions are only exported while the tests are being run.
+*/
+
+package ffldb
+
+import database "github.com/btcsuite/btcd/database2"
+
+// TstRunWithMaxBlockFileSize runs the passed function with the maximum allowed
+// file size for the database set to the provided value. The value will be set
+// back to the original value upon completion.
+func TstRunWithMaxBlockFileSize(idb database.DB, size uint32, fn func()) {
+	ffldb := idb.(*db)
+	origSize := ffldb.store.maxBlockFileSize
+
+	ffldb.store.maxBlockFileSize = size
+	fn()
+	ffldb.store.maxBlockFileSize = origSize
+}
diff --git a/database2/ffldb/interface_test.go b/database2/ffldb/interface_test.go
new file mode 100644
index 00000000000..eee55ba6c1b
--- /dev/null
+++ b/database2/ffldb/interface_test.go
@@ -0,0 +1,2314 @@
+// Copyright (c) 2015 The btcsuite developers
+// Use of this source code is governed by an ISC
+// license that can be found in the LICENSE file.
+
+// This file is intended to be copied into each backend driver directory. Each
+// driver should have its own driver_test.go file which creates a database and
+// invokes the testInterface function in this file to ensure the driver properly
+// implements the interface.
+//
+// NOTE: When copying this file into the backend driver folder, the package name
+// will need to be changed accordingly.
+
+package ffldb_test
+
+import (
+	"bytes"
+	"compress/bzip2"
+	"encoding/binary"
+	"fmt"
+	"io"
+	"os"
+	"path/filepath"
+	"reflect"
+	"sync/atomic"
+	"testing"
+	"time"
+
+	"github.com/btcsuite/btcd/chaincfg"
+	database "github.com/btcsuite/btcd/database2"
+	"github.com/btcsuite/btcd/wire"
+	"github.com/btcsuite/btcutil"
+)
+
+var (
+	// blockDataNet is the expected network in the test block data.
+	blockDataNet = wire.MainNet
+
+	// blockDataFile is the path to a file containing the first 256 blocks
+	// of the block chain.
+	blockDataFile = filepath.Join("..", "testdata", "blocks1-256.bz2")
+
+	// errSubTestFail is used to signal that a sub test returned false.
+	errSubTestFail = fmt.Errorf("sub test failure")
+)
+
+// loadBlocks loads the blocks contained in the testdata directory and returns
+// a slice of them.
+func loadBlocks(t *testing.T, dataFile string, network wire.BitcoinNet) ([]*btcutil.Block, error) {
+	// Open the file that contains the blocks for reading.
+	fi, err := os.Open(dataFile)
+	if err != nil {
+		t.Errorf("failed to open file %v, err %v", dataFile, err)
+		return nil, err
+	}
+	defer func() {
+		if err := fi.Close(); err != nil {
+			t.Errorf("failed to close file %v %v", dataFile,
+				err)
+		}
+	}()
+	dr := bzip2.NewReader(fi)
+
+	// Set the first block as the genesis block.
+	blocks := make([]*btcutil.Block, 0, 256)
+	genesis := btcutil.NewBlock(chaincfg.MainNetParams.GenesisBlock)
+	blocks = append(blocks, genesis)
+
+	// Load the remaining blocks.
+	for height := 1; ; height++ {
+		var net uint32
+		err := binary.Read(dr, binary.LittleEndian, &net)
+		if err == io.EOF {
+			// Hit end of file at the expected offset. No error.
+			break
+		}
+		if err != nil {
+			t.Errorf("Failed to load network type for block %d: %v",
+				height, err)
+			return nil, err
+		}
+		if net != uint32(network) {
+			t.Errorf("Block doesn't match network: %v expects %v",
+				net, network)
+			return nil, fmt.Errorf("block network mismatch")
+		}
+
+		var blockLen uint32
+		err = binary.Read(dr, binary.LittleEndian, &blockLen)
+		if err != nil {
+			t.Errorf("Failed to load block size for block %d: %v",
+				height, err)
+			return nil, err
+		}
+
+		// Read the block.
+		blockBytes := make([]byte, blockLen)
+		_, err = io.ReadFull(dr, blockBytes)
+		if err != nil {
+			t.Errorf("Failed to load block %d: %v", height, err)
+			return nil, err
+		}
+
+		// Deserialize and store the block.
+		block, err := btcutil.NewBlockFromBytes(blockBytes)
+		if err != nil {
+			t.Errorf("Failed to parse block %v: %v", height, err)
+			return nil, err
+		}
+		blocks = append(blocks, block)
+	}
+
+	return blocks, nil
+}
+
+// checkDbError ensures the passed error is a database.Error with an error code
+// that matches the passed error code.
+func checkDbError(t *testing.T, testName string, gotErr error, wantErrCode database.ErrorCode) bool {
+	dbErr, ok := gotErr.(database.Error)
+	if !ok {
+		t.Errorf("%s: unexpected error type - got %T, want %T",
+			testName, gotErr, database.Error{})
+		return false
+	}
+	if dbErr.ErrorCode != wantErrCode {
+		t.Errorf("%s: unexpected error code - got %s (%s), want %s",
+			testName, dbErr.ErrorCode, dbErr.Description,
+			wantErrCode)
+		return false
+	}
+
+	return true
+}
+
+// testContext is used to store context information about a running test which
+// is passed into helper functions.
+type testContext struct {
+	t           *testing.T
+	db          database.DB
+	bucketDepth int
+	isWritable  bool
+	blocks      []*btcutil.Block
+}
+
+// keyPair houses a key/value pair. It is used over maps so ordering can be
+// maintained.
+type keyPair struct {
+	key   string
+	value string
+}
+
+// lookupKey is a convenience method to look up the requested key from the
+// provided keypair slice along with whether or not the key was found.
+func lookupKey(key string, values []keyPair) (string, bool) {
+	for _, item := range values {
+		if item.key == key {
+			return item.value, true
+		}
+	}
+
+	return "", false
+}
+
+// rollbackValues returns a copy of the provided keypairs with all values set to
+// an empty string. This is used to test that values are properly rolled back.
+func rollbackValues(values []keyPair) []keyPair {
+	ret := make([]keyPair, len(values))
+	copy(ret, values)
+	for i := range ret {
+		ret[i].value = ""
+	}
+	return ret
+}
+
+// testCursorKeyPair checks that the provided key and value match the expected
+// keypair at the provided index. It also ensures the index is in range for the
+// provided slice of expected keypairs.
+func testCursorKeyPair(tc *testContext, k, v []byte, index int, values []keyPair) bool {
+	if index >= len(values) || index < 0 {
+		tc.t.Errorf("Cursor: exceeded the expected range of values - "+
+			"index %d, num values %d", index, len(values))
+		return false
+	}
+
+	pair := &values[index]
+	kString := string(k)
+	if kString != pair.key {
+		tc.t.Errorf("Mismatched cursor key: index %d does not match "+
+			"the expected key - got %q, want %q", index, kString,
+			pair.key)
+		return false
+	}
+	vString := string(v)
+	if vString != pair.value {
+		tc.t.Errorf("Mismatched cursor value: index %d does not match "+
+			"the expected value - got %q, want %q", index,
+			vString, pair.value)
+		return false
+	}
+
+	return true
+}
+
+// testGetValues checks that all of the provided key/value pairs can be
+// retrieved from the database and the retrieved values match the provided
+// values.
+func testGetValues(tc *testContext, bucket database.Bucket, values []keyPair) bool {
+	for _, item := range values {
+		var vBytes []byte
+		if item.value != "" {
+			vBytes = []byte(item.value)
+		}
+
+		gotValue := bucket.Get([]byte(item.key))
+		if !reflect.DeepEqual(gotValue, vBytes) {
+			tc.t.Errorf("Get: unexpected value - got %s, want %s",
+				gotValue, vBytes)
+			return false
+		}
+	}
+
+	return true
+}
+
+// testPutValues stores all of the provided key/value pairs in the provided
+// bucket while checking for errors.
+func testPutValues(tc *testContext, bucket database.Bucket, values []keyPair) bool {
+	for _, item := range values {
+		var vBytes []byte
+		if item.value != "" {
+			vBytes = []byte(item.value)
+		}
+		if err := bucket.Put([]byte(item.key), vBytes); err != nil {
+			tc.t.Errorf("Put: unexpected error: %v", err)
+			return false
+		}
+	}
+
+	return true
+}
+
+// testDeleteValues removes all of the provided key/value pairs from the
+// provided bucket.
+func testDeleteValues(tc *testContext, bucket database.Bucket, values []keyPair) bool {
+	for _, item := range values {
+		if err := bucket.Delete([]byte(item.key)); err != nil {
+			tc.t.Errorf("Delete: unexpected error: %v", err)
+			return false
+		}
+	}
+
+	return true
+}
+
+// testCursorInterface ensures the cursor interface is working properly by
+// exercising all of its functions on the passed bucket.
+func testCursorInterface(tc *testContext, bucket database.Bucket) bool {
+	// Ensure a cursor can be obtained for the bucket.
+	cursor := bucket.Cursor()
+	if cursor == nil {
+		tc.t.Error("Bucket.Cursor: unexpected nil cursor returned")
+		return false
+	}
+
+	// Ensure the cursor returns the same bucket it was created for.
+	if cursor.Bucket() != bucket {
+		tc.t.Error("Cursor.Bucket: does not match the bucket it was " +
+			"created for")
+		return false
+	}
+
+	if tc.isWritable {
+		unsortedValues := []keyPair{
+			{"cursor", "val1"},
+			{"abcd", "val1"},
+			{"bcd", "val1"},
+		}
+		sortedValues := []keyPair{
+			{"abcd", "val1"},
+			{"bcd", "val1"},
+			{"cursor", "val1"},
+		}
+
+		// Store the values to be used in the cursor tests in unsorted
+		// order and ensure they were actually stored.
+		if !testPutValues(tc, bucket, unsortedValues) {
+			return false
+		}
+		if !testGetValues(tc, bucket, unsortedValues) {
+			return false
+		}
+
+		// Ensure the cursor returns all items in byte-sorted order when
+		// iterating forward.
+		curIdx := 0
+		for ok := cursor.First(); ok; ok = cursor.Next() {
+			k, v := cursor.Key(), cursor.Value()
+			if !testCursorKeyPair(tc, k, v, curIdx, sortedValues) {
+				return false
+			}
+			curIdx++
+		}
+		if curIdx != len(unsortedValues) {
+			tc.t.Errorf("Cursor: expected to iterate %d values, "+
+				"but only iterated %d", len(unsortedValues),
+				curIdx)
+			return false
+		}
+
+		// Ensure the cursor returns all items in reverse byte-sorted
+		// order when iterating in reverse.
+		curIdx = len(sortedValues) - 1
+		for ok := cursor.Last(); ok; ok = cursor.Prev() {
+			k, v := cursor.Key(), cursor.Value()
+			if !testCursorKeyPair(tc, k, v, curIdx, sortedValues) {
+				return false
+			}
+			curIdx--
+		}
+		if curIdx > -1 {
+			tc.t.Errorf("Reverse cursor: expected to iterate %d "+
+				"values, but only iterated %d",
+				len(sortedValues), len(sortedValues)-(curIdx+1))
+			return false
+		}
+
+		// Ensure forward iteration works as expected after seeking.
+		middleIdx := (len(sortedValues) - 1) / 2
+		seekKey := []byte(sortedValues[middleIdx].key)
+		curIdx = middleIdx
+		for ok := cursor.Seek(seekKey); ok; ok = cursor.Next() {
+			k, v := cursor.Key(), cursor.Value()
+			if !testCursorKeyPair(tc, k, v, curIdx, sortedValues) {
+				return false
+			}
+			curIdx++
+		}
+		if curIdx != len(sortedValues) {
+			tc.t.Errorf("Cursor after seek: expected to iterate "+
+				"%d values, but only iterated %d",
+				len(sortedValues)-middleIdx, curIdx-middleIdx)
+			return false
+		}
+
+		// Ensure reverse iteration works as expected after seeking.
+		curIdx = middleIdx
+		for ok := cursor.Seek(seekKey); ok; ok = cursor.Prev() {
+			k, v := cursor.Key(), cursor.Value()
+			if !testCursorKeyPair(tc, k, v, curIdx, sortedValues) {
+				return false
+			}
+			curIdx--
+		}
+		if curIdx > -1 {
+			tc.t.Errorf("Reverse cursor after seek: expected to "+
+				"iterate %d values, but only iterated %d",
+				len(sortedValues)-middleIdx, middleIdx-curIdx)
+			return false
+		}
+
+		// Ensure the cursor deletes items properly.
+		if !cursor.First() {
+			tc.t.Errorf("Cursor.First: no value")
+			return false
+		}
+		k := cursor.Key()
+		if err := cursor.Delete(); err != nil {
+			tc.t.Errorf("Cursor.Delete: unexpected error: %v", err)
+			return false
+		}
+		if val := bucket.Get(k); val != nil {
+			tc.t.Errorf("Cursor.Delete: value for key %q was not "+
+				"deleted", k)
+			return false
+		}
+	}
+
+	return true
+}
+
+// testNestedBucket reruns the testBucketInterface against a nested bucket along
+// with a counter to only test a couple of levels deep.
+func testNestedBucket(tc *testContext, testBucket database.Bucket) bool {
+	// Don't go more than 2 nested levels deep.
+	if tc.bucketDepth > 1 {
+		return true
+	}
+
+	tc.bucketDepth++
+	defer func() {
+		tc.bucketDepth--
+	}()
+	if !testBucketInterface(tc, testBucket) {
+		return false
+	}
+
+	return true
+}
+
+// testBucketInterface ensures the bucket interface is working properly by
+// exercising all of its functions. This includes the cursor interface for the
+// cursor returned from the bucket.
+func testBucketInterface(tc *testContext, bucket database.Bucket) bool {
+	if bucket.Writable() != tc.isWritable {
+		tc.t.Errorf("Bucket writable state does not match.")
+		return false
+	}
+
+	if tc.isWritable {
+		// keyValues holds the keys and values to use when putting
+		// values into the bucket.
+ var keyValues = []keyPair{ + {"bucketkey1", "foo1"}, + {"bucketkey2", "foo2"}, + {"bucketkey3", "foo3"}, + } + if !testPutValues(tc, bucket, keyValues) { + return false + } + + if !testGetValues(tc, bucket, keyValues) { + return false + } + + // Ensure errors returned from the user-supplied ForEach + // function are returned. + forEachError := fmt.Errorf("example foreach error") + err := bucket.ForEach(func(k, v []byte) error { + return forEachError + }) + if err != forEachError { + tc.t.Errorf("ForEach: inner function error not "+ + "returned - got %v, want %v", err, forEachError) + return false + } + + // Iterate all of the keys using ForEach while making sure the + // stored values are the expected values. + keysFound := make(map[string]struct{}, len(keyValues)) + err = bucket.ForEach(func(k, v []byte) error { + kString := string(k) + wantV, found := lookupKey(kString, keyValues) + if !found { + return fmt.Errorf("ForEach: key '%s' should "+ + "exist", kString) + } + + if !reflect.DeepEqual(v, []byte(wantV)) { + return fmt.Errorf("ForEach: value for key '%s' "+ + "does not match - got %s, want %s", + kString, v, wantV) + } + + keysFound[kString] = struct{}{} + return nil + }) + if err != nil { + tc.t.Errorf("%v", err) + return false + } + + // Ensure all keys were iterated. + for _, item := range keyValues { + if _, ok := keysFound[item.key]; !ok { + tc.t.Errorf("ForEach: key '%s' was not iterated "+ + "when it should have been", item.key) + return false + } + } + + // Delete the keys and ensure they were deleted. + if !testDeleteValues(tc, bucket, keyValues) { + return false + } + if !testGetValues(tc, bucket, rollbackValues(keyValues)) { + return false + } + + // Ensure creating a new bucket works as expected. + testBucketName := []byte("testbucket") + testBucket, err := bucket.CreateBucket(testBucketName) + if err != nil { + tc.t.Errorf("CreateBucket: unexpected error: %v", err) + return false + } + if !testNestedBucket(tc, testBucket) { + return false + } + + // Ensure errors returned from the user-supplied ForEachBucket + // function are returned. + err = bucket.ForEachBucket(func(k []byte) error { + return forEachError + }) + if err != forEachError { + tc.t.Errorf("ForEachBucket: inner function error not "+ + "returned - got %v, want %v", err, forEachError) + return false + } + + // Ensure creating a bucket that already exists fails with the + // expected error. + wantErrCode := database.ErrBucketExists + _, err = bucket.CreateBucket(testBucketName) + if !checkDbError(tc.t, "CreateBucket", err, wantErrCode) { + return false + } + + // Ensure CreateBucketIfNotExists returns an existing bucket. + testBucket, err = bucket.CreateBucketIfNotExists(testBucketName) + if err != nil { + tc.t.Errorf("CreateBucketIfNotExists: unexpected "+ + "error: %v", err) + return false + } + if !testNestedBucket(tc, testBucket) { + return false + } + + // Ensure retrieving an existing bucket works as expected. + testBucket = bucket.Bucket(testBucketName) + if !testNestedBucket(tc, testBucket) { + return false + } + + // Ensure deleting a bucket works as intended. + if err := bucket.DeleteBucket(testBucketName); err != nil { + tc.t.Errorf("DeleteBucket: unexpected error: %v", err) + return false + } + if b := bucket.Bucket(testBucketName); b != nil { + tc.t.Errorf("DeleteBucket: bucket '%s' still exists", + testBucketName) + return false + } + + // Ensure deleting a bucket that doesn't exist returns the + // expected error. 
+		wantErrCode = database.ErrBucketNotFound
+		err = bucket.DeleteBucket(testBucketName)
+		if !checkDbError(tc.t, "DeleteBucket", err, wantErrCode) {
+			return false
+		}
+
+		// Ensure CreateBucketIfNotExists creates a new bucket when
+		// it doesn't already exist.
+		testBucket, err = bucket.CreateBucketIfNotExists(testBucketName)
+		if err != nil {
+			tc.t.Errorf("CreateBucketIfNotExists: unexpected "+
+				"error: %v", err)
+			return false
+		}
+		if !testNestedBucket(tc, testBucket) {
+			return false
+		}
+
+		// Ensure the cursor interface works as expected.
+		if !testCursorInterface(tc, testBucket) {
+			return false
+		}
+
+		// Delete the test bucket to avoid leaving it around for future
+		// calls.
+		if err := bucket.DeleteBucket(testBucketName); err != nil {
+			tc.t.Errorf("DeleteBucket: unexpected error: %v", err)
+			return false
+		}
+		if b := bucket.Bucket(testBucketName); b != nil {
+			tc.t.Errorf("DeleteBucket: bucket '%s' still exists",
+				testBucketName)
+			return false
+		}
+	} else {
+		// Put should fail with a bucket that is not writable.
+		testName := "unwritable tx put"
+		wantErrCode := database.ErrTxNotWritable
+		failBytes := []byte("fail")
+		err := bucket.Put(failBytes, failBytes)
+		if !checkDbError(tc.t, testName, err, wantErrCode) {
+			return false
+		}
+
+		// Delete should fail with a bucket that is not writable.
+		testName = "unwritable tx delete"
+		err = bucket.Delete(failBytes)
+		if !checkDbError(tc.t, testName, err, wantErrCode) {
+			return false
+		}
+
+		// CreateBucket should fail with a bucket that is not writable.
+		testName = "unwritable tx create bucket"
+		_, err = bucket.CreateBucket(failBytes)
+		if !checkDbError(tc.t, testName, err, wantErrCode) {
+			return false
+		}
+
+		// CreateBucketIfNotExists should fail with a bucket that is not
+		// writable.
+		testName = "unwritable tx create bucket if not exists"
+		_, err = bucket.CreateBucketIfNotExists(failBytes)
+		if !checkDbError(tc.t, testName, err, wantErrCode) {
+			return false
+		}
+
+		// DeleteBucket should fail with a bucket that is not writable.
+		testName = "unwritable tx delete bucket"
+		err = bucket.DeleteBucket(failBytes)
+		if !checkDbError(tc.t, testName, err, wantErrCode) {
+			return false
+		}
+
+		// Ensure the cursor interface works as expected with read-only
+		// buckets.
+		if !testCursorInterface(tc, bucket) {
+			return false
+		}
+	}
+
+	return true
+}
+
+// rollbackOnPanic rolls the passed transaction back if the code in the calling
+// function panics. This is useful in case the tests unexpectedly panic which
+// would leave any manually created transactions with the database mutex locked
+// thereby leading to a deadlock and masking the real reason for the panic. It
+// also logs a test error and repanics so the original panic can be traced.
+func rollbackOnPanic(t *testing.T, tx database.Tx) {
+	if err := recover(); err != nil {
+		t.Errorf("Unexpected panic: %v", err)
+		_ = tx.Rollback()
+		panic(err)
+	}
+}
+
+// testMetadataManualTxInterface ensures that the manual transaction metadata
+// interface works as expected.
+func testMetadataManualTxInterface(tc *testContext) bool {
+	// populateValues tests that populating values works as expected.
+	//
+	// When the writable flag is false, a read-only transaction is created,
+	// standard bucket tests for read-only transactions are performed, and
+	// the Commit function is checked to ensure it fails as expected.
+	//
+	// Otherwise, a read-write transaction is created, the values are
+	// written, standard bucket tests for read-write transactions are
+	// performed, and then the transaction is either committed or rolled
+	// back depending on the flag.
+	bucket1Name := []byte("bucket1")
+	populateValues := func(writable, rollback bool, putValues []keyPair) bool {
+		tx, err := tc.db.Begin(writable)
+		if err != nil {
+			tc.t.Errorf("Begin: unexpected error %v", err)
+			return false
+		}
+		defer rollbackOnPanic(tc.t, tx)
+
+		metadataBucket := tx.Metadata()
+		if metadataBucket == nil {
+			tc.t.Errorf("Metadata: unexpected nil bucket")
+			_ = tx.Rollback()
+			return false
+		}
+
+		bucket1 := metadataBucket.Bucket(bucket1Name)
+		if bucket1 == nil {
+			tc.t.Errorf("Bucket1: unexpected nil bucket")
+			return false
+		}
+
+		tc.isWritable = writable
+		if !testBucketInterface(tc, bucket1) {
+			_ = tx.Rollback()
+			return false
+		}
+
+		if !writable {
+			// The transaction is not writable, so it should fail
+			// the commit.
+			testName := "unwritable tx commit"
+			wantErrCode := database.ErrTxNotWritable
+			err := tx.Commit()
+			if !checkDbError(tc.t, testName, err, wantErrCode) {
+				_ = tx.Rollback()
+				return false
+			}
+		} else {
+			if !testPutValues(tc, bucket1, putValues) {
+				return false
+			}
+
+			if rollback {
+				// Rollback the transaction.
+				if err := tx.Rollback(); err != nil {
+					tc.t.Errorf("Rollback: unexpected "+
+						"error %v", err)
+					return false
+				}
+			} else {
+				// The commit should succeed.
+				if err := tx.Commit(); err != nil {
+					tc.t.Errorf("Commit: unexpected error "+
+						"%v", err)
+					return false
+				}
+			}
+		}
+
+		return true
+	}
+
+	// checkValues starts a read-only transaction and checks that all of
+	// the key/value pairs specified in the expectedValues parameter match
+	// what's in the database.
+	checkValues := func(expectedValues []keyPair) bool {
+		tx, err := tc.db.Begin(false)
+		if err != nil {
+			tc.t.Errorf("Begin: unexpected error %v", err)
+			return false
+		}
+		defer rollbackOnPanic(tc.t, tx)
+
+		metadataBucket := tx.Metadata()
+		if metadataBucket == nil {
+			tc.t.Errorf("Metadata: unexpected nil bucket")
+			_ = tx.Rollback()
+			return false
+		}
+
+		bucket1 := metadataBucket.Bucket(bucket1Name)
+		if bucket1 == nil {
+			tc.t.Errorf("Bucket1: unexpected nil bucket")
+			return false
+		}
+
+		if !testGetValues(tc, bucket1, expectedValues) {
+			_ = tx.Rollback()
+			return false
+		}
+
+		// Rollback the read-only transaction.
+		if err := tx.Rollback(); err != nil {
+			tc.t.Errorf("Rollback: unexpected error %v", err)
+			return false
+		}
+
+		return true
+	}
+
+	// deleteValues starts a read-write transaction and deletes the keys
+	// in the passed key/value pairs.
+	deleteValues := func(values []keyPair) bool {
+		tx, err := tc.db.Begin(true)
+		if err != nil {
+			tc.t.Errorf("Begin: unexpected error %v", err)
+			return false
+		}
+		defer rollbackOnPanic(tc.t, tx)
+
+		metadataBucket := tx.Metadata()
+		if metadataBucket == nil {
+			tc.t.Errorf("Metadata: unexpected nil bucket")
+			_ = tx.Rollback()
+			return false
+		}
+
+		bucket1 := metadataBucket.Bucket(bucket1Name)
+		if bucket1 == nil {
+			tc.t.Errorf("Bucket1: unexpected nil bucket")
+			return false
+		}
+
+		// Delete the keys and ensure they were deleted.
+		if !testDeleteValues(tc, bucket1, values) {
+			_ = tx.Rollback()
+			return false
+		}
+		if !testGetValues(tc, bucket1, rollbackValues(values)) {
+			_ = tx.Rollback()
+			return false
+		}
+
+		// Commit the changes and ensure it was successful.
+		if err := tx.Commit(); err != nil {
+			tc.t.Errorf("Commit: unexpected error %v", err)
+			return false
+		}
+
+		return true
+	}
+
+	// keyValues holds the keys and values to use when putting values into a
+	// bucket.
+	var keyValues = []keyPair{
+		{"umtxkey1", "foo1"},
+		{"umtxkey2", "foo2"},
+		{"umtxkey3", "foo3"},
+	}
+
+	// Ensure that attempting to populate the values using a read-only
+	// transaction fails as expected.
+	if !populateValues(false, true, keyValues) {
+		return false
+	}
+	if !checkValues(rollbackValues(keyValues)) {
+		return false
+	}
+
+	// Ensure that attempting to populate the values using a read-write
+	// transaction and then rolling it back yields the expected values.
+	if !populateValues(true, true, keyValues) {
+		return false
+	}
+	if !checkValues(rollbackValues(keyValues)) {
+		return false
+	}
+
+	// Ensure that attempting to populate the values using a read-write
+	// transaction and then committing it stores the expected values.
+	if !populateValues(true, false, keyValues) {
+		return false
+	}
+	if !checkValues(keyValues) {
+		return false
+	}
+
+	// Clean up the keys.
+	if !deleteValues(keyValues) {
+		return false
+	}
+
+	return true
+}
+
+// testManagedTxPanics ensures calling Rollback or Commit inside a managed
+// transaction panics.
+func testManagedTxPanics(tc *testContext) bool {
+	testPanic := func(fn func()) (panicked bool) {
+		// Setup a defer to catch the expected panic and update the
+		// return variable.
+		defer func() {
+			if err := recover(); err != nil {
+				panicked = true
+			}
+		}()
+
+		fn()
+		return false
+	}
+
+	// Ensure calling Commit on a managed read-only transaction panics.
+	panicked := testPanic(func() {
+		tc.db.View(func(tx database.Tx) error {
+			tx.Commit()
+			return nil
+		})
+	})
+	if !panicked {
+		tc.t.Error("Commit called inside View did not panic")
+		return false
+	}
+
+	// Ensure calling Rollback on a managed read-only transaction panics.
+	panicked = testPanic(func() {
+		tc.db.View(func(tx database.Tx) error {
+			tx.Rollback()
+			return nil
+		})
+	})
+	if !panicked {
+		tc.t.Error("Rollback called inside View did not panic")
+		return false
+	}
+
+	// Ensure calling Commit on a managed read-write transaction panics.
+	panicked = testPanic(func() {
+		tc.db.Update(func(tx database.Tx) error {
+			tx.Commit()
+			return nil
+		})
+	})
+	if !panicked {
+		tc.t.Error("Commit called inside Update did not panic")
+		return false
+	}
+
+	// Ensure calling Rollback on a managed read-write transaction panics.
+	panicked = testPanic(func() {
+		tc.db.Update(func(tx database.Tx) error {
+			tx.Rollback()
+			return nil
+		})
+	})
+	if !panicked {
+		tc.t.Error("Rollback called inside Update did not panic")
+		return false
+	}
+
+	return true
+}
+
+// testMetadataTxInterface tests all facets of the managed read/write and
+// manual transaction metadata interfaces as well as the bucket interfaces under
+// them.
+func testMetadataTxInterface(tc *testContext) bool {
+	if !testManagedTxPanics(tc) {
+		return false
+	}
+
+	bucket1Name := []byte("bucket1")
+	err := tc.db.Update(func(tx database.Tx) error {
+		_, err := tx.Metadata().CreateBucket(bucket1Name)
+		return err
+	})
+	if err != nil {
+		tc.t.Errorf("Update: unexpected error creating bucket: %v", err)
+		return false
+	}
+
+	if !testMetadataManualTxInterface(tc) {
+		return false
+	}
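The split between managed and manual transactions that the helpers above exercise reduces to the following sketch; the helper names are illustrative and db is assumed to be an open database.DB from any registered driver:

```go
// putManaged stores a key/value pair with the managed API; the transaction
// commits when the closure returns nil and rolls back otherwise.
func putManaged(db database.DB, key, val []byte) error {
	return db.Update(func(tx database.Tx) error {
		return tx.Metadata().Put(key, val)
	})
}

// putManual does the same with a manual transaction, which must be
// explicitly committed or rolled back by the caller.
func putManual(db database.DB, key, val []byte) error {
	tx, err := db.Begin(true)
	if err != nil {
		return err
	}
	if err := tx.Metadata().Put(key, val); err != nil {
		_ = tx.Rollback()
		return err
	}
	return tx.Commit()
}
```

+	// keyValues holds the keys and values to use when putting values
+	// into a bucket.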
+ var keyValues = []keyPair{ + {"mtxkey1", "foo1"}, + {"mtxkey2", "foo2"}, + {"mtxkey3", "foo3"}, + } + + // Test the bucket interface via a managed read-only transaction. + err = tc.db.View(func(tx database.Tx) error { + metadataBucket := tx.Metadata() + if metadataBucket == nil { + return fmt.Errorf("Metadata: unexpected nil bucket") + } + + bucket1 := metadataBucket.Bucket(bucket1Name) + if bucket1 == nil { + return fmt.Errorf("Bucket1: unexpected nil bucket") + } + + tc.isWritable = false + if !testBucketInterface(tc, bucket1) { + return errSubTestFail + } + + return nil + }) + if err != nil { + if err != errSubTestFail { + tc.t.Errorf("%v", err) + } + return false + } + + // Ensure errors returned from the user-supplied View function are + // returned. + viewError := fmt.Errorf("example view error") + err = tc.db.View(func(tx database.Tx) error { + return viewError + }) + if err != viewError { + tc.t.Errorf("View: inner function error not returned - got "+ + "%v, want %v", err, viewError) + return false + } + + // Test the bucket interface via a managed read-write transaction. + // Also, put a series of values and force a rollback so the following + // code can ensure the values were not stored. + forceRollbackError := fmt.Errorf("force rollback") + err = tc.db.Update(func(tx database.Tx) error { + metadataBucket := tx.Metadata() + if metadataBucket == nil { + return fmt.Errorf("Metadata: unexpected nil bucket") + } + + bucket1 := metadataBucket.Bucket(bucket1Name) + if bucket1 == nil { + return fmt.Errorf("Bucket1: unexpected nil bucket") + } + + tc.isWritable = true + if !testBucketInterface(tc, bucket1) { + return errSubTestFail + } + + if !testPutValues(tc, bucket1, keyValues) { + return errSubTestFail + } + + // Return an error to force a rollback. + return forceRollbackError + }) + if err != forceRollbackError { + if err == errSubTestFail { + return false + } + + tc.t.Errorf("Update: inner function error not returned - got "+ + "%v, want %v", err, forceRollbackError) + return false + } + + // Ensure the values that should not have been stored due to the forced + // rollback above were not actually stored. + err = tc.db.View(func(tx database.Tx) error { + metadataBucket := tx.Metadata() + if metadataBucket == nil { + return fmt.Errorf("Metadata: unexpected nil bucket") + } + + if !testGetValues(tc, metadataBucket, rollbackValues(keyValues)) { + return errSubTestFail + } + + return nil + }) + if err != nil { + if err != errSubTestFail { + tc.t.Errorf("%v", err) + } + return false + } + + // Store a series of values via a managed read-write transaction. + err = tc.db.Update(func(tx database.Tx) error { + metadataBucket := tx.Metadata() + if metadataBucket == nil { + return fmt.Errorf("Metadata: unexpected nil bucket") + } + + bucket1 := metadataBucket.Bucket(bucket1Name) + if bucket1 == nil { + return fmt.Errorf("Bucket1: unexpected nil bucket") + } + + if !testPutValues(tc, bucket1, keyValues) { + return errSubTestFail + } + + return nil + }) + if err != nil { + if err != errSubTestFail { + tc.t.Errorf("%v", err) + } + return false + } + + // Ensure the values stored above were committed as expected. 
+	err = tc.db.View(func(tx database.Tx) error {
+		metadataBucket := tx.Metadata()
+		if metadataBucket == nil {
+			return fmt.Errorf("Metadata: unexpected nil bucket")
+		}
+
+		bucket1 := metadataBucket.Bucket(bucket1Name)
+		if bucket1 == nil {
+			return fmt.Errorf("Bucket1: unexpected nil bucket")
+		}
+
+		if !testGetValues(tc, bucket1, keyValues) {
+			return errSubTestFail
+		}
+
+		return nil
+	})
+	if err != nil {
+		if err != errSubTestFail {
+			tc.t.Errorf("%v", err)
+		}
+		return false
+	}
+
+	// Clean up the values stored above in a managed read-write transaction.
+	err = tc.db.Update(func(tx database.Tx) error {
+		metadataBucket := tx.Metadata()
+		if metadataBucket == nil {
+			return fmt.Errorf("Metadata: unexpected nil bucket")
+		}
+
+		bucket1 := metadataBucket.Bucket(bucket1Name)
+		if bucket1 == nil {
+			return fmt.Errorf("Bucket1: unexpected nil bucket")
+		}
+
+		if !testDeleteValues(tc, bucket1, keyValues) {
+			return errSubTestFail
+		}
+
+		return nil
+	})
+	if err != nil {
+		if err != errSubTestFail {
+			tc.t.Errorf("%v", err)
+		}
+		return false
+	}
+
+	return true
+}
+
+// testFetchBlockIOMissing ensures that all of the block retrieval API functions
+// work as expected when requesting blocks that don't exist.
+func testFetchBlockIOMissing(tc *testContext, tx database.Tx) bool {
+	wantErrCode := database.ErrBlockNotFound
+
+	// ---------------------
+	// Non-bulk Block IO API
+	// ---------------------
+
+	// Test the individual block APIs one block at a time to ensure they
+	// return the expected error. Also, build the data needed to test the
+	// bulk APIs below while looping.
+	allBlockHashes := make([]wire.ShaHash, len(tc.blocks))
+	allBlockRegions := make([]database.BlockRegion, len(tc.blocks))
+	for i, block := range tc.blocks {
+		blockHash := block.Sha()
+		allBlockHashes[i] = *blockHash
+
+		txLocs, err := block.TxLoc()
+		if err != nil {
+			tc.t.Errorf("block.TxLoc(%d): unexpected error: %v", i,
+				err)
+			return false
+		}
+
+		// Ensure FetchBlock returns expected error.
+		testName := fmt.Sprintf("FetchBlock #%d on missing block", i)
+		_, err = tx.FetchBlock(blockHash)
+		if !checkDbError(tc.t, testName, err, wantErrCode) {
+			return false
+		}
+
+		// Ensure FetchBlockHeader returns expected error.
+		testName = fmt.Sprintf("FetchBlockHeader #%d on missing block",
+			i)
+		_, err = tx.FetchBlockHeader(blockHash)
+		if !checkDbError(tc.t, testName, err, wantErrCode) {
+			return false
+		}
+
+		// Ensure the first transaction fetched as a block region from
+		// the database returns the expected error.
+		testName = fmt.Sprintf("FetchBlockRegion #%d on missing block",
+			i)
+		region := database.BlockRegion{
+			Hash:   blockHash,
+			Offset: uint32(txLocs[0].TxStart),
+			Len:    uint32(txLocs[0].TxLen),
+		}
+		allBlockRegions[i] = region
+		_, err = tx.FetchBlockRegion(&region)
+		if !checkDbError(tc.t, testName, err, wantErrCode) {
+			return false
+		}
+
+		// Ensure HasBlock returns false.
+		hasBlock, err := tx.HasBlock(blockHash)
+		if err != nil {
+			tc.t.Errorf("HasBlock #%d: unexpected err: %v", i, err)
+			return false
+		}
+		if hasBlock {
+			tc.t.Errorf("HasBlock #%d: should not have block", i)
+			return false
+		}
+	}
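Note the contract being verified in the loop above: looking up an unknown block via HasBlock is not an error, it simply reports false. A caller-side sketch of that contract (blockExists is an illustrative name):

```go
// blockExists reports whether the block with the given hash is stored,
// relying on HasBlock returning false, not ErrBlockNotFound, for unknown
// blocks.
func blockExists(db database.DB, hash *wire.ShaHash) (bool, error) {
	var exists bool
	err := db.View(func(tx database.Tx) error {
		var err error
		exists, err = tx.HasBlock(hash)
		return err
	})
	return exists, err
}
```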
+ testName = "FetchBlockHeaders on missing blocks" + _, err = tx.FetchBlockHeaders(allBlockHashes) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + // Ensure FetchBlockRegions returns expected error. + testName = "FetchBlockRegions on missing blocks" + _, err = tx.FetchBlockRegions(allBlockRegions) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + // Ensure HasBlocks returns false for all blocks. + hasBlocks, err := tx.HasBlocks(allBlockHashes) + if err != nil { + tc.t.Errorf("HasBlocks: unexpected err: %v", err) + } + for i, hasBlock := range hasBlocks { + if hasBlock { + tc.t.Errorf("HasBlocks #%d: should not have block", i) + return false + } + } + + return true +} + +// testFetchBlockIO ensures all of the block retrieval API functions work as +// expected for the provide set of blocks. The blocks must already be stored in +// the database, or at least stored into the the passed transaction. It also +// tests several error conditions such as ensuring the expected errors are +// returned when fetching blocks, headers, and regions that don't exist. +func testFetchBlockIO(tc *testContext, tx database.Tx) bool { + // --------------------- + // Non-bulk Block IO API + // --------------------- + + // Test the individual block APIs one block at a time. Also, build the + // data needed to test the bulk APIs below while looping. + allBlockHashes := make([]wire.ShaHash, len(tc.blocks)) + allBlockBytes := make([][]byte, len(tc.blocks)) + allBlockTxLocs := make([][]wire.TxLoc, len(tc.blocks)) + allBlockRegions := make([]database.BlockRegion, len(tc.blocks)) + for i, block := range tc.blocks { + blockHash := block.Sha() + allBlockHashes[i] = *blockHash + + blockBytes, err := block.Bytes() + if err != nil { + tc.t.Errorf("block.Bytes(%d): unexpected error: %v", i, + err) + return false + } + allBlockBytes[i] = blockBytes + + txLocs, err := block.TxLoc() + if err != nil { + tc.t.Errorf("block.TxLoc(%d): unexpected error: %v", i, + err) + return false + } + allBlockTxLocs[i] = txLocs + + // Ensure the block data fetched from the database matches the + // expected bytes. + gotBlockBytes, err := tx.FetchBlock(blockHash) + if err != nil { + tc.t.Errorf("FetchBlock(%s): unexpected error: %v", + blockHash, err) + return false + } + if !bytes.Equal(gotBlockBytes, blockBytes) { + tc.t.Errorf("FetchBlock(%s): bytes mismatch: got %x, "+ + "want %x", blockHash, gotBlockBytes, blockBytes) + return false + } + + // Ensure the block header fetched from the database matches the + // expected bytes. + wantHeaderBytes := blockBytes[0:wire.MaxBlockHeaderPayload] + gotHeaderBytes, err := tx.FetchBlockHeader(blockHash) + if err != nil { + tc.t.Errorf("FetchBlockHeader(%s): unexpected error: %v", + blockHash, err) + return false + } + if !bytes.Equal(gotHeaderBytes, wantHeaderBytes) { + tc.t.Errorf("FetchBlockHeader(%s): bytes mismatch: "+ + "got %x, want %x", blockHash, gotHeaderBytes, + wantHeaderBytes) + return false + } + + // Ensure the first transaction fetched as a block region from + // the database matches the expected bytes. 
+		region := database.BlockRegion{
+			Hash:   blockHash,
+			Offset: uint32(txLocs[0].TxStart),
+			Len:    uint32(txLocs[0].TxLen),
+		}
+		allBlockRegions[i] = region
+		endRegionOffset := region.Offset + region.Len
+		wantRegionBytes := blockBytes[region.Offset:endRegionOffset]
+		gotRegionBytes, err := tx.FetchBlockRegion(&region)
+		if err != nil {
+			tc.t.Errorf("FetchBlockRegion(%s): unexpected error: %v",
+				blockHash, err)
+			return false
+		}
+		if !bytes.Equal(gotRegionBytes, wantRegionBytes) {
+			tc.t.Errorf("FetchBlockRegion(%s): bytes mismatch: "+
+				"got %x, want %x", blockHash, gotRegionBytes,
+				wantRegionBytes)
+			return false
+		}
+
+		// Ensure the database reports that it has the block.
+		hasBlock, err := tx.HasBlock(blockHash)
+		if err != nil {
+			tc.t.Errorf("HasBlock(%s): unexpected error: %v",
+				blockHash, err)
+			return false
+		}
+		if !hasBlock {
+			tc.t.Errorf("HasBlock(%s): database claims it doesn't "+
+				"have the block when it should", blockHash)
+			return false
+		}
+
+		// -----------------------
+		// Invalid blocks/regions.
+		// -----------------------
+
+		// Ensure fetching a block that doesn't exist returns the
+		// expected error.
+		badBlockHash := &wire.ShaHash{}
+		testName := fmt.Sprintf("FetchBlock(%s) invalid block",
+			badBlockHash)
+		wantErrCode := database.ErrBlockNotFound
+		_, err = tx.FetchBlock(badBlockHash)
+		if !checkDbError(tc.t, testName, err, wantErrCode) {
+			return false
+		}
+
+		// Ensure fetching a block header that doesn't exist returns
+		// the expected error.
+		testName = fmt.Sprintf("FetchBlockHeader(%s) invalid block",
+			badBlockHash)
+		_, err = tx.FetchBlockHeader(badBlockHash)
+		if !checkDbError(tc.t, testName, err, wantErrCode) {
+			return false
+		}
+
+		// Ensure fetching a block region in a block that doesn't exist
+		// returns the expected error.
+		testName = fmt.Sprintf("FetchBlockRegion(%s) invalid hash",
+			badBlockHash)
+		wantErrCode = database.ErrBlockNotFound
+		region.Hash = badBlockHash
+		region.Offset = ^uint32(0)
+		_, err = tx.FetchBlockRegion(&region)
+		if !checkDbError(tc.t, testName, err, wantErrCode) {
+			return false
+		}
+
+		// Ensure fetching a block region that is out of bounds returns
+		// the expected error.
+		testName = fmt.Sprintf("FetchBlockRegion(%s) invalid region",
+			blockHash)
+		wantErrCode = database.ErrBlockRegionInvalid
+		region.Hash = blockHash
+		region.Offset = ^uint32(0)
+		_, err = tx.FetchBlockRegion(&region)
+		if !checkDbError(tc.t, testName, err, wantErrCode) {
+			return false
+		}
+	}
+
+	// -----------------
+	// Bulk Block IO API
+	// -----------------
+
+	// Ensure the bulk block data fetched from the database matches the
+	// expected bytes.
+	blockData, err := tx.FetchBlocks(allBlockHashes)
+	if err != nil {
+		tc.t.Errorf("FetchBlocks: unexpected error: %v", err)
+		return false
+	}
+	if len(blockData) != len(allBlockBytes) {
+		tc.t.Errorf("FetchBlocks: unexpected number of results - got "+
+			"%d, want %d", len(blockData), len(allBlockBytes))
+		return false
+	}
+	for i := 0; i < len(blockData); i++ {
+		blockHash := allBlockHashes[i]
+		wantBlockBytes := allBlockBytes[i]
+		gotBlockBytes := blockData[i]
+		if !bytes.Equal(gotBlockBytes, wantBlockBytes) {
+			tc.t.Errorf("FetchBlocks(%s): bytes mismatch: got %x, "+
+				"want %x", blockHash, gotBlockBytes,
+				wantBlockBytes)
+			return false
+		}
+	}
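The bulk calls verified here are all-or-nothing: a single unknown hash fails the whole request with ErrBlockNotFound rather than returning partial results. A sketch of a caller that distinguishes that case (fetchHeaders is an illustrative name):

```go
// fetchHeaders returns the serialized headers for the given hashes,
// surfacing the ErrBlockNotFound code the bulk API reports when any one
// of the requested blocks is missing.
func fetchHeaders(db database.DB, hashes []wire.ShaHash) ([][]byte, error) {
	var headers [][]byte
	err := db.View(func(tx database.Tx) error {
		var err error
		headers, err = tx.FetchBlockHeaders(hashes)
		return err
	})
	if dbErr, ok := err.(database.Error); ok &&
		dbErr.ErrorCode == database.ErrBlockNotFound {

		// At least one of the requested blocks is not stored.
		return nil, err
	}
	return headers, err
}
```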
+	// Ensure the bulk block headers fetched from the database match the
+	// expected bytes.
+	blockHeaderData, err := tx.FetchBlockHeaders(allBlockHashes)
+	if err != nil {
+		tc.t.Errorf("FetchBlockHeaders: unexpected error: %v", err)
+		return false
+	}
+	if len(blockHeaderData) != len(allBlockBytes) {
+		tc.t.Errorf("FetchBlockHeaders: unexpected number of results "+
+			"- got %d, want %d", len(blockHeaderData),
+			len(allBlockBytes))
+		return false
+	}
+	for i := 0; i < len(blockHeaderData); i++ {
+		blockHash := allBlockHashes[i]
+		wantHeaderBytes := allBlockBytes[i][0:wire.MaxBlockHeaderPayload]
+		gotHeaderBytes := blockHeaderData[i]
+		if !bytes.Equal(gotHeaderBytes, wantHeaderBytes) {
+			tc.t.Errorf("FetchBlockHeaders(%s): bytes mismatch: "+
+				"got %x, want %x", blockHash, gotHeaderBytes,
+				wantHeaderBytes)
+			return false
+		}
+	}
+
+	// Ensure the first transaction of every block fetched in bulk block
+	// regions from the database matches the expected bytes.
+	allRegionBytes, err := tx.FetchBlockRegions(allBlockRegions)
+	if err != nil {
+		tc.t.Errorf("FetchBlockRegions: unexpected error: %v", err)
+		return false
+	}
+	if len(allRegionBytes) != len(allBlockRegions) {
+		tc.t.Errorf("FetchBlockRegions: unexpected number of results "+
+			"- got %d, want %d", len(allRegionBytes),
+			len(allBlockRegions))
+		return false
+	}
+	for i, gotRegionBytes := range allRegionBytes {
+		region := &allBlockRegions[i]
+		endRegionOffset := region.Offset + region.Len
+		wantRegionBytes := blockData[i][region.Offset:endRegionOffset]
+		if !bytes.Equal(gotRegionBytes, wantRegionBytes) {
+			tc.t.Errorf("FetchBlockRegions(%d): bytes mismatch: "+
+				"got %x, want %x", i, gotRegionBytes,
+				wantRegionBytes)
+			return false
+		}
+	}
+
+	// Ensure the bulk determination of whether a set of block hashes are in
+	// the database returns true for all loaded blocks.
+	hasBlocks, err := tx.HasBlocks(allBlockHashes)
+	if err != nil {
+		tc.t.Errorf("HasBlocks: unexpected error: %v", err)
+		return false
+	}
+	for i, hasBlock := range hasBlocks {
+		if !hasBlock {
+			tc.t.Errorf("HasBlocks(%d): should have block", i)
+			return false
+		}
+	}
+
+	// -----------------------
+	// Invalid blocks/regions.
+	// -----------------------
+
+	// Ensure fetching blocks for which one doesn't exist returns the
+	// expected error.
+	testName := "FetchBlocks invalid hash"
+	badBlockHashes := make([]wire.ShaHash, len(allBlockHashes)+1)
+	copy(badBlockHashes, allBlockHashes)
+	badBlockHashes[len(badBlockHashes)-1] = wire.ShaHash{}
+	wantErrCode := database.ErrBlockNotFound
+	_, err = tx.FetchBlocks(badBlockHashes)
+	if !checkDbError(tc.t, testName, err, wantErrCode) {
+		return false
+	}
+
+	// Ensure fetching block headers for which one doesn't exist returns the
+	// expected error.
+	testName = "FetchBlockHeaders invalid hash"
+	_, err = tx.FetchBlockHeaders(badBlockHashes)
+	if !checkDbError(tc.t, testName, err, wantErrCode) {
+		return false
+	}
+
+	// Ensure fetching block regions for which one of the blocks doesn't
+	// exist returns the expected error.
+	testName = "FetchBlockRegions invalid hash"
+	badBlockRegions := make([]database.BlockRegion, len(allBlockRegions)+1)
+	copy(badBlockRegions, allBlockRegions)
+	badBlockRegions[len(badBlockRegions)-1].Hash = &wire.ShaHash{}
+	wantErrCode = database.ErrBlockNotFound
+	_, err = tx.FetchBlockRegions(badBlockRegions)
+	if !checkDbError(tc.t, testName, err, wantErrCode) {
+		return false
+	}
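The region checks above mirror the intended use of the API: pairing wire.TxLoc offsets with a block hash to read a single transaction without deserializing the whole block. A minimal sketch (fetchTxBytes is an illustrative name):

```go
// fetchTxBytes fetches the raw bytes of the transaction at index txIdx
// within the given block by building a BlockRegion from its TxLoc.
func fetchTxBytes(db database.DB, block *btcutil.Block, txIdx int) ([]byte, error) {
	txLocs, err := block.TxLoc()
	if err != nil {
		return nil, err
	}
	region := database.BlockRegion{
		Hash:   block.Sha(),
		Offset: uint32(txLocs[txIdx].TxStart),
		Len:    uint32(txLocs[txIdx].TxLen),
	}
	var txBytes []byte
	err = db.View(func(tx database.Tx) error {
		var err error
		txBytes, err = tx.FetchBlockRegion(&region)
		return err
	})
	return txBytes, err
}
```

+	// Ensure fetching block regions that are out of bounds returns the
+	// expected error.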
+ testName = "FetchBlockRegions invalid regions" + badBlockRegions = badBlockRegions[:len(badBlockRegions)-1] + for i := range badBlockRegions { + badBlockRegions[i].Offset = ^uint32(0) + } + wantErrCode = database.ErrBlockRegionInvalid + _, err = tx.FetchBlockRegions(badBlockRegions) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + return true +} + +// testBlockIOTxInterface ensures that the block IO interface works as expected +// for both managed read/write and manual transactions. This function leaves +// all of the stored blocks in the database. +func testBlockIOTxInterface(tc *testContext) bool { + // Ensure attempting to store a block with a read-only transaction fails + // with the expected error. + err := tc.db.View(func(tx database.Tx) error { + wantErrCode := database.ErrTxNotWritable + for i, block := range tc.blocks { + testName := fmt.Sprintf("StoreBlock(%d) on ro tx", i) + err := tx.StoreBlock(block) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return errSubTestFail + } + } + + return nil + }) + if err != nil { + if err != errSubTestFail { + tc.t.Errorf("%v", err) + } + return false + } + + // Populate the database with loaded blocks and ensure all of the data + // fetching APIs work properly on them within the transaction before a + // commit or rollback. Then, force a rollback so the code below can + // ensure none of the data actually gets stored. + forceRollbackError := fmt.Errorf("force rollback") + err = tc.db.Update(func(tx database.Tx) error { + // Store all blocks in the same transaction. + for i, block := range tc.blocks { + err := tx.StoreBlock(block) + if err != nil { + tc.t.Errorf("StoreBlock #%d: unexpected error: "+ + "%v", i, err) + return errSubTestFail + } + } + + // Ensure attempting to store the same block again, before the + // transaction has been committed, returns the expected error. + wantErrCode := database.ErrBlockExists + for i, block := range tc.blocks { + testName := fmt.Sprintf("duplicate block entry #%d "+ + "(before commit)", i) + err := tx.StoreBlock(block) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return errSubTestFail + } + } + + // Ensure that all data fetches from the stored blocks before + // the transaction has been committed work as expected. + if !testFetchBlockIO(tc, tx) { + return errSubTestFail + } + + return forceRollbackError + }) + if err != forceRollbackError { + if err == errSubTestFail { + return false + } + + tc.t.Errorf("Update: inner function error not returned - got "+ + "%v, want %v", err, forceRollbackError) + return false + } + + // Ensure rollback was successful + err = tc.db.View(func(tx database.Tx) error { + if !testFetchBlockIOMissing(tc, tx) { + return errSubTestFail + } + return nil + }) + if err != nil { + if err != errSubTestFail { + tc.t.Errorf("%v", err) + } + return false + } + + // Populate the database with loaded blocks and ensure all of the data + // fetching APIs work properly. + err = tc.db.Update(func(tx database.Tx) error { + // Store a bunch of blocks in the same transaction. + for i, block := range tc.blocks { + err := tx.StoreBlock(block) + if err != nil { + tc.t.Errorf("StoreBlock #%d: unexpected error: "+ + "%v", i, err) + return errSubTestFail + } + } + + // Ensure attempting to store the same block again while in the + // same transaction, but before it has been committed, returns + // the expected error. 
+		for i, block := range tc.blocks {
+			testName := fmt.Sprintf("duplicate block entry #%d "+
+				"(before commit)", i)
+			wantErrCode := database.ErrBlockExists
+			err := tx.StoreBlock(block)
+			if !checkDbError(tc.t, testName, err, wantErrCode) {
+				return errSubTestFail
+			}
+		}
+
+		// Ensure that all data fetches from the stored blocks before
+		// the transaction has been committed work as expected.
+		if !testFetchBlockIO(tc, tx) {
+			return errSubTestFail
+		}
+
+		return nil
+	})
+	if err != nil {
+		if err != errSubTestFail {
+			tc.t.Errorf("%v", err)
+		}
+		return false
+	}
+
+	// Ensure all data fetch tests work as expected using a managed
+	// read-only transaction after the data was successfully committed
+	// above.
+	err = tc.db.View(func(tx database.Tx) error {
+		if !testFetchBlockIO(tc, tx) {
+			return errSubTestFail
+		}
+
+		return nil
+	})
+	if err != nil {
+		if err != errSubTestFail {
+			tc.t.Errorf("%v", err)
+		}
+		return false
+	}
+
+	// Ensure all data fetch tests work as expected using a managed
+	// read-write transaction after the data was successfully committed
+	// above.
+	err = tc.db.Update(func(tx database.Tx) error {
+		if !testFetchBlockIO(tc, tx) {
+			return errSubTestFail
+		}
+
+		// Ensure attempting to store existing blocks again returns the
+		// expected error. Note that this is different from the
+		// previous version since this is a new transaction after the
+		// blocks have been committed.
+		wantErrCode := database.ErrBlockExists
+		for i, block := range tc.blocks {
+			testName := fmt.Sprintf("duplicate block entry #%d "+
+				"(after commit)", i)
+			err := tx.StoreBlock(block)
+			if !checkDbError(tc.t, testName, err, wantErrCode) {
+				return errSubTestFail
+			}
+		}
+
+		return nil
+	})
+	if err != nil {
+		if err != errSubTestFail {
+			tc.t.Errorf("%v", err)
+		}
+		return false
+	}
+
+	return true
+}
+
+// testClosedTxInterface ensures that both the metadata and block IO API
+// functions behave as expected when attempted against a closed transaction.
+func testClosedTxInterface(tc *testContext, tx database.Tx) bool {
+	wantErrCode := database.ErrTxClosed
+	bucket := tx.Metadata()
+	cursor := tx.Metadata().Cursor()
+	bucketName := []byte("closedtxbucket")
+	keyName := []byte("closedtxkey")
+
+	// ------------
+	// Metadata API
+	// ------------
+
+	// Ensure that attempting to get an existing bucket returns nil when the
+	// transaction is closed.
+	if b := bucket.Bucket(bucketName); b != nil {
+		tc.t.Errorf("Bucket: did not return nil on closed tx")
+		return false
+	}
+
+	// Ensure CreateBucket returns expected error.
+	testName := "CreateBucket on closed tx"
+	_, err := bucket.CreateBucket(bucketName)
+	if !checkDbError(tc.t, testName, err, wantErrCode) {
+		return false
+	}
+
+	// Ensure CreateBucketIfNotExists returns expected error.
+	testName = "CreateBucketIfNotExists on closed tx"
+	_, err = bucket.CreateBucketIfNotExists(bucketName)
+	if !checkDbError(tc.t, testName, err, wantErrCode) {
+		return false
+	}
+
+	// Ensure Delete returns expected error.
+	testName = "Delete on closed tx"
+	err = bucket.Delete(keyName)
+	if !checkDbError(tc.t, testName, err, wantErrCode) {
+		return false
+	}
+
+	// Ensure DeleteBucket returns expected error.
+	testName = "DeleteBucket on closed tx"
+	err = bucket.DeleteBucket(bucketName)
+	if !checkDbError(tc.t, testName, err, wantErrCode) {
+		return false
+	}
+
+	// Ensure ForEach returns expected error.
+ testName = "ForEach on closed tx" + err = bucket.ForEach(nil) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + // Ensure ForEachBucket returns expected error. + testName = "ForEachBucket on closed tx" + err = bucket.ForEachBucket(nil) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + // Ensure Get returns expected error. + testName = "Get on closed tx" + if k := bucket.Get(keyName); k != nil { + tc.t.Errorf("Get: did not return nil on closed tx") + return false + } + + // Ensure Put returns expected error. + testName = "Put on closed tx" + err = bucket.Put(keyName, []byte("test")) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + // ------------------- + // Metadata Cursor API + // ------------------- + + // Ensure attempting to get a bucket from a cursor on a closed tx gives + // back nil. + if b := cursor.Bucket(); b != nil { + tc.t.Error("Cursor.Bucket: returned non-nil on closed tx") + return false + } + + // Ensure Cursor.Delete returns expected error. + testName = "Cursor.Delete on closed tx" + err = cursor.Delete() + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + // Ensure Cursor.First on a closed tx returns false and nil key/value. + if cursor.First() { + tc.t.Error("Cursor.First: claims ok on closed tx") + return false + } + if cursor.Key() != nil || cursor.Value() != nil { + tc.t.Error("Cursor.First: key and/or value are not nil on " + + "closed tx") + return false + } + + // Ensure Cursor.Last on a closed tx returns false and nil key/value. + if cursor.Last() { + tc.t.Error("Cursor.Last: claims ok on closed tx") + return false + } + if cursor.Key() != nil || cursor.Value() != nil { + tc.t.Error("Cursor.Last: key and/or value are not nil on " + + "closed tx") + return false + } + + // Ensure Cursor.Next on a closed tx returns false and nil key/value. + if cursor.Next() { + tc.t.Error("Cursor.Next: claims ok on closed tx") + return false + } + if cursor.Key() != nil || cursor.Value() != nil { + tc.t.Error("Cursor.Next: key and/or value are not nil on " + + "closed tx") + return false + } + + // Ensure Cursor.Prev on a closed tx returns false and nil key/value. + if cursor.Prev() { + tc.t.Error("Cursor.Prev: claims ok on closed tx") + return false + } + if cursor.Key() != nil || cursor.Value() != nil { + tc.t.Error("Cursor.Prev: key and/or value are not nil on " + + "closed tx") + return false + } + + // Ensure Cursor.Seek on a closed tx returns false and nil key/value. + if cursor.Seek([]byte{}) { + tc.t.Error("Cursor.Seek: claims ok on closed tx") + return false + } + if cursor.Key() != nil || cursor.Value() != nil { + tc.t.Error("Cursor.Seek: key and/or value are not nil on " + + "closed tx") + return false + } + + // --------------------- + // Non-bulk Block IO API + // --------------------- + + // Test the individual block APIs one block at a time to ensure they + // return the expected error. Also, build the data needed to test the + // bulk APIs below while looping. + allBlockHashes := make([]wire.ShaHash, len(tc.blocks)) + allBlockRegions := make([]database.BlockRegion, len(tc.blocks)) + for i, block := range tc.blocks { + blockHash := block.Sha() + allBlockHashes[i] = *blockHash + + txLocs, err := block.TxLoc() + if err != nil { + tc.t.Errorf("block.TxLoc(%d): unexpected error: %v", i, + err) + return false + } + + // Ensure StoreBlock returns expected error. 
+ testName = "StoreBlock on closed tx" + err = tx.StoreBlock(block) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + // Ensure FetchBlock returns expected error. + testName = fmt.Sprintf("FetchBlock #%d on closed tx", i) + _, err = tx.FetchBlock(blockHash) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + // Ensure FetchBlockHeader returns expected error. + testName = fmt.Sprintf("FetchBlockHeader #%d on closed tx", i) + _, err = tx.FetchBlockHeader(blockHash) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + // Ensure the first transaction fetched as a block region from + // the database returns the expected error. + region := database.BlockRegion{ + Hash: blockHash, + Offset: uint32(txLocs[0].TxStart), + Len: uint32(txLocs[0].TxLen), + } + allBlockRegions[i] = region + _, err = tx.FetchBlockRegion(®ion) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + // Ensure HasBlock returns expected error. + testName = fmt.Sprintf("HasBlock #%d on closed tx", i) + _, err = tx.HasBlock(blockHash) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + } + + // ----------------- + // Bulk Block IO API + // ----------------- + + // Ensure FetchBlocks returns expected error. + testName = "FetchBlocks on closed tx" + _, err = tx.FetchBlocks(allBlockHashes) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + // Ensure FetchBlockHeaders returns expected error. + testName = "FetchBlockHeaders on closed tx" + _, err = tx.FetchBlockHeaders(allBlockHashes) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + // Ensure FetchBlockRegions returns expected error. + testName = "FetchBlockRegions on closed tx" + _, err = tx.FetchBlockRegions(allBlockRegions) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + // Ensure HasBlocks returns expected error. + testName = "HasBlocks on closed tx" + _, err = tx.HasBlocks(allBlockHashes) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return false + } + + // --------------- + // Commit/Rollback + // --------------- + + // Ensure that attempting to rollback or commit a transaction that is + // already closed returns the expected error. + err = tx.Rollback() + if !checkDbError(tc.t, "closed tx rollback", err, wantErrCode) { + return false + } + err = tx.Commit() + if !checkDbError(tc.t, "closed tx commit", err, wantErrCode) { + return false + } + + return true +} + +// testTxClosed ensures that both the metadata and block IO API functions behave +// as expected when attempted against both read-only and read-write +// transactions. +func testTxClosed(tc *testContext) bool { + bucketName := []byte("closedtxbucket") + keyName := []byte("closedtxkey") + + // Start a transaction, create a bucket and key used for testing, and + // immediately perform a commit on it so it is closed. 
+// testTxClosed ensures that both the metadata and block IO API functions behave
+// as expected when attempted against both read-only and read-write
+// transactions.
+func testTxClosed(tc *testContext) bool {
+	bucketName := []byte("closedtxbucket")
+	keyName := []byte("closedtxkey")
+
+	// Start a transaction, create a bucket and key used for testing, and
+	// immediately perform a commit on it so it is closed.
+	tx, err := tc.db.Begin(true)
+	if err != nil {
+		tc.t.Errorf("Begin(true): unexpected error: %v", err)
+		return false
+	}
+	defer rollbackOnPanic(tc.t, tx)
+	if _, err := tx.Metadata().CreateBucket(bucketName); err != nil {
+		tc.t.Errorf("CreateBucket: unexpected error: %v", err)
+		return false
+	}
+	if err := tx.Metadata().Put(keyName, []byte("test")); err != nil {
+		tc.t.Errorf("Put: unexpected error: %v", err)
+		return false
+	}
+	if err := tx.Commit(); err != nil {
+		tc.t.Errorf("Commit: unexpected error: %v", err)
+		return false
+	}
+
+	// Ensure invoking all of the functions on the closed read-write
+	// transaction behave as expected.
+	if !testClosedTxInterface(tc, tx) {
+		return false
+	}
+
+	// Repeat the tests with a rolled-back read-only transaction.
+	tx, err = tc.db.Begin(false)
+	if err != nil {
+		tc.t.Errorf("Begin(false): unexpected error: %v", err)
+		return false
+	}
+	defer rollbackOnPanic(tc.t, tx)
+	if err := tx.Rollback(); err != nil {
+		tc.t.Errorf("Rollback: unexpected error: %v", err)
+		return false
+	}
+
+	// Ensure invoking all of the functions on the closed read-only
+	// transaction behave as expected.
+	return testClosedTxInterface(tc, tx)
+}
+
+// testConcurrency ensures the database properly supports concurrent readers
+// and only a single writer. It also ensures views act as snapshots at the
+// time they are acquired.
+func testConcurrency(tc *testContext) bool {
+	// sleepTime is how long each of the concurrent readers should sleep to
+	// aid in detection of whether or not the data is actually being read
+	// concurrently. It starts with a sane lower bound.
+	var sleepTime = time.Millisecond * 250
+
+	// Determine approximately how long a single block read takes. When
+	// it's longer than the default minimum sleep time, adjust the sleep
+	// time to help prevent durations that are too short which would cause
+	// erroneous test failures on slower systems.
+	startTime := time.Now()
+	err := tc.db.View(func(tx database.Tx) error {
+		_, err := tx.FetchBlock(tc.blocks[0].Sha())
+		if err != nil {
+			return err
+		}
+		return nil
+	})
+	if err != nil {
+		tc.t.Errorf("Unexpected error in view: %v", err)
+		return false
+	}
+	elapsed := time.Now().Sub(startTime)
+	if sleepTime < elapsed {
+		sleepTime = elapsed
+	}
+	tc.t.Logf("Time to load block 0: %v, using sleep time: %v", elapsed,
+		sleepTime)
+
+	// reader takes a block number to load and a channel to return the
+	// result of the operation on. It is used below to launch multiple
+	// concurrent readers.
+	numReaders := len(tc.blocks)
+	resultChan := make(chan bool, numReaders)
+	reader := func(blockNum int) {
+		err := tc.db.View(func(tx database.Tx) error {
+			time.Sleep(sleepTime)
+			_, err := tx.FetchBlock(tc.blocks[blockNum].Sha())
+			if err != nil {
+				return err
+			}
+			return nil
+		})
+		if err != nil {
+			tc.t.Errorf("Unexpected error in concurrent view: %v",
+				err)
+			resultChan <- false
+			return
+		}
+		resultChan <- true
+	}
+
+	// Start up several concurrent readers for the same block and wait for
+	// the results.
+	startTime = time.Now()
+	for i := 0; i < numReaders; i++ {
+		go reader(0)
+	}
+	for i := 0; i < numReaders; i++ {
+		if result := <-resultChan; !result {
+			return false
+		}
+	}
+	elapsed = time.Now().Sub(startTime)
+	tc.t.Logf("%d concurrent reads of same block elapsed: %v", numReaders,
+		elapsed)
+
+	// Consider it a failure if it took longer than half the time it would
+	// take with no concurrency.
+	if elapsed > sleepTime*time.Duration(numReaders/2) {
+		tc.t.Errorf("Concurrent views for same block did not appear to "+
+			"run simultaneously: elapsed %v", elapsed)
+		return false
+	}
+
+	// Start up several concurrent readers for different blocks and wait for
+	// the results.
+	startTime = time.Now()
+	for i := 0; i < numReaders; i++ {
+		go reader(i)
+	}
+	for i := 0; i < numReaders; i++ {
+		if result := <-resultChan; !result {
+			return false
+		}
+	}
+	elapsed = time.Now().Sub(startTime)
+	tc.t.Logf("%d concurrent reads of different blocks elapsed: %v",
+		numReaders, elapsed)
+
+	// Consider it a failure if it took longer than half the time it would
+	// take with no concurrency.
+	if elapsed > sleepTime*time.Duration(numReaders/2) {
+		tc.t.Errorf("Concurrent views for different blocks did not "+
+			"appear to run simultaneously: elapsed %v", elapsed)
+		return false
+	}
+
+	// Start up a few readers and wait for them to acquire views. Each
+	// reader waits for a signal from the writer to be finished to ensure
+	// that the data written by the writer is not seen by the view since it
+	// was started before the data was set.
+	concurrentKey := []byte("notthere")
+	concurrentVal := []byte("someval")
+	started := make(chan struct{})
+	writeComplete := make(chan struct{})
+	reader = func(blockNum int) {
+		err := tc.db.View(func(tx database.Tx) error {
+			started <- struct{}{}
+
+			// Wait for the writer to complete.
+			<-writeComplete
+
+			// Since this reader was created before the write took
+			// place, the data it added should not be visible.
+			val := tx.Metadata().Get(concurrentKey)
+			if val != nil {
+				return fmt.Errorf("%s should not be visible",
+					concurrentKey)
+			}
+			return nil
+		})
+		if err != nil {
+			tc.t.Errorf("Unexpected error in concurrent view: %v",
+				err)
+			resultChan <- false
+			return
+		}
+		resultChan <- true
+	}
+	for i := 0; i < numReaders; i++ {
+		go reader(0)
+	}
+	for i := 0; i < numReaders; i++ {
+		<-started
+	}
+
+	// All readers are started and waiting for completion of the writer.
+	// Set some data the readers are expecting to not find and signal the
+	// readers the write is done by closing the writeComplete channel.
+	err = tc.db.Update(func(tx database.Tx) error {
+		err := tx.Metadata().Put(concurrentKey, concurrentVal)
+		if err != nil {
+			return err
+		}
+		return nil
+	})
+	if err != nil {
+		tc.t.Errorf("Unexpected error in update: %v", err)
+		return false
+	}
+	close(writeComplete)
+
+	// Wait for reader results.
+	for i := 0; i < numReaders; i++ {
+		if result := <-resultChan; !result {
+			return false
+		}
+	}
+
+	// Start a few writers and ensure the total time is at least the
+	// writeSleepTime * numWriters. This ensures only one write transaction
+	// can be active at a time.
+	writeSleepTime := time.Millisecond * 250
+	writer := func() {
+		err := tc.db.Update(func(tx database.Tx) error {
+			time.Sleep(writeSleepTime)
+			return nil
+		})
+		if err != nil {
+			tc.t.Errorf("Unexpected error in concurrent update: %v",
+				err)
+			resultChan <- false
+			return
+		}
+		resultChan <- true
+	}
+	numWriters := 3
+	startTime = time.Now()
+	for i := 0; i < numWriters; i++ {
+		go writer()
+	}
+	for i := 0; i < numWriters; i++ {
+		if result := <-resultChan; !result {
+			return false
+		}
+	}
+	elapsed = time.Now().Sub(startTime)
+	tc.t.Logf("%d concurrent writers elapsed using sleep time %v: %v",
+		numWriters, writeSleepTime, elapsed)
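The snapshot property exercised by the readers above condenses to the following sketch, which mirrors the started/writeComplete handshake (snapshotDemo is an illustrative name):

```go
// snapshotDemo opens a view, then commits a write, and verifies the view
// still cannot see the new key because its snapshot predates the write.
func snapshotDemo(db database.DB) error {
	started := make(chan struct{})
	writeDone := make(chan struct{})
	viewErr := make(chan error, 1)
	go func() {
		viewErr <- db.View(func(tx database.Tx) error {
			started <- struct{}{}
			<-writeDone // the write below has committed by now

			if tx.Metadata().Get([]byte("snapkey")) != nil {
				return fmt.Errorf("snapshot saw later write")
			}
			return nil
		})
	}()
	<-started
	err := db.Update(func(tx database.Tx) error {
		return tx.Metadata().Put([]byte("snapkey"), []byte("v"))
	})
	close(writeDone)
	if err != nil {
		return err
	}
	return <-viewErr
}
```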
+	// The total time must have been at least the sum of all sleeps if the
+	// writes blocked properly.
+	if elapsed < writeSleepTime*time.Duration(numWriters) {
+		tc.t.Errorf("Concurrent writes appeared to run simultaneously: "+
+			"elapsed %v", elapsed)
+		return false
+	}
+
+	return true
+}
+
+// testConcurrentClose ensures that closing the database with open transactions
+// blocks until the transactions are finished.
+//
+// The database will be closed upon returning from this function.
+func testConcurrentClose(tc *testContext) bool {
+	// Start up a few readers and wait for them to acquire views. Each
+	// reader waits for a signal to complete to ensure the transactions stay
+	// open until they are explicitly signalled to be closed.
+	var activeReaders int32
+	numReaders := 3
+	started := make(chan struct{})
+	finishReaders := make(chan struct{})
+	resultChan := make(chan bool, numReaders+1)
+	reader := func() {
+		err := tc.db.View(func(tx database.Tx) error {
+			atomic.AddInt32(&activeReaders, 1)
+			started <- struct{}{}
+			<-finishReaders
+			atomic.AddInt32(&activeReaders, -1)
+			return nil
+		})
+		if err != nil {
+			tc.t.Errorf("Unexpected error in concurrent view: %v",
+				err)
+			resultChan <- false
+			return
+		}
+		resultChan <- true
+	}
+	for i := 0; i < numReaders; i++ {
+		go reader()
+	}
+	for i := 0; i < numReaders; i++ {
+		<-started
+	}
+
+	// Close the database in a separate goroutine. This should block until
+	// the transactions are finished. Once the close has taken place, the
+	// dbClosed channel is closed to signal the main goroutine below.
+	dbClosed := make(chan struct{})
+	go func() {
+		started <- struct{}{}
+		err := tc.db.Close()
+		close(dbClosed)
+		if err != nil {
+			tc.t.Errorf("Unexpected error in concurrent close: %v",
+				err)
+			resultChan <- false
+			return
+		}
+		resultChan <- true
+	}()
+	<-started
+
+	// Wait a short period and then signal the reader transactions to
+	// finish. When the db closed channel is received, ensure there are no
+	// active readers open.
+	time.AfterFunc(time.Millisecond*250, func() { close(finishReaders) })
+	<-dbClosed
+	if nr := atomic.LoadInt32(&activeReaders); nr != 0 {
+		tc.t.Errorf("Close did not appear to block with active "+
+			"readers: %d active", nr)
+		return false
+	}
+
+	// Wait for all results.
+	for i := 0; i < numReaders+1; i++ {
+		if result := <-resultChan; !result {
+			return false
+		}
+	}
+
+	return true
+}
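The blocking-close behavior can likewise be condensed into a small sketch (demoBlockingClose is an illustrative name; the database is assumed open):

```go
// demoBlockingClose holds a view open while Close runs in another
// goroutine; Close cannot return until the view is released.
func demoBlockingClose(db database.DB) error {
	started := make(chan struct{})
	release := make(chan struct{})
	viewErr := make(chan error, 1)
	go func() {
		viewErr <- db.View(func(tx database.Tx) error {
			close(started)
			<-release // hold the transaction open
			return nil
		})
	}()
	<-started

	closeErr := make(chan error, 1)
	go func() { closeErr <- db.Close() }()

	// Releasing the reader lets both the view and the close finish.
	close(release)
	if err := <-viewErr; err != nil {
		return err
	}
	return <-closeErr
}
```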
+// testInterface performs tests for the various interfaces of the database
+// package which require state in the database for the given database type.
+func testInterface(t *testing.T, db database.DB) {
+	// Create a test context to pass around.
+	context := testContext{t: t, db: db}
+
+	// Load the test blocks and store in the test context for use throughout
+	// the tests.
+	blocks, err := loadBlocks(t, blockDataFile, blockDataNet)
+	if err != nil {
+		t.Errorf("loadBlocks: Unexpected error: %v", err)
+		return
+	}
+	context.blocks = blocks
+
+	// Test the transaction metadata interface including managed and manual
+	// transactions as well as buckets.
+	if !testMetadataTxInterface(&context) {
+		return
+	}
+
+	// Test the transaction block IO interface using managed and manual
+	// transactions. This function leaves all of the stored blocks in the
+	// database since they're used later.
+	if !testBlockIOTxInterface(&context) {
+		return
+	}
+
+	// Test that all of the transaction interface functions work as
+	// expected against a closed transaction.
+	if !testTxClosed(&context) {
+		return
+	}
+
+	// Test the database properly supports concurrency.
+	if !testConcurrency(&context) {
+		return
+	}
+
+	// Test that closing the database with open transactions blocks until
+	// the transactions are finished.
+	//
+	// The database will be closed upon returning from this function, so it
+	// must be the last thing called.
+	testConcurrentClose(&context)
+}
diff --git a/database2/ffldb/ldbtreapiter.go b/database2/ffldb/ldbtreapiter.go
new file mode 100644
index 00000000000..91abbdcec52
--- /dev/null
+++ b/database2/ffldb/ldbtreapiter.go
@@ -0,0 +1,58 @@
+// Copyright (c) 2015 The btcsuite developers
+// Use of this source code is governed by an ISC
+// license that can be found in the LICENSE file.
+
+package ffldb
+
+import (
+	"github.com/btcsuite/btcd/database2/internal/treap"
+	"github.com/btcsuite/goleveldb/leveldb/iterator"
+	"github.com/btcsuite/goleveldb/leveldb/util"
+)
+
+// ldbTreapIter wraps a treap iterator to provide the additional functionality
+// needed to satisfy the leveldb iterator.Iterator interface.
+type ldbTreapIter struct {
+	*treap.Iterator
+	tx       *transaction
+	released bool
+}
+
+// Enforce ldbTreapIter implements the leveldb iterator.Iterator interface.
+var _ iterator.Iterator = (*ldbTreapIter)(nil)
+
+// Error is only provided to satisfy the iterator interface as there are no
+// errors for this memory-only structure.
+//
+// This is part of the leveldb iterator.Iterator interface implementation.
+func (iter *ldbTreapIter) Error() error {
+	return nil
+}
+
+// SetReleaser is only provided to satisfy the iterator interface as there is
+// no need to override it.
+//
+// This is part of the leveldb iterator.Iterator interface implementation.
+func (iter *ldbTreapIter) SetReleaser(releaser util.Releaser) {
+}
+
+// Release releases the iterator by removing the underlying treap iterator from
+// the list of active iterators against the pending keys treap.
+//
+// This is part of the leveldb iterator.Iterator interface implementation.
+func (iter *ldbTreapIter) Release() {
+	if !iter.released {
+		iter.tx.removeActiveIter(iter.Iterator)
+		iter.released = true
+	}
+}
+
+// newLdbTreapIter creates a new treap iterator for the given slice against the
+// pending keys for the passed transaction and returns it wrapped in an
+// ldbTreapIter so it can be used as a leveldb iterator. It also adds the new
+// iterator to the list of active iterators for the transaction.
+func newLdbTreapIter(tx *transaction, slice *util.Range) *ldbTreapIter {
+	iter := tx.pendingKeys.Iterator(slice.Start, slice.Limit)
+	tx.addActiveIter(iter)
+	return &ldbTreapIter{Iterator: iter, tx: tx}
+}
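As an in-package usage sketch (assuming tx is an open *transaction with pending writes, and that a nil Start/Limit range covers all pending keys), the adapter behaves like any other leveldb iterator:

```go
// dumpPending walks every pending key/value pair in tx in key order;
// process is a hypothetical callback.
func dumpPending(tx *transaction, process func(k, v []byte)) {
	iter := newLdbTreapIter(tx, &util.Range{})
	for ok := iter.First(); ok; ok = iter.Next() {
		process(iter.Key(), iter.Value())
	}
	// Release unregisters the iterator from the transaction's active set.
	iter.Release()
}
```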
diff --git a/database2/ffldb/mockfile_test.go b/database2/ffldb/mockfile_test.go
new file mode 100644
index 00000000000..b61e9a06606
--- /dev/null
+++ b/database2/ffldb/mockfile_test.go
@@ -0,0 +1,163 @@
+// Copyright (c) 2015 The btcsuite developers
+// Use of this source code is governed by an ISC
+// license that can be found in the LICENSE file.
+
+// This file is part of the ffldb package rather than the ffldb_test package as
+// it is part of the whitebox testing.
+
+package ffldb
+
+import (
+	"errors"
+	"io"
+	"sync"
+)
+
+// Errors used for the mock file.
+var (
+	// errMockFileClosed is used to indicate a mock file is closed.
+	errMockFileClosed = errors.New("file closed")
+
+	// errInvalidOffset is used to indicate an offset that is out of range
+	// for the file was provided.
+	errInvalidOffset = errors.New("invalid offset")
+
+	// errSyncFail is used to indicate a simulated sync failure.
+	errSyncFail = errors.New("simulated sync failure")
+)
+
+// mockFile implements the filer interface and is used to force failures in
+// the database code related to reading from and writing to the flat block
+// files. A maxSize of -1 is unlimited.
+type mockFile struct {
+	sync.RWMutex
+	maxSize      int64
+	data         []byte
+	forceSyncErr bool
+	closed       bool
+}
+
+// Close closes the mock file without releasing any data associated with it.
+// This allows it to be "reopened" without losing the data.
+//
+// This is part of the filer implementation.
+func (f *mockFile) Close() error {
+	f.Lock()
+	defer f.Unlock()
+
+	if f.closed {
+		return errMockFileClosed
+	}
+	f.closed = true
+	return nil
+}
+
+// ReadAt reads len(b) bytes from the mock file starting at byte offset off.
+// It returns the number of bytes read and the error, if any. ReadAt always
+// returns a non-nil error when n < len(b). At end of file, that error is
+// io.EOF.
+//
+// This is part of the filer implementation.
+func (f *mockFile) ReadAt(b []byte, off int64) (int, error) {
+	f.RLock()
+	defer f.RUnlock()
+
+	if f.closed {
+		return 0, errMockFileClosed
+	}
+	maxSize := int64(len(f.data))
+	if f.maxSize > -1 && maxSize > f.maxSize {
+		maxSize = f.maxSize
+	}
+	if off < 0 || off > maxSize {
+		return 0, errInvalidOffset
+	}
+
+	// Limit to the max size field, if set.
+	numToRead := int64(len(b))
+	endOffset := off + numToRead
+	if endOffset > maxSize {
+		numToRead = maxSize - off
+	}
+
+	copy(b, f.data[off:off+numToRead])
+	if numToRead < int64(len(b)) {
+		return int(numToRead), io.EOF
+	}
+	return int(numToRead), nil
+}
+
+// Truncate changes the size of the mock file.
+//
+// This is part of the filer implementation.
+func (f *mockFile) Truncate(size int64) error {
+	f.Lock()
+	defer f.Unlock()
+
+	if f.closed {
+		return errMockFileClosed
+	}
+	maxSize := int64(len(f.data))
+	if f.maxSize > -1 && maxSize > f.maxSize {
+		maxSize = f.maxSize
+	}
+	if size > maxSize {
+		return errInvalidOffset
+	}
+
+	f.data = f.data[:size]
+	return nil
+}
+
+// WriteAt writes len(b) bytes to the mock file starting at byte offset off.
+// It returns the number of bytes written and an error, if any. WriteAt
+// returns a non-nil error any time n != len(b).
+//
+// This is part of the filer implementation.
+func (f *mockFile) WriteAt(b []byte, off int64) (int, error) {
+	f.Lock()
+	defer f.Unlock()
+
+	if f.closed {
+		return 0, errMockFileClosed
+	}
+	maxSize := f.maxSize
+	if maxSize < 0 {
+		maxSize = 100 * 1024 // 100KiB
+	}
+	if off < 0 || off > maxSize {
+		return 0, errInvalidOffset
+	}
+
+	// Limit to the max size field, if set, and grow the slice if needed.
+	numToWrite := int64(len(b))
+	if off+numToWrite > maxSize {
+		numToWrite = maxSize - off
+	}
+	if off+numToWrite > int64(len(f.data)) {
+		newData := make([]byte, off+numToWrite)
+		copy(newData, f.data)
+		f.data = newData
+	}
+
+	copy(f.data[off:], b[:numToWrite])
+	if numToWrite < int64(len(b)) {
+		return int(numToWrite), io.EOF
+	}
+	return int(numToWrite), nil
+}
+
+// Sync doesn't do anything for mock files. However, it will return an error
+// if the mock file's forceSyncErr flag is set.
+//
+// This is part of the filer implementation.
+func (f *mockFile) Sync() error {
+	if f.forceSyncErr {
+		return errSyncFail
+	}
+
+	return nil
+}
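A quick in-package sketch of the size-capping behavior implemented above; a write that crosses maxSize is truncated and reported with io.EOF, which is what lets the whitebox tests simulate full or failing files:

```go
// demoMockShortWrite writes ten bytes against an 8-byte cap; per the
// WriteAt logic above it returns n == 8 and io.EOF.
func demoMockShortWrite() (int, error) {
	f := &mockFile{maxSize: 8}
	return f.WriteAt([]byte("0123456789"), 0)
}
```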
+// Ensure the mockFile type implements the filer interface.
+var _ filer = (*mockFile)(nil)
diff --git a/database2/ffldb/reconcile.go b/database2/ffldb/reconcile.go
new file mode 100644
index 00000000000..59f73c21af3
--- /dev/null
+++ b/database2/ffldb/reconcile.go
@@ -0,0 +1,117 @@
+// Copyright (c) 2015 The btcsuite developers
+// Use of this source code is governed by an ISC
+// license that can be found in the LICENSE file.
+
+package ffldb
+
+import (
+	"fmt"
+	"hash/crc32"
+
+	database "github.com/btcsuite/btcd/database2"
+)
+
+// The serialized write cursor location format is:
+//
+//  [0:4]  Block file (4 bytes)
+//  [4:8]  File offset (4 bytes)
+//  [8:12] Castagnoli CRC-32 checksum (4 bytes)
+
+// serializeWriteRow serializes the current block file and offset where new
+// block data will be written into a format suitable for storage in the
+// metadata.
+func serializeWriteRow(curBlockFileNum, curFileOffset uint32) []byte {
+	var serializedRow [12]byte
+	byteOrder.PutUint32(serializedRow[0:4], curBlockFileNum)
+	byteOrder.PutUint32(serializedRow[4:8], curFileOffset)
+	checksum := crc32.Checksum(serializedRow[:8], castagnoli)
+	byteOrder.PutUint32(serializedRow[8:12], checksum)
+	return serializedRow[:]
+}
+
+// deserializeWriteRow deserializes the write cursor location stored in the
+// metadata. Returns ErrCorruption if the checksum of the entry doesn't match.
+func deserializeWriteRow(writeRow []byte) (uint32, uint32, error) {
+	// Ensure the checksum matches. The checksum is at the end.
+	gotChecksum := crc32.Checksum(writeRow[:8], castagnoli)
+	wantChecksumBytes := writeRow[8:12]
+	wantChecksum := byteOrder.Uint32(wantChecksumBytes)
+	if gotChecksum != wantChecksum {
+		str := fmt.Sprintf("metadata for write cursor does not match "+
+			"the expected checksum - got %d, want %d", gotChecksum,
+			wantChecksum)
+		return 0, 0, makeDbErr(database.ErrCorruption, str, nil)
+	}
+
+	fileNum := byteOrder.Uint32(writeRow[0:4])
+	fileOffset := byteOrder.Uint32(writeRow[4:8])
+	return fileNum, fileOffset, nil
+}
+
+// reconcileDB reconciles the metadata with the flat block files on disk. It
+// will also initialize the underlying database if the create flag is set.
+func reconcileDB(pdb *db, create bool) (database.DB, error) {
+	// Perform initial internal bucket and value creation during database
+	// creation.
+	if create {
+		if err := initDB(pdb.ldb); err != nil {
+			return nil, err
+		}
+	}
+
+	// Load the current write cursor position from the metadata.
+	var curFileNum, curOffset uint32
+	err := pdb.View(func(tx database.Tx) error {
+		writeRow := tx.Metadata().Get(writeLocKeyName)
+		if writeRow == nil {
+			str := "write cursor does not exist"
+			return makeDbErr(database.ErrCorruption, str, nil)
+		}
+
+		var err error
+		curFileNum, curOffset, err = deserializeWriteRow(writeRow)
+		return err
+	})
+	if err != nil {
+		return nil, err
+	}
+
+	// When the write cursor position found by scanning the block files on
+	// disk is AFTER the position the metadata believes to be true, truncate
+	// the files on disk to match the metadata. This can be a fairly common
+	// occurrence in unclean shutdown scenarios while the block files are in
+	// the middle of being written. Since the metadata isn't updated until
+	// after the block data is written, this is effectively just a rollback
+	// to the known good point before the unclean shutdown.
+	wc := pdb.store.writeCursor
+	if wc.curFileNum > curFileNum || (wc.curFileNum == curFileNum &&
+		wc.curOffset > curOffset) {

+		log.Info("Detected unclean shutdown - Repairing...")
+		log.Debugf("Metadata claims file %d, offset %d. 
Block data is "+ + "at file %d, offset %d", curFileNum, curOffset, + wc.curFileNum, wc.curOffset) + pdb.store.handleRollback(curFileNum, curOffset) + log.Infof("Database sync complete") + } + + // When the write cursor position found by scanning the block files on + // disk is BEFORE the position the metadata believes to be true, return + // a corruption error. Since sync is called after each block is written + // and before the metadata is updated, this should only happen in the + // case of missing, deleted, or truncated block files, which generally + // is not an easily recoverable scenario. In the future, it might be + // possible to rescan and rebuild the metadata from the block files, + // however, that would need to happen with coordination from a higher + // layer since it could invalidate other metadata. + if wc.curFileNum < curFileNum || (wc.curFileNum == curFileNum && + wc.curOffset < curOffset) { + + str := fmt.Sprintf("metadata claims file %d, offset %d, but "+ + "block data is at file %d, offset %d", curFileNum, + curOffset, wc.curFileNum, wc.curOffset) + _ = log.Warnf("***Database corruption detected***: %v", str) + return nil, makeDbErr(database.ErrCorruption, str, nil) + } + + return pdb, nil +} diff --git a/database2/ffldb/whitebox_test.go b/database2/ffldb/whitebox_test.go new file mode 100644 index 00000000000..c9863475b06 --- /dev/null +++ b/database2/ffldb/whitebox_test.go @@ -0,0 +1,721 @@ +// Copyright (c) 2015 The btcsuite developers +// Use of this source code is governed by an ISC +// license that can be found in the LICENSE file. + +// This file is part of the ffldb package rather than the ffldb_test package as +// it provides whitebox testing. + +package ffldb + +import ( + "compress/bzip2" + "encoding/binary" + "fmt" + "hash/crc32" + "io" + "os" + "path/filepath" + "testing" + + "github.com/btcsuite/btcd/chaincfg" + database "github.com/btcsuite/btcd/database2" + "github.com/btcsuite/btcd/wire" + "github.com/btcsuite/btcutil" + "github.com/btcsuite/goleveldb/leveldb" + ldberrors "github.com/btcsuite/goleveldb/leveldb/errors" +) + +var ( + // blockDataNet is the expected network in the test block data. + blockDataNet = wire.MainNet + + // blockDataFile is the path to a file containing the first 256 blocks + // of the block chain. + blockDataFile = filepath.Join("..", "testdata", "blocks1-256.bz2") + + // errSubTestFail is used to signal that a sub test returned false. + errSubTestFail = fmt.Errorf("sub test failure") +) + +// loadBlocks loads the blocks contained in the testdata directory and returns +// a slice of them. +func loadBlocks(t *testing.T, dataFile string, network wire.BitcoinNet) ([]*btcutil.Block, error) { + // Open the file that contains the blocks for reading. + fi, err := os.Open(dataFile) + if err != nil { + t.Errorf("failed to open file %v, err %v", dataFile, err) + return nil, err + } + defer func() { + if err := fi.Close(); err != nil { + t.Errorf("failed to close file %v %v", dataFile, + err) + } + }() + dr := bzip2.NewReader(fi) + + // Set the first block as the genesis block. + blocks := make([]*btcutil.Block, 0, 256) + genesis := btcutil.NewBlock(chaincfg.MainNetParams.GenesisBlock) + blocks = append(blocks, genesis) + + // Load the remaining blocks. + for height := 1; ; height++ { + var net uint32 + err := binary.Read(dr, binary.LittleEndian, &net) + if err == io.EOF { + // Hit end of file at the expected offset. No error. 
+			break
+		}
+		if err != nil {
+			t.Errorf("Failed to load network type for block %d: %v",
+				height, err)
+			return nil, err
+		}
+		if net != uint32(network) {
+			t.Errorf("Block doesn't match network: %v expects %v",
+				net, network)
+			return nil, fmt.Errorf("block %d doesn't match "+
+				"network: %v expects %v", height, net, network)
+		}
+
+		var blockLen uint32
+		err = binary.Read(dr, binary.LittleEndian, &blockLen)
+		if err != nil {
+			t.Errorf("Failed to load block size for block %d: %v",
+				height, err)
+			return nil, err
+		}
+
+		// Read the block.
+		blockBytes := make([]byte, blockLen)
+		_, err = io.ReadFull(dr, blockBytes)
+		if err != nil {
+			t.Errorf("Failed to load block %d: %v", height, err)
+			return nil, err
+		}
+
+		// Deserialize and store the block.
+		block, err := btcutil.NewBlockFromBytes(blockBytes)
+		if err != nil {
+			t.Errorf("Failed to parse block %v: %v", height, err)
+			return nil, err
+		}
+		blocks = append(blocks, block)
+	}
+
+	return blocks, nil
+}
+
+// checkDbError ensures the passed error is a database.Error with an error code
+// that matches the passed error code.
+func checkDbError(t *testing.T, testName string, gotErr error, wantErrCode database.ErrorCode) bool {
+	dbErr, ok := gotErr.(database.Error)
+	if !ok {
+		t.Errorf("%s: unexpected error type - got %T, want %T",
+			testName, gotErr, database.Error{})
+		return false
+	}
+	if dbErr.ErrorCode != wantErrCode {
+		t.Errorf("%s: unexpected error code - got %s (%s), want %s",
+			testName, dbErr.ErrorCode, dbErr.Description,
+			wantErrCode)
+		return false
+	}
+
+	return true
+}
+
+// testContext is used to store context information about a running test which
+// is passed into helper functions.
+type testContext struct {
+	t            *testing.T
+	db           database.DB
+	files        map[uint32]*lockableFile
+	maxFileSizes map[uint32]int64
+	blocks       []*btcutil.Block
+}
+
+// TestConvertErr ensures the leveldb error to database error conversion works
+// as expected.
+func TestConvertErr(t *testing.T) {
+	t.Parallel()
+
+	tests := []struct {
+		err         error
+		wantErrCode database.ErrorCode
+	}{
+		{&ldberrors.ErrCorrupted{}, database.ErrCorruption},
+		{leveldb.ErrClosed, database.ErrDbNotOpen},
+		{leveldb.ErrSnapshotReleased, database.ErrTxClosed},
+		{leveldb.ErrIterReleased, database.ErrTxClosed},
+	}
+
+	for i, test := range tests {
+		gotErr := convertErr("test", test.err)
+		if gotErr.ErrorCode != test.wantErrCode {
+			t.Errorf("convertErr #%d unexpected error - got %v, "+
+				"want %v", i, gotErr.ErrorCode, test.wantErrCode)
+			continue
+		}
+	}
+}
+
+// TestCornerCases ensures several corner cases which can happen when opening
+// a database and/or block files work as expected.
+func TestCornerCases(t *testing.T) {
+	t.Parallel()
+
+	// Create a file at the database path to force the open below to fail.
+	dbPath := filepath.Join(os.TempDir(), "ffldb-errors")
+	_ = os.RemoveAll(dbPath)
+	fi, err := os.Create(dbPath)
+	if err != nil {
+		t.Errorf("os.Create: unexpected error: %v", err)
+		return
+	}
+	fi.Close()
+
+	// Ensure creating a new database fails when a file exists where a
+	// directory is needed.
+	testName := "openDB: fail due to file at target location"
+	wantErrCode := database.ErrDriverSpecific
+	idb, err := openDB(dbPath, blockDataNet, true)
+	if !checkDbError(t, testName, err, wantErrCode) {
+		if err == nil {
+			idb.Close()
+		}
+		_ = os.RemoveAll(dbPath)
+		return
+	}
+
+	// Remove the file and create the database to run tests against. It
+	// should be successful this time.
+	_ = os.RemoveAll(dbPath)
+	idb, err = openDB(dbPath, blockDataNet, true)
+	if err != nil {
+		t.Errorf("openDB: unexpected error: %v", err)
+		return
+	}
+	defer os.RemoveAll(dbPath)
+	defer idb.Close()
+
+	// Ensure attempting to write to a file that can't be created returns
+	// the expected error.
+	testName = "writeBlock: open file failure"
+	filePath := blockFilePath(dbPath, 0)
+	if err := os.Mkdir(filePath, 0755); err != nil {
+		t.Errorf("os.Mkdir: unexpected error: %v", err)
+		return
+	}
+	store := idb.(*db).store
+	_, err = store.writeBlock([]byte{0x00})
+	if !checkDbError(t, testName, err, database.ErrDriverSpecific) {
+		return
+	}
+	_ = os.RemoveAll(filePath)
+
+	// Start a transaction and close the underlying leveldb database out
+	// from under it.
+	dbTx, err := idb.Begin(true)
+	if err != nil {
+		t.Errorf("Begin: unexpected error: %v", err)
+		return
+	}
+	ldb := idb.(*db).ldb
+	ldb.Close()
+
+	// Ensure initialization errors in the underlying database work as
+	// expected.
+	testName = "initDB: reinitialization"
+	wantErrCode = database.ErrDbNotOpen
+	err = initDB(ldb)
+	if !checkDbError(t, testName, err, wantErrCode) {
+		return
+	}
+
+	// Ensure errors in the underlying database during a transaction commit
+	// are handled properly.
+	testName = "Commit: underlying leveldb error"
+	wantErrCode = database.ErrDbNotOpen
+	err = dbTx.Commit()
+	if !checkDbError(t, testName, err, wantErrCode) {
+		return
+	}
+
+	// Ensure View handles errors in the underlying leveldb database
+	// properly.
+	testName = "View: underlying leveldb error"
+	wantErrCode = database.ErrDbNotOpen
+	err = idb.View(func(tx database.Tx) error {
+		return nil
+	})
+	if !checkDbError(t, testName, err, wantErrCode) {
+		return
+	}
+
+	// Ensure Update handles errors in the underlying leveldb database
+	// properly.
+	testName = "Update: underlying leveldb error"
+	err = idb.Update(func(tx database.Tx) error {
+		return nil
+	})
+	if !checkDbError(t, testName, err, wantErrCode) {
+		return
+	}
+}
+
+// resetDatabase removes everything from the opened database associated with
+// the test context, including all metadata and the mock files.
+func resetDatabase(tc *testContext) bool {
+	// Reset the metadata.
+	err := tc.db.Update(func(tx database.Tx) error {
+		// Remove all the keys using a cursor while also generating a
+		// list of buckets. It's not safe to remove keys during ForEach
+		// iteration nor is it safe to remove buckets during cursor
+		// iteration, so this dual approach is needed.
+		var bucketNames [][]byte
+		cursor := tx.Metadata().Cursor()
+		for ok := cursor.First(); ok; ok = cursor.Next() {
+			if cursor.Value() != nil {
+				if err := cursor.Delete(); err != nil {
+					return err
+				}
+			} else {
+				bucketNames = append(bucketNames, cursor.Key())
+			}
+		}
+
+		// Remove the buckets.
+		for _, k := range bucketNames {
+			if err := tx.Metadata().DeleteBucket(k); err != nil {
+				return err
+			}
+		}
+
+		_, err := tx.Metadata().CreateBucket(blockIdxBucketName)
+		return err
+	})
+	if err != nil {
+		tc.t.Errorf("Update: unexpected error: %v", err)
+		return false
+	}
+
+	// Reset the mock files.
+	store := tc.db.(*db).store
+	wc := store.writeCursor
+	wc.curFile.Lock()
+	if wc.curFile.file != nil {
+		wc.curFile.file.Close()
+		wc.curFile.file = nil
+	}
+	wc.curFile.Unlock()
+	wc.Lock()
+	wc.curFileNum = 0
+	wc.curOffset = 0
+	wc.Unlock()
+	tc.files = make(map[uint32]*lockableFile)
+	tc.maxFileSizes = make(map[uint32]int64)
+	return true
+}
+
+// testWriteFailures tests various failure paths when writing to the block
+// files.
+func testWriteFailures(tc *testContext) bool {
+	if !resetDatabase(tc) {
+		return false
+	}
+
+	// Ensure file sync errors during writeBlock return the expected error.
+	store := tc.db.(*db).store
+	testName := "writeBlock: file sync failure"
+	store.writeCursor.Lock()
+	oldFile := store.writeCursor.curFile
+	store.writeCursor.curFile = &lockableFile{
+		file: &mockFile{forceSyncErr: true, maxSize: -1},
+	}
+	store.writeCursor.Unlock()
+	_, err := store.writeBlock([]byte{0x00})
+	if !checkDbError(tc.t, testName, err, database.ErrDriverSpecific) {
+		return false
+	}
+	store.writeCursor.Lock()
+	store.writeCursor.curFile = oldFile
+	store.writeCursor.Unlock()
+
+	// Force errors in the various error paths when writing data by using
+	// mock files with a limited max size.
+	block0Bytes, _ := tc.blocks[0].Bytes()
+	tests := []struct {
+		fileNum uint32
+		maxSize int64
+	}{
+		// Force an error when writing the network bytes.
+		{fileNum: 0, maxSize: 2},
+
+		// Force an error when writing the block size.
+		{fileNum: 0, maxSize: 6},
+
+		// Force an error when writing the block.
+		{fileNum: 0, maxSize: 17},
+
+		// Force an error when writing the checksum.
+		{fileNum: 0, maxSize: int64(len(block0Bytes)) + 10},
+
+		// Force an error after writing enough blocks to force multiple
+		// files.
+		{fileNum: 15, maxSize: 1},
+	}
+
+	for i, test := range tests {
+		if !resetDatabase(tc) {
+			return false
+		}
+
+		// Ensure storing the specified number of blocks using a mock
+		// file that fails the write fails when the transaction is
+		// committed, not when the block is stored.
+		tc.maxFileSizes = map[uint32]int64{test.fileNum: test.maxSize}
+		err := tc.db.Update(func(tx database.Tx) error {
+			for i, block := range tc.blocks {
+				err := tx.StoreBlock(block)
+				if err != nil {
+					tc.t.Errorf("StoreBlock (%d): unexpected "+
+						"error: %v", i, err)
+					return errSubTestFail
+				}
+			}
+
+			return nil
+		})
+		testName := fmt.Sprintf("Force update commit failure - test "+
+			"%d, fileNum %d, maxsize %d", i, test.fileNum,
+			test.maxSize)
+		if !checkDbError(tc.t, testName, err, database.ErrDriverSpecific) {
+			tc.t.Errorf("%v", err)
+			return false
+		}
+
+		// Ensure the commit rollback removed all extra files and data.
+		if len(tc.files) != 1 {
+			tc.t.Errorf("Update rollback: extra files not removed "+
+				"- want 1 file, got %d", len(tc.files))
+			return false
+		}
+		if _, ok := tc.files[0]; !ok {
+			tc.t.Error("Update rollback: file 0 does not exist")
+			return false
+		}
+		file := tc.files[0].file.(*mockFile)
+		if len(file.data) != 0 {
+			tc.t.Errorf("Update rollback: file did not truncate - "+
+				"want len 0, got len %d", len(file.data))
+			return false
+		}
+	}
+
+	return true
+}
+
+// testBlockFileErrors ensures the database returns expected errors with
+// various file-related issues such as closed and missing files.
+func testBlockFileErrors(tc *testContext) bool {
+	if !resetDatabase(tc) {
+		return false
+	}
+
+	// Ensure errors in blockFile and openFile when requesting invalid file
+	// numbers.
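+	// The max file number (^uint32(0)) is used here since the mock
+	// openFileFunc installed by TestFailureScenarios is set up to reject
+	// it, making it a convenient invalid file number for these checks.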
+ store := tc.db.(*db).store + testName := "blockFile invalid file open" + _, err := store.blockFile(^uint32(0)) + if !checkDbError(tc.t, testName, err, database.ErrDriverSpecific) { + return false + } + testName = "openFile invalid file open" + _, err = store.openFile(^uint32(0)) + if !checkDbError(tc.t, testName, err, database.ErrDriverSpecific) { + return false + } + + // Insert the first block into the mock file. + err = tc.db.Update(func(tx database.Tx) error { + err := tx.StoreBlock(tc.blocks[0]) + if err != nil { + tc.t.Errorf("StoreBlock: unexpected error: %v", err) + return errSubTestFail + } + + return nil + }) + if err != nil { + if err != errSubTestFail { + tc.t.Errorf("Update: unexpected error: %v", err) + } + return false + } + + // Ensure errors in readBlock and readBlockRegion when requesting a file + // number that doesn't exist. + block0Hash := tc.blocks[0].Sha() + testName = "readBlock invalid file number" + invalidLoc := blockLocation{ + blockFileNum: ^uint32(0), + blockLen: 80, + } + _, err = store.readBlock(block0Hash, invalidLoc) + if !checkDbError(tc.t, testName, err, database.ErrDriverSpecific) { + return false + } + testName = "readBlockRegion invalid file number" + _, err = store.readBlockRegion(invalidLoc, 0, 80) + if !checkDbError(tc.t, testName, err, database.ErrDriverSpecific) { + return false + } + + // Close the block file out from under the database. + store.writeCursor.curFile.Lock() + store.writeCursor.curFile.file.Close() + store.writeCursor.curFile.Unlock() + + // Ensure failures in FetchBlock and FetchBlockRegion(s) since the + // underlying file they need to read from has been closed. + err = tc.db.View(func(tx database.Tx) error { + testName = "FetchBlock closed file" + wantErrCode := database.ErrDriverSpecific + _, err := tx.FetchBlock(block0Hash) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return errSubTestFail + } + + testName = "FetchBlockRegion closed file" + regions := []database.BlockRegion{ + { + Hash: block0Hash, + Len: 80, + Offset: 0, + }, + } + _, err = tx.FetchBlockRegion(®ions[0]) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return errSubTestFail + } + + testName = "FetchBlockRegions closed file" + _, err = tx.FetchBlockRegions(regions) + if !checkDbError(tc.t, testName, err, wantErrCode) { + return errSubTestFail + } + + return nil + }) + if err != nil { + if err != errSubTestFail { + tc.t.Errorf("View: unexpected error: %v", err) + } + return false + } + + return true +} + +// testCorruption ensures the database returns expected errors under various +// corruption scenarios. +func testCorruption(tc *testContext) bool { + if !resetDatabase(tc) { + return false + } + + // Insert the first block into the mock file. + err := tc.db.Update(func(tx database.Tx) error { + err := tx.StoreBlock(tc.blocks[0]) + if err != nil { + tc.t.Errorf("StoreBlock: unexpected error: %v", err) + return errSubTestFail + } + + return nil + }) + if err != nil { + if err != errSubTestFail { + tc.t.Errorf("Update: unexpected error: %v", err) + } + return false + } + + // Ensure corruption is detected by intentionally modifying the bytes + // stored to the mock file and reading the block. + block0Bytes, _ := tc.blocks[0].Bytes() + block0Hash := tc.blocks[0].Sha() + tests := []struct { + offset uint32 + fixChecksum bool + wantErrCode database.ErrorCode + }{ + // One of the network bytes. The checksum needs to be fixed so + // the invalid network is detected. 
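+		//
+		// (Each mock file entry is laid out the same way as the real
+		// flat files: 4-byte network, 4-byte block length, the
+		// serialized block, then a trailing 4-byte Castagnoli CRC-32
+		// checksum, which is what the offsets below are poking at.)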
+ {2, true, database.ErrDriverSpecific}, + + // The same network byte, but this time don't fix the checksum + // to ensure the corruption is detected. + {2, false, database.ErrCorruption}, + + // One of the block length bytes. + {6, false, database.ErrCorruption}, + + // Random header byte. + {17, false, database.ErrCorruption}, + + // Random transaction byte. + {90, false, database.ErrCorruption}, + + // Random checksum byte. + {uint32(len(block0Bytes)) + 10, false, database.ErrCorruption}, + } + err = tc.db.View(func(tx database.Tx) error { + data := tc.files[0].file.(*mockFile).data + for i, test := range tests { + // Corrupt the byte at the offset by a single bit. + data[test.offset] ^= 0x10 + + // Fix the checksum if requested to force other errors. + fileLen := len(data) + var oldChecksumBytes [4]byte + copy(oldChecksumBytes[:], data[fileLen-4:]) + if test.fixChecksum { + toSum := data[:fileLen-4] + cksum := crc32.Checksum(toSum, castagnoli) + binary.BigEndian.PutUint32(data[fileLen-4:], cksum) + } + + testName := fmt.Sprintf("FetchBlock (test #%d): "+ + "corruption", i) + _, err := tx.FetchBlock(block0Hash) + if !checkDbError(tc.t, testName, err, test.wantErrCode) { + return errSubTestFail + } + + // Reset the corrupted data back to the original. + data[test.offset] ^= 0x10 + if test.fixChecksum { + copy(data[fileLen-4:], oldChecksumBytes[:]) + } + } + + return nil + }) + if err != nil { + if err != errSubTestFail { + tc.t.Errorf("View: unexpected error: %v", err) + } + return false + } + + return true +} + +// TestFailureScenarios ensures several failure scenarios such as database +// corruption, block file write failures, and rollback failures are handled +// correctly. +func TestFailureScenarios(t *testing.T) { + // Create a new database to run tests against. + dbPath := filepath.Join(os.TempDir(), "ffldb-failurescenarios") + _ = os.RemoveAll(dbPath) + idb, err := database.Create(dbType, dbPath, blockDataNet) + if err != nil { + t.Errorf("Failed to create test database (%s) %v", dbType, err) + return + } + defer os.RemoveAll(dbPath) + defer idb.Close() + + // Create a test context to pass around. + tc := &testContext{ + t: t, + db: idb, + files: make(map[uint32]*lockableFile), + maxFileSizes: make(map[uint32]int64), + } + + // Change the maximum file size to a small value to force multiple flat + // files with the test data set and replace the file-related functions + // to make use of mock files in memory. This allows injection of + // various file-related errors. + store := idb.(*db).store + store.maxBlockFileSize = 1024 // 1KiB + store.openWriteFileFunc = func(fileNum uint32) (filer, error) { + if file, ok := tc.files[fileNum]; ok { + // "Reopen" the file. + file.Lock() + mock := file.file.(*mockFile) + mock.Lock() + mock.closed = false + mock.Unlock() + file.Unlock() + return mock, nil + } + + // Limit the max size of the mock file as specified in the test + // context. + maxSize := int64(-1) + if maxFileSize, ok := tc.maxFileSizes[fileNum]; ok { + maxSize = int64(maxFileSize) + } + file := &mockFile{maxSize: int64(maxSize)} + tc.files[fileNum] = &lockableFile{file: file} + return file, nil + } + store.openFileFunc = func(fileNum uint32) (*lockableFile, error) { + // Force error when trying to open max file num. + if fileNum == ^uint32(0) { + return nil, makeDbErr(database.ErrDriverSpecific, + "test", nil) + } + if file, ok := tc.files[fileNum]; ok { + // "Reopen" the file. 
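+			// Clearing the closed flag simulates reopening a mock
+			// file that a test closed out from under the database.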
+			file.Lock()
+			mock := file.file.(*mockFile)
+			mock.Lock()
+			mock.closed = false
+			mock.Unlock()
+			file.Unlock()
+			return file, nil
+		}
+		file := &lockableFile{file: &mockFile{}}
+		tc.files[fileNum] = file
+		return file, nil
+	}
+	store.deleteFileFunc = func(fileNum uint32) error {
+		if file, ok := tc.files[fileNum]; ok {
+			file.Lock()
+			file.file.Close()
+			file.Unlock()
+			delete(tc.files, fileNum)
+			return nil
+		}
+
+		str := fmt.Sprintf("file %d does not exist", fileNum)
+		return makeDbErr(database.ErrDriverSpecific, str, nil)
+	}
+
+	// Load the test blocks and save in the test context for use throughout
+	// the tests.
+	blocks, err := loadBlocks(t, blockDataFile, blockDataNet)
+	if err != nil {
+		t.Errorf("loadBlocks: Unexpected error: %v", err)
+		return
+	}
+	tc.blocks = blocks
+
+	// Test various failure paths when writing to the block files.
+	if !testWriteFailures(tc) {
+		return
+	}
+
+	// Test various file-related issues such as closed and missing files.
+	if !testBlockFileErrors(tc) {
+		return
+	}
+
+	// Test various corruption scenarios.
+	testCorruption(tc)
+}
diff --git a/database2/interface.go b/database2/interface.go
new file mode 100644
index 00000000000..b4342e09b0b
--- /dev/null
+++ b/database2/interface.go
@@ -0,0 +1,455 @@
+// Copyright (c) 2015 The btcsuite developers
+// Use of this source code is governed by an ISC
+// license that can be found in the LICENSE file.
+
+// Parts of this interface were inspired heavily by the excellent boltdb project
+// at https://github.com/boltdb/bolt by Ben B. Johnson.
+
+package database2
+
+import (
+	"github.com/btcsuite/btcd/wire"
+	"github.com/btcsuite/btcutil"
+)
+
+// Cursor represents a cursor over key/value pairs and nested buckets of a
+// bucket.
+//
+// Note that open cursors are not tracked on bucket changes and any
+// modifications to the bucket, with the exception of Cursor.Delete, invalidate
+// the cursor. After invalidation, the cursor must be repositioned, or the keys
+// and values returned may be unpredictable.
+type Cursor interface {
+	// Bucket returns the bucket the cursor was created for.
+	Bucket() Bucket
+
+	// Delete removes the current key/value pair the cursor is at without
+	// invalidating the cursor.
+	//
+	// The interface contract guarantees at least the following errors will
+	// be returned (other implementation-specific errors are possible):
+	// - ErrIncompatibleValue if attempted when the cursor points to a
+	//   nested bucket
+	// - ErrTxNotWritable if attempted against a read-only transaction
+	// - ErrTxClosed if the transaction has already been closed
+	Delete() error
+
+	// First positions the cursor at the first key/value pair and returns
+	// whether or not the pair exists.
+	First() bool
+
+	// Last positions the cursor at the last key/value pair and returns
+	// whether or not the pair exists.
+	Last() bool
+
+	// Next moves the cursor one key/value pair forward and returns whether
+	// or not the pair exists.
+	Next() bool
+
+	// Prev moves the cursor one key/value pair backward and returns whether
+	// or not the pair exists.
+	Prev() bool
+
+	// Seek positions the cursor at the first key/value pair that is greater
+	// than or equal to the passed seek key. Returns whether or not the
+	// pair exists.
+	Seek(seek []byte) bool
+
+	// Key returns the current key the cursor is pointing to.
+	Key() []byte
+
+	// Value returns the current value the cursor is pointing to. This will
+	// be nil for nested buckets.
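+	//
+	// NOTE: As with Get on a Bucket, the returned value is only valid for
+	// the duration of the transaction.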
+ Value() []byte +} + +// Bucket represents a collection of key/value pairs. +type Bucket interface { + // Bucket retrieves a nested bucket with the given key. Returns nil if + // the bucket does not exist. + Bucket(key []byte) Bucket + + // CreateBucket creates and returns a new nested bucket with the given + // key. + // + // The interface contract guarantees at least the following errors will + // be returned (other implementation-specific errors are possible): + // - ErrBucketExists if the bucket already exists + // - ErrBucketNameRequired if the key is empty + // - ErrIncompatibleValue if the key is otherwise invalid for the + // particular implementation + // - ErrTxNotWritable if attempted against a read-only transaction + // - ErrTxClosed if the transaction has already been closed + CreateBucket(key []byte) (Bucket, error) + + // CreateBucketIfNotExists creates and returns a new nested bucket with + // the given key if it does not already exist. + // + // The interface contract guarantees at least the following errors will + // be returned (other implementation-specific errors are possible): + // - ErrBucketNameRequired if the key is empty + // - ErrIncompatibleValue if the key is otherwise invalid for the + // particular implementation + // - ErrTxNotWritable if attempted against a read-only transaction + // - ErrTxClosed if the transaction has already been closed + CreateBucketIfNotExists(key []byte) (Bucket, error) + + // DeleteBucket removes a nested bucket with the given key. + // + // The interface contract guarantees at least the following errors will + // be returned (other implementation-specific errors are possible): + // - ErrBucketNotFound if the specified bucket does not exist + // - ErrTxNotWritable if attempted against a read-only transaction + // - ErrTxClosed if the transaction has already been closed + DeleteBucket(key []byte) error + + // ForEach invokes the passed function with every key/value pair in the + // bucket. This does not include nested buckets or the key/value pairs + // within those nested buckets. + // + // WARNING: It is not safe to mutate data while iterating with this + // method. Doing so may cause the underlying cursor to be invalidated + // and return unexpected keys and/or values. + // + // The interface contract guarantees at least the following errors will + // be returned (other implementation-specific errors are possible): + // - ErrTxClosed if the transaction has already been closed + // + // NOTE: The values returned by this function are only valid during a + // transaction. Attempting to access them after a transaction has ended + // results in undefined behavior. This constraint helps prevent + // additional data copies and allows support for memory-mapped database + // implementations. + ForEach(func(k, v []byte) error) error + + // ForEachBucket invokes the passed function with the key of every + // nested bucket in the current bucket. This does not include any + // nested buckets within those nested buckets. + // + // WARNING: It is not safe to mutate data while iterating with this + // method. Doing so may cause the underlying cursor to be invalidated + // and return unexpected keys and/or values. + // + // The interface contract guarantees at least the following errors will + // be returned (other implementation-specific errors are possible): + // - ErrTxClosed if the transaction has already been closed + // + // NOTE: The keys returned by this function are only valid during a + // transaction. 
Attempting to access them after a transaction has ended
+	// results in undefined behavior. This constraint prevents additional
+	// data copies and allows support for memory-mapped database
+	// implementations.
+	ForEachBucket(func(k []byte) error) error
+
+	// Cursor returns a new cursor, allowing for iteration over the bucket's
+	// key/value pairs and nested buckets in forward or backward order.
+	//
+	// You must seek to a position using the First, Last, or Seek functions
+	// before calling the Next, Prev, Key, or Value functions. Failure to
+	// do so will result in the same return values as an exhausted cursor,
+	// which is false for the Prev and Next functions and nil for the Key
+	// and Value functions.
+	Cursor() Cursor
+
+	// Writable returns whether or not the bucket is writable.
+	Writable() bool
+
+	// Put saves the specified key/value pair to the bucket. Keys that do
+	// not already exist are added and keys that already exist are
+	// overwritten.
+	//
+	// The interface contract guarantees at least the following errors will
+	// be returned (other implementation-specific errors are possible):
+	// - ErrKeyRequired if the key is empty
+	// - ErrIncompatibleValue if the key is the same as an existing bucket
+	// - ErrTxNotWritable if attempted against a read-only transaction
+	// - ErrTxClosed if the transaction has already been closed
+	Put(key, value []byte) error
+
+	// Get returns the value for the given key. Returns nil if the key does
+	// not exist in this bucket.
+	//
+	// NOTE: The value returned by this function is only valid during a
+	// transaction. Attempting to access it after a transaction has ended
+	// results in undefined behavior. This constraint prevents additional
+	// data copies and allows support for memory-mapped database
+	// implementations.
+	Get(key []byte) []byte
+
+	// Delete removes the specified key from the bucket. Deleting a key
+	// that does not exist does not return an error.
+	//
+	// The interface contract guarantees at least the following errors will
+	// be returned (other implementation-specific errors are possible):
+	// - ErrKeyRequired if the key is empty
+	// - ErrIncompatibleValue if the key is the same as an existing bucket
+	// - ErrTxNotWritable if attempted against a read-only transaction
+	// - ErrTxClosed if the transaction has already been closed
+	Delete(key []byte) error
+}
+
+// BlockRegion specifies a particular region of a block identified by the
+// specified hash, given an offset and length.
+type BlockRegion struct {
+	Hash   *wire.ShaHash
+	Offset uint32
+	Len    uint32
+}
+
+// Tx represents a database transaction. It can either be read-only or
+// read-write. The transaction provides a metadata bucket against which all
+// reads and writes occur.
+//
+// As would be expected with a transaction, no changes will be saved to the
+// database until it has been committed. The transaction will only provide a
+// view of the database at the time it was created. Transactions should not be
+// long running operations.
+type Tx interface {
+	// Metadata returns the top-most bucket for all metadata storage.
+	Metadata() Bucket
+
+	// StoreBlock stores the provided block into the database. There are no
+	// checks to ensure the block connects to a previous block, contains
+	// double spends, or any additional functionality such as transaction
+	// indexing. It simply stores the block in the database.
+ // + // The interface contract guarantees at least the following errors will + // be returned (other implementation-specific errors are possible): + // - ErrBlockExists when the block hash already exists + // - ErrTxNotWritable if attempted against a read-only transaction + // - ErrTxClosed if the transaction has already been closed + // + // Other errors are possible depending on the implementation. + StoreBlock(block *btcutil.Block) error + + // HasBlock returns whether or not a block with the given hash exists + // in the database. + // + // The interface contract guarantees at least the following errors will + // be returned (other implementation-specific errors are possible): + // - ErrTxClosed if the transaction has already been closed + // + // Other errors are possible depending on the implementation. + HasBlock(hash *wire.ShaHash) (bool, error) + + // HasBlocks returns whether or not the blocks with the provided hashes + // exist in the database. + // + // The interface contract guarantees at least the following errors will + // be returned (other implementation-specific errors are possible): + // - ErrTxClosed if the transaction has already been closed + // + // Other errors are possible depending on the implementation. + HasBlocks(hashes []wire.ShaHash) ([]bool, error) + + // FetchBlockHeader returns the raw serialized bytes for the block + // header identified by the given hash. The raw bytes are in the format + // returned by Serialize on a wire.BlockHeader. + // + // It is highly recommended to use this function (or FetchBlockHeaders) + // to obtain block headers over the FetchBlockRegion(s) functions since + // it provides the backend drivers the freedom to perform very specific + // optimizations which can result in significant speed advantages when + // working with headers. + // + // The interface contract guarantees at least the following errors will + // be returned (other implementation-specific errors are possible): + // - ErrBlockNotFound if the requested block hash does not exist + // - ErrTxClosed if the transaction has already been closed + // - ErrCorruption if the database has somehow become corrupted + // + // NOTE: The data returned by this function is only valid during a + // database transaction. Attempting to access it after a transaction + // has ended results in undefined behavior. This constraint prevents + // additional data copies and allows support for memory-mapped database + // implementations. + FetchBlockHeader(hash *wire.ShaHash) ([]byte, error) + + // FetchBlockHeaders returns the raw serialized bytes for the block + // headers identified by the given hashes. The raw bytes are in the + // format returned by Serialize on a wire.BlockHeader. + // + // It is highly recommended to use this function (or FetchBlockHeader) + // to obtain block headers over the FetchBlockRegion(s) functions since + // it provides the backend drivers the freedom to perform very specific + // optimizations which can result in significant speed advantages when + // working with headers. + // + // Furthermore, depending on the specific implementation, this function + // can be more efficient for bulk loading multiple block headers than + // loading them one-by-one with FetchBlockHeader. 
+	//
+	// The interface contract guarantees at least the following errors will
+	// be returned (other implementation-specific errors are possible):
+	// - ErrBlockNotFound if any of the requested block hashes do not exist
+	// - ErrTxClosed if the transaction has already been closed
+	// - ErrCorruption if the database has somehow become corrupted
+	//
+	// NOTE: The data returned by this function is only valid during a
+	// database transaction. Attempting to access it after a transaction
+	// has ended results in undefined behavior. This constraint prevents
+	// additional data copies and allows support for memory-mapped database
+	// implementations.
+	FetchBlockHeaders(hashes []wire.ShaHash) ([][]byte, error)
+
+	// FetchBlock returns the raw serialized bytes for the block identified
+	// by the given hash. The raw bytes are in the format returned by
+	// Serialize on a wire.MsgBlock.
+	//
+	// The interface contract guarantees at least the following errors will
+	// be returned (other implementation-specific errors are possible):
+	// - ErrBlockNotFound if the requested block hash does not exist
+	// - ErrTxClosed if the transaction has already been closed
+	// - ErrCorruption if the database has somehow become corrupted
+	//
+	// NOTE: The data returned by this function is only valid during a
+	// database transaction. Attempting to access it after a transaction
+	// has ended results in undefined behavior. This constraint prevents
+	// additional data copies and allows support for memory-mapped database
+	// implementations.
+	FetchBlock(hash *wire.ShaHash) ([]byte, error)
+
+	// FetchBlocks returns the raw serialized bytes for the blocks
+	// identified by the given hashes. The raw bytes are in the format
+	// returned by Serialize on a wire.MsgBlock.
+	//
+	// The interface contract guarantees at least the following errors will
+	// be returned (other implementation-specific errors are possible):
+	// - ErrBlockNotFound if any of the requested block hashes do not
+	//   exist
+	// - ErrTxClosed if the transaction has already been closed
+	// - ErrCorruption if the database has somehow become corrupted
+	//
+	// NOTE: The data returned by this function is only valid during a
+	// database transaction. Attempting to access it after a transaction
+	// has ended results in undefined behavior. This constraint prevents
+	// additional data copies and allows support for memory-mapped database
+	// implementations.
+	FetchBlocks(hashes []wire.ShaHash) ([][]byte, error)
+
+	// FetchBlockRegion returns the raw serialized bytes for the given
+	// block region.
+	//
+	// For example, it is possible to directly extract Bitcoin transactions
+	// and/or scripts from a block with this function. Depending on the
+	// backend implementation, this can provide significant savings by
+	// avoiding the need to load entire blocks.
+	//
+	// The raw bytes are in the format returned by Serialize on a
+	// wire.MsgBlock and the Offset field in the provided BlockRegion is
+	// zero-based and relative to the start of the block (byte 0).
+	//
+	// The interface contract guarantees at least the following errors will
+	// be returned (other implementation-specific errors are possible):
+	// - ErrBlockNotFound if the requested block hash does not exist
+	// - ErrBlockRegionInvalid if the region exceeds the bounds of the
+	//   associated block
+	// - ErrTxClosed if the transaction has already been closed
+	// - ErrCorruption if the database has somehow become corrupted
+	//
+	// NOTE: The data returned by this function is only valid during a
+	// database transaction. Attempting to access it after a transaction
+	// has ended results in undefined behavior. This constraint prevents
+	// additional data copies and allows support for memory-mapped database
+	// implementations.
+	FetchBlockRegion(region *BlockRegion) ([]byte, error)
+
+	// FetchBlockRegions returns the raw serialized bytes for the given
+	// block regions.
+	//
+	// For example, it is possible to directly extract Bitcoin transactions
+	// and/or scripts from various blocks with this function. Depending on
+	// the backend implementation, this can provide significant savings by
+	// avoiding the need to load entire blocks.
+	//
+	// The raw bytes are in the format returned by Serialize on a
+	// wire.MsgBlock and the Offset fields in the provided BlockRegions are
+	// zero-based and relative to the start of the block (byte 0).
+	//
+	// The interface contract guarantees at least the following errors will
+	// be returned (other implementation-specific errors are possible):
+	// - ErrBlockNotFound if any of the requested block hashes do not
+	//   exist
+	// - ErrBlockRegionInvalid if one or more regions exceed the bounds of
+	//   the associated block
+	// - ErrTxClosed if the transaction has already been closed
+	// - ErrCorruption if the database has somehow become corrupted
+	//
+	// NOTE: The data returned by this function is only valid during a
+	// database transaction. Attempting to access it after a transaction
+	// has ended results in undefined behavior. This constraint prevents
+	// additional data copies and allows support for memory-mapped database
+	// implementations.
+	FetchBlockRegions(regions []BlockRegion) ([][]byte, error)
+
+	// ******************************************************************
+	// Methods related to both atomic metadata storage and block storage.
+	// ******************************************************************
+
+	// Commit commits all changes that have been made to the metadata or
+	// block storage to persistent storage. Calling this function on a
+	// managed transaction will result in a panic.
+	Commit() error
+
+	// Rollback undoes all changes that have been made to the metadata or
+	// block storage. Calling this function on a managed transaction will
+	// result in a panic.
+	Rollback() error
+}
+
+// DB provides a generic interface that is used to store bitcoin blocks and
+// related metadata. This interface is intended to be agnostic to the actual
+// mechanism used for backend data storage. The RegisterDriver function can be
+// used to add a new backend data storage method.
+//
+// This interface is divided into two distinct categories of functionality.
+//
+// The first category is atomic metadata storage with bucket support. This is
+// accomplished through the use of database transactions.
+//
+// The second category is generic block storage. This functionality is
+// intentionally separate because the mechanism used for block storage may or
+// may not be the same mechanism used for metadata storage.
For example, it is
+// often more efficient to store the block data as flat files while the
+// metadata is kept in a database. However, this interface aims to be generic
+// enough to support blocks in the database too, if needed by a particular
+// backend.
+type DB interface {
+	// Type returns the database driver type the current database instance
+	// was created with.
+	Type() string
+
+	// Begin starts a transaction which is either read-only or read-write
+	// depending on the specified flag. Multiple read-only transactions
+	// can be started simultaneously while only a single read-write
+	// transaction can be started at a time. The call will block when
+	// starting a read-write transaction while one is already open.
+	//
+	// NOTE: The transaction must be closed by calling Rollback or Commit on
+	// it when it is no longer needed. Failure to do so can result in
+	// unclaimed memory and/or inability to close the database due to locks
+	// depending on the specific database implementation.
+	Begin(writable bool) (Tx, error)
+
+	// View invokes the passed function in the context of a managed
+	// read-only transaction. Any errors returned from the user-supplied
+	// function are returned from this function.
+	//
+	// Calling Rollback or Commit on the transaction passed to the
+	// user-supplied function will result in a panic.
+	View(fn func(tx Tx) error) error
+
+	// Update invokes the passed function in the context of a managed
+	// read-write transaction. Any errors returned from the user-supplied
+	// function will cause the transaction to be rolled back and are
+	// returned from this function. Otherwise, the transaction is committed
+	// when the user-supplied function returns a nil error.
+	//
+	// Calling Rollback or Commit on the transaction passed to the
+	// user-supplied function will result in a panic.
+	Update(fn func(tx Tx) error) error
+
+	// Close cleanly shuts down the database and syncs all data. It will
+	// block until all database transactions have been finalized (rolled
+	// back or committed).
+	Close() error
+}
diff --git a/database2/internal/treap/README.md b/database2/internal/treap/README.md
new file mode 100644
index 00000000000..ead8f136b6c
--- /dev/null
+++ b/database2/internal/treap/README.md
@@ -0,0 +1,36 @@
+treap
+=====
+
+[![Build Status](https://travis-ci.org/btcsuite/btcd.png?branch=master)]
+(https://travis-ci.org/btcsuite/btcd)
+
+Package treap implements a treap data structure that is used to hold ordered
+key/value pairs using a combination of binary search tree and heap semantics.
+It is a self-organizing and randomized data structure that doesn't require
+complex operations to maintain balance. Search, insert, and delete operations
+are all O(log n).
+
+Package treap is licensed under the copyfree ISC license.
+
+## Usage
+
+This package is only used internally in the database code and as such is not
+available for use outside of it.
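+
+For reference, here is a minimal sketch of how the package is used (written as
+a standalone program purely for illustration, since the import path is
+internal to the database code and cannot actually be imported from outside):
+
+```Go
+package main
+
+import (
+	"fmt"
+
+	"github.com/btcsuite/btcd/database2/internal/treap"
+)
+
+func main() {
+	// Insert a key/value pair, look it up, then remove it.
+	t := treap.New()
+	t.Put([]byte("key"), []byte("value"))
+	fmt.Println(t.Has([]byte("key")))        // true
+	fmt.Printf("%s\n", t.Get([]byte("key"))) // value
+	t.Delete([]byte("key"))
+	fmt.Println(t.Len())                     // 0
+}
+```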
+
+## Documentation
+
+[![GoDoc](https://godoc.org/github.com/btcsuite/btcd/database/internal/treap?status.png)]
+(http://godoc.org/github.com/btcsuite/btcd/database/internal/treap)
+
+Full `go doc` style documentation for the project can be viewed online without
+installing this package by using the GoDoc site here:
+http://godoc.org/github.com/btcsuite/btcd/database/internal/treap
+
+You can also view the documentation locally once the package is installed with
+the `godoc` tool by running `godoc -http=":6060"` and pointing your browser to
+http://localhost:6060/pkg/github.com/btcsuite/btcd/database/internal/treap
+
+## License
+
+Package treap is licensed under the [copyfree](http://copyfree.org) ISC
+License.
diff --git a/database2/internal/treap/doc.go b/database2/internal/treap/doc.go
new file mode 100644
index 00000000000..2b9c8a10768
--- /dev/null
+++ b/database2/internal/treap/doc.go
@@ -0,0 +1,12 @@
+// Copyright (c) 2015 The btcsuite developers
+// Use of this source code is governed by an ISC
+// license that can be found in the LICENSE file.
+
+/*
+Package treap implements a treap data structure that is used to hold ordered
+key/value pairs using a combination of binary search tree and heap semantics.
+It is a self-organizing and randomized data structure that doesn't require
+complex operations to maintain balance. Search, insert, and delete operations
+are all O(log n).
+*/
+package treap
diff --git a/database2/internal/treap/treap.go b/database2/internal/treap/treap.go
new file mode 100644
index 00000000000..bb488bdf75f
--- /dev/null
+++ b/database2/internal/treap/treap.go
@@ -0,0 +1,335 @@
+// Copyright (c) 2015 The btcsuite developers
+// Use of this source code is governed by an ISC
+// license that can be found in the LICENSE file.
+
+package treap
+
+import (
+	"bytes"
+	"math/rand"
+	"time"
+)
+
+// staticDepth is the size of the static array to use for keeping track of the
+// parent stack during treap iteration. Since a treap has a very high
+// probability that the tree height is logarithmic, it is exceedingly unlikely
+// that the parent stack will ever exceed this size even for extremely large
+// numbers of items.
+const staticDepth = 128
+
+// treapNode represents a node in the treap.
+type treapNode struct {
+	key      []byte
+	value    []byte
+	priority int
+	left     *treapNode
+	right    *treapNode
+}
+
+// newTreapNode returns a new node from the given key, value, and priority. The
+// node is not initially linked to any others.
+func newTreapNode(key, value []byte, priority int) *treapNode {
+	return &treapNode{key: key, value: value, priority: priority}
+}
+
+// parentStack represents a stack of parent treap nodes that are used during
+// iteration. It consists of a static array for holding the parents and a
+// dynamic overflow slice. It is extremely unlikely the overflow will ever be
+// hit during normal operation, however, since a treap's height is
+// probabilistic, the overflow case needs to be handled properly. This approach
+// is used because it is much more efficient for the majority case than
+// dynamically allocating heap space every time the treap is iterated.
+type parentStack struct {
+	index    int
+	items    [staticDepth]*treapNode
+	overflow []*treapNode
+}
+
+// Len returns the current number of items in the stack.
+func (s *parentStack) Len() int {
+	return s.index
+}
+
+// At returns the item n number of items from the top of the stack, where 0 is
+// the topmost item, without removing it.
It returns nil if n exceeds the
+// number of items on the stack.
+func (s *parentStack) At(n int) *treapNode {
+	index := s.index - n - 1
+	if index < 0 {
+		return nil
+	}
+
+	if index < staticDepth {
+		return s.items[index]
+	}
+
+	return s.overflow[index-staticDepth]
+}
+
+// Pop removes the top item from the stack. It returns nil if the stack is
+// empty.
+func (s *parentStack) Pop() *treapNode {
+	if s.index == 0 {
+		return nil
+	}
+
+	s.index--
+	if s.index < staticDepth {
+		node := s.items[s.index]
+		s.items[s.index] = nil
+		return node
+	}
+
+	node := s.overflow[s.index-staticDepth]
+	s.overflow[s.index-staticDepth] = nil
+	return node
+}
+
+// Push pushes the passed item onto the top of the stack.
+func (s *parentStack) Push(node *treapNode) {
+	if s.index < staticDepth {
+		s.items[s.index] = node
+		s.index++
+		return
+	}
+
+	// This approach is used over append because reslicing the slice to pop
+	// the item causes the compiler to make unneeded allocations. Also,
+	// since the max number of items is related to the tree depth, which
+	// requires exponentially more items to increase, only increase the cap
+	// one item at a time. This is more intelligent than the generic append
+	// expansion algorithm, which often doubles the cap.
+	index := s.index - staticDepth
+	if index+1 > cap(s.overflow) {
+		overflow := make([]*treapNode, index+1)
+		copy(overflow, s.overflow)
+		s.overflow = overflow
+	}
+	s.overflow[index] = node
+	s.index++
+}
+
+// Treap represents a treap data structure which is used to hold ordered
+// key/value pairs using a combination of binary search tree and heap semantics.
+// It is a self-organizing and randomized data structure that doesn't require
+// complex operations to maintain balance. Search, insert, and delete
+// operations are all O(log n).
+type Treap struct {
+	root  *treapNode
+	count int
+}
+
+// Len returns the number of items stored in the treap.
+func (t *Treap) Len() int {
+	return t.count
+}
+
+// get returns the treap node that contains the passed key and its parent. When
+// the found node is the root of the tree, the parent will be nil. When the key
+// does not exist, both the node and the parent will be nil.
+func (t *Treap) get(key []byte) (*treapNode, *treapNode) {
+	var parent *treapNode
+	for node := t.root; node != nil; {
+		// Traverse left or right depending on the result of the
+		// comparison.
+		compareResult := bytes.Compare(key, node.key)
+		if compareResult < 0 {
+			parent = node
+			node = node.left
+			continue
+		}
+		if compareResult > 0 {
+			parent = node
+			node = node.right
+			continue
+		}
+
+		// The key exists.
+		return node, parent
+	}
+
+	// A nil node was reached, which means the key does not exist.
+	return nil, nil
+}
+
+// Has returns whether or not the passed key exists.
+func (t *Treap) Has(key []byte) bool {
+	if node, _ := t.get(key); node != nil {
+		return true
+	}
+	return false
+}
+
+// Get returns the value for the passed key. The function will return nil when
+// the key does not exist.
+//
+// NOTE: It is acceptable to add keys with nil values, so do not rely on a nil
+// return value to indicate that a key does not exist. Use the Has function for
+// that purpose instead.
+func (t *Treap) Get(key []byte) []byte {
+	if node, _ := t.get(key); node != nil {
+		return node.value
+	}
+	return nil
+}
+
+// relinkGrandparent relinks the node into the treap after it has been rotated
+// by changing the passed grandparent's left or right pointer, depending on
+// where the old parent was, to point at the passed node.
Otherwise, when there +// is no grandparent, it means the node is now the root of the tree, so update +// it accordingly. +func (t *Treap) relinkGrandparent(node, parent, grandparent *treapNode) { + // The node is now the root of the tree when there is no grandparent. + if grandparent == nil { + t.root = node + return + } + + // Relink the grandparent's left or right pointer based on which side + // the old parent was. + if grandparent.left == parent { + grandparent.left = node + } else { + grandparent.right = node + } +} + +// Put inserts the passed key/value pair. +func (t *Treap) Put(key, value []byte) { + // The node is the root of the tree if there isn't already one. + if t.root == nil { + t.count++ + t.root = newTreapNode(key, value, rand.Int()) + return + } + + // Find the binary tree insertion point and construct a list of parents + // while doing so. When the key matches an entry already in the treap, + // just update its value and return. + var parents parentStack + var compareResult int + for node := t.root; node != nil; { + parents.Push(node) + compareResult = bytes.Compare(key, node.key) + if compareResult < 0 { + node = node.left + continue + } + if compareResult > 0 { + node = node.right + continue + } + + // The key already exists, so update its value. + node.value = value + return + } + + // Link the new node into the binary tree in the correct position. + t.count++ + node := newTreapNode(key, value, rand.Int()) + parent := parents.At(0) + if compareResult < 0 { + parent.left = node + } else { + parent.right = node + } + + // Perform any rotations needed to maintain the min-heap. + for parents.Len() > 0 { + // There is nothing left to do when the node's priority is + // greater than or equal to its parent's priority. + parent = parents.Pop() + if node.priority >= parent.priority { + break + } + + // Perform a right rotation if the node is on the left side or + // a left rotation if the node is on the right side. + if parent.left == node { + node.right, parent.left = parent, node.right + } else { + node.left, parent.right = parent, node.left + } + t.relinkGrandparent(node, parent, parents.At(0)) + } +} + +// Delete removes the passed key if it exists. +func (t *Treap) Delete(key []byte) { + // Find the node for the key along with its parent. There is nothing to + // do if the key does not exist. + node, parent := t.get(key) + if node == nil { + return + } + + // When the only node in the tree is the root node and it is the one + // being deleted, there is nothing else to do besides removing it. + if parent == nil && node.left == nil && node.right == nil { + t.root = nil + t.count-- + return + } + + // Perform rotations to move the node to delete to a leaf position while + // maintaining the min-heap. + var isLeft bool + var child *treapNode + for node.left != nil || node.right != nil { + // Choose the child with the higher priority. + if node.left == nil { + child = node.right + isLeft = false + } else if node.right == nil { + child = node.left + isLeft = true + } else if node.left.priority >= node.right.priority { + child = node.left + isLeft = true + } else { + child = node.right + isLeft = false + } + + // Rotate left or right depending on which side the child node + // is on. This has the effect of moving the node to delete + // towards the bottom of the tree while maintaining the + // min-heap. 
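+		// For example, rotating right when the child is on the left:
+		//
+		//        node            child
+		//       /    \          /     \
+		//    child    C   ->   A      node
+		//    /   \                   /    \
+		//   A     B                 B      C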
+		if isLeft {
+			child.right, node.left = node, child.right
+		} else {
+			child.left, node.right = node, child.left
+		}
+		t.relinkGrandparent(child, node, parent)
+
+		// The parent for the node to delete is now what was previously
+		// its child.
+		parent = child
+	}
+
+	// Delete the node, which is now a leaf node, by disconnecting it from
+	// its parent.
+	if parent.right == node {
+		parent.right = nil
+	} else {
+		parent.left = nil
+	}
+	t.count--
+}
+
+// Reset efficiently removes all items in the treap.
+func (t *Treap) Reset() {
+	t.count = 0
+	t.root = nil
+}
+
+// New returns a new empty treap ready for use. See the documentation for the
+// Treap structure for more details.
+func New() *Treap {
+	return &Treap{}
+}
+
+func init() {
+	rand.Seed(time.Now().UnixNano())
+}
diff --git a/database2/internal/treap/treap_test.go b/database2/internal/treap/treap_test.go
new file mode 100644
index 00000000000..cbb394551fb
--- /dev/null
+++ b/database2/internal/treap/treap_test.go
@@ -0,0 +1,383 @@
+// Copyright (c) 2015 The btcsuite developers
+// Use of this source code is governed by an ISC
+// license that can be found in the LICENSE file.
+
+package treap
+
+import (
+	"bytes"
+	"crypto/sha256"
+	"encoding/binary"
+	"encoding/hex"
+	"math/rand"
+	"reflect"
+	"testing"
+)
+
+// fromHex converts the passed hex string into a byte slice and will panic if
+// there is an error. This is only provided for the hard-coded constants so
+// errors in the source code can be detected. It will only (and must only) be
+// called for initialization purposes.
+func fromHex(s string) []byte {
+	r, err := hex.DecodeString(s)
+	if err != nil {
+		panic("invalid hex in source file: " + s)
+	}
+	return r
+}
+
+// serializeUint32 returns the big-endian encoding of the passed uint32.
+func serializeUint32(ui uint32) []byte {
+	var ret [4]byte
+	binary.BigEndian.PutUint32(ret[:], ui)
+	return ret[:]
+}
+
+// TestEmpty ensures calling functions on an empty treap works as expected.
+func TestEmpty(t *testing.T) {
+	t.Parallel()
+
+	// Ensure the treap length is the expected value.
+	testTreap := New()
+	if gotLen := testTreap.Len(); gotLen != 0 {
+		t.Fatalf("Len: unexpected length - got %d, want %d", gotLen, 0)
+	}
+
+	// Ensure there are no errors with requesting keys from an empty treap.
+	key := serializeUint32(0)
+	if gotVal := testTreap.Has(key); gotVal != false {
+		t.Fatalf("Has: unexpected result - got %v, want false", gotVal)
+	}
+	if gotVal := testTreap.Get(key); gotVal != nil {
+		t.Fatalf("Get: unexpected result - got %x, want nil", gotVal)
+	}
+
+	// Ensure there are no panics when deleting keys from an empty treap.
+	testTreap.Delete(key)
+}
+
+// TestReset ensures that resetting an existing treap works as expected.
+func TestReset(t *testing.T) {
+	t.Parallel()
+
+	// Insert a few keys.
+	numItems := 1000
+	testTreap := New()
+	for i := 0; i < numItems; i++ {
+		key := serializeUint32(uint32(i))
+		testTreap.Put(key, key)
+	}
+
+	// Reset it.
+	testTreap.Reset()
+
+	// Ensure the treap length is now 0.
+	if gotLen := testTreap.Len(); gotLen != 0 {
+		t.Fatalf("Len: unexpected length - got %d, want %d", gotLen, 0)
+	}
+
+	for i := 0; i < numItems; i++ {
+		key := serializeUint32(uint32(i))
+
+		// Ensure the treap no longer has the key.
+		if testTreap.Has(key) {
+			t.Fatalf("Has #%d: key %q is in treap", i, key)
+		}
+
+		// Get the key that no longer exists from the treap and ensure
+		// it is nil.
+ if gotVal := testTreap.Get(key); gotVal != nil { + t.Fatalf("Get #%d: unexpected value - got %x, want nil", + i, gotVal) + } + } + +} + +// TestSequential ensures that putting keys into the treap in sequential order +// works as expected. +func TestSequential(t *testing.T) { + t.Parallel() + + // Insert a bunch of sequential keys while checking several of the treap + // functions work as expected. + numItems := 1000 + testTreap := New() + for i := 0; i < numItems; i++ { + key := serializeUint32(uint32(i)) + testTreap.Put(key, key) + + // Ensure the treap length is the expected value. + if gotLen := testTreap.Len(); gotLen != i+1 { + t.Fatalf("Len #%d: unexpected length - got %d, want %d", + i, gotLen, i+1) + } + + // Ensure the treap has the key. + if !testTreap.Has(key) { + t.Fatalf("Has #%d: key %q is not in treap", i, key) + } + + // Get the key from the treap and ensure it is the expected + // value. + if gotVal := testTreap.Get(key); !bytes.Equal(gotVal, key) { + t.Fatalf("Get #%d: unexpected value - got %x, want %x", + i, gotVal, key) + } + } + + // Delete the keys one-by-one while checking several of the treap + // functions work as expected. + for i := 0; i < numItems; i++ { + key := serializeUint32(uint32(i)) + testTreap.Delete(key) + + // Ensure the treap length is the expected value. + if gotLen := testTreap.Len(); gotLen != numItems-i-1 { + t.Fatalf("Len #%d: unexpected length - got %d, want %d", + i, gotLen, numItems-i-1) + } + + // Ensure the treap no longer has the key. + if testTreap.Has(key) { + t.Fatalf("Has #%d: key %q is in treap", i, key) + } + + // Get the key that no longer exists from the treap and ensure + // it is nil. + if gotVal := testTreap.Get(key); gotVal != nil { + t.Fatalf("Get #%d: unexpected value - got %x, want nil", + i, gotVal) + } + } +} + +// TestReverseSequential ensures that putting keys into the treap in reverse +// sequential order works as expected. +func TestReverseSequential(t *testing.T) { + t.Parallel() + + // Insert a bunch of sequential keys while checking several of the treap + // functions work as expected. + numItems := 1000 + testTreap := New() + for i := 0; i < numItems; i++ { + key := serializeUint32(uint32(numItems - i - 1)) + testTreap.Put(key, key) + + // Ensure the treap length is the expected value. + if gotLen := testTreap.Len(); gotLen != i+1 { + t.Fatalf("Len #%d: unexpected length - got %d, want %d", + i, gotLen, i+1) + } + + // Ensure the treap has the key. + if !testTreap.Has(key) { + t.Fatalf("Has #%d: key %q is not in treap", i, key) + } + + // Get the key from the treap and ensure it is the expected + // value. + if gotVal := testTreap.Get(key); !bytes.Equal(gotVal, key) { + t.Fatalf("Get #%d: unexpected value - got %x, want %x", + i, gotVal, key) + } + } + + // Delete the keys one-by-one while checking several of the treap + // functions work as expected. + for i := 0; i < numItems; i++ { + // Intentionally use the reverse order they were inserted here. + key := serializeUint32(uint32(i)) + testTreap.Delete(key) + + // Ensure the treap length is the expected value. + if gotLen := testTreap.Len(); gotLen != numItems-i-1 { + t.Fatalf("Len #%d: unexpected length - got %d, want %d", + i, gotLen, numItems-i-1) + } + + // Ensure the treap no longer has the key. + if testTreap.Has(key) { + t.Fatalf("Has #%d: key %q is in treap", i, key) + } + + // Get the key that no longer exists from the treap and ensure + // it is nil. 
+ if gotVal := testTreap.Get(key); gotVal != nil {
+ t.Fatalf("Get #%d: unexpected value - got %x, want nil",
+ i, gotVal)
+ }
+ }
+}
+
+// TestUnordered ensures that putting keys into the treap in no particular order
+// works as expected.
+func TestUnordered(t *testing.T) {
+ t.Parallel()
+
+ // Insert a bunch of out-of-order keys while checking several of the
+ // treap functions work as expected.
+ numItems := 1000
+ testTreap := New()
+ for i := 0; i < numItems; i++ {
+ // Hash the serialized int to generate out-of-order keys.
+ hash := sha256.Sum256(serializeUint32(uint32(i)))
+ key := hash[:]
+ testTreap.Put(key, key)
+
+ // Ensure the treap length is the expected value.
+ if gotLen := testTreap.Len(); gotLen != i+1 {
+ t.Fatalf("Len #%d: unexpected length - got %d, want %d",
+ i, gotLen, i+1)
+ }
+
+ // Ensure the treap has the key.
+ if !testTreap.Has(key) {
+ t.Fatalf("Has #%d: key %q is not in treap", i, key)
+ }
+
+ // Get the key from the treap and ensure it is the expected
+ // value.
+ if gotVal := testTreap.Get(key); !bytes.Equal(gotVal, key) {
+ t.Fatalf("Get #%d: unexpected value - got %x, want %x",
+ i, gotVal, key)
+ }
+ }
+
+ // Delete the keys one-by-one while checking several of the treap
+ // functions work as expected.
+ for i := 0; i < numItems; i++ {
+ // Hash the serialized int to generate out-of-order keys.
+ hash := sha256.Sum256(serializeUint32(uint32(i)))
+ key := hash[:]
+ testTreap.Delete(key)
+
+ // Ensure the treap length is the expected value.
+ if gotLen := testTreap.Len(); gotLen != numItems-i-1 {
+ t.Fatalf("Len #%d: unexpected length - got %d, want %d",
+ i, gotLen, numItems-i-1)
+ }
+
+ // Ensure the treap no longer has the key.
+ if testTreap.Has(key) {
+ t.Fatalf("Has #%d: key %q is in treap", i, key)
+ }
+
+ // Get the key that no longer exists from the treap and ensure
+ // it is nil.
+ if gotVal := testTreap.Get(key); gotVal != nil {
+ t.Fatalf("Get #%d: unexpected value - got %x, want nil",
+ i, gotVal)
+ }
+ }
+}
+
+// TestDuplicatePut ensures that putting a duplicate key updates the existing
+// value.
+func TestDuplicatePut(t *testing.T) {
+ key := serializeUint32(0)
+ val := serializeUint32(10)
+
+ // Put the key twice with the second put being the expected final value.
+ testTreap := New()
+ testTreap.Put(key, key)
+ testTreap.Put(key, val)
+
+ // Ensure the key still exists and is the new value.
+ if gotVal := testTreap.Has(key); gotVal != true {
+ t.Fatalf("Has: unexpected result - got %v, want true", gotVal)
+ }
+ if gotVal := testTreap.Get(key); !bytes.Equal(gotVal, val) {
+ t.Fatalf("Get: unexpected result - got %x, want %x", gotVal, val)
+ }
+}
+
+// TestParentStack ensures the parentStack functionality works as intended.
+func TestParentStack(t *testing.T) {
+ t.Parallel()
+
+ tests := []struct {
+ numNodes int
+ }{
+ {numNodes: 1},
+ {numNodes: staticDepth},
+ {numNodes: staticDepth + 1}, // Test dynamic code paths
+ }
+
+testLoop:
+ for i, test := range tests {
+ nodes := make([]*treapNode, 0, test.numNodes)
+ for j := 0; j < test.numNodes; j++ {
+ var key [4]byte
+ binary.BigEndian.PutUint32(key[:], uint32(j))
+ node := newTreapNode(key[:], key[:], 0)
+ nodes = append(nodes, node)
+ }
+
+ // Push all of the nodes onto the parent stack while testing
+ // various stack properties.
+ stack := &parentStack{}
+ for j, node := range nodes {
+ stack.Push(node)
+
+ // Ensure the stack length is the expected value.
+ if stack.Len() != j+1 {
+ t.Errorf("Len #%d (%d): unexpected stack "+
+ "length - got %d, want %d", i, j,
+ stack.Len(), j+1)
+ continue testLoop
+ }
+
+ // Ensure the node at each index is the expected one.
+ for k := 0; k <= j; k++ {
+ atNode := stack.At(j - k)
+ if !reflect.DeepEqual(atNode, nodes[k]) {
+ t.Errorf("At #%d (%d): mismatched node "+
+ "- got %v, want %v", i, j-k,
+ atNode, nodes[k])
+ continue testLoop
+ }
+ }
+ }
+
+ // Ensure each popped node is the expected one.
+ for j := 0; j < len(nodes); j++ {
+ node := stack.Pop()
+ expected := nodes[len(nodes)-j-1]
+ if !reflect.DeepEqual(node, expected) {
+ t.Errorf("Pop #%d (%d): mismatched node - "+
+ "got %v, want %v", i, j, node, expected)
+ continue testLoop
+ }
+ }
+
+ // Ensure the stack is now empty.
+ if stack.Len() != 0 {
+ t.Errorf("Len #%d: stack is not empty - got %d", i,
+ stack.Len())
+ continue testLoop
+ }
+
+ // Ensure attempting to retrieve a node at an index beyond the
+ // stack's length returns nil.
+ if node := stack.At(2); node != nil {
+ t.Errorf("At #%d: did not give back nil - got %v", i,
+ node)
+ continue testLoop
+ }
+
+ // Ensure attempting to pop a node from an empty stack returns
+ // nil.
+ if node := stack.Pop(); node != nil {
+ t.Errorf("Pop #%d: did not give back nil - got %v", i,
+ node)
+ continue testLoop
+ }
+ }
+}
+
+func init() {
+ // Force the same pseudo random numbers for each test run.
+ rand.Seed(0)
+}
diff --git a/database2/internal/treap/treapiter.go b/database2/internal/treap/treapiter.go
new file mode 100644
index 00000000000..7d4ec16c49f
--- /dev/null
+++ b/database2/internal/treap/treapiter.go
@@ -0,0 +1,322 @@
+// Copyright (c) 2015 The btcsuite developers
+// Use of this source code is governed by an ISC
+// license that can be found in the LICENSE file.
+
+package treap
+
+import "bytes"
+
+// Iterator represents an iterator for forwards and backwards iteration over
+// the contents of a treap.
+type Iterator struct {
+ t *Treap // The treap the iterator is associated with
+ node *treapNode // The node the iterator is positioned at
+ parents parentStack // The stack of parents needed to iterate
+ isNew bool // Whether the iterator has been positioned
+ seekKey []byte // Used to handle dynamic updates
+ startKey []byte // Used to limit the iterator to a range
+ limitKey []byte // Used to limit the iterator to a range
+}
+
+// limitIterator clears the current iterator node if it is outside of the range
+// specified when the iterator was created.  It returns whether the iterator is
+// valid.
+func (iter *Iterator) limitIterator() bool {
+ if iter.node == nil {
+ return false
+ }
+
+ node := iter.node
+ if iter.startKey != nil && bytes.Compare(node.key, iter.startKey) < 0 {
+ iter.node = nil
+ return false
+ }
+
+ if iter.limitKey != nil && bytes.Compare(node.key, iter.limitKey) > 0 {
+ iter.node = nil
+ return false
+ }
+
+ return true
+}
+
+// seek moves the iterator based on the provided key and flags.
+//
+// When the exact match flag is set, the iterator will either be moved to the
+// first key in the treap that exactly matches the provided key, or the one
+// before/after it depending on the greater flag.
+//
+// When the exact match flag is NOT set, the iterator will be moved to the first
+// key in the treap before/after the provided key depending on the greater flag.
+//
+// In all cases, the limits specified when the iterator was created are
+// respected.
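+//
+// As a hypothetical illustration (single-byte keys written as integers for
+// brevity), given a treap containing only the keys 1, 3, and 5:
+//   - seek(3, true, true) positions the iterator at key 3 (exact match)
+//   - seek(2, false, true) positions the iterator at key 3
+//   - seek(2, false, false) positions the iterator at key 1
+//   - seek(6, false, true) exhausts the iterator since no greater key exists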
+func (iter *Iterator) seek(key []byte, exactMatch bool, greater bool) bool {
+ iter.node = nil
+ iter.parents = parentStack{}
+ var selectedNodeDepth int
+ for node := iter.t.root; node != nil; {
+ iter.parents.Push(node)
+
+ // Traverse left or right depending on the result of the
+ // comparison.  Also, set the iterator to the node depending on
+ // the flags so the iterator is positioned properly when an
+ // exact match isn't found.
+ compareResult := bytes.Compare(key, node.key)
+ if compareResult < 0 {
+ if greater {
+ iter.node = node
+ selectedNodeDepth = iter.parents.Len() - 1
+ }
+ node = node.left
+ continue
+ }
+ if compareResult > 0 {
+ if !greater {
+ iter.node = node
+ selectedNodeDepth = iter.parents.Len() - 1
+ }
+ node = node.right
+ continue
+ }
+
+ // The key is an exact match.  Set the iterator and return now
+ // when the exact match flag is set.
+ if exactMatch {
+ iter.node = node
+ iter.parents.Pop()
+ return iter.limitIterator()
+ }
+
+ // The key is an exact match, but the exact match flag is not
+ // set, so choose which direction to go based on whether the
+ // larger or smaller key was requested.
+ if greater {
+ node = node.right
+ } else {
+ node = node.left
+ }
+ }
+
+ // There was either no exact match or there was an exact match but the
+ // exact match flag was not set.  In any case, the parent stack might
+ // need to be adjusted to only include the parents up to the selected
+ // node.  Also, ensure the selected node's key does not exceed the
+ // allowed range of the iterator.
+ for i := iter.parents.Len(); i > selectedNodeDepth; i-- {
+ iter.parents.Pop()
+ }
+ return iter.limitIterator()
+}
+
+// First moves the iterator to the first key/value pair.  When there is only a
+// single key/value pair both First and Last will point to the same pair.
+// Returns false if there are no key/value pairs.
+func (iter *Iterator) First() bool {
+ // Seek the start key if the iterator was created with one.  This will
+ // result in either an exact match, the first greater key, or an
+ // exhausted iterator if no such key exists.
+ iter.isNew = false
+ if iter.startKey != nil {
+ return iter.seek(iter.startKey, true, true)
+ }
+
+ // The smallest key is in the left-most node.
+ iter.parents = parentStack{}
+ for node := iter.t.root; node != nil; node = node.left {
+ if node.left == nil {
+ iter.node = node
+ return true
+ }
+ iter.parents.Push(node)
+ }
+ return false
+}
+
+// Last moves the iterator to the last key/value pair.  When there is only a
+// single key/value pair both First and Last will point to the same pair.
+// Returns false if there are no key/value pairs.
+func (iter *Iterator) Last() bool {
+ // Seek the limit key if the iterator was created with one.  This will
+ // result in either an exact match, the first smaller key, or an
+ // exhausted iterator if no such key exists.
+ iter.isNew = false
+ if iter.limitKey != nil {
+ return iter.seek(iter.limitKey, true, false)
+ }
+
+ // The highest key is in the right-most node.
+ iter.parents = parentStack{}
+ for node := iter.t.root; node != nil; node = node.right {
+ if node.right == nil {
+ iter.node = node
+ return true
+ }
+ iter.parents.Push(node)
+ }
+ return false
+}
+
+// Next moves the iterator to the next key/value pair and returns false when the
+// iterator is exhausted.  When invoked on a newly created iterator it will
+// position the iterator at the first item.
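+//
+// Internally this is a standard in-order successor walk: when the current
+// node has a right subtree, the next node is the left-most node of that
+// subtree; otherwise, the parents are unwound until one is reached whose left
+// subtree contains the current node, and the iterator is exhausted when no
+// such parent exists.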
+func (iter *Iterator) Next() bool { + if iter.isNew { + return iter.First() + } + + if iter.node == nil { + return false + } + + // Reseek the previous key without allowing for an exact match if a + // force seek was requested. This results in the key greater than the + // previous one or an exhausted iterator if there is no such key. + if seekKey := iter.seekKey; seekKey != nil { + iter.seekKey = nil + return iter.seek(seekKey, false, true) + } + + // When there is no right node walk the parents until the parent's right + // node is not equal to the previous child. This will be the next node. + if iter.node.right == nil { + parent := iter.parents.Pop() + for parent != nil && parent.right == iter.node { + iter.node = parent + parent = iter.parents.Pop() + } + iter.node = parent + return iter.limitIterator() + } + + // There is a right node, so the next node is the left-most node down + // the right sub-tree. + iter.parents.Push(iter.node) + iter.node = iter.node.right + for node := iter.node.left; node != nil; node = node.left { + iter.parents.Push(iter.node) + iter.node = node + } + return iter.limitIterator() +} + +// Prev moves the iterator to the previous key/value pair and returns false when +// the iterator is exhausted. When invoked on a newly created iterator it will +// position the iterator at the last item. +func (iter *Iterator) Prev() bool { + if iter.isNew { + return iter.Last() + } + + if iter.node == nil { + return false + } + + // Reseek the previous key without allowing for an exact match if a + // force seek was requested. This results in the key smaller than the + // previous one or an exhausted iterator if there is no such key. + if seekKey := iter.seekKey; seekKey != nil { + iter.seekKey = nil + return iter.seek(seekKey, false, false) + } + + // When there is no left node walk the parents until the parent's left + // node is not equal to the previous child. This will be the previous + // node. + for iter.node.left == nil { + parent := iter.parents.Pop() + for parent != nil && parent.left == iter.node { + iter.node = parent + parent = iter.parents.Pop() + } + iter.node = parent + return iter.limitIterator() + } + + // There is a left node, so the previous node is the right-most node + // down the left sub-tree. + iter.parents.Push(iter.node) + iter.node = iter.node.left + for node := iter.node.right; node != nil; node = node.right { + iter.parents.Push(iter.node) + iter.node = node + } + return iter.limitIterator() +} + +// Seek moves the iterator to the first key/value pair with a key that is +// greater than or equal to the given key and returns true if successful. +func (iter *Iterator) Seek(key []byte) bool { + iter.isNew = false + return iter.seek(key, true, true) +} + +// Key returns the key of the current key/value pair or nil when the iterator +// is exhausted. The caller should not modify the contents of the returned +// slice. +func (iter *Iterator) Key() []byte { + if iter.node == nil { + return nil + } + return iter.node.key +} + +// Value returns the value of the current key/value pair or nil when the +// iterator is exhausted. The caller should not modify the contents of the +// returned slice. +func (iter *Iterator) Value() []byte { + if iter.node == nil { + return nil + } + return iter.node.value +} + +// Valid indicates whether the iterator is positioned at a valid key/value pair. +// It will be considered invalid when the iterator is newly created or exhausted. 
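+//
+// For example, Valid returns false on a freshly created iterator and only
+// returns true once a positioning call such as First, Last, Next, Prev, or
+// Seek has succeeded.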
+func (iter *Iterator) Valid() bool {
+ return iter.node != nil
+}
+
+// ForceReseek notifies the iterator that the underlying treap has been updated,
+// so the next call to Prev or Next needs to reseek in order to allow the
+// iterator to continue working properly.
+func (iter *Iterator) ForceReseek() {
+ // Set the seek key to the current node.  This will force the Next/Prev
+ // functions to reseek, and thus properly reconstruct the iterator, on
+ // their next call.
+ if iter.node == nil {
+ iter.seekKey = nil
+ return
+ }
+ iter.seekKey = iter.node.key
+}
+
+// Iterator returns a new iterator for the treap.  The newly returned iterator
+// is not pointing to a valid item until a call to one of the methods to
+// position it is made.
+//
+// The start key and limit key parameters cause the iterator to be limited to
+// a range of keys.  Either or both can be nil if the functionality is not
+// desired.
+//
+// WARNING: The ForceReseek method must be called on the returned iterator if
+// the treap is mutated.  Failure to do so will cause the iterator to return
+// unexpected keys and/or values.
+//
+// For example:
+//   iter := t.Iterator(nil, nil)
+//   for iter.Next() {
+//       if someCondition {
+//           t.Delete(iter.Key())
+//           iter.ForceReseek()
+//       }
+//   }
+func (t *Treap) Iterator(startKey, limitKey []byte) *Iterator {
+ iter := &Iterator{
+ t: t,
+ isNew: true,
+ startKey: startKey,
+ limitKey: limitKey,
+ }
+ return iter
+}
diff --git a/database2/log.go b/database2/log.go
new file mode 100644
index 00000000000..0f92c1cb64c
--- /dev/null
+++ b/database2/log.go
@@ -0,0 +1,65 @@
+// Copyright (c) 2013-2014 The btcsuite developers
+// Use of this source code is governed by an ISC
+// license that can be found in the LICENSE file.
+
+package database2
+
+import (
+ "errors"
+ "io"
+
+ "github.com/btcsuite/btclog"
+)
+
+// log is a logger that is initialized with no output filters.  This
+// means the package will not perform any logging by default until the caller
+// requests it.
+var log btclog.Logger
+
+// The default amount of logging is none.
+func init() {
+ DisableLog()
+}
+
+// DisableLog disables all library log output.  Logging output is disabled
+// by default until either UseLogger or SetLogWriter are called.
+func DisableLog() {
+ log = btclog.Disabled
+}
+
+// UseLogger uses a specified Logger to output package logging info.
+// This should be used in preference to SetLogWriter if the caller is also
+// using btclog.
+func UseLogger(logger btclog.Logger) {
+ log = logger
+
+ // Update the logger for the registered drivers.
+ for _, drv := range drivers {
+ if drv.UseLogger != nil {
+ drv.UseLogger(logger)
+ }
+ }
+}
+
+// SetLogWriter uses a specified io.Writer to output package logging info.
+// This allows a caller to direct package logging output without needing a
+// dependency on seelog.  If the caller is also using btclog, UseLogger should
+// be used instead.
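+//
+// A minimal usage sketch (assuming the caller imports "os"; "debug" is one of
+// the btclog level strings exercised by this package's tests):
+//   err := SetLogWriter(os.Stdout, "debug")
+//   if err != nil {
+//       // Logging could not be configured; handle the error.
+//   }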
+func SetLogWriter(w io.Writer, level string) error {
+ if w == nil {
+ return errors.New("nil writer")
+ }
+
+ lvl, ok := btclog.LogLevelFromString(level)
+ if !ok {
+ return errors.New("invalid log level")
+ }
+
+ l, err := btclog.NewLoggerFromWriter(w, lvl)
+ if err != nil {
+ return err
+ }
+
+ UseLogger(l)
+ return nil
+}
diff --git a/database2/log_test.go b/database2/log_test.go
new file mode 100644
index 00000000000..372f96aa1e3
--- /dev/null
+++ b/database2/log_test.go
@@ -0,0 +1,67 @@
+// Copyright (c) 2015 The btcsuite developers
+// Use of this source code is governed by an ISC
+// license that can be found in the LICENSE file.
+
+package database2_test
+
+import (
+ "errors"
+ "io"
+ "os"
+ "testing"
+
+ database "github.com/btcsuite/btcd/database2"
+)
+
+// TestSetLogWriter ensures the SetLogWriter function behaves as expected for
+// both the error conditions and the success case.
+func TestSetLogWriter(t *testing.T) {
+ tests := []struct {
+ name string
+ w io.Writer
+ level string
+ expected error
+ }{
+ {
+ name: "nil writer",
+ w: nil,
+ level: "trace",
+ expected: errors.New("nil writer"),
+ },
+ {
+ name: "invalid log level",
+ w: os.Stdout,
+ level: "wrong",
+ expected: errors.New("invalid log level"),
+ },
+ {
+ name: "use off level",
+ w: os.Stdout,
+ level: "off",
+ expected: errors.New("min level can't be greater than max. Got min: 6, max: 5"),
+ },
+ {
+ name: "pass",
+ w: os.Stdout,
+ level: "debug",
+ expected: nil,
+ },
+ }
+
+ t.Logf("Running %d tests", len(tests))
+ for i, test := range tests {
+ err := database.SetLogWriter(test.w, test.level)
+ if err != nil {
+ if err.Error() != test.expected.Error() {
+ t.Errorf("SetLogWriter #%d (%s) wrong result\n"+
+ "got: %v\nwant: %v", i, test.name, err,
+ test.expected)
+ }
+ } else {
+ if test.expected != nil {
+ t.Errorf("SetLogWriter #%d (%s) wrong result\n"+
+ "got: %v\nwant: %v", i, test.name, err,
+ test.expected)
+ }
+ }
+ }
+}
diff --git a/database2/testdata/blocks1-256.bz2 b/database2/testdata/blocks1-256.bz2
new file mode 100644
index 0000000000000000000000000000000000000000..6b8bda4429200c0566bb13c28c35d6397272e475
GIT binary patch
literal 37555
zcmV)8K*qm9T4*^jL0KkKSr@A=?*Lyo|NsC0|NsC0|NsC0|NsC0|NsC0|NsC0|NsC0
z|NsC0|Nr1%0`GtT00*bHz1|n5T)=M=pBnqU;dPxJy+dC1_BM0g?t47Wcf|GKdY#;B
zx%Jb_&i2~vj_uc8@w++d_r2Xc+bQeWwB7AnH@m&s?|Z7VcLwh6_&)Ua-C7%VGQQc~
z>${h|?$>*(yBXJ;E3Q`cd%f+uRJ}XA?=#ch
z^LF0cm$dfxz3JWU#m@D$*Rj3c>USOO?{`dUw52*PZ4)
zZsq3L?C!nYb#Ajg-1geucDp>i-Q{VQx3xUI&gD6`Uawu6^=;dCb=vn_^?SRn>aKNl
zE$pv*o-NzE&uh^0y|K@GzIV4+$bmfo002w?nqp#L0%cFrLlY(dm;eaCCJBH5N#bAt
zO*GR4VqgTqU=ssCXc_@FgCjbAaCIAVQ4yFcx0gyDQ{z0Gs4FX^(;0dOf69QlWOd2#a
z4FC-Q0fYo#1w0VRlT9>W35k^S(-Q<>m=TeQsvT87z%&|Zsek|g&_
zj3y%jGfhmMr>UAIk3kuxqef2@z$c;x$u^Pc4@lCq5G;R}q9*?O-;!gJMnwV1tNmyb
zJ@fx!pSG8@UQWKV&A3<1{kZJNqQ08k
z6D?(C9~^K8No6azD{saS5up0CIL)USziD_I?C)=&!gQRRl1VS{+J?XC_#bbw?xgcw
zjlnWQhmuJmK$1=Nq>@QeY{iyDRa8VqDzR1~h>SsCD;S`vMFkaz#a1j>F^Y@@ixB}8
zR7F^*f~;V~WLU;U0gMLL5g4e*#6~J23L=XU6%+sgVk!!%C^905f(&B?Rz?W0j2J=!
zkz&A5Bv=9>DkzG^F;zul6^s@Nf+(vMV51aRs)#D6ixx2yRYe3*Q4s|eF+^2@v4W~9 z3lL()AgmOMip7Gf76_n>7{x?j!Bi0#Dx^^qh$zHCR8?TA005$jqA-Mjq=2G;p(GR_ zERkRYhGl^yhX}|(fNQV-4dYK>n#ItplMB?#{#=cV!a#-X6#_w-Hww)~IQDocyC!q_ zP;QC6GWpfg{SRTvVKK%&;D83JAV?@cB9MTfNoG=P`RhZcW`%~VouXmT%%>Q`%gFiG znI{I2!l$s1vkuvTf^(_QFeC#yn%e`wLK@aVJW{_@>IkMu*Tz znP6aPZ9IqEsuH!41X8oO@$DzHuf2m$q6GXXz?)H=) z8Oo<|u~gic8#@ls4^23kc~Cpor9%wuOfhH~@7+O`A-6w1U~OG zKw{EgNhFbmD_5u(-)^8l4f&a^9R_dO&uKNTUsBR*+$IlA*>~OFPl4v}nJizqrf9w) zqe22=T4Ir+f2Wo>`sLWI3k8mhw?H3J4@8NVOJTC0;$We@CRKTKF-3MYHiCQ2x3BU( z)}O1pcK`t1iH=|ZvOy7K09X-1NkIVYB$7zO4T7V2@@RyR?lKpdotDOu@DKn100025 z8j|2a5i)c~hdx1-v5bPtJMzV|CO3sRfYNWYLAI?F)Ks#b?V*}=K)U~Rs~`0}`9jfr zq@K;^8mYzZ93voGNFdj?bzU>-R^QcyxFYAQ$zjzXUTxHHURb0-{WBrwQ(ZhX(djB$ z`|{b~h9T^fOv9~7I81ym3u$cwRR4{rh|WgWszLs;K?0NUxzv@B&3M~d!H89)Ki8|p zAx&K5^LeTg6__5a@+Dnc!orx%+h*yBU8H0mz)})hyp3;X`sbgdLE`m z?DM(0Suen3X-SJQGFHiJ0yCcr?|5*sFDL6h7#NVcZP%#TI4$qq*)#tOc#{r?EQZ$3 z@2aWLp$#GztTlv&s>7+oXsj0TsvJKrUA^1=xCTk?v&P>l6P;n&yXO^LN(W_@q8;<+ zrl^L0MY!5`jO_o#dV1AP-7?iu{WwR>bh*tHu{|g2=;Y@7OV2^dGW>g#(m%7hqc51| zzcxELdBOX6%VscLc4GgiyT#Xi{j2Y7xDyeuA-Ni0gT+xi?gf7B=ljdA!DA@CvcVa z0|L+;v>c%L7r?;)(hZ#!uzUkvOYL5>NO9@tvhh0$`rHzF$Y-2yK|#}W7;QdVP@gZ+ zsaaj}*SXBqyKiTPvBhP!4o2f%PFbD9m|>CUYfa|p_$#mW%GO>rXDT<-Io!O+ur6g+)r2!T9)G#e18LaZ{9x-(kfMX?ERZ|fqNIO zME-!VUTi4TBS-t#Yyc}w_xB<;JV`o%DIac@7nw_d$iX@5|TOOcDXyH_y!EDC<-4 zp|dc+urTQkwTFY@)y(Qp%58N!TtBnjTp9Kr7BaYo8#iI4v7ogav*%~Y0v&t_4WK;m zs=m@rZ7B+osQl1Akfx1co1S^%Ikray<`Wy)L8)zO^LY$qI#2!G2Dw%`Dr z5h{+fe|-9E$fmiB08zRveD1Yhcfwjt`wk1Iq2R`X1Q91A_9FYJ^&CP00aaVyE^zPKKH@`iDe(E?g>zB zM}%7D;S$eKl={FXK@rn&5>;QeQly&S%rH!KMK``jv;;EAMXa9wnFszdSnAk@(`fr1 z-F= zHj`aS%|-LLjNoBDsEib}NSwq5JLd6T7693-79 zDa>6lA#reFs2?X-X75sY!}Ai3TB3CwkYdOYjVslXg2Yd1enXi;DDq zzWn4VT)GEKJ7~9}y%gc+VRK3Rba`|;zJX#GA9Sf|N_4kmm@4D}qpUcsJR#}R86^G< zK3?Ss!&$nw>89d_t1|k&(zzOxU|5m0sk1dZMgpjHbS26>7RoL1{!7IW2-RKL0>wZ8 z&XXqs4L;1xknu*uwWRl0<1N;nGu^qkNAlVzq+3u3ApfQF#4g+wm$A|d!l1-_#J;yG zm3x<}PZ#pXr1xG8Mz6(W3cqYjn`{FN9tt7SJ^^!VBXJqHN)y}$W&%S+7+*{gN8(YL z5ohchr<3tZ=0#=0qo6$Obbi|LG=kcor0YLPK%_&LwJHl&e<-7n0%_7Ta;(=}?ZcGW z7T+s3xx;mUgx3d_b@*|}#dzL(mHFl|$h7?z$9%yOaT#1n%`PLi)RS9__>lx%bOy-Akf0XxoAo4bGs7|!&K zVdF4s+eNgQ3=(XFg^_SFGpAPBZD=(#bR(ZH3bbYqvRA4Zo{_Q@@-*w8gy+Zad|ABU zng4o-FaT?4s=bRnwy7h7^VJ49_PuGt6%Aw3^Ipp5ilKx@+Z*_<&m%won)`+4@VZhV zHg!;XUaY*>_#7oW>`w9d$demj4)B-L*Vf+Fq$4Eu;;nfzU&F4!7U$Y;%AhWCArS@4 zB!BjEb@@nDgj}0dN(C~)NpQOJmA$-mN*V2{*!6eX*_INd= z=fDLXe|P-UT#c#q@ZPo1k2PUb7tiD4X_n7hlBX;1ska~8f}x~aVURQY=<+tUzpc8Q zt}=$!+y>*`gZNj!5}6~H{CYxa&DUZ!e6P7Q>tOFrPa3)|*u*W5>(T_uBFMBznE3Ltoe!nHloq_n88!ROprYEC+-9f037NNO1u(LBUw5W@{UzH{ zvCt>#Fg;Vo6hJ%Fop`;K%e8Y>BbS@MN%9|0Q1jv&Cs+^WQ3dArTb$@NUIfB^_Etm{#jDAgQ26YLod%U4$7rV$nkip#Q; zvi&FM<=Wu3y>RFweN0i2m&BIh_6I)3sl3>%#?p>Oi$q6=fLPteH>3$a5%w<^?vPMx92tvP>l`P47rW>_nRV zfTuj5mXN3^1KHqn*R7R%+`@egteS@{*&XrjM}$b7JMs=%nfx8OU+@sd&h}0>q$MBB>Xtm9SQ3y(jn(R)=rk;Q9OyH<*O6iS5u|W4h`{~KFJ0>Vg z-T)so!4so}@3v1B6kS5-Vz>J_5MmX%qd4QCPk{?u2foQxU_6MN!AHoM(wxhl*b;TM z1Vc4s3M`q+D(3}9B?WVa2OXyh?MVLn(58ibwBV0>n8?IKhxzI6M3eipVcjT~Gg@sj zr3x+?rRe|yL6CuI#nExc0uEL0z;~%k$8}GWF4o}+u)v>uH8KGVcw;$W=TtOdp<1JY z>&0-7e74P}{5i5mO07Yj)vr2GA`vMe_~&8aJdLJv*Zvq)>xYGcx>e%fBr_I(A0_M8 z-2LHMP!X^vW`ssRiVCJe8%@HcwnK0tLj!mXR8aCCWjmL9=>r&1a;T&p5CGt3I6#)n zhJ$eIk5qwZ7)WCkTnYj$7IOL0cD_r5wZRr z+F$0YH)+(-acsHH3%sf~mV>de!rMD)&xx*pi?5t-?`=cTH&3+WJ_0#<$&N1s;tVj1 z$dZbT!Ut-R-|uHiCcl-y7L}JrexMik7fubi8A`5AQwV^<3O zvMAAehg72IDOeXh@D!UxlQsEJl=%P>u7uKX`3!JcHnGka85W3nE#PSINg{E0{wD=S zhbu{ICpZI4(!`e|;c+h^v9W_rsI((Fd5G*^&@x1f@g~_SaIHAVGpyN+_`OUdWwfb%tOm<=fAr+k#^ol)(mY~0;e6ub=ZYCNZ#N- 
zSNQxcn83j??A=7`CZ0T&ugeN+;a%HG&NJe)xG5?EB8&P(r;-b`nFqjTo4=QjzwCje zHv%x^VocPEEI8_gyF}U z9pvK~V+b=B5R9h2qxin_J4Xdc(z@@#F}NTA05l9fPEw?hhAjzx_ADd}+D-w64KyhY zD-2Tu&1aX5;B7dB|7ae%NN8CnroskUKKeuTG8&qB??RczamaLa)$>;Xcns?hoc{q* z9+M4;1OOmB{j6i;?=bEeCj>{!Ndv>{hTuK?$PU_V`q;=oS?|dw)c;^gbCJ@}{&ol( zG}`__Ye=$*XQ*$OKh8J%p~-)M$$^{R1wVz|BB4`wOXN{NrnbI*51{T&F}p(eEGcsd*hV!#dDi#q_s*aTQp>WFk}=m;i!mO@#$~$1$9Q z)DGq=0clhG9_ZIYpO1%oOOXCn9@L{E;4r4!r6aiu+H?x)UIj?Swau?7%vC_yz7;uw zrt09C8M>e143Sp^{62F%Y7cKvioYprA2YA`bO&hOf^B|EM;d3yQ%a&@eJ!T_R8agz zHhPGZCFS`JR!m?Xu<>>N0p^zY0%^=&eE6sE2t;-_DF`HSb^{%BhQ$r;{|s3<%TMHJ=< z2|zw&$9M{Q@6bFxgY)Pr{WDSmS9tx6(qg>iC`@J*O^=qiWa6wb>`x=paOV7;29 zV-v2R9sV)9Z$OA=;{Vh%5`ROu$rsj-RJs|{-y{i0!c%SF3;fZ?K;p6@pI>rjuC#f!b=sjjvC}>NP?9n+E)e&15F=|TC@q-uXa7u-Bg5j0PQo)`j zZ>sI6uXDvqO08=l=3gWk4;h$nL2?eU*{%W%^aTRLTQ5o{K@?D|^S*KJ0afG3xl;<< z`AWFU4nje8|G`DTW0H%|5#FYonpW5RaHNV}l>`v=l>xAdqQ8a;@-ZLES1y)nap^|N zIod@?T5cf9@Xl5fFaWqwn6+$OsG1-tMX10B6x1(glQ5#>XBtB$*!pSG>0O!ob;qZaXL6Ha7B;80j?8 z{}q;zbcgif=%+p44x-ZaH9&k3`(9`qpP1PT9|b4bsqA|LO!Nt_em#gvlaEL4(It8d z{!m={WQ44r9K>E;Th~1_A+S4*pXAl@ald22vvnJl-!F=%~@s9ixdn@!ZJ2y)eN8w!b_^*|_SCkF)X@Lc_ zuw z%2r~joj8{{}al_64lG#3>B!L0}UvOp^dlC?}*6T9gg>t3e3y-T;K&pW}WcNL3cv+tRdd}&IGy{^+PfR0r+x@iY*)P$?01PsOC(oy1 z#nV-XH;RVIrmexLdt5^@oZl~M2Pq|3lTiCp+~(TAv`vbqFsoFHAuLO>0_62)5V};3 zOoU&*!AT*Ssn8y)OE*&%QZ|;1KgR^NDcJ048K=(eRlPU%O$hf`=^u$tJ|lmjoAU@A z22e`ua92W66s3+L0MV-b1)c zp==w?W|Ijsh{zsEg;)*9q1CY2OX;-hEtF)9qaXXP(lKfS@vT8WbO@;Ctn`>awixJ5 zkV?)8pif(N0QQe{=&aezjmMHNo?ETY^11IR!EdZ_!IP58gB@qp<-~N+t*?b>RLn%g zBYyR+@6h4Mn-UrR_R;?*34M45Io7xf_*(Zp^<^_lSHz~6k_BPtIy9ZiK!4^Bs1SEO zL$hL`wZf=(T*2y~oWMIxy6PqMz(N{8P}4W|pB179@)(-XAjQRkU!41R;6Y}dy1iZH zl6bmLiQw?c>}CK2(KdPV8fM1N;+H;Sk9K)H<=O1uWB5zYcRRt=XKd?r9f0mUADj;B z1kD2Xk&Au~#!EdkP#pWw$}6b@&a38XB~2m?1X}Qyg-crDKjGbHl-QA*j?XLP02O3% zDqvOZPs?=4Ii!(#j%Q`3+QV`#&)scQXNmK`XHg!swI9*|7TH>Be-H?P+t3PoGl?WE zNcJP|@|^46ceBYkMF~TM^25yD__nO291fj)$oXZA9*cUzECm{cyBXYR*XKgpffH<&>{;tfFZqBo_tinP;NIski2aIz4{*5|TACBfs!izV=|dyCoG9 zz~!gb{RZ}x*h1SSvXSt_*rdAQ05^tmZPcKq-8^b8<}>mZn^a(S$=>fT_0@k(le0xB zK?dbLJY%G$iM7h{J9;n{$k1uCEb_d0Py8G(rOp>cf|7CMeG@dK{!;U2!)zL+_y}bA z{vZIA0^M#9j!2K%_}_HXFCUCjUg&#;sn(m=v-z5dK1I$IIt6L%GF+~z;sj%>iGs-d ztwx5ygjCKHN1|PuTfJ-I_%#Z|fvfH1>!bv8)_lRy@E429$}Gis61OSMxbSDz90@-Cpc5pAmFzC% zYat}Wlhp$p$6TauFQ=x^D4)icV=O~eLCeNu;Fb$?r*7OHQnkQS^TBhzu)J|9JlHor z?PK}DGx#cRf26S-3hMvZl^6^)kqvEki=jR_iW!vp(FbX$%oxxTSxvk6$5_(ADL%Ez z!Ueg?O*nIv)V)v;ai!VF!R*GG*C)mAP4+8xmZbJMta#t;W_i+fm4b~f*dUjha6a-Y zX2JI?JN?gjWF@k#tSrEV6khcjk}3V3Tp;b)Du|yCd_WE^D{IHURBOmvapg&qb>t+c z@@-?b-+xDKrnuU7Y4peX;Bg%xn0&YA4i$<>dUQ;t1V8EA6A(Fp zwrw(9N>E;KX!cv;Qwg@;0pX!)ETZj*harI_6CDaPvSq9GXW)OGvhu|)!;XB0(TJry zEA$e5YR3r`w()t@f{N}U7$8ZfOgOhgw}fEuKE0U7n@DSRoF=b?(t?0zr{N#2H;AhD zlhh|tg05^rc{EgFH-g3J>NVz27WM8Ok7qs;zO~8!3eqC22PPS(% z$I9%xedtE-HQmt@k77Q@Kf^Bj0K8LJt(EDkiPF#_agy4YO%tOr(y4JEzR4NF6i3%I zJ}zq8*%t9Psk`Bl&Umodgy>{zxN;@qfWNL%(_)wLP8N&oU=lgq9Dqt| z45sReb`|n^m5`3}J#aMnY&tKzZlfj69wy|%qeVR9L9=9aS6VJcn$fsqqGrGZ0c^t! 
zz!HaD_Brrds;Is!h4AnDj;w$`T$pz~3eJwFW^OlCp|v&VUr_`JrY_fv*wiB*{1HQ> zR9)Qq@r7O+)U>nyKGEI*OcRiEO-mEd2KKBpH?vYs7uhTuHSqby@?Ho|&=lDH8%gut z3izkU4iz>ozAqE>=S(RPr|;P8A|P6gr=^v%d8#W>c|X)r!}J#dn9~_T)~P6(6P%<` zE{)0DChf+U2F04m&PV&3%*-3+4`4C;;48GBRV=7DnNTimgz3Kx^pTbA%!zv9b5y;b z;d{+zn0p^~5_mVrU6ScAev{rYKQeCpCghkx5R^kwq#uCSlazx1u}>%7v5hENn$qC| zI+}DOBOI3rSn1uR!~SbHAwBm@Iu}+8arJHf0(R`XtN?2dIb-?9Peb2X9SQ#iHuF*W zU19p5XpS20Aupl(RS-xBO_Jw`rpj&CDRbz?c-bQHge57#@|=7^lGT3jF(C>YU&mCT;k>TZ&&`H!$wxFytAfws_qJ?=$8 z>`UJbOaBfBicz+->C*3^M)}d`Yzwnv00~p~NLFygppXX-Q4;XQV2szX6Xouw$#!pM zJMeu91+nXl3+=g^nVSKEu~g&8h*%dXFDPm5jk@=NEWGR&R>8ea)8I@e{=%+a^WVy5 zWT<32ar3O+*9548(8C~dP@wwfMrF{=qio`JGZ{2I63&2e`rj#S>+7)g+RvfFPN>~j zf6i*P#-AoVlyJM@VsZHvChWhU**l+~ykDY%Z z2Ncg4P0;z6=~g6?y1;>y`0RsZM)4#MYRjZ)Rc~F+Zqd) zpsIJ-n0KL%roT^=ziQw8%ZoKe=F2G=Vv{j748R*U~naE^xh~17J(XDZ;esTXq6@GOwywuuo3t@JQ zCA2c!7{lIA_5 z!5H9~7v%4w!st_~Uy(*9mTV6d;{XfU3W?XhM_{0EpLx+~S9FoPdCnpZ7j{_5e3d1% zKJIfK4LU5ZpUbtozoFsH4;`Up0Xo?u7K_-;;^L+`ZtNS2`<3pztx~bEQlFcyE|pzD zyMWzLIFR8v~Tg??*(FfRCxe=H^UvFViAfjX&96|Oup1Sc102c5Y8d|HhWhucJAAy9 zU%SlT$}N_Xw!888G70-zPy4(QBWI!AVdvw%HyPVt>IrzJXVRA!E3|N4Vi$$y!8;4U zY>*ka?;IBem#wf3hQtMZn>3UnJWar}D-?+o$msXO*k|%y@QYZMK8!y1a5hUqLmBC> zAdKG1;O7u0XA8z9X>izL{~5&vMf=|&!hTIp@%5TxFUyUGaZ95fM9EA)72K2>hDZdR zd*|7`S6C4|Q zN@h(Z`%r@zs(r0MlA1OnT9@BhQX{^W_#gE*7fEe`mSrC9dnbnxq}}i*?`PN)KebIA zu7;u6s)>U|Ao`4z3Oc@16lTJ z(f=$05I}$g0003fxZ=ctLVW>5=-9ogS{uLLHS~39zSq7vauxJOg=w{x8n@`@TJ1NA zNtwqK8fDEFTGX^QYqr{P+ck-*tIJhOD(Y48Q2t*R z_0B7+2_s?lYXSRf_|;=?dj1)Yv8TJT12Qk8q9%r;eFr`x0)CSGgxRk&cE1=L||Z}$GTNm<(>%Xz0QWjq=VrC``!OcoOPuj(6DP9E~cTQl(aFR=eMWZjrc z5yAkJgf-3G>EO77ky?(J%@Wh=5q{RNw)|^ApsmS2I$gSP_uH9@=?J6v-TRWTF^8g0 zY_q=d6nkitvXy$lKHTUSn2_tyF4BrZ*`TH6*+ZthdifoXP;vFOlCo<6H{mA)2mn9; z02j9j!|0f3icy5q?EPNRnqj|bkZTLfu8pz!Df&O9A#SnU{q=RLR7}_TUB=~2yPi3K zvqzi0n-JQUWKLx#3EOEozTg|QF>lV8eJ zCyanlg)xM9%++q6#WDrg(>b1AZ#Cf3$?Coh8DsQFqekXFX3WbDw0Eepi=Ix=T+!S%qE>XLnQ zJZ&Sl3e+7PQc1Oa44GCUz8I;2Htja%C=E8`$4`E(f4#Rg&1i_6T_@5S!Rr}s*+Ss2 zQ-oA?q4L^jv`d%EM_MPsF(J3_{TsR7Ym=g+bf~K)JR2^xQ;>fv3nKe4WrqbBKi>s2 zP%^agjI8kT0O=N_Kn3FhO9pLr_a#(jyn9Q)APeqwTkQDMY<#g{C}DD#sYfHsdsIGb zRAzou_V4l!=O8abKb7bEAn$m&9ELQzzpj7_%i&gR6|{7cawMrl1`9fTgtg=Dg^}vp zxJNY!t_BSD>>CCD&R+=a{E9c3-iXv;hYZP;joK47d=&1afrfOJoB11AkXjJts0xu@ zY0~2@ejy!Uje{J$nJ~Nqcvi4A8F~>ukDis6vXfewm5&pyDZmi=Kq%@O?-UO>Xh-!fmpQ!1v9L&ud^N+*mN@V{yu3K z5NBWm1`HY4Fkrx`<^wtfYL`_OM(eIEaaOWbt#emUw-01EAUTaHXb324v75*e9p-2o;v+=yY{nw`w!4G8 zpO%*!0tlNpnZygqt+;q|V_9q4Vk+eaGx+3g9CbRLgkxkH&`tGYp*&I!mLQo4=1JEE zq{szdU#@PKK24!vgzwp7EQbRP>0hbpx`)efOyy*(a3N;AkhrZDyoYqq&R|cFJSLP^ zXxDw_0&a+D8H5eJ-3saL+S4m6i2lawPW&srwf<}8%vBqoX4*6(*njt488~K|9<08R z`s^3eC&e=Tb_oVK+^L6wl}iQd>hB7OCCE`b+A3j#hZa0(2_l-C^vD*C{<2WceP!~+Q$2GN#BEnQPW zhFl~rJwC?r}F^8S^GiK z;ZBi1%eqBRD2c?n9>Qt7UW0Tr9&Debt`0>bK|!@?<~I`|wts7nAhI0Z1^Is7QwD7U zz#9}=ZYqS*x-9Ha;ovp8!xIUe{Z1s@B|ojO84og2OuJqtqwaF}of9I`J{xwICP!0H zRS9>k1R>hXcskaT;Oi@!W_Yb1X=tV7{17$u+uw5`tIAn=*n5;HZn@slpo@8+<)ndqG10G}rjN7*|V|Tc){W|sY zO_+E!RxrevJYB4P$I&~8u>NhS8^@0D?9i?;&4F3D7CVthKUqmI{fA$OX7eu`X~G&D zguCRcJC9;*swzJAp%4j4(B^uPzDI;sMRqc$E3z*SvCLJ#JM>@m!}Nk;j>TQQH)Ycg z;#%P`j%m^=M)PU|*BiQ;uPdHSl<0UZY#^_#Qp9}^!4m~fzUJ6pg>LGt?zo7O-A{^2 z*U)@#(N6mqPe)d6J$LTM^$z=kkb!&c#ecT>2S#W}qMO=;sQPEVuBo^iC*5-z}~vm7}YJWDEXDwan{MP?{aNBjMHZ;Malui4^RKu~#s2#LV=cSv5OPBExF zeuuF8WbU4r;pu~zlGIsB_^a4_Nt|Cf2|D-6t$I7QOm2<7bvwRpXvVhiw}s3LLY$1y z$rf*q05397&3(bmRSw8q_GRhSyQuf5Y<%u4J5dpD=h_bA4D;-Ej~25JP8v5dC|WcJ z;+vN=1=Qq1fLgxt@{*fKbf4KQ(5*WfrbnqN^w=6SRpI&%=^g#kmY}9yd2e}`1;UKh 
z37$4WI*Rsf_$LV}g~!h;du9PEAw#~oZ1!5Bf3`IFeXF)ZtnKdeatr1jUC2eFSogHn1z`8ZEF5EgtGM1pqN+{}U&g^u#WxbC2b>`(`OFvC3Im9u{A{@sHD&28a(LvAqW;0WXC6WK2Yn0l zQ2%Sa%r!P38F{?O6yx=p*tF-*RJX*w;NF<{pvBKP z7j$P3qEgs}5|rUsw7x64Y)v=p4lf^l^V}U=JUG6kj20TpW=Kd3BYJ(;Ykf=X1!OKs z7mi|0fG6#xelqZ!tWl5Mm*9JAi=1Y6?9)hb+nS`AVOA(SJ##`=(ZVKAoy*7D2{=k{hiNLd?)Xo=a1t#UOfkcGMH?mSG z|32j`0Wz-{2jC7rblzQEG@N>eoJmotLj|)FMpY;KrcGXe_Q32Xwq{s2&-!mB@{>wm z$mGA;u_A1(R5Ocz3a*v*L7_%jw`%d48>Ll!b;Z>eR#qV7vv8;wIcG(a!DT+0S#o1{o5J^ja_we^7ruY*;Lw1%4+G z0hT9?cyG70ZkYGJcx|#fTzCRcc6B`V?mV$Gcvq7Ic_Ht??Z`MnM|>2oIO@Sdf*uMf zdaOi-**5u4+#byzQSc}cRR8ooo=AQ_b$qj?Sg90fS=kF@)s>{78uHZjOP-x5oRFqj z&AB-49>CXD=x1q}6zAV&1X{x7%Y+k&a*AR2w}04P&F}^oBU%vF@8#lb5;Q|(;r5s< z2ZMIw7_WlTwl=W{Cn-C_0u!kyuyT}JPXmrVD)wEkg16xI(Nh!b*GA$eApNo$)%f~L z9}twgk{DB<9lXeS&tj%bSR#hyk2I06bL?@!!nU+xr~4JE=`ljs+oO;nxJSYsquI!| z4@|(ixO9nRgUf)APO zHC-1-Mu`F%?(-j}0=-sw0muI02t zHA8RuNt{wi9{ZW5dkLa6<(N z)}x;N`Q1=d|8?1|@?NLpN!Uo8!{kOuazfnSLaI?2H|AgGx5Z~db{Ofs$80l1emiyd zSX~ybB4ojmll(sa)E!B+rHIc~Wk$xSc?EUn_&k59v&Lok>V2yQgZpsF-FD%Wk zRmdEBoOiVbxE0iU{_4owaly5X0X6pEQ6;B!si1D^z>MI=`e12)>Spf(S<@mW_isCW z>6O9WA+`EZKPmEVy0XA~UdSdAscc&R`jr7>$y(uH6W_bvHwAZSiI(%XM2G@u(@Bfd zrIgIB=&;?LG#3mf`Nwv1+tRIYNAX(S=u?TebBvQyLVpRJFwLmD@-kBNQy--6B~JgE zqfb+Wj^SIw)y)`udeb~ywB52=L^LW$+X}@^sC%v4tn-Q zHBuA;60W?;bvcpCK& ztK$bw=YE|&Xw7J{879u;p%4HWI+HxTRK!^sJ?^z>Qgv~;EXw-H#9oW8Bc;a*K@>6iI_Lqu?wI8BItku z47lg)q6g5WOc8hg% z<0i&l_J2+33db%_TzgiRr_7buwYm&zSBo_P5zqvthL_SdePGke6cM6ZeG$PRGa2<$oOn9H5w8zj z0=lL)_~xL?mK_K3rM)Y))>6rXhSBA$K}8`ZL_l7@!K`t8cjH51H#8SrqN_p6%y46W z2LUf>z#Zd?D0U8hc7($3@_d9wBpX)PZAc{@x8Ns!sC~KgPyVE9g-*hrf9kpIv zyh;UF%DegG(*CtJTMHzO082o$zc+>ybHSVn8%+g9|8OzzRzTLTw)Me9d2Pg8TXa+< z22w{r!wa(HSxGei8fZT(f`-0S*<={VyX!9Ve7~I&=uSFR``F$_v+14h8RM76#cJRA zCEdoNM?lV_2Nb^bT=gM(iSB-ov)dmiK`V0t*5(Upy^OJBv?u9%%}nfvIiykRA8SXjhRGg7vd!qzZ>n+A&{<_s>G1{d)GB7Q2d7SIt z&&2~lSe5yM*)ANmM9^@fSo*kEsE({m_-IPBXV>IrbU!z^+8v-I(>I(TOIKwx|MDZN zlGutuHPLkAI*9d`dauC>t~tF8Z7>YX8XV1jX@6n->)=)cy%+oMl-<3@v{C%$dfoX! zjV!WU#8PO5Uhwxp1|DsAknnIBcGI&x(J^$Cn&P{6Z2a$40arZo;xPRJKMKs!6BIN3 zHb*-+0vy;9F%dGm&LM1%KD03$)*AhvoOlf%HBU6SW-*hhN=J8*%iI!&14gn?Gp^a`JjGMDEO;O8C^r!{2H0W3tVh z{eHX8vDlqR6sM+4X^n`(Ii$iIi$={kG4VYk-v-6dmzisw*rgWyBN<#B(8+Z^myEo0 zYV}v1)G@8c8qpRY0yO(6$kan?{JI&hDKK-z`+i?>rT5yp%Qc4q+qZ4Aqdtj5rqRL4$tUTSi70@QlO3JoRGjO zF~(1n%7$uR$y``c-)+&x1V%zcS!cqrm-zSUkn4h553EXxwh5w!jRu&CoH8=@GZ;j% z+=mMPpLJ3uYIQ@@ko@OglllIx?dbN;;S<>69a5XZi{fCktb+ptBToT?w6Op}GWRof z?3)S%{xcyz;r_0qYvwQZ%`MQUCD*EoyTkXW|3U>g`g)_SHMAEGh2P&D;ki#R2s^ZI zU|5p%%kE8$YhNP@>|UDz|2~QA!YPzK3PcA_d>nYa7N2@^_+4tRzp^_K3g!*`8GI*U zwvXF;97?Oah8_Y8CEKyN8=0b?a^jk-t4bT%f(lfx-IOY_F0Hq^!3Ft}R>B+z>=K3c>TA_Z2IsZ&)-2!0TLqHyyOMM)-?&k6 zPu?B|qAHr!NK0s~y;ukU9EO-VpMu<2h<|08-39tUz|!YWs-BQfcux%j-S+M)Xf{C( zy`k&PN#(C6w;D+@m#cOCLg%n&T?iC^60EBStoG0~s(l(Hsid2YYc!Y^yMVa38tK55 z#mm{MXm&VU%dsUBPUxBtE`{6`C<&|yTR9$$7IC8G%=s^%Zb<0ZZs;kG@`ry_=wIHUo5E;0Z^{?IaH zqt9Ql636QouvSA<>7YM?;&d*lW{Q*Fo*k8F z`x@E%EKYj~&Yvz;`vfH&Z9TgRC7TtzJGez~z}U|JCs2Pl+*D_LJ@oR0+!oVaCyvpH zzI_Uom{dSmGc18{{K7Njbp3)*;K=@!g*zgcIjw~nV#tmyM*6M)q2@!A-e-a zPa0i&LAeu8jp1&we@p84U}MCnIlslLHpxu3+dq^oWi7)q3{%6l5(j|jS~DYTskE2` z7?lrp2`5IhO}|-^wUx#!j8n#meoNdFi~Gd4Z=jmAF(y#abkCX{4)Pr zb!`0}{DpKq2EspI`F(BcV}3|H4aJ4OlM7uO;@WyG>@=zw?e#4J>Mo(zQm=TO?Cs53 zl>Gtw>>+aVW&dAt%Tq>(k%f(k2?4b)QH-gWfA4b2KZUP3SE%5|k!2lQ0;S_ibu)vN zp4I8@n%T+&VPq5=OmOCkSwVo4C%IGrc{0!QQhQ&Ni_jtrvZ0ig5U|sTzcq1cG{XQ! z_0Q$n$@K&?Y2P{6Lhw8;a#OuCpb-iZT<>n?>dkZw-mhTy8fyxZ(QC^dPHm&U69!=! 
zIqQ|RAz0OC`#UB2;KQ>hL%yF;qW{8nHY7atx@mG@y{3+U@d z(jeYFAuryR^57#adKhF_>=%HU=vZLhCMcCCA}$T+1r;}~(v_03(}Wg>OT7H4Rb5(^ z=X=TbH4Cw@WdBB6Y}A$2C6-9P#7AqfwT-K9@~KuVHe26uMM#fk2vA};Kc9z5==@Hz zF*t*&f9tUO8*{a&f>!q4CLMLjOO6IzICVY5FE6Y*0tcVCGhIUE!wVY~>oEhLe-WwM z-2Kf`eIOzK5H$blws1Gm#@RshqSsjH6A`-f<{Di1z7&kd^^2L?DkV19+haCUQ_b#? zoJ7)^1zdH1?L?iYgkcKIVc#EY6%E?*W~1akYwzC$`cj<~1e1{%inTz;@V!15ajP~5 zBZ^lvy$c-wELgSFG>q!K;p+51dwwl8O4ZwjeqBXXHKmkh%a(~x!U3L@B zp=SEB%uO*p0U0XyVA+d~I!-U*>*Eda+p7bhFr5EEo_oZ31#z(4aRQ@-%X2zhuSY#7 zK4GY)^QRrjgfVq1K?csCugofW1*X0rHA&5vJBPC`JMnqMZ*Y;!H2X);qko8Cb$JKE z1I*Vln5%AcWrakVMu*U+Jmb6w>2kLwx0HdcF*KudB<_l#<)(KomMGpS`(AO=nrQ9! zxf3~&k*W+zjBNzsuORV_Uw)@OB`Nh3dO^_j zncN`A^KknQZRzd0V7QNBK+bvKXnf9qukyUvK=SKfld7wz92r4NWDMa%;P13ysk z|B$xJG(!tTnyqUp$os%lp9Vqv1NnS~J}Q7k&1sFCOE*Mvu!7#xZn&v)LOJE9WiJM!Jl> z2$Hk}GUyhQEO73Uf`jK~V0Q1*d~0q0v>*3mi|wMjVTevS9Bqm)pzm^Z|1BB{1{gRy z=L`^u_MkO90a`iIGn)!)W;|q9#C5xl2$4u!b<%fTxJ3R^NET&~ z({=qmZU*^G=1^S-v2nyHuV)gBe9QmnuQ%aWoCjSQzIDp_Ip#{-%R#E1Sc5kLv3^LY zGx^x40@oCdZ*p^ytO#e=sX2u(ERg;5~sC<(C{*N++EEZ*9abR@i;L{of~m;U=oy@Mm99B(#7IMO(e zreEnwo0rq=b2)TgN}om{#z6=3f6!n9eKPI?@tRClMe?#v%{E}zh}ZTL4y z6{la2?*NHm(aBS)Tc?K|N{UOPcRRG%MiTU&k!yn1yeUmM(gOsMjAd;2ze9za#yaEN zp)=l3MNhHH9-T2&7@{j45-lNySFYz%hFSVzMSnzi3qiBj2|S_O6)o(SC!cb}W+`)}^130+~;@NJs9lO6LOe}Fr%%9lg19HLB44ap7YfuqCYJ1D2(& zlq-1Si;HMS?fNp>jYNar_2Y=yHRI^@;qZX(CyY82`gR+tOx5;%LaQi=B~_%jQpnPv zGY!KSO}rHolUd-e>4lcqXy4V(elD=W8HNgR9xJ8MA!{kX*FJ@p9e~_;zui(HxHSNT z>%R{2`lCA(sWiq9|77G8M?ITf6uA8W( zeLRsdj~@6tH~M}i%jV@sCIeYgNyFK*N7fLr$x`Wy^pkB*(4% zVBk@l$zA6Z&zF5X)s2t+Jy1m#pmy5#tXr_*1I(Ihzp6+TzNs~ukA4J)&qI?0KmAr4 z?nBJdwNpensuYJ$V%DzoJ^|_a{t*HHLB2*}U>h4j=CCrbz`x|8?el;O)cqx6%z!ef z#V$Pkzd$7)O>TKZ`;;(w6yAx)9G#{(J$(2N zhjuKs_o-B^{m2-5XKUVL|Lz#~1IHBdkTRq#1pa_uJ}RG7HIBz&Xl~YBCsp{+U%l1Z zoBKss)x|nWcGTxNlILtJ$xXl3_r{iXr6{dnm$*h~@iF2&AQEE`v2dMlDkIJj0Fxe0our_l0u?5(y7OmZ}QybYIG)(wUF!Be% znn}f1oS9g6Z$myLn#cK_KSe76+do+N|T|ue;q^nL+(cQK314@ApOmF~Io*W0q*IObCOU1>Nc%#7fb}vE0=rV@eclw)IVHmmcn&P`~S4<#iO;$;$@jXdQ-YKk!Sb3uKA$&HOYux)Q(s?Vk z$e(A*&pp|kQ3F56xS?r3Zl)^3OZ6LR9`PD({~#6htEl(ie%X&hNdM}L0LBlxkGtDH zKpA6wmG}Bu4w*NWzxK#oR8 z6BKx0iUdG?%s>SBz@z$M4zpFb$mU2tvY^}NXqDf%QF?XVM{jmyF&H|5Mn&Mv#ya>*O?~BpA z2o0jMsn=w#IpQb+EN-X%bVRFulM%wN`M+9BHnj$ET8YNt2m?qwc_{r4C5DiM?B;Yg z5Y{cYcF=;Fok(p*0G@3KgpQPb5-j<0Hk;T1=AZ2Ya}dD#S~8}Jsbw8T@>UwDm^c&_ zIihpB(tkyMrrUHBTMemXWHK+QaEFfhu9$QgHzD3eP5)Ja&x>o~Ntu=IFO}I9Zyhq^ z@NJ4e>%cMOK4G7mf#cvtQO*1cED@MsvBkP(adfCNirS%a25qg>wDZ-`@`K36bzW=i`@^Qpx@l^dek zmC;~c&qAE_Q-_RI*<|vChv_f*uqzUMhMi3Qs`Wf|E@K#QId)q&DMbQu5i)auGHV!5 z@ou%oZnHI)uj(dZ(bo+}`dngA04kQ&@*4KMkvix?qZLue)2g&p;EZ<1X?RzndQu@r z`Iw5E9Xh(YT@?N?%fcMRv_{UUqmofVn*4M@)2hc}oxvYx*inrahpH7Fw>7>L94&f0 z-l(lD~pSvR*4pC1uB>)kP33GjC_lLVk4gl(Kk!!S(KuI38i^UO%L@ z!Kq}KTXK!p72q}NdDpyNoh-cFOlucm}3c>8GVCN^`{2Xe4hs7q({ zlLzYFY?tRxApJy47o07h)O;@^n-);%x}uo!&|yOw&OqOuc=$u;>i)OMqi~SbBq<{; zA=Qc6oj%Usi0hdTD7PmULITA{?-Z>I@UzFGyJN#y=~!lk-o0m&lpF*9c+bPjCBZ4` zRxbb>ruTJV)9Iq?X)QYZGtu3_ty1N~*GEouKU`opuNcd-AQuv$e zR`2VsI)X3`M)%{KF^l9t=Ouaz0C|P}_t0c!r&8M-$-8xIAjvyrmLJ@1*oOFyGfeKt z8!5X=$H0}Hml!@lAx89hSxALh3q>WU^zKsZMr;?GzGfJ&+@zk1R!p40@``T}RH@WI z#P4Ev2I{!i)-l0PI@AT+au?GK1)Di7vaw=GkGi@g&@2vcBZ!JVylYq+w|T=-Sy(b4 zTrF#Gy9(02{r|#8pW5@TLOStCg@Z`v2VTcHH3p7l&Zk3MN! 
z8_!2PZ>&1E6NZVcLbiMwEr${R;u~Y7LZvS{GGA3s$$Lj$1h={7q`6L+`W z3B1b#*Tv9QhhvdPBuO3cu=#7g{=odNLX+4pTXJl|3V(txm1PSbk)4L;Agc>ojh%!I zA_1g6NhXD#YNB1fY(Iod!c7TSp<))t1#f>uYb_)GKMI~xSk|NG*)kd1JhXfrOR)M7 zW%k7c?aOang&+lFLK+{2BPe!8wnhP2tx6)`B(%RVyigoD3xs(~;}&m=`L*Lm3nrYL zK+vpY>01zZa%BABY2ehy>^G$Js*U@GnKLaqm#OB^2VUPy^_aD{f&;xG1GU5wwKB_V)hcBV#gp_0|nsh~Gs`8Kg38UkrXXTmku?Mo0 znb-S#$KFSVXSmwHDKk&~whhzJ6i(iFB0`jcVVlwmYq&ehP?${EBKdI`$JH2iAOecF z$g`2{TA|DE#aCT2yK?98LP814ftMbIA~{R9W%Z48!Ybba9dlRpC!4b{>>H$+Y!xr^ z+V-geM|Smk513B9e&5MrygSf3rj??q*HZ9MT}mA=hq%aHMn%U7muZn(rr@cD{k!9#d zM@K783Yjm9;ETRzgE^4*if;54FCWLDfA#Sfh@v+Ptz(HqSO!W0h^tCr4wh_(0Dq1v zgG~gj+D$A{7H{~p2m?*#11drO!5KY{$4!u0FoWou4)(j}AQTJ_^-7x;=;1wav!S~e zCcNph0}beNU`x0(*YMwI4^pZso*r#8c&Ah$bSQ|2_54TjyBJbRvA4@TEPFo2{8W09 zT1T9e(4v{~*Z)i;UsKc?I!qg#I6T$mg60$j>k4l(wq&@aH-}$tyDVI09X5`9FExC_ z-stj^{7unI)r36DlnV@3Pp%e#V)?`#bFWR&pXB{9t8>kUr3W{DmNSOMJ#VFD{`{FX zf}DuJZGK_)yrO4+fbvg85!Yjt?lzoma{UTz!Nr!u6}I_jY5uH}EwrzLeBjN1am5}v zW{L~3@>qN+o4Iiw(bfRSUrFtm!m$$mdZ=r5TN_Qpg>`Jh*Io?=PF=^&LO>^5DZ`f;X zA68@2k>3{kij=Ri3X}#&RJGuqd7~pl9Furt+b`?<{G}Jh2*MzoA7x3*=$J)Sb_WI~k-Y zk@m!Vf)nnFeLt@~0OunK`|wl)Qpg7+4p2rKQ4@RJx9yH}*0qxAyx@$ot98(J4x=6! z^5iUte;{!iZ7$@|n;}ehC;r!pwWECugI9+_D-6@SJu6buN2%^kbG94;dFmp!X7648 z5?C8>(GO20R{$oHv8PeVb6skEshgAnI+MM>vd(Ewr>N5k)WwaQL(Ge?-2b^GSiTos zaV)K}OKVd#x9VGFz7S?P^`mAwD)O9u4Ovx~TytbaHh!#TcW_dvE%qP1ba-jS!EL_k z{(W8v6T5&bE-atcW9p{9T~y~&5v*&md>#EF78O-OYq2mHQ-Fk}3~kk^c^0t4kD#$m zvwe@mJ>^6SQSJom&0yn?(kjeW%gen$#&|erp))8`Tq2^lY@^EkIAmN0C)l9+7^W(= zdDu{Il8|Flb8*V!bW=@NLEF+x)uc^Z3ZK^Dxl^qc+V|?;B}5*&S+{y-ym6ZRwoM?M zwVjdxF->KlgQ5aF zoT(%NF@${SRK}#yh53o@IC8I<#M?Dtr$@wvItW3eSbzZ!I4+wtOk+D$Htsl@p(PoO zh)mo{%zd<0i>(jXa``AL6UQz)S$=$_C!`J=!ZpUf+*?}t*fe~yO8XTu(|nw1wt|EFN?4dVvP)Ea?i!-`(T4+I@C z1ROJ!56fyCKrd!aqBXe6bM1+$=^VNTW&~na;HPkT*vN++;gxm z=Xh4s1P=oKy_yhaoC`TROa!CR9-(MXP7?^;6|y!Q1Z)^VpnIeApX#i63FA%uxq zUCnF3^$&mwYjW4?3gW%sXenZoxrA#i{3HI;GFRp=oG=0g!EgUaN1u-~9ujr3p_;{i z@0jC}dY*)cB3mxti~a*;Kh8??fD$2>pa#vYU>=-n=RY0fKF%oM4qrfbPY%*)_4I;||*9TNZdAz8qO30$O9>*-(N>ZlAuSP{pPv4rk0)uv7jGQcla z%C?@!5JR>bbZ4tB5Zx!4&Eq-FrJYw|lY5F;T8ogOfmA{$5j4M@9Kk=u1BJe9dfLjo z87|Pec8}|i9gQ#NIzRX3xJ;w<9oQ07NhrFb#POY@dwi=aH2gfnkPm#1wrB$j?X!Dg zY;tO!t=%ooBV}j+s?aKukPXBUEZy_39zCQ!G;n+bD~*N(-C1Pc6U~@-;H#S^5>yq+ zfnXRN*EA+{^-}kCvjS+Szu3!2jd)+oQ>b4I)7Sy9)m?ikawgZbbzmsuws?L|z&XvL zQ}C1MwP@;2 z9wp2KE^XtD6=`rzb4=`y>V-4Q#({+RG-I~T@{^mO6<1>KnoJ;>l?%!=QAqdqM5Wo) z2IyN~MX9y&an`Fo$b8(#11~v2eS=y*8wA9^M77rNhfH77N}}EUK3|bX4{f=OqKhuG&V2;p)@Nv?#Nhw_Ajm?7M@SB z;GV|Uk0Z!Du?Ho!3o9K8HxEB+pi7Pyb8jou4Ia77SMW9)fI?-b@Mo>+J!BJ%8{7l?*APU;%g|euGg}k4W8RGjiC1a#Pm^O14 zWyVQ9Xy4-_PM>8s$vvjIzHHt47U6v!75TrLU@L=-UGaqGq6H9OFt#{0bPVhSZcx2j zW3J+u(B8gyUb1d7!+7!og2vxs^1;S89c^ERkeUYDv)Eci^++`omB9xdUiJ?+<#LPZ zHJHX45PP+R7VQq7fU02^FhWMI?RooSzMHwfz(<+~I7!`frD`cu;o_0O7$gJu@05U3| z%gi?_O8j1r5Lo9%3I!?b&vA&+J%nHE$AIZOvK_7HR!mXIacl32w66J*964@Yr;X`u zR2e3v5PH10d8hrA7?w=sG0Vp!RIJu-W)N%)R&&fs3h({8s95e@6V+*-`uzR?S2M+C z6p7(is?dDO0KJ0$e*Gs)mRc1=05S~ZgSUAVf7}e`q>8L-)1+x8_OT5j+|XKjEH9FA z;>A2c*zMO%S=r0r%}X96-SmX2jhYt4swrkMLgg%7kk+5omjz3@mHXLfxDjtNxTaEt z18~qWb-)C?F8tpbwt%^HzQ&s9XI271$tPE)2PH(=nn2yQc(?NDUk)E3+wy>D%LzIa z`Hk;Vpiw#+Riz$BN77oYkt1coE_YBX?KwFG!?saP_~j4wgVVjyQYG_H=R2g5{L=g4v*-_R|CLpZV}JqJEz731h{rXufsb*6{+)7b(CJoj)^h zJ0=wO+>OBm4T63uBfsT`@>MtRKc_m2BLrSUGWn;ZJZkYVHbHH>}9BoarOfVgz|y zkM+!M1bLJZ&%nuoTl~K+D&Um<$NF2`02PYKD8*N%jWiir>t?}QNT$BxfoR!OL2J{c zT{^?(ZOLb;Q^6Ayyudk)THe-Hv!kqzQZq9RLT{JIjH-#$5IZYMP;#~g+9~n)%j&@# z{HZ%dFacNuWCrXFT3=1CF~0nbw11QvVm{;0r#gU24oTAZ`B|E(cWnF-Rc@$k2IaV7;uRk3mm8-OfS7eAF=2#(i8b 
zJHrtfZ~6yf|3P)E(zyD}5e7@b!lwJ$K99(o5KD4|S`sTyjeg47th!NA`tHS$fym>` z8>2Kb0oqNy_4H)2>hgzZ4)-}_^P{d+mOohk=wq->x?K;~IK!;%8!VERIPtqoNQhfs zuWXdEUhos>>CkXr&&Q~JvCjCH7dh`)3D-3-fofRKbh&66qZ$0ZS25ez37Ly_P&S&F z=t*6)0}andO}WJU`F@o51*8wpOMdITW$-=#!2k#lAV7w$pJjHh1$uw7VnV=YqC5z_S5*X*%nQ%AY=p*^=l-<4K_@WWwyzw4~f6n};az?yxk? zxuS14yu>F-*9(F88s^^;hKMRyzbtKiVH8@LH`Lr6QRyX3>O$t}x&meZz_9{Lh#<=S zbSEln8I#BzQu)K&KNz0hwZhs3$oYM4jQqJF5m>a=hibdWD-$EgIep7M8U4rojqlS> zG-=xqGS#7!MV54-o_&6i{b>@zBT5lhXAp-;>~PJdQq8DdU)WBAK2X}5-!y=`mmMjF zp0{8DPyg63%^OCR}?IH8C69`sZv3AXE>a)v8YBO46yl4McUq2q7qBS zA$1iI3#BmX>IqS|+Q8J8D1o(vTVe^0vpmbJAuf6fC29M5k? zq)3R!d(CpLD&0y06z4P7wDuU?cjceVYXEVMOXbM!c=VsdD}^flDyt?WMfl!z7%m#} zrPRx$!Q8noT&AC~xYD+D^?xkC!$zxIiKxk8Paa2AGcTW^T4yk(xvrwGTgj4u$JK$Q zrqdVF^!h>!`(Ae*az76^+f~M_{Jharpyr?P$d}^=(&%Jg-kX1>RQ087rT@cQ-AA#> zU)=if*Gz5Sjl>*X_@bQZn38rPmCCj1{8Uy~hi9+1ivVIMu{&X;qvTX^xJ&f#(Gtn0 z;$<3-Hz)9Um`ow|X()i+GHTYR*JI1E6eZYbMTAeBc%W+cs|p>1*Bv&I-=;b3s(k$e#>v> z16rC8Wa`H(|19JkNWa$V6^v(8a z^PI@M-E;MmBA-QLqtl4eT{J(&5;?xLAa{VlsGc!&7?B(vmtXZm&NkM_^1w-_JcCqO zZn8Xlq}Wb<t%rF+7}Yb~>sgz9hbVx)@bL_vRtNoK z3T#4@NUL=K<@j`GV{>tixFMU6IcmUnqt0`5=IfUW2ysPzvJ{kBaPY52AV$h~%t zZ>H#h@#G|j=hDAP)Fxe=8KA+d@mA}V(q^)ZCtjyum#HFq^a#WI^;-LmkYo<`6&~TD zmPI^698bu^1zRl`XE`!mCE(+)aytT(B`m@jsRGI}w{!^Xpo<%8d^UQ~*?$oCgVh1WT&k=*|pd9d<9H}Ip*H!vs0 z6gu1apbrQnHq4zF_quM)Y|>a1VO&?tMSboKEs>+h>5PH~7i zp?9%EZ%Gcb2Fl%HrPh*Mp3_yCm=^%(&0Q6+%(L#34Yl0KOFW=3vlM#Zo(h;O>!g!% zGa31?jP#y!LRZdjpQ7fvsV2%JZ$yc$b^X=&!hdlKHfSn;#l$x9d9I6hKbF$}kJ@4C zH7FUbs^K4hRns zP@$T_fxxL?BGJ{Yr8q7%v1s-!T+9!9x*UD6q;y-d6Hg=SNOzZI!!!-F`ldA~WS$|D z@LMuFm)m!Z4#JO-eCnocmB{TJ#;Ha>+NvWJ+!n=y#_iVAy&E}IYnc&O`679-DBK+L z922J7^iAz{Scf``_mo1l>meOecho~<$b?EHO~yQChr9by3cwwFFtP8|ExxvLqv4{N z{Gy{vGQdKdpkOWj6xa7Gn!D=E#bU&=QzHQSiNOob|H8{NiTB;Nq%BonVxM75grez& z_*_DEcU~L7p@x&`v@k$z_L$5CcX(*=-3V1}6$;{K1sO6=mj+LyF%ZHsfs1Q+IkAAT zQX;F~1Ps{MTZ4ZlbLw&_(15jHhEzmr6N z#bvCa6E&B@lu2ynaIKoi(bM(>Ba$xtVl*8yxF87gwA|`B$=g-l94a3?Qt7ub-UZgk zqM|z8@-=<;2!uEIcZCVi$C2n6tIAZl)`{R>PSOPbMHjRhYz2r385wrKheo zw8b$AvsPjfFLPhr>lZFC&$T{6Y&;>ihG#4tvmCG(k^_pT7OK*5J(_xW+=(;xV?@|G zyisP8S7z3V-ZcJyv-=et8(Dn1EXUmmv)+Dg`CWp&&t-`Oz@uWRc~sdU@g+C2Q=kb6 zaXLal7Jm|5${FfaKtQFwH zM+uqJq*WR8I>G-@f$n9XCRjVv#@}58@&l!8DP`jC%K(<`v1bp&SI@lm+VX9YR63@- zW_ll1bs(JSf!Ki6VT;LO@O2D}>7=ICPJR6NIlScj75yAO8Dn|YIUnM|-m7ca6xztK zBwlt>D&}7=>LBM`Deg%Fz(L6Ns8ERa>=?)=Z6ycsQ(Ci#^^sfs|60XX@Dw=~6@=-!#hvE~`O{XN~1kX2mqL1*NMMN_ybA3%J(Buh3JX5A~LC z{BEvRA`8tGC@&PeS~*+MMLHwvp^Y7d{4?E@T&_1E1k^U9AiJZ*`|%LeEsS``hV3|9 z*jIljz766qCHO*nHAB&7`kk?iay`6jBK`-$IzNlIJmk=GKp^tTTQm_6qN@9aVwlUW z&G68IB^tsmbFw(`valR5&IX&C{&LCxegD*DR}@XrJG>rchQuZ&OrXw(>PHNQHY4Z; zi9?N^)QxM*RU6l4$^SQSv62oPAH)^}b0&Mn+IPz%J!ELfB%lM@CP6RsJJ|@EQ+U7l z%08VU=#-jh{^Y9)Rtp8E;8FKGw=d8fY756g{_V;SonZLfa%Cr{3{YNr;do|R&YTb4 z1`TqtuU>>Zz0VWP{W>C4ux@vzn_pbMT-l-2?P*VTafrWa?ImH1JLp6(MSreqD0mM+ z0T&#G!XSXN|6~tun{C#w)3$cK(lnBvu(gM&Vw!#Aod6V5Si0-K(fW{FlJgDNz5;(J zfQv+^W`3VbK#tD6v@(0k6+>6PRSm)N7fL;+Cg+KTEvTEhS40r-(>yBTv1yg~O5EWg z2sioupNkO&s0OS>=7|2WroW1pwvyM$1emyELcsB{Iv@QukgjSmp>R-UH&!{rH+bs-chDY&8iDP*V~ zk1s?2s<`BQNsz^b_G&5HT8Qv%Uq5kO!l{KRs{aYX0Y;PlD{r@F>B&>58`LI(r{_`_ z6WZC9lH<#4-KoBQ5SGUrzDvAkZ{N0Rht=u6sO z`5eK2yA{pOj9*qY0v7UwOiqw%`9;Uj0eRO^*<*@WYxF+ebwZNL3fd&&<7Cv2I?C(72~0J)Q&l^mSIhpN}%>T8^> z6)e?`_wgA>cJ`#&zi#I5D+3UTS2S`gc&^-@%Db#kbInXSS4#3gF_R%l%* zS)%;p3RJx=bfu9B5$t$%I_yS=sT$t~F?MeHLt=^4O;eQ{dMovNU9axwC3f$BQEq?Vz zb1AKHgM$jvRuRa3Je;?EJJ+<&0NZcgGU67libhCcExzWv4W7vZwkt{hiqBd1#!X2; z%x^n_S?x0{vTWjN?_9DIADTu^owN4>Qi_}j)J&Fo zS)*;_MpBDWdE-OpP_^b7va1pR&>AZD%AqH68n-XAIVbEge3CE3IB^;mi9A~yHqcjG 
zw|>YL!FuIS?QQZ-kHj;2C_u=)@!~E>PjD=aGtsQ_eyl5#*WK4vw;F7Y&i2e9ygq5C zJrP_YuqO(VwdTAvf5TYe5T05CS0cB!?PAW<(%1G4295yK=u!nc_;&6KShJ8sU~Z6l z3M&$)$N;ZQ8H?e+M}}7vtS+2K6Q3Ed?|c-hcw_Ou>#uv#NBvc4{f;`}@O$MCcy5P{4IP$o zc|QEw|6ldXB6pCzNrBsdw?k+tUm;;cohB*gcQ0gDIV4Gmj%S0~=+0rN!2FUja`aY+ zAh)iF{wjvzu2usLfaurL*8gTcyXq%a(1amvZO|lB&?(B3rCc*@@DV*B)gP}dkCO$^ zCIKm1@6vx)QYr(*@#NkZ8IbYDzO=&sr7FfGq)ZnRc?`BXLPlOUx*!}{uS5WpZX?ZG zP446~FwAWZE5xZ}@BwiHAf~M9a%d__!8P{2p~jlT3}KWkX5O-?MwTaqAtNgDdM?pm zS4&N%Ajl<0|AhUysJ528<*8e617QCiTN&dCVZv9Yu-hb?mM zV|j%|U`Dr#4ih5r36F0yrV0FibHub`O|Ut^1#H~h3ahkp2d%b61G_wLagMl_Y1zzY z%Nx&hc+)NaF3+y>2|b+ujyeq&hzC3$EFm^6yB1i>Wlc;cpeN5-3`Gxodu$Pg;Ae7o z-6uGxE49kOmwhOz1L93c`hm={5tN8+Y}}pUfY4Gq>H<)T{GY%l6HN%2YoF!Atv4fn zs@9S`vIXvyLTC`U_DiHH%Pv(+hSuGq_pHJ+BMk! z66ccFpq_<=CC^#v=RbBbvlJ;Ouhz1uO{DT4_n%!|#Wz%ELrEiULDY|(1DMnQ*ZlVl zv{kRN9lYv*`|zF?-(*=}_p-zIN&pM4(KyE#*;9Q1N#XV)pA-!Ddw|&!i7Gl}ABj$a z@=45Aie*y))n-jUHKPSf;jUzS1334&WH^6E1UMi-fbRfTEb*5DgnwCVWt*#6yTyY1 z;N9S{w3!qfH@tRN0ztoTyiiiQ@CKZ?6@x6-pR;0c2l!`f* zCpO~1?K^_0MTaQ6fHv7KGM-vR$bI}CYW+_DcCS+KTKtYSbw|xjW&6uhZ#ugFFY?kG zUsy(JJqytZ9_o~_z-LekZ((&?4I?5L13PiKRA>^^wN(RPyYT1!`5umHevYHX`Y{kK z`5@APqY{yn{49!;WQVpZaq+XqZY|XAYo8fNw~?us0001i0tGMGJk>dcvUkfqNC*R$ zIG{DrsXgbBz$n*!eTMUfe+kq(uc-zESBDDu{){Mf5e}d>oB1tq@vV6dxdPLRU7aE% zT9s^M?U(C2uQ0*Nna}gW^|=}K2QsXCvQu>1TSd31oc|H=xnSbGHTlxLmTD7{;|~@5 zk5GXwxVJD+X(pIGhf)bvS}vq%CS#P(B5`!P6D8)BYdIfG>LKFdmftjg zw!P+dCxKITjfW49YT_Sh3;4p8U(Ye46phzYYj za=O$r2Tsw^#GA8?o>v--onQc{cr*r1e!vAs8a>XoxX-Q%Q#U(sY@kAz@ZsX4X>v-yF1!i7d1^5>~m7IU2XZ z{T4)qEA1p&Qkcx(yx&%cIS)f|pkeEtP&#kj7orFCkuw=XhBu@DC7h0d zRIBT4ezzk=^+yP2ONqj}wx&bQr^ls#FE_t)x=@KT>)lmbip;XQb?b4wQokIgt}w)` zn_L$p*j~vPYyhWXTpq)|TjqqO46MVj_&zwX<}Ctp=?kIWvoWUg)~Us4$K`fELv2(ddm#|pN;is z<(eYBgsRnXQ-qJqTwuiH(uMM6QCSMY>GcPDKOP*QMjvS!a^0K0+(LJgWQ_Ww74GV+ zuJ@NdP{G$O&(=UZyKuY982=pBKV@~4ySauD?!;=~%d|A=ZwCrl53yI%QWbh~`Sy;N zu;A$E67k|XLldB2`DL0b`ZD)gM<}D9Bc<3ZY6Ypa&W&0s>+?H5XAddO!=st5Vg-UI za87)OtrP98Nh5kT$TYDaJI@?J*zPdZYU3$7&6PAtuMD#Aso>Ie)tP54m76gge%DIR zh6c2l&{+C4Lm_&+v)6Ke)+q@9Fx32q&A=qXW8-d=Df82AZQW6zpMM~q7qJ4?ok*hA zh2B9kR!$iQ;fbzAb_)CjZ#TyDf;frDHwT@y&@F=NI#xhuu;KjJ6!Y5%L0TEt2!IQH zbumB6J4A=aRSft54tD%t^Iu+4-W^fSIy2D&%}m}h=Stb^h{c&`7e~Qyjm_tCFyL1a zf&F`{_+X|@bl8H6y3q@i=x94m9mLWuJ)0VdsM%)U!V63eB6T8cptp}sV=ax6@N`&7 z+TU=lQt;WKh>>Z%=BLh@+qB;fAoRmcn#x|}4oG{Wu1^0G!jYn(Z}Kox}m8(mO! zYV-Tt4jRXI?0{}^j-#dTJUX1*6Y=Db7#+iWJx1+4OqlJ>ECpfmEM|2|zvS-z z!QBtw!+eo@NOdOqX^4Hsj~z7e)xD*qe&ciIAqf}O%=KTT)4+*+??C?(-Pjqy9zi1KG=FQ7l5mvL+g-7pqbuPtvPXWCj3i0U_O?)#Uia1UAy#TXq>XYhNIJ zRht0?E2q?CX`!WtIdD?zADQ+g)Hk9+)BOP2G3wg?vE)`MslM6xDBwOzSG~&C-I_@8 zwE$7eqGn$Q9W8kEO`K4x=*!e(y0+*4-Q z=o=4Oi>47X3VAr1+I?ngFn`sx1CwTHQ>Uh~i7C+`wv`XV>5+h4h}Hs4qg-Fp2z7c# zFML>f=YK_4P0lN`5k_v=nLpFcOsDmC64Y{{-_u4d61O0*%BiY}31XWa;D(!DODJh? 
zwe$+Meg!_itom8!%t+8E$PrCbin!a}2O~kLBv`2-_k2oNSi61R*jUW~{Eaau55ap; zMw+byrw{ZqdkK~JqcEFk)*bKfP(%nMB%j!iQ0bXFx z-G0&^xtjEs-zMOVL9zz($UgpV@h$b!((-~Os7hng{ve%|VyhP%*Z}3%v`wy)NE5hs z&-&eEbqc-FGHR(De4x7vX}sEQcFzmVO=qoN7n{q(xOUMyK7aa*3W7ac)axZ9%L2u; zmyS-B7v?m~mlq9yoq&1jI*_wTfttl=xitk3p1x-5chIkOQ;^x>_*^?DidL5OyOWt6P&bJd2Y`QAN*{5I{bGqla08NTnR#Zvm=&1xx5A|k&R+L;zO z2jKpxUvM(F=v*0s8YkUu3@TcSL+p3zel7Lh(rqZhYsgG(PjRo4+Lwc(-Ae#lE0EzK zNf8mCZIceE7h1_}67=e*T=Kq8ojHqQZ|k@1UW9V>{XDc#&M+*}o{f6Yg+=!DqsX3?$g3apEB9Um(A zpjHH@HjG+3&Je(zGj?gjls1YunBhkjp!*zS9rALJ11`a7Uk%k%%ht2*Vcs33d!fwr z#F$srH^-7O%$kO)^PbZh#OAOEDl4{*%;hYFNNfv=D#}~i&3&FmC&?AvR|yf4dM>n5 z7oxjDHb1|5{Ok=o&XZBB#U>U93&J^eEitpyoHa)k4*j=5UPnJL)K4o#E0df{8$8#i z=-395O{xcysV3?wRJEL=*&rY{zMwAvIojiE`(A&;9@^J!b)nl8nu#GccFR_U|E`7k znSGicyf4};$zJ+CoM>iG!d-pO7R+|J1Jew9j@Ft5M1>wbU-!_+cd^nKYFbs8!Ufx_ ze6`KO0h}b(Dklk%TpvY3>HvWP1PBl#XIX>h$;lwV)^+>W0!dSU@4{rZj{BL`9$4GB z#n-P{2M!yBC`<>LVTT!cj6iV+`hCM*lhVtOEy6Ufk~6%PXQ*k%PfCs*#U@xhsKB;# zcHr6h$@A(bw2+=-vvZRfcNlFV3%fJpm?X{rdrjW%3}ZL}$Wl$D?%G)qW$=`A17Ly5 z#@CY}VaQ9A)>sxz;8+L|i8z06q8{Ctjq-n10c)_%d8E2-*sX3PeyJfEdV-jMaf)L| z=TY{j1&vwo`{)VyjfV)ir5&JZ^A$br7cJf#P?@vQc)NUv3E|J&_xS1bWLb^1dZQoF zGT?b)d&yL{PQAU1J!OfYeiW7uhv4vZPFJZSFc+~p3@BY#e+7o>A*SkD_ zHkw};T)O!G{Z}Fn{o}NJVt|umP_dtHEpZKF0o?b%+WB!R-QJB(I?N+Hn%E!qhB$~_ zu%qf}bo%1Xl8;1Xc5gwZy8B8l&(+IQtXBuXFpB|w16W#xsJtL@YN04}ndEZas`+kA zJX{zP*F?<3i~o;S00q{UbKYB-9m4xon{@9S%x!Q0Tp6j0r zKIw&zXEMUrXjP{%?uUs~_gj1EtJLzG#`RMFUR9W@mC6-JD%0K+R7kvNS{ z^FM*Ae7@7)D&9>uVM|3^dhcx`q?;GfI+1dv1UMc%Vij|9JB}q6b~u6U(cn zJH!o>Svo#SvV+Q{^uM;lUWZ`>2oNAZfmVK($@I1g01AQHixeeO*Q*t>!+N31Ph|Bx zGLu^twPg{Fy;*^cc@aY<@+%2+Ce_#)xgssQDo>BV1w*V- z(wleC=<-KC!5VanMvQ@OR<>*2EM8Ujq{5hxyKGwD1FFW0H&Jl=g$CH%z}mD7Yu(dv z?2E@s;|H(6(5tntnd#Vt4l&E-@91V4nYx=HX@fUWa_B8dBb<_h0YqKp8%M|n@%@39 zyo?$p`VvvxGRO||*!bi>pP6F<(jF~Um4}s~!)&&N$SS|I2TGiB*d|s;XxX|?GC`tM zuQb=MHv!)zuy1@-=uOje%Y8SQj+Vef1>*iy4`I@eqg&t@t9^lwQcCOk@{u z{|9a`=Z^5=1e2s2Z=h$d7CtpH6@G&+VZ9-md8{prEH*vI;M^y%)=gE4TPdt?&rcw4 zhX`ahwlX6J?J?&(xr{{0pmiA5mZPm?f&y8#5PE$*-PMGTvGoM@cQe|9a;L%MNxDGH zPItG>Np-(#^vbvO zYlkPNa&>c(=5}Q?L()|JtMBPnFnoSyD=J*2_@S`Qiju9l=kE+ZTVV+N^Z3Xx(^v)c z5WVqznmd(W+;tmQ62VK!J3k2qIZFsbAhIi~%bFMz5!+U`hjw<1C-76K!Acftr~L@= z_D5Y@Yn)aqA07kx#-{v8f~d&2n&!CuD)^OPgGu^YTVDori{ceo-4X2Yba!~bt<{<8 z>TMp5fJh4&P39#~7<(f<_lx%{Eh<;OYQM1@(<~|uA)4yU3Zl7ja{|%X%TtoMyYRpe z$(L@>daFcLhvldIS>r#cI06I!000000aUy?5J>D?Wl5R~m`n%HTUvVZJ{N*w8(Lh{ z^_#Ko~Xplcg?!2@UUQ z=1B;g&Dm9zy3=xYiF>)`U_;pefc-`yAIuk`d63N fn*wjQQLZMzvf=dzVx*Eu|Ha&qP81{s>dU*pKfkT7 literal 0 HcmV?d00001