diff --git a/README.md b/README.md
index 6ef4ad85f..ee28c65dc 100644
--- a/README.md
+++ b/README.md
@@ -1,201 +1,41 @@
-# Hathor Network
+# Hathor core (full node)
-[![Mainnet](https://img.shields.io/badge/mainnet-live-success)](https://explorer.hathor.network/)
-[![Version](https://img.shields.io/github/v/release/HathorNetwork/hathor-core)](https://github.com/HathorNetwork/hathor-core/releases/latest)
-[![Testing](https://img.shields.io/github/actions/workflow/status/HathorNetwork/hathor-core/main.yml?branch=master&label=tests&logo=github)](https://github.com/HathorNetwork/hathor-core/actions?query=workflow%3Atests+branch%3Amaster)
-[![Docker](https://img.shields.io/github/actions/workflow/status/HathorNetwork/hathor-core/docker.yml?branch=master&label=build&logo=docker)](https://hub.docker.com/repository/docker/hathornetwork/hathor-core)
 [![Codecov](https://img.shields.io/codecov/c/github/HathorNetwork/hathor-core?logo=codecov)](https://codecov.io/gh/hathornetwork/hathor-core)
-[![Discord](https://img.shields.io/discord/566500848570466316?logo=discord)](https://discord.com/invite/35mFEhk)
-[![License](https://img.shields.io/github/license/HathorNetwork/hathor-core)](./LICENSE.txt)
-## Running a full-node
+## Description
-**Disclaimer**
+**Hathor core** is the official and reference client for operating a full node in Hathor Network.
-At the moment, our mainnet is running on a whitelist basis. This means only authorized nodes will be able to connect. For testing purposes, you can connect to the testnet (using the `--testnet` parameter). If you want to connect to the mainnet, you have to [use a peer-id](#using-a-peer-id) and send this id to a team member. You can get in touch with us through [our channels](https://hathor.network/community/), preferrably Discord.
+## Operation and usage
-### Using Docker
+To know how to operate and use Hathor core, see [Hathor full node at Hathor docs — official technical documentation of Hathor](https://docs.hathor.network/pathways/components/full-node).
-The easiest way to run a full-node is to use our Docker image. If you don't have Docker installed, check out [this
-link](https://docs.docker.com/install/). So, just run:
+## Support
-```
-docker run -ti -p 8080:8080 -p 8081:8081 hathornetwork/hathor-core run_node --cache --status 8080 --stratum 8081
-```
+If, after consulting the documentation, you still need **help operating and using Hathor core**, [send a message to the `#development` channel on the Hathor Discord server for assistance from the Hathor team and community members](https://discord.com/channels/566500848570466316/663785995082268713).
-The `--status 8080` will run our HTTP API on port 8080, while the `--stratum 8081` will run a stratum server on port
-8081. You can check your full-node status accessing `http://localhost:8080/v1a/status/`. Use `--help` for more
-parameters.
+If you observe incorrect behavior while using Hathor core, see [the "Issues" subsection in "Contributing"](#issues).
-For more information about our HTTP API, check out our [API Documentation](https://docs.hathor.network/).
+## Contributing
+### Issues
-## From source-code
+If you observe incorrect behavior while using Hathor core, we encourage you to [open an issue to report this failure](https://github.com/HathorNetwork/hathor-core/issues/new).
-First, you need to have Python >=3.8 installed. If you don't, we recommend you to install using `pyenv` (check this
-[link](https://github.com/pyenv/pyenv#installation)).
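As an aside to the removed quick-start above: it mentions checking full-node health at `http://localhost:8080/v1a/status/`. A minimal sketch of querying that endpoint from Python follows. The endpoint path comes from the old README; the response schema is not specified there, so the sketch only pretty-prints whatever JSON comes back, and it assumes a hypothetical local node started with `--status 8080`.

```python
# Minimal sketch: query the HTTP status API of a local full node started
# with `--status 8080`. The /v1a/status/ path is taken from the removed
# README text above; the response layout is an assumption, so we just
# pretty-print the JSON instead of reading specific fields.
import json
import urllib.request

with urllib.request.urlopen('http://localhost:8080/v1a/status/') as resp:
    status = json.load(resp)

print(json.dumps(status, indent=2))
```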
+You can also [open an issue to request a new feature you wish to see](https://github.com/HathorNetwork/hathor-core/issues/new).
-### System dependencies
+### Pull requests
-- on Ubuntu 20.04 (without using `pyenv`):
+To contribute to the development of Hathor core, we encourage you to fork the `master` branch, implement your code, and then [submit a pull request to merge it into `master`, selecting the "feature branch template"](https://github.com/HathorNetwork/hathor-core/compare).
- ```
- sudo add-apt-repository ppa:deadsnakes/ppa
- sudo apt update
- sudo apt install python3 python3-dev python3-pip build-essential liblz4-dev libbz2-dev libsnappy-dev
- pip install -U poetry
- ```
+### Security
- optionally install RocksDB lib:
+Please do not open an issue to report a security breach, nor submit a pull request to fix it. Instead, follow the guidelines described in [SECURITY](SECURITY.md) for safely reporting, fixing, and disclosing security issues.
- ```
- sudo apt install librocksdb-dev
- ```
-- on macOS:
+## Miscellaneous
- first intall `pyenv`, keep in mind that you might need to restart your shell or init `pyenv` after installing:
-
- ```
- brew install pyenv
- ```
-
- then Python 3.11 (you could check the latest 3.11.x version with `pyenv install --list`):
-
- ```
- pyenv install 3.11.0
- pyenv local 3.11.0
- pip install -U poetry
- ```
-
- optionally install RocksDB lib:
-
- ```
- brew install rocksdb
- ```
-- on Windows 10 (using [winget](https://github.com/microsoft/winget-cli)):
-
- ```
- winget install python-3.11
- pip install -U poetry
- ```
-
- currently it isn't possible to use RocksDB, if you're interested, [please open an issue][open-issue] or if you were
- able to do this [please create a pull-request with the required steps][create-pr].
-
-### Clone the project and install poetry dependencies
-
-```
-git clone git@github.com:HathorNetwork/hathor-core.git && cd hathor-core
-```
-
-```
-poetry install
-```
-
-### Running the full-node
-
-```
-poetry run hathor-cli run_node --status 8080
-```
-
-It may take a considerable amount of time for it to sync all the transactions in the network. To speed things up, read below.
-
-#### Speeding up the sync
-You can use database snapshots to speed things up.
-
-We provide both testnet and mainnet snapshots. You can get the link to the latest snapshots this way:
-- Testnet: `curl https://hathor-public-files.s3.amazonaws.com/temp/testnet-data-latest`
-- Mainnet: `curl https://hathor-public-files.s3.amazonaws.com/temp/mainnet-data-latest`
-
-You should download and unpack one of them into your `data` directory before starting the full-node:
-
-```
-wget $(curl https://hathor-public-files.s3.amazonaws.com/temp/testnet-data-latest)
-
-tar xzf testnet-data-*.tar.gz
-```
-
-
-## Additional considerations
-
-(Assume `poetry shell`, otherwise prefix commands with `poetry run`)
-
-### Data persistence
-
-By default, the full node uses RocksDB as storage. You need to pass a parameter --data to configure where data will be stored. You can use a memory storage instead by using --memory-storage parameter. In this case, if the node is restarted, it will have to sync all blocks and transactions again.
-
-Example passing --data:
-```
-hathor-cli run_node --status 8080 --data /data
-```
-
-Example with --memory-storage:
-```
-hathor-cli run_node --status 8080 --memory-storage
-```
-
-
-#### With Docker
-
-When running the full node with Docker and using a persistent storage, it's best to bind a Docker volume to a host
-directory.
This way, the container may be restarted or even destroyed and the data will be safe.
-
-To bind the volume, use parameter `-v host-dir:conatiner-dir:options` ([Docker
-documentarion](https://docs.docker.com/engine/reference/run/#volume-shared-filesystems)).
-
-```
-docker run -v ~/hathor-data:/data:consistent ... run_node ... --data /data
-```
-
-### Using a peer-id
-
-It's optional, but generally recommended, first generate a peer-id file:
-
-```
-hathor-cli gen_peer_id > peer_id.json
-```
-
-Then, you can use this id in any server or client through the `--peer` parameter. For instance:
-
-```
-hathor-cli run_node --listen tcp:8000 --peer peer_id.json
-```
-
-The ID of your peer will be in the key `id` inside the generated json (`peer_id.json`), e.g. `"id": "6357b155b0867790bd92d1afe3a9afe3f91312d1ea985f908cac0f64cbc9d5b2"`.
-
-## Common development commands
-
-Assuming virtualenv is active, otherwise prefix `make` commands with `poetry run`.
-
-Check if code seems alright:
-
-```
-make check
-```
-
-Test and coverage:
-
-```
-make tests
-```
-
-Generate Sphinx docs:
-
-```
-cd docs
-make html
-make latexpdf
-```
-
-The output will be written to `docs/_build/html/`.
-
-
-Generate API docs:
-
-```
-hathor-cli generate_openapi_json
-redoc-cli bundle hathor/cli/openapi_files/openapi.json --output index.html
-```
-
-[open-issue]: https://github.com/HathorNetwork/hathor-core/issues/new
-[create-pr]: https://github.com/HathorNetwork/hathor-core/compare
+A miscellany of additional documentation and resources:
+- [Subdirectory docs](docs/README.md): supplementary documentation of Hathor core.
+- [Docker images at Docker Hub](https://hub.docker.com/r/hathornetwork/hathor-core).
+- To know more about Hathor from a general or from a business perspective, see [https://hathor.network](https://hathor.network).
+- To know more about Hathor from a technical perspective, see [https://docs.hathor.network](https://docs.hathor.network).
diff --git a/SECURITY.md b/SECURITY.md
new file mode 100644
index 000000000..0b8927cf0
--- /dev/null
+++ b/SECURITY.md
@@ -0,0 +1,3 @@
+# Security
+
+Hathor Labs has a bounty program to encourage white hat hackers to collaborate in identifying security breaches and vulnerabilities in Hathor core. To know more about this, see [Bug bounty program at Hathor docs](https://docs.hathor.network/references/besides-documentation#security).
diff --git a/docs/README.md b/docs/README.md
new file mode 100644
index 000000000..916c1f2a4
--- /dev/null
+++ b/docs/README.md
@@ -0,0 +1,23 @@
+# Documentation
+
+## Directory overview
+
+This directory contains a miscellany of Hathor core documents.
+
+Hathor core documentation is distributed over the following locations:
+- For users: [Hathor full node at Hathor docs](https://docs.hathor.network/pathways/components/full-node).
+- At the root of the `hathor-core` repository for default documents (license, readme, etc.).
+- [API documentation following the OpenAPI standard](../hathor/cli).
+- [RFCs](https://github.com/HathorNetwork/rfcs).
+- And finally, this directory for all other documents.
+
+## Table of contents
+
+Documents in this directory:
+
+- [Developing](developing.md)
+- [Debugging](debugging.md)
+- [Feature: event queue](event-queue-feature.md)
+- [Feature: RocksDB indexes](rocksdb-indexes-feature.md)
+- [Legacy documentation of Hathor Network](legacy)
+- [Metadocs: OpenAPI and Redoc usage guide](metadocs-openapi-redoc-usage-guide.md)
diff --git a/DEBUG.md b/docs/debugging.md
similarity index 97%
rename from DEBUG.md
rename to docs/debugging.md
index d797083d9..cdfa4904b 100644
--- a/DEBUG.md
+++ b/docs/debugging.md
@@ -1,4 +1,6 @@
-# Debugging tips and tools
+# Debugging
+
+## Purpose
 
 Here are some useful tips and tools for debugging.
diff --git a/docs/developing.md b/docs/developing.md
new file mode 100644
index 000000000..fecd59a9c
--- /dev/null
+++ b/docs/developing.md
@@ -0,0 +1,42 @@
+# Developing
+
+## Purpose
+
+A miscellany of relevant commands for developing Hathor core.
+
+## Tests
+
+Check if code seems alright:
+
+```
+make check
+```
+
+Test and coverage:
+
+```
+make tests
+```
+
+## Generate documentation
+
+Generate Sphinx docs:
+
+```
+cd docs/legacy
+make html
+make latexpdf
+```
+
+The output will be written to `docs/legacy/_build/html/`.
+
+
+Generate API docs:
+
+```
+hathor-cli generate_openapi_json
+redoc-cli bundle hathor/cli/openapi_files/openapi.json --output index.html
+```
diff --git a/EVENT_QUEUE.md b/docs/event-queue-feature.md
similarity index 99%
rename from EVENT_QUEUE.md
rename to docs/event-queue-feature.md
index 3cb1050b6..56a306e1c 100644
--- a/EVENT_QUEUE.md
+++ b/docs/event-queue-feature.md
@@ -1,4 +1,4 @@
-# Event Queue
+# Feature: event queue
 
 ## Introduction
diff --git a/docs/Makefile b/docs/legacy/Makefile
similarity index 100%
rename from docs/Makefile
rename to docs/legacy/Makefile
diff --git a/docs/conf.py b/docs/legacy/conf.py
similarity index 100%
rename from docs/conf.py
rename to docs/legacy/conf.py
diff --git a/docs/conflict-resolution.rst b/docs/legacy/conflict-resolution.rst
similarity index 100%
rename from docs/conflict-resolution.rst
rename to docs/legacy/conflict-resolution.rst
diff --git a/docs/glossary.rst b/docs/legacy/glossary.rst
similarity index 100%
rename from docs/glossary.rst
rename to docs/legacy/glossary.rst
diff --git a/docs/images/syncing-example.png b/docs/legacy/images/syncing-example.png
similarity index 100%
rename from docs/images/syncing-example.png
rename to docs/legacy/images/syncing-example.png
diff --git a/docs/images/syncing-example.txt b/docs/legacy/images/syncing-example.txt
similarity index 100%
rename from docs/images/syncing-example.txt
rename to docs/legacy/images/syncing-example.txt
diff --git a/docs/index.rst b/docs/legacy/index.rst
similarity index 100%
rename from docs/index.rst
rename to docs/legacy/index.rst
diff --git a/docs/make.bat b/docs/legacy/make.bat
similarity index 100%
rename from docs/make.bat
rename to docs/legacy/make.bat
diff --git a/docs/quickstart.rst b/docs/legacy/quickstart.rst
similarity index 100%
rename from docs/quickstart.rst
rename to docs/legacy/quickstart.rst
diff --git a/docs/ref/crypto.rst b/docs/legacy/ref/crypto.rst
similarity index 100%
rename from docs/ref/crypto.rst
rename to docs/legacy/ref/crypto.rst
diff --git a/docs/ref/index.rst b/docs/legacy/ref/index.rst
similarity index 100%
rename from docs/ref/index.rst
rename to docs/legacy/ref/index.rst
diff --git a/docs/ref/p2p.rst b/docs/legacy/ref/p2p.rst
similarity index
100% rename from docs/ref/p2p.rst rename to docs/legacy/ref/p2p.rst diff --git a/docs/ref/pubsub.rst b/docs/legacy/ref/pubsub.rst similarity index 100% rename from docs/ref/pubsub.rst rename to docs/legacy/ref/pubsub.rst diff --git a/docs/ref/transaction.rst b/docs/legacy/ref/transaction.rst similarity index 100% rename from docs/ref/transaction.rst rename to docs/legacy/ref/transaction.rst diff --git a/docs/ref/wallet.rst b/docs/legacy/ref/wallet.rst similarity index 100% rename from docs/ref/wallet.rst rename to docs/legacy/ref/wallet.rst diff --git a/docs/sync.rst b/docs/legacy/sync.rst similarity index 100% rename from docs/sync.rst rename to docs/legacy/sync.rst diff --git a/docs/tx-maleability.rst b/docs/legacy/tx-maleability.rst similarity index 100% rename from docs/tx-maleability.rst rename to docs/legacy/tx-maleability.rst diff --git a/docs/api/openapi.md b/docs/metadocs-openapi-redoc-usage-guide.md similarity index 100% rename from docs/api/openapi.md rename to docs/metadocs-openapi-redoc-usage-guide.md diff --git a/docs/rocksdb-indexes.md b/docs/rocksdb-indexes-feature.md similarity index 94% rename from docs/rocksdb-indexes.md rename to docs/rocksdb-indexes-feature.md index f318129e7..f3a0d2b71 100644 --- a/docs/rocksdb-indexes.md +++ b/docs/rocksdb-indexes-feature.md @@ -1,23 +1,25 @@ -# Summary +# Feature: RocksDB indexes + +## Introduction This design describes basically how to add a new indexes backend in-disk using rocksdb besides our current in-memory backend. -# Motivation +## Motivation The network is growing rapidly and the large number of transactions is increasing the memory usage of a full-node. It usually was enough to run a full-node with 8GB RAM, lately there have been cases with out-of-memory crashes with 8GB, so our recommendation increased to 16GB. Secondarily, a full-node with an existing database will take a while (usually 10~50min) to start because no index is persisted and they have to be rebuilt on every start. Persisting the indexes across reboots will solve this really annoying behavior. -# Acceptance Criteria +## Acceptance Criteria - Have all indexes (except for the interval-tree ones, that will be removed with sync-v1) using the rocksdb backend by default - Initially make RocksDB indexes opt-in - Make sure the tests cover the new backend - Persist the indexes across restarts (this can, and probably will, be implemented and released separately) -# Detailed explanation +## Detailed explanation -## How to use rocksdb to persist indexes +### How to use rocksdb to persist indexes Last July @msbrogli made a proof-of-concept implementation on #254 using rocksdb to persist the address-index(previously called wallet-index). @@ -46,7 +48,7 @@ And then we iterate by `[address]` prefix and the keys will be sorted by (timest self.log.debug('seek end') ``` -## How to load persistent-indexes +### How to load persistent-indexes The first implementation will simply reset all indexes when initializing (this is implemented by dropping the relevant column-families on rocksdb, which is an operation that has constant time `O(1)`), it's important to really make sure the index was successfully reset or fail initializing otherwise. This will still have the down-side of slow loading times but will significantly simplify the implementation and avoid introducing issues related to a change to the index initialization implementation. 
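To make the `[address][timestamp][tx.hash]` key layout described above concrete, here is a minimal sketch of how such composite keys can be encoded so that RocksDB's plain bytewise (lexicographic) key ordering yields the per-address sort by (timestamp, hash) that the prefix iteration relies on. The field widths, helper names, and the 25-byte address size are assumptions for illustration, not the actual implementation.

```python
# Sketch of a composite key for the address index: [address][timestamp][tx.hash].
# Big-endian packing makes bytewise comparison order keys by address first,
# then timestamp, then hash. Widths here are assumptions for the sketch.
import struct

ADDRESS_LEN = 25  # assumed size of a base58-decoded address

def make_key(address: bytes, timestamp: int, tx_hash: bytes) -> bytes:
    assert len(address) == ADDRESS_LEN
    return address + struct.pack('>I', timestamp) + tx_hash

def parse_key(key: bytes) -> tuple[bytes, int, bytes]:
    address = key[:ADDRESS_LEN]
    (timestamp,) = struct.unpack('>I', key[ADDRESS_LEN:ADDRESS_LEN + 4])
    tx_hash = key[ADDRESS_LEN + 4:]
    return address, timestamp, tx_hash
```

Iterating "by `[address]` prefix" is then a seek to `address` followed by a scan while keys still start with that prefix, as in the seek/iterate snippet quoted earlier in this document.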
diff --git a/hathor/builder/cli_builder.py b/hathor/builder/cli_builder.py
index 9581be9fd..2d7cf1372 100644
--- a/hathor/builder/cli_builder.py
+++ b/hathor/builder/cli_builder.py
@@ -309,6 +309,7 @@ def create_manager(self, reactor: Reactor) -> HathorManager:
             feature_service=self.feature_service,
             pubsub=pubsub,
             wallet=self.wallet,
+            log_vertex_bytes=self._args.log_vertex_bytes,
         )
 
         self.manager = HathorManager(
diff --git a/hathor/cli/load_from_logs.py b/hathor/cli/load_from_logs.py
new file mode 100644
index 000000000..c5a34e427
--- /dev/null
+++ b/hathor/cli/load_from_logs.py
@@ -0,0 +1,65 @@
+# Copyright 2024 Hathor Labs
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import re
+import sys
+from argparse import ArgumentParser, FileType
+
+from hathor.cli.run_node import RunNode
+
+
+class LoadFromLogs(RunNode):
+    def start_manager(self) -> None:
+        pass
+
+    def register_signal_handlers(self) -> None:
+        pass
+
+    @classmethod
+    def create_parser(cls) -> ArgumentParser:
+        parser = super().create_parser()
+        parser.add_argument('--log-dump', type=FileType('r', encoding='UTF-8'), default=sys.stdin, nargs='?',
+                            help='Where to read logs from, defaults to stdin.')
+        return parser
+
+    def prepare(self, *, register_resources: bool = True) -> None:
+        super().prepare(register_resources=False)
+
+    def run(self) -> None:
+        from hathor.transaction.base_transaction import tx_or_block_from_bytes
+
+        # matches log lines produced when `--log-vertex-bytes` is enabled, e.g. `new tx ... bytes=<hex> ...`
+        pattern = r'new (tx|block) .*bytes=([^ ]*) '
+        compiled_pattern = re.compile(pattern)
+
+        while True:
+            line_with_break = self._args.log_dump.readline()
+            if not line_with_break:
+                break
+            line = line_with_break.strip()
+
+            matches = compiled_pattern.findall(line)
+            if len(matches) == 0:
+                continue
+
+            assert len(matches) == 1
+            _, vertex_bytes_hex = matches[0]
+
+            vertex_bytes = bytes.fromhex(vertex_bytes_hex)
+            vertex = tx_or_block_from_bytes(vertex_bytes)
+            self.manager.on_new_tx(vertex)
+
+
+def main():
+    LoadFromLogs().run()
diff --git a/hathor/cli/main.py b/hathor/cli/main.py
index a1ab960d2..6c4c84f4e 100644
--- a/hathor/cli/main.py
+++ b/hathor/cli/main.py
@@ -35,6 +35,7 @@ def __init__(self) -> None:
         db_export,
         db_import,
         generate_valid_words,
+        load_from_logs,
         merged_mining,
         mining,
         multisig_address,
@@ -91,6 +92,7 @@ def __init__(self) -> None:
         self.add_cmd('dev', 'x-export', db_export, 'EXPERIMENTAL: Export database to a simple format.')
         self.add_cmd('dev', 'x-import', db_import, 'EXPERIMENTAL: Import database from exported format.')
         self.add_cmd('dev', 'replay-logs', replay_logs, 'EXPERIMENTAL: re-play json logs as console printted')
+        self.add_cmd('dev', 'load-from-logs', load_from_logs, 'Load vertices as they are found in a log dump')
 
     def add_cmd(self, group: str, cmd: str, module: ModuleType, short_description: Optional[str] = None) -> None:
         self.command_list[cmd] = module
diff --git a/hathor/cli/openapi_files/openapi_base.json b/hathor/cli/openapi_files/openapi_base.json
index ea10d9442..237df7466 100644
---
a/hathor/cli/openapi_files/openapi_base.json +++ b/hathor/cli/openapi_files/openapi_base.json @@ -7,7 +7,7 @@ ], "info": { "title": "Hathor API", - "version": "0.60.0" + "version": "0.60.1" }, "consumes": [ "application/json" diff --git a/hathor/cli/quick_test.py b/hathor/cli/quick_test.py index 1c4fa056a..8fe2e4fee 100644 --- a/hathor/cli/quick_test.py +++ b/hathor/cli/quick_test.py @@ -12,6 +12,7 @@ # See the License for the specific language governing permissions and # limitations under the License. +import os from argparse import ArgumentParser from hathor.cli.run_node import RunNode @@ -23,9 +24,12 @@ class QuickTest(RunNode): def create_parser(cls) -> ArgumentParser: parser = super().create_parser() parser.add_argument('--no-wait', action='store_true', help='If set will not wait for a new tx before exiting') + parser.add_argument('--quit-after-n-blocks', type=int, help='Quit the full node after N blocks have synced. ' + 'This is useful for sync benchmarks.') return parser def prepare(self, *, register_resources: bool = True) -> None: + from hathor.transaction import BaseTransaction, Block super().prepare(register_resources=False) self._no_wait = self._args.no_wait @@ -34,10 +38,26 @@ def prepare(self, *, register_resources: bool = True) -> None: def patched_on_new_tx(*args, **kwargs): res = orig_on_new_tx(*args, **kwargs) - if res: - self.log.info('sucessfully added a tx, exit now') + msg: str | None = None + + if self._args.quit_after_n_blocks is None: + should_quit = res + msg = 'added a tx' + else: + vertex = args[0] + should_quit = False + assert isinstance(vertex, BaseTransaction) + + if isinstance(vertex, Block): + should_quit = vertex.get_height() >= self._args.quit_after_n_blocks + msg = f'reached height {vertex.get_height()}' + + if should_quit: + assert msg is not None + self.log.info(f'successfully {msg}, exit now') self.manager.connections.disconnect_all_peers(force=True) - self.reactor.stop() + self.reactor.fireSystemEvent('shutdown') + os._exit(0) return res self.manager.on_new_tx = patched_on_new_tx @@ -45,12 +65,13 @@ def patched_on_new_tx(*args, **kwargs): self.log.info('exit with error code if it take too long', timeout=timeout) def exit_with_error(): - import sys self.log.error('took too long to get a tx, exit with error') self.manager.connections.disconnect_all_peers(force=True) self.reactor.stop() - sys.exit(1) - self.reactor.callLater(timeout, exit_with_error) + os._exit(1) + + if self._args.quit_after_n_blocks is None: + self.reactor.callLater(timeout, exit_with_error) def run(self) -> None: if self._no_wait: diff --git a/hathor/cli/run_node.py b/hathor/cli/run_node.py index 166c7cef6..32f5848f9 100644 --- a/hathor/cli/run_node.py +++ b/hathor/cli/run_node.py @@ -151,6 +151,8 @@ def create_parser(cls) -> ArgumentParser: # XXX: this is temporary, should be added as a sysctl instead before merging parser.add_argument('--x-ipython-kernel', action='store_true', help='Launch embedded IPython kernel for remote debugging') + parser.add_argument('--log-vertex-bytes', action='store_true', + help='Log tx bytes for debugging') return parser def prepare(self, *, register_resources: bool = True) -> None: diff --git a/hathor/cli/run_node_args.py b/hathor/cli/run_node_args.py index 1d161cecf..c67aaeebb 100644 --- a/hathor/cli/run_node_args.py +++ b/hathor/cli/run_node_args.py @@ -79,3 +79,4 @@ class RunNodeArgs(BaseModel, extra=Extra.allow): x_asyncio_reactor: bool x_ipython_kernel: bool nano_testnet: bool + log_vertex_bytes: bool diff --git a/hathor/event/model/event_data.py 
b/hathor/event/model/event_data.py index cf22fa424..632d124a7 100644 --- a/hathor/event/model/event_data.py +++ b/hathor/event/model/event_data.py @@ -101,7 +101,7 @@ class TxData(BaseEventData, extra=Extra.ignore): hash: str nonce: Optional[int] = None timestamp: int - signal_bits: int + signal_bits: int | None version: int weight: float inputs: list['TxInput'] diff --git a/hathor/version.py b/hathor/version.py index b1afb04ca..ba87fa88a 100644 --- a/hathor/version.py +++ b/hathor/version.py @@ -19,7 +19,7 @@ from structlog import get_logger -BASE_VERSION = '0.60.0' +BASE_VERSION = '0.60.1' DEFAULT_VERSION_SUFFIX = "local" BUILD_VERSION_FILE_PATH = "./BUILD_VERSION" diff --git a/hathor/vertex_handler/vertex_handler.py b/hathor/vertex_handler/vertex_handler.py index 5bcbc1369..0ef601aa0 100644 --- a/hathor/vertex_handler/vertex_handler.py +++ b/hathor/vertex_handler/vertex_handler.py @@ -45,6 +45,7 @@ class VertexHandler: '_feature_service', '_pubsub', '_wallet', + '_log_vertex_bytes', ) def __init__( @@ -59,6 +60,7 @@ def __init__( feature_service: FeatureService, pubsub: PubSubManager, wallet: BaseWallet | None, + log_vertex_bytes: bool = False, ) -> None: self._log = logger.new() self._reactor = reactor @@ -70,6 +72,7 @@ def __init__( self._feature_service = feature_service self._pubsub = pubsub self._wallet = wallet + self._log_vertex_bytes = log_vertex_bytes def on_new_vertex( self, @@ -223,6 +226,8 @@ def _log_new_object(self, tx: BaseTransaction, message_fmt: str, *, quiet: bool) 'time_from_now': tx.get_time_from_now(now), 'validation': metadata.validation.name, } + if self._log_vertex_bytes: + kwargs['bytes'] = bytes(tx).hex() if tx.is_block: message = message_fmt.format('block') if isinstance(tx, Block): diff --git a/hathor/wallet/resources/thin_wallet/address_history.py b/hathor/wallet/resources/thin_wallet/address_history.py index e7e231e71..4f5608871 100644 --- a/hathor/wallet/resources/thin_wallet/address_history.py +++ b/hathor/wallet/resources/thin_wallet/address_history.py @@ -35,8 +35,11 @@ class AddressHistoryResource(Resource): isLeaf = True def __init__(self, manager): - self._settings = get_global_settings() self.manager = manager + settings = get_global_settings() + # XXX: copy the parameters that are needed so tests can more easily tweak them + self.max_tx_addresses_history = settings.MAX_TX_ADDRESSES_HISTORY + self.max_inputs_outputs_address_history = settings.MAX_INPUTS_OUTPUTS_ADDRESS_HISTORY # TODO add openapi docs for this API def render_POST(self, request: Request) -> bytes: @@ -166,21 +169,23 @@ def get_address_history(self, addresses: list[str], ref_hash: Optional[str]) -> 'message': 'The address {} is invalid'.format(address) }) - tx = None - if ref_hash_bytes: - try: - tx = self.manager.tx_storage.get_transaction(ref_hash_bytes) - except TransactionDoesNotExist: - return json_dumpb({ - 'success': False, - 'message': 'Hash {} is not a transaction hash.'.format(ref_hash) - }) - # The address index returns an iterable that starts at `tx`. 
- hashes = addresses_index.get_sorted_from_address(address, tx) + if idx == 0: + ref_tx = None + if ref_hash_bytes: + try: + ref_tx = self.manager.tx_storage.get_transaction(ref_hash_bytes) + except TransactionDoesNotExist: + return json_dumpb({ + 'success': False, + 'message': 'Hash {} is not a transaction hash.'.format(ref_hash) + }) + hashes = addresses_index.get_sorted_from_address(address, ref_tx) + else: + hashes = addresses_index.get_sorted_from_address(address) did_break = False for tx_hash in hashes: - if total_added == self._settings.MAX_TX_ADDRESSES_HISTORY: + if total_added == self.max_tx_addresses_history: # If already added the max number of elements possible, then break # I need to add this if at the beginning of the loop to handle the case # when the first tx of the address exceeds the limit, so we must return @@ -193,7 +198,7 @@ def get_address_history(self, addresses: list[str], ref_hash: Optional[str]) -> if tx_hash not in seen: tx = self.manager.tx_storage.get_transaction(tx_hash) tx_elements = len(tx.inputs) + len(tx.outputs) - if total_elements + tx_elements > self._settings.MAX_INPUTS_OUTPUTS_ADDRESS_HISTORY: + if total_elements + tx_elements > self.max_inputs_outputs_address_history: # If the adition of this tx overcomes the maximum number of inputs and outputs, then break # It's important to validate also the maximum number of inputs and outputs because some txs # are really big and the response payload becomes too big diff --git a/pyproject.toml b/pyproject.toml index f5ac12f1c..4ebd77226 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -14,7 +14,7 @@ [tool.poetry] name = "hathor" -version = "0.60.0" +version = "0.60.1" description = "Hathor Network full-node" authors = ["Hathor Team "] license = "Apache-2.0" diff --git a/tests/resources/base_resource.py b/tests/resources/base_resource.py index ce9a84c07..96a0d4994 100644 --- a/tests/resources/base_resource.py +++ b/tests/resources/base_resource.py @@ -53,8 +53,17 @@ def __init__(self, method, url, args=None, headers=None): # Set request args args = args or {} - for k, v in args.items(): - self.addArg(k, v) + if isinstance(args, dict): + for k, v in args.items(): + self.addArg(k, v) + elif isinstance(args, list): + for k, v in args: + if k not in self.args: + self.args[k] = [v] + else: + self.args[k].append(v) + else: + raise TypeError(f'unsupported type {type(args)} for args') def json_value(self): return json_loadb(self.written[0]) diff --git a/tests/resources/wallet/test_thin_wallet.py b/tests/resources/wallet/test_thin_wallet.py index 4f01a739d..393599c13 100644 --- a/tests/resources/wallet/test_thin_wallet.py +++ b/tests/resources/wallet/test_thin_wallet.py @@ -266,6 +266,55 @@ def test_history_paginate(self): # The last big tx self.assertEqual(len(response_data['history']), 1) + @inlineCallbacks + def test_address_history_optimization_regression(self): + # setup phase1: create 3 addresses with 2 transactions each in a certain order + self.manager.wallet.unlock(b'MYPASS') + address1 = self.get_address(0) + address2 = self.get_address(1) + address3 = self.get_address(2) + baddress1 = decode_address(address1) + baddress2 = decode_address(address2) + baddress3 = decode_address(address3) + [b1] = add_new_blocks(self.manager, 1, advance_clock=1, address=baddress1) + [b2] = add_new_blocks(self.manager, 1, advance_clock=1, address=baddress3) + [b3] = add_new_blocks(self.manager, 1, advance_clock=1, address=baddress2) + [b4] = add_new_blocks(self.manager, 1, advance_clock=1, address=baddress1) + [b5] = 
add_new_blocks(self.manager, 1, advance_clock=1, address=baddress2)
+        [b6] = add_new_blocks(self.manager, 1, advance_clock=1, address=baddress3)
+        add_blocks_unlock_reward(self.manager)
+
+        # setup phase2: make the first request without a `hash` argument
+        self.web_address_history.resource.max_tx_addresses_history = 3
+        res = (yield self.web_address_history.get(
+            'thin_wallet/address_history', [
+                (b'paginate', True),  # not needed, but included to ensure support for this argument is not removed
+                (b'addresses[]', address1.encode()),
+                (b'addresses[]', address3.encode()),
+                (b'addresses[]', address2.encode()),
+            ]
+        )).json_value()
+        self.assertTrue(res['success'])
+        self.assertEqual(len(res['history']), 3)
+        self.assertTrue(res['has_more'])
+        self.assertEqual(res['first_address'], address3)
+        self.assertEqual(res['first_hash'], b6.hash_hex)
+        self.assertEqual([t['tx_id'] for t in res['history']], [b1.hash_hex, b4.hash_hex, b2.hash_hex])
+
+        # the actual test: this request would miss transactions if the regression were present
+        res = (yield self.web_address_history.get(
+            'thin_wallet/address_history', [
+                (b'paginate', True),  # not needed, but included to ensure support for this argument is not removed
+                (b'addresses[]', address3.encode()),
+                (b'addresses[]', address2.encode()),
+                (b'hash', res['first_hash'].encode()),
+            ]
+        )).json_value()
+        self.assertTrue(res['success'])
+        self.assertEqual(len(res['history']), 3)
+        self.assertFalse(res['has_more'])
+        self.assertEqual([t['tx_id'] for t in res['history']], [b6.hash_hex, b3.hash_hex, b5.hash_hex])
 
     def test_error_request(self):
         from hathor.wallet.resources.thin_wallet.send_tokens import _Context
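To tie the pagination behavior together, here is a sketch of how a client might walk `thin_wallet/address_history` using the fields exercised by the test above (`first_address`, `first_hash`, `has_more`). The base URL, the `/v1a` prefix, and the use of GET with query-string arguments are assumptions mirroring the test's request shape, not a documented client API.

```python
# Sketch: paging through thin_wallet/address_history as the test above does.
# Assumptions: a hypothetical local node started with --status 8080, the
# resource mounted under /v1a, and GET with query-string args.
import json
import urllib.parse
import urllib.request

BASE = 'http://localhost:8080/v1a/thin_wallet/address_history'

def fetch(addresses: list[str], ref_hash: str | None = None) -> dict:
    params = [('paginate', 'true')] + [('addresses[]', a) for a in addresses]
    if ref_hash is not None:
        params.append(('hash', ref_hash))
    with urllib.request.urlopen(BASE + '?' + urllib.parse.urlencode(params)) as resp:
        return json.load(resp)

def full_history(addresses: list[str]) -> list[dict]:
    history: list[dict] = []
    res = fetch(addresses)
    history += res['history']
    while res.get('has_more'):
        # resume from the first address/hash not yet fully processed,
        # exactly as the second request in the test does
        remaining = addresses[addresses.index(res['first_address']):]
        res = fetch(remaining, res['first_hash'])
        history += res['history']
    return history
```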