
feat: add signMessageWithoutRand method for kaspa wasm #587

Merged: 7 commits, Nov 14, 2024

Conversation

witter-deland (Contributor)

@witter-deland (Contributor Author)

@coderofstuff (Collaborator)

Signing without aux rand is dangerous and can leak keys.

@apoelstra

> Signing without aux rand is dangerous and can leak keys.

This is completely untrue.

@coderofstuff (Collaborator)

> Signing without aux rand is dangerous and can leak keys.

> This is completely untrue.

Can you elaborate? I'm not familiar with the underlying implementation here but I'm mainly concerned with this leading to nonce reuse.

@witter-deland (Contributor Author)

@apoelstra is the main contributor to rust-secp256k1
rust-bitcoin/rust-secp256k1#761

@elichai (Member)

elichai commented Nov 5, 2024

Andrew is obviously right :)
The auxiliary randomness exists only to mitigate specific kinds of power-analysis side-channel attacks. Providing it definitely improves security, but omitting it should not be considered dangerous, as most legacy signature schemes don't provide mitigations against such attacks.

To read more about the relevant discussions that arose when adding this randomness, please see: sipa/bips#195

@coderofstuff reopened this Nov 5, 2024
@coderofstuff (Collaborator)

> Andrew is obviously right :) The auxiliary randomness exists only to mitigate specific kinds of power-analysis side-channel attacks. Providing it definitely improves security, but omitting it should not be considered dangerous, as most legacy signature schemes don't provide mitigations against such attacks.

> To read more about the relevant discussions that arose when adding this randomness, please see: sipa/bips#195

Thanks for the input @elichai.

@witter-deland I've reopened the PR while I read up more on this.

@witter-deland (Contributor Author)

@coderofstuff Thanks
@elichai @apoelstra Thank you both for participating in the discussion


@witter-deland (Contributor Author)

@coderofstuff May I ask when this PR will be merged? I need this feature to continue and complete the integration work. Thank you.

@witter-deland (Contributor Author)

Excuse me, what's going on? This API is very straightforward. Are there any issues blocking the merge? Thank you.

@coderofstuff (Collaborator) left a comment

Apologies for the delay; we're currently packed with other things.

The change itself looks sound. I just have some comments on code duplication.

Review threads (resolved): wallet/core/src/message.rs, wallet/core/src/wasm/message.rs
@witter-deland (Contributor Author)

fixed :)

@witter-deland (Contributor Author)

@coderofstuff fixed :)

coderofstuff previously approved these changes Nov 14, 2024

@coderofstuff (Collaborator) left a comment

LGTM. Tested the scenarios and existing demo still works.

As a follow-up, please update the wasm/examples/nodejs/javascript/general/message-signing.js to include a demo of the new scenario you added support for (passing noAuxRand: true)

@witter-deland (Contributor Author)

witter-deland commented Nov 14, 2024

@coderofstuff Sure, done :)


@coderofstuff (Collaborator) left a comment

Thanks for that!

@coderofstuff merged commit 1d3b9a9 into kaspanet:master Nov 14, 2024
6 checks passed
smartgoo added a commit to smartgoo/rusty-kaspa that referenced this pull request Nov 22, 2024
* rothschild: donate funds to external address with custom priority fee (kaspanet#482)

* rothschild: donate funds to external address

Signed-off-by: Dmitry Perchanov <demisrael@gmail.com>

* rothschild: Append priority fee to txs.

Signed-off-by: Dmitry Perchanov <demisrael@gmail.com>

* rothschild: add option to choose and randomize fee

Signed-off-by: Dmitry Perchanov <dima@voyager.local>

* rothschild: address clippy formatting issues

Signed-off-by: Dmitry Perchanov <demisrael@gmail.com>

---------

Signed-off-by: Dmitry Perchanov <demisrael@gmail.com>
Signed-off-by: Dmitry Perchanov <dima@voyager.local>
Co-authored-by: coderofstuff <114628839+coderofstuff@users.noreply.github.com>
Co-authored-by: Dmitry Perchanov <dima@voyager.local>

* fix wrong combiner condition (kaspanet#567)

* fix wRPC json notification format (kaspanet#571)

* Documentation updates (kaspanet#570)

* docs

* Export ConsensusSessionOwned

* add CI pass to run `cargo doc`

* module rust docs

* lints

* fix typos

* replace glob import terminology with "re-exports"

* cleanup

* fix wasm rpc method types for methods without mandatory arguments (kaspanet#572)

* cleanup legacy bip39 cfg that interferes with docs.rs builds (kaspanet#573)

* Bump tonic and prost versions, adapt middlewares (kaspanet#553)

* bump tonic, prost versions
update middlewares

* use unbounded channel

* change log level to trace

* use bounded channel

* reuse counts bytes body to measure bytes body

* remove unneeded clone

* Fix README.md layout and add linting section (kaspanet#488)

* Bump tonic version (kaspanet#579)

* replace statrs and statest deps & upgrade some deps.  (kaspanet#425)

* replace statrs and statest deps.

* remove todo in toml.cargo and fmt & lints.

* do a run of `cargo audit fix` for some miscellaneous reports.

* use maintained alt ks crate.

* add cargo.lock.

* update

* use new command

* newline

* refresh cargo lock with a few more version updates

* fix minor readme glitches

---------

Co-authored-by: Michael Sutton <msutton@cs.huji.ac.il>

* enhance tx inputs processing (kaspanet#495)

* sighash reused trait

* benches are implemented

* use cache per iteration per function

* fix par versions

* fix benches

* use upgradable read

* use concurrent cache

* use hashcache

* dont apply cache

* rollback rwlock and indexmap.

* remove scc

* apply par iter to `check_scripts`

* refactor check_scripts fn, fix tests

* fix clippy

* add bench with custom threadpool

* style: fmt

* suppress warnings

* Merge branch 'master' into bcm-parallel-processing

* renames + map err

* reuse code

* bench: avoid exposing cache map + iter pools in powers of 2

* simplify check_sig_op_counts

* use thread pool also if a single input
1. to avoid confusion
2. since tokio blocking threads are not meant to be used for processing anyway

* remove todo

* clear cache instead of recreate

* use and_then (so map_err can be called in a single location)

* extend check scripts tests for better coverage of the par_iter case

---------

Co-authored-by: Michael Sutton <msutton@cs.huji.ac.il>

* Parallelize MuHash calculations (kaspanet#575)

* Parallelize MuHash calculations

MuHash calculations are additive and can be done in chunks then later combined

* Reimplement validate tx with muhash as a separate fn

* Use smallvec for muhash parallel

Co-authored-by: Michael Sutton <msutton@cs.huji.ac.il>

* Add independent rayon order test

* Filter some data

* Use tuple_windows for test iter

---------

Co-authored-by: Michael Sutton <msutton@cs.huji.ac.il>
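The MuHash commit above notes that the calculations "are additive and can be done in chunks then later combined". The reason this parallelizes is that the combine operation is multiplication in a group, which is associative and commutative. A toy sketch (multiplication mod a small prime stands in for the real 3072-bit U3072 arithmetic):

```javascript
// Why MuHash-style accumulators parallelize: per-chunk products can be
// merged in any order and still equal the sequential accumulation.
// Toy modulus; the real MuHash works in a 3072-bit multiplicative group.
const P = 2n ** 61n - 1n;

const toElement = (x) => BigInt(x) % P;       // map item to a group element
const combine = (a, b) => (a * b) % P;        // associative + commutative

const items = Array.from({ length: 1000 }, (_, i) => i + 2);

// sequential accumulation
const seq = items.map(toElement).reduce(combine, 1n);

// chunked "parallel" accumulation: fold each chunk, then fold chunk results
const chunks = [];
for (let i = 0; i < items.length; i += 100) chunks.push(items.slice(i, i + 100));
const par = chunks
  .map((c) => c.map(toElement).reduce(combine, 1n))
  .reduce(combine, 1n);

console.log(seq === par); // true: chunking doesn't change the result
```

The "LHS = one" optimization from kaspanet#581 also follows from this structure: the identity element absorbs into any product, so multiplying by it can be skipped entirely.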

* Muhash parallel reduce -- optimize U3072 mul when LHS = one (kaspanet#581)

* semantic: add `from` ext methods

* muhash from txs benchmark

* optimization: in u3072 mul test if lhs is one

* extract `parallelism_in_power_steps`

* comment

* Rust 1.82 fixes + mempool std sig op count check (kaspanet#583)

* rust 1.82 fixes

* sig op count std check

* typo(cli/utils): kaspa wording (kaspanet#582)

Co-authored-by: Michael Sutton <msutton@cs.huji.ac.il>

* On-demand calculation for Ghostdag for Higher Levels (kaspanet#494)

* Refactor pruning proof validation to many functions

Co-authored-by: Ori Newman <orinewman1@gmail.com>

* Use blue score as work for higher levels

Co-authored-by: Ori Newman <orinewman1@gmail.com>

* Remove pruning processor dependency on gd managers

Co-authored-by: Ori Newman <orinewman1@gmail.com>

* Consistency renaming

Co-authored-by: Ori Newman <orinewman1@gmail.com>

* Update db version

Co-authored-by: Ori Newman <orinewman1@gmail.com>

* GD Optimizations

Co-authored-by: Ori Newman <orinewman1@gmail.com>

* Remove remnant of old impl. optimize db prefixes

* Ensure parents are in relations; Add comments

apply_proof only inserts parent entries for a header from the proof into
the relations store for a level if there was GD data in the old stores
for that header.

This adds a check to filter out parent records not in relations store

* Match depth check to block_at_depth logic

* Use singular GD store for header processing

* Relax the panic to warn when finished_headers and couldn't find sufficient root

This happens when there's not enough headers in the pruning proof but it satisfies validation

* Error handling for gd on higher levels

relations.get_parents on GD gets extra parents that aren't in the
current GD store. so get_blue_work throws an error

next, ORIGIN was missing from the GD so add that

* remove using deeper requirements in lower levels

* Fix missed references to self.ghostdag_stores in validate_pruning_point_proof

* Refactoring for single GD header processing

* Add assertion to check root vs old_root

* Lint fix current_dag_level

* Keep DB Version at 3

The new prefixes added are compatible with the old version. We don't want to trigger a db delete with this change

* Cleanup apply_proof logic and handle more ghostdag_stores logic

* remove simpa changes

* Remove rewriting origin to primary GD

It's already on there

* More refactoring to use single GD store/manager

* Lint fixes

* warn to trace for common retry

* Address initial comments

* Remove "primary" in ghostdag store/manager references

* Add small safety margin to proof at level 0

This prevents the case where new root is an anticone of old root

* Revert to only do proof rebuilding on sanity check

* Proper "better" proof check

* Update comment on find_selected_parent_header_at_level

* Re-apply missed comment

* Implement db upgrade logic from 3 to 4

* Explain further the workaround for GD ordering.rs

* Minor update to Display of TempGD keys

* Various fixes

- Keep using old root to minimize proof size. Old root is calculated
  using the temporary gd stores
- fix the off-by-one in block_at_depth and chain_up_to_depth
- revert the temp fix to sync with the off-by-one

* Revert "Various fixes"

This reverts commit bc56e65.

This experimental commit requires a bit more thinking to apply, and
optimization can be deferred.

* Revert better proof check

Recreates the GD stores for the current consensus by checking existing proof

* Fix: use cc gd store

* When building pruning point proof ghostdag data, ignore blocks before the root

* Add trusted blocks to all relevant levels during apply_proof

As opposed to applying only to level 0

* Calculate headers estimate in init proof stores

* Explain finished headers logic

Add back the panic if we couldn't find the required block and our headers are done

Add explanation in comment for why trying anyway if finished_headers is acceptable

* clarify comment

* Rename old_root to depth_based_root

explain logic for the two root calculation

* More merge fixes

* Refactor relations services into self

* Use blue_work for find_selected_parent_header_at_level

* Comment fixes and small refactor

* Revert rename to old root

* Lint fix from merged code

* Some cleanup

- use BlueWorkType
- fix some comments

* remove last reference to ghostdag_primary_*

* Cleaner find_selected_parent_header_at_level

Co-authored-by: Michael Sutton <mikisiton2@gmail.com>

* Refactor for better readability and add more docs

* Smaller safety margin for all

* Lint and logic fix

* Reduce loop depth increase on level proof retries

Co-authored-by: Michael Sutton <mikisiton2@gmail.com>

* Update consensus/src/processes/pruning_proof/mod.rs

Co-authored-by: Michael Sutton <mikisiton2@gmail.com>

* Comment cleanup

* Remove unnecessary clone

Co-authored-by: Michael Sutton <mikisiton2@gmail.com>

* Rename genesis_hash to root; Remove redundant filter

* Cleaner reachability_stores type

Co-authored-by: Michael Sutton <mikisiton2@gmail.com>

* Change failed to find sufficient root log to debug

* Bump node version to 0.15.3

* A few minor leftovers

---------

Co-authored-by: Ori Newman <orinewman1@gmail.com>
Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
Co-authored-by: Michael Sutton <msutton@cs.huji.ac.il>

* Standardize fork activation logic (kaspanet#588)

* Use ForkActivation for all fork activations

* Avoid using negation in some ifs

* Add is_within_range_from_activation

* Move 'is always' check inside is_within_range_from_activation

* lints

* Refactoring for cleaner pruning proof module (kaspanet#589)

* Cleanup manual block level calc

There were two areas in pruning proof mod that
manually calculated block level.

This replaces those with a call to calc_block_level

* Refactor pruning proof build functions

* Refactor apply pruning proof functions

* Refactor validate pruning functions

* Add comments for clarity

* Pruning proof minor improvements (kaspanet#590)

* Check pow for headers in level proof

* Implement comparable level work

* Implement fairer pruning proof comparison

* prefer having the GD manager compose the level target, so that
1. level_work is always used
2. level zero can be explicitly set to 0 by the manager itself (being consensus sensitive code)

* 1. no need to init origin here
2. comments about blue work are obvious

* use saturating ops and avoid SignedInteger all together

* Comment on level_work

* Move MAX_WORK_LEVEL close to BlueWorkType and explain

* Refactor block level calc from pow to a function

---------

Co-authored-by: Michael Sutton <msutton@cs.huji.ac.il>

* Add KIP-10 Transaction Introspection Opcodes, 8-byte arithmetic and Hard Fork Support (kaspanet#487)

* implement new opcodes

* example of mutual tx

* add docs describing scenario

* introduce feature gate for new features

* introduce hf feature that enables txscript hf feature

* style: fmt and clippy fix

* implement new opcodes

* example of mutual tx

* add docs describing scenario

* introduce feature gate for new features

* style: fmt and clippy fix

* remove unused feature

* fmt

* make opcode invalid in case of feature disabled

* feature gate test

* change test set based on feature
add ci cd test

* rename InputSPK -> InputSpk

* enable kip10 opcodes based on daa_score in runtime

* use dummy kip10 activation daa score in params

* use dummy kip10 activation daa score in params

* suppress clippy lint

* add example with shared key

* fix clippy

* remove useless check from example

* add one-time borrowing example

* Implement one-time and two-times threshold borrowing scenarios

- Add threshold_scenario_limited_one_time function
- Add threshold_scenario_limited_2_times function
- Create generate_limited_time_script for reusable script generation
- Implement nested script structure for two-times borrowing
- Update documentation for both scenarios
- Add tests for owner spending, borrowing, and invalid attempts in both cases
- Ensure consistent error handling and logging across scenarios
- Refactor to use more generic script generation approach

* fix: fix incorrect sig-op count

* correct error description

* style: fmt

* pass kip-10 flag in constructor params

* remove borrow scenario from tests.
run tests against both kip10-enabled/disabled engines

* introduce method that converts spk to bytes.
add tests covering new opcodes

* return comment describing where invalid opcodes starts from.
add comments describing why 2 files are used.

* fix wrong error messages

* support introspection by index

* test input spk

* test output spk

* tests refactor

* support 8-byte arithmetics

* Standardize fork activation logic (kaspanet#588)

* Use ForkActivation for all fork activations

* Avoid using negation in some ifs

* Add is_within_range_from_activation

* Move 'is always' check inside is_within_range_from_activation

* lints

* Refactoring for cleaner pruning proof module (kaspanet#589)

* Cleanup manual block level calc

There were two areas in pruning proof mod that
manually calculated block level.

This replaces those with a call to calc_block_level

* Refactor pruning proof build functions

* Refactor apply pruning proof functions

* Refactor validate pruning functions

* Add comments for clarity

* only enable 8 byte arithmetics for kip10

* use i64 value in 9-byte tests

* fix tests covering kip10 and i64 deserialization

* fix test according to 8-byte math

* finish test covering kip10 opcodes: input/output/amount/spk

* fix kip10 examples

* rename test

* feat: add input index op

* feat: add input/output opcodes

* reserve opcodes
reorder kip10 opcodes.
reflect script tests

* fix example

* introspection opcodes are reserved, not disabled

* use ForkActivation type

* cicd: run kip-10 example

* move spk encoding to txscript module

* rework bound check of input/output index

* fix tests by importing spkencoding trait

* replace todo in descriptions of over[under]flow errors

* reorder new opcodes, reserve script sig opcode, remove txid

* fix bitcoin script tests

* add simple opcode tests

* rename id(which represents input index) to idx

* fix comments

* add input spk tests

* refactor test cases

* refactor(txscript): Enforce input index invariant via assertion

Change TxScriptEngine::from_transaction_input to assert valid input index
instead of returning Result. This better reflects that an invalid index is a
caller's (transaction validation) error rather than a script engine error,
since the input must be part of the transaction being validated.

An invalid index signifies a mismatch between the transaction and the input being
validated - this is a programming error in the transaction validator layer, not
a script engine concern. The script engine should be able to assume it receives
valid inputs from its caller.

The change simplifies error handling by enforcing this invariant early,
while maintaining identical behavior for valid inputs. The function is
now documented to panic on malformed inputs.

This is a breaking change for code that previously handled
InvalidIndex errors, though such handling was likely incorrect
as it indicated an inconsistency in transaction validation.

* refactor error types to contain correct info

* rename id to idx

* rename opcode

* make construction of TxScriptEngine from transaction input infallible

* style: format combinators chain

* add integration test covering activation of kip10

* rename kip10_activation_daa_score to kip10_activation

* Update crypto/txscript/src/lib.rs

refactor vector filling

* rework assert

* verify that a block is disqualified if it contains a tx that requires kip10 before activation; a block containing the kip10-opcode tx is accepted after the daa score has been reached

* revert changer to infallible api

* add doc comments

* Update crypto/txscript/src/opcodes/mod.rs

Fallible conversion of output amount (usize -> i64)

* Update crypto/txscript/src/opcodes/mod.rs

Fallible conversion of input amount (usize -> i64)

* add required import

* refactor: SigHashReusedValuesUnsync doesn't need to be mutable

* fix test description

* rework example

* 9 byte integers must fail to serialize

* add todo

* rewrite todo

* remove redundant code

* remove redundant mut in example

* remove redundant mut in example

* remove redundant mut in example

* cicd: apply lint to examples

---------

Co-authored-by: Ori Newman <orinewman1@gmail.com>

* Some simplification to script number types (kaspanet#594)

* Some simplification to script number types

* Add TODO

* Address review comments

* feat: add signMessage noAuxRand option for kaspa wasm (kaspanet#587)

* feat: add signMessageWithoutRand method for kaspa wasm

* enhance: sign message api

* fix: unit test fail

* chore: update noAuxRand of ISignMessage

* chore: add sign message demo for noAuxRand

* Optimize window cache building for ibd (kaspanet#576)

* show changes.

* optimize window caches for ibd.

* do lints and checks etc..

* bench and compare.

* clean-up

* rework lock time check a bit.

* // bool: todo!(),

* fmt

* address some review points.

* address review comments.

* update comments.

* pass tests.

* fix blue work assumption, update error message.

* Update window.rs

slight comment update.

* simplify a bit more.

* remove some unneeded things. rearrange access to cmpct gdd.

* fix conflicts.

* lints..

* address review points from m. sutton.

* uncomplicate check_block_transactions_in_context

* commit in lazy

* fix lints.

* query compact data as much as possible.

* Use DefefMut to unify push_mergeset logic for all cases (addresses @tiram's review)

* comment on cache_sink_windows

* add comment to new_sink != prev_sink

* return out of push_mergeset, if we cannot push any more.

* remove unused diff cache and do non-daa as option.

* Cargo.lock

* bindings signer layout

---------

Signed-off-by: Dmitry Perchanov <demisrael@gmail.com>
Signed-off-by: Dmitry Perchanov <dima@voyager.local>
Co-authored-by: demisrael <81626907+demisrael@users.noreply.github.com>
Co-authored-by: coderofstuff <114628839+coderofstuff@users.noreply.github.com>
Co-authored-by: Dmitry Perchanov <dima@voyager.local>
Co-authored-by: Maxim <59533214+biryukovmaxim@users.noreply.github.com>
Co-authored-by: aspect <anton.yemelyanov@gmail.com>
Co-authored-by: George Bogodukhov <gvbgduh@gmail.com>
Co-authored-by: Michael Sutton <msutton@cs.huji.ac.il>
Co-authored-by: D-Stacks <78099568+D-Stacks@users.noreply.github.com>
Co-authored-by: Romain Billot <romainbillot3009@gmail.com>
Co-authored-by: Ori Newman <orinewman1@gmail.com>
Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
Co-authored-by: witter-deland <87846830+witter-deland@users.noreply.github.com>
someone235 added a commit to someone235/rusty-kaspa that referenced this pull request Nov 24, 2024
* Add KIP-10 Transaction Introspection Opcodes, 8-byte arithmetic and Hard Fork Support (kaspanet#487)

* implement new opcodes

* example of mutual tx

* add docs describing scenario

* introduce feature gate for new features

* introduce hf feature that enables txscript hf feature

* style: fmt and clippy fix

* implement new opcodes

* example of mutual tx

* add docs describing scenario

* introduce feature gate for new features

* style: fmt and clippy fix

* remove unused feature

* fmt

* make opcode invalid in case of feature disabled

* feature gate test

* change test set based on feature
add ci cd test

* rename InputSPK -> InputSpk

* enable kip10 opcodes based on daa_score in runtime

* use dummy kip10 activation daa score in params

* use dummy kip10 activation daa score in params

* suppress clippy lint

* add example with shared key

* fix clippy

* remove useless check from example

* add one-time borrowing example

* Implement one-time and two-times threshold borrowing scenarios

- Add threshold_scenario_limited_one_time function
- Add threshold_scenario_limited_2_times function
- Create generate_limited_time_script for reusable script generation
- Implement nested script structure for two-times borrowing
- Update documentation for both scenarios
- Add tests for owner spending, borrowing, and invalid attempts in both cases
- Ensure consistent error handling and logging across scenarios
- Refactor to use more generic script generation approach

* fix: fix incorrect sig-op count

* correct error description

* style: fmt

* pass kip-10 flag in constructor params

* remove borrow scenario from tests.
run tests against both kip10-enabled/disabled engines

* introduce method that converts spk to bytes.
add tests covering new opcodes

* return comment describing where invalid opcodes starts from.
add comments describing why 2 files are used.

* fix wrong error messages

* support introspection by index

* test input spk

* test output spk

* tests refactor

* support 8-byte arithmetics

* Standardize fork activation logic (kaspanet#588)

* Use ForkActivation for all fork activations

* Avoid using negation in some ifs

* Add is_within_range_from_activation

* Move 'is always' check inside is_within_range_from_activation

* lints

* Refactoring for cleaner pruning proof module (kaspanet#589)

* Cleanup manual block level calc

There were two areas in pruning proof mod that
manually calculated block level.

This replaces those with a call to calc_block_level

* Refactor pruning proof build functions

* Refactor apply pruning proof functions

* Refactor validate pruning functions

* Add comments for clarity

* only enable 8 byte arithmetics for kip10

* use i64 value in 9-byte tests

* fix tests covering kip10 and i64 deserialization

* fix test according to 8-byte math

* finish test covering kip10 opcodes: input/output/amount/spk

* fix kip10 examples

* rename test

* feat: add input index op

* feat: add input/output opcodes

* reserve opcodes
reorder kip10 opcodes.
reflect script tests

* fix example

* introspection opcodes are reserved, not disabled

* use ForkActivation type

* cicd: run kip-10 example

* move spk encoding to txscript module

* rework bound check of input/output index

* fix tests by importing spkencoding trait

* replace todo in descriptions of over[under]flow errors

* reorder new opcodes, reserve script sig opcode, remove txid

* fix bitcoin script tests

* add simple opcode tests

* rename id(which represents input index) to idx

* fix comments

* add input spk tests

* refactor test cases

* refactor(txscript): Enforce input index invariant via assertion

Change TxScriptEngine::from_transaction_input to assert valid input index
instead of returning Result. This better reflects that an invalid index is a
caller's (transaction validation) error rather than a script engine error,
since the input must be part of the transaction being validated.

An invalid index signifies a mismatch between the transaction and the input being
validated - this is a programming error in the transaction validator layer, not
a script engine concern. The script engine should be able to assume it receives
valid inputs from its caller.

The change simplifies error handling by enforcing this invariant early,
while maintaining identical behavior for valid inputs. The function is
now documented to panic on malformed inputs.

This is a breaking change for code that previously handled
InvalidIndex errors, though such handling was likely incorrect
as it indicated an inconsistency in transaction validation.

* refactor error types to contain correct info

* rename id to idx

* rename opcode

* make construction of TxScriptEngine from transaction input infallible

* style: format combinators chain

* add integration test covering activation of kip10

* rename kip10_activation_daa_score to kip10_activation

* Update crypto/txscript/src/lib.rs

refactor vector filling

* rework assert

* verify that a block is disqualified if it contains a tx that requires kip10 before activation; a block containing the kip10-opcode tx is accepted after the daa score has been reached

* revert changer to infallible api

* add doc comments

* Update crypto/txscript/src/opcodes/mod.rs

Fallible conversion of output amount (usize -> i64)

* Update crypto/txscript/src/opcodes/mod.rs

Fallible conversion of input amount (usize -> i64)

* add required import

* refactor: SigHashReusedValuesUnsync doesn't need to be mutable

* fix test description

* rework example

* 9 byte integers must fail to serialize

* add todo

* rewrite todo

* remove redundant code

* remove redundant mut in example

* remove redundant mut in example

* remove redundant mut in example

* cicd: apply lint to examples

---------

Co-authored-by: Ori Newman <orinewman1@gmail.com>

* Some simplification to script number types (kaspanet#594)

* Some simplification to script number types

* Add TODO

* Address review comments

* feat: add signMessage noAuxRand option for kaspa wasm (kaspanet#587)

* feat: add signMessageWithoutRand method for kaspa wasm

* enhance: sign message api

* fix: unit test fail

* chore: update noAuxRand of ISignMessage

* chore: add sign message demo for noAuxRand

* test reflects enabling payload

* Enhance benchmarking: add payload size variations

Refactored `mock_tx` to `mock_tx_with_payload` to support custom payload sizes. Introduced new benchmark function `benchmark_check_scripts_with_payload` to test performance with varying payload sizes. Commented out the old benchmark function to focus on payload-based tests.

* Enhance script checking benchmarks

Added benchmarks to evaluate script checking performance with varying payload sizes and input counts. This helps in understanding the impact of transaction payload size on validation and the relationship between input count and payload processing overhead.

* Add new test case for transaction hashing and refactor code

This commit introduces a new test case to verify that transaction IDs and hashes change with payload modifications. Additionally, code readability and consistency are improved by refactoring multi-line expressions into single lines where appropriate.

* Add payload activation test for transactions

This commit introduces a new integration test to validate the enforcement of payload activation rules at a specified DAA score. The test ensures that transactions with large payloads are rejected before activation and accepted afterward, maintaining consensus integrity.

* style: fmt

* test: add test that checks that payload change reflects sighash

* rename test

---------

Co-authored-by: Ori Newman <orinewman1@gmail.com>
Co-authored-by: witter-deland <87846830+witter-deland@users.noreply.github.com>