Mining refactor with new features #37

Open
wants to merge 8 commits into main
Conversation

@armstrys (Contributor) commented Jan 22, 2023

This is a rather large PR that I fully expect to be reworked, but I wanted to share it because I think it adds some useful features. I've tried to break the commits up into logical steps so we can choose to apply only a subset if needed.

  • raise an error if requested vanity characters are not in the bech32 character set (a rough sketch follows this list)
  • move mine_vanity_key into pow.py to avoid circular imports when adding other features
  • refactor the "guess" functions out of the mining functions so that we can reuse them to test performance
  • add methods to check hashrate and estimate mining times for both types of private key mining (hex and npub vanity) and for event mining
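
For context, here is a rough sketch of the kind of bech32 check the first bullet describes; the constant and function names are illustrative and may not match the PR's actual code.

```python
# Illustrative sketch only; the PR's actual validation may differ.
# bech32 strings (such as npub keys) can only contain these 32 characters:
BECH32_CHARSET = "qpzry9x8gf2tvdw0s3jn54khce6mua7l"

def validate_vanity_pattern(pattern: str) -> None:
    """Raise early if the requested vanity pattern can never appear in an npub."""
    invalid = set(pattern) - set(BECH32_CHARSET)
    if invalid:
        raise ValueError(
            f"Invalid characters for an npub pattern: {''.join(sorted(invalid))}. "
            f"Allowed characters are: {BECH32_CHARSET}"
        )
```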

@kdmukai (Contributor) commented Jan 22, 2023

@armstrys can you (or anyone else) drop in some rough performance calcs here? i.e. "hashes" (guesses) per second.

I had been working on my own vanity key brute forcing library in python, but completely abandoned it once I saw rana's performance.

My M1 MacBook Pro was doing something like 1 million guesses in two minutes in Python. By running multiple instances I could hit maybe 6x that.

Rana is hitting 340k guesses per second!

I think it probably makes more sense to try to write Python bindings for rana's Rust code and submit a PR to rana with whatever modifications that might require.

@armstrys (Contributor, Author) commented Jan 22, 2023

If you pull this branch you should be able to run get_hashrate(_guess_key) for hex mining or get_hashrate(_guess_vanity_key) for npub mining (assuming my code is right). I think this implementation is consistent with the fastest I've been able to get keys in Python on my machine. It wouldn't surprise me if rana is much faster - I probably won't have time to benchmark on my 2015 Intel MacBook today, but I can probably try sometime next week.
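
For anyone following along, a minimal sketch of what a hashrate check along these lines could look like (the signature here is assumed and may not match the branch exactly):

```python
# Minimal sketch, not the branch's actual implementation: times how many
# calls to a guess function complete per second.
import time

def get_hashrate(guess_func, duration: float = 5.0) -> float:
    """Call guess_func in a tight loop for ~duration seconds and return guesses/sec.

    Assumes guess_func can be called with no arguments.
    """
    count = 0
    start = time.perf_counter()
    while time.perf_counter() - start < duration:
        guess_func()
        count += 1
    return count / (time.perf_counter() - start)
```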

I suspect you’re right that investigating python bindings to another solution might make more sense here! I hadn’t thought of that.

@kdmukai (Contributor) commented Jan 24, 2023

In [24]: pow.get_hashrate(pow._guess_key)
Out[24]: 24796.10208550446

In [25]: pow.get_hashrate(pow._guess_key)
Out[25]: 24901.973319194134

In [26]: pow.get_hashrate(pow._guess_vanity_key)
Out[26]: 10890.909980241477

In [27]: pow.get_hashrate(pow._guess_vanity_key)
Out[27]: 10914.554774844557

So for vanity generation it's about 34x slower than rana, but that's with rana using all 8 cores. If we manually run 8 instances of Python and assume no performance loss per instance (more or less true based on my earlier testing with my own Python vanity generator), that roughly cuts it down to rana being somewhere around 4x faster.
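
For reference, one way to test that parallel assumption from Python itself would be something like the sketch below; the pow import path and function names are taken from the comments above and may need adjusting to match the branch.

```python
# Rough sketch: measure per-process hashrate on every core and sum the results.
# Assumes pow.get_hashrate and pow._guess_vanity_key exist as described above.
from multiprocessing import Pool, cpu_count

def _measure(_):
    from nostr import pow  # import path assumed; adjust to wherever pow.py lives
    return pow.get_hashrate(pow._guess_vanity_key)

if __name__ == "__main__":
    n = cpu_count()
    with Pool(n) as pool:
        rates = pool.map(_measure, range(n))
    print(f"~{sum(rates):,.0f} guesses/sec total across {n} processes")
```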
