Set max epoch number constant #212

Merged
merged 1 commit into master on Nov 10, 2021
Conversation

Owner

@chfast chfast commented Nov 10, 2021

Define the maximum value of epoch number this Ethash library supports.

@chfast chfast requested review from axic and gumb0 November 10, 2021 11:01
@@ -49,6 +49,9 @@ extern "C" {
#define ETHASH_FULL_DATASET_ITEM_SIZE 128
#define ETHASH_NUM_DATASET_ACCESSES 64

/// The maximum epoch number supported by this implementation.
#define ETHASH_MAX_EPOCH_NUMBER 32639
Collaborator

How did you arrive at this number?

Owner Author

It is the maximum epoch number for which the light cache size stays under 4GB. Beyond that, some uint32 computations start to overflow.

Collaborator

@axic axic Nov 10, 2021

Can you document this?

Also, can you put an estimate in terms of mainnet block numbers (I know it depends on difficulty, but assume a linear yearly difficulty increase)? I also understand this is moot with PoS, but it would still be nice to have an idea.

Or at least document the current block + epoch number, as that gives an idea of how many epochs we have progressed over 6 years.


@axic
1 epoch = 30k blocks, so the limit imposed by @chfast is 979.17M blocks.
Difficulty is not related: difficulty is adjusted in mining to keep the block interval constant (between 12 and 15 seconds).
Mainnet is currently at block 13.5M, i.e. epoch 450.

Owner Author

This is documented now.

Collaborator

@AndreaLanfranchi @chfast Thanks for the explanation, this looks good now.

@codecov-commenter commented Nov 10, 2021

Codecov Report

Merging #212 (acd8e96) into master (a725a2f) will increase coverage by 0.01%.
The diff coverage is 100.00%.

@@            Coverage Diff             @@
##           master     #212      +/-   ##
==========================================
+ Coverage   98.63%   98.65%   +0.01%     
==========================================
  Files          17       17              
  Lines        1323     1340      +17     
==========================================
+ Hits         1305     1322      +17     
  Misses         18       18              
Flag      Coverage Δ
be        85.54% <96.66%> (+0.18%) ⬆️
default   98.41% <100.00%> (+0.02%) ⬆️
x86_64    85.18% <96.66%> (+0.19%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files                   Coverage Δ
include/ethash/ethash.hpp        100.00% <ø> (ø)
lib/ethash/ethash.cpp            100.00% <100.00%> (ø)
test/unittests/test_ethash.cpp   96.00% <100.00%> (+0.14%) ⬆️

Comment on lines 440 to 442
if (epoch_number > max_epoch_number)
return 0;


I think this might cause a cascading problem in create_epoch_context around here

    const size_t alloc_size = context_alloc_size + light_cache_size + full_dataset_size;
    char* const alloc_data = static_cast<char*>(std::calloc(1, alloc_size));
    if (!alloc_data)
        return nullptr;  // Signal out-of-memory by returning null pointer.

    hash512* const light_cache = reinterpret_cast<hash512*>(alloc_data + context_alloc_size);

If the epoch is beyond the max, then alloc_size == 512 + 0 + 0, which means
hash512* const light_cache = reinterpret_cast<hash512*>(alloc_data + context_alloc_size); reinterprets a location just past the end of the previous allocation.


It is probably worth making this function return nullptr when the requested epoch is above the limit. This way higher-level code is notified.

Owner Author

The create_epoch_context() has the same check and returns nullptr. Now I also added a check for epoch_number < 0.

The calculate_X_num_items() functions have the check duplicated because they are public functions.


Right, sorry. I was comparing the master branch while reading the PR branch. My bad.

Collaborator

@gumb0 gumb0 left a comment

Looks fine, I haven't reviewed tests very carefully.

Comment on lines 56 to 57
/// The DAG size in the last epoch is 252GB and the block number is above 979M
/// what gives over 60 years of blocks assuming 2s block times.
Collaborator

Suggested change
/// The DAG size in the last epoch is 252GB and the block number is above 979M
/// what gives over 60 years of blocks assuming 2s block times.
/// The DAG size in the last epoch is 252GB and the block number is above 979M,
/// which gives over 60 years of blocks assuming 2s block times.

@@ -268,19 +273,30 @@ TEST(ethash, find_epoch_number_double_descending)
TEST(ethash, find_epoch_number_sequential)
{
hash256 seed = {};
for (int i = 0; i < 30000; ++i)
for (int i = 0; i < max_epoch_number; ++i)
Collaborator

Would <= work?

TEST(ethash, find_epoch_number_sequential_gap)
{
constexpr int start_epoch = 200;
hash256 seed = calculate_epoch_seed(start_epoch);
for (int i = start_epoch; i < 30000; ++i)
for (int i = start_epoch; i < max_epoch_number; ++i)
Collaborator

Would <= work?

@@ -324,7 +340,7 @@ TEST(ethash_multithreaded, find_epoch_number_sequential)
{
auto fn = [] {
hash256 seed = {};
for (int i = 0; i < 30000; ++i)
for (int i = 0; i < max_epoch_number; ++i)
Collaborator

Would <= work?

Define the maximum value of epoch number this Ethash library supports.
@chfast chfast merged commit acd8e96 into master Nov 10, 2021
@chfast chfast deleted the max_epoch_number branch November 10, 2021 18:28