
Bp multi aggregation pippenger #4219

Conversation

moneromooo-monero
Collaborator

No description provided.

@paulshapiro
Contributor

Phew nice! Btw I notice some copyrights on new files are 2017, e.g. multiexp.h

@moneromooo-monero
Collaborator Author

The first commit is from the 9th of January, so I likely started off in 2017.

@moneromooo-monero moneromooo-monero force-pushed the bp-multi-aggregation-pippenger branch 2 times, most recently from 26c95ed to b9b2650 on August 7, 2018 21:53
@moneromooo-monero
Collaborator Author

Also, no need to ask for speedups, I have random patches of those which I'll keep for after this is reviewed.

@moneromooo-monero moneromooo-monero force-pushed the bp-multi-aggregation-pippenger branch 3 times, most recently from ebdf00e to 95d85f9 on August 10, 2018 18:50
Contributor

@stoffu stoffu left a comment

Minor comments about the Straus algorithm

std::shared_ptr<straus_cached_data> cache(new straus_cached_data());

#ifdef RAW_MEMORY_BLOCK
const size_t offset = cache->size;
Contributor

Isn't the size member data uninitialized at this point?

static inline rct::key multiexp(const std::vector<MultiexpData> &data, bool HiGi)
{
if (HiGi)
return data.size() <= 128 ? straus(data, straus_HiGi_cache, 0) : pippenger(data, pippenger_HiGi_cache, get_pippenger_c(data.size()));
Contributor

Maybe data.size() <= STRAUS_SIZE_LIMIT ? ... for more explicitness, if my understanding is correct?

Collaborator Author

That 128 is really the spot at which the fastest algorithm changes, whereas STRAUS_SIZE_LIMIT is the precalc amount. They happen to be the same, since it'd be wasteful to precalc more, but it feels wrong somehow to link them, especially when you've got a different limit for the non-precalc case below.

Contributor

But if this number were changed to something greater than STRAUS_SIZE_LIMIT, wouldn't the CACHE_OFFSET macro point outside the allocated memory, which is limited to STRAUS_SIZE_LIMIT?

Collaborator Author

Yes.

Collaborator Author

I've added a static_assert instead.
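
For reference, a minimal sketch of what such a guard could look like, assuming the crossover is spelled as its own constant next to the precalc limit (the constant names are illustrative, not necessarily those used in the patch):

#include <cstddef>

// Illustrative constants: the number of points covered by the precomputed
// Straus cache, and the measured size at which Pippenger overtakes Straus
// (both 128 in the discussion above).
static constexpr std::size_t STRAUS_SIZE_LIMIT      = 128;
static constexpr std::size_t MULTIEXP_STRAUS_CUTOFF = 128;

// If the crossover were ever raised past the precalc size, the cache offset
// would index past the allocated memory, so fail at compile time instead.
static_assert(MULTIEXP_STRAUS_CUTOFF <= STRAUS_SIZE_LIMIT,
              "Straus/Pippenger crossover must not exceed the precomputed cache size");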

digits[j*256+i+1] = (bytes[0] >> 1) & 0xf;
digits[j*256+i+2] = (bytes[0] >> 2) & 0xf;
digits[j*256+i+3] = (bytes[0] >> 3) & 0xf;
digits[j*256+i+4] = ((bytes[0] >> 4) | (bytes[1]<<4)) & 0xf;
Contributor

@stoffu stoffu Aug 13, 2018

I think bytes[1] would be pointing outside the array's bounds when i == 248, where bytes == data[j].scalar.bytes + 31.

The digits array will only be accessed at indices that are multiples of STRAUS_C, which is 4, so it seems wasteful to allocate 256 bytes for each point (75% of the array is unused).
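
For illustration, a minimal sketch (not the patch's code) of extracting only the nibble-aligned windows actually used when STRAUS_C == 4, which both avoids reading past the 32-byte scalar and drops the unused offsets; the function name is hypothetical:

#include <cstddef>
#include <cstdint>

// Extract the 4-bit window starting at bit `pos` of a 32-byte little-endian
// scalar. With STRAUS_C == 4 the windows used are always nibble-aligned
// (pos % 8 is 0 or 4), so no read ever crosses a byte boundary and only
// 256 / 4 = 64 windows per point would need to be stored.
static inline std::uint8_t scalar_window4(const unsigned char scalar[32], std::size_t pos)
{
  const std::size_t byte  = pos / 8;   // byte containing the whole window
  const std::size_t shift = pos % 8;   // 0 or 4 for nibble-aligned windows
  return (scalar[byte] >> shift) & 0xf;
}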

Collaborator Author

That's a very good observation, I've added that to my list.

start_i += STRAUS_C;
MULTIEXP_PERF(PERF_TIMER_STOP(setup));

size_t start_offset = 0;
Contributor

Seems unnecessary, since a variable of the same name is defined in the for loop below.

while (!(i < STRAUS_C))
{
ge_p2 p2;
ge_p3_to_p2(&p2, &band_p3);
Contributor

@stoffu stoffu Aug 13, 2018

Maybe perform this doubling process when !ge_p3_is_point_at_infinity(&band_p3), just like the Pippenger case, mainly for the sake of consistency? The performance gain seems negligible, though.

EDIT1: Hmm, maybe there's a noticeable performance gain. Could someone please test?

EDIT2: Maybe not, my performance test seems to fluctuate for random reasons... :/

Collaborator Author

The first step (where band_p3 is identity) is already skipped. The subsequent steps are vanishingly unlikely to be identity in practice.

Contributor

Ah ok.

Contributor

@stoffu stoffu left a comment

Went through unit tests and performance tests

size_t n2 = n_bulletproof_amounts(proof);
CHECK_AND_ASSERT_MES(n2 < std::numeric_limits<uint32_t>::max() - n, 0, "Invalid number of bulletproofs");
if (n2 == 0)
return 0;
Contributor

Is this situation expected to occur normally? Wouldn't some kind of assertion be useful here?

Collaborator Author

It happens if n_bulletproof_amounts asserted in the first place.

Contributor

Ah sorry, I misread.

TEST(ringct, mul8)
{
ASSERT_EQ(rct::scalarmult8(rct::identity()), rct::identity());
ASSERT_EQ(rct::scalarmult8(rct::H), rct::scalarmultKey(rct::H, rct::EIGHT));
Contributor

Maybe also test for INV_EIGHT?

key aP;
ge_tobytes(aP.bytes, &R);
return aP;
}

//Computes aH where H= toPoint(cn_fast_hash(G)), G the basepoint
Contributor

Is this comment valid?

{
case op_sc_add: sc_add(key.bytes, scalar0.bytes, scalar1.bytes); break;
case op_sc_sub: sc_sub(key.bytes, scalar0.bytes, scalar1.bytes); break;
case op_sc_mul: sc_sub(key.bytes, scalar0.bytes, scalar1.bytes); break;
Contributor

sc_mul?

Contributor

@stoffu stoffu left a comment

Some comments on core tests

if (!valid)
DO_CALLBACK(events, "mark_invalid_block");
events.push_back(blk_txes);
blk_last = blk_txes;
Contributor

Isn't this a no-op?

Collaborator Author

It technically is.

{
const size_t mixin = 10;
const uint64_t amounts_paid[] = {1000, 1000, 1000, 1000, 1000, 1000, 1000, (uint64_t)-1};
const size_t bp_sizes[] = {4, 2, 1, (size_t)-1};
Contributor

The bp_sizes here and below are unused when the post_tx arg is set to null. Maybe drop it for the invalid tests, and bring in the post_tx with check_bp for the valid tests, namely gen_bp_txs_valid_2_and_2 and gen_bp_txs_valid_2_and_3_and_2_and_4?

const size_t mixin = 10;
const uint64_t amounts_paid[] = {1000, 1000, (uint64_t)-1, 1000, 1000, 1000, (uint64_t)-1, 1000, 1000, (uint64_t)-1, 1000, 1000, 1000, 1000, (uint64_t)-1};
const rct::RangeProofType range_proof_type[] = {rct::RangeProofPaddedBulletproof, rct::RangeProofPaddedBulletproof, rct::RangeProofPaddedBulletproof, rct::RangeProofPaddedBulletproof};
const size_t bp_sizes[] = {2, (size_t)-1, 3, (size_t)-1, 2, (size_t)-1, 4, (size_t)-1};
Contributor

If I activated the check_bp post_tx test, this test failed; it was fixed by

-  const size_t bp_sizes[] = {2, (size_t)-1, 3, (size_t)-1, 2, (size_t)-1, 4, (size_t)-1};
+  const size_t bp_sizes[] = {2, (size_t)-1, 4, (size_t)-1, 2, (size_t)-1, 4, (size_t)-1};

apparently for the same reason as in the gen_bp_tx_valid_3 case (sizes need to be bumped up to the next power of 2).

const size_t mixin = 10;
const uint64_t amounts_paid[] = {5000, 5000, (uint64_t)-1};
const size_t bp_sizes[] = {1, 1, (size_t)-1};
const rct::RangeProofType range_proof_type[] = { rct::RangeProofBulletproof };
Contributor

I haven't really dived into the main code yet, so my understanding of the distinction between the different types of BPs is still vague, but why is the use of RangeProofBulletproof supposed to cause failure here? Also, I see all the valid core tests always using the RangeProofPaddedBulletproof type. Shouldn't there be valid tests for the other 2 types, RangeProofBulletproof and RangeProofMultiOutputBulletproof?

Collaborator Author

@moneromooo-monero moneromooo-monero Aug 15, 2018

Because ArticMine's recommendation is for a single proof per tx, padded.

@stoffu
Contributor

stoffu commented Aug 15, 2018

The core RPC version needs bumping (too easy to forget...)

@moneromooo-monero
Collaborator Author

I'll be a bit longer with the RPC version, as I realized the changes are not backward compatible and it's not immediately obvious how to make them so.

@moneromooo-monero
Collaborator Author

I marked the version as incompatible. Reasonable for a switch to BPs on a fork I guess.

Contributor

@stoffu stoffu left a comment

Some comments on bulletproofs


// PAPER LINES 21-22
w[round] = hash_cache_mash(hash_cache, L[round], R[round]);
if (w[round] == rct::zero())
Contributor

Shouldn't x_ip also receive the same checking?

Collaborator Author

sarang says it could be checked too, but it's not attacker controlled, so there's a ~2e-256 chance of failing. I'll check anyway.

constexpr size_t N = 1<<logN;
size_t M, logM;
for (logM = 0; (M = 1<<logM) <= maxM && M < sv.size(); ++logM);
CHECK_AND_ASSERT_THROW_MES(logM <= maxM, "sv/gamma are empty or too large");
Contributor

This assertion doesn't seem to check for the emptiness of sv/gamma. Shouldn't it be tested along with the sv.size()==gamma.size() test above?

Also, the comparison logM <= maxM seems wrong; shouldn't it be M <= maxM instead?
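
A minimal sketch of the checks as suggested above, with assumed names and types, and plain asserts standing in for the project's CHECK_AND_ASSERT_THROW_MES macro:

#include <cassert>
#include <cstddef>
#include <vector>

// sv and gamma stand in for the value/blinding vectors; maxM is the maximum
// number of amounts per proof. Returns the padded amount count M on success.
std::size_t check_prove_inputs(const std::vector<int> &sv, const std::vector<int> &gamma, std::size_t maxM)
{
  assert(!sv.empty() && sv.size() == gamma.size());  // emptiness checked explicitly
  std::size_t M, logM;
  for (logM = 0; (M = std::size_t(1) << logM) <= maxM && M < sv.size(); ++logM);
  assert(M <= maxM);  // compare the padded count M, not its logarithm
  return M;
}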


if (!(L61Right == L61Left))
// PAPER LINES 32-33
rct::key x_ip = hash_cache_mash(hash_cache, x, taux, mu, t);
Contributor

Zero check?

PERF_TIMER_START_BP(PROVE_step4);
// PAPER LINE 13
while (nprime > 1)
{
Contributor

This while loop is character-wise exactly identical to the corresponding part in the single case. Perhaps factor into one?

Collaborator Author

The single version is gone now.

Contributor

@stoffu stoffu Aug 16, 2018

The single version is gone now.

What do you mean? In the current commit 2dda1e3a9ebb040ea5653b56364614d754068b3f, I see both single bulletproof_PROVE(const rct::key &sv, const rct::key &gamma) & multi bulletproof_PROVE(const rct::keyV &sv, const rct::keyV &gamma) each containing the same while loop.

for (size_t j = rounds; j-- > 0; )
max_length = std::max(max_length, proof.L.size());
}
CHECK_AND_ASSERT_MES(max_length < 32, false, "At least one proof is too large");
Contributor

@stoffu stoffu Aug 16, 2018

How is this derived? The size of L is logarithmic in the size of the committed vector, so I suppose the assertion should be (1 << max_length) <= maxN*maxM, which means max_length <= 10.

Collaborator Author

That's just an overflow test.

Contributor

Ah ok

CHECK_AND_ASSERT_MES(!(z == rct::zero()), false, "y == 0");
rct::key x = hash_cache_mash(hash_cache, z, proof.T1, proof.T2);
CHECK_AND_ASSERT_MES(!(x == rct::zero()), false, "y == 0");
rct::key x_ip = hash_cache_mash(hash_cache, x, proof.taux, proof.mu, proof.t);
Contributor

Zero check?

// setup weighted aggregates
rct::key z0 = rct::identity();
rct::key z1 = rct::zero();
rct::key z2 = rct::identity();
Contributor

Maybe use uppercase Z0, Z2 to clearly indicate they represent points?

Contributor

@stoffu stoffu left a comment

Some comments on rctTypes and rctSigs

for (size_t i = 0; i < outputs; ++i)
PREPARE_CUSTOM_VECTOR_SERIALIZATION(nbp, bulletproofs);
if (bulletproofs.size() > outputs)
return false;
Contributor

Isn't this redundant given the if (nbp > outputs) test above?


size_t n_bulletproof_amounts(const Bulletproof &proof)
{
CHECK_AND_ASSERT_MES(proof.L.size() >= 6, 0, "Invalid bulletproof L size");
Contributor

How about checking L.size() == R.size() as well? The same for n_bulletproof_max_amounts.

rv.p.bulletproofs[i] = proveRangeBulletproof(rv.outPk[i].mask, outSk[i].mask, amounts[i]);
else
rv.p.rangeSigs[i] = proveRange(rv.outPk[i].mask, outSk[i].mask, amounts[i]);
//compute range proof (bulletproofs are done later)
Contributor

Isn't this comment a bit misleading? In the new version, there's no such thing as full rct with BP, right? This comment gives me an impression that the BP generation will be done at some later point of the full rct construction.


for (i = 0; i < outSk.size(); ++i)
{
Contributor

Why close the loop and open a new one over the same range?

};
enum RangeProofType { RangeProofBorromean, RangeProofBulletproof, RangeProofMultiOutputBulletproof, RangeProofPaddedBulletproof };
Contributor

I'm still confused about the role of these different types of BPs:

  1. Looking at wallet2.cpp, RangeProofBulletproof seems to be used only when the tx has one output. However, basically all txes are supposed to have at least 2 outputs (one dummy in the case of sweep txes).

  2. RangeProofMultiOutputBulletproof seems never used in the actual tx construction (never used in wallet2). It seems like a generalization of RangeProofPaddedBulletproof where more than 16 outputs can be included in a tx (as opposed to RangeProofPaddedBulletproof which is limited to just one Bulletproof and thus up to 16 outputs).

As such, it appears that just the RangeProofMultiOutputBulletproof type suffices. Am I missing something?

Collaborator Author

RangeProofBulletproof is the existing one, with one proof per amount. RangeProofMultiOutputBulletproof is the "optimal size" one, which creates, e.g., a proof for 1 amount and a proof for 4 amounts when asked to prove 5 amounts. RangeProofPaddedBulletproof is the new one, which will prove 8 amounts (the next power of 2) when asked to prove 5. RangeProofMultiOutputBulletproof would generate proofs that do not comply with ArticMine's recommendation.
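
To make the three layouts concrete, here is a small illustrative sketch of how a tx with n outputs would be split under each type; the function names and the 16-output cap per proof mentioned above are assumptions, not the patch's code:

#include <cstddef>
#include <vector>

std::vector<std::size_t> layout_bulletproof(std::size_t n)              // one proof per amount
{
  return std::vector<std::size_t>(n, 1);                                // 5 -> {1,1,1,1,1}
}

std::vector<std::size_t> layout_multi_output_bulletproof(std::size_t n) // split into powers of 2
{
  std::vector<std::size_t> sizes;
  for (std::size_t p = 16; p >= 1; p /= 2)                              // assume 16-output cap per proof
    while (n >= p) { sizes.push_back(p); n -= p; }
  return sizes;                                                         // 5 -> {4,1}
}

std::size_t layout_padded_bulletproof(std::size_t n)                    // one proof, padded up
{
  std::size_t padded = 1;
  while (padded < n) padded <<= 1;
  return padded;                                                        // 5 -> 8
}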

Contributor

Why does the gen_bp_tx_invalid_1_1 core test fail then? I expected it to create a tx with two Bulletproofs each having one element in V.

Collaborator Author

Because ArticMine decided it was better that way. You can ask him for the details of the reasoning.

Contributor

@stoffu stoffu Aug 16, 2018

I see now why gen_bp_tx_invalid_1_1 is failing; the expand_transaction_1 function simply rejects multiple bulletproofs:

-           if (rv.p.bulletproofs.size() != tx.vout.size())
+           if (rv.p.bulletproofs.size() != 1)

I see the function is_canonical_bulletproof_layout also has a similar role.

In your above explanation,

RangeProofBulletproof is the existing one, with one proof per amount.

By the "existing one" you mean the current testnet, I suppose? I don't see any point in caring about compatibility or preserving the old format. We can simply throw away the old/obsolete format and roll back the testnet, no?

RangeProofMultiOutputBulletproof would generate proofs that do not comply with ArticMine's recommendation.

So in the end, RangeProofPaddedBulletproof is the only usable BP type. Why bother with the other two unused types? Removing them seems to help simplify the code.

Collaborator Author

The current one is the one on the current testnet, yes. Those extra variants don't use up much code, and we might use them again in the future as dynamics change, so I don't think it's necessarily best to remove them. If nothing else, it allows one to test that they're rejected :P If you feel strongly about it though, I'll remove them. Testnet will be rolled back, so no backward compatibility to worry about.

Contributor

@stoffu stoffu Aug 17, 2018

OK, fine with keeping them, but at least I want RangeProofBulletproof to be renamed to something a bit more specific, maybe something like RangeProofNonpaddedBulletproof or RangeProofNoAggregateBulletproof? The current form RangeProofBulletproof seems too generic.

Also I think RangeProofMultiOutputBulletproof can be confusing, as RangeProofPaddedBulletproof also handles multiple outputs. Maybe something like RangeProofAggregateSplitBulletproof and RangeProofAggregateNoSplitBulletproof? ArticMine's spreadsheet https://docs.google.com/spreadsheets/d/1XaAKFUiB0hZoCZ9fsQz7JX2UO6kVIgKtZSGRSKqE7a0/htmlview#gid=892847241 also refers to "split proof".

@moneromooo-monero moneromooo-monero force-pushed the bp-multi-aggregation-pippenger branch 2 times, most recently from 5f44805 to 2dda1e3 on August 16, 2018 20:30
@stoffu
Contributor

stoffu commented Aug 17, 2018

Please rebase

@stoffu
Contributor

stoffu commented Aug 17, 2018

Shouldn't transfer_details::tx_size in the wallet RPC also be renamed to tx_weight?

Contributor

@stoffu stoffu left a comment

Some comments on wallet2

LOG_PRINT_L1("Failed to query per kB fee, using " << print_money(FEE_PER_KB));
return FEE_PER_KB;
const uint64_t base_fee = use_fork_rules(HF_VERSION_PER_BYTE_FEE) ? FEE_PER_BYTE : FEE_PER_KB;
LOG_PRINT_L1("Failed to query base fee, using " << base_fee);
Contributor

print_money(base_fee)

const bool use_per_byte_fee = use_fork_rules(HF_VERSION_PER_BYTE_FEE, 0);
const uint64_t base_fee = get_base_fee();
const uint64_t fee_multiplier = get_fee_multiplier(1);
const double fee_level = fee_multiplier * base_fee * (use_per_byte_fee ? (12/(double)13) / (double)1024 : 1);
Contributor

Isn't the test with use_per_byte_fee the other way around?

const bool use_rct = fake_outs_count > 0 && use_fork_rules(4, 0);
const bool bulletproof = use_fork_rules(get_bulletproof_fork(), 0);
const uint64_t fee_per_kb = get_per_kb_fee();
const rct::RangeProofType range_proof_type = bulletproof ? rct::RangeProofBulletproof : rct::RangeProofBorromean;
Contributor

Shouldn't RangeProofPaddedBulletproof be used? RangeProofBulletproof seems unusable in practice because txes normally have two or more outputs.

size_t log_padded_outputs = 2;
while ((1<<log_padded_outputs) < n_outputs)
++log_padded_outputs;
uint64_t nlr = 6 + 2*log_padded_outputs;
Contributor

I think this should be nlr = 2 * (6 + log_padded_outputs);
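
A short sketch of the reasoning behind that suggestion, under the usual Bulletproof structure (an assumption on my part, not spelled out in the thread): a padded proof commits to 64 bits per output over the padded output count, so L and R each hold log2(64 * padded_outputs) = 6 + log_padded_outputs elements.

#include <cstddef>

// Number of L plus R elements in a single padded proof, following the
// suggested nlr = 2 * (6 + log_padded_outputs).
std::size_t bp_lr_elements(std::size_t log_padded_outputs)
{
  const std::size_t per_side = 6 + log_padded_outputs;  // |L| == |R| == log2(64 * M)
  return 2 * per_side;
}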

else
return n_inputs * (mixin+1) * APPROXIMATE_INPUT_BYTES + extra_size;
}

uint64_t estimate_tx_weight(bool use_rct, int n_inputs, int mixin, int n_outputs, size_t extra_size, bool bulletproof)
{
size_t size = estimate_tx_size(use_rct, n_inputs, mixin, n_outputs, extra_size, bulletproof);
Contributor

I think the BP part in the estimate_tx_size:

   if (bulletproof)
     size += ((2*6 + 4 + 5)*32 + 3) * n_outputs;

is not correct.

First of all, this is supposed to be an estimate of the base size with 2 outputs, right?

I don't understand the meaning of + 3 in the formula.

And for 2 outputs aggregated into one BP, the formula should instead be:

   if (bulletproof)
     size += (2*7 + 4 + 5)*32;
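
For context, a worked reading of that suggested formula (the component breakdown is my assumption from the general Bulletproof layout, not stated in the thread): with 2 outputs aggregated into one proof, |L| = |R| = log2(64 * 2) = 7, and the proof carries 4 further group elements (A, S, T1, T2) and 5 scalars (taux, mu, a, b, t), each 32 bytes.

#include <cstddef>

// (2*7 + 4 + 5) * 32 = 736 bytes for the proof itself, excluding V and any
// per-field serialization overhead.
constexpr std::size_t bp_size_two_outputs = (2*7 + 4 + 5) * 32;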

Collaborator Author

It's meant to be an estimate of the size for n_outputs outputs. It needs to be modified for multi-output proofs. The +3 is overhead for array sizes.

Contributor

Ok

// request all outputs with less than 3 instances
const size_t min_mixin = use_fork_rules(7, 10) ? 6 : use_fork_rules(6, 10) ? 4 : 2; // v6 increases min mixin from 2 to 4, v7 to 6
return select_available_outputs_from_histogram(min_mixin + 1, false, true, false, trusted_daemon);
// request all outputs with less instances that the min ring size
Contributor

s/that/than/

Contributor

@stoffu stoffu left a comment

Some comments on cryptonote_core

MERROR_VER("rct signature semantics check failed");
tx_info[n].tvc.m_verifivation_failed = true;
tx_info[n].result = false;
break;
Contributor

redundant

Collaborator Author

Technically yes, but since it's the failure path I prefer to keep it in case more checks get added to those cases.

MERROR_VER("rct signature semantics check failed");
tx_info[n].tvc.m_verifivation_failed = true;
tx_info[n].result = false;
break;
Contributor

redundant

MERROR_VER("Unknown rct type: " << rv.type);
tx_info[n].tvc.m_verifivation_failed = true;
tx_info[n].result = false;
break;
Contributor

redundant

{
std::swap(bad_semantics_txes[0], bad_semantics_txes[1]);
bad_semantics_txes[0].clear();
set_semantics_failed(tx_info[n].tx_hash);
Contributor

Shouldn't this also be called each time the semantics check fails below?

Contributor

@stoffu stoffu left a comment

Some comments on blockchain

if (version < 8)
CHECK_AND_ASSERT_MES(money_in_use - fee <= base_reward, false, "base reward calculation bug");
else
CHECK_AND_ASSERT_MES(money_in_use <= fee || money_in_use - fee <= base_reward, false, "base reward calculation bug");
Contributor

This is supposed to be dropped, as per #4190?

{
lo = mul128(block_reward, DYNAMIC_FEE_REFERENCE_TRANSACTION_WEIGHT, &hi);
div128_32(hi, lo, min_block_weight, &hi, &lo);
div128_32(hi, lo, median_block_weight, &hi, &lo);
Contributor

From ArticMine's paper: https://drive.google.com/file/d/1V96gUEe44r6Qy_Juav00bOx_sIMSv2P8/view:

Low: 0.002*(block reward)/(effective median block weight )
Example: tx size 3000 bytes and effective median block weight = 300000 bytes
Fee = 0.00002*(block reward) or for block reward = 4.28 XMR or 0.0000856 XMR

On the other hand, the current patch does:

base_fee = block_reward * reference_tx_weight / (min_block_weight * median_block_weight * 5)

Assuming the median block weight of 300000, the base fee (corresponding to the lowest priority) will be:

base_fee = block_reward * 3000 / (300000 * 300000 * 5) = 0.00000000667 * block_reward

which seems way smaller than ArticMine's suggestion.

Shouldn't it be simply this?

base_fee = block_reward * reference_tx_weight / (median_block_weight * 500)

Assuming the median block weight of 300000,

base_fee = block_reward * 3000 / (300000 * 500) = 0.00002 * block_reward

Contributor

After a private chat, I realized it was my misunderstanding, and the current code is correct.

This avoids problems when the caller can't deal with a zero
value, which happens often enough that it's worth nipping the
problem in the bud.
Reported by QuarksLab.
Apparently needed for openssl 1.1.x
So that bulletproofs become mandatory
@fluffypony
Contributor

Re-reviewed

@fluffypony fluffypony merged commit 9137ad2 into monero-project:master Sep 11, 2018
fluffypony added a commit that referenced this pull request Sep 11, 2018
9137ad2 blockchain: add a testnet v9 a day after v8 (moneromooo-monero)
ac4f71c wallet2: bump testnet rollback to account for coming reorg (moneromooo-monero)
8f418a6 bulletproofs: #include <openssl/bn.h> (moneromooo-monero)
2bf6365 bulletproofs: speed up the latest changes a bit (moneromooo-monero)
044dff5 bulletproofs: scale points by 8 to ensure subgroup validity (moneromooo-monero)
c83012c bulletproofs: match aggregated verification to sarang's latest prototype (moneromooo-monero)
ce0c743 performance_tests: add padded bulletproof construction (moneromooo-monero)
1224e53 core_tests: add a test for 4-aggregated BP verification (moneromooo-monero)
0e6ed55 fuzz_tests: add a bulletproof fuzz test (moneromooo-monero)
463434d more comprehensive test for ge_p3 comparison to identity/point at infinity (moneromooo-monero)
d0a0565 unit_tests: add a few more multiexp unit tests (moneromooo-monero)
6526d87 core_tests: add a test for a tx with empty bulletproof (moneromooo-monero)
a129bbd multiexp: fix maxscalar off by one (moneromooo-monero)
7ed496c ringct: error out when hashToPoint* returns the point at infinity (moneromooo-monero)
d159185 cryptonote_basic: check output type before using it (moneromooo-monero)
61632dc ringct: prevent a potential very large allocation (moneromooo-monero)
a4317e6 crypto: some paranoid checks in generate_signature/check_signature (moneromooo-monero)
7434df1 crypto: never return zero in random32_unbiased (moneromooo-monero)
0825e97 multiexp: fix wrong Bos-Coster result for 1 non trivial input (moneromooo-monero)
a1359ad Check inputs to addKeys are in range (moneromooo-monero)
fe0fa3b bulletproofs: reject x, y, z, or w[i] being zero (moneromooo-monero)
5ffb2ff v8: per byte fee, pad bulletproofs, fixed 11 ring size (moneromooo-monero)
869b3bf bulletproofs: a few fixes from the Kudelski review (moneromooo-monero)
c429176 bulletproofs: reject points not in the main subgroup (moneromooo-monero)
1569717 bulletproofs: speed up a few multiplies using existing Hi cache (moneromooo-monero)
0b05a0f Add Pippenger cache and limit Straus cache size (moneromooo-monero)
51eb3bd add pippenger unit tests (moneromooo-monero)
b17b8db performance_tests: add stats and loop count multiplier options (moneromooo-monero)
7314d91 perf_timer: split timer class into a base one and a logging one (moneromooo-monero)
d126a02 performance_tests: add aggregated bulletproof tx verification (moneromooo-monero)
263431c Pippenger multiexp (moneromooo-monero)
1ed0ed4 multiexp: cut down on memory allocations (moneromooo-monero)
1b867e7 precalc the ge_p3 representation of H (moneromooo-monero)
ef56529 performance_tests: document the tested bulletproof layouts (moneromooo-monero)
3011178 unit_tests: a couple more bulletproof unit tests for gamma (moneromooo-monero)
c444b1b require canonical multi output bulletproof layout (moneromooo-monero)
7e67c52 Add a define for the max number of bulletproof multi-outputs (moneromooo-monero)
2a8fcb4 Bulletproof aggregated verification and tests (moneromooo-monero)
126196b multiexp: some speedups (moneromooo-monero)
71d67bd aligned: aligned memory alloc/realloc/free (moneromooo-monero)
cb9ecab performance_tests: add signature generation/verification (moneromooo-monero)
bacf0a1 bulletproofs: add aggregated verification (moneromooo-monero)
e895c3d make straus cached mode thread safe, and add tests for it (moneromooo-monero)
7f48bf0 multiexp: bos coster now works for just one point (moneromooo-monero)
9ce9f8c bulletproofs: add multi output bulletproofs to rct (moneromooo-monero)
f34e2e2 performance_tests: add tx checking tests with more than 2 outputs (moneromooo-monero)
0793184 performance_tests: add a --verbose flag, and default to terse (moneromooo-monero)
939bc22 add Straus multiexp (moneromooo-monero)
9ff6e6a ringct: add bos coster multiexp (moneromooo-monero)
e9164bb bulletproofs: misc optimizations (moneromooo-monero)
112f32f performance_tests: add crypto ops (moneromooo-monero)
f5d7b99 performance_tests: add bulletproofs (moneromooo-monero)
8f4ce98 performance_tests: add RingCT MLSAG gen/ver tests (moneromooo-monero)
1aa10c4 performance_tests: add (Borromean) range proofs (moneromooo-monero)
aacfd6e bulletproofs: multi-output bulletproofs (moneromooo-monero)
cb1cc75 performance_tests: don't override log level to 0 (moneromooo-monero)
stoffu pushed a commit to stoffu/monero that referenced this pull request Feb 25, 2019
Note: the 'tx weight' idiom was adapted to the older 'tx size' idiom, since PR monero-project#4219, which introduced the former, hasn't been merged yet
@stoffu stoffu mentioned this pull request Jun 4, 2020