
Add wtema-72 weighted-target exponential moving average #30

Merged — 1 commit merged into kyuupichan:master on Jan 4, 2018

Conversation

@dgenr8 (Contributor) commented Jan 3, 2018

Add support for a new wtema-72 DAA characterized by a single alpha parameter and no fixed window size. Results are similar to ema-1d from @jacob-eliosoff with differences in the adversarial dr100 and ft100 scenarios.

wtema-72:

  • Has measured statistics similar to ema-1d and better than all other algos
  • Like ema-1d, uses an exponential moving average which
    • Has the fastest response of the major tested algos
    • Has no fixed window size or ringing effects
  • Has the simplest implementation of all tested algos
  • Is implemented using integer arithmetic devised by @kyuupichan
  • Requires no special handling of future/zero timestamps (which causes very low median in ema-1d with -s ft100)

wtema-72 approximates a 144-block window by setting weight half-life to 72 blocks. This results in a latest-block weight of ~1/104. ema-1d sets latest block weight to ~1/144 which is slower-responding.
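For readers skimming the PR, a minimal sketch (not the merged code) of how the 72-block half-life maps to the single alpha parameter and what the per-block update then looks like; alpha_recip and IDEAL_BLOCK_TIME follow names used elsewhere in this thread, and treating weighted_target as the previous target scaled by the observed block time is my reading of the description, not a quote of the PR:

HALF_LIFE = 72
alpha = 1 - 0.5 ** (1 / HALF_LIFE)   # weight given to the most recent block
alpha_recip = round(1 / alpha)       # ~104, hence the "~1/104" above
IDEAL_BLOCK_TIME = 600

def next_wtema_target(prev_target, block_time):
    # weighted_target: previous target scaled by the observed block time (assumed)
    weighted_target = prev_target * block_time // IDEAL_BLOCK_TIME
    # single-parameter exponential update, in the form quoted later in the thread
    return weighted_target // alpha_recip + prev_target // alpha_recip * (alpha_recip - 1)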

Add support for a new WTEMA DAA characterized by a single
"alpha" parameter and no fixed window size.
@zawy12 commented Jan 3, 2018

How can the right-hand wtema_target in the following have a value other than zero?

wtema_target = weighted_target // alpha_recip + wtema_target // alpha_recip * (alpha_recip - 1)

@dgenr8 (Author) commented Jan 4, 2018

@zawy12 wtema_target is a global variable in the sim. If algos were classes in the sim, we could use a class (as opposed to instance) variable.

The target is initialized at the beginning of each simulation run to the difficulty value of the warm-up blocks. In real life it should be initialized with the last target produced by the prior algorithm, or in the case of a new blockchain, the initial target for the chain.

@kyuupichan kyuupichan merged commit e4226a3 into kyuupichan:master Jan 4, 2018
@kyuupichan (Owner) commented:
Sounds awesome, I intend to play with it today

@zawy12 commented Jan 4, 2018

I think you're letting wtema_target store the value until the next block. You can't do that on the block chain so just as a point of clarity it should be target instead which should be the same thing, at least after startup.

This is probably the best because it works with integers and you don't need to do anything with solvetimes. The only thing left is to figure out the best N, as a function of target solvetime, which I've worked on. The "N" = 104 in this one might be ideal for coins with 150-second solvetimes, but for 600 seconds, 50 might be best. What's hard to capture in the metrics for success is the effect of price changes on miner motivation. Without that, a large N is going to look better in tests when half of N might be best. The best algorithm so far in a live coin is N=60 with my WHM modification to WT, which Masari has implemented. Their solvetime is 120 seconds, which implies a smaller N is needed for 600 seconds.

This is really a basic EMA of the solvetimes. Jacob's is a certain type of EMA of rates. One is probably better in theory and practice, but I don't know which. I tried an inverted form of Jacob's to work like this, but it didn't work. One hint that Jacob's might be theoretically best is that its solvetime stays perfect. Like all the others, this one is a little high, with solvetimes about 0.5% too long. If you go to a lower N, the solvetime gets a little longer still. I have equations to correct the solvetimes for non-EMA algos when N < 200. If you could multiply by a correction factor of 0.99995, it would correct the 0.5% error. Non-EMA algos would need 0.995 (you can even see the SMA with N=144 has solvetimes a little too long, ~0.2%, from experiment and BCH live data), but EMAs are different. Like Jacob's, this algo is really susceptible to any "round off" or "carry over" error. I do not know how or if those errors could occur in chains, but if the exact previous target that was used is not used in the next "iteration", the error compounds as (1-error)^104. For error = 0.001, the solvetimes are 10% too fast.

It looks really good, and it's nice to see one that acts differently from the others. Like the SMA N=144, the statistics are going to look good at the expense of slowness in responding to price changes. That's when the new DAA has problems. So it's halfway between where the DAA is and where I think it should go (like EMA with N=30 or WHM with N=50). A faster version of this might be the best.

@jacob-eliosoff (Contributor) commented:
  1. I strongly suggest weighting blocks by time, not by block count: or in other words, half-life should be measured in minutes, not in blocks. (Or to put it yet another way: a block twice as slow should get double the weight.) Eg, if we're suddenly hit by an extremely slow block (say following a HF, or if miners otherwise shift elsewhere en masse), a time-based EMA will slash difficulty immediately, whereas a block-based EMA will take several blocks which could be a big problem. Also from a purely mathematical pov, I believe a block-based EMA will produce an avg block time above (in some cases, significantly above) 10 minutes, whereas a time-based one should nail 10 minutes.

  2. Minor point, but I believe the half-life that best approximates a 144 window is not 72 but ln(2)*144 ≃ 100. I don't have much of a sense though as to what window we should be approximating - @zawy12 has done a lot more poking at that.

  3. I'm curious how wtema-72 avoids problems from inaccurate timestamps, but I defer to testing on that point!

@zawy12 commented Jan 4, 2018

The alpha here is 1 - 0.5^(1/72), so it is following the basic EMA:
EMA of ST/T = alpha * ST/T + (1-alpha) * T/T
The ST/T is an estimate of the "current price" in stock terms which is the most recent block's ratio. The "T/T = 1" in the 2nd term is stating the EMA previous to that one was the correct EMA.

The previous terms are weighted according to the following, but this is just an EMA:
[image of the standard EMA weighting formula]

Where p = price is our ST/T. I'm glad Tom mentioned the "half life" view of it, since the 1-alpha in the above means past ST/T ratios get a weighting of 0.5^(i/72), where i is blocks into the past.

Jacob's statement that this N=72 should be compared to his EMA with N=100 is correct. They function about the same. Jacob's has slightly fewer post-"hash attack" delays, but Jacob's does not do as well as this one on my "blocks stolen" metric (blocks obtained by intelligent big miner at lower-than correct difficulty). But they are very close. The main advantage of this is the use of integer math, as coins I'm talking to really want the integer math. Allowing negative solvetimes is also nice.

BTW, for small alpha, the following strange thing is true:
alpha =~ e^alpha - 1
1/alpha = 1/(1-0.5^(1/72)) = 1/(e^alpha - 1)
which hurts my brain.

It should not have a problem with timestamps because in my brief testing it handled negative solvetimes well. But since devs will often not use a signed integer, some will just use ST = 0 if ST < 0, which allows a disastrous exploit. An attacker with 20% of the total hashrate only needs to assign old timestamps at the minus 1 hour limit to make the difficulty drop to 1/2 the correct value in 50 blocks. See method 2 in this article. The fix, if you can't allow negatives, is to use the basic limitation WT144 and Jacob's EMA are using, as long as you don't use a faster version of this like I would like to see. When it responds faster, a single timestamp set to the forward limit has too big an effect on it. Monero allows 24xT and BTC clones 12xT. The fix for faster-responding algorithms is method 3 of that same article. It needs some care to be symmetrical, as there's an exploit if it is not. I will recommend this to the 6 coins about to fork or begin and who are following my recommendations (sumokoin, masari, BTG, masari, and I think new BTCP and ZGLD). Most of them have 4x faster solvetimes, so the 104 is probably a pretty good choice for them, but maybe needs to be about 80. For the T=600 coins I'm going to recommend about N=50 with the complicated solvetime protection (method 3).

To be nice, everyone coding this algorithm should include a comment: "Solvetimes must be allowed to be negative, or use [Neil's simple max-time-subtraction method, included as commented-out code]".

To get the average solvetime more accurate, replace 600 as targetSolveTime in the equation with 597.6. Coins with different solvetimes should use 0.996 of their target time. At N=50 the adjustment is about 0.99.

@jacob-eliosoff (Contributor) commented:
You can make an EMA block-based ("Each block gets x% less weight than the one that followed it, regardless of how long they took") or time-based ("Each block gets x% less weight for each 10 minutes since it was mined"). Both are sane things to try. I'm arguing that time-based is better here, because it responds to difficulty drops better (faster), and is more mathematically correct for what we're trying to do (maintain a 10-minute avg block time). I actually looked into both pretty closely when I first tried out EMAs for difficulty.

@zawy12 commented Jan 4, 2018

Jacob's EMA is:
next D = D * ( alpha * T/ST + (1-alpha) * T/T )
where alpha = 1-e^(-ST/T/N) and follows wiki's form:
[image of the standard EMA formula from Wikipedia]
Jacob's EMA, as he has been saying, should be more responsive because it includes an adjusting alpha. But adjusting alpha in the second term is affecting all previous terms which might be counteracting the benefits. I tried to make it constant, but it messed it up.

[deleted other equation trying to approximate Degenr8's from Jacob's that is wrong]

@zawy12 commented Jan 4, 2018

Jacob's EMA:

a=alpha=1-e^-x
x=t/T/N
t = previous SolveTime

next D = D*( a*T/t + (1-a) )
           = D*( T/t + e^-x*(1-T/t) )

[deleted my approximation for degener8's equation because it did not work]

When Jacob's N is set to dgenr8's recip_alpha, I'm not sure I can see a difference in the algorithms.
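A side-by-side numeric sketch (my own, with T=600 and Jacob's N set equal to dgenr8's recip_alpha) supports this: the per-block difficulty factors are nearly identical for ordinary solvetimes.

import math

T, N = 600, 100
def jacob_factor(t):                   # next_D / D for Jacob's EMA as written above
    a = 1 - math.exp(-t / T / N)
    return a * T / t + (1 - a)
def wtema_factor(t):                   # next_D / D for wtema (inverse of the target form)
    return N / (N + t / T - 1)
for t in (150, 300, 600, 1200, 3600):
    print(t, round(jacob_factor(t), 5), round(wtema_factor(t), 5))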

@zawy12 commented Jan 4, 2018

These charts show the EMAs are indistinguishable. The charts are 4000 blocks. The red is the hashrate being applied, normalized to the difficulty. T=1, D=1, and HR = 1 most of the time. I included my modification of WT-144 for reference (it says N=52 but it's really N=60). Notice how the WHM does not vary from the correct difficulty by more than 25% very often, so miners will not attack it much. Also notice it's a lot faster in case miners do come on strong or leave (last image).

To get an EMA that acts like a WHM (or WT), use an N half as big. So N=30 to 50 for these EMAs is what should be used if T=600 seconds. N=104 is fine if the solvetime is T=150 seconds. N=104 is like N=208 for the WT and WHM, so it's far from ideal in response rate. Again, the thing consistently being missed is that price changes are not modeled in the code. The lines in the first image below are "crazy flat". There's no need to make difficulty that smooth. The statistics look good, but that's irrelevant. Again, look at the awesome live data from the super-small Masari coin using N=60 for WHM. Super-small means it has to deal with much higher hashrate changes, and it still beats the pants off the N=144 SMA. And if the same results are desired at T=600, N has to be even smaller than 60, while the thing being promoted here is effectively N=208. It may be that Masari would do even better with N=100, which implies about N=50 to 80 for WHM, which means N=25 to 40 for these EMAs. All these statements are supported by the links above.

[charts: _wtema1, _wtema2, _wtema3]

@zawy12 commented Jan 5, 2018

The relationship between the 2 EMA's is very reminiscent of how the harmonic mean is related to the arithmetic mean:

Jacob's integerized EMA:
next_target = target  / (N-t/T+1) * N
Degener8's EMA:
next_target = target * (N+t/T-1) / N 

I can make Jacob's avg ST more accurate with e^-x in place of the approximation, or by extending the power series of the approximation one more term, but I can't find a way to use an e^-x to correct the small avg ST error in dgenr8's. Jacob's EMA hints that there is a better theoretical version of dgenr8's, or that Jacob's is the theoretically better one.
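The harmonic-vs-arithmetic flavor shows up numerically: the two approximate forms differ only at second order in (t/T - 1)/N (my sketch):

T, N = 600, 100
for t in (60, 300, 600, 1200, 7200):
    je = N / (N - t / T + 1)           # Jacob's integerized form, evaluated as a float
    th = (N + t / T - 1) / N           # dgenr8's form
    print(t, round(je, 6), round(th, 6), round(je / th - 1, 6))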

@jacob-eliosoff (Contributor) commented:
Do you run any tests where, say, 90-99% of hashrate instantly disappears for a few days? This is not totally unrealistic - eg, after a HF one or the other branch could be in this state. This is the main situation I can think of where the difference between a "block-based" EMA (like wtema) and a "time-based" EMA (like emai) could be material.

@zawy12 commented Jan 5, 2018

Good catch. These EMAs are not nearly as good as WT and WHM for raising difficulty during an extreme hashrate increase. But they come down as quickly, which is really important. Yours with the integer version "craps out" sometimes if no other timestamp protection is used, as shown below. Red is actual hashrate. The WT looks better than WHM here, but it was the reverse situation in the everyday testing I did. I'm going to reconsider it. The WHM gets a "jump start" on rising, which is important, but then it seems more reluctant to reach the final correct value. The WT seems to drop better, which is really important.

The N for both EMAs is 30, because at N=104 they had no chance of competing. Notice the WT and WHM have N=2x30, because that is when they act most like the EMAs.

[chart: _wtemab]

Your hash-rate integerized-EMA is not fooled by bad timestamps that try to raise difficulty, but it is terribly subject to attempts to lower difficulty, which is not good. (If no other protection is used, dgenr8's solvetime-EMA rises in both cases, which is a better situation.) But it seems timestamp protection is needed in both, especially if a smaller N is used.

The first hump is with timestamps from a 50% miner always assigning the MTP, 6xT into the past, which causes an honest miner to produce an apparent solvetime of +7xT in the next block. The second solvetime-EMA hump is from a 50% miner always assigning +12xT, which causes the next apparent solvetime to be -10xT. Neil's timestamp protection might solve the problem in both cases, but as I said above, for the smaller N that T=600 seconds needs, my method 2 in the link above is desperately needed, if miners figure it out. A better alternative to the complexity of method 2 is to allow the negative solvetimes and enforce block_time limits of +6xT and -4xT. This will not degrade performance under extreme conditions, allows the solvetime-EMA to be most accurate under bad timestamps, and allows small N.

[chart: _wtemaa]

So I would make the algorithm:

block_time = states[-1].timestamp - states[-2].timestamp
block_time = max(-4*IDEAL_BLOCK_TIME, min(6*IDEAL_BLOCK_TIME, block_time))  # limit to the -4xT .. +6xT range
wtema_target = bits_to_target(states[-1].bits)
wtema_target = wtema_target // alpha_recip * (block_time//IDEAL_BLOCK_TIME + alpha_recip - 1 )

I would like to see the code clarified with different variable names. Also, remember there is an error in this algorithm as N gets smaller. IDEAL_BLOCK_TIME needs a new value depending on N: 0.5% smaller for N=100 and 2% smaller for N=30. So in initialization, the BT equation below should be used to get the correct average solvetime. The same equation I found for WHM works here.

# initialization:
BT=floor( 0.5+IDEAL_BLOCK_TIME*pow(0.9989,480/N))
....
solvetime = states[-1].timestamp - states[-2].timestamp
solvetime = max(-4*IDEAL_BLOCK_TIME, min(6*IDEAL_BLOCK_TIME, solvetime))  # limit to the -4xT .. +6xT range
new_target = bits_to_target(states[-1].bits) // N * (solvetime//BT + N - 1 )

This is my current equation for the best value of N based on the target solvetime:
N=40 + int(15*log(600/T)/log(2))
That's for WT, WHM, and SMAs. Divide by 2 for these EMAs which would be N=20 ("recip_alpha") for T=600. That's just my data from live coins and theory. N=40 might be safer as a lower limit for T=600 coins.
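Tabulating that suggestion for a few target solvetimes (my sketch; the last column halves N for the EMAs as described):

import math

for T in (120, 150, 600):
    N = 40 + int(15 * math.log(600 / T) / math.log(2))
    print(T, N, N // 2)    # T=600 gives N=40 for WT/WHM/SMA, ~20 for the EMAs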

@zawy12 commented Jan 5, 2018

Correction: Jacob's is an EMA of "needed difficulty" while dgenr8's is an EMA of "needed target". The needed targets are weighted, but "weighted" is part of the definition of an EMA. I can't find a way to use an e^x to get the EMA-T to have an exact solvetime like the EMA-D.

@zawy12 commented Jan 5, 2018

Using the simplified versions above of the EMA-T and EMA-D, setting them equal to each other, and solving for t/T gives 1 (where t=solvetime and T=600 seconds). I believe this means that, on average, if they are doing their job of keeping an average t = T, then they are the same equation. [edit: I do not mean to say they are literally the same equation. They are probably the same equation on average.]

@zawy12 commented Jan 6, 2018

I've completed my additions to this in order to select N, correct the avg solvetime, and handle bad timestamps. I will recommend this to Karbowanec, Sumokoin, ZGLD, BTGP (both Platinum and Private), BTG, and Masari. Masari will push it to Monero / cryptonote for other clones. I think Masari will employ it first, which is good for comparison to my WHM version of WT, which has done awesome.

zawy12/difficulty-algorithms#17

@zawy12 commented Jan 7, 2018

I finally found the idealized equation for EMA-je and EMA-th that shows they are equal within 0.02% for most solvetimes, and equal to within 0.005% on average.

b = T/t 
ema-JE = prev_target / ( b+(1-b)*e^(-1/b/N) )
ema-TH = prev_target * ( b+(1-b)*e^(+1/b/N) )

Substitute e^x ≈ 1+x to get the "integer" EMAs in the python code.
Substitute e^x ≈ 1+x+x^2/2 and no correction for solvetime is needed for any N, so there's no reason to expand e^x further than this.

The e^x ≈ 1+x+x^2/2 substitution needs 2 sign changes after the inversion and term expansion:

k=t/T/N
ema-JE = prev_target / (1+1/N+k*(k/2-1-1/2/N))
ema-TH = prev_target * (1-1/N+k*(k/2+1-1/2/N))

I think the following EMA code is best, since it keeps the solvetime accurate for all N. Does it qualify as "integer math"? The timestamp protection is symmetrical, and the 10xT limit does not slow down D recovery much, even after the hashrate drops to 1/50th.

ST = states[-1].timestamp - states[-2].timestamp
T = IDEAL_BLOCK_TIME
ST = max(-7*T, min(8*T, ST))   # symmetrical limits of -7xT and +8xT
prev_target = bits_to_target(states[-1].bits)
# scale everything up by (T*N)^2 to allow math with integers
next_target = prev_target * ((T*N)**2 - T**2*N + ST*(ST + 2*T*N - T)//2) // (T*N)**2

The solvetime has to be < -1000xT for the denominator to go zero or negative for N>1.

The 1st pair of equations above can be re-written in standard EMA form:

EMA =  A*p1 + (1-A)*p0

where:
alpha = A =  1-e^x
ema-JE: x = -t/T/N
ema-TH: x =  t/T/N
  
p1 = "next stock price estimate" = prev_price * T/t
p0 = "previous stock price" = prev_price

EMA-JE: replace "price" with "difficulty"
EMA-TH: replace "price" with "target"

@jacob-eliosoff (Contributor) commented Jan 7, 2018

I did more testing and digging. I'm at risk of writing a very long post so here's the upshot:

  1. wtema (aka EMA-TH) is a better approximation of ema (aka EMA-JE, mostly identical to ema2) than emai is.
  2. So I recommend wtema, except I still think ema2's handling of "negative block times" may be better.

More details - happy to expand on any:

  1. wtema is a better EMA approx than emai because emai strongly overreacts to very slow blocks. Eg, suppose we're chugging along and suddenly hashrate plummets to 1/144 of its old value, so that avg block time leaps from 10 min to 1 day:
  • My ideal EMA formula (basically ema2-1d) responds by decreasing difficulty by 2.69x.
  • wtema-100 decreases difficulty by 1.99x - slightly conservative, but fine.
  • emai-1d slashes difficulty by 144x (ie, effectively ignores everything before the last day). I think this is an overreaction and too likely to lead to oscillating difficulty on occasional slow blocks. (A numeric sketch of these three factors follows at the end of this comment.)
  2. I prefer ema2's handling of negative block times. To me, the right way to handle a timestamp sequence like:
    (100000, 100600, 123, 101800)
    is to handle it exactly the same as:
    (100000, 100600, 100600, 101800)
    ie, just treat a "negative block time" as a misreported "0 block time". (There is really no such thing as a "negative solvetime": this is a data cleanup problem, not a theoretical math problem.) Similarly, the right way to handle timestamps like:
    (100000, 100600, 102500, 101800, 102400, 103000)
    is to treat them like:
    (100000, 100600, 102500, 102500, 102500, 103000)
    ie, timestamps are floored at the highest previous timestamp, leading to some 0 block times until new timestamps catch up. This is what ema2 does.

  3. Don't compare wtema-72 directly to ema[i2]-1d; instead compare wtema-100 (alpha_recip 144, rather than wtema-72's 104). Otherwise you're just comparing different window sizes. I still defer to @zawy12 and the rest of you on whether the best window (responsiveness) is 1d, 0.5d, ...

  4. wtema and the emas actually completely diverge for very slow blocks. Eg, after a million-minute block, wtema-100 reduces difficulty by 13x, whereas ema2-1d reduces it by 1640x! (Specifically, wtema's factor approaches minutes/86400 for infinitely-slow blocks, whereas ema2's approaches minutes/600.) Fortunately - who cares? The two are close for block times less than a day or two (eg, 2.69x vs 1.99x for 1 day - see #1 above), leave million-minute blocks to the theorists.

  5. ema, ema2, emai, and wtema are essentially all just functions from (block_time/IDEAL_BLOCK_TIME) to (new_target/old_target). A big virtue of all these algos (compared to some of the old ones) is that they rely on so few inputs - eg, the (new_target/old_target) output factor is independent of the input target: pass in a target 1000x bigger and you'll still get the same factor. So you can just plot the function above for each algo, a handy way to compare their behavior without running a zillion test cases. (Though you do have to be careful to set them up with equivalent internal window/alpha parameters, hence #3 above.)

  6. ema/ema2 (basically identical) are still the most mathematically exact form of my original EMA reasoning: "Infer a hashrate-during-block estimate from each block time, estimate recent hashrate as an EMA of those hashrate inferences, assume current hashrate matches that recent-hashrate estimate, and calculate the difficulty/target that yields a 10-minute avg block time under that hashrate." If integer math didn't matter, I'd still advocate ema2 as slightly preferable to wtema: specifically it cuts difficulty a little more aggressively after very slow blocks (see 2.69x vs 1.99x under #1 above). But the difference doesn't seem material, wtema seems fine.
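As promised in item 1, a small sketch reproducing those three factors; the formulas are my reading of each algo (T = 600 s, a one-day window W = 86400 s, wtema-100's alpha_recip = 144), not code from this repo:

import math

T, W, alpha_recip = 600, 86400, 144
t = 86400                                     # one very slow, day-long block

a_exact = 1 - math.exp(-t / W)                # ema/ema2-style alpha
ema_D_factor = a_exact * T / t + (1 - a_exact)
print(round(1 / ema_D_factor, 2))             # ~2.69x difficulty cut

wtema_target_factor = (t / T + alpha_recip - 1) / alpha_recip
print(round(wtema_target_factor, 2))          # ~1.99x difficulty cut

a_linear = min(1.0, t / W)                    # emai's linear approximation, saturating at 1
emai_D_factor = a_linear * T / t + (1 - a_linear)
print(round(1 / emai_D_factor))               # 144x difficulty cut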

@zawy12 commented Jan 7, 2018

I corrected the exact algorithm above to allow integer math. If you use it, then you don't have to pick between any of the EMAs.

But I've decided WHM is the best out of the EMAs, Digishield v3, and WT. These charts show why. But I need to try WT with a higher N than WHM because WHM may act slower than WT which gives it an unfair advantage in the metrics (it's hard to remind myself of not falling for the large N trap when the large N is hidden in the algo). These EMAs are much larger N's than they appear.

Charts: stability, and under typical hash attack. The smoother line of WHM implies it's cheating with a larger effective N, so WT might be best.

[charts: stability; typical hash attack]

@zawy12 commented Jan 7, 2018

If WT has an overflow problem you just scale it down at each step in the loop by N, N*(N+1), or maybe even N*(N+1)*T and multiply by the same factor at the end.

@jacob-eliosoff (Contributor) commented:
Which is the implementation that reproduces ema/ema2 using only integer math? Have you verified that for very long block times, it multiplies target by ~minutes/600, like they do?

@dgenr8 (Author) commented Jan 8, 2018

@jacob-eliosoff Your points 1 and 4 are what I was referring to in #32 (comment). In your next_bits_ema function, alpha is not constant but a function increasing in the current solvetime. Sorry I'm not sure which version of ema this is -- I suspect it's not ema2. Where is ema2?

Your point 2 is how fixed-window WT dealt with negative times, out of necessity because it built up the new target from scratch instead of simply working from the prior target as these ema algos do.

@jacob-eliosoff (Contributor) commented Jan 8, 2018

Ah yeah sorry ema2 isn't in the main branch yet: see #34. It's almost the same as the old ema except it adds the negative block time handling from #2 above. (Whereas emai is my integer-math linear-approx version of ema, but I like wtema better than emai, apart from the negative block time handling.)

@zawy12 commented Jan 8, 2018

I can't get WT to work better than WHM. WHM is the best. It responds faster than WT and EMAs to hash attacks and yet does not encourage more attacks by going too low on accident. When the MTP delay in Digishield v3 is removed to improve it, it ties WHM in response speed + low random variation, but it has double the delays of WHM.

Bad Timestamps
Jacob's EMAs and WT-144 are using
block_time = max(min(IDEAL_BLOCK_TIME, window) // 100, states[-1].timestamp - states[-2].max_timestamp)
but it would be better to allow negative solvetimes to pass through like wtema. Bad timestamps (ahead or behind) throw it off for one block, but it immediately corrects in the next block that has a good timestamp, as long as the negative is allowed to erase a bad positive. What you're doing is close enough and fine, except that if a lower N is used, in the range that is experimentally observed to be better (see below), then it enables an exploit when a miner comes in and assigns the max-ahead time.

Say a 20% hashrate miner comes on and always assigns the max allowed timestamp. This is a best-case scenario: miners who do timestamp manipulation are typically 2x to 10x a coin's baseline hashrate if the coin is not the largest coin for that POW. This also assumes they use BTC's 12xT max forward time instead of what Monero clones default to (24xT).

Timestamps, if every solve is magically 1xT (so the conclusion will be the average case):
1,2,3,4,(5+12),6,7,8,9,(10+12)...

The method WT-144 and EMA-je are using give reported solvetimes:
1,1,1,13,0,0,0,0,5,0,0,0,0,5,..repeat
Ignoring the 4 startup blocks, avg ST is 1, which is what it should be. The WT-144 with N=60 (which is like EMA's with N=30) will end up at about 22% below the correct difficulty for as long as the miner does this. The ST=13 triggers a 30% drop in both, so the miners get the first 10 blocks cheaper. I showed an improvement to this algorithm because some devs refuse to allow negative solvetimes (see method 3 here) but it's still not a good thing because my fix slows the response down in some circumstances.

So the EMA-je and WT code enables a timestamp exploit that gives 25% extra profit (the Price/Difficulty ratio) for 20 blocks. The 25% is extreme because BCH sees 3x more hashrate if the P/D ratio falls by 25%, and its slow response is why it has longer delays than a good algorithm (see below). In Zcash clones with Digishield v3, miners get 20 blocks 3 times a day when the P/D ratio rises above 25% on accident.
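A small sketch reproducing the reported-solvetime sequence above: true solvetimes are all 1xT (taken as 1 unit), every 5th block's timestamp is pushed forward by 12, and negative times are removed by flooring each timestamp at the highest previous one (my reconstruction of the rule being discussed):

timestamps = [h + (12 if h % 5 == 0 else 0) for h in range(1, 21)]
reported, prior = [], 0
for ts in timestamps:
    ts = max(ts, prior)              # floor at the highest previous timestamp
    reported.append(ts - prior)
    prior = ts
print(reported)   # [1, 1, 1, 1, 13, 0, 0, 0, 0, 5, 0, 0, 0, 0, 5, 0, 0, 0, 0, 5]
# after the startup blocks each 5-block cycle averages 1xT, as stated above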

Selecting N
Slow response to price changes is what is hurting BCH. With N=144, the D rarely falls low on accident, so random variation is not the problem as it is with low N. BCH sees problems when price (+fees) rises 10% at the same time the D falls 10%. About the time the D catches up with this (half a day), the price may also correct (not to mention those opportunistic miners reacting to short-term changes are probably selling their coins), so suddenly the P/D is normal again and the hashrate drops to 1/2 when they leave, causing delays.

In Zcash digi v3 the price changes don't help. But the random variation is the main problem, combined with using MTP to handle bad timestamps which gives away the first 6 blocks without adjusting. Zcash has 1/2 the delays and "stolen blocks" as BCH. It would do even better if MTP was not being used, which is an option thanks to being able to accept negative solvetimes. Digi v3 with the N=17 is like a SMA, WT, and WHM with N=60. Due to Zcash having 1/4 the blocktime, BCH would need a smaller N to get equal results, although there is a penalty to not having access to as many data points per real time, so it can't do as well unless it has a better algorithm combined with lower N.

Summary: N too low suffers from random variation in difficulty and N too high suffers from variation in price (being faster than D).

More data to show lower than N=60 is needed for T=600 coins: I got 3 coins to use N=17 with SMA. Their delays and hash attacks have been 3x worse than BCH's DAA. They have T=150 seconds, which in BCH's T=600 terms is like N=17*(150/600)^0.3 = 11. So N=11 is 3x worse than N=144 because of variation in D instead of variation in price. These are the tail ends of a hump whose peak N we want, converted to an N for WT and an N for EMA. Based on this I selected my modified version of WT (WHM) to be N=60 for Masari's T=120. It has done great. And its problems from attackers were much worse than the other 2 coins using my N=17.

The first 2 charts below compare Zcash and HUSH. They have the same digi v3, and HUSH is doing much worse, ostensibly because it is 1% the size of Zcash. Masari is doing better than both of them and it's a LOT smaller than even HUSH, so it should have the worst problems of all, and its history showed it was in a terrible situation. And yet it has 4x fewer delays and nearly 4x fewer stolen blocks than HUSH (although HUSH does not always perform this badly). It even beats Zcash by a wide margin in delays, when Zcash was stable in hashrate ... all while Masari is a micro-coin and its difficulty varied. BTW, some are crediting the new difficulty algorithm for Masari's doubling in price, which is what is causing the difficulty to rise. Masari's colored spikes make it look worse, but because it reacts fast, each of those delays and attacks is very brief compared to BCH and the others.

[chart: clipboard01]

Since Masari has T=120, the maximum BCH can be to be as good is N=60*(120/600)^0.3 = 37. See this article.

@zawy12 commented Jan 8, 2018

So BCH should use WHM with N=40 and employ method 6 timestamp handling (limit +8xT and -7xT from previous timestamp).

@dgenr8 (Author) commented Jan 8, 2018

@zawy12 I completely agree that slow response is the biggest problem with the current BCH simple moving average. The combination of a price and difficulty change is detrimental; in addition, predictable difficulty changes are self-reinforcing. This is gameable and may explain why BCH is not more profitable than BTC anywhere close to the expected 50% of the time.

The shape of the weight curve in any exponential moving average is steeper than linear with equivalent half-life. So how do you conclude that WHM is faster-responding than anything ema-based?

@zawy12 commented Jan 8, 2018

In trying to answer your question, I realized the N in the EMAs is actually an honest-to-god "half life". So a "full life" is 2*N. So for evermore, let's throw a 2 into the equations to correct for this so others do not make a mistake in the direction we were headed. That is, you started investigating N=1/2 of 144 which makes me say "yippee" and Jacob and I showed it's really an N=100. But that's only a half-life, so it's not N like the SMA or WT. It's N=200 which I think we agree is not good (although it may equal performance of SMA with 144)

Compare your EMA approximation and WT with N=2, when the previous block saw a t=T/2 solvetime.
next_target = 1 * (1+T/2/T-1/2) = 1
next_target = 1 * (T/2/T*2 +T/T)/(N*(N+1)/2) = 2/3
hmmm, it's kind of interesting the EMA approximation does not respond to the first fast solvetime. If I make N=1 for the ema approximation to WT's N=2, then it's 1/2, so it seems faster if we use the correct N. But the data indicates EMA with N = 1/2 of WT N is about the same as WT. And that's what I use in comparisons. In the image below, the attack lengths are 60 blocks.

[chart omitted]

I use admittedly inflammatory language by saying "hash attack" and "blocks stolen". But it's only because "opportunistic mining" does not portray what small coins experience, and "blocks acquired at cheap difficulty" is too cumbersome. So, I want to object to your viewpoint that miners are, or can be, actively damaging beyond simple, blind profit motive. You would be correct with "self-reinforcing" or "gaming" if there were oscillations in any of our algorithms. I see references here to oscillations and "ringing", but the tests I run on them try to make this happen and I can't. That was my fear with EMA and WT at first. They predict the currently needed difficulty and no more. Trying to extrapolate to future blocks, like Hull's moving average does, is where trouble can begin.

I think BCH can only be improved a little. But some days people will be waiting 4 hours when they really really needed that $10k in less than 2 hours and were planning on it.

I don't think the difficulty has anything to do with the profitability problem (of which I have no knowledge). That kind of problem can occur at most at the 25% level if difficulty is swinging up a lot from big miners jumping on and off, which isn't happening here. Miners jump on for 1/3 of the N and then leave other miners with higher difficulty. But here, if they are doing it, they are only gaining about 5% if they jump on when it is accidentally low, and then leave others with only 5% accidentally high. Any 50% is from something else.

@zawy12 commented Jan 8, 2018

I can't believe N=37 is safe for BCH. It's too much variation above 20% that would invite attacks. Here's a different line of reasoning to argue for the highest N. The charts above show BCH can do at least twice as good. Half as many delays and half as many blocks stolen are possible. Random variation is not hurting BCH. Delays and blocks stolen are proportional to N (due to linearly slower response) when random variation is not the cause. Therefore N needs to be 72 with the given SMA. WHM is a lot better in responding faster, but it also touches upon more random variation for a given N. So it should do better than cutting delays and blocks stolen in half with WHM and N=72.

Looking at N=60, I see 4 times a week it would drop to 80% of the correct difficulty, inviting more hashers. Will price+fee dynamics happen as often? That is the goal: we want to attract miners with random variation in difficulty as much as they come due to price changes. Otherwise, it's not responding as fast as we want to price changes. I suspect price+fees vary more than this, so I believe this is higher than it should be.

N=60 seems to be an excellent balance between my fear of random variation attracting miners and getting a fast response to price changes. N=37 had about twice as many drops over 80%, which is still within reason as to what price+fees is doing (and also hashrate competition varies).

@zawy12 commented Jan 8, 2018

So here's my best algorithm for T=600 coins, and it sure is working well for a T=120 coin.

WHM difficulty algorithm
# height - 1 = most recently solved block number
T = target solvetime = 120 to 600 seconds
N = 60
adjust = 0.9989^(480/N)   # ~0.986 for N=60
k = (N+1)/2 * adjust * T

sumTarget = 0, wt = 0, j = 0
for ( i = height-N; i < height; i++ ) {
    solvetime = timestamp[i] - timestamp[i-1]
    if solvetime >  (7+1)*T then solvetime =  (7+1)*T
    if solvetime < -(7-1)*T then solvetime = -(7-1)*T
    j++
    wt += solvetime * j          # recency weight
    sumTarget += target[i]
}
# Keep wt reasonable in case strange solvetimes occurred.
if wt < N*k/3 then wt = N*k/3
# Normalized so that solvetimes averaging T return the average target.
next_target = wt * sumTarget / (k * N * N)
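For anyone who wants to drop this into the simulator, a hypothetical port of the pseudocode to the conventions used elsewhere in this thread (states, bits_to_target, target_to_bits, IDEAL_BLOCK_TIME); it is only a sketch of my reading, not tested code from zawy12:

def next_bits_whm(msg, block_count):
    N = block_count                         # e.g. 60
    T = IDEAL_BLOCK_TIME
    adjust = 0.9989 ** (480 / N)            # solvetime correction, ~0.986 for N=60
    k = (N + 1) / 2 * adjust * T

    sum_target, wt = 0, 0
    for j, i in enumerate(range(-N, 0), start=1):
        solvetime = states[i].timestamp - states[i - 1].timestamp
        solvetime = min(solvetime, (7 + 1) * T)    # cap at +8xT
        solvetime = max(solvetime, -(7 - 1) * T)   # floor at -6xT
        wt += solvetime * j                        # linear recency weight
        sum_target += bits_to_target(states[i].bits)

    wt = max(wt, N * k / 3)                 # keep wt reasonable if solvetimes were strange
    # normalized so that solvetimes averaging T reproduce the average target
    next_target = int(wt * sum_target / (k * N * N))
    return target_to_bits(next_target)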

@jacob-eliosoff (Contributor) commented:
The "window" parameter to the ema algos is not the half-life: it's the simple moving avg "window" they aim to match. Long topic but the short of it is, the half-life is ln(2) (0.69) * the window: so for ema2-1d, with a window of 1 day (24*60*60), the half-life is ln(2) * 1 day = 16h38m.

More generally, it's hard to compare these algos, given that they all have "responsiveness"/"window" parameters and the parameters sometimes have different meanings. I have three suggestions on how to compare them:

  1. Let's specify the responsiveness for one case explicitly, and people can back out the parameter setting needed to match that responsiveness. Then we can compare apples to apples. Eg: "Please set up your algo so that, assuming a 10-minute ideal block time, when a 4-hour (24x too slow) block is mined, the algo increases target (cuts difficulty) by 17.25%." (Or choose different numbers - 17.25% is just what ema[2]-1d does.) Then maybe we'll see that different algos matching that constraint respond differently to very fast blocks, etc.

  2. As I described in my #5 above, these algos are essentially just x->y functions mapping (block_time/IDEAL_BLOCK_TIME) to (new_target/old_target). We can plot them to see the shape differences (if any - if two fns give the same curve, then implementation aside, they're the same thing!). This combined with standardizing responsiveness as above should cut through some of the noise from conflicting test results etc. I could help make the charts.

  3. This leaves aside the issue of negative block times (etc), but I think that should be handled separately. @zawy12, I see what you're saying about the 5-0-0-0-0-5-0-0-0-0... case, I'll need to take a closer look. But anyway I think how bogus data is handled should be a separate question from the main feature of an algo - the shape of its curve above.

@zawy12 commented Jan 8, 2018

Yeah, I guess it was silly to hope the EMA taking about 2xN to reach the correct difficulty after a hashrate change was meaningful. It takes over 2xN for the EMA to fully correct to a step signal. It starts off good, but then turns skeptical well before it reaches the goal. And WHM / WT get up there faster than SMA.

As far as differences in the EMAs, I don't think we should say there is any. I can't see any in all the plots I've done except the solvetime is perfect when the exact equation is used. What kind of differences are you looking for when the individual solvetime values are this close?

[chart omitted]

@zawy12 commented Jan 8, 2018

Again, here are the simplest forms of the equations:

Exact:
b = T/t 
ema-JE = prev_target / ( b+(1-b)*e^(-1/b/N) )
ema-TH = prev_target * ( b+(1-b)*e^(+1/b/N) )
Approximate
ema-JE= target  / (N-t/T+1) * N
ema-TH = target * (N+t/T-1) / N 

We've just been looking at a case of

(1-x) =~ 1/(1+x) when x is small.

@dgenr8 (Author) commented Jan 9, 2018

@zawy12 Will you implement your WHM in this simulation (the one maintained in this repository)? Is your simulation available for others to run?

@zawy12 commented Jan 9, 2018

For more direct comparing to your wt, it needs 4 line changes :

def next_bits_wtz(msg, block_count):
    # ##### = changes to wt
    first, last  = -1-block_count, -1
    timespan = 0
    target_i = 0   # initialize the accumulator, since targets are now summed (added)
    prior_timestamp = states[first].timestamp
    for i in range(first + 1, last + 1):
        target_i += bits_to_target(states[i].bits)  #### added "+"
        
        # Prevent negative time_i values
        timestamp = max(states[i].timestamp, prior_timestamp)
        time_i = timestamp - prior_timestamp
        prior_timestamp = timestamp
        adj_time_i = time_i # Difficulty weight  #### removed *target_i
        timespan += adj_time_i * (i - first) # Recency weight
        
    timespan = timespan * 2 // (block_count + 1) // target_i  #### added  //target_i
    target = timespan // (IDEAL_BLOCK_TIME) ### removed *block_count
    return target_to_bits(target)

@zawy12 commented Jan 9, 2018

If we can concur on WHM with N=60 and the bad-timestamp handling, it would be great. Then it needs to be converted to BTC code. There's resistance to using signed integers (i.e. allowing negative solvetimes) for some reason.

I called the above WTZ because it does not have the same timestamp handling and adjustment as WHM. They are the same if there are no bad timestamps, although WTZ will be closer on avg solvetime.

I'm running mining.py for the first time to try to code WHM. Is 15 seconds for 20k blocks normal? Excel does 90k blocks (9 algorithms) with 9 charts printed side-by-side in half a second. Would printing to a file be faster? How do I code that?

     solvetime = timestamp[i] - timestamp[i-1]
     if solvetime >  (7+1)*T  then solvetime =  (7+1)*T
    if solvetime < -(7-1)*T then solvetime = -(7-1)*T

@zawy12 commented Jan 9, 2018

In order to evaluate an algorithm, only 2 scenarios are really needed: the ideal case of steady-state hashrate with no random variation, and worst-case step functions. Weird ideas need to be checked against a forever-ramping function. Miner behavior can be accurately modeled with a scenario that triggers a step function of size S when D drops x% and ends the step function when D rises y%.

Your metric of the inadequacy of the algo is sum( HR/D if HR > D else D/HR) where D and HR baseline are scaled to 1.
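One way to code that metric (my sketch; HR and D are per-block series already normalized so each baseline is 1):

def inadequacy(HR, D):
    # every block contributes the ratio of the mismatch, whichever direction it goes
    return sum(hr / d if hr > d else d / hr for hr, d in zip(HR, D))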

How do I do a steady state (no random variation) and a step function in mining.py?

@jacob-eliosoff (Contributor) commented Jan 9, 2018

A couple of charts below comparing how different block times affect target (1/difficulty) under different algos: wtema, the various emas, and simpexp. simpexp is a new one I just added with a very simple rule:

target *= e**((block_time-IDEAL_BLOCK_TIME) / window)

simpexp is motivated by @zawy12's argument that the algo should just accept negative block times, but ensure that their effect on difficulty is offset by the subsequent large-positive. simpexp has this property - from the comments:

Successive block times of (-1000000, 1000020) (or vice versa) result in *exactly* the same target as (10, 10).
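A quick check of that property (my sketch): simpexp's factor depends only on the sum of the block times, so the huge negative and the offsetting positive land on the same target as two ordinary blocks, up to floating-point rounding:

import math

IDEAL_BLOCK_TIME, window = 600, 86400
def simpexp(target, block_time):
    return target * math.exp((block_time - IDEAL_BLOCK_TIME) / window)

print(simpexp(simpexp(1.0, -1000000), 1000020))   # same value (to rounding)
print(simpexp(simpexp(1.0, 10), 10))              # as the pair (10, 10)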

These charts convince me of a few things:

  1. For block times >0 and <12h or so, these algos are all basically the same.
  2. @zawy12 is right: simply accepting negative block times, as wtema and simpexp do, is probably a cleaner, less gameable approach than ema2's (timestamp - max_prev_timestamp).
  3. Up to block times >12h (for which simpexp cuts difficulty much more aggressively) - including negative block times, at least down to -2h - wtema and simpexp are basically identical. So unless that >12h case is important, wtema's clean integer math makes it my top pick (at least until I see evidence of cases WHM or something handles better.)

[charts omitted]

@zawy12 commented Jan 9, 2018

My post above shows WHM N=60 has better metrics to protect against miner behavior than even a "fast" EMA N=25 while keeping smoother difficulty. What makes better evidence?

@jacob-eliosoff (Contributor) commented:
I understand that you've put months and months into comparing these. But simplicity counts for a lot too. wtema is literally 2 lines. What are the specific attacks that WHM (with its adjust = 0.9989^(480/N) etc) handles significantly better than wtema? Is it certain that wtema (or simpexp - 1 line!) couldn't handle them similarly well just by giving it a different responsiveness setting (alpha_recip)?

Maybe if you can add WHM and the attacks you have in mind here, the importance of choosing it over something like wtema will be clearer.

Just my 2c! I don't make the decisions around here...

@zawy12 commented Jan 9, 2018

We all make decisions that influence.

adjust = 0.9989^(480/N) is just a constant that keeps a more accurate solvetime for different N.

The post I linked to describes the attacks. I am unable to code the python well enough to model them. I can't even figure out how to change algos from the default to test a WHM. I can't do a pull request if I can't test it, and I don't even know how to do a pull request. But I've given the code above if someone wants to do it. After that, we still have a metrics issue in defining "best". You would have to read my articles to see the justification. The same thing goes for selecting the +/- timestamp limits and N.

I selected N based on not wanting to see the difficulty drop below 80% of the correct value too often. That's based on the live data shown below. Not wanting to "do better" by making the limit 90% with a higher N is a very complicated subject that I covered in an article, and it's based on looking at other charts that are not as obvious as the ones below. The timestamp limit justification is easier: I don't want coins to see a 10% to 50% drop in difficulty from a single bad timestamp, and +8xT was demonstrated above to allow recovery even from 50x miners leaving.

[charts omitted]

@zawy12 commented Jan 9, 2018

Yes, I've tested all kinds of variations to come to the conclusion WHM N=60 can't be beat with another algorithm with different settings. I can see the charts for 5 different algorithms using 5 different settings in 30 seconds.

Yes, simple code is nice, but if the more complicated code can be done safely, small coins really need it. If you want simple code for safety reasons, then you would choose Digishield v3 without the MTP delay over the EMAs, because it's nearly as good and is used a lot in live coins.

Digishield v3 is
next_target = avg(prev N targets) * (0.75*T + 0.25*avg(prev N STs)) / T
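As a sketch of that tempering (my reading, leaving out the MTP delay and the asymmetric clamps real Digishield adds):

def digishield_v3_target(prev_targets, prev_solvetimes, T):
    avg_target = sum(prev_targets) / len(prev_targets)
    avg_st = sum(prev_solvetimes) / len(prev_solvetimes)
    tempered_st = 0.75 * T + 0.25 * avg_st    # only 1/4 of the measured deviation is applied
    return avg_target * tempered_st / T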

Here's an example of why low N is good. (and why comparing like you say is important)

[chart omitted]

@jacob-eliosoff (Contributor) commented:
That last example shows that WHM with N=37 responds more quickly to short block times than EMA with N=100. But we could just reduce EMA's N to make it equally responsive, right? If we do that - find the N for EMA that makes the two equally responsive in the above scenario - then what is the advantage of WHM? Perhaps there's some other attack that, of those two equally-responsive-to-the-above algos, WHM handles better?

@zawy12 commented Jan 9, 2018

Sorry, I accidentally had N=37; the WHM comparison was an afterthought. My main point was that the standard Digishield v3 is not so far behind the EMA, and even better if the wrong N is in place.

I found something almost twice as good as the WHM. I first remembered to test my dynamic EMA, and it beats WHM. Then I combined a low-N WHM with Digishield's method of tempering a low-N SMA. It tied the dynamic EMA. I could not get a tempered EMA to do better than EMA. Then I made the tempered WHM switch to a low-N EMA if a high/low statistical event occurs. This is an especially exciting idea because it efficiently combines the best of each idea in a way that's ideally suited for our needs. We want a fast initial response, without overshooting. This is important because we know miners will leave in droves if it overshoots. Tempered WHM does this, and it works great.

But we can also see when a big miner suddenly comes on or off by sudden solvetime changes within 5 blocks. That's a yes/no event in our situation, so a yes/no coding is justified, if it's symmetrical and does not overshoot. EMA is perfectly suited for this because the statistical trigger is based on past data. I know from experience with dynamic SMAs that just switching to a lower N for my averaging will not have a net benefit, for complex reasons. But the jaw-dropping realization here is that the EMA doesn't look at what triggered the request for its activation. EMA says "Huh? You think you have an unusual event? Well, I'm going to ignore your data and check for myself." It can begin immediately with the next block.

I was able to think of an exploit for this: a big miner jumping on will trigger the event and, knowing our code, decides to throw a long timestamp at the low-N EMA, knocking difficulty down. That's not cool. So we use deadalnix's(?) idea of the median of the last 3 timestamps, but I'll just subtract it from a median of 3 shifted back only 1 block. So this requires every idea we've seen, carefully applied. I can't combine them effectively any other way.

I've done a lot on simulating real coin/miner activity and getting a metric for it. I simulate a very typical 3x attack that I've seen in 4 coins. (If it's 50x, the comparative results here will not change much, except that this dynamic algo will blow the other algorithms away because the event is easily detected and it has a low N. So the following is the worst case for this algorithm.) The attackers typically stay on until the hashrate has risen 35%, so the attack ends when that occurs, as the red bars below will show. Again, this attack example is just for the metric, and the relative results will not change much if the attack profile changes. Also, this attack simulates price changes as well as accidental drops in D and accidental drops in hashrate that caused a low difficulty. In live coins my "avg of 11 STs > 2xT" delay-percentage metric is about 1/3 of the hash-attack metric, but the simulation here is brutal by making the attacks constant, so I have to scale the delays up and multiply by 5 instead of 3 (and delays are still weighted less than "blocks stolen", i.e. blocks where the avg of 11 STs indicates > 2x hashing is present). Delays and blocks stolen are connected: algos good at one are usually worse at the other. More attacking means more delays. So summing them, weighted to be nearly equal, into a single metric is completely justified. Here are the results. Lower is better.

10%  TWHM-DEMA  (tempered WHM with Dynamic EMA)
17% T-WHM
17% DEMA
20% WHM
22% EMA
23% Digishield w/o MTP delay
25% WT
25% Digishield 
33% SMA

Digishield is really not as good relative to WT as indicated above, because I didn't fix it to be on a comparable scale. Digishield is not shown in the graphs for space reasons. SMA is even worse than 33%. The numbers in my table above are more accurate than these examples.

I put together this algorithm while driving the kids some distance to school about 4 hours ago. So it's not merely made to "curve-fit" my testing. It's a theory that seems to win all three observational graphs.

[charts: all_attack, all_stability, all_step]

@jacob-eliosoff (Contributor) commented:
@zawy12, let's focus on specific questions:

  1. What is the attack that your current favorite algo TWHM-DEMA handles better than EMA?
  2. Is it really not the case that EMA can handle that attack similarly well if we just tweak its window?
  3. If tweaked-EMA does handle the attack similarly well, why exactly not just use tweaked-EMA? Is it perhaps because tweaked-EMA then handles some other attack worse than TWHM-DEMA? What attack, specifically?

Please give the short versions of the answers (≤3 lines). I want to understand, but I expect some people have just stopped reading.

Also, I understand reimplementing your algos here may be painful - maybe you could just generate a chart like the ones I plotted above (new_target/old_target for block times -2h, -1h, ..., 7d) from your own code? Or just send me the numbers & I'll add them to the charts.

@zawy12 commented Jan 9, 2018

  1. Everything I test
  2. No. Smaller window causes too much variation. Longer window responds too slowly

I'm sure few read 1/10 of what I write.

I don't understand your curves.

@jacob-eliosoff (Contributor) commented:
Well, "It kicks ass on my tests" isn't the stuff that ACKs are made of. Unless you can briefly convey a specific attack or two that illustrate why it outperforms all parameterizations of wtema, it seems unlikely to be adopted here.

The curves are just each algo's mapping from block time to (new_target/old_target). Eg, the flat grey line at top right of the first chart shows that in response to any block time ≥1d, emai-1d just multiplies target by 144 (not too nice...).

@zawy12 commented Jan 9, 2018

I specified the exact math of the attacks and metrics and showed the results. I guess I did not explain how I chose N for each algorithm. (I am not aware of any other parameterization for the EMAs.) I chose N so that the attacks have about the same width (the red bars). If they are about the same width, it means the algorithms will respond at about the same speed to a price increase, so they are compared fairly starting from that basis. I could raise N for any of them to "win" in my test, because there is no increase in hashrate unless the difficulty dips below 1. Then the metrics measure their ability not to accidentally go too low too often, and how well they come back down, given the same general speed of response. I varied the low D that triggers the start, the stop D, and the hashrate. These hardly affected the results, except that the best one was able to shine if the hashrate was large. I did not go below a 0.8xD start or below a 2x-baseline hashrate attack.

I can't prove I am not mistaken in an undetermined way without an infinite number of words. But you can falsify a statement. It's a lot less work to check a hash than it is to generate it. My falsifiable observation is that TWHM and TWHM-DEMA are the best we have when checked against the metrics I have theorized are the best. What is the competing contradicting theory?

It seems like your charts would be different for different N. By requiring the widths of the red bars to be about the same I'm factoring out what seems to be a missing parameterization in your charts.

In responding, I see the step response should be a lot better (this is a given with these controllers), and that it should be used to determine my N for each algorithm. I now see that my step responses above were not "fair". An equivalent step response between them means they will respond to price changes the same, and our whole goal is to balance speed of response to price and accidental changes without accidentally going too low and attracting hashing. So I need to re-compare them to make sure TWHM and EMA were not getting an unfair advantage from being less responsive.

@jacob-eliosoff (Contributor) commented:
The purpose of providing code here is to let people play with your algos & scenarios and convince themselves that both are realistic and bug-free.

On "the charts would be different for different N", yes, that was the point of my #1 above:

Let's specify the responsiveness for one case explicitly, and people can back out the parameter setting needed to match that responsiveness. Then we can compare apples to apples. Eg: "Please set up your algo so that, assuming a 10-minute ideal block time, when a 4-hour (24x too slow) block is mined, the algo increases target (cuts difficulty) by 17.25%." (Or choose different numbers - 17.25% is just what ema[2]-1d does.) Then maybe we'll see that different algos matching that constraint respond differently to very fast blocks, etc.

All the algos in my chart responded to a 4h block by increasing target by between 15% (wtema-100) and 19% (emai-1d) - I figured that made it close enough to an apples-to-apples comparison.

@zawy12 commented Jan 9, 2018

I posted the WHM algorithm above and almost two months ago as an issue under the title "Best algorithm so far". Masari implemented it and it is the best algorithm of the seven I follow.

I think my baseline is going to be the ability to change 50% in 2 hours in response to a 10% price change (aka needed difficulty change). I mean the equation would be assigned an N to meet but not exceed that goal.
Then finding the best algorithm might be approximately as simple as finding the one with that N that has the lowest standard deviation in difficulty when hashrate is constant (i.e., for a given speed of response to price changes, the best algorithm is the one attracting the fewest hash attacks from random variation). Since they have the same response speed, within some range, their ability to discourage an attack via fast response should be similar.

@zawy12 commented Jan 10, 2018

I standardized the N values by making their responses to a 3x hashrate step function (a 50-block step) the same. I changed N until they all averaged the same ending difficulty. The tempered WHM failed to be an improvement. Otherwise, the rankings based on attracting hash attacks and causing delays come out the same. I used 300,000 blocks for the rankings. You can see the WHM-EMA responds faster than the others to the step function, which causes its ranking to be underestimated. It also gets even better scores if the hashrate changes more than the 3x used in testing. The WHM-EMA uses the same N as the WHM below it (N=90) and then switches to an N=10 EMA if 10 blocks show a statistically significant event. It uses that for 20 blocks after the last event before switching back to WHM.

WHM-EMA	15.6
WHM	18.3
EMA     20.5
WT	25.4
Digishield	27.9
SMA	36.4

[charts: _all_attack, _all_baseline, _all_step]
