
Adjusting the DAA with a faster converging algorithm #125

Closed
wormite opened this issue Nov 19, 2017 · 43 comments

@wormite

wormite commented Nov 19, 2017

Hi there,
I have noticed that even after the Nov 13th fork, the block generation rate sometimes drops to 1.07 blocks per hour (Nov 14th, 6:37pm GMT; see https://fork.lol/blocks/time).
The DAA goal specifies:
Wn = W * ExpectedBlockTime / T .
G = (2^256 / Wn) - 1
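(For reference, those two formulas as a minimal Python sketch; the 144-block window and the variable names are illustrative, not taken from the actual implementation:)

EXPECTED_BLOCK_TIME = 600  # seconds

# W_sum:  chainwork accumulated over the chosen 144-block window
# T_span: timestamp difference across that window, in seconds
def next_target(W_sum, T_span):
    Wn = W_sum * EXPECTED_BLOCK_TIME // T_span   # Wn = W * ExpectedBlockTime / T
    return (2**256 // Wn) - 1                    # G = (2^256 / Wn) - 1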
The DAA is a moving-window average (moving mean) algorithm, which has been shown to produce oscillation when large miners migrate to the Bitcoin legacy chain. As you can see at https://fork.lol/pow/hashrate, there has been significant fluctuation over time since the DAA activation.

I suggest that we use a PID control algorithm instead. A quick explanation can be found here:
https://youtu.be/0vqWyramGy8
PID is an industry-standard algorithm for temperature controllers and the like, and it can be used to dynamically adjust the difficulty based on the previous block's mining time.
To briefly summarize: the moving-window average is basically a special case of PID with K_p = 1, K_i = 0, and K_d = 0, which causes oscillation in a mechanism with a delayed response, such as miner migration. The result looks like this:
[image]

By implementing the PID control you will see the convergence much faster, such as this:
[image]

All you need is to record the previous block's mining time; there's no need to look back 144 blocks. It's very dynamic, and only a little parameter tuning is needed at the end.
Please let me know if you need my help. I hope a PID in the DAA will finally be the thing that gives a proper block mining time.

@zawy12

zawy12 commented Nov 27, 2017

I've looked very carefully at the solvetimes and compared them to other coins. If you neglect the oscillations at the beginning, the last 9 days of solvetimes are the best I've seen among the 6 difficulty algorithms I am following closely.

The current algorithm is basically a PI controller. The P is a proportional constant that just makes sure you're in the right "units," so to speak; in this code P is basically the 600 seconds. The "I" is the integrative part, the sum of past data, which this code uses. The I in a PI controller is a rolling sum or average (however you want to express it), just like in the DAA. This code is basically
next_difficulty = avg(past 144 difficulties) * 600 / avg(past 144 solvetimes)
or
next_difficulty = sum(past 144 difficulties) * 600 / sum(past 144 solvetimes)
or
next_difficulty = (most_recent_total_chain_work - chain_work_144_blocks_in_past) * 600 / ( most_recent_timestamp - timestamp_144_blocks_in_past)
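As a rough Python sketch of that last form (names are illustrative):

# Simple moving average DAA, matching the equivalent forms above.
# `difficulties` and `solvetimes` hold recent blocks, most recent last.
def sma_next_difficulty(difficulties, solvetimes, T=600, N=144):
    return sum(difficulties[-N:]) * T / sum(solvetimes[-N:])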

You have to be very careful in implementing the D (derivative) part of a PID controller in these coins. It looks at the slope to predict the future, and the D part is the source of ringing (oscillations) in PID controllers. Oscillations can cause big problems in coins. Miners are intelligent agents seeking profit that depends on a variable (the price) that is not available to the difficulty algorithm. Miners can change their behavior to profit while your difficulty algo can't change to reflect their change of mind, so it's not like an oven temperature. It might be moderately safe to use the D part if price were available to the algo, but malicious behavior could still cause destructive ringing. The best you can (or should) do is use the slope to project what the difficulty should be for the very next block, but no further into the future as normal controllers do.

The BCH devs are aware of two algos that do this, which they call WT-144 and EMA. The EMA is better, and there's a modification to WT-144 that makes it as good. The current simple moving average just estimates the next difficulty to be about equal to what the difficulty was 72 blocks in the past (the average). The people in control of BCH were in a hurry and preferred the simple SMA over WT-144 for safety reasons, at what seems to be a small price in lost responsiveness.

@wormite
Author

wormite commented Nov 28, 2017

Thank you for the comments! I wonder if you have looked at DigiByte (DigiShield), and whether the claim that adjusting difficulty every block is optimal is true. From PID's point of view, it may be.
Glad to know that the BCH dev teams are already aware of these control algorithms.
I am not a control expert, but "I" should be the sum of the past errors, so even if a small drift exists, the PID will correct it over time. "P" is the current error, and "D" is the change of the error, which is typically used to counter drastic changes when there is a tendency to overshoot. I think when you said "data" you conflated the current difficulty value with the error (next difficulty - current difficulty). A PID always adjusts according to the error, not the current difficulty value, so "I" shouldn't be the sum of difficulties but the sum of errors.

So if we treat the next target as
target_difficulty = avg(past 144 difficulties) * 600 / avg(past 144 solvetimes) // the avg can be over 144, 10, or even 2 blocks, possibly weighted to reduce the noise from the randomness of block-finding times
then the error is
error = prev_difficulty - target_difficulty
and the P, I, D terms are (updated on each block):
P = error
I = error + prev_I
D = error - prev_error
prev_error = error // for next round
prev_I = I // for next round
next_difficulty = k_P*P + k_I*I + k_D*D
prev_difficulty = next_difficulty // for next round
These three variables should be distinctly different because their transfer functions are of different orders, so I don't understand how the simple average can somehow include both the first and second order. I also don't understand why "D" would be the reason for overshooting; I thought "P" is, and "D" is used to amplify the acceleration or deceleration and therefore damp down the oscillation. I may be wrong, though.
I guess my suggestion is that the difficulty adjustment is by itself a topic that can be optimized separately. I cannot quite imagine how miners could manipulate this if the PID parameters are carefully chosen, because the overall "damping" should help reduce the risk of large pools manipulating the difficulty.

Of course these are just hypotheses; we need to put them together and test them (at least in simulation). I just don't know whom to work with.
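For what it's worth, here is a minimal toy simulation sketch of that kind of test (exponential solvetimes, a hashrate that halves mid-run, and a PID update in the incremental next_D = prev_D + correction form discussed further down in this thread; all constants and gains are arbitrary assumptions, not tuned values):

import random

T = 600                          # target solvetime, seconds
N = 10                           # short window for the target estimate
k_P, k_I, k_D = 0.3, 0.05, 0.1   # arbitrary gains for illustration

def simulate(blocks=2000, seed=1):
    random.seed(seed)
    hashrate = 1.0e9                 # hashes per second (arbitrary scale)
    difficulty = hashrate * T        # start in equilibrium
    D_hist, ST_hist = [difficulty], [float(T)]
    I = prev_error = 0.0
    solvetimes_after_step = []
    for height in range(blocks):
        if height == blocks // 2:    # step change: half the miners leave
            hashrate *= 0.5
        solvetime = random.expovariate(hashrate / difficulty)
        D_hist.append(difficulty)
        ST_hist.append(solvetime)
        if height >= blocks // 2:
            solvetimes_after_step.append(solvetime)
        # windowed target estimate, as in the pseudocode above
        target = sum(D_hist[-N:]) * T / sum(ST_hist[-N:])
        error = target - difficulty
        I += error
        D_term = error - prev_error
        prev_error = error
        difficulty = max(difficulty + k_P * error + k_I * I + k_D * D_term, 1.0)
    return sum(solvetimes_after_step) / len(solvetimes_after_step)

print("mean solvetime after the hashrate step:", simulate())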

Again, it's great to know that BCH is looking into this problem. Can you point me to where I can find WT-144 and EMA?
Thanks!

@wormite
Author

wormite commented Nov 28, 2017

I just found Andrew Stone's excellent article on this here.
Another one here

@zawy12

zawy12 commented Nov 28, 2017

The target difficulty you've expressed above is the current hashrate times a proportionality constant. Our goal is to set difficulty to the current mining hashrate.
Difficulty = k*hashrate
k = 2^x, where x depends on how the coin scales the difficulty (leading zeros in the "target" or "bits" field of the block header). ("Hashrate" here is per target solvetime, i.e. H/s * 600 s, so difficulty is proportional to the number of hashes needed on average to solve a block.)

But we can't measure hashpower of the miners except by observing the solvetime for a given difficulty:
hashes = 1/k*difficulty*T/ST
where T = target solvetime and ST = actual solvetime. (For a single data point this is also multiplied by ln(2) = 0.693, because the Poisson/exponential distribution is skewed low.) T/ST varies wildly, so we use a buffer, i.e. a low-pass filter, like a capacitor. A capacitor is an "integrative" element for voltage, but I can't say this buffer is like the I in a PI controller, for the reason you give (I'm not summing the errors separately). It would be just a P controller with a filter on the error measurement. In a more general view of PI controllers it should qualify as one, but I'll at least show how it is a P controller.
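As a tiny numerical illustration of that measurement (the 2^32 scaling assumes Bitcoin-style difficulty encoding; treat the constants as assumptions):

import math

# Rough hashrate estimate from one (difficulty, solvetime) observation.
# Assumes Bitcoin-style difficulty: expected hashes per block = difficulty * 2**32.
def hashrate_estimate(difficulty, solvetime, single_sample=True):
    est = difficulty * 2**32 / solvetime      # hashes per second
    if single_sample:
        est *= math.log(2)                    # ln(2) correction mentioned above:
                                              # a lone exponential sample skews low
    return est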

I believe your equation should be
next_D_adjust = kP*P + kI*I + kD*D
where next_D=next_D_adjust + previous_D.
or
next_D=previous_D + kP*error + kI*sum(errors) + kD*slope(errors)

The set point we want to achieve is solvetime = T, which is another way of saying we want D to match the current hashrate. But this is different from a PID controller because the price/difficulty "force" that determines hashrate contains the unknown price, so we have to work off of previous_D.

For a simple P controller:
next_D = previous_D + kP*(current_estimate_hashrate*T - prev_estimate_HR*T)
where kP=1/k but just define the units of hashrate*T= difficulty so that kP=1.
By the definition of what we're doing: current_estimate_hashrate*T = sum(N Ds) * T / sum(N STs) (our "capacitive" filter to get a good measurement)
Prev_estimate_HR is the same but shifted by 1 block into the past. Previous_D = prev_estimate_HR*T, so we're back to the standard simple moving average for our controller:
next_D = sum(D) * T / sum(ST)

The WT-144 method does not use a buffer like this, but sums the individual D[i]*T/ST[i] values, so the math should show it is of the same form you're looking for in a PI controller. I found an improvement to it which is
next_D = sum(D's) * T / [sum of weighted ST's]
or
1/next_D=sum[ weighted_ST's/T/sum(D's) ]

The invisible proportionality constant for converting 1/hashes to 1/difficulty must be the kP, and this sum indicates the integral part of a "PI-like" controller. The way it gives more weight to the recent solvetimes is a type of derivative control: it takes the slope into account. So in some sense we have a PID controller, and the only reason it does not look like one, and can be different, is that our plant does not have a momentum mass that leads to a second-order equation, and we have a price that changes the set point in a way we can't see, which is another way of saying our plant changes. The lack of momentum means there's no oscillation from kP as in a normal second-order plant, and the existence of a price variable means getting carried away with any derivative adjustment will lead to oscillations (which may not be the case in a normal PID, but I think the k's interact in a PID, so it may not be true to say that only kP causes oscillations).
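A rough sketch of that weighted-solvetime form (the linear weighting here is my own illustrative choice, not the exact WT-144/WHM weighting):

# next_D = sum(difficulties) * T / (weighted sum of solvetimes), with more
# weight on recent blocks. Linear weights are used purely for illustration.
def weighted_next_difficulty(difficulties, solvetimes, T=600, N=144):
    D = difficulties[-N:]
    ST = solvetimes[-N:]
    weights = list(range(1, len(ST) + 1))            # oldest -> newest
    wsum = sum(w * st for w, st in zip(weights, ST))
    wsum *= len(ST) / sum(weights)                   # keep the same scale as a plain sum
    return sum(D) * T / wsum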

Jacob Eliosoff's EMA difficulty algorithm uses
next_D = previous_D*( probability_density(ST/T/k) + cumulative_probability_function(ST/T/k))
so it is kind of like a PI controller. Another way to express the EMA is:
next_D = previous_D + kP*error
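A sketch of that last, simplified form (the 1/N gain and the clamp below are illustrative assumptions, not Eliosoff's exact constants):

# EMA-style per-block update: next_D = previous_D + kP * error,
# where the error compares the target and the observed solvetime.
def ema_next_difficulty(previous_D, solvetime, T=600, N=30):
    kP = 1.0 / N
    error = previous_D * (T - solvetime) / T          # positive if the block came fast
    next_D = previous_D + kP * error
    # limit the per-block move; purely a safety assumption for this sketch
    return min(max(next_D, previous_D * 0.67), previous_D * 1.5)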

PID simply gives weightings to present, past, and future estimates of how much we need to adjust.

@zawy12

zawy12 commented Nov 28, 2017

To express our "plant" in terms of the excellent PID controller article you linked to, we have an input "force," which is the price/difficulty ratio, and an output, hashrate. If we design a controller for this force, we are assuming miners are simply selfish for the P/D ratio rather than malicious, which is a risky assumption. To make matters more difficult in seeking a controller, our "plant" is a non-linear function (linear for small P/D ratio changes, but "exponential" when it changes by about +/- 30%). It looks like an asymptotic tangent function, so it's not as nice as exponential math; the Laplace transform of the tangent function is incredibly ugly because of the infinities. The "plant" has no mass and little damping (a delay in the hashrate change after a P/D ratio change), so in another way it's not as complicated. I mean the plant we want to control is something like P/D = k1*tan(k2*hashes), or D = P/(k1*tan(k2*hashes)), where we have control of D, don't know P (price), and want to reach steady state quickly. The plant in the article is F = k1*x + k2*x' + mass*x'', where k1 is the spring constant. Notice there is no unknown input variable like P; their controller is a function for F that quickly obtains a target x. Real systems needing PID controllers seem to always have some sort of mass that leads to linear second-order differential equations. So it's not a trivial task of copying how PID controllers are done. You need to go back to more fundamental math (maybe linearize the "tangent" function), and you will end up with something a bit different from a normal PID controller, probably in a direction that intuitive ideas are already leading us.

@zanza321

The current DAA seems to be working pretty well. Unless some catastrophic failure comes up, it seems very stable and does not need to be changed.

@zawy12

zawy12 commented Dec 14, 2017

It is not too bad, but it has 3x more delay and gives away 2x more cheap blocks than a good algorithm. Masari implemented my version of Tom Harding's WT-144 with N=60, and it might be the best algorithm in use right now. Their solvetime is T=120, and to maintain the same features in a T=600 coin, N would need to be about 40 instead of 144. N being too high is the problem.

Each of the following covers 3600 blocks. BCH's does not include the startup problems.

[image: bch_zcash_masari_3650_blocks]

@wormite
Author

wormite commented Dec 18, 2017

The road map says we are going to see the DAA adjusted in 2018. :) https://www.bitcoinabc.org/bitcoin-abc-medium-term-development

@montvid

montvid commented Jan 4, 2018

I am looking forward to this problem being less of an issue. Just now I waited 30 minutes for my tx to confirm once; from block 511411 to 511412 to 511413 it took 1 hour for 2 confirmations. A fix is sorely needed. I guess there is no universal agreement on what to do, as it was not solved in the November fork. :(

@montvid

montvid commented Jan 5, 2018

Word is out that people are using Litecoin, because of its fast confirmation times and low fees, instead of the clogged Bitcoin or Ether networks. They need speed and low fees to move money to the next exchange (which usually requires 2 confirmations) to trade the new best thing. So 10-minute blocks, 6 in an hour, would be very helpful. See the chart at https://fork.lol/blocks/time to monitor BCH block time variations in real time.

@Pheromon

Pheromon commented Jan 5, 2018

If technically feasible (i.e., not too many orphans), it would be great to reduce block times down to 1-2 minutes.

@montvid

montvid commented Jan 6, 2018

I was looking on the internet for other projects' orphan rates, but so far I have found only Bitcoin data. I would like to see BCH and LTC data for sure.
@Pheromon making the block time 2 min would create new coins 5 times faster, which is not good as it creates too much inflation.

@zawy12

zawy12 commented Jan 6, 2018

We've developed a new algorithm that's the best of the best and allows integer math on the target. I am recommending N=20 for T=600-second coins, but others are still pushing for a high N. Digishield v3, which Zcash et al. use, is like an SMA with N=60, and they do pretty well. But they have T=150 seconds, which means that if T=600-second coins keep that N, they will respond 4x slower in real time, so price changes occur too fast compared to the difficulty. But let's say my research is not persuasive and you want to use N=60 like Digishield v3; then keep in mind that N=30 in the algorithm below is like N=60 in Digishield. And please allow negative solvetimes!

zawy12/difficulty-algorithms#17

@Bitcoin-ABC deleted a comment from rangershi on Jan 10, 2018
@schancel
Contributor

@wormite PID Controllers have an internal state which cannot be calculated from the last output alone. In order to implement one for the difficulty adjustments, another header field would need to be added.

@wormite
Author

wormite commented Jan 10, 2018

@schancel What do you mean by internal state? It's only using the difference of target and current block time as input. Unless you are saying the current block time is the internal state and is not available directly. Or are you referring to the total hash power of the network as the internal state?

@zawy12

zawy12 commented Jan 10, 2018

He means that when doing the integral and derivative parts of a PID controller, you have to remember the past in a variable that is stored and constantly updated in the controller. Node software does not have this, because nodes have to rely on the blockchain for their memory so that they can go on and off line without losing the ability to sync state with other nodes.

DA's have access to the same information as a PID controller because they usually look back over the previous blocks to calculate current state. Their memory is on the blockchain.

@zawy12

zawy12 commented Jan 10, 2018

DA's use the current value, they sum past values, and most take the slope into account, which is what PID controllers do. I just can't prove (yet) that they are mathematically equal to the sum of these 3 items as seen in a normal PID controller equation.

@wormite
Author

wormite commented Jan 10, 2018

@zawy12 @montvid
Creating faster blocks will not increase the mining reward if the reward is properly adjusted. If you read what Andrew Stone has researched on the DAA, you will realize that the people behind Bitcoin Cash have already considered the possible modifications to make the block time more stable at 10 minutes. They also thought about how to make sure the blockchain doesn't stall when major mining power has left the chain. I think this is already being worked on, as implied by the medium-term road map:

  • Improved Difficulty Adjustment Algorithm

Also, for those who think that a smaller block time will benefit the chain: Craig Wright has talked about this on his Twitter. A smaller block time (1 min, for example) actually benefits the larger miners because they sit on the better-connected nodes, so they will receive newly minted blocks earlier than smaller miners due to network propagation time. This gives the bigger miners a head start on calculating new blocks. There is a whole mathematical model on this, including the statistical model of block-finding time as a function of hash power. Shortening the block time makes the network propagation delay a bigger portion of the block-finding interval and thereby promotes the centralization of miners (or mining pools) even more, which is not necessarily desirable. However, I believe all of this will change with the new Graphene technology for block propagation coming out.

So yeah, the people behind Bitcoin Cash are simply way ahead of other researchers in the field, and surely smarter than we are. They did the work in modelling, test networks, and implementation. The 10-minute block time is not an arbitrarily chosen number, just like the 21 million total token supply. That's why I didn't comment here for a while: I found that every improvement I had thought of had already been considered or tested by them. I guess they are just the "pros".
For transaction confirmation we don't really need to reduce the block time. Nowadays, once a transaction is broadcast, it takes 2 seconds to see it on major websites such as Blockchair.

@wormite
Author

wormite commented Jan 10, 2018

@zawy12 Oh, that should not be a problem; it's mathematically equivalent. The DA is simply a special case of PID with only the P term, with coefficient 1. You can also say that the P part can be generalized to a window average over some past history. Btw, you can also put a filter on this past history (basically weighting past block times differently; say, if you care more about recent history, you put 0.9 on the last block time and 0.1 on the block time 60 blocks back). The choice of filter can also be used to improve the estimation of hash power using information from the statistical model of block-finding time, as proposed in Andrew Stone's post, although he told me it really doesn't help much:
The PIEC results were not amazingly better than other PID algorithms so I think that the small difference does not matter.

@zawy12

zawy12 commented Jan 10, 2018

A faster-converging algorithm doesn't necessarily mean a shorter block time. I'm not arguing that they should shorten it, but more data points per unit of real time enable the DA to be more stable and to respond faster to price changes.

None of the random variation in solvetimes is the result of a "forcing function" such as the 60 Hz noise in electronics. All the variation is relevant data that should not be filtered lightly. The variation is normally reduced by using a larger sample size. However, digishield's tempering is a filter aka dampening factor that does add substantial benefit to the simple average, so there's some error to my viewpoint.

We now have 3 algorithms better than WT-144 which was the best before BCH jumped the gun and used an SMA with an N=144 that is way too big. I haven't published the specs for the new one yet.

@wormite
Author

wormite commented Jan 10, 2018

@zawy12 Agreed, a faster-converging algorithm should be sought independently, without the need to adjust the block time interval. However, I doubt you will get a better estimate of hash power (and I say hash power because that's the target of your DA) by sampling more, because:

  1. The hash power is also dynamic; it may change faster than you can sample it.
  2. There is a statistical distribution relating the hash power to the sampled result (block-finding time), and this distribution is not taken into account in a simple window average (lost information).

I'm not saying that sampling more gives no benefit, just that the benefit may not be worth the price you have to pay.

@zawy12

zawy12 commented Jan 10, 2018

Now I see my comment from 4 days ago, which is why you think I wanted shorter block times. I was referring to coins with shorter block times getting better results with a much smaller window (in real time) as evidence that BCH's N is way too big. Coins with longer block times have to have smaller N's for their windows. To respond equally well in real time to price and network changes, they have to decrease N inversely with the target block time: N2/N1 = T1/T2, where 1 and 2 are two different coins that you want to respond equally fast in real time. But larger-T coins lose stability as N gets smaller. If they want to keep the same stability in real time, they have to decrease N more slowly, as N2/N1 = (T1/T2)^0.3, but then they lose responsiveness. Here's the article I wrote on that subject.
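A quick worked example of that scaling, using Masari's T=120, N=60 from the earlier comment as the reference point:

T1, N1 = 120, 60      # reference coin (Masari, per the earlier comment)
T2 = 600              # a 10-minute coin

N_same_response  = N1 * T1 / T2            # equal real-time responsiveness -> 12
N_same_stability = N1 * (T1 / T2) ** 0.3   # equal real-time stability -> about 37

print(round(N_same_response), round(N_same_stability))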

@wormite
Author

wormite commented Jan 10, 2018

@zawy12 Yes, I have seen your article before. The choice of N is important. May I suggest that you test it on a test network, as Andrew Stone did? It would be more persuasive if you had test network simulations comparing the results for different N's.

@wormite
Author

wormite commented Jan 10, 2018

@zawy12 Sorry, I just saw your article here. I guess those are already test network results?

@wormite
Author

wormite commented Jan 10, 2018

@zawy12 Btw, I saw a previous email from deadalnix on the mailing list. He obviously knew the benefits of the different DAAs, but they advocated a more "conservative" adjustment to stabilize BCH first, trying not to make huge changes for the Nov 2017 hard fork. I think that's pretty wise. I wish I could find the link to that email. So I don't think the BCH dev teams "jumped the gun"; they are aware of the various DAs out there and already had a DA improvement in the plan. You may find talking to Andrew Stone helpful, because he generally responds within 2-3 days, even to outsiders.

@zawy12

zawy12 commented Jan 10, 2018

Yes, I know they had a timeline, and I was too late to the game to figure out how to convince them that N=144 was too big. WT had never been used before, so I can understand their fear. But the simulations match exactly what the live coins do, apart from the wild card of miner behavior.

The Poisson distribution effect you're referring to has actually been used: the EMA Jacob Eliosoff showed us is based on the Poisson distribution. See wiki here.

I have a book that discusses the EMA in process control charts, how it can be used to determine when a process is out of control, and how to convert the EMA into a PID controller. After thinking through the equation and how we could use it, it was obvious to me that the full-blown P+I+D equation of the EMA would not help us. And for whatever reason, my modified WT (WHM) is even better, without adjusting each solvetime data point by the probability distribution. I tried to do that a little; maybe I can try using the probability density function to do it. The EMA is actually the probability density function of solvetimes plus its integral, which is the one-tailed probability. I posted this to the bitcoin-dev mailing list, so maybe I've already shown it's an honest-to-God PID controller that adjusts for the distribution.

Mining.py does not demonstrate the relative effectiveness of algorithms, so I can't use it (and I can't program well enough to modify it sufficiently).

@wormite
Author

wormite commented Jan 12, 2018

I am curious how you can convert an EMA into a PID. Can you point me in the right direction? I can believe that the WHM is somehow better, though. PID is designed for certain controller problems and may not suit BCH well. But it would be better to compare your results with what Andrew is doing, since he already has some results out there too. I sent you an email with Andrew's email address. Looking forward to your future results.

@zawy12

zawy12 commented Jan 13, 2018

I discussed Andrew's idea on the bitcoin-ml mailing list when it came out. I reasoned through the consequences, and it seems to work excellently and to have no drawbacks if implemented reasonably well. That kind of asymmetry is supposed to always have a drawback that may not be easy to see, as evidenced by the EDA problems. I suggested making it symmetrical, and he was receptive to the idea on the mailing list, but it may be best not to make it that complicated. It might be simple to code, but it's a major change.

I'm working on getting a PID implementation.

@wormite
Author

wormite commented Jan 13, 2018

Maybe I'm not understanding correctly, but Andrew's idea IS PID, in this: https://medium.com/@g.andrew.stone/tail-removal-block-validation-ae26fb436524

@wormite
Author

wormite commented Jan 13, 2018

PIEC is a PID algorithm with a Poisson filter applied to the window average, just as the EMA does, except that it also has the I and D parts.

@wormite
Author

wormite commented Jan 13, 2018

Ah, I see why you think it's not about PID; forget the title of the article. It's in the first part, where he shows the results of PIEC compared with weighted-average algorithms.

@wormite
Author

wormite commented Jan 13, 2018

Also, I found the original email deadalnix sent out about the EDA:
https://lists.linuxfoundation.org/pipermail/bitcoin-ml/2017-August/000136.html

@zawy12

zawy12 commented Jan 13, 2018

I actually don't know what PIEC is; I'll assume his email is describing it. Concerning the H = sum(Wi) / sum(Ti) method: they referred to me as the source, but it's really Bitcoin's method converted to a rolling average. It's also the best I could think of as a simplified version of Dark Gravity and Digishield, and it seemed to perform as well as those. If I understand the email correctly, he's saying they'll use 130 minutes as a base time rather than a number of blocks, which I never tried because it's not amenable to my spreadsheet testing. I'm going to test it now because I think it's a good idea. I don't see how that email describes a PID controller any more than any other algorithm. Maybe you could explain to me what PIEC is.

He could not get HR = sum(Wi / Ti) / N to work, which I bet lots of people have tried at first (like me). Inverting it actually works, i.e. finding the average as 1/HR = sum(Ti / Di) / N. This is why WT-144 works. But for complicated reasons described in my intro to difficulty algorithms, it does not work as well as the SMA, which is why the WHM version of WT works better.
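A quick toy check of that point (my own toy model: exponential solvetimes at a fixed hashrate, arbitrary units):

import random

# Why HR = avg(D_i / T_i) misbehaves while 1/HR = avg(T_i / D_i) behaves:
# for exponential solvetimes, E[1/T] diverges, so the first estimator is
# dominated by lucky fast blocks.
random.seed(0)
true_HR, D, N = 100.0, 60000.0, 10000          # here difficulty = expected hashes
T = [random.expovariate(true_HR / D) for _ in range(N)]

naive    = sum(D / t for t in T) / N           # typically comes out several times too high
inverted = 1.0 / (sum(t / D for t in T) / N)   # close to 100

print(naive, inverted)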

I got a PID controller with EMA working but at best I can only get it to tie the WHM. It can do better than EMA. I'll post an article on it sometime.

@zawy12

zawy12 commented Jan 13, 2018

This appears to be a good specification for the PIEC. The only places I see PIEC in a web search are Andrew Stone referring to it and me asking him what a PIEC is. The slow target of 2016 blocks is way too long. The 5-block lower limit needs to be higher because of what can be done with bad timestamps, even with median-of-3 protection. For example, a 50% attacker needs almost no luck to get a block and assign roughly the maximum reverse limit of a -3600 timestamp, followed by -3599 in the next block or two (hence he needs very little luck), then let someone else get 1 or 2 blocks (or get them himself and set an accurate time), and then assign the maximum allowed +7200 followed by +7100 in the next 1 or 2 blocks. The 5-block calculation with the median of 3 then spans 7100 - (-3599) seconds, about 3 hours for 5 blocks, dropping the difficulty to roughly 1/4 of the correct value. He has a limit of a 12.5% drop per block, so it can't be done in 1 block, but by repeating the process (and not using such extreme numbers) an attacker should be able to get D to the 1/4 value in possibly as few as 11 blocks (the solution to the equation (1-0.125)^n = 1/4). There are all kinds of patches to fix this, but it gets messy faster than you can realize there is a way around the fix.
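For the 12.5%-per-block limit mentioned above, the block count comes straight from solving (1 - 0.125)^n = 1/4:

import math

# Blocks needed to drag difficulty to 1/4 of the correct value
# when each block can lower it by at most 12.5%.
n = math.log(0.25) / math.log(1 - 0.125)
print(math.ceil(n))   # 11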

But the general idea of measuring hashrate based on time rather than block count might be a great one. It just needs to be implemented in a more direct and fundamental way. It might cause some problems in making sure it is not asymmetrical in how much it allows the difficulty to rise versus fall, but the correction for that should also be possible in a fundamental way, rather than through experimentation, in order to ensure there is no exploit.

@schancel
Contributor

schancel commented Jan 13, 2018

DA's have access to the same information as a PID controller because they usually look back over the previous blocks to calculate current state. Their memory is on the blockchain.

This is incorrect. The previous target cannot be used to infer the internal state of a PID controller. You would need the history of all targets to infer the internal state.

@zawy12

zawy12 commented Jan 13, 2018

Isn't N=100 enough of the previous targets? Any PID making use of data further back in time than that seems to be making a mistake.

@zawy12

zawy12 commented Jan 13, 2018

Concerning DAs based on time instead of blocks: the Poisson math depends on occurrences rather than time. That doesn't mean it can't be done, but it involves some advanced Poisson math to make sure it's symmetrical.

The following is an SMA difficulty based on time rather than block count, but it probably has a symmetry problem that will make the average solvetime too low or too high, if it doesn't cause some oscillations. I think it has strong protection against bad timestamps, not only because of the Nmin=30 but because it allows negative solvetimes.

Nmax = 100  # maximum lookback in blocks
Nmin = 30   # lower would be good, but we'd need more protection against bad timestamps
# TT is the amount of time that has to pass before we stop adding blocks;
# we're doing a 100-block SMA unless the window already spans TT seconds.
TT = Nmax * TargetSolvetime

sumD = 0
sumST = 0
for (i = -1; i >= -Nmax; i--) {
    sumD += D[i]  # sum difficulties
    # timespan from block i to the tip, using median-of-3 timestamps
    sumST = median(T[-1], T[-2], T[-3]) - median(T[i], T[i-1], T[i-2])
    if (i < -Nmin) {
        if (sumST > TT) { break }  # stop once TT has passed
    }
}
next_D = TargetSolvetime * sumD / sumST

I can't test this easily and need to start work on a block chain application.

@schancel
Contributor

schancel commented Jan 13, 2018

Isn't N=100 enough of the previous targets? Any PID making use of data further back in time than that seems to be making a mistake.

The output over the last 100 blocks depends on what the state was 100 blocks ago. You'd have to play with how you initialize the state at N-100 and see.

@zawy12

zawy12 commented Jan 13, 2018

The WT, WHM, and EMA all but ignore blocks that far in the past (if they have N=100), which enables them to work better than the SMA. Controllers are meant to give the correct current value even when the state far in the past (like 100 blocks ago) was way off. Start with a difficulty of 1 when it needs to be 1,000,000,000,000, and a controller as good as our DAs will reach the correct value in about 200 blocks even if the increase is limited to 16% per block (about 100 blocks if there is no limit, even for the SMA).
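(A quick check of that figure:)

import math

# Blocks to go from difficulty 1 to 1e12 when each block can raise it by at most 16%:
print(math.ceil(math.log(1e12) / math.log(1.16)))   # about 187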

@zawy12

zawy12 commented Jan 14, 2018

I finished the PID controller algorithm based on the EMA.
zawy12/difficulty-algorithms#20

@monsterbitar

Can this issue:

"But we can't measure hashpower of the miners except by observing the solvetime for a given difficulty:"

Be effectively solved by the main idea behind Bobtail (averaging multiple hashes against the target difficulty, instead of using a single one)?

@zawy12

zawy12 commented May 15, 2018

Yes, Bobtail makes calculating hashrate and therefore difficulty a lot easier.

ftrader pushed a commit to bitcoin-cash-node/bitcoin-cash-node that referenced this issue Aug 17, 2020
Fixes Bitcoin-ABC#125

These longest possible numbers are already included in the unit test.
(Unit test fails if the buffer size would be 20 or less.)

Test plan: `ninja check-univalue`
ftrader pushed a commit to bitcoin-cash-node/bitcoin-cash-node that referenced this issue Aug 17, 2020
Base UniValue::setInt buffer size on maximum integer length

Closes Bitcoin-ABC#125

See merge request bitcoin-cash-node/bitcoin-cash-node!706
@jasonbcox
Contributor

It's probably time to close out this issue, given that a spec for ASERT has been out for a while now: https://www.bitcoincash.org/spec/2020-11-15-asert.html
