Next generation SDiff #584
I'll have to look at this more closely, but the biggest issue I see just from looking at the numbers is that it allows the pool size to get far too large. After just five rounds, the pool size will have grown to 51,760, which is a whopping increase of more than 25% over the target, while the current algorithm keeps it within a 5% band. Such a large pool would undesirably skew the average payout times, reduce the probability of payout before expiration, and therefore significantly change the social contract. The primary purpose of the algorithm must be to keep the ticket pool size relatively constant, with the actual price fluctuations as a secondary concern.
The example shows 5 rounds of full increase, but this is probably not possible in practice because it would require people to have funds to make those purchases. I will look into adding the PoS pool total funds value into the model.
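For reference, the growth arithmetic behind those numbers can be reproduced with a short program (the window constants of 144 blocks, 20 purchase slots per block, and 5 votes per block are the ones used later in this thread):

```go
package main

import "fmt"

func main() {
	const (
		target      = 40960    // target pool size
		newPerWin   = 144 * 20 // max ticket purchases per 144-block window
		votedPerWin = 144 * 5  // tickets removed by votes per window
	)
	pool := target
	for round := 0; round < 5; round++ {
		pool += newPerWin - votedPerWin // net +2160 tickets per full window
	}
	fmt.Printf("pool after 5 rounds: %d (%.1f%% over target)\n",
		pool, float64(pool-target)/float64(target)*100)
	// pool after 5 rounds: 51760 (26.4% over target)
}
```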
Decred new PoS sdiff algorithm proposal

This proposal is based on an asymptotic function to calculate the new ticket price, or sdiff, for a new PoS pool buying window. The function has two variables: the average ticket price in the PoS pool and the pool size. x = 0 should be defined as the desired "target pool size". Due to its asymptotic nature, the function has two boundary values. These values effectively function as the maximum and minimum pool size.

This function aims to assess a fair price for the tickets in the PoS system based on the demand, which is expressed as the difference from the desired target pool size, and the current average price of tickets in the PoS pool. If the current pool size is over the target pool size, demand is high and the price goes up accordingly. If the current pool size is under the target pool size, demand is low and the price goes down accordingly. The asymptotic function determining the sdiff has the form of: Where,

By adding the average ticket price in the pool to the function and pivoting around it, the average ticket price will slowly adjust as the demand for tickets grows or declines. In the following example, the current values for the modifier a and the boundaries b and c have been chosen as follows: a = 100000. The value for a is somewhat arbitrarily chosen; it close to doubles the sdiff at a full buying window of 2880 tickets at the current average price d, while growing the price by 50% and 25% at an average ticket price of 100 DCR and 200 DCR, respectively. Boundaries b and c have been chosen as the size of a full buying window, namely 2880 tickets, or four times the number of voting tickets per window. d is the current average price in the PoS pool as of the writing of this document.

Inputting these values gives the following graph. In line with the value chosen for d, the current pool size is 42068 tickets. The "target pool size" at x = 0 is assumed to be 40960 tickets. This function would put the sdiff price for the next window at 57 DCR.
In other words, based on the pool size and current average price, it evaluates the fair price for the next window at 57 DCR. If no tickets are bought at that price and the pool size drops to 41348 tickets, the price for the next window would be approximately 44 DCR. If 2880 tickets are bought at the target pool size, the next price will be ~99.5 DCR at a pool size of 43210 tickets.

In scenarios where the maximum boundary is exceeded, a linear price increase function should take over, starting at the maximum average ticket price in DCR's lifetime, e.g. 512.69 DCR. This linear function should take over starting at a ticket pool size delta of 2880 in order to avoid the infinite price that would break the function at exactly x = 2880, and to avoid overshooting on the other segment of the function, which develops at x = 2881. In scenarios where the pool size would require a ticket price of 0, the minimum price should be set at a fraction above zero.
As an update, I wrote a simulation tool which models the full Decred proof-of-stake system behavior. It needs a better demand distribution function to properly model expected user demand, as the current one is very basic and not very realistic; however, the infrastructure is fully fleshed out, and nice graphs of the resulting pool sizes and ticket prices per block are generated. The graphs can be explored by selecting the desired sections.
The following is a suggested algorithm from @coblee. Simple: The expanded equation reduces to the simple equation when x=1. |
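The equations in the comment above were posted as images and did not survive here. One form consistent with the surrounding discussion of the exponent x (an assumption on my part, not a confirmed reconstruction) is the pool-size ratio raised to the power x:

```go
package main

import (
	"fmt"
	"math"
)

// nextDiff is a hypothetical reading of the ratio-based proposal: scale the
// current stake difficulty by (pool size / target) raised to an exponent x.
// With x = 1 this reduces to the simple ratio form.
func nextDiff(cur, poolSize, targetPool, x float64) float64 {
	return cur * math.Pow(poolSize/targetPool, x)
}

func main() {
	fmt.Printf("%.2f\n", nextDiff(50, 43120, 40960, 1)) // simple case, x = 1
	fmt.Printf("%.2f\n", nextDiff(50, 43120, 40960, 2)) // expanded case, x = 2
}
```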
I've updated dcrstakesim to include a demand distribution function (DDF) that takes into account both the yield and the volume-weighted average price (VWAP). In the case where the price is higher than the VWAP (indicating zero demand from that term) but there is 100% demand due to the yield that would be produced by purchasing tickets, the DDF will chase the yield. This more closely models typical purchasing behavior, since the typical criteria a rational ticket buyer will use are whether or not the produced yield is acceptable, and whether or not delaying until a future interval with a price lower than the VWAP will increase their yield.

Running the simulation with this updated DDF and the existing sdiff algorithm shows that the form produced by the simulation is similar to what actually happened on mainnet. In particular, it produces wild ticket price swings, ticket prices that increase over time and are within a similar range, and a pool size that is consistently higher than the target. One notable difference is that the simulation doesn't ramp up quite as slowly as mainnet did, but that is most definitely due to external factors such as the relative lack of a user-friendly GUI for staking early on, and the fact that the devs intentionally did not stake more than their proportion of the coins in the name of fairness and ensuring their voting power was never greater than the rest of the community's.

For reference, the following are the results using the real mainnet data and existing sdiff algorithm: While the following is a simulation run using the DDF and existing sdiff algorithm:

Given these results, I will be running the proposed algorithms and posting the results of the simulation using them. For the simulation results, the algorithms will be named/numbered as follows: Proposal 1: The proposal from @raedah. There will be an additional proposal forthcoming that is a joint effort between myself and jy-p. More details will be posted when finalized.
I'll write up a quick proposal too. Thanks.
Proposal 1 Results

Here are the simulation results of Proposal 1 using two different demand distribution functions (DDFs):

DDF A -- the DDF discussed above that produces the most realistic results.
DDF B -- purely based upon the yield and used as a point of comparison.

Pros:

Cons:

Summary

I think the simplicity and general form of the proposal are on the right track when the pool size is near the target; however, it is not usable as is due to the aforementioned issues.
Proposal 2 Results

Here are the simulation results of Proposal 2 using two different demand distribution functions (DDFs):

DDF A -- the DDF discussed above that produces the most realistic results.
DDF B -- purely based upon the yield and used as a point of comparison.

These graphs are using the proposed values; however, I also simulated with other variants as well.

Pros:

Cons:

Summary

As previously discussed, I personally think the most desirable behavior is relatively linear adjustments whenever the pool size is close to the target, and increasingly large price adjustments the further away it is from the target. Unfortunately, as this proposal stands, it would only work in a very narrow range of pool sizes and doesn't handle cases outside that range. I suspect the general form could work well as part of a piecewise function that handles the extreme cases differently.
As an update, Wednesday of this coming week (Apr 12, 2017) is the deadline for proposals. We need time to implement, simulate, and analyze them in the simulator and then the actual software before release along with a testing period. |
Next generation SDiff #584 - Proposal

Being an active staker and stakepool admin with some understanding of the dynamics and desirability of this targeting mechanism, I thought I would give my input. Some of this has been discussed before, but here I am trying to distill it down to something simple and flexible (it works with different network parameters), with an aim to maintain the essential goals and requirements of a healthy staking network. I didn't have much time to refine this, but I wanted to make sure I put in my submission before the deadline in 2 days. In the Decred mainnet we find these variables. A short variable is also defined for this submission:
Here are some additionally defined variables:
The goals are:
Generalized formula
Detailed formula
This formula can probably be improved when looking over a larger window of past blocks by adding additional scaling factors per block window. This formula is designed to work mid-staking cycle, the typical case where staking has reached a point of balancing out equilibrium forces. Further consideration would be needed when staking on the network has first started out. I probably need to revisit the Bounds calculation to make it higher. The ScalingFactor looks OK to me.

Example Calculations

Ticket price: 50
Ticket price: 75
Proposal 3 Results

Here are the simulation results of Proposal 3 using two different demand distribution functions (DDFs) and two different constants for the x value:

DDF A -- the DDF discussed above that produces the most realistic results.
DDF B -- purely based upon the yield and used as a point of comparison.

Pros:

Cons:

Summary

The algorithm itself is really simple, which is quite attractive, and it does eventually converge on the target pool size. However, as mentioned in the cons, the algorithm is a bit too slow to react to various scenarios, such as a huge influx or a large loss of staked coins, which causes the pool size to remain significantly over the target for an extended period of time. Alternatively, when using higher values of x, more volatility is introduced in both the pool size and the ticket price.

EDIT: I added a bit more to the summary based on the post below with different x values.
You should try with a larger X. I think that will address most of the cons.
On Tue, Apr 11, 2017, 10:02 AM Dave Collins wrote:
Here are the simulation results of Proposal 3 using two different demand distribution functions (DDFs) and two different constants for the x value:
DDF A -- The DDF discussed above that produces the most realistic results.
x=1
[image: proposal3ddfa]
x=2
[image: proposal3ddfax2]
DDF B -- Purely based upon the yield and used as a point of comparison
x = 1
[image: proposal3ddfb]
x = 2
[image: proposal3ddfbx2]
Pros:
1. The algorithm itself is easy to understand and efficiently computable.
2. The pool size eventually converges on the target value.
Cons:
1. The results vary significantly depending on the demand distribution. For example, looking at the difference between DDF A and DDF B, we can see that the shape of the curves varies significantly.
2. Purely chasing the yield allows the pool size to get to around 116k, and it takes nearly 40,000 blocks to bring the pool size back to the target.
3. Using a more moderate demand distribution, such that users wait for prices lower than the VWAP before purchasing as opposed to purely chasing yield, still results in the pool size reaching around 62k, and it takes nearly 100k blocks to bring the pool size back to the target.
4. It fails to prevent runaway ticket purchasing during the initial population of the pool, since the ticket price rises too slowly even when the pool size is significantly over the target. This is important because testnet and simnet chains are frequently reset; thus any algorithm must work well under all circumstances, not just the current state of mainnet.
5. While it might work for an already established system that is near the target pool size and has low volatility, it doesn't handle unexpected swings such as a huge influx of new coins to stake, or a large loss of staked coins due to a large portion of coins being taken offline.
@coblee, I did try with higher X values as well, namely 4, 5, 6, 10, and 100. However, I didn't post all the results in the name of brevity. The end result was effectively the same, although with some differences. Effectively, lower X values see the initial period (and any huge influx of coins at any point) cause the pool size to shoot up significantly over target, after which it takes a long time to relax back down to the target pool size, while higher X values introduce more volatility in both the pool size and the ticket price. For example, here is DDF A with a higher X value, along with a run that purely buys if the current yield is acceptable without considering the VWAP (which, granted, isn't super likely behavior, since users will typically wait for a price drop to boost their yield, which is what DDF A models, but it is still useful for studying the algorithm's behavior).
I've updated dcrstakesim to include the code for the first three proposals that have been analyzed above. I also added an additional behavior which increases the percentage of staked coins from 40% to 50% at the block that is 60% of the way through the simulation, and then drops it back down to 40% at the 80% mark, in order to simulate a sudden surge and drop in the number of staked coins. Please feel free to download the code and play around with the parameters.
I was reading this and thinking about applying PID (Proportional Integral Derivative) control theory. A simple Proportional controller would be something like: A simple Integral controller could be: I don't believe a Derivative controller would give good results; it can become wild with a sudden change in pool size. A PI (Proportional Integral) controller could be implemented with something like: error = (PoolTarget - PoolTickets)

I think we have sufficient historical data to use the Ziegler–Nichols method to find good Kp and Ki values. It could also be adaptive, but that could be overkill and too complex. Anyway, I think I'm late. :( Best regards!
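A minimal PI controller along the lines of the comment above could be sketched as follows. The error term is the one given in the comment; the gains kp and ki are placeholder assumptions and would need tuning (e.g. via Ziegler–Nichols on historical data):

```go
package main

import "fmt"

// A PI controller sketch for the ticket price, following the error
// definition above: error = PoolTarget - PoolTickets.
func main() {
	const (
		poolTarget = 40960.0
		kp         = 0.00002  // proportional gain (assumed)
		ki         = 0.000001 // integral gain (assumed)
	)
	price := 50.0
	integral := 0.0
	// A pool that starts over target and relaxes back toward it.
	for _, pool := range []float64{43120, 42500, 41800, 41200, 40990} {
		err := poolTarget - pool // negative while the pool is over target
		integral += err
		// Pool over target (negative error) => raise the price, and vice versa.
		price *= 1 - (kp*err + ki*integral)
		fmt.Printf("pool=%.0f price=%.2f\n", pool, price)
	}
}
```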
For an update, I've updated dcrstakesim to further improve the modelling. In particular, I've modified it to use the DDF prior to the stake validation height instead of just assuming 50% demand. While the 50% assumption more closely matched what happened on mainnet, mainnet was artificially limited by the fact that the system was new and relatively difficult to use at the time, as opposed to exhibiting purely rational behavior. It's pretty clear that if things were to reset today, tickets at the starting price of 2 DCR would have 100% demand! Second, I updated the yield DDF such that it calculates the yield based on the expected future return at the average expected vote time (28 days).

In addition, the two newest proposals will be named/numbered as follows: Proposal 4: The proposal from @jyap808

Beyond those, there are several more proposals that have not been pasted here and are being carried out via other channels such as slack. There has been a lot more simulation going on, but it's too much to post all of the results here. More results to come.

EDIT: I should have also noted that I reran the proposals that have already been reported with the changes to the simulator, and the conclusions are the same. The values for the initial bringup phase are slightly different, but the end result did not change.
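The updated yield computation can be illustrated with a tiny helper. This is a sketch of the idea only; the reward value and the annualization step are my assumptions, not the simulator's exact code:

```go
package main

import "fmt"

// expectedYield returns the fractional return of one ticket that is expected
// to vote after the given number of days, plus an annualized figure for
// comparison. voteReward is a placeholder value, not a network constant.
func expectedYield(ticketPrice, voteReward, expectedVoteDays float64) (perTicket, annualized float64) {
	perTicket = voteReward / ticketPrice
	annualized = perTicket * 365 / expectedVoteDays
	return
}

func main() {
	// 28 days is the average expected vote time mentioned above.
	per, ann := expectedYield(50, 1.2, 28)
	fmt.Printf("per-ticket: %.4f annualized: %.4f\n", per, ann)
}
```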
This was put forth in Slack, so I guess I'll declare the algorithm implemented in my branch finalized and make it a proposal as well. Well, I will this evening at least. https://github.com/chappjc/dcrstakesim/tree/chappjc
I've tersely documented my proposal with code comments. Please see the following lines for a description: |
Hello! Posting test results after some tuning.

// STATIC PUBLIC VARIABLES
// TICKET CALCULATION USING PID (Proportional Integral Derivative) CONTROLLER

Regards!
I just pushed an update to dcrstakesim to allow
I've just pushed another update to dcrstakesim to include a graph of the total versus staked supply, to help show the changes in participation that are being driven by the ticket price algorithm. For reference, the following shows the results using the real mainnet data and existing sdiff algorithm:
Looking at the DDF B results makes it pretty clear that starting the simulation with a low stake difficulty (a ridiculously high yield) does not make a ton of sense. I added a flag for this. After doing this, the difficulty algorithm from my branch gives reasonable results for both DDF A and B, where previously the ramp-up period was trouble with DDF B.
I'd say it still has utility, even though I agree with the sentiment. That said, the reason DDF A is the default is because it's a bit more rational, in that it effectively says: even if the yield is super high, if I just wait a period or two, my yield will be even higher, so I'm going to wait a bit. However, as we know, people aren't always particularly rational either.

EDIT: I have not looked at the most recent algorithms in depth yet, as I've been working on improving the simulator and on a joint proposal as well. I will, however, offer an analysis of them, and if they are better they will be chosen.
Ticket Price Algorithm Proposal (Proposal 7)

Abstract

The following is a joint proposal by @raedah, @behindtext, and @davecgh that is primarily based on the original proposal in this issue (proposal 1), but has been expanded to address its shortcomings and to incorporate some of the qualitative behavioral insights gained during the process of simulating the various proposals. Many thanks to everyone who has participated in this extremely important process.

Requirements and Goals

Before diving into the specifics of the proposal, let's address the primary requirements and goals of an ideal ticket price algorithm:
Theory of Operation

The general idea of this proposal is rooted in the concept of forces that react to and drive changes to the pool size. The first force acts to counteract the relative change in the pool size, while the second acts as a restorative force that pushes the pool size towards the target value. Minimum and maximum bounds are imposed to prevent runaway behavior.

Formulas

Generalized

Explanation of terms:
Detailed

Explanation of new terms:
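The two-force idea can be sketched in Go. This is my interpretation of the generalized form described above, not the proposal's exact formula (which was posted as an image): the first ratio counteracts the most recent pool size change, the second restores toward the target, and the result is clamped to bounds.

```go
package main

import "fmt"

// nextPrice sketches the two-force idea: a counteracting term for the
// recent pool size change, a restorative term toward the target, and
// min/max clamping. The specific terms here are assumptions.
func nextPrice(curPrice, prevPool, curPool, targetPool, minPrice, maxPrice float64) float64 {
	price := curPrice *
		(curPool / prevPool) * // counteracting force: reacts to the recent change
		(curPool / targetPool) // restorative force: pulls the pool toward the target
	if price < minPrice {
		price = minPrice
	} else if price > maxPrice {
		price = maxPrice
	}
	return price
}

func main() {
	// Pool grew from 40960 to 43120 over the last window: both terms are
	// greater than 1, so the price rises.
	fmt.Printf("%.2f\n", nextPrice(50, 40960, 43120, 40960, 1, 512)) // prints 55.41
}
```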
Simulator Notes

It is worth noting that dcrstakesim intentionally aims for several worst-case scenarios in order to help ensure the algorithms behave well under extreme conditions and attack scenarios.
Results

Pros:
Cons:
Summary

I believe this proposal satisfies all of the hard requirements and manages to achieve the ideal goals as well. For example, the mathematical form of the algorithm is simple, and its theory of operation is easy to understand. After running over 300 different simulations with this algorithm and manually tweaking the DDFs to intentionally throw various forms of irrational and attack-like behavior at it, the only notable con I can find is that under unrealistically extreme conditions the pool size can spike higher than would be ideal. However, that is mitigated by the fact that it fairly quickly self-corrects and returns to the desired behavior, versus experiencing the runaway behavior of every other algorithm I've tested up to this point, including the existing algorithm.
The two newest proposals will be named/numbered as follows: Proposal 6: The proposal from @chappjc

I will be simulating the remaining proposals which have not already been done (4, 5, and 6) and providing an analysis of each this coming week.

NOTE: As mentioned above, the proposal deadline has been reached. No more proposals will be accepted due to time constraints.
@davecgh Nice work :) If you remove the S_ub term, then that should remove the initial poolsize surge? Does the S_ub max come into play much, or is it only allowing that poolsize surge? To me, it looks like S_ub serves to allow the poolsize max to be exceeded, in exchange for more level ticket prices, when staking massively increases. Is this what we want? You could try removing it, or doubling S_ub? That should remove the initial surge to 70k in that case.
@kandiru: Thanks! For example, here is the simulation with DDF-B without the upper bound:
@davecgh I have given this a once-over, and the last suggestion seems to get us where we want to be. I am going to spend a bit more time on this to think through the corners.
Proposal 4 Results

Here are the simulation results of Proposal 4 using two different demand distribution functions (DDFs):

DDF A

NOTE: The simulation had to be manually stopped early because the algorithm results in the pool size falling to zero due to failing to encourage ticket purchases, which in turn causes the chain to be unable to advance due to a lack of votes.

DDF B

NOTE: The simulation also had to be manually stopped early for this DDF, for the same reason as DDF A.

Summary

I've skipped listing pros and cons since (as suspected by the author himself) this algorithm would only potentially work with an already established system, and it failed to complete the simulation for either DDF. As a result, I don't believe it is suitable, since one of the primary requirements for the new algorithm is that it works for bringup as well as being able to absorb sudden surges and ebbs in demand and staking proportions and recover back to the expected steady-state behavior. The algorithm in its current form is unable to meet that requirement.
@davecgh thanks for taking the time to run through the algorithm proposal. It was mostly to provide feedback, so it's nice to see how Proposal 7 is progressing.
Proposal 5 Results

Here are the simulation results of Proposal 5 using two different demand distribution functions (DDFs) and the constants proposed by the author.

Pros:
Cons:
Summary

The algorithm itself is based on the mathematically sound PID controller, it does eventually converge on the target pool size, and it has the ability to handle instant surges and ebbs in demand after the system has been operational for a while, which are desirable properties. However, as mentioned in the cons, this algorithm struggles with the initial bringup phase with both DDFs, is relatively slow to converge, and also would not react well to any future changes to the parameters.
@jyap808 No problem. Thank you for participating! Regardless of the final proposal that gets selected, all of the proposals, feedback, and results have helped zero in on a solution.
Proposal 6 Results

Here are the simulation results of Proposal 6 using two different demand distribution functions (DDFs) and the constants proposed by the author.

Pros:
Cons:
Summary

The results of this proposal are nearly identical to those of proposal 7, which is not surprising since it shares the same general principle of forces, was updated to include some of the details from proposal 7 (such as including immature tickets in the calculation), and even has the same dominant term. It is interesting to note that the two proposals were worked on independently and arrived at very similar methodologies. As such, I believe this proposal, like proposal 7, satisfies all of the hard requirements and manages to achieve the ideal goals. Since the two proposals are virtually identical, both in form and function, further analysis with different DDFs and a more in-depth look at the minor differences in the qualitative behavior and in the complexity of the math and implementation will be needed to make a final determination.
Perhaps the right way to look at prop 6 is to decide if there is anything of value (via further analysis and/or theoretical assessment), and simply incorporate the ideas into prop 7. I'm not sure how prop 7 has evolved recently, but I personally don't have any further plans to improve my proposal. In any case, I want to be clear that I am only interested in reaching the best possible solution. Given the theoretical similarities, I think the only questions are
|
We have been working with various tweaks to proposal 7 as well, one of which includes an exponential term which either attenuates or amplifies the ratio depending on the periodic distance from the target pool size (as opposed to the price change). It has shown some excellent results in terms of addressing the final con of proposal 7.

We are experimenting with some other approaches to attenuating and amplifying the relative velocity as well, because one of the challenges is that the final algorithm will need to be implemented without the use of floating point math: it has to produce identical results in all languages and on all machines due to its role in consensus, and floating point math is notoriously unreliable in terms of producing exactly identical results between machines and languages due to a variety of factors.
Price change percentage would correspond to the percent change in ticket pool size.
I have modeled this here
https://play.golang.org/p/QSV4ssuZ4O
Example: starting with the pool at the target size of 40960.
Maximum tickets purchased in a window is 144 * 20 = 2880.
Tickets redeemed in that window is 144 * 5 = 720.
So actual ticket increase is 2880 - 720 = 2160.
pool size 40960 + added tickets 2160 = 43120.
New window pool size 43120 divided by last window pool size 40960 = 1.0527.
So 5.27% would be the maximum price increase in a window
when all possible ticket purchase slots are filled.
If no tickets were bought, then the pool size of
40960 would fall by 144 * 5 = 720 to 40240.
New window pool size 40240 divided by last window pool size 40960 = 0.982,
which is a maximum ~1.76% decrease in ticket price from the last window
when no tickets are bought.
So, a ticket price of 50 DCR could adjust up 5.27% to 52.64 DCR,
or adjust down ~1.76% to 49.12 DCR.
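The arithmetic above can be checked with a short program (a re-sketch of the linked playground snippet, using only the constants given in this comment):

```go
package main

import "fmt"

func main() {
	const (
		lastPool  = 40960.0    // pool at the target size
		purchases = 144.0 * 20 // max ticket purchases per window
		votes     = 144.0 * 5  // tickets voted (redeemed) per window
		price     = 50.0       // example ticket price in DCR
	)

	// All purchase slots filled: the pool grows by the net ticket increase.
	maxPool := lastPool + purchases - votes // 43120
	upRatio := maxPool / lastPool           // ~1.0527

	// No purchases: only the voted tickets leave the pool.
	minPool := lastPool - votes     // 40240
	downRatio := minPool / lastPool // ~0.9824

	fmt.Printf("max increase: %+.2f%% -> %.2f DCR\n", (upRatio-1)*100, price*upRatio)
	fmt.Printf("max decrease: %+.2f%% -> %.2f DCR\n", (downRatio-1)*100, price*downRatio)
	// max increase: +5.27% -> 52.64 DCR
	// max decrease: -1.76% -> 49.12 DCR
}
```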