
Configurable step duration transitions #124

Closed
wants to merge 19 commits

Conversation

vkomenda

Fixes #122.

The PR adds a configuration map while keeping the previous configuration option for backwards compatibility.

I'll rebase it on top of the rebased aura-pos as soon as we update the working branch.
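The dual spec format (a single value or a map of transitions) can be sketched roughly as below; the type and function names are illustrative assumptions, not the PR's exact code.

```rust
use std::collections::BTreeMap;
use std::iter;

// Hypothetical sketch: both spec forms are normalized into a map from
// transition block number to step duration in seconds. `StepDuration`
// and `normalize` are illustrative names, not the PR's exact items.
enum StepDuration {
    Single(u64),
    Transitions(BTreeMap<u64, u64>),
}

fn normalize(d: StepDuration) -> BTreeMap<u64, u64> {
    match d {
        // A single value becomes a one-entry map starting at block 0,
        // which keeps the old spec format working.
        StepDuration::Single(u) => iter::once((0, u)).collect(),
        StepDuration::Transitions(m) => m,
    }
}

fn main() {
    let single = normalize(StepDuration::Single(5));
    assert_eq!(single.get(&0), Some(&5));

    let multi = normalize(StepDuration::Transitions(
        vec![(0, 5), (150, 13)].into_iter().collect(),
    ));
    assert_eq!(multi.len(), 2);
    println!("ok");
}
```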

@vkomenda vkomenda requested review from phahulin and afck April 12, 2019 18:13
@@ -1224,6 +1250,14 @@ impl Engine<EthereumMachine> for AuthorityRound {
let header = block.header().clone();
let first = header.number() == 0;

if let Some((_, dur)) = self.step_duration.range(0..=header.number()).last() {
Collaborator

Isn't on_prepare_block the wrong place for this? The method is meant to only be called when this node starts to prepare a block. The current implementation calls it more often, but only in nodes with a signing key. With #103, we'd actually only call it when it is our turn.

But even non-validators need to keep track of the step duration, I think? Maybe we should call this in on_close_block, which is called for our own as well as for imported blocks (if I'm not mistaken).

Author

I moved it to on_close_block.
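For context, the `range(0..=header.number()).last()` lookup in the diff above selects the most recent transition at or below the block number. A minimal sketch, with the map type assumed and an illustrative function name:

```rust
use std::collections::BTreeMap;

// Sketch of the lookup in the diff above: the duration in effect for a
// block is the entry with the largest transition number <= the block
// number. `duration_at` is an illustrative name, not the PR's code.
fn duration_at(step_duration: &BTreeMap<u64, u64>, block: u64) -> Option<u64> {
    step_duration.range(0..=block).last().map(|(_, dur)| *dur)
}

fn main() {
    let map: BTreeMap<u64, u64> = vec![(0, 5), (150, 13)].into_iter().collect();
    assert_eq!(duration_at(&map, 149), Some(5));
    assert_eq!(duration_at(&map, 150), Some(13));
    assert_eq!(duration_at(&map, 10_000), Some(13));
    println!("ok");
}
```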

@phahulin phahulin left a comment

The mapping doesn't seem to be supported when parsing the spec from a JSON file.
For spec

{
  "name": "testnetpoa",
  "engine": {
    "authorityRound": {
      "params": {
        "stepDuration": {
                        "0": 5
        },
        "blockReward": "0xDE0B6B3A7640000",
        "maximumUncleCountTransition": 0,
        "maximumUncleCount": 0,
        "validators": {

I get the error:

Loading config file from node1.toml
Spec json is invalid: invalid type: map, expected a hex encoded or decimal uint at line 7 column 3

@afck
Collaborator

afck commented Apr 16, 2019

Does this work?

"stepDuration": {
  "transitions": {
    "0": 5
  }
},

@phahulin

Same error

@varasev

varasev commented Apr 18, 2019

Spec json is invalid: invalid type: map, expected a hex encoded or decimal uint at line 7 column 3

It seems the spec deserializer needs to be updated so that spec.json can support the new syntax for stepDuration.

@vkomenda
Author

It needs the 0x prefix and hexadecimal notation.

@vkomenda
Author

Just try without quotes:

  "engine": {
    "authorityRound": {
      "params": {
          "stepDuration": {
              0: 5
          },
...

It works for me.

@vkomenda vkomenda force-pushed the vk-step-duration-transition branch from e163ed1 to c681342 Compare April 18, 2019 12:59
@vkomenda vkomenda force-pushed the vk-step-duration-transition branch from c681342 to c0482f7 Compare April 18, 2019 13:03
@vkomenda vkomenda requested a review from phahulin April 18, 2019 13:56
@phahulin

Now it works for me in

        "stepDuration": {
                        "0": 5
        },

format

However I did a test with two validator nodes and noticed some strange things. Here is the piece of log:

2019-04-18 16:46:22  IO Worker #0 INFO import  Imported #64 0x5fa2…4bf4 (0 txs, 0.00 Mgas, 4 ms, 0.57 KiB)
2019-04-18 16:46:28  Verifier #0 INFO import  Imported #65 0x9204…baac (0 txs, 0.00 Mgas, 14 ms, 0.57 KiB)
2019-04-18 16:46:30  IO Worker #0 INFO import     1/25 peers   33 KiB chain 55 KiB db 0 bytes queue 43 KiB sync  RPC:  0 conn,    0 req/s,  393 µs
2019-04-18 16:46:37  Verifier #2 INFO import  Imported #66 0xd97d…3c76 (0 txs, 0.00 Mgas, 11 ms, 0.57 KiB)
2019-04-18 16:46:43  IO Worker #0 INFO import  Imported #67 0x2051…9c31 (0 txs, 0.00 Mgas, 6 ms, 0.57 KiB)
2019-04-18 16:46:52  IO Worker #3 INFO import  Imported #68 0x2c27…3325 (0 txs, 0.00 Mgas, 4 ms, 0.57 KiB)
  1. The timing seems to be wrong (blocks should be produced at :00, :05, :10, etc. timestamps).
  2. These blocks are not produced in round-robin order; it's more like 2 by 2:
block 64: author 0x6546ed725e88fa728a908f9ee9d61f50edc40ad6
      65:        0x1a22d96792666863f429a85623e6d4ca173d26ab
      66:        0x1a22d96792666863f429a85623e6d4ca173d26ab
      67:        0x6546ed725e88fa728a908f9ee9d61f50edc40ad6
      68:        0x6546ed725e88fa728a908f9ee9d61f50edc40ad6
      69:        0x1a22d96792666863f429a85623e6d4ca173d26ab
  3. Sometimes blocks are repeated with different hashes (maybe a reorg? but it's not clear from the logs):
2019-04-18 16:56:04  IO Worker #3 INFO import  Imported #130 0x4d13…cb45 (0 txs, 0.00 Mgas, 7 ms, 0.57 KiB)
2019-04-18 16:56:05  IO Worker #1 INFO import     1/25 peers   64 KiB chain 98 KiB db 0 bytes queue 43 KiB sync  RPC:  0 conn,    0 req/s, 2364 µs
2019-04-18 16:56:13  IO Worker #0 INFO import  Imported #131 0x0237…62d4 (0 txs, 0.00 Mgas, 6 ms, 0.57 KiB)
2019-04-18 16:56:23  IO Worker #1 INFO import  Imported #132 0x74ed…dd84 (0 txs, 0.00 Mgas, 5 ms, 0.57 KiB)
2019-04-18 16:56:35  IO Worker #0 INFO import     1/25 peers   64 KiB chain 99 KiB db 0 bytes queue 43 KiB sync  RPC:  0 conn,    0 req/s, 2364 µs
2019-04-18 16:56:36  IO Worker #3 INFO import  Imported #133 0x1b97…7a23 (0 txs, 0.00 Mgas, 12 ms, 0.57 KiB)
2019-04-18 16:56:40  Verifier #0 INFO import  Imported #133 0xe4b7…4e85 (0 txs, 0.00 Mgas, 35 ms, 0.57 KiB)
2019-04-18 16:56:46  IO Worker #2 INFO import  Imported #134 0xda9f…aee8 (0 txs, 0.00 Mgas, 13 ms, 0.57 KiB)
2019-04-18 16:56:48  Verifier #3 INFO import  Imported #135 0x9257…ddd0 (0 txs, 0.00 Mgas, 10 ms, 0.57 KiB)
2019-04-18 16:56:51  Verifier #1 INFO import  Imported #135 0x399b…1e4d (0 txs, 0.00 Mgas, 33 ms, 0.57 KiB) + another 1 block(s) containing 0 tx(s)
  4. Then I added a transition:
	"stepDuration": {
		"0": 5,
		"150": 13
	},

and it stalled:

2019-04-18 16:59:13  IO Worker #3 INFO import  Imported #148 0xfdba…74cd (0 txs, 0.00 Mgas, 6 ms, 0.57 KiB)
2019-04-18 16:59:22  IO Worker #0 INFO import  Imported #149 0x9009…5131 (0 txs, 0.00 Mgas, 4 ms, 0.57 KiB)
2019-04-18 16:59:33  IO Worker #3 INFO import     1/25 peers   142 KiB chain 113 KiB db 0 bytes queue 15 KiB sync  RPC:  0 conn,    0 req/s, 2549 µs
2019-04-18 17:00:03  IO Worker #0 INFO import     1/25 peers   142 KiB chain 113 KiB db 0 bytes queue 15 KiB sync  RPC:  0 conn,    0 req/s, 2549 µs
2019-04-18 17:00:33  IO Worker #3 INFO import     1/25 peers   142 KiB chain 113 KiB db 0 bytes queue 15 KiB sync  RPC:  0 conn,    0 req/s, 2549 µs
2019-04-18 17:01:03  IO Worker #1 INFO import     1/25 peers   142 KiB chain 113 KiB db 0 bytes queue 15 KiB sync  RPC:  0 conn,    0 req/s, 2549 µs
2019-04-18 17:01:33  IO Worker #3 INFO import     1/25 peers   142 KiB chain 113 KiB db 0 bytes queue 15

@vkomenda
Author

vkomenda commented Apr 18, 2019

You can still try without the quotes:

No, you are right, quotes are needed. It should be in the format

"0": 5

I'm not sure about (1) because in my case blocks are produced at 5 second intervals:

2019-04-18 15:19:25  Verifier #1 INFO import  Imported #1 0x2783…2ab1 (1 txs, 0.23 Mgas, 71 ms, 0.91 KiB)
2019-04-18 15:19:30  IO Worker #3 INFO import  Imported #2 0xedd0…5972 (1 txs, 0.21 Mgas, 14 ms, 0.91 KiB)
2019-04-18 15:19:30  IO Worker #3 TRACE miner  update_sealing: imported internally sealed block
2019-04-18 15:19:35  Verifier #5 INFO import  Imported #3 0x8f87…99d9 (1 txs, 0.21 Mgas, 78 ms, 0.91 KiB)
2019-04-18 15:19:40  Verifier #1 INFO import  Imported #4 0x395f…f08c (0 txs, 0.00 Mgas, 52 ms, 0.55 KiB)
2019-04-18 15:19:45  IO Worker #0 INFO import  Imported #5 0x516a…36b0 (1 txs, 3.71 Mgas, 112 ms, 14.41 KiB)
2019-04-18 15:19:45  IO Worker #0 TRACE miner  update_sealing: imported internally sealed block
2019-04-18 15:19:50  Verifier #3 INFO import  Imported #6 0x93e7…5ea9 (1 txs, 0.04 Mgas, 90 ms, 0.68 KiB)
2019-04-18 15:19:55  Verifier #1 INFO import  Imported #7 0xfa07…d462 (1 txs, 0.04 Mgas, 91 ms, 0.68 KiB)
2019-04-18 15:20:00  IO Worker #1 INFO import  Imported #8 0xd6aa…f4d9 (1 txs, 0.05 Mgas, 56 ms, 0.68 KiB)
2019-04-18 15:20:00  IO Worker #1 TRACE miner  update_sealing: imported internally sealed block
2019-04-18 15:20:05  Verifier #3 INFO import  Imported #9 0x8213…80d2 (0 txs, 0.00 Mgas, 59 ms, 0.55 KiB)
...
2019-04-18 15:25:35  Verifier #4 INFO import  Imported #75 0xb355…d6b4 (0 txs, 0.00 Mgas, 35 ms, 0.55 KiB)
2019-04-18 15:25:40  Verifier #2 INFO import  Imported #76 0x4087…84f6 (0 txs, 0.00 Mgas, 54 ms, 0.55 KiB)
2019-04-18 15:25:45  IO Worker #1 INFO import  Imported #77 0x660a…6bcc (0 txs, 0.00 Mgas, 21 ms, 0.55 KiB)
2019-04-18 15:25:45  IO Worker #1 TRACE miner  update_sealing: imported internally sealed block
2019-04-18 15:25:50  Verifier #4 INFO import  Imported #78 0xf80d…caba (0 txs, 0.00 Mgas, 43 ms, 0.55 KiB)
2019-04-18 15:25:55  Verifier #2 INFO import  Imported #79 0x9d0e…75cb (0 txs, 0.00 Mgas, 93 ms, 0.55 KiB)
2019-04-18 15:26:00  IO Worker #1 INFO import  Imported #80 0x68dc…a8a1 (0 txs, 0.00 Mgas, 14 ms, 0.55 KiB)
2019-04-18 15:26:00  IO Worker #1 TRACE miner  update_sealing: imported internally sealed block
2019-04-18 15:26:05  Verifier #5 INFO import  Imported #81 0xecea…1b47 (2 txs, 0.31 Mgas, 121 ms, 1.01 KiB)

@phahulin

I also find it strange. Do you get correct timestamps after several restarts?
I checked that the time on my local machine is synchronized. Also, with the original Parity (the one from paritytech), timestamps are correct.

@vkomenda
Author

There is a problem with npm run all which is not suitable for using a customised spec.json. But I always had 5-second intervals until the stall at the first step duration transition. I have the same stalling problem. Block authors seem OK but maybe my tests didn't cover the right conditions for things to go wrong.

@vkomenda
Author

I tested other intervals successfully before the first change of duration. Then it always stalls.

@varasev

varasev commented Apr 18, 2019

I also tried to launch it with posdao-test-setup with "stepDuration": {"0": 10} - works fine for me.

@varasev

varasev commented Apr 18, 2019

This one also makes it stall:

"stepDuration": {
  "0": 5,
  "15": 10
}

@vkomenda
Author

Anything with more than one step duration will make it stall. I tried to move the step duration update from on_prepare_block to on_close_block as suggested by @afck but it's still stalling at the moment.

@varasev

varasev commented Apr 18, 2019

@phahulin I tried to launch it with test-block-reward and I see almost the same picture as yours:

2019-04-18 18:46:02  Imported #14 0xb65b…46d9 (1 txs, 0.02 Mgas, 23 ms, 0.66 KiB)
2019-04-18 18:46:07  Imported #15 0xf8dd…be09 (0 txs, 0.00 Mgas, 3 ms, 0.55 KiB)
2019-04-18 18:46:12  Imported #16 0x0a88…8c02 (2 txs, 0.04 Mgas, 22 ms, 0.76 KiB)
2019-04-18 18:46:17  Imported #17 0x7a68…0a57 (0 txs, 0.00 Mgas, 3 ms, 0.55 KiB)
2019-04-18 18:46:22  Imported #18 0xc59f…1676 (1 txs, 0.02 Mgas, 22 ms, 0.66 KiB)
2019-04-18 18:46:27  Imported #19 0x770f…25b6 (1 txs, 0.02 Mgas, 4 ms, 0.66 KiB)
2019-04-18 18:46:29     1/25 peers     14 KiB chain   52 KiB db  0 bytes queue   16 KiB sync  RPC:  0 conn,    0 req/s,    0 µs
2019-04-18 18:46:32  Imported #20 0x7357…d879 (2 txs, 0.04 Mgas, 23 ms, 0.76 KiB)
2019-04-18 18:46:37  Imported #21 0x4cdd…925a (0 txs, 0.00 Mgas, 4 ms, 0.55 KiB)
2019-04-18 18:46:42  Imported #22 0x30a6…c22a (1 txs, 0.02 Mgas, 20 ms, 0.66 KiB)
2019-04-18 18:46:47  Imported #23 0x1287…05a5 (1 txs, 0.02 Mgas, 4 ms, 0.66 KiB)
2019-04-18 18:46:52  Imported #24 0xb76c…629c (1 txs, 0.02 Mgas, 23 ms, 0.66 KiB)
2019-04-18 18:46:57  Imported #25 0x1b39…3a09 (1 txs, 0.02 Mgas, 4 ms, 0.66 KiB)

I think the reason is that our modified Parity can't call the Random contract when we use the spec from the test-block-reward repo, so it produces each block a bit earlier than it should. It seems better to test it with the posdao-test-setup.

@varasev

varasev commented Apr 18, 2019

Anything with more than one step duration will make it stall. I tried to move the step duration update from on_prepare_block to on_close_block as suggested by @afck but it's still stalling at the moment.

Also, there are no log messages that say anything about the error.

@vkomenda
Author

I added logging in the latest commit.

@varasev

varasev commented Apr 18, 2019

This case works, though :)

"stepDuration": {
  "0": 5,
  "15": 5
}

@vkomenda
Author

It's still the same duration though.

@afck
Collaborator

afck commented Apr 18, 2019

It might be useful to print the calculated step numbers (if there is such a thing); I still don't have a good overview of how the steps are computed. If there's any other place where we just say something like
"step number = time since the epoch / step duration"
things would obviously go wrong if the duration is changed.

@@ -144,7 +162,7 @@ impl Step {
let now = unix_now();
let expected_seconds = self.load()
.checked_add(1)
.and_then(|ctr| ctr.checked_mul(self.duration as u64))
.and_then(|ctr| ctr.checked_mul(self.duration.load(AtomicOrdering::SeqCst) as u64))
Collaborator
@afck afck Apr 25, 2019

I think this line is the main problem:
With a constant step duration, step number n just ended n * duration seconds after the beginning of 1970. Now, with a variable step duration, we'd have to count the number of steps up to the latest duration change, and then continue counting from there, with the new duration!
It almost looks like it would be easier to change the duration at a specified time (or step number) instead of a block number. Then at least Step wouldn't have to know about a particular block's timestamp, and could make the computation based on the step duration map only.
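A worked example of the arithmetic above, plugging in the numbers from the stall report (5-second duration changing to 13 seconds at transition 150):

```rust
// Worked example of the point above. With the constant-duration formula,
// step n ends (n + 1) * duration seconds after the Unix epoch; swapping
// in a new duration makes the deadline jump far ahead instead of
// advancing by one step, which would look like a stall.
fn naive_step_end(step: u64, duration: u64) -> u64 {
    (step + 1) * duration
}

fn main() {
    // With duration 5, step 149 ends at t = 750s.
    assert_eq!(naive_step_end(149, 5), 750);
    // Naively applying the new 13s duration to step 150 puts its end at
    // t = 1963s rather than 750 + 13 = 763s: a wait of about 20 minutes.
    assert_eq!(naive_step_end(150, 13), 1963);
    println!("ok");
}
```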

Author

We could compute the time by adding up all preceding (num_blocks * duration). But that wouldn't work if (or rather when) blocks appear that take longer than duration because of excess load.

Collaborator

Step duration and block time don't necessarily coincide, and it looks like the step number computation is independent of actual blocks (in particular if the network wasn't started in 1970…); we should probably keep it that way and just compute it based on the table of step duration changes.
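One way to sketch that suggestion: compute the step piecewise from a table of (start time, duration) segments, with no reference to blocks at all. The names and layout below are assumptions, not the PR's code.

```rust
// Hypothetical sketch of computing the current step from a table of
// duration changes keyed by timestamp: count the full steps in each
// closed segment, then divide the remaining time by the last duration.
// Assumes a non-empty schedule sorted by start time, first entry at t = 0.
fn step_at(schedule: &[(u64, u64)], now_secs: u64) -> u64 {
    let mut step = 0;
    for pair in schedule.windows(2) {
        let (start, dur) = pair[0];
        let end = pair[1].0;
        if now_secs < end {
            return step + (now_secs - start) / dur;
        }
        step += (end - start) / dur;
    }
    let (start, dur) = *schedule.last().expect("schedule is non-empty");
    step + now_secs.saturating_sub(start) / dur
}

fn main() {
    // 5s steps until t = 750 (i.e. 150 full steps), then 13s steps.
    let schedule = [(0, 5), (750, 13)];
    assert_eq!(step_at(&schedule, 100), 20);
    assert_eq!(step_at(&schedule, 750), 150);
    assert_eq!(step_at(&schedule, 763), 151);
    println!("ok");
}
```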

@@ -1220,7 +1226,7 @@ impl Engine<EthereumMachine> for AuthorityRound {

/// Change the next step duration if necessary and apply the block reward on finalisation of the block.
fn on_close_block(&self, block: &mut ExecutedBlock) -> Result<(), Error> {
self.update_step_duration(block.header().number() + 1);
self.update_step_duration(block.header().number());
Collaborator

I don't really understand this commit, but won't that still compute the step the wrong way in Step::duration_remaining? E.g. let's say we have step duration 1 second and are 100 seconds into 1970, i.e. we're at step 100. Now we change the step duration to 3 seconds and then increment the step. So it will change its step number to 101, and Step::duration_remaining will think that step 101 lasts until the 303rd second of 1970, making the step last more than 200 seconds?

Author

I wanted the step duration update to happen between steps, so that the duration of the current step is not changed, as it would be if the duration were changed directly.

let next_dur = self.next_duration.load(AtomicOrdering::SeqCst);
let cur_dur = self.duration.swap(next_dur, AtomicOrdering::SeqCst);
if cur_dur != next_dur {
let starting_sec = unix_now().as_secs() as usize;
Collaborator

I think it would be safer to use the block's timestamp instead of our local time here, so the validators' step counting doesn't go out of sync.
(And I still feel it would be even safer to just configure the step duration changes by time instead of by block.)

Author

I think we can make it preserve the synchrony across step duration changes purely by time and the number of steps since the last step duration change. I don't think getting the block timestamp would be necessary; in any case it would not help to restore synchrony in this case.

Author

My suggested solution is in c025e11.

let next_step = (
(unix_now().as_secs() as usize - starting_sec) /
(self.duration.load(AtomicOrdering::SeqCst) as usize)
) + starting_step;
Collaborator

I'm still concerned that this value is going to be different in two validators if one of them handles the block earlier than the other. Won't that cause them to go out of sync?

Author

I think this is possible too, although the likelihood is extremely small. As a remedy I've been considering linking the duration update to a step number, or scheduling an update one block ahead. If the update is already scheduled in a previous on_close_block, there will be no race conditions at the time of the step duration change, because it will be done in Step::increment. But in that case we must forbid changing the step duration without any blocks in between.

Collaborator

linking the duration update with a step number

Yes, I think that's the right approach. But making it a timestamp might be even more convenient: Then you could just specify the time of the change in the spec.

Author

Would you prefer having timestamps in the spec rather than block numbers? Then we'd have to map timestamps to block numbers.

Collaborator

I don't think we'd ever have to map them at all: The step duration would just change at the steps corresponding to those timestamps, independent of any block numbers.

Collaborator

Sure, but we can just define that they only affect the following step.
I think that's still more convenient than having to calculate and specify a step number.

Author
@vkomenda vkomenda Apr 30, 2019

For the time being I implemented a simpler modification with step numbers. If needed, it can be further extended to timestamps. However, the only difference is that the step numbers would no longer need to be calculated when writing the spec. There is another issue with calibrate which needs to be fixed in either case.

Author

By the way, using steps (or block numbers) or timestamps in the spec seems complementary, because internally we have to compute both in order to synchronise blocks. So if we used timestamps in the spec, there would be internal synchronisation of block numbers, similar to the step number routine. If using timestamps only improves user convenience, I'd prefer to stay with block numbers for now and maybe convert to timestamps later if needed.

Collaborator

Yes, I guess so. But if we used block numbers, I think we'd have to use the block's timestamp to change the step number, to guarantee that the nodes don't go out of sync. And that might complicate things more because our internal step counter could already have progressed further in the meantime. And what if there's a reorg that undoes a block that changed the step duration?

Author
@vkomenda vkomenda Apr 30, 2019

Following d170486, step numbers coincide with block numbers. The initial_step holdover has been removed.

StepDuration::Single(u) => {
let mut durs: BTreeMap<u64, u16> = BTreeMap::new();
durs.insert(0, map_step_duration(u));
durs
Collaborator

Maybe simpler:

iter::once((0, map_step_duration(u))).collect()

duration: u16,
inner: AtomicU64,
/// Duration of the current step.
current_duration: AtomicU16,
Collaborator

Indentation is off. (Spaces/tabs…)

if let Some(&next_dur) = self.durations.get(&next_step) {
let prev_dur = *self.durations.range(0 .. next_step).last().expect("step duration map is empty").1;
let prev_starting_sec = self.starting_sec.load(AtomicOrdering::SeqCst);
let prev_starting_step = self.starting_step.load(AtomicOrdering::SeqCst);
Collaborator

I wonder whether we need to put the whole Step in a Mutex instead of making its fields atomic individually: Things probably go wrong if some of those fields are mutated while increment is running?

Author

This is the only place where these fields are modified apart from the constructor. I agree though that if we start mutating them elsewhere we need an atomic lock on them.

Collaborator

But I'm not sure it's even guaranteed that no two instances of increment run at the same time. 😬
And even if it is, it would be safer in a single Mutex (or RwLock?), in case future modifications change this.

Author

I think the Atomic stuff was only supposed to help mutating fields of a Step that's passed by an immutable reference. Synchronisation is simulated by calibrate.

Collaborator

Yes, but if the Step itself were passed as an Arc<Mutex<Step>>, that interior mutability wouldn't be necessary.
(I feel like this pattern—interior mutability of fields—is overused in Parity: It creates the illusion of thread-safety.)
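A minimal sketch of that suggestion, with illustrative field names: behind a single lock, `increment` can update the step number and duration together, and the per-field atomics become unnecessary.

```rust
use std::sync::{Arc, Mutex};

// Sketch of the Arc<Mutex<Step>> suggestion; `inner` and `duration` are
// illustrative fields, not the exact ones in the PR.
struct Step {
    inner: u64,
    duration: u64,
}

impl Step {
    // With &mut self, the step number and duration are updated together;
    // no concurrent `increment` can interleave while the lock is held.
    fn increment(&mut self, next_duration: Option<u64>) {
        self.inner += 1;
        if let Some(dur) = next_duration {
            self.duration = dur;
        }
    }
}

fn main() {
    let step = Arc::new(Mutex::new(Step { inner: 149, duration: 5 }));
    step.lock().unwrap().increment(Some(13));
    let s = step.lock().unwrap();
    assert_eq!((s.inner, s.duration), (150, 13));
    println!("ok");
}
```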

calibrate: our_params.start_step.is_none(),
duration: our_params.step_duration,
current_duration: AtomicU16::new(duration),
Collaborator

(Also indentation.)

let engine = Arc::new(
AuthorityRound {
transition_service: IoService::<()>::start()?,
step: Arc::new(PermissionedStep {
inner: Step {
inner: AtomicU64::new(initial_step),
inner: AtomicU64::new(0),
Collaborator

Shouldn't we compute the actual current step number on startup?

Author

Actually another point is true: there is a configurable start_step which got thrown out with the bathwater. I have to take it into account in order not to break the spec.
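For reference, with a constant duration the calibrated starting step is simply wall-clock seconds since the epoch divided by the duration (the `calibrate` path); with transitions, the schedule would have to be walked instead. A sketch under that assumption:

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Sketch (assumption, not the PR's exact code): the calibrated step for
// a constant duration is seconds-since-epoch / duration. A configurable
// start_step would bypass this and use the given value directly.
fn calibrated_step(duration_secs: u64) -> u64 {
    let now = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock is after 1970")
        .as_secs();
    now / duration_secs
}

fn main() {
    // Any post-1970 clock yields a positive step for a 5-second duration.
    assert!(calibrated_step(5) > 0);
    println!("ok");
}
```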

@varasev

varasev commented Jun 24, 2019

Done in #152.

@varasev varasev closed this Jun 24, 2019

Successfully merging this pull request may close these issues.

Changing the value of stepDuration option in spec.json for AuRa
4 participants