Merge pull request #3682 from akkadotnet/dev
Akka.NET v1.3.11 stable release
Aaronontheweb authored Dec 18, 2018
2 parents 4dbb9da + a674f17 commit f7d5b6e
Showing 36 changed files with 288 additions and 89 deletions.
26 changes: 26 additions & 0 deletions RELEASE_NOTES.md
@@ -1,3 +1,29 @@
#### 1.3.11 December 17 2018 ####
**Maintenance Release for Akka.NET 1.3**

Akka.NET v1.3.11 is a bugfix patch primarily aimed at solving the following issue: [DotNetty Remote Transport Issues with .NET Core 2.1](https://github.com/akkadotnet/akka.net/issues/3506).

.NET Core 2.1 exposed some issues with the DotNetty connection methods in DotNetty v0.4.8 that have since been fixed in subsequent releases. In Akka.NET v1.3.11 we've resolved this issue by upgrading to DotNetty v0.6.0.

In addition to the above, we've introduced some additional fixes and changes in Akka.NET v1.3.11:

* [Akka.FSharp: Akka.Fsharp spawning an actor results in Exception](https://github.com/akkadotnet/akka.net/issues/3402)
* [Akka.Remote: tcp-reuse-addr = off-for-windows prevents actorsystem from starting](https://github.com/akkadotnet/akka.net/issues/3293)
* [Akka.Remote: tcp socket address reuse - default configuration](https://github.com/akkadotnet/akka.net/issues/2477)
* [Akka.Cluster.Tools: Actor still receiving messages from mediator after termination](https://github.com/akkadotnet/akka.net/issues/3658)
* [Akka.Persistence: Provide minSequenceNr for snapshot deletion](https://github.com/akkadotnet/akka.net/pull/3641)

To [see the full set of changes for Akka.NET 1.3.11, click here](https://github.com/akkadotnet/akka.net/milestone/29)

| COMMITS | LOC+ | LOC- | AUTHOR |
| --- | --- | --- | --- |
| 5 | 123 | 71 | Aaron Stannard |
| 3 | 96 | 10 | Ismael Hamed |
| 2 | 4 | 3 | Oleksandr Kobylianskyi |
| 1 | 5 | 1 | Ruben Mamo |
| 1 | 23 | 6 | Chris Hoare |

#### 1.3.10 November 1 2018 ####
**Maintenance Release for Akka.NET 1.3**

2 changes: 1 addition & 1 deletion build.fsx
@@ -29,7 +29,7 @@ let outputBinariesNet45 = outputBinaries @@ "net45"
let outputBinariesNetStandard = outputBinaries @@ "netstandard1.6"

let buildNumber = environVarOrDefault "BUILD_NUMBER" "0"
let preReleaseVersionSuffix = "beta" + (if (not (buildNumber = "0")) then (buildNumber) else "")
let preReleaseVersionSuffix = "beta" + (if (not (buildNumber = "0")) then (buildNumber) else DateTime.UtcNow.Ticks.ToString())
let versionSuffix =
match (getBuildParam "nugetprerelease") with
| "dev" -> preReleaseVersionSuffix
12 changes: 6 additions & 6 deletions docs/articles/actors/coordinated-shutdown.md
@@ -122,14 +122,14 @@ Tasks should be registered as early as possible, preferably at system startup, i
## Running `CoordinatedShutdown`
There are a few different ways to start the `CoordinatedShutdown` process.

If you wish to execute the `CoordinatedShutdown` yourself, you can simply call `CoordinatedShutdown.Run()`, which will return a `Task<Done>`.
If you wish to execute the `CoordinatedShutdown` yourself, you can simply call `CoordinatedShutdown.Run(CoordinatedShutdown.Reason)`, which takes a [`CoordinatedShutdown.Reason`](/api/Akka.Actor.CoordinatedShutdown.Reason.html) argument and will return a `Task<Done>`.

```csharp
CoordinatedShutdown.Get(myActorSystem).Run();
```
[!code-csharp[CoordinatedShutdownSpecs.cs](../../examples/DocsExamples/Actors/CoordinatedShutdownSpecs.cs?name=coordinated-shutdown-builtin)]

It's safe to call this method multiple times as the shutdown process will only be run once and will return the same completion task each time. The `Task<Done>` will complete once all phases have run successfully, or a phase with `recover = off` failed.

> It's possible to subclass the `CoordinatedShutdown.Reason` type and pass in a custom implementation which includes custom properties and data. This data is accessible inside the shutdown phases themselves via the [`CoordinatedShutdown.ShutdownReason` property](/api/Akka.Actor.CoordinatedShutdown.html#Akka_Actor_CoordinatedShutdown_ShutdownReason).
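
A minimal sketch of that idea is shown below; the `MaintenanceReason` type, its `Ticket` property, and the wrapper class are hypothetical names used only for illustration:

```csharp
using System.Threading.Tasks;
using Akka.Actor;

// Hypothetical custom reason carrying extra data about why the system is shutting down.
public sealed class MaintenanceReason : CoordinatedShutdown.Reason
{
    public MaintenanceReason(string ticket)
    {
        Ticket = ticket;
    }

    public string Ticket { get; }
}

public static class ShutdownExample
{
    public static async Task ShutDownForMaintenance(ActorSystem system)
    {
        // Run() is idempotent: repeated calls return the same completion task.
        await CoordinatedShutdown.Get(system).Run(new MaintenanceReason("OPS-1234"));

        // The reason that triggered the shutdown is cached and can be inspected afterwards.
        var reason = CoordinatedShutdown.Get(system).ShutdownReason as MaintenanceReason;
    }
}
```
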
### Automatic `ActorSystem` and Process Termination
By default, when the final phase of the `CoordinatedShutdown` executes the calling `ActorSystem` will be terminated. However, the CLR process will still be running even though the `ActorSystem` has been terminated.

@@ -148,9 +148,9 @@ If you're using Akka.Cluster, the `CoordinatedShutdown` will automatically regis
2. Gracefully handing over / terminating ClusterSingleton and Cluster.Sharding instances; and
3. Terminating the `Cluster` system itself.

By default, this graceful leave action will be triggered whenever the `CoordinatedShutdown.Run()` method is called. Conversely, calling `Cluster.Leave` on a cluster member will also cause the `CoordinatedShutdown` to run and will terminate the `ActorSystem` once the node has left the cluster.
By default, this graceful leave action will be triggered whenever the `CoordinatedShutdown.Run(Reason)` method is called. Conversely, calling `Cluster.Leave` on a cluster member will also cause the `CoordinatedShutdown` to run and will terminate the `ActorSystem` once the node has left the cluster.
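
For example, a node can take itself out of the cluster gracefully by calling `Cluster.Leave` with its own address. The following is a minimal sketch, assuming an existing `ActorSystem` with Akka.Cluster enabled; the wrapper class name is illustrative:

```csharp
using System.Threading.Tasks;
using Akka.Actor;
using Akka.Cluster;

public static class GracefulLeaveExample
{
    public static async Task LeaveCluster(ActorSystem system)
    {
        // Ask the cluster extension to gracefully remove this node.
        // CoordinatedShutdown runs automatically once the node has left,
        // and the ActorSystem is terminated as part of its final phases.
        var cluster = Cluster.Get(system);
        cluster.Leave(cluster.SelfAddress);

        // Wait until the ActorSystem has finished terminating.
        await system.WhenTerminated;
    }
}
```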

By default, `CoordinatedShutdown.Run()` will also be executed if a node is removed via `Cluster.Down` (non-graceful exit), but this can be disabled by changing the following Akka.Cluster HOCON setting:
`CoordinatedShutdown.Run()` will also be executed if a node is removed via `Cluster.Down` (non-graceful exit), but this can be disabled by changing the following Akka.Cluster HOCON setting:

```
akka.cluster.run-coordinated-shutdown-when-down = off
```
3 changes: 2 additions & 1 deletion docs/articles/intro/tutorial-1.md
@@ -215,6 +215,7 @@ After running the snippet, we see the following output on the console:
supervised actor started
supervised actor fails now
supervised actor stopped
supervised actor started
[ERROR][05.06.2017 13:34:50][Thread 0003][akka://testSystem/user/supervising-actor/supervised-actor] I failed!
Cause: System.Exception: I failed!
at Tutorials.Tutorial1.SupervisedActor.OnReceive(Object message)
@@ -285,4 +286,4 @@ In the following chapters we will grow the application step-by-step:

1. We will create the representation for a device
2. We create the device management component
3. We add query capabilities to device groups
3. We add query capabilities to device groups
4 changes: 2 additions & 2 deletions docs/articles/intro/tutorial-2.md
@@ -154,8 +154,8 @@ our device actor:
We maintain the current temperature, initially set to `null`, and we simply report it back if queried. We also
added fields for the ID of the device and the group it belongs to, which we will use later.

We can already write a simple test for this functionality @scala[(we use ScalaTest but any other test framework can be
used with the Akka.NET Testkit)]:
We can already write a simple test for this functionality (we are using xUnit but any other test framework can be
used with the Akka.NET Testkit):

[!code-csharp[Main](../../examples/Tutorials/Tutorial2/DeviceSpec.cs?name=device-read-test)]
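
The shape of such a test is roughly the following sketch; it assumes the tutorial's `Device.Props(groupId, deviceId)` factory and its nested `ReadTemperature`/`RespondTemperature` message types, so adjust the names to match your own code:

```csharp
using Akka.TestKit.Xunit2;
using Xunit;

public class DeviceSpec : TestKit
{
    [Fact]
    public void Device_actor_must_reply_with_empty_reading_if_no_temperature_is_known()
    {
        var probe = CreateTestProbe();
        var deviceActor = Sys.ActorOf(Device.Props("group", "device"));

        // Query the device; the probe acts as the sender of the request.
        deviceActor.Tell(new Device.ReadTemperature(42), probe.Ref);

        // The device should answer with the same request id and no temperature yet.
        var response = probe.ExpectMsg<Device.RespondTemperature>();
        Assert.Equal(42, response.RequestId);
        Assert.Null(response.Value);
    }
}
```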

5 changes: 4 additions & 1 deletion docs/articles/persistence/snapshots.md
@@ -45,7 +45,10 @@ If not specified, they default to `SnapshotSelectionCriteria.Latest` which selec

A persistent actor can delete individual snapshots by calling the `DeleteSnapshot` method with the sequence number at which the snapshot was taken.

To bulk-delete a range of snapshots matching `SnapshotSelectionCriteria`, persistent actors should use the `DeleteSnapshots` method.
To bulk-delete a range of snapshots matching `SnapshotSelectionCriteria`,
persistent actors should use the `DeleteSnapshots` method. Depending on the journal used, this might be inefficient. It is
best practice to do specific deletes with `DeleteSnapshot` or to include a `minSequenceNr` as well as a `maxSequenceNr`
for the `SnapshotSelectionCriteria`.
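
The following is a minimal sketch of such a bounded delete, run from inside a persistent actor (the actor name and the 100-snapshot retention window are purely illustrative):

```csharp
using System;
using Akka.Persistence;

public class ExampleSnapshotActor : UntypedPersistentActor
{
    public override string PersistenceId => "example-snapshot-actor";

    protected override void OnCommand(object message)
    {
        if (message is SaveSnapshotSuccess success)
        {
            // Keep a window of recent snapshots instead of deleting everything
            // up to the newest one: bound the delete with both a minimum and a
            // maximum sequence number.
            var maxSequenceNr = success.Metadata.SequenceNr - 1;
            var minSequenceNr = Math.Max(0L, maxSequenceNr - 100);

            DeleteSnapshots(new SnapshotSelectionCriteria(
                maxSequenceNr,
                DateTime.MaxValue,
                minSequenceNr));
        }
    }

    protected override void OnRecover(object message) { }
}
```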

## Snapshot status handling
Saving or deleting snapshots can either succeed or fail – this information is reported back to the persistent actor via status messages as illustrated in the following table.
20 changes: 20 additions & 0 deletions docs/articles/persistence/storage-plugins.md
@@ -16,3 +16,23 @@ Snapshot store is a specialized type of actor which exposes an API to handle inc

[!code-json[Main](../../../src/core/Akka.Persistence/persistence.conf#L204-L219)]

### Eager initialization of persistence plugin

By default, persistence plugins are started on-demand, as they are used. In some cases, however, it might be beneficial to start a certain plugin eagerly. In order to do that, specify the IDs of plugins you wish to start automatically under `akka.persistence.journal.auto-start-journals` and `akka.persistence.snapshot-store.auto-start-snapshot-stores`.

For example, if you want eager initialization for the sqlite journal and snapshot store plugin, your configuration should look like this:

```
akka {
persistence {
journal {
plugin = "akka.persistence.journal.sqlite"
auto-start-journals = ["akka.persistence.journal.sqlite"]
}
snapshot-store {
plugin = "akka.persistence.snapshot-store.sqlite"
auto-start-snapshot-stores = ["akka.persistence.snapshot-store.sqlite"]
}
}
}
```
2 changes: 1 addition & 1 deletion docs/docfx.json
@@ -66,7 +66,7 @@
"_appTitle": "Akka.NET Documentation",
"_appLogoPath": "/images/akkalogo.png",
"_appFaviconPath": "/images/favicon.ico",
"_appFooter": "Copyright © 2013-2017 Akka.NET project<br>Generated by <strong>DocFX</strong>",
"_appFooter": "Copyright © 2013-2018 Akka.NET project<br>Generated by <strong>DocFX</strong>",
"_enableSearch": "true"
},
"dest": "_site",
37 changes: 37 additions & 0 deletions docs/examples/DocsExamples/Actors/CoordinatedShutdownSpecs.cs
@@ -0,0 +1,37 @@
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Akka.Actor;
using FluentAssertions;
using Xunit;

namespace DocsExamples.Actors
{
public class CoordinatedShutdownSpecs
{
[Fact]
public async Task CoordinatedShutdownBuiltInReason()
{
#region coordinated-shutdown-builtin
var actorSystem = ActorSystem.Create("MySystem");

// shutdown with reason "CLR exit" - meaning the process was being terminated
// task completes once node has left cluster and terminated the ActorSystem
Task shutdownTask = CoordinatedShutdown.Get(actorSystem)
.Run(CoordinatedShutdown.ClrExitReason.Instance);
await shutdownTask;

// shutdown reason gets cached here.
// The `Reason` type can be subclassed with custom properties if needed
CoordinatedShutdown.Get(actorSystem).ShutdownReason.Should()
.Be(CoordinatedShutdown.ClrExitReason.Instance);

#endregion


actorSystem.WhenTerminated.IsCompleted.Should().BeTrue();
}
}
}
1 change: 0 additions & 1 deletion docs/examples/DocsExamples/DocsExamples.csproj
@@ -16,7 +16,6 @@

<ItemGroup>
<PackageReference Include="FluentAssertions" Version="4.19.2" />
<PackageReference Include="System.Collections.Immutable" Version="1.3.1" />
<PackageReference Include="System.ValueTuple" Version="4.3.0" />
<PackageReference Include="xunit" Version="$(XunitVersion)" />
<PackageReference Include="xunit.runner.visualstudio" Version="$(XunitVersion)" />
4 changes: 4 additions & 0 deletions docs/index.md
@@ -16,6 +16,10 @@ img.main-logo{
h2:before{
display: none;
}
.featured-box-minimal h4:before {
height: 0px;
margin-top: 0px;
}
</style>

<div class="jumbotron">
2 changes: 1 addition & 1 deletion src/benchmark/Akka.Benchmarks/Akka.Benchmarks.csproj
@@ -8,7 +8,7 @@
<ItemGroup>
<PackageReference Include="BenchmarkDotNet" Version="0.10.14" />
<PackageReference Include="Newtonsoft.Json" Version="11.0.2" />
<PackageReference Include="System.Collections.Immutable" Version="1.4.0" />
<PackageReference Include="System.Collections.Immutable" Version="1.5.0" />
</ItemGroup>

<ItemGroup>
43 changes: 17 additions & 26 deletions src/common.props
@@ -2,7 +2,7 @@
<PropertyGroup>
<Copyright>Copyright © 2013-2018 Akka.NET Team</Copyright>
<Authors>Akka.NET Team</Authors>
<VersionPrefix>1.3.10</VersionPrefix>
<VersionPrefix>1.3.11</VersionPrefix>
<PackageIconUrl>http://getakka.net/images/akkalogo.png</PackageIconUrl>
<PackageProjectUrl>https://github.com/akkadotnet/akka.net</PackageProjectUrl>
<PackageLicenseUrl>https://github.com/akkadotnet/akka.net/blob/master/LICENSE</PackageLicenseUrl>
@@ -11,39 +11,30 @@
<PropertyGroup>
<XunitVersion>2.3.1</XunitVersion>
<TestSdkVersion>15.7.2</TestSdkVersion>
<HyperionVersion>0.9.8</HyperionVersion>
<AkkaPackageTags>akka;actors;actor model;Akka;concurrency</AkkaPackageTags>
</PropertyGroup>
<PropertyGroup>
<CopyLocalLockFileAssemblies>true</CopyLocalLockFileAssemblies>
</PropertyGroup>
<PropertyGroup>
<PackageReleaseNotes>Maintenance Release for Akka.NET 1.3**
Akka.NET v1.3.10 consists mostly of bug fixes and patches to various parts of Akka.NET:
[Akka.Remote: add support for using installed certificates with thumbprints](https://github.com/akkadotnet/akka.net/issues/3632)
[Akka.IO: fix TCP sockets leak](https://github.com/akkadotnet/akka.net/issues/3630)
[Akka.DI.Core: Check if Dependency Resolver is configured to avoid a `NullReferenceException`](https://github.com/akkadotnet/akka.net/pull/3619)
[Akka.Streams: Interop between Akka.Streams and IObservable](https://github.com/akkadotnet/akka.net/pull/3112)
[HOCON: Parse size in bytes format. Parse microseconds and nanoseconds.](https://github.com/akkadotnet/akka.net/pull/3600)
[Akka.Cluster: Don't automatically down quarantined nodes](https://github.com/akkadotnet/akka.net/pull/3605)
To [see the full set of changes for Akka.NET 1.3.10, click here](https://github.com/akkadotnet/akka.net/milestone/28).
Akka.NET v1.3.11 is a bugfix patch primarily aimed at solving the following issue: [DotNetty Remote Transport Issues with .NET Core 2.1](https://github.com/akkadotnet/akka.net/issues/3506).
.NET Core 2.1 exposed some issues with the DotNetty connection methods in DotNetty v0.4.8 that have since been fixed in subsequent releases. In Akka.NET v1.3.11 we've resolved this issue by upgrading to DotNetty v0.6.0.
In addition to the above, we've introduced some additional fixes and changes in Akka.NET v1.3.11:
[Akka.FSharp: Akka.Fsharp spawning an actor results in Exception](https://github.com/akkadotnet/akka.net/issues/3402)
[Akka.Remote: tcp-reuse-addr = off-for-windows prevents actorsystem from starting](https://github.com/akkadotnet/akka.net/issues/3293)
[Akka.Remote: tcp socket address reuse - default configuration](https://github.com/akkadotnet/akka.net/issues/2477)
[Akka.Cluster.Tools: Actor still receiving messages from mediator after termination](https://github.com/akkadotnet/akka.net/issues/3658)
[Akka.Persistence: Provide minSequenceNr for snapshot deletion](https://github.com/akkadotnet/akka.net/pull/3641)
To [see the full set of changes for Akka.NET 1.3.11, click here](https://github.com/akkadotnet/akka.net/milestone/29)
| COMMITS | LOC+ | LOC- | AUTHOR |
| --- | --- | --- | --- |
| 8 | 887 | 220 | Bartosz Sypytkowski |
| 5 | 67 | 174 | Aaron Stannard |
| 4 | 15 | 7 | Caio Proiete |
| 3 | 7 | 4 | Maciek Misztal |
| 2 | 60 | 8 | Marcus Weaver |
| 2 | 57 | 12 | moerwald |
| 2 | 278 | 16 | Peter Shrosbree |
| 2 | 2 | 2 | Fábio Beirão |
| 1 | 71 | 71 | Sean Gilliam |
| 1 | 6 | 0 | basbossinkdivverence |
| 1 | 24 | 5 | Ismael Hamed |
| 1 | 193 | 8 | to11mtm |
| 1 | 17 | 33 | zbynek001 |
| 1 | 12 | 3 | Oleksandr Bogomaz |
| 1 | 1 | 1 | MelnikovIG |
| 1 | 1 | 1 | Alex Villarreal |
| 1 | 1 | 0 | Yongjie Ma |</PackageReleaseNotes>
| 5 | 123 | 71 | Aaron Stannard |
| 3 | 96 | 10 | Ismael Hamed |
| 2 | 4 | 3 | Oleksandr Kobylianskyi |
| 1 | 5 | 1 | Ruben Mamo |
| 1 | 23 | 6 | Chris Hoare |</PackageReleaseNotes>
</PropertyGroup>
</Project>
17 changes: 9 additions & 8 deletions src/contrib/cluster/Akka.Cluster.Sharding/PersistentShard.cs
@@ -86,7 +86,7 @@ protected override bool ReceiveCommand(object message)
/*
* delete old events but keep the latest around because
*
* it's not safe to delete all events immediate because snapshots are typically stored with a weaker consistency
* it's not safe to delete all events immediately because snapshots are typically stored with a weaker consistency
* level which means that a replay might "see" the deleted events before it sees the stored snapshot,
* i.e. it will use an older snapshot and then not replay the full sequence of events
*
@@ -99,21 +99,22 @@ protected override bool ReceiveCommand(object message)
}
break;
case SaveSnapshotFailure m:
Log.Warning("PersistentShard snapshot failure: {0}", m.Cause.Message);
Log.Warning("PersistentShard snapshot failure: [{0}]", m.Cause.Message);
break;
case DeleteMessagesSuccess m:
Log.Debug("PersistentShard messages to {0} deleted successfully", m.ToSequenceNr);
DeleteSnapshots(new SnapshotSelectionCriteria(m.ToSequenceNr - 1));
var deleteTo = m.ToSequenceNr - 1;
var deleteFrom = Math.Max(0, deleteTo - Settings.TunningParameters.KeepNrOfBatches * Settings.TunningParameters.SnapshotAfter);
Log.Debug("PersistentShard messages to [{0}] deleted successfully. Deleting snapshots from [{1}] to [{2}]", m.ToSequenceNr, deleteFrom, deleteTo);
DeleteSnapshots(new SnapshotSelectionCriteria(deleteTo, DateTime.MaxValue, deleteFrom));
break;

case DeleteMessagesFailure m:
Log.Warning("PersistentShard messages to {0} deletion failure: {1}", m.ToSequenceNr, m.Cause.Message);
Log.Warning("PersistentShard messages to [{0}] deletion failure: [{1}]", m.ToSequenceNr, m.Cause.Message);
break;
case DeleteSnapshotsSuccess m:
Log.Debug("PersistentShard snapshots matching {0} deleted successfully", m.Criteria);
Log.Debug("PersistentShard snapshots matching [{0}] deleted successfully", m.Criteria);
break;
case DeleteSnapshotsFailure m:
Log.Warning("PersistentShard snapshots matching {0} deletion failure: {1}", m.Criteria, m.Cause.Message);
Log.Warning("PersistentShard snapshots matching [{0}] deletion failure: [{1}]", m.Criteria, m.Cause.Message);
break;
default:
return this.HandleCommand(message);
@@ -286,6 +286,15 @@ private void AwaitCount(int expected)
});
}

private void AwaitCountSubscribers(int expected, string topic)
{
AwaitAssert(() =>
{
Mediator.Tell(new CountSubscribers(topic));
Assert.Equal(expected, ExpectMsg<int>());
});
}

#endregion

[MultiNodeFact]
@@ -306,6 +315,7 @@ public void DistributedPubSubMediatorSpecs()
DistributedPubSubMediator_must_remove_entries_when_node_is_removed();
DistributedPubSubMediator_must_receive_proper_UnsubscribeAck_message();
DistributedPubSubMediator_must_get_topics_after_simple_publish();
DistributedPubSubMediator_must_remove_topic_subscribers_when_they_terminate();
}

public void DistributedPubSubMediator_must_startup_2_nodes_cluster()
@@ -820,5 +830,23 @@ public void DistributedPubSubMediator_must_get_topics_after_simple_publish()
EnterBarrier("after-get-topics");
});
}

public void DistributedPubSubMediator_must_remove_topic_subscribers_when_they_terminate()
{
Within(TimeSpan.FromSeconds(15), () =>
{
RunOn(() =>
{
var s1 = new Subscribe("topic_b1", CreateChatUser("u18"));
Mediator.Tell(s1);
ExpectMsg<SubscribeAck>(x => x.Subscribe.Equals(s1));

AwaitCountSubscribers(1, "topic_b1");
ChatUser("u18").Tell(PoisonPill.Instance);
AwaitCountSubscribers(0, "topic_b1");
}, _first);
EnterBarrier("after-15");
});
}
}
}