
Speed up cycle detector reaping some actors #3649

Merged

merged 1 commit into master on Sep 11, 2020
Conversation

SeanTAllen
Member

This commit speeds up the reaping of some actors by the cycle detector.
When an actor is done running, we do a check similar to the one we do
when --ponynoblock is on:

  • Check to see if the actor's message queue is empty
  • Check to see if it has a reference count of 0

If both are true, then the actor can be deleted. When --ponynoblock
is used, this is done immediately. It can't be done immediately when
the cycle detector is in use because the cycle detector might have
references to the actor in question.
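The check described above can be sketched as a small predicate in C. This is a hypothetical simplification; the actual runtime's actor layout and function names differ.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical, simplified model of an actor for illustration only. */
typedef struct actor_t
{
  size_t rc;        /* ORCA reference count */
  size_t queue_len; /* pending messages in the actor's queue */
} actor_t;

/* An actor that has just finished running is a candidate for reaping
 * when its message queue is empty and nothing holds a reference to it. */
static bool reap_candidate(const actor_t* a)
{
  return (a->queue_len == 0) && (a->rc == 0);
}
```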

In order to safely clean up and reap actors, the cycle detector must
be the one to do the reaping. This would eventually happen without
this change. However, it is possible to write programs where the
cycle detector's protocol is unable to keep up with the speed at
which new "one shot" actors are created.

Here's an example of such a program:

```pony
actor Main
  new create(e: Env) =>
    Spawner.run()

actor Spawner
  var _living_children: U64 = 0

  new create() =>
    None

  be run() =>
    _living_children = _living_children + 1
    Spawnee(this).run()

  be collect() =>
    _living_children = _living_children - 1
    run()

actor Spawnee
  let _parent: Spawner

  new create(parent: Spawner) =>
    _parent = parent

  be run() =>
    _parent.collect()
```
Without this patch, that program has a large amount of memory growth
and eventually runs out of memory. With the patch, Spawnee actors are
reaped much more quickly and stable memory growth can be achieved.

This is not a complete solution to the problem as issue #1007 is still a
problem due to reasons I outlined at
#1007 (comment).

It should also be noted that in the process of developing this patch,
Dipin Hora and I discovered a use-after-free race condition in the
runtime mpmcq implementation. That race condition can and will result
in segfaults when running the above example program. There's no issue
opened yet for that problem, but one will be coming.

While my name is on the commit, credit should be divided between myself
and Dipin. I came up with the basic idea of how to reap some "one shot"
actors, and Dipin reimplemented my idea to work much better with the
existing cycle detector; my original approach was overly complicated and
had a number of areas that were likely sources of subtle bugs.

@SeanTAllen SeanTAllen requested a review from a team September 10, 2020 17:52
@SeanTAllen SeanTAllen added the changelog - fixed Automatically add "Fixed" CHANGELOG entry on merge label Sep 10, 2020
@ponylang-main
Contributor

Hi @SeanTAllen,

The changelog - fixed label was added to this pull request; all PRs with a changelog label need to have release notes included as part of the PR. If you haven't added release notes already, please do.

Release notes are added by creating a uniquely named file in the .release-notes directory. We suggest you call the file 3649.md to match the number of this pull request.

The basic format of the release notes (using markdown) should be:

## Title

End user description of changes, why it's important,
problems it solves etc.

If a breaking change, make sure to include 1 or more
examples what code would look like prior to this change
and how to update it to work after this change.

Thanks.

@SeanTAllen
Member Author

Windows errors appear to be related to stdlib tests not being run sequentially for which I've opened a PR:

#3651

@SeanTAllen
Member Author


I fixed this in another commit and rebased over it.

@SeanTAllen SeanTAllen merged commit f920092 into master Sep 11, 2020
@SeanTAllen SeanTAllen deleted the dipin-cd-patch branch September 11, 2020 15:34
github-actions bot pushed a commit that referenced this pull request Sep 11, 2020
github-actions bot pushed a commit that referenced this pull request Sep 11, 2020
SeanTAllen added a commit that referenced this pull request Sep 19, 2020
We have a few example programs that create a large number of short-lived
actors and can exhibit runaway memory growth. The changes in this commit
greatly reduce the memory growth or, in some cases, make it stable.

The primary change here is to use some static analysis at compilation time
to determine if an actor is "an orphan": that is, an actor that, upon
creation, has no actors (including its creator) holding references to it.

In order to support the concept of an "orphan actor", additional changes had
to be made to the cycle detector protocol and how other actors interact with
it. There are two primary things we can do with an orphan actor to improve
the rate of garbage collection when the cycle detector is running.

When the cycle detector is running, it collects actors when they have no
other actors referring to them and can't receive any new messages, or when
they are in a cycle that can't receive additional messages.
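That collection condition can be sketched as a predicate. This is a hypothetical simplification of the rule just described; the real cycle detector derives these facts from its protocol state rather than taking them as booleans.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical predicate for the collection condition described above:
 * the cycle detector may collect an actor once no other actor refers to
 * it and it can't receive new messages, or once it sits in a cycle that
 * can't receive additional messages. */
static bool cd_may_collect(bool has_external_refs, bool can_receive_msgs,
  bool in_dead_cycle)
{
  return (!has_external_refs && !can_receive_msgs) || in_dead_cycle;
}
```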

The examples listed in this PR all had problematic memory usage because
actors were created at a rate that overwhelmed the cycle detector, causing
a large number of messages to back up and sit in its queue waiting to be processed.

This commit addresses the "too many messages" problem by making the following
actor GC rule changes for orphan actors. Note that all of these rules apply
only when the cycle detector is running.

* when an actor finishes being run by a scheduler thread, "locally delete" it if:

  * no other actors hold a reference to the actor
  * the cycle detector isn't holding a reference to the actor

  "locally deleting" follows the same message-free protocol used when the cycle
  detector isn't running.

* when an actor finishes being run by a scheduler thread, pre-emptively inform the
  cycle detector that we are blocked if:

  * no other actors hold a reference to the actor
  * the cycle detector is holding a reference to the actor

  By pre-emptively informing the cycle detector that we are blocked, the cycle detector
  can delete the actor more quickly. This "pre-emptively notify" change was introduced
  in #3649.
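The two rules above can be sketched as a small decision function. This is a hypothetical simplification; in the real runtime this decision is embedded in the scheduler loop, and the names here are invented for illustration.

```c
#include <assert.h>
#include <stdbool.h>

/* Possible outcomes when a scheduler thread finishes running an actor
 * with an empty queue, per the rules above. */
typedef enum
{
  LOCAL_DELETE,      /* free the actor immediately, no CD involvement */
  PREEMPT_INFORM_CD, /* tell the cycle detector we're blocked right away */
  NORMAL_CD_PROTOCOL /* fall back to the regular cycle detector protocol */
} reap_action_t;

/* Hypothetical decision function for an actor that has finished running. */
static reap_action_t choose_reap_action(bool others_hold_ref, bool cd_holds_ref)
{
  if(others_hold_ref)
    return NORMAL_CD_PROTOCOL;

  return cd_holds_ref ? PREEMPT_INFORM_CD : LOCAL_DELETE;
}
```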

In order to support these changes, additional cycle detector protocol changes were made.
Previously, every actor informed the cycle detector of its existence upon creation. If
we want to allow for local deleting, this won't work, since it means every actor is
known to the cycle detector. All "actor created" messages have been removed by this change.
Instead, actors have a flag, FLAG_CD_CONTACTED, that is set if the actor has ever sent a
message to the cycle detector. Once the flag is set, we know that locally deleting the
actor is unsafe, and we fall back on the slower "pre-emptively inform" strategy.

The cycle detector works by periodically sending messages to all actors it knows about
and asking them if they are currently blocked as the first step in its "path to actor
garbage collection" protocol. As actors no longer inform the cycle detector of their
existence on creation, the cycle detector needs a new way to discover actors.

The first time an actor becomes logically blocked, it sets itself as blocked and notifies
the cycle detector of its existence and that it is blocked. This is done by:

* setting the actor as blocked
* sending an "i'm blocked" message to the cycle detector
* setting FLAG_CD_CONTACTED on the actor
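The three steps above might look roughly like this. FLAG_CD_CONTACTED is named in the text; FLAG_BLOCKED, the flag values, and the helper functions are hypothetical stand-ins for the runtime's real constants and message-send machinery.

```c
#include <assert.h>

/* Hypothetical flag values for illustration only. */
enum
{
  FLAG_BLOCKED = 1u << 0,
  FLAG_CD_CONTACTED = 1u << 1
};

typedef struct actor_t { unsigned flags; } actor_t;

static int blocked_msgs_sent = 0;

/* Stub standing in for a real "i'm blocked" message to the cycle detector. */
static void cd_send_blocked(actor_t* a)
{
  (void)a;
  blocked_msgs_sent++;
}

/* First time an actor becomes logically blocked: mark it blocked, notify
 * the cycle detector, and record that the CD now knows about this actor. */
static void on_logically_blocked(actor_t* a)
{
  if((a->flags & FLAG_BLOCKED) == 0)
  {
    a->flags |= FLAG_BLOCKED;
    cd_send_blocked(a);
    a->flags |= FLAG_CD_CONTACTED;
  }
}
```

Note that the cycle detector is notified only once: a second logical block while the flag is still set sends no additional message.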

The overall effect of these changes is:

- The rate of garbage collection for short-lived actors is improved.
- Fewer cycle detector related messages are generated.

It should be noted that if an actor is incorrectly identified by the compiler as
being an orphan when it is in fact not, the end result would be a segfault. During
development of this feature, it was discovered that the following code from the
Promise select test would segfault:

```pony
let pb = Promise[String] .> next[None]({(s) => h.complete_action(s) })
```

The issue was that `.>` wasn't being correctly handled by the `is_result_needed` logic
in `expr.c`. It is possible that other logic within the function is faulty; if
segfaults are seen after this commit that didn't exist prior to it, incorrectly
identifying an actor as an orphan might be the culprit.

Prior to these changes, examples such as the one from issue #1007 (listed
later) were unable to be collected due to an edge case in the cycle
detector and the runtime garbage collection algorithms.

Issue #1007 was opened with the following code having explosive memory growth:

```pony
primitive N fun apply(): U64 => 2_000_000_000

actor Test
  new create(n: U64) =>
    if n == 0 then return end
    Test(n - 1)

actor Main
  new create(env: Env) =>
    Test(N())
```

Dipin and I previously worked on #3649 which added "pre-emptively informing" as
detailed earlier. The example above from #1007 wasn't addressed by #3649 because
the key to #3649 was that when done running its behaviors, an actor can see if it
has no additional messages AND no references to itself and can then tell the cycle
detector to skip parts of the CD protocol and garbage collect sooner. #1007 requires
the "local delete" functionality from this commit whereas #3649 only provided
"pre-emptive informing".

The addition of "local delete" allows the cycle detector to keep up with many
cases that generate large numbers of "orphaned" actors. The example from #1007 above
wasn't addressed by #3649 because of an implementation detail in the ORCA garbage
collection protocol: at the time an instance of Test is done running its create
behavior, it doesn't have a reference count of 0. It has a reference count of 256.
Not because there are 256 references, but because when an actor is created, the
runtime puts a "fake value" in the rc so that the actor isn't gc'd prematurely.
The value is set to the correct count once the actor that created it is first
traced, and it is subsequently updated correctly per ORCA going forward.

However, at the time that an instance of Test is finished running its create, that
information isn't available. It would be incorrect to say "if rc is 256, I'm blocked
and you can gc me". 256 is a perfectly reasonable value for an rc to be in normal usage.

This isn't a problem with the changes in this PR, as the compiler detects that each
instance of Test will be an orphan and immediately sets its rc to 0. This allows it
to be garbage collected, as the instance's message queue is empty, so long as its rc
remains 0.
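The difference can be sketched as follows. The 256 placeholder value is from the text above; the constant name, struct, and function are hypothetical.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical name for the "fake value" described above. */
#define ACTOR_INITIAL_RC 256

typedef struct actor_t { size_t rc; } actor_t;

/* A newly created actor gets the placeholder rc so it isn't gc'd before
 * its creator is traced. If the compiler proved the actor is an orphan,
 * it starts at 0 and is immediately eligible for collection once its
 * message queue is empty. */
static void actor_init_rc(actor_t* a, bool compiler_says_orphan)
{
  a->rc = compiler_says_orphan ? 0 : ACTOR_INITIAL_RC;
}
```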

Any changes in the future to address lingering issues with creating large numbers
of orphaned actors should also be tested with the following examples. Each exercises
a different type of pattern that could lead to memory growth or incorrect execution.

Example 2 features reasonably stable memory usage that I have, from time to time,
seen increase rapidly. Such an increase is rather infrequent, but it suggests there
are additional problems in the cycle detector. I suspect the problem is a periodic
burst of additional messages to the cycle detector from actors that can be garbage
collected, but I haven't investigated further.

```pony
actor Main
  new create(e: Env) =>
    Spawner.run()

actor Spawner
  var _living_children: U64 = 0

  new create() =>
    None

  be run() =>
    _living_children = _living_children + 1
    Spawnee(this).run()

  be collect() =>
    _living_children = _living_children - 1
    run()

actor Spawnee
  let _parent: Spawner

  new create(parent: Spawner) =>
    _parent = parent

  be run() =>
    _parent.collect()
```

Example 3 has stable memory usage, given that it won't result in any messages
being sent to the cycle detector: we have determined at compile time that the
Foo actor instances are orphaned.

```pony
actor Main
  var n: U64 = 2_000_000_000

  new create(e: Env) =>
    run()

  be run() =>
    while(n > 0 ) do
      Foo(n)
      n = n - 1
      if ((n % 1_000) == 0) then
        run()
        break
      end
     end

actor Foo
  new create(n: U64) =>
    if ((n % 1_000_000) == 0) then
      @printf[I32]("%ld\n".cstring(), n)
    end

    None
```

Example 4 has the same characteristics as example 3 with the code as of this commit.
However, it exhibited different behavior before this commit was fully completed
and appears to be a good test candidate for any future changes.

```pony
actor Main
  var n: U64 = 2_000_000_000

  new create(e: Env) =>
    run()

  be run() =>
    while(n > 0 ) do
      Foo(n)
      n = n - 1
      if ((n % 1_000_000) == 0) then
        @printf[I32]("%ld\n".cstring(), n)
      end
      if ((n % 1_000) == 0) then
        run()
        break
      end
     end

actor Foo
  new create(n: U64) =>
    None
```

Finally, for anyone who works on improving this in the future, here's an additional
test program, beyond those that already exist elsewhere for testing Pony programs. This
program creates a large number of actors that are orphaned but then send themselves
to another actor. This should increase their rc and keep them from being garbage
collected. If the program segfaults, then something has gone wrong and the logic related
to orphan actors has been broken. The example currently passes as of this commit.

```pony
use "collections"

actor Holder
  let _holding: SetIs[ManyOfMe] = _holding.create()

  new create() =>
    None

  be hold(a: ManyOfMe) =>
    _holding.set(a)

  be ping() =>
    var c: U64 = 1_000
    for h in _holding.values() do
      if c > 0 then
        h.ping()
        c = c - 1
      end
    end

actor ManyOfMe
  new create(h: Holder) =>
    h.hold(this)

  be ping() =>
    None

actor Main
  var n: U64 = 500_000
  let holder: Holder

  new create(env: Env) =>
    holder = Holder
    run()

  be run() =>
    while(n > 0 ) do
      ManyOfMe(holder)
      n = n - 1
      if ((n % 1_000) == 0) then
        @printf[I32]("%ld\n".cstring(), n)
        run()
        holder.ping()
        break
      end
     end
```

The code and ideas in this commit, while attributed to me, are the result of
a group effort featuring ideas and/or code from myself, Joe Eli McIlvain,
Dipin Hora, and Sylvan Clebsch.

Closes #1007
SeanTAllen added a commit that referenced this pull request Sep 24, 2020
SeanTAllen added a commit that referenced this pull request Sep 24, 2020
We have a few example programs that create a large number of short-lived
actors that can exhibit runaway memory growth. The changes in this commit
greatly reduce the memory growth or potentially reduce it to be stable.

The primary change here is to use some static analysis at compilation time
to determine if an actor is "an orphan". That is, upon creation, has no
actors (including it's creator) with references to it.

In order to support the concept of an "orphan actor", additional changes had
to be made to the cycle detector protocol and how other actors interact with
it. There are two primary things we can do with an orphan actor to improve
the rate of garbage collection when the cycle detector is running.

When the cycle detector is running actors are collected by the cycle detector
when they have no other actors referring to them and can't receive any new messages
or are in a cycle that can't receive additional messages.

The examples listed in this PR all had problematic memory usage because, actors
were created at a rate that overwhelmed the cycle detector and caused large
amount of messages to it to back up and sit in its queue waiting to be processed.

This commit addresses the "too many messages" problem by making the following actor
GC rule changes for orphan actors. Note, all these rules are when the cycle detector
is running.

* when an actor finishes being run by a scheduler thread "locally delete" it if:

  * no other actors hold a reference to the actor
  * the cycle detector isn't holding a reference to the actor

  "locally deleting" follows the same message free protocol used when the cycle
  detector isn't running.

* when an actor finishes being run by a scheduler thread, pre-emptively inform the
  cycle detector that we are blocked if:

  * no other actors hold a reference to the actor
  * the cycle detector is holding a reference to the actor

  by pre-emptively informing the cycle detector that we are blocked, the cycle detector
  can delete the actor more quickly. This "pre-emptively" notify change was introduced
  in #3649.

In order to support these changes, additional cycle detector protocol changes were made.
Previously, every actor upon creation informed the cycle detector of its existence. If
we want to allow for local deleting, then this won't work. Every actor was known by the
cycle detector. All concept of "actor created" messages have been removed by this change.
Instead, actors have a flag FLAG_CD_CONTACTED that is set if the actor has ever sent a
message to the cycle detector. Once the flag is set, we know that locally deleting the
actor is unsafe and we can fall back on the slower pre-emptively inform strategy.

The cycle detector works by periodically sending messages to all actors it knows about
and asking them if they are currently blocked as the first step in its "path to actor
garbage collection" protocol. As actors no longer inform the cycle detector of their
existence on creation, the cycle detector needs a new way to discover actors.

The first time an actor becomes logically blocked, it sets itself as blocked and notifies
the cycle detector of its existence and that it is blocked. This is done by:

* setting the actor as blocked
* sending an "i'm blocked" message to the cycle detector
* setting FLAG_CD_CONTACTED on the actor

The overall effect of these changes is:

- The rate of garbage collection for short-lived actors is improved.
- Fewer cycle detector related messages are generated.

It should be noted that if an actor is incorrectly identified by the compiler as
being an orphan when it is in fact not, the end result would be a segfault. During
development of this feature, it was discovered that the following code from the
Promise select test would segfault:

```pony
let pb = Promise[String] .> next[None]({(s) => h.complete_action(s) })
```

The issue was that .> wasn't being correctly handled by the `is_result_needed` logic
in `expr.c`. It is possibly that other logic within the function is faulty and if
segfaults are see after this commit that didn't exist prior to it, then incorrectly
identifying an actor as an orphan might be the culprit.

Prior to these change, examples such as the one from issue #1007 (listed
later) would be unable to be collected due to an edge-case in the cycle
detector and the runtime garbage collection algorithms.

Issue #1007 was opened with the following code having explosive memory growth:

```pony
primitive N fun apply(): U64 => 2_000_000_000

actor Test
  new create(n: U64) =>
    if n == 0 then return end
    Test(n - 1)

actor Main
  new create(env: Env) =>
    Test(N())
```

Dipin and I previously worked on #3649, which added the "pre-emptive informing"
detailed earlier. The example above from #1007 wasn't addressed by #3649 because
the key to #3649 was that, when done running its behaviors, an actor can check that
it has no additional messages AND no references to itself, and can then tell the
cycle detector to skip parts of the CD protocol and garbage collect sooner. #1007
requires the "local delete" functionality from this commit, whereas #3649 only
provided "pre-emptive informing".

The addition of "local delete" allows the cycle detector to keep up with many
cases of generating large amounts of "orphaned" actors. The example from #1007 above
wasn't addressed by #3649 because of an implementation detail in the ORCA garbage
collection protocol: at the time that an instance of Test is done running its create
behavior, it doesn't have a reference count of 0. It has a reference count of 256.
Not because there are 256 references, but because when an actor is created, the
runtime puts a "fake value" in the rc field so that the actor isn't gc'd prematurely.
The value will be set to the correct one once the actor that created it is first
traced, and will subsequently be updated correctly per ORCA going forward.

However, at the time that an instance of Test finishes running its create, that
information isn't available. It would be incorrect to say "if rc is 256, I'm blocked
and you can gc me"; 256 is a perfectly reasonable rc value in normal usage.

This isn't a problem with the changes in this PR, as the compiler detects that each
instance of Test will be an orphan and immediately sets its rc to 0. This allows it
to be garbage collected, as the instance's message queue is empty, so long as its rc
remains 0.

Any changes in the future to address lingering issues with creating large numbers
of orphaned actors should also be tested with the following examples. Each exercises
a different type of pattern that could lead to memory growth or incorrect execution.

Example 2 features reasonably stable memory usage that I have seen, from time to
time, increase rapidly. It should be noted that such an increase is rather infrequent,
but it suggests there are additional problems in the cycle detector. I suspect the
problem is a periodic burst of additional messages to the cycle detector from actors
that can be garbage collected, but I haven't investigated further.

```pony
actor Main
  new create(e: Env) =>
    Spawner.run()

actor Spawner
  var _living_children: U64 = 0

  new create() =>
    None

  be run() =>
    _living_children = _living_children + 1
    Spawnee(this).run()

  be collect() =>
    _living_children = _living_children - 1
    run()

actor Spawnee
  let _parent: Spawner

  new create(parent: Spawner) =>
    _parent = parent

  be run() =>
    _parent.collect()
```

Example 3 has stable memory growth, given that it won't result in any messages
being sent to the cycle detector, as we have determined at compile time that the
Foo actor instances are orphaned.

```pony
actor Main
  var n: U64 = 2_000_000_000

  new create(e: Env) =>
    run()

  be run() =>
    while (n > 0) do
      Foo(n)
      n = n - 1
      if ((n % 1_000) == 0) then
        run()
        break
      end
    end

actor Foo
  new create(n: U64) =>
    if ((n % 1_000_000) == 0) then
      @printf[I32]("%ld\n".cstring(), n)
    end

    None
```

Example 4 has the same characteristics as example 3 with the code as of this commit.
However, it exhibited different behavior before this commit was fully completed, and
it appears to be a good test candidate for any future changes.

```pony
actor Main
  var n: U64 = 2_000_000_000

  new create(e: Env) =>
    run()

  be run() =>
    while (n > 0) do
      Foo(n)
      n = n - 1
      if ((n % 1_000_000) == 0) then
        @printf[I32]("%ld\n".cstring(), n)
      end
      if ((n % 1_000) == 0) then
        run()
        break
      end
    end

actor Foo
  new create(n: U64) =>
    None
```

Finally, for anyone who works on improving this in the future, here's an additional
test program beyond the ones that already exist elsewhere for testing Pony programs.
This program creates a large number of actors that are orphaned but then send
themselves to another actor. This should increase their rc and keep them from being
garbage collected. If the program segfaults, then something has gone wrong and the
logic related to orphan actors has been broken. The example passes as of this commit.

```pony
use "collections"

actor Holder
  let _holding: SetIs[ManyOfMe] = _holding.create()

  new create() =>
    None

  be hold(a: ManyOfMe) =>
    _holding.set(a)

  be ping() =>
    var c: U64 = 1_000
    for h in _holding.values() do
      if c > 0 then
        h.ping()
        c = c - 1
      end
    end

actor ManyOfMe
  new create(h: Holder) =>
    h.hold(this)

  be ping() =>
    None

actor Main
  var n: U64 = 500_000
  let holder: Holder

  new create(env: Env) =>
    holder = Holder
    run()

  be run() =>
    while (n > 0) do
      ManyOfMe(holder)
      n = n - 1
      if ((n % 1_000) == 0) then
        @printf[I32]("%ld\n".cstring(), n)
        run()
        holder.ping()
        break
      end
    end
```

The code and ideas in this commit, while attributed to me, are the result of
a group effort featuring ideas and/or code from myself, Joe Eli McIlvain,
Dipin Hora, and Sylvan Clebsch.

Closes #1007