
[WIP, RFC FS-1072] task and state machine support #6634

Closed (62 commits)

Conversation

@dsyme (Contributor) commented Apr 24, 2019

[ Closed in favour of #6811 from a feature branch ]

This inserts a heavily modified (but semantically almost completely compatible) version of TaskBuilder.fs into FSharp.Core and adds a general state machine compilation mechanism for F# computation expressions.

Overview

The primary intention is to add quality support for task { ... } to F#. This means

  • library support for task { ... } returning Task
  • state machine compilation for computation expressions
  • support for using task-like tasks (value tasks etc.)
  • support for configurable tasks (half done)
  • support for defining value tasks
  • tests for all of these (half done)
  • proper debug support (stack, breakpoints, stepping) equivalent to C# (ok but needs testing)

We won't add the support until all of the above are done. We are starting with TaskBuilder.fs as a reference library implementation to help define semantics.

The mechanism we use to do this is to first add general support for "generated state machines" in F#.

Technical Note: Specifying generated state machines

From a high level, state machines are pleasant to implement, if inefficiently, in a functional language, e.g. using continuations or F# async { ... } or TaskBuilder.fs task { ... }. These implementations tend to allocate continuations and other values heavily. An efficient implementation (with low or zero allocation rates) therefore means generating C-like constructs - labels, gotos, jumptables and other such things.

It is hard to recover a low-allocation implementation from a functional/continuation-based implementation directly - but we can if we add enough compiler support to generate exactly the right code. Generating efficient state machines needs compiler support: for example, both C# and F# support state machine compilation of C# iterator methods and F# seq { ... } expressions.

The magic heart of a typical generated state machine is a MoveNext or Step function that takes an integer program counter (pc) and jumps to a target:

      member __.MoveNext() = 
            match (pc) with
               | 1 -> goto L1 
               | 2 -> goto L2 
               | _ -> goto L0

            L0: ...
                ... this code can return, first setting "pc <- L1"...

            L1: ...
                ... this code can return, e.g. first setting "pc <- L2"...

            L2: ...

This is roughly what compiled seq { ... } code looks like in F# today and what compiled async/await code looks like in C#, at a very high level.

Note you can't write this kind of code directly in F# - there is no goto, and the gotos often jump directly into code resuming from the last step of the state machine.

In this mechanism we allow the specification of new varieties of generated state machines in library code, normally as part of the implementation of a computation-expression builder. (Note this is an extremely subtle mechanism and its validity is not yet checked by the compiler - "caveat emptor, here be dragons").

To help define generated state machines we use some primitives, currently

module Microsoft.FSharp.Core.CompilerServices.CodeGenHelpers
        val __jumptable : int -> (unit -> 'T) -> 'T
        val __stateMachine : 'T -> 'T
        val __newEntryPoint: unit -> int
        val __machine<'T> : 'T
        val __entryPoint: int -> unit
        val __return : 'T -> 'T

plus some special magic value names such as __expand_XYZ and the very obscure __machine_step$cont.

A generated state machine expression is any expression of the form

    let inline makeStateMachine __expand_code = 
        (__stateMachine
            { new SomeBaseType() with 
                member __.SomeOverride(pc) =
                     __jumptable pc __expand_code }).SomeMethod()

Here SomeOverride will be compiled as a multi-entry method where the entry point is determined by pc using a jumptable at the start of the method.

Important The content of a generated state machine is specified by __expand_code which must be fully inlined code - that is, fully inlined to reveal the full implementation of the state machine. You can think of everything beginning with __expand_ABC as a macro where macro expansion is implemented by F# inlining.

For example, __expand_code could be a call to:

    makeStateMachine (bindTask task (fun res -> returnTask (res + 1)))

where bindTask is inlined:

    let inline bindTask (task : Task<'TResult1>) (__expand_continuation : 'TResult1 -> TaskStep<'TResult2>) =
        let CONT = __newEntryPoint()
        let awaiter = task.GetAwaiter()
        if awaiter.IsCompleted then 
            __entryPoint CONT
            __expand_continuation (awaiter.GetResult())
        else
            __machine<TaskStateMachine>.Await (awaiter, CONT)
            __return false

and returnTask is inlined:

    let inline returnTask (x: 'T)  =
        __machine<TaskStateMachine>.Current <- x
        true

This shows the use of some of the other constructs in state machine specification - you can see

  1. the compile-time creation of an entry point using __newEntryPoint()
  2. the compile-time specification of the target of that entry point using __entryPoint
  3. the compulsory macro expansion of the continuation using __expand_continuation
  4. a self-reference to the state machine being run (using the this pointer) via __machine
  5. an early exit from the state machine stepping function using __return

The constructs that can be used in the (inlined, expanded) state machine code are limited, and in some cases (e.g. try/with blocks and while loops) extremely subtle. It is easy to create incorrect and invalid code using this mechanism.

For full details of the current status, see the implementation in the PR; details may have changed from the above.

When specifying state machines, it is common to return a typed "dummy" struct such as TaskStep<'T> from each call to the Step function, where 'T represents the result of the task. 'T acts as a phantom type here.
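As a purely illustrative sketch of what such a type might look like (the names and members here are assumptions for illustration, not the exact definitions in the PR):

    // Hypothetical sketch: a "dummy" step struct whose type parameter 'T is not
    // stored in any field, so it acts purely as a phantom type recording the
    // eventual result type of the task.
    [<Struct>]
    type TaskStep<'T>(completed: bool) =
        // true if this step ran to completion synchronously; false if the state
        // machine returned early to await something
        member __.IsCompleted = completed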

Example: sync { ... }

As a micro "no-op" example of defining a builder which gets compiled using state machines, we can define sync { ... } which is for entirely synchronous computation with no special semantics.

Implementation: https://github.com/dsyme/visualfsharp/blob/tasks/tests/fsharp/core/state-machines/sync.fs
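
For orientation, here is a minimal hypothetical sketch of the shape such a builder can take, with none of the state-machine plumbing from sync.fs (it only shows the surface used by the examples below):

    // A minimal no-op builder: every operation is synchronous, Bind is just
    // application and Return is the identity. The real sync.fs additionally
    // routes this through the state-machine machinery described above.
    type SyncBuilder() =
        member inline __.Delay(f : unit -> 'T) = f
        member inline __.Run(f : unit -> 'T) = f ()
        member inline __.Return(x : 'T) = x
        member inline __.Bind(v : 'TResult1, body : 'TResult1 -> 'TResult2) = body v

    let sync = SyncBuilder()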

Examples of use:

let t1 y = 
    sync {
       printfn "in t1"
       let x = 4 + 5 + y
       return x
    }

let t2 y = 
    sync {
       printfn "in t2"
       let! x = t1 y
       return x + y
    }

printfn "t2 6 = %d" (t2 6)

Code performance will be approximately the same as normal F# code except for one allocation for each execution of each sync { .. } as we allocate the "SyncMachine". In later work we may be able to remove this.

Example: task { ... }

See the implementation in tasks.fs. There is some complication due to the need to bind to task-like tasks and asyncs.

Example: taskSeq { ... }

This is for state machine compilation of computation expressions that generate IAsyncEnumerable<'T> values. This is a headline C# 8.0 feature and a very large feature for C#. It appears to mostly drop out as library code once general-purpose state machine support is available.

See the example in taskSeq.fs. Not everything is implemented yet but the basics work.
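
A hypothetical usage sketch (assuming the taskSeq builder in taskSeq.fs supports for, let! and yield in the usual way; getPage is a stand-in function, not part of the PR):

    open System.Threading.Tasks

    // Stand-in for a real asynchronous fetch; returns an already-completed task here.
    let getPage (url: string) : Task<string> =
        Task.FromResult ("contents of " + url)

    // Produces an asynchronous sequence, yielding one item per awaited fetch.
    let pages (urls: string list) =
        taskSeq {
            for url in urls do
                let! text = getPage url
                yield text
        }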

Example: seq2 { ... }

See https://github.com/dsyme/visualfsharp/blob/tasks/tests/fsharp/core/state-machines/seq2.fs

This is an example showing how to do state machine compilation for seq2 { ... } expressions, akin to the seq { ... } expressions for which state machine compilation is baked into the F# compiler today. Notes:

  • I think it's possible this version actually gives better stack traces than the current sequence expression support in the F# compiler.
  • This is essentially the taskSeq { ... } support with the task machinery trimmed out.

Example: list { ... }, array { ... }

See https://github.com/dsyme/visualfsharp/blob/tasks/tests/fsharp/core/state-machines/list.fs

This example defines list { .. }, array { .. } and rsarray { .. } for collections, where the computations generate directly into a ResizeArray (System.Collections.Generic.List<'T>).

F#'s existing [ .. ], [| ... |] and seq { .. } |> Seq.toResizeArray all go via an intermediate IEnumerable, which is then iterated to populate a ResizeArray and converted to the final immutable collection. In contrast, generating directly into a ResizeArray is potentially more efficient (and for list { ... } further perf improvements are possible if we put this in FSharp.Core and use the mutate-tail-cons-cell trick to generate the list directly). This technique has been known for a while and can give faster collection generation, but it has not previously been possible to get good code generation for the expressions in many cases. Note these aren't really "state machines", because there are no resumption points - there is just an implicit collection being yielded into in otherwise synchronous code.

Using a directly-generating list { ... } seems to give a significant speedup over [ ... ] in the example I just tried, included in the code.

PERF: list { ... } : 497
PERF: [ ... ] : 748
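
For reference, a hypothetical usage sketch of the directly-generating style (the exact builder surface in list.fs may differ):

    // Builds the result by yielding straight into an underlying ResizeArray,
    // with no intermediate IEnumerable, then converts to the final list.
    let squares n =
        list {
            for i in 1 .. n do
                yield i * i
        }

    // printfn "%A" (squares 5)   // [1; 4; 9; 16; 25]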

Technical Note: Expected allocation profile for task { ... }

The allocation performance of the current approach should be:

  • one allocation of TaskStateMachine per task { ... }
  • one allocation of Task per task { ... }
  • one or two allocations on each let! or do! bind in a task { .. } - I'm not quite sure how many (we may be able to remove these when binding to another task produced by task { ... })
  • an additional boxing allocation when a bind returns a value type - we can remove this later
  • one allocation for each let mutable used inside the task - these currently get turned into ref cells by the autobox transformation when let mutable is used in a task

More improvements are needed - see discussion below. We should compare with TaskBuilder.fs, Ply and C#.

Performance Status

Systematic perf testing of task { ... } is required.

Some benchmarks are at tests\fsharp\perf\tasks in the PR. Please help improve this.

Currently compile and run with:

msbuild tests\fsharp\perf\tasks\FS\TaskPerf.fsproj /p:Configuration=Release
dotnet artifacts\bin\TaskPerf\Release\netcoreapp2.1\TaskPerf.dll

The build/run cycle is a bit irritating as BenchmarkDotNet seems to run the "FSharpAsync" slow benchmarks around 100 times. Please help fix that.

Here are results at last run:

BenchmarkDotNet=v0.11.5, OS=Windows 10.0.18362
Intel Core i7-8750H CPU 2.20GHz (Coffee Lake), 1 CPU, 12 logical and 6 physical cores
.NET Core SDK=2.2.203
  [Host]     : .NET Core 2.1.9 (CoreCLR 4.6.27414.06, CoreFX 4.6.27415.01), 64bit RyuJIT DEBUG
  DefaultJob : .NET Core 2.1.9 (CoreCLR 4.6.27414.06, CoreFX 4.6.27415.01), 64bit RyuJIT
| Method | Mean | Error | StdDev | Median | Ratio | RatioSD | Gen 0 | Gen 1 | Gen 2 | Allocated |
|---|---|---|---|---|---|---|---|---|---|---|
| ManyWriteFile_CSharpAsync | 18.19 ms | 0.6308 ms | 1.8499 ms | 17.49 ms | 1.00 | 0.00 | 406.2500 | - | - | 1.09 KB |
| ManyWriteFile_Task | 16.66 ms | 0.3535 ms | 0.9062 ms | 16.46 ms | 0.92 | 0.09 | 406.2500 | - | - | 1.02 KB |
| ManyWriteFile_TaskBuilder | 20.79 ms | 0.4120 ms | 1.0411 ms | 20.83 ms | 1.16 | 0.11 | 1125.0000 | - | - | 1.75 KB |
| ManyWriteFile_FSharpAsync | 32.59 ms | 0.6645 ms | 1.9489 ms | 32.80 ms | 1.81 | 0.18 | 1750.0000 | - | - | 3.46 KB |

| Method | Mean | Error | StdDev | Median | Ratio | RatioSD | Gen 0 | Gen 1 | Gen 2 | Allocated |
|---|---|---|---|---|---|---|---|---|---|---|
| SyncBinds_CSharpAsync | 94.21 ms | 1.942 ms | 4.872 ms | 92.28 ms | 1.00 | 0.00 | 167833.3333 | - | - | 755.31 MB |
| SyncBinds_Task | 120.77 ms | 1.322 ms | 1.104 ms | 120.72 ms | 1.26 | 0.09 | 167800.0000 | - | - | 755.31 MB |
| SyncBinds_TaskBuilder | 160.14 ms | 2.100 ms | 1.964 ms | 160.30 ms | 1.66 | 0.11 | 256000.0000 | - | - | 1152.04 MB |
| SyncBinds_FSharpAsync | 864.69 ms | 3.520 ms | 2.748 ms | 865.17 ms | 9.07 | 0.55 | 596000.0000 | - | - | 2685.55 MB |

| Method | Mean | Error | StdDev | Ratio | RatioSD | Gen 0 | Gen 1 | Gen 2 | Allocated |
|---|---|---|---|---|---|---|---|---|---|
| AsyncBinds_CSharpAsync | 121.53 ms | 2.416 ms | 3.901 ms | 1.00 | 0.00 | 1000.0000 | - | - | 1.87 MB |
| AsyncBinds_Task | 94.16 ms | 1.820 ms | 2.235 ms | 0.77 | 0.03 | 1200.0000 | - | - | 2.33 MB |
| AsyncBinds_TaskBuilder | 123.48 ms | 2.234 ms | 2.090 ms | 1.01 | 0.04 | 3600.0000 | - | - | 4.96 MB |

| Method | Mean | Error | StdDev | Ratio | RatioSD | Gen 0 | Gen 1 | Gen 2 | Allocated |
|---|---|---|---|---|---|---|---|---|---|
| SingleSyncTask_CSharpAsync | 59.46 ms | 0.3397 ms | 0.3011 ms | 1.00 | 0.00 | - | - | - | - |
| SingleSyncTask_Task | 63.98 ms | 0.5184 ms | 0.4849 ms | 1.08 | 0.01 | - | - | - | - |
| SingleSyncTask_TaskBuilder | 94.40 ms | 1.2734 ms | 1.1911 ms | 1.59 | 0.02 | 127000.0000 | - | - | 600000000 B |
| SingleSyncTask_FSharpAsync | 1,160.43 ms | 15.3916 ms | 14.3973 ms | 19.54 | 0.24 | 813000.0000 | - | - | 3840000000 B |

@dsyme (Contributor Author) commented Apr 24, 2019

Note the library implementation is not big - just 300 lines - and it's pretty transparent what is happening, apart from the advanced use of SRTP resolution.

@dsyme (Contributor Author) commented Apr 24, 2019

The main problem here is adding the signature file, so large chunks are commented out.

@cartermp (Contributor) commented:

Link suggestion since the RFC isn't out yet: fsharp/fslang-suggestions#581

let __newEntryPoint() : int = failwith "__newEntryPoint should always be removed from compiled code"

[<MethodImpl(MethodImplOptions.NoInlining)>]
let __machine<'T> : 'T = failwith "__newEntryPoint should always be removed from compiled code"


Should the error message mention __machine instead of __newEntryPoint?

@dsyme (Contributor Author) commented May 20, 2019

I have updated this PR, removing one source of allocation, and updated the performance results.

The allocation profile of the F# code is now identical to that of the C# baselines, except for AsyncBinds where there are some extra allocations (though fewer than TaskBuilder). However this doesn't hugely concern me, as truly async binding almost always incurs other significant allocations or delays. The important thing is that there are now no extra allocations for synchronous binding.

The main thing now is to consider how to make the state machine feature sufficiently complete to allow its incorporation. From the examples I've worked through I'm convinced of the general utility of the mechanism, however there are some kinds of code that can't currently be compiled to state machines.

@NinoFloris (Contributor) left a comment:


Identified some last easy wins. If that doesn't pull the async profile in line with C# I'd have to pull out the decompiler/profiler to see what the statemachine codegen actually produces ;)

@dsyme the end result looks great, all things considered. Happy you were able to maintain quite some library-level code. Thanks a lot for putting in the significant effort!

// A using statement is just a try/finally with the finally block disposing if non-null.
builder.TryFinally(
(fun () -> __expand_body disp),
(fun () -> if not (isNull (box disp)) then disp.Dispose()))

Contributor:

Remove box by comparing via Object.ReferenceEquals

Contributor Author:

How is this done? My understanding is Object.ReferenceEquals(disp, null) will also box, unless the JIT eliminates it?

Contributor:

The JIT eliminates it due to generic specialization via the flexible type sig.

Contributor Author:

I think we should double-check either way.

Contributor:

Definitely. Though it's quite a simple path: boxes are semantics-changing so they cannot easily be eliminated, while ReferenceEquals gets aggressively inlined to what amounts to cmp disp, null; jne, which for a struct 'T is always false (so the not ... test is always true). Branch elimination does the rest.
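
For reference, here is a sketch of the suggested change in the context of the snippet above (an illustration of the suggestion, not necessarily the final code in the PR):

    builder.TryFinally(
        (fun () -> __expand_body disp),
        (fun () ->
            // Per the discussion above: for a struct 'disp' the JIT specializes this
            // comparison away (it can never be null), so no box survives at run time;
            // for reference types it is an ordinary null check.
            if not (System.Object.ReferenceEquals(disp, null)) then disp.Dispose()))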

@dsyme (Contributor Author) commented May 20, 2019

@NinoFloris Thanks for the review. I've applied the changes and fixed a race condition.

Latest perf results are shown below. F# is 26% slower than C# for SyncBinds, and 23% faster for AsyncBinds, at least on this test run. Allocations are zero for repeated SingleSyncTask, which is as hoped.

| Method | Mean | Error | StdDev | Median | Ratio | RatioSD | Gen 0 | Gen 1 | Gen 2 | Allocated |
|---|---|---|---|---|---|---|---|---|---|---|
| ManyWriteFile_CSharpAsync | 18.19 ms | 0.6308 ms | 1.8499 ms | 17.49 ms | 1.00 | 0.00 | 406.2500 | - | - | 1.09 KB |
| ManyWriteFile_Task | 16.66 ms | 0.3535 ms | 0.9062 ms | 16.46 ms | 0.92 | 0.09 | 406.2500 | - | - | 1.02 KB |
| ManyWriteFile_TaskBuilder | 20.79 ms | 0.4120 ms | 1.0411 ms | 20.83 ms | 1.16 | 0.11 | 1125.0000 | - | - | 1.75 KB |
| ManyWriteFile_FSharpAsync | 32.59 ms | 0.6645 ms | 1.9489 ms | 32.80 ms | 1.81 | 0.18 | 1750.0000 | - | - | 3.46 KB |

| Method | Mean | Error | StdDev | Median | Ratio | RatioSD | Gen 0 | Gen 1 | Gen 2 | Allocated |
|---|---|---|---|---|---|---|---|---|---|---|
| SyncBinds_CSharpAsync | 94.21 ms | 1.942 ms | 4.872 ms | 92.28 ms | 1.00 | 0.00 | 167833.3333 | - | - | 755.31 MB |
| SyncBinds_Task | 120.77 ms | 1.322 ms | 1.104 ms | 120.72 ms | 1.26 | 0.09 | 167800.0000 | - | - | 755.31 MB |
| SyncBinds_TaskBuilder | 160.14 ms | 2.100 ms | 1.964 ms | 160.30 ms | 1.66 | 0.11 | 256000.0000 | - | - | 1152.04 MB |
| SyncBinds_FSharpAsync | 864.69 ms | 3.520 ms | 2.748 ms | 865.17 ms | 9.07 | 0.55 | 596000.0000 | - | - | 2685.55 MB |

| Method | Mean | Error | StdDev | Ratio | RatioSD | Gen 0 | Gen 1 | Gen 2 | Allocated |
|---|---|---|---|---|---|---|---|---|---|
| AsyncBinds_CSharpAsync | 121.53 ms | 2.416 ms | 3.901 ms | 1.00 | 0.00 | 1000.0000 | - | - | 1.87 MB |
| AsyncBinds_Task | 94.16 ms | 1.820 ms | 2.235 ms | 0.77 | 0.03 | 1200.0000 | - | - | 2.33 MB |
| AsyncBinds_TaskBuilder | 123.48 ms | 2.234 ms | 2.090 ms | 1.01 | 0.04 | 3600.0000 | - | - | 4.96 MB |

| Method | Mean | Error | StdDev | Ratio | RatioSD | Gen 0 | Gen 1 | Gen 2 | Allocated |
|---|---|---|---|---|---|---|---|---|---|
| SingleSyncTask_CSharpAsync | 59.46 ms | 0.3397 ms | 0.3011 ms | 1.00 | 0.00 | - | - | - | - |
| SingleSyncTask_Task | 63.98 ms | 0.5184 ms | 0.4849 ms | 1.08 | 0.01 | - | - | - | - |
| SingleSyncTask_TaskBuilder | 94.40 ms | 1.2734 ms | 1.1911 ms | 1.59 | 0.02 | 127000.0000 | - | - | 600000000 B |
| SingleSyncTask_FSharpAsync | 1,160.43 ms | 15.3916 ms | 14.3973 ms | 19.54 | 0.24 | 813000.0000 | - | - | 3840000000 B |

@NinoFloris (Contributor) commented:

The AsyncBinds result makes me suspicious though. Why is it deviating so much from both C# and TaskBuilder (which are quite close to each other)? Could it point to a behavioral difference?

Good to see a full MB of allocations, about 30%, shaved off by those changes, leaving just a tiny difference. I'd really have to see the IL for that last bit though; it could be an FSharpRef or something else hiding in there.

@dsyme (Contributor Author) commented May 20, 2019

The AsyncBinds result makes me suspicious though. Why is it deviating so much from both C# and TaskBuilder (which are quite close to each other)? Could it point to a behavioral difference?

I'll do a few more runs tomorrow; the results are a bit variable. I suspect the C# one just came in slow on that particular test run.

Good to see a full MB of allocations, about 30%, shaved off by those changes, leaving just a tiny difference. I'd really have to see the IL for that last bit though; it could be an FSharpRef or something else hiding in there.

I think the difference is just that the state machines are bigger, and when their state gets boxed this results in more allocated heap size. For example the F# state machine for AsyncBinds has this:

    .field public class [FSharp.Core]Microsoft.FSharp.Core.Unit Result
    .field public int32 ResumptionPoint
    .field public valuetype [System.Threading.Tasks]System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1<class [FSharp.Core]Microsoft.FSharp.Core.Unit> MethodBuilder
    .field public valuetype [System.Runtime]System.Runtime.CompilerServices.YieldAwaitable/YieldAwaiter awaiter
    .field public valuetype [System.Runtime]System.Runtime.CompilerServices.YieldAwaitable/YieldAwaiter awaiter0
    .field public valuetype [System.Runtime]System.Runtime.CompilerServices.YieldAwaitable/YieldAwaiter awaiter1
    .field public valuetype [System.Runtime]System.Runtime.CompilerServices.YieldAwaitable/YieldAwaiter awaiter2
    .field public valuetype [System.Runtime]System.Runtime.CompilerServices.YieldAwaitable/YieldAwaiter awaiter3
    .field public valuetype [System.Runtime]System.Runtime.CompilerServices.YieldAwaitable/YieldAwaiter awaiter4
    .field public valuetype [System.Runtime]System.Runtime.CompilerServices.YieldAwaitable/YieldAwaiter awaiter5
    .field public valuetype [System.Runtime]System.Runtime.CompilerServices.YieldAwaitable/YieldAwaiter awaiter6
    .field public valuetype [System.Runtime]System.Runtime.CompilerServices.YieldAwaitable/YieldAwaiter awaiter7
    .field public valuetype [System.Runtime]System.Runtime.CompilerServices.YieldAwaitable/YieldAwaiter awaiter8

where the C# one has this:

    .field public int32 '<>1__state'
    .field public valuetype [System.Threading.Tasks]System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1<int32> '<>t__builder'
    .field private valuetype [System.Runtime]System.Runtime.CompilerServices.YieldAwaitable/YieldAwaiter '<>u__1'

Looking at that I'm quite surprised the difference isn't actually greater, but sizeof<MethodBuilder> is 12 and sizeof<System.Runtime.CompilerServices.YieldAwaitable.YieldAwaiter> is actually just one byte (I suppose a boolean), so I think that adds up.

I'm not immediately inclined to implement the state machine field sharing that C# has, partly because it can be a nightmare for debugging and hell if it goes wrong. Also we don't do that optimization for sequence expressions.

@dsyme (Contributor Author) commented May 20, 2019

The build is failing on Linux and Mac because the bootstrap compiler is not being used; see #6380.

@NinoFloris (Contributor) commented:

I'll do a few more runs tomorrow; the results are a bit variable. I suspect the C# one just came in slow on that particular test run.

Good to hear; local machine runs are hard to get stable - thermal throttling and overactive background services cause night-and-day differences.

I'm not immediately inclined to implement the state machine field sharing that C# has, partly because it can be a nightmare for debugging and hell if it goes wrong. Also we don't do that optimization for sequence expressions.

Agreed, it's a bit more work during GC to track the extra references inside most of the awaiters (TaskAwaiter carries the ref of its Task), but it's much more straightforward like this. We always have the option to re-evaluate if somebody wants to build the next Kestrel in F# ;)

That does make me remember something else C# does, which is valuable to add.
Take TaskAwaiter<'T>: it stores a reference to Task<'T>, which can carry a user-supplied 'state' object, continuations, and a 'T which could be a huge object graph. The C# implementation sets the awaiter field back to its default value once it's done with it, unrooting the Task reference and removing the possibility of pushing earlier allocations out of Gen 0/1 in longer-running methods.

Also, and this applies more to C# since we don't have let! mutable: if you receive a new unshared instance of 'T from your callee, you'd expect that nulling out its only visible binding allows the GC to clean it up. Keeping the Task alive in the state machine would prevent that.
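
A minimal hypothetical illustration of that pattern (the type, field and member names below are invented for the sketch and are not from the PR):

    open System.Runtime.CompilerServices
    open System.Threading.Tasks

    [<Struct>]
    type DemoStateMachine =
        val mutable awaiter : TaskAwaiter<int>
        member this.Resume() =
            // Consume the awaited result...
            let result = this.awaiter.GetResult()
            // ...then reset the stored awaiter to its default value so the completed
            // Task<int> it references is no longer rooted by the (possibly boxed,
            // long-lived) state machine.
            this.awaiter <- Unchecked.defaultof<TaskAwaiter<int>>
            result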

@forki (Contributor) commented May 21, 2019

@dsyme I often have to use this:

 task { return myID }

would love to see some nice syntactic sugar and/or optimization that doesn't even create a task

@ashtonkj commented:

@dsyme I often have to use this:

 task { return myID }

would love to see some nice syntactic sugar and/or optimization that doesn't even create a task

Couldn't you just use Task.FromResult?

@forki (Contributor) commented May 21, 2019

Right! Does that create an actual task as overhead?

@ashtonkj commented:

right! Does that create an actual task as overhead?

As far as I can see here (https://referencesource.microsoft.com/mscorlib/R/11a386e7d7cae64a.html) and here (https://docs.microsoft.com/en-us/dotnet/api/system.threading.tasks.task.fromresult?view=netframework-4.8#remarks), it creates a Task with the result immediately set and a status of RanToCompletion.
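
For example, using only the standard BCL API:

    open System.Threading.Tasks

    let myID = 42

    // Allocates a single already-completed Task<int>; no scheduling or state machine involved.
    let t : Task<int> = Task.FromResult myID

    printfn "%b %d" t.IsCompleted t.Result   // prints: true 42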

@forki (Contributor) commented May 21, 2019

OK, the question is: can we do something similar in task { } for stuff that's basically already done?

@ashtonkj commented:

ok. question is: can we do similar in task { } for stuff that's basically already done?

I'm not sure I understand the question. Wouldn't return already do that?

@forki (Contributor) commented May 21, 2019

yeah maybe I'm stupid here. need to think about my question ...

@benaadams (Member) commented May 21, 2019

If it could be either, but usually is complete, then you can use ValueTask<'T> instead of Task<'T>.
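
A small sketch of that suggestion (the cache-lookup scenario is hypothetical):

    open System.Collections.Generic
    open System.Threading.Tasks

    // Returns synchronously and allocation-free when the value is already available,
    // and only falls back to a real Task<int> on the asynchronous path.
    let tryGetFast (cache: Dictionary<string, int>) (key: string) : ValueTask<int> =
        match cache.TryGetValue key with
        | true, v -> ValueTask<int>(v)
        | _ -> ValueTask<int>(Task.Run(fun () -> 42))   // stand-in for real async work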

@dsyme (Contributor Author) commented May 22, 2019

Closing in favour of #6811 from a feature branch
