Discussion: Defer async state machine creation #10449

Closed
benaadams opened this issue Apr 10, 2016 · 29 comments
Comments

@benaadams
Member

benaadams commented Apr 10, 2016

From #7169 (comment); will evolve this over time. This forks the discussion from @ljw1004's great proposal, as it's a separate thing.

While you're there, messing with the async state machine.... 😉

There is currently a faster await path for completed tasks; however, an async function still comes with a cost. There are faster patterns that avoid the state machine, but they involve greater code complexity, so it would be nice if the compiler could generate them as part of the state machine construction.
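For illustration, the contrast behind that cost in its simplest form (the method names below are made up for this sketch):

// Even when the awaited task has already completed, the async version still
// initializes a builder and state machine and runs MoveNext; the non-async
// version simply returns the task.
static async Task ViaStateMachineAsync()
{
    await Task.CompletedTask;   // takes the fast await path, but the state machine still exists
}

static Task WithoutStateMachineAsync()
{
    return Task.CompletedTask;  // no builder, no state machine
}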

Tail call clean up

async Task MethodAsync()
{
    // ... some code

    await OtherMethodAsync();
}

becomes

Task MethodAsync()
{
    // ... some code

    return OtherMethodAsync();
}

Likewise, single tail await

async Task MethodAsync()
{
    if (condition)
    {
        return;
    }
    else if (othercondition)
    {
        await OtherMethodAsync();
    }
    else
    {
        await AnotherMethodAsync();
    }
}

becomes

Task MethodAsync()
{
    if (condition)
    {
        return Task.CompletedTask;
    }
    else if (othercondition)
    {
        return OtherMethodAsync();
    }
    else
    {
        return AnotherMethodAsync();
    }
}

Mid async

async Task MethodAsync()
{
    if (condition)
    {
        return;
    }

    await OtherMethodAsync();

    // some code
}

splits at the first non-completed task into an async awaiting function

Task MethodAsync()
{
    if (condition)
    {
        return Task.CompletedTask;
    }

    var task = OtherMethodAsync();

    if (task.Status != TaskStatus.RanToCompletion)
    {
        return MethodAsyncAwaited(task);
    }

    MethodAsyncRemainder();

    return Task.CompletedTask;
}

async Task MethodAsyncAwaited(Task task)
{
    await task;

    MethodAsyncRemainder();
}

void MethodAsyncRemainder()
{
    // some code
}

ValueTask postponement of async and Task<T>

async Task<int> MethodAsync()
{
    if (condition)
    {
        return 0;
    }

    return await OtherMethodAsync();
}

async Task<int> OtherMethodAsync()
{
    if (otherCondition)
    {
        return 1;
    }

    return await AnotherMethodAsync();
}

async Task<int> AnotherMethodAsync()
{
   ...
}

to

// Common async awaiter
async Task<T> AwaitResult<T>(Task<T> task)
{
    return await task;
}

ValueTask<int> MethodAsync()
{
    if (condition)
    {
        return new ValueTask<int>(0);
    }

    var task = OtherMethodAsync();

    if (!task.IsCompletedSuccessfully)
    {
        return new ValueTask<int>(AwaitResult(task.AsTask()));
    }
    return new ValueTask<int>(task.Result);
}

ValueTask<int> OtherMethodAsync()
{
    if (otherCondition)
    {
        return new ValueTask<int>(1);
    }

    return new ValueTask<int>(AnotherMethodAsync());
}

async Task<int> AnotherMethodAsync()
{
 ...
}
@benaadams
Member Author

benaadams commented Apr 10, 2016

Re: ValueTask vs Task<T>: results for splitting with pre-completed tasks are as follows for 1M ops:

Arch 64 bit - Cores 4
1 Thread - Sync: 5.756ms
1 Thread - Async: 146.579ms
1 Thread - ValueTask Async: 16.917ms
Parallel - Async: 86.191ms
Parallel - ValueTask Async: 4.988ms
Parallel - Sync: 1.661ms

See atemerev/skynet#40

@benaadams
Member Author

ValueTask by itself saves on allocations, but you don't get the full throughput until you also delay the state machine creation (the actual async part of the function) until the first task that is not completed.
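For example, a minimal sketch of that distinction (ReadCoreAsync is an assumed ValueTask<int>-returning operation; the shape mirrors the AwaitResult<T> helper in the issue body):

// Creates a state machine on every call, even when ReadCoreAsync() has
// already completed synchronously.
async Task<int> ReadViaStateMachineAsync()
{
    return await ReadCoreAsync();
}

// Defers the state machine: only falls back to an async helper when the
// inner operation has not already completed successfully.
ValueTask<int> ReadDeferredAsync()
{
    var pending = ReadCoreAsync();
    if (pending.IsCompletedSuccessfully)
    {
        return new ValueTask<int>(pending.Result);
    }
    return new ValueTask<int>(AwaitResult(pending.AsTask()));
}

async Task<int> AwaitResult(Task<int> task)
{
    return await task;
}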

@benaadams
Member Author

@bartdesmet wrote:

"Tail await elimination" is definitely a promising pattern we manually optimize for quite regularly in our code base; it's easy enough to have a function that used to do multiple awaits and ultimately ends up being rewritten such that the async and await combo is no longer necessary.

There are some caveats though, such as getting rid of a single return await if it occurs in a try or using block etc. but nothing a compiler wouldn't be able to determine. Another more tricky one is when this takes away the side-effect of await capturing the SynchronizationContext, which is unknown to the compiler because ConfigureAwait is a library function rather than a language feature. It'd be strange if the compiler would start knowing about this, especially since it occurs in the await site where the language does not prescribe a fixed operand type but instead supports the awaiter pattern.

If we'd have the (much desired) flexibility on the return type for an async method, we'd have two points of interaction with opaque user code: the async builder and the awaiter. When things get optimized away, any (potentially desirable) side-effects also vanish. For example, in Rx, the GetAwaiter extension method for observables has the effect of creating a subscription (unlike Task, an observable is typically born "cold"). If we were to support returning an observable from an async method through this proposal, we'd still be unable to perform a tail await elimination optimization because it'd take away the side-effect of await kicking off the subscription. A caller may simply store the observable returned by the async method, without immediately calling Subscribe, and it'd be unclear whether the observable has become "hot" or not.

ConfigureAwait just happens to be one such case of an extensibility point in today's world with the awaiter pattern. Maybe a Roslyn analyzer and code fix for some of these optimizations is a good first step, ultimately putting the user in control of such rewrites? I'm not sure though whether there's such a thing as a "suggested potential fix" where the lightbulb in the IDE shows links to more information for the user to digest prior to committing to making a code change that could potentially introduce subtle changes in behavior.

@benaadams
Member Author

Removed the .ConfigureAwait from the example; however, that could be lifted if the existing code has one, e.g. rather than taking a Task the second function would take a ConfiguredTaskAwaitable or whatever type was awaited.
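A minimal sketch of that lifting, reusing the MethodAsync/MethodAsyncAwaited shape from the issue body and assuming the original code awaited OtherMethodAsync().ConfigureAwait(false):

Task MethodAsync()
{
    var task = OtherMethodAsync();

    if (task.Status != TaskStatus.RanToCompletion)
    {
        // Hand the configured awaitable to the async helper so the
        // ConfigureAwait(false) from the original code is preserved.
        return MethodAsyncAwaited(task.ConfigureAwait(false));
    }

    return Task.CompletedTask;
}

async Task MethodAsyncAwaited(ConfiguredTaskAwaitable awaitable)
{
    await awaitable;
}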

try/catch/finally/using would probably have to block the rewrites, at least as a first pass; often this can be moved up a function, but that potentially also causes a behaviour change, so perhaps highlight it as a caveat.

The issue with an analyzer rewrite for splitting the functions into pre-completed and async parts is that it generates horribly unmaintainable code; though it is a pattern that is used in some places, e.g. https://github.com/dotnet/corefx/blob/master/src/System.IO/src/System/IO/MemoryStream.cs#L468-L531

@benaadams
Member Author

@ljw1004 wrote

@benaadams - those are interesting ideas. I need to think more deeply about them. I appreciate that you linked to real-world cases of complexity that (1) are forced by the compiler not doing the optimizations, and (2) have real-world perf numbers to back them up. But let's move the compiler-optimizations-of-async-methods into a different discussion topic.

@benaadams
Member Author

@i3arnon wrote

@benaadams automagically removing async from these methods changes exception semantics. It would be extremely confusing for someone using this code:

async Task MainAsync()
{
    var task1 = FooAsync("hamster");
    var task2 =  FooAsync(null);
    try
    {
        await Task.WhenAll(task1 , task2);
    }
    catch (Exception e)
    {
        Console.WriteLine(e.Message);
    }
}

async Task FooAsync(string value)
{
    if (value == null) throw new ArgumentNullException();

    await SendAsync(value);
}

Their app will crash without any obvious reason.

@benaadams
Member Author

@i3arnon it should work? The first function's conversion would be blocked due to the await inside try/catch; the second function would become:

Task FooAsync(string value)
{
    if (value == null) return Task.FromException(new ArgumentNullException());

    return SendAsync(value);
}

@benaadams
Member Author

@i3arnon wrote

@benaadams no.
We now changed where the exception is thrown.
With async the exception is stored in the returned task and so is rethrown when the task is awaited (i.e. inside the try clause of the calling method).
When async is removed this becomes a simple standard method which means the exception is thrown synchronously when FooAsync is called (i.e. just before entering the try clause). So the handled exception is now unhandled because the compiler changed things implicitly.

@benaadams
Member Author

@i3arnon not in the changed function I showed, where the exception is not thrown but Task.FromException is returned instead; behaviour should remain the same?

@benaadams
Member Author

@i3arnon wrote

@benaadams Oh.. right, I missed that. However it doesn't really matter as exceptions aren't necessarily raised explicitly, for example:

async Task FooAsync(object value)
{
    var message = value.ToString();
    await SendAsync(message);
}

You can add try/catches everywhere and emulate the behavior of an async method, but I'm not sure it will improve performance.

@benaadams
Member Author

@i3arnon yeah rewriting everything like:

Task FooAsync(string value)
{
    try
    {
        var message = value.ToString();
        return SendAsync(message);
    }
    catch(Exception e)
    {
        return Task.FromException(e);
    }
}

probably isn't great, and oddly it may work better like:

Task FooAsync(string value)
{
    try
    {
        return FooAsyncImpl(value);
    }
    catch (Exception e)
    {
        return Task.FromException(e);
    }
}

Task FooAsyncImpl(string value)
{
    var message = value.ToString();
    return SendAsync(message);
}

As try/catch also prevents some optimizations; something to measure...

@StephenCleary

I think you'd have to wrap everything in try/catch in the output. But that said, it's still an optimization (the state machine has to have a try/catch, too). Last time I checked (a long time ago, for SEH), trys were cheap and it's the catch that's expensive.

It would be important for semantic reasons to always have the try in the generated code, even if the original code only called a single task-returning method. I am of the persuasion that exceptions from task-returning methods should always be on the returned task, but there are others (notably Jon Skeet) who take the position that precondition-style boneheaded exceptions should be thrown directly and not placed on the returned task. I prefer the everything-on-the-Task approach because in my mind Task represents the execution of that method, but I can see the benefits of the Skeet approach because it's more in line with how LINQ works (lazy evaluation but eager preconditions).

Ended up writing a bit much there. But anyway, the point is that there's a pretty sizable minority who do throw some exceptions directly and not place them on the task, so even in the simplest possible case:

async Task OverloadAsync()
{
  await OverloadAsync(CancellationToken.None);
}

you'd still need a try/catch for proper semantics:

Task OverloadAsync()
{
  try
  {
    return OverloadAsync(CancellationToken.None);
  }
  catch (Exception ex)
  {
    return Task.FromException(ex);
  }
}

because some people write their code like this:

Task OperationAsync(CancellationToken token)
{
  if (!this._preconditionCheck())
    throw new InvalidOperationException("Programmer error.");
  return DoOperationAsync(token);
}
async Task DoOperationAsync(CancellationToken token) { ... }

Bottom line: I'm totally in favor of the optimization. Perhaps we could elide the try if the compiler could prove that OperationAsync is async (and thus cannot throw directly). I expect that would be a common enough case to warrant the work.

@benaadams
Member Author

@StephenCleary your and @i3arnon's examples cover two ends of the spectrum! Both good examples.

I don't think try is expensive per se, but it does prevent some jitter optimizations: inlining, tail calls, etc.

Will do some measurements

@ericeil

ericeil commented Apr 18, 2016

Exception semantics are one issue; another is ExecutionContext semantics. Async methods save the current ExecutionContext on entry, and restore it at the first await. So, for example, if you set the value of an AsyncLocal<T> in the async method, the caller of that method won't see the modified value.

I could imagine an optimizer that could eliminate the exception and/or ExecutionContext overhead under certain circumstances (where it could be proved that no exceptions are thrown and/or no modifications are made to the ExecutionContext) but I'd then wonder whether this optimization would work in enough cases to justify the complexity.
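For illustration, the AsyncLocal<T> difference mentioned above in its simplest form (names are illustrative):

static readonly AsyncLocal<int> Ambient = new AsyncLocal<int>();

// With the async keyword, the modified value is not visible to the caller:
// the execution context is restored when the method yields or completes.
static async Task SetWithStateMachineAsync()
{
    Ambient.Value = 42;
    await Task.CompletedTask;
}

// With the state machine elided, the modification leaks into the caller's
// execution context.
static Task SetElided()
{
    Ambient.Value = 42;
    return Task.CompletedTask;
}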

@benaadams
Member Author

Execution context changes would be hard to detect at compile time, especially when calling via interfaces or virtual methods.

At runtime it could be detected as not default, which would allow a poor solution of doubling up the functions for default context/custom context:

Task MyFuncAsync()
{
    if (ExecutionContext.IsDefault()) 
    {
        // Fast path
        try
        {
            return MyFuncAsyncDefaultContext();
        }
        catch (Exception ex)
        {
            return Task.FromException(ex);
        }
    } 
    else 
    {
        // Code as default state machine rewrite
        return MyFuncAsyncCustomContext();
    }
}

Though now it's starting to get really inelegant... :(

Maybe code generators? #5561

@benaadams
Member Author

benaadams commented Apr 19, 2016

Added ValueTask awaiting:

(Same content as the "ValueTask postponement of async and Task<T>" section now included in the issue body above.)

@ericeil

ericeil commented Apr 19, 2016

Runtime it could be detected as not default

I'm not sure that would help. Even if we start with a default context, if the Async method modifies that context, we still need to restore the original (default) context on the way out.

To preserve all the semantics, the code needs to look more like this:

Task MyFuncAsync()
{
    var ec = ExecutionContext.Capture();
    try
    {
         return MyFuncAsyncImpl();
    }
    catch (Exception e)
    {
         return Task.FromException(e);
    }
    finally
    {
         // Note: this doesn't exist as a public API
         ExecutionContext.Restore(ec);
    }
}

The actual infrastructure code in the Fx uses some internal tricks to make that all work more efficiently than the pseudo-implementation I gave here. The internal functionality could maybe be exposed. But still, this goop is likely the bulk of the cost you're trying to avoid.

@benaadams
Member Author

benaadams commented Apr 19, 2016

Though ExecutionContext.Capture and ExecutionContext.Restore don't do much if t_currentMaybeNull, previous and ec are all Default.

@StephenCleary

Good catch re ExecutionContext. async methods also notify the logical call context to establish a copy-on-write scope.

@bbarry

bbarry commented Apr 21, 2016

@StephenCleary, @ericeil is any of that specified somewhere or is it an implementation detail of the current state machine creation?

@StephenCleary

Neither.

It's not documented/specified, but it can't be treated as "just an implementation detail", either, since it significantly changes the semantics of types like AsyncLocal<T>.

bgrainger added a commit to mysql-net/MySqlConnector that referenced this issue Apr 22, 2016
Avoid "await" if ReceiveReplyAsync completed synchronously (because the data was already in memory).

See dotnet/roslyn#10449 for more details.
@benaadams
Member Author

A follow-up point on performance: while an async method and its state machine construction is pretty fast, due to the viral nature of async one actually-async method generally causes a very large chain of async methods all the way up the stack.

So if it's 30 method calls deep, that's now the construction of 30 state machines per call, which starts to add up.

@benaadams
Member Author

Example code case from aspnet/KestrelHttpServer#863

public Task WriteAsync(ArraySegment<byte> data, CancellationToken cancellationToken)
{
    if (!HasResponseStarted)
    {
        var produceStartTask = ProduceStartAndFireOnStarting();
        // ProduceStartAndFireOnStarting normally returns a CompletedTask
        if (produceStartTask.Status != TaskStatus.RanToCompletion)
        {
            // If the Task was not completed go async and await the task
            // to surface any errors, cancellation or wait for the Task 
            // to complete before calling SocketOutput.WriteAsync
            return WriteAsyncAwaited(produceStartTask, data, cancellationToken);
        }
    }

    // Otherwise fast-path by not constructing an async statemachine, 
    // examining the various contexts and just return the final Write Task
    if (_autoChunk)
    {
        if (data.Count == 0)
        {
            return TaskUtilities.CompletedTask;
        }
        return WriteChunkedAsync(data, cancellationToken);
    }
    else
    {
        return SocketOutput.WriteAsync(data, cancellationToken: cancellationToken);
    }
}

private async Task WriteAsyncAwaited(Task produceStartTask, ArraySegment<byte> data, CancellationToken cancellationToken)
{
    await produceStartTask;

    if (_autoChunk)
    {
        if (data.Count == 0)
        {
            return;
        }
        await WriteChunkedAsync(data, cancellationToken);
    }
    else
    {
        await SocketOutput.WriteAsync(data, cancellationToken: cancellationToken);
    }
}

@benaadams
Member Author

To follow up:

using System;
using System.Diagnostics;
using System.Threading.Tasks;

public class Program
{
    public static void Main(string[] args)
    {
        long limit = 10000000;

        MainAsync(limit).Wait();
    }

    public static async Task MainAsync(long limit)
    {
        var sw = Stopwatch.StartNew();

        GC.Collect();
        sw.Restart();

        await Async1(limit);
        sw.Stop();
        Console.WriteLine("Async1: {0:0.0000}s", sw.Elapsed.TotalSeconds);

        GC.Collect();
        sw.Restart();

        await Async2(limit);
        sw.Stop();
        Console.WriteLine("Async2: {0:0.0000}s", sw.Elapsed.TotalSeconds);


        GC.Collect();
        sw.Restart();

        await Async1(limit);
        sw.Stop();
        Console.WriteLine("Async1: {0:0.0000}s", sw.Elapsed.TotalSeconds);

        GC.Collect();
        sw.Restart();

        await Async2(limit);
        sw.Stop();
        Console.WriteLine("Async2: {0:0.0000}s", sw.Elapsed.TotalSeconds);
    }

    private static async Task Async1(long count)
    {
        if (count == 0) return;

        var tasks = new Task[10];
        for (var i = 0; i < 10; i++)
        {
            tasks[i] = Async1(count / 10);
        }
        for (var i = 0; i < 10; i++)
        {
            await tasks[i];
        }
    }

    private static Task Async2(long count)
    {
        if (count == 0) return Task.CompletedTask;

        var tasks = new Task[10];
        for (var i = 0; i < 10; i++)
        {
            tasks[i] = Async2(count / 10);
        }
        for (var i = 0; i < 10; i++)
        {
            if (tasks[i].Status != TaskStatus.RanToCompletion)
            {
                return Async2Awaited(tasks);
            }
        }

        return Task.CompletedTask;
    }

    private async static Task Async2Awaited(Task[] tasks)
    {
        for (var i = 0; i < 10; i++)
        {
            await tasks[i];
        }
    }
}

Outputs

Async1: 5.8140s
Async2: 1.6628s
Async1: 5.8387s
Async2: 1.6715s

So the non-deferred path is 3.5x slower than the deferred path, which is significant for fine-grained async that is normally synchronous.

@benaadams
Member Author

This might be a better sample, since async is generally viral and "all the way down":

public static void Main(string[] args)
{
    long startCallDepth = 512;
    long repeats = 1000000;

    MainAsync(startCallDepth, repeats).Wait();
}

public static async Task MainAsync(long startCallDepth, long repeats)
{
    var sw = Stopwatch.StartNew();
    for (var i = 0L; i < 10; i++)
    {
        await Async1(startCallDepth);
    }
    for (var i = 0L; i < 10; i++)
    {
        await Async2(startCallDepth);
    }
    sw.Stop();

    var callDepth = startCallDepth;

    while (callDepth > 0)
    {
        GC.Collect();

        sw.Restart();
        for (var i = 0L; i < repeats; i++)
        {
            await Async1(callDepth);
        }
        sw.Stop();

        Console.WriteLine("Async1, depth {1}: {0:0.0000}s", sw.Elapsed.TotalSeconds, callDepth);

        GC.Collect();

        sw.Restart();
        for (var i = 0L; i < repeats; i++)
        {
            await Async2(callDepth);
        }
        sw.Stop();

        Console.WriteLine("Async2, depth {1}: {0:0.0000}s", sw.Elapsed.TotalSeconds, callDepth);

        callDepth /= 2;
    }
}

private static async Task Async1(long count)
{
    if (count == 0) return;

    await Async1(count - 1);
}

private static Task Async2(long count)
{
    if (count == 0) return Task.CompletedTask;

    var task = Async2(count - 1);

    if (task.Status != TaskStatus.RanToCompletion)
    {
        return Async2Awaited(task);
    }

    return Task.CompletedTask;
}

private async static Task Async2Awaited(Task task)
{
    await task;
}

Outputs

Async1, depth 512: 28.3308s
Async2, depth 512: 8.2960s
Async1, depth 256: 14.1378s
Async2, depth 256: 4.0650s
Async1, depth 128: 7.0713s
Async2, depth 128: 1.9988s
Async1, depth 64: 3.3938s
Async2, depth 64: 0.9568s
Async1, depth 32: 1.6903s
Async2, depth 32: 0.4467s
Async1, depth 16: 0.8301s
Async2, depth 16: 0.1789s
Async1, depth 8: 0.4041s
Async2, depth 8: 0.0892s
Async1, depth 4: 0.1804s
Async2, depth 4: 0.0480s
Async1, depth 2: 0.1042s
Async2, depth 2: 0.0287s
Async1, depth 1: 0.0699s
Async2, depth 1: 0.0187s

@benaadams
Member Author

Added the try/catch/finally with execution-context handling; it does cost some perf:

private static Task Async3(long count)
{
    if (count == 0) return Task.CompletedTask;

    var ec = ExecutionContext.Capture();
    try
    {
        var task = Async3(count - 1);

        if (task.Status != TaskStatus.RanToCompletion)
        {
            return Async2Awaited(task);
        }
    }
    catch (Exception e)
    {
        return Task.FromException(e);
    }
    finally
    {
        // Note: this doesn't exist as a public API
        Restore(Thread.CurrentThread, ec);
    }

    return Task.CompletedTask;
}

static ExecutionContext Default = ExecutionContext.Capture();

internal static void Restore(Thread currentThread, ExecutionContext executionContext)
{
    ExecutionContext previous = null ?? Default;
    //ExecutionContext previous = currentThread.ExecutionContext ?? Default;
    //currentThread.ExecutionContext = executionContext;

    // New EC could be null if that's what ECS.Undo saved off.
    // For the purposes of dealing with context change, treat this as the default EC
    executionContext = executionContext ?? Default;

    if (previous != executionContext)
    {
        //OnContextChanged(previous, executionContext);
    }
}

Results

Async1, depth 512: 28.9657s
Async2, depth 512: 8.2836s
Async3, depth 512: 16.8255s
Async1, depth 256: 14.3558s
Async2, depth 256: 4.1034s
Async3, depth 256: 8.2940s
Async1, depth 128: 7.0991s
Async2, depth 128: 2.0186s
Async3, depth 128: 4.1132s
Async1, depth 64: 3.5059s
Async2, depth 64: 0.9612s
Async3, depth 64: 1.9936s
Async1, depth 32: 1.7041s
Async2, depth 32: 0.4383s
Async3, depth 32: 0.9555s
Async1, depth 16: 0.8618s
Async2, depth 16: 0.1804s
Async3, depth 16: 0.4571s
Async1, depth 8: 0.4166s
Async2, depth 8: 0.0903s
Async3, depth 8: 0.2291s
Async1, depth 4: 0.1817s
Async2, depth 4: 0.0491s
Async3, depth 4: 0.1165s
Async1, depth 2: 0.1100s
Async2, depth 2: 0.0290s
Async3, depth 2: 0.0611s
Async1, depth 1: 0.0727s
Async2, depth 1: 0.0187s
Async3, depth 1: 0.0352s

@khellang
Member

The tail call optimization is discussed at #1981

@gafter
Member

gafter commented Mar 25, 2017

We are now taking language feature discussion in other repositories:

Features that are under active design or development, or which are "championed" by someone on the language design team, have already been moved either as issues or as checked-in design documents. For example, the proposal in this repo "Proposal: Partial interface implementation a.k.a. Traits" (issue 16139 and a few other issues that request the same thing) are now tracked by the language team at issue 52 in https://github.com/dotnet/csharplang/issues, and there is a draft spec at https://github.com/dotnet/csharplang/blob/master/proposals/default-interface-methods.md and further discussion at issue 288 in https://github.com/dotnet/csharplang/issues. Prototyping of the compiler portion of language features is still tracked here; see, for example, https://github.com/dotnet/roslyn/tree/features/DefaultInterfaceImplementation and issue 17952.

In order to facilitate that transition, we have started closing language design discussions from the roslyn repo with a note briefly explaining why. When we are aware of an existing discussion for the feature already in the new repo, we are adding a link to that. But we're not adding new issues to the new repos for existing discussions in this repo that the language design team does not currently envision taking on. Our intent is to eventually close the language design issues in the Roslyn repo and encourage discussion in one of the new repos instead.

Our intent is not to shut down discussion on language design - you can still continue discussion on the closed issues if you want - but rather we would like to encourage people to move discussion to where we are more likely to be paying attention (the new repo), or to abandon discussions that are no longer of interest to you.

If you happen to notice that one of the closed issues has a relevant issue in the new repo, and we have not added a link to the new issue, we would appreciate you providing a link from the old to the new discussion. That way people who are still interested in the discussion can start paying attention to the new issue.

Also, we'd welcome any ideas you might have on how we could better manage the transition. Comments and discussion about closing and/or moving issues should be directed to #18002. Comments and discussion about this issue can take place here or on an issue in the relevant repo.


I am closing this issue because discussion appears to have died down. You are welcome to open a new issue in the csharplang repo if you would like to kick-start discussion again.

@gafter gafter closed this as completed Mar 25, 2017
@Thaina

Thaina commented Oct 25, 2017

I think this is not a language design issue but a compiler/IL-generation optimization, which should live in roslyn. Thus I request this thread to be reopened here.

This feature does not make any change to the language; only the generated IL would be optimized and analyzed.
