Document the memory model guaranteed by dotnet/runtime runtimes #63474
Comments
I couldn't figure out the best area label to add to this issue. If you have write-permissions please help me learn by adding exactly one area label.
Tagging subscribers to this area: @dotnet/area-meta
I just faced a case where I'm accessing the same memory mapped at two different addresses. It's a memory-mapped-file-based circular buffer. The implementation can be simplified by mapping the buffer twice, back to back. So I wonder what guarantees .NET makes in such a case.
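For context, here is a minimal sketch of "the same memory reachable at two addresses", assuming a pre-existing file buffer.bin; it only illustrates the setup, not any ordering guarantees:

using System.IO.MemoryMappedFiles;

// Two views over the same file region: the same physical memory becomes
// reachable through two different virtual addresses in this process.
using var mmf = MemoryMappedFile.CreateFromFile("buffer.bin");
using var viewA = mmf.CreateViewAccessor();
using var viewB = mmf.CreateViewAccessor();   // second mapping of the same memory

viewA.Write(0, 42);                  // write an Int32 through the first view
int throughB = viewB.ReadInt32(0);   // read the same underlying memory through the second view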
@stephentoub @danmoseley do you have a suggestion for a better area path where this expertise lies?
I'm not sure any is perfect, but area-CodeGen-coreclr is probably a good place to start.
Tagging subscribers to this area: @JulieLeeMSFT
For (3), here's my test code:

[TestMethod]
[MethodImpl(MethodImplOptions.AggressiveOptimization)]
public void AliasArray()
{
Debugger.Break();
var ba = GetArray<byte>();
var ia = GetArray<int>();
var sum = 0;
for (int i = 0; i < ba.Length; i++)
{
sum += ba[i];
ia[i] = 1; //write, uncomment this to see difference
sum += ba[i];
}
GC.KeepAlive(sum);
}
[TestMethod]
[MethodImpl(MethodImplOptions.AggressiveOptimization)]
public void AliasSpan()
{
Debugger.Break();
var ba = GetArray<byte>().AsSpan();
var ia = GetArray<int>().AsSpan();
var sum = 0;
for (int i = 0; i < ba.Length; i++)
{
sum += ba[i];
ia[i] = 1; //write, uncomment this to see difference
sum += ba[i];
}
GC.KeepAlive(sum);
}
[MethodImpl(MethodImplOptions.NoOptimization)]
static T[] GetArray<T>() => new T[10];
I can answer the questions about aliasing.
None. The compiler assumes such aliasing will never happen and uses this for optimizations. The compiler also assumes static fields do not alias each other. This means that mutable overlapping RVA statics are not supported.
Just as above, the JIT assumes arrays of incompatible types will not alias each other (note it takes into account things like …). On the other hand, the compiler does not assume that byrefs or unmanaged pointers pointing to different types will not alias, so span reinterpretation is safe from the aliasing point of view. Notably, there are compiler bugs that exist today which make the compiler assume that writes to "known" (derived from arrays or statics) locations will only be performed using "proper" fields.
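As a hedged illustration of the span reinterpretation point (a sketch, not an authoritative statement of the rules): viewing the same memory through byte and int spans is something the JIT must treat as potentially aliasing:

using System;
using System.Runtime.InteropServices;

int[] data = new int[4];
Span<byte> bytes = MemoryMarshal.AsBytes(data.AsSpan());   // byte view of the ints
Span<int> ints = MemoryMarshal.Cast<byte, int>(bytes);     // int view of the same memory

bytes[0] = 1;                   // write through the byte view
Console.WriteLine(ints[0]);     // the read through the int view observes it (1 on little-endian)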
cc @dotnet/jit-contrib.
cc @mangod9.
An example where having a documented memory model would help. Another example asking for clarity.
Do synchronization primitives (locks, events, tasks, ...) generate a barrier? Normally, the answer is yes. So if you call …
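A minimal sketch of what "generate a barrier" means in practice, assuming the usual reading that lock acquire and release have acquire/release semantics:

class Shared
{
    private readonly object _gate = new object();
    private int _value;

    public void Publish(int v)
    {
        lock (_gate) { _value = v; }      // the write cannot escape past the lock release
    }

    public int Read()
    {
        lock (_gate) { return _value; }   // acquiring the same lock makes the write visible
    }
}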
I have a lot of context on this matter from various angles - from assumptions and guarantees in the C# spec/compiler, to memory model invariants in the runtime (VM, GC, JIT), to how it translates to the native code (x64, arm, arm64). I'd like to take this issue, if no one is already working on it.
@VSadov I think it's safe to say it's yours.
Based on dotnet/csharpstandard#366 (comment), I think a new version of ECMA-335 is unlikely. I imagine the memory model documentation would be posted somewhere in https://github.com/dotnet/runtime/tree/main/docs/design.
Note: RyuJIT does not reorder exceptions; any and all cases where it does are bugs. I would expect the reordering you observed on full framework to have been a bug as well; I was told once that JIT64 especially did not take great care in preserving exceptions.
How do volatile loads and stores interact with the hardware? This discussion came up in …; ECMA has the opinion that volatile accesses can access hardware registers. This means that volatile loads cannot be dropped or coalesced (e.g. …). Apparently, the JIT does just that, though. And @VSadov just declared that this deviation from ECMA is acceptable. If the .NET memory model is to be defined rigorously then this issue must be resolved.
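A sketch of why coalescing matters (a hypothetical example, not taken from the linked discussion): if the volatile reads in successive loop iterations were merged into a single load hoisted out of the loop, the loop below could spin forever even after Stop() is called from another thread:

class Poller
{
    private volatile bool _stop;

    public void Run()
    {
        while (!_stop)   // each iteration is expected to perform a real (acquire) load
        {
            // do one unit of work
        }
    }

    public void Stop() => _stop = true;   // store performed by another thread
}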
And we use …
In theory languages running on the CLR can implement their own memory model. In practice languages typically specify only grammar and single-threaded semantics and leave memory model issues to the runtime. The memory model for a language like C# mostly affects what optimizations may be performed, as the memory model tells what would be observable. As a result it makes a lot of sense to just assume and provide the memory model of the underlying runtime.

For example consider:

x += 1;

Can we emit the code as

x = x + 1; // evaluating x twice

or must we do

ref var t = ref x; // evaluating x once
t = t + 1;

The language spec says that x is evaluated only once. However the CLR memory model says that if x is accessed only by a single thread, then introducing reads and writes is allowed as long as the overall result is the same. Therefore if the compiler sees that x is a local variable, then it can use the simpler x = x + 1 form.

FWIW, for the purpose of providing examples in the CLR memory model doc I am going to use C#, assuming a mainstream compiler targeting the CLR will preserve the memory model.

Typically a compiler will treat its code generation strategy as an implementation detail, so that it is not constrained if changes need to be made. There are obvious concept leaks in areas like cross-language interop or when working with platform services like reflection.
It is just another feature in the runtime. GitHub is an obvious place for bugs, feedback, documentation clarifications or discussions of in-progress items, but items here tend to have an action-oriented lifetime - Opened/Assigned/Closed...
We have https://github.com/dotnet/runtime/discussions. Note that the labeler/notifications do not work there, so if a question gets overlooked, you may have to @-mention the right person.
@VSadov I'm not sure if the code generation strategy can be seen as just an implementation detail. An optimizing compiler can remove variables and rewrite the code completely, so the guarantees about loads/stores into memory locations at the CLR level may have no counterpart at the C# level. Here is my favorite example: https://godbolt.org/z/YGGffzaab. The optimizing compiler folded the arithmetic series, removed the loop variable and the loop altogether, so there is no memory location corresponding to the loop variable at all. This example shows that the C# optimizer, too, can omit creation of objects. Keeping this in mind, I find it complicated to reason about memory locations because I cannot control which of them are going to exist; as a developer, I can control only C# variables, but not whether the said variables would generate a memory location.
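Presumably the godbolt example has roughly this shape (a reconstruction, not the actual linked code): an optimizing compiler can fold the loop into the closed form n * (n - 1) / 2, so no memory location for i or sum needs to exist at run time:

static int Sum(int n)
{
    int sum = 0;
    for (int i = 0; i < n; i++)
        sum += i;        // the whole loop can be replaced by n * (n - 1) / 2
    return sum;
}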
Isn't … when …?
This is a choice of the language designers and depends on their goals. As another example, specifying that a C# local variable is always an IL local would be inconvenient. Sometimes a local can be optimized away and sometimes it needs to be a field in a display class or struct if the local's lifetime needs to exceed the lexical scope that created the local (happens in lambda or async capturing cases). Not specifying a particular implementation for locals is certainly useful. I think that fields are always emitted as fields, but I can't find it in any formal spec.
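A small illustration of the capturing case (the closure-class remark in the comment is about typical compiler output, not a guaranteed shape):

using System;

class CounterFactory
{
    public static Func<int> MakeCounter()
    {
        int counter = 0;            // lexically a local...
        return () => ++counter;     // ...but captured, so it must outlive this scope; the compiler
                                    // typically hoists it into a field of a generated closure
                                    // ("display") class rather than keeping it as an IL local
    }
}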
@VSadov I understand that code generation is free to do whatever it wants with a variable, and that this freedom is advantageous for better compiled code quality. But then, as a developer I have trouble reasoning in memory locations and loads/stores, because the actual existence of memory locations depends on the compiler's code generation strategy. What I would need is an ability to reason in terms of C# variables, and obtain memory model guarantees in terms of the variables. (These guarantees might not be well-suited for the runtime repository, though.)
Maybe volatile would guarantee that a C# variable corresponds to a memory location. I'm, however, interested in the non-volatile case too, as most variables are non-volatile. My point, however, is that developers usually need to think in variables, not in memory locations, and given that there is no direct mapping between them (the runtime is free to elide variables, introduce temporary variables, or maybe reuse a memory location for different variables, etc.) it would be advantageous to get the memory model expressed in terms of variables.
You can do this sort of reasoning only for methods such as …
@jkotas At least there is a guarantee that an immutable object doesn't need any … So as a developer I don't need to think in memory locations, at least in this case. This is the kind of guarantee I'm talking about. (Let me reiterate: I'm not sure if the language-level guarantees are on-topic in this repository. I'm sorry for the off-topic if they are not.)
As far as I know C# makes the same assumptions about observability of accesses to variables that can be shared with other threads. Optimizations are only performed when they are unobservable. See the example in #63474 (comment). Either a variable is not observable from multiple threads, and then it can follow just single-threaded semantics, or it must be a field and it must follow the memory model, which is basically the same as the CLR memory model. Is this enough of a "connection" to allow applying the CLR memory model to C# variables?
Yes, as long as the object is initialized before publishing (in program order). This is a very common pattern where a shared instance is lazily initialized by threads that see the instance is null.
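A minimal sketch of that pattern, with ImmutableThing standing in for a hypothetical immutable type:

using System.Threading;

class Cache
{
    private static ImmutableThing s_instance;

    public static ImmutableThing Get()
    {
        var local = s_instance;
        if (local == null)
        {
            var candidate = new ImmutableThing();   // fully initialized before publishing
            // several threads may race here; whichever publishes first wins,
            // and every published value is a fully initialized object
            local = Interlocked.CompareExchange(ref s_instance, candidate, null) ?? candidate;
        }
        return local;
    }
}

class ImmutableThing { }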
Thank you for this valuable addition! I think this should be enough for connecting C# to the CLR memory model, but I'd prefer to hear the opinion of language specialists more experienced than me. Can I reformulate your point this way: …?
Yes. yield/await do not imply sharing local variables with another thread.
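A small illustration of the await case: the local may be hoisted onto a state machine and the continuation may run on a different thread-pool thread, but only one thread executes the method body at a time, so the local is never accessed concurrently:

using System.Threading.Tasks;

static async Task<int> SumAsync()
{
    int partial = 1 + 2;        // written before the await
    await Task.Delay(10);       // the continuation may resume on another thread
    return partial + 3;         // still observes the value written above
}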
@VSadov That's an interesting topic. Is there really a guarantee that enumeration won't happen on different threads? I mean something like this:

var en = seq.GetEnumerator();
(new Thread(Enumerate)).Start();
(new Thread(Enumerate)).Start();
void Enumerate()
{
while (en.MoveNext())
Console.WriteLine(en.Current);
}

With this code, MoveNext on the same enumerator ends up being called from two different threads.
We routinely run into questions about the coreclr / mono memory models and what model modern .NET code should be targeting.
ECMA specifies a memory model:
http://www.ecma-international.org/publications/standards/Ecma-335.htm
but it's mostly weaker than what's actually supported by the runtimes.
Joe Duffy wrote down a rough sketch of the model from the .NET Framework 2.0 days:
http://joeduffyblog.com/2007/11/10/clr-20-memory-model/
but that was a long time ago, and unofficial.
Igor Ostrovsky wrote two very nice articles about the memory model:
https://docs.microsoft.com/en-us/archive/msdn-magazine/2012/december/csharp-the-csharp-memory-model-in-theory-and-practice
https://docs.microsoft.com/en-us/archive/msdn-magazine/2013/january/csharp-the-csharp-memory-model-in-theory-and-practice-part-2
but that's also from a decade ago, and things have evolved.
We should: …