Nominal types vs. inter-module interaction #148
Recently I thought about something potentially related, but in the context of interface types: what if one module's string is …
I'm just going to say that, at first glance, this looks like the same problem that linkers try to solve using weak symbols.
Solution A) is nice because it just improves upon structural typing with "nominal typing light", i.e. it just reduces the number of structural typing matches by additionally matching on the tag "string". Unintentional collisions are thus not a serious problem, since they make things strictly better than not having the tag at all. And it's not hard to come up with more elaborate names that will not clash unless someone specifically wants them to. Conversely, you could allow people to still opt in to structural typing by specifying …
Based on @rossberg's observation that, to express generics, rtts might be dynamically created at runtime in a potentially unbounded way: this sort of canonicalization system would also have to be available at runtime. Maybe there could be an instruction that accepts an rtt and some kind of chunk of memory, to cache the rtt in the system using the memory chunk as a dynamic key. (Or, it could be done in a user module which everyone is expected to import, as many others have discussed...)
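As a rough illustration of the user-module variant (host-side TypeScript; all names here are hypothetical, not a concrete proposal):

```ts
// A shared registry that every module would import. The first module to
// register a key supplies the canonical rtt; later registrations for the
// same key get the cached one back.
const canonicalRtts = new Map<string, unknown>();

export function canonicalize(key: string, candidate: unknown): unknown {
  const existing = canonicalRtts.get(key);
  if (existing !== undefined) return existing;
  canonicalRtts.set(key, candidate);
  return candidate;
}
```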
This observation is false. There already exists a C# compiler that produces independently verifiable (i.e. typed) x86 that counters this claim.
Could you make an issue that describes how this is done, with reference to the example @rossberg used in his slides?
My precise observations were:
Please explain which of these is false. AFAICT, the only solution was to essentially forego the premise of (4) and implement a mirror RTT representation in user space. I believe that's what you said as well. The problem with such an implementation is that it fundamentally requires "erasure" for generic type parameters. Hence it would not be able to benefit from more precise typing via Wasm-level generics once we add them, and would instead be left with "unnecessary casts" for each access to something typed by a parameter -- the very problem you complain about elsewhere wrt array accesses.
I think "the premise of (4)" is the crucial point. Yesterday's presentation sounded (to me) as if it took that premise as a given -- which I had trouble wrapping my head around, but clarifying that it is a conditional premise clears that up. I think this premise is very much up for discussion. As even yesterday's presentation pointed out early on, the primary purpose of the types in WasmGC is to aid the engine in producing safe and fast code. Supporting every single feature of every single surface-level language's type system is a non-goal in particular because it is impossible. For a low-level language with "assembly" in its name, it is appropriate and also nicely general/flexible to provide low-level primitives; we'd then expect module producers to build their individual required semantics on top of that. On the other hand, for better performance, we want to allow engines to (safely!) drop "unnecessary casts", which requires the type system to express somewhat higher-level (i.e. closer to the surface language) notions and guarantees. Clearly we have to draw the line somewhere. I'm not sure where exactly we should draw the line; most folks on this issue tracker would probably agree that that's a very difficult question. I think it's fair to argue that supporting cast-free array element loads is more important than enabling dynamically generated type parameters to piggyback on the Wasm-level casts -- so drawing the line somewhere between these two wouldn't generally sound unreasonable to me. I mean, even if the C# type system allows you to generate |
Following up on @jakobkummerow's comment, this sentence seems to be the source of misunderstanding. On slide 36 ("Custom Casting with Identifiers") of my presentation overviewing the related literature, I showed how "nominal" typed assembly languages reason about casts at the assembly level. That is, these "nominal" systems are able to reason about casts below the level WebAssembly …
@jakobkummerow, I agree, and that was kind of what I was getting at. Unfortunately, though, it does not matter how often such a feature is used. In a separate compilation scenario you cannot make useful assumptions about that and usually need to compile for the general case. (The only localised scenario would be such a constrained and rare case that it's hardly worth optimising for.) In practice, the implication is that you won't be able to piggyback C#/.NET casts on Wasm casts. That in turn means that adding generative RTTs to Wasm really only helps Java. Which raises the question that I have raised before: whether it is worth adding them at all for the MVP.
@RossTate, yes, but that is assuming a whole machinery of highly non-standard and specialised typing extensions that I estimate would be post-post-MVP at best.
Can you explain what you mean by "highly non-standard"? There is only one implemented system that is comparable to the Post-MVP goals of the GC proposal, and it uses these techniques, so I don't understand how you can classify them as highly non-standard.
I would like to voice my support for the …
I like the idea of somehow adopting nominal struct types (if only to facilitate interaction with JS, e.g., so that fields can be easily named), but I worry that it could be a bit difficult to maintain this …

Another option we might consider is having array types be structural but struct types nominal. I think this is a sort of natural split that many programming languages use anyway. The whole graph canonicalization problem goes away with this definition of type equality, if I understand correctly. With structural array types and nominal struct types, simple examples like strings can just work without needing an …
Yes, the global namespace is a little weird. Perhaps the …
Unfortunately not. Here's an example of three array type definitions that are equivalent under an equirecursive system but not syntactically equivalent:
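For instance, three definitions along these lines (an illustrative reconstruction):

```wat
;; All three types unfold to the same infinite type
;; array(array(array(...))), so an equirecursive system must identify
;; them even though no two definitions are syntactically equal.
(type $a (array (ref $a)))
(type $b (array (ref $c)))
(type $c (array (ref $b)))
```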
@tlively this seems to still require a new global namespace, for modules rather than types. At least for tools, the …

Things do get more complicated when attempting to compose modules together at runtime (maybe provided by a CDN). In the …
In the world @littledan is hypothesising, it might make sense to restrict array types to being defined inductively (with respect to other array types).
Indeed, I thought of C++'s COMDATs, which work much like groups of weakly defined symbols: https://www.airs.com/blog/archives/52: "In C++ there are several constructs which do not clearly live in a single place".
I don't think this is different from how normal imports work, is it? e.g. all Emscripten imports come from the global "env" or "wasi" modules.
Yes, I agree that this is the case for tools operating on module graphs. For tools operating on a single module at a time, it is more akin to having the module declare and export all of its types. Both points of view are useful in their respective contexts.
Yes, I would expect that in addition to …
Currently it's a very flexible convention of the host as to how "global" the module namespace really is. For example, the JS-API doesn't have a global namespace; a record labelled "env" must be passed as an explicit argument to the instantiation call (and could be varied with each instantiation). An …
Oh, I see what you mean. I wasn't thinking that …

Say modules A and B both declare the same importexport types, along these lines:
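A sketch of what that might look like (strawman syntax; the namespace "shared" and the type names and shapes are all placeholders, not from the proposal):

```wat
(module ;; A
  (type $t (importexport "shared" "T") (struct (field i32)))
  (type $u (importexport "shared" "U") (array (ref $t))))

(module ;; B
  (type $t (importexport "shared" "T") (struct (field i32)))
  (type $u (importexport "shared" "U") (array (ref $t))))
```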
Say we instantiate …
@tlively ah I see, this is like a fleshed-out solution B of the OP? I was reading all of the comments in the frame of (something like) solution A... sorry. I agree it makes sense to add a module namespace to the …

I like this idea. It would still require some user-defined central coordinating logic (to keep a record of previously exported types), so I don't know what @rossberg thinks. This solution's primary value would be in smoothing over issues of unclear ownership (e.g. two modules failing to compose because they both export a nominal type they "morally" want to share), …
@conrad-watt, can you elaborate on the user-defined central coordinating logic that would still be needed?
@tlively imagine some system that needs to dynamically compose modules together at runtime in some fancy way. In general, the top-level may not know ahead of time whether the modules it's composing together want to share types. So it has to come up with some convention to decide when to allow an …

The obvious system would be to use the declared namespaces naively (although I'm sure something more complicated could be done). So the top-level script keeps a map from …

I personally don't think requiring such code in this scenario would be a problem, but @rossberg has been attached to the idea that even the most complicated module composition schemes could in principle avoid requiring the user to manage types in a central location, by using …
When using the declared namespaces naively, it is necessary to instantiate the modules in a topological order such that exporters are instantiated before the corresponding importers. It is also sufficient to unconditionally provide the exports of every module instantiated so far as the available imports to the next module. This is true with just function imports and exports, and neither type imports and exports nor type importexports would change that.

As far as more complicated systems go, I was just chatting with @sbc100 about how Emscripten's dynamic linking system already requires type coordination for correctly implementing dynamically loaded function pointers with MVP WebAssembly, so it's already not the case that type coordination can be avoided by all systems. I'm sympathetic to the idea that type coordination should not be overly burdensome, and in particular I think it makes sense to resolve the asymmetry problem, but beyond that we will need to be more precise about the problems we want to solve to make forward progress.
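For concreteness, the naive convention could look roughly like this in a JS host (a sketch under the assumption that type exports would flow through instance.exports like any other export; not an implemented API):

```ts
// Instantiate modules in topological order, offering everything exported
// so far as the available imports for the next module.
async function linkTopologically(
  ordered: Array<[name: string, module: WebAssembly.Module]>
): Promise<Record<string, WebAssembly.Instance>> {
  const available: WebAssembly.Imports = {};
  const instances: Record<string, WebAssembly.Instance> = {};
  for (const [name, module] of ordered) {
    const instance = await WebAssembly.instantiate(module, available);
    available[name] = instance.exports; // exporters precede their importers
    instances[name] = instance;
  }
  return instances;
}
```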
To check my understanding first: could this idea be framed as enabling modules to optimize their validation+instantiation time by reducing the number of type definitions that need to be checked for structural equality? In theory, …

If that is the case: could these optimizations also be achieved by replacing …?

But I also wonder if there's a separate unstated performance objective here of specifically wanting to have a string as a way to simplify the underlying structural equality algorithm? If so, then I think the actual win depends on the pending choice of iso- vs. equi-recursive structural equality, since, iiuc, iso-recursion would make type definitions fairly easy to hash and canonicalize.
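To illustrate that last point (my own sketch, not part of any proposal): because iso-recursive equivalence is syntactic, a recursion group can be serialized deterministically and deduplicated with an ordinary hash map, with no graph minimization:

```ts
// References inside the current group use group-relative indices; references
// to earlier types use their already-assigned canonical ids. Equal groups
// therefore serialize to equal keys.
type TypeUse =
  | { kind: "rec"; index: number }   // into the current recursion group
  | { kind: "canon"; id: number };   // an already-canonicalized type

interface TypeDef { op: string; uses: TypeUse[] } // deliberately simplified

const canonicalIds = new Map<string, number>();

function canonicalizeGroup(group: TypeDef[]): number {
  const key = JSON.stringify(group); // deterministic for this fixed shape
  let id = canonicalIds.get(key);
  if (id === undefined) {
    id = canonicalIds.size;
    canonicalIds.set(key, id);
  }
  return id;
}
```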
I think this is something that comes from having nominal types in general, rather than specifically …
This discussion of …
Are we concretely considering iso-recursive types? This would also avoid graph canonicalisation.
Ah, avoiding graph canonicalization is what I was getting at in my third paragraph in my last response (although I said "performance objective" when it's also an "engineering complexity objective"). If that is the major motivating goal, +1 to considering iso-recursion. IIUC, with …
Yes, both nominal types and iso-recursive types would eliminate the graph canonicalization in essentially the same way. @rossberg has hinted that he will present on the iso-recursive types option in the not-too-distant future, and I expect we will want to compare and contrast that design with both the current equi-recursive proposal and the nominal design explored in this issue. I would be surprised if any of those three options was the clear best by all the measures we care about.
I'll note that iso-recursive types will not remove the burden of type canonicalization from tools or engines. With iso-recursive types, you can still have distinct type indices in the same module or across modules defining the same type, and …

Also, the various proofs of undecidability I gave apply just as well to (extensions of) iso-recursive types as to equi-recursive types. And iso-recursive types have composability problems that are well known to be problematic in the world of modules, the details of which I'm sure @rossberg knows better than me. So, although iso-recursive types will make subtyping/canonicalization linear-time rather than quadratic-time (and still not constant-time), I do not expect them to solve any of the real existing problems, and I expect them to introduce new, even subtler problems.
In #220, we had this example using …
@rossberg had a few questions and comments about this example, and I'm answering here so as not to clutter the other thread with the details of this particular mechanism.
It's still nominal in the sense that types have identity independent of their structure, and two instances agree on the identity of a type only if it has explicitly been exported (possibly via …).
The semantics is no different from normal (nominal) type definitions and type imports, except that whether it is a definition or an import is determined by whether an import with the expected name is supplied at instantiation time. If the import is supplied, this is no different from any other nominal type import. Otherwise, it is no different from any other nominal type definition. So if you look at corresponding types exported from two instances, they will be equivalent if and only if the export from one instance was supplied as an import to the other instance.
Does it make sense now that …?
Either module could be instantiated first, but without loss of generality let's assume A is instantiated first. We instantiate it with no imports, so both of the importexport types act as fresh definitions.

This linking action of taking all the exports produced so far and passing them as potential imports to the next module to be instantiated is no different from how linking would be done today (see the sketch below). @rossberg, does that all make sense?
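The link-time decision for each importexport could be as simple as this (hypothetical host-side logic for the strawman semantics; names illustrative):

```ts
type Resolution =
  | { kind: "import"; type: unknown } // link to the existing export
  | { kind: "fresh" };                // the fallback definition is generative

function resolveImportexport(
  name: string,
  available: Record<string, unknown>
): Resolution {
  // If a matching export was already produced by an earlier instantiation,
  // use it; otherwise the module's own definition creates a fresh type.
  return name in available
    ? { kind: "import", type: available[name] }
    : { kind: "fresh" };
}
```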
A couple more points I just thought of:
@tlively, thanks for the clarification. If I understand what you have in mind then it is that a definition becomes retroactively generative if it's not supplied as an import. Or in other words, if the import isn't supplied, then a generative one is synthesised implicitly. Just to clarify, how does that work transitively? Imagine:
What if I supply "u" as an import, but not "t"? Is that rejected? What if I had …
And what if it was …
or …
and "t" is or is not supplied?
I agree that separating these from exports would be a natural factorisation (and would avoid complicating static linking tools more than necessary).
If I understand you correctly, then that would be a total show stopper. The signature of a module (as a list of imports and exports) would no longer suffice to actually describe the type of a module. You'd have additional constraints on the side that you'd need to surface and make explicit somehow, because they affect what instantiations are allowed. And these constraints would generally depend on implementation details of the module, so could be an abstraction leak. Consequently, this not only breaks our current compilation model (and the module linking proposal), there is also no clean way to extend it to fix that, at least not with such a semantics.

The obvious alternative to this semantics would be that validation conservatively assumes that any weak imports are distinct. If they become equivalent after the fact then that's okay, but for all intents and purposes of validation, they are considered to be distinct types.

But apart from these technical questions, the bigger one is: would such a feature actually solve the problem with nominal types? And the answer is: no. While it allows you to defer the decision whether a type is an import or an export until link time, you still have to make a choice at that point. And to maintain type sharing, you'd be forced to opt all importexports into imports, and therefore still produce a module graph with inverted dependencies. That still amounts to a global program transformation, which is only viable if all clients of a module are known and linking is performed and controlled at a central place. So you essentially gain nothing over regular type imports; modular layering and incremental linking remain broken.

Basically, this already follows from my description above: "in other words, if the import isn't supplied, then a generative one is synthesised implicitly." This makes clear that this feature does not fundamentally change the expressiveness of the system. It merely adds some convenience: you don't need to define fresh types explicitly. But there is no further difference to just doing that at the place where you would omit them.
Yes, that's a good way to put it 👍
At instantiation time, there would be a shallow structural check (taking nominal identities into account) to ensure that all of the imported types have the same top-level structure as the fallback definitions that were used during validation and compilation. In this example, the type of …
Instantiation would necessarily fail in this case for the same reason: the instantiation of …
I think …
Ah, good idea. I think that would work.
Edit: actually, I'll reply to that here, too, since I think the details of …
The compilation of each module does need to know about the modules it imports types from and the structure of those imported types. That knowledge also needs to include transitive dependencies. The compilation scheme would use …

Critically, however, this compilation scheme does not require knowledge of client modules, so I disagree that it requires global program knowledge or a global program transformation. AFAICT, this scheme supports fully separate compilation of libraries that do not depend on each other, even if those libraries end up getting linked into the same program.
AFAICT, it adds just a tiny bit of expressiveness—the ability to create type import-export pairs between modules that do not create "instantiated-before" constraints between those modules. And that extra expressiveness is precisely what is required to support cyclic dependencies in the type graph.
You are correct that the instantiated-before constraints are not hardwired into the modules themselves. But the same would be true if I used regular imports and created the types manually, by some mechanism, wouldn't it? There is no effective difference I can see. Moreover, while the instantiated-before constraint is not technically hardwired into the modules, you still have to commit to it in your linking scheme/convention, and all modules have to agree on the same conventions upfront. Otherwise they will not be linkable with one another. So for all practical purposes, whether the direction is decided by enforcement or by mere convention has the same effect.
FWIW, that is not generally necessary when types are exports, only when they are inverted into imports. In the export approach, it's sufficient to reexport a transitive export, and the intermediate import may be devoid of any constraints on the type if they are irrelevant to the intermediate module, i.e., it has no usage constraints. With the import model, all these become definitional constraints, imposed on some other module supplying the types (e.g., a client), and they have to be repeated at every level to check the constraints transitively. It's probably more effective if I prepare another presentation for some of the next meetings to explain these problems more concretely.
Yes, I agree. The only difference is that …
Agreed, but is there anything special about type imports and exports that makes this more constraining than before? These constraints and conventions already have to be in place for all other imports and exports to work correctly.
Yes, I agree that intermediate modules that just re-export imported types would require knowledge of the type structure in these nominal schemes but not in the equirecursive scheme with type bounds. But in either system the defining module upstream from the intermediate module and the using module downstream from the intermediate module both need to know (to some nontrivial extent) the shape of the type, so I can't imagine a realistic compilation scheme for which requiring the type shape in the intermediate module would be a problem. Can you describe such a compilation scheme?
More examples would be great and it would be helpful if you could post them to a discussion thread so we all can take the time to digest and respond to them. Then once we're all on the same footing, we can continue the discussion with higher bandwidth in a meeting.
Yes, but the problem isn't the presence of type imports or exports per se. The problem is inverting the original direction of (structural) type exports to imports during either compilation or linking. That only works if you consistently do it for all modules in the program. And it's a whole-program transformation because it does not respect the dependency structure of the source, e.g., it changes which modules you can link independently and in what order. Obviously, if you are doing whole-program linking anyway then that doesn't matter, but otherwise it does.
In the export scheme, only the defining module needs to know the exact type, all downstream ones can choose to ignore it, partially or completely, while still passing through the information. For one, that changes code size, the amount of transitive imports needed (potentially exponentially so!), and the recompilation story when changing a type, but all those are comparably minor points. The big problem is handling type sharing across multiple independent modules. When exporting structural types, you can just define them in multiple places, and you can link and use each module independently, in different places, and reexport their types without worrying how downstream clients might use them (together). (In fact, for structural types, you don't even need explicit exports, but that's just another aside.) When inverted into nominal imports from the client, then you simply cannot do that. You cannot link either of the modules before you know if there is or isn't a downstream use where these types flow together and need to be compatible. You ultimately need to know the entire program before you can decide where to define the types and how to supply them.
I could try that, but it would take way more time. I have increasingly realised that quite a bit of context needs to be established for a thorough explanation, and there's been ample evidence here that a GitHub discussion is not an effective way to achieve that, because we all get dragged down into excessively time-consuming tangents. I believe it's much more effective and time-efficient for everybody to walk through examples interactively, at least for starters.
There is already an example in flight in #220. If you now believe that the example you chose is somehow not representative, then please provide one that is more representative. Feel free to give more context on the example, e.g. not just the source code but also how things are compiled, loaded, and linked—in fact, I'd prefer you provide that additional context. But I, for one, have found the GitHub discussions to be much more effective, and especially much more fair, than presentations.
#220 is about mutual recursion, which is a different can of worms. As per the current issue, I was referring to explaining the general modularity hazards of nominal types, to get past repeating myself over and over. This time I even ought to be able to demo how they directly affect something like the Wob compiler.
I'm not too worried about having to do this inversion for all modules in a program, since they all presumably use the same compilation scheme. I'm curious about the distinction you're drawing between whole-program and non-whole-program linking here. Can you explain what the latter might look like?
To be clear, types in independent modules would "flow together" only if they were structural types in the source language as well, right? Rather than examining the downstream users of such types, the compilation scheme could also unconditionally import them, but that would require the dynamic linker to perform some module introspection to discover which shared types to provide at runtime. This gets us back to the old arguments about whether it is acceptable to have to import source-level structural types like tuples from a central shared location, so I'm not sure there's much more to say here until we get guidance from real-world implementations.
I agree that high-bandwidth discussions during the video calls are valuable as well. Perhaps we could get the best of both worlds if the slides were posted at least a few business days before the meeting so folks can get a rough idea of what will be covered and how they might respond. @rossberg, does that sound good to you? If that doesn't leave time to prepare slides for the next meeting, it would be fine to defer to the following meeting.
I know very few people who ever succeed in completing slides days in advance, no matter when a presentation is scheduled. ;) Practicalities aside, I want to point out that recent requests like this appear to reflect a rather non-constructive understanding of discussion: individuals demand early access to slides in order to prepare their "response" (as I've read here a number of times). Why should the presenter not equally demand access to that response beforehand, so that they have time to think about a response to that? And so on? Obviously, this drives the idea of an in-person discussion ad absurdum. A discussion is no shoot-out. If people need time to think about something that's said or presented in a meeting, then it's totally fine to have follow-up discussions.
Fair enough, and we do have recent examples of productive follow-on conversations such as our iso-recursive typing discussion.
I would be less concerned if there were not even more examples of follow-on discussions to @rossberg's presentations that have come to a halt because of @rossberg's absence (and then he has even spoken in meetings as if the issue had been decided due to his presentation). To make matters worse, even for those discussions that have come to an apparent conclusion (typically due to others' efforts), I often find that people still believe @rossberg's presentation is the status quo even when the conclusion ends up contradicting his presentation (because, understandably, people do not follow these detailed discussions). This pattern leads to a sprawl of unresolved issues in which everyone has a different understanding of the current status of the issue, making for a huge mess in communication and coordination in which there is no sense of progress. So, before adding yet another item to the sprawl, I request that we first schedule time to discuss the outcomes of the follow-ons to @rossberg's prior presentations and hopefully bring at least some items to a close. We already have a plan to discuss #217, which was created as a follow-on to his most recent presentation, and I think we should do the same for the other issues that were created as follow-ons to prior presentations.
I agree that we should do a better job of summarizing and closing out old discussion issues. But a public GitHub discussion is not an appropriate place to discuss the conduct of members of the community. Since we don't yet have a standard process for appropriately raising this kind of concern, let's discuss your specific concerns in a private meeting instead. @rossberg, please schedule time to present in an upcoming meeting whenever it is convenient for you. I am eager to make progress on these discussions however we can.
I agree that there are too many open issues, but as far as I can tell, for the relevant ones, that is due to the lack of a satisfactory solution or conclusion. There are also various issues open that are only marginally relevant to the MVP. Perhaps we should close some of those in order to focus the discussion better and waste less time and energy on tangents.
In light of today's meeting and how the …

Instead of all types getting canonicalized automatically (which causes a lot of time spent doing that, with no way to opt out), modules could annotate the types that they (use for interfacing with other modules and hence) need to have canonicalized (using whichever canonicalization algorithm/semantics we settle on). The hope is that the number of types participating in canonicalization would be significantly smaller (1%? 10%?) than the number of types overall showing up in a module, and hence the time spent on canonicalization would be a lot less. An order of magnitude (or two) less instantiation work would make a big difference for practical viability.

Using a name along with that keyword was just a strawman detail intended to further reduce the number of possible pairs of types that the canonicalization system would have to check. If that's not considered viable, we can totally drop that detail and have a parameterless …
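A parameterless version might look roughly like this (placeholder keyword; purely illustrative):

```wat
(type $string (canonicalize) (array i8)) ;; opted in: participates in
                                         ;; cross-module canonicalization
(type $scratch (array i8))               ;; module-local: never canonicalized
```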
@jakobkummerow, thanks for the clarification. Isn't that just another way of saying that you suggest having both structural and nominal type definitions? (Which would be completely orthogonal to exports and imports?) FWIW, if canonicalisation can be made linear, then I would be very surprised if it occurred as a relevant cost factor in compilation or instantiation.
Yes, it'd be one particular way of having both kinds of types. I'm interested in making forward progress, so I'm looking for pragmatic compromises. Without going into detail here, there appear to be undeniable arguments that some (current or future) use cases are best served by nominal types, whereas others require type canonicalization; so to me the resulting question is: "how can we practically accommodate both?" The canonicalization time results we've seen so far in the equirecursion experiments were worrying.
We've settled on the isorecursive type system in #243 as the solution to many of the issues raised here, so I'll close this issue.
If I understand correctly, one of the biggest arguments for the current MVP's structural static type system is that it makes sharing types across module boundaries quite straightforward:
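For example, something like this (a sketch; the exact snippet may have differed):

```wat
;; Module A defines a string type and exports a function that uses it.
(module
  (type $string (array i8))
  (func (export "length") (param (ref $string)) (result i32)
    (array.len (local.get 0))))

;; Module B defines its own, structurally identical type and imports
;; A's function using that type.
(module
  (type $my_string (array i8))
  (import "A" "length" (func $length (param (ref $my_string)) (result i32))))
```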
...and things just work, because $string and $my_string are automatically recognized as being the same type.

Notable observations here are: …
Now, if we switched the static type system to nominal semantics, that would in particular mean that two type definitions give us two distinct types, so the example above would no longer work "just like that". To still support cross-module object sharing, e.g. for function calls, we have to find an alternative solution.
One idea is to not change what happens at the module boundary, i.e. keep structural typing there. In a case like the above, where a module attempts to satisfy a function import with another module's function export and these functions use certain types, the engine would automatically try to match the types. I believe that this would work for the module interaction case; however, it creates another problem: merging modules would potentially change semantics, and that is unacceptable. (Merging two modules is straightforward and should work fine, but consider a three-module case: module M imports functions from N and O, and the modules define types $m, $n, $o that are structurally identical and used in these functions' signatures. If N and O are merged, the merging tool cannot know that $n and $o are supposed to be the same nominal type (because there are no function calls between N and O that would implicitly provide this information); when linking M with the merged NO, it becomes apparent that two nominally distinct types $n and $o in NO are supposed to match the same type $m in M. This would re-introduce structural type equivalence into NO "through the back door".) So I think this idea does not hold up to critical inspection.
Another idea that has been mentioned before is to import and export types just like functions:
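Roughly (strawman syntax for type exports and imports):

```wat
;; Module A defines and exports the type.
(module
  (type $string (array i8))
  (export "string" (type $string)))

;; Module B imports it, repeating the definition as the expected bound.
(module
  (import "A" "string" (type $string (array i8))))
```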
(Note that module B repeats the (array i8) type definition in order to allow separate compilation, i.e. we want to enable engines to compile code for B before having seen A or knowing that B will import things from A.)

That solves the problem of needing a single type, but creates a (potential?) new problem: it requires the modules to agree on the export/import role assignment. For functions, this is natural, because the whole point is that the function definition is large and complex and one module's responsibility. For types, this is less natural, especially if we imagine an NPM-style decentralized ecosystem of libraries: what if I want to import one module's string hashing function and another module's string compression function? Who gets to export the string type used for these interactions?
Per the observations above, there is some doubt whether this scenario is realistic (and hence whether the concern is warranted), but we may want to avoid the issue anyway, so as not to paint ourselves into a corner. We could work around this difficulty by searching for more symmetric/flexible approaches to sharing types across different modules.
One idea is to define a symmetric version of the directed export→import concept. Maybe:
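Perhaps something like (strawman syntax):

```wat
;; Both modules declare the type symmetrically under a shared public name.
;; Whichever instance is created first defines the type; the other picks
;; it up under the name "string".
(module ;; A
  (type $string (importexport "string") (array i8)))

(module ;; B
  (type $my_string (importexport "string") (array i8)))
```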
(Side note: maybe we should pick a different strawperson keyword than importexport, such as publicname, just to make (spoken) conversations less ambiguous: "importexport" sounds like "import/export", but describes a significantly different notion.)

Problems solved with this approach: …
Problems created:

- Modules wanting to share a type have to agree on its name, here "string". I would argue that this is not a problem in practice: since even in the structural world, module authors have to agree on matching type definitions, they now simply have to agree on one more detail, which is the type's importexport name. That's not a significant increase in coordination burden.
- With a global namespace of importexport type names, there could be unintentional collisions. That is a serious concern that needs a solution.

Solution A: treat identical importexport names as requests that may well fail: a matching name causes the engine to perform a structural type check; if that check fails, then that's not a validation error, the types simply won't be identified with each other. If the two modules don't try to use the types in question for any interactions, then that's just fine. One way to look at this would be to say "we still have structural typing across module boundaries, but with the twist that the importexport name is part of the structural definition".

Solution B: analogous to how the coordinating host can map function exports to imports with different names, we could also empower (or burden) this coordinator with matching up type names. This would, effectively, give each module its own namespace for public type names. (This might be desirable in addition to solution A in order to solve the opposite problem: unintentional mismatches of type names.)

Solution C: please contribute other ideas by commenting :-)
Speaking of comments, I'm mostly trying to get a conversation / thought process started here. There may well be other problems that I have overlooked; please point them out! There may well be better, or at least additional, ideas for solutions that I haven't thought of; please present them!
If we're generally happy with nominal typing inside a module, it would be great to find a way to make it viable for multi-module scenarios as well -- maybe we can then finally arrive at a type system that we can reach consensus on?
To give credit where it's due: thanks to @tebbi, @rossberg, and @RossTate for having discussed these ideas with me. This post is more thought-through than it would have been without their input :-)