Handle requirements properly as part of the type system #231
Yup, I was wondering if this was intended, but didn't ask, since it makes getting the first lambda harder 😅
Why does it make getting the first lambda harder?
It can be hidden behind a lake/mountain and you have to make a program that goes around. ⛰️ Most cases can be solved with "move in one direction, move in other direction and then go back", but that "go back" can be error prone so I used the
So I was trying to work on this today, and ran into what seems like a difficult issue. I put a basic check in place and it immediately started rejecting things like

Clearly, capability checking does need to treat
and now

Note that higher-order functions make this even trickier. For example, consider a generalization of the above example:
Base should still be able to execute

I'm guessing this means we have to bite the bullet and make capability checking more principled / fine-grained. I will put some thought into what might be required, but happy to hear any ideas and/or pointers to literature (I'm quite sure others have already done similar things).
Hmm, what if it's as simple as having capability checking return a stack of capability sets instead of a single set? The top of the stack represents capabilities required to evaluate it now. The next thing in the stack represents capabilities needed for things buried inside one level of delay, which would become required after application of
I wanted to write this down since it seems interesting, but I am actually not at all convinced that it will allow dealing with higher-order functions.
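The stack-of-capability-sets idea could be sketched like this — a toy model with hypothetical types (`Cap`, `Term`, `requires`), not the actual Swarm implementation: `delay` pushes an empty level so the body's requirements are buried one level down, and `force` collapses the top two levels:

```haskell
import qualified Data.Set as S

-- Hypothetical capabilities, standing in for Swarm's real Capability type.
data Cap = CMove | CLambda | CCond deriving (Eq, Ord, Show)

-- A tiny expression language with explicit delay/force.
data Term = Prim Cap | Seq Term Term | Delay Term | Force Term

-- The "stack" of capability sets: head = needed to evaluate now;
-- later entries = needed after one, two, ... applications of force.
type CapStack = [S.Set Cap]

-- Merge two stacks levelwise.
merge :: CapStack -> CapStack -> CapStack
merge xs [] = xs
merge [] ys = ys
merge (x:xs) (y:ys) = S.union x y : merge xs ys

requires :: Term -> CapStack
requires (Prim c)  = [S.singleton c]
requires (Seq a b) = merge (requires a) (requires b)
requires (Delay t) = S.empty : requires t   -- push: buried one level deeper
requires (Force t) = case requires t of     -- pop: next level becomes current
  (now : next : rest) -> S.union now next : rest
  stack               -> stack

main :: IO ()
main = do
  print (requires (Delay (Prim CMove)))          -- [fromList [],fromList [CMove]]
  print (requires (Force (Delay (Prim CMove))))  -- [fromList [CMove]]
```

As the comment above suggests, this captures first-order uses of delay/force, but it is not obvious how a stack alone would track requirements flowing through higher-order function arguments.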
More generally, suppose that for each Swarm type
I have thought about this some more, and I seem to be coming to a few different (interrelated) conclusions.
I am now envisioning something where types can be annotated with a set of capabilities, and function arrows are annotated with levels, or something like that... I'll keep banging away at it.
I have a very rough sketch of a type system on the
I did a bunch more work on the

Checking this involves generating capability set inequalities with both literal capabilities and capability set variables:
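Such inequalities can be solved for their least solution by a simple fixpoint iteration. Here is a toy illustration (the constraint representation and solving strategy are my assumptions, not Swarm's actual solver): each constraint says that a union of literal capabilities and capability set variables must be contained in some variable.

```haskell
import qualified Data.Map as M
import qualified Data.Set as S

type Cap = String
type Var = String

-- A constraint  lits ∪ vars ⊆ v : the capability set variable v must
-- contain the given literal capabilities and the sets of the given variables.
data Constraint = Sub ([Cap], [Var]) Var

-- Compute the least solution by iterating until nothing changes.
solve :: [Constraint] -> M.Map Var (S.Set Cap)
solve cs = go M.empty
  where
    go env =
      let env' = foldl step env cs
      in if env' == env then env else go env'
    step env (Sub (lits, vars) v) =
      let rhs = S.unions (S.fromList lits
                          : map (\u -> M.findWithDefault S.empty u env) vars)
      in M.insertWith S.union v rhs env

main :: IO ()
main = do
  -- e.g. the requirements of a function flow into those of its caller
  let cs = [ Sub (["move"], []) "f"
           , Sub (["lambda"], ["f"]) "g" ]
  print (M.toList (solve cs))
```

Since every step only ever grows the sets and the capability universe is finite, the iteration terminates at the least fixpoint.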
…erm` (#827) Closes #540.

When we typecheck and capability check a new term, resulting in a `ProcessedTerm`, we already know the `Requirement`s for all the `def`s it contains. This PR simply takes all those definition requirements and adds them to `robotContext . defReqs` just before installing the new `ProcessedTerm` in the CESK machine, so that if we ever encounter the name of any of the definitions inside an argument to `build` or `reprogram`---which we capability check at runtime---we will be able to properly match up names with their requirements.

This is a hack for several reasons:

- We really shouldn't be capability-checking arguments to `build` and `reprogram` at runtime in the first place---ideally we should be able to check everything statically. But fixing that will require implementing #231.
- We have to do this in three places in the code: when loading a new term into the base when the player hits Enter at the REPL; when executing `run`; and when running the solution from a scenario in the integration tests. Ideally, there would be only one place, but I don't have a good idea at the moment how to refactor things properly.
- Technically, the names being defined shouldn't be in scope until after their corresponding `def` runs, so it's incorrect to dump them all into the `defReqs` prior to executing any of the `def`s.
- However, doing it properly, with each name coming into scope with its requirements right after the `def` runs, is very tricky/annoying to implement for more reasons than I care to write about here (believe me, I tried, and it made my brain hurt). For starters, we don't really have access to the robot state when stepping the CESK machine.
- The only scenario where this would be a problem is if (1) there is already a name `x` defined, and then we (2) `run` a `.sw` file containing, first, a `build` command whose argument references `x`, and second, a redefinition of `x`. In that case the `build` would incorrectly decide that the `x` in the argument to `build` has the requirements of the *second* `x` (the one that will be redefined later) even though the first `x` is the one that should still be in scope. However, this seems like a very unlikely scenario (who would write a `.sw` file that depends on some specific name already being defined before it is run, but then redefines that same name later?) and I'd *much* rather have that obscure problem than the current very relevant and annoying one.

However, it's a simple hack that solves the issue and will be easy to get rid of later if and when we do something better. I'm a big believer in doing things the right way, but in this instance I definitely think we should just do the simple thing that mostly works and then continue to make it better in the future.
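Schematically, the hack amounts to one left-biased map union just before the new term is installed. This is a sketch with stand-in types (only the name `defReqs` comes from the PR description; the rest is hypothetical), which also demonstrates the "second `x` shadows the first" caveat:

```haskell
import qualified Data.Map as M
import qualified Data.Set as S

type Requirements = S.Set String        -- stand-in for Swarm's Requirements
type ReqCtx = M.Map String Requirements -- name -> requirements, like defReqs

-- Dump all definition requirements from a freshly processed term into the
-- robot's context. M.union is left-biased, so new definitions overwrite
-- earlier entries for redefined names -- exactly the obscure-case caveat.
installDefReqs :: ReqCtx -> ReqCtx -> ReqCtx
installDefReqs newDefs ctx = M.union newDefs ctx

main :: IO ()
main = do
  let ctx     = M.fromList [("x", S.fromList ["move"])]    -- x already defined
      newDefs = M.fromList [("x", S.fromList ["lambda"])]  -- redefinition of x
  print (M.toList (installDefReqs newDefs ctx))
```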
Wanted to update this with my latest thinking on a unified effect system for capabilities/requirements since it has changed quite a bit since the most recent comments above.
@byorgey I would argue an infinite amount should suffice 😄
Well, yes, an infinite amount would certainly suffice, but that would not be a useful answer, because then it would be impossible to run any recursive functions that call

But thinking about it a bit more, perhaps that is indeed the right answer, coupled with some kind of escape mechanism whereby you can say "I know the requirements system cannot prove that this call is safe, but I want to run it anyway". Put another way, given infinite counts, we could design the requirements analysis so that it is always conservative/safe, but recognize that in some cases (especially recursive calls) we will need to provide an escape hatch to allow running things where the analysis is too conservative.
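One way to model the "infinite amount" answer is to draw counts from the naturals extended with infinity: addition saturates at infinity, and any nonzero use inside a recursion of unknown depth gets bumped to infinity. A minimal sketch (hypothetical names, not a proposed API):

```haskell
-- Item counts extended with infinity, as suggested above.
data Count = Fin Integer | Inf deriving (Eq, Show)

-- Addition saturates at infinity.
plus :: Count -> Count -> Count
plus (Fin a) (Fin b) = Fin (a + b)
plus _ _ = Inf

-- Count for n uses repeated inside a recursion of unknown depth:
-- conservatively unbounded, unless the item is never used at all.
times :: Count -> Count
times (Fin 0) = Fin 0
times _       = Inf

main :: IO ()
main = do
  print (plus (Fin 3) (Fin 1))  -- Fin 4
  print (times (Fin 1))         -- Inf
```

This makes the analysis always conservative, and the escape hatch mentioned above would then only be needed where `Inf` is too pessimistic.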
@byorgey it could also be a challenge or a normal item providing an unlimited supply. 🙂 I like the idea of a message saying it calculated an unbounded item requirement. 👍
Yes, that's true, I just meant that in general you cannot count on having an infinite amount of something, e.g. if you write a recursive function while playing the classic game. |
Closes #1911.

I understand the bug now: when inferring the type of a term, we also return a context of type alias definitions in the term. However, those type aliases might occur in types later in the term itself, so we need to pass those additional type aliases into the `requirements` function (which now looks at types and needs to be able to unfold type aliases). (Once again, it's not really correct to be doing requirements analysis separately from type checking, and leads to bugs like this. See #231.)

What I am still really confused by is why the test suite did not catch this problem. The unit tests do in fact contain some terms like this, which contain a `tydef` followed by a use of the defined type. And the tests in `TestLanguagePipeline` call `processTerm` which seems like it should have triggered this...
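The fix described above amounts to threading the alias context into the requirements analysis so types can be unfolded before inspection. Here is a toy illustration with a miniature type language (the types and `unfold` are hypothetical; it assumes non-recursive aliases, as `tydef` bodies cannot refer to themselves):

```haskell
import qualified Data.Map as M

-- A miniature type language with named aliases (like tydef).
data Ty = TyBase String | TyCon String | TyFun Ty Ty deriving (Eq, Show)
type TydefCtx = M.Map String Ty

-- Unfold aliases everywhere in a type before inspecting it. The bug
-- above was essentially running the analysis without this context, so
-- alias names defined earlier in the same term could not be resolved.
unfold :: TydefCtx -> Ty -> Ty
unfold ctx (TyCon n)   = maybe (TyCon n) (unfold ctx) (M.lookup n ctx)
unfold ctx (TyFun a b) = TyFun (unfold ctx a) (unfold ctx b)
unfold _   t           = t

main :: IO ()
main = do
  let ctx = M.fromList [("Idx", TyBase "Int")]
  print (unfold ctx (TyFun (TyCon "Idx") (TyCon "Idx")))
```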
Another example program that ought to work but currently doesn't:
This ought to build a robot equipped with four rocks. Currently, it is equipped with only one rock, which it places and then crashes when it tries to place the next one. I know exactly why this happens: when checking the requirements of
@byorgey could you please add that as a failing test case? 🙂
Describe the bug

It seems that programs run by `base` are only checked dynamically for capabilities. This means that both

To Reproduce

```
build "x" {move}; move
```

Run it to see that the base builds the robot `"x"` first before failing to execute the `move` due to a missing capability. Ideally, it should instead refuse to run the entire program due to missing capabilities.

The `build "x"` fails as it should with an error saying that you don't have the necessary devices to install on `x` (namely, a lambda and a strange loop). However, the last line works fine even though it shouldn't, since the `base` doesn't have a lambda or strange loop.