
Rework Code Evaluation #195

Open
mbaruh opened this issue Aug 4, 2022 · 5 comments


@mbaruh
Member

mbaruh commented Aug 4, 2022

The code evaluation currently suffers from possible exploits. These stem from how code evaluation works on the forms app:

  1. The user's code and the pre-supplied unit tests are sent to snekbox.
  2. The unit tests are run on the user's code inside snekbox.
  3. The results of the tests are supplied back from snekbox through stdout.

This means that if the user manages to control stdout, they can control what the forms app perceives as the test output.
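
For illustration, a submission along these lines could forge a passing report simply by printing one; the function name and the exact report text below are hypothetical, chosen only to mirror what a unittest run might emit:

```python
# Hypothetical illustration of the exploit: because the forms app only reads
# stdout, a submission can print a fake passing report and then kill the
# process before the real test suite ever runs.
import os

def solve(x):
    # Deliberately wrong / unimplemented "solution".
    return x

# Forge whatever the harness would normally print on success, then bail out.
print("Ran 5 tests in 0.001s")
print("OK")
os._exit(0)
```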

There are already several mitigations in place, but actually solving this ultimately requires a more thorough/drastic approach.

Solution 1

Separate user code from the tests.

  • The user code alone will be sent to snekbox.
  • The result of the user's code will be fed to stdout.
  • The stdout will be read from snekbox.
  • The output will be compared to a pre-supplied string to check whether the user's code passes (similarly to how it works in Code Wars).

This has the disadvantage of limiting what we can evaluate, as currently we are able to inspect Python objects during testing.
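
As a rough sketch of what Solution 1 could look like (the endpoint URL, request shape, and helper name below are assumptions for illustration, not the actual forms-backend code):

```python
# Minimal sketch of Solution 1, assuming snekbox's /eval endpoint and a
# pre-supplied expected-output string stored alongside the question.
import httpx

SNEKBOX_URL = "http://snekbox:8060/eval"  # assumed deployment URL


async def check_submission(user_code: str, expected_output: str) -> bool:
    """Run only the user's code in snekbox and compare stdout to a known string."""
    async with httpx.AsyncClient() as client:
        resp = await client.post(SNEKBOX_URL, json={"input": user_code})
    stdout = resp.json().get("stdout", "")
    # Pass/fail is decided purely by string comparison, Code Wars style;
    # there is no test suite in the sandbox whose report could be spoofed.
    return stdout.strip() == expected_output.strip()
```

Since the only thing coming back is the program's own output, controlling stdout buys the submitter nothing beyond what printing the right answer would.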

Solution 2

We could look into adding an additional way to supply information from snekbox, specifically for the purpose of getting test responses, for example via an encrypted byte-stream; either way, something that can't be accessed from the evaluated code. The details can be hashed out if this is something we want to do.
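
The mechanism is deliberately left open here, but one possible shape of such a channel, purely as an illustration, is a per-run secret that the test harness uses to sign its verdict, so that a submission which only controls stdout cannot forge a verdict the backend will accept. How the secret is kept out of reach of the evaluated code is exactly the detail that would still need hashing out:

```python
# Illustrative only: HMAC-sign the harness verdict with a per-run secret and
# verify it on the backend. Forged stdout without the secret fails verification.
import hashlib
import hmac
import json
from typing import Optional


def sign_verdict(verdict: dict, secret: bytes) -> str:
    """Serialize the verdict and append an HMAC so the backend can trust it."""
    payload = json.dumps(verdict, sort_keys=True).encode()
    return payload.hex() + ":" + hmac.new(secret, payload, hashlib.sha256).hexdigest()


def verify_verdict(blob: str, secret: bytes) -> Optional[dict]:
    """Return the verdict if the signature checks out, otherwise None."""
    payload_hex, _, digest = blob.partition(":")
    try:
        payload = bytes.fromhex(payload_hex)
    except ValueError:
        return None
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return json.loads(payload) if hmac.compare_digest(digest, expected) else None
```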

In any case, considering the protections we already have in place against something like that, and considering Code Jam qualifiers are meant to assess a person's Python knowledge, even if someone is able to exploit the current system, it's not the worst thing in the world (although preferably it wouldn't happen anyway). Therefore, this issue should be relatively low priority.

@ShakyaMajumdar
Member

I propose a slight modification to Solution 1.

To make Forms' unittesting as general as possible, the API could accept Python code in a setup field for each unittest form field. Forms evaluates[1] this setup code with the form submitter's response, and then sends the result to snekbox. snekbox sends back stdout, and Forms matches it against some output field.

The test template currently in Forms is then moved to the Code Jam backend, which sends a request like

{
    "setup": // essentially https://github.com/python-discord/forms-backend/blob/main/backend/routes/forms/unittesting.py#L104-L108
    "output": "5 Tests Passed" // i.e., output of the unittests patched in by setup
}

This maintains the current behaviour: you get to test object states and such, but it's possible for the submitter to control stdout. Not an issue, since

Code Jam qualifiers are meant to assess a person's Python knowledge, even if someone is able to exploit the current system, it's not the worst thing in the world

For Esoteric Challenges, a setup field won't be necessary: the code submitted by the user is sent directly to snekbox, and what that code prints to stdout is matched directly against the output field.

[1] The simplest way is to run it within the Forms backend, but that assumes we trust people who create forms. A marginally more complicated way is to run the setup in snekbox too, with the minor downside of having two snekbox requests per test.
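
A rough sketch of how the proposed flow could hang together is below; the field names follow the request example above, but the wrapping convention (setup appended after the submission), the endpoint URL, and the helper are assumptions for illustration rather than the actual forms-backend implementation:

```python
# Sketch of the proposed per-field evaluation, assuming "setup" and "output"
# values supplied by the event backend and snekbox's /eval endpoint.
from typing import Optional

import httpx

SNEKBOX_URL = "http://snekbox:8060/eval"  # assumed deployment URL


async def evaluate_field(submission: str, setup: Optional[str], expected: str) -> bool:
    # Code Jam style: the setup code patches a unit test suite around the
    # submitter's response. Esoteric Challenge style: no setup, the submission
    # runs as-is and its own prints are compared against the expected output.
    code = submission if setup is None else f"{submission}\n\n{setup}"
    async with httpx.AsyncClient() as client:
        resp = await client.post(SNEKBOX_URL, json={"input": code})
    return resp.json().get("stdout", "").strip() == expected.strip()
```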

@HassanAbouelela
Member

Running any user code directly on the backend is a no-go. The part I don’t understand about your proposal is what it ends up achieving. You still can’t check state the way you described because the same issue of not being able to safely communicate between the unit test suite and the backend remains.

Also to touch on Zig’s last point: the discussion yesterday was that this issue is currently sitting on low priority, and adding support for things that aren’t the code jam isn’t something we’ll look into unless we have a good idea.

@ShakyaMajumdar
Member

ShakyaMajumdar commented Aug 5, 2022

what it ends up achieving

It removes the possibility of exploiting stdout in tests where we directly match the user's prints, while letting you keep the current behaviour of code jams where you check prints via unittest.

That is to say, you can decide on a per-event (or even per-test) basis whether you want (unexploitable stdout + no unittests) or (exploitable eval + unittests).

@HassanAbouelela
Member

Isn’t that just suggestion 1?

@ShakyaMajumdar
Member

You don't get to keep the Code Jam's unit tests with suggestion 1.
