scripts do not seem to be evaluated in isolation #737

Closed · kMutagene opened this issue Jan 27, 2022 · 3 comments · Fixed by #807

kMutagene commented Jan 27, 2022

I suspect this issue is the root cause of many of the bugs we are currently experiencing. Most of them seem to occur in conjunction with `#r "nuget: ..."` references.
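
For instance (hypothetical package name and versions, just to illustrate the shape of the problem), two scripts in the same docs folder might pin different versions:

```fsharp
// a.fsx (hypothetical): pins one version of a package
#r "nuget: SomePackage, 1.0.0"

// b.fsx (hypothetical): pins a different version of the same package.
// When both scripts are evaluated by the same fsdocs run, the resolved
// assemblies (including transitive dependencies) can conflict.
#r "nuget: SomePackage, 2.0.0"
```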

A few observations that lead me to this conclusion:

  • Referencing different versions of a library in different scripts can cause evaluations to fail, while using the same version everywhere works. Transitive dependencies of the referenced libraries seem to make this worse. (repro coming soon)
  • In some cases, values spill over from one script to another when different scripts bind different values to the same name. Here is a repro where the same chart is rendered twice, although the second script should produce a completely different one. (A minimal sketch of the spillover follows this list.)
  • The order in which the scripts are rendered influences which evaluations fail. For example, a single problematic script can cause every subsequently rendered HTML file to have no value returned by any evaluator, while removing that script leads to all of those files being rendered correctly.
  • These issues almost always occur in blog-style repos, where scripts make heavy use of `#r "nuget: ..."` references. Documentation for libraries seems to work fine most of the time, because it usually references a single compiled binary in all doc scripts.
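
To make the spillover concrete, here is a minimal sketch (assuming FSharp.Compiler.Service's `FsiEvaluationSession`, which the comments in Evaluator.fs suggest fsdocs builds on). Two "scripts" evaluated in one shared session see each other's bindings:

```fsharp
open System.IO
open FSharp.Compiler.Interactive.Shell

let out, err = new StringWriter(), new StringWriter()
let config = FsiEvaluationSession.GetDefaultConfiguration()

let session =
    FsiEvaluationSession.Create(
        config,
        [| "fsi.exe"; "--noninteractive" |],
        new StringReader(""), out, err)

// "First script": binds a value to the name `chart`
session.EvalInteractionNonThrowing """let chart = "chart from script 1" """
|> ignore

// "Second script": never binds `chart` itself, yet evaluating it still
// succeeds and yields the stale value from the first script. A truly
// isolated session would fail with an unbound-identifier error here.
match session.EvalInteractionNonThrowing "chart" with
| Choice1Of2 (Some value), _ -> printfn "leaked: %A" value.ReflectionValue
| _ -> printfn "no value"
```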

This seems to be the underlying cause of almost all the problems I have had with fsdocs, but it is very hard to pin down to individual lines in scripts. I will update this issue with additional insights.

Here are some other issues that I also think are caused by this:

dsyme commented Jan 31, 2022

Hmmm yes, this can easily cause a load of issues. The reflective nature of script evaluation is just highly polluting in the fsdocs process.

kMutagene commented

So I guess this is the problem?

```fsharp
// Inject the standard 'fsi' script control model into the evaluation session
// without referencing FSharp.Compiler.Interactive.Settings (which is highly problematic)
//
// Injecting arbitrary .NET values into F# interactive sessions from the outside is non-trivial.
// The technique here is to inject a script which reads values out of static fields
// in this assembly via reflection.
```
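
If I read that comment right, the technique is roughly the following (a sketch with made-up names, not the actual Evaluator.fs code): the host parks a value in a static member, and a small script injected into the session reads it back via reflection.

```fsharp
// Host side (hypothetical bridge type): park the value to hand over
// in a static property on a type in this assembly.
type FsiBridge private () =
    static member val SharedValue : obj = null with get, set

FsiBridge.SharedValue <- box "value injected from the host"

// Script side: this snippet is evaluated inside the session. It finds
// the already-loaded host assembly and reads the static property back
// out via reflection, so the session never references the host at
// compile time. (Type and assembly names here are made up.)
let injectedScript = """
let bridgeType =
    System.AppDomain.CurrentDomain.GetAssemblies()
    |> Array.pick (fun asm ->
        match asm.GetType("FsiBridge") with
        | null -> None
        | t -> Some t)
let injected = bridgeType.GetProperty("SharedValue").GetValue(null)
"""
```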

I tried to look a bit into Evaluator.fs, but I have to admit that I do not understand much besides the comments. This seems to come from a time when we did not have `dotnet fsi`, right? It might be a really naive question, but can't we just start `dotnet fsi script.fsx` as a subprocess of the tool and collect its stdout?
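
Something like this, with a hypothetical helper name (just a sketch of the idea, not proposing this exact code):

```fsharp
open System.Diagnostics

// Hypothetical helper: evaluate a script in a fresh, fully isolated
// dotnet fsi process and collect its stdout/stderr.
let runScriptIsolated (scriptPath: string) =
    let psi =
        ProcessStartInfo(
            FileName = "dotnet",
            Arguments = sprintf "fsi \"%s\"" scriptPath,
            RedirectStandardOutput = true,
            RedirectStandardError = true,
            UseShellExecute = false)
    use p = Process.Start psi
    // Read stderr asynchronously to avoid a pipe-buffer deadlock while
    // draining stdout.
    let stderrTask = p.StandardError.ReadToEndAsync()
    let stdout = p.StandardOutput.ReadToEnd()
    p.WaitForExit()
    p.ExitCode, stdout, stderrTask.Result
```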

dsyme commented Feb 3, 2022

> I tried to look a bit into Evaluator.fs, but I have to admit that I do not understand much besides the comments. This seems to come from a time when we did not have `dotnet fsi`, right? It might be a really naive question, but can't we just start `dotnet fsi script.fsx` as a subprocess of the tool and collect its stdout?

I think it should be possible to do that, yes.

The code is messy and unclear; I've been trying to clean it up.
