
Path maps for Docker #89

Open
lukas-reineke opened this issue Aug 15, 2022 · 24 comments
Labels
enhancement New feature or request

Comments

@lukas-reineke

neotest uses the absolute path for files, but when the tests run in Docker, those paths don't match.

Would it be possible to add some kind of path mapping to fix this?
Something like DAPs localRoot and remoteRoot.

@rcarriga
Collaborator

I'm not opposed to the idea, but I'm not sure how this should be implemented. Off the top of my head I believe this would have to be implemented by adapters rather than by neotest itself, as the usage of paths is completely up to them. Of course it'd be better if there was one implementation that could be shared; there could be a translate_path function somewhere for adapters to use 🤔
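To illustrate the kind of shared helper being floated here, a minimal sketch in shell (the names local_root, remote_root, and translate_path are hypothetical, borrowed from DAP's localRoot/remoteRoot; this is not an actual neotest API):

```shell
#!/bin/bash
# Hypothetical host<->container mapping in the spirit of DAP's
# localRoot/remoteRoot. The paths below are made up for illustration.
local_root="/home/user/project"
remote_root="/app"

# Replace the host prefix with the container prefix (anchored at the start
# of the string via the /# form of bash pattern substitution).
translate_path() {
  printf '%s\n' "${1/#$local_root/$remote_root}"
}

translate_path "/home/user/project/spec/models/user_spec.rb"
# prints /app/spec/models/user_spec.rb
```

The reverse mapping (container back to host) is the same substitution with the two prefixes swapped.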

@rcarriga rcarriga added the enhancement New feature or request label Aug 21, 2022
@lukas-reineke
Author

I haven't looked too deeply into it, but from what I could tell, the adapters wouldn't need to change. The translation could happen right at the start, when going from the Neovim buffer to the file path, and again at the end when parsing the results. All adapter functions would be called with the translated path, so adapters would only ever see paths inside Docker.

@rcarriga
Collaborator

Unfortunately that wouldn't work well. It would require adding translation logic to both consumers and the core code, and would also hide the translation from adapters which shouldn't be done in case they need to use the file path for some type of functionality (e.g. project root inference).

@lukas-reineke
Author

It would require adding translation logic to both consumers and the core code

I agree, consumers should not have to worry about this.

and would also hide the translation from adapters which shouldn't be done in case they need to use the file path for some type of functionality (e.g. project root inference).

IMO if the tests run inside docker, everything needed to run the tests should be inside docker too. Project root should resolve to the root inside docker, etc.

Maybe the translation can happen directly before calling the adapter, and directly after receiving results from the adapter.

@rcarriga
Collaborator

Maybe I'm misunderstanding, but from what I understand the neovim process is outside of the container and the test commands are running docker exec ... commands (or something similar). That means that to read the files, an adapter needs to use the host OS paths, not the docker paths and so we can't hide mappings.

Unfortunately I believe the only route for this is to provide the mappings to the adapter when building commands/results and let them handle it manually

@lukas-reineke
Author

But that's only a problem if the adapter needs to read the file; would that ever be necessary outside of discover_positions? And discover_positions could be called with the original path (though I agree this gets a bit messy).

@rcarriga
Collaborator

Yes, unfortunately the adapters use the filesystem in both the root function and results.

@pBorak

pBorak commented Sep 4, 2022

If I were to use this solely with Docker, would it just be a matter of changing the root and results functions (locally, or in a fork) so that they use the path matching the one inside Docker rather than the absolute host path?

@lukas-reineke
Author

Yeah, it's not that easy. root needs to run on the local file system as well, same as discover_positions.

You're probably right, the only way to not make this a total mess would be to provide translate functions that the adapter can use when they need to.

@rcarriga
Collaborator

rcarriga commented Sep 6, 2022

Yeah it wouldn't be any easier if you only used docker. The same changes would be needed unfortunately.

I'll get to working on this at some stage but I don't use docker for testing so I can't say it'll be a priority. If someone wants to work on this, I'd be happy to discuss 😄

@rcarriga
Collaborator

A thought that I've had on this is that even if we implement some sort of path mappings in the adapter interface, several adapters won't be able to run without some very gnarly mounting. For example, neotest-python uses a custom wrapper script which would have to be mounted, as well as a temp file that would also have to be mounted separately.

It's looking like it'd be better to solve this problem by doing what VSCode does: install Neovim in the container along with the plugins and run from there. There is https://github.com/esensar/nvim-dev-container which looks like it might work, but I've no clue how much neotest would have to be aware of it. Just noting ideas down.

@mmirus

mmirus commented Dec 2, 2022

👋 For running rspec tests in a docker container, I suggested using a shell script as an intermediary over at olimorris/neotest-rspec#28 (comment).

  1. Create a shell script that invokes rspec in docker with the args passed to it
  2. Point neotest at the shell script as the rspec executable

It's easy to transform the paths as needed in the script.

#!/bin/bash

# Strip "/home/mmirus/git/repo/path/" from test paths sent to container
args="$(echo "$@" | sed 's/\/home\/mmirus\/git\/repo\/path\///g')"
docker compose exec web bundle exec rspec $args

Then you just do this:

require("neotest").setup({
  adapters = {
    require("neotest-rspec")({
      rspec_cmd = "my-script.sh",
    }),
  },
})

This works; with this setup, neotest can run the tests in the container.

The next problem is that neotest reports all the tests as failures, regardless of the actual result. The likely explanation is that when the rspec output comes from the container / script, it's different from what neotest expects.

@rcarriga Do you have any ideas on what the problem might be, or suggestions for troubleshooting it?

@rcarriga
Collaborator

rcarriga commented Dec 5, 2022

I'm afraid you're going to find a lot of issues with that approach, hence why I don't think it's worth pursuing this without a more holistic approach.

The issue you're probably running into is that neotest-rspec instructs rspec to write the results to a file and then reads the results back from that file: https://github.com/rcarriga/neotest-rspec/blob/3dac4782b61a4d8fbc61d627b36f245326959b77/lua/neotest-rspec/init.lua#L143. That file won't be filled if rspec runs in docker without it being mounted. If you manage to solve that issue, the paths in the results will then all need to be translated back to the host machine paths.

@mmirus

mmirus commented Dec 5, 2022

Thanks for pointing me in the right direction. I got at least part of the way there.

  1. Disable rspec writing the output to a file (remove these two lines)
  2. Replace the output file path with a static path (/tmp/neotest-rspec or whatever)
  3. Update the script I provided as below (again, substitute your particulars). Make sure the file path matches what you put in neotest-rspec.
# Strip "/home/mmirus/git/repo/path/" from test paths sent to container
escaped_path="\/home\/mmirus\/git\/repo\/path\/"
args=("${@/$escaped_path/}")
docker compose exec web bundle exec rspec "${args[@]}" | tee /tmp/neotest-rspec | sed '/^{/d'
# Remove all lines except the JSON from the rspec output file
sed -i '/^{/!d' /tmp/neotest-rspec
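The tee/sed split above can be checked in isolation. Here's a standalone demonstration with fabricated rspec-style output (the sample lines and the /tmp/sample-output path are made up):

```shell
#!/bin/bash
# Fabricated sample of rspec output that mixes progress lines with the
# trailing JSON document on stdout.
printf '%s\n' '....F' 'Finished in 0.1 seconds' \
  '{"examples":[],"summary_line":"5 examples"}' > /tmp/sample-output

# Human-readable part for the terminal: drop lines starting with '{'
sed '/^{/d' /tmp/sample-output

# JSON part for neotest-rspec to parse: keep only lines starting with '{'
sed '/^{/!d' /tmp/sample-output
```

The same two sed expressions are what split the stream in the wrapper script above.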

I did not find it necessary to do anything with the paths in the results.

One thing to note is that neotest-rspec is trying to have rspec output two results: json test results to a file, and a human-readable test result (the rspec "progress" formatter) to stdout. Trying to preserve both of those is the trickiest bit. If we can change that to use only the json output without losing much user-facing functionality, that might simplify things.

This isn't intended to be a shippable solution as-is, but perhaps something can come of it. For now, just exploring what's possible.

Overall:
[screenshot]

Individual test result:
[screenshot]


Another option that would work for some cases would be to just have rspec write the output file to a location that is shared between the container and the host system. This is simpler and safer than messing about with filtering and redirecting test output, but of course has the downside of writing a results file within your project.

In that case, I believe this is all that's needed:

  1. Set the results path here to a location that's shared between the container and the host. Use the path on your local system, not the path inside the container.
  2. Filter that path in the command args passed to your script (substitute your particulars):
  # Strip "/home/mmirus/git/repo/path/" from test paths sent to container
  escaped_path="\/home\/mmirus\/git\/repo\/path\/"
  args=("${@/$escaped_path/}")
  docker compose exec web bundle exec rspec "${args[@]}"
 

And there you go:

[screenshot]

You can see that the output in the panel fares better in this approach.

@rcarriga
Collaborator

rcarriga commented Dec 5, 2022

I did not find it necessary to do anything with the paths in the results.

Ah brilliant, it must just use relative paths 👍

Another option that would work for some cases would be to just have rspec write the output file to a location that is shared between the container and the host system.

You could parse the path from the args in your script and then docker cp <container>:<path> <result_path> after the docker exec right?

@mmirus

mmirus commented Dec 5, 2022

@rcarriga Ah brilliant, it must just use relative paths 👍

Yes, it seemed like while the path originally provided to rspec by neotest is absolute, the paths in the rspec output are themselves relative.

@rcarriga You could parse the path from the args in your script and then docker cp <container>:<path> <result_path> after the docker exec right?

Smart idea! This seems to do the trick, without requiring any changes to the neotest-rspec code. Here's the script users would modify and specify as their rspec exec:

#!/bin/bash

# Customize the following:
# - escaped_path: absolute local path to your project
# - container: name of your docker container
escaped_path="\/home\/<user>\/project\/path\/" # Be careful to properly escape this
container="container_name"

# WARN: This will break if flags other than -o and -f are added in neotest-rspec
while getopts o:f: flag; do
	# This deliberately does not handle all arguments
	# shellcheck disable=SC2220
	# shellcheck disable=SC2213
	case "${flag}" in
		o) output_path=${OPTARG} ;;
	esac
done

# Strip local path from test paths sent to container
args=("${@/$escaped_path/}")

# Run the tests
docker compose exec "$container" bundle exec rspec "${args[@]}"

# Copy the test output from the container to the host
docker compose cp "$container:$output_path" "$output_path"

As long as the /tmp/... path provided by neotest-rspec is writable both in the container and on the host, this should work.
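The getopts parsing in the script above can be exercised on its own. A small standalone check (the sample arguments are made up; the -o/-f flags mirror what the script assumes neotest-rspec passes):

```shell
#!/bin/bash
# Extract the -o (output file) value from a neotest-rspec-style argument
# list. Sample arguments below are fabricated for the demonstration.
parse_output_path() {
  local flag output_path="" OPTIND=1
  while getopts o:f: flag; do
    # Deliberately only handles -o, like the wrapper script above
    case "${flag}" in
      o) output_path=${OPTARG} ;;
    esac
  done
  printf '%s\n' "$output_path"
}

parse_output_path -f json -o /tmp/neotest-rspec spec/models/user_spec.rb
# prints /tmp/neotest-rspec
```

Resetting OPTIND inside the function lets it be called repeatedly; the top-level wrapper script doesn't need that since getopts runs once.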

And then, as shown above:

require("neotest").setup({
  adapters = {
    require("neotest-rspec")({
      rspec_cmd = "my-script.sh",
    }),
  },
})

Result:

image

You can see in the summary that neotest thinks some other tests in the suite have failed, which is not correct (you can see in the panel that only one failure occurred). I don't know if that error in the summary is due to this method, or is a neotest / neotest-rspec bug which would be happening regardless.

Sláinte!

@rcarriga
Collaborator

Awesome that it works! 😁 Not sure why you're seeing the failed tests in the summary. Neotest does some result aggregation which can mark tests as failed if results are not returned for them, so if those tests are present in the container, that would explain it. Otherwise the logs might contain some useful information with the trace level set.

@meicale

meicale commented Mar 2, 2023

It's wonderful that this works for rspec, the test framework for Ruby. How about neotest-python: is there any counterpart to "rspec_cmd", and how do I set it? I just copied the provided scripts and swapped in my runner, but it doesn't work.
Thank you!

@mmirus

mmirus commented Mar 2, 2023

Sorry, @meicale, I don't work in Python, so I can't help. You'll need to do some experimenting.

If memory serves, I used these debugging techniques:

  1. Run the script from the command line instead of with neotest and see what the output is
  2. Run the script using neotest and look at the results in the output panel

Hopefully between the output in those two locations you'll be able to figure out what's going wrong and adjust the script as needed.

If you get it working, please share your changed version here for others!

@rcarriga
Collaborator

rcarriga commented Mar 3, 2023

neotest-python would be more difficult to make work with docker. It uses a wrapper module that writes JSON results
to a file that is read by the adapter (i.e. the Neovim process). To make it work you'd have to:

  1. Mount the wrapper module into the container
  2. Mount the JSON results file so that the Neovim process can access it, or replace the path in the args (as the rspec script does)
  3. Translate the paths passed to python (same as the rspec script above)
  4. Translate the paths in the results back, because unlike with rspec the paths in the results are absolute
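Step 4 could be handled in a wrapper script along these lines. A sketch only: the prefixes, the results-file path, and the sample contents are all hypothetical, and the real neotest-python results format may differ.

```shell
#!/bin/bash
# Hypothetical prefixes; adjust to your project/container layout.
remote_root="/app"
local_root="/home/user/project"
results_file="/tmp/neotest-python-results.json"

# Fabricated stand-in for a results file containing container paths.
printf '%s' "{\"$remote_root/tests/test_foo.py::test_bar\": {\"status\": \"passed\"}}" > "$results_file"

# Rewrite container paths back to host paths in place.
sed -i "s|$remote_root/|$local_root/|g" "$results_file"

cat "$results_file"
# prints {"/home/user/project/tests/test_foo.py::test_bar": {"status": "passed"}}
```

Using | as the sed delimiter avoids having to escape the slashes in the two path prefixes.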

@camilledejoye

Hi, jumping into the conversation since I'm also mostly working with containers.
I started to adapt the phpunit adapter and managed to make it work both on the host and remote.

For the issue of local vs. remote paths, it's manageable by the adapter alone, without the rest of neotest being aware of it.
In build_spec, make position.path relative so that the test program works in both cases.
In results, if the test program returns full file paths, they must be mapped back to local paths.
This is doable without too many issues by providing a path_mappings option like DAP's.

The real issue is being able to read the file containing the test results, since it's going to be created inside the container.
The quickest solution is a wrapper script that detects when the test program is run through neotest and copies the file from the container to the host.
But that's really inefficient, and it requires constantly wrapping the test programs in each project with a similar configuration to keep the neotest configuration maintainable.
We could provide a "container" option to the adapter so that it does the copy itself, but that's not really any better: container names change from project to project, etc.
One could imagine sharing those files through a volume, but that still requires file system operations. We can only mount directories from the project root, so we'd need to be able to configure where the adapter writes this file and to override the container definition for a personal use case (not easily done in all projects).

From my experience, the most painless way to work with containers is to not rely on the file system but solely on stdin, stdout and stderr when possible.
In the case of the phpunit adapter, it would be possible to use the TeamCity output format, which would make it possible to process results in real time without knowing where the process is actually running.
I saw that lib.files provides a stream feature, and I was wondering if something similar could be put in place for stdout instead of a file?
The main downside here is that we would lose the regular output: if we use docker compose exec ... then docker compose will merge stdout with stderr, and if we use docker compose exec -T ... to disable the TTY and keep them separated, then the container no longer has access to a /dev/stderr to write to :(

But it's probably acceptable. IMO you either want neotest to handle the test results and only show you the relevant information (success, failure, failure messages), or you want access to the output to analyze yourself; maybe that's a strategy to add, one that behaves like vim-test by running the test program in an emulated terminal.

Anyway, I digress a bit here. The main question is: how can I access the output from within the adapter to process the results without going through a file?
It seems there is already a temporary file generated by neotest in order to show the output at any time.
Would it be possible to return a callback from an adapter to process this output in real time ?
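As a rough illustration of the real-time processing described above (this is not neotest code; the TeamCity service messages are fabricated samples), a consumer could react to each line of the test runner's stdout as it arrives:

```shell
#!/bin/bash
# Parse TeamCity-style service messages line by line as they stream in.
# The printf below simulates a test runner's stdout.
printf '%s\n' \
  "##teamcity[testStarted name='adds numbers']" \
  "##teamcity[testFinished name='adds numbers']" \
  "##teamcity[testFailed name='divides by zero' message='oops']" |
while IFS= read -r line; do
  case "$line" in
    '##teamcity[testFailed'*)   echo "FAIL: $line" ;;
    '##teamcity[testFinished'*) echo "PASS: $line" ;;
  esac
done
```

Because each message is a single line, results can be reported as soon as each test finishes, regardless of where the process is running.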

@meicale

meicale commented Mar 8, 2023

I think it is reasonable to add a layer like DAP's. I use dap and debugpy to debug Python code in containers and on remote servers without too much configuration or effort. LSP needs a similar ability too. If this works, it will be as good for remote work as VSCode, if not better.

@rcarriga
Collaborator

@camilledejoye A custom strategy is an interesting idea. I don't use containers at all for testing, but if someone is interested in opening a PR I'd be happy to help. It could work like the dap strategy, where it requires some configuration from the adapter but abstracts the common needs of actually running the process.

It seems there is already a temporary file generated by neotest in order to show the output at any time.
Would it be possible to return a callback from an adapter to process this output in real time ?

Adapters can already process the output of the process: an adapter can provide a stream function in the RunSpec, which gets an iterator over the process output.

---@field stream fun(output_stream: fun(): string[]): fun(): table<string, neotest.Result>

@hwo411

hwo411 commented Dec 1, 2024

Hi everyone, I came across this topic since it was relevant to me, and I'd like to talk a bit about a general solution.

I was trying to integrate this plugin into my workflow, but it didn't work for me since we're using k8s in our regular development flow and it's not supported by the plugin.

For reference: we're using devspace for local development in k8s and I tried all things using it.

I tried it with neotest-rspec, since it has docker support, and it worked, but it showed incorrect statuses for everything except single-test runs. Quite likely the results don't manage to sync from the container to the host system in time, though somehow they did for a single test (I didn't manage to strictly verify this idea, but that's how it behaved).

I also tried neotest-golang and it didn't have options to run tests in devspace at all.

So the (non-exhaustive) list of things that would need to be supported contains the following:

  1. Being able to override the test command so it runs through a certain tool (separated from the plugin's own command options), probably as a separate layer that does not need to be aware of the plugins
  2. Being able to properly map tests on the local filesystem to the container ones and back
  3. Being able to read the results after they're synced back from the container, or reading them by other means like stdout. I'd assume the former might require an additional command to check that the result is ready (e.g., if possible, asking the sync tool whether everything is up to date, or comparing the local and container files). As for the latter, I see that @camilledejoye already described this idea, but it has the downside of losing the regular output.
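The comparison idea in point 3 could be sketched like this: a toy version where two local files stand in for the host and container copies (no real container or sync tool involved; all paths are placeholders). In a real setup, the second checksum would be computed inside the container, e.g. via docker exec or the sync tool.

```shell
#!/bin/bash
# Poll until the synced copy of the results file matches the source,
# comparing checksums. The cp below stands in for the sync tool copying
# results back from the container.
src=/tmp/neotest-results-src.json
dst=/tmp/neotest-results-dst.json
printf '%s' '{"ok":true}' > "$src"
cp "$src" "$dst"

checksum() { md5sum "$1" | cut -d' ' -f1; }

until [ "$(checksum "$src")" = "$(checksum "$dst")" ]; do
  sleep 0.2
done
echo "results synced"
```

A timeout would be needed in practice so a failed sync doesn't hang the poll forever.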

And quite likely there are other things that I'm missing, since I'm not experienced with this plugin.

I have two main questions:

  1. Does what I wrote above make sense? Or maybe I'm thinking in the wrong direction
  2. Is there anybody already working on this kind of change?
