-
Hey, thanks for the report/suggestions! Great to hear you're writing an adapter 😁 GTest is one that has always evaded my support, in vim-ultest as well (rcarriga/vim-ultest#104 (comment)).
That shouldn't happen AFAIK; the only case where it would is if the adapter said that the file is a test file. Is that what's happening? If that's not the case, grabbing the logs should help diagnose it.
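For example, a rough sketch of how an adapter could guard `is_test_file` so discovery never wanders outside the project (the root lookup here is purely illustrative, not neotest API):

```lua
-- Sketch only: treat a file as a test file only if it lives under the project root.
local function project_root(dir)
  -- hypothetical helper: walk upwards to the nearest .git directory
  -- (simplified; relative results from finddir are not handled)
  local git_dir = vim.fn.finddir(".git", dir .. ";")
  if git_dir == "" then
    return nil
  end
  return vim.fn.fnamemodify(git_dir, ":p:h:h")
end

local function is_test_file(file_path)
  local abs = vim.fn.fnamemodify(file_path, ":p")
  local root = project_root(vim.fn.fnamemodify(abs, ":h"))
  if not root or not vim.startswith(abs, root) then
    return false -- files like /usr/include/something.h are never test files
  end
  return abs:match("test.*%.cpp$") ~= nil
end
```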
This makes sense though it doesn't make sense to use
Yep that's something I've thought about (though adapters would probably still want to provide error messages for the test file as there's no guarantee that the file is open/visible), I'd just add
The diagnostics consumer could handle that with a limit per line/buffer? Don't think that'd be a blocker anyway
Interesting idea. I'd probably keep that as functionality solely in the output consumer, which is quite simple, so I don't mind making it a bit more complex. Thinking something like a
I'm not sure there's much point in neotest providing a centralised location rather than adapters just using
So this has been raised for neotest-go as well. neotest-pytest will just aggregate each result into the single result (this is one of the reasons a single result can have multiple errors) and then display all outputs together for the "short" output. The neotest-go author had discussed attempting to parse the parameters with treesitter, since they are usually static and so can be handled by it (vscode-go does this to some extent, nvim-neotest/neotest-go#1 (comment)). However, this is getting into territory where treesitter may be a bit limited, and you might want to instead use the test runner's discovery if possible (or LSP). I may do that for pytest, as neotest-pytest already integrates with it and so can find that information, but it could be a lot slower (pytest discovery isn't even close to treesitter speed).
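Roughly what the aggregation looks like in an adapter's `results` (a sketch; `parsed_cases` and its fields are made-up names for whatever your output parser produces):

```lua
-- Sketch: fold every parametrized case into one neotest result with multiple errors.
local function aggregate(position_id, parsed_cases)
  local errors, shorts = {}, {}
  for _, case in ipairs(parsed_cases) do
    shorts[#shorts + 1] = case.name .. ": " .. case.summary
    if case.failed then
      errors[#errors + 1] = { message = case.name .. ": " .. case.message, line = case.line }
    end
  end
  return {
    [position_id] = {
      status = #errors == 0 and "passed" or "failed",
      short = table.concat(shorts, "\n"),
      errors = errors,
    },
  }
end
```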
-
Hello!
First of all, thanks for all the work put into this plugin, it's great and I never want to run tests in a separate tmux pane ever again :)
On to the issue: I'm working on creating an adapter for the Google Test framework for C++. It is not supported by vim-test, as far as I know, due to architectural difficulties. There is a separate plugin that ports that support, but the functionality is limited.
The plugin that I'm writing can be found in my repo. It works, but many features are still WIP, so I decided to open this issue in case somebody else wants to work on this (which would be much appreciated!), and to bring up some issues and suggestions.
So far I've only discovered a single issue: test discovery breaks down when opening a file outside the current project.
If I'm working on `/home/me/myproject/foo.cpp` and go-to-definition in `/usr/include/something.h`, the whole filesystem tree gets parsed, which breaks neovim in a number of ways, from going over `ulimit` with open files to straight up freezing while trying to parse every test file it finds. The discovering mechanic seems way too eager to discover :) Similar behavior has already been mentioned here, and is probably being worked on, but if I can help, I would love to.

Furthermore, if it's okay, I would also like to suggest a couple of minor improvements. If you think they are a good fit for the plugin, I think I can add them myself.
`build_spec`: if an adapter returns `nil` from `build_spec`, an error happens. With Google Test, the adapter has to find the executable to run. Sometimes that may require user input, and I would like to give the user an opportunity to cancel during that input (i.e., "enter path to the executable, empty to cancel"). Otherwise the user has to press `<C-C>` and see some errors they have nothing to do with.
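Roughly what I'd like the adapter to be able to do (a sketch only, assuming `nil` were treated as "cancelled" instead of an error):

```lua
-- Sketch of the desired behaviour, not working adapter code:
-- ask the user for the test executable and treat an empty answer as "cancel".
local function build_spec(args)
  local position = args.tree:data()
  local executable = vim.fn.input("Path to test executable (empty to cancel): ")
  if executable == "" then
    return nil -- currently this errors; the suggestion is to treat it as "cancelled"
  end
  return {
    command = { executable, "--gtest_filter=" .. position.name },
    context = { position_id = position.id },
  }
end
```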
Consider the following use case: a test runs a function in another file, and that file throws an error. Do we want to put a diagnostic at the line where the error happened? Currently the only way to report this would be to display such an error in the test's short summary. However, printing that error in a different file could result in hundreds of tests reporting the same error in that one file, so maybe it's best left as is.

Keeping outputs of old test runs: I think `pytest` does this best, creating a `/tmp/pytest-of-username/pytest-run-<counter>` directory. I implemented something similar myself for Google Test (code here), perhaps it would be generally useful? I sometimes check old test runs to see when it all went so wrong.
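For illustration, something along these lines (paths and helper names are just an example, not what my adapter actually does):

```lua
-- Example only: create a fresh numbered directory for each test run,
-- similar to pytest's /tmp/pytest-of-<user>/pytest-run-<counter> layout.
local function new_run_dir()
  local base = vim.fn.stdpath("data") .. "/neotest-gtest/runs"
  vim.fn.mkdir(base, "p")
  local counter = 0
  for _, entry in ipairs(vim.fn.readdir(base)) do
    local n = tonumber(entry:match("^run%-(%d+)$"))
    if n and n > counter then
      counter = n
    end
  end
  local dir = string.format("%s/run-%d", base, counter + 1)
  vim.fn.mkdir(dir, "p")
  return dir
end
```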
Adapters may want some tiny state. Of course, they can store it themselves under `stdpath('data')`, but it would be nice to store all test states in a centralized fashion. My particular adapter wants to store a simple JSON associating test files to the executables they are compiled into.
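It really is tiny; something like this would cover it (just a sketch, with the file location and shape being only what my adapter would use):

```lua
-- Sketch: persist a file -> executable mapping as JSON under stdpath('data').
local state_file = vim.fn.stdpath("data") .. "/neotest-gtest/executables.json"

local function load_state()
  local f = io.open(state_file, "r")
  if not f then
    return {}
  end
  local content = f:read("*a")
  f:close()
  local ok, decoded = pcall(vim.json.decode, content)
  return ok and decoded or {}
end

local function save_state(state)
  vim.fn.mkdir(vim.fn.fnamemodify(state_file, ":h"), "p")
  local f = assert(io.open(state_file, "w"))
  f:write(vim.json.encode(state))
  f:close()
end
```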
Finally, I need some guidance with parametrized tests: is there a definitive way to work with them? E.g., `@pytest.mark.parametrize` in pytest or `TEST_P` in Google Test. This is really multiple tests masquerading as one, and I'm not sure how to report them: should they be different nodes in the tree, or should it be a single test for which the adapter reports all errors whenever it's run?

Sorry for jamming all this into a single issue; if you think any of these should be worked on, I'll create separate ones.