How to write tests? #416
I'm also looking for a way to test prompt commands.
That depends on the type of tests you are aiming for. I would assume nearly all would be unit tests. I just started looking into this as well, so I'll post anything I find.
I'm talking about all the features which gluegun provides. E.g., toolbox isn't exposed in the API, and I don't want to mock require statements or do mocking in general. There should exist a consistent and simple API to test prompts, the filesystem, etc.; this is really important. The project was designed to create big CLI applications, but that's impossible without a proper test setup.
We can assume that anything on `toolbox` works, as there should already be tests for those. If I'm testing a custom command, my unit test shouldn't care that `toolbox.print` works; the gluegun test suites should cover that. My test should only be concerned that it called `toolbox.print.info`. Mocks and spies would work perfectly for these tests.
I believe what you are talking about are integration tests. The test value you'd get from these in this scenario wouldn't be worth the squirrelliness of testing things for real.
In the testing pyramid, you should try to push tests as far down as you can. It's all down to personal taste though. 🤷‍♂️
I'm not talking about the stability of gluegun itself but about my commands, which are composed from it. How can I guarantee that my super command works when I can't write tests, or when it's a hassle for every new developer to write simple ones? It would be awesome to provide in-memory adapters for the filesystem, prompts, process... Mocking/spying is great, but it shouldn't be the solution for everything. I'm asking for first-class testing support.
Hey @StarpTech, apologies for the late response here. I've been focused on some other things. We do need better testing documentation. I am a big believer in integration tests where possible (especially for CLIs!), so new Gluegun CLIs now come with an integration test built in. This is new as of #409. Here's an example of this generated integration test:

```javascript
const { system, filesystem } = require('gluegun')
const { resolve } = require('path')

const src = resolve(__dirname, '..')

const cli = async cmd =>
  system.run('node ' + resolve(src, 'bin', 'movie') + ` ${cmd}`)

test('outputs version', async () => {
  const output = await cli('--version')
  expect(output).toContain('0.0.1')
})

test('outputs help', async () => {
  const output = await cli('--help')
  expect(output).toContain('0.0.1')
})

test('generates file', async () => {
  const output = await cli('generate foo')
  expect(output).toContain('Generated file at models/foo-model.ts')

  const foomodel = filesystem.read('models/foo-model.ts')
  expect(foomodel).toContain(`module.exports = {`)
  expect(foomodel).toContain(`name: 'foo'`)

  // cleanup artifact
  filesystem.remove('models')
})
```

I'll make it a priority to add more documentation around testing in the future. Thanks for bringing this up!
Me too, but it's not easy to provide simple and reliable helpers for all the tools (axios, filesystem, process, prompts) across different OSes without providing some builtins. I would like to have a combination of both, something like this:

```javascript
const { systemTest, filesystem, httpTest, testUtils } = require('gluegun')
const { resolve } = require('path')

const src = resolve(__dirname, '..')

// The difference between system and systemTest is that it prepares the result
// for better testing and can track progress via 'fork' cross-process communication
const cli = async cmd =>
  systemTest.run('node ' + resolve(src, 'bin', 'movie') + ` ${cmd}`)

test('generates file', async () => {
  // create a real file system from JSON
  const cleanUp = await testUtils.createFileSystem({
    './README.md': '1',
    './src/index.js': '2',
    './node_modules/debug/index.js': '3',
  })

  const app = await cli('generate foo')
  await testUtils.delay(2)

  // Under the hood we could use supertest
  httpTest.onGet('/users').reply(200, users)

  expect(app.output[0]).toContain('Generated file README.md')

  // Am I in a prompt, a process execution, or a template operation?
  // for a better debugging experience
  app.state() // returns { command: 'create-readme', state: 'foo' }

  const readme = await filesystem.read('README.md')
  expect(readme).toContain(`# myapp`)

  await httpTest.ensureCalled()

  // cleanup artifacts
  await cleanUp()
})
```
Great idea @StarpTech. Perhaps we could expose those utilities at …
FYI, here's an article I wrote that is helpful: https://shift.infinite.red/integration-testing-interactive-clis-93af3cc0d56f |
Do we have any news about this topic? Does anyone have an idea how to write tests without writing to a real file system?
Hi,
I'm looking for an easy way to write tests for gluegun commands which access the toolbox (file system, process...). Do any test helpers exist? It seems that testing isn't taken seriously when I look at the projects referenced in the Readme.