Comments (14)
Thank you, that makes sense!
The hash value (= output directory name) of a test is now unique across files. In addition, nf-test previously deleted the `.nf-test` directory on each startup. I removed this behaviour and added a new subcommand, `clean`, which can be used to clean up the `.nf-test` directory manually (as in Nextflow). This should eliminate all side effects when test cases run in parallel.
from nf-test.
Hi Aaron 👋
We've now started implementing this feature, and a pull request (#24) including TAP output is available. It would be great if you could give it a try before we merge the pull request. Simply add `--tap` as a parameter on the command line.
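For context, TAP (Test Anything Protocol) is a simple line-oriented format: a plan line `1..N` followed by one `ok`/`not ok` line per test. A minimal emitter (a sketch for illustration, not nf-test's code) looks like this:

```python
def tap_report(results):
    """Minimal TAP emitter: a plan line followed by one
    'ok'/'not ok' line per (name, passed) test result."""
    lines = [f"1..{len(results)}"]
    for i, (name, passed) in enumerate(results, start=1):
        status = "ok" if passed else "not ok"
        lines.append(f"{status} {i} {name}")
    return "\n".join(lines)

print(tap_report([("Should run without failures", False),
                  ("Should fail with negative number", True)]))
# Prints:
# 1..2
# not ok 1 Should run without failures
# ok 2 Should fail with negative number
```

Harnesses such as `prove` consume exactly this stream, which is what produces the summaries shown later in this thread.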
This is rad! For sure, we'll test this and the functions PR this week.
Hi,
I tried the feature, and I think this is exactly what we are looking for. However, I found a bug; I am not sure whether the issue is with the TAP output or somewhere else in the code.
I have four test files, each consisting of multiple test cases, and I run all four as part of a test suite, but I get different results every time:
```
tests/local/nextflow.sh
tests/local/nextflow/paris/CollectSamplesWf.nf ........ Failed 1/2 subtests
tests/local/nextflow/paris/CreatePerLaneFastqChWf.nf .. Failed 1/1 subtests
tests/local/nextflow/paris/CreatePatientsChWf.nf ...... Failed 3/4 subtests
tests/local/nextflow/paris/CreateSamplesChWf.nf ....... 1/3
not ok 1 Test workflow CreateSamplesChWf: Should run without failures
tests/local/nextflow/paris/CreateSamplesChWf.nf ....... Failed 1/3 subtests

Test Summary Report
tests/local/nextflow/paris/CollectSamplesWf.nf (Wstat: 0 Tests: 2 Failed: 1)
  Failed test: 1
tests/local/nextflow/paris/CreatePerLaneFastqChWf.nf (Wstat: 0 Tests: 1 Failed: 1)
  Failed test: 1
tests/local/nextflow/paris/CreatePatientsChWf.nf (Wstat: 0 Tests: 4 Failed: 3)
  Failed tests: 1-2, 4
tests/local/nextflow/paris/CreateSamplesChWf.nf (Wstat: 0 Tests: 3 Failed: 1)
  Failed test: 1
Files=4, Tests=10, 36 wallclock secs ( 0.04 usr 0.00 sys + 189.48 cusr 6.56 csys = 196.08 CPU)
Result: FAIL

[k.agrawal@login1(FastNet-Dev-9) nf]$ tests/local/nextflow.sh
tests/local/nextflow/paris/CollectSamplesWf.nf ........ ok
tests/local/nextflow/paris/CreatePerLaneFastqChWf.nf .. Failed 1/1 subtests
tests/local/nextflow/paris/CreatePatientsChWf.nf ...... Failed 2/4 subtests
tests/local/nextflow/paris/CreateSamplesChWf.nf ....... 1/3
not ok 1 Test workflow CreateSamplesChWf: Should run without failures
tests/local/nextflow/paris/CreateSamplesChWf.nf ....... Failed 1/3 subtests

Test Summary Report
tests/local/nextflow/paris/CreatePerLaneFastqChWf.nf (Wstat: 0 Tests: 1 Failed: 1)
  Failed test: 1
tests/local/nextflow/paris/CreatePatientsChWf.nf (Wstat: 0 Tests: 4 Failed: 2)
  Failed tests: 2, 4
tests/local/nextflow/paris/CreateSamplesChWf.nf (Wstat: 0 Tests: 3 Failed: 1)
  Failed test: 1
Files=4, Tests=10, 42 wallclock secs ( 0.03 usr 0.01 sys + 223.82 cusr 7.37 csys = 231.23 CPU)
Result: FAIL

[k.agrawal@login1(FastNet-Dev-9) nf]$ tests/local/nextflow.sh
tests/local/nextflow/paris/CollectSamplesWf.nf ........ ok
tests/local/nextflow/paris/CreatePerLaneFastqChWf.nf .. Failed 1/1 subtests
tests/local/nextflow/paris/CreatePatientsChWf.nf ...... Failed 2/4 subtests
tests/local/nextflow/paris/CreateSamplesChWf.nf ....... 1/3
not ok 1 Test workflow CreateSamplesChWf: Should run without failures
tests/local/nextflow/paris/CreateSamplesChWf.nf ....... Failed 1/3 subtests

Test Summary Report
tests/local/nextflow/paris/CreatePerLaneFastqChWf.nf (Wstat: 0 Tests: 1 Failed: 1)
  Failed test: 1
tests/local/nextflow/paris/CreatePatientsChWf.nf (Wstat: 0 Tests: 4 Failed: 2)
  Failed tests: 2, 4
tests/local/nextflow/paris/CreateSamplesChWf.nf (Wstat: 0 Tests: 3 Failed: 1)
  Failed test: 1
Files=4, Tests=10, 42 wallclock secs ( 0.03 usr 0.01 sys + 220.11 cusr 7.29 csys = 227.44 CPU)
Result: FAIL

[k.agrawal@login1(FastNet-Dev-9) nf]$ tests/local/nextflow.sh
tests/local/nextflow/paris/CollectSamplesWf.nf ........ ok
tests/local/nextflow/paris/CreatePatientsChWf.nf ...... Failed 1/4 subtests
tests/local/nextflow/paris/CreatePerLaneFastqChWf.nf .. Failed 1/1 subtests
tests/local/nextflow/paris/CreateSamplesChWf.nf ....... 1/3
not ok 1 Test workflow CreateSamplesChWf: Should run without failures
tests/local/nextflow/paris/CreateSamplesChWf.nf ....... Failed 1/3 subtests

Test Summary Report
tests/local/nextflow/paris/CreatePatientsChWf.nf (Wstat: 0 Tests: 4 Failed: 1)
  Failed test: 4
tests/local/nextflow/paris/CreatePerLaneFastqChWf.nf (Wstat: 0 Tests: 1 Failed: 1)
  Failed test: 1
tests/local/nextflow/paris/CreateSamplesChWf.nf (Wstat: 0 Tests: 3 Failed: 1)
  Failed test: 1
Files=4, Tests=10, 47 wallclock secs ( 0.03 usr 0.01 sys + 208.58 cusr 7.32 csys = 215.94 CPU)
Result: FAIL
```
To be clear, `tests/local/nextflow/paris/CollectSamplesWf.nf` sometimes passes and sometimes fails; according to the code, it should fail. Next, for `tests/local/nextflow/paris/CreatePatientsChWf.nf`, sometimes 2 out of 4 cases fail, sometimes 3 or 1. It is a correct test and should pass all 4 cases every time.
Again, I am not sure whether this is an issue with the TAP output or with running tests as a suite. Let me know if you need any more details.
Thank you for testing it! 🙏 Any chance you could send us a minimal running example file for `tests/local/nextflow/paris/CreatePatientsChWf.nf`? Or share the content of the nf.test file? We tried it with different test suites and the output of the tests was always reproducible.
Hey @lukfor, this code looks great. I've just provided some feedback, but all minor points really. It will be great to re-use the abstraction for xunit!
> Thank you for testing it! 🙏 Any chance to send us a minimal running example file for tests/local/nextflow/paris/CreatePatientsChWf.nf? Or to share the content of the nf.test file? We tried it with different testsuites and the output of the tests was always reproducible.
Hi @lukfor
I tried to analyse further, and I think something is going wrong when test cases in different files have the same name.
Here is what I have done:
I used the same process as yours, https://github.com/askimed/nf-test/blob/main/example/say-hello.nf, and created 3 test files like your [say-hello.nf.test](https://github.com/askimed/nf-test/blob/main/example/say-hello.nf.test), named trial 1, 2 and 3. I updated trial_2 by deleting input[0] from the first test and input[1] from the second.
Now I run all 3 files as part of a suite multiple times, and here are my results:
```
tests/nextflow.sh
tests/local/nextflow/trial2.nf .. ok
tests/local/nextflow/trial1.nf .. ok
tests/local/nextflow/trial3.nf .. ok
All tests successful.
Files=3, Tests=6, 51 wallclock secs ( 0.03 usr 0.01 sys + 128.05 cusr 11.43 csys = 139.52 CPU)
Result: PASS

[k.agrawal@login1(FastNet-Dev-9) nf-new]$ tests/nextflow.sh
tests/local/nextflow/trial1.nf .. Failed 2/2 subtests
tests/local/nextflow/trial2.nf .. Failed 2/2 subtests
tests/local/nextflow/trial3.nf .. ok

Test Summary Report
tests/local/nextflow/trial1.nf (Wstat: 0 Tests: 2 Failed: 2)
  Failed tests: 1-2
tests/local/nextflow/trial2.nf (Wstat: 0 Tests: 2 Failed: 2)
  Failed tests: 1-2
Files=3, Tests=6, 39 wallclock secs ( 0.04 usr 0.01 sys + 112.91 cusr 11.56 csys = 124.52 CPU)
Result: FAIL

[k.agrawal@login1(FastNet-Dev-9) nf-new]$ tests/nextflow.sh
tests/local/nextflow/trial2.nf .. ok
tests/local/nextflow/trial1.nf .. ok
tests/local/nextflow/trial3.nf .. ok
All tests successful.
Files=3, Tests=6, 30 wallclock secs ( 0.03 usr 0.01 sys + 116.66 cusr 10.36 csys = 127.06 CPU)
Result: PASS

[k.agrawal@login1(FastNet-Dev-9) nf-new]$ tests/nextflow.sh
tests/local/nextflow/trial2.nf .. Failed 1/2 subtests
tests/local/nextflow/trial1.nf .. Failed 1/2 subtests
tests/local/nextflow/trial3.nf .. ok

Test Summary Report
tests/local/nextflow/trial2.nf (Wstat: 0 Tests: 2 Failed: 1)
  Failed test: 1
tests/local/nextflow/trial1.nf (Wstat: 0 Tests: 2 Failed: 1)
  Failed test: 1
Files=3, Tests=6, 23 wallclock secs ( 0.03 usr 0.00 sys + 104.42 cusr 5.04 csys = 109.49 CPU)
Result: FAIL
```
Again, I am not sure, but it looks like output directories are shared between different test cases, and that is causing the issue. See the TAP output files:
```
cat tests_local_nextflow_trial2.nf
1..2
not ok 1 Test process SAY_HELLO: Running with positive number should succeed
  output: |+
    Nextflow stdout:
    Process `SAY_HELLO` declares 2 input channels but 1 were specified
    -- Check script '.nf-test/tests/21a3d9414a64b1c86d8ee8fbe6691d68/meta/mock.nf' at line: 31 or see '/home/k.agrawal/nf-new/.nf-test/tests/21a3d9414a64b1c86d8ee8fbe6691d68/meta/nextflow.log' file for more details
    Nextflow stderr:
  failure: "Assertion failed: \n\nassert process.success\n       |       |\n       |
    \       false\n       SAY_HELLO\n"
  ...
not ok 2 Test process SAY_HELLO: Running with negative number should fail
  output: |+
    Nextflow stdout:
    A process input channel evaluates to null -- Invalid declaration `val name`
    -- Check script '.nf-test/tests/7b531a5bb35824fc75c646e8d14727e7/meta/mock.nf' at line: 31 or see '/home/k.agrawal/nf-new/.nf-test/tests/7b531a5bb35824fc75c646e8d14727e7/meta/nextflow.log' file for more details
    Nextflow stderr:
  failure: "Assertion failed: \n\nassert process.errorReport.contains(\"Negative numbers
    \ not allowed\")\n       |       |           |\n       |       ''          false\n
    \       SAY_HELLO\n"
  ...
[k.agrawal@login1(FastNet-Dev-9) nf-new]$ cat tests_local_nextflow_trial1.nf
1..2
not ok 1 Test process SAY_HELLO: Running with positive number should succeed
  output: |+
    Nextflow stdout:
    Process `SAY_HELLO` declares 2 input channels but 1 were specified
    -- Check script '.nf-test/tests/21a3d9414a64b1c86d8ee8fbe6691d68/meta/mock.nf' at line: 31 or see '/home/k.agrawal/nf-new/.nf-test/tests/21a3d9414a64b1c86d8ee8fbe6691d68/meta/nextflow.log' file for more details
    Nextflow stderr:
  failure: "Assertion failed: \n\nassert process.errorReport.contains(\"Negative numbers
    \ not allowed\")\n       |       |           |\n       |       ''          false\n
    \       SAY_HELLO\n"
  ...
```
I think generating a different output directory for each test case could help.
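Note that in the two TAP dumps above, trial1 and trial2 both reference the same hash directory (`.nf-test/tests/21a3d941…`), which is consistent with the output directory being derived from the test name alone. A sketch of the suspected collision (the hashing scheme here is an assumption for illustration, not nf-test's actual code):

```python
import hashlib

def name_only_hash(test_name: str) -> str:
    """Hypothetical pre-fix scheme: the output directory is derived
    from the test name alone, ignoring which file the test lives in."""
    return hashlib.md5(test_name.encode()).hexdigest()

# trial1 and trial2 both define a test with this name, so under a
# name-only scheme they share one output directory and can clobber
# each other's mock.nf and work files when run in parallel.
name = "Test process SAY_HELLO: Running with positive number should succeed"
dir_trial1 = name_only_hash(name)  # same input ...
dir_trial2 = name_only_hash(name)  # ... same directory
assert dir_trial1 == dir_trial2
```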
Hi @Khushbu04in! Thanks, I will look into that!
Thank you @aaron-fishman-achillestx! Where can I find your feedback? There are no comments in the Pull Request.
I put them here: #24. Is it not showing for you?
Oh, they were pending @lukfor, just submitted them as comments. They should be visible now.
Cool! Thank you for your great code review and the comments! 👍
Hi @Khushbu04in! Thanks for your effort! Could you please post the content of the `tests/nextflow.sh` file? Are you executing the test suites in parallel?
> Hi @Khushbu04in! Thanks for you effort! Could you please post the content of the tests/nextflow.sh file? Are you executing the testsuites in parallel?
Hi @lukfor, yes, my test cases are running in parallel. I understand that this is why I am getting the issue. But I think having a different output directory per test case would solve it and help the tests run quickly.
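Per-case directories are the standard isolation fix for this kind of parallel clobbering. A generic sketch (not nf-test code; the worker function and naming are made up for illustration) using Python's `tempfile`:

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def run_case(name: str) -> str:
    """Hypothetical test runner: each case gets its own unique working
    directory, so parallel cases can never overwrite each other's files."""
    workdir = tempfile.mkdtemp(prefix=f"{name}-")
    with open(os.path.join(workdir, "result.txt"), "w") as fh:
        fh.write(f"{name}: ok\n")
    return workdir

# Run three cases concurrently, as a parallel test suite would.
with ThreadPoolExecutor(max_workers=3) as pool:
    dirs = list(pool.map(run_case, ["trial1", "trial2", "trial3"]))

# All three directories are distinct even though the cases ran in parallel.
assert len(set(dirs)) == 3
```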
I merged the pull request 👍