Comments (8)
You filed this in csi-test, so at first I thought that this was a report for csi-sanity. There are issues in that one, too, so let me keep my original reply:
When running in parallel, multiple JUnit files need to be created (one per process). Looking at the code, I am not sure whether that is supported. I suspect all processes write into the same file, the one specified with -csi.junitfile.
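To illustrate the problem, here is a minimal sketch (hypothetical helper, not csi-sanity's actual code) of how a single configured report path could be turned into one path per parallel process, mirroring the junit_%d.xml naming that the Kubernetes e2e.test uses:

```python
import os

def per_process_junit_path(base_path: str, node: int) -> str:
    """Derive a per-process report path from one configured path.

    For example, "/tmp/results.xml" for parallel node 3 becomes
    "/tmp/results_03.xml", so concurrent processes never overwrite
    each other's reports.
    """
    root, ext = os.path.splitext(base_path)
    return f"{root}_{node:02d}{ext or '.xml'}"

print(per_process_junit_path("/tmp/results.xml", 3))  # /tmp/results_03.xml
```

Without something like this, every parallel process opens the same file and the last writer wins.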
Regarding the Kubernetes e2e.test: when running in parallel, it writes one junit_%d.xml file per process. Does your CI system parse all of them? There's a tool at https://github.com/kubernetes-csi/csi-test/blob/master/release-tools/filter-junit.go which can merge multiple different files.
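For CI systems that expect a single report, the merge step that filter-junit performs can be approximated in a few lines of standard-library Python. This is a hedged sketch of the general technique, not filter-junit's actual implementation (that tool is written in Go and can also filter by test name):

```python
import xml.etree.ElementTree as ET

def merge_junit(paths):
    """Merge several JUnit XML files into a single <testsuite> element.

    Assumes each input file has either a <testsuite> root containing
    <testcase> children, or a <testsuites> wrapper around such suites.
    """
    merged = ET.Element("testsuite")
    tests = failures = 0
    for path in paths:
        root = ET.parse(path).getroot()
        # Accept either a bare <testsuite> or a <testsuites> wrapper.
        suites = [root] if root.tag == "testsuite" else root.findall("testsuite")
        for suite in suites:
            for case in suite.findall("testcase"):
                merged.append(case)
                tests += 1
                failures += len(case.findall("failure"))
    merged.set("tests", str(tests))
    merged.set("failures", str(failures))
    return merged
```

With the per-process files from a parallel run, one would then write something like `ET.ElementTree(merge_junit(glob.glob("/tmp/junit_myprefix*.xml"))).write("merged.xml")` and hand the single file to the CI system.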
-report-prefix works for me (Kubernetes master):
$ ginkgo -p -focus=foobar ./test/e2e -- -report-dir=/tmp/ -report-prefix=myprefix
...
$ ls /tmp/junit_myprefix*
/tmp/junit_myprefix02.xml /tmp/junit_myprefix08.xml /tmp/junit_myprefix14.xml /tmp/junit_myprefix20.xml /tmp/junit_myprefix26.xml /tmp/junit_myprefix32.xml
/tmp/junit_myprefix03.xml /tmp/junit_myprefix09.xml /tmp/junit_myprefix15.xml /tmp/junit_myprefix21.xml /tmp/junit_myprefix27.xml /tmp/junit_myprefix33.xml
/tmp/junit_myprefix04.xml /tmp/junit_myprefix10.xml /tmp/junit_myprefix16.xml /tmp/junit_myprefix22.xml /tmp/junit_myprefix28.xml /tmp/junit_myprefix34.xml
/tmp/junit_myprefix05.xml /tmp/junit_myprefix11.xml /tmp/junit_myprefix17.xml /tmp/junit_myprefix23.xml /tmp/junit_myprefix29.xml /tmp/junit_myprefix35.xml
/tmp/junit_myprefix06.xml /tmp/junit_myprefix12.xml /tmp/junit_myprefix18.xml /tmp/junit_myprefix24.xml /tmp/junit_myprefix30.xml
/tmp/junit_myprefix07.xml /tmp/junit_myprefix13.xml /tmp/junit_myprefix19.xml /tmp/junit_myprefix25.xml /tmp/junit_myprefix31.xml
Not sure if this is something unsupported, but I was specifying the -reportFile flag. In that file, the report was inconsistent with the console output.
When I remove that option and run
ginkgo -p -focus='External.Storage' -skip='[Feature:|[Disruptive]|[Serial]' external-e2e/e2e.test -- -report-dir=/tmp/ -report-prefix=prefix -storage.testdriver=${env.PWD}/test/external-e2e/test-driver.yaml
this is the console output:
Oct 14 15:52:33.065: INFO: Running AfterSuite actions on all nodes
Oct 14 15:54:54.771: INFO: Running AfterSuite actions on node 1
Oct 14 15:54:54.771: INFO: Dumping logs locally to: /tmp/
Oct 14 15:54:54.772: INFO: Error running cluster/log-dump/log-dump.sh: fork/exec ../../cluster/log-dump/log-dump.sh: no such file or directory
Ran 30 of 5978 Specs in 526.133 seconds
SUCCESS! -- 30 Passed | 0 Failed | 0 Pending | 5948 Skipped
and I couldn't find any XML file in /tmp. Could this be a configuration issue on my side?
Thanks
> Not sure if this is something unsupported, but I was specifying the -reportFile flag. In that file, the report was inconsistent with the console output.
Ah, you are talking about -ginkgo.reportFile. I don't know how that one is supposed to be used. That's a question for https://github.com/onsi/ginkgo/; I don't think Kubernetes does anything special there.
> When I remove that option and run
> ginkgo -p -focus='External.Storage' -skip='[Feature:|[Disruptive]|[Serial]' external-e2e/e2e.test -- -report-dir=/tmp/ -report-prefix=prefix -storage.testdriver=${env.PWD}/test/external-e2e/test-driver.yaml
> this is the console output:
> Oct 14 15:52:33.065: INFO: Running AfterSuite actions on all nodes
> Oct 14 15:54:54.771: INFO: Running AfterSuite actions on node 1
> Oct 14 15:54:54.771: INFO: Dumping logs locally to: /tmp/
> Oct 14 15:54:54.772: INFO: Error running cluster/log-dump/log-dump.sh: fork/exec ../../cluster/log-dump/log-dump.sh: no such file or directory
> Ran 30 of 5978 Specs in 526.133 seconds
> SUCCESS! -- 30 Passed | 0 Failed | 0 Pending | 5948 Skipped
> and I couldn't find any XML file in /tmp. Could this be a configuration issue on my side?
Looks okay to me, and it works for me (see earlier). Which version of e2e.test are you using, and where did you get it from?
Does it work for you when you invoke it like I did above?
Hi,
I tried again this morning with the following command:
./ginkgo -p -focus='External.Storage' -skip='[Feature:|[Disruptive]' ./e2e.test -- -report-dir=. -report-prefix=myprefix -storage.testdriver=test-driver.yaml
It looks like it is working now... not sure what was wrong last week; maybe the reportFile flag is simply not supported, or my VM went crazy...
We downloaded it from https://dl.k8s.io/v1.21.1/kubernetes-test-linux-amd64.tar.gz
Thanks for your support.
BR
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
In response to this:
> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
> This bot triages issues and PRs according to the following rules:
> - After 90d of inactivity, lifecycle/stale is applied
> - After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
> - After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
> You can:
> - Reopen this issue or PR with /reopen
> - Mark this issue or PR as fresh with /remove-lifecycle rotten
> - Offer to help out with Issue Triage
> Please send feedback to sig-contributor-experience at kubernetes/community.
> /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.