GitHub Action to publish unit test results on GitHub

License: Apache License 2.0


GitHub Action to Publish Test Results

CI/CD GitHub release badge GitHub license badge GitHub Workflows badge Docker pulls badge

Arm badge Ubuntu badge macOS badge Windows badge XML badge TRX badge JS badge

Test Results

This GitHub Action analyses test result files and publishes the results on GitHub. It supports JSON (Dart, Mocha), TRX (MSTest, VS) and XML (JUnit, NUnit, XUnit) file formats, and runs on Linux, macOS and Windows.

You can use this action with Ubuntu Linux runners (e.g. runs-on: ubuntu-latest) or ARM Linux self-hosted runners that support Docker:

- name: Publish Test Results
  uses: EnricoMi/publish-unit-test-result-action@v2
  if: always()
  with:
    files: |
      test-results/**/*.xml
      test-results/**/*.trx
      test-results/**/*.json

See the notes on running this action with absolute paths if you cannot use relative test result file paths.

Use this for macOS (e.g. runs-on: macos-latest) runners:

- name: Publish Test Results
  uses: EnricoMi/publish-unit-test-result-action/macos@v2
  if: always()
  with:
    files: |
      test-results/**/*.xml
      test-results/**/*.trx
      test-results/**/*.json

… and Windows (e.g. runs-on: windows-latest) runners:

- name: Publish Test Results
  uses: EnricoMi/publish-unit-test-result-action/windows@v2
  if: always()
  with:
    files: |
      test-results\**\*.xml
      test-results\**\*.trx
      test-results\**\*.json

For self-hosted Linux GitHub Actions runners without Docker installed, please use:

- name: Publish Test Results
  uses: EnricoMi/publish-unit-test-result-action/linux@v2
  if: always()
  with:
    files: |
      test-results/**/*.xml
      test-results/**/*.trx
      test-results/**/*.json

See the notes on running this action as a non-Docker action.

If you see the "Resource not accessible by integration" error, you have to grant additional permissions, or set up support for pull requests from fork repositories and branches created by Dependabot.

The if: always() clause guarantees that this action always runs, even if earlier steps (e.g., the test step) in your workflow fail.

When this action runs multiple times in one workflow, the option check_name has to be set to a unique value for each instance. Otherwise, the runs overwrite each other's results.
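
For example, two instances in one workflow could be configured like this (step names and file paths are illustrative):

```yaml
- name: Publish Unit Test Results
  uses: EnricoMi/publish-unit-test-result-action@v2
  if: always()
  with:
    check_name: Unit Test Results           # unique name for this instance
    files: "unit-test-results/**/*.xml"

- name: Publish Integration Test Results
  uses: EnricoMi/publish-unit-test-result-action@v2
  if: always()
  with:
    check_name: Integration Test Results    # different name, so results are not overwritten
    files: "integration-test-results/**/*.xml"
```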

Note: By default, this action does not fail if tests failed. This can be configured via action_fail. The action that executed the tests should fail on test failure. The published results however indicate failure if tests fail or errors occur, which can be configured via fail_on.

Permissions

Minimal workflow job permissions required by this action in public GitHub repositories are:

permissions:
  checks: write
  pull-requests: write

The following permissions are required in private GitHub repos:

permissions:
  contents: read
  issues: read
  checks: write
  pull-requests: write

With comment_mode: off, the pull-requests: write permission is not needed.
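
For a workflow that disables comments, the job permissions can be reduced accordingly (a sketch; file paths are illustrative):

```yaml
permissions:
  checks: write   # pull-requests: write is not needed with comment_mode: off

steps:
  # …
  - name: Publish Test Results
    uses: EnricoMi/publish-unit-test-result-action@v2
    if: always()
    with:
      comment_mode: off
      files: "test-results/**/*.xml"
```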

Generating test result files

Supported test result files can be generated by many test environments. Here is a small overview, far from complete. Check your favorite development and test environment for its JSON, TRX file or JUnit, NUnit, XUnit XML file support.

| Test Environment | Language | JUnit XML | NUnit XML | XUnit XML | TRX file | JSON file |
|---|---|---|---|---|---|---|
| Dart | Dart, Flutter | | | | | ✅ |
| Jest | JavaScript | ✅ | | | | |
| Maven | Java, Scala, Kotlin | ✅ | | | | |
| Mocha | JavaScript | | | not xunit | | ✅ |
| MSTest / dotnet | .NET | | | | ✅ | |
| pytest | Python | ✅ | | | | |
| sbt | Scala | ✅ | | | | |
| Your favorite environment | Your favorite language | probably | | | | |
What is new in version 2

These changes have to be considered when moving from version 1 to version 2:

Default value for check_name changed

Unless check_name is set in your config, the check name used to publish test results changes from "Unit Test Results" to "Test Results".

Impact: The check with the old name will not be updated once moved to version 2.

Workaround to get version 1 behaviour: Add check_name: "Unit Test Results" to your config.

Default value for comment_title changed

Unless comment_title or check_name is set in your config, the title used to comment on open pull requests changes from "Unit Test Results" to "Test Results".

Impact: Existing comments with the old title will not be updated once moved to version 2, but a new comment is created.

Workaround to get version 1 behaviour: See workaround for check_name.

Modes create new and update last removed for option comment_mode

The action always updates an earlier pull request comment, which is the exact behaviour of mode update last. The configuration options create new and update last are therefore removed.

Impact: An existing pull request comment is always updated.

Workaround to get version 1 behaviour: Not supported.

Option hiding_comments removed

The action always updates an earlier pull request comment, so hiding comments is not required anymore.

Option comment_on_pr removed

Option comment_on_pr has been removed.

Workaround to get version 1 behaviour: Set comment_mode to always (the default) or off.

Publishing test results

Test results are published on GitHub at various (configurable) places:

  • as a comment in related pull requests
  • as a check in the checks section of a commit and related pull requests
  • as annotations in the checks section and changed files section of a commit and related pull requests
  • as a job summary of the GitHub Actions workflow
  • as a check summary in the GitHub Actions section of the commit

Pull request comment

A comment is posted on pull requests related to the commit.

pull request comment example

In presence of failures or errors, the comment links to the respective check summary with failure details.

Subsequent runs of the action will update this comment. You can access earlier results in the comment edit history:

pull request comment history example

The result distinguishes between tests and runs. In some situations, tests run multiple times, e.g. in different environments. Displaying the number of runs allows spotting unexpected changes in the number of runs as well.

When tests run only once, no run information is displayed, and results are shown differently:

pull request comment example without runs

The change statistics (e.g. 5 tests ±0) can sometimes hide test removal. Removed tests are therefore highlighted in pull request comments, so unintended test removal is easy to spot:

pull request comment example with test changes

Note: This requires check_run_annotations to be set to all tests, skipped tests.

Comments can be disabled with comment_mode: off.

Commit and pull request checks

The checks section of a commit and related pull requests list a short summary (here 1 fail, 1 skipped, …), and a link to the check summary in the GitHub Actions section (here Details):

Commit checks:

commit checks example

Pull request checks:

pull request checks example

Check runs can be disabled with check_run: false.

Commit and pull request annotations

Each failing test produces an annotation with failure details in the checks section of a commit:

annotations example check

and the changed files section of related pull requests:

annotations example changed files

Note: Annotations for test files are only supported when test file paths in test result files are relative to the repository root. Use option test_file_prefix to add a prefix to, or remove a prefix from these file paths. See Configuration section for details.
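
For example, if test file paths in the result files are absolute, a prefix can be stripped (the prefix below is illustrative, taken from the option's documentation):

```yaml
- name: Publish Test Results
  uses: EnricoMi/publish-unit-test-result-action@v2
  if: always()
  with:
    test_file_prefix: "-/opt/actions-runner"
    files: "test-results/**/*.xml"
```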

Note: Only the first failure of a test is shown. If you want to see all failures, set report_individual_runs: "true".

Check run annotations can be disabled with ignore_runs: true.

GitHub Actions job summary

The results are added to the job summary page of the workflow that runs this action:

job summary example

In presence of failures or errors, the job summary links to the respective check summary with failure details.

Note: Job summary requires GitHub Actions runner v2.288.0 or above.

Job summaries can be disabled with job_summary: false.

GitHub Actions check summary of a commit

Test results are published in the GitHub Actions check summary of the respective commit:

checks comment example

Check runs can be disabled with check_run: false.

The symbols

The symbols have the following meaning:

| Symbol | Meaning |
|---|---|
| ✅ | A successful test or run |
| 💤 | A skipped test or run |
| ❌ | A failed test or run |
| 🔥 | An erroneous test or run |
| ⏱ | The duration of all tests or runs |

Note: For simplicity, "disabled" tests count towards "skipped" tests.

Configuration

Files can be selected via the files option. It supports glob wildcards like *, **, ?, and [] character ranges. The ** wildcard matches all files and directories recursively: ./, ./*/, ./*/*/, etc.

You can provide multiple file patterns, one pattern per line. Patterns starting with ! exclude the matching files. There has to be at least one pattern that does not start with a !:

with:
  files: |
    *.xml
    !config.xml

The list of most notable options:

Option Default Value Description
files no default File patterns of test result files. Relative paths are known to work best, while the non-Docker action also works with absolute paths. Supports *, **, ?, and [] character ranges. Use a multiline string for multiple patterns. Patterns starting with ! exclude the matching files. There has to be at least one pattern that does not start with a !.
check_name "Test Results" An alternative name for the check result. Required to be unique for each instance in one workflow.
comment_title same as check_name An alternative name for the pull request comment.
comment_mode always The action posts comments to pull requests that are associated with the commit. Set to:
always - always comment
changes - comment when changes w.r.t. the target branch exist
changes in failures - when changes in the number of failures and errors exist
changes in errors - when changes in the number of (only) errors exist
failures - when failures or errors exist
errors - when (only) errors exist
off - to not create pull request comments.
large_files false, unless ignore_runs is true Support for large files is enabled when set to true.
ignore_runs false Does not collect test run information from the test result files, which is useful for very large files. This disables any check run annotations.
Options related to Git and GitHub
Option Default Value Description
commit ${{env.GITHUB_SHA}} An alternative commit SHA to which test results are published. The push and pull_request events are handled, but for other workflow events GITHUB_SHA may refer to different kinds of commits. See GitHub Workflow documentation for details.
github_token ${{github.token}} An alternative GitHub token, other than the default provided by GitHub Actions runner.
github_token_actor github-actions The name of the GitHub app that owns the GitHub API Access Token (see github_token). Used to identify pull request comments created by this action during earlier runs. Has to be set when github_token is set to a GitHub app installation token (other than GitHub Actions). Otherwise, existing comments will not be updated, and new comments will be created instead. Note: this does not change the bot name of the pull request comments.
github_retries 10 Requests to the GitHub API are retried this number of times. The value must be a positive integer or zero.
seconds_between_github_reads 0.25 Sets the number of seconds the action waits between concurrent read requests to the GitHub API.
seconds_between_github_writes 2.0 Sets the number of seconds the action waits between concurrent write requests to the GitHub API.
secondary_rate_limit_wait_seconds 60.0 Sets the number of seconds to wait before retrying secondary rate limit errors. If not set, the default defined in the PyGithub library is used (currently 60 seconds).
pull_request_build "merge" As part of pull requests, GitHub builds a merge commit, which combines the commit and the target branch. If tests ran on the actual pushed commit, then set this to "commit".
event_file ${{env.GITHUB_EVENT_PATH}} An alternative event file to use. Useful to replace a workflow_run event file with the actual source event file.
event_name ${{env.GITHUB_EVENT_NAME}} An alternative event name to use. Useful to replace a workflow_run event name with the actual source event name: ${{ github.event.workflow_run.event }}.
search_pull_requests false Prior to v2.6.0, the action used the /search/issues REST API to find pull requests related to a commit. If you need to restore that behaviour, set this to "true". Defaults to false.
Options related to reporting test results
Option Default Value Description
time_unit seconds Time values in the test result files have this unit. Supports seconds and milliseconds.
test_file_prefix none Paths in the test result files should be relative to the git repository root for annotations to work best. This prefix is added to (if starting with "+"), or removed from (if starting with "-") test file paths. Examples: "+src/" or "-/opt/actions-runner".
check_run true Set to true, the results are published as a check run, but it may not be associated with the workflow that ran this action.
job_summary true Set to true, the results are published as part of the job summary page of the workflow run.
compare_to_earlier_commit true Test results are compared to results of earlier commits to show changes:
false - disable comparison, true - compare across commits.
test_changes_limit 10 Limits the number of removed or skipped tests reported on pull request comments. This report can be disabled with a value of 0.
report_individual_runs false Individual runs of the same test may see different failures. Reports all individual failures when set true, and the first failure only otherwise.
report_suite_logs none In addition to reporting regular test logs, also report test suite logs. These are logs provided on suite level, not individual test level. Set to info for normal output, error for error output, any for both, or none for no suite logs at all. Defaults to none.
deduplicate_classes_by_file_name false De-duplicates classes with same name by their file name when set true, combines test results for those classes otherwise.
check_run_annotations all tests, skipped tests Adds additional information to the check run. This is a comma-separated list of any of the following values:
all tests - list all found tests,
skipped tests - list all skipped tests
Set to none to add no extra annotations at all.
check_run_annotations_branch event.repository.default_branch or "main, master" Adds check run annotations only on given branches. If not given, this defaults to the default branch of your repository, e.g. main or master. Comma separated list of branch names allowed, asterisk "*" matches all branches. Example: main, master, branch_one.
json_file no file Results are written to this JSON file.
json_thousands_separator " " Formatted numbers in JSON use this character to separate groups of thousands. Common values are "," or ".". Defaults to punctuation space (\u2008).
json_suite_details false Write out all suite details to the JSON file. Setting this to true can greatly increase the size of the output. Defaults to false.
json_test_case_results false Write out all individual test case results to the JSON file. Setting this to true can greatly increase the size of the output. Defaults to false.
fail_on "test failures" Configures the state of the created test result check run. With "test failures" it fails if any test fails or test errors occur. It never fails when set to "nothing", and fails only on errors when set to "errors".
action_fail false When set true, the action itself fails when tests have failed (see fail_on).
action_fail_on_inconclusive false When set true, the action itself fails when tests are inconclusive (no test results).

Pull request comments highlight removal of tests or tests that the pull request moves into skip state. Those removed or skipped tests are added as a list, which is limited in length by test_changes_limit, which defaults to 10. Reporting these tests can be disabled entirely by setting this limit to 0. This feature requires check_run_annotations to contain all tests in order to detect test addition and removal, and skipped tests to detect new skipped and un-skipped tests, as well as check_run_annotations_branch to contain your default branch.
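
Putting these options together, a configuration that enables this detection might look like this (branch name, limit, and file paths are illustrative):

```yaml
- name: Publish Test Results
  uses: EnricoMi/publish-unit-test-result-action@v2
  if: always()
  with:
    check_run_annotations: "all tests, skipped tests"
    check_run_annotations_branch: main
    test_changes_limit: 25
    files: "test-results/**/*.xml"
```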

JSON result

The gathered test information is accessible as JSON via the GitHub Actions step output string or via a JSON file.

Access JSON via step outputs

The json output of the action can be accessed through the expression steps.<id>.outputs.json.

- name: Publish Test Results
  uses: EnricoMi/publish-unit-test-result-action@v2
  id: test-results
  if: always()
  with:
    files: "test-results/**/*.xml"

- name: Conclusion
  run: echo "Conclusion is ${{ fromJSON( steps.test-results.outputs.json ).conclusion }}"

Here is an example JSON:

{
  "title": "4 parse errors, 4 errors, 23 fail, 18 skipped, 227 pass in 39m 12s",
  "summary": "  24 files  ±0      4 errors  21 suites  ±0   39m 12s [:stopwatch:](https://github.com/EnricoMi/publish-unit-test-result-action/blob/v2.6.1/README.md#the-symbols \"duration of all tests\") ±0s\n272 tests ±0  227 [:white_check_mark:](https://github.com/EnricoMi/publish-unit-test-result-action/blob/v2.6.1/README.md#the-symbols \"passed tests\") ±0  18 [:zzz:](https://github.com/EnricoMi/publish-unit-test-result-action/blob/v2.6.1/README.md#the-symbols \"skipped / disabled tests\") ±0  23 [:x:](https://github.com/EnricoMi/publish-unit-test-result-action/blob/v2.6.1/README.md#the-symbols \"failed tests\") ±0  4 [:fire:](https://github.com/EnricoMi/publish-unit-test-result-action/blob/v2.6.1/README.md#the-symbols \"test errors\") ±0 \n437 runs  ±0  354 [:white_check_mark:](https://github.com/EnricoMi/publish-unit-test-result-action/blob/v2.6.1/README.md#the-symbols \"passed tests\") ±0  53 [:zzz:](https://github.com/EnricoMi/publish-unit-test-result-action/blob/v2.6.1/README.md#the-symbols \"skipped / disabled tests\") ±0  25 [:x:](https://github.com/EnricoMi/publish-unit-test-result-action/blob/v2.6.1/README.md#the-symbols \"failed tests\") ±0  5 [:fire:](https://github.com/EnricoMi/publish-unit-test-result-action/blob/v2.6.1/README.md#the-symbols \"test errors\") ±0 \n\nResults for commit 11c02e56. ± Comparison against earlier commit d8ce4b6c.\n",
  "conclusion": "success",
  "stats": {
    "files": 24,
    "errors": 4,
    "suites": 21,
    "duration": 2352,
    "tests": 272,
    "tests_succ": 227,
    "tests_skip": 18,
    "tests_fail": 23,
    "tests_error": 4,
    "runs": 437,
    "runs_succ": 354,
    "runs_skip": 53,
    "runs_fail": 25,
    "runs_error": 5,
    "commit": "11c02e561e0eb51ee90f1c744c0ca7f306f1f5f9"
  },
  "stats_with_delta": {
    "files": {
      "number": 24,
      "delta": 0
    },
    …,
    "commit": "11c02e561e0eb51ee90f1c744c0ca7f306f1f5f9",
    "reference_type": "earlier",
    "reference_commit": "d8ce4b6c62ebfafe1890c55bf7ea30058ebf77f2"
  },
  "check_url": "https://github.com/EnricoMi/publish-unit-test-result-action/runs/5397876970",
  "formatted": {
     "stats": {
        "duration": "2 352",
        
     },
     "stats_with_delta": {
        "duration": {
           "number": "2 352",
           "delta": "+12"
        },
        
     }
  },
  "annotations": 31
}

The formatted key provides a copy of stats and stats_with_delta, where numbers are formatted to strings. For example, "duration": 2352 is formatted as "duration": "2 352". The thousands separator can be configured via json_thousands_separator. Formatted numbers are especially useful when those values are used where formatting is not easily available, e.g. when creating a badge from test results.

Access JSON via file

The optional json_file option configures a file to which extended JSON information is written. Compared to "Access JSON via step outputs" above, errors and annotations contain more information than just the number of errors and annotations, respectively.

Additionally, json_test_case_results can be enabled to add the cases field to the JSON file, which provides all test results of all tests. Enabling this may greatly increase the output size of the JSON file.
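
Enabling the JSON file together with test case results could look like this (the file name and paths are illustrative):

```yaml
- name: Publish Test Results
  uses: EnricoMi/publish-unit-test-result-action@v2
  if: always()
  with:
    json_file: test-results.json
    json_test_case_results: true
    files: "test-results/**/*.xml"
```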

{
   …,
   "stats": {
      …,
      "errors": [
         {
            "file": "test-files/empty.xml",
            "message": "File is empty.",
            "line": null,
            "column": null
         }
      ],
      
   },
   …,
   "annotations": [
      {
         "path": "test/test.py",
         "start_line": 819,
         "end_line": 819,
         "annotation_level": "warning",
         "message": "test-files/junit.fail.xml",
         "title": "1 out of 3 runs failed: test_events (test.Tests)",
         "raw_details": "self = <test.Tests testMethod=test_events>\n\n                def test_events(self):\n                > self.do_test_events(3)\n\n                test.py:821:\n                _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\n                test.py:836: in do_test_events\n                self.do_test_rsh(command, 143, events=events)\n                test.py:852: in do_test_rsh\n                self.assertEqual(expected_result, res)\n                E AssertionError: 143 != 0\n            "
      }
   ],
   …,
   "cases": [
      {
         "class_name": "test.test_spark_keras.SparkKerasTests",
         "test_name": "test_batch_generator_fn",
         "states": {
            "success": [
               {
                  "result_file": "test-files/junit-xml/pytest/junit.spark.integration.1.xml",
                  "test_file": "test/test_spark_keras.py",
                  "line": 454,
                  "class_name": "test.test_spark_keras.SparkKerasTests",
                  "test_name": "test_batch_generator_fn",
                  "result": "success",
                  "time": 0.006
               },
               {
                  "result_file": "test-files/junit-xml/pytest/junit.spark.integration.2.xml",
                  "test_file": "test/test_spark_keras.py",
                  "line": 454,
                  "class_name": "test.test_spark_keras.SparkKerasTests",
                  "test_name": "test_batch_generator_fn",
                  "result": "success",
                  "time": 0.006
               }
            ]
         }
      },
   
   ],
   
}

See Create a badge from test results for an example on how to create a badge from this JSON.

Use with matrix strategy

In a scenario where your tests run multiple times in different environments (e.g. a strategy matrix), the action should run only once over all test results. For this, put the action into a separate job that depends on all your test environments. Those need to upload the test results as artifacts, which are then all downloaded by your publish job.

Example workflow YAML
name: CI

on: [push]
permissions: {}

jobs:
  build-and-test:
    name: Build and Test (Python ${{ matrix.python-version }})
    runs-on: ubuntu-latest

    strategy:
      fail-fast: false
      matrix:
        python-version: [3.6, 3.7, 3.8]

    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup Python ${{ matrix.python-version }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}

      - name: PyTest
        run: python -m pytest test --junit-xml pytest.xml

      - name: Upload Test Results
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: Test Results (Python ${{ matrix.python-version }})
          path: pytest.xml

  publish-test-results:
    name: "Publish Test Results"
    needs: build-and-test
    runs-on: ubuntu-latest
    permissions:
      checks: write

      # needed unless run with comment_mode: off
      pull-requests: write

      # only needed for private repository
      contents: read

      # only needed for private repository
      issues: read
    if: always()

    steps:
      - name: Download Artifacts
        uses: actions/download-artifact@v4
        with:
          path: artifacts

      - name: Publish Test Results
        uses: EnricoMi/publish-unit-test-result-action@v2
        with:
          files: "artifacts/**/*.xml"

Please consider supporting fork repositories and dependabot branches together with your matrix strategy.

Support fork repositories and dependabot branches

Getting test results of pull requests created by contributors from fork repositories or by Dependabot requires some additional setup. Without this, the action will fail with the "Resource not accessible by integration" error for those situations.

In this setup, your CI workflow does not need to publish test results anymore as they are always published from a separate workflow.

  1. Your CI workflow has to upload the GitHub event file and test result files.
  2. Set up an additional workflow on workflow_run events, which starts on completion of the CI workflow, downloads the event file and the test result files, and runs this action on them. This workflow publishes the test results for pull requests from fork repositories and dependabot, as well as all "ordinary" runs of your CI workflow.
Step-by-step instructions
  1. Add the following job to your CI workflow to upload the event file as an artifact:
event_file:
  name: "Event File"
  runs-on: ubuntu-latest
  steps:
  - name: Upload
    uses: actions/upload-artifact@v4
    with:
      name: Event File
      path: ${{ github.event_path }}
  2. Add the following action step to your CI workflow to upload test results as artifacts. Adjust the value of path to fit your setup:
- name: Upload Test Results
  if: always()
  uses: actions/upload-artifact@v4
  with:
    name: Test Results
    path: |
      test-results/*.xml
  3. If you run tests in a strategy matrix, make the artifact name unique for each job, e.g.:
  with:
    name: Test Results (${{ matrix.python-version }})
    path: 
  4. Add the following workflow that publishes test results. It downloads and extracts all artifacts into artifacts/ARTIFACT_NAME/, where ARTIFACT_NAME will be Upload Test Results when setup as above, or Upload Test Results (…) when run in a strategy matrix.

    It then runs the action on files matching artifacts/**/*.xml. Change the files pattern with the path to your test artifacts if it does not work for you. The publish action uses the event file of the CI workflow.

    Also adjust the value of workflows (here "CI") to fit your setup:

name: Test Results

on:
  workflow_run:
    workflows: ["CI"]
    types:
      - completed
permissions: {}

jobs:
  test-results:
    name: Test Results
    runs-on: ubuntu-latest
    if: github.event.workflow_run.conclusion != 'skipped'

    permissions:
      checks: write

      # needed unless run with comment_mode: off
      pull-requests: write

      # only needed for private repository
      contents: read

      # only needed for private repository
      issues: read

      # required by download step to access artifacts API
      actions: read

    steps:
      - name: Download and Extract Artifacts
        uses: dawidd6/action-download-artifact@e7466d1a7587ed14867642c2ca74b5bcc1e19a2d
        with:
          run_id: ${{ github.event.workflow_run.id }}
          path: artifacts

      - name: Publish Test Results
        uses: EnricoMi/publish-unit-test-result-action@v2
        with:
          commit: ${{ github.event.workflow_run.head_sha }}
          event_file: artifacts/Event File/event.json
          event_name: ${{ github.event.workflow_run.event }}
          files: "artifacts/**/*.xml"

Note: Running this action on pull_request_target events is dangerous if combined with code checkout and code execution. That event is therefore intentionally not used here!

Running with multiple event types (pull_request, push, schedule, …)

This action comments on a pull request each time it runs, for any event type. When run for more than one event type, later runs overwrite earlier pull request comments.

Note that pull_request events may produce different test results than any other event type. The pull_request event runs the workflow on a merge commit, i.e. the commit merged into the target branch. All other event types run on the commit itself.

If you want to distinguish test results from pull_request and push events, or keep the original test results of a push to master separate from subsequent schedule runs, there are two ways to prevent the action from overwriting results from other event types:

Test results per event type

Add the event name to check_name to avoid different event types overwriting each other's results:

- name: Publish Test Results
  uses: EnricoMi/publish-unit-test-result-action@v2
  if: always()
  with:
    check_name: "Test Results (${{ github.event.workflow_run.event || github.event_name }})"
    files: "test-results/**/*.xml"

Pull request comments only for pull_request events

Disabling the pull request comment mode ("off") for events other than pull_request prevents other event types from overwriting pull request comments:

- name: Publish Test Results
  uses: EnricoMi/publish-unit-test-result-action@v2
  if: always()
  with:
    # set comment_mode to "always" for pull_request event, set to "off" for all other event types
    comment_mode: ${{ (github.event.workflow_run.event == 'pull_request' || github.event_name == 'pull_request') && 'always' || 'off' }}
    files: "test-results/**/*.xml"

Create a badge from test results

Here is an example of how to use the JSON output of this action to create a badge like this: Test Results

Example workflow YAML
steps:
- 
- name: Publish Test Results
  uses: EnricoMi/publish-unit-test-result-action@v2
  id: test-results
  if: always()
  with:
    files: "test-results/**/*.xml"

- name: Set badge color
  shell: bash
  run: |
    case ${{ fromJSON( steps.test-results.outputs.json ).conclusion }} in
      success)
        echo "BADGE_COLOR=31c653" >> $GITHUB_ENV
        ;;
      failure)
        echo "BADGE_COLOR=800000" >> $GITHUB_ENV
        ;;
      neutral)
        echo "BADGE_COLOR=696969" >> $GITHUB_ENV
        ;;
    esac

- name: Create badge
  uses: emibcn/badge-action@808173dd03e2f30c980d03ee49e181626088eee8
  with:
    label: Tests
    status: '${{ fromJSON( steps.test-results.outputs.json ).formatted.stats.tests }} tests, ${{ fromJSON( steps.test-results.outputs.json ).formatted.stats.runs }} runs: ${{ fromJSON( steps.test-results.outputs.json ).conclusion }}'
    color: ${{ env.BADGE_COLOR }}
    path: badge.svg

- name: Upload badge to Gist
  # Upload only for master branch
  if: >
    github.event_name == 'workflow_run' && github.event.workflow_run.head_branch == 'master' ||
    github.event_name != 'workflow_run' && github.ref == 'refs/heads/master'
  uses: andymckay/append-gist-action@6e8d64427fe47cbacf4ab6b890411f1d67c07f3e
  with:
    token: ${{ secrets.GIST_TOKEN }}
    gistURL: https://gist.githubusercontent.com/{user}/{id}
    file: badge.svg

You have to create a personal access token (PAT) with the gist permission only. Add it to your GitHub Actions secrets; in the above example with the secret name GIST_TOKEN.

Set the gistURL to the Gist that you want to write the badge file to, in the form of https://gist.githubusercontent.com/{user}/{id}.

You can then use the badge via this URL: https://gist.githubusercontent.com/{user}/{id}/raw/badge.svg
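
The badge can then be embedded, for example in a README ({user} and {id} are the placeholders from above):

```markdown
![Test Results](https://gist.githubusercontent.com/{user}/{id}/raw/badge.svg)
```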

Running with absolute paths

This action is known to work best with relative paths (e.g. test-results/**/*.xml); most absolute paths (e.g. /tmp/test-results/**/*.xml) require the non-Docker variant of this action:

uses: EnricoMi/publish-unit-test-result-action/linux@v2
uses: EnricoMi/publish-unit-test-result-action/macos@v2
uses: EnricoMi/publish-unit-test-result-action/windows@v2

If you have to use absolute paths with the Docker variant of this action (uses: EnricoMi/publish-unit-test-result-action@v2), copy the files to a relative path first and then use that relative path:

- name: Copy Test Results
  if: always()
  run: |
    cp -Lpr /tmp/test-results test-results
  shell: bash

- name: Publish Test Results
  uses: EnricoMi/publish-unit-test-result-action@v2
  if: always()
  with:
    files: |
      test-results/**/*.xml
      test-results/**/*.trx
      test-results/**/*.json

Using the Docker variant of this action is recommended as it starts up much quicker.

Running as a non-Docker action

Running this action as below allows running it on action runners that do not provide Docker:

uses: EnricoMi/publish-unit-test-result-action/linux@v2
uses: EnricoMi/publish-unit-test-result-action/macos@v2
uses: EnricoMi/publish-unit-test-result-action/windows@v2

These actions, however, require a Python3 environment to be set up on the action runner. All GitHub-hosted runners (Ubuntu, Windows Server and macOS) provide a suitable Python3 environment out of the box.

Self-hosted runners may require setting up a Python environment first:

- name: Setup Python
  uses: actions/setup-python@v5
  with:
    python-version: 3.8

Start-up of the action is faster when the virtualenv or venv package is installed.

Running as a composite action

Running this action via:

uses: EnricoMi/publish-unit-test-result-action/composite@v2

is deprecated; please use the action appropriate for your operating system and shell:

  • Linux (Bash shell): uses: EnricoMi/publish-unit-test-result-action/linux@v2
  • macOS (Bash shell): uses: EnricoMi/publish-unit-test-result-action/macos@v2
  • Windows (PowerShell): uses: EnricoMi/publish-unit-test-result-action/windows@v2
  • Windows (Bash shell): uses: EnricoMi/publish-unit-test-result-action/windows/bash@v2

These are the non-Docker variants of this action. For details, see section "Running as a non-Docker action" above.

publish-unit-test-result-action's People

Contributors

adriandsg, airadier, ali-raza-arain, audricschiltknecht, cpdeethree, danxmoran, dependabot[bot], efaulhaber, enricomi, ilent2, jgiannuzzi, justanotherdev, ktasper, lachaib, mas-wtag, mathroule, mightyguava, mpv, ofek, pavel-spacil, rafikfarhad, sorekz, szepeviktor, tomerfi, turnrdev


publish-unit-test-result-action's Issues

Make action reduce number of comments on PR

On #28 (comment) we have discussed that the action could reduce the number of comments it creates by reusing its latest comment until other users add comments. That way, the test results stay at the end of the PR conversation, but not every commit causes a new comment. This is an alternative between commenting only once (#38) and the current behaviour of always commenting. It could be combined with hiding older comments (#36).

Distinguish between golang test and subtests

Golang's table-driven tests, used together with go-junit-report, generate JUnit files such as this:

	<testsuite tests="28" failures="0" time="0.031" name="github.com/Checkmarx/kics/pkg/engine">
		<properties>
			<property name="go.version" value="go version go1.15.6 linux/amd64"></property>
			<property name="coverage.statements.pct" value="35.1"></property>
		</properties>
		<testcase classname="engine" name="TestMapKeyToString" time="0.000"></testcase>
		<testcase classname="engine" name="TestMapKeyToString/mapKeyToString-0" time="0.000"></testcase>
		<testcase classname="engine" name="TestMapKeyToString/mapKeyToString-1" time="0.000"></testcase>
		<testcase classname="engine" name="TestMapKeyToString/mapKeyToString-2" time="0.000"></testcase>
		<testcase classname="engine" name="TestMapKeyToString/mapKeyToString-3" time="0.000"></testcase>
		<testcase classname="engine" name="TestMapKeyToString/mapKeyToString-4" time="0.000"></testcase>
		<testcase classname="engine" name="TestComputeSimilarityID" time="0.000"></testcase>
		<testcase classname="engine" name="TestComputeSimilarityID/Changed_file_name" time="0.000"></testcase>
		<testcase classname="engine" name="TestComputeSimilarityID/Changed_queryID" time="0.000"></testcase>
		<testcase classname="engine" name="TestComputeSimilarityID/Changed_searchKey" time="0.000"></testcase>
		<testcase classname="engine" name="TestComputeSimilarityID/Changed_filepath_dir" time="0.000"></testcase>
		<testcase classname="engine" name="TestComputeSimilarityID/No_changes" time="0.000"></testcase>
		<testcase classname="engine" name="TestComputeSimilarityID/Relative_directory_resolution" time="0.000"></testcase>
		<testcase classname="engine" name="TestComputeSimilarityID/No_changes,_empty_searchValue" time="0.000"></testcase>
		<testcase classname="engine" name="TestStandardizeFilePathEquals" time="0.000"></testcase>
		<testcase classname="engine" name="TestStandardizeFilePathEquals/Clean_input" time="0.000"></testcase>
		<testcase classname="engine" name="TestStandardizeFilePathEquals/Cleanup_double_slashes" time="0.000"></testcase>

Currently, the comment shows a very high number of unit tests, because of the subtests:
[screenshot]

It would be great to have a configuration option to differentiate tests (e.g. TestMapKeyToString) from subtests (e.g. TestMapKeyToString/mapKeyToString-0) and count them separately in the PR comments.
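
As an illustration, the distinction could be drawn from the Go subtest naming convention, where go-junit-report joins a test and its subtest with a slash. A minimal sketch, not part of this action:

```python
def split_tests_and_subtests(case_names):
    """Partition Go test case names: go-junit-report reports subtests
    as 'TestName/subtest-name', so a '/' marks a subtest."""
    tests = [name for name in case_names if '/' not in name]
    subtests = [name for name in case_names if '/' in name]
    return tests, subtests
```

With the XML above, the 28 test cases would be split into top-level tests and their table-driven subtests, which could then be reported as two separate counts.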

Why can this action not be executed on the pull_request event?

I would like to execute this action only on the pull_request event. Why do you bypass this event?

2020-09-23 17:48:11 +0000 - publish-unit-test-results - DEBUG - action triggered by 'pull_request' event
2020-09-23 17:48:11 +0000 - publish-unit-test-results - WARNING - event '{}' is not supported

Unexpected input(s) 'commit'

I'm running v1.6 and provided a git sha as shown in the README.

However, I get the following warning:

Warning: Unexpected input(s) 'commit', valid inputs are ['entryPoint', 'args', 'github_token', 'check_name', 'files', 'report_individual_runs', 'deduplicate_classes_by_file_name', 'hide_comments', 'comment_on_pr', 'log_level']

Limit check run fields size

The GitHub API limits the size of some JSON fields in API calls, especially when creating check runs:
https://developer.github.com/v3/checks/runs/#annotations-object-1

Annotation:

  • message: 64 KB
  • title: 255 characters
  • raw_details: 64 KB

Limit these fields by abbreviating the strings with an ellipsis in the middle of the string. Make sure the title cannot get larger than 255 characters without abbreviating it.

Limiting a string by bytes is tricky because some characters use multiple bytes. The easiest approach would be to limit it to 16k characters, which should be safe in terms of bytes in any situation.
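
A minimal sketch of such middle-ellipsis abbreviation, assuming a plain character limit as suggested above (the function is illustrative, not the action's actual code):

```python
def abbreviate(text: str, max_chars: int) -> str:
    """Shorten text to at most max_chars characters by replacing
    the middle of the string with an ellipsis character."""
    if text is None or len(text) <= max_chars:
        return text
    # reserve one character for the ellipsis, keep start and end
    head = max_chars // 2
    tail = max_chars - head - 1
    if tail == 0:
        return text[:head] + '…'
    return text[:head] + '…' + text[-tail:]
```

A byte-exact limit would additionally have to encode the result and trim further, which is why a conservative character limit is simpler.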

Comment hard codes "Unit Test Results" instead of $check_name

See

pull.create_issue_comment('## Unit Test Results\n{}'.format(get_long_summary_md(stats_with_delta)))

I was expecting to find the defined check_name used here.

The result is if you have more than one instance of this action in use (e.g. for "Integration tests" and "Unit tests"), they both get the same text in the comment and you can't easily tell which is which.

I'm happy to provide a PR for this if it is accepted.

Make cross for failed tests red

For me (Windows 10, Firefox and Chrome) the cross ✖️ for failed tests is grey, not red like in the screenshots.
I feel like failed tests get lost this way and should rather be bright red. You could use this Unicode character instead: ❌

Support scheduled action event

When trying to publish results from a scheduled execution, we get an error like:

Traceback (most recent call last):
  File "/action/publish_unit_test_results.py", line 845, in <module>
    commit = get_var('COMMIT') or get_commit_sha(event, event_name)
  File "/action/publish_unit_test_results.py", line 812, in get_commit_sha
    raise RuntimeError("event '{}' is not supported".format(event))
RuntimeError: event '{'schedule': '0 * * * *'}' is not supported

It should be easy to support, just by adding a new entry to:

def get_commit_sha(event: dict, event_name: str):
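
A hypothetical sketch of that addition; the real function lives in publish_unit_test_results.py and its event handling is simplified here:

```python
import os

def get_commit_sha(event: dict, event_name: str) -> str:
    """Resolve the commit SHA of the triggering event (sketch).
    Scheduled runs carry no commit in the event payload, so fall
    back to GITHUB_SHA, which GitHub sets for every workflow run."""
    if event_name == 'pull_request':
        return event['pull_request']['head']['sha']
    if event_name in ('push', 'schedule'):
        return os.environ['GITHUB_SHA']
    raise RuntimeError("event '{}' is not supported".format(event))
```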

Support absolute path in `files` setting

I would like to collect test results somewhere in /tmp (so that I don't modify the checked out source tree that is the current working directory). Unfortunately, this fails with the following error:

Traceback (most recent call last):
  File "/action/publish_unit_test_results.py", line 202, in <module>
    main(settings)
  File "/action/publish_unit_test_results.py", line 52, in main
    files = [str(file) for file in pathlib.Path().glob(settings.files_glob)]
  File "/action/publish_unit_test_results.py", line 52, in <listcomp>
    files = [str(file) for file in pathlib.Path().glob(settings.files_glob)]
  File "/usr/local/lib/python3.6/pathlib.py", line 1098, in glob
    raise NotImplementedError("Non-relative patterns are unsupported")
NotImplementedError: Non-relative patterns are unsupported

Here is my configuration:

      - name: Publish Unit Test Results
        uses: EnricoMi/[email protected]
        # run even if tests failed
        if: always()
        with:
          files: /tmp/test-reports/**/*.xml
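
The limitation comes from pathlib.Path().glob(), which rejects absolute patterns; the stdlib glob module has no such restriction. A hedged sketch of a possible fix (the files_glob name is taken from the traceback above):

```python
import glob

def resolve_files(files_glob: str) -> list:
    """Expand a glob pattern into matching file paths. Unlike
    pathlib.Path().glob(), glob.glob() accepts absolute patterns
    such as /tmp/test-reports/**/*.xml; recursive=True lets '**'
    match nested directories."""
    return glob.glob(files_glob, recursive=True)
```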

xUnit tests outputted in JUnit format display only as runs and not tests

Is this expected behaviour or am I potentially doing something wrong?

  - name: Publish Unit Test Results
    uses: EnricoMi/publish-unit-test-result-action@v1
    if: always()
    with:
      github_token: ${{ secrets.GITHUB_TOKEN }}
      check_name: Unit Test Results
      check_run_annotations: all tests, skipped tests
      comment_on_pr: true
      test_changes_limit: 5
      hide_comments: all but latest
      files: Release/net472/virtual_out.xml
      report_individual_runs: true

Parsing fails if test case result is empty

Parsing fails when the test case result is missing attributes or content

A Java JUnit 5 test generates the following XML test case, which fails parsing and breaks publishing.

<testcase name="test" classname="test.Test" time="0.042">
  <skipped/>
</testcase>

fails with the message

TypeError: argument of type 'NoneType' is not iterable

The culprit seems to be missing null checks here:

message=unescape(case.result.message) if case.result else None,
content=unescape(case.result._elem.text) if case.result else None,
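
A minimal sketch of the missing null checks, guarding the value itself rather than only case.result (unescape here is assumed to be xml.sax.saxutils.unescape):

```python
from xml.sax.saxutils import unescape

def optional_unescape(value):
    """Unescape XML entities while tolerating missing values:
    a bare <skipped/> element has neither a message attribute
    nor text content, so value may be None."""
    return unescape(value) if value is not None else None

# hypothetical use at the call sites quoted above:
# message = optional_unescape(case.result.message if case.result else None)
# content = optional_unescape(case.result._elem.text if case.result else None)
```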

Combining trigger pull_request and workflow_run

Hi, as per GitHub's recommendations it is dangerous to use pull_request_target; they recommend using pull_request together with workflow_run.

I have tried to do this with this GitHub Action here and here are the logs:

/usr/bin/docker run --name e400d0fdeb040d443bb87865509f8e811d_620554 --label 5588e4 --workdir /github/workspace --rm -e INPUT_CHECK_NAME -e INPUT_GITHUB_TOKEN -e INPUT_FILES -e INPUT_REPORT_INDIVIDUAL_RUNS -e INPUT_CHECK_RUN_ANNOTATIONS -e INPUT_COMMIT -e INPUT_COMMENT_TITLE -e INPUT_DEDUPLICATE_CLASSES_BY_FILE_NAME -e INPUT_HIDE_COMMENTS -e INPUT_COMMENT_ON_PR -e INPUT_TEST_CHANGES_LIMIT -e INPUT_CHECK_RUN_ANNOTATIONS_BRANCH -e INPUT_LOG_LEVEL -e HOME -e GITHUB_JOB -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY -e GITHUB_REPOSITORY_OWNER -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RETENTION_DAYS -e GITHUB_ACTOR -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME -e GITHUB_SERVER_URL -e GITHUB_API_URL -e GITHUB_GRAPHQL_URL -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e GITHUB_ACTION_REPOSITORY -e GITHUB_ACTION_REF -e GITHUB_PATH -e GITHUB_ENV -e RUNNER_OS -e RUNNER_TOOL_CACHE -e RUNNER_TEMP -e RUNNER_WORKSPACE -e ACTIONS_RUNTIME_URL -e ACTIONS_RUNTIME_TOKEN -e ACTIONS_CACHE_URL -e GITHUB_ACTIONS=true -e CI=true -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work/_temp/_runner_file_commands":"/github/file_commands" -v "/home/runner/work/nHapi/nHapi":"/github/workspace" 5588e4:00d0fdeb040d443bb87865509f8e811d
2021-02-10 20:52:10 +0000 - publish-unit-test-results -  INFO - reading **/*.xml
2021-02-10 20:52:10 +0000 - publish.publisher -  INFO - publishing success results for commit b1e94751b354cc4fbd75e4bb282eca419f6555d4
2021-02-10 20:52:10 +0000 - publish.publisher -  INFO - creating check
2021-02-10 20:52:11 +0000 - publish.publisher -  INFO - there is no pull request for commit b1e94751b354cc4fbd75e4bb282eca419f6555d4

What am I missing to ensure it will comment on the forked PR?

Provide link to failing test and line in annotations

Question #32 made me think of linking to the failing test and line from the annotations. Given the commit SHA, test file and line, the action could generate a link like this and add it to the respective annotation:

https://github.com/EnricoMi/publish-unit-test-result-action/blob/de7f7f0c5f7694846ce69e3384f2e5a03253c141/test/test_publish.py#L35

Such a link gets nicely rendered by GitHub:

self.assertEqual(get_formatted_digits(None), (3, 0))

Providing a few lines before and after gives a nice context:

def test_get_formatted_digits(self):
self.assertEqual(get_formatted_digits(None), (3, 0))
self.assertEqual(get_formatted_digits(None, 1), (3, 0))
self.assertEqual(get_formatted_digits(None, 123), (3, 0))

Let's hope that also holds for annotations. It probably won't, because annotations do not support markdown.

Resolving the test file path from the test result file will be challenging. This might work best with some configuration, such as how many directories to remove from the path or which path to prepend.
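
A hypothetical sketch of building such a link, with the suggested configuration for trimming or prepending path components (all names and defaults are illustrative, not part of the action):

```python
def annotation_link(repo: str, sha: str, test_file: str, line: int,
                    strip_dirs: int = 0, prepend: str = '') -> str:
    """Build a GitHub blob URL pointing at a failing test line.
    strip_dirs drops leading path components reported by the test
    framework; prepend adds the prefix used in the repository."""
    parts = test_file.split('/')[strip_dirs:]
    if prepend:
        parts = [prepend.strip('/')] + parts
    path = '/'.join(parts)
    return f'https://github.com/{repo}/blob/{sha}/{path}#L{line}'
```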

Fails if run by schedule

https://github.com/ned14/llfio/runs/1477349816?check_suite_focus=true

Run EnricoMi/[email protected]
  with:
    check_name: Unit Test Results
    github_token: ***
    files: **/merged_junit_results.xml
    log_level: INFO
/usr/bin/docker run --name ac1af7e4c4db2a986807a5c45326c_1a9922 --label 179394 --workdir /github/workspace --rm -e INPUT_CHECK_NAME -e INPUT_GITHUB_TOKEN -e INPUT_FILES -e INPUT_REPORT_INDIVIDUAL_RUNS -e INPUT_DEDUPLICATE_CLASSES_BY_FILE_NAME -e INPUT_LOG_LEVEL -e HOME -e GITHUB_JOB -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY -e GITHUB_REPOSITORY_OWNER -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RETENTION_DAYS -e GITHUB_ACTOR -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME -e GITHUB_SERVER_URL -e GITHUB_API_URL -e GITHUB_GRAPHQL_URL -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e GITHUB_ACTION_REPOSITORY -e GITHUB_ACTION_REF -e GITHUB_PATH -e GITHUB_ENV -e RUNNER_OS -e RUNNER_TOOL_CACHE -e RUNNER_TEMP -e RUNNER_WORKSPACE -e ACTIONS_RUNTIME_URL -e ACTIONS_RUNTIME_TOKEN -e ACTIONS_CACHE_URL -e GITHUB_ACTIONS=true -e CI=true -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work/_temp/_runner_file_commands":"/github/file_commands" -v "/home/runner/work/llfio/llfio":"/github/workspace" 179394:727ac1af7e4c4db2a986807a5c45326c
Traceback (most recent call last):
  File "/action/publish_unit_test_results.py", line 793, in <module>
    commit = get_var('COMMIT') or get_commit_sha(event, event_name)
  File "/action/publish_unit_test_results.py", line 765, in get_commit_sha
    raise RuntimeError("event '{}' is not supported".format(event))
RuntimeError: event '{'schedule': '0 0 1 * *'}' is not supported

Hide older comments by default

In #28 we have discussed that it would be great for the action to hide earlier comments by default, so that only the latest comment created by the action is visible. This avoids filling up the PR with old test results.

Deleting test results leaves a message that a comment has been removed, which looks quite similar to a hidden comment. With hidden comments, users still have a chance to look at older test results.

The action currently hides all comments that refer to commits that are no longer part of the branch or PR due to a commit history rewrite. This could be extended to all but the latest comment.

This default behaviour should be controlled by configuration.

JUnit XSD doesn't require a failure body => GH Checks API => "For 'properties/raw_details', nil is not a string."

It would appear the body of the test failure is being supplied to GitHub's Checks API via a required property. However, a failure body may not always be present.

2020-11-03 17:36:53 +0000 - publish-unit-test-results -  INFO - creating check
Traceback (most recent call last):
  File "/action/publish_unit_test_results.py", line 801, in <module>
    main(token, event, repo, commit, files, check_name, report_individual_runs, dedup_classes_by_file_name)
  File "/action/publish_unit_test_results.py", line 754, in main
    publish(token, event, repo, commit, stats, results['case_results'], check_name, report_individual_runs)
  File "/action/publish_unit_test_results.py", line 722, in publish
    publish_check(stats, cases)
  File "/action/publish_unit_test_results.py", line 613, in publish_check
    repo.create_check_run(name=check_name, head_sha=commit_sha, status='completed', conclusion='success', output=output)
  File "/action/githubext/Repository.py", line 78, in create_check_run
    headers={'Accept': 'application/vnd.github.antiope-preview+json'},
  File "/usr/local/lib/python3.6/site-packages/github/Requester.py", line 319, in requestJsonAndCheck
    verb, url, parameters, headers, input, self.__customConnection(url)
  File "/usr/local/lib/python3.6/site-packages/github/Requester.py", line 342, in __check
    raise self.__createException(status, responseHeaders, output)
github.GithubException.GithubException: 422 {"message": "Invalid request.\n\nFor 'properties/raw_details', nil is not a string.\nFor 'properties/raw_details', nil is not a string.", "documentation_url": "https://docs.github.com/rest/reference/checks#create-a-check-run"}

Status message on Github Action Summary Page

Hey,
On the summary page of failed GitHub workflows, in the "Annotations" section, the failed tests are displayed as warnings for me.
Furthermore, the annotations always point to line 0 in the failed file. (See image below.)
[screenshot]

Do you know what the reason for this is?
Best regards
Anton

Make action comment on Pull Requests once and edit

In #28 (comment) we have discussed that the action should allow configuring it to comment on PRs only once and later edit that comment with the latest test results. Then, the test results stay at the top of the PR (always in the same position) and the PR does not fill up with more and more test result comments.

Make PR comments more customizable

I really like the feature of having the test results in PR comments. However, I think these comments can get out of hand on very active PRs.

I would really like to see an option to make the bot comment only once and edit that comment for every new test run, like the codecov bot does. This way, active PRs wouldn't be full of bot comments.

PR comments should probably also be optional, and one should be able to turn them off in the GitHub workflow. I couldn't find anything like this in the readme.

Provide optional monospace PR comment layout

In #26 we have discussed how a monospace comment could look. Here is a suggestion:

521 suites (-60) in 4h 24m 19s (-5m 23s)
508 tests   (+1)
482 success (+1)
 26 skipped (±0)
  0 failed  (±0)
9 876 runs    (-1 190)
7 873 success (-  896)
2 003 skipped (-  294)
    0 failed  (±    0)
results for commit e826078 ± comparison against base commit 3fefb1a

and without runs:

521 suites (-60) in 4h 24m 19s (-5m 23s)
508 tests   (+1)
482 success (+1)
 26 skipped (±0)
  0 failed  (±0)
results for commit e826078
± comparison against base commit 3fefb1a

Implement that as an optional comment layout.

Random failure with `github.GithubException.GithubException: 500 null`

I recently observed this random failure. It seems completely unrelated to the actual commit that the workflow was triggered for, as the workflow had previously executed without an error for the same commit.

Not sure where the actual error is, but here's the stacktrace:

Run EnricoMi/publish-unit-test-result-action@v1
  with:
    files: firmware/tests/build/bin/**/*.xml
    github_token: ***
    check_name: Unit Test Results
    fail_on: test failures
    hide_comments: all but latest
    comment_on_pr: true
    pull_request_build: merge
    check_run_annotations: all tests, skipped tests
/usr/bin/docker run --name ghcrioenricomipublishunittestresultactionv111_09e53a --label 5588e4 --workdir /github/workspace --rm -e INPUT_FILES -e INPUT_GITHUB_TOKEN -e INPUT_COMMIT -e INPUT_CHECK_NAME -e INPUT_COMMENT_TITLE -e INPUT_FAIL_ON -e INPUT_REPORT_INDIVIDUAL_RUNS -e INPUT_DEDUPLICATE_CLASSES_BY_FILE_NAME -e INPUT_HIDE_COMMENTS -e INPUT_COMMENT_ON_PR -e INPUT_PULL_REQUEST_BUILD -e INPUT_TEST_CHANGES_LIMIT -e INPUT_CHECK_RUN_ANNOTATIONS -e INPUT_CHECK_RUN_ANNOTATIONS_BRANCH -e HOME -e GITHUB_JOB -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY -e GITHUB_REPOSITORY_OWNER -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RETENTION_DAYS -e GITHUB_ACTOR -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME -e GITHUB_SERVER_URL -e GITHUB_API_URL -e GITHUB_GRAPHQL_URL -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e GITHUB_ACTION_REPOSITORY -e GITHUB_ACTION_REF -e GITHUB_PATH -e GITHUB_ENV -e RUNNER_OS -e RUNNER_TOOL_CACHE -e RUNNER_TEMP -e RUNNER_WORKSPACE -e ACTIONS_RUNTIME_URL -e ACTIONS_RUNTIME_TOKEN -e ACTIONS_CACHE_URL -e GITHUB_ACTIONS=true -e CI=true -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work/_temp/_runner_file_commands":"/github/file_commands" -v "/home/runner/work/Wunderkiste/Wunderkiste":"/github/workspace" ghcr.io/enricomi/publish-unit-test-result-action:v1.11
2021-04-10 07:50:41 +0000 - publish-unit-test-results -  INFO - reading firmware/tests/build/bin/**/*.xml
2021-04-10 07:50:41 +0000 - publish.publisher -  INFO - publishing success results for commit 6cee9c9b0d9b3fd0f7758337e79225cf24db0f21
2021-04-10 07:50:41 +0000 - publish.publisher -  INFO - creating check
2021-04-10 07:50:43 +0000 - publish.publisher -  INFO - creating comment
Traceback (most recent call last):
  File "/action/publish_unit_test_results.py", line 194, in <module>
    main(settings)
  File "/action/publish_unit_test_results.py", line 68, in main
    Publisher(settings, gh, gha).publish(stats, results.case_results, conclusion)
  File "/action/publish/publisher.py", line 55, in publish
    self.publish_comment(self._settings.comment_title, stats, pull, check_run, cases)
  File "/action/publish/publisher.py", line 244, in publish_comment
    pull_request.create_issue_comment(f'## {title}\n{summary}')
  File "/usr/local/lib/python3.6/site-packages/github/PullRequest.py", line 457, in create_issue_comment
    "POST", self.issue_url + "/comments", input=post_parameters
  File "/usr/local/lib/python3.6/site-packages/github/Requester.py", line 317, in requestJsonAndCheck
    verb, url, parameters, headers, input, self.__customConnection(url)
  File "/usr/local/lib/python3.6/site-packages/github/Requester.py", line 340, in __check
    raise self.__createException(status, responseHeaders, output)
github.GithubException.GithubException: 500 null

Here's a link to the specific workflow run

Any ideas what the cause may be?

Publishes to random workflow

When there are multiple GitHub workflows for one commit, the action publishes the commit status to a random workflow. It should be the workflow that contains the action. This is because GitHub's API does not allow specifying which workflow to add the check run to.

Neither the REST nor the GraphQL API allows specifying the workflow:

The github.run_id workflow variable provides the id of the workflow run in which the action runs.

Update 2022-05-14:
With GitHub introducing the "job summary", the action now additionally publishes the results on the summary page of the workflow that runs the publish action. From there, a link to the check annotations is provided if failures exist: https://github.com/EnricoMi/publish-unit-test-result-action/tree/v1.35#github-actions-job-summary. However, this is not a proper fix for this issue.

Only GitHub can fix this issue. Please see this discussion: https://github.com/orgs/community/discussions/24616

Checks can be disabled with check_run: false.

Tests results are sometimes published into wrong job

We have a few workflows, and one of them tests code and publishes test results. Sometimes, however, this action publishes test results into the wrong job (see the screenshot: test results were published into a different workflow, the one that was building documentation).

[screenshot]

Add annotations to changed files

Hey, I really like your library!
Is there currently a plan to implement annotations so that they can be seen in the "Files changed" tab?
Best regards
Anton

Error while hiding comments in PullReq on GitHub Enterprise

Using GitHub Enterprise Server 3.0.0.

Workflow file (relevant parts):

name: Java CI with Maven

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:
    runs-on: [ ... ]
    steps:
    - uses: actions/checkout@v2

    - name: Set up JDK 11
      uses: actions/setup-java@v1
      with:
        java-version: 11

    - name: Build with Maven
      run: ./mvnw -B verify --file pom.xml

    - name: Publish Unit Test Results
      uses: EnricoMi/[email protected]
      if: always()
      with:
        files: target/surefire-reports/**/*.xml
      env:
        GITHUB_API_URL: https://<our.fqdn.here>/api/v3/

Error received:

Run EnricoMi/[email protected]
  with:
    files: target/surefire-reports/**/*.xml
    github_token: ***
    check_name: Unit Test Results
    hide_comments: all but latest
    comment_on_pr: true
    pull_request_build: merge
    check_run_annotations: all tests, skipped tests
    log_level: INFO
  env:
    JAVA_HOME_11.0.10_x64: /home/runner/_work/_tool/jdk/11.0.10/x64
    JAVA_HOME: /home/runner/_work/_tool/jdk/11.0.10/x64
    JAVA_HOME_11_0_10_X64: /home/runner/_work/_tool/jdk/11.0.10/x64
    GITHUB_API_URL: https://<our.fqdn.here>/api/v3/
/usr/bin/docker run --name b1cbc511eb0319cad24c82bd9c7f0910960709_aaa4bd --label b1cbc5 --workdir /github/workspace --rm -e JAVA_HOME_11.0.10_x64 -e JAVA_HOME -e JAVA_HOME_11_0_10_X64 -e GITHUB_API_URL -e INPUT_FILES -e INPUT_GITHUB_TOKEN -e INPUT_COMMIT -e INPUT_CHECK_NAME -e INPUT_COMMENT_TITLE -e INPUT_REPORT_INDIVIDUAL_RUNS -e INPUT_DEDUPLICATE_CLASSES_BY_FILE_NAME -e INPUT_HIDE_COMMENTS -e INPUT_COMMENT_ON_PR -e INPUT_PULL_REQUEST_BUILD -e INPUT_TEST_CHANGES_LIMIT -e INPUT_CHECK_RUN_ANNOTATIONS -e INPUT_CHECK_RUN_ANNOTATIONS_BRANCH -e INPUT_LOG_LEVEL -e HOME -e GITHUB_JOB -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY -e GITHUB_REPOSITORY_OWNER -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RETENTION_DAYS -e GITHUB_ACTOR -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME -e GITHUB_SERVER_URL -e GITHUB_GRAPHQL_URL -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e GITHUB_ACTION_REPOSITORY -e GITHUB_ACTION_REF -e GITHUB_PATH -e GITHUB_ENV -e RUNNER_OS -e RUNNER_TOOL_CACHE -e RUNNER_TEMP -e RUNNER_WORKSPACE -e ACTIONS_RUNTIME_URL -e ACTIONS_RUNTIME_TOKEN -e GITHUB_ACTIONS=true -e CI=true -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/_work/_temp/_github_home":"/github/home" -v "/home/runner/_work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/_work/_temp/_runner_file_commands":"/github/file_commands" -v "/home/runner/_work/<workspace.folder.redacted>/<workspace.folder.redacted>":"/github/workspace" b1cbc5:11eb0319cad24c82bd9c7f0910960709
2021-03-09 12:16:57 +0000 - publish-unit-test-results -  INFO - reading target/surefire-reports/**/*.xml
2021-03-09 12:16:57 +0000 - publish.publisher -  INFO - publishing success results for commit 421157be393553a7c6b3bde525dce2bb41305ba1
2021-03-09 12:16:57 +0000 - publish.publisher -  INFO - creating check
2021-03-09 12:16:58 +0000 - publish.publisher -  INFO - creating comment
Traceback (most recent call last):
  File "/action/publish_unit_test_results.py", line 181, in <module>
    main(settings)
  File "/action/publish_unit_test_results.py", line 67, in main
    Publisher(settings, gh, gha).publish(stats, results.case_results, conclusion)
  File "/action/publish/publisher.py", line 56, in publish
    self.hide_all_but_latest_comments(pull)
  File "/action/publish/publisher.py", line 354, in hide_all_but_latest_comments
    comments = self.get_pull_request_comments(pull)
  File "/action/publish/publisher.py", line 284, in get_pull_request_comments
    "POST", f'{self._settings.api_url}/graphql', input=query
  File "/usr/local/lib/python3.6/site-packages/github/Requester.py", line 317, in requestJsonAndCheck
    verb, url, parameters, headers, input, self.__customConnection(url)
  File "/usr/local/lib/python3.6/site-packages/github/Requester.py", line 340, in __check
    raise self.__createException(status, responseHeaders, output)
github.GithubException.UnknownObjectException: 404 {"message": "Not Found", "documentation_url": "https://docs.github.com/enterprise/3.0/rest"}

Note that the test results themselves are visible within the GitHub Actions output; this part works correctly.

Is GitHub Enterprise supported?
Looking at the code, I'm not sure whether GITHUB_API_URL is picked up from the environment variables (not a Python expert).
The same error occurs without GITHUB_API_URL set in the step.
Or maybe this is an issue with the action itself?

Make action work on all OS

This action is only supported on Linux since it is container-based. Would it be possible to support multiple operating systems for use with self-hosted runners, such as internal Macs?

Testing dependabot branch fails with "Resource not accessible by integration"

When running my workflow on the dependabot branch dependabot/npm_and_yarn/typescript-eslint/parser-4.18.0, I am receiving the following error:

2021-03-16T14:49:50.9184853Z Traceback (most recent call last):
2021-03-16T14:49:50.9185525Z   File "/action/publish_unit_test_results.py", line 89, in <module>
2021-03-16T14:49:50.9186069Z     main(settings)
2021-03-16T14:49:50.9186877Z   File "/action/publish_unit_test_results.py", line 32, in main
2021-03-16T14:49:50.9187889Z     Publisher(settings, gh).publish(stats, results.case_results)
2021-03-16T14:49:50.9188632Z   File "/action/publish/publisher.py", line 59, in publish
2021-03-16T14:49:50.9189519Z     check_run = self.publish_check(stats, cases)
2021-03-16T14:49:50.9190217Z   File "/action/publish/publisher.py", line 163, in publish_check
2021-03-16T14:49:50.9190781Z     output=output)
2021-03-16T14:49:50.9191375Z   File "/action/githubext/Repository.py", line 78, in create_check_run
2021-03-16T14:49:50.9192995Z     headers={'Accept': 'application/vnd.github.antiope-preview+json'},
2021-03-16T14:49:50.9194475Z   File "/usr/local/lib/python3.6/site-packages/github/Requester.py", line 319, in requestJsonAndCheck
2021-03-16T14:49:50.9195678Z     verb, url, parameters, headers, input, self.__customConnection(url)
2021-03-16T14:49:50.9196789Z   File "/usr/local/lib/python3.6/site-packages/github/Requester.py", line 342, in __check
2021-03-16T14:49:50.9197860Z     raise self.__createException(status, responseHeaders, output)
2021-03-16T14:49:50.9200050Z github.GithubException.GithubException: 403 {"message": "Resource not accessible by integration", "documentation_url": "https://docs.github.com/rest/reference/checks#create-a-check-run"}

The full logs can be seen here. The tests are XML-formatted Cobertura test results; I have attached the test files used for this workflow below: Unit Test Results.zip

Any help would be appreciated.

Status message on GitHub Actions summary page not correct

Hello, I am using Karma tests for my project with the JUnit reporter to create .xml reports.
When I parse this XML with this GitHub Action, everything looks fine except the summary page: for some reason, the same name is shown for every failed test.
(screenshot attached)

I've found this topic: #34, but didn't find any way to solve the problem on the Karma side :(

I've also tried some other actions (scacap/action-surefire-report@v1, ashley-taylor/[email protected]), and all of them manage to render this summary page correctly.

So the problem probably lies somewhere in this action.

Would appreciate any help.

Thanks

Move PR comment details into collapsible section

GitHub Markdown supports collapsible sections: https://gist.github.com/pierrejoubert73/902cc94d79424356a8d20be2b382e1ab.

The lists of removed and skipped tests should go in there. With that, all four available lists could be added and the list limits could be increased, since comments do not get cluttered when the lists are collapsed initially.

The Markdown in the summary only works when separated from the `<summary>` tag by newlines, so it is better to use HTML markup.

Example:

This pull request removes 4 and adds 21 tests. Note that renamed tests count towards both.
test.integration.test_spark.SparkTests ‑ test_get_available_devices
test.integration.test_spark.SparkTests ‑ test_happy_run_elastic
test.integration.test_spark.SparkTests ‑ test_happy_run_with_gloo
test.integration.test_spark.SparkTests ‑ test_happy_run_with_mpi
This pull request skips 39 tests.
test.parallel.test_adasum_pytorch.TorchAdasumTests ‑ test_orthogonal
test.parallel.test_adasum_pytorch.TorchAdasumTests ‑ test_parallel
test.parallel.test_mxnet.MXTests ‑ test_gluon_trainer
test.parallel.test_mxnet.MXTests ‑ test_gpu_required
test.parallel.test_mxnet.MXTests ‑ test_horovod_allreduce_cpu_gpu_error
...
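
A minimal sketch of the proposed markup, using a collapsible `<details>` section with an HTML list inside (so no blank lines around `<summary>` are needed); the test names are taken from the example above:

```html
<details>
  <summary>This pull request skips 39 tests.</summary>
  <ul>
    <li><code>test.parallel.test_adasum_pytorch.TorchAdasumTests ‑ test_orthogonal</code></li>
    <li><code>test.parallel.test_adasum_pytorch.TorchAdasumTests ‑ test_parallel</code></li>
    <li><code>test.parallel.test_mxnet.MXTests ‑ test_gluon_trainer</code></li>
  </ul>
</details>
```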

Disable comparisons to earlier test runs

Hey, thanks for writing this lovely action!

I'm trying to integrate it into our work project, and I'm running into a specific issue around comparisons with earlier commits: the action does these well, but I'd love a way to disable those comparisons.

The reason we need this is that we have some smarts built into our test runner that only run a subset of tests based on which parts of the codebase were changed, so the number of tests run varies wildly from commit to commit.

Alternatively, we could potentially provide the baseline in some format (e.g. in the JUnit format as skipped tests). If that works, it should be possible to provide better data as well; e.g. if a test gets added and another gets removed, you'd be able to say "+1, -1".

What do you think?

Documentation on token

Sorry to bother you with such a basic question, but what are the expected permissions for the GITHUB_TOKEN?
I registered a new token with the repo and workflow scopes, but it does not seem to do the job:

The token seems to be used:

Run EnricoMi/[email protected]
  with:
    github_token: ***
    files: build_report.xml
    check_name: Unit Test Results
    hide_comments: all but latest
    comment_on_pr: true
    log_level: INFO

but GitHub complains with

github.GithubException.GithubException: 403 {"message": "You must authenticate via a GitHub App.", "documentation_url": "https://docs.github.com/rest/reference/checks#create-a-check-run"} 

Should *I* register this app myself? I'm a bit lost.
Thanks.
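
For context, the 403 message above ("You must authenticate via a GitHub App") suggests the Checks API rejects personal access tokens; the built-in GITHUB_TOKEN acts as a GitHub App installation token, which is why it can create check runs. A sketch of a workflow using it (the `permissions` block is an assumption about what this action needs, not taken from its README):

```yaml
permissions:
  checks: write          # required to create check runs
  pull-requests: write   # assumed: needed if the action comments on PRs

steps:
  - name: Publish Test Results
    uses: EnricoMi/publish-unit-test-result-action@v2
    if: always()
    with:
      github_token: ${{ secrets.GITHUB_TOKEN }}
      files: build_report.xml
```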

Use special GitHub Action status log format

Some action output formatted in a special way is picked up by GitHub and shown as annotations on the workflow page. Use this format for warnings and errors of the publish action.
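
A minimal sketch of that log format in Python: GitHub's documented workflow commands look like `::warning file=...,line=...::message`, and anything printed to stdout in this shape becomes an annotation. The helper name `format_annotation` is hypothetical, not part of this action:

```python
def format_annotation(level: str, message: str, file: str = None, line: int = None) -> str:
    """Build a GitHub Actions workflow command, e.g. '::warning file=a.py,line=3::msg'."""
    props = []
    if file is not None:
        props.append(f"file={file}")
    if line is not None:
        props.append(f"line={line}")
    prop_str = " " + ",".join(props) if props else ""
    return f"::{level}{prop_str}::{message}"

# Printing the command to stdout is what makes GitHub pick it up:
print(format_annotation("warning", "1 test failed", file="test_spark.py", line=42))
print(format_annotation("error", "could not parse result file"))
```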

Comparing against the wrong base (master) commit

The action compares a commit of a branch to the commit where it branched off master. GitHub seems to merge this commit with the master head, so the unit tests include the master head and should be compared against that (at least in pull_request_target). This has happened here. Make sure to compare against the master commit that is part of the merge, not the master head, as master could have moved on while the action is running.

What if that commit cannot be merged into master? On which commit does GitHub actions run then?
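
A sketch illustrating the relevant git mechanics in a throwaway repository: on a merge commit like the one GitHub auto-generates for pull_request(_target) runs, the first parent is the base (master) commit the PR was merged onto, which is the commit the action should compare against, while the second parent is the PR head:

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email ci@example.com
git config user.name ci
git checkout -qb master

echo base > file; git add file; git commit -qm "master commit"
base=$(git rev-parse HEAD)

git checkout -qb pr-branch
echo change > file; git commit -qam "PR commit"
pr_head=$(git rev-parse HEAD)

# GitHub builds a merge commit like this for PR workflow runs:
git checkout -q master
git merge -q --no-ff --no-edit pr-branch

echo "base at merge time: $(git rev-parse HEAD^1)"   # equals $base
echo "PR head:            $(git rev-parse HEAD^2)"   # equals $pr_head
```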
