pytest-tui's Introduction

(Badges: Build Status | Documentation Status | Binder | Gitpod ready-to-code)

pytest-tui

A pytest plugin for viewing test run results, with console scripts to launch a Text User Interface (TUI) or an HTML page

TUI: (animated screen capture)

HTML: (animated screen capture)

Log Folding: (animated screen capture)

Introduction

When you run Pytest campaigns that produce a lot of terminal output (e.g. with many tests, very detailed output, or with multiple failures), the standard Pytest output can make it difficult to examine the results. You end up scrolling way back in the terminal, looking for that one test you want to examine more closely. Pytest-tui provides a Text User Interface (TUI) and an HTML page that aim to make it easier to find the information you're looking for.

Test results are categorized in the same way Pytest does it:

  • By outcome: [Pass|Fail|Error|Skipped|Xpass|Xfail]
  • By output section: [Summary|Full|Errors|Passes|Failures|Warnings]

The intent is to make it easier for you to find the specific results you want, so you can examine them without all the other results getting in your way.

How does it work in practice? Easy. You just run your Pytest campaigns like you normally would, adding the command line option --tui (pytest --tui). Your test session will proceed as it always does (always in verbose mode), showing you the familiar terminal output while running. Then, at the end of the session, a TUI or an HTML page can be launched via the included console scripts (tui and/or tuih). The results are displayed on-screen or in-browser for you to examine. When you're done, just exit the TUI to go back to the terminal, or close the HTML page. Don't worry about losing your test session data. Results are stored to local disk and you can always relaunch the TUI or HTML page using those same console scripts.
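
A typical end-to-end session might look like this (the tests/ path is just a placeholder; the --tui option and the tui/tuih console scripts are what the plugin provides):

pytest --tui tests/    # run your suite as usual; results are also written under ./tui_files/
tui                    # re-open the Text User Interface from the saved results
tuih                   # regenerate the HTML report and open it in your default browser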

Output sections and individual test results are expandable/collapsible, and test summary statistics are displayed for convenience. Both the TUI and the HTML page retain the original pytest ANSI-encoded color output, lending a familiar look and feel.

Features

  • New in 1.10.0 Regex-based folding on the HTML page, configurable by user-provided regex! See "Python Regex Folding" section below.
  • New in 1.9.1 Log message folding on the HTML page, configurable by log level. See "Python Log Message Folding" section below.
  • Launch either or both of the Textual TUI or the HTML page using built-in console scripts
  • ANSI text markup support - whatever the output on your console looks like is how things are going to show up in the TUI
  • Mouse and keyboard support (including scrolling)
  • Support for all output formats/modes:
    • -v, -vv, --no-header, --showlocals, --color=<yes|no|auto>
    • all variants of --tb except "native"
    • "live-log" (aka log_cli)
  • Support for other, simple output-manipulating plugins:
    • pytest-clarity
    • pytest-emoji
    • pytest-icdiff
    • pytest-rerunfailures
    • etc.
  • Not supported: plugins that take over the console in other ways, like
    • pytest-sugar
    • pytest-emoji-output
    • pytest-timestamp
  • Untested:
    • pytest-xdist
    • loguru

Requirements

  • Pytest >= 6.2.5
  • Python >= 3.8 (but see "Known Limitations/Issues" below if you want to run 3.10+)

Installation

For most users, simply issue the command pip install pytest-tui and you are good to go.

If you prefer to install via a requirements.txt file, the requirements files are located in the /reqts/ directory.

Usage

Running Your Tests

Pretty much just run pytest like you always do, adding the --tui option to the list of command line options:

pytest --tui <whatever-else-you-normally-do>

In some environments, where the working directory for pytest has been changed from the default, it may be necessary to cd into the working directory in order to successfully launch the TUI or HTML. Basically, you need to be in the parent directory of wherever the /tui_files folder has been placed by the plugin after a test run. This is a known issue and will be fixed at some point.

Sample / Demo Tests

If you would like some dummy tests that will allow you to take pytest-tui for a test drive, copy all the files at https://github.com/jeffwright13/pytest-tui/tree/main/demo-tests into a folder called demo-tests/ where your test environment resides. You will need the additional libraries listed in /reqts/requirements-dev.txt, so install them (pip install -r requirements-dev.txt). Then:

pytest demo-tests/

Looking at Results After Quitting TUI

If you have already exited the TUI and would like to re-enter it with the same data generated from the last Pytest run, simply type tui. To re-launch the HTML page using your default browser, issue the command tuih.

TUI Copy/Paste

On Linux terminals, you can typically press and hold the SHIFT key on your keyboard to temporarily bypass the TUI and access the terminal's native mouse copy/paste functionality (commonly, click-drag-release or double-click to select text, middle-click to paste). This copy/paste works with the terminal's selection buffer, as opposed to the TUI's buffer.

On Windows, use the ALT key while click-dragging the mouse. Mac users can get the same effect with the Option key.

Generating and viewing the HTML File

The HTML output file is located at <cwd>/tui_files/html_report.html. The HTML file is automatically generated when a test run is completed with the "--tui" option. It can also be generated manually with the tuih script by invoking it on the command line.

Visibility

Sometimes it can be difficult to read the terminal output when it is rendered on the HTML report. Pytest embeds ANSI color codes in its output, which are interpreted by a terminal program to display various colors for text. Pytest-tui takes these ANSI color codes and translates them to HTML (using the ansi2html library). Because the default color scheme for the HTML report is a light background with dark text, it can be difficult to see some of the colors. To address this, there are three buttons that can help. The first ("Toggle Background") allows you to toggle the background color of all console output. This should result in a page that closely resembles the output you would get in a standard terminal environment (assuming you have white text on a black background). The other two buttons, "Invert Colors" and "Remove/Restore Colors", are a bit more drastic in that they affect all text in the report. Experiment and see what works for you. Also note that if your browser is set to dark mode, or uses a theme that changes the default color scheme, this can also affect the visibility of the text.

"Folding" output in the HTML report

New in 1.11.0 is the integrated "folding" feature, which automatically rolls up any lines of your test output that match a regex (or regexes) specified in a file given on the command line. This option allows you to match specific lines of pytest's console output and 'fold' (hide) them.

The folding feature is activated by passing the --tui-regexfile option (see pytest --help), setting it to the path of a file containing the desired regex or regexes.

The file itself must contain plain text (UTF-8 encoded) with either a single regex, specified on a single line of the file, or two 'marker' patterns, specified on two consecutive lines of the file. If there is a single line in the file, that line is assumed to contain a regular expression; any line in pytest's console output that matches it will be folded. Consecutive lines that match will be folded into the same section. If there are two lines in the regex file, the first line is assumed to be a start marker and the second a stop marker; the folding action is applied to all lines between the start and stop markers.
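
For example, a single-pattern file that folds all DEBUG-level log lines could contain just this one line (the pattern and the file name fold_debug.txt below are illustrative only; adapt them to your own log format):

^.*\bDEBUG\b.*$

It would then be passed on the command line like so:

pytest --tui --tui-regexfile=fold_debug.txt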

Ideas and tips for folding:

  • Run all tests with DEBUG level logging, but only view those DEBUG messages when necessary. I find this option particularly helpful when trying to debug a test that is only failing intermittently.
  • Mark certain sections of a test's output with a pair of start/end markers. If you have test output that is very chatty, but you only want to see it when you need to, this is a good option. For example, if you have a test that is making a bunch of API calls, and you want to see the output of those calls, but only when the test fails, you can mark the start and stop of the API calls with a pair of markers, and then fold them away when you don't need to see them.
  • Use the non-printable characters 'ZWS' and 'ZWJ' ([Zero Width Space](https://en.wikipedia.org/wiki/Zero-width_space) / [Zero Width Joiner](https://en.wikipedia.org/wiki/Zero-width_joiner)) as start and stop markers. The visual impact on the output is minimal (only inserts one visible space), and the regex pattern is very unlikely to match anything else in the output. The repo contains a file called nonprintable_characters.txt that contains combinations of these characters, which can be used as a starting point for your own regexes (see the sketch below).
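
As a rough illustration of the marker approach (the marker strings below are illustrative and not necessarily the combinations shipped in nonprintable_characters.txt), a test could bracket its chattiest output with zero-width markers that a two-line regex file then folds:

ZWS = "\u200b"  # zero-width space
ZWJ = "\u200d"  # zero-width joiner

def test_chatty_section():
    print(ZWS + ZWJ)                        # start marker (matched by line 1 of the regex file)
    for endpoint in ("/users", "/orders"):  # hypothetical chatty output
        print(f"calling {endpoint} ... ok")
    print(ZWJ + ZWS)                        # stop marker (matched by line 2 of the regex file)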

Known Limitations / Issues

  • Python support for 3.10+ is not guaranteed. Changes were made to the importlib.metadata library that are not backwards-compatible, and may result in exceptions when attempting to run. I have not had the chance to chase this down definitively, so until such a time that I fully understand the issue, I recommend using Python 3.8 or 3.9. Of course, YMMV...give it a try, and let me know how things go. :-)
  • User interfaces need work:
    • Overall layouts need optimization (I am definitely not a UX guy)
    • Textual interface may be sluggish, esp. if run within an IDE
    • All code here is like a sausage factory: pleasant enough, until you look inside - do so at your own peril!
  • Not fully tested with all combinations of output formats. Probably some use-cases where things won't work 100% right.
  • pytest-tui is currently incompatible with pytest command line option --tb=native, and will cause an INTERNALERROR if the two are used together.
  • HTML page cannot offer clickable links to local filesystem. This is one of the workflows I depend on when using iTerm2...traceback lines with a file:// URL to a locally-hosted resource are clickable, and open up my IDE to that line in that file. Unfortunately, web browsers are much more security-minded than terminal apps, and actions like this are strictly disallowed.

History

This project was originally envisioned to only show test failures, and allow the user to 'fold' the details of the failed tests by clicking a line so that the details would alternately show/hide. In fact, the original repo was called pytest-fold. As development progressed, it became clear that what was really needed was a real TUI, one that organized the output in such a way that all of pytest's output was available in a more streamlined way.

Several TUIs (using different TUI libraries) have been cycled through this project. The Textual interface is the only one currently supported, since some internal optimization has been done to make the results simpler to consume. However, other TUIs should be able to be integrated without too much work (e.g. Asciimatics, PyTermTk, pytermgui, etc.). The same would be true of a GUI. Contact the author if you have a desire to implement one of these. The results of any given test run are collected and sorted in such a way that it should be relatively simple to take them and put them into the presentation mode of choice.

The HTML feature was put into place because of some minor limitations the author found in the available HTML plugins (miscounted totals in some corner cases, no color-coded output, inability to show output from the pytest live logs option). There is no intent to replace existing HTML plugins, but if you like this one, please do spread the word. :-)

Reporting Issues

If you encounter any problems, have feedback or requests, or anything else, please file an issue, along with a detailed description.

Contributing

Contributions are welcome. Please run pyflakes, isort and black on any code before submitting a PR.

I have tried to make the TUIs and the HTML page as clean as possible, but I am not a UI expert and I am sure many improvements could be made. If you are slick with user interfaces, I would love some help!

License

Distributed under the terms of the MIT license, "pytest-tui" is free and open source software.

pytest-tui's Issues

Support non-standard result outcomes

Need to figure out if there is a neat way to support non-standard outcome categories (e.g. [R]erun), without having to code up each new one that comes along.

tui2: "local variable 'width' referenced before assignment"

$ tui2


Traceback (most recent call last):
  File "/Users/jwr003/repos/gems-qa-auto/venv/bin/tui2", line 33, in <module>
    sys.exit(load_entry_point('pytest-tui', 'console_scripts', 'tui2')())
  File "/Users/jwr003/coding/pytest-tui/pytest_tui/tui2.py", line 129, in main
    tui.create_test_result_tabs()
  File "/Users/jwr003/coding/pytest-tui/pytest_tui/tui2.py", line 81, in create_test_result_tabs
    results_splitter.addWidget(results_list, width)
UnboundLocalError: local variable 'width' referenced before assignment

Figure out discrepancy with test_1004

When test_1004 is run, Pytest and Pytest-Tui differ in their counts for Pass. Need to understand the issue at heart, and replicate how Pytest fixed it so that Pytest-Tui's metrics exactly match.

HTML page improvements

Ideas:

  • Section Buttons across top of page for Sections (Start/Summary/Full Output/Errors/Warnings)
  • Results Buttons below for individual Results (Passes/Failures/Xpasses/Xfails/Errors/Skips)

HTML page issues

  • Red section content in Summary (== test session starts ===) DONE
  • Section headers and test headers (with ====== and ______ as identifying characteristics) do not get rendered in bold or color in sections. They are rendered correctly in Full Output, though. DONE
  • Blue markup? DONE
  • On About tab, summary stats are doubled horizontally DONE
  • Migrate buttons at top to navbar
  • Consistency of theme/template DONE
  • Highlight on hover for test case buttons
  • Each Results tab should have sortability by fqtn, time and outcome
  • Top-level buttons (About | Passed | Full Output | etc.) should be sticky at top of page
  • Horizontal scroll bar should be visible at bottom of browser if page content extends beyond the page
  • RERUN section not working DONE (removed RERUN)

Incorporate Error and Rerun categories

pytest-html supports filtering results on Error or Rerun category/outcome, so consider doing the same (they also have a filter for Warning, but since Pytest itself doesn't show that in the verbose summary section, don't do that one).

Tweak regex to handle plugin `pytest-emoji-output`

It changes the wording of the output from a simple PASSED/FAILED to a phrase that includes that word, but other stuff as well.

tests/test_1.py::test_a_ok PASSED                                                                                            [  2%]
tests/test_1.py::test_b_fail FAILED                                                                                          [  4%]
tests/test_1.py::test_c_error ERROR                                                                                          [  7%]
tests/test_1.py::test_d1_skip_inline SKIPPED (Skipping this test with inline call to 'pytest.skip()'.)                       [  9%]
tests/test_1.py::test_d2_skip_decorator SKIPPED (Skipping this test with inline call to 'pytest.skip()'.)                    [ 11%]
tests/test_1.py::test_e1_xfail_by_inline_and_has_reason XFAIL (Marked as Xfail with inline call to 'pytest.xfail()'.)        [ 14%]
tests/test_1.py::test_e2_xfail_by_decorator_and_has_reason XFAIL (Marked as Xfail with decorator.)                           [ 16%]
tests/test_1.py::test_f1_xfails_by_inline_even_though_assertTrue_happens_before_pytestDotXfail XFAIL (Marked as Xfail wi...) [ 19%]
tests/test_1.py::test_f2_xpass_by_xfail_decorator_and_has_reason XPASS (Marked as Xfail with decorator.)                     [ 21%]
tests/test_1.py::test_g_eval_parameterized[3+5-8] PASSED                                                                     [ 23%]
tests/test_1.py::test_g_eval_parameterized[2+4-6] PASSED                                                                     [ 26%]
tests/test_1.py::test_g_eval_parameterized[6*9-42] FAILED                                                                    [ 28%]
tests/test_1.py::test_1_passes_and_has_logging_output PASSED                                                                 [ 30%]
tests/test_1.py::test_2_fails_and_has_logging_output FAILED                                                                  [ 33%]
tests/test_1.py::test_3_fails FAILED                                                                                         [ 35%]
tests/test_1.py::test_4_passes PASSED                                                                                        [ 38%]
tests/test_1.py::test_5_marked_SKIP SKIPPED (unconditional skip)                                                             [ 40%]
tests/test_1.py::test_6_marked_xfail_by_decorator_but_passes_and_has_no_reason XPASS                                         [ 42%]
tests/test_1.py::test_7_marked_xfail_by_decorator_and_fails_and_has_no_reason XFAIL                                          [ 45%]
tests/test_1.py::test_8_causes_a_warning FAILED                                                                              [ 47%]
tests/test_1.py::test_9_lorem_fails FAILED                                                                                   [ 50%]
tests/test_1.py::test_10_fail_capturing FAIL stdout not captured, going directly to sys.stdout
FAIL stderr not captured, going directly to sys.stderr
FAILED                                                                               [ 52%]
tests/test_1.py::test_11_pass_capturing FAILED                                                                               [ 54%]
tests/test_1.py::test_12_fails_and_has_stdout FAILED                                                                         [ 57%]
tests/test_1.py::test_13_passes_and_has_stdout PASSED                                                                        [ 59%]
tests/test_1.py::test_14_causes_error_pass_stderr_stdout_stdlog ERROR                                                        [ 61%]
tests/test_1.py::test_15_causes_error_fail_stderr_stdout_stdlog ERROR                                                        [ 64%]
tests/test_1.py::test_16_fail_compare_dicts_for_pytest_icdiff FAILED                                                         [ 66%]
tests/test_2.py::test_a_ok PASSED                                                                                            [ 69%]
tests/test_2.py::test_b_fail FAILED                                                                                          [ 71%]
tests/test_2.py::test_c_error ERROR                                                                                          [ 73%]
tests/test_hoefling.py::test_1 FAILED                                                                                        [ 76%]
tests/test_hoefling.py::test_2 FAILED                                                                                        [ 78%]
tests/test_hoefling.py::test_3 ERROR                                                                                         [ 80%]
tests/test_hoefling.py::test_4 PASSED                                                                                        [ 83%]
tests/test_hoefling.py::test_4 ERROR                                                                                         [ 83%]
tests/test_issue_1004.py::test_foo PASSED                                                                                    [ 85%]
tests/test_issue_1004.py::test_foo ERROR                                                                                     [ 85%]
tests/test_issue_1004.py::test_foo2 PASSED                                                                                   [ 88%]
tests/test_issue_1004.py::test_foo2 ERROR                                                                                    [ 88%]
tests/test_issue_1004.py::test_foo3 FAILED                                                                                   [ 90%]
tests/test_issue_1004.py::test_foo3 ERROR                                                                                    [ 90%]
tests/test_warnings.py::test_1_fails_with_warnings FAILED                                                                    [ 92%]
tests/test_warnings.py::test_2_passes_with_warnings PASSED                                                                   [ 95%]
tests/test_xpass_xfail.py::test_xfail_by_inline XFAIL (xfailing this test with 'pytest.xfail()')                             [ 97%]
tests/test_xpass_xfail.py::test_xfail_by_decorator XFAIL (Here's my reason for xfail: None)
tests/test_1.py::test_a_ok 😇 Yes sir, it is passed                                                                          [  2%]
tests/test_1.py::test_b_fail 😡 Oh crap, it is failed                                                                        [  4%]
tests/test_1.py::test_c_error ERROR                                                                                          [  7%]
tests/test_1.py::test_d1_skip_inline 😶 Nevermind, it is skipped (Skipping this test with inline call to 'pytest.skip()'.)   [  9%]
tests/test_1.py::test_d2_skip_decorator 😶 Nevermind, it is skipped (Skipping this test with inline call to 'pytest.skip...) [ 11%]
tests/test_1.py::test_e1_xfail_by_inline_and_has_reason 😶 Nevermind, it is skipped (Marked as Xfail with inline call to...) [ 14%]
tests/test_1.py::test_e2_xfail_by_decorator_and_has_reason 😶 Nevermind, it is skipped (Marked as Xfail with decorator.)     [ 16%]
tests/test_1.py::test_f1_xfails_by_inline_even_though_assertTrue_happens_before_pytestDotXfail 😶 Nevermind, it is skipped   [ 19%]
tests/test_1.py::test_f2_xpass_by_xfail_decorator_and_has_reason 😇 Yes sir, it is passed (Marked as Xfail with decorator.)  [ 21%]
tests/test_1.py::test_g_eval_parameterized[3+5-8] 😇 Yes sir, it is passed                                                   [ 23%]
tests/test_1.py::test_g_eval_parameterized[2+4-6] 😇 Yes sir, it is passed                                                   [ 26%]
tests/test_1.py::test_g_eval_parameterized[6*9-42] 😡 Oh crap, it is failed                                                  [ 28%]
tests/test_1.py::test_1_passes_and_has_logging_output 😇 Yes sir, it is passed                                               [ 30%]
tests/test_1.py::test_2_fails_and_has_logging_output 😡 Oh crap, it is failed                                                [ 33%]
tests/test_1.py::test_3_fails 😡 Oh crap, it is failed                                                                       [ 35%]
tests/test_1.py::test_4_passes 😇 Yes sir, it is passed                                                                      [ 38%]
tests/test_1.py::test_5_marked_SKIP SKIPPED (unconditional skip)                                                             [ 40%]
tests/test_1.py::test_6_marked_xfail_by_decorator_but_passes_and_has_no_reason 😇 Yes sir, it is passed                      [ 42%]
tests/test_1.py::test_7_marked_xfail_by_decorator_and_fails_and_has_no_reason 😶 Nevermind, it is skipped                    [ 45%]
tests/test_1.py::test_8_causes_a_warning 😡 Oh crap, it is failed                                                            [ 47%]
tests/test_1.py::test_9_lorem_fails 😡 Oh crap, it is failed                                                                 [ 50%]
tests/test_1.py::test_10_fail_capturing FAIL stdout not captured, going directly to sys.stdout
FAIL stderr not captured, going directly to sys.stderr
😡 Oh crap, it is failed                                                             [ 52%]
tests/test_1.py::test_11_pass_capturing 😡 Oh crap, it is failed                                                             [ 54%]
tests/test_1.py::test_12_fails_and_has_stdout 😡 Oh crap, it is failed                                                       [ 57%]
tests/test_1.py::test_13_passes_and_has_stdout 😇 Yes sir, it is passed                                                      [ 59%]
tests/test_1.py::test_14_causes_error_pass_stderr_stdout_stdlog ERROR                                                        [ 61%]
tests/test_1.py::test_15_causes_error_fail_stderr_stdout_stdlog ERROR                                                        [ 64%]
tests/test_1.py::test_16_fail_compare_dicts_for_pytest_icdiff 😡 Oh crap, it is failed                                       [ 66%]
tests/test_2.py::test_a_ok 😇 Yes sir, it is passed                                                                          [ 69%]
tests/test_2.py::test_b_fail 😡 Oh crap, it is failed                                                                        [ 71%]
tests/test_2.py::test_c_error ERROR                                                                                          [ 73%]
tests/test_hoefling.py::test_1 😡 Oh crap, it is failed                                                                      [ 76%]
tests/test_hoefling.py::test_2 😡 Oh crap, it is failed                                                                      [ 78%]
tests/test_hoefling.py::test_3 ERROR                                                                                         [ 80%]
tests/test_hoefling.py::test_4 😇 Yes sir, it is passed                                                                      [ 83%]
tests/test_hoefling.py::test_4 ERROR                                                                                         [ 83%]
tests/test_issue_1004.py::test_foo 😇 Yes sir, it is passed                                                                  [ 85%]
tests/test_issue_1004.py::test_foo ERROR                                                                                     [ 85%]
tests/test_issue_1004.py::test_foo2 😇 Yes sir, it is passed                                                                 [ 88%]
tests/test_issue_1004.py::test_foo2 ERROR                                                                                    [ 88%]
tests/test_issue_1004.py::test_foo3 😡 Oh crap, it is failed                                                                 [ 90%]
tests/test_issue_1004.py::test_foo3 ERROR                                                                                    [ 90%]
tests/test_warnings.py::test_1_fails_with_warnings 😡 Oh crap, it is failed                                                  [ 92%]
tests/test_warnings.py::test_2_passes_with_warnings 😇 Yes sir, it is passed                                                 [ 95%]
tests/test_xpass_xfail.py::test_xfail_by_inline 😶 Nevermind, it is skipped (xfailing this test with 'pytest.xfail()')       [ 97%]
tests/test_xpass_xfail.py::test_xfail_by_decorator 😶 Nevermind, it is skipped (Here's my reason for xfail: None)

Standardize layout

The current layout is kludgy. It relies on the default DockView to lay out the 4 results trees and the text body ScrollView. This has drawbacks: widgets take up either whole rows or whole columns, which crowds the text body on the one hand while leaving open spaces with nothing in them on the other.

Textual supports 3 views at the moment:

  1. DockView (with autodocking, as default or a "dock-grid" option)
  2. GridView
  3. WindowView

Ultimately I want to have the 4 trees, the body (results), the option for the full terminal output, the intro section (test_session_starts) which includes a chronological overview of which tests passed and which failed, and the final warnings/summary section.

Consider using fully qualified test names

It is possible that someone could use two different test files and have the same test name, one in each. In that case, the results are going to get messed up. We should probably use a fully qualified testname, e.g.
test_input_file.py::test_number_one
instead of just the bare test name, as we do currently:
test_number_one

Custom color scheme

Need to figure out a good way to support custom color schemes. I've heard people call the format "ugly". Guess they think their terminal sessions are ugly, too, but gotta make the masses happy. Give 'em a choice to customize the colors and I am sure this plugin will go Tick Tock Famous.

Interactive helper script (CLI)

There is enough complexity and variation in the configuration of the plugin that an interactive tui script would be beneficial.

Things to support:

  • Choice of TUI
  • Autostart TUI
  • Autolaunch HTML
  • HTML color scheme
  • HTML layout
  • etc...

Cannot execute code if Tk is not installed

$ pytest --tui
Traceback (most recent call last):
  File "/Users/jeff/coding/pytest-tui/venv/bin/pytest", line 8, in <module>
    sys.exit(console_main())
  File "/Users/jeff/coding/pytest-tui/venv/lib/python3.8/site-packages/_pytest/config/__init__.py", line 187, in console_main
    code = main()
  File "/Users/jeff/coding/pytest-tui/venv/lib/python3.8/site-packages/_pytest/config/__init__.py", line 145, in main
    config = _prepareconfig(args, plugins)
  File "/Users/jeff/coding/pytest-tui/venv/lib/python3.8/site-packages/_pytest/config/__init__.py", line 324, in _prepareconfig
    config = pluginmanager.hook.pytest_cmdline_parse(
  File "/Users/jeff/coding/pytest-tui/venv/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__
    return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
  File "/Users/jeff/coding/pytest-tui/venv/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec
    return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
  File "/Users/jeff/coding/pytest-tui/venv/lib/python3.8/site-packages/pluggy/_callers.py", line 55, in _multicall
    gen.send(outcome)
  File "/Users/jeff/coding/pytest-tui/venv/lib/python3.8/site-packages/_pytest/helpconfig.py", line 102, in pytest_cmdline_parse
    config: Config = outcome.get_result()
  File "/Users/jeff/coding/pytest-tui/venv/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result
    raise ex[1].with_traceback(ex[2])
  File "/Users/jeff/coding/pytest-tui/venv/lib/python3.8/site-packages/pluggy/_callers.py", line 39, in _multicall
    res = hook_impl.function(*args)
  File "/Users/jeff/coding/pytest-tui/venv/lib/python3.8/site-packages/_pytest/config/__init__.py", line 1016, in pytest_cmdline_parse
    self.parse(args)
  File "/Users/jeff/coding/pytest-tui/venv/lib/python3.8/site-packages/_pytest/config/__init__.py", line 1304, in parse
    self._preparse(args, addopts=addopts)
  File "/Users/jeff/coding/pytest-tui/venv/lib/python3.8/site-packages/_pytest/config/__init__.py", line 1187, in _preparse
    self.pluginmanager.load_setuptools_entrypoints("pytest11")
  File "/Users/jeff/coding/pytest-tui/venv/lib/python3.8/site-packages/pluggy/_manager.py", line 287, in load_setuptools_entrypoints
    plugin = ep.load()
  File "/Users/jeff/.pyenv/versions/3.8.7/lib/python3.8/importlib/metadata.py", line 77, in load
    module = import_module(match.group('module'))
  File "/Users/jeff/.pyenv/versions/3.8.7/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "/Users/jeff/coding/pytest-tui/venv/lib/python3.8/site-packages/_pytest/assertion/rewrite.py", line 168, in exec_module
    exec(co, module.__dict__)
  File "/Users/jeff/coding/pytest-tui/venv/lib/python3.8/site-packages/pytest_tui/plugin.py", line 13, in <module>
    from pytest_tui.tui2 import main as tui2
  File "/Users/jeff/coding/pytest-tui/venv/lib/python3.8/site-packages/_pytest/assertion/rewrite.py", line 168, in exec_module
    exec(co, module.__dict__)
  File "/Users/jeff/coding/pytest-tui/venv/lib/python3.8/site-packages/pytest_tui/tui2.py", line 1, in <module>
    import TermTk as ttk
  File "/Users/jeff/coding/pytest-tui/venv/lib/python3.8/site-packages/TermTk/__init__.py", line 1, in <module>
    from .TTkCore import *
  File "/Users/jeff/coding/pytest-tui/venv/lib/python3.8/site-packages/TermTk/TTkCore/__init__.py", line 4, in <module>
    from .ttk import *
  File "/Users/jeff/coding/pytest-tui/venv/lib/python3.8/site-packages/TermTk/TTkCore/ttk.py", line 39, in <module>
    from TermTk.TTkWidgets.widget import *
  File "/Users/jeff/coding/pytest-tui/venv/lib/python3.8/site-packages/TermTk/TTkWidgets/__init__.py", line 16, in <module>
    from .tabwidget       import *
  File "/Users/jeff/coding/pytest-tui/venv/lib/python3.8/site-packages/TermTk/TTkWidgets/tabwidget.py", line 25, in <module>
    from turtle import isvisible
  File "/Users/jeff/.pyenv/versions/3.8.7/lib/python3.8/turtle.py", line 107, in <module>
    import tkinter as TK
  File "/Users/jeff/.pyenv/versions/3.8.7/lib/python3.8/tkinter/__init__.py", line 36, in <module>
    import _tkinter # If this fails your Python may not be configured for Tk
ModuleNotFoundError: No module named '_tkinter'

Add HTML export feature

There is a relatively new library, ansi2html, that could be leveraged to produce HTML output instead of (or in addition to) a TUI session. Initial experiments show that it will take an entire unmarked_output.bin file from a pytest --tui run and convert it into an HTML page that looks exactly like the console showed at the end of a test run. (A rough sketch of the conversion is shown after the notes below.)

Notes:

  • ansi2html will not process triple-quotes, as they are considered valid Python string delimiters. They would have to be removed from the input to ansi2html. Example output from demo-tests/test_1.py:
capsys = <_pytest.capture.CaptureFixture object at 0x110ae27c0>

    �[94mdef�[39;49;00m �[92mtest_9_lorem_fails�[39;49;00m(capsys):
        lorem = �[33m"""�[39;49;00m�[33m"�[39;49;00m�[33mLorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.�[39;49;00m�[33m�[39;49;00m
    �[33m�[39;49;00m
    �[33m    Sed ut perspiciatis unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam, eaque ipsa quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt explicabo. Nemo enim ipsam voluptatem quia voluptas sit aspernatur aut odit aut fugit, sed quia consequuntur magni dolores eos qui ratione voluptatem sequi nesciunt. Neque porro quisquam est, qui dolorem ipsum quia dolor sit amet, consectetur, adipisci velit, sed quia non numquam eius modi tempora incidunt ut labore et dolore magnam aliquam quaerat voluptatem. Ut enim ad minima veniam, quis nostrum exercitationem ullam corporis suscipit laboriosam, nisi ut aliquid ex ea commodi consequatur? Quis autem vel eum iure reprehenderit qui in ea voluptate velit esse quam nihil molestiae consequatur, vel illum qui dolorem eum fugiat quo voluptas nulla pariatur?�[39;49;00m�[33m�[39;49;00m
    �[33m�[39;49;00m
    �[33m    At vero eos et accusamus et iusto odio dignissimos ducimus qui blanditiis praesentium voluptatum deleniti atque corrupti quos dolores et quas molestias excepturi sint occaecati cupiditate non provident, similique sunt in culpa qui officia deserunt mollitia animi, id est laborum et dolorum fuga. Et harum quidem rerum facilis est et expedita distinctio. Nam libero tempore, cum soluta nobis est eligendi optio cumque nihil impedit quo minus id quod maxime placeat facere possimus, omnis voluptas assumenda est, omnis dolor repellendus. Temporibus autem quibusdam et aut officiis debitis aut rerum necessitatibus saepe eveniet ut et voluptates repudiandae sint et molestiae non recusandae. Itaque earum rerum hic tenetur a sapiente delectus, ut aut reiciendis voluptatibus maiores alias consequatur aut perferendis doloribus asperiores repellat.�[39;49;00m�[33m"""�[39;49;00m

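A rough sketch of the conversion idea (the input path tui_files/unmarked_output.bin is an assumption here; the real plugin code may differ):

from ansi2html import Ansi2HTMLConverter

# read the raw ANSI console capture produced by a 'pytest --tui' run
with open("tui_files/unmarked_output.bin", encoding="utf-8", errors="replace") as f:
    ansi_text = f.read()

# dark_bg=True keeps the look of a typical dark terminal; per the note above,
# triple-quotes may need to be stripped from the input first
conv = Ansi2HTMLConverter(dark_bg=True)
html = conv.convert(ansi_text)

with open("html_report.html", "w", encoding="utf-8") as f:
    f.write(html)
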
Accommodate console lines that don't match standard sections

Encountered a very highly customized version of Pytest which munges the console output to send lines before the famous === test session starts === line. This caused an exception:

Traceback (most recent call last):
  File "/Users/jwr003/repos/gems-qa-auto/venv/bin/gems-qa-tests", line 33, in <module>
    sys.exit(load_entry_point('gems-qa-auto', 'console_scripts', 'gems-qa-tests')())
  File "/Users/jwr003/repos/gems-qa-auto/gems_qa_tests/__main__.py", line 11, in main
    pytest.main()
  File "/Users/jwr003/repos/gems-qa-auto/venv/lib/python3.9/site-packages/_pytest/config/__init__.py", line 162, in main
    ret: Union[ExitCode, int] = config.hook.pytest_cmdline_main(
  File "/Users/jwr003/repos/gems-qa-auto/venv/lib/python3.9/site-packages/pluggy/hooks.py", line 286, in __call__
    return self._hookexec(self, self.get_hookimpls(), kwargs)
  File "/Users/jwr003/repos/gems-qa-auto/venv/lib/python3.9/site-packages/pluggy/manager.py", line 93, in _hookexec
    return self._inner_hookexec(hook, methods, kwargs)
  File "/Users/jwr003/repos/gems-qa-auto/venv/lib/python3.9/site-packages/pluggy/manager.py", line 84, in <lambda>
    self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
  File "/Users/jwr003/repos/gems-qa-auto/venv/lib/python3.9/site-packages/pluggy/callers.py", line 208, in _multicall
    return outcome.get_result()
  File "/Users/jwr003/repos/gems-qa-auto/venv/lib/python3.9/site-packages/pluggy/callers.py", line 80, in get_result
    raise ex[1].with_traceback(ex[2])
  File "/Users/jwr003/repos/gems-qa-auto/venv/lib/python3.9/site-packages/pluggy/callers.py", line 187, in _multicall
    res = hook_impl.function(*args)
  File "/Users/jwr003/repos/gems-qa-auto/venv/lib/python3.9/site-packages/_pytest/main.py", line 316, in pytest_cmdline_main
    return wrap_session(config, _main)
  File "/Users/jwr003/repos/gems-qa-auto/venv/lib/python3.9/site-packages/_pytest/main.py", line 289, in wrap_session
    config.notify_exception(excinfo, config.option)
  File "/Users/jwr003/repos/gems-qa-auto/venv/lib/python3.9/site-packages/_pytest/config/__init__.py", line 1037, in notify_exception
    res = self.hook.pytest_internalerror(excrepr=excrepr, excinfo=excinfo)
  File "/Users/jwr003/repos/gems-qa-auto/venv/lib/python3.9/site-packages/pluggy/hooks.py", line 286, in __call__
    return self._hookexec(self, self.get_hookimpls(), kwargs)
  File "/Users/jwr003/repos/gems-qa-auto/venv/lib/python3.9/site-packages/pluggy/manager.py", line 93, in _hookexec
    return self._inner_hookexec(hook, methods, kwargs)
  File "/Users/jwr003/repos/gems-qa-auto/venv/lib/python3.9/site-packages/pluggy/manager.py", line 84, in <lambda>
    self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
  File "/Users/jwr003/repos/gems-qa-auto/venv/lib/python3.9/site-packages/pluggy/callers.py", line 208, in _multicall
    return outcome.get_result()
  File "/Users/jwr003/repos/gems-qa-auto/venv/lib/python3.9/site-packages/pluggy/callers.py", line 80, in get_result
    raise ex[1].with_traceback(ex[2])
  File "/Users/jwr003/repos/gems-qa-auto/venv/lib/python3.9/site-packages/pluggy/callers.py", line 187, in _multicall
    res = hook_impl.function(*args)
  File "/Users/jwr003/repos/gems-qa-auto/venv/lib/python3.9/site-packages/_pytest/terminal.py", line 474, in pytest_internalerror
    self.write_line("INTERNALERROR> " + line)
  File "/Users/jwr003/repos/gems-qa-auto/venv/lib/python3.9/site-packages/_pytest/terminal.py", line 430, in write_line
    self._tw.line(line, **markup)
  File "/Users/jwr003/repos/gems-qa-auto/venv/lib/python3.9/site-packages/_pytest/_io/terminalwriter.py", line 170, in line
    self.write(s, **markup)
  File "/Users/jwr003/repos/gems-qa-auto/venv/lib/python3.9/site-packages/pytest_tui/plugin.py", line 150, in tee_write
    if not config._tui_current_section:
AttributeError: 'Config' object has no attribute '_tui_current_section'

What needs to happen is this: assign a new output section (like "other") to act as a catch-all for incoming lines that are not part of a known section, and then accommodate that new section in the TUI and HTML. A minimal sketch of the idea is below.
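
A minimal sketch of the catch-all fallback, using the attribute name from the traceback above (the actual fix in plugin.py may look different):

def current_section(config) -> str:
    """Return the output section for an incoming console line, with a catch-all fallback.

    Lines that arrive before "=== test session starts ===" (or otherwise outside any
    known section) are filed under "other" instead of raising AttributeError.
    """
    return getattr(config, "_tui_current_section", None) or "other"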

tui scripts fail to run when original pytest run was executed in directory other than `.`

Example: pytest is executed from the /qa-auto directory, but actually runs from the /qa-auto/qa_tests folder because it is wrapped in a custom script called qa-tests.

Traceback (most recent call last):
  File "qa-auto/venv/bin/tui4", line 33, in <module>
    sys.exit(load_entry_point('pytest-tui', 'console_scripts', 'tui4')())
  File "/Users/jwr003/coding/pytest-tui/pytest_tui/tui_textual_tabs.py", line 269, in main
    app.run()
  File "qa-auto/venv/lib/python3.9/site-packages/textual/app.py", line 206, in run
    asyncio.run(run_app())
  File "/Users/jwr003/.pyenv/versions/3.9.9/lib/python3.9/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/Users/jwr003/.pyenv/versions/3.9.9/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete
    return future.result()
  File "qa-auto/venv/lib/python3.9/site-packages/textual/app.py", line 204, in run_app
    await app.process_messages()
  File "qa-auto/venv/lib/python3.9/site-packages/textual/app.py", line 291, in process_messages
    await self.dispatch_message(load_event)
  File "qa-auto/venv/lib/python3.9/site-packages/textual/message_pump.py", line 232, in dispatch_message
    await self.on_event(message)
  File "qa-auto/venv/lib/python3.9/site-packages/textual/app.py", line 434, in on_event
    await super().on_event(event)
  File "qa-auto/venv/lib/python3.9/site-packages/textual/message_pump.py", line 254, in on_event
    await invoke(method, event)
  File "qa-auto/venv/lib/python3.9/site-packages/textual/_callback.py", line 29, in invoke
    result = await result
  File "/Users/jwr003/coding/pytest-tui/pytest_tui/tui_textual_tabs.py", line 135, in on_load
    self.test_results = Results()
  File "/Users/jwr003/coding/pytest-tui/pytest_tui/utils.py", line 91, in __init__
    self.test_results = self._deduplicate_reports()
  File "/Users/jwr003/coding/pytest-tui/pytest_tui/utils.py", line 164, in _deduplicate_reports
    processed_reports = self._process_reports()
  File "/Users/jwr003/coding/pytest-tui/pytest_tui/utils.py", line 181, in _process_reports
    test_info.keywords = set(report.keywords)
AttributeError: 'CollectReport' object has no attribute 'keywords'

Workaround is to cd qa_tests and then run tui4.

Refactor: tag each test with FQTN/outcome/time/duration

In addition to (or maybe instead of) marking each section in the tee_write section of pytest_configure (file: plugin.py), we could populate a custom dataclass with the above metrics to make sorting/presentation a lot easier during post-processing phases (e.g. when utils.py is run to categorize the tests, when the TUIs and HTML files are populated, etc.). A rough sketch of such a record is below.
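
A rough sketch of what the per-test record could look like (the class and field names are illustrative only, not an existing API):

from dataclasses import dataclass

@dataclass
class TestRecord:
    fqtn: str           # fully qualified test name, e.g. "tests/test_1.py::test_a_ok"
    outcome: str        # "passed", "failed", "error", "skipped", "xfailed", "xpassed", ...
    start_time: float   # epoch timestamp when the test started
    duration: float     # elapsed time in seconds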

Handle launch of `tuihtml` script when output_html.html does not exist

$ tuihtml

Traceback (most recent call last):
  File "/Users/jwr003/repos/gems-qa-auto/venv/bin/tuihtml", line 8, in <module>
    sys.exit(main())
  File "/Users/jwr003/repos/gems-qa-auto/venv/lib/python3.9/site-packages/pytest_tui/html.py", line 63, in main
    (mode, ino, dev, nlink, uid, gid, size, atime, mtime, ctime) = os.stat(
FileNotFoundError: [Errno 2] No such file or directory: '/Users/jwr003/repos/gems-qa-auto/gems_qa_tests/results/output_html.html'
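
A possible guard, sketched under the assumption that the report lives at <cwd>/tui_files/html_report.html as described earlier in this README (the traceback above shows an older results/output_html.html location):

import sys
from pathlib import Path

def main() -> None:
    # fail with a friendly message instead of an unhandled FileNotFoundError
    report = Path.cwd() / "tui_files" / "html_report.html"
    if not report.exists():
        sys.exit(f"pytest-tui: no HTML report found at {report}; run 'pytest --tui' first.")
    # ... existing logic that stats and opens the report would follow here ...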
