
btest's Introduction

Zeek Logo

The Zeek Network Security Monitor

A powerful framework for network traffic analysis and security monitoring.

Key Features | Documentation | Getting Started | Development | License

Follow us on Twitter at @zeekurity.



Key Features

  • In-depth Analysis: Zeek ships with analyzers for many protocols, enabling high-level semantic analysis at the application layer.

  • Adaptable and Flexible: Zeek's domain-specific scripting language enables site-specific monitoring policies and means that it is not restricted to any particular detection approach.

  • Efficient: Zeek targets high-performance networks and is used operationally at a variety of large sites.

  • Highly Stateful: Zeek keeps extensive application-layer state about the network it monitors and provides a high-level archive of a network's activity.

Getting Started

The best place to find information about getting started with Zeek is our website, www.zeek.org, specifically the documentation section there. On the website you can also find downloads for stable releases, tutorials on getting Zeek set up, and many other useful resources.

You can find release notes in NEWS, and a complete record of all changes in CHANGES.

To work with the most recent code from the development branch of Zeek, clone the master git repository:

git clone --recursive https://github.com/zeek/zeek

With all dependencies in place, build and install:

./configure && make && sudo make install

Write your first Zeek script:

# File "hello.zeek"

event zeek_init()
    {
    print "Hello World!";
    }

And run it:

zeek hello.zeek

To learn more about the Zeek scripting language, try.zeek.org is a great resource.

Development

Zeek is developed on GitHub by its community. We welcome contributions. Working on an open source project like Zeek can be an incredibly rewarding experience and, packet by packet, makes the Internet a little safer. Today, as a result of countless contributions, Zeek is used operationally around the world by major companies and educational and scientific institutions alike for securing their cyber infrastructure.

If you're interested in getting involved, we collect feature requests and issues on GitHub here and you might find these to be a good place to get started. More information on Zeek's development can be found here, and information about its community and mailing lists (which are fairly active) can be found here.

License

Zeek comes with a BSD license, allowing for free use with virtually no restrictions. You can find it here.


btest's People

Contributors

0xxon, awelzel, bbannier, ckreibich, corelight-chris, deejgregor, dnthayer, fabaff, grigorescu, jsiwek, rsmmr, steap, szaydel, timwoj, woot4moo


btest's Issues

BTest fails if WSL2 is installed on Windows

I recently installed Docker Desktop on Windows which required the installation of WSL2 so I could run VMs. Having that installed results in the following when trying to run btest from inside a gitbash shell:

$ btest -d bifs.all_set
<3>WSL (11) ERROR: CreateProcessEntryCommon:368: getpwuid(0) failed 2
<3>WSL (11) ERROR: CreateProcessEntryCommon:372: getpwuid(0) failed 2
<3>WSL (11) ERROR: CreateProcessEntryCommon:574: execvpe /bin/bash failed 2
<3>WSL (11) ERROR: CreateProcessEntryCommon:583: Create process not expected to return
<3>WSL (14) ERROR: CreateProcessEntryCommon:368: getpwuid(0) failed 2
<3>WSL (14) ERROR: CreateProcessEntryCommon:372: getpwuid(0) failed 2
<3>WSL (14) ERROR: CreateProcessEntryCommon:574: execvpe /bin/bash failed 2
<3>WSL (14) ERROR: CreateProcessEntryCommon:583: Create process not expected to return
<3>WSL (17) ERROR: CreateProcessEntryCommon:368: getpwuid(0) failed 2
<3>WSL (17) ERROR: CreateProcessEntryCommon:372: getpwuid(0) failed 2
<3>WSL (17) ERROR: CreateProcessEntryCommon:574: execvpe /bin/bash failed 2
<3>WSL (17) ERROR: CreateProcessEntryCommon:583: Create process not expected to return
bifs.all_set ... failed
  % 'zeek -b D:/repos/zeek/testing/btest/.tmp/bifs.all_set/all_set.zeek >out' failed unexpectedly (exit code 1)
  % cat .stderr
  fatal error: can't find base/init-bare.zeek

1 of 1 test failed

The reason here is that it's trying to execute WSL2's bash, but a root user doesn't exist because no Linux distribution is actually installed, so there aren't any users. This is easily fixed by executing wsl.exe --install Ubuntu and wsl.exe --setdefault Ubuntu. At this point, bash.exe from WSL2 works and a root user exists. Unfortunately, this still doesn't fix btest:

$ btest -d bifs.all_set
/bin/bash: line 1: C:/Users/timwo/AppData/Local/Temp/tmpychtyig2.sh: No such file or directory
/bin/bash: line 1: C:/Users/timwo/AppData/Local/Temp/tmp2cmnkk4i.sh: No such file or directory
/bin/bash: line 1: C:/Users/timwo/AppData/Local/Temp/tmpa7mxr6mf.sh: No such file or directory
bifs.all_set ... failed
  % 'zeek -b D:/repos/zeek/testing/btest/.tmp/bifs.all_set/all_set.zeek >out' failed unexpectedly (exit code 1)
  % cat .stderr
  fatal error: can't find base/init-bare.zeek

1 of 1 test failed

Next we run into the problem that calling bash.exe from inside Python executes the first bash encountered on the PATH. On my system, that means bash from WSL2 gets executed. The assumption was that gitbash would be the shell executed and that things like paths would work, but clearly those assumptions are violated.
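
One hedged way around this would be to resolve the shell explicitly rather than relying on the PATH lookup; a minimal Python sketch, where the Git-for-Windows location is a hypothetical example:

import shutil

# Prefer an explicitly configured shell over whatever bash.exe happens
# to come first on PATH (here, the WSL launcher). The Git-for-Windows
# directory below is a hypothetical example location.
bash = shutil.which("bash", path=r"C:\Program Files\Git\bin")
if bash is None:
    bash = shutil.which("bash")  # fall back to the plain PATH lookup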

`btest-diff` is broken on e.g., Alpine

Alpine ships with BusyBox diff, which does not support the recently added option --strip-trailing-cr. This was working previously.

$ cat .stderr
  diff: unrecognized option: strip-trailing-cr
  BusyBox v1.31.1 () multi-call binary.
  
  Usage: diff [-abBdiNqrTstw] [-L LABEL] [-S FILE] [-U LINES] FILE1 FILE2
  
  Compare files line by line and output the differences between them.
  This implementation supports unified diffs only.
  
  	-a	Treat all files as text
  	-b	Ignore changes in the amount of whitespace
  	-B	Ignore changes whose lines are all blank
  	-d	Try hard to find a smaller set of changes
  	-i	Ignore case differences
  	-L	Use LABEL instead of the filename in the unified header
  	-N	Treat absent files as empty
  	-q	Output only whether files differ
  	-r	Recurse
          --no-dereference Don't follow symlinks
  	-S	Start with FILE when comparing directories
  	-T	Make tabs line up by prefixing a tab when necessary
  	-s	Report when two files are the same
  	-t	Expand tabs to spaces in output
  	-U	Output LINES lines of context
  	-w	Ignore all whitespace

A possible workaround for users is to use the diffutils package.
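
A hedged sketch of how btest-diff's driver could probe whether the system diff supports the option before using it (the function name is illustrative, not btest's actual code):

import os
import subprocess
import tempfile

def diff_supports_strip_trailing_cr():
    # Diff an empty file against itself; BusyBox diff exits non-zero
    # with "unrecognized option" instead of reporting no differences.
    with tempfile.NamedTemporaryFile(delete=False) as f:
        path = f.name
    try:
        result = subprocess.run(
            ["diff", "--strip-trailing-cr", path, path],
            capture_output=True,
        )
        return result.returncode == 0
    finally:
        os.unlink(path)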

BTest uses deprecated package `distutils`

We currently use the deprecated package distutils, which is slated for removal in Python 3.12. It might be possible to replace our uses with packaging.requirements, use functionality from semver, or roll our own.

$ btest
/Users/bbannier/.local/bin/btest:11: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives
  import distutils.version
configuration file 'btest.cfg' not found
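
Since the issue mentions rolling our own, here is a minimal hedged sketch of a dotted-version comparison; unlike LooseVersion it only handles plain x.y.z strings:

def parse_version(s):
    # "0.63.1" -> (0, 63, 1); tuples compare element-wise, which is
    # enough for simple MinVersion checks but ignores letter suffixes.
    return tuple(int(part) for part in s.split("."))

assert parse_version("1.1") >= parse_version("0.63")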

BTest hangs waiting for child processes

Since 2a6a88e, over in zeek/spicy we seem to observe more hangs in our alpine-3.12 build (which uses musl as libc). This build was always slow, but now it sometimes seems to hang completely.

I was able to bisect this with the following setup:

  • use branch from zeek/spicy#652
  • build alpine Docker image: cd docker && make build-alpine-3.12
  • inside the Docker container (make run-alpine-3.12), with Docker set up to provide just 4 CPUs:
    • execute the zeek part of the test suite: cd /opt/spicy/src/tests && SPICY_INSTALLATION_DIRECTORY=/opt/spicy ~/btest/btest -a installation -d -j 8 zeek (note -j 8 even though the container has only 4 CPUs)

This executes 25 test jobs (more than ncpus=4 or -j 8), puts considerable strain on the CPU, and has each test forking additional subprocesses. With that I see reliable hangs with the tip of the branch from zeek/spicy#652, which first appear with 2a6a88e. I compared that commit with its precursor, but nothing jumped out yet.

The symptoms are:

  • BTest stops making progress
  • the forked processes stop consuming CPU suggesting they finished
  • gdb indicates that the btest threads are blocking on waitpid
  • running BTest under pdb also leads to a hang, but trying to examine the state by breaking into the debugger with ctrl-c makes the process finish successfully.
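
One hedged way to inspect such a hang without attaching gdb is Python's faulthandler module, which can dump the stacks of all threads on demand:

import faulthandler
import signal

# Dump the Python tracebacks of all threads to stderr whenever the
# process receives SIGUSR1, e.g. via "kill -USR1 <pid>" from another
# shell while btest appears stuck in waitpid.
faulthandler.register(signal.SIGUSR1, all_threads=True)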

Ctrl-C doesn't always kill all python processes

This happens on all platforms but it's especially broken on Windows. Pressing ctrl-c on Windows when running with a -j argument will kill one of the currently-running tests, but will continue to run the others and spawn new ones. Pressing it repeatedly will eventually kill all of the currently-running tests but will then hang the entire process. This leads to having to open Task Manager to manually kill all of the python processes.

%DIR set incorrectly after @TEST-START-NEXT

I think there might be a problem with the %DIR value: when using it in @TEST-EXEC statements, the value seems to differ in test executions after a @TEST-START-NEXT. Since the source directory for such tests is the same, this shouldn't be happening, right?

I'm attaching a demo, see example.zip

$ unzip example.zip
Archive:  example.zip
   creating: example/
   creating: example/tests/
  inflating: example/tests/test.sh
  inflating: example/btest.cfg

$ cd example
$ cat tests/test.sh
# @TEST-EXEC: echo %DIR
# @TEST-START-NEXT

$ btest -A -t
[  0%] tests.test ... ok
[ 50%] tests.test-2 ... ok
all 2 tests successful

$ cat .tmp/tests.test/.stdout
/home/christian/tmp/example/tests

$ cat .tmp/tests.test-2/.stdout
/home/christian/tmp/example

It looks like in the tests following the first one, %DIR is just %(testbase)s, but I didn't dig in. I did verify though that at least in Zeek we currently have no btests using both %DIR and @TEST-START-NEXT....

Dependence on multiprocessing "fork" start-method

btest is structured to only work with the multiprocessing "fork" start-method [1], but Python 3.8 changed the default start-method from "fork" to "spawn" on macOS [2] to avoid transient/intractable issues on macOS 10.13+ [3]. Running btest with Python 3.8's new "spawn" default on macOS emits a run-time error:

RuntimeError: 
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.

I suggest the simplest near-term fix is to explicitly ask for the "fork" start-method, since I've not, up to this point, observed any significant degree of strange/unexplained failures that make me think we actually run into the underlying macOS issue (I guess nothing we're currently doing ends up calling into the ObjC runtime). Longer-term, we could still investigate restructuring to work with other start-methods. Such a restructuring may be required if Windows support became desirable, since Windows uses the "spawn" start-method.

[1] https://docs.python.org/3.8/library/multiprocessing.html#contexts-and-start-methods
[2] https://bugs.python.org/issue33725
[3] https://wefearchange.org/2018/11/forkmacos.rst.html
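
A minimal sketch of the suggested near-term fix, hedged in that this is not necessarily how btest itself would wire it up:

import multiprocessing

if __name__ == "__main__":
    # Must be called once, before any Process or Pool is created;
    # this restores the pre-Python-3.8 default behavior on macOS.
    multiprocessing.set_start_method("fork")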

Requesting the same test multiple times leads to assertion error

I was generating the tests to run from a script and ran into a failure when the same test appeared multiple times, e.g., in the BTest test suite:

$ btest tests.xml tests.xml
Traceback (most recent call last):
  File "/Users/bbannier/.local/bin/btest", line 2815, in <module>
    tests = mergeTestParts(tests)
  File "/Users/bbannier/.local/bin/btest", line 2195, in mergeTestParts
    assert t.part != other.part
AssertionError

It would be nice if BTest could deduplicate the list of tests for users.
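
A hedged one-liner sketch of such a deduplication, preserving the order in which tests were requested (the variable names are placeholders):

# dict.fromkeys() drops repeats while keeping first-occurrence order.
requested = ["tests.xml", "tests.xml"]
tests = list(dict.fromkeys(requested))
assert tests == ["tests.xml"]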

Sphinx features are disabled on Windows

The sphinx features were disabled on Windows as part of #80 due to the amount of work that will be required to fix it. There's also a question about whether those features are used or useful anymore and whether we should just remove them at this point.

Avoid dependency on bash

There's really no good reason btest needs to require a bash (vs. just standard sh), so we should modify the scripts accordingly.

Option '--abort-on-failure' does not ignore known failures

7d34bae introduced the --abort-on-failure option, which aborts processing as soon as the first failing test is encountered. Currently it has no special handling for known failures (tests marked with @TEST-KNOWN-FAILURE) and will abort processing whenever they are encountered, i.e., they are treated as true failures. This seems counterintuitive, since BTest does not count such tests as failures globally: even with such tests failing, the overall status can still be success.

No error/warning for nonexistent alternative

Requesting an alternative name that nothing defines provides no indication of the problem:

$ btest --show-all --alternative=foo,asdf  tests/basic-succeed.test
[  0%] tests.basic-succeed [foo] ... ok
[100%] tests.basic-succeed [asdf] ... ok
all 2 tests successful

Nothing's been defined for the foo or asdf alternatives, but btest seems to have fallen back to running the standard config and reported them as "successful". It seems preferable for this to be an error: otherwise running btest -a typo can fool you into thinking a test suite passes when it actually doesn't.
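
A hedged sketch of the kind of validation this would take, assuming alternatives are declared through configuration sections such as [environment-<name>] (as shown elsewhere on this page); the filter- and substitution- prefixes are assumptions:

import configparser
import sys

def check_alternatives(cfg, requested):
    # Collect every alternative name mentioned by any section, e.g.
    # [environment-foo]; prefixes beyond environment- are assumed here.
    known = set()
    for section in cfg.sections():
        for prefix in ("environment-", "filter-", "substitution-"):
            if section.startswith(prefix):
                known.add(section[len(prefix):])
    for alt in requested:
        if alt != "-" and alt not in known:
            sys.exit("btest: unknown alternative '%s'" % alt)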

Support for multiple btest baselines

I'd like to be able to use btest's --alternative option to test Zeek options that alter Zeek's behavior on only a small number of tests, while making sure that on the bulk of the tests the options don't result in any changes. While there's already a notion in btest of alternative baselines (via BTEST_TEST_BASE), that changes the entire baseline. What I want instead is essentially a series of baseline directories that are searched first-to-last. With that, I can then put updates for the given Zeek option in Baseline.my_alternative/ and confine what goes in there to just the differences, still picking up the bulk of the baseline from the default Baseline/. I'd also want btest -u to update into the alternative Baseline rather than the original.
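
A hedged sketch of the first-to-last search this describes, with directory names taken from the example in the request:

import os

def find_baseline(test_name, dirs=("Baseline.my_alternative", "Baseline")):
    # Return the first directory carrying a baseline for this test, so
    # an alternative only needs to store the tests whose output differs.
    for d in dirs:
        path = os.path.join(d, test_name)
        if os.path.exists(path):
            return path
    return None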

Initial baseline diagnostics output with `-u` not canonified

I've run into this a few times now and continue to be confused about whether it's my canonifier not working right or btest not showing me what the baseline will actually look like. IMO the UX would be better if the initial diagnostics output showed the canonified baseline.

Workflow:

  • Write a new test with a canonifier
  • run with btest -d -u new.test
  • The first update shown with "diagnostics" presents the non-canonified version:
$ btest -d scripts/base/frameworks/cluster/leftover-log-rotation-multi-logger.zeek -u
[  0%] scripts.base.frameworks.cluster.leftover-log-rotation-multi-logger ... failed
>> Type 'c' to continue, 'd' to see diagnostics, 'u' to update baseline, and 'a' to abort
(pressing d)
== File ===============================
./log-queue/conn__2023-06-13-08-16-52__2023-06-13-08-16-52__log_suffix=logger-2__.log
./log-queue/dns__2023-06-13-08-16-52__2023-06-13-08-16-52__log_suffix=logger-2__.log
leftover conn log
leftover dns log
=======================================
(pressing q)
>> Type 'c' to continue, 'd' to see diagnostics, 'u' to update baseline, and 'a' to abort.
(pressing u for update)
>> Updating baseline ...
all 1 tests successful
  • Checking the just-stored baseline shows it was actually canonified, even if the diagnostics output doesn't show it.
$ cat Baseline/scripts.base.frameworks.cluster.leftover-log-rotation-multi-logger/out
### BTest baseline data generated by btest-diff. Do not edit. Use "btest -U/-u" to update. Requires BTest >= 0.63.
./log-queue/conn__XXXX-XX-XX-XX-XX-XX__XXXX-XX-XX-XX-XX-XX__log_suffix=logger-2__.log
./log-queue/dns__XXXX-XX-XX-XX-XX-XX__XXXX-XX-XX-XX-XX-XX__log_suffix=logger-2__.log
leftover conn log
leftover dns log

Interestingly, updating an existing baseline shows the canonified diff. It's probably a small tweak somewhere.

Allow alternatives to unset an environment variable.

One can already override an existing environment variable (FOO=bar), but one cannot remove it; the best one can do is set it to an empty value (FOO=).

Not sure on syntax, maybe:

[environment-myalternative]
-FOO=

?
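
A hedged sketch of how the proposed leading-"-" syntax could be applied when building a test's environment (the key handling is an assumption; note that configparser lowercases keys by default):

def apply_alternative(env, items):
    # items: (key, value) pairs from an [environment-<alternative>]
    # section; a leading "-" removes the variable instead of setting it.
    for key, value in items:
        if key.startswith("-"):
            env.pop(key[1:], None)
        else:
            env[key] = value

env = {"FOO": "bar"}
apply_alternative(env, [("-FOO", "")])
assert "FOO" not in env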

Alternative '-' doesn't work in a list

Use of - to refer to the standard config doesn't seem to work when used in a list of alternatives like the docs imply it should:

$ btest -A -a inline,AST-dup,- language/while.zeek
[  0%] language.while [inline] ... ok
[100%] language.while [AST-dup] ... ok
all 2 tests successful
$ btest -A -a - language/while.zeek
[  0%] language.while ... ok
all 1 tests successful

Seems tests/alternatives-environment.test is supposed to cover this, but either I misunderstand how btest -a foo,-,foo2 is supposed to work or the baseline for that is wrong.

Add `@TEST-PORT-RANGE`

Having something like @TEST-PORT-RANGE <env> <num> to get a range of <num> ports would be handy.
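
A hypothetical usage, assuming the proposed directive would reserve <num> consecutive ports and export the first one through <env> (the test command itself is made up):

# @TEST-PORT-RANGE PORT_BASE 4
# @TEST-EXEC: run-cluster-test --first-port $PORT_BASE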

Support ?= in [environment] section

In this PR I've used the following pattern to allow users to override an environment variable externally:

[environment]
HILTI_JIT_PARALLELISM=`bash -c 'echo ${HILTI_JIT_PARALLELISM:-1}'`

Would it make sense to support HILTI_JIT_PARALLELISM?=1, similar to Makefile style, for "tweakable" environment variables?
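
A hedged Python sketch of the proposed ?= semantics, setting the variable only when the caller's environment doesn't already define it (the key parsing is an assumption about how the configuration line would be split):

import os

def apply_env_entry(env, key, value):
    # "NAME?=value" keeps an externally set NAME; plain "NAME=value"
    # always overrides. The trailing "?" ends up on the key when the
    # configuration line is split on "=".
    if key.endswith("?"):
        name = key[:-1]
        env[name] = os.environ.get(name, value)
    else:
        env[key] = value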

btest fails if an environment variable exists twice with different case, e.g. http_proxy and HTTP_PROXY

How to reproduce:

(cd /tmp && echo > btest.cfg && http_proxy=qqq HTTP_PROXY=qqq btest && rm btest.cfg)

This is followed by the error configparser.DuplicateOptionError: While reading from '<dict>': option 'http_proxy' in section 'DEFAULT' already exists

Environment:

Python 3.8.10
Ubuntu 20.04.1 LTS

Why it is a bug:

There are cases in corporate environments where both lower-case and upper-case proxy variables are set and have equal values.

Here is a good, comprehensive summary of the case of the proxy variables: https://about.gitlab.com/blog/2021/01/27/we-need-to-talk-no-proxy/
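
The collision comes from configparser's default optionxform, which lowercases option names before they are stored; a hedged sketch of making the parser case-sensitive so http_proxy and HTTP_PROXY stay distinct:

import configparser

cfg = configparser.ConfigParser()
# The default optionxform lowercases keys, so feeding both http_proxy
# and HTTP_PROXY into the parser raises DuplicateOptionError; the
# identity transform keeps the two names distinct.
cfg.optionxform = str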

CTRL-C of an unparallelized btest run doesn't work

On a "normal", unparallelized btest run (one without -j) nothing happens when I hit CTRL-C. I'm attaching a really small example to demonstrate. It contains a single test that just sleeps for 10 seconds, and that test just keeps going:

$ unzip testsuite.zip
$ cd testsuite
$ btest
[  0%] tests.test1 ...^C^C^C^C^C^C^C^C^C

Eventually the 10 seconds are up and I get

all 1 tests successful

I see this also with the Zeek testsuite, for example. The root cause seems to be that we block SIGINT in TestManager.threadRun(), and in unparallelized operation threadRun() doesn't execute in a multiprocessing child process, but the main one. I looked at a few options for fixing this, and everything I try ends up being ugly and adding a bunch of complexity. Essentially, once I remove the ignoring of SIGINT, handling KeyboardInterrupt either happens "too low" in the stack, meaning we just skip the current test and continue with the next (I'm not seeing a clean way to signal exit to the manager, other than abuse of Options.abort_on_failure?) or "too high", meaning multiprocessing stuff doesn't get cleaned up properly (I'm seeing the subprocess created in runSubprocess() continuing, even though its child properly exits).

Now ... the good news is that when I run btest with -j 1, i.e. not parallelized but multi-processed, everything works fine. CTRL-C makes btest exit cleanly (I get the cursor back, a correct summary message regarding tests that have completed at this point, the various processes/threads clean themselves up, etc). So I am wondering whether instead of contorting ourselves to support unparallelized operation, we should just default to j = 1 instead of j = 0, and remove a bit of code. Or perhaps I'm just not seeing the right fix.
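
For context, a hedged sketch of the usual division of labor (not btest's actual code): workers ignore SIGINT, and only the parent reacts to CTRL-C:

import multiprocessing
import signal

def worker_init():
    # Workers ignore SIGINT so that CTRL-C raises KeyboardInterrupt only
    # in the parent, which can then shut the pool down in an orderly way.
    signal.signal(signal.SIGINT, signal.SIG_IGN)

if __name__ == "__main__":
    pool = multiprocessing.Pool(4, initializer=worker_init)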

Also, I believe @rsmmr observed that CTRL-C actually does work for him, and I'm now quite curious why. :-)

I'm on Linux, Python 3.9, fwiw.

testsuite.zip

Revisiting finalizers

I'd still like to have a way to run finalizers reliably after any test, failing or succeeding. My use case would not be to check the environment for unexpected properties, but to shut down a setup created by the initializer.

We previously discussed this in #57 and pulled the plug at the time. I just looked at this for a bit and here's my take:

  • I think it would be acceptable to change the semantics of Finalizer (and PartFinalizer!) to behave in this way, instead of adding a new option. I've looked over our Zeek projects, Corelight, and the Zeek package source and I've only found actual finalizer use in three public packages, all containing the same single script, checking the status of the environment after a test. Running this after a failing test wouldn't change the overall result.

  • We can introduce a deprecation period to make this palatable for current finalizer users: for now, we simply flag the fact that finalizer semantics will change in an upcoming version when the test parser comes upon finalizers. (This output won't affect test outcomes via baselines etc, since it's not part of the command execution.)

  • I think it is fine to keep the semantics of the current finalizers, i.e., the fact that they can fail the test if they want to. It seems reasonable functionality to me, and I don't have to use it if I just want to run some activity. I just need to ensure I return success in any case.

  • We could start tracking the failure state of the current test via an environment variable, alongside the other TEST_* ones. TEST_FAILED could become 1, say, whenever the failures counter becomes non-zero. That way, you can build finalizers that are sensitive to the outcome of the previous test sequence.

As an aside, I think we should release 1.0.0 in the not too distant future and adopt semantic versioning. That could then be the time when we switch the finalizers to the new behavior.

@awelzel @deejgregor @bbannier @rsmmr (whoa :-) ... thoughts?

Running -d with non-utf8 encoded characters in output hangs process

Non-UTF-8 encoded byte sequences in the .diag / stderr output hang btest when run with -d.

We ran into this in the following Cirrus CI build, where the Alpine job hung due to Zeek outputting non-UTF8 encoded bytes on stderr.
https://cirrus-ci.com/task/6325848790269952?logs=test#L24

The test below triggers the hang on Debian and also on Alpine. I'm a bit uncertain why the Zeek CI run above only hung on Alpine, but it might have been a baseline difference that exists only there (and the difference then being displayed).

There seem to be two things here: we should probably ignore decoding errors via some errors= strategy, and the fact that the process hangs upon an unhandled exception is a separate, scary thing.

$ cat x.test
@TEST-EXEC: bash -c 'echo -e "\xed\xee\xef" >&2 && false'

$ btest -d x.test 
[  0%] x ... failed
  % 'bash -c 'echo -e "\xed\xee\xef" >&2 && false'' failed unexpectedly (exit code 1)
  % cat .stderr
Process #1:
Traceback (most recent call last):
  File "/usr/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/usr/lib/python3.9/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/awelzel/corelight-oss/zeek/auxil/btest/btest", line 500, in <lambda>
    name="#%d" % (i + 1), target=lambda: self.threadRun(i, mgr_data)
  File "/home/awelzel/corelight-oss/zeek/auxil/btest/btest", line 578, in threadRun
    t.run(self)
  File "/home/awelzel/corelight-oss/zeek/auxil/btest/btest", line 1337, in run
    run_cmdseq(seq)
  File "/home/awelzel/corelight-oss/zeek/auxil/btest/btest", line 1285, in run_cmdseq
    need_teardown |= run_cmdseq(cmd)
  File "/home/awelzel/corelight-oss/zeek/auxil/btest/btest", line 1307, in run_cmdseq
    self.mgr.testFailed(self)
  File "/home/awelzel/corelight-oss/zeek/auxil/btest/btest", line 765, in testFailed
    self._output_handler.testFailed(test, msg)
  File "/home/awelzel/corelight-oss/zeek/auxil/btest/btest", line 1716, in testFailed
    h.testFailed(test, msg)
  File "/home/awelzel/corelight-oss/zeek/auxil/btest/btest", line 2058, in testFailed
    self.showDiag(test)
  File "/home/awelzel/corelight-oss/zeek/auxil/btest/btest", line 2032, in showDiag
    for line in open(f):
  File "/usr/lib/python3.9/codecs.py", line 322, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xed in position 0: invalid continuation byte

< ... hangs ... >
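
A hedged sketch of the errors= strategy mentioned above, applied at the spot the traceback points to in showDiag() (the path is a placeholder):

# Replacing undecodable bytes keeps the diagnostics display from
# raising UnicodeDecodeError on non-UTF-8 output.
path = ".diag"  # placeholder for the file showDiag() opens
with open(path, errors="replace") as diag:
    for line in diag:
        print(line, end="")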

Consider providing a flag to abort run once a single test has failed

When bisecting, for example, it might be more interesting to check whether the test suite passes at all than to see which particular tests have failed. Googletest provides a flag --gtest_break_on_failure which provides exactly that: following tests are skipped, and the test runner exits with a non-zero return value.

It would be great if btest would provide similar functionality.

Looking at the current implementation, the main difficulty in implementing this would be to ensure orderly shutdown of workers in a parallel run.

Possible race condition with -j and lots of quick tests

I'm using btest 1.1 (on macOS Ventura) in a directory with a lot (~50) of quick-executing tests, each reading a PCAP. Most of the time they succeed, but every few runs there's a failure like this:

mytests.FOO ... failed
  % 'zeek -r $TRACES/myanonpcap.pcap $PACKAGE /myfilesystem/testing/.tmp/mytests.FOO failed unexpectedly (exit code -6)
  % cat .stderr
  libc++abi: terminating due to uncaught exception of type ghc::filesystem::filesystem_error: No such file or directory: '/myfilesystem/testing/.tmp/mytests.BAR'

Note that here test FOO is failing, but the actual .stderr complaint is about a different test BAR. If I use -t to keep all the temporary directories, the BAR directory is in fact present, at least when btest finishes.

Across repeated runs, both FOO and BAR vary, so this doesn't appear to be an issue with the actual tests. In addition, if I run serially, or with a low -j 5 setting, I'm unable to make the problem occur. Both of these point to some sort of race condition when creating the .tmp/ subdirectories.

Let me know if there's more I can provide to help diagnose this.
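
If the cause is indeed a creation race on the per-test .tmp/ subdirectories, a hedged sketch of the usual fix is to make the creation idempotent:

import os

# exist_ok=True tolerates a directory that another worker created
# between our existence check and our mkdir call.
os.makedirs(".tmp/mytests.FOO", exist_ok=True)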

Deprecation warnings

I've started to see these deprecation warnings:

/opt/homebrew/bin/btest:2298: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
  v_now = distutils.version.LooseVersion(VERSION)
/opt/homebrew/bin/btest:2299: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
  v_min = distutils.version.LooseVersion(getOption("MinVersion", None))

This is with Python 3.9 from Homebrew.
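
Following the warning's own suggestion, a hedged sketch of what the two flagged lines would become with packaging (placeholder version strings stand in for VERSION and the MinVersion option):

from packaging.version import Version

# Drop-in replacement for distutils.version.LooseVersion.
v_now = Version("1.1")   # VERSION
v_min = Version("0.63")  # getOption("MinVersion", None)
assert v_now >= v_min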

Output for `-A` is garbled if used with `-j`

If one uses the -A option together with -j one ends up with garbled output, e.g.,

$ btest -A tests/start-*
[  0%] tests.start-file ... ok
[ 25%] tests.start-next ... ok
[ 50%] tests.start-next-dir ... ok
[ 75%] tests.start-next-naming ... ok
all 4 tests successful

$ btest -A tests/start-* -j
[  0%] tests.start-file ... [  0%] tests.start-next ... [  0%] tests.start-next-dir ... [  0%] tests.start-next-naming ... ok
ok
ok
ok
all 4 tests successful
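
A hedged sketch of one way to avoid the interleaving: emit each whole status line under a lock shared with the worker processes:

import multiprocessing

lock = multiprocessing.Lock()

def report(line):
    # Print a complete status line atomically so parallel workers can't
    # split "[ NN%] name ... ok" across each other's output.
    with lock:
        print(line, flush=True)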
