aiiie / cram
Functional tests for command line applications
License: GNU General Public License v2.0
https://bitheap.org/cram/ mentions 0.6 as the latest version, with a link to the 0.6 tarball for downloading. Want to update it?
Hi nice cram people,
I seem to have written a different, similar tool, and I think probably I should contribute to cram instead. You can find my tool here: https://github.com/sandstorm-io/sandstorm/tree/master/installer-tests
One feature my tool has is that I can provide text input to an interactive process. You can see an example here: the $[type]
directive means "type the empty string, then press Enter".
So I would like it if cram supported providing text input to programs. I can possibly work on implementing this if it's something that y'all would merge.
Let me know what you think!
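For comparison, the closest a cram test can get today is piping scripted input into the program under test; a minimal sketch (plain sh, with `head -n 1` standing in for a hypothetical interactive program):

```shell
# Sketch of the workaround available today: scripted input is piped in
# rather than "typed". Here `head -n 1` stands in for an interactive
# program that reads one line from the user.
printf 'yes\n' | head -n 1
```

A real $[type]-style directive would still be nicer for programs that read from the terminal rather than stdin.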
I was just wondering if this project is still maintained? The last commit is from over 4 years ago and there are issues and PRs that have not had any attention for a couple of years.
Build started
git clone -q https://github.com/brodie/cram.git C:\projects\cram
git fetch -q origin +refs/pull/16/merge:
git checkout -qf FETCH_HEAD
Specify a project or solution file. The directory does not contain a project or solution file.
Hi. The problem is the use of distutils.
See this question http://stackoverflow.com/q/1829524/941020 and this answer http://stackoverflow.com/a/1936850/941020
To fix this issue you need to use setuptools, as recommended in the Packaging Tool Recommendations.
Add an option to exit on the first failure,
so that cram --fail-fast ./tests
exits on the first failing test case and one can focus on just that test's output.
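Until such a flag exists, similar behavior can be approximated from the shell; a minimal sketch, where `run_test` is a hypothetical stand-in for running cram on a single file:

```shell
# Hypothetical stand-in for `cram <file>`: succeeds for every test
# except "b".
run_test() { [ "$1" != "b" ]; }

# Stop at the first failing "test file", mimicking a --fail-fast flag.
for t in a b c; do
  if run_test "$t"; then
    echo "passed: $t"
  else
    echo "first failure: $t"
    break
  fi
done
```

With real files this would be `for t in tests/*.t; do cram "$t" || break; done`, at the cost of one cram invocation (and summary line) per file.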
When using this in CI jobs with a lot of tests, it would be good to have an integration that reports test results in a way the CI tool understands (e.g. Jenkins understands JUnit, or has a TAP plugin).
I personally consider TAP output more human-readable than junit.
sometimes cram just crashes with
Traceback (most recent call last):
File "/home/yac/.local/bin/cram", line 7, in <module>
sys.exit(cram.main(sys.argv[1:]))
File "/home/yac/.local/lib/python3.6/site-packages/cram/_main.py", line 197, in main
refout, postout, diff = test()
File "/home/yac/.local/lib/python3.6/site-packages/cram/_cli.py", line 90, in testwrapper
refout, postout, diff = test()
File "/home/yac/.local/lib/python3.6/site-packages/cram/_run.py", line 73, in test
testname=path)
File "/home/yac/.local/lib/python3.6/site-packages/cram/_test.py", line 230, in testfile
cleanenv=cleanenv, debug=debug)
File "/home/yac/.local/lib/python3.6/site-packages/cram/_test.py", line 169, in test
ret = int(cmd.split()[1])
ValueError: invalid literal for int() with base 10: b'$?'
I was able to find the problem with my test by patching cram/_test.py to print cmd and out on line 157, but I was unable to produce an SSCCE.
import sys
import cram
args = ['-E', '--verbose', 'src/cmdlinetest\\test_help.t', 'src/cmdlinetest\\test_no_build.py.t', '--shell', 'cmd.exe']
sys.exit(cram.main(args))
Given a file:
Simple commands:
$ echo foo
foo
$ printf 'bar\nbaz\n' | cat
bar
baz
Multi-line command:
$ foo() {
> echo bar
> }
$ foo
bar
If I run it:
$ cram test_cases.t -v
test_cases.t: passed
# Ran 1 tests, 0 skipped, 0 failed.
This confused me, because there are two separate tests in this file. Cram's test count seems to only be the number of files executed:
$ cram test_cases.t exit.t
..
# Ran 2 tests, 0 skipped, 0 failed.
Could cram report the number of test cases too? Something like this:
$ cram test_cases.t
.
# Ran 1 test (2 specs), 0 skipped, 0 failed.
yac@remy % cat foo.t
$ for i in {0..20}; do echo "y"; done
--------------------------------------------------------------------------------
~
yac@remy % cram foo.t | head -n 1
!
Traceback (most recent call last):
File "/usr/bin/cram", line 7, in <module>
sys.exit(cram.main(sys.argv[1:]))
File "/usr/lib/python3.6/site-packages/cram/_main.py", line 195, in main
for path, test in tests:
File "/usr/lib/python3.6/site-packages/cram/_cli.py", line 132, in runcli
_log('\n', None, verbose)
File "/usr/lib/python3.6/site-packages/cram/_cli.py", line 50, in _log
sys.stdout.flush()
BrokenPipeError: [Errno 32] Broken pipe
Exception ignored in: <_io.TextIOWrapper name='<stdout>' mode='w' encoding='UTF-8'>
BrokenPipeError: [Errno 32] Broken pipe
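The crash is ordinary SIGPIPE behavior: once `head` exits, cram's next write to stdout fails. Until cram handles BrokenPipeError itself, a caller-side workaround is to drain the rest of the pipe; a minimal sketch:

```shell
# `head` closes the pipe after one line; the trailing `cat` drains the
# remainder so the writer never sees EPIPE.
seq 1 100 | { head -n 1; cat > /dev/null; }
```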
Is it possible to differentiate between stdout and stderr? For example, to check that an error message was printed to stderr instead of stdout?
cc @snordhausen
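As far as I can tell, cram runs each command with stderr merged into stdout, so the streams can only be separated with redirections inside the test command itself; a minimal sketch (plain sh):

```shell
# With stderr discarded, only the stdout line survives; swap the
# redirections (1>/dev/null) to test stderr instead.
sh -c 'echo out; echo err >&2' 2>/dev/null
```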
cram sometimes requires trailing whitespace, which causes unfortunate conflicts with trees or editors set up to strip it.
The main case I notice is blank lines occurring in the middle of output blocks: cram wants them to carry the block's leading whitespace
so that they're indented to the same point. It seems like that requirement could be dropped without much loss of sensitivity.
We're using cram to test nbstripout on multiple platforms. The test file has UNIX line endings, which leads to our tests failing on Windows due to differences in line endings.
Would be great if there was a way to tell cram to ignore these!
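Until cram grows such an option, a common workaround is to normalize line endings inside the test command itself; a minimal sketch:

```shell
# Strip carriage returns so CRLF output compares equal to the LF
# expectations in the .t file.
printf 'hello\r\n' | tr -d '\r'
```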
hi (and many thanks for cram!). my .t file looks like
some backref errors
$ cd ${TESTDIR}/../.. && ./sedcsv -f <(echo 's/(this)/\2/') < /dev/null
$ cd ${TESTDIR}/../.. && ./sedcsv -f <(echo 's/(this)/\g{2}/') < /dev/null
$ cd ${TESTDIR}/../.. && ./sedcsv -f <(echo 's/(this)/\g{1/') < /dev/null
and, my .t.err file is
some backref errors
$ cd ${TESTDIR}/../.. && ./sedcsv -f <(echo 's/(this)/\2/') < /dev/null
/dev/fd/63:1: backreference "\2" too high (1 max)
[3]
$ cd ${TESTDIR}/../.. && ./sedcsv -f <(echo 's/(this)/\g{2}/') < /dev/null
/dev/fd/63:1: backreference "\2" too high (1 max)
[3]
$ cd ${TESTDIR}/../.. && ./sedcsv -f <(echo 's/(this)/\g{1/') < /dev/null
/dev/fd/63:1: unable to find backreference closing ("}") in "\g{1"
[3]
the cram run looks like
bash apollo2 (main): {50016} cram -i tests/cram/sed-backref.t
!
--- tests/cram/sed-backref.t
+++ tests/cram/sed-backref.t.err
@@ -1,7 +1,13 @@
some backref errors
$ cd ${TESTDIR}/../.. && ./sedcsv -f <(echo 's/(this)/\2/') < /dev/null
+ /dev/fd/63:1: backreference "\2" too high (1 max)
+ [3]
$ cd ${TESTDIR}/../.. && ./sedcsv -f <(echo 's/(this)/\g{2}/') < /dev/null
+ /dev/fd/63:1: backreference "\2" too high (1 max)
+ [3]
$ cd ${TESTDIR}/../.. && ./sedcsv -f <(echo 's/(this)/\g{1/') < /dev/null
+ /dev/fd/63:1: unable to find backreference closing ("}") in "\g{1"
+ [3]
Accept this change? [yN] y
patching file tests/cram/sed-backref.t
Hunk #1 FAILED at 1.
1 out of 1 hunk FAILED -- saving rejects to file tests/cram/sed-backref.t.rej
tests/cram/sed-backref.t: merge failed
# Ran 1 tests, 0 skipped, 1 failed.
I'm on an up-to-date Arch Linux; patch(1) is
bash apollo2 (master): {49628} patch --version
GNU patch 2.7.6
Copyright (C) 2003, 2009-2012 Free Software Foundation, Inc.
Copyright (C) 1988 Larry Wall
cheers.
I would like to be able to write this code:
$ command
Good morning (optional)
Hello.
I expect the command being tested to output either "Good morning\nHello.\n"
or "Hello.\n". I don't see how this is currently possible, but an (optional)
keyword would be very useful.
This is related to #14 .
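Until an (optional) keyword exists, one workaround is to filter the optional line out before matching, so both output variants normalize to the same text; a minimal sketch:

```shell
# Both "Good morning\nHello.\n" and "Hello.\n" reduce to "Hello."
# after the optional greeting is filtered out.
printf 'Good morning\nHello.\n' | grep -v '^Good morning$'
```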
I need to test for a line of command output that starts with "$ ". But this is exactly the pattern that cram uses to run commands in a shell. How do I do it?
Perhaps cram needs to be taught to make an exception for lines starting with two spaces then "$ " and ending with something like "(nocmd)", so it treats them as command output as well rather than trying to run them as shell commands?
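A workaround that needs no new syntax is to transform the output so that no line begins with "$ ", and write the expectations against the transformed form; a minimal sketch:

```shell
# Prefix every output line so "$ " can never be mistaken for a command.
printf '$ not-a-command\n' | sed 's/^/| /'
```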
So, a simple little test case so far, tests/udns.t:
#!cram
$ udnsd -d -b 127.0.0.1:5300 --dbhost $(docker-machine ip dev) --pidfile $TEMP/udnsd.pid
[0]
$ kill $(cat $TEMP/udnsd.pid)
[0]
I'm trying to use cram to write a bunch of integration tests for udns, which initially require me to spin up a Redis server and the udnsd daemon, then run a bunch of udnsc and dig commands verifying the expected output.
zsh» cram test.t
!
--- test.t
+++ test.t.err
@@ -1,1 +1,2 @@
$ echo hallö
+ hall\xc3\xb6 (esc)
# Ran 1 tests, 0 skipped, 1 failed.
zsh» cat test.t
$ echo hallö
I just want to make a note that currently CI tests are failing because AppVeyor needs to be configured by adding an appveyor.yml config file.
It'd be great if you could release v0.8 properly to PyPI so that it can be installed via pip and conda-forge, among others, can bump the version. Maybe also make a GitHub release just for completeness.
Thanks!
One of our Debian packages uses cram tests (v0.6) which fail on some architectures and pass on others. Where the tests fail, I see no difference between the output and input. See, for example, the build log for the package on 32-bit x86 at https://buildd.debian.org/status/fetch.php?pkg=python-pbh5tools&arch=i386&ver=0.8.0%2Bdfsg-2&stamp=1447588876 where failures look like:
## Selection
$ cmph5tools.py select --groupBy Barcode \
> --where "(Barcode == 'F_42--R_42') | (Barcode == 'F_10--R_10')" $INCMP
$ cmph5tools.py merge --outFile merged.cmp.h5 F_42--R_42.cmp.h5 F_10--R_10.cmp.h5
$ cmph5tools.py stats --what "Count(Reference)" --groupBy Barcode merged.cmp.h5
- Group Count(Reference)
- F_10--R_10 62
- F_42--R_42 76
+ Group Count(Reference)
+ F_10--R_10 62
+ F_42--R_42 76
The tests pass just fine on 64-bit x86. I'm not sure if there's some whitespace problem or if there's a way to ignore that. For now, I have to ignore the cram test results during the build for this package because of these spurious failures, but I think this should be sorted out at some point.
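When two lines look identical in a diff, invisible whitespace is the usual culprit; GNU `cat -A` makes it visible (this assumes GNU coreutils, as on the Debian buildds):

```shell
# With GNU cat -A, a trailing space shows up before the "$" end-of-line
# marker, and a tab shows up as "^I".
printf 'a \n' | cat -A
```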
Many thanks and regards
I have a situation where the help documentation for an item depends on whether or not a particular plugin is installed:
available commands:
COMMAND SECTION
sub-command description for subcommand
other-sub-command description for other subcommand
OPTIONAL COMMAND SECTION
sub-command description for subcommand
other-sub-command description for other subcommand
OTHER COMMAND SECTION
sub-command description for subcommand
other-sub-command description for other subcommand
I originally intended to capture that optional section with a multi-line regex, but I can see that the standard (re) rule doesn't work for that. Is there any current facility to support this?
If not, I would be happy to introduce a patch provided we can agree on a useful syntax.
Hi,
as you can read in the Debian bug report, the test suite has been failing for some time.
COVERAGE=python-coverage PYTHON=python PYTHONPATH=`pwd` scripts/cram \
tests
..s...ss...
# Ran 11 tests, 3 skipped, 0 failed.
python-coverage report --fail-under=100
Name Stmts Miss Cover
---------------------------------------
cram/__init__.py 3 0 100%
cram/__main__.py 6 6 0%
cram/_cli.py 74 0 100%
cram/_diff.py 89 0 100%
cram/_encoding.py 67 32 52%
cram/_main.py 135 0 100%
cram/_process.py 14 0 100%
cram/_run.py 40 0 100%
cram/_test.py 104 0 100%
cram/_xunit.py 66 0 100%
---------------------------------------
TOTAL 598 38 94%
make[2]: *** [Makefile:35: test] Error 2
I have relaxed the fail-under parameter but now there is another failure which you can read in the build log (see at the end):
COVERAGE=python-coverage PYTHON=python PYTHONPATH=`pwd` scripts/cram \
tests
..s..!
--- tests/interactive.t
+++ tests/interactive.t.err
@@ -277,11 +277,22 @@
\d (re)
Accept this change? [yN] y
patch failed
- examples/fail.t: merge failed
-
- # Ran 1 tests, 0 skipped, 1 failed.
- [1]
- $ md5 examples/fail.t examples/fail.t.err
- .*\b0f598c2b7b8ca5bcb8880e492ff6b452\b.* (re)
- .*\b7a23dfa85773c77648f619ad0f9df554\b.* (re)
+ Traceback (most recent call last):
+ File "/<<PKGBUILDDIR>>/tests/../scripts/cram", line 7, in <module>
+ sys.exit(cram.main(sys.argv[1:]))
+ File "/<<PKGBUILDDIR>>/cram/_main.py", line 197, in main
+ refout, postout, diff = test()
+ File "/<<PKGBUILDDIR>>/cram/_cli.py", line 121, in testwrapper
+ if _patch(patchcmd, diff):
+ File "/<<PKGBUILDDIR>>/cram/_cli.py", line 54, in _patch
+ out, retcode = execute([cmd, '-p0'], stdin=b('').join(diff))
+ File "/<<PKGBUILDDIR>>/cram/_process.py", line 53, in execute
+ out, err = p.communicate(stdin)
+ File "/usr/lib/python2.7/subprocess.py", line 473, in communicate
+ self.stdin.close()
+ IOError: [Errno 32] Broken pipe
+ [1]
+ $ md5 examples/fail.t examples/fail.t.err
+ 0f598c2b7b8ca5bcb8880e492ff6b452 examples/fail.t
+ 7a23dfa85773c77648f619ad0f9df554 examples/fail.t.err
$ rm patch examples/fail.t.err
ss...
# Ran 11 tests, 3 skipped, 1 failed.
make[2]: *** [Makefile:34: test] Error 1
Could you please have a look?
Kind regards, Andreas.
This didn't work, but maybe I set something up wrong.
$ cat test/here-doc.t
$ ls -a
.
..
$ cat >> config <<-EOF
[section]
name=value
EOF
$ cat config
[section]
name=value
$ cram -i test/here-doc.t
!
--- test/here-doc.t
+++ test/here-doc.t.err
@@ -4,7 +4,4 @@
$ cat >> config <<-EOF
[section]
name=value
- EOF
$ cat config
- [section]
- name=value
Accept this change? [yN]
I have version 0.7 installed via pip
$ cram --version
Cram CLI testing framework (version 0.7)
Copyright (C) 2010-2016 Brodie Rao <[email protected]> and others
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
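I believe the here-doc fails because cram treats only lines starting with "> " as command continuations, so the here-doc body lines here were parsed as expected output instead. A sketch of the fix, with the same here-doc also run directly in sh:

```shell
# In a .t file, the here-doc body needs "> " continuation markers:
#   $ cat >> config <<-EOF
#   > [section]
#   > name=value
#   > EOF
# Run directly in sh, the same here-doc writes the file as expected
# (a temp file is used here to keep the sketch self-contained):
cfg=$(mktemp)
cat >> "$cfg" <<-EOF
[section]
name=value
EOF
cat "$cfg"
rm -f "$cfg"
```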
We're using cram quite successfully for a collection of command line tools concerned with processing geospatial data. In some cases we match numerical data such as XYZ position, the distance between two locations, or the size of files written. I've started looking around for tools that might help in the case that we'd like to tolerate some relative or absolute variation without failing the test.
http://www.nongnu.org/numdiff/
numdiff needs a reference input file and seems to apply the same criteria to every number in the output.
http://www.math.utah.edu/~beebe/software/ndiff/
ndiff also needs a pair of input files
It seems to me that cram might support something similar to the (re) suffix for the purpose of testing numerical aspects of the matched output. Does anyone have a suitable solution for this kind of situation?
As an example, say we are monitoring a web service. We fetch a page and check HTTP headers, including the size. We know the size will vary, but if it's within some sane range, we ought to consider it a positive test result.
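One pattern that works with cram as-is is to do the numeric comparison inside the test command and emit a fixed pass/fail line that cram can match exactly; a minimal sketch with a hypothetical measured value:

```shell
# Hypothetical measured value (e.g. a Content-Length header).
size=1042
# Assert a tolerance range in the shell and print a stable line that
# a cram expectation can match verbatim.
if [ "$size" -ge 1000 ] && [ "$size" -le 1100 ]; then
  echo "size within expected range"
else
  echo "size out of range: $size"
fi
```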
The README links to a binary for a 0.8 release, but there is no tag in the repo for 0.8.
hi, I have not given this much thought yet, but it would probably be nice to have simple shell expansion in the test case.
For example, when testing a command based on git, I have a test case like
$ . ${TESTDIR}/setup
$ my-cmd
Initialized empty Git repository in /tmp/cramtests*/git-wc/.git/ (glob)
while with the available expansion I could have
$ . ${TESTDIR}/setup
$ my-cmd
Initialized empty Git repository in ${GIT_WORK_TREE}/.git/
or possibly
$ . ${TESTDIR}/setup
$ my-cmd
Initialized empty Git repository in ${GIT_WORK_TREE}/.git/ (expand)
note, in this case GIT_WORK_TREE is exported by the ${TESTDIR}/setup command as an input variable for my-cmd
In general I expect this to be helpful wherever the command output is based on an env variable, and supporting only the most basic expansion like ${VAR} (no ${VAR:-foo} etc.) would go a long way.
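Until something like (expand) exists, one workaround is to rewrite the variable part of the output back into ${VAR} form before cram compares it; a minimal sketch with a hypothetical GIT_WORK_TREE value:

```shell
# Hypothetical value that would normally come from ${TESTDIR}/setup.
GIT_WORK_TREE=/tmp/cramtests-abc123/git-wc
# Substitute the literal path back to its variable name, so the
# expected output line is stable across runs.
echo "Initialized empty Git repository in $GIT_WORK_TREE/.git/" \
  | sed "s|$GIT_WORK_TREE|\${GIT_WORK_TREE}|"
```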