hypothesisworks / hypothesis

Hypothesis is a powerful, flexible, and easy to use library for property-based testing.

Home Page: https://hypothesis.works

License: Other

Shell 0.37% Python 90.03% PowerShell 0.19% Batchfile 0.04% Makefile 0.01% Jupyter Notebook 5.17% Ruby 1.60% Rust 2.04% TeX 0.36% HTML 0.13% CSS 0.07%
python testing fuzzing property-based-testing

hypothesis's Introduction

Hypothesis

Hypothesis is a family of testing libraries which let you write tests parametrized by a source of examples. A Hypothesis implementation then generates simple and comprehensible examples that make your tests fail. This simplifies writing your tests and makes them more powerful at the same time, by letting software automate the boring bits and do them to a higher standard than a human would, freeing you to focus on the higher level test logic.

This sort of testing is often called "property-based testing", and the most widely known implementation of the concept is the Haskell library QuickCheck, but Hypothesis differs significantly from QuickCheck and is designed to fit idiomatically and easily into existing styles of testing that you are used to, with absolutely no familiarity with Haskell or functional programming needed.

Hypothesis for Python is the original implementation, and the only one that is currently fully production ready and actively maintained.

Hypothesis for Other Languages

The core ideas of Hypothesis are language agnostic and in principle it is suitable for any language. We are interested in developing and supporting implementations for a wide variety of languages, but currently lack the resources to do so, so our porting efforts are mostly prototypes.

The two prototype implementations of Hypothesis for other languages are:

  • Hypothesis for Ruby is a reasonable start on a port of Hypothesis to Ruby.
  • Hypothesis for Java is a prototype written some time ago. It's far from feature complete and is not under active development, but was intended to prove the viability of the concept.

Additionally there is a port of the core engine of Hypothesis, Conjecture, to Rust. It is not feature complete but in the long run we are hoping to move much of the existing functionality to Rust and rebuild Hypothesis for Python on top of it, greatly lowering the porting effort to other languages.

Any or all of these could be turned into full fledged implementations with relatively little effort (no more than a few months of full time work), but as well as the initial work this would require someone prepared to provide or fund ongoing maintenance efforts for them in order to be viable.

hypothesis's People

Contributors

adriangb, agucova, alexwlchan, amw-zero, cheukting, dchudz, drmaciver, felixdivo, grigoriosgiann, honno, itsrifat, jobh, jonathanplasse, jwg4, keewis, kreeve, moreati, nmbrgts, pschanely, pyup-bot, reaganjlee, rsokl, sam-watts, sobolevn, stranger6667, thunderkey, touilleman, tybug, zac-hd, zalathar


hypothesis's Issues

Need a better system for exhaustive testing of small search spaces

Right now if you write something like

@given(bool)
def foo(x):
     pass

This will fail because it will only be able to generate two examples and then will conclude that it hasn't been able to produce the desired number of examples.

This is obviously bad behaviour. Instead Hypothesis should be able to detect small example spaces and exhaustively enumerate them, or at least not error if it produces the full set.
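A rough sketch of the desired behaviour (run_property is a hypothetical helper, not Hypothesis's implementation): treat a long run of duplicate examples as evidence that a small space has been exhausted, rather than erroring.

# Hypothetical sketch, not Hypothesis's actual implementation: stop quietly when
# the generator has clearly exhausted a small example space instead of erroring.
import random

def run_property(generate, test, max_examples=200, stall_limit=50):
    """Run `test` on examples from `generate`, treating a long run of
    duplicates as evidence that the space has been exhausted."""
    seen = set()
    stalled = 0
    while len(seen) < max_examples and stalled < stall_limit:
        example = generate(random)
        if example in seen:
            stalled += 1
            continue
        stalled = 0
        seen.add(example)
        test(example)

# With a two-element space such as booleans, this runs the test on both values
# and then stops, rather than complaining that it could not find more examples.
run_property(lambda rng: rng.choice([False, True]), lambda x: None)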

falsify does not correctly handle functions which mutate their arguments

See c8127c5 for example of the problem.

Basically, the process of generate -> simplify only works sensibly if the object you start simplifying is unchanged from the object you generated. This does not work very well in a heavily mutable language like Python.

Plan of action:

  1. Give strategies information about how to "copy" their arguments. For immutable values this can and should avoid the copy and just return the element. For others it should fall back to using deepcopy.
  2. Before passing a value to a user defined function, copy it via the strategy's copy method.

The reason to not just use deepcopy straight off is a) Flexibility and b) The strategy has a much better idea than deepcopy as to whether the copy can be elided entirely.

Long term but not right now it might be important for a strategy to opt out of copying. This is achievable if you don't care about minimization (you can do it by saving the repr and the random seed before each invocation) but may be too much of a pain to bother with.
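A sketch of the plan above, with hypothetical class and method names: each strategy gets a copy() hook, so immutable values skip the deepcopy entirely while everything else falls back to it.

# Hypothetical sketch of the plan: strategies expose a copy() hook so immutable
# values can elide the copy and mutable ones fall back to deepcopy.
from copy import deepcopy

class Strategy:
    def copy(self, value):
        # Default: be safe and deep-copy before handing the value to user code.
        return deepcopy(value)

class IntStrategy(Strategy):
    def copy(self, value):
        # Integers are immutable, so the copy can be elided entirely.
        return value

class ListStrategy(Strategy):
    pass  # inherits the deepcopy fallback

def call_user_test(strategy, test, value):
    # Pass a copy to the user's function so simplification still starts from
    # the value that was originally generated.
    test(strategy.copy(value))
    return value  # the pristine value is kept for simplification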

Bias `st.datetimes()` towards bug-revealing values such as DST transitions and other oddities

pytz.tzinfo.localize will raise a NonExistentTimeError or AmbiguousTimeError exception if it can't resolve the current local time due to the change to/from daylight saving time. This is the source of numerous bugs in software dealing with datetimes in Python. A strategy that selects for these error-causing times would help improve the quality of Hypothesis-Datetime.
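One way this could look with the current strategies API; the transition dates below are illustrative examples rather than an exhaustive list, and TRICKY_DATETIMES / biased_datetimes are hypothetical names.

# Sketch: mix hand-picked awkward datetimes in alongside ordinary generated ones.
from datetime import datetime
from hypothesis import given, strategies as st

# Naive datetimes that commonly trip up timezone handling: a nonexistent local
# time (spring-forward gap), an ambiguous one (fall-back overlap), and a leap day.
TRICKY_DATETIMES = [
    datetime(2015, 3, 8, 2, 30),    # skipped hour in US/Eastern
    datetime(2015, 11, 1, 1, 30),   # repeated hour in US/Eastern
    datetime(2016, 2, 29, 0, 0),    # leap day
]

biased_datetimes = st.one_of(st.sampled_from(TRICKY_DATETIMES), st.datetimes())

@given(biased_datetimes)
def test_roundtrips_through_my_code(dt):
    assert isinstance(dt, datetime)  # stand-in for a real property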

Not enough format arguments to report about not enough keyword arguments

import hypothesis

@hypothesis.given(x=int)
def test(y):
    pass  # no, it doesn't

Gives

Traceback (most recent call last):
  File "/usr/local/opt/virtualenv/sandbox/lib/python3.3/site-packages/green/loader.py", line 166, in loadFromModuleFilename
    __import__(dotted_module)
  File "/home/kxepal/projects/sandbox/sandbox/testbox.py", line 19, in <module>
    @hypothesis.given(x=int)
  File "/usr/local/opt/virtualenv/sandbox/lib/python3.3/site-packages/hypothesis/core.py", line 68, in run_test_with_generator
    extra_kwargs[0]
TypeError: not enough arguments for format string

One formatting argument is missing here: https://github.com/DRMacIver/hypothesis/blob/master/src/hypothesis/core.py#L66-L69

ValueError: unichr() arg not in range(0x10000) (narrow Python build)

In 0.4.1 and 0.4.2 this test case now errors when run with pytest on OS X 10.9, Python 2.7.6:

@hs.given(x=unicode)
def test_moo(x):
    pass
>               to_falsify, (generator_arguments, kwargs))[0]

../../../Envs/classi/lib/python2.7/site-packages/hypothesis/testdecorators.py:38:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../Envs/classi/lib/python2.7/site-packages/hypothesis/verifier.py:129: in falsify
    for args in example_source:  # pragma: no branch
../../../Envs/classi/lib/python2.7/site-packages/hypothesis/examplesource.py:119: in __iter__
    self.random, parameter
../../../Envs/classi/lib/python2.7/site-packages/hypothesis/searchstrategy.py:554: in produce
    for g, v in zip(es, pv)
../../../Envs/classi/lib/python2.7/site-packages/hypothesis/searchstrategy.py:554: in produce
    for g, v in zip(es, pv)
../../../Envs/classi/lib/python2.7/site-packages/hypothesis/searchstrategy.py:895: in produce
    result[k] = g.produce(random, pv[k])
../../../Envs/classi/lib/python2.7/site-packages/hypothesis/searchstrategy.py:700: in produce
    return self.pack(self.mapped_strategy.produce(random, pv))
../../../Envs/classi/lib/python2.7/site-packages/hypothesis/searchstrategy.py:636: in produce
    self.element_strategy.produce(random, pv.child_parameter))
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = OneCharStringStrategy(unicode), random = <random.Random object at 0x7fb83a925820>, pv = Result(ascii_chance=0.32169705803539705)

    def produce(self, random, pv):
        if dist.biased_coin(random, pv.ascii_chance):
            return random.choice(self.ascii_characters)
        else:
            while True:
>               result = hunichr(random.randint(0, 0x10ffff))
E               ValueError: unichr() arg not in range(0x10000) (narrow Python build)
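A hypothetical workaround sketch (not the library's actual fix): cap generated code points at sys.maxunicode, which is 0xFFFF on a narrow build and 0x10FFFF on a wide one, so unichr() can never overflow.

# Hypothetical sketch: never generate a code point the current build cannot represent.
import sys
import random

try:
    hunichr = unichr        # Python 2
except NameError:
    hunichr = chr           # Python 3

def random_unicode_char(rng):
    # sys.maxunicode is 0xFFFF on a narrow build and 0x10FFFF on a wide one.
    return hunichr(rng.randint(0, sys.maxunicode))

assert ord(random_unicode_char(random.Random(0))) <= sys.maxunicode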

non-negative numbers? numbers in a range? one of? constant?

Could you tell me how, using the @given syntax, one goes about:

  • specifying a non-negative int or float? positive? negative? non-positive?
  • specifying an int or float in a range?
  • using one_of?
  • specifying a constant (Just) value?

e.g.

@given(nonneg)
def test_something(x):
  assert something(x) == True
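For reference, in current Hypothesis releases these requests map onto the strategies module roughly as follows (the old type-based @given(int) style shown above predates it; the variable names are illustrative).

from hypothesis import given, strategies as st

nonneg_int = st.integers(min_value=0)                       # non-negative int
nonneg_float = st.floats(min_value=0, allow_nan=False)      # non-negative float
int_in_range = st.integers(min_value=1, max_value=10)       # int in a range
float_in_range = st.floats(min_value=0.5, max_value=1.0)    # float in a range
int_or_bool = st.one_of(st.integers(), st.booleans())       # one_of
always_42 = st.just(42)                                      # constant (Just) value

@given(nonneg_int)
def test_something(x):
    assert x >= 0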

source_exec_as_module() produces ResourceWarning

When running tests on Python 3.4 I get the following warnings on the console:

[...]site-packages/hypothesis/internal/reflection.py:316: ResourceWarning: unclosed file <_io.TextIOWrapper name='[...]/.hypothesis/eval_source/hypothesis_temporary_module_ae7c3d261715e28e9832a153df34d35ccc6c70ff.py' mode='r' encoding='UTF-8'>
  assert open(filepath).read() == source

This is caused by this assertion. The easiest (if less elegant) fix would probably be to replace this with:

with open(filepath) as f:
    assert f.read() == source

pip installation problem

I tried to install with pip and failed:

$ python --version
Python 2.7.3
$ pip install hypothesis
Downloading/unpacking hypothesis
  Running setup.py egg_info for package hypothesis
    Traceback (most recent call last):
      File "<string>", line 16, in <module>
      File "/var/folders/r_/bq_60c0r8xl_fw0059_vk2k00000gn/T/pip-build/hypothesis/setup.py", line 24, in <module>
        long_description=open('README.rst').read(),
    IOError: [Errno 2] No such file or directory: 'README.rst'
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 16, in <module>
      File "/var/folders/r_/bq_60c0r8xl_fw0059_vk2k00000gn/T/pip-build/hypothesis/setup.py", line 24, in <module>
        long_description=open('README.rst').read(),
    IOError: [Errno 2] No such file or directory: 'README.rst'
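For context, the usual fix for this class of packaging error (not necessarily the exact change made in Hypothesis) is to read README.rst relative to setup.py's own directory and to make sure the file ships in the sdist, e.g. via an "include README.rst" line in MANIFEST.in. A minimal sketch; the version string is a placeholder:

# Sketch of the conventional fix: resolve README.rst next to setup.py.
import os
from setuptools import setup

HERE = os.path.abspath(os.path.dirname(__file__))

with open(os.path.join(HERE, 'README.rst')) as f:
    LONG_DESCRIPTION = f.read()

setup(
    name='hypothesis',
    version='0.0.0',   # placeholder
    long_description=LONG_DESCRIPTION,
)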

Provide mechanism for (progress) feedback

Great library! Thanks for making it :)

It would be great if there were a way to get progress feedback.
As a start, I think it would be a good idea to use Python's logging system to provide some basic stats like the number of tested permutations, time taken and so on. Additionally, some kind of progress callback could be very useful.

When using class based tests, setUp is not called for each hypothesis test

I was just writing some tests for a custom data structure and had a class-based test with a setUp function that initialized a fresh instance for each test (so I don't have to copy the code in each test). Some tests would fail randomly in most, but not all, executions. After investigating this I found that setUp was simply not called for every Hypothesis example, which resulted in a "dirty" data structure and in turn made some tests fail if the values came in the "wrong" order.

Here is an example:

class TestHypothesis(unittest.TestCase):

    def setUp(self):
        super(TestHypothesis, self).setUp()
        self.test_set = set()
        print "setUp called"

    @given(unicode)
    def test_example(self, text):
        chars = [c for c in text]
        for c in chars:
            assert c not in self.test_set
        self.test_set.update(chars)
        print "test called with", text

If I run this, I get the following output:

setUp called
test called with 
test called with \U0001bf50
test called with \U0004ac1e
test called with \U000d5c8f\U0002fb61\U00051be8\U000d5c8f\U0002fb61\U0002fb61\U00051be8\U000d5c8f\U00095c18\U0010a11f\U000d5c8f\U00051be8\U00095c18\U0002fb61\U000361af\U000d5c8f\U00019548\U000361af\U000d5c8f\U0010a11f\U000361af\U000d5c8f\U0002fb61\U000361af\U0010a11f\U0010a11f\U00095c18\U000361af\U000361af\U0010a11f\U0002fb61\U0010a11f\U000361af\U00095c18\U00019548\U000d5c8f\U000d5c8f\U00019548\U0002fb61\U0010a11f\U000361af\U00019548\U0010a11f\U00095c18\U000361af
test called with 0

[lots of other lines]

test called with \U00095c0f
Falsifying example: test_example(self=TestHypothesis(methodName='test_example'), text='\U00095c18')

The reason for this is obvious: since setUp was only called once, the data structure got "dirty".

I'm not really sure if this is something that needs to be fixed, but I think it needs to be documented that you should not use setUp in this way (which in my opinion is otherwise perfectly fine) when using Hypothesis. One possible workaround is sketched below.
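A possible workaround, sketched with the modern strategies API (st.text() stands in for the old unicode specifier): since @given runs many examples inside a single setUp/tearDown cycle, reset per-example state at the top of the test body itself.

import unittest
from hypothesis import given, strategies as st

class TestHypothesis(unittest.TestCase):

    @given(st.text())
    def test_example(self, text):
        test_set = set()            # fresh for every generated example
        chars = list(text)
        for c in chars:
            assert c not in test_set
        test_set.update(chars)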

support nice syntax for sharing failing cases

I've written some code and tested it with Hypothesis, but I'd like to store the failing tests and share them with other developers working on my project.

I'm currently doing:

def assert_like_foo(x, y):
    assert bar(ham(x), ham(y)) == foo(x, y)

@pytest.mark.parametrize("x,y",[
    (0.5, 0.3),
    (1e-310, 1e-300),
    (1e-300, 1e-310),
    (float('inf'), float('inf')),
    (4.0, 1.677633824243504e-308),
    (3.9999999999999996, 2.2250738585072014e-308),
])
def test_bar_parametrized(x, y):
    assert_like_foo(x, y)


@hypothesis.given(x=float, y=float)
def test_bar(x, y):
    assert_like_foo(x, y)

This seems a bit long-winded; I'd like to do something more like:

@hypothesis.given(x=float, y=float, _eg=[
    (0.5, 0.3),
    (1e-310, 1e-300),
    (1e-300, 1e-310),
    (float('inf'), float('inf')),
    (4.0, 1.677633824243504e-308),
    (3.9999999999999996, 2.2250738585072014e-308),
])
def test_bar(x, y):
    assert bar(ham(x), ham(y)) == foo(x, y)
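For reference, current Hypothesis releases cover exactly this use case with the @example decorator, which pins known failing inputs alongside the generated ones. A self-contained sketch; the property below is illustrative rather than the reporter's code:

from hypothesis import example, given, strategies as st

@given(x=st.floats(allow_nan=False), y=st.floats(allow_nan=False))
@example(x=1e-310, y=1e-300)                # known tricky subnormal pair
@example(x=float('inf'), y=float('inf'))    # pinned infinity case
def test_addition_is_commutative(x, y):
    assert x + y == y + x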

Inheritance doesn't work with StatefulTest

I tried making a test using a StatefulTest subclass, and several subclasses of that class; the tests fail with:

ValueError: one_of requires at least one value to choose from

No specified distinction between private and public API

There are currently a lot of internals in Hypothesis that I'm planning to change.

But... I don't actually know what bits of the library are public and which are private, because I never specified that, and I have no easy way of seeing what people are using.

My current plans are:

  • I am allowed to break anything
  • I don't want to break anything if I think there's any chance that someone is depending on it
  • By 1.0 I want everything that I consider internal moved into a hypothesis.internal package. If you depend on something in that and changes break your code, that's your problem. If you depend on something outside of that and changes break your code, that's my problem.

You can help! Tell me how you're using the library. Ideally contribute test cases to back it up. Which bits do you care about, which bits could you live with me breaking, which bits would you rather I break?

"ImportError: No module named 'hypothesis.internal'"

At the risk of having done something stupid...

I've just tried a fresh install of both master and the PyPI release. In both cases (using an Anaconda environment), if I try the following in IPython:

In [2]: from hypothesis import falsify
...
ImportError: No module named 'hypothesis.internal'

The same occurs if I just try import hypothesis.

Looking at my site-packages after installing with pip I see files like site-packages/hypothesis/statefultesting.py; it looks like the internal directory is not being installed?

For the master install I did a fresh checkout (Python 3.4, current Anaconda, no venv) and ran python setup.py install; I had the same problem (though the files were installed to an egg rather than a directory).

For the pip install I just used pip install hypothesis.

If I install from your master using python setup.py develop then I get the internal folder (in my checked out folder), so then the imports work.

Am I missing something silly? Maybe you've not tested your installer recently?
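For context, the usual cause of a subpackage going missing after install is a packages list in setup.py that omits it; find_packages() picks up subpackages such as hypothesis.internal automatically. A minimal sketch, assuming the src/ layout this repository uses (version string is a placeholder):

from setuptools import setup, find_packages

setup(
    name='hypothesis',
    version='0.0.0',                   # placeholder
    packages=find_packages('src'),     # e.g. hypothesis, hypothesis.internal, ...
    package_dir={'': 'src'},           # assuming a src/ layout
)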

@pytest.fixture integration?

Pytest has a feature where you can apply the @pytest.fixture decorator on a function foo, and then any test which Pytest discovers that accepts a foo argument will get passed whatever foo() returns automatically (see https://pytest.org/latest/fixture.html#fixtures-as-function-arguments for more).

You can even have fixtures derived from other fixtures:

@pytest.fixture(scope='session')
def foo():
    return some_complex_input

@pytest.fixture(scope='session')
def bar(foo):
    return do_something_complex_with(foo)

def test_baz(bar):
    assert 'Pytest passes bar here automatically'

If I wanted Hypothesis to generate the input foo() is supplying in the above, then have bar transform it in some way (applying some assume statements while it's at it), and then have Pytest feed that as input to all my tests, would that be possible (using pytest.fixture or otherwise)?

I'm currently duplicating the same @given(...) line for every one of my tests, which means Hypothesis has to do a lot of duplicate work. This seems like maybe a common enough use case, but I haven't found anything that addresses it in the docs.

(I did find https://pypi.python.org/pypi/hypothesis-pytest but it looks like it currently only improves Pytest reporting.)

Thanks for any help and apologies if I'm missing something (I'm new to both Hypothesis and Pytest). And thanks for releasing Hypothesis, it's awesome!
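One idiom that avoids fixtures entirely, sketched with the modern strategies API (complex_inputs and transformed_inputs are illustrative names): define the strategy once at module level and reuse it in every @given, with derived strategies taking the place of derived fixtures. Unlike a session-scoped fixture, each test still receives freshly generated examples.

from hypothesis import given, strategies as st

# The equivalent of foo(): a strategy describing the complex input.
complex_inputs = st.lists(st.integers(), min_size=1)

# The equivalent of bar(foo): derive a transformed strategy with .map()
# (use .filter() or assume() inside the test for assume-style constraints).
transformed_inputs = complex_inputs.map(sorted)

@given(transformed_inputs)
def test_baz(xs):
    assert xs == sorted(xs)

@given(transformed_inputs)
def test_other(xs):
    assert len(xs) >= 1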

Possibility of supporting coroutine tests

I have a problem testing some code based on asyncio. The default unittest doesn't like it when test_ methods are coroutines - it has no idea what to do with them. A common workaround is to write the coroutine as a nested function, like this:

import asyncio
import aiohttp
from unittest import TestCase
from hypothesis import given

class DummyTestCase(TestCase):

    def setUp(self):
        self.loop = asyncio.new_event_loop()
        asyncio.set_event_loop(self.loop)

    def tearDown(self):
        self.loop.close()

    @given(str)
    def test_foo(self, x):
        @asyncio.coroutine
        def go():
            # dummy test just to check that hypothesis works with asyncio
            r = yield from aiohttp.request(x, 'http://localhost')
            assert r.status == 200, 'yay!'
        self.loop.run_until_complete(go())

The problem is that this solution is too verbose, and when you have thousands of tests it turns into a problem. However, it works fine with Hypothesis and this test fails as expected. To avoid the inner coroutine, I used the following trick:

import asyncio
import aiohttp
import functools
from unittest import TestCase
from hypothesis import given


def run_in_loop(f):
    @functools.wraps(f)
    def wrapper(testcase, *args, **kwargs):
        coro = asyncio.coroutine(f)
        future = asyncio.wait_for(coro(testcase, *args, **kwargs),
                                  timeout=testcase.timeout)
        return testcase.loop.run_until_complete(future)
    return wrapper


class MetaAioTestCase(type):

    def __new__(cls, name, bases, attrs):
        for key, obj in attrs.items():
            if key.startswith('test_'):
                attrs[key] = run_in_loop(obj)
        return super().__new__(cls, name, bases, attrs)


class DummyTestCase(TestCase, metaclass=MetaAioTestCase):

    timeout = 5

    def setUp(self):
        self.loop = asyncio.new_event_loop()
        asyncio.set_event_loop(self.loop)

    def tearDown(self):
        self.loop.close()

    @given(str)
    def test_foo(self, x):
        # here we expect the fail, but it never happens
        r = yield from aiohttp.request(x, 'http://localhost')
        assert r.status == 200, 'yay!'

In other words, I auto-decorate the test_ methods to turn them into coroutines and run them within the event loop until they are completed. This removes a lot of routine and useless code, but suddenly this approach stops working with Hypothesis - the test passes as if it was never run.

I just found your project today and am still playing around with its features without digging into the given implementation, but do you have a quick idea why the last case doesn't work?
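A minimal illustration of what appears to be going on, independent of Hypothesis: calling a generator function only creates a generator object, so the test body and its assertions never execute unless something drives it. In the metaclass version the @given wrapper ends up calling the raw generator function directly, so every example "passes"; the wrapper that drives the event loop would need to sit inside @given so each generated example is actually run to completion.

def looks_like_a_test(x):
    assert x == 'never checked'
    yield  # the yield makes this a generator function

result = looks_like_a_test('anything')   # no AssertionError is raised
print(type(result))                      # <class 'generator'> - the body never ran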

Docs example fails

The decorator given([int]) is creating tests with empty lists, which breaks this test example from the documentation:

@given([int])
def test_reversing_twice_gives_same_list(xs):
    assert xs == list(reversed(reversed(xs)))

Apparently a reversed empty list isn't a sequence, which might be an issue in and of itself.

A test fails by raising TypeError exception whenever one of the test function arguments is named 'f'

A test will always fail with an exception saying that an instance of HypothesisProvided is not callable whenever one of the test function's arguments is named 'f'. For example, given

@given(int, int)
def test_this(x,f):
    assume(x > 1)
    assert x > 1

when run with py.test produces

_________________________________________________________________ test_this __________________________________________________________________

x = HypothesisProvided(value=<class 'int'>), f = HypothesisProvided(value=<class 'int'>)

    def test_this(x=not_set, f=not_set):
>       return f(x=x, f=f)
E       TypeError: 'HypothesisProvided' object is not callable

.hypothesis/eval_source/hypothesis_temporary_module_06cffbd9c8baa4dbbfa21a56fbd77c45e3b770be_0.py:5: TypeError

however, the following

@given(int, int)
def test_this(x,y):
    assume(x > 1)
    assert x > 1

works just fine. Note that I renamed 'f' to 'y'. I get the same outcome regardless of the number of arguments a test function has.

If it helps, I'm on platform darwin -- Python 3.4.3 -- py-1.4.26 -- pytest-2.6.4 according to py.test.

hypothesis-django: no support for custom fields

It doesn't appear that there's any way to specify extended mappings for custom Django fields, even for fields that inherit from other fields. For instance, we have a custom field, XidField, which inherits from the django_extensions field ShortUUIDField, which inherits from UUIDField, which inherits from CharField. But reading the model_to_base_specifier function in the Django extension, it looks like hypothesis looks through its hardcoded set of predefined mappings, and gives up if it finds a non-foreign key field that's not in that set.

OverflowError in datetime.py:draw_template

Traceback during test running. No other logging output.

  File "/Users/zachsmith/makespace/site/ve/lib/python2.7/site-packages/hypothesisdatetime/datetime.py", line 101, in draw_template
    return self.templateize(timezone.localize(base))
  File "/Users/zachsmith/makespace/site/ve/lib/python2.7/site-packages/pytz/tzinfo.py", line 309, in localize
    loc_dt = dt + delta
OverflowError: date value out of range

Support for Python 2.6

I've got some py2.6 code I'd love to use hypothesis for.

There is a py2.6 tox env, but it's not in use and the package clearly doesn't work with 2.6.

I plan to submit a PR with 2.6 support if you're willing to accept it.

Statistical distribution tests have a massive case of multiple testing

This isn't a correctness problem, because it only increases the number of false-positive failures rather than hiding real ones. It's annoying because it makes the build flaky, though!

Proposed plan of action:

  • Make individual tests run at a much stricter significance level. e.g. divide it by the number of tests.
  • Use the Benjamini–Hochberg–Yekutieli procedure to do a statistical test on the p values for all the tests to see if any of them are significant with a 1% false positive rate.
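A sketch of the procedure named in the second bullet, applied to a list of p-values (benjamini_yekutieli is a hypothetical helper name, not existing code in the build):

def benjamini_yekutieli(p_values, q=0.01):
    """Return the indices of p-values judged significant while controlling the
    false discovery rate at q, using the Benjamini-Hochberg-Yekutieli procedure."""
    m = len(p_values)
    c_m = sum(1.0 / i for i in range(1, m + 1))   # correction for arbitrary dependence
    ranked = sorted(enumerate(p_values), key=lambda kv: kv[1])
    cutoff = 0
    for rank, (_, p) in enumerate(ranked, start=1):
        if p <= rank * q / (m * c_m):
            cutoff = rank                          # largest rank passing its threshold
    return sorted(index for index, _ in ranked[:cutoff])

# Example: only the genuinely tiny p-value survives the correction.
print(benjamini_yekutieli([0.04, 0.0001, 0.3, 0.02, 0.9]))  # -> [1]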

Map and flatmap fail on functions without names

Not all functions have the __name__ attribute, but map and flatmap will crash if the function does not.

For example, if you wrap a function with itertools.partial the result will not have a __name__ and so cannot be used in map.

$ python3
Python 3.4.0 (default, Apr 11 2014, 13:05:11)
[GCC 4.8.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from hypothesis import given, assume, strategy
>>> func = lambda a, b: a+b
>>> import functools
>>> func_wrapped = functools.partial(func, 10)
>>> func_wrapped(1)
11
>>> func_wrapped(10)
20
>>> strategy(int).map(func_wrapped)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.4/dist-packages/hypothesis/searchstrategy/strategies.py", line 533, in __repr__
    self.mapped_strategy, self.pack.__name__
AttributeError: 'functools.partial' object has no attribute '__name__'
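The obvious defensive fix, not necessarily the exact change made in Hypothesis, is to fall back to repr() when a callable has no __name__; a sketch:

import functools

def describe_callable(f):
    # Use the function's name when it has one, otherwise its repr.
    return getattr(f, '__name__', repr(f))

add_ten = functools.partial(lambda a, b: a + b, 10)
print(describe_callable(add_ten))    # e.g. functools.partial(<function <lambda> at ...>, 10)
print(describe_callable(sorted))     # 'sorted'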

Move @fails annotation into main test decorators

I was reading "Software Testing with Quickcheck" by John Hughes and in it he points out that keeping around failing properties with a marker that they fail is actually a useful thing which people should do as part of their test suites. This seems like an eminently fair point. I already have something like this in the tests for my test decorators. It currently depends on py.test but could easily be made not to. Do this and merge it into the main API.

Feature request - Half-open intervals

Thanks to David's comment on Issue 61, I've discovered the joys of hypothesis.specifiers, which solved the problem in that issue. Reading through the code I've noticed that floats_in_range() doesn't permit float('inf') as a value in a range. I'd like to suggest permitting float('inf') and float('-inf') as values so that we can test half-open intervals. In short, floats_in_range(0, float('inf')) would be similar to strategy(floats_in_range(0, sys.float_info.max)) | strategy(just(float('inf'))), and floats_in_range(float('-inf'), 0) would do something similar.

Tracker accepts, but does not support, iterables

from hypothesis.internal.tracker import Tracker

def test_track_iterable():
    t = Tracker()
    assert t.track(iter([1])) == 1
    assert t.track(iter([1])) == 2

Causes

  File "/home/kxepal/projects/sandbox/sandbox/testbox.py", line 23, in test_track_iterable
    assert t.track(iter([1])) == 1
  File "/usr/local/opt/virtualenv/sandbox/lib/python3.3/site-packages/hypothesis/internal/tracker.py", line 64, in track
    k = object_to_tracking_key(x)
  File "/usr/local/opt/virtualenv/sandbox/lib/python3.3/site-packages/hypothesis/internal/tracker.py", line 47, in object_to_tracking_key
    k = marshal.dumps(flatten(o))
  File "/usr/local/opt/virtualenv/sandbox/lib/python3.3/site-packages/hypothesis/internal/tracker.py", line 39, in flatten
    result.append(len(t))
TypeError: object of type 'list_iterator' has no len()

This happens because the iterator is passed to the branch which accepts Iterable instances. However, such instances are only guaranteed to implement __next__, not the __len__ that Sized ones have. Maybe you expected Sequence instances here, which implement both the Sized and Iterable interfaces and so support both len(...) and list(...) calls?
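The distinction drawn above, restated with collections.abc: an iterator is Iterable but not Sized, while a Sequence such as a list is both.

from collections.abc import Iterable, Sized, Sequence

it = iter([1])
assert isinstance(it, Iterable) and not isinstance(it, Sized)
assert isinstance([1], Sequence) and isinstance([1], Sized)

try:
    len(it)
except TypeError as error:
    print(error)   # object of type 'list_iterator' has no len()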

missing module "flags"

Running py.test (with Python 2.7) reports many import errors of the form:

from hypothesis.flags import Flags
E ImportError: No module named flags

Perhaps a file is missing from the checkin?

Issue: specifiers.floats_in_range() returns floats not in range

for i in range(1, 20):
    print strategy(spec.floats_in_range(.5, 1)).example()

RETURNS:
0.968588265907
0.569918309757
0.806252086243
0.182544019517 *****
0.98129418528
0.964912189405
0.707089910335
0.201839450826 *******
0.510918699796

Things get worse for smaller ranges:
for i in range(1, 20):
    print strategy(spec.floats_in_range(.5, .51)).example()

RETURNS:

0.214248244349
0.601758048096
0.420202881225
0.545288818055
0.523624722385
0.70708993713
0.446426877632
0.570118640575
0.510503224163
0.529512864265
0.733973471751
0.518818519475
0.487080865111
0.54701095173
0.592756742708
0.36979839466
0.743264942082
0.499588510816
0.2745099114

All of these values fall outside the range given.

Timeouts don't seem to be working

I've written an (admittedly torturous) test to check how well pickling works with a chunk of code I have. The relevant section is below.

with Settings(timeout=-1):
    @given(max_speed=float,
           tradingpost_probability=float,
           pause_mean=float,
           max_step_size=float,
           name=str,
           robot_call_periodicity=float)
    def test_pickling(self, max_speed, tradingpost_probability, pause_mean, 
                      max_step_size, name, robot_call_periodicity):
        assume(max_speed >= 0.0)
        assume(tradingpost_probability >= 0.0)
        assume(tradingpost_probability <= 1.0)
        assume(pause_mean > 0.0)
        assume(max_step_size > 0.0)
        assume(robot_call_periodicity > 0.0)
        assume(robot_call_periodicity < 1000.0)

This fails with:

hypothesis.errors.Unsatisfiable: Unable to satisfy assumptions of hypothesis test_pickling. Only 4 examples found after 0.121668 seconds

The problem is the timeouts. Reading through the documentation of core.find_satisfying_template(), it appears that by setting the timeout setting to a negative number, Hypothesis should no longer time out; that is, it will work for as long as is necessary to generate enough examples. However, what actually happens is that the timeouts still default to whatever the defaults are. Is this a bug in Hypothesis, or is this an error on my part?

Thanks, and thanks for writing such a fantastic library!

Documentation: Descriptors don't exist anymore

This code from the docs page doesn't work anymore:

import hypothesis.descriptors as desc

strategy([desc.integers_in_range(1, 10)]).example()
[7, 9, 9, 10, 10, 4, 10, 9, 9, 7, 4, 7, 7, 4, 7]

strategy([desc.floats_in_range(0, 1)]).example()
[0.4679222775246174, 0.021441634094071356, 0.08639605748268818]

strategy(desc.one_of((float, bool))).example()
3.6797748715455153e-281

strategy(desc.one_of((float, bool))).example()
False

It appears hypothesis.descriptors is now hypothesis.specifiers.

I know this is a minor change, but I'm in the process of reading the docs myself, so I don't trust myself to get everything exactly correct!

Automatically generate django forms

Django models and forms have more than enough metadata for us to automatically introspect them and figure out how to build one. We should have a hypothesis-extra package to do this.

strategy_for_instances will behave arbitrarily with diamond inheritance

Suppose you have types A, B with instance definitions for both, and C subtyping both A and B. An instance of C will basically get an arbitrary strategy - it'll be stable across runs within a single process because of dict iteration order but will change from process to process.

Really this needs to be an error condition. If you have diamond inheritance you need to provide a more specific implementation for the more specific type.
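A small illustration of the ambiguity with a plain type-to-strategy registry (the registry and lookup helper are hypothetical, not Hypothesis's internals):

class A: pass
class B: pass
class C(A, B): pass

registry = {A: 'strategy for A', B: 'strategy for B'}

def lookup(value):
    for registered_type, strat in registry.items():
        if isinstance(value, registered_type):
            return strat   # first match wins

print(lookup(A()))  # 'strategy for A'
print(lookup(C()))  # whichever base happens to be checked first wins -
                    # arbitrary with respect to the type hierarchy; an error
                    # demanding an explicit definition for C would be clearer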

numpy support

It seems reasonable to add support for numpy data types.

Provide a "fixed seed" setting

Verifier supports a configurable random number generator and all use of randomness goes through that.

However if you do not provide one it will just create one with the default seed. It is desirable to run tests in a deterministic mode, which you could do by having Verifier create its random with a fixed seed instead.

In this case it would be better to have the randomisation happen per falsify run rather than live on the Verifier object.

The planned solution is to derive the seed from some sort of hash of the hypothesis, e.g. its name, or inspect.getsource(hypothesis) if the hypothesis is a lambda. This ensures tests cannot interfere with each other's randomness. A sketch of this idea follows.
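A sketch of that per-test seeding idea (seeded_random_for is a hypothetical helper, not part of Hypothesis):

import hashlib
import inspect
import random

def seeded_random_for(test_function):
    """Derive a deterministic Random from the test's name, or from its source
    if it is a lambda, so tests cannot perturb each other's randomness."""
    name = test_function.__name__
    key = inspect.getsource(test_function) if name == '<lambda>' else name
    seed = int.from_bytes(hashlib.sha1(key.encode('utf-8')).digest()[:8], 'big')
    return random.Random(seed)

def test_sorting_is_idempotent(rng):
    xs = [rng.random() for _ in range(10)]
    assert sorted(sorted(xs)) == sorted(xs)

# Same test name -> same Random -> same examples on every run.
test_sorting_is_idempotent(seeded_random_for(test_sorting_is_idempotent))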

Documentation is unclear

I'm not sure whether it's the documentation or just me, but I can't, for the life of me, figure out how to use strategy. I can't see how to pass it to @given, and thus I'm not sure how to create a strategy that will give me examples of a list of given length.

I see the example that uses flatmap to do it, but how do I pass that strategy to @given? Is it mentioned anywhere that I have overlooked?
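For reference, in current Hypothesis releases a strategy object is passed directly to @given, and fixed-length lists need no flatmap at all; the example below uses the modern strategies API.

from hypothesis import given, strategies as st

five_ints = st.lists(st.integers(), min_size=5, max_size=5)

@given(five_ints)
def test_fixed_length_list_roundtrip(xs):
    assert len(xs) == 5
    assert xs == list(reversed(list(reversed(xs))))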
