datadriventests / ddt
Data-Driven Tests for Python Unittest
Home Page: http://ddt.readthedocs.org/
License: MIT License
We use a combination of dynamic (ddt) tests and vcr-unittest, which records HTTP requests and replays them when tests are rerun. This is all well and good, but we bump into an issue: vcr saves them based on the test name, which includes the ordinal. It would therefore be awesome if we could specify that we do not want ordinal numbers appended:
@data(*datapoints, ordinal=False)
def f(x, y, z):
    pass
The argument name could be anything: ordinal, suppress_ordinal, or even unique=True, meaning that all test names are unique and thus that ordinals aren't necessary.
Currently we just monkey patch mk_test_name, but we shouldn't do this because it causes other issues later.
I am trying to push a new branch in order to submit a PR but am getting permission denied...thanks!
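For illustration, the monkey-patch workaround mentioned above might look like the following stand-alone sketch. Note that `mk_test_name` here is a stand-in: the real helper lives inside ddt and its signature varies across versions, so treat the `(name, value, index)` shape as an assumption.

```python
import functools

def mk_test_name(name, value, index=0):
    # Stand-in for ddt's internal helper, which normally appends the ordinal.
    # The real signature differs across ddt versions.
    return "{0}_{1}_{2}".format(name, index + 1, value)

def strip_ordinal(mk_name):
    """Wrap a mk_test_name-style helper so generated names omit the ordinal."""
    @functools.wraps(mk_name)
    def wrapper(name, value, index=0):
        return "{0}_{1}".format(name, value)
    return wrapper

# The monkey patch: replace the helper before tests are collected.
mk_test_name = strip_ordinal(mk_test_name)
print(mk_test_name("test_fetch", "homepage", 0))  # test_fetch_homepage
```

As the issue notes, this only works if every data point produces a unique name; otherwise later generated tests silently overwrite earlier ones.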
I love ddt! I often find myself doing something like the following in order to get useful names:
import ddt
import unittest
from collections import namedtuple

@ddt.ddt
class TestExample(unittest.TestCase):

    class DummyClass(namedtuple('DummyClass', ['name', 'arg1', 'arg2'])):
        def __str__(self):
            return self.name

    @ddt.data(
        DummyClass('TwoGreaterThanOne', 1, 2),
        DummyClass('ThreeGreaterThanTwo', 2, 3)
    )
    def test_example(self, dummy):
        self.assertGreater(dummy.arg2, dummy.arg1)
I see that some of this type of functionality is in the works for ddt 2.0, but I don't know what kind of progress has been made on that. In the meantime, I put together this helper module (below, tested successfully in Python 2.7), which allows me to do this with a single decorator and may be helpful to others.
I am rather new to code development and contribution on GitHub, so I am not sure of the best process to contribute this code and ensure it passes the necessary checks and tests.
import ddt
import unittest


class NamedDataList(list):
    """Helper class for @named_data that allows ddt tests to have meaningful names."""
    def __init__(self, name, *args):
        super(NamedDataList, self).__init__(args)
        self.name = name

    def __str__(self):
        return str(self.name)


class NamedDataDict(dict):
    """Helper class for @named_data that allows ddt tests to have meaningful names."""
    def __init__(self, name, **kwargs):
        super(NamedDataDict, self).__init__(kwargs)
        self.name = name

    def __str__(self):
        return str(self.name)


# To ensure that the name is properly interpreted regardless of arguments, the
# helper classes must be added to the tuple of ddt trivial types, for which ddt
# always uses the name. See ddt.trivial_types for more information.
ddt.trivial_types = ddt.trivial_types + (NamedDataList, NamedDataDict)


def named_data(*named_values):
    """
    Decorator that allows meaningful names to be given to tests that would
    otherwise use @ddt.data and @ddt.unpack.

    Example of original ddt usage:

        @ddt.ddt
        class TestExample(TemplateTest):
            @ddt.data(
                [0, 1],
                [10, 11]
            )
            @ddt.unpack
            def test_values(self, value1, value2):
                ...

    Example of new usage:

        @ddt.ddt
        class TestExample(TemplateTest):
            @named_data(
                ['A', 0, 1],
                ['B', 10, 11],
            )
            def test_values(self, value1, value2):
                ...

    Note that @unpack is not required.

    Args:
        named_values (list[Any] | dict[Any, Any]): Each named value should be a
            list with the name as the first element, or a dictionary with
            'name' as one of the keys. The name will be coerced to a string and
            all other values will be passed unchanged to the test.
    """
    type_of_first = None
    values = []
    for named_value in named_values:
        if type_of_first is None:
            type_of_first = type(named_value)
        if not isinstance(named_value, type_of_first):
            raise TypeError('@named_data expects all values to be of the same type.')
        if isinstance(named_value, list):
            value = NamedDataList(named_value[0], *named_value[1:])
        elif isinstance(named_value, dict):
            if 'name' not in named_value:
                raise ValueError('@named_data expects a dictionary with a "name" key.')
            value = NamedDataDict(**named_value)
        else:
            raise TypeError('@named_data expects a list or dictionary.')
        # Remove the __doc__ attribute so @ddt.data doesn't add the helper
        # class docstrings to the test name.
        value.__doc__ = None
        values.append(value)

    def wrapper(func):
        ddt.data(*values)(ddt.unpack(func))
        return func
    return wrapper
@ddt.ddt
class TestNamedData(unittest.TestCase):

    class NonTrivialClass(object):
        pass

    @named_data(
        ['Single', 0, 1]
    )
    def test_single_named_value(self, value1, value2):
        self.assertGreater(value2, value1)

    @named_data(
        ['1st', 1, 2],
        ['2nd', 3, 4]
    )
    def test_multiple_named_value_lists(self, value1, value2):
        self.assertGreater(value2, value1)

    @named_data(
        dict(name='1st', value2=1, value1=0),
        {'name': '2nd', 'value2': 1, 'value1': 0}
    )
    def test_multiple_named_value_dicts(self, value1, value2):
        self.assertGreater(value2, value1)

    @named_data(
        ['Passes', NonTrivialClass(), True],
        ['Fails', 1, False]
    )
    def test_list_with_nontrivial_type(self, value, passes):
        if passes:
            self.assertIsInstance(value, self.NonTrivialClass)
        else:
            self.assertNotIsInstance(value, self.NonTrivialClass)

    @named_data(
        {'name': 'Passes', 'value': NonTrivialClass(), 'passes': True},
        {'name': 'Fails', 'value': 1, 'passes': False}
    )
    def test_dict_with_nontrivial_type(self, value, passes):
        if passes:
            self.assertIsInstance(value, self.NonTrivialClass)
        else:
            self.assertNotIsInstance(value, self.NonTrivialClass)
Seems that you cannot run an individual ddt test via nosetests:
$ cat test_foo.py
import unittest
from ddt import ddt, data

@ddt
class FooTestCase(unittest.TestCase):
    @data(3, 4, 12, 23)
    def test_larger_than_two(self, value):
        self.assertTrue(value > 0)
$ nosetests -vv test_foo.py:FooTestCase
nose.config: INFO: Ignoring files matching ['^\\.', '^_', '^setup\\.py$']
test_larger_than_two_12 (tests.cli.test_foo.FooTestCase) ... ok
test_larger_than_two_23 (tests.cli.test_foo.FooTestCase) ... ok
test_larger_than_two_3 (tests.cli.test_foo.FooTestCase) ... ok
test_larger_than_two_4 (tests.cli.test_foo.FooTestCase) ... ok
----------------------------------------------------------------------
Ran 4 tests in 0.001s
OK
$ nosetests -vv test_foo.py:FooTestCase.test_larger_than_two
nose.config: INFO: Ignoring files matching ['^\\.', '^_', '^setup\\.py$']
Failure: ValueError (No such test FooTestCase.test_larger_than_two) ... ERROR
======================================================================
ERROR: Failure: ValueError (No such test FooTestCase.test_larger_than_two)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/nose/failure.py", line 41, in runTest
raise self.exc_class(self.exc_val)
ValueError: No such test FooTestCase.test_larger_than_two
----------------------------------------------------------------------
Ran 1 test in 0.001s
FAILED (errors=1)
============================= test session starts ==============================
platform linux -- Python 3.8.7, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
rootdir: /build/ddt-1.4.1
collected 18 items / 1 error / 17 selected
==================================== ERRORS ====================================
____________________ ERROR collecting test/test_example.py _____________________
test/test_example.py:153: in <module>
class YamlOnlyTestCase(unittest.TestCase):
ddt.py:372: in ddt
return wrapper(arg) if inspect.isclass(arg) else wrapper
ddt.py:366: in wrapper
process_file_data(cls, name, func, file_attr)
ddt.py:254: in process_file_data
data = yaml.load(f, Loader=yaml_loader)
/nix/store/lpl17r5zrayqvxflsiylwjxzz7wb04n6-python3.8-PyYAML-5.4.1/lib/python3.8/site-packages/yaml/__init__.py:114: in load
return loader.get_single_data()
/nix/store/lpl17r5zrayqvxflsiylwjxzz7wb04n6-python3.8-PyYAML-5.4.1/lib/python3.8/site-packages/yaml/constructor.py:51: in get_single_data
return self.construct_document(node)
/nix/store/lpl17r5zrayqvxflsiylwjxzz7wb04n6-python3.8-PyYAML-5.4.1/lib/python3.8/site-packages/yaml/constructor.py:60: in construct_document
for dummy in generator:
/nix/store/lpl17r5zrayqvxflsiylwjxzz7wb04n6-python3.8-PyYAML-5.4.1/lib/python3.8/site-packages/yaml/constructor.py:413: in construct_yaml_map
value = self.construct_mapping(node)
/nix/store/lpl17r5zrayqvxflsiylwjxzz7wb04n6-python3.8-PyYAML-5.4.1/lib/python3.8/site-packages/yaml/constructor.py:218: in construct_mapping
return super().construct_mapping(node, deep=deep)
/nix/store/lpl17r5zrayqvxflsiylwjxzz7wb04n6-python3.8-PyYAML-5.4.1/lib/python3.8/site-packages/yaml/constructor.py:143: in construct_mapping
value = self.construct_object(value_node, deep=deep)
/nix/store/lpl17r5zrayqvxflsiylwjxzz7wb04n6-python3.8-PyYAML-5.4.1/lib/python3.8/site-packages/yaml/constructor.py:100: in construct_object
data = constructor(self, node)
/nix/store/lpl17r5zrayqvxflsiylwjxzz7wb04n6-python3.8-PyYAML-5.4.1/lib/python3.8/site-packages/yaml/constructor.py:427: in construct_undefined
raise ConstructorError(None, None,
E yaml.constructor.ConstructorError: could not determine a constructor for the tag 'tag:yaml.org,2002:python/object:test.test_example.MyClass'
E in "/build/ddt-1.4.1/test/data/test_custom_yaml_loader.yaml", line 36, column 13
=============================== warnings summary ===============================
ddt.py:43
/build/ddt-1.4.1/ddt.py:43: PytestCollectionWarning: cannot collect test class 'TestNameFormat' because it has a __new__ constructor (from: test/test_functional.py)
class TestNameFormat(Enum):
-- Docs: https://docs.pytest.org/en/stable/warnings.html
=========================== short test summary info ============================
ERROR test/test_example.py - yaml.constructor.ConstructorError: could not det...
!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!
========================= 1 warning, 1 error in 0.51s ==========================
Async test functions aren't awaited on python 3.11 when annotated with a DDT decorator. Consequently, such tests will always pass. This can be reproduced with the following minimal example:
import asyncio
from unittest import IsolatedAsyncioTestCase

from ddt import ddt, data

@ddt
class TestDdt(IsolatedAsyncioTestCase):
    @data(1)
    async def test_ddt(self, value: int) -> None:
        await asyncio.sleep(1)
        self.assertNotEqual(value, 1)
Executing this test case results in the following warnings:
==================================================================== test session starts =====================================================================
platform linux -- Python 3.11.5, pytest-7.4.2, pluggy-1.3.0
rootdir: /home/soldag/ddt-test
collected 1 item
test_ddt.py . [100%]
====================================================================== warnings summary ======================================================================
test_ddt.py::TestDdt::test_ddt_1_1
/usr/lib/python3.11/unittest/async_case.py:90: RuntimeWarning: coroutine 'TestDdt.test_ddt' was never awaited
if self._callMaybeAsync(method) is not None:
Enable tracemalloc to get traceback where the object was allocated.
See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.
test_ddt.py::TestDdt::test_ddt_1_1
/usr/lib/python3.11/unittest/case.py:678: DeprecationWarning: It is deprecated to return a value that is not None from a test case (<bound method TestDdt.test_ddt of <test_ddt.TestDdt testMethod=test_ddt_1_1>>)
return self.run(*args, **kwds)
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=============================================================== 1 passed, 2 warnings in 0.05s ================================================================
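Until the decorators are coroutine-aware on 3.11, one possible workaround (a sketch, not part of ddt) is to keep the data-driven test method synchronous and drive the coroutine explicitly with asyncio.run, so a failing assertion inside it actually fires:

```python
import asyncio
import unittest

class TestWorkaround(unittest.TestCase):
    # A synchronous test method; ddt's @data would decorate this normally.
    def test_value(self):
        async def check(value):
            await asyncio.sleep(0)
            return value
        # Drive the coroutine to completion so its result is actually checked.
        self.assertEqual(asyncio.run(check(1)), 1)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestWorkaround))
print(result.wasSuccessful())
```

This loses IsolatedAsyncioTestCase's per-test event loop management, so it is a stopgap rather than a fix.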
How do I import named_data in my code? It seems like only the ddt module is provided by the pip package, and named_data does not seem to be imported anywhere in ddt.py, so I cannot do something like from ddt import named_data.
+ cd ddt-1.4.1
+ PYTHONPATH=/home/tkloczko/rpmbuild/BUILDROOT/python-ddt-1.4.1-2.fc33.x86_64/usr/lib/python3.8/site-packages
+ nosetests --verbose
test_dicts_extracted_into_kwargs_1 (test.test_example.FooTestCase) ... ok
test_dicts_extracted_into_kwargs_2 (test.test_example.FooTestCase) ... ok
Missing args with value {0} and {1} ... ok
Missing args with value {0} and {1} ... ok
Missing args with value {0} and {1} ... ok
Missing args with value {0} and {1} ... ok
Missing kargs with value {value} {value2} ... ok
Missing kargs with value {value} {value2} ... ok
Missing kargs with value {value} {value2} ... ok
Missing kargs with value {value} {value2} ... ok
test_file_data_json_dict_1_unsorted_list ... ok
test_file_data_json_dict_2_sorted_list ... ok
test_file_data_json_dict_dict_1_positive_integer_range ... ok
test_file_data_json_dict_dict_2_negative_integer_range ... ok
test_file_data_json_dict_dict_3_positive_real_range ... ok
test_file_data_json_dict_dict_4_negative_real_range ... ok
test_file_data_json_list_1_Hello ... ok
test_file_data_json_list_2_Goodbye ... ok
test_file_data_yaml_dict_1_unsorted_list ... ok
test_file_data_yaml_dict_2_sorted_list ... ok
test_file_data_yaml_dict_dict_1_positive_integer_range ... ok
test_file_data_yaml_dict_dict_2_negative_integer_range ... ok
test_file_data_yaml_dict_dict_3_positive_real_range ... ok
test_file_data_yaml_dict_dict_4_negative_real_range ... ok
test_file_data_yaml_list_1_Hello ... ok
test_file_data_yaml_list_2_Goodbye ... ok
test_greater_1_test_2_greater_than_1 (test.test_example.FooTestCase) ... ok
test_greater_2_test_10_greater_than_5 (test.test_example.FooTestCase) ... ok
Test docstring 1 ... ok
Test docstring 2 ... ok
test_larger_than_two_1_3 (test.test_example.FooTestCase) ... ok
test_larger_than_two_2_4 (test.test_example.FooTestCase) ... ok
test_larger_than_two_3_12 (test.test_example.FooTestCase) ... ok
test_larger_than_two_4_23 (test.test_example.FooTestCase) ... ok
Larger than two with value 3 ... ok
Larger than two with value 4 ... ok
Larger than two with value 12 ... ok
Larger than two with value 23 ... ok
test_list_extracted_into_arguments_1__3__2_ (test.test_example.FooTestCase) ... ok
test_list_extracted_into_arguments_2__4__3_ (test.test_example.FooTestCase) ... ok
test_list_extracted_into_arguments_3__5__3_ (test.test_example.FooTestCase) ... ok
Extract into args with first value 3 and second value 2 ... ok
Extract into args with first value 4 and second value 3 ... ok
Extract into args with first value 5 and second value 3 ... ok
test_not_larger_than_two_1_1 (test.test_example.FooTestCase) ... ok
test_not_larger_than_two_2__3 (test.test_example.FooTestCase) ... ok
test_not_larger_than_two_3_2 (test.test_example.FooTestCase) ... ok
test_not_larger_than_two_4_0 (test.test_example.FooTestCase) ... ok
test_tuples_extracted_into_arguments_1__3__2_ (test.test_example.FooTestCase) ... ok
test_tuples_extracted_into_arguments_2__4__3_ (test.test_example.FooTestCase) ... ok
test_tuples_extracted_into_arguments_3__5__3_ (test.test_example.FooTestCase) ... ok
test_undecorated (test.test_example.FooTestCase) ... ok
test_unicode_1_ascii (test.test_example.FooTestCase) ... ok
test_unicode_2_non_ascii__ (test.test_example.FooTestCase) ... ok
test_custom_yaml_loader_10_python_float ... ok
test_custom_yaml_loader_1_bool ... ok
test_custom_yaml_loader_2_str ... ok
test_custom_yaml_loader_3_int ... ok
test_custom_yaml_loader_4_float ... ok
test_custom_yaml_loader_5_python_list ... ok
test_custom_yaml_loader_6_python_dict ... ok
test_custom_yaml_loader_7_my_class ... ok
test_custom_yaml_loader_8_python_str ... ok
test_custom_yaml_loader_9_python_int ... ok
Failure: TypeError (Cannot extend enumerations) ... ERROR
Test the ``data`` method decorator ... ok
Test the ``file_data`` method decorator ... ok
Test the ``ddt`` class decorator ... ok
Test the ``ddt`` class decorator with ``INDEX_ONLY`` test name format ... ok
Test the ``ddt`` class decorator with ``DEFAULT`` test name format ... ok
Test that the ``file_data`` decorator creates two tests ... ok
Test that ``file_data`` creates tests with the correct name ... ok
Test that data is fed to the decorated tests ... ok
Test that data is fed to the decorated tests from a file ... ok
Test that a ValueError is raised when JSON file is missing ... ok
Test that a ValueError is raised when YAML file is missing ... ok
Test the ``__name__`` attribute handling of ``data`` items with ``ddt`` ... ok
Test the ``__doc__`` attribute handling of ``data`` items with ``ddt`` ... ok
Test that unicode strings are converted to function names correctly ... ok
Test not using value if non-trivial arguments ... ok
Test that data is fed to the decorated tests ... ok
Test that YAML files containing python tags throw no exception if an ... ok
Test that YAML files are not loaded if YAML is not installed. ... ok
======================================================================
ERROR: Failure: TypeError (Cannot extend enumerations)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python3.8/site-packages/nose/failure.py", line 39, in runTest
raise self.exc_val.with_traceback(self.tb)
File "/usr/lib/python3.8/site-packages/nose/loader.py", line 522, in makeTest
return self._makeTest(obj, parent)
File "/usr/lib/python3.8/site-packages/nose/loader.py", line 567, in _makeTest
obj = transplant_class(obj, parent.__name__)
File "/usr/lib/python3.8/site-packages/nose/util.py", line 642, in transplant_class
class C(cls):
File "/usr/lib64/python3.8/enum.py", line 124, in __prepare__
member_type, first_enum = metacls._get_mixins_(bases)
File "/usr/lib64/python3.8/enum.py", line 502, in _get_mixins_
raise TypeError("Cannot extend enumerations")
TypeError: Cannot extend enumerations
----------------------------------------------------------------------
Ran 83 tests in 0.122s
FAILED (errors=1)
error: Bad exit status from /var/tmp/rpm-tmp.Doogsq (%check)
Nice utility.
I am wondering if it is possible to provide data created in setUp?
E.g.
def setUp(self):
    self.data1 = createData()
    self.data2 = createData()

def tearDown(self):
    self.data1.delete()
    self.data2.delete()

@data(self.data1, self.data2)
def test(self, data):
    ...
Do I have clean ways to use mock patch with ddt (with @data or something else)? Thanks.
I use Excel to store my test cases. Now I want to add an is_skip column to control whether each test case is executed.
But I can't use unittest.skip(is_skip="Y"); if I use this, all of my test cases will be skipped.
I hope the is_skip value of each line can control whether that case is executed. Can ddt be implemented like this?
Looking forward to your reply : )
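One way this can work today is to decide inside the test body using the row's is_skip value and self.skipTest. The sketch below mimics what @data does by generating one test per row; the row layout and column names are assumptions, not ddt API:

```python
import unittest

# Hypothetical rows as they might be read from the spreadsheet.
ROWS = [
    {"case": "login_ok", "is_skip": "N"},
    {"case": "legacy_flow", "is_skip": "Y"},
]

def should_skip(row):
    """True when the row's is_skip column is 'Y' (case-insensitive)."""
    return str(row.get("is_skip", "N")).strip().upper() == "Y"

class SpreadsheetTest(unittest.TestCase):
    # With ddt, this body would be decorated with @data(*ROWS) so that each
    # row becomes its own generated test; the skip decision is made per row.
    def check_row(self, row):
        if should_skip(row):
            self.skipTest("marked is_skip=Y in the spreadsheet")
        self.assertTrue(row["case"])

# Generate one test method per row, mimicking what @data does.
for i, row in enumerate(ROWS, 1):
    setattr(SpreadsheetTest, "test_row_%d" % i,
            (lambda r: lambda self: self.check_row(r))(row))

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(SpreadsheetTest))
print("run=%d skipped=%d" % (result.testsRun, len(result.skipped)))  # run=2 skipped=1
```

Skipped rows still appear in the report as skipped rather than silently vanishing, which is usually what you want from a spreadsheet-driven suite.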
Would it be possible for the @data decorator to stop early if the test fails? Similar to nosetests -x, except on a method-by-method basis.
I am officially dropping support for ddt. It's had its time, but now that I've transitioned to using py.test in my projects, it brings me no value and I no longer use it.
If you however still see value in it and would like to step up as a maintainer, let me know and we'll manage a transfer of ownership.
Please comment on this issue if you're interested.
Exactly what I was looking for, just that I'd like to pass an iterable of test values to the data decorator, like so:
LANGUAGES = (
    ('en', _('English')),
    ('de', _('German')),
    ('fr', _('French')),
    ('sv', _('Swedish')),
    ('tr', _('Turkish')),
)

@data(LANGUAGES)
def test_that_the_page_loads_succesfully_in_the_given_language(self, language):
    ...
The unpack decorator doesn't help here, as I want to run the test per language.
There is a lack of documentation on how ddt interacts with the setup and teardown methods
Subclasses of ddt-decorated test cases generate duplicate tests
Currently ddt supports feeding test data from a file, but that file has to be in JSON format. Support for YAML seems like a good addition.
When running test suites locally with the new 1.4.0 release, a traceback is raised on import because of the use of nose. For example:
Failed to import test module: test.python
Traceback (most recent call last):
File "/opt/hostedtoolcache/Python/3.7.7/x64/lib/python3.7/unittest/loader.py", line 470, in _find_test_path
package = self._get_module_from_name(name)
File "/opt/hostedtoolcache/Python/3.7.7/x64/lib/python3.7/unittest/loader.py", line 377, in _get_module_from_name
__import__(name)
File "/home/vsts/work/1/s/test/__init__.py", line 17, in <module>
from ddt import data, unpack
File "/opt/hostedtoolcache/Python/3.7.7/x64/lib/python3.7/site-packages/ddt.py", line 15, in <module>
from nose.tools import nottest
ModuleNotFoundError: No module named 'nose'
This is in a stdlib unittest test suite. nose is not listed in the requirements for the new ddt package on PyPI, so this causes a failure since nose will not necessarily be installed. That said, nose is an inactive project and the first section of their docs suggests not using it: https://nose.readthedocs.io/en/latest/#note-to-users so it might be better to remove this import.
In the ddt function, test_docstring = getattr(v, "__doc__", None); when v has a __doc__ attribute, the docstring of v is returned. As a result, the description in the report is the docstring of v (list, tuple, dict), not the docstring of the test function.
Hi
I'm in the need of having nested data in my current tests. So basically I was looking for a way to nest "for" iteration over the provided @data. Is this possible? How?
I could create a for loop in my test itself, but I would lose a proper test map in the statistics. I could create a better logging system to be able to distinguish which specific test failed, but if it's possible to nest @data, that would help me a lot.
Thanks a lot for you help and attention.
Regards
Right now, if you want to pass several arguments to your test, you typically provide a tuple for each test case, and have to unpack them inside the test method, such as:
@data((1, 2, 3), (4, 5, 9))
def test_things_with_3_numbers(self, arg):
    op1, op2, addition = arg
    ...
A nice improvement could be if ddt did this unpacking for you:
@packed_data((1, 2, 3), (4, 5, 9))
def test_things_with_3_numbers(self, op1, op2, addition):
    ...
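The proposed behaviour can be sketched stand-alone. Note that packed_data is the issue's hypothetical name, not a ddt API, and unlike ddt proper this sketch runs all cases inside a single test rather than generating a named test per case (which is what ddt's existing @unpack decorator provides):

```python
import functools

def packed_data(*cases):
    """Run the wrapped method once per case tuple, splatting each tuple
    into the method's positional arguments."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(self):
            for case in cases:
                func(self, *case)
        return wrapper
    return decorator

class FakeTest(object):
    @packed_data((1, 2, 3), (4, 5, 9))
    def test_things_with_3_numbers(self, op1, op2, addition):
        assert op1 + op2 == addition

FakeTest().test_things_with_3_numbers()
print("all cases passed")
```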
When I use ddt and @unpack I get the message
"tuple() -> empty tuple ... ok"
Is there a way to avoid it?
This will correctly report this package as being Python 3 compatible by various checkers.
Programming Language :: Python :: 3
I have a test and I want to decorate it with @idata. The iterable I want to use as the idata argument is a member of my class.
@ddt
class TestRunnable(BaseTest):
    def __init__(self, *args, **kwargs):
        super(TestRunnable, self).__init__(*args, **kwargs)
        self.lst = []

    @idata(self.lst)
    def test_something(self):
        pass
Is it possible to do it?
Currently, if a JSON file is missing for the file_data decorator, the test cases are skipped and no visible error occurs.
Hi there,
today, I have upgraded my DDT package to v1.2.0 from v1.1.3 and I have hit an issue when trying to run:
nosetests -v test_convert.py
In version 1.1.3 I used to get:
test_01_format_code_1 (test_convert.TestConvert) ... ok
while with version 1.2.0 I get:
dict() -> new empty dictionary ... ok
I am using python v3.5.6 on a virtual environment. The test case looks like:
import unittest
from ddt import ddt, data, unpack

@ddt
class TestConvert(unittest.TestCase):
    @data({'title': 'Case 1: Nothing to do',
           'str_in': 'Hello world.',
           'str_out': 'Hello world.'})
    @unpack
    def test_01_format_code(self, title, str_in, str_out):
        # [ my code ]
        pass
# coding:utf-8
import ddt
import unittest

@ddt.ddt
class Test(unittest.TestCase):
    def setUp(self):
        print("Start!")

    def tearDown(self):
        print("end!")

    @ddt.data(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12)
    def test_ddt10(self, data):
        print(data)

if __name__ == "__main__":
    unittest.main()
When I run this script, it returns:
Start!
10
end!
.Start!
11
end!
.Start!
12
end!
.Start!
1
end!
.Start!
2
end!
.Start!
3
end!
.Start!
4
end!
.Start!
5
end!
.Start!
6
end!
.Start!
7
end!
.Start!
8
end!
.Start!
9
end!
I want the data executed in the normal order. Could you help resolve this issue? Thanks!
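For what it's worth, the ordering above is unittest's doing, not an execution bug: test methods run in lexicographic name order, and without zero-padding the suffixes 10-12 sort ahead of 1-9. A quick sketch shows why:

```python
# ddt generates names like test_ddt10_1_1 ... test_ddt10_12_12; unittest then
# runs them sorted by name, and '0' sorts before '_', so "_10" < "_1_".
names = ["test_ddt10_{0}_{1}".format(i, i) for i in range(1, 13)]
print(sorted(names)[:4])
# ['test_ddt10_10_10', 'test_ddt10_11_11', 'test_ddt10_12_12', 'test_ddt10_1_1']
```

Zero-padding the index in the generated name (e.g. _01, _02, ..., _12) would make lexicographic order match data order.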
I maintain the Arch AUR package for this module. I want to implement the test suite when building the package. If I use the pypi tarball as the source, several files are missing.
ImportError: No module named 'test.mycode'
FileNotFoundError: [Errno 2] No such file or directory: '/build/python-ddt/src/ddt-1.0.0/test/test_data_dict.json'
ValueError: test_data_dict.json does not exist
Please include these missing files in the pypi tarball.
ddt-1.0.0/CONTRIBUTING.md
ddt-1.0.0/LICENSE.md
ddt-1.0.0/README.md
ddt-1.0.0/test/__init__.py
ddt-1.0.0/test/mycode.py
ddt-1.0.0/test/test_data_dict.json
ddt-1.0.0/test/test_data_list.json
Hi, in case I would like to just use the index for a test name (which is the code path when is_trivial() returns False), what would be the suggested way to do that? Is the proposal from issue #47 in the right direction (i.e. using kwargs)? Thank you.
In #92, the signature of idata was changed from idata(values) to idata(values, index_len), with no default handling for the latter argument. This means that all current uses of idata(values) are now broken on upgrade to 1.4.3, and there's no calling convention compatible with both 1.4.2 and 1.4.3.
I'm not too familiar with your internals, but glancing through PR #92, it looks like it could be handled safely without affecting that PR by changing the signature to something like:
def idata(values, index_len=None):
    if index_len is None:
        # Avoid accidentally consuming a one-time-use iterable.
        values = tuple(values)
        index_len = len(str(len(values)))
    # ... continue as normal ...
If so, I'd be happy to make the PR.
Currently, if I pass objects in via the decorator, the generated "name" attribute can take on "<" and ">" characters, since an object value falls back to its default repr(). This will cause the JUnit parser to break if you use one in CI. Can you please add some sort of XML character stripping or escaping in the @ddt class decorator?
for name, f in cls.__dict__.items():
    if hasattr(f, MAGIC):
        i = 0
        for v in getattr(f, MAGIC):
            test_name = getattr(v, "__name__", "{0}_{1}".format(name, v))
            # Strip illegal XML characters. - DL
            formatted_test_name = re.sub(r'[<>&/]', '', test_name)
            setattr(cls, formatted_test_name, feed_data(f, v))
            i = i + 1
        delattr(cls, name)
return cls
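The stripping idea can be sketched stand-alone. The character set mirrors the snippet above; a fuller fix might XML-escape rather than strip, and the function name here is just illustrative:

```python
import re

def sanitize_test_name(name):
    """Remove characters that commonly break JUnit XML consumers."""
    return re.sub(r"[<>&/]", "", name)

print(sanitize_test_name("test_1_<MyObject object at 0x7f2a>"))
# test_1_MyObject object at 0x7f2a
```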
As in the subject, I would like to add a parameter to the @ddt decorator:
@ddt(noindex=True)
The behaviour would be fairly straightforward: it would prevent indexing names, so generated functions would be named with only the name and data.
My use case is that I need more control over naming because I use test results in another system, which is very difficult since names lose uniqueness. I can guarantee the uniqueness of the data fed through the decorator.
The ideal approach would probably be providing a template string as a parameter, to be interpolated during creation of test names, but I am not certain that effort would be well received.
Below is a more codified explanation.
@unpack
@data([('a', 'b'), ('c', 'd')])
def test_data(self, val1, val2):
    ...
would generate names:
test_data_a_b
test_data_c_d
instead of:
test_data_1_a_b
test_data_2_c_d
@named_data currently explicitly checks for lists or dicts, but in the case of lists it could just check against sequences (including tuples) and work in exactly the same way.
I am running a test case under a test class as:
@ddt.ddt
class IloCommonModuleTestCase(unittest.TestCase):
    @mock.patch.object(time, 'sleep', lambda x: None)
    def test_case1(self, sleep_mock):
        pass

    @ddt.data(
        ('data1_to_test_case'),
        ('data2_to_test_case'),
        ...,
    )
    @ddt.unpack
    def test_case2(self, test_sample_data):
        pass
wherein I have other test cases which use ddt.data and ddt.unpack as depicted. Now when I run the test cases, it fails in test_case1, and I believe this is something to do with ddt:
Traceback (most recent call last):
File ".../local/lib/python2.7/site-packages/mock/mock.py", line 1305, in patched
return func(*args, **keywargs)
TypeError: test_case1() takes exactly 2 arguments (1 given)
But if I remove the custom value in the patching decorator, then it works fine.
@mock.patch.object(time, 'sleep')
def test_case1(self, sleep_mock):
    pass
Hence, I feel something is missing in how providing a custom value to mock's patch decorators interacts with a class decorated by @ddt.ddt.
Thank you.
A change in the latest release of ddt (#22 included in 1.0.1) has seemingly introduced breaking behavior for wrapped tests expecting a dictionary object when receiving test data, as exemplified below:
test_data.json
{
    "test_name": {
        "arg_one": "test1",
        "arg_two": "test2",
        "arg_three": "test3"
    }
}
Original test code:
@ddt.file_data('test_data.json')
def test_something(inputs):
    arg_one = inputs["arg_one"]
    arg_two = inputs["arg_two"]
    arg_three = inputs["arg_three"]
    # Perform tests here
    pass
New test code:
@ddt.file_data('test_data.json')
def test_something(arg_one, arg_two, arg_three):
    # Perform tests here
    pass
Since this behavior appears to be non-configurable and backward-incompatible, this seems more like a major version change rather than a simple patch. Can this change be reverted at least for the time being?
With unittest it is possible to run specific tests from the CLI. When using ddt this doesn't seem to be possible.
Example:
import unittest
from ddt import data, ddt

class Test(unittest.TestCase):
    def test(self):
        self.assertEqual(1, 1)

@ddt
class TestDDT(unittest.TestCase):
    @data('arg1', 'arg2')
    def test(self, arg='arg1'):
        self.assertEqual(1, 1)

if __name__ == '__main__':
    unittest.main(exit=False)
Running specific unittest:
$ python test.py Test.test
.
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
Running specific unittest with ddt:
$ python test.py TestDDT.test
E
======================================================================
ERROR: test (unittest.loader._FailedTest)
----------------------------------------------------------------------
AttributeError: type object 'TestDDT' has no attribute 'test'
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (errors=1)
It would be great if you could add a license to this project. Thanks!
Hi,
Great project, you've really improved our testing framework, thanks.
I appear to have found some unexpected behaviour when combining @file_data and @unpack when the JSON file contains a list of lists of strings.
Say I have j.json:
[
    ["some", "data"],
    ["another", "point"]
]
and example.py:
import unittest
from ddt import ddt, file_data, unpack

@ddt
class Tests(unittest.TestCase):
    @file_data('j.json')
    @unpack
    def test(self, first, second):
        return

unittest.main()
and run python example.py, I get an unexpected error:
Traceback (most recent call last):
File "/.../py3/lib/python3.6/site-packages/ddt.py", line 139, in wrapper
return func(self, *args, **kwargs)
TypeError: test() missing 1 required positional argument: 'second'
I would expect second to be "data" for the first generated test, then "point" for the second, but in fact first contains a list, ["some", "data"], meaning that it wasn't unpacked. Is this behaviour a bug or a feature? If a bug, should I try to fix it and open a PR?
I should point out: I know I could use dict/map structures in the JSON, but for our cases this is not desirable.
Let's take a look at this simple test:
import unittest

import mock
from ddt import ddt, data

@ddt
class Test(unittest.TestCase):
    @data(mock.Mock())
    def test_dict(self, value):
        pass
This leads to different test names on different runs:
test_dict_1__Mock_id__139806852956624__ (tests.test_d.Test) ... ok
test_dict_1__Mock_id__140027200111056__ (tests.test_d.Test) ... ok
This is a serious problem when running tests with testr.
Testr generates a list of all tests and divides it into N parts, depending on the number of CPUs in the system.
Then it forks N processes, and each process loads one list.
Tests with mock.Mock in their arguments are silently skipped because of the unique mock id in the test name.
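The instability comes from Mock's default repr, which embeds a per-process object id; ddt folds that repr into the generated test name:

```python
from unittest import mock

# Two separate Mock objects (or the same Mock in two different runs) repr to
# different strings, so the test names derived from them differ too.
print(repr(mock.Mock()))
print(repr(mock.Mock()))
```

Giving the data object a stable __name__ or __str__ (as the named-data helpers earlier in this collection do) sidesteps the problem.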
There are currently no Python 3.6/3.7 targets in tox.ini and travis.yml; we should add them and test.
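A minimal sketch of what the tox.ini change could look like (the exact existing envlist is an assumption; only py36 and py37 are the additions being proposed):

```ini
[tox]
envlist = py27, py34, py35, py36, py37
```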
In the Example Usage section of the documentation, below the code snippet, there is a section which, I suppose, should show the contents of the files used in the snippet.
It currently shows nothing.
Where test_data_dict_dict.json:
and test_data_dict_dict.yaml:
and test_data_dict.json:
and test_data_dict.yaml:
and test_data_list.json:
and test_data_list.yaml:
Paths in example.rst seem to be OK.
It also seems that the latest docs are not built from the actual latest version of the RST document.
I would like to suggest adding some sort of class-level ddt.data. An example to show what I mean:
def is_valid_metavar(metavar):
    return metavar in ['foo', 'bar', 'baz']

@ddt.ddt
@ddt.data('foo', 'bar')
class TestMetavars(unittest.TestCase):
    def test_valid(self, metavar):
        self.assertTrue(is_valid_metavar(metavar))

    def test_something_else(self, metavar):
        self.assertEqual(len(metavar), 3)

    # this overrides the previous variable
    @ddt.data('ulf', 'flup')
    def test_invalid(self, metavar):
        self.assertFalse(is_valid_metavar(metavar))

    # or (better?) without implicit overrides
    @ddt.data(
        {'metavar': 'ulf'},
        {'metavar': 'flup'})
    def test_invalid(self, metavar):
        self.assertFalse(is_valid_metavar(metavar))
I would be happy to send a pull request implementing this if you're happy with the API. This should be a backwards-compatible change.
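To make the proposal concrete, here is a stdlib-only sketch of what a class-level decorator could do (class_data is a hypothetical name, not part of ddt; ddt's real implementation would differ):

```python
import unittest

# Hypothetical class-level decorator: clone every test_* method once per
# data point, appending the index and value to the generated test name.
def class_data(*values):
    def wrapper(cls):
        for name in [n for n in vars(cls) if n.startswith("test")]:
            func = cls.__dict__[name]
            delattr(cls, name)
            for i, value in enumerate(values, start=1):
                # Bind func/value via defaults so each clone keeps its own.
                def case(self, _func=func, _value=value):
                    return _func(self, _value)
                setattr(cls, "%s_%d_%s" % (name, i, value), case)
        return cls
    return wrapper

@class_data('foo', 'bar')
class TestMetavars(unittest.TestCase):
    def test_length(self, metavar):
        self.assertEqual(len(metavar), 3)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestMetavars)
print(suite.countTestCases())  # 2
```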
I noticed that @txels had expressed the desire to ditch backwards compatibility and give ddt
another shot to get a more powerful platform on the other side. Hence, with the intent of sparking discussion, I write this.
Here's a simple test case:
import unittest
from ddt import ddt, data

@ddt
class Test(unittest.TestCase):
    @data(1)
    def test_scalar(self, value):
        pass

    @data({"a": 1, "b": 2})
    def test_dict(self, value):
        pass
When run with python3.4 `which nosetests` -v test.py, you'll get (potentially) different test names on different runs:
$ nosetests -V
nosetests version 1.3.1
$ python3.4 `which nosetests` -v test.py
test_dict_1___b___2___a___1_ (test.Test) ... ok
test_scalar_1_1 (test.Test) ... ok
----------------------------------------------------------------------
Ran 2 tests in 0.001s
OK
$ python3.4 `which nosetests` -v test.py
test_dict_1___a___1___b___2_ (test.Test) ... ok
test_scalar_1_1 (test.Test) ... ok
----------------------------------------------------------------------
Ran 2 tests in 0.001s
OK
Why this is a problem: it breaks TestId functionality (e.g. it makes running nosetests --with-ids --failed fail). Being able to re-run only failed tests (--failed) is a super-useful feature, so it not working is a big problem (a time-waster).
The reason this happens is that from Python 3.3 onwards, hash randomization has been enabled by default (see https://docs.python.org/3.3/using/cmdline.html#cmdoption-R). This means that dict key hashes (at least for types whose hash function depends on the system hash()) change between runs, and thus the naming of tests using @data with dicts (or sets/frozensets, for that matter) will vary from one run to another.
Although there is a simple workaround (use a fixed PYTHONHASHSEED when running tests), this is still not a completely good solution, for a few reasons. On the other hand, I am not sure how to fix this. A few possibilities:
1. Generate a deterministic string representation for dicts (and sets) when building test names, e.g. by recursively sorting keys.
2. Warn when tests are run without PYTHONHASHSEED being set.
3. Have mk_test_name omit the str(value) bit on Python >= 3.3 if PYTHONHASHSEED hasn't been set. This would generate test names based only on the running index, keeping them unique and immune to hash key randomization.
(And in any case, add a note about this behavior to the @data documentation.)
I think the first one is a can of worms since you'd need to handle recursive dicts and it really does not help with complex types that are not dicts but internally use dicts and produce str
values from those.
The second one would help, but would essentially require changes to test environment when moving to >= 3.3 python on existing projects, and the warning might also be missed by a lot of people.
I think the third one would present the least surprise to people as it'd at least keep test cases accessible and running, yet makes it possible to get the full name (with test data str-encoded) by introducing PYTHONHASHSEED
to the test environment by the user.
Comments, thoughts?
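For the first possibility, a sketch of how a deterministic name fragment could be built (stable_fragment is a hypothetical helper; it only handles the simple recursive cases, which is exactly why I called this option a can of worms):

```python
# Hypothetical helper: build a name fragment whose ordering cannot be
# affected by hash randomization, by recursively sorting dict keys and
# sorting set members by their repr.
def stable_fragment(value):
    if isinstance(value, dict):
        return "_".join("%s_%s" % (k, stable_fragment(v))
                        for k, v in sorted(value.items()))
    if isinstance(value, (set, frozenset)):
        return "_".join(stable_fragment(v) for v in sorted(value, key=repr))
    return str(value)

print(stable_fragment({"b": 2, "a": 1}))  # a_1_b_2
```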
When using the @data decorator, ddt attempts to dynamically determine the zero-padding for test names. The problem is that a global value (index_len) is set for each test that uses @data, but index_len is not read until all tests have been added, meaning only the index_len of the last test in the suite takes effect. If I have test A with 300 data sets and test B with 2 data sets, no zero-padding will occur in the test names.
This also makes existing test names change if new tests or new data sets are added in the future.
I would suggest removing the zero-padding (index_len) altogether to avoid dynamic test case name changes. Failing that, index_len should use the maximum across all test cases that use @data, not just the last one.
tl;dr: a combination of the @idata and @file_data decorators.
Add a decorator @ifile_data to pass multiple files as data items into a test function.
Files would be specified by a glob pattern relative to the test file's directory.
The generated test names would include the file name of each data file, e.g.:
MyTest_0_some_1234_json
MyTest_1_some_xyz_json
An example of how this decorator would be used:
@ddt
class MyTestCase(unittest.TestCase):
    @ifile_data('./data/some_*.json')
    def MyTest(self, some_data):
        self.assertIsNotNone(some_data)
I am using this as a workaround for now:
from unittest import TestCase
import json
from pathlib import Path

from ddt import ddt, idata

class NamedList(list):
    pass

def get_file_data(glob_pattern):
    result = []
    for file in Path(__file__).parent.glob(glob_pattern):
        with open(file) as reader:
            dataitem = NamedList(json.load(reader))
            setattr(dataitem, '__name__', file.name)
            result.append(dataitem)
    return result

@ddt
class FileDataTests(TestCase):
    @idata(get_file_data('./data/some_*.json'))
    def test_file_data(self, some_data):
        self.assertIsNotNone(some_data)
All Python versions older than 3.7 are EOL. Please consider dropping support for them by making a new major version release.