python-jsonschema / hypothesis-jsonschema

Tools to generate test data from JSON schemata with Hypothesis

Home Page: https://pypi.org/project/hypothesis-jsonschema/

License: Mozilla Public License 2.0

Python 100.00%
hypothesis python json-schema property-based-testing

hypothesis-jsonschema's Introduction

hypothesis-jsonschema

A Hypothesis strategy for generating data that matches some JSON schema.

Here's the PyPI page.

API

The public API consists of just one function: hypothesis_jsonschema.from_schema, which takes a JSON schema and returns a strategy for allowed JSON objects.

import json
import re

from hypothesis import given, strategies as st

from hypothesis_jsonschema import from_schema

EXAMPLE_CARD_NUMBERS = ["4111 1111 1111 1111"]  # placeholder sample data for the example


@given(from_schema({"type": "integer", "minimum": 1, "exclusiveMaximum": 10}))
def test_integers(value):
    assert isinstance(value, int)
    assert 1 <= value < 10


@given(
    from_schema(
        {"type": "string", "format": "card"},
        # Standard formats work out of the box.  Custom formats are ignored
        # by default, but you can pass custom strategies for them - e.g.
        custom_formats={"card": st.sampled_from(EXAMPLE_CARD_NUMBERS)},
    )
)
def test_card_numbers(value):
    assert isinstance(value, str)
    assert re.match(r"^\d{4} \d{4} \d{4} \d{4}$", value)


@given(from_schema({}, allow_x00=False, codec="utf-8").map(json.dumps))
def test_json_payloads(payload):
    assert isinstance(payload, str)
    assert "\0" not in payload  # use allow_x00=False to exclude null characters
    # If you want to restrict generated strings to characters which are valid
    # in a specific character encoding, you can do that with the `codec=` argument.
    payload.encode("utf-8")

For more details on property-based testing and how to use or customise strategies, see the Hypothesis docs.

JSON Schema drafts 04, 05, and 07 are fully tested and working. As of version 0.11, this includes resolving non-recursive references!

Supported versions

hypothesis-jsonschema requires Python 3.6 or later. In general, 0.x versions will require very recent versions of all dependencies because I don't want to deal with compatibility workarounds.

hypothesis-jsonschema may make backwards-incompatible changes at any time before version 1.x - that's what semver means! - but I've kept the API surface small enough that this should be avoidable. The main source of breakage will be if or when schemas that never really worked start raising explicit errors instead of generating values that don't quite match.

You can sponsor me to get priority support, roadmap input, and prioritized feature development.

Contributing to hypothesis-jsonschema

We love external contributions - and try to make them both easy and fun. You can read more details in our contributing guide, and see everyone who has contributed on GitHub. Thanks, everyone!

Changelog

Patch notes can be found in CHANGELOG.md.

Security contact information

To report a security vulnerability, please use the Tidelift security contact. Tidelift will coordinate the fix and disclosure.

hypothesis-jsonschema's People

Contributors

jayvdb, jjpal, kathyreid, stephan-kashkarov, stranger6667, zac-hd


hypothesis-jsonschema's Issues

Optional properties are not generated for objects when there is only one property specified

Hello!

I found some unexpected behavior in object generation. Given this schema:

SCHEMA = {
    "properties":  {
        "key": {
            "type": "string"
        }
    }, 
    "additionalProperties": False, 
    "type": "object", 
    "required": []
}

Only empty dictionaries are generated, however {"key": ""} is a valid example as well. Test case:

from hypothesis import given
from hypothesis_jsonschema import from_schema

schema = from_schema(SCHEMA)

@given(query=schema)
def test(query):
    print(query)

Test session output:

⇒  pytest test_example.py -s --hypothesis-show-statistics
======================== test session starts ========================
platform linux -- Python 3.7.4, pytest-5.1.2, py-1.8.0, pluggy-0.13.0
rootdir: /tmp, inifile: pytest.ini
plugins: hypothesis-4.50.2, mock-1.10.0, env-0.6.2, asyncio-0.10.0, subtests-0.2.1, recording-0.3.5, xdist-1.26.1, forked-1.0.1, cov-2.6.1, aiohttp-0.3.0, schemathesis-0.18.1
collected 1 item                                                    

test_example.py {}
.
======================= Hypothesis Statistics =======================

test_example.py::test:

  - 1 passing examples, 0 failing examples, 0 invalid examples
  - Typical runtimes: < 1ms
  - Fraction of time spent in data generation: ~ 47%
  - Stopped because nothing left to do

========================= 1 passed in 0.01s =========================

Setting additionalProperties: True changes the generation process and key appears in the generated data. Also, adding a second property makes generation work.

hypothesis-jsonschema version is 0.9.10, but the behavior is reproducible with older versions as well (tried down to 0.9.7).

My investigation so far:

elements = cu.many(
            draw(st.data()).conjecture_data,
            min_size=min_size,
            max_size=max_size,
            average_size=min(min_size + 5, (min_size + max_size) // 2),
        )

average_size is 0 here (given min_size=0 and max_size=1), which makes elements.more() return False immediately, so nothing is generated. Should we use `//` here? With `/`, values are generated as expected, but I am not sure how this heuristic is calculated.
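
For concreteness, here is what that heuristic evaluates to with the two operators (min_size=0, max_size=1):

min(0 + 5, (0 + 1) // 2)  # == 0, so elements.more() returns False on the first call
min(0 + 5, (0 + 1) / 2)   # == 0.5, a nonzero average, so "key" is sometimes drawn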

What do you think?

Aliases for RFC 3339 formats are defined but not used

Hi!

In rfc3339 I see the following conditions:

if name == "date" or name == "full-date":
    return st.dates().map(str)
if name == "time" or name == "full-time":
    return st.tuples(rfc3339("partial-time"), rfc3339("time-offset")).map("".join)

The first comparisons are always false because of the following line at the beginning of the function:

assert name in RFC3339_FORMATS

date and time are not in RFC3339_FORMATS, and the format value is checked against known_formats in the string_schema function.

Found this when I was trying to generate data for Open API with {"format": "date", "type": "string"} in Schemathesis.

At the moment, I can extend Schemathesis with a custom format for date, but as these formats are a part of JSON Schema, I believe that those aliases should be added to STRING_FORMATS.
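
For illustration, the addition might look something like this (a sketch using the names above; the exact form in STRING_FORMATS may differ):

STRING_FORMATS.update(
    {
        "date": rfc3339("full-date"),
        "time": rfc3339("full-time"),
    }
)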

What do you think?

How to specify alphabet for certain string field in jsonschema file?

I have a field "name" with "type": "string" in a jsonschema file. Is there any way to specify an alphabet for the name field when I call hypothesis_jsonschema.from_schema? Or any other way to work around it, so I can avoid strings like '\U000e7228\U000edb03\U000fa8b4\U000eadf5\U000687c6\U0005b0d4\U000b83fb' and get something more human-readable?

I ask because string in jsonschema seems to map to text in Hypothesis. hypothesis.strategies.text has an alphabet argument, so it would be nice if string fields from jsonschema could use it too.
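
In the meantime, one schema-level workaround is the pattern keyword, which hypothesis-jsonschema respects - a sketch:

from hypothesis_jsonschema import from_schema

# Restrict "name" to ASCII letters and spaces via a JSON Schema pattern:
schema = {
    "type": "object",
    "properties": {"name": {"type": "string", "pattern": "^[A-Za-z ]*$"}},
}
from_schema(schema).example()  # e.g. {"name": "abc"}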

Thanks.
Ke

Merge overlapping `items` subschemas

In hypothesis_jsonschema._canonicalise.merged(), we should calculate the intersection of items schemas.

This is complicated somewhat by the possibility that one or both schemas to merge may have a list-of-items-schemas and an additionalItems schema, or a single schema for all items. Fortunately we have analogous logic for object properties, so it's more fiddly than conceptually challenging.
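
For illustration, a merge we would like to support (the desired output here is my reading of the issue, not current behaviour):

from hypothesis_jsonschema._canonicalise import merged

merged([{"items": {"type": "integer"}}, {"items": {"minimum": 0}}])
# desired: {"items": {"type": "integer", "minimum": 0}}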

Canonicalise `oneOf` sub-schemas to exclude shared types

Consider the schema {"oneOf": [{"required": ["foo"]}, {"required": ["bar"]}]}

What values could match this? Because the two sub-schemas only constrain objects, all values of any other type will be valid against both sub-schemas... and therefore invalid against the oneOf schema.

This tends to result in a lot of filtering, so we have a workaround in our tests for this test case.

Obviously it would be nicer to support such schemas properly, by canonicalising them to include type: object (i.e. exclude other types) and therefore translate them to a Hypothesis strategy which doesn't have to filter out so much.

The semantics are a bit too tricky for me to tag this good first issue, though the tests will catch any plausible mistakes you could make. We have some prior art in the handling of "not" sub-schemas, which might provide a good starting point; you might even move shared logic to a new helper function.

Alternatively, 'lowering' oneOf: [a, b, c] to anyOf: [{**a, not: {anyOf: [b, c]}}, ...] would automatically share any improvements to not-logic, which would be nice. Just a question of balancing that against the less elegant canonical form.
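
Written out for the two-branch schema above, the proposed lowering would be (illustration only):

one_of = {"oneOf": [{"required": ["foo"]}, {"required": ["bar"]}]}
lowered = {
    "anyOf": [
        {"required": ["foo"], "not": {"anyOf": [{"required": ["bar"]}]}},
        {"required": ["bar"], "not": {"anyOf": [{"required": ["foo"]}]}},
    ]
}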

Generate integers for "number" schema?

At the moment the "number" type corresponds to the st.floats() strategy, but as integers are valid for the "number" type in JSON Schema, I am wondering whether it would make sense to generate integers there as well.

I believe it may increase the output variation. What do you think?
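
For reference, both kinds of value are valid against {"type": "number"} per the spec:

import jsonschema

jsonschema.validate(1, {"type": "number"})    # passes: integers are numbers
jsonschema.validate(1.5, {"type": "number"})  # passes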

I was thinking about your comment in the context of negating schemas, and tried to canonicalise {"not": {"minimum": 42}}, which gives {'not': {'minimum': 42, 'type': 'number'}, 'type': 'number'}. Even though it is a valid transformation, it seems too restrictive. For example, the end-user might expect that negating {"minimum": 42} will give e.g. 41, or some other integers, but there will always be floats. Having type: integer in place solves this ({"not": {"minimum": 42}, "type": "integer"} works), but sometimes it is missing.

Add some way of resolving out-of-schema references

I really don't like references to other schemas, because they make the meaning temporally variable and unreliable - the referenced schema can change or become inaccessible. Dealing with them also adds IO, which makes everything harder.

Nonetheless, they are useful and in practice widely used, so we can't ignore them forever. I would therefore like to add a function which fetches external schemas, adds them to a definitions section, and rewrites the $ref keys to point to the new in-schema copies.

The actual from_schema() implementation can then continue to ignore IO, and any problems will at least be isolated in their own function!
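
A minimal sketch of such a function (simplified: it ignores URL fragments and remote documents that themselves contain remote references):

import json
import urllib.request


def inline_remote_refs(schema):
    # Fetch each remote $ref target, store it under #/definitions,
    # and rewrite the $ref key to point at the local copy.
    defs = schema.setdefault("definitions", {})

    def walk(node):
        if isinstance(node, dict):
            ref = node.get("$ref", "")
            if ref.startswith(("http://", "https://")):
                key = f"remote_{len(defs)}"
                with urllib.request.urlopen(ref) as response:
                    defs[key] = json.load(response)
                node["$ref"] = f"#/definitions/{key}"
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(schema)
    return schema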

Merging object schemas with `required` and `additionalProperties: False` should return the empty schema

https://github.com/Zac-HD/hypothesis-jsonschema/blob/3047ec70fc05123d7253be09bfb74490e59e06e6/tests/test_canonicalise.py#L192-L196

This test will pass when merged can tell that "required": ["name"] in the first schema conflicts with "additionalProperties": False in the second, and returns the empty schema {"not": {}}.

Note that if other types of value are allowed, it should instead just remove "object" from the types.
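
The conflict in miniature (illustrative - the "desired" output is the point of this issue, not current behaviour):

from hypothesis_jsonschema._canonicalise import merged

merged([
    {"type": "object", "required": ["name"]},
    {"type": "object", "additionalProperties": False},
])
# desired: {"not": {}} - no object can require "name" while allowing no properties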

Possible improvements on not supported & invalid regexes

In web APIs, users often use regular expressions syntax supported by their backend, and sometimes it is not compatible (in some areas) with the one supported by JSON Schema.

For example, this AWS API uses character classes that are supported by Java, such as \p{Alpha}. This syntax is not supported by Python's stdlib re module, and currently hypothesis-jsonschema uses st.nothing() for such cases. In the simplest case, this leads to Unsatisfiable, as there are no values in that strategy.

But consider an array:

{
    "items": {
        "pattern": "\p{Alpha}",
        "type": "string",
    },
    "maxItems": 50,
    "minItems": 0,
    "type": "array",
}

Even though we don't support generating strings for such regular expressions in the schema above, we still can generate an empty array that will match the schema. The same could be applied to optional properties, etc.

The current error output for the schema above:

hypothesis.errors.InvalidArgument: Cannot create a collection of max_size=50, because no elements can be drawn from the element strategy nothing()

From the user's perspective, it would be nice to expose some information about why this happens (the unsupported regex syntax). For cases when we can still generate data without those items/properties, it might be a warning; for cases when we can't, a better error message would be great (e.g., if there is minItems: 1).

What do you think?

P.S. I am pretty sure that I saw a different InvalidArgument error that was also connected to drawing from nothing() - I will post an update once I find it

RFC3339 doesn't produce valid results in some cases

Hello Zac!

I found out that the rfc3339 function might produce invalid results sometimes, and here are a few things to change to fix the issue:

  • Add zero-filling to date-fullyear (4), date-month (2), date-mday (2), time-hour (2), time-minute (2), time-second (2) - see the snippet after this list;
  • Use `%s%s:%s` formatting in time-numoffset to avoid putting the `:` before the hour value
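
For the first point, Python's format specifiers handle the padding, e.g.:

year, month, day = 33, 6, 1
assert f"{year:04d}-{month:02d}-{day:02d}" == "0033-06-01"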

I will submit a PR with the changes for review soon :)

Wrong result from internal `merged([{'multipleOf': 2}, {'not': {'maximum': 0}}])`

I think this is something to do with the intermediate inference of the types as integer and number respectively, and the way those types actually overlap. Could be located in canonicalisation or merging, really.

As well as adding an explicit regression test we'll want to re-enable the "draft7/validate against correct branch, then vs else" entry in our test corpus.

Excessive filtering when generating object properties

I’m bumping into the filter_too_much hypothesis health check.

I seem to be unable to reproduce the actual health check failure on a small example, but running the following test with --hypothesis-show-statistics shows filtering failure events:

from hypothesis import given
from hypothesis_jsonschema import from_schema

SCHEMA = {
    'type': 'object',
    'additionalProperties': False,
    'required': ['foo'],
    'properties': {
        'foo': {'type': 'integer'}
    }
}

@given(from_schema(SCHEMA))
def test_excessive_filtering(instance):
    pass
$ py.test --hypothesis-show-statistics
[…]
  - during generate phase (0.65 seconds):
    - Typical runtimes: < 1ms, ~ 88% in data generation
    - 99 passing examples, 0 failing examples, 475 invalid examples
    - Events:
      * 82.75%, Aborted test because unable to satisfy sampled_from(['foo']).filter(<jsonschema.validators.create.<locals>.Validator object at 0x7ff9a9962b00>.is_valid).filter(lambda s: s not in out)
      * 82.75%, Retried draw from sampled_from(['foo']).filter(<jsonschema.validators.create.<locals>.Validator object at 0x7ff9a9962b00>.is_valid).filter(lambda s: s not in out) to satisfy filter

  - Stopped because settings.max_examples=100

As far as I can understand the black magic in from_object_schema, here’s what happens:

  • It counts the minimum possible number of properties in the object, which is the number of required properties, which is 1.
  • It determines the maximum possible number of properties, which is +∞. (Seriously, does anybody ever set maxProperties on a schema for an object with all properties known?)
  • It draws some elements, no fewer than min_size (1) and on average min_size + 5 (6).
  • For each element drawn:
    • If a required property is missing, it generates that property. (This happens for the first element and property foo.)
    • If all required properties are set, it sees if any dependent properties are missing, and generates those. (My schema does not have any of these.)
    • Otherwise, it tries to generate an arbitrary property name it could add. But the only statically named property is already set, and no dynamic names are allowed, so it rejects the element. This happens, on average, 5 times per generated object.

The filtering events go away with this trivial patch:

--- _from_schema.py.orig	2020-06-15 03:14:06.321863065 +0700
+++ _from_schema.py	2020-06-15 03:13:56.965921488 +0700
@@ -474,6 +474,9 @@
     additional = schema.get("additionalProperties", {})
     additional_allowed = additional != FALSEY
 
+    if not patterns and not additional_allowed:
+        max_size = min(max_size, len(properties))
+
     dependencies = schema.get("dependencies", {})
     dep_names = {k: v for k, v in dependencies.items() if isinstance(v, list)}
     dep_schemas = {k: v for k, v in dependencies.items() if k not in dep_names}

Resolve all references before canonicalising

Currently, we don't handle schemas with references at all - in our test suite, schemas with $ref keys just get skipped. Ideally the first step of canonicalisation would be to replace each reference with the referred-to schema, and raise an explicit error if there are any recursive references.

Handle remaining keywords when merging schemas

https://github.com/Zac-HD/hypothesis-jsonschema/blob/c16b93bde5616fa4ecbe2808d7ba9fe7a221faf4/src/hypothesis_jsonschema/_canonicalise.py#L408-L420

The merged(schemas) function outputs a single schema which matches all and only instances matched by all of the input schemas, or None if there is no such schema (without e.g. allOf).

This issue will be closed when merged() understands how to merge all keywords defined in the spec that can in principle be merged. This includes e.g. maximum (take the min), but not contains (which requires at least one array item valid against each).
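
For example, merging numeric bounds takes the tighter constraint (desired behaviour, shown with the internal merged() helper):

from hypothesis_jsonschema._canonicalise import merged

merged([{"maximum": 10}, {"maximum": 5}])  # desired: {"maximum": 5}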

Failing test_canonicalises_to_equivalent_fixpoint

Currently the test_canonicalises_to_equivalent_fixpoint test fails on assert cc == canonicalish(cc) with the following schema:

schema = {'not': {'anyOf': [{'type': 'number'}, {'if': {'type': 'null'}, 'then': {'type': 'null'}, 'else': {}}]}}

As far as I can see, calling canonicalish a second time should not transform the schema again. But here is the schema after the first call:

{
    "not": {"anyOf": [{"const": None}, {"not": {"const": None}}]},
    "type": ["null", "boolean", "string", "array", "object"],
}

and after the second call:

{
    "not": {
        "anyOf": [
            {"const": None},
            {
                "type": ["null", "boolean", "string", "array", "object"],
                "not": {"const": None},
            },
        ]
    },
    "type": ["null", "boolean", "string", "array", "object"],
}

git bisect gave me this commit - afc292b

Should the second call leave the input schema as is?

Bug in the caching implementation

After trying out atheris, based on your example (it is awesome, I'd say!) I found an interesting bug in caching that comes from the following fact:

>>> hash(-2)
-2
>>> hash(-1)
-2

From PEP-456:

The internal interface code between the hash function and the tp_hash slots implements special cases for zero length input and a return value of -1. An input of length 0 is mapped to hash value 0. The output -1 is mapped to -2.

It leads to a problem of wrong canonicalisation: e.g. if {'exclusiveMaximum': 1, 'exclusiveMinimum': -1, 'type': 'number'} was cached first, then applying canonicalisation to {'exclusiveMaximum': 1, 'exclusiveMinimum': -2, 'type': 'number'} will return the cached result for the first schema, {'exclusiveMaximum': 1, 'exclusiveMinimum': -1, 'type': 'number'} :(
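
The mechanism in miniature - a cache keyed on hash values alone cannot tell these apart, whereas a dict keyed on the values themselves would fall back to an equality check:

cache = {hash(-1): "result for the exclusiveMinimum=-1 schema"}
cache.get(hash(-2))  # hits the -1 entry, because hash(-2) == hash(-1) == -2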

-1 is quite common, and these cache collisions make me question the current implementation - I am not completely sure how to implement caching efficiently enough. However, in #69, after reducing how many schemas are inlined, the performance improved dramatically, and I am not sure if this caching layer is worth having (at least in the current implementation).

What do you think?

Code review suggestions

Thanks for the nice package!

I have an implementation of a JSON schema strategy in Hypothesis [1], and was reviewing your implementation out of interest to see how you approached it.

We have a need for such a package, but I'd rather not maintain my own implementation. That's why I thought it might be useful to contribute some suggestions to help improve your implementation (I mean this in the most constructive way possible!). Here they are:

  1. JSON_STRATEGY is implemented using defer, but I think recursive might be a better fit here (see gen_document). Seems more idiomatic to me, and that way you have control over the depth of the recursion too.
  2. The implementation of object_schema comments that you do black magic with private Hypothesis internals. I was able to implement the object strategy without having to resort to those techniques [4]. Although it admittedly does not implement patternProperties, I suspect you could use the same idea in your implementation to remove the black magic.
  3. According to the JSON schema spec, a string pattern will match anywhere in the string [2]. If you want to match a full string, they recommend writing the pattern as ^...$. Unfortunately, in Python re.search('^abc$', 'abc\n') finds a match (!), while re.search('^abc$', 'abc\n\n') does not. Meanwhile in JSON schema, neither of those two cases results in a match. Therefore, the from_regex strategy needs to be adapted to fix that edge case (demonstrated in the snippet after this list).
  4. Looks like you haven't implemented the case that an array has uniqueItems = true and the items are a list.
  5. Also, if an array's items are a dict, then the item's type property may be either a string or a list [3]. Looking at your implementation, it doesn't look like you handle the latter case.
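
To make suggestion 3 concrete, here is the behaviour in question, plus the \A...\Z anchoring that avoids the trailing-newline quirk:

import re

assert re.search("^abc$", "abc\n")         # matches - surprising, but documented
assert not re.search("^abc$", "abc\n\n")   # no match
assert not re.search(r"\Aabc\Z", "abc\n")  # \A...\Z anchors to the true string ends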

[1] https://gist.github.com/lsorber/4d902733772fab270e6eb9f1c77a3690
[2] https://json-schema.org/understanding-json-schema/reference/string.html#regular-expressions
[3] https://gist.github.com/lsorber/4d902733772fab270e6eb9f1c77a3690#file-hypothesis-jsonschema-py-L84
[4] https://gist.github.com/lsorber/4d902733772fab270e6eb9f1c77a3690#file-hypothesis-jsonschema-py-L110

Could not resolve recursive references when trying to generate swaggers

hello, I would like to use hypothesis-jsonschema to test a library to manage swagger specifications (https://github.com/sdementen/oasapi).
I am using the jsonschema for the swagger https://github.com/sdementen/oasapi/blob/master/src/oasapi/schemas/schema_swagger.json.

When I try to run the following test

import json
from pathlib import Path

from hypothesis import given
from hypothesis_jsonschema import from_schema

schema = json.load(Path(r"..\oasapi\src\oasapi\schemas\schema_swagger.json").open())

@given(from_schema(schema))
def test_swagger(value):
    assert isinstance(value, dict)

I receive the error hypothesis_jsonschema._canonicalise.HypothesisRefResolutionError: Could not resolve recursive references in schema={...}.

Am I using the library properly? If so, is there some workaround to avoid this infinite recursion (even if it restricts the range of objects that could be generated by hypothesis)?

Bug in object schema merging

The test cases below demonstrate cases where merging is unsound - we return a schema which matches some instances not matched by one of the inputs. We should instead be incomplete here, returning None to signify our inability to merge. Prompted by the rediscovery in #39.

https://github.com/Zac-HD/hypothesis-jsonschema/blob/13638d9966be40b9849eaeaaaeba0bfa372e2542/tests/test_canonicalise.py#L367-L377

https://github.com/Zac-HD/hypothesis-jsonschema/blob/13638d9966be40b9849eaeaaaeba0bfa372e2542/tests/test_canonicalise.py#L380-L388

There's a good chance that I'll make merged smarter about the interactions between properties, patternProperties, and additionalProperties in the process, but soundness is definitely the priority here.

Support draft 6 and 7 via jsonschema>=3.0

Turns out that the current stable version of jsonschema (2.6) doesn't support validation of draft06 or draft07. pip install --upgrade --pre jsonschema gets the 3.0 beta, which does - we should do that and fix the resulting problems.

The strategy-from-schema part only needs support for numeric exclusiveMin and exclusiveMax on numeric fields; everything else it can handle or ignore.

The generate-a-schema strategy needs to grow support for schema versions so that we can test this, which is going to be more work but hopefully still tractable. Alternatively we could just generate draft07 schemas exclusively, and depend on jsonschema~=3.0, but I'd rather not break things like that.

Do not merge schemas with distinct "format" values

An example of a failed test

Code to reproduce:

from hypothesis_jsonschema._canonicalise import merged

s1 = {'format': 'color', 'type': 'string'}
s2 = {'format': 'date-fullyear', 'type': 'string'}

assert merged([s1, s2]) == merged([s2, s1])

Canonicalise inner schemas

canonicalish applies a whole swarm of local re-write rules to reduce the variety of ways in which a given schema can be represented. However, there's an obvious but powerful trick we're not doing yet and should: for each keyword where the value is a schema (or list of schemas, etc), we should recurse and ensure that the whole schema is canonicalised, not just the top level.
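
A hedged sketch of the recursion step (keyword lists abridged):

from hypothesis_jsonschema._canonicalise import canonicalish

SCHEMA_KEYS = ("items", "additionalItems", "contains", "additionalProperties", "not")
SCHEMA_LIST_KEYS = ("allOf", "anyOf", "oneOf")


def canonicalise_subschemas(schema):
    # Recurse into every keyword whose value is a schema or a list of schemas.
    for key in SCHEMA_KEYS:
        if isinstance(schema.get(key), dict):
            schema[key] = canonicalish(schema[key])
    for key in SCHEMA_LIST_KEYS:
        if isinstance(schema.get(key), list):
            schema[key] = [canonicalish(s) for s in schema[key]]
    return schema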

`hypothesis-jsonschema` needs a logo!

Every project with aspirations to greatness needs a logo, and hypothesis-jsonschema is no exception. Are you the generous designer who can help?

  • hypothesis-jsonschema is, as the name suggests, built on Hypothesis. You may therefore want to draw on that project's logo and brand, though it's not required.
  • The other major inspiration is, of course, JSON Schema. Every way I tried to put a dragonfly in braces looked pretty silly, but perhaps you can do better - or have a completely different idea!
  • Once hypothesis-jsonschema has a logo I like, I'll be printing it on stickers - and will send you some wherever you are if you would like some.

Ideas or sketches are welcome, not just finished proposals 😁

Support user-defined `format` in string schemas

Thanks to @Stranger6667 in schemathesis/schemathesis#337 (comment):

[kiwicom has] some custom formats, e.g. for payment card numbers - and we declare it in the schema + use validation for it via jsonschema + connexion:

 @draft4_format_checker.checks("card_number")
 def is_card_number(value):
     # Validation based on Luhn algorithm
     ...

This is definitely in-scope for hypothesis-jsonschema, because user-defined formats are explicitly allowed by the spec! The options for this are to either add an argument, so that the API looks like

def from_schema(schema: Schema, *, formats: Dict[str, SearchStrategy[str]] = None) -> ...:

or to maintain a global registry of known formats and strategies. In either case I'd want to raise an error if we ever try to generate a format value for which jsonschema does not have a known validator.

At the moment I'm leaning towards global state, with validation and the caveat that users can't override formats defined in the spec, but feedback would be most welcome.
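
For readers arriving later: the argument-based design is what shipped, as the custom_formats= parameter shown in the README above. For example:

from hypothesis import strategies as st
from hypothesis_jsonschema import from_schema

strategy = from_schema(
    {"type": "string", "format": "card_number"},
    custom_formats={"card_number": st.from_regex(r"\A\d{16}\Z")},
)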

Calculate number of allowed integers for a schema

As part of our canonicalisation logic, it can be really useful to know how many unique values a schema permits. That's sometimes hard to calculate, but we try to cover the simple cases with upper_bound_instances():

https://github.com/Zac-HD/hypothesis-jsonschema/blob/20a0fb1a44c8bc031d9b99e19be1eceff08cce69/src/hypothesis_jsonschema/_canonicalise.py#L132-L143

If we have a list (array) schema where the list items need to be unique and there aren't enough to fill the minimum size, it's impossible to generate a matching list:

https://github.com/Zac-HD/hypothesis-jsonschema/blob/20a0fb1a44c8bc031d9b99e19be1eceff08cce69/src/hypothesis_jsonschema/_canonicalise.py#L308-L312


So this issue is to add support for type: integer schemas to upper_bound_instances. If the bounds from get_integer_bounds() are not None, calculating the number of allowed integers isn't too hard - even accounting for multipleOf.
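
A sketch of the counting logic, assuming integer bounds and an integer-valued multipleOf (the helper name here is hypothetical):

import math


def count_allowed_integers(lo, hi, multiple_of=None):
    # Integers in [lo, hi] divisible by multiple_of (all of them if None).
    if multiple_of is None:
        return max(0, hi - lo + 1)
    return max(0, hi // multiple_of - math.ceil(lo / multiple_of) + 1)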

And then this test won't need xfail anymore, because we'll be able to canonicalise it to the empty schema as expected 🎉

Possible mishandling in `allOf` keyword canonicalisation

Consider the following schema:

SCHEMA = {
    "allOf": [
        {"$ref": "#/definitions/ref"},
        {"required": ["foo"]}
    ],
    "properties": {
        "foo": {},
    },
    "definitions": {
        "ref": {"maxProperties": 1}
    },
    "type": "object"
}

If we call jsonschema.validate({}, SCHEMA) then it will complain that 'foo' is a required property, which is expected. But if we canonicalise the schema, the $ref keyword will be hoisted to the schema's root, where - as implemented in jsonschema - it cancels validation of all its sibling keywords. I can't find the exact place in the spec itself, but there are tests in the JSON Schema suite that ensure that keywords next to $ref are ignored.

So, with the canonicalised version of the schema an empty object passes validation:

>>> CANONICALISED = canonicalish(SCHEMA)
>>> jsonschema.validate({}, CANONICALISED)

What would be the best way to solve it? Should we check if there are schemas with $ref keywords, then keep them inside allOf and move everything else to the previous level?

P.S. I found the issue via Webpack bootstrap-loader configuration file schema in the testing catalog

Merge overlapping `dependencies` subschemas

In hypothesis_jsonschema._canonicalise.merged(), we should calculate the intersection of dependencies keywords from two schemas.

This is complicated somewhat by the fact that dependencies can be either the bare names, or also schemas. I think if they are of different kinds we can 'promote' the names to a mapping of name to {} (which accepts any value), but it needs some more investigation.
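
For reference, the two forms look like this - per the draft-07 spec, the bare-names list is equivalent to a required-only schema:

{"dependencies": {"credit_card": ["billing_address"]}}
{"dependencies": {"credit_card": {"required": ["billing_address"]}}}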

JSON Schema draft 2019-09 changed this, but we don't support it yet anyway, so that can be left for later.

Translate few-allowed-`integers` schemas to use `enum`

After #38 is done, we'll have a nice way to check how many values are allowed for type: integer schemas.

When only a few integers are allowed, representing this as an enum can be more efficient - it's easier to merge schemas because we just check enum elements, and Hypothesis has some smarter logic for lists(sampled_from(...), unique=True, ...) strategies. To implement this:

https://github.com/Zac-HD/hypothesis-jsonschema/blob/20a0fb1a44c8bc031d9b99e19be1eceff08cce69/src/hypothesis_jsonschema/_canonicalise.py#L287-L288

elif type_ == ["integer"] and upper_bound_instances(schema) <= 256:
    allowed_values = ... # TODO your logic here
    return {"enum": allowed_values}

$ref fails when in items of an array

$ref fails when in items of an array. The jsonschema code seems to be trying to find the definitions key in the object describing the array. Here is an example schema that fails:

{
    "type": "object",
    "definition": {"type": "string"},
    "properties": {
        "something": {
            "type": "array",
            "items": {"$ref": "#/definition"}
        }
    }
}

Here is the trace:

Traceback (most recent call last):
  File "python3.7/site-packages/jsonschema/validators.py", line 811, in resolve_fragment
    document = document[part]
KeyError: 'definitions'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "test_REDACTED.py", line 13, in test_doesnt_crash
    def test_doesnt_crash(self, features):
  File "python3.7/site-packages/hypothesis/core.py", line 1080, in wrapped_test
    raise the_error_hypothesis_found
  File "python3.7/site-packages/hypothesis_jsonschema/_from_schema.py", line 458, in from_object_schema
    out[key] = draw(merged_as_strategies(pattern_schemas))
  File "python3.7/site-packages/hypothesis_jsonschema/_from_schema.py", line 48, in merged_as_strategies
    return from_schema(schemas[0])
  File "python3.7/site-packages/hypothesis_jsonschema/_from_schema.py", line 86, in from_schema
    schema = resolve_all_refs(schema)
  File "python3.7/site-packages/hypothesis_jsonschema/_canonicalise.py", line 511, in resolve_all_refs
    schema[key] = res_one(val)
  File "python3.7/site-packages/hypothesis_jsonschema/_canonicalise.py", line 495, in res_one
    with resolver.resolving(ref) as got:
  File "python3.7/contextlib.py", line 112, in __enter__
    return next(self.gen)
  File "python3.7/site-packages/jsonschema/validators.py", line 754, in resolving
    url, resolved = self.resolve(ref)
  File "python3.7/site-packages/jsonschema/validators.py", line 766, in resolve
    return url, self._remote_cache(url)
  File "python3.7/site-packages/jsonschema/validators.py", line 781, in resolve_from_url
    return self.resolve_fragment(document, fragment)
  File "python3.7/site-packages/jsonschema/validators.py", line 815, in resolve_fragment
    "Unresolvable JSON pointer: %r" % fragment
jsonschema.exceptions.RefResolutionError: Unresolvable JSON pointer: 'definitions/some_def'

PyPy support

It turns out that hypothesis-jsonschema doesn't run on PyPy due to importing _make_iterencode from json.encoder. The fix should be quite straightforward, and I'll push a PR soon.

Issue with contains group

Here I got the error

Unsatisfiable: Unable to satisfy assumptions of hypothesis example_generating_inner_function.

when I called

sample = from_schema(schema).example()

using this JSON schema:

{
    'type': 'object',
    'required': ['Header'],
    'properties': {
        'Header': {
            'type': 'object',
            'required': ['Address'],
            'properties': {
                'Address': {
                    'type': 'array',
                    'items': {
                        'type': 'object',
                        'required': ['AddressTypeCode'],
                        'properties': {'AddressTypeCode': {'type': 'string'}},
                    },
                    'minItems': 2,
                    'allOf': [
                        {'contains': {
                            'required': ['AddressTypeCode'],
                            'type': 'object',
                            'properties': {'AddressTypeCode': {'type': 'string', 'enum': ['ST']}},
                        }},
                        {'contains': {
                            'required': ['AddressTypeCode'],
                            'type': 'object',
                            'properties': {'AddressTypeCode': {'type': 'string', 'enum': ['SF']}},
                        }},
                    ],
                }
            },
            'description': 'Encloses all document header elements.',
        }
    },
}

The schema looks straightforward; this could be due to the contains group, but I am not sure.

Tests not passing on python:3.6 docker image

Hi, I've been trying to get this going inside the python:3.6 docker container. I got an exception in interactive poking around with the version installed from pip (0.4.0), so I cloned the latest source and tried to run the tests, which failed in the same way.

Here's what I ran in your repo root dir:

pip install .
pip install pytest pytest-mypy pytest-coverage pytest-pylint
pytest .

Here's the output:

================================================= test session starts =================================================
platform linux -- Python 3.6.8, pytest-4.0.2, py-1.7.0, pluggy-0.8.0
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/hypothesis-jsonschema/.hypothesis/examples')
rootdir: /hypothesis-jsonschema, inifile: pytest.ini
plugins: pylint-0.13.0, mypy-0.3.2, cov-2.6.0, hypothesis-3.86.4
collected 15 items                                                                                                    
-----------------------------------------------------------------
Linting files
....
-----------------------------------------------------------------

setup.py ..                                                                                                     [  6%]
test_hypothesis_jsonschema.py ..FF.....                                                                         [ 60%]
src/hypothesis_jsonschema/__init__.py ..                                                                        [ 66%]
src/hypothesis_jsonschema/_impl.py ..

====================================================== FAILURES =======================================================
___________________________________________ test_all_py_files_are_blackened ___________________________________________
test_hypothesis_jsonschema.py:24: in test_all_py_files_are_blackened
    stderr=subprocess.PIPE,
/usr/local/lib/python3.6/subprocess.py:423: in run
    with Popen(*popenargs, **kwargs) as process:
/usr/local/lib/python3.6/subprocess.py:729: in __init__
    restore_signals, start_new_session)
/usr/local/lib/python3.6/subprocess.py:1364: in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
E   FileNotFoundError: [Errno 2] No such file or directory: 'black': 'black'
_________________________________________ test_generated_data_matches_schema __________________________________________
test_hypothesis_jsonschema.py:29: in test_generated_data_matches_schema
    max_examples=1000,
/usr/local/lib/python3.6/site-packages/hypothesis/core.py:624: in evaluate_test_data
    escalate_hypothesis_internal_error()
/usr/local/lib/python3.6/site-packages/hypothesis/core.py:604: in evaluate_test_data
    result = self.execute(data)
/usr/local/lib/python3.6/site-packages/hypothesis/core.py:573: in execute
    result = self.test_runner(data, run)
/usr/local/lib/python3.6/site-packages/hypothesis/executors.py:56: in default_new_style_executor
    return function(data)
/usr/local/lib/python3.6/site-packages/hypothesis/core.py:571: in run
    return test(*args, **kwargs)
test_hypothesis_jsonschema.py:29: in test_generated_data_matches_schema
    max_examples=1000,
/usr/local/lib/python3.6/site-packages/hypothesis/core.py:516: in test
    result = self.test(*args, **kwargs)
test_hypothesis_jsonschema.py:37: in test_generated_data_matches_schema
    value = data.draw(from_schema(schema), "value from schema")
/usr/local/lib/python3.6/site-packages/hypothesis/_strategies.py:2177: in draw
    result = self.conjecture_data.draw(strategy)
/usr/local/lib/python3.6/site-packages/hypothesis/internal/conjecture/data.py:224: in draw
    return self.__draw(strategy, label=label)
/usr/local/lib/python3.6/site-packages/hypothesis/internal/conjecture/data.py:239: in __draw
    return strategy.do_draw(self)
/usr/local/lib/python3.6/site-packages/hypothesis/searchstrategy/lazy.py:156: in do_draw
    return data.draw(self.wrapped_strategy)
/usr/local/lib/python3.6/site-packages/hypothesis/internal/conjecture/data.py:224: in draw
    return self.__draw(strategy, label=label)
/usr/local/lib/python3.6/site-packages/hypothesis/internal/conjecture/data.py:233: in __draw
    return strategy.do_draw(self)
/usr/local/lib/python3.6/site-packages/hypothesis/_strategies.py:1901: in do_draw
    return self.definition(data.draw, *self.args, **self.kwargs)
/usr/local/lib/python3.6/site-packages/hypothesis_jsonschema/_impl.py:284: in from_object_schema
    data = getattr(data_obj, "conjecture_data", getattr(data_obj, "data"))
E   AttributeError: 'DataObject' object has no attribute 'data'
------------------------------------------------ Captured stderr call -------------------------------------------------
Exception ignored in: <_io.FileIO name='/usr/local/lib/python3.6/site-packages/hypothesis_jsonschema/_impl.py' mode='rb
' closefd=True>
ResourceWarning: unclosed file <_io.BufferedReader name='/usr/local/lib/python3.6/site-packages/hypothesis_jsonschema/_
impl.py'>
Exception ignored in: <_io.FileIO name='/usr/local/lib/python3.6/site-packages/hypothesis_jsonschema/_impl.py' mode='rb
' closefd=True>
ResourceWarning: unclosed file <_io.BufferedReader name='/usr/local/lib/python3.6/site-packages/hypothesis_jsonschema/_
impl.py'>
Exception ignored in: <_io.FileIO name='/usr/local/lib/python3.6/site-packages/hypothesis_jsonschema/_impl.py' mode='rb
' closefd=True>
ResourceWarning: unclosed file <_io.BufferedReader name='/usr/local/lib/python3.6/site-packages/hypothesis_jsonschema/_
impl.py'>
Exception ignored in: <_io.FileIO name='/usr/local/lib/python3.6/site-packages/hypothesis_jsonschema/_impl.py' mode='rb
' closefd=True>
ResourceWarning: unclosed file <_io.BufferedReader name='/usr/local/lib/python3.6/site-packages/hypothesis_jsonschema/_
impl.py'>
Exception ignored in: <_io.FileIO name='/usr/local/lib/python3.6/site-packages/hypothesis_jsonschema/_impl.py' mode='rb
' closefd=True>
ResourceWarning: unclosed file <_io.BufferedReader name='/usr/local/lib/python3.6/site-packages/hypothesis_jsonschema/_
impl.py'>
----------------------------------------------------- Hypothesis ------------------------------------------------------
You can add @seed(107554635721253995027438164951715726229) to this test or run pytest with --hypothesis-seed=1075546357
21253995027438164951715726229 to reproduce this failure.

----------- coverage: platform linux, python 3.6.8-final-0 -----------
Name                                                                       Stmts   Miss Branch BrPart  Cover   Missing
----------------------------------------------------------------------------------------------------------------------
/usr/local/lib/python3.6/site-packages/hypothesis_jsonschema/__init__.py       3      0      0      0   100%
/usr/local/lib/python3.6/site-packages/hypothesis_jsonschema/_impl.py        270     37    165      7    83%   89, 91, 
93-99, 107, 225-238, 251, 286-315, 88->89, 90->91, 92->93, 106->107, 224->225, 241->244, 244->251
----------------------------------------------------------------------------------------------------------------------
TOTAL                                                                        273     37    165      7    84%

FAIL Required test coverage of 100% not reached. Total coverage: 83.56%
======================================== 2 failed, 13 passed in 30.29 seconds =========================================

That AttributeError: 'DataObject' object has no attribute 'data' is what I saw in my interactive session.
__version__ == 0.4.0
sha == 6b65362

Improve strategies for jsonpointer string formats

https://github.com/Zac-HD/hypothesis-jsonschema/blob/ac8b5258268780e1409ddd57fe47f508a40e3a56/src/hypothesis_jsonschema/_from_schema.py#L310-L311

is, while valid, not a great strategy for these formats - and we'd like to generate the full range of valid jsonpointer strings, probably with a custom @st.composite strategy. See xfail tests in bd35a2a and https://tools.ietf.org/html/rfc6901 and https://json-schema.org/draft/2019-09/relative-json-pointer.html

Support recursive references

Take for example the following schema:

{
    "person": {
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "parents": {
                "type": "array",
                "maxItems": 2,
                "items": {"$ref": "#/person"}
            }
        }
    },
    "$ref": "#/person"
}

So we need to generate a person, who has 0-2 parents, each of whom is also a person. More complicated situations with several mutually-recursive objects are also possible, of course. Currently such cases fail with a RecursionError, as we make no attempt to handle them!

Conceptually these are all easy enough to handle with the st.deferred() strategy... which is the extent of the good news. Some complications:

  1. We want to avoid using st.deferred() when we don't actually need to, as it makes introspection (including e.g. error messages!) much less helpful, and adds some performance overhead (small, but needless when there's no recursion).
  2. resolve_all_refs() can still resolve all non-recursive references in place, but will need to track which references have been traversed to avoid cycles.
  3. canonicalish() can still discard all non-functional keys, so long as it tracks which keys are reference targets. We can also re-write these to a standard location such as #/definitions/*.
  4. from_schema() can then switch into "deferred mode" if and only if there is a #/definitions part of the schema after canonicalising.
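
For reference, a hand-written strategy for the person schema above shows the st.deferred() pattern (a sketch - not what from_schema would emit):

from hypothesis import strategies as st

person = st.deferred(
    lambda: st.fixed_dictionaries(
        {},
        optional={
            "name": st.text(),
            "parents": st.lists(person, max_size=2),
        },
    )
)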

This is all basically manageable, but it's also going to be a lot of work that I just don't have capacity to do for free. Pull requests and/or funding welcome, but otherwise expect this issue to stay open.

Exclude types which are unconstrained in other branches of a oneOf

Because the branches of a oneOf are mutually exclusive, any type that is unconstrained in one branch must be excluded from all others; and if unconstrained by two or more must be excluded from all.
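
A minimal illustration:

# {} is unconstrained for every type, so all integers match both branches and
# are invalid against the oneOf; the schema is equivalent to
# {"not": {"type": "integer"}}.
schema = {"oneOf": [{"type": "integer"}, {}]}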

This is applicable to more than just the type key, but I think that handling type would get us most of the benefit for very little implementation complexity, and I intend to leave a comment to this effect.

This change will obviate the current special handling of the oneOf complex types test schema.

Reusing jsonschema validators

Hello!

Recently I noticed some visible performance gaps when trying to generate data for schemas with a large number of values in the enum keyword, or a large number of subschemas in oneOf (a few hundred). The context is building schemas for the Open API links implementation here. E.g. we made 1000 requests to the endpoint POST /users, and now we want to generate data for GET /users/{user_id}, reusing the user_id values returned by the first test. The idea is to modify the base schema by adding const: <user_id> and then combining the resulting 1000 schemas with oneOf, so we'll always have some user ids from the previous responses, but will still generate parameters that are not filled with those constants. Not sure if it is the best approach, but it works.

After some investigation, I found that _canonicalise.is_valid builds a new validator for the provided schema on every call, even though it is often called in a loop with the same schema.

Here is a basic benchmark:

In[1] from hypothesis_jsonschema import from_schema
In[2] variants = list(range(1000))
In[3] schema = {"enum": variants}
In[4] %timeit from_schema(schema)                                                                                                                                                                                                                                                                                    
350 ms ± 2.16 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Which is noticeable during the Schemathesis workflow. In the example the schema is empty and validation is pretty simple, but it might be much slower with real-world schemas.

And if we reuse a validator here like this:

if "enum" in schema:
    validator = jsonschema.validators.validator_for(schema)(schema)
    enum_ = sorted((v for v in schema["enum"] if validator.is_valid(v)), key=sort_key)

The results of the benchmark are much better:

In [6]: %timeit from_schema(schema)                                                                                                                                                                                                                                                                                     
4.64 ms ± 20.9 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

There are multiple places inside _canonicalise.py where this can be applied. To me it feels like low-hanging fruit that could improve performance in these cases considerably, without any extra dependencies or API changes.

I don't know if it was already discussed, but I'd be happy to hear what you think :)

Cheers

Unclear Hypothesis exception on minimal example

Hi, thank you for your efforts in combining Hypothesis with JSON Schema!

My understanding from the readme is that hypothesis-jsonschema works like this:

>>> from hypothesis_jsonschema import from_schema
>>> schema = {'properties': {'x': {'type': 'string'}}}
>>> from_schema(schema).example()  # would return something like {'x': 'hi'}

Turns out I get a hypothesis.errors.InvalidArgument: Cannot use strategy data() within a call to find (presumably because it would be invalid after the call had ended) instead (full stacktrace below).

Sometimes, with the exact same schema, I will not get an exception, but a boolean result instead. I suppose this is also an unexpected behavior given the above schema, which describes an object.

I'm not familiar with Hypothesis internals, so I really have no clue where to start investigating. Am I doing something stupid?

I'm using:

  • Hypothesis v4.24.5
  • hypothesis-jsonschema v0.9.3
Stacktrace
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File ".../hypothesis/searchstrategy/strategies.py", line 309, in example
    phases=tuple(set(Phase) - {Phase.shrink}),
  File ".../hypothesis/core.py", line 1112, in find
    runner.run()
  File ".../hypothesis/internal/conjecture/engine.py", line 398, in run
    self._run()
  File ".../hypothesis/internal/conjecture/engine.py", line 766, in _run
    self.generate_new_examples()
  File ".../hypothesis/internal/conjecture/engine.py", line 700, in generate_new_examples
    self.test_function(last_data)
  File ".../hypothesis/internal/conjecture/engine.py", line 144, in test_function
    self.__stoppable_test_function(data)
  File ".../hypothesis/internal/conjecture/engine.py", line 127, in __stoppable_test_function
    self._test_function(data)
  File ".../hypothesis/core.py", line 1083, in template_condition
    result = data.draw(search)
  File ".../hypothesis/internal/conjecture/data.py", line 829, in draw
    return self.__draw(strategy, label=label)
  File ".../hypothesis/internal/conjecture/data.py", line 844, in __draw
    return strategy.do_draw(self)
  File ".../hypothesis/searchstrategy/strategies.py", line 497, in do_draw
    return data.draw(self.element_strategies[i], label=self.branch_labels[i])
  File ".../hypothesis/internal/conjecture/data.py", line 829, in draw
    return self.__draw(strategy, label=label)
  File ".../hypothesis/internal/conjecture/data.py", line 838, in __draw
    return strategy.do_draw(self)
  File ".../hypothesis/searchstrategy/lazy.py", line 156, in do_draw
    return data.draw(self.wrapped_strategy)
  File ".../hypothesis/internal/conjecture/data.py", line 829, in draw
    return self.__draw(strategy, label=label)
  File ".../hypothesis/internal/conjecture/data.py", line 838, in __draw
    return strategy.do_draw(self)
  File ".../hypothesis/_strategies.py", line 1856, in do_draw
    return self.definition(data.draw, *self.args, **self.kwargs)
  File ".../hypothesis_jsonschema/_impl.py", line 594, in from_object_schema
    draw(st.data()).conjecture_data,
  File ".../hypothesis/internal/conjecture/data.py", line 820, in draw
    % (strategy,)
hypothesis.errors.InvalidArgument: Cannot use strategy data() within a call to find (presumably because it would be invalid after the call had ended).

The input schema draft version should be used in all validators

Hi Zac!

During work on the "recursive references" branch I found a problem.

For example, suppose the input schema is Draft 4, uses exclusiveMaximum as a boolean value, and has keywords that involve subschema validation and are not canonicalised.

The problem is that those subschemas will be validated with Draft 7 validators (the default from jsonschema) and will be rejected as invalid. For example, the following is a valid Draft 4 schema:

SCHEMA = {
    "$schema": "http://json-schema.org/draft-04/schema#",
    "not": {
        "allOf": [
            {"exclusiveMinimum": True, "minimum": 0},
            {"exclusiveMaximum": True, "maximum": 10},
        ]
    }
}

And the canonicalisation call fails:

~/code/hypothesis-jsonschema/src/hypothesis_jsonschema/_canonicalise.py in merged(schemas, resolver)
    922             return FALSEY
    923     assert isinstance(out, dict)
--> 924     jsonschema.validators.validator_for(out).check_schema(out)
    925     return out
    926 

~/.virtualenvs/hypothesis-jsonschema/lib/python3.8/site-packages/jsonschema/validators.py in check_schema(cls, schema)
    292         def check_schema(cls, schema):
    293             for error in cls(cls.META_SCHEMA).iter_errors(schema):
--> 294                 raise exceptions.SchemaError.create_from(error)
    295 
    296         def iter_errors(self, instance, _schema=None):

SchemaError: True is not of type 'number'

Failed validating 'type' in metaschema['properties']['exclusiveMaximum']:
    {'type': 'number'}

On schema['exclusiveMaximum']:
    True

The real-life example is the Open API schema, which I found failing for this reason in my "recursive references" branch.

To me, it seems like we need to pass the schema version down to the places where those validators are created, so that the proper validators for the schema's draft are used. I could try to push a PR with that approach.
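
A sketch of that approach (the helper name is hypothetical): derive the validator class from the root schema's $schema key once, and reuse it for every subschema, instead of letting validator_for() fall back to Draft 7 for subschemas that carry no $schema key of their own.

import jsonschema


def make_validator(root_schema, subschema):
    cls = jsonschema.validators.validator_for(root_schema)
    cls.check_schema(subschema)  # e.g. accepts boolean exclusiveMaximum under Draft 4
    return cls(subschema)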

What do you think?

Cheers

Mis-canonicalisation of `const` in draft04 schemas (not a keyword until draft06)

The const keyword was added in draft-06, and therefore has no effect whatsoever in draft-04 schemas. While authors probably intended to constrain the value somewhat, we still have to follow the spec!

{
    "$schema": "http://json-schema.org/draft-04/schema#",
    "oneOf": [{"type": "boolean"}, {"const": None}],
    "type": ["boolean", "null"],
}
# Should canonicalise 
#    {"const": None} -> {}
#    "oneOf": [{"type": "boolean"}, {}] -> "not": {"type": "boolean"}
# and ultimately to
{"enum": [None]}

See also python-jsonschema/jsonschema#778; longer term this is another argument for my do-it-right schema processing redesign...

Canonicalise to `type: integer` or `type: number`, never both

Dealing with schemas that might contain "type": ["integer", "number"] makes various tricks like discarding non-type-relevant keywords harder than it needs to be.

But if you stop and think about it, this is also a silly type to represent: either it's a number type (which may be of Python type int, I should check that in from_schema) and we can discard the integer part as adding no information at all, or it only allows integral values and it might as well be of type: integer to express that.

Notes to self:

  • If multipleOf is an integer, we can convert type: number to type: integer
  • To make this easy, and generally canonicalise things, we should round-trip schemas through encode_canonical_json to e.g. convert all integer-valued floats to integers.
  • Check for type-related logic elsewhere which has to deal with multiple numeric types in a schema (ie for keyword relevance detection, maybe other things?) and replace it with comments explaining why it's no longer needed

Examples keyword

Hello, I'm here from the Schemathesis repository! I'm adding support for multiple example schemas using the OpenAPI examples keyword. I found, however, that hypothesis-jsonschema does not yet support an examples keyword.


I found that an examples keyword was added to the JSON Schema specification in Draft 6. The formats of JSON Schema and OpenAPI examples differ: JSON Schema treats the value of examples as an array, while OpenAPI treats it as a dictionary.

JSON Schema:

"examples" : [
    "Anything",
    4035
]

OpenAPI:

examples:
    example_1:
        value: Anything
    example_2:
        value: 4035

I'm not yet familiar with the goals and conventions of this project, but would you consider adding the examples keyword without validating the value of the examples?

Avoid needless filters on inferred strategies

#22 notes, among other things, that hypothesis-jsonschema strategies for simple schemas are often slower than the obvious translation. Some of this is unavoidable - we apply some extensive transformations of the schema which are necessary to avoid pathologically poor performance on certain complex schemas - but not all of it.

There are a range of points at which we output a slower strategy which always works, instead of checking whether a simpler one would be correct. For example, we always apply a length-validating filter to string strategies, which is necessary only for regex-based or format-based strings - and even there it could be skipped in the common case that min_size=0 and max_size=inf.
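
A sketch of that check (inputs hypothetical):

import math

from hypothesis import strategies as st

pattern, min_size, max_size = r"\d+", 0, math.inf  # hypothetical inputs

strategy = st.from_regex(pattern)
if not (min_size == 0 and max_size == math.inf):
    # Only needed when the schema actually bounds the string length.
    strategy = strategy.filter(lambda s: min_size <= len(s) <= max_size)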
