
vetr's Introduction

vetr

[Badges: R build status | Project Status | Dependencies direct/recursive]

Trust, but Verify

Easily

When you write functions that operate on S3 or unclassed objects you can either trust that your inputs will be structured as expected, or tediously check that they are.

vetr takes the tedium out of structure verification so that you can trust, but verify. It lets you express structural requirements declaratively with templates, and it auto-generates human-friendly error messages as needed.

Quickly

vetr is written in C to minimize overhead from parameter checks in your functions. It has no dependencies.
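
A minimal benchmark sketch of the kind of overhead involved (this assumes the microbenchmark package is installed; timings will vary by machine):

library(vetr)
x <- runif(1)
microbenchmark::microbenchmark(
  vet(numeric(1L), x),                        # template-based check
  stopifnot(is.numeric(x), length(x) == 1L)   # roughly equivalent base check
)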

Declarative Checks with Templates

Templates

Declare a template that an object should conform to, and let vetr take care of the rest:

library(vetr)
tpl <- numeric(1L)
vet(tpl, 1:3)
## [1] "`length(1:3)` should be 1 (is 3)"
vet(tpl, "hello")
## [1] "`\"hello\"` should be type \"numeric\" (is \"character\")"
vet(tpl, 42)
## [1] TRUE

The template concept is based on vapply, but generalizes to all S3 objects and adds some special features to facilitate comparison. For example, zero length templates match any length:

tpl <- integer()
vet(tpl, 1L:3L)
## [1] TRUE
vet(tpl, 1L)
## [1] TRUE

And for convenience short (<= 100 length) integer-like numerics are considered integer:

tpl <- integer(1L)
vet(tpl, 1)       # this is a numeric, not an integer
## [1] TRUE
vet(tpl, 1.0001)
## [1] "`1.0001` should be type \"integer-like\" (is \"double\")"

vetr can compare recursive objects such as lists, or data.frames:

tpl.iris <- iris[0, ]      # 0 row DF matches any number of rows in object
iris.fake <- iris
levels(iris.fake$Species)[3] <- "sibirica"   # tweak levels

vet(tpl.iris, iris)
## [1] TRUE
vet(tpl.iris, iris.fake)
## [1] "`levels(iris.fake$Species)[3]` should be \"virginica\" (is \"sibirica\")"

From our declared template iris[0, ], vetr infers all the required checks. In this case, vet(iris[0, ], iris.fake, stop=TRUE) is equivalent to:

stopifnot_iris <- function(x) {
  stopifnot(
    is.data.frame(x),
    is.list(x),
    length(x) == length(iris),
    identical(lapply(x, class), lapply(iris, class)),
    is.integer(attr(x, 'row.names')),
    identical(names(x), names(iris)),
    identical(typeof(x$Species), "integer"),
    identical(levels(x$Species), levels(iris$Species))
  )
}
stopifnot_iris(iris.fake)
## Error in stopifnot_iris(iris.fake): identical(levels(x$Species), levels(iris$Species)) is not TRUE

vetr saved us typing, and the time and thought needed to come up with what needs to be compared.

You could just as easily have created templates for nested lists, or data frames in lists. Templates are compared to objects with the alike function. For a thorough description of templates and how they work see the alike vignette. For template examples see example(alike).
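
For example, a sketch of a nested template (the results indicated in the comments are the expected outcomes under the template rules above, not captured output):

tpl.list <- list(integer(), iris[0, ])     # any-length integer, plus an iris-like data frame
vet(tpl.list, list(1:5, iris))             # expected: TRUE
vet(tpl.list, list(1:5, mtcars))           # expected to fail: second element is not iris-like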

Auto-Generated Error Messages

Let’s revisit the error message:

vet(tpl.iris, iris.fake)
## [1] "`levels(iris.fake$Species)[3]` should be \"virginica\" (is \"sibirica\")"

It tells us:

  • The reason for the failure
  • What structure would be acceptable instead
  • The location of failure levels(iris.fake$Species)[3]

vetr does what it can to reduce the time from error to resolution. The location of failure is generated such that you can easily copy it in part or full to the R prompt for further examination.
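
For example, pasting the reported location back into the R prompt (with the objects defined above):

levels(iris.fake$Species)[3]
## [1] "sibirica"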

Vetting Expressions

You can combine templates with && / ||:

vet(numeric(1L) || NULL, NULL)
## [1] TRUE
vet(numeric(1L) || NULL, 42)
## [1] TRUE
vet(numeric(1L) || NULL, "foo")
## [1] "`\"foo\"` should be `NULL`, or type \"numeric\" (is \"character\")"

Templates only check structure. When you need to check values use . to refer to the object:

vet(numeric(1L) && . > 0, -42)  # strictly positive scalar numeric
## [1] "`-42 > 0` is not TRUE (FALSE)"
vet(numeric(1L) && . > 0, 42)
## [1] TRUE

If you do use the . symbol in your vetting expressions in your packages, you will need to include utils::globalVariables(".") as a top-level call to avoid the “no visible binding for global variable ‘.’” R CMD check NOTE.
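
For example, a package might contain a line like the following in one of its R source files (the file name here is just illustrative):

# e.g. in R/globals.R of your package
utils::globalVariables(".")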

You can compose vetting expressions as language objects and combine them:

scalar.num.pos <- quote(numeric(1L) && . > 0)
foo.or.bar <- quote(character(1L) && . %in% c('foo', 'bar'))
vet.exp <- quote(scalar.num.pos || foo.or.bar)

vet(vet.exp, 42)
## [1] TRUE
vet(vet.exp, "foo")
## [1] TRUE
vet(vet.exp, "baz")
## [1] "At least one of these should pass:"                         
## [2] "  - `\"baz\" %in% c(\"foo\", \"bar\")` is not TRUE (FALSE)" 
## [3] "  - `\"baz\"` should be type \"numeric\" (is \"character\")"

all_bw is available for value range checks (~10x faster than isTRUE(all(. >= x & . <= y)) for large vectors):

vet(all_bw(., 0, 1), runif(5) + 1)
## [1] "`all_bw(runif(5) + 1, 0, 1)` is not TRUE (is chr: \"`1.234342` at index 1 not in `[0,1]`\")"

There are a number of predefined vetting tokens you can use in your vetting expressions such as:

vet(NUM.POS, -runif(5))    # positive numeric; see `?vet_token` for others
## [1] "`-runif(5)` should contain only positive values, but has negatives"

Vetting expressions are designed to be intuitive to use, but their implementation is complex. We recommend you look at example(vet) for usage ideas, or at the “Non Standard Evaluation” section of the vignette for the gory details.

vetr in Functions

If you are vetting function inputs, you can use the vetr function, which works just like vet except that it is streamlined for use within functions:

fun <- function(x, y) {
  vetr(numeric(1L), logical(1L))
  TRUE   # do work...
}
fun(1:2, "foo")
## Error in fun(x = 1:2, y = "foo"): For argument `x`, `length(1:2)` should be 1 (is 2)
fun(1, "foo")
## Error in fun(x = 1, y = "foo"): For argument `y`, `"foo"` should be type "logical" (is "character")

vetr automatically matches the vetting expressions to the corresponding arguments and fetches the argument values from the function environment.
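
You can also name the vetting expressions to vet only a subset of the arguments, as in this sketch (based on the behavior above; error text not shown since it is not captured output):

fun2 <- function(x, y) {
  vetr(y=logical(1L))     # vet only `y`; `x` is not checked
  TRUE
}
fun2("anything", "foo")   # expected to fail because `y` is not a length-one logical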

See the vignette for additional details on how the vetr function works.

Additional Documentation

See the vetr and alike vignettes (vignette(package = 'vetr')) and the help pages for vet, vetr, alike, vet_token, and all_bw.

Development Status

vetr is still in development, although most of the features are considered mature. The most likely area of change is the treatment of function and language templates (e.g. alike(sum, max)), and more flexible treatment of list templates (e.g. in future lists may be allowed to be different lengths so long as every named element in the template exists in the object).

Installation

This package is available on CRAN:

install.packages('vetr')

It has no runtime dependencies.

For the development version use remotes::install_github('brodieG/vetr@development') or:

f.dl <- tempfile()
f.uz <- tempfile()
github.url <- 'https://github.com/brodieG/vetr/archive/development.zip'
download.file(github.url, f.dl)
unzip(f.dl, exdir=f.uz)
install.packages(file.path(f.uz, 'vetr-development'), repos=NULL, type='source')
unlink(c(f.dl, f.uz))

The master branch typically mirrors CRAN and should be stable.

Alternatives

There are many alternatives available to vetr; we survey several of them in our parameter validation functions review.

The following packages also perform related tasks, although we do not review them:

  • valaddin v0.1.0 by Eugene Ha, a framework for augmenting existing functions with validation contracts. Currently the package is undergoing a major overhaul so we will add it to the comparison once the new release (v0.3.0) is out.
  • ensurer v1.1 by Stefan M. Bache, a framework for flexibly creating and combining validation contracts. The development version adds an experimental method for creating type safe functions, but it is not published to CRAN so we do not test it here.
  • validate by Mark van der Loo and Edwin de Jonge, with a primary focus on validating data in data frames and similar data structures.
  • assertr by Tony Fischetti, also focused on data validation in data frames and similar structures.
  • types by Jim Hester, which implements but does not enforce type hinting.
  • argufy by Gábor Csárdi, which implements parameter validation via roxygen tags (not released to CRAN).
  • typed by Antoine Fabri, which enforces types of symbols, function parameters, and return values.

Acknowledgments

Thank you to:

  • R Core for developing and maintaining such a wonderful language.
  • CRAN maintainers, for patiently shepherding packages onto CRAN and maintaining the repository, and Uwe Ligges in particular for maintaining Winbuilder.
  • Users and others who have reported bugs and/or helped contribute fixes (see NEWS.md).
  • Tomas Kalibera for rchk and rcnst to help detect errors in compiled code, and in particular for his infinite patience in helping me resolve the issues he identified for me.
  • Jim Hester because covr rocks.
  • Dirk Eddelbuettel and Carl Boettiger for the rocker project, and Gábor Csárdi and the R-consortium for Rhub, without which testing bugs on R-devel and other platforms would be a nightmare.
  • Winston Chang for the r-debug docker container, in particular because of the valgrind level 2 instrumented version of R.
  • Hadley Wickham and Peter Danenberg for roxygen2.
  • Yihui Xie for knitr and J.J. Allaire et al. for rmarkdown, and by extension John MacFarlane for pandoc.
  • Michel Lang for pushing me to implement all_bw to compete with his own package checkmate.
  • Eugene Ha for pointing me to several other relevant packages, which in turn led to the survey of related packages.
  • Stefan M. Bache for the idea of having a function for testing objects directly (originally vetr only worked with function arguments), which I took from ensurer.
  • Olaf Mersmann for microbenchmark, because microseconds matter, and Joshua Ulrich for making it lightweight.
  • All open source developers out there that make their work freely available for others to use.
  • Github, Codecov, Vagrant, Docker, Ubuntu, Brew for providing infrastructure that greatly simplifies open source development.
  • Free Software Foundation for developing the GPL license and promotion of the free software movement.

About the Author

Brodie Gaslam is a hobbyist programmer based on the US East Coast.

vetr's Issues

Rationalize token names in docs

We need better nomenclature for:

  • Templates: expressions that evaluate to R objects to use as templates
  • Custom expressions: user expressions to evaluate for truth, possibly pre-substituting . before eval
  • Validation expressions: mix of template and custom expression tokens.

More verbose `validate_args` error

We could dump either the str output or a snippet of the print output of the object, in addition to the error message, to accelerate debugging, since inspecting the object is typically the first thing one does upon seeing the error. For complex objects we could even pull out the nested element that is not matching, though that becomes more difficult, particularly since alike doesn't return the coordinates of the failure.

Constant error message in formulas not good

alike(y ~ x ^ 2, a ~ b ^ 3)
## [1] "`(a ~ b^3)[[3]][[3]]` should have identical constant values"

Would be better to have something along the lines of "is 2, should be 3" or some such.

Forgetting `.(` Can Be Confusing

If we do something like:

validate(character() && !any(is.na(.)))

instead of the intended:

validate(character() && .(!any(is.na(.))))

the error message risks being very confusing since the value of the second token is used as a template instead of being interpreted as a value.

NULL being wildcard problematic?

Very convenient in most instances, but the common requirement "expect this argument to be X or NULL" falls flat on its face. What's the workaround?

Provide Context About Vetting Expression in Failure

Ideally we would return the full vetting expression and the token that triggered the failure, potentially as an attribute for vet, and as part of the error message for vetr.

For example, in:

a <- quote(integer() && . > 0)
b <- quote(logical(1L) && !is.na(.))
c <- quote(a || b)

vet(c, -1)

The returned attribute might be structured as:

list(
  vet.exp=quote((integer() && . > 0) || (logical(1L) && !is.na(.))),
  fail.tokens=list(vet.exp[[2]][[2]][[3]], vet.exp[[3]][[2]][[2]])
)

although the vet.exp part in fail.tokens may need to be expanded.

Remove ggplot2 suggests

It causes a massive installation footprint. Think about how to test abstract without including this package.

Implement Variable Length Lists with `elist` and `vlist`

elist (Extensible List; could be xlist too) is an extensible list, where objects are accepted as long as they have every element that is present in the template. This is meant to mimic S4 objects, where objects that inherit from another contain all the slots of the parent. Some unresolved questions are whether the subset of elements must come first and in the same order as in the template, and whether named objects should be treated differently. In terms of implementation, elist will probably produce an S4 object that will trigger special treatment.

One question is how we handle something like structure(elist(...), attra, attrb), as then the return value of elist can hardly be S4 since there could be conflicts between slots and attributes.

vlist (Vector List) is a variable length list with the same template repeated n times. TBD whether we allow a repetitions argument, or whether people should use a normal list template for those. Ideally the template would allow the same syntax present at the top level (i.e. use of template and evaluated tokens, etc.).

Run Valgrind

There are gremlins lurking, including issue #36, and:

> unitize_dir()

Prepping Unitizers...                                                           
 *** caught bus error ***
address 0x7fd3cd81ad78, cause 'non-existent physical address'

Traceback:
 1: initialize(value, ...)
 2: initialize(value, ...)
 3: new("unitizerBrowseSubSectionFailed", show.out = TRUE, show.msg = TRUE,     items.new = [email protected][[email protected] & sect.map], show.fail = [email protected][[email protected] &         sect.map], items.ref = [email protected][[email protected][[email protected] &         sect.map]], new.conditions = [email protected][[email protected] &         sect.map], tests.result = [email protected][[email protected] &         sect.map, , drop = FALSE])
 4: .local(x, mode, ...)
 5: (function (x, mode, ...) standardGeneric("browsePrep"))(dots[[1L]][[10L]], mode = dots[[2L]][[1L]],     start.at.browser = dots[[3L]][[10L]], hist.con = 3L, interactive = TRUE)
 6: (function (x, mode, ...) standardGeneric("browsePrep"))(dots[[1L]][[10L]], mode = dots[[2L]][[1L]],     start.at.browser = dots[[3L]][[10L]], hist.con = 3L, interactive = TRUE)
 7: mapply(browsePrep, as.list(unitizers), mode = mode, start.at.browser = (identical(mode,     "review") | !to.review) & !force.update, MoreArgs = list(hist.con = hist.obj$con,     interactive = interactive.mode), SIMPLIFY = FALSE)
 8: unitize_browse(unitizers = unitizers[valid], mode = mode, interactive.mode = interactive.mode,     force.update = force.update, auto.accept = auto.accept, history = history,     global = global)
 9: doWithOneRestart(return(expr), restart)
10: withOneRestart(expr, restarts[[1L]])
11: withRestarts(unitizers[valid] <- unitize_browse(unitizers = unitizers[valid],     mode = mode, interactive.mode = interactive.mode, force.update = force.update,     auto.accept = auto.accept, history = history, global = global),     unitizerInteractiveFail = function(e) interactive.fail <<- TRUE)
12: unitize_core(test.files = test.files, store.ids = store.ids,     state = state, pre = pre, post = post, history = history,     interactive.mode = interactive.mode, force.update = force.update,     auto.accept = auto.accept, mode = "unitize")
13: unitize_dir()

Possible actions:
1: abort (with core dump, if enabled)
2: normal R exit
3: exit R without saving workspace
4: exit R saving workspace
Selection: 2
Save workspace image? [y/n/c]: n

But difficult to reproduce.

Comparisons to `checkmate`

  • checkmate is probably faster, but the difference might not be too bad after we implement #48 (see the sketch below)
  • the simplicity of structural checks (and possibly speed) should be an advantage for vetr
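
A minimal comparison sketch (assuming the checkmate and microbenchmark packages are installed; qassert's "N1" rule requires a length-one numeric):

x <- runif(1)
microbenchmark::microbenchmark(
  vet(numeric(1L), x),
  checkmate::qassert(x, "N1"),
  times = 1000
)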

`.(` should imply `.(all`

Basically, tests should pass if expression evaluates to all TRUEs, allows for things such as:

integer() && .(!is.na(.))

instead of

integer() && .(all(!is.na(.)))

Seems like there is no harm to this, and it saves a bit of typing.

COPYRIGHT/LICENSE Issues

  1. The DESCRIPTION comment about seeing COPYRIGHTS seems out of date.
  2. Make sure license info is in every file.

Segfault when testing call

This could well be an alike issue:

> x <- quote(a + b)
> validate(x, 2 + 3)
Error: object 'a' not found
Error in validate(x, 2 + 3) : 
  Validation expression for argument `current` produced an error (see previous error).
> x <- quote(quote(a + b))
> validate(x, 2 + 3)
Error in validate(x, 2 + 3) : 
  Argument `current` should be type "language" (is "double")
> validate(x, quote(2 + 3))
Error in validate(x, quote(2 + 3)) : 
  Argument `current` should be "symbol" (is "double") for token `2` in: `{2}` + 3
> validate(x, quote(x2 + x3))
Warning: stack imbalance in '.Call', 6 then 4

 *** caught bus error ***
address 0x106c29ff8, cause 'non-existent physical address'

Possible actions:
1: abort (with core dump, if enabled)
2: normal R exit
3: exit R without saving workspace
4: exit R saving workspace

Substituting Arg When Combining `.` and Diff Arg Name

fun2 <- function(x, y)
  validate_args(
    x=integer(),
    y=character() && length(x) == length(.)
  )
fun2(1:3, letters[1:4])
## Error in fun2(x = 1:3, y = letters[1:4]) : 
##  For argument `y`, `length(x) == length(letters[1:4])` is not TRUE (FALSE)

It would be nice if x were also substituted so the above is consistent. As it is, this is almost worse than:

length(x) == length(y)

Avoid Double Evaluation of Args

validate will evaluate the arguments as captured from the calls, in the correct frames. The problem is that the arguments will then also be evaluated by the function. Really, validate should force the arguments it validates (we need to think a little about where the alike call should be evaluated).

`validate` return value

Need to think through, right now is:

> validate(integer(1L) && NO.NA && NO.INF, 5.2)
Error in validate(integer(1L) && NO.NA && NO.INF, 5.2) : 
  Argument `current` should be type "integer-like" (is "double")

but maybe the whole "Argument `current` ..." prefix shouldn't show up here, to allow people to use validate however they want instead of us dictating the format. For validate_args the more processed return makes sense, but here perhaps not.

Confusing Error Messages for "NULL"

Could be interpreted as the string "NULL":

## Error in fun(x = 1, y = 2): For argument `y`, `2` should be "NULL", or type "character" (is "double")

with_vetr?

Implement something that takes a function and transforms it into a vetter function:

fun_v <- with_vetter(fun, x=numeric(), y=character())

Multi Option Error Message

> fun1(matrix(1:9, ncol = 3), "fail", "fail")
Error in fun1(x = matrix(1:9, ncol = 3), y = "fail", z = "fail") : 
  Argument meet at least one of the following:
  - `y` should be type "integer-like" (is "character")
  - `y` should be "NULL" (is "character")
  - `y` should be type "logical" (is "character")

Use "could" instead of "should"?

Ensure `match.call` corner cases handled properly

Now that we've switched away from 'match_call', we need to verify that all the cases we wrote 'match_call' for are properly handled.

Also, some errors in existing tests:

> fun7 <- function(x, y = z + 2) {
+     z <- "boom"
+     vetr(x = TRUE, y = 1L)
+ }
> fun7a <- function(x, y = z + 2) {
+     z <- 40
+     vetr(x = TRUE, y = 1L)
+ }
> z <- 1

# fail because z in fun is character

> fun7(TRUE)
Error in vetr(x = TRUE, y = 1L) : 
  Need to implement deparsing of tag since this could be lang now

| Conditions mismatch: 

< .REF$conditions                                                               
> .NEW$conditions                                                               
@@ 1,3 / 1,3 @@                                                                 
  Condition list with 1 condition:                                              
< 1. Error in fun7(x = TRUE, y = z + 2) : Argument `y` produced error during    
> 1. Error in vetr(x = TRUE, y = 1L) : Need to implement deparsing of tag since 
<    evaluation; see previous error.                                            
>    this could be lang now                                                     

unitizer> N

# works

> fun7a(TRUE)
Error in vetr(x = TRUE, y = 1L) : 
  Need to implement deparsing of tag since this could be lang now

| Value mismatch: 

< .ref      > .new    
@@ 1 @@     @@ 1 @@   
< [1] TRUE  > NULL    

| Conditions mismatch: 

< .REF$conditions                                                               
> .NEW$conditions                                                               
@@ 1 / 1,3 @@                                                                   
< Empty condition list                                                          
> Condition list with 1 condition:                                              
> 1. Error in vetr(x = TRUE, y = 1L) : Need to implement deparsing of tag since 
>    this could be lang now         

Alikeness of Functions

Currently the template signature is required to be that of a possible generic for the method. In the future we might relax that, or at least provide a mode that allows a looser fit.

This all came from the valaddin example where we wanted to add checks that we ensured would lead to two argument functions, but they failed because the function arguments were incorrect.

Thinking about it further, it seems that a function should be able to be called with the arguments specified, so the "number of arguments only" approach is probably not a good idea. We could, however, provide a special object along the lines of the elist and vlist being considered in #29 that would vet purely the number of arguments.

`vetr` eval much slower within `knitr`

Run at the command line:

> secant <- function(f, x, dx) (f(x + dx) - f(x)) / dx
> 
> secant_valaddin <- valaddin::firmly(secant, list(~x, ~dx) ~ is.numeric)
> secant_stopifnot <- function(f, x, dx) {
+   stopifnot(is.numeric(x), is.numeric(dx))
+   secant(f, x, dx)
+ }
> secant_vetr <- function(f, x, dx) {
+   vetr(x=numeric(), dx=numeric())
+   secant(f, x, dx)
+ }
> microbenchmark(
+   secant_valaddin(log, 1, .1),
+   secant_stopifnot(log, 1, .1),
+   secant_vetr(log, 1, .1)
+ )
Unit: microseconds
                          expr     min       lq      mean   median       uq     max neval
  secant_valaddin(log, 1, 0.1) 123.589 129.4050 160.07218 139.3860 154.9075 328.443   100
 secant_stopifnot(log, 1, 0.1)   9.168  11.3390  16.11966  13.1475  14.4380  57.431   100
      secant_vetr(log, 1, 0.1)  14.780  16.7175  22.29358  20.1230  21.2600  64.839   100

Run in knitr:

secant <- function(f, x, dx) (f(x + dx) - f(x)) / dx

secant_valaddin <- valaddin::firmly(secant, list(~x, ~dx) ~ is.numeric)
secant_stopifnot <- function(f, x, dx) {
  stopifnot(is.numeric(x), is.numeric(dx))
  secant(f, x, dx)
}
secant_vetr <- function(f, x, dx) {
  vetr(x=numeric(), dx=numeric())
  secant(f, x, dx)
}
microbenchmark(
  secant_valaddin(log, 1, .1),
  secant_stopifnot(log, 1, .1),
  secant_vetr(log, 1, .1)
)

## Unit: microseconds
##                           expr     min       lq      mean  median       uq     max neval
##   secant_valaddin(log, 1, 0.1) 132.051 148.1325 201.55885 166.021 228.5780 616.862   100
##  secant_stopifnot(log, 1, 0.1)  10.504  13.4080  21.38084  16.137  23.4070  82.141   100
##       secant_vetr(log, 1, 0.1)  32.141  37.5715  57.02122  43.360  66.7915 213.646   100

Default arguments evaluated in wrong frame

Right now they are evaluated in the calling frame of the function instead of in the function frame. Unfortunately this is not completely trivial to fix since we need to keep track of which args are defaults vs. which ones are not. One option might be to just not validate default args that have not been changed by the user (not ideal though).

`alike` options

Should be similar to the alike_settings business, and should obviously include the alike options, for example, turning off the integer-like numerics matching integer templates.

Better Mechanism For Token Messages

How do we attach the message "be TRUE or FALSE" to:

logical(1L) && !is.na(.)

Right now we can't do:

identity(logical(1L) && !is.na(.))

because from that point forward the expression stops making sense, and make_val_token

Consistency between Passing Quoted Objects And Putting them In

> validate(quote(quote(a + b)), quote(x2 + x3))
Error in validate(quote(quote(a + b)), quote(x2 + x3)) : 
  `quote(x2 + x3)[[1]]` should be a call to `quote` (is a call to `+`)

unitizer> validate(quote(a + b), quote(x2 + x3))
[1] TRUE

> x <- quote(quote(a + b))
> validate(x, quote(x2 + x3))
[1] TRUE

Error Message Should Match Original Call

Right now we throw error with matched call:

analyze(laps.1)   # Invalid object
# Error in analyze.laps(x = laps.1): 
#   Argument `x` should be "car" at index [[1]] for "names" (is "lap")

but maybe it should be thrown with the actual call. This might be better as is, though.

Ambiguity of when `err.msg` is used

cust.tok.2 <- quote(TRUE)
attr(cust.tok.2, "err.msg") <- letters
vet(cust.tok.2, TRUE)

uses cust.tok.2 as a template rather than a language object with a custom error message.

Properly check for numeric overflows

Right now we check that numbers wrap, and although that works, we should really be checking against INT_MAX and the like since the wrapping is undefined behavior.

Internal INTEGER error in track hash

Can't reproduce this consistently. Seems like it only happens the first time the code is run.

> vetr:::track_hash(letters[1:5], 2L)
Error in vetr:::track_hash(letters[1:5], 2L) : 
  INTEGER() can only be applied to a 'integer', not a 'NULL'

| Value mismatch: 

< .ref           > .new         
@@ 1 @@          @@ 1 @@        
< [1] 1 1 4 1 8  > NULL         

| Conditions mismatch: 

< .REF$conditions                                                         
> .NEW$conditions                                                         
@@ 1 / 1,3 @@                                                             
< Empty condition list                                                    
> Condition list with 1 condition:                                        
> 1. Error in vetr:::track_hash(letters[1:5], 2L) : INTEGER() can only be 
>    applied to a 'integer', not a 'NULL'                      

`..` Does not cause `.` to be Substituted

Somehow the escaping of the dot doesn't permit substitution to happen with a . variable in the substitution environment.

Also, clarify whether only symbols that are all dots, or any symbol with leading dots, must be escaped. It should probably be the latter.
