
easystats / report

661 stars · 17 watchers · 67 forks · 18.24 MB

:scroll: :tada: Automated reporting of objects in R

Home Page: https://easystats.github.io/report/

License: Other

R 93.45% TeX 6.55%
reporting r apa reports statsmodels bayesian models automated-report-generation easystats rstats

report's People

Contributors

bwiernik, camden-bock, cgeger, dominiquemakowski, drfeinberg, dtoher, etiennebacher, fkohrt, github-actions[bot], grimmjulian, humanfactors, indrajeetpatil, jdtrat, lukaswallrich, m-macaskill, mattansb, mutlusun, pkoaz, rempsyc, strengejacke, vincentarelbundock, webbedfeet, wjschne


report's Issues

bayes factors in `bayesfactor_*` report

originally from #38

  • bayesfactor_inclusion
    • text - only results.
    • text_full - Give full explanation about model averaging + posterior probability.
    • table - basically as.data.frame
    • table_full - same as table??
    • docs...
    • tests...
  • bayesfactor_models
    • text - method + denominator + {best + worst model}
    • text_full - with BF computation method + all models.
    • table - basically as.data.frame
    • table_full - also give a "BF01 compared to best model" column.
    • docs...
    • tests...
  • bayesfactor_parameters
    • text - what null (once), BF for each, by name.
    • text_full - same as text??
    • table - basically as.data.frame + row for null and side
    • table_full - same as table??
    • docs...
    • tests...
  • bayesfactor_restricted
    • text - ???
    • text_full - ???
    • table - basically as.data.frame
    • table_full - same as table??
    • docs...
    • tests...

Notes to self:

  • Should use interpret_bf internally for text, with arg rules.
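A rough sketch of what such interpret_bf-style rules could look like (the function name and wording here are made up for illustration; the thresholds follow Jeffreys' classic scale, and the real function would expose them via a `rules` argument):

```r
# Hypothetical sketch: map a Bayes factor to a qualitative label.
# Thresholds follow Jeffreys' (1961) scale; not the actual interpret_bf().
interpret_bf_sketch <- function(bf) {
  if (bf < 1) "evidence against"
  else if (bf < 3) "anecdotal evidence for"
  else if (bf < 10) "moderate evidence for"
  else "strong evidence for"
}

interpret_bf_sketch(5)  # "moderate evidence for"
```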

Bayesian parameters: convergence, Rhat and Effective Sample Size

Since there are explicitly established guidelines for Rhat and ESS interpretation:

Rhat should not be larger than 1.1 (Gelman and Rubin, 1992) or 1.01 (Vehtari et al., 2019).

An effective sample size greater than 1,000 is sufficient for stable estimates (Bürkner, 2017)

It would make sense to add an interpret function and some textual description to the long-text parameters reports:

  • current text. The sampling algorithm correctly converged (Rhat = x, Effective Sample Size = y)
  • current text. However, the estimation of this parameter might not be reliable as the sampling algorithm did not correctly converge (Rhat = x, Effective Sample Size = y)

In case of suspected non-convergence, this info should also be added to the short text report.
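A minimal sketch of such an interpret function (the name and exact wording are assumptions; the thresholds are the ones quoted above, Rhat ≤ 1.01 from Vehtari et al. and ESS ≥ 1,000 from Bürkner):

```r
# Hypothetical helper: map Rhat/ESS diagnostics to the proposed sentences.
interpret_convergence <- function(rhat, ess) {
  if (rhat <= 1.01 && ess >= 1000) {
    sprintf(
      "The sampling algorithm correctly converged (Rhat = %.3f, Effective Sample Size = %.0f)",
      rhat, ess
    )
  } else {
    sprintf(
      "However, the estimation of this parameter might not be reliable as the sampling algorithm did not correctly converge (Rhat = %.3f, Effective Sample Size = %.0f)",
      rhat, ess
    )
  }
}

cat(interpret_convergence(1.005, 4000), "\n")
cat(interpret_convergence(1.050, 4000), "\n")
```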

Revising report due to changes in "parameters" package

parameters::model_parameters(), when using standardize = "refit", no longer returns non-standardized coefficients; it returns only the coefficients, SE and CI based on the model refitted on the standardized data. Only when the standardize method is "posthoc" or "basic" are the "normal" coefficients, SE and CI returned, together with the standardized coefficients (only).

This breaks the default report method. Since standardizing now lives in the effectsize package, I suggest calling model_parameters() without standardizing and then calling effectsize::standardize_parameters() separately to build the report.

I can address this issue, or you can do it, if you like. But I think you have a clearer vision of what should be reported in the end.
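The underlying behaviour can be illustrated in base R alone (this is a sketch of why two separate steps are needed, not the actual parameters/effectsize code): "refit" standardization refits the model on scaled data, so its coefficients replace, rather than accompany, the raw ones.

```r
# Base-R illustration of "refit" standardization (assumption: mtcars as toy data).
model <- lm(mpg ~ wt, data = mtcars)
raw_coef <- coef(model)[["wt"]]  # step 1: raw slope from the original fit

# Step 2: the standardized slope comes from a refit on scaled data.
scaled <- data.frame(
  mpg = as.numeric(scale(mtcars$mpg)),
  wt  = as.numeric(scale(mtcars$wt))
)
std_coef <- coef(lm(mpg ~ wt, data = scaled))[["wt"]]

# The refit yields only standardized quantities; the raw ones must be
# collected separately, which is what the suggested two-call flow does.
round(raw_coef, 2)  # slope in raw units
round(std_coef, 2)  # slope in SD units (equals the correlation here)
```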

Having some trouble running report with lm

I'm using RStudio and trying to run report() on the following code to summarize the output in plain text.

lm.out <- lm(time ~ year, data=Q2_Data)

report(lm.out)

Running this, I then see the following error:

Error in UseMethod("format_value") : no applicable method for 'format_value' applied to an object of class "NULL"

The data I'm using is open source, as I am experimenting with this package, and I've attached it here for reference.

olympics copy.sav.zip

Let me know if there is something I'm doing wrong here. I'm fairly new to the R world so it could be entirely on my side but I thought if anyone would know it would be you guys :)

Language localisation

Describe the solution you'd like
Would be nice to add language localizations so Spanish speakers like me can improve our reporting too!

How could we do it?
A language = "es" switch might be sufficient, using glue to process the string templates.
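A minimal base-R sketch of that switch idea (sprintf stands in for glue here to stay dependency-free; the `language` argument, the `templates` list, and both template strings are made up for illustration):

```r
# Hypothetical localization sketch: pick a sentence template per language.
templates <- list(
  en = "The effect is significant (p = %s).",
  es = "El efecto es significativo (p = %s)."
)

report_sentence <- function(p, language = "en") {
  sprintf(templates[[language]], format(p, nsmall = 3))
}

cat(report_sentence(0.012), "\n")                  # English (default)
cat(report_sentence(0.012, language = "es"), "\n") # Spanish
```

With glue, the templates would carry named placeholders instead of %s, but the dispatch-on-`language` idea is the same.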

Thank you guys! great work!

Purpose for print.report_table containing several package calls

Question and context

This obviously isn't a bug, but I'm trying to work out the logic behind why print.report_table requires nested calls to both insight and then parameters. There's something I've missed along the way in terms of why format_table is contained within insight if it's used for pretty printing. Indeed, this may have been discussed in another thread that I've missed.

#' @export
print.report_table <- function(x, ...) {
  table <- insight::format_table(parameters::parameters_table(x))
  cat(table)
}

If I'm not mistaken, the final call in effect ends up being something like this within report:

cat(insight::format_table(parameters::parameters_table(report::report(a_model_object))))

I would like to better understand this design decision and the history behind it. Pending that answer, I do think it would be worthwhile documenting some of these processes more explicitly within the functions themselves (i.e., dev notes). This is just a suggestion, but it certainly makes contributing and debugging a little easier!

For context, the reason I ask this is that I have been experimenting with some Rmarkdown printing, but it's certainly not trivial to implement within the ecosystem of report. I wanted to follow how these tables were generated... here I am!

Hopefully I haven't missed something written elsewhere in the project. Cheers. 😀

report_participants() too verbose?

library(report)
data <- data.frame(
  "Age" = c(22, 23, 54, 21, 8, 42),
  "Sex" = c("F", "F", "M", "M", "M", "F")
)

report_participants(data)
#> [1] "6 participants (Mean age = 28.33, Mean = 28.33, SD = 16.62, Median = 22.50, MAD = 11.86, range: [8, 54], Skewness = 0.67, Kurtosis = -0.97; 50.00% females)"

Created on 2020-02-14 by the reprex package (v0.3.0)

  1. Mean is mentioned twice.
  2. I think if the median is already reported, we don't need the MAD for age. Mean/SD is enough, maybe also the median, but not the MAD.
  3. Range is OK, but I have never seen a paper where the kurtosis and skewness of age were shown, at least not as descriptive statistics.

report compare_models / rank_models

Related to this.

It would be cool to transform such model ranking into a sentence.

Example:

Model 3 (R2 = 0.85, AIC=123, BIC=111) presented the best fit (according to 2/3 indices), followed by model2 (d_R2 = -0.1, d_AIC = 5, d_BIC=-2) and model1 (d_R2 = -0.15, d_AIC = 8, d_BIC=5).
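One possible sketch of how such a sentence could be assembled from a ranking table (the table contents and the exact wording are invented for illustration; the real implementation would pull the indices from performance's comparison output):

```r
# Hypothetical model-ranking table (values made up for the example).
ranking <- data.frame(
  Model = c("model3", "model2", "model1"),
  R2    = c(0.85, 0.75, 0.70),
  AIC   = c(123, 128, 131)
)

best   <- ranking[1, ]
others <- ranking[-1, ]

# Express the followers as deltas relative to the best model.
followers <- paste(sprintf(
  "%s (d_R2 = %.2f, d_AIC = %g)",
  others$Model, others$R2 - best$R2, others$AIC - best$AIC
), collapse = " and ")

sentence <- sprintf(
  "%s (R2 = %.2f, AIC = %g) presented the best fit, followed by %s.",
  best$Model, best$R2, best$AIC, followers
)
cat(sentence, "\n")
```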

Colours for text reports?

As the text (especially the fulltext) can become a bit lengthy (for instance in the case of Bayesian models), I am wondering about the possibility of adding some "contrast" to the text, by colouring some of the values using the same colour code as in the table. It would help to identify key parts of the text.

One way to achieve this is to run a smart regex (🤢) over the text to find key elements, e.g. where "beta = X,", "Median = X" etc. appear. However, this doesn't seem straightforward 😕
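A rough proof-of-concept of the regex idea (using raw ANSI escape codes to keep it dependency-free; in practice the crayon package would be the tidier choice, and the pattern would need to cover more value formats than shown here):

```r
# Sketch: wrap "name = value" fragments in an ANSI colour (cyan), so key
# statistics stand out in terminal output. Pattern is illustrative only.
colour_values <- function(text) {
  gsub("((beta|Median|SE) = -?[0-9.]+)", "\033[36m\\1\033[39m", text)
}

cat(colour_values("The effect is significant (beta = -1.49, SE = 0.14, p < .001)."), "\n")
```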

What is "std.beta" in the output of report()?

Hello, I'm running the report() on a mixed-effects model and part of the output reads:

- The effect of grammar is negative and can be considered as very large and significant (beta = -1.49, SE = 0.14, 95% CI [-1.77, -1.21], std. beta = -1.49, p < .001).

I thought "std.beta" is the standardized parameter coefficient, but it's not. "beta" is the standardized parameter coefficient. Then what is "std. beta"? How is it obtained? I couldn't find a document for the information. Could somebody help?

Fix links in dependencies

* checking Rd cross-references ... WARNING
Missing link or links in documentation object 'report.aov.Rd':
  'eta_squared'

Missing link or links in documentation object 'report.lavaan.Rd':
  'parameters_standardize' 'standardize.lm'

Missing link or links in documentation object 'report.lm.Rd':
  'parameters_standardize' 'model_parameters.stanreg'
  'parameters_bootstrap' 'standardize.lm'

Missing link or links in documentation object 'report.lmerMod.Rd':
  'parameters_standardize' 'model_parameters.stanreg'
  'parameters_bootstrap' 'p_value' 'ci' 'standardize.lm'

Missing link or links in documentation object 'report.stanreg.Rd':
  'parameters_standardize' 'hdi' 'eti' 'rope' 'p_direction'
  'standardize.lm'

Rmarkdown partials for report (knit_print method)

Describe the solution you'd like
My codebook package does, I guess, a very extensive report on a dataset. I'm using rmarkdown partials for this, i.e. markdown that is echoed with the class knit_asis. That way, I can report on something using a mixture of text, tables and graphs.

How could we do it?
Maybe this is out of scope for the report package, but I already see examples in the description that would benefit from markdown capabilities (italics, symbols, combining tables with text for model summaries).

I hit a few snags when implementing this, but it now works like a charm. Here are my relevant helper functions and a simple example. It's a bit more bothersome to write tests for, but I managed to get pretty good coverage anyway.

Error with objects of class stanreg

Describe the bug
report() produces an error when run on an object of class "stanreg" "glm" "lm"

To Reproduce

library(rstanarm)
#> Loading required package: Rcpp
#> rstanarm (Version 2.19.2, packaged: 2019-10-01 20:20:33 UTC)
#> - Do not expect the default priors to remain the same in future rstanarm versions.
#> Thus, R scripts should specify priors explicitly, even if they are just the defaults.
#> - For execution on a local, multicore CPU with excess RAM we recommend calling
#> options(mc.cores = parallel::detectCores())
#> - bayesplot theme set to bayesplot::theme_default()
#>    * Does _not_ affect other ggplot2 plots
#>    * See ?bayesplot_theme_set for details on theme setting
library(report)

z <- stan_glm(mpg ~ cyl, data = mtcars, refresh = 0)

report(z)
#> Error: $ operator is invalid for atomic vectors

Created on 2020-03-02 by the reprex package (v0.3.0)

Expected behaviour
Should get a report object.

Specifications (please complete the following information):

  • report_0.1.0
  • rstanarm_2.19.2

Can't calculate log-loss. Error: 'format_ci' is not an exported object from 'namespace:parameters'

I'm not sure if this is an error with the report() function or something with my data but here it is:

When trying to get report to summarize a glm object, I get the following error:

Can't calculate log-loss.
Error: 'format_ci' is not an exported object from 'namespace:parameters'

Here is what I'm running:

glm.out <- glm(data=data, survived ~ sex, family=binomial)

report(glm.out)

I'm using some basic titanic data for learning purposes and trying to run a logistic regression on sex vs. survived and consider this to be binomial data.


I'm not sure if it is just me or if it is the report() package.

Call to `parameters::model_parameters()` needs to be updated

# Parameters -----------------------------------------------------------------
if (bootstrap & !info$is_bayesian) {
  # Avoid issues in parameters_bootstrap for mixed models
  if (is.null(ci_method) || ci_method %in% c("wald", "boot")) ci_method <- "quantile"
  parameters <- parameters::model_parameters(model, ci = ci, bootstrap = bootstrap, iterations = iterations, p_method = p_method, ci_method = ci_method, standardize = NULL)
} else {
  parameters <- parameters::model_parameters(model, ci = ci, bootstrap = bootstrap, iterations = iterations, p_method = p_method, ci_method = ci_method, centrality = centrality, dispersion = dispersion, test = test, rope_range = rope_range, rope_ci = rope_ci, bf_prior = bf_prior, diagnostic = diagnostic, standardize = NULL)
}

The arguments ci_method and p_method are no longer valid for model_parameters() (it is now df_method); however, when passing to describe_posterior() or for bootstrapped models, we do still need ci_method as an argument.

Need to check how we handle this.
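One possible way to handle the split (a sketch, not the actual fix; the helper name and its return shape are invented, and the real call sites would splice these arguments into model_parameters() via do.call()):

```r
# Hypothetical dispatcher: choose the argument name by model situation.
# df_method for plain frequentist fits; ci_method for bootstrapped/Bayesian,
# where it is forwarded to describe_posterior().
build_method_args <- function(bootstrap, is_bayesian, method) {
  if (bootstrap || is_bayesian) {
    list(ci_method = method)
  } else {
    list(df_method = method)
  }
}

build_method_args(bootstrap = TRUE,  is_bayesian = FALSE, method = "quantile")
build_method_args(bootstrap = FALSE, is_bayesian = FALSE, method = "wald")
```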

output-style from report

I think we can / should improve the output style of reported model tables. Currently, it is:

library(report)
library(magrittr)
data(iris)

lm(Sepal.Length ~ Petal.Length + Species, data=iris) %>%
  report() %>%
  table_long() 
#> Parameter         | Coefficient |   SE | CI_low | CI_high |     t | df_error |    p | Std_Coefficient |    Fit
#> --------------------------------------------------------------------------------------------------------------
#> (Intercept)       |        1.50 | 0.19 |   1.12 |    1.87 |  7.93 |      146 | 0.00 |            1.50 |       
#> Petal.Length      |        1.93 | 0.14 |   1.66 |    2.20 | 13.96 |      146 | 0.00 |            1.93 |       
#> Speciesversicolor |       -1.93 | 0.23 |  -2.40 |   -1.47 | -8.28 |      146 | 0.00 |           -1.93 |       
#> Speciesvirginica  |       -2.56 | 0.33 |  -3.21 |   -1.90 | -7.74 |      146 | 0.00 |           -2.56 |       
#>                   |             |      |        |         |       |          |      |                 |       
#> AIC               |             |      |        |         |       |          |      |                 | 106.23
#> BIC               |             |      |        |         |       |          |      |                 | 121.29
#> R2                |             |      |        |         |       |          |      |                 |   0.84
#> R2 (adj.)         |             |      |        |         |       |          |      |                 |   0.83
#> RMSE              |             |      |        |         |       |          |      |                 |   0.33

Created on 2020-02-14 by the reprex package (v0.3.0)

Things that can be improved

  1. CIs can be collapsed into one column, like in model_parameters().
  2. Column Std_Coefficient is identical to Coefficient
  3. My main concern is the fit indices, which are additional rows for an additional column. I think we can change the style here, having
  • top left: headline, maybe formula, or "linear regression" or so
  • top right: fit indices
  • bottom: coefficient table

For the layout of 3), I have something like the Stata output in mind (without the sums-of-squares table)

[example screenshots of regression-table layouts]

Features ideas and plans (i.e., TODO list)

General

  • coloured to_text: Add colours in textual report
  • report priors: Extract and format priors from Bayesian models
  • report algorithm: Extract algorithm used (for Bayesian, sampling and sampling properties (chains, samples), for frequentist, OLS, ML and such?)

Support

  • ANOVAs: depends on omega squared in parameters
  • brms: depends on get_priors in parameters
  • lme4: depends on p values in parameters
  • rstanarm: meanfield algorithm: depends on find_algorithm().
  • estimate objects: means and contrasts
  • ...

Error: 'format_ci' is not an exported object from 'namespace:parameters'

Hi! This is the error that I got when trying to call "report" on t.test results:

Describe the bug
Error: 'format_ci' is not an exported object from 'namespace:parameters'

To Reproduce

t.test(mtcars$mpg ~ mtcars$am) %>% 
 report()

Tried

# reinstall these packages:
devtools::install_github(c("easystats/insight",
                             "easystats/bayestestR",
                             "easystats/performance",
                             "easystats/parameters",
                             "easystats/correlation",
                             "easystats/estimate",
                             "easystats/see",
                             "easystats/report"))

Specifications:

#sessionInfo()
R version 3.4.3 (2017-11-30)
Platform: x86_64-redhat-linux-gnu (64-bit)
Running under: CentOS Linux 7 (Core)

Matrix products: default
BLAS/LAPACK: /usr/lib64/R/lib/libRblas.so

locale:
 [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C
 [3] LC_TIME=en_US.UTF-8        LC_COLLATE=en_US.UTF-8
 [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8
 [7] LC_PAPER=en_US.UTF-8       LC_NAME=C
 [9] LC_ADDRESS=C               LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base

other attached packages:
[1] report_0.1.0

loaded via a namespace (and not attached):
 [1] Rcpp_1.0.3       crayon_1.3.4     dplyr_0.8.3      assertthat_0.2.1
 [5] R6_2.4.1         bayestestR_0.4.9 magrittr_1.5     pillar_1.4.2
 [9] rlang_0.4.2.9000 tools_3.4.3      glue_1.3.1       purrr_0.3.3
[13] compiler_3.4.3   pkgconfig_2.0.3  parameters_0.4.0 insight_0.8.0
[17] tidyselect_0.2.5 tibble_2.1.3

Thank you in advance for your attention!

JY

Adding model performance in table

In traditional APA regression tables, model performance metrics (R2) are included at the bottom line (see here). We could add an extra column in to_table or to_fulltable, with an empty name, displaying the indices of fit in an extra line.

This would be quite clunky from a data-analysis-pipeline perspective (if one wants to extract the parameters and do something with them later on), but useful for the table to be reported "as is", which is the scope of the package.

A solution could be a show_model_metrics parameter with which one could remove this extra line and column.

ICC report

@strengejacke Thanks to your work on ICC, we could start thinking about its reporting. I think this information has its place, when applicable, in the full reports (fulltext and fulltable). Below are some hints about how to actually report it.

Interpretation

From Koo & Li (2016):

Values less than 0.5 are indicative of poor reliability, values between 0.5 and 0.75 indicate moderate reliability, values between 0.75 and 0.9 indicate good reliability, and values greater than 0.90 indicate excellent reliability.

From statisticshowto:

  • A high Intraclass Correlation Coefficient (ICC) close to 1 indicates high similarity between values from the same group.
  • A low ICC close to zero means that values from the same group are not similar.
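A minimal sketch of what an interpret function for the ICC could look like (the function name is an assumption; the cutoffs are the Koo & Li (2016) ones quoted above):

```r
# Hypothetical interpret_icc() following Koo & Li's (2016) cutoffs:
# < 0.5 poor, 0.5-0.75 moderate, 0.75-0.9 good, > 0.9 excellent.
interpret_icc <- function(icc) {
  if (icc < 0.5) "poor"
  else if (icc < 0.75) "moderate"
  else if (icc < 0.9) "good"
  else "excellent"
}

interpret_icc(0.83)  # "good"
```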

Reporting

From Koo & Li (2016):

There is currently a lack of standard for reporting ICC in the clinical research community. Given that different forms of ICC involve distinct assumptions in their calculation and will lead to different interpretations, it is imperative for researchers to report detailed information about their ICC estimates. We suggest that the best practice of reporting ICC should include the following items: software information, “Model,” “Type,” and “Definition” selections. In addition, both ICC estimates and their 95% confidence intervals should be reported. For instance, the ICC information could be reported as such:

"ICC estimates and their 95% confident intervals were calculated using SPSS statistical package version 23 (SPSS Inc, Chicago, IL) based on a mean-rating (k = 3), absolute-agreement, 2-way mixed-effects model."

Questions:

  • For Bayesian models, still use the 95% CI (or 90% to be consistent with the R2 and the parameters?)
  • How to deal with the different ICCs (in the above example, "mean-rating (k = 3), absolute-agreement", etc.)?

Implementation

An example of a more generic (applicable not only for reliability studies) possible report sentence:

  • "The ICC suggests a strong (instead of good?) similarity between values of the same [random? to avoid confusion with fixed effects for absolute beginners] groups (ICC = 0.83, 95% CI [0.76, 0.88])"

change report methods names?

Currently, report objects can be passed to to_text() (the default print method), to_fulltext(), to_table() and to_fulltable(). The goal is also to make them work with see's plot method.

Maybe changing the method names would be nicer and more intuitive? Like details() (default) instead of to_fulltext(), summary() instead of to_text(), table() instead of to_table(); and for the full table, I don't know...
Thoughts?

(PS: currently report is pretty much broken until parameters is CRAN ready (which hopefully should be soon))

Improve report() for anova() outcome of regression models

I noticed that report() of an anova() object does not provide correct/complete output for regression models (e.g. lm, lmer, glm).

In particular, it could be very helpful if report() could give an ANOVA-like table of main and interaction effects for lmer or complex multiple lm() models, as the coefficient parameters are sometimes hard to understand in these cases, some coefficients do not make sense empirically (e.g. an expected difference of 0 on a numerical coefficient (age) between the levels of a categorical coefficient (use/non-use of electronic cigarettes) for the DV (motivation to quit smoking)), and the reference level for categorical comparisons is chosen by R without regard to experimental or theoretical considerations.
Using report for these models could also help provide effect sizes from sjstats and other relevant parameters of an anova() object to be reported in papers.

This is secondary but might help to report relevant information: report() does not work for anova() of model comparisons. Honestly, I do not know whether there are guidelines for model comparison reporting in psychology/neuroscience. However, report() could help for picking the right parameters (e.g. AIC, BIC or Chi2 and so on) and correct writing.

Lastly, would it be possible to get main and interaction effects for stan_lm/glm/lmer model with report()? the coefficient parameters output is the same as classic regression models and would be helpful to get the general effect of a predictor instead of its coefficient estimation.

How could we do it?
For instance, significance for fixed effects in lmer could be obtained with anova(model.lmer, ddf = "Kenward-Roger") (Luke, 2017), or for other regression models with anova(model.lm) or anova(model.glm, test = "F").

I am not an expert in R programming and cannot help with actual coding, this is just a suggestion that I noticed during practical use of report() and R in general.

I appreciate your work!

Luke, S.G. Behav Res (2017) 49: 1494. https://doi.org/10.3758/s13428-016-0809-y

Software paper for report

Once correlation and estimate are on CRAN, report will go in, with a high priority to make it citeable.

I think that it makes sense to associate all contributors of easystats to this publication, as it heavily relies on all of the easystats dependencies.

Here's a list of potential journals (feel free to complete the list):

Any opinions and experiences are welcome!

Error: 'format_value' is not an exported object from 'namespace:parameters'

I recently installed the package and it fails when producing the report of a data frame:

library(report)
report(iris)
Error: 'format_value' is not an exported object from 'namespace:parameters'

The same error occurs when trying to produce the report with mtcars.

However, it works well with

lm(Sepal.Length ~ Petal.Length + Species, data=iris) %>%
report() %>%
to_table()

The error pops up in a fresh session of R 3.6.1 with report version 0.1.0.

error creating report on a stan_lmer

I get the following error when I try to create a report object on a Bayesian mixed model (class = "stanreg" "glm" "lm" "lmerMod"):

Error in match.arg(diagnostic, c("ESS", "Rhat", "MCSE", "all"), several.ok = TRUE) : 
  'arg' must be NULL or a character vector

This code produces that error:

library(easystats)
library(rstanarm)

lme_mod_4_sq_bayes <- rstanarm::stan_lmer(speed_hrs ~ rel_demand_sc +   
                      items_sc + (items_sc + items_sc_sq + rel_demand_sc| euid),
                    data = df_train, 
                    na.action = na.omit)

report::report(lme_mod_4_sq_bayes)

I thought it was my model, so I tried the example from the package docs and get the same thing:

library(easystats)
library(rstanarm)

model <- rstanarm::stan_lmer(Sepal.Length ~ Petal.Length + (1 | Species), data = iris)
report(model)

Produces this error again:

Error in match.arg(diagnostic, c("ESS", "Rhat", "MCSE", "all"), several.ok = TRUE) : 
  'arg' must be NULL or a character vector

I can create a report and apply the typical functions to an lmer model. E.g., when I run:

mod <- readRDS("./output/models/lmer.RDS")
class(mod)
[1] "lmerMod"
attr(,"package")
[1] "lme4"

and then run:

mod %>% report(standardize = TRUE, effsize = "cohen1988") %>% to_fulltext()

I get the text report as expected.

Please set the `standardize` method explicitly. Set to "refit" by default.
Model failed to converge with max|grad| = 0.00307336 (tol = 0.002, component 1)

We fitted a linear mixed model (using REML algorithm and nloptwrap optimizer) to predict speed_hrs with rel_demand_sc and items_sc (formula = speed_hrs ~ rel_demand_sc + items_sc). The model included items_sc, items_sc_sq, rel_demand_sc and euid as random effects (formula = ~items_sc + items_sc_sq + rel_demand_sc | euid). The 95.00% Confidence Intervals (CIs) and p values were computed using Wald approximation. Effect sizes were labelled following Cohen's (1988) recommendations. The model's total explanatory power is substantial (conditional R2 = 0.88) and the part related to the fixed effects alone (marginal R2) is of 0.73. The model's intercept, corresponding to rel_demand_sc = 0 and items_sc = 0, is at  (t = 20.74, 95% CI [-0.43, 1.32], p < .001).

Within this model: 
  - items_sc is  and significant (beta = , SE = 0.03, t = 17.81, 95% CI [-0.59, 1.81], p < .001).
  - rel_demand_sc is  and significant (beta = , SE = 0.01, t = -3.07, 95% CI [0.02, -0.07], p < .01).

I thought this was similar to issue #19 because I also had the same earlier version of R installed (3.5.2) as the user who raised it. But the same error persists, even after updating to 3.6.1 and trying:

devtools::install_github("easystats/bayestestR", force=TRUE)
devtools::install_github("easystats/performance", force=TRUE)
devtools::install_github("easystats/parameters", force=TRUE)
devtools::install_github("easystats/correlation", force=TRUE)
devtools::install_github("easystats/estimate", force=TRUE)
devtools::install_github("easystats/report", force=TRUE)

and easystats::install_easystats_latest()

Any idea what's causing this? This is a great package, and it would simplify my workflow a ton if I can get it working!

My system info:

R version 3.6.1 (2019-07-05)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 17134)

Matrix products: default

locale:
[1] LC_COLLATE=English_United States.1252  LC_CTYPE=English_United States.1252    LC_MONETARY=English_United States.1252 LC_NUMERIC=C                          
[5] LC_TIME=English_United States.1252  

Features for "table1()"

  • Column name by context (i.e. replace Summary with Mean (SD) or Median (MAD))
  • Add Total column when group_by is used.

missing values in output

Hi, very neat package, love the idea of saving (me) time to report all those ANOVAs.

When reporting an ANOVA, following your example, my result misses a few statistics (deg. of freedom, omega):


The ANOVA suggests that:

  • The effect of Species is significant (F() = 119.26, p < .001) and can be considered as large (partial omega squared = ).

Same happens with any other data I try.

My sessionInfo:

R version 3.6.1 (2019-07-05)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 18362)

Matrix products: default

locale:
[1] LC_COLLATE=English_United States.1252 LC_CTYPE=English_United States.1252 LC_MONETARY=English_United States.1252
[4] LC_NUMERIC=C LC_TIME=English_United States.1252

attached base packages:
[1] stats graphics grDevices utils datasets methods base

other attached packages:
[1] report_0.1.0

loaded via a namespace (and not attached):
[1] Rcpp_1.0.2 compiler_3.6.1 pillar_1.4.2 parameters_0.1.0 prettyunits_1.0.2
[6] remotes_2.1.0 tools_3.6.1 boot_1.3-22 testthat_2.1.1 digest_0.6.20
[11] pkgbuild_1.0.3 pkgload_1.0.2 lattice_0.20-38 memoise_1.1.0 tibble_2.1.3
[16] pkgconfig_2.0.2 rlang_0.4.0 Matrix_1.2-17 cli_1.1.0 curl_3.3
[21] mvtnorm_1.0-11 coda_0.19-3 bayestestR_0.2.5 withr_2.1.2 dplyr_0.8.3
[26] desc_1.2.0 fs_1.3.1 devtools_2.1.0 grid_3.6.1 rprojroot_1.3-2
[31] tidyselect_0.2.5 performance_0.3.0.9000 glue_1.3.1 R6_2.4.0 processx_3.4.0
[36] survival_2.44-1.1 sessioninfo_1.1.1 multcomp_1.4-10 TH.data_1.0-10 callr_3.3.0
[41] purrr_0.3.2 magrittr_1.5 codetools_0.2-16 MASS_7.3-51.4 splines_3.6.1
[46] backports_1.1.4 ps_1.3.0 emmeans_1.4 usethis_1.5.1 insight_0.4.1
[51] assertthat_0.2.1 estimate_0.1.0 xtable_1.8-4 sandwich_2.5-1 estimability_1.3
[56] crayon_1.3.4 correlation_0.1.0 zoo_1.8-6

Eff size and missing data on report for lmer

I cannot find the effect size for an lmer object with report, even though it is specified in the call. How can I get this?

Additionally, I found missing values in the text, due to a bug I think ;)

Many thanks!

fit2 <- lmer(Own~ Condition * Time * Iaw_tot + (1|Soggetti), data = df1)
r <- report(fit2, effsize = "cohen1988")
to_fulltext(r)
We fitted a linear mixed model to predict Own with Condition, Time and Iaw_tot (formula = Own ~ Condition * Time * Iaw_tot). The model included Soggetti as random effects (formula = ~1 | Soggetti). Effect sizes were labelled following Cohen's (1988) recommendations. The model's intercept, corresponding to Condition = , Time = and Iaw_tot = 0, is at (t() = 6.24, 95% CI [-3.23, 9.97], p < .001).

Within this model:

  • Condition_ASync is and not significant (beta = , SE = 0.53, t() = -0.35, 95% CI [0.18, -0.55], p > .1).
  • Condition_ASync:Iaw_tot is and not significant (beta = , SE = 0.01, t() = -0.42, 95% CI [0.00, -0.01], p > .1).
  • Condition_ASync:Time2 is and not significant (beta = , SE = 0.97, t() = 1.23, 95% CI [-1.14, 3.53], p > .1).
  • Condition_ASync:Time2:Iaw_tot is and not significant (beta = , SE = 0.02, t() = -1.58, 95% CI [0.03, -0.09], p > .1).
  • Iaw_tot is and not significant (beta = , SE = 0.01, t() = 0.50, 95% CI [-0.00, 0.01], p > .1).
  • Time2 is and not significant (beta = , SE = 0.67, t() = -1.61, 95% CI [1.03, -3.19], p > .1).
  • Time2:Iaw_tot is and not significant (beta = , SE = 0.01, t() = 1.64, 95% CI [-0.02, 0.07], p > .1).

get rid of dplyr

I think we should avoid using dplyr in report, and looking at the code, I think this is easy to do. I can work on report.data.frame()...

report() issue

Hi everyone,

I have got this issue with report(). I re-installed bayestestR but there is still this problem. How can I solve this?
Best

r <- report(fit)
Error: 'rope_bounds' is not an exported object from 'namespace:bayestestR'

report for modification indices from lavaan

library(lavaan)
structure <- '
  # latent variable definitions
    ind60 =~ x1 + x2 + x3
    dem60 =~ y1 + a*y2 + b*y3 + c*y4
    dem65 =~ y5 + a*y6 + b*y7 + c*y8
  # regressions
    dem60 ~ ind60
    dem65 ~ ind60 + dem60
  # residual correlations
    y1 ~~ y5
    y2 ~~ y4 + y6
    y3 ~~ y7
    y4 ~~ y8
    y6 ~~ y8
'
model <- lavaan::sem(structure, data=PoliticalDemocracy)
modificationindices(model)

Should we use an absolute MI threshold, or detect the elbow of the curve (i.e., the largest jump in the ordered MI values, found via the estimates' derivative)?
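The elbow idea could be sketched as follows. This is a hypothetical helper (mi_elbow is not an existing function in report or lavaan): sort the modification indices in decreasing order and keep everything before the single largest drop between consecutive values.

```r
# Hypothetical sketch: keep the MIs above the "elbow", defined as the
# largest drop between consecutive values of the ordered MI vector.
mi_elbow <- function(mi) {
  mi <- sort(mi, decreasing = TRUE)
  drops <- -diff(mi)        # size of the drop between consecutive values
  keep <- which.max(drops)  # index just before the largest drop
  mi[seq_len(keep)]
}

# e.g., on the lavaan model above:
# mi_elbow(modificationindices(model)$mi)
```

A more robust variant might smooth the curve first or require the largest drop to exceed some multiple of the median drop, but the one-line diff() version captures the basic idea.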

Trouble Installing package from GitHub

I read the documentation for your package, but I am having a hard time installing it due to an error related to Rcpp.

Curious if anyone else is having these same issues.

[screenshot of the installation error]

model metrics don't match between model_text_performance_bayesian() and model_performance()

report() on a Bayesian mixed model doesn't report text on the model's explanatory performance from the internal model_text_performance_bayesian() ("The model's explanatory power is....").

It looks like this is because the names(performance) generated from performance::model_performance() don't match the conditions inside model_text_performance_bayesian().

E.g.:

table_performance_0 <- performance::model_performance(b_mod, performance_metrics = "all")
model_text_performance_bayesian(table_performance_0)

produces

$text
[1] ""

$text_full
[1] ""

because names(table_performance_0) does not have column names "R2_Median" or "R2_marginal_Median", but rather "R2" and "R2_marginal".

Changing the names produces (nearly) expected text results.

names(table_performance_0)[6] <- "R2_Median"
names(table_performance_0)[8] <- "R2_marginal_Median"
model_text_performance_bayesian(performance = table_performance_0)
$text
[1] "The model's explanatory power is substantial (R2's median = 0.85, LOO adj. R2 = 0.85). Within this model, the explanatory power related to the fixed effects alone (marginal R2's median) is of 0.78."

$text_full
[1] "The model's total explanatory power is substantial (R2's median = 0.85, MAD = , 90% CI [, ], LOO adj. R2 = 0.85). Within this model, the explanatory power related to the fixed effects alone (marginal R2's median) is of 0.78 (MAD = , 90% CI [, ])."

I say nearly because MAD is still missing from the performance table: model_text_performance_bayesian() expects to find columns in the performance table called "R2_marginal_MAD" and "R2_marginal_CI_high".

Unless I'm missing something, which is totally possible.
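Until the column names are reconciled inside the package, a workaround sketch would be to rename by name (not by position, which is fragile if the metric set changes). fix_performance_names below is a hypothetical helper, not part of report:

```r
# Hypothetical workaround: rename performance columns by name so that
# model_text_performance_bayesian() finds the *_Median columns it expects.
fix_performance_names <- function(performance) {
  map <- c(R2 = "R2_Median", R2_marginal = "R2_marginal_Median")
  hits <- names(performance) %in% names(map)
  names(performance)[hits] <- map[names(performance)[hits]]
  performance
}

# fixed <- fix_performance_names(table_performance_0)
# model_text_performance_bayesian(fixed)
```

The real fix presumably belongs inside report, making the text function accept the names that performance::model_performance() actually produces.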

multiple errors: can’t run the package

Dear all,
I'd really like to use report, I think it's a very neat package. But I'm having some problems.
I ran a Bayesian mixed model and report does not seem to work.

If i run:

fit1 <- stan_lmer(Agency ~ Condition + Time + IAc_tot + (1|Soggetti), data = df )
r <- report:::report(fit1)

I get:

Error: 'rope_bounds' is not an exported object from 'namespace:bayestestR'

If I run:

r <- psycho::analyze(fit1)

I get:

Warning messages:
1: Some Pareto k diagnostic values are too high. See help('pareto-k-diagnostic') for details.
 
2: In R2_LOO_Adjusted(fit) :
  Something went wrong in the Loo-adjusted R2 computation.

But I'm able to see a summary of the model and all the relevant information (which I'm not able to do with report).

r

We fitted a Markov Chain Monte Carlo gaussian (link = identity) [... etc.]
The model has an explanatory power (R2) of about 64.88% (MAD = 0.06, 90% CI [0.54, 0.74]). The intercept is at 3.90 (MAD = 0.64, 90% CI [2.88, 4.93]). Within this model:

  • The effect of Conditionas has a probability of 99.15% of being negative (Median = -0.67, MAD = 0.27, 90% CI [-1.09, -0.22], Overlap = 22.75%). etc...

I can also just extract the summary with

summary(r, round = 2)

And everything works fine.

I'm running R 3.5.3 (x64) and R studio Version 1.2.1335 on a Windows 10 machine.
other attached packages:
[1] lme4_1.1-21 Matrix_1.2-17 report_0.1.0 tidyr_0.8.3 estimate_0.1.0 dplyr_0.8.0.1 rstanarm_2.18.2 Rcpp_1.0.1
[9] psycho_0.4.9 ggplot2_3.1.1
[97] parameters_0.1.0

Side note: I had several problems in installing report package and estimate package.
They required the parameters package, but it always failed to install (non-zero exit status).
I solved by manually installing the parameters package v 0.1.0 and then when I ran

library(devtools)
devtools::install_github("easystats/report")
devtools::install_github("easystats/estimate")

Everything worked fine.

Anyway, I'd really appreciate any help.
Thank you very much.

Dan

Improve table printing

Table printing is a bit jiggly, i.e., the columns are not perfectly aligned (due to coloured columns). This must be improved, but I am not sure how...

The answer is probably within the .display.data.frame function in display.R, where the alignment is done.
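If the misalignment comes from the coloured columns, one plausible cause is that padding is computed from nchar() of the raw string, which counts the invisible ANSI colour escape sequences. A sketch of a fix (strip_ansi and pad_cell are hypothetical names, not existing functions in display.R):

```r
# Hypothetical sketch: pad cells by their *visible* width, i.e. the
# width after stripping ANSI colour escape codes, so coloured columns
# line up with plain ones.
strip_ansi <- function(x) gsub("\033\\[[0-9;]*m", "", x)

pad_cell <- function(x, width) {
  visible <- nchar(strip_ansi(x))
  paste0(x, strrep(" ", pmax(0, width - visible)))
}
```

Column widths would then be max(nchar(strip_ansi(cells))) rather than max(nchar(cells)), and the colour codes stay in the output without affecting alignment.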
