sbfnk / rbi

R package for Bayesian inference with state-space models using LibBi.

Home Page: https://sbfnk.github.io/rbi/


rbi's Introduction

Bayesian inference for state-space models with R

GitHub R package version R-CMD-check codecov GitHub contributors License: GPL v3

rbi is an R interface to libbi, a library for Bayesian inference.

It mainly contains:

  • various functions to retrieve and process the results from libbi (which are stored in NetCDF format)
  • a bi_model class for manipulating libbi models
  • a libbi wrapper class for performing Bayesian inference using libbi from within R

Installation

The easiest way to install the latest stable version of rbi is via CRAN:

install.packages("rbi")

Alternatively, the current development version can be installed using the remotes package

# install.packages("remotes")
library("remotes")
install_github("sbfnk/rbi")

The rbi package has only been tested on GNU/Linux and OS X, but it should mostly work everywhere R works.

If you want to use rbi as a wrapper to LibBi then you need a working version of LibBi. To install LibBi on a Mac, the easiest way is to install Homebrew, followed by (using a command shell, i.e. Terminal or similar):

brew install libbi

On Linux, follow the instructions provided with LibBi.

The path to the libbi script can be passed as an argument to rbi; otherwise, the package tries to find it automatically using the which Linux/Unix command.
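For example (a sketch; the path /usr/local/bin/libbi below is a placeholder for wherever LibBi is installed on your system, and you should check ?run.libbi for the exact argument name):

```r
library(rbi)

## set the path once per session via an option ...
options(path_to_libbi = "/usr/local/bin/libbi")

## ... or pass it explicitly when running a client, e.g.
## model <- bi_model(system.file(package = "rbi", "PZ.bi"))
## bi <- sample(model, end_time = 100, path_to_libbi = "/usr/local/bin/libbi")
```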

If you just want to process the output from LibBi, then you do not need to have LibBi installed.
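For example, bi_read can read a LibBi NetCDF output file directly, without LibBi being installed (the file name output.nc below is hypothetical):

```r
library(rbi)

## read all variables from an existing LibBi output file into a named list
results <- bi_read("output.nc")

## or restrict to selected variables
## results <- bi_read("output.nc", vars = c("P", "Z"))
```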

Getting started

A good starting point is to look at the included demos:

demo(PZ_generate_dataset) ## generating a data set from a model
demo(PZ_PMMH) ## particle Markov-chain Metropolis-Hastings
demo(PZ_SMC2) ## SMC^2
demo(PZ_filtering) ## filtering

For further information, have a look at the introductory vignette linked from the rbi CRAN page.

Using coda

rbi contains the get_traces function, which provides an interface to coda.
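A minimal sketch (assuming bi is a fitted libbi object, e.g. produced by one of the demos above):

```r
library(rbi)
library(coda)

## extract posterior samples: one row per sample, one column per parameter
traces <- get_traces(bi)

## convert to a coda mcmc object for standard diagnostics
mc <- mcmc(traces)
summary(mc)
effectiveSize(mc)
```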

Other packages

For higher-level methods to interact with LibBi, have a look at rbi.helpers.

rbi's People

Contributors

actions-user, bisaloo, blackedder, pierrejacob, sbfnk


rbi's Issues

Release rbi 1.0.0

Prepare for release:

  • Finalise #26
  • git pull
  • Check current CRAN check results
  • Polish NEWS
  • usethis::use_github_links()
  • urlchecker::url_check()
  • devtools::build_readme()
  • devtools::check(remote = TRUE, manual = TRUE)
  • devtools::check_win_devel()
  • revdepcheck::revdep_check(num_workers = 4)
  • Update cran-comments.md
  • git push
  • Draft blog post

Submit to CRAN:

  • usethis::use_version('major')
  • devtools::submit_cran()
  • Approve email

Wait for CRAN...

  • Accepted 🎉
  • Add preemptive link to blog post in pkgdown news menu
  • usethis::use_github_release()
  • usethis::use_dev_version(push = TRUE)
  • Finish blog post
  • Tweet

Error messages in the Introduction to RBi vignette

Context:

In the RBi package vignette, under Generating a dataset, there is the following error message:

## Error in run.libbi(x, client = "sample", ...): LibBi terminated with "Error: ./configure failed with return code 77. See /var/folders/k_/8wrb49vn4713j7fn6y3hmndm0000gn/T/Rtmpjgq4OD/SIR1336d7948851c/.SIR/build_assert_openmp_sm_30/configure.log and /var/folders/k_/8wrb49vn4713j7fn6y3hmndm0000gn/T/Rtmpjgq4OD/SIR1336d7948851c/.SIR/build_assert_openmp_sm_30/config.log for details".
## You can view a full log using "print_log('/private/var/folders/k_/8wrb49vn4713j7fn6y3hmndm0000gn/T/Rtmpjgq4OD/SIR1336d7948851c/output1336d81b2f79.txt')"

All subsequent commands also yield errors as a result and no useful output is generated.

I would appreciate it if you could assist with resolving this issue.

Passing a data frame with both multiple samples and dimensions doesn't work

If I read the libbi documentation correctly, it should be possible to pass multiple samples for fitting to libbi by using an ns dimension. On top of that, my data also has a second, user-defined dimension lage, but if I try to pass this to rbi I get the following error:

Error in netcdf_create_from_list(filename, variables_with_dim, ...) : Could not decide on coord dimension between lagens
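For reference, the kind of input being attempted looks roughly like this (the column names ns and lage are taken from the report above; the values are made up for illustration):

```r
## observations in long format, with both a sample index (ns)
## and a user-defined age dimension (lage)
obs <- data.frame(
  ns    = rep(0:1, each = 4),            # sample index
  lage  = rep(0:1, times = 4),           # user-defined dimension
  time  = rep(1:2, each = 2, times = 2), # observation times
  value = rnorm(8)
)
```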

filter function not working

When trying to use the filter function with a libbi object I get the following error:

Error in UseMethod("filter_") : 
  no applicable method for 'filter_' applied to an object of class "libbi"

The error can be replicated by running

demo(PZ_filtering)

Is this a known issue?

libbi not found

Hi,
I am trying to use the rbi package. I installed LibBi based on the help files provided on the LibBi GitHub page.
When I tried to run demo(PZ_PMMH), I got this error:
Error in rethrow_call(c_processx_exec, command, c(command, args), stdin, :
Command 'C:/LibBi/libbi' not found @win/processx.c:983 (processx_exec)

I also ran
options(path_to_libbi="C:/LibBi")
demo(PZ_PMMH)

but the same error popped up.
I would appreciate it if you could help me with this issue.

Thanks in advance.
Neda

Autoreconf: command not found

Context:

I am attempting to run the first rbi demo, demo(PZ_generate_dataset), as displayed in this README. However, I am receiving the following error message and would appreciate any insight into fixing it:

> demo(PZ_generate_dataset)


	demo(PZ_generate_dataset)
	---- ~~~~~~~~~~~~~~~~~~~

Type  <Return>	 to start : 

> ### This demo shows how to generate data from a model using bi_generate_dataset
> 
> # the PZ model file is included in rbi and can be found there:
> model_file_name <- system.file(package="rbi", "PZ.bi")

> # assign model variable
> PZ <- bi_model(model_file_name)

> # look at the model
> PZ
bi_model:
=========
 1: model PZ {
 2:   const c = 0.25
 3:   const e = 0.3
 4:   const m_l = 0.1
 5:   const m_q = 0.1
 6:   param mu
 7:   param sigma
 8:   state P
 9:   state Z
10:   noise alpha
11:   obs P_obs
12:   sub parameter {
13:     mu ~ uniform(0.0, 1.0)
14:     sigma ~ uniform(0.0, 0.5)
15:   }
16:   sub proposal_parameter {
17:     mu ~ gaussian(mu, 0.02)
18:     sigma ~ gaussian(sigma, 0.01)
19:   }
20:   sub initial {
21:     P ~ log_normal(log(2.0), 0.2)
22:     Z ~ log_normal(log(2.0), 0.1)
23:   }
24:   sub transition(delta = 1.0) {
25:     alpha ~ gaussian(mu, sigma)
26:     ode {
27:       dP/dt = alpha*P - c*P*Z
28:       dZ/dt = e*c*P*Z - m_l*Z - m_q*Z*Z
29:     }
30:   }
31:   sub observation {
32:     P_obs ~ log_normal(log(P), 0.2)
33:   }
34: }

> T <- 100

> # First let's generate a dataset without specifying parameters
> # so the parameters and initial conditions are drawn from the prior distribution
> # which is specified in the model file.
> # if libbi throws an error here, try passing the "working_folder" argument to
> # 'bi_generate_dataset' and look for the issue in the ".PZ" subdirectory
> dataset1 <- bi_generate_dataset(model=PZ, end_time=T, noutputs=T)
Error in run.libbi(x, client = "sample", ...) : 
  LibBi terminated with "Error: ./autogen.sh failed with return code 127. Make sure autoconf and automake are installed. See /var/folders/ys/5jtmxty12_92qlj7wfdjx66w0000gn/T/Rtmp04wBdd/PZ571c1d1b630f/.PZ/build_assert_openmp_sm_30/autogen.log for details".
You can view a full log using "print_log('/private/var/folders/ys/5jtmxty12_92qlj7wfdjx66w0000gn/T/Rtmp04wBdd/PZ571c1d1b630f/output571c71047cb6.txt')"

Viewing the logs indicates that autoreconf is not installed on my system; however, I've verified that autoreconf is installed via Homebrew.

~ % cat /var/folders/ys/5jtmxty12_92qlj7wfdjx66w0000gn/T/Rtmp04wBdd/PZ571c1d1b630f/.PZ/build_assert_openmp_sm_30/autogen.log
./autogen.sh: line 4: autoreconf: command not found

 ~ % which autoreconf
/opt/homebrew/bin/autoreconf

Would you be able to suggest further steps to debug this issue?

Session Info:

> sessionInfo()
R version 4.1.2 (2021-11-01)
Platform: aarch64-apple-darwin21.1.0 (64-bit)
Running under: macOS Monterey 12.2

rbi.helpers: how to discard burn-in correctly?

In some simulation experiments, we want to check that the posterior samples correctly start from the initial values we assign to each parameter. Our current approach is to set the initial parameter values using init in the sample function, remove the adapt_proposal line from our code, and follow with sample(nsamples=10000, thin=1). But the resulting trace plot suggests this is incorrect (e.g. tau's initial value is set to 0.8, but in the trace plot it starts from 0.3). So is there a way to correctly discard burn-in in LibBi?
(screenshot of the trace plot attached, 2023-05-24)
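One possible workaround (a sketch, not an official rbi.helpers recipe) is to extract the traces and drop the first iterations manually before analysis. Here bi stands for the fitted libbi object and the burn-in length of 2000 is arbitrary:

```r
library(rbi)
library(coda)

traces <- get_traces(bi)                 # one row per posterior sample
burnin <- 2000                           # arbitrary choice for illustration
kept   <- traces[-seq_len(burnin), , drop = FALSE]
mc     <- mcmc(kept)                     # post-burn-in samples for diagnostics
```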

How to add cuda-arch option

Hi Seb,

Trying to use the --cuda-arch option via LibBi. Based on the current run.libbi help I am not sure this is possible. Using cuda-arch = TRUE leads to an error, as this is formatted as --enable-cuda-arch, which doesn't exist.

If this is a bug/omission I am happy to fix in a pull request (pointers in the code would be useful).

Thanks,

Sam

How to add '-std=c++11' to CXXFLAGS

I recently installed Libbi and RBi on a Linux machine, but have had some problems sampling from models with RBi. Minimal code here:
https://gist.github.com/shug3502/8bb72b66ac341e5859a9101d9a8b9024
The error occurs when trying to sample from the posterior in the final line (error message pasted below). I get the same kind of error when using LibBi directly, and can fix it there by manually adding '-std=c++11' to CXXFLAGS in the Makefile. When using rbi, however, I don't know how to set CXXFLAGS. Any suggestions about how to fix this? Thanks!
(Additionally, it looks like the same error may have occurred when the code for the rbi.helpers vignette was run: https://cran.r-project.org/web/packages/rbi.helpers/vignettes/introduction.html )
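One thing that may be worth trying (untested here, and it assumes LibBi's autotools build picks up compiler flags from the environment) is to set CXXFLAGS in the R session before rbi invokes LibBi:

```r
## assumption: the child libbi process inherits this environment variable
## and its configure/make steps honour CXXFLAGS
Sys.setenv(CXXFLAGS = "-std=c++11")
```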

/usr/local/bin/libbi sample --seed 52857484 --nsamples 1000 --end-time 112 --noutputs 16 --obs-file /tmp/RtmpkB8HgN/SIR6c53e406f0f75/SIR_obs6c53e7ea7ee2b.nc --init-file /tmp/RtmpkB8HgN/SIR6c53e406f0f75/SIR_init6c53e36d1405.nc --target posterior --nparticles 32 --verbose --output-file /tmp/RtmpkB8HgN/SIR6c53e406f0f75/SIR_output6c53e55326ed8.nc --model-file /tmp/RtmpkB8HgN/SIR6c53e406f0f75/SIR.bi
Parsing...
Processing arguments...
Transforming model...
Generating Doxyfile...
Generating C++ code...
Generating GNU autotools build system...
make -j 4 sample_cpu
Building...
depbase=echo src/sample_cpu.o | sed 's|[^/]*$|.deps/&|;s|\.o$||';\
g++ -DPACKAGE_NAME="LibBi" -DPACKAGE_TARNAME="libbi" -DPACKAGE_VERSION="1.4.4" -DPACKAGE_STRING="LibBi\ 1.4.4" -DPACKAGE_BUGREPORT="[email protected]" -DPACKAGE_URL="http://www.libbi.org\" -DHAVE_OMP_H=1 -DTHRUST_DEVICE_SYSTEM=THRUST_DEVICE_SYSTEM_CPP -DTHRUST_HOST_SYSTEM=THRUST_HOST_SYSTEM_OMP -DHAVE_LIBM=1 -DHAVE_LIBGFORTRAN=1 -DHAVE_LIBATLAS=1 -DHAVE_LIBQRUPDATE=1 -DHAVE_LIBGSL=1 -DHAVE_LIBNETCDF=1 -DHAVE_NETCDF_H=1 -DHAVE_CBLAS_H=1 -DHAVE_GSL_GSL_CBLAS_H=1 -DHAVE_BOOST_MPL_IF_HPP=1 -DHAVE_BOOST_RANDOM_BINOMIAL_DISTRIBUTION_HPP=1 -DHAVE_BOOST_RANDOM_BERNOULLI_DISTRIBUTION_HPP=1 -DHAVE_BOOST_RANDOM_GAMMA_DISTRIBUTION_HPP=1 -DHAVE_BOOST_RANDOM_MERSENNE_TWISTER_HPP=1 -DHAVE_BOOST_RANDOM_NORMAL_DISTRIBUTION_HPP=1 -DHAVE_BOOST_RANDOM_POISSON_DISTRIBUTION_HPP=1 -DHAVE_BOOST_RANDOM_UNIFORM_INT_HPP=1 -DHAVE_BOOST_RANDOM_UNIFORM_REAL_HPP=1 -DHAVE_BOOST_RANDOM_VARIATE_GENERATOR_HPP=1 -DHAVE_BOOST_TYPEOF_TYPEOF_HPP=1 -DHAVE_THRUST_ADJACENT_DIFFERENCE_H=1 -DHAVE_THRUST_BINARY_SEARCH_H=1 -DHAVE_THRUST_COPY_H=1 -DHAVE_THRUST_DEVICE_PTR_H=1 -DHAVE_THRUST_DISTANCE_H=1 -DHAVE_THRUST_EXTREMA_H=1 -DHAVE_THRUST_FILL_H=1 -DHAVE_THRUST_FOR_EACH_H=1 -DHAVE_THRUST_FUNCTIONAL_H=1 -DHAVE_THRUST_GATHER_H=1 -DHAVE_THRUST_INNER_PRODUCT_H=1 -DHAVE_THRUST_ITERATOR_COUNTING_ITERATOR_H=1 -DHAVE_THRUST_ITERATOR_DETAIL_NORMAL_ITERATOR_H=1 -DHAVE_THRUST_ITERATOR_DISCARD_ITERATOR_H=1 -DHAVE_THRUST_ITERATOR_PERMUTATION_ITERATOR_H=1 -DHAVE_THRUST_ITERATOR_TRANSFORM_ITERATOR_H=1 -DHAVE_THRUST_ITERATOR_ZIP_ITERATOR_H=1 -DHAVE_THRUST_LOGICAL_H=1 -DHAVE_THRUST_REDUCE_H=1 -DHAVE_THRUST_SCAN_H=1 -DHAVE_THRUST_SEQUENCE_H=1 -DHAVE_THRUST_SORT_H=1 -DHAVE_THRUST_TRANSFORM_H=1 -DHAVE_THRUST_TRANSFORM_REDUCE_H=1 -DHAVE_THRUST_TRANSFORM_SCAN_H=1 -DHAVE_THRUST_TUPLE_H=1 -DHAVE_GSL_GSL_MULTIMIN_H=1 -DENABLE_DIAGNOSTICS=no -DHAVE_STDBOOL_H=1 -I. 
-Isrc -I/usr/local/cuda/include -DCUDA_FAST_MATH=0 -DENABLE_OPENMP -fopenmp -O3 -g3 -funroll-loops -MT src/sample_cpu.o -MD -MP -MF $depbase.Tpo -c -o src/sample_cpu.o src/sample_cpu.cpp &&\
mv -f $depbase.Tpo $depbase.Po
In file included from /usr/local/cuda/include/thrust/system/detail/generic/extrema.h:88:0,
from /usr/local/cuda/include/thrust/detail/extrema.inl:22,
from /usr/local/cuda/include/thrust/extrema.h:802,
from src/bi/state/../primitive/vector_primitive.hpp:787,
from src/bi/state/../primitive/matrix_primitive.hpp:11,
from src/bi/state/State.hpp:467,
from src/model/ModelSIR.hpp:31,
from src/sample_cpu.cpp:19:
/usr/local/cuda/include/thrust/system/detail/generic/extrema.inl: In instantiation of ‘ForwardIterator thrust::system::detail::generic::max_element(thrust::execution_policy&, ForwardIterator, ForwardIterator, BinaryPredicate) [with DerivedPolicy = thrust::system::omp::detail::tag; ForwardIterator = thrust::detail::normal_iterator<const double*>; BinaryPredicate = bi::nan_less_functor]’:
/usr/local/cuda/include/thrust/system/omp/detail/extrema.h:39:54: required from ‘ForwardIterator thrust::system::omp::detail::max_element(thrust::system::omp::detail::execution_policy&, ForwardIterator, ForwardIterator, BinaryPredicate) [with DerivedPolicy = thrust::system::omp::detail::tag; ForwardIterator = thrust::detail::normal_iterator<const double*>; BinaryPredicate = bi::nan_less_functor]’
/usr/local/cuda/include/thrust/detail/extrema.inl:65:21: required from ‘ForwardIterator thrust::max_element(const thrust::detail::execution_policy_base&, ForwardIterator, ForwardIterator, BinaryPredicate) [with DerivedPolicy = thrust::system::omp::detail::tag; ForwardIterator = thrust::detail::normal_iterator<const double*>; BinaryPredicate = bi::nan_less_functor]’
/usr/local/cuda/include/thrust/detail/extrema.inl:139:29: required from ‘ForwardIterator thrust::max_element(ForwardIterator, ForwardIterator, BinaryPredicate) [with ForwardIterator = thrust::detail::normal_iterator<const double*>; BinaryPredicate = bi::nan_less_functor]’
src/bi/state/../primitive/vector_primitive.hpp:867:32: required from ‘typename V1::value_type bi::max_reduce(V1) [with V1 = bi::host_vector_reference; typename V1::value_type = double]’
src/bi/state/../primitive/vector_primitive.hpp:936:21: required from ‘typename V1::value_type bi::logsumexp_reduce(V1) [with V1 = bi::host_vector_reference; typename V1::value_type = double]’
src/bi/filter/BootstrapPF.hpp:193:37: required from ‘void bi::BootstrapPF<B, F, O, R>::term(S1&) [with S1 = bi::BootstrapPFState<ModelSIR, (bi::Location)0u>; B = ModelSIR; F = bi::Forcer<bi::InputNullBuffer, (bi::Location)0u>; O = bi::Observer<bi::InputNetCDFBuffer, (bi::Location)0u>; R = bi::Resamplerbi::SystematicResampler]’
src/bi/filter/Filter.hpp:68:3: required from ‘void bi::Filter::filter(bi::Random&, bi::ScheduleIterator, bi::ScheduleIterator, S1&, IO1&) [with S1 = bi::BootstrapPFState<ModelSIR, (bi::Location)0u>; IO1 = bi::ParticleFilterBuffer<bi::BootstrapPFCache<(bi::Location)0u> >; F = bi::BootstrapPF<ModelSIR, bi::Forcer<bi::InputNullBuffer, (bi::Location)0u>, bi::Observer<bi::InputNetCDFBuffer, (bi::Location)0u>, bi::Resamplerbi::SystematicResampler >; bi::ScheduleIterator = __gnu_cxx::__normal_iterator<const bi::ScheduleElement*, std::vectorbi::ScheduleElement >]’
src/bi/sampler/MarginalMH.hpp:222:3: required from ‘void bi::MarginalMH<B, F>::init(bi::Random&, bi::ScheduleIterator, bi::ScheduleIterator, S1&, IO1&, IO2&) [with S1 = bi::BootstrapPFState<ModelSIR, (bi::Location)0u>; IO1 = bi::ParticleFilterBuffer<bi::BootstrapPFCache<(bi::Location)0u> >; IO2 = bi::InputNetCDFBuffer; B = ModelSIR; F = bi::Filter<bi::BootstrapPF<ModelSIR, bi::Forcer<bi::InputNullBuffer, (bi::Location)0u>, bi::Observer<bi::InputNetCDFBuffer, (bi::Location)0u>, bi::Resamplerbi::SystematicResampler > >; bi::ScheduleIterator = __gnu_cxx::__normal_iterator<const bi::ScheduleElement*, std::vectorbi::ScheduleElement >]’
src/bi/sampler/MarginalMH.hpp:204:7: required from ‘void bi::MarginalMH<B, F>::sample(bi::Random&, bi::ScheduleIterator, bi::ScheduleIterator, S1&, int, IO1&, IO2&) [with S1 = bi::MarginalMHState<ModelSIR, (bi::Location)0u, bi::BootstrapPFState<ModelSIR, (bi::Location)0u>, bi::ParticleFilterBuffer<bi::BootstrapPFCache<(bi::Location)0u> > >; IO1 = bi::MCMCBuffer<bi::MCMCCache<(bi::Location)0u, bi::MCMCNetCDFBuffer> >; IO2 = bi::InputNetCDFBuffer; B = ModelSIR; F = bi::Filter<bi::BootstrapPF<ModelSIR, bi::Forcer<bi::InputNullBuffer, (bi::Location)0u>, bi::Observer<bi::InputNetCDFBuffer, (bi::Location)0u>, bi::Resamplerbi::SystematicResampler > >; bi::ScheduleIterator = __gnu_cxx::__normal_iterator<const bi::ScheduleElement*, std::vectorbi::ScheduleElement >]’
src/sample_cpu.cpp:713:77: required from here
/usr/local/cuda/include/thrust/system/detail/generic/extrema.inl:221:89: error: ‘lowest’ is not a member of ‘std::numeric_limits’
initial = thrust::tuple<InputType, IndexType>(std::numeric_limits::lowest(), -1);
^
/usr/local/cuda/include/thrust/system/detail/generic/extrema.inl: In instantiation of ‘ForwardIterator thrust::system::detail::generic::max_element(thrust::execution_policy&, ForwardIterator, ForwardIterator, BinaryPredicate) [with DerivedPolicy = thrust::system::omp::detail::tag; ForwardIterator = thrust::permutation_iterator<const double*, thrust::transform_iterator<bi::strided_functor, thrust::counting_iterator<long int, thrust::use_default, thrust::use_default, thrust::use_default>, thrust::use_default, thrust::use_default> >; BinaryPredicate = bi::nan_less_functor]’:
/usr/local/cuda/include/thrust/system/omp/detail/extrema.h:39:54: required from ‘ForwardIterator thrust::system::omp::detail::max_element(thrust::system::omp::detail::execution_policy&, ForwardIterator, ForwardIterator, BinaryPredicate) [with DerivedPolicy = thrust::system::omp::detail::tag; ForwardIterator = thrust::permutation_iterator<const double*, thrust::transform_iterator<bi::strided_functor, thrust::counting_iterator<long int, thrust::use_default, thrust::use_default, thrust::use_default>, thrust::use_default, thrust::use_default> >; BinaryPredicate = bi::nan_less_functor]’
/usr/local/cuda/include/thrust/detail/extrema.inl:65:21: required from ‘ForwardIterator thrust::max_element(const thrust::detail::execution_policy_base&, ForwardIterator, ForwardIterator, BinaryPredicate) [with DerivedPolicy = thrust::system::omp::detail::tag; ForwardIterator = thrust::permutation_iterator<const double*, thrust::transform_iterator<bi::strided_functor, thrust::counting_iterator<long int, thrust::use_default, thrust::use_default, thrust::use_default>, thrust::use_default, thrust::use_default> >; BinaryPredicate = bi::nan_less_functor]’
/usr/local/cuda/include/thrust/detail/extrema.inl:139:29: required from ‘ForwardIterator thrust::max_element(ForwardIterator, ForwardIterator, BinaryPredicate) [with ForwardIterator = thrust::permutation_iterator<const double*, thrust::transform_iterator<bi::strided_functor, thrust::counting_iterator<long int, thrust::use_default, thrust::use_default, thrust::use_default>, thrust::use_default, thrust::use_default> >; BinaryPredicate = bi::nan_less_functor]’
src/bi/state/../primitive/vector_primitive.hpp:870:32: required from ‘typename V1::value_type bi::max_reduce(V1) [with V1 = bi::host_vector_reference; typename V1::value_type = double]’
src/bi/state/../primitive/vector_primitive.hpp:936:21: required from ‘typename V1::value_type bi::logsumexp_reduce(V1) [with V1 = bi::host_vector_reference; typename V1::value_type = double]’
src/bi/filter/BootstrapPF.hpp:193:37: required from ‘void bi::BootstrapPF<B, F, O, R>::term(S1&) [with S1 = bi::BootstrapPFState<ModelSIR, (bi::Location)0u>; B = ModelSIR; F = bi::Forcer<bi::InputNullBuffer, (bi::Location)0u>; O = bi::Observer<bi::InputNetCDFBuffer, (bi::Location)0u>; R = bi::Resamplerbi::SystematicResampler]’
src/bi/filter/Filter.hpp:68:3: required from ‘void bi::Filter::filter(bi::Random&, bi::ScheduleIterator, bi::ScheduleIterator, S1&, IO1&) [with S1 = bi::BootstrapPFState<ModelSIR, (bi::Location)0u>; IO1 = bi::ParticleFilterBuffer<bi::BootstrapPFCache<(bi::Location)0u> >; F = bi::BootstrapPF<ModelSIR, bi::Forcer<bi::InputNullBuffer, (bi::Location)0u>, bi::Observer<bi::InputNetCDFBuffer, (bi::Location)0u>, bi::Resamplerbi::SystematicResampler >; bi::ScheduleIterator = __gnu_cxx::__normal_iterator<const bi::ScheduleElement*, std::vectorbi::ScheduleElement >]’
src/bi/sampler/MarginalMH.hpp:222:3: required from ‘void bi::MarginalMH<B, F>::init(bi::Random&, bi::ScheduleIterator, bi::ScheduleIterator, S1&, IO1&, IO2&) [with S1 = bi::BootstrapPFState<ModelSIR, (bi::Location)0u>; IO1 = bi::ParticleFilterBuffer<bi::BootstrapPFCache<(bi::Location)0u> >; IO2 = bi::InputNetCDFBuffer; B = ModelSIR; F = bi::Filter<bi::BootstrapPF<ModelSIR, bi::Forcer<bi::InputNullBuffer, (bi::Location)0u>, bi::Observer<bi::InputNetCDFBuffer, (bi::Location)0u>, bi::Resamplerbi::SystematicResampler > >; bi::ScheduleIterator = __gnu_cxx::__normal_iterator<const bi::ScheduleElement*, std::vectorbi::ScheduleElement >]’
src/bi/sampler/MarginalMH.hpp:204:7: required from ‘void bi::MarginalMH<B, F>::sample(bi::Random&, bi::ScheduleIterator, bi::ScheduleIterator, S1&, int, IO1&, IO2&) [with S1 = bi::MarginalMHState<ModelSIR, (bi::Location)0u, bi::BootstrapPFState<ModelSIR, (bi::Location)0u>, bi::ParticleFilterBuffer<bi::BootstrapPFCache<(bi::Location)0u> > >; IO1 = bi::MCMCBuffer<bi::MCMCCache<(bi::Location)0u, bi::MCMCNetCDFBuffer> >; IO2 = bi::InputNetCDFBuffer; B = ModelSIR; F = bi::Filter<bi::BootstrapPF<ModelSIR, bi::Forcer<bi::InputNullBuffer, (bi::Location)0u>, bi::Observer<bi::InputNetCDFBuffer, (bi::Location)0u>, bi::Resamplerbi::SystematicResampler > >; bi::ScheduleIterator = __gnu_cxx::__normal_iterator<const bi::ScheduleElement*, std::vectorbi::ScheduleElement >]’
src/sample_cpu.cpp:713:77: required from here
/usr/local/cuda/include/thrust/system/detail/generic/extrema.inl:221:89: error: ‘lowest’ is not a member of ‘std::numeric_limits’
Makefile:1016: recipe for target 'src/sample_cpu.o' failed
make: *** [src/sample_cpu.o] Error 1
Error: make failed with return code 2, see /tmp/RtmpkB8HgN/SIR6c53e406f0f75/.SIR/build_assert_openmp_sm_30/make.log for details
at /usr/local/share/perl/5.22.1/Bi/FrontEnd.pm line 322.
Bi::FrontEnd::_error(Bi::FrontEnd=HASH(0x1c30f30), "make failed with return code 2, see /tmp/RtmpkB8HgN/SIR6c53e4"...) called at /usr/local/share/perl/5.22.1/Bi/FrontEnd.pm line 110
Bi::FrontEnd::ANON("make failed with return code 2, see /tmp/RtmpkB8HgN/SIR6c53e4"...) called at /usr/local/share/perl/5.22.1/Bi/Builder.pm line 468
Bi::Builder::_make(Bi::Builder=HASH(0x1e46b90), "sample") called at /usr/local/share/perl/5.22.1/Bi/Builder.pm line 278
Bi::Builder::build(Bi::Builder=HASH(0x1e46b90), "sample") called at /usr/local/share/perl/5.22.1/Bi/FrontEnd.pm line 245
Bi::FrontEnd::client(Bi::FrontEnd=HASH(0x1c30f30)) called at /usr/local/share/perl/5.22.1/Bi/FrontEnd.pm line 135
eval {...} called at /usr/local/share/perl/5.22.1/Bi/FrontEnd.pm line 127
Bi::FrontEnd::do(Bi::FrontEnd=HASH(0x1c30f30)) called at /usr/local/bin/libbi line 36
at /usr/local/share/perl/5.22.1/Bi/FrontEnd.pm line 322.
Bi::FrontEnd::_error(Bi::FrontEnd=HASH(0x1c30f30), "Error: make failed with return code 2, see /tmp/RtmpkB8HgN/SI"...) called at /usr/local/share/perl/5.22.1/Bi/FrontEnd.pm line 110
Bi::FrontEnd::ANON("Error: make failed with return code 2, see /tmp/RtmpkB8HgN/SI"...) called at /usr/local/share/perl/5.22.1/Bi/FrontEnd.pm line 138
Bi::FrontEnd::do(Bi::FrontEnd=HASH(0x1c30f30)) called at /usr/local/bin/libbi line 36

Returned observations when having multiple dimensions

I think there is a bug in the object returned when fitting data with an extra coord/dimension. For example, in the code below all values for M are equal to zero (bi_read(bi1)$M), while they should hold the modelled values, as they do for the observation without an extra dimension (i.e. bi_read(bi1)$MSum).

The dataset generated by bi_generate_dataset does hold values for M.

library(rbi)

model_str <- "
model stratified {
  const no_age = 2
  dim age(no_age)
  
  obs M[age], MSum
  
  state N[age] (has_input = 0)
  noise e
  
  sub parameter {
  }
  
  sub initial {
    N[age] <- 1
  }
  
  sub transition {
    e ~ gaussian(mean = 1, std = 1)
    N[age] <- N[age]*e
  }
  
  sub observation {
    M[age] ~ gaussian(mean = N[age], std = 10)
    MSum ~ gaussian(mean = N[0]+N[1], std = 20)
  }
}
"
biModel <- bi_model(lines = stringi::stri_split_lines(model_str)[[1]])
dataset <- bi_generate_dataset(biModel, end_time=50, seed = 1)

bi1 <- sample(biModel, sample_obs = TRUE, obs = dataset, time_dim = "time", nparticles = 1,
              nsamples = 1, verbose = TRUE, end_time = 50)

`output_all`, `debug` and `verbose` - separate progress reporting from model debugging.

Hi Seb,

In the development version, I see that verbose in run.libbi has been replaced with debug. From what I can see, debug both outputs libbi progress messages and forces all states + parameters to be returned regardless of has_output settings. This is essentially a combination of verbose and output_all in the CRAN version of rbi.

Both features are very useful, but would it be possible to split them out again? Once a model is working, the full states + parameters are not needed, but I am finding the libbi output useful for assessing the acceptance rate etc. during MCMC sampling/SMC-SMC.

More generally, do you think it would be possible to catch just the MCMC sampling/SMC-SMC sampling/optimisation libbi messages rather than the full (or partial?) model logs that are returned now?

Thanks,

Sam

Error running `demo(PZ_generate_dataset)`

On a fresh install of RBi, demo(PZ_generate_dataset) exits with the following error:

Error in paste0(path_to_libbi, "/", "libbi") : 
argument "path_to_libbi" is missing, with no default

I checked that Sys.which("libbi") returns the correct path to the libbi executable. The errant line comes from the libbi function when it is not given a default path for libbi, starting at line 136 of libbi.R.

Output file in `bi_generate_dataset()`

I'm not sure why including an output file in the options is not working.

This is possibly caused by the temporary directory. Including the working_folder option generates a results file, but with a different name and not in the location specified in the config file, as well as creating several additional files and not overwriting old results the way LibBi does.

Can not install LibBi

Hi,
I am trying to install LibBi from "LibBi-stable.tar.gz" in RStudio, but this error comes up:
"ERROR: cannot extract the package from 'LibBi-stable.tar.gz'"

Can you help me with this issue?

Best,
Neda

Mixed selections in the initial states block cause an error - LibBi PZ model example

# Issue description: when one initial state starts from a fixed value and the others are drawn randomly from a
# statistical distribution, mixing these two choices in the initial states block always produces an error when
# rbi starts the adapt_proposal step

rm(list=ls())
library(tidyverse)
library(ggplot2)
library(ggpubr)
library(pander)
library(lubridate)
library(latex2exp)
library(rbi)
library(rbi.helpers)

# Load the data
v <- read.csv("obs_P.csv", header=FALSE, stringsAsFactors=FALSE) 
P_obs <- data.frame(value = v) %>%
  mutate(time = seq(1, by = 1, length.out = n())) %>%
  dplyr::select(time, V1)
colnames(P_obs) <- c("time", "value")


ncores <- 8
minParticles <- max(ncores, 16)
model_str <- "
model PZ {
  const c = 0.25   // zooplankton clearance rate
  const e = 0.3    // zooplankton growth efficiency
  const m_l = 0.1  // zooplankton linear mortality
  const m_q = 0.1  // zooplankton quadratic mortality

  param mu, sigma  // mean and standard deviation of phytoplankton growth
  state P, Z       // phytoplankton, zooplankton
  noise alpha      // stochastic phytoplankton growth rate
  obs P_obs        // observations of phytoplankton
  
  sub parameter {
    mu ~ uniform(0.0, 1.0)
    sigma ~ uniform(0.0, 0.5)
  }
  
  sub proposal_parameter {
    mu ~ truncated_gaussian(mu, 0.02, 0.0, 1.0);
    sigma ~ truncated_gaussian(sigma, 0.01, 0.0, 0.5);
  }

  sub initial {
    P <- log(2.0)  // Change P_0 starts from a fixed number
    Z ~ log_normal(log(2.0), 0.1)
  }

  sub transition(delta = 1.0) {
    alpha ~ gaussian(mu, sigma)
    ode {
      dP/dt = alpha*P - c*P*Z
      dZ/dt = e*c*P*Z - m_l*Z - m_q*Z*Z
    }
  }

  sub observation {
    P_obs ~ log_normal(log(P), 0.2)
  }
}"
model <- bi_model(lines = stringi::stri_split_lines(model_str)[[1]])
bi_model <- libbi(model)
end_time <- max(P_obs$time)

obs_lst <- list(P_obs = P_obs %>% dplyr::filter(time <= end_time))

bi <- sample(bi_model, end_time = end_time, obs = obs_lst, nsamples = 200, nparticles = minParticles, nthreads = ncores, proposal = 'model') %>% 
  adapt_particles(min = minParticles, max = minParticles*500) %>%
  adapt_proposal(min = 0.1, max = 0.4) %>%
  sample(nsamples = 100, thin = 1)

bi_lst <- bi_read(bi %>% sample_obs)

obs_P.csv

Created on 2023-08-21 with reprex v2.0.2

Passing Inputs as a list and the new force_inputs option.

Hi Seb,

First off, great package! Thanks for all the hard work.

I just updated to the latest GitHub version and am seeing an error where my inputs are not found by LibBi. I have been passing them in as a named list of data frames, and everything has worked as expected up to now.

Looking at your latest series of commits I see that a force_inputs option has been added for use with input files. Setting this to be false (rather than the default true) gets me back to everything working properly. Not sure if this is a bug or if passing inputs as a list is being deprecated!

Happy to knock up an example if useful.

Thanks,

Sam

Error in `PZ_filtering.R` demo

On running the demo, I get the error

> logw <- xtabs(value ~ nr + np, data = bi_read(bi_object, "logweight"))
Error in eval(expr, envir, enclos) : object 'nr' not found

On further inspection, the produced data indeed does not contain the variable nr:

> temp_data <- as.data.frame(bi_read(bi_object, "logweight"))
> colnames(temp_data)
[1] "np"    "time"  "value"

Instead, nr, which corresponds to the observation number, should be replaced by time... or time should be replaced by nr in the function generating the data set.
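Until the demo and the output format agree, swapping nr for time in the offending line makes the cross-tabulation run against the columns that bi_read actually returns:

```r
## tabulate log-weights over time and particle index (np)
logw <- xtabs(value ~ time + np, data = bi_read(bi_object, "logweight"))
```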

Config file path in `bi_generate_dataset`

When including a config file in the arguments of bi_generate_dataset, I get the following error:

> dataset1 <- bi_generate_dataset(endtime=T, config=config_file, model=Lorenz96)
Warning: model name does not match model file name
Error: unrecognised options 'prior.conf'
Error in .self$run(from_init = TRUE, run = run, ...) : 
  LibBi terminated with an error.

This is alleviated by changing config <- 'prior.conf' to config <- '@prior.conf'. Not sure why...

Flagging {tidync} - "tidy" handling of NetCDF

Hi Seb,

Just flagging this rOpenSci package for handling NetCDF data in a "tidy" way. I assume there would be little to gain from moving to it for the NetCDF reads/writes, but I thought it worth bringing to your attention (though you were probably already aware 😄).

Happy to help with any conversion work if this is of interest.

Possible Enhancement: Saving to folder rather than file

Hi Seb,

Again sorry for the multiple issues - feel free to ignore this one as really more of a feature request that may only be useful to me.

I have been using LibBi off and on for a while and have found that, in general, rbi is great. However, bigger models are often larger than the GitHub file-size limit when saved, which in itself is fine but can cause a bit of a workflow issue. To get around this I coded up slightly altered versions of your save_libbi and read_libbi functions that save to and load from a folder rather than a single file. Having done this, I have also found that having the models saved as separate components is quite useful when not working interactively, which is quite often due to some of the long run times.

The code for these is below and feel free to update/use etc if useful.

Thanks,

Sam

  • Read a libbi model from a folder.
#' @title Read a LibBi Model
#'
#' @description
#' Read the results of a \code{LibBi} run from a folder, completely
#' reconstructing the saved \code{LibBi} object. This reads all options,
#' files and outputs of a \code{LibBi} run from the specified folder.
#'
#' @param folder Name of the folder containing the LibBi output, as formatted by \code{save_libbi}.
#' @param ... any extra options to pass to \code{\link{libbi}} when creating the new object
#' @return a \code{\link{libbi}} object
#' @importFrom rbi attach_file bi_write get_name
#' @importFrom purrr map
#' @importFrom stringr str_replace
#' @export
#' @examples
#' 
read_libbi <- function(folder, ...) {
  if (missing(folder)) {
    stop("Need to specify a folder to read")
  }
  
  files <- list.files(folder)
  
  read_obj <- map(files, function(x) {
    if (x == "output") {
      files <- list.files(file.path(folder, x))
      file <- map(files, ~ readRDS(file.path(folder, x, .)))
      names(file) <- str_replace(files, "\\.rds$", "")
    } else {
      file <- readRDS(file.path(folder, x))
      file <- file[[1]]
    }
    
    return(file)
  })
  
  names(read_obj) <- str_replace(files, "\\.rds$", "")
  
  libbi_options <- list(...)
  
  pass_options <- c("model", "dims", "time_dim", "coord_dims", "options",
                    "thin", "init", "input", "obs")
  
  for (option in pass_options) {
    if (!(option %in% names(libbi_options)) &&
        option %in% names(read_obj)) {
      libbi_options[[option]] <- read_obj[[option]]
    }
  }
  
  output_file_name <-
    tempfile(pattern=paste(get_name(read_obj$model), "output", sep = "_"),
             fileext=".nc")
  bi_write(output_file_name, read_obj$output,
           time_dim=libbi_options$time_dim)
  
  new_obj <- do.call(libbi, libbi_options)
  new_obj <- attach_file(new_obj, file="output", data=output_file_name)
  new_obj$supplement <- read_obj$supplement
  
  return(new_obj)
}
  • Save a libbi model to a folder. The folder creation step here may be an issue (breaks CRAN guidelines etc).
#' @title Save a LibBi Model
#' @description
#' Write results of a \code{LibBi} run to a folder as a series of RDS files.
#' This saves all options, files and outputs of a \code{LibBi} run to a specified folder.
#'
#' @param x a \code{\link{libbi}} object
#' @param folder A character string indicating the folder name under which to save the model.
#' @param supplement any supplementary data to save
#' @importFrom rbi bi_read
#' @importFrom purrr walk2
#' @export
#' @examples 
#'
#' ## Function code 
#' ModelTBBCGEngland::save_libbi
save_libbi <- function(x, folder, supplement) {
  if (missing(folder)) {
    stop("Need to specify a folder name")
  }
  
  if (!dir.exists(folder)) {
    dir.create(folder)
    dir.create(file.path(folder, "output"))
  }
  
  save_obj <- list(model=x$model,
                   dims=x$dims,
                   time_dim=x$time_dim,
                   coord_dims=x$coord_dims,
                   thin=1,
                   supplement=x$supplement
  )
  
  
  options <- x$options
  
  for (file_type in c("init", "input", "obs")) {
    file_option <- paste(file_type, "file", sep="-")
    if (file_option %in% names(x$options)) {
      save_obj[[file_type]] <- bi_read(x, file=file_type)
      options[[file_option]] <- NULL
    }
  }
  
  save_obj[["options"]] <- options
  
  if (!missing(supplement)) save_obj[["supplement"]] <- supplement
  
  walk2(save_obj, names(save_obj), ~ saveRDS(list(.x), file.path(folder, paste0(.y, ".rds"))))
  
  output <- bi_read(x)
  
  walk2(output, names(output), ~ saveRDS(.x, file.path(folder, "output", paste0(.y, ".rds"))))
  
}
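For completeness, a usage sketch of the two functions above (the object name bi and the folder name are placeholders; it assumes a fitted libbi object):

```r
## Save a fitted libbi object as one RDS file per component,
## with the NetCDF output split into fit/output/*.rds:
save_libbi(bi, folder = "fit")

## Later (e.g. in a non-interactive job), reconstruct the object:
bi2 <- read_libbi("fit")
```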

PZ_PMMH and cuda compilation

Disclaimer: this might well be a LibBi error, but because I first hit it with the PZ_PMMH example I am submitting it here.

I am trying to recreate the PZ_PMMH demo but with CUDA enabled, as follows:

library(rbi)
demo(PZ_PMMH)
bi_object <- sample(bi_object, obs=synthetic_dataset, init=init_parameters,
                     end_time=T, noutputs=T, nsamples=128, nparticles=128, options=list("cuda"=TRUE),
                     nthreads=1, log_file_name=tempfile(pattern="pmmhoutput", fileext=".txt"))

but run into the following error (from the make.log file)

/usr/local/cuda/include/thrust/system/cuda/detail/extrema.h(395): error: no suitable user-defined conversion from "std::tuple<thrust::permutation_iterator<thrust::device_ptr<const real>, thrust::transform_iterator<bi::strided_functor<std::ptrdiff_t>, thrust::counting_iterator<std::ptrdiff_t, thrust::use_default, thrust::use_default, thrust::use_default>, thrust::use_default, thrust::use_default>>, thrust::cuda_cub::counting_iterator_t<signed long>>" to "iterator_tuple" exists
          detected during:
            instantiation of "ItemsIt thrust::cuda_cub::__extrema::element<ArgFunctor,Derived,ItemsIt,BinaryPred>(thrust::cuda_cub::execution_policy<Derived> &, ItemsIt, ItemsIt, BinaryPred) [with ArgFunctor=thrust::cuda_cub::__extrema::arg_max_f, Derived=thrust::cuda_cub::tag, ItemsIt=thrust::permutation_iterator<thrust::device_ptr<const real>, thrust::transform_iterator<bi::strided_functor<std::ptrdiff_t>, thrust::counting_iterator<std::ptrdiff_t, thrust::use_default, thrust::use_default, thrust::use_default>, thrust::use_default, thrust::use_default>>, BinaryPred=bi::nan_less_functor<real>]" 
(475): here
            instantiation of "ItemsIt thrust::cuda_cub::max_element(thrust::cuda_cub::execution_policy<Derived> &, ItemsIt, ItemsIt, BinaryPred) [with Derived=thrust::cuda_cub::tag, ItemsIt=thrust::permutation_iterator<thrust::device_ptr<const real>, thrust::transform_iterator<bi::strided_functor<std::ptrdiff_t>, thrust::counting_iterator<std::ptrdiff_t, thrust::use_default, thrust::use_default, thrust::use_default>, thrust::use_default, thrust::use_default>>, BinaryPred=bi::nan_less_functor<real>]" 
/usr/local/cuda/include/thrust/detail/extrema.inl(65): here
            instantiation of "ForwardIterator thrust::max_element(const thrust::detail::execution_policy_base<DerivedPolicy> &, ForwardIterator, ForwardIterator, BinaryPredicate) [with DerivedPolicy=thrust::cuda_cub::tag, ForwardIterator=thrust::permutation_iterator<thrust::device_ptr<const real>, thrust::transform_iterator<bi::strided_functor<std::ptrdiff_t>, thrust::counting_iterator<std::ptrdiff_t, thrust::use_default, thrust::use_default, thrust::use_default>, thrust::use_default, thrust::use_default>>, BinaryPredicate=bi::nan_less_functor<real>]" 
/usr/local/cuda/include/thrust/detail/extrema.inl(139): here
            instantiation of "ForwardIterator thrust::max_element(ForwardIterator, ForwardIterator, BinaryPredicate) [with ForwardIterator=thrust::permutation_iterator<thrust::device_ptr<const real>, thrust::transform_iterator<bi::strided_functor<std::ptrdiff_t>, thrust::counting_iterator<std::ptrdiff_t, thrust::use_default, thrust::use_default, thrust::use_default>, thrust::use_default, thrust::use_default>>, BinaryPredicate=bi::nan_less_functor<real>]" 
src/bi/state/../primitive/vector_primitive.hpp(872): here
            instantiation of "V1::value_type bi::max_reduce(V1) [with V1=bi::gpu_vector_reference<real, -1, -1>]" 
src/bi/state/../primitive/vector_primitive.hpp(968): here
            instantiation of "V1::value_type bi::ess_reduce(V1, double *) [with V1=bi::gpu_vector_reference<real, -1, -1>]" 
src/bi/filter/../resampler/Resampler.hpp(160): here
            instantiation of "double bi::Resampler<R>::reduce(V1, double *) [with R=bi::SystematicResampler, V1=bi::gpu_vector_reference<real, -1, -1>]" 
src/bi/filter/BootstrapPF.hpp(180): here
            instantiation of "void bi::BootstrapPF<B, F, O, R>::correct(bi::Random &, bi::ScheduleElement, S1 &) [with B=ModelPZ_model3cca73e8a634, F=bi::Forcer<bi::InputNullBuffer, bi::ON_DEVICE>, O=bi::Observer<bi::InputNetCDFBuffer, bi::ON_DEVICE>, R=bi::Resampler<bi::SystematicResampler>, S1=bi::BootstrapPFState<ModelPZ_model3cca73e8a634, bi::ON_DEVICE>]" 
src/bi/filter/Filter.hpp(65): here
            instantiation of "void bi::Filter<F>::filter(bi::Random &, bi::ScheduleIterator, bi::ScheduleIterator, S1 &, IO1 &) [with F=bi::BootstrapPF<ModelPZ_model3cca73e8a634, bi::Forcer<bi::InputNullBuffer, bi::ON_DEVICE>, bi::Observer<bi::InputNetCDFBuffer, bi::ON_DEVICE>, bi::Resampler<bi::SystematicResampler>>, S1=bi::BootstrapPFState<ModelPZ_model3cca73e8a634, bi::ON_DEVICE>, IO1=bi::ParticleFilterBuffer<bi::BootstrapPFCache<bi::ON_DEVICE, bi::ParticleFilterNullBuffer>>]" 
src/bi/sampler/MarginalMH.hpp(224): here
            instantiation of "void bi::MarginalMH<B, F>::init(bi::Random &, bi::ScheduleIterator, bi::ScheduleIterator, S1 &, IO1 &, IO2 &) [with B=ModelPZ_model3cca73e8a634, F=bi::Filter<bi::BootstrapPF<ModelPZ_model3cca73e8a634, bi::Forcer<bi::InputNullBuffer, bi::ON_DEVICE>, bi::Observer<bi::InputNetCDFBuffer, bi::ON_DEVICE>, bi::Resampler<bi::SystematicResampler>>>, S1=bi::BootstrapPFState<ModelPZ_model3cca73e8a634, bi::ON_DEVICE>, IO1=bi::ParticleFilterBuffer<bi::BootstrapPFCache<bi::ON_DEVICE, bi::ParticleFilterNullBuffer>>, IO2=bi::InputNetCDFBuffer]" 
src/bi/sampler/MarginalMH.hpp(206): here
            instantiation of "void bi::MarginalMH<B, F>::sample(bi::Random &, bi::ScheduleIterator, bi::ScheduleIterator, S1 &, int, IO1 &, IO2 &) [with B=ModelPZ_model3cca73e8a634, F=bi::Filter<bi::BootstrapPF<ModelPZ_model3cca73e8a634, bi::Forcer<bi::InputNullBuffer, bi::ON_DEVICE>, bi::Observer<bi::InputNetCDFBuffer, bi::ON_DEVICE>, bi::Resampler<bi::SystematicResampler>>>, S1=bi::MarginalMHState<ModelPZ_model3cca73e8a634, bi::ON_DEVICE, bi::BootstrapPFState<ModelPZ_model3cca73e8a634, bi::ON_DEVICE>, bi::ParticleFilterBuffer<bi::BootstrapPFCache<bi::ON_DEVICE, bi::ParticleFilterNullBuffer>>>, IO1=bi::MCMCBuffer<bi::MCMCCache<bi::ON_DEVICE, bi::MCMCNetCDFBuffer>>, IO2=bi::InputNetCDFBuffer]" 
src/sample_cpu.cpp(715): here

depbase=`echo src/bi/cuda/random/RandomKernel.o | sed 's|[^/]*$|.deps/&|;s|\.o$||'` && \
srcbase=`echo src/bi/cuda/random/RandomKernel.o | sed 's|/[^/]*$||'` && \
perl nvcc_wrapper.pl /usr/local/cuda-10.0/bin/nvcc -ccbin=g++ -M -w -arch sm_30 -Xcompiler="-fopenmp -O3 -g3 -funroll-loops  " -Isrc  -I/usr/local/cuda/include  -DENABLE_CUDA  -DCUDA_FAST_MATH=0    -DENABLE_OPENMP    -DPACKAGE_NAME=\"LibBi\" -DPACKAGE_TARNAME=\"libbi\" -DPACKAGE_VERSION=\"1.4.2\" -DPACKAGE_STRING=\"LibBi\ 1.4.2\" -DPACKAGE_BUGREPORT=\"[email protected]\" -DPACKAGE_URL=\"http://www.libbi.org\" -DHAVE_OMP_H=1 -DTHRUST_DEVICE_SYSTEM=THRUST_DEVICE_SYSTEM_CUDA -DTHRUST_HOST_SYSTEM=THRUST_HOST_SYSTEM_OMP -DHAVE_LIBM=1 -DHAVE_LIBGFORTRAN=1 -DHAVE_LIBATLAS=1 -DHAVE_LIBQRUPDATE=1 -DHAVE_LIBGSL=1 -DHAVE_LIBNETCDF=1 -DHAVE_LIBCUDA=1 -DHAVE_LIBCUDART=1 -DHAVE_LIBCURAND=1 -DHAVE_LIBCUBLAS=1 -DHAVE_NETCDF_H=1 -DHAVE_CBLAS_H=1 -DHAVE_GSL_GSL_CBLAS_H=1 -DHAVE_BOOST_MPL_IF_HPP=1 -DHAVE_BOOST_RANDOM_BINOMIAL_DISTRIBUTION_HPP=1 -DHAVE_BOOST_RANDOM_BERNOULLI_DISTRIBUTION_HPP=1 -DHAVE_BOOST_RANDOM_GAMMA_DISTRIBUTION_HPP=1 -DHAVE_BOOST_RANDOM_MERSENNE_TWISTER_HPP=1 -DHAVE_BOOST_RANDOM_NORMAL_DISTRIBUTION_HPP=1 -DHAVE_BOOST_RANDOM_POISSON_DISTRIBUTION_HPP=1 -DHAVE_BOOST_RANDOM_UNIFORM_INT_HPP=1 -DHAVE_BOOST_RANDOM_UNIFORM_REAL_HPP=1 -DHAVE_BOOST_RANDOM_VARIATE_GENERATOR_HPP=1 -DHAVE_BOOST_TYPEOF_TYPEOF_HPP=1 -DHAVE_THRUST_ADJACENT_DIFFERENCE_H=1 -DHAVE_THRUST_BINARY_SEARCH_H=1 -DHAVE_THRUST_COPY_H=1 -DHAVE_THRUST_DEVICE_PTR_H=1 -DHAVE_THRUST_DISTANCE_H=1 -DHAVE_THRUST_EXTREMA_H=1 -DHAVE_THRUST_FILL_H=1 -DHAVE_THRUST_FOR_EACH_H=1 -DHAVE_THRUST_FUNCTIONAL_H=1 -DHAVE_THRUST_GATHER_H=1 -DHAVE_THRUST_INNER_PRODUCT_H=1 -DHAVE_THRUST_ITERATOR_COUNTING_ITERATOR_H=1 -DHAVE_THRUST_ITERATOR_DETAIL_NORMAL_ITERATOR_H=1 -DHAVE_THRUST_ITERATOR_DISCARD_ITERATOR_H=1 -DHAVE_THRUST_ITERATOR_PERMUTATION_ITERATOR_H=1 -DHAVE_THRUST_ITERATOR_TRANSFORM_ITERATOR_H=1 -DHAVE_THRUST_ITERATOR_ZIP_ITERATOR_H=1 -DHAVE_THRUST_LOGICAL_H=1 -DHAVE_THRUST_REDUCE_H=1 -DHAVE_THRUST_SCAN_H=1 -DHAVE_THRUST_SEQUENCE_H=1 -DHAVE_THRUST_SORT_H=1 -DHAVE_THRUST_TRANSFORM_H=1 -DHAVE_THRUST_TRANSFORM_REDUCE_H=1 
-DHAVE_THRUST_TRANSFORM_SCAN_H=1 -DHAVE_THRUST_TUPLE_H=1 -DHAVE_GSL_GSL_MULTIMIN_H=1 -DHAVE_CUBLAS_V2_H=1 -DHAVE_CURAND_H=1 -DENABLE_DIAGNOSTICS=no -DBOOST_NOINLINE -odir $srcbase -o $depbase.Tpo src/bi/cuda/random/RandomKernel.cu && \
perl nvcc_wrapper.pl /usr/local/cuda-10.0/bin/nvcc -ccbin=g++ -c -w -arch sm_30 -Xcompiler="-fopenmp -O3 -g3 -funroll-loops  " -Isrc  -I/usr/local/cuda/include  -DENABLE_CUDA  -DCUDA_FAST_MATH=0    -DENABLE_OPENMP    -DPACKAGE_NAME=\"LibBi\" -DPACKAGE_TARNAME=\"libbi\" -DPACKAGE_VERSION=\"1.4.2\" -DPACKAGE_STRING=\"LibBi\ 1.4.2\" -DPACKAGE_BUGREPORT=\"[email protected]\" -DPACKAGE_URL=\"http://www.libbi.org\" -DHAVE_OMP_H=1 -DTHRUST_DEVICE_SYSTEM=THRUST_DEVICE_SYSTEM_CUDA -DTHRUST_HOST_SYSTEM=THRUST_HOST_SYSTEM_OMP -DHAVE_LIBM=1 -DHAVE_LIBGFORTRAN=1 -DHAVE_LIBATLAS=1 -DHAVE_LIBQRUPDATE=1 -DHAVE_LIBGSL=1 -DHAVE_LIBNETCDF=1 -DHAVE_LIBCUDA=1 -DHAVE_LIBCUDART=1 -DHAVE_LIBCURAND=1 -DHAVE_LIBCUBLAS=1 -DHAVE_NETCDF_H=1 -DHAVE_CBLAS_H=1 -DHAVE_GSL_GSL_CBLAS_H=1 -DHAVE_BOOST_MPL_IF_HPP=1 -DHAVE_BOOST_RANDOM_BINOMIAL_DISTRIBUTION_HPP=1 -DHAVE_BOOST_RANDOM_BERNOULLI_DISTRIBUTION_HPP=1 -DHAVE_BOOST_RANDOM_GAMMA_DISTRIBUTION_HPP=1 -DHAVE_BOOST_RANDOM_MERSENNE_TWISTER_HPP=1 -DHAVE_BOOST_RANDOM_NORMAL_DISTRIBUTION_HPP=1 -DHAVE_BOOST_RANDOM_POISSON_DISTRIBUTION_HPP=1 -DHAVE_BOOST_RANDOM_UNIFORM_INT_HPP=1 -DHAVE_BOOST_RANDOM_UNIFORM_REAL_HPP=1 -DHAVE_BOOST_RANDOM_VARIATE_GENERATOR_HPP=1 -DHAVE_BOOST_TYPEOF_TYPEOF_HPP=1 -DHAVE_THRUST_ADJACENT_DIFFERENCE_H=1 -DHAVE_THRUST_BINARY_SEARCH_H=1 -DHAVE_THRUST_COPY_H=1 -DHAVE_THRUST_DEVICE_PTR_H=1 -DHAVE_THRUST_DISTANCE_H=1 -DHAVE_THRUST_EXTREMA_H=1 -DHAVE_THRUST_FILL_H=1 -DHAVE_THRUST_FOR_EACH_H=1 -DHAVE_THRUST_FUNCTIONAL_H=1 -DHAVE_THRUST_GATHER_H=1 -DHAVE_THRUST_INNER_PRODUCT_H=1 -DHAVE_THRUST_ITERATOR_COUNTING_ITERATOR_H=1 -DHAVE_THRUST_ITERATOR_DETAIL_NORMAL_ITERATOR_H=1 -DHAVE_THRUST_ITERATOR_DISCARD_ITERATOR_H=1 -DHAVE_THRUST_ITERATOR_PERMUTATION_ITERATOR_H=1 -DHAVE_THRUST_ITERATOR_TRANSFORM_ITERATOR_H=1 -DHAVE_THRUST_ITERATOR_ZIP_ITERATOR_H=1 -DHAVE_THRUST_LOGICAL_H=1 -DHAVE_THRUST_REDUCE_H=1 -DHAVE_THRUST_SCAN_H=1 -DHAVE_THRUST_SEQUENCE_H=1 -DHAVE_THRUST_SORT_H=1 -DHAVE_THRUST_TRANSFORM_H=1 -DHAVE_THRUST_TRANSFORM_REDUCE_H=1 
-DHAVE_THRUST_TRANSFORM_SCAN_H=1 -DHAVE_THRUST_TUPLE_H=1 -DHAVE_GSL_GSL_MULTIMIN_H=1 -DHAVE_CUBLAS_V2_H=1 -DHAVE_CURAND_H=1 -DENABLE_DIAGNOSTICS=no -DBOOST_NOINLINE -o src/bi/cuda/random/RandomKernel.o src/bi/cuda/random/RandomKernel.cu && \
cat $depbase.Tpo > $depbase.Po && \
rm -f $depbase.Tpo
1 error detected in the compilation of "/tmp/tmpxft_00007040_00000000-6_sample_gpu.cpp1.ii".
Makefile:1441: recipe for target 'src/sample_gpu.o' failed
make: *** [src/sample_gpu.o] Error 1
make: *** Waiting for unfinished jobs....

This is on Ubuntu 18.04 with CUDA 10.0.

Really long time to run even simple filter operations

I was playing around with rbi and noticed that even very simple filter operations would take a long time to run:

   user  system elapsed 
 40.179   6.096  23.788 

while the printed summary of the libbi object would claim a much faster run time:

Wrapper around LibBi
======================
Model:  test 
Run time:  0.000202  seconds
Number of samples:  1024 
State trajectories recorded:  x 
Noise trajectories recorded:  w 

And this happens even after I have already run an operation with the model, so compilation should not account for the time difference, AFAIK.

Is there a reason for this discrepancy?
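A sketch of how I measured this (the object and the call are placeholders; filter() here stands in for whatever operation is being timed):

```r
## Wall-clock time as seen from R, including file I/O and process startup:
timing <- system.time(bi <- filter(bi, nparticles = 1024))
print(timing)

## Printing the object reports only LibBi's internally measured run time:
bi
```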

Unclear error message about id.vars

Currently I see the following message, but it is unclear to me where or how to provide id.vars and measure.vars. I couldn't find anything about them in the help (I guess it has something to do with coord_dims and time_dim?):

To be consistent with reshape2's melt, id.vars and measure.vars are internally guessed when both are 'NULL'. All non-numeric/integer/logical type columns are conisdered id.vars, which in this case are columns []. Consider providing at least one of 'id' or 'measure' vars in future.

Also note that the message doesn't actually list any columns (the [] is empty), and there is a spelling mistake in "conisdered".
